SenSat, a U.K. startup aiming to use visual and spatial data to “simulate reality” and help computers better understand the physical world, has raised $4.5 million in seed funding — cash it will use to further develop the technology, and invest in its San Francisco office. The round was backed by Force Over Mass, Round Hill Venture Partners, and Zag (the venture arm of global creative agency BBH).
Launched in 2017 by founders James Dean (CEO) and Harry Atkinson (Head of Product), SenSat turns complex visual and spatial data into what is described as “real-time simulated reality” designed to enable computers to solve real world problems.
The idea is to let companies that operate in physical domains — starting with infrastructure construction — use AI to make better informed decisions based on variables that are large in both number and complexity.
But to do this, first the real world needs to be simulated and those simulations injected with data that computers can understand and interact with. And that starts with using new technology to photograph the real world at a level of detail that goes beyond satellite imagery.
“My background is in satellite remote sensing, the science of understanding an object without coming into contact with it,” SenSat CEO Dean tells me. “This actually gave me the initial idea, ‘if everything we do from satellites can be done 200 miles closer using autonomous drones, then the resolution of the corresponding information must be commercially valuable’”.
Dean says the tech that SenSat has since developed is making it possible for computers to understand the real world through the lens of highly detailed simulated realities, in order to “learn how things work and to change the way we make decisions”. The company does this by creating digital replicas of real world locations, then infusing them with real-time spatial datasets of high statistical accuracy drawn from both open and proprietary sources.
“The resulting simulations are realistic and fully digital, allowing large-scale machine learning and data analysis at an unprecedented scale,” he says.
But why has SenSat chosen to initially target infrastructure construction? “On a technical level it allows us to build simulated realities for medium to small physical areas which we have known variables for,” explains Dean. “This means we can check and quantify our results against the real world, helping us build a foundation that can scale in size and complexity… Construction, whilst remaining a fundamental pillar of world economies, is the second least innovative sector on the planet (beaten only by hunting and fishing). As a sector it has seen a zero percent productivity increase since 1970, meaning there are lots of low hanging fruit opportunities for automation”.
In addition, the design phases of large civil infrastructure construction projects can account for up to 40 percent of the entire asset value in time and cost. Because SenSat digitally re-creates the world and teaches its AI to understand it, the startup can automate many manual design tasks.
For example, Dean says that when building a new railway, it might be stipulated that the track can only have a 5-degree gradient, gantries must be placed every 100 metres and tracks must be laid 1.4 metres apart. Traditionally, engineers would spend months painstakingly measuring over large distances, hypothesising and testing, but SenSat’s AI can run thousands of options, following the exact same design rules, in a matter of minutes. The startup can then produce a fully validated best-option design, often representing millions of dollars in savings.
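SenSat hasn’t published how its design engine works, but the workflow Dean describes — encode the design rules as hard constraints, generate thousands of candidate layouts, and rank the rule-compliant survivors — can be sketched in a few lines. The sketch below is purely illustrative: the RailwayOption fields, the cost figures and the random candidate generator are assumptions standing in for SenSat’s simulated-reality data, not its actual system.

```python
import random
from dataclasses import dataclass

# Design rules taken from the railway example in the article.
MAX_GRADIENT_DEG = 5.0      # track gradient may not exceed 5 degrees
GANTRY_SPACING_M = 100.0    # gantries must be placed every 100 metres
TRACK_SEPARATION_M = 1.4    # parallel tracks must be laid 1.4 metres apart


@dataclass
class RailwayOption:
    """One candidate alignment; fields are illustrative, not SenSat's schema."""
    max_gradient_deg: float    # steepest gradient anywhere along the route
    gantry_spacing_m: float    # proposed gantry spacing
    track_separation_m: float  # spacing between parallel tracks
    earthworks_cost: float     # hypothetical cost score to minimise


def satisfies_rules(opt: RailwayOption) -> bool:
    """Hard constraint check: every design rule must hold."""
    return (
        opt.max_gradient_deg <= MAX_GRADIENT_DEG
        and abs(opt.gantry_spacing_m - GANTRY_SPACING_M) < 1e-6
        and abs(opt.track_separation_m - TRACK_SEPARATION_M) < 1e-6
    )


def random_candidate() -> RailwayOption:
    """Stand-in for sampling an alignment from the simulated terrain model."""
    return RailwayOption(
        max_gradient_deg=random.uniform(0.0, 8.0),
        gantry_spacing_m=GANTRY_SPACING_M,
        track_separation_m=TRACK_SEPARATION_M,
        earthworks_cost=random.uniform(1_000_000, 50_000_000),
    )


def best_valid_design(n_options: int = 10_000) -> RailwayOption | None:
    """Generate many options, discard rule breakers, return the cheapest survivor."""
    valid = [c for c in (random_candidate() for _ in range(n_options))
             if satisfies_rules(c)]
    return min(valid, key=lambda c: c.earthworks_cost) if valid else None


if __name__ == "__main__":
    best = best_valid_design()
    if best is not None:
        print(f"Cheapest rule-compliant option: ${best.earthworks_cost:,.0f} "
              f"(max gradient {best.max_gradient_deg:.2f} degrees)")
```

In a real pipeline the candidates would come from the simulated terrain rather than a random number generator, and the cost function would reflect actual earthworks and materials, but the rules-then-rank structure is the point of the example.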
Meanwhile, beyond infrastructure construction, the startup has a number of research streams looking at how else its technology could evolve and be applied. One area being explored is how autonomous vehicles might use the platform to run millions of hours of driverless simulation.
“Our simulated reality replicates exactly what is happening in the real world, and as such it becomes a sensible place to trial developing technologies within ‘real world’ environments, helping the reinforcement learning feedback loop by providing access to real world scenarios,” adds Dean.
“Based on the world’s highest resolution digital representations, including furniture such as street lamps, lane markings and signage, we can simulate millions of hours of driving in real world conditions to train autonomous agents and prove safety use cases. This will be an important step in convincing regulators to transition to free flow AVs on our streets, especially as the technology begins to reach level 4 autonomy and the integration problem becomes the halting factor”.
Source: TechCrunch