Simulated Worlds for Devices that See

Highwai develops tools that enable software engineers to automatically generate the data needed to train advanced neural networks

Data trains deep learning algorithms to see

Data: The new source code

Machine learning techniques are revolutionizing software design. Neural networks are taking over many core functions that were previously handled with painstaking algorithm design. Instead of being hand-coded, neural networks are trained by example, and effective training requires huge quantities of targeted, annotated data.

This data has been called the new source code, and companies that lack access to it face enormous barriers to entering this new software world.

Highwai generates the data needed

Autonomous cars, smart cities, security and retail analytics all require devices that can see, and these devices must be trained. The Highwai Simulator lets the system designer create scenes, design scenarios and run scripted simulations that output high-quality, physically correct video sequences containing perfectly annotated ground-truth data.

Simulator generates perfectly annotated data

Simulator: 3D data visualization

Real-World 3D Simulator

Highwai's Real-World 3D Simulator uses physics-based rendering models, an advanced physics engine and motion-capture animation sequences to produce realistically lit, shadowed and animated sequences. A scenario editor allows moving objects such as automobiles to be placed into a scene alongside animated characters, creating multiple scenarios for any given scene; typical uses include building libraries of traffic activity at a road intersection or capturing behaviors of interest for a security application.

Repetitive scenario runs with parametric variations such as time of day are controlled through a scriptable interface. Simulation runs can be deterministic or randomized.
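A scripted sweep of this kind can be sketched in plain Python. This is an illustrative stand-in only: `render_scenario` and `sweep` are hypothetical names invented here, not the Highwai scripting API, and the "render" step simply records its parameters. It shows the pattern the text describes: repeating runs over a parametric variation (time of day) while keeping each run reproducible via an explicit seed.

```python
import random

# Hypothetical sketch: "render_scenario" stands in for a simulator call
# and merely records the parameters a real run would receive.
def render_scenario(scene, time_of_day, seed):
    rng = random.Random(seed)  # fixed seed -> deterministic run
    traffic_density = rng.uniform(0.2, 0.9)  # randomized per-run variation
    return {"scene": scene, "time_of_day": time_of_day,
            "seed": seed, "traffic_density": round(traffic_density, 3)}

def sweep(scene, times_of_day, runs_per_time, base_seed=42):
    """Repeat each time-of-day setting, giving every run its own seed."""
    results = []
    for t in times_of_day:
        for _ in range(runs_per_time):
            results.append(render_scenario(scene, t, base_seed + len(results)))
    return results

runs = sweep("intersection_01", ["dawn", "noon", "dusk"], runs_per_time=2)
```

Re-running the sweep with the same `base_seed` reproduces the same runs exactly, matching the deterministic mode described above; changing the seed gives the randomized variant.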

Watch the 3D Simulator in action


Available annotations include object segmentation, 2D and 3D bounding boxes, per-object pixel-perfect masks and more. Multiple simultaneous visible-light, infrared, depth and time-of-flight (LIDAR) cameras are supported. Special materials such as retroreflective surfaces can be conveniently modelled.
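To make the relationship between these annotation types concrete, here is a small illustrative function (not Highwai code) showing how a tight 2D bounding box is derived from a per-object binary mask, the kind of pixel-perfect ground truth described above. Pure Python for clarity; a real pipeline would operate on NumPy arrays.

```python
# Illustrative only: derive a tight 2D bounding box from a per-object
# binary mask (list of rows of 0/1 values).
def mask_to_bbox(mask):
    """Return (x_min, y_min, x_max, y_max), inclusive, or None if empty."""
    xs = [x for row in mask for x, v in enumerate(row) if v]
    ys = [y for y, row in enumerate(mask) if any(row)]
    if not xs:
        return None  # object not visible in this frame
    return (min(xs), min(ys), max(xs), max(ys))

mask = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
bbox = mask_to_bbox(mask)  # (1, 1, 2, 2)
```

Because the simulator knows every object's exact pixel coverage, boxes derived this way are exact rather than hand-estimated, which is what "perfectly annotated" means in practice.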


The Highwai AI Toolkit supports multiple AI frameworks, including TensorFlow, Keras and PyTorch, with the ability to prepare Highwai datasets for training, run customized inference and prediction, and generate detailed statistics and visualizations on the results. The Data Augmentation Toolkit lets the user model camera effects such as noise and compression artifacts, among many other techniques for deriving an effective training set from the raw modeled data. Its data management and verification tools take the pain out of handling large datasets.
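One of the camera effects named above, sensor noise, can be sketched as follows. This is a minimal stand-in, not the Data Augmentation Toolkit's API: additive Gaussian noise applied to 8-bit pixel values, seeded so the augmentation is reproducible. A real toolkit would apply this to whole frames with NumPy.

```python
import random

# Minimal sketch of one augmentation technique: additive Gaussian
# sensor noise on 8-bit pixel values, clipped back into valid range.
def add_gaussian_noise(pixels, sigma=8.0, seed=0):
    """pixels: flat list of 0-255 ints. Returns a noisy, clipped copy."""
    rng = random.Random(seed)
    return [min(255, max(0, round(p + rng.gauss(0.0, sigma))))
            for p in pixels]

clean = [128] * 100          # a flat mid-gray patch
noisy = add_gaussian_noise(clean, sigma=8.0, seed=1)
```

Augmenting perfectly clean rendered frames with effects like this narrows the gap between simulated training data and what a physical camera actually captures.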

Together, the Highwai Simulator and AI Toolkit form a complete, end-to-end tool chain for producing, managing and visualizing training data right through the training process. The training loop can be closed quickly and easily by producing new training sequences based on results, enabling design-space exploration as well as software-in-the-loop use cases. Highwai also provides a selection of deployment-ready neural networks, trained to the customer's specifications.