VISIMO Wins Phase II Air Force Award to Tackle Training Data Problem for AFRL

VISIMO

VISIMO, with Research Institute partner Colorado State University, has won an Air Force Phase II Small Business Technology Transfer (STTR) award to develop a generator capable of creating synthetic, annotated image datasets in minutes, improving on the months, or even years, required to manually annotate datasets using current methods.

The Air Force Research Laboratory Information Directorate (AFRL/RI) is collaborating as the end user of VISIMO’s technology. AFRL/RI works to prototype game-changing technologies and transition them to interested users across the Air Force and Department of Defense. VISIMO’s technology will aid the Lab in the development of a broad range of technologies.

“Specialized, large-scale datasets are required to train machine learning models. The advancement of ML and what these models can accomplish are often slowed due to a lack of appropriate training data,” said VISIMO’s Chief Data Scientist, Dino Mintas.

Researchers are often forced to manually create training datasets, which is especially cumbersome when specialized or rare image data are needed. Models trained on overhead imagery include tracking and detection algorithms, the types of models used for autonomous vehicle navigation, humanitarian aid drops, wildfire management, search and rescue, and more.

VISIMO is building a conditional generative adversarial network (CGAN) that learns to generate unlimited original backgrounds from a limited set of landscape images. It then inserts annotated objects into the generated backgrounds, automating a typically manual process. In Phase I, the CGAN proof of concept focused on overhead satellite imagery; in Phase II it will be expanded to other types of image data, such as synthetic aperture radar (SAR) and lidar.
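The object-insertion step described above, pasting known objects into a generated background so the annotations come for free, can be sketched in a few lines. This is a minimal illustration using NumPy arrays; the function name, array layout, and bounding-box format are assumptions for the example, not VISIMO's actual implementation.

```python
import numpy as np

def composite_annotated(background, objects, positions):
    """Paste object patches onto a background image and return the
    composite plus bounding-box annotations (illustrative sketch).

    background: H x W x 3 uint8 array (e.g. a CGAN-generated landscape)
    objects:    list of h x w x 3 uint8 patches with known labels
    positions:  list of (row, col) top-left insertion points
    """
    img = background.copy()
    annotations = []
    for patch, (row, col) in zip(objects, positions):
        h, w = patch.shape[:2]
        # Because we place the object ourselves, its bounding box is
        # known exactly -- no manual annotation pass is needed.
        img[row:row + h, col:col + w] = patch
        annotations.append({"bbox": (col, row, w, h)})
    return img, annotations

# Usage: drop one white 8x8 "object" onto a blank 64x64 background.
bg = np.zeros((64, 64, 3), dtype=np.uint8)
obj = np.full((8, 8, 3), 255, dtype=np.uint8)
img, anns = composite_annotated(bg, [obj], [(10, 20)])
```

A real pipeline would also blend patch edges and vary object scale and rotation, but the key idea is the same: the generator controls placement, so every output image is annotated by construction.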

Phase II will focus on expanding the range of specialized output parameters the CGAN supports, including various security features, biome changes such as snow or desert, and visibility changes such as shadow, sunlight, fog, or smoke. Phase II builds on successful Phase I work, in which the proof of concept generated full, customized datasets in just 15 minutes.