{"id":805,"date":"2021-04-23T17:34:00","date_gmt":"2021-04-23T17:34:00","guid":{"rendered":"https:\/\/unigine.com\/blog\/?p=805"},"modified":"2023-08-03T05:30:15","modified_gmt":"2023-08-03T05:30:15","slug":"imaginator-simulation-as-a-service-from-daedalean-ag-4","status":"publish","type":"post","link":"https:\/\/unigine.com\/blog\/2021\/04\/23\/imaginator-simulation-as-a-service-from-daedalean-ag-4\/","title":{"rendered":"Imaginator: Simulation as a Service from Daedalean AG"},"content":{"rendered":"\n<p><em>The article was prepared specifically for the UNIGINE blog by the press service of Daedalean AG<\/em><\/p>\n\n\n\n<p>Today we would like to introduce you to Imaginator, a simulator project aimed at training and testing visual-based flight control systems. It is intended as a developer tool and simulates the output of onboard visual sensors (cameras). Imaginator was created by Daedalean, a Swiss technology company, and is powered by the UNIGINE rendering engine.&nbsp;<\/p>\n\n\n\n<p><strong>What is Daedalean? <\/strong>&nbsp;A Zurich-based start-up founded in 2016, Daedalean develops flight control systems based on computer vision and machine learning. They will eventually be used to power AI-based piloting systems capable of outperforming human pilots in all their functions. But they already have uses today: the systems that Daedalean is currently preparing for release serve as pilot aids, providing data for air traffic detection, GPS-independent navigation, and landing. Such \u2018AI co-pilots\u2019 rely on neural networks running on the onboard computer to process, in real time, the visual data from cameras installed on the aircraft. These systems, as well as the eventual full autonomy, are eagerly expected by the aviation industry. 
But on top of that, they will play a key role in the development of a sector that has not yet taken off: advanced air mobility\u2014pilotless electric aircraft serving as air taxis, cargo carriers, ambulance fleets, etc.<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"689\" src=\"https:\/\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/VPS-visualisation-1024x689.png\" alt=\"VPS Visualisation\" class=\"wp-image-777\" srcset=\"https:\/\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/VPS-visualisation-1024x689.png 1024w, https:\/\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/VPS-visualisation-300x202.png 300w, https:\/\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/VPS-visualisation-768x517.png 768w, https:\/\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/VPS-visualisation-1536x1034.png 1536w, https:\/\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/VPS-visualisation-830x559.png 830w, https:\/\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/VPS-visualisation-230x155.png 230w, https:\/\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/VPS-visualisation-350x236.png 350w, https:\/\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/VPS-visualisation-480x323.png 480w, https:\/\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/VPS-visualisation.png 1600w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure><\/div>\n\n\n\n<p><strong>How did the idea arise? 
<\/strong>Daedalean is training its neural networks to recognize images received from cameras for three tasks:&nbsp;<\/p>\n\n\n\n<ul class=\"wp-block-list\"><li>Visual navigation\u2014identify key points on the landscape and compare them to a map to determine the aircraft\u2019s own position;<\/li><li>Landing advisory\u2014spot and identify a helipad or runway, determine the distance and angle of approach, and recognize safe spots and obstacles for emergency landings (for example, when a car moves into a spot marked as safe, it\u2019s not safe anymore);&nbsp;<\/li><li>Traffic detection\u2014detect and classify hazards (other aircraft as well as birds, clouds, buildings, trees, wires, etc.).<\/li><\/ul>\n\n\n\n<div class=\"wp-block-jetpack-tiled-gallery aligncenter is-style-square\"><div class=\"tiled-gallery__gallery\"><div class=\"tiled-gallery__row columns-2\"><div class=\"tiled-gallery__col\"><figure class=\"tiled-gallery__item\"><img decoding=\"async\" srcset=\"https:\/\/i0.wp.com\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/Daedalean3-1.png?resize=512%2C512&#038;strip=info&#038;ssl=1 512w\" alt=\"\" data-height=\"512\" data-id=\"780\" data-link=\"https:\/\/unigine.com\/blog\/?attachment_id=780\" data-url=\"https:\/\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/Daedalean3-1.png\" data-width=\"512\" src=\"https:\/\/i0.wp.com\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/Daedalean3-1.png?ssl=1&amp;resize=512%2C512\" layout=\"responsive\"\/><\/figure><\/div><div class=\"tiled-gallery__col\"><figure class=\"tiled-gallery__item\"><img decoding=\"async\" srcset=\"https:\/\/i1.wp.com\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/Daedalean4.png?resize=512%2C512&#038;strip=info&#038;ssl=1 512w\" alt=\"Daedalean\" data-height=\"512\" data-id=\"778\" data-link=\"https:\/\/unigine.com\/blog\/?attachment_id=778\" data-url=\"https:\/\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/Daedalean4.png\" data-width=\"512\" 
src=\"https:\/\/i1.wp.com\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/Daedalean4.png?ssl=1&amp;resize=512%2C512\" layout=\"responsive\"\/><\/figure><\/div><\/div><\/div><\/div>\n\n\n\n<p>Training and testing, then adjusting the parameters and testing again, require hundreds of thousands of flight hours. The cost of real flight testing would be prohibitive. This is why Daedalean began developing an alternative solution of its own: a simulator able to provide synthetic input to the algorithms, that is, to duplicate as closely as possible how cameras onboard an aircraft see the environment under real-world flying conditions.&nbsp;<\/p>\n\n\n\n<p>The architecture of Imaginator is innovative for professional flight simulators: it\u2019s cloud-based, fully scalable, and all the \u2018heavy lifting\u2019 is done on the server side. This architectural approach made it the first visual simulator of its class provided as software-as-a-service, and a convenient tool for computer vision engineers to develop, train, and evaluate their visual algorithms, set up their own simulations, and explore edge cases.<\/p>\n\n\n\n<div class=\"wp-block-jetpack-tiled-gallery aligncenter is-style-rectangular\"><div class=\"tiled-gallery__gallery\"><div class=\"tiled-gallery__row\"><div class=\"tiled-gallery__col\" style=\"flex-basis:50%\"><figure class=\"tiled-gallery__item\"><img decoding=\"async\" srcset=\"https:\/\/i0.wp.com\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/Daedalean5.png?strip=info&#038;w=600&#038;ssl=1 600w,https:\/\/i0.wp.com\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/Daedalean5.png?strip=info&#038;w=900&#038;ssl=1 900w,https:\/\/i0.wp.com\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/Daedalean5.png?strip=info&#038;w=1200&#038;ssl=1 1200w,https:\/\/i0.wp.com\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/Daedalean5.png?strip=info&#038;w=1280&#038;ssl=1 1280w\" alt=\"Daedalean\" data-height=\"720\" data-id=\"781\" 
data-link=\"https:\/\/unigine.com\/blog\/?attachment_id=781\" data-url=\"https:\/\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/Daedalean5.png\" data-width=\"1280\" src=\"https:\/\/i0.wp.com\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/Daedalean5.png?ssl=1\" layout=\"responsive\"\/><\/figure><\/div><div class=\"tiled-gallery__col\" style=\"flex-basis:50%\"><figure class=\"tiled-gallery__item\"><img decoding=\"async\" srcset=\"https:\/\/i0.wp.com\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/Daedalean6.png?strip=info&#038;w=600&#038;ssl=1 600w,https:\/\/i0.wp.com\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/Daedalean6.png?strip=info&#038;w=900&#038;ssl=1 900w,https:\/\/i0.wp.com\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/Daedalean6.png?strip=info&#038;w=1200&#038;ssl=1 1200w,https:\/\/i0.wp.com\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/Daedalean6.png?strip=info&#038;w=1280&#038;ssl=1 1280w\" alt=\"Daedalean\" data-height=\"720\" data-id=\"782\" data-link=\"https:\/\/unigine.com\/blog\/?attachment_id=782\" data-url=\"https:\/\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/Daedalean6.png\" data-width=\"1280\" src=\"https:\/\/i0.wp.com\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/Daedalean6.png?ssl=1\" layout=\"responsive\"\/><\/figure><\/div><\/div><\/div><\/div>\n\n\n\n<p><strong>What does the UNIGINE engine do?<\/strong>&nbsp; UNIGINE is capable of rendering immersive 3D landscapes of any place on Earth in real time, with superior photorealism. 
According to Peter de Lange, Daedalean&#8217;s Team Lead of Simulation: \u201cFor the purpose of machine learning, developers need to have the base data used as \u2018ground truth\u2019 (the gold standard, to which they compare the output generated by neural networks which are being trained), and this is where UNIGINE proves indispensable\u2014its images of the environment provide a perfect synthetic ground truth.\u201d<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"536\" src=\"https:\/\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/6-1024x536-1.png\" alt=\"Clouds\" class=\"wp-image-783\" srcset=\"https:\/\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/6-1024x536-1.png 1024w, https:\/\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/6-1024x536-1-300x157.png 300w, https:\/\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/6-1024x536-1-768x402.png 768w, https:\/\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/6-1024x536-1-830x434.png 830w, https:\/\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/6-1024x536-1-230x120.png 230w, https:\/\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/6-1024x536-1-350x183.png 350w, https:\/\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/6-1024x536-1-480x251.png 480w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure><\/div>\n\n\n\n<p>However, paradoxically, the simulated imagery should not be too perfect: it should strive to duplicate realistic weather conditions\u2014visibility should vary over the entire spectrum of possibilities, be it mist, rain, clouds, etc. The actual imaging from an airborne camera will not be stable, nor will its resolution be very high, and the lenses will be exposed to moisture, debris, and dirt. Moving objects can be perceived by sensors as pixelated, lagged, or distorted. 
A visual simulator must reflect all these variables to be considered a feasible alternative to flight tests.&nbsp;<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"576\" src=\"https:\/\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/frame_06-4-1024x576.jpg\" alt=\"Daedalean\" class=\"wp-image-815\" srcset=\"https:\/\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/frame_06-4-1024x576.jpg 1024w, https:\/\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/frame_06-4-300x169.jpg 300w, https:\/\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/frame_06-4-768x432.jpg 768w, https:\/\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/frame_06-4-1536x864.jpg 1536w, https:\/\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/frame_06-4-830x467.jpg 830w, https:\/\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/frame_06-4-230x129.jpg 230w, https:\/\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/frame_06-4-350x197.jpg 350w, https:\/\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/frame_06-4-480x270.jpg 480w, https:\/\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/frame_06-4.jpg 1920w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure><\/div>\n\n\n\n<p>Imaginator\u2019s motto is \u201cBridge the gap between SIM and REAL.\u201d Thanks to UNIGINE\u2019s features, Imaginator provides images for a broad range of weather conditions (including sunshine, up to three layers of clouds, rain, snow, fog, and haze) that can be \u2018realified\u2019: adapted for low resolution, vibration, rotating cameras, etc. Imaginator gives its users full control over weather conditions, flying scenarios, and image quality settings, thus allowing them to push their algorithms\u2019 limits during testing. 
The system needs to detect and classify various objects as well as estimate the distance to them\u2014so the engine easily renders any number of other objects in the air, including various models of airplanes, helicopters, drones, or whatever air obstacles developers want to include in their scenarios.&nbsp;<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"576\" src=\"https:\/\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/01-1024x576-1.jpg\" alt=\"Daedalean\" class=\"wp-image-785\" srcset=\"https:\/\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/01-1024x576-1.jpg 1024w, https:\/\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/01-1024x576-1-300x169.jpg 300w, https:\/\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/01-1024x576-1-768x432.jpg 768w, https:\/\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/01-1024x576-1-830x467.jpg 830w, https:\/\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/01-1024x576-1-230x129.jpg 230w, https:\/\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/01-1024x576-1-350x197.jpg 350w, https:\/\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/01-1024x576-1-480x270.jpg 480w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure><\/div>\n\n\n\n<p><strong>Why UNIGINE? <\/strong>&nbsp;UNIGINE\u2019s trademark game engine technology ensures that the images are easily customizable for Daedalean\u2019s needs, provides scalability, and makes it possible to work within real-time constraints. Daedalean considered alternative solutions, but none of them provided a similar level of precision, photorealism, or scriptability. 
Unlike UNIGINE, their APIs do not allow full control over rendering or access to the internal rendering buffers, and simulators designed for the automotive industry are optimized for a very different use case: ground view.<\/p>\n\n\n\n<p><strong>Imaginator infrastructure.<\/strong> The main innovative feature of the architecture is moving the UNIGINE-based services to the server side in the cloud and deploying the simulation as a set of microservices. The service can be used immediately, without any installation or overhead, and can be accessed from anywhere. The API allows a developer to generate the first scenario within minutes. Any degree of customization within a client\u2019s request (locations, weather, image settings, camera configurations, air traffic density, various malfunctions, etc.) is available smoothly and instantaneously. Load balancing ensures that client demands are distributed evenly across the Imaginator servers, and allows for pooling, which is extremely useful for generating large datasets.&nbsp;<\/p>\n\n\n\n<p>Imaginator, though, is not a finished product; it is still under development. The creators have a lot of ideas for its roadmap. To identify the business requirements and possible use cases, Daedalean partners with aircraft and eVTOL manufacturers, as well as avionics manufacturers, so that they can test Imaginator, apply it to the tasks at hand, and offer their feedback. Daedalean plans to bring Imaginator to market as a SaaS application\u2014Simulation As A Service.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>The article was prepared specifically for the UNIGINE blog by the press service of Daedalean AG Today we would like to introduce you to Imaginator, a simulator project aimed at training and testing visual-based flight control systems. It is aimed to be a developer tool and simulates the output of onboard visual sensors (cameras). 
Imaginator [&hellip;]<\/p>\n","protected":false},"author":7,"featured_media":777,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[174],"tags":[160,171,161,162,173,169,164,170,166,172],"class_list":["post-805","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-case-study","tag-ai-learning","tag-aviation","tag-computer-vision","tag-deep-learning","tag-evtol","tag-interview","tag-neural-networks","tag-simulators","tag-uav","tag-urban-aerial-mobility"],"jetpack_featured_media_url":"https:\/\/unigine.com\/blog\/wp-content\/uploads\/2021\/04\/VPS-visualisation.png","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/unigine.com\/blog\/wp-json\/wp\/v2\/posts\/805","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/unigine.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/unigine.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/unigine.com\/blog\/wp-json\/wp\/v2\/users\/7"}],"replies":[{"embeddable":true,"href":"https:\/\/unigine.com\/blog\/wp-json\/wp\/v2\/comments?post=805"}],"version-history":[{"count":2,"href":"https:\/\/unigine.com\/blog\/wp-json\/wp\/v2\/posts\/805\/revisions"}],"predecessor-version":[{"id":816,"href":"https:\/\/unigine.com\/blog\/wp-json\/wp\/v2\/posts\/805\/revisions\/816"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/unigine.com\/blog\/wp-json\/wp\/v2\/media\/777"}],"wp:attachment":[{"href":"https:\/\/unigine.com\/blog\/wp-json\/wp\/v2\/media?parent=805"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/unigine.com\/blog\/wp-json\/wp\/v2\/categories?post=805"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/unigine.com\/blog\/wp-json\/wp\/v2\/tags?post=805"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}