When resecting diseased tissue, surgeons must correctly identify vital anatomical structures such as nerves, lymphatic tissue and blood vessels to prevent accidental damage to them. Identifying these structures remains enormously challenging, especially due to natural anatomical variation between individual patients. High-tech imaging techniques that provide reliable, high-resolution visual discrimination of critical anatomical structures are therefore a genuine breakthrough aid for surgeons, complementing their anatomical knowledge.
H3D-VISIONAIR aims to create 3D-microscope glasses that visualise the invisible for surgeons by combining 3D-multispectral cameras, advanced computer analytics and Near Eye Displays (NED).
The targeted breakthrough, disruptive system offers head-worn augmented reality (AR) for surgical use. It consists of two multispectral cameras (combining the visual range with near-infrared visualisation), a belt computer for data processing, and a high-end stereoscopic near-eye display with a wireless connection to the operating-room digital display and archive infrastructure. The spectral signatures of specific pre-defined tissues will be used to develop machine-learning models that segment vital anatomical structures.
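The spectral-signature approach can be illustrated with a minimal sketch. The proposal does not specify the model; the nearest-centroid classifier, tissue classes and reflectance values below are purely illustrative assumptions, showing only how four-channel (visible plus near-infrared) pixels could be mapped to tissue labels.

```python
import numpy as np

# Hypothetical reference signatures: mean reflectance per channel
# (R, G, B, NIR). Values are illustrative, not measured data.
TISSUE_SIGNATURES = {
    "background": np.array([0.2, 0.2, 0.2, 0.2]),
    "nerve":      np.array([0.8, 0.7, 0.6, 0.9]),
    "vessel":     np.array([0.6, 0.1, 0.1, 0.4]),
}

def segment(frame: np.ndarray) -> np.ndarray:
    """Label each pixel of an (H, W, 4) multispectral frame with the
    index of the closest reference signature (Euclidean distance)."""
    centroids = np.stack(list(TISSUE_SIGNATURES.values()))        # (K, 4)
    # Squared distance of every pixel to every centroid: (H, W, K)
    d = ((frame[..., None, :] - centroids) ** 2).sum(axis=-1)
    return d.argmin(axis=-1)                                      # (H, W)
```

In a real system this per-pixel decision would be replaced by the trained machine-learning models mentioned above, but the input/output contract (a multi-channel frame in, a label map out) is the same.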
These models will be used to generate the AR overlays on top of the current clinical field of view. The main objective of the current proposal is the development of a modular 3D-demonstrator that integrates image capture and processing software with the head-worn NED-pair. The demonstrator consists of a dual camera system, one camera per eye, capturing two data streams of four channels each per time frame. The two data streams will be processed synchronously and in real time to produce augmented 3D vision.
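Keeping the two streams synchronised means the left-eye and right-eye frames must be paired before the overlay is generated. A minimal sketch of that pairing step, assuming each camera tags its frames with a frame index (the function name and the drop-on-mismatch policy are assumptions, not the project's actual pipeline):

```python
def synchronise(left_stream, right_stream):
    """Pair frames from two (frame_index, frame) streams by index,
    skipping indices missing from either side (e.g. dropped frames)."""
    left = dict(left_stream)
    right = dict(right_stream)
    # Only indices present in both streams yield a stereo pair.
    for i in sorted(left.keys() & right.keys()):
        yield i, left[i], right[i]

# Example: right camera missed frame 0, left camera missed frame 3.
pairs = list(synchronise([(0, "L0"), (1, "L1"), (2, "L2")],
                         [(1, "R1"), (2, "R2"), (3, "R3")]))
```

A real-time implementation would use bounded queues and timestamps rather than buffering whole streams, but the invariant is the same: each stereo pair presented to the viewer comes from the same instant.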
Consequently, H3D-VISIONAIR will enable a surgeon to see the natural colours of the tissue in full-HD resolution with 3D depth perception, overlaid in real time with AR in which critical tissues are enhanced. Because both the visible and invisible structures are seen through the same fully electronic chain, they will be exactly pixel-to-pixel aligned. Processing 2 million pixels per frame, 60 times per second, in a belt-worn computer poses a huge technical challenge, especially given the tight margins for delay. H3D-VISIONAIR provides a modular platform that, as computing capacity increases according to Moore's law, will offer compatibility with future camera generations on the roadmap from multispectral to hyperspectral imaging presently underway within other ongoing EU projects such as EXIST and ASTONISH.
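The scale of the processing challenge follows directly from the figures in the text. A back-of-the-envelope check, assuming 8-bit samples per channel (the bit depth is an assumption, not stated in the proposal):

```python
# Figures from the text: 2 million pixels per frame, 60 frames per
# second, two camera streams (one per eye), four channels each.
PIXELS_PER_FRAME = 2_000_000
FPS = 60
STREAMS = 2
CHANNELS = 4
BYTES_PER_CHANNEL = 1  # assumed 8-bit samples

pixels_per_second = PIXELS_PER_FRAME * FPS * STREAMS
# 240 million pixel operations per second
bytes_per_second = pixels_per_second * CHANNELS * BYTES_PER_CHANNEL
# roughly 0.96 GB/s of raw sensor data to ingest and process
```

Every stage of the belt-worn pipeline (capture, segmentation, overlay, display) must sustain this rate while staying within the latency budget.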