1 Introduction

The incidence of renal cell carcinoma (RCC) ranks within the top 20 among all solid tumors [1]. With the increasing number of early-stage RCC cases and the improvement of surgical techniques and instruments, the proportion of partial nephrectomy (PN) is increasing [2]. The European Association of Urology guidelines have suggested that the oncological outcomes achieved by PN are comparable to those achieved by radical nephrectomy (RN) for T1 RCC. PN also preserves kidney function better and potentially limits the incidence of cardiovascular disorders [3].

PN is technically demanding and presents several challenges. (1) During the operation, the key targets are obscured by perirenal fat or adjacent organs and are therefore invisible to the naked eye; without clear anatomical landmarks for guidance, they often cannot be found quickly and accurately. Delays or mistakes at this stage may lead to vascular injury, opening of the collecting system, tumor rupture, or other risks. (2) The information provided by two-dimensional (2D) CT/MR imaging is insufficient, so surgeons must “translate” cross-sectional planar images into a stereoscopic picture, a process of cognitive reconstruction. This building-in-mind process is particularly difficult for complex lesions or for inexperienced surgeons. (3) Because of an insufficient grasp of the local anatomical details of renal tumors, surgeons sometimes avoid PN in complex cases and instead choose a safer approach, such as radical nephrectomy. (4) Similarly, with insufficient preoperative information, PN often becomes an “encounter” operation, for example encountering unknown vascular variants or failing to find an endophytic tumor after incising the kidney, which introduces risk into the operation.

In recent years, new technological tools have been developed to reconstruct three-dimensional (3D) virtual models from standard 2D imaging. Holographic imaging (also known as 3D imaging, augmented reality (AR) imaging, 3D visualization models, or holograms) is reconstructed from contrast-enhanced CT or MRI DICOM data using surface-rendering-based 3D virtual reconstruction technology [4]. Holographic imaging provides more intuitive three-dimensional images, enhances the operator’s spatial understanding, and offers powerful interactive functions that guide the precise implementation of the operation. Previous studies have shown that applying holographic imaging in laparoscopic partial nephrectomy (LPN) and robotic-assisted partial nephrectomy (RAPN) reduces operative time, estimated blood loss, complications, and length of hospital stay [31]. Bertolo et al. compared the ability of 3D images with that of 2D images to expand the PN indication for complex renal tumors [32] and reported that more than 20% of surgeons changed their decision for these complex renal tumors from RN to PN after reviewing the 3D images. These results support the potential of holographic imaging in surgical planning. Shirk et al. compared 3D VR models with conventional CT/MR imaging for surgical planning and surgical outcomes in RAPN; their results showed reductions in operative time, estimated blood loss, clamp time, and hospital stay in the 3D group [33].

3.2.4 Navigation in surgery

Holographic imaging navigation superimposes holographic images on the real-time intraoperative endoscopic view, allowing the surgeon to access the targets directly and minimizing damage to surrounding vessels and other structures. Although it is not yet a fully automatic navigation system, holographic imaging helps the console surgeon perceive the three-dimensionality of the kidney and correctly localize the tumor, resulting in precise and safe tumor resection. It is particularly useful in complex renal tumor cases, such as hilar tumors, in which the surgeon must resect the tumor close to the renal vein/artery.

During holographic imaging navigation surgery, it is currently necessary to expose some anatomical landmarks, such as the renal hilar vessels or the renal contour, to achieve registration and tracking of the holographic images with the intraoperative endoscopic view [17]. In holographic imaging navigation PN, when the virtual renal pedicle is precisely fused with the real renal vascular pedicle, the surgeon is guided to perform safe vascular dissection, to identify and clamp the renal artery or vein branches, and to implement personalized vascular management strategies, such as highly selective renal artery clamping. When the virtual kidney is completely fused with the real kidney, adjusting the transparency of the model visualizes the location of an endophytic tumor and its relationship with the adjacent blood vessels, calyces, and other intraparenchymal structures, which is conducive to more accurate tumor excision [34].
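At the display level, the adjustable-transparency fusion described above amounts to alpha-blending the rendered model over the endoscopic frame. The following is a minimal sketch only; the function name and the assumption of pre-registered, equally sized RGB images are illustrative and not the cited systems’ implementation:

```python
import numpy as np

def blend_overlay(endoscope_frame, model_render, alpha):
    """Alpha-blend a rendered holographic model onto an endoscopic frame.

    Both inputs are uint8 RGB arrays of the same shape, assumed already
    registered. alpha=0 shows only the video; alpha=1 shows only the model.
    Lowering alpha lets the real kidney surface show through the virtual one.
    """
    blended = (1.0 - alpha) * endoscope_frame.astype(np.float64) \
              + alpha * model_render.astype(np.float64)
    return blended.astype(np.uint8)
```

In practice the model render itself carries per-structure transparency (e.g., a translucent parenchyma over an opaque tumor), but the compositing principle is the same.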

However, there are some concerns regarding holographic imaging navigation. One is the accuracy of registration of holographic images onto the anatomy, because it is not easy to precisely align virtual holographic images with their physical counterparts in spatial and rotational coordinates [35]. In LPN or RAPN, the establishment of pneumoperitoneum deforms the abdominal cavity and changes the spatial relationships of the kidney compared to the preoperative state. In addition, the kidney shifts, deforms, rotates, and changes its position relative to neighboring organs due to gravity and the jostling of surgical tools. For these reasons, conventional rigid registration produces large deviations once the organ has deformed. Some researchers have introduced deformable models, which alleviate this problem to some extent by modifying the preoperative model during surgery [36]; another approach, nonlinear parametric deformation, simulates the deformation of the organ during surgery [34].
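The rigid registration step that these deformable methods improve upon can be sketched as follows: given paired preoperative-model and intraoperative landmarks, the Kabsch algorithm finds the least-squares rotation and translation. This is an illustrative sketch under the assumption of known point correspondences (the cited systems do not necessarily use this exact formulation), and it also shows why deformation is a problem: a single rigid transform cannot fit an organ whose shape has changed.

```python
import numpy as np

def kabsch_register(model_pts, scene_pts):
    """Rigidly align preoperative model landmarks to intraoperative
    landmarks (N x 3 arrays in corresponding order) via the Kabsch
    algorithm. Returns rotation R and translation t minimizing the
    least-squares error of R @ p + t against the scene points."""
    mu_m = model_pts.mean(axis=0)
    mu_s = scene_pts.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (model_pts - mu_m).T @ (scene_pts - mu_s)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) solution
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_s - R @ mu_m
    return R, t
```

Once the organ deforms, the residual of this fit grows no matter how the landmarks are chosen, which is the motivation for the deformable and nonlinear parametric models mentioned above.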

Currently, to maximize the accuracy of superposition, manual registration is often necessary: an assistant manipulates the fusion system throughout the procedure to maintain the proper orientation and deformation of the model. Manual registration and tracking are simple methods to “anchor” a 3D model to its real counterpart in real time, but they require an additional assistant surgeon to control the AR workstation [18]. This work is labor-intensive, and the accuracy of image fusion depends on the experience of the assistant.

Intraoperative tracking is another major challenge. It is a great challenge to maintain satisfactory real-time accuracy in laparoscopic AR surgery because the endoscopic view of a surgical scene is highly dynamic. It is difficult for the assistant to adjust the model and match it with the endoscopic image in time. Similar to manual registration, manual tracking is also a common method under current conditions, but it is labor intensive. In addition, the efficacy of manual tracking is affected by the experience of the assistants.

3.2.5 Surgical training

Holographic imaging can also be used to enhance the education of medical students and fellows, aiding their professional development. Rai et al. reported that medical students who used an interactive 3D VR simulator based on PN cases significantly improved their subjective ability to localize the tumor [37]. Knoedler et al. evaluated the effect of 3D-printed physical renal models on medical trainees’ understanding of kidney tumor characterization and localization [38]; they reported that the overall trainee nephrometry score accuracy was significantly higher with the 3D model than with the CT scan, and that agreement among trainees was more consistent when using the 3D models to assess the nephrometry score. Thus, 3D models improve trainees’ understanding and characterization of kidney tumors.

3.3 Impact on PN outcomes

Zhu et al. reported their experience with holographic image navigation in urological laparoscopic and robotic surgery, including 27 partial nephrectomy cases; they reported that this technology reduces tissue injury, decreases complications, and improves the surgical success rate [39].

The application of 3D imaging in PN for complex renal tumors, such as renal hilar tumors, has attracted extensive attention. Wang et al. included 26 cases of renal hilar tumors and found that 3D imaging reconstruction and navigation technology offer accurate localization, a high complete resection rate, and fewer perioperative complications [40]. Porpiglia et al. reported their results of using 3D imaging during RAPN for complex renal tumors (PADUA ≥ 10) [34]. Compared to 2D ultrasound guidance, the 3D imaging and AR guidance group had a lower rate of global ischemia, a higher rate of enucleation, a lower rate of collecting system violation, a lower risk of surgery-related complications, and a smaller decrease in renal blood flow 3 months after the operation. The combination of holographic imaging with the da Vinci robotic surgical system allows accurate recognition, increased flexibility, and real-time navigation, making RAPN easier and safer for renal hilar tumors. Zhang et al. reported their series combining holographic imaging with RAPN for renal hilar tumors [18]; they reported that this technique reduces the risk of conversion to open surgery or RN, increases the success rate, and decreases complications. Zhang et al. also reported a new technique combining holographic imaging with clipping of tumor-bed artery branches outside the kidney to reduce PN-related secondary bleeding, reduce the need for postoperative interventional embolization, and shorten the length of hospital stay.

Endophytic kidney tumors present a great challenge because they are not visible on the kidney surface. Porpiglia et al. presented their use of AR images to visualize endophytic tumors [34]. AR technology potentially increases the 3D perception of the lesion’s features, boosts the surgeon’s confidence in tumor excision, and guides precise resection. Compared to the ultrasound-guided group, Porpiglia et al. observed that in the 3D AR group the enucleation rate was higher (p = 0.02), the percentage of preserved healthy renal parenchyma was higher, and the rate of collecting system opening was lower (p = 0.0003).

A systematic review has examined the effectiveness of AR-assisted technology in LPN compared to conventional techniques [45]. In addition, specially developed software called Indocyanine Green Auto Augmented Reality (IGNITE) allows 3D models to be automatically anchored to real organs and takes advantage of the enhanced views provided by near-infrared fluorescence (NIRF) technology. Other surgical tracking technologies have also been reported [46], and the subsequent development of these technologies deserves attention.

3.4.2 Artificial intelligence

Artificial intelligence (AI) should be an important direction for future holographic imaging-guided surgery. Recently, it has been reported that kidneys, renal tumors, arteries, and veins can be automatically segmented and 3D-modeled by deep learning [47, 48]. He et al. proposed the first deep learning framework, the Meta Grayscale Adaptive Network (MGANet), to simultaneously segment the kidney, renal tumors, arteries, and veins on CTA images in a single inference, yielding better 3D integrated renal structure segmentation quality [49]. Houshyar et al. developed and evaluated a convolutional neural network (CNN) as a surgical planning aid that determines renal tumor and kidney volumes through segmentation on single-phase CT. Their results showed that the end-to-end-trained CNNs performed renal parenchyma and tumor segmentation on test cases in an average of 5.6 s, and they concluded that the deep learning model rapidly and accurately segments kidneys and renal tumors on single-phase contrast-enhanced CT scans and calculates tumor and renal volumes [50]. Zhang et al. developed a deep learning-based 3D kidney perfusion model that automatically segments the renal arteries, and they verified its accuracy and reliability in LPN [51]. AI-based automatic 3D modeling is technically feasible, but the process relies on a large amount of annotated data for training.
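Once a network has produced a binary segmentation mask, the volume calculation reported above reduces to counting labeled voxels and scaling by the voxel size. A minimal sketch, with a hypothetical function name and assuming a predicted mask with known voxel spacing (not the cited authors’ pipeline):

```python
import numpy as np

def structure_volume_ml(mask, spacing_mm):
    """Volume of a binary segmentation mask in milliliters.

    mask: 3D array where nonzero voxels belong to the structure
          (e.g., a CNN-predicted tumor or kidney mask).
    spacing_mm: per-axis voxel spacing in millimeters, e.g. (z, y, x).
    """
    voxel_mm3 = float(np.prod(spacing_mm))          # volume of one voxel
    n_voxels = int(np.count_nonzero(mask))          # labeled voxels
    return n_voxels * voxel_mm3 / 1000.0            # mm^3 -> mL
```

For example, a 1000-voxel mask at 1 × 1 × 1 mm spacing corresponds to 1 mL; the same mask at 0.7 × 0.5 × 0.5 mm spacing corresponds to 0.175 mL, which is why the spacing must come from the CT header rather than being assumed.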

Marker-based and deformation-based registration techniques have been preliminarily reported to achieve more accurate registration [42]. Using deep learning to automatically recognize image and video information is expected to enable automatic registration and tracking. Recently, Padovan et al. introduced a deep learning framework based on convolutional neural networks and motion analysis that determines the position and rotation of target organs in endoscopic video in real time [52]. This work is an important step toward generalizing the automatic registration process with deep learning.

4 Conclusion

PN is a challenging surgical procedure. Holographic imaging helps surgeons to thoroughly understand the individualized anatomy of the kidney and tumor, to develop an optimized surgical plan, and to facilitate patient counseling. Holographic imaging navigation helps the surgeon accurately identify and locate the target tumor, the renal artery/vein and their branches, and the collecting system, thereby reducing complications and conversion to open surgery or RN.

The application of holographic imaging in PN has significant benefits in reducing warm ischemia time, collecting system opening, blood loss, and the incidence of serious complications, although it is similar to traditional techniques with regard to conversion to RN, complication rate, changes in glomerular filtration rate, and surgical margins. Holographic imaging in PN is particularly valuable in cases of endophytic renal tumors, complex renal hilar tumors, and super-selective clamping.

At present, the main deficiency in holographic imaging is that automatic 3D modeling and intraoperative automatic registration have not yet been fully realized, and the accuracy of registration still needs to be improved. Deep learning is expected to solve these challenges in the future.