Multilevel feature fusion in digital pathology

Breast cancer stage and prognosis are mainly determined from surgically removed sentinel lymph nodes, which are dissected, stained, and examined for the presence of tumor cells. The extent to which tumor cells have spread to the lymph nodes affects the patient's treatment plan and prognosis. Currently, the best practice for evaluating stained glass slides of lymph node tissue sections is microscopic examination by a trained pathologist. This time-consuming examination is prone to subjective errors, as the scanner-produced images are typically gigapixel-sized and tumor areas are relatively small. Isolated tumor cells (ITCs) in particular are hard to spot and require a trained eye.

Recent progress with convolutional neural networks (CNNs) in image processing has also proven effective in detecting metastases from tissue sample images, with performance on par with a group of expert pathologists. CNNs could be used routinely to highlight possible tumor areas for pathologists, or even to independently assess the extent of metastatic regions in samples, easing the workload of human experts.

What makes tumor detection from tissue images challenging is the high resolution of the images that commercial scanners export. Processing a whole gigapixel image requires a lot of memory, so the image is typically split into smaller windows that are fed through the CNN separately. The weakness of this procedure is that the field of view per sample window is narrow, and information about the surrounding context is lost. This thesis examines the benefits of feeding batches of image patches at different zoom levels, cropped from the same location, to a CNN tumor classifier.
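
As an illustration of this multilevel idea, the sketch below shows one possible way to encode patches cropped from the same location at two zoom levels with a shared CNN backbone and to concatenate their features before a tumor/normal classification head. This is a minimal assumption-based sketch, not the thesis implementation; the layer sizes, the two-level setup, and the PyTorch framing are all illustrative choices.

```python
# Minimal sketch of multilevel patch fusion (illustrative, not the thesis code):
# patches at several zoom levels, cropped around the same location, are encoded
# by a shared CNN and their features are concatenated before classification.
import torch
import torch.nn as nn


class MultiZoomClassifier(nn.Module):
    def __init__(self, num_levels: int = 2, feat_dim: int = 64):
        super().__init__()
        # Shared backbone applied to each zoom level separately.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, feat_dim, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Fused features from all zoom levels feed a single tumor/normal head.
        self.head = nn.Linear(num_levels * feat_dim, 2)

    def forward(self, patches: list[torch.Tensor]) -> torch.Tensor:
        # patches: one (B, 3, H, W) tensor per zoom level, same center location.
        features = [self.backbone(p) for p in patches]
        return self.head(torch.cat(features, dim=1))


if __name__ == "__main__":
    model = MultiZoomClassifier(num_levels=2)
    close_up = torch.randn(4, 3, 96, 96)  # high-magnification patch
    context = torch.randn(4, 3, 96, 96)   # same location, lower magnification
    logits = model([close_up, context])
    print(logits.shape)  # torch.Size([4, 2])
```

The design choice illustrated here is that the wider-context patch supplies the surrounding-tissue information that a single narrow window loses, while the shared backbone keeps the parameter count the same as a single-level classifier.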
