SM Journal of Biomedical Engineering ISSN: 2573-3702

Research Article

Automatic Segmentation of Glioma from 3D MR Images by Using Location Free Asymmetry Detection

Guoqing Wu1#, Chunhong Ji1#, Jinhua Yu1,2*, Yuanyuan Wang1, Liang Chen3, Zhifeng Shi3 and Ying Mao3

Abstract

Accurate segmentation of glioma from Magnetic Resonance (MR) images provides essential assistance for glioma resection and for evaluating progress after resection. Numerous methods have been presented to segment glioma from Two Dimensional (2D) or Three Dimensional (3D) MR images. To deal with the complex structure of the brain and the varied shapes of gliomas, methods that select asymmetric areas with respect to the approximate symmetry of the brain are widely used. Such methods, however, may fail when the glioma extends across the mid-sagittal plane. This paper develops a fully automatic 3D asymmetry detection method for glioma segmentation that overcomes the location limitation of conventional asymmetry detection methods. The proposed 3D bounding box method locates the glioma in a Volume of Interest (VOI), which is then checked and corrected by a reflectional symmetry method. With the accurate VOI, an improved 3D GrowCut method is employed to segment the glioma automatically and quickly. We evaluated the accuracy of the proposed method using both synthetic and real clinical MR image data. Experimental results show that our method successfully overcomes the difficulty that conventional asymmetry detection methods encounter when the glioma crosses the mid-sagittal plane. Our method achieves segmentation performance similar to manual segmentation and is clearly more efficient and convenient than the 2D automatic segmentation method.

Introduction

Glioma is one of the most common malignant brain tumors [1,2], with high mortality and morbidity. Surgical resection followed by adjuvant chemotherapy and radiotherapy is the standard protocol for treating glioma patients [3]. Gross Total Resection (GTR) or Subtotal Resection (STR) of glioma appears to be correlated with longer survival [4-7]. However, achieving GTR/STR is very difficult because of the diffuse growth pattern of glioma. Accurate segmentation of glioma from Magnetic Resonance (MR) images enhances intraoperative techniques for tumor delineation [8]. It contributes to the goal of maximal tumor resection while keeping neurologic deficits at an acceptable level [9]. After surgery, accurate glioma segmentation allows quantification of any tumor residue, including its volume and other morphological characteristics, which is especially helpful for patients who need regular long-term follow-up. Segmentation is also a vital and basic step for subsequent registration, feature detection, classification and construction of pathological brain atlases [10]. However, accurate and fast segmentation remains challenging because of the varied locations and shapes of gliomas and the complex structure of the brain.

Numerous algorithms have been developed for brain tumor detection and segmentation, including semi-automatic and automatic approaches. These include level set methods [11], asymmetry analysis based methods [12-15], region growing [16-19], watershed algorithms [18,20] and atlas based methods [21,22]. Among them, segmentation methods based on asymmetry detection are widely used and studied. To rapidly detect the existence of brain tumors in 3D MR neuroimages, Wang et al. [12] proposed a symmetry analysis method with respect to the mid-sagittal plane. Four parameters (correlation coefficient, root mean square error, integral of absolute difference and integral of normalized absolute difference) were used to estimate the similarity between the grey level histograms of the two hemispheres. However, this method could not localize the tumor position in the data in which a suspicious tumor was detected. In 2009, Khotanlou et al. [13] roughly located the tumor by calculating the histogram difference between the normal and pathological hemispheres. A threshold around the tumor peak and the grey-level range of the tumor were selected manually from the histogram difference. A limitation of this approach is that the symmetry analysis may fail when the tumor lies symmetrically across the mid-sagittal plane. Saha et al. [14] proposed the bounding box method, which computes a score function from grey-level intensity histograms based on the Bhattacharyya coefficient. A bounding box circumscribing the tumor is then located on each 2D MR slice by the score function. This approach is robust to intensity variations among different MR image slices and is completely unsupervised and efficient. However, these methods may fail when the tumor lies across the mid-sagittal plane, and this problem remains unsolved.

Recently, supervised and unsupervised learning algorithms based on statistical approaches have been proposed. Support Vector Machines (SVMs) have been used to segment healthy and pathological tissues [23,24]. However, to learn discriminant functions for the posterior segmentation, supervised learning methods require a sufficiently large set of labelled samples [25]. Unsupervised learning tackles this limitation, at the cost of accuracy, through clustering techniques such as Gaussian clustering [26-28], fuzzy clustering [29,30] and spectral clustering [31]. Two of the main steps of these segmentation models are the statistical analysis of the intensity distribution and feature extraction [32,33]. However, MR image quality is subject to various uncertainties, such as RF coil inhomogeneity, the partial volume effect and electronic noise, which degrade segmentation performance [34]. Moreover, because of the explicit dependency on intensity features, segmentation is restricted to images acquired with exactly the same imaging protocol as the training data [35]. Compared with asymmetry analysis based methods, supervised and unsupervised learning algorithms are more complex and are limited by the size and quality of the dataset.

This paper proposes a fully automatic method that also utilizes the asymmetry detection strategy. The method extends the quick detection method proposed by Saha et al. [14] to a 3D unsupervised change detection method. It segments gliomas quickly and accurately even when they lie across the mid-sagittal plane, and requires no prior knowledge of the possible locations of gliomas. The proposed method includes three main steps: searching for the Volume of Interest (VOI), checking and correcting the VOI, and detecting the boundaries of the glioma.

Since providing an accurate initial position of the glioma is one of the main difficulties in extending semi-automatic methods into automatic ones, the volume of interest containing the glioma should be searched for in advance to reduce the difficulty of segmentation. The bounding box method proposed by Saha et al. was used to provide active contour, normalized graph cut and other segmentation techniques with seeds on 2D MR slices [14]. However, before applying the 2D bounding box method, the slices without glioma should first be eliminated to save time. In this paper, the bounding box is extended into a 3D search method that locates the glioma in a cuboid instead of in rectangles on 2D MR slices. The proposed 3D bounding box method locates the glioma in a VOI more quickly than processing 2D MR images slice by slice.

Kiryati et al. proposed a global optimization approach to detect dominant local symmetry, which can be used to guide visual attention and segmentation algorithms [36]. Based on the result of the bounding box, we apply reflectional symmetry detection to a relatively small region instead of the whole MR image with its complex brain tissues. The distance between the glioma centers found by the bounding box and by the reflectional symmetry method is calculated to judge the accuracy of the VOI. By incorporating the reflectional symmetry method, the limitation of the bounding box method is overcome and an accurate VOI can be located regardless of the location of the glioma.

Once the VOI has been obtained, many methods could be used for accurate segmentation of the glioma. Vezhnevets et al. proposed the interactive GrowCut method, which is fast and widely used for generic photos and medical images [37]. Given a small number of user-labelled pixels, the rest of the image is segmented automatically by a cellular automaton algorithm [38]. Because too many seed points must be selected manually in 3D MR data, segmentation with the GrowCut method is difficult and inconvenient. In this paper, the VOI found beforehand is introduced into the GrowCut method in the step of setting seed labels. The traditional GrowCut method is thus extended into a 3D, fully automatic method whose accuracy no longer depends on the correctness of user-marked labels.

The rest of this paper is organized as follows. The Methods section details the realization of the proposed method, the Experiments and Results section presents the experimental results and analysis, and conclusions are drawn in the final section.

Methods

In this paper, a novel 3D automatic segmentation method is proposed to detect the glioma in MR data. First, the volume of interest containing the glioma is found by the proposed 3D bounding box method. The reflectional symmetry method is then applied to overcome the limitation of the bounding box method when the glioma crosses the mid-sagittal plane. Finally, the semi-automatic GrowCut algorithm is extended into a fully automatic 3D method by utilizing the obtained VOI to set the seed labels.

The 3D bounding box method

We first summarize the original 2D bounding box method proposed by Saha et al. [14]. The axial brain MR image is first divided into two rectangles, which serve as the test and reference images, respectively. A score function is then calculated to search for a small axis-parallel rectangle within one of the obtained rectangles that is very dissimilar from the corresponding region of the other.

Here we extend the 2D bounding box into a 3D algorithm, also based on the score function, that searches for a volume-based global change. Figure 1 illustrates the notation.

Figure 1: Searching anomaly D from test cuboid I using reference cuboid R. (a) searching along x direction, (b) searching along y direction, (c) searching along z direction.

The volume of change (D) is detected in a test cuboid (I) by comparison with a reference cuboid (R). The volume of change D, bounded by an axis-parallel cuboid, contains the region of abnormality. The size of the test cuboid I is M×N×L. A novel score function identifies D by searching for its bounds along the x, y and z directions in turn. The score function along the x direction is defined as follows:

(1)

where T(l) and B(l) are the "top" and "bottom" sub-cuboids obtained by splitting at a distance l from the top of the cuboid: T(l)=[1, l]×[1, N]×[1, L] and B(l)=[l+1, M]×[1, N]×[1, L]. P_I^T(l) denotes the normalized intensity histogram of cuboid I within T(l); P_R^T(l), P_I^B(l) and P_R^B(l) are defined accordingly. BC represents the Bhattacharyya coefficient [39] between two normalized histograms; it measures their similarity and is defined as:

(2)

(3)
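The bodies of Eqns (2) and (3) are not reproduced in this extraction. The Bhattacharyya coefficient of two normalized histograms h1 and h2 is commonly written as below; we give this standard form as a plausible reconstruction rather than the authors' exact notation:

$$BC(h_1, h_2) = \sum_{u} \sqrt{h_1(u)\, h_2(u)}, \qquad \sum_{u} h_i(u) = 1,$$

where the sum runs over the grey-level bins u.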

When two normalized histograms are identical, the BC between them is 1; conversely, the BC is 0 when they are completely dissimilar. The increasing and decreasing segments of the score function have been proved to meet at x1 and x2, which are the upper and lower bounds of D, respectively. After x1 and x2 are obtained, the volume above x1 and the volume below x2 are cut off, as shown in Figure 1b, and T(l) and B(l) are redefined as T(l)=[1, x2-x1]×[1, l]×[1, L] and B(l)=[1, x2-x1]×[l+1, N]×[1, L]. The bounds y1 and y2 (the left and right bounds of D) are then obtained from the score function along the y direction, and the bounds z1 and z2 (the front and back bounds of D) are obtained in the same way.
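To make the mechanics concrete, the following Python sketch (our illustration, not the authors' code; the helper names and the shared intensity range are assumptions, while the 20-bin histogram follows the value of p reported in the Discussion) computes, for every split position along one axis, the Bhattacharyya similarity of the top sub-cuboids and of the bottom sub-cuboids of the test and reference cuboids. The score function of Eqn (1) is evaluated from these two curves, and the sweep is repeated along the y and z axes after cutting.

```python
import numpy as np

def normalized_histogram(sub, bins, value_range):
    # normalized grey-level histogram of a sub-cuboid (the paper uses p = 20 bins)
    hist, _ = np.histogram(sub, bins=bins, range=value_range)
    return hist / max(hist.sum(), 1)

def bhattacharyya(h1, h2):
    # Bhattacharyya coefficient of two normalized histograms: 1 = identical, 0 = disjoint
    return float(np.sum(np.sqrt(h1 * h2)))

def split_similarities(test, ref, axis=0, bins=20):
    """For every split position l along `axis`, return the Bhattacharyya similarity of the
    top sub-cuboids T(l) and of the bottom sub-cuboids B(l) of the test and reference cuboids."""
    value_range = (min(test.min(), ref.min()), max(test.max(), ref.max()))  # shared bin edges
    bc_top, bc_bottom = [], []
    for l in range(1, test.shape[axis]):
        t_test, b_test = np.split(test, [l], axis=axis)
        t_ref, b_ref = np.split(ref, [l], axis=axis)
        bc_top.append(bhattacharyya(normalized_histogram(t_test, bins, value_range),
                                    normalized_histogram(t_ref, bins, value_range)))
        bc_bottom.append(bhattacharyya(normalized_histogram(b_test, bins, value_range),
                                       normalized_histogram(b_ref, bins, value_range)))
    return np.array(bc_top), np.array(bc_bottom)
```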

The proposed 3D bounding box is applied under the assumption that the healthy human brain is approximately left-right symmetric, with the mid-sagittal plane as the symmetry plane. The glioma region is the volume of change D that causes the asymmetry. The left and right brain hemispheres serve as I and R, respectively. The 3D MR data are preprocessed by the following steps to obtain the two symmetric cuboids. The skull boundary is first detected by automatic global thresholding [40] and fitted with an ellipsoid. The angle α between the long axis of the ellipsoid and the z axis is then calculated. Finally, the 3D MR image is rotated by α degrees and cut so that the whole image is divided into two parts containing the left and right brain hemispheres, respectively. Applying the two symmetric hemispheres to the 3D bounding box method yields the volume of change D, which is defined as the VOI.
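A rough sketch of this preprocessing, under our own reading, is given below. It is not the authors' implementation: the ellipsoid fit is replaced by a PCA of the head-mask coordinates in the axial plane, and the rotation sign and axis conventions are assumptions.

```python
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def split_into_hemispheres(volume):
    """Threshold the head, estimate its in-plane orientation, align it, and split it into
    two mirrored hemisphere cuboids (volume is indexed as (slice, row, column))."""
    head = volume > threshold_otsu(volume)                # automatic global thresholding [40]
    zs, ys, xs = np.nonzero(head)
    coords = np.stack([ys - ys.mean(), xs - xs.mean()])   # in-plane (row, column) coordinates
    eigvals, eigvecs = np.linalg.eigh(np.cov(coords))     # PCA surrogate for the ellipsoid fit
    long_axis = eigvecs[:, np.argmax(eigvals)]            # direction of the largest spread
    alpha = np.degrees(np.arctan2(long_axis[1], long_axis[0]))   # angle from the row axis
    # rotate in the axial plane so the long axis becomes vertical (sign is convention-dependent)
    aligned = ndimage.rotate(volume, alpha, axes=(1, 2), reshape=False, order=1)
    mid = aligned.shape[2] // 2                           # mid-sagittal column after alignment
    left = aligned[:, :, :mid]
    right = aligned[:, :, mid:2 * mid][:, :, ::-1]        # mirror so sub-cuboids correspond
    return left, right
```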

The reflectional symmetry detection method

Based on the assumption that the two hemispheres are symmetric except for the volume of the glioma, the bounding box method can accurately detect a glioma contained in only one hemisphere. However, when the glioma lies across the mid-sagittal plane, it detects only the part that differs between the two hemispheres [13], as shown in Figure 2. This limits the accuracy of automatic segmentation methods. In clinical practice the glioma may well cross the mid-sagittal plane, so this limitation of the original bounding box method must be overcome.

Figure 2: The result of the bounding box method when the anomaly crosses the mid-sagittal plane. The two symmetric parts (blue and green) cannot be detected, while the asymmetric part (red) is detected and treated as the VOI.

Reflectional symmetry is an important cue for object recognition in both human and computer vision systems. Kiryati et al. proposed a global optimization approach to detect local symmetry in grey level images [36]. Measuring local symmetry is treated as finding the global maximum of a function parameterized by the location of the center, the radius and the orientation of the symmetry axis. Let f(x,y) be a 2D function that equals zero except within a circle of radius L. A measure of 2D reflectional symmetry in the x, y coordinate system is defined as follows:

(4)
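The body of Eqn (4) is missing from this extraction. In Kiryati and Gofman's formulation [36], the measure is the fraction of the signal energy carried by the component of f that is even with respect to the candidate symmetry axis; a plausible form, consistent with the description below, is

$$S\{f\} = \frac{\int \lVert f_e(x,\cdot) \rVert^{2}\, dx}{\int \lVert f(x,\cdot) \rVert^{2}\, dx}, \qquad f_e(x,y) = \tfrac{1}{2}\bigl[f(x,y) + f(x,-y)\bigr],$$

where y is measured perpendicular to the candidate symmetry axis.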

where, for each x, the norm is taken over 1D functions of y along a line segment extending from -L to L parallel to the y axis. S{f} ranges from 0 to 1; the closer S{f} is to 1, the stronger the symmetry of the region inside the circle. To improve robustness, the circular region is weighted by a Gaussian window:

(5)

where r refers to the effective radius of the support. The maximum of Eqn (4) is calculated and the optimal parameters are obtained.

Because the 3D MR image contains multiple structures, including the eyeballs, cerebrospinal fluid and brainstem, reflectional symmetry alone cannot be used to detect the glioma. Since the VOI found by the bounding box contains at least part of the glioma, we apply the reflectional symmetry method within it. The symmetry is detected in the axial slice of the VOI that contains the largest skull cross-section (image F), and the search scope is an expansion of the VOI in image F (each side is expanded to 1.5 times the maximum edge length). The distance between the center of reflectional symmetry and the center of the VOI found by the bounding box method is then calculated. If this distance is small (less than one third of the minimum edge length of the VOI found by the bounding box method), the bounding box result is accepted as correct. Otherwise, the bounding box result is replaced with the center of reflectional symmetry, and the six faces of the VOI are adjusted so that the VOI contains the whole glioma: the upper and lower faces perpendicular to the x axis are moved by -r/+r along the x axis, and the other faces are moved accordingly.
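The acceptance test and correction rule can be summarized by the small sketch below (the names, the array layout and the reading of "moved accordingly" as enlarging every side by 2r are ours):

```python
import numpy as np

def check_and_correct_voi(bbox_center, bbox_lengths, symmetry_center, r):
    """bbox_center / bbox_lengths describe the VOI from the 3D bounding box; symmetry_center
    is the center found by reflectional symmetry detection; r is its effective radius."""
    bbox_center = np.asarray(bbox_center, dtype=float)
    symmetry_center = np.asarray(symmetry_center, dtype=float)
    lengths = np.asarray(bbox_lengths, dtype=float)
    distance = np.linalg.norm(bbox_center - symmetry_center)
    if distance < lengths.min() / 3.0:
        return bbox_center, lengths          # centers agree: keep the bounding-box VOI
    # otherwise re-center on the symmetry result and push every face outward by r
    return symmetry_center, lengths + 2.0 * r
```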

The 3D GrowCut method

The GrowCut method is a region-growing based segmentation algorithm [37]. In this method, certain image pixels belonging to the objects are first specified by the user as hard constraints. The labels of all other pixels are then assigned automatically by a Cellular Automaton (CA) [38], so that pixels of the same object receive the same label and pixels of different objects receive different labels. In this paper, the method is improved by setting the seed labels automatically in the MR images, so users no longer need to label the glioma and its surroundings slice by slice.

Setting seed labels: Because the cross-section of the brain grows and then shrinks along the axial direction, the glioma centers in different slices are shifted relative to one another and must be adjusted. The axial slice of the VOI containing the largest skull cross-section is used as the reference (IS). The glioma centers in image IS and in the VOI coincide and are denoted (x_centerS, y_centerS). The center and radius of the glioma in the other slices are defined as follows:

(6)

(7)

(8)

where x_centerk and y_centerk are the coordinates of the glioma center in the kth slice, (Xk, Yk) and (lxk, lyk) are the center and the side lengths of the rectangle enclosing the skull in the kth slice, and (lxS, lyS) are the corresponding side lengths in the reference image IS. S is half of the minimum side length of the VOI.

After the glioma center in each MR slice has been calculated, seed labels for the glioma and the surrounding brain structures are set. To label the glioma, a small circle with radius rk centered at (x_centerk, y_centerk) in the kth slice is marked as foreground (+1). Meanwhile, the VOI is expanded by a factor of 1.2 into a larger cuboid, which is marked as background (-1). The glioma and the peripheral background are thus labeled as an ellipsoid and a cuboid, respectively.
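A sketch of this automatic labelling, under our reading of the text (we mark the lateral faces of the expanded cuboid as background seeds; the names and array layout are ours), is:

```python
import numpy as np

def set_seed_labels(shape, centers, radii, voi_bounds, expand=1.2):
    """shape = (slices, rows, cols); centers[k] and radii[k] come from Eqns (6)-(8);
    voi_bounds = (k0, k1, x0, x1, y0, y1) are the VOI index ranges. Returns int8 labels
    with +1 = glioma seed, -1 = background seed, 0 = unknown."""
    labels = np.zeros(shape, dtype=np.int8)
    k0, k1, x0, x1, y0, y1 = voi_bounds
    # background seeds: the in-plane faces of the VOI enlarged by the factor `expand`
    cx, cy = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    hx, hy = expand * (x1 - x0) / 2.0, expand * (y1 - y0) / 2.0
    bx0, bx1 = int(max(cx - hx, 0)), int(min(cx + hx, shape[1] - 1))
    by0, by1 = int(max(cy - hy, 0)), int(min(cy + hy, shape[2] - 1))
    labels[k0:k1 + 1, [bx0, bx1], by0:by1 + 1] = -1
    labels[k0:k1 + 1, bx0:bx1 + 1, [by0, by1]] = -1
    # foreground seeds: a small disc around the estimated glioma center in each slice
    rows, cols = np.mgrid[0:shape[1], 0:shape[2]]
    for k in range(k0, k1 + 1):
        xc, yc = centers[k]
        disc = (rows - xc) ** 2 + (cols - yc) ** 2 <= radii[k] ** 2
        labels[k][disc] = 1
    return labels
```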

Diffusing seed labels: The voxel labelling process is treated as a competition in which labels grow and struggle for domination. Unknown voxels are initialized to zero. For each voxel, the label is assigned iteratively according to its strength and the strengths of its neighboring voxels. Let p be a voxel of the voxel set P to be labelled; the Moore neighborhood N(p) is used, so each voxel p has 26 neighbors, and q denotes a voxel in N(p). At iteration t+1, the label and strength of voxel p are updated as follows:

(9)

(10)
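The bodies of Eqns (9) and (10) are not reproduced in this extraction. In the original GrowCut formulation [37], which the text follows, the update for voxel p and each neighbor q in N(p) can be written as (our reconstruction of the standard rule):

$$\text{if } g\bigl(\lVert C_q - C_p \rVert\bigr)\,\theta_q^{t} > \theta_p^{t}: \qquad l_p^{t+1} = l_q^{t}, \qquad \theta_p^{t+1} = g\bigl(\lVert C_q - C_p \rVert\bigr)\,\theta_q^{t},$$

otherwise the label and strength are kept, where l_p^t and θ_p^t ∈ [0,1] are the label and strength of voxel p at iteration t (seed voxels start with strength 1, unknown voxels with strength 0).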

where Cp is the intensity value of voxel p and g(x) is a monotonically decreasing function bounded to [0,1]:

(11)
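Eqn (11) is also missing; the choice used in the original GrowCut paper [37], which we reproduce here as a plausible reconstruction, is

$$g(x) = 1 - \frac{x}{\max \lVert C \rVert},$$

where max‖C‖ is the maximum intensity in the volume.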

The iteration stops when no voxel label changes or the maximum number of iterations is reached. In the end, the regions assigned "+1" and "-1" represent the glioma and the background, respectively. The maximum number of iterations can be set between 200 and 500; to obtain fast processing, we set it to 200.
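A minimal Python sketch of this 3D automaton (our illustration, not the authors' implementation; border handling is simplified) is:

```python
import numpy as np
from itertools import product

def grow_cut_3d(volume, labels, max_iter=200):
    """volume: float intensities; labels: int8 seeds with +1 (glioma), -1 (background), 0 (unknown)."""
    volume = volume.astype(float)
    labels = labels.astype(np.int8).copy()
    strength = (labels != 0).astype(float)            # seeds start with full strength
    max_c = volume.max() if volume.max() > 0 else 1.0
    offsets = [o for o in product((-1, 0, 1), repeat=3) if o != (0, 0, 0)]  # 26-neighborhood

    for _ in range(max_iter):
        new_labels, new_strength = labels.copy(), strength.copy()
        for offset in offsets:
            # np.roll wraps at the volume borders; a full implementation would mask those voxels
            nb_labels = np.roll(labels, offset, axis=(0, 1, 2))
            nb_strength = np.roll(strength, offset, axis=(0, 1, 2))
            nb_intensity = np.roll(volume, offset, axis=(0, 1, 2))
            g = 1.0 - np.abs(volume - nb_intensity) / max_c       # Eqn (11)
            attack = g * nb_strength                              # neighbor's attack force
            win = attack > new_strength                           # Eqns (9) and (10)
            new_labels[win] = nb_labels[win]
            new_strength[win] = attack[win]
        if np.array_equal(new_labels, labels):                    # stop when no label changes
            break
        labels, strength = new_labels, new_strength
    return labels
```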

Materials

We retrospectively studied the "Brain Tumor Image Bank" of the Neurosurgical Department, Huashan Hospital of Fudan University in Shanghai, China, and enrolled 48 patients diagnosed with lower grade glioma. 3D T2-FLAIR MR images were provided for each patient. These images contained gliomas with different sizes, intensities, shapes and locations, which allowed us to assess the accuracy and validity of our method. There were 29 women aged 26 to 61 and 19 men aged 24 to 58. This study was approved by the institutional review board, and each patient gave informed consent to join the research.

Experiments and Results

We evaluated the proposed 3D segmentation method using both synthetic and real 3D MR images. The synthetic images were obtained by combining real MR images with simulated tumors. Manual segmentation was used as the standard. To verify the necessity and effectiveness of the proposed 3D bounding box, we compared its performance with the original 2D bounding box method proposed by Saha et al. [14]. In addition, the original 2D bounding box method was improved into a 2D automatic algorithm in a similar way to that proposed in this paper. In both the original and the improved 2D automatic bounding box based GrowCut methods, labels were set slice by slice. The results of the 3D automatic segmentation method were compared with those of the improved 2D algorithm.

To compare the results quantitatively, metrics including True Positive rate (TP), False Positive rate (FP), False Negative rate (FN), Similarity Index (SI) and total accuracy rate (ACC) were calculated:

(12)

(13)

(14)

(15)

(16)
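The bodies of Eqns (12)-(16) are not reproduced in the extracted text. A plausible reading, consistent with the definitions given below and with the values reported in Tables 1 and 3, is

$$\mathrm{TP} = \frac{|S_A \cap S_T|}{|S_T|}, \qquad \mathrm{FP} = \frac{|S_A \setminus S_T|}{|S_T|}, \qquad \mathrm{FN} = \frac{|S_T \setminus S_A|}{|S_T|},$$

$$\mathrm{SI} = \frac{|S_A \cap S_T|}{|S_A \cup S_T|}, \qquad \mathrm{ACC} = \frac{|S_A \cap S_T| + |\mathrm{All}| - |S_A \cup S_T|}{|\mathrm{All}|}.$$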

where ST is the true glioma region, SA is the region detected by the segmentation method, and All denotes the total number of voxels in the image. SA∩ST denotes the glioma voxels that were correctly identified, and All − SA∪ST denotes the non-glioma voxels that were correctly identified. In addition, the total number of voxels in ST was calculated and denoted Total.
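For reference, these metrics can be computed from boolean voxel masks as in the small utility below (our code, following the definitions above):

```python
import numpy as np

def evaluation_metrics(seg, truth):
    """seg = automatic result S_A, truth = reference S_T, both boolean voxel masks."""
    seg, truth = np.asarray(seg, bool), np.asarray(truth, bool)
    total = truth.sum()                              # "Total": number of voxels in S_T
    inter = np.logical_and(seg, truth).sum()         # correctly identified glioma voxels
    union = np.logical_or(seg, truth).sum()
    all_voxels = truth.size
    return {
        "Total": int(total),
        "TP": inter / total,
        "FP": np.logical_and(seg, ~truth).sum() / total,
        "FN": np.logical_and(~seg, truth).sum() / total,
        "SI": inter / union,
        "ACC": (inter + all_voxels - union) / all_voxels,
    }
```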

Segmentation results of synthetic images

To validate the accuracy of segmenting gliomas at arbitrary locations, we simulated the glioma in two cases by overlapping two or three spheres of the same size. The size of the images was 128×128×27 voxels. In case 1, the glioma was located entirely in the left side of the brain. In case 2, the glioma was located across the mid-sagittal plane. The Region of Interest (ROI) found by the original 2D bounding box method is shown in Figure 3. The VOI obtained with our proposed 3D bounding box is presented in Figure 4. The original 2D method accurately locates the glioma in case 1 but detects only part of the glioma in case 2. From the cross-sectional (axial), sagittal and coronal views, we can see that the gliomas are accurately located within the VOI. Figure 5 shows the original 2D, improved 2D and 3D segmentation results for different slices. Both the improved 2D and the 3D segmentation results are accurate. Due to the inaccurate localization, the original 2D method segments the glioma only partially, which is unsatisfactory. After all the slices are processed, the 3D surfaces of the segmented gliomas in cases 1 and 2 are reconstructed and shown in Figure 6. The corresponding TP, FP and SI results are presented in Table 1. The running times of the improved 2D and 3D automatic segmentation methods are recorded in Table 2.

Figure 3: Results of original 2D bounding box method. (a) ROI searched in case 1, (b) ROI searched in case 2.

Figure 4: VOI obtained with our proposed 3D bounding box method, shown in three observation planes. (a) Cross-section in case 1, (b) sagittal plane in case 1, (c) coronal plane in case 1, (d) cross-section in case 2, (e) sagittal plane in case 2, (f) coronal plane in case 2.

Figure 5: Segmentation results. (a), (b) Two example slices from case 1; (c), (d) two example slices from case 2. The first row shows the original images; the second to fourth rows show the results of the original 2D, improved 2D and 3D segmentation methods, respectively.

Figure 6: Reconstructed results. (a) Simulated glioma in case 1, (b) reconstructed result of improved 2D method in case 1, (c) reconstructed result of 3D method in case 1, (d) simulated glioma in case 2, (e) reconstructed result of improved 2D segmentation method in case 2, (f) reconstructed result of 3D method in case 2.

Table 1: The metrics for comparing the improved 2D and 3D segmentation methods.

         Method   Total   TP       FP       FN       SI       ACC
Case 1   2D       9265    0.8821   0.0099   0.1789   0.8735   0.9973
Case 1   3D       9265    0.8905   0.0028   0.1095   0.8881   0.9977
Case 2   2D       6790    0.867    0.0015   0.133    0.8657   0.9979
Case 2   3D       6790    0.8802   0.0005   0.1198   0.8798   0.9982

Table 2: The running times of the improved 2D and 3D segmentation methods.

         Method   Bounding box (seconds)   GrowCut (seconds)   Total (seconds)
Case 1   2D       24.397                   34.965              59.362
Case 1   3D       7.198                    46.377              53.575
Case 2   2D       25.338                   31.717              57.055
Case 2   3D       7.32                     43.841              51.161

Segmentation results of clinical images

The 48 real MR datasets were also segmented to test the robustness of the proposed method. In these experiments, the manual segmentation was performed slice by slice by two neurosurgeons from Huashan Hospital of Fudan University, who reached consistent agreement. Two representative cases are shown as examples. The size of the clinical images was 464×542×68 voxels. In case 3, the glioma was located entirely in the left side of the cerebrum, while in case 4 the glioma lay across the mid-sagittal plane. Figure 7 and Figure 8 show the results of locating the gliomas in both cases. As before, the original 2D method could only find the part that differs between the two hemispheres, in contrast to the 3D bounding box method. The original 2D, improved 2D and 3D segmentation results for different slices are shown in Figure 9. Judged against the manual segmentation results, the original 2D method only partially segments the glioma, while the improved 2D and 3D segmentation results are accurate. The 3D surfaces of the segmented gliomas in cases 3 and 4 are reconstructed and shown in Figure 10. Table 3 presents the metrics of the improved 2D and 3D automatic segmentation results. Table 4 presents the segmentation volume comparison for the improved 2D and 3D automatic segmentation methods, where Total volume is the real glioma volume, TP volume is the glioma volume that was correctly identified, and FP volume is the volume that was falsely identified as glioma. The running times of the improved 2D and 3D automatic segmentation methods are recorded in Table 5.

Figure 7: Results of original 2D bounding box method. (a) ROI searched in case 3, (b) ROI searched in case 4.

Figure 8: VOI obtained with our proposed 3D bounding box method, shown in three observation planes. (a) Cross-section in case 3, (b) sagittal plane in case 3, (c) coronal plane in case 3, (d) cross-section in case 4, (e) sagittal plane in case 4, (f) coronal plane in case 4.

Figure 9: Segmentation results. (a), (b) Two example slices from case 3; (c), (d) two example slices from case 4. The first row shows the original images; the second to fourth rows show the results of the original 2D, improved 2D and 3D segmentation methods, respectively.

Figure 10: Reconstructed results. (a) Reconstructed result of manual segmentation in case 3, (b) reconstructed result of 2D method in case 3, (c) reconstructed result of 3D method in case 3, (d) reconstructed result of manual segmentation in case 4, (e) reconstructed result of 2D method in case 4, (f) reconstructed result of 3D method in case 4.

Table 3: The metrics for comparing the improved 2D and 3D segmentation methods.

         Method   Total    TP       FP       FN       SI       ACC
Case 3   2D       77629    0.822    0.0642   0.178    0.7724   0.9989
Case 3   3D       77629    0.8461   0.0298   0.1539   0.8216   0.9991
Case 4   2D       80877    0.8485   0.0927   0.1515   0.7765   0.9988
Case 4   3D       80877    0.8346   0.0237   0.1654   0.8153   0.9991

Table 4: The segmentation volume comparison for the improved 2D and 3D segmentation methods.

         Method   Total volume (mm³)   TP volume (mm³)   FP volume (mm³)
Case 3   2D       50412                41439             3237
Case 3   3D       50412                42654             1502
Case 4   2D       52522                44565             4868
Case 4   3D       52522                43835             1244

Table 5: The running time comparison for the improved 2D and 3D segmentation methods.

         Method   Bounding box (seconds)   GrowCut (seconds)   Total (seconds)
Case 3   2D       593.077                  1762.649            2355.726
Case 3   3D       162.394                  1871.333            2033.724
Case 4   2D       568.622                  1346.64             1915.262
Case 4   3D       173.203                  1402.509            1575.712

Statistical tests

We also measured the effectiveness of the proposed methods with statistical tests, performed using SPSS version 16.0 software (SPSS for Windows, SPSS Inc., Chicago, IL). The manual method was again taken as the gold standard. The mean and Standard Deviation (SD) of TP, FP and SI were calculated and are shown in Table 6. Pearson's correlation coefficient (r) and the P values of two-sided t-tests between the segmentation results of the different methods were also calculated. The correlation coefficient r represents the correlation between the classification result (the labels of all voxels) of the proposed method and that of the manual method. P values of the two-sided t-test were considered statistically significant when they were less than 0.05. Table 7 shows the correlations between the methods.

Table 6: The mean and Standard Deviation (SD) of the metrics for the improved 2D and 3D segmentation methods.

              TP       FP       SI
2D   Mean     0.8458   0.0811   0.7764
     SD       0.0255   0.0302   0.0059
3D   Mean     0.8399   0.0342   0.819
     SD       0.0181   0.0213   0.00495

Table 7: The correlation between the classification results of the manual method and the proposed methods.

                                   r        P value
2D method versus manual method     0.8413   0.013
3D method versus manual method     0.8322   0.018

Table 8: The theoretical complexity of the original 2D and proposed 3D bounding box methods.

Direction         Method   Addition               Multiplication
One direction     2D       2O[MNL2+(p-1)L]        4O(pL)
                  3D       2O[MNL2+(p-1)]         4O(p)
Three direction   2D       2O[2MNL2+(2p-1)L]      8O(pL)
                  3D       3O[2MNL2+(2p-1)]       12O(p)

Discussion and Conclusion

The experiments show that, compared with the original 2D bounding box method, our 3D bounding box method locates the glioma more accurately. In addition, the 3D bounding box overcomes the failure to segment gliomas that cross the mid-sagittal plane. The 3D segmentation and reconstruction results were similar to those of the improved 2D method and the manual method. In Table 7, the two correlation coefficients are not very high; one possible explanation is that tumor segmentation is a challenging task and the manual segmentation itself may contain errors due to the complex structure of brain MR images and the infiltrative growth of glioma.

The metrics show that the accuracy of the proposed method is comparable with that of the improved 2D segmentation method, while the computation time of the 3D automatic method is shorter than that of the 2D automatic method. The complexity is determined by Eqns (1), (2) and (3). The image is assumed to be of size M×N×L and the number of grey levels is p, which we set to 20 in this paper. Table 8 shows the theoretical analysis of the computation time of the bounding box method. Moreover, because the search region is cut down after the bounds along the axial direction are obtained, the actual computation time of the 3D automatic bounding box method is less than indicated in Table 8. In Eqns (9) and (10), the number of comparisons per voxel increases from 8 to 26 when the 2D GrowCut method is extended to 3D, so more time is needed in this part. The theoretical results are consistent with the experimental results. However, since a rough location of the glioma has already been obtained by the bounding box, computation time can be saved by restricting the 3D GrowCut to the VOI instead of the whole image. Over the whole segmentation process, the proposed method not only segments gliomas accurately but also saves computation time.

In this paper we have presented an improved bounding box method that accurately locates the glioma within a VOI, even when the glioma extends across the mid-sagittal plane. The VOI is then fed into the GrowCut method to segment the glioma automatically from 3D MR images. The proposed method overcomes the disadvantage of the semi-automatic GrowCut method, in which labelling seeds in 3D MR images is difficult and time consuming, and it is also faster than the improved 2D segmentation method.

Acknowledgement

This work is supported by the National Basic Research Program of China (2015CB755500), National Natural Science Foundation of China (61471125, 81101049 and 61271071).

References

  1. Yang P, Wang Y, Peng X, You G, Zhang W, Yan W, et al. Management and survival rates in patients with glioma in China (2004 –2010): a retrospective study from a single-institution. J Neuro-Oncol. 2013; 113: 259-266.
  2. Yan H, Parsons DW, Jin G, McLendon R, Rasheed BA, Yuan W, et al. IDH1 and IDH2 mutations in gliomas. N Engl J Med. 2009; 360: 765-773.
  3. Stupp R, Mason WP, van den Bent MJ, Weller M, Fisher B, Taphoorn MJ, et al. Radio-therapy plus concomitant and adjuvant temozolomide for glioblastoma. New Eng J Med. 2005; 352: 987-996.
  4. Keles GE, Anderson B, Berger MS. The effect of extent of resection on time to tumor progression and survival in patients with glioblastoma multiforme of the cerebral hemisphere. Surg Neurol. 1999; 52: 371-379.
  5. Lacroix M, Abi-Said D, Fourney DR, Gokaslan ZL, Shi W, DeMonte F, et al. A multivariate analysis of 416 patients with glioblastoma multiforme: prognosis, extent of resection, and survival. J Neurosurg. 2001; 95: 190-198.
  6. Sanai N, Polley MY, McDermott MW, Parsa AT, Berger MS. An extent of resection threshold for newly diagnosed glioblastomas: clinical article. J Neurosurg. 2011; 115: 3-8.
  7. Kuhnt D, Becker A, Ganslandt O, Bauer M, Buchfelder M, Nimsky C. Correlation of the extent of tumor volume resection and patient survival in surgery of glioblastoma multiforme with high-field intraoperative MRI guidance. Neuro-oncology. 2011; 13: 1339-1348.
  8. Schneider JP, Trantakis C, Rubach M, Schulz T, Dietrich J, Winkler D, et al. Intraoperative MRI to guide the resection of primary supratentorial glioblastoma multiforme-a quantitative radiological analysis. Neuroradiology. 2005; 47: 489-500.
  9. Schucht P, Beck J, Abu-Isa J, Andereggen L, Murek M, Seidel K, et al. Gross total resection rates in contemporary glioblastoma surgery: results of an institutional protocol combining 5-aminolevulinic acid intraoperative fluorescence imaging and brain mappi. Neurosurgery. 2012; 71: 927-935.
  10. Toga AW, Thompson PM, Mega MS, Narr KL, Blanton RE. Probabilistic approaches for atlasing normal and disease-specific brain variability. Anatomy and Embryology. 2001; 204: 267-282.
  11. Taheri S, Ong SH, Chong VFH. Level-set segmentation of brain tumors using a threshold-based speed function. Image Vision Comput. 2010; 28: 26-37.
  12. Wang ZJ, Hu QM, Loe KF, Aziz Aamer, L Wieslaw Nowinski. Rapid and automatic detection of brain tumors in MR images. International Society for Optics and Photonics (Medical Imaging, 2004). 2004; 602-612.
  13. Khotanlou H, Colliot O, Atif J, Bloch I. 3D brain tumor segmentation in MRI using fuzzy classification, symmetry analysis and spatially constrained deformable models. Fuzzy Sets and Systems. 2009; 160: 1457-1473.
  14. Saha BN, Ray N, Greiner R, Murtha A, Zhang H. Quick detection of brain tumors and edemas: a bounding box method using symmetry. Comput Med Imag Graphics. 2012; 36: 95-107.
  15. Iscan Z, Dokur Z, Ölmez T. Tumor detection by using Zernike moments on segmented magnetic resonance brain images. Expert Systems with Applications. 2010; 37: 2540-2549.
  16. Wang Y, Cao J, Liu L, Lin Z. An automatic tumor segmentation system of brain tumor from MRI based on a novel energy function. Journal of Convergence Information Technology. 2011; 6: 59-67.
  17. Khotanlou H, Colliot O, Bloch I. Automatic brain tumor segmentation using symmetry analysis and deformable models. Processing of the international conference on Advances (Pattern Recognition ICAR, 2007). 2007; 198-202.
  18. Hsieh TM, Liu YM, Liao CC, Xiao F, Chiang IJ. Automatic segmentation of meningioma from non-contrasted brain MRI integrating fuzzy clustering and region growing. BMC Med Informat Decision Making. 2011; 11: 54.
  19. Behzadfar N, Soltanian-Zadeh H. Automatic segmentation of brain tumors in magnetic resonance images. Processing of the international conference (Biomedical and Health Informatics, 2012). 2012; 329-332.
  20. Ratan R, Sharma S, Sharma SK. Brain tumor detection based on multi-parameter MRI image analysis. Int J Graphics Vision Image Process. 2009; 9: 9-11.
  21. Kaus MR, Warfield SK, Nabavi A, Chatzidakis E, PM Black, FA Jolesz, et al. Segmentation of menigiomas and low grade gliomas in MRI. MICCAI, Cambridge, UK, Lecture Notes (Computer Science, Springer, Berlin, 1999). 1999; 1-10.
  22. Kaus MR, Warfield SK, Nabavi A, Black PM, Jolesz FA, Kikinis R. Automated segmentation of MR images of brain tumors. Radiology. 2001; 218: 586-591.
  23. Ruan S, Zhang N, Liao Q, Zhu Y. Image fusion for following-up brain tumor evolution. IEEE International Symposium on Biomedical Imaging: From Nano to Macro. 2011; 1: 281-284.
  24. Bauer S, Nolte LP, Reyes M. Fully automatic segmentation of brain tumor images using support vector machine classification in combination with hierarchical conditional random field regularization. Medical Image Computing and Computer-Assisted Intervention. 2011; 14: 354-361.
  25. Gordillo N, Montseny E, Sobrevilla P. State of the art survey on MRI brain tumor segmentation. Magnetic Resonance Imaging. 2013; 31: 1426-1438.
  26. Nie J, Xue Z, Liu T, Young GS, Setayesh K, Guo L, et al. Automated brain tumor segmentation using spatial accuracy-weighted hidden Markov Random Field. Computerized Medical Imaging and Graphics. 2009; 33: 431-441.
  27. Zhang Y, Brady M, Smith S. Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm. IEEE Transactions on Medical Imaging. 2001; 20: 45-57.
  28. Doyle S, Vasseur F, Dojat M, Forbes F. Fully automatic brain tumor segmentation from multiple MR sequences using hidden Markov fields and variational EM. Proceedings of NCI-MICCAI BRATS. 2013; 1: 18-22.
  29. Fletcher-Heath LM, Hall LO, Goldgof DB, Murtagh FR. Automatic segmentation of non-enhancing brain tumors in magnetic resonance images. Artificial intelligence in medicine. 2001; 21: 43-63.
  30. Kanas VG, Zacharaki EI, Davatzikos C, Sgarbas KN, Megalooikonomou V. A low cost approach for brain tumor segmentation based on intensity modeling and 3D Random Walker. Biomedical Signal Processing and Control. 2015; 22: 19-30.
  31. Sindhumol S, Kumar A, Balakrishnan K. Spectral clustering independent component analysis for tissue classification from brain MRI. Biomedical Signal Processing and Control. 2013; 8: 667-674.
  32. Iftekharuddin M Khan. Texture models for brain tumor segmentation. Quantitative Medical Imaging (Optical Society of America, 2013). 2013.
  33. Juan-Albarracín J, Fuster-Garcia E, Manjón JV, Robles M, Aparici F, Martí-Bonmatí L, et al. Automated glioblastoma segmentation based on a multiparametric structured unsupervised classification. 2015; 10.
  34. Liu X, Chen F. Automatic segmentation of 3-D brain MR Images by using global tissue spatial structure Information. IEEE Transactions on Applied Superconductivity. 2014; 24: 1-5.
  35. Menze B, Reyes M, Leemput KV. The multimodal brain tumor Image segmentation benchmark (BRATS). IEEE Transactions on Medical Imaging. 2014; 99: 1.
  36. Kiryati N, Gofman Y. Detecting symmetry in grey level images: The global optimization approach. Int J Comput Vis. 1998; 29: 29-45.
  37. Vezhnevets V, Konouchine V. Grow cut-interactive multi-label N-D image segmentation by cellular automata. Proc Graphicon. 2005; 150-156.
  38. Hernandez G, Herrmann HJ. Cellular automata for elementary image enhancement. Graphical Models and Image Processing. 1996; 58: 82-89.
  39. Comaniciu D, Ramesh V, Meer P. Real-time tracking of non-rigid objects using mean shift. Processing of IEEE conference on computer vision and pattern recognition. 2000; 142-149.
  40. Otsu N. A threshold selection method from gray level histogram. Automatica. 1975; 11: 23-27.

Citation: Wu G, Ji C, Yu J, Wang Y, Chen L, Shi Z, et al. Automatic Segmentation of Glioma from 3D MR Images by Using Location Free Asymmetry Detection. SM J Biomed Eng. 2017; 3(1): 1012.
