Introduction

Brain tumor segmentation identifies brain tissues and tumor types from scanned images. It is a complicated task that requires proper data for the segmentation process1. Positron emission tomography (PET) is used to scan the brain tissue activity of patients. PET scan images are used in clinical diagnosis to identify the impact and grade of brain tumors, and they reduce the latency and complexity of the brain tumor segmentation process2,3. Segmentation methods address region detection and classification problems by differentiating multiple indistinct extracted features. Precise region classification, segment extraction, and inflated feature detection remain problematic due to noise and errors in the acquired images4. An efficient wavelet-based image fusion technique is used to address these issues. The wavelet-based technique detects the size and texture of infected brain tissues by reducing errors and disclosing the actual regions for validation. The image fusion technique extracts the relevant patterns and details from PET images to classify the noise-prone regions5.
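As a concrete illustration of the fusion step, the sketch below performs a standard discrete-wavelet-transform fusion of two co-registered slices with a max-rule on detail coefficients; this is a generic rendering of the wavelet-fusion idea referenced above, not the exact technique of5, and the array names are hypothetical.

```python
# A minimal sketch of wavelet-based image fusion, assuming two co-registered
# 2D slices as NumPy arrays; the max-rule on detail coefficients is one common
# choice, not necessarily the rule used in the cited work.
import numpy as np
import pywt

def wavelet_fuse(img_a: np.ndarray, img_b: np.ndarray, wavelet: str = "haar") -> np.ndarray:
    # Decompose both inputs into approximation (cA) and detail (cH, cV, cD) sub-bands.
    cA_a, (cH_a, cV_a, cD_a) = pywt.dwt2(img_a, wavelet)
    cA_b, (cH_b, cV_b, cD_b) = pywt.dwt2(img_b, wavelet)
    # Average the low-frequency content; keep the stronger detail coefficient so
    # texture and boundary cues from either input survive the fusion.
    details = tuple(
        np.where(np.abs(x) >= np.abs(y), x, y)
        for x, y in ((cH_a, cH_b), (cV_a, cV_b), (cD_a, cD_b))
    )
    return pywt.idwt2(((cA_a + cA_b) / 2, details), wavelet)
```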

Feature-based region segmentation methods are used for brain tumors. The actual region of interest (ROI) is detected using features, which provide optimal brain tumor information for the segmentation and detection processes6. Combined features are used for region segmentation in brain tumors, and a hybrid method based on texture features is used for segmentation. The exact ROI and features of brain tissues are detected using PET and magnetic resonance imaging (MRI)7. Texture features provide the structure, size, type, and condition of the tumors. Extracting such features increases the complexity and the feature-based repetitions needed to improve precision; therefore, a limited iteration count or learning rate is less feasible for this process, and hybrid methods with high feature fusion and pattern classification with free-hand iteration are required8. An ROI-aided deep learning technique is also used for tumor segmentation. The ROI-aided technique is an automatic brain tumor segmentation approach that reduces the time consumed in computation9. It localizes the structure and texture of tumors based on MRI images acquired from different datasets. A raw dataset provides discrete information wherein false positives are undetectable. The techniques used must focus on reducing the error level due to imbalanced/discrete information available across the various segmented regions10.
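For reference, the snippet below sketches the kind of texture descriptors such hybrid methods compute over a candidate ROI, using grey-level co-occurrence statistics; `roi` is a hypothetical uint8 patch, and the descriptor set is an assumption rather than the exact feature list of7,8.

```python
# A hedged sketch of ROI texture-feature extraction via a grey-level
# co-occurrence matrix (GLCM); assumes `roi` is a 2D uint8 patch.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def roi_texture_features(roi: np.ndarray) -> dict:
    glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    # Contrast/homogeneity-style descriptors summarize tumor vs. healthy texture.
    return {prop: float(graycoprops(glcm, prop).mean())
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
```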

Deep learning (DL) methods and algorithms are commonly used for detection and prediction. The DL method is mainly used to improve the overall accuracy of the detection process11. A 3D U-Net deep neural network-based classification method is used for brain tumors; a convolutional neural network (CNN) algorithm is used in the method to detect tumor types12. The U-Net-based technique analyses the structure and condition of tumors from the given MRI images. The main aim is to classify the exact tumor types, which provides important information for the diagnosis process13. The U-Net-based technique improves the feasibility of the diagnosis process. The MRI images provide clinical data for decision-making, which reduces the latency of the tumor classification process14. An adaptive neuro-fuzzy inference system (Adaptive-ANFIS) classifier-based method is used for tumor classification. The adaptive ANFIS classifies the types and classes of tumors using MRI images, and the ANFIS classifier identifies the exact segmentation of the tumor from the MRI images. The ANFIS-based method increases the accuracy range of the brain tumor segmentation process15. This article proposes a novel segmentation model using improved classifier learning to mitigate the aforementioned issues in brain tumor detection. This segmentation model classifies features based on textural differences to identify maximum differences and coexisting features. Based on this classification, the discrete pixel-related features are identified to improve region segmentation. The difference parameter is used to train the classifier learning to ensure further classifications across the various features extracted. The contributions are summarized as follows:

  1. Introducing a novel finite segmentation model using a modified classifier for brain tumor detection from PET images.

  2. Modifying the conventional classifier process for discrete segmentation differentiation using feature existence and unanimous matching over different regions.

  3. Analyzing the performance of the proposed segmentation model using the metrics precision, classifications, error rate, classification time, and segmentation accuracy.

  4. Comparing the performance of the proposed model with the existing NRAN27, DSSE-V-Net17, and DenseUNet+23 using the above metrics.

Related works

The related works section briefly describes different methods proposed by various authors, discussing the pros and cons of the previous proposals along with their methodologies. The end of the section presents the research gap between the surveyed methods and emphasizes the need for the proposed method.

Zhuang et al.16 developed an aligned cross-modality interaction network (ACMINet) for brain tumor segmentation. Magnetic resonance images (MRI) are used in the method which provides relevant data for tissue segmentation. Volumetric feature alignment is used here which provides high-level features and patterns for further processes. It is mostly used as a 3D network which reduces the complexity of segmentation. The developed ACMINet increases the effectiveness of the tumor diagnosis process.

Liu et al.17 proposed an encoder-decoder neural network for brain tumor segmentation. A deep supervised 3D squeezer and excitation V-net (DSSE-V-Net) is implemented in the method to classify the tumors. The encoder and decoder are mainly used here to segment the tumor based on the given datasets. The V-Net is used here to identify the exact types of brain tumors. The proposed network improves the performance range in the tumor segmentation process.

Ilyas et al.18 introduced a hybrid weight alignment with a multi-dilated attention network (Hybrid-DANet) for brain tumor segmentation. The hybrid DANet used an automatic segmentation method which minimizes the complexity of the detection process. It investigates the segments that are produced by the images. The hybrid DANet reduces the optimization problems during segmentation. The introduced method increases the accuracy level in the segmentation process.

Yan et al.19 designed a squeeze and excitation network-based U-Net (SEResU-Net) model for brain tumor segmentation. It is mostly used for small-scale tumors which require a proper segmentation process. MRI is used in the model which provides significant information for the detection and diagnosis process. U-Net segments the exact types of tumors that reduce the latency in the diagnosis process. The designed model increases the performance range of tumor segmentation.

Metlek et al.20 developed a new convolution-based hybrid model for brain tumor segmentation. The main aim of the model is to identify the exact type and structure of the tumor from the given MRI images. Region of interest (ROI) is detected from the image that provides feasible data for the further detection process. The developed model reduces the energy consumption in the computation process. When compared with other models, the developed model improves the accuracy of the segmentation process.

Zhou et al.21 proposed an attention-aware fusion-guided multi-modal model for brain tumor segmentation. A 3D U-Net is implemented in the model, which extracts the important features for segmentation. The segmentation features are evaluated and produce optimal data for the diagnosis process. The proposed model increases the feasibility and reliability range of the systems and reduces the overall complexity level of segmentation.

Gao et al.22 introduced a deep mutual learning with a fusion network for brain tumor segmentation. The actual goal of the method is to identify the regions and sub-regions of tumors from given magnetic resonance (MR) images. The MR images reduce the time consumption level in the segmentation process. The introduced method increases the performance range of the network.

Çetiner et al.23 developed a new hybrid segmentation approach using multi-modality images for brain tumor segmentation. MRI images are used here to perform tumor segmentation which improves the efficiency range of the systems. A U-Net architecture is used in the approach which identifies the dense blocks from the MRI images. The dense block contains the necessary information for tumor segmentation. The developed approach maximizes the accuracy level of the segmentation process.

Chen et al.24 designed a multi-threading dilated convolutional network (MTDCNet) for brain tumor segmentation. A pyramid matrix fusion (PMF) algorithm is used in the network to identify the important characteristics for segmentation. The PMF algorithm also detects the structural and semantic information to recognize the exact types of tumors. Experimental results show that the designed MTDCNet improves the performance range of automatic segmentation systems.

Bidkar et al.25 proposed a salp water optimization-based deep belief network (SWO-based DBN) approach for brain tumor classification. It identifies the important patterns and features from the MRI images. The identified features produce relevant information for the tumor classification process. SWO is mainly used here to reduce the error ratio in tumor classification. The proposed network increases the accuracy of brain tumor classification.

Sindhiya et al.26 introduced a hybrid deep learning (DL) based approach for brain tumor classification. An adaptive kernel fuzzy c means clustering technique is used in the approach which selects the necessary features. The c-means clustering technique enhances the energy-efficiency range of the systems. The introduced approach maximizes the performance and feasibility range of tumor classification systems.

Sun et al.27 proposed a semantic segmentation using a residual attention network for tumor segmentation. An improved residual attention block (RAB) is used here to segment the blocks for the segmentation process. The RAB utilizes the necessary features that reduce the error in tumor prediction and detection processes. The proposed method enhances the overall accuracy level of the segmentation process.

AboElenein et al.28 developed an inception residual dense nested U-net (IRDNU-Net) for brain tumor segmentation. The main aim of the model is to increase the width of the structure and to reduce the computational complexity level. The IRDN extracts the important information for segmentation which minimizes the latency in the computation process. The developed model improves the reliability and robustness range of the tumor segmentation process.

Shaukat et al.29 introduced a 3D U-Net architecture for brain tumor segmentation. A deep convolutional neural network (DCNN) algorithm is implemented in the method to train the datasets. The DCNN produces optimal data which reduces the complexity of the computation process. It is used as a path extraction scheme that segments the sub-regions of the images. The introduced architecture improves the accuracy range of tumor segmentation.

Kumar et al.30 proposed a convolutional neural network (CNN) based brain tumor segmentation and classification method. MRI images, which provide high-quality inputs for the classification process, are used here. The method differentiates the exact tumor types, producing the information necessary for the further diagnosis process. The proposed method increases the performance and feasibility level of the tumor classification and segmentation processes.

Vimala et al.31 projected a CNN-based brain tumor differentiation method to identify the survival probability of patients. The authors used MRI inputs processed by median filtering and growth distribution depth to achieve 97% tumor classification. A weighted tumor support factor is used in this method to perform optimal classification of the features extracted from the input images.

Amsaveni et al.32 proposed a novel medical image watermarking concept for data embedding to ensure the security of critical clinical inputs. This method used the pixel pairing concept to embed data with fair mapping through a maximum similarity index. The similarity index-based validations are performed using random transform computations. This method achieved 96% irreversibility of the embedded data in medical images.

Satheesh Kumar, Jeevitha, and Manikandan33 employed artificial neural networks for COVID-19 diagnosis in cardiovascular systems. This network is used to categorize the type of cardiovascular disease exposed by the virus infection, accounting for sensor-based observations from different intervals. The method uses spectral features and the Lyapunov exponent to classify cardiovascular disease caused by COVID-19.

Positron emission tomography (PET) is an imaging technique utilized in clinical settings for brain tumor detection by visualizing tissue metabolic activity. Compared to MRI-based diagnosis, PET is challenging due to its limited spatial pixel representation. This confines the image size, pixel distribution, and the variants available for feature extraction. Moreover, illuminating PET images manually is challenging, and noise interference due to the illuminating characteristics of PET is high, wherein features and resolution are influenced by the noise. The challenges in the above methods are range-based as in17,28, multi-feature dependent as in16,24, and sub-region classifier-based as in19,20,22. Such processes result in multiple discreteness between the identified region/feature and its consecutive representation. These problems fail to improve the accuracy of the input based on pixel correlation or labeled input training. This research work aims to handle the problem of discrete segmentation using the Finite Segmentation Model (FSM) with Improved Classifier Learning (ICL). Different from the above-discussed methods, problems such as visual classification of regions, the feature range of the infected pixels, and sub-region detection are addressed in this proposed model. The segmentation is based on textural differences regardless of the feature-extractable regions. Considering the discreteness between each pixel distribution, classification based on differences rather than homogeneity is performed. Therefore, the visual classification and region-excluded feature differentiation problems are mitigated using this classifier learning.

Methodology

The methodology section presents the proposed model with a detailed description, illustrative figures, and mathematical models. The different parts of the proposed model are explained in the subsections below.

Finite segmentation model (FSM) using improved classifier learning (ICL)

The FSM supports both conventional and improved classifier functions to efficiently segment the regions of interest within the images that correspond to tumors. In the existing segmentation process, the algorithm determines the different regions depending on textural differences in the images. This involves correlating the labeled inputs to identify the distinct segments. In contrast, the modified classifier process takes a more developed approach: it uses defined characteristics that deliver both discrete and continuous region detection. These features are distinguished by their presence and the maximum difference between tumor and non-tumor regions. Figure 1 portrays the proposed segmentation process.

Fig. 1. Proposed Segmentation Process Illustration.

In the above representation, PET images are used for detecting tumors rather than MRI or CT inputs. PET images are reliable in detecting cell/tissue-level changes due to tumor cells. Such changes are reflected on the image surface across disease spread and classification. Irrespective of the overview of the regions, the change in tissue/cell-level features is observed from PET. This serves as the smallest level of region detection between the pixel distribution sequences. Classifier learning is used for differentiating feature existence with minimum or maximum difference under continuous iterations. The classification is initiated based on the textural differences and matching features that are extracted. Thus, the conventional image processing steps are extended with a classification process for discreteness detection. The training of the conventional and modified classifiers revolves around achieving the highest feasible precision in segmenting the tumor regions. The acquired characteristics aid the training procedures; these features help the approach understand the modulations of these regions, allowing for accurate classification. The accuracy of the classification is identified by its low false positives and precise region segmentation. In particular, unnoticed changes in textural differences impact the feature extraction process. This is specific to retaining the segmentation precision of the proposed model. Following the representation in Fig. 1, the flow graph of the proposed process is illustrated in Fig. 2.

Fig. 2. Flow Graph of the Proposed FSM-ICL.

The flow graph of the proposed FSM-ICL is depicted in Fig. 2. The extracted features are validated for their textural differences. Using the computed textural differences, the region with the current highest difference is classified. If the difference is the maximum among all the features extracted, that region is segmented and used for training. The remaining region features are used for pattern matching and similarity estimation. If the difference of the current extraction is low, the pixel existence is verified to augment the new feature extraction; this routing is sketched in the code below. The uniqueness of the approach lies in its adaptability. The modified finite process is merged into the conventional classifier operation, but only if the segmentation of tumor regions is highly precise. This procedure optimizes the segmentation process, specifically when handling both discrete and continuous PET image segments. This paper presents a Finite Segmentation Model combined with Improved Classifier Learning to address the challenges of discrete segmentation in PET image-based intelligent analysis.
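In the sketch, the thresholds, the (mean, std) texture summary, and the returned action labels are our illustrative assumptions rather than the authors' implementation.

```python
# A schematic, runnable rendering of the Fig. 2 flow: high textural difference
# leads to segmentation and training, moderate difference to pattern matching,
# and low difference to pixel verification and renewed feature extraction.
import numpy as np

def texture_difference(region: np.ndarray, ref_stats: tuple) -> float:
    # Stand-in textural-difference measure: distance between the region's
    # (mean, std) summary and a previously extracted feature's summary.
    mu, sd = ref_stats
    return abs(float(region.mean()) - mu) + abs(float(region.std()) - sd)

def route_region(region: np.ndarray, feature_bank: list, hi: float, lo: float) -> str:
    d_max = max((texture_difference(region, f) for f in feature_bank), default=0.0)
    if d_max >= hi:
        return "segment-and-train"      # maximum difference: segment, feed the classifier
    if d_max >= lo:
        return "pattern-match"          # moderate difference: similarity estimation path
    if region.any():                    # low difference: verify pixel existence,
        feature_bank.append((float(region.mean()), float(region.std())))
        return "re-extract"             # then augment the feature extraction
    return "discard"
```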

The PET images are taken as the input for the feature extraction operation. These images capture the metabolic activities of the tissues and are utilized as the initial point for observing brain tumors. Feature extraction involves determining and capturing the consequential features from these images. Using established approaches, particular frameworks, textures, and differences within the PET images are determined. The acquired features are separated and transferred into distinct features, and these characteristics are necessary for training the segmentation model. They establish the information required to distinguish between the tumor and non-tumor regions, enabling the method to make precise classifications. The feature extraction procedure acts as a bridge between the raw PET images and the subsequent stages of observation, assisting the approach's ability to understand and differentiate the difficult visual information. The process of extracting the features from the given PET input images is explained using Eq. (1)10,16.

$$\left.\begin{array}{c}
\beta_{0}=G_{0}(t,\beta(t))\\
\frac{\partial\beta}{\partial t}(t)\,G_{0}=\beta_{(0)}*\beta(t)\\
\sum_{t,0}(G_{0},t)=n(\beta(0))\\
\beta(t)(G_{0},t)=\beta_{(0)}*\beta(t)\\
\beta_{(0)}+(G_{0},t)=\int_{0}^{t}G_{n}(t,\beta(t))\,dt\\
\text{Similarly,}\\
G_{n}(t,\beta(t))=\beta_{0}(t)+\frac{\partial\beta}{\partial t}\\
\frac{\partial\beta}{\partial t}(t,\beta_{0})=(\beta_{0}+G_{0})*G_{n}(t,\beta(t))\,dt\\
\text{Therefore,}\\
\frac{d}{dt}\begin{pmatrix}G(t)\\ \beta(t)\\ n(t)\end{pmatrix}=\begin{pmatrix}-G(\beta)(t)\\ G(t)\beta(t)-n(t)\\ n\beta(t)\end{pmatrix}
\end{array}\right\}$$
(1)

where \(\beta\) denotes the feature extraction operation, \(G\) represents the given PET images, and \(t\) denotes the subsequent stages of analysis. Next, the classification process is performed in the segmentation procedure. This step involves classifying the different regions within the images to differentiate the tumor and non-tumor regions. This classification operation uses a set of predefined measures and learned frameworks to decide these distinctions. The FSM with the ICL introduces two different methods for determining the textural difference and the matching features. The process of classifying the tumor and non-tumor regions is explained using Eq. (2).

$$\left.\begin{array}{c}
\beta_{j+1}=\beta_{j}+G_{\sigma}(j,\beta_{j}),\\
\frac{dy}{dt}(t)=G_{\sigma}(t,\beta(t))\\
\frac{\beta(t_{j+1})-\beta(t_{j})}{\Delta t}\approx\frac{d\beta}{dt}(t_{j})=G_{\sigma}(t_{j},\beta(t_{j})),\\
\beta(t_{j+1})=\beta(t_{j})+t\,G_{\sigma}(t_{j},\beta(t_{j}))\\
\sum_{\sigma}\frac{1}{N}\sum_{i=1}^{N}\left\|\frac{d\beta_{\sigma}}{dt}(t_{i})-G(t_{i},\beta_{\sigma}(t_{i}))\right\|\\
\beta(0)=\beta_{0}\\
\frac{d\gamma}{dt}(t)=\gamma_{\sigma}(t,\beta(t))\\
\gamma(0)=\gamma_{0}\\
\frac{d\gamma}{dt}(t)=G(t,\beta(t))
\end{array}\right\}$$
(2)

where \(\sigma\) represents the regions in the acquired PET input images, \(\gamma\) denotes the distinctions, \(j\) denotes the variance occurring in the operation, and \(i\) represents the necessary frameworks. The classification process undergoes a training operation to attain its maximum accuracy. This helps in utilizing the features for further differentiation procedures with the help of a convolutional neural network. The CNN, as the deep learning component, supports this process and interprets visual data such as PET images.
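To make the CNN's role concrete, the following is a minimal tumor/non-tumor patch classifier in PyTorch; the layer sizes and the 32 x 32 patch assumption are illustrative, since the paper does not specify an architecture.

```python
# A minimal CNN patch classifier sketch (PyTorch); assumes single-channel
# 32x32 PET patches and two output classes (tumor / non-tumor).
import torch
import torch.nn as nn

class PatchClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(                  # convolution + pooling capture texture
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(                      # map pooled features to 2 classes
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, 2),                   # 32x32 input -> 8x8 after two pools
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x))

# logits = PatchClassifier()(torch.randn(4, 1, 32, 32))  # a batch of PET patches
```

The conventional classification process is illustrated in Fig. 3.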

Fig. 3. Conventional Classification Process Illustrations.

The extracted \(\beta\) is used for classifying the regions of the input image for \(j\) and \(\gamma\). In this process, the difference of \(j\) and the similarity of \(i\) are used under conventional segregation and the complicated subsequent analysis. Based on \(\gamma\), the \(V\) variation and \(Z\) analysis are performed. This is utilized for a new \(\beta\) such that \(\alpha\) is obtained for difference analysis (Fig. 3). The classifiers analyze the extracted characteristics from the PET images, recognizing complicated frameworks and structures that represent the tumor regions. This allows the recognition of difficult relationships, which enables the accurate classification of these regions as either tumor or non-tumor. The integration of the CNN improves the segmentation procedure by supporting its capability in the classification process. The process of utilizing the CNN in the classification process is explained using Eq. (3).

$$\left.\begin{array}{c}
-\frac{1}{N}\sum_{i=1}^{N}V_{i}\bullet V_{\sigma}(\beta_{i})\\
\frac{d\alpha}{dt}(t)=X\alpha(t)-\gamma X(t)\beta(t)\\
\frac{d\beta}{dt}(t)=-Z(t)+dX(t)\beta(t)\\
\frac{1}{NL}\sum_{i=1}^{N}\sum_{j=1}^{L}\left(\alpha_{x}(0),\beta_{x}(0)(t_{j})-x_{i}(t_{j})\right)^{2}+\left(\beta_{x}(0),\alpha_{x}(0)(t_{j})\right)^{2}
\end{array}\right\}$$
(3)

where \(V\) denotes the textures of the acquired images, \(X\) represents the complicated relationships, \(Z\) represents the structures in the images, and \(\alpha\) denotes the integration operation. Next, the textural difference from the classification is determined in the segmentation process with the help of the CNN. As the CNN analyzes the extracted characteristics from the PET images, it effectively determines and classifies the differences in the textures and patterns manifesting in different tissue regions. Through its convolutional layers, the CNN helps in capturing the complicated visual cues that are difficult to detect with conventional methods. These learned characteristics enable the CNN to distinguish between the tumor and non-tumor regions based on their distinct textural signatures. This is computed using Eq. (4).

$$\left.\begin{array}{c}
\frac{dB}{dt}(t)=\frac{dF}{d\alpha}(\alpha(t),\beta(t))\\
\frac{dB}{dt}(t)=-\frac{dF}{d\alpha}(\alpha(t),X(t))\\
F_{\sigma}(B,L)=\frac{1}{2}\alpha^{T}N_{\sigma}^{-T}(\alpha)F+X(\alpha)\\
\frac{dB}{dt}(t)=\frac{dF}{d\alpha}(B(t),\beta(t))\\
\frac{dF}{dt}(t)=-\frac{dB}{dt}\left(B(t),F(t)+G_{\sigma}(N)\beta(N)\right)\\
P=\left(\frac{\partial^{2}\beta_{\sigma}}{\partial^{2}P}\right)^{-1}\left(\frac{\partial\beta_{\sigma}}{\partial P}-\frac{\partial^{2}\beta_{\sigma}}{\partial P\,\partial\beta}\right)
\end{array}\right\}$$
(4)
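One concrete way to quantify the textural difference \(B\) between a candidate region and its surroundings is a local-variance texture map, sketched below; local standard deviation is our stand-in for the abstract measure in Eq. (4) above, and the function names are hypothetical.

```python
# A sketch of a textural-difference score using local standard deviation as the
# texture proxy; `img` is a 2D float array and `tumor_mask` a boolean mask.
import numpy as np
from scipy.ndimage import uniform_filter

def local_texture(img: np.ndarray, size: int = 5) -> np.ndarray:
    mean = uniform_filter(img, size)
    sq_mean = uniform_filter(img * img, size)
    return np.sqrt(np.maximum(sq_mean - mean * mean, 0.0))  # local std map

def texture_difference_score(img: np.ndarray, tumor_mask: np.ndarray) -> float:
    tex = local_texture(img.astype(float))
    # Difference between texture inside and outside the candidate region.
    return abs(float(tex[tumor_mask].mean()) - float(tex[~tumor_mask].mean()))
```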

By operating on the image data through the various convolutional and pooling layers, the CNN becomes adept at analyzing the variations, edges, and structures that contribute to the textural variations. The CNN's ability to efficaciously interpret these textural differences significantly improves the segmentation operation by allowing the accurate determination of tumor regions within the PET images. The process of determining the textural difference in the segmentation process by the CNN is explained by Eq. (4) given above, where \(B\) denotes the textural difference, \(F\) represents the visual understanding operation, and \(P\) represents the tumor tissues of the images. Next, the matching features are determined in the segmentation process using the CNN technique. The features extracted from the PET images are frameworks and aspects that carry importance in differentiating between tumor and non-tumor regions. The CNN engages its layers to evaluate the complicated regions within these characteristics, allowing it to determine related patterns between the extracted features and the distributed measures for tumor detection. The CNN refines its capability to match these extracted features to known tumor characteristics. This process ensures that the CNN becomes efficient in determining even subtle matching features that indicate the presence of a tumor, and it helps in acquiring the matching features from the input PET images. The process of acquiring the matching features in this segmentation process with the help of the CNN is explained using Eq. (5).

$$\left.\begin{array}{c}
\beta(0)\sim(0,J_{i*j})\\
\frac{d\beta}{dt}(t)=G_{\sigma}(t,\beta(t))\quad\text{for }t\in[0,\beta]\\
G_{\sigma}:[0,\beta]\times J^{i}\to J\\
\frac{dL}{dt}(t)=-\sum_{N=1}^{N}\frac{\partial G_{\sigma}L}{\partial\beta_{L}}(t,\beta(t))\\
\beta(P,\alpha)=\alpha\\
\frac{\partial\beta}{\partial t}(t,\alpha)=G_{\sigma}(t,\beta(t,\alpha))\quad\text{for }t\in[0,L]
\end{array}\right\}$$
(5)

Where \(\:L\) is denoted as the matching features. Now from the textural differences, the classification process is performed. This is the modified classification operation that is performed based on the textural difference. In this classification process, the distinctive features are extracted from the PET images, capturing an understanding of textures that differentiate between the tumor and non-tumor regions. These features are then analyzed using the CNN which aids in recognizing the complex patterns and textures. The process of modified classification is explained using Eq. (5).

$$\left.\begin{array}{c}
\sum_{L=1}^{U}\frac{\partial G_{\sigma},L}{\partial\beta_{L}}(t,\beta)=U\left(\frac{\partial G_{\sigma}}{\partial\beta}(t,\beta)\right)\\
=V_{\sigma}\left(\sigma^{t}\left(\frac{\partial G_{\sigma}}{\partial\beta}(t,\beta)\right)\sigma\right)\\
-\frac{1}{N}\sum_{i=1}^{N}V_{\sigma}(t,\beta_{i})=-\frac{1}{N}\sum_{i=1}^{N}\left[\beta(0,\beta(0,\beta_{i}))\right]\\
=V\left[\int_{0}^{t}\epsilon^{T}\frac{\partial G_{\sigma}}{\partial\beta}(t,\beta(t,\beta_{i}))\,\sigma\,dt\right]
\end{array}\right\}$$
(5)

where \(U\) denotes the modified classification, which is performed based on the acquired textural difference. Next, the existing features are determined from the modified classification. These features denote the unique characteristics extracted from the PET images that carry important information about the tissue regions. The modified classification process is illustrated in Fig. 4.

Fig. 4. Modified Classification Process.

The similarity rate for the identified \(\sigma\) is used for validating the difference and \(Z\). Across the varying segments, two factors are extracted, namely \(L\) and \(\alpha\). Based on \(X\in t\) and \(B\), the visual operations on \(P\) are performed. Compared to the previous process, the modification in classification is performed for \(\frac{d\beta}{dt}(t)\) until \(\frac{\partial\beta}{\partial t}(t,\alpha)=G_{\sigma}(t,\beta(t,\alpha))\). Thus, the training for \(V\) (repeated) is performed under the \(P\) or \(L\) feature for identifying tumors (refer to Fig. 4). Through the CNN, the modified classification process extracts and focuses on these characteristics, capturing complex patterns. The classification algorithm is given in Algorithm 1.

Algorithm 1

Classification Process.

Input: The feature classification is estimated across the segments.

Output: Identify the tumor.

Functions:

1. if \(\sigma = Z\), then // the detected variance is the same as the differences in the extracted regions

2. if \(\beta(0) > P\) or \(P < Z\), then

3. Compute the current differences.

4. else

5. Compute the current similarity rate.

6. end if

7. else

8. if \(\alpha < \beta(0)\), then // the extracted features are higher than the difference features

9. Compute the current feature's segment-matching rate.

10. else

11. for \(V + (P*L) \approx \beta\), do

12. \(\beta = (t,\alpha)*G_{\sigma}\)

13. end if

14. else

15. if \(G_{\sigma}(\alpha_{i}) = Z\), then // the similarity rate of the extracted regions is the same as the non-variance regions

16. Declare the available features.

17. else

18. Initialize the training process.

19. end if

20. end if
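Read as control flow, Algorithm 1 reduces to the branching below; the scalar arguments stand in for the paper's symbols and the returned strings name the actions, so this is a hedged transcription rather than executable pipeline code.

```python
# A literal transcription of Algorithm 1's branching; the for-loop of
# steps 11-12 is collapsed into the "compute segment-matching rate" path.
def classification_step(sigma, Z, beta0, P, alpha, G_sigma_alpha):
    if sigma == Z:                     # variance equals the extracted-region differences
        if beta0 > P or P < Z:
            return "compute current differences"
        return "compute current similarity rate"
    if alpha < beta0:                  # extracted features exceed the difference features
        return "compute segment-matching rate"
    if G_sigma_alpha == Z:             # similarity matches the non-variance regions
        return "declare available features"
    return "initialize training"
```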

The existing features encode abundant data, which enables the algorithm to handle the regions accurately. The process of detecting the existing features from the modified classification is explained using Eq. (6)15.

$$\left.\begin{array}{c}
\rho_{\sigma}\beta_{\sigma}(\alpha)=\prod_{j=0}^{N}\rho_{\sigma},\beta(t_{j},\beta_{0})(\alpha_{j})\\
\beta(t_{j},\beta_{0})*N=\sum_{P(t)}\left[Q_{\sigma},\alpha_{i}\,\|\,N(0,L_{\alpha,\beta})\right]\\
\frac{1}{N}\sum_{i=1}^{N}\left[L_{\beta\sim}\left[-\sum_{i,j}\rho_{\sigma},\beta_{\sigma}(\alpha_{i})\right]\right]=\sum_{\alpha,\beta}(Pt)\,\beta_{0}\\
\left[Q_{\sigma},\alpha_{i}\,\|\,N(0,L_{\alpha,\beta})\right]+\rho_{\sigma}=\beta_{\sigma}(\alpha_{i})\\
\beta_{\sigma}(\alpha_{i})+N=\prod_{\rho_{\sigma}}L_{\beta\sim}*Q_{\sigma}(Pt)+\beta\\
\beta_{\sigma}(N+\alpha)=\left[Pt*L_{\alpha,\beta},Q_{\sigma}\right]*P(t)\\
Pt*L_{\alpha}(Q_{\sigma})=\sum_{N}\beta_{0}+\left(\beta(t_{j},\beta_{0})(\alpha_{j})\right)\\
P(t)=\sum_{\alpha,\beta}(Pt)\,\beta_{0}\\
\beta_{i+1}=\beta_{j}+G_{0}(\beta_{j})\\
\text{with}\\
\beta_{i+1}=\beta_{j}+\phi(L_{ij}+\beta_{j})\\
\gamma_{j+1}=\gamma_{j}-\phi(L_{ij}+\beta_{j})
\end{array}\right\}$$
(6)

where \(\rho\) denotes the existing features and \(\phi\) denotes the unique characteristics from the modified classification. Next, the maximum difference is determined from the existing features of the modified classification. This represents the most important difference between the features related to the tumor and non-tumor regions within the PET images. By determining the features that carry the maximum variance between these two types of regions, the method effectively spots the differentiating features that are most indicative of tumor presence. The process of detecting the maximum difference is explained using Eq. (7).

$$\left.\begin{array}{c}
\frac{d}{dt}\binom{\beta}{\gamma}(t)=\varnothing\left(W(t)\binom{\beta(t)}{\gamma(t)}\right)+\beta(t)\\
\text{where }W(t)=\begin{pmatrix}0&L(t)\\ -L(t)&0\end{pmatrix}\\
\sum_{t=0}\left(\varnothing^{\prime}\left(W(t)\binom{\beta(t)}{\gamma(t)}\right)+\beta(t)\right)W(t)
\end{array}\right\}$$
(7)

where \(\varnothing\) denotes the maximum difference and \(W\) represents the efficacious spots in the tumor regions. Based on the modified classification outcome, the CNN classifier is trained. The outputs include the features and the maximum differences, which serve as important information for enhancing the CNN. The CNN is trained to recognize and correlate these features with the presence of brain tumor regions. This helps in acquiring accurate distinctions between the tumor and non-tumor regions within the PET images. The process of training the CNN for the enhancement of accuracy is explained using Eq. (8).

$$\left.\begin{array}{c}
A_{\sigma}(t,\beta)=\begin{cases}A_{0,n}(t,\beta)&t\in[t_{0},t_{1}]\\ A_{0,n}(t,\beta)&t\in[t_{n-1},t_{n}]\end{cases}\\
\alpha_{\sigma}(t)=\sum_{j=1}^{n}\sigma_{0}\phi_{i}(t)\\
G_{\sigma}(t,\beta(t))=\widetilde{G_{\sigma}}(t)(t,\beta(t))\\
\alpha(0)=\alpha_{\sigma},\quad\frac{d\alpha}{dt}(t)=G_{\sigma}(t,\alpha(t)),\\
\beta(0)=\beta_{0},\quad\frac{d\beta}{dt}(t)=\widetilde{G_{\sigma}}(t,\beta(t))
\end{array}\right\}$$
(8)

where \(A\) denotes the training operation of the CNN in the classification step of the segmentation process. The modified classification also helps in determining the continuous and discrete regions. These are identified based on the different feature distributions for multiple regions, which eases true positive analysis. The true positives for the different segments are used to improve precise detection based on monotonous pixel distributions. Based on different feature extraction rates, the change in the monotonous nature is used to identify discreteness. This is categorized by the varying features coexisting in the same region as the feature-extracted and variance-estimated regions. First, the continuous regions are detected using finite features selected for their maximum accuracy. These continuous regions are detected through the modified classification process, demonstrating the substantial difference between the tumor and non-tumor regions. By designating the features with the highest accuracy in selecting these differences, the algorithm aims for the most reliable representation of the continuous tumor regions. This method ensures that the segmentation model exploits those aspects of the image data, which results in precise identification during the analysis. This is obtained using Eqs. (9) and (10).

$$\left.\begin{array}{c}
V_{j}^{i}=V_{j}^{i-1}+\left(\nabla j(G_{j}^{i})\right)^{2}\\
G_{j}^{i+1}=G_{j}^{i}-\frac{n}{\sqrt{V_{j}^{i}+y}}\,\nabla j(G_{j}^{i}),\\
V_{j}^{i}=\alpha V_{j}^{i-1}+(1-\alpha)\left(\nabla j(G_{j}^{i})\right)^{2}\\
G_{j}^{i+1}=G_{j}^{i}-\frac{n}{\sqrt{V_{j}^{i}+y}}\,\nabla j(G_{j}^{i}),\\
y_{j}^{i}=\alpha_{1}y_{j}^{i-1}+(1-\alpha_{1})\,\nabla j(G_{j}^{i}),\\
x_{j}^{i}=\alpha_{2}x_{j}^{i-1}+(1-\alpha_{2})\left(\nabla j(G_{j}^{i})\right)^{2}
\end{array}\right\}$$
(9)
$$\left.\begin{array}{c}
K^{1}=W^{1}\alpha+y^{1},\quad y^{1}=\sigma^{n}(Z^{1}),\\
K^{2}=W^{2}\alpha+y^{2},\quad y^{2}=\sigma^{n}(Z^{2}),\\
K^{n}=W^{n}\alpha+y^{n},\quad y^{n}=\sigma^{n}(Z^{n})
\end{array}\right\}$$
(10)
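The recursions in Eq. (9) have the familiar shape of adaptive gradient updates, where an accumulated squared gradient scales the step size. A NumPy sketch of the RMSProp-style middle pair of lines is given below, with `eta` and `eps` standing in for \(n\) and \(y\); the renaming is ours.

```python
# A sketch of the Eq. (9)-style adaptive update; symbols renamed for clarity.
import numpy as np

def adaptive_step(w, grad, v, eta=0.01, alpha=0.9, eps=1e-8):
    v = alpha * v + (1 - alpha) * grad**2      # running average of squared gradients
    w = w - eta / np.sqrt(v + eps) * grad      # gradient step scaled by that average
    return w, v
```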

The process of detecting the continuous regions is explained by Eqs. (9) and (10) given above, where \(K\) denotes the continuous region based on the modified classification. Next, the discrete region is extracted in the detection operation. This process involves determining the particular, isolated regions that provide distinctive features, possibly representing the presence of tumors. It associates the finite features, extracted depending on their existence and the maximum difference between the tumor and non-tumor regions. The difference estimation is presented in Algorithm 2.

Algorithm 2

Difference Estimation.

Input: Region for detection.

Output: Find the difference in the region.

1. if \(K < \beta(G)\), then // the difference in feature of the current region is less than the extracted ones

2. if \(\beta_{0}(\alpha_{i}) = G_{\sigma}\), then

3. Compute the maximum matching features of the region.

4. else

5. Calculate the modified classification rate.

6. end if

7. else

8. if \(\alpha_{i}(K) > (\alpha_{\sigma})\), then // high variance check between the identified and extracted regions

9. Identify the finite feature.

10. else

11. if \(\alpha U_{i} + \alpha U_{j} > \beta(G)\), then

12. if \(\beta(G) = \rho t\), then

13. Compute the feature differences.

14. else

15. Identify maximum region differences.

16. end if

17. for \(\alpha_{i}(K) = \beta(G)\), do // for all the features that represent the regions with no variance

18. \(\beta(G) = \alpha_{i}(K)\)

19. else

20. if \(\widetilde{G_{\sigma}}(t,\beta(t)) > K(G)\), then // the variance difference is higher than the actual variance estimated

21. Identify the distinctive feature.

22. else

23. Identify the discrete region.

24. end if

25. end if

26. end if
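As with Algorithm 1, the difference-estimation branches can be transcribed directly; every argument below is a scalar placeholder for the corresponding symbol, and the no-variance loop of steps 17-18 is folded into its action label.

```python
# A hedged transcription of Algorithm 2's branching into Python.
def difference_estimation(K, beta_G, beta0_ai, G_sigma, ai_K, a_sigma,
                          aU_i, aU_j, rho_t, G_tilde, K_G):
    if K < beta_G:                     # current difference below the extracted ones
        if beta0_ai == G_sigma:
            return "maximum matching features"
        return "modified classification rate"
    if ai_K > a_sigma:                 # high variance between identified/extracted regions
        return "finite feature"
    if aU_i + aU_j > beta_G:
        if beta_G == rho_t:
            return "feature differences"
        return "maximum region differences"   # then beta(G) := alpha_i(K) when equal
    if G_tilde > K_G:                  # variance difference above the actual estimate
        return "distinctive feature"
    return "discrete region"
```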

By analyzing these features and their differences, the discrete regions are detected. The process of determining the discrete regions that represent the presence of a tumor is explained using Eqs. (11) and (12)2,12.

$$\left.\begin{array}{c}
A^{*}=\beta(G)\\
\beta(G)\,K\sum_{\beta_{0}}\alpha_{i}=\frac{1}{2}\left[\rho t(\sigma^{2})\right]\\
\rho t(\beta)=\widetilde{G_{\sigma}}(t,\beta(t))\\
t,\beta(t)=G(K)*\sum_{G_{\sigma}}\alpha(t)+\rho t(\sigma^{2})\\
\text{Consequently,}\\
\sum_{G}\left|\beta(G)-G\right|=\alpha_{i}(K)*(\alpha_{\sigma})\\
\alpha_{\sigma}(K)+\left[\beta(G)\right]=\rho t(\beta)*A_{0,n}(t,\beta)\\
\beta(G)=\alpha U_{i}+\alpha U_{j}=\sum_{x,y}\left[\frac{\rho}{\partial t}+\Delta\frac{\partial t}{\partial x}\right]\\
\text{Therefore,}\\
=\frac{1}{2}\left[\sigma t\left(\sum_{i,j}\left[\frac{x}{\partial t}+\frac{y}{\partial t}\right]\right)\right]^{2}
\end{array}\right\}$$
(11)
$$\left.\begin{array}{c}
\frac{\partial\pi_{1}}{\partial t}=x(t)+y_{1}+y_{2}-\left(1+t^{2}+y^{2}(t)\right)\\
\frac{\partial\pi_{2}}{\partial t}=2t-(1+t^{2})x(t)+y_{1}y_{2}\\
\text{where }t\in[0,1],\quad\pi_{1}(0)=0,\quad\pi_{2}(0)=1,\\
\pi_{1}=x(t),\quad\pi_{2}=1+t^{2}
\end{array}\right\}$$
(12)

where \(\pi\) represents the discrete regions that indicate the presence of a tumor.
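One plausible realization of the discrete-region extraction is connected-component labeling of the classified tumor mask, sketched below; the labeling criterion and the `min_pixels` filter are our assumptions, not a step stated in Eqs. (11) and (12).

```python
# A sketch of discrete-region extraction: isolate connected components of a
# boolean tumor mask and discard specks below a minimum size.
import numpy as np
from scipy.ndimage import label

def discrete_regions(tumor_mask: np.ndarray, min_pixels: int = 10) -> list:
    labeled, n = label(tumor_mask)             # connected-component labeling
    return [labeled == k for k in range(1, n + 1)
            if int((labeled == k).sum()) >= min_pixels]
```

The training operation for discrete and continuous region detection is illustrated in Fig. 5.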

Fig. 5. Training Process Illustrations.

The second classification identifies the need for training under the process of \(P\). First, if \(\rho=\text{true}\), then maximum matching is pursued to validate whether \(\psi=\text{true}\) or not. If \(\psi\) is true, then \(\rho=\beta\) is valid and hence the region is continuous without \(j\). Contrarily, there are two cases where \(\rho\) is not true and \(\psi\) is false. If \(\rho=\text{false}\), then the maximum difference is identified for training. The \(\psi=\text{true}\) case performs an integration of \(\rho\) and \(\beta\in\psi\). This integration identifies \(A\) as different from the \(K\) region (Fig. 5). This process helps in detecting the presence of a tumor with maximum accuracy. The segmentation process is made efficacious with the help of the CNN technique, which reduces errors while handling the discrete and continuous PET image segments with high adaptability. The CNN is trained until it attains precise segmentation.

Performance assessment

In the performance assessment section, the experimental results using external dataset images and the comparative analysis using metrics and methods are described. The performance assessment is discussed using experimental and comparative analyses. For the experimental analysis, the images from34 (synthetic whole-head brain tumor segmentation dataset) are used for tumor segmentation. This dataset provides 3D segmented images with labels 0 to 2 indexing the background, forehead, and tumor region in order35. The numbers of training and testing images used are 1000 and 426, respectively. The images are split into a maximum of 10 regions. The training is initiated with 800 iterations, extended up to 1200 (for large images). The training rate is set from 0.6 to 1.0, for which the minimum number of epochs is 3 and the maximum is 5. The learning requires a minimum of 3-4 epochs to classify the output texture. The classifier learning is trained at a rate between 0.6 and 1 with a drop rate between 2 consecutive intervals. Moreover, the classification iteration is halted at the maximum saturation between 3 and 5 epochs. With this information, the experimental analysis using MATLAB is summarized in Tables 1 and 2 using the sample inputs36,37. The MATLAB software is deployed on a computer with a 2.8 GHz Intel processor, 256 GB secondary storage, and 8 GB physical memory.
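For quick reference, the reported training configuration can be collected as follows; the values are exactly those stated above, and only the dictionary layout is ours.

```python
# The stated experimental configuration of the FSM-ICL runs.
EXPERIMENT = {
    "train_images": 1000,
    "test_images": 426,
    "max_regions_per_image": 10,
    "iterations": (800, 1200),       # extended up to 1200 for large images
    "training_rate": (0.6, 1.0),
    "epochs": (3, 5),
}
```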

Table 1 Conventional classification Output.
Table 2 Modified classification Output.

Apart from the above experimental analysis, the following section presents metric-based comparisons. The metrics precision, classifications, error rate, classification time, and segmentation accuracy are considered in this comparative analysis. The number of regions and the number of features are varied up to 10 and 12, respectively. The methods NRAN27, DSSE-V-Net17, and DenseUNet+23 are considered in this comparative analysis along with the proposed method38. The parameters used for this comparative study include the image height and variances, which are used to find whether the image is normalized or not. The characteristics of the image include noise and contrast; the obtained image is in grayscale, so the noise can be easily identified using \(\beta(0)*\sum_{\alpha_{i}}\rho(t)\). The validation of image segmentation is done for the precise output with segmentation accuracy. Here, it relates to the two formats, continuous and discrete, and based on this representation the image classification is processed. The matching is estimated to decide whether it is a tumor or not and provides the result. Based on the matching process, the differences are estimated for the image parameter \(G(\beta)*\alpha_{i}\). From the matching, the \(k\) regions are extracted and given to the integration process to analyze the region39. The precision and error metrics are inferred from the \(\gamma_{\sigma}(t,\beta(t))\) and \(G(t,\beta(t))\) estimations, which inversely validate the identified regions. The mismatching of continuous and discrete textures results in increasing false rates. If the false rate does not satisfy the \(A^{*}=\beta(G)\) condition, then the error increases and thereby the precision decreases. The case of \(\psi=\text{true}\) ensures high classification regardless of the number of matching cases, from which the precision is improved40. Therefore, references for error and precision are inferred from these parameters for validation. Following the above, the hyperparameter analysis for sensitivity is tabulated in Table 3 based on the maximum difference identified. This analysis of sensitivity identifies the maximum true positives irrespective of the true positive rates.

Table 3 Sensitivity analysis for maximum Regions.

The sensitivity analysis for the varying regions and their corresponding difference ranges is tabulated in Table 3 above. As the difference rate increases, the sensitivity fluctuates without precise identification of the maximum difference region. The classifier learning operates on both the difference and variance of the extracted features to improve the sensitivity. This identifies the maximum true positives to leverage the region differences rather than avoiding them from the computations. As this process is iterated until the maximum regions are identified, the change in sensitivity is observed. The p-values and the corresponding error values for uncertainty are tabulated in Table 4 below. This tabulation considers the number of regions for the same difference ranges.

Table 4 p-Values with error values for Uncertainty.

The region extraction is computed with a higher classification rate to reduce the noise and to periodically analyze the height of the input image. The error rate is reduced based on the parameter used for identifying the tumor from the PET image, \(\sum_{G}\left|\beta(G)-G\right|\). This evaluation of image segmentation is derived as the output from the continuous and discrete images. The maximum difference is used to estimate the region and to provide the training based on its image segmentation, described as \(\alpha_{i}(K)*(\alpha_{\sigma})\). The tumor cell is used to improve the accuracy rate of the classification method and to ensure the segmentation. The available features and the differences in the image features are clustered together to ensure training on images that are not on the detection boundary. The maximum region difference is extracted using the mapping function, and the validation function is estimated for the precise output. The maximum region is extracted and segmented with parameters such as the noise and contrast of the PET image. The image characteristics are considered for the segmentation, where the regions are split into smaller portions and mapped. The boundary-based image region is extracted, and the training is estimated by matching the difference to decide whether it is a tumor or not. The labeled region is used to find the tumor cells in an ordered manner and improves the accuracy, described as \((1+t^{2}+y^{2}(t))\). This illustrates the region of the image by testing the image and producing the output with higher precision while decreasing the computation time. The similarity rate is enhanced for the textural classifier, where the discrete and continuous images are evaluated for the segmented output. The correlation is used to separate tumor and non-tumor regions in the discrete PET image. Parameters such as the characteristics, size, and pattern of the image are difficult to detect; these difficulties are handled by introducing the CNN, which improves the classification level of PET images and results in accurate detection of the tumor region. The tumor detection is in the form of a discrete image where the region is split into smaller portions, \(\rho t(\beta)*A_{0,n}(t,\beta)\). Among the parameters used for the improvement in this comparative study, the characteristics of the image are considered along with the nature, size, and height of the PET image. Based on these characteristics, the normal and abnormal regions are identified against the reference output. With the parameters used in this proposed work, the precision and classifications are improved, whereas the error rate and classification time are reduced.
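The comparison metrics below follow from standard confusion counts; the sketch assumes non-zero denominators and is only a reference for how precision, specificity, sensitivity, accuracy, and error rate relate, not evaluation code released with the paper.

```python
# Standard definitions of the metrics used in this comparative analysis,
# computed from true/false positive and negative counts.
def metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    total = tp + fp + tn + fn
    return {
        "precision": tp / (tp + fp),
        "specificity": tn / (tn + fp),            # true-negative rate (Table 5)
        "sensitivity": tp / (tp + fn),            # true-positive rate (Table 3)
        "segmentation_accuracy": (tp + tn) / total,
        "error_rate": (fp + fn) / total,
    }
```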

Precision

Precision is high in this process with the help of classifier learning. This operation integrates both the conventional and modified classifier functions, each contributing to refining the accuracy of segmentation in PET image analysis. The CNN classifier supports finite features and high-accuracy matching to achieve discrete and continuous region detection. Through iterative training, these classifiers adjust their acquired input values to enhance the accuracy of the images. The finite characteristics represent the most variant features while capturing the essential distinctions between the tumor and non-tumor regions. By using classifier learning, the variations are detected precisely, which leads to high precision in the segmentation procedures. The inclusion of the modified finite operation within the conventional classifier improves its procedures. This inclusion depends on the accuracy of the tumor region segmentation. By incorporating the modified process when segmentation accuracy is high, the errors are minimized and the overall precision of the operation is enhanced, as shown in Fig. 6.

Fig. 6. Results of Precision of the Classifiers.

Classifications

The classification is effective in this process after the extraction of the input PET image features. These features help distinguish between the tumor and non-tumor regions. After the features are extracted, they are sent as input to the classification process, which engages the CNN. The CNN accurately analyzes the extracted characteristics, recognizing the complex patterns, frameworks, and differences within the PET images. Supported by the CNN, the process understands the complex associations within the input PET image data. This enables it to precisely label each region as either tumor or non-tumor. The acquired features allow the CNN to base its decisions on the distinctive attributes of each region within the PET images. As the classification proceeds, the CNN training helps the classification achieve maximum accuracy, leading to an overall precise segmentation operation. The integration of the classification after the feature extraction process enhances the accuracy and reliability of determining brain tumor regions within the PET images, as shown in Fig. 7.

Fig. 7. Results of Classification of the Classifiers.

Error rate

The error rate is lower in this process with the help of the precise segmentation procedure and classifier learning. The CNN helps in the segmentation process by ensuring the accurate determination of both continuous and discrete tumor regions within the PET images. Through repetitive training, the classifiers enhance their accuracy by learning from their outcomes. The proper features, selected based on their distinctiveness and difference, enhance the classifier's ability to distinguish between the tumor and non-tumor regions. Combining the modified classification with the conventional classifier further mitigates the errors. By integrating the modified operation when segmentation precision is maximum, the operation becomes more robust in reducing the errors. The symbiosis between the precise segmentation operation and classifier learning delivers a lower error rate. This association ensures that the brain tumor regions are identified more accurately with fewer errors, and it also enhances the precision of the procedure, as shown in Fig. 8.

Fig. 8. Error Rate of the Classifiers.

Classification time

The time taken for the classification process is lower in this process with the help of the CNN. The textural differences and the matching features are determined in this classification procedure. These differences help in determining the complex patterns that deliver the distinction between the tumor and non-tumor regions. The CNN determines the matching features by observing the extracted features. Through the CNN layers and their combination, these are effectively correlated for precise tumor detection. This method improves the efficiency of the classification operation through fast processing and the determination of the complex elements that distinguish between the regions. As the CNN's ability to process information from the given PET images is enhanced, the classification operation becomes faster and more precise. The orderly determination of the textural differences and matching features ensures that the method efficiently determines the brain tumor regions within the PET images, making the procedure more time-effective, as shown in Fig. 9.

Fig. 9. Classification Time of the Classifiers.

Segmentation accuracy

The segmentation accuracy is higher in this process with the help of the CNN in the classification and modified classification procedures. The CNN, with its developed pattern recognition abilities, helps in enhancing the precision of segmenting the brain tumor within the PET images. During the classification process, the CNN analyzes the extracted features, refining the intricate patterns and textures that represent the tumor regions. This analysis enables the precise determination and categorization of these regions, delivering maximum segmentation accuracy. The modified classification operation helps in determining the complex features that distinguish the tumor and non-tumor regions. This process enhances the accuracy of segmenting the discrete and continuous tumor areas. The incorporation of the CNN improves the segmentation precision by enhancing the CNN's abilities. The accurate classification and determination of the textural differences, combined with the efficacious recognition of the features, ensure precision in detecting the brain tumor regions within the PET images, as shown in Fig. 10.

Fig. 10. Segmentation Accuracy of the Classifiers.

Specificity

In Table 5, the specificity comparisons are presented. In this comparative analysis, the methods IRDNU-Net28 and MTDCNet24 are added to the methods considered previously.

Table 5 Specificity Comparisons.

The specificity of the proposed model is high compared to the other methods of the same kind, as presented in the above table. The proposed model is reliable in performing the classification of features based on differences and variances. This process is iterated to induce various classification instances across the multiple segments and regions identified. The matching and non-matching features are identified through this classification to improve the specificity regardless of the regions. In this process, the variance is estimated as high or low depending on the number of segments across various feature extraction rates. The classification differentiates the maximum regions based on the \(\alpha\) and \(\beta\) variants to leverage the detection accuracy. Therefore, the true negatives are identified from these classifications for the multiple deviations identified.

In Tables 6 and 7, the comparative analysis results are tabulated for each variant used.

Table 6 Comparative analysis results for Regions.

The proposed FSM improves precision, classification, and segmentation accuracy by 10.09%, 10.96%, and 10.03% respectively. This method reduces error and classification time by 11.29% and 10.24% respectively.

Table 7 Comparative analysis results for Features.

The proposed FSM improves precision, classification, and segmentation accuracy by 8.43%, 8.51%, and 9.23% respectively. This method reduces error and classification time by 9.34% and 10.4% respectively.

From the above results, it is seen that the proposed model is reliable in handling small variations between consecutive identified regions. The classification process is used to improve the segmentation accuracy ahead of the iterations. Therefore, any change in the feature extraction results in computations of the differences across multiple segments. In the difference estimation for the identified patterns, the regions are used for feature matching, which is a lagging concept in the existing network models. Therefore, the specificity measure complements the sensitivity measure to improve the precision.

Conclusion

This paper discussed the working process of the novel finite segmentation model using improved classifier learning. The proposed model aimed at, and succeeded in, improving the segmentation accuracy by addressing the defined discreteness problem. The problem is addressed using dual functions of the classifier: the conventional and the modified. In the conventional process, the segments are identified based on correlation using the labeled inputs. In contrast, the modified classifier identifies the discrete and continuous regions using specific feature parameters. In these cases, the difference and high-matching factors are used for classification accordingly. Based on these two factors, the training process is pursued to improve the segmentation process. Therefore, the error-causing discrete sequences are identified through recurrent training using feature existence and its uniqueness. From the comparative analysis, it is seen that the proposed model improves precision by 10.09% and classifications by 10.96%, and reduces the error by 11.29% for the varying regions. However, the backward training in this model requires additional feature-matching instances that are less feasible for dense pixel-packed inputs. To handle this problem, micro-segmentation approaches with pre-classification are planned for future work. The pre-classification is reliable in identifying variance regions as patches rather than segmenting the whole region. Therefore, the micro-segmentation is performed for the variance region alone to reduce the true negatives. This holds for any rate of feature and region extraction.