Table 1 Detailed Steps in the SVM Optimization Process for Breast Cancer Classification.

From: Learning quality-guided multi-layer features for classifying visual types with ball sports application

Step 1: Learn the GSS Features

Extract Gaze Shift Sequence (GSS) features \(F_i\) for each breast image.

GSS features are derived using a deep neural network that models human gaze behavior and attention.

These features capture both the visual and semantic properties of image patches, representing the areas of interest within an image.
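The gaze-modeling network itself is not reproduced here. As a minimal sketch of the patch-feature-extraction idea, the snippet below uses a pretrained ResNet-18 from torchvision as a hypothetical stand-in for the GSS network; the function name extract_gss_features and the 224×224 patch size are illustrative assumptions, not the authors' implementation.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T

# Hypothetical stand-in: a pretrained ResNet-18 backbone in place of the
# paper's gaze-modeling network. Its pooled penultimate-layer output acts
# as the feature vector F_i for one image patch.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # drop the classification head
backbone.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_gss_features(patches):
    """Map a list of PIL image patches to feature vectors F_i (illustrative)."""
    batch = torch.stack([preprocess(p) for p in patches])
    return backbone(batch)  # shape: (num_patches, 512)
```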

Step 2: Compute Similarity Between Feature Vectors

Use a kernel function \(\theta (F_i, F_j)\) to calculate the similarity between feature vectors \(F_i\) and \(F_j\) for image pairs.

This similarity is typically computed using a Gaussian RBF kernel, which measures the closeness of patches in the feature space:

\(\theta (F_i, F_j) = \exp \left( -\frac{\Vert F_i - F_j\Vert ^2}{\sigma ^2}\right)\), where \(\sigma\) is the kernel width.

This term quantifies how similar two feature vectors are, determining their proximity in the ranking model.
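As a minimal illustration, the kernel above translates directly into a few lines of NumPy. The function names are assumptions for this sketch; note that it keeps \(\sigma^2\) in the denominator exactly as written, whereas some libraries parameterize the RBF kernel as \(\gamma = 1/\sigma^2\).

```python
import numpy as np

def rbf_kernel(F_i, F_j, sigma=1.0):
    """Gaussian RBF similarity: exp(-||F_i - F_j||^2 / sigma^2)."""
    return np.exp(-np.sum((F_i - F_j) ** 2) / sigma ** 2)

def gram_matrix(F, sigma=1.0):
    """Pairwise kernel matrix theta(F_i, F_j) for rows of F (M x d)."""
    sq_dists = np.sum((F[:, None, :] - F[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq_dists / sigma ** 2)
```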

Step 3: Maximize the Objective Function

The objective function \(Z(\omega)\) is maximized to find the optimal weights \(\omega\) for the SVM. Consistent with the descriptions of its two terms below, it takes the standard SVM dual form \(Z(\omega) = \sum_{i=1}^M \omega_i - \frac{1}{2} \sum_{i=1}^M \sum_{j=1}^M \omega_i \omega_j a_i a_j \theta (F_i, F_j)\), where \(a_i \in \{-1, +1\}\) is the class label of the \(i\)-th sample.

The first term \(\sum _{i=1}^M \omega _i\) aims to maximize the margin between classes by boosting the importance of certain patches.

The subtracted second term \(\frac{1}{2} \sum _{i=1}^M \sum _{j=1}^M \omega _i \omega _j a_i a_j \theta (F_i, F_j)\) incorporates the kernel function.

It penalizes large weight assignments for patches whose similarity is inconsistent with their class labels, keeping the decision boundary consistent with the training labels.

The optimal \(\omega\) is found using quadratic programming (QP) methods or similar techniques, as sketched below.
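As a minimal sketch of this QP step, assuming the dual form of \(Z(\omega)\) written above and the constraints listed in Step 4, the maximization can be handed to a generic solver such as CVXOPT by negating the objective. The function and variable names are illustrative; this is not the authors' implementation.

```python
import numpy as np
from cvxopt import matrix, solvers

def solve_svm_qp(theta, a, H):
    """Maximize Z(w) = sum(w) - 0.5 * sum_ij w_i w_j a_i a_j theta_ij
    subject to 0 <= w_i <= H and sum_i w_i a_i = 1 (as stated in Step 4),
    by passing the equivalent minimization to CVXOPT's QP solver."""
    M = len(a)
    a = np.asarray(a, dtype=float)
    P = matrix(np.outer(a, a) * theta)              # quadratic term a_i a_j theta(F_i, F_j)
    q = matrix(-np.ones(M))                         # maximize sum(w)  ->  minimize -sum(w)
    G = matrix(np.vstack([-np.eye(M), np.eye(M)]))  # encodes 0 <= w_i <= H
    h = matrix(np.hstack([np.zeros(M), H * np.ones(M)]))
    A = matrix(a.reshape(1, -1))                    # equality constraint sum(w_i a_i) = 1
    b = matrix(1.0)
    sol = solvers.qp(P, q, G, h, A, b)
    return np.array(sol["x"]).ravel()
```

Because theta is an RBF Gram matrix, \(P\) is a Hadamard product of positive semidefinite matrices and hence positive semidefinite, so the QP is convex and the solver returns a global optimum.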

Step 4: Satisfy the Constraints

The optimization is subject to two key constraints:

(i) \(0 \le \omega _i \le H\), where \(H\) is an upper bound that prevents excessive weight values and overfitting.

(ii) \(\sum _{i=1}^M \omega _i a_i = 1\), which balances the contributions of the positive and negative samples and prevents the SVM from becoming biased toward one class.

The weights are iteratively adjusted while maintaining these constraints, so that the model generalizes well to unseen data.

The constraints can be handled using standard SVM optimization techniques such as the Sequential Minimal Optimization (SMO) algorithm.
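For an off-the-shelf SMO-style route, scikit-learn's SVC (backed by LIBSVM, whose solver is an SMO variant) accepts the precomputed Gaussian kernel from Step 2. Note that this sketch solves the classical soft-margin dual, where the box constraint \(C\) plays the role of \(H\) and the equality constraint is \(\sum_i \omega_i a_i = 0\) rather than 1, so it is a practical stand-in rather than the exact formulation above; the data here are random placeholders.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
F = rng.normal(size=(40, 512))                 # placeholder GSS feature vectors
a = np.where(rng.normal(size=40) > 0, 1, -1)   # placeholder +/-1 labels
sigma = 1.0

# Precomputed Gaussian Gram matrix over the training features (Step 2).
sq = np.sum((F[:, None, :] - F[None, :, :]) ** 2, axis=-1)
K_train = np.exp(-sq / sigma ** 2)

# LIBSVM's SMO-type solver handles the box and equality constraints internally.
clf = SVC(kernel="precomputed", C=1.0)
clf.fit(K_train, a)

# Prediction needs the kernel between new samples and the training set.
F_new = rng.normal(size=(5, 512))
sq_new = np.sum((F_new[:, None, :] - F[None, :, :]) ** 2, axis=-1)
print(clf.predict(np.exp(-sq_new / sigma ** 2)))
```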