Abstract
This study identifies the core competencies of managers of intellectual property (IP)-based startups and proposes an evaluation model to enhance managerial competency criteria during technology assessments. Despite IP-based startups’ role in fostering innovation and job creation, current research on these startups primarily emphasizes business feasibility, marketability, and technical viability while overlooking managerial competencies. Technology evaluations often prioritize secondary factors and neglect core managerial skills that are crucial for success. To address this gap, this study employed traditional methods (i.e., the Delphi method and expert workshops) alongside machine learning algorithms to derive the key competencies. This approach enables agile adaptation to the rapidly evolving technological and managerial landscape and moves beyond traditional behavior-focused evaluations. These derived competencies and growth rules are integrated with assessment center techniques, resulting in a novel, systematic, and clear evaluation framework. In this way, this study provides a robust model for accurately assessing the managerial competencies essential for achieving high performance in IP-based startups. The findings offer valuable insights for improving technology evaluation systems that can benefit both current managers and aspiring entrepreneurs.
Introduction
As technological innovation continues to accelerate, the industrial system is rapidly shifting from tangible to intangible assets. According to Ocean Tomo, a global intellectual property (IP) merchant bank, the proportion of intangible assets of the top 500 U.S. S&P companies has surpassed that of tangible ones since the mid-1990s. Today, intangible assets make up 90% of total assets (Ocean Tomo, 2020). Although intangible assets are not physically visible, they generate economic benefits that are closely linked to intellectual property rights. In today’s fast-paced, knowledge-driven economy, intellectual property rights not only provide protection and legal status to new creations but also offer exclusive rights to companies seeking to maximize profit through technological innovation, thereby supporting continuous conditions for more innovation (Coulibaly et al. 2018; Paramba et al. 2023; Salamzadeh et al. 2023).
Globally, activities linked to intellectual property rights (e.g., patents) have increased, and larger-scale intellectual property rights are associated with higher industrial revenue performance. Developed countries are investing significantly in expanding the infrastructure in various intellectual property fields, with the activation of IP-based startups being a prime example. IP-based startups are technology-driven and often yield more significant effects than general startups and companies. The emergence of successful IP-based startups not only boosts national competitiveness, but also creates jobs, thereby addressing employment issues (Jeong, 2019; Le Trinh, 2019). In addition, the nature of startups allows for the easy application of new technologies and quick responses to innovation, creating a significant advantage. In IP-based startups, technology is a core resource for sustaining competitive advantages. Many scholars have emphasized the importance of technology in entrepreneurship, with Cohen and Klepper (1992) being notable for arguing that new technologies born from R&D have the most significant impacts on value creation for businesses and industries. Although technology is understood to be crucial for intellectual property, its value is inherently difficult to measure compared to measuring tangible assets. Thus, the role of technology evaluation in objectively assessing and quantifying the economic utility, competitiveness, and technology level is clearly vital.
As awareness of the importance of technology assessment grows, related research has steadily expanded. To derive implications for improving technology assessments, this study selected representative Korean cases and conducted a thematic keyword analysis. Technology assessment systems differ across countries due to variations in legal, economic, and cultural contexts and can generally be categorized into private-sector-led and government-led models. Government-led assessments offer advantages in verifying policy effects and maintaining consistency in evaluation criteria. Accordingly, Korea—a country with a strong government-led system—was chosen for this study. In total, 134 papers were analyzed, including 80 from the Science and Technology Policy Institute (which leads national policy research) and 54 from the Journal of Korea Technology Innovation Society. The classification results by thematic categories are presented in Table 1.
In studies aimed at improving technology evaluation methodologies, as shown in Table 1, most research has primarily focused on refining evaluation criteria such as business feasibility, technological capability, and marketability. However, the managerial dimension of technology assessment, particularly managers’ core competencies, has been largely overlooked. Numerous studies emphasize that managerial competencies are critical to startups’ success (Hambrick and Mason, 1984; Lee et al. 2017; Choi et al. 2022; Kim and Hwang, 2022; Salamzadeh et al. 2023). In addition, managerial competence is considered especially vital during the early growth stage of startups (Finkelstein, 1988; Gupta, 1988). Despite this, existing technology assessments rarely apply structured approaches to evaluate managerial competencies. Instead, they tend to rely on superficial indicators, such as educational background or career history, which are less informative and poorly systematized compared to other evaluation domains. This limitation is not confined to Korea, but is also evident in North America, where private-sector-led assessments dominate.
This study aims to identify the core competencies of managers in IP-based startups, which are increasingly important in today’s innovation-driven economy. To advance beyond prior research, we apply machine learning techniques to uncover core competencies closely associated with actual business growth. In particular, decision tree algorithms are used to derive interpretable performance rules in addition to predicting outcomes. These identified core competencies and growth-related rules are then aligned with assessment center techniques, enabling the development of a systematic evaluation model for accurately measuring managerial competencies in technology assessments. Existing competency modeling, including the work by Spencer and Spencer (1993), has largely focused on large corporations and traditional industries, limiting its relevance to the dynamic environments and distinctive management styles of startups. Traditional models heavily rely on researcher judgment, such as expert opinions and behavioral event interviews. Even when quantitative techniques are applied, they often consist of basic statistical analyses grounded in retrospective data, making it difficult to respond to new or changing environments. As a result, these models are not only time-consuming to update, but also lack predictive flexibility due to their dependence on subjective human input. In contrast, the competency modeling method proposed in this study applies machine learning to extract meaningful patterns from business performance datasets. By incorporating decision tree algorithms such as C4.5, the approach enables the exploration of diverse business scenarios, offering insights beyond the identification of core competencies. This method supports a more systematic and practical evaluation of managerial competencies, particularly in technology assessments. This study is guided by two research questions (RQs):
- RQ 1: What are the core competencies of managers in IP-based startups that are related to business growth?
- RQ 2: How can the core competencies of managers and growth-level rules derived from machine learning be applied to technology assessments?
This study is organized as follows. Section “Literature Review” reviews existing literature and forms a candidate group to derive provisional competencies for IP-based startup managers from prior studies. Section “Research Methodology” validates these competencies using the Delphi method and expert workshops and then applies machine learning to identify core competencies and growth-level rules. Section “Development of a Technology Evaluation Model Using the Evaluation Center Technique” presents an evaluation model by integrating the derived competencies with assessment center techniques. Section “Conclusion” concludes with implications, limitations, and future research directions.
Literature review
Competencies and competency modeling methods
The concept of competency began to be seriously discussed in 1973, when McClelland conducted research for the U.S. Department of State to select effective diplomats. According to McClelland (1973), competency should focus on individuals rather than tasks and is defined as the characteristics of people who demonstrate superior performance in specific situations, essentially identifying the behaviors and thought processes of high performers. Since then, various scholars have refined the research on competencies and discussed the numerous findings related to the various roles and critical tasks. Table 2 summarizes the definitions of competency that these major studies offered.
Despite variations in these definitions, this study defines competency as a combination of objective knowledge, acquired skills, and personal attributes, such as the attitudes that influence how individuals perceive different phenomena. As the precise identification and systematic application of competencies have increased, research on methodologies to derive competencies, known as competency modeling, has also developed through the efforts of various scholars. Competency modeling is a systematic process for identifying and defining the competencies necessary to perform tasks effectively in a given environment (McLagan, 1996; Dubois and Prade, 1998; Lucia and Lepsinger, 1999). Scholars have proposed various methods for this purpose, which can be broadly classified into qualitative and quantitative approaches. Utilizing a validated competency model from prior research can improve efficiency in data collection and instrument development. However, such models may offer limited explanatory power in context-specific situations (Spencer and Spencer, 1993; Lucia and Lepsinger, 1999; Stevens, 2013; George, 2022).
In summary, competency modeling is the process of identifying the competencies required to successfully perform a specific role, utilizing a synthesis of various methodologies. Table 3 summarizes major scholars’ representative methods.
As shown in Table 3, most competency modeling studies rely on qualitative methods, such as analyzing behavioral examples of high performers through structured interviews. Although useful for identifying past behaviors, these approaches involve a high degree of researcher subjectivity. Quantitative methods, like regression or structural equation modeling, offer more objectivity but mainly identify influencing factors rather than classify decision-making variables or predict outcomes based on actual business growth (Ji and Lee, 2022).
Research on managerial competencies has been conducted as actively as its importance suggests. However, most such studies have focused on the context of leadership, emphasizing managers’ roles (Park and Lee, 2011; Jung, 2004; Park, 2010). This role-centered leadership context tends to emphasize the knowledge, skills, and attitudes managers require in general, making it difficult to capture the specific qualities of managers that are suited to particular management environments or eras. This limitation is even more pronounced in management environments involving new technologies, such as IP-based startups. Because IP-based startups primarily deal with technology and innovation, their competency modeling requires a different approach. Therefore, traditional competency modeling, which focuses on identification, must shift toward an approach that integratively analyzes multidimensional factors for predictive purposes.
This study proposes a new competency modeling approach that integrates the Delphi technique with machine learning to identify core competencies of managers in IP-based startups. Using the C4.5 decision tree algorithm, it also extracts rules associated with business growth. This approach enhances adaptability to today’s fast-changing business and technological environments.
Determining provisional competency candidates
This section analyzes prior studies on managerial competencies to build a pool of competency candidates, forming the basis for tentative competencies. We first reviewed literature focused on managers, particularly in small and startup businesses, to ensure relevance. The extracted competencies served as input for the Delphi process. As a result, a pool of 84 tentative competency candidates was established. The main studies reviewed as sources for this pool are as follows:
(1) Jung (2004) focused on the determinants of performance in small manufacturing enterprises, emphasizing CEO competencies, organizational capabilities, and competitive strategies. He modeled CEO competencies as organizational capabilities, identifying 19 significant competencies.

(2) Park and Lee (2011) developed a competency model for the CEOs of small and medium enterprises (SMEs) using qualitative research methods, selecting nationally recognized SME CEOs as research subjects. They primarily used interview techniques to collect and redefine behavioral cases, structuring the competencies of SME managers into 7 competency groups and deriving 18 competencies, ultimately focusing on 16 of them.

(3) Kim and Hwang (2022) researched the moderating effect of open innovation on the impact of startup founders’ core competencies on firm performance. Although their work did not specialize in competency modeling, their findings on the influence of startup founders’ core competencies on firm performance were applicable and thus included in the pool of candidate competencies.

(4) Ahn (2008) examined the relationship between the success factors of IT venture companies and actual marketing performance, identifying key elements for achieving marketing success among venture company managers.

(5) Byun et al. (2022) conducted research on accelerator investors, who are closely related to startups, explaining the decision-making factors for startup investments by accelerator investors and founders using the concept of relative importance.

(6) Lee et al. (2017) studied the factors determining startups’ initial success. They derived insights by associating the elements that influence startups’ successful initial market entry with competencies.

(7) Kim and Lee (2020) examined how the entrepreneurial competencies of CEOs in SMEs affect business performance. They argued that resource-constrained SMEs need to focus on strengthening specialized, segmented core competencies by industry and sector rather than pursuing comprehensive competency enhancement across all areas.

(8) Yang et al. (2011) sought to identify the efforts needed to improve technology startups’ performance by verifying the impact of entrepreneurial competencies and technology commercialization capabilities on business performance. To test their hypotheses and validate their research model, they selected a sample of 125 technology startup companies.

(9) Park (2010) emphasized the importance of social enterprises as an alternative for community integration and regional economic revitalization by referencing competencies related to startup managers.

(10) Kim and Lee (2021) examined how market orientation, technology orientation, and CEO competencies in early-stage tech startups influence business performance, highlighting the significant impact of these factors through comparative analysis.

(11) Lee et al. (2012) explored how patent activities influence the performance of early-stage technology startups. Although not directly addressing managerial competencies, the study’s focus on technology startups offers relevant insights. It highlighted the causal link between IP-related activities and business outcomes, supporting the distinction of IP-based startups.
In addition to research on managerial competencies, studies on manager characteristics influencing performance in technology startups were also reviewed. Among the 84 tentative competencies, six (e.g., ethical awareness, social competency, and crisis management) were commonly found and thus removed due to redundancy. However, similar competencies with distinct definitions were retained, anticipating further refinement through the Delphi method and workshops. As a result, 78 provisional competency candidates were finalized, as summarized in Table 4.
Research methodology
The research framework
The purpose of the study is to derive the core competencies and performance rules of managers in IP-based startups, linked to actual business performance, and to develop an improved evaluation model for systematically measuring managerial competencies in technology assessments. To achieve this, data from actual IP-based startup managers were collected. The sample was selected from companies participating in a government-verified IP-based startup support program. To ensure high-quality data, factors such as industry, region, and revenue size were considered, and data were collected using an online survey method. The collected data were utilized to develop predictive models through the application of machine learning. As Fig. 1 illustrates, the research framework can be broadly divided into four main stages: (1) the tentative competency derivation; (2) pre-processing; (3) prediction model development through growth level classification (feature selection and prediction model); and (4) developing a technology evaluation model based on key techniques gathered from the assessment centers.
(This figure describes the research procedure in four stages, detailing the processes of deriving potential competencies, applying machine learning, and developing an evaluation model).
Deriving the provisional competencies
This section uses the Delphi method and expert workshops to validate the tentative managerial competencies; confirmed competencies are then used as independent variables in a survey targeting managers of IP-based startups. The Delphi method facilitates consensus by gathering expert opinions through iterative surveys and structured feedback. Unlike general surveys, it emphasizes repeated rounds to refine agreement. Its effectiveness relies heavily on the careful selection of panel members. Previous research has suggested that approximately 10 experts typically yield reliable results, with participant diversity and consensus-building methods being critical success factors (Ewing, 1992; Anderson, 1997). The current study used purposive sampling to select a panel of experts with both theoretical knowledge and practical experience in startup management, guidance, or consulting. Panel details are provided in Table 5.
The Delphi method in this study was conducted in three rounds, each with a specific objective. The first round aimed to refine the initial list of tentative competencies, resulting in a consolidated set of 52 competencies. In the second round, these competencies were classified and structured to form a preliminary competency framework. The third round focused on finalizing the competencies through expert consensus. This multi-round approach enabled the systematic refinement of managerial competencies based on their perceived importance. The reliability of the Delphi process was supported by a content validity ratio (CVR) of 0.87, which exceeded the 0.62 threshold recommended by Lawshe (1975), confirming strong validity and reliability.
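Lawshe’s content validity ratio has a simple closed form, so the 0.62 threshold and the observed 0.87 are easy to situate. The sketch below is illustrative only; the panel size of 10 mirrors the approximate panel size discussed above, not the study’s exact data.

```python
def content_validity_ratio(n_essential, n_panelists):
    """Lawshe (1975): CVR = (n_e - N/2) / (N/2), ranging from -1 to +1,
    where n_e is the number of panelists rating an item 'essential'."""
    half = n_panelists / 2
    return (n_essential - half) / half

# With a 10-expert panel, an item rated essential by 9 experts:
# CVR = (9 - 5) / 5 = 0.8, above Lawshe's 0.62 threshold for N = 10.
```

An item endorsed by all panelists yields CVR = 1.0; one endorsed by exactly half yields 0.0, which is why larger panels permit lower thresholds.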
Despite these results, the study recognized limitations in relying exclusively on the Delphi method, as no prior research has specifically addressed managerial competencies in IP-based startups. To address this limitation, an additional expert workshop was held to redefine and reorganize the derived competencies in alignment with the unique management environment of IP-based startups. Six experts, selected from the original Delphi panel due to their deep understanding of the study, participated in the workshop. Conducted face-to-face over eight hours, this session enabled a more tailored and context-specific competency framework. For the workshop, we utilized Spencer and Spencer’s (1993) future-oriented competency development method. This expert workshop allowed the restructuring of the existing competency area to consider the characteristics of IP-based startups. The provisional competency divisions from the Delphi method (common competencies, leadership competencies, and job competencies) were redefined into the following areas:
- Business: these competencies focused on the business performance necessary for sustaining essential and continuous business operations.
- Value: these competencies focused on the foundational qualities and values that startup managers pursued.
- People: these competencies focused on the interactions necessary for exerting influence on others, such as customers and employees, in managing startups.
Table 6 presents the finalized managerial tentative competencies from the workshop. These results were then used as independent variables for collecting the actual survey data.
Data preprocessing
In this study, the target variable was the actual compound annual growth rate (CAGR) of IP-based startups. CAGR is a measure that represents the mean annual growth rate of an investment over a specified time period that is longer than one year, assuming the profits are reinvested at the end of each period. It is an advantageous way to compare the growth rates of businesses over the same period and is one of the most robust indicators for predicting a company’s growth potential. Traditional competency modeling often uses variables like employee satisfaction with managers or managers’ self-assessed job satisfaction as target variables. However, these subjective indicators have limitations in accurately reflecting actual managerial performance. Instead, this study uses annual business growth rate as an objective performance indicator to identify core competencies. Based on the OECD’s classification of high-growth enterprises, the growth rate was categorized into three levels (i.e., high, medium, and low growth) to improve analytical efficiency and enhance the clarity and interpretability of the derived performance rules linked to managerial competencies.
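The CAGR computation and three-level bucketing described above can be sketched as follows. The 20% cutoff echoes the OECD high-growth definition (average annualized growth above 20%); the 5% low-growth cutoff is an assumption for illustration, as the study’s exact boundaries are not restated here.

```python
def cagr(begin_value, end_value, years):
    """Compound annual growth rate over `years` periods:
    (end / begin) ** (1 / years) - 1."""
    return (end_value / begin_value) ** (1.0 / years) - 1.0

def growth_level(rate, high=0.20, low=0.05):
    """Bucket a CAGR into the study's three classes. The 20% cutoff
    follows the OECD high-growth definition; 5% is illustrative."""
    if rate >= high:
        return "high"
    if rate >= low:
        return "medium"
    return "low"

# Revenue growing from 100 to 172.8 over 3 years: CAGR = 1.728**(1/3) - 1 ~= 20%
```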
Data balancing
When building the predictive model, the class distribution of the target variable was examined and balanced to prevent the model from producing predictions biased toward the majority class. Accordingly, a data-balancing process was conducted to align the proportions of each class within the target variable.
Two sampling techniques were considered: over-sampling and under-sampling. Over-sampling increases the representation of minority classes by duplicating existing instances, which expands the dataset but may lead to overfitting. Under-sampling reduces the majority class to match the minority, which avoids overfitting but may discard valuable data. In big data contexts, the drawbacks of under-sampling can be mitigated (Choi and Yoo, 2016).
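The random over-sampling idea described above (duplicating existing minority-class instances until classes are equal) can be sketched as below; the class labels and counts are illustrative, not the study’s data.

```python
import random
from collections import Counter

def random_oversample(rows, labels, seed=42):
    """Duplicate randomly chosen minority-class rows until every class
    matches the majority-class count (plain duplication, as in the text)."""
    rng = random.Random(seed)
    counts = Counter(labels)
    target = max(counts.values())
    out_rows, out_labels = list(rows), list(labels)
    for cls, n in counts.items():
        idx = [i for i, y in enumerate(labels) if y == cls]
        for _ in range(target - n):
            i = rng.choice(idx)          # reuse an existing instance
            out_rows.append(rows[i])
            out_labels.append(labels[i])
    return out_rows, out_labels
```

Because only copies are added, the risk is overfitting to repeated records, which is the trade-off against under-sampling’s information loss noted above.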
To assess model robustness across different sampling methods, this study conducted 10 iterations per technique and compared average performances. The model used was in its initial state without feature selection, and all other conditions were held constant. For the no-sampling case, 10 datasets were generated using varying training and validation splits. Under-sampling and over-sampling employed random sampling to create 10 datasets each. The results are presented in Table 7.
As shown, the over-sampling technique achieved the highest average accuracy, precision, and F1-score, indicating better overall predictive performance. Although under-sampling slightly outperformed in recall, over-sampling offered a more balanced and robust result. Given the limited size of survey-based data, over-sampling was selected despite potential overfitting risks, as it avoids the information loss associated with under-sampling. Consequently, a dataset with 243 over-sampled instances was used for the machine learning analysis.
Variable selection
The method used for variable selection in this study was the wrapper approach. The wrapper method selects subsets of input variables and applies a specific algorithm to evaluate performance, thereby determining the optimal set of input variables through iterative subset formation and algorithm application (Witten and Frank, 2005). This approach can be divided into two main strategies: backward elimination and forward selection.

Backward elimination removes the least significant variable in each iteration, develops a predictive model, and recalculates the importance ranking of the remaining independent variables. This allows the importance of the current variables to be assessed without the influence of previously excluded variables. Conversely, forward selection calculates the importance ranking of variables at the outset and then adds the most important variable to the model one at a time, evaluating the model’s performance and stopping at the appropriate point (Witten and Frank, 2005). In this study, backward elimination was chosen as the variable selection method, as it is generally recognized for its superior performance in identifying optimal feature sets.
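A minimal greedy sketch of the backward-elimination wrapper follows. It is not the study’s Weka implementation: `score_fn` stands in for training and evaluating the classifier on a feature subset, and the toy scoring function in the test is hypothetical.

```python
def backward_eliminate(features, score_fn, min_features=1):
    """Greedy wrapper: at each step, drop the single feature whose
    removal yields the best score; stop when every removal hurts."""
    current = list(features)
    best_score = score_fn(current)
    while len(current) > min_features:
        # score every one-feature-removed candidate subset
        candidates = [(score_fn([f for f in current if f != g]), g)
                      for g in current]
        cand_score, worst = max(candidates)
        if cand_score < best_score:
            break  # no removal improves the model; stop
        best_score = cand_score
        current.remove(worst)
    return current, best_score
```

With a scoring function that rewards genuinely useful features and slightly penalizes subset size, the loop converges on the informative subset, mirroring how the study pruned provisional competencies down to the final core set.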
Development of the prediction model
In this study, machine learning was employed with a focus on enhancing prediction accuracy, rather than relying on traditional statistical methods (Dua and Du, 2016; Mitchell, 1997). By learning from extensive training data, machine learning algorithms are capable of recognizing patterns and minimizing classification and prediction errors for individual records. These algorithms are generally classified as supervised, unsupervised, or semi-supervised, depending on whether the target variable is labeled. Among these, decision trees are a commonly used supervised learning algorithm. They classify data based on predefined labels and utilize a flowchart-like structure that splits variables into binary decisions, making the model both intuitive and interpretable (Genuer and Poggi, 2020; Smith, 2017). This approach makes decision trees especially effective for identifying rules and patterns in raw data, which is essential for predicting business outcomes (Rokach and Maimon, 2015).
In this study, a C4.5-based decision tree algorithm was used to derive performance rules associated with actual business growth. As a classification analysis method, C4.5 enhances prediction accuracy by partitioning data based on the gain ratio, thereby maximizing the informational value of input variables. Moreover, C4.5 is known for its high explanatory power, generating transparent rules that help clarify links between managerial competencies and business growth. Although other machine learning algorithms such as random forests or artificial neural networks may achieve higher predictive performance, they are limited in interpreting and extracting rule-based patterns, which is essential for managerial competency evaluation.
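The gain ratio that C4.5 uses as its split criterion is information gain normalized by split information, which penalizes splits with many small branches. A self-contained sketch for a categorical split (illustrative data only):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a class-label list, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gain_ratio(values, labels):
    """C4.5 split criterion: information gain / split information,
    for partitioning `labels` by the categorical attribute `values`."""
    n = len(labels)
    groups = {}
    for v, y in zip(values, labels):
        groups.setdefault(v, []).append(y)
    info_gain = entropy(labels) - sum(
        len(g) / n * entropy(g) for g in groups.values())
    split_info = -sum(
        (len(g) / n) * math.log2(len(g) / n) for g in groups.values())
    return info_gain / split_info if split_info else 0.0
```

A perfectly class-separating binary attribute attains a gain ratio of 1.0, while a constant attribute scores 0, which is why C4.5 prefers the most informative competency variable at each node.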
A total of 42 provisional competencies were initially selected as input variables. Variable selection was conducted using a backward elimination approach, iteratively removing the least significant variables to optimize model performance. The data analysis was performed using Weka 3.8.6, an open-source platform widely used in academic research. The dataset was divided into training and testing sets at an 80:20 ratio, and the decision tree’s minNumObj parameter was set to 7. This configuration yielded a prediction accuracy of 61.2%, well above the 33.3% chance baseline for the three-class target. The results for optimal prediction rates and variable selection are presented in Table 8.
Among the predictive models developed through backward elimination, the core competencies were defined as the independent variables in the model that achieved the highest performance in predicting high company growth. This analysis directly addresses RQ1: What are the core competencies of managers in IP-based startups that are related to business growth?
Based on the results of the predictive model with the highest prediction rate, a total of 13 variables were used (i.e., C18, C2, C12, C30, C1, C17, C32, C20, C9, C8, C27, C25, and C28), achieving a top prediction rate of 61.2%. Accordingly, managers’ core competencies were finalized to be a total of 13. When the tentative managerial competencies were initially derived, we structured them into three categories using the Delphi method and workshops. Analyzing the derived core competencies across these three categories revealed that balanced results were obtained without any bias toward a specific category, thereby providing further evidence that the competency modeling method using the machine learning algorithm was appropriate. The structured and summarized results of the competency modeling for managers of IP-based startups are presented in Table 9.
Using the selected core competencies, a decision tree analysis was conducted to derive growth-level rules for IP-based startups. The accuracy of the rules generated by the final decision tree ranged from 45% to 80%, with an average of 63%, indicating robust performance. Given that the target variable was categorized into three groups (i.e., high, medium, and low growth), this level of accuracy is notably above average. The decision tree analysis yielded 11 growth-level rules, which are visually presented in Fig. 2.
(This figure describes the rules of key competencies according to the classification of the target variable, which is the annual average growth rate).
As Fig. 2 illustrates, the decision tree rules are distributed as follows: 3 rules for high-growth startups, 2 for medium-growth startups, and 6 for low-growth startups. This classification is also summarized in Table 10. The derived rules not only distinguish growth levels but also highlight the importance of core competencies at each node. The machine learning-based competency modeling presented in this study offers both detailed performance rules and priority information among competencies, enabling practical applications. As shown in Table 10, the growth-level rules can serve as reference indicators in managerial competency evaluations.
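Reading growth-level rules off a decision tree amounts to enumerating root-to-leaf paths. A minimal sketch is shown below; the variable codes C18 and C2 echo the competency codes mentioned above, but the tiny tree and its thresholds are purely illustrative, not the study’s fitted model.

```python
def extract_rules(tree, path=()):
    """Walk a nested-dict decision tree and emit one (conditions, label)
    rule per leaf, where conditions is the tuple of tests on the path."""
    if not isinstance(tree, dict):          # reached a leaf: growth level
        return [(path, tree)]
    rules = []
    for condition, subtree in tree["branches"].items():
        rules.extend(
            extract_rules(subtree, path + ((tree["feature"], condition),)))
    return rules
```

Applied to a real C4.5 tree, each emitted rule reads like those in Table 10, e.g. “if C18 > 3 and C2 > 4, predict high growth,” with earlier conditions reflecting higher-priority competencies.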
Development of a technology evaluation model using the evaluation center technique
The evaluation model for managerial competencies in IP-based startups
Managerial competency evaluation in technology assessments has traditionally lacked the systematic structure observed in other assessment components. Past approaches have often focused on secondary factors and relied heavily on qualitative judgments rather than core competency-based evaluations. In the U.S., the National Technology Transfer Center (NTTC), a leading institution in technology assessments, utilizes its proprietary NTTC TOP Index. Within this framework, managerial competency falls under the broad category of organizational requirements, which includes abstract elements such as leadership. These assessments are typically conducted by technical experts rather than trained evaluators, thereby reducing evaluation validity (Spychalski et al. 1997).
In Korea, the Kibo Technology Rating System (KTRS) is widely used for technology assessments, evaluating rights, technology, marketability, and business feasibility. Managerial competencies are embedded within the business feasibility component, which accounts for approximately 30% of the total score. However, this area lacks clarity and objectivity, often evaluated qualitatively by technical professionals, similar to the NTTC’s model. These limitations are common across advanced countries due to the insufficient identification of managerial competencies and the absence of structured evaluation tools.
To address these gaps, assessment centers (i.e., established evaluation methods in the human resources domain) present a promising alternative. They involve multiple trained assessors evaluating candidates using standardized simulations aligned with specific competencies, thereby enhancing reliability and validity (Thornton and Rupp, 2006; Eurich et al. 2009; Povah and Povah, 2012). Applying assessment center techniques to the managerial aspects of technology evaluations could significantly improve current practices. A comparative summary of traditional evaluation methods and the proposed assessment center-based approach is provided in Table 11.
Based on the managers’ core competencies and performance rules derived in Chapter 3, this section presents a model that applies the assessment center concept to the evaluation of managerial components. Assessment centers utilize various types of evaluation techniques, commonly including in-basket exercises, presentations, group discussions, role-playing, case analysis, oral fact-finding, assigned leader group tasks, and business games. Each technique is matched to the competencies it is best suited to measure. Thornton and Rupp (2006) presented a matrix that matches key competencies with assessment center techniques; a summary of their content is provided in Table 12.
As previously mentioned, this study structured the core competencies of IP-based startup managers into three areas: business, people, and value. By matching Thornton and Rupp’s (2006) research findings with the three areas proposed in this study, the competencies were compared. The results are summarized in Table 13.
For the final selection of assessment center techniques used to develop the evaluation model, industry characteristics were considered. As a result, Thornton and Rupp’s (2006) group discussion and oral fact-finding techniques were excluded, given the practical constraints on assessees’ participation and the characteristics of the assessment subjects and the management environment of IP-based startups. The analysis results achieved by matching the previously identified core competencies with the selected assessment center techniques are presented as a matrix in Table 14.
The 13 core competencies of IP-based startup managers can be matched with the selected assessment center techniques in up to 42 competency–technique combinations, allowing evaluators to select and combine techniques flexibly. Based on these matching results, a detailed evaluation model procedure is presented in Fig. 3.
(This figure is an evaluation model developed by the researcher to explain the detailed procedures divided into three stages).
This section addresses the second research question (RQ2): How can the core competencies of managers and growth-level rules derived from machine learning be applied to technology assessments? The proposed evaluation model consists of three stages: pre-diagnosis, main diagnosis, and post-diagnosis.
In the pre-diagnosis stage, the goal was to analyze the basic characteristics of the assessee and determine suitable evaluation techniques. Based on the competency–technique matching results shown in Table 14, appropriate assessment methods were selected for each core competency. To enable a more comprehensive evaluation, we introduced the Uchida-Kraepelin (U-K) Test, a psychological test well-suited for startup environments. The U-K Test diagnoses the abilities, interests, and personality traits of the assessee through continuous addition tasks and can also analyze the manager’s stress tolerance and perseverance.
In the main diagnosis stage, a full evaluation was conducted based on the finalized competency set and corresponding assessment techniques. Prior to testing, an orientation was provided to familiarize assessees with the process. Multiple methods, including simulation and structured observation, were employed to assess real-time managerial behavior.
The post-diagnosis stage focused on assessor consensus and the validation of results. Evaluators collaboratively reviewed and discussed key behavioral incidents noted during the assessment. The final report included detailed feedback on each competency, identifying both strengths and areas for improvement. This feedback was designed to support the practical development of managerial competencies in IP-based startups.
Discussion
This section explains how the results of the managerial component evaluation model using the assessment center can be applied to technology assessments. Technology assessments typically result in grades or scores. For example, in the KTRS system, the technology assessment is scored out of 100 points, with 30 points allocated to the managerial components. To integrate the results from the newly proposed assessment center-based model, score adjustments are required to align with existing evaluation structures. Three adjustment methods are described herein:
(1) Proportional adjustment method: This method adjusts the score from the new evaluation model in proportion to the weight of the managerial component in the overall technology assessment, ensuring consistency and fairness. For example, a manager scoring 80 out of 100 in the assessment center would be converted to (80/100) × 30 = 24 points for the managerial component. This approach facilitates score integration across different evaluation systems.
(2) Normalization method: This method considers the distribution of scores by using the mean and standard deviation, enabling the conversion of results from the new evaluation model into the technology assessment score system. It improves score comparability and interpretability by reflecting the assessee’s relative position.
(3) Weighting conversion method: This method assigns weights and adjusts the maximum score according to the proportion of the managerial component in the total score when converting the results from the new evaluation model to the technology assessment score system. It plays a crucial role in integrating results and maintaining objectivity within the technology assessment.
In addition to score conversion, this study proposes using growth-level rules derived from decision tree algorithms to improve performance criteria within the evaluation model. Currently, performance criteria used in assessment centers are typically based on behavioral indicators linked to competencies, assessed using tools such as Behaviorally Anchored Rating Scales (BARS) and Behavior Observation Scales (BOS). Although these tools offer standardized measurement frameworks, they often lack clearly defined reference standards. In contrast, the growth-level rules derived from decision tree algorithms include both competency information and rule-based scores, offering clear criteria for distinguishing between growth levels. Utilizing these rules can help overcome the limitations of traditional assessment center criteria and enable a more objective evaluation standard. Specific measures include:
(1) Prioritizing managerial competencies: For example, the high-growth rule “Strategic Thinking >3 & Direction Setting and Decision-Making >3 & Sales Force and Market Development Capability >3 & Playing Coach ≤4 & Recognition of IP-based Business Opportunities >4” from Table 10 can guide the prioritization of strategic thinking, direction setting and decision-making, sales force and market development capability, playing coach, and recognition of IP-based business opportunities.
(2) Using criterion scores as weighting factors: In the example rule, the scores of 3 and 4 can serve as criterion scores. Instead of a uniform interval scale, weights can be assigned by distinguishing between “excellent” and “insufficient” relative to these criterion scores.
In summary, incorporating both score adjustment methods and data-driven performance rules into the managerial competency evaluation model enhances the reliability, validity, and practical relevance of technology assessments.
Conclusion
The importance of technology assessments for IP-based startups is growing, yet evaluations often overlook core managerial competencies and instead focus on secondary factors, despite the widespread recognition that managerial competencies are critical to startup success. To address this gap, this study identified managers’ core competencies in IP-based startups and derived performance-linked rules based on actual business outcomes. The results were matched with techniques from assessment centers, recognized as a form of objective personnel evaluation, to propose a method for systematically evaluating managerial competencies during technology assessments. Data were collected from actual managers of IP-based startups, and a machine learning-enhanced competency modeling approach was applied. As a result, 13 core competencies of IP-based startup managers were identified. Based on the annual growth rates of actual companies, 11 growth-level performance rules were discovered, including 3 for high-growth models, 2 for medium-growth models, and 6 for low-growth models. These findings were incorporated into an evaluation model based on the assessment center framework for the systematic measurement of managerial components.
The theoretical implications of this study can be summarized in three key points. First, it advances traditional competency modeling by integrating machine learning, shifting the focus from expert-driven identification of past behaviors to data-driven prediction of performance. Second, it addresses a significant research gap by focusing on IP-based startups—a field recognized for its importance but lacking sufficient research on managerial competencies. Using data from actual managers, the study applied a novel analytical approach. Third, this study is the first to propose a structured evaluation model for managerial components in technology assessments, an area that remains underdeveloped even in technologically advanced countries.
Practically, this study provides several insights. The core competencies and various growth-oriented rules derived from the study can be useful not only for managers of IP-based startups, but also for prospective entrepreneurs. For instance, managers can receive tailored strategies for improving practical management activities, while prospective entrepreneurs can benefit from startup programs that focus on the competencies that truly matter.
This study applied a novel competency modeling methodology using machine learning to identify the core competencies of managers in IP-based startups and the performance rules linked to growth. Furthermore, it developed and proposed an evaluation model to systematically assess the managerial component within technology assessments using the data-driven results obtained.
However, several limitations need to be addressed in future research. The study did not sufficiently reflect a diverse range of regions and industries, which limits its generalizability. As this research focused on IP-based startups, collecting data from a wider array of regions and industries and applying the data to the model could enhance the generalizability of technology assessments. In addition, the study did not incorporate the execution results of the evaluation model on actual IP-based startup managers. Including the execution and analysis of the model’s results in future research could significantly enhance the validity and reliability of the evaluation model.
Data availability
Data are available from the corresponding author upon reasonable request.
References
Ahn J (2008) A study on the relationship between success factors of IT venture companies and marketing performance. Master’s degree, Changwon National University
Anderson D (1997) Strands of system: the philosophy of C.S. Peirce. Purdue University Press, West Lafayette
Boyatzis R (1982) The competent manager: A model for effective performance. John Wiley, New York
Byun J, Kim Y, Lee B (2022) A study on the importance evaluation of decision factors in startup investment by accelerator investors and entrepreneurs. Ventur Startup Res 17(4):45–55
Choi G, Yoo D (2016) A study on the key factors and patterns for continuous employment retention after returning to the original workplace for injured workers based on data mining. J Vocational Rehabil 26(3):21–38
Choi S, Han I, Yoon B (2022) The impact of startup CEO characteristics on the investment amount in Series A venture companies. Ventur Startup Res 17(4):17–30
Cohen W, Klepper S (1992) The anatomy of industry R&D intensity distributions. Am Econ Rev 82:773–799
Coulibaly S, Erbao C, Mekongcho T (2018) Economic globalization, entrepreneurship, and development. Technol Forecast Soc Change 127:271–280
Dubois D, Prade H (1998) An introduction to fuzzy systems. Clin Chim Acta 270(1):3–29
Dua S, Du X (2016) Data mining and machine learning in cybersecurity. CRC Press, Boca Raton, FL
Eurich T, Krause D, Cigularov K, Thornton G (2009) Assessment center: current practices in the United States. J Bus Psychol 24(4):387–407
Ewing D (1992) Future competencies needed in the preparation of secretaries in the State of Illinois using the Delphi technique. Ph.D. Dissertation, University of Iowa
Finkelstein S (1988) Managerial orientations and organizational outcomes: the moderating roles of managerial discretion and power. Ph.D. Thesis, Columbia University
George S (2022) Competence and competency frameworks. CIPD, Factsheets
Genuer R, Poggi J (2020) Random forests with R. Springer, NY
Green P (1999) Building robust competencies: linking human resource systems to organizational strategies. Jossey-Bass, San Francisco
Gupta A (1988) Contingency perspectives on strategic leadership: current knowledge and future research directions. In: Hambrick DC (ed) The executive effect: concepts and methods for studying top managers. JAI Press, Greenwich, CT
Hambrick D, Mason P (1984) Upper echelons: the organization as a reflection of its top managers. Acad Manag Rev 9(2):193–206
Jacobs R (1989) Systems theory applied to human resource development. In: Gradous DB (ed) Systems theory applied to human resource development: theory-to-practice monograph. American Society for Training and Development, Alexandria, VA
Jeong D (2019) The nonlinear relationship between the proportion of intellectual property and funding in startups: the moderating effect of the founder’s knowledge level. Ventur Startup Res 14(5):1–11
Jeon Y, Kim J (2014) Development of a core job competency model for HRD managers in enterprises. J Agric Educ Hum Resour Dev 37(2):11–137
Ji H, Lee Y (2022) Analysis of factors determining the choice of small manufacturing enterprises by four-year college graduates using decision tree analysis. Korean J Soc Sci Res 41(1):133–155
Jung T (2004) Determinants of performance in small manufacturing enterprises: CEO competencies, organizational capabilities, and competitive strategy. Doctoral dissertation, Yeungnam University
Kim M, Lee M (2021) The impact of the characteristics of early-stage technology startups on management performance: a comparison of market orientation, technology orientation, and CEO competencies. Comp Econ Res 28(1):141–165
Kim S, Hwang H (2022) The impact of startup founders’ core competencies and corporate competitiveness on management performance: Focusing on the moderating effect of open innovation. J Korea Acad Ind Cooperation Soc 23(12):427–441
Kim S, Lee S (2020) A study on the impact of the entrepreneurial competencies of SMEs’ CEOs on management performance. J Prof Manag 23(4):1–24
Lawshe C (1975) A quantitative approach to content validity. Pers Psychol 28(4):563–575
Lee H, Hwang B, Kong C (2017) A study on the factors determining the early success of startups. Ventur Startup Res 12(1):1–13
Lee H, Kim M, Kim E (2012) A study on the impact of patent activities of technology startups on early company performance. Ventur Startup Res 7(3):45–53
Le Trinh T (2019) Factors affecting startup performance of small and medium-sized enterprises in Danang city. Entrepreneurial Bus Econ Rev 7(3):187–203
Lucia A, Lepsinger R (1999) The art and science of competency models: pinpointing critical success factors in organizations. Jossey-Bass, San Francisco, CA
McClelland D (1973) Testing for competence rather than for intelligence. Am Psychol 28:1–14
McLagan P (1996) Great ideas revisited. Train Dev 50(1):60–66
Mitchell T (1997) Machine learning. McGraw Hill, New York, NY
Ocean Tomo (2020) Intangible asset market value study. https://oceantomo.com/intangible-asset-market-value-study/. Accessed 7 Jun 2025
Paramba J, Salamzadeh A, Karuthedath S, Rahman M (2023) Intellectual capital and sustainable startup performance: A bibliometric analysis. Herit Sustain Dev 5(1):19–32
Park S (2010) A study on the development of a competency model for social entrepreneurs. HRD Res 12(2):66–67
Park S, Lee C (2011) Development of a competency model for CEOs of SMEs. J Agric Educ Hum Resour Dev 43(1):1–20
Povah N, Povah L (2012) What are assessment centers and how can they enhance organizations? In: Jackson DJR, Lance CE, Hoffman BJ (eds) The psychology of assessment centers. Routledge, New York, NY
Rokach L, Maimon O (2015) Data mining with decision trees: theory and applications, 2nd edn. World Scientific Publishing, Singapore
Salamzadeh A, Tajpour M, Hosseini E, Brahmi M (2023) Human capital and the performance of Iranian digital startups: The moderating role of knowledge sharing behaviour. Int J Public Sect Perform Manag 12(1-2):171–186
Smith C (2017) Decision trees and random forests: A visual introduction for beginners. Blue Windmill Media
Spencer L, Spencer S (1993) Competency at work: Models for superior performance. John Wiley and Sons, New York, NY
Sparrow P (1996) Competency-based pay: too good to be true? People Manag 12:22–25
Spychalski A, Quinones M, Gaugler B, Pohley K (1997) A survey of assessment center practices in organizations in the United States. Pers Psychol 50(1):71–90
Stevens G (2013) A critical review of the science and practice of competency modeling. Hum Resour Dev Rev 12(1):86–107
Thornton G, Rupp D (2006) Assessment centers in human resource management: strategies for prediction, diagnosis, and development. Erlbaum, Mahwah, NJ
Witten I, Frank E (2005) Data mining: Practical machine learning tools and techniques. Morgan Kaufmann Publishers, San Francisco
Yang S, Kim M, Jeong H (2011) The effects of entrepreneur’s competence and technology commercialization capabilities on business performance of technology-based Start-ups. Asia Pac J Bus Venturing Entrepreneurship 6(4):195–213
Acknowledgements
This study is based on the first author's Ph.D. dissertation and has been revised and supplemented for publication.
Author information
Contributions
WL and DY contributed to data analysis, interpretation, manuscript writing and revision; WL, JS and DY contributed to algorithm design, manuscript editing and proofreading; and WL, JS and DY contributed to the overall conceptual design of the manuscript, data interpretation, manuscript writing, and revision. All authors approved the version to be published and agreed to take responsibility for all aspects of the work.
Corresponding author
Ethics declarations
Competing interests
The authors declare no competing interests.
Ethics approval
Not applicable.
Informed consent
Not applicable.
Additional information
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article
Cite this article
Lee, W., Shin, J. & Yoo, D. Deriving and applying core competencies for intellectual property-based startup entrepreneurs: a data-driven approach to technology evaluation. Humanit Soc Sci Commun 12, 1207 (2025). https://doi.org/10.1057/s41599-025-05540-1