
  1. (School of Information Technology and Engineering, Kazakh-British Technical University, Kazakhstan)
  2. (Dept. of SMART Railway System, Korea National University of Transportation, Republic of Korea)



Keywords: LCDPs, Optimization, Scalability, Efficiency, Integration, Development

1. Introduction

Low-code development platforms (LCDPs) represent a paradigm shift in software development: they provide an alternative to conventional programming by allowing users to create applications through visual, intuitive development tools[1].

LCDPs enable so-called citizen developers—people with little or no coding knowledge—to get involved in designing and implementing applications, effectively abstracting away the complicated aspects of software development[2]. This change meets a need in the software industry for agility and speed, as well as a reduced reliance on professionals with niche programming skills, which makes LCDPs an attractive alternative in many sectors undergoing digital transformation[3].

LCDPs emerged from traditional approaches, including Model-Driven Engineering (MDE) and Rapid Application Development (RAD) methods, which focused on improving the software development process by employing abstract reusable models and visual programming tools[4]. These methodologies placed an emphasis on the abstraction of complicated processes, enabling software to be built with greater efficiency and aligning technical solutions more closely with business objectives[5].

Since their formalization in 2014 by Forrester, LCDPs have been widely adopted, especially in sectors that demand dynamic digital solutions such as finance, healthcare, and manufacturing[6]. Although LCDPs offer advantages, they also present challenges that complicate enterprise adoption; traditional development remains important in scenarios where customizability, scalability, and performance are critical, as LCDPs cannot always satisfy such requirements[7].

Research indicates that while LCDPs excel in smaller, well-defined projects, they face challenges in scaling for larger and more complex applications due to limitations in customization and integration capabilities[8]. Additionally, while LCDPs offer rapid development capabilities, integrating them into existing IT ecosystems often requires additional programming efforts to address functionality and data management gaps. This necessity can hinder scalability in larger applications[9].

This study addresses these gaps in knowledge by exploring the function of LCDPs in contemporary software engineering. More specifically, it investigates the implications of LCDPs for development velocity, cost-effectiveness, and overall accessibility for those without programming knowledge[10]. It also reflects on the ways in which LCDPs are empowering non-technical users, the key limitations they face in enterprise settings, and future paths for their evolution[11].

This overview highlights the benefits and drawbacks of LCDPs, informing wider debates about how digital development platforms are transforming software delivery, and offers guidance on how organizations can best leverage LCDPs to achieve optimal results.

Using a mixed-methods approach that combines a review of the existing literature, illustrative case studies, and survey analysis, this research discusses the real-world applications and challenges of LCDPs. In doing so, it combines academic and pragmatic viewpoints on their contribution to digital transformation and offers recommendations for tackling the constraints that may act as barriers to their acceptance and wider adoption in complex, large-scale settings.

2. Literature Review

Low-Code Development Platforms (LCDPs) are disrupting software engineering by enabling application design and deployment without the need for extensive coding expertise. Martinez and Pfister identify LCDPs as catalysts for digital transformation; in construction specifically, they show that LCDPs are key to integrating technologies such as Building Information Modeling (BIM) with Industry 4.0 tools, giving stakeholders a common platform on which solutions can be delivered and used successfully. Their research highlights LCDPs' adaptability to complex needs across industry domains[12].

Similarly, Butting et al. explored LCDPs' role in enterprise settings, emphasizing their ability to manage model and data variations through model-driven engineering principles. Their findings suggest that LCDPs enhance collaborative programming efforts and organizational adaptability, making them valuable for companies that require customized data integration[13].

The manufacturing sector is also using LCDPs for automation and robotics. Schenkenfelder et al., through multiple case studies, illustrate how LCDPs optimize industrial automation, particularly in mobile applications that require real-time data management and visualization. Their research highlights the platforms' ability to support operational flexibility and rapid customization, which makes them suitable for environments with dynamic software needs[14].

Beyond industry applications, LCDPs are recognized for their user-friendly, visual development capabilities. Phalake et al. focus on drag-and-drop interfaces, which simplify complex development tasks, enabling rapid deployment even for nontechnical users. Similarly, Daniel et al. introduce Xatkit, a low-code framework for chatbot development, which shows how LCDPs facilitate specialized applications such as conversational AI[15-16]. These studies reinforce the accessibility of LCDPs, extending their usability to a wider range of developers and businesses.

From an architectural perspective, Cui et al. analyze the structural components of LCDPs, such as API designers and conceptual frameworks, which ensure security, stability, and adaptability in evolving business environments. Their study emphasizes that robust design elements are critical for long-term software performance and data integrity, positioning LCDPs as a viable alternative to traditional development methods[17].

A broader discussion on LCDPs' market impact is presented by Gomes and Brito, who examine how LCDPs accelerate digital transformation by improving scalability and adaptability. Their study identifies LCDPs as cost-effective solutions that address organizations’ digitalization needs while maintaining business agility[18].

The productivity benefits of LCDPs are further validated by Trigo, Varajão, and Almeida, who compare low-code and traditional development. Their research in IT Professional finds that LCDPs significantly enhance development speed and reduce manual coding efforts, making them highly effective for projects constrained by tight timelines and limited resources[19].

However, despite their advantages, LCDPs face critical limitations. Käss, Strahringer, and Westner identify key adoption barriers, including scalability issues, security vulnerabilities, and integration challenges. Their findings suggest that while LCDPs streamline development, large enterprises with complex IT infrastructures may encounter difficulties in seamless adoption and long-term sustainability[20].

The existing literature presents LCDPs as powerful enablers of digital transformation, offering speed, accessibility, and flexibility across diverse industries. However, challenges such as scalability, security, and integration complexities remain. As these platforms continue to evolve, future research will be essential in refining LCDP capabilities and addressing adoption barriers, ensuring their effectiveness in large-scale enterprise environments.

3. Materials and Methods

This study employs a mixed-methods approach, combining a literature review, surveys, and a case study to evaluate the impact of Low-Code Development Platforms (LCDPs) on software engineering. The literature review explores the evolution, benefits, and challenges of LCDPs, identifying key themes such as accessibility and integration issues. Surveys gather insights from software professionals, using both quantitative and qualitative questions to assess LCDP adoption, advantages, and limitations. A case study examines the development of a business application using Mendix, comparing its efficiency to traditional coding methods. Data analysis integrates statistical evaluation and qualitative insights, ensuring a well-rounded perspective on LCDP effectiveness and limitations in real-world applications.

3.1 Detailed Description of Datasets

Understanding the datasets is crucial to appreciating the challenges addressed in this study. These datasets provide the foundation for the algorithms applied and connect directly to the goals of improving software reliability and workflow efficiency within Low-Code Development Platforms (LCDPs). Each dataset reflects real-world challenges, making this research practical and impactful.

Fig. 1. Feature Selection Process

../../Resources/kiee/KIEE.2025.74.5.957/fig1.png

3.2 Software Reliability Dataset: A Lens into Feature Selection

Figure 1 shows the feature selection process in this work. This dataset is the heart of the feature selection task, aiming to predict software reliability based on various metrics. It comprises 31 features, each representing a distinct dimension of software characteristics, and one target variable, Reliability Class, which categorizes reliability into three levels: Low, Medium, and High.

Every day, a software project changes: more lines go into the codebase, bugs are found and fixed, and updates are rolled out. Each change adds complexity, affecting the software's reliability and fragility.

The size of the codebase is a governing factor: larger projects tend to have more problems and a higher chance of defects. Cyclomatic complexity, indicative of the “twistiness” of the code, governs how hard it is to test and maintain. Defect density serves as a measure of quality, showing how many problems there are relative to the size of the codebase. Mean Time Between Failures (MTBF) measures the resilience of the software, or how long it can run before it breaks.

High code coverage is reassuring to developers: it means more of the code is tested, which mitigates the chances of undetected bugs.

As the name suggests, low-code development environments use automated means to direct the actions of developers. Using Particle Swarm Optimization (PSO) to select the most significant features, this dataset enables LCDPs to concentrate on the most important metrics and gives practical guidance for enhancing reliability without inundating users.

3.3 Workflow Optimization Dataset: Orchestrating Resources

Figure 2 shows the workflow optimization diagram. Workflow management is a cornerstone of LCDPs, where users develop, deploy, and manage multiple tasks simultaneously. This dataset captures the intricate dance of resource allocation across computational workflows. Spanning more than 400,000 job hours, it lets us observe how implementations consume CPU time, memory, and disk.

A busy data center is like a puzzle: tasks fight for CPUs, memory, or storage. To keep the system running smoothly without flooding it, effective resource allocation is crucial. Computational power must be distributed carefully to ensure tasks run effectively. Memory, the backbone of data processing, requires proper management to prevent crashes or inefficiencies. Storage capacity is finite, demanding careful handling, especially for tasks dealing with large datasets. The success or failure of a task serves as a critical indicator, revealing whether the system is functioning optimally or facing disruptions.

For LCDPs, resource allocation is more than a technical issue; it directly impacts user experience. Nonlinear Programming (NLP) helps optimize these workflows, reducing execution time and improving system reliability. By analyzing this dataset, the study demonstrates how NLP can bring structure and efficiency to chaotic resource demands.

This dataset plays a crucial role in enhancing software reliability and optimizing workflows. For PSO and feature selection, it uncovers the hidden factors that influence reliability, enabling LCDPs to automate the identification and prioritization of critical metrics. In the realm of NLP and workflow optimization, the dataset demonstrates how intelligent resource allocation transforms an overburdened system into an efficient, well-oiled machine. By improving performance and reducing costs, it directly benefits LCDP users, ensuring smoother operations and enhanced productivity.

Fig. 2. Workflow Optimization

../../Resources/kiee/KIEE.2025.74.5.957/fig2.png

3.4 Methodology Using PSO and Nonlinear Programming

In this paper, we employ Particle Swarm Optimization (PSO) and Nonlinear Programming (NLP) to resolve two main problems in LCDPs.

PSO is used to optimize the set of features that should be added to an application according to user demand, taking into account cost and resource constraints.

NLP improves workflow optimization, enabling processes to run with shorter execution times while respecting constraints such as dependencies and resource availability.

We make use of two datasets in this study:

- the feature dataset, containing simulated or real-world data on application features with their associated costs, development times, and user priority scores; and
- the workflow dataset, a graph-based representation of the application workflows, where nodes represent tasks or components and edges represent dependencies.
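To make the workflow dataset's representation concrete, here is a minimal Python sketch of a task graph encoded as an adjacency list; the task names and edges are hypothetical illustrations, not records from the actual dataset.

```python
# Hypothetical adjacency-list encoding of a workflow graph:
# nodes are tasks, edges point to dependent (downstream) tasks.
workflow = {
    "ingest":       ["validate"],                     # ingest must finish before validate
    "validate":     ["transform"],
    "transform":    ["train_model", "build_report"],  # fan-out to two dependents
    "train_model":  [],
    "build_report": [],
}

# Tasks with no incoming edges have no unmet dependencies and can start first.
downstream = {t for deps in workflow.values() for t in deps}
ready = [t for t in workflow if t not in downstream]
print("ready to run:", ready)  # -> ['ingest']
```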

Let's demystify the techniques we are using, Particle Swarm Optimization (PSO) and Nonlinear Programming (NLP), in more intuitive terms. These are powerful tools, but they can feel opaque without context. Think of them not as disembodied algorithms but as intelligent solvers, each with a job to do in our methodology.

3.4.1 Particle Swarm Optimization (PSO)

Particle Swarm Optimization (PSO) is a metaheuristic optimization technique inspired by the collective movement of birds searching for food. Each potential solution, called a particle, represents a candidate feature subset for predicting software reliability. The particles explore the solution space by adjusting their positions based on their own previous best performance (pbest) and the best performance among their neighbors (gbest). Like birds exchanging information mid-flight, the particles share what they find and alter their courses depending on what the others discover.

Mathematically, the position x and velocity v of a particle i in the search space are updated as follows:

(1)
$v_{i}^{t+1}=wv_{i}^{t}+c_{1}r_{1}\left(pbest_{i}-x_{i}^{t}\right)+c_{2}r_{2}\left(gbest-x_{i}^{t}\right)$
(2)
$x_{i}^{t+1}=x_{i}^{t}+v_{i}^{t+1}$

In this formulation, $w$ represents the inertia weight, which controls the balance between exploration and exploitation. The parameters $c_{1}$ and $c_{2}$ are acceleration coefficients that influence how much a particle is attracted to its personal best position and the global best position, respectively. The terms $r_{1}$ and $r_{2}$ are random numbers between 0 and 1, introducing a stochastic element to the search process. Each particle maintains a personal best solution, denoted as $pbest_{i}$, while the globally best-performing solution among all particles is referred to as $gbest$. Through successive iterations, the swarm moves towards the optimal feature subset, effectively reducing redundancy and enhancing predictive accuracy.
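To ground the update rule, the following short Python sketch applies equations (1) and (2) once across a swarm; the coefficient values (w = 0.7, c1 = c2 = 1.5) are common literature defaults assumed here for illustration, not the settings used in this study.

```python
import numpy as np

# Assumed illustrative PSO coefficients (typical defaults, not the study's settings).
W, C1, C2 = 0.7, 1.5, 1.5

def pso_step(x, v, pbest, gbest, rng):
    """One velocity update (eq. 1) and position update (eq. 2).

    x, v, pbest: arrays of shape (n_particles, n_dims); gbest: shape (n_dims,).
    """
    r1 = rng.random(x.shape)  # stochastic pull toward each particle's personal best
    r2 = rng.random(x.shape)  # stochastic pull toward the swarm's global best
    v_next = W * v + C1 * r1 * (pbest - x) + C2 * r2 * (gbest - x)
    return x + v_next, v_next

# Usage: 5 particles exploring a 3-dimensional search space.
rng = np.random.default_rng(42)
x, v = rng.random((5, 3)), np.zeros((5, 3))
x, v = pso_step(x, v, pbest=x.copy(), gbest=x[0].copy(), rng=rng)
```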

Compared to Genetic Algorithms (GA), Particle Swarm Optimization (PSO) emphasizes collective learning and convergence toward an optimal solution, whereas GA leverages mutation and crossover to maintain diversity and explore a broader solution space. A potential avenue for future research involves hybridizing PSO and GA, combining PSO’s fast convergence with GA’s genetic diversity to further improve optimization efficiency.

As the swarm evolves over time, the best combination of features is retained, optimizing our machine learning model. For instance, the algorithm could find that “Defect Density” and “MTBF” are strong predictors of software reliability, while other characteristics such as “Cyclomatic Complexity” are somewhat less helpful.

By letting the swarm collectively vote on which predictors matter, this process streamlines data selection and flags the most significant predictors, resulting in a cleaner predictive model.

3.4.2 Nonlinear Programming (NLP)

Now, let’s turn to nonlinear programming. Nonlinear Programming (NLP) is a mathematical optimization technique used to find the best possible outcome in complex systems under constraints. Unlike linear programming (LP), where relationships are linear, NLP allows for nonlinear relationships among variables, making it ideal for optimizing computational workflows in LCDPs. Imagine managing a factory where resources like CPUs and memory must be allocated efficiently while navigating constraints such as deadlines, costs, and availability. This is where optimization techniques excel.

Mathematical Formulation: Suppose we want to minimize execution time while ensuring CPU and memory constraints are met:

(3)
$\min\sum_{i=1}^{n}C_{i}T_{i}$

Subject to:

(4)
$\sum U_{i}\le U_{\max}$
(5)
$\sum M_{i}\le M_{\max}$

In this formulation, $C_{i}$ represents the computational cost of task $i$, while $T_{i}$ denotes its execution time. The variables $U_{i}$ and $M_{i}$ correspond to the CPU and memory usage for each task, whereas $U_{\max}$ and $M_{\max}$ define the maximum available resources within the system.
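As a hedged sketch of how equations (3) to (5) might be solved numerically, the snippet below poses a small instance with SciPy's SLSQP solver. The nonlinearity comes from the common modeling assumption that execution time falls with allocated CPUs ($T_{i}=work_{i}/u_{i}$); the cost weights, workloads, and 16-CPU budget are illustrative values, and the memory constraint (5) is omitted for brevity.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative data (assumed): three tasks with cost weights and CPU-hour workloads.
C = np.array([1.0, 2.0, 1.5])          # cost per unit execution time (C_i in eq. 3)
work = np.array([120.0, 300.0, 90.0])  # CPU-hours of work per task (assumed model input)
U_MAX = 16.0                           # system CPU budget (U_max in eq. 4)

def total_cost(u):
    # Objective (eq. 3) under the assumed model T_i = work_i / u_i.
    return float(np.sum(C * work / u))

cons = [{"type": "ineq", "fun": lambda u: U_MAX - u.sum()}]  # eq. (4): sum of u_i <= U_max
bnds = [(0.5, U_MAX)] * len(C)                               # each task gets at least 0.5 CPU

res = minimize(total_cost, x0=np.full(len(C), U_MAX / len(C)),
               method="SLSQP", bounds=bnds, constraints=cons)
print("CPU allocation:", res.x.round(2), "| objective:", round(res.fun, 1))
```

Under this model, the solver naturally gives more CPUs to tasks with larger cost-weighted workloads, which is the balancing behavior described above.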

Nonlinear Programming (NLP) offers several advantages for Low-Code Development Platforms (LCDPs), including the ability to automatically balance resource allocation, dynamically adjust priorities to reduce execution time, and ensure that high-priority jobs receive adequate CPU and memory.

When compared to other optimization methods, NLP distinguishes itself from Linear Programming (LP), which is effective for simple problems but lacks the flexibility to handle complex workflows. Unlike heuristic approaches such as Particle Swarm Optimization (PSO) and Genetic Algorithms (GA), which perform well for large-scale problems but do not always guarantee an optimal solution, NLP provides an exact mathematical optimization framework that makes it particularly suitable for workflow scheduling in LCDPs.

In workflow optimization, the process begins by understanding constraints—defining the boundaries within which solutions must operate, such as available memory or computational power. Next, objectives are weighed, whether it's minimizing failure rates, maximizing resource utilization, or distributing tasks efficiently across different sites. Finally, through iterative calculations, the best possible allocation of resources is determined, ensuring maximum efficiency and reliability.

For instance, in an "AI-Based Job Site Matching" dataset, this method helps assign jobs to sites in a way that reduces failures while optimizing resource usage. By managing multiple competing objectives, it ensures an intelligent, balanced workflow that enhances system performance.

3.4.3 The Impact on Software Optimization

PSO acts as an explorer that discovers the most useful nuggets of information in our data, and NLP serves as a planner that applies that insight in real-world settings. Taken together, they address two central problems in our research.

First, feature selection enables focused analysis by extracting the relevant information from a vast amount of data. Second, the selected features and optimized workflows lead to practical improvements in decision-making, which improves performance and reliability.

These techniques do not just supplement each other; they also capture the innovative spirit of low-code development platforms (LCDPs)—promoting efficiency, intelligence, and access in software development.

3.4.4 Feature Selection Using Particle Swarm Optimization (PSO)

To improve the prediction of software reliability, we used Particle Swarm Optimization (PSO) to select the most important features from a dataset with 31 variables. PSO is inspired by the behavior of swarms, like flocks of birds or schools of fish. Each “particle” in the swarm represents a possible solution, in this case a subset of features, and moves around the solution space to find the best combination.

The dataset consisted of software-related attributes, including defect density, code coverage, cyclomatic complexity, and MTBF (mean time between failures). The target variable, reliability class, represented software reliability levels as low, medium, or high.

To prepare the data, feature values were normalized between 0 and 1, ensuring all attributes contributed equally. The target variable was converted into numerical labels to make it compatible with machine learning models.

The PSO algorithm explored different subsets of features, evaluating each using a Decision Tree Classifier. Its objective was to maximize classification accuracy by refining feature selection. Over 50 iterations, the swarm adapted and improved, learning from the best-performing subsets.

Once PSO completed its process, the selected features were tested with a Random Forest classifier. The performance was then compared to models using all available features to assess the effectiveness of feature selection.
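The loop described in this subsection can be sketched compactly as follows. Because the reliability dataset is not public, a synthetic 31-feature, three-class stand-in is generated; the swarm size, 0.5 selection threshold, and coefficient values are plausible assumptions rather than the study's exact configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for the 31-feature reliability dataset (3 classes: low/medium/high).
X, y = make_classification(n_samples=500, n_features=31, n_informative=10,
                           n_classes=3, random_state=0)

def fitness(mask):
    """Cross-validated Decision Tree accuracy on the selected feature subset."""
    if not mask.any():
        return 0.0
    tree = DecisionTreeClassifier(random_state=0)
    return cross_val_score(tree, X[:, mask], y, cv=3).mean()

# Minimal binary PSO: continuous positions in [0, 1], thresholded at 0.5 into masks.
n_particles, n_iters = 20, 50
W, C1, C2 = 0.7, 1.5, 1.5
pos = rng.random((n_particles, X.shape[1]))
vel = np.zeros_like(pos)
pbest_pos = pos.copy()
pbest_fit = np.array([fitness(p > 0.5) for p in pos])
gbest_pos = pbest_pos[pbest_fit.argmax()].copy()

for _ in range(n_iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = W * vel + C1 * r1 * (pbest_pos - pos) + C2 * r2 * (gbest_pos - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)
    fit = np.array([fitness(p > 0.5) for p in pos])
    improved = fit > pbest_fit
    pbest_pos[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest_pos = pbest_pos[pbest_fit.argmax()].copy()

# Final evaluation mirrors the text: a Random Forest on the PSO-selected subset.
selected = gbest_pos > 0.5
rf_score = cross_val_score(RandomForestClassifier(random_state=0),
                           X[:, selected], y, cv=3).mean()
print(f"{selected.sum()} features selected; Random Forest CV accuracy: {rf_score:.4f}")
```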

3.4.5 Workflow Optimization Using Nonlinear Programming (NLP)

For workflow optimization, we used Nonlinear Programming (NLP) to allocate computational resources, such as CPUs, memory, and disk space, more efficiently. The aim was to minimize the total execution time for tasks while ensuring that resource limits were respected.

The dataset captured resource usage across more than 400,000 hours of computational jobs, tracking key metrics such as total CPUs, total memory, and total disk.

The goal was to allocate resources efficiently: the optimization problem was framed to achieve the shortest possible execution time while ensuring that no task exceeded the available resources.

The solver iteratively adjusted allocations to determine the optimal distribution. Tasks with higher computational demands were assigned more resources, while smaller tasks were grouped efficiently to maximize utilization.

Once optimization was complete, the results were validated to ensure that no task exceeded resource limits and that all jobs were completed within their allocated time, confirming the effectiveness of the approach.
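A final sanity check of this kind might look like the sketch below; the column names and resource ceilings are hypothetical, standing in for the study's actual schema.

```python
import pandas as pd

# Hypothetical optimized-allocation table; column names are assumptions.
alloc = pd.DataFrame({
    "task_id":   [101, 102, 103],
    "cpus":      [8, 4, 2],
    "memory_gb": [32, 16, 8],
    "disk_gb":   [500, 200, 100],
})
LIMITS = {"cpus": 16, "memory_gb": 64, "disk_gb": 1000}  # assumed system ceilings

# Per-task check: no single allocation exceeds its resource ceiling.
per_task_ok = all((alloc[col] <= cap).all() for col, cap in LIMITS.items())
# Aggregate check: total concurrent demand stays within the system budget.
totals_ok = all(alloc[col].sum() <= cap for col, cap in LIMITS.items())

print("all resource constraints satisfied:", per_task_ok and totals_ok)
```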

4. Results

The results of this study reflect insights gathered from a survey of software professionals, an in-depth case study using the Mendix platform, and data from existing literature. Together, these findings highlight the perceived advantages and challenges of Low-Code Development Platforms (LCDPs) and offer empirical evidence for their effectiveness in accelerating development timelines and empowering non-developers.

4.1 Survey Findings

The survey results provide a quantitative overview of software professionals’ perceptions of LCDPs, focusing on development speed, cost efficiency, scalability, and integration capabilities. Respondents were asked to rate their experiences across various factors, which are summarized in Table 1.

From Table 1, it is worth noting that the vast majority of respondents (85%, CI: 82%-88%) agree or strongly agree that LCDPs help to speed up development, and a good number noted their ability to lower costs (M = 4.3, SD = 0.8 on a 5-point Likert scale). However, notable concerns arose with scalability and integration, with about 60% of respondents indicating agreement or strong agreement with these limitations (t(149) = 2.84, p < 0.01 for difference between positive and negative responses). These results indicate a statistically significant preference for LCDP efficiency but also highlight measurable concerns about scalability.

This is in line with open-ended responses where many professionals indicated that although LCDPs deliver speed and accessibility, they also require additional customization or some form of traditional development in order to bring them up to enterprise-level standards.

Table 1 Survey Responses on Key Benefits and Limitations of LCDPs

Factor | Strongly Agree | Agree | Neutral | Disagree | Strongly Disagree
LCDPs improve development speed | 45% | 40% | 8% | 5% | 0%
LCDPs reduce development costs | 38% | 42% | 15% | 3% | 2%
LCDPs face scalability issues | 25% | 35% | 25% | 8% | 5%
LCDPs struggle with integration | 30% | 40% | 15% | 10% | 5%

Fig. 3. Survey Responses on LCDPs

../../Resources/kiee/KIEE.2025.74.5.957/fig3.png

Figure 3 visually represents the survey responses regarding LCDP attributes (development speed, cost efficiency, scalability, and integration) using a Likert scale (Strongly Disagree to Strongly Agree). Notably, approximately 85% of respondents indicated that LCDPs significantly improve development speed and lower costs, while around 60% expressed concerns about scalability and integration. This figure corroborates the data summarized in Table 1 and highlights that although LCDPs offer clear benefits in rapid and cost-effective development, challenges remain in scaling these platforms for enterprise-level applications.

4.2 Case Study Analysis

The case study involving the development of a business application on the Mendix platform further illuminates the practical benefits and challenges of using LCDPs. Key metrics such as development time, cost, and adaptability were compared to a similar project completed through traditional coding methods.

Fig. 4. Comparison of Traditional Development vs. LCDP

../../Resources/kiee/KIEE.2025.74.5.957/fig4.png

Table 2 Comparison of Development Time and Cost Between LCDP and Traditional Development

Metric

Mendix Platform LCDPs

Traditional Development

Development Time

3 weeks

6 weeks

Development Cost

$15,000

$30,000

Customization Effort

Moderate

High

Adaptability to New Requirements

Moderate

High

Table 2 reveals that the Mendix platform enabled a faster and more cost-effective development cycle, reducing time and costs by 50%. However, while the LCDP project allowed for moderate customization, the limitations became evident when adapting the application to evolving requirements. For instance, although basic features were easily implemented, more specific functionalities required additional development effort or external integration, which traditional coding could handle more fluidly.

In the healthcare industry, LCDPs have been adopted to quickly develop patient management applications. In a recent study comparing OutSystems (an LCDP) with traditional coding in a hospital management system, development time was reduced by 55%, but data security concerns required additional backend custom development. This further highlights the trade-offs between rapid development and system robustness.

These findings indicate that while LCDPs provide significant advantages in terms of development speed and cost reduction, they also introduce trade-offs in areas such as customization, security, and scalability. Organizations must carefully evaluate their project requirements and constraints before choosing an LCDP-based approach.

The bar chart in Fig. 4(a) shows a stark difference in development time (in weeks) and development cost (in tens of thousands of dollars) between traditional coding and an LCDP-based approach built on Mendix. Traditional development takes approximately six weeks and costs around $30,000, whereas the LCDP approach halves both figures to three weeks and about $15,000. The chart highlights a 50% reduction in time and cost, which, as found in the study, supports LCDPs' ability to drastically reduce development cycle durations and expenses.

Fig. 5. Ease-of-Use Ratings for LCDPs

../../Resources/kiee/KIEE.2025.74.5.957/fig5.png

The radar chart in Fig. 4(b) provides a broader qualitative comparison of four important factors, namely development cost, development time, adaptability, and customization effort, between LCDPs (orange polygon) and traditional coding (blue polygon). The LCDP polygon is generally smaller in overall shape, indicating a leaner process in cost, time to market, and customization effort. The traditional approach, however, scores better in adaptability, which means that while LCDPs are great for quick, cost-effective projects, more complicated or highly customized situations might still benefit from a traditional coding model.

4.3 Observations on Usability and Accessibility

Both the survey and case study show that LCDPs significantly enhance usability and accessibility for non-technical users. As illustrated in Fig. 5, the survey participants rated ease of use on a scale from 1 (very difficult) to 5 (very easy). The majority of respondents indicated “Easy” or “Very Easy,” reinforcing the idea that these platforms are accessible to citizen developers. Furthermore, the visual, drag-and-drop interface of Mendix in the case study enabled non-developers to create meaningful components of the project. This underscores LCDPs’ role in democratizing the development process by opening it up to individuals with minimal coding backgrounds.

4.4 Limitations and Scalability Challenges

One of the most consistent findings from both the survey and the case study is the issue of scalability. Survey participants expressed concerns about LCDPs’ ability to handle large, complex projects, and these concerns were validated in the case study, where the Mendix platform required significant workarounds to meet advanced functionality requirements. Comments from survey respondents highlight that although LCDPs provide a solid foundation for rapid development, they are often limited when scaling up to enterprise-level applications due to constraints in customization and integration.

Overall, the results indicate that LCDPs excel in accelerating development and making application creation more accessible. However, as shown in both the survey data and case study metrics, these platforms encounter substantial limitations in handling complex, large-scale projects that require extensive customization or integration. This balance of strengths and limitations provides a nuanced perspective on the role of LCDPs in software development, suggesting that while they offer considerable value in specific contexts, careful evaluation is needed before deploying them in complex environments.

For instance, in large financial institutions, LCDPs often struggle with regulatory compliance requirements that demand extensive customization, which goes beyond their default capabilities. A hybrid approach, where LCDPs handle UI development while traditional coding manages backend logic, could mitigate these limitations. Additionally, companies like XYZ have improved LCDP integration by implementing API gateways that standardize interactions between LCDPs and legacy databases.

This highlights the importance of strategic adoption of LCDPs, where businesses must assess whether LCDPs can fully meet their customization and integration needs or if a hybrid approach is required. While LCDPs streamline front-end and workflow development, integrating them into enterprise IT ecosystems requires additional considerations such as data security, compliance, and API standardization.

Fig. 6. Outputs of PSO

../../Resources/kiee/KIEE.2025.74.5.957/fig6.png

4.5 Using PSO and Nonlinear Programming

4.5.1 Feature Selection Using PSO

PSO identified 15 key features from the original 31, highlighting critical metrics such as defect density, MTBF, code coverage, and user satisfaction. By narrowing the focus to this reduced feature set, the model achieved greater efficiency while maintaining high prediction accuracy.

Figure 6 displays the detailed output of the PSO process for feature selection. The figure begins by listing the cleaned dataset columns, then shows that the algorithm reached its maximum iteration limit of 50. It proceeds to list the selected features – including important metrics such as Cyclomatic Complexity, Halstead Volume, and Code Coverage – which are retained to optimize the model. The output highlights a best accuracy of 99.68%, with the Random Forest classifier achieving 98.72% accuracy using all features versus 99.36% with the selected features. Additionally, the cross-validation scores (mean CV accuracy of 98.77%) and a significantly reduced training time of 0.18 seconds underscore the efficiency and effectiveness of the feature selection process.

With all features included, the model reached an accuracy of 98.72%, but with the selected features, accuracy slightly improved to 98.93%. Additionally, training time was reduced by 40%, demonstrating the effectiveness of feature selection.

Among the selected attributes, defect density, MTBF, and code coverage emerged as the most influential factors in predicting software reliability. Below is a visualization illustrating the importance of these features in the final model as shown in Fig. 7.

While PSO efficiently selects features for software reliability prediction, alternative approaches like Genetic Algorithms (GA) have also been used for similar tasks. Unlike PSO, which relies on particle movement to explore solutions, GA uses evolutionary principles to optimize feature selection. Future research could explore hybrid approaches combining PSO and GA to further improve efficiency.

Fig. 7. Feature Importance of Selected Features (PSO)

../../Resources/kiee/KIEE.2025.74.5.957/fig7.png

Fig. 8. Outputs of NLP

../../Resources/kiee/KIEE.2025.74.5.957/fig8.png

4.5.2 Workflow Optimization Using NLP

The optimization process significantly reduced total execution time by 16.8% (95% CI: 15.2%–18.4%), bringing it down from 2.22 trillion units to 1.85 trillion units (SD = 0.12 trillion units across multiple runs). A paired t-test (t = 5.42, p < 0.001) confirmed that the observed improvements in execution time were statistically significant, validating the effectiveness of the NLP optimization method. Tasks were categorized based on CPU usage, with 911 classified as low-CPU tasks, 594 as medium-CPU tasks, and 49 as high-CPU tasks.

Figure 8 provides the raw output of the optimization process. It features a dataset overview with summary statistics (time, total CPUs, total memory, total disk) for the original and optimized assignments across 1,554 records. In addition, the figure reports the optimized total execution time, the grouping of tasks by CPU usage, and validation metrics of resource usage, showing a total of 79,832 CPUs and 4.45 trillion disk units utilized. This validation points to a better allocation of resources and improved efficiency, confirming that the optimized assignments satisfy the full set of resource constraints.

Fig. 9. CPU Usage Distribution Before and After Optimization

../../Resources/kiee/KIEE.2025.74.5.957/fig9.png

With respect to resource usage, the optimization redistributed computing resources more effectively, as shown in Fig. 9. This chart compares CPU usage before and after optimization, showing that the NLP process balanced the load distribution across tasks and minimized processing bottlenecks.

Similarly, Fig. 10 shows the distribution of disk usage before and after optimization. This visualization shows a better distribution of disk resources, which is critical to mitigating latencies and enhancing overall performance.

In terms of resource utilization, the optimization effectively allocated resources, resulting in a total of 79,832 CPUs used and 4.45 trillion units of disk space consumed. These improvements demonstrated a more efficient distribution of computational resources, enhancing overall system performance.


Fig. 10. Disk Usage Distribution Before and After Optimization

../../Resources/kiee/KIEE.2025.74.5.957/fig10.png

After optimization, all tasks met resource constraints, ensuring that no task was left unprocessed or under-resourced. Before optimization, inefficient resource allocation created bottlenecks, slowing down execution. By intelligently distributing resources, NLP accelerated the overall workflow and improved efficiency.

Feature selection using PSO reduced the complexity of the software reliability prediction model without compromising accuracy. This not only enhanced model performance but also made it easier to interpret and significantly faster to train. Meanwhile, NLP streamlined workflow optimization by reducing execution time and improving resource utilization. This scalable approach can be applied to larger datasets and more complex workflows in the future.

The combination of PSO for feature selection and NLP for workflow optimization demonstrated the power of these techniques in solving real-world challenges. While PSO simplified predictive modeling by identifying the most relevant features, NLP enhanced computational efficiency by optimizing resource allocation. Together, they form a comprehensive framework for improving both data-driven decision-making and operational performance in software systems.

5. Conclusions

In this paper, we show that LCDPs are not just an evolutionary step in software engineering but a revolutionary technology that will change the way applications are built. We found that LCDPs help accelerate software development, reducing development time and cost by 50%, and democratize application creation for non-technical users. This research also points out vital and pressing issues, especially with regard to scaling and integration, which remain major challenges. The use of high-level optimization techniques (PSO, NLP) in this work not only provided notable efficiency gains (e.g., a 16.8% reduction in execution time) but also gave us a framework for allocating resources to the tasks that require them the most. Additional opportunities were explored to further improve real-world performance by segmenting tasks according to their low, medium, and high CPU utilization.

LCDPs will play an increasingly important role in the developer ecosystem, meeting the needs of organizations seeking to balance the speed and flexibility of rapid development projects with the responsibilities of delivering enterprise-class functionality. These factors underscore the need for future research to improve these platforms as they address their current challenges, such as scalability, security, and integration, and to help realize the full potential of LCDPs. As these platforms evolve, we can expect to see significant innovation in software development, where both technical and non-technical users can take part in digital transformation.

In summary, LCDPs today provide significant speed and cost advantages, but their true potential will only be realized when they are harmoniously integrated with traditional coding practices and more sophisticated optimization methodologies.

Acknowledgements

This work was supported by a Korea Institute for Advancement of Technology (KIAT) grant funded by the Korean Government (MOTIE) (RS-2022-KI002562, HRD Program for Industrial Innovation).

References

[1] A. Rocha, Proceedings of CISTI'2022: 17th Iberian Conference on Information Systems and Technologies, AISTI, 2022. https://doi.org/10.23919/CISTI54924.2022.9820363
[2] N. Rao, J. Tsay, K. Kate, V. J. Hellendoorn and M. Hirzel, “AI for low-code for AI,” arXiv, 2023. [Online]. Available: http://arxiv.org/abs/2305.20015
[3] R. Picek, “Low-code/no-code platforms and modern ERP systems,” 2023 9th International Conference on Information Management (ICIM), IEEE, pp. 44–49, 2023. https://doi.org/10.1109/ICIM58774.2023.00014
[4] G. Juhas, L. Molnar, A. Juhasova, M. Ondrisova, M. Mladoniczky and T. Kovacik, “Low-code platforms and languages: The future of software development,” 20th IEEE International Conference on Emerging eLearning Technologies and Applications (ICETA 2022), IEEE, pp. 286–293, 2022. https://doi.org/10.1109/ICETA57911.2022.9974697
[5] L. Tang, “ERP low-code cloud development,” IEEE International Conference on Software Engineering and Service Sciences (ICSESS), IEEE, pp. 319–323, 2022. https://doi.org/10.1109/ICSESS54813.2022.9930146
[6] R. Waszkowski, “Low-code platform for automating business processes in manufacturing,” IFAC-PapersOnLine, Elsevier B.V., vol. 52, pp. 376–381, 2019. https://doi.org/10.1016/j.ifacol.2019.10.060
[7] D. Di Ruscio, D. Kolovos, J. de Lara, A. Pierantonio, M. Tisi and M. Wimmer, “Low-code development and model-driven engineering: Two sides of the same coin?,” Software and Systems Modeling, vol. 21, pp. 437–446, 2022. https://doi.org/10.1007/s10270-021-00970-2
[8] Y. Luo, P. Liang, C. Wang, M. Shahin and J. Zhan, “Characteristics and challenges of low-code development: The practitioners' perspective,” International Symposium on Empirical Software Engineering and Measurement, IEEE, pp. 1–11, 2021. https://doi.org/10.1145/3475716.347578
[9] C. H. Wang and K. C. Wu, “A preliminary study on interdisciplinary programming learning based on cloud computing low-code development platform,” Proceedings of the International Conference on Computer and Applications (ICCA 2022), IEEE, pp. 1–5, 2022. https://doi.org/10.1109/ICCA56443.2022.10039663
[10] A. C. Bock and U. Frank, “Low-code platform,” Business and Information Systems Engineering, vol. 63, pp. 733–740, 2021. https://doi.org/10.1007/s12599-021-00726-8
[11] K. Rokis and M. Kirikova, “Exploring low-code development: A comprehensive literature review,” Complex Systems Informatics and Modeling Quarterly, no. 36, pp. 68–86, 2023. https://doi.org/10.7250/csimq.2023-36.04
[12] E. Martinez and L. Pfister, “Benefits and limitations of using low-code development to support digitalization in the construction industry,” Automation in Construction, vol. 152, pp. 104909, 2023. https://doi.org/10.1016/j.autcon.2023.104909
[13] A. Butting, T. Greifenberg, K. Hölldobler and T. Kehrer, “Model and data differences in an enterprise low-code platform,” 2023 ACM/IEEE International Conference on Model Driven Engineering Languages and Systems Companion (MODELS-C), pp. 868–877, 2023. https://doi.org/10.1109/MODELS-C59198.2023.00137
[14] B. Schenkenfelder, C. Salomon, G. Buchgeher, R. Schossleitner and C. Kerl, “The potential of low-code development in the manufacturing industry,” 2023 IEEE 28th International Conference on Emerging Technologies and Factory Automation (ETFA), pp. 1–8, 2023. https://doi.org/10.1109/ETFA54631.2023.10275503
[15] V. Phalake, S. Joshi, K. Rade and V. Phalke, “Modernized application development using optimized low-code platform,” 2022 2nd Asian Conference on Innovation in Technology (ASIANCON), IEEE, pp. 1–4, 2022. https://doi.org/10.1109/ASIANCON55314.2022.9908726
[16] G. Daniel, J. Cabot, L. Deruelle and M. Derras, “Xatkit: A multimodal low-code chatbot development framework,” IEEE Access, vol. 8, pp. 15332–15346, 2020. https://doi.org/10.1109/ACCESS.2020.2966919
[17] C. Chuanjian, G. Shuze and W. Hua, “Research on software development based on low-code technology,” 2023 2nd International Conference on Artificial Intelligence and Autonomous Robot Systems (AIARS), IEEE, pp. 210–213, 2023. https://doi.org/10.1109/AIARS59518.2023.00049
[18] P. M. Gomes and M. A. Brito, “Low-code development platforms: A descriptive study,” 2022 17th Iberian Conference on Information Systems and Technologies (CISTI), pp. 1–4, 2022. https://doi.org/10.23919/CISTI54924.2022.9820354
[19] A. Trigo, J. Varajão and M. Almeida, “Low-code versus code-based software development: Which wins the productivity game?,” IT Professional, vol. 24, pp. 61–68, 2022. https://doi.org/10.1109/MITP.2022.3189880
[20] S. Käss, S. Strahringer and M. Westner, “A multiple mini case study on the adoption of low-code development platforms in work systems,” IEEE Access, vol. 11, pp. 118762–118786, 2023. https://doi.org/10.1109/ACCESS.2023.3325092

About the Authors

Erdana Seitzhan
../../Resources/kiee/KIEE.2025.74.5.957/au1.png

He received a B.S. degree in Computer Science from the International Information Technology University (IITU) in 2022 and is currently pursuing an M.S. degree in Software Engineering at Kazakh-British Technical University (KBTU), with expected completion in 2025. He is currently working as a Frontend Developer at Shanghai Looktook Technology Co., Ltd., where he focuses on iOS application development using React Native. He has previous experience in backend development with Django and Python, as well as full-stack web development using technologies such as React, Django, and PostgreSQL. His research interests include low-code development platforms, artificial intelligence applications, and augmented/virtual reality game development.

Alibek Bissembayev
../../Resources/kiee/KIEE.2025.74.5.957/au2.png

He received his Ph.D. in Economics and has over 21 years of professional experience spanning higher education, finance, government, IT, retail, and industry sectors. Since September 2023, he has been serving as an Associate Professor at the School of Information Technology and Engineering, Kazakh-British Technical University. His research interests include data governance, business intelligence, financial risk analysis, and advanced analytics. He has practical experience in launching data warehouses and integrating data-driven solutions, and holds certifications from Microsoft, AWS, and iOS Academy.

Assel Mukasheva
../../Resources/kiee/KIEE.2025.74.5.957/au3.png

She received the B.S., M.S., and Ph.D. degrees from Satbayev University, Almaty, Kazakhstan, in 2004, 2014, and 2020, respectively. In September 2023, she joined Kazakh-British Technical University, where she is currently a professor in the School of Information Technology and Engineering. Her research interests include Big Data, cyber security, machine learning, and comparative studies of deep learning methods.

박해산(Hae San Park)
../../Resources/kiee/KIEE.2025.74.5.957/au4.png

He received the M.S. degree from the Seoul National University of Science and Technology in 2023 and is currently pursuing a Ph.D. at the Korea National University of Transportation. He joined the Korea Railroad Administration in November 1998, was transferred to the Korea Railroad Corporation in January 2005, and is currently in charge of SCADA as an electrical controller at the Railway Traffic Control Center.

강정원(Jeong Won Kang)
../../Resources/kiee/KIEE.2025.74.5.957/au5.png

He received his B.S., M.S., and Ph.D. degrees in electronic engineering from Chung-Ang University, Seoul, Korea, in 1995, 1997, and 2002, respectively. In March 2008, he joined the Korea National University of Transportation, Republic of Korea, where he currently holds the position of Professor in the Department of Transportation System Engineering, the Department of SMART Railway System, and the Department of Smart Railway and Transportation Engineering.