Search results for: comparison of algorithms
6386 Thermohydraulic Performance Comparison of Artificially Roughened Rectangular Channels
Authors: Narender Singh Thakur, Sunil Chamoli
Abstract:
The use of roughness geometry in a rectangular channel duct is an effective technique to enhance the rate of heat transfer to the working fluid. The present research concentrates on the performance comparison of a rectangular channel with different roughness geometries of the test plate. The performance enhancement is compared by considering the statistical correlations developed by various investigators for the Nusselt number and friction factor. Among all the investigated geometries, the multiple v-shaped rib roughened rectangular channel was found to be thermohydraulically better than the other investigated geometries under similar operating conditions.
Keywords: Nusselt number, friction factor, thermohydraulic, performance parameter
Procedia PDF Downloads 420
6385 Adapting the Chemical Reaction Optimization Algorithm to the Printed Circuit Board Drilling Problem
Authors: Taisir Eldos, Aws Kanan, Waleed Nazih, Ahmad Khatatbih
Abstract:
Chemical Reaction Optimization (CRO) is an optimization metaheuristic inspired by the nature of chemical reactions as a natural process of transforming substances from unstable to stable states. Starting with some unstable molecules with excessive energy, a sequence of interactions takes the set to a state of minimum energy. Researchers reported successful application of the algorithm in solving some engineering problems, like the quadratic assignment problem, with superior performance when compared with other optimization algorithms. We adapted this optimization algorithm to the Printed Circuit Board Drilling Problem (PCBDP) to reduce the drilling time and hence improve the PCB manufacturing throughput. Although the PCBDP can be viewed as an instance of the popular Traveling Salesman Problem (TSP), it has some characteristics that require special attention to the transactions that explore the solution landscape. Experimental test results using the standard CROToolBox are not promising for practically sized problems, although it could find optimal solutions for artificial problems and small benchmarks as a proof of concept.
Keywords: evolutionary algorithms, chemical reaction optimization, traveling salesman, board drilling
Procedia PDF Downloads 517
6384 Development and Investigation of Efficient Substrate Feeding and Dissolved Oxygen Control Algorithms for Scale-Up of Recombinant E. coli Cultivation Process
Authors: Vytautas Galvanauskas, Rimvydas Simutis, Donatas Levisauskas, Vykantas Grincas, Renaldas Urniezius
Abstract:
The paper deals with model-based development and implementation of efficient control strategies for recombinant protein synthesis in fed-batch E. coli cultivation processes. Based on experimental data, a kinetic dynamic model of the cultivation process was developed. This model was used to determine substrate feeding strategies during the cultivation. The proposed feeding strategy consists of two phases – a biomass growth phase and a recombinant protein production phase. In the first process phase, a substrate-limited process is recommended, in which the specific growth rate of biomass is about 90-95% of its maximum value. This ensures a reduction of the glucose concentration in the medium, improves process repeatability, and reduces the formation of secondary metabolites and other unwanted by-products. The substrate limitation can be enhanced to satisfy the restriction on the maximum oxygen transfer rate in the bioreactor and to guarantee the necessary dissolved carbon dioxide concentration in the culture medium. In the recombinant protein production phase, the level of substrate limitation and the specific growth rate are selected within the range that enables the optimal target protein synthesis rate. To account for complex process dynamics, to efficiently exploit the oxygen transfer capability of the bioreactor, and to maintain the required dissolved oxygen concentration, adaptive algorithms for dissolved oxygen control have been proposed. The developed model-based control strategies are useful in the scale-up of cultivation processes and accelerate the implementation of innovative biotechnological processes for industrial applications.
Keywords: adaptive algorithms, model-based control, recombinant E. coli, scale-up of bioprocesses
Procedia PDF Downloads 255
6383 Comparison of Parametric and Bayesian Survival Regression Models in Simulated and HIV Patient Antiretroviral Therapy Data: Case Study of Alamata Hospital, North Ethiopia
Authors: Zeytu G. Asfaw, Serkalem K. Abrha, Demisew G. Degefu
Abstract:
Background: HIV/AIDS remains a major public health problem in Ethiopia and heavily affects people of productive and reproductive age. We aimed to compare the performance of parametric survival analysis and Bayesian survival analysis using simulations and a real dataset application focused on determining predictors of HIV patient survival. Methods: Parametric survival models with Exponential, Weibull, Log-normal, Log-logistic, Gompertz and Generalized gamma distributions were considered. A simulation study was carried out with two different algorithms using informative and noninformative priors. A retrospective cohort study was implemented for HIV-infected patients under Highly Active Antiretroviral Therapy in Alamata General Hospital, North Ethiopia. Results: A total of 320 HIV patients were included in the study, of whom 52.19% were females and 47.81% males. According to the Kaplan-Meier survival estimates for the two sex groups, females showed better survival time in comparison with their male counterparts. The median survival time of HIV patients was 79 months. During the follow-up period, 89 (27.81%) deaths and 231 (72.19%) censored individuals were registered. The average baseline cluster of differentiation 4 (CD4) cell count for HIV/AIDS patients was 126.01, but after a three-year antiretroviral therapy follow-up the average CD4 cell count was 305.74, which was quite encouraging. Age, functional status, tuberculosis screen, past opportunistic infection, baseline CD4 cell count, World Health Organization clinical stage, sex, marital status, employment status, occupation type, and baseline weight were found to be statistically significant factors for longer survival of HIV patients. The standard errors of all covariates in the Bayesian log-normal survival model were smaller than those of the classical one. Hence, Bayesian survival analysis showed better performance than classical parametric survival analysis when subjective data analysis was performed by considering expert opinions and historical knowledge about the parameters. Conclusions: HIV/AIDS patient mortality could be reduced through timely antiretroviral therapy with special attention to the potential factors. Moreover, the Bayesian log-normal survival model was preferable to the classical log-normal survival model for determining predictors of HIV patient survival.
Keywords: antiretroviral therapy (ART), Bayesian analysis, HIV, log-normal, parametric survival models
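To make the classical side of this comparison concrete, the sketch below fits Kaplan-Meier curves by sex and a log-normal survival regression with covariates using the lifelines library. It is a minimal illustration only: the synthetic cohort, column names and effect sizes are assumptions, not the Alamata Hospital data or the authors' Bayesian implementation.

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, LogNormalAFTFitter

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "female":   rng.integers(0, 2, n),                  # hypothetical covariates
    "cd4_base": rng.normal(126, 40, n).clip(10),
})
# hypothetical survival times in months, longer for females / higher baseline CD4
df["months"] = np.exp(3.5 + 0.3 * df["female"] + 0.002 * df["cd4_base"]
                      + rng.normal(0, 0.6, n))
df["death"] = rng.integers(0, 2, n)                      # 1 = died, 0 = censored

# Kaplan-Meier estimate for each sex group
for sex, grp in df.groupby("female"):
    km = KaplanMeierFitter(label=f"female={sex}")
    km.fit(grp["months"], event_observed=grp["death"])
    print(sex, km.median_survival_time_)

# classical (frequentist) log-normal survival regression with covariates
aft = LogNormalAFTFitter()
aft.fit(df, duration_col="months", event_col="death")
aft.print_summary()                                      # coefficients and standard errors
```

The standard errors reported by `print_summary()` correspond to the classical estimates that the abstract compares against their Bayesian counterparts.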
Procedia PDF Downloads 195
6382 Reducing the Computational Overhead of Metaheuristics Parameterization with Exploratory Landscape Analysis
Authors: Iannick Gagnon, Alain April
Abstract:
The performance of a metaheuristic on a given problem class depends on the class itself and the choice of parameters. Parameter tuning is the most time-consuming phase of the optimization process after the main calculations, and it often nullifies the speed advantage of metaheuristics over traditional optimization algorithms. Several off-the-shelf parameter tuning algorithms are available, but when the objective function is expensive to evaluate, these can be prohibitively expensive to use. This paper presents a surrogate-like method for finding adequate parameters using fitness landscape analysis on simple benchmark functions and real-world objective functions. The result is a simple compound similarity metric based on the empirical correlation coefficient and a measure of convexity. It is then used to find the best benchmark functions to serve as surrogates. The near-optimal parameter set is then found using fractional factorial design. The real-world problem of NACA airfoil lift coefficient maximization is used as a preliminary proof of concept. The overall aim of this research is to reduce the computational overhead of metaheuristic parameterization.
Keywords: metaheuristics, stochastic optimization, particle swarm optimization, exploratory landscape analysis
Procedia PDF Downloads 151
6381 Automated Heart Sound Classification from Unsegmented Phonocardiogram Signals Using Time Frequency Features
Authors: Nadia Masood Khan, Muhammad Salman Khan, Gul Muhammad Khan
Abstract:
Cardiologists perform cardiac auscultation to detect abnormalities in heart sounds. Since accurate auscultation is a crucial first step in screening patients with heart diseases, there is a need to develop computer-aided detection/diagnosis (CAD) systems to assist cardiologists in interpreting heart sounds and provide second opinions. In this paper, different algorithms are implemented for automated heart sound classification using unsegmented phonocardiogram (PCG) signals. A support vector machine (SVM), an artificial neural network (ANN) and a cartesian genetic programming evolved artificial neural network (CGPANN), without the application of any segmentation algorithm, have been explored in this study. The signals are first pre-processed to remove any unwanted frequencies. Both time and frequency domain features are then extracted for training the different models. The different algorithms are tested in multiple scenarios, and their strengths and weaknesses are discussed. Results indicate that SVM outperforms the rest with an accuracy of 73.64%.
Keywords: pattern recognition, machine learning, computer-aided diagnosis, heart sound classification, feature extraction
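The sketch below illustrates the general shape of such a pipeline: a few hand-crafted time- and frequency-domain features computed from an unsegmented PCG signal, followed by an SVM evaluated with cross-validation. The window length, feature set, sampling rate and dummy labels are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def pcg_features(signal, fs=2000):
    """Simple time- and frequency-domain descriptors of one PCG recording."""
    spec = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    centroid = np.sum(freqs * spec) / (np.sum(spec) + 1e-12)   # spectral centroid
    return np.array([signal.mean(), signal.std(),
                     np.mean(np.abs(np.diff(signal))),          # rough time-domain activity
                     centroid,
                     freqs[np.argmax(spec)]])                   # dominant frequency

rng = np.random.default_rng(0)
X = np.array([pcg_features(rng.standard_normal(4000)) for _ in range(60)])  # toy signals
y = rng.integers(0, 2, size=60)          # 0 = normal, 1 = abnormal (dummy labels)

print(cross_val_score(SVC(kernel="rbf", C=1.0), X, y, cv=5).mean())
```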
Procedia PDF Downloads 260
6380 Assessment of DNA Sequence Encoding Techniques for Machine Learning Algorithms Using a Universal Bacterial Marker
Authors: Diego Santibañez Oyarce, Fernanda Bravo Cornejo, Camilo Cerda Sarabia, Belén Díaz Díaz, Esteban Gómez Terán, Hugo Osses Prado, Raúl Caulier-Cisterna, Jorge Vergara-Quezada, Ana Moya-Beltrán
Abstract:
The advent of high-throughput sequencing technologies has revolutionized genomics, generating vast amounts of genetic data that challenge traditional bioinformatics methods. Machine learning addresses these challenges by leveraging computational power to identify patterns and extract information from large datasets. However, biological sequence data, being symbolic and non-numeric, must be converted into numerical formats for machine learning algorithms to process effectively. So far, some encoding methods, such as one-hot encoding or k-mers, have been explored. This work proposes additional approaches for encoding DNA sequences in order to compare them with existing techniques and determine if they can provide improvements or if current methods offer superior results. Data from the 16S rRNA gene, a universal marker, was used to analyze eight bacterial groups that are significant in the pulmonary environment and have clinical implications. The bacterial genera included in this analysis are Prevotella, Abiotrophia, Acidovorax, Streptococcus, Neisseria, Veillonella, Mycobacterium, and Megasphaera. These data were downloaded from the NCBI database in GenBank file format, followed by a syntactic analysis to selectively extract relevant information from each file. For data encoding, a sequence normalization process was carried out as the first step. From approximately 22,000 initial data points, a subset was generated for testing purposes. Specifically, 55 sequences from each bacterial group met the length criteria, resulting in an initial sample of approximately 440 sequences. The sequences were encoded using different methods, including one-hot encoding, k-mers, the Fourier transform, and the Wavelet transform. Various machine learning algorithms, such as support vector machines, random forests, and neural networks, were trained to evaluate these encoding methods. The performance of these models was assessed using multiple metrics, including the confusion matrix, the ROC curve, and the F1 score, providing a comprehensive evaluation of their classification capabilities. The results show that accuracies between encoding methods vary by up to approximately 15%, with the Fourier transform obtaining the best results for the evaluated machine learning algorithms. These findings, supported by the detailed analysis using the confusion matrix, ROC curve, and F1 score, provide valuable insights into the effectiveness of different encoding methods and machine learning algorithms for genomic data analysis, potentially improving the accuracy and efficiency of bacterial classification and related genomic studies.
Keywords: DNA encoding, machine learning, Fourier transform
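For readers unfamiliar with the baseline encodings mentioned above, the following minimal sketch shows two of them — one-hot encoding and k-mer counting — applied to a toy DNA fragment. It is illustrative only and is not the study's code; the example sequence and k value are arbitrary.

```python
import numpy as np
from itertools import product

BASES = "ACGT"

def one_hot(seq):
    """Encode a DNA string as a (length x 4) binary matrix."""
    idx = {b: i for i, b in enumerate(BASES)}
    mat = np.zeros((len(seq), 4))
    for i, b in enumerate(seq):
        if b in idx:                      # ambiguous bases (N, etc.) stay all-zero
            mat[i, idx[b]] = 1.0
    return mat

def kmer_counts(seq, k=3):
    """Encode a DNA string as a fixed-length vector of k-mer counts."""
    kmers = ["".join(p) for p in product(BASES, repeat=k)]
    counts = dict.fromkeys(kmers, 0)
    for i in range(len(seq) - k + 1):
        if seq[i:i + k] in counts:
            counts[seq[i:i + k]] += 1
    return np.array([counts[km] for km in kmers], dtype=float)

seq = "ACGTTGCAAGGCT"                     # toy 16S rRNA fragment
print(one_hot(seq).shape)                 # (13, 4)
print(kmer_counts(seq, k=3).shape)        # (64,) feature vector for a classifier
```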
Procedia PDF Downloads 22
6379 An Adaptive Hybrid Surrogate-Assisted Particle Swarm Optimization Algorithm for Expensive Structural Optimization
Authors: Xiongxiong You, Zhanwen Niu
Abstract:
Choosing an appropriate surrogate model plays an important role in surrogate-assisted evolutionary algorithms (SAEAs), since there are many model types and different kernel functions for the surrogate model. In this paper, an adaptive method for selecting the most suitable surrogate model is proposed to solve different kinds of expensive optimization problems. Firstly, according to the prediction residual error sum of squares (PRESS) and different model selection strategies, the excellent individual surrogate models are integrated into multiple ensemble models in each generation. Then, based on the minimum root mean square error (RMSE), the most suitable surrogate model is selected dynamically. Secondly, two methods with a dynamic number of models and selection strategies are designed, which are used to show the influence of the number of individual models and the selection strategy. Finally, comparative studies are carried out on several commonly used benchmark problems, as well as a rotor system optimization problem. The results demonstrate the accuracy and robustness of the proposed method.
Keywords: adaptive selection, expensive optimization, rotor system, surrogate-assisted evolutionary algorithms
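The core idea — dynamically picking the best surrogate from a pool using an error criterion — can be sketched as follows. This is a simplified illustration under stated assumptions (a cross-validation RMSE criterion, three generic candidate models, a toy objective), not the paper's PRESS-based ensemble construction.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.svm import SVR
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_score

def expensive_objective(x):                       # stand-in for a costly simulation
    return np.sum(x ** 2, axis=1)

rng = np.random.default_rng(1)
X = rng.uniform(-5, 5, size=(40, 3))              # individuals evaluated so far
y = expensive_objective(X)

candidates = {
    "kriging": GaussianProcessRegressor(),
    "svr": SVR(),
    "knn": KNeighborsRegressor(n_neighbors=5),
}

def cv_rmse(model):
    scores = cross_val_score(model, X, y, cv=5,
                             scoring="neg_root_mean_squared_error")
    return -scores.mean()

best_name = min(candidates, key=lambda name: cv_rmse(candidates[name]))
surrogate = candidates[best_name].fit(X, y)       # used to pre-screen new particles
print("selected surrogate:", best_name)
```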
Procedia PDF Downloads 137
6378 Enhancing the Recruitment Process through Machine Learning: An Automated CV Screening System
Authors: Kaoutar Ben Azzou, Hanaa Talei
Abstract:
Human resources is an important department in each organization, as it manages the life cycle of employees from recruitment and training to retirement or termination of contracts. The recruitment process starts with a job opening, followed by a selection of the best-fit candidates from all applicants. Matching the best profile to a job position requires manually looking through many CVs, which takes hours of work and can sometimes lead to choosing not the best profile. The work presented in this paper aims at reducing the workload of HR personnel by automating the preliminary stages of the candidate screening process, thereby fostering a more streamlined recruitment workflow. This tool introduces an automated system designed to help with the recruitment process by scanning candidates' CVs, extracting pertinent features, and employing machine learning algorithms to decide the most fitting job profile for each candidate. Our work employs natural language processing (NLP) techniques to identify and extract key features, such as education, work experience, and skills, from the unstructured text extracted from a CV. Subsequently, the system utilizes these features to match candidates with job profiles, leveraging the power of classification algorithms.
Keywords: automated recruitment, candidate screening, machine learning, human resources management
Procedia PDF Downloads 54
6377 Predictive Maintenance of Electrical Induction Motors Using Machine Learning
Authors: Muhammad Bilal, Adil Ahmed
Abstract:
This study proposes an approach for the predictive maintenance of electrical induction motors utilizing machine learning algorithms. On the basis of a study of temperature data obtained from sensors placed on the motor, the goal is to predict motor failures. The proposed models are trained to identify whether a motor is defective or not by utilizing machine learning algorithms such as Support Vector Machines (SVM) and K-Nearest Neighbors (KNN). According to a thorough study of the literature, earlier research has used motor current signature analysis (MCSA) and vibration data to forecast motor failures. The temperature signal methodology, which has clear advantages over the conventional MCSA and vibration analysis methods in terms of cost-effectiveness, is the main subject of this research. The acquired results emphasize the applicability and effectiveness of the temperature-based predictive maintenance strategy by demonstrating the successful categorization of defective motors using the suggested machine learning models.
Keywords: predictive maintenance, electrical induction motors, machine learning, temperature signal methodology, motor failures
Procedia PDF Downloads 115
6376 Elemental Graph Data Model: A Semantic and Topological Representation of Building Elements
Authors: Yasmeen A. S. Essawy, Khaled Nassar
Abstract:
With the rapid increase of complexity in the building industry, professionals in the A/E/C industry were forced to adopt Building Information Modeling (BIM) in order to enhance communication between the different project stakeholders throughout the project life cycle and create a semantic object-oriented building model that can support geometric-topological analysis of building elements during design and construction. This paper presents a model that extracts topological relationships and geometrical properties of building elements from an existing fully designed BIM, and maps this information into a directed acyclic Elemental Graph Data Model (EGDM). The model incorporates BIM-based search algorithms for automatic deduction of geometrical data and topological relationships for each building element type. Using graph search algorithms, such as Depth First Search (DFS), and topological sorting, all possible construction sequences can be generated and compared against production and construction rules to generate an optimized construction sequence and its associated schedule. The model is implemented on a C# platform.
Keywords: building information modeling (BIM), elemental graph data model (EGDM), geometric and topological data models, graph theory
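The graph idea behind the EGDM can be illustrated with a few lines of code: building elements as nodes of a directed acyclic graph, "must already be built" relations as edges, and a topological sort producing a feasible construction sequence. The element names and dependencies below are made up for illustration, and Python's graphlib is used here purely as a stand-in for the paper's C# implementation.

```python
from graphlib import TopologicalSorter   # Python 3.9+

# each key maps an element to the elements that must already be built (its predecessors)
egdm = {
    "foundation": set(),
    "column_A":   {"foundation"},
    "column_B":   {"foundation"},
    "slab_L1":    {"column_A", "column_B"},
    "wall_L1":    {"slab_L1"},
}

sequence = list(TopologicalSorter(egdm).static_order())
print(sequence)   # e.g. ['foundation', 'column_A', 'column_B', 'slab_L1', 'wall_L1']
```

Enumerating all valid orderings (rather than one) and scoring each against production and construction rules would yield the optimized sequence the abstract describes.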
Procedia PDF Downloads 382
6375 Agile Software Effort Estimation Using Regression Techniques
Authors: Mikiyas Adugna
Abstract:
Effort estimation is among the activities carried out in software development processes. An accurate estimation model leads to project success. Agile effort estimation is a complex task because of the dynamic nature of software development, and researchers are still conducting studies on agile effort estimation to enhance prediction accuracy. For these reasons, we investigated and proposed a model based on LASSO and Elastic Net regression to enhance estimation accuracy. The proposed model has major components: preprocessing, train-test split, training with default parameters, and cross-validation. During the preprocessing phase, the entire dataset is normalized. After normalization, a train-test split is performed on the dataset, setting the training set at 80% and the testing set at 20%. Following the train-test split, the two algorithms (Elastic Net and LASSO regression) are trained in two different phases. In the first phase, the two algorithms are trained using their default parameters and evaluated on the testing data. In the second phase, the grid search technique (the grid is used to search for and select optimum parameter values) and 5-fold cross-validation are applied to obtain the final trained model. Finally, the final trained model is evaluated using the testing set. The experimental work is applied to an agile story point dataset of 21 software projects collected from six firms. The results show that both Elastic Net and LASSO regression outperformed the compared approaches. Of the two proposed algorithms, LASSO regression achieved better predictive performance, with PRED(8%) and PRED(25%) results of 100.0, an MMRE of 0.0491, an MMER of 0.0551, an MdMRE of 0.0593, an MdMER of 0.063, and an MSE of 0.0007. The results imply that the LASSO regression trained model is the most acceptable and offers higher estimation performance than models reported in the literature.
Keywords: agile software development, effort estimation, elastic net regression, LASSO
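A minimal sketch of the workflow described above — normalization, an 80/20 split, a default fit, and then grid search with 5-fold cross-validation — is given below. The placeholder data, feature dimensionality, parameter grids and the MMRE helper are assumptions for illustration, not the paper's dataset or exact settings.

```python
import numpy as np
from sklearn.linear_model import Lasso, ElasticNet
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.preprocessing import MinMaxScaler

X, y = np.random.rand(210, 5), np.random.rand(210) * 40   # placeholder story-point data

X = MinMaxScaler().fit_transform(X)                        # normalize the dataset
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

def mmre(y_true, y_pred):                                  # mean magnitude of relative error
    return np.mean(np.abs(y_true - y_pred) / np.maximum(y_true, 1e-9))

experiments = [
    ("LASSO", Lasso(), {"alpha": [0.001, 0.01, 0.1, 1.0]}),
    ("Elastic Net", ElasticNet(), {"alpha": [0.001, 0.01, 0.1, 1.0],
                                   "l1_ratio": [0.2, 0.5, 0.8]}),
]

for name, model, grid in experiments:
    default_fit = model.fit(X_tr, y_tr)                          # phase 1: default parameters
    tuned = GridSearchCV(model, grid, cv=5).fit(X_tr, y_tr)      # phase 2: grid search + 5-fold CV
    print(name,
          "default MMRE:", round(mmre(y_te, default_fit.predict(X_te)), 4),
          "tuned MMRE:", round(mmre(y_te, tuned.predict(X_te)), 4))
```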
Procedia PDF Downloads 69
6374 Effect of Pack Aluminising Conditions on βNiAl Coatings
Authors: A. D. Chandio, P. Xiao
Abstract:
In this study, nickel aluminide coatings were deposited onto CMSX-4 single crystal superalloy and pure Ni substrates by using an in-situ chemical vapour deposition (CVD) technique. The microstructural evolution and coating thickness (CT) were studied upon variation of the processing conditions, i.e. time and temperature. The results demonstrated (under identical conditions) that the coating formed on pure Ni contains no substrate entrapments and has a lower CT in comparison to the one deposited on the CMSX-4 counterpart. In addition, the interdiffusion zone (IDZ) of the Ni substrate is a γ’-Ni3Al phase, in comparison to the CMSX-4 alloy, where it is a βNiAl phase. The higher CT on the CMSX-4 superalloy is attributed to the presence of the γ-Ni/γ’-Ni3Al structure, which contains ~15 at.% Al before deposition (that is already present in the superalloy). Two main deposition parameters (time and temperature) were also studied in addition to the standard comparison of substrate effects. The coating formation time was found to have a profound effect on CT, whilst temperature was found to change the coating activities. In addition, the CT showed a linear trend from 800 to 1000 °C, with a reduction observed thereafter. This was attributed to the change in coating activities.
Keywords: βNiAl, in-situ CVD, CT, CMSX-4, Ni, microstructure
Procedia PDF Downloads 238
6373 Review of Hydrologic Applications of Conceptual Models for Precipitation-Runoff Process
Authors: Oluwatosin Olofintoye, Josiah Adeyemo, Gbemileke Shomade
Abstract:
The relationship between rainfall and runoff is an important issue in surface water hydrology; therefore, the understanding and development of accurate rainfall-runoff models and their applications in water resources planning, management and operation are of paramount importance in hydrological studies. This paper reviews some of the previous works on rainfall-runoff process modeling. The hydrologic applications of conceptual models and artificial neural networks (ANNs) for precipitation-runoff process modeling were studied. Gradient training methods such as error back-propagation (BP) and evolutionary algorithms (EAs) are discussed in relation to the training of artificial neural networks, and it is shown that the application of EAs to artificial neural network training could be an alternative to other training methods. Therefore, further research to exploit the abundant expert knowledge in the area of artificial intelligence for the solution of hydrologic and water resources planning and management problems is needed.
Keywords: artificial intelligence, artificial neural networks, evolutionary algorithms, gradient training method, rainfall-runoff model
Procedia PDF Downloads 453
6372 Body Composition Analyser Parameters and Their Comparison with Manual Measurements
Authors: I. Karagjozova, B. Dejanova, J. Pluncevic, S. Petrovska, V. Antevska, L. Todorovska
Abstract:
Introduction: Medical check-up assessment is important in sports medicine. To follow the health condition of subjects who perform sports, body composition parameters, such as intracellular water, extracellular water, protein and mineral content, muscle and fat mass, might be useful. The aim of the study was to show the available parameters and to compare them to manual assessment. Material and methods: A number of 20 subjects (14 male and 6 female) aged 20±2 years were included in the study; 5 performed recreational sports, while the others were professional athletes. The mean height was 175±7 cm, the mean weight was 72±9 kg, and the body mass index (BMI) was 23±2 kg/m2. The measured compartments were as follows: intracellular water (IW), extracellular water (EW), protein component (PC), mineral component (MC), skeletal muscle mass (SMM) and body fat mass (BFM). Lean balance was examined for the right arm (RA), left arm (LA), trunk (T), right leg (RL) and left leg (LL). The comparison was made between the values calculated from manual measurements, using the Matejka formula, and parameters obtained by the body composition analyser (BCA) - Inbody 720 BCA Biospace. The parameters used for the comparison were muscle mass (SMM) and body fat mass (BFM). Results: The BCA obtained values were: IW - 22.6±5 L, EW - 13.5±2 L, PC - 9.8±0.9 kg, MC - 3.5±0.3, SMM - 27±3 kg, BFM - 13.8±4 kg. Lean balance showed the following values: RA - 2.45±0.2 kg, LA - 2.37±0.4, T - 20.9±5 kg, RL - 7.43±1 kg, and LL - 7.49±1.5 kg. SMM showed a statistically significant difference between the manually obtained value, 51±01%, and the BCA parameter, 45.5±3% (p<0.001). The manually obtained value for BFM was lower (17±2%) than the BCA obtained one, 19.5±5.9% (p<0.02). Discussion: The obtained results showed appropriate values for the examined age with regard to all examined parameters, which contribute to an overview of the body compartments important for sport performance. From the comparison between the manual and BCA assessments, we may conclude that manual measurements may differ from the BCA ones, which is confirmed by the statistical significance.
Keywords: athletes, body composition, bioelectrical impedance, sports medicine
Procedia PDF Downloads 475
6371 A Survey on Intelligent Traffic Management with Cooperative Driving in Urban Roads
Authors: B. Karabuluter, O. Karaduman
Abstract:
Traffic management and traffic planning are important issues, especially in big cities. Due to the increase in personal vehicles and the physical constraints of urban roads, the problem of transportation, especially in crowded cities, is revealed over time. This situation reduces living standards and can put human life at risk, because vehicles such as ambulances and fire engines are prevented from reaching their targets. Even if city planners take these problems into account, emergency planning and traffic management are needed to avoid cases such as traffic congestion, intersections, and traffic jams caused by traffic accidents or roadworks. In this study, proposed solutions to smart traffic management issues using intelligent vehicles acting in cooperation with urban roads are examined. Traffic management is becoming more difficult due to factors such as fatigue, carelessness, sleeplessness, social behavior patterns, and lack of education. However, autonomous vehicles, which remove the problems caused by human weaknesses by providing driving control, are increasing the success of the algorithms developed for city traffic management. Such intelligent vehicles have become an important solution in urban life by using swarm intelligence algorithms and cooperative driving methods to maintain traffic flow, prevent traffic accidents, and increase living standards. In this study, works conducted in this area are dealt with in terms of traffic jams, intersections, regulation of traffic flow, signaling, prevention of traffic accidents, cooperation and communication techniques of vehicles, fleet management, and transportation of emergency vehicles. From these concepts, several taxonomies were derived. This work helps to develop new solutions and algorithms for cities where intelligent vehicles capable of cooperative driving can operate, and at the same time emphasizes the trends in this area.
Keywords: intelligent traffic management, cooperative driving, smart driving, urban road, swarm intelligence, connected vehicles
Procedia PDF Downloads 331
6370 Stochastic Richelieu River Flood Modeling and Comparison of Flood Propagation Models: WMS (1D) and SRH (2D)
Authors: Maryam Safrai, Tewfik Mahdi
Abstract:
This article presents the stochastic modeling of the Richelieu River flood in Quebec, Canada, which occurred in the spring of 2011. With the aid of the one-dimensional Watershed Modeling System (WMS, v.10.1) and HEC-RAS (v.4.1) as a flood simulator, the delineation of the probabilistic flooded areas was considered. Based on the Monte Carlo method, WMS (v.10.1) delineated the probabilistic flooded areas with corresponding occurrence percentages. Furthermore, the results of this one-dimensional model were compared with the results of a two-dimensional model (SRH-2D) to evaluate the efficiency and precision of each applied model. Based on this comparison, the computational process in the two-dimensional model is longer and more complicated than the brief one-dimensional one. Although two-dimensional models are more accurate than the one-dimensional method, according to existing modellers, the delineation of probabilistic flooded areas based on the Monte Carlo method is achievable via a one-dimensional modeller. The software applied in this case study adequately served to verify the research objectives. As a result, flood risk maps of the Richelieu River produced with the two applied models (1D, 2D) could elucidate the flood risk factors in hydrological, hydraulic, and managerial terms.
Keywords: flood modeling, HEC-RAS, model comparison, Monte Carlo simulation, probabilistic flooded area, SRH-2D, WMS
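The Monte Carlo idea behind the probabilistic flood maps can be illustrated schematically: sample uncertain inflows, run a (here drastically simplified stand-in for a) 1D hydraulic model, and count how often each location floods to obtain occurrence percentages. The inflow distribution, cross-section and stage-discharge relation below are made-up assumptions, not the WMS/HEC-RAS workflow itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n_runs, n_cells = 1000, 50
ground_level = np.linspace(0.0, 3.0, n_cells)      # toy cross-section elevations (m)

def flood_depth(discharge):
    """Stand-in for a 1D hydraulic simulation: water stage rises with discharge."""
    stage = 0.002 * discharge
    return np.maximum(stage - ground_level, 0.0)

discharges = rng.normal(1200, 250, size=n_runs)    # assumed inflow uncertainty (m3/s)
flooded = np.zeros(n_cells)
for q in discharges:
    flooded += flood_depth(q) > 0.0

occurrence_pct = 100.0 * flooded / n_runs           # probabilistic flooded-area map
print(occurrence_pct[:10])
```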
Procedia PDF Downloads 139
6369 Implementation of Successive Interference Cancellation Algorithms in the 5g Downlink
Authors: Mokrani Mohamed Amine
Abstract:
In this paper, we have implemented successive interference cancellation algorithms in the 5G downlink. We have calculated the maximum throughput in Frequency Division Duplex (FDD) mode in the downlink, where we have obtained a value equal to 836932 b/ms. The transmitter is of the Multiple Input Multiple Output (MIMO) type, with eight transmitting and receiving antennas. Each of the eight antennas simultaneously transmits a data rate of 104616 b/ms that contains the binary messages of the three users; in this case, the Cyclic Redundancy Check (CRC) is negligible, and the MIMO category is spatial diversity. The technology used for this is called Non-Orthogonal Multiple Access (NOMA) with Quadrature Phase Shift Keying (QPSK) modulation. The transmission is done over a Rayleigh fading channel in the presence of obstacles. The MIMO Successive Interference Cancellation (SIC) receiver with two transmitting and receiving antennas recovers its binary message without errors for certain values of transmission power, such as 50 dBm; with 0.054485% errors when the transmitted power is 20 dBm and with 0.00286763% errors for a transmitted power of 32 dBm (in the case of user 1); as well as with 0.0114705% errors when the transmitted power is 20 dBm and with 0.00286763% errors for a power of 24 dBm (in the case of user 2), by applying the steps involved in SIC.
Keywords: 5G, NOMA, QPSK, TBS, LDPC, SIC, capacity
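To show what the SIC steps look like in the simplest case, the sketch below simulates a two-user power-domain NOMA downlink with QPSK over a Rayleigh channel: the stronger receiver first decodes the weak user's symbols, subtracts them, and then decodes its own. The power split, noise level and perfect channel knowledge are illustrative assumptions and do not reproduce the paper's 5G link parameters or coding chain.

```python
import numpy as np

rng = np.random.default_rng(3)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

bits1, bits2 = rng.integers(0, 4, 1000), rng.integers(0, 4, 1000)
s1, s2 = qpsk[bits1], qpsk[bits2]                 # user 1 = weak, user 2 = strong
p1, p2 = 0.8, 0.2                                 # more power to the weak user

x = np.sqrt(p1) * s1 + np.sqrt(p2) * s2           # superposed downlink signal
h = (rng.standard_normal(1000) + 1j * rng.standard_normal(1000)) / np.sqrt(2)
y = h * x + 0.05 * (rng.standard_normal(1000) + 1j * rng.standard_normal(1000))

y_eq = y / h                                      # assumed perfect channel equalisation

def detect(z):
    """Nearest-neighbour QPSK symbol decision."""
    return np.argmin(np.abs(z[:, None] - qpsk[None, :]) ** 2, axis=1)

hat1 = detect(y_eq / np.sqrt(p1))                 # step 1: decode the weak user's symbols
residual = y_eq - np.sqrt(p1) * qpsk[hat1]        # step 2: cancel them (SIC)
hat2 = detect(residual / np.sqrt(p2))             # step 3: decode the strong user's symbols

print("user 2 symbol error rate:", np.mean(hat2 != bits2))
```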
Procedia PDF Downloads 102
6368 Quantum Mechanism Approach for Non-Ruin Probability and Comparison of Path Integral Method and Stochastic Simulations
Authors: Ahmet Kaya
Abstract:
The quantum mechanism is one of the most important approaches to calculating the non-ruin probability. We apply standard Dirac notation to model the given Hamiltonians. By using the traditional method and an eigenvector basis, the non-ruin probability is found for several examples. Also, the non-ruin probability is calculated for two different Hamiltonians by using the tensor product. Finally, the path integral method is applied to the examples, and a comparison is made between stochastic simulations and the path integral calculation.
Keywords: quantum physics, Hamiltonian system, path integral, tensor product, ruin probability
Procedia PDF Downloads 333
6367 A Genetic Algorithm Approach to Solve a Weaving Job Scheduling Problem, Aiming Tardiness Minimization
Authors: Carolina Silva, João Nuno Oliveira, Rui Sousa, João Paulo Silva
Abstract:
This study uses genetic algorithms to solve a job scheduling problem in a weaving factory. The underlying problem is an NP-hard problem concerning unrelated parallel machines with sequence-dependent setup times. This research uses real data from a weaving industry located in the north of Portugal, with a capacity of 96 looms and a production, on average, of 440,000 meters of fabric per month. Besides, this study includes a high level of complexity, since most of the real production constraints are applied, and several real data instances are tested. Topics such as data analysis and algorithm performance are addressed and tested, to offer a solution that can generate reliable due-date results. All the approaches will be tested in the operational environment and the KPIs monitored, to understand the solution's impact on production, with a particular focus on the total number of weeks of late deliveries to clients. Thus, the main goal of this research is to develop a solution that allows the automatic generation of optimized production plans, aiming at tardiness minimization.
Keywords: genetic algorithms, textile industry, job scheduling, optimization
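A genetic algorithm for this problem needs, at minimum, a fitness function that evaluates the total tardiness of a candidate assignment of jobs to looms with sequence-dependent setups. The sketch below shows only that fitness evaluation; the processing times, setup matrix, due dates and chromosome representation are made-up illustrative assumptions, not the factory's data or the authors' encoding.

```python
import numpy as np

proc = np.array([[5, 7], [6, 4], [3, 8]])            # job x machine processing time (days)
setup = np.array([[0, 2, 1], [2, 0, 3], [1, 3, 0]])  # sequence-dependent setup times (days)
due = np.array([6, 10, 9])                            # job due dates (days)

def total_tardiness(chromosome):
    """chromosome: list of (machine, [ordered job indices]) pairs."""
    tardiness = 0
    for machine, jobs in chromosome:
        t, prev = 0, None
        for j in jobs:
            if prev is not None:
                t += setup[prev, j]                   # changeover from previous job
            t += proc[j, machine]
            tardiness += max(0, t - due[j])           # lateness counts, earliness does not
            prev = j
    return tardiness

print(total_tardiness([(0, [0, 2]), (1, [1])]))       # fitness of one candidate schedule
```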
Procedia PDF Downloads 155
6366 Models, Resources and Activities of Project Scheduling Problems
Authors: Jorge A. Ruiz-Vanoye, Ocotlán Díaz-Parra, Alejandro Fuentes-Penna, José J. Hernández-Flores, Edith Olaco Garcia
Abstract:
The Project Scheduling Problem (PSP) is a generic name given to a whole class of problems in which the best form, time, resources and costs for project scheduling are sought. The PSP is an application area related to project management. This paper aims to be a guide to understanding the PSP by presenting a survey of its general parameters: the resources (those elements that realize the activities of a project) and the activities (the set of operations or tasks of a person or organization); the mathematical models of the main variants of the PSP; and the algorithms used to solve these variants. Project scheduling is an important task in project management. This paper contains mathematical models, resources, activities, and algorithms of project scheduling problems. The project scheduling problem has attracted researchers from the automotive industry, steel manufacturing, medical research, pharmaceutical research, the telecommunications industry, the aviation industry, software development, manufacturing management, innovation and technology management, the construction industry, government project management, financial services, machine scheduling, transportation management, and others. Project managers need to finish a project with the minimum cost and the maximum quality.
Keywords: PSP, combinatorial optimization problems, project management, manufacturing management, technology management
Procedia PDF Downloads 416
6365 JavaScript Object Notation Data against eXtensible Markup Language Data in Software Applications a Software Testing Approach
Authors: Theertha Chandroth
Abstract:
This paper presents a comparative study on how to check JSON (JavaScript Object Notation) data against XML (eXtensible Markup Language) data from a software testing point of view. JSON and XML are widely used data interchange formats, each with its unique syntax and structure. The objective is to explore various techniques and methodologies for validating the comparison and integration of JSON data with XML data and vice versa. By understanding the process of checking JSON data against XML data, testers, developers and data practitioners can ensure accurate data representation, seamless data interchange, and effective data validation.
Keywords: XML, JSON, data comparison, integration testing, Python, SQL
Procedia PDF Downloads 138
6364 Blockchain-Based Decentralized Architecture for Secure Medical Records Management
Authors: Saeed M. Alshahrani
Abstract:
This research integrated blockchain technology to reform medical records management in healthcare informatics. It was aimed at resolving the limitations of centralized systems by establishing a secure, decentralized, and user-centric platform. The system was architected with a sophisticated three-tiered structure, integrating advanced cryptographic methodologies, consensus algorithms, and the Fast Healthcare Interoperability Resources (HL7 FHIR) standard to ensure data security, transaction validity, and semantic interoperability. The research has profound implications for healthcare delivery, patient care, legal compliance, operational efficiency, and academic advancements in the blockchain technology and healthcare IT sectors. The methodology adopted in this research comprises a preliminary feasibility study, a literature review, design and development, cryptographic algorithm integration, data modeling, and system testing. The research employed a permissioned blockchain with a Practical Byzantine Fault Tolerance (PBFT) consensus algorithm and Ethereum-based smart contracts. It integrated advanced cryptographic algorithms, role-based access control, multi-factor authentication, and RESTful APIs to ensure security, regulate access, authenticate user identities, and facilitate seamless data exchange between the blockchain and legacy healthcare systems. The research contributed to the development of a secure, interoperable, and decentralized system for managing medical records, addressing the limitations of the centralized systems that were in place. Future work will delve into optimizing the system further, exploring additional blockchain use cases in healthcare, and expanding the adoption of the system globally, contributing to the evolution of global healthcare practices and policies.
Keywords: healthcare informatics, blockchain, medical records management, decentralized architecture, data security, cryptographic algorithms
Procedia PDF Downloads 54
6363 Comparative Analysis of Classification Methods in Determining Non-Active Student Characteristics in Indonesia Open University
Authors: Dewi Juliah Ratnaningsih, Imas Sukaesih Sitanggang
Abstract:
Classification is one of the data mining techniques that aim to discover from training data a model that assigns records to the appropriate category or class. Data mining classification methods can be applied in education, for example, to determine the classification of non-active students at Indonesia Open University. This paper presents a comparison of three classification methods: Naïve Bayes, Bagging, and C4.5. The criteria used to evaluate the performance of the three classification methods are stratified cross-validation, the confusion matrix, the value of the area under the ROC curve (AUC), recall, precision, and F-measure. The data used for this paper are from the non-active Indonesia Open University students in the registration period of 2004.1 to 2012.2. The target analysis requires that non-active students be divided into 3 groups: C1, C2, and C3. The data analyzed comprise 4173 students. Results of the study show: (1) the Bagging method gave a higher degree of classification accuracy than Naïve Bayes and C4.5, (2) the Bagging classification accuracy rate is 82.99%, while those of Naïve Bayes and C4.5 are 80.04% and 82.74%, respectively, (3) the tree produced by the Bagging classification method has a large number of nodes, so decision making with it is quite difficult, (4) the classification of non-active Indonesia Open University student characteristics therefore uses the C4.5 algorithm, (5) based on the C4.5 algorithm, there are 5 interesting rules which can describe the characteristics of non-active Indonesia Open University students.
Keywords: comparative analysis, data mining, classification, Bagging, Naïve Bayes, C4.5, non-active students, Indonesia Open University
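The comparison protocol described above can be sketched with scikit-learn stand-ins: GaussianNB for Naïve Bayes, BaggingClassifier for Bagging, and an entropy-based decision tree as a C4.5-like substitute, all scored with stratified cross-validation. The synthetic three-class data below replaces the student records, and the number of folds is an assumption.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(7)
X = rng.normal(size=(500, 6))                       # placeholder student features
y = rng.integers(0, 3, size=500)                    # three classes: C1, C2, C3

models = {
    "Naive Bayes": GaussianNB(),
    "Bagging": BaggingClassifier(n_estimators=50),  # bagged decision trees
    "C4.5-like tree": DecisionTreeClassifier(criterion="entropy"),
}

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=7)
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=cv, scoring="accuracy").mean()
    print(f"{name}: accuracy = {acc:.3f}")
```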
Procedia PDF Downloads 314
6362 Meta-Learning for Hierarchical Classification and Applications in Bioinformatics
Authors: Fabio Fabris, Alex A. Freitas
Abstract:
Hierarchical classification is a special type of classification task where the class labels are organised into a hierarchy, with more generic class labels being ancestors of more specific ones. Meta-learning for classification-algorithm recommendation consists of recommending to the user a classification algorithm, from a pool of candidate algorithms, for a dataset, based on the past performance of the candidate algorithms on other datasets. Meta-learning is normally used in conventional, non-hierarchical classification. By contrast, this paper proposes a meta-learning approach for the more challenging task of hierarchical classification and evaluates it on a large number of bioinformatics datasets. Hierarchical classification is especially relevant for bioinformatics problems, as protein and gene functions tend to be organised into a hierarchy of class labels. This work proposes a meta-learning approach for recommending the best hierarchical classification algorithm for a hierarchical classification dataset. This work's contributions are: 1) proposing an algorithm for splitting hierarchical datasets into new datasets to increase the number of meta-instances, 2) proposing meta-features for hierarchical classification, and 3) interpreting decision-tree meta-models for hierarchical classification algorithm recommendation.
Keywords: algorithm recommendation, meta-learning, bioinformatics, hierarchical classification
Procedia PDF Downloads 311
6361 Status of Communication and Swallowing Therapy in Patient with a Tracheostomy
Authors: Ya-Hui Wang
Abstract:
A lower speech therapy rate for tracheostomized patients was noted in comparison with previous research. This study aims to shed light on the referral status of speech therapy for these patients in Taiwan. The study developed an analysis of the size and key characteristics of the population of tracheostomized in-patients in Taiwan. Method: We analyzed National Healthcare Insurance data (The Collaboration Center of Health Information Application, CCHIA) from Jan 1, 2010 to Dec 31, 2010. Result: Over age 3, the number of tracheostomized in-patients is directly proportional to age. A high service loading was observed in the North region in comparison with other regions. Only 4.87% of the tracheostomized in-patients were referred for speech therapy, 1.9% for a swallowing examination, and 2.5% for communication evaluation.
Keywords: refer, speech therapy, training, rehabilitation
Procedia PDF Downloads 439
6360 Comparison Between a Droplet Digital PCR and Real Time PCR Method in Quantification of HBV DNA
Authors: Surangrat Srisurapanon, Chatchawal Wongjitrat, Navin Horthongkham, Ruengpung Sutthent
Abstract:
HBV infection causes a potentially serious public health problem. The ability to detect the HBV DNA concentration is of importance, and the methods have improved continuously. With quantitative Polymerase Chain Reaction (qPCR), several factors that require standardization - the source of material, the calibration standard curve and the PCR efficiency - are inconsistent. Digital PCR (dPCR) is an alternative PCR-based technique for absolute quantification using Poisson statistics, without requiring a standard curve. Therefore, the aim of this study is to compare the HBV DNA data sets generated by the dPCR and qPCR methods. All samples were quantified by Abbott's real-time PCR, and 54 samples with 2-6 log10 HBV DNA were selected for comparison with dPCR. Of these 54 samples, two outlier samples were defined as negative by dPCR, whereas 52 samples were positive by both tests. The difference between the two assays was less than 0.25 log IU/mL in 24/52 paired samples (46%), less than 0.5 log IU/mL in 46/52 samples (88%), and less than 1 log in 50/52 samples (96%). The correlation coefficient was r=0.788 and the P-value was <0.0001. Compared to qPCR, the data generated by dPCR tend to be overestimated in samples with a low HBV DNA concentration and underestimated in samples with a high viral load. The variation in the dPCR DNA measurement might be due to pre-amplification bias of the template. Moreover, a minor drawback of dPCR is the large quantity of DNA that has to be used compared to qPCR. Since the technology is relatively new, the limitations of this assay will be improved.
Keywords: hepatitis B virus, real-time PCR, digital PCR, DNA quantification
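The Poisson statistics behind dPCR quantification can be shown with a short worked example: from the fraction of positive partitions, estimate the mean number of copies per partition and hence the concentration. This is a generic illustration, not vendor software; the partition counts and droplet volume below are assumed values.

```python
import math

positive, total = 6200, 20000          # hypothetical partition counts
partition_volume_ul = 0.00085          # assumed droplet volume (~0.85 nL)

p = positive / total                   # fraction of positive partitions
lam = -math.log(1.0 - p)               # mean HBV DNA copies per partition (Poisson)
copies_per_ul = lam / partition_volume_ul

print(f"lambda = {lam:.3f} copies/partition")
print(f"concentration = {copies_per_ul:.0f} copies per microlitre")
print(f"log10 concentration = {math.log10(copies_per_ul):.2f}")
```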
Procedia PDF Downloads 481
6359 Inter Laboratory Comparison with Coordinate Measuring Machine and Uncertainty Analysis
Authors: Tugrul Torun, Ihsan A. Yuksel, Si̇nem On Aktan, Taha K. Vezi̇roglu
Abstract:
In the quality control processes of some industries, the usage of CMMs has increased in recent years. Consequently, CMMs play important roles in the acceptance or rejection of manufactured parts. For parts, it is important to be able to make decisions by performing fast measurements. According to the related technical drawing and its tolerances, measurement uncertainty should also be considered during assessment. Since uncertainty calculation is difficult and time-consuming, most companies ignore the uncertainty value in their routine inspection method. Although studies on measurement uncertainty have been carried out on CMMs in recent years, there is still no generally applicable method for analyzing task-specific measurement uncertainty. There is a standard series for calculating measurement uncertainty (ISO 15530), but it is not practical for the routine industrial measurement workflow. In this study, an inter-laboratory comparison test has been carried out at ROKETSAN A.Ş. with all dimensional inspection units. The reference part that we used is traceable to the national metrology institute TUBİTAK UME. Each unit measured the reference part according to the related technical drawings, and the task-specific measurement uncertainty has been calculated with the related parameters. From the measurement results and uncertainty values, the En values have been calculated.
Keywords: coordinate measurement, CMM, comparison, uncertainty
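For readers unfamiliar with the En criterion commonly used in interlaboratory comparisons (e.g., ISO/IEC 17043), the sketch below computes it for one laboratory against the reference value: |En| ≤ 1 indicates agreement within the expanded uncertainties. The numerical values are illustrative, not ROKETSAN measurement data.

```python
import math

def en_value(x_lab, u_lab, x_ref, u_ref):
    """Normalised error: x = measured values, u = expanded uncertainties (k=2)."""
    return (x_lab - x_ref) / math.sqrt(u_lab ** 2 + u_ref ** 2)

# hypothetical length measurement of the reference part, in mm
en = en_value(x_lab=25.012, u_lab=0.008, x_ref=25.006, u_ref=0.004)
print(f"En = {en:.2f} ->", "satisfactory" if abs(en) <= 1 else "unsatisfactory")
```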
Procedia PDF Downloads 210
6358 Acceleration of Lagrangian and Eulerian Flow Solvers via Graphics Processing Units
Authors: Pooya Niksiar, Ali Ashrafizadeh, Mehrzad Shams, Amir Hossein Madani
Abstract:
There are many computationally demanding applications in science and engineering which need efficient algorithms implemented on high performance computers. Recently, Graphics Processing Units (GPUs) have drawn much attention as compared to traditional CPU-based hardware and have opened up new improvement venues in scientific computing. One particular application area is Computational Fluid Dynamics (CFD), in which mature CPU-based codes need to be converted to GPU-based algorithms to take advantage of this new technology. In this paper, numerical solutions of two classes of discrete fluid flow models via both CPU and GPU are discussed and compared. Test problems include an Eulerian model of a two-dimensional incompressible laminar flow case and a Lagrangian model of a two-phase flow field. The CUDA programming standard is used to employ an NVIDIA GPU with 480 cores, and a C++ serial code is run on a single core of an Intel quad-core CPU. Up to two orders of magnitude speed-up is observed on the GPU for a certain range of grid resolutions or particle numbers. As expected, the Lagrangian formulation is better suited for parallel computations on the GPU, although the Eulerian formulation represents a significant speed-up too.
Keywords: CFD, Eulerian formulation, graphics processing units, Lagrangian formulation
Procedia PDF Downloads 412
6357 Development of Digital Twin Concept to Detect Abnormal Changes in Structural Behaviour
Authors: Shady Adib, Vladimir Vinogradov, Peter Gosling
Abstract:
Digital Twin (DT) technology is a new technology that appeared in the early 21st century. The DT is defined as the digital representation of living and non-living physical assets. By connecting the physical and virtual assets, data are transmitted smoothly, allowing the virtual asset to fully represent the physical asset. Although many studies have been conducted on the DT concept, there is still limited information about the ability of DT models to monitor and detect unexpected changes in structural behaviour in real time. This is due to the large computational effort required for the analysis and the excessively large amount of data transferred from sensors. This paper aims to develop the DT concept so that it can detect abnormal changes in structural behaviour in real time using advanced modelling techniques, deep learning algorithms, and data acquisition systems, taking model uncertainties into consideration. Finite element (FE) models were first developed offline to be used with a reduced basis (RB) model order reduction technique for the construction of a low-dimensional space to speed up the analysis during the online stage. The RB model was validated against experimental test results for the establishment of a DT model of a two-dimensional truss. The established DT model and deep learning algorithms were used to identify the location of damage once it had appeared during the online stage. Finally, the RB model was used again to identify the damage severity. It was found that using the RB model, constructed offline, speeds up the FE analysis during the online stage. The constructed RB model showed higher accuracy for predicting the damage severity, while deep learning algorithms were found to be useful for estimating the location of damage with small severity.
Keywords: data acquisition system, deep learning, digital twin, model uncertainties, reduced basis, reduced order model
Procedia PDF Downloads 97