Search results for: searching algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3902


2012 Blockchain-Based Decentralized Architecture for Secure Medical Records Management

Authors: Saeed M. Alshahrani

Abstract:

This research integrated blockchain technology to reform medical records management in healthcare informatics. It was aimed at resolving the limitations of centralized systems by establishing a secure, decentralized, and user-centric platform. The system was architected with a sophisticated three-tiered structure, integrating advanced cryptographic methodologies, consensus algorithms, and the Fast Healthcare Interoperability Resources (HL7 FHIR) standard to ensure data security, transaction validity, and semantic interoperability. The research has profound implications for healthcare delivery, patient care, legal compliance, operational efficiency, and academic advancements in blockchain technology and healthcare IT sectors. The methodology adopted in this research comprises a preliminary feasibility study, literature review, design and development, cryptographic algorithm integration, data modeling, and system testing. The research employed a permissioned blockchain with a Practical Byzantine Fault Tolerance (PBFT) consensus algorithm and Ethereum-based smart contracts. It integrated advanced cryptographic algorithms, role-based access control, multi-factor authentication, and RESTful APIs to ensure security, regulate access, authenticate user identities, and facilitate seamless data exchange between the blockchain and legacy healthcare systems. The research contributed to the development of a secure, interoperable, and decentralized system for managing medical records, addressing the limitations of the centralized systems that were in place. Future work will delve into optimizing the system further, exploring additional blockchain use cases in healthcare, and expanding the adoption of the system globally, contributing to the evolution of global healthcare practices and policies.
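
As a rough illustration of the record-keeping idea above, the sketch below shows a hash-chained ledger with a simple role-based write check in Python. The class, roles, and FHIR-style payload are illustrative assumptions; the paper's actual PBFT consensus, smart contracts, and FHIR integration are not reproduced here.

```python
import hashlib
import json
import time

# Hypothetical minimal sketch of a hash-chained record ledger with
# role-based access control; names and fields are illustrative only.
class MedicalRecordLedger:
    def __init__(self):
        self.chain = [{"index": 0, "prev_hash": "0" * 64, "payload": "genesis",
                       "timestamp": time.time()}]

    @staticmethod
    def _hash(block):
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def append_record(self, payload, role):
        # Only clinicians may write; patients and auditors are read-only here.
        if role != "clinician":
            raise PermissionError(f"role '{role}' cannot write records")
        prev = self.chain[-1]
        block = {"index": prev["index"] + 1, "prev_hash": self._hash(prev),
                 "payload": payload, "timestamp": time.time()}
        self.chain.append(block)
        return block

    def verify(self):
        # Recompute the hash chain to detect tampering.
        return all(self.chain[i]["prev_hash"] == self._hash(self.chain[i - 1])
                   for i in range(1, len(self.chain)))

ledger = MedicalRecordLedger()
ledger.append_record({"patient": "P001", "resource": "FHIR Observation"}, role="clinician")
print(ledger.verify())  # True
```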

Keywords: healthcare informatics, blockchain, medical records management, decentralized architecture, data security, cryptographic algorithms

Procedia PDF Downloads 55
2011 Improving the Penalty-free Multi-objective Evolutionary Design Optimization of Water Distribution Systems

Authors: Emily Kambalame

Abstract:

Water distribution networks necessitate substantial investment for construction, prompting researchers to seek cost reduction and efficient design solutions. Optimization techniques are employed in this regard to address these challenges. In this context, the penalty-free multi-objective evolutionary algorithm (PFMOEA) coupled with pressure-dependent analysis (PDA) was utilized to develop a multi-objective evolutionary search for the optimization of water distribution systems (WDSs). The aim of this research was to find out if the computational efficiency of the PFMOEA for WDS optimization could be enhanced. This was done by applying real coding representation and retaining different percentages of feasible and infeasible solutions close to the Pareto front in the elitism step of the optimization. Two benchmark network problems, namely the Two-looped and Hanoi networks, were utilized in the study. A comparative analysis was then conducted to assess the performance of the real-coded PFMOEA in relation to other approaches described in the literature. The algorithm demonstrated competitive performance for the two benchmark networks by implementing real coding. The real-coded PFMOEA achieved the novel best-known solutions ($419,000 and $6.081 million) and a zero-pressure deficit for the two networks, requiring fewer function evaluations than the binary-coded PFMOEA. In previous PFMOEA studies, elitism applied a default retention of 30% of the least-cost feasible solutions while excluding all infeasible solutions. It was found in this study that by replacing 10% and 15% of the feasible solutions with infeasible ones that are close to the Pareto front with minimal pressure deficit violations, the computational efficiency of the PFMOEA was significantly enhanced. The configuration of 15% feasible and 15% infeasible solutions outperformed other retention allocations by identifying the optimal solution with the fewest function evaluations.
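
A minimal sketch of the modified elitism step described above, assuming each candidate solution carries a cost and a pressure-deficit value; the retention fractions mirror the 15%/15% configuration reported, while the solution encoding is invented for illustration.

```python
# Illustrative sketch: retain a share of least-cost feasible solutions and a
# share of low-violation infeasible solutions near the Pareto front.
def elitist_selection(population, pop_size, feas_frac=0.15, infeas_frac=0.15):
    """population: list of dicts with 'cost' and 'pressure_deficit' keys."""
    feasible = [s for s in population if s["pressure_deficit"] <= 0.0]
    infeasible = [s for s in population if s["pressure_deficit"] > 0.0]

    n_feas = int(feas_frac * pop_size)
    n_infeas = int(infeas_frac * pop_size)

    # Least-cost feasible solutions first ...
    elites = sorted(feasible, key=lambda s: s["cost"])[:n_feas]
    # ... then infeasible solutions with the smallest pressure-deficit violations.
    elites += sorted(infeasible, key=lambda s: s["pressure_deficit"])[:n_infeas]
    return elites

# Example: a toy population of candidate network designs.
pop = [{"cost": c, "pressure_deficit": d}
       for c, d in [(450, 0.0), (430, 0.0), (419, 0.0), (410, 1.2), (405, 0.4)]]
print(elitist_selection(pop, pop_size=20))
```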

Keywords: design optimization, multi-objective evolutionary, penalty-free, water distribution systems

Procedia PDF Downloads 62
2010 Ring Finger Protein 2 (RNF2) Targeting by miRNAs in Breast Cancer Cell Lines

Authors: Ceyda Okudu, Secil Eroglu, Khandakar A. S. M. Saadat, Sibel O. Balci

Abstract:

Ring Finger Protein 2 (RNF2) is a member of polycomb repressive complex 1 (PRC1), which is one of the epigenetic regulators in the genome. When RNF2 combines with other PRC1 members, it mediates the mono-ubiquitination of Histone 2A (H2A). In breast cancer, RNF2 is commonly overexpressed, and it also promotes metastasis and invasion in other aggressive tumors like melanoma, prostate, and hepatocarcinoma. The role of RNF2 in the metastasis and invasion of breast cancer has not yet been elucidated. Our aim in this study is to observe the role of RNF2 in metastasis and invasion by miRNA-mediated RNF2 gene silencing in breast cancer cell lines. We selected miRNAs targeting RNF2 by searching online databases. miR-17-5p, miR-20a-5p, and miR-106b-5p were transfected into breast cancer cell lines (MCF-7, MDA-MB-231, SK-BR-3, and ZR-75-1), and we also used a normal breast epithelial cell line (hTERT-HME1) to compare RNF2 gene expression levels. At 48-72 hours post-transfection, mRNAs were isolated from the cells, and gene expression was measured by RT-qPCR after cDNA synthesis. We observed that RNF2 was highly expressed in the SK-BR-3 and MDA-MB-231 cell lines, in contrast to the MCF-7 and ZR-75-1 cell lines. RNF2 was downregulated 5-, 5-, and 7-fold by miR-17-5p, miR-20a-5p, and miR-106b-5p, respectively, in MCF-7. However, in the SK-BR-3 and ZR-75-1 cell lines, the miRNAs did not significantly affect the RNF2 gene expression level. In MDA-MB-231, miR-20a-5p decreased RNF2 3-fold, whereas miR-17-5p and miR-106b-5p had no effect. After gene expression analysis, we performed metastasis and invasion assays in MCF-7 cells. For metastasis, we used both the wound healing assay and the Transwell Cell Migration Assay, and we used the Transwell Cell Invasion Assay for invasion. The data of these assays showed that miR-17-5p and miR-20a-5p decreased both invasion and metastasis levels, but miR-106b-5p had no effect. We conclude that RNF2 can be targeted by miR-17-5p, miR-20a-5p, and miR-106b-5p in MCF-7 cells and that RNF2, which is one of the upregulated genes in aggressive tumors, can be decreased by using these miRNAs. In the future, we would like to confirm these results at the protein level and determine whether RNF2 is a direct target of these miRNAs.

Keywords: breast cancer, epigenetic, microRNAs, RNF2

Procedia PDF Downloads 180
2009 Transfer Function Model-Based Predictive Control for Nuclear Core Power Control in PUSPATI TRIGA Reactor

Authors: Mohd Sabri Minhat, Nurul Adilla Mohd Subha

Abstract:

The 1MWth PUSPATI TRIGA Reactor (RTP) at the Malaysia Nuclear Agency has been operating for more than 35 years. The existing core power control uses a conventional controller known as the Feedback Control Algorithm (FCA). It is technically challenging to keep the core power output stable and operating within acceptable error bands to meet the safety demands of the RTP. The power tracking performance of the current system could be considered unsatisfactory, and there is still significant room for improvement. Hence, a new core power control design is very important to improve the current tracking and regulation of reactor power by controlling the movement of the control rods, suited to the demands of highly sensitive nuclear reactor power control. In this paper, the proposed Model Predictive Control (MPC) law was applied to control the core power. The model for core power control was based on mathematical models of the reactor core, MPC, and a control rod selection algorithm. The mathematical models of the reactor core were based on the point kinetics model, thermal-hydraulic models, and reactivity models. The proposed MPC was presented in a transfer function model of the reactor core according to perturbation theory. The transfer function model-based predictive control (TFMPC) was developed to design the core power control with predictions based on a T-filter towards the real-time implementation of MPC on hardware. This paper introduces the sensitivity functions for the TFMPC feedback loop to reduce the impact on the input actuation signal and demonstrates the behaviour of TFMPC in terms of disturbance and noise rejection. The tracking and regulating performance of the conventional controller and TFMPC were compared using MATLAB and analysed. In conclusion, the proposed TFMPC has satisfactory performance in tracking and regulating core power for controlling a nuclear reactor with high reliability and safety.
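
To make the receding-horizon idea concrete, here is a minimal sketch of model predictive control for a first-order discrete transfer-function model. The plant coefficients, horizon, and weights are illustrative stand-ins, not the RTP core model or the T-filter formulation of the paper.

```python
import numpy as np

# Minimal receding-horizon sketch for a first-order discrete model
# y[k+1] = a*y[k] + b*u[k]; all coefficients are illustrative.
def mpc_step(y0, ref, a=0.9, b=0.05, horizon=10, u_weight=0.01):
    # Build prediction matrices so that y = F*y0 + G*u over the horizon.
    F = np.array([a ** (i + 1) for i in range(horizon)])
    G = np.zeros((horizon, horizon))
    for i in range(horizon):
        for j in range(i + 1):
            G[i, j] = a ** (i - j) * b
    # Quadratic cost ||G u + F y0 - ref||^2 + u_weight ||u||^2 via least squares.
    A = np.vstack([G, np.sqrt(u_weight) * np.eye(horizon)])
    rhs = np.concatenate([ref - F * y0, np.zeros(horizon)])
    u = np.linalg.lstsq(A, rhs, rcond=None)[0]
    return u[0]  # apply only the first move (receding horizon)

# Track a normalised power setpoint of 1.0 starting from y = 0.2.
y = 0.2
for _ in range(50):
    u = mpc_step(y, ref=np.ones(10))
    y = 0.9 * y + 0.05 * u
print(round(y, 3))  # converges towards the setpoint
```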

Keywords: core power control, model predictive control, PUSPATI TRIGA reactor, TFMPC

Procedia PDF Downloads 241
2008 Parallel Multisplitting Methods for Differential Systems

Authors: Malika El Kyal, Ahmed Machmoum

Abstract:

We prove the superlinear convergence of asynchronous multisplitting methods applied to differential equations. The study is based on the technique of nested sets, which permits specifying the kind of convergence obtained in the asynchronous mode. The main characteristic of an asynchronous mode is that the local algorithm does not have to wait at predetermined points for messages to become available. We allow some processors to communicate more frequently than others, and we allow the communication delays to be substantial and unpredictable. Note that synchronous algorithms, in the computer science sense, are particular cases of our formulation of asynchronous ones.

Keywords: parallel methods, asynchronous mode, multisplitting, ODE

Procedia PDF Downloads 526
2007 Estimating the Traffic Impacts of Green Light Optimal Speed Advisory Systems Using Microsimulation

Authors: C. B. Masera, M. Imprialou, L. Budd, C. Morton

Abstract:

Even though signalised intersections are necessary for urban road traffic management, they can act as bottlenecks and disrupt traffic operations. Interrupted traffic flow causes congestion, delays, stop-and-go conditions (i.e. excessive acceleration/deceleration) and longer journey times. Vehicle and infrastructure connectivity offers the potential to provide improved new services with additional functions of assisting drivers. This paper focuses on one of the applications of vehicle-to-infrastructure communication, namely Green Light Optimal Speed Advisory (GLOSA). To assess the effectiveness of GLOSA in the urban road network, an integrated microscopic traffic simulation framework is built in the VISSIM software. Vehicle movements and vehicle-infrastructure communications are simulated through the External Driver Model interface. A control algorithm is developed for recommending an optimal speed that is continuously updated in every time step for all vehicles approaching a signal-controlled point. This algorithm allows vehicles to pass a traffic signal without stopping or to minimise stopping times at a red phase. This study is performed with connected vehicles at a 100% penetration rate. Conventional vehicles are also simulated in the same network as a reference. A straight road segment composed of two opposite directions with two traffic lights per lane is studied. The simulation is implemented under traffic volumes of 150 and 200 vehicles per hour per lane to identify how different traffic densities influence the benefits of GLOSA. The results indicate that traffic flow is improved by the application of GLOSA. According to this study, vehicles passed through the traffic lights more smoothly, and waiting times were reduced by up to 28 seconds. Average delays decreased for the entire network by 86.46% and 83.84% under traffic densities of 150 vehicles per hour per lane and 200 vehicles per hour per lane, respectively.
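
A hedged sketch of the kind of advisory-speed rule a GLOSA algorithm applies at each time step: given the distance to the stop line and the signal timing, it recommends a speed that lets the vehicle arrive during a green phase. The speed limits and timings are assumptions for the example, not the study's control algorithm.

```python
# Illustrative GLOSA-style advisory speed rule; all parameters are assumptions.
def advisory_speed(distance_m, time_to_green_s, green_duration_s,
                   v_min=5.0, v_max=13.9):  # m/s (about 18-50 km/h)
    if time_to_green_s <= 0.0:
        # Signal is already green: drive at the maximum permitted speed.
        return v_max
    # Arriving exactly when the phase turns green requires this speed ...
    v_at_green_start = distance_m / time_to_green_s
    # ... and arriving at the end of the green window requires this one.
    v_at_green_end = distance_m / (time_to_green_s + green_duration_s)
    if v_at_green_start <= v_max:
        return max(v_min, min(v_max, v_at_green_start))
    if v_at_green_end <= v_max:
        return max(v_min, v_at_green_end)
    return v_min  # cannot make this green window; slow down and wait

print(round(advisory_speed(distance_m=150, time_to_green_s=20, green_duration_s=15), 2))
```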

Keywords: connected vehicles, GLOSA, intelligent transport systems, vehicle-to-infrastructure communication

Procedia PDF Downloads 171
2006 Optimal MRO Process Scheduling with Rotable Inventory to Minimize Total Earliness

Authors: Murat Erkoc, Kadir Ertogral

Abstract:

Maintenance, repair and overhauling (MRO) of high cost equipment used in many industries such as transportation, military and construction are typically subject to regulations set by local governments or international agencies. Aircrafts are prime examples for this kind of equipment. Such equipment must be overhauled at certain intervals for continuing permission of use. As such, the overhaul must be completed by strict deadlines, which often times cannot be exceeded. Due to the fact that the overhaul is typically a long process, MRO companies carry so called rotable inventory for exchange of expensive modules in the overhaul process of the equipment so that the equipment continue its services with minimal interruption. The extracted module is overhauled and returned back to the inventory for future exchange, hence the name rotable inventory. However, since the rotable inventory and overhaul capacity are limited, it may be necessary to carry out some of the exchanges earlier than their deadlines in order to produce a feasible overhaul schedule. An early exchange results with a decrease in the equipment’s cycle time in between overhauls and as such, is not desired by the equipment operators. This study introduces an integer programming model for the optimal overhaul and exchange scheduling. We assume that there is certain number of rotables at hand at the beginning of the planning horizon for a single type module and there are multiple demands with known deadlines for the exchange of the modules. We consider an MRO system with identical parallel processing lines. The model minimizes total earliness by generating optimal overhaul start times for rotables on parallel processing lines and exchange timetables for orders. We develop a fast exact solution algorithm for the model. The algorithm employs full-delay scheduling approach with backward allocation and can easily be used for overhaul scheduling problems in various MRO settings with modular rotable items. The proposed procedure is demonstrated by a case study from the aerospace industry.
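
The toy sketch below illustrates the backward-allocation intuition on a simplified version of the problem with a single fixed overhaul turnaround and no line-capacity limit; the paper's integer programming model and exact algorithm for parallel lines are not reproduced, and all numbers are invented.

```python
# Toy backward-allocation sketch: with s rotables on hand and a fixed overhaul
# turnaround T, the module freed by exchange i feeds exchange i+s, so each
# exchange time must satisfy e_i <= e_{i+s} - T. Line capacity is ignored.
def backward_allocation(deadlines, n_rotables, turnaround):
    d = sorted(deadlines)
    e = list(d)  # start from the latest possible exchange times (full delay)
    # Walk backwards and pull exchanges earlier only when a later demand
    # needs the freed module back from overhaul in time.
    for i in range(len(d) - n_rotables - 1, -1, -1):
        e[i] = min(d[i], e[i + n_rotables] - turnaround)
    total_earliness = sum(di - ei for di, ei in zip(d, e))
    return e, total_earliness

deadlines = [10, 14, 18, 20, 26, 30]
schedule, earliness = backward_allocation(deadlines, n_rotables=2, turnaround=8)
print(schedule, earliness)  # one exchange is pulled 2 time units early
```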

Keywords: rotable inventory, full-delay scheduling, maintenance, overhaul, total earliness

Procedia PDF Downloads 544
2005 Detection of Safety Goggles on Humans in Industrial Environment Using Faster Region-Based Convolutional Neural Network with Rotated Bounding Box

Authors: Ankit Kamboj, Shikha Talwar, Nilesh Powar

Abstract:

To successfully deliver our products to the market, employees need to be in a safe environment, especially in an industrial and manufacturing setting. The consequences of delinquency in wearing safety glasses while working in industrial plants could pose a high risk to employees, hence the need to develop a real-time automatic detection system that detects persons (violators) not wearing safety glasses. In this study, a convolutional neural network (CNN) algorithm called faster region-based CNN (Faster RCNN) with rotated bounding boxes has been used for detecting safety glasses on persons; the algorithm has the advantage of detecting safety glasses at different orientation angles on the persons. The proposed method of rotated bounding boxes with a convolutional neural network first detects a person from the images, and then the method detects whether the person is wearing safety glasses or not. The video data are captured at the entrance of restricted zones of the industrial environment (manufacturing plant) and converted into images at 2 frames per second. In the first step, a CNN with weights pre-trained on the COCO dataset is used for person detection, and the detections are cropped as images. Then the safety goggles are labelled on the cropped images using the image labelling tool roLabelImg, which is used to annotate the ground truth values of rotated objects more accurately, and the annotations obtained are further modified to depict the four coordinates of the rectangular bounding box. Next, the Faster RCNN with rotated bounding boxes is used to detect safety goggles and is compared with the traditional bounding box Faster RCNN in terms of detection accuracy (average precision), which shows the effectiveness of the proposed method for the detection of rotated objects. The deep learning benchmarking is done on a Dell workstation with a 16GB Nvidia GPU.
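
As a small illustration of the annotation step, the helper below converts a rotated-box representation (centre, size, angle), as produced by tools like roLabelImg, into the four corner coordinates; it is a generic geometric utility, not the authors' pipeline.

```python
import numpy as np

# Convert a rotated box (cx, cy, w, h, angle) into its four corner points.
def rotated_box_to_corners(cx, cy, w, h, angle_rad):
    # Corners of an axis-aligned box centred at the origin ...
    corners = np.array([[-w / 2, -h / 2], [w / 2, -h / 2],
                        [w / 2,  h / 2], [-w / 2,  h / 2]])
    # ... rotated by the box angle and shifted to the box centre.
    rot = np.array([[np.cos(angle_rad), -np.sin(angle_rad)],
                    [np.sin(angle_rad),  np.cos(angle_rad)]])
    return corners @ rot.T + np.array([cx, cy])

print(rotated_box_to_corners(100, 50, 40, 20, np.pi / 6).round(1))
```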

Keywords: CNN, deep learning, faster RCNN, roLabelImg, rotated bounding box, safety goggle detection

Procedia PDF Downloads 130
2004 Gentrification in Istanbul: The Twin Paradox

Authors: Tugce Caliskan

Abstract:

The gentrification literature in Turkey provided important insights into the analysis of socio-spatial change in İstanbul, mostly through the existing gentrification theories produced in the Anglo-American literature. Yet early research focused on classical gentrification while failing to notice other place-specific forms of the phenomenon. It was only after the mid-2000s that scholarly attention shifted to recent discussions in the mainstream, such as neoliberal urban policies, government involvement, and resistance. Although these studies have considerable potential to contribute to the geography of gentrification, it seems that copying the linear timeline of the Anglo-American conceptualization limited the space to introduce a contextually nuanced account of the process in Turkey. More specifically, the gentrification literature in Turkey acknowledged the linear timeline of the process drawing on the mainstream studies and made spontaneous classical gentrification the starting point in İstanbul at the expense of contextually specific forms of the phenomenon that took place in the same years. This paper is an attempt to understand place-specific forms of gentrification through the abandonment of a linear understanding of time. In this vein, this paper approaches the process as moving both linearly and cyclically rather than as waves succeeding each other. Maintaining a dialectical relationship between cyclical and linear time, this paper investigates how the components of gentrification have taken place in the cyclical timeline while becoming bolder in the linear timeline. This paper argues that taking the (re)investment in the secondary circuit of capital and class transformation as the core characteristics of gentrification, and accordingly searching for these components beyond the linear timeline, provides strategic value to decenter the perspectives, not merely for Turkish studies. In this vein, this strategy revealed that the Western experience of gentrification was not transferred, adopted or copied in Turkey; rather, gentrification -as an abstract and general concept- has emerged as a product of different contextual, historical and temporal forces which must be considered within the framework of state-led urbanization as early as 1980, differing from Global North trajectories.

Keywords: comparative urbanism, geography of gentrification, linear and cyclical timeline, state-led gentrification

Procedia PDF Downloads 115
2003 Security System for Safe Transmission of Medical Image

Authors: Mohammed Jamal Al-Mansor, Kok Beng Gan

Abstract:

This paper develops an optimized embedding of a payload in medical images by using genetic optimization. The goal is to preserve the region of interest from being distorted by the watermark. With the developed system, there is no need for experts to manually define the region of interest, as the system applies genetic optimization to select the parts of the image that can carry the watermark while guaranteeing less distortion. The experimental results confirm that genetic-based optimization is useful for performing steganography with a lower mean square error percentage.

Keywords: AES, DWT, genetic algorithm, watermarking

Procedia PDF Downloads 411
2002 Artificial Neural Network Approach for Modeling and Optimization of Conidiospore Production of Trichoderma harzianum

Authors: Joselito Medina-Marin, Maria G. Serna-Diaz, Alejandro Tellez-Jurado, Juan C. Seck-Tuoh-Mora, Eva S. Hernandez-Gress, Norberto Hernandez-Romero, Iaina P. Medina-Serna

Abstract:

Trichoderma harzianum is a fungus that has been utilized as a low-cost fungicide for the biological control of pests, and it is important to determine the optimal conditions to produce the highest amount of conidiospores of Trichoderma harzianum. In this work, the conidiospore production of Trichoderma harzianum is modeled and optimized by using Artificial Neural Networks (ANNs). In order to gather data on this process, 30 experiments were carried out taking into account the number of hours of culture (10 values distributed from 48 to 136 hours) and the culture humidity (70, 75 and 80 percent), obtaining as a response the number of conidiospores per gram of dry mass. The experimental results were used to develop an iterative algorithm to create 1,110 ANNs with different configurations, from one to three hidden layers and every hidden layer with a number of neurons from 1 to 10. Each ANN was trained with the Levenberg-Marquardt backpropagation algorithm, which is used to learn the relationship between input and output values. The ANN with the best performance was chosen in order to simulate the process and maximize conidiospore production. The obtained ANN with the highest performance has 2 inputs and 1 output, and three hidden layers with 3, 10 and 10 neurons, respectively. The ANN performance shows an R2 value of 0.9900, and the Root Mean Squared Error is 1.2020. This ANN predicted that 644,175,467 conidiospores per gram of dry mass is the maximum amount, obtained at 117 hours of culture and 77% culture humidity. In summary, the ANN approach is suitable to represent the conidiospore production of Trichoderma harzianum because the R2 value denotes a good fit to the experimental results, and the obtained ANN model was used to find the parameters that produce the largest amount of conidiospores per gram of dry mass.
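
A sketch of the architecture enumeration described above: one to three hidden layers with 1 to 10 neurons each yields the 1,110 candidate networks mentioned. scikit-learn's MLPRegressor is used only as a stand-in, since it does not provide Levenberg-Marquardt training; the data are synthetic placeholders.

```python
from itertools import product
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score
import numpy as np

# Enumerate 1-3 hidden layers with 1-10 neurons each: 10 + 100 + 1000 = 1,110 layouts.
def enumerate_architectures(max_layers=3, max_neurons=10):
    for n_layers in range(1, max_layers + 1):
        for layout in product(range(1, max_neurons + 1), repeat=n_layers):
            yield layout

def best_architecture(X, y):
    # Train every layout and keep the one with the highest R2 (stand-in trainer).
    best = (None, -np.inf)
    for layout in enumerate_architectures():
        model = MLPRegressor(hidden_layer_sizes=layout, max_iter=2000, random_state=0)
        model.fit(X, y)
        score = r2_score(y, model.predict(X))
        if score > best[1]:
            best = (layout, score)
    return best

# Toy data standing in for (hours of culture, humidity) -> spore count.
X = np.array([[48, 70], [72, 75], [96, 80], [117, 77], [136, 75]], dtype=float)
y = np.array([1.0, 2.5, 4.0, 6.4, 5.1])
# best_architecture(X, y) would scan all 1,110 layouts (slow on real data).
print(sum(1 for _ in enumerate_architectures()))  # 1110
```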

Keywords: Trichoderma harzianum, modeling, optimization, artificial neural network

Procedia PDF Downloads 158
2001 Automatic Approach for Estimating the Protection Elements of Electric Power Plants

Authors: Mahmoud Mohammad Salem Al-Suod, Ushkarenko O. Alexander, Dorogan I. Olga

Abstract:

New algorithms using microprocessor systems have been proposed for the protection of the diesel-generator unit in autonomous power systems. The software structure is designed to enhance the control automata of the system, in which every protection module of the diesel-generator encapsulates a finite state machine.
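
A minimal finite-state-machine sketch of one hypothetical protection module; the states, thresholds, and latching behaviour are assumptions for illustration rather than the authors' actual state diagrams.

```python
from enum import Enum, auto

# Hypothetical overcurrent protection module modelled as a finite state machine.
class State(Enum):
    NORMAL = auto()
    WARNING = auto()
    TRIP = auto()

class OvercurrentProtection:
    def __init__(self, warn_level=1.1, trip_level=1.3):
        self.state = State.NORMAL
        self.warn_level = warn_level
        self.trip_level = trip_level

    def on_measurement(self, current_pu):
        # Transitions depend only on the current state and the new measurement.
        if self.state is State.TRIP:
            return self.state  # latched until an explicit reset
        if current_pu >= self.trip_level:
            self.state = State.TRIP
        elif current_pu >= self.warn_level:
            self.state = State.WARNING
        else:
            self.state = State.NORMAL
        return self.state

    def reset(self):
        self.state = State.NORMAL

relay = OvercurrentProtection()
for current in (1.0, 1.15, 1.35, 1.0):
    print(relay.on_measurement(current))  # NORMAL, WARNING, TRIP, TRIP (latched)
```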

Keywords: diesel-generator unit, protection, state diagram, control system, algorithm, software components

Procedia PDF Downloads 419
2000 Multimodal Biometric Cryptography Based Authentication in Cloud Environment to Enhance Information Security

Authors: D. Pugazhenthi, B. Sree Vidya

Abstract:

Cloud computing is one of the emerging technologies that enables end users to use the services of the cloud on a ‘pay per usage’ basis. This technology is growing at a fast pace, and so are its security threats. One of the various services provided by the cloud is storage. In this service, security is a vital factor both for authenticating legitimate users and for the protection of information. This paper presents efficient ways of authenticating users as well as securing information on the cloud. The initial phase proposed in this paper deals with an authentication technique using a multi-factor and multi-dimensional authentication system with multi-level security. Unique identification through minimally intrusive, user-behaviour-based biometrics provides greater reliability than conventional password authentication. With biometric systems, accounts are accessed only by legitimate users and not by impostors. The biometric templates employed here do not include a single trait but multiple traits, viz., iris and fingerprints. The coordinating stage of the authentication system is based on an ensemble Support Vector Machine (SVM), with optimization performed by assembling the weights of the base SVMs for the ensemble after each individual SVM is trained by the Artificial Fish Swarm Algorithm (AFSA). This helps in generating a user-specific secure cryptographic key from the multimodal biometric template by a fusion process. The data security problem is averted, and an enhanced security architecture is proposed using an encryption and decryption system with double-key cryptography based on a Fuzzy Neural Network (FNN) for data storage and retrieval in cloud computing. The proposed scheme aims to protect the records from hackers by preventing the ciphertext from being broken back into the original text. The proposed double cryptographic key scheme is thus capable of providing better user authentication and better security, distinguishing between genuine and fake users. There are three important modules in this proposed work: 1) feature extraction, 2) multimodal biometric template generation, and 3) cryptographic key generation. The extraction of the feature and texture properties from the respective fingerprint and iris images is done initially. Finally, with the help of the fuzzy neural network and a symmetric cryptography algorithm, the double-key encryption technique has been developed. As the proposed approach is based on neural networks, it has the advantage that the data cannot be decrypted by a hacker even if they have already been intercepted. The results show that the authentication process is optimal and the stored information is secured.
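
The snippet below sketches only the weighted SVM-ensemble fusion idea: one base SVM per biometric modality combined through ensemble weights. In the paper those weights come from AFSA training; here they are fixed by hand, and the "iris" and "fingerprint" features are random stand-ins.

```python
import numpy as np
from sklearn.svm import SVC

# Toy weighted SVM ensemble over two biometric modalities; weights are hand-set
# placeholders standing in for AFSA-optimized weights, and features are random.
rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, n)                       # genuine (1) vs impostor (0)
X_iris = rng.normal(y[:, None], 1.0, (n, 8))    # modality 1 features
X_finger = rng.normal(y[:, None], 1.5, (n, 8))  # modality 2 features

svm_iris = SVC(kernel="rbf").fit(X_iris, y)
svm_finger = SVC(kernel="rbf").fit(X_finger, y)

def ensemble_predict(Xi, Xf, w_iris=0.6, w_finger=0.4):
    # Weighted fusion of the signed decision values of the base SVMs.
    score = (w_iris * svm_iris.decision_function(Xi)
             + w_finger * svm_finger.decision_function(Xf))
    return (score > 0).astype(int)

pred = ensemble_predict(X_iris, X_finger)
print(f"training accuracy: {np.mean(pred == y):.2f}")
```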

Keywords: artificial fish swarm algorithm (AFSA), biometric authentication, decryption, encryption, fingerprint, fusion, fuzzy neural network (FNN), iris, multi-modal, support vector machine classification

Procedia PDF Downloads 259
1999 Reinforcement Learning For Agile CNC Manufacturing: Optimizing Configurations And Sequencing

Authors: Huan Ting Liao

Abstract:

In a typical manufacturing environment, computer numerical control (CNC) machining is essential for automating production through precise computer-controlled tool operations, significantly enhancing efficiency and ensuring consistent product quality. However, traditional CNC production lines often rely on manual loading and unloading, limiting operational efficiency and scalability. Although automated loading systems have been developed, they frequently lack sufficient intelligence and configuration efficiency, requiring extensive setup adjustments for different products and impacting overall productivity. This research addresses the job shop scheduling problem (JSSP) in CNC machining environments, aiming to minimize total completion time (makespan) and maximize CNC machine utilization. We propose a novel approach using reinforcement learning (RL), specifically the Q-learning algorithm, to optimize scheduling decisions. The study simulates the JSSP, incorporating robotic arm operations, machine processing times, and work order demand allocation to determine optimal processing sequences. The Q-learning algorithm enhances machine utilization by dynamically balancing workloads across CNC machines, adapting to varying job demands and machine states. This approach offers robust solutions for complex manufacturing environments by automating decision-making processes for job assignments. Additionally, we evaluate various layout configurations to identify the most efficient setup. By integrating RL-based scheduling optimization with layout analysis, this research aims to provide a comprehensive solution for improving manufacturing efficiency and productivity in CNC-based job shops. The proposed method's adaptability and automation potential promise significant advancements in tackling dynamic manufacturing challenges.
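
A minimal tabular Q-learning update of the kind the approach relies on; the integer state/action encoding below is a placeholder, as a real JSSP state would encode machine status and pending jobs.

```python
import numpy as np

# Tabular Q-learning update: move Q(s, a) toward reward + gamma * max_a' Q(s', a').
def q_learning_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.95):
    best_next = np.max(Q[next_state])
    Q[state, action] += alpha * (reward + gamma * best_next - Q[state, action])
    return Q

n_states, n_actions = 16, 4   # e.g. coarse machine-load buckets x candidate jobs
Q = np.zeros((n_states, n_actions))
# One illustrative transition: assigning job 2 in state 3 shortened the makespan,
# so it receives a positive reward and the table is nudged toward that choice.
Q = q_learning_update(Q, state=3, action=2, reward=1.0, next_state=7)
print(Q[3])
```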

Keywords: job shop scheduling problem, reinforcement learning, operations sequence, layout optimization, q-learning

Procedia PDF Downloads 24
1998 Efficient Computer-Aided Design-Based Multilevel Optimization of the LS89

Authors: A. Chatel, I. S. Torreguitart, T. Verstraete

Abstract:

The paper deals with a single-point optimization of the LS89 turbine using an adjoint optimization and defining the design variables within a CAD system. The advantage of including the CAD model in the design system is that higher-level constraints can be imposed on the shape, allowing the optimized model or component to be manufactured. However, CAD-based approaches restrict the design space compared to node-based approaches where every node is free to move. In order to preserve a rich design space, we develop a methodology to refine the CAD model during the optimization and to create the best parameterization to use at each stage. This study presents a methodology to progressively refine the design space, which combines parametric effectiveness with a differential evolutionary algorithm in order to create an optimal parameterization. In this manuscript, we show that by doing the parameterization at the CAD level, we can impose higher-level constraints on the shape, such as the axial chord length, the trailing edge radius and G2 geometric continuity between the suction side and pressure side at the leading edge. Additionally, the adjoint sensitivities are filtered out and only smooth shapes are produced during the optimization process. The use of algorithmic differentiation for the CAD kernel and grid generator allows computing the grid sensitivities to machine accuracy and avoids the limited arithmetic precision and the truncation error of finite differences. Then, the parametric effectiveness is computed to rate the ability of a set of CAD design parameters to produce the design shape change dictated by the adjoint sensitivities. During the optimization process, the design space is progressively enlarged using the knot insertion algorithm, which allows introducing new control points whilst preserving the initial shape. The position of the inserted knots is generally assumed. However, this assumption can hinder the creation of better parameterizations that would allow producing more localized shape changes where the adjoint sensitivities dictate. To address this, we propose using a differential evolutionary algorithm to maximize the parametric effectiveness by optimizing the location of the inserted knots. This allows the optimizer to gradually explore larger design spaces and to use an optimal CAD-based parameterization during the course of the optimization. The method is tested on the LS89 turbine cascade, and large aerodynamic improvements in the entropy generation are achieved whilst keeping the exit flow angle fixed; the trailing edge radius and axial chord length are kept fixed as manufacturing constraints. The optimization results show that the multilevel optimizations were more efficient than the single-level optimization, even though they used the same number of design variables at the end of the multilevel optimizations. Furthermore, the multilevel optimization where the parameterization is created using the optimal knot positions results in a more efficient strategy to reach a better optimum than the multilevel optimization where the position of the knots is arbitrarily assumed.
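
The sketch below conveys the idea of optimizing the location of an inserted knot with a differential-evolution search. The true objective in the paper is parametric effectiveness computed from adjoint sensitivities; here a least-squares fitting error to an arbitrary target profile stands in as a placeholder fitness.

```python
import numpy as np
from scipy.interpolate import splrep, splev
from scipy.optimize import differential_evolution

# Pick the position of one inserted knot by differential evolution, using a
# placeholder fitness (fit error to a synthetic profile with a local feature).
x = np.linspace(0.0, 1.0, 200)
target = np.sin(2 * np.pi * x) + 0.3 * np.exp(-((x - 0.7) / 0.05) ** 2)
base_knots = [0.25, 0.5, 0.75]  # interior knots of the current parameterization

def fit_error(new_knot):
    knots = np.sort(np.append(base_knots, new_knot[0]))
    tck = splrep(x, target, t=knots, k=3)  # least-squares spline with given knots
    return float(np.sum((splev(x, tck) - target) ** 2))

result = differential_evolution(fit_error, bounds=[(0.05, 0.95)], seed=0, tol=1e-8)
print(f"best knot position: {result.x[0]:.3f}, residual: {result.fun:.2e}")
```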

Keywords: adjoint, CAD, knots, multilevel, optimization, parametric effectiveness

Procedia PDF Downloads 110
1997 Clinical Advice Services: Using Lean Chassis to Optimize Nurse-Driven Telephonic Triage of After-Hour Calls from Patients

Authors: Eric Lee G. Escobedo-Wu, Nidhi Rohatgi, Fouzel Dhebar

Abstract:

It is challenging for patients to navigate through healthcare systems after-hours. This leads to delays in care, patient/provider dissatisfaction, inappropriate resource utilization, readmissions, and higher costs. It is important to provide patients and providers with effective clinical decision-making tools to allow seamless connectivity and coordinated care. In August 2015, patient-centric Stanford Health Care established Clinical Advice Services (CAS) to provide clinical decision support after-hours. CAS is founded on key Lean principles: Value stream mapping, empathy mapping, waste walk, takt time calculations, standard work, plan-do-check-act cycles, and active daily management. At CAS, Clinical Assistants take the initial call and manage all non-clinical calls (e.g., appointments, directions, general information). If the patient has a clinical symptom, the CAS nurses take the call and utilize standardized clinical algorithms to triage the patient to home, clinic, urgent care, emergency department, or 911. Nurses may also contact the on-call physician based on the clinical algorithm for further direction and consultation. Since August 2015, CAS has managed 228,990 calls from 26 clinical specialties. Reporting is built into the electronic health record for analysis and data collection. 65.3% of the after-hours calls are clinically related. Average clinical algorithm adherence rate has been 92%. An average of 9% of calls was escalated by CAS nurses to the physician on call. An average of 5% of patients was triaged to the Emergency Department by CAS. Key learnings indicate that a seamless connectivity vision, cascading, multidisciplinary ownership of the problem, and synergistic enterprise improvements have contributed to this success while striving for continuous improvement.

Keywords: after hours phone calls, clinical advice services, nurse triage, Stanford Health Care

Procedia PDF Downloads 174
1996 Prioritizing Roads Safety Based on the Quasi-Induced Exposure Method and Utilization of the Analytical Hierarchy Process

Authors: Hamed Nafar, Sajad Rezaei, Hamid Behbahani

Abstract:

Safety analysis of roads through accident rates, which is one of the widely used tools, has resulted from the direct exposure method, which is based on the ratio of vehicle-kilometers traveled to vehicle travel time. However, due to some fundamental flaws in its theory, difficulties in gaining access to the required data such as traffic volume and the distance and duration of trips, and various problems in determining the exposure for specific time, place, and individual categories, there is a need for an algorithm for prioritizing road safety so that, with a new exposure method, the problems of the previous approaches would be resolved. In this way, an efficient application may lead to more realistic comparisons, and the new method would be applicable to a wider range of time, place, and individual categories. Therefore, an algorithm was introduced to prioritize the safety of roads using the quasi-induced exposure method and utilizing the analytical hierarchy process. For this research, 11 provinces of Iran were chosen as case study locations. A rural accidents database was created for these provinces, the validity of the quasi-induced exposure method for Iran’s accident database was explored, and the involvement ratio for different characteristics of the drivers and the vehicles was measured. Results showed that the quasi-induced exposure method was valid in determining the real exposure in the provinces under study. Results also showed a significant difference between the prioritizations based on the new and traditional approaches. This difference would mostly stem from the perspective of the quasi-induced exposure method in determining the exposure, the opinion of experts, and the quantity of accident data. Overall, the results of this research showed that prioritization based on the new approach is more comprehensive and reliable compared to prioritization with the traditional approach, which is dependent on various parameters including the driver-vehicle characteristics.
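
A short sketch of the AHP weighting step: the priority vector is obtained as the normalised principal eigenvector of a pairwise comparison matrix. The matrix below is an invented example, not data from the study.

```python
import numpy as np

# Priority vector from a pairwise comparison matrix (principal eigenvector).
def ahp_priorities(pairwise):
    values, vectors = np.linalg.eig(pairwise)
    principal = np.argmax(values.real)
    w = np.abs(vectors[:, principal].real)
    return w / w.sum()

# Example: three criteria compared pairwise on Saaty's 1-9 scale (invented).
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
print(ahp_priorities(A).round(3))  # roughly [0.65, 0.23, 0.12]
```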

Keywords: road safety, prioritizing, Quasi-induced exposure, Analytical Hierarchy Process

Procedia PDF Downloads 338
1995 A Systematic Review with Meta-Analyses Investigating the Association between Binge Eating and Poor Weight Loss Outcomes in People with Obesity

Authors: Isabella Lobo Sasaoka, Felipe Q. da Luz, Zubeyir Salis, Phillipa Hay, Tamiris Gaeta, Paula Costa Teixeira, Táki Cordás, Amanda Sainsbury

Abstract:

Background: A significant number of people with obesity who seek weight loss treatments experience binge eating episodes. Nonetheless, it is unknown whether binge eating episodes can hinder weight loss outcomes. Objective: To compare weight change in people with or without binge eating undergoing bariatric surgery, pharmacotherapy, nutritional orientation, and/or psychological therapies. Method: We conducted a systematic review with meta-analyses by searching for studies in PubMed, American Psychological Association (APA), and Embase. Results: Thirty-four studies were included in our systematic review, and 17 studies were included in the meta-analyses. Overall, we found no significant difference in weight loss between people with or without binge eating undergoing any type of weight loss treatment. Additionally, we found no statistically significant differences in body weight between people with or without binge eating at short- and long-term follow-up assessments following any type of weight loss treatment. We also examined changes in body weight in people with or without binge eating in three additional meta-analyses categorized by the type of weight loss treatment (i.e., behavioural and/or nutritional interventions; bariatric surgery; pharmacotherapy alone or combined with behavioural interventions) and found no difference in weight loss. Eleven out of the 17 studies that were assessed qualitatively (i.e., not included in the meta-analyses) did not show differences in weight loss between people with or without binge eating undergoing any type of weight loss treatment. Conclusion: This systematic review with meta-analyses showed no difference in weight loss between people with or without binge eating undergoing a variety of weight loss treatments. Nonetheless, specialized therapies may be required to address eating disorder psychopathology and recurrent binge eating in people with obesity who seek weight loss.

Keywords: obesity, binge eating, weight loss, systematic review, meta-analysis

Procedia PDF Downloads 154
1994 Ant System with Acoustic Communication

Authors: Saad Bougrine, Salma Ouchraa, Belaid Ahiod, Abdelhakim Ameur El Imrani

Abstract:

Ant colony optimization (ACO) is an ant algorithm framework that took inspiration from the foraging behaviour of ant colonies. Indeed, ACO algorithms use chemical communication, represented by pheromone trails, to build good solutions. However, ants use different communication channels to interact. Thus, this paper introduces acoustic communication between ants while they are foraging. This process allows fine, local exploration of the search space and permits the optimal solution to be improved.
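
For reference, the two core ACO ingredients mentioned above, probabilistic next-node selection driven by pheromone and heuristic visibility plus a pheromone update, are sketched for a tiny travelling salesman instance; the acoustic-communication extension itself is not modelled.

```python
import numpy as np

# Tiny TSP instance with random symmetric distances (illustrative only).
rng = np.random.default_rng(1)
n_cities = 5
dist = rng.uniform(1, 10, (n_cities, n_cities))
dist = (dist + dist.T) / 2
np.fill_diagonal(dist, np.inf)
pheromone = np.ones((n_cities, n_cities))

def choose_next(current, unvisited, alpha=1.0, beta=2.0):
    # Probability proportional to pheromone^alpha * (1/distance)^beta.
    tau = pheromone[current, unvisited] ** alpha
    eta = (1.0 / dist[current, unvisited]) ** beta
    probs = tau * eta
    probs /= probs.sum()
    return int(rng.choice(unvisited, p=probs))

def build_tour():
    tour = [0]
    unvisited = list(range(1, n_cities))
    while unvisited:
        nxt = choose_next(tour[-1], np.array(unvisited))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def deposit_pheromone(tour, evaporation=0.5):
    # Evaporate, then reinforce the edges of the tour proportionally to 1/length.
    length = sum(dist[tour[i], tour[(i + 1) % n_cities]] for i in range(n_cities))
    pheromone[:] *= (1 - evaporation)
    for i in range(n_cities):
        a, b = tour[i], tour[(i + 1) % n_cities]
        pheromone[a, b] += 1.0 / length
        pheromone[b, a] += 1.0 / length

tour = build_tour()
deposit_pheromone(tour)
print(tour)
```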

Keywords: acoustic communication, ant colony optimization, local search, traveling salesman problem

Procedia PDF Downloads 586
1993 Factors Influencing the Use of Mobile Phone by Smallholder Farmers in Vegetable Marketing in Fogera District

Authors: Molla Tadesse Lakew

Abstract:

This study was intended to identify the factors influencing the use of mobile phones in vegetable marketing in Fogera district. The use of mobile phones in vegetable marketing and the factors influencing mobile phone use were the specific objectives of the study. Three kebeles from the Fogera district were selected purposively based on their vegetable production potential. A simple random sampling technique (lottery method) was used to select 153 vegetable producer farmers. An interview schedule and key informant interviews were used to collect primary data. For analyzing the data, descriptive statistics like frequency and percentage, two-independent-sample t-tests, and chi-square tests were used. Furthermore, econometric analysis (a binary logistic model) was used to assess the factors influencing mobile phone use for vegetable market information. The contingency coefficient and variance inflation factor were used to check multicollinearity problems between the independent variables. Of 153 respondents, 82 (61.72%) were mobile phone users, while 71 (38.28%) were mobile phone nonusers. The main uses of mobile phones in vegetable marketing include communicating at a distance to save time and minimize transport costs, getting vegetable market price information, identifying markets and buyers to sell the vegetables, deciding when to sell the vegetables, negotiating with buyers for better vegetable prices, and searching for fast markets to avoid losing produce through perishing. The model results indicated that level of education, size of land, income, access to credit, and age were significant variables affecting the use of mobile phones in vegetable marketing. It could be recommended to encourage adult education or give training to farmers on how to operate mobile phones and to create awareness among elderly rural farmers so that they are able to use mobile phones for their vegetable marketing. Moreover, farmers should be aware that mobile phones are very important for those who own very small plots of land to get maximum returns from their production. Lastly, providing access to credit and improving and diversifying income sources so that farmers can afford mobile phones are recommended to improve the livelihood of farmers.

Keywords: mobile phone, farmers, vegetable marketing, Fogera District

Procedia PDF Downloads 73
1992 A Support Vector Machine Learning Prediction Model of Evapotranspiration Using Real-Time Sensor Node Data

Authors: Waqas Ahmed Khan Afridi, Subhas Chandra Mukhopadhyay, Bandita Mainali

Abstract:

This research paper presents a unique approach to evapotranspiration (ET) prediction using a Support Vector Machine (SVM) learning algorithm. The study leverages real-time sensor node data to develop an accurate and adaptable prediction model, addressing the inherent challenges of traditional ET estimation methods. The integration of the SVM algorithm with real-time sensor node data offers great potential to improve spatial and temporal resolution in ET predictions. In the model development, key input features are measured and computed using mathematical equations such as Penman-Monteith (FAO56) and the soil water balance (SWB), which include soil-environmental parameters such as solar radiation (Rs), air temperature (T), atmospheric pressure (P), relative humidity (RH), wind speed (u2), rain (R), deep percolation (DP), soil temperature (ST), and change in soil moisture (∆SM). The one-year field data are split into combinations of three proportions, i.e., train, test, and validation sets. Kernel functions with tuned hyperparameters have been used to train and improve the accuracy of the prediction model over multiple iterations. This paper also outlines the existing methods and machine learning techniques used to determine evapotranspiration, data collection and preprocessing, model construction, and evaluation metrics, highlighting the significance of SVM in advancing the field of ET prediction. The results demonstrate the robustness and high predictability of the developed model on the basis of performance evaluation metrics (R2, RMSE, MAE). The effectiveness of the proposed model in capturing complex relationships within soil and environmental parameters provides insights into its potential applications for water resource management and hydrological ecosystems.
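
A hedged sketch of the modelling step: a support vector regressor mapping sensor-derived features to an ET target, with standard train/test splitting and R2/RMSE evaluation. The synthetic data and the chosen kernel and hyperparameters are placeholders, not the paper's field data or tuned model.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

# Synthetic stand-in for one year of sensor features and an ET-like target.
rng = np.random.default_rng(42)
n = 365
X = np.column_stack([
    rng.uniform(5, 30, n),    # solar radiation Rs
    rng.uniform(5, 35, n),    # air temperature T
    rng.uniform(30, 95, n),   # relative humidity RH
    rng.uniform(0.5, 6, n),   # wind speed u2
])
# Hypothetical target loosely mimicking an ET response (not FAO56 itself).
y = 0.1 * X[:, 0] + 0.08 * X[:, 1] - 0.02 * X[:, 2] + 0.3 * X[:, 3] + rng.normal(0, 0.3, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
model.fit(X_train, y_train)
pred = model.predict(X_test)
print(f"R2 = {r2_score(y_test, pred):.3f}, "
      f"RMSE = {mean_squared_error(y_test, pred) ** 0.5:.3f}")
```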

Keywords: evapotranspiration, FAO56, KNIME, machine learning, RStudio, SVM, sensors

Procedia PDF Downloads 69
1991 Coffee Consumption and Glucose Metabolism: a Systematic Review of Clinical Trials

Authors: Caio E. G. Reis, Jose G. Dórea, Teresa H. M. da Costa

Abstract:

Objective: Epidemiological data show an inverse association of coffee consumption with the risk of type 2 diabetes mellitus. However, the clinical effects of coffee consumption on glucose metabolism biomarkers remain controversial. Thus, this paper reviews clinical trials that evaluated the effects of coffee consumption on glucose metabolism. Research Design and Methods: We identified studies published until December 2014 by searching electronic databases and reference lists. We included randomized clinical trials in which the intervention group received caffeinated and/or decaffeinated coffee and the control group received water or placebo treatments, and which measured biomarkers of glucose metabolism. The Jadad Score was applied to evaluate the quality of the studies, whereby studies that scored ≥ 3 points were considered for the analyses. Results: Seven clinical trials (a total of 237 subjects) were analyzed, involving healthy, overweight and diabetic adult subjects. The studies were divided into short-term (1 to 3 h) and long-term (2 to 16 weeks) durations. The results for short-term studies showed that caffeinated coffee consumption may increase the area under the curve of the glucose response, while for long-term studies caffeinated coffee may improve glycemic metabolism by reducing the glucose curve and increasing the insulin response. These results seem to show that the benefits of coffee consumption occur in the long term, as has been shown in the reduction of type 2 diabetes mellitus risk in epidemiological studies. Nevertheless, until the relationship between long-term coffee consumption and type 2 diabetes mellitus is better understood and any mechanisms involved are identified, it is premature to make claims about coffee preventing type 2 diabetes mellitus. Conclusion: The findings suggest that caffeinated coffee may impair glucose metabolism in the short term, but in the long term the studies indicate a reduction of type 2 diabetes mellitus risk. More clinical trials with comparable methodology are needed to unravel this paradox.

Keywords: coffee, diabetes mellitus type 2, glucose, insulin

Procedia PDF Downloads 466
1990 Defence Industry in the Political Economy of State and Business Relations

Authors: Hatice Idil Gorgen

Abstract:

Turkey has been investing in its national defence industrial base since the 1980s. The state’s role in the defence industry has varied in Turkey. In parallel, the ruling group’s attitude toward companies in the defence sector has also varied. These changes in the policies and behaviours of the state have occurred around such milestones as political and economic turmoil at the domestic and international levels. Hence, it is argued that the state’s role, its relations with private companies in the defence sector and its policies towards the defence industry have shown differences due to the international system, political institutions, ideas and political coalitions in Turkey since the 1980s. Therefore, in order to see changes in the role of the state in the defence sector, this paper aims to indicate, first, the history of the state’s role in production and the defence industry in the post-1980s era. Secondly, to comprehend the changes in the state’s role in the defence industry, Stephan Haggard’s sources of policy change will be provided as the theoretical ground. Thirdly, state-cooperated and joint-venture defence firms and the state’s actions toward them will be observed. The remaining part will explore the underlying reasons for the changes in the role of the state in the defence industry and their implicit or explicit impacts on state-business relations. Major findings illustrate that the targeted idea of a self-sufficient or autarkic Turkey serves to attract the domestic audience and to raise prestige through the defence system; ruling elites can regard the defence industry and the involved business groups as a means to their ends. The state-dominant value and sensitive perception, which have persisted ever since the Ottoman Empire, prioritize business groups in the defence industry over others and push the ruling elites to pursue hard power in the defence sector. Through the global structural transformation of the defence industry, the integration of Turkey into the liberal bloc deepened and widened interdependence among states. Although this is a qualitative study, it involves enumerated data and descriptive statistics. Data will be collected by searching secondary sources from the literature and examining official documents of the Ministry of Defence and other appropriate ministries.

Keywords: defense industry, state and business relations, public private relations, arm industry

Procedia PDF Downloads 315
1989 A CORDIC Based Design Technique for Efficient Computation of DCT

Authors: Deboraj Muchahary, Amlan Deep Borah, Abir J. Mondal, Alak Majumder

Abstract:

A discrete cosine transform (DCT) is described, and a technique to compute it using the fast Fourier transform (FFT) is developed. In this work, the DCT of a finite-length sequence is obtained by incorporating the CORDIC methodology into the radix-2 FFT algorithm. The proposed methodology is simple to comprehend and maintains a regular structure, thereby reducing computational complexity. DCTs are used extensively in the area of digital processing for the purpose of pattern recognition. So the efficient computation of the DCT while maintaining a transparent design flow is highly desirable.
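
A minimal sketch of the CORDIC rotation primitive that such DCT/FFT structures build on: a vector is rotated using only shift-and-add style updates plus a constant gain correction. The iteration count is arbitrary, and the DCT wiring itself is not reproduced.

```python
import math

# CORDIC rotation mode: rotate (x, y) by 'angle' using arctangent micro-rotations.
def cordic_rotate(x, y, angle, iterations=16):
    # Precomputed arctangents of 2^-i and the CORDIC gain for this many steps.
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    gain = 1.0
    for i in range(iterations):
        gain *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    z = angle
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x * gain, y * gain

# Rotating (1, 0) by 30 degrees should give approximately (cos 30, sin 30).
print(tuple(round(v, 4) for v in cordic_rotate(1.0, 0.0, math.radians(30))))
```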

Keywords: DCT, DFT, CORDIC, FFT

Procedia PDF Downloads 478
1988 Psychosocial Consequences of Discovering Misattributed Paternity in Adulthood: Insider Action Research

Authors: Alyona Cerfontyne, Levita D'Souza, Lefteris Patlamazoglou

Abstract:

Unlike adoption and donor-assisted reproduction, misattributed paternity occurring within the context of spontaneous conception and outside of formally recognised practices of having a child remains largely an understudied phenomenon. In adulthood, to discover misattributed paternity, i.e., that the man you call your father is not related to you genetically, can have profound implications for everyone affected. Until the advent of direct-to-consumer DNA testing 20 years ago, such discoveries were relatively rare. Despite the growing number of individuals uncovering their biogenetic paternity through genetic testing, there is very limited research on misattributed paternity from the perspective of adult children affected by it. No research exists on how to support these individuals through counselling post-discovery. Framed as insider action research, this study aimed to explore the perceived psychosocial consequences of misattributed paternity discoveries and coping strategies used by individuals who discover their misattributed paternity status in adulthood. In total, 12 individuals with misattributed paternity participated in semi-structured interviews in July-August 2022. The collected data was analysed using reflexive thematic analysis. The study’s results indicate that discovering misattributed paternity in adulthood can be likened to a watershed moment forever changing the trajectory of one’s life. Psychological experiences consistent with trauma, as well as grief and loss, re-evaluation of close family relationships, reestablishment of one’s identity, as well as experiencing a profound need to belong are the key themes emerging from the analysis of psychosocial experiences. Post-discovery, individuals with misattributed paternity employ a wide range of emotional and problem-focused coping strategies, amongst which seeking connection with those who understand, searching for information on the new biogenetic family and finding new meanings to life are most prominent. The study contributes both to the academic and practical knowledge of experiences of misattributed paternity and highlights the importance of further research on the topic.

Keywords: discovery of misattributed paternity, misattributed paternity, paternal discrepancy, psychosocial consequences, coping

Procedia PDF Downloads 89
1987 Changing Colours and Odours: Exploring Cues Used by Insect Pollinators in Two Brassicaceous Plants

Authors: Katherine Y. Barragan-Fonseca, Joop J. A. Van Loon, Marcel Dicke, Dani Lucas-Barbosa

Abstract:

Flowering plants use different traits to attract pollinators, which indicate flower location and reward quality. Visual and olfactory cues are among the most important floral traits exploited by pollinating insects. Pollination can alter physical and chemical cues of flowers, which can subsequently influence the behaviour of flower visitors. We investigated the main cues exploited by the syrphid fly Episyrphus balteatus and the butterfly Pieris brassicae when visiting flowers of Brassica nigra and Raphanus sativus plants. We studied post-pollination changes and their effects on the behaviour of flower visitors and flower volatile emission. Preference of pollinators was investigated by offering visual and olfactory cues simultaneously as well as separately in two-choice bioassays. We also assessed whether pollen is used as a cue by pollinating insects. In addition, we studied whether behavioural responses could be correlated with changes in plant volatile emission, by collecting volatiles from flower headspace. P. brassicae and E. balteatus did not use pollen as a cue in either of the two plant species studied. Interestingly, pollinators showed a strong bias for visual cues over olfactory cues when exposed to B. nigra plants. Flower visits by pollinators were influenced by post-pollination changes in B. nigra. In contrast, plant responses to pollination did not influence pollinator preference for R. sativus flowers. These results correlate well with floral volatile emission of B. nigra and R. sativus; pollination influenced the volatile profile of B. nigra flowers but not that of R. sativus. Collectively, our data show that different pollinators exploit different visual and olfactory traits when searching for nectar or pollen of flowers of two closely related plant species. Although the syrphid fly consumes mostly pollen from brassicaceous flowers, it cannot detect pollen from a distance and likely associates other flower traits with quantity and quality of pollen.

Keywords: plant volatiles, pollinators, post-pollination changes, visual and odour cues

Procedia PDF Downloads 162
1986 Rapid Algorithm for GPS Signal Acquisition

Authors: Fabricio Costa Silva, Samuel Xavier de Souza

Abstract:

A Global Positioning System (GPS) receiver is responsible for determining position, velocity and timing information by using satellite signals. To get this information, it is necessary to combine an incoming signal with a locally generated one. The procedure, called acquisition, needs to find two pieces of information: the frequency and the phase of the incoming signal. This is very time consuming, so there are several techniques to reduce the computational complexity, but each of them puts project requirements in conflict. In this paper, we present a method that can reduce the computational complexity by reducing the search space and parallelizing the search.
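
As background for the parallel-search idea, the snippet below shows the classic FFT-based parallel code-phase search, in which one circular correlation evaluates every code phase at once for a given frequency bin; the synthetic signal and code are placeholders, and the paper's specific search-space reduction is not reproduced.

```python
import numpy as np

# Synthetic stand-in for a received signal: a delayed, noisy PRN-like code.
rng = np.random.default_rng(7)
n = 1023
code = rng.choice([-1.0, 1.0], n)            # stand-in for a C/A PRN code
true_phase = 351
incoming = np.roll(code, true_phase) + rng.normal(0, 0.5, n)

def parallel_code_phase_search(signal, replica):
    # Circular correlation over all code phases at once via the frequency domain.
    corr = np.fft.ifft(np.fft.fft(signal) * np.conj(np.fft.fft(replica)))
    power = np.abs(corr) ** 2
    return int(np.argmax(power)), power

phase, power = parallel_code_phase_search(incoming, code)
print(phase)  # should recover the true code phase (351)
```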

Keywords: GPS, acquisition, complexity, parallelism

Procedia PDF Downloads 538
1985 Elderly Health Care Process by Community Participation: A Sub-District in the Lower Northern Region of Thailand

Authors: Amaraporn Puraya, Roongtiva Boonpracom, Somsak Thojampa, Sirikanok Klankhajhon, Kittisak Kumpeera

Abstract:

The objective of this qualitative research was to study the elderly health care process by community participation. Data were collected by qualitative research methods, including secondary data study, observation, in-depth interviews, and focus group discussions, and analyzed by content analysis, reflection and review of information. The research results pointed out that the elderly health care process by community participation consisted of two parts, namely the community participation development process in elderly health care and the outcomes from the participation development process. The community participation development process consisted of four steps as follows: 1) building the leadership team, an important social capital of the community, which started from searching for both formal and informal leaders by giving the opportunity for public participation and creating clear agreements defining roles, duties and responsibilities; 2) investigating the problems and the needs of the community; 3) designing the elderly health care activities under the concept of self-care potential development of the elderly through participation in community forums and meetings to exchange knowledge with common goals, plans and operation; and 4) the development process of a sustainable health care agreement at the local level, starting from opening communication channels to create awareness and participation in various activities at both individual and group levels as well as pushing activities/projects into the community development plan consistent with the local administration policy. The outcomes from the participation development process were as follows. 1) There was the integration of the elderly for doing the elderly health care activities/projects in the community managed by the elderly themselves. 2) The service system was changed from a passive to a proactive one, focusing on health promotion rather than treating diseases or illnesses. 3) The registered nurses / the public health officers can provide care for the elderly with chronic illnesses through the implementation of elderly health care activities/projects so that the elderly can access the services more easily. 4) The local government organization became the main mechanism in driving the elderly health care process by community participation.

Keywords: elderly health care process, community participation, elderly, Thailand

Procedia PDF Downloads 213
1984 Motion Planning and Simulation Design of a Redundant Robot for Sheet Metal Bending Processes

Authors: Chih-Jer Lin, Jian-Hong Hou

Abstract:

Industry 4.0 is a vision of integrated industry implemented by artificially intelligent computing, software, and Internet technologies. The main goal of Industry 4.0 is to deal with the difficulties arising from competitive pressures in the marketplace. For today’s manufacturing factories, the type of production has changed from mass production (high-quantity production with low product variety) to medium-quantity, high-variety production. To offer flexibility, better quality control, and improved productivity, robot manipulators are used to combine material processing, material handling, and part positioning systems into an integrated manufacturing system. To implement an automated system for sheet metal bending operations, the motion planning of a 7-degrees-of-freedom (DOF) robot is studied in this paper. A virtual reality (VR) environment of a bending cell, which consists of the robot and a bending machine, is established using the virtual robot experimentation platform (V-REP) simulator. For sheet metal bending operations, the robot only needs six DOFs for the pick-and-place or tracking tasks. Therefore, this 7-DOF robot has more DOFs than required to execute a specified task; it can be called a redundant robot, and it has kinematic redundancies to deal with task-priority problems. For redundant robots, the pseudo-inverse of the Jacobian is the most popular motion planning method, but pseudo-inverse methods usually lead to a kind of chaotic motion with unpredictable arm configurations as the Jacobian matrix loses rank. To overcome the above problem, we propose a method that formulates the motion planning problem as an optimization problem. Moreover, a genetic algorithm (GA) based method is proposed to deal with the motion planning of the redundant robot. Simulation results validate that the proposed method is feasible for motion planning of the redundant robot in automated sheet-metal bending operations.

Keywords: redundant robot, motion planning, genetic algorithm, obstacle avoidance

Procedia PDF Downloads 146
1983 Multi-Objectives Genetic Algorithm for Optimizing Machining Process Parameters

Authors: Dylan Santos De Pinho, Nabil Ouerhani

Abstract:

Energy consumption of machine-tools is becoming critical for machine-tool builders and end-users because of economic, ecological and legislation-related reasons. Many machine-tool builders are seeking solutions that allow the reduction of energy consumption of machine-tools while preserving the same productivity rate and the same quality of machined parts. In this paper, we present the first results of a project conducted jointly by academic and industrial partners to reduce the energy consumption of a Swiss-Type lathe. We employ genetic algorithms to find optimal machining parameters – the set of parameters that lead to the best trade-off between energy consumption, part quality and tool lifetime. Three main machining process parameters are considered in our optimization technique, namely depth of cut, spindle rotation speed and material feed rate. These machining process parameters have been identified as the most influential ones in the configuration of the Swiss-type machining process. A state-of-the-art multi-objective genetic algorithm has been used. The algorithm combines three fitness functions, which are objective functions that permit to evaluate a set of parameters against the three objectives: energy consumption, quality of the machined parts, and tool lifetime. In this paper, we focus on the investigation of the fitness function related to energy consumption. Four different energy consumption related fitness functions have been investigated and compared. The first fitness function refers to the Kienzle cutting force model. The second fitness function uses the Material Removal Rate (MRR) as an indicator of energy consumption. The two other fitness functions are non-deterministic, learning-based functions. One fitness function uses a simple Neural Network to learn the relation between the process parameters and the energy consumption from experimental data. Another fitness function uses Lasso regression to determine the same relation. The goal is, then, to find out which fitness function best predicts the energy consumption of a Swiss-Type machining process for the given set of machining process parameters. Once determined, these functions may be used for optimization purposes – determine the optimal machining process parameters leading to minimum energy consumption. The performance of the four fitness functions has been evaluated. The Tornos DT13 Swiss-Type Lathe has been used to carry out the experiments. A mechanical part including various Swiss-Type machining operations has been selected for the experiments. The evaluation process starts with generating a set of CNC (Computer Numerical Control) programs for machining the part at hand. Each CNC program considers a different set of machining process parameters. During the machining process, the power consumption of the spindle is measured. All collected data are assigned to the appropriate CNC program and thus to the set of machining process parameters. The evaluation approach consists in calculating the correlation between the normalized measured power consumption and the normalized power consumption prediction for each of the four fitness functions. The evaluation shows that the Lasso and Neural Network fitness functions have the highest correlation coefficient with 97%. The fitness function “Material Removal Rate” (MRR) has a correlation coefficient of 90%, whereas the Kienzle-based fitness function has a correlation coefficient of 80%.
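
A small sketch of the evaluation step described above: normalise the measured spindle power and each fitness function's prediction, then compare them with a correlation coefficient. The MRR and Kienzle-style formulas below are simplified placeholders, and the data are synthetic.

```python
import numpy as np

# Synthetic stand-in for the per-program parameters and measured spindle power.
rng = np.random.default_rng(3)
n_programs = 30
depth_of_cut = rng.uniform(0.2, 2.0, n_programs)      # mm
feed_rate = rng.uniform(0.02, 0.3, n_programs)        # mm/rev
spindle_speed = rng.uniform(2000, 8000, n_programs)   # rpm
measured_power = depth_of_cut * feed_rate * spindle_speed * rng.normal(1.0, 0.05, n_programs)

def normalise(v):
    return (v - v.min()) / (v.max() - v.min())

# Placeholder predictors: a material-removal-rate proxy and a simplified
# Kienzle-style cutting-force proxy (not the project's calibrated models).
mrr_prediction = depth_of_cut * feed_rate * spindle_speed
kienzle_prediction = depth_of_cut * feed_rate ** 0.75 * spindle_speed

for name, pred in [("MRR", mrr_prediction), ("Kienzle", kienzle_prediction)]:
    r = np.corrcoef(normalise(measured_power), normalise(pred))[0, 1]
    print(f"{name}: correlation = {r:.2f}")
```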

Keywords: adaptive machining, genetic algorithms, smart manufacturing, parameters optimization

Procedia PDF Downloads 147