Search results for: network optimization methods
20202 Multi-Objective Genetic Algorithm for Optimizing Machining Process Parameters
Authors: Dylan Santos De Pinho, Nabil Ouerhani
Abstract:
Energy consumption of machine-tools is becoming critical for machine-tool builders and end-users because of economic, ecological and legislation-related reasons. Many machine-tool builders are seeking solutions that allow the reduction of energy consumption of machine-tools while preserving the same productivity rate and the same quality of machined parts. In this paper, we present the first results of a project conducted jointly by academic and industrial partners to reduce the energy consumption of a Swiss-type lathe. We employ genetic algorithms to find optimal machining parameters – the set of parameters that leads to the best trade-off between energy consumption, part quality and tool lifetime. Three main machining process parameters are considered in our optimization technique, namely depth of cut, spindle rotation speed and material feed rate. These machining process parameters have been identified as the most influential ones in the configuration of the Swiss-type machining process. A state-of-the-art multi-objective genetic algorithm has been used. The algorithm combines three fitness functions, which are objective functions that evaluate a set of parameters against the three objectives: energy consumption, quality of the machined parts, and tool lifetime. In this paper, we focus on the investigation of the fitness function related to energy consumption. Four different energy-consumption-related fitness functions have been investigated and compared. The first fitness function refers to the Kienzle cutting force model. The second fitness function uses the Material Removal Rate (MRR) as an indicator of energy consumption. The two other fitness functions are non-deterministic, learning-based functions. One fitness function uses a simple neural network to learn the relation between the process parameters and the energy consumption from experimental data. Another fitness function uses Lasso regression to determine the same relation. The goal is, then, to find out which fitness functions best predict the energy consumption of a Swiss-type machining process for a given set of machining process parameters. Once determined, these functions may be used for optimization purposes – determining the optimal machining process parameters leading to minimum energy consumption. The performance of the four fitness functions has been evaluated. The Tornos DT13 Swiss-type lathe has been used to carry out the experiments. A mechanical part including various Swiss-type machining operations has been selected for the experiments. The evaluation process starts with generating a set of CNC (Computer Numerical Control) programs for machining the part at hand. Each CNC program considers a different set of machining process parameters. During the machining process, the power consumption of the spindle is measured. All collected data are assigned to the appropriate CNC program and thus to the set of machining process parameters. The evaluation approach consists of calculating the correlation between the normalized measured power consumption and the normalized power consumption prediction for each of the four fitness functions. The evaluation shows that the Lasso and neural network fitness functions have the highest correlation coefficient, at 97%. The fitness function “Material Removal Rate” (MRR) has a correlation coefficient of 90%, whereas the Kienzle-based fitness function has a correlation coefficient of 80%.
Keywords: adaptive machining, genetic algorithms, smart manufacturing, parameters optimization
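As an illustration of how an MRR-based fitness function can be screened against measured spindle power, the following sketch computes the material removal rate for each candidate parameter set and its correlation with normalized measured power. The parameter sets, power readings, and workpiece diameter are hypothetical, not the paper's data.

```python
import numpy as np

def material_removal_rate(depth_of_cut_mm, feed_mm_per_rev, spindle_rpm, diameter_mm=10.0):
    """Turning MRR (mm^3/min) = depth of cut x feed x cutting speed (at an assumed diameter)."""
    cutting_speed_mm_per_min = np.pi * diameter_mm * spindle_rpm
    return depth_of_cut_mm * feed_mm_per_rev * cutting_speed_mm_per_min

def normalize(x):
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min())

# Hypothetical candidates: (depth of cut [mm], feed [mm/rev], spindle speed [rpm])
candidates = np.array([[0.5, 0.05, 4000],
                       [1.0, 0.08, 5000],
                       [1.5, 0.10, 6000],
                       [2.0, 0.12, 7000]])
measured_power_w = np.array([310.0, 420.0, 560.0, 730.0])  # hypothetical spindle power readings

mrr = np.array([material_removal_rate(*c) for c in candidates])
corr = np.corrcoef(normalize(mrr), normalize(measured_power_w))[0, 1]
print(f"correlation between MRR fitness and measured power: {corr:.2f}")
```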
Procedia PDF Downloads 145
20201 A Neural Network Approach to Evaluate Supplier Efficiency in a Supply Chain
Authors: Kishore K. Pochampally
Abstract:
The success of a supply chain heavily relies on the efficiency of the suppliers involved. In this paper, we propose a neural network approach to evaluate the efficiency of a supplier, which is being considered for inclusion in a supply chain, using the available linguistic (fuzzy) data of suppliers that already exist in the supply chain. The approach is carried out in three phases, as follows: In phase one, we identify criteria for evaluation of the supplier of interest. Then, in phase two, we use performance measures of already existing suppliers to construct a neural network that gives weights (importance values) of criteria identified in phase one. Finally, in phase three, we calculate the overall rating of the supplier of interest. The following are the major findings of the research conducted for this paper: (i) linguistic (fuzzy) ratings of suppliers such as 'good', 'bad', etc., can be converted (defuzzified) to numerical ratings (1 – 10 scale) using fuzzy logic so that those ratings can be used for further quantitative analysis; (ii) it is possible to construct and train a multi-level neural network in order to determine the weights of the criteria that are used to evaluate a supplier; and (iii) Borda’s rule can be used to group the weighted ratings and calculate the overall efficiency of the supplier.
Keywords: fuzzy data, neural network, supplier, supply chain
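A minimal sketch of the rating pipeline described above, with hypothetical linguistic-to-numeric mappings, criteria weights, and supplier ratings; in the paper the weights come from the trained neural network and the final aggregation uses Borda's rule, whereas here a simple weighted sum stands in for that step.

```python
# Hypothetical defuzzification of linguistic ratings onto a 1-10 scale
linguistic_to_score = {"very bad": 1, "bad": 3, "fair": 5, "good": 7, "very good": 9}

# Hypothetical criteria weights (in the paper these come from the trained neural network)
criteria_weights = {"quality": 0.4, "delivery": 0.35, "cost": 0.25}

# Linguistic ratings of the supplier of interest on each criterion
supplier_ratings = {"quality": "good", "delivery": "very good", "cost": "fair"}

# Overall rating: weighted aggregation of the defuzzified scores
overall = sum(criteria_weights[c] * linguistic_to_score[supplier_ratings[c]]
              for c in criteria_weights)
print(f"overall supplier rating (1-10): {overall:.2f}")
```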
Procedia PDF Downloads 112
20200 Ant System with Acoustic Communication
Authors: Saad Bougrine, Salma Ouchraa, Belaid Ahiod, Abdelhakim Ameur El Imrani
Abstract:
Ant colony optimization is an ant algorithm framework that took inspiration from the foraging behaviour of ant colonies. Indeed, ACO algorithms use chemical communication, represented by pheromone trails, to build good solutions. However, ants involve different communication channels to interact. Thus, this paper introduces acoustic communication between ants while they are foraging. This process allows fine and local exploration of the search space and permits the optimal solution to be improved.
Keywords: acoustic communication, ant colony optimization, local search, traveling salesman problem
Procedia PDF Downloads 584
20199 Mathematical Model of Corporate Bond Portfolio and Effective Border Preview
Authors: Sergey Podluzhnyy
Abstract:
One of the most important tasks of investment and pension fund management is building a decision support system that helps to make the right decision on corporate bond portfolio formation. Today there are several basic methods of bond portfolio management: duration management, immunization and convexity management. These methods have a serious disadvantage: they do not take into account the credit risk or insolvency risk of the issuer, so they can be applied only to the management and evaluation of high-quality sovereign bonds. This article proposes a mathematical model for building a corporate bond portfolio that is optimal in terms of risk and yield. The proposed model takes the default probability into account in the bond valuation formula, which results in a more accurate evaluation of bond prices. Moreover, the model provides tools for visualization of the efficient frontier of a corporate bond portfolio taking into account the exposure to credit risk, which will increase the quality of the investment decisions of portfolio managers.
Keywords: corporate bond portfolio, default probability, effective boundary, portfolio optimization task
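A minimal sketch of a default-adjusted bond valuation of the kind described above, pricing the bond as the sum of survival-weighted cash flows plus a recovery payment in case of default. The cash flows, default probability, and recovery rate are illustrative assumptions; the paper's exact formula is not reproduced.

```python
def default_adjusted_price(face, coupon_rate, years, yield_rate,
                           annual_default_prob, recovery_rate=0.4):
    """Price a bond as survival-weighted cash flows plus recovery if default occurs."""
    price, survival = 0.0, 1.0
    for t in range(1, years + 1):
        default_this_year = survival * annual_default_prob
        survival *= (1.0 - annual_default_prob)
        cash_flow = face * coupon_rate + (face if t == years else 0.0)
        price += (survival * cash_flow + default_this_year * recovery_rate * face) \
                 / (1.0 + yield_rate) ** t
    return price

# Hypothetical corporate bond: 5-year, 7% coupon, 6% required yield, 2% annual default probability
print(f"default-adjusted price: {default_adjusted_price(1000, 0.07, 5, 0.06, 0.02):.2f}")
print(f"risk-free counterpart:  {default_adjusted_price(1000, 0.07, 5, 0.06, 0.00):.2f}")
```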
Procedia PDF Downloads 317
20198 Structural Damage Detection Using Modal Data Employing Teaching Learning Based Optimization
Authors: Subhajit Das, Nirjhar Dhang
Abstract:
Structural damage detection is a challenging task in the field of structural health monitoring (SHM). Damage detection methods mainly focus on determining the location and severity of the damage. Model updating is a well-known method to locate and quantify damage. In this method, an error function is defined in terms of the difference between the signal measured in the ‘experiment’ and the signal obtained from the undamaged finite element model. This error function is minimized with a proper algorithm, and the finite element model is updated accordingly to match the measured response. Thus, the damage location and severity can be identified from the updated model. In this paper, an error function is defined in terms of modal data, viz. frequencies and the modal assurance criterion (MAC). The MAC is derived from the eigenvectors. This error function is minimized by the teaching-learning-based optimization (TLBO) algorithm, and the finite element model is updated accordingly to locate and quantify the damage. Damage is introduced in the model by reducing the stiffness of a structural member. The ‘experimental’ data are simulated by finite element modelling. The error due to experimental measurement is introduced in the synthetic ‘experimental’ data by adding random noise, which follows a Gaussian distribution. The efficiency and robustness of this method are demonstrated through three examples, e.g., one truss, one beam and one frame problem. The results show that the TLBO algorithm is efficient at detecting the damage location as well as the severity of damage using modal data.
Keywords: damage detection, finite element model updating, modal assurance criteria, structural health monitoring, teaching learning based optimization
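A minimal sketch of the modal error function that such a model-updating scheme minimizes, assuming measured and model frequencies and mode shapes are available as arrays; the TLBO search itself is omitted and the data below are synthetic.

```python
import numpy as np

def mac(phi_a, phi_e):
    """Modal Assurance Criterion between an analytical and an experimental mode shape."""
    num = np.dot(phi_a, phi_e) ** 2
    den = np.dot(phi_a, phi_a) * np.dot(phi_e, phi_e)
    return num / den

def modal_error(freqs_model, freqs_meas, modes_model, modes_meas):
    """Combined error in natural frequencies and (1 - MAC) of paired mode shapes."""
    freq_err = np.sum(((freqs_model - freqs_meas) / freqs_meas) ** 2)
    mac_err = np.sum([1.0 - mac(pm, pe) for pm, pe in zip(modes_model.T, modes_meas.T)])
    return freq_err + mac_err

# Hypothetical data: 3 modes of a 4-DOF model vs. noisy "experimental" values
rng = np.random.default_rng(0)
f_model = np.array([12.1, 33.4, 61.0])
f_meas = f_model * (1 + 0.01 * rng.standard_normal(3))   # simulated measurement noise
phi_model = rng.standard_normal((4, 3))
phi_meas = phi_model + 0.05 * rng.standard_normal((4, 3))
print(f"modal error to be minimized: {modal_error(f_model, f_meas, phi_model, phi_meas):.4f}")
```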
Procedia PDF Downloads 214
20197 Developing an ANN Model to Predict Anthropometric Dimensions Based on Real Anthropometric Database
Authors: Waleed A. Basuliman, Khalid S. AlSaleh, Mohamed Z. Ramadan
Abstract:
Applying the anthropometric dimensions is considered one of the important factors when designing any human-machine system. In this study, the estimation of anthropometric dimensions has been improved by developing an artificial neural network that aims to predict the anthropometric measurements of males in Saudi Arabia. A total of 1427 Saudi males from age 6 to 60 participated in measuring twenty anthropometric dimensions. These anthropometric measurements are important for designing the majority of work and life applications in Saudi Arabia. The data were collected during 8 months from different locations in Riyadh City. Five of these dimensions were used as predictor variables (inputs) of the model, and the remaining fifteen dimensions were set to be the measured variables (outcomes). The hidden layers were varied during the structuring stage, and the best performance was achieved with the network structure 6-25-15. The results showed that the developed neural network model was able to significantly predict the body dimensions of the population of Saudi Arabia. The network mean absolute percentage error (MAPE) and the root mean squared error (RMSE) were found to be 0.0348 and 3.225, respectively. The accuracy of the developed neural network was evaluated by comparing its predictions with a multiple regression model. The ANN model performed better and resulted in excellent correlation coefficients between the predicted and actual dimensions.
Keywords: artificial neural network, anthropometric measurements, backpropagation, real anthropometric database
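For reference, the two error measures quoted above can be computed as follows; the arrays are illustrative, not the study's data.

```python
import numpy as np

def mape(actual, predicted):
    """Mean absolute percentage error (reported as a fraction here)."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return np.mean(np.abs((actual - predicted) / actual))

def rmse(actual, predicted):
    """Root mean squared error."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return np.sqrt(np.mean((actual - predicted) ** 2))

# Hypothetical anthropometric dimension (mm): actual vs. network prediction
actual = [1723.0, 1680.0, 1755.0, 1702.0]
predicted = [1719.5, 1684.2, 1749.8, 1706.1]
print(f"MAPE = {mape(actual, predicted):.4f}, RMSE = {rmse(actual, predicted):.3f} mm")
```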
Procedia PDF Downloads 574
20196 Response Surface Methodology for the Optimization of Paddy Husker by Medium Brown Rice Peeling Machine 6 Rubber Type
Authors: S. Bangphan, P. Bangphan, C. Ketsombun, T. Sammana
Abstract:
Response surface methodology (RSM) was employed to study the effects of three factors (rubber clearance, spindle speed, and rice moisture) on the good-rice yield of a brown rice peeling machine, with an optimum yield of 99.67 (average of three repeats). The optimized composition derived from the RSM regression was analyzed using regression analysis and analysis of variance (ANOVA). At a significance level of α=0.05, the adjusted R² was 96.55% and the standard deviation was 1.05056. The independent variables are the initial rubber clearance, spindle speed and rice moisture; the investigated responses are the final rubber clearance, spindle speed and rice moisture.
Keywords: brown rice, response surface methodology (RSM), peeling machine, optimization, paddy husker
Procedia PDF Downloads 571
20195 Coefficient of Performance (COP) Optimization of an R134a Cross Vane Expander Compressor Refrigeration System
Authors: Y. D. Lim, K. S. Yap, K. T. Ooi
Abstract:
The Cross Vane Expander Compressor (CVEC) is a newly invented expander-compressor combined unit, introduced to replace the compressor and the expansion valve in a traditional refrigeration system. A mathematical model of the CVEC has been developed to examine its performance, and it was found that the energy consumption of a conventional refrigeration system was reduced by as much as 18%. It is believed that energy consumption can be further reduced by optimizing the device. In this study, the coefficient of performance (COP) of the CVEC has been optimized under predetermined operational parameters and constrained main design parameters. Several main design parameters of the CVEC were selected to be the variables, and the optimization was done with a theoretical model in a simulation program. The theoretical model consists of a geometrical model, a dynamic model, a heat transfer model and a valve dynamics model. The complex optimization method, which is a constrained, direct-search, multi-variable method, was used in the study. As a result, the optimization study suggested that with an appropriate combination of design parameters, a 58% COP improvement in the CVEC R134a refrigeration system is possible.
Keywords: COP, cross vane expander-compressor, CVEC, design, simulation, refrigeration system, air-conditioning, R134a, multi variables
Procedia PDF Downloads 332
20194 Evaluating the Perception of Roma in Europe through Social Network Analysis
Authors: Giulia I. Pintea
Abstract:
The Roma people are a nomadic ethnic group native to India, and they are one of the most prevalent minorities in Europe. In the past, Roma were enslaved and they were imprisoned in concentration camps during the Holocaust; today, Roma are subject to hate crimes and are denied access to healthcare, education, and proper housing. The aim of this project is to analyze how the public perception of the Roma people may be influenced by antiziganist and pro-Roma institutions in Europe. In order to carry out this project, we used social network analysis to build two large social networks: the antiziganist network, which is composed of institutions that oppress and racialize Roma, and the pro-Roma network, which is composed of institutions that advocate for and protect Roma rights. Measures of centrality, density, and modularity were obtained to determine which of the two social networks is exerting the greatest influence on the public’s perception of Roma in European societies. Furthermore, data on hate crimes on Roma were gathered from the Organization for Security and Cooperation in Europe (OSCE). We analyzed the trends in hate crimes on Roma for several European countries for 2009-2015 in order to see whether or not there have been changes in the public’s perception of Roma, thus helping us evaluate which of the two social networks has been more influential. Overall, the results suggest that there is a greater and faster exchange of information in the pro-Roma network. However, when taking the hate crimes into account, the impact of the pro-Roma institutions is ambiguous, due to differing patterns among European countries, suggesting that the impact of the pro-Roma network is inconsistent. Despite antiziganist institutions having a slower flow of information, the hate crime patterns also suggest that the antiziganist network has a higher impact on certain countries, which may be due to institutions outside the political sphere boosting the spread of antiziganist ideas and information to the European public.
Keywords: applied mathematics, oppression, Roma people, social network analysis
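A minimal sketch of the three network measures used in the study, computed with NetworkX on a small hypothetical institution graph; the real networks are built from institutional ties, not this toy data.

```python
import networkx as nx
from networkx.algorithms import community

# Hypothetical toy graph of institutions; edges represent documented ties
G = nx.Graph()
G.add_edges_from([("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"), ("D", "E"), ("E", "F")])

density = nx.density(G)                            # how saturated the network is with ties
centrality = nx.degree_centrality(G)               # relative influence of each institution
communities = community.greedy_modularity_communities(G)
modularity = community.modularity(G, communities)  # strength of the community structure

print(f"density = {density:.2f}")
print(f"most central node = {max(centrality, key=centrality.get)}")
print(f"modularity = {modularity:.2f} over {len(communities)} communities")
```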
Procedia PDF Downloads 277
20193 The Nature and the Structure of Scientific and Innovative Collaboration Networks
Authors: Afshin Moazami, Andrea Schiffauerova
Abstract:
The objective of this work is to investigate the development and the role of collaboration networks in the creation of knowledge and innovations in the US and Canada, with a special focus on Quebec. In order to create scientific networks, the data on journal articles were extracted from SCOPUS, and the networks were built based on the co-authorship of the journal papers. For innovation networks, the USPTO database was used, and the networks were built on the patent co-inventorship. Various indicators characterizing the evolution of the network structure and the positions of the researchers and inventors in the networks were calculated. The comparison between the United States, Canada, and Quebec was then carried out. The preliminary results show that the nature of scientific collaboration networks differs from the one seen in innovation networks. Scientists work in bigger teams and are mostly interconnected within one giant network component, whereas the innovation network is much more clustered and fragmented, the inventors work more repetitively with the same partners, often in smaller isolated groups. In both Canada and the US, an increasing tendency towards collaboration was observed, and it was found that networks are getting bigger and more centralized with time. Moreover, a declining share of knowledge transfers per scientist was detected, suggesting an increasing specialization of science. The US collaboration networks tend to be more centralized than the Canadian ones. Quebec shares a lot of features with the Canadian network, but some differences were observed, for example, Quebec inventors rely more on the knowledge transmission through intermediaries.
Keywords: Canada, collaboration, innovation network, scientific network, Quebec, United States
Procedia PDF Downloads 199
20192 Energy Balance Routing to Enhance Network Performance in Wireless Sensor Network
Authors: G. Baraneedaran, Deepak Singh, Kollipara Tejesh
Abstract:
Wireless sensor networks have been an active research area over the past years. Due to the limited energy and communication ability of sensor nodes, it is especially important to design a routing protocol for WSNs so that sensing data can be transmitted to the receiver effectively. An energy-balanced routing method based on a forward-aware factor (FAF-EBRM) is proposed in this paper. In FAF-EBRM, the next-hop node is selected according to the awareness of link weight and forward energy density. A spontaneous reconstruction mechanism for the local topology is additionally designed. In the experiments, FAF-EBRM is compared with LEACH and EECU; the results show that FAF-EBRM outperforms LEACH and EECU, as it balances the energy consumption, prolongs the functional lifetime and guarantees high QoS of the WSN.
Keywords: energy balance, forward-aware factor (FAF), forward energy density, link weight, network performance
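A minimal sketch of the next-hop selection idea, where each candidate neighbor is scored by a forward-aware factor combining link weight and forward energy density. The convex-combination score below is an assumption for illustration, not the FAF-EBRM formula.

```python
from dataclasses import dataclass

@dataclass
class Neighbor:
    node_id: int
    link_weight: float             # e.g. derived from link quality / transmission cost
    forward_energy_density: float  # residual energy of nodes lying toward the sink

def forward_aware_factor(n: Neighbor, alpha: float = 0.5) -> float:
    """Assumed convex combination of the two awareness terms (illustrative only)."""
    return alpha * n.link_weight + (1.0 - alpha) * n.forward_energy_density

def select_next_hop(neighbors):
    """Pick the neighbor with the highest forward-aware score."""
    return max(neighbors, key=forward_aware_factor)

candidates = [Neighbor(1, 0.8, 0.3), Neighbor(2, 0.6, 0.9), Neighbor(3, 0.4, 0.5)]
print("next hop:", select_next_hop(candidates).node_id)
```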
Procedia PDF Downloads 538
20191 A Taxonomy of Routing Protocols in Wireless Sensor Networks
Authors: A. Kardi, R. Zagrouba, M. Alqahtani
Abstract:
The Internet of Everything (IoE) presents today a very attractive and motivating field of research. It is basically based on Wireless Sensor Networks (WSNs), in which the routing task is the major analysis topic. In fact, it directly affects the effectiveness and the lifetime of the network. This paper, developed from recent works and based on extensive research, proposes a taxonomy of routing protocols in WSNs. Our main contribution is that we propose a classification model based on nine classes, namely application type, delivery mode, initiator of communication, network architecture, path establishment (route discovery), network topology (structure), protocol operation, next-hop selection, and latency-aware and energy-efficient routing protocols. In order to provide a complete classification pattern to serve as a reference for network designers, each class is subdivided into possible subclasses, presented, and discussed using different parameters such as purposes and characteristics.
Keywords: routing, sensor, survey, wireless sensor networks, WSNs
Procedia PDF Downloads 181
20190 Participatory Air Quality Monitoring in African Cities: Empowering Communities, Enhancing Accountability, and Ensuring Sustainable Environments
Authors: Wabinyai Fidel Raja, Gideon Lubisa
Abstract:
Air pollution is becoming a growing concern in Africa due to rapid industrialization and urbanization, leading to implications for public health and the environment. Establishing a comprehensive air quality monitoring network is crucial to combat this issue. However, conventional methods of monitoring are insufficient in African cities due to the high cost of setup and maintenance. To address this, low-cost sensors (LCS) can be deployed in various urban areas through the use of participatory air quality network siting (PAQNS). PAQNS involves stakeholders from the community, local government, and private sector working together to determine the most appropriate locations for air quality monitoring stations. This approach improves the accuracy and representativeness of air quality monitoring data, engages and empowers community members, and reflects the actual exposure of the population. Implementing PAQNS in African cities can build trust, promote accountability, and increase transparency in the air quality management process. However, challenges to implementing this approach must be addressed. Nonetheless, improving air quality is essential for protecting public health and promoting a sustainable environment. Implementing participatory and data-informed air quality monitoring can take a significant step toward achieving these important goals in African cities and beyond.
Keywords: low-cost sensors, participatory air quality network siting, air pollution, air quality management
Procedia PDF Downloads 91
20189 Cyber Security Enhancement via Software Defined Pseudo-Random Private IP Address Hopping
Authors: Andre Slonopas, Zona Kostic, Warren Thompson
Abstract:
Obfuscation is one of the most useful tools to prevent network compromise. Previous research focused on the obfuscation of the network communications between external-facing edge devices. This work proposes the use of two edge devices, external and internal facing, which communicate via private IPv4 addresses in a software-defined pseudo-random IP hopping scheme. This methodology does not require additional IP addresses and/or resources to implement. Statistical analyses demonstrate that the hopping surface must be at least 1e3 IP addresses in size with a broad standard deviation to minimize the possibility of coincidence of monitored and communication IPs. Breaking the hopping algorithm requires a collection of at least 1e6 samples, which for large hopping surfaces will take years to collect. The probability of dropped packets is controlled via memory buffers and the frequency of hops and can be reduced to levels acceptable for video streaming. This methodology provides an impenetrable layer of security ideal for information and supervisory control and data acquisition systems.
Keywords: moving target defense, cybersecurity, network security, hopping randomization, software defined network, network security theory
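A minimal sketch of the idea, assuming both edge devices share a secret seed and hop schedule; the subnet, seed, and function names are illustrative, not the authors' implementation. Each side derives the same pseudo-random sequence of private IPv4 addresses, so the active address pair changes every hop interval without extra coordination traffic.

```python
import ipaddress
import random

def hop_sequence(shared_seed: int, subnet: str = "10.0.0.0/16", hops: int = 5):
    """Derive a deterministic pseudo-random sequence of private addresses from a shared seed."""
    hosts = list(ipaddress.ip_network(subnet).hosts())  # hopping surface (>> 1e3 addresses here)
    rng = random.Random(shared_seed)                     # both endpoints seed identically
    return [hosts[rng.randrange(len(hosts))] for _ in range(hops)]

# Both endpoints compute the same schedule independently; one address per hop interval
print(hop_sequence(shared_seed=0xC0FFEE, hops=5))
```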
Procedia PDF Downloads 185
20188 Evaluation of Security and Performance of Master Node Protocol in the Bitcoin Peer-To-Peer Network
Authors: Muntadher Sallal, Gareth Owenson, Mo Adda, Safa Shubbar
Abstract:
Bitcoin is a digital currency based on a peer-to-peer network to propagate and verify transactions. Bitcoin is gaining wider adoption than any previous crypto-currency. However, the mechanism of peers randomly choosing logical neighbors without any knowledge about the underlying physical topology can cause a delay overhead in information propagation, which makes the system vulnerable to double-spend attacks. Aiming at alleviating the propagation delay problem, this paper introduces proximity-aware extensions to the current Bitcoin protocol, named Master Node Based Clustering (MNBC). The ultimate purpose of the proposed protocol, which is based on how clusters are formulated and how nodes define their membership, is to improve the information propagation delay in the Bitcoin network. In the MNBC protocol, physical internet connectivity increases and the number of hops between nodes decreases by assigning nodes to be responsible for maintaining clusters based on physical internet proximity. We show, through simulations, that the proposed protocol defines better clustering structures that optimize the performance of transaction propagation over the Bitcoin protocol. The evaluation of partition attacks in the MNBC protocol, as well as in the Bitcoin network, was also carried out in this paper. Evaluation results show that even though the Bitcoin network is more resistant to the partitioning attack than the MNBC protocol, more resources need to be spent to split the network in the MNBC protocol, especially with a higher number of nodes.
Keywords: Bitcoin network, propagation delay, clustering, scalability
Procedia PDF Downloads 114
20187 Optimization of Double-Layered Microchannel Heat Sinks
Authors: Tu-Chieh Hung, Wei-Mon Yan, Xiao-Dong Wang, Yu-Xian Huang
Abstract:
This work employs a combined optimization procedure, including a simplified conjugate-gradient method and a three-dimensional fluid flow and heat transfer model, to study the optimal geometric parameter design of double-layered microchannel heat sinks. The overall thermal resistance RT is the objective function to be minimized, with the number of channels, N, the channel width ratio, β, the bottom channel aspect ratio, αb, and the upper channel aspect ratio, αu, as the search variables. It is shown that, for the given bottom area (10 mm×10 mm) and heat flux (100 W cm-2), the optimal (minimum) thermal resistance of double-layered microchannel heat sinks is about RT=0.12 ℃/m²W with the corresponding optimal geometric parameters N=73, β=0.50, αb=3.52, and αu=7.21 under a constant pumping power of 0.05 W. The optimization process produces a maximum reduction of 52.8% in the overall thermal resistance compared with an initial guess (N=112, β=0.37, αb=10.32 and αu=10.93). The results also show that the optimal thermal resistance decreases rapidly with the pumping power and tends toward a saturated value afterward. The corresponding optimal values of the parameters N, αb, and αu increase while that of β decreases as the pumping power increases. However, further increasing the pumping power is not always cost-effective for heat sink designs.
Keywords: optimization, double-layered microchannel heat sink, simplified conjugate-gradient method, thermal resistance
Procedia PDF Downloads 489
20186 Sleep Apnea Hypopnea Syndrome Diagnosis Using Advanced ANN Techniques
Authors: Sachin Singh, Thomas Penzel, Dinesh Nandan
Abstract:
Accurate diagnosis of Sleep Apnea Hypopnea Syndrome (SAHS) is a difficult problem for human experts because of variability among persons and unwanted noise. This paper proposes the diagnosis of SAHS using airflow, ECG, pulse, and SaO2 signals. The features of each type of these signals are extracted using statistical methods and ANN learning methods. These extracted features are used to approximate the patient's Apnea Hypopnea Index (AHI) using sample signals in the model. Advanced signal processing is also applied to the snore sound signal to locate snore events, and the SaO2 signal is used to confirm whether a detected snore event is true or noise. Finally, the AHI is calculated from the true snore events detected. Experimental results show that both sensitivity and specificity can reach up to 96% for AHI greater than or equal to 5.
Keywords: neural network, AHI, statistical methods, autoregressive models
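For reference, the apnea-hypopnea index itself is the number of detected apnea and hypopnea events per hour of sleep; a minimal sketch with hypothetical counts follows.

```python
def apnea_hypopnea_index(apnea_events: int, hypopnea_events: int, sleep_hours: float) -> float:
    """AHI = (apneas + hypopneas) per hour of sleep."""
    return (apnea_events + hypopnea_events) / sleep_hours

# Hypothetical night: 18 apneas and 24 hypopneas over 7 hours of sleep
ahi = apnea_hypopnea_index(18, 24, 7.0)
print(f"AHI = {ahi:.1f} -> {'SAHS suspected (AHI >= 5)' if ahi >= 5 else 'below threshold'}")
```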
Procedia PDF Downloads 117
20185 Latency-Based Motion Detection in Spiking Neural Networks
Authors: Mohammad Saleh Vahdatpour, Yanqing Zhang
Abstract:
Understanding the neural mechanisms underlying motion detection in the human visual system has long been a fascinating challenge in neuroscience and artificial intelligence. This paper presents a spiking neural network model inspired by the processing of motion information in the primate visual system, particularly focusing on the Middle Temporal (MT) area. In our study, we propose a multi-layer spiking neural network model to perform motion detection tasks, leveraging the idea that synaptic delays in neuronal communication are pivotal in motion perception. Synaptic delay, determined by factors like axon length and myelin insulation, affects the temporal order of input spikes, thereby encoding motion direction and speed. Overall, our spiking neural network model demonstrates the feasibility of capturing motion detection principles observed in the primate visual system. The combination of synaptic delays, learning mechanisms, and shared weights and delays in SMD provides a promising framework for motion perception in artificial systems, with potential applications in computer vision and robotics.
Keywords: neural network, motion detection, signature detection, convolutional neural network
Procedia PDF Downloads 85
20184 Identifying the Factors Affecting the Success of Energy Usage Saving in the Municipality of Tehran
Authors: Rojin Bana Derakhshan, Abbas Toloie
Abstract:
For the purpose of optimizing and developing energy efficiency in buildings, it is necessary to recognize the key elements of successful energy consumption optimization before performing any actions. Principal component analysis is one of the most valuable results of linear algebra, as it is a simple and non-parametric method. An energy management system was implemented according to the international energy management standard ISO 50001:2011, and all energy parameters in the building were measured through an energy audit. In this paper, data mining is used to determine the key influential elements of energy saving in buildings. This approach is based on data mining statistical techniques, using a feature selection method and fuzzy logic to convert massive data into a compressed form and to refine the selected features. In addition, the percentage contribution of each energy-consuming element to the overall energy dissipation is identified as a separate measure, using the results obtained from the energy audit after measuring all energy-consuming parameters and identified variables. Accordingly, energy saving solutions are divided into three categories: low-, medium-, and high-expense solutions.
Keywords: energy saving, key elements of success, optimization of energy consumption, data mining
Procedia PDF Downloads 467
20183 Steepest Descent Method with New Step Sizes
Authors: Bib Paruhum Silalahi, Djihad Wungguli, Sugi Guritman
Abstract:
The steepest descent method is a simple gradient method for optimization. This method converges slowly toward the optimal solution because of the zigzag form of its steps. Barzilai and Borwein modified this algorithm so that it performs well for problems with large dimensions. The results of the Barzilai and Borwein method have sparked a lot of research on the steepest descent method, including the alternate minimization gradient method and the Yuan method. Inspired by previous works, we modified the step size of the steepest descent method. We then compare the modification results against the Barzilai and Borwein method, the alternate minimization gradient method and the Yuan method for quadratic function cases in terms of the number of iterations and the running time. The average results indicate that the steepest descent method with the new step sizes provides good results for small dimensions and is able to compete with the results of the Barzilai and Borwein method and the alternate minimization gradient method for large dimensions. The new step sizes have faster convergence compared to the other methods, especially for cases with large dimensions.
Keywords: steepest descent, line search, iteration, running time, unconstrained optimization, convergence
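A minimal sketch of gradient descent with the classical Barzilai-Borwein step size on a quadratic test function; the modified step sizes proposed in the paper are not reproduced here, and the test matrix is illustrative.

```python
import numpy as np

def bb_gradient_descent(A, b, x0, iterations=50):
    """Minimize f(x) = 0.5 x^T A x - b^T x with Barzilai-Borwein step sizes."""
    x = x0.copy()
    g = A @ x - b                      # gradient of the quadratic
    alpha = 1e-3                       # small initial step before BB information exists
    for _ in range(iterations):
        x_new = x - alpha * g
        g_new = A @ x_new - b
        s, y = x_new - x, g_new - g
        if abs(s @ y) > 1e-12:
            alpha = (s @ s) / (s @ y)  # BB1 step size
        x, g = x_new, g_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x_star = bb_gradient_descent(A, b, x0=np.zeros(2))
print("solution:", x_star, " residual:", np.linalg.norm(A @ x_star - b))
```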
Procedia PDF Downloads 539
20182 Design and Optimization of a 6 Degrees of Freedom Co-Manipulated Parallel Robot for Prostate Brachytherapy
Authors: Aziza Ben Halima, Julien Bert, Dimitris Visvikis
Abstract:
In this paper, we propose the design and evaluation of a parallel co-manipulated robot dedicated to low-dose-rate prostate brachytherapy. We developed a compact and lightweight 6-degrees-of-freedom robot that is easy to install in the operating room thanks to its parallel design. This robotic system provides co-manipulation, allowing the surgeon to keep control of the needle's insertion and consequently improving the clinical acceptability of the plan. The best dimensional configuration was obtained by calculating the geometric model and using an optimization approach. The aim was to ensure full coverage of the prostate volume while considering the allowed free space around the patient, including the ultrasound probe. The final robot dimensions fit in a cube of 300 × 300 × 300 mm³. A prototype was 3D printed, and the robot workspace was measured experimentally. The results show that the proposed robotic system satisfies the medical application requirements and permits the needle to reach any point within the prostate.
Keywords: medical robotics, co-manipulation, prostate brachytherapy, optimization
Procedia PDF Downloads 203
20181 Proposal of Data Collection from Probes
Authors: M. Kebisek, L. Spendla, M. Kopcek, T. Skulavik
Abstract:
In our paper, we describe the security capabilities of data collection. Data are collected with probes located in the near and distant surroundings of the company. Considering the numerous obstacles, e.g., forests, hills, and urban areas, the data collection is realized in several ways. Data are collected via wireless communication, LAN, and GSM networks, and in certain areas data are collected by using vehicles. In order to ensure the connection to the server, most of the probes have the ability to communicate in several ways. Collected data are archived and subsequently used in supervisory applications. To ensure the collection of the required data, it is necessary to propose algorithms that will allow the probes to select a suitable communication channel.
Keywords: communication, computer network, data collection, probe
Procedia PDF Downloads 358
20180 Lean Comic GAN (LC-GAN): a Light-Weight GAN Architecture Leveraging Factorized Convolution and Teacher Forcing Distillation Style Loss Aimed to Capture Two Dimensional Animated Filtered Still Shots Using Mobile Phone Camera and Edge Devices
Authors: Kaustav Mukherjee
Abstract:
In this paper, we propose a neural style transfer solution in which we have created a lightweight separable-convolution-kernel-based GAN architecture (SC-GAN) that is very useful for designing filters for mobile phone cameras and edge devices, converting any image to the 2D animated comic style of movies like HEMAN, SUPERMAN, and JUNGLE-BOOK. This will help 2D animation artists create new characters from images of real-life persons without endless hours of manual labour drawing each and every pose of a cartoon. It can even be used to create scenes from real-life images. This will greatly reduce the turnaround time to make 2D animated movies and decrease cost in terms of manpower and time. In addition, being extremely lightweight, it can be used as a camera filter capable of taking comic-style shots using a mobile phone camera or edge device cameras like the Raspberry Pi 4, NVIDIA Jetson Nano, etc. Existing methods like CartoonGAN, with a model size close to 170 MB, are too heavyweight for mobile phones and edge devices due to their scarce resources. Compared to the current state of the art, our proposed method has a total model size of 31 MB, which makes it ideal and ultra-efficient for designing camera filters on low-resource devices like mobile phones, tablets, and edge devices running an OS or RTOS. Owing to the use of high-resolution input and a bigger convolution kernel size, it produces richer-resolution comic-style pictures with 6 times fewer parameters and with just 25 extra epochs trained on a dataset of fewer than 1000 images, which breaks the myth that all GANs need a mammoth amount of data. Our network reduces the density of the GAN architecture by using depthwise separable convolution, which performs the convolution operation on each of the RGB channels separately; a pointwise convolution with a 1-by-1 kernel then brings the network back to the required channel number. This reduces the number of parameters substantially and makes it extremely lightweight and suitable for mobile phones and edge devices. The architecture presented in this paper makes use of parameterised batch normalization (Goodfellow et al., Deep Learning, "Optimization for Training Deep Models", page 320), which lets the network use the advantages of batch norm for easier training while maintaining non-linear feature capture through the learnable parameters.
Keywords: comic stylisation from camera image using GAN, creating 2D animated movie style custom stickers from images, depth-wise separable convolutional neural network for light-weight GAN architecture for EDGE devices, GAN architecture for 2D animated cartoonizing neural style, neural style transfer for edge, model distilation, perceptual loss
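A minimal PyTorch sketch of the depthwise separable convolution block described above; the layer sizes are illustrative and the paper's full generator and discriminator are not reproduced.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise conv (one filter per input channel) followed by a 1x1 pointwise conv."""
    def __init__(self, in_channels: int, out_channels: int, kernel_size: int = 3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_channels, in_channels, kernel_size,
                                   padding=kernel_size // 2, groups=in_channels)
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1)
        self.norm = nn.BatchNorm2d(out_channels)   # learnable scale/shift parameters
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.norm(self.pointwise(self.depthwise(x))))

block = DepthwiseSeparableConv(3, 64)
standard = nn.Conv2d(3, 64, kernel_size=3, padding=1)
count = lambda m: sum(p.numel() for p in m.parameters())
print(f"separable block: {count(block)} params vs. standard conv: {count(standard)} params")
```

Running the comparison shows the parameter reduction that motivates the design: the separable block needs only a few hundred parameters where the equivalent standard convolution needs well over a thousand.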
Procedia PDF Downloads 130
20179 Genetic Algorithm Based Node Fault Detection and Recovery in Distributed Sensor Networks
Authors: N. Nalini, Lokesh B. Bhajantri
Abstract:
In distributed sensor networks, the sensor nodes are prone to failure due to energy depletion and other reasons. In this regard, network fault tolerance is essential in a distributed sensor environment. Energy efficiency, network or topology control, and fault tolerance are the most important issues in the development of next-generation Distributed Sensor Networks (DSNs). This paper proposes node fault detection and recovery using a Genetic Algorithm (GA) in a DSN when some of the sensor nodes are faulty. The main objective of this work is to provide a fault tolerance mechanism that is energy efficient and responsive to the network, using a GA to detect faulty nodes based on node energy depletion and link failures between nodes. The proposed model detects faults at the node level and at the network level (link failure and packet error). Finally, the performance parameters of the proposed scheme are evaluated.
Keywords: distributed sensor networks, genetic algorithm, fault detection and recovery, information technology
Procedia PDF Downloads 452
20178 Speed Control of DC Motor Using Optimization Techniques Based PID Controller
Authors: Santosh Kumar Suman, Vinod Kumar Giri
Abstract:
The goal of this paper is to design a speed controller for a DC motor by selecting PID parameters using genetic algorithms (GAs). The DC motor is used extensively in numerous applications such as steel plants, electric trains, cranes, and many more. A DC motor can be represented by a nonlinear model when nonlinearities such as magnetic saturation are considered. To provide effective control, nonlinearities and uncertainties in the model must be taken into account in the control design. The DC motor is considered a third-order system. This paper considers three types of tuning techniques for the PID parameters. In this paper, a separately excited DC motor has been modeled in MATLAB, and its speed is examined using the proportional, integral, and derivative (KP, KI, KD) gains of the PID controller. Conventional PID controllers fail to control the drive when load parameters are also changed. The principal aim of this paper is to analyze the performance of optimization techniques, viz. the genetic algorithm (GA), for tuning the PID controller parameters for speed control of a DC motor and to list their advantages over the traditional tuning strategies. The results obtained from the GA were compared with those obtained from the traditional technique. It was found that the optimization techniques outperform the customary tuning practices of ordinary PID controllers.
Keywords: DC motor, PID controller, optimization techniques, genetic algorithm (GA), objective function, IAE
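A minimal sketch of GA-style PID tuning against an integral-of-absolute-error (IAE) objective, the criterion named in the keywords. The motor model here is a generic first-order approximation, and the evolutionary loop is a bare-bones mutation/selection scheme; both are stated assumptions rather than the paper's setup.

```python
import random

RANGES = [(0.0, 10.0), (0.0, 10.0), (0.0, 0.05)]   # assumed search bounds for Kp, Ki, Kd

def iae_cost(gains, setpoint=1.0, dt=0.001, t_end=2.0, tau=0.1):
    """Simulate a first-order motor approximation under PID control and return the IAE."""
    kp, ki, kd = gains
    speed, integral, prev_err, iae = 0.0, 0.0, setpoint, 0.0
    for _ in range(int(t_end / dt)):
        err = setpoint - speed
        integral += err * dt
        u = kp * err + ki * integral + kd * (err - prev_err) / dt
        speed += dt * (u - speed) / tau            # first-order plant, time constant tau
        iae += abs(err) * dt
        prev_err = err
    return iae

def ga_tune(pop_size=20, generations=30, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in RANGES] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=iae_cost)                     # rank by fitness (lower IAE is better)
        parents = pop[: pop_size // 2]             # truncation selection
        children = [[min(hi, max(lo, g + rng.gauss(0, 0.05 * (hi - lo))))
                     for g, (lo, hi) in zip(p, RANGES)] for p in parents]  # Gaussian mutation
        pop = parents + children
    return min(pop, key=iae_cost)

kp, ki, kd = ga_tune()
print(f"best gains: Kp={kp:.2f}, Ki={ki:.2f}, Kd={kd:.4f}, IAE={iae_cost([kp, ki, kd]):.4f}")
```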
Procedia PDF Downloads 417
20177 Portfolio Optimization under a Hybrid Stochastic Volatility and Constant Elasticity of Variance Model
Authors: Jai Heui Kim, Sotheara Veng
Abstract:
This paper studies the portfolio optimization problem for a pension fund under a hybrid model of stochastic volatility and constant elasticity of variance (CEV) using an asymptotic analysis method. When the volatility component is fast mean-reverting, asymptotic approximations for the value function and the optimal strategy can be derived for general utility functions. Explicit solutions are given for the exponential and hyperbolic absolute risk aversion (HARA) utility functions. The study also shows that using the leading-order optimal strategy reproduces the value function not only up to the leading order but also up to the first-order correction term. A practical strategy that does not depend on the unobservable volatility level is suggested. The result is an extension of Merton's solution when stochastic volatility and elasticity of variance are considered simultaneously.
Keywords: asymptotic analysis, constant elasticity of variance, portfolio optimization, stochastic optimal control, stochastic volatility
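For context, the constant-volatility benchmark the authors extend is Merton's classical result: for power (CRRA) utility and a single risky asset with drift mu, volatility sigma, and risk-free rate r, the optimal fraction of wealth held in the risky asset is constant. The standard statement (not taken from the paper) is:

```latex
% Merton's constant-proportion rule for CRRA utility U(x) = x^{1-\gamma}/(1-\gamma),
% with risky-asset drift \mu, volatility \sigma, risk-free rate r, risk aversion \gamma:
\pi^{*} \;=\; \frac{\mu - r}{\gamma \sigma^{2}}
```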
Procedia PDF Downloads 298
20176 Development of a Mathematical Model to Characterize the Oil Production in the Federal Republic of Nigeria Environment
Authors: Paul C. Njoku, Archana Swati Njoku
Abstract:
The study deals with the development of a mathematical model to characterize the oil production in Nigeria. The model is built from the dynamics of oil production in millions of barrels, the planned revenue and cost of oil production in millions of naira, and the unit cost of production from 1974 to 1982 in the context of the Federal Republic of Nigeria. The country exports oil to other countries as well as importing specialized crude. The transport network between origin/destination pairs (tij), simulation runs, and optimization have been considered in this study.
Keywords: mathematical oil model development dynamics, Nigeria, characterization barrels, dynamics of oil production
Procedia PDF Downloads 385
20175 Simulation of Human Heart Activation Based on Diffusion Tensor Imaging
Authors: Ihab Elaff
Abstract:
Simulating the heart’s electrical stimulation is essential in modeling and evaluating the electrophysiological behavior of the heart. To achieve that, two structures are of concern: the ventricles’ myocardium and the ventricles’ conduction network. The ventricles’ myocardium has been modeled as an anisotropic material from a Diffusion Tensor Imaging (DTI) scan, and the conduction network has been extracted from DTI as a case-based structure based on the biological properties of the heart tissues and the working methodology of the Magnetic Resonance Imaging (MRI) scanner. Results of the produced activation were very similar to real measurements of the reference model presented in the literature.
Keywords: diffusion tensor, DTI, heart, conduction network, excitation propagation
Procedia PDF Downloads 263
20174 Bioinformatic Approaches in Population Genetics and Phylogenetic Studies
Authors: Masoud Sheidai
Abstract:
Biologists specializing in population genetics and phylogeny have various research tasks, such as studying populations’ genetic variability and divergence, species relatedness, the evolution of genetic and morphological characters, and the identification of DNA SNPs with adaptive potential. To tackle these problems and reach a concise conclusion, they must use proper and efficient statistical and bioinformatic methods as well as suitable genetic and morphological characteristics. In recent years, different bioinformatic and statistical methods, which are based on various well-documented assumptions, have become the proper analytical tools in the hands of researchers. Species delineation is usually carried out with different clustering methods, like K-means clustering, based on proper distance measures for the studied features of the organisms. A well-defined species is assumed to be separated from other taxa by molecular barcodes. Species relationships are studied by using molecular markers, which are analyzed by different analytical methods like multidimensional scaling (MDS) and principal coordinate analysis (PCoA). Species population structuring and genetic divergence are usually investigated by PCoA and PCA methods and a network diagram. These are based on bootstrapping of data. The association of different genes and DNA sequences with ecological and geographical variables is determined by the latent factor mixed model (LFMM) and redundancy analysis (RDA), which are based on Bayesian and distance methods. Molecular and morphological differentiating characters in the studied species may be identified by linear discriminant analysis (DA) and discriminant analysis of principal components (DAPC). We shall illustrate these methods and related conclusions by giving examples from different edible and medicinal plant species.
Keywords: GWAS analysis, K-Means clustering, LFMM, multidimensional scaling, redundancy analysis
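A minimal sketch of principal coordinate analysis (PCoA) from a pairwise distance matrix, one of the ordination methods listed above; the distances are toy values, not real marker data.

```python
import numpy as np

def pcoa(distance_matrix, n_axes=2):
    """Classical MDS: double-center the squared distances and eigendecompose."""
    D2 = np.asarray(distance_matrix, float) ** 2
    n = D2.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ D2 @ J                        # Gower's double centering
    eigvals, eigvecs = np.linalg.eigh(B)
    order = np.argsort(eigvals)[::-1]            # largest eigenvalues first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    return eigvecs[:, :n_axes] * np.sqrt(np.clip(eigvals[:n_axes], 0, None))

# Toy pairwise distance matrix for 4 individuals
D = np.array([[0.0, 0.2, 0.7, 0.8],
              [0.2, 0.0, 0.6, 0.7],
              [0.7, 0.6, 0.0, 0.3],
              [0.8, 0.7, 0.3, 0.0]])
print(pcoa(D))   # 2D ordination coordinates for plotting population structure
```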
Procedia PDF Downloads 122
20173 Optimal Design of Reference Node Placement for Wireless Indoor Positioning Systems in Multi-Floor Building
Authors: Kittipob Kondee, Chutima Prommak
Abstract:
In this paper, we propose an optimization technique that can be used to optimize the placement of reference nodes and improve the location determination performance for multi-floor buildings. The proposed technique is based on the simulated annealing (SA) algorithm and is called MSMR-M. The performance study in this work is based on simulation. We compare other node-placement techniques found in the literature with the optimal node-placement solutions obtained from our optimization. The results show that the optimal node placement obtained by our proposed technique improves the positioning error distances by up to 20% compared with the other techniques. The proposed technique can provide an average error distance within 1.42 meters.
Keywords: indoor positioning system, optimization system design, multi-floor building, wireless sensor networks
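A minimal sketch of a simulated-annealing placement loop in the spirit of the approach above, with a placeholder cost function standing in for the positioning-error model; the node count, floor size, and cooling schedule are illustrative assumptions.

```python
import math
import random

def placement_cost(placement):
    """Placeholder for the mean positioning-error model of a candidate reference-node layout."""
    # Toy surrogate: prefer nodes spread apart (a real model would simulate RSSI / error distance).
    spread = sum(math.dist(a, b) for i, a in enumerate(placement) for b in placement[i + 1:])
    return -spread

def anneal(n_nodes=4, floor=(10.0, 10.0), iters=5000, t0=1.0, cooling=0.999, seed=7):
    rng = random.Random(seed)
    current = [(rng.uniform(0, floor[0]), rng.uniform(0, floor[1])) for _ in range(n_nodes)]
    cost, temp = placement_cost(current), t0
    for _ in range(iters):
        candidate = list(current)
        i = rng.randrange(n_nodes)                      # perturb one node's position
        candidate[i] = (rng.uniform(0, floor[0]), rng.uniform(0, floor[1]))
        delta = placement_cost(candidate) - cost
        if delta < 0 or rng.random() < math.exp(-delta / temp):
            current, cost = candidate, cost + delta     # accept improving or occasional worse moves
        temp *= cooling
    return current, cost

placement, cost = anneal()
print("reference-node placement:", [(round(x, 1), round(y, 1)) for x, y in placement])
```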
Procedia PDF Downloads 245