Search results for: recursive least squares algorithm
2236 Lipidomic Response to Neoadjuvant Chemoradiotherapy in Rectal Cancer
Authors: Patricia O. Carvalho, Marcia C. F. Messias, Salvador Sanchez Vinces, Caroline F. A. Gatinoni, Vitor P. Iordanu, Carlos A. R. Martinez
Abstract:
Lipidomics methods are widely used in the identification and validation of disease-specific biomarkers and therapy response evaluation. The present study aimed to identify a panel of potential lipid biomarkers to evaluate response to neoadjuvant chemoradiotherapy in rectal adenocarcinoma (RAC). Liquid chromatography–mass spectrometry (LC-MS)-based untargeted lipidomics was used to profile human serum samples from patients with clinical stage T2 or T3 resectable RAC, before and after chemoradiotherapy treatment. A total of 28 blood plasma samples were collected from 14 patients with RAC who were recruited at the São Francisco University Hospital (HUSF/USF). The study was approved by the ethics committee (CAAE 14958819.8.0000.5514). Univariate and multivariate statistical analyses were applied to explore dysregulated metabolic pathways using untargeted lipidomic profiling and data mining approaches. A total of 36 statistically significant altered lipids were identified, and the subsequent partial least-squares discriminant analysis (PLS-DA) model was both cross-validated (R2, Q2) and permutation-tested. Lysophosphatidylcholine (LPC) plasmalogens containing palmitoleic and oleic acids, with high variable importance in projection scores, showed a tendency to be lower after completion of chemoradiotherapy. Chemoradiotherapy seems to change plasmanyl-phospholipid levels, indicating that these lipids play an important role in RAC pathogenesis.
Keywords: lipidomics, neoadjuvant chemoradiotherapy, plasmalogens, rectal adenocarcinoma
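As an illustration of the validation workflow above, the following minimal sketch (assuming Python with NumPy and scikit-learn, on simulated stand-in data rather than the study's serum profiles) computes a cross-validated Q2 for a PLS model on a binary before/after response and compares it against a label-permutation null; the sample and lipid counts merely mirror the figures quoted in the abstract.

```python
# Minimal PLS-DA cross-validation and permutation-test sketch (stand-in data).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(28, 36))             # 28 samples x 36 significant lipids
y = np.repeat([0, 1], 14).astype(float)   # 0 = before, 1 = after therapy

def q2_score(X, y, n_components=2, cv=7):
    """Cross-validated Q2 for a PLS-DA model (PLS on a binary response)."""
    y_hat = cross_val_predict(PLSRegression(n_components), X, y, cv=cv).ravel()
    return 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)

q2 = q2_score(X, y)

# Permutation test: refit on label-shuffled data to estimate the null Q2.
null_q2 = [q2_score(X, rng.permutation(y)) for _ in range(200)]
p_value = (np.sum(np.array(null_q2) >= q2) + 1) / (200 + 1)
print(f"Q2 = {q2:.3f}, permutation p-value = {p_value:.3f}")
```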
Procedia PDF Downloads 130
2235 Nine-Level Shunt Active Power Filter Associated with a Photovoltaic Array Coupled to the Electrical Distribution Network
Authors: Zahzouh Zoubir, Bouzaouit Azzeddine, Gahgah Mounir
Abstract:
The growing use of power electronic switches with nonlinear behavior generates non-sinusoidal currents in distribution networks, which causes damage to domestic and industrial equipment. The multi-level shunt active power filter is subsequently shown to be an adequate solution to this problem. Nevertheless, regulating the DC supply voltage of the active filter is difficult and requires an additional technology. In this article, a photovoltaic generator is connected to the DC bus terminals of the active filter. The proposed system consists of a field of solar panels, three multi-level voltage inverters connected to the power grid, and a nonlinear load consisting of a six-diode rectifier bridge supplying a resistive-inductive load. Current control techniques for active and reactive power are used to compensate for both harmonic currents and reactive power, as well as to inject active solar power into the distribution network. A maximum power point search algorithm of the Perturb and Observe type is applied. Simulation results for the proposed system in the MATLAB/Simulink environment show the performance of the control scheme in injecting solar power into the network, compensating harmonic currents, and correcting the power factor.
Keywords: active power filter, MPPT, perturb and observe algorithm, PV array, PWM control
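The maximum power point search mentioned above follows the classic Perturb and Observe logic: perturb the operating voltage, observe the power change, and reverse direction when power drops. A minimal sketch follows, with a toy PV power curve and an assumed step size standing in for a real panel model.

```python
# Minimal Perturb and Observe (P&O) MPPT loop on a toy PV curve.
def pv_power(v):
    """Toy PV power curve with a maximum near v = 30 V (illustrative only)."""
    return max(0.0, 90.0 - 0.1 * (v - 30.0) ** 2)

def perturb_and_observe(v=20.0, step=0.5, iterations=100):
    p_prev = pv_power(v)
    direction = +1.0
    for _ in range(iterations):
        v += direction * step          # perturb the operating voltage
        p = pv_power(v)
        if p < p_prev:                 # power dropped: reverse direction
            direction = -direction
        p_prev = p
    return v, p_prev

v_mpp, p_mpp = perturb_and_observe()
print(f"Converged near V = {v_mpp:.1f} V, P = {p_mpp:.1f} W")
```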
Procedia PDF Downloads 337
2234 Hard Disk Failure Predictions in Supercomputing System Based on CNN-LSTM and Oversampling Technique
Authors: Yingkun Huang, Li Guo, Zekang Lan, Kai Tian
Abstract:
Hard disk drive (HDD) failures in an exascale supercomputing system may lead to service interruption, invalidate previous calculations, and cause permanent data loss. Therefore, initiating corrective actions before hard drive failures materialize is critical to the continued operation of jobs. In this paper, a highly accurate analysis model based on CNN-LSTM and an oversampling technique is proposed, which can correctly predict the necessity of a disk replacement even ten days in advance. Generally, learning-based methods perform poorly on training datasets with long-tailed distributions, and fault prediction is a classic case because of the scarcity of failure data. To overcome this problem, a new oversampling technique was employed to augment the data, and then an improved CNN-LSTM with a shortcut connection was built to learn more effective features. The shortcut transmits the results of the previous CNN layer and, after weighted fusion with the output of the next layer, is used as the input of the LSTM model. Finally, a detailed empirical comparison of six prediction methods is presented and discussed on a public evaluation dataset. The experiments indicate that the proposed method predicts disk failure with 0.91 precision, 0.91 recall, 0.91 F-measure, and 0.90 MCC for a 10-day prediction horizon. Thus, the proposed algorithm is an efficient approach for predicting HDD failure in supercomputing.
Keywords: HDD replacement, failure, CNN-LSTM, oversampling, prediction
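The weighted shortcut fusion described above can be sketched compactly. The following PyTorch model is a hypothetical reconstruction, not the authors' architecture: layer sizes, the SMART-attribute window, and a learned scalar fusion weight are all assumptions.

```python
# Sketch of a CNN-LSTM with a weighted shortcut between two conv layers.
import torch
import torch.nn as nn

class CNNLSTMShortcut(nn.Module):
    def __init__(self, n_features=12, hidden=64):
        super().__init__()
        self.conv1 = nn.Conv1d(n_features, 32, kernel_size=3, padding=1)
        self.conv2 = nn.Conv1d(32, 32, kernel_size=3, padding=1)
        self.alpha = nn.Parameter(torch.tensor(0.5))  # learned fusion weight
        self.lstm = nn.LSTM(32, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)              # failure probability

    def forward(self, x):                 # x: (batch, time, features)
        x = x.transpose(1, 2)             # -> (batch, features, time)
        h1 = torch.relu(self.conv1(x))
        h2 = torch.relu(self.conv2(h1))
        fused = self.alpha * h1 + (1 - self.alpha) * h2   # weighted shortcut
        out, _ = self.lstm(fused.transpose(1, 2))
        return torch.sigmoid(self.head(out[:, -1]))

model = CNNLSTMShortcut()
smart_window = torch.randn(8, 10, 12)    # 8 disks, 10 days of 12 SMART stats
print(model(smart_window).shape)         # torch.Size([8, 1])
```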
Procedia PDF Downloads 78
2233 Regularizing Software for Aerosol Particles
Authors: Christine Böckmann, Julia Rosemann
Abstract:
We present an inversion algorithm that is used in the European Aerosol Lidar Network for the inversion of data collected with multi-wavelength Raman lidar. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. The algorithm is based on manually controlled inversion of optical data, which allows for detailed sensitivity studies and thus provides comparably high quality of the derived data products. The algorithm allows us to derive particle effective radius and volume and surface-area concentrations with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index is still a challenge in view of the accuracy required for these parameters in climate change studies, in which light absorption needs to be known with high accuracy. The single-scattering albedo (SSA) can be computed from the retrieved microphysical parameters and allows us to categorize aerosols into high- and low-absorbing aerosols. From a mathematical point of view, the algorithm is based on the concept of using truncated singular value decomposition as the regularization method. This method was adapted to work for the retrieval of the particle size distribution function (PSD) and is called a hybrid regularization technique since it uses a triple of regularization parameters. The inversion of an ill-posed problem, such as the retrieval of the PSD, is always a challenging task because very small measurement errors will most often be hugely amplified during the solution process unless an appropriate regularization method is used. Even using a regularization method is difficult, since appropriate regularization parameters have to be determined. Therefore, in a next stage of our work, we decided to use two regularization techniques in parallel for comparison purposes. The second method is an iterative regularization method based on Pade iteration; here, the number of iteration steps serves as the regularization parameter. We successfully developed semi-automated software for spherical particles which is able to run even on a parallel processor machine. From a mathematical point of view, it is also very important (as a selection criterion for an appropriate regularization method) to investigate the degree of ill-posedness of the problem, which we found to be moderate. We computed the optical data from mono-modal logarithmic PSDs and investigated particles of spherical shape in our simulations. We considered particle radii as large as 6 µm, which not only covers the size range of particles in the fine-mode fraction of naturally occurring PSDs but also covers a part of the coarse-mode fraction. We considered errors of 15% in the simulation studies. For the SSA, 100% of all cases achieve relative errors below 12%. In more detail, 87% of all cases for 355 nm and 88% of all cases for 532 nm are well below 6%. With respect to the absolute error, for non- and weakly absorbing particles with real parts of 1.5 and 1.6, the accuracy limit of ±0.03 is achieved in all modes. In sum, 70% of all cases stay below ±0.03, which is sufficient for climate change studies.
Keywords: aerosol particles, inverse problem, microphysical particle properties, regularization
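The core regularization idea, truncated SVD, can be illustrated on a synthetic ill-posed linear system; the truncation index k acts as the regularization parameter, trading noise amplification against resolution. The kernel below is an invented stand-in for the lidar forward operator, not the network's actual model.

```python
# Truncated SVD (TSVD) regularization for an ill-posed linear inversion.
import numpy as np

rng = np.random.default_rng(1)
n = 50
# Smooth, nearly singular kernel (a discretized integral operator).
t = np.linspace(0, 1, n)
A = np.exp(-10 * (t[:, None] - t[None, :]) ** 2) / n
x_true = np.sin(2 * np.pi * t)                # "true" size distribution
b = A @ x_true + 1e-4 * rng.normal(size=n)    # noisy optical data

def tsvd_solve(A, b, k):
    """Invert A x = b keeping only the k largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

for k in (5, 15, 45):   # too-large k amplifies the measurement noise
    err = np.linalg.norm(tsvd_solve(A, b, k) - x_true) / np.linalg.norm(x_true)
    print(f"k = {k:2d}: relative error = {err:.3f}")
```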
Procedia PDF Downloads 342
2232 Vehicular Speed Detection Camera System Using Video Stream
Authors: C. A. Anser Pasha
Abstract:
In this paper, a new Vehicular Speed Detection Camera System (SDCS), applicable as an alternative to traditional radars with the same or even better accuracy, is presented. The real-time measurement and analysis of various traffic parameters, such as speed and number of vehicles, are increasingly required in traffic control and management. Image processing techniques are now considered an attractive and flexible method for automatic analysis and data collection in traffic engineering. Various algorithms based on image processing techniques have been applied to detect multiple vehicles and track them. The SDCS process can be divided into three successive phases. The first is the object detection phase, which uses a hybrid algorithm combining an adaptive background subtraction technique with a three-frame differencing algorithm, rectifying the major drawback of using adaptive background subtraction alone. The second is the object tracking phase, which consists of three successive operations: object segmentation, object labeling, and object center extraction. The tracking operation takes into consideration the different possible scenarios for a moving object: simple tracking, the object leaving the scene, the object entering the scene, the object being crossed by another object, and one object leaving as another enters the scene. The third is the speed calculation phase, in which speed is calculated from the number of frames the object takes to pass through the scene.
Keywords: radar, image processing, detection, tracking, segmentation
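A minimal OpenCV sketch of the hybrid detection phase is given below; the video path, thresholds, and background subtractor settings are assumed values, and the MOG2 subtractor stands in for whichever adaptive technique the system uses.

```python
# Hybrid detection: adaptive background subtraction + three-frame differencing.
import cv2

cap = cv2.VideoCapture("traffic.mp4")          # placeholder input path
backsub = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

frames = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    frames = (frames + [gray])[-3:]            # keep the last three frames
    if len(frames) < 3:
        continue
    f0, f1, f2 = frames

    # Three-frame differencing: motion present in both consecutive diffs.
    d1 = cv2.absdiff(f1, f0)
    d2 = cv2.absdiff(f2, f1)
    motion = cv2.bitwise_and(cv2.threshold(d1, 25, 255, cv2.THRESH_BINARY)[1],
                             cv2.threshold(d2, 25, 255, cv2.THRESH_BINARY)[1])

    # Adaptive background subtraction, then fuse the two masks.
    fg = backsub.apply(frame)
    mask = cv2.bitwise_or(motion, fg)
    # Object segmentation, labeling, and center extraction would follow here;
    # speed = scene length / (frames-in-scene / fps) in the final phase.
cap.release()
```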
Procedia PDF Downloads 466
2231 Increasing the Resilience of Cyber Physical Systems in Smart Grid Environments using Dynamic Cells
Authors: Andrea Tundis, Carlos García Cordero, Rolf Egert, Alfredo Garro, Max Mühlhäuser
Abstract:
Resilience is an important system property that relies on the ability of a system to automatically recover from a degraded state so as to continue providing its services. Resilient systems have the means of detecting faults and failures, with the added capability of automatically restoring their normal operations. Mastering resilience in the domain of Cyber-Physical Systems is challenging due to the interdependence of hybrid hardware and software components, along with physical limitations, laws, regulations, and standards, among others. In order to overcome these challenges, this paper presents a modeling approach, based on the concept of Dynamic Cells, tailored to the management of Smart Grids. Additionally, a heuristic algorithm that works on top of the proposed modeling approach to find resilient configurations has been defined and implemented. More specifically, the model supports a flexible representation of Smart Grids, and the algorithm is able to manage, at different abstraction levels, the resource consumption of individual grid elements in the presence of failures and faults. Finally, the proposal is evaluated in a test scenario that shows the effectiveness of the approach when dealing with complex scenarios where adequate solutions are difficult to find.
Keywords: cyber-physical systems, energy management, optimization, smart grids, self-healing, resilience, security
Procedia PDF Downloads 324
2230 Fill Rate Window as a Criterion for Spares Allocation
Authors: Michael Dreyfuss, Yahel Giat
Abstract:
Limited battery range and long recharging times are the greatest obstacles to the successful adoption of electric cars. One of the suggestions to overcome these problems is that carmakers retain ownership of batteries and provide a battery swapping service, so that customers exchange their depleted batteries for recharged ones. Motivated by this example, we consider the problem of optimal spares allocation in an exchangeable-item, multi-location repair system. We generalize the standard service measures of fill rate and average waiting time to reflect the fact that customers penalize the service provider only if they have to wait more than a 'tolerable' time window. These measures are denoted the window fill rate and the truncated waiting time, respectively. We find that the truncated waiting time is convex, and therefore a greedy algorithm solves the spares allocation problem efficiently. We show that the window fill rate is generally S-shaped, describe an efficient algorithm to find a near-optimal solution, and detail a priori and a posteriori upper bounds on the distance from the optimum. The theory is complemented with a large-scale numerical example demonstrating spare battery allocation in battery swapping stations.
Keywords: convex-concave optimization, exchangeable item, M/G/infinity, multiple location, repair system, spares allocation, window fill rate
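Convexity of the truncated waiting time is what makes the greedy allocation work: each spare goes to the location with the largest marginal improvement. The sketch below illustrates this on invented convex per-location cost curves standing in for the M/G/infinity formulas.

```python
# Greedy spares allocation under a convex, decreasing per-location cost.
import numpy as np

rng = np.random.default_rng(2)
n_locations, budget = 5, 20
# Convex decreasing cost per location: cost_i(s) = w_i / (s + 1).
weights = rng.uniform(1.0, 10.0, size=n_locations)
cost = lambda s: weights / (s + 1.0)

spares = np.zeros(n_locations, dtype=int)
for _ in range(budget):
    gain = cost(spares) - cost(spares + 1)   # marginal improvement per site
    best = int(np.argmax(gain))
    spares[best] += 1                        # place the spare greedily

print("allocation:", spares, "total cost:", cost(spares).sum().round(3))
```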
Procedia PDF Downloads 493
2229 Machine Learning Approach for Automating Electronic Component Error Classification and Detection
Authors: Monica Racha, Siva Chandrasekaran, Alex Stojcevski
Abstract:
Engineering programs focus on promoting students' personal and professional development by ensuring that students acquire technical and professional competencies during their four-year studies. The traditional engineering laboratory provides an opportunity for students to "practice by doing," and laboratory facilities aid them in obtaining insight into and understanding of their discipline. Due to rapid technological advancements and the COVID-19 outbreak, traditional labs are transforming into virtual learning environments. Aim: To address the limitations of the physical laboratory, this research study uses a Machine Learning (ML) algorithm that interfaces with the Augmented Reality HoloLens and predicts image behavior to classify and detect electronic components. The automated system detects and classifies the position of all components on a breadboard using the ML algorithm. This research will assist first-year undergraduate engineering students in conducting laboratory practices without any supervision. With the help of the HoloLens and the ML algorithm, students will reduce component placement errors on a breadboard and increase the efficiency of simple laboratory practices performed virtually. Method: Images of breadboards, resistors, capacitors, transistors, and other electrical components will be collected using the HoloLens 2 and stored in a database. The collected image dataset will then be used for training a machine learning model. The raw images will be cleaned, processed, and labeled to facilitate further analysis of component error classification and detection. For instance, when students conduct laboratory experiments, the HoloLens captures images of students placing different components on a breadboard, and the images are forwarded to the server for detection in the background. A hybrid Convolutional Neural Network (CNN) and Support Vector Machine (SVM) algorithm will be used to train the dataset for object recognition and classification. The convolution layers extract image features, which are then classified using an SVM. With adequately labeled training data, the model will predict, categorize, and assess whether students place components correctly. The data acquired through the HoloLens thus include images of students assembling electronic components, and the system constantly checks whether students appropriately position components on the breadboard and connect them so that the circuit functions. When students misplace any component, the HoloLens predicts the error before the user places the component in the incorrect position and prompts students to correct their mistakes. This hybrid CNN-SVM approach to automating electronic component error classification and detection eliminates component connection problems and minimizes the risk of component damage. Conclusion: These augmented reality smart glasses powered by machine learning provide a wide range of benefits to supervisors, professionals, and students. They help customize the learning experience, which is particularly beneficial in large classes with limited time, and determine the accuracy with which machine learning algorithms can forecast whether students are making the correct decisions and completing their laboratory tasks.
Keywords: augmented reality, machine learning, object recognition, virtual laboratories
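The CNN-feature/SVM split described in the method can be sketched as follows; the network shape and stand-in data are assumptions, and the extractor is left untrained here purely to show the pipeline's structure (in practice the CNN would be trained on the labeled component images first).

```python
# Sketch of a hybrid pipeline: CNN feature extraction -> SVM classification.
import torch
import torch.nn as nn
from sklearn.svm import SVC

class FeatureExtractor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())   # -> 32-dim feature

    def forward(self, x):
        return self.net(x)

extractor = FeatureExtractor().eval()        # untrained in this sketch
images = torch.randn(40, 3, 64, 64)          # stand-in component images
labels = torch.randint(0, 2, (40,))          # 0 = correct, 1 = misplaced

with torch.no_grad():
    feats = extractor(images).numpy()        # convolution layer features

svm = SVC(kernel="rbf").fit(feats, labels.numpy())
print("training accuracy:", svm.score(feats, labels.numpy()))
```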
Procedia PDF Downloads 134
2228 Implementation of a Multimodal Biometrics Recognition System with Combined Palm Print and Iris Features
Authors: Rabab M. Ramadan, Elaraby A. Elgallad
Abstract:
In extensive application, the performance of unimodal biometric systems has to face a diversity of problems such as signal and background noise, distortion, and environmental differences. Therefore, multimodal biometric systems have been proposed to solve these problems. This paper introduces a bimodal biometric recognition system based on features extracted from the human palm print and iris. Palm print biometrics is a fairly new, evolving technology that is used to identify people by their palm features. The iris is a strong competitor, together with the face and fingerprints, for presence in multimodal recognition systems. In this research, we introduce an algorithm for combining the palm- and iris-extracted features using a texture-based descriptor, the Scale-Invariant Feature Transform (SIFT). Since the feature sets are non-homogeneous, as features of different biometric modalities are used, they are concatenated to form a single feature vector. Particle swarm optimization (PSO) is used as a feature selection technique to reduce the dimensionality of the feature vector. The proposed algorithm will be applied to the Indian Institute of Technology Delhi (IITD) database, and its performance will be compared with various iris recognition algorithms found in the literature.
Keywords: iris recognition, particle swarm optimization, feature extraction, feature selection, palm print, Scale-Invariant Feature Transform (SIFT)
Procedia PDF Downloads 233
2227 A Hybrid Algorithm Based on Greedy Randomized Adaptive Search Procedure and Chemical Reaction Optimization for the Vehicle Routing Problem with Hard Time Windows
Authors: Imen Boudali, Marwa Ragmoun
Abstract:
The Vehicle Routing Problem with Hard Time Windows (VRPHTW) is a basic distribution management problem that models many real-world problems. The objective is to serve a set of customers with known demands on minimum-cost vehicle routes while satisfying vehicle capacity and hard time windows for customers. In this paper, we propose to deal with this optimization problem by using a new hybrid stochastic algorithm based on two metaheuristics: Chemical Reaction Optimization (CRO) and the Greedy Randomized Adaptive Search Procedure (GRASP). The first method is inspired by the natural process of chemical reactions, which transforms unstable substances with excessive energy into stable ones. During this process, the molecules interact with each other through a series of elementary reactions to reach the minimum energy for their existence. This property is embedded in CRO to solve the VRPHTW. In order to enhance population diversity throughout the search process, we integrated GRASP into our method. Simulation results on Solomon's benchmark instances show the very satisfactory performance of the proposed approach.
Keywords: benchmark problems, combinatorial optimization, vehicle routing problem with hard time windows, metaheuristics, hybridization, GRASP, CRO
Procedia PDF Downloads 410
2226 Automatic Early Breast Cancer Segmentation Enhancement by Image Analysis and Hough Transform
Authors: David Jurado, Carlos Ávila
Abstract:
Detection of early signs of breast cancer development is crucial to quickly diagnose the disease and to define adequate treatment to increase the survival probability of the patient. Computer-Aided Detection systems (CADs), along with modern data techniques such as Machine Learning (ML) and Neural Networks (NN), have shown an overall improvement in digital mammography cancer diagnosis, reducing false positive and false negative rates and becoming important tools for the diagnostic evaluations performed by specialized radiologists. However, ML- and NN-based algorithms rely on datasets that might bring issues to the segmentation tasks. In the present work, an automatic segmentation and detection algorithm is described. This algorithm uses image processing techniques along with the Hough transform to automatically identify microcalcifications, which are highly correlated with breast cancer development in its early stages. Along with image processing, automatic segmentation of high-contrast objects is done using edge extraction and the circle Hough transform. This provides the geometrical features needed for an automatic mask design, which extracts statistical features of the regions of interest. The results shown in this study prove the potential of this tool for further diagnostics and classification of mammographic images, due to its low sensitivity to noisy images and low-contrast mammograms.
Keywords: breast cancer, segmentation, X-ray imaging, Hough transform, image analysis
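A hedged sketch of the circle-Hough step is shown below; the parameter values (and the fallback synthetic image used when no mammogram file is present) are illustrative assumptions, not tuned settings from the study.

```python
# Circle Hough transform for detecting small high-contrast spots and
# building a region-of-interest mask from the detected circles.
import cv2
import numpy as np

img = cv2.imread("mammogram.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
if img is None:                               # fall back to a synthetic image
    img = np.zeros((256, 256), np.uint8)
    for (x, y, r) in [(60, 80, 5), (150, 130, 4), (200, 60, 6)]:
        cv2.circle(img, (x, y), r, 255, -1)   # bright "calcification" spots
img = cv2.medianBlur(img, 5)                  # suppress noise, keep edges

circles = cv2.HoughCircles(
    img, cv2.HOUGH_GRADIENT, dp=1, minDist=10,
    param1=100,   # Canny high threshold used internally
    param2=15,    # accumulator threshold: lower = more detections
    minRadius=1, maxRadius=10)                # microcalcifications are small

mask = np.zeros_like(img)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(mask, (x, y), r, 255, thickness=-1)  # fill ROI mask
    # Statistical features (mean/std of intensities inside the mask, etc.)
    # would be extracted from each region of interest here.
print("detections:", 0 if circles is None else len(circles[0]))
```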
Procedia PDF Downloads 82
2225 Simultaneous Determination of Six Characterizing/Quality Parameters of Biodiesels via 1H NMR and Multivariate Calibration
Authors: Gustavo G. Shimamoto, Matthieu Tubino
Abstract:
The characterization and quality of biodiesel samples are checked by determining several parameters. Considering the large number of analyses to be performed, as well as the disadvantages of using toxic solvents and generating waste, multivariate calibration is suggested to reduce the number of tests. In this work, hydrogen nuclear magnetic resonance (1H NMR) spectra were used to build multivariate models, from partial least squares (PLS) regression, in order to determine simultaneously six important characterizing and/or quality parameters of biodiesels: density at 20 ºC, kinematic viscosity at 40 ºC, iodine value, acid number, oxidative stability, and water content. Biodiesels from twelve different oil sources were used in this study: babassu, brown flaxseed, canola, corn, cottonseed, macauba almond, microalgae, palm kernel, residual frying, sesame, soybean, and sunflower. The 1H NMR spectra reflect the structures of the compounds present in the biodiesel samples and showed suitable correlations with the six parameters. The PLS models were constructed with between 5 and 7 latent variables; the obtained values of r(cal) and r(val) were greater than 0.994 and 0.989, respectively. In addition, the models were considered suitable for predicting all six parameters in external samples, taking the analytical speed into account. Thus, the alliance between 1H NMR and PLS proved appropriate for characterizing and evaluating the quality of biodiesels, significantly reducing analysis time, reagent/solvent consumption, and waste generation. Therefore, the proposed methods can be considered to adhere to the principles of green chemistry.
Keywords: biodiesel, multivariate calibration, nuclear magnetic resonance, quality parameters
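Since PLS handles several responses in one model, the six parameters can be predicted simultaneously. The sketch below uses random stand-in spectra and parameter values, not the measured dataset, and only shows the shape of such a multi-output calibration.

```python
# Multi-output PLS calibration: 1H NMR spectra -> six quality parameters.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(60, 500))    # 60 biodiesel spectra x 500 NMR points
Y = rng.normal(size=(60, 6))      # 6 parameters: density, viscosity, ...

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.25, random_state=0)
pls = PLSRegression(n_components=6).fit(X_tr, Y_tr)   # one model, six outputs
Y_hat = pls.predict(X_te)

for j, name in enumerate(["density", "viscosity", "iodine value",
                          "acid number", "oxid. stability", "water"]):
    r = np.corrcoef(Y_te[:, j], Y_hat[:, j])[0, 1]
    print(f"{name:16s} r(val) = {r: .3f}")   # near zero on random stand-ins
```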
Procedia PDF Downloads 538
2224 Movie Genre Preference Prediction Using Machine Learning for Customer-Based Information
Authors: Haifeng Wang, Haili Zhang
Abstract:
Most movie recommendation systems have been developed to help customers find items of interest. This work introduces a predictive model usable by small and medium-sized enterprises (SMEs) that need a data-based and analytical approach to stock proper movies for local audiences and retain more customers. We used classification models to extract features from thousands of customers' demographic, behavioral, and social information to predict their movie genre preference. In the implementation, a Gaussian kernel support vector machine (SVM) classification model and a logistic regression model were established to extract features from the sample data, and their in-sample test errors were compared. Out-of-sample errors were also compared under different Vapnik–Chervonenkis (VC) dimensions of the machine learning algorithm to find and prevent overfitting. The Gaussian kernel SVM prediction model can correctly predict movie genre preferences in 85% of positive cases. The accuracy of the algorithm increased to 93% with a smaller VC dimension and less overfitting. These findings advance our understanding of how to use a machine learning approach to predict customers' preferences with a small dataset and design prediction tools for these enterprises.
Keywords: computational social science, movie preference, machine learning, SVM
Procedia PDF Downloads 257
2223 A Convergent Interacting Particle Method for Computing KPP Front Speeds in Random Flows
Authors: Tan Zhang, Zhongjian Wang, Jack Xin, Zhiwen Zhang
Abstract:
We aim to efficiently compute the spreading speeds of reaction-diffusion-advection (RDA) fronts in divergence-free random flows under the Kolmogorov-Petrovsky-Piskunov (KPP) nonlinearity. We study a stochastic interacting particle method (IPM) for the reduced principal eigenvalue (Lyapunov exponent) problem of an associated linear advection-diffusion operator with spatially random coefficients. The Fourier representation of the random advection field and the Feynman-Kac (FK) formula of the principal eigenvalue (Lyapunov exponent) form the foundation of our method implemented as a genetic evolution algorithm. The particles undergo advection-diffusion and mutation/selection through a fitness function originated in the FK semigroup. We analyze the convergence of the algorithm based on operator splitting and present numerical results on representative flows such as 2D cellular flow and 3D Arnold-Beltrami-Childress (ABC) flow under random perturbations. The 2D examples serve as a consistency check with semi-Lagrangian computation. The 3D results demonstrate that IPM, being mesh-free and self-adaptive, is simple to implement and efficient for computing front spreading speeds in the advection-dominated regime for high-dimensional random flows on unbounded domains where no truncation is needed.
Keywords: KPP front speeds, random flows, Feynman-Kac semigroups, interacting particle method, convergence analysis
Procedia PDF Downloads 46
2222 The Design of a Mixed Matrix Model for Activity Levels Extraction and Sub Processes Classification of a Work Project (Case: Great Tehran Electrical Distribution Company)
Authors: Elham Allahmoradi, Bahman Allahmoradi, Ali Bonyadi Naeini
Abstract:
Complex systems have many aspects, and a variety of methods have been developed to analyze them. The most efficient of these methods should not only be simple but also provide useful and comprehensive information about many aspects of the system. Matrix methods are considered the most commonly used methods for analyzing and designing systems. Each matrix method can examine a particular aspect of the system; if these methods are combined, managers can access more comprehensive and broader information about the system. This study was conducted in four steps. In the first step, a process model of a real project was extracted through IDEF3. In the second step, activity levels were obtained by writing the process model in the form of a design structure matrix (DSM) and sorting it through the triangulation algorithm (TA). In the third step, sub-processes were obtained by writing the process model in the form of an interface structure matrix (ISM) and clustering it through the cluster identification algorithm (CIA). In the fourth step, a mixed model was developed to provide a unified picture of the project structure through the simultaneous presentation of activities and sub-processes. Finally, the paper closes with a conclusion.
Keywords: integrated definition for process description capture (IDEF3) method, design structure matrix (DSM), interface structure matrix (ISM), mixed matrix model, activity level, sub-process
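The second step, obtaining activity levels by triangulating a DSM, amounts to repeatedly peeling off activities whose predecessors have all been scheduled. A minimal sketch on an invented five-activity DSM follows; a cycle would signal the need for tearing or clustering, which this sketch only flags.

```python
# DSM triangulation (sequencing): group activities into dependency levels.
import numpy as np

# dsm[i, j] = 1 means activity i depends on (receives input from) activity j.
dsm = np.array([[0, 1, 0, 0, 0],
                [0, 0, 0, 1, 0],
                [1, 0, 0, 0, 0],
                [0, 0, 0, 0, 0],
                [0, 0, 1, 0, 0]])

def activity_levels(dsm):
    remaining = set(range(len(dsm)))
    levels = []
    while remaining:
        # A level: activities whose remaining predecessors are all scheduled.
        level = {i for i in remaining
                 if not any(dsm[i, j] for j in remaining if j != i)}
        if not level:
            raise ValueError("cycle detected: needs tearing/clustering")
        levels.append(sorted(level))
        remaining -= level
    return levels

print(activity_levels(dsm))   # [[3], [1], [0], [2], [4]]
```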
Procedia PDF Downloads 493
2221 Spatial Analysis of Festival Spaces in Traditional Festivals in Taipei City
Authors: Liu Szu Yin
Abstract:
The center of urban development lies in commercial transactions and folk religious activities. In Taipei City, temples serve as crucial urban spaces and centers for civic activities and religious beliefs. The appearance of local temples can be influenced by the prosperity of the surrounding communities. Apart from being centers of religious worship, Taipei's temples also host festival celebrations, allowing people to gather in front of the temples and form collective urban memories. The spatial settings for hosting festival activities include streets, squares, parks, and buildings. In Taipei, many traditional festivals take place on the streets, either as round-trip routes or as linear routes with a single starting and ending point. Given the processions and parades involving palanquins and other ceremonial objects during traditional festival activities, street spaces are frequently utilized. Therefore, this study analyzes, through on-site research, the historical context and street spaces of three traditional festivals in Taipei City: those of Qingshan Temple in Monga, Xiahai City God Temple in Dadaocheng, and Baoan Temple in Dalongdong. Most urban festival planners need to understand the characteristics of the city's streets in order to effectively utilize street spaces for festival planning. Taipei's traditional festivals not only preserve Chinese traditional culture but also incorporate modern elements, ensuring the transmission of culture and faith and allowing the city to be characterized by a sustainable culture and unique urban memories.
Keywords: festival space, urban festival, Taipei, urban memory
Procedia PDF Downloads 67
2220 Algorithm for Quantification of Pulmonary Fibrosis in Chest X-Ray Exams
Authors: Marcela de Oliveira, Guilherme Giacomini, Allan Felipe Fattori Alves, Ana Luiza Menegatti Pavan, Maria Eugenia Dela Rosa, Fernando Antonio Bacchim Neto, Diana Rodrigues de Pina
Abstract:
It is estimated that, worldwide, one death every 10 seconds (about 2 million deaths each year) is attributed to tuberculosis (TB). Even after effective treatment, TB leaves sequelae such as pulmonary fibrosis, compromising the quality of life of patients. Evaluations of these sequelae are usually performed subjectively by radiology specialists, and subjective evaluation may show inter- and intra-observer variation. X-ray examination is the most commonly performed diagnostic imaging method for monitoring patients diagnosed with TB and the least costly to the institution. The application of computational algorithms is of utmost importance for a more objective quantification of pulmonary impairment in individuals with tuberculosis. The purpose of this research is to use computer algorithms to quantify pulmonary impairment before and after treatment of patients with pulmonary TB. The X-ray images of 10 patients with a TB diagnosis confirmed by examination of sputum smears were studied. Initially, segmentation of the total lung area was performed (posteroanterior and lateral views), and then the region compromised by the pulmonary sequelae was targeted. Through morphological operators and the application of a signal-noise tool, it was possible to determine the compromised lung volume. The largest pre- to post-treatment difference found was 85.85%, and the smallest was 54.08%.
Keywords: algorithm, radiology, tuberculosis, X-ray exam
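The quantification step, thresholding plus morphological cleanup of the compromised region, can be sketched as follows; the synthetic image and threshold values are assumptions that stand in for a real chest X-ray and the study's segmentation criteria.

```python
# Quantify a compromised lung region via thresholds + morphological operators.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(4)
img = rng.normal(0.3, 0.05, size=(256, 256))      # background intensity
img[60:200, 40:120] += 0.4                        # "lung" field
img[150:190, 60:100] += 0.2                       # denser "fibrosis" patch

lung = img > 0.5                                  # crude lung segmentation
lung = ndimage.binary_closing(lung, iterations=2) # clean up with morphology
lung = ndimage.binary_opening(lung, iterations=2)

fibrosis = (img > 0.8) & lung                     # high-density region
fibrosis = ndimage.binary_opening(fibrosis, iterations=1)

fraction = fibrosis.sum() / max(lung.sum(), 1)    # compromised area fraction
print(f"compromised fraction of lung area: {100 * fraction:.1f}%")
```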
Procedia PDF Downloads 417
2219 A Multi-Objective Programming Model to Supplier Selection and Order Allocation Problem in Stochastic Environment
Authors: Rouhallah Bagheri, Morteza Mahmoudi, Hadi Moheb-Alizadeh
Abstract:
This paper aims at developing a multi-objective model for the supplier selection and order allocation problem in a stochastic environment, where the purchasing cost, the percentage of items delivered late, and the percentage of rejected items provided by each supplier are supposed to be stochastic parameters following any arbitrary probability distribution. In this regard, dependent chance programming is used, which maximizes the probability of the event that the total purchasing cost, total late-delivered items, and total rejected items are less than or equal to predetermined values given by the decision maker. The above stochastic multi-objective programming problem is then transformed into a stochastic single-objective programming problem using the minimum deviation method. In the next step, the resulting problem is solved by applying a genetic algorithm, which performs a simulation process in order to calculate the stochastic objective function as its fitness function. Finally, the impact of the stochastic parameters on the obtained solution is examined via a sensitivity analysis exploiting the coefficient of variation. The results show that the greater the coefficients of variation of the stochastic parameters, the more the value of the objective function in the stochastic single-objective programming problem deteriorates.
Keywords: supplier selection, order allocation, dependent chance programming, genetic algorithm
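The GA-with-simulation idea above can be sketched on a toy instance: the fitness of an order allocation is the Monte Carlo estimate of the probability that total cost stays under a target, in the spirit of dependent chance programming. Supplier data, the target, and all GA settings below are invented for illustration.

```python
# GA whose fitness is a simulated probability (dependent-chance flavor).
import numpy as np

rng = np.random.default_rng(5)
n_suppliers, demand, target_cost = 4, 100.0, 560.0
cost_mean = np.array([5.0, 5.5, 6.0, 4.8])
cost_std = np.array([0.5, 0.2, 0.1, 0.9])        # stochastic unit costs

def fitness(alloc, n_sim=500):
    """P(total cost <= target) estimated by Monte Carlo simulation."""
    costs = rng.normal(cost_mean, cost_std, size=(n_sim, n_suppliers))
    return np.mean(costs @ alloc <= target_cost)

def random_alloc():
    w = rng.random(n_suppliers)
    return demand * w / w.sum()                  # allocation sums to demand

pop = [random_alloc() for _ in range(30)]
for _ in range(40):                              # generations
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                           # truncation selection
    children = []
    for _ in range(20):
        a, b = rng.choice(10, size=2, replace=False)
        child = 0.5 * (parents[a] + parents[b])  # arithmetic crossover
        child = np.maximum(child + rng.normal(0, 2.0, n_suppliers), 0)
        children.append(demand * child / child.sum())   # repair to demand
    pop = parents + children

best = max(pop, key=fitness)
print("best allocation:", best.round(1), "P(cost<=target):", fitness(best))
```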
Procedia PDF Downloads 311
2218 Finite-Sum Optimization: Adaptivity to Smoothness and Loopless Variance Reduction
Authors: Bastien Batardière, Joon Kwon
Abstract:
For finite-sum optimization, variance-reduced (VR) gradient methods compute at each iteration the gradient of a single function (or of a mini-batch), and yet achieve faster convergence than SGD thanks to a carefully crafted lower-variance stochastic gradient estimator that reuses past gradients. Another important line of research of the past decade in continuous optimization is adaptive algorithms such as AdaGrad, which dynamically adjust the (possibly coordinate-wise) learning rate to past gradients and thereby adapt to the geometry of the objective function. Variants such as RMSprop and Adam demonstrate outstanding practical performance that has contributed to the success of deep learning. In this work, we present AdaLVR, which combines the AdaGrad algorithm with loopless variance-reduced gradient estimators such as SAGA or L-SVRG, and benefits from a straightforward construction and a streamlined analysis. We show that AdaLVR inherits both the good convergence properties of VR methods and the adaptive nature of AdaGrad: in the case of L-smooth convex functions, we establish a gradient complexity of O(n + (L + √(nL))/ε) without prior knowledge of L. Numerical experiments demonstrate the superiority of AdaLVR over state-of-the-art methods. Moreover, we empirically show that the RMSprop and Adam algorithms combined with variance-reduced gradient estimators achieve even faster convergence.
Keywords: convex optimization, variance reduction, adaptive algorithms, loopless
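A minimal sketch of the AdaLVR idea, pairing the coordinate-wise AdaGrad step with the L-SVRG estimator, is given below on a least-squares finite sum; the step size, the reference-update probability, and the data are illustrative assumptions, not the paper's tuned settings.

```python
# AdaGrad step sizes driven by the loopless SVRG (L-SVRG) gradient estimator.
import numpy as np

rng = np.random.default_rng(6)
n, d = 200, 20
A = rng.normal(size=(n, d))
b = A @ rng.normal(size=d) + 0.01 * rng.normal(size=n)

grad_i = lambda w, i: (A[i] @ w - b[i]) * A[i]       # gradient of f_i
full_grad = lambda w: A.T @ (A @ w - b) / n          # average gradient

w = np.zeros(d)
w_ref, mu = w.copy(), full_grad(w)                   # reference point
G = np.zeros(d)                                      # AdaGrad accumulator
eta, p = 0.5, 1.0 / n

for t in range(20000):
    i = rng.integers(n)
    g = grad_i(w, i) - grad_i(w_ref, i) + mu         # L-SVRG estimator
    G += g * g                                       # coordinate-wise AdaGrad
    w -= eta * g / (np.sqrt(G) + 1e-8)
    if rng.random() < p:                             # loopless reference update
        w_ref, mu = w.copy(), full_grad(w)

print("final mean-squared residual:", np.mean((A @ w - b) ** 2))
```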
Procedia PDF Downloads 68
2217 Neuro-Fuzzy Approach to Improve Reliability in Auxiliary Power Supply System for Nuclear Power Plant
Authors: John K. Avor, Choong-Koo Chang
Abstract:
The transfer of electrical loads at power generation stations from the Standby Auxiliary Transformer (SAT) to the Unit Auxiliary Transformer (UAT), and vice versa, is done through a fast bus transfer scheme. Fast bus transfer is a time-critical application in which the transfer process depends on various parameters, so transfer schemes apply advanced algorithms to ensure power supply reliability and continuity. In a nuclear power generation station, supply continuity is essential, especially for critical class 1E electrical loads. Bus transfers must therefore be executed accurately within 4 to 10 cycles in order to meet safety system requirements. However, the main problem is that there are instances where transfer schemes have malfunctioned due to inaccurate interpretation of key parameters and, consequently, have failed to transfer several critical loads from the UAT to the SAT during a main generator trip event. Although several techniques have been adopted to develop robust transfer schemes, a combination of Artificial Neural Networks and Fuzzy Systems (Neuro-Fuzzy) has not been extensively used. In this paper, we apply the Neuro-Fuzzy concept to determine the plant operating mode and to dynamically predict the appropriate bus transfer algorithm to be selected based on the first cycle of voltage information. The performance of the Sequential Fast Transfer and Residual Bus Transfer schemes was evaluated through simulation and integration of the Neuro-Fuzzy system. The objective of adopting the Neuro-Fuzzy approach in the bus transfer scheme is to utilize the signal validation capabilities of the artificial neural network, specifically the back-propagation algorithm, which is very accurate in learning completely new systems. This research presents the combined effect of artificial neural networks and fuzzy systems in accurately interpreting key bus transfer parameters, such as the magnitude of the residual voltage, the decay time, and the associated phase angle of the residual voltage, in order to determine the possibility of high-speed bus transfer for a particular bus and the corresponding transfer algorithm. This demonstrates potential for general applicability to improve the reliability of the auxiliary power distribution system. The scheme is implemented on the APR1400 nuclear power plant auxiliary system.
Keywords: auxiliary power system, bus transfer scheme, fuzzy logic, neural networks, reliability
Procedia PDF Downloads 171
2216 Vision-Based Daily Routine Recognition for Healthcare with Transfer Learning
Authors: Bruce X. B. Yu, Yan Liu, Keith C. C. Chan
Abstract:
We propose to record the Activities of Daily Living (ADLs) of elderly people using a vision-based system so as to provide better assistive and personalization technologies. Current ADL-related research is based on data collected with help from non-elderly subjects in laboratory environments, and the activities performed are predetermined for the sole purpose of data collection. To obtain more realistic datasets for the application, we recorded ADLs of the elderly with data collected from a real-world environment involving real elderly subjects. Motivated by the need to collect data for more effective research related to elderly care, we chose to collect data in the room of an elderly person. Specifically, we installed a Kinect, a vision-based sensor, on the ceiling to capture the activities that the elderly subject performs every morning. Based on the data, we identified 12 morning activities that the elderly person performs daily. To recognize these activities, we created a HARELCARE framework to investigate the effectiveness of existing Human Activity Recognition (HAR) algorithms and propose the use of a transfer learning algorithm for HAR. We compared performance, in terms of accuracy, and training progress. Although the collected dataset is relatively small, the proposed algorithm has good potential to be applied to all daily routine activities for healthcare purposes, such as evidence-based diagnosis and treatment.
Keywords: daily activity recognition, healthcare, IoT sensors, transfer learning
Procedia PDF Downloads 131
2215 Numerical Simulation of Unsteady Cases of Fluid Flow Using Modified Dynamic Boundary Condition (mDBC) in Smoothed Particle Hydrodynamics Models
Authors: Exa Heydemans, Jessica Sjah, Dwinanti Rika Marthanty
Abstract:
This paper presents numerical simulations using an open boundary algorithm with the modified dynamic boundary condition (mDBC) for weakly compressible smoothed particle hydrodynamics models from the particle-based code DualSPHysics. The algorithm is studied with a view toward problems of piping erosion in dams and dikes. A 2D model of unsteady fluid flow past a fixed cylinder is simulated, in which various Reynolds numbers (Re = 40, 60, 80, and 100) and different model resolutions are considered. A constant velocity with different viscosity values is modeled to generate the various Reynolds numbers, and different numbers of particles over the cylinder set the resolution. The interaction between the solid particles of the cylinder and the fluid particles is of central concern: the cylinder is affected by the hydrodynamic force caused by the flow of fluid particles, and the solid particles of the cylinder serve as observation points to obtain the force and pressure due to the hydrodynamic forces. As results of the simulation, which demonstrate the capability to model 2D unsteady flow at various Reynolds numbers, the pressure coefficient, drag coefficient, lift coefficient, and Strouhal number are compared to previous work from the literature.
Keywords: hydrodynamics, internal erosion, DualSPHysics, viscous fluid flow
Procedia PDF Downloads 163
2214 Assessment Power and Oscillation Damping Using the POD Controller and Proposed FOD Controller
Authors: Tohid Rahimi, Yahya Naderi, Babak Yousefi, Seyed Hossein Hoseini
Abstract:
Today's interconnected power systems are highly complex in nature, and reliability and security are among the most important requirements during the operation of an electric power system. Power and frequency oscillation damping mechanisms improve reliability. Because of the slow response of the power system stabilizer (PSS) to major faults such as a three-phase short circuit, FACTS devices, which can control the network condition very quickly, are becoming popular. However, the capability of FACTS devices during a major fault can only be assessed when nonlinear models of the FACTS devices and the power system equipment are applied. To realize this aim, a model of a multi-machine power system with a FACTS controller is developed in MATLAB/Simulink using the SimPowerSystems (SPS) blockset. Among FACTS devices, the static synchronous series compensator (SSSC), which can rapidly change its reactance characteristic from inductive to capacitive, is an effective power flow controller. The tuning of the controller parameters can be performed using different methods; here, the capability of the Genetic Algorithm (GA) motivates its use in the tuning process. In this paper, a POD controller is first used for power oscillation damping; in this configuration, however, the frequency oscillations are not properly damped. Therefore, an FOD controller tuned using the GA is applied, which damps out the frequency oscillations properly while maintaining suitable power oscillation damping.
Keywords: power oscillation damping (POD), frequency oscillation damping (FOD), static synchronous series compensator (SSSC), genetic algorithm (GA)
Procedia PDF Downloads 474
2213 The Estimation Method of Inter-Story Drift for Buildings Based on Evolutionary Learning
Authors: Kyu Jin Kim, Byung Kwan Oh, Hyo Seon Park
Abstract:
Structural health monitoring systems based on seismic responses have been employed to reduce seismic damage. The inter-story drift ratio, which is the major index for seismic capacity assessment, is employed for estimating the seismic damage of buildings. Meanwhile, seismic response analysis to estimate the structural responses of a building demands significantly high computational cost, owing to the increasing number of high-rise and large buildings. To estimate the inter-story drift ratio of buildings under earthquakes efficiently, this paper suggests an estimation method for the inter-story drift of buildings using an artificial neural network (ANN). In the method, a radial basis function neural network (RBFNN) is integrated with an optimization algorithm to optimize its variables through evolutionary learning, referred to as an evolutionary radial basis function neural network (ERBFNN). The method estimates the inter-story drift without seismic response analysis when buildings are subjected to new earthquakes. The effectiveness of the estimation method is verified through a simulation using a multi-degree-of-freedom system.
Keywords: structural health monitoring, inter-story drift ratio, artificial neural network, radial basis function neural network, genetic algorithm
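An ERBFNN-style loop can be sketched compactly: an evolutionary search tunes RBF centers and a shared width while output weights are fitted by least squares. The 1D regression data below are stand-ins for feature/drift pairs, and the (mu + lambda) scheme is an assumption about the evolutionary component, not the paper's exact algorithm.

```python
# Evolutionary RBF network sketch: evolve centers/width, fit weights by LS.
import numpy as np

rng = np.random.default_rng(7)
X = np.linspace(-3, 3, 120)
y = np.sin(2 * X) + 0.05 * rng.normal(size=X.size)

def rbf_design(X, centers, width):
    return np.exp(-((X[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

def fit_error(params):
    centers, width = params[:-1], abs(params[-1]) + 1e-3
    Phi = rbf_design(X, centers, width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # least-squares weights
    return np.mean((Phi @ w - y) ** 2)

# (mu + lambda) evolution over the vector [centers..., width].
n_rbf = 8
pop = [np.append(rng.uniform(-3, 3, n_rbf), 1.0) for _ in range(20)]
for _ in range(60):
    pop.sort(key=fit_error)
    parents = pop[:5]
    pop = parents + [p + rng.normal(0, 0.2, p.size)   # Gaussian mutation
                     for p in parents for _ in range(3)]

best = min(pop, key=fit_error)
print(f"best MSE: {fit_error(best):.4f}")
```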
Procedia PDF Downloads 326
2212 Thermo-Physical Properties and Solubility of CO2 in Piperazine Activated Aqueous Solutions of β-Alanine
Authors: Ghulam Murshid
Abstract:
Carbon dioxide is one of the major greenhouse gas (GHG) contributors, and it is an obligation of industry to reduce carbon dioxide emissions to acceptable limits. Tremendous research has been reported in the past, and the quest for suitable and economical solutions still needs to be explored in order to develop the most plausible absorber for carbon dioxide removal. Amino acids have been reported as potential solvents for carbon dioxide absorption, replacing alkanolamines, owing to their ability to resist oxidative degradation, their low volatility due to their ionic structure, and their higher surface tension. In addition, the introduction of a promoter such as piperazine to the amino acid helps to further enhance the solubility. In this work, the effect of piperazine on the thermophysical properties and CO2 solubility of aqueous β-alanine solutions was studied at various concentrations. The measured physicochemical property data were correlated as a function of temperature using the least-squares method, and the correlation parameters are reported together with their respective standard deviations. The effect of the activator piperazine on the CO2 loading performance of the selected amino acid under high-pressure conditions (1 bar to 10 bar) over the temperature range of 30 to 60 °C was also studied. The solubility of CO2 decreases with increasing temperature and increases with increasing pressure. A quadratic representation of solubility using Response Surface Methodology (RSM) shows that the most important parameter for optimizing solubility is the system pressure. The addition of the promoter increases the solubility effect of the solvent.
Keywords: amino acids, CO2, global warming, solubility
Procedia PDF Downloads 412
2211 Foreign Investment, Technological Diffusion and Competitiveness of Exports: A Case for Textile Industry in Pakistan
Authors: Syed Toqueer Akhter, Muhammad Awais
Abstract:
Pakistan is a country gifted with naturally abundant resources, resources that could lead it toward becoming a prosperous and developed country. Pakistan is the fourth largest exporter of textiles in the world, yet with the passage of time the competitiveness of these exports has been subject to decline. With many international players in the textile world, such as China, Bangladesh, India, and Sri Lanka, Pakistan must put up considerable effort to compete with these countries. This research paper determines the impact of Foreign Direct Investment (FDI) on technological diffusion and how significantly it affects the export performance of the country. It also demonstrates that with increases in FDI and technological diffusion, strong property rights, and the use of different policy tools, the export competitiveness of the country could be improved. The research has been carried out using time series data from 1995 to 2013, and the results have been estimated using competing econometric models, such as robust regression and generalized least squares, in order to assess the impact of foreign investment and technological diffusion on export competitiveness comprehensively. A distributed lag model has also been used to capture the lagged effect of the policy tool variables used by the government. The model estimates show that FDI and technological diffusion have a significant impact on the competitiveness of Pakistan's exports. It may also be inferred that the competitiveness of the textile sector requires an integrated policy framework, primarily including reductions in interest rates, the provision of subsidies, and the manufacture of value-added products.
Keywords: high technology export, robust regression, patents, technological diffusion, export competitiveness
Procedia PDF Downloads 497
2210 Artificial Neural Network Modeling and Genetic Algorithm Based Optimization of Hydraulic Design Related to Seepage under Concrete Gravity Dams on Permeable Soils
Authors: Muqdad Al-Juboori, Bithin Datta
Abstract:
Hydraulic structures such as gravity dams are classified as essential structures and play a vital role in providing strong and safe water resource management. Three major aspects must be considered to achieve an effective design of such a structure: 1) building cost, 2) safety, and 3) accurate analysis of seepage characteristics. Due to the complexity and non-linearity of the seepage process, many approximation theories have been developed; however, the application of these theories results in noticeable errors. The analytical solution, which involves the difficult conformal mapping procedure, can be applied to simple and symmetrical problems only. Therefore, the objectives of this paper are to: 1) develop a surrogate model based on numerically simulated data from the SEEP/W software to approximately simulate the seepage process related to a hydraulic structure, and 2) develop and solve a linked simulation-optimization model based on the developed surrogate model to describe the seepage occurring under a concrete gravity dam, in order to obtain an optimum and safe design at minimum cost. The results show that the linked simulation-optimization model provides an efficient and optimum design of concrete gravity dams.
Keywords: artificial neural network, concrete gravity dam, genetic algorithm, seepage analysis
Procedia PDF Downloads 222
2209 Probabilistic Approach of Dealing with Uncertainties in Distributed Constraint Optimization Problems and Situation Awareness for Multi-agent Systems
Authors: Sagir M. Yusuf, Chris Baber
Abstract:
In this paper, we describe how Bayesian inferential reasoning contributes to obtaining well-satisfied predictions for Distributed Constraint Optimization Problems (DCOPs) with uncertainties. We also demonstrate how DCOPs can be merged with multi-agent knowledge understanding and prediction (i.e., situation awareness). The DCOP functions were merged with a Bayesian Belief Network (BBN) in the form of situation, awareness, and utility nodes. We describe how the uncertainties can be represented in the BBN and how effective predictions can be made using the expectation-maximization algorithm or the conjugate gradient descent algorithm. The idea of variable prediction using Bayesian inference may reduce the number of variables in the agents' sampling domain and also allows the estimation of missing variables. Experimental results show that the BBN produces more compelling predictions on samples containing uncertainties than on perfect samples. That is, Bayesian inference can help in handling the uncertainties and dynamism of DCOPs, which is a current issue in the DCOP community. We show how Bayesian inference can be formalized with Distributed Situation Awareness (DSA) using uncertain and missing agent data. The whole framework was tested on a multi-UAV mission for forest fire search. Future work focuses on augmenting the existing architecture to deal with dynamic DCOP algorithms and multi-agent information merging.
Keywords: DCOP, multi-agent reasoning, Bayesian reasoning, swarm intelligence
Procedia PDF Downloads 117
2208 Hybrid Approach for Software Defect Prediction Using Machine Learning with Optimization Technique
Authors: C. Manjula, Lilly Florence
Abstract:
Software technology is developing rapidly, which drives the growth of various industries. Nowadays, software-based applications are widely adopted for business purposes. For any software industry, the development of reliable software is becoming a challenging task, because a faulty software module may be harmful to the growth of the industry and its business. Hence, there is a need to develop techniques for the early prediction of software defects. Due to the complexities of manual prediction, automated software defect prediction techniques have been introduced. These techniques are based on learning patterns from previous software versions and finding the defects in the current version. They have attracted researchers due to their significant impact on industrial growth by identifying bugs in software. Based on this, several studies have been carried out, but achieving desirable defect prediction performance is still a challenging task. To address this issue, here we present a machine learning based hybrid technique for software defect prediction. First, a Genetic Algorithm (GA) is presented, in which an improved fitness function is used for better optimization of the features in the datasets. Later, these features are processed through a Decision Tree (DT) classification model. Finally, an experimental study is presented, in which results from the proposed GA-DT based hybrid approach are compared with those from the DT classification technique. The results show that the proposed hybrid approach achieves better classification accuracy.
Keywords: decision tree, genetic algorithm, machine learning, software defect prediction
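The GA-plus-DT hybrid can be sketched with a binary feature mask as the chromosome and cross-validated accuracy as a simple stand-in for the paper's improved fitness function; the data below are synthetic, and the GA settings are illustrative.

```python
# GA-based feature selection feeding a decision tree classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(8)
X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                           random_state=0)

def fitness(mask):
    if not mask.any():
        return 0.0
    clf = DecisionTreeClassifier(random_state=0)
    return cross_val_score(clf, X[:, mask], y, cv=5).mean()

pop = [rng.random(20) < 0.5 for _ in range(24)]      # binary feature masks
for _ in range(15):                                  # generations
    pop.sort(key=fitness, reverse=True)
    parents = pop[:8]
    children = []
    for _ in range(16):
        a, b = rng.choice(8, size=2, replace=False)
        cut = rng.integers(1, 20)                    # one-point crossover
        child = np.concatenate([parents[a][:cut], parents[b][cut:]])
        child ^= rng.random(20) < 0.05               # bit-flip mutation
        children.append(child)
    pop = parents + children

best = max(pop, key=fitness)
print(f"selected {best.sum()} features, CV accuracy = {fitness(best):.3f}")
```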
Procedia PDF Downloads 328
2207 Optimal Design of Linear Generator to Recharge the Smartphone Battery
Authors: Jin Ho Kim, Yujeong Shin, Seong-Jin Cho, Dong-Jin Kim, U-Syn Ha
Abstract:
Due to the development of the information industry and its technologies, cellular phones must not only provide communication but also offer functions such as Internet access, e-banking, and entertainment. Such phones are called smartphones. Because of these various functions, the performance of smartphones has improved and their battery capacity has gradually increased. Recently, linear generators have been embedded in smartphones in order to recharge the smartphone's battery. In this study, optimization is performed and an array change of permanent magnets is examined in order to increase efficiency. We propose an optimal design using design of experiments (DOE) to maximize the generated induced voltage. The thickness of the poleshoe and permanent magnet (PM), the height of the poleshoe and PM, and the thickness of the coil are chosen as design variables. We generated 25 sampling points using an orthogonal array over four design variables. We performed electromagnetic finite element analysis to predict the generated induced voltage using the commercial electromagnetic analysis software ANSYS Maxwell. Then, we built an approximate model using the Kriging algorithm and derived optimal values of the design variables using an evolutionary algorithm. The commercial optimization software PIAnO (Process Integration, Automation, and Optimization) was used with these algorithms. The result of the optimization shows that the generated induced voltage is improved.
Keywords: smartphone, linear generator, design of experiments, approximate model, optimal design
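The surrogate-based step, fitting a Kriging model to the sampled designs and searching it with an evolutionary algorithm, can be sketched as follows; the "induced voltage" function, variable bounds, and the use of differential evolution (standing in for PIAnO's algorithms) are invented for illustration.

```python
# Kriging surrogate from DOE samples + evolutionary search for the optimum.
import numpy as np
from scipy.optimize import differential_evolution
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(9)

def induced_voltage(x):
    """Toy stand-in for the electromagnetic FEA response."""
    return -np.sum((x - 0.6) ** 2, axis=-1) + 1.0

# 25 sampling points over 4 design variables (e.g. from an orthogonal array).
X_doe = rng.random((25, 4))
y_doe = induced_voltage(X_doe)

kriging = GaussianProcessRegressor(kernel=RBF(length_scale=0.3),
                                   normalize_y=True).fit(X_doe, y_doe)

# Evolutionary search on the surrogate (maximize = minimize the negative).
res = differential_evolution(
    lambda x: -kriging.predict(x.reshape(1, -1))[0],
    bounds=[(0.0, 1.0)] * 4, seed=0)

print("optimal design variables:", res.x.round(3))
print("predicted induced voltage:", -res.fun)
```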
Procedia PDF Downloads 344