Search results for: computational error
2503 Arabic Character Recognition Using Regression Curves with the Expectation Maximization Algorithm
Authors: Abdullah A. AlShaher
Abstract:
In this paper, we demonstrate how regression curves can be used to recognize 2D non-rigid handwritten shapes. Each shape is represented by a set of non-overlapping, uniformly distributed landmarks. The underlying models use second-order polynomials to model the shapes within a training set. To estimate the regression models, we need to extract the coefficients that describe the variations of a set of shape classes. Hence, a least squares method is used to estimate these models. We then train these coefficients using the Expectation Maximization algorithm. Recognition is carried out by finding the least landmark displacement error with respect to the model curves. Handwritten isolated Arabic characters are used to evaluate our approach.
Keywords: character recognition, regression curves, handwritten Arabic letters, expectation maximization algorithm
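The recognition step described above reduces to fitting one curve per class and scoring a query shape by its displacement from each class curve. The following Python sketch illustrates that idea under assumed data structures (landmark arrays and a dictionary of class models); it is not the authors' implementation, and the EM refinement of the polynomial coefficients is omitted.

```python
import numpy as np

def fit_class_curve(landmark_sets):
    """Fit a second-order polynomial (least squares) to all landmarks of one shape class.

    landmark_sets: list of (N, 2) arrays of (x, y) landmark coordinates.
    Returns the coefficients [a, b, c] of y = a*x^2 + b*x + c.
    """
    pts = np.vstack(landmark_sets)
    return np.polyfit(pts[:, 0], pts[:, 1], deg=2)

def displacement_error(landmarks, coeffs):
    """Sum of squared vertical displacements of landmarks from the model curve."""
    predicted_y = np.polyval(coeffs, landmarks[:, 0])
    return float(np.sum((landmarks[:, 1] - predicted_y) ** 2))

def recognize(landmarks, class_models):
    """Assign the shape to the class whose curve gives the least displacement error."""
    errors = {label: displacement_error(landmarks, c) for label, c in class_models.items()}
    return min(errors, key=errors.get)
```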
Procedia PDF Downloads 145
2502 Projective Lag Synchronization in Drive-Response Dynamical Networks via Hybrid Feedback Control
Authors: Mohd Salmi Md Noorani, Ghada Al-Mahbashi, Sakhinah Abu Bakar
Abstract:
This paper investigates projective lag synchronization (PLS) behavior in a drive-response dynamical network (DRDN) model with identical nodes. A hybrid feedback control method is designed to achieve PLS with and without mismatch terms. The stability of the error dynamics is proven theoretically using Lyapunov stability theory. Finally, analytical results show that the states of the dynamical network with non-delayed coupling can be asymptotically synchronized onto a desired scaling factor under the designed controller. Moreover, numerical simulation results demonstrate the validity of the proposed method.
Keywords: drive-response dynamical network, projective lag synchronization, hybrid feedback control, stability theory
Procedia PDF Downloads 394
2501 Additive White Gaussian Noise Filtering from ECG by Wiener Filter and Median Filter: A Comparative Study
Authors: Hossein Javidnia, Salehe Taheri
Abstract:
The electrocardiogram (ECG) is the recording of the heart's electrical potential versus time. ECG signals are often contaminated with noise such as baseline wander and muscle noise. As these signals have been widely used in clinical studies to detect heart diseases, it is essential to filter out these noises. In this paper, we compare the performance of Wiener filtering and median filtering in removing additive white Gaussian (AWG) noise, with signal-to-noise ratios (SNR) ranging from 3 to 5 dB, applied to samples of long-term ECG recordings. The root mean square error (RMSE) and the coefficient of determination (R2) between the filtered ECG and the original ECG were used as the filter performance indicators. Experimental results show that the Wiener filter has better noise filtering performance than the median filter.
Keywords: ECG noise filtering, Wiener filtering, median filtering, Gaussian noise, filtering performance
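The comparison described in the abstract can be reproduced in outline with standard tools. The sketch below is a minimal illustration, assuming a clean reference ECG array is available; the window sizes and SNR value are placeholders, not the settings used in the paper.

```python
import numpy as np
from scipy.signal import wiener, medfilt

def add_awgn(signal, snr_db):
    """Add white Gaussian noise at the requested SNR (in dB)."""
    sig_power = np.mean(signal ** 2)
    noise_power = sig_power / (10 ** (snr_db / 10))
    return signal + np.random.normal(0.0, np.sqrt(noise_power), signal.shape)

def rmse(x, y):
    return float(np.sqrt(np.mean((x - y) ** 2)))

def r_squared(x, y):
    ss_res = np.sum((x - y) ** 2)
    ss_tot = np.sum((x - np.mean(x)) ** 2)
    return float(1.0 - ss_res / ss_tot)

def compare_filters(clean_ecg, snr_db=3, window=5):
    """Filter the same noisy trace with both filters and score each against the clean ECG."""
    noisy = add_awgn(clean_ecg, snr_db)
    results = {}
    for name, filtered in [("wiener", wiener(noisy, mysize=window)),
                           ("median", medfilt(noisy, kernel_size=window))]:
        results[name] = {"RMSE": rmse(clean_ecg, filtered),
                         "R2": r_squared(clean_ecg, filtered)}
    return results
```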
Procedia PDF Downloads 529
2500 Discrete Sliding Modes Regulator with Exponential Holder for Non-Linear Systems
Authors: G. Obregon-Pulido, G. C. Solis-Perales, J. A. Meda-Campaña
Abstract:
In this paper, we present a sliding mode controller in discrete time. The design of the controller is based on the theory of regulation for nonlinear systems. In the problem of disturbance rejection and/or output tracking, it is known that in discrete time a controller that uses the zero-order holder only guarantees tracking at the sampling instants but not between them. It is shown that, using the so-called exponential holder, it is possible to guarantee asymptotic zero output tracking error also between the sampling instants. To stabilize the closed-loop system, we introduce the sliding mode approach, relaxing the requirement of the existence of a linear stabilizing control law.
Keywords: regulation theory, sliding modes, discrete controller, ripple-free tracking
Procedia PDF Downloads 57
2499 The Interaction between Human and Environment on the Perspective of Environmental Ethics
Authors: Mella Ismelina Farma Rahayu
Abstract:
Environmental problems cannot be separated from unethical human perspectives and behaviors toward the environment. There is a fundamental error in the philosophy of people's perspective on humans and nature and their relationship with the environment, which in turn creates inappropriate behavior in relation to the environment. The aim of this study is to investigate and understand the ethics of the environment in the context of humans interacting with the environment, using the hermeneutic approach. The related theories and concepts collected from a literature review are used as data, which were analyzed by using interpretation, critical evaluation, internal coherence, comparisons, and heuristic techniques. This study results in a picture of the interaction between humans and the environment from the perspective of environmental ethics, as well as of the problems of the value of ecological justice in that interaction. We suggest that the interaction between humans and the environment needs to be based on environmental ethics, in a spirit of mutual respect between humans and the natural world.
Keywords: environment, environmental ethics, interaction, value
Procedia PDF Downloads 424
2498 Computational Modeling of Load Limits of Carbon Fibre Composite Laminates Subjected to Low-Velocity Impact Utilizing Convolution-Based Fast Fourier Data Filtering Algorithms
Authors: Farhat Imtiaz, Umar Farooq
Abstract:
In this work, we developed a computational model to predict ply-level failure in impacted composite laminates. Data obtained from physical testing of flat- and round-nose impacts on 8-, 16-, and 24-ply laminates were considered. Routine inspections of the tested laminates were carried out to approximate the ply-by-ply damage inflicted. Plots of load–time, load–deflection, and energy–time history were drawn to approximate the inflicted damage. Unwanted data logged during impact testing, due to restrictions of the testing and logging systems, were also filtered. Conventional filters (built-in, statistical, and numerical) reliably predicted load thresholds for relatively thin laminates such as eight- and sixteen-ply panels. However, relatively thick laminates such as twenty-four-ply laminates impacted by a flat nose generated clipped data, which can only be de-noised using oscillatory algorithms. The literature search reveals that modern oscillatory data filtering and extrapolation algorithms have scarcely been utilized. This investigation reports the application of filtering and extrapolation of the clipped data utilising a fast Fourier convolution algorithm to predict load thresholds. Some of the results were related to the impact-induced damage areas identified with ultrasonic C-scans and found to be in acceptable agreement. Based on these consistent findings, applying modern data filtering and extrapolation algorithms to data logged by the existing machines has efficiently enhanced data interpretation without resorting to extra resources. The algorithms could be useful for impact-induced damage approximations in similar cases.
Keywords: fibre reinforced laminates, fast Fourier algorithms, mechanical testing, data filtering and extrapolation
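As a hedged illustration of the general idea of FFT-based convolution filtering of a clipped load-time trace, the following sketch smooths a signal with a Gaussian kernel via fast Fourier convolution and reads off the peak of the smoothed trace as an approximate load threshold. The kernel shape and width are assumptions; the authors' specific de-noising and extrapolation algorithm is not reproduced here.

```python
import numpy as np
from scipy.signal import fftconvolve

def fft_smooth(load_signal, kernel_width=15.0):
    """Smooth a load-time history by FFT-based convolution with a Gaussian kernel."""
    half_span = int(3 * kernel_width)
    t = np.arange(-half_span, half_span + 1)
    kernel = np.exp(-0.5 * (t / kernel_width) ** 2)
    kernel /= kernel.sum()                          # preserve the signal's scale
    return fftconvolve(load_signal, kernel, mode="same")

def load_threshold(load_signal, kernel_width=15.0):
    """Take the peak of the smoothed trace as an approximate load threshold."""
    return float(np.max(fft_smooth(load_signal, kernel_width)))
```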
Procedia PDF Downloads 135
2497 Survival and Hazard Maximum Likelihood Estimator with Covariate Based on Right Censored Data of Weibull Distribution
Authors: Al Omari Mohammed Ahmed
Abstract:
This paper focuses on the maximum likelihood estimator with covariates. Covariates are incorporated into the Weibull model. Under this regression model, the covariate parameters, shape parameter, survival function, and hazard rate of the Weibull regression distribution with right-censored data are estimated by maximum likelihood. The mean square error (MSE) and absolute bias are used to compare the performance of the Weibull regression distribution. For the simulation comparison, the study used various sample sizes and several specific values of the Weibull shape parameter.
Keywords: Weibull regression distribution, maximum likelihood estimator, survival function, hazard rate, right censoring
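A minimal sketch of the estimation problem is given below: the Weibull log-likelihood with right censoring, with the scale parameter tied to covariates through a log-link and maximized numerically. The parameterization and optimizer are assumptions made for illustration, not the exact formulation used in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, t, x, delta):
    """Negative log-likelihood of a Weibull regression with right censoring.

    t: observed times (> 0), x: covariate matrix (n, p), delta: 1 = event, 0 = censored.
    params = [log_shape, beta_1, ..., beta_p]; per-subject scale_i = exp(x_i @ beta).
    """
    k = np.exp(params[0])                  # shape parameter > 0
    scale = np.exp(x @ params[1:])         # scale linked to covariates
    z = (t / scale) ** k
    log_f = np.log(k) - np.log(scale) + (k - 1) * (np.log(t) - np.log(scale)) - z
    log_s = -z                             # log survival function
    return -np.sum(delta * log_f + (1 - delta) * log_s)

def fit_weibull_regression(t, x, delta):
    """Maximize the censored likelihood from a neutral starting point."""
    p0 = np.zeros(1 + x.shape[1])
    res = minimize(neg_log_likelihood, p0, args=(t, x, delta), method="Nelder-Mead")
    return res.x
```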
Procedia PDF Downloads 441
2496 Contextual Distribution for Textual Alignment
Authors: Yuri Bizzoni, Marianne Reboul
Abstract:
Our program compares French and Italian translations of Homer's Odyssey, from the 16th to the 20th century. We show how distributional semantics systems can be used both to improve alignment between different French translations and to align the Greek text with a French translation. Although we focus on French examples, the techniques we present are completely language independent.
Keywords: classical receptions, computational linguistics, distributional semantics, Homeric poems, machine translation, translation studies, text alignment
Procedia PDF Downloads 435
2495 Robust Speed Sensorless Control to Estimated Error for PMa-SynRM
Authors: Kyoung-Jin Joo, In-Gun Kim, Hyun-Seok Hong, Dong-Woo Kang, Ju Lee
Abstract:
Recently, the permanent magnet-assisted synchronous reluctance motor (PMa-SynRM), which can be substituted for the induction motor, has been studied because of the need to develop premium high-efficiency motors for the minimum energy performance standard (MEPS). The PMa-SynRM requires speed and position information for motor speed and torque control. However, applying sensors has many problems, such as a shortage of sensor mounting space and additional cost. Therefore, in this paper, speed-sensorless control based on a model reference adaptive system (MRAS) is introduced to eliminate the sensor. The sensorless method is constructed with a reference model as the standard and an adaptive model as the state observer. The proposed algorithm is verified by simulation.
Keywords: PMa-SynRM, sensorless control, robust estimation, MRAS method
Procedia PDF Downloads 405
2494 Neural Network Based Path Loss Prediction for Global System for Mobile Communication in an Urban Environment
Authors: Danladi Ali
Abstract:
In this paper, we measured GSM signal strength in the city of Dnepropetrovsk in order to predict the path loss in the study area using nonlinear autoregressive neural network prediction, and we also used neural network clustering to determine the average GSM signal strength received in the study area. The nonlinear autoregressive neural network predicted that the GSM signal is attenuated with a mean square error (MSE) of 2.6748 dB; this attenuation value is used to modify the COST 231 Hata and the Okumura-Hata models. The neural network clustering revealed that signal strengths of -75 dB to -95 dB are received most frequently. This means that the signal strength received in the study area is mostly weak.
Keywords: one-dimensional multilevel wavelets, path loss, GSM signal strength, propagation, urban environment and model
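For reference, a sketch of the COST-231 Hata model with an additive correction term is shown below. The formula follows the commonly cited COST-231 Hata expression for urban macro-cells, and treating the measurement-derived attenuation as a simple additive offset is an assumption made here for illustration; the paper's exact modification is not specified in the abstract.

```python
import numpy as np

def cost231_hata(d_km, f_mhz=1800.0, h_base=30.0, h_mobile=1.5, metro_correction=0.0):
    """COST-231 Hata median path loss (dB) for an urban macro-cell (f in 1500-2000 MHz)."""
    a_hm = (1.1 * np.log10(f_mhz) - 0.7) * h_mobile - (1.56 * np.log10(f_mhz) - 0.8)
    return (46.3 + 33.9 * np.log10(f_mhz) - 13.82 * np.log10(h_base) - a_hm
            + (44.9 - 6.55 * np.log10(h_base)) * np.log10(d_km) + metro_correction)

def modified_cost231_hata(d_km, measured_attenuation_db, **kwargs):
    """Shift the empirical model by an attenuation value learned from measurements."""
    return cost231_hata(d_km, **kwargs) + measured_attenuation_db
```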
Procedia PDF Downloads 382
2493 Required SNR for PPM in Downlink Gamma-Gamma Turbulence Channel
Authors: Selami Şahin
Abstract:
In this paper, the SNR required to achieve a sufficient bit error rate (BER) with pulse position modulation (PPM) is investigated as a function of the zenith angle between the satellite and the ground station. To obtain explicit results, all parameters such as link distance, Rytov variance, scintillation index, wavelength, receiver aperture diameter, Fried's parameter, and zenith angle have been taken into account. Results indicate that, once some parameters are fixed by the constraints of the system, the SNR values required to achieve the desired BER span a wide range as the zenith angle changes from small to large values. Therefore, in order not to use a high link margin, either the SNR should be adjusted according to the zenith angle or the link should be established within predetermined intervals of the zenith angle.
Keywords: free-space optical communication, optical downlink channel, atmospheric turbulence, wireless optical communication
Procedia PDF Downloads 402
2492 Performance and Limitations of Likelihood Based Information Criteria and Leave-One-Out Cross-Validation Approximation Methods
Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer
Abstract:
Model assessment, in the Bayesian context, involves evaluation of the goodness-of-fit and the comparison of several alternative candidate models for predictive accuracy and improvements. In posterior predictive checks, the data simulated under the fitted model are compared with the actual data. Predictive model accuracy is estimated using information criteria such as the Akaike information criterion (AIC), the Bayesian information criterion (BIC), the deviance information criterion (DIC), and the Watanabe-Akaike information criterion (WAIC). The goal of an information criterion is to obtain an unbiased measure of out-of-sample prediction error. Since posterior checks use the data twice, once for model estimation and once for testing, a bias correction which penalises model complexity is incorporated in these criteria. Cross-validation (CV) is another method used for examining out-of-sample prediction accuracy. Leave-one-out cross-validation (LOO-CV) is the most computationally expensive CV variant, as it fits as many models as there are observations. Importance sampling (IS), truncated importance sampling (TIS), and Pareto-smoothed importance sampling (PSIS) are generally used as approximations to exact LOO-CV; they utilise the existing MCMC results and avoid expensive recomputation. The reciprocals of the predictive densities calculated over posterior draws for each observation are treated as the raw importance weights. These are in turn used to calculate the approximate LOO-CV of the observation as a weighted average of posterior densities. In IS-LOO, the raw weights are used directly. In contrast, the larger weights are replaced by their modified truncated weights in calculating TIS-LOO and PSIS-LOO. Although information criteria and LOO-CV are unable to reflect the goodness-of-fit in an absolute sense, the differences can be used to measure the relative performance of the models of interest. However, the use of these measures is only valid under specific circumstances. This study has developed 11 models using normal, log-normal, gamma, and Student's t distributions to improve PCR stutter prediction with forensic data. These models comprise four with profile-wide variances, four with locus-specific variances, and three two-component mixture models. The mean stutter ratio in each model is modeled as a locus-specific simple linear regression against a feature of the alleles under study known as the longest uninterrupted sequence (LUS). The use of AIC, BIC, DIC, and WAIC in model comparison has some practical limitations. Even though IS-LOO, TIS-LOO, and PSIS-LOO are considered to be approximations of exact LOO-CV, the study observed some drastic deviations in the results. However, there are some interesting relationships among the logarithms of pointwise predictive densities (lppd) calculated under WAIC and the LOO approximation methods. The estimated overall lppd is a relative measure that reflects the overall goodness-of-fit of the model. Parallel log-likelihood profiles for the models, conditional on equal posterior variances in lppds, were observed. This study illustrates the limitations of the information criteria in practical model comparison problems. In addition, the relationships among the LOO-CV approximation methods and WAIC, together with their limitations, are discussed.
Finally, useful recommendations that may help in practical model comparisons with these methods are provided.
Keywords: cross-validation, importance sampling, information criteria, predictive accuracy
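The quantities discussed above (the lppd, the WAIC penalty, and the plain importance-sampling LOO approximation) can all be computed from a matrix of pointwise log-likelihoods over posterior draws. The sketch below illustrates these definitions only; the truncation and Pareto smoothing used by TIS-LOO and PSIS-LOO are not shown, and the function is an illustrative aid rather than the authors' code.

```python
import numpy as np
from scipy.special import logsumexp

def lppd_waic_isloo(log_lik):
    """Compute lppd, WAIC, and the plain IS-LOO approximation to exact LOO-CV.

    log_lik: (S, n) matrix of pointwise log-likelihoods over S posterior draws.
    """
    S, n = log_lik.shape
    # log pointwise predictive density: log of the posterior-mean density per observation
    lppd_i = logsumexp(log_lik, axis=0) - np.log(S)
    # WAIC penalty: posterior variance of the pointwise log-likelihoods
    p_waic = np.var(log_lik, axis=0, ddof=1)
    waic = -2.0 * np.sum(lppd_i - p_waic)
    # IS-LOO: raw importance weights are the reciprocal predictive densities,
    # which reduces the weighted average of densities to a harmonic-mean form
    loo_i = np.log(S) - logsumexp(-log_lik, axis=0)
    return {"lppd": float(lppd_i.sum()), "WAIC": float(waic), "elpd_isloo": float(loo_i.sum())}
```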
Procedia PDF Downloads 393
2491 Detecting Memory-Related Gene Modules in sc/snRNA-seq Data by Deep-Learning
Authors: Yong Chen
Abstract:
Understanding the detailed molecular mechanisms of memory formation in engram cells is one of the most fundamental questions in neuroscience. Recent single-cell RNA-seq (scRNA-seq) and single-nucleus RNA-seq (snRNA-seq) techniques have allowed us to explore the sparsely activated engram ensembles, enabling access to the molecular mechanisms that underlie experience-dependent memory formation and consolidation. However, the absence of specific and powerful computational methods to detect memory-related genes (modules) and their regulatory relationships in sc/snRNA-seq datasets has strictly limited the analysis of the underlying mechanisms and memory coding principles in mammalian brains. Here, we present a deep-learning method named SCENTBOX to detect memory-related gene modules and causal regulatory relationships among them from sc/snRNA-seq datasets. SCENTBOX first constructs a co-differential expression gene network (CEGN) from case versus control sc/snRNA-seq datasets. It then detects the highly correlated modules of differentially expressed genes (DEGs) in the CEGN. Deep network embedding and attention-based convolutional neural network strategies are employed to precisely detect regulatory relationships among DEG genes in a module. We applied the method to scRNA-seq datasets of TRAP;Ai14 mouse neurons with fear memory and detected not only known memory-related genes, but also the modules and potential causal regulations. Our results provided novel regulations within an interesting module, including Arc, Bdnf, Creb, Dusp1, Rgs4, and Btg2. Overall, our method provides a general computational tool for processing sc/snRNA-seq data from case versus control studies and for a systematic investigation of fear-memory-related gene modules.
Keywords: sc/snRNA-seq, memory formation, deep learning, gene module, causal inference
Procedia PDF Downloads 120
2490 Surviral: An Agent-Based Simulation Framework for Sars-Cov-2 Outcome Prediction
Authors: Sabrina Neururer, Marco Schweitzer, Werner Hackl, Bernhard Tilg, Patrick Raudaschl, Andreas Huber, Bernhard Pfeifer
Abstract:
History and the current outbreak of Covid-19 have shown the deadly potential of infectious diseases. However, infectious diseases also have a serious impact on areas other than health and healthcare, such as the economy or social life, and these areas are strongly codependent. Therefore, disease control measures, such as social distancing, quarantines, curfews, or lockdowns, have to be adopted in a very considerate manner. Infectious disease modeling can support policy- and decision-makers with adequate information regarding the dynamics of the pandemic and therefore assist in planning and enforcing appropriate measures that will prevent the healthcare system from collapsing. In this work, an agent-based simulation package named "survival" for simulating infectious diseases is presented, with a special focus on SARS-Cov-2. The presented simulation package was used in Austria to model the SARS-Cov-2 outbreak from the beginning of 2020. Agent-based modeling is a relatively recent modeling approach. Since our world is getting more and more complex, the complexity of the underlying systems is also increasing, and the development of tools and frameworks together with increasing computational power advance the application of agent-based models. For parametrizing the presented model, different data sources, such as known infections, wastewater virus load, blood donor antibodies, circulating virus variants, and the used hospitalization capacity, as well as the availability of medical materials like ventilators, were integrated with a database system and used. The simulation results of the model were used for predicting the dynamics and the possible outcomes and were used by the health authorities to decide on the measures to be taken in order to control the pandemic. The survival package was implemented in the programming language Java, and the analytics were performed with R Studio. During the first run in March 2020, the simulation showed that without measures other than individual personal behavior and appropriate medication, the death toll would have been about 27 million people worldwide within the first year. The model predicted the hospitalization rates (standard and intensive care) for Tyrol and South Tyrol with an average error of about 1.5%. The rates were calculated to provide 10-day forecasts. The state government and the hospitals were provided with the 10-day models to support their decision-making. This ensured that standard care was maintained for as long as possible without restrictions. Furthermore, various measures were estimated and thereafter enforced. Among other things, communities were quarantined based on the calculations while, in accordance with the calculations, the curfews for the entire population were reduced. With this framework, which is used in the national crisis team of the Austrian province of Tyrol, a very accurate model could be created at the federal state level as well as at the district and municipal level, which was able to provide decision-makers with a solid information basis. This framework can be transferred to various infectious diseases and thus can be used as a basis for future monitoring.
Keywords: modelling, simulation, agent-based, SARS-Cov-2, COVID-19
Procedia PDF Downloads 175
2489 Temporally Coherent 3D Animation Reconstruction from RGB-D Video Data
Authors: Salam Khalifa, Naveed Ahmed
Abstract:
We present a new method to reconstruct a temporally coherent 3D animation from single- or multi-view RGB-D video data using unbiased feature point sampling. Given RGB-D video data in the form of a 3D point cloud sequence, our method first extracts feature points using both color and depth information. In the subsequent steps, these feature points are used to match two 3D point clouds in consecutive frames, independent of their resolution. Our new motion-vector-based dynamic alignment method then fully reconstructs a spatio-temporally coherent 3D animation. We perform extensive quantitative validation using novel error functions to analyze the results. We show that, despite the limiting factors of temporal and spatial noise associated with RGB-D data, it is possible to extract temporal coherence to faithfully reconstruct a temporally coherent 3D animation from RGB-D video data.
Keywords: 3D video, 3D animation, RGB-D video, temporally coherent 3D animation
Procedia PDF Downloads 374
2488 Ramp Rate and Constriction Factor Based Dual Objective Economic Load Dispatch Using Particle Swarm Optimization
Authors: Himanshu Shekhar Maharana, S. K. Dash
Abstract:
Economic load dispatch (ELD) is a vital optimization process in electric power systems for allocating generation amongst various units to compute the cost of generation and the cost of emission, involving global warming gases like sulphur dioxide, nitrous oxide, and carbon monoxide. In this dissertation, we emphasize ramp rate and constriction factor based particle swarm optimization (RRCPSO) for analyzing various performance objectives, namely the cost of generation, the cost of emission, and a dual objective function involving both these objectives, through simulated experimental results. A 6-unit, 30-bus IEEE test case system has been utilized for simulating the results, involving improved weight factor and advanced ramp rate limit constraints for optimizing the total cost of generation and emission. This method increases the tendency of particles to venture into the solution space, improving their convergence rates. Earlier works using dispersed PSO (DPSO) and constriction factor based PSO (CPSO) give rise to comparatively higher computational times and poorer optimal solutions than the present work. This paper deals with a well-defined ramp rate and constriction factor based PSO to compute the various objectives, namely cost, emission, and the total objective, and compares the results with the DPSO and weight improved PSO (WIPSO) techniques, illustrating lower computational time and better optimal solutions.
Keywords: economic load dispatch (ELD), constriction factor based particle swarm optimization (CPSO), dispersed particle swarm optimization (DPSO), weight improved particle swarm optimization (WIPSO), ramp rate and constriction factor based particle swarm optimization (RRCPSO)
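A sketch of the core update is given below: the constriction-factor PSO velocity rule with generator outputs clipped to ramp-rate-limited bounds. The parameter values and array layout are assumptions made for illustration; the cost and emission objective functions of the dissertation are not reproduced.

```python
import numpy as np

def constriction_factor(c1=2.05, c2=2.05):
    """Clerc-style constriction factor; requires c1 + c2 > 4."""
    phi = c1 + c2
    return 2.0 / abs(2.0 - phi - np.sqrt(phi ** 2 - 4.0 * phi))

def pso_step(pos, vel, pbest, gbest, p_prev, ramp_up, ramp_down, pmin, pmax,
             c1=2.05, c2=2.05):
    """One constriction-factor PSO update with ramp-rate-limited generator outputs.

    pos, vel: (n_particles, n_units) positions (MW) and velocities.
    p_prev:   previous-interval generation, used to tighten the effective limits.
    """
    chi = constriction_factor(c1, c2)
    r1, r2 = np.random.rand(*pos.shape), np.random.rand(*pos.shape)
    vel = chi * (vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos))
    pos = pos + vel
    # ramp-rate constraints shrink the feasible band around the previous dispatch
    lo = np.maximum(pmin, p_prev - ramp_down)
    hi = np.minimum(pmax, p_prev + ramp_up)
    return np.clip(pos, lo, hi), vel
```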
Procedia PDF Downloads 382
2487 Fires in Historic Buildings: Assessment of Evacuation of People by Computational Simulation
Authors: Ivana R. Moser, Joao C. Souza
Abstract:
Building fires are random phenomena that can be extremely violent, and the safe evacuation of people is the most reliable tactic for saving lives. The correct evacuation of buildings, and of other spaces occupied by people, means leaving the place in a short time and by the appropriate route. It depends on the perception of spaces by the individual, the architectural layout, and the presence of appropriate routing systems. As historical buildings were constructed in other times, when the current safety requirements were generally not yet in place, it is necessary to adapt these spaces to make them safe. Computer models of evacuation simulation are widely used tools for assessing the safety of people in a building or gathering site, and when they are associated with the analysis of human behaviour, the results of emergency evacuation become more correct and conclusive. The objective of this research is the performance evaluation of buildings of historical interest, regarding the safe evacuation of people, through computer simulation using the PTV Viswalk software. The building studied is the Colégio Catarinense, a centennial building located in the city of Florianópolis, Santa Catarina, Brazil. The software uses variables of human behaviour, such as avoiding collision with other pedestrians and avoiding obstacles. Scenarios were run on the three-dimensional models, and the contribution to safety in risk situations was verified as an alternative measure, especially given the impossibility of applying the measures foreseen by the current fire safety codes in Brazil. The simulations verified the evacuation time in normal and emergency situations, and also indicated the bottlenecks and critical points of the studied buildings, in order to seek solutions to prevent and correct these undesirable events. It is understood that adopting an advanced computational performance-based approach promotes greater knowledge of the building and of how people behave in these specific environments in emergency situations.
Keywords: computer simulation, escape routes, fire safety, historic buildings, human behavior
Procedia PDF Downloads 188
2486 An Evolutionary Approach for Automated Optimization and Design of Vivaldi Antennas
Authors: Sahithi Yarlagadda
Abstract:
The design of an antenna is constrained by mathematical and geometrical parameters. Though there are diverse antenna structures with a wide range of feeds, there are many geometries to be tried which cannot be fitted into predefined computational methods. Antenna design and optimization qualify for an evolutionary algorithmic approach, since the antenna parameter weights depend directly on geometric characteristics. The evolutionary algorithm can be explained simply for a given quality function to be maximized: we can randomly create a set of candidate solutions, elements of the function's domain, and apply the quality function as an abstract fitness measure. Based on this fitness, some of the better candidates are chosen to seed the next generation by applying recombination and permutation to them. In the conventional approach, the quality function is unaltered across iterations, but the antenna parameters and geometries are too varied to fit into a single function. So, the weight coefficients are obtained for all possible antenna electrical parameters and geometries; the variation is learnt by mining the data obtained for an optimized algorithm. The weight and covariant coefficients of the corresponding parameters are logged for learning and future use as datasets. This paper drafts an approach to obtain the requirements to study and methodize the evolutionary approach to automated antenna design, using our past work on the Vivaldi antenna as a test candidate. Antenna parameters like gain and directivity are directly governed by geometries, materials, and dimensions. The design equations are noted and evaluated for all possible conditions to obtain the maxima and minima for a given frequency band. The boundary conditions are thus obtained prior to implementation, easing the optimization. The implementation mainly aimed to study the practical computational, processing, and design complexities that occur during simulations. HFSS is chosen for simulations and results. MATLAB is used to generate the computations, combinations, and data logging. MATLAB is also used to apply machine learning algorithms and to plot the data to design the algorithm. The number of combinations is too large to test manually, so the HFSS API is used to call HFSS functions from MATLAB itself. The MATLAB parallel processing toolbox is used to run multiple simulations in parallel. The aim is to develop an add-in to antenna design software like HFSS or CST, or a standalone application, to optimize pre-identified common parameters of a wide range of available antennas. In this paper, we have used MATLAB to calculate Vivaldi antenna parameters like slot line characteristic impedance, stripline impedance, slot line width, flare aperture size, and dielectric; K-means and the Hamming window are applied to obtain the best test parameters. The HFSS API is used to calculate the radiation, bandwidth, directivity, and efficiency, and the data is logged for applying the evolutionary genetic algorithm in MATLAB. The paper demonstrates the computational weights and machine learning approach for automated antenna optimization for the Vivaldi antenna.
Keywords: machine learning, Vivaldi, evolutionary algorithm, genetic algorithm
Procedia PDF Downloads 111
2485 Improving Efficiencies of Planting Configurations on Draft Environment of Town Square: The Case Study of Taichung City Hall in Taichung, Taiwan
Authors: Yu-Wen Huang, Yi-Cheng Chiang
Abstract:
With urban development, many buildings are built around the city, and these buildings affect the urban wind environment. The wind acceleration caused by buildings often makes pedestrians uncomfortable and can even cause accidents and dangers. Factors influencing pedestrian-level wind include the atmospheric boundary layer, wind direction, wind velocity, planting, building volume, the geometric shape of the buildings, and adjacent interference effects. Planting has many functions, including scraping and slowing the urban heat island effect, creating a good visual landscape, increasing urban green area, and improving pedestrian-level wind. On the other hand, the urban square is an important spatial element supporting the entrance to buildings, city landmarks, and activity gatherings. The appropriateness of the urban square environment usually determines its success. This research focuses on the effect of tree planting on the wind environment of an urban square, taking the square belt of Taichung City Hall as the case study. Taichung City Hall is a cuboid building with a large mass opening. The square belt connects the front square, the central opening, and the back square. There is often a wind draft on the square belt, and this phenomenon decreases the activities on the squares. This research applies tree planting to improve the wind environment and evaluates the effects of two types of planting configuration. Computational fluid dynamics (CFD) simulation analysis and extensive field measurements are applied to explore the improvement efficiency of the planting configurations on the wind environment. This research compares the efficiencies of different kinds of planting configuration, including the clustering array configuration and the dispersed configuration, and evaluates the efficiencies by the SET*.
Keywords: micro-climate, wind environment, planting configuration, comfortableness, computational fluid dynamics (CFD)
Procedia PDF Downloads 311
2484 Improving the Penalty-free Multi-objective Evolutionary Design Optimization of Water Distribution Systems
Authors: Emily Kambalame
Abstract:
Water distribution networks necessitate large investments for construction, prompting researchers to seek cost reductions and efficient design solutions. Optimization techniques are employed in this regard to address these challenges. In this context, the penalty-free multi-objective evolutionary algorithm (PFMOEA) coupled with pressure-dependent analysis (PDA) was utilized to develop a multi-objective evolutionary search for the optimization of water distribution systems (WDSs). The aim of this research was to find out whether the computational efficiency of the PFMOEA for WDS optimization could be enhanced. This was done by applying real coding representation and retaining different percentages of feasible and infeasible solutions close to the Pareto front in the elitism step of the optimization. Two benchmark network problems, namely the Two-looped and Hanoi networks, were utilized in the study. A comparative analysis was then conducted to assess the performance of the real-coded PFMOEA in relation to other approaches described in the literature. The algorithm demonstrated competitive performance for the two benchmark networks by implementing real coding. The real-coded PFMOEA achieved the novel best-known solutions ($419,000 and $6.081 million) and a zero pressure deficit for the two networks, requiring fewer function evaluations than the binary-coded PFMOEA. In previous PFMOEA studies, elitism applied a default retention of 30% of the least-cost feasible solutions while excluding all infeasible solutions. It was found in this study that by replacing 10% and 15% of the feasible solutions with infeasible ones that are close to the Pareto front with minimal pressure deficit violations, the computational efficiency of the PFMOEA was significantly enhanced. The configuration of 15% feasible and 15% infeasible solutions outperformed other retention allocations by identifying the optimal solution with the fewest function evaluations.
Keywords: design optimization, multi-objective evolutionary, penalty-free, water distribution systems
Procedia PDF Downloads 63
2483 Towards Automated Remanufacturing of Marine and Offshore Engineering Components
Authors: Aprilia, Wei Liang Keith Nguyen, Shu Beng Tor, Gerald Gim Lee Seet, Chee Kai Chua
Abstract:
Automated remanufacturing processes are of great interest in today's marine and offshore industry. Most current remanufacturing processes are carried out manually, and hence they are error-prone, labour-intensive, and costly. In this paper, a conceptual framework for automated remanufacturing is presented. This framework involves the integration of 3D non-contact digitization, adaptive surface reconstruction, additive manufacturing, and machining operations. Each operation is operated and interconnected automatically as one system. The feasibility of adaptive surface reconstruction on marine and offshore engineering components is also discussed. Several engineering components were evaluated, and the results showed that the proposed system is feasible. Conclusions are drawn and further research work is discussed.
Keywords: adaptive surface reconstruction, automated remanufacturing, automatic repair, reverse engineering
Procedia PDF Downloads 326
2482 The Systems Biology Verification Endeavor: Harness the Power of the Crowd to Address Computational and Biological Challenges
Authors: Stephanie Boue, Nicolas Sierro, Julia Hoeng, Manuel C. Peitsch
Abstract:
Systems biology relies on large numbers of data points and sophisticated methods to extract biologically meaningful signal and mechanistic understanding. For example, analyses of transcriptomics and proteomics data make it possible to gain insights into the molecular differences in tissues exposed to diverse stimuli or test items. Whereas the interpretation of endpoints specifically measuring a mechanism is relatively straightforward, the interpretation of big data is more complex and would benefit from comparing results obtained with diverse analysis methods. The sbv IMPROVER project was created to implement solutions to verify systems biology data, methods, and conclusions. Computational challenges leveraging the wisdom of the crowd allow benchmarking of methods for specific tasks, such as signature extraction and/or sample classification. Four challenges have already been successfully conducted and confirmed that the aggregation of predictions often leads to better results than individual predictions and that methods perform best in specific contexts. Whenever the scientific question of interest does not have a gold standard, but may greatly benefit from the scientific community coming together to discuss approaches and results, datathons are set up. The inaugural sbv IMPROVER datathon was held in Singapore on 23-24 September 2016. It allowed bioinformaticians and data scientists to consolidate their ideas and work on the most promising methods as teams, after having initially reflected on the problem on their own. The outcome is a set of visualization and analysis methods that will be shared with the scientific community via the Garuda platform, an open connectivity platform that provides a framework to navigate through different applications, databases, and services in biology and medicine. We will present the results we obtained when analyzing data with our network-based method, and introduce a datathon that will take place in Japan to encourage the analysis of the same datasets with other methods to allow for the consolidation of conclusions.
Keywords: big data interpretation, datathon, systems toxicology, verification
Procedia PDF Downloads 278
2481 Modelling of Heat Transfer during Controlled Cooling of Thermo-Mechanically Treated Rebars Using Computational Fluid Dynamics Approach
Authors: Rohit Agarwal, Mrityunjay K. Singh, Soma Ghosh, Ramesh Shankar, Biswajit Ghosh, Vinay V. Mahashabde
Abstract:
Thermo-mechanical treatment (TMT) of rebars is a critical process for imparting sufficient strength and ductility to rebar. TMT rebars are produced by the Tempcore process, which involves an 'in-line' heat treatment in which the hot rolled bar (at a temperature of around 1080°C) is passed through water boxes where it is quenched under high-pressure water jets (at a temperature of around 25°C). The quenching rate dictates the composite structure, consisting of four non-homogeneously distributed phases of rebar microstructure: pearlite-ferrite, bainite, and tempered martensite (from core to rim). The ferrite and pearlite phases present at the core give the rebar ductility, while the martensitic rim provides the appropriate strength. The TMT process is difficult to model, as it brings together a multitude of complex physics such as heat transfer, highly turbulent fluid flow, and multicomponent, multiphase flow within the control volume. Additionally, the presence of the film boiling regime (above the Leidenfrost point) due to steam formation adds complexity to the domain. A coupled heat transfer and fluid flow model based on computational fluid dynamics (CFD) has been developed at the product technology division of Tata Steel, India, which efficiently predicts the temperature profile and the percentage martensite rim thickness of the rebar during the quenching process. The model has been validated with 16 mm rolling at the New Bar Mill (NBM) plant of Tata Steel Limited, India. Furthermore, based on the scenario analyses, an optimal configuration of nozzles was found, which helped in a subsequent increase in rolling speed.
Keywords: boiling, critical heat flux, nozzles, thermo-mechanical treatment
Procedia PDF Downloads 217
2480 A New Framework for ECG Signal Modeling and Compression Based on Compressed Sensing Theory
Authors: Siavash Eftekharifar, Tohid Yousefi Rezaii, Mahdi Shamsi
Abstract:
The purpose of this paper is to exploit the compressed sensing (CS) method in order to model and compress electrocardiogram (ECG) signals at a high compression ratio. In order to obtain a sparse representation of the ECG signals, a suitable basis matrix with Gaussian kernels, which are shown to fit the ECG signals nicely, is first constructed. Then the sparse model is extracted by applying an optimization technique. Finally, CS theory is utilized to obtain a compressed version of the sparse signal. Reconstruction of the ECG signal from the compressed version is also carried out to prove the reliability of the algorithm. At this stage, a greedy optimization technique is used to reconstruct the ECG signal, and the mean square error (MSE) is calculated to evaluate the precision of the proposed compression method.
Keywords: compressed sensing, ECG compression, Gaussian kernel, sparse representation
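The pipeline sketched in the abstract (Gaussian-kernel basis, random sensing matrix, greedy recovery) can be illustrated as follows. The dictionary size, kernel width, and the use of orthogonal matching pursuit as the greedy solver are assumptions made here; they stand in for, rather than reproduce, the paper's exact construction.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def gaussian_dictionary(n, n_atoms=64, width=8.0):
    """Basis matrix whose columns are Gaussian kernels spread along the signal length."""
    t = np.arange(n)[:, None]
    centers = np.linspace(0, n - 1, n_atoms)[None, :]
    D = np.exp(-0.5 * ((t - centers) / width) ** 2)
    return D / np.linalg.norm(D, axis=0)            # unit-norm atoms

def compress_and_recover(ecg, m=128, sparsity=20):
    """Compress an ECG segment with a random sensing matrix and recover it greedily."""
    n = ecg.size
    D = gaussian_dictionary(n)
    Phi = np.random.randn(m, n) / np.sqrt(m)        # random sensing matrix (m << n)
    y = Phi @ ecg                                   # compressed measurements
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=sparsity, fit_intercept=False)
    omp.fit(Phi @ D, y)                             # greedy sparse recovery in the kernel basis
    reconstruction = D @ omp.coef_
    mse = float(np.mean((ecg - reconstruction) ** 2))
    return reconstruction, mse
```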
Procedia PDF Downloads 463
2479 Context-Aware Recommender System Using Collaborative Filtering, Content-Based Algorithm and Fuzzy Rules
Authors: Xochilt Ramirez-Garcia, Mario Garcia-Valdez
Abstract:
Contextual recommendations are implemented in recommender systems to improve user satisfaction: the recommender system makes accurate and suitable recommendations for a particular situation, achieving personalized recommendations. The context provides information relevant to the recommender system and is used as a filter for the selection of items relevant to the user. This paper presents a context-aware recommender system which uses techniques based on collaborative filtering and content-based recommendation, as well as fuzzy rules, to recommend items within the context. The dataset used to test the system is Trip Advisor. The accuracy of the recommendations was evaluated with the mean absolute error.
Keywords: algorithms, collaborative filtering, intelligent systems, fuzzy logic, recommender systems
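A minimal illustration of the evaluation metric and of contextual pre-filtering is sketched below; the data layout (a list of rating records with a context field) and the helper names are assumptions, and the fuzzy-rule component of the proposed system is not shown.

```python
import numpy as np

def mean_absolute_error(actual, predicted):
    """MAE between observed and predicted ratings."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return float(np.mean(np.abs(actual - predicted)))

def contextual_prefilter(ratings, context_field, context_value):
    """Keep only the ratings collected in the target context before running the
    collaborative-filtering / content-based step."""
    return [r for r in ratings if r.get(context_field) == context_value]

# Hypothetical rating records with a 'season' context attribute
ratings = [
    {"user": "u1", "item": "hotel_a", "rating": 4.0, "season": "summer"},
    {"user": "u2", "item": "hotel_a", "rating": 2.5, "season": "winter"},
]
summer_only = contextual_prefilter(ratings, "season", "summer")
print(mean_absolute_error([4.0, 3.0], [3.5, 3.2]))
```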
Procedia PDF Downloads 424
2478 Forecasting Amman Stock Market Data Using a Hybrid Method
Authors: Ahmad Awajan, Sadam Al Wadi
Abstract:
In this study, a hybrid method based on empirical mode decomposition and Holt-Winters (EMD-HW) is used to forecast Amman stock market data. First, the data are decomposed by the EMD method into intrinsic mode functions (IMFs) and a residual component. Then, all components are forecasted by the HW technique. Finally, the forecast values are aggregated together to obtain the forecast of the stock market data. Empirical results showed that EMD-HW outperforms individual forecasting models. The strength of EMD-HW lies in its ability to forecast non-stationary and non-linear time series without the need for any transformation method. Moreover, EMD-HW has relatively high accuracy compared with eight existing forecasting methods, based on five forecast error measures.
Keywords: Holt-Winter method, empirical mode decomposition, forecasting, time series
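The EMD-HW procedure (decompose, forecast each component, aggregate) can be sketched as follows, assuming the PyEMD and statsmodels packages are available; the additive-trend Holt-Winters configuration is an assumption made for illustration and may differ from the variant used in the study.

```python
import numpy as np
from PyEMD import EMD                      # provided by the EMD-signal package
from statsmodels.tsa.holtwinters import ExponentialSmoothing

def emd_hw_forecast(series, horizon=10):
    """Decompose with EMD, forecast each IMF and the residue with Holt-Winters,
    then aggregate the component forecasts into a single forecast."""
    data = np.asarray(series, dtype=float)
    emd = EMD()
    emd.emd(data)
    imfs, residue = emd.get_imfs_and_residue()
    components = list(imfs) + [residue]
    forecast = np.zeros(horizon)
    for comp in components:
        model = ExponentialSmoothing(comp, trend="add").fit()   # assumed HW variant
        forecast += np.asarray(model.forecast(horizon))
    return forecast
```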
Procedia PDF Downloads 132
2477 Performance Analysis of Artificial Neural Network with Decision Tree in Prediction of Diabetes Mellitus
Authors: J. K. Alhassan, B. Attah, S. Misra
Abstract:
Human beings have the ability to make logical decisions. Although human decision-making is often optimal, it is insufficient when a huge amount of data is to be classified. A medical dataset is a vital ingredient used in predicting a patient's health condition. To obtain the best prediction, the most suitable machine learning algorithms are required. This work compared the performance of artificial neural networks (ANN) and decision tree algorithms (DTA) with regard to some performance metrics using diabetes data. The evaluations were done using the Weka software, and it was found that the DTA performed better than the ANN. Multilayer perceptron (MLP) and radial basis function (RBF) were the two algorithms used for the ANN, while REPTree and LADTree were the DTA models used. The root mean squared error (RMSE) of MLP is 0.3913, that of RBF is 0.3625, that of REPTree is 0.3174, and that of LADTree is 0.3206.
Keywords: artificial neural network, classification, decision tree algorithms, diabetes mellitus
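A rough analogue of the comparison, using scikit-learn instead of Weka, is sketched below; it assumes a binary 0/1 target and computes an RMSE on predicted class probabilities in the spirit of Weka's metric. The models and settings are placeholders, not the configurations reported above.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

def rmse_of_probabilities(model, X_test, y_test):
    """RMSE between the predicted probability of the positive class and the 0/1 truth."""
    p_positive = model.predict_proba(X_test)[:, 1]
    return float(np.sqrt(np.mean((p_positive - y_test) ** 2)))

def compare_models(X, y):
    """Fit an ANN and a decision tree on the same split and report their RMSE."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    models = {"MLP": MLPClassifier(max_iter=1000, random_state=0),
              "DecisionTree": DecisionTreeClassifier(random_state=0)}
    return {name: rmse_of_probabilities(m.fit(X_tr, y_tr), X_te, y_te)
            for name, m in models.items()}
```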
Procedia PDF Downloads 411
2476 Time Synchronization between the eNBs in E-UTRAN under the Asymmetric IP Network
Abstract:
In this paper, we present a method for time synchronization between two eNodeBs (eNBs) in an E-UTRAN (Evolved Universal Terrestrial Radio Access Network). The two eNBs cooperate in the so-called inter-eNB CA (Carrier Aggregation) case and are connected via an asymmetrical IP network. We solve the problem by using broadcast signals generated in E-UTRAN as synchronization signals. The results show that time synchronization with the proposed method is possible with an error significantly less than 1 ms, which is sufficient considering that the transmission time interval in E-UTRAN is 1 ms. This makes this low-complexity method more suitable than the Network Time Protocol (NTP) for mobile applications with generated broadcast signals where time synchronization over an asymmetrical network is required.
Keywords: IP scheduled throughput, E-UTRAN, Evolved Universal Terrestrial Radio Access Network, NTP, Network Time Protocol, asymmetric network, delay
Procedia PDF Downloads 361
2475 Extracting an Experimental Relation between SMD, Mass Flow Rate, Velocity and Pressure in Swirl Fuel Atomizers
Authors: Mohammad Hassan Ziraksaz
Abstract:
Fuel atomizers are used in a wide range of IC engines, turbojets, and a variety of liquid propellant rocket engines. As the fuel spray fully develops, its characteristics approach their ultimate values. Fuel spray characteristics such as SMD, injection pressure, mass flow rate, droplet velocity, and spray cone angle play important roles in atomizing the liquid fuel into finely atomized droplets and finally forming the fine fuel spray. A well-performing, fully developed, fine spray without any defects brings up the idea of finding an experimental relation between the main effective spray characteristics. Extracting an experimental relation between SMD and the other physical spray characteristics in swirl fuel atomizers is the main scope of this experimental work. Droplet velocity, fuel mass flow rate, SMD, and spray cone angle are the parameters measured. A set of twelve reverse-engineered atomizers without any spray defects and a set of eight original atomizers, serving as references for well-performed spray, contributed to this work. More than 350 tests, mostly repeated, were performed. This work shows that although the spray cone angle plays a very effective role in spray formation, after formation it smoothly approaches an almost constant value while the other characteristics change to create fine droplets. Therefore, the work to find the relation between the characteristics is focused on SMD, droplet velocity, fuel mass flow rate, and injection pressure. The process of fuel spray formation begins at 5 psig injection pressure, where a tiny fuel onion attaches to the injector tip, and ends at 250 psig injection pressure, where the fully developed fine fuel spray forms. The injection pressure is gradually increased to observe how the spray forms. In each step, all parameters are measured and recorded carefully to provide a data bank. Various diagrams have been drawn to study the behavior of the parameters in more detail. Experiments and graphs show that a power equation can best describe the changes in the parameters. The experimental relation of SMD with pressure P, fuel mass flow rate Q̇, and droplet velocity V is first extracted individually, in pairs, giving the proportional relation of SMD to the other parameters. The next step is to find an experimental relation including all the parameters. Using the obtained proportional relation, replacing the parameters with experimentally measured values, and drawing the graphs of experimental SMD versus proportional SMD (SMD_P), a correction equation and consequently the final experimental equation are obtained. This experimental equation is specific to swirl fuel atomizers, and its use in different conditions shows about 3% error; lower error, and consequently higher accuracy, is expected to be achieved by increasing the number of experiments and the accuracy of data collection.
Keywords: droplet velocity, experimental relation, mass flow rate, SMD, swirl fuel atomizer
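The power-equation fitting described above can be illustrated by a log-log least-squares fit of SMD against pressure, mass flow rate, and droplet velocity. The multiplicative form assumed below is only a sketch of the approach; the paper's final correction equation is not reproduced.

```python
import numpy as np

def fit_power_law(smd, pressure, mass_flow, velocity):
    """Fit SMD = k * P^a * Q^b * V^c by linear least squares in log space."""
    X = np.column_stack([np.ones_like(smd),
                         np.log(pressure), np.log(mass_flow), np.log(velocity)])
    coeffs, *_ = np.linalg.lstsq(X, np.log(smd), rcond=None)
    log_k, a, b, c = coeffs
    return np.exp(log_k), a, b, c

def predict_smd(k, a, b, c, pressure, mass_flow, velocity):
    """Evaluate the fitted power-law relation at new operating conditions."""
    return k * pressure ** a * mass_flow ** b * velocity ** c
```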
Procedia PDF Downloads 161
2474 Optimization of an Electro-Submersible Pump for Crude Oil Extraction Processes
Authors: Deisy Becerra, Nicolas Rios, Miguel Asuaje
Abstract:
The electrical submersible pump (ESP) is one of the most widely used artificial lift methods of recent years; it consists of a serial arrangement of centrifugal pumps. One of the main concerns when handling crude oil is the formation of O/W or W/O (oil/water or water/oil) emulsions inside the pump, due to the shear rate imparted and the presence of high-molecular-weight substances that act as natural surfactants. Therefore, it is important to analyze the flow patterns inside the pump to increase the percentage of oil recovered, using the centrifugal force and the difference in density between the oil and the water to generate the separation of the liquid phases. For this study, a computational fluid dynamics (CFD) model was developed in the STAR-CCM+ software, based on the 3D geometry of a Franklin Electric 4400 4' four-stage ESP. In this case, the last stage was modified to improve the centrifugal effect inside the pump, and a perforated double tube was designed with three different hole configurations at the outlet section, through which the separated water flows. The hole arrangements have different geometrical configurations, such as circles, rectangles, and irregular shapes forming a grating around the tube. The two-phase flow was modeled using an Eulerian approach with the volume of fluid (VOF) method, which predicts the distribution and movement of larger interfaces in immiscible phases. Different water-oil compositions were evaluated: 70-30% v/v, 80-20% v/v, and 90-10% v/v. Finally, greater recovery of oil was obtained. For the several compositions evaluated, the volumetric oil fraction was greater than 0.55 at the pump outlet. Similarly, it is possible to show an inversely proportional relationship between the water/oil rate (WOR) and the volumetric flow. For the volumetric fractions evaluated, the oil flow increased by approximately 41%-10% for circular perforations and 49%-19% for rectangular perforations, relative to the inlet flow. Besides, the elimination of the pump diffuser in the last stage of the pump reduced the head by approximately 20%.
Keywords: computational fluid dynamic, CFD, electrical submersible pump, ESP, two phase flow, volume of fluid, VOF, water/oil rate, WOR
Procedia PDF Downloads 158