Search results for: invasive weed optimization algorithm
5431 A Digital Filter for Symmetrical Components Identification
Authors: Khaled M. El-Naggar
Abstract:
This paper presents a fast and efficient technique for monitoring and supervising power system disturbances generated by the dynamic performance of power systems or by faults. Monitoring power system quantities involves monitoring the fundamental voltage and current magnitudes and their frequencies, as well as their negative- and zero-sequence components under different operating conditions. The proposed technique is based on the simulated annealing (SA) optimization technique. The method uses a digital set of measurements of the voltage or current waveforms at a power system bus to perform the estimation process digitally. The algorithm is tested using different simulated data to monitor the symmetrical components of power system waveforms. Different study cases are considered in this work. Effects of the number of samples, the sampling frequency, and the sample window size are studied. Results are reported and discussed.
Keywords: estimation, faults, measurement, symmetrical components
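The abstract does not give the estimator's internals, so the following is a minimal sketch of the general idea only: a simulated annealing search that fits the amplitude and phase of a fundamental phasor to a sampled waveform by minimizing squared error. The model, cooling schedule, and function names are illustrative assumptions, not the paper's implementation.

```python
import math
import random

def waveform(amp, phase, freq, t):
    """Model signal: a single fundamental-frequency sinusoid."""
    return amp * math.sin(2 * math.pi * freq * t + phase)

def sse(params, samples, freq, dt):
    """Sum of squared errors between the model and the measured samples."""
    amp, phase = params
    return sum((s - waveform(amp, phase, freq, k * dt)) ** 2
               for k, s in enumerate(samples))

def simulated_annealing(samples, freq, dt, iters=5000, temp0=1.0, cooling=0.999):
    """Estimate (amplitude, phase) of the fundamental by simulated annealing."""
    current = [1.0, 0.0]                       # initial guess
    best = list(current)
    cur_cost = best_cost = sse(current, samples, freq, dt)
    temp = temp0
    for _ in range(iters):
        # propose a small random perturbation of amplitude and phase
        cand = [current[0] + random.gauss(0, 0.05),
                current[1] + random.gauss(0, 0.05)]
        cost = sse(cand, samples, freq, dt)
        # always accept downhill moves, accept uphill moves with Boltzmann probability
        if cost < cur_cost or random.random() < math.exp((cur_cost - cost) / temp):
            current, cur_cost = cand, cost
            if cost < best_cost:
                best, best_cost = list(cand), cost
        temp *= cooling                        # geometric cooling schedule
    return best, best_cost

# synthetic 50 Hz waveform sampled at 1 kHz over two cycles
dt, f = 1e-3, 50.0
samples = [waveform(1.3, 0.4, f, k * dt) for k in range(40)]
(amp, phase), err = simulated_annealing(samples, f, dt)
print(f"estimated amplitude={amp:.3f}, phase={phase:.3f}, SSE={err:.2e}")
```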
Procedia PDF Downloads 465
5430 Consideration of Uncertainty in Engineering
Authors: A. Mohammadi, M. Moghimi, S. Mohammadi
Abstract:
Engineers need computational methods that provide solutions less sensitive to environmental effects, so techniques should be used that take uncertainty into account in order to control and minimize the risk associated with design and operation. To consider uncertainty in an engineering problem, the optimization problem should be solved for a suitable range of each uncertain input variable instead of just one estimated point. With a deterministic optimization formulation, a large computational burden is required to consider every possible and probable combination of uncertain input variables. Several methods have been reported in the literature to deal with problems under uncertainty. In this paper, different methods are presented and analyzed.
Keywords: uncertainty, Monte Carlo simulation, stochastic programming, scenario method
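As a concrete illustration of the Monte Carlo idea mentioned in the keywords, the sketch below evaluates a design objective over sampled realizations of uncertain inputs rather than at a single nominal point. The objective function and the input distributions are purely illustrative assumptions.

```python
import random
import statistics

def objective(x, load, price):
    """Hypothetical design objective: cost as a function of a design variable x
    and two uncertain inputs (load and unit price)."""
    return price * load / x + 0.5 * x ** 2

def monte_carlo_cost(x, n=10000):
    """Estimate the expected cost and its spread by sampling the uncertain
    inputs instead of fixing them at one estimated point."""
    costs = []
    for _ in range(n):
        load = random.gauss(100.0, 10.0)    # assumed distribution of the demand
        price = random.uniform(0.8, 1.2)    # assumed distribution of the unit price
        costs.append(objective(x, load, price))
    return statistics.mean(costs), statistics.stdev(costs)

for x in (5.0, 10.0, 15.0):
    mean, std = monte_carlo_cost(x)
    print(f"design x={x:4.1f}  expected cost={mean:7.2f}  std={std:6.2f}")
```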
Procedia PDF Downloads 414
5429 New Two-Way Map-Reduce Join Algorithm: Hash Semi Join
Authors: Marwa Hussein Mohamed, Mohamed Helmy Khafagy, Samah Ahmed Senbel
Abstract:
MapReduce is a programming model used to handle and support massive data sets. The rapid increase in data size, and big data in general, make the analysis of such data the most important issue today. MapReduce is used to analyze data and extract more helpful information through two simple functions, map and reduce, which are the only parts written by the programmer; the framework provides load balancing, fault tolerance, and high scalability. The most important operation in data analysis is the join, but MapReduce does not support joins directly. This paper explains two two-way map-reduce join algorithms, semi-join and per-split semi-join, and proposes a new algorithm, hash semi-join, which uses a hash table to increase performance by eliminating unused records as early as possible and by applying the join through a hash table rather than using the map function to match the join key with the other table in the second phase. Using hash tables does not significantly affect memory size because only the matched records from the second table are stored. Our experimental results show that the hash semi-join algorithm has higher performance than the two other algorithms as the data size increases from 10 million records to 500 million, and the running time increases according to the size of the joined records between the two tables.
Keywords: map reduce, hadoop, semi join, two way join
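The paper's Hadoop implementation is not reproduced here; the following single-machine sketch only illustrates the core semi-join idea the abstract describes: build a hash set of join keys from one table and use it to filter the other table, so non-matching rows are eliminated before the final join phase. Table contents and key positions are illustrative.

```python
def hash_semi_join(left, right, left_key=0, right_key=0):
    """Hash semi-join: keep only the rows of `left` whose key appears in `right`,
    probing a hash set instead of rescanning `right` for every row.  Unmatched
    rows are dropped as early as possible, so only matching records flow into
    the subsequent join phase."""
    right_keys = {row[right_key] for row in right}                # build phase
    return [row for row in left if row[left_key] in right_keys]   # probe phase

orders = [(1, "pen"), (2, "book"), (3, "lamp"), (4, "desk")]
customers = [(2, "Ada"), (3, "Lin")]

matched = hash_semi_join(orders, customers)
print(matched)   # [(2, 'book'), (3, 'lamp')] -- rows 1 and 4 never reach the join
```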
Procedia PDF Downloads 513
5428 Fault Diagnosis of Manufacturing Systems Using AntTreeStoch with Parameter Optimization by ACO
Authors: Ouahab Kadri, Leila Hayet Mouss
Abstract:
In this paper, we present three diagnostic modules for complex and dynamic systems. These modules are based on three ant colony algorithms: AntTreeStoch, Lumer & Faieta, and Binary ant colony. We chose these algorithms for their simplicity and their wide application range. However, we cannot use these algorithms in their basic forms as they have several limitations. To use these algorithms in a diagnostic system, we have proposed three variants. We have tested these algorithms on datasets issued from two industrial systems, a clinkering system and a pasteurization system.
Keywords: ant colony algorithms, complex and dynamic systems, diagnosis, classification, optimization
Procedia PDF Downloads 299
5427 The Optimization of Decision Rules in Multimodal Decision-Level Fusion Scheme
Authors: Andrey V. Timofeev, Dmitry V. Egorov
Abstract:
This paper introduces an original method for the parametric optimization of the structure of a multimodal decision-level fusion scheme, which combines the partial solutions of the classification task obtained from an assembly of mono-modal classifiers. As a result, a multimodal fusion classifier with the minimum value of the total error rate has been obtained.
Keywords: classification accuracy, fusion solution, total error rate, multimodal fusion classifier
Procedia PDF Downloads 466
5426 Taguchi Method for Analyzing a Flexible Integrated Logistics Network
Authors: E. Behmanesh, J. Pannek
Abstract:
Logistics network design is known as one of the strategic decision problems. As these kinds of problems belong to the category of NP-hard problems, traditional methods fail to find an optimal solution in a short time. In this study, we attempt to involve reverse flow through an integrated design of a forward/reverse supply chain network formulated as a mixed integer linear program. This integrated, multi-stage model is enriched by three different delivery paths, which makes the problem more complex. To tackle such an NP-hard problem, a memetic algorithm based on a revised random-path direct encoding method is considered as the solution methodology. Each algorithm has some parameters that need to be investigated to reveal the best performance. In this regard, the Taguchi method is adopted to identify the optimum operating condition of the proposed memetic algorithm and to improve the results. In this study, four factors, namely population size, crossover rate, local search iterations, and number of iterations, are considered. Analyzing the parameters and the improvement in results are the outlook of this research. (A small illustrative sketch of the Taguchi parameter-tuning idea follows the abstract.)
Keywords: integrated logistics network, flexible path, memetic algorithm, Taguchi method
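The sketch below illustrates the Taguchi tuning idea on a simplified version of the problem: run a small orthogonal array of parameter combinations, compute a "smaller-is-better" signal-to-noise ratio per run, and pick for each factor the level with the best mean S/N. Only three of the paper's four factors are used, and the levels, array, and stand-in response function are assumptions, not the authors' settings.

```python
import math
import random
from collections import defaultdict

# Illustrative L4(2^3) orthogonal array over three memetic-algorithm factors
# (population size, crossover rate, local-search iterations); levels are assumed.
levels = {"pop": [50, 100], "cx": [0.6, 0.9], "ls": [5, 20]}
l4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def run_algorithm(pop, cx, ls):
    """Stand-in for one run of the memetic algorithm; returns a cost value
    (lower is better).  A real study would run the actual optimizer here."""
    return 1000.0 / (pop * cx) + 50.0 / ls + random.uniform(0.0, 5.0)

def sn_smaller_is_better(values):
    """Taguchi signal-to-noise ratio for a response to be minimized."""
    return -10 * math.log10(sum(v * v for v in values) / len(values))

sn_by_level = defaultdict(list)
for trial in l4:
    pop, cx, ls = levels["pop"][trial[0]], levels["cx"][trial[1]], levels["ls"][trial[2]]
    costs = [run_algorithm(pop, cx, ls) for _ in range(3)]   # repeated runs per trial
    sn = sn_smaller_is_better(costs)
    for name, idx in zip(("pop", "cx", "ls"), trial):
        sn_by_level[(name, idx)].append(sn)

# For each factor, choose the level with the highest mean S/N ratio.
for name in ("pop", "cx", "ls"):
    best = max((0, 1), key=lambda i: sum(sn_by_level[(name, i)]) / len(sn_by_level[(name, i)]))
    print(f"{name}: best level = {levels[name][best]}")
```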
Procedia PDF Downloads 187
5425 Failure Inference and Optimization for Step Stress Model Based on Bivariate Wiener Model
Authors: Soudabeh Shemehsavar
Abstract:
In this paper, we consider the situation under a life test in which the failure times of the test units are not related deterministically to an observable, stochastic, time-varying covariate. In such a case, the joint distribution of the failure time and a marker value would be useful for modeling the step stress life test. The problem of accelerating such an experiment is the main aim of this paper. We present a step stress accelerated model based on a bivariate Wiener process with one component as the latent (unobservable) degradation process, which determines the failure times, and the other as a marker process, whose degradation values are recorded at the times of failure. Parametric inference based on the proposed model is discussed, and the optimization procedure for obtaining the optimal time for changing the stress level is presented. The optimization criterion is to minimize the approximate variance of the maximum likelihood estimator of a percentile of the products’ lifetime distribution.
Keywords: bivariate normal, Fisher information matrix, inverse Gaussian distribution, Wiener process
Procedia PDF Downloads 317
5424 Accounting for Downtime Effects in Resilience-Based Highway Network Restoration Scheduling
Authors: Zhenyu Zhang, Hsi-Hsien Wei
Abstract:
Highway networks play a vital role in post-disaster recovery for disaster-damaged areas. Damaged bridges in such networks can disrupt recovery activities by impeding the transportation of people, cargo, and reconstruction resources. Therefore, rapid restoration of damaged bridges is of paramount importance to long-term disaster recovery. In the post-disaster recovery phase, the key to restoration scheduling for a highway network is the prioritization of bridge-repair tasks. Resilience is widely used as a measure of the ability of a network to return to its pre-disaster level of functionality. In practice, highways are temporarily blocked during the downtime of bridge restoration, leading to a decrease in highway-network functionality. The failure to take downtime effects into account can lead to overestimation of network resilience. Additionally, post-disaster recovery of highway networks is generally divided into emergency bridge repair (EBR) in the response phase and long-term bridge repair (LBR) in the recovery phase, and EBR and LBR differ in terms of restoration objectives, restoration duration, budget, etc. Distinguishing these two phases is important to precisely quantify highway network resilience and to generate suitable restoration schedules for highway networks in the recovery phase. To address the above issues, this study proposes a novel resilience quantification method for the optimization of long-term bridge repair schedules (LBRS), taking into account the impact of EBR activities and restoration downtime on a highway network’s functionality. A time-dependent integer program with recursive functions is formulated for optimally scheduling LBR activities. Moreover, since uncertainty always exists in the LBRS problem, this paper extends the optimization model from the deterministic case to the stochastic case. A hybrid genetic algorithm that integrates a heuristic approach into a traditional genetic algorithm to accelerate the evolution process is developed. The proposed methods are tested using data from the 2008 Wenchuan earthquake, based on a regional highway network in Sichuan, China, consisting of 168 highway bridges on 36 highways connecting 25 cities/towns. The results show that, in this case, neglecting the bridge restoration downtime can lead to approximately 15% overestimation of highway network resilience. Moreover, accounting for the impact of EBR on network functionality can help to generate a more specific and reasonable LBRS. The theoretical and practical values are as follows. First, the proposed network recovery curve contributes to a comprehensive quantification of highway network resilience by accounting for the impact of both restoration downtime and EBR activities on the recovery curves. Moreover, this study can improve highway network resilience from the organizational dimension by providing bridge managers with optimal LBR strategies.
Keywords: disaster management, highway network, long-term bridge repair schedule, resilience, restoration downtime
Procedia PDF Downloads 150
5423 A First Step towards Automatic Evolutionary for Gas Lifts Allocation Optimization
Authors: Younis Elhaddad, Alfonso Ortega
Abstract:
Oil production by means of gas lift is a standard technique in the oil production industry. Optimizing the total amount of oil produced with respect to the amount of gas injected is a key question in this domain. Different methods have been tested to propose a general methodology. Many of them apply well-known numerical methods. Some of them have taken into account the power of evolutionary approaches. Our goal is to provide the experts of the domain with a powerful automatic searching engine into which they can introduce their knowledge in a format close to the one used in their domain, and get solutions comprehensible in the same terms as well. These proposals introduce into the genetic engine the most expressive formal models to represent the solutions to the problem. These algorithms have proven to be as effective as other genetic systems but more flexible and comfortable for the researcher, although they usually require huge search spaces to justify their use due to the computational resources involved in the formal models. The first step in evaluating the viability of applying our approaches to this realm is to fully understand the domain and to select an instance of the problem (gas lift optimization) in which applying genetic approaches seems promising. After analyzing the state of the art of this topic, we decided to choose a previous work from the literature that faces the problem by means of numerical methods. This contribution includes enough details to be reproduced and complete data to be carefully analyzed. We designed a classical, simple genetic algorithm just to try to reproduce the same results and to understand the problem in depth. We could easily incorporate the well mathematical model and the well data used by the authors, and easily translate their mathematical model, to be numerically optimized, into a proper fitness function. We analyzed the 100 curves they use in their experiment; similar results were observed. In addition, our system automatically inferred an optimum total amount of injected gas for the field compatible with the sum of the optimum gas injected in each well by them. We identified several constraints that could be interesting to incorporate into the optimization process but that could be difficult to express numerically. It could be interesting to automatically propose other mathematical models to fit both the individual well curves and the behaviour of the complete field. All these facts and conclusions justify continuing to explore the viability of applying the more sophisticated approaches previously proposed by our research group.
Keywords: evolutionary automatic programming, gas lift, genetic algorithms, oil production
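The well performance curves and constraints used by the authors are not reproduced here; the sketch below only shows the general shape of a "classical, simple genetic algorithm" for the gas lift allocation problem: the fitness is total oil production from assumed per-well gas-lift response curves under a fixed gas budget. Curve coefficients, operators, and parameter values are illustrative assumptions.

```python
import math
import random

# Assumed concave gas-lift performance curves: oil rate vs. injected gas per
# well (coefficients are purely illustrative, not field data).
WELLS = [(30, 0.08), (45, 0.05), (25, 0.10), (60, 0.03)]
TOTAL_GAS = 100.0   # total gas available for injection

def oil_rate(q_gas, a, b):
    return a * (1.0 - math.exp(-b * q_gas))       # saturating response curve

def fitness(alloc):
    """Total field oil production; allocations are rescaled to the gas budget."""
    total = sum(alloc) or 1e-9
    scaled = [TOTAL_GAS * g / total for g in alloc]
    return sum(oil_rate(g, a, b) for g, (a, b) in zip(scaled, WELLS))

def genetic_algorithm(pop_size=40, generations=200, mut_sigma=5.0):
    pop = [[random.uniform(0, TOTAL_GAS) for _ in WELLS] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                       # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = random.sample(parents, 2)
            child = [(a + b) / 2 for a, b in zip(p1, p2)]    # arithmetic crossover
            child = [max(0.0, g + random.gauss(0, mut_sigma)) for g in child]  # mutation
            children.append(child)
        pop = parents + children
    best = max(pop, key=fitness)
    total = sum(best) or 1e-9
    return [TOTAL_GAS * g / total for g in best], fitness(best)

alloc, oil = genetic_algorithm()
print("gas per well:", [round(g, 1) for g in alloc], " total oil:", round(oil, 1))
```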
Procedia PDF Downloads 162
5422 Carotid Intima-Media Thickness and Ankle-Brachial Index as Predictors of the Severity of Coronary Artery Disease
Authors: Ali Kassem, Yaser Kamal, Mohamed Abdel Wahab, Mohamed Hussen
Abstract:
Introduction: Atherosclerosis is one of the leading causes of death all over the world. Recently, there has been increasing interest in carotid intima-media thickness (CIMT) and the ankle-brachial index (ABI) as non-invasive tools for identifying subclinical atherosclerosis. We aim to examine the role of CIMT and ABI as predictors of the severity of angiographically documented coronary artery disease (CAD). Methods: A cross-sectional study was conducted on 60 patients who were investigated by coronary angiography at Sohag University Hospital, Egypt. CIMT: after the carotid arteries were located by transverse scans, the probe was rotated 90° to obtain and record longitudinal images of the bilateral carotid arteries. ABI: each patient was evaluated in the supine position after resting for 5 min. ABI was measured in each leg using Doppler ultrasound while the patient remained in the same position. The lowest ABI obtained for either leg was taken as the ABI measurement for the patient. Results: Patients with carotid mean IMT ≥ 0.9 mm had significantly more severe coronary artery disease than patients without thickening (mean IMT < 0.9 mm). Similarly, patients with low ABI (< 0.9) had significantly more severe coronary artery disease than patients with ABI ≥ 0.9. When the patients were divided into 4 groups (group A, n = 15, mean IMT < 0.9 mm, ABI ≥ 0.9; group B, n = 25, mean IMT < 0.9 mm, low ABI; group C, n = 5, mean IMT ≥ 0.9 mm, ABI ≥ 0.9; group D, n = 19, mean IMT ≥ 0.9 mm, low ABI), the presence of significant coronary stenosis (> 50%) differed significantly among the groups (group A, n = 5 (33.3%); group B, n = 11 (52.4%); group C, n = 4 (60%); group D, n = 15 (78.9%); P = 0.001). Conclusion: CIMT and ABI provide useful information on the severity of CAD. Early and aggressive intervention should be considered in patients with CAD and abnormalities in one or both of these non-invasive modalities.
Keywords: ankle brachial index, carotid intima media thickness, coronary artery disease, predictors of severity
Procedia PDF Downloads 232
5421 Design an Intelligent Fire Detection System Based on Neural Network and Particle Swarm Optimization
Authors: Majid Arvan, Peyman Beygi, Sina Rokhsati
Abstract:
In-time detection of fire in buildings is of great importance. Employing intelligent methods for data processing in fire detection systems leads to a significant reduction of fire damage at the lowest cost. In this paper, the raw data obtained from fire detection sensor networks in buildings are processed using intelligent methods based on neural networks, and the likelihood of fire is predicted. In order to enhance the quality of the system, the noise in the sensor data is reduced by wavelet analysis and application of the SVD technique. Meanwhile, the proposed neural network is trained using particle swarm optimization (PSO). In the simulation work, the data are collected from the sensor network inside the room and applied to the proposed network. Then the outputs are compared with a conventional MLP network. The simulation results demonstrate the superiority of the proposed method over the conventional one.
Keywords: intelligent fire detection, neural network, particle swarm optimization, fire sensor network
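To make the "neural network trained with PSO" idea concrete, the sketch below trains the weights of a tiny feedforward classifier with particle swarm optimization instead of backpropagation. The network size, PSO coefficients, and toy data are assumptions for illustration only, not the paper's architecture or dataset.

```python
import math
import random

def sigmoid(z):
    z = max(-60.0, min(60.0, z))               # clamp to avoid overflow
    return 1.0 / (1.0 + math.exp(-z))

def predict(weights, x):
    """Tiny 2-2-1 feedforward net; `weights` is a flat list of 9 parameters."""
    w = weights
    h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return sigmoid(w[6] * h1 + w[7] * h2 + w[8])

def loss(weights, data):
    return sum((predict(weights, x) - y) ** 2 for x, y in data) / len(data)

def pso_train(data, n_particles=30, dims=9, iters=300, w=0.7, c1=1.5, c2=1.5):
    """Each particle is one candidate weight vector; the swarm minimizes the MSE."""
    pos = [[random.uniform(-1, 1) for _ in range(dims)] for _ in range(n_particles)]
    vel = [[0.0] * dims for _ in range(n_particles)]
    pbest = [list(p) for p in pos]
    pbest_cost = [loss(p, data) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = list(pbest[g]), pbest_cost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dims):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            cost = loss(pos[i], data)
            if cost < pbest_cost[i]:
                pbest[i], pbest_cost[i] = list(pos[i]), cost
                if cost < gbest_cost:
                    gbest, gbest_cost = list(pos[i]), cost
    return gbest, gbest_cost

# toy data: (smoke level, temperature rise) -> fire / no-fire label
data = [((0.1, 0.0), 0), ((0.2, 0.1), 0), ((0.8, 0.7), 1), ((0.9, 0.9), 1)]
weights, err = pso_train(data)
print("training MSE:", round(err, 4), " p(fire | 0.85, 0.8) =", round(predict(weights, (0.85, 0.8)), 3))
```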
Procedia PDF Downloads 380
5420 An Android Application for ECG Monitoring and Evaluation Using Pan-Tompkins Algorithm
Authors: Cebrail Çiflikli, Emre Öner Tartan
Abstract:
Parallel to the fast worldwide increase of the elderly population and the spread of unhealthy life habits, there is a significant rise in the number of patients and health problems. The supervision of people who have health problems, and oversight in the detection of people who have potential risks, bring a considerable cost to the health system and increase the workload of physicians. To provide an efficient solution to this problem, in recent years mobile applications have shown their potential for wide usage in health monitoring. In this paper we present an Android mobile application that records and evaluates the ECG signal using the Pan-Tompkins algorithm for QRS detection. The application model includes an alarm mechanism that is proposed to be used for sending a message including abnormality information and location information to a health supervisor.
Keywords: Android mobile application, ECG monitoring, QRS detection, Pan-Tompkins Algorithm
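A simplified sketch of the Pan-Tompkins pipeline referenced above is shown below: derivative, squaring, moving-window integration, and thresholding. The original band-pass stage is approximated by a crude detrending filter, and the window lengths, threshold, and synthetic signal are assumptions; a production implementation (and the paper's Android code) would differ.

```python
import numpy as np

def pan_tompkins_qrs(ecg, fs=200):
    """Simplified Pan-Tompkins pipeline: derivative -> squaring ->
    moving-window integration -> threshold.  The classic 5-15 Hz band-pass
    stage is approximated here by a moving-average detrend."""
    ecg = np.asarray(ecg, dtype=float)
    baseline = np.convolve(ecg, np.ones(fs) / fs, mode="same")
    filtered = ecg - baseline                         # crude band-pass substitute
    derivative = np.gradient(filtered)                # emphasize QRS slopes
    squared = derivative ** 2                         # rectify and amplify peaks
    win = int(0.15 * fs)                              # ~150 ms integration window
    integrated = np.convolve(squared, np.ones(win) / win, mode="same")
    threshold = 0.5 * integrated.max()                # simple fixed-fraction threshold
    above = integrated > threshold
    # rising edges of the above-threshold regions mark candidate QRS complexes
    return np.where(np.diff(above.astype(int)) == 1)[0]

# synthetic ECG-like signal: 1 Hz "beats" sampled at 200 Hz with a little noise
t = np.arange(0, 5, 1 / 200)
ecg = np.sin(2 * np.pi * 1 * t) ** 63 + 0.05 * np.random.randn(t.size)
print("detected beats at samples:", pan_tompkins_qrs(ecg))
```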
Procedia PDF Downloads 234
5419 Invasive Asian Carp Fish Species: A Natural and Sustainable Source of Methionine for Organic Poultry Production
Authors: Komala Arsi, Ann M. Donoghue, Dan J. Donoghue
Abstract:
Methionine is an essential dietary amino acid necessary to promote the growth and health of poultry. Synthetic methionine is commonly used as a supplement in conventional poultry diets and is temporarily allowed in organic poultry feed for lack of natural and organically approved sources of methionine. It has been a challenge to find a natural, sustainable, and cost-effective source of methionine, which reiterates the pressing need to explore potential alternatives for organic poultry production. Fish have high concentrations of methionine, but wild-caught fish are expensive and adversely impact wild fish populations. Asian carp (AC) is an invasive species, and its utilization as a natural methionine source has potential. However, to the best of our knowledge, there is no proven technology to utilize this fish as a methionine source. In this study, we co-extruded Asian carp and soybean meal to form a dry-extruded, methionine-rich AC meal. In order to formulate rations with the novel extruded carp meal, the product was tested on cecectomized roosters for its amino acid digestibility and total metabolizable energy (TMEn). Excreta were collected, and the gross energy and protein content of the feces were determined to calculate the TMEn. The methionine content, digestibility, and TMEn values were greater for the extruded AC meal than for the control diets. Carp meal was subsequently tested as a methionine source in feeds formulated for broilers, and production performance (body weight gain and feed conversion ratio, FCR) was assessed in comparison with broilers fed standard commercial diets supplemented with synthetic methionine. In this study, broiler chickens were fed either a control diet with synthetic methionine or a treatment diet with extruded AC meal (8 replicates/treatment; n=30 birds/replicate) from day 1 to 42 days of age. At the end of the trial, data for body weights, feed intake, and FCR were analyzed using one-way ANOVA with Fisher's LSD test for multiple comparisons. Results revealed that birds on the AC diet had body weight gains and feed intake comparable to diets containing synthetic methionine (P > 0.05). Results from the study suggest that invasive AC-derived fish meal could potentially be an effective and inexpensive source of sustainable natural methionine for organic poultry farmers.
Keywords: Asian carp, methionine, organic, poultry
Procedia PDF Downloads 158
5418 An Audit of Local Guidance Compliance For Stereotactic Core Biopsy For DCIS In The Breast Screening Programme
Authors: Aisling Eves, Andrew Pieri, Ross McLean, Nerys Forester
Abstract:
Background: The breast unit local guideline recommends that 12 cores should be taken in a stereotactic-guided biopsy to diagnose DCIS. Twelve cores are regarded as providing good diagnostic value without removing more breast tissue than necessary. This study aimed to determine compliance with the guideline and investigated how the number of cores impacted the re-excision rate and size discrepancies. Methods: In this single-centre retrospective cohort study, 72 consecutive breast-screened patients with <15 mm DCIS on the radiological report underwent stereotactic-guided core biopsy and subsequent surgical excision. Clinical, radiological, and histological data were collected over 5 years, and the ASCO guideline for margin involvement of <2 mm was used to guide the need for re-excision. Results: Forty-six (63.9%) patients had <12 cores taken, and 26 (36.1%) patients had ≥12 cores taken. Only six (8.3%) patients had exactly 12 cores taken in their stereotactic biopsy. Incomplete surgical excision was seen in 17 patients overall (23.6%); of these patients, twelve (70.6%) had fewer than 12 cores taken (p=0.55 for the difference between groups). Mammogram and biopsy underestimated the size of the DCIS in this subgroup by a median of 15 mm (range: 6-135 mm). Re-excision was required in 9 patients (12.5%), and five patients (6.9%) were found to have invasive ductal carcinoma on excision (80% had <12 cores, p=0.43). Discussion: There is poor compliance with the breast unit local guideline and higher rates of re-excision in patients who did not have ≥12 cores taken. Taking ≥12 cores resulted in fewer missed invasive cancers and lower incomplete excision and re-excision rates.
Keywords: stereotactic core biopsy, DCIS, breast screening, re-excision rates, core biopsy
Procedia PDF Downloads 128
5417 A Novel Approach of NPSO on Flexible Logistic (S-Shaped) Model for Software Reliability Prediction
Authors: Pooja Rani, G. S. Mahapatra, S. K. Pandey
Abstract:
In this paper, we propose a novel approach combining neural network and particle swarm optimization methods for software reliability prediction. We first explain how to apply a compound function in a neural network so that we can derive a Flexible Logistic (S-shaped) Growth Curve (FLGC) model. This model mathematically represents software failure as a random process and can be used to evaluate software development status during testing. To avoid trapping in local minima, we apply the particle swarm optimization method to train the proposed model using failure test data sets. We derive our proposed model using computation-based intelligence modeling; thus, the proposed model becomes a Neuro-Particle Swarm Optimization (NPSO) model. We test the results with different inertia weights for the particle and velocity updates. We obtain results based on the best inertia weight and compare them with a personal-best oriented PSO (pPSO), which helps to choose the local best in the network neighborhood. The applicability of the proposed model is demonstrated through a real-time failure data set. The results obtained from the experiments show that the proposed model has a fairly accurate prediction capability for software reliability.
Keywords: software reliability, flexible logistic growth curve model, software cumulative failure prediction, neural network, particle swarm optimization
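The exact FLGC parameterization and the NPSO training are not reproduced here. The sketch below only makes the S-shaped growth-curve idea concrete: it assumes a standard logistic mean-value function m(t) = a / (1 + c·e^(-bt)) and fits it to illustrative cumulative failure counts with a plain least-squares routine as a stand-in for the paper's NPSO training.

```python
import numpy as np
from scipy.optimize import curve_fit

def flgc(t, a, b, c):
    """Assumed S-shaped (logistic) mean-value function: expected cumulative
    number of software failures observed by testing time t."""
    return a / (1.0 + c * np.exp(-b * t))

# illustrative cumulative failure counts observed at weekly test intervals
weeks = np.arange(1, 16)
failures = np.array([3, 6, 11, 19, 30, 43, 55, 66, 74, 80, 84, 87, 89, 90, 91], dtype=float)

# least-squares fit (the paper trains the model with NPSO instead; curve_fit is
# used here only to illustrate the growth-curve fitting step)
params, _ = curve_fit(flgc, weeks, failures, p0=[100.0, 0.5, 30.0], maxfev=10000)
a, b, c = params
print(f"total expected failures a={a:.1f}, growth rate b={b:.3f}, shape c={c:.1f}")
print("predicted cumulative failures at week 20:", round(float(flgc(20, a, b, c)), 1))
```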
Procedia PDF Downloads 344
5416 Imaging of Underground Targets with an Improved Back-Projection Algorithm
Authors: Alireza Akbari, Gelareh Babaee Khou
Abstract:
Ground Penetrating Radar (GPR) is an important nondestructive remote sensing tool that has been used in both military and civilian fields. Recently, GPR imaging has attracted much attention for the detection of subsurface shallow small targets such as landmines and unexploded ordnance, and also for imaging behind walls in security applications. For a monostatic arrangement, a single point target appears in the space-time GPR image as a hyperbolic curve because of the different trip times of the EM wave as the radar moves along a synthetic aperture and collects the reflectivity of the subsurface targets. With this hyperbolic curve, the resolution along the synthetic aperture direction shows undesired low-resolution features owing to the tails of the hyperbola. However, highly accurate information about the size, electromagnetic (EM) reflectivity, and depth of buried objects is essential in most GPR applications. Therefore, the hyperbolic behavior in the space-time GPR image is usually transformed into a focused pattern showing the object's true location and size together with its EM scattering. The common goal in a typical GPR image is to display the spatial location and the reflectivity of an underground object. Therefore, the main challenge of the GPR imaging technique is to devise an image reconstruction algorithm that provides high resolution and good suppression of strong artifacts and noise. In this paper, the standard back-projection (BP) algorithm, adapted to GPR imaging applications, is first used for image reconstruction. The standard BP algorithm has limited robustness against strong noise and numerous artifacts, which have adverse effects on subsequent tasks such as target detection. Thus, an improved BP based on cross-correlation between the received signals is proposed for decreasing noise and suppressing artifacts. To improve the quality of the results of the proposed BP imaging algorithm, a weight factor is designed for each point in the imaged region. Compared to the standard BP algorithm, the improved algorithm produces images of higher quality and resolution. The proposed improved BP algorithm was applied to simulated and real GPR data, and the results showed that it offers superior artifact suppression and produces images with high quality and resolution. In order to quantitatively describe the imaging results with respect to artifact suppression, a focusing parameter was evaluated.
Keywords: algorithm, back-projection, GPR, remote sensing
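For reference, the sketch below implements only the standard delay-and-sum back-projection baseline the paper starts from, assuming a known constant propagation velocity and an ideal point scatterer; the cross-correlation weighting of the improved BP is not reproduced. Grid sizes, velocity, and sampling interval are illustrative assumptions.

```python
import numpy as np

def back_projection(traces, antenna_x, dt, velocity, x_grid, z_grid):
    """Standard delay-and-sum back-projection: every image pixel accumulates the
    trace samples whose two-way travel time matches the distance from each
    antenna position to that pixel, so hyperbolic responses collapse onto the
    true target location."""
    image = np.zeros((len(z_grid), len(x_grid)))
    n_samples = traces.shape[1]
    for ia, xa in enumerate(antenna_x):
        for iz, z in enumerate(z_grid):
            for ix, x in enumerate(x_grid):
                dist = np.hypot(x - xa, z)                       # antenna -> pixel distance
                sample = int(round(2 * dist / velocity / dt))    # two-way travel time index
                if sample < n_samples:
                    image[iz, ix] += traces[ia, sample]
    return image

# synthetic data: one point scatterer at (x=0.5 m, z=0.4 m), 21 antenna positions
v, dt = 1.0e8, 1e-9                   # ~0.1 m/ns propagation velocity (assumed), 1 ns sampling
antenna_x = np.linspace(0, 1, 21)
traces = np.zeros((21, 200))
for ia, xa in enumerate(antenna_x):
    t_idx = int(round(2 * np.hypot(0.5 - xa, 0.4) / v / dt))
    traces[ia, t_idx] = 1.0           # ideal impulse reflection

x_grid = np.linspace(0, 1, 41)
z_grid = np.linspace(0.05, 0.8, 40)
img = back_projection(traces, antenna_x, dt, v, x_grid, z_grid)
iz, ix = np.unravel_index(np.argmax(img), img.shape)
print(f"focused target at x = {x_grid[ix]:.2f} m, z = {z_grid[iz]:.2f} m")
```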
Procedia PDF Downloads 452
5415 An Experimental Investigation of the Effect of Control Algorithm on the Energy Consumption and Temperature Distribution of a Household Refrigerator
Authors: G. Peker, Tolga N. Aynur, E. Tinar
Abstract:
In order to determine the energy consumption level and cooling characteristics of a domestic refrigerator controlled with various cooling system algorithms, a side-by-side (SBS) refrigerator was tested in temperature- and humidity-controlled chamber conditions. Two different control algorithms, a so-called drop-in algorithm and a frequency-controlled variable capacity compressor algorithm, were tested on the same refrigerator. Refrigerator cooling characteristics were investigated for both cases, and the results were compared with each other. The most important comparison parameters between the two algorithms were temperature distribution, energy consumption, evaporation and condensation temperatures, and refrigerator run times. Standard energy consumption tests were carried out on the same appliance and resulted in almost the same energy consumption levels, with a difference of 1.5%. With these two control algorithms, the power consumption profile of the refrigerator was found to be similar. Following the associated energy measurement standard, the temperature values of the test packages were measured to be slightly higher for the frequency-controlled algorithm compared to the drop-in algorithm. This paper contains the details of this experimental study conducted with different cooling control algorithms and compares the findings based on the same standard conditions.
Keywords: control algorithm, cooling, energy consumption, refrigerator
Procedia PDF Downloads 373
5414 Sparsity-Based Unsupervised Unmixing of Hyperspectral Imaging Data Using Basis Pursuit
Authors: Ahmed Elrewainy
Abstract:
Mixing in hyperspectral imaging occurs due to the low spatial resolution of the cameras used. The pure materials ("endmembers") present in the scene share the spectra of the pixels in different amounts called "abundances". Unmixing of the data cube is an important task for determining the endmembers present in the cube for the analysis of these images. Unsupervised unmixing is done with no prior information about the given data cube. Sparsity is one of the recent approaches used in source recovery and unmixing techniques. The l1-norm optimization problem, "basis pursuit", can be used as a sparsity-based approach to solve this unmixing problem, where the endmembers are assumed to be sparse in an appropriate domain known as a dictionary. This optimization problem is solved using a proximal method, iterative thresholding. The l1-norm basis pursuit optimization problem was used as a sparsity-based unmixing technique to unmix real and synthetic hyperspectral data cubes.
Keywords: basis pursuit, blind source separation, hyperspectral imaging, spectral unmixing, wavelets
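A minimal sketch of the proximal "iterative thresholding" step mentioned above is given below: an iterative soft-thresholding (ISTA) solver for the l1-regularized least-squares surrogate of basis pursuit. The dictionary and sparse vector are synthetic, not hyperspectral data, and the step sizes and regularization weight are illustrative assumptions.

```python
import numpy as np

def soft_threshold(x, tau):
    """Proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A, y, lam=0.1, iters=500):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1,
    the Lagrangian form of the basis-pursuit (sparse recovery) problem."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)           # gradient of the quadratic data term
        x = soft_threshold(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))         # dictionary: 100 atoms, 40 measurements
x_true = np.zeros(100)
x_true[[5, 42, 77]] = [1.0, -0.8, 0.6]     # 3-sparse abundance-like vector
y = A @ x_true + 0.01 * rng.standard_normal(40)

x_hat = ista(A, y, lam=0.05)
print("recovered support:", np.nonzero(np.abs(x_hat) > 0.1)[0])
```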
Procedia PDF Downloads 195
5413 The Effect of Initial Sample Size and Increment in Simulation Samples on a Sequential Selection Approach
Authors: Mohammad H. Almomani
Abstract:
In this paper, we examine the effect of the initial sample size and the increment in simulation samples on the performance of a sequential approach used for selecting the top m designs when the number of alternative designs is very large. The sequential approach consists of two stages. In the first stage, ordinal optimization is used to select a subset that overlaps with the set of actual best k% designs with high probability. Then, in the second stage, the optimal computing budget is used to select the top m designs from the selected subset. We apply the selection approach to a generic example under several parameter settings, with different choices of initial sample size and increment in simulation samples, to explore the impacts on the performance of this approach. The results show that the choice of initial sample size and the increment in simulation samples does affect the performance of the selection approach.
Keywords: large scale problems, optimal computing budget allocation, ordinal optimization, simulation optimization
Procedia PDF Downloads 355
5412 Performance Comparison of Joint Diagonalization Structure (JDS) Method and Wideband MUSIC Method
Authors: Sandeep Santosh, O. P. Sahu
Abstract:
We simulate an efficient multiple wideband and nonstationary source localization algorithm that exploits both the non-stationarity of the signals and the array geometric information. This algorithm is based on the joint diagonalization structure (JDS) of a set of short-time power spectrum matrices at different time instants of each frequency bin. JDS can be used for quick and accurate localization of multiple non-stationary sources. The JDS algorithm is a one-stage process, i.e., it directly searches for the directions of arrival (DOAs) over the continuous location parameter space. The JDS method requires that the number of sensors is not less than the number of sources. The simulation results show that the JDS method can localize two sources when their separation is not less than 7 degrees, whereas wideband MUSIC is only able to localize two sources at a separation of 18 degrees.
Keywords: joint diagonalization structure (JDS), wideband direction of arrival (DOA), wideband MUSIC
Procedia PDF Downloads 469
5411 Density-based Denoising of Point Cloud
Authors: Faisal Zaman, Ya Ping Wong, Boon Yian Ng
Abstract:
Point cloud source data for surface reconstruction are usually contaminated with noise and outliers. To overcome this, we present a novel approach using a modified kernel density estimation (KDE) technique with bilateral filtering to remove noisy points and outliers. First, we present a method for estimating the optimal bandwidth of multivariate KDE using particle swarm optimization, which ensures the robust performance of the density estimation. Then we use the mean-shift algorithm to find the local maxima of the density estimate, which give the centroids of the clusters. We then compute the distance of each point from its centroid. Points belonging to outliers are then removed by an automatic thresholding scheme, which yields an accurate and economical point surface. The experimental results show that our approach is comparably robust and efficient.
Keywords: point preprocessing, outlier removal, surface reconstruction, kernel density estimation
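A much-simplified density-based filter in the same spirit is sketched below: a Gaussian KDE density per point followed by thresholding of the lowest-density points. The PSO bandwidth tuning, mean-shift clustering, and bilateral filtering described in the abstract are omitted, and the bandwidth, keep fraction, and synthetic cloud are assumptions.

```python
import numpy as np

def kde_density(points, bandwidth=0.2):
    """Gaussian kernel density estimate evaluated at every point of the cloud
    (constant normalization factors are dropped since only relative density
    matters for thresholding)."""
    diffs = points[:, None, :] - points[None, :, :]          # pairwise differences
    sq_dist = np.sum(diffs ** 2, axis=-1)
    kernels = np.exp(-sq_dist / (2 * bandwidth ** 2))
    return kernels.sum(axis=1) / (len(points) * bandwidth)

def remove_outliers(points, bandwidth=0.2, keep_fraction=0.9):
    """Drop the lowest-density points, which are likely noise or outliers."""
    density = kde_density(points, bandwidth)
    threshold = np.quantile(density, 1.0 - keep_fraction)
    return points[density >= threshold]

rng = np.random.default_rng(1)
surface = rng.normal(0, 0.05, (500, 3)) + [0, 0, 1]    # dense points near a plane patch
outliers = rng.uniform(-2, 2, (50, 3))                  # scattered noise points
cloud = np.vstack([surface, outliers])

clean = remove_outliers(cloud, bandwidth=0.15, keep_fraction=0.9)
print(f"kept {len(clean)} of {len(cloud)} points")
```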
Procedia PDF Downloads 344
5410 Adaptive Envelope Protection Control for the below and above Rated Regions of Wind Turbines
Authors: Mustafa Sahin, İlkay Yavrucuk
Abstract:
This paper presents a wind turbine envelope protection control algorithm that protects Variable Speed Variable Pitch (VSVP) wind turbines from damage during operation throughout their below and above rated regions, i.e., from cut-in to cut-out wind speed. The proposed approach uses a neural network that can adapt to turbines and their operating points. An algorithm monitors instantaneous wind and turbine states, predicts a wind speed that would push the turbine to a pre-defined envelope limit and, when necessary, carries out an avoidance action. Simulations are realized using the MS Bladed Wind Turbine Simulation Model for the NREL 5 MW wind turbine equipped with baseline controllers. In all simulations, with the proposed algorithm, the turbine operates safely within the allowable limits throughout the below and above rated regions. Two example cases, adaptation to turbine operating points for the below and above rated regions and the corresponding protections, are investigated in simulations to show the capability of the proposed envelope protection system (EPS) algorithm, which reduces excessive wind turbine loads and is expected to increase the turbine service life.
Keywords: adaptive envelope protection control, limit detection and avoidance, neural networks, ultimate load reduction, wind turbine power control
Procedia PDF Downloads 136
5409 Non-Invasive Characterization of the Mechanical Properties of Arterial Walls
Authors: Bruno RamaëL, GwenaëL Page, Catherine Knopf-Lenoir, Olivier Baledent, Anne-Virginie Salsac
Abstract:
No routine technique currently exists for clinicians to measure the mechanical properties of vascular walls non-invasively. Most of the data available in the literature come from traction or dilatation tests conducted ex vivo on native blood vessels. The objective of this study is to develop a non-invasive characterization technique based on Magnetic Resonance Imaging (MRI) measurements of the deformation of vascular walls under pulsating blood flow conditions. The goal is to determine the mechanical properties of the vessels by inverse analysis, coupling imaging measurements and numerical simulations of the fluid-structure interactions. The hyperelastic properties are identified using Solidworks and Ansys Workbench (ANSYS Inc.) by solving an optimization problem. The vessel of interest targeted in the study is the common carotid artery. In vivo MRI measurements of the vessel anatomy and inlet velocity profiles were acquired along the facial vascular network on a cohort of 30 healthy volunteers: the time-evolution of the blood vessel contours, and thus of the cross-sectional surface area, was measured by 3D angiography sequences of phase-contrast MRI; the blood flow velocity was measured using a 2D CINE phase-contrast MRI (PC-MRI) method. Reference arterial pressure waveforms were simultaneously measured in the brachial artery using a sphygmomanometer. The three-dimensional (3D) geometry of the arterial network was reconstructed by first creating an STL file from the raw MRI data using the open-source imaging software ITK-SNAP. The resulting geometry was then transformed with Solidworks into volumes that are compatible with Ansys software. Tetrahedral meshes of the wall and fluid domains were built using the ANSYS Meshing software, with near-wall mesh refinement in the case of the fluid domain to improve the accuracy of the fluid flow calculations. Ansys Structural was used for the numerical simulation of the vessel deformation and Ansys CFX for the simulation of the blood flow. The fluid-structure interaction simulations showed that the systolic and diastolic blood pressures of the common carotid artery could be taken as reference pressures to identify the mechanical properties of the different arteries of the network. The coefficients of the hyperelastic law were identified using Ansys Design model for the common carotid. Under large deformations, a stiffness of 800 kPa is measured, which is of the same order of magnitude as the Young's modulus of collagen fibers. Areas of maximum deformation were highlighted near bifurcations. This study is a first step towards patient-specific characterization of the mechanical properties of the facial vessels. The method is currently applied to patients suffering from facial vascular malformations and to patients scheduled for facial reconstruction. Information on the blood flow velocity as well as on the vessel anatomy and deformability will be key to improving surgical planning in the case of such vascular pathologies.
Keywords: identification, mechanical properties, arterial walls, MRI measurements, numerical simulations
Procedia PDF Downloads 319
5408 Vector Quantization Based on Vector Difference Scheme for Image Enhancement
Authors: Biji Jacob
Abstract:
The vector quantization algorithm, which uses a minimum-distance calculation for codebook generation, performs a time-consuming calculation on each pixel value, which leads to computational complexity. The codebook is updated by comparing the distance of each vector to its centroid vector as a measure of closeness. In this paper, vector quantization is modified based on a vector difference algorithm for image enhancement purposes. In the proposed scheme, the vector differences between the vectors are considered as the new generation vectors, or new codebook vectors. The codebook is updated by comparing each new generation vector with a threshold value having minimum error with respect to the parent vector. The minimum error decides the fitness of each newly generated vector. Thus the codebook is generated in an adaptive manner, and the fitness value is determined for the suppression of the degraded portion of the image, thereby leading to the enhancement of the image through the adaptive searching capability of vector quantization with the vector difference algorithm. Experimental results show that the vector difference scheme efficiently modifies the vector quantization algorithm for enhancing the image, with peak signal-to-noise ratio (PSNR), mean square error (MSE), and Euclidean distance (E_dist) as the performance parameters.
Keywords: codebook, image enhancement, vector difference, vector quantization
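For context, the sketch below shows only the baseline minimum-distance vector quantization (LBG / k-means style codebook training) that the abstract says is being modified; the vector-difference update itself is not reproduced. Block sizes, codebook size, and training data are illustrative assumptions.

```python
import numpy as np

def train_codebook(vectors, codebook_size=4, iters=20):
    """Baseline VQ codebook training: assign each training vector to its nearest
    codeword (squared Euclidean distance), then move each codeword to the
    centroid of its cluster.  The paper's vector-difference variant replaces this
    minimum-distance update; only the baseline is sketched here."""
    rng = np.random.default_rng(0)
    codebook = vectors[rng.choice(len(vectors), codebook_size, replace=False)].astype(float)
    for _ in range(iters):
        dists = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=-1)
        labels = dists.argmin(axis=1)                       # nearest-codeword assignment
        for k in range(codebook_size):
            members = vectors[labels == k]
            if len(members):
                codebook[k] = members.mean(axis=0)          # centroid update
    return codebook, labels

# 2x2 image blocks flattened into 4-D training vectors (synthetic example)
rng = np.random.default_rng(3)
blocks = np.vstack([rng.normal(m, 5, (50, 4)) for m in (20, 90, 160, 230)]).clip(0, 255)

codebook, labels = train_codebook(blocks, codebook_size=4)
mse = np.mean((blocks - codebook[labels]) ** 2)
print("codeword means:", codebook.mean(axis=1).round(1), " quantization MSE:", round(float(mse), 2))
```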
Procedia PDF Downloads 267
5407 Serum 25-Hydroxyvitamin D Levels in Korean Breast Cancer Patients
Authors: Sung Yong Kim, Byung Joo Song
Abstract:
Background: Circulating 25-hydroxyvitamin D (25(OH)D) levels have been considered to be inversely related to breast cancer development, recurrence risk, and mortality. Mean vitamin D levels in the Korean population are lower than in western countries due to a higher incidence of lactose intolerance and lower exposure to sunlight. The purpose of this study was to assess the incidence of 25(OH)D deficiency at diagnosis and after adjuvant chemotherapy and to investigate the correlation of serum 25(OH)D levels with clinicopathologic features. Methods: From December 2011 to October 2012, 280 breast cancer patients seen at a single tertiary cancer center were enrolled. Serum 25(OH)D was measured at the time of surgery and after completion of adjuvant chemotherapy. Statistical analyses used the chi-square test, Fisher's exact test, t-test, and ANOVA. Results: Mean serum 25(OH)D was 18.5 ng/ml. The 25(OH)D levels were deficient (<20 ng/ml) in 190 patients (67.9%), insufficient (20-29 ng/ml) in 51 patients (18.2%), and sufficient (30-150 ng/ml) in 39 patients (13.9%). A notable decrease in 25(OH)D concentration was observed (p<0.001) after chemotherapy but was not related to the chemotherapy regimen. Significantly lower 25(OH)D levels were found in the winter season (from October to March, p=0.030). Subjects with invasive carcinoma (IDC or ILC) had significantly lower circulating levels of 25(OH)D than those with ductal carcinoma in situ (DCIS) (p=0.010). Patients with larger tumor size tended to have lower serum 25(OH)D, but this was not statistically significant. Conclusions: Most of the breast cancer patients showed deficient or insufficient serum 25(OH)D concentrations. The incidence of vitamin D deficiency was higher in invasive carcinoma than in DCIS. Serum 25(OH)D levels were decreased after chemotherapy. Consideration should be given to vitamin D supplementation in these patients.
Keywords: breast neoplasms, vitamin D, Korean population, breast cancer
Procedia PDF Downloads 416
5406 Lumbar Punctures: Re-Audit of Procedure Documentation Following the Introduction of a Standardised Procedure Checklist
Authors: Hayley Lawrence, Nabi Shah, Sarah Dyer
Abstract:
Aims: Lumbar punctures are common bedside procedures performed in acute medicine. Published guidance exists on the standardised documentation of invasive procedures in order to reduce the risk of complications. The audit aim was to assess current standards of documentation in accordance with both the GMC and the National Standards for Invasive Procedures guidelines. A second cycle was conducted after introducing a standardised sticker created using current guidelines. This would assess whether the sticker improved documentation, aiming for a 100% standard in each step of the procedure. Methods: An initial prospective audit of current practice was conducted over a 3-month period. Patients were identified by their presenting complaints and by colleagues assessing acute medical patients. Initial findings were presented locally, and a further prospective audit was conducted following the implementation of a standardised sticker. Results: 19 lumbar punctures were included in the first cycle and 13 procedures in the second. Pre-procedure documentation was collected for each cycle, whereby documentation of ‘Indication’ improved from 5.3% to 84.6%, ‘Consent’ from 84.2% to 100%, ‘Coagulopathy’ from 0% to 61.5%, ‘Drug Chart checked’ from 0% to 100%, ‘Position of patient’ from 26.3% to 100% and use of ‘Aseptic Technique’ from 83.3% to 100% from the first to the second cycle respectively. ‘Level of Doctor’ and ‘Supervision’ decreased from 53% to 31% and 53% to 46%, respectively, in the second cycle. Documentation of the procedure itself also demonstrated improvements, with ‘Level of Insertion’ 15.8% to 100%, ‘Name of Antiseptic Used’ 11.1% to 69.2%, ‘Local Anaesthetic Used’ 26.3% to 53.8%, ‘Needle Gauge’ 42.1% to 76.9%, ‘Number of Attempts’ 78.9% to 100% and ‘Traumatic/Atraumatic’ procedure 26.3% to 92.3%, respectively. A similar number of opening pressures were documented in each cycle at 57.9% and 53.8%, respectively, but their documentation was deemed ‘Not Applicable’ in a higher number of patients in the second cycle. Post-procedure documentation improved, with ‘Number of Samples obtained’ increasing from 52.6% to 92.3% and documentation of ‘Immediate Complications’ increasing from 78.9% to 100%. ‘Dressing Applied’ was poorly documented in the first cycle at 16.7%. This was not included on the standardised sticker, resulting in 0% documentation in the second cycle. Documentation of the clinician’s name and bleep reduced from 63.2% to 15.4%, but when the name only was analysed, this increased to 84.6%. Conclusions: Standardised stickers for lumbar punctures do improve documentation and hence should result in improved patient safety. There is still room for improvement to reach the 100% standard in each area, especially with respect to the clinician’s name and contact details being documented. Final adjustments will be made to the sticker before it is included in a lumbar puncture kit, which will be made readily available in the acute medical wards. Future audits could be extended to include other common bedside procedures performed in acute medicine to ensure documentation of all these procedures reaches the 100% standard.
Keywords: invasive procedure, lumbar puncture, medical record keeping, procedure checklist, procedure documentation, standardised documentation
Procedia PDF Downloads 104
5405 A Resource Optimization Strategy for CPU (Central Processing Unit) Intensive Applications
Authors: Junjie Peng, Jinbao Chen, Shuai Kong, Danxu Liu
Abstract:
Under traditional resource allocation strategies, the usage of resources on physical servers in a cloud data center is highly uncertain. Resources are wasted if too few tasks are assigned; conversely, servers are overloaded if too many tasks are assigned. This is especially obvious when the applications are of the same type because of their resource preferences. Considering that CPU intensive applications are one of the most common types of application in the cloud, we studied the optimization strategy for CPU intensive applications on the same server. We used resource preferences to analyze the case in which multiple CPU intensive applications run simultaneously, and put forward a model that can predict the execution time of CPU intensive applications running simultaneously. Based on the prediction model, we proposed a method to select the appropriate number of applications for a machine. Experiments show that the model can predict the execution time accurately for CPU intensive applications. To improve the execution efficiency of applications, we also propose a priority-based scheduling model for CPU intensive applications. Extensive experiments verify the validity of the scheduling model.
Keywords: cloud computing, CPU intensive applications, resource optimization, strategy
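The paper's prediction model is derived from resource preferences and is not reproduced here; the sketch below only illustrates the overall workflow the abstract describes: fit a simple model of execution time versus the number of co-located CPU-bound applications from assumed profiling data, then pick the largest number of applications whose predicted slowdown stays under a limit. The data, the quadratic model, and the 30% slowdown limit are assumptions.

```python
import numpy as np

# Assumed profiling data: measured execution time (s) of one CPU-bound task
# when n identical tasks share the same physical server.
n_apps = np.array([1, 2, 3, 4, 6, 8])
exec_time = np.array([100, 102, 105, 118, 160, 230], dtype=float)

# Fit a simple quadratic model t(n) as an illustrative stand-in for the
# resource-preference-based prediction model of the paper.
coeffs = np.polyfit(n_apps, exec_time, deg=2)
predict = np.poly1d(coeffs)

def max_colocated(slowdown_limit=1.3, cap=64):
    """Largest number of co-located CPU-intensive apps whose predicted
    execution time stays within `slowdown_limit` x the solo run time."""
    solo = predict(1)
    n = 1
    while n < cap and predict(n + 1) <= slowdown_limit * solo:
        n += 1
    return n

print("predicted time for 5 apps:", round(float(predict(5)), 1), "s")
print("max apps within 30% slowdown:", max_colocated(1.3))
```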
Procedia PDF Downloads 279
5404 Mathematical Model and Algorithm for the Berth and Yard Resource Allocation at Seaports
Authors: Ming Liu, Zhihui Sun, Xiaoning Zhang
Abstract:
This paper studies a deterministic container transportation problem, jointly optimizing the berth allocation, quay crane assignment, and yard storage allocation at container ports. The problem is formulated as an integer program to coordinate the decisions. Because of its large scale, it is then transformed into a set partitioning formulation, and a branch-and-price algorithm framework is provided to solve it.
Keywords: branch-and-price, container terminal, joint scheduling, maritime logistics
Procedia PDF Downloads 293
5403 Sensitivity Analysis of Prestressed Post-Tensioned I-Girder and Deck System
Authors: Tahsin A. H. Nishat, Raquib Ahsan
Abstract:
Sensitivity analysis of the design parameters of an optimization procedure can become a significant factor while designing any structural system. The objectives of this study are to analyze the sensitivity of the deck slab thickness parameter obtained from both the conventional and the optimum design methodology of a pre-stressed post-tensioned I-girder and deck system, and to compare the relative significance of the slab thickness. For the analysis of the conventional method, the values of 14 design parameters obtained by the conventional iterative design method for a real-life I-girder bridge project have been considered. For the analysis of the optimization method, cost optimization of this system has been carried out using the global optimization methodology 'Evolutionary Operation (EVOP)'. The problem, from which the optimum values of the 14 design parameters have been obtained, contains 14 explicit constraints and 46 implicit constraints. For both types of design parameters, a sensitivity analysis has been conducted on the deck slab thickness parameter, which can become too sensitive for the obtained optimum solution. Deviations of the slab thickness on both the upper and lower side of its optimum value have been considered, reflecting its realistic possible range of variation during construction. In this procedure, the remaining parameters have been kept unchanged. For small deviations from the optimum value, compliance with the explicit and implicit constraints has been examined. Variations in the cost have also been estimated. It is found that, without violating any constraint, the deck slab thickness obtained by the conventional method can be increased up to 25 mm, whereas the slab thickness obtained by cost optimization can be increased only up to 0.3 mm. The obtained result suggests that the slab thickness is less sensitive in the case of the conventional design method. Therefore, for realistic design purposes, a sensitivity analysis should be conducted for either design procedure of the girder and deck system.
Keywords: sensitivity analysis, optimum design, evolutionary operations, PC I-girder, deck system
Procedia PDF Downloads 137
5402 Porul: Option Generation and Selection and Scoring Algorithms for a Tamil Flash Card Game
Authors: Anitha Narasimhan, Aarthy Anandan, Madhan Karky, C. N. Subalalitha
Abstract:
Games can be excellent tools for teaching a language. There are a few e-learning games in Indian languages, such as word scrabble, crosswords, and quiz games, which were developed mainly for educational purposes. This paper proposes a Tamil word game called "Porul", which focuses on education as well as on players' thinking and decision-making skills. Porul is a multiple-choice quiz game in which the players attempt to answer questions correctly from the given options, which are generated using a unique algorithm called the Option Selection algorithm. This algorithm explores the semantics of the question in various dimensions, namely synonym, rhyme, and Universal Networking Language semantic category. This kind of semantic exploration of the question not only increases the complexity of the game but also makes it more interesting. The paper also proposes a Scoring Algorithm, which allots a score based on the popularity score of the question word. The proposed game has been tested using 20,000 Tamil words.
Keywords: Porul game, Tamil word game, option selection, flash card, scoring, algorithm
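The paper does not spell out the scoring formula, so the sketch below only illustrates one plausible popularity-based scheme: rarer question words earn more points, with the score shrinking logarithmically as the word's corpus count grows. The word counts, the mapping, and the score range are assumptions, not the game's actual Scoring Algorithm.

```python
import math

# Assumed word-popularity table (occurrence counts in a reference corpus); the
# real game would draw these counts from its 20,000-word Tamil lexicon.
popularity = {"nilam": 5400, "aaru": 3200, "thendral": 450, "kurinji": 60}

def score(word, base=100, max_count=10000):
    """Allot more points for answering a question built on a rarer word:
    rarity is 1 for a word seen once and 0 for the most common words."""
    count = popularity.get(word, 1)
    rarity = 1.0 - math.log(count + 1) / math.log(max_count + 1)
    return round(base * (0.2 + 0.8 * rarity))   # keep a floor of 20% of the base score

for w in popularity:
    print(w, "->", score(w), "points")
```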
Procedia PDF Downloads 404