Search results for: radial error
1213 Discrete Sliding Modes Regulator with Exponential Holder for Non-Linear Systems
Authors: G. Obregon-Pulido , G. C. Solis-Perales, J. A. Meda-Campaña
Abstract:
In this paper, we present a sliding mode controller in discrete time. The design of the controller is based on the theory of regulation for nonlinear systems. In the problem of disturbance rejection and/or output tracking, it is known that in discrete time a controller that uses the zero-order holder only guarantees tracking at the sampling instants but not between them. It is shown that, using the so-called exponential holder, it is possible to guarantee asymptotic zero output tracking error also between the sampling instants. To stabilize the closed-loop system, we introduce the sliding mode approach, relaxing the requirement of the existence of a linear stabilizing control law.
Keywords: regulation theory, sliding modes, discrete controller, ripple-free tracking
Procedia PDF Downloads 54
1212 The Interaction between Human and Environment on the Perspective of Environmental Ethics
Authors: Mella Ismelina Farma Rahayu
Abstract:
Environmental problems could not be separated from unethical human perspectives and behaviors toward the environment. There is a fundamental error in people’s philosophical perspective on humans, nature, and their relationship with the environment, which in turn creates inappropriate behavior toward the environment. The aim of this study is to investigate and to understand the ethics of the environment in the context of humans interacting with the environment by using the hermeneutic approach. The related theories and concepts collected from a literature review are used as data, which were analyzed by using interpretation, critical evaluation, internal coherence, comparisons, and heuristic techniques. As a result of this study, a picture emerges of the interaction of humans and the environment from the perspective of environmental ethics, as well as of the problems of the value of ecological justice in that interaction. We suggest that the interaction between humans and the environment needs to be based on environmental ethics, in a spirit of mutual respect between humans and the natural world.
Keywords: environment, environmental ethics, interaction, value
Procedia PDF Downloads 422
1211 Survival and Hazard Maximum Likelihood Estimator with Covariate Based on Right Censored Data of Weibull Distribution
Authors: Al Omari Mohammed Ahmed
Abstract:
This paper focuses on the maximum likelihood estimator with covariates. Covariates are incorporated into the Weibull model. Under this regression model, the covariate parameters, shape parameter, survival function, and hazard rate of the Weibull regression distribution with right-censored data are estimated by maximum likelihood. The mean square error (MSE) and absolute bias are used to assess the performance of the Weibull regression estimators. For the simulation comparison, the study used various sample sizes and several specific values of the Weibull shape parameter.
Keywords: Weibull regression distribution, maximum likelihood estimator, survival function, hazard rate, right censoring
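A minimal sketch of how such an estimator can be set up is given below, assuming a single covariate entering the Weibull scale through a log-link and using scipy for the optimisation; the simulated data, link function, and starting values are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# --- simulate illustrative right-censored Weibull regression data (assumption) ---
n = 500
x = rng.normal(size=n)                        # single covariate
true_shape, b0, b1 = 1.5, 0.5, 0.8            # illustrative true values
scale = np.exp(b0 + b1 * x)                   # log-link for the scale parameter
t = scale * rng.weibull(true_shape, size=n)   # event times
c = rng.exponential(np.median(t) * 2, n)      # censoring times
y = np.minimum(t, c)                          # observed time
delta = (t <= c).astype(float)                # 1 = event observed, 0 = censored

# --- negative log-likelihood with right censoring ---
def negloglik(params):
    log_k, beta0, beta1 = params
    k = np.exp(log_k)                         # shape > 0
    lam = np.exp(beta0 + beta1 * x)           # scale > 0
    z = y / lam
    log_f = np.log(k) - np.log(lam) + (k - 1) * np.log(z) - z**k   # density (events)
    log_S = -z**k                                                  # survival (censored)
    return -np.sum(delta * log_f + (1 - delta) * log_S)

res = minimize(negloglik, x0=np.zeros(3), method="Nelder-Mead")
k_hat = np.exp(res.x[0])
print("shape:", k_hat, "beta0:", res.x[1], "beta1:", res.x[2])

# survival function and hazard rate at a covariate value x0 (illustrative)
x0 = 0.0
lam0 = np.exp(res.x[1] + res.x[2] * x0)
tgrid = np.linspace(0.01, np.quantile(y, 0.95), 50)
S = np.exp(-(tgrid / lam0) ** k_hat)
h = (k_hat / lam0) * (tgrid / lam0) ** (k_hat - 1)
```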
Procedia PDF Downloads 441
1210 Robust Speed Sensorless Control to Estimated Error for PMa-SynRM
Authors: Kyoung-Jin Joo, In-Gun Kim, Hyun-Seok Hong, Dong-Woo Kang, Ju Lee
Abstract:
Recently, the permanent magnet-assisted synchronous reluctance motor (PMa-SynRM), which can be substituted for the induction motor, has been studied because of the need to develop premium high-efficiency motors for the minimum energy performance standard (MEPS). The PMa-SynRM requires speed and position information for motor speed and torque control. However, applying sensors raises many problems, such as a shortage of sensor mounting space and additional cost. Therefore, in this paper, speed-sensorless control based on a model reference adaptive system (MRAS) is introduced to eliminate the sensor. The sensorless method is constructed with a reference model as the standard and an adaptive model as the state observer. The proposed algorithm is verified by simulation.
Keywords: PMa-SynRM, sensorless control, robust estimation, MRAS method
Procedia PDF Downloads 404
1209 Neural Network Based Path Loss Prediction for Global System for Mobile Communication in an Urban Environment
Authors: Danladi Ali
Abstract:
In this paper, we measured GSM signal strength in the city of Dnepropetrovsk in order to predict path loss in the study area using nonlinear autoregressive neural network prediction, and we also used neural network clustering to determine the average GSM signal strength received in the study area. The nonlinear autoregressive neural network predicted that the GSM signal is attenuated with a mean square error (MSE) of 2.6748 dB; this attenuation value is used to modify the COST 231 Hata and Okumura-Hata models. The neural network clustering revealed that signal strengths between -75 dB and -95 dB are received most frequently. This means that the signal strength received in the study area is mostly weak.
Keywords: one-dimensional multilevel wavelets, path loss, GSM signal strength, propagation, urban environment and model
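As a concrete reference for the model being modified, a short sketch of the COST 231 Hata formula with an additive measured offset is given below; the carrier frequency, antenna heights, and the offset value are illustrative assumptions (the abstract reports an MSE, not the correction term actually applied).

```python
import numpy as np

def cost231_hata(d_km, f_mhz=1800.0, h_base=30.0, h_mobile=1.5, metro=False):
    """COST 231 Hata median path loss in dB (valid roughly for 1500-2000 MHz)."""
    # mobile antenna correction term for small/medium cities
    a_hm = (1.1 * np.log10(f_mhz) - 0.7) * h_mobile - (1.56 * np.log10(f_mhz) - 0.8)
    c = 3.0 if metro else 0.0
    return (46.3 + 33.9 * np.log10(f_mhz) - 13.82 * np.log10(h_base) - a_hm
            + (44.9 - 6.55 * np.log10(h_base)) * np.log10(d_km) + c)

# Hypothetical correction: shift the model by a measured mean error. The 2.6748 dB
# figure in the abstract is an MSE, so the offset used by the authors is not known.
def corrected_path_loss(d_km, offset_db=2.7, **kwargs):
    return cost231_hata(d_km, **kwargs) + offset_db

distances = np.array([0.5, 1.0, 2.0, 5.0])           # km
print(np.round(corrected_path_loss(distances), 1))   # path loss in dB
```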
Procedia PDF Downloads 382
1208 Studying the Evolution of Soot and Precursors in Turbulent Flames Using Laser Diagnostics
Authors: Muhammad A. Ashraf, Scott Steinmetz, Matthew J. Dunn, Assaad R. Masri
Abstract:
This study focuses on the evolution of soot and soot precursors in three different piloted diffusion turbulent flames. The fuel compositions are as follows: flame A (ethylene/nitrogen, 2:3 by volume), flame B (ethylene/air, 2:3 by volume), and flame C (pure methane). These flames are stabilized using a 4 mm diameter jet surrounded by a pilot annulus with an outer diameter of 15 mm. The pilot issues combustion products from stoichiometric premixed flames of hydrogen, acetylene, and air. In all cases, the jet Reynolds number is 10,000, and air flows in the coflow stream at a velocity of 5 m/s. Time-resolved laser-induced fluorescence (LIF) is collected at two wavelength bands in the visible (445 nm) and UV (266 nm) regions, along with laser-induced incandescence (LII). The combined results are employed to study the concentration, size, and growth of soot and its precursors. A set of four fast photomultiplier tubes is used to record emission data in the temporal domain. A 266 nm laser pulse preferentially excites smaller nanoparticles, which emit a fluorescence spectrum that is analysed to track the presence, evolution, and destruction of nanoparticles. A 1064 nm laser pulse excites sufficiently large soot particles, and the resulting incandescence is collected at 1064 nm. At downstream and outer radial locations, intermittency becomes a relevant factor. Therefore, data collected in the turbulent flames are conditioned to account for intermittency, so that the resulting mean profiles for scattering, fluorescence, and incandescence are shown for the events that contain traces of soot. It is found that in the upstream regions of the ethylene-air and ethylene-nitrogen flames, the presence of soot precursors is rather similar. However, further downstream, soot concentration grows larger in the ethylene-air flames.
Keywords: laser induced incandescence, laser induced fluorescence, soot, nanoparticles
Procedia PDF Downloads 146
1207 Required SNR for PPM in Downlink Gamma-Gamma Turbulence Channel
Authors: Selami Şahin
Abstract:
In this paper, in order to achieve a sufficient bit error rate (BER) according to the zenith angle between the satellite and the ground station, the SNR requirement is investigated utilizing pulse position modulation (PPM). To obtain explicit results, all parameters, such as link distance, Rytov variance, scintillation index, wavelength, aperture diameter of the receiver, Fried's parameter, and zenith angle, have been taken into account. Results indicate that, once some parameters are fixed by the constraints of the system, the SNR required to achieve the desired BER spans a wide range as the zenith angle changes from small to large values. Therefore, in order not to use a high link margin, either the SNR should be adjusted according to the zenith angle or the link should be established within predetermined intervals of the zenith angle.
Keywords: free-space optical communication, optical downlink channel, atmospheric turbulence, wireless optical communication
Procedia PDF Downloads 401
1206 Simulation of GAG-Analogue Biomimetics for Intervertebral Disc Repair
Authors: Dafna Knani, Sarit S. Sivan
Abstract:
Aggrecan, one of the main components of the intervertebral disc (IVD), belongs to the family of proteoglycans (PGs), which are composed of glycosaminoglycan (GAG) chains covalently attached to a core protein. Its primary function is to maintain tissue hydration, and hence disc height, under the high loads imposed by muscle activity and body weight. Significant PG loss is one of the first indications of disc degeneration. A possible solution to recover disc function is to inject a synthetic hydrogel into the joint cavity, hence mimicking the role of PGs. One of the hydrogels proposed is a GAG analogue based on sulfate-containing polymers, which are responsible for hydration in disc tissue. In the present work, we used molecular dynamics (MD) to study the effect of the hydrogel crosslinking (type and degree) on the swelling behavior of the suggested GAG-analogue biomimetics by calculating the cohesive energy density (CED), solubility parameter, enthalpy of mixing (ΔEmix), and the interactions between the molecules in pure form and as a mixture with water. The simulation results showed that hydrophobicity plays an important role in the swelling of the hydrogel, as indicated by the linear correlation observed between the solubility parameter values of the copolymers and the crosslinker weight ratio (w/w); this correlation was found useful in predicting the amount of PEGDA needed for the desirable hydration behavior of (CS)₄-peptide. Enthalpy of mixing calculations showed that all the GAG analogues, (CS)₄ and (CS)₄-peptide, are water-soluble; radial distribution function analysis revealed that they form interactions with water molecules, which is important for the hydration process. To conclude, our simulation results, beyond supporting the experimental data, can be used as a useful predictive tool in the future development of biomaterials, such as disc replacements.
Keywords: molecular dynamics, proteoglycans, enthalpy of mixing, swelling
Procedia PDF Downloads 75
1205 Analysis of Rock Cutting Progress with a New Axe-Shaped PDC Cutter to Improve PDC Bit Performance in Elastoplastic Formation
Authors: Fangyuan Shao, Wei Liu, Deli Gao
Abstract:
Polycrystalline diamond compact (PDC) bits have taken a large share of the market for unconventional oil and gas drilling. The application of PDC bits benefits from the efficient rock breaking of PDC cutters. In response to increasingly complex formations, many shaped cutters have been invented, but the rock-breaking mechanism of many of them has not been resolved. In this paper, two kinds of PDC cutters, a new axe-shaped (NAS) cutter and a cylindrical cutter (benchmark), were studied by laboratory experiments. The NAS cutter is obtained by optimizing the two sides of an axe-shaped cutter with curved surfaces. All the cutters were mounted on a vertical turret lathe (VTL) in the laboratory for cutting tests. According to the cutting distance, the VTL tests can be divided into two modes: single-turn rotary cutting and continuous cutting. The depth of cut (DOC) was set at 1.0 mm and 2.0 mm in the former mode. The latter mode includes a dry VTL test for thermal stability and a wet VTL test for wear resistance. A load cell and a 3D optical profiler were used to obtain the cutting forces and wear area, respectively. Based on the findings of the single-turn rotary cutting VTL tests, the performance of the NAS cutter was better than that of the benchmark cutter in elastoplastic material cutting. The cutting forces (normal force, tangential force, and radial force) and mechanical specific energy (MSE) of the NAS cutter were lower than those of the benchmark cutter under the same conditions. This means that the NAS cutter was more efficient at breaking elastoplastic material. However, the wear resistance of the new axe-shaped cutter was higher than that of the benchmark cutter. The results of the dry VTL test showed that the thermal stability of the NAS cutter was higher than that of the benchmark cutter. The cutting efficiency can be improved by optimizing the geometric structure of the PDC cutter. The change in thermal stability may be caused by the decrease in the contact area between cutter and rock at a given DOC. The conclusions of this paper can be used as an important reference for PDC cutter designers.
Keywords: axe-shaped cutter, PDC cutter, rotary cutting test, vertical turret lathe
Procedia PDF Downloads 203
1204 Temporally Coherent 3D Animation Reconstruction from RGB-D Video Data
Authors: Salam Khalifa, Naveed Ahmed
Abstract:
We present a new method to reconstruct a temporally coherent 3D animation from single- or multi-view RGB-D video data using unbiased feature point sampling. Given RGB-D video data in the form of a 3D point cloud sequence, our method first extracts feature points using both color and depth information. In the subsequent steps, these feature points are used to match two 3D point clouds in consecutive frames independently of their resolution. Our new motion-vector-based dynamic alignment method then fully reconstructs a spatio-temporally coherent 3D animation. We perform extensive quantitative validation using novel error functions to analyze the results. We show that, despite the limiting factors of temporal and spatial noise associated with RGB-D data, it is possible to extract temporal coherence to faithfully reconstruct a temporally coherent 3D animation from RGB-D video data.
Keywords: 3D video, 3D animation, RGB-D video, temporally coherent 3D animation
Procedia PDF Downloads 373
1203 Towards Automated Remanufacturing of Marine and Offshore Engineering Components
Authors: Aprilia, Wei Liang Keith Nguyen, Shu Beng Tor, Gerald Gim Lee Seet, Chee Kai Chua
Abstract:
Automated remanufacturing processes are of great interest in today’s marine and offshore industry. Most of the current remanufacturing processes are carried out manually, and hence they are error-prone, labour-intensive, and costly. In this paper, a conceptual framework for automated remanufacturing is presented. This framework involves the integration of 3D non-contact digitization, adaptive surface reconstruction, additive manufacturing, and machining operations. Each operation is run and interconnected automatically as one system. The feasibility of adaptive surface reconstruction on marine and offshore engineering components is also discussed. Several engineering components were evaluated, and the results showed that the proposed system is feasible. Conclusions are drawn and further research work is discussed.
Keywords: adaptive surface reconstruction, automated remanufacturing, automatic repair, reverse engineering
Procedia PDF Downloads 326
1202 A New Framework for ECG Signal Modeling and Compression Based on Compressed Sensing Theory
Authors: Siavash Eftekharifar, Tohid Yousefi Rezaii, Mahdi Shamsi
Abstract:
The purpose of this paper is to exploit the compressed sensing (CS) method in order to model and compress electrocardiogram (ECG) signals at a high compression ratio. In order to obtain a sparse representation of the ECG signals, a suitable basis matrix with Gaussian kernels, which are shown to fit the ECG signals nicely, is first constructed. Then the sparse model is extracted by applying an optimization technique. Finally, CS theory is utilized to obtain a compressed version of the sparse signal. Reconstruction of the ECG signal from the compressed version is also carried out to prove the reliability of the algorithm. At this stage, a greedy optimization technique is used to reconstruct the ECG signal, and the mean square error (MSE) is calculated to evaluate the precision of the proposed compression method.
Keywords: compressed sensing, ECG compression, Gaussian kernel, sparse representation
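A minimal end-to-end sketch of this pipeline is given below, assuming a Gaussian-kernel dictionary, a random Gaussian sensing matrix, a synthetic ECG-like test signal, and scikit-learn's orthogonal matching pursuit as the greedy solver; none of these specific choices are taken from the paper.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(1)
N = 256                                   # signal length

# --- Gaussian-kernel dictionary (centres x widths), an illustrative basis ---
t = np.arange(N)
centres = np.arange(0, N, 4)
widths = [2.0, 5.0, 10.0]
Psi = np.column_stack([np.exp(-0.5 * ((t - c) / w) ** 2)
                       for w in widths for c in centres])
Psi /= np.linalg.norm(Psi, axis=0)

# --- synthetic "ECG-like" signal: a few Gaussian bumps plus mild noise ---
coef_true = np.zeros(Psi.shape[1])
coef_true[rng.choice(Psi.shape[1], 8, replace=False)] = rng.normal(0, 2, 8)
ecg = Psi @ coef_true + 0.01 * rng.normal(size=N)

# --- compressed sensing: random Gaussian measurements, M << N ---
M = 64                                    # 4:1 compression ratio
Phi = rng.normal(size=(M, N)) / np.sqrt(M)
y = Phi @ ecg                             # compressed version

# --- greedy recovery of the sparse code, then signal reconstruction ---
A = Phi @ Psi
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=12, fit_intercept=False)
omp.fit(A, y)
ecg_rec = Psi @ omp.coef_

mse = np.mean((ecg - ecg_rec) ** 2)
print(f"MSE = {mse:.6f}")
```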
Procedia PDF Downloads 462
1201 Context-Aware Recommender System Using Collaborative Filtering, Content-Based Algorithm and Fuzzy Rules
Authors: Xochilt Ramirez-Garcia, Mario Garcia-Valdez
Abstract:
Contextual recommendations are implemented in recommender systems to improve user satisfaction: the recommender system makes accurate and suitable recommendations for a particular situation, thereby reaching personalized recommendations. The context provides information relevant to the recommender system and is used as a filter for the selection of relevant items for the user. This paper presents a context-aware recommender system, which uses techniques based on collaborative filtering and content-based algorithms, as well as fuzzy rules, to recommend items within the context. The dataset used to test the system is Trip Advisor. The accuracy of the recommendations was evaluated with the mean absolute error.
Keywords: algorithms, collaborative filtering, intelligent systems, fuzzy logic, recommender systems
Procedia PDF Downloads 421
1200 Forecasting Amman Stock Market Data Using a Hybrid Method
Authors: Ahmad Awajan, Sadam Al Wadi
Abstract:
In this study, a hybrid method based on Empirical Mode Decomposition and Holt-Winters (EMD-HW) is used to forecast Amman stock market data. First, the data are decomposed by the EMD method into Intrinsic Mode Functions (IMFs) and a residual component. Then, all components are forecasted by the HW technique. Finally, the component forecasts are aggregated to obtain the forecast of the stock market data. Empirical results showed that the EMD-HW outperforms the individual forecasting models. The strength of EMD-HW lies in its ability to forecast non-stationary and non-linear time series without the need for any transformation method. Moreover, EMD-HW has relatively high accuracy compared with eight existing forecasting methods, based on five forecast error measures.
Keywords: Holt-Winter method, empirical mode decomposition, forecasting, time series
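A compact sketch of this decompose-forecast-aggregate scheme is shown below, assuming the PyEMD package for the decomposition and statsmodels' ExponentialSmoothing for the Holt-Winters step; the synthetic series, the additive-trend HW configuration, and the forecast horizon are illustrative assumptions rather than the paper's settings.

```python
import numpy as np
from PyEMD import EMD                               # pip install EMD-signal (assumed)
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(2)
n, horizon = 300, 20
t = np.arange(n)
series = 100 + 0.05 * t + 5 * np.sin(2 * np.pi * t / 50) + rng.normal(0, 1, n)

# 1) decompose the series into IMFs plus a residual component
emd = EMD()
emd(series)
imfs, residue = emd.get_imfs_and_residue()
components = list(imfs) + [residue]

# 2) forecast every component with Holt-Winters (additive trend, no seasonality
#    assumed here; the paper's exact HW configuration is not reproduced)
component_forecasts = []
for comp in components:
    model = ExponentialSmoothing(comp, trend="add", seasonal=None,
                                 initialization_method="estimated").fit()
    component_forecasts.append(model.forecast(horizon))

# 3) aggregate the component forecasts to obtain the final forecast
forecast = np.sum(component_forecasts, axis=0)
print(forecast[:5])
```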
Procedia PDF Downloads 129
1199 Assessment of Kinetic Trajectory of the Median Nerve from Wrist Ultrasound Images Using Two Dimensional Bayesian Speckle Tracking Technique
Authors: Li-Kai Kuo, Shyh-Hau Wang
Abstract:
The kinetic trajectory of the median nerve (MN) in the wrist has been shown to be applicable to the assessment of carpal tunnel syndrome (CTS) and can be detected in high-frequency ultrasound images via motion tracking techniques. Yet, the previous study could not perform the measurement quickly due to the use of a single-element transducer for ultrasound image scanning; therefore, that system is not appropriate for clinical application. In the present study, B-mode ultrasound images of the wrist corresponding to movements of the fingers from flexion to extension were acquired by a clinically applicable real-time scanner. The kinetic trajectories of the MN were estimated off-line utilizing a two-dimensional Bayesian speckle tracking (TDBST) technique. The experiments were carried out on ten volunteers with an ultrasound scanner at 12 MHz frequency. Results verified by phantom experiments have demonstrated that the TDBST technique is able to detect the movement of the MN based on signals of past and present information and thereby to reduce the computational complications associated with such image quality effects as resolution and contrast variations. Moreover, the TDBST technique tended to be more accurate than the normalized cross-correlation tracking (NCCT) technique used in the previous study to detect movements of the MN in the wrist. In response to the fingers’ flexion movement, the kinetic trajectory of the MN moved toward the ulnar-palmar direction, and then toward the radial-dorsal direction corresponding to the extension movement. The TDBST technique and the employed ultrasound image scanner have been verified to be feasible for sensitively detecting the kinetic trajectory and displacement of the MN. They could thus be further applied to diagnose CTS clinically and to improve measurements for assessing the 3D trajectory of the MN.
Keywords: Bayesian speckle tracking, carpal tunnel syndrome, median nerve, motion tracking
Procedia PDF Downloads 495
1198 Time Synchronization between the eNBs in E-UTRAN under the Asymmetric IP Network
Abstract:
In this paper, we present a method for time synchronization between two eNodeBs (eNBs) in an E-UTRAN (Evolved Universal Terrestrial Radio Access Network). The two eNBs cooperate in the so-called inter-eNB CA (Carrier Aggregation) case and are connected via an asymmetric IP network. We solve the problem by using broadcast signals generated in E-UTRAN as synchronization signals. The results show that time synchronization with the proposed method is possible with an error significantly smaller than 1 ms, which is sufficient considering that the transmission time interval is 1 ms in E-UTRAN. This makes this low-complexity method more suitable than the Network Time Protocol (NTP) for mobile applications with generated broadcast signals where time synchronization over an asymmetric network is required.
Keywords: IP scheduled throughput, E-UTRAN, Evolved Universal Terrestrial Radio Access Network, NTP, Network Time Protocol, asymmetric network, delay
Procedia PDF Downloads 361
1197 Extracting an Experimental Relation between SMD, Mass Flow Rate, Velocity and Pressure in Swirl Fuel Atomizers
Authors: Mohammad Hassan Ziraksaz
Abstract:
Fuel atomizers are used in a wide range of IC engines, turbojets, and a variety of liquid propellant rocket engines. As the fuel spray fully develops, its characteristics approach their ultimate values. Fuel spray characteristics such as SMD, injection pressure, mass flow rate, droplet velocity, and spray cone angle play important roles in atomizing the liquid fuel into finely atomized droplets and finally forming the fine fuel spray. A well-performed, fully developed, fine spray without any defects suggests the idea of finding an experimental relation between the main effective spray characteristics. Extracting an experimental relation between SMD and the other physical spray characteristics in swirl fuel atomizers is the main scope of this experimental work. Droplet velocity, fuel mass flow rate, SMD, and spray cone angle are the parameters measured. A set of twelve reverse-engineered atomizers without any spray defects and a set of eight original atomizers providing the reference well-performed spray are used in this work. More than 350 tests, mostly repeated, were performed. This work shows that although the spray cone angle plays a very effective role in spray formation, after formation it smoothly approaches an almost constant value while the other characteristics change to create fine droplets. Therefore, the work to find the relation between the characteristics is focused on SMD, droplet velocity, fuel mass flow rate, and injection pressure. The process of fuel spray formation begins at 5 psig injection pressure, where a tiny fuel onion attaches to the injector tip, and ends at 250 psig injection pressure, where the fully developed fine fuel spray forms. The injection pressure is gradually increased to observe how the spray forms. At each step, all parameters are measured and recorded carefully to provide a data bank. Various diagrams have been drawn to study the behavior of the parameters in more detail. Experiments and graphs show that a power equation best describes the changes in the parameters. The experimental relation of SMD with pressure P, fuel mass flow rate Q̇, and droplet velocity V is extracted individually in pairs. Thus, the proportional relation of SMD with the other parameters is found. The next step is to find an experimental relation including all the parameters. Using the obtained proportional relation, replacing the parameters with the experimentally measured ones, and drawing graphs of experimental SMD versus proportional SMD (SMDₚ), a correction equation and consequently the final experimental equation are obtained. This experimental equation is specified for swirl fuel atomizers, and its use under different conditions shows about 3% error; lower error, and consequently higher accuracy, is expected to be achieved by increasing the number of experiments and the accuracy of data collection.
Keywords: droplet velocity, experimental relation, mass flow rate, SMD, swirl fuel atomizer
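A small sketch of the log-log least-squares step that extracts the exponents of such a power relation is shown below; the synthetic pressures, flow rates, velocities, and exponents are placeholders invented for illustration and are not the paper's measurements or final equation.

```python
import numpy as np

rng = np.random.default_rng(3)

# synthetic stand-ins for the measured spray data (NOT the paper's data)
P = rng.uniform(5, 250, 60)                              # injection pressure, psig
Q = 0.002 * np.sqrt(P) * rng.uniform(0.95, 1.05, 60)     # mass flow rate
V = 3.0 * np.sqrt(P) * rng.uniform(0.95, 1.05, 60)       # droplet velocity
SMD = 800 * P**-0.35 * Q**0.10 * V**-0.20 * rng.uniform(0.97, 1.03, 60)

# power-law model SMD = C * P^a * Q^b * V^c  ->  linear in log space
X = np.column_stack([np.ones_like(P), np.log(P), np.log(Q), np.log(V)])
coeffs, *_ = np.linalg.lstsq(X, np.log(SMD), rcond=None)
logC, a, b, c = coeffs

SMD_pred = np.exp(logC) * P**a * Q**b * V**c
rel_err = np.mean(np.abs(SMD_pred - SMD) / SMD) * 100
print(f"a={a:.3f}, b={b:.3f}, c={c:.3f}, mean relative error={rel_err:.2f}%")
```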
Procedia PDF Downloads 161
1196 Autonomic Recovery Plan with Server Virtualization
Authors: S. Hameed, S. Anwer, M. Saad, M. Saady
Abstract:
For autonomic recovery with server virtualization, a cogent plan that includes recovery techniques and backups with virtualized servers can be developed instead of assigning an idle server to backup operations. In addition to hardware cost reduction and data center trail, the disaster recovery plan can ensure system uptime and meet objectives of high availability, recovery time, recovery point, server provisioning, and quality of service. This autonomic solution would also support disaster management, testing, and development of the recovery site. In this research, a workflow plan is proposed for supporting disaster recovery with virtualization, providing virtual monitoring, requirements engineering, solution decision making, quality testing, and disaster management. This recovery model would make disaster recovery a lot easier, faster, and less error-prone.
Keywords: autonomous intelligence, disaster recovery, cloud computing, server virtualization
Procedia PDF Downloads 162
1195 Demand Forecasting Using Artificial Neural Networks Optimized by Particle Swarm Optimization
Authors: Daham Owaid Matrood, Naqaa Hussein Raheem
Abstract:
Evolutionary algorithms and artificial neural networks (ANNs) are two relatively young research areas that have been the subject of steadily growing interest during the past years. This paper examines the use of Particle Swarm Optimization (PSO) to train a multi-layer feed-forward neural network for demand forecasting. We use weekly demand data for packed cement and towels, which were provided by the Northern General Company for Cement and the General Company of Prepared Clothes, respectively. The results showed the superiority of neural networks trained using particle swarm optimization over neural networks trained using error back-propagation, because of their ability to escape from local optima.
Keywords: artificial neural network, demand forecasting, particle swarm optimization, weight optimization
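A minimal sketch of training a one-hidden-layer feed-forward network with a global-best PSO is given below; the synthetic weekly series, lag structure, swarm size, and inertia/acceleration coefficients are assumptions made for illustration, not the settings used by the authors.

```python
import numpy as np

rng = np.random.default_rng(4)

# synthetic weekly demand series (stand-in for the cement/towel data)
t = np.arange(200)
demand = 50 + 0.1 * t + 8 * np.sin(2 * np.pi * t / 52) + rng.normal(0, 2, 200)

# supervised pairs: predict demand[t] from the previous 4 weeks
lags = 4
X = np.column_stack([demand[i:len(demand) - lags + i] for i in range(lags)])
y = demand[lags:]

n_hidden = 6
n_w = lags * n_hidden + n_hidden + n_hidden + 1   # W1, b1, W2, b2

def mse(weights):
    W1 = weights[:lags * n_hidden].reshape(lags, n_hidden)
    b1 = weights[lags * n_hidden:lags * n_hidden + n_hidden]
    W2 = weights[-n_hidden - 1:-1]
    b2 = weights[-1]
    hidden = np.tanh(X @ W1 + b1)
    pred = hidden @ W2 + b2
    return np.mean((pred - y) ** 2)

# --- plain global-best PSO over the flattened weight vector ---
n_particles, iters = 40, 300
w_inertia, c1, c2 = 0.7, 1.5, 1.5
pos = rng.normal(0, 0.5, (n_particles, n_w))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_err = np.array([mse(p) for p in pos])
gbest = pbest[np.argmin(pbest_err)].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, n_w))
    vel = w_inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    errs = np.array([mse(p) for p in pos])
    improved = errs < pbest_err
    pbest[improved], pbest_err[improved] = pos[improved], errs[improved]
    gbest = pbest[np.argmin(pbest_err)].copy()

print("training MSE of PSO-trained network:", mse(gbest))
```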
Procedia PDF Downloads 451
1194 Virtual Dimension Analysis of Hyperspectral Imaging to Characterize a Mining Sample
Authors: L. Chevez, A. Apaza, J. Rodriguez, R. Puga, H. Loro, Juan Z. Davalos
Abstract:
The Virtual Dimension (VD) procedure is used to analyze hyperspectral image (HSI) data in order to estimate the abundance of the mineral components of a mining sample. Hyperspectral images coming from reflectance spectra (NIR region) are pre-treated using the Standard Normal Variate (SNV) and Minimum Noise Fraction (MNF) methodologies. The endmember components are identified by the Simplex Growing Algorithm (SGA) and afterwards adjusted to the reflectance spectra of reference databases using the Simulated Annealing (SA) methodology. The obtained mineral abundances of the studied sample are very close to those obtained using XRD, with a total relative error of 2%.
Keywords: hyperspectral imaging, minimum noise fraction, MNF, simplex growing algorithm, SGA, standard normal variate, SNV, virtual dimension, XRD
Procedia PDF Downloads 158
1193 Finite Element Modeling of Aortic Intramural Haematoma Shows Size Matters
Authors: Aihong Zhao, Priya Sastry, Mark L Field, Mohamad Bashir, Arvind Singh, David Richens
Abstract:
Objectives: Intramural haematoma (IMH) is one of the pathologies, along with acute aortic dissection, that present as acute aortic syndrome (AAS). Evidence suggests that, unlike aortic dissection, some intramural haematomas may regress with medical management. However, intramural haematomas have traditionally been managed like acute aortic dissections. Given that some of these pathologies may regress with conservative management, it would be useful to be able to identify which of them may not need high-risk emergency intervention. A computational aortic model was used in this study to try to identify intramural haematomas at risk of progression to aortic dissection. Methods: We created a computational model of the aorta with luminal blood flow. Reports in the literature have identified 11 mm as the radial clot thickness associated with a heightened risk of progression of intramural haematoma. Accordingly, haematomas of varying sizes were implanted in the modelled aortic wall to test this hypothesis. The model was exposed to physiological blood flows, and the stresses and strains in each layer of the aortic wall were recorded. Results: The size and shape of the clot were seen to affect the magnitude of aortic stresses. The greatest stresses and strains were recorded in the intima of the model. When the haematoma exceeded 10 mm in all dimensions, the stress on the intima reached breaking point. Conclusion: Intramural clot size appears to be a contributory factor affecting aortic wall stress. Our computer simulation corroborates clinical evidence in the literature proposing that an IMH diameter greater than 11 mm may be predictive of progression. This preliminary report suggests that finite element modelling of the aortic wall may be a useful process by which to examine putative variables important in predicting progression or regression of intramural haematoma.
Keywords: intramural haematoma, acute aortic syndrome, finite element analysis
Procedia PDF Downloads 431
1192 Optimal Mother Wavelet Function for Shoulder Muscles of Upper Limb Amputees
Authors: Amanpreet Kaur
Abstract:
The wavelet transform (WT) is a powerful statistical tool used in applied mathematics for signal and image processing. Different mother wavelet basis functions have been compared to select the optimal wavelet function that represents the electromyogram signal characteristics of upper limb amputees. Four EMG electrodes were placed on different locations of the shoulder muscles. Twenty-one wavelet functions from different wavelet families were investigated. These functions included Daubechies (db1-db10), Symlets (sym1-sym5), Coiflets (coif1-coif5), and the Discrete Meyer wavelet. Using the mean square error value, the significance of the mother wavelet functions has been determined for the teres, pectoralis, and infraspinatus shoulder muscles. The results show that the best mother wavelet for efficient classification of the signal is db3 from the Daubechies family.
Keywords: Daubechies, upper limb amputation, shoulder muscles, Symlets, Coiflets
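A short sketch of how candidate mother wavelets can be ranked by a mean-square-error criterion is given below, using PyWavelets on a surrogate EMG-like signal; the surrogate signal, the decomposition level, and the coefficient-thresholding rule are assumptions, since the abstract does not specify the exact error criterion (note that PyWavelets' Symlet family starts at sym2).

```python
import numpy as np
import pywt

rng = np.random.default_rng(5)

# surrogate surface-EMG-like signal: noise bursts gated on and off (assumption)
fs, dur = 1000, 4.0
t = np.arange(int(fs * dur)) / fs
bursts = (np.sin(2 * np.pi * 0.5 * t) > 0.3).astype(float)
emg = bursts * rng.normal(0, 1, t.size) + 0.05 * rng.normal(0, 1, t.size)

candidates = ([f"db{i}" for i in range(1, 11)]
              + [f"sym{i}" for i in range(2, 6)]     # sym1 is equivalent to db1
              + [f"coif{i}" for i in range(1, 6)]
              + ["dmey"])

def reconstruction_mse(signal, wavelet, level=4, keep=0.10):
    """MSE after keeping only the largest `keep` fraction of detail coefficients."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    thresholded = [coeffs[0]]                        # keep the approximation intact
    for d in coeffs[1:]:
        thr = np.quantile(np.abs(d), 1 - keep)
        thresholded.append(pywt.threshold(d, thr, mode="hard"))
    rec = pywt.waverec(thresholded, wavelet)[:signal.size]
    return np.mean((signal - rec) ** 2)

scores = {w: reconstruction_mse(emg, w) for w in candidates}
best = min(scores, key=scores.get)
print("best mother wavelet by MSE:", best)
```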
Procedia PDF Downloads 235
1191 The Design of a Computer Simulator to Emulate Pathology Laboratories: A Model for Optimising Clinical Workflows
Authors: M. Patterson, R. Bond, K. Cowan, M. Mulvenna, C. Reid, F. McMahon, P. McGowan, H. Cormican
Abstract:
This paper outlines the design of a simulator to allow for the optimisation of clinical workflows through a pathology laboratory and to improve the laboratory’s efficiency in the processing, testing, and analysis of specimens. Often pathologists have difficulty in pinpointing and anticipating issues in the clinical workflow until tests are running late or in error. It can be difficult to pinpoint the cause and even more difficult to predict any issues which may arise. For example, they often have no indication of how many samples are going to be delivered to the laboratory that day or at a given hour. If we could model scenarios using past information and known variables, it would be possible for pathology laboratories to initiate resource preparations, e.g. printing specimen labels or activating a sufficient number of technicians. This would expedite the clinical workload and clinical processes and improve the overall efficiency of the laboratory. The simulator design visualises the workflow of the laboratory, i.e. the clinical tests being ordered, the specimens arriving, current tests being performed, results being validated, and reports being issued. The simulator depicts the movement of specimens through this process, as well as the number of specimens at each stage. This movement is visualised using an animated flow diagram that is updated in real time. A traffic-light colour-coding system will be used to indicate the level of flow through each stage (green for normal flow, orange for slow flow, and red for critical flow). This would allow pathologists to clearly see where there are issues and bottlenecks in the process. Graphs would also be used to indicate the status of specimens at each stage of the process. For example, a graph could show the percentage of specimen tests that are on time, potentially late, running late, and in error. Clicking on potentially late samples will display more detailed information about those samples, the tests that still need to be performed on them, and their urgency level. This would allow any issues to be resolved quickly. In the case of potentially late samples, this could help to ensure that critically needed results are delivered on time. The simulator will be created as a single-page web application. Various web technologies will be used to create the flow diagram showing the workflow of the laboratory. JavaScript will be used to program the logic, animate the movement of samples through each of the stages, and generate the status graphs in real time. This live information will be extracted from an Oracle database. As well as being used in a real laboratory situation, the simulator could also be used for training purposes. ‘Bots’ would be used to control the flow of specimens through each step of the process. Like existing software agent technology, these bots would be configurable in order to simulate different situations which may arise in a laboratory, such as an emerging epidemic. The bots could then be turned on and off to allow trainees to complete the tasks required at that step of the process, for example validating test results.
Keywords: laboratory-process, optimization, pathology, computer simulation, workflow
Procedia PDF Downloads 286
1190 Solution of S3 Problem of Deformation Mechanics for a Definite Condition and Resulting Modifications of Important Failure Theories
Authors: Ranajay Bhowmick
Abstract:
Analysis of stresses for an infinitesimal tetrahedron leads to a situation where we obtain a cubic equation consisting of the three stress invariants. This cubic equation, when solved for a definite condition, gives the principal stresses directly, without requiring any cumbersome and time-consuming trial-and-error methods or iterative numerical procedures. Since the failure criteria of different materials are generally expressed as functions of the principal stresses, an attempt has been made in this study to incorporate the solutions of the cubic equation, obtained for a definite condition and expressed in terms of principal stresses, into some of the established failure theories to determine their modified descriptions. It has been observed that the failure theories can be represented using the quadratic stress invariant and the orientation of the principal plane.
Keywords: cubic equation, stress invariant, trigonometric, explicit solution, principal stress, failure criterion
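The abstract does not reproduce the equations; for concreteness, the standard characteristic cubic of the stress tensor and its trigonometric closed-form roots (the usual route to an explicit principal-stress solution of this kind) are sketched below. The exact "definite condition" used by the author is not restated here.

```latex
% Characteristic (cubic) equation of the stress tensor and its invariants
\[
\sigma^3 - I_1\,\sigma^2 + I_2\,\sigma - I_3 = 0 ,
\qquad
I_1=\operatorname{tr}\boldsymbol{\sigma},\quad
I_2=\tfrac12\!\left[(\operatorname{tr}\boldsymbol{\sigma})^2-\operatorname{tr}(\boldsymbol{\sigma}^2)\right],\quad
I_3=\det\boldsymbol{\sigma}.
\]
% Writing the deviatoric invariants as
\[
J_2=\tfrac13 I_1^2-I_2 ,\qquad
J_3=\tfrac{2}{27}I_1^3-\tfrac13 I_1 I_2+I_3 ,
\]
% the principal stresses follow explicitly from the trigonometric solution:
\[
\sigma_k=\frac{I_1}{3}
+2\sqrt{\frac{J_2}{3}}\,
\cos\!\left[\frac{1}{3}\arccos\!\left(\frac{3\sqrt{3}\,J_3}{2\,J_2^{3/2}}\right)-\frac{2\pi k}{3}\right],
\qquad k=0,1,2 .
\]
```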
Procedia PDF Downloads 137
1189 Adiabatic Flame Temperature: New Calculation Method
Authors: Muthana Abdul Mjed Jamel Al-gburi
Abstract:
The present paper introduces the methane-air flame and its main chemical reaction, the mass burning rate, the burning velocity, and the most important parameter, the adiabatic flame temperature, together with its evaluation. These major flame parameters are mathematically formulated and computed using a MATLAB program. The program establishes a new technique to determine the true adiabatic flame temperature. The new technique implements a trial-and-error procedure: the total internal energy of the product species is calculated and then that of the reactants is evaluated; from both, two energy lines are drawn, whose intersection determines the required true temperature. The obtained results show an accurate evaluation for the atmospheric stoichiometric (Φ = 1.05) methane-air flame, whose value was found to be 2136.36 K.
Keywords: methane-air flame, adiabatic flame temperature, reaction model, MATLAB program, new technique
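A simplified sketch of the trial-and-error energy balance (here reduced to a bisection with constant mean heat capacities) is given below; the heating value and cp figures are rough textbook-level numbers used only to illustrate the crossing of the two energy curves and will not reproduce the 2136.36 K reported above.

```python
import numpy as np

# Stoichiometric CH4 + 2(O2 + 3.76 N2) -> CO2 + 2 H2O + 7.52 N2
T_ref = 298.15                    # K
LHV = 802_300.0                   # J per mol CH4, lower heating value (approx.)

# Rough mean molar heat capacities over 298-2500 K, J/(mol K) -- illustrative only
cp_products = {"CO2": (1.0, 54.0), "H2O": (2.0, 43.0), "N2": (7.52, 33.0)}

def product_enthalpy_rise(T):
    """Sensible enthalpy of the product mixture above T_ref (constant-cp model)."""
    return sum(n * cp * (T - T_ref) for n, cp in cp_products.values())

# Trial-and-error (bisection): find T where the products' sensible enthalpy
# equals the heat released by combustion, i.e. where the two energy lines cross.
lo, hi = T_ref, 4000.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if product_enthalpy_rise(mid) < LHV:
        lo = mid
    else:
        hi = mid

print(f"adiabatic flame temperature (constant-cp estimate): {0.5 * (lo + hi):.0f} K")
```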
Procedia PDF Downloads 74
1188 Bayesian Structural Identification with Systematic Uncertainty Using Multiple Responses
Authors: André Jesus, Yanjie Zhu, Irwanda Laory
Abstract:
Structural health monitoring is one of the most promising technologies for averting structural risk and achieving economic savings. Analysts often have to deal with a considerable variety of uncertainties that arise during a monitoring process. Namely, the widespread application of numerical models (model-based approaches) is accompanied by a widespread concern about quantifying the uncertainties prevailing in their use. Some of these uncertainties are related to the deterministic nature of the model (code uncertainty), others to the variability of its inputs (parameter uncertainty) and to the discrepancy between model and experiment (systematic uncertainty). The actual process always exhibits random behaviour (observation error), even when conditions are set identically (residual variation). Bayesian inference assumes that the parameters of a model are random variables with an associated PDF, which can be inferred from experimental data. However, in many Bayesian methods the determination of systematic uncertainty can be problematic. In this work, systematic uncertainty is associated with a discrepancy function. The numerical model and the discrepancy function are approximated by Gaussian processes (surrogate model). Finally, to avoid the computational burden of a fully Bayesian approach, the parameters that characterise the Gaussian processes were estimated in a four-stage process (modular Bayesian approach). The proposed methodology has been successfully applied in fields such as geoscience, biomedicine, and particle physics, but never in the SHM context. This approach considerably reduces the computational burden, although the extent of the considered uncertainties is lower (second-order effects are neglected). To successfully identify the considered uncertainties, this formulation was extended to consider multiple responses. The efficiency of the algorithm has been tested on a small-scale aluminium bridge structure subjected to thermal expansion due to infrared heaters. A comparison of its performance with responses measured at different points of the structure, and the associated degrees of identifiability, is also carried out. A numerical FEM model of the structure was developed, and the stiffness of its supports is considered as a parameter to calibrate. Results show that the modular Bayesian approach performed best when responses of the same type had the lowest spatial correlation. Based on previous literature, using different types of responses (strain, acceleration, and displacement) should also improve the identifiability problem. Uncertainties due to parametric variability, observation error, residual variability, code variability, and systematic uncertainty were all recovered. For this example, the algorithm's performance was stable and considerably quicker than Bayesian methods that account for the full extent of uncertainties. Future research with real-life examples is required to fully assess the advantages and limitations of the proposed methodology.
Keywords: Bayesian, calibration, numerical model, system identification, systematic uncertainty, Gaussian process
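A heavily reduced sketch of the surrogate-plus-discrepancy idea behind such a modular approach is given below, using scikit-learn Gaussian processes: a GP emulator is fitted to runs of a toy "numerical model", a stiffness-like parameter is calibrated against measurements, and a second GP absorbs the remaining systematic discrepancy. The toy simulator, parameter, and data are invented for illustration and do not correspond to the bridge experiment or the full four-stage formulation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(6)

# toy "numerical model": response at location x for a support-stiffness-like theta
def simulator(x, theta):
    return np.sin(3 * x) / theta

# Stage 1: build a GP surrogate of the simulator over (x, theta)
Xs = rng.uniform([0.0, 0.5], [1.0, 2.0], size=(80, 2))
ys = simulator(Xs[:, 0], Xs[:, 1])
emulator = GaussianProcessRegressor(RBF([0.2, 0.3]) + WhiteKernel(1e-6),
                                    normalize_y=True).fit(Xs, ys)

# "Measured" responses: true theta = 1.3 plus a systematic discrepancy + noise
x_obs = np.linspace(0, 1, 15)
y_obs = simulator(x_obs, 1.3) + 0.05 * x_obs**2 + rng.normal(0, 0.01, 15)

# Stage 2: calibrate theta against the emulator (simple least squares here)
def misfit(theta):
    pred = emulator.predict(np.column_stack([x_obs, np.full_like(x_obs, theta)]))
    return np.sum((y_obs - pred) ** 2)

theta_hat = minimize_scalar(misfit, bounds=(0.5, 2.0), method="bounded").x

# Stage 3: model the remaining systematic discrepancy with a second GP
resid = y_obs - emulator.predict(
    np.column_stack([x_obs, np.full_like(x_obs, theta_hat)]))
discrepancy = GaussianProcessRegressor(RBF(0.3) + WhiteKernel(1e-4)).fit(
    x_obs.reshape(-1, 1), resid)

print("calibrated stiffness-like parameter:", round(theta_hat, 3))
```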
Procedia PDF Downloads 326
1187 Reconnecting The Peripheral Wagons to the Euro Area Core Locomotive
Authors: Igor Velickovski, Aleksandar Stojkov, Ivana Rajkovic
Abstract:
This paper investigates the drivers of shock synchronization using quarterly data for 27 European countries over the period 1999-2013, taking into account the difference between the core ('the euro area core locomotive') and the peripheral euro area and transition countries ('the peripheral wagons'). Results from panel error-correction models suggest that the core of the euro area has not been a strong magnetizer of the shock convergence of the periphery and transition countries since the euro's inception, as a result of the offsetting effects of the various factors that affected the shock convergence process. These findings challenge the endogeneity hypothesis in the optimum currency area framework and rather support the specialisation paradigm, which is concerning evidence for the future stability of the euro area.
Keywords: dynamic panel models, shock synchronisation, trade, optimum currency area
Procedia PDF Downloads 357
1186 On a Continuous Formulation of Block Method for Solving First Order Ordinary Differential Equations (ODEs)
Authors: A. M. Sagir
Abstract:
The aim of this paper is to investigate the performance of the developed linear multistep block method for solving first-order initial value problems of ordinary differential equations (ODEs). The method calculates the numerical solution at three points simultaneously and produces three new equally spaced solution values within a block. The continuous formulation enables us to differentiate and evaluate at selected points to obtain three discrete schemes, which were used in block form for parallel or sequential solution of the problems. The stability and efficiency of the block method are tested on ordinary differential equations arising in practical applications, and the results obtained compare favorably with the exact solutions. Furthermore, a comparative error analysis has been carried out with the help of computer software.
Keywords: block method, first order ordinary differential equations, linear multistep, self-starting
Procedia PDF Downloads 306
1185 Image Compression Using Block Power Method for SVD Decomposition
Authors: El Asnaoui Khalid, Chawki Youness, Aksasse Brahim, Ouanan Mohammed
Abstract:
In recent decades, the significant and fast growth in the development of and demand for multimedia products has contributed to insufficient bandwidth of devices and network storage memory. Consequently, the theory of data compression has become more significant for reducing data redundancy in order to save on data transfer and storage. In this context, this paper addresses the problem of lossless and near-lossless compression of images. The proposed method is based on the Block SVD Power Method, which overcomes the disadvantages of MATLAB's SVD function. The experimental results show that the proposed algorithm has better compression performance compared with existing compression algorithms that use MATLAB's SVD function. In addition, the proposed approach is simple and can provide different degrees of error resilience, giving, in a short execution time, better image compression.
Keywords: image compression, SVD, block SVD power method, lossless compression, near lossless
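A minimal sketch of the general idea (a block/subspace power iteration that yields the leading singular triplets, followed by a rank-k reconstruction) is shown below; it is not the authors' Block SVD Power Method, and the synthetic test image and ranks are illustrative assumptions.

```python
import numpy as np

def block_power_svd(A, k, iters=100, seed=0):
    """Top-k singular triplets of A via block (subspace) power iteration."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    V, _ = np.linalg.qr(rng.normal(size=(n, k)))     # random orthonormal start
    for _ in range(iters):
        V, _ = np.linalg.qr(A.T @ (A @ V))           # power step + re-orthonormalise
    AV = A @ V
    sigma = np.linalg.norm(AV, axis=0)               # approximate singular values
    U = AV / sigma                                   # left singular vectors
    return U, sigma, V.T

def compress(image, k):
    U, s, Vt = block_power_svd(image.astype(float), k)
    return (U * s) @ Vt                              # rank-k approximation

# Illustrative "image": a smooth synthetic 256x256 array (stand-in for a photo)
x = np.linspace(0, 1, 256)
img = np.outer(np.sin(4 * np.pi * x), np.cos(3 * np.pi * x)) + np.outer(x, x)

for k in (5, 20):
    approx = compress(img, k)
    err = np.linalg.norm(img - approx) / np.linalg.norm(img)
    ratio = img.size / (k * sum(img.shape) + k)      # stored numbers vs. original
    print(f"rank {k}: relative error {err:.4f}, compression ratio {ratio:.1f}:1")
```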
Procedia PDF Downloads 387
1184 Upon One Smoothing Problem in Project Management
Authors: Dimitri Golenko-Ginzburg
Abstract:
A CPM network project with deterministic activity durations, in which activities require homogeneous resources with fixed capacities, is considered. The problem is to determine the optimal schedule of starting times for all network activities within their maximal allowable limits (in order not to exceed the network's critical time) so as to minimize the maximum resources required by the project at any point in time. In the case when a non-critical activity may start only at discrete moments with a pre-given time span, the problem becomes NP-complete, and an optimal solution may be obtained via a look-over algorithm. For the case when a look-over requires too much computational time, an approximate algorithm is suggested. The algorithm's performance ratio, i.e., the relative accuracy error, is determined. Experimentation has been undertaken to verify the suggested algorithm.
Keywords: resource smoothing problem, CPM network, lookover algorithm, lexicographical order, approximate algorithm, accuracy estimate
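A tiny brute-force ("look-over") illustration of the discrete-start variant is given below: the allowed discrete start times of the non-critical activities of an invented four-activity network are enumerated, and the schedule with the smallest resource peak is kept; the durations, demands, floats, and time span are made-up assumptions.

```python
import itertools
import numpy as np

# Invented example: name -> (duration, resource demand, earliest start, latest start)
# Activities with earliest == latest are critical; the others may shift only at
# discrete moments spaced by `step` within their float.
activities = {
    "A": (4, 3, 0, 0),      # critical, 0-4
    "B": (3, 2, 0, 4),      # non-critical, float of 4
    "C": (2, 4, 4, 7),      # non-critical, float of 3
    "D": (5, 3, 4, 4),      # critical, 4-9
}
step = 2                    # pre-given time span between allowed start moments
horizon = 9                 # critical (project) length of this example

def peak_usage(starts):
    profile = np.zeros(horizon)
    for name, (dur, res, *_) in activities.items():
        s = starts[name]
        profile[s:s + dur] += res
    return profile.max()

# enumerate every combination of allowed discrete starts (the look-over)
choices = {name: range(es, ls + 1, step) if ls > es else [es]
           for name, (dur, res, es, ls) in activities.items()}
best = min(
    (dict(zip(choices, combo)) for combo in itertools.product(*choices.values())),
    key=peak_usage,
)
print("best start times:", best, "-> peak resource usage:", peak_usage(best))
```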
Procedia PDF Downloads 302