Search results for: interval analysis method

39348 Numerical Treatment of Block Method for the Solution of Ordinary Differential Equations

Authors: A. M. Sagir

Abstract:

A discrete linear multistep block method of uniform order for the solution of first order Initial Value Problems (IVPs) in Ordinary Differential Equations (ODEs) is presented in this paper. The approaches of interpolation and collocation approximation are adopted in the derivation of the method, which is then applied to first order ordinary differential equations with associated initial conditions. The continuous hybrid formulations enable us to differentiate and evaluate at some grid and off-grid points to obtain four discrete schemes, which were used in block form for parallel or sequential solution of the problems. Furthermore, the stability and efficiency of the block method are tested on ordinary differential equations, and the results obtained compare favorably with the exact solutions.
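
As a toy illustration of a self-starting block scheme of this kind, the sketch below advances a first order IVP two steps at a time using a generic two-point implicit block pair (an Adams-Moulton-type formula and a Simpson-type formula obtained by collocation on a quadratic interpolant of f). It is an assumed, simplified stand-in for the four-scheme hybrid block method of the paper.

```python
# Minimal sketch of a self-starting two-point implicit block method for y' = f(t, y).
# The coefficients come from integrating the quadratic interpolant of f at
# t_n, t_n+h, t_n+2h; they are generic, not the paper's hybrid scheme.
import numpy as np
from scipy.optimize import fsolve

def f(t, y):
    return -2.0 * y          # test IVP: y' = -2y, y(0) = 1, exact y = exp(-2t)

def block_step(tn, yn, h):
    """Advance one block: returns (y_{n+1}, y_{n+2}) solved simultaneously."""
    fn = f(tn, yn)

    def residual(z):
        y1, y2 = z
        f1, f2 = f(tn + h, y1), f(tn + 2*h, y2)
        r1 = y1 - (yn + h*(5*fn + 8*f1 - f2)/12.0)   # Adams-Moulton-type formula
        r2 = y2 - (yn + h*(fn + 4*f1 + f2)/3.0)      # Simpson-type formula
        return [r1, r2]

    return fsolve(residual, [yn, yn])

h, t, y = 0.1, 0.0, 1.0
while t < 1.0 - 1e-12:
    y1, y2 = block_step(t, y, h)
    t, y = t + 2*h, y2

print("numerical y(1) ~", y, "  exact:", np.exp(-2.0))
```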

Keywords: block method, first order ordinary differential equations, hybrid, self-starting

Procedia PDF Downloads 474
39347 Elasto-Plastic Analysis of Structures Using Adaptive Gaussian Springs Based Applied Element Method

Authors: Mai Abdul Latif, Yuntian Feng

Abstract:

Applied Element Method (AEM) is a method that was developed to aid in the analysis of the collapse of structures. Currently available methods cannot deal with structural collapse accurately; AEM, however, can simulate the behavior of a structure from an initial state of no loading until collapse. The elements in AEM are connected with sets of normal and shear springs along their edges, which represent the stresses and strains of the element in that region. The elements are rigid, and the material properties are introduced through the spring stiffness. Nonlinear dynamic analysis of progressive collapse has been widely modelled using the finite element method; however, difficulties arise in the presence of excessively deformed elements with cracking or crushing, the computational cost is high, and choosing appropriate material models is not straightforward. The Applied Element Method is developed and coded here to significantly improve accuracy and reduce the computational cost of the method. The scheme works for both linear elastic and nonlinear cases, including elasto-plastic materials. This paper focuses on elastic and elasto-plastic material behaviour, where the number of springs required for an accurate analysis is tested. A steel cantilever beam is used as the structural element for the analysis. The first modification of the method is based on Gaussian quadrature to distribute the springs. Usually, the springs are equally distributed along the face of the element, but it was found that with Gaussian springs only 2 springs were required for perfectly elastic cases, whereas with equally spaced springs at least 5 springs were required. The method runs on a Newton-Raphson iteration scheme, and quadratic convergence was obtained. The second modification adapts the number of springs depending on the elasticity of the material. After the first Newton-Raphson iteration, the Von Mises stress condition is used to calculate the stresses in the springs, and the springs are classified as elastic or plastic. Then transition springs, located exactly between the elastic and plastic regions, are interpolated between regions to strictly identify the elastic and plastic regions in the cross section. Since a rectangular cross-section was analyzed, there were two plastic regions (top and bottom) and one elastic region (middle). The results of the present study show that elasto-plastic cases require only 2 springs for the elastic region and 2 springs for each plastic region. This was shown to reduce the computational cost, lowering the minimum number of springs in elasto-plastic cases to only 6 springs. All the work is done in MATLAB, and the results are compared to finite element models of the structural elements in ANSYS.
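
The distinction between equally spaced and Gaussian springs can be sketched as below: Gauss-Legendre points and weights place the connecting springs along an element edge so that the quadrature weights set the tributary strip widths. The material values and edge dimensions are illustrative assumptions, not taken from the steel cantilever example.

```python
# Minimal sketch of distributing AEM connecting springs along an element edge:
# equal spacing vs. Gauss-Legendre points/weights. E, t, edge, a are illustrative.
import numpy as np

E, t = 210e9, 0.01          # Young's modulus [Pa], element thickness [m]
edge, a = 0.2, 0.2          # edge length and distance between element centroids [m]

def equal_springs(n):
    """n equally spaced springs; each covers a tributary strip of width edge/n."""
    pos = (np.arange(n) + 0.5) * edge / n
    k = E * t * (edge / n) / a * np.ones(n)      # normal spring stiffness Kn = E*t*d/a
    return pos, k

def gauss_springs(n):
    """n springs at Gauss-Legendre points; quadrature weights set the strip widths."""
    xi, w = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
    pos = 0.5 * edge * (xi + 1.0)                # map nodes to [0, edge]
    k = E * t * (0.5 * edge * w) / a             # strip width = Jacobian * weight
    return pos, k

pos_eq, k_eq = equal_springs(5)
pos_g,  k_g  = gauss_springs(2)
print("equal positions:", pos_eq)
print("Gauss positions:", pos_g)
print("total edge stiffness preserved:", np.isclose(k_eq.sum(), k_g.sum()))
```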

Keywords: applied element method, elasto-plastic, Gaussian springs, nonlinear

Procedia PDF Downloads 218
39346 An Effort at Improving Reliability of Laboratory Data in Titrimetric Analysis for Zinc Sulphate Tablets Using Validated Spreadsheet Calculators

Authors: M. A. Okezue, K. L. Clase, S. R. Byrn

Abstract:

The requirement for maintaining data integrity in laboratory operations is critical for regulatory compliance. Automation of procedures reduces the incidence of human errors. Quality control laboratories located in low-income economies may face some barriers in attempts to automate their processes. Since data from quality control tests on pharmaceutical products are used in making regulatory decisions, it is important that laboratory reports are accurate and reliable. Zinc Sulphate (ZnSO4) tablets are used in the treatment of diarrhea in the pediatric population and as an adjunct therapy in COVID-19 regimens. Unfortunately, the zinc content in these formulations is determined titrimetrically, a manual analytical procedure. The assay for ZnSO4 tablets involves time-consuming steps that contain mathematical formulae prone to calculation errors. To achieve consistency, save costs, and improve data integrity, validated spreadsheets were developed to simplify the two critical steps in the analysis of ZnSO4 tablets: standardization of 0.1 M Sodium Edetate (EDTA) solution, and the complexometric titration assay procedure. The assay method in the United States Pharmacopoeia was used to create a process flow for ZnSO4 tablets. For each step in the process, formulae were entered into two spreadsheets to automate the calculations. Further checks were created within the automated system to ensure the validity of replicate analyses in titrimetric procedures. Validations were conducted using five data sets of manually computed assay results, and the acceptance criteria set for the protocol were met. Significant p-values (p < 0.05, α = 0.05, at 95% Confidence Interval) were obtained from Student's t-test evaluation of the mean values for manually calculated and spreadsheet results at all levels of the analysis flow. Right-first-time analysis and principles of data integrity were enhanced by the use of the validated spreadsheet calculators in titrimetric evaluations of ZnSO4 tablets. Human errors were minimized when calculation procedures were automated in quality control laboratories. The assay procedure for the formulation was achieved in a time-efficient manner with a greater level of accuracy. This project is expected to promote cost savings for laboratory business models.
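
A hedged sketch of the kind of arithmetic such spreadsheets automate is given below: EDTA standardization against a zinc standard, a generic 1:1 complexometric assay calculation, and a paired t-test comparing spreadsheet output with manually computed results. The numbers and simplified formulas are illustrative assumptions, not the USP procedure or the study data.

```python
# Hedged sketch of the automated titrimetric calculations and the validation t-test.
# Molar mass, volumes and results below are invented for illustration only.
import numpy as np
from scipy import stats

MW_ZN = 65.38                      # g/mol, zinc

def edta_molarity(zn_standard_mg, titre_ml):
    """Molarity of EDTA from a zinc primary standard (1:1 complexation, simplified)."""
    return (zn_standard_mg / MW_ZN) / titre_ml   # mmol Zn per mL EDTA = mol/L

def zinc_per_tablet_mg(titre_ml, m_edta, tablets_in_sample):
    """mg of elemental zinc per tablet from the assay titration (1:1 Zn:EDTA)."""
    return titre_ml * m_edta * MW_ZN / tablets_in_sample

m_edta = edta_molarity(zn_standard_mg=130.8, titre_ml=20.0)   # ~0.1 M
print("EDTA molarity:", round(m_edta, 4), "M")
print("Zn per tablet:", round(zinc_per_tablet_mg(15.4, m_edta, 5), 2), "mg")

# Validation step: compare spreadsheet output with manually computed results.
manual      = np.array([19.8, 20.1, 20.0, 19.9, 20.2])
spreadsheet = np.array([19.85, 20.08, 20.01, 19.92, 20.18])
t_stat, p = stats.ttest_rel(manual, spreadsheet)
print("paired t-test p-value:", round(p, 3))
```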

Keywords: data integrity, spreadsheets, titrimetry, validation, zinc sulphate tablets

Procedia PDF Downloads 165
39345 Sensitivity Analysis of Prestressed Post-Tensioned I-Girder and Deck System

Authors: Tahsin A. H. Nishat, Raquib Ahsan

Abstract:

Sensitivity analysis of the design parameters of an optimization procedure can become a significant factor while designing any structural system. The objectives of this study are to analyze the sensitivity of the deck slab thickness parameter obtained from both the conventional and the optimum design methodology of a pre-stressed post-tensioned I-girder and deck system, and to compare the relative significance of slab thickness. For the analysis of the conventional method, the values of 14 design parameters obtained by the conventional iterative design of a real-life I-girder bridge project have been considered. For the analysis of the optimization method, cost optimization of this system has been performed using the global optimization methodology 'Evolutionary Operation (EVOP)'. The problem, from which the optimum values of the 14 design parameters have been obtained, contains 14 explicit constraints and 46 implicit constraints. For both sets of design parameters, sensitivity analysis has been conducted on the deck slab thickness parameter, which can become highly sensitive near the obtained optimum solution. Deviations of slab thickness on both the upper and lower side of its optimum value have been considered, reflecting its realistic range of variation during construction. In this procedure, the remaining parameters have been kept unchanged. For small deviations from the optimum value, compliance with the explicit and implicit constraints has been examined, and variations in cost have also been estimated. It is found that, without violating any constraint, the deck slab thickness obtained by the conventional method can be increased by up to 25 mm, whereas the slab thickness obtained by cost optimization can be increased by only 0.3 mm. This result suggests that slab thickness is less sensitive in the conventional method of design. Therefore, for realistic design purposes, sensitivity analysis should be conducted for either design procedure of the girder and deck system.
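
The one-at-a-time perturbation loop underlying such a sensitivity study can be sketched as follows; the cost model, constraint bounds, and reference thickness are hypothetical placeholders, not the EVOP formulation or the actual bridge constraints.

```python
# Minimal one-at-a-time sensitivity sketch: perturb the deck slab thickness around a
# reference value, keep the other parameters fixed, and record when a constraint is
# first violated and how the cost changes. cost() and constraints_ok() are hypothetical.
import numpy as np

def cost(t_slab_mm):
    return 1.0e4 + 35.0 * t_slab_mm                 # hypothetical cost model

def constraints_ok(t_slab_mm):
    # hypothetical stand-in for the explicit/implicit checks (stress, deflection, ...)
    return 187.0 <= t_slab_mm <= 225.3

t_ref = 200.0                                        # reference slab thickness [mm]
for dt in np.arange(-5.0, 30.0, 0.1):
    t = t_ref + dt
    if not constraints_ok(t):
        print(f"first violation at deviation {dt:+.1f} mm "
              f"(cost change {cost(t) - cost(t_ref):+.1f})")
        break
else:
    print("no violation in the scanned range")
```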

Keywords: sensitivity analysis, optimum design, evolutionary operations, PC I-girder, deck system

Procedia PDF Downloads 131
39344 3D Frictionless Contact Case between the Structure of E-Bike and the Ground

Authors: Lele Zhang, Hui Leng Choo, Alexander Konyukhov, Shuguang Li

Abstract:

China is currently the world's largest producer and distributor of electric bicycles (e-bikes). The increasing number of e-bikes on the road is accompanied by rising injuries and even deaths of e-bike riders. Therefore, there is a growing need to improve the safety structure of e-bikes. This 3D frictionless contact analysis is a preliminary but necessary step for further structural design improvement of an e-bike. The contact analysis between the e-bike and the ground was carried out as follows: firstly, the penalty method was illustrated and derived from the simplest spring-mass system, as it is one of the most common methods for enforcing the frictionless contact condition; secondly, ANSYS static analysis was carried out to verify finite element (FE) models with a frictionless contact pair between the e-bike and the ground; finally, ANSYS transient analysis was used to obtain the penetration p(u) of the e-bike with respect to the ground. Results obtained from the simulation agree with estimates from the theoretical method. In future work, a protective shell will be designed following the stability criteria and added to the frame of the e-bike, and simulations of the improved safety structure falling on its side will be validated against experimental data.
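
A minimal sketch of the penalty idea in the spring-mass setting the abstract starts from is shown below: the non-penetration condition is replaced by a stiff one-sided spring, so the equilibrium retains a small residual penetration that vanishes as the penalty stiffness grows. Stiffness, gap, and load values are illustrative, not the e-bike model.

```python
# Minimal sketch of the penalty treatment of frictionless normal contact in 1D:
# a structural spring loaded against a rigid ground with an initial gap.
# Values (k, k_pen, gap, P) are illustrative only.
from scipy.optimize import brentq

k     = 5.0e4     # structural spring stiffness [N/m]
k_pen = 1.0e8     # penalty stiffness [N/m]
gap   = 0.002     # initial gap to the ground [m]
P     = 500.0     # applied load [N]

def residual(d):
    """Equilibrium residual for downward displacement d (one-sided penalty spring)."""
    contact = k_pen * max(0.0, d - gap)     # active only after the gap closes
    return P - k * d - contact

d = brentq(residual, 0.0, 1.0)              # solve R(d) = 0
penetration = max(0.0, d - gap)
print("displacement d =", d)
print("penetration    =", penetration, " (-> 0 as k_pen -> infinity)")
```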

Keywords: frictionless contact, penalty method, e-bike, finite element

Procedia PDF Downloads 273
39343 The Therapeutic Effects of Acupuncture on Oral Dryness and Antibody Modification in Sjogren Syndrome: A Meta-Analysis

Authors: Tzu-Hao Li, Yen-Ying Kung, Chang-Youh Tsai

Abstract:

Oral dryness is a common chief complaint among patients with Sjögren syndrome (SS), a disorder characterized by autoantibody production; however, to the authors' best knowledge, there has been no satisfactory pharmacological therapy to relieve the associated symptoms. Hence, the effectiveness of non-pharmacological interventions such as acupuncture should be assessed. We conducted a meta-analysis of randomized clinical trials (RCTs) that evaluated the effectiveness of acupuncture for xerostomia in SS. PubMed, Embase, the Cochrane Central Register of Controlled Trials (CENTRAL), Chongqing Weipu Database (CQVIP), China Academic Journals Full-text Database, AiritiLibrary, Chinese Electronic Periodicals Service (CEPS), and the China National Knowledge Infrastructure (CNKI) Database were searched through May 12, 2018 to select studies. Data for the evaluation of subjective and objective xerostomia were extracted and assessed with random-effects meta-analysis. The search yielded a total of 541 references, and five RCTs were included, covering 340 patients with dry mouth resulting from SS, among whom 169 patients received acupuncture and 171 served as controls. The acupuncture group was associated with a higher subjective response rate (odds ratio 3.036, 95% confidence interval [CI] 1.828 – 5.042, P < 0.001) and an increased salivary flow rate (weighted mean difference [WMD] 3.066, 95% CI 2.969 – 3.164, P < 0.001) as an objective marker. In addition, two studies examined IgG levels, which were lower in the acupuncture group (WMD -166.857, 95% CI -233.138 – -100.576, P < 0.001). Therefore, in the present meta-analysis, acupuncture improves both subjective and objective markers of dry mouth, with autoantibody reduction, in patients with SS and can be considered an option for non-pharmacological treatment of SS.
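
For readers unfamiliar with the pooling step, the sketch below shows a DerSimonian-Laird random-effects combination of odds ratios of the kind reported above. The 2x2 counts are invented purely for illustration and are not the trial data of this review.

```python
# Hedged sketch of random-effects pooling (DerSimonian-Laird) on log odds ratios.
# Trial counts below are invented, not the data of the meta-analysis.
import numpy as np
from scipy import stats

trials = [  # (events_treatment, n_treatment, events_control, n_control)
    (25, 34, 14, 34),
    (30, 40, 18, 41),
    (20, 30, 12, 31),
]

log_or, var = [], []
for a, n1, c, n2 in trials:
    b, d = n1 - a, n2 - c
    log_or.append(np.log((a * d) / (b * c)))
    var.append(1/a + 1/b + 1/c + 1/d)          # Woolf variance of the log OR
log_or, var = np.array(log_or), np.array(var)

w_fixed = 1 / var
q = np.sum(w_fixed * (log_or - np.average(log_or, weights=w_fixed))**2)
tau2 = max(0.0, (q - (len(trials) - 1)) /
           (w_fixed.sum() - (w_fixed**2).sum() / w_fixed.sum()))   # DL estimator

w = 1 / (var + tau2)                            # random-effects weights
pooled = np.sum(w * log_or) / w.sum()
se = np.sqrt(1 / w.sum())
ci = np.exp([pooled - 1.96 * se, pooled + 1.96 * se])
p = 2 * stats.norm.sf(abs(pooled / se))

print(f"pooled OR = {np.exp(pooled):.2f}, 95% CI {ci[0]:.2f}-{ci[1]:.2f}, p = {p:.4f}")
```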

Keywords: acupuncture, meta-analysis, Sjogren syndrome, xerostomia

Procedia PDF Downloads 120
39342 Comparison between Pushover Analysis Techniques and Validation of the Simplified Modal Pushover Analysis

Authors: N. F. Hanna, A. M. Haridy

Abstract:

One of the main drawbacks of the Modal Pushover Analysis (MPA) is the need to perform nonlinear time-history analysis, which complicates the method and increases analysis time. A simplified version of the MPA has been proposed based on the concept of the inelastic deformation ratio. Furthermore, the effect of the higher modes of vibration is considered by assuming linearly elastic responses, which enables the use of standard elastic response spectrum analysis. In this study, the simplified MPA (SMPA) method is applied to determine the target global drift and the inter-story drifts of a steel frame building. The effect of the higher vibration modes is considered within the framework of the SMPA. A comprehensive survey of the inelastic deformation ratio is presented; a suitable expression from the literature is then selected and implemented in the SMPA. The estimated seismic demands obtained with the SMPA, such as the target drift, base shear, and inter-story drifts, are compared with the seismic responses determined by applying the standard MPA. The accuracy of the estimated seismic demands is validated by comparison with the results of nonlinear time-history analysis using real earthquake records.

Keywords: modal analysis, pushover analysis, seismic performance, target displacement

Procedia PDF Downloads 360
39341 Comparison of Agree Method and Shortest Path Method for Determining the Flow Direction in Basin Morphometric Analysis: Case Study of Lower Tapi Basin, Western India

Authors: Jaypalsinh Parmar, Pintu Nakrani, Bhaumik Shah

Abstract:

A Digital Elevation Model (DEM) provides elevation data on a virtual grid over the ground. DEMs are used in GIS applications such as hydrological modelling, flood forecasting, morphometric analysis, and surveying. For morphometric analysis the stream flow network plays a very important role, but a DEM may lack accuracy and fail to match field data as required for reliable morphometric results. The present study compares the Agree method and the conventional shortest path method for deriving morphometric parameters in the flat region of the Lower Tapi Basin in western India. Open-source SRTM data (Shuttle Radar Topography Mission, 1 arc-second resolution) and toposheets issued by the Survey of India (SOI) were used to determine the linear aspects (stream order, number of streams, stream length, bifurcation ratio, mean stream length, mean bifurcation ratio, stream length ratio, length of overland flow, constant of channel maintenance), the areal aspects (drainage density, stream frequency, drainage texture, form factor, circularity ratio, elongation ratio, shape factor), and the relief aspects (relief ratio, gradient ratio, basin relief) for 53 catchments of the Lower Tapi Basin. The stream network was digitized from the available toposheets, and the Agree DEM was created using the SRTM data and the stream network from the toposheets. The results obtained were used to demonstrate a comparison between the two methods in the flat areas.
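
The two ingredients being compared can be sketched on a toy grid: an AGREE-style reconditioning that lowers DEM cells under the digitized stream, and a steepest-descent (D8) flow-direction rule assumed here as a simple stand-in for the shortest path method. The tiny DEM, stream cells, and burn depth are illustrative only.

```python
# Minimal sketch: AGREE-style stream burning vs. raw DEM, with a D8 (steepest
# downslope neighbour) flow direction. Grid values and burn depth are illustrative.
import numpy as np

dem = np.array([[10., 10., 10., 10.],
                [ 9.,  9.,  9.,  9.],
                [ 8.,  8.,  8.,  8.],
                [ 7.,  7.,  6.,  7.]])

stream_cells = [(1, 2), (2, 2), (3, 2)]        # digitized stream from the toposheet

def agree_burn(dem, stream_cells, drop=5.0):
    """Crude AGREE-style reconditioning: drop the elevation along the mapped stream."""
    out = dem.copy()
    for r, c in stream_cells:
        out[r, c] -= drop
    return out

def d8_direction(dem, r, c):
    """Return the neighbour offset with the steepest downslope drop."""
    best, best_drop = None, 0.0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            rr, cc = r + dr, c + dc
            if (dr, dc) == (0, 0) or not (0 <= rr < dem.shape[0] and 0 <= cc < dem.shape[1]):
                continue
            drop = (dem[r, c] - dem[rr, cc]) / np.hypot(dr, dc)
            if drop > best_drop:
                best, best_drop = (dr, dc), drop
    return best

print("flow direction at (1,1), raw DEM   :", d8_direction(dem, 1, 1))
print("flow direction at (1,1), AGREE DEM :", d8_direction(agree_burn(dem, stream_cells), 1, 1))
```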

Keywords: agree method, morphometric analysis, lower Tapi basin, shortest path method

Procedia PDF Downloads 232
39340 Electrical Decomposition of Time Series of Power Consumption

Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats

Abstract:

Load monitoring is a management process for energy consumption aimed at energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is one load monitoring method used for disaggregation purposes. NILM is a technique for identifying individual appliances based on the analysis of whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, event detection, feature extraction, and finally general appliance modeling and identification. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of appliance features, which are required for the accurate identification of household devices. In this research work, we aim at developing a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used for tuning general appliance models in the appliance identification and classification steps, using unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the areas of operation of each residential appliance based on the power demand, and then detecting the times at which each selected appliance changes its state. In order to fit the capabilities of existing smart meters, we work on low-frequency sampling data at (1/60) Hz. The data are simulated with the Load Profile Generator (LPG) software, which had not previously been considered for NILM purposes in the literature. LPG is a numerical tool that simulates the behaviour of people inside the house to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect, and it facilitates the extraction of specific features used for general appliance modeling. To the best of our knowledge, few unsupervised techniques have been employed with low-sampling-rate data, in comparison to the many supervised techniques used for such cases. We extract the power interval within which the selected appliance operates, along with a time vector of the values delimiting the state transitions of the appliance. Appliance signatures are then formed from the extracted power, geometrical, and statistical features, and these signatures are used to tune general model types for appliance identification using unsupervised algorithms. The method is evaluated using both simulated data from LPG and the real Reference Energy Disaggregation Dataset (REDD). For that, we compute performance metrics based on the confusion matrix, considering accuracy, precision, recall, and error rate. The performance of our methodology is then compared with detection techniques previously used in the literature, such as techniques based on statistical variations and abrupt changes (Variance Sliding Window and Cumulative Sum).
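
A minimal DTW routine of the kind used to match an extracted signature against general appliance models is sketched below; the 1/60 Hz power traces are invented and the snippet is not the full disaggregation pipeline described above.

```python
# Minimal dynamic time warping (DTW) sketch for comparing appliance power signatures.
# The traces below are invented 1-minute-resolution power values [W].
import numpy as np

def dtw_distance(x, y):
    """Classic O(len(x)*len(y)) DTW with absolute-difference local cost."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

fridge_model = np.array([0, 120, 125, 120, 118, 0, 0], dtype=float)
extracted    = np.array([0, 0, 118, 122, 124, 119, 0], dtype=float)
kettle_model = np.array([0, 2000, 2000, 1900, 0, 0, 0], dtype=float)

print("DTW to fridge model:", dtw_distance(extracted, fridge_model))
print("DTW to kettle model:", dtw_distance(extracted, kettle_model))
```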

Keywords: electrical disaggregation, DTW, general appliance modeling, event detection

Procedia PDF Downloads 70
39339 Approximation of a Wanted Flow via Topological Sensitivity Analysis

Authors: Mohamed Abdelwahed

Abstract:

We propose an optimization algorithm for the geometric control of fluid flow. The approach used is based on the topological sensitivity analysis method: it consists of studying the variation of a cost function with respect to the insertion of a small obstacle in the domain. Some theoretical and numerical results are presented in 2D and 3D.

Keywords: sensitivity analysis, topological gradient, shape optimization, stokes equations

Procedia PDF Downloads 531
39338 Temporal Variation of Reference Evapotranspiration in Central Anatolia Region, Turkey and Meteorological Drought Analysis via Standardized Precipitation Evapotranspiration Index Method

Authors: Alper Serdar Anli

Abstract:

Analysis of the temporal variation of reference evapotranspiration (ET0) is important in arid and semi-arid regions where water resources are limited. In this study, the temporal variation of reference evapotranspiration (ET0) and meteorological drought analysis through the SPEI (Standardized Precipitation Evapotranspiration Index) method have been carried out for the provinces of the Central Anatolia Region, Turkey. Reference evapotranspiration for the provinces concerned has been estimated using the Penman-Monteith method, and the calendar year has been split into four periods, r1, r2, r3 and r4. The temporal variation of reference evapotranspiration over these four periods has been analyzed with the parametric Dickey-Fuller test and the non-parametric Mann-Whitney U test. As a result, significant increasing trends in reference evapotranspiration have been detected, and according to the SPEI method used for estimating meteorological drought in the provinces, mild drought has generally been experienced; however, there were also a significant number of events in which moderate and severe droughts occurred.
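
The statistical step can be illustrated as below: a Mann-Whitney U comparison of the first and second halves of a synthetic annual ET0 series as a simple shift check, and a classification of SPEI values using commonly assumed drought thresholds. The series, thresholds, and period split are illustrative, not the station data or exact procedure of the study.

```python
# Hedged sketch: shift/trend check on a synthetic ET0 series and SPEI classification
# with commonly used (assumed) thresholds.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
years = np.arange(1980, 2020)
et0 = 1100 + 1.5 * (years - years[0]) + rng.normal(0, 20, years.size)  # mm/year

half = years.size // 2
u_stat, p = stats.mannwhitneyu(et0[:half], et0[half:], alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.0f}, p = {p:.4f}  (small p suggests a shift in ET0)")

def spei_class(spei):
    """Commonly used drought categories (assumed here, check against the source)."""
    if spei <= -2.0:   return "extreme drought"
    if spei <= -1.5:   return "severe drought"
    if spei <= -1.0:   return "moderate drought"
    if spei < 0.0:     return "mild drought"
    return "no drought"

for s in (-0.4, -1.2, -1.7, -2.3):
    print(f"SPEI = {s:+.1f} -> {spei_class(s)}")
```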

Keywords: central Anatolia region, drought index, Penman-Monteith, reference evapotranspiration, temporal variation

Procedia PDF Downloads 309
39337 Aliasing Free and Additive Error in Spectra for Alpha Stable Signals

Authors: R. Sabre

Abstract:

This work focuses on the symmetric alpha-stable process with continuous time, frequently used in modeling signals with indefinitely growing variance and often observed with an unknown additive error. The objective of this paper is to estimate this error from discrete observations of the signal. For that, we propose a method based on smoothing the observations via the Jackson polynomial kernel, taking into account the width of the interval where the spectral density is non-zero. This technique allows us to avoid the aliasing phenomenon encountered when the estimation is made from discrete observations of a continuous-time process. We have studied the convergence rate of the estimator and have shown that it improves when the spectral density is zero at the origin. Thus, we set up an estimator of the additive error that can be subtracted to approach the original signal without error.

Keywords: spectral density, stable processes, aliasing, non-parametric

Procedia PDF Downloads 124
39336 Estimation of a Finite Population Mean under Random Non Response Using Improved Nadaraya and Watson Kernel Weights

Authors: Nelson Bii, Christopher Ouma, John Odhiambo

Abstract:

Non-response is a potential source of error in sample surveys. It introduces bias and large variance in the estimation of finite population parameters. Regression models have been recognized as one of the techniques for reducing bias and variance due to random non-response using auxiliary data. In this study, it is assumed that random non-response occurs in the survey variable in the second stage of cluster sampling, with full auxiliary information available throughout. Auxiliary information is used at the estimation stage via a regression model to address the problem of random non-response; in particular, it is used via an improved Nadaraya-Watson kernel regression technique to compensate for random non-response. The asymptotic bias and mean squared error of the proposed estimator are derived. In addition, a simulation study indicates that the proposed estimator has smaller bias and smaller mean squared error than existing estimators of the finite population mean. The proposed estimator is also shown to have tighter confidence interval lengths at a 95% coverage rate. The results obtained in this study are useful, for instance, in choosing efficient estimators of the finite population mean in demographic sample surveys.
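
The baseline that the paper improves on, the standard Nadaraya-Watson estimator, can be sketched as follows for imputing a survey variable from an auxiliary variable under random non-response; the data, kernel, and bandwidth are illustrative assumptions.

```python
# Minimal sketch of the standard Nadaraya-Watson kernel regression estimator used to
# impute the survey variable for non-respondents from an auxiliary variable.
import numpy as np

def nadaraya_watson(x_train, y_train, x_eval, h):
    """Gaussian-kernel NW estimate: m(x) = sum_i K((x-x_i)/h) y_i / sum_i K((x-x_i)/h)."""
    x_eval = np.atleast_1d(x_eval)
    u = (x_eval[:, None] - x_train[None, :]) / h
    K = np.exp(-0.5 * u**2)                    # Gaussian kernel (constant cancels)
    return (K @ y_train) / K.sum(axis=1)

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 200)                    # auxiliary variable (fully observed)
y = 3.0 + 2.0 * np.sin(x) + rng.normal(0, 0.3, x.size)   # survey variable

respond = rng.random(x.size) > 0.3             # ~30% random non-response
m_hat = nadaraya_watson(x[respond], y[respond], x[~respond], h=0.5)

print("imputed mean for non-respondents:", m_hat.mean().round(3))
print("overall estimated population mean:",
      np.concatenate([y[respond], m_hat]).mean().round(3))
```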

Keywords: mean squared error, random non-response, two-stage cluster sampling, confidence interval lengths

Procedia PDF Downloads 131
39335 Dynamic Analysis of Composite Doubly Curved Panels with Variable Thickness

Authors: I. Algul, G. Akgun, H. Kurtaran

Abstract:

Dynamic analysis of composite doubly curved panels with variable thickness subjected to different pulse types using the Generalized Differential Quadrature (GDQ) method is presented in this study. Panels with variable thickness are used in aerospace and marine construction, since varying the thickness allows the designer to achieve optimum structural efficiency. For this reason, estimating the response of variable-thickness panels is very important for designing more reliable structures under dynamic loads. The dynamic equations for composite panels with variable thickness are obtained using the virtual work principle. The partial derivatives in the equation of motion are expressed with GDQ, and the Newmark average acceleration scheme is used for temporal discretization. Several examples are used to highlight the effectiveness of the proposed method, and results are compared with the finite element method. The effects of taper ratios, boundary conditions, and loading type on the response of the composite panel are investigated.

Keywords: differential quadrature method, doubly curved panels, laminated composite materials, small displacement

Procedia PDF Downloads 351
39334 Failure Analysis and Verification Using an Integrated Method for Automotive Electric/Electronic Systems

Authors: Lei Chen, Jian Jiao, Tingdi Zhao

Abstract:

Failures of automotive electric/electronic systems, which are universally considered to be safety-critical and software-intensive, may cause catastrophic accidents. Analysis and verification of failures in these kinds of systems is a major challenge as system complexity increases. Model checking is often employed to allow formal verification by ensuring that the system model conforms to specified safety properties. The system-level effects of failures are established, and their effects on system behavior are observed through the formal verification. A hazard analysis technique called Systems-Theoretic Process Analysis is capable of identifying design flaws that may cause potentially hazardous failures, including software and system design errors and unsafe interactions among multiple system components. This paper provides a concept of how to use model checking integrated with Systems-Theoretic Process Analysis to perform failure analysis and verification of automotive electric/electronic systems. As a result, safety requirements are optimized, and failure propagation paths are found. Finally, an automotive electric/electronic system case study is used to verify the effectiveness and practicability of the method.

Keywords: failure analysis and verification, model checking, system-theoretic process analysis, automotive electric/electronic system

Procedia PDF Downloads 112
39333 Control of Biofilm Formation and Inorganic Particle Accumulation on Reverse Osmosis Membrane by Hypochlorite Washing

Authors: Masaki Ohno, Cervinia Manalo, Tetsuji Okuda, Satoshi Nakai, Wataru Nishijima

Abstract:

Reverse osmosis (RO) membranes have been widely used for desalination to purify water for drinking and other purposes. Although at present most RO membranes have no resistance to chlorine, chlorine-resistant membranes are being developed. Therefore, direct chlorine treatment or chlorine washing will be an option in preventing biofouling on chlorine-resistant membranes. Furthermore, if particle accumulation control is possible by using chlorine washing, expensive pretreatment for particle removal can be removed or simplified. The objective of this study was to determine the effective hypochlorite washing condition required for controlling biofilm formation and inorganic particle accumulation on RO membrane in a continuous flow channel with RO membrane and spacer. In this study, direct chlorine washing was done by soaking fouled RO membranes in hypochlorite solution and fluorescence intensity was used to quantify biofilm on the membrane surface. After 48 h of soaking the membranes in high fouling potential waters, the fluorescence intensity decreased to 0 from 470 using the following washing conditions: 10 mg/L chlorine concentration, 2 times/d washing interval, and 30 min washing time. The chlorine concentration required to control biofilm formation decreased as the chlorine concentration (0.5–10 mg/L), the washing interval (1–4 times/d), or the washing time (1–30 min) increased. For the sample solutions used in the study, 10 mg/L chlorine concentration with 2 times/d interval, and 5 min washing time was required for biofilm control. The optimum chlorine washing conditions obtained from soaking experiments proved to be applicable also in controlling biofilm formation in continuous flow experiments. Moreover, chlorine washing employed in controlling biofilm with suspended particles resulted in lower amounts of organic (0.03 mg/cm2) and inorganic (0.14 mg/cm2) deposits on the membrane than that for sample water without chlorine washing (0.14 mg/cm2 and 0.33 mg/cm2, respectively). The amount of biofilm formed was 79% controlled by continuous washing with 10 mg/L of free chlorine concentration, and the inorganic accumulation amount decreased by 58% to levels similar to that of pure water with kaolin (0.17 mg/cm2) as feed water. These results confirmed the acceleration of particle accumulation due to biofilm formation, and that the inhibition of biofilm growth can almost completely reduce further particle accumulation. In addition, effective hypochlorite washing condition which can control both biofilm formation and particle accumulation could be achieved.

Keywords: reverse osmosis, washing condition optimization, hypochlorous acid, biofouling control

Procedia PDF Downloads 343
39332 Study the Effect of Tolerances for Press Tool Assembly: Computer Aided Tolerance Analysis

Authors: Subodh Kumar, Ramkisan Pawar, Gopal D. Belurkar

Abstract:

This paper describes a study of a simple blanking tool. In a blanking or piercing operation, the punch and die should be concentric for proper cutting. In this study, the tolerance analysis method is used to analyze the variation in the press tool assembly. Variation results in eccentricity between the die and punch due to the cumulative tolerances of the parts used in the assembly. A 1D variation analysis was performed with CREO Parametric computer-aided design (CAD) software powered by the CETOL 6σ computer-aided tolerance analysis software. The CAD analysis software made it possible to find the cause of variation in the tool assembly, and accordingly new tolerance specifications and process settings for die set manufacturing were determined. Tolerance allocation and tolerance analysis were performed iteratively, leading to the conclusion that the position and size tolerances of the bush hole in the top plate, as well as the size tolerance of the guide pillar, were mainly responsible for the eccentricity between punch and die. This work proposes optimum tolerances for the press tool assembly parts to achieve 100% yield for the specified 0.015 mm minimum tolerance zone.
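
The stack-up arithmetic behind such an analysis can be sketched as below: worst-case versus root-sum-square accumulation of the toleranced contributors to punch-die eccentricity, checked against the 0.015 mm zone. The contributor names and tolerance values are invented, not the actual drawing values.

```python
# Minimal tolerance stack-up sketch: worst-case vs. RSS eccentricity estimate.
# Contributor names and symmetric tolerance values [mm] are illustrative.
import numpy as np

contributors = {
    "bush hole position in top plate": 0.008,
    "bush hole size / fit clearance":  0.006,
    "guide pillar size / fit":         0.005,
    "punch holder location":           0.004,
}

tols = np.array(list(contributors.values()))
worst_case = tols.sum()
rss = np.sqrt((tols**2).sum())

print(f"worst-case eccentricity : {worst_case:.3f} mm")
print(f"RSS (statistical)       : {rss:.3f} mm")
print("meets 0.015 mm zone (statistical)?", rss <= 0.015)
```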

Keywords: blanking, GD&T (Geometric Dimensioning and Tolerancing), DPMU (defects per million units), press tool, stackup analysis, tolerance allocation, yield percentage

Procedia PDF Downloads 352
39331 Aerodynamic Design an UAV and Stability Analysis with Method of Genetic Algorithm Optimization

Authors: Saul A. Torres Z., Eduardo Liceaga C., Alfredo Arias M.

Abstract:

We seek to develop a UAV for agricultural spraying at a maximum altitude of 5000 meters above sea level, with a payload of 100 liters of fumigant. The aerodynamic design of the aircraft is developed using computational tools such as the "Athena Vortex Lattice" software, "MATLAB", "ANSYS FLUENT", and the "XFoil" package, among others. Structured programming and an exhaustive analysis of optimization and search methods are also used. The results have a very low margin of error, and the multi-objective formulation can be helpful for future developments. We also developed a method for stability analysis (lateral-directional and longitudinal).

Keywords: aerodynamic design, optimization, genetic algorithm, multi-objective problem, longitudinal stability, lateral-directional stability

Procedia PDF Downloads 582
39330 Determination of Benzatropine in Hair by GC/MS after Liquid-Liquid Extraction (LLE)

Authors: Abdulsallam A. Bakdash, Aiyshah M. Alshehri, Hind M. Alenzi

Abstract:

Benzatropine (benztropine) is used to treat the symptoms of Parkinson's disease or involuntary movements due to the side effects of certain psychiatric drugs. We report in this study the results of a procedure for the determination of benzatropine in hair using LLE, once with methanol and once with phosphate buffer (pH 6.0), followed by filtration and then re-extraction with dichloromethane. A GC/MS method was developed and validated for this determination using selected ion monitoring (SIM) detection without derivatization. Linearity was established over the concentration range 0.1-20.0 ng/mg hair, and the correlation coefficients were greater than 0.99. Recoveries were 52.2% and 21.1% using methanol and phosphate buffer extraction, respectively. Detection limits of benzatropine in hair were between 0.65 and 3.0 ng/mg hair, while the accuracy values were 10.4% and 18.5% (RSD), respectively. We also applied this method to the analysis of soaked hair samples and demonstrated that LLE using methanol meets the requirements for the analysis of benzatropine in hair.

Keywords: hair analysis, benzatropine, liquid-liquid extraction, GC/MS

Procedia PDF Downloads 396
39329 Empirical Decomposition of Time Series of Power Consumption

Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats

Abstract:

Load monitoring is a management process for energy consumption aimed at energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is one load monitoring method used for disaggregation purposes. NILM is a technique for identifying individual appliances based on the analysis of whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, event detection, feature extraction, and finally general appliance modeling and identification. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of appliance features, which are required for the accurate identification of household devices. In this research work, we aim at developing a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used for tuning general appliance models in the appliance identification and classification steps, using unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the areas of operation of each residential appliance based on the power demand, and then detecting the times at which each selected appliance changes its state. In order to fit the capabilities of existing smart meters, we work on low-frequency sampling data at (1/60) Hz. The data are simulated with the Load Profile Generator (LPG) software, which had not previously been considered for NILM purposes in the literature. LPG is a numerical tool that simulates the behaviour of people inside the house to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect, and it facilitates the extraction of specific features used for general appliance modeling. To the best of our knowledge, few unsupervised techniques have been employed with low-sampling-rate data, in comparison to the many supervised techniques used for such cases. We extract the power interval within which the selected appliance operates, along with a time vector of the values delimiting the state transitions of the appliance. Appliance signatures are then formed from the extracted power, geometrical, and statistical features, and these signatures are used to tune general model types for appliance identification using unsupervised algorithms. The method is evaluated using both simulated data from LPG and the real Reference Energy Disaggregation Dataset (REDD). For that, we compute performance metrics based on the confusion matrix, considering accuracy, precision, recall, and error rate. The performance of our methodology is then compared with detection techniques previously used in the literature, such as techniques based on statistical variations and abrupt changes (Variance Sliding Window and Cumulative Sum).

Keywords: general appliance model, non-intrusive load monitoring, event detection, unsupervised techniques

Procedia PDF Downloads 74
39328 Numerical Simulation of Filtration Gas Combustion: Front Propagation Velocity

Authors: Yuri Laevsky, Tatyana Nosova

Abstract:

The phenomenon of filtration gas combustion (FGC) was discovered experimentally in the early 1980s. It has a number of important applications in areas such as chemical technology, fire and explosion safety, energy-saving technologies, and oil production. From the physical point of view, FGC may be defined as the propagation of a region of gaseous exothermic reaction in a chemically inert porous medium, as the gaseous reactants seep into the region of chemical transformation. The movement of the combustion front has different modes, and this investigation focuses on the low-velocity regime. The main characteristic of the process is the velocity of combustion front propagation, whose computation encounters substantial difficulties because of the strong heterogeneity of the process. The mathematical model of FGC is formed by the energy conservation laws for the temperature of the porous medium and the temperature of the gas, and the mass conservation law for the relative concentration of the reacting component of the gas mixture. The homogenization of the model is performed with the two-temperature approach, in which at each point of the continuous medium we specify the solid and gas phases with Newtonian heat exchange between them. The construction of the computational scheme is based on a mixed finite element method on a regular mesh, and the approximation in time is performed by an explicit-implicit difference scheme. Special attention was given to the determination of the combustion front propagation velocity. Direct computation of the velocity as a grid derivative leads to an extremely unstable algorithm. It is worth noting that the term 'front propagation velocity' makes sense for settled motion, when analytical formulae linking velocity and equilibrium temperature hold. A numerical implementation of one such formula, leading to stable computation of the instantaneous front velocity, has been proposed. The resulting algorithm has been applied in a subsequent numerical investigation of the FGC process, studying the dependence of the main characteristics of the process on various physical parameters. In particular, the influence of the combustible gas mixture consumption on the front propagation velocity has been investigated. It has also been reaffirmed numerically that there is an interval of critical values of the interfacial heat transfer coefficient at which a breakdown occurs from slow combustion front propagation to rapid propagation; approximate boundaries of this interval have been calculated for some specific parameters. All the results obtained are in full agreement with both experimental and theoretical data, confirming the adequacy of the model and the algorithm constructed. The availability of stable techniques to calculate the instantaneous velocity of the combustion wave allows considering a semi-Lagrangian approach to the solution of the problem.

Keywords: filtration gas combustion, low-velocity regime, mixed finite element method, numerical simulation

Procedia PDF Downloads 293
39327 Using Artificial Intelligence Method to Explore the Important Factors in the Reuse of Telecare by the Elderly

Authors: Jui-Chen Huang

Abstract:

This research used an artificial intelligence method to explore elderly users' opinions on the reuse of telecare, the effects of service quality, satisfaction, and perceived value, and their relationship with the intention to reuse. The study conducted a questionnaire survey of the elderly, and a total of 124 valid questionnaires were obtained. A Backpropagation Network (BPN) was adopted to propose an effective and feasible analysis method that differs from the traditional approach. Two thirds of the samples (82 samples) were taken as training data, and one third (42 samples) as testing data. The training and testing data RMSE (root mean square error) are 0.022 and 0.009 in the BPN, respectively; these errors are acceptable. In comparison, the training and testing data RMSE are 0.100 and 0.099 in the regression model, respectively. In addition, the results showed that service quality has the greatest effect on the intention to reuse, followed by satisfaction and perceived value. The result of the Backpropagation Network method is therefore better than that of the regression analysis and can be used as a reference for future research.
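
The analysis flow described (a small backpropagation network, a 2/3 train / 1/3 test split, and RMSE as the error measure) can be sketched as below with scikit-learn; the three predictors and the synthetic responses are assumptions standing in for the questionnaire data.

```python
# Hedged sketch of the BPN analysis flow: synthetic questionnaire-like data,
# 2/3 train / 1/3 test split, RMSE on both sets. Not the study's actual data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(42)
n = 124
X = rng.uniform(1, 5, size=(n, 3))                 # service quality, satisfaction, value
y = (0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2]
     + rng.normal(0, 0.1, n))                      # synthetic intention to reuse

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=1/3, random_state=0)
bpn = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
bpn.fit(X_tr, y_tr)

rmse_tr = np.sqrt(mean_squared_error(y_tr, bpn.predict(X_tr)))
rmse_te = np.sqrt(mean_squared_error(y_te, bpn.predict(X_te)))
print("train RMSE:", round(rmse_tr, 3), " test RMSE:", round(rmse_te, 3))
```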

Keywords: artificial intelligence, backpropagation network (BPN), elderly, reuse, telecare

Procedia PDF Downloads 205
39326 Effect of Synthesis Method on Structural, Morphological Properties of Zr0.8Y0.2-xLax Oxides (x=0, 0.1, 0.2)

Authors: Abdelaziz Ghrib, Samir Hattali, Mouloud Ghrib, Mohamed Lamine Aouissia, David Ruch

Abstract:

In the present study, solid solutions with a chemical composition of Zr0.8Y0.2-xLaxO2 (x = 0, 0.1, 0.2) were synthesized via two routes: a hydrothermal method using NaOH as the precipitating agent at 230°C for 15 h, and the sol-gel process using citric acid as the complexing agent. The compounds have been characterized by powder X-ray diffraction (XRD), Scanning Electron Microscopy (SEM), Thermogravimetric Analysis (TGA), and Differential Thermal Analysis (DTA) for appropriate characterization of the distinct thermal events occurring during synthesis. All the compounds crystallize in the cubic fluorite structure, as indicated by the X-ray diffraction studies. The microstructure of the oxides synthesized by sol-gel showed porosity that increased with the lanthanum (La3+) content, in contrast to the hydrothermal method, which gives a single-crystal oxide.

Keywords: oxide, hydrothermal, rare earth, solubility, sol-gel, ternary mixture

Procedia PDF Downloads 626
39325 Optimization Analysis of a Concentric Tube Heat Exchanger with Field Synergy Principle

Authors: M. C. Lin, C. W. Su

Abstract:

The paper applies optimization analysis to heat exchanger design, mainly using the response surface method and a genetic algorithm to explore the relationship between the optimal fluid flow velocity and the temperature of the heat exchanger based on the field synergy principle. First, the finite volume method is used to calculate the flow temperature and flow rate distribution for the numerical analysis. The most suitable simulation equations are identified by response surface methodology. Furthermore, a genetic algorithm approach is applied to optimize the relationship between the fluid flow velocity and the flow temperature of the heat exchanger. The results show that the field synergy angle plays a vital role in the performance of a real heat exchanger.

Keywords: optimization analysis, field synergy, heat exchanger, genetic algorithm

Procedia PDF Downloads 302
39324 Nonlinear Analysis with Failure Using the Boundary Element Method

Authors: Ernesto Pineda Leon, Dante Tolentino Lopez, Janis Zapata Lopez

Abstract:

The current paper shows the application of the boundary element method to the analysis of plates under shear stress causing plasticity. In this case, the shear deformation of the plate is considered by means of Reissner's theory. The probability of failure of a Reissner plate, based on a proposed plastic behavior index, is calculated taking into account the uncertainty in mechanical and geometrical properties. The problem is developed in two dimensions. Classic plasticity theory is applied, and a formulation for the initial stresses that lead to the boundary integral equations due to plasticity is also used. For the plasticity calculation, the Von Mises criterion is used, and an incremental method is employed to solve the nonlinear equations. The results show a relatively small failure probability for loads between 0.6 and 1.0. However, for values between 1.0 and 2.5, the probability of failure increases significantly; consequently, for loads greater than 2.5, plate failure becomes practically a certain event. The results are compared to those found in the literature, and the agreement is good.

Keywords: boundary element method, failure, plasticity, probability

Procedia PDF Downloads 303
39323 Software Engineering Inspired Cost Estimation for Process Modelling

Authors: Felix Baumann, Aleksandar Milutinovic, Dieter Roller

Abstract:

Up to this point, business process management projects in general, and business process modelling projects in particular, could not rely on a practical and scientifically validated method to estimate cost and effort. In particular, the model development phase is not covered by any cost estimation method or model; later phases of business process modelling, starting with implementation, are covered by initial solutions discussed in the literature. This article proposes to fill this gap by deriving a cost estimation method from available methods in a closely related domain, namely software development and software engineering. Software development is regarded as closely similar to process modelling, as we show. After proposing this method, different ideas for its further analysis and validation are outlined. We derive the method from COCOMO II and Function Point analysis, which are established effort estimation methods in the software development domain. For this, we lay out the similarities between the software development process and the process of process modelling, which is a phase of the Business Process Management life-cycle.
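
For reference, the COCOMO II post-architecture effort equation that the paper proposes to transfer reads Effort = A · Size^E · ΠEM with E = B + 0.01 · ΣSF; a minimal sketch is below. A = 2.94 and B = 0.91 are the published COCOMO II.2000 calibration constants, while the "size" of a process model and the rating values used here are purely illustrative assumptions.

```python
# Minimal sketch of the COCOMO II post-architecture effort equation.
# A = 2.94, B = 0.91 are the published COCOMO II.2000 constants; the process-model
# "size" and the SF/EM ratings below are illustrative placeholders.
import math

A, B = 2.94, 0.91

def cocomo_effort(size_ksloc, scale_factors, effort_multipliers):
    """Effort [person-months] = A * Size^E * prod(EM), with E = B + 0.01 * sum(SF)."""
    E = B + 0.01 * sum(scale_factors)
    return A * size_ksloc ** E * math.prod(effort_multipliers)

size = 4.0                                           # hypothetical KSLOC-equivalent
scale_factors = [3.72, 3.04, 4.24, 3.29, 4.68]       # five SFs, nominal-ish ratings
effort_multipliers = [1.0, 1.1, 0.9, 1.0]            # subset of EMs, illustrative

print("estimated effort:",
      round(cocomo_effort(size, scale_factors, effort_multipliers), 1), "person-months")
```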

Keywords: COCOMO II, business process modeling, cost estimation method, BPM COCOMO

Procedia PDF Downloads 433
39322 Assessment of Frying Material by Deep-Fat Frying Method

Authors: Brinda Sharma, Saakshi S. Sarpotdar

Abstract:

Deep-fat frying is a popular standard cooking method that has been studied mainly to clarify the complicated mechanisms of fat decomposition at high temperatures and to assess their effects on human health. The aim of this paper is to point out how process engineering has recently improved our understanding of the fundamental principles and mechanisms involved, at different scales and at different times throughout the process: pretreatment, frying, and cooling. It covers several aspects of deep-fat frying, including new results on the understanding of the frying process obtained thanks to major breakthroughs in on-line instrumentation (heat, steam flux, and local pressure sensors), in the methodology of microstructural and imaging analysis (NMR, MRI, SEM), and in software tools for the simulation of coupled transfer and transport phenomena. Such advances have opened the way for the creation of significant knowledge of the behavior of various materials and for the development of new tools to manage frying operations via final product quality under real conditions. Lastly, this paper promotes an integrated approach to the frying process, drawing on competencies such as those of chemists, engineers, toxicologists, nutritionists, and materials scientists, as well as those of the professional and industrial sectors.

Keywords: frying, cooling, imaging analysis (NMR, MRI, SEM), deep-fat frying

Procedia PDF Downloads 425
39321 Proposal of Design Method in the Semi-Acausal System Model

Authors: Shigeyuki Haruyama, Ken Kaminishi, Junji Kaneko, Tadayuki Kyoutani, Siti Ruhana Omar, Oke Oktavianty

Abstract:

This study proposes a method for defining value and function in the manufacturing sector. In current discussions about the state of modeling methods, the definition of 1D-CAE has so far remained ambiguous and non-conceptual. Across the physical domains, such methods are defined through the formulation of differential algebraic equations applied only to time derivation and simulation. We propose a semi-acausal modeling concept and a differential algebraic equation method as a new modeling approach, whose efficiency has been verified by comparing numerical analysis results between the semi-acausal modeling calculation and the FEM theoretical calculation.

Keywords: system model, physical models, empirical models, conservation law, differential algebraic equation, object-oriented

Procedia PDF Downloads 477
39320 Analysis of Potential Flow around Two-Dimensional Body by Surface Panel Method and Vortex Lattice Method

Authors: M. Abir Hossain, M. Shahjada Tarafder

Abstract:

This paper deals with the analysis of potential flow past a two-dimensional body by discretizing the body into panels and applying the Laplace equation to each panel. The Laplace equation is solved at each panel by applying the boundary conditions, which mathematically formulate the problem and convert it into a computer-solvable form. The Kutta condition is applied at both the leading and trailing edges to check whether it is satisfied. Another approach applied in the analysis is the Vortex Lattice Method (VLM), in which a vortex ring is considered at each control point; using the Biot-Savart law, the strength at each control point is calculated and the pressure differentials are obtained. For comparison of the analytical results with experimental results, different NACA-section hydrofoils are used. The analytical results for NACA 0012 and NACA 0015 are compared with the experimental results of Abbott and Doenhoff and show good agreement.
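
A lumped-vortex (discrete point vortex) sketch of the VLM idea in two dimensions is given below for a flat plate: vortices at panel quarter-chords, tangency enforced at three-quarter-chord control points through the 2D Biot-Savart velocity Γ/(2πr), recovering the thin-airfoil lift slope. It is an assumed simplification, not the authors' NACA-section panel code.

```python
# Minimal 2D discrete-vortex sketch for a flat plate at angle of attack.
# Freestream, chord, and panel count are illustrative.
import numpy as np

U, alpha, c, N = 1.0, np.radians(5.0), 1.0, 20     # freestream, AoA, chord, panels

dx = c / N
x_v = (np.arange(N) + 0.25) * dx                   # vortex points (quarter-chord)
x_c = (np.arange(N) + 0.75) * dx                   # control points (three-quarter-chord)

# influence matrix: downwash at control point i induced by a unit vortex at x_v[j]
A = 1.0 / (2.0 * np.pi * (x_c[:, None] - x_v[None, :]))
rhs = U * np.sin(alpha) * np.ones(N)               # cancel the normal freestream component

gamma = np.linalg.solve(A, rhs)                    # circulation of each vortex
CL = 2.0 * gamma.sum() / (U * c)

print(f"CL from discrete vortices: {CL:.4f}")
print(f"thin-airfoil 2*pi*sin(a) : {2*np.pi*np.sin(alpha):.4f}")
```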

Keywords: Kutta condition, Law of Biot-Savart, pressure differentials, potential flow, vortex lattice method

Procedia PDF Downloads 186
39319 Identification of the Orthotropic Parameters of Cortical Bone under Nanoindentation

Authors: D. Remache, M. Semaan, C. Baron, M. Pithioux, P. Chabrand, J. M. Rossi, J. L. Milan

Abstract:

A good understanding of the mechanical properties of bone implies a better understanding of its various diseases, such as osteoporosis. Berkovich nanoindentation tests were performed on human cortical bone to extract its orthotropic parameters. The nanoindentation experiments were then simulated by the finite element method, with different configurations of interaction between the indenter tip and the bone. The orthotropic parameters of the material were identified by the inverse method for each configuration, and the effect of friction on the identified bone mechanical properties is then discussed. It was found that the inverse method combined with the finite element method is a very efficient way to predict the mechanical behavior of bone.

Keywords: mechanical behavior of bone, nanoindentation, finite element analysis, inverse optimization approaches

Procedia PDF Downloads 381