Search results for: model updating method
30720 Canada Deuterium Uranium Updated Fire Probabilistic Risk Assessment Model for Canadian Nuclear Plants
Authors: Hossam Shalabi, George Hadjisophocleous
Abstract:
The Canadian Nuclear Power Plants (NPPs) use some portions of NUREG/CR-6850 in carrying out Fire Probabilistic Risk Assessment (PRA). An assessment of the applicability of NUREG/CR-6850 to CANDU reactors was performed and a CANDU Fire PRA was introduced. There are 19 operating CANDU reactors in Canada at five sites (Bruce A, Bruce B, Darlington, Pickering and Point Lepreau). A fire load density survey was carried out for all Fire Safe Shutdown Analysis (FSSA) fire zones at all CANDU sites in Canada. National Fire Protection Association (NFPA) Standard 557 proposes that a fire load survey be conducted by either the weighing method, the inventory method, or a combination of both; the combination method yields the most accurate fire load values. An updated CANDU Fire PRA model that incorporates the fuel load survey of all Canadian CANDU stations is demonstrated in this paper. A qualitative screening step for the CANDU Fire PRA is also illustrated to include any fire events that can damage any part of the emergency power supply in addition to FSSA cables.
Keywords: fire safety, CANDU, nuclear, fuel densities, FDS, qualitative analysis, fire probabilistic risk assessment
Procedia PDF Downloads 134
30719 Design and Analysis of Flexible Slider Crank Mechanism
Authors: Thanh-Phong Dao, Shyh-Chour Huang
Abstract:
This study presents the optimal design and kinematic modelling of a flexible slider crank mechanism. The objective of the proposed innovative design is to take full advantage of the compliant mechanism and maximize its fatigue life by applying the Taguchi method. A kinematic model is formulated using a Pseudo-Rigid-Body Model (PRBM). By means of these mathematical models, the kinematic behavior of the flexible slider crank mechanism is captured using MATLAB software. Finite Element Analysis (FEA) is used to show the stress distribution. The results show that the optimal design of the flexible hinge corresponds to a force of 8.5 N, a width of 9 mm and a thickness of 1.1 mm. Analysis of variance shows that the thickness of the proposed hinge is the most significant parameter, with an F-value of 15.5. Finally, a prototype is manufactured in preparation for testing the kinematic and dynamic behaviors.
Keywords: kinematic behavior, fatigue life, pseudo-rigid-body model, flexible slider crank mechanism
Procedia PDF Downloads 457
30718 A New Mathematical Model of Human Olfaction
Authors: H. Namazi, H. T. N. Kuan
Abstract:
It is known that in humans, adaptation to a given odor occurs within a fairly short span of time (typically one minute) after the odor is presented to the brain. Different models of human olfaction have been developed by scientists, but none of these models considers the diffusion phenomenon in olfaction. A novel microscopic model of human olfaction is presented in this paper. We develop this model by incorporating the transient diffusivity. In fact, the mathematical model is written based on diffusion of the odorant within the mucus layer. The model developed in this paper makes it possible to quantify the objective strength of an odor.
Keywords: diffusion, microscopic model, mucus layer, olfaction
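As an illustration of the kind of diffusion computation this abstract describes, the sketch below solves a one-dimensional odorant diffusion equation across a mucus layer with an explicit finite-difference scheme. The layer thickness, diffusivity, boundary concentration and grid are hypothetical placeholders, and the constant diffusivity is a simplification of the transient diffusivity used in the paper.

```python
import numpy as np

# Hypothetical parameters (not from the paper): thin mucus layer with constant
# diffusivity D; the paper itself uses a transient diffusivity.
L = 30e-6            # mucus layer thickness [m]
D = 1e-10            # odorant diffusivity in mucus [m^2/s]
nx = 61              # grid points across the layer
dx = L / (nx - 1)
dt = 0.4 * dx**2 / D # explicit FTCS stability requires dt <= dx^2 / (2D)
t_end = 1.0          # simulate one second of odor exposure

c = np.zeros(nx)     # odorant concentration, initially zero in the mucus
c_surface = 1.0      # normalized concentration at the air-mucus interface

t = 0.0
while t < t_end:
    c[0] = c_surface                      # Dirichlet condition at the interface
    c[-1] = c[-2]                         # zero-flux condition at the epithelium
    c[1:-1] += D * dt / dx**2 * (c[2:] - 2.0 * c[1:-1] + c[:-2])
    t += dt

# Concentration reaching the receptor side after t_end seconds
print(f"normalized concentration at epithelium: {c[-1]:.3f}")
```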
Procedia PDF Downloads 504
30717 An Optimal Control Method for Reconstruction of Topography in Dam-Break Flows
Authors: Alia Alghosoun, Nabil El Moçayd, Mohammed Seaid
Abstract:
Modeling dam-break flows over non-flat beds requires an accurate representation of the topography, which is the main source of uncertainty in the model. Therefore, developing robust and accurate techniques for reconstructing the topography in this class of problems would reduce the uncertainty in the flow system. In many hydraulic applications, experimental techniques have been widely used to measure the bed topography. In practice, experimental work in hydraulics may be very demanding in both time and cost. Meanwhile, computational hydraulics has served as an alternative to laboratory and field experiments. Unlike the forward problem, the inverse problem is used to identify the bed parameters from given experimental data. In this case, the shallow water equations used for modeling the hydraulics need to be rearranged so that the model parameters can be evaluated from measured data. However, this approach is not always possible and it suffers from stability restrictions. In the present work, we propose an adaptive optimal control technique to numerically identify the underlying bed topography from a given set of free-surface observation data. In this approach, a minimization function is defined to iteratively determine the model parameters. The proposed technique can be interpreted as a fractional-stage scheme. In the first stage, the forward problem is solved to determine the measurable parameters from known data. In the second stage, an adaptive-control Ensemble Kalman Filter is implemented to assimilate the observation data and obtain an accurate estimate of the topography. The main features of this method are, on the one hand, its ability to handle different complex geometries with no need for any rearrangement of the original model into an explicit form, and on the other hand, its strong stability for simulations of flows in different regimes containing shocks or discontinuities over any geometry. Numerical results are presented for a dam-break flow problem over a non-flat bed using different solvers for the shallow water equations. The robustness of the proposed method is investigated using different numbers of loops, sensitivity parameters, initial samples and observation locations. The obtained results demonstrate the high reliability and accuracy of the proposed technique.
Keywords: erodible beds, finite element method, finite volume method, nonlinear elasticity, shallow water equations, stresses in soil
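For reference, the following is a minimal sketch of a standard (stochastic) Ensemble Kalman Filter analysis step of the kind the second stage relies on; the state dimension, ensemble size, observation operator and error statistics are illustrative placeholders, and the adaptive-control aspects of the paper's scheme are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

n_state, n_obs, n_ens = 50, 10, 30      # bed-elevation unknowns, observations, ensemble size
H = np.zeros((n_obs, n_state))          # observation operator: sample the state at 10 locations
H[np.arange(n_obs), np.linspace(0, n_state - 1, n_obs, dtype=int)] = 1.0
R = 0.01 * np.eye(n_obs)                # observation error covariance

X = rng.normal(size=(n_state, n_ens))   # forecast ensemble (placeholder for forward-model output)
y = rng.normal(size=n_obs)              # free-surface observations (placeholder)

# Ensemble statistics
x_mean = X.mean(axis=1, keepdims=True)
A = X - x_mean                                   # ensemble anomalies
P = A @ A.T / (n_ens - 1)                        # sample forecast covariance

# Kalman gain and analysis update with perturbed observations
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
Y = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, size=n_ens).T
X_a = X + K @ (Y - H @ X)                        # analysis ensemble

print("analysis mean of first unknowns:", X_a.mean(axis=1)[:5])
```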
Procedia PDF Downloads 128
30716 An Investigation of a Three-Dimensional Constitutive Model of Gas Diffusion Layers in Polymer Electrolyte Membrane Fuel Cells
Authors: Yanqin Chen, Chao Jiang, Chongdu Cho
Abstract:
This research presents the three-dimensional mechanical characteristics of a commercial gas diffusion layer obtained from experiment and simulation. Although the mechanical performance of gas diffusion layers has attracted much attention, its reliable and accurate characterization is still a major challenge. Simulation-based analysis benefits the gas diffusion layer's extensive commercial development and the overall stress analysis of proton electrolyte membrane fuel cells during the pre-production design period. Therefore, in this paper, a three-dimensional constitutive model of a commercial gas diffusion layer, including its material stiffness matrix parameters, is developed and implemented as a user-defined material model in commercial finite element software for simulation. The model is then validated by comparing experimental results with simulation outcomes. The experimental data and simulation results show good agreement with each other, with high accuracy.
Keywords: gas diffusion layer, proton electrolyte membrane fuel cell, stiffness matrix, three-dimensional mechanical characteristics, user-defined material model
Procedia PDF Downloads 157
30715 Some Basic Problems for the Elastic Material with Voids in the Case of Approximation N=1 of Vekua's Theory
Authors: Bakur Gulua
Abstract:
In this work, we consider some boundary value problems for a plate. The plate is an elastic material with voids. The state of plate equilibrium is described by a system of differential equations derived from the three-dimensional equations of equilibrium of an elastic material with voids (Cowin-Nunziato model) by Vekua's reduction method. Its general solution is represented by means of analytic functions of a complex variable and solutions of Helmholtz equations. The problem is solved analytically by the method of the theory of functions of a complex variable.
Keywords: elastic material with voids, boundary value problems, Vekua's reduction method, complex variable
Procedia PDF Downloads 125
30714 An Output Oriented Super-Efficiency Model for Considering Time Lag Effect
Authors: Yanshuang Zhang, Byungho Jeong
Abstract:
There exists some time lag between the consumption of inputs and the production of outputs. This time lag effect should be considered when calculating the efficiency of decision making units (DMUs). Recently, a couple of DEA models were developed to consider the time lag effect in the efficiency evaluation of research activities. However, these models cannot discriminate between efficient DMUs because of the nature of the basic DEA model, in which efficiency scores are limited to 1. This problem can be resolved by a super-efficiency model. However, a super-efficiency model sometimes causes infeasibility problems. This paper suggests an output-oriented super-efficiency model for efficiency evaluation under consideration of the time lag effect. A case example using a long-term research project is given to compare the suggested model with the MpO model.
Keywords: DEA, super-efficiency, time lag, research activities
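As a rough illustration of the core super-efficiency calculation (without the time-lag treatment proposed in the paper), the sketch below solves an output-oriented CCR super-efficiency linear program for each DMU with SciPy; the input/output data are made-up placeholders.

```python
import numpy as np
from scipy.optimize import linprog

# Placeholder data: 3 inputs x 5 DMUs, 2 outputs x 5 DMUs (not from the paper)
X = np.array([[4.0, 6.0, 5.0, 8.0, 7.0],
              [3.0, 2.0, 4.0, 5.0, 3.0],
              [5.0, 7.0, 6.0, 9.0, 8.0]])
Y = np.array([[6.0, 8.0, 7.0, 9.0, 5.0],
              [4.0, 5.0, 6.0, 7.0, 3.0]])

def output_oriented_super_efficiency(X, Y, o):
    """Solve max phi s.t. sum_{j!=o} lam_j x_j <= x_o and sum_{j!=o} lam_j y_j >= phi*y_o."""
    m, n = X.shape
    s = Y.shape[0]
    others = [j for j in range(n) if j != o]
    # Decision variables: z = [phi, lam_1, ..., lam_{n-1}]; linprog minimizes, so use -phi
    c = np.r_[-1.0, np.zeros(n - 1)]
    # Input constraints:  sum_j lam_j x_ij <= x_io
    A_in = np.hstack([np.zeros((m, 1)), X[:, others]])
    b_in = X[:, o]
    # Output constraints: phi*y_ro - sum_j lam_j y_rj <= 0
    A_out = np.hstack([Y[:, [o]], -Y[:, others]])
    b_out = np.zeros(s)
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                  bounds=[(0, None)] * n, method="highs")
    return res.x[0] if res.success else None   # None signals an infeasible LP

for o in range(X.shape[1]):
    print(f"DMU {o}: super-efficiency phi = {output_oriented_super_efficiency(X, Y, o)}")
```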
Procedia PDF Downloads 655
30713 Different Methods of Fe3O4 Nano Particles Synthesis
Authors: Arezoo Hakimi, Afshin Farahbakhsh
Abstract:
Herein, we compare Fe3O4 synthesized using the hydrothermal method, mechanochemical processing, and the solvent thermal method. The hydrothermal technique has been the most popular one, gathering interest from scientists and technologists of different disciplines, particularly in the last fifteen years. The hydrothermal method yields Fe3O4 microspheres in which many nearly monodisperse spherical particles with diameters of about 400 nm are assembled; in the mechanochemical method, the regular morphology indicates that the particles are well crystallized; and in the solvent thermal method, the Fe3O4 nanoparticles have uniform size and good dispersion.
Keywords: Fe3O4 nanoparticles, hydrothermal method, mechanochemical processes, solvent thermal method
Procedia PDF Downloads 349
30712 Study of the Relationship between the Roughness Configuration of Channel Bottom and the Creation of Vortices at the Rough Area: Numerical Modelling
Authors: Youb Said, Fourar Ali
Abstract:
To describe the influence of bottom roughness on free surface flows by numerical modeling, a two-dimensional model was developed. The continuity and momentum (Navier-Stokes) equations are solved by the finite volume method. We considered turbulent flow in an open channel with bottom roughness. For our simulations, the k-ε model was used. After setting the initial and boundary conditions and solving the equation set, we obtained the following results: vortices form in the hollows, causing substantial energy dissipation in the obstacle areas that form the bottom roughness. The comparison of our results with experimental ones shows good agreement in the rough area. However, in other areas the differences were more or less pronounced; these areas are far from the bottom, especially the free surface region just after the roughness. The disagreements are probably due to the empirical constants used in the k-ε model.
Keywords: modeling, free surface flow, turbulence, bottom roughness, finite volume, k-ε model, energy dissipation
Procedia PDF Downloads 380
30711 Investigations of Bergy Bits and Ship Interactions in Extreme Waves Using Smoothed Particle Hydrodynamics
Authors: Mohammed Islam, Jungyong Wang, Dong Cheol Seo
Abstract:
The Smoothed Particle Hydrodynamics (SPH) method is a novel, meshless, Lagrangian numerical technique that has shown promise in accurately predicting the hydrodynamics of water-structure interactions in violent flow conditions. The main goal of this study is to build confidence in the versatility of an SPH-based tool, so that it can be used as a complement to physical model testing capabilities and support research needs for the performance evaluation of ships and offshore platforms exposed to extreme and harsh environments. In the current endeavor, an open-source SPH-based tool was used and validated for modeling and predicting the hydrodynamic interactions of a 6-DOF ship and bergy bits. The study involved modeling a modern generic drillship and simplified bergy bits in floating and towing scenarios and in regular and irregular wave conditions. The predictions were validated using model-scale measurements of a moored ship towed at multiple oblique angles approaching a floating bergy bit in waves. Overall, this study results in a thorough comparison between the model-scale measurements and the predictions of the SPH tool in terms of performance and accuracy. The SPH-predicted ship motions and forces were mostly within ±5% of the measurements. The velocity and pressure distributions and the wave characteristics over the free surface depict realistic interactions of the wave, the ship, and the bergy bit. This work identifies and presents several challenges in preparing the input file, particularly in defining the mass properties of complex geometry, the computational requirements, and the post-processing of the outcomes.
Keywords: SPH, ship and bergy bit, hydrodynamic interactions, model validation, physical model testing
Procedia PDF Downloads 130
30710 Validation of the Formal Model of Web Services Applications for Digital Reference Service of Library Information System
Authors: Zainab Magaji Musa, Nordin M. A. Rahman, Julaily Aida Jusoh
Abstract:
The web services applications for digital reference service (WSDRS) of the library information system (LIS) model is an informal model that claims to reduce the problems of digital reference services in libraries. It uses web services technology to provide an efficient way of satisfying users' needs in the reference section of libraries. The formal WSDRS model consists of the Z specifications of all the informal specifications of the model. This paper discusses the formal validation of the Z specifications of the WSDRS model. The authors formally verify, and thus validate, the properties of the model using the Z/EVES theorem prover.
Keywords: validation, verification, formal, theorem prover
Procedia PDF Downloads 513
30709 An Information Matrix Goodness-of-Fit Test of the Conditional Logistic Model for Matched Case-Control Studies
Authors: Li-Ching Chen
Abstract:
The case-control design has been widely applied in clinical and epidemiological studies to investigate the association between risk factors and a given disease. The retrospective design can be easily implemented and is more economical than prospective studies. To adjust effects for confounding factors, methods such as stratification at the design stage may be adopted. When some major confounding factors are difficult to quantify, a matching design provides an opportunity for researchers to control the confounding effects. The matching effects can be parameterized by the intercepts of logistic models, and conditional logistic regression analysis is then adopted. This study demonstrates an information-matrix-based goodness-of-fit statistic to test the validity of the logistic regression model for matched case-control data. The asymptotic null distribution of the proposed test statistic is derived. The test needs neither a simulation to evaluate its critical values nor a partition of the covariate space. The asymptotic power of the test statistic is also derived. The performance of the proposed method is assessed through simulation studies. A real data example is presented to illustrate the implementation of the proposed method.
Keywords: conditional logistic model, goodness-of-fit, information matrix, matched case-control studies
Procedia PDF Downloads 290
30708 Text-to-Speech in Azerbaijani Language via Transfer Learning in a Low Resource Environment
Authors: Dzhavidan Zeinalov, Bugra Sen, Firangiz Aslanova
Abstract:
Most text-to-speech models cannot operate well in low-resource languages and require a large amount of high-quality training data to be considered good enough. Yet, with the improvements made in ASR systems, it is now much easier than ever to collect data for the design of custom text-to-speech models. In this work, we outline how an ASR model was used to collect data to build a viable text-to-speech system for one of the leading financial institutions of Azerbaijan. NVIDIA's implementation of the Tacotron 2 model was utilized along with the HiFiGAN vocoder. For training, the model was first trained with high-quality audio data collected from the Internet, then fine-tuned on the bank's single-speaker call center data. The results were then evaluated by 50 different listeners and received a mean opinion score of 4.17, showing that our method is indeed viable. With this, we have successfully designed the first text-to-speech model in Azerbaijani and publicly shared 12 hours of audiobook data for everyone to use.
Keywords: Azerbaijani language, HiFiGAN, Tacotron 2, text-to-speech, transfer learning, whisper
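The transfer-learning step described above, pretraining on public audio and then fine-tuning on in-domain speech, follows the usual pattern of loading pretrained weights, optionally freezing early layers, and continuing training at a low learning rate. The sketch below shows that generic pattern in PyTorch with a placeholder model and dataset; it is not the actual NVIDIA Tacotron 2 training code, and all names, shapes and paths are hypothetical.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder acoustic model standing in for Tacotron 2 (hypothetical architecture)
class TinyAcousticModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(64, 128)   # stand-in for the text encoder
        self.decoder = nn.Linear(128, 80)   # stand-in for the mel-spectrogram decoder
    def forward(self, x):
        return self.decoder(torch.relu(self.encoder(x)))

model = TinyAcousticModel()

# 1) Load weights pretrained on the large public dataset (path is hypothetical).
# model.load_state_dict(torch.load("pretrained_azerbaijani_tts.pt"))

# 2) Freeze the encoder so fine-tuning mainly adapts the decoder to the target speaker.
for p in model.encoder.parameters():
    p.requires_grad = False

# 3) Fine-tune on the small in-domain (call center) set at a low learning rate.
finetune_data = TensorDataset(torch.randn(256, 64), torch.randn(256, 80))  # placeholder tensors
loader = DataLoader(finetune_data, batch_size=32, shuffle=True)
optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=1e-4)
loss_fn = nn.MSELoss()

for epoch in range(5):
    for inputs, targets in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(inputs), targets)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```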
Procedia PDF Downloads 43
30707 Machine Learning Development Audit Framework: Assessment and Inspection of Risk and Quality of Data, Model and Development Process
Authors: Jan Stodt, Christoph Reich
Abstract:
The usage of machine learning models for prediction is growing rapidly, and proof that the intended requirements are met is essential. Audits are a proven method for determining whether requirements or guidelines are met. However, machine learning models have intrinsic characteristics, such as the quality of training data, that make it difficult to demonstrate the required behavior and make audits more challenging. This paper describes an ML audit framework that evaluates and reviews the risks of machine learning applications, the quality of the training data, and the machine learning model itself. We evaluate and demonstrate the functionality of the proposed framework by auditing a steel plate fault prediction model.
Keywords: audit, machine learning, assessment, metrics
Procedia PDF Downloads 268
30706 Extreme Value Modelling of Ghana Stock Exchange Indices
Authors: Kwabena Asare, Ezekiel N. N. Nortey, Felix O. Mettle
Abstract:
Modelling of extreme events has always been of interest in fields such as hydrology and meteorology. However, after the recent global financial crises, appropriate models for modelling the rare events leading to such crises have become quite essential in the finance and risk management fields. This paper models the extreme values of the Ghana Stock Exchange All-Share indices (2000-2010) by applying Extreme Value Theory (EVT) to fit a model to the tails of the daily stock returns data. A conditional approach to EVT was preferred, and hence an ARMA-GARCH model was fitted to the data to correct for the effects of autocorrelation and conditional heteroscedasticity present in the returns series before the EVT method was applied. The Peak Over Threshold (POT) approach of EVT, which fits a Generalized Pareto Distribution (GPD) to excesses above a selected threshold, was employed. Maximum likelihood estimates of the model parameters were obtained, and the model's goodness of fit was assessed graphically using Q-Q, P-P and density plots. The findings indicate that the GPD provides an adequate fit to the data of excesses. The size of extreme daily Ghanaian stock market movements was then computed using the Value at Risk (VaR) and Expected Shortfall (ES) risk measures at some high quantiles, based on the fitted GPD model.
Keywords: extreme value theory, expected shortfall, generalized Pareto distribution, peak over threshold, value at risk
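As a minimal sketch of the POT step described here, the code below fits a GPD to losses exceeding a high threshold and converts the fit into VaR and ES estimates via the standard POT formulas; the return series is a random placeholder, and the ARMA-GARCH pre-filtering used in the paper is omitted (it could be done beforehand, e.g. with the arch package).

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)
losses = -rng.standard_t(df=4, size=2500) * 0.01   # placeholder daily losses (negative returns)

q = 0.99                                  # quantile of interest for VaR / ES
u = np.quantile(losses, 0.95)             # threshold: 95th percentile of losses
excesses = losses[losses > u] - u
n, n_u = losses.size, excesses.size

# Fit GPD(xi, sigma) to the threshold excesses by maximum likelihood (location fixed at 0)
xi, _, sigma = genpareto.fit(excesses, floc=0)

# POT-based risk measures (valid for xi < 1 and xi != 0)
var_q = u + sigma / xi * ((n / n_u * (1 - q)) ** (-xi) - 1)
es_q = var_q / (1 - xi) + (sigma - xi * u) / (1 - xi)

print(f"threshold u = {u:.4f}, xi = {xi:.3f}, sigma = {sigma:.4f}")
print(f"VaR_{q:.2f} = {var_q:.4f}, ES_{q:.2f} = {es_q:.4f}")
```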
Procedia PDF Downloads 556
30705 Geopotential Models Evaluation in Algeria Using Stochastic Method, GPS/Leveling and Topographic Data
Authors: M. A. Meslem
Abstract:
For precise geoid determination, a reference field is used to subtract the long- and medium-wavelength components of the gravity field from the observation data when the remove-compute-restore technique is applied. Therefore, a comparison study of candidate models should be made in order to select the optimal reference gravity field. In this context, two recent global geopotential models have been selected to perform this comparison study over Northern Algeria: the Earth Gravitational Model (EGM2008) and the Global Gravity Model (GECO), the latter conceived as a combination of the former with the anomalous potential derived from a GOCE satellite-only global model. Free-air gravity anomalies in the area under study were used to compute residual data using both gravity field models, and a Digital Terrain Model (DTM) was used to subtract the residual terrain effect from the gravity observations. The residual data were used to generate local empirical covariance functions, which were fitted to the closed form in order to compare their statistical behavior in both cases. Finally, height anomalies were computed from both geopotential models and compared to a set of GPS-levelled benchmark points using least squares adjustment. The results, described in detail in this paper, point to a slight overall advantage of the GECO global model through the comparison of error degree variances and ground-truth evaluation.
Keywords: quasigeoid, gravity anomalies, covariance, GGM
Procedia PDF Downloads 136
30704 Random Subspace Ensemble of CMAC Classifiers
Authors: Somaiyeh Dehghan, Mohammad Reza Kheirkhahan Haghighi
Abstract:
The rapid growth of domains whose data have a large number of features while the number of samples is limited has made it difficult to construct strong classifiers. Reducing the dimensionality of the feature space therefore becomes an essential step in the classification task. The random subspace method (or attribute bagging) is an ensemble classifier consisting of several classifiers, in which each base learner in the ensemble is trained on a subset of the features. In the present paper, we introduce a Random Subspace Ensemble of CMAC neural networks (RSE-CMAC), each of which is trained on a subset of the features, and we use this model for the classification task. To evaluate the performance of our model, we compare it with the bagging algorithm on 36 UCI datasets. The results reveal that the new model has better performance.
Keywords: classification, random subspace, ensemble, CMAC neural network
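To make the random subspace idea concrete, the following sketch trains each base learner on a random subset of features and combines them by majority vote. A decision tree is used as a stand-in base learner because CMAC networks are not available in scikit-learn, and the dataset is synthetic; both are illustrative assumptions rather than the paper's setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
X, y = make_classification(n_samples=500, n_features=60, n_informative=15, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

n_learners, subspace_size = 25, 20        # each learner sees 20 of the 60 features
learners, subspaces = [], []
for _ in range(n_learners):
    feats = rng.choice(X.shape[1], size=subspace_size, replace=False)
    clf = DecisionTreeClassifier(random_state=0).fit(X_tr[:, feats], y_tr)  # stand-in for CMAC
    learners.append(clf)
    subspaces.append(feats)

# Majority vote over the ensemble's predictions
votes = np.array([clf.predict(X_te[:, feats]) for clf, feats in zip(learners, subspaces)])
y_pred = (votes.mean(axis=0) > 0.5).astype(int)
print("random subspace ensemble accuracy:", accuracy_score(y_te, y_pred))
```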
Procedia PDF Downloads 328
30703 ELD79-LGD2006 Transformation Techniques Implementation and Accuracy Comparison in Tripoli Area, Libya
Authors: Jamal A. Gledan, Othman A. Azzeidani
Abstract:
During the last decade, Libya established a new geodetic datum called the Libyan Geodetic Datum 2006 (LGD2006) using GPS, whereas ground traversing was used to establish the previous Libyan datum, the Europe Libyan Datum 79 (ELD79). The current research paper introduces ELD79-to-LGD2006 coordinate transformation techniques and an accuracy comparison between multiple regression equations and the three-parameter (Bursa-Wolf) transformation model. The results obtained show that the overall accuracy of the stepwise multiple regression equations is better than that obtained using the Bursa-Wolf transformation model.
Keywords: geodetic datum, horizontal control points, traditional similarity transformation model, unconventional transformation techniques
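As an illustration of the three-parameter (translation-only) transformation mentioned above, the sketch below estimates the translation vector by least squares from common points given in both datums and applies it to a new point. The geocentric coordinates are made-up placeholders, not actual ELD79/LGD2006 control point values.

```python
import numpy as np

# Hypothetical geocentric Cartesian coordinates (metres) of common points in both datums
src = np.array([[5_012_345.0, 1_234_567.0, 3_456_789.0],    # "source datum" coordinates
                [5_013_210.0, 1_235_400.0, 3_455_900.0],
                [5_011_800.0, 1_233_900.0, 3_457_200.0]])
dst = src + np.array([-87.0, 112.0, 45.0]) + np.random.default_rng(0).normal(0, 0.3, src.shape)

# For a translation-only (three-parameter) model, the least-squares estimate of
# (dX, dY, dZ) is simply the mean of the coordinate differences.
t_hat = (dst - src).mean(axis=0)
residuals = dst - (src + t_hat)

print("estimated translation (dX, dY, dZ):", np.round(t_hat, 2))
print("RMS residual per axis [m]:", np.round(np.sqrt((residuals**2).mean(axis=0)), 3))

# Transform a new point from the source datum to the target datum
new_point = np.array([5_012_900.0, 1_234_800.0, 3_456_500.0])
print("transformed point:", np.round(new_point + t_hat, 2))
```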
Procedia PDF Downloads 304
30702 Enhanced Method of Conceptual Sizing of Aircraft Electro-Thermal De-Icing System
Authors: Ahmed Shinkafi, Craig Lawson
Abstract:
There is great progress towards All-Electric Aircraft (AEA) technology. The AEA concept assumes that all aircraft systems will be integrated into one electrical power source in the future. The principle of the electro-thermal system is to transfer the energy required for anti-/de-icing to the protected areas in electrical form. However, powering a large aircraft anti-icing system electrically could be quite excessive in terms of cost and system weight. Hence, maximising the anti-/de-icing efficiency of the electro-thermal system in order to minimise its power demand has become crucial to electro-thermal de-icing system sizing. In this work, an enhanced methodology has been developed for conceptual sizing of an aircraft electro-thermal de-icing system. The work accounts for critical terms, overlooked in previous studies, that strongly affect de-icing energy consumption. A case study of a typical large aircraft wing de-icing was used to test and validate the model. The model was used to optimise the system performance through a trade-off between the de-icing peak power and system energy consumption. The predicted optimum melting surface temperatures and energy flux enabled a reduction in the power required for de-icing. The weight penalty associated with the electro-thermal anti-icing/de-icing method could be eliminated using this method without underestimating the de-icing power requirement.
Keywords: aircraft, de-icing system, electro-thermal, in-flight icing
Procedia PDF Downloads 516
30701 Study of Natural Convection Heat Transfer of Plate-Fin Heat Sink
Authors: Han-Taw Chen, Tzu-Hsiang Lin, Chung-Hou Lai
Abstract:
This study applies the inverse method and three-dimensional commercial CFD software, in conjunction with experimental temperature data, to investigate the heat transfer and fluid flow characteristics of a plate-fin heat sink in a rectangular closed enclosure. The inverse method with the finite difference method and the experimental temperature data is applied to determine the approximate heat transfer coefficient. Then, based on the obtained results, the zero-equation turbulence model is used to obtain the heat transfer and fluid flow characteristics between two fins. To validate the accuracy of the results obtained, a comparison of the heat transfer coefficients is made. The temperature obtained at selected measurement locations on the fin is also compared with experimental data. The effect of the height of the rectangular enclosure on the obtained results is discussed.
Keywords: inverse method, Fluent, heat transfer characteristics, plate-fin heat sink
Procedia PDF Downloads 387
30700 Towards Efficient Reasoning about Families of Class Diagrams Using Union Models
Authors: Tejush Badal, Sanaa Alwidian
Abstract:
Class diagrams are useful tools within the Unified Modelling Language (UML) for modelling and visualizing the relationships between, and properties of, objects within a system. As a system evolves over time and space (e.g., across products), a series of models with several commonalities and variabilities creates what is known as a model family. In circumstances where there are several versions of a model, examining each model individually becomes expensive in terms of computation resources. To avoid performing redundant operations, this paper proposes an approach for representing a family of class diagrams as a union model, a single generic model that captures the whole family. The paper aims to analyze and reason about a family of class diagrams using union models, as opposed to an individual analysis of each member model in the family. The union algorithm provides a holistic view of the model family that cannot otherwise be obtained from an individual analysis approach. This, in turn, enhances the analysis by reducing the time needed to analyze a family of models together, as opposed to analyzing individual models one at a time.
Keywords: analysis, class diagram, model family, unified modeling language, union model
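As a toy illustration of the union idea (not the paper's algorithm), the sketch below merges two class-diagram versions, represented as dictionaries of classes and attributes, into a single structure that annotates each element with the set of models it appears in.

```python
# Two hypothetical versions of a class diagram: class name -> set of attributes
model_v1 = {"Order": {"id", "date"}, "Customer": {"id", "name"}}
model_v2 = {"Order": {"id", "date", "status"}, "Invoice": {"id", "amount"}}

def union_model(models):
    """Merge class diagrams into one generic model, annotating element provenance."""
    union = {}
    for version, model in models.items():
        for cls, attrs in model.items():
            entry = union.setdefault(cls, {"models": set(), "attributes": {}})
            entry["models"].add(version)
            for attr in attrs:
                entry["attributes"].setdefault(attr, set()).add(version)
    return union

u = union_model({"v1": model_v1, "v2": model_v2})
for cls, info in u.items():
    print(cls, "appears in", sorted(info["models"]))
    for attr, versions in sorted(info["attributes"].items()):
        print("   ", attr, "appears in", sorted(versions))
```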
Procedia PDF Downloads 72
30699 Model for Introducing Products to New Customers through Decision Tree Using Algorithm C4.5 (J-48)
Authors: Komol Phaisarn, Anuphan Suttimarn, Vitchanan Keawtong, Kittisak Thongyoun, Chaiyos Jamsawang
Abstract:
This article analyzes insurance data that contain information on customer decisions when purchasing a life insurance pay package. The data were analyzed in order to present new customers with the Life Insurance Perfect Pay package that meets their needs as closely as possible. The basic data on the insurance pay package were collected for data mining, thus reducing the scattering of information. The data were then classified in order to obtain a decision model, or decision tree, using algorithm C4.5 (J-48). In the classification, WEKA tools are used to build the model, and testing datasets are used to test the decision tree for accurate decisions. Validation of the model's classification showed that 68.43% of predictions were accurate, while 31.25% were errors. The same set of data was then tested with other models, i.e. Naive Bayes and ZeroR. The results showed that the J-48 method predicted more accurately. The researchers therefore applied the decision tree in writing the program used to introduce the product to new customers and support their decision making when purchasing the insurance package that meets their needs as much as possible.
Keywords: decision tree, data mining, customers, life insurance pay package
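For orientation, a minimal decision-tree classification pipeline of the kind described here is sketched below with scikit-learn. Note that J-48 is WEKA's Java implementation of C4.5; scikit-learn's tree (CART with an entropy criterion) is only an approximation of it, and the features and data here are hypothetical placeholders rather than the study's insurance records.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.metrics import accuracy_score

# Hypothetical customer records: [age, income, num_dependents, has_existing_policy]
rng = np.random.default_rng(7)
X = np.column_stack([rng.integers(20, 70, 400),
                     rng.normal(30_000, 8_000, 400),
                     rng.integers(0, 5, 400),
                     rng.integers(0, 2, 400)])
y = (X[:, 1] + 5_000 * X[:, 3] > 32_000).astype(int)   # placeholder "buys package" label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=7)

# Entropy-based tree as a rough stand-in for C4.5 / J-48
tree = DecisionTreeClassifier(criterion="entropy", max_depth=4, random_state=7)
tree.fit(X_tr, y_tr)

print("accuracy:", accuracy_score(y_te, tree.predict(X_te)))
print(export_text(tree, feature_names=["age", "income", "dependents", "existing_policy"]))
```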
Procedia PDF Downloads 425
30698 Using Structured Analysis and Design Technique Method for Unmanned Aerial Vehicle Components
Authors: Najeh Lakhoua
Abstract:
Introduction: Scientific developments and techniques for the systemic approach have generated several names for it: systems analysis, structural analysis. The main purpose of these reflections is to find a multi-disciplinary approach which organizes knowledge, creates a universal design language and controls complex sets. In fact, system analysis is structured sequentially in steps: observation of the system by various observers from various aspects, analysis of interactions and regulatory chains, modeling that takes into account the evolution of the system, simulation, and real tests in order to obtain consensus. Thus the systems approach allows two types of analysis, according to the structure and the function of the system. The purpose of this paper is to present an application of system analysis of Unmanned Aerial Vehicle (UAV) components in order to represent the architecture of this system. Method: Various analysis methods have been proposed in the literature to carry out global analysis from different points of view, such as the SADT method (Structured Analysis and Design Technique) and Petri nets. The methodology adopted in this paper to contribute to the system analysis of an Unmanned Aerial Vehicle is based on the use of SADT. In fact, we present a functional analysis, based on the SADT method, of the UAV components (body, power supply and platform, computing, sensors, actuators, software, loop principles, flight controls and communications). Results: In this part, we present the application of the SADT method for the functional analysis of the UAV components. This SADT model is composed exclusively of actigrams. It starts with the main function 'To analyze the UAV components'. Then this function is broken into sub-functions, and the process is developed until the last decomposition level has been reached (levels A1, A2, A3 and A4). Recall that SADT techniques are semi-formal; for the same subject, different correct models can be built without knowing with certitude which model is good or, at least, the best. In fact, this kind of model allows users sufficient freedom in its construction, and so the subjective factor introduces a supplementary dimension for its validation. That is why the validation step as a whole necessitates the confrontation of different points of view. Conclusion: In this paper, we presented an application of system analysis of Unmanned Aerial Vehicle components. This application is based on the SADT method (Structured Analysis and Design Technique). The functional analysis proved the usefulness of the SADT method and its ability to describe complex dynamic systems.
Keywords: system analysis, unmanned aerial vehicle, functional analysis, architecture
Procedia PDF Downloads 202
30697 Clustering-Based Threshold Model for Condition Rating of Concrete Bridge Decks
Authors: M. Alsharqawi, T. Zayed, S. Abu Dabous
Abstract:
To ensure the safety and serviceability of bridge infrastructure, accurate condition assessment and rating methods are needed to provide a basis for bridge Maintenance, Repair and Replacement (MRR) decisions. In North America, the common practice for assessing the condition of bridges is visual inspection. This practice is limited to detecting surface defects and external flaws. Further, the thresholds that define the severity of bridge deterioration are selected arbitrarily. The current research discusses the main deteriorations and defects identified during visual inspection and Non-Destructive Evaluation (NDE). NDE techniques are becoming popular in augmenting the visual examination during inspection to detect subsurface defects. Quality inspection data and accurate condition assessment and rating are the basis for determining appropriate MRR decisions. Thus, in this paper, a novel method for bridge condition assessment using Quality Function Deployment (QFD) theory is utilized. The QFD model is designed to provide an integrated condition by evaluating both the surface and subsurface defects of concrete bridges. Moreover, an integrated condition rating index with four thresholds is developed based on the QFD condition assessment model and the K-means clustering technique. Twenty case studies are analyzed by applying the QFD model and implementing the developed rating index. The results from the analyzed case studies show that the proposed threshold model produces robust MRR recommendations consistent with decisions and recommendations made by bridge managers on these projects. The proposed method is expected to advance the state of the art of bridge condition assessment and rating.
Keywords: concrete bridge decks, condition assessment and rating, quality function deployment, k-means clustering technique
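A minimal sketch of how K-means can turn integrated condition scores into four rating thresholds is given below; the scores and the 0-100 scale are invented placeholders, not the paper's QFD outputs.

```python
import numpy as np
from sklearn.cluster import KMeans

# Placeholder integrated QFD condition scores (0-100) for a set of bridge decks
scores = np.array([12, 18, 22, 35, 38, 41, 55, 58, 63, 67, 72, 80, 85, 91]).reshape(-1, 1)

# Cluster the scores into four condition categories
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(scores)
centers = np.sort(km.cluster_centers_.ravel())

# Derive thresholds as midpoints between adjacent cluster centers
thresholds = (centers[:-1] + centers[1:]) / 2
print("cluster centers:", np.round(centers, 1))
print("rating thresholds:", np.round(thresholds, 1))

def rating(score):
    """Map a condition score to a rating category 1 (worst) .. 4 (best)."""
    return 1 + int(np.searchsorted(thresholds, score))

print([rating(s) for s in (15, 40, 60, 88)])
```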
Procedia PDF Downloads 222
30696 A Study on Improvement of the Electromagnetic Vibration of a Polygon Mirror Scanner Motor
Authors: Yongmin You
Abstract:
Electric machines for office automation devices such as printers and scanners are required to have low noise and vibration. Much research on the low noise and vibration of polygon mirror scanner motors has also been carried out. The noise and vibration of a polygon mirror scanner motor can be classified as aerodynamic, structural, or electromagnetic. Electromagnetic noise and vibration can be caused by high cogging torque and a non-sinusoidal back EMF. To improve the cogging torque and back EMF characteristics, we apply an unequal air gap. To analyze the characteristics of the polygon mirror scanner motor, the two-dimensional finite element method is used. To minimize the cogging torque of the polygon mirror motor, Kriging based on Latin hypercube sampling (LHS) is utilized. Compared to the initial model, the torque ripple of the optimized unequal air-gap model was reduced by 23.4% while maintaining the back EMF and average torque. To verify the optimal design results, an experiment was performed. We measured the vibration of the motors at the rated speed of 23,600 rpm. The radial and axial vibration accelerations of the optimal model decreased by more than a factor of seven and a factor of three, respectively. From these results, the shape-optimized unequal air-gap polygon mirror scanner motor demonstrates the usefulness of the approach in improving the torque ripple and electromagnetic vibration characteristics.
Keywords: polygon mirror scanner motor, optimal design, finite element method, vibration
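The optimization loop described here (sample designs by LHS, fit a Kriging surrogate, and pick the design minimizing the predicted cogging torque) can be sketched as follows; the design variables, bounds and the analytic stand-in for the finite-element cogging-torque evaluation are all hypothetical.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def cogging_torque(x):
    """Hypothetical stand-in for the 2-D FEM cogging-torque evaluation."""
    g_min, g_max, notch = x.T   # assumed design variables: air-gap extremes and a notch depth
    return (g_min - 0.35)**2 + 0.5 * (g_max - 0.55)**2 + 0.2 * np.sin(8 * notch)**2

# Latin hypercube sample of 20 candidate designs within (assumed) bounds
sampler = qmc.LatinHypercube(d=3, seed=0)
lb, ub = [0.3, 0.4, 0.0], [0.5, 0.7, 1.0]
X = qmc.scale(sampler.random(n=20), lb, ub)
y = cogging_torque(X)

# Kriging (Gaussian process) surrogate of the cogging torque
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), normalize_y=True).fit(X, y)

# Search the surrogate on a dense LHS grid and report the best predicted design
cand = qmc.scale(qmc.LatinHypercube(d=3, seed=1).random(n=5000), lb, ub)
pred = gp.predict(cand)
best = cand[np.argmin(pred)]
print("predicted optimum design:", np.round(best, 3), "-> torque", pred.min().round(4))
```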
Procedia PDF Downloads 341
30695 Detection of Cardiac Arrhythmia Using Principal Component Analysis and Xgboost Model
Authors: Sujay Kotwale, Ramasubba Reddy M.
Abstract:
Electrocardiogram (ECG) is a non-invasive technique used to study and analyze various heart diseases. Cardiac arrhythmia is a serious heart disease that leads to the death of patients when left untreated. Early detection of cardiac arrhythmia would help doctors provide proper treatment. In the past, various algorithms and machine learning (ML) models were used for early detection of cardiac arrhythmia, but few of them achieved good results. In order to improve the performance, this paper implements principal component analysis (PCA) along with an XGBoost model. PCA was applied to the raw ECG signals to suppress redundant information and extract significant features. The obtained significant ECG features were fed into the XGBoost model, and the performance of the model was evaluated. In order to validate the proposed technique, raw ECG signals obtained from the standard MIT-BIH database were employed for the analysis. The results show that the performance of the proposed method is superior to several state-of-the-art techniques.
Keywords: cardiac arrhythmia, electrocardiogram, principal component analysis, XGBoost
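A minimal PCA-plus-XGBoost pipeline of the kind described can be sketched as follows; synthetic data stand in for the MIT-BIH beats, and the number of components and model settings are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

# Synthetic placeholder for segmented ECG beats (rows) and their arrhythmia labels
X, y = make_classification(n_samples=2000, n_features=180, n_informative=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Project the raw beats onto the leading principal components to remove redundancy
pca = PCA(n_components=20).fit(X_tr)
Z_tr, Z_te = pca.transform(X_tr), pca.transform(X_te)

# Gradient-boosted tree classifier on the PCA features
clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1, eval_metric="logloss")
clf.fit(Z_tr, y_tr)

print("explained variance:", pca.explained_variance_ratio_.sum().round(3))
print("test accuracy:", accuracy_score(y_te, clf.predict(Z_te)))
```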
Procedia PDF Downloads 117
30694 Numerical Crashworthiness Investigations of a Full-Scale Composite Fuselage Section
Authors: Redouane Lombarkia
Abstract:
This work applies a new material model, developed and validated for plain-weave fabric CFRP composites commonly used in stanchions in the sub-cargo section of aircraft. It deals with the development of a numerical model of the fuselage section of a commercial aircraft based on the pure explicit finite element method (FEM) within the Abaqus/Explicit commercial code. The aim of this work is the evaluation of the energy absorption capabilities of a full-scale composite fuselage section, including sub-cargo stanchions. Drop tests were carried out from a free-fall height of about 5 m, with an impact velocity of about 6 m/s. To assess the predictive capability of the proposed numerical modeling procedure, a comparison with experimental results from the literature was performed. We demonstrate the efficiency of the proposed methodology in capturing crash damage mechanisms compared to experimental results.
Keywords: crashworthiness, fuselage section, finite elements method (FEM), stanchions, specific energy absorption SEA
Procedia PDF Downloads 93
30693 Optimal Selection of Replenishment Policies Using Distance Based Approach
Authors: Amit Gupta, Deepak Juneja, Sorabh Gupta
Abstract:
This paper presents a model based on the distance based approach (DBA) method, employed for the evaluation, selection, and ranking of replenishment policies for a single-location inventory, which has hitherto not been developed in the literature. This work recognizes the significance of the selection problem and identifies the selection criteria and their relative importance for this research problem. The developed model is capable of comparing any number of alternative inventory policies against various selection criteria, where cardinal values are assigned as ratings of the alternative policies for the selection criteria, together with weights of the selection criteria. The illustrated example demonstrates the model and presents the result in terms of a ranking of replenishment policies.
Keywords: DBA, ranking, replenishment policies, selection criteria
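One simple way to realize a distance-based ranking, normalize the decision matrix, weight the criteria, and rank alternatives by their weighted distance from the ideal point, is sketched below; the policies, criteria values and weights are invented for illustration, and this generic scheme may differ in detail from the paper's DBA formulation.

```python
import numpy as np

policies = ["(s, Q)", "(s, S)", "(R, S)", "(R, s, S)"]
# Hypothetical criteria values per policy: [total cost, service level, ease of implementation]
D = np.array([[120.0, 0.92, 7.0],
              [135.0, 0.95, 6.0],
              [110.0, 0.90, 8.0],
              [140.0, 0.97, 5.0]])
weights = np.array([0.5, 0.3, 0.2])
benefit = np.array([False, True, True])   # cost is to be minimized, the others maximized

# Normalize each criterion to [0, 1], flipping cost-type criteria so 1 is always best
N = (D - D.min(axis=0)) / (D.max(axis=0) - D.min(axis=0))
N[:, ~benefit] = 1.0 - N[:, ~benefit]

# Weighted Euclidean distance from the ideal point (all criteria at 1)
dist = np.sqrt((weights * (1.0 - N) ** 2).sum(axis=1))
for rank, i in enumerate(np.argsort(dist), start=1):
    print(f"rank {rank}: {policies[i]} (distance {dist[i]:.3f})")
```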
Procedia PDF Downloads 155
30692 Two-Dimensional Analysis and Numerical Simulation of the Navier-Stokes Equations for Principles of Turbulence around Isothermal Bodies Immersed in Incompressible Newtonian Fluids
Authors: Romulo D. C. Santos, Silvio M. A. Gama, Ramiro G. R. Camacho
Abstract:
In this paper, the thermo-fluid dynamics of mixed convection (combined natural and forced convection) and the principles of turbulent flow around complex geometries have been studied. In these applications, it is necessary to analyze the interaction between the flow field and a heated immersed body with constant temperature on its surface. This paper presents a study of a two-dimensional incompressible Newtonian fluid around an isothermal geometry using the immersed boundary method (IBM) with the virtual physical model (VPM). The numerical code used for all simulations computes the temperature with Dirichlet boundary conditions. Important dimensionless numbers are calculated, such as the Strouhal number, obtained using the Fast Fourier Transform (FFT), the Nusselt number, the drag and lift coefficients, velocity and pressure. Streamlines and isothermal lines are presented for each simulation, showing the flow dynamics and patterns. The Navier-Stokes and energy equations for mixed convection were discretized using the finite difference method in space, and second-order Adams-Bashforth and fourth-order Runge-Kutta methods in time, with the fractional step method used to couple the calculation of pressure, velocity, and temperature. For the simulation of turbulence, this work used the Smagorinsky and Spalart-Allmaras models. The first model is based on the local equilibrium hypothesis for small scales and the Boussinesq hypothesis, such that the energy injected into the turbulence spectrum equals the energy dissipated by convective effects. The Spalart-Allmaras model uses only one transport equation for the turbulent viscosity. The results were compared with reference numerical data, validating the heat-transfer treatment together with the turbulence models. The IBM/VPM is a powerful tool to simulate flow around complex geometries. The results showed good numerical convergence in relation to the references adopted.
Keywords: immersed boundary method, mixed convection, turbulence methods, virtual physical model
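As a small worked example of the FFT-based Strouhal number evaluation mentioned above, the sketch below extracts the dominant shedding frequency from a lift-coefficient time series and forms St = f D / U; the signal, body diameter and free-stream velocity are synthetic placeholders.

```python
import numpy as np

# Placeholder lift-coefficient history from a vortex-shedding simulation
dt = 0.01                      # time step of the sampled signal [s]
t = np.arange(0, 50, dt)
f_shed = 1.2                   # "true" shedding frequency hidden in the signal [Hz]
cl = 0.8 * np.sin(2 * np.pi * f_shed * t) + 0.05 * np.random.default_rng(0).normal(size=t.size)

# FFT of the fluctuating lift; the dominant peak gives the shedding frequency
spectrum = np.abs(np.fft.rfft(cl - cl.mean()))
freqs = np.fft.rfftfreq(cl.size, d=dt)
f_dom = freqs[np.argmax(spectrum)]

D, U = 0.1, 1.0                # body diameter [m] and free-stream velocity [m/s] (assumed)
St = f_dom * D / U
print(f"dominant frequency = {f_dom:.3f} Hz, Strouhal number = {St:.4f}")
```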
Procedia PDF Downloads 114
30691 Climate Changes in Albania and Their Effect on Cereal Yield
Authors: Lule Basha, Eralda Gjika
Abstract:
This study focuses on analyzing climate change in Albania and its potential effects on cereal yields. Initially, monthly temperatures and rainfall in Albania were studied for the period 1960-2021. Climatic variables are important when modelling cereal yield behavior, especially when significant changes in weather conditions are observed. For this purpose, in the second part of the study, linear and nonlinear models explaining cereal yield are constructed for the same period, 1960-2021. Multiple linear regression analysis and the lasso regression method are applied to the data, relating cereal yield to each independent variable: average temperature, average rainfall, fertilizer consumption, arable land, land under cereal production, and nitrous oxide emissions. In our regression model, heteroscedasticity is not observed, the data follow a normal distribution, and there is low correlation between factors, so we do not have a multicollinearity problem. Machine-learning methods, such as random forest, are used to predict cereal yield responses to climatic and other variables. Random forest showed high accuracy compared to the other statistical models in the prediction of cereal yield. We found that changes in average temperature negatively affect cereal yield. The coefficients of fertilizer consumption, arable land, and land under cereal production positively affect production. Our results show that the random forest method is an effective and versatile machine-learning method for cereal yield prediction compared to the other two methods.
Keywords: cereal yield, climate change, machine learning, multiple regression model, random forest
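A compact version of the modelling pipeline described, linear/lasso regression plus a random forest on the same predictors, is sketched below with synthetic data; the variable names follow the abstract, but the values and model settings are placeholders rather than the study's 1960-2021 series.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression, Lasso
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(3)
n = 62   # one row per year, 1960-2021
df = pd.DataFrame({
    "avg_temperature": rng.normal(12.5, 0.8, n),
    "avg_rainfall": rng.normal(95, 20, n),
    "fertilizer": rng.normal(80, 15, n),
    "arable_land": rng.normal(560, 30, n),
    "cereal_land": rng.normal(140, 15, n),
    "n2o_emissions": rng.normal(6, 1, n),
})
# Placeholder yield: falls with temperature, rises with fertilizer and cereal land
df["cereal_yield"] = (30 - 1.5 * df["avg_temperature"] + 0.05 * df["fertilizer"]
                      + 0.03 * df["cereal_land"] + rng.normal(0, 0.5, n))

X, y = df.drop(columns="cereal_yield"), df["cereal_yield"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=3)

for name, model in [("linear", LinearRegression()),
                    ("lasso", Lasso(alpha=0.1)),
                    ("random forest", RandomForestRegressor(n_estimators=300, random_state=3))]:
    model.fit(X_tr, y_tr)
    print(f"{name}: R^2 = {r2_score(y_te, model.predict(X_te)):.3f}")
```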
Procedia PDF Downloads 89