Search results for: statistical criterion.
955 Optimisation of a Dragonfly-Inspired Flapping Wing-Actuation System
Authors: Jia-Ming Kok, Javaan Chahl
Abstract:
An optimisation method using both global and local optimisation is implemented to determine the flapping profile which will produce the most lift for an experimental wing-actuation system. The optimisation method is tested using a numerical quasi-steady analysis. Results of an optimised flapping profile show a 20% increase in lift generated as compared to flapping profiles obtained by high-speed cinematography of a Sympetrum frequens dragonfly. Initial optimisation procedures required 3166 objective function evaluations. The global optimisation parameters, initial sample size and stage one sample size, were altered to reduce the number of function evaluations. Altering the stage one sample size had no significant effect. It was found that reducing the initial sample size to 400 would allow a reduction in computational effort to approximately 1500 function evaluations without compromising the global solver's ability to locate potential minima. To further reduce the optimisation effort required, we increased the local solver's convergence tolerance criterion. An increase in the tolerance from 0.02 N to 0.05 N decreased the number of function evaluations by another 20%. However, this potentially reduces the maximum obtainable lift by up to 0.025 N.
Keywords: Flapping wing, Optimisation, Quasi-steady model.
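The abstract does not name the specific solvers used, so the following is only a minimal sketch of the two-stage idea: a global search followed by a local refinement whose objective tolerance can be loosened to save function evaluations. It assumes SciPy's differential_evolution and Nelder-Mead, and a toy lift function standing in for the quasi-steady model; all parameter names and values are illustrative.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

# Toy stand-in for the quasi-steady lift model: the real objective would
# evaluate the lift produced by a candidate flapping profile. Negated
# because the solvers minimise.
def negative_lift(x):
    amplitude, frequency, phase = x
    return -(amplitude * np.sin(frequency) * np.cos(phase) - 0.1 * amplitude**2)

bounds = [(0.0, 1.0), (0.0, np.pi), (-np.pi, np.pi)]

# Stage 1: global search. 'popsize' plays the role of the initial sample
# size discussed above; reducing it lowers the number of evaluations.
global_result = differential_evolution(negative_lift, bounds, popsize=15, seed=0)

# Stage 2: local refinement. Loosening 'fatol' (the objective tolerance,
# analogous to the 0.02 N -> 0.05 N change) trades accuracy for fewer
# function evaluations.
local_result = minimize(negative_lift, global_result.x, method="Nelder-Mead",
                        options={"fatol": 0.05, "xatol": 1e-3})

print("optimal profile parameters:", local_result.x)
print("maximum lift estimate:", -local_result.fun)
```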
954 Speaker Identification by Joint Statistical Characterization in the Log Gabor Wavelet Domain
Authors: Suman Senapati, Goutam Saha
Abstract:
Real-world Speaker Identification (SI) applications differ from ideal or laboratory conditions, causing perturbations that lead to a mismatch between the training and testing environments and degrade performance drastically. Many strategies have been adopted to cope with acoustical degradation; the wavelet-based Bayesian marginal model is one of them. However, Bayesian marginal models cannot capture the inter-scale statistical dependencies between different wavelet scales. Simple nonlinear estimators for wavelet-based denoising assume that the wavelet coefficients in different scales are independent in nature, yet wavelet coefficients have significant inter-scale dependency. This paper exploits this inter-scale dependency property through a Circularly Symmetric Probability Density Function (CS-PDF) related to the family of Spherically Invariant Random Processes (SIRPs) in the Log Gabor Wavelet (LGW) domain, and the corresponding joint shrinkage estimator is derived as a Maximum a Posteriori (MAP) estimator. A framework based on these is proposed to denoise speech signals for automatic speaker identification problems. The robustness of the proposed framework is tested for a Text Independent Speaker Identification application on 100 speakers of the POLYCOST and 100 speakers of the YOHO speech databases in three different noise environments. Experimental results show that the proposed estimator yields a higher improvement in identification accuracy compared to other estimators on a popular Gaussian Mixture Model (GMM) based speaker model with Mel-Frequency Cepstral Coefficient (MFCC) features.
Keywords: Speaker Identification, Log Gabor Wavelet, Bayesian Bivariate Estimator, Circularly Symmetric Probability Density Function, SIRP.
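The joint shrinkage estimator in the log Gabor wavelet domain is not reproduced here. As a hedged sketch of only the baseline classifier named in the abstract, the snippet below trains one GMM per speaker on MFCC features using librosa and scikit-learn; file layout, model sizes and sampling rate are assumptions, not the paper's settings.

```python
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_features(wav_path, sr=16000, n_mfcc=13):
    """Load a speech file and return frame-wise MFCC vectors."""
    y, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.T  # shape: (frames, n_mfcc)

def train_speaker_models(train_files):
    """train_files: dict mapping speaker id -> list of wav paths (illustrative)."""
    models = {}
    for speaker, paths in train_files.items():
        feats = np.vstack([mfcc_features(p) for p in paths])
        gmm = GaussianMixture(n_components=16, covariance_type="diag",
                              max_iter=200, random_state=0)
        gmm.fit(feats)
        models[speaker] = gmm
    return models

def identify(models, wav_path):
    """Return the speaker whose GMM gives the highest average log-likelihood."""
    feats = mfcc_features(wav_path)
    scores = {spk: gmm.score(feats) for spk, gmm in models.items()}
    return max(scores, key=scores.get)
```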
953 Remaining Useful Life Estimation of Bearings Based on Nonlinear Dimensional Reduction Combined with Timing Signals
Authors: Zhongmin Wang, Wudong Fan, Hengshan Zhang, Yimin Zhou
Abstract:
In data-driven prognostic methods, the prediction accuracy of remaining useful life estimation for bearings mainly depends on the performance of the health indicators, which are usually fused from statistical features extracted from vibration signals. However, the existing health indicators have two drawbacks: (1) statistical features with different ranges contribute differently to the constructed health indicator, and expert knowledge is required to extract the features; (2) when convolutional neural networks are used to extract time-frequency features of the signals, the time-series nature of the signals is not considered. To overcome these drawbacks, in this study, a method combining a convolutional neural network with a gated recurrent unit is proposed to extract time-frequency image features. The extracted features are used to construct a health indicator and predict the remaining useful life of bearings. First, the original signals are converted into time-frequency images by using the continuous wavelet transform so as to form the original feature sets. Second, with the convolutional and pooling layers of the convolutional neural network, the most sensitive features of the time-frequency images are selected from the original feature sets. Finally, these selected features are fed into the gated recurrent unit to construct the health indicator. The results show that the proposed method outperforms related studies that used the same bearing dataset provided by PRONOSTIA.
Keywords: Continuous wavelet transform, convolutional neural network, gated recurrent unit, health indicators, remaining useful life.
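As a rough sketch of the architecture described (a CNN over each wavelet scalogram feeding a GRU that outputs a health indicator per time step), the PyTorch model below is illustrative only: the layer counts, channel widths and input size are assumptions, not the configuration used in the study.

```python
import torch
import torch.nn as nn

class CnnGruHealthIndicator(nn.Module):
    """Illustrative CNN + GRU network: the CNN extracts features from each
    time-frequency image, the GRU models their temporal order, and a linear
    head outputs a scalar health indicator per time step."""
    def __init__(self, hidden_size=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.gru = nn.GRU(input_size=16 * 4 * 4, hidden_size=hidden_size,
                          batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):
        # x: (batch, seq_len, 1, H, W) sequence of wavelet scalograms
        b, t, c, h, w = x.shape
        feats = self.cnn(x.view(b * t, c, h, w)).view(b, t, -1)
        out, _ = self.gru(feats)
        return self.head(out).squeeze(-1)  # (batch, seq_len) health indicator

# Smoke test with random 32x32 scalograms, sequence length 20
model = CnnGruHealthIndicator()
hi = model(torch.randn(2, 20, 1, 32, 32))
print(hi.shape)  # torch.Size([2, 20])
```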
952 The Quality Assessment of Seismic Reflection Survey Data Using Statistical Analysis: A Case Study of Fort Abbas Area, Cholistan Desert, Pakistan
Authors: U. Waqas, M. F. Ahmed, A. Mehmood, M. A. Rashid
Abstract:
In geophysical exploration surveys, the quality of acquired data holds significant importance before executing the data processing and interpretation phases. In this study, 2D seismic reflection survey data of the Fort Abbas area, Cholistan Desert, Pakistan was taken as a test case in order to assess its quality on a statistical basis by using the normalized root mean square error (NRMSE), Cronbach's alpha test (α) and null hypothesis tests (t-test and F-test). The analysis challenged the quality of the acquired data and highlighted significant errors in the acquired database. The study area is known to be a plain, tectonically least affected and rich in oil and gas reserves. However, subsurface 3D modeling and contouring using the acquired database revealed a high degree of structural complexity and intense folding. The NRMSE showed the highest percentage of residuals between the estimated and predicted cases. The outcomes of hypothesis testing also confirmed the bias and erratic nature of the acquired database. The low estimated value of alpha (α) in Cronbach's alpha test confirmed the poor reliability of the acquired database. A database of very low quality needs excessive static correction or, in some cases, reacquisition of data, which is most of the time not feasible on economic grounds. The outcomes of this study could be used to assess the quality of large databases and could further serve as a guideline for establishing database quality assessment models to make much more informed decisions in the hydrocarbon exploration field.
Keywords: Data quality, null hypothesis, seismic lines, seismic reflection survey.
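A minimal sketch of the four statistics named in the abstract (NRMSE, Cronbach's alpha, t-test and F-test), using NumPy and SciPy. The input arrays are illustrative values, not the survey data.

```python
import numpy as np
from scipy import stats

def nrmse(observed, predicted):
    """Normalized root mean square error, normalized by the observed range."""
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    rmse = np.sqrt(np.mean((observed - predicted) ** 2))
    return rmse / (observed.max() - observed.min())

def cronbach_alpha(items):
    """Cronbach's alpha for an (observations x items) matrix."""
    items = np.asarray(items)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Illustrative estimated vs. predicted attribute values (not survey data)
observed = np.array([2.1, 2.4, 2.0, 2.8, 3.1, 2.9])
predicted = np.array([2.0, 2.6, 2.2, 2.7, 3.3, 2.7])

print("NRMSE:", nrmse(observed, predicted))
print("Cronbach's alpha:", cronbach_alpha(np.column_stack([observed, predicted])))

# Null-hypothesis tests: paired t-test and a two-sided F-test of variances
t_stat, t_p = stats.ttest_rel(observed, predicted)
f_stat = observed.var(ddof=1) / predicted.var(ddof=1)
dfn, dfd = len(observed) - 1, len(predicted) - 1
f_p = 2 * min(stats.f.cdf(f_stat, dfn, dfd), stats.f.sf(f_stat, dfn, dfd))
print("t-test:", t_stat, t_p, "F-test:", f_stat, f_p)
```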
951 A Mathematical Representation for Mechanical Model Assessment: Numerical Model Qualification Method
Authors: Keny Ordaz-Hernandez, Xavier Fischer, Fouad Bennis
Abstract:
This article illustrates a model selection management approach for virtual prototypes in interactive simulations. In those numerical simulations, the virtual prototype and its environment are modelled as a multi-agent system, where every entity (prototype, human, etc.) is modelled as an agent. In particular, virtual prototyping agents that provide mathematical models of mechanical behaviour in the form of computational methods are considered. This work argues that the selection of an appropriate model in a changing environment, supported by the models' characteristics, can be managed by determining a priori specific exploitation and performance measures of virtual prototype models. As different models exist to represent a single phenomenon, it is not always possible to select the best one under all possible circumstances of the environment. Instead, the most appropriate one shall be selected according to the use case. The proposed approach consists in identifying relevant metrics or indicators for each group of models (e.g. entity models, global model), formulating their qualification, analysing the performance, and applying the qualification criteria. Then, a model can be selected based on the performance prediction obtained from its qualification. The authors hope that this approach will not only inform engineers and researchers about another approach for selecting virtual prototype models, but also assist virtual prototype engineers in systematic or automatic model selection.
Keywords: Virtual prototype models, domain, qualification criterion, model qualification, model assessment, environmental modelling.
950 Waist Circumference-Related Performance of Tense Indices during Varying Pediatric Obesity States and Metabolic Syndrome
Authors: Mustafa M. Donma
Abstract:
Obesity increases the risk of elevated blood pressure, which is a metabolic syndrome (MetS) component. Waist circumference (WC) is accepted as an indispensable parameter for the evaluation of these health problems. The close relationship of height with blood pressure values revealed the necessity of including height in tense indices. The association of tense indices with WC has also become an increasingly important topic. The purpose of this study was to develop a tense index that could contribute to the differential diagnosis of MetS more than the indices previously introduced. A total of 194 children, aged 6-11 years, were assigned to four groups. The study was performed on normal weight (Group 1), overweight + obese (Group 2), and morbid obese children without (Group 3) and with (Group 4) MetS findings. Children were included in the groups according to the recommendations of the World Health Organization based on age- and gender-dependent body mass index percentiles. For the MetS group, well-established MetS components were considered. Anthropometric measurements as well as blood pressure values were taken, and statistical calculations were performed. A p value below 0.05 was accepted as indicating statistical significance. There were no statistically significant differences between the groups for pulse pressure, systolic-to-diastolic pressure ratio and tense index. Increasing values were observed from Group 1 to Group 4 in terms of mean arterial blood pressure and ADTI, which was highly correlated with WC in all groups except Group 1. Both the tense index and ADTI exhibited significant correlations with WC in Group 3. However, in Group 4, ADTI, which includes the height parameter in its equation, was unique in establishing a strong correlation with WC. In conclusion, ADTI is suggested as a tense index while investigating children with MetS.
Keywords: Blood pressure, metabolic syndrome, waist circumference.
949 Through Biometric Card in Romania: Person Identification by Face, Fingerprint and Voice Recognition
Authors: Hariton N. Costin, Iulian Ciocoiu, Tudor Barbu, Cristian Rotariu
Abstract:
In this paper three different approaches for person verification and identification, i.e. by means of fingerprint, face and voice recognition, are studied. Face recognition uses parts-based representation methods and a manifold learning approach. The assessment criterion is recognition accuracy. The techniques under investigation are: a) Local Non-negative Matrix Factorization (LNMF); b) Independent Components Analysis (ICA); c) NMF with sparse constraints (NMFsc); d) Locality Preserving Projections (Laplacianfaces). Fingerprint detection was approached by classical minutiae (small graphical patterns) matching through image segmentation, using a structural approach and a neural network as the decision block. As to voice/speaker recognition, mel cepstral and delta-delta mel cepstral analysis were used as the main methods in order to construct a supervised speaker-dependent voice recognition system. The final decision (e.g. "accept-reject" for a verification task) is taken by using a majority voting technique applied to the three biometrics. The preliminary results, obtained for medium databases of fingerprints, faces and voice recordings, indicate the feasibility of our study and an overall recognition precision (about 92%) permitting the utilization of our system for a future complex biometric card.
Keywords: Biometry, image processing, pattern recognition, speech analysis.
948 A Study of Two Disease Models: With and Without Incubation Period
Authors: H. C. Chinwenyi, H. D. Ibrahim, J. O. Adekunle
Abstract:
The incubation period is defined as the time from infection with a microorganism to the development of symptoms. In this research, two disease models, one with an incubation period and another without, were studied. The study involves the use of a mathematical model with a single incubation period. Tests for the existence and stability of the disease-free and the endemic equilibrium states for both models were carried out. The fourth-order Runge-Kutta method was used to solve both models numerically. Finally, a computer program in MATLAB was developed to run the numerical experiments. From the results, we are able to show that the endemic equilibrium state of the model with incubation period is locally asymptotically stable, whereas the endemic equilibrium state of the model without incubation period is unstable under certain conditions on the given model parameters. It was also established that the disease-free equilibrium states of the models with and without incubation period are locally asymptotically stable. Furthermore, results from numerical experiments using empirical data obtained from the Nigeria Centre for Disease Control (NCDC) showed that the overall population of infected people for the model with incubation period is higher than that without incubation period. We also established from the results obtained that as the transmission rate from the susceptible to the infected population increases, the peak values of the infected population for the model with incubation period decrease and are always less than those for the model without incubation period.
Keywords: Asymptotic stability, incubation period, Routh-Hurwitz criterion, Runge-Kutta method.
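A minimal Python sketch (the paper itself used MATLAB) of the classical fourth-order Runge-Kutta scheme applied to a simple SIR model without an incubation period and an SEIR variant with an exposed class. The rate constants and initial conditions are illustrative, not the NCDC-fitted values.

```python
import numpy as np

def rk4(f, y0, t0, t1, h):
    """Classical fourth-order Runge-Kutta integration of dy/dt = f(t, y)."""
    ts = np.arange(t0, t1 + h, h)
    ys = [np.asarray(y0, dtype=float)]
    for t in ts[:-1]:
        y = ys[-1]
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        ys.append(y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4))
    return ts, np.array(ys)

beta, gamma, sigma = 0.4, 0.1, 0.2  # illustrative transmission/recovery/incubation rates

def sir(t, y):           # model without incubation period
    S, I, R = y
    return np.array([-beta * S * I, beta * S * I - gamma * I, gamma * I])

def seir(t, y):          # model with incubation period (exposed class E)
    S, E, I, R = y
    return np.array([-beta * S * I, beta * S * I - sigma * E,
                     sigma * E - gamma * I, gamma * I])

_, y_sir = rk4(sir, [0.99, 0.01, 0.0], 0, 160, 0.5)
_, y_seir = rk4(seir, [0.99, 0.0, 0.01, 0.0], 0, 160, 0.5)
print("peak infected without incubation:", y_sir[:, 1].max())
print("peak infected with incubation:   ", y_seir[:, 2].max())
```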
947 Urbanization and Income Inequality in Thailand
Authors: Acumsiri Tantiakrnpanit
Abstract:
This paper aims to examine the relationship between urbanization and income inequality in Thailand during the period 2002–2020, using a panel of data for 76 provinces collected from Thailand’s National Statistical Office (Labor Force Survey: LFS), as well as geospatial data from the U.S. Air Force Defense Meteorological Satellite Program (DMSP) and the Visible Infrared Imaging Radiometer Suite Day/Night band (VIIRS-DNB) satellite for 19 selected years. This paper employs two different definitions to identify urban areas: 1) Urban areas defined by Thailand's National Statistical Office (LFS), and 2) Urban areas estimated using nighttime light data from the DMSP and VIIRS-DNB satellite. The second method includes two sub-categories: 2.1) Determining urban areas by calculating nighttime light density with a population density of 300 people per square kilometer, and 2.2) Calculating urban areas based on nighttime light density corresponding to a population density of 1,500 people per square kilometer. The empirical analysis based on Ordinary Least Squares (OLS), fixed effects, and random effects models reveals a consistent U-shaped relationship between income inequality and urbanization. The findings from the econometric analysis demonstrate that urbanization or population density has a significant and negative impact on income inequality. Moreover, the square of urbanization shows a statistically significant positive impact on income inequality. Additionally, there is a negative association between logarithmically transformed income and income inequality. This paper also proposes the inclusion of satellite imagery, geospatial data, and spatial econometric techniques in future studies to conduct quantitative analysis of spatial relationships.
Keywords: Income inequality, nighttime light, population density, Thailand, urbanization.
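A hedged sketch of the pooled OLS specification implied by the abstract: inequality regressed on urbanization, its square and log income, which should yield a negative linear and positive quadratic term under a U-shaped relationship. The data below are synthetic and the variable names are assumptions, not the LFS or nighttime-light variables.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic province-year panel standing in for the LFS / nighttime-light data
rng = np.random.default_rng(0)
n = 500
urban = rng.uniform(0, 1, n)                  # urbanization share
log_income = rng.normal(10, 0.5, n)
# U-shaped relationship plus noise, mimicking the reported pattern
gini = (0.5 - 0.3 * urban + 0.25 * urban**2
        - 0.01 * log_income + rng.normal(0, 0.02, n))

df = pd.DataFrame({"gini": gini, "urban": urban,
                   "urban_sq": urban**2, "log_income": log_income})

X = sm.add_constant(df[["urban", "urban_sq", "log_income"]])
ols = sm.OLS(df["gini"], X).fit()
print(ols.summary())   # expect a negative 'urban' and positive 'urban_sq' coefficient
```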
946 Comparison of Finite Difference Schemes for Water Flow in Unsaturated Soils
Authors: H. Taheri Shahraiyni, B. Ataie Ashtiani
Abstract:
Flow movement in unsaturated soil can be expressed by a partial differential equation named the Richards equation. The objective of this study is to find an appropriate implicit numerical solution for the head-based Richards equation. Some of the well-known finite difference schemes (fully implicit, Crank-Nicolson and Runge-Kutta) have been utilized in this study. In addition, the effects of different approximations of the moisture capacity function, convergence criteria and time stepping methods were evaluated. Two different infiltration problems were solved to investigate the performance of the different schemes. These problems involve vertical water flow in a wet and a very dry soil. The numerical solutions of the two problems were compared using four evaluation criteria, and the results of the comparisons showed that the fully implicit scheme is better than the other schemes. In addition, utilizing the standard chord slope method for approximation of the moisture capacity function, an automatic time stepping method and the difference between two successive iterations as the convergence criterion in the fully implicit scheme can lead to better and more reliable results for the simulation of fluid movement in different unsaturated soils.
Keywords: Finite difference methods, Richards equation, fully implicit, Crank-Nicolson, Runge-Kutta.
945 Seamless Handover in Urban 5G-UAV Systems Using Entropy Weighted Method
Authors: Anirudh Sunil Warrier, Saba Al-Rubaye, Dimitrios Panagiotakopoulos, Gokhan Inalhan, Antonios Tsourdos
Abstract:
The demand for increased data transfer rates and network traffic capacity has given rise to the concept of heterogeneous networks. Heterogeneous networks are wireless networks consisting of devices using different underlying radio access technologies (RAT). For Unmanned Aerial Vehicles (UAVs), this enhanced data rate and network capacity are even more critical, especially in applications such as medicine, delivery missions and the military. In an urban heterogeneous network environment, the UAVs must be able to switch seamlessly from one base station (BS) to another to maintain a reliable link. Therefore, seamless handover in such urban environments has become a major challenge. In this paper, a scheme to achieve seamless handover is developed: an algorithm based on the Received Signal Strength (RSS) criterion is used for network selection, and the Entropy Weighted Method (EWM) is implemented for decision making. Seamless handover using EWM decision-making is demonstrated successfully for a UAV moving across fifth generation (5G) and long-term evolution (LTE) networks via a simulation-level analysis. Thus, a solution for UAV-5G communication, specifically the mobility challenge in heterogeneous networks, is provided, and this work could act as a step forward in making UAV-5G architecture integration a possibility.
Keywords: Air to ground, A2G, fifth generation, 5G, handover, mobility, unmanned aerial vehicle, UAV, urban environments.
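A minimal sketch of the Entropy Weighted Method itself: criterion weights are derived from the diversity of each criterion across candidate base stations and then used in a weighted-sum score. The candidate attributes (normalised RSS, bandwidth, inverse load) and their values are illustrative assumptions, not the paper's decision matrix.

```python
import numpy as np

def entropy_weights(X):
    """Entropy Weighted Method: rows are candidate base stations, columns are
    benefit-type criteria. Returns the criterion weights."""
    X = np.asarray(X, dtype=float)
    P = X / X.sum(axis=0)                       # normalise each criterion column
    m = X.shape[0]
    P_safe = np.where(P > 0, P, 1.0)            # avoid log(0); those terms contribute 0
    E = -(P * np.log(P_safe)).sum(axis=0) / np.log(m)   # entropy per criterion
    d = 1.0 - E                                 # degree of diversification
    return d / d.sum()

# Illustrative candidates: [normalised RSS, available bandwidth, 1/load]
candidates = np.array([[0.8, 0.6, 0.5],    # 5G cell A
                       [0.6, 0.9, 0.7],    # 5G cell B
                       [0.7, 0.4, 0.9]])   # LTE cell

w = entropy_weights(candidates)
scores = candidates @ w                     # weighted-sum score per candidate
print("weights:", w, "handover target index:", int(np.argmax(scores)))
```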
944 Flutter Analysis of Slender Beams with Variable Cross Sections Based on Integral Equation Formulation
Authors: Z. El Felsoufi, L. Azrar
Abstract:
This paper studies a mathematical model based on integral equations for the dynamic analysis and numerical investigation of a non-uniform or multi-material composite beam. The beam is subjected to a sub-tangential follower force and an elastic foundation. The boundary conditions are represented by generalized parameterized fixations using linear and rotary springs. A mathematical formulation based on Euler-Bernoulli beam theory is presented for beams with variable cross-sections. The non-uniform section introduces non-uniformity in the rigidity and inertia of the beam and, consequently, a more complicated governing equilibrium equation. Using the boundary element method and radial basis functions, the equation of motion is reduced to an algebro-differential system related to internal and boundary unknowns. Generalized formulas for the deflection, the slope, the moment and the shear force are presented. The free vibration of non-uniform loaded beams is formulated in a compact matrix form, and all needed matrices are explicitly given. The dynamic stability analysis of slender beams is illustrated numerically based on the coalescence criterion. A realistic case related to an industrial chimney is investigated.
Keywords: Chimney, BEM and integral equation formulation, non-uniform cross section, vibration and flutter.
943 Evaluating the Capability of the Flux-Limiter Schemes in Capturing the Turbulence Structures in a Fully Developed Channel Flow
Authors: Mohamed Elghorab, Vendra C. Madhav Rao, Jennifer X. Wen
Abstract:
Turbulence modelling is still evolving, and efforts are ongoing to improve and develop numerical methods that simulate real turbulence structures by using empirical and experimental information. The monotonically integrated large eddy simulation (MILES) is an attractive approach for modelling turbulence in high-Re flows, which is based on solving the unfiltered flow equations with no explicit sub-grid scale (SGS) model. In the current work, this approach has been used, and the action of the SGS model has been included implicitly through the intrinsic nonlinear high-frequency filters built into the convection discretization schemes. The MILES solver is developed using the open-source CFD OpenFOAM libraries. The role of the flux limiter schemes, namely Gamma, superBee, van-Albada and van-Leer, is studied in predicting turbulent statistical quantities for a fully developed channel flow with a friction Reynolds number Reτ = 180, and the numerical predictions are compared with well-established Direct Numerical Simulation (DNS) results for studying the wall-generated turbulence. It is inferred from the numerical predictions that the Gamma, van-Leer and van-Albada limiters produced more diffusion and overpredicted the velocity profiles, while the superBee scheme reproduced velocity profiles and turbulence statistical quantities in good agreement with the reference DNS data in the streamwise direction, although it deviated slightly in the spanwise and wall-normal directions. The simulation results are further discussed in terms of the turbulence intensities and Reynolds stresses averaged in time and space to draw conclusions on the flux limiter schemes' performance in the OpenFOAM context.
Keywords: Flux limiters, MILES, OpenFOAM, turbulence structures, TVD schemes.
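For reference, a short sketch of the classical TVD limiter functions ψ(r) named above, with r the ratio of successive gradients of the convected variable, plus the standard way ψ(r) enters a limited face reconstruction. The Gamma scheme is an NVD-based blended scheme specific to OpenFOAM and is not expressed here in ψ(r) form; the demo values are illustrative.

```python
import numpy as np

def superbee(r):
    return np.maximum(0, np.maximum(np.minimum(2 * r, 1), np.minimum(r, 2)))

def van_leer(r):
    return (r + np.abs(r)) / (1 + np.abs(r))

def van_albada(r):
    return np.where(r > 0, (r**2 + r) / (r**2 + 1), 0.0)

def limited_face_value(phi_U, phi_C, phi_D, limiter):
    """Limited face value phi_f = phi_C + 0.5*psi(r)*(phi_D - phi_C) on a
    uniform grid, with upwind (U), central (C) and downwind (D) cell values."""
    denom = phi_D - phi_C
    safe = np.where(np.abs(denom) > 1e-12, denom, 1e-12)
    r = (phi_C - phi_U) / safe
    return phi_C + 0.5 * limiter(r) * (phi_D - phi_C)

r = np.linspace(0, 3, 7)
for lim in (superbee, van_leer, van_albada):
    print(lim.__name__, np.round(lim(r), 3))
```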
942 Revealing Nonlinear Couplings between Oscillators from Time Series
Authors: B.P. Bezruchko, D.A. Smirnov
Abstract:
Quantitative characterization of nonlinear directional couplings between stochastic oscillators from data is considered. We suggest coupling characteristics readily interpreted from a physical viewpoint and their estimators. An expression for a statistical significance level is derived analytically that allows reliable coupling detection from a relatively short time series. Performance of the technique is demonstrated in numerical experiments.
Keywords: Nonlinear time series analysis, directional couplings, coupled oscillators.
941 UEMSD Risk Identification – Case Study
Authors: K. Sekulová, M. Šimon
Abstract:
The article demonstrates, through a case study, how it is possible to identify MSD risk. It is based on the dissertation "Risk identification model of occupational diseases formation in relation to the work activity", which determines which risks can endanger workers who are exposed to specific risk factors. The risk is evaluated based on statistical calculations. These risk factors are the main cause of upper-extremity musculoskeletal disorders.
Keywords: Case study, upper-extremity musculoskeletal disorders, ergonomics.
940 The Impact of Upgrades on ERP System Reliability
Authors: F. Urem, K. Fertalj, I. Livaja
Abstract:
Constant upgrading of Enterprise Resource Planning (ERP) systems is necessary, but it can cause new defects. This paper attempts to model the likelihood of defects after completed upgrades with a Weibull defect probability density function (PDF). A case study is presented analyzing data on recorded defects obtained for one ERP subsystem. The trends are observed for the values of the parameters relevant to the proposed statistical Weibull distribution for a given one-year period. As a result, the ability to predict the appearance of defects after the next upgrade is described.
Keywords: ERP, upgrade, reliability, Weibull model.
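A minimal sketch of fitting a two-parameter Weibull distribution to times-to-defect observed after an upgrade, using SciPy. The data are synthetic and the interpretation comments are generic Weibull reasoning, not the case-study results.

```python
import numpy as np
from scipy import stats

# Synthetic days-to-defect observed after an ERP subsystem upgrade
rng = np.random.default_rng(1)
days_to_defect = rng.weibull(1.4, size=60) * 30   # shape 1.4, scale ~30 days

# Fit a two-parameter Weibull (location fixed at 0)
shape, loc, scale = stats.weibull_min.fit(days_to_defect, floc=0)
print(f"shape k = {shape:.2f}, scale lambda = {scale:.1f} days")

# Probability that a defect appears within the first 14 days after the upgrade
p14 = stats.weibull_min.cdf(14, shape, loc=loc, scale=scale)
print(f"P(defect within 14 days) = {p14:.2f}")

# A shape parameter k < 1 suggests the defect rate decreases with time
# (early failures); k > 1 suggests it increases as the upgraded system ages.
```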
939 Detection of Clipped Fragments in Speech Signals
Authors: Sergei Aleinik, Yuri Matveev
Abstract:
In this paper a novel method for the detection of clipping in speech signals is described. It is shown that the new method has better performance than known clipping detection methods, is easy to implement, and is robust to changes in signal amplitude, size of data, etc. Statistical simulation results are presented.
Keywords: Clipping, clipped signal, speech signal processing.
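The abstract does not disclose the proposed detector, so the snippet below is only a generic histogram-based heuristic often used for clipping detection: clipped frames pile up samples in the extreme amplitude bins. All thresholds and signals are illustrative; this is not the estimator proposed in the paper.

```python
import numpy as np

def clipping_ratio(x, n_bins=200, edge_bins=2):
    """Crude clipping indicator: counts in the extreme amplitude histogram bins
    relative to their immediate neighbours. Generic heuristic, not the paper's method."""
    x = np.asarray(x, dtype=float)
    hist, _ = np.histogram(x, bins=n_bins)
    edges = hist[:edge_bins].sum() + hist[-edge_bins:].sum()
    neighbours = hist[edge_bins:2 * edge_bins].sum() + hist[-2 * edge_bins:-edge_bins].sum()
    return edges / max(neighbours, 1)

# Clean vs. hard-clipped synthetic "speech" frame
t = np.linspace(0, 1, 8000)
clean = 0.6 * np.sin(2 * np.pi * 150 * t) + 0.2 * np.sin(2 * np.pi * 420 * t)
clipped = np.clip(clean, -0.4, 0.4)

print("clean   :", clipping_ratio(clean))
print("clipped :", clipping_ratio(clipped))  # edge bins dominate, giving a much larger ratio
```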
938 Comparing Academically Gifted and Non-Gifted Students' Supportive Environments in Jordan
Authors: Mustafa Qaseem Hielat, Ahmad Mohammad Al-Shabatat
Abstract:
Jordan has exerted many efforts to nurture its academically gifted students in special schools since 2001. During the past nine years since these schools were launched, their learning and excellence environments were believed to be distinguished compared to public schools. This study investigated the environments of gifted students compared with those of non-gifted students, using a survey instrument that measures the dimensions of family, peers, teachers, school support, society, and resources, dimensions rooted deeply in supporting gifted education, learning, and achievement. A total of 109 students were selected from excellence schools for academically gifted students, and 119 non-gifted students were selected from public schools. Around 8.3% of the non-gifted students reported that they "Never" received any support from their surrounding environments, 14.9% reported "Seldom" support, 23.7% reported "Often" support, 26.0% reported "Frequent" support, and 32.8% reported "Very frequent" support. The gifted students reported more "Never" support than the non-gifted did, with 11.3%, "Seldom" support with 15.4%, "Often" support with 26.6%, and "Frequent" support with 29.0%, and reported "Very frequent" support less than the non-gifted students, with 23.6%. Unexpectedly, statistical differences were found between the two groups favoring non-gifted students in the perception of their surrounding environments in specific dimensions, namely school support, teachers, and society. No statistical differences were found in the other dimensions of the survey, namely family, peers, and resources. As the differences were found in teachers, school support, and society, the nurturing environments of the excellence schools need to be revised to adopt more creative teaching styles, richer school atmosphere and infrastructure, interactive guidance for the students and their parents, promotion of the excellence environments, and rebuilt identification models. Thus, families, schools, and society should increase their cooperation, communication, and awareness of gifted supportive environments. However, more studies investigating other aspects of promoting academic giftedness and excellence are recommended.
Keywords: Academic giftedness, Supportive environment, Excellence schools, Gifted grouping, Gifted nurturing.
937 An Axiomatic Model for Development of the Allocated Architecture in Systems Engineering Process
Authors: A. Sharahi, R. Tehrani, A. Mollajan
Abstract:
The final step to complete the "Analytical Systems Engineering Process" is the "Allocated Architecture", in which all Functional Requirements (FRs) of an engineering system must be allocated to their corresponding Physical Components (PCs). At this step, any design for developing the system's allocated architecture in which no clear pattern of assigning the exclusive "responsibility" of each PC for fulfilling the allocated FR(s) can be found is considered a poor design that may cause difficulties in determining the specific PC(s) which has (have) failed to satisfy a given FR successfully. The present study utilizes the principles of the Axiomatic Design method to mathematically address this problem and establishes an "Axiomatic Model" as a solution for reaching good alternatives for developing the allocated architecture. This study proposes a "loss function" as a quantitative criterion to monetarily compare non-ideal designs for developing the allocated architecture and choose the one which imposes a relatively lower cost on the system's stakeholders. For the case study, we use the existing design of the U.S. electricity marketing subsystem, based on data provided by the U.S. Energy Information Administration (EIA). The result for 2012 shows the symptoms of a poor design and ineffectiveness due to coupling among the FRs of this subsystem.
Keywords: Allocated Architecture, Analytical Systems Engineering Process, Functional Requirements (FRs), Physical Components (PCs), Responsibility of a Physical Component, System’s Stakeholders.
936 FEM Models of Glued Laminated Timber Beams Enhanced by Bayesian Updating of Elastic Moduli
Authors: L. Melzerová, T. Janda, M. Šejnoha, J. Šejnoha
Abstract:
Two finite element (FEM) models are presented in this paper to address the random nature of the response of glued timber structures made of wood segments with variable elastic moduli evaluated from 3600 indentation measurements. This total database served to create the same number of ensembles as the number of segments in the tested beam. Statistics of these ensembles were then assigned to the given segments of the beams, and the Latin Hypercube Sampling (LHS) method was called upon to perform 100 simulations, resulting in an ensemble of 100 deflections subjected to statistical evaluation. Here, the detailed geometrical arrangement of individual segments in the laminated beam was considered in the construction of a two-dimensional FEM model subjected to four-point bending to comply with the laboratory tests. Since laboratory measurements of local elastic moduli may in general suffer from significant experimental error, it appears advantageous to exploit the full-scale measurements of the timber beams, i.e. deflections, to improve their prior distributions with the help of the Bayesian statistical method. This, however, requires an efficient computational model when simulating the laboratory tests numerically. To this end, a simplified model based on Mindlin's beam theory was established. The improved posterior distributions show that the most significant change of the Young's modulus distribution takes place in the laminae in the most strained zones, i.e. in the top and bottom layers within the beam center region. Posterior distributions of the moduli of elasticity were subsequently utilized in the 2D FEM model and compared with the original simulations.
Keywords: Bayesian inference, FEM, four point bending test, laminated timber, parameter estimation, prior and posterior distribution, Young’s modulus.
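A hedged sketch of only the sampling step: Latin Hypercube Sampling of per-segment elastic moduli, propagated through a simplified homogeneous-beam four-point-bending formula as a stand-in for the 2D FEM / Mindlin model used in the study. All statistics, loads and dimensions are illustrative assumptions.

```python
import numpy as np
from scipy.stats import qmc, lognorm

# Illustrative per-segment Young's modulus statistics (MPa) from indentation
n_segments, n_sim = 10, 100
mean_E, cov_E = 11000.0, 0.15
sigma = np.sqrt(np.log(1 + cov_E**2))
mu = np.log(mean_E) - 0.5 * sigma**2

# Latin Hypercube Sampling of the segment moduli
sampler = qmc.LatinHypercube(d=n_segments, seed=0)
u = sampler.random(n=n_sim)                        # uniform (0, 1) samples
E_samples = lognorm.ppf(u, s=sigma, scale=np.exp(mu))

# Simplified surrogate of the four-point bending response: mid-span deflection
# of a homogeneous beam using the mean modulus of the sampled segments.
L, a, F, I = 3.0, 1.0, 10e3, 8.0e-5                # span [m], load offset [m], load [N], inertia [m^4]
E_eff = E_samples.mean(axis=1) * 1e6               # Pa
deflection = F * a * (3 * L**2 - 4 * a**2) / (24 * E_eff * I)

print("mean deflection [mm]:", 1e3 * deflection.mean())
print("std  deflection [mm]:", 1e3 * deflection.std(ddof=1))
```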
935 Work Engagement of Malaysian Nurses: Exploring the Impact of Hope and Resilience
Authors: Noraini Othman, Aizzat Mohd Nasurdin
Abstract:
The purpose of this study was to investigate the relationship between hope and resilience with work engagement. A total of 422 staff nurses working in three public hospitals in Peninsular Malaysia participated in this study. Statistical results using regression analysis revealed that hope and resilience were positively related to work engagement. Possible reasons for these findings, as well as their implications and future research directions are discussed.
Keywords: hope, nurses, resilience, work engagement
934 A Software-Supported Methodology for Designing General-Purpose Interconnection Networks for Reconfigurable Architectures
Authors: Kostas Siozios, Dimitrios Soudris, Antonios Thanailakis
Abstract:
Modern applications realized on FPGAs exhibit high connectivity demands. Throughout this paper we study the routing constraints of Virtex devices and propose a systematic methodology for designing a novel general-purpose interconnection network targeting reconfigurable architectures. This network consists of multiple segment wires and SB patterns, appropriately selected and assigned across the device. The goal of the proposed methodology is to maximize the hardware utilization of the fabricated routing resources. The derived interconnection scheme is integrated on a Virtex-style FPGA. This device is characterized both by its high performance and by its low energy requirements. Due to this, the design criterion that guided our architecture selections was the minimal Energy×Delay Product (EDP). The methodology is fully supported by three new software tools, which belong to the MEANDER Design Framework. Using a typical set of MCNC benchmarks, an extensive comparison study in terms of several critical parameters proves the effectiveness of the derived interconnection network. More specifically, we achieve an average Energy×Delay Product reduction of 63%, a performance increase of 26%, a reduction in leakage power of 21%, and a reduction in total energy consumption of 11%, at the expense of a 20% increase in channel width.
Keywords: Design Methodology, FPGA, Interconnection, Low-Energy, High-Performance, CAD tool.
933 Statistical Optimization of Adsorption of a Harmful Dye from Aqueous Solution
Abstract:
Textile industries cater to varied customer preferences and contribute substantially to the economy. However, these textile industries also produce a considerable amount of effluents. Prominent among these are the azo dyes, which impart considerable color and toxicity even at low concentrations. Azo dyes are also used as coloring agents in the food and pharmaceutical industries. Despite their applications, azo dyes are also notorious pollutants and carcinogens. Popular techniques like photo-degradation, biodegradation and the use of oxidizing agents are not applicable for all kinds of dyes, as most of them are stable to these techniques. Chemical coagulation produces a large amount of toxic sludge, which is undesirable, and is also ineffective towards a number of dyes. Most of the azo dyes are stable to UV-visible light irradiation and may even resist aerobic degradation. Adsorption has been the most preferred technique owing to its low cost, high capacity and process efficiency, and the possibility of regenerating and recycling the adsorbent. Adsorption is also preferred because it may produce a treated effluent of high quality and is able to remove different kinds of dyes. However, the adsorption process is influenced by many variables whose inter-dependence makes it difficult to identify optimum conditions. The variables include stirring speed, temperature, initial concentration and adsorbent dosage. Further, the internal diffusional resistance inside the adsorbent particle leads to slow uptake of the solute within the adsorbent. Hence, it is necessary to identify optimum conditions that lead to a high capacity and uptake rate of these pollutants. In this work, commercially available activated carbon was chosen as the adsorbent owing to its high surface area. A typical azo dye found in textile effluent waters, viz. the monoazo Acid Orange 10 dye (CAS: 1936-15-8), has been chosen as the representative pollutant. Adsorption studies were mainly focused on obtaining equilibrium and kinetic data for the batch adsorption process at different process conditions. Studies were conducted at different stirring speed, temperature, adsorbent dosage and initial dye concentration settings. The Full Factorial Design was the chosen statistical design framework for carrying out the experiments and identifying the important factors and their interactions. The optimum conditions identified from the experimental model were validated with actual experiments at the recommended settings. The equilibrium and kinetic data obtained were fitted to different models and the model parameters were estimated. This gives more details about the nature of adsorption taking place. Critical data required to design batch adsorption systems for removal of Acid Orange 10 dye and identification of factors that critically influence the separation efficiency are the key outcomes of this research.
Keywords: Acid Orange 10, Activated carbon, Optimum conditions, Statistical design.
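A minimal sketch of generating a two-level full factorial design for the four factors named above (stirring speed, temperature, adsorbent dosage, initial dye concentration). The low/high levels are illustrative assumptions, not the levels used in the study.

```python
import itertools
import pandas as pd

# Illustrative low/high levels for the four batch-adsorption factors
factors = {
    "stirring_speed_rpm": (200, 600),
    "temperature_C": (25, 45),
    "adsorbent_dose_g_per_L": (0.5, 2.0),
    "initial_dye_mg_per_L": (50, 200),
}

# 2^4 full factorial design: every combination of low/high levels
runs = list(itertools.product(*factors.values()))
design = pd.DataFrame(runs, columns=factors.keys())
design.insert(0, "run", list(range(1, len(design) + 1)))
print(design)          # 16 experimental runs

# After measuring dye removal for each run, main effects can be screened, e.g.
# effect of temperature = mean(removal | high T) - mean(removal | low T).
```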
932 A Web and Cloud-Based Measurement System Analysis Tool for the Automotive Industry
Authors: C. A. Barros, Ana P. Barroso
Abstract:
Any industrial company needs to determine the amount of variation that exists within its measurement process and guarantee the reliability of its data by studying the performance of its measurement system in terms of linearity, bias, repeatability, reproducibility and stability. This issue is critical for automotive industry suppliers, who are required to be certified against the IATF 16949:2016 standard (which replaces ISO/TS 16949) of the International Automotive Task Force, which defines the requirements of a quality management system for companies in the automotive industry. Measurement System Analysis (MSA) is one of the mandatory tools. Frequently, the measurement system in companies is not connected to the equipment and does not incorporate the methods proposed by the Automotive Industry Action Group (AIAG). To address these constraints, an R&D project is in progress, whose objective is to develop a web and cloud-based MSA tool. This MSA tool incorporates Industry 4.0 concepts, such as Internet of Things (IoT) protocols to assure the connection with the measuring equipment, cloud computing, artificial intelligence, statistical tools, and advanced mathematical algorithms. This paper presents the preliminary findings of the project. The web and cloud-based MSA tool is innovative because it implements all statistical tests proposed in the MSA-4 reference manual from AIAG as well as other emerging methods and techniques. As it is integrated with the measuring devices, it reduces the manual input of data and therefore the errors. The tool ensures traceability of all performed tests and can be used in quality laboratories and in the production lines. Besides, it monitors MSAs over time, allowing both the analysis of deviations from the variation of the measurements performed and the management of measurement equipment and calibrations. To develop the MSA tool, a ten-step approach was implemented. Firstly, a benchmarking analysis of the current competitors and commercial solutions linked to MSA was performed, concerning the Industry 4.0 paradigm. Next, an analysis of the size of the target market for the MSA tool was done. Afterwards, data flow and traceability requirements were analysed in order to implement an IoT data network that interconnects with the equipment, preferably wirelessly. The MSA web solution was designed under UI/UX principles, and an API in the Python language was developed to perform the algorithms and the statistical analysis. Continuous validation of the tool by companies is being performed to assure real-time management of the 'big data'. The main results of this R&D project are: the web and cloud-based MSA tool; the Python API; new algorithms brought to the market; and the UI/UX style guide of the tool. The MSA tool proposed adds value to the state of the art as it ensures an effective response to the new challenges of measurement systems, which are increasingly critical in production processes. Although the automotive industry has triggered the development of this innovative MSA tool, other industries would also benefit from it. Currently, companies from the molds and plastics, chemical and food industries are already validating it.
Keywords: Automotive industry, Industry 4.0, internet of things, IATF 16949:2016, measurement system analysis.
931 Shannon-Weaver Biodiversity of Neutrophils in Fractal Networks of Immunofluorescence for Medical Diagnostics
Authors: N.E.Galich
Abstract:
We develop new nonlinear methods of immunofluorescence analysis for a sensitive technology of the respiratory burst reaction of DNA fluorescence due to oxidative activity in peripheral blood neutrophils. Histograms in flow cytometry experiments represent the frequency of fluorescence flashes as a function of fluorescence intensity. We used the Shannon-Weaver index to define the biodiversity of neutrophils and the Hurst index to define fractal correlations in immunofluorescence for different donors, as the basic quantitative criteria for medical diagnostics of health status. We analyze the frequencies of flashes, information, Shannon entropies and their fractals in immunofluorescence networks under reduction of the histogram range. We found a number of simple universal correlations for biodiversity, information and the Hurst index in the diagnostics and classification of pathologies for wide spectra of diseases. In addition, a clear criterion of common immunity and human health status is determined in the form of yes/no answers. These answers are based on peculiarities of information in immunofluorescence networks and the biodiversity of neutrophils. Experimental data analysis has shown the existence of homeostasis for information entropy in the oxidative activity of DNA in neutrophil nuclei for all donors.
Keywords: Blood and cells fluorescence in diagnostics of diseases, cytometric histograms, entropy and information in fractal networks of oxidative activity of DNA, long-range chromosomal correlations in living cells.
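A minimal sketch of the Shannon-Weaver index computed from a flow-cytometry-style fluorescence intensity histogram. The counts are synthetic and the shared binning is an assumption made for illustration.

```python
import numpy as np

def shannon_index(counts, base=np.e):
    """Shannon-Weaver index H = -sum(p_i * log p_i) over histogram bins."""
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return -(p * np.log(p)).sum() / np.log(base)

# Synthetic fluorescence-flash counts per intensity bin for two "donors",
# histogrammed on a shared set of bin edges
rng = np.random.default_rng(2)
edges = np.linspace(0, 250, 65)
donor_a = np.histogram(rng.normal(100, 15, 5000), bins=edges)[0]   # narrow spread
donor_b = np.histogram(rng.normal(100, 40, 5000), bins=edges)[0]   # broad spread

print("H (narrow distribution):", round(shannon_index(donor_a), 3))
print("H (broad distribution): ", round(shannon_index(donor_b), 3))
# A broader spread of oxidative activity across intensity bins gives a larger
# index, i.e. a higher "biodiversity" of the neutrophil population.
```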
930 Effect of Confinement on the Bearing Capacity and Settlement of Spread Foundations
Authors: Tahsin Toma Sabbagh, Ihsan Al-Abboodi, Ali Al-Jazaairry
Abstract:
Allowable bearing capacity is the ability of soil to safely carry the pressure from the superstructure without experiencing shear failure or accompanying excessive settlements. Ensuring a safe bearing pressure with respect to failure does not guarantee that settlement of the foundation will be within acceptable limits. Therefore, settlement analysis should always be performed, since most structures are settlement sensitive. When visualising the movement of a soil wedge in the bearing capacity criterion, both vertically and horizontally, it becomes clear that by confining the soil surrounding the foundation, both the bearing capacity and settlement values improve. In this study, two sizes of spread foundation were considered: (2×4) m and (3×5) m. These represent two real problem case studies of an existing building. The foundations were analysed in terms of dimension as well as position with respect to a confining wall (i.e., sheet piles on both sides). Assuming B is the least foundation dimension, the study comprised the analyses of three distances, (0.1 B), (0.5 B) and (0.75 B), between the sheet piles and the foundations, alongside three depths of confinement: (0.5 B), (1 B) and (1.5 B). Nonlinear three-dimensional finite element analysis (ANSYS) was adopted to perform an analytical investigation of the behaviour of the two foundations considered in the case study. Results showed that confinement of the foundations reduced the overall stresses near the foundation by 65% and reduced the vertical displacement by 90%. Moreover, the most effective distance between the confinement wall and the foundation was found to be 0.5 B.
Keywords: Bearing capacity, cohesionless soils, spread footings, soil confinement, soil modelling.
929 Main Tendencies of Youth Unemployment and the Regulation Mechanisms for Decreasing Its Rate in Georgia
Authors: Nino Paresashvili, Nino Abesadze
Abstract:
The modern world faces huge challenges. Globalization has changed the socio-economic conditions of many countries. The current processes in the global environment have a different impact on countries with different cultures. However, the alleviation of poverty and the improvement of living conditions remain the basic challenge for the majority of countries, because much of the population still lives under the official threshold of poverty. It is very important to stimulate youth employment. In order to prepare young people for the labour market, it is essential to provide them with the appropriate professional skills and knowledge. It is necessary to plan efficient activities for decreasing the unemployment rate and for developing effective mechanisms for the regulation of the labour market. Such planning requires a thorough study and analysis of the existing reality, as well as the development of corresponding mechanisms. Statistical analysis of unemployment is one of the main platforms for the regulation of key labour market mechanisms. The corresponding statistical methods should be used in the study process; such methods are observation, gathering, grouping, and calculation of generalized indicators. Unemployment is one of the most severe socioeconomic problems in Georgia. According to past as well as current statistics, unemployment rates have always been the most problematic issue for policy makers to resolve. Analytical work on the above-mentioned problem will be the basis for the next sustainable steps towards solving the main problem. The results of the study showed that the career choices of young people are often not based on their inclinations, their interests and labour market demand. That is why the wrong professional orientation of young people in most cases leads to their unemployment. At the same time, it was shown that there are a number of professions in the labour market with high demand because of the deficit of the appropriate specialists. To achieve healthy competitiveness in youth employment, it is necessary to formulate regional employment programs taking into account the regional infrastructure specifications.
Keywords: Unemployment, analysis, methods, tendencies, regulation mechanisms.
928 Complex-Valued Neural Network in Image Recognition: A Study on the Effectiveness of Radial Basis Function
Authors: Anupama Pande, Vishik Goel
Abstract:
A complex-valued neural network is a neural network whose inputs and/or weights and/or thresholds and/or activation functions are complex valued. Complex-valued neural networks have been widening the scope of applications not only in electronics and informatics, but also in social systems. One of the most important applications of the complex-valued neural network is in image and vision processing. In neural networks, radial basis functions are often used for interpolation in multidimensional space. A radial basis function is a function which has built into it a distance criterion with respect to a centre. Radial basis functions have often been applied in the area of neural networks, where they may be used as a replacement for the sigmoid hidden-layer transfer characteristic in multi-layer perceptrons. This paper aims to present exhaustive results of using RBF units in a complex-valued neural network model that uses the back-propagation algorithm (called 'Complex-BP') for learning. Our experimental results demonstrate the effectiveness of a radial basis function in a complex-valued neural network for image recognition over a real-valued neural network. We have studied and reported various observations, such as the effect of learning rates, the ranges of the randomly selected initial weights, the error functions used and the number of iterations for the convergence of error, on a neural network model with RBF units. Some inherent properties of this complex back-propagation algorithm are also studied and discussed.
Keywords: Complex-valued neural network, Radial Basis Function, Image recognition.
927 Statistical Approach to Identify Stress and Biases Impairing Decision-Making in High-Risk Industry
Authors: Ph. Fauquet-Alekhine
Abstract:
Decision-making occurs several times an hour when working in high-risk industry, and an erroneous choice might have undesirable outcomes for people and the environment surrounding the industrial plant. Industrial decisions are very often made in a context of acute stress. Time pressure is a crucial stressor, sometimes leading decision makers to speed up the decision-making process and, if that is not possible, to shift to the simplest strategy. We thus found it interesting to update the characterization of the stress factors impairing decision-making at Chinon Nuclear Power Plant (France) in order to optimize decision-making contexts and/or the associated processes. The investigation was based on the analysis of reports addressing safety events over the last three years. Among 93 reports, those explicitly addressing decision-making issues were identified. Characterization of each event was undertaken in terms of three criteria: stressors, biases impairing decision making, and weaknesses of the decision-making process. The statistical analysis showed that the biases were distributed over 10 possibilities, among which the hypothesis confirmation bias was clearly salient. No significant correlation was found between the criteria. The analysis indicated that the main stressor was time pressure and highlighted an unexpected form of stressor: the trust asymmetry principle of the expert. The analysis led to the conclusion that this stressor impaired decision-making from a psychological angle rather than from a physiological angle: it induces a defensive bias of self-esteem and self-protection associated with a confirmation bias. This leads to the hypothesis that this stressor can intervene in some cases without being detected, and that other stressors of the same kind might occur without being detected too. Further investigations addressing these hypotheses are considered. The analysis also led to the conclusion that dealing with these issues implies that i) the decision-making methods are well known to the workers and automated, and ii) the decision-making tools are well known and strictly applied. Training was thus adjusted.
Keywords: Bias, expert, high risk industry, stress.
926 Characterisation and Classification of Natural Transients
Authors: Ernst D. Schmitter
Abstract:
Monitoring lightning electromagnetic pulses (sferics) and other terrestrial as well as extraterrestrial transient radiation signals is of considerable interest for practical and theoretical purposes in astro- and geophysics as well as meteorology. When managing a continuous flow of data, automation of the detection and classification process is important. Features based on a combination of wavelet and statistical methods proved efficient for the analysis and characterisation of transients and as input into a radial basis function network that is trained to discriminate transients ranging from pulse-like to wave-like.
Keywords: Transient signals, statistics, wavelets, neural networks.
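A hedged sketch of the feature-extraction idea only: decompose a transient with a discrete wavelet transform (PyWavelets) and collect simple per-level statistics that could feed a radial basis function classifier. The wavelet, level count, chosen statistics and test signals are assumptions, not those of the paper.

```python
import numpy as np
import pywt
from scipy import stats

def wavelet_statistical_features(signal, wavelet="db4", level=4):
    """Decompose a transient and collect simple statistics per wavelet level
    (energy, standard deviation, kurtosis) as a feature vector."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    feats = []
    for c in coeffs:
        feats += [np.sum(c**2), np.std(c), stats.kurtosis(c)]
    return np.array(feats)

# Synthetic "pulse-like" vs "wave-like" transients
t = np.linspace(0, 1, 1024)
pulse = np.exp(-((t - 0.5) ** 2) / 1e-4)               # sharp sferic-like pulse
wave = np.sin(2 * np.pi * 12 * t) * np.exp(-3 * t)     # decaying oscillation

f_pulse = wavelet_statistical_features(pulse)
f_wave = wavelet_statistical_features(wave)
print("pulse features:", np.round(f_pulse, 3))
print("wave  features:", np.round(f_wave, 3))
# These vectors would be the inputs to a radial basis function network
# trained to separate pulse-like from wave-like transients.
```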