Search results for: dynamic panel data models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 31443

31173 The Use of the Flat Field Panel for the On-Ground Calibration of Metis Coronagraph on Board of Solar Orbiter

Authors: C. Casini, V. Da Deppo, P. Zuppella, P. Chioetto, A. Slemer, F. Frassetto, M. Romoli, F. Landini, M. Pancrazzi, V. Andretta, E. Antonucci, A. Bemporad, M. Casti, Y. De Leo, M. Fabi, S. Fineschi, F. Frassati, C. Grimani, G. Jerse, P. Heinzel, K. Heerlein, A. Liberatore, E. Magli, G. Naletto, G. Nicolini, M.G. Pelizzo, P. Romano, C. Sasso, D. Spadaro, M. Stangalini, T. Straus, R. Susino, L. Teriaca, M. Uslenghi, A. Volpicelli

Abstract:

Solar Orbiter, launched on February 9th, 2020, is an ESA/NASA mission conceived to study the Sun. The payload is composed of 10 instruments, among which is the Metis coronagraph. A coronagraph takes images of the solar corona: its occulter element simulates a total solar eclipse. This work presents some of the results obtained in the visible light band (580-640 nm) using a flat field panel source. The flat field panel provides uniform illumination; consequently, it was used during the on-ground calibration for two purposes: evaluating the response of each pixel of the detector (linearity) and characterizing the Field of View (FoV) of the coronagraph. A major result is the verification that the FoV requirement of Metis is fulfilled. Investigations are in progress to verify that the performance measured on ground did not change after launch.
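
A minimal sketch of the per-pixel linearity check described above: several flat-field exposures are fitted pixel by pixel with a linear response, and the residual nonlinearity is mapped. The detector size, exposure times, and noise levels below are synthetic assumptions, not Metis values.

```python
import numpy as np

# Hypothetical flat-field stack: frames of a small detector illuminated by
# the flat field panel at increasing exposure times (all values synthetic).
rng = np.random.default_rng(0)
exposures = np.array([0.5, 1.0, 2.0, 4.0, 8.0])           # seconds (assumed)
gain = rng.normal(1000.0, 20.0, size=(64, 64))            # DN/s per pixel
frames = exposures[:, None, None] * gain + rng.normal(0.0, 5.0, (5, 64, 64))

# Per-pixel linear fit signal = a*t + b, vectorised over all pixels.
t = exposures
A = np.vstack([t, np.ones_like(t)]).T                     # (5, 2) design matrix
flat = frames.reshape(len(t), -1)                         # (5, n_pixels)
coef, *_ = np.linalg.lstsq(A, flat, rcond=None)           # (2, n_pixels)

# Nonlinearity: maximum relative deviation of the data from the fit.
fit = A @ coef
nonlin = np.max(np.abs(flat - fit) / fit, axis=0).reshape(64, 64)
print(f"median nonlinearity: {np.median(nonlin):.2%}")
```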

Keywords: solar orbiter, Metis, coronagraph, flat field panel, calibration, on-ground, performance

Procedia PDF Downloads 102
31172 Low-Cost IoT System for Monitoring Ground Propagation Waves due to Construction and Traffic Activities to Nearby Construction

Authors: Lan Nguyen, Kien Le Tan, Bao Nguyen Pham Gia

Abstract:

Due to their high cost, specialized industrial dynamic measurement devices are difficult for many colleges to acquire for hands-on teaching. This study builds a dynamic measurement sensor and receiver from an inexpensive Raspberry Pi 4 board, 24-bit ADC circuits, a geophone vibration sensor, and open-source embedded Python programming. The system gathers and analyzes signals for dynamic measurement, ground vibration monitoring, and structural vibration monitoring. It can wirelessly transmit data to a computer and is set up as a network of communication nodes, enabling real-time monitoring of background vibrations at various locations. The device can be utilized for a variety of dynamic measurement and monitoring tasks, including monitoring earthquake vibrations, ground vibrations from construction operations, traffic, and vibrations of building structures.
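
A self-contained sketch of the acquisition-plus-FFT step: on the actual device the sample block would come from the 24-bit ADC driver, which is stubbed out here with a synthetic signal so the sketch runs anywhere; the sampling rate and signal content are assumptions.

```python
import numpy as np

def read_geophone_block(n_samples, fs):
    """Placeholder for the 24-bit ADC driver on the Raspberry Pi 4.

    A real deployment would read from the ADC over SPI/I2C; here we
    synthesise a 12 Hz ground wave plus noise so the script runs anywhere.
    """
    t = np.arange(n_samples) / fs
    return np.sin(2 * np.pi * 12.0 * t) + 0.1 * np.random.randn(n_samples)

fs = 500.0                       # sampling rate in Hz (assumed)
x = read_geophone_block(4096, fs)

# FFT-based spectrum, as used for ground-propagation-wave monitoring.
spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))
freqs = np.fft.rfftfreq(len(x), d=1 / fs)
print(f"dominant frequency: {freqs[np.argmax(spectrum)]:.1f} Hz")
```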

Keywords: sensors, FFT, signal processing, real-time data monitoring, ground propagation wave, Python, Raspberry Pi 4

Procedia PDF Downloads 96
31171 A Study on Inverse Determination of Impact Force on a Honeycomb Composite Panel

Authors: Hamed Kalhori, Lin Ye

Abstract:

In this study, an inverse method was developed to reconstruct the magnitude and duration of impact forces exerted on a rectangular carbon fibre-epoxy composite honeycomb sandwich panel. The dynamic signals captured by piezoelectric (PZT) sensors installed on the panel remotely from the impact locations were utilized to reconstruct the impact force generated by an instrumented hammer through an extended deconvolution approach. Two discretized forms of the convolution integral are considered: the traditional one with an explicit transfer function and a modified one without an explicit transfer function. Deconvolution, usually applied to reconstruct the time history (e.g. magnitude) of a stochastic force at a defined location, is extended here to identify both the location and the magnitude of the impact force among a number of potential impact locations. It is assumed that impact forces are simultaneously exerted at all potential locations, but that the magnitude of all forces except one is zero, implying that the impact occurs at only one location. The extended deconvolution is then applied to determine the magnitude as well as the location (among the potential ones), incorporating the linear superposition of the responses resulting from an impact at each potential location. The problem can be categorized as under-determined (the number of sensors is less than that of impact locations), even-determined (the number of sensors equals that of impact locations), or over-determined (the number of sensors is greater than that of impact locations). The under-determined case studied here comprises three potential impact locations and one PZT sensor on the rectangular carbon fibre-epoxy composite honeycomb sandwich panel. Assessments are conducted to evaluate the factors affecting the precision of the reconstructed force. Truncated Singular Value Decomposition (TSVD) and Tikhonov regularization are independently applied to regularize the problem, in order to find the most suitable method for this system. The selection of the optimal value of the regularization parameter is investigated through the L-curve and Generalized Cross Validation (GCV) methods. In addition, the effect of different signal window widths on the reconstructed force is examined. It is observed that the impact force generated by the instrumented impact hammer is sensitive to the impact location on the structure, with shapes ranging from a simple half-sine to more complicated profiles. The accuracy of the reconstructed impact force is evaluated using the correlation coefficient between the reconstructed force and the actual one. Based on this criterion, it is concluded that the forces reconstructed using the extended deconvolution without an explicit transfer function, together with Tikhonov regularization, match the actual forces well in terms of magnitude and duration.
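
The core numerical step lends itself to a compact sketch: below, a synthetic impulse response and a half-sine impact stand in for the panel's measured transfer behaviour, and the force is recovered by Tikhonov-regularized deconvolution. The response model, noise level, and regularization parameter are assumptions for illustration.

```python
import numpy as np
from scipy.linalg import toeplitz

fs, n = 10_000, 400
t = np.arange(n) / fs
h = np.exp(-200 * t) * np.sin(2 * np.pi * 800 * t)        # impulse response
f_true = np.where(t < 2e-3, np.sin(np.pi * t / 2e-3), 0)  # half-sine impact

# Discretized convolution integral: y = A f, with A lower-triangular Toeplitz.
A = toeplitz(h, np.zeros(n)) / fs
y = A @ f_true + 1e-4 * np.random.randn(n)                # noisy sensor signal

# Tikhonov regularization: f = argmin ||A f - y||^2 + lam ||f||^2.
lam = 1e-6                                                # e.g. from L-curve/GCV
f_rec = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

corr = np.corrcoef(f_true, f_rec)[0, 1]
print(f"correlation coefficient with the true force: {corr:.3f}")
```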

Keywords: honeycomb composite panel, deconvolution, impact localization, force reconstruction

Procedia PDF Downloads 531
31170 An Online Mastery Learning Method Based on a Dynamic Formative Evaluation

Authors: Jeongim Kang, Moon Hee Kim, Seong Baeg Kim

Abstract:

This paper proposes a novel e-learning model based on a dynamic formative evaluation. In the existing format of e-learning, the repetitive learning required to achieve mastery causes learners to lose tension and become neglectful of their learning. The proposed dynamic formative evaluation supplements the limitations of the existing approaches. Since a repetitive learning method alone does not provide perfect feedback, this paper puts its emphasis on a dynamic formative evaluation that maximizes learning achievement. Through the dynamic formative evaluation, the instructor can refer to the evaluation results when making estimations about the learner. A flow chart of learning based on the dynamic formative evaluation is presented to demonstrate the model's effectiveness and validity.

Keywords: online learning, dynamic formative evaluation, mastery learning, repetitive learning method, learning achievement

Procedia PDF Downloads 503
31169 Behavior of the RC Slab Subjected to Impact Loading According to the DIF

Authors: Yong Jae Yu, Jae-Yeol Cho

Abstract:

In the design of structural concrete for impact loading, design or model codes often employ a dynamic increase factor (DIF) to impose the dynamic effect on the static response. Dynamic increase factors, which are obtained from laboratory material test results and commonly given as a function of strain rate only, differ considerably from each other depending on the design concept of codes such as ACI 349M-06, fib Model Code 2010, and ACI 370R-14. Because the dynamic increase factors currently adopted in the codes are too simple and limited to account for the variety of material strengths, their application in practical design is questionable. In this study, the dynamic increase factors used in the three codes were validated through finite element analysis of reinforced concrete slab elements that were tested and reported by other researchers. The test was intended to simulate a wall element of the containment building in nuclear power plants, assumed to be subjected to an impact scenario like the one the Pentagon experienced on September 11, 2001. The finite element analysis was performed using ABAQUS 6.10, and plasticity models were employed for the concrete and reinforcement. The dynamic increase factors given in the three codes were applied to the stress-strain curves of the materials. To estimate the dynamic increase factors, strain rate was adopted as a parameter. Comparison of the test and analysis was done with regard to perforation depth, maximum deflection, and surface crack area of the slab. Consequently, it was found that the DIF has so great an effect on the behavior of reinforced concrete structures that it should be selected very carefully. The result implies that the DIF should be provided in design codes in a more refined format that considers the various influencing factors.
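
As a minimal sketch of the step "DIF applied to the stress-strain curve", the snippet below scales a parabolic static concrete curve by a power-law DIF in strain rate; the power-law form, exponent, and material values are illustrative placeholders, since each of the three codes defines its own DIF expression and validity range.

```python
import numpy as np

# Illustrative power-law DIF; each code (ACI 349M-06, fib MC 2010,
# ACI 370R-14) prescribes its own expression and validity range.
def dif(strain_rate, static_rate=30e-6, alpha=0.014):
    return (strain_rate / static_rate) ** alpha

# Parabolic static stress-strain curve for concrete (f_c assumed 40 MPa).
eps = np.linspace(0, 0.002, 50)
f_c = 40.0
sigma_static = f_c * (2 * eps / 0.002 - (eps / 0.002) ** 2)

for rate in (1e-5, 1e-2, 10.0):                  # quasi-static to impact, 1/s
    sigma_dynamic = dif(rate) * sigma_static     # curve handed to the FE model
    print(f"strain rate {rate:8.0e} 1/s -> DIF = {dif(rate):.2f}")
```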

Keywords: impact, strain rate, DIF, slab elements

Procedia PDF Downloads 289
31168 Machine Learning Analysis of Student Success in Introductory Calculus Based Physics I Course

Authors: Chandra Prayaga, Aaron Wade, Lakshmi Prayaga, Gopi Shankar Mallu

Abstract:

This paper presents the use of machine learning algorithms to predict the success of students in an introductory physics course. Data consisting of 140 rows, pertaining to the performance of two batches of students, were used. The lack of sufficient data to train robust machine learning models was compensated for by generating synthetic data similar to the real data. CTGAN and CTGAN with Gaussian Copula (Gaussian) were used to generate synthetic data, with the real data as input. To check the similarity between the real data and each synthetic dataset, pair plots were made. The synthetic data were used to train machine learning models using the PyCaret package. For the CTGAN data, the AdaBoost Classifier (ADA) was found to be the best-fitting ML model, whereas the CTGAN with Gaussian Copula yielded Logistic Regression (LR) as the best model. Both models were then tested for accuracy with the real data. ROC-AUC analysis was performed for all ten classes of the target variable (grades A, A-, B+, B, B-, C+, C, C-, D, F). The ADA model with CTGAN data showed a mean AUC score of 0.4377, whereas the LR model with the Gaussian data showed a mean AUC score of 0.6149. ROC-AUC plots were obtained for each grade value separately. The LR model with Gaussian data showed consistently better AUC scores than the ADA model with CTGAN data, except for two grade values, C- and A-.
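
A sketch of the synthesize-then-model pipeline, assuming recent sdv (>=1.x) and pycaret (>=3.x) APIs; the file name, target column, and sample size are hypothetical, and GaussianCopulaSynthesizer stands in for the paper's "CTGAN with Gaussian Copula" variant.

```python
import pandas as pd
from sdv.metadata import SingleTableMetadata
from sdv.single_table import CTGANSynthesizer, GaussianCopulaSynthesizer
from pycaret.classification import setup, compare_models, predict_model

real = pd.read_csv("physics_grades.csv")       # hypothetical file name

metadata = SingleTableMetadata()
metadata.detect_from_dataframe(real)

for synth_cls in (CTGANSynthesizer, GaussianCopulaSynthesizer):
    synth = synth_cls(metadata)
    synth.fit(real)                            # learn from the 140 real rows
    synthetic = synth.sample(num_rows=2000)    # enlarged training set

    # PyCaret compares candidate classifiers on the synthetic data ...
    setup(data=synthetic, target="Grade", session_id=1)
    best = compare_models()                    # e.g. ADA or LR, as reported

    # ... and the selected model is then tested against the real data.
    print(synth_cls.__name__, predict_model(best, data=real).head())
```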

Keywords: machine learning, student success, physics course, grades, synthetic data, CTGAN, Gaussian copula CTGAN

Procedia PDF Downloads 39
31167 Approach for the Mathematical Calculation of the Damping Factor of Railway Bridges with Ballasted Track

Authors: Andreas Stollwitzer, Lara Bettinelli, Josef Fink

Abstract:

The expansion of the high-speed rail network over the past decades has resulted in new challenges for engineers, including traffic-induced resonance vibrations of railway bridges. Excessive resonance-induced, speed-dependent accelerations of railway bridges during high-speed traffic can lead to negative consequences such as fatigue symptoms, distortion of the track, destabilisation of the ballast bed, and potentially even derailment. A realistic prognosis of bridge vibrations during high-speed traffic must not only rely on the right choice of an adequate calculation model for both bridge and train but first and foremost on the use of dynamic model parameters which reflect reality appropriately. However, comparisons between measured and calculated bridge vibrations are often characterised by considerable discrepancies, with dynamic calculations overestimating the actual responses and therefore leading to uneconomical results. This gap between measurement and calculation constitutes a complex research issue and can be traced to several causes. One major cause is found in the dynamic properties of the ballasted track, more specifically in the persisting, substantial uncertainties regarding the consideration of the ballasted track (mechanical model and input parameters) in dynamic calculations. Furthermore, the discrepancy is particularly pronounced concerning the damping values of the bridge, as conservative values have to be used in the calculations due to normative specifications and a lack of knowledge. The analysis of the dynamic behaviour of the ballasted track using a large-scale test facility has been a major research topic at the Institute of Structural Engineering/Steel Construction at TU Wien in recent years. This highly specialised test facility is designed for isolated research of the ballasted track's dynamic stiffness and damping properties, independent of the bearing structure. Several mechanical models for the ballasted track, consisting of one or more continuous spring-damper elements, were developed based on the knowledge gained. These mechanical models can subsequently be integrated into bridge models for dynamic calculations. Furthermore, based on measurements at the test facility, model-dependent stiffness and damping parameters were determined for these mechanical models. As a result, realistic mechanical models of the railway bridge with different levels of detail and sufficiently precise characteristic values are available to bridge engineers. This contribution also presents a further practical application of such a bridge model: from it, determination equations for the damping factor (as Lehr's damping factor) can be derived. This approach constitutes a first method that makes the damping factor of a railway bridge calculable. A comparison of this mathematical approach with measured dynamic parameters of existing railway bridges illustrates, on the one hand, the marked deviation between normatively prescribed and in-situ measured damping factors; on the other hand, it shows that the new approach provides results close to reality and thus offers potential for minimising the discrepancy between measurement and calculation.
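
For orientation, a minimal single-degree-of-freedom sketch of Lehr's damping factor, combining a bare-structure model with a continuous spring-damper idealisation of the ballast layer; all numbers are illustrative, not the TU Wien test-facility values.

```python
import numpy as np

m = 2.0e5         # modal mass, kg
k_bridge = 8.0e7  # modal stiffness of the bare structure, N/m
c_bridge = 3.0e4  # damping of the bare structure, N*s/m
k_track = 1.5e7   # spring constant of the ballast layer model, N/m
c_track = 9.0e4   # damper constant of the ballast layer model, N*s/m

k = k_bridge + k_track
c = c_bridge + c_track
omega_0 = np.sqrt(k / m)                 # natural circular frequency
zeta = c / (2.0 * np.sqrt(k * m))        # Lehr's damping factor

print(f"f0 = {omega_0 / (2 * np.pi):.2f} Hz, zeta = {zeta:.3f}")
```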

Keywords: ballasted track, bridge dynamics, damping, model design, railway bridges

Procedia PDF Downloads 159
31166 Measurement of VIP Edge Conduction Using Vacuum Guarded Hot Plate

Authors: Bongsu Choi, Tae-Ho Song

Abstract:

Vacuum insulation panels (VIPs) are promising thermal insulators for buildings, refrigerators, LNG carriers, and so on. In general, a VIP has a thermal conductivity of 2-4 mW/m·K. However, this is the thermal conductivity measured at the center of the VIP. The total effective thermal conductivity of the VIP is larger than this value due to the edge conduction through the envelope. In this paper, the edge conduction of the VIP is examined theoretically, numerically, and experimentally. To confirm the existence of the edge conduction, a numerical analysis is performed for a simple two-dimensional VIP model, and a theoretical model is proposed to calculate the edge conductivity. Also, the edge conductivity is measured using a vacuum guarded hot plate, and the experiment is validated against the numerical analysis. The results show that the edge conductivity depends on the panel width and the Al-foil thickness. To reduce the edge conduction, it is recommended that the VIP be made as large as possible or with a thin Al-film envelope.
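
A rough numerical sketch of why edge conduction matters: treating the foil as a thermal bridge along the panel edge gives a first-order effective conductivity that grows as the panel shrinks. The bridge formula and all values below are illustrative assumptions, not the paper's validated model.

```python
lam_cop = 0.003     # center-of-panel conductivity, W/(m*K)
lam_foil = 200.0    # aluminium foil conductivity, W/(m*K)
t_foil = 8e-6       # foil thickness, m
d = 0.02            # panel thickness, m

# Foil treated as a thermal bridge along the edge: loss per unit
# perimeter length psi, added to the center-of-panel conduction.
psi = lam_foil * t_foil / d            # W/K per metre of perimeter

for width in (0.25, 0.5, 1.0):         # square panels, m
    area, perimeter = width**2, 4 * width
    lam_eff = lam_cop + psi * perimeter * d / area
    print(f"{width:.2f} m panel: {lam_cop*1e3:.1f} -> {lam_eff*1e3:.1f} mW/(m*K)")
```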

Keywords: envelope, edge conduction, thermal conductivity, vacuum insulation panel

Procedia PDF Downloads 399
31165 Bridging the Data Gap for Sexism Detection in Twitter: A Semi-Supervised Approach

Authors: Adeep Hande, Shubham Agarwal

Abstract:

This paper presents a study on identifying sexism in online texts using various state-of-the-art deep learning models based on BERT. We experimented with different feature sets and model architectures and evaluated their performance using precision, recall, F1 score, and accuracy metrics. We also explored the use of the pseudolabeling technique to improve model performance. Our experiments show that the best-performing models were based on BERT, with the multilingual variant achieving an F1 score of 0.83. Furthermore, pseudolabeling significantly improved the performance of the BERT-based models, which achieved the best overall results with it. Our findings suggest that BERT-based models with pseudolabeling hold great promise for identifying sexism in online texts with high accuracy.
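
The pseudolabeling loop itself is simple, as the sketch below shows: train on the labeled set, pseudolabel the unlabeled texts the model is confident about, and retrain on the union. A TF-IDF plus logistic regression classifier and placeholder texts stand in for the paper's BERT models and data.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

labeled_texts = ["example sexist post", "example neutral post"] * 10
labels = np.array([1, 0] * 10)
unlabeled_texts = ["unlabeled post one", "unlabeled post two"] * 50

vec = TfidfVectorizer()
X = vec.fit_transform(labeled_texts + unlabeled_texts)
X_lab, X_unlab = X[: len(labeled_texts)], X[len(labeled_texts):]

clf = LogisticRegression().fit(X_lab, labels)

# Keep only confident predictions as pseudolabels, then retrain.
proba = clf.predict_proba(X_unlab)
confident = proba.max(axis=1) >= 0.9
X_aug = np.vstack([X_lab.toarray(), X_unlab[confident].toarray()])
y_aug = np.concatenate([labels, proba[confident].argmax(axis=1)])
clf = LogisticRegression().fit(X_aug, y_aug)
```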

Keywords: large language models, semi-supervised learning, sexism detection, data sparsity

Procedia PDF Downloads 66
31164 Hybrid Equity Warrants Pricing Formulation under Stochastic Dynamics

Authors: Teh Raihana Nazirah Roslan, Siti Zulaiha Ibrahim, Sharmila Karim

Abstract:

A warrant is a financial contract that confers the right, but not the obligation, to buy or sell a security at a certain price before expiration. The standard procedure of valuing equity warrants using call option pricing models such as the Black-Scholes model has been shown to contain many flaws, such as the assumptions of a constant interest rate and constant volatility. In fact, existing alternative models were found to focus more on demonstrating pricing techniques than on empirical testing. Therefore, a mathematical model for pricing and analyzing equity warrants which comprises stochastic interest rates and stochastic volatility is essential to incorporate the dynamic relationships between the identified variables and reflect the real market. Here, the aim is to develop dynamic pricing formulations for hybrid equity warrants by incorporating stochastic interest rates from the Cox-Ingersoll-Ross (CIR) model, along with stochastic volatility from the Heston model. The development of the model involves the derivation of the stochastic differential equations that govern the model dynamics. The resulting equations, which involve a Cauchy problem and heat equations, are then solved using partial differential equation approaches. The analytical pricing formulas obtained in this study comply with the form of the analytical expressions embedded in the Black-Scholes model and other existing pricing models for equity warrants. This makes the proposed formulas practical for comparison purposes and further empirical study.
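
As a simulation counterpart to the analytical formulas, here is a Monte Carlo sketch of a European-style warrant under Heston variance and a CIR short rate (Euler scheme with full truncation, and zero correlation between the drivers for brevity); all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
S0, K, T = 100.0, 95.0, 1.0
n_paths, n_steps = 50_000, 252
dt = T / n_steps

kappa_v, theta_v, xi_v, v0 = 2.0, 0.04, 0.3, 0.04     # Heston variance
kappa_r, theta_r, sigma_r, r0 = 1.5, 0.03, 0.1, 0.03  # CIR short rate

S = np.full(n_paths, S0)
v = np.full(n_paths, v0)
r = np.full(n_paths, r0)
disc = np.zeros(n_paths)               # integral of r dt, for discounting

for _ in range(n_steps):
    z1, z2, z3 = rng.standard_normal((3, n_paths))
    vp, rp = np.maximum(v, 0.0), np.maximum(r, 0.0)   # full truncation
    S *= np.exp((rp - 0.5 * vp) * dt + np.sqrt(vp * dt) * z1)
    v += kappa_v * (theta_v - vp) * dt + xi_v * np.sqrt(vp * dt) * z2
    disc += rp * dt
    r += kappa_r * (theta_r - rp) * dt + sigma_r * np.sqrt(rp * dt) * z3

price = np.mean(np.exp(-disc) * np.maximum(S - K, 0.0))
print(f"simulated warrant (call) value: {price:.3f}")
```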

Keywords: Cox-Ingersoll-Ross model, equity warrants, Heston model, hybrid models, stochastic

Procedia PDF Downloads 126
31163 Financial Regulations and Insolvency Risk: Empirical Evidence from Commercial Banks of Pakistan

Authors: Shumaila Zeb

Abstract:

This study investigates the insolvency risk of commercial banks of Pakistan. Furthermore, it empirically estimates the effect of already implemented financial regulations on the insolvency risk of banks. To carry out the empirical analysis, a balanced bank-level panel dataset covering the period 2008-2016 is used. The Z-score is used for calculating the insolvency risk of each bank, and panel regression is used to investigate the relationship between financial regulations and the insolvency risk of banks. The results reveal that the financial regulations enforced by the State Bank of Pakistan have significant impacts on the insolvency risk of banks. The results further indicate that the loan ratio and reserve ratio are positively and significantly related to the insolvency risk of banks.
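
A short sketch of the Z-score insolvency measure used above, Z = (mean ROA + mean equity/assets) / sd(ROA), computed per bank over a panel; a higher Z means lower insolvency risk. The two-bank panel below is fabricated for illustration.

```python
import pandas as pd

df = pd.DataFrame({
    "bank": ["A"] * 4 + ["B"] * 4,
    "roa": [0.012, 0.010, 0.015, 0.008, 0.020, 0.022, 0.018, 0.025],
    "equity_to_assets": [0.08, 0.09, 0.085, 0.08, 0.11, 0.10, 0.12, 0.11],
})

stats = df.groupby("bank").agg(mean_roa=("roa", "mean"),
                               sd_roa=("roa", "std"),
                               mean_ea=("equity_to_assets", "mean"))
stats["z_score"] = (stats["mean_roa"] + stats["mean_ea"]) / stats["sd_roa"]
print(stats[["z_score"]])
```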

Keywords: insolvency risk, Z-score, financial regulations, banks

Procedia PDF Downloads 193
31162 The Channels through Which Energy Tax Can Affect Economic Growth: Panel Data Analysis

Authors: Mahmoud Hassan, Walid Oueslati, Damien Rousseliere

Abstract:

This paper explores the channels through which energy taxes may affect economic growth, using a simultaneous equations model for a balanced panel of 31 OECD countries over the 1994-2013 period. The empirical results reveal a negative impact of energy taxes on physical investment in the short and long term, an impact that is aggravated by the existence and level of public debt. Additionally, the results show that energy taxes have an indirect effect on human capital through their impact on polluting emissions: taxes on energy products are able to reduce both the flow and the stock of polluting emissions that harm human capital skills in the short and long term. Finally, we find that energy taxes can encourage eco-innovation in the short and long term.

Keywords: energy taxes, economic growth, public debt, simultaneous equations model, multiple imputation

Procedia PDF Downloads 229
31161 Static and Dynamic Tailings Dam Monitoring with Accelerometers

Authors: Cristiana Ortigão, Antonio Couto, Thiago Gabriel

Abstract:

In the wake of the failure of Samarco's Fundão dam in 2015, followed by Vale's Brumadinho disaster in 2019, the Brazilian National Mining Agency started a comprehensive dam safety programme to rank dam safety risks and establish monitoring and analysis procedures. This paper focuses on the use of accelerometers for static and dynamic applications. Static applications may employ accelerometers as tiltmeters, as in an example shown later in this paper. Dynamic monitoring of a structure with accelerometers yields its dynamic signature; this technique has also been used successfully in Brazil, and the paper gives an example for a tailings dam.
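
In the static (tiltmeter) role, the at-rest gravity components of a triaxial accelerometer give the tilt angles directly, as in this minimal sketch with made-up readings.

```python
import numpy as np

# Readings from a triaxial accelerometer at rest, in units of g (made up).
ax, ay, az = 0.020, -0.015, 0.9996

pitch = np.degrees(np.arctan2(ax, np.sqrt(ay**2 + az**2)))
roll = np.degrees(np.arctan2(ay, np.sqrt(ax**2 + az**2)))
print(f"pitch = {pitch:.3f} deg, roll = {roll:.3f} deg")
```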

Keywords: instrumentation, dynamic, monitoring, tailings, dams, tiltmeters, automation

Procedia PDF Downloads 136
31160 Practical Guide To Design Dynamic Block-Type Shallow Foundation Supporting Vibrating Machine

Authors: Dodi Ikhsanshaleh

Abstract:

When subjected to a dynamic load, a foundation oscillates in a way that depends on the soil behaviour, the geometry and inertia of the foundation, and the dynamic excitation. A practical guideline for the analysis of a block-type foundation excited by the dynamic load of a vibrating machine is presented. The analysis uses the Lumped Mass Parameter Method to express dynamic soil properties such as stiffness and damping. Numerical examples are given for the design of a block-type foundation supporting a gas turbine compressor, an important equipment package in a gas processing plant.
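
A minimal sketch of the lumped-parameter check for vertical vibration, using Lysmer's analog for a circular footing on an elastic half-space to obtain the spring and damper constants; machine and soil values are illustrative.

```python
import numpy as np

G, nu, rho = 40e6, 0.35, 1800.0   # soil shear modulus (Pa), Poisson ratio, kg/m^3
r0 = 1.8                          # equivalent footing radius, m
m = 60_000.0                      # machine plus foundation mass, kg
F0, f_op = 25e3, 30.0             # unbalanced force (N) at operating frequency (Hz)

k_z = 4 * G * r0 / (1 - nu)                       # static vertical stiffness
c_z = 3.4 * r0**2 * np.sqrt(rho * G) / (1 - nu)   # radiation damping

omega = 2 * np.pi * f_op
zeta = c_z / (2 * np.sqrt(k_z * m))
ratio = omega / np.sqrt(k_z / m)                  # frequency ratio
amp = (F0 / k_z) / np.sqrt((1 - ratio**2) ** 2 + (2 * zeta * ratio) ** 2)
print(f"natural frequency: {np.sqrt(k_z / m) / (2 * np.pi):.1f} Hz, "
      f"vibration amplitude: {amp * 1e6:.1f} microns")
```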

Keywords: block foundation, dynamic load, lumped mass parameter

Procedia PDF Downloads 485
31159 Joint Modeling of Longitudinal and Time-To-Event Data with Latent Variable

Authors: Xinyuan Y. Song, Kai Kang

Abstract:

Joint models for analyzing longitudinal and survival data are widely used to investigate the relationship between a failure time process and time-variant predictors. A common assumption in conventional joint models in the survival analysis literature is that all predictors are observable. However, this assumption may not always hold, because unobservable traits, namely latent variables, which are only indirectly observable and should be measured through multiple observed variables, are commonly encountered in medical, behavioral, and financial research settings. In this study, a joint modeling approach to deal with this feature is proposed. The proposed model comprises three parts. The first part is a dynamic factor analysis model for characterizing latent variables through multiple observed indicators over time. The second part is a random coefficient trajectory model for describing the individual trajectories of latent variables. The third part is a proportional hazards model for examining the effects of time-invariant predictors and the longitudinal trajectories of time-variant latent risk factors on the hazards of interest. A Bayesian approach coupled with a Markov chain Monte Carlo algorithm is used to perform statistical inference. An application of the proposed joint model to a study from the Alzheimer's Disease Neuroimaging Initiative is presented.

Keywords: Bayesian analysis, joint model, longitudinal data, time-to-event data

Procedia PDF Downloads 139
31158 Review of Downscaling Methods in Climate Change and Their Role in Hydrological Studies

Authors: Nishi Bhuvandas, P. V. Timbadiya, P. L. Patel, P. D. Porey

Abstract:

Recently perceived climate variability raises concerns about unprecedented hydrological phenomena and extremes. The distribution and circulation of the Earth's waters become increasingly difficult to determine because of the additional uncertainty related to anthropogenic emissions. According to the sixth Intergovernmental Panel on Climate Change (IPCC) Technical Paper on Climate Change and Water, changes in the large-scale hydrological cycle have been related to an increase in the observed temperature over several decades. Although much previous research on the effect of climate change on hydrology provides a general picture of possible global hydrological change, new tools and frameworks for modelling hydrological series with nonstationary characteristics at finer scales are required for assessing climate change impacts. Among the downscaling techniques, dynamic downscaling is usually based on the use of Regional Climate Models (RCMs), which generate finer-resolution output based on atmospheric physics over a region, using General Circulation Model (GCM) fields as boundary conditions. However, RCMs are not expected to capture the observed spatial precipitation extremes at a fine cell scale or at a basin scale. Statistical downscaling derives a statistical or empirical relationship between the variables simulated by the GCMs, called predictors, and station-scale hydrologic variables, called predictands. The main focus of the paper is on the need for statistical downscaling techniques for the projection of local hydrometeorological variables under climate change scenarios. The projections can then serve as input to various hydrologic models to obtain streamflow, evapotranspiration, soil moisture, and other hydrological variables of interest.
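
A toy sketch of the statistical downscaling idea just described: fit an empirical predictor-predictand relation over a calibration period, then apply it to scenario-period GCM predictors. The data, predictor set, and linear model are stand-in assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
n_days = 2000
predictors = rng.normal(size=(n_days, 3))     # e.g. MSLP, humidity, geopotential
station_precip = (1.5 * predictors[:, 0] - 0.8 * predictors[:, 1]
                  + rng.normal(0, 0.5, n_days)).clip(min=0)

# Calibrate the empirical predictor-predictand relationship.
model = LinearRegression().fit(predictors, station_precip)

# Apply it to scenario-period GCM output to get local projections.
gcm_future = rng.normal(loc=0.3, size=(365, 3))
local_projection = model.predict(gcm_future).clip(min=0)
print(f"projected mean daily value: {local_projection.mean():.2f}")
```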

Keywords: climate change, downscaling, GCM, RCM

Procedia PDF Downloads 399
31157 Use of Predictive Food Microbiology to Determine the Shelf-Life of Foods

Authors: Fatih Tarlak

Abstract:

Predictive microbiology can be considered an important field of food microbiology, in which predictive models are used to describe microbial growth in different food products. Predictive models estimate the growth of microorganisms quickly, efficiently, and cost-effectively compared to traditional methods of enumeration, which are laborious, expensive, and time-consuming. The mathematical models used in predictive microbiology are mainly categorised as primary and secondary models. The primary models are mathematical equations that define the growth data as a function of time under a constant environmental condition. The secondary models describe the effects of environmental factors, such as temperature, pH, and water activity (aw), on the parameters of the primary models, including the maximum specific growth rate and the lag phase duration, which are the most critical growth kinetic parameters. The combination of primary and secondary models provides valuable information to set limits for the quantitative detection of microbial spoilage and to assess product shelf-life.
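
A compact sketch of the primary/secondary pairing described above, using the modified Gompertz equation as the primary model and a Ratkowsky square-root model as the secondary, temperature-dependent model; all parameter values are illustrative, not fitted to any product.

```python
import numpy as np

def ratkowsky_mu_max(T, b=0.025, T_min=-2.0):
    """Secondary model: sqrt(mu_max) = b * (T - T_min)."""
    return (b * (T - T_min)) ** 2

def gompertz_logN(t, logN0, logNmax, mu_max, lag):
    """Primary (modified Gompertz) model for log10 counts at time t (h)."""
    A = logNmax - logN0
    return logN0 + A * np.exp(-np.exp(mu_max * np.e / A * (lag - t) + 1))

t = np.linspace(0, 120, 5)                    # storage time, hours
for T in (4.0, 10.0):                         # storage temperatures, deg C
    mu = ratkowsky_mu_max(T)
    print(f"{T} C:", np.round(gompertz_logN(t, 3, 9, mu, lag=10), 2))
```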

Keywords: shelf-life, growth model, predictive microbiology, simulation

Procedia PDF Downloads 203
31156 Analysis of Potential Flow around Two-Dimensional Body by Surface Panel Method and Vortex Lattice Method

Authors: M. Abir Hossain, M. Shahjada Tarafder

Abstract:

This paper deals with the analysis of potential flow past a two-dimensional body by discretizing the body into panels and applying the Laplace equation to each panel. The Laplace equation was solved at each panel by applying the boundary conditions, formulating the problem mathematically and converting it into a computer-solvable form. The Kutta condition was applied at both the leading and trailing edges to check whether the condition is satisfied. Another approach applied in the analysis is the Vortex Lattice Method (VLM), in which a vortex ring is considered at each control point. Using the Biot-Savart law, the strength at each control point is calculated, and hence the pressure differentials are obtained. For comparison of the analytic results with experimental results, different NACA-section hydrofoils are used. The analytic results for NACA 0012 and NACA 0015 are compared with the experimental results of Abbott and von Doenhoff and show significant conformity.
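
The building block of the vortex-lattice computation is the induced velocity of a vortex element, the two-dimensional form of the Biot-Savart law. The sketch below evaluates it for the classic lumped-vortex placement (vortex at the quarter chord, control point at the three-quarter chord), a standard arrangement assumed here for illustration; assembling these influences at every control point yields the linear system for the unknown vortex strengths.

```python
import numpy as np

def induced_velocity(xc, yc, xv, yv, gamma):
    """Velocity induced at (xc, yc) by a 2D point vortex of strength gamma."""
    dx, dy = xc - xv, yc - yv
    r2 = dx**2 + dy**2
    u = gamma / (2 * np.pi) * dy / r2
    v = -gamma / (2 * np.pi) * dx / r2
    return u, v

# Influence coefficient example: unit-strength vortex at the quarter chord,
# evaluated at the three-quarter-chord control point of a unit-chord panel.
print(induced_velocity(0.75, 0.0, 0.25, 0.0, 1.0))
```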

Keywords: Kutta condition, Law of Biot-Savart, pressure differentials, potential flow, vortex lattice method

Procedia PDF Downloads 188
31155 Data-Driven Dynamic Overbooking Model for Tour Operators

Authors: Kannapha Amaruchkul

Abstract:

We formulate a dynamic overbooking model for a tour operator, in which most reservations contain at least two people. The cancellation rate and the timing of cancellation may depend on the group size. We propose two overbooking policies, namely economic- and service-based. In the economic-based policy, we minimize the expected cost of oversold and underused capacity, whereas in the service-based policy, we ensure that the probability of an oversold situation does not exceed a pre-specified threshold. To illustrate the applicability of our approach, we use tour package data from 2016-2018 from a tour operator in Thailand to build a data-driven robust optimization model, and we tested the proposed overbooking policy on 2019 data. We also compare the data-driven approach to the conventional approach of fitting the data to a probability distribution.
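
A minimal sketch of the service-based policy: find the largest booking level for which the probability of more shows than capacity stays below the threshold. A single independent show-up probability is assumed here, whereas the paper lets cancellation behaviour depend on group size and timing.

```python
from scipy.stats import binom

capacity = 40       # tour capacity
p_show = 0.85       # probability a booked seat shows up (assumed)
threshold = 0.05    # allowed probability of an oversold situation

# Increase the booking limit while the oversell probability stays acceptable.
booked = capacity
while binom.sf(capacity, booked + 1, p_show) <= threshold:
    booked += 1
print(f"booking limit: {booked} seats for a capacity of {capacity}")
```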

Keywords: applied stochastic model, data-driven robust optimization, overbooking, revenue management, tour operator

Procedia PDF Downloads 128
31154 Model Order Reduction of Complex Airframes Using Component Mode Synthesis for Dynamic Aeroelasticity Load Analysis

Authors: Paul V. Thomas, Mostafa S. A. Elsayed, Denis Walch

Abstract:

Airframe structural optimization at different design stages results in new mass and stiffness distributions which modify the critical design loads envelope. Determination of aircraft critical loads is an extensive analysis procedure which involves simulating the aircraft at thousands of load cases, as defined in the certification requirements. It is computationally prohibitive to use a Global Finite Element Model (GFEM) for the load analysis; hence, reduced order structural models are required which closely represent the dynamic characteristics of the GFEM. This paper presents the implementation of the Component Mode Synthesis (CMS) method for the generation of high-fidelity Reduced Order Models (ROMs) of complex airframes. Here, a sub-structuring technique is used to divide the complex higher-order airframe dynamical system into a set of subsystems. Each subsystem is reduced to fewer degrees of freedom using matrix projection onto a carefully chosen reduced order basis subspace. The reduced structural matrices are assembled for all the subsystems through interface coupling, and the dynamic response of the total system is solved. The CMS method is employed to develop the ROM of a Bombardier Aerospace business jet, which is coupled with an aerodynamic model for dynamic aeroelasticity loads analysis under gust turbulence. Another set of dynamic aeroelastic loads is also generated employing a stick model of the same aircraft. The stick model is the reduced order modelling methodology commonly used in the aerospace industry, based on stiffness generation by unitary load application. The aeroelastic loads extracted from both models are compared against those generated employing the GFEM. Critical loads, modal participation factors, and modal characteristics of the different ROMs are investigated and compared against those of the GFEM. The results obtained show that the ROM generated using the Craig-Bampton CMS reduction process has superior dynamic characteristics compared to the stick model.
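
For reference, a minimal sketch of the Craig-Bampton reduction for a single substructure: interior degrees of freedom are condensed onto constraint modes plus a few fixed-interface normal modes, and the stiffness and mass matrices are projected onto that basis. Random symmetric positive-definite matrices stand in for a real substructure.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
n, nb, n_modes = 20, 4, 3              # total DOFs, boundary DOFs, kept modes
A = rng.normal(size=(n, n))
K = A @ A.T + n * np.eye(n)            # stand-in stiffness matrix (SPD)
M = np.eye(n)                          # stand-in mass matrix

b, i = slice(0, nb), slice(nb, n)

# Constraint modes: static interior response to unit boundary displacements.
Psi = -np.linalg.solve(K[i, i], K[i, b])

# Fixed-interface normal modes: generalized eigenproblem of the interior.
w2, Phi = eigh(K[i, i], M[i, i])
Phi = Phi[:, :n_modes]                 # keep the lowest few modes

# Craig-Bampton basis: (boundary DOFs, modal coordinates) -> full DOFs.
T = np.block([[np.eye(nb), np.zeros((nb, n_modes))],
              [Psi, Phi]])

K_red, M_red = T.T @ K @ T, T.T @ M @ T
print("reduced matrices:", K_red.shape)   # (nb + n_modes) x (nb + n_modes)
```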

Keywords: component mode synthesis, craig bampton reduction method, dynamic aeroelasticity analysis, model order reduction

Procedia PDF Downloads 204
31153 Assessing Performance of Data Augmentation Techniques for a Convolutional Network Trained for Recognizing Humans in Drone Images

Authors: Masood Varshosaz, Kamyar Hasanpour

Abstract:

In recent years, there has been growing interest in recognizing humans in drone images for post-disaster search and rescue operations. Deep learning algorithms have shown great promise in this area, but they often require large amounts of labeled data to train the models. To keep the data acquisition cost low, augmentation techniques can be used to create additional data from existing images. Many such techniques can generate variations of an original image to improve the performance of deep learning algorithms. While data augmentation is generally assumed to improve the accuracy and robustness of models, it is important to ensure that the performance gains are not outweighed by the additional computational cost or complexity of implementing the techniques. To this end, it is important to evaluate the impact of data augmentation on the performance of the deep learning models. In this paper, we evaluated the currently available 2D data augmentation techniques on a standard convolutional network trained for recognizing humans in drone images. The techniques include rotation, scaling, random cropping, flipping, shifting, and their combination. The results showed that the augmented models perform 1-3% better than the base network. However, as the augmented images only contain the human parts already visible in the original images, a new data augmentation approach is needed to include the invisible parts of the human body. Thus, we suggest a new method that employs simulated 3D human models to generate new data for training the network.
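
The evaluated 2D augmentations compose naturally into a single pipeline, for example with torchvision; the specific ranges and probabilities below are assumptions, not the ones tuned in the paper.

```python
import torchvision.transforms as transforms

# Rotation, scaling/cropping, flipping, and shifting combined in one pipeline.
augment = transforms.Compose([
    transforms.RandomRotation(degrees=15),
    transforms.RandomResizedCrop(size=224, scale=(0.8, 1.0)),  # scale + crop
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomAffine(degrees=0, translate=(0.1, 0.1)),  # shifting
    transforms.ToTensor(),
])

# Applied to each PIL drone image before it reaches the network, e.g.:
# batch = torch.stack([augment(img) for img in pil_images])
```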

Keywords: human recognition, deep learning, drones, disaster mitigation

Procedia PDF Downloads 87
31152 Agriculture Yield Prediction Using Predictive Analytic Techniques

Authors: Nagini Sabbineni, Rajini T. V. Kanth, B. V. Kiranmayee

Abstract:

India's economy primarily depends on agriculture yield growth and allied agro-industry products. Agriculture yield prediction is one of the toughest tasks for agricultural departments across the globe, as the yield depends on various factors. Particularly in countries like India, the majority of agriculture growth depends on rainwater, which is highly unpredictable. Agriculture growth depends on different parameters, namely water, nitrogen, weather, soil characteristics, crop rotation, soil moisture, surface temperature, and rainwater. In our paper, extensive explorative data analysis is done, and various predictive models are designed. Further, various regression models, namely linear, multiple linear, and non-linear models, are tested for effective prediction or forecasting of the agriculture yield for various crops in the Andhra Pradesh and Telangana states.
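
A small sketch of the regression comparison: cross-validated R-squared for a linear and a non-linear regressor over synthetic stand-ins for the rainfall, nutrient, soil, and temperature features listed above.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 4))          # rainfall, nitrogen, soil moisture, temp
y = 2.0 * X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.3, 300)

for model in (LinearRegression(), RandomForestRegressor(random_state=0)):
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{type(model).__name__}: mean CV R^2 = {r2:.3f}")
```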

Keywords: agriculture yield growth, agriculture yield prediction, explorative data analysis, predictive models, regression models

Procedia PDF Downloads 308
31151 A Study of Life Expectancy in an Urban Set up of North-Eastern India under Dynamic Consideration Incorporating Cause Specific Mortality

Authors: Mompi Sharma, Labananda Choudhury, Anjana M. Saikia

Abstract:

Background: The period life table is entirely based on the assumption that the mortality patterns of the population existing in the given period will persist throughout their lives. However, it has been observed that the mortality rate continues to decline. As such, if the rates of change of the probabilities of death are considered in a life table, we get a dynamic life table. Although mortality has been declining in all parts of India, one may be interested to know whether these declines have been greater in an urban area of an underdeveloped region like North-Eastern India. So, an attempt has been made to study the mortality pattern and life expectancy under a dynamic scenario in Guwahati, the biggest city of North-Eastern India. Further, if the probabilities of death change, their constituent cause-specific probabilities are also likely to change. Since cardiovascular disease (CVD) is the leading cause of death in Guwahati, an attempt has also been made to formulate a dynamic cause-specific death ratio and dynamic probabilities of death due to CVD. Objectives: To construct a dynamic life table for Guwahati for the year 2011, based on the rates of change of the probabilities of death over the previous 10 and 25 years (i.e., since 2001 and 1986), and to compute the corresponding dynamic cause-specific death ratio and probabilities of death due to CVD. Methodology and Data: The study uses the method proposed by Denton and Spencer (2011) to construct the dynamic life table for Guwahati. Data from the Office of Birth and Death, Guwahati Municipal Corporation, for the years 1986, 2001, and 2011 are used. Population data are taken from the 2001 and 2011 Indian censuses, while the population for 1986 has been estimated. Also, the cause-of-death ratio and probabilities of death due to CVD are computed for the aforementioned years and then extended to the dynamic setup for the year 2011 by considering the rates of change of those probabilities over the previous 10 and 25 years. Findings: The dynamic life expectancy at birth (LEB) for Guwahati is found to be higher than the corresponding value in the period table by 3.28 (5.65) years for males and 8.30 (6.37) years for females over the period of 10 (25) years. The life expectancies under dynamic consideration in all the other age groups are also higher than the usual life expectancies, possibly due to the gradual decline in the probabilities of death over 1986-2011. Further, a continuous decline has also been observed in the death ratio due to CVD, along with the cause-specific probabilities of death, for both sexes. As a consequence, the dynamic probability of death due to CVD is found to be lower than under the usual procedure. Conclusion: Since the incorporation of changing mortality rates in the period life table for Guwahati results in higher life expectancies and lower probabilities of death due to CVD, it possibly brings out the real mortality situation prevailing in the city.
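
A deliberately simplified sketch of the dynamic-life-table idea: life expectancy at birth is computed from period death probabilities q(x), and again after deflating each age group's q(x) by an observed annual rate of decline by the time the cohort reaches that age. The q(x) schedule, decline rate, and five-year bookkeeping are toy assumptions, far cruder than the Denton and Spencer method cited above.

```python
import numpy as np

ages = np.arange(0, 85, 5)                           # five-year age groups
q = np.clip(0.002 * np.exp(0.07 * ages), 0, 1)       # period q per age group
decline = 0.01                                       # assumed annual decline

def e0(qx, ages, annual_decline=0.0):
    # In the dynamic table, age group x is reached after `ages` years,
    # so its q is deflated by (1 - decline) ** years.
    q_dyn = qx * (1 - annual_decline) ** ages
    survival = np.cumprod(np.concatenate(([1.0], 1 - q_dyn[:-1])))
    return 5 * survival.sum()                        # crude person-years

print(f"period LEB : {e0(q, ages):.1f} years")
print(f"dynamic LEB: {e0(q, ages, decline):.1f} years")
```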

Keywords: cause specific death ratio, cause specific probabilities of death, dynamic, life expectancy

Procedia PDF Downloads 230
31150 Geopotential Models Evaluation in Algeria Using Stochastic Method, GPS/Leveling and Topographic Data

Authors: M. A. Meslem

Abstract:

For precise geoid determination, we use a reference field to subtract the long and medium wavelengths of the gravity field from observation data when applying the remove-compute-restore technique. Therefore, a comparison study between the considered models should be made in order to select the optimal reference gravity field to be used. In this context, two recent global geopotential models have been selected to perform this comparison study over Northern Algeria: the Earth Gravitational Model (EGM2008) and the Global Gravity Model (GECO), the latter conceived as a combination of the former with the anomalous potential derived from a GOCE satellite-only global model. Free-air gravity anomalies in the area under study have been used to compute residual data using both gravity field models, with a Digital Terrain Model (DTM) used to subtract the residual terrain effect from the gravity observations. The residual data were used to generate local empirical covariance functions, which were fitted to a closed form in order to compare their statistical behavior in the two cases. Finally, height anomalies were computed from both geopotential models and compared to a set of GPS-levelled points on benchmarks using least squares adjustment. The results, described in detail in this paper, point out a slight overall advantage of the GECO global model through error degree variance comparison and ground-truth evaluation.

Keywords: quasigeoid, gravity anomalies, covariance, GGM

Procedia PDF Downloads 131
31149 Event Driven Dynamic Clustering and Data Aggregation in Wireless Sensor Network

Authors: Ashok V. Sutagundar, Sunilkumar S. Manvi

Abstract:

Energy, delay, and bandwidth are the prime issues in wireless sensor networks (WSNs). Energy usage optimization and efficient bandwidth utilization are important issues in a WSN, and event-triggered data aggregation facilitates such optimization for the event-affected area. Reliable delivery of critical information to the sink node is another major challenge. To tackle these issues, we propose an event-driven dynamic clustering and data aggregation scheme for WSNs that enhances the lifetime of the network by minimizing redundant data transmission. The proposed scheme operates as follows: (1) Whenever an event is triggered, the event-triggered node selects the cluster head. (2) The cluster head gathers data from the sensor nodes within the cluster. (3) The cluster head identifies and classifies the events from the collected data using a Bayesian classifier. (4) Aggregation of the data is done using a statistical method. (5) The cluster head discovers the paths to the sink node using residual energy, path distance, and bandwidth (a path-scoring sketch follows below). (6) If the aggregated data is critical, the cluster head sends it over multiple paths for reliable data communication. (7) Otherwise, the aggregated data is transmitted towards the sink node over the single path having the most bandwidth and residual energy. The performance of the scheme is validated for various WSN scenarios to evaluate the effectiveness of the proposed approach in terms of aggregation time, cluster formation time, and energy consumed for aggregation.
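
A minimal sketch of steps (5)-(7): candidate paths are scored from residual energy, distance, and bandwidth, with the best single path used for ordinary data and the top paths used as a multipath for critical data. The linear scoring form and the weights are assumptions for illustration.

```python
paths = [
    {"id": 1, "residual_energy": 0.8, "distance": 120.0, "bandwidth": 250.0},
    {"id": 2, "residual_energy": 0.6, "distance": 90.0, "bandwidth": 500.0},
    {"id": 3, "residual_energy": 0.9, "distance": 200.0, "bandwidth": 250.0},
]

def score(p, w_e=0.5, w_b=0.3, w_d=0.2):
    """Weighted path score: reward energy and bandwidth, penalise distance."""
    return (w_e * p["residual_energy"]
            + w_b * p["bandwidth"] / 500.0      # normalised to max bandwidth
            - w_d * p["distance"] / 200.0)      # normalised to max distance

ranked = sorted(paths, key=score, reverse=True)
best = ranked[0]                 # single path for non-critical data
multipath = ranked[:2]           # multiple paths for critical data
print("best path:", best["id"], "| multipath:", [p["id"] for p in multipath])
```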

Keywords: wireless sensor network, dynamic clustering, data aggregation, wireless communication

Procedia PDF Downloads 445
31148 Effect of Drag Coefficient Models concerning Global Air-Sea Momentum Flux in Broad Wind Range including Extreme Wind Speeds

Authors: Takeshi Takemoto, Naoya Suzuki, Naohisa Takagaki, Satoru Komori, Masako Terui, George Truscott

Abstract:

The drag coefficient is an important parameter for correctly estimating the air-sea momentum flux. However, its parameterization has not been established, owing to the variation in the field data. Instead, a number of drag coefficient model formulae have been proposed, although almost none of these models address the extreme wind speed range. For such models, it is unclear how the drag coefficient changes as the wind speed increases into the extreme range. In this study, we investigated the effect of the drag coefficient models on the air-sea momentum flux in the extreme wind range on a global scale by comparing two different drag coefficient models; notably, one model does not address the extreme wind speed range while the other considers it. We found that the difference between the models in the annual global air-sea momentum flux was small, because the occurrence frequency of strong winds with a wind speed of 20 m/s or more was approximately 1%. However, we also found that the difference between the models appeared in the middle latitudes, where the annual mean air-sea momentum flux is large and the occurrence frequency of strong winds is high. In addition, the estimated data showed that the difference between the models in the drag coefficient is large in the extreme wind speed range, the largest difference reaching 23% at wind speeds of 35 m/s or more. These results clearly show that the difference between the two drag coefficient models has a significant impact on the estimation of the regional air-sea momentum flux in an extreme wind speed range such as that seen in a tropical cyclone environment. Furthermore, we estimated the air-sea momentum flux using several kinds of drag coefficient models. We also provide data from an observation tower and results from CFD (Computational Fluid Dynamics) concerning the influence of the wind flow at and around the site.
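
To make the contrast concrete, here is a sketch comparing an unbounded linear drag-coefficient formula with a capped variant, and the resulting momentum flux tau = rho_air * Cd * U^2; the coefficients are illustrative stand-ins, not the two models actually compared in the paper.

```python
import numpy as np

rho_air = 1.2  # kg/m^3

def cd_model_a(U):
    return (0.49 + 0.065 * U) * 1e-3                     # unbounded growth

def cd_model_b(U):
    return np.minimum(0.49 + 0.065 * U, 2.3) * 1e-3      # capped in extremes

for U in (10.0, 20.0, 35.0, 50.0):                       # wind speed, m/s
    tau_a = rho_air * cd_model_a(U) * U**2
    tau_b = rho_air * cd_model_b(U) * U**2
    diff = 100 * (tau_a - tau_b) / tau_a
    print(f"U={U:4.0f} m/s: tau_a={tau_a:6.2f}, tau_b={tau_b:6.2f} N/m^2 "
          f"({diff:.0f}% difference)")
```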

Keywords: air-sea interaction, drag coefficient, air-sea momentum flux, CFD (Computational Fluid Dynamics)

Procedia PDF Downloads 367
31147 Applying Genetic Algorithm in Exchange Rate Models Determination

Authors: Mehdi Rostamzadeh

Abstract:

Genetic Algorithms (GAs) are adaptive heuristic search algorithms premised on the evolutionary ideas of natural selection and genetics. In this study, we apply GAs to fundamental and technical models of exchange rate determination in the exchange rate market. In this framework, we estimated absolute and relative purchasing power parity, Mundell-Fleming, sticky and flexible prices (monetary models), equilibrium exchange rate, and portfolio balance models as fundamental models, and Auto-Regressive (AR), Moving Average (MA), Auto-Regressive Moving Average (ARMA), and Mean Reversion (MR) models as technical models, for the Iranian Rial against the European Union's Euro, using monthly data from January 1992 to December 2014. We then fed these models into the genetic algorithm system to measure the optimal weight of each model. These optimal weights were measured according to four criteria, i.e., R-squared (R2), mean square error (MSE), mean absolute percentage error (MAPE), and root mean square error (RMSE). Based on the obtained results, it seems that fundamental models explain the behavior of the Iranian Rial against the EU Euro exchange rate better than technical models.
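
A compact sketch of a GA searching for model weights that minimise the RMSE of a weighted combination of model forecasts against the observed series; the forecast matrix, population size, and genetic operators are simplified assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
observed = np.cumsum(rng.normal(size=120))               # monthly series stand-in
forecasts = observed + rng.normal(0, [[0.5], [1.0], [2.0]], (3, 120))

def rmse(w):
    w = np.abs(w) / np.abs(w).sum()                      # normalise weights
    return np.sqrt(np.mean((w @ forecasts - observed) ** 2))

pop = rng.random((50, 3))                                # initial population
for _ in range(200):
    fitness = np.array([rmse(w) for w in pop])
    parents = pop[np.argsort(fitness)[:25]]              # selection
    children = (parents[rng.integers(0, 25, 25)]
                + parents[rng.integers(0, 25, 25)]) / 2  # crossover
    children += rng.normal(0, 0.05, children.shape)      # mutation
    pop = np.vstack([parents, children])

best = pop[np.argmin([rmse(w) for w in pop])]
best = np.abs(best) / np.abs(best).sum()
print("optimal model weights:", np.round(best, 3))
```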

Keywords: exchange rate, genetic algorithm, fundamental models, technical models

Procedia PDF Downloads 268
31146 The Effect of Dynamic Eccentricity on the Stator Current Spectrum of 550 kW Induction Motor

Authors: Saleh Elawgali

Abstract:

In order to present the effect of dynamic eccentricity on the stator currents of squirrel cage induction machines, the current spectra of a 550 kW induction motor were calculated for the cases of full symmetry and dynamic eccentricity. The calculations presented in this paper are based on the Poly-Harmonic Model, accounting for static and dynamic eccentricity, stator and rotor slotting, parallel branches, as well as cage asymmetry. The calculations were followed by Fourier analysis of the stator currents in steady-state operation. The paper presents the stator current spectra for the full symmetry and dynamic eccentricity cases and demonstrates the harmonics present in each case. The effect of dynamic eccentricity is demonstrated by comparing the current spectra of the dynamic eccentricity cases with the full-symmetry one.
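
The diagnostic signature can be illustrated with a synthetic stator current: dynamic eccentricity introduces sideband components around the supply frequency at f_s +/- k*f_r, where f_r is the rotor mechanical frequency. The signal below is a toy stand-in for the Poly-Harmonic Model output, with assumed frequencies and amplitudes.

```python
import numpy as np

fs_samp, T = 10_000, 2.0
t = np.arange(0, T, 1 / fs_samp)
f_supply, f_rotor = 50.0, 24.5            # Hz (values assumed)

i_sym = np.sin(2 * np.pi * f_supply * t)                       # full symmetry
i_ecc = i_sym + 0.02 * np.sin(2 * np.pi * (f_supply - f_rotor) * t) \
              + 0.02 * np.sin(2 * np.pi * (f_supply + f_rotor) * t)

for name, i in (("symmetric", i_sym), ("eccentric", i_ecc)):
    spec = np.abs(np.fft.rfft(i)) / len(i)
    freqs = np.fft.rfftfreq(len(i), 1 / fs_samp)
    peaks = freqs[spec > 0.005]           # supply line plus any sidebands
    print(f"{name}: spectral peaks near {np.round(peaks, 1)} Hz")
```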

Keywords: current spectrum, dynamic eccentricity, harmonics, induction machine, slot harmonic zone

Procedia PDF Downloads 392
31145 Photovoltaic Water Pumping System Application

Authors: Sarah Abdourraziq

Abstract:

The photovoltaic (PV) water pumping system is one of the most used and important applications of solar energy. However, cost and efficiency are still a concern, especially under continually changing solar radiation and temperature. Improving the efficiency of the system components is therefore a good way to reduce the cost. The use of maximum power point tracking (MPPT) algorithms to track the maximum power point (MPP) of the PV panel is very important for improving the efficiency of the whole system. In this paper, we present the functioning of the MPPT technique and a detailed model of each component of the PV pumping system in Matlab-Simulink. The results show the influence of changing solar radiation and temperature on the output characteristics of the PV panel, which in turn affect the efficiency of the system. Our system consists of a PV generator, a boost converter, a motor-pump set, and a storage tank.
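
As an algorithmic sketch, the widely used perturb-and-observe MPPT loop is shown below in Python (the paper's implementation is a Matlab-Simulink model, which is not reproduced here); the panel-reading hook and the toy panel curve are hypothetical.

```python
def perturb_and_observe(read_panel, duty=0.5, step=0.01):
    """read_panel(duty) -> (voltage, current); hypothetical driver hook.

    Nudges the boost-converter duty cycle in the direction that increases
    PV output power, reversing the perturbation when power drops.
    """
    v, i = read_panel(duty)
    p_prev = v * i
    while True:
        duty = min(max(duty + step, 0.0), 1.0)
        v, i = read_panel(duty)
        p = v * i
        if p < p_prev:            # power dropped: reverse perturbation
            step = -step
        p_prev = p
        yield duty, p

# Example with a toy panel model whose power peaks at duty = 0.62:
toy = lambda d: (34.0 * (1 - abs(d - 0.62)), 8.0 * (1 - abs(d - 0.62)))
tracker = perturb_and_observe(toy)
for _ in range(40):
    duty, power = next(tracker)
print(f"settled near duty = {duty:.2f}, P = {power:.1f} W")
```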

Keywords: PV panel, boost converter, MPPT, MPP, PV pumping system

Procedia PDF Downloads 395
31144 Identifying Critical Success Factors for Data Quality Management through a Delphi Study

Authors: Maria Paula Santos, Ana Lucas

Abstract:

Organizations base their operations and decision making on the data at their disposal, so the quality of these data is remarkably important. Data Quality (DQ) is currently a relevant issue, and the literature is unanimous in pointing out that poor DQ can result in large costs for organizations. The literature review identified and described 24 Critical Success Factors (CSF) for Data Quality Management (DQM), which were presented to a panel of experts, who ordered them according to their degree of importance using the Delphi method with the Q-sort technique, based on an online questionnaire. The study shows that the five most important CSFs for DQM are: definition of appropriate policies and standards, control of inputs, definition of a strategic plan for DQ, an organizational culture focused on data quality, and obtaining top management commitment and support.

Keywords: critical success factors, data quality, data quality management, Delphi, Q-Sort

Procedia PDF Downloads 210