Search results for: mean bias error
1816 Comparison of Methods of Estimation for Use in Goodness of Fit Tests for Binary Multilevel Models
Authors: I. V. Pinto, M. R. Sooriyarachchi
Abstract:
It can frequently be observed that data arising in our environment have a hierarchical or nested structure. Multilevel modelling is a modern approach to handling this kind of data. When multilevel modelling is combined with a binary response, the estimation methods become complex in nature, and the usual techniques are derived from the quasi-likelihood method. The estimation methods compared in this study are marginal quasi-likelihood of orders 1 and 2 (MQL1, MQL2) and penalized quasi-likelihood of orders 1 and 2 (PQL1, PQL2). A statistical model is of no use if it does not reflect the given dataset. Therefore, checking the adequacy of the fitted model through a goodness-of-fit (GOF) test is an essential stage in any modelling procedure. However, prior to usage, it is equally important to confirm that the GOF test performs well and is suitable for the given model. This study assesses the suitability of the GOF test developed for binary-response multilevel models with respect to the method used in model estimation. An extensive set of simulations was conducted using MLwiN (v 2.19) with varying numbers of clusters, cluster sizes and intra-cluster correlations. The test maintained the desirable Type-I error rate for models estimated using PQL2 and failed for almost all combinations of MQL. The power of the test was adequate for most combinations under all estimation methods except MQL1. Moreover, models were fitted to a real-life dataset using the four methods, and the performance of the test was compared for each model.
Keywords: goodness-of-fit test, marginal quasi-likelihood, multilevel modelling, penalized quasi-likelihood, power, quasi-likelihood, type-I error
Procedia PDF Downloads 142
1815 Multi-Point Dieless Forming Product Defect Reduction Using Reliability-Based Robust Process Optimization
Authors: Misganaw Abebe Baye, Ji-Woo Park, Beom-Soo Kang
Abstract:
The product quality of multi-point dieless forming (MDF) is known to depend on the process parameters. Moreover, variations in friction and material properties may have a substantially adverse influence on the final product quality. This study proposes how to compensate for MDF product defects by minimizing the sensitivity to noise parameter variations. This can be attained by a reliability-based robust optimization (RRO) technique that obtains the optimal process settings of the controllable parameters. Initially, two MDF finite element (FE) simulations of an AA3003-H14 saddle shape showed a substantial amount of dimpling, wrinkling, and shape error. FE analyses were subsequently performed in the ABAQUS commercial software to obtain the correlation between the control process settings and the noise variation with regard to the product defects. The best prediction models are chosen from a family of metamodels to replace the computationally expensive FE simulation. A genetic algorithm (GA) is applied to determine the optimal process settings of the control parameters. Monte Carlo analysis (MCA) is executed to determine how the noise parameter variation affects the final product quality. Finally, the RRO FE simulation and the experimental result show that the amendment of the control parameters in the final forming process leads to a considerably better-quality product.
Keywords: dimpling, multi-point dieless forming, reliability-based robust optimization, shape error, variation, wrinkling
Procedia PDF Downloads 254
1814 Government Final Consumption Expenditure and Household Consumption Expenditure NPISHS in Nigeria
Authors: Usman A. Usman
Abstract:
Undeniably, unlike the Classical side, the Keynesian perspective of the aggregate demand side has a significant place in policy, growth, and welfare in Nigeria, due to government involvement and the ineffective demand of a population living on poor per capita income. This study investigates the effect of government final consumption expenditure and financial deepening on households' and NPISHs' final consumption expenditure, using data on Nigeria from 1981 to 2019. The study employed the ADF stationarity test, the Johansen cointegration test, and a vector error correction model. The results revealed that the coefficient of government final consumption expenditure has a positive effect on household consumption expenditure in the long run. There is a long-run and short-run relationship between gross fixed capital formation and household consumption expenditure. The coefficients of cpsgdp (financial deepening) and of gross fixed capital formation indicate a negative impact on household final consumption expenditure. The coefficient of money supply (lm2gdp), another proxy for financial deepening, and the coefficient of FDI have a positive effect on household final consumption expenditure in the long run. Since gross fixed capital formation stimulates household consumption expenditure, this study recommends a legal framework to support investment as a panacea for increasing household income and consumption and reducing poverty in Nigeria; this should be a key central component of policy.
Keywords: government final consumption expenditure, household consumption expenditure, vector error correction model, cointegration
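The estimation pipeline described here (unit-root tests, Johansen cointegration, then a VECM) can be sketched in a few lines. This is a minimal illustration, assuming the annual series sit in a pandas DataFrame; the file name and column names are hypothetical placeholders, not the study's data.

```python
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM

df = pd.read_csv("nigeria_1981_2019.csv", index_col="year")  # hypothetical file
cols = ["hh_cons", "gov_cons", "cpsgdp", "lm2gdp", "gfcf", "fdi"]

# Step 1: ADF stationarity (unit-root) test on each series.
for c in cols:
    stat, pval = adfuller(df[c])[:2]
    print(f"{c}: ADF stat={stat:.2f}, p={pval:.3f}")

# Step 2: Johansen cointegration test to choose the cointegration rank.
jres = coint_johansen(df[cols], det_order=0, k_ar_diff=1)
print("trace stats:", jres.lr1, "95% critical values:", jres.cvt[:, 1])

# Step 3: VECM — the error-correction term captures the long-run relation.
vecm = VECM(df[cols], k_ar_diff=1, coint_rank=1, deterministic="co").fit()
print(vecm.summary())
```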
Procedia PDF Downloads 52
1813 Type A Quadricuspid Aortic Valve; Rarer than a Four-Leaf Clover, an Example of Availability Heuristic
Authors: Frazer Kirk, Rohen Skiba, Pankaj Saxena
Abstract:
The natural history of the quadricuspid aortic valve (QAV) is poorly understood due to the exceeding rarity of the condition, with reported incidence rates varying between 0.00028% and 1%. Classically, patients present with aortic regurgitation (AR) between 40 and 60 years of age, experiencing palpitations, chest pain, or heart failure (1, 2). Echocardiography is the mainstay of diagnosis for this condition; however, given its rarity, the condition can easily be overlooked, as demonstrated here. The case report that follows serves as a reminder of the condition, to reduce the innate cognitive bias to overlook the diagnosis due to the availability heuristic. Intraoperative photography and echocardiographic and magnetic resonance imaging from this case are provided for reference, demonstrating that while the diagnosis of aortic regurgitation was recognized early, the valve morphology was underappreciated.
Keywords: quadricuspid aortic valve, cardiac surgery, echocardiography, congenital
Procedia PDF Downloads 162
1812 Approximation of Geodesics on Meshes with Implementation in Rhinoceros Software
Authors: Marian Sagat, Mariana Remesikova
Abstract:
In civil engineering, there is the problem of how to industrially produce tensile membrane structures that are non-developable surfaces. Non-developable surfaces can only be developed with a certain error, and we want to minimize this error. To that end, the non-developable surfaces are cut into plates along geodesic curves. We propose a numerical algorithm for finding approximations of open geodesics on meshes and surfaces based on geodesic curvature flow. For practical reasons, it is important to automate the choice of the time step. We propose a method for automatically setting the time step based on the diagonal dominance criterion for the matrix of the linear system obtained by discretization of our partial differential equation model; practical experiments show the reliability of this method. Because the model is approximated by a numerical method based on classical derivatives, obstacles that occur for meshes with sharp corners must be resolved. We solve this problem for a large family of meshes with sharp corners via special rotations, which can be seen as a partial unfolding of the mesh. In practical applications, it is required that the approximation of a geodesic has its vertices only on the edges of the mesh. This problem is solved by a specially designed point-tracking algorithm. We also partially solve the problem of finding geodesics on meshes with holes. We implemented the whole algorithm in Rhinoceros (commercial 3D computer graphics and computer-aided design software), using the C# language as a C# assembly library for Grasshopper, a plugin for Rhinoceros.
Keywords: geodesic, geodesic curvature flow, mesh, Rhinoceros software
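A minimal sketch of the automatic time-step idea, read literally from the criterion above: shrink the step until the assembled system matrix is diagonally dominant. The assembly routine below is a generic stand-in (identity minus the step times a small mesh Laplacian), not the paper's actual discretization.

```python
import numpy as np

def is_diagonally_dominant(M):
    d = np.abs(np.diag(M))
    off = np.abs(M).sum(axis=1) - d
    return bool(np.all(d >= off))

def auto_time_step(assemble, tau=1.0, shrink=0.5, tau_min=1e-8):
    """Halve tau until the linear-system matrix passes the dominance test."""
    while tau > tau_min and not is_diagonally_dominant(assemble(tau)):
        tau *= shrink
    return tau

# Stand-in assembly: identity minus tau times a tiny mesh Laplacian.
L = np.array([[ 2., -1., -1.],
              [-1.,  2., -1.],
              [-1., -1.,  2.]])
tau = auto_time_step(lambda t: np.eye(3) - t * L)
print("selected time step:", tau)
```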
Procedia PDF Downloads 150
1811 Simulation of Optimal Runoff Hydrograph Using Ensemble of Radar Rainfall and Blending of Runoffs Model
Authors: Myungjin Lee, Daegun Han, Jongsung Kim, Soojun Kim, Hung Soo Kim
Abstract:
Recently, localized heavy rainfall and typhoons have occurred frequently due to climate change, and the resulting damage is growing, so more accurate prediction of rainfall and runoff is needed. However, gauge rainfall has limited spatial accuracy. Radar rainfall explains the spatial variability of rainfall better than gauge rainfall, but it is mostly underestimated and carries uncertainty. Therefore, an ensemble of radar rainfall was simulated using the error structure between radar and gauge rainfall to overcome this uncertainty. The simulated ensemble was used as input to rainfall-runoff models to obtain an ensemble of runoff hydrographs. Previous studies have discussed the accuracy of rainfall-runoff models: even if the same input data, such as rainfall, are used for runoff analysis in the same basin, different models can give different results because of the uncertainty involved in the models themselves. Therefore, we used two models, the SSARR model, which is a lumped model, and the Vflo model, which is a distributed model, and tried to simulate the optimum runoff considering the uncertainty of each rainfall-runoff model. The study basin is located in the Han River basin, and we obtained one integrated, optimum runoff hydrograph using blending methods such as Multi-Model Super Ensemble (MMSE), Simple Model Average (SMA), and Mean Square Error (MSE) weighting. From this study, we could confirm the accuracy of rainfall and rainfall-runoff modelling using ensemble scenarios and various rainfall-runoff models, and this result can be used to study flood control measures under climate change. Acknowledgements: This work is supported by the Korea Agency for Infrastructure Technology Advancement (KAIA) grant funded by the Ministry of Land, Infrastructure and Transport (Grant 18AWMP-B083066-05).
Keywords: radar rainfall ensemble, rainfall-runoff models, blending method, optimum runoff hydrograph
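A minimal sketch of the blending step, assuming the simple-model-average and inverse-MSE weighting flavors named above; the short hydrograph arrays are illustrative, not the study's data.

```python
import numpy as np

obs   = np.array([10., 35., 80., 60., 30.])   # observed discharge
ssarr = np.array([12., 30., 70., 65., 28.])   # lumped-model simulation
vflo  = np.array([ 9., 40., 90., 55., 33.])   # distributed-model simulation

# Simple Model Average: equal weights for all member models.
sma = (ssarr + vflo) / 2

# MSE-based weights: members with smaller error get larger weight.
mse = np.array([np.mean((obs - m) ** 2) for m in (ssarr, vflo)])
w = (1 / mse) / np.sum(1 / mse)
blended = w[0] * ssarr + w[1] * vflo

print("weights:", w)
print("blended hydrograph:", blended)
```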
Procedia PDF Downloads 280
1810 Radiation Hardness Materials Article Review
Authors: S. Abou El-Azm, U. Kruchonak, M. Gostkin, A. Guskov, A. Zhemchugov
Abstract:
Semiconductor detectors are widely used in nuclear physics and high-energy physics experiments. The application of semiconductor detectors can be limited by their ultimate radiation resistance: the increase in radiation defect concentration leads to significant degradation of the working parameters of semiconductor detectors. The investigation of radiation defect properties in order to enhance the radiation hardness of semiconductor detectors is therefore an important task for the successful implementation of a number of nuclear physics experiments. We present some information about radiation-hard materials such as diamond, sapphire and CdTe. We also present measurements of I-V characteristics and of charge collection efficiency and its dependence on the bias voltage, for different doses, for high-resistivity GaAs:Cr and Si at the LINAC-200 accelerator and the IBR-2 reactor.
Keywords: semiconductor detectors, radiation hardness, GaAs, Si, CCE, I-V, C-V
Procedia PDF Downloads 113
1809 An Improved Robust Algorithm Based on Cubature Kalman Filter for Single-Frequency Global Navigation Satellite System/Inertial Navigation Tightly Coupled System
Authors: Hao Wang, Shuguo Pan
Abstract:
The Global Navigation Satellite System (GNSS) signal received by a dynamic vehicle in a harsh environment is frequently interfered with and blocked, which generates gross errors affecting the positioning accuracy of GNSS/Inertial Navigation System (INS) integrated navigation. Therefore, this paper puts forward an improved robust Cubature Kalman Filter (CKF) algorithm for ambiguity resolution in a single-frequency GNSS/INS tightly coupled system. First, the dynamic model and measurement model of a single-frequency GNSS/INS tightly coupled system were established, and the method for INS-aided GNSS integer ambiguity resolution was studied. Then, we analyzed the influence of pseudo-range observations with gross errors on GNSS/INS integrated positioning accuracy. To reduce the influence of outliers, this paper improves the CKF algorithm and realizes an intelligent selection of robust strategies by detecting ill-conditioning of the matrix. Finally, a field navigation test was performed to demonstrate the effectiveness of the proposed algorithm based on the double-differenced solution mode. The experiment proved that the improved robust algorithm can greatly weaken the influence of isolated, continuous, and hybrid observation anomalies, enhancing the reliability and accuracy of GNSS/INS tightly coupled navigation solutions.
Keywords: GNSS/INS integrated navigation, ambiguity resolution, Cubature Kalman filter, robust algorithm
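For reference, a minimal sketch of the third-degree spherical-radial cubature rule at the core of any CKF: 2n equally weighted sigma points propagated through a nonlinear function. The paper's robust strategy selection is not reproduced; the test dynamics are illustrative.

```python
import numpy as np

def cubature_points(mean, cov):
    n = len(mean)
    S = np.linalg.cholesky(cov)
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])  # 2n unit directions
    return mean[:, None] + S @ xi          # shape (n, 2n): one point per column

def propagate(f, mean, cov):
    pts = cubature_points(mean, cov)
    fpts = np.apply_along_axis(f, 0, pts)  # push each point through f
    m = fpts.mean(axis=1)
    diff = fpts - m[:, None]
    P = diff @ diff.T / pts.shape[1]       # equally weighted sample covariance
    return m, P

f = lambda x: np.array([x[0] + 0.1 * x[1], x[1]])   # illustrative dynamics
m, P = propagate(f, np.array([1.0, 0.5]), 0.01 * np.eye(2))
print(m, P)
```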
Procedia PDF Downloads 100
1808 Composition Dependence of Exchange Anisotropy in PtₓMn₁₋ₓ/Co₇₀Fe₃₀ Films
Authors: Sina Ranjbar, Masakiyo Tsunoda, Mikihiko Oogane, Yasuo Ando
Abstract:
We systematically investigated the exchange anisotropy of ferromagnetic Co₇₀Fe₃₀/antiferromagnetic PtMn bilayer films. We focused on the relationship between the exchange bias and the composition of the PtₓMn₁₋ₓ (14 < x < 22 and 45 < x < 56 at%) films, and we successfully optimized the composition. The crystal structure of the PtₓMn₁₋ₓ films was FCC for 14 < x < 22 at% and FCT for 45 < x < 56 at% after annealing at 370 °C for 6 hours. The unidirectional anisotropy constants (Jₖ) for fcc-Pt₁₅Mn₈₅ (20 nm) and fct-Pt₄₈Mn₅₂ (20 nm) prepared at the optimum compositions were 0.16 and 0.20 erg/cm², respectively. Both Pt₁₅Mn₈₅ and Pt₄₈Mn₅₂ films showed a larger unidirectional anisotropy constant (Jₖ) than reported elsewhere, as well as a flatter surface than that of other antiferromagnetic materials. The obtained PtMn films, with their large exchange anisotropy and slight roughness, are useful as an antiferromagnetic layer in spintronic applications.
Keywords: antiferromagnetic material, PtMn thin film, exchange anisotropy, composition dependence
Procedia PDF Downloads 261
1807 Reasons for the Selection of Information-Processing Framework and the Philosophy of Mind as a General Account for an Error Analysis and Explanation on Mathematics
Authors: Michael Lousis
Abstract:
This research study is concerned with learners' errors in arithmetic and algebra. The data resulted from a broader international comparative research program called the Kassel Project; however, its conceptualisation differed from and contrasted with that of the main program, which was mostly based on socio-demographic data. The way in which the research study was conducted was not dependent on the researcher's discretion but was dictated by the nature of the problem under investigation. This is because the phenomenon of learners' mathematical errors is due neither to the intentions of learners, nor to institutional processes, rules and norms, nor to the educators' intentions and goals, but rather to the way certain information is presented to learners and how their cognitive apparatus processes this information. Several approaches to the study of learners' errors have been developed since the beginning of the 20th century, encompassing different belief systems. These approaches were based on behaviourist theory, on the Piagetian-constructivist research framework, on the perspective that followed the philosophy of science, and on the information-processing paradigm. The researcher of the present study had to disclose the learners' course of thinking that led them to specific observable actions, resulting in particular errors on specific problems, rather than analysing scripts with the students' thoughts presented in written form. This, in turn, entailed that the choice of methods had to be appropriate and conducive to seeing and realising the learners' errors from the perspective of the participants in the investigation. This particular fact determined important decisions concerning the selection of an appropriate framework for analysing the mathematical errors and giving explanations. Thus the belief systems of behaviourism, the Piagetian-constructivist perspective, and the philosophy-of-science perspective were rejected, and the information-processing paradigm in conjunction with the philosophy of mind was adopted as the general account for the elaboration of the data. This paper explains why these decisions were appropriate and beneficial for conducting the present study and for establishing the ensuing thesis. Additionally, it explains why the adoption of the information-processing paradigm in conjunction with the philosophy of mind gives a sound and legitimate basis for the development of future studies concerning mathematical error analysis.
Keywords: advantages-disadvantages of theoretical prospects, behavioral prospect, critical evaluation of theoretical prospects, error analysis, information-processing paradigm, opting for the appropriate approach, philosophy of science prospect, Piagetian-constructivist research frameworks, review of research in mathematical errors
Procedia PDF Downloads 190
1806 Government Final Consumption Expenditure Financial Deepening and Household Consumption Expenditure NPISHs in Nigeria
Authors: Usman A. Usman
Abstract:
Undeniably, unlike the Classical side, the Keynesian perspective of the aggregate demand side has a significant place in policy, growth, and welfare in Nigeria, due to government involvement and the ineffective demand of a population living on poor per capita income. This study investigates the effect of government final consumption expenditure and financial deepening on households' and NPISHs' final consumption expenditure, using data on Nigeria from 1981 to 2019. The study employed the ADF stationarity test, the Johansen cointegration test, and a vector error correction model. The results revealed that the coefficient of government final consumption expenditure has a positive effect on household consumption expenditure in the long run. There is a long-run and short-run relationship between gross fixed capital formation and household consumption expenditure. The coefficients of cpsgdp (financial deepening) and of gross fixed capital formation indicate a negative impact on household final consumption expenditure. The coefficient of money supply (lm2gdp), another proxy for financial deepening, and the coefficient of FDI have a positive effect on household final consumption expenditure in the long run. Since gross fixed capital formation stimulates household consumption expenditure, this study recommends a legal framework to support investment as a panacea for increasing household income and consumption and reducing poverty in Nigeria; this should be a key central component of policy.
Keywords: household, government expenditures, vector error correction model, johansen test
Procedia PDF Downloads 61
1805 The Return of the Witches: A Class That Motivates the Analysis of Gender Bias in Engineering
Authors: Veronica Botero, Karen Ortiz
Abstract:
The Faculty of Mines of the National University of Colombia, Medellín Campus, has 136 years of history and is one of the most important centers for engineering education and scientific research in the country, as well as a reference at the global, national, and Latin American levels. Despite its long history and its many graduates trained under the traditional mechanistic and androcentric paradigm, which reproduces the logic of the traditional scientific method and the differentiated, severe divide between research subject and object, among other binarisms, it is also a place where professors and students have become aware of the need to transform this paradigm in engineering, to focus on the sustainability of diversity and the well-being of the natural and social systems that inhabit the territories, and to open possibilities for classes that address feminist pedagogical theories and practices. The class The Return of the Witches is an initiative that constitutes an important training exercise, offering students the study of feminisms, the importance of closing gender gaps, and critical readings of the traditional paradigm of engineering. The objective of this article is to present a systematization of the experience of designing, implementing, and developing this elective class: the tensions that arose when a subject of this style was created and proposed in the Department of Geosciences and Environment of the Faculty of Mines in 2022; the reactions of the groups of students who have taken it and their perceptions and opinions about ecofeminism as a proposal for critical analysis and practice in relation to the environment; and, above all, how their readings of the world have changed after studying this subject for a semester. The pedagogical journey and the feminist methodologies that have been designed and adjusted over two years of work are explained on the basis of the situated knowledge shared by the students and the two professors who teach the course, who challenge the dominant ideology in engineering: one of them is trained in human sciences and feminist studies, and the other, although trained in civil engineering and geosciences, is a woman of diverse sexual orientation and the first woman professor to have assumed the position of dean in the history of the Faculty. The transformations in the life experience of the students are revealing, since they affirm that the training process is forceful and powerful in outlining a much more qualified and critical professional profile that contributes to closing gender gaps in the country. This class is therefore a challenge in this Faculty of Engineering, which still presents a dominant gender ideology that has not been questioned or challenged before.
Keywords: feminisms, gender equality, gender bias, engineering for life manifesto
Procedia PDF Downloads 70
1804 Dynamics of Light Induced Current in 1D Coupled Quantum Dots
Authors: Tokuei Sako
Abstract:
Laser-induced current in a quasi-one-dimensional nanostructure has been studied with a model of a few electrons confined in a 1D electrostatic potential, coupled to electrodes at both ends and subjected to a pulsed laser field. The time propagation of the one- and two-electron wave packets has been calculated by integrating the time-dependent Schrödinger equation directly, using the symplectic integrator method on a uniform Fourier grid. The temporal behavior of the resulting light-induced current in the studied systems is discussed with respect to the lifetime of the quasi-bound states formed when the static bias voltage is applied.
Keywords: pulsed laser field, nanowire, electron wave packet, quantum dots, time-dependent Schrödinger equation
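For orientation, a minimal sketch of wave-packet propagation on a uniform Fourier grid, using the split-operator method as a stand-in for the authors' symplectic integrator; the harmonic confining potential and all numerical parameters are illustrative (atomic units throughout).

```python
import numpy as np

N, Lbox, dt, steps = 512, 50.0, 0.01, 500
dx = Lbox / N
x = np.linspace(-Lbox / 2, Lbox / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)

V = 0.5 * 0.04 * x**2                        # illustrative confining well
psi = np.exp(-(x - 2.0) ** 2)                # initial Gaussian packet
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

expV = np.exp(-0.5j * dt * V)                # half-step potential factor
expT = np.exp(-0.5j * dt * k**2)             # full kinetic step (hbar = m = 1)

for _ in range(steps):                       # Strang splitting: V/2, T, V/2
    psi = expV * np.fft.ifft(expT * np.fft.fft(expV * psi))

print("norm after propagation:", np.sum(np.abs(psi) ** 2) * dx)
```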
Procedia PDF Downloads 356
1803 Protein Tertiary Structure Prediction by a Multiobjective Optimization and Neural Network Approach
Authors: Alexandre Barbosa de Almeida, Telma Woerle de Lima Soares
Abstract:
Protein structure prediction is a challenging task in the bioinformatics field. The biological function of every protein relies largely on the shape of its three-dimensional conformational structure, yet less than 1% of all known proteins in the world have had their structure solved. This work proposes a deep learning model to address this problem, attempting to predict some aspects of protein conformations. Through a process of multiobjective dominance, a recurrent neural network was trained to abstract the particular bias of each individual multiobjective algorithm, generating a heuristic that could be useful for predicting some of the relevant aspects of the three-dimensional conformation formation process known as protein folding.
Keywords: ab initio heuristic modeling, multiobjective optimization, protein structure prediction, recurrent neural network
Procedia PDF Downloads 205
1802 Formation of Miniband Structure in Dimer Fibonacci GaAs/Ga1-XAlXAs Superlattices
Authors: Aziz Zoubir, Sefir Yamina, Djelti Redouan, Bentata Samir
Abstract:
The effect of a uniform electric field across multibarrier (GaAs/AlₓGa₁₋ₓAs) systems is exhaustively explored by a computational model using the exact Airy function formalism and the transfer-matrix technique. In the case of the biased Dimer Fibonacci Height Barrier superlattice (DFHBSL) structure, a strong reduction in the transmission properties was observed, and the width of the miniband structure decreases linearly with increasing applied bias. This is due to the confinement of the states in the miniband structure, which becomes increasingly important (Wannier-Stark effect).
Keywords: Dimer Fibonacci Height Barrier superlattices, singular extended states, exact Airy function, transfer matrix formalism
Procedia PDF Downloads 509
1801 The Bayesian Premium Under Entropy Loss
Authors: Farouk Metiri, Halim Zeghdoudi, Mohamed Riad Remita
Abstract:
Credibility theory is an experience-rating technique in actuarial science and one of the quantitative tools that allow insurers to perform experience rating, that is, to adjust future premiums based on past experience. It is usually used in automobile insurance, workers' compensation premiums, and IBNR (claims incurred but not reported to the insurer), where credibility theory can be used to estimate the claim size. In this study, we focus on a popular tool in credibility theory, the Bayesian premium estimator, taking the Lindley distribution as the claim distribution. We derive this estimator under the asymmetric entropy loss and the symmetric squared-error loss, with informative and non-informative priors. In a purely Bayesian setting, the prior distribution represents the insurer's belief about the insured's risk level, updated after collection of the insured's data at the end of the period. However, the explicit form of the Bayesian premium when the prior is not a member of the exponential family can be quite difficult to obtain, as it involves a number of integrations that are not analytically solvable. The paper solves this problem by deriving the estimator using a numerical approximation (the Lindley approximation), one of the suitable approximation methods for such problems: it approaches the ratio of the integrals as a whole and produces a single numerical result. A simulation study using the Monte Carlo method is then performed to evaluate this estimator, and the mean-squared-error criterion is used to compare the Bayesian premium estimator under the above loss functions.
Keywords: Bayesian estimator, credibility theory, entropy loss, Monte Carlo simulation
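A minimal numerical sketch of the two Bayes estimates for a Lindley claim sample: the posterior mean (squared-error loss) and the reciprocal of the posterior mean of 1/theta (entropy loss L(d, theta) = d/theta - ln(d/theta) - 1). A grid posterior replaces the paper's Lindley approximation; the gamma prior and all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def lindley_pdf(x, theta):
    return theta**2 / (theta + 1) * (1 + x) * np.exp(-theta * x)

# Simulate Lindley claims via the mixture representation:
# Lindley(theta) mixes Exp(theta) (weight theta/(theta+1)) and Gamma(2, theta).
theta_true, n = 1.5, 50
is_exp = rng.random(n) < theta_true / (theta_true + 1)
x = np.where(is_exp, rng.exponential(1 / theta_true, n),
                     rng.gamma(2, 1 / theta_true, n))

# Grid posterior: gamma(a, b) prior times the Lindley likelihood.
a, b = 2.0, 1.0
grid = np.linspace(1e-3, 10, 4000)
log_post = ((a - 1) * np.log(grid) - b * grid
            + np.sum(np.log(lindley_pdf(x[:, None], grid[None, :])), axis=0))
post = np.exp(log_post - log_post.max())
post /= np.trapz(post, grid)

premium_se = np.trapz(grid * post, grid)            # squared-error loss
premium_entropy = 1 / np.trapz(post / grid, grid)   # entropy loss
print(premium_se, premium_entropy)
```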
Procedia PDF Downloads 334
1800 Accurate Calculation of the Penetration Depth of a Bullet Using ANSYS
Authors: Eunsu Jang, Kang Park
Abstract:
In developing an armored ground combat vehicle (AGCV), analyzing the vulnerability (or survivability) of the AGCV against an enemy's attack is a very important step. In vulnerability analysis, penetration equations are usually used to obtain the penetration depth and to check whether a bullet can penetrate the armor of the AGCV, which would damage internal components or crew. The penetration equations are derived from penetration experiments, which require a long time and great effort; moreover, they usually hold only for the specific target material and the specific type of bullet used in the experiments. Thus, penetration simulation using ANSYS can be another option for calculating penetration depth. However, careful modelling of the targets and selection of the input parameters are essential to obtain an accurate penetration depth. This paper performed a sensitivity analysis of ANSYS input parameters with respect to the accuracy of the calculated penetration depth. Two conflicting objectives need to be achieved when adopting ANSYS for penetration analysis: maximizing the accuracy of the calculation and minimizing the calculation time. To maximize the calculation accuracy, a sensitivity analysis of the input parameters was performed and the RMS error with respect to the experimental data was calculated. The input parameters, including mesh size, boundary conditions, material properties, and target diameter, were tested and selected to minimize the error between the simulated results and the experimental data reported in papers on the penetration equations. To minimize the calculation time, the parameter values obtained from the accuracy analysis were adjusted for optimized overall performance. The analysis found the following: 1) As the mesh size gradually decreases from 0.9 mm to 0.5 mm, both the penetration depth and the calculation time increase. 2) As the diameter of the target decreases from 250 mm to 60 mm, both the penetration depth and the calculation time decrease. 3) As the yield stress, one of the material properties of the target, decreases, the penetration depth increases. 4) The boundary condition with only the side surface of the target fixed gives more penetration depth than the one with both the side and rear surfaces fixed. Using these findings, the input parameters can be tuned to minimize the error between simulation and experiment. With the simulation tool ANSYS and delicately tuned input parameters, penetration analysis can be done on a computer without actual experiments. The data from penetration experiments are usually hard to obtain for security reasons, and only published papers provide them, for a limited range of target materials. The next step of this research is to generalize this approach to anticipate penetration depth by interpolating between known penetration experiments. The result may not be accurate enough to replace penetration experiments, but such simulations can be used in the modelling and simulation stage, early in the design process of an AGCV.
Keywords: ANSYS, input parameters, penetration depth, sensitivity analysis
Procedia PDF Downloads 401
1799 Application of Grey Theory in the Forecast of Facility Maintenance Hours for Office Building Tenants and Public Areas
Authors: Yen Chia-Ju, Cheng Ding-Ruei
Abstract:
This study took a case office building as its subject and explored the responsive work-order repair requests for facilities and equipment in tenant offices and public areas using grey theory, with the purpose of providing a reference for related office building owners, executive managers, property management companies, and mechanical and electrical companies when selecting and assessing forecast models. The important conclusions of this study are summarized as follows. 1. Grey relational analysis ranks the importance of repair-request numbers across six categories in the order: power systems, building systems, water systems, air-conditioning systems, fire systems, and manpower dispatch. In terms of facility maintenance hours, the order of importance is power systems, building systems, water systems, air-conditioning systems, manpower dispatch, and fire systems. 2. The GM(1,N) and regression methods took maintenance hours as the dependent variable and repair number, leased area, and tenant number as independent variables, and conducted single-month forecasts based on 12 data points from January to December 2011. From the verification results, the mean absolute error and average accuracy of GM(1,N) were 6.41% and 93.59%, respectively, while those of the regression model were 4.66% and 95.34%, indicating that both have highly accurate forecast capability.
Keywords: grey theory, forecast model, Taipei 101, office buildings, property management, facilities, equipment
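A minimal sketch of the grey forecasting mechanics using GM(1,1), the univariate special case of the GM(1,N) model named above; the monthly maintenance-hour figures are illustrative.

```python
import numpy as np

x0 = np.array([120., 132., 128., 140., 151., 147.])   # original series
x1 = np.cumsum(x0)                                    # accumulated series (1-AGO)

z = 0.5 * (x1[1:] + x1[:-1])                          # background values

# Least squares for the whitening equation dx1/dt + a*x1 = b.
B = np.column_stack([-z, np.ones(len(z))])
a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]

# Time-response function; inverse accumulation restores the series.
kk = np.arange(len(x0) + 1)                           # one step beyond sample
x1_hat = (x0[0] - b / a) * np.exp(-a * kk) + b / a
x0_hat = np.diff(x1_hat)

mape = np.mean(np.abs(x0_hat[:-1] - x0[1:]) / x0[1:]) * 100
print(f"next-month forecast: {x0_hat[-1]:.1f} hours, MAPE: {mape:.2f}%")
```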
Procedia PDF Downloads 444
1798 A Research on Inference from Multiple Distance Variables in Hedonic Regression Focus on Three Variables
Authors: Yan Wang, Yasushi Asami, Yukio Sadahiro
Abstract:
In an urban context, urban nodes such as amenities or hazards certainly affect house prices, and classic hedonic analysis employs distance variables measured from each urban node. However, the effects of distances to facilities on house prices generally do not represent the true price of the property. Distance variables measured on the same surface suffer from a problem called multicollinearity, which usually appears in regression as instability in the magnitude, variance, and mean value of the coefficient estimates. In this paper, we provide a theoretical framework for identifying and gathering data with less bias, as well as a specific sampling method for locating the sample region so as to avoid the spatial multicollinearity problem in the three-distance-variable case.
Keywords: hedonic regression, urban node, distance variables, multicollinearity, collinearity
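A minimal sketch of how the collinearity among distance variables can be diagnosed with variance inflation factors (VIFs); the node and house coordinates are synthetic, not the paper's sampling design.

```python
import numpy as np

rng = np.random.default_rng(1)
houses = rng.uniform(0, 10, size=(200, 2))            # house locations
nodes = np.array([[2., 3.], [8., 7.], [5., 5.]])      # three urban nodes

# Distance variables: one column per node.
D = np.linalg.norm(houses[:, None, :] - nodes[None, :, :], axis=2)

def vif(X, j):
    """VIF of column j: regress it on the others, VIF = 1 / (1 - R^2)."""
    y, others = X[:, j], np.delete(X, j, axis=1)
    A = np.column_stack([np.ones(len(y)), others])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1 - resid.var() / y.var()
    return 1 / (1 - r2)

print([round(vif(D, j), 2) for j in range(D.shape[1])])
```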
Procedia PDF Downloads 465
1797 Microwave Dielectric Constant Measurements of Titanium Dioxide Using Five Mixture Equations
Authors: Jyh Sheen, Yong-Lin Wang
Abstract:
This research is dedicated to finding a different measurement procedure for the microwave dielectric properties of ceramic materials with high dielectric constants. For a composite of ceramic dispersed in a polymer matrix, the dielectric constants of composites with different concentrations can be obtained by various mixture equations. Another development of the mixture rules is to calculate the permittivity of the ceramic from measurements on the composite. To this end, the analysis method and theoretical accuracy of six basic mixture laws, derived from three basic particle shapes of ceramic fillers, have been reported for ceramic dielectric constants of less than 40 at microwave frequencies. Similar research has been done for other well-known mixture rules, showing that both physical curve matching with the experimental results and low potential theory error are important for improving calculation accuracy. Recently, a modified mixture equation for high-dielectric-constant ceramics at microwave frequencies was also presented for strontium titanate (SrTiO3); it was selected from five well-known mixing rules and has shown good accuracy for high-dielectric-constant measurements. However, it is still not clear how accurate this modified equation is for other high-dielectric-constant materials. Therefore, the five well-known mixing rules are selected again to understand their application to other high-dielectric-constant ceramics, and their theoretical error equations are derived. In addition to the theoretical work, experimental measurements are always required. Titanium dioxide, TiO2, with a dielectric constant of 100, is an interesting ceramic for microwave applications and was chosen for this research. Its powder is adopted as the filler material, and polyethylene powder serves as the matrix material. The dielectric constants of ceramic-polyethylene composites with various compositions were measured at 10 GHz. The theoretical curves of the five published mixture equations are shown together with the measured results to examine the curve-matching condition of each rule. Finally, based on the experimental observations and theoretical analysis, one of the five rules was selected and modified into a new powder mixture equation. This modified rule shows very good curve matching with the measurement data, low theoretical error, and good accuracy in measuring the dielectric constant of titanium dioxide ceramic. We can then calculate the dielectric constant of the pure filler medium (titanium dioxide) from the measured dielectric constants of the composites using the mixing equations, and the accuracy of estimating the dielectric constant of the pure ceramic by the various mixture rules is compared. This study can be applied to microwave dielectric property measurements of other high-dielectric-constant ceramic materials in the future.
Keywords: microwave measurement, dielectric constant, mixture rules, composites
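A minimal sketch of the back-calculation idea with one classical mixture law, the Lichtenecker logarithmic rule, chosen here purely for illustration (the paper's own modified rule is not reproduced); the measured composite values are made up.

```python
import numpy as np

eps_matrix = 2.3                                   # polyethylene
vol_frac = np.array([0.10, 0.20, 0.30])            # ceramic volume fractions
eps_measured = np.array([3.4, 5.1, 7.8])           # hypothetical 10 GHz data

# Lichtenecker: ln(eps_eff) = v * ln(eps_filler) + (1 - v) * ln(eps_matrix),
# giving one estimate of ln(eps_filler) per composite sample.
ln_filler = (np.log(eps_measured)
             - (1 - vol_frac) * np.log(eps_matrix)) / vol_frac
print("filler permittivity estimates:", np.exp(ln_filler))
```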
Procedia PDF Downloads 367
1796 Attention States in the Sustained Attention to Response Task: Effects of Trial Duration, Mind-Wandering and Focus
Authors: Aisling Davies, Ciara Greene
Abstract:
Over the past decade, the phenomenon of mind-wandering in cognitive tasks has attracted widespread scientific attention. Research indicates that mind-wandering occurrences can be detected through behavioural responses in the Sustained Attention to Response Task (SART), and several studies have attributed a specific pattern of responding around an error in this task to an observable effect of a mind-wandering state. SART behavioural responses are also widely accepted as indices of sustained attention and of general attention lapses. However, evidence suggests that these same patterns of responding may be attributable to other factors associated with more focused states, and that it may also be possible to distinguish the two states within the same task. To use behavioural responses in the SART to study mind-wandering, it is essential to establish both the SART parameters that increase the likelihood of errors due to mind-wandering and exactly what type of responses are indicative of mind-wandering, neither of which has yet been determined. The aims of this study were to compare different versions of the SART to establish which task induces the most mind-wandering episodes, and to determine whether mind-wandering-related errors can be distinguished from errors during periods of focus by behavioural responses in the SART. To achieve these objectives, 25 participants completed four modified versions of the SART that differed from the classic paradigm in several ways so as to capture more instances of mind-wandering. The duration for which trials were presented was increased proportionately across the four versions of the task (Standard, Medium Slow, Slow, and Very Slow), and participants intermittently responded to thought probes assessing their level of focus and degree of mind-wandering throughout. Error rates, reaction times and variability in reaction times decreased in proportion to the decrease in trial presentation rate, and the proportion of mind-wandering-related errors increased, until the Very Slow condition, where the extra decrease in rate no longer had an effect. Distinct reaction-time patterns around an error, dependent on level of focus (high/low) and level of mind-wandering (high/low), were also observed, indicating four separate attention states occurring within the SART. This study establishes the optimal trial presentation duration for inducing mind-wandering in the SART, provides evidence supporting the idea that different attention states can be observed within the SART, and highlights the importance of addressing other factors contributing to behavioural responses when studying mind-wandering with this task. A notable finding in relation to the standard SART was that, while more errors were observed in this version of the task, most of these errors occurred during periods of focus, raising significant questions about our current understanding of mind-wandering and associated failures of attention.
Keywords: attention, mind-wandering, trial duration rate, Sustained Attention to Response Task (SART)
Procedia PDF Downloads 182
1795 Near Optimal Closed-Loop Guidance Gains Determination for Vector Guidance Law, from Impact Angle Errors and Miss Distance Considerations
Authors: Karthikeyan Kalirajan, Ashok Joshi
Abstract:
An optimization problem is set up to maximize the terminal kinetic energy of a maneuverable reentry vehicle (MaRV). The target location and the impact angle are given as constraints. The MaRV uses an explicit guidance law called vector guidance, which has two gains that are taken as decision variables. The problem is to find the optimal values of these gains that result in minimum miss distance and impact angle error. Using a simple 3-DOF non-rotating flat-earth model and the Lockheed Martin HP-MARV as the reentry vehicle, the nature of the solutions of the optimization problem is studied. This is achieved by carrying out a parametric study over a range of closed-loop gain values and generating the corresponding impact angle error and miss distance values. The results show that there are well-defined lower and upper bounds on the gains that result in a near-optimal terminal guidance solution. The study finds that there exist common permissible regions (values of gains) where all constraints are met. Moreover, the permissible region lies between flat regions, and hence the optimization algorithm has to be chosen carefully. It is also found that only one of the gain values is independent, and that the other, dependent gain value is related to it through a simple straight-line expression. Moreover, to reduce the computational burden of finding the optimal values of two gains, a guidance law called Diveline guidance, which uses a single gain, is discussed. The derivation of the Diveline guidance law from the vector guidance law is presented in this paper.
Keywords: MaRV guidance, reentry trajectory, trajectory optimization, guidance gain selection
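A minimal sketch of the parametric study's structure: sweep a grid over the two gains, evaluate miss distance and impact angle error for each pair, and collect the feasible region. The simulate_terminal function is a hypothetical placeholder standing in for the 3-DOF trajectory simulation; its formula and the constraint thresholds are made up.

```python
import numpy as np

def simulate_terminal(k1, k2):
    # Hypothetical placeholder: replace with the actual 3-DOF MaRV simulation.
    miss = abs(10 - 2 * k1 - k2) + 0.1 * k1**2
    angle_err = abs(5 - k1 - 0.5 * k2)
    return miss, angle_err

k1_grid = np.linspace(0.5, 10, 40)
k2_grid = np.linspace(0.5, 10, 40)
feasible = []
for k1 in k1_grid:
    for k2 in k2_grid:
        miss, ang = simulate_terminal(k1, k2)
        if miss < 1.0 and ang < 1.0:      # illustrative constraint thresholds
            feasible.append((k1, k2))

print(f"{len(feasible)} gain pairs satisfy both constraints")
```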
Procedia PDF Downloads 427
1794 Model-Driven and Data-Driven Approaches for Crop Yield Prediction: Analysis and Comparison
Authors: Xiangtuo Chen, Paul-Henry Cournéde
Abstract:
Crop yield prediction is a paramount issue in agriculture. The main idea of this paper is to find an efficient way to predict the yield of corn based on meteorological records. The prediction models used in this paper can be classified into model-driven approaches and data-driven approaches, according to their modeling methodologies. The model-driven approaches are based on mechanistic crop modeling: they describe crop growth in interaction with the environment as dynamical systems. But the calibration of such dynamical systems is difficult, because it turns out to be a multidimensional non-convex optimization problem. An original contribution of this paper is to propose a statistical methodology, Multi-Scenarios Parameters Estimation (MSPE), for the parametrization of potentially complex mechanistic models from a new type of dataset (climatic data and final yield in many situations). It is tested with CORNFLO, a crop model for maize growth. On the other hand, the data-driven approach to yield prediction is free of the complex biophysical process, but it places strict requirements on the dataset. A second contribution of the paper is the comparison of these model-driven methods with classical data-driven methods. For this purpose, we consider two classes of regression methods: methods derived from linear regression (ridge and lasso regression, principal components regression, and partial least squares regression) and machine learning methods (random forest, k-nearest neighbors, artificial neural networks, and SVM regression). The dataset consists of 720 records of county-scale corn yield provided by the United States Department of Agriculture (USDA) and the associated climatic data. A 5-fold cross-validation process and two accuracy metrics, root mean square error of prediction (RMSEP) and mean absolute error of prediction (MAEP), were used to evaluate prediction capacity. The results show that, among the data-driven approaches, random forest is the most robust and generally achieves the best prediction error (MAEP 4.27%); it also outperforms our model-driven approach (MAEP 6.11%). However, the method of calibrating the mechanistic model from easily accessible datasets offers several side perspectives: the mechanistic model can potentially help to underline the stresses suffered by the crop or to identify biological parameters of interest for breeding purposes. For this reason, an interesting perspective is to combine these two types of approaches.
Keywords: crop yield prediction, crop model, sensitivity analysis, parameter estimation, particle swarm optimization, random forest
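A minimal sketch of the data-driven evaluation protocol: a random forest scored by RMSEP and MAEP under 5-fold cross-validation. The synthetic features stand in for the climate predictors; nothing here reproduces the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold
from sklearn.metrics import mean_squared_error, mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(720, 10))                                      # climate predictors
y = 5 + X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.3, size=720)  # yield

rmsep, maep = [], []
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = RandomForestRegressor(n_estimators=300, random_state=0)
    model.fit(X[train], y[train])
    pred = model.predict(X[test])
    rmsep.append(mean_squared_error(y[test], pred) ** 0.5)
    maep.append(mean_absolute_error(y[test], pred))

print(f"RMSEP={np.mean(rmsep):.3f}, MAEP={np.mean(maep):.3f}")
```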
Procedia PDF Downloads 231
1793 Forecasting Nokoué Lake Water Levels Using Long Short-Term Memory Network
Authors: Namwinwelbere Dabire, Eugene C. Ezin, Adandedji M. Firmin
Abstract:
The prediction of hydrological flows (rainfall-depth or rainfall-discharge) is becoming increasingly important in the management of hydrological risks such as floods. In this study, the Long Short-Term Memory (LSTM) network, a state-of-the-art algorithm dedicated to time series, is applied to predict the daily water level of Nokoué Lake in Benin. This paper aims to provide an effective and reliable method capable of reproducing the future daily water level of Nokoué Lake, which is influenced by a combination of two phenomena: rainfall and river flow (runoff from the Ouémé River, the Sô River, the Porto-Novo lagoon, and the Atlantic Ocean). Performance analysis based on the forecasting horizon indicates that the LSTM can predict the water level of Nokoué Lake up to a forecast horizon of t+10 days. Performance metrics such as the Root Mean Square Error (RMSE), the coefficient of determination (R²), the Nash-Sutcliffe Efficiency (NSE), and the Mean Absolute Error (MAE) agree on a forecast horizon of up to t+3 days: their values remain stable for horizons of t+1, t+2, and t+3 days. The values of R² and NSE are greater than 0.97 during the training and testing phases in the Nokoué Lake basin. Based on the evaluation indices used to assess the model's performance over forecast horizons, a horizon of t+3 days is chosen for predicting future daily water levels.
Keywords: forecasting, long short-term memory cell, recurrent artificial neural network, Nokoué lake
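A minimal sketch of the forecasting setup, assuming a Keras LSTM fed sliding windows of past daily levels to predict the level at t+3; the window length, network size, and synthetic series are illustrative choices, not the study's configuration.

```python
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import LSTM, Dense

series = np.sin(np.linspace(0, 60, 2000)) + 0.1 * np.random.randn(2000)

window, horizon = 30, 3                      # 30-day history, t+3 forecast
X = np.stack([series[i:i + window]
              for i in range(len(series) - window - horizon)])
y = series[window + horizon:]
X = X[..., None]                             # (samples, timesteps, features)

model = Sequential([LSTM(64, input_shape=(window, 1)), Dense(1)])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

pred = model.predict(X[-5:], verbose=0).ravel()
print("sample RMSE:", np.sqrt(np.mean((pred - y[-5:]) ** 2)))
```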
Procedia PDF Downloads 64
1792 Acceleration-Based Motion Model for Visual Simultaneous Localization and Mapping
Authors: Daohong Yang, Xiang Zhang, Lei Li, Wanting Zhou
Abstract:
Visual Simultaneous Localization and Mapping (VSLAM) is a technology that obtains information from the environment for self-positioning and mapping. It is widely used in computer vision, robotics, and other fields. Many visual SLAM systems, such as ORBSLAM3, employ a constant-velocity motion model that provides the initial pose of the current frame in order to improve the speed and accuracy of feature matching. However, in real situations the constant-velocity motion model is often not satisfied, which may lead to a large deviation between the obtained initial pose and the true value and to errors in the nonlinear optimization results. Therefore, this paper proposes a motion model based on acceleration that can be applied to most SLAM systems. In order to better describe the acceleration of the camera pose, we decouple the pose transformation matrix and calculate the rotation matrix and the translation vector separately, where the rotation matrix is represented by a rotation vector. We assume that, over a short period of time, the change in rotational angular velocity and in the translation step remains the same. Based on this assumption, the initial pose of the current frame is estimated. In addition, the error of the constant-velocity model is analyzed theoretically. Finally, we applied our proposed approach to the ORBSLAM3 system and evaluated two sequences of the TUM dataset. The results show that our proposed method gives a more accurate initial pose estimation, and the accuracy of the ORBSLAM3 system is improved by 6.61% and 6.46%, respectively, on the two test sequences.
Keywords: error estimation, constant acceleration motion model, pose estimation, visual SLAM
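A minimal sketch of one reading of this model: estimate the per-frame rotation-vector and translation increments from the last three poses, extrapolate both increments by their last change, and compose the predicted pose. This is an interpretation under stated assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def predict_pose(poses):
    """poses: the last three (Rotation, translation) pairs, oldest first."""
    (R0, t0), (R1, t1), (R2, t2) = poses
    w1 = (R0.inv() * R1).as_rotvec()     # rotation-vector increments
    w2 = (R1.inv() * R2).as_rotvec()
    w3 = w2 + (w2 - w1)                  # constant change of angular step
    v1, v2 = t1 - t0, t2 - t1
    v3 = v2 + (v2 - v1)                  # constant change of translation step
    return R2 * R.from_rotvec(w3), t2 + v3

poses = [(R.identity(), np.zeros(3)),
         (R.from_rotvec([0, 0, 0.05]), np.array([0.10, 0.0, 0.0])),
         (R.from_rotvec([0, 0, 0.11]), np.array([0.21, 0.0, 0.0]))]
R_pred, t_pred = predict_pose(poses)
print(R_pred.as_rotvec(), t_pred)
```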
Procedia PDF Downloads 94
1791 The Impact of Introspective Models on Software Engineering
Authors: Rajneekant Bachan, Dhanush Vijay
Abstract:
The visualization of operating systems has refined the Turing machine, and current trends suggest that the emulation of 32 bit architectures will soon emerge. After years of technical research into Web services, we demonstrate the synthesis of gigabit switches, which embodies the robust principles of theory. Loam, our new algorithm for forward-error correction, is the solution to all of these challenges.
Keywords: software engineering, architectures, introspective models, operating systems
Procedia PDF Downloads 538
1790 The Per Capita Income, Energy production and Environmental Degradation: A Comprehensive Assessment of the existence of the Environmental Kuznets Curve Hypothesis in Bangladesh
Authors: Ashique Mahmud, MD. Ataul Gani Osmani, Shoria Sharmin
Abstract:
In the first quarter of the twenty-first century, environmental contamination is among the most substantial global concerns and has gained priority with both national and international communities. Keeping this crucial fact in mind, this study applied different statistical and econometric methods to identify whether the country's gross national income has a significant impact on electricity production from nonrenewable sources and on air pollutants such as carbon dioxide, nitrous oxide, and methane emissions. Moreover, the primary objective of this research was to analyze whether the environmental Kuznets curve hypothesis holds for the examined variables. After analyzing the statistical properties of the variables, this study came to the conclusion that the environmental Kuznets curve hypothesis holds for gross national income and carbon dioxide emissions in Bangladesh, in the short run as well as the long run. This conclusion is based on the findings of ordinary least squares estimations, ARDL bounds tests, short-run causality analysis, the error correction model, and other pre- and post-diagnostic tests employed in the structural model. Moreover, this study demonstrates that the trajectory of gross national income and carbon dioxide emissions is in its initial stage of development and will increase up to the optimal peak; the compositional effect will then force emissions to decrease, and environmental quality will be restored in the long run.
Keywords: environmental Kuznets curve hypothesis, carbon dioxide emission in Bangladesh, gross national income in Bangladesh, autoregressive distributed lag model, Granger causality, error correction model
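A minimal sketch of the regression at the heart of any EKC test: emissions on income and income squared, where a positive linear and negative quadratic coefficient trace the inverted U. The synthetic series stands in for the 1981-2019 Bangladeshi data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
gni = np.linspace(400, 2500, 39)                 # per-capita income, 39 years
co2 = 0.004 * gni - 8e-7 * gni**2 + rng.normal(0, 0.1, 39)

X = sm.add_constant(np.column_stack([gni, gni**2]))
res = sm.OLS(co2, X).fit()
b1, b2 = res.params[1], res.params[2]
print(res.summary())
print("turning point (income at peak):", -b1 / (2 * b2))
```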
Procedia PDF Downloads 150
1789 Lamb Waves Wireless Communication in Healthy Plates Using Coherent Demodulation
Authors: Rudy Bahouth, Farouk Benmeddour, Emmanuel Moulin, Jamal Assaad
Abstract:
Guided ultrasonic waves are used in non-destructive testing (NDT) and structural health monitoring (SHM) for inspection and damage detection. Recently, wireless data transmission using ultrasonic waves in solid metallic channels has gained popularity in industrial applications such as nuclear, aerospace, and smart vehicles. The idea is to find a good substitute for electromagnetic waves, since they are highly attenuated near metallic components due to Faraday shielding. The proposed solution is to use guided ultrasonic waves such as Lamb waves as the information carrier, due to their capability of propagating over long distances; in addition, valuable information about the health of the structure can be extracted simultaneously. In this work, the reliable frequency bandwidth for communication is first extracted experimentally from dispersion curves. Then, an experimental platform for wireless communication using Lamb waves is described and built. After this, the coherent demodulation algorithm used in telecommunications is tested for the Amplitude Shift Keying (ASK), On-Off Keying (OOK), and Binary Phase Shift Keying (BPSK) modulation techniques. Signal-processing parameters such as the threshold choice, the number of cycles per bit, and the bit rate are optimized. Experimental results are compared based on the average bit error rate. The results show high sensitivity to threshold selection for the ASK and OOK techniques, resulting in a bit-rate decrease. The BPSK technique shows the highest stability and data rate among all tested modulation techniques.
Keywords: Lamb waves communication, wireless communication, coherent demodulation, bit error rate
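A minimal sketch of coherent BPSK demodulation: multiply the received waveform by a synchronized reference carrier, integrate over each bit, and threshold at zero. The carrier frequency, cycles per bit, and noise level are illustrative, not the platform's values.

```python
import numpy as np

rng = np.random.default_rng(2)
fc, fs, cycles_per_bit = 100e3, 2e6, 10
spb = int(fs * cycles_per_bit / fc)              # samples per bit

bits = rng.integers(0, 2, 1000)
t = np.arange(len(bits) * spb) / fs
carrier = np.cos(2 * np.pi * fc * t)
tx = np.repeat(2 * bits - 1, spb) * carrier      # BPSK: carrier phase 0 or pi
rx = tx + 0.8 * rng.normal(size=tx.size)         # additive channel noise

# Coherent demodulation: mix with the reference carrier, then integrate.
corr = (rx * carrier).reshape(len(bits), spb).sum(axis=1)
decided = (corr > 0).astype(int)
print("BER:", np.mean(decided != bits))
```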
Procedia PDF Downloads 260
1788 Hypoglossal Nerve Stimulation (Baseline vs. 12 months) for Obstructive Sleep Apnea: A Meta-Analysis
Authors: Yasmeen Jamal Alabdallat, Almutazballlah Bassam Qablan, Hamza Al-Salhi, Salameh Alarood, Ibraheem Alkhawaldeh, Obada Abunar, Adam Abdallah
Abstract:
Obstructive sleep apnea (OSA) is a disorder caused by repeated collapse of the upper airway during sleep. It is the most common sleep-related breathing disorder; OSA can cause loud snoring and daytime fatigue, or more severe problems such as high blood pressure, cardiovascular disease, coronary artery disease, insulin-resistant diabetes, and depression. The hypoglossal nerve stimulator (HNS) is an implantable medical device that reduces the occurrence of obstructive sleep apnea by electrically stimulating the hypoglossal nerve in rhythm with the patient's breathing, causing the tongue to move; this stimulation helps keep the patient's airway clear during sleep. This systematic review and meta-analysis aimed to assess the clinical outcomes of hypoglossal nerve stimulation as a treatment for obstructive sleep apnea. A computer literature search of PubMed, Scopus, Web of Science, and the Cochrane Central Register of Controlled Trials was conducted from inception until August 2022. Studies assessing the following clinical outcomes were pooled in the meta-analysis using Review Manager software: the Apnea-Hypopnea Index (AHI), the Epworth Sleepiness Scale (ESS), the Functional Outcomes of Sleep Questionnaire (FOSQ), Oxygen Desaturation Indices (ODI), and Oxygen Saturation (SaO2). We assessed the quality of studies according to the Cochrane risk-of-bias tool for randomized trials (RoB2), the Risk of Bias in Non-randomized Studies of Interventions tool (ROBINS-I), and a modified version of the NOS for the non-comparative cohort studies. Thirteen studies (six clinical trials and seven prospective cohort studies) with a total of 817 patients were included in the meta-analysis. The results for AHI were reported in 11 studies examining 696 OSA patients; we found a significant improvement in the AHI after 12 months of HNS (MD = 18.2, 95% CI 16.7 to 19.7; I² = 0%; P < 0.00001). Further, 12 studies reported ESS results after 12 months of the intervention, with a significant improvement in sleepiness among the 757 examined OSA patients (MD = 5.3, 95% CI 4.75 to 5.86; I² = 65%; P < 0.0001). Moreover, nine studies involving 699 participants reported FOSQ results after 12 months of HNS, with a significant reported improvement (MD = -3.09, 95% CI -3.41 to -2.77; I² = 0%; P < 0.00001). In addition, ten studies reported ODI results, with a significant improvement after 12 months of HNS among the 817 examined patients (MD = 14.8, 95% CI 13.25 to 16.32; I² = 0%; P < 0.00001). Hypoglossal nerve stimulation showed a significant positive impact on obstructive sleep apnea patients after 12 months of therapy in terms of the apnea-hypopnea index, oxygen desaturation indices, the behavioral morbidity associated with obstructive sleep apnea, and the functional status affected by sleepiness.
Keywords: apnea, meta-analysis, hypoglossal, stimulation
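A minimal sketch of the inverse-variance pooling behind mean differences like those above, together with Cochran's Q and the I² heterogeneity statistic; the per-study values are invented for illustration.

```python
import numpy as np

md = np.array([17.5, 18.9, 18.0, 19.2])     # per-study mean differences (AHI)
se = np.array([1.2, 0.9, 1.5, 1.1])         # their standard errors

w = 1 / se**2                               # inverse-variance weights
pooled = np.sum(w * md) / np.sum(w)
pooled_se = np.sqrt(1 / np.sum(w))
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

Q = np.sum(w * (md - pooled) ** 2)          # Cochran's Q
df = len(md) - 1
I2 = max(0.0, (Q - df) / Q) * 100
print(f"MD={pooled:.2f}, 95% CI=({ci[0]:.2f}, {ci[1]:.2f}), I^2={I2:.0f}%")
```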
Procedia PDF Downloads 115
1787 A Pilot Study to Investigate the Use of Machine Translation Post-Editing Training for Foreign Language Learning
Authors: Hong Zhang
Abstract:
The main purpose of this study is to show that machine translation (MT) post-editing (PE) training can help our Chinese students learn Spanish as a second language. Our hypothesis is that they might make better use of MT by learning PE skills specific to foreign language learning. We developed PE training materials based on the data collected in a previous study. The training materials included the characteristic error types of MT output and the error types that our Chinese students of Spanish could not detect in the previous year's experiment. This year we performed a pilot study in order to evaluate the effectiveness of the PE training materials and the extent to which PE training helps Chinese students who study Spanish. We used screen recording to capture the sessions and noted every action taken by the students. Participants were speakers of Chinese with intermediate knowledge of Spanish. They were divided into two groups: Group A received PE training and Group B did not. We prepared a Chinese text for both groups; participants first translated it by themselves (human translation), then used Google Translate to translate the text, and were asked to post-edit the raw MT output. Comparing the results of the PE test, Group A identified and corrected the errors faster than Group B; Group A did especially well on omission, word order, part of speech, terminology, mistranslation, official names, and formal register. The results of this study show that PE training can help Chinese students learn Spanish as a second language. In the future, we plan to focus on the students' struggles during their Spanish studies and complete the PE training materials for teaching Chinese students to learn Spanish with machine translation.
Keywords: machine translation, post-editing, post-editing training, Chinese, Spanish, foreign language learning
Procedia PDF Downloads 144