Search results for: models error comparison
12043 Towards Efficient Reasoning about Families of Class Diagrams Using Union Models
Authors: Tejush Badal, Sanaa Alwidian
Abstract:
Class diagrams are useful tools within the Unified Modelling Language (UML) for modelling and visualizing the relationships between, and properties of, objects within a system. As a system evolves over time and space (e.g., across products), a series of models with several commonalities and variabilities creates what is known as a model family. When there are several versions of a model, examining each model individually becomes expensive in terms of computational resources. To avoid performing redundant operations, this paper proposes an approach for representing a family of class diagrams as a union model, a single generic model that represents the whole family. The paper aims to analyze and reason about a family of class diagrams using union models rather than analyzing each member model individually. The union model provides a holistic view of the model family that cannot otherwise be obtained from an individual analysis approach; this, in turn, speeds up the analysis of a family of models compared to analyzing individual models one at a time.
Keywords: analysis, class diagram, model family, unified modeling language, union model
Procedia PDF Downloads 74
12042 Applying Business Model Patterns: A Case Study in Latin American Building Industry
Authors: James Alberto Ortega Morales, Nelson Andrés Martínez Marín
Abstract:
The building industry is one of the most important sectors around the world in terms of its contribution to indices such as GDP and labor. On the other hand, it is a major contributor to greenhouse gases (GHG) and waste generation, contributing to global warming. In this sense, it is necessary to establish sustainable practices, from the strategic point of view to the operational point of view, in all businesses and industries. Business models do not escape this reality, given their mediating role between strategy and operations. Business models can turn from traditional practices that seek only economic benefits to sustainable business models that generate both economic value and value for society and the environment. Recent advances in the analysis of sustainable business models offer different classifications that allow finding potential triple bottom line (economic, social and environmental) solutions applicable in every business sector. Within these advances, 11 groups and 45 patterns of sustainable business models have been identified; such patterns can be found either in business models as a whole or concurrently in their components. This article presents the analysis of a case study, seeking to identify the components and elements that are part of it, using the ECO CANVAS conceptual model. The case study empirically shows the concurrent existence of different patterns of business models for sustainability, serving as an example and inspiration for other Latin American companies interested in integrating sustainability into their new and existing business models.
Keywords: sustainable business models, business sustainability, business model patterns, case study, construction industry
Procedia PDF Downloads 113
12041 Volatility Model with Markov Regime Switching to Forecast Baht/USD
Authors: Nop Sopipan
Abstract:
In this paper, we forecast the volatility of the Baht/USD exchange rate using Markov Regime Switching GARCH (MRS-GARCH) models. These models allow volatility to have different dynamics according to unobserved regime variables. The main purpose of this paper is to find out whether MRS-GARCH models are an improvement on GARCH-type models in terms of modeling and forecasting Baht/USD volatility. The MRS-GARCH model performs best for Baht/USD volatility in the short term, but the GARCH model performs best in the long term.
Keywords: volatility, Markov Regime Switching, forecasting, Baht/USD
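As an illustrative sketch (not the authors' fitted model), the GARCH(1,1) conditional-variance recursion that MRS-GARCH generalizes with regime-dependent parameters can be written as follows; the parameter values and returns are invented for illustration.

```python
# GARCH(1,1) variance recursion: h_t = omega + alpha * r_{t-1}^2 + beta * h_{t-1}.
# MRS-GARCH lets (omega, alpha, beta) switch with an unobserved regime variable.
def garch_variance(returns, omega, alpha, beta, h0):
    """Return the conditional variance series h_t for a GARCH(1,1) model."""
    h = [h0]
    for r in returns[:-1]:
        h.append(omega + alpha * r ** 2 + beta * h[-1])
    return h

# Hypothetical daily returns and parameters (not estimated from Baht/USD data):
returns = [0.01, -0.02, 0.015, -0.005, 0.03]
h = garch_variance(returns, omega=1e-5, alpha=0.1, beta=0.85, h0=1e-4)
# Volatility clustering: a large return raises the next period's variance.
```

In the regime-switching variant, one such recursion runs per regime and the forecasts are mixed with the filtered regime probabilities.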
Procedia PDF Downloads 302
12040 EEG Correlates of Trait and Mathematical Anxiety during Lexical and Numerical Error-Recognition Tasks
Authors: Alexander N. Savostyanov, Tatiana A. Dolgorukova, Elena A. Esipenko, Mikhail S. Zaleshin, Margherita Malanchini, Anna V. Budakova, Alexander E. Saprygin, Tatiana A. Golovko, Yulia V. Kovas
Abstract:
EEG correlates of mathematical and trait anxiety levels were studied in 52 healthy Russian speakers during the execution of error-recognition tasks with lexical, arithmetic and algebraic conditions. Event-related spectral perturbations (ERSP) were used as a measure of brain activity. The ERSP plots revealed alpha/beta desynchronizations within a 500-3000 ms interval after task onset and slow-wave synchronization within an interval of 150-350 ms. The amplitudes in these intervals reflected the accuracy of error recognition and were differently associated with the three conditions. Correlates of anxiety were found in the theta (4-8 Hz) and beta2 (16-20 Hz) frequency bands. In the theta band, the effects of mathematical anxiety were more strongly expressed in the lexical than in the arithmetic and algebraic conditions. The mathematical anxiety effects in the theta band were associated with differences between anterior and posterior cortical areas, whereas the effects of trait anxiety were associated with inter-hemispheric differences. In the beta1 and beta2 bands, the effects of trait and mathematical anxiety were directed oppositely: trait anxiety was associated with an increase in the amplitude of desynchronization, whereas mathematical anxiety was associated with a decrease in this amplitude. The effect of mathematical anxiety in the beta2 band was insignificant for the lexical condition but was strongest in the algebraic condition. The EEG correlates of anxiety in the theta band can be interpreted as indices of task emotionality, whereas the reaction in the beta2 band is related to the tension of intellectual resources.
Keywords: EEG, brain activity, lexical and numerical error-recognition tasks, mathematical and trait anxiety
Procedia PDF Downloads 561
12039 Basic Study of Mammographic Image Magnification System with Eye-Detector and Simple EEG Scanner
Authors: Aika Umemuro, Mitsuru Sato, Mizuki Narita, Saya Hori, Saya Sakurai, Tomomi Nakayama, Ayano Nakazawa, Toshihiro Ogura
Abstract:
Mammography requires the detection of very small calcifications, and physicians search for microcalcifications by magnifying the images as they read them. The mouse is necessary to zoom in on the images, but this can be tiring and distracting when many images are read in a single day. Therefore, an image magnification system combining an eye-detector and a simple electroencephalograph (EEG) scanner was devised, and its operability was evaluated. Two experiments were conducted in this study: the measurement of eye-detection error using an eye-detector and the measurement of the time required for image magnification using a simple EEG scanner. Eye-detector validation showed that the mean distance of eye-detection error ranged from 0.64 cm to 2.17 cm, with an overall mean of 1.24 ± 0.81 cm for the observers. The results showed that the eye detection error was small enough for the magnified area of the mammographic image. The average time required for point magnification in the verification of the simple EEG scanner ranged from 5.85 to 16.73 seconds, and individual differences were observed. The reason for this may be that the size of the simple EEG scanner used was not adjustable, so it did not fit well for some subjects. The use of a simple EEG scanner with size adjustment would solve this problem. Therefore, the image magnification system using the eye-detector and the simple EEG scanner is useful.
Keywords: EEG scanner, eye-detector, mammography, observers
Procedia PDF Downloads 215
12038 Short Text Classification Using Part of Speech Feature to Analyze Students' Feedback of Assessment Components
Authors: Zainab Mutlaq Ibrahim, Mohamed Bader-El-Den, Mihaela Cocea
Abstract:
Students' textual feedback can hold unique patterns and useful information about the learning process: it can hold information about the advantages and disadvantages of teaching methods, assessment components, facilities, and other aspects of teaching. The results of analysing such feedback can form a key point for institutions' decision makers to advance and update their systems accordingly. This paper proposes a data mining framework for analysing end-of-unit general textual feedback using a part-of-speech (PoS) feature with four machine learning algorithms: support vector machines, decision tree, random forest, and naive Bayes. The proposed framework has two tasks: first, to use the above algorithms to build an optimal model that automatically classifies the whole data set into two subsets, one tailored to assessment practices (assessment related) and the other containing the non-assessment-related data; second, to use the same algorithms to build an optimal model for the whole data set and the new data subsets to automatically detect their sentiment. The significance of this paper is to compare the performance of the above four algorithms using the part-of-speech feature with the performance of the same algorithms using an n-gram feature. The paper follows the Knowledge Discovery and Data Mining (KDDM) framework to construct the classification and sentiment analysis models: understanding the assessment domain, cleaning and pre-processing the data set, selecting and running the data mining algorithm, interpreting mined patterns, and consolidating the discovered knowledge. The experiments show that models using either feature performed very well on the first task, but on the second task, models that used the part-of-speech feature underperformed in comparison with models that used unigram and bigram features.
Keywords: assessment, part of speech, sentiment analysis, student feedback
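A minimal sketch of the two feature types being compared: word n-grams versus part-of-speech n-grams. The toy tag lookup below stands in for a real PoS tagger, and the sentence is invented; this is not the paper's pipeline.

```python
# Word n-grams vs. PoS n-grams over the same token sequence.
def ngrams(tokens, n):
    """Return the list of n-grams (as tuples) over a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

# Hypothetical tag table standing in for a trained PoS tagger:
TOY_TAGS = {"the": "DET", "exam": "NOUN", "was": "VERB", "fair": "ADJ"}

def pos_features(tokens, n):
    """Map tokens to PoS tags (toy lookup) and return PoS n-grams."""
    tags = [TOY_TAGS.get(t, "X") for t in tokens]
    return ngrams(tags, n)

tokens = ["the", "exam", "was", "fair"]
unigrams = ngrams(tokens, 1)           # 4 word unigrams
bigrams = ngrams(tokens, 2)            # 3 word bigrams
pos_bigrams = pos_features(tokens, 2)  # PoS bigrams, e.g. ("DET", "NOUN")
```

Either feature set can then be vectorised (e.g., by counts) and fed to the four classifiers named in the abstract.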
Procedia PDF Downloads 142
12037 Comparison of Presented Definitions to Authenticity and Integrity
Authors: Golnaz Salehi Mourkani
Abstract:
The two concepts of integrity and authenticity have been applied in the literature to adaptive reuse and conservation, respectively: the word "integrity" appears far more often in texts on adaptive reuse, whereas "authenticity" is usually associated with conservation. According to Stovel (2007), in some cases the combined form "integrity/authenticity" is used in texts, which leads readers to infer a single concept from the two. In this article, by referring to definitions and comparing the aspects specific to each of the concepts of authenticity and integrity through a literature review, an attempt is made to examine the common and distinctive aspects of each, and thereby to establish their differences in the context of adaptive reuse.
Keywords: adaptive reuse, integrity, authenticity, conservation
Procedia PDF Downloads 430
12036 Tracing Sources of Sediment in an Arid River, Southern Iran
Authors: Hesam Gholami
Abstract:
Elevated suspended sediment loads in riverine systems resulting from accelerated erosion due to human activities are a serious threat to the sustainable management of watersheds and ecosystem services therein worldwide. Therefore, mitigation of deleterious sediment effects as a distributed or non-point pollution source in the catchments requires reliable provenance information. Sediment tracing or sediment fingerprinting, as a combined process consisting of sampling, laboratory measurements, different statistical tests, and the application of mixing or unmixing models, is a useful technique for discriminating the sources of sediments. From 1996 to the present, different aspects of this technique, such as grouping the sources (spatial and individual sources), discriminating the potential sources by different statistical techniques, and modification of mixing and unmixing models, have been introduced and modified by many researchers worldwide, and have been applied to identify the provenance of fine materials in agricultural, rural, mountainous, and coastal catchments, and in large catchments with numerous lakes and reservoirs. In the last two decades, efforts exploring the uncertainties associated with sediment fingerprinting results have attracted increasing attention. The frameworks used to quantify the uncertainty associated with fingerprinting estimates can be divided into three groups comprising Monte Carlo simulation, Bayesian approaches and generalized likelihood uncertainty estimation (GLUE). Given the above background, the primary goal of this study was to apply geochemical fingerprinting within the GLUE framework in the estimation of sub-basin spatial sediment source contributions in the arid Mehran River catchment in southern Iran, which drains into the Persian Gulf. 
The accuracy of GLUE predictions generated using four different sets of statistical tests for discriminating three sub-basin spatial sources was evaluated using 10 virtual sediment (VS) samples with known source contributions, using the root mean square error (RMSE) and mean absolute error (MAE). Based on the results, the contributions modeled by GLUE for the western, central and eastern sub-basins are 1-42% (overall mean 20%), 0.5-30% (overall mean 12%) and 55-84% (overall mean 68%), respectively. According to the mean absolute fit (MAF; ≥ 95% for all target sediment samples) and goodness-of-fit (GOF; ≥ 99% for all samples), our suggested modeling approach is an accurate technique for quantifying the sources of sediment in catchments. Overall, the estimated source proportions can help watershed engineers plan the targeting of conservation programs for soil and water resources.
Keywords: sediment source tracing, generalized likelihood uncertainty estimation, virtual sediment mixtures, Iran
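A sketch of the two accuracy measures used to evaluate the GLUE predictions against virtual mixtures with known source proportions; the proportions below are invented, not the study's data.

```python
import math

def rmse(known, predicted):
    """Root mean square error between known and predicted proportions."""
    return math.sqrt(sum((k - p) ** 2 for k, p in zip(known, predicted)) / len(known))

def mae(known, predicted):
    """Mean absolute error between known and predicted proportions."""
    return sum(abs(k - p) for k, p in zip(known, predicted)) / len(known)

# One hypothetical virtual mixture: known vs. GLUE-modelled source proportions
# for the western, central and eastern sub-basins.
known = [0.20, 0.12, 0.68]
predicted = [0.22, 0.10, 0.68]
err_rmse = rmse(known, predicted)
err_mae = mae(known, predicted)
```

Averaging these errors over all virtual mixtures gives the overall accuracy of each of the four discrimination schemes.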
Procedia PDF Downloads 74
12035 A Computational Model of the Thermal Grill Illusion: Simulating the Perceived Pain Using Neuronal Activity in Pain-Sensitive Nerve Fibers
Authors: Subhankar Karmakar, Madhan Kumar Vasudevan, Manivannan Muniyandi
Abstract:
The Thermal Grill Illusion (TGI) elicits a strong and often painful burning sensation when interlacing warm and cold stimuli that are individually non-painful excite thermoreceptors beneath the skin. Among several theories of TGI, the "disinhibition" theory is the most widely accepted in the literature. According to this theory, TGI results from the disinhibition, or unmasking, of the pain-sensitive HPC (Heat-Pinch-Cold) nerve fibers due to the inhibition of the cold-sensitive nerve fibers that are responsible for masking the HPC nerve fibers. Although researchers have focused on understanding TGI through experiments and models, none of them has investigated the prediction of TGI pain intensity through a computational model. Furthermore, the comparison of psychophysically perceived TGI intensity with neurophysiological models has not yet been studied. Predicting pain intensity through a computational model of TGI can help in optimizing thermal displays and in understanding pathological conditions related to temperature perception. The current study focuses on developing a computational model to predict the intensity of TGI pain and on experimentally observing the perceived TGI pain. The computational model is developed based on the disinhibition theory and by utilizing existing popular models of warm and cold receptors in the skin. The model aims to predict the neuronal activity of the HPC nerve fibers. With a temperature-controlled thermal grill setup, fifteen participants (ten males and five females) were presented with five temperature differences between the warm and cold grills (each repeated three times). All the participants rated the perceived TGI pain sensation on a scale of one to ten. Over the range of temperature differences, the experimentally observed perceived intensity of TGI is compared with the neuronal activity of the pain-sensitive HPC nerve fibers.
The simulation results show a monotonically increasing relationship between the temperature differences and the neuronal activity of the HPC nerve fibers. Moreover, a similar monotonically increasing relationship is experimentally observed between the temperature differences and the perceived TGI intensity. This shows the potential for comparing the TGI pain intensity observed in the experimental study with the neuronal activity predicted by the model. The proposed model intends to bridge the theoretical understanding of the TGI and the experimental results obtained through psychophysics. Further studies in pain perception are needed to develop a more accurate version of the current model.
Keywords: thermal grill illusion, computational modelling, simulation, psychophysics, haptics
Procedia PDF Downloads 171
12034 Vector Quantization Based on Vector Difference Scheme for Image Enhancement
Authors: Biji Jacob
Abstract:
The vector quantization algorithm uses a minimum-distance calculation for codebook generation; this time-consuming calculation, performed on each pixel value, leads to computational complexity. The codebook is updated by comparing the distance of each vector to its centroid vector as a measure of closeness. In this paper, vector quantization is modified based on a vector difference algorithm for image enhancement purposes. In the proposed scheme, the vector differences between the vectors are considered as the new generation vectors, or new codebook vectors. The codebook is updated by comparing each new generation vector with a threshold value having minimum error with respect to the parent vector. The minimum error decides the fitness of each newly generated vector. Thus the codebook is generated in an adaptive manner, and the fitness value is determined for the suppression of the degraded portion of the image, leading to the enhancement of the image through the adaptive searching capability of vector quantization with the vector difference algorithm. Experimental results show that the vector difference scheme efficiently modifies the vector quantization algorithm for enhancing the image, with peak signal-to-noise ratio (PSNR), mean square error (MSE), and Euclidean distance (E_dist) as the performance parameters.
Keywords: codebook, image enhancement, vector difference, vector quantization
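An illustrative sketch of the baseline step the paper modifies: assigning each image vector to its nearest codeword by minimum squared Euclidean distance and then moving each codeword to the centroid of its assigned vectors. The tiny 2-D vectors are invented; the paper's vector-difference update is not reproduced here.

```python
def nearest(vector, codebook):
    """Index of the codeword with minimum squared distance to the vector."""
    dists = [sum((v - c) ** 2 for v, c in zip(vector, code)) for code in codebook]
    return dists.index(min(dists))

def update_codebook(vectors, codebook):
    """One centroid-update pass over the training vectors."""
    groups = [[] for _ in codebook]
    for vec in vectors:
        groups[nearest(vec, codebook)].append(vec)
    return [
        [sum(col) / len(group) for col in zip(*group)] if group else code
        for group, code in zip(groups, codebook)
    ]

# Toy data: two clusters of 2-D "pixel vectors" and a two-codeword codebook.
vectors = [[0, 0], [1, 1], [9, 9], [10, 10]]
codebook = [[0, 0], [10, 10]]
new_codebook = update_codebook(vectors, codebook)
```

It is this per-vector minimum-distance scan that the proposed vector-difference scheme replaces with threshold-based fitness checks on difference vectors.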
Procedia PDF Downloads 267
12033 Least Squares Solution for Linear Quadratic Gaussian Problem with Stochastic Approximation Approach
Authors: Sie Long Kek, Wah June Leong, Kok Lay Teo
Abstract:
The linear quadratic Gaussian model is a standard mathematical model for the stochastic optimal control problem. The combination of the linear quadratic estimator and the linear quadratic regulator allows the state estimation and the optimal control policy to be designed separately; this is known as the separation principle. In this paper, an efficient computational method is proposed to solve the linear quadratic Gaussian problem. In our approach, the Hamiltonian function is defined and the necessary conditions are derived. In addition, the output error is defined and a least-squares optimization problem is introduced. By determining the first-order necessary condition, the gradient of the sum of squares of the output error is established. From this point of view, the stochastic approximation approach is employed so that the optimal control policy is updated. Within a given tolerance, the iteration procedure is stopped and the optimal solution of the linear quadratic Gaussian problem is obtained. For illustration, an example of the linear quadratic Gaussian problem is studied, and the result shows the efficiency of the proposed approach. In conclusion, the applicability of the proposed approach for solving the linear quadratic Gaussian problem is clearly demonstrated.
Keywords: iteration procedure, least squares solution, linear quadratic Gaussian, output error, stochastic approximation
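A minimal sketch of the stochastic-approximation iteration described above: the decision variable is repeatedly corrected against the gradient of the sum of squared output errors with a diminishing gain, until the update falls within a given tolerance. The quadratic objective is a stand-in for the actual LQG output-error criterion, not the paper's problem.

```python
def stochastic_approximation(grad, x0, gain0=1.0, tol=1e-8, max_iter=10000):
    """Iterate x_{k+1} = x_k - a_k * grad(x_k) with diminishing gain a_k = gain0/k,
    stopping when the update magnitude is within the given tolerance."""
    x = x0
    for k in range(1, max_iter + 1):
        step = (gain0 / k) * grad(x)
        x = x - step
        if abs(step) < tol:
            break
    return x

# Stand-in objective J(x) = (x - 3)^2 with gradient 2(x - 3); optimum x* = 3.
x_opt = stochastic_approximation(lambda x: 2.0 * (x - 3.0), x0=0.0)
```

In the paper's setting, the scalar stand-in is replaced by the control-policy parameters and the gradient of the least-squares output-error criterion.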
Procedia PDF Downloads 186
12032 Use of Cyber-Physical Devices for the Implementation of Virtual and Augmented Realities in Bridge Construction
Authors: Muhammmad Fawad
Abstract:
The bridge construction industry has been revolutionized by the applications of Virtual Reality (VR) and Augmented Reality (AR). In this article, the author focuses on the field applications of digital technologies in structural engineering, especially bridge engineering. This research analyzed the use of VR/AR for the assessment of bridge concepts. For this purpose, the author used cyber-physical devices: an Oculus Quest (OQ) for the implementation of VR, and a Trimble Microsoft HoloLens (THL) and Trimble Site Vision (TSV) for the implementation of AR/MR, by visualizing models of a bridge planned to be constructed in Poland. The visualization of the models in Extended Reality (XR) is based on the development of BIM models of the bridge, which are then uploaded to the platforms required to implement these models in XR. This research made it possible to implement the models in MR so that a bridge at a 1:1 scale was placed at its exact location, and the authorities were given the possibility to visualize the exact scale and location of the bridge before its construction.
Keywords: augmented reality, virtual reality, HoloLens, BIM, bridges
Procedia PDF Downloads 122
12031 Fault Analysis of Induction Machine Using Finite Element Method (FEM)
Authors: Wiem Zaabi, Yemna Bensalem, Hafedh Trabelsi
Abstract:
The paper presents a finite element (FE) based efficient analysis procedure for the induction machine (IM). Two FE formulation approaches are proposed to achieve this goal: the magnetostatic and the non-linear transient time-stepped formulations. A study based on finite element models offers much more information on the phenomena characterizing the operation of electrical machines than classical analytical models, which explains the increasing interest in finite element investigations of electrical machines. Based on finite element models, this paper studies the influence of stator and rotor faults on the behavior of the IM. In this work, a simple dynamic model for an IM with an inter-turn winding fault and a broken-bar fault is presented. This fault model is used to study the IM under various fault conditions and severities. Simulations are conducted to validate the fault model for different levels of fault severity, and the comparison of the results obtained by the simulation tests verifies the precision of the proposed FEM model. The paper also presents a technique based on Fast Fourier Transform (FFT) analysis of the stator current and the electromagnetic torque to detect broken rotor bar faults. The technique used and the obtained results clearly show the possibility of extracting signatures to detect and locate faults.
Keywords: finite element method (FEM), induction motor (IM), short-circuit fault, broken rotor bar, fast Fourier transform (FFT) analysis
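An illustrative sketch of the spectral signature behind the FFT-based detection: a broken rotor bar produces sidebands at (1 ± 2s)·f around the supply frequency f in the stator current. The signal below is synthetic (f = 50 Hz, slip s = 0.04, sideband amplitudes invented), not machine data, and a single-bin DFT stands in for a full FFT.

```python
import cmath
import math

def dft_amplitude(signal, fs, freq):
    """Amplitude of the component at `freq` Hz via a single-bin DFT."""
    n = len(signal)
    coeff = sum(x * cmath.exp(-2j * math.pi * freq * k / fs)
                for k, x in enumerate(signal))
    return 2 * abs(coeff) / n

fs, f, s = 1000.0, 50.0, 0.04
n = 2000  # 2 s of samples -> 0.5 Hz frequency resolution
current = [math.sin(2 * math.pi * f * k / fs)
           + 0.05 * math.sin(2 * math.pi * (1 - 2 * s) * f * k / fs)  # 46 Hz sideband
           + 0.05 * math.sin(2 * math.pi * (1 + 2 * s) * f * k / fs)  # 54 Hz sideband
           for k in range(n)]

a_fund = dft_amplitude(current, fs, f)                  # fundamental at 50 Hz
a_lower = dft_amplitude(current, fs, (1 - 2 * s) * f)   # lower sideband at 46 Hz
```

The presence and size of the sideband amplitudes relative to the fundamental is the fault signature used to detect and grade the broken-bar condition.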
Procedia PDF Downloads 299
12030 Public Spending and Economic Growth: An Empirical Analysis of Developed Countries
Authors: Bernur Acikgoz
Abstract:
The purpose of this paper is to investigate the effects of public spending on economic growth and to examine the sources of economic growth in developed countries since the 1990s. The paper analyses the effect of public spending on economic growth based on the Cobb-Douglas production function, using two econometric models, Autoregressive Distributed Lag (ARDL) and Dynamic Fixed Effects (DFE), for 21 developed countries (high-income OECD countries) over the period 1990-2013. The results of the two models parallel each other, and both support the view that public spending plays an important role in economic growth. This result is consistent with theory and previous empirical studies.
Keywords: public spending, economic growth, panel data, ARDL models
Procedia PDF Downloads 370
12029 GNSS Aided Photogrammetry for Digital Mapping
Authors: Muhammad Usman Akram
Abstract:
This research work is based on GNSS-aided photogrammetry for digital mapping. It focuses on the topographic survey of an area or site that is to be used in future planning and development (P&D) or for further examination, exploration, research and inspection. Surveying and mapping hard-to-access and hazardous areas is very difficult using traditional techniques and methodologies; it is also time-consuming and labor-intensive and yields limited, less precise data. In comparison, the advanced techniques save manpower and provide more precise output with a wide variety of data sets. In this experiment, the aerial photogrammetry technique is used: a UAV flies over an area, captures geocoded images, and a three-dimensional model (3-D model) is built. The UAV operates on a user-specified path or area with various parameters: flight altitude, ground sampling distance (GSD), image overlap, camera angle, etc. For ground control, a network of points on the ground is observed as ground control points (GCPs) using a Differential Global Positioning System (DGPS) in PPK or RTK mode. The raw data collected by the UAV and DGPS are then processed in digital image processing programs and computer-aided design software, from which we obtain, as outputs, a dense point cloud, a digital elevation model (DEM) and an orthophoto. The imagery is converted into geospatial data by digitizing over the orthophoto, and the DEM is further converted into a digital terrain model (DTM) for contour generation or a digital surface. As a result, we obtain a digital map of the area to be surveyed. In conclusion, the processed data were compared with exact measurements taken on site; the error is accepted if it does not breach the survey accuracy limits set by the concerned institutions.
Keywords: photogrammetry, post processing kinematics, real time kinematics, manual data inquiry
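Among the flight parameters listed, the ground sampling distance follows a simple geometric relation: the ground footprint of one pixel grows linearly with flight altitude. A sketch with illustrative camera values (the sensor width, focal length and image width below are assumptions, not taken from the paper):

```python
def ground_sampling_distance(sensor_width_mm, focal_length_mm,
                             altitude_m, image_width_px):
    """GSD in metres per pixel:
    (sensor width * flight altitude) / (focal length * image width)."""
    return (sensor_width_mm * altitude_m) / (focal_length_mm * image_width_px)

# Hypothetical camera: 13.2 mm sensor, 8.8 mm lens, 4000 px image, flown at 100 m.
gsd_m = ground_sampling_distance(13.2, 8.8, 100.0, 4000)  # metres per pixel
```

Flight altitude is therefore chosen backwards from the map accuracy the survey requires.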
Procedia PDF Downloads 30
12028 Assessing Level of Pregnancy Rate and Milk Yield in Indian Murrah Buffaloes
Authors: V. Jamuna, A. K. Chakravarty, C. S. Patil, Vijay Kumar, M. A. Mir, Rakesh Kumar
Abstract:
Intense selection of buffaloes for milk production at organized herds of the country, without giving due attention to fertility traits such as pregnancy rate, has led to deterioration in their performance. The aim of this study is to develop an optimum model for predicting pregnancy rate and to assess the level of pregnancy rate with respect to milk production in Murrah buffaloes. Data pertaining to 1224 lactation records of Murrah buffaloes spread over a period of 21 years were analyzed, and it was observed that pregnancy rate had a negative phenotypic association with lactation milk yield (-0.08 ± 0.04). For developing an optimum model for pregnancy rate in Murrah buffaloes, seven simple and multiple regression models were developed. Among the seven models, model II, having only service period as an independent reproduction variable, was found to be the best prediction model, based on four statistical criteria: high coefficient of determination (R²), low mean sum of squares due to error (MSSe), conceptual predictive (CP) value, and Bayesian information criterion (BIC). For standardizing the level of fertility with milk production, pregnancy rate was classified into seven classes with increments of 10% in all parities and lifetime, together with the corresponding average pregnancy rate in relation to the average lactation milk yield (MY). It was observed that to achieve around 2000 kg MY, which can be considered optimum for Indian Murrah buffaloes, the pregnancy rate should be between 30-50%.
Keywords: life time, pregnancy rate, production, service period, standardization
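A sketch of the model-selection criteria named in the abstract, applied to a one-predictor regression of pregnancy rate on service period. The five data points are invented for illustration; only the criteria (R², MSSe, BIC), not the study's data or coefficients, are reproduced.

```python
import math

def fit_simple_regression(x, y):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def criteria(x, y, a, b):
    """Coefficient of determination R^2, error mean square MSSe, and BIC."""
    n = len(y)
    my = sum(y) / n
    sse = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    sst = sum((yi - my) ** 2 for yi in y)
    k = 2  # fitted parameters: intercept and slope
    bic = n * math.log(sse / n) + k * math.log(n)
    return 1 - sse / sst, sse / (n - k), bic

service_period = [60, 90, 120, 150, 180]  # days (illustrative)
pregnancy_rate = [55, 48, 41, 33, 27]     # percent (illustrative)
a, b = fit_simple_regression(service_period, pregnancy_rate)
r2, msse, bic = criteria(service_period, pregnancy_rate, a, b)
```

The competing seven models would each be scored this way, and the one with high R² and low MSSe, CP and BIC retained.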
Procedia PDF Downloads 635
12027 Effect of Drag Coefficient Models concerning Global Air-Sea Momentum Flux in Broad Wind Range including Extreme Wind Speeds
Authors: Takeshi Takemoto, Naoya Suzuki, Naohisa Takagaki, Satoru Komori, Masako Terui, George Truscott
Abstract:
The drag coefficient is an important parameter for correctly estimating the air-sea momentum flux. However, its parameterization has not been established, due to the variation in field data; instead, a number of drag coefficient model formulae have been proposed, even though almost none of these models addresses the extreme wind speed range. For such models, it is unclear how the drag coefficient changes in the extreme wind speed range as the wind speed increases. In this study, we investigated the effect of drag coefficient models on the air-sea momentum flux in the extreme wind range on a global scale, comparing two different drag coefficient models: interestingly, one model does not address the extreme wind speed range, while the other considers it. We found that the difference between the models in the annual global air-sea momentum flux was small, because the occurrence frequency of strong wind (wind speed of 20 m/s or more) was approximately 1%. However, we also found that the models differ in the middle latitudes, where the annual mean air-sea momentum flux is large and the occurrence frequency of strong wind is high. In addition, the estimated data showed that the difference between the models in the drag coefficient is large in the extreme wind speed range, with the largest difference reaching 23% at wind speeds of 35 m/s or more. These results clearly show that the difference between the two models in the drag coefficient has a significant impact on the estimation of the regional air-sea momentum flux in an extreme wind speed range such as that seen in a tropical cyclone environment. Furthermore, we estimated the air-sea momentum flux using several kinds of drag coefficient models.
We will also provide data from an observation tower and results from CFD (Computational Fluid Dynamics) simulations concerning the influence of wind flow at and around the site.
Keywords: air-sea interaction, drag coefficient, air-sea momentum flux, CFD (Computational Fluid Dynamics)
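An illustrative comparison in the spirit of the study: one parameterization in which the drag coefficient keeps growing linearly with wind speed, and one in which it saturates at high winds. The coefficients and the cap value below are invented stand-ins, not the specific models compared by the authors.

```python
def cd_linear(u10):
    """Drag coefficient growing linearly with 10-m wind speed (no high-wind limit).
    The coefficients 0.61 and 0.063 are illustrative, not a published fit."""
    return (0.61 + 0.063 * u10) / 1000.0

def cd_capped(u10, cap=2.5e-3):
    """Same linear form, but saturated at `cap` in the extreme-wind range."""
    return min(cd_linear(u10), cap)

u = 35.0  # extreme wind speed, m/s (tropical-cyclone range)
difference = cd_linear(u) - cd_capped(u)  # the two models diverge only here
```

At moderate winds the two formulas agree, which is why the annual global flux difference is small; the divergence above the cap is what drives the regional differences reported for the extreme wind range.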
Procedia PDF Downloads 371
12026 Survival and Hazard Maximum Likelihood Estimator with Covariate Based on Right Censored Data of Weibull Distribution
Authors: Al Omari Mohammed Ahmed
Abstract:
This paper focuses on the maximum likelihood estimator with covariates, where covariates are incorporated into the Weibull model. Under this regression model, the maximum likelihood estimates of the covariate parameters, the shape parameter, the survival function and the hazard rate of the Weibull regression distribution with right-censored data are obtained. The mean square error (MSE) and absolute bias are used to assess the performance of the Weibull regression distribution. For the simulation comparison, the study used various sample sizes and several specific values of the Weibull shape parameter.
Keywords: Weibull regression distribution, maximum likelihood estimator, survival function, hazard rate, right censoring
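A sketch of the right-censored Weibull log-likelihood that the maximum likelihood estimator maximises: observed failures contribute the log density, censored observations contribute the log survival function. The times, censoring flags, fixed shape value and grid search below are invented for illustration (the paper's model also includes covariates, omitted here).

```python
import math

def weibull_loglik(times, observed, shape, scale):
    """Log-likelihood of right-censored Weibull(shape, scale) data.
    `observed[i]` is True for a failure, False for a censored time."""
    ll = 0.0
    for t, event in zip(times, observed):
        z = (t / scale) ** shape
        if event:  # failure: log f(t)
            ll += math.log(shape / scale) + (shape - 1) * math.log(t / scale) - z
        else:      # right-censored: log S(t) = -(t/scale)^shape
            ll -= z
    return ll

times = [2.0, 3.5, 5.0, 7.0]
observed = [True, True, False, True]  # third subject right-censored
# A crude grid search over the scale stands in for numerical maximisation:
best_scale = max((s / 10 for s in range(20, 120)),
                 key=lambda s: weibull_loglik(times, observed, 1.5, s))
```

Repeating such an estimation over simulated replications and comparing the estimates with the true parameter values is how the MSE and absolute bias in the abstract are obtained.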
Procedia PDF Downloads 441
12025 Volatility Switching between Two Regimes
Authors: Josip Visković, Josip Arnerić, Ante Rozga
Abstract:
Based on the fact that volatility is time varying in high frequency data and that periods of high volatility tend to cluster, the most successful and popular models in modelling time varying volatility are GARCH type models. When financial returns exhibit sudden jumps that are due to structural breaks, standard GARCH models show high volatility persistence, i.e. integrated behaviour of the conditional variance. In such situations, models in which the parameters are allowed to change over time are more appropriate. This paper compares different GARCH models in terms of their ability to describe structural changes in returns caused by the financial crisis at the stock markets of six selected central and east European countries. The empirical analysis demonstrates that the Markov regime switching GARCH model resolves the problem of excessive persistence and outperforms uni-regime GARCH models in forecasting volatility when sudden switching occurs in response to the financial crisis.
Keywords: central and east European countries, financial crisis, Markov switching GARCH model, transition probabilities
Procedia PDF Downloads 226
12024 Application of Neural Network on the Loading of Copper onto Clinoptilolite
Authors: John Kabuba
Abstract:
The study investigated the implementation of Neural Network (NN) techniques for predicting the loading of Cu ions onto clinoptilolite. An experimental design using analysis of variance (ANOVA) was chosen for testing the adequacy of the Neural Network and for optimizing the effective input parameters (pH, temperature and initial concentration). A feed-forward, multi-layer perceptron (MLP) NN successfully tracked the non-linear behavior of the adsorption process versus the input parameters, with a mean squared error (MSE), correlation coefficient (R) and mean squared relative error (MSRE) of 0.102, 0.998 and 0.004, respectively. The results showed that NN modeling techniques can effectively predict and simulate highly complex and non-linear processes such as ion exchange.
Keywords: clinoptilolite, loading, modeling, neural network
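The MLP idea can be sketched as follows. The data here are synthetic stand-ins for (pH, temperature, initial concentration) and the Cu loading, and the tiny network and training loop are illustrative only, not the architecture or software used in the study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for the three inputs (pH, temperature, initial
# concentration), scaled to [0, 1]; the target is a smooth non-linear
# function standing in for the measured Cu loading.
X = rng.uniform(size=(200, 3))
y = (np.sin(np.pi * X[:, 0]) * X[:, 1] + 0.5 * X[:, 2] ** 2)[:, None]

# One-hidden-layer MLP (3 -> 8 -> 1) trained with plain gradient descent
H = 8
W1 = rng.normal(0, 0.5, (3, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)
lr = 0.05

def forward(X):
    a = np.tanh(X @ W1 + b1)       # hidden activations
    return a, a @ W2 + b2          # network output

_, out0 = forward(X)
mse_init = float(np.mean((out0 - y) ** 2))

for _ in range(2000):
    a, out = forward(X)
    g_out = 2.0 * (out - y) / len(X)        # dMSE/doutput
    gW2 = a.T @ g_out; gb2 = g_out.sum(0)
    g_a = (g_out @ W2.T) * (1.0 - a ** 2)   # tanh'(z) = 1 - tanh(z)^2
    gW1 = X.T @ g_a;   gb1 = g_a.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

_, out1 = forward(X)
mse_final = float(np.mean((out1 - y) ** 2))
print(mse_init, mse_final)
```

The training MSE falls well below its initial value, which is the same qualitative behavior (non-linear response tracked with small MSE) reported in the abstract.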
Procedia PDF Downloads 415
12023 Shoulder Range of Motion Measurements using Computer Vision Compared to Hand-Held Goniometric Measurements
Authors: Lakshmi Sujeesh, Aaron Ramzeen, Ricky Ziming Guo, Abhishek Agrawal
Abstract:
Introduction: Range of motion (ROM) is often measured by physiotherapists using a hand-held goniometer as part of the mobility assessment for diagnosis. Due to the nature of the hand-held goniometer measurement procedure, readings often vary depending on the physical therapist taking the measurements (Riddle et al.). This study aims to validate computer vision software readings against goniometric measurements so that quick and consistent ROM measurements can be taken by clinicians. The use of this computer vision software hopes to improve the future of the musculoskeletal space with more efficient diagnosis from recordings of patients' ROM, with minimal human error across different physical therapists. Methods: Using hand-held long-arm goniometer measurements as the "gold standard", healthy study participants (n = 20) performed 4 exercises: front elevation, abduction, internal rotation, and external rotation, using both arms. Active ROM was assessed using the computer vision software at angles set by the goniometer for each exercise. The intraclass correlation coefficient (ICC) using a 2-way random-effects model, box-and-whisker plots, and root mean square error (RMSE) were used to find the degree of correlation and the absolute error between set and recorded angles across repeated trials by the same rater. Results: ICC (2,1) values for all 4 exercises are above 0.9, indicating excellent reliability. The lowest overall RMSE was for external rotation (5.67°) and the highest for front elevation (8.00°). Box-and-whisker plots showed a potential zero error in the measurements made by the computer vision software for abduction, where the absolute errors for measurements taken at 0° are shifted away from the ideal 0 line, the lowest recorded error being 8°.
Conclusion: Our results indicate that the computer vision software is valid and reliable for use in clinical settings by physiotherapists measuring shoulder ROM. Overall, computer vision helps improve access to quality care for individual patients, with the ability to assess ROM for their condition at home throughout a full cycle of musculoskeletal care (American Academy of Orthopaedic Surgeons) without the need for a trained therapist.
Keywords: physiotherapy, frozen shoulder, joint range of motion, computer vision
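For readers who want to reproduce the agreement statistics, the ICC(2,1) and RMSE computations can be sketched as below. The angle data are invented for illustration, not the study's measurements; the ICC formula follows the standard Shrout & Fleiss two-way random-effects definition.

```python
import numpy as np

# Toy agreement data: each row is one set shoulder angle, columns are the
# goniometer reading and the computer-vision reading (values are made up).
Y = np.array([
    [  0.0,   1.0],
    [ 30.0,  28.0],
    [ 60.0,  62.0],
    [ 90.0,  89.0],
    [120.0, 120.0],
])
n, k = Y.shape

# Two-way random-effects ANOVA terms for ICC(2,1)
grand = Y.mean()
row_m = Y.mean(axis=1)
col_m = Y.mean(axis=0)
MSR = k * np.sum((row_m - grand) ** 2) / (n - 1)       # between subjects
MSC = n * np.sum((col_m - grand) ** 2) / (k - 1)       # between raters
resid = Y - row_m[:, None] - col_m[None, :] + grand
MSE = np.sum(resid ** 2) / ((n - 1) * (k - 1))
icc21 = (MSR - MSE) / (MSR + (k - 1) * MSE + k * (MSC - MSE) / n)

# Absolute agreement between the two methods
rmse = float(np.sqrt(np.mean((Y[:, 1] - Y[:, 0]) ** 2)))
print(icc21, rmse)
```

With small per-angle offsets against a wide 0°-120° range, the ICC lands near 1 while the RMSE reports the absolute disagreement in degrees, matching how the two statistics are used in the abstract.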
Procedia PDF Downloads 107
12022 Agriculture Yield Prediction Using Predictive Analytic Techniques
Authors: Nagini Sabbineni, Rajini T. V. Kanth, B. V. Kiranmayee
Abstract:
India’s economy primarily depends on agricultural yield growth and allied agro-industry products. Agricultural yield prediction is a challenging task for agricultural departments across the globe, since yield depends on many factors; in countries like India in particular, the majority of agricultural growth depends on rainwater, which is highly unpredictable. Agricultural growth depends on parameters such as water, nitrogen, weather, soil characteristics, crop rotation, soil moisture, surface temperature and rainfall. In our paper, extensive exploratory data analysis was carried out and various predictive models were designed. Further, regression models, namely linear, multiple linear and non-linear models, were tested for the effective prediction or forecast of agricultural yield for various crops in the Andhra Pradesh and Telangana states.
Keywords: agriculture yield growth, agriculture yield prediction, explorative data analysis, predictive models, regression models
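The linear-versus-non-linear model comparison described above can be sketched with ordinary least squares on synthetic data. The variables and the quadratic rainfall response are assumptions for illustration, not the paper's dataset or its fitted models.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-ins for yield drivers (rainfall, soil moisture, surface
# temperature), scaled to [0, 1]; true yield response is non-linear in rain.
n = 300
rain, moist, temp = rng.uniform(size=(3, n))
yield_t = (2.0 + 3.0 * rain - 2.5 * rain ** 2 + 1.2 * moist
           - 0.8 * temp + 0.1 * rng.standard_normal(n))

def fit_rmse(X, y):
    # ordinary least squares with intercept; returns in-sample RMSE
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return float(np.sqrt(np.mean((X1 @ beta - y) ** 2)))

# Multiple linear model vs. a non-linear (quadratic-in-rain) model
rmse_lin = fit_rmse(np.column_stack([rain, moist, temp]), yield_t)
rmse_quad = fit_rmse(np.column_stack([rain, rain ** 2, moist, temp]), yield_t)
print(rmse_lin, rmse_quad)
```

When the true response curves in rainfall, the non-linear specification fits markedly better, which is the kind of comparison the abstract describes across model classes.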
Procedia PDF Downloads 313
12021 On the Accuracy of Basic Modal Displacement Method Considering Various Earthquakes
Authors: Seyed Sadegh Naseralavi, Sadegh Balaghi, Ehsan Khojastehfar
Abstract:
Time history seismic analysis is considered the most accurate method to predict the seismic demand of structures. On the other hand, its main deficiency is the computational time required to obtain results. When applied in an optimization process, in which the structure must be analyzed thousands of times, reducing the computational time of the seismic analysis makes the optimization algorithms more practical. Approximate methods inevitably produce some error in comparison with exact time history analysis, but methods such as the Complete Quadratic Combination (CQC) and the Square Root of the Sum of Squares (SRSS) drastically reduce the computational time by combining the peak responses of each mode. In the present research, the Basic Modal Displacement (BMD) method is introduced and applied to estimate the seismic demand of the main structure. The seismic demand of the sampled structure is estimated from the modal displacements of a basic structure (for which the modal displacements have been calculated). Steel shear structures are selected as case studies. The error of the introduced method is calculated by comparing the estimated seismic demands with exact time history dynamic analysis. The efficiency of the proposed method is demonstrated by the application of three types of earthquakes (classified by the time of peak ground acceleration).
Keywords: time history dynamic analysis, basic modal displacement, earthquake-induced demands, shear steel structures
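The modal combination rules mentioned above can be sketched as follows. The peak responses and frequencies are illustrative, and the CQC correlation coefficient uses the standard Der Kiureghian closed form with equal modal damping, an assumption since the abstract does not give its formulas.

```python
import numpy as np

def srss(r):
    # Square Root of the Sum of Squares of the modal peak responses
    r = np.asarray(r, float)
    return float(np.sqrt(np.sum(r ** 2)))

def cqc(r, freqs, zeta=0.05):
    # Complete Quadratic Combination with the Der Kiureghian correlation
    # coefficient (equal modal damping ratio zeta assumed)
    r = np.asarray(r, float)
    w = 2.0 * np.pi * np.asarray(freqs, float)
    b = w[:, None] / w[None, :]                   # frequency ratios
    rho = (8.0 * zeta ** 2 * (1.0 + b) * b ** 1.5
           / ((1.0 - b ** 2) ** 2 + 4.0 * zeta ** 2 * b * (1.0 + b) ** 2))
    return float(np.sqrt(r @ rho @ r))

peaks = [3.0, 4.0]                    # peak response of each mode
print(srss(peaks))                    # 5.0
print(cqc(peaks, freqs=[1.0, 8.0]))   # close to SRSS: well-separated modes
print(cqc(peaks, freqs=[1.0, 1.1]))   # larger: closely spaced modes correlate
```

For well-separated modes the cross-correlation terms vanish and CQC collapses to SRSS, which is why SRSS is an acceptable shortcut in that regime and CQC the safer choice otherwise.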
Procedia PDF Downloads 355
12020 On the Cluster of the Families of Hybrid Polynomial Kernels in Kernel Density Estimation
Authors: Benson Ade Eniola Afere
Abstract:
Over the years, kernel density estimation has been extensively studied within the context of nonparametric density estimation. The fundamental components of kernel density estimation are the kernel function and the bandwidth. While the mathematical exploration of the kernel component has been relatively limited, its selection and development remain crucial. The Mean Integrated Squared Error (MISE), serving as a measure of discrepancy, provides a robust framework for assessing the effectiveness of any kernel function. A kernel function with a lower MISE is generally considered to perform better than one with a higher MISE. Hence, the primary aim of this article is to create kernels that exhibit significantly reduced MISE when compared to existing classical kernels. Consequently, this article introduces a cluster of hybrid polynomial kernel families. The construction of these proposed kernel functions is carried out heuristically by combining two kernels from the classical polynomial kernel family using probability axioms. We delve into the analysis of error propagation within these kernels. To assess their performance, simulation experiments and real-life datasets are employed. The obtained results demonstrate that the proposed hybrid kernels surpass their classical kernel counterparts in terms of performance.
Keywords: classical polynomial kernels, cluster of families, global error, hybrid kernels, kernel density estimation, Monte Carlo simulation
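The construction by probability axioms can be illustrated with a convex combination of two classical polynomial kernels, Epanechnikov and biweight here, chosen as plausible examples; the paper's specific pairings and weights may differ. A convex combination of two densities is again a density, so the hybrid is automatically a valid kernel.

```python
import numpy as np

rng = np.random.default_rng(4)

# Two classical polynomial kernels supported on [-1, 1]
epan = lambda u: 0.75 * (1.0 - u ** 2) * (np.abs(u) <= 1)              # Epanechnikov
biwt = lambda u: (15.0 / 16.0) * (1.0 - u ** 2) ** 2 * (np.abs(u) <= 1)  # biweight

# Hybrid kernel as a convex combination: non-negative and integrating to
# one by the probability axioms, hence a valid kernel itself.
a = 0.4
hybrid = lambda u: a * epan(u) + (1.0 - a) * biwt(u)

u = np.linspace(-1.0, 1.0, 4001)
du = u[1] - u[0]
mass = float((hybrid(u) * du).sum())          # numerical check: ~1

# Kernel density estimate with the hybrid kernel on a toy N(0,1) sample
x = rng.standard_normal(400)
h = 1.06 * x.std() * len(x) ** (-0.2)         # Silverman-type bandwidth
grid = np.linspace(-4.0, 4.0, 801)
fhat = hybrid((grid[:, None] - x[None, :]) / h).mean(axis=1) / h
print(mass, float(fhat.sum() * (grid[1] - grid[0])))
```

Comparing the MISE of such a hybrid against each parent kernel over Monte Carlo replications is then the performance assessment the abstract describes.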
Procedia PDF Downloads 93
12019 Graphical Modeling of High Dimension Processes with an Environmental Application
Authors: Ali S. Gargoum
Abstract:
Graphical modeling plays an important role in providing efficient probability calculations in high-dimensional problems (computational efficiency). In this paper, we address one such problem: we discuss fragmenting puff models and some distributional assumptions concerning models for the instantaneous emission readings and for the fragmenting process. A graphical representation, in terms of a junction tree, of the conditional probability breakdown of puffs and puff fragments is proposed.
Keywords: graphical models, influence diagrams, junction trees, Bayesian nets
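The computational efficiency argument can be illustrated on a toy chain network, where local message passing replaces summation over the full joint; in junction-tree terms the cliques are {A,B} and {B,C} with separator {B}. The probabilities are invented, and the puff model itself is of course far richer.

```python
import numpy as np

# A tiny chain Bayes net A -> B -> C (binary variables)
pA  = np.array([0.3, 0.7])                     # P(A)
pBA = np.array([[0.9, 0.1],                    # P(B | A): rows index A
                [0.2, 0.8]])
pCB = np.array([[0.6, 0.4],                    # P(C | B): rows index B
                [0.1, 0.9]])

# Local message passing: marginalize A out first, then B; each step only
# touches one clique's table instead of the full joint.
pB = pA @ pBA                                  # P(B) = sum_a P(a) P(B|a)
pC = pB @ pCB                                  # P(C) = sum_b P(b) P(C|b)

# Brute force over the full joint, for comparison
joint = pA[:, None, None] * pBA[:, :, None] * pCB[None, :, :]
pC_bf = joint.sum(axis=(0, 1))
print(pC, pC_bf)
```

The two answers agree, but the elimination route scales with the largest clique rather than with the full state space, which is the point of the junction-tree representation for high-dimensional puff breakdowns.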
Procedia PDF Downloads 396
12018 Multi-Faceted Growth in Creative Industries
Authors: Sanja Pfeifer, Nataša Šarlija, Marina Jeger, Ana Bilandžić
Abstract:
The purpose of this study is to explore the different facets of growth among micro, small and medium-sized firms in Croatia and to analyze the differences between models designed for all micro, small and medium-sized firms and those in creative industries. Three growth prediction models were designed and tested, using growth of sales, employment and assets of the company as dependent variables. The key drivers of sales growth are prudent use of cash, industry affiliation and a higher share of intangible assets. Growth of assets depends on retained profits, internal and external sources of financing, as well as industry affiliation. Growth in employment is closely related to sources of financing, in particular debt, and it occurs less frequently than growth in sales and assets. The findings confirm the assumption that the growth strategies of small and medium-sized enterprises (SMEs) in creative industries differ in specific ways from those of SMEs in general. Interestingly, only 2.2% of growing enterprises achieve growth in employment, assets and sales simultaneously.
Keywords: creative industries, growth prediction model, growth determinants, growth measures
Procedia PDF Downloads 332
12017 Dynamics of the Landscape in the Different Colonization Models Implemented in the Legal Amazon
Authors: Valdir Moura, FranciléIa De Oliveira E. Silva, Erivelto Mercante, Ranieli Dos Anjos De Souza, Jerry Adriani Johann
Abstract:
Several colonization projects were implemented in the Brazilian Legal Amazon in the 1970s and 1980s. Among all of these colonization projects, the most prominent were those following the Fishbone and Topographic models. Within this scope, the settlement projects known as Anari and Machadinho were created; they stand out because they are contiguous areas with different models and structures of occupation and colonization. The main objective of this work was to evaluate the dynamics of Land Use and Land Cover (LULC) in two different colonization models implemented in the state of Rondonia in the 1980s: the Fishbone model in the Anari settlement and the Topographic model in the Machadinho settlement. Understanding these two forms of occupation will help in future colonization programs in the Brazilian Legal Amazon. A 32-year Landsat time series (1984-2016) was used to evaluate the rates and trends of the LULC process in the different colonization models. In both occupation models analyzed, the results showed a rapid loss of primary and secondary forest (deforestation), mainly due to the dynamics of use established by the Agriculture/Pasture (A/P) relation, with a heavy dependence on road construction.
Keywords: land-cover, deforestation, rate fragments, remote sensing, secondary succession
Procedia PDF Downloads 135
12016 Performance of Fiber-Reinforced Polymer as an Alternative Reinforcement
Authors: Salah E. El-Metwally, Marwan Abdo, Basem Abdel Wahed
Abstract:
Fiber-reinforced polymer (FRP) bars have been proposed as an alternative to conventional steel bars; hence, the use of these non-corrosive, non-metallic reinforcing bars has increased in various concrete projects. The material is lightweight, has a long lifespan, and needs little maintenance; however, its non-ductile nature and weak bond with the surrounding concrete create a significant challenge. The behavior of concrete elements reinforced with FRP bars has been the subject of several experimental investigations, despite their high cost. This study aims to numerically assess the viability of using FRP bars as longitudinal reinforcement, in comparison with traditional steel bars, and also as prestressing tendons instead of traditional prestressing steel. Nonlinear finite element analysis has been utilized to carry out the current study. Numerical models have been developed to examine the behavior of concrete beams reinforced with FRP bars or tendons against similar models reinforced with either conventional steel or prestressing steel. These numerical models were verified against experimental test results available in the literature. The obtained results revealed that concrete beams reinforced with FRP bars, as passive reinforcement, exhibited less ductility and less stiffness than similar beams reinforced with steel bars. On the other hand, when FRP tendons are employed in prestressing concrete beams, the results show that the performance of these beams is similar to that of beams prestressed by conventional active reinforcement, with a difference attributable to the two tendon materials' moduli of elasticity.
Keywords: reinforced concrete, prestressed concrete, nonlinear finite element analysis, fiber-reinforced polymer, ductility
Procedia PDF Downloads 13
12015 Estimating X-Ray Spectra for Digital Mammography by Using the Expectation Maximization Algorithm: A Monte Carlo Simulation Study
Authors: Chieh-Chun Chang, Cheng-Ting Shih, Yan-Lin Liu, Shu-Jun Chang, Jay Wu
Abstract:
With the widespread use of digital mammography (DM), radiation dose evaluation of the breast has become important. The X-ray spectrum is one of the key factors that influence the absorbed dose of glandular tissue. In this study, we estimated the X-ray spectrum of DM using the expectation maximization (EM) algorithm with transmission measurement data. The interpolating polynomial model proposed by Boone was applied to generate the initial guess of the DM spectrum with the target/filter combination of Mo/Mo and a tube voltage of 26 kVp. The Monte Carlo N-Particle code (MCNP5) was used to tally the transmission data through aluminum sheets of 0.2 to 3 mm. The X-ray spectrum was reconstructed by applying the EM algorithm iteratively, and the influence of the initial guess on the EM reconstruction was evaluated. The percentage error of the average energy between the reference spectrum input to the Monte Carlo simulation and the spectrum estimated by the EM algorithm was -0.14%. The normalized root mean square error (NRMSE) and the normalized root max square error (NRMaSE) between both spectra were 0.6% and 2.3%, respectively. We conclude that the EM algorithm with transmission measurement data is a convenient and useful tool for estimating X-ray spectra for DM in clinical practice.
Keywords: digital mammography, expectation maximization algorithm, X-ray spectrum, X-ray
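The iterative reconstruction from transmission data can be sketched schematically as a multiplicative (MLEM-style) EM update. The attenuation coefficients, thicknesses, and three-bin spectrum below are toy values, and the exact update and initial guess (Boone's model) used in the paper may differ.

```python
import numpy as np

# Toy forward model: transmission through aluminium of thickness d_j is
# T_j = sum_i s_i * exp(-mu_i * d_j), with s the (normalized) spectrum.
mu = np.array([2.0, 1.0, 0.5])            # attenuation per energy bin (toy)
d = np.linspace(0.2, 3.0, 15)             # absorber thicknesses
A = np.exp(-mu[None, :] * d[:, None])     # system matrix (15 x 3)

s_true = np.array([0.2, 0.5, 0.3])        # reference spectrum
y = A @ s_true                            # noiseless "measured" transmissions

# EM (MLEM-style) multiplicative iteration from a flat initial guess;
# the update preserves non-negativity of the spectrum estimate.
s = np.full(3, 1.0 / 3.0)
col = A.sum(axis=0)
res0 = float(np.linalg.norm(A @ s - y))
for _ in range(5000):
    s = s * (A.T @ (y / (A @ s))) / col
res1 = float(np.linalg.norm(A @ s - y))
print(s, res0, res1)
```

After enough iterations the residual between modelled and measured transmission collapses, mirroring the sub-percent NRMSE figures the study reports for the reconstructed spectrum.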
Procedia PDF Downloads 730
12014 A Predictive MOC Solver for Water Hammer Waves Distribution in Network
Authors: A. Bayle, F. Plouraboué
Abstract:
Water Distribution Networks (WDN) still suffer from a lack of knowledge about the prediction of fast pressure transient events, although the latter may considerably impact their durability. Accidental or planned operating activities indeed give rise to complex pressure interactions and may drastically modify the local pressure value, generating leaks and, in rare cases, pipe breaks. In this context, a numerical predictive analysis is conducted to prevent such events and optimize network management. A coupled Python/Fortran 90 home-made software has been developed, using the Method of Characteristics (MOC) to solve the water-hammer equations. The solver is validated by direct comparison with theoretical results and experimental measurements in simple configurations, and afterwards extended to network analysis. The algorithm's most costly steps are designed for parallel computation. A varied set of boundary conditions and energetic-loss models is considered for the network simulations. The results are analyzed in both the time and frequency domains and provide crucial information on the pressure distribution behavior within the network.
Keywords: energetic losses models, method of characteristics, numerical predictive analysis, water distribution network, water hammer
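The MOC update at the heart of such a solver can be sketched for the textbook case of a frictionless single pipe with instantaneous downstream valve closure; the boundary-condition library, loss models, and network topology handled by the actual software are omitted, and cavitation is ignored, so the negative-head phase below is nonphysical. All pipe parameters are illustrative.

```python
import numpy as np

# Frictionless single pipe: upstream reservoir (fixed head), instantaneous
# downstream valve closure. Choosing dt = dx / a makes the MOC grid exact.
a, g, A = 1000.0, 9.81, 0.1        # wave speed (m/s), gravity, pipe area (m^2)
B = a / (g * A)                    # characteristic impedance
N = 10                             # pipe divided into N reaches (N+1 nodes)
H0, Q0 = 50.0, 0.1                 # initial head (m) and discharge (m^3/s)

H = np.full(N + 1, H0)             # flat hydraulic grade line (no friction)
Q = np.full(N + 1, Q0)
valve_heads = []

for _ in range(8 * N):             # two full wave periods (period = 4 L / a)
    Hn, Qn = np.empty_like(H), np.empty_like(Q)
    Cp = H[:-1] + B * Q[:-1]       # C+ invariants arriving from upstream
    Cm = H[1:] - B * Q[1:]         # C- invariants arriving from downstream
    # interior nodes: intersection of the C+ and C- characteristics
    Hn[1:N] = 0.5 * (Cp[:-1] + Cm[1:])
    Qn[1:N] = (Cp[:-1] - Cm[1:]) / (2.0 * B)
    # upstream reservoir: head fixed, discharge from the C- characteristic
    Hn[0] = H0
    Qn[0] = (H0 - Cm[0]) / B
    # downstream closed valve: zero discharge, head from the C+ characteristic
    Qn[N] = 0.0
    Hn[N] = Cp[-1]
    H, Q = Hn, Qn
    valve_heads.append(H[N])

joukowsky = a * (Q0 / A) / g       # classic surge estimate a * dV / g
print(max(valve_heads) - H0, joukowsky)
```

The simulated head rise at the valve reproduces the Joukowsky surge a·ΔV/g exactly, which is the standard validation against theory that the abstract mentions before moving to full networks.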
Procedia PDF Downloads 232