Search results for: reduced order models
21034 Use of In-line Data Analytics and Empirical Model for Early Fault Detection
Authors: Hyun-Woo Cho
Abstract:
Automatic process monitoring schemes are designed to give early warnings for unusual process events or abnormalities as soon as possible. To this end, various techniques have been developed and utilized in various industrial processes. These include multivariate statistical methods, representation skills in reduced spaces, kernel-based nonlinear techniques, etc. This work presents a nonlinear empirical monitoring scheme for batch-type production processes with incomplete process measurement data. While normal operation data are easy to get, unusual fault data occur infrequently and are thus difficult to collect. In this work, noise filtering steps are added in order to enhance monitoring performance by eliminating irrelevant information in the data. The performance of the monitoring scheme was demonstrated using batch process data. The results showed that the monitoring performance was improved significantly in terms of the detection success rate for process faults.
Keywords: batch process, monitoring, measurement, kernel method
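A minimal sketch of the kind of scheme the abstract describes, assuming a one-class SVM as the kernel-based model (the paper's exact method is not specified) and a moving-average filter as the noise-filtering step; the data, dimensions, and parameters are placeholders:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# Hypothetical data: rows are time samples of 8 process variables; "faulty"
# is a shifted operating regime standing in for a real process fault.
normal = rng.normal(0.0, 1.0, size=(500, 8))
faulty = rng.normal(1.5, 1.0, size=(50, 8))

def moving_average(x, w=5):
    """Noise-filtering step: per-variable moving average."""
    kernel = np.ones(w) / w
    return np.apply_along_axis(lambda col: np.convolve(col, kernel, mode="same"), 0, x)

# Train the kernel-based model on (filtered) normal data only, since fault
# data are rarely available in practice.
model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05)
model.fit(moving_average(normal))

# Negative decision values flag abnormal samples.
scores = model.decision_function(moving_average(faulty))
print(f"flagged {np.sum(scores < 0)} of {len(faulty)} faulty samples")
```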
Procedia PDF Downloads 325
21033 Influence of Behavior Models on the Response of a Reinforced Concrete Frame: Multi-Fiber Approach
Authors: A. Kahil, A. Nekmouche, N. Khelil, I. Hamadou, M. Hamizi, Ne. Hannachi
Abstract:
The objective of this work is to study the influence of the nonlinear behavior models of the concrete (concrete_BAEL and concrete_UNI), as well as of the confinement brought by the transverse reinforcement, on the seismic response of a reinforced concrete frame (RC frame). These models, as well as the confinement, are integrated in the Cast3m finite element calculation code. Two cases are studied, in the presence and in the absence of a vertical load: taking into account the confinement provided by the transverse reinforcement (TAC) and leaving it without confinement consideration (WCC). The application was made on a reinforced concrete frame with 3 levels and 2 spans. The results show that, on the one hand, the concrete_BAEL model slightly underestimates the resistance of the RC frame in the plastic range, whereas the concrete_UNI model presents the best results compared to the simplified concrete_BAEL model; on the other hand, for the concrete_UNI model, taking the confinement into account has no influence on the behavior of the RC frame under imposed displacement up to a vertical load of 500 kN.
Keywords: reinforced concrete, nonlinear calculation, behavior laws, fiber model, confinement, numerical simulation
Procedia PDF Downloads 165
21032 Performance of the CMIP5 Models in Simulation of the Present and Future Precipitation over the Lake Victoria Basin
Authors: M. A. Wanzala, L. A. Ogallo, F. J. Opijah, J. N. Mutemi
Abstract:
The usefulness and limitations of climate information are due to uncertainty inherent in the climate system. For any given region to have sustainable development, it is important to apply climate information in its socio-economic strategic plans. The overall objective of the study was to assess the performance of the Coupled Model Inter-comparison Project (CMIP5) models over the Lake Victoria Basin. The datasets used included observed point station data, gridded rainfall data from the Climate Research Unit (CRU), and hindcast data from eight CMIP5 models. The methodology included trend analysis, spatial analysis, correlation analysis, Principal Component Analysis (PCA), regression analysis, and categorical statistical skill scores. Analysis of the trends in the observed rainfall records indicated an increase in rainfall variability both in space and time for all the seasons. The spatial patterns of the individual model outputs from MPI, MIROC, EC-EARTH and CNRM were closest to the observed rainfall patterns.
Keywords: categorical statistics, coupled model inter-comparison project, principal component analysis, statistical downscaling
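A small sketch of how the categorical statistical skill scores mentioned in the methodology can be computed for tercile rainfall categories; the hit rate and Heidke skill score shown here are common choices rather than necessarily the exact ones used in the paper, and the data are synthetic stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical seasonal rainfall: observations vs. one model's hindcast.
obs = rng.gamma(2.0, 50.0, size=300)
model = obs * rng.normal(1.0, 0.3, size=300) + rng.normal(0, 20, size=300)

def terciles(x):
    """Classify each value as below- (0), near- (1) or above-normal (2)."""
    lo, hi = np.quantile(x, [1 / 3, 2 / 3])
    return np.digitize(x, [lo, hi])

o, m = terciles(obs), terciles(model)

# 3x3 contingency table and two common categorical skill measures.
table = np.zeros((3, 3), dtype=int)
np.add.at(table, (o, m), 1)
n = table.sum()
hits = np.trace(table)
expected = (table.sum(axis=1) * table.sum(axis=0)).sum() / n  # chance agreement
hit_rate = hits / n
heidke = (hits - expected) / (n - expected)
print(f"hit rate = {hit_rate:.2f}, Heidke skill score = {heidke:.2f}")
```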
Procedia PDF Downloads 371
21031 Publish/Subscribe Scientific Workflow Interoperability Framework (PS-SWIF) Architecture and Design
Authors: Ahmed Alqaoud
Abstract:
This paper describes the Publish/Subscribe Scientific Workflow Interoperability Framework (PS-SWIF) architecture and its components, which collectively provide interoperability between heterogeneous scientific workflow systems. Requirements to achieve interoperability are identified. This paper also provides a detailed investigation and design of models and solutions for the system requirements, and considers how the workflow interoperability models provided by the Workflow Management Coalition (WfMC) can be achieved using the PS-SWIF system.
Keywords: publish/subscribe, scientific workflow, web services, workflow interoperability
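The abstract gives no implementation detail, so the following is only a minimal illustration of the topic-based publish/subscribe decoupling such a framework relies on; the EventChannel class, the topic names, and the Taverna/Kepler pairing are all hypothetical:

```python
from collections import defaultdict

class EventChannel:
    """Minimal topic-based publish/subscribe channel: workflow systems publish
    events about task completion and subscribe to topics they care about,
    without knowing anything about each other."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self._subscribers[topic]:
            callback(message)

bus = EventChannel()
# A hypothetical Kepler workflow consumes results produced by a Taverna workflow.
bus.subscribe("taverna/task_done", lambda m: print(f"kepler received: {m}"))
bus.publish("taverna/task_done", {"task": "align_sequences", "status": "ok"})
```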
Procedia PDF Downloads 308
21030 A Block World Problem Based Sudoku Solver
Authors: Luciana Abednego, Cecilia Nugraheni
Abstract:
There are many approaches proposed for solving Sudoku puzzles. One of them is modelling the puzzles as block world problems. Three models of Sudoku solvers based on this approach have been proposed. Each model expresses the Sudoku solver as a parameterized multi-agent system. In this work, we propose a new model which is an improvement over the existing models. This paper presents the development of a Sudoku solver that implements all the proposed models. Some experiments have been conducted to determine the performance of each model.
Keywords: Sudoku puzzle, Sudoku solver, block world problem, parameterized multi agent systems
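The block world multi-agent models cannot be reconstructed from the abstract alone; for contrast, here is a plain backtracking Sudoku solver, the kind of baseline such models are usually measured against:

```python
def valid(board, r, c, v):
    """True if value v may be placed at (r, c) without violating Sudoku rules."""
    if v in board[r]:
        return False
    if v in (board[i][c] for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(board[br + i][bc + j] != v for i in range(3) for j in range(3))

def solve(board):
    """Depth-first backtracking; board is a 9x9 list of lists with 0 for empty
    cells. Fills the board in place and returns True on success."""
    for r in range(9):
        for c in range(9):
            if board[r][c] == 0:
                for v in range(1, 10):
                    if valid(board, r, c, v):
                        board[r][c] = v
                        if solve(board):
                            return True
                        board[r][c] = 0
                return False
    return True
```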
Procedia PDF Downloads 343
21029 Application of Single Tuned Passive Filters in Distribution Networks at the Point of Common Coupling
Authors: M. Almutairi, S. Hadjiloucas
Abstract:
The harmonic distortion of voltage is important in relation to power quality due to the interaction between the large diffusion of non-linear and time-varying single-phase and three-phase loads and power supply systems. However, harmonic distortion levels can be reduced by improving the design of polluting loads or by applying arrangements and adding filters. The application of passive filters is an effective solution that can be used to achieve harmonic mitigation, mainly because filters offer high efficiency and simplicity and are economical. Additionally, their different possible frequency response characteristics can be used to achieve required harmonic filtering targets. With these ideas in mind, the objective of this paper is to determine the size of single tuned passive filter that works best in distribution networks, in order to economically limit violations caused at a given point of common coupling (PCC). This article suggests that a single tuned passive filter could be employed in typical industrial power systems. Furthermore, constrained optimization can be used to find the optimal sizing of the passive filter in order to reduce both harmonic voltages and harmonic currents in the power system to an acceptable level and, thus, improve the load power factor. The optimization technique works to minimize the voltage total harmonic distortion (VTHD) and the current total harmonic distortion (ITHD), where maintaining a given power factor within a specified range is desired. According to IEEE Standard 519, both indices are viewed as constraints for the optimal passive filter design problem. The performance of this technique will be discussed using numerical examples taken from previous publications.
Keywords: harmonics, passive filter, power factor, power quality
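Before any optimization, a single tuned filter is usually sized from textbook relations: the capacitor from the reactive power to be supplied at the fundamental, the inductor from the series-resonance condition at the tuned harmonic, and the resistor from the quality factor. A sketch of those relations (not the paper's constrained optimization, and with illustrative numbers):

```python
import math

def single_tuned_filter(q_var, v_ll, h, f=50.0, quality=30.0):
    """Standard sizing relations for a single tuned passive filter branch:
    the capacitor supplies q_var of reactive power at the fundamental, the
    inductor makes the branch resonate at harmonic h, and the resistor sets
    the quality factor. A textbook sketch, not the paper's procedure."""
    w = 2 * math.pi * f
    xc = v_ll ** 2 / q_var          # capacitive reactance at the fundamental
    c = 1 / (w * xc)
    l = xc / (h ** 2 * w)           # series resonance at h * f
    xn = math.sqrt(l / c)           # characteristic reactance
    r = xn / quality
    return r, l, c

# Example: 1 Mvar of compensation on an 11 kV bus, tuned near the 5th harmonic.
r, l, c = single_tuned_filter(1e6, 11e3, h=4.9)
print(f"R = {r:.3f} ohm, L = {l*1e3:.2f} mH, C = {c*1e6:.2f} uF")
```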
Procedia PDF Downloads 309
21028 Order vs. Justice: The Cases of Libya and Syria from the Perspective of the English School Theory
Authors: A. Gün Güneş
Abstract:
This study aims to explicate the functionality of the responsibility to protect (R2P) in terms of order and justice within the context of the main traditions of the English School theory. The conflicts in Libya and Syria and the responses of international society to these crises are analyzed within the pluralism-solidarism dichotomy of the English School. In this regard, the intervention under R2P in Libya exemplifies the solidaristic side emphasizing justice, while the non-intervention in Syria exemplifies the pluralistic side emphasizing order. This study discusses the cases of Libya and Syria on the basis of the roles of the great powers.
Keywords: English school theory, international society, order, justice, responsibility to protect
Procedia PDF Downloads 438
21027 A Comparative Study of the Proposed Models for the Components of the National Health Information System
Authors: M. Ahmadi, Sh. Damanabi, F. Sadoughi
Abstract:
The National Health Information System plays an important role in ensuring timely and reliable access to health information, which is essential for strategic and operational decisions that improve health and the quality and effectiveness of health care. In other words, using the National Health Information System can improve the quality of the health data, information, and knowledge used to support decision-making at all levels and in all areas of the health sector. Since full identification of the components of this system seems necessary for better planning and for managing the factors that influence its performance, in this study, different attitudes towards the components of this system are explored comparatively. Methods: This is a descriptive, comparative study. The study population includes printed and electronic documents containing the components of the national health information system in three parts: input, process, and output. In this context, searches for information were conducted using library resources and the internet, and the data analysis was expressed using comparative tables and qualitative data. Results: The findings showed that there are three different perspectives on the components of the national health information system: the Lippeveld, Sauerborn, and Bodart model of 2000, the Health Metrics Network (HMN) model of the World Health Organization of 2008, and Gattini's 2009 model. All three models require, in the input section (resources and structure), components of management and leadership; planning and design of programs; and the supply of staff, software and hardware facilities, and equipment. In addition, in the process section, the three models point to actions ensuring the quality of the health information system, and in the output section, except for the Lippeveld model, the two other models consider information products and the usage and distribution of information as components of the national health information system. Conclusion: The results showed that all three models give only a brief discussion of the components of health information in the input section. However, the Lippeveld model has overlooked the components of national health information in the process and output sections. Therefore, it seems that the Health Metrics Network model gives a comprehensive presentation of the components of the health system in all three sections: input, process, and output.
Keywords: National Health Information System, components of the NHIS, Lippeveld Model
Procedia PDF Downloads 425
21026 Models to Estimate Monthly Mean Daily Global Solar Radiation on a Horizontal Surface in Alexandria
Authors: Ahmed R. Abdelaziz, Zaki M. I. Osha
Abstract:
Solar radiation data are of great significance for solar energy system design. This study aims at developing and calibrating new empirical models for estimating the monthly mean daily global solar radiation on a horizontal surface in Alexandria, Egypt. Calculated day length hours, solar altitude, day number, and declination angle are used for this purpose. A comparison between measured and calculated values of solar radiation is carried out. It is shown that all the proposed correlations are able to predict the global solar radiation with excellent accuracy in Alexandria.
Keywords: solar energy, global solar radiation, model, regression coefficient
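The paper's exact correlations are not given; the sketch below fits one plausible empirical form, a least-squares regression of daily radiation on day length and declination computed from standard solar-geometry formulas, with synthetic stand-in measurements:

```python
import numpy as np

LAT = np.radians(31.2)  # Alexandria, approximate latitude

def declination(n):
    """Solar declination (radians) for day number n (Cooper's formula)."""
    return np.radians(23.45) * np.sin(2 * np.pi * (284 + n) / 365)

def day_length(n):
    """Day length in hours from the sunset hour angle."""
    ws = np.arccos(-np.tan(LAT) * np.tan(declination(n)))
    return 24 / np.pi * ws

rng = np.random.default_rng(2)
n = np.arange(1, 366)
# Hypothetical "measured" daily radiation (MJ/m^2) stands in for station data.
h_meas = 12 + 10 * np.sin(2 * np.pi * (n - 80) / 365) + rng.normal(0, 0.8, n.size)

# One simple empirical form: H = a0 + a1*day_length + a2*sin(declination),
# calibrated by ordinary least squares.
X = np.column_stack([np.ones(n.size), day_length(n), np.sin(declination(n))])
coef, *_ = np.linalg.lstsq(X, h_meas, rcond=None)
h_fit = X @ coef
r2 = 1 - np.sum((h_meas - h_fit) ** 2) / np.sum((h_meas - h_meas.mean()) ** 2)
print(f"coefficients = {np.round(coef, 3)}, R^2 = {r2:.3f}")
```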
Procedia PDF Downloads 408
21025 Approach to Formulate Intuitionistic Fuzzy Regression Models
Authors: Liang-Hsuan Chen, Sheng-Shing Nien
Abstract:
This study aims to develop approaches to formulate intuitionistic fuzzy regression (IFR) models for many decision-making applications in fuzzy environments using intuitionistic fuzzy observations. Intuitionistic fuzzy numbers (IFNs) are used to characterize the fuzzy input and output variables in the IFR formulation processes. A mathematical programming problem (MPP) is built up to optimally determine the IFR parameters. Each parameter in the MPP is defined as a couple of alternative numerical variables with opposite signs, and an intuitionistic fuzzy error term is added to the MPP to characterize the uncertainty of the model. The IFR model is formulated based on the distance measure, to minimize the total distance errors between estimated and observed intuitionistic fuzzy responses in the MPP resolution processes. The proposed approaches are simple and efficient in the formulation and resolution processes; the signs of the parameters are determined within the optimization, so the problem of predetermining the signs of the parameters is avoided. Furthermore, the proposed approach has the advantage that the spread of the predicted IFN response will not be over-increased, since the parameters in the established IFR model are crisp. The performance of the obtained models is evaluated and compared with existing approaches.
Keywords: fuzzy sets, intuitionistic fuzzy number, intuitionistic fuzzy regression, mathematical programming method
Procedia PDF Downloads 140
21024 Heteroscedastic Parametric and Semiparametric Smooth Coefficient Stochastic Frontier Application to Technical Efficiency Measurement
Authors: Rebecca Owusu Coffie, Atakelty Hailu
Abstract:
Variants of production frontier models have emerged; however, only a limited number of them are applied in empirical research. Hence, the effects of these alternative frontier models are not well understood, particularly within sub-Saharan Africa. In this paper, we apply recent advances in production frontier modelling to examine levels of technical efficiency and efficiency drivers. Specifically, we compare the heteroscedastic parametric and the semiparametric stochastic smooth coefficient (SPSC) models. Using rice production data from Ghana, our empirical estimates reveal that alternative specifications of efficiency estimators result in either downward or upward bias in the technical efficiency estimates. Methodologically, we find that the SPSC model is more suitable and generates high efficiency estimates. Within the parametric framework, we find that parameterization of both the mean and the variance of the pre-truncated function gives the best model. For the drivers of technical efficiency, we observed that longer farm distances increase inefficiency through a reduction in labor productivity. High soil quality, however, increases productivity through increased land productivity.
Keywords: pre-truncated, rice production, smooth coefficient, technical efficiency
Procedia PDF Downloads 449
21023 Exploring Data Leakage in EEG Based Brain-Computer Interfaces: Overfitting Challenges
Authors: Khalida Douibi, Rodrigo Balp, Solène Le Bars
Abstract:
In the medical field, applications related to human experiments are frequently linked to reduced sample sizes, which makes the training of machine learning models quite sensitive and therefore neither very robust nor generalizable. This is notably the case in Brain-Computer Interface (BCI) studies, where the sample size rarely exceeds 20 subjects or a small number of trials. To address this problem, several resampling approaches are often used during the data preparation phase, which is an especially critical step in a data science analysis process. One of the naive approaches usually applied by data scientists consists in transforming the entire database before the resampling phase. However, this can cause the model's performance to be incorrectly estimated when making predictions on unseen data. In this paper, we explored the effect of data leakage observed during our BCI experiments for device control through the real-time classification of SSVEPs (Steady State Visually Evoked Potentials). We also studied potential ways to ensure optimal validation of the classifiers during the calibration phase to avoid overfitting. The results show that the scaling step is crucial for some algorithms, and it should be applied after the resampling phase to avoid data leakage and improve results.
Keywords: data leakage, data science, machine learning, SSVEP, BCI, overfitting
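The paper's central point, transform after resampling/splitting rather than before, can be illustrated with a toy comparison; the dataset here is random stand-in data, and SVC and StandardScaler are illustrative choices rather than the exact pipeline used in the study:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(3)
# Hypothetical small BCI-like dataset: 40 trials x 64 features, binary labels.
X = rng.normal(size=(40, 64))
y = rng.integers(0, 2, size=40)

# Leaky version: the scaler sees the whole dataset before the split, so
# statistics from the "unseen" trials contaminate training.
X_leaky = StandardScaler().fit_transform(X)
Xtr, Xte, ytr, yte = train_test_split(X_leaky, y, test_size=0.25, random_state=0)
leaky_acc = SVC().fit(Xtr, ytr).score(Xte, yte)

# Correct version: split (or resample) first, then fit the scaler on the
# training trials only and reuse it to transform the held-out trials.
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
scaler = StandardScaler().fit(Xtr)
clean_acc = SVC().fit(scaler.transform(Xtr), ytr).score(scaler.transform(Xte), yte)

print(f"leaky pipeline accuracy: {leaky_acc:.2f}, clean pipeline: {clean_acc:.2f}")
```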
Procedia PDF Downloads 155
21022 Microbial Activity and Greenhouse Gas (GHG) Emissions in Recovery Process in a Grassland of China
Authors: Qiushi Ning
Abstract:
Nitrogen (N) is an important limiting factor in various ecosystems, and the N deposition rate is increasing unprecedentedly due to anthropogenic activities. N deposition alters microbial growth and activity, and microbially mediated N cycling, through changes in soil pH and the availability of N and carbon (C). CO2, CH4 and N2O are important greenhouse gases which threaten the sustainability and function of ecosystems. With prolonged and increasing N enrichment, soil acidification and C limitation will be aggravated, and microbial biomass will further decline. Soil acidification and lack of C induced by N addition are argued to be two important factors regulating microbial activity and growth, and studies combining soil acidification with lack of C in their effects on the microbial community are scarce. In order to restore an ecosystem affected by chronic N loading, we determined the responses of microbial activity and GHG emissions to lime and glucose additions (control, 1‰ lime, 2‰ lime, glucose, 1‰ lime×glucose and 2‰ lime×glucose), which were used to alleviate soil acidification and supply C to soils with N addition rates of 0–50 g N m⁻² yr⁻¹. The results showed no significant responses of soil respiration and microbial biomass (MBC and MBN) to lime addition; however, glucose substantially improved soil respiration and microbial biomass (MBC and MBN). The cumulative CO2 emissions and microbial biomass of the lime×glucose treatments were not significantly higher than those of the glucose-only treatment. The glucose and lime×glucose treatments reduced the net mineralization and nitrification rates, because the C supply stimulated microbial growth, incorporating more inorganic N into the biomass, so that mineralization of organic N was relatively reduced. Glucose addition also increased CH4 and N2O emissions; CH4 emissions were regulated mainly by the C resource as a substrate for methanogens. However, N2O emissions were regulated by both the C resource and soil pH: C is an important energy source, and the increased soil pH could benefit the nitrifiers and denitrifiers, which are the primary producers of N2O. Soil respiration and N2O emissions increased with increasing N addition rates in all glucose treatments, as the external C resource improved microbial N utilization. Compared with alleviating soil acidification, improving the availability of C substantially increased microbial activity; therefore, C should be the main limiting factor in soils under long-term N loading. Most importantly, when organic C fertilization is used to improve the production of ecosystems, the GHG emissions and consequent warming potentials should be carefully considered.
Keywords: acidification and C limitation, greenhouse gas emission, microbial activity, N deposition
Procedia PDF Downloads 309
21021 3D Reconstruction of Human Body Based on Gender Classification
Authors: Jiahe Liu, Hongyang Yu, Feng Qian, Miao Luo
Abstract:
SMPL-X is a powerful parametric human body model that includes male, neutral, and female models, with significant gender differences between these three models. During the process of 3D human body reconstruction, the correct selection of the standard template is crucial for obtaining accurate results. To address this issue, we developed an efficient gender classification algorithm to automatically select the appropriate template for 3D human body reconstruction. The key to this gender classification algorithm is the precise analysis of human body features. Using the SMPL-X model, the algorithm can detect and identify gender features of the human body, thereby determining which standard template should be used. The accuracy of this algorithm makes the 3D reconstruction process more accurate and reliable, as it can adjust model parameters based on individual gender differences. SMPL-X and the related gender classification algorithm have brought important advancements to the field of 3D human body reconstruction. By accurately selecting standard templates, they have improved the accuracy of reconstruction and have broad potential in various application fields. These technologies continue to drive the development of the 3D reconstruction field, providing us with more realistic and accurate human body models.
Keywords: gender classification, joint detection, SMPL-X, 3D reconstruction
Procedia PDF Downloads 71
21020 Credit Card Fraud Detection with Ensemble Model: A Meta-Heuristic Approach
Authors: Gong Zhilin, Jing Yang, Jian Yin
Abstract:
The purpose of this paper is to develop a novel system for credit card fraud detection based on sequential modeling of data using hybrid deep learning models. The proposed model encapsulates five major phases: pre-processing, imbalanced-data handling, feature extraction, optimal feature selection, and fraud detection with an ensemble classifier. The collected raw data (input) are pre-processed to enhance their quality by alleviating missing data, noisy data, and null values. The pre-processed data are class-imbalanced in nature and are therefore handled with the K-means clustering-based SMOTE model. From the class-balanced data, the most relevant features are extracted: improved Principal Component Analysis (PCA) features, statistical features (mean, median, standard deviation), and higher-order statistical features (skewness and kurtosis). Among the extracted features, the most optimal features are selected with the Self-improved Arithmetic Optimization Algorithm (SI-AOA). The SI-AOA model is a conceptual improvement of the standard Arithmetic Optimization Algorithm. The deep learning models are Long Short-Term Memory (LSTM), Convolutional Neural Network (CNN), and an optimized Quantum Deep Neural Network (QDNN). The LSTM and CNN are trained with the extracted optimal features. The outcomes from the LSTM and CNN enter as input to the optimized QDNN, which provides the final detection outcome. Since the QDNN is the ultimate detector, its weight function is fine-tuned with the Self-improved Arithmetic Optimization Algorithm (SI-AOA).
Keywords: credit card, data mining, fraud detection, money transactions
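For the imbalance-handling phase, the K-means clustering-based SMOTE step could look like the following sketch, using the KMeansSMOTE implementation from imbalanced-learn on synthetic stand-in data; the threshold and data parameters are placeholders and may need tuning to avoid degenerate clusters:

```python
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import KMeansSMOTE

# Hypothetical imbalanced transaction data: roughly 1% minority ("fraud") class.
X, y = make_classification(n_samples=5000, n_features=10, weights=[0.99],
                           random_state=0)
print("before:", Counter(y))

# K-means clustering-based SMOTE: cluster the input space first, then apply
# SMOTE only inside clusters where the minority class is represented, which
# avoids synthesizing samples across unrelated regions of the feature space.
sampler = KMeansSMOTE(random_state=0, cluster_balance_threshold=0.01)
X_bal, y_bal = sampler.fit_resample(X, y)
print("after: ", Counter(y_bal))
```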
Procedia PDF Downloads 133
21019 Practical Simulation Model of Floating-Gate MOS Transistor in Sub 100 nm Technologies
Authors: Zina Saheb, Ezz El-Masry
Abstract:
As CMOS technology scales down, the silicon dioxide (SiO2) gate oxide becomes very thin (a few nanometers). When the SiO2 thickness is less than 3 nm, the gate direct tunneling (DT) leakage current becomes a dominant problem that impacts transistor performance. The floating gate MOSFET (FGMOSFET) has been used in many low-voltage and low-power applications. Most of the available simulation models of the FGMOSFET for analog circuit design do not account for the gate DT current, and there is no accurate analysis of the gate DT. It is crucial to use an accurate model in order to get realistic simulation results that account for the DT impact on FGMOSFET performance effectively.
Keywords: CMOS transistor, direct-tunneling current, floating-gate, gate-leakage current, simulation model
Procedia PDF Downloads 531
21018 Copper Price Prediction Model for Various Economic Situations
Authors: Haidy S. Ghali, Engy Serag, A. Samer Ezeldin
Abstract:
Copper is an essential raw material used in the construction industry. During 2021 and the first half of 2022, the global market suffered from significant fluctuations in copper raw material prices due to the aftermath of both the COVID-19 pandemic and the Russia-Ukraine war, which exposed consumers to unexpected financial risk. This paper therefore aims to develop two ANN-LSTM price prediction models, using Python, that can forecast the average monthly copper prices traded on the London Metal Exchange; the first is a multivariate model that forecasts the copper price one month ahead, and the second is a univariate model that predicts the copper prices of the upcoming three months. Historical data on average monthly London Metal Exchange copper prices were collected from January 2009 to July 2022, and potential external factors were identified and employed in the multivariate model. These factors lie under three main categories: energy prices and economic indicators of the three major copper-exporting countries, depending on data availability. Before developing the LSTM models, the collected external parameters were analyzed with respect to the copper prices using correlation and multicollinearity tests in the R software; the parameters were then further screened to select those that influence the copper prices. The two LSTM models were then developed, and the dataset was divided into training, validation, and testing sets. The results show that the performance of the 3-month prediction model is better than that of the 1-month prediction model, but both models can still act as prediction tools for diverse economic situations.
Keywords: copper prices, prediction model, neural network, time series forecasting
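A minimal sketch of the univariate three-month-ahead variant, assuming a 12-month input window and a Keras LSTM; the synthetic series, window length, and network size are placeholders, not the paper's configuration:

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(4)
# Hypothetical monthly price series standing in for the LME copper data.
t = np.arange(240, dtype="float32")
series = 6000 + 800 * np.sin(t / 12) + rng.normal(0, 120, t.size).astype("float32")

def make_windows(x, n_in=12, n_out=3):
    """Slide a 12-month input window; the target is the next 3 months."""
    X, y = [], []
    for i in range(len(x) - n_in - n_out + 1):
        X.append(x[i:i + n_in])
        y.append(x[i + n_in:i + n_in + n_out])
    return np.array(X)[..., None], np.array(y)

mean, std = series.mean(), series.std()
X, y = make_windows((series - mean) / std)

model = tf.keras.Sequential([
    tf.keras.layers.Input((12, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(3),        # three months ahead in one shot
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:-24], y[:-24], epochs=30, verbose=0)  # hold out the last 24 windows

pred = model.predict(X[-1:], verbose=0)[0] * std + mean
print("next three months:", np.round(pred, 1))
```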
Procedia PDF Downloads 116
21017 Enabling Self-Care and Shared Decision Making for People Living with Dementia
Authors: Jonathan Turner, Julie Doyle, Laura O’Philbin, Dympna O’Sullivan
Abstract:
People living with dementia should be at the centre of decision-making regarding goals for daily living. These goals include basic activities (dressing, hygiene, and mobility), advanced activities (finances, transportation, and shopping), and meaningful activities that promote well-being (pastimes and intellectual pursuits). However, there is limited involvement of people living with dementia in the design of technology to support their goals. A project is described that is co-designing intelligent computer-based support for, and with, people affected by dementia and their carers. The technology will support self-management, empower participation in shared decision-making with carers, and help people living with dementia remain healthy and independent in their homes for longer. It includes information from the patient's care plan, which documents medications, contacts, and the patient's wishes on end-of-life care. Importantly for this work, the plan can outline activities that should be maintained or worked towards, such as exercise or social contact. The authors discuss how to integrate care goal information from such a care plan with data collected from passive sensors in the patient's home in order to deliver individualized planning and interventions for persons with dementia. A number of scientific challenges are addressed: First, to co-design with dementia patients and their carers computerized support for shared decision-making about their care while allowing the patient to share the care plan. Second, to develop a new and open monitoring framework with which to configure sensor technologies to collect data about whether goals and actions specified for a person in their care plan are being achieved. This is developed top-down by associating care quality types and metrics elicited from the co-design activities with types of data that can be collected within the home, from passive and active sensors, and from the patient's feedback collected through a simple co-designed interface. These activities and data will be mapped to appropriate sensors and technological infrastructure with which to collect the data. Third, the application of machine learning models to analyze data collected via the sensing devices in order to investigate whether and to what extent activities outlined via the care plan are being achieved. The models will capture longitudinal data to track disease progression over time; as the disease progresses and captured data show that activities outlined in the care plan are not being achieved, the care plan may recommend alternative activities. Disease progression may also require care changes, and a data-driven approach can capture changes in a condition more quickly and allow care plans to evolve and be updated.
Keywords: care goals, decision-making, dementia, self-care, sensors
Procedia PDF Downloads 173
21016 Optimization of Strategies and Models Review for Optimal Technologies-Based on Fuzzy Schemes for Green Architecture
Authors: Ghada Elshafei, A. Elazim Negm
Abstract:
Recently, green architecture has become a significant path to a sustainable future. Green building design involves finding the balance between comfortable homebuilding and a sustainable environment. Moreover, new technologies such as artificial intelligence techniques are used to complement current practices in creating greener structures, keeping the built environment more sustainable. The most common objective is that green buildings should be designed to minimize the overall impact of the built environment on ecosystems in general, and particularly on human health and on the natural environment. This leads to protecting occupant health, improving employee productivity, reducing pollution, and sustaining the environment. In green building design, multiple parameters are in use which may be interrelated, contradictory, vague, and of a qualitative/quantitative nature. This paper presents a comprehensive critical state-of-the-art review of current practices based on fuzzy techniques and their combinations. It also presents how green architecture and buildings can be improved using the technologies that have been used for analysis, seeking optimal green solution strategies and models to assist in making the best possible decision among different alternatives.
Keywords: green architecture/building, technologies, optimization, strategies, fuzzy techniques, models
Procedia PDF Downloads 481
21015 A Smart CAD Program for Custom Hand Orthosis Generation Based on Anthropometric Relationships
Authors: Elissa D. Ledoux, Eric J. Barth
Abstract:
Producing custom orthotic devices is a time-consuming and iterative process. Efficiency could be increased with a smart CAD program that rapidly generates custom part files for 3D printing, reducing the need for a skilled orthosis technician as well as the hands-on time required. Anthropometric data for the hand were analyzed in order to determine dimensional relationships and reduce the number of measurements needed to parameterize the hand. Using these relationships, a smart CAD package was developed to produce custom-sized hand orthosis parts downloadable for 3D printing. Results showed that the number of anatomical parameters required could be reduced from 8 to 3, and that the relationships hold for 5th to 95th percentile male hands. The CAD parts regenerate correctly for the same range. This package could significantly impact the orthotics industry in terms of expedited production and reduced requirements for human resources and patient contact.
Keywords: CAD, hand, orthosis, orthotic, rehabilitation robotics, upper limb
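The regression coefficients behind the dimensional relationships are not published in the abstract, so the sketch below uses invented placeholder ratios purely to show the structure of driving a parameterized CAD model from three measurements:

```python
def derived_hand_dimensions(hand_length, hand_breadth, wrist_breadth):
    """Hypothetical linear anthropometric relationships: the paper reduces
    eight measurements to three but does not publish its coefficients, so
    the ratios below are illustrative placeholders only (all units mm)."""
    return {
        "palm_length":        0.55 * hand_length,
        "index_length":       0.40 * hand_length,
        "thumb_length":       0.30 * hand_length,
        "palm_thickness":     0.35 * wrist_breadth,
        "metacarpal_breadth": 0.95 * hand_breadth,
    }

# The three retained inputs would parameterize every part of the CAD model.
dims = derived_hand_dimensions(hand_length=189.0, hand_breadth=84.0,
                               wrist_breadth=58.0)
for name, value in dims.items():
    print(f"{name:>20}: {value:.1f} mm")
```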
Procedia PDF Downloads 228
21014 Improving Exchange Rate Forecasting Accuracy Using Ensemble Learning Techniques: A Comparative Study
Authors: Gokcen Ogruk-Maz, Sinan Yildirim
Abstract:
Introduction: Exchange rate forecasting is pivotal for informed financial decision-making, encompassing risk management, investment strategies, and international trade planning. However, traditional forecasting models often fail to capture the complexity and volatility of currency markets. This study explores the potential of ensemble learning techniques such as Random Forest, Gradient Boosting, and AdaBoost to enhance the accuracy and robustness of exchange rate predictions. Research Objectives: The primary objective is to evaluate the performance of ensemble methods in comparison to traditional econometric models such as Uncovered Interest Rate Parity, Purchasing Power Parity, and Monetary Models. By integrating advanced machine learning techniques with fundamental macroeconomic indicators, this research seeks to identify optimal approaches for predicting exchange rate movements across major currency pairs. Methodology: Using historical exchange rate data and economic indicators such as interest rates, inflation, money supply, and GDP, the study develops forecasting models leveraging ensemble techniques. Comparative analysis is performed against traditional models and hybrid approaches incorporating Facebook Prophet, Artificial Neural Networks, and XGBoost. The models are evaluated using statistical metrics like Mean Squared Error, the Theil Ratio, and Diebold-Mariano tests across five currency pairs (JPY to USD, AUD to USD, CAD to USD, GBP to USD, and NZD to USD). Preliminary Results: Results indicate that ensemble learning models consistently outperform traditional methods in predictive accuracy. XGBoost shows the strongest performance among the techniques evaluated, achieving significant improvements in forecast precision with consistently low p-values and Theil Ratios. Hybrid models integrating macroeconomic fundamentals into machine learning frameworks further enhance predictive accuracy. Discussion: The findings show the potential of ensemble methods to address the limitations of traditional models by capturing non-linear relationships and complex dynamics in exchange rate movements. While Random Forest and Gradient Boosting are effective, the superior performance of XGBoost suggests that its capacity for handling sparse and irregular data offers a distinct advantage in financial forecasting. Conclusion and Implications: This research demonstrates that ensemble learning techniques, particularly when combined with traditional macroeconomic fundamentals, provide a robust framework for improving exchange rate forecasting. The study offers actionable insights for financial practitioners and policymakers, emphasizing the value of integrating machine learning approaches into predictive modeling for monetary economics.
Keywords: exchange rate forecasting, ensemble learning, financial modeling, machine learning, monetary economics, XGBoost
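A compact sketch of the comparison protocol, assuming a chronological train/test split and scikit-learn's Random Forest, Gradient Boosting, and AdaBoost regressors on synthetic stand-in data (XGBoost and the Diebold-Mariano test are omitted here):

```python
import numpy as np
from sklearn.ensemble import (AdaBoostRegressor, GradientBoostingRegressor,
                              RandomForestRegressor)
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(5)
# Hypothetical monthly data: macro fundamentals (e.g. interest-rate and
# inflation differentials) plus a lagged return as predictors.
n = 300
fund = rng.normal(size=(n, 3))
ret = 0.3 * fund[:, 0] - 0.2 * fund[:, 1] + rng.normal(0, 0.5, n)
X = np.column_stack([fund, np.roll(ret, 1)])[1:]
y = ret[1:]

split = int(0.8 * len(y))  # chronological split, no shuffling for time series
models = {
    "random forest":     RandomForestRegressor(n_estimators=300, random_state=0),
    "gradient boosting": GradientBoostingRegressor(random_state=0),
    "adaboost":          AdaBoostRegressor(random_state=0),
}
for name, model in models.items():
    model.fit(X[:split], y[:split])
    mse = mean_squared_error(y[split:], model.predict(X[split:]))
    print(f"{name:>17}: test MSE = {mse:.4f}")
```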
Procedia PDF Downloads 7
21013 Supporting Densification through the Planning and Implementation of Road Infrastructure in the South African Context
Authors: K. Govender, M. Sinclair
Abstract:
This paper demonstrates a proof of concept whereby shorter trips and land use densification can be promoted through an alternative approach to the planning and implementation of road infrastructure in the South African context. It briefly discusses how the development of the Compact City concept relies on a combination of promoting shorter trips and densification through a change of focus in road infrastructure provision. The methodology developed in this paper uses a traffic model to test the impact of synthesized deterrence functions on congestion locations by assigning traffic to the study network. The results of this study demonstrate that intelligent planning of road infrastructure can indeed promote reduced urban sprawl, increased residential density, and mixed-use areas supported by an efficient public transport system, as well as reduced dependence on the freeway network, all with a fixed road infrastructure budget. The study has resonance for all cities where urban sprawl is seemingly unstoppable.
Keywords: compact cities, densification, road infrastructure planning, transportation modelling
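One standard way to realize a deterrence function in a trip-distribution model is the doubly constrained gravity model with f(c) = exp(-beta*c); the three-zone example below (hypothetical zones and costs) shows how a steeper deterrence parameter shifts demand toward shorter trips:

```python
import numpy as np

origins = np.array([1000.0, 800.0, 600.0])   # trip productions per zone
dests = np.array([900.0, 700.0, 800.0])      # trip attractions per zone
cost = np.array([[2.0, 9.0, 14.0],           # interzonal travel costs (min)
                 [9.0, 3.0, 7.0],
                 [14.0, 7.0, 2.0]])

def gravity(beta, iters=100):
    """Furness (iterative proportional fitting) balancing of T_ij = A_i B_j f_ij
    with exponential deterrence f_ij = exp(-beta * c_ij)."""
    f = np.exp(-beta * cost)
    A, B = np.ones(3), np.ones(3)
    for _ in range(iters):
        A = origins / (f @ B)    # enforce row sums = productions
        B = dests / (f.T @ A)    # enforce column sums = attractions
    return A[:, None] * B[None, :] * f

for beta in (0.05, 0.30):
    trips = gravity(beta)
    mean_cost = (trips * cost).sum() / trips.sum()
    print(f"beta = {beta:.2f}: mean trip cost = {mean_cost:.2f}")
```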
Procedia PDF Downloads 179
21012 Seismic Fragility of Weir Structure Considering Aging Degradation of Concrete Material
Authors: HoYoung Son, DongHoon Shin, WooYoung Jung
Abstract:
This study presents a seismic fragility framework for a concrete weir structure subjected to strong seismic ground motions; in particular, the concrete aging condition of the weir structure was taken into account. In order to understand the influence of concrete aging on the weir structure, the analytical seismic fragility of the structure was derived for pre- and post-deterioration of the concrete using probabilistic risk assessment. The performance of the concrete weir structure after five years was assumed for the concrete aging or deterioration; for this aged condition, the elastic modulus was simply reduced by about one-tenth compared with the initial condition of the weir structure. A 2D nonlinear finite element analysis considering the deterioration of the concrete was performed using the ABAQUS platform, a commercial structural analysis program. Analysis of the seismic fragility showed that the simplified concrete degradation resulted in an increase of almost 45% in the probability of failure at Limit State 3 compared with the initial construction stage.
Keywords: weir, FEM, concrete, fragility, aging
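A common way to derive an analytical fragility curve within probabilistic risk assessment is to fit a lognormal model P(failure | IM) = Phi(ln(IM/theta)/beta) to binary exceedance results by maximum likelihood; a sketch with synthetic stand-in analysis results, not the weir data:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(6)
# Hypothetical analysis results: peak ground accelerations (g) and whether the
# damage state was reached in each nonlinear run (True = exceeded).
pga = rng.uniform(0.05, 1.2, 200)
true_theta, true_beta = 0.5, 0.45
exceeded = rng.random(200) < norm.cdf(np.log(pga / true_theta) / true_beta)

def neg_log_lik(params):
    """Negative log-likelihood of the lognormal fragility curve."""
    theta, beta = params
    p = norm.cdf(np.log(pga / theta) / beta)
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(exceeded * np.log(p) + (~exceeded) * np.log(1 - p))

res = minimize(neg_log_lik, x0=[0.4, 0.5], bounds=[(0.01, 5), (0.05, 1.5)])
theta, beta = res.x
print(f"median capacity = {theta:.3f} g, dispersion = {beta:.3f}")
```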
Procedia PDF Downloads 485
21011 Placebo Analgesia in Older Age: Evidence from Event-Related Potentials
Authors: Angelika Dierolf, K. Rischer, A. Gonzalez-Roldan, P. Montoya, F. Anton, M. Van der Meulen
Abstract:
Placebo analgesia is a powerful cognitive endogenous pain modulation mechanism with high relevance for pain treatment. Older people would especially benefit from non-pharmacological pain interventions, since this age group is disproportionately affected by acute and chronic pain, while pharmacological treatments are less suitable due to polypharmacy and age-related changes in drug metabolism. Although aging is known to affect neurobiological and physiological aspects of pain perception, such as changes in pain threshold and pain tolerance, its effects on cognitive pain modulation strategies, including placebo analgesia, have hardly been investigated so far. In the present study, we are assessing placebo analgesia in 35 older adults (60 years and older) and 35 younger adults (between 18 and 35 years). Acute pain was induced with short transdermal electrical pulses to the inner forearm, using a concentric stimulating electrode. Stimulation intensities were individually adjusted to the participant's threshold. Next to the stimulation site, we applied sham transcutaneous electrical nerve stimulation (TENS). Participants were informed that sometimes the TENS device would be switched on (placebo condition) and sometimes it would be switched off (control condition). In reality, it was always switched off. Participants received alternating blocks of painful stimuli in the placebo and control conditions and were asked to rate the intensity and unpleasantness of each stimulus on a visual analog scale (VAS). Pain-related evoked potentials were recorded with a 64-channel EEG. Preliminary results show a reduced placebo effect in older compared to younger adults in both the behavioral and neurophysiological data. Older people experienced less subjective pain reduction under sham TENS treatment compared to younger adults, as evidenced by the VAS ratings. The N1 and P2 event-related potential components were generally reduced in the older group. While younger adults showed reduced N1 and P2 components under sham TENS treatment, this reduction was considerably smaller in older people. This reduced placebo effect in the older group suggests that cognitive pain modulation is altered in aging and may at least partly explain why older adults experience more pain. Our results highlight the need for a better understanding of the efficacy of non-pharmacological pain treatments in older adults and how these can be optimized to meet the specific requirements of this population.
Keywords: placebo analgesia, aging, acute pain, TENS, EEG
Procedia PDF Downloads 142
21010 Faster Pedestrian Recognition Using Deformable Part Models
Authors: Alessandro Preziosi, Antonio Prioletti, Luca Castangia
Abstract:
Deformable part models achieve high precision in pedestrian recognition, but all publicly available implementations are too slow for real-time applications. We implemented a deformable part model algorithm fast enough for real-time use by exploiting information about the camera position and orientation. This implementation is both faster and more precise than alternative DPM implementations. These results are obtained by computing convolutions in the frequency domain and using lookup tables to speed up feature computation. This approach is almost an order of magnitude faster than the reference DPM implementation, with no loss in precision. Knowing the position of the camera with respect to the horizon, it is also possible to prune many hypotheses based on their size and location. The range of acceptable sizes and positions is set by looking at the statistical distribution of bounding boxes in labelled images. With this approach, it is not necessary to compute the entire feature pyramid: for example, higher resolution features are only needed near the horizon. This results in an increase in mean average precision of 5% and an increase in speed by a factor of two. Furthermore, to reduce misdetections involving small pedestrians near the horizon, input images are supersampled near the horizon. Supersampling the image at 1.5 times the original scale results in an increase in precision of about 4%. The implementation was tested against the public KITTI dataset, obtaining an 8% improvement in mean average precision over the best performing DPM-based method. By allowing for a small loss in precision, computational time can easily be brought down to our target of 100 ms per image, reaching a solution that is faster and still more precise than all publicly available DPM implementations.
Keywords: autonomous vehicles, deformable part model, dpm, pedestrian detection, real time
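The frequency-domain trick is standard: correlating a part filter with a feature map becomes a pointwise product in Fourier space. A sketch with random stand-in arrays, comparing scipy's FFT-based convolution against a naive sliding window:

```python
import numpy as np
from scipy.signal import fftconvolve
from time import perf_counter

rng = np.random.default_rng(7)
feature_map = rng.normal(size=(256, 256))   # stand-in for one HOG feature channel
part_filter = rng.normal(size=(12, 12))     # stand-in for one DPM part filter

def direct(fm, flt):
    """Naive sliding-window cross-correlation, for comparison."""
    kh, kw = flt.shape
    h, w = fm.shape[0] - kh + 1, fm.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(fm[i:i + kh, j:j + kw] * flt)
    return out

t0 = perf_counter()
r1 = direct(feature_map, part_filter)
t1 = perf_counter()
# Correlation expressed as frequency-domain convolution with the flipped filter.
r2 = fftconvolve(feature_map, part_filter[::-1, ::-1], mode="valid")
t2 = perf_counter()
print(f"direct: {t1 - t0:.3f} s, fft: {t2 - t1:.3f} s, "
      f"max abs diff = {np.abs(r1 - r2).max():.2e}")
```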
Procedia PDF Downloads 284
21009 Parametric Estimation of U-Turn Vehicles
Authors: Yonas Masresha Aymeku
Abstract:
The purpose of capacity modelling at U-turns is to develop a relationship between capacity and the geometric characteristics of the facility. In fact, the few models available for the estimation of capacity at different transportation facilities do not provide specific guidelines for median openings. For this reason, an effort is made to estimate the capacity by collecting data sets from median openings on roads with different numbers of lanes in Hyderabad City, India. The wide difference (43%–59%) among the capacity values estimated by the existing models shows their limitation for mixed traffic situations. Thus, a distinct model is proposed for the estimation of the capacity of U-turning vehicles at median openings considering mixed traffic conditions, which further prompts an investigation of the different factors that might affect the capacity.
Keywords: geometric, guidelines, median, vehicles
Procedia PDF Downloads 72
21008 Bank ATM Monitoring System Using IR Sensor
Authors: P. Saravanakumar, N. Raja, M. Rameshkumar, D. Mohankumar, R. Sateeshkumar, B. Maheshwari
Abstract:
This research work is designed using Microsoft VB.Net as the front end and MySQL as the back end. The project deals with securing user transactions in the ATM system. The application contains an option for sending failed transaction details to the affected customer by SMS. When a customer withdraws an amount from the bank ATM, sometimes the cash is not dispensed but the amount is still debited to the customer's account. This application is used to avoid this type of problem in the ATM system. The proposed system uses an IR technique to detect the dispensed cash: an IR transmitter and an IR receiver are placed in the cash dispensing path and connected to each other through the IR signal. When a customer withdraws an amount, whether the cash is dispensed or not is monitored by the IR receiver. If the cash is dispensed, the signal between the IR receiver and the IR transmitter is interrupted, and the monitoring system debits the withdrawal amount from the customer's account. If the cash is not dispensed, the signal is not interrupted, and the withdrawal amount is not debited. If the transaction completes successfully, the transaction details, such as the withdrawal amount and the current balance, are sent to the customer via SMS. If the transaction fails, a transaction-failed message is sent to the customer.
Keywords: ATM system, monitoring system, IR Transmitter, IR Receiver
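The decision rule described in the abstract reduces to: debit only if the IR beam was interrupted within a timeout, otherwise notify the customer. A language-neutral sketch of that rule (the original is VB.Net; all hardware and banking callbacks here are hypothetical):

```python
import time

def wait_for_beam_interrupt(read_ir_receiver, timeout_s=10.0, poll_s=0.05):
    """Polls a hypothetical IR receiver input; returns True if the beam
    between transmitter and receiver is broken (cash passed) within the
    timeout. read_ir_receiver() is assumed to return True while the beam
    is intact and False when it is interrupted."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if not read_ir_receiver():
            return True
        time.sleep(poll_s)
    return False

def settle_withdrawal(amount, read_ir_receiver, debit_account, send_sms):
    """Core decision rule from the abstract: debit only if cash actually
    crossed the IR beam, otherwise notify the customer of the failure."""
    if wait_for_beam_interrupt(read_ir_receiver):
        balance = debit_account(amount)
        send_sms(f"Withdrawal of {amount} successful. Balance: {balance}")
    else:
        send_sms("Transaction failed: cash not dispensed. Account not debited.")
```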
Procedia PDF Downloads 311
21007 Forecasting the Volatility of Geophysical Time Series with Stochastic Volatility Models
Authors: Maria C. Mariani, Md Al Masum Bhuiyan, Osei K. Tweneboah, Hector G. Huizar
Abstract:
This work is devoted to the modeling of geophysical time series. A stochastic technique with time-varying parameters is used to forecast the volatility of data arising in geophysics. In this study, the volatility is defined as a logarithmic first-order autoregressive process. We observe that the inclusion of log-volatility into the time-varying parameter estimation significantly improves forecasting, which is facilitated via maximum likelihood estimation. This allows us to conclude that the estimation algorithm for the corresponding one-step-ahead suggested volatility (with ±2 standard prediction errors) is very feasible, since it possesses good convergence properties.
Keywords: Augmented Dickey Fuller Test, geophysical time series, maximum likelihood estimation, stochastic volatility model
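The model class described, log-volatility as a first-order autoregressive process, can be written as h_t = mu + phi*(h_{t-1} - mu) + sigma_eta*eta_t with y_t = exp(h_t/2)*eps_t. A simulation sketch with a one-step-ahead volatility forecast and a +/-2-standard-error band; the paper's time-varying-parameter MLE is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(8)

# Canonical stochastic volatility model with AR(1) log-volatility.
mu, phi, sigma_eta, n = -1.0, 0.95, 0.25, 1000
h = np.empty(n)
h[0] = mu
for t in range(1, n):
    h[t] = mu + phi * (h[t - 1] - mu) + sigma_eta * rng.normal()
y = np.exp(h / 2) * rng.normal(size=n)   # observed series

# One-step-ahead log-volatility forecast implied by the AR(1) recursion,
# with a +/- 2 standard-prediction-error band as in the abstract.
h_next = mu + phi * (h[-1] - mu)
band = 2 * sigma_eta
print(f"forecast sigma = {np.exp(h_next / 2):.3f}, "
      f"band = [{np.exp((h_next - band) / 2):.3f}, "
      f"{np.exp((h_next + band) / 2):.3f}]")
```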
Procedia PDF Downloads 317
21006 Full-Scale Test of a Causeway Embankment Supported by Raft-Aggregate Column Foundation on Soft Clay Deposit
Authors: Tri Harianto, Lawalenna Samang, St. Hijraini Nur, Arwin
Abstract:
Recently, a port development has been constructed in Makassar city, South Sulawesi Province, Indonesia. Makassar city is located in a lowland area dominated by a soft marine clay deposit. A two-kilometer causeway was constructed on the soft clay layer. In order to investigate the behavior of the causeway embankment, a full-scale test was conducted of a high embankment built on a soft clay deposit. The embankment, 3.5 m high, was supported by two types of foundation: raft and raft-aggregate column. Since the ground was undergoing consolidation due to the preload, the raft and raft-aggregate column foundations were monitored in order to analyze the vertical ground movement induced by the settlement of the foundation. In this study, the two types of foundation (raft and raft-aggregate column) were tested to observe the effectiveness of the raft-aggregate column compared to the raft foundation in reducing the settlement. The settlement was monitored during the construction stage using settlement plates located at the center and toe of the embankment. Measurements were taken every day for each embankment construction stage (4 months). In addition, an analytical calculation was conducted in this study for comparison with the full-scale test results. The results show that the raft-aggregate column foundation significantly reduces the settlement, by 30%, compared to the raft foundation. A raft-aggregate column foundation also reduced the time period of each loading stage. Good agreement between the analytical calculation and the full-scale test results was also found in this study.
Keywords: full-scale, preloading, raft-aggregate column, soft clay
Procedia PDF Downloads 302
21005 Pharmacogenetics of P2Y12 Receptor Inhibitors
Authors: Ragy Raafat Gaber Attaalla
Abstract:
Oral P2Y12 inhibitors, including clopidogrel, prasugrel, and ticagrelor, are frequently recommended for cardiovascular illness. Each of these medications has advantages and disadvantages. In the absence of genotyping, it has been demonstrated that the stronger platelet aggregation inhibitors prasugrel and ticagrelor are superior to clopidogrel at preventing significant adverse cardiovascular events following an acute coronary syndrome and percutaneous coronary intervention (PCI). Both, nevertheless, come with a higher risk of bleeding unrelated to coronary artery bypass. As a prodrug, clopidogrel needs to be bioactivated, principally by the CYP2C19 enzyme. A CYP2C19 no-function allele, and hence diminished or absent CYP2C19 enzyme activity, is present in about 30% of people. The reduced exposure to the active metabolite of clopidogrel and the reduced inhibition of platelet aggregation among clopidogrel-treated carriers of a CYP2C19 no-function allele likely contributed to the reduced efficacy of clopidogrel in clinical trials. Clopidogrel's pharmacogenetic results are strongest in conjunction with PCI, but evidence for other indications is growing. CYP2C19 genotype-guided antiplatelet medication following PCI is one of the most typical examples of clinical pharmacogenetic application. Guidance is available from expert consensus groups and regulatory bodies to assist with incorporating genetic information into P2Y12 inhibitor prescribing decisions. Here, we examine the data on the effects of genotype-guided P2Y12 inhibitor selection on clopidogrel response and outcomes, and we discuss tips for pharmacogenetic implementation. We also discuss procedures for using genotype data to choose P2Y12 inhibitor therapies, as well as unmet research needs. Finally, choosing a P2Y12 inhibitor medication that optimally balances the atherothrombotic and bleeding risks may be influenced by both clinical and genetic factors.
Keywords: inhibitors, cardiovascular events, coronary intervention, pharmacogenetic implementation
Procedia PDF Downloads 116