Search results for: adaptive instance normalization
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1884


564 Spatial Analysis of Flood Vulnerability in Highly Urbanized Area: A Case Study in Taipei City

Authors: Liang Weichien

Abstract:

Without adequate information and mitigation plans for natural disasters, the risk to densely populated urban areas will increase as populations grow, especially in Taiwan. Taiwan is recognized as one of the world's highest-risk areas, experiencing an average of 5.7 floods per year, and should therefore strengthen coherence and consensus on how cities can plan for floods and climate change. This study aims to understand flood vulnerability in Taipei City, Taiwan, by constructing indicators and calculating the vulnerability of each study unit. The indicators were grouped into sensitivity and adaptive capacity based on the Intergovernmental Panel on Climate Change definition of vulnerability, and were weighted using Principal Component Analysis. However, previous research assumed that the composition and influence of the indicators were the same in different areas, disregarding spatial correlation and potentially misrepresenting local vulnerability. This study therefore used Geographically Weighted Principal Component Analysis, adding a geographic weighting matrix so that the dominant flood-impact characteristics can differ across areas. Cross-validation and the Akaike Information Criterion were used to select the bandwidth, with a Gaussian kernel as the bandwidth weighting scheme. The results can be used to reduce damage potential by integrating them into local mitigation plans and urban planning.
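
As a rough illustration of the locally weighted PCA step described above, the sketch below computes a Gaussian-kernel-weighted PCA around one study unit. The indicator matrix, coordinates and bandwidth are hypothetical; in practice the bandwidth would be chosen by cross-validation or AIC, as the abstract notes.

```python
import numpy as np

def gaussian_weights(coords, target, bandwidth):
    """Gaussian kernel weights of every study unit relative to one target unit."""
    d = np.linalg.norm(coords - target, axis=1)
    return np.exp(-0.5 * (d / bandwidth) ** 2)

def local_pca(X, coords, target, bandwidth):
    """Weighted PCA around one location: returns local eigenvalues and loadings."""
    w = gaussian_weights(coords, target, bandwidth)
    Xc = X - np.average(X, axis=0, weights=w)       # locally weighted centering
    cov = (Xc * w[:, None]).T @ Xc / w.sum()        # locally weighted covariance
    eigval, eigvec = np.linalg.eigh(cov)
    order = np.argsort(eigval)[::-1]
    return eigval[order], eigvec[:, order]

# hypothetical data: 100 study units, 6 vulnerability indicators
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))
coords = rng.uniform(0, 10, size=(100, 2))
vals, vecs = local_pca(X, coords, coords[0], bandwidth=2.0)
print(vals)  # locally dominant flood-impact components at unit 0
```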

Keywords: flood vulnerability, geographically weighted principal components analysis, GWPCA, highly urbanized area, spatial correlation

Procedia PDF Downloads 284
563 An Improved Total Variation Regularization Method for Denoising Magnetocardiography

Authors: Yanping Liao, Congcong He, Ruigang Zhao

Abstract:

The application of magnetocardiography signals to detect cardiac electrical function is a technology developed in recent years. The magnetocardiography signal is detected with Superconducting Quantum Interference Devices (SQUID) and has considerable advantages over electrocardiography (ECG). Extracting the magnetocardiography (MCG) signal, which is buried in noise, is a critical issue to be resolved in cardiac monitoring systems and MCG applications. In order to remove the severe background noise, the Total Variation (TV) regularization method is applied to denoise the MCG signal. The approach transforms denoising into a minimization problem, and the majorization-minimization algorithm is applied to solve it iteratively. However, the traditional TV regularization method tends to cause a step (staircase) effect and lacks constraint adaptability. In this paper, an improved TV regularization method for denoising MCG signals is proposed to improve the denoising precision. The improvement has three parts. First, higher-order TV is applied to reduce the step effect, with the corresponding second-derivative matrix substituted for the first-order one. Then, the positions of the non-zero elements in the second-order derivative matrix are determined from the peak positions detected by a detection window. Finally, adaptive constraint parameters are defined to suppress noise while preserving signal peak characteristics. Theoretical analysis and experimental results show that the algorithm effectively improves the output signal-to-noise ratio and has superior performance.
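
The core TV/majorization-minimization idea can be sketched as follows for the first-order case; the paper's improvement swaps in a second-difference matrix and adaptive constraint parameters near detected peaks. The signal and parameter values here are hypothetical.

```python
import numpy as np

def tv_denoise_mm(y, lam, n_iter=50, eps=1e-8):
    """First-order TV denoising by majorization-minimization:
    minimizes 0.5*||y - x||^2 + lam*||D x||_1, with D the first-difference matrix."""
    N = len(y)
    D = np.diff(np.eye(N), axis=0)          # (N-1) x N first-difference matrix
    DDT = D @ D.T
    x = y.copy()
    for _ in range(n_iter):
        Lam = np.abs(D @ x) + eps           # majorizer weights |D x_k|
        F = np.diag(Lam) / lam + DDT
        x = y - D.T @ np.linalg.solve(F, D @ y)
    return x

# hypothetical noisy MCG-like trace: a single peak plus Gaussian noise
t = np.linspace(0, 1, 400)
clean = np.exp(-((t - 0.5) ** 2) / 0.001)
noisy = clean + 0.1 * np.random.randn(t.size)
denoised = tv_denoise_mm(noisy, lam=0.5)
```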

Keywords: constraint parameters, derivative matrix, magnetocardiography, regular term, total variation

Procedia PDF Downloads 152
562 Structural Correlates of Reduced Malicious Pleasure in Huntington's Disease

Authors: Sandra Baez, Mariana Pino, Mildred Berrio, Hernando Santamaria-Garcia, Lucas Sedeno, Adolfo Garcia, Sol Fittipaldi, Agustin Ibanez

Abstract:

Schadenfreude refers to the perceiver’s experience of pleasure at another’s misfortune. This is a multidetermined emotion which can be evoked by hostile feelings and envy. The experience of Schadenfreude engages mechanisms implicated in diverse social cognitive processes. For instance, Schadenfreude involves heightened reward processing, accompanied by increased striatal engagement and it interacts with mentalizing and perspective-taking abilities. Patients with Huntington's disease (HD) exhibit reductions of Schadenfreude experience, suggesting a role of striatal degeneration in such an impairment. However, no study has directly assessed the relationship between regional brain atrophy in HD and reduced Schadenfreude. This study investigated whether gray matter (GM) atrophy in HD patients correlates with ratings of Schadenfreude. First, we compared the performance of 20 HD patients and 23 controls on an experimental task designed to trigger Schadenfreude and envy (another social emotion acting as a control condition). Second, we compared GM volume between groups. Third, we examined brain regions where atrophy might be associated with specific impairments in the patients. Results showed that while both groups showed similar ratings of envy, HD patients reported lower Schadenfreude. The latter pattern was related to atrophy in regions of the reward system (ventral striatum) and the mentalizing network (precuneus and superior parietal lobule). Our results shed light on the intertwining of reward and socioemotional processes in Schadenfreude, while offering novel evidence about their neural correlates. In addition, our results open the door to future studies investigating social emotion processing in other clinical populations characterized by striatal or mentalizing network impairments (e.g., Parkinson’s disease, schizophrenia, autism spectrum disorders).

Keywords: envy, gray matter atrophy, Huntington's disease, Schadenfreude, social emotions

Procedia PDF Downloads 335
561 Discovery of Exoplanets in Kepler Data Using a Graphics Processing Unit Fast Folding Method and a Deep Learning Model

Authors: Kevin Wang, Jian Ge, Yinan Zhao, Kevin Willis

Abstract:

Kepler has discovered over 4000 exoplanets and candidates. However, current transit planet detection techniques based on wavelet analysis and the Box Least Squares (BLS) algorithm have limited sensitivity in detecting small planets with a low signal-to-noise ratio (SNR) and long periods with only 3-4 repeated signals over the mission lifetime of 4 years. This paper presents a novel precise-period transit signal detection methodology based on a new Graphics Processing Unit (GPU) Fast Folding algorithm in conjunction with a Convolutional Neural Network (CNN) to detect low-SNR and/or long-period transit planet signals. A comparison with BLS is conducted on both simulated light curves and real data, demonstrating that the new method has higher speed, sensitivity, and reliability. For instance, the new system can detect transits with an SNR as low as three, while the performance of BLS drops off quickly around an SNR of 7. Meanwhile, the GPU Fast Folding method folds light curves 25 times faster than BLS, a significant gain that allows exoplanet detection at unprecedented period precision. The new method has been tested against all known transit signals with 100% confirmation. In addition, it has been successfully applied to the Kepler Object of Interest (KOI) data and has identified a few new Earth-sized ultra-short-period (USP) exoplanet candidates and habitable planet candidates. The results highlight the promise of GPU Fast Folding as a replacement for the traditional BLS algorithm for finding small and/or long-period habitable and Earth-sized planet candidates in transit data taken with Kepler and other space transit missions such as TESS (Transiting Exoplanet Survey Satellite) and PLATO (PLAnetary Transits and Oscillations of stars).
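
A minimal CPU sketch of the phase-folding step follows; the paper's version runs on a GPU over many trial periods and feeds the folded profiles to a CNN. The cadence and light curve here are hypothetical.

```python
import numpy as np

def fold_light_curve(time, flux, period, n_bins=200):
    """Fold a light curve at a trial period and average it in phase bins;
    a transit shows up as a dip in the folded profile."""
    phase = (time % period) / period
    idx = np.minimum((phase * n_bins).astype(int), n_bins - 1)
    sums = np.bincount(idx, weights=flux, minlength=n_bins)
    counts = np.bincount(idx, minlength=n_bins)
    return np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)

def detection_statistic(profile):
    """Depth of the deepest phase bin below the median level."""
    return np.nanmedian(profile) - np.nanmin(profile)

# hypothetical quarter of Kepler-like data: 90 days at ~30-minute cadence
time = np.arange(0.0, 90.0, 0.0204)
flux = 1.0 + 1e-4 * np.random.randn(time.size)
stat, best_period = max((detection_statistic(fold_light_curve(time, flux, p)), p)
                        for p in np.linspace(1.0, 20.0, 400))
```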

Keywords: algorithms, astronomy data analysis, deep learning, exoplanet detection methods, small planets, habitable planets, transit photometry

Procedia PDF Downloads 222
560 Determination of Rare Earth Element Patterns in Uranium Matrix for Nuclear Forensics Application: Method Development for Inductively Coupled Plasma Mass Spectrometry (ICP-MS) Measurements

Authors: Bernadett Henn, Katalin Tálos, Éva Kováss Széles

Abstract:

Over the last 50 years, the worldwide spread of nuclear techniques has created several new problems for the environment and for human life. Nowadays, due to the increasing risk of terrorism worldwide, the potential occurrence of terrorist attacks using weapons of mass destruction containing radioactive or nuclear materials, e.g. dirty bombs, is a real threat. Uranium pellets, for instance, are one of the potential nuclear materials suitable for making such weapons. Nuclear forensics mainly focuses on determining the origin of confiscated or found nuclear and other radioactive materials that could be used for making a radioactive dispersal device. One of the most important signatures in nuclear forensics for tracing the origin of the material is the rare earth element (REE) pattern in the seized or found radioactive or nuclear samples. The concentration and the normalized pattern of the REE can be used as evidence of uranium origin. The REE are the fourteen lanthanides plus scandium and yttrium, which are mostly found together and at very low concentrations in uranium pellets. The difficulties of REE determination using the ICP-MS technique are the uranium matrix (high concentration of uranium) and the interferences among the lanthanides. In this work, our aim was to develop an effective chemical sample preparation process using extraction chromatography to separate the uranium matrix and the rare earth elements from each other, following and modifying procedures published in the literature. Secondly, our purpose was to optimize the ICP-MS measurement of REE concentrations. During method development, a REE model solution was first tested on two different types of extraction chromatographic resins (LN® and TRU®) and in different acidic media to examine the lanthanide separation. Uranium matrix was then added to the model solution and tested under the same conditions. The methods were tested and validated using REE UOC (uranium ore concentrate) reference materials. Samples were analyzed by sector field mass spectrometer (ICP-SFMS).

Keywords: extraction chromatography, nuclear forensics, rare earth elements, uranium

Procedia PDF Downloads 306
559 Significance of Treated Wastewater in Facing Consequences of Climate Change in Arid Regions

Authors: Jamal A. Radaideh, A. J. Radaideh

Abstract:

As a problem threatening the planet and its ecosystems, climate change has long been considered a disturbing topic impacting water resources in Jordan. Jordan is expected to be highly vulnerable to the consequences of climate change given the imbalance between available water resources and existing demands. Action on adaptation to climate impacts is therefore urgently needed to cope with the negative consequences of climate change. Adaptation to global change must include prudent management of treated wastewater as a renewable resource, especially in regions lacking groundwater or where groundwater is already overexploited. This paper highlights the expected negative effects of climate change on already scarce water sources and aims to motivate researchers and decision makers to take precautionary measures and find alternatives that keep water supplies at the levels required by the different consumption sectors in terms of quantity and quality. The paper focuses on assessing the potential of wastewater recycling as an adaptation measure to cope with water scarcity in Jordan and on considering wastewater an integral part of the national water budget that can help solve environmental problems. The paper also identifies a research topic designed to help the nation make the most appropriate use of this resource, namely agricultural irrigation. Wastewater is a promising alternative for filling the shortage in water resources, especially under climate change, and for preserving valuable fresh water so that priority can be given to securing drinking water for the population while raising the efficiency of use of the available resources. Jordan has more than 36 wastewater treatment plants distributed throughout the country, producing about 386,000 cubic metres per day of reclaimed water. According to water quality control programs, more than 85 percent of this water meets the quality requirements for irrigation of field crops and forest trees according to Jordanian Standard No. 893/2006.

Keywords: climate change effects on water resources, adaptation on climate change, treated wastewater recycling, arid and semi-arid regions, Jordan

Procedia PDF Downloads 110
558 Lean Comic GAN (LC-GAN): A Light-Weight GAN Architecture Leveraging Factorized Convolution and Teacher Forcing Distillation Style Loss Aimed to Capture Two Dimensional Animated Filtered Still Shots Using Mobile Phone Camera and Edge Devices

Authors: Kaustav Mukherjee

Abstract:

In this paper we propose a Neural Style Transfer solution: a lightweight, separable-convolution-kernel-based GAN architecture (LC-GAN) that is well suited to designing filters for mobile phone cameras and edge devices, converting any image into a 2D animated comic style reminiscent of movies such as He-Man, Superman, or The Jungle Book. This helps 2D animation artists create new characters from images of real people without spending endless hours manually drawing each and every pose of a cartoon, and it can even be used to create scenes from real-life images. This greatly reduces the turnaround time for making 2D animated movies and decreases cost in terms of manpower and time. In addition, being extremely lightweight, it can be used as a camera filter capable of taking comic-style shots with a mobile phone camera or edge-device cameras such as the Raspberry Pi 4 or NVIDIA Jetson Nano. Existing methods such as CartoonGAN, with a model size close to 170 MB, are too heavyweight for mobile phones and edge devices because of their scarce resources. Compared to the current state of the art, our proposed method has a total model size of 31 MB, which makes it ideal and ultra-efficient for designing camera filters on low-resource devices such as mobile phones, tablets and edge devices running a full OS or an RTOS. Owing to the use of high-resolution input and a larger convolution kernel size, it produces richer-resolution comic-style pictures with 6 times fewer parameters, trained with just 25 extra epochs on a dataset of fewer than 1000 images, which breaks the myth that all GANs need a mammoth amount of data. Our network reduces the density of the GAN architecture by using depthwise separable convolution, which performs the convolution on each of the RGB channels separately, followed by a pointwise convolution that brings the network back to the required number of channels using a 1x1 kernel. This reduces the number of parameters substantially and makes the model extremely lightweight and suitable for mobile phones and edge devices. The architecture also makes use of parameterised batch normalization (Goodfellow et al., Deep Learning, chapter on optimization for training deep models), which lets the network retain the training benefits of batch normalization while preserving non-linear feature capture through the learnable parameters.
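
A minimal PyTorch sketch of the depthwise separable convolution block described above; this is not the authors' full generator, and the channel sizes are illustrative assumptions.

```python
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise conv (one filter per channel) followed by a 1x1 pointwise conv,
    the building block used to shrink the generator's parameter count."""
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size, stride,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.norm = nn.BatchNorm2d(out_ch)   # learnable (parameterised) batch norm
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.norm(self.pointwise(self.depthwise(x))))

# parameter comparison against a standard 3x3 convolution, 64 -> 128 channels
std = nn.Conv2d(64, 128, 3, padding=1)
sep = DepthwiseSeparableConv(64, 128)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(std), count(sep))   # the separable block is several times smaller
```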

Keywords: comic stylisation from camera image using GAN, creating 2D animated movie style custom stickers from images, depth-wise separable convolutional neural network for light-weight GAN architecture for EDGE devices, GAN architecture for 2D animated cartoonizing neural style, neural style transfer for edge, model distillation, perceptual loss

Procedia PDF Downloads 130
557 The Effect of Emotion Self-Confidence and Perceived Social Support on Hong Kong Higher-Education Students' Suicide-Related Emotional Experiences

Authors: K. C. Ching

Abstract:

There is growing public concern over the increasing prevalence of student suicide in Hong Kong. Some identify the problem with insufficient social support, while others attribute it to the vast fluctuations in emotional experience and the hindrances to emotion regulation, both typical of adolescence and emerging adulthood. This study was thus designed to explore the respective effects of perceived social support and emotional self-confidence on positive and negative emotions. Fifty-seven Hong Kong higher-education students (17 males, 40 females) aged between 18 and 25 (M = 21.78) responded to an online questionnaire consisting of self-reported measures of perceived social support, emotional self-confidence, positive emotions, and negative emotions. Hierarchical regression analysis revealed that emotional self-confidence was positively associated with positive emotions and negatively with negative emotions, while perceived social support was positively associated with positive emotions but was not related to negative emotions. Perceived social support and emotional self-confidence both predicted positive emotions, but did not interact to predict any emotional outcome. It is concluded that students' positive and negative emotional experiences are closely related to their emotion-regulation process. For social support, by contrast, the effect is merely protective: although perceived social support generally promotes positive emotions, it alone does not suffice to alleviate students' negative emotions. These conclusions carry profound implications for suicide prevention practices, among them that most existing suicide prevention campaigns should advance from merely fostering mutual support to directly promoting adaptive coping with emotional negativity.

Keywords: emerging adulthood, emotional self-confidence, Hong Kong, perceived social support, suicide prevention

Procedia PDF Downloads 141
556 Optimization Modeling of the Hybrid Antenna Array for the DoA Estimation

Authors: Somayeh Komeylian

Abstract:

The direction of arrival (DoA) estimation is a crucial aspect of radar technologies for detecting and distinguishing several signal sources. In this scenario, modeling the antenna array output involves numerous parameters, including noise samples, signal waveform, signal directions, number of signals, and signal-to-noise ratio (SNR), and DoA estimation methods therefore rely heavily on generalization over a large number of training data sets. Hence, we comparatively implement two different optimization models for DoA estimation: (1) a decision directed acyclic graph (DDAG) implementation of the multiclass least-squares support vector machine (LS-SVM), and (2) a deep neural network (DNN) radial basis function (RBF) optimization method. We have rigorously verified that the LS-SVM DDAG algorithm is capable of accurately classifying DoAs for the three classes. However, the accuracy and robustness of DoA estimation remain highly sensitive to technological imperfections of the antenna arrays, such as non-ideal array design and manufacture, array implementation, mutual coupling effects, and background radiation, so the method may fail to deliver high precision. This work therefore makes a further contribution by developing the DNN-RBF model for DoA estimation, in order to overcome the limitations of non-parametric and data-driven methods in terms of array imperfection and generalization. The numerical results of implementing the DNN-RBF model confirm better DoA estimation performance compared with the LS-SVM algorithm. Finally, we evaluate the performance of the two optimization methods for DoA estimation using the mean squared error (MSE).
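
A minimal sketch of the RBF-network idea and the MSE criterion mentioned above; the feature matrix and angles below are hypothetical placeholders for the actual array snapshot model.

```python
import numpy as np

def rbf_features(X, centers, gamma):
    """Gaussian radial-basis expansion of the input features."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# hypothetical training data: array-response features -> arrival angle (degrees)
rng = np.random.default_rng(1)
X_train = rng.normal(size=(500, 8))
y_train = rng.uniform(-60, 60, size=500)

centers = X_train[rng.choice(500, 30, replace=False)]   # RBF centres taken from data
Phi = rbf_features(X_train, centers, gamma=0.1)
w, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)       # linear output layer

y_hat = rbf_features(X_train, centers, gamma=0.1) @ w
mse = np.mean((y_train - y_hat) ** 2)                   # the evaluation metric used above
```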

Keywords: DoA estimation, Adaptive antenna array, Deep Neural Network, LS-SVM optimization model, Radial basis function, and MSE

Procedia PDF Downloads 99
555 AER Model: An Integrated Artificial Society Modeling Method for Cloud Manufacturing Service Economic System

Authors: Deyu Zhou, Xiao Xue, Lizhen Cui

Abstract:

With the increasing collaboration among various services and the growing complexity of user demands, there are more and more factors affecting the stable development of the cloud manufacturing service economic system (CMSE). This poses new challenges to the evolution analysis of the CMSE. Many researchers have modeled and analyzed the evolution process of CMSE from the perspectives of individual learning and internal factors influencing the system, but without considering other important characteristics of the system's individuals (such as heterogeneity, bounded rationality, etc.) and the impact of external environmental factors. Therefore, this paper proposes an integrated artificial social model for the cloud manufacturing service economic system, which considers both the characteristics of the system's individuals and the internal and external influencing factors of the system. The model consists of three parts: the Agent model, environment model, and rules model (Agent-Environment-Rules, AER): (1) the Agent model considers important features of the individuals, such as heterogeneity and bounded rationality, based on the adaptive behavior mechanisms of perception, action, and decision-making; (2) the environment model describes the activity space of the individuals (real or virtual environment); (3) the rules model, as the driving force of system evolution, describes the mechanism of the entire system's operation and evolution. Finally, this paper verifies the effectiveness of the AER model through computational and experimental results.
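
A highly simplified sketch of how the three AER parts could be wired together; the agent attributes, memory length and demand rule are illustrative assumptions, not the paper's model.

```python
from dataclasses import dataclass, field
import random

@dataclass
class Agent:
    """Bounded-rational service agent: perceives the market and adjusts its offer."""
    quality: float
    price: float
    memory: list = field(default_factory=list)   # limited memory -> bounded rationality

    def decide(self, market_price):
        # keep only the last few observations when adjusting the offered price
        self.memory = (self.memory + [market_price])[-5:]
        self.price += 0.1 * (sum(self.memory) / len(self.memory) - self.price)

@dataclass
class Environment:
    """Activity space holding the agents; the rules model drives each step."""
    agents: list

    def step(self, rules):
        market_price = rules["demand_level"] / max(len(self.agents), 1)
        for a in self.agents:
            a.decide(market_price)

# minimal run: 50 heterogeneous agents and one external rule (a demand shock)
env = Environment([Agent(random.random(), 1.0) for _ in range(50)])
for t in range(100):
    env.step({"demand_level": 60 if t < 50 else 40})
```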

Keywords: cloud manufacturing service economic system (CMSE), AER model, artificial social modeling, integrated framework, computing experiment, agent-based modeling, social networks

Procedia PDF Downloads 78
554 Estimated Heat Production, Blood Parameters and Mitochondrial DNA Copy Number of Nellore Bulls with High and Low Residual Feed Intake

Authors: Welder A. Baldassini, Jon J. Ramsey, Marcos R. Chiaratti, Amália S. Chaves, Renata H. Branco, Sarah F. M. Bonilha, Dante P. D. Lanna

Abstract:

With increased production costs there is a need for animals that are more efficient in terms of meat production. In this context, the role of mitochondrial DNA (mtDNA) in physiological processes in liver, muscle and adipose tissues may account for inter-animal variation in energy expenditure and heat production. The purpose of this study was to investigate whether the amounts of mtDNA in liver, muscle and adipose tissue (subcutaneous and visceral depots) of Nellore bulls are associated with residual feed intake (RFI) and estimated heat production (EHP). Eighteen animals were individually fed in a feedlot for 90 days. RFI values were obtained by regression of dry matter intake (DMI) on average daily gain (ADG) and mid-test metabolic body weight (BW). The animals were classified into low (more efficient) and high (less efficient) RFI groups. The bulls were then randomly distributed in individual pens where they were given excess feed twice daily, resulting in 5 to 10% orts, for 90 d on a diet containing 15% crude protein and 2.7 Mcal ME/kg DM. The heart rate (HR) of the bulls was monitored for 4 consecutive days and used for the calculation of EHP. Electrodes were fitted to the bulls with stretch belts (POLAR RS400; Kempele, Finland). To calculate the oxygen pulse (O2P), oxygen consumption was measured with a facemask connected to a gas analyzer (EXHALYZER, ECOMedics, Zurich, Switzerland) while HR was simultaneously recorded for a 15-minute period. Daily oxygen (O2) consumption was calculated by multiplying the volume of O2 per beat by the total daily beats. EHP was calculated by multiplying O2P by the average HR obtained during the 4 days, assuming 4.89 kcal/L of O2, and daily EHP was expressed in kilocalories per day per kilogram of metabolic BW (kcal/day/kg BW^0.75). Blood samples were collected between days 45 and 90 of the trial period in order to measure hemoglobin concentration and hematocrit. The bulls were slaughtered in an experimental slaughterhouse in accordance with current guidelines. Immediately after slaughter, a section of liver, a portion of longissimus thoracis (LT) muscle, a portion of subcutaneous fat (surrounding the LT muscle) and portions of visceral fat (kidney, pelvic and inguinal fat) were collected. Samples of liver, muscle and adipose tissues were used to quantify mtDNA copy number per cell. The number of mtDNA copies was determined by normalization of the mtDNA amount against a single-copy nuclear gene (B2M). Means of EHP, hemoglobin and hematocrit of high and low RFI bulls were compared using two-sample t-tests. Additionally, one-way ANOVA was used to compare mtDNA quantification considering the main effect of RFI group. We found lower EHP (83.047 vs. 97.590 kcal/day/kg BW^0.75; P < 0.10), hemoglobin concentration (13.533 vs. 15.108 g/dL; P < 0.10) and hematocrit percentage (39.3 vs. 43.6%; P < 0.05) in low compared to high RFI bulls, respectively, which may be useful traits to identify efficient animals. However, no differences were observed in the mtDNA content of liver, muscle and adipose tissue between Nellore bulls with high and low RFI.
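
A worked illustration of the heat-production calculation described above; the oxygen pulse, heart rate and body weight are hypothetical values (chosen to land in a plausible range), not the study's measurements.

```python
# EHP = O2 pulse x total daily beats x 4.89 kcal per litre of O2,
# then scaled by metabolic body weight BW^0.75
o2_per_beat_l = 0.016           # hypothetical oxygen pulse: litres O2 per heart beat
mean_hr_bpm = 70.0              # hypothetical mean heart rate over the 4-day recording
body_weight_kg = 400.0          # hypothetical body weight

beats_per_day = mean_hr_bpm * 60 * 24
o2_per_day_l = o2_per_beat_l * beats_per_day
ehp_kcal_day = o2_per_day_l * 4.89
ehp_per_metabolic_bw = ehp_kcal_day / body_weight_kg ** 0.75

print(round(ehp_kcal_day), round(ehp_per_metabolic_bw, 1))  # kcal/day, kcal/day/kg BW^0.75
```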

Keywords: bioenergetics, Bos indicus, feed efficiency, mitochondria

Procedia PDF Downloads 244
553 Design of Robust and Intelligent Controller for Active Removal of Space Debris

Authors: Shabadini Sampath, Jinglang Feng

Abstract:

With their huge kinetic energy, space debris poses a major threat to astronauts' activities and to spacecraft in orbit if a collision happens. Active removal of space debris is required in order to avoid the frequent collisions that would otherwise occur; in addition, the amount of space debris would otherwise increase uncontrollably, posing a threat to the safety of the entire space system. However, the safe and reliable removal of large space debris has remained a huge challenge to date. While capturing and deorbiting space debris, the space manipulator has to achieve high control precision. However, uncertainties and unknown disturbances make coordinated control of the space manipulator difficult. To address this challenge, this paper focuses on developing a robust and intelligent control algorithm that controls the joint movement and restricts it to the sliding manifold while reducing the effect of uncertainties. A neural network adaptive sliding mode controller (NNASMC) is applied with the objective of finding a control law such that the joint motions of the space manipulator follow the given trajectory. Computed torque control (CTC), an effective motion control strategy, is used in this paper to compute the manipulator arm torque that generates the required motion. Based on the Lyapunov stability theorem, the proposed NNASMC combined with CTC guarantees the robustness and global asymptotic stability of the closed-loop control system. Finally, the controllers used in the paper are modeled and simulated using MATLAB Simulink, and the results are presented to prove the effectiveness of the proposed approach.
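
A minimal sketch of the computed torque control term used above; the neural-network adaptive sliding-mode part is omitted, and the two-joint dynamics are placeholder assumptions rather than a space-manipulator model.

```python
import numpy as np

def computed_torque(q, dq, q_des, dq_des, ddq_des, M, C, g, Kp, Kd):
    """Computed torque control law (feedback linearisation):
    tau = M(q)(ddq_des + Kd(dq_des - dq) + Kp(q_des - q)) + C(q, dq) dq + g(q)."""
    e, de = q_des - q, dq_des - dq
    return M(q) @ (ddq_des + Kd @ de + Kp @ e) + C(q, dq) @ dq + g(q)

# hypothetical 2-joint arm: constant inertia, no Coriolis, no gravity (free-floating)
M = lambda q: np.diag([2.0, 1.0])
C = lambda q, dq: np.zeros((2, 2))
g = lambda q: np.zeros(2)
Kp, Kd = np.diag([25.0, 25.0]), np.diag([10.0, 10.0])

q, dq = np.zeros(2), np.zeros(2)
tau = computed_torque(q, dq, np.array([0.5, -0.3]), np.zeros(2), np.zeros(2),
                      M, C, g, Kp, Kd)
```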

Keywords: GNC, active removal of space debris, AI controllers, MATLAB Simulink

Procedia PDF Downloads 130
552 A Constructivist and Strategic Approach to School Learning: A Study in a Tunisian Primary School

Authors: Slah Eddine Ben Fadhel

Abstract:

Despite the development of new pedagogic methods, current teaching practices put more emphasis on the learning products than on the processes learners deploy. In school syllabi, for instance, very little time is devoted to both the explanation and analysis of strategies aimed at resolving problems by means of targeting students’ metacognitive procedures. Within a cognitive framework, teaching/learning contexts are conceived of in terms of cognitive, metacognitive and affective activities intended for the treatment of information. During these activities, learners come to develop an array of knowledge and strategies which can be subsumed within an active and constructive process. Through the investigation of strategies and metacognition concepts, the purpose is to reflect upon the modalities at the heart of the learning process and to demonstrate, similarly, the inherent significance of a cognitive approach to learning. The scope of this paper is predicated on a study where the population is a group of 76 primary school pupils who experienced difficulty with learning French. The population was divided into two groups: the first group was submitted during three months to a strategy-based training to learn French. All through this phase, the teachers centred class activities round making learners aware of the strategies the latter deployed and geared them towards appraising the steps these learners had themselves taken by means of a variety of tools, most prominent among which is the logbook. The second group was submitted to the usual learning context with no recourse whatsoever to any strategy-oriented tasks. The results of both groups point out the improvement of linguistic competences in the French language in the case of those pupils who were trained by means of strategic procedures. Furthermore, this improvement was noted in relation with the native language (Arabic), a fact that tends to highlight the importance of the interdisciplinary investigation of (meta-)cognitive strategies. These results show that strategic learning promotes in pupils the development of a better awareness of their own processes, which contributes to improving their general linguistic competences.

Keywords: constructive approach, cognitive strategies, metacognition, learning

Procedia PDF Downloads 210
551 The Vulnerability of Climate Change to Farmers, Fishermen and Herdsmen in Nigeria

Authors: Nasiru Medugu Idris

Abstract:

This research aims to assess the vulnerability of rural communities (farmers, herdsmen and fishermen) in Nigeria to climate change, with a view to studying the underlying causes and degree of vulnerability and examining the conflict between farmers and herdsmen that has resulted from climate change. The research employed quantitative and qualitative data gathering techniques as well as physical observation. Six states (Kebbi, Adamawa, Nasarawa, Osun, Ebonyi, and Akwa Ibom) were selected on the grounds that they are key food production areas in the country and therefore essential to continued food security; they also double as fishing communities, which allows a comprehensive study of the effects of climate on farmers and fishermen alike. Community focus group discussions were carried out in the various states as interactive sessions and to obtain first-hand information on the level of awareness of climate change. Climate data from the Nigerian Meteorological Agency over the past decade were collected for the purpose of analyzing climate trends. The study observed that the vulnerability of rural dwellers, especially farmers, herdsmen and fishermen, to climate change is very high, given their socioeconomic conditions and the ethnic and historical context of their livelihoods. The study therefore recommends that urgent steps be taken to help control natural hazards and man-made disasters, and that serious measures be adopted to minimize severe societal, economic and political crises, some of which may escalate into violent conflicts unless addressed through conflict resolution and prevention efforts that initiate a process of de-escalation. The study also recommends best-fit adaptation and mitigation measures for climate change vulnerability in rural communities of Nigeria.

Keywords: adaptation, farmers, fishermen, herdsmen

Procedia PDF Downloads 190
550 Methods for Enhancing Ensemble Learning or Improving Classifiers of This Technique in the Analysis and Classification of Brain Signals

Authors: Seyed Mehdi Ghezi, Hesam Hasanpoor

Abstract:

This scientific article explores enhancement methods for ensemble learning with the aim of improving the performance of classifiers in the analysis and classification of brain signals. The research approach in this field consists of two main parts, each with its own strengths and weaknesses. The choice of approach depends on the specific research question and available resources. By combining these approaches and leveraging their respective strengths, researchers can enhance the accuracy and reliability of classification results, consequently advancing our understanding of the brain and its functions. The first approach focuses on utilizing machine learning methods to identify the best features among the vast array of features present in brain signals. The selection of features varies depending on the research objective, and different techniques have been employed for this purpose. For instance, the genetic algorithm has been used in some studies to identify the best features, while optimization methods have been utilized in others to identify the most influential features. Additionally, machine learning techniques have been applied to determine the influential electrodes in classification. Ensemble learning plays a crucial role in identifying the best features that contribute to learning, thereby improving the overall results. The second approach concentrates on designing and implementing methods for selecting the best classifier or utilizing meta-classifiers to enhance the final results in ensemble learning. In a different section of the research, a single classifier is used instead of multiple classifiers, employing different sets of features to improve the results. The article provides an in-depth examination of each technique, highlighting their advantages and limitations. By integrating these techniques, researchers can enhance the performance of classifiers in the analysis and classification of brain signals. This advancement in ensemble learning methodologies contributes to a better understanding of the brain and its functions, ultimately leading to improved accuracy and reliability in brain signal analysis and classification.
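
One common way to combine the two approaches discussed above, feature selection followed by an ensemble with a meta-classifier, can be sketched with scikit-learn; the data and model choices are illustrative assumptions, not the article's specific pipeline.

```python
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.datasets import make_classification

# hypothetical EEG-style feature matrix: 200 trials x 64 features
X, y = make_classification(n_samples=200, n_features=64, n_informative=10, random_state=0)

# feature selection first (the "best features" step), then base learners plus a meta-classifier
base = [("svm", SVC(probability=True)), ("rf", RandomForestClassifier(n_estimators=100))]
model = make_pipeline(
    SelectKBest(f_classif, k=20),   # keep the most informative features/channels
    StackingClassifier(estimators=base, final_estimator=LogisticRegression()),
)
model.fit(X, y)
print(model.score(X, y))
```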

Keywords: ensemble learning, brain signals, classification, feature selection, machine learning, genetic algorithm, optimization methods, influential features, influential electrodes, meta-classifiers

Procedia PDF Downloads 74
549 A Multi-Dimensional Neural Network Using the Fisher Transform to Predict the Price Evolution for Algorithmic Trading in Financial Markets

Authors: Cristian Pauna

Abstract:

Trading the financial markets is a widespread activity today. A large number of investors, companies, and public or private funds buy and sell every day in order to make a profit. Algorithmic trading has become the prevalent way to make trade decisions since the advent of electronic trading: orders are sent almost instantly by computers using mathematical models. This paper presents a price prediction methodology based on a multi-dimensional neural network. Using the Fisher transform, the neural network is trained in a low-latency, auto-adaptive process in order to predict the price evolution for the next period of time. The model is designed especially for algorithmic trading and uses real-time price series. It was found that the characteristics of the Fisher function applied at the node scale level can generate reliable trading signals within the neural network methodology. Real-time tests showed that the method can be applied in any timeframe to trade the financial markets. The paper also includes the steps to implement the presented methodology in an automated trading system. Real trading results are displayed and analyzed in order to qualify the model. In conclusion, the compared results reveal that the neural network methodology applied together with the Fisher transform at the node level can generate good price predictions and build reliable trading signals for algorithmic trading.
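
A minimal sketch of the rolling Fisher transform that would feed the network's input nodes; the window length and price series are hypothetical.

```python
import numpy as np

def fisher_transform(prices, window=10):
    """Rolling Fisher transform: map each price into (-1, 1) relative to the recent
    high/low, then apply 0.5*ln((1+x)/(1-x)) to make the series closer to Gaussian."""
    prices = np.asarray(prices, dtype=float)
    out = np.full(prices.shape, np.nan)
    for i in range(window - 1, len(prices)):
        lo = prices[i - window + 1:i + 1].min()
        hi = prices[i - window + 1:i + 1].max()
        x = 0.0 if hi == lo else 2.0 * (prices[i] - lo) / (hi - lo) - 1.0
        x = np.clip(x, -0.999, 0.999)         # keep the log argument finite
        out[i] = 0.5 * np.log((1.0 + x) / (1.0 - x))
    return out

# hypothetical price series; the transformed values would feed the network inputs
features = fisher_transform(100 + np.cumsum(np.random.randn(500)))
```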

Keywords: algorithmic trading, automated trading systems, financial markets, high-frequency trading, neural network

Procedia PDF Downloads 160
548 Using Dynamic Glazing to Eliminate Mechanical Cooling in Multi-family Highrise Buildings

Authors: Ranojoy Dutta, Adam Barker

Abstract:

Multifamily residential buildings are increasingly being built with large glazed areas to provide tenants with greater daylight and outdoor views. However, traditional double-glazed window assemblies can lead to significant thermal discomfort from high radiant temperatures as well as increased cooling energy use to address solar gains. Dynamic glazing provides an effective solution by actively controlling solar transmission to maintain indoor thermal comfort, without compromising the visual connection to outdoors. This study uses thermal simulations across three Canadian cities (Toronto, Vancouver and Montreal) to verify if dynamic glazing along with operable windows and ceiling fans can maintain the indoor operative temperature of a prototype southwest facing high-rise apartment unit within the ASHRAE 55 adaptive comfort range for a majority of the year, without any mechanical cooling. Since this study proposes the use of natural ventilation for cooling and the typical building life cycle is 30-40 years, the typical weather files have been modified based on accepted global warming projections for increased air temperatures by 2050. Results for the prototype apartment confirm that thermal discomfort with dynamic glazing occurs only for less than 0.7% of the year. However, in the baseline scenario with low-E glass there are up to 7% annual hours of discomfort despite natural ventilation with operable windows and improved air movement with ceiling fans.
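
A minimal sketch of the ASHRAE 55 adaptive comfort band used as the evaluation criterion above; this is the standard adaptive relation, with the standard's applicability limits on outdoor temperature omitted for brevity.

```python
def adaptive_comfort_band(t_running_mean_out, acceptability="80%"):
    """ASHRAE 55 adaptive model: neutral operative temperature from the prevailing
    mean outdoor temperature, with the 80% or 90% acceptability band (degrees C)."""
    t_comf = 0.31 * t_running_mean_out + 17.8
    half_band = 3.5 if acceptability == "80%" else 2.5
    return t_comf - half_band, t_comf + half_band

# e.g. a warm week with a 26 C prevailing mean outdoor temperature
low, high = adaptive_comfort_band(26.0)
print(round(low, 1), round(high, 1))   # operative-temperature comfort limits
```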

Keywords: electrochromic glazing, multi-family housing, passive cooling, thermal comfort, natural ventilation

Procedia PDF Downloads 104
547 Teaching Non-Euclidean Geometries to Learn Euclidean One: An Experimental Study

Authors: Silvia Benvenuti, Alessandra Cardinali

Abstract:

In recent years, in relation for instance to the Covid-19 pandemic and the evidence of climate change, it has become quite clear that the development of a young person into an adult citizen requires a solid scientific background. Citizens are required to exert logical thinking and know the methods of science in order to adapt, understand, and develop as persons. Mathematics sits at the core of these required skills: learning the axiomatic method is fundamental to understanding how the hard sciences work and helps consolidate logical thinking, which will be useful for the entire life of a student. At the same time, research shows that the axiomatic study of geometry is a problematic topic for students, even for those with an interest in mathematics. With this in mind, the main goals of the research work we describe are: (1) to show whether non-Euclidean geometries can be a tool that allows students to consolidate their knowledge of Euclidean geometry by developing it in a critical way; (2) to promote understanding of the modern axiomatic method in geometry; (3) to give students a new perspective on mathematics so that they can see it as a creative activity and a widely discussed topic with a historical background. One of the main issues in the state of the art on this topic is the shortage of experimental studies with students. For this reason, our aim is to provide further experimental evidence of the potential benefits of teaching non-Euclidean geometries at high school, based on data collected from a study started in 2005 in the frame of the Italian national Piano Lauree Scientifiche, continued in a teacher training course organized in September 2018, refined in a pilot study that involved 77 high school students during the school years 2018-2019 and 2019-2020, and finally implemented through an experimental study conducted in 2020-21 with 87 high school students. Our study shows that there is potential for further research to challenge current conceptions of the school mathematics curriculum and of the capabilities of high school mathematics students.

Keywords: Non-Euclidean geometries, beliefs about mathematics, questionnaires, modern axiomatic method

Procedia PDF Downloads 75
546 Leveraging Mobile Apps for Citizen-Centric Urban Planning: Insights from Tajawob Implementation

Authors: Alae El Fahsi

Abstract:

This study explores the ‘Tajawob’ app's role in urban development, demonstrating how mobile applications can empower citizens and facilitate urban planning. Tajawob serves as a digital platform for community feedback, engagement, and participatory governance, addressing urban challenges through innovative tech solutions. This research synthesizes data from a variety of sources, including user feedback, engagement metrics, and interviews with city officials, to assess the app’s impact on citizen participation in urban development in Morocco. By integrating advanced data analytics and user experience design, Tajawob has bridged the communication gap between citizens and government officials, fostering a more collaborative and transparent urban planning process. The findings reveal a significant increase in civic engagement, with users actively contributing to urban management decisions, thereby enhancing the responsiveness and inclusivity of urban governance. Challenges such as digital literacy, infrastructure limitations, and privacy concerns are also discussed, providing a comprehensive overview of the obstacles and opportunities presented by mobile app-based citizen engagement platforms. The study concludes with strategic recommendations for scaling the Tajawob model to other contexts, emphasizing the importance of adaptive technology solutions in meeting the evolving needs of urban populations. This research contributes to the burgeoning field of smart city innovations, offering key insights into the role of digital tools in facilitating more democratic and participatory urban environments.

Keywords: smart cities, digital governance, urban planning, strategic design

Procedia PDF Downloads 57
545 Buffer Allocation and Traffic Shaping Policies Implemented in Routers Based on a New Adaptive Intelligent Multi Agent Approach

Authors: M. Taheri Tehrani, H. Ajorloo

Abstract:

In this paper, an intelligent multi-agent framework is developed for each router, in which agents have two vital functionalities, traffic shaping and buffer allocation, and are positioned at the ports of the routers. With the traffic shaping functionality, agents shape the forwarded traffic by dynamic, real-time adjustment of the token generation rate in a token bucket algorithm; with the buffer allocation functionality, agents share their buffer capacity with each other based on their needs and the conditions of the network. This dynamic and intelligent framework gives some ports the opportunity to work better under bursty and busier conditions. The agents act based on a Reinforcement Learning (RL) algorithm and consider the effective parameters in their decision process. Since RL is limited in how many parameters it can consider due to the volume of calculations involved, we utilize our novel method, which applies Principal Component Analysis (PCA) to the RL inputs and thereby allows the algorithm to consider as many parameters as needed in its decision process. When this implementation is compared to our previous work, in which traffic shaping was done without any sharing or dynamic allocation of buffer size for each port, lower packet drop is observed across the whole network, and specifically at the source routers. These methods are implemented in our previously proposed intelligent simulation environment so that the performance metrics can be compared more fairly. The results obtained from this simulation environment show efficient and dynamic utilization of resources in terms of bandwidth and the buffer capacities pre-allocated to each port.
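
A minimal sketch of the token bucket shaper whose fill rate the per-port agents would retune; the RL and PCA parts are omitted, and the rates and sizes below are illustrative assumptions.

```python
import time

class TokenBucket:
    """Token-bucket shaper whose fill rate can be retuned at run time,
    which is the knob a per-port agent would adjust."""
    def __init__(self, rate, capacity):
        self.rate, self.capacity = rate, capacity     # bytes/s, bytes
        self.tokens, self.last = capacity, time.monotonic()

    def set_rate(self, rate):
        """Called by the (hypothetical) agent when it picks a new shaping rate."""
        self.rate = rate

    def allow(self, packet_size):
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_size:
            self.tokens -= packet_size
            return True
        return False                                   # packet is queued or dropped

bucket = TokenBucket(rate=1_000_000, capacity=64_000)
ok = bucket.allow(1500)
```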

Keywords: principal component analysis, reinforcement learning, buffer allocation, multi-agent systems

Procedia PDF Downloads 517
544 Differential Impacts of Whole-Growth-Duration Warming on the Grain Yield and Quality between Early and Late Rice

Authors: Shan Huang, Guanjun Huang, Yongjun Zeng, Haiyuan Wang

Abstract:

The impacts of whole-growth-duration warming on grain yield and quality in double rice cropping systems remain largely unknown. In this study, a two-year field warming experiment covering the whole growth duration was conducted with two inbred indica rice cultivars (Zhongjiazao 17 and Xiangzaoxian 45) for the early season and two hybrid indica rice cultivars (Wanxiangyouhuazhan and Tianyouhuazhan) for the late season. The results showed that warming did not affect early rice yield but significantly decreased late rice yield, which was caused by decreased grain weight that may be related to increased plant respiration and reduced translocation of dry matter accumulated during the pre-heading phase under warming. Warming improved the milling quality of late rice but decreased that of early rice; however, the chalky rice rate and chalkiness degree were increased by 20.7% and 33.9% for early rice and by 37.6% and 51.6% for late rice under warming, respectively. We found that the crude protein content of milled rice was significantly increased by warming in both early and late rice, which would result in a deterioration of eating quality. Besides, compared with the control treatment, the setback of late rice was significantly reduced by 17.8% under warming, while that of early rice was not significantly affected. These results suggest that the negative impacts of whole-growth-duration warming on grain quality may be more severe in early rice than in late rice. Therefore, adaptation in both rice breeding and agronomic practices is needed to alleviate the impact of climate warming on double rice cropping production. Climate-smart agricultural practices ought to be implemented to mitigate the detrimental effects of warming on rice grain quality. For instance, fine-tuning the application rate and timing of inorganic nitrogen fertilizers, along with the introduction of organic amendments and the cultivation of heat-tolerant rice varieties, can help reduce the negative impact of rising temperatures on rice quality. Furthermore, to comprehensively understand the influence of climate warming on rice grain quality, future research should encompass a wider range of rice cultivars and experimental sites.

Keywords: climate warming, double rice cropping, dry matter, grain quality, grain yield

Procedia PDF Downloads 37
543 Inflammatory Changes in Postmenopausal Women including Th17 and Treg

Authors: Ae Ra Han, Seoung Eun Huh, Ji Yeon Kim, Joanne Kwak-Kim, Sung Ki Lee

Abstract:

Objective: The prevalence of osteoporosis, cardiovascular disorders, and Alzheimer's disease increases rapidly after menopause. Immune activation and inflammation are suggested as important pathogenic mechanisms of these serious diseases. Several pro-inflammatory cytokines are increased in women with surgical or natural menopause. However, little is known about IL-17-producing T cells and Foxp3+ regulatory T (Treg) cells after menopause. Methods: A total of 34 postmenopausal women with no active cardiovascular, endocrine or infectious disorders were recruited as the study group, and healthy premenopausal women participated as controls. Peripheral blood mononuclear cells were isolated. Immuno-morphologic (CD3, CD4, CD8, CD19, CD56/CD16), intracellular cytokine (TNF-alpha, IFN-gamma, IL-10, IL-17), and Treg cell (Foxp3) studies were carried out using flow cytometry. The proportions of peripheral lymphocyte subsets, including IL-17-producing T cells and Foxp3+ Treg cells, were statistically analyzed in each group. Results: The proportion of NK cells was significantly increased in postmenopausal women compared with controls (P=.005). The ratio of TNF-alpha to IL-10 producing CD3+CD4+ T cells was increased in postmenopausal women. The CD3+IL-17+ T cell level was higher and the CD4+Foxp3+ Treg cell level was lower in postmenopausal women than in controls. The ratios of CD3+IL-17+ T cells to CD3+Foxp3+ and to CD4+Foxp3+ Treg cells were significantly increased in postmenopausal women (P=.001). Conclusions: We found enhanced innate immunity and Th1- and Th17-mediated adaptive immunity in postmenopausal women. This may explain the increasing prevalence of chronic inflammatory diseases after menopause. Further studies are needed to elucidate which factors contribute to this postmenopausal inflammatory shift.

Keywords: inflammation, immune cell, menopause, Th17, regulatory T cell

Procedia PDF Downloads 321
542 Application of Design Thinking for Technology Transfer of Remotely Piloted Aircraft Systems for the Creative Industry

Authors: V. Santamarina Campos, M. de Miguel Molina, B. de Miguel Molina, M. Á. Carabal Montagud

Abstract:

With this contribution, we want to show a successful example of the application of the Design Thinking methodology in the European project 'Technology transfer of Remotely Piloted Aircraft Systems (RPAS) for the creative industry'. The use of this methodology has allowed us to design and build a drone based on the real needs of prospective users, and has demonstrated that it is a powerful tool for generating innovative ideas in the field of robotics, as its effectiveness rests on understanding and solving real user needs. With the support of an interdisciplinary team composed of creatives, engineers and economists, together with the collaboration of prospective users from three European countries, a non-linear work dynamic was created. This teamwork generated a sense of appreciation towards the creative industries through continuously adaptive, inventive, and playful collaboration and communication, which facilitated the development of prototypes. These were designed to enable filming and photography in interior spaces within 13 sectors of the European creative industries: Advertising, Architecture, Fashion, Film, Antiques and Museums, Music, Photography, Television, Performing Arts, Publishing, Arts and Crafts, Design and Software. Furthermore, the work has married the real needs of the creative industries with what is technologically and commercially viable. As a result, a product of great value has been obtained, which offers new business opportunities for small companies across this sector.

Keywords: design thinking, design for effectiveness, methodology, active toolkit, storyboards, PAR, focus group, innovation, RPAS, indoor drone, aerial film, creative industry, end users, stakeholder

Procedia PDF Downloads 200
541 Bioinformatic Approaches in Population Genetics and Phylogenetic Studies

Authors: Masoud Sheidai

Abstract:

Biologists specializing in population genetics and phylogeny face research tasks such as populations' genetic variability and divergence, species relatedness, the evolution of genetic and morphological characters, and the identification of DNA SNPs with adaptive potential. To tackle these problems and reach sound conclusions, they must use proper and efficient statistical and bioinformatic methods as well as suitable genetic and morphological characteristics. In recent years, different bioinformatic and statistical methods, based on various well-documented assumptions, have become the proper analytical tools in the hands of researchers. Species delineation is usually carried out with clustering methods such as K-means clustering, based on distance measures appropriate to the studied features of the organisms. A well-defined species is assumed to be separated from other taxa by molecular barcodes. Species relationships are studied using molecular markers, which are analyzed with methods such as multidimensional scaling (MDS) and principal coordinate analysis (PCoA). Population structure and genetic divergence are usually investigated with PCoA and PCA methods and network diagrams, supported by bootstrapping of the data. The association of genes and DNA sequences with ecological and geographical variables is determined by the latent factor mixed model (LFMM) and redundancy analysis (RDA), which are based on Bayesian and distance methods. Molecular and morphological characters that differentiate the studied species may be identified by linear discriminant analysis (DA) and discriminant analysis of principal components (DAPC). We illustrate these methods and related conclusions with examples from different edible and medicinal plant species.
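
A minimal sketch of principal coordinate analysis (classical MDS) from a pairwise distance matrix, one of the ordination methods listed above; the distance matrix is hypothetical.

```python
import numpy as np

def pcoa(D, n_axes=2):
    """Principal coordinate analysis: double-centre the squared distances
    and eigendecompose to obtain ordination coordinates."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    eigval, eigvec = np.linalg.eigh(B)
    order = np.argsort(eigval)[::-1][:n_axes]
    coords = eigvec[:, order] * np.sqrt(np.maximum(eigval[order], 0.0))
    return coords, eigval[order]

# hypothetical pairwise genetic distances among 5 plant accessions
D = np.array([[0, 2, 6, 7, 8],
              [2, 0, 5, 7, 8],
              [6, 5, 0, 3, 4],
              [7, 7, 3, 0, 2],
              [8, 8, 4, 2, 0]], dtype=float)
coords, explained = pcoa(D)
```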

Keywords: GWAS analysis, K-Means clustering, LFMM, multidimensional scaling, redundancy analysis

Procedia PDF Downloads 122
540 An Efficient Hardware/Software Workflow for Multi-Cores Simulink Applications

Authors: Asma Rebaya, Kaouther Gasmi, Imen Amari, Salem Hasnaoui

Abstract:

Over the last years, applications such as telecommunications, signal processing, and digital communication with advanced features (multi-antenna, equalization, etc.) have witnessed a rapid evolution, accompanied by an increase in user requirements in terms of latency, computational power, and so on. To satisfy these requirements, the use of hardware/software systems is a common solution, where the hardware is composed of multiple cores and the software is represented by a model of computation, for instance a synchronous data flow (SDF) graph. Moreover, most embedded system designers use Simulink for modeling. The issue is how to simplify the generation of C code, for a multi-core platform, from an application modeled in Simulink. To overcome this problem, we propose a workflow that automatically transforms the Simulink model into an SDF graph and provides an efficient schedule that optimizes the number of cores and minimizes latency. The workflow starts from a Simulink application and a hardware architecture described in the IP-XACT language. Based on the synchronous and hierarchical behavior of both models, the Simulink block diagram is automatically transformed into an SDF graph. Once this process is successfully achieved, the scheduler calculates the optimal number of cores needed by minimizing the maximum density of the whole application. Then, a core is chosen to execute a specific graph task in a specific order and, subsequently, compatible C code is generated. To implement this proposal, we extend Preesm, a rapid prototyping tool, to take the Simulink model as input and to support the optimal schedule. Afterwards, we compared our results to those of this tool using a simple illustrative application. The comparison shows that our results strictly dominate the Preesm results in terms of number of cores and latency: if Preesm needs m processors and latency L, our workflow needs m' ≤ m processors and achieves latency L' < L.
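
A toy list-scheduling sketch conveying the idea of mapping a task graph onto a fixed number of cores and reading off the resulting latency; this is not Preesm's or the paper's actual scheduling algorithm, and the graph and execution times are illustrative.

```python
from collections import defaultdict

def list_schedule(tasks, deps, exec_time, n_cores):
    """Greedy list scheduling of a single-rate task graph onto n_cores:
    each ready task goes to the core that becomes free first."""
    indeg = {t: 0 for t in tasks}
    succ = defaultdict(list)
    for a, b in deps:                       # edge a -> b
        succ[a].append(b)
        indeg[b] += 1
    finish, core_free = {}, [0.0] * n_cores
    ready = [t for t in tasks if indeg[t] == 0]
    while ready:
        t = min(ready, key=lambda x: exec_time[x])   # simple priority: shortest first
        ready.remove(t)
        core = min(range(n_cores), key=lambda c: core_free[c])
        start = max(core_free[core],
                    max((finish[p] for p, s in deps if s == t), default=0.0))
        finish[t] = start + exec_time[t]
        core_free[core] = finish[t]
        for s in succ[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    return max(finish.values())             # schedule latency (makespan)

# toy graph: A -> B, A -> C, B -> D, C -> D, mapped onto 2 cores
latency = list_schedule("ABCD", [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")],
                        {"A": 2, "B": 3, "C": 1, "D": 2}, n_cores=2)
```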

Keywords: hardware/software system, latency, modeling, multi-cores platform, scheduler, SDF graph, Simulink model, workflow

Procedia PDF Downloads 266
539 Project-Based Learning (PBL) Taken to Extremes: Full-Year/Full-Time PBL Replacement of Core Curriculum

Authors: Stephen Grant Atkins

Abstract:

Radical use of project-based learning (PBL) in a small New Zealand business school provides an opportunity to longitudinally examine its effects over a decade of pre-Covid data. Prior to this business school’s implementation of PBL, starting in 2012, the business pedagogy literature presented just one example of PBL replacing an entire core-set of courses. In that instance, a British business school merged four of its ‘degree Year 3’ accounting courses into one PBL semester. As radical as that would have seemed, to students aged 20-to-22, the PBL experiment conducted in a New Zealand business school was notably more extreme: 41 nationally-approved Learning Outcomes (L.O.s), these deriving from 8 separate core courses, were aggregated into one grand set of L.O.s, and then treated as a ‘full-year’/‘full-time’ single course. The 8 courses in question were all components of this business school’s compulsory ‘degree Year 1’ curriculum. Thus, the students involved were notably younger (…ages 17-to-19…), and no ‘part-time’ enrolments were allowed. Of interest are this PBL experiment’s effects on subsequent performance outcomes in ‘degree Years 2 & 3’ (….which continued to operate in their traditional ways). Of special interest is the quality of ‘group project’ outcomes. This is because traditionally, ‘degree Year 1’ course assessments are only minimally based on group work. This PBL experiment altered that practice radically, such that PBL ‘degree Year 1’ alumni entered their remaining two years of business coursework with far more ‘project group’ experience. Timeline-wise, thus of interest here, firstly, is ‘degree Year 2’ performance outcomes data from years 2010-2012 + 2016-2018, and likewise ‘degree Year 3’ data for years 2011-2013 + 2017-2019. Those years provide a pre-&-post comparative baseline for performance outcomes in students never exposed to this school’s radical PBL experiment. That baseline is then compared to PBL alumni outcomes (2013-2016….including’Student Evaluation of Course Quality’ outcomes…) to clarify ‘radical PBL’ effects.

Keywords: project-based learning, longitudinal mixed-methods, student criticism, effects on learning

Procedia PDF Downloads 96
538 Structural Morphing on High Performance Composite Hydrofoil to Postpone Cavitation

Authors: Fatiha Mohammed Arab, Benoit Augier, Francois Deniset, Pascal Casari, Jacques Andre Astolfi

Abstract:

For top high-performance foiling yachts, cavitation is often a limiting factor for take-off and top speed. This work investigates solutions to delay the onset of cavitation by means of structural morphing. The structural morphing is based on compliant leading and trailing edges, with an effect similar to flaps. It is shown here that the commonly accepted effect of flaps regarding the control of lift and drag forces can also be used to postpone cavitation inception. A numerical and experimental study is conducted in order to assess the effect of the geometric parameters of the hydrofoil on its hydrodynamic performance and on cavitation inception. The effect of a 70% trailing edge and a 30% leading edge on a NACA 0012 is investigated using the Xfoil software at a constant Reynolds number of 10⁶. The simulations are carried out for a range of flap deflections and various angles of attack. The results show that the lift coefficient increases with flap deflection as well as with angle of attack, and that the cavitation bucket is enlarged. To evaluate the reliability of Xfoil, a 2D flow analysis over a NACA 0012 with leading- and trailing-edge flaps was also performed with the Fluent software. The results of the two methods are in good agreement. To validate the numerical approach, a passive adaptive composite model was built and tested in the hydrodynamic tunnel of the Research Institute of the French Naval Academy. The model shows the ability to reproduce the effect of a flap through leading-edge (LE) and trailing-edge (TE) structural morphing under hydrodynamic loading.
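
As a rough illustration of why deflecting a trailing-edge flap shifts the lift, the following Python sketch uses classical thin-airfoil theory for a plain flap on a symmetric section. It stands in for neither the Xfoil nor the Fluent computations of the study, and the flap-chord ratio, angles of attack, and deflections are example values only.

# Thin-airfoil estimate of the lift increment due to a plain trailing-edge
# flap on a symmetric section (example values, not the study's computations).
import math

def flap_lift_effectiveness(flap_chord_ratio):
    """dCl/d(delta) from thin-airfoil theory for a plain trailing-edge flap."""
    theta_h = math.acos(2.0 * flap_chord_ratio - 1.0)   # hinge position angle
    return 2.0 * (math.pi - theta_h + math.sin(theta_h))

cl_alpha = 2.0 * math.pi            # lift-curve slope of a thin symmetric foil
E = 0.30                            # 30% chord trailing-edge flap (assumed)

for delta_deg in (0.0, 5.0, 10.0):
    for alpha_deg in (0.0, 2.0, 4.0):
        cl = (cl_alpha * math.radians(alpha_deg)
              + flap_lift_effectiveness(E) * math.radians(delta_deg))
        print(f"alpha={alpha_deg:4.1f} deg, flap={delta_deg:4.1f} deg -> Cl ~ {cl:5.2f}")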

Keywords: cavitation, flaps, hydrofoil, panel method, xfoil

Procedia PDF Downloads 172
537 User Authentication Using Graphical Password with Sound Signature

Authors: Devi Srinivas, K. Sindhuja

Abstract:

This paper presents an architecture to improve surveillance applications based on the service-oriented paradigm, with smartphones as user terminals, allowing dynamic application composition and increasing the flexibility of the system. Building on moving-object detection research on video sequences, the movement of people is tracked using video surveillance. The moving object is identified using an image-subtraction method: the background image is subtracted from the foreground image, and the difference yields the moving object. A threshold is then applied to the output of the background-subtraction algorithm to identify the moving frame, and by means of this threshold the movement in the frame is identified and tracked. Hence, the movement of the object is identified accurately. This paper deals with a low-cost, intelligent, mobile-phone-based wireless video surveillance solution using moving-object recognition technology. The proposed solution can be useful in various security systems and in environmental surveillance. The fundamental rule of moving-object detection is given in the paper; then a self-adaptive background representation, which updates automatically and in a timely manner so as to adapt to the slow and slight changes of normal surroundings, is detailed. When the difference between the currently captured image and the background exceeds a certain threshold, a moving object is considered to be in the current view, and the mobile phone automatically notifies the central control unit or the user through SMS (Short Message Service). The main advantage of this system is that when an unknown image is captured, the system alerts the user automatically by sending an SMS to the user's mobile phone.
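
The detection rule described above can be sketched in a few lines of Python: a slowly adapting background is maintained by a running average, pixels whose difference from the background exceeds a threshold form the foreground mask, and an alert (standing in for the SMS notification) is raised when the changed area is large enough. The parameters and the synthetic frames below are illustrative assumptions, not values from the paper.

# Sketch of adaptive background subtraction with a threshold-based alert.
import numpy as np

ALPHA = 0.05          # background learning rate (slow adaptation)
PIX_THRESH = 25       # per-pixel intensity difference threshold
AREA_THRESH = 0.02    # fraction of changed pixels that triggers an alert

def detect_motion(frames):
    background = frames[0].astype(np.float32)
    for i, frame in enumerate(frames[1:], start=1):
        frame = frame.astype(np.float32)
        diff = np.abs(frame - background)
        moving = diff > PIX_THRESH                      # foreground mask
        if moving.mean() > AREA_THRESH:
            print(f"frame {i}: moving object detected -> notify user (SMS)")
        # update the background only where the scene is considered static
        background[~moving] = ((1 - ALPHA) * background[~moving]
                               + ALPHA * frame[~moving])
    return background

# Synthetic demo: a bright square "object" appears in the later frames.
frames = [np.zeros((120, 160), dtype=np.uint8) for _ in range(10)]
for f in frames[5:]:
    f[40:80, 60:100] = 200
detect_motion(frames)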

Keywords: security, graphical password, persuasive cued click points

Procedia PDF Downloads 535
536 Experimental Simulation Set-Up for Validating Out-Of-The-Loop Mitigation when Monitoring High Levels of Automation in Air Traffic Control

Authors: Oliver Ohneiser, Francesca De Crescenzio, Gianluca Di Flumeri, Jan Kraemer, Bruno Berberian, Sara Bagassi, Nicolina Sciaraffa, Pietro Aricò, Gianluca Borghini, Fabio Babiloni

Abstract:

An increasing degree of automation in air traffic will also change the role of the air traffic controller (ATCO). ATCOs will perform significantly more monitoring tasks than today. However, this rather passive role may lead to Out-Of-The-Loop (OOTL) effects comprising vigilance decrement and reduced situation awareness. The project MINIMA (Mitigating Negative Impacts of Monitoring high levels of Automation) has conceived a system to control and mitigate such OOTL phenomena. In order to demonstrate the MINIMA concept, an experimental simulation set-up has been designed. This set-up consists of two parts: 1) a Task Environment (TE) comprising a Terminal Maneuvering Area (TMA) simulator, and 2) a Vigilance and Attention Controller (VAC) based on neurophysiological data recording devices such as electroencephalography (EEG) and eye tracking. The controller's current vigilance level and attention focus are measured during the ATCO's active work in front of the human machine interface (HMI). The derived vigilance level and attention focus trigger adaptive automation functionalities in the TE to avoid OOTL effects. This paper describes the full-scale experimental set-up and the component development work towards it. It encompasses a pre-test whose results influenced the development of the VAC as well as the functionalities of the final TE and the VAC's two sub-components.
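
A toy sketch of the closed loop between vigilance estimation and adaptive automation is given below. It computes a simple engagement index, beta / (alpha + theta), from the band powers of a single EEG channel and triggers a mitigation action when the index drops below a threshold; the index, the threshold, and the synthetic signal are illustrative assumptions and do not reflect the classifiers actually used in MINIMA.

# Toy sketch of vigilance estimation feeding adaptive automation decisions.
import numpy as np

def engagement_index(eeg_window, fs=256):
    """Simple engagement index from one EEG channel window (assumed formula)."""
    freqs = np.fft.rfftfreq(len(eeg_window), d=1.0 / fs)
    power = np.abs(np.fft.rfft(eeg_window)) ** 2
    band = lambda lo, hi: power[(freqs >= lo) & (freqs < hi)].sum()
    theta, alpha, beta = band(4, 8), band(8, 13), band(13, 30)
    return beta / (alpha + theta + 1e-12)

def vigilance_controller(index, threshold=0.4):
    """Decide whether the Task Environment should trigger a mitigation action."""
    if index < threshold:
        return "trigger OOTL mitigation (e.g. highlight traffic, add a task)"
    return "keep current automation level"

rng = np.random.default_rng(0)
window = rng.normal(size=2 * 256)           # 2 s of synthetic single-channel EEG
idx = engagement_index(window)
print(f"engagement index = {idx:.2f} -> {vigilance_controller(idx)}")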

Keywords: automation, human factors, air traffic controller, MINIMA, OOTL (Out-Of-The-Loop), EEG (Electroencephalography), HMI (Human Machine Interface)

Procedia PDF Downloads 382
535 Nonlinear Estimation Model for Rail Track Deterioration

Authors: M. Karimpour, L. Hitihamillage, N. Elkhoury, S. Moridpour, R. Hesami

Abstract:

Rail transport authorities around the world have long faced a significant challenge in predicting rail infrastructure maintenance work. Generally, maintenance monitoring and prediction are conducted manually. Under economic constraints, rail transport authorities are in pursuit of improved modern methods that can provide precise prediction of rail maintenance time and location. The expectation of such a method is to develop models that minimize the human error strongly associated with manual prediction. Such models will help authorities understand how track degradation occurs over time under changing conditions (e.g. rail load, rail type, rail profile). They need a well-structured technique to identify the precise time at which rail tracks fail in order to minimize maintenance cost and time and to keep vehicles safe. The rail track characteristics that have been collected over the years will be used to develop rail track degradation prediction models. Since these data have been collected in large volumes, both electronically and manually, some errors are possible, and sometimes these errors make the data unusable for prediction model development. This is one of the major drawbacks in rail track degradation prediction. An accurate model can play a key role in estimating the long-term behavior of rail tracks: accurate models increase track safety and decrease maintenance costs in the long term. In this research, a short review of rail track degradation prediction models is presented before rail track degradation is estimated for the curved sections of the Melbourne tram track system using an Adaptive Network-based Fuzzy Inference System (ANFIS) model.
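
For readers unfamiliar with ANFIS, the following Python sketch shows the forward pass of a first-order Sugeno fuzzy system of the kind ANFIS tunes: Gaussian membership functions, product firing strengths, and a normalized weighted sum of linear rule consequents. The inputs (cumulative tonnage in MGT and months since maintenance) and all parameter values are hypothetical; in the study, such parameters would be learned from the collected track data.

# Minimal forward pass of a first-order Sugeno fuzzy system (ANFIS-style).
# All inputs and parameters are hypothetical examples.
import numpy as np

def gauss(x, c, s):
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def anfis_predict(mgt, months):
    # Two fuzzy sets per input: "low" and "high" (centers/widths assumed).
    mf_mgt = {"low": gauss(mgt, 10.0, 5.0), "high": gauss(mgt, 30.0, 5.0)}
    mf_mon = {"low": gauss(months, 6.0, 4.0), "high": gauss(months, 24.0, 4.0)}

    # One linear consequent (p*mgt + q*months + r) per rule, 4 rules in total.
    consequents = {
        ("low", "low"):   (0.01, 0.02, 0.1),
        ("low", "high"):  (0.02, 0.05, 0.2),
        ("high", "low"):  (0.04, 0.03, 0.3),
        ("high", "high"): (0.06, 0.08, 0.5),
    }

    weights, outputs = [], []
    for (a, b), (p, q, r) in consequents.items():
        w = mf_mgt[a] * mf_mon[b]              # rule firing strength (product)
        weights.append(w)
        outputs.append(p * mgt + q * months + r)

    weights = np.array(weights)
    return float(np.dot(weights / weights.sum(), outputs))   # degradation index

print(f"predicted degradation index: {anfis_predict(mgt=25.0, months=18.0):.3f}")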

Keywords: ANFIS, MGT, prediction modeling, rail track degradation

Procedia PDF Downloads 333