Search results for: input constraints
1998 Real-Time Path Planning for Unmanned Air Vehicles Using Improved Rapidly-Exploring Random Tree and Iterative Trajectory Optimization
Authors: A. Ramalho, L. Romeiro, R. Ventura, A. Suleman
Abstract:
A real-time path planning framework for Unmanned Air Vehicles, and in particular multi-rotors, is proposed. The framework is designed to provide feasible trajectories from the current UAV position to a goal state, taking into account constraints such as obstacle avoidance, problem kinematics, and vehicle limitations such as maximum speed and maximum acceleration. The framework computes feasible paths online, allowing the vehicle to avoid new, unknown, dynamic obstacles without fully re-computing the trajectory. These features are achieved using an iterative process in which the robot computes and optimizes the trajectory while performing the mission objectives. A first trajectory is computed using a modified Rapidly-Exploring Random Tree (RRT) algorithm, which provides trajectories that respect a maximum curvature constraint. The trajectory optimization is accomplished using the Interior Point Optimizer (IPOPT) as a solver. The framework has proven able to compute a trajectory and optimize it to a local optimum with a computational efficiency that makes it feasible for real-time operations.
Keywords: interior point optimization, multi-rotors, online path planning, rapidly exploring random trees, trajectory optimization
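For illustration, the sketch below grows a two-dimensional RRT in which an extension is rejected when the turn it implies is tighter than a maximum-curvature bound, in the spirit of the modified RRT described above. This is a minimal sketch, not the authors' implementation: the step size, curvature limit, sampling region, and circular obstacles are all assumed values.

```python
import numpy as np

STEP, KAPPA_MAX = 0.5, 1.0   # assumed step size [m] and curvature bound [1/m]

def rrt_curvature(start, goal, obstacles, iters=5000, seed=0):
    rng = np.random.default_rng(seed)
    goal = np.asarray(goal, float)
    nodes, parents = [np.asarray(start, float)], [None]
    for _ in range(iters):
        sample = goal if rng.random() < 0.1 else rng.uniform(0.0, 20.0, 2)
        i = int(np.argmin([np.linalg.norm(n - sample) for n in nodes]))
        d = sample - nodes[i]
        if np.linalg.norm(d) == 0.0:
            continue
        new = nodes[i] + STEP * d / np.linalg.norm(d)
        if parents[i] is not None:                    # curvature (turn-angle) check
            prev = nodes[i] - nodes[parents[i]]
            cosang = np.clip(prev @ d / (np.linalg.norm(prev) * np.linalg.norm(d)),
                             -1.0, 1.0)
            if np.arccos(cosang) / STEP > KAPPA_MAX:  # turn too tight to fly
                continue
        if any(np.linalg.norm(new - c) < r for c, r in obstacles):
            continue                                  # extension would hit an obstacle
        nodes.append(new)
        parents.append(i)
        if np.linalg.norm(new - goal) < STEP:         # goal region reached
            break
    return nodes, parents

nodes, parents = rrt_curvature([0.0, 0.0], [18.0, 18.0],
                               obstacles=[(np.array([9.0, 9.0]), 2.0)])
```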
Procedia PDF Downloads 135
1997 Statistical Wavelet Features, PCA, and SVM-Based Approach for EEG Signals Classification
Authors: R. K. Chaurasiya, N. D. Londhe, S. Ghosh
Abstract:
The study of the electrical signals produced by the neural activity of the human brain is called electroencephalography. In this paper, we propose an automatic and efficient EEG signal classification approach, used to classify an EEG signal into one of two classes: epileptic seizure or not. In the proposed approach, we start by extracting features with the Discrete Wavelet Transform (DWT), which decomposes the EEG signals into sub-bands. These features, extracted from the detail and approximation coefficients of the DWT sub-bands, are used as input to Principal Component Analysis (PCA). The classification is based on reducing the feature dimension using PCA and deriving the support vectors using a Support Vector Machine (SVM). The experiments are performed on a real, standard dataset, and a very high level of classification accuracy is obtained.
Keywords: discrete wavelet transform, electroencephalogram, pattern recognition, principal component analysis, support vector machine
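The DWT-features → PCA → SVM pipeline described above can be sketched in a few lines. This is a minimal illustration, not the paper's code: the wavelet ('db4'), decomposition level, statistical feature set (mean, standard deviation, energy per sub-band), and the random placeholder data all stand in for the study's choices and its real EEG dataset.

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def dwt_features(signal, wavelet="db4", level=4):
    """Statistical features from DWT approximation and detail sub-bands."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)   # [cA4, cD4, cD3, cD2, cD1]
    feats = []
    for band in coeffs:
        feats += [band.mean(), band.std(), np.sum(band ** 2)]  # mean, std, energy
    return np.array(feats)

# X_raw: (n_epochs, n_samples) EEG epochs; y: 1 = seizure, 0 = non-seizure
rng = np.random.default_rng(0)
X_raw = rng.standard_normal((40, 1024))      # placeholder for a real dataset
y = rng.integers(0, 2, 40)

X = np.array([dwt_features(s) for s in X_raw])
clf = make_pipeline(PCA(n_components=5), SVC(kernel="rbf"))
print(cross_val_score(clf, X, y, cv=5).mean())
```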
Procedia PDF Downloads 638
1996 Extending Image Captioning to Video Captioning Using Encoder-Decoder
Authors: Sikiru Ademola Adewale, Joe Thomas, Bolanle Hafiz Matti, Tosin Ige
Abstract:
This project demonstrates the implementation and use of an encoder-decoder model to perform a many-to-many mapping of video data to text captions. The many-to-many mapping occurs via an input temporal sequence of video frames to an output sequence of words to form a caption sentence. Data preprocessing, model construction, and model training are discussed. Caption correctness is evaluated using 2-gram BLEU scores across the different splits of the dataset. Specific examples of output captions were shown to demonstrate model generality over the video temporal dimension. Predicted captions were shown to generalize over video action, even in instances where the video scene changed dramatically. Model architecture changes are discussed to improve sentence grammar and correctness.
Keywords: decoder, encoder, many-to-many mapping, video captioning, 2-gram BLEU
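A 2-gram BLEU score of the kind used for caption evaluation can be computed with NLTK as below. The captions are invented examples; note that weights=(0.5, 0.5) averages unigram and bigram precision, one common reading of "2-gram BLEU" (the project's exact weighting is not stated).

```python
from nltk.translate.bleu_score import SmoothingFunction, sentence_bleu

reference = [["a", "man", "is", "riding", "a", "horse"]]   # ground-truth caption(s)
candidate = ["a", "man", "rides", "a", "horse"]            # model output

# weights=(0.5, 0.5) averages 1-gram and 2-gram precision (BLEU-2)
score = sentence_bleu(reference, candidate, weights=(0.5, 0.5),
                      smoothing_function=SmoothingFunction().method1)
print(f"2-gram BLEU: {score:.3f}")
```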
Procedia PDF Downloads 108
1995 Crow Search Algorithm-Based Task Offloading Strategies for Fog Computing Architectures
Authors: Aniket Ganvir, Ritarani Sahu, Suchismita Chinara
Abstract:
The rapid digitization of various aspects of life is leading to the creation of smart IoT ecosystems, where interconnected devices generate significant amounts of valuable data. However, these IoT devices face constraints such as limited computational resources and bandwidth. Cloud computing emerges as a solution by offering ample resources for offloading tasks efficiently, but it introduces latency issues, especially for time-sensitive applications. Fog computing (FC) addresses these latency concerns by bringing computation and storage closer to the network edge, minimizing data travel distance and enhancing efficiency. Offloading tasks to fog nodes or the cloud can conserve energy and extend IoT device lifespan. The offloading process is intricate, with tasks categorized as full or partial, and its optimization presents an NP-hard problem. Traditional greedy search methods struggle to address the complexity of task offloading efficiently. To overcome this, the efficient crow search algorithm (ECSA) has been proposed as a meta-heuristic optimization algorithm. ECSA aims to effectively optimize computation offloading, providing solutions to this challenging problem.
Keywords: IoT, fog computing, task offloading, efficient crow search algorithm
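The sketch below implements the vanilla crow search update rule (follow a random crow's memory when it is unaware; jump randomly when it is aware) on a toy offloading cost. It illustrates the metaheuristic family only, not the paper's ECSA variant; the flight length fl, awareness probability ap, and cost function are assumed.

```python
import numpy as np

def crow_search(cost, dim, n=20, iters=200, fl=2.0, ap=0.1, lo=0.0, hi=1.0):
    rng = np.random.default_rng(0)
    x = rng.uniform(lo, hi, (n, dim))            # crow positions
    mem = x.copy()                               # best hiding place each crow knows
    fmem = np.apply_along_axis(cost, 1, mem)
    for _ in range(iters):
        j = rng.integers(0, n, n)                # crow i picks a crow j to follow
        aware = rng.random(n) < ap               # followed crow notices and flees
        follow = x + rng.random((n, 1)) * fl * (mem[j] - x)
        x = np.clip(np.where(aware[:, None], rng.uniform(lo, hi, (n, dim)), follow),
                    lo, hi)
        f = np.apply_along_axis(cost, 1, x)
        improved = f < fmem                      # update each crow's memory
        mem[improved], fmem[improved] = x[improved], f[improved]
    best = int(np.argmin(fmem))
    return mem[best], float(fmem[best])

# Toy offloading cost: z[k] = fraction of task k sent to the fog node
best, val = crow_search(lambda z: np.sum((z - 0.3) ** 2), dim=5)
print(best, val)
```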
Procedia PDF Downloads 58
1994 AI in Health and Wellbeing - A Seven-Step Engineering Method
Authors: Denis Özdemir, Max Senges
Abstract:
There are many examples of AI-supported apps for better health and wellbeing. Generally, these applications help people achieve their goals based on scientific research and input data. Still, they do not always explain how those three are related, e.g., by making implicit assumptions about goals that hold for many but not for all. We present a seven-step method for designing health and wellbeing AIs, considering goal setting, measurable results, real-time indicators, analytics, visual representations, communication, and feedback. It can guide engineers in developing apps, recommendation algorithms, and interfaces that support humans in their decision-making without patronizing them. To illustrate the method, we create a recommender AI for tiny wellbeing habits and run a small case study, including a survey. From the results, we infer how people perceive the relationship between themselves and the AI and to what extent it helps them achieve their goals. We review our seven-step engineering method and suggest modifications for the next iteration.
Keywords: recommender systems, natural language processing, health apps, engineering methods
Procedia PDF Downloads 165
1993 Ground Motion Modeling Using the Least Absolute Shrinkage and Selection Operator
Authors: Yildiz Stella Dak, Jale Tezcan
Abstract:
Ground motion models that relate a strong motion parameter of interest to a set of predictive seismological variables, describing the earthquake source, the propagation path of the seismic wave, and the local site conditions, constitute a critical component of seismic hazard analyses. When a sufficient number of strong motion records are available, ground motion relations are developed using statistical analysis of the recorded ground motion data. In regions lacking a sufficient number of recordings, a synthetic database is developed using stochastic, theoretical, or hybrid approaches. Regardless of the manner in which the database was developed, ground motion relations are developed using regression analysis. Development of a ground motion relation is a challenging process which inevitably requires the modeler to make subjective decisions regarding the inclusion criteria for the recordings, the functional form of the model, and the set of seismological variables to be included in the model. Because these decisions are critically important to the validity and the applicability of the model, there is continuous interest in procedures that will facilitate the development of ground motion models. This paper proposes the use of the Least Absolute Shrinkage and Selection Operator (LASSO) in selecting the set of predictive seismological variables to be used in developing a ground motion relation. The LASSO can be described as a penalized regression technique with a built-in capability for variable selection. Similar to ridge regression, the LASSO is based on the idea of shrinking the regression coefficients to reduce the variance of the model. Unlike ridge regression, where the coefficients are shrunk but never set equal to zero, the LASSO sets some of the coefficients exactly to zero, effectively performing variable selection. Given a set of candidate input variables and the output variable of interest, LASSO allows ranking the input variables in terms of their relative importance, thereby facilitating the selection of the set of variables to be included in the model. Because the risk of overfitting increases as the ratio of the number of predictors to the number of recordings increases, selection of a compact set of variables is important in cases where a small number of recordings is available. In addition, identification of a small set of variables can improve the interpretability of the resulting model, especially when there is a large number of candidate predictors. A practical application of the proposed approach is presented, using more than 600 recordings from the Next Generation Attenuation (NGA) database, where the effect of a set of seismological predictors on the 5% damped maximum direction spectral acceleration is investigated. The candidate predictors considered are magnitude, Rrup, and Vs30. Using LASSO, the relative importance of the candidate predictors has been ranked. Regression models of increasing complexity were constructed using the one, two, three, and four best predictors, and the models' ability to explain the observed variance in the target variable has been compared. The bias-variance trade-off in the context of model selection is discussed.
Keywords: ground motion modeling, least absolute shrinkage and selection operator, penalized regression, variable selection
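The ranking step described above can be reproduced with scikit-learn's LASSO: coefficients of standardized predictors are shrunk, some exactly to zero, and the surviving magnitudes rank the variables. The data below are synthetic stand-ins for the NGA recordings.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

# Hypothetical design matrix: magnitude, Rrup, Vs30 (standardized so the
# LASSO coefficients are comparable across predictors)
rng = np.random.default_rng(0)
names = ["magnitude", "Rrup", "Vs30"]
X = rng.standard_normal((600, 3))
y = 1.2 * X[:, 0] - 0.8 * X[:, 1] + 0.1 * rng.standard_normal(600)

lasso = LassoCV(cv=5).fit(StandardScaler().fit_transform(X), y)
ranking = sorted(zip(names, np.abs(lasso.coef_)), key=lambda t: -t[1])
print(ranking)   # predictors whose coefficients shrink to zero drop out
```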
Procedia PDF Downloads 330
1992 Explicitation as a Non-Professional Translation Universal: Evidence from the Translation of Promotional Material
Authors: Julieta Alos
Abstract:
Following the explicitation hypothesis, it has been proposed that explicitation is a translation universal, i.e., one of those features that characterize translated texts, and cannot be traced back to interference from a particular language. The explicitation hypothesis has been enthusiastically endorsed by some scholars, and firmly rejected by others. Focusing on the translation of promotional material from English into Arabic, specifically in the luxury goods market, the aims of this study are twofold: First, to contribute to the debate regarding the notion of explicitation in order to advance our understanding of what has become a contentious concept. Second, to add to the growing body of literature on non-professional translation by shedding light on this particular aspect of it. To this end, our study uses a combination of qualitative and quantitative methods to explore a corpus of brochures pertaining to the luxury industry, translated into Arabic at the local marketing agencies promoting the brands in question, by bilingual employees who have no translation training. Our data reveals a preference to avoid creative language choices in favor of more direct advertising messages, suggestive of a general tendency towards explicitation in non-professional translation, beyond what is dictated by the grammatical and stylistic constraints of Arabic. We argue, further, that this translation approach is at odds with the principles of luxury advertising, which emphasize implicitness and ambiguity, and view language as an extension of the creative process involved in the production of the luxury item.
Keywords: English-Arabic translation, explicitation, non-professional translation, promotional texts
Procedia PDF Downloads 375
1991 Aircraft Automatic Collision Avoidance Using Spiral Geometric Approach
Authors: M. Orefice, V. Di Vito
Abstract:
This paper describes a collision avoidance algorithm developed from the mathematical modeling of insect flight in terms of spiral and conchospiral geometric paths. It is able to calculate a proper avoidance manoeuvre aimed at preventing the infringement of a predefined distance threshold between ownship and the considered intruder, while minimizing the ownship's deviation from the original path and complying with the aircraft performance limitations and dynamic constraints. The algorithm is designed to be suitable for real-time applications, so that it can be considered for implementation in the most recent airborne automatic collision avoidance systems using the traffic data received through an ADS-B IN device. The presented approach is able to take into account the rules-of-the-air, thanks to a specifically designed decision-making logic based on the encounter geometry that selects the direction of the calculated collision avoidance manoeuvre, allowing compliance with rules such as the fundamental right-of-way rule. In the paper, the proposed collision avoidance algorithm is presented, and its preliminary design and software implementation are described. The applicability of this method has been proved through preliminary simulation tests performed in a 2D environment considering single-intruder encounter geometries, as reported and discussed in the paper.
Keywords: ADS-B based application, collision avoidance, RPAS, spiral geometry
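As a purely geometric illustration of the spiral idea, the sketch below generates waypoints on a logarithmic spiral that opens outward from the intruder until a separation threshold is restored. It covers only the path-geometry kernel, not the decision-making logic, rules-of-the-air handling, or conchospiral variants; the spiral pitch b and point count are assumed.

```python
import numpy as np

def spiral_escape(ownship, intruder, r_min, b=0.15, n=100):
    """Waypoints on a logarithmic spiral r = r0 * exp(b * dtheta) that opens
    outward from the intruder until the separation threshold r_min is met."""
    rel = ownship - intruder
    r0, th0 = np.linalg.norm(rel), np.arctan2(rel[1], rel[0])
    dtheta = np.linspace(0.0, 2.0 * np.pi, n)
    r = r0 * np.exp(b * dtheta)
    # keep waypoints up to (and including) the first one past the threshold
    k = int(np.argmax(r >= r_min)) if (r >= r_min).any() else n - 1
    theta = th0 + dtheta[: k + 1]
    return intruder + np.c_[r[: k + 1] * np.cos(theta),
                            r[: k + 1] * np.sin(theta)]

wps = spiral_escape(np.array([0.0, 0.0]), np.array([0.5, 0.0]), r_min=2.0)
print(len(wps), wps[-1])   # separation at the last waypoint exceeds r_min
```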
Procedia PDF Downloads 241
1990 Progress, Challenges, and Prospects of Non-Conventional Feed Resources for Livestock Production in Sub-Saharan Africa: A Review
Authors: Clyde Haruzivi, Olusegun Oyebade Ikusika, Thando Conference Mpendulo
Abstract:
Feed scarcity, increasing demand for animal products due to the growing human population, competition between humans and animal production for conventional feed resources, and the ever-increasing prices of these feed resources are major constraints on the livestock industry in Sub-Saharan Africa. As a result, the industry is suffering immensely, as the cost of production is high and returns are therefore reduced. Most affected are communal and resource-limited farmers who cannot afford conventional feed resources to supplement feeds, especially in arid and semi-arid areas where the available feed resources are not adequate for maintenance and production. This has tasked researchers and animal scientists to focus on the potential of non-conventional feed resources (NCFRs). Non-conventional feed resources could fill the gap through reduced competition, lower feed costs, increased supply, increased profits, and greater independence, as farmers will be utilizing locally available feed resources. Identifying available non-conventional feed resources is vital, as it creates possibilities for novel feed industries and markets and establishes methods of using these feedstuffs to improve livestock production and livelihoods in Sub-Saharan Africa. Hence, this research work analyses the progress, challenges, and prospects of some non-conventional feed resources in Sub-Saharan Africa.
Keywords: non-conventional, feed resources, livestock production, food security, Sub-Saharan
Procedia PDF Downloads 113
1989 Greenhouse Gas Emissions from a Tropical Eutrophic Freshwater Wetland
Authors: Juan P. Silva, T. R. Canchala, H. J. Lubberding, E. J. Peña, H. J. Gijzen
Abstract:
This study measured the fluxes of greenhouse gases (GHGs), i.e., CO2, CH4, and N2O, from a tropical eutrophic freshwater wetland ("Sonso Lagoon") which receives nutrient loading from several sources, i.e., agricultural run-off, domestic sewage, and a polluted river. The flux measurements were carried out at four different points using the static chamber technique. CO2 fluxes ranged from -8270 to 12210 mg.m-2.d-1 (median = 360; SD = 4.11; n = 50), CH4 ranged between 0.2 and 5270 mg.m-2.d-1 (median = 60; SD = 1.27; n = 45), and N2O ranged from -31.12 to 15.4 mg.m-2.d-1 (median = 0.05; SD = 9.36; n = 42). Although some negative fluxes were observed in the zone dominated by floating plants, i.e., Eichornia crassipes, Salvinia sp., and Pistia stratiotes L., the mean values indicated that the Sonso Lagoon was a net source of CO2, CH4, and N2O. In addition, an effect of eutrophication on GHG emissions could be observed in the positive correlation found between CO2, CH4, and N2O generation and COD, PO4-3, NH3-N, TN, and NO3-N. The impact of eutrophication on GHG production highlights the necessity of limiting anthropic activities on freshwater wetlands.
Keywords: eutrophication, greenhouse gas emissions, freshwater wetlands, climate change
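With the static chamber technique, a flux is commonly estimated as F = (dC/dt)·V/A, where dC/dt is the slope of the headspace concentration over closure time, V the chamber volume, and A the covered area. The sketch below shows that calculation with illustrative numbers, not the Sonso Lagoon data.

```python
import numpy as np

t = np.array([0, 10, 20, 30])                 # min since chamber closure
c = np.array([450.0, 458.0, 467.0, 474.0])    # CO2 in chamber headspace, mg m^-3
V, A = 0.030, 0.090                           # chamber volume [m^3] and area [m^2]

dcdt = np.polyfit(t, c, 1)[0]                 # linear-fit slope, mg m^-3 min^-1
flux = dcdt * V / A * 60 * 24                 # convert to mg m^-2 d^-1
print(f"{flux:.0f} mg.m-2.d-1")               # ~384 for these illustrative numbers
```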
Procedia PDF Downloads 361
1988 Accurate Algorithm for Selecting Ground Motions Satisfying Code Criteria
Authors: S. J. Ha, S. J. Baik, T. O. Kim, S. W. Han
Abstract:
For computing the seismic responses of structures, current seismic design provisions permit response history analyses (RHA) that can be used without limitations on height, seismic design category, and building irregularity. In order to obtain accurate seismic responses using RHA, it is important to use adequate input ground motions. Current seismic design provisions provide criteria for selecting ground motions. In this study, an accurate and computationally efficient algorithm is proposed for selecting ground motions that satisfy the requirements specified in current seismic design provisions. The accuracy of the proposed algorithm is verified using single-degree-of-freedom systems with various natural periods and yield strengths. This study shows that the mean seismic responses obtained from RHA with seven and ten ground motions selected using the proposed algorithm produce errors within 20% and 13%, respectively.
Keywords: algorithm, ground motion, response history analysis, selection
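The selection problem can be illustrated with a simple greedy heuristic: add one record at a time so the running mean spectrum stays closest to the code target. This is only a sketch of the task, not the paper's algorithm; the target spectrum and candidate spectra below are synthetic.

```python
import numpy as np

def select_motions(spectra, target, n_select=7):
    """Greedily pick records whose running mean spectrum minimizes the MSE
    against the target design spectrum."""
    chosen = []
    for _ in range(n_select):
        errs = []
        for i in range(len(spectra)):
            if i in chosen:
                errs.append(np.inf)          # never pick the same record twice
                continue
            mean = spectra[chosen + [i]].mean(axis=0)
            errs.append(np.mean((mean - target) ** 2))
        chosen.append(int(np.argmin(errs)))
    return chosen

rng = np.random.default_rng(0)
periods = np.linspace(0.1, 3.0, 30)
target = 0.8 * np.exp(-periods / 1.5)                 # illustrative design spectrum
spectra = target * rng.uniform(0.5, 1.5, (200, 30))   # candidate record spectra
print(select_motions(spectra, target))
```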
Procedia PDF Downloads 286
1987 Sex Education: The Teacher’s Discourses About the Relation Between the Children and the Media, Concerning Sex Education and the Childhood
Authors: Katerina Samartzi
Abstract:
This study focuses on teachers' discourses in Greece about the relation between children and the media, concerning sex education and childhood more widely. The teachers' input reflects the anxieties and the dominant discourses that exist around these issues. The study begins with a critical discussion of the available literature concerning the potential impact of media and the associated 'moral panics', their role in sex education, and children's use of sexual material. Moreover, the study analyses the social construction of childhood and sexuality. Given the lack of an explicit and official protocol for sex education in Greece, and the fact that young people are familiar with the material provided by the new media, which act as an informal education, this project aims to point out the factors that reinforce these gaps. The study focuses on the way adults, and specifically teachers, contextualize children's relation with media, their sexuality, sex education, the use of sexual material, and childhood.
Keywords: childhood, children's sexuality, media, moral panics, pornography, sex education
Procedia PDF Downloads 175
1986 A Robotic Rehabilitation Arm Driven by Somatosensory Brain-Computer Interface
Authors: Jiewei Li, Hongyan Cui, Chunqi Chang, Yong Hu
Abstract:
Patients with hemiparesis after stroke are expected to benefit from extensive arm rehabilitation, partially regaining forearm and hand function. This paper proposes a robotic rehabilitation arm for assisting hemiparetic patients in learning new ways of using and moving their weak arms. In this study, the robotic arm was driven by a somatosensory-stimulated brain-computer interface (BCI), a new BCI modality. The somatosensory stimulation serves not only as an input for the BCI but also as an electrical stimulation treatment for hemiparesis, strengthening the arm and improving its range of motion. A trial of this robotic rehabilitation arm was performed with a stroke patient with pure motor hemiparesis. The initial trial showed a promising result, with the patient displaying great motivation and functional improvement. It suggests that a robotic rehabilitation arm driven by somatosensory BCI can enhance the rehabilitation performance and progress of hemiparetic patients after stroke.
Keywords: robotic rehabilitation arm, brain computer interface (BCI), hemiparesis, stroke, somatosensory stimulation
Procedia PDF Downloads 390
1985 Numerical Solution of Two-Dimensional Solute Transport System Using Operational Matrices
Authors: Shubham Jaiswal
Abstract:
In this study, the numerical solution of a two-dimensional solute transport system in a homogeneous porous medium of finite length is obtained. The considered transport system has terms accounting for advection, dispersion, and first-order decay, with first-type boundary conditions. Initially, the aquifer is considered solute-free, and a constant input concentration is imposed at the inlet boundary. The solution describes the solute concentration in the rectangular inflow region of the homogeneous porous medium. The numerical solution is derived using a powerful method, viz., the spectral collocation method. The numerical computation and graphical presentations show that the method is effective and reliable for solving the physical model with complicated boundary conditions, even in the presence of the reaction term.
Keywords: two-dimensional solute transport system, spectral collocation method, Chebyshev polynomials, Chebyshev differentiation matrix
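A standard building block for Chebyshev spectral collocation is the differentiation matrix on Gauss-Lobatto nodes, sketched below in its textbook form; whether this matches the paper's exact operational-matrix construction is an assumption. The test differentiates a smooth function and shows the characteristic spectral accuracy.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix D and Gauss-Lobatto nodes x on [-1, 1]."""
    if N == 0:
        return np.zeros((1, 1)), np.ones(1)
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.r_[2.0, np.ones(N - 1), 2.0] * (-1.0) ** np.arange(N + 1)
    dX = x[:, None] - x[None, :]
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    return D - np.diag(D.sum(axis=1)), x        # rows of D sum to zero

D, x = cheb(16)
u = np.exp(x) * np.sin(5 * x)
du_exact = np.exp(x) * (np.sin(5 * x) + 5 * np.cos(5 * x))
print(np.max(np.abs(D @ u - du_exact)))         # error drops rapidly with N
```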
Procedia PDF Downloads 232
1984 Development of Risk Management System for Urban Railroad Underground Structures and Surrounding Ground
Authors: Y. K. Park, B. K. Kim, J. W. Lee, S. J. Lee
Abstract:
To assess the risk of underground structures and the surrounding ground, we collect basic data through engineering methods of measurement, exploration, and surveys, and derive the risk through proper analysis and individual assessments for urban railroad underground structures and the surrounding ground, including station inflow. Basic data are obtained by fiber-optic sensors, MEMS sensors, water quantity/quality sensors, a tunnel scanner, ground penetrating radar, and a light weight deflectometer, and are evaluated to determine whether they exceed their threshold values. Based on these data, we analyze the risk level of urban railroad underground structures and the surrounding ground. We also develop a risk management system to manage these data efficiently and to provide a convenient interface for data input and output.
Keywords: urban railroad, underground structures, ground subsidence, station inflow, risk
Procedia PDF Downloads 336
1983 Tip60’s Novel RNA-Binding Function Modulates Alternative Splicing of Pre-mRNA Targets Implicated in Alzheimer’s Disease
Authors: Felice Elefant, Akanksha Bhatnaghar, Keegan Krick, Elizabeth Heller
Abstract:
Context: The severity of Alzheimer's Disease (AD) progression involves an interplay of genetics, age, and environmental factors orchestrated by histone acetyltransferase (HAT) mediated neuroepigenetic mechanisms. While disruption of Tip60 HAT action in neural gene control is implicated in AD, alternative mechanisms underlying Tip60 function remain unexplored. Altered RNA splicing has recently been highlighted as a widespread hallmark of the AD transcriptome that is implicated in the disease. Research Aim: The aim of this study was to identify a novel RNA binding/splicing function for Tip60 in the human hippocampus that is impaired in brains from AD fly models and AD patients. Methodology/Analysis: The authors used RNA immunoprecipitation with RNA isolated from 200 pooled wild-type Drosophila brains for each of the 3 biological replicates. To identify Tip60's RNA targets, they performed genome sequencing (DNB-Sequencing™ technology, BGI Genomics) on 3 replicates for input RNA and RNA IPs by Tip60. Findings: The authors' transcriptomic analysis of RNA bound to Tip60 by Tip60-RNA immunoprecipitation (RIP) revealed Tip60 RNA targets enriched for critical neuronal processes implicated in AD. Remarkably, 79% of Tip60's RNA targets overlap with its chromatin gene targets, supporting a model by which Tip60 orchestrates bi-level transcriptional regulation at both the chromatin and RNA levels, a function unprecedented for any HAT to date. Since RNA splicing occurs co-transcriptionally and splicing defects are implicated in AD, the authors investigated whether Tip60-RNA targeting modulates splicing decisions and whether this function is altered in AD. Replicate multivariate analysis of transcript splicing (rMATS) of RNA-Seq data sets from wild-type and AD fly brains revealed a multitude of mammalian-like alternative splicing (AS) defects. Strikingly, over half of these altered RNAs were bona fide Tip60-RNA targets enriched in the AD-gene curated database, with some AS alterations prevented by increasing Tip60 in the fly brain. Importantly, human orthologs of several Tip60-modulated spliced genes in Drosophila are well-characterized aberrantly spliced genes in human AD brains, implicating disruption of Tip60's splicing function in AD pathogenesis. Theoretical Importance: The authors' findings support a novel RNA interaction and splicing regulatory function for Tip60 that may underlie the AS impairments that hallmark AD etiology. Data Collection: The authors collected data from RNA immunoprecipitation experiments using RNA isolated from 200 pooled wild-type Drosophila brains for each of the 3 biological replicates. They also performed genome sequencing (DNB-Sequencing™ technology, BGI Genomics) on 3 replicates for input RNA and RNA IPs by Tip60. Questions: The question addressed by this study was whether Tip60 has a novel RNA binding/splicing function in the human hippocampus and whether this function is impaired in brains from AD fly models and AD patients. Conclusions: The authors' findings support a novel RNA interaction and splicing regulatory function for Tip60 that may underlie the AS impairments that hallmark AD etiology.
Keywords: Alzheimer's disease, cognition, aging, neuroepigenetics
Procedia PDF Downloads 76
1982 Energy Efficiency Retrofitting of Residential Buildings Case Study: Multi-Family Apartment Building in Tripoli, Lebanon
Authors: Yathreb Sabsaby
Abstract:
Energy efficiency retrofitting of existing buildings was long ignored by public authorities, who favored energy efficiency policies for new buildings, which are easier to implement. Indeed, retrofitting is more complex and difficult to organize because of the extreme diversity of existing buildings, administrative situations, and occupation. Energy efficiency retrofitting of existing buildings has now become indispensable in all economies, even emerging countries, given the constraints imposed by energy security and climate change, and because it represents considerable potential energy savings. Addressing energy efficiency in the existing building stock has been acknowledged as one of the most critical yet challenging aspects of reducing our environmental footprint on the ecosystem. Tripoli, Lebanon, chosen as the case study area, is a typical Mediterranean metropolis in North Lebanon, where multi-family residential buildings are found all around the city. This generally implies that the density of energy demand is extremely high; even where renewable energy facilities are involved, they can act only as minor energy providers at the current technology level for a single-family house. It seems that only low-energy building design is achievable, certainly not zero-energy, in a developing country. This study reviews the latest research and experience and provides recommendations for deep energy retrofits that aim to save more than 50% of the energy used in a typical Tripoli apartment building.
Keywords: energy-efficiency, existing building, multifamily residential building, retrofit
Procedia PDF Downloads 455
1981 Investigation of Cascade Loop Heat Pipes
Authors: Nandy Putra, Atrialdipa Duanovsah, Kristofer Haliansyah
Abstract:
The aim of this research is to design an LHP with low thermal resistance and low condenser temperature. A self-designed cascade LHP was tested using biomaterial, sintered copper powder, and aluminum screen mesh as the wick. Using pure water as the working fluid for the first level of the LHP and 96% alcohol as the working fluid for the second level, the experiments were run with 10 W, 20 W, and 30 W heat input. Experimental results show that using biomaterial as the wick reduced the evaporator temperature by up to 22.63% and 37.41% compared with sintered copper powder and screen mesh, respectively. The lowest thermal resistance, 2.06 °C/W, occurred when biomaterial was used as the heat pipe wick. Applying the cascade system to the LHP reduced the condenser temperature and lowered the thermal resistance by up to 17.6%.
Keywords: biomaterial, cascade loop heat pipe, screen mesh, sintered Cu
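Thermal resistance of a heat pipe is conventionally defined as R = (T_evaporator − T_condenser) / Q. The two temperatures below are invented values chosen only so the arithmetic reproduces the reported 2.06 °C/W figure at 10 W heat input.

```python
# Heat pipe thermal resistance: R = (T_evap - T_cond) / Q
T_evap, T_cond = 61.2, 40.6   # assumed steady-state temperatures, deg C
Q = 10.0                      # heat input, W
R = (T_evap - T_cond) / Q
print(f"R = {R:.2f} C/W")     # -> 2.06 C/W
```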
Procedia PDF Downloads 264
1980 Comprehensive Evaluation of Thermal Environment and Its Countermeasures: A Case Study of Beijing
Authors: Yike Lamu, Jieyu Tang, Jialin Wu, Jianyun Huang
Abstract:
With the development of the economy and of science and technology, the urban heat island effect is becoming more and more serious. Taking Beijing as an example, this paper grades the value of each index influencing heat island intensity and establishes a mathematical model, a neural network system based on a fuzzy comprehensive evaluation index of the heat island effect. After data preprocessing, the weights of the factors affecting the heat island effect are generated; the data for the six indexes affecting heat island intensity in Shenyang, Shanghai, Beijing, and Hangzhou are input, and the result is automatically output by the neural network system. Showing the intensity of the heat island effect by a visual method is of practical significance, as it is simple, intuitive, and can be dynamically monitored.
Keywords: heat island effect, neural network, comprehensive evaluation, visualization
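A fuzzy comprehensive evaluation reduces to B = W·R: a weight vector over the influence indexes multiplied by a membership matrix over the intensity grades. The sketch below shows that step with invented index names, weights, and memberships; the paper's actual six indexes and generated weights are not given in the abstract.

```python
import numpy as np

# Six illustrative influence indexes and their (assumed) weights, summing to 1
indexes = ["green cover", "building density", "population", "traffic",
           "anthropogenic heat", "albedo"]
W = np.array([0.25, 0.20, 0.15, 0.15, 0.15, 0.10])

# Membership of each index in the (weak, moderate, strong) UHI grades
R = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.4, 0.5],
              [0.3, 0.4, 0.3],
              [0.1, 0.3, 0.6],
              [0.4, 0.4, 0.2]])

B = W @ R   # fuzzy comprehensive evaluation vector over the grades
print(dict(zip(["weak", "moderate", "strong"], np.round(B, 3))))
```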
Procedia PDF Downloads 133
1979 Fuzzy Logic Control for Flexible Joint Manipulator: An Experimental Implementation
Authors: Sophia Fry, Mahir Irtiza, Alexa Hoffman, Yousef Sardahi
Abstract:
This study presents an intelligent control algorithm for a flexible robotic arm. Fuzzy control is used to control the motion of the arm so as to maintain the arm tip at the desired position while reducing vibration and increasing the system's speed of response. The fuzzy controller (FC) is based on adding the tip angular position to the arm deflection angle and using their sum as a feedback signal to the control algorithm. This reduces the complexity of the FC in terms of the input variables, number of membership functions, fuzzy rules, and control structure. Also, the design of the fuzzy controller is model-free and uses only our knowledge about the system. To show the efficacy of the FC, the control algorithm is implemented on the flexible joint manipulator (FJM) developed by Quanser. The results show that the proposed control method is effective in terms of response time, overshoot, and vibration amplitude.
Keywords: fuzzy logic control, model-free control, flexible joint manipulators, nonlinear control
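A minimal single-input fuzzy controller over the summed feedback signal (tip angular position plus deflection angle) might look like the sketch below: triangular membership sets, three rules, and weighted-average defuzzification. The set boundaries, rule outputs, and gains are assumptions, not the authors' or Quanser's tuning.

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_control(error, u_max=5.0):
    """Three-rule fuzzy controller on the summed-feedback error (rad)."""
    mu = np.array([trimf(error, -2.0, -1.0, 0.0),    # Negative
                   trimf(error, -1.0,  0.0, 1.0),    # Zero
                   trimf(error,  0.0,  1.0, 2.0)])   # Positive
    u_rule = np.array([u_max, 0.0, -u_max])          # rule consequents (volts)
    # weighted-average (centroid-style) defuzzification
    return float(mu @ u_rule / mu.sum()) if mu.sum() > 0 else 0.0

theta_tip, alpha_deflection, theta_ref = 0.4, 0.1, 0.8
u = fuzzy_control((theta_tip + alpha_deflection) - theta_ref)
print(u)   # motor voltage command
```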
Procedia PDF Downloads 118
1978 The Processing of Context-Dependent and Context-Independent Scalar Implicatures
Authors: Liu Jia’nan
Abstract:
Default accounts hold that there exists a kind of scalar implicature which can be processed without context and which enjoys a psychological privilege over other scalar implicatures that depend on context. In contrast, Relevance Theorists regard context as a must, because all scalar implicatures have to meet the need for relevance in discourse. However, Katsos' experimental results showed that although adults quantitatively rejected under-informative utterances with lexical scales (context-independent) and with ad hoc scales (context-dependent) at almost the same rate, they still regarded violations in utterances with lexical scales as much more severe than those with ad hoc scales. Neither the default account nor Relevance Theory can fully explain this result. Thus, two questions arise: (1) Is it possible that the strange discrepancy is due to factors other than the generation of scalar implicature? (2) Are ad hoc scales truly formed under the possible influence of mental context? Do participants generate scalar implicatures with ad hoc scales, or do they merely compare semantic differences among target objects in the under-informative utterance? In our Experiment 1, question (1) will be answered by a replication of Katsos' Experiment 1. Test materials will be shown via PowerPoint in the form of pictures, and each procedure will be carried out under the guidance of a tester in a quiet room. Our Experiment 2 is intended to answer question (2). The picture materials will be transformed into literal words in DMDX, and the target sentence will be shown word-by-word to participants in the soundproof room in our lab. Reading times of the target parts, i.e., words containing scalar implicatures, will be recorded. We presume that in the group with lexical scales, a standardized pragmatic mental context would help generate the scalar implicature once the scalar word occurs, leading participants to expect the upcoming words to be informative. Thus, if the new input after the scalar word is under-informative, more time will be spent on the extra semantic processing. However, in the group with ad hoc scales, the scalar implicature may hardly be generated without the support of a fixed mental context for the scale. Thus, whether the new input is informative or not does not matter at all, and the reading times of the target parts will be the same in informative and under-informative utterances. The mind may be a dynamic system in which many factors co-occur. If Katsos' experimental result is reliable, will it shed light on the interplay of default accounts and context factors in scalar implicature processing? Based on our experiments, we might be able to assume that no single dominant processing paradigm is plausible. Furthermore, in the processing of scalar implicature, the semantic interpretation and the pragmatic interpretation may interact dynamically in the mind. As to the lexical scale, the pragmatic reading may prevail over the semantic reading because of its greater exposure in daily language use, which may also lead a possible default or standardized paradigm to override the role of context. However, the objects in an ad hoc scale are not usually treated as scale members in mental context, and thus the lexical-semantic association of the objects may prevent their pragmatic reading from generating the scalar implicature. Only when sufficient contextual factors are highlighted can the pragmatic reading gain privilege and generate the scalar implicature.
Keywords: scalar implicature, ad hoc scale, dynamic interplay, default account, Mandarin Chinese processing
Procedia PDF Downloads 322
1977 Using Machine Learning to Monitor the Condition of the Cutting Edge during Milling Hardened Steel
Authors: Pawel Twardowski, Maciej Tabaszewski, Jakub Czyżycki
Abstract:
The main goal of this work was to use machine learning to predict cutting-edge wear. The research was carried out while milling hardened steel with sintered carbide cutters at various cutting speeds. During the tests, cutting-edge wear was measured, and vibration acceleration signals were recorded. Appropriate measures were determined from the vibration signals and served as input data in the machine-learning process. Two approaches were used in this work. The first involved a two-state classification of the cutting edge: suitable or unfit for further work. In the second approach, prediction of the cutting-edge state based on vibration signals was used. The obtained research results show that the appropriate use of machine learning algorithms gives excellent results for monitoring the cutting edge during the process.
Keywords: milling of hardened steel, tool wear, vibrations, machine learning
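A typical realization of the two-state classification approach is sketched below: time-domain measures (RMS, peak-to-peak, kurtosis, crest factor) are extracted from each vibration record and fed to a classifier. The feature set, the random-forest choice, and the placeholder signals are assumptions; the paper does not specify them.

```python
import numpy as np
from scipy.stats import kurtosis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def vib_features(a):
    """Common time-domain measures from one acceleration record."""
    rms = np.sqrt(np.mean(a ** 2))
    return [rms, np.ptp(a), kurtosis(a), np.max(np.abs(a)) / rms]  # crest factor

rng = np.random.default_rng(0)
signals = rng.standard_normal((60, 4096))   # placeholder vibration records
y = rng.integers(0, 2, 60)                  # 1 = worn edge, 0 = suitable for work

X = np.array([vib_features(s) for s in signals])
print(cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5).mean())
```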
Procedia PDF Downloads 59
1976 Electrophysiological Correlates of Statistical Learning in Children with and without Developmental Language Disorder
Authors: Ana Paula Soares, Alexandrina Lages, Helena Oliveira, Francisco-Javier Gutiérrez-Domínguez, Marisa Lousada
Abstract:
From an early age, exposure to a spoken language allows us to implicitly capture the structure underlying the succession of speech sounds in that language and to segment it into meaningful units (words). Statistical learning (SL), i.e., the ability to pick up patterns in the sensory environment even without the intention or consciousness of doing so, is thus assumed to play a central role in the acquisition of the rule-governed aspects of language and possibly to lie behind the language difficulties exhibited by children with developmental language disorder (DLD). The research conducted so far has, however, led to inconsistent results, which might stem from the behavioral tasks used to test SL. In a classic SL experiment, participants are first exposed to a continuous stream (e.g., of syllables) in which, unbeknownst to the participants, stimuli are grouped into triplets that always appear together in the stream (e.g., 'tokibu', 'tipolu'), with no pauses between them (e.g., 'tokibutipolugopilatokibu') and without any information regarding the task or the stimuli. Following exposure, SL is assessed by asking participants to discriminate triplets previously presented ('tokibu') from new sequences never presented together during exposure ('kipopi'), i.e., to perform a two-alternative forced-choice (2-AFC) task. Despite the widespread use of the 2-AFC task to test SL, it has come under increasing criticism, as it is an offline post-learning task that only assesses the outcome of the learning that occurred during the previous exposure phase and that might be affected by factors beyond the computation of the regularities embedded in the input, typically the likelihood of two syllables occurring together, a statistic known as transitional probability (TP). One solution to overcome these limitations is to assess SL as exposure to the stream unfolds, using online techniques such as event-related potentials (ERPs), which are highly sensitive to the time course of learning in the brain. Here we collected ERPs to examine the neurofunctional correlates of SL in preschool children with DLD and in chronological-age controls with typical language development (TLD), who were exposed to an auditory stream composed of eight three-syllable nonsense words, four presenting high TPs and the other four low TPs, to further analyze whether the ability of DLD and TLD children to extract word-like units from the stream was modulated by the words' predictability. Moreover, to ascertain whether previous knowledge of the to-be-learned regularities affected the neural responses to high- and low-TP words, children performed the auditory SL task first under implicit and subsequently under explicit conditions. Although behavioral evidence of SL was not obtained in either group, the neural responses elicited during the exposure phases of the SL tasks differentiated children with DLD from children with TLD. Specifically, the results indicated that only children from the TLD group showed neural evidence of SL, particularly in the SL task performed under explicit conditions, first for the low-TP and subsequently for the high-TP 'words'. Taken together, these findings support the view that children with DLD show deficits in the extraction of the regularities embedded in the auditory input, which might underlie their language difficulties.
Keywords: developmental language disorder, statistical learning, transitional probabilities, word segmentation
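The transitional probability statistic mentioned above, TP(B|A) = count(AB) / count(A), can be computed directly from a syllable stream, as in this small sketch (the stream reuses the 'tokibu'/'tipolu' example).

```python
from collections import Counter

def transitional_probabilities(stream):
    """TP(B|A) = count(AB) / count(A) over adjacent syllable pairs."""
    pairs = Counter(zip(stream, stream[1:]))
    firsts = Counter(stream[:-1])
    return {(a, b): n / firsts[a] for (a, b), n in pairs.items()}

# 'to-ki-bu' is a word: within-word TPs are high, TPs across word boundaries low
stream = "to ki bu ti po lu to ki bu go pi la to ki bu".split()
tps = transitional_probabilities(stream)
print(tps[("to", "ki")])   # 1.0 -> within-word transition
print(tps[("bu", "ti")])   # 0.5 -> straddles a word boundary
```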
Procedia PDF Downloads 188
1975 Numerical Investigation of the Influence on Buckling Behaviour Due to Different Launching Bearings
Authors: Nadine Maier, Martin Mensinger, Enea Tallushi
Abstract:
In general, two types of launching bearings are used today in the construction of large steel and steel-concrete composite bridges: sliding rockers and systems with hydraulic bearings. The advantages and disadvantages of the respective systems are under discussion. During incremental launching, the center of the webs of the superstructure is not perfectly in line with the center of the launching bearings due to unavoidable tolerances, which may influence the buckling behavior of the web plates. These imperfections are not considered in the current design against plate buckling according to DIN EN 1993-1-5. It is therefore investigated whether the design rules have to take into account eccentricities which occur during incremental launching, and whether this depends on the respective launching bearing. Therefore, at the Technical University of Munich, large-scale buckling tests were carried out on longitudinally stiffened plates under biaxial stresses with the two different types of launching bearings and eccentric load introduction. Based on the experimental results, a numerical model was validated. Currently, we are evaluating different parameters for both types of launching bearings, such as load introduction length, load eccentricity, the distance between longitudinal stiffeners, the position of the rotation point of the spherical bearing used within the hydraulic bearings, web and flange thickness, and imperfections. The imperfection depends on the geometry of the buckling field and on whether local or global buckling occurs. This, as well as the mesh size, is taken into account in the numerical calculations of the parametric study. As the geometric imperfection, the scaled first buckling mode is applied. A bilinear material curve is used, so that a GMNIA (geometrically and materially nonlinear analysis with imperfections) is performed to determine the load capacity. Stresses and displacements are evaluated in different directions, and specific stress ratios are determined at the critical points of the plate at the converging load step. To evaluate the introduction of the transverse load, the transverse stress concentration is plotted along a defined longitudinal section of the web. In the same way, the rotation of the flange is evaluated in order to show the influence of the different degrees of freedom of the launching bearings under eccentric load introduction and to allow an assessment of the practically relevant case. The input and output are automated and depend on the given parameters, so we are able to adapt our model to different geometric dimensions and load conditions. The programming is done with the help of APDL and a Python code, which allows us to evaluate and compare more parameters faster while avoiding input and output errors. It is therefore possible to evaluate a large spectrum of parameters in a short time, which allows a practical evaluation of different parameters for buckling behavior. This paper presents the results of the tests as well as the validation and parameterization of the numerical model, and shows the first influences on the buckling behavior under eccentric and multi-axial load introduction.
Keywords: buckling behavior, eccentric load introduction, incremental launching, large scale buckling tests, multi axial stress states, parametric numerical modelling
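The automated parametric study can be pictured as a driver that enumerates every combination of the study parameters and dispatches each case to the FE model. The sketch below shows only that driver; the parameter names and values are illustrative, and run_gmnia is a hypothetical stand-in for the actual APDL/ANSYS interface.

```python
from itertools import product

def run_gmnia(case):
    """Hypothetical stand-in for the APDL/ANSYS call; here it only logs the case."""
    print(case)

params = {
    "load_eccentricity_mm": [0, 5, 10],
    "stiffener_spacing_mm": [500, 750],
    "web_thickness_mm": [12, 16],
    "imperfection_scale": [0.5, 1.0],
}

# One GMNIA run per parameter combination (3 * 2 * 2 * 2 = 24 cases)
for combo in product(*params.values()):
    run_gmnia(dict(zip(params, combo)))
```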
Procedia PDF Downloads 107
1974 Simple Procedure for Probability Calculation of Tensile Crack Occurring in Rigid Pavement: A Case Study
Authors: Aleš Florian, Lenka Ševelová, Jaroslav Žák
Abstract:
The formation of tensile cracks in the concrete slabs of rigid pavement can be (among others) the initiation point of other, more serious failures, which can ultimately lead to complete degradation of the concrete slab and thus of the whole pavement. Two measures can be used for reliability assessment of this phenomenon: the probability of failure and/or the reliability index. Different methods can be used for their calculation; the simple ones are moment methods and simulation techniques. Two methods, the FOSM method and the Simple Random Sampling method, are verified and compared. The influence of information about the probability distribution and the statistical parameters of the input variables, as well as of the limit state function, on the calculated reliability index and failure probability is studied at three points on the lower surface of the concrete slabs of the older type of rigid pavement formerly used in the Czech Republic.
Keywords: failure, pavement, probability, reliability index, simulation, tensile crack
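Both methods compared here are easy to state for a simple limit state g = R − S with normal variables: FOSM forms the reliability index from the first two moments of g, while Simple Random Sampling estimates the failure probability as the fraction of simulated samples with g < 0. The distributions below are illustrative, not the pavement study's actual inputs.

```python
import numpy as np
from scipy.stats import norm

# Limit state g = R - S (tensile strength minus tensile stress), both normal
mu_R, sd_R = 4.5, 0.45     # MPa (assumed)
mu_S, sd_S = 3.0, 0.60     # MPa (assumed)

# FOSM: reliability index from the first two moments of g
beta = (mu_R - mu_S) / np.hypot(sd_R, sd_S)
print(f"FOSM: beta={beta:.2f}, Pf={norm.cdf(-beta):.4f}")

# Simple Random Sampling (Monte Carlo): empirical failure fraction
rng = np.random.default_rng(0)
g = rng.normal(mu_R, sd_R, 10**6) - rng.normal(mu_S, sd_S, 10**6)
pf = np.mean(g < 0)
print(f"SRS:  Pf={pf:.4f}, beta={-norm.ppf(pf):.2f}")
```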
Procedia PDF Downloads 546
1973 Geometric and Algebraic Properties of the Eigenvalues of Monotone Matrices
Authors: Brando Vagenende, Marie-Anne Guerry
Abstract:
For stochastic matrices of any order, the geometric description of the convex set of eigenvalues is completely known. The purpose of this study is to investigate the subset of monotone matrices. This type of matrix appears in contexts such as intergenerational occupational mobility, equal-input modeling, and credit-ratings-based systems. Monotone matrices are stochastic matrices in which each row stochastically dominates the previous row. The monotonicity property of a stochastic matrix can be expressed by a nonnegative lower-order matrix with the same eigenvalues as the original monotone matrix (except for the eigenvalue 1). Specifically, the aim of this research is to focus on the properties of the eigenvalues of monotone matrices. For such matrices up to order 3, there already exists a complete description of the convex set of eigenvalues. For monotone matrices of order at least 4, this study gives, through simulations, more insight into the geometric description of their eigenvalues. Furthermore, this research treats, in a geometric and algebraic way, the properties of the eigenvalues of monotone matrices of order at least 4.
Keywords: eigenvalues of matrices, finite Markov chains, monotone matrices, nonnegative matrices, stochastic matrices
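Monotonicity can be checked directly from cumulative row sums: row i+1 stochastically dominates row i exactly when its cumulative sums are elementwise no larger (mass is shifted toward higher states). A small sketch with an invented mobility-style matrix follows; note that 1 is always an eigenvalue of a stochastic matrix.

```python
import numpy as np

def is_monotone(P, tol=1e-12):
    """Each row stochastically dominates the previous one iff the cumulative
    sums are elementwise non-increasing down the rows."""
    F = np.cumsum(P, axis=1)
    return bool(np.all(F[1:] <= F[:-1] + tol))

# Intergenerational-mobility-style monotone stochastic matrix (illustrative)
P = np.array([[0.6, 0.3, 0.1],
              [0.3, 0.4, 0.3],
              [0.1, 0.3, 0.6]])
print(is_monotone(P))                   # True
print(np.sort(np.linalg.eigvals(P)))    # includes the eigenvalue 1
```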
Procedia PDF Downloads 80
1972 Annual Water Level Simulation Using Support Vector Machine
Authors: Maryam Khalilzadeh Poshtegal, Seyed Ahmad Mirbagheri, Mojtaba Noury
Abstract:
In this paper, using yearly input data on rainfall, temperature, and inflow to Lake Urmia, the simulation of water level fluctuation is carried out by means of three models. In view of climate change investigations, the fluctuations of lake water levels are of high interest. This study investigates data-driven models: the support vector machine (SVM), a relatively new regression procedure in water resources, is applied to the yearly level data of Lake Urmia, the biggest and a hypersaline lake in Iran. The simulated lake levels are found to be in good correlation with the observed values, and the results of the SVM simulation show good accuracy and ease of implementation. The mean square error, mean absolute relative error, and coefficient of determination statistics are used as comparison criteria.
Keywords: simulation, water level fluctuation, Urmia Lake, support vector machine
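An SVM regression of the kind described can be set up with scikit-learn as below, reporting the same three comparison criteria (MSE, mean absolute relative error, and the coefficient of determination). The yearly data are random placeholders, and the kernel and hyperparameters are assumptions, not the study's calibrated values.

```python
import numpy as np
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Placeholder yearly inputs (rainfall, temperature, inflow) and lake level
rng = np.random.default_rng(0)
X = rng.standard_normal((40, 3))
y = (1271.0 + 0.8 * X[:, 0] - 0.3 * X[:, 1] + 0.5 * X[:, 2]
     + 0.1 * rng.standard_normal(40))

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
pred = cross_val_predict(model, X, y, cv=5)

mse = mean_squared_error(y, pred)
mare = np.mean(np.abs((y - pred) / y))           # mean absolute relative error
print(f"MSE={mse:.3f}  MARE={mare:.5f}  R2={r2_score(y, pred):.3f}")
```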
Procedia PDF Downloads 367
1971 A Conceptual Framework for Assessing the Development of Health Information Systems Enterprise Architecture Interoperability
Authors: Prosper Tafadzwa Denhere, Ephias Ruhode, Munyaradzi Zhou
Abstract:
Health Information Systems (HISs) interoperability is emerging as the future of modern healthcare systems Enterprise Architecture (EA), where healthcare entities are seamlessly interconnected to share healthcare data. The healthcare industry has been characterised by an influx of fragmented stand-alone e-Health systems, which present challenges for sharing healthcare information across platforms, and this reality means systems integration efforts deserve much attention. The lack of an EA conceptual framework consequently creates the need to investigate an ideal solution for assessing the development of HIS interoperability. The study takes a qualitative exploratory approach within a design science research context. The research aims to examine, through a literature study, the various themes drawn from the literature that can help in the assessment of interoperable HIS development. Themes derived from the study include HIS needs, HIS readiness, HIS constraints, and HIS technology integration elements and standards, tied to the EA development architectural layers of The Open Group Architecture Framework (TOGAF) as an EA development methodology. Eventually, the themes were conceptualised into a framework reviewed by two experts. The essence of the study was to provide a framework within which interoperable EA of HISs should be developed.
Keywords: enterprise architecture, eHealth, health information systems, interoperability
Procedia PDF Downloads 105
1970 Hydrodynamic Analysis with Heat Transfer in Solid Gas Fluidized Bed Reactor for Solar Thermal Applications
Authors: Sam Rasoulzadeh, Atefeh Mousavi
Abstract:
Fluidized bed reactors are known as a safe and effective means for highly exothermic and endothermic catalytic reactions, owing to their temperature uniformity. In these reactors, a wide range of catalyst particles can be used, and with continuous operation, production can proceed in succession. Providing optimal conditions for the operation of these types of reactors avoids the exorbitant costs of laboratory work. In this regard, a hydrodynamic analysis with heat transfer was carried out for a solid-gas fluidized bed reactor for solar thermal applications. The results showed that the fluid enters the reactor at a lower temperature than it exits; as the fluid passes through the reactor, heat is transferred between the cylinder, the solar panel, and the fluid. This raises the fluid temperature at the outlet pump, and the kinetic energy of the fluid is also increased in the outlet areas.
Keywords: heat transfer, solar reactor, fluidized bed reactor, CFD, computational fluid dynamics
Procedia PDF Downloads 180
1969 Crosssampler: A Digital Convolution Cross Synthesis Instrument
Authors: Jimmy Eadie
Abstract:
Convolutional Cross Synthesis (CCS) has emerged as a powerful technique for blending input signals to create hybrid sounds. It has significantly expanded the horizons of digital signal processing, enabling artists to explore audio effects. However, the conventional applications of CCS primarily revolve around reverberation and room simulation rather than being utilized as a creative synthesis method. In this paper, we present the design of a digital instrument called CrossSampler that harnesses a parametric approach to convolution cross-synthesis, which involves using adjustable parameters to control the blending of audio signals through convolution. These parameters allow for customization of the resulting sound, offering greater creative control and flexibility. It enables users to shape the output by manipulating factors such as duration, intensity, and spectral characteristics. This approach facilitates experimentation and exploration in sound design and opens new sonic possibilities.
Keywords: convolution, synthesis, sampling, virtual instrument
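One minimal way to realize convolution cross-synthesis is blockwise spectral multiplication: the magnitude envelope of one signal shapes the spectrum of the other, with a mix parameter controlling the depth. This is a generic sketch, not CrossSampler's engine; the block size, Hann windowing, and the mix parameter are assumptions.

```python
import numpy as np

def cross_synthesize(x, y, mix=1.0, block=2048):
    """Shape x's spectrum by y's magnitude envelope, block by block
    (no overlap-add for brevity, so block-edge artifacts are possible)."""
    n = min(len(x), len(y)) // block * block
    out = np.zeros(n)
    win = np.hanning(block)
    for i in range(0, n, block):
        X = np.fft.rfft(x[i:i + block] * win)
        Y = np.fft.rfft(y[i:i + block] * win)
        env = np.abs(Y) / (np.abs(Y).max() + 1e-12)   # normalized envelope
        out[i:i + block] = np.fft.irfft(X * env ** mix)
    return out / (np.max(np.abs(out)) + 1e-12)        # peak-normalize

sr = 44100
t = np.arange(sr * 2) / sr
source = np.sign(np.sin(2 * np.pi * 110 * t))         # harmonically rich source
modulator = np.random.default_rng(0).standard_normal(sr * 2)
hybrid = cross_synthesize(source, modulator, mix=0.8)
```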
Procedia PDF Downloads 64