Search results for: orthogonal functions
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2776


2176 Intonation Salience as an Underframe to Text Intonation Models

Authors: Tatiana Stanchuliak

Abstract:

It is common knowledge that intonation is not laid over a ready-made text. On the contrary, intonation forms and accompanies the text at the moment of its birth in the speaker’s mind. As a result, intonation plays one of the fundamental roles in the process of transferring a thought into external speech. Intonation structure can highlight the semantic significance of textual elements and become a reference mark for understanding the information structure of the text. Intonation operates by means of prosodic characteristics, one of which is intonation salience, whose function in texts is to make some textual elements more prominent than others. This function of intonation is therefore an organizing one: it helps to form the frame of key elements of the text. The study under consideration attempted to look into the inner nature of salience and to create a text intonation model. This general goal led to several more specific intermediate results. First, degrees of salience were established at the level of the smallest semantic element, the intonation group, and the prosodic means of creating salience were examined. Second, the most frequent combinations of prosodic means made it possible to distinguish patterns of salience, which then became constituent elements of a text intonation model. Third, the analysis of the predicate structure made it possible to divide the whole text into smaller parts, or units, which perform a specific function in the development of the general communicative intention. It appeared that such units can be found in any text and that they share common characteristics in their intonation arrangement. These findings are important both for the theory of intonation and for its practical application.

Keywords: accentuation, inner speech, intention, intonation, intonation functions, models, patterns, predicate, salience, semantics, sentence stress, text

Procedia PDF Downloads 266
2175 The Effect of Randomly Distributed Polypropylene Fibers and Some Additive Materials on Freezing-Thawing Durability of a Fine-Grained Soil

Authors: A. Şahin Zaimoglu

Abstract:

A number of studies have been conducted recently to investigate the influence of randomly oriented fibers on some engineering properties of cohesive and cohesionless soils. However, few studies have been carried out on the freezing-thawing behavior of fine-grained soils modified with discrete fiber inclusions and additive materials. This experimental study was performed to investigate the effect of randomly distributed polypropylene fibers (PP) and some additive materials [e.g., borogypsum (BG), fly ash (FA), and cement (C)] on the freezing-thawing durability (mass losses) of a fine-grained soil over 6, 12, and 18 cycles. The Taguchi method was applied to the experiments, and a standard L9 orthogonal array (OA) with four factors and three levels was chosen. A series of freezing-thawing tests was conducted on each specimen. 0-20% BG, 0-20% FA, 0-0.25% PP, and 0-3% C by total dry weight of the mixture were used in the preparation of the specimens. Experimental results showed that the most effective materials for the freezing-thawing durability (mass losses) of the samples were borogypsum and fly ash. The values of mass losses for 6, 12, and 18 cycles under optimum conditions were 16.1%, 5.1%, and 3.6%, respectively.
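The L9 orthogonal array referred to above prescribes nine runs instead of the 3^4 = 81 full-factorial combinations. A minimal sketch of such a design, with hypothetical mixture levels chosen inside the ranges reported in the abstract (the study's actual levels may differ):

```python
# Standard Taguchi L9(3^4) orthogonal array: 9 runs, 4 factors, 3 levels each.
# Level indices are 0, 1, 2.
L9 = [
    (0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
    (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
    (2, 0, 2, 1), (2, 1, 0, 2), (2, 2, 1, 0),
]

# Hypothetical factor levels within the ranges reported in the abstract
# (% of total dry weight); the levels actually used in the study may differ.
levels = {
    "BG": [0, 10, 20],       # borogypsum
    "FA": [0, 10, 20],       # fly ash
    "PP": [0, 0.125, 0.25],  # polypropylene fiber
    "C":  [0, 1.5, 3],       # cement
}

def l9_design(levels):
    """Expand the L9 index array into concrete mixture proportions."""
    names = list(levels)
    return [{n: levels[n][row[i]] for i, n in enumerate(names)} for row in L9]

runs = l9_design(levels)
print(len(runs), runs[0])
```

Each level of each factor appears in exactly three of the nine runs; this balance property is what allows the main effects to be estimated independently.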

Keywords: freezing-thawing, additive materials, reinforced soil, optimization

Procedia PDF Downloads 306
2173 Practical Challenges of Tunable Parameters in MATLAB/Simulink Code Generation

Authors: Ebrahim Shayesteh, Nikolaos Styliaras, Alin George Raducu, Ozan Sahin, Daniel Pombo Vázquez, Jonas Funkquist, Sotirios Thanopoulos

Abstract:

One of the important requirements in many code generation projects is defining some of the model parameters as tunable. This makes it possible to update the model parameters without performing the code generation again. This paper studies the concept of embedded code generation by the MATLAB/Simulink coder targeting the TwinCAT Simulink system. The generated runtime modules are then tested and deployed to the TwinCAT 3 engineering environment. However, defining the parameters as tunable in MATLAB/Simulink code generation targeting TwinCAT is not very straightforward. This paper focuses on this subject and reviews some of the techniques tested here to make the parameters tunable in generated runtime modules. Three techniques are proposed for this purpose: normal tunable parameters, callback functions, and masked subsystems. Moreover, some test Simulink models are developed and used to evaluate the proposed approaches. A brief summary of the study results is presented in the following. First of all, parameters defined as tunable and used to set the values of other Simulink elements (e.g., the gain value of a gain block) can be changed after the code generation, and this update will affect the values of all elements defined on the basis of the tunable parameter. For instance, if the parameter K=1 is defined as a tunable parameter in the code generation process and this parameter is used as the gain of a gain block in Simulink, the gain value of that block is equal to 1 in the TwinCAT environment after the code generation. But the value of K can be changed to a new value (e.g., K=2) in TwinCAT (without doing any new code generation in MATLAB), and the gain value of the gain block will then change to 2. Secondly, adding a callback function in the form of a “pre-load function,” “post-load function,” or “start function” will not help to make the parameters tunable without performing a new code generation. This is because any such MATLAB files are run before the code generation is performed. The parameters defined or calculated in these files are used as fixed values in the generated code. Thus, adding these files as callback functions to the Simulink model will not make these parameters flexible, since the MATLAB files are not attached to the generated code. Therefore, to change the parameters defined or calculated in these files, the code generation must be done again. However, adding these files as callback functions forces MATLAB to run them before the code generation, so there is no need to define the parameters mentioned in these files separately. Finally, using a tunable parameter to define or calculate the values of other parameters through a mask is an efficient method for changing the values of the latter parameters after the code generation. For instance, if a tunable parameter K is used in calculating the values of two other parameters K1 and K2 and, after the code generation, the value of K is updated in the TwinCAT environment, the values of K1 and K2 will also be updated (without any new code generation).
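The contrast between a parameter baked in at generation time and a genuinely tunable one can be illustrated outside the MATLAB/TwinCAT toolchain; the following Python analogy is purely conceptual and stands in for generated code, not for any actual coder output:

```python
# Conceptual analogy (not actual TwinCAT/Simulink code): a "generated"
# module with a hard-coded constant versus one that reads from a runtime
# parameter table, mimicking fixed vs. tunable code-generation parameters.

params = {"K": 1.0}  # runtime-tunable parameter table

def gain_fixed(u, k=1.0):
    # k was baked in at "generation time"; changing params has no effect
    return k * u

def gain_tunable(u):
    # the generated code keeps a reference to the tunable parameter
    return params["K"] * u

print(gain_fixed(3.0), gain_tunable(3.0))  # both behave as K = 1
params["K"] = 2.0                          # retune without "regenerating"
print(gain_fixed(3.0), gain_tunable(3.0))  # only the tunable gain changes
```

Retuning the table changes the tunable gain immediately, while the baked-in constant would require a new "generation" step, which is the distinction the abstract draws.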

Keywords: code generation, MATLAB, tunable parameters, TwinCAT

Procedia PDF Downloads 225
2173 NMR-Based Metabolomics Reveals Dietary Effects in Liver Extracts of Arctic Charr (Salvelinus alpinus) and Tilapia (Oreochromis mossambicus) Fed Different Levels of Starch

Authors: Rani Abro, Ali Ata Moazzami, Jan Erik Lindberg, Torbjörn Lundh

Abstract:

The effect of dietary starch level on liver metabolism in Arctic charr (Salvelinus alpinus) and tilapia (Oreochromis mossambicus) was studied using 1H-NMR based metabolomics. Fingerlings were fed iso-nitrogenous diets containing 0, 10, and 20% starch for two months before liver samples were collected for metabolite analysis. Metabolite profiling was performed on 600 MHz NMR spectra using Chenomx software. In total, 48 metabolites were profiled in liver extracts from both fish species. Following the profiling, principal component analysis (PCA) and orthogonal partial least squares discriminant analysis (OPLS-DA) were performed. These revealed that differences in the concentrations of significant metabolites were correlated with the dietary starch level in both species. The most prominent difference in the metabolic response to starch feeding between the omnivorous tilapia and the carnivorous Arctic charr was an indication of higher anaerobic metabolism in Arctic charr. The data also indicated that amino acid and pyrimidine metabolism was higher in Arctic charr than in tilapia.
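The PCA step can be sketched on a toy data set of two metabolites and six samples (all values invented; the study profiled 48 metabolites with dedicated software). For two variables, the first principal axis of the covariance matrix has a closed form:

```python
import math

# Toy PCA sketch: concentrations of two hypothetical metabolites in six
# liver samples from two diet groups (values invented for illustration).
samples = [(1.0, 2.1), (1.2, 2.3), (1.1, 2.2),   # low-starch group
           (2.0, 3.9), (2.2, 4.2), (2.1, 4.0)]   # high-starch group

n = len(samples)
mx = sum(x for x, _ in samples) / n
my = sum(y for _, y in samples) / n
cxx = sum((x - mx) ** 2 for x, _ in samples) / (n - 1)
cyy = sum((y - my) ** 2 for _, y in samples) / (n - 1)
cxy = sum((x - mx) * (y - my) for x, y in samples) / (n - 1)

# First principal axis of a 2x2 covariance matrix (closed form)
theta = 0.5 * math.atan2(2 * cxy, cxx - cyy)
pc1 = (math.cos(theta), math.sin(theta))

# Scores on PC1 separate the two diet groups
scores = [(x - mx) * pc1[0] + (y - my) * pc1[1] for x, y in samples]
print(pc1, [round(s, 2) for s in scores])
```

The PC1 scores are negative for one diet group and positive for the other, which is the kind of group separation that PCA and OPLS-DA expose in the real metabolite data.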

Keywords: arctic charr, metabolomics, starch, tilapia

Procedia PDF Downloads 455
2172 The Relationship between Life Event Stress, Depressive Thoughts, and Working Memory Capacity

Authors: Eid Abo Hamza, Ahmed Helal

Abstract:

Purpose: The objective is to measure the capacity of working memory, i.e., the maximum number of elements that can be retrieved and processed, by measuring the basic functions of working memory (inhibition/transfer/update), and also to investigate its relationship to life stress and depressive thoughts. Methods: The study sample consisted of 50 students from Egypt. A cognitive task was designed to measure working memory capacity based on the determinants found in previous research, which showed that cognitive tasks are the best measures of the functions and capacity of working memory. Results: The results indicated that there were statistically significant differences in the level of life stress events (high/low) on the task measuring working memory capacity. The results also showed that there were no statistically significant differences between males and females, or between academic majors, on the task measuring working memory capacity. Furthermore, the results reported that there was no statistically significant effect of the interaction of the level of life stress (high/low) and gender (male/female) on the task measuring working memory capacity. Finally, the results showed that there were significant differences in the level of depressive thoughts (high/low) on the task measuring working memory. Conclusions: The current research concludes that neither the interaction of stressful life events, gender, and academic major, nor the interaction of depressive thoughts, gender, and academic major, influences working memory capacity.

Keywords: working memory, depression, stress, life event

Procedia PDF Downloads 157
2171 Allium cepa Extract Provides Neuroprotection Against Ischemia-Reperfusion-Induced Cognitive Dysfunction and Brain Damage in Mice

Authors: Jaspal Rana (Alkem Laboratories, Baddi, Himachal Pradesh, India; Chitkara University, Punjab, India)

Abstract:

Oxidative stress has been identified as an underlying cause of ischemia-reperfusion (IR) related cognitive dysfunction and brain damage. Therefore, antioxidant-based therapies to treat IR injury are being investigated. Allium cepa L. (onion) is used as a culinary medicine and is documented to have marked antioxidant effects. Hence, the present study was designed to evaluate the effect of A. cepa outer scale extract (ACE) against IR-induced cognitive and biochemical deficits in mice. ACE was prepared by maceration with 70% methanol and fractionated into ethyl acetate and aqueous fractions. Bilateral common carotid artery occlusion for 10 min followed by 24 h of reperfusion was used to induce cerebral IR injury. Following IR injury, ACE (100 and 200 mg/kg) was administered orally to the animals for 7 days once daily. Behavioral outcomes (memory and sensorimotor functions) were evaluated using the Morris water maze and a neurological severity score. Cerebral infarct size, brain thiobarbituric acid reactive species, reduced glutathione, and superoxide dismutase activity were also determined. Treatment with ACE significantly ameliorated the IR-mediated deterioration of memory and sensorimotor functions and the rise in brain oxidative stress in the animals. The results of the present investigation revealed that ACE improved functional outcomes after cerebral IR injury, which may be attributed to its antioxidant properties.

Keywords: stroke, neuroprotection, ischemia reperfusion, herbal drugs

Procedia PDF Downloads 104
2170 Implementation of Conceptual Real-Time Embedded Functional Design via Drive-By-Wire ECU Development

Authors: Ananchai Ukaew, Choopong Chauypen

Abstract:

Design concepts for a real-time embedded system can be realized initially by introducing novel design approaches. In this work, a model-based design approach and in-the-loop testing were employed early in the conceptual and preliminary phases to formulate design requirements and perform quick real-time verification. The design and analysis methodology includes simulation analysis, model-based testing, and in-the-loop testing. The design of a conceptual drive-by-wire (DBW) algorithm for an electronic control unit (ECU) is presented to demonstrate the conceptual design process, analysis, and functionality evaluation. The DBW ECU function concepts can be implemented in the vehicle system to improve the drivability of an electric vehicle (EV) conversion. However, within a new development process, conceptual ECU functions and parameters need to be evaluated. As a result, a testing system was employed to support the evaluation of the conceptual DBW ECU functions. For the current setup, the system components consisted of actual DBW ECU hardware, electric vehicle models, and the controller area network (CAN) protocol. The vehicle models and the CAN bus interface were both implemented as real-time applications, where the ECU and CAN protocol functionality were verified according to the design requirements. The proposed system could potentially be of benefit in performing rapid real-time analysis of design parameters for conceptual system or software algorithm development.

Keywords: drive-by-wire ECU, in-the-loop testing, model-based design, real-time embedded system

Procedia PDF Downloads 348
2169 Implementation of Successive Interference Cancellation Algorithms in the 5G Downlink

Authors: Mokrani Mohamed Amine

Abstract:

In this paper, we have implemented successive interference cancellation algorithms in the 5G downlink. We have calculated the maximum throughput in Frequency Division Duplex (FDD) mode in the downlink, where we obtained a value equal to 836932 b/ms. The transmitter is of the Multiple Input Multiple Output (MIMO) type with eight transmitting and receiving antennas. Each of the eight antennas simultaneously transmits a data rate of 104616 b/ms that contains the binary messages of the three users; in this case, the Cyclic Redundancy Check (CRC) is negligible, and the MIMO category is spatial diversity. The technology used for this is called Non-Orthogonal Multiple Access (NOMA) with Quadrature Phase Shift Keying (QPSK) modulation. The transmission is done in a Rayleigh fading channel with the presence of obstacles. By applying the steps involved in SIC, the MIMO Successive Interference Cancellation (SIC) receiver with two transmitting and receiving antennas recovers its binary message without errors for certain values of transmission power, such as 50 dBm. For user 1, the message is recovered with 0.054485% errors when the transmitted power is 20 dBm and with 0.00286763% errors for a transmitted power of 32 dBm; for user 2, with 0.0114705% errors when the transmitted power is 20 dBm and with 0.00286763% errors for a power of 24 dBm.
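The power-domain superposition and SIC decoding steps can be sketched for two users sharing one antenna in a noiseless channel; the power split and Gray-mapped QPSK below are illustrative choices, not the paper's 5G/Rayleigh simulation setup:

```python
import random

# Minimal sketch of power-domain NOMA with successive interference
# cancellation (SIC) for two QPSK users; powers, bit streams, and the
# noiseless channel are illustrative only.
random.seed(0)
S2 = 2 ** 0.5

def qpsk_mod(bits):
    # Gray-mapped QPSK: one unit-energy complex symbol per bit pair
    return [((1 - 2 * bits[i]) + 1j * (1 - 2 * bits[i + 1])) / S2
            for i in range(0, len(bits), 2)]

def qpsk_demod(syms):
    bits = []
    for s in syms:
        bits += [1 if s.real < 0 else 0, 1 if s.imag < 0 else 0]
    return bits

# Two users superposed in the power domain (power split is illustrative)
p1, p2 = 0.8, 0.2          # user 1 gets more power and is decoded first
b1 = [random.randint(0, 1) for _ in range(40)]
b2 = [random.randint(0, 1) for _ in range(40)]
x = [p1 ** 0.5 * s1 + p2 ** 0.5 * s2
     for s1, s2 in zip(qpsk_mod(b1), qpsk_mod(b2))]

# SIC receiver: decode user 1 treating user 2 as interference...
b1_hat = qpsk_demod(x)
# ...then re-modulate, subtract, and decode user 2 from the residual
residual = [xi - p1 ** 0.5 * s for xi, s in zip(x, qpsk_mod(b1_hat))]
b2_hat = qpsk_demod(residual)
print(b1_hat == b1, b2_hat == b2)
```

Because user 1 receives the larger power share, its symbols can be sliced directly; subtracting the re-modulated estimate leaves a clean residual from which user 2 is decoded, which is the essence of the SIC steps applied in the paper.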

Keywords: 5G, NOMA, QPSK, TBS, LDPC, SIC, capacity

Procedia PDF Downloads 102
2168 A Rhetorical Approach to Julian the Emperor: A Consolation upon the Departure of the Excellent Sallust

Authors: Georgios Alexandropoulos

Abstract:

This study examines the rhetorical practice of "The Consolation to Himself upon the Departure of the Excellent Sallust," written by the emperor Flavius Claudius Julian. Its purpose is to describe the way Julian uses language so as to have favorable effects on the public through certain communicative and rhetorical functions.

Keywords: discourse analysis, Byzantine rhetoric

Procedia PDF Downloads 415
2167 Spatial Interpolation of Aerosol Optical Depth Pollution: Comparison of Methods for the Development of Aerosol Distribution

Authors: Sahabeh Safarpour, Khiruddin Abdullah, Hwee San Lim, Mohsen Dadras

Abstract:

Air pollution is a growing problem arising from domestic heating, high-density vehicle traffic, electricity production, and expanding commercial and industrial activities, all increasing in parallel with urban population. Monitoring and forecasting of air quality parameters are important due to their health impact. One widely available metric of aerosol abundance is the aerosol optical depth (AOD). The AOD is the integrated light extinction coefficient over a vertical atmospheric column of unit cross section, which represents the extent to which the aerosols in that vertical profile prevent the transmission of light by absorption or scattering. Seasonal AOD values at 550 nm derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) sensor onboard NASA's Terra satellite for the 10-year period 2000-2010 were used to test 7 different spatial interpolation methods in the present study. The accuracy of the estimations was assessed through visual analysis as well as independent validation based on basic statistics, such as the root mean square error (RMSE) and the correlation coefficient. Based on the RMSE and R values of predictions made using measured values from 2000 to 2010, Radial Basis Functions (RBFs) yielded the best results for spring, summer, and winter, and ordinary kriging yielded the best results for fall.
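The validation statistics used to rank the interpolation methods can be sketched as follows; the AOD values below are invented for illustration:

```python
import math

# Sketch of the validation statistics (RMSE and correlation coefficient)
# used to compare interpolators against held-out observations.
measured  = [0.31, 0.42, 0.28, 0.55, 0.47, 0.39]   # invented AOD values
predicted = [0.33, 0.40, 0.30, 0.52, 0.49, 0.37]   # invented predictions

def rmse(obs, est):
    """Root mean square error between observed and estimated values."""
    return math.sqrt(sum((o - e) ** 2 for o, e in zip(obs, est)) / len(obs))

def pearson_r(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mx) ** 2 for x in xs)
                    * sum((y - my) ** 2 for y in ys))
    return num / den

print(round(rmse(measured, predicted), 4),
      round(pearson_r(measured, predicted), 3))
```

A lower RMSE together with a higher R indicates the better interpolator for a given season, which is how RBFs and ordinary kriging were ranked in the study.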

Keywords: aerosol optical depth, MODIS, spatial interpolation techniques, Radial Basis Functions

Procedia PDF Downloads 406
2166 Development of a Roadmap for Assessing the Sustainability of Buildings in Saudi Arabia Using Building Information Modeling

Authors: Ibrahim A. Al-Sulaihi, Khalid S. Al-Gahtani, Abdullah M. Al-Sugair, Aref A. Abadel

Abstract:

Achieving environmental sustainability is one of the important issues in many countries' visions. Green/sustainable building is a widely used term for describing environmentally friendly construction. Applying sustainable practices is of significant importance in various fields, including the construction field, which consumes an enormous amount of resources and produces a considerable amount of waste. The need for sustainability is heightened in regions that suffer from limited natural resources and extreme weather conditions, such as Saudi Arabia. Since building designs are getting sophisticated, the need for tools that support decision-making on sustainability issues is increasing, especially in the design and preconstruction stages. In this context, Building Information Modeling (BIM) can aid in performing complex building performance analyses to ensure an optimized sustainable building design. Accordingly, this paper introduces a roadmap towards developing a systematic approach for assessing the sustainability of buildings using BIM. The approach includes a set of main processes: identifying the sustainability parameters that can be used for sustainability assessment in Saudi Arabia, developing a sustainability assessment method that fits the special circumstances of the Kingdom, identifying the sustainability requirements and the BIM functions that can be used for satisfying these requirements, and integrating these requirements with the identified functions. As a result, a sustainability-BIM approach can be developed that helps designers assess sustainability and explore different design alternatives at an early stage of the construction project.

Keywords: green buildings, sustainability, BIM, rating systems, environment, Saudi Arabia

Procedia PDF Downloads 377
2165 An Attempt at the Multi-Criterion Classification of Small Towns

Authors: Jerzy Banski

Abstract:

The basic aim of this study is to discuss and assess different classifications of and research approaches to small towns that take their social and economic functions into account, as well as their relations with surrounding areas. The subject literature typically includes three types of approaches to the classification of small towns: 1) the structural, 2) the location-related, and 3) the mixed. The structural approach allows for the grouping of towns from the point of view of the social, cultural, and economic functions they discharge. The location-related approach draws on the idea of there being a continuum between the center and the periphery. A mixed classification, making simultaneous use of the different research approaches, brings the most information to bear on the categories of urban locality. Bearing these approaches in mind, it is possible to propose a synthetic method for classifying small towns that takes account of economic structure, location, and the relationship between the towns and their surroundings. In the case of economic structure, the small centers may be divided into two basic groups: those featuring a multi-branch structure and those that are economically specialized. A second element of the classification reflects the locations of the urban centers. Two basic types can be identified: the small town within the range of impact of a large agglomeration, and the town outside such areas, which is to say located peripherally. The third component of the classification arises out of small towns' relations with their surroundings. In consequence, it is possible to indicate eight types of small town, from local centers enjoying good accessibility and a multi-branch economic structure to peripheral supra-local centers characterized by a specialized economic structure.
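The three two-way criteria combine into the eight types mentioned above; a short enumeration, with type labels paraphrased from the classification criteria:

```python
from itertools import product

# The 2 x 2 x 2 small-town typology enumerated directly; labels are
# paraphrased from the three classification criteria in the abstract.
structure = ["multi-branch", "specialized"]
location  = ["near agglomeration", "peripheral"]
relations = ["local center", "supra-local center"]

types = [f"{r}, {l}, {s} economy"
         for s, l, r in product(structure, location, relations)]
for t in types:
    print(t)
```

The Cartesian product of the three binary criteria yields exactly the eight types the classification indicates.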

Keywords: small towns, classification, functional structure, localization

Procedia PDF Downloads 179
2164 Numerical Studies on the Performance of the Finned-Tube Heat Exchanger

Authors: S. P. Praveen Kumar, Bong-Su Sin, Kwon-Hee Lee

Abstract:

Finned-tube heat exchangers are predominantly used in space conditioning systems, as well as in other applications requiring heat exchange between two fluids. The design of finned-tube heat exchangers requires the selection of over a dozen design parameters, such as tube pitch, tube diameter, and tube thickness. Finned-tube heat exchangers are common devices; however, their performance characteristics are complicated. In this paper, numerical studies have been carried out to analyze the performance of a finned-tube heat exchanger (with the fins omitted for experimental purposes) by predicting the characteristics of the temperature difference and the pressure drop. In this study, a design considering five design variables, maximizing the temperature difference and minimizing the pressure drop, was suggested by applying design of experiments (DOE). In this process, an L18 orthogonal array was adopted. Parametric analytical studies were carried out using Analysis of Variance (ANOVA) to determine the relative importance of each variable with respect to the temperature difference and the pressure drop. Following the results, the final design was suggested by predicting the optimum design and confirming the optimized condition.
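The way ANOVA apportions variation to a design variable can be sketched for a single three-level factor; the response values below are invented, not the study's measurements:

```python
# Sketch of how ANOVA attributes variation to a factor in an orthogonal-
# array experiment: the factor sum of squares from per-level means.
# Responses (e.g., temperature difference) are invented for illustration.
levels = [0, 0, 1, 1, 2, 2]              # factor level in each of 6 runs
response = [4.1, 4.3, 5.0, 5.2, 6.1, 5.9]

grand = sum(response) / len(response)

def factor_ss(levels, response):
    """Between-level sum of squares for one factor."""
    ss = 0.0
    for lv in set(levels):
        ys = [y for l, y in zip(levels, response) if l == lv]
        ss += len(ys) * (sum(ys) / len(ys) - grand) ** 2
    return ss

ss_total = sum((y - grand) ** 2 for y in response)
ss_factor = factor_ss(levels, response)
print(round(ss_factor, 3), round(ss_total, 3))
```

The factor's share of the total sum of squares (here about 3.24 of 3.30) is what ranks the design variables by relative importance in an ANOVA table.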

Keywords: heat exchanger, fluid analysis, heat transfer, design of experiment, analysis of variance

Procedia PDF Downloads 445
2163 Parameter Identification Analysis in the Design of Rock Fill Dams

Authors: G. Shahzadi, A. Soulaimani

Abstract:

This research work aims to identify the physical parameters of the constitutive soil model in the design of a rockfill dam by inverse analysis. The best parameters of the constitutive soil model are those that minimize the objective function, defined as the difference between the measured and numerical results. The finite element code Plaxis was utilized for the numerical simulation. Polynomial and neural-network-based response surfaces were generated to analyze the relationship between soil parameters and displacements. The performance of the surrogate models was analyzed and compared by evaluating the root mean square error. A comparative study was done based on objective functions and optimization techniques. The objective functions are categorized by considering measured data with and without instrument uncertainty and are defined by the least squares method, which estimates the norm between the predicted displacements and the measured values. Hydro-Québec provided data sets of the measured values for the Romaine-2 dam. Stochastic optimization, an approach that can overcome local minima and solve non-convex and non-differentiable problems with ease, was used to obtain an optimum value. Genetic Algorithm (GA), Particle Swarm Optimization (PSO), and Differential Evolution (DE) were compared for the minimization problem. All these techniques take time to converge to an optimum value; however, PSO provided the best convergence and the best soil parameters. Overall, parameter identification analysis can be effectively used for the rockfill dam application and has the potential to become a valuable tool for geotechnical engineers in assessing dam performance and dam safety.
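The inverse-analysis loop can be sketched with a one-parameter stand-in for the numerical model and a basic PSO; the linear "model", the measured values, and the PSO constants below are all illustrative, not the Plaxis soil model or the study's data:

```python
import random

# Minimal PSO sketch minimizing a least-squares objective between
# "measured" and model-predicted displacements; the one-parameter linear
# model stands in for the finite element simulation.
random.seed(42)
measured = [2.0, 4.0, 6.0]   # hypothetical displacement measurements
loads = [1.0, 2.0, 3.0]

def predict(k, load):
    return k * load           # stand-in for the numerical model

def objective(k):
    """Least-squares misfit between measured and predicted displacements."""
    return sum((m - predict(k, q)) ** 2 for m, q in zip(measured, loads))

# Standard PSO over the single parameter k (Clerc-type constants)
n, w, c1, c2 = 20, 0.729, 1.49445, 1.49445
pos = [random.uniform(0.0, 10.0) for _ in range(n)]
vel = [0.0] * n
pbest = pos[:]
gbest = min(pos, key=objective)
for _ in range(100):
    for i in range(n):
        r1, r2 = random.random(), random.random()
        vel[i] = (w * vel[i] + c1 * r1 * (pbest[i] - pos[i])
                  + c2 * r2 * (gbest - pos[i]))
        pos[i] += vel[i]
        if objective(pos[i]) < objective(pbest[i]):
            pbest[i] = pos[i]
    gbest = min(pbest, key=objective)
print(round(gbest, 3))   # should approach k = 2, the misfit minimizer
```

The swarm converges to the parameter that best reproduces the measurements, which is the role PSO plays against the response surfaces in the study.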

Keywords: rockfill dam, parameter identification, stochastic analysis, regression, Plaxis

Procedia PDF Downloads 145
2162 Approximation by Generalized Lupaş-Durrmeyer Operators with Two Parameters α and β

Authors: Preeti Sharma

Abstract:

This paper deals with a Stancu-type generalization of the Lupaş-Durrmeyer operators. We establish some direct results in the polynomial weighted space of continuous functions defined on the interval [0, 1]. A Voronovskaja-type theorem is also studied.

Keywords: Lupaş-Durrmeyer operators, Pólya distribution, weighted approximation, rate of convergence, modulus of continuity

Procedia PDF Downloads 343
2161 Household Wealth and Portfolio Choice When Tail Events Are Salient

Authors: Carlson Murray, Ali Lazrak

Abstract:

Robust experimental evidence of systematic violations of expected utility (EU) establishes that individuals facing risk overweight utility from low-probability gains and losses when making choices. These findings motivated the development of models of preferences with probability weighting functions, such as rank-dependent utility (RDU). We solve for the optimal investing strategy of an RDU investor in a dynamic binomial setting, from which we derive implications for investing behavior. We show that, relative to EU investors with constant relative risk aversion, commonly measured probability weighting functions produce optimal RDU terminal wealth with significant downside protection and upside exposure. We additionally find that, in contrast to EU investors, RDU investors optimally choose a portfolio that contains fair bets providing payoffs that can be interpreted as lottery outcomes or exposure to idiosyncratic returns. In a calibrated version of the model, we calculate that RDU investors would be willing to pay 5% of their initial wealth for the freedom to trade away from an optimal EU wealth allocation. The dynamic trading strategy that supports the optimal wealth allocation implies portfolio weights that are independent of initial wealth but requires a higher risky share after good stock return histories. Optimal trading also implies the possibility of non-participation when historical returns are poor. Our model fills a gap in the literature by providing new quantitative and qualitative predictions that can be tested experimentally or with data on household wealth and portfolio choice.
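A common one-parameter inverse-S weighting function of the kind RDU models use is the Tversky-Kahneman form; the calibration γ = 0.65 below is a frequently cited value, not necessarily the paper's exact choice:

```python
# Sketch of an inverse-S probability weighting function of the kind used
# in rank-dependent utility; the Tversky-Kahneman one-parameter form with
# gamma = 0.65 is a common calibration (an assumption, not the paper's).
def weight(p, gamma=0.65):
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

# Low probabilities are overweighted, high probabilities underweighted
for p in (0.01, 0.10, 0.50, 0.90, 0.99):
    print(p, round(weight(p), 3))
```

The overweighting of small tail probabilities is what generates the demand for downside protection and lottery-like upside exposure described in the abstract.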

Keywords: behavioral finance, probability weighting, portfolio choice

Procedia PDF Downloads 419
2160 Stochastic Optimization of a Vendor-Managed Inventory Problem in a Two-Echelon Supply Chain

Authors: Bita Payami-Shabestari, Dariush Eslami

Abstract:

The purpose of this paper is to develop a multi-product economic production quantity model under a vendor-managed inventory policy and restrictions including limited warehouse space, budget, number of orders, average shortage time, and maximum permissible shortage. Since the costs cannot be predicted with certainty, it is assumed that the data behave under an uncertain environment. The problem is first formulated in the framework of a bi-objective multi-product economic production quantity model. The problem is then solved with three multi-objective decision-making (MODM) methods. Following this, the three methods are compared on the optimal values of the two objective functions and on central processing unit (CPU) time, using statistical analysis and multi-attribute decision-making (MADM). The results of the study demonstrate that the augmented-constraint method performs better than global criteria and goal programming in terms of the optimal values of the two objective functions and the CPU time. A sensitivity analysis is done to illustrate the effect of parameter variations on the optimal solution. The contribution of this research is the use of random cost data in developing a multi-product economic production quantity model under a vendor-managed inventory policy with several constraints.
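The augmented-constraint method builds on the basic ε-constraint idea: optimize one objective while bounding the other. A toy sketch with invented objective functions and a grid search standing in for the paper's EPQ model and solvers:

```python
# Sketch of the epsilon-constraint idea for a bi-objective problem:
# minimize cost f1 subject to a bound on shortage f2. The toy objectives
# and grid search are illustrative, not the paper's EPQ formulation.
def f1(q):   # e.g., total cost, convex in lot size q (invented)
    return (q - 6) ** 2 + 10

def f2(q):   # e.g., average shortage time, increasing in q (invented)
    return 0.5 * q

grid = [q / 10 for q in range(1, 101)]   # candidate lot sizes 0.1 .. 10.0

def eps_constraint(eps):
    """Minimize f1 over candidates satisfying f2(q) <= eps."""
    feasible = [q for q in grid if f2(q) <= eps]
    return min(feasible, key=f1)

# Tightening the bound on f2 trades off against f1, tracing the front
for eps in (4.0, 3.0, 2.0):
    q = eps_constraint(eps)
    print(eps, q, f1(q))
```

Sweeping the bound ε traces out the Pareto front; the augmented variant adds a small slack term to the objective to avoid weakly efficient solutions.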

Keywords: economic production quantity, random cost, supply chain management, vendor-managed inventory

Procedia PDF Downloads 127
2159 Pseudo Modal Operating Deflection Shape Based Estimation Technique of Mode Shape Using Time History Modal Assurance Criterion

Authors: Doyoung Kim, Hyo Seon Park

Abstract:

Studies of System Identification (SI) based on Structural Health Monitoring (SHM) have been actively conducted for structural safety. Recently, SI techniques have developed rapidly within the output-only SI paradigm for estimating modal parameters. Representative output-only SI methods, such as Frequency Domain Decomposition (FDD) and Stochastic Subspace Identification (SSI), rely on algorithms based on orthogonal decompositions such as the singular value decomposition (SVD). However, the SVD leads to a high level of computational complexity in estimating the modal parameters. This paper proposes a technique to estimate the mode shape at lower computational cost. The technique derives a pseudo-modal Operating Deflection Shape (ODS) through a bandpass filter and suggests a time-history Modal Assurance Criterion (MAC). Finally, the mode shape can be estimated from the pseudo-modal ODS and the time-history MAC. Analytical simulations of vibration measurement were performed, and the mode shapes and computation times of a representative SI method and the proposed method were compared.
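The MAC between two shape vectors can be sketched for real-valued data (the general definition uses conjugate transposes; the vectors below are invented). A time-history MAC, as proposed, would apply the same formula between a pseudo-modal ODS and measured response snapshots over time:

```python
# Sketch of the Modal Assurance Criterion (MAC) for real-valued
# mode-shape vectors; the shapes below are invented for illustration.
def mac(phi_i, phi_j):
    """MAC = |phi_i . phi_j|^2 / (|phi_i|^2 |phi_j|^2), in [0, 1]."""
    num = sum(a * b for a, b in zip(phi_i, phi_j)) ** 2
    return num / (sum(a * a for a in phi_i) * sum(b * b for b in phi_j))

mode1 = [0.0, 0.38, 0.71, 0.92, 1.00]     # illustrative bending shape
mode1_scaled = [2.0 * v for v in mode1]   # same shape, different scaling
mode2 = [0.0, 0.92, 1.00, 0.38, -0.71]   # a different shape

print(round(mac(mode1, mode1_scaled), 3), round(mac(mode1, mode2), 3))
```

A MAC near 1 indicates the same shape regardless of scaling, which is why it can match a bandpass-filtered ODS against candidate mode shapes.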

Keywords: modal assurance criterion, mode shape, operating deflection shape, system identification

Procedia PDF Downloads 407
2158 Chronic Cognitive Impacts of Mild Traumatic Brain Injury during Aging

Authors: Camille Charlebois-Plante, Marie-Ève Bourassa, Gaelle Dumel, Meriem Sabir, Louis De Beaumont

Abstract:

To the best of our knowledge, there has been little interest in the chronic effects of mild traumatic brain injury (mTBI) on cognition during normal aging. This is rather surprising considering its impact on daily and social functioning. In addition, sustaining a mTBI during late adulthood may compound the effects of normal biological aging in individuals who consider themselves normal and healthy. The objective of this study was to characterize the persistent neuropsychological repercussions of mTBI sustained during late adulthood, on average 12 months prior to testing. To this end, 35 mTBI patients and 42 controls between the ages of 50 and 69 completed an exhaustive neuropsychological assessment lasting three hours. All mTBI patients were asymptomatic, and all participants had a score ≥ 27 on the MoCA. The evaluation consisted of 20 standardized neuropsychological tests measuring memory, attention, executive and language functions, as well as information processing speed. Performance on tests of visual (Brief Visuospatial Memory Test Revised) and verbal memory (Rey Auditory Verbal Learning Test and WMS-IV Logical Memory subtest), lexical access (Boston Naming Test), and response inhibition (Stroop) was significantly lower in the mTBI group. These findings suggest that a mTBI sustained during late adulthood induces lasting effects on cognitive function. Episodic memory and executive functions seem to be particularly vulnerable to enduring mTBI effects.

Keywords: cognitive function, late adulthood, mild traumatic brain injury, neuropsychology

Procedia PDF Downloads 168
2157 Enhancement of Mass Transport and Separation of Species in an Electroosmotic Flow by Distinct Oscillatory Signals

Authors: Carlos Teodoro, Oscar Bautista

Abstract:

In this work, we theoretically analyze the mass transport in a time-periodic electroosmotic flow through a parallel flat plate microchannel under different periodic functions of the applied external electric field. The microchannel connects two reservoirs having different constant concentrations of an electro-neutral solute, and the zeta potential of the microchannel walls is assumed to be uniform. The governing equations that determine the mass transport in the microchannel are the Poisson-Boltzmann equation, the modified Navier-Stokes equations, where the Debye-Hückel approximation is considered (the zeta potential is less than 25 mV), and the species conservation equation. These equations are nondimensionalized, and four dimensionless parameters appear that control the mass transport phenomenon: an angular Reynolds number, the Schmidt and Péclet numbers, and an electrokinetic parameter representing the ratio of the half-height of the microchannel to the Debye length. To solve the mathematical model, the electric potential is first determined from the Poisson-Boltzmann equation, which allows the electric force to be determined for various periodic functions of the external electric field expressed as Fourier series. In particular, three different excitation wave forms of the external electric field are assumed: a) sawtooth, b) step, and c) a periodic irregular function. The periodic electric forces are substituted into the modified Navier-Stokes equations, and the hydrodynamic field is derived for each case of the electric force. From the obtained velocity fields, the species conservation equation is solved and the concentration fields are found. Numerical calculations were done considering several binary systems in which two dilute species are transported in the presence of a carrier.
It is observed that there are different angular frequencies of the imposed external electric signal where the total mass transport of each species is the same, independently of the molecular diffusion coefficient. These frequencies are called crossover frequencies and are obtained graphically at the intersection when the total mass transport is plotted against the imposed frequency. The crossover frequencies are different depending on the Schmidt number, the electrokinetic parameter, the angular Reynolds number, and on the type of signal of the external electric field. It is demonstrated that the mass transport through the microchannel is strongly dependent on the modulation frequency of the applied particular alternating electric field. Possible extensions of the analysis to more complicated pulsation profiles are also outlined.
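As a rough illustration of the first excitation wave form, a sawtooth forcing can be expressed as a truncated Fourier series; the unit amplitude and frequency here are assumptions for the sketch, not the paper's parameter values:

```python
import numpy as np

def sawtooth_series(t, omega, n_terms=50):
    """Truncated Fourier series of a unit-amplitude sawtooth wave.

    The sum (2/pi) * sum_{n>=1} (-1)**(n+1) * sin(n*omega*t) / n
    converges to omega*t/pi on the fundamental period.
    """
    s = np.zeros_like(np.asarray(t, dtype=float))
    for n in range(1, n_terms + 1):
        s = s + (-1) ** (n + 1) * np.sin(n * omega * t) / n
    return (2.0 / np.pi) * s

t = np.linspace(-3.0, 3.0, 601)
signal = sawtooth_series(t, omega=1.0)   # truncated-series approximation of the forcing
```

Each sine term of such a series drives one harmonic of the electric force, which is why the hydrodynamic response can be assembled mode by mode.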

Keywords: electroosmotic flow, mass transport, oscillatory flow, species separation

Procedia PDF Downloads 215
2156 The Roots of Amazonia’s Droughts and Floods: Complex Interactions of Pacific and Atlantic Sea-Surface Temperatures

Authors: Rosimeire Araújo Silva, Philip Martin Fearnside

Abstract:

Extreme droughts and floods in the Amazon have serious consequences for natural ecosystems and the human population in the region. The frequency of these events has increased in recent years, and projections of climate change predict greater frequency and intensity of these events. Understanding the links between these extreme events and different patterns of sea surface temperature in the Atlantic and Pacific Oceans is essential, both to improve the modeling of climate change and its consequences and to support adaptation efforts in the region. The relationship between sea temperatures and events in the Amazon is much more complex than is usually assumed in climatic models. Warming and cooling of different parts of the oceans, as well as the interaction between simultaneous temperature changes in different parts of each ocean and between the two oceans, have specific consequences for the Amazon, with effects on precipitation that vary in different parts of the region. Simplistic generalities, such as the association between El Niño events and droughts in the Amazon, do not capture this complexity. We investigated the variability of Sea Surface Temperature (SST) in the Tropical Pacific Ocean during the period 1950-2022, using Empirical Orthogonal Functions (EOF), spectral analysis, coherence, and wavelet phase analysis. Two main modes of variability were identified, which explain about 53.9% and 13.3%, respectively, of the total variance of the data. The spectral, coherence, and wavelet phase analyses showed that the first selected mode represents warming in the central part of the Pacific Ocean (the “Central El Niño”), while the second mode represents warming in the eastern part of the Pacific (the “Eastern El Niño”). Although the 1982-1983 and 1976-1977 El Niño events were both characterized by an increase in sea surface temperatures in the Equatorial Pacific, their impacts on rainfall in the Amazon were distinct.
In the rainy season, from December to March, the sub-basins of the Japurá, Jutaí, Jatapu, Tapajós, Trombetas, and Xingu rivers showed the greatest reductions in rainfall associated with the Central El Niño (1982-1983), while the sub-basins of the Javari, Purus, Negro, and Madeira rivers had the most pronounced reductions in the year of the Eastern El Niño (1976-1977). In the transition to the dry season, in April, the greatest reductions were associated with the Eastern El Niño year for the majority of the study region, with the exception of the sub-basins of the Madeira, Trombetas, and Xingu rivers, whose reductions were associated with the Central El Niño. In the dry season, from July to September, the sub-basins of the Japurá, Jutaí, Jatapu, Javari, Trombetas, and Madeira rivers showed the greatest reductions in rainfall associated with the Central El Niño, while the sub-basins of the Tapajós, Purus, Negro, and Xingu rivers had the most pronounced reductions in the Eastern El Niño year in this season. It is thus possible to conclude that the Central (Eastern) El Niño controlled the reductions in soil moisture in the dry (rainy) season for all sub-basins shown in this study. Extreme drought events associated with these meteorological phenomena can lead to a significant increase in the occurrence of forest fires. These fires have a devastating impact on Amazonian vegetation, resulting in the irreparable loss of biodiversity and the release of large amounts of carbon stored in the forest, contributing to the increase in the greenhouse effect and global climate change.
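The EOF decomposition underlying such an analysis is commonly computed via an SVD of the anomaly matrix. A minimal sketch follows; the random field and its dimensions are illustrative stand-ins, not the study's 1950-2022 SST data:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic anomaly field: 120 monthly time steps x 50 grid points
X = rng.standard_normal((120, 50))
X = X - X.mean(axis=0)              # remove the temporal mean at each point

# SVD of the anomaly matrix: rows of Vt are the spatial EOF patterns,
# and U * s gives the principal-component time series
U, s, Vt = np.linalg.svd(X, full_matrices=False)
explained = s**2 / np.sum(s**2)     # fraction of variance per mode
print(explained[:2])                # variance fractions of the two leading modes
```

The variance fractions computed this way are the analogue of the 53.9% and 13.3% reported for the two leading Pacific SST modes.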

Keywords: sea surface temperature, variability, climate, Amazon

Procedia PDF Downloads 63
2155 Multi-Response Optimization of CNC Milling Parameters Using Taguchi Based Grey Relational Analysis for AA6061 T6 Aluminium Alloy

Authors: Varsha Singh, Kishan Fuse

Abstract:

This paper presents a study of the grey-Taguchi method to optimize CNC milling parameters of AA6061 T6 aluminium alloy. The grey-Taguchi method combines Taguchi-based design of experiments (DOE) with grey relational analysis (GRA). Multi-response optimization of quality characteristics such as surface roughness, material removal rate, and cutting forces is performed using GRA. The milling parameters considered in the experiments are cutting speed, feed per tooth, and depth of cut, each at three levels. A grey relational grade is used to estimate overall performance across the quality characteristics. Taguchi's L9 orthogonal array is used for the design of experiments, and MINITAB 17 software is used for optimization. Analysis of variance (ANOVA) is used to identify the most influential parameter. The experimental results show that grey relational analysis is an effective method for optimizing multi-response characteristics. The optimum results are finally validated by performing a confirmation test.
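The grey relational grade that aggregates the responses can be sketched as follows, under the common convention of a distinguishing coefficient ζ = 0.5; the normalization directions and the example data are illustrative, not the paper's measurements:

```python
import numpy as np

def grey_relational_grade(data, larger_better, zeta=0.5):
    """Grey relational grade per experimental run.

    data: runs x responses; larger_better: one bool per response column
    (True for larger-the-better, e.g. material removal rate; False for
    smaller-the-better, e.g. surface roughness or cutting force).
    """
    data = np.asarray(data, float)
    norm = np.empty_like(data)
    for j, lb in enumerate(larger_better):
        col = data[:, j]
        if lb:
            norm[:, j] = (col - col.min()) / (col.max() - col.min())
        else:
            norm[:, j] = (col.max() - col) / (col.max() - col.min())
    delta = 1.0 - norm                       # deviation from the ideal
    grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
    return grc.mean(axis=1)                  # grade = mean of the coefficients

runs = [[1.0, 10.0],
        [2.0,  5.0],
        [3.0,  1.0]]
grades = grey_relational_grade(runs, larger_better=[True, False])
print(grades)   # the run closest to the ideal gets the highest grade
```

Ranking the L9 runs by this grade converts the multi-response problem into a single-response Taguchi analysis.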

Keywords: ANOVA, CNC milling, grey relational analysis, multi-response optimization

Procedia PDF Downloads 306
2154 Identification Algorithm of Critical Interface, Modelling Perils on Critical Infrastructure Subjects

Authors: Jiří J. Urbánek, Hana Malachová, Josef Krahulec, Jitka Johanidisová

Abstract:

The paper deals with the investigation and modelling of crisis situations within critical infrastructure organizations. Every crisis situation originates in the occurrence of an emergency event, especially in organizations of the energy critical infrastructure. Emergency events can be expected events, for which crisis scenarios can be pre-prepared by the pertinent organizational crisis management authorities, or unexpected events (the Black Swan effect), which have no pre-prepared scenario and require operational coping with the crisis situation. The forms, characteristics, behaviour, and utilization of crisis scenarios vary in quality, depending on the real prevention and training processes of the critical infrastructure organization. The aim is always better organizational security and continuity. The objective of this paper is to find and investigate critical/crisis zones and functions in models of critical situations of critical infrastructure organizations. The DYVELOP (Dynamic Vector Logistics of Processes) method can identify problematic critical zones and functions, displaying critical interfaces among the actors of crisis situations on DYVELOP maps called Blazons. To realize this ability, it is first necessary to derive and create an identification algorithm for critical interfaces. The locations of critical interfaces are the flags of a crisis situation in a real critical infrastructure organization. Finally, the model of the critical interface is demonstrated on a real organization of the Czech energy critical infrastructure in a blackout peril environment. The Blazons require a live PowerPoint presentation for better comprehension of this paper's mission.

Keywords: algorithm, crisis, DYVELOP, infrastructure

Procedia PDF Downloads 408
2153 Research on Ultrafine Particles Classification Using Hydrocyclone with Annular Rinse Water

Authors: Tao Youjun, Zhao Younan

Abstract:

The separation efficiency of fine coal can be improved by pre-desliming: it was significantly enhanced when the fine coal was processed in a Falcon concentrator after removal of the −45 μm coal slime. Ultrafine classification tests using a Krebs classification cyclone with annular rinse water showed that increasing the feeding pressure can effectively prevent heavy particles from passing into the overflow and light particles from slipping into the underflow. Increasing the rinse water pressure reduced the content of fine-grained particles while increasing the classification size. Increasing the feeding concentration had a negative effect on classification efficiency and increased the classification size, due to the enhanced hindered settling caused by the high underflow concentration. Optimization experiments based on an orthogonal design in Design-Expert software, with classification efficiency as the response indicator, showed that the optimal classification efficiency reached 91.32% at a feeding pressure of 0.03 MPa, a rinse water pressure of 0.02 MPa, and a feeding concentration of 12.5%. The corresponding classification size was 49.99 μm, in good agreement with the predicted value.

Keywords: hydrocyclone, ultrafine classification, slime, classification efficiency, classification size

Procedia PDF Downloads 165
2152 Spatial Patterns and Temporal Evolution of Octopus Abundance in the Mauritanian Zone

Authors: Dedah Ahmed Babou, Nicolas Bez

Abstract:

The Min-Max Autocorrelation Factor (MAF) approach makes it possible to express spatiotemporal observations in a space formed by spatially independent factors. These factors are ordered in decreasing order of spatial autocorrelation, and the original observations are expressed in the space formed by these factors according to temporal coordinates. Each vector of temporal coefficients expresses the temporal evolution of the weight of the corresponding factor. Applying this approach has enabled us to achieve the following results: (i) define a spatially orthogonal space in which the projections of the raw data are determined; (ii) define a limit threshold for the factors with the strongest structures in order to analyze the weight and temporal evolution of these different structures; (iii) study the correlation between the temporal evolution of the persistent spatial structures and that of the observed average abundance; (iv) propose prototypes of campaigns reflecting high vs. low abundance; (v) propose a classification of campaigns that highlights seasonal and/or temporal similarities. These results were obtained by analyzing octopus yield during the scientific campaigns of the oceanographic vessel Al Awam over the period 1989-2017 in the Mauritanian exclusive economic zone.
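A textbook construction of MAF is a two-step PCA: whiten the data, then diagonalize the covariance of its lag-differences, so that factors with the smallest increment variance (the smoothest, most autocorrelated ones) come first. This is a generic sketch on synthetic data, not necessarily the authors' implementation:

```python
import numpy as np

def maf(X, lag=1):
    """Min/Max Autocorrelation Factors of a (time x variables) array."""
    X = X - X.mean(axis=0)
    w, V = np.linalg.eigh(np.cov(X, rowvar=False))
    Y = (X @ V) / np.sqrt(w)                    # PCA-whitened scores
    D = Y[lag:] - Y[:-lag]                      # lag-h increments
    w2, V2 = np.linalg.eigh(np.cov(D, rowvar=False))
    # eigh returns ascending eigenvalues: the first factor has the
    # smallest increment variance, i.e. the strongest autocorrelation
    return Y @ V2

# Demo: one smooth signal hidden among noise channels
rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 300)
X = np.column_stack([np.sin(t) + 0.1 * rng.standard_normal(300),
                     rng.standard_normal(300),
                     rng.standard_normal(300)])
F = maf(X)   # F[:, 0] should recover the smooth (sinusoidal) component
```

Thresholding the factors by their autocorrelation, as in step (ii) above, then separates persistent spatial structures from noise.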

Keywords: spatiotemporal, autocorrelation, kriging, variogram, Octopus vulgaris

Procedia PDF Downloads 145
2151 Forecasting Market Share of Electric Vehicles in Taiwan Using Conjoint Models and Monte Carlo Simulation

Authors: Li-hsing Shih, Wei-Jen Hsu

Abstract:

Recently, sales of electric vehicles (EVs) have increased dramatically due to maturing technology and decreasing cost. Governments of many countries have made regulations and policies in favor of EVs as part of their long-term commitment to net-zero carbon emissions. However, due to uncertain factors such as the future price of EVs, forecasting the future market share of EVs is a challenging subject for both the auto industry and local governments. This study forecasts the market share of EVs using conjoint models and Monte Carlo simulation. The research is conducted in three phases. (1) A conjoint model is established to represent the customer preference structure for purchasing vehicles, with five product attributes selected for both EVs and internal combustion engine vehicles (ICEVs). A questionnaire survey is conducted to collect responses from Taiwanese consumers and estimate the part-worth utility functions of all respondents. The resulting part-worth utility functions can be used to estimate market share, assuming each respondent purchases the product with the highest total utility. For example, given the attribute values of an ICEV and a competing EV, the two total utilities for a respondent are calculated, thereby determining his/her choice. Once the choices of all respondents are known, an estimate of market share can be obtained. (2) Among the attributes, future price is the key attribute that dominates consumers' choice. This study adopts a learning-curve assumption to predict the future price of EVs. Based on the learning curve method and past EV price data, a regression model is established and the probability distribution function of the price of EVs in 2030 is obtained. (3) Since the future price is a random variable from the results of phase 2, a Monte Carlo simulation is then conducted to simulate the choices of all respondents using their part-worth utility functions.
For instance, using one thousand generated future prices of an EV together with other forecasted attribute values of the EV and an ICEV, one thousand market shares can be obtained with a Monte Carlo simulation. The resulting probability distribution of the market share of EVs provides more information than a fixed-number forecast, reflecting the uncertain nature of the future development of EVs. The research results can help the auto industry and local governments make more appropriate decisions and future action plans.
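The Monte Carlo step can be sketched as follows; all utilities, prices, and distributions here are hypothetical stand-ins for the survey-estimated part-worths and the learning-curve price model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical part-worth utilities for 200 respondents: a price
# coefficient and an EV-vs-ICEV baseline preference (illustrative only)
n_resp = 200
price_coef = -rng.uniform(0.5, 2.0, n_resp)    # disutility per price unit
ev_intercept = rng.normal(0.5, 1.0, n_resp)    # baseline EV preference

icev_price = 1.0                               # normalized ICEV price
ev_price_draws = rng.normal(1.1, 0.15, 1000)   # simulated 2030 EV prices

shares = np.empty(len(ev_price_draws))
for i, ev_price in enumerate(ev_price_draws):
    u_ev = ev_intercept + price_coef * ev_price
    u_icev = price_coef * icev_price
    shares[i] = np.mean(u_ev > u_icev)         # highest-utility choice rule

print(shares.mean(), np.percentile(shares, [5, 95]))
```

Each price draw yields one simulated market share, so the one thousand draws produce the share distribution rather than a single point forecast.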

Keywords: conjoint model, electric vehicle, learning curve, Monte Carlo simulation

Procedia PDF Downloads 67
2150 Nonparametric Truncated Spline Regression Model on the Data of Human Development Index in Indonesia

Authors: Kornelius Ronald Demu, Dewi Retno Sari Saputro, Purnami Widyaningsih

Abstract:

The Human Development Index (HDI) is a standard measurement of a country's human development. Several factors may influence it, such as life expectancy, gross domestic product (GDP) based on the province's annual expenditure, the number of poor people, and the percentage of illiterate people. The scatter plots between HDI and the influencing factors show that the data do not follow a specific pattern or form. Therefore, the HDI data in Indonesia can be modelled with a nonparametric regression model. The estimated regression curve in a nonparametric regression model is flexible because it follows the shape of the data pattern. One nonparametric regression method is the truncated spline. Truncated spline regression is a nonparametric approach based on a modification of segmented polynomial functions. The estimator of a truncated spline regression model is affected by the selection of the optimal knot points, the join points of the truncated spline functions. The optimal knot points were determined by the minimum value of generalized cross validation (GCV). In this article, a truncated spline nonparametric regression model was applied to the HDI data in Indonesia. The best truncated spline regression model for the HDI data in Indonesia was obtained with the combination of optimal knot points 5-5-5-4. Life expectancy and the percentage of illiterate people were the factors significantly related to the HDI in Indonesia. The coefficient of determination is 94.54%, which means the regression model fits the HDI data in Indonesia well.
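A minimal sketch of a degree-1 truncated spline fit with knot selection by GCV follows; the demo data and single-knot search are illustrative, whereas the study uses four predictors and multi-knot combinations:

```python
import numpy as np

def truncated_basis(x, knots, degree=1):
    """Design matrix of a truncated power spline: 1, x, (x - k)+^degree."""
    cols = [np.ones_like(x), x]
    cols += [np.clip(x - k, 0.0, None) ** degree for k in knots]
    return np.column_stack(cols)

def gcv(x, y, knots):
    """Generalized cross validation score for a given set of knots."""
    X = truncated_basis(x, knots)
    H = X @ np.linalg.pinv(X)                  # hat matrix
    resid = y - H @ y
    n = len(y)
    return (resid @ resid / n) / (1.0 - np.trace(H) / n) ** 2

# Demo: a piecewise-linear signal whose true break is at x = 5
rng = np.random.default_rng(3)
x = np.linspace(0.0, 10.0, 100)
y = np.where(x < 5.0, x, 10.0 - x) + 0.1 * rng.standard_normal(100)
print(gcv(x, y, [5.0]), gcv(x, y, [2.0]))      # the true knot scores lower
```

Evaluating this score over candidate knot combinations and taking the minimum is how an optimal combination such as 5-5-5-4 is selected.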

Keywords: generalized cross validation (GCV), Human Development Index (HDI), knots point, nonparametric regression, truncated spline

Procedia PDF Downloads 336
2149 A Study of Algebraic Structure Involving Banach Space through Q-Analogue

Authors: Abdul Hakim Khan

Abstract:

The aim of the present paper is to study the Banach space and combinatorial algebraic structure of R. It is further aimed to study the algebraic structure of the set of all q-extensions of classical formulas and functions for 0 < q < 1.

Keywords: integral functions, q-extensions, q numbers of metric space, algebraic structure of r and banach space

Procedia PDF Downloads 577
2148 Integration of STEM Education in Quebec, Canada – Challenges and Opportunities

Authors: B. El Fadil, R. Najar

Abstract:

STEM education is promoted by many scholars and curricula around the world, but it is not yet well established in the province of Quebec, Canada. In addition, effective instructional STEM activities and design methods are required to ensure that students' and teachers' needs are being met. One potential method is the Engineering Design Process (EDP), a methodology that emphasizes the importance of creativity and collaboration in problem-solving strategies. This article reports on a case study that focused on using the EDP to develop instructional materials by means of making a technological artifact to teach mathematical variables and functions at the secondary level. The five iterative stages of the EDP (design, make, test, infer, and iterate) were integrated into the development of the course materials. Data were collected from different sources: pre- and post-questionnaires, as well as a working document dealing with pupils' understanding based on designing, making, testing, and simulating. Twenty-four grade-seven students (13 years old) in Northern Quebec participated in the study. The findings of this study indicate that STEM activities have a positive impact not only on students' engagement in classroom activities but also on learning new mathematical concepts. Furthermore, STEM-focused activities have a significant effect on the development of problem-solving skills in an interdisciplinary approach. Based on the study's results, we can conclude, inter alia, that teachers should integrate STEM activities into their teaching practices to increase learning outcomes and attach more importance to STEM-focused activities to develop students' reflective thinking and hands-on skills.

Keywords: engineering design process, motivation, STEM, integration, variables, functions

Procedia PDF Downloads 87
2147 Optimization of Surface Roughness in Turning Process Utilizing Live Tooling via Taguchi Methodology

Authors: Weinian Wang, Joseph C. Chen

Abstract:

The objective of this research is to optimize the process of cutting cylindrical workpieces utilizing live tooling on a HAAS ST-20 lathe. Surface roughness (Ra) was investigated as the quality characteristic of the machining process. Aluminum alloy was used for the experiments due to its wide range of uses in engineering structures and components where light weight or corrosion resistance is required. In this study, the Taguchi methodology is utilized to determine the effect of each parameter on surface roughness (Ra). A total of 18 experiments were designed according to Taguchi's L9 orthogonal array (OA) with four control factors at three levels each, and signal-to-noise (S/N) ratios were computed with the smaller-the-better equation to minimize the response. The optimal parameters identified for the surface roughness of the turning operation utilizing live tooling were a feed rate of 3 inches/min (A3), a spindle speed of 1300 rpm (B3), a 2-flute titanium nitride coated 3/8” endmill (C1), and a depth of cut of 0.025 inches (D2). The mean surface roughness of the confirmation runs was 8.22 micro-inches. The final results demonstrate that the Taguchi methodology is an effective means of improving surface roughness in the turning process.
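The smaller-the-better S/N ratio used for a response such as roughness is standard: S/N = −10·log10(mean of y²). A minimal sketch, with illustrative readings rather than the study's data:

```python
import numpy as np

def sn_smaller_better(y):
    """Taguchi smaller-the-better signal-to-noise ratio, in dB."""
    y = np.asarray(y, float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Example: two replicate surface-roughness readings per run; a larger
# S/N ratio indicates a smoother (better) surface
print(sn_smaller_better([8.0, 9.0]))
print(sn_smaller_better([12.0, 14.0]))
```

Computing this ratio for each of the runs and picking the level of each factor with the highest mean S/N is what yields a parameter set like A3-B3-C1-D2.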

Keywords: CNC milling operation, CNC turning operation, surface roughness, Taguchi parameter design

Procedia PDF Downloads 175