Search results for: convex functions
2244 Secure Cryptographic Operations on SIM Card for Mobile Financial Services
Authors: Kerem Ok, Serafettin Senturk, Serdar Aktas, Cem Cevikbas
Abstract:
Mobile technology is very popular nowadays and provides a digital world where users can experience many value-added services. Service providers are eager to offer diverse value-added services to users, such as digital identity and mobile financial services. In this context, the security of data storage on smartphones and the security of communication between the smartphone and the service provider are critical for the success of these services. The SIM card is one acceptable alternative for providing the required security functions. Since SIM cards include a Secure Element, they are able to store sensitive data, create cryptographically secure keys, and encrypt and decrypt data. In this paper, we design and implement a SIM card and smartphone framework that uses the SIM card for secure key generation, key storage, data encryption, data decryption, and digital signing for mobile financial services. Our framework shows that the SIM card can be used as a controlled Secure Element to provide the required security functions for popular e-services such as mobile financial services.
Keywords: SIM card, mobile financial services, cryptography, secure data storage
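
In the paper's framework these primitives execute inside the SIM's Secure Element; purely as an illustration of the operations named above (key generation, signing, encryption, decryption), here is a minimal host-side sketch using the pyca/cryptography package as a stand-in. It is not the authors' SIM applet, and all data values are hypothetical.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Key generation (on a real SIM this happens inside the Secure Element
# and the private key never leaves the card).
signing_key = ec.generate_private_key(ec.SECP256R1())

# Digital signing of a hypothetical payment instruction.
message = b"transfer 100 EUR to IBAN DE00 0000 0000"
signature = signing_key.sign(message, ec.ECDSA(hashes.SHA256()))

# Symmetric encryption of sensitive data before storage.
aes_key = AESGCM.generate_key(bit_length=128)
aesgcm = AESGCM(aes_key)
nonce = os.urandom(12)
ciphertext = aesgcm.encrypt(nonce, b"sensitive account data", None)
plaintext = aesgcm.decrypt(nonce, ciphertext, None)

# Verification on the service-provider side with the public key;
# raises InvalidSignature if the message was tampered with.
signing_key.public_key().verify(signature, message, ec.ECDSA(hashes.SHA256()))
```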
Procedia PDF Downloads 312

2243 Artificial Intelligent Methodology for Liquid Propellant Engine Design Optimization
Authors: Hassan Naseh, Javad Roozgard
Abstract:
This paper presents a methodology based on Artificial Intelligence (AI) applied to Liquid Propellant Engine (LPE) optimization. The AI methodology uses the Adaptive Neuro-Fuzzy Inference System (ANFIS). In this methodology, the objective function to be optimized is maximum performance (specific impulse). The independent design variables in the ANFIS model are combustion chamber pressure, combustion chamber temperature, and oxidizer-to-fuel ratio; the output is specific impulse, which can be combined with other objective functions in LPE design optimization. To this end, the LPE parameters have been modeled in the ANFIS methodology by generating the fuzzy inference system structure with grid partitioning, subtractive clustering, and Fuzzy C-Means (FCM) clustering, for both inference types (Mamdani and Sugeno) and various types of membership functions. A final comparison of the optimization results shows the accuracy and processing run time of the Gaussian ANFIS methodology against all the other methods.
Keywords: ANFIS methodology, artificial intelligence, liquid propellant engine, optimization
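
As an illustration of one of the clustering schemes named above, the following is a minimal plain-numpy sketch of Fuzzy C-Means; the design points and cluster count are hypothetical, not taken from the paper.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=2, m=2.0, n_iter=100, seed=0):
    """Plain-numpy Fuzzy C-Means; returns cluster centers and memberships."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), n_clusters))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1 per sample
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # u_ij = dist_ij^(-2/(m-1)) / sum_k dist_ik^(-2/(m-1))
        U = 1.0 / (dist ** (2 / (m - 1)) *
                   np.sum(dist ** (-2 / (m - 1)), axis=1, keepdims=True))
    return centers, U

# Hypothetical LPE design points: [chamber pressure, temperature, O/F ratio].
X = np.array([[6.0, 3400, 2.1], [7.5, 3550, 2.4], [5.5, 3300, 1.9],
              [8.0, 3600, 2.6], [6.8, 3500, 2.2]])
X = (X - X.mean(axis=0)) / X.std(axis=0)         # features are on very different scales
centers, U = fuzzy_c_means(X, n_clusters=2)
```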
Procedia PDF Downloads 590

2242 Dynamic Reroute Modeling for Emergency Evacuation: Case Study of Brunswick City, Germany
Authors: Yun-Pang Flötteröd, Jakob Erdmann
Abstract:
Human behavior during evacuations is quite complex. One of the critical behaviors affecting the efficiency of an evacuation is route choice, so the respective simulation modeling needs to function properly. In this paper, Simulation of Urban Mobility's (SUMO) current dynamic route modeling during evacuation, i.e., the rerouting functions, is examined with a real case study, and the consistency between the simulation results and reality is checked as well. Four influencing factors are considered in order to analyze possible traffic impacts during the evacuation and to examine the rerouting functions in SUMO: (1) the time to receive information, (2) the probability of canceling a trip, (3) the probability of using navigation equipment, and (4) the rerouting and information-updating period. Furthermore, some behavioral characteristics of the case study are analyzed with the corresponding detector data and applied in the simulation. The experimental results show that the dynamic route modeling in SUMO can handle the proposed scenarios properly. Some issues and functional needs related to route choice are discussed, and further improvements are suggested.
Keywords: evacuation, microscopic traffic simulation, rerouting, SUMO
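
A minimal sketch of how factors (3) and (4) could be driven externally through SUMO's TraCI Python interface, assuming the default 1 s step length; the configuration file name, parameter values, and equipment-assignment rule are illustrative assumptions, not the authors' setup.

```python
import traci

REROUTE_PERIOD = 60          # factor (4): rerouting/information-updating period, s
P_HAS_NAVIGATION = 0.5       # factor (3): share of vehicles with navigation equipment

traci.start(["sumo", "-c", "evacuation.sumocfg"])   # placeholder config file
equipped = set()
step = 0
while traci.simulation.getMinExpectedNumber() > 0:
    traci.simulationStep()
    # Deterministic stand-in for randomly assigning navigation equipment.
    for veh in traci.simulation.getDepartedIDList():
        if hash(veh) % 100 < P_HAS_NAVIGATION * 100:
            equipped.add(veh)
    if step % REROUTE_PERIOD == 0:
        for veh in traci.vehicle.getIDList():
            if veh in equipped:
                traci.vehicle.rerouteTraveltime(veh)  # re-plan on current travel times
    step += 1
traci.close()
```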
Procedia PDF Downloads 194

2241 Implementation of CNV-CH Algorithm Using Map-Reduce Approach
Authors: Aishik Deb, Rituparna Sinha
Abstract:
We have developed an algorithm to detect abnormal segments (structural variations) in the genome across a number of samples. We have worked on simulated as well as real data from BAM files and have designed a segmentation algorithm in which abnormal segments are detected. This algorithm aims to improve the accuracy and performance of the existing CNV-CH algorithm. The next-generation sequencing (NGS) approach is very fast and can generate large sequences in a reasonable time, and this huge volume of sequence information gives rise to the need for Big Data and parallel approaches to segmentation. We have therefore designed a map-reduce version of the existing CNV-CH algorithm, with which a large amount of sequence data can be segmented and structural variations in the human genome can be detected. We have compared the efficiency of the traditional and map-reduce algorithms with respect to precision, sensitivity, and F-score. The advantages of our algorithm are that it is fast and more accurate. It can be applied to detect structural variations within a genome, which in turn can be used to detect various genetic disorders such as cancer. The defects may be caused by new mutations or changes to the DNA and generally result in abnormally high or low base coverage and quantification values.
Keywords: cancer detection, convex hull segmentation, map reduce, next generation sequencing
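
A minimal sketch of the map-reduce pattern described above, using Python's multiprocessing as the parallel backend: mappers flag coverage-deviant bins in parallel, and a reducer merges them into candidate segments. The coverage values and thresholds are hypothetical, and the convex-hull segmentation of CNV-CH itself is not reproduced here.

```python
from multiprocessing import Pool

# Hypothetical per-bin read coverage, split into chunks (stand-ins for BAM regions).
coverage = [30, 31, 29, 70, 72, 68, 30, 28, 5, 4, 29, 31]
CHUNK = 4
EXPECTED, TOL = 30, 10

def map_chunk(args):
    """Map: flag bins whose coverage deviates from the expected diploid level."""
    offset, bins = args
    return [(offset + i, c) for i, c in enumerate(bins) if abs(c - EXPECTED) > TOL]

def reduce_flags(flagged):
    """Reduce: merge adjacent flagged bins into candidate CNV segments."""
    segments, start, prev = [], None, None
    for idx, _ in sorted(flagged):
        if start is None:
            start = prev = idx
        elif idx == prev + 1:
            prev = idx
        else:
            segments.append((start, prev))
            start = prev = idx
    if start is not None:
        segments.append((start, prev))
    return segments

if __name__ == "__main__":
    chunks = [(i, coverage[i:i + CHUNK]) for i in range(0, len(coverage), CHUNK)]
    with Pool() as pool:
        flagged = [f for part in pool.map(map_chunk, chunks) for f in part]
    print(reduce_flags(flagged))   # [(3, 5), (8, 9)]: a gain and a loss segment
```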
Procedia PDF Downloads 136

2240 Pragmatic Discoursal Study of Hedging Constructions in English Language
Authors: Mohammed Hussein Ahmed, Bahar Mohammed Kareem
Abstract:
This study is concerned with a pragmatic, discoursal study of hedging constructions in the English language. A hedge is a mitigating expression used to lessen the impact of an utterance; hedges can be adverbs, adjectives, or verbs, and sometimes whole clauses. The study aims at finding out the extent to which speakers and participants in discourse use hedging constructions during their conversations. It also aims at finding out whether or not there are any significant differences in the frequency of the types and functions of hedging constructions employed by male and female speakers. It is hypothesized that hedging constructions are more frequent in English discourse than in other languages due to its formality, and that the frequency of the types and functions is influenced by the gender of the participants. To achieve the aims of the study, two types of procedures have been followed: theoretical and practical. The theoretical procedure consists of presenting a background of the topic of hedging, including its definitions, etymology, and theories. The practical procedure consists of selecting a sample of texts and analyzing them according to an adopted model. A number of conclusions will be drawn based on the findings of the study.
Keywords: hedging, pragmatics, politeness, theoretical
Procedia PDF Downloads 588

2239 Investigating a Deterrence Function for Work Trips for Perth Metropolitan Area
Authors: Ali Raouli, Amin Chegenizadeh, Hamid Nikraz
Abstract:
The Perth metropolitan area and its surrounding regions have been expanding rapidly in recent decades, and it is expected that this growth will continue in the years to come. With this rapid growth and the resulting increase in population, consideration should be given to strategic planning and modelling for the future expansion of Perth. The accurate estimation of projected traffic volumes has always been a major concern for transport modellers and planners. The development of a reliable strategic transport model depends significantly on the input data and on the calibrated parameters of the model that reflect the existing situation. Trip distribution is the second step in four-step modelling (FSM), and it is complex due to its behavioral nature. The gravity model is the most common method for trip distribution. The spatial separation between origin and destination (OD) zones is reflected in the gravity model by applying deterrence functions, which provide an opportunity to include people's behavior in choosing their destinations based on the distance, time, and cost of their journeys. Deterrence functions play an important role in the distribution of trips within a study area and simulate the trip distances; they should therefore be calibrated for any particular strategic transport model to correctly reflect the trip behavior within the modelling area. This paper reviews the most common deterrence functions and proposes a calibrated deterrence function for work trips within the Perth metropolitan area, based on the information obtained from the latest available household data and Perth and Region Travel Survey (PARTS) data. As part of this study, a four-step transport model using EMME software has been developed for the Perth metropolitan area to assist with the analysis and findings.
Keywords: deterrence function, four-step modelling, origin destination, transport model
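
For illustration, a minimal sketch of a doubly constrained gravity model with a combined deterrence function f(c) = c^beta * exp(gamma * c), balanced with the usual Furness iterations; the zone data and parameter values are hypothetical, not the calibrated Perth values.

```python
import numpy as np

def gravity_trips(O, D, cost, beta=-0.5, gamma=-0.05, n_iter=50):
    """Doubly constrained gravity model. beta and gamma would be calibrated
    against observed trip-length distributions; values here are illustrative."""
    f = cost ** beta * np.exp(gamma * cost)      # combined deterrence function
    A = np.ones(len(O))
    B = np.ones(len(D))
    for _ in range(n_iter):                      # Furness balancing factors
        A = 1.0 / (f @ (B * D))
        B = 1.0 / (f.T @ (A * O))
    return (A * O)[:, None] * (B * D)[None, :] * f

O = np.array([400.0, 600.0])                     # produced work trips per zone
D = np.array([500.0, 500.0])                     # attracted work trips per zone
cost = np.array([[5.0, 20.0], [15.0, 8.0]])      # generalized travel cost (min)
T = gravity_trips(O, D, cost)
print(T.round(1), T.sum())                       # row/column sums match O and D
```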
Procedia PDF Downloads 168

2238 Effects of Oral L-Carnitine on Liver Functions after Transarterial Chemoembolization in Hepatocellular Carcinoma Patients
Authors: Ali Kassem, Aly Taha, Abeer Hassan, Kazuhide Higuchi
Abstract:
Introduction: Transarterial chemoembolization (TACE) for hepatocellular carcinoma (HCC) is usually followed by hepatic dysfunction that limits its efficacy. L-carnitine has recently been studied as a hepatoprotective agent. Our aim is to evaluate the effects of L-carnitine against the deterioration of liver functions after TACE. Method: 53 patients with intermediate-stage HCC were assigned to two groups: an L-carnitine group (26 patients) who received L-carnitine 300 mg tablets twice daily from 2 weeks before to 12 weeks after TACE, and a control group (27 patients) without L-carnitine therapy. 28 of the studied patients received branched-chain amino acid granules. Results: There were significant differences between the L-carnitine and control groups in mean serum albumin change from baseline to 1 week and 4 weeks after TACE (p < 0.05). L-carnitine maintained the Child-Pugh score at 1 week after TACE and exhibited improvement at 4 weeks after TACE (p < 0.01 vs. 1 week after TACE). The control group showed significant Child-Pugh score deterioration from baseline to 1 week after TACE (p < 0.05) and 12 weeks after TACE (p < 0.05). There were significant differences between the L-carnitine and control groups in mean Child-Pugh score change from baseline to 4 weeks (p < 0.05) and 12 weeks after TACE (p < 0.05). L-carnitine displayed improvement in prothrombin time (PT) from baseline to 1 week, 4 weeks (p < 0.05), and 12 weeks after TACE, whereas PT in the control group remained below baseline at all follow-up intervals. Total bilirubin in the L-carnitine group decreased at 1 week post-TACE, while in the control group it significantly increased at 1 week (p = 0.01). ALT and C-reactive protein elevations were suppressed at 1 week after TACE in the L-carnitine group. The hepatoprotective effects of L-carnitine were enhanced by the concomitant use of branched-chain amino acids. Conclusion: L-carnitine and BCAA combination therapy offers a novel supportive strategy after TACE in HCC patients.
Keywords: hepatocellular carcinoma, L-carnitine, liver functions, trans-arterial embolization
Procedia PDF Downloads 155

2237 Modified Weibull Approach for Bridge Deterioration Modelling
Authors: Niroshan K. Walgama Wellalage, Tieling Zhang, Richard Dwight
Abstract:
State-based Markov deterioration models (SMDM) sometimes fail to find accurate transition probability matrix (TPM) values and hence lead to invalid future condition predictions or incorrect average deterioration rates, mainly due to drawbacks of existing nonlinear optimization-based algorithms and/or the subjective function types used for regression analysis. Furthermore, a set of separate functions of age for each condition state cannot be directly derived from a Markov model for a given bridge element group, although this is of interest to industrial partners. This paper presents a new approach for generating homogeneous SMDM model output, namely the Modified Weibull approach, which consists of a set of appropriate functions to describe the percentage condition prediction of bridge elements in each state. These functions are combined with a Bayesian approach and a Metropolis-Hastings algorithm (MHA) based Markov Chain Monte Carlo (MCMC) simulation technique for quantifying the uncertainty in the model parameter estimates. In this study, factors contributing to rail bridge deterioration were identified. The inspection data for 1,000 Australian railway bridges over 15 years were reviewed and filtered accordingly, based on real operational experience. A network-level deterioration model for a typical bridge element group was developed using the proposed Modified Weibull approach. The condition state predictions obtained from this method were validated using statistical hypothesis tests with a test data set. Results show that the proposed model is able not only to predict network-level conditions accurately but also to capture the model uncertainties within a given confidence interval.
Keywords: bridge deterioration modelling, Modified Weibull approach, MCMC, Metropolis-Hastings algorithm, Bayesian approach, Markov deterioration models
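
A minimal sketch of the Metropolis-Hastings ingredient of such an approach: random-walk MH sampling of Weibull shape and scale parameters from synthetic condition-duration data. The priors, proposal widths, and data are illustrative assumptions, not the paper's bridge data.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical years each inspected element stayed in a condition state.
data = rng.weibull(1.8, size=200) * 12.0

def log_posterior(shape, scale):
    """Weibull log-likelihood plus weakly informative log-uniform priors."""
    if shape <= 0 or scale <= 0:
        return -np.inf
    z = data / scale
    loglik = np.sum(np.log(shape / scale) + (shape - 1) * np.log(z) - z ** shape)
    return loglik - np.log(shape) - np.log(scale)

# Random-walk Metropolis-Hastings over (shape, scale).
theta = np.array([1.0, 10.0])
chain = []
for _ in range(20000):
    proposal = theta + rng.normal(0, [0.05, 0.5])
    if np.log(rng.random()) < log_posterior(*proposal) - log_posterior(*theta):
        theta = proposal
    chain.append(theta)
chain = np.array(chain)[5000:]          # discard burn-in
print(chain.mean(axis=0))               # posterior means near (1.8, 12.0)
```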
Procedia PDF Downloads 729

2236 Modelling of Heat Generation in a 18650 Lithium-Ion Battery Cell under Varying Discharge Rates
Authors: Foo Shen Hwang, Thomas Confrey, Stephen Scully, Barry Flannery
Abstract:
Thermal characterization plays an important role in battery pack design. Lithium-ion batteries have to be maintained between 15 and 35 °C to operate optimally. Heat (Q) is generated internally within the batteries during both the charging and discharging phases, and it can be quantified using several standard methods. The most common method of calculating a battery's heat generation is through the addition of the Joule heating effects and the entropic changes across the battery. These values can be derived from the open-circuit voltage (OCV), nominal voltage (V), operating current (I), battery temperature (T), and the rate of change of the open-circuit voltage with temperature (dOCV/dT). This paper focuses on the experimental characterization and comparative modelling of the heat generation rate (Q) across several current discharge rates (0.5C, 1C, and 1.5C) of an 18650 cell. The analysis uses several non-linear mathematical function fits, including polynomial, exponential, and power models. Parameter fitting is carried out over the respective function orders: polynomial (n = 3-7), exponential (n = 2), and power. The fitted functions are then used as heat source functions in a 3-D computational fluid dynamics (CFD) solver under natural convection conditions. The generated temperature profiles are analyzed for errors against experimental discharge tests conducted at standard room temperature (25 °C). Initial experimental results display low deviation between the experimental and CFD temperature plots. As such, the heat generation functions formulated here could be applied to larger battery applications more easily than other available methods.
Keywords: computational fluid dynamics, curve fitting, lithium-ion battery, voltage drop
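
The description above corresponds to a heat source of the form Q = I*(OCV - V) + I*T*(dOCV/dT), with sign conventions varying by current direction. A minimal sketch of the curve-fitting step on synthetic data follows; the discharge values are hypothetical, and only one order per function family is shown.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical 1C discharge measurements for an 18650 cell.
t = np.linspace(0, 3600, 50)                       # time, s
I, T = 2.5, 298.15                                  # current (A), temperature (K)
ocv_minus_v = 0.04 + 0.00002 * t                    # overpotential OCV - V (V)
docv_dt = -5e-5                                     # dOCV/dT (V/K)
Q = I * ocv_minus_v + I * T * docv_dt               # Joule + entropic heat (W)

# Polynomial fit (orders 3-7 in the paper; order 3 shown here).
poly = np.polynomial.Polynomial.fit(t, Q, deg=3)

# Exponential fit of the form a * exp(b * t) + c.
def expo(x, a, b, c):
    return a * np.exp(b * x) + c
popt_exp, _ = curve_fit(expo, t, Q, p0=(0.01, 1e-4, 0.05), maxfev=10000)

# Power-law fit a * (t + 1)**b (shifted to avoid t = 0).
def power(x, a, b):
    return a * (x + 1.0) ** b
popt_pow, _ = curve_fit(power, t, Q, p0=(0.05, 0.1))
```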
Procedia PDF Downloads 96

2235 Online Learning for Modern Business Models: Theoretical Considerations and Algorithms
Authors: Marian Sorin Ionescu, Olivia Negoita, Cosmin Dobrin
Abstract:
This scientific communication reports and discusses learning models adaptable to modern business problems, and models specific to digital concepts and paradigms. In the PAC (probably approximately correct) learning model approach, the learning process begins by receiving a batch of learning examples; this set is used to acquire a hypothesis, and once learning is complete, the hypothesis is used to predict new operational examples. For complex business models, many models should be introduced and evaluated to estimate the induced results, so that the totality of the results can be used to develop a predictive rule that anticipates the choice of new models. In contrast, in online learning-type processes there is no separation between the learning (training) phase and the prediction phase. Each time a business model is approached, it is treated as a test example: a prediction is made first, and only afterwards is the label with the logical value "true" revealed from the point of view of the business decision. Some of the business models are then used as learning (training) examples, which helps to improve the prediction mechanisms for future business models.
Keywords: machine learning, business models, convex analysis, online learning
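
A minimal sketch of the predict-then-reveal loop that distinguishes online learning from the PAC setting, using online logistic regression on synthetic data; the "business models" are stand-in feature vectors, and the hidden labeling rule is an assumption for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(3)                       # hypothesis, updated after every example
eta = 0.1                             # learning rate
mistakes = 0

for step in range(1000):
    x = rng.normal(size=3)                                   # a new example arrives
    y = 1.0 if x @ np.array([1.0, -2.0, 0.5]) > 0 else 0.0   # hidden ground truth
    p = 1.0 / (1.0 + np.exp(-w @ x))                         # 1) predict first ...
    mistakes += int((p > 0.5) != (y > 0.5))
    w -= eta * (p - y) * x                                   # 2) ... then learn

print(f"online mistake rate: {mistakes / 1000:.3f}")
```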
Procedia PDF Downloads 142

2234 A Deterministic Approach for Solving the Hull and White Interest Rate Model with Jump Process
Authors: Hong-Ming Chen
Abstract:
This work considers the resolution of the Hull and White interest rate model with a jump process. A deterministic process is adopted to model the random behavior of interest rate variation as deterministic perturbations depending on the time t. The Brownian motion and jump uncertainties are denoted by the integral functions: a piecewise constant function w(t) and a point function θ(t). It is shown that the interest rate function and the yield function of the Hull and White interest rate model with jump process can be obtained by solving a nonlinear semi-infinite programming problem. A relaxed cutting plane algorithm is then proposed for solving the resulting optimization problem. The method is calibrated to 3-month U.S. Treasury securities data and is used to analyze several effects on interest rate prices, including interest rate variability and the negative correlation between stock returns and interest rates. The numerical results illustrate that our approach essentially generates yield functions with minimal fitting errors and small oscillation.
Keywords: optimization, interest rate model, jump process, deterministic
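
As an illustration of the relaxed cutting-plane idea on a toy semi-infinite program (not the Hull-White calibration itself): constraints indexed by a continuum are replaced by a growing finite working set, and the most violated constraint is added each round.

```python
import numpy as np
from scipy.optimize import linprog

# Semi-infinite LP: minimize x1 + x2
# subject to cos(t)*x1 + sin(t)*x2 >= 1 for ALL t in [0, pi/2], x >= 0.
ts = [np.pi / 4]                      # initial finite working set of constraints
grid = np.linspace(0, np.pi / 2, 1001)

for it in range(20):
    # Solve the relaxed problem with current cuts (linprog uses A_ub x <= b_ub).
    A = [[-np.cos(t), -np.sin(t)] for t in ts]
    res = linprog(c=[1, 1], A_ub=A, b_ub=[-1] * len(ts), bounds=[(0, None)] * 2)
    x = res.x
    # Find the most violated constraint over a fine grid of t.
    g = np.cos(grid) * x[0] + np.sin(grid) * x[1] - 1.0
    if g.min() > -1e-8:               # no violation: x is feasible and optimal
        break
    ts.append(grid[np.argmin(g)])     # add the cut and re-solve

print(x, res.fun)                     # converges to x = (1, 1), objective 2
```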
Procedia PDF Downloads 161

2233 Loss Function Optimization for CNN-Based Fingerprint Anti-Spoofing
Authors: Yehjune Heo
Abstract:
As biometric systems become widely deployed, identification systems can easily be attacked with various spoof materials. This paper contributes to finding a reliable and practical anti-spoofing method using Convolutional Neural Networks (CNNs), based on the choice of loss functions and optimizers. The CNNs used in this paper include AlexNet, VGGNet, and ResNet. By using various loss functions, including cross-entropy, center loss, cosine proximity, and hinge loss, and various optimizers, including Adam, SGD, RMSProp, Adadelta, Adagrad, and Nadam, we obtained significant performance changes. We find that choosing the correct loss function for each model is crucial, since different loss functions lead to different errors on the same evaluation. Using a subset of the LivDet 2017 database, we validate our approach by comparing generalization power. It is important to note that the same subset of LivDet is used across all training and testing for each model; this way, we can compare the performance, in terms of generalization for unseen data, across all the different models. The best CNN (AlexNet) with the appropriate loss function and optimizer results in a performance gain of more than 3% over the other CNN models with the default loss function and optimizer. In addition to the highest generalization performance, this paper also reports parameter counts and mean average error rates for the models with high accuracy, in order to find the model that consumes the least memory and computation time for training and testing. Although AlexNet has less complexity than the other CNN models, it proves to be very efficient. For practical anti-spoofing systems, the deployed version should use a small amount of memory and should run very fast with high anti-spoofing performance. For our deployed version on smartphones, additional processing steps, such as quantization and pruning algorithms, have been applied to our final model.
Keywords: anti-spoofing, CNN, fingerprint recognition, loss function, optimizer
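
A minimal PyTorch sketch of the loss-function/optimizer grid described above, on a toy CNN with dummy data; the architecture and hyperparameters are illustrative, and center loss and cosine proximity are omitted for brevity.

```python
import torch
import torch.nn as nn

# Tiny stand-in CNN; the paper uses AlexNet/VGGNet/ResNet on LivDet 2017 images.
def make_model():
    return nn.Sequential(
        nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(), nn.Linear(8 * 16 * 16, 2))   # 2 classes: live vs. spoof

x = torch.randn(64, 1, 32, 32)                      # dummy fingerprint patches
y = torch.randint(0, 2, (64,))

losses = {"cross_entropy": nn.CrossEntropyLoss(),
          "multi_margin": nn.MultiMarginLoss()}     # hinge-style multiclass loss
optimizers = {"Adam": torch.optim.Adam, "SGD": torch.optim.SGD,
              "RMSprop": torch.optim.RMSprop, "Adagrad": torch.optim.Adagrad}

for lname, criterion in losses.items():
    for oname, opt_cls in optimizers.items():
        model = make_model()
        opt = opt_cls(model.parameters(), lr=1e-3)
        for _ in range(5):                          # a few illustrative steps
            opt.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            opt.step()
        print(f"{lname:14s} {oname:8s} final loss {loss.item():.4f}")
```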
Procedia PDF Downloads 137

2232 Research of Possibilities to Influence the Metal Cross-Section Deformation during Cold Rolling with the Help of Local Deformation Zone Creation
Authors: A. Pesin, D. Pustovoytov, A. Kolesnik, M. Sverdlik
Abstract:
Rolling disturbances often arise which may lead to defects such as non-flatness, warpage, and corrugation. Numerous methods of compensating for such disturbances are well known. However, most of them preserve the initial form of transverse flow of the strip, such as convex, concave, or asymmetric (for example, sphenoid), and sometimes the inherited form (especially an asymmetric one) is undesirable. Technical solutions have been developed which provide conditions for transverse metal flow in the deformation zone. It should be noted that greater reduction increases transverse flow, while smaller reduction causes a corresponding decrease in metal flow, so that differently deformed metal lengths remain approximately the same and the defects mentioned above are avoided. One of the solutions involves sequentially deforming the strip from a rectangular cross-section into a profile with periodic rectangular grooves, and back into a rectangular profile again. The work was carried out in the DEFORM 3D program complex. Experimental rolling was performed on laboratory mill 150. Comparison of the experimental and theoretical results demonstrated good correlation.
Keywords: FEM, cross-section deformation, mechanical engineering, applied mechanics
Procedia PDF Downloads 348

2231 Key Frame Based Video Summarization via Dependency Optimization
Authors: Janya Sainui
Abstract:
With the rapid growth of digital video and data communications, video summarization, which provides a shorter version of a video for fast browsing and retrieval, is necessary. Key frame extraction is one of the mechanisms for generating a video summary. In general, the extracted key frames should both represent the entire video content and contain minimum redundancy. However, most existing approaches select key frames heuristically; hence, the selected key frames may not be the most distinct frames and/or may not cover the entire content of the video. In this paper, we propose a method of video summarization which provides reasonable objective functions for selecting key frames. In particular, we apply a statistical dependency measure called quadratic mutual information as our objective function, maximizing the coverage of the entire video content while minimizing the redundancy among the selected key frames. The proposed key frame extraction algorithm finds key frames by solving an optimization problem. Through experiments, we demonstrate the success of the proposed video summarization approach, which produces video summaries with better coverage of the entire video content and less redundancy among key frames compared to state-of-the-art approaches.
Keywords: video summarization, key frame extraction, dependency measure, quadratic mutual information
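
A minimal sketch of key-frame selection as an optimization over coverage and redundancy; a Gaussian kernel similarity is used here as a simplified stand-in for the paper's quadratic mutual information measure, and the frame features are random placeholders.

```python
import numpy as np

def select_key_frames(features, k=5, sigma=1.0, lam=0.5):
    """Greedy key-frame selection: maximize coverage of all frames while
    penalizing similarity to already-selected frames."""
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))           # frame-to-frame similarity
    selected = []
    for _ in range(k):
        best, best_score = None, -np.inf
        for i in range(len(features)):
            if i in selected:
                continue
            coverage = K[i].mean()               # how well frame i represents all
            redundancy = max((K[i][j] for j in selected), default=0.0)
            score = coverage - lam * redundancy
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
    return sorted(selected)

frames = np.random.default_rng(0).normal(size=(100, 16))   # dummy frame features
print(select_key_frames(frames, k=5))
```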
Procedia PDF Downloads 267

2230 Path Integrals and Effective Field Theory of Large Scale Structure
Authors: Revant Nayar
Abstract:
In this work, we recast the equations describing large scale structure, and by extension all nonlinear fluids, in the path integral formalism. We first calculate the well-known two- and three-point functions using the Schwinger-Keldysh formalism commonly used to perturbatively solve path integrals in non-equilibrium systems. Then we include EFT corrections due to pressure, viscosity, and noise as effects on the time-dependent propagator. We are able to express results for arbitrary two- and three-point correlation functions in LSS in terms of differential operators acting on a triple-K master integral. We also, for the first time, obtain analytical results for more general initial conditions deviating from the usual power law P ∝ kⁿ by introducing a mass scale in the initial conditions. This robust field-theoretic formalism equips us with tools from strongly coupled QFT, such as the OPE and holographic duals, to study the strongly non-linear regime of LSS and turbulent fluid dynamics. These could be used to fully capture the strongly non-linear dynamics of fluids and to move towards solving the open problem of classical turbulence.
Keywords: quantum field theory, cosmology, effective field theory, renormalisation
Procedia PDF Downloads 135

2229 Self-denigration in Doctoral Defense Sessions: Scale Development and Validation
Authors: Alireza Jalilifar, Nadia Mayahi
Abstract:
The dissertation defense, as a complicated, conflict-prone context, entails the adoption of elegant interactional strategies, one of which is self-denigration. This study aimed to develop and validate a self-denigration model that fits the context of doctoral defense sessions in applied linguistics. Two focus group discussions provided the basis for developing this conceptual model, which assumed 10 functions for self-denigration, namely good manners, modesty, affability, altruism, assertiveness, diffidence, coercive self-deprecation, evasion, diplomacy, and flamboyance. These functions were used to design a 40-item questionnaire on the attitudes of applied linguists concerning self-denigration in defense sessions. Confirmatory factor analysis of the questionnaire indicated the predictive ability of the measurement model. The findings of this study suggest that self-denigration in doctoral defense sessions is the social representation of the participants' values, ideas, and practices, adopted as a negotiation strategy and a conflict management policy for the purpose of establishing harmony and maintaining resilience. This study has implications for doctoral students and academics and illuminates further research on self-denigration in other contexts.
Keywords: academic discourse, politeness, self-denigration, grounded theory, dissertation defense
Procedia PDF Downloads 139

2228 A Practical and Efficient Evaluation Function for 3D Model Based Vehicle Matching
Authors: Yuan Zheng
Abstract:
3D model-based vehicle matching provides a new way for vehicle recognition, localization, and tracking. Its key is to construct an evaluation function, also called a fitness function, to measure the degree of vehicle matching. Existing fitness functions often perform poorly when clutter and occlusion exist in traffic scenarios. In this paper, we present a practical and efficient fitness function. Unlike existing evaluation functions, the proposed fitness function studies the vehicle matching problem from both local and global perspectives, exploiting pixel gradient information as well as silhouette information. In view of the discrepancy between the 3D vehicle model and the real vehicle, a weighting strategy is introduced to treat the fitting of the model's wireframes differently. Additionally, a normalization operation for the model's projection is performed to improve the accuracy of the matching. Experimental results on real traffic videos reveal that the proposed fitness function is efficient and robust to cluttered backgrounds and partial occlusion.
Keywords: 3D-2D matching, fitness function, 3D vehicle model, local image gradient, silhouette information
Procedia PDF Downloads 399

2227 An Interpolation Tool for Data Transfer in Two-Dimensional Ice Accretion Problems
Authors: Marta Cordero-Gracia, Mariola Gomez, Olivier Blesbois, Marina Carrion
Abstract:
One of the difficulties in icing simulations arises for extended periods of exposure, when very large ice shapes are created. As well as being large, these shapes can be complex, such as a double horn. Such configurations are currently computed in several steps: the icing step is stopped when the ice shapes become too large, at which point a new mesh has to be created to allow further CFD and ice growth simulations to be performed. This can be very costly and is a limiting factor in the simulations that can be performed. A way to avoid the costly human intervention in the re-meshing step of multistep icing computations is to use mesh deformation instead of re-meshing. The aim of the present work is to apply an interpolation method based on Radial Basis Functions (RBF) to transfer deformations from the surface mesh to the volume mesh. This deformation tool has been developed specifically for icing problems. It is able to deal with localized, sharp, and large deformations, unlike the tools traditionally used for smoother wing deformations. The tool is presented along with validation on typical two-dimensional icing shapes.
Keywords: ice accretion, interpolation, mesh deformation, radial basis functions
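
A minimal sketch of the underlying RBF machinery, assuming a multiquadric basis (the abstract does not specify the basis used): weights are solved from known surface displacements and then evaluated at volume nodes. The node coordinates are toy values.

```python
import numpy as np

def rbf_deform(surface_pts, surface_disp, volume_pts, eps=1.0):
    """Propagate known surface-mesh displacements to volume-mesh nodes with
    multiquadric RBF interpolation: d(x) = sum_j w_j * phi(|x - x_j|)."""
    def phi(r):
        return np.sqrt(r ** 2 + eps ** 2)        # multiquadric basis
    r_ss = np.linalg.norm(surface_pts[:, None] - surface_pts[None, :], axis=-1)
    W = np.linalg.solve(phi(r_ss), surface_disp) # one weight set per coordinate
    r_vs = np.linalg.norm(volume_pts[:, None] - surface_pts[None, :], axis=-1)
    return phi(r_vs) @ W

# Toy 2-D case: some surface nodes pushed outward, volume nodes follow.
surf = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.2], [0.2, 0.1]])
disp = np.array([[0.0, 0.0], [0.0, 0.0], [0.0, 0.15], [0.0, 0.05]])
vol = np.array([[0.5, 0.5], [0.4, 0.1]])
print(rbf_deform(surf, disp, vol))               # interpolated displacements
```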
Procedia PDF Downloads 314

2226 Development of a Fuzzy Logic Based Model for Monitoring Child Pornography
Authors: Mariam Ismail, Kazeem Rufai, Jeremiah Balogun
Abstract:
A study was conducted to apply fuzzy logic to the development of a monitoring model for child pornography based on associated risk factors, which can be used by forensic experts or integrated into forensic systems for the early detection of child pornographic activities. A number of methods were adopted: an extensive review of related works was carried out in order to identify the factors associated with child pornography, the factors were validated by an expert sex psychologist and guidance counselor, and relevant data were collected. Fuzzy membership functions were used to fuzzify the identified variables alongside the risk of the occurrence of child pornography, based on inference rules provided by the experts consulted, and the fuzzy logic expert system was simulated using the Fuzzy Logic Toolbox available in MATLAB Release 2016. The results showed that there were 4 categories of risk factors required for assessing the risk of a suspect committing child pornography offenses, and that 2 and 3 triangular membership functions were used to formulate the risk factors, based on the 2 and 3 labels assigned, respectively. Five fuzzy logic models were formulated, such that the first 4 assess the impact of each category on child pornography, while the last one takes the 4 outputs of those models as the inputs required for assessing the overall risk of child pornography. The following conclusion was made: factors related to personal traits, social traits, history of child pornography crimes, and self-regulatory deficiency traits of the suspect are required for the assessment of the risk of child pornography crimes. Using the values of the risk factors selected for this study, the risk of child pornography can easily be assessed in order to determine the likelihood of a suspect perpetrating the crime.
Keywords: fuzzy, membership functions, pornography, risk factors
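
A minimal sketch of the triangular-membership machinery described above; the category scores, label breakpoints, and the single aggregation rule are illustrative assumptions, not the validated MATLAB model.

```python
def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical normalized scores (0-10) for the four risk-factor categories.
scores = {"personal": 7.0, "social": 4.0, "history": 8.5, "self_regulation": 6.0}

# Three labels per category, echoing the 3-label triangular functions described.
labels = {"low": (0, 2.5, 5), "medium": (2.5, 5, 7.5), "high": (5, 7.5, 10)}

def fuzzify(x):
    return {name: tri(x, *pts) for name, pts in labels.items()}

# Simplified Mamdani-style aggregation: overall risk driven by the strongest
# "high" firing across categories (an illustrative rule only).
high_firing = max(fuzzify(v)["high"] for v in scores.values())
low_firing = min(fuzzify(v)["low"] for v in scores.values())
print(f"risk(high) = {high_firing:.2f}, risk(low) = {low_firing:.2f}")
```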
Procedia PDF Downloads 131

2225 Trainability of Executive Functions during Preschool Age: Analysis of Inhibition of 5-Year-Old Children
Authors: Christian Andrä, Pauline Hähner, Sebastian Ludyga
Abstract:
Introduction: In the recent past, discussions on the importance of physical activity for child development have contributed to a growing interest in executive functions, which refer to cognitive processes. By controlling, modulating, and coordinating sub-processes, they make it possible to achieve superior goals. Major components include working memory, inhibition, and cognitive flexibility. While executive functions can be trained easily in school children, there are still research deficits regarding their trainability during preschool age. Methodology: This quasi-experimental study with pre- and post-design analyzes 23 children [age: 5.0 (mean value) ± 0.7 (standard deviation)] from four different sports groups. The intervention group was made up of 13 children (IG: 4.9 ± 0.6), while the control group consisted of ten children (CG: 5.1 ± 0.9). Between pre-test and post-test, children from the intervention group participated in special games that train executive functions (i.e., changing the rules of a game, introducing new stimuli into familiar games) for ten units of their weekly sports program. The sports program of the control group was not modified. A computer-based version of the Eriksen flanker task was employed to analyze the participants' inhibition ability. In two rounds, the participants had to respond 50 times, as fast as possible, to a certain target (the direction of sight of a fish; the target was always the fish placed in the central position among five fish). Congruent (all fish have the same direction of sight) and incongruent (the central fish faces the opposite direction) stimuli were used. The relevant parameters were response time and accuracy. The main objective was to investigate whether children from the intervention group showed more improvement in the two parameters than children from the control group. Major findings: The intervention group revealed significant improvements in congruent response time (pre: 1.34 s, post: 1.12 s, p<.01), while the control group did not show any statistically relevant difference (pre: 1.31 s, post: 1.24 s). Likewise, the comparison of incongruent response times indicates a comparable result (IG: pre: 1.44 s, post: 1.25 s, p<.05 vs. CG: pre: 1.38 s, post: 1.38 s). In terms of accuracy for congruent stimuli, the intervention group showed significant improvements (pre: 90.1%, post: 95.9%, p<.01). In contrast, no significant improvement was found for the control group (pre: 88.8%, post: 92.9%). Conversely, the intervention group did not display any significant results for incongruent stimuli (pre: 74.9%, post: 83.5%), while the control group revealed a significant difference (pre: 68.9%, post: 80.3%, p<.01). The analysis of three out of four criteria demonstrates that children who took part in the special sports program improved more than children who did not. The contrary result for the last criterion could be caused by the control group's low pre-test scores. Conclusion: The findings illustrate that inhibition can be trained as early as preschool age. The combination of familiar games with increased requirements for attention and control processes appears to be particularly suitable.
Keywords: executive functions, flanker task, inhibition, preschool children
Procedia PDF Downloads 253

2224 An Online Adaptive Thresholding Method to Classify Google Trends Data Anomalies for Investor Sentiment Analysis
Authors: Duygu Dere, Mert Ergeneci, Kaan Gokcesu
Abstract:
Google Trends data has gained increasing popularity in applications of behavioral finance, decision science, and risk management. Because of Google's wide range of use, Trends statistics provide significant information about investor sentiment and intention, which can be used as decisive factors in corporate and risk management. However, an anomaly, i.e., a significant increase or decrease in a certain query, cannot be detected by state-of-the-art computational applications due to the random baseline noise of the Trends data, which is modelled as additive white Gaussian noise (AWGN). Since the baseline noise power changes gradually over time, an adaptive thresholding method is required to track and learn the baseline noise for correct classification. To this end, we introduce an online method to classify meaningful deviations in Google Trends data. Through extensive experiments, we demonstrate that our method can successfully classify various anomalies for plenty of different data.
Keywords: adaptive data processing, behavioral finance, convex optimization, online learning, soft minimum thresholding
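
A minimal sketch of one way to realize such an online adaptive threshold, tracking the drifting baseline with exponential moving averages and flagging k-sigma deviations; this is a simplified stand-in for the paper's soft-minimum thresholding, with illustrative parameter values.

```python
class OnlineAnomalyClassifier:
    """Tracks the slowly drifting AWGN baseline of a series with exponential
    moving averages and flags k-sigma deviations as anomalies."""
    def __init__(self, alpha=0.05, k=3.0):
        self.alpha, self.k = alpha, k
        self.mean, self.var = None, None

    def update(self, x):
        if self.mean is None:                    # initialize on first sample
            self.mean, self.var = x, 1.0
            return False
        is_anomaly = abs(x - self.mean) > self.k * self.var ** 0.5
        if not is_anomaly:                       # learn only from baseline noise
            diff = x - self.mean
            self.mean += self.alpha * diff
            self.var = (1 - self.alpha) * (self.var + self.alpha * diff ** 2)
        return is_anomaly

clf = OnlineAnomalyClassifier()
series = [50, 52, 49, 51, 50, 48, 90, 51, 49, 50]   # spike at index 6
print([clf.update(x) for x in series])              # the jump to 90 is flagged
```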
Procedia PDF Downloads 169

2223 Handwriting Velocity Modeling by Artificial Neural Networks
Authors: Mohamed Aymen Slim, Afef Abdelkrim, Mohamed Benrejeb
Abstract:
Handwriting is a physical demonstration of a complex cognitive process learned from childhood. People with disabilities or various neurological diseases face many difficulties resulting from problems in the muscle stimuli (EMG) or brain signals (EEG) that arise at the writing stage. The handwriting velocity of the same writer, or of different writers, varies according to different criteria: age, attitude, mood, writing surface, etc. It is therefore of interest to build an experimental database of records taking, as the primary reference, the writing speed of different writers, which would allow the study of the overall system during the handwriting process. This paper deals with a new approach to modeling the handwriting system based on the velocity criterion, using artificial neural networks, specifically Radial Basis Function (RBF) neural networks. The simulation results show satisfactory agreement between the responses of the developed neural model and the experimental data for various letters and forms, and hence the efficiency of the proposed approach.
Keywords: electromyographic (EMG) signals, experimental approach, handwriting process, Radial Basis Function (RBF) neural networks, velocity modeling
Procedia PDF Downloads 441

2222 Elastohydrodynamic Lubrication Study Using Discontinuous Finite Volume Method
Authors: Prawal Sinha, Peeyush Singh, Pravir Dutt
Abstract:
Problems in elastohydrodynamic lubrication have attracted a lot of attention in the last few decades, and solving a two-dimensional problem has always been a big challenge. In this paper, a new discontinuous finite volume method (DVM) for the two-dimensional point contact Elastohydrodynamic Lubrication (EHL) problem is developed and analyzed, and a complete algorithm is presented for solving such a problem. The method is robust and easily parallelized in an MPI architecture. The GMRES technique is implemented to solve the linear system obtained from the formulation. A new approach is followed in which discontinuous piecewise polynomials are used for the trial functions; it is natural to assume that the advantages of using discontinuous functions in finite element methods should also apply to finite volume methods. The nature of the discontinuity of the trial function is such that the elements in the corresponding dual partition have the smallest support compared with classical finite volume methods. The film thickness calculation is done using a singular quadrature approach. The results obtained are presented graphically and discussed. The method is well suited for solving the EHL point contact problem and can probably be used in commercial software.
Keywords: elastohydrodynamic, lubrication, discontinuous finite volume method, GMRES technique
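
For illustration, a minimal SciPy sketch of the GMRES solution step on a small nonsymmetric sparse system standing in for the assembled DVM matrix; the matrix entries are toy values, not an EHL discretization.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres

# Nonsymmetric, diagonally dominant tridiagonal system as a stand-in for the
# linear system produced by a discontinuous finite volume discretization.
n = 200
A = diags([np.full(n - 1, -1.2), np.full(n, 2.5), np.full(n - 1, -0.8)],
          offsets=[-1, 0, 1], format="csr")
b = np.ones(n)

x, info = gmres(A, b, restart=30, maxiter=500)
print(info == 0, np.linalg.norm(A @ x - b))   # info == 0 signals convergence
```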
Procedia PDF Downloads 258

2221 Argument Representation in Non-Spatial Motion Bahasa Melayu Based Conceptual Structure Theory
Authors: Nurul Jamilah Binti Rosly
Abstract:
The typology of motion is usually understood as a change from one location to another, but from a conceptual point of view, motion can also occur in non-spatial contexts associated with human and social factors. From this conceptual point of view, non-spatial motion involves movement of time, ownership, identity, state, and existence. Accordingly, this study focuses on the lexical items shared, accept, be, store, and exist as its study material. The data in this study were extracted from the Database of Languages and Literature Corpus Database, Malaysia, and were analyzed using semantic and syntactic concepts from Conceptual Structure Theory (Ray Jackendoff, 2002). Semantic representations are expressed in the form of conceptual structures with argument functions that include [events], [situations], [objects], [paths], and [places]. The findings show that the mapping of these arguments comprises three main stages, namely mapping the argument structure, mapping the tree, and mapping the roles of thematic items. Accordingly, this study shows the representation of non-spatial motion in Malay.
Keywords: arguments, concepts, constituencies, events, situations, thematics
Procedia PDF Downloads 129

2220 Home Range and Spatial Interaction Modelling of Black Bears
Authors: Fekadu L. Bayisa, Elvan Ceyhan, Todd D. Steury
Abstract:
Interaction between individuals of the same species is an important component of population dynamics. An interaction can be either static (based on spatial overlap) or dynamic (based on movement interactions). Using GPS collar data, we can quantify both static and dynamic interactions between black bears. The goal of this work is to determine the level of black bear interaction using the 95% and 50% home ranges, as well as to model black bear spatial interactions, which could be attraction, avoidance/repulsion, or no interaction at all, to gain new insights and improve our understanding of ecological processes. Recent methodological developments in home range estimation, inhomogeneous multitype/cross-type summary statistics, and envelope testing methods are explored to study the nature of black bear interactions. Our findings, in general, indicate that black bears of one type in our data set tend to cluster around those of another type.
Keywords: autocorrelated kernel density estimator, cross-type summary function, inhomogeneous multitype Poisson process, kernel density estimator, minimum convex polygon, pointwise and global envelope tests
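
A minimal sketch of the minimum convex polygon (MCP) estimator named in the keywords, computing a 95% home range from simulated GPS fixes; the peeling rule used here (dropping the fixes farthest from the centroid) is one common convention.

```python
import numpy as np
from scipy.spatial import ConvexHull

def mcp_home_range(points, percent=95):
    """Minimum convex polygon home range: drop the (100 - percent)% of GPS
    fixes farthest from the centroid, then take the convex hull of the rest."""
    centroid = points.mean(axis=0)
    d = np.linalg.norm(points - centroid, axis=1)
    keep = points[d <= np.percentile(d, percent)]
    hull = ConvexHull(keep)
    return hull, hull.volume          # in 2-D, .volume is the enclosed area

# Dummy GPS fixes for two collared bears (projected coordinates, km).
rng = np.random.default_rng(7)
bear_a = rng.normal([0, 0], 2.0, size=(500, 2))
bear_b = rng.normal([3, 1], 2.0, size=(500, 2))

hull_a, area_a = mcp_home_range(bear_a, 95)
hull_b, area_b = mcp_home_range(bear_b, 95)
print(f"95% MCP areas: {area_a:.1f} and {area_b:.1f} km^2")
```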
Procedia PDF Downloads 82

2219 Conduction Transfer Functions for the Calculation of Heat Demands in Heavyweight Facade Systems
Authors: Mergim Gasia, Bojan Milovanovica, Sanjin Gumbarevic
Abstract:
Better energy performance of the building envelope is one of the most important aspects of energy savings if the goals set by the European Union are to be achieved in the future. Dynamic heat transfer simulations are used for the calculation of building energy consumption because they give more realistic energy demands than stationary calculations, which do not take the building's thermal mass into account. Software used for these dynamic simulations employs methods based on analytical models, since numerical models are impractical for longer periods. The analytical models used in this research fall into the category of conduction transfer functions (CTFs). Two methods for calculating CTFs covered by this research are the Laplace method and the state-space method. The literature review showed that the main disadvantage of these methods is that they are inadequate for heavyweight façade elements and for the shorter time steps used in the calculation. The algorithms for both the Laplace and state-space methods are implemented in Mathematica, and the results are compared to results from EnergyPlus and TRNSYS, since these programs use similar algorithms for the calculation of a building's energy demand. This research aims to assess the efficiency of the Laplace and state-space methods for calculating the energy demand of heavyweight building elements at shorter sampling times, and it also provides the means for improving the algorithms used by these methods. As the reference for the boundary heat flux density, the finite difference method (FDM) is used. Even though dynamic heat transfer simulations are superior to calculations based on stationary boundary conditions, they have their limitations and will give unsatisfactory results if not properly used.
Keywords: Laplace method, state-space method, conduction transfer functions, finite difference method
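
A minimal sketch of the FDM reference described above: an explicit 1-D transient conduction scheme through a heavyweight wall, returning the inner-surface boundary heat flux density. The material properties and boundary conditions are illustrative assumptions.

```python
import numpy as np

# Explicit 1-D finite difference reference for transient conduction through
# a heavyweight concrete wall (illustrative material properties).
L, n = 0.30, 31                      # wall thickness (m), nodes
k, rho, cp = 1.7, 2300.0, 920.0      # W/mK, kg/m3, J/kgK
alpha = k / (rho * cp)               # thermal diffusivity, m2/s
dx = L / (n - 1)
dt = 0.4 * dx ** 2 / alpha           # satisfies stability limit dt <= dx^2/(2*alpha)

T = np.full(n, 20.0)                 # initial uniform temperature, degC
T_in = 20.0                          # indoor air held constant
hours = 48
for s in range(int(hours * 3600 / dt)):
    t = s * dt
    T[0] = 20.0 + 10.0 * np.sin(2 * np.pi * t / 86400.0)   # outdoor daily cycle
    T[-1] = T_in
    T[1:-1] += alpha * dt / dx ** 2 * (T[2:] - 2 * T[1:-1] + T[:-2])

q_inner = -k * (T[-1] - T[-2]) / dx  # boundary heat flux density, W/m2
print(f"inner-surface heat flux after {hours} h: {q_inner:.2f} W/m2")
```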
Procedia PDF Downloads 133

2218 Worst-Case Load Shedding in Electric Power Networks
Authors: Fu Lin
Abstract:
We consider the worst-case load-shedding problem in electric power networks where a number of transmission lines are to be taken out of service. The objective is to identify a prespecified number of line outages that lead to the maximum interruption of power generation and load at the transmission level, subject to the active power-flow model, the load and generation capacity of the buses, and the phase-angle limit across the transmission lines. For this nonlinear model with binary constraints, we show that all decision variables are separable except for the nonlinear power-flow equations. We develop an iterative decomposition algorithm, which converts the worst-case load shedding problem into a sequence of small subproblems. We show that the subproblems are either convex problems that can be solved efficiently or nonconvex problems that have closed-form solutions. Consequently, our approach is scalable for large networks. Furthermore, we prove the convergence of our algorithm to a critical point, and the objective value is guaranteed to decrease throughout the iterations. Numerical experiments with IEEE test cases demonstrate the effectiveness of the developed approach.Keywords: load shedding, power system, proximal alternating linearization method, vulnerability analysis
Procedia PDF Downloads 141

2217 Optimal Load Factors for Seismic Design of Buildings
Authors: Juan Bojórquez, Sonia E. Ruiz, Edén Bojórquez, David de León Escobedo
Abstract:
A life-cycle optimization procedure to establish the best load factor combinations for the seismic design of buildings is proposed. The expected cost of damage from future earthquakes within the life of the structure is estimated, and realistic cost functions are assumed. The functions include repair cost, cost of contents damage, cost associated with loss of life, cost of injuries, and economic loss. The loads considered are dead, live, and earthquake loads. The study is performed for reinforced concrete buildings located in Mexico City. The buildings are modeled as multiple-degree-of-freedom frame structures. The parameter selected to measure structural damage is the maximum inter-story drift. The structural models are subjected to 31 soft-soil ground motions recorded in the Lake Zone of Mexico City. In order to obtain the annual structural failure rates, a numerical integration method is applied.
Keywords: load factors, life-cycle analysis, seismic design, reinforced concrete buildings
Procedia PDF Downloads 618

2216 Effect of Risperidone and Haloperidol on Clinical Picture and Some Biochemical Parameters of Schizophrenic Libyan Patients
Authors: Mabruka S. Elashheb, Adullah Ali Bakush
Abstract:
Schizophrenia is referred to as a disorder, rather than a disease, because no clear, reliable, and specific etiological factor has been established. Even though schizophrenia is not a very frequent disease, it is among the most burdensome and costly illnesses worldwide. Prevention of relapse is a major goal of maintenance treatment in patients with psychotic disorders. We performed a comparison of a newer, atypical antipsychotic drug, risperidone, and an older, conventional neuroleptic drug, haloperidol, in terms of their effects on routine kidney and liver functions and on the negative and positive symptoms of patients with schizophrenia and schizoaffective disorder, after three and five weeks of treatment. It is apparent from the comparative data on haloperidol and risperidone treatment in schizophrenic patients that risperidone produced superior improvement in the negative and positive symptoms of patients, had no harmful effect on liver and kidney functions, and showed greater efficacy and faster recovery from schizophrenic symptoms. On the basis of the findings of the present study, we conclude that treatment with risperidone is superior to haloperidol in reducing the risk of relapse among outpatients with schizophrenic disorders.
Keywords: schizophrenia, Haloperidol, Risperidone, positive and negative symptoms
Procedia PDF Downloads 379

2215 Functional Outcome of Speech, Voice and Swallowing Following Excision of Glomus Jugulare Tumor
Authors: B. S. Premalatha, Kausalya Sahani
Abstract:
Background: Glomus jugulare tumors arise within the jugular foramen and are commonly seen in females, particularly on the left side. Surgical excision of the tumor may cause lower cranial nerve deficits. Cranial nerve involvement produces hoarseness of voice, slurred speech, and dysphagia, along with other physical symptoms, thereby affecting the quality of life of these individuals. Though oncological clearance is mainly emphasized in treating these individuals, little importance is given to their communication, voice, and swallowing problems, which play a crucial part in daily functioning. Methods: The participants were two female subjects, aged 56 and 62 years, who presented with complaints of change in voice, inability to swallow, and reduced clarity of speech following surgery for a left glomus jugulare tumor. Their surgical information revealed multiple cranial nerve palsies involving the left facial nerve, the left superior and recurrent branches of the vagus nerve, the left pharyngeal nerve, the left soft palate, and the left hypoglossal and vestibular nerves. Functional outcomes of voice, speech, and swallowing were evaluated by perceptual and objective assessment procedures. The assessment included examination of oral structures and functions, assessment of dysarthria using the Frenchay Dysarthria Assessment, cranial nerve functions, and swallowing functions. MDVP and Dr. Speech software were used to evaluate the acoustic parameters and the quality of voice, respectively. Results: The study revealed that both subjects, subsequent to excision of the glomus jugulare tumor, showed a varied picture of affected oral structures and functions, articulation, voice, and swallowing. The cranial nerve assessment showed impairment of the vagus, hypoglossal, facial, and glossopharyngeal nerves. Voice examination indicated vocal cord paralysis associated with a breathy voice quality, weak voluntary cough, reduced pitch and loudness range, and poor respiratory support. Perturbation parameters such as jitter and shimmer were affected, along with the s/z ratio, indicative of vocal fold pathology. Reduced maximum phonation duration (MPD) of vowels indicated disturbed coordination between the respiratory and laryngeal systems. Hypernasality was a prominent feature and reduced speech intelligibility. Imprecise articulation was seen in both subjects, as the hypoglossal nerve was affected following surgery. Injury to the vagus, hypoglossal, glossopharyngeal, and facial nerves disturbed the function of swallowing, and all phases of swallowing were affected. Aspiration was observed before and during the swallow, confirming oropharyngeal dysphagia. All subsystems were affected as per the Frenchay Dysarthria Assessment, signifying a diagnosis of flaccid dysarthria. Conclusion: There are observable communication and swallowing difficulties following excision of a glomus jugulare tumor. Even with complete resection, extensive rehabilitation may be necessary due to significant lower cranial nerve dysfunction. The findings of the present study stress the need for the involvement of a speech and swallowing therapist for pre-operative counseling and assessment of functional outcomes.
Keywords: functional outcome, glomus jugulare tumor excision, multiple cranial nerve impairment, speech and swallowing
Procedia PDF Downloads 252