Search results for: uncorrected refractive error
1071 Language Switching Errors of Bilinguals: Role of Top-Down and Bottom-Up Processes
Authors: Numra Qayyum, Samina Sarwat, Noor ul Ain
Abstract:
Bilingual speakers can generally speak both languages with the same competence, without mixing them intentionally or making mistakes, but errors sometimes occur in language selection. This quantitative study deals with the language-selection errors made by Urdu-English bilinguals. In this research, special attention is given to the parts played by bottom-up priming and top-down cognitive control in these errors. Unbalanced Urdu-English bilingual participants named pictures and were prompted to shift from one language to the other under time pressure. Different situations were presented to manipulate the participants, and long and short runs of same-language trials were given before each switch to the other language. The study concludes that bilinguals made more errors when switching from their second language to their first language, and these errors are especially numerous when a speaker switches from L2 (second language) to L1 (first language) after a long run. When the switching is reversed, i.e., from L1 to L2, run length had no effect at all. These results attribute the errors clearly to top-down cognitive control.
Keywords: bottom up priming, language error, language switching, top down cognitive control
Procedia PDF Downloads 137
1070 Improving Load Frequency Control of Multi-Area Power System by Considering Uncertainty by Using Optimized Type 2 Fuzzy PID Controller with the Harmony Search Algorithm
Authors: Mehrdad Mahmudizad, Roya Ahmadi Ahangar
Abstract:
This paper presents a method for designing type 2 fuzzy PID controllers to solve the Load Frequency Control (LFC) problem. The Harmony Search (HS) algorithm is used to tune the scaling factors and the uncertainty of the membership functions of Interval Type 2 Fuzzy Proportional Integral Derivative (IT2FPID) controllers in order to reduce the frequency deviation resulting from load oscillations. The simulation results show that the performance of the proposed IT2FPID LFC, in terms of error, settling time, and robustness against different load oscillations, is superior to that of PID and Type 1 Fuzzy Proportional Integral Derivative (T1FPID) controllers.
Keywords: load frequency control, fuzzy-PID controller, type 2 fuzzy system, harmony search algorithm
Procedia PDF Downloads 278
1069 Predicting Shot Making in Basketball Learnt from Adversarial Multiagent Trajectories
Authors: Mark Harmon, Abdolghani Ebrahimi, Patrick Lucey, Diego Klabjan
Abstract:
In this paper, we predict the likelihood of a player making a shot in basketball from multiagent trajectories. Previous approaches to similar problems center on hand-crafted features to capture domain-specific knowledge. Although intuitive, this approach is prone to missing important predictive features, as recent work in deep learning has shown. To circumvent this issue, we present a convolutional neural network (CNN) approach in which we initially represent the multiagent behavior as an image. To encode the adversarial nature of basketball, we use a multichannel image, which we then feed into a CNN. Additionally, to capture the temporal aspect of the trajectories, we use “fading.” We find that this approach is superior to a traditional feedforward network (FFN) model. By using gradient ascent, we were able to discover what the CNN filters look for during training. Last, we find that a combined FFN+CNN is the best performing network, with an error rate of 39%.
Keywords: basketball, computer vision, image processing, convolutional neural network
Procedia PDF Downloads 153
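The abstract's two key encoding ideas, a multichannel image per agent group and temporal "fading", can be sketched directly. A minimal illustration (not the authors' code; grid size, channel assignment, and decay rate are assumptions):

```python
import numpy as np

def trajectories_to_image(trajs, court=(94, 50), grid=(47, 25), decay=0.9):
    """Rasterize (T, N, 2) agent trajectories into a multichannel image.

    Channel 0: offense (agents 0-4), channel 1: defense (agents 5-9),
    channel 2: ball (agent 10). Older time steps are "faded" by `decay`.
    """
    img = np.zeros((3, *grid), dtype=np.float32)
    T = trajs.shape[0]
    for t in range(T):
        weight = decay ** (T - 1 - t)          # most recent step has weight 1
        for n, (x, y) in enumerate(trajs[t]):
            ch = 0 if n < 5 else (1 if n < 10 else 2)
            i = min(int(x / court[0] * grid[0]), grid[0] - 1)
            j = min(int(y / court[1] * grid[1]), grid[1] - 1)
            img[ch, i, j] = max(img[ch, i, j], weight)
    return img

# Example: 32 time steps, 10 players + ball, random positions on a 94x50 court
rng = np.random.default_rng(0)
trajs = rng.uniform([0, 0], [94, 50], size=(32, 11, 2))
print(trajectories_to_image(trajs).shape)  # (3, 47, 25), ready for a CNN
```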
1068 Attention-Based Spatio-Temporal Approach for Fire and Smoke Detection
Authors: Alireza Mirrashid, Mohammad Khoshbin, Ali Atghaei, Hassan Shahbazi
Abstract:
In various industries, smoke and fire are two of the most important threats in the workplace. A common method for detecting smoke and fire is the use of infrared thermal and smoke sensors, which cannot be used in outdoor applications. Therefore, the use of vision-based methods seems necessary. The problem of smoke and fire detection is spatiotemporal and requires spatiotemporal solutions. This paper presents a method that uses spatial features along with temporal features to detect smoke and fire in the scene. It consists of three main parts; the task of each part is to reduce the error of the previous part so that the final model has robust performance. The method also uses transformer modules to increase the accuracy of the model. The results show the proper performance of the proposed approach in solving the smoke and fire detection problem, and it can be used to increase workplace safety.
Keywords: attention, fire detection, smoke detection, spatio-temporal
Procedia PDF Downloads 203
1067 Real-Time Image Encryption Using a 3D Discrete Dual Chaotic Cipher
Authors: M. F. Haroun, T. A. Gulliver
Abstract:
In this paper, an encryption algorithm is proposed for real-time image encryption. The scheme employs a dual chaotic generator based on a three-dimensional (3D) discrete Lorenz attractor. Encryption is achieved using non-autonomous modulation where the data is injected into the dynamics of the master chaotic generator. The second generator is used to permute the dynamics of the master generator using the same approach. Since the data stream can be regarded as a random source, the resulting permutations of the generator dynamics greatly increase the security of the transmitted signal. In addition, a technique is proposed to mitigate the error propagation due to the finite precision arithmetic of digital hardware. In particular, truncation and rounding errors are eliminated by employing an integer representation of the data which can easily be implemented. The simple hardware architecture of the algorithm makes it suitable for secure real-time applications.
Keywords: chaotic systems, image encryption, non-autonomous modulation, FPGA
Procedia PDF Downloads 506
1066 Reconstruction of Signal in Plastic Scintillator of PET Using Tikhonov Regularization
Authors: L. Raczynski, P. Moskal, P. Kowalski, W. Wislicki, T. Bednarski, P. Bialas, E. Czerwinski, A. Gajos, L. Kaplon, A. Kochanowski, G. Korcyl, J. Kowal, T. Kozik, W. Krzemien, E. Kubicz, Sz. Niedzwiecki, M. Palka, Z. Rudy, O. Rundel, P. Salabura, N.G. Sharma, M. Silarski, A. Slomski, J. Smyrski, A. Strzelecki, A. Wieczorek, M. Zielinski, N. Zon
Abstract:
The J-PET scanner, which allows for single-bed imaging of the whole human body, is currently under development at the Jagiellonian University. The J-PET detector improves the TOF resolution due to the use of fast plastic scintillators. Since registration of the waveform of signals with duration times of a few nanoseconds is not feasible, novel front-end electronics allowing for sampling in the voltage domain at four thresholds were developed. To take full advantage of these fast signals, a novel scheme for recovering the signal waveform, based on ideas from Tikhonov regularization (TR) and compressive sensing methods, is presented. The prior distribution of the sparse representation is evaluated based on the linear transformation of a training set of signal waveforms using Principal Component Analysis (PCA) decomposition. Besides the advantage of including additional information from the training signals, a further benefit of the TR approach is that the signal recovery problem has an optimal solution which can be determined explicitly. Moreover, from Bayes theory, the properties of the regularized solution, especially its covariance matrix, may be easily derived. This step is crucial for introducing and proving the formula for calculating the signal recovery error. It has been proven that the average recovery error is approximately inversely proportional to the number of samples at voltage levels. The method is tested using signals registered by means of a single detection module of the J-PET detector built from a 30 cm long BC-420 plastic scintillator strip. It is demonstrated that the experimental and theoretical functions describing the recovery errors in the J-PET scenario are largely consistent. The specificity and limitations of the signal recovery method in this application are discussed. It is shown that the PCA basis offers a high level of information compression and an accurate recovery with just eight samples, from four voltage levels, for each signal waveform. Moreover, it is demonstrated that using the recovered signal waveform, instead of the samples at four voltage levels alone, improves the spatial resolution of the hit position reconstruction. The experiment shows that the spatial resolution evaluated based on information from four voltage levels, without recovery of the signal waveform, is equal to 1.05 cm. After applying the information from the four voltage levels to the recovery of the signal waveform, the spatial resolution improves to 0.94 cm, only slightly worse than the 0.93 cm evaluated using the original raw signal. This is very important information since limiting the number of threshold levels in the electronic devices to four leads to a significant reduction of the overall cost of the scanner. The developed recovery scheme is general and may be incorporated in any other investigation where prior knowledge about the signals of interest may be utilized.
Keywords: plastic scintillators, positron emission tomography, statistical analysis, Tikhonov regularization
Procedia PDF Downloads 445
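A minimal sketch of the recovery idea described above: fit a PCA basis to training waveforms, then solve the Tikhonov-regularized least-squares problem in closed form from a few samples. The synthetic pulses, sampling positions, and regularization weight are assumptions, not the J-PET values:

```python
import numpy as np

rng = np.random.default_rng(1)

# Training set of full waveforms (here: synthetic pulses, 100 samples each)
t = np.linspace(0, 1, 100)
train = np.array([np.exp(-((t - 0.4) / (0.05 + 0.02 * rng.random())) ** 2)
                  for _ in range(500)])

# PCA basis from the training set (the prior on signal shape)
mean = train.mean(axis=0)
U, s, Vt = np.linalg.svd(train - mean, full_matrices=False)
B = Vt[:8].T                      # keep 8 principal components, shape (100, 8)

# Measurement: the full waveform is only sampled at a few time indices
idx = np.array([30, 35, 38, 40, 42, 45, 50, 60])   # assumed sampling positions
A = B[idx]                        # maps PCA coefficients -> measured samples

truth = np.exp(-((t - 0.4) / 0.06) ** 2)
y = truth[idx] - mean[idx]

# Tikhonov-regularized closed-form solution (the "optimal solution" the
# abstract mentions): c = (A^T A + lam*I)^{-1} A^T y
lam = 1e-3
c = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)
recovered = mean + B @ c
print("recovery RMSE:", np.sqrt(np.mean((recovered - truth) ** 2)))
```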
1065 Controllable Modification of Glass-Crystal Composites with Ion-Exchange Technique
Authors: Andrey A. Lipovskii, Alexey V. Redkov, Vyacheslav V. Rusan, Dmitry K. Tagantsev, Valentina V. Zhurikhina
Abstract:
The presented research is related to the development of a recently proposed technique for the formation of composite materials, like optical glass-ceramics, with a predetermined structure and properties of the crystalline component. The technique is based on controlling the size and concentration of the crystalline grains using the phenomenon of glass-ceramics decrystallization (vitrification) induced by ion exchange. This phenomenon was discovered and explained in the beginning of the 2000s, while the related theoretical description was given only in 2016. In general, the developed theory enables one to model the process and optimize the conditions of ion-exchange processing of glass-ceramics which provide given properties of the crystalline component, in particular, the profile of the average size of the crystalline grains. The optimization is possible if one knows two dimensionless parameters of the theoretical model. One of them (β) is directly related to the solubility of the crystalline component of the glass-ceramics in the glass matrix, and the other (γ) is equal to the ratio of the characteristic times of ion-exchange diffusion and crystalline grain dissolution. The presented study is dedicated to the development of an experimental technique and simulation which allow determining these parameters. It is shown that these parameters can be deduced from data on the spatial distributions of diffusant concentrations and the average size of crystalline grains in glass-ceramics samples subjected to ion-exchange treatment. Measurements at least at two temperatures and two processing times at each temperature are necessary. The composite material used was a silica-based glass-ceramics with crystalline grains of Li2O·SiO2. Cubic samples of the glass-ceramics (6×6×6 mm³) underwent the ion exchange process in a NaNO3 salt melt at 520 °C (for 16 and 48 h), 540 °C (for 8 and 24 h), 560 °C (for 4 and 12 h), and 580 °C (for 2 and 8 h). The ion exchange processing resulted in glass-ceramics vitrification in the subsurface layers where ion-exchange diffusion took place. Slabs about 1 mm thick were cut from the central part of the samples and their large facets were polished. These slabs were used to find the profiles of diffusant concentrations and the average size of the crystalline grains. The concentration profiles were determined from refractive index profiles measured with a Mach-Zehnder interferometer, and the profiles of the average size of the crystalline grains were determined with micro-Raman spectroscopy. Numerical simulations were based on the developed theoretical model of glass-ceramics decrystallization induced by ion exchange. The simulation of the processes was carried out for different values of the β and γ parameters under all above-mentioned ion exchange conditions. As a result, the temperature dependences of the parameters which provided a reliable coincidence of the simulation and experimental data were found. This ensured adequate modeling of the process of glass-ceramics decrystallization in the 520-580 °C temperature interval. The developed approach provides a powerful tool for fine tuning of the glass-ceramics structure, namely, the concentration and average size of crystalline grains.
Keywords: diffusion, glass-ceramics, ion exchange, vitrification
Procedia PDF Downloads 269
1064 Morphological Features Fusion for Identifying INBREAST-Database Masses Using Neural Networks and Support Vector Machines
Authors: Nadia el Atlas, Mohammed el Aroussi, Mohammed Wahbi
Abstract:
In this paper, a novel technique of mass characterization based on robust feature fusion is presented. The proposed method consists of three main phases: (a) the first phase involves segmenting the masses using edge information; (b) the second phase is to calculate and fuse the most relevant morphological features; (c) the last phase is the classification step, which allows us to classify the images into benign and malignant masses. In this step we have implemented Support Vector Machines (SVM) and Artificial Neural Networks (ANN), which were evaluated with the following performance criteria: confusion matrix, accuracy, sensitivity, specificity, receiver operating characteristic (ROC), and error histogram. The effectiveness of this new approach was evaluated on a recently developed database: the INBREAST database. The fusion of the most appropriate morphological features provided very good results. The SVM gives an accuracy of 64.3%, whereas the ANN classifier gives better results, with an accuracy of 97.5%.
Keywords: breast cancer, mammography, CAD system, features, fusion
Procedia PDF Downloads 599
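A minimal sketch of the classification step with scikit-learn, using synthetic features in place of the fused morphological features; the hidden-layer size and kernel are assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, confusion_matrix

# Synthetic stand-in for fused morphological features (area, perimeter,
# circularity, ...) labeled benign (0) / malignant (1).
X, y = make_classification(n_samples=400, n_features=8, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("ANN", MLPClassifier(hidden_layer_sizes=(16,),
                                        max_iter=2000, random_state=0))]:
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    print(name, "accuracy:", accuracy_score(y_te, pred))
    print(confusion_matrix(y_te, pred))  # rows: true class, cols: predicted
```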
1063 Performance Improvement of Cooperative Scheme in Wireless OFDM Systems
Authors: Ki-Ro Kim, Seung-Jun Yu, Hyoung-Kyu Song
Abstract:
Recently, wireless communication systems have been required to provide high quality and high bit rate data services. Researchers have studied various multiple-antenna schemes to meet this demand. In practical applications, however, it is difficult to deploy multiple antennas because of limited size and cost. Cooperative diversity techniques have been proposed to overcome these limitations, and cooperative communications have been widely investigated to improve the performance of wireless communication. Among diversity schemes, the space-time block code has been widely studied for cooperative communication systems. In this paper, we propose a new cooperative scheme using pre-coding and space-time block coding. The proposed cooperative scheme provides better error performance than a conventional cooperative scheme using space-time block coding alone.
Keywords: cooperative communication, space-time block coding, pre-coding
Procedia PDF Downloads 359
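The space-time block code underlying such cooperative schemes can be illustrated with the classic Alamouti 2×1 scheme; a minimal sketch over a flat-fading channel (the authors' precoded cooperative variant is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two QPSK symbols transmitted over two antennas in two time slots (Alamouti):
# slot 1: (s1, s2), slot 2: (-conj(s2), conj(s1))
bits = rng.integers(0, 2, 4)
s = (1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])
s1, s2 = s / np.sqrt(2)

h1, h2 = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)  # channel
noise = 0.01 * (rng.normal(size=2) + 1j * rng.normal(size=2))

r1 = h1 * s1 + h2 * s2 + noise[0]                      # received in slot 1
r2 = -h1 * np.conj(s2) + h2 * np.conj(s1) + noise[1]   # received in slot 2

# Linear combining recovers both symbols with full diversity
g = abs(h1) ** 2 + abs(h2) ** 2
s1_hat = (np.conj(h1) * r1 + h2 * np.conj(r2)) / g
s2_hat = (np.conj(h2) * r1 - h1 * np.conj(r2)) / g
print(np.round([s1 - s1_hat, s2 - s2_hat], 3))         # ~0 at high SNR
```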
1062 Estimation of Rare and Clustered Population Mean Using Two Auxiliary Variables in Adaptive Cluster Sampling
Authors: Muhammad Nouman Qureshi, Muhammad Hanif
Abstract:
Adaptive cluster sampling (ACS) is specifically developed for the estimation of highly clumped populations and is applied to a wide range of situations, such as animals of rare and endangered species, unevenly distributed minerals, HIV patients, and drug users. In this paper, we propose a generalized semi-exponential estimator with two auxiliary variables under the framework of the ACS design. Expressions for the approximate bias and mean square error (MSE) of the proposed estimator are derived. Theoretical comparisons of the proposed estimator have been made with existing estimators. A numerical study is conducted on real and artificial populations to demonstrate and compare the efficiencies of the proposed estimator. The results indicate that the proposed generalized semi-exponential estimator performs considerably better than all the adaptive and non-adaptive estimators considered in this paper.
Keywords: auxiliary information, adaptive cluster sampling, clustered populations, Hansen-Hurwitz estimation
Procedia PDF Downloads 238
1061 Forecasting Exchange Rate between Thai Baht and the US Dollar Using Time Series Analysis
Authors: Kunya Bowornchockchai
Abstract:
The objective of this research is to forecast the monthly exchange rate between the Thai baht and the US dollar and to compare two forecasting methods: the Box-Jenkins method and Holt's method. Results show that the Box-Jenkins method is the more suitable method for the monthly exchange rate between the Thai baht and the US dollar. The suitable forecasting model is ARIMA(1,1,0) without constant, and the forecasting equation is Y_t = Y_{t-1} + 0.3691 (Y_{t-1} - Y_{t-2}), where Y_t is the value of the time series at time t.
Keywords: Box–Jenkins method, Holt’s method, mean absolute percentage error (MAPE), exchange rate
Procedia PDF Downloads 254
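A minimal sketch applying the quoted ARIMA(1,1,0) forecasting equation; the two monthly rates are illustrative, not data from the study:

```python
def arima_110_forecast(history, phi=0.3691):
    """One-step-ahead forecast for ARIMA(1,1,0) without constant:
    Y_t = Y_{t-1} + phi * (Y_{t-1} - Y_{t-2})."""
    y1, y2 = history[-1], history[-2]
    return y1 + phi * (y1 - y2)

# Hypothetical last two monthly THB/USD rates (illustrative numbers only)
rates = [32.85, 33.10]
print(round(arima_110_forecast(rates), 4))  # 33.10 + 0.3691*(33.10-32.85)
```

With statsmodels, an equivalent model could presumably be fitted via ARIMA(series, order=(1, 1, 0), trend="n") and its forecast method.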
1060 Robot Movement Using the Trust Region Policy Optimization
Authors: Romisaa Ali
Abstract:
The policy gradient approach is one of the deep reinforcement learning families that combines deep neural networks (DNNs) with reinforcement learning (RL) to discover the optimum of a control problem through experience gained from the interaction between the robot and its surroundings. Earlier policy gradient algorithms were unable to handle the over- or under-estimation errors introduced by the deep neural network model. This article discusses a state-of-the-art (SOTA) policy gradient technique, trust region policy optimization (TRPO), applying this method in various environments and comparing it with another policy gradient method, proximal policy optimization (PPO), to explain their robust optimization. This SOTA method is used to gather experience data during various training phases after observing the impact of hyper-parameters on neural network performance.
Keywords: deep neural networks, deep reinforcement learning, proximal policy optimization, state-of-the-art, trust region policy optimization
Procedia PDF Downloads 169
1059 Naïve Bayes: A Classical Approach for the Epileptic Seizures Recognition
Authors: Bhaveek Maini, Sanjay Dhanka, Surita Maini
Abstract:
Electroencephalography (EEG) is used worldwide to identify epileptic seizures. Identifying an epileptic seizure through manual EEG analysis is a crucial and demanding task for the neurologist, as it takes a lot of effort and time, and the risk of human error is always high in EEG, since acquiring the signals needs manual intervention. Disease diagnosis using machine learning (ML) has been continuously explored since its inception; moreover, where a large number of datasets have to be analyzed, ML is acting as a boon for doctors. In this research paper, the authors propose two different ML models, i.e., logistic regression (LR) and Naïve Bayes (NB), to predict epileptic seizures based on general parameters. These two techniques are applied to the Epileptic Seizure Recognition dataset, available in the UCI ML repository. The algorithms are implemented with an 80:20 train-test ratio (80% for training and 20% for testing), and the performance of the models was validated by 10-fold cross-validation. The proposed study achieved accuracies of 81.87% and 95.49% for LR and NB, respectively.
Keywords: epileptic seizure recognition, logistic regression, Naïve Bayes, machine learning
Procedia PDF Downloads 61
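A minimal sketch of the LR-versus-NB comparison with scikit-learn, using a synthetic stand-in for the UCI dataset; the 80:20 split and 10-fold cross-validation follow the abstract, everything else is assumed:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for the EEG-derived feature vectors
# (binary label: seizure / no seizure).
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

for name, clf in [("LR", LogisticRegression(max_iter=1000)),
                  ("NB", GaussianNB())]:
    clf.fit(X_tr, y_tr)
    test_acc = clf.score(X_te, y_te)
    cv_acc = cross_val_score(clf, X, y, cv=10).mean()   # 10-fold validation
    print(f"{name}: test accuracy={test_acc:.4f}, 10-fold CV={cv_acc:.4f}")
```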
1058 Modeling Driving Distraction Considering Psychological-Physical Constraints
Authors: Yixin Zhu, Lishengsa Yue, Jian Sun, Lanyue Tang
Abstract:
Modeling driving distraction in microscopic traffic simulation is crucial for enhancing simulation accuracy. Current driving distraction models are mainly derived from physical motion constraints under distracted states, in which distraction-related error terms are added to existing microscopic driver models. However, the model accuracy is not very satisfying, due to a lack of modeling of the cognitive mechanism underlying the distraction. This study models driving distraction based on the Queueing Network-Model Human Processor (QN-MHP), utilizing the queuing structure of the model to perform task invocation and switching for distracted operation and control of the vehicle under driver distraction. Based on the QN-MHP model's assumption about the cognitive sub-network, server F is a structural bottleneck: later information must wait for earlier information to leave server F before it can be processed there, so the waiting time for task switching needs to be calculated. Since the QN-MHP model has different information processing paths for auditory and visual information, this study divides driving distraction into two types: auditory distraction and visual distraction. For visual distraction, both the visual distraction task and the driving task need to go through the visual perception sub-network, and their stimuli are asynchronous, a condition known as stimulus onset asynchrony (SOA), which must be considered when calculating the waiting time for task switching. In the case of auditory distraction, the auditory distraction task and the driving task do not need to compete for the server resources of the perceptual sub-network, and their stimuli can be synchronized without considering the time difference in receiving the stimuli. According to the Theory of Planned Behavior (TPB) for drivers, this study uses risk entropy as the decision criterion for driver task switching. A logistic regression model with risk entropy as the independent variable is used to determine whether the driver performs a distraction task, explaining the relationship between perceived risk and distraction. Furthermore, to model a driver's perception characteristics, a neurophysiological model of visual distraction tasks is incorporated into the QN-MHP, which then executes the classical Intelligent Driver Model (IDM). The proposed driving distraction model integrates the psychological cognitive process of a driver with the physical motion characteristics, resulting in both high accuracy and interpretability. This paper uses 773 segments of distracted car-following from the Shanghai Naturalistic Driving Study (SH-NDS) data to classify the patterns of distracted behavior on different road facilities and obtains three types of distraction patterns: numbness, delay, and aggressiveness. The model was calibrated and verified by simulation. The results indicate that the model can effectively simulate distracted car-following behavior of different patterns on various roadway facilities, and its performance is better than the traditional IDM model with distraction-related error terms. The proposed model overcomes the limitations of physical-constraints-based models in replicating dangerous driving behaviors and the internal characteristics of an individual. Moreover, the model is demonstrated to effectively generate more dangerous distracted driving scenarios, which can be used to construct high-value automated driving test scenarios.
Keywords: computational cognitive model, driving distraction, microscopic traffic simulation, psychological-physical constraints
Procedia PDF Downloads 91
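The classical Intelligent Driver Model that the abstract builds on has a standard closed form; a minimal sketch with textbook parameter values (not the authors' calibrated distraction model):

```python
import math

def idm_acceleration(v, dv, s, v0=33.3, T=1.6, a=1.5, b=2.0, s0=2.0, delta=4):
    """IDM acceleration [m/s^2].

    v  : own speed [m/s]; dv : approach rate to leader (v - v_leader) [m/s];
    s  : gap to leader [m]. v0 desired speed, T desired time headway,
    a max acceleration, b comfortable deceleration, s0 minimum gap.
    """
    s_star = s0 + max(0.0, v * T + v * dv / (2 * math.sqrt(a * b)))
    return a * (1 - (v / v0) ** delta - (s_star / s) ** 2)

# Follower at 25 m/s closing on a 20 m/s leader with a 30 m gap
print(round(idm_acceleration(v=25.0, dv=5.0, s=30.0), 3))  # strong braking
```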
1057 Experimental Investigation of Natural Frequency and Forced Vibration of Euler-Bernoulli Beam under Displacement of Concentrated Mass and Load
Authors: Aref Aasi, Sadegh Mehdi Aghaei, Balaji Panchapakesan
Abstract:
This work aims to evaluate the free and forced vibration of a beam with two end joints subjected to a concentrated moving mass and a load, using the Euler-Bernoulli method. The natural frequency is calculated for different locations of the concentrated mass and load on the beam. The analytical results are verified by the experimental data. The variation of natural frequency as a function of the location of the mass, the effect of the forcing frequency on the vibration amplitude, and the displacement amplitude versus time are investigated. It is found that as the concentrated mass moves toward the center of the beam, the natural frequency of the beam and the relative error between experimental and analytical data decrease. There is a close resemblance between the analytical data and the experimental observations.
Keywords: Euler-Bernoulli beam, natural frequency, forced vibration, experimental setup
Procedia PDF Downloads 274
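For the analytical baseline, the natural frequencies of a bare simply supported Euler-Bernoulli beam follow f_n = (nπ/L)² √(EI/(ρA)) / (2π); a minimal sketch with assumed beam dimensions (the moving mass and load shift these values, as the abstract reports):

```python
import math

# Assumed steel beam: 1 m long, 20 mm x 5 mm rectangular cross-section
E, rho = 210e9, 7850.0          # Young's modulus [Pa], density [kg/m^3]
L, bw, h = 1.0, 0.020, 0.005    # length, width, thickness [m]
A = bw * h                      # cross-section area [m^2]
I = bw * h ** 3 / 12            # second moment of area [m^4]

for n in (1, 2, 3):
    omega = (n * math.pi / L) ** 2 * math.sqrt(E * I / (rho * A))
    print(f"mode {n}: f = {omega / (2 * math.pi):.1f} Hz")
```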
1056 A Generic Metamodel for Dependability Analysis
Authors: Moomen Chaari, Wolfgang Ecker, Thomas Kruse, Bogdan-Andrei Tabacaru
Abstract:
In our daily life, we frequently interact with complex systems which facilitate our mobility, enhance our access to information, and sometimes help us recover from illnesses or diseases. The reliance on these systems is motivated by the established evaluation and assessment procedures performed during the different phases of the design and manufacturing flow. Such procedures aim to qualify the system's delivered services with respect to their availability, reliability, safety, and other properties generally referred to as dependability attributes. In this paper, we propose a metamodel-based generic characterization of dependability concepts and describe an automation methodology to customize this characterization to different standards and contexts. When integrated into concrete design and verification environments, the proposed methodology promotes the reuse of already available dependability assessment tools and reduces the costs and efforts required to create consistent and efficient artefacts for fault injection or error simulation.
Keywords: dependability analysis, model-driven development, metamodeling, code generation
Procedia PDF Downloads 486
1055 An ANOVA Approach for the Process Parameters Optimization of Al-Si Alloy Sand Casting
Authors: Manjinder Bajwa, Mahipal Singh, Manish Nagpal
Abstract:
This research paper proposes a novel approach using the ANOVA technique for the strategic investigation of process parameters and their effects on the mechanical properties of an aluminium alloy cast. The two process parameters considered here were the permeability of sand and the pouring temperature of the aluminium alloy. ANOVA has been employed for the first time to determine the effects of these selected parameters on the impact strength of the alloy. The experimental results show that this proposed technique has great potential for analyzing the sand casting process. Using this approach, we have determined the treatment mean square, response mean square, and mean square of error as 8.54, 8.255, and 0.435, respectively. The research concluded that, at the 5% level of significance, the permeability of sand is the more significant parameter influencing the impact strength of the cast alloy.
Keywords: aluminium alloy, pouring temperature, permeability of sand, impact strength, ANOVA
Procedia PDF Downloads 448
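A minimal sketch of the underlying one-way ANOVA test with made-up impact-strength data grouped by permeability level; the F statistic is the ratio of treatment to error mean squares, judged at the 5% significance level as in the abstract:

```python
from scipy.stats import f_oneway

# Hypothetical impact strength [J] at three sand permeability levels
low    = [2.1, 2.3, 2.2, 2.0]
medium = [2.6, 2.8, 2.7, 2.9]
high   = [3.1, 3.0, 3.3, 3.2]

F, p = f_oneway(low, medium, high)   # F = MS_treatment / MS_error
print(f"F = {F:.2f}, p = {p:.4f}")
print("significant at 5% level" if p < 0.05 else "not significant")
```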
1054 Efficient Subsurface Mapping: Automatic Integration of Ground Penetrating Radar with Geographic Information Systems
Authors: Rauf R. Hussein, Devon M. Ramey
Abstract:
Integrating ground penetrating radar (GPR) with geographic information systems (GIS) can provide valuable insights for various applications, such as archaeology, transportation, and utility locating. Although there has been progress toward automating the integration of GPR data with GIS, fully automatic integration has not yet been achieved, and manually integrating GPR data with GIS can be a time-consuming and error-prone process. In this study, actual, real-world GPR applications are presented, and a software package named GPR-GIS 10 is created to interactively extract subsurface targets from GPR radargrams and automatically integrate them into GIS. With this software, it is possible to quickly and reliably integrate the two techniques to create informative subsurface maps. The results indicated that automatic integration of GPR with GIS can be an efficient way to map and view any subsurface target in its appropriate location in 3D space with the needed precision. The findings of this study could help GPR-GIS integrators save time and reduce errors in many GPR-GIS applications.
Keywords: GPR, GIS, GPR-GIS 10, drone technology, automation
Procedia PDF Downloads 92
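The hand-off the abstract describes, from extracted target coordinates to a GIS layer, can be scripted; a minimal sketch with geopandas (an illustration, not the GPR-GIS 10 implementation; coordinates and CRS are made up):

```python
import pandas as pd
import geopandas as gpd

# Hypothetical picks exported from radargram interpretation: trace position
# and two-way travel time already converted to depth.
picks = pd.DataFrame({
    "easting":  [500010.2, 500012.7, 500015.1],
    "northing": [4649871.5, 4649872.0, 4649872.6],
    "depth_m":  [0.8, 0.9, 1.1],
})
gdf = gpd.GeoDataFrame(
    picks,
    geometry=gpd.points_from_xy(picks.easting, picks.northing),
    crs="EPSG:32633",  # assumed UTM zone; replace with the survey CRS
)
gdf.to_file("gpr_targets.gpkg", driver="GPKG")  # ready to load in any GIS
```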
1053 Modeling of Global Solar Radiation on a Horizontal Surface Using Artificial Neural Network: A Case Study
Authors: Laidi Maamar, Hanini Salah
Abstract:
The present work investigates the potential of an artificial neural network (ANN) model to predict the horizontal global solar radiation (HGSR). The ANN is developed and optimized using a three-year meteorological database (2011 to 2013) from the meteorological station of Blida (Blida 1 University, Algeria; latitude 36.5°, longitude 2.81°, 163 m above mean sea level). The optimal configuration of the ANN model was determined by minimizing the root mean square error (RMSE) and maximizing the correlation coefficient (R²) between observed data and data predicted with the ANN model. To select the best ANN architecture, we conducted several tests using different combinations of parameters. A two-layer ANN model with six hidden neurons was found to be the optimal topology, with RMSE = 4.036 W/m² and R² = 0.999. A graphical user interface (GUI) was designed, based on the best network structure and training algorithm, to make the model easier to use.
Keywords: artificial neural network, global solar radiation, solar energy, prediction, Algeria
Procedia PDF Downloads 498
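A minimal sketch of the abstract's optimal topology, a network with one hidden layer of six neurons, using scikit-learn on synthetic stand-ins for the meteorological inputs:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(0)

# Synthetic stand-in for the meteorological inputs (e.g., day of year,
# temperature, humidity, sunshine duration) and the measured HGSR target.
X = rng.uniform(size=(1000, 4))
y = 800 * X[:, 0] + 100 * X[:, 1] - 50 * X[:, 2] + 10 * rng.normal(size=1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# One hidden layer with six neurons, as in the abstract's optimal topology
ann = MLPRegressor(hidden_layer_sizes=(6,), max_iter=5000, random_state=0)
ann.fit(X_tr, y_tr)
pred = ann.predict(X_te)
print("RMSE:", mean_squared_error(y_te, pred) ** 0.5)
print("R2  :", r2_score(y_te, pred))
```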
1052 Agriculture and Global Economy vis-à-vis the Climate Change
Authors: Assaad Ghazouani, Ati Abdessatar
Abstract:
Around the world, agriculture maintains social and economic importance in the national economy. Its importance is distinguished by its ripple effects, not only downstream but also upstream, vis-à-vis the non-agricultural sector. However, the situation is relatively fragile because of weather conditions. In this work, we propose a model to highlight the impacts of climate change (CC) on economic growth in a world where agriculture is considered a strategic sector. The CC is assumed to affect economic growth directly and indirectly by reducing the performance of the agricultural sector. The model is tested for Tunisia. The results validate the hypothesis that the potential economic damage of the CC is important. Indeed, an increase in CO2 concentration (temperatures and disruption of rainfall patterns) will have an impact on global economic growth, particularly by reducing the performance of the agricultural sector. Analysis from a vector error correction model also highlights the magnitude of the climate impact on the performance of the agricultural sector and its repercussions on economic growth.
Keywords: climate change, agriculture, economic growth, world, VECM, cointegration
Procedia PDF Downloads 619
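A minimal sketch of fitting a vector error correction model with statsmodels on synthetic series standing in for growth, agricultural output, and a climate index; the lag order and cointegration rank are assumptions, not the paper's specification:

```python
import numpy as np
from statsmodels.tsa.vector_ar.vecm import VECM

rng = np.random.default_rng(0)

# Two synthetic cointegrated series plus an independent random walk,
# standing in for GDP growth, agricultural output, and a climate index.
n = 200
common = np.cumsum(rng.normal(size=n))          # shared stochastic trend
data = np.column_stack([
    common + rng.normal(scale=0.5, size=n),
    0.8 * common + rng.normal(scale=0.5, size=n),
    np.cumsum(rng.normal(size=n)),
])

model = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="co")
res = model.fit()
print(res.alpha)   # adjustment coefficients (speed of error correction)
print(res.beta)    # cointegrating vector(s)
```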
1051 Usage of Military Spending, Debt Servicing and Growth for Dealing with Emergency Plan of Indian External Debt
Authors: Sahbi Farhani
Abstract:
This study investigates the relationship between external debt and military spending in the case of India over the period 1970–2012. In doing so, we applied structural break unit root tests to examine the stationarity properties of the variables. The Auto-Regressive Distributed Lag (ARDL) bounds testing approach is used to test whether cointegration exists in the presence of structural breaks in the series. Our results indicate cointegration among external debt, military spending, debt servicing, and economic growth. Moreover, military spending and debt servicing add to external debt, while economic growth helps in lowering external debt. The Vector Error Correction Model (VECM) analysis and the Granger causality test reveal that military spending and economic growth cause external debt. A feedback effect also exists between external debt and debt servicing in the case of India.
Keywords: external debt, military spending, ARDL approach, India
Procedia PDF Downloads 296
1050 VANETs Geographic Routing Protocols: A Survey
Authors: Ramin Karimi
Abstract:
Vehicular ad hoc networks (VANETs) are among the most common highly mobile wireless ad hoc networks; hence, routing in VANETs has attracted much attention during the last few years. A VANET is characterized by the high mobility of its nodes and by specific topology patterns. Moreover, these networks encounter a significant loss rate and very short communication durations. In vehicular ad hoc networks, one of the challenges is the routing of data, due to the high-speed mobility and changing topology of vehicles. Geographic routing protocols are becoming popular due to the advancement and availability of GPS devices. Delay tolerant networks (DTNs) are a class of networks that enable communication where connectivity issues exist, such as sparse connectivity, intermittent connectivity, high latency, long delay, high error rates, asymmetric data rates, and even a lack of end-to-end connectivity. In this paper, we review the existing geographic routing protocols for VANETs and also provide a qualitative comparison of them.
Keywords: vehicular ad hoc networks, mobility, geographic routing, delay tolerant networks
Procedia PDF Downloads 520
1049 Attention and Memory in the Music Learning Process in Individuals with Visual Impairments
Authors: Lana Burmistrova
Abstract:
Introduction: The influence of visual impairments on several cognitive processes used in the music learning process is an increasingly important area in special education and cognitive musicology. Many children have several visual impairments due to refractive errors and irreversible inhibitors. However, based on compensatory neuroplasticity and functional reorganization, congenitally blind (CB) and early blind (EB) individuals use several areas of the occipital lobe to perceive and process auditory and tactile information. CB individuals have greater memory capacity and memory reliability, and fewer false memory mechanisms are used while executing several tasks; they have better working memory (WM) and short-term memory (STM). Blind individuals use several strategies while executing tactile and working memory n-back tasks: a verbalization strategy (mental recall), a tactile strategy (tactile recall), and combined strategies. Methods and design: The aim of the pilot study was to substantiate similar tendencies while executing attention, memory, and combined auditory tasks, constructed for this study, in blind and sighted individuals, and to investigate the attention, memory, and combined mechanisms used in the music learning process. For this study, eight (n=8) blind and eight (n=8) sighted individuals aged 13-20 were chosen. All respondents had more than five years of music performance and music learning experience. In the attention task, all respondents had to identify pitch changes in tonal and randomized melodic pairs. The memory task was based on the mismatch negativity (MMN) proportion theory: 80 percent standard (unchanged) and 20 percent deviant (changed) stimuli (sequences). Every sequence was named (na-na, ra-ra, za-za) and several items (pencil, spoon, tealight) were assigned to each sequence. Respondents had to recall the sequences, associate them with the items, and detect possible changes. While executing the combined task, all respondents had to focus attention on the pitch changes and had to detect and describe them during the recall. Results and conclusion: The results support specific features in CB and EB individuals, and similarities between late blind (LB) and sighted individuals. While executing attention and memory tasks, it was possible to observe a tendency in CB and EB individuals to use more precise execution tactics and more advanced periodic memory while focusing on auditory and tactile stimuli. While executing memory and combined tasks, CB and EB individuals used passive working memory to recall standard sequences, active working memory to recall deviant sequences, and combined strategies. Based on the observation results, the assessment of blind respondents, and recording specifics, the following attention and memory correlations were identified: reflective attention and STM, reflective attention and periodic memory, auditory attention and WM, tactile attention and WM, auditory-tactile attention and STM. The results and the summary of findings highlight the attention and memory features used in the music learning process in the context of blindness, and the tendency of several attention and memory types to correlate based on task, strategy, and individual features.
Keywords: attention, blindness, memory, music learning, strategy
Procedia PDF Downloads 184
1048 Prediction of the Thermodynamic Properties of Hydrocarbons Using Gaussian Process Regression
Authors: N. Alhazmi
Abstract:
Knowing the thermodynamic properties of hydrocarbons is vital when it comes to analyzing the outcomes of related chemical reactions and understanding the reaction process, especially in terms of petrochemical industrial applications, combustion, and catalytic reactions. However, measuring thermodynamic properties experimentally is time-consuming and costly. In this paper, Gaussian process regression (GPR) has been used to directly predict the main thermodynamic properties - standard enthalpy of formation, standard entropy, and heat capacity - for more than 360 cyclic and non-cyclic alkanes, alkenes, and alkynes. A simple workflow has been proposed that can be applied to directly predict the main properties of any hydrocarbon from its descriptors and chemical structure, and it can be generalized to predict the main properties of any material. The model was evaluated by calculating the coefficient of determination R², which was more than 0.9794 for all the predicted properties.
Keywords: thermodynamics, Gaussian process regression, hydrocarbons, regression, supervised learning, entropy, enthalpy, heat capacity
Procedia PDF Downloads 222
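A minimal sketch of the GPR workflow with scikit-learn, using synthetic molecular descriptors in place of the real ones; the kernel choice and noise level are assumptions:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Synthetic stand-in for molecular descriptors (e.g., carbon count, branch
# count, ring count) and a thermodynamic target such as standard entropy.
X = rng.uniform(size=(360, 3))
y = (150 + 200 * X[:, 0] + 30 * X[:, 1] - 20 * X[:, 2]
     + rng.normal(scale=2, size=360))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

gpr = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), alpha=1e-2,
                               normalize_y=True, random_state=0)
gpr.fit(X_tr, y_tr)
pred, std = gpr.predict(X_te, return_std=True)  # GPR also gives uncertainty
print("R2:", r2_score(y_te, pred))
```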
1047 Economic Design of a Quality Control Chart for the Proportion of Defective Items
Authors: Encarnación Álvarez-Verdejo, Raúl Amor-Pulido, Pablo J. Moya-Fernández, Juan F. Muñoz-Rosas, Francisco J. Blanco-Encomienda
Abstract:
Many companies use the set of statistical tools known as statistical quality control, which can have a high cost for the companies interested in them. The evaluation of the quality of products and services is an important topic, but reducing the cost of implementing statistical quality control also has important benefits for companies. For this reason, it is important to adopt an economic design for the various steps included in statistical quality control. In this paper, we describe some relevant aspects related to the economic design of a quality control chart for the proportion of defective items. They are very important because the suggested issues can reduce the cost of implementing such a chart. Note that the main purpose of this chart is to evaluate and control the proportion of defective items in a production process.
Keywords: proportion, type I error, economic plan, distribution function
Procedia PDF Downloads 443
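A minimal sketch of the standard three-sigma p-chart the abstract refers to, on made-up inspection counts; the economic-design step (choosing sample size and limit width from cost parameters) is not shown:

```python
import numpy as np

n = 200                                     # items inspected per sample
defectives = np.array([8, 11, 6, 9, 14, 7, 10, 12, 9, 8])  # made-up counts
p = defectives / n

p_bar = p.mean()                            # center line
sigma = np.sqrt(p_bar * (1 - p_bar) / n)
ucl = p_bar + 3 * sigma                     # three-sigma control limits
lcl = max(0.0, p_bar - 3 * sigma)

print(f"CL={p_bar:.4f}  LCL={lcl:.4f}  UCL={ucl:.4f}")
for i, pi in enumerate(p, start=1):
    if not lcl <= pi <= ucl:
        print(f"sample {i} out of control: p={pi:.4f}")
```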
1046 A High Performance Piano Note Recognition Scheme via Precise Onset Detection and Segmented Short-Time Fourier Transform
Authors: Sonali Banrjee, Swarup Kumar Mitra, Aritra Acharyya
Abstract:
A piano note recognition method is proposed by the authors in this paper. The authors use a comprehensive method for onset detection of each note present in a piano piece, followed by a segmented short-time Fourier transform (STFT) for the identification of the piano notes. The performance of the proposed method has been evaluated in different harsh noisy environments by adding additive white Gaussian noise (AWGN) at different signal-to-noise ratios (SNRs) to the original signal and evaluating the note detection error rate (NDER) of different piano pieces consisting of different numbers of notes. The NDER is found to remain within 15% for all piano pieces under consideration when the SNR is kept above 8 dB.
Keywords: AWGN, onset detection, piano note, STFT
Procedia PDF Downloads 160
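A minimal sketch of the STFT-based identification step on a synthetic tone with added AWGN, using scipy; the onset-detection front end and the authors' segmentation are not reproduced:

```python
import numpy as np
from scipy.signal import stft

fs = 44100
t = np.arange(int(0.5 * fs)) / fs
note = np.sin(2 * np.pi * 440.0 * t)            # synthetic A4 "piano note"
noisy = note + 0.1 * np.random.default_rng(0).normal(size=note.size)  # AWGN

f, times, Z = stft(noisy, fs=fs, nperseg=4096)
peak_bin = np.abs(Z).mean(axis=1).argmax()      # strongest average frequency
f0 = f[peak_bin]

midi = round(69 + 12 * np.log2(f0 / 440.0))     # map frequency -> MIDI note
print(f"detected f0 ~ {f0:.1f} Hz, MIDI note {midi}")  # expect ~440 Hz, 69
```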
1045 Optimal Feedback Linearization Control of PEM Fuel Cell
Authors: E. Shahsavari, R. Ghasemi, A. Akramizadeh
Abstract:
This paper presents a new method to design a nonlinear feedback linearization controller for polymer electrolyte membrane fuel cells (PEMFCs). A nonlinear controller is designed based on the nonlinear model to prolong the stack life of PEM fuel cells. Since it is known that large deviations between the hydrogen and oxygen partial pressures can cause severe membrane damage in the fuel cell, feedback linearization is applied to the PEM fuel cell system so that the deviation can be kept as small as possible during disturbances or load variations. To obtain an accurate feedback linearization controller, tuning the linear parameters is always important, so in the proposed study, the NSGA-II method was used to tune the designed controller with the aim of decreasing the controller tracking error. The simulation results showed that the proposed method tuned the controller efficiently.
Keywords: nonlinear dynamic model, polymer electrolyte membrane fuel cells, feedback linearization, optimal control, NSGA-II
Procedia PDF Downloads 518
1044 Utilizing Grid Computing to Enhance Power Systems Performance
Authors: Rafid A. Al-Khannak, Fawzi M. Al-Naima
Abstract:
Power load is one of the most important controlling keys: it decides power demand and illustrates power usage so as to shape the power market. Hence, power load forecasting is the parameter which facilitates understanding and analyzing all these aspects. In this paper, power load forecasting is solved in the MATLAB environment by constructing a neural network for the power load to find an accurate simulated solution with minimum error. The aim of this paper is a developed algorithm that achieves the load forecasting application with a faster technique. The algorithm is used to enable the MATLAB power application to be implemented by multiple machines in a grid computing system, and to accomplish it within much less time, at lower cost, and with high accuracy and quality. Grid computing, the modern computational distribution technology, has been used to enhance the performance of power applications by utilizing idle and willing grid contributor(s), sharing their computational power resources.
Keywords: DeskGrid, Grid Server, idle contributor(s), grid computing, load forecasting
Procedia PDF Downloads 475
1043 Comparison between Bernardi’s Equation and Heat Flux Sensor Measurement as Battery Heat Generation Estimation Method
Authors: Marlon Gallo, Eduardo Miguel, Laura Oca, Eneko Gonzalez, Unai Iraola
Abstract:
The heat generation of an energy storage system is an essential topic when designing a battery pack and its cooling system. Heat generation estimation is used together with thermal models to predict battery temperature in operation and to adapt the design of the battery pack and the cooling system to these thermal needs, guaranteeing safe and correct operation. In the present work, a comparison is presented between the use of a heat flux sensor (HFS) for indirect measurement of heat losses in a cell and the widely used, simplified version of Bernardi’s equation. First, a Li-ion cell is thermally characterized with an HFS to measure the thermal parameters that are used in a first-order lumped thermal model. These parameters are the equivalent thermal capacity and the equivalent thermal resistance of a single Li-ion cell. Static tests (no current flowing through the cell) and dynamic tests (current flowing through the cell) are conducted, in which the HFS is used to measure the heat exchanged between the cell and the ambient, so the thermal capacity and resistances, respectively, can be calculated. An experimental platform records current, voltage, ambient temperature, surface temperature, and the HFS output voltage. Second, an equivalent circuit model is built in a Matlab-Simulink environment. This allows the comparison between the generated heat predicted by Bernardi’s equation and the HFS measurements. Data post-processing is required to extrapolate the heat generation from the HFS measurements, as the sensor records the heat released to the ambient and not the heat generated within the cell. Finally, the cell temperature evolution is estimated with the lumped thermal model (using both the HFS and Bernardi’s equation total heat generation) and compared against experimental temperature data (measured with a T-type thermocouple). At the end of this work, a critical review of the results obtained and the possible reasons for mismatch are reported. The results show that indirectly measuring the heat generation with an HFS gives a more precise estimation than Bernardi’s simplified equation. On the one hand, when using Bernardi’s simplified equation, the estimated heat generation differs from cell temperature measurements during charges at high current rates. Additionally, for low-capacity cells, where a small change in capacity has a great influence on the terminal voltage, the estimated heat generation shows a high dependency on the State of Charge (SoC) estimation, and therefore on the open circuit voltage calculation (as it is SoC dependent). On the other hand, by indirectly measuring the heat generation with the HFS, the resulting error is a maximum of 0.28 °C in the temperature prediction, in contrast with 1.38 °C with Bernardi’s simplified equation. This illustrates the limitations of Bernardi’s simplified equation for applications where precise heat monitoring is required. For higher current rates, Bernardi’s equation estimates more heat generation and, consequently, a higher predicted temperature. Bernardi’s equation accounts for no losses after cutting the charging or discharging current. However, the HFS measurement shows that after cutting the current the cell continues generating heat for some time, increasing the error of Bernardi’s equation.
Keywords: lithium-ion battery, heat flux sensor, heat generation, thermal characterization
Procedia PDF Downloads 389
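A minimal sketch of the simplified Bernardi equation the study compares against: heat is the sum of an irreversible term I·(U_ocv − V) and a reversible entropic term −I·T·(dU_ocv/dT). The sign convention (discharge current positive) and all numbers are assumptions:

```python
def bernardi_heat(current_a, v_cell, v_ocv, temp_k, docv_dt):
    """Simplified Bernardi heat generation [W].

    Sign convention (an assumption here): current_a > 0 on discharge.
    Irreversible term I*(U_ocv - V) plus reversible (entropic) term
    -I*T*dU_ocv/dT.
    """
    irreversible = current_a * (v_ocv - v_cell)
    reversible = -current_a * temp_k * docv_dt
    return irreversible + reversible

# Illustrative values: roughly 1C discharge of a small cell
q = bernardi_heat(current_a=2.5,      # A
                  v_cell=3.55,        # measured terminal voltage [V]
                  v_ocv=3.65,         # open circuit voltage at this SoC [V]
                  temp_k=298.15,
                  docv_dt=-0.1e-3)    # entropic coefficient [V/K], assumed
print(f"heat generation ~ {q:.3f} W")
```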
1042 The Quality Health Services and Patient Satisfaction in Hospital
Authors: Nadia Fatima Zahra Malki
Abstract:
Quality is one of the most important modern management concepts that organizations seek to achieve in all areas and sectors in order to meet the needs and desires of customers and to ensure survival and continuity, as it constitutes a competitive advantage for the organization. Among the most prominent organizations for which the quality factor is essential are health organizations, as they deal with the most valuable component of production: a person and his health, where any error threatens his life and may lead to death. They must therefore provide health services of high quality to achieve the highest degree of patient satisfaction. This research aims to study the quality of health services and the extent of their impact on patient satisfaction, through an applied study that measured the level of quality of health services in a university hospital center in Algeria and the extent of their impact on patient satisfaction according to the dimensions of the quality of health services. We reached the conclusion that the determinants of the quality of health services affect patient satisfaction, which necessitates developing health services according to patients' requirements and improving their quality to obtain patient satisfaction.
Keywords: health service, health quality, quality determinants, patient satisfaction
Procedia PDF Downloads 62