Search results for: Squared Error (SE) loss function
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9690

9510 Sentiment Analysis of Consumers’ Perceptions on Social Media about the Main Mobile Providers in Jamaica

Authors: Sherrene Bogle, Verlia Bogle, Tyrone Anderson

Abstract:

In recent years, organizations have become increasingly interested in analyzing social media as a means of gaining meaningful feedback about their products and services. An aspect-based sentiment analysis approach is used to predict the sentiment of Twitter datasets for Digicel and Lime, the main mobile companies in Jamaica, using supervised learning classification techniques. The results indicate an average accuracy of 82.2 percent in classifying tweets across three separate classification algorithms, against the purported baseline of 70 percent, and an average root mean squared error of 0.31. These results indicate that sentiment analysis of social media can be a viable source of customer feedback for mobile companies looking to improve business performance.
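
As an illustration of the supervised-learning setup described above, the sketch below trains a tiny naive Bayes classifier on a handful of invented example tweets (not the Digicel/Lime dataset) and reports accuracy and root mean squared error, the two metrics the abstract cites. The data, vocabulary, and balanced class prior are all hypothetical.

```python
from collections import Counter
import math

# Invented example tweets (NOT the Digicel/Lime dataset); 1 = positive, 0 = negative
train = [
    ("love the new data plan great service", 1),
    ("fast network and friendly support", 1),
    ("really happy with coverage today", 1),
    ("terrible signal dropped calls again", 0),
    ("worst customer service ever so slow", 0),
    ("awful network outage all day", 0),
]

def fit(docs):
    """Count word frequencies per class for a naive Bayes classifier."""
    counts = {0: Counter(), 1: Counter()}
    totals = {0: 0, 1: 0}
    vocab = set()
    for text, label in docs:
        for word in text.split():
            counts[label][word] += 1
            totals[label] += 1
            vocab.add(word)
    return counts, totals, vocab

def predict(text, counts, totals, vocab):
    """Classify by log-probability with Laplace smoothing; classes assumed balanced."""
    scores = {}
    for label in (0, 1):
        logp = math.log(0.5)
        for word in text.split():
            logp += math.log((counts[label][word] + 1) / (totals[label] + len(vocab)))
        scores[label] = logp
    return max(scores, key=scores.get)

counts, totals, vocab = fit(train)
preds = [predict(text, counts, totals, vocab) for text, _ in train]
labels = [label for _, label in train]
accuracy = sum(p == y for p, y in zip(preds, labels)) / len(train)
rmse = math.sqrt(sum((p - y) ** 2 for p, y in zip(preds, labels)) / len(train))
```

A real experiment would of course use a held-out test split and the actual Twitter data; this sketch only shows how the two reported metrics are computed from predictions and labels.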

Keywords: machine learning, sentiment analysis, social media, supervised learning

Procedia PDF Downloads 434
9509 An Analysis of Socio-Demographics, Living Conditions, and Physical and Emotional Child Abuse Patterns in the Context of the 2010 Haiti Earthquake

Authors: Sony Subedi, Colleen Davison, Susan Bartels

Abstract:

Objective: The aim of this study is to i) investigate the socio-demographics and living conditions of households in Haiti pre- and post-2010 earthquake, ii) determine the household prevalence of emotional and physical abuse in children (aged 2-14) after the earthquake, and iii) explore the association between earthquake-related loss and experience of emotional and physical child abuse in the household while considering potential confounding variables and the interactive effects of a number of social, economic, and demographic factors. Methods: A nationally representative sample of Haitian households from the 2005/6 and 2012 phases of the Demographic and Health Surveys (DHS) was used. Descriptive analysis was summarized using frequencies and measures of central tendency. Chi-squared and independent t-tests were used to compare data that were available pre- and post-earthquake. The association between experiences of earthquake-related loss and emotional and physical child abuse was assessed using log-binomial regression models. Results: Comparing pre- and post-earthquake data, noteworthy improvements were observed post-earthquake in the educational attainment of the household head (a 9.1% decrease in the “no education” category) and in possession of the following household items: electricity, television, mobile phone, and radio. Approximately 77.0% of children aged 2-14 experienced at least one form of physical abuse and 78.5% of children experienced at least one form of emotional abuse in the month prior to the 2012 survey period. Analysis regarding the third objective (association between experiences of earthquake-related loss and emotional and physical child abuse) is in progress. Conclusions: The extremely high prevalence of emotional and physical child abuse in Haiti indicates an immediate need for improvements in the enforcement of existing policies and interventions aimed at decreasing child abuse in the household.

Keywords: Haiti earthquake, physical abuse, emotional abuse, natural disasters, children

Procedia PDF Downloads 177
9508 Investigation of the Decisive Factors on the Slump Loss: A Case Study of Cement Factors (Portland Cement Type 2)

Authors: M. B. Ahmadi, A. A. Kaffash B., B. Mobaraki

Abstract:

Slump loss, which refers to the gradual reduction of workability and the amount of slump in fresh concrete over time, is one of the significant challenges in the ready-mixed concrete industry. Therefore, accurate knowledge of the factors affecting slump loss is crucial in this field. In this paper, an attempt was made to investigate the effect of cement produced by different units on the slump of concrete in a laboratory setting. For this purpose, 12 cement samples were prepared from 6 different production units. Physical and chemical tests were performed on the cement samples. Subsequently, a laboratory concrete mix with a slump of 13 ± 1 cm was prepared with each cement sample, and the slump was measured at 0, 15, 30, 45, and 60 minutes. Although the environmental factors, mix design specifications, and execution conditions, all of which significantly influence the slump loss trend, were constant across all 12 laboratory concrete mixes, the slump loss trends differed among them. These trends were categorized based on the results, and relationships among the slump loss percentage at 60 minutes, the water-cement ratio, and the LOI and K2O values of the different cements were presented.

Keywords: concrete, slump loss, portland cement, efficiency

Procedia PDF Downloads 66
9507 Composite Forecasts Accuracy for Automobile Sales in Thailand

Authors: Watchareeporn Chaimongkol

Abstract:

In this paper, we compare the accuracy of composite forecasting models for estimating automobile customer demand in Thailand. A modified simple exponential smoothing and autoregressive integrated moving average (ARIMA) forecasting model is built to estimate customer demand for passenger cars, rather than relying on historical sales data alone. Our model takes into account special characteristics of the Thai automobile market such as sales promotions, advertising and publicity, petrol prices, and interest rates for loans. We evaluate our forecasting model by comparing forecasts with actual data using six accuracy measures: mean absolute percentage error (MAPE), geometric mean absolute error (GMAE), symmetric mean absolute percentage error (sMAPE), mean absolute scaled error (MASE), median relative absolute error (MdRAE), and geometric mean relative absolute error (GMRAE).
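
The six accuracy measures named above can be sketched directly in NumPy. The toy arrays in the tests are invented, and for the two relative-error measures the benchmark series is assumed to be supplied by the user (commonly a naive last-value forecast):

```python
import numpy as np

def mape(actual, forecast):
    """Mean absolute percentage error (percent)."""
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return float(100.0 * np.mean(np.abs((a - f) / a)))

def gmae(actual, forecast):
    """Geometric mean absolute error."""
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return float(np.exp(np.mean(np.log(np.abs(a - f)))))

def smape(actual, forecast):
    """Symmetric MAPE (percent)."""
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    return float(100.0 * np.mean(2.0 * np.abs(f - a) / (np.abs(a) + np.abs(f))))

def mase(actual, forecast, insample):
    """Mean absolute scaled error: MAE scaled by the in-sample MAE
    of the one-step naive (last value) method."""
    a, f = np.asarray(actual, float), np.asarray(forecast, float)
    scale = np.mean(np.abs(np.diff(np.asarray(insample, float))))
    return float(np.mean(np.abs(a - f)) / scale)

def mdrae(actual, forecast, benchmark):
    """Median relative absolute error against a benchmark forecast."""
    a, f, b = (np.asarray(v, float) for v in (actual, forecast, benchmark))
    return float(np.median(np.abs(a - f) / np.abs(a - b)))

def gmrae(actual, forecast, benchmark):
    """Geometric mean relative absolute error against a benchmark forecast."""
    a, f, b = (np.asarray(v, float) for v in (actual, forecast, benchmark))
    return float(np.exp(np.mean(np.log(np.abs(a - f) / np.abs(a - b)))))
```

These definitions follow the standard formulations; the paper does not specify its exact variants, so treat the details (e.g. the naive scaling inside MASE) as assumptions.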

Keywords: composite forecasting, simple exponential smoothing model, autoregressive integrated moving average model selection, accuracy measurements

Procedia PDF Downloads 355
9506 A Neuropsychological Investigation of the Relationship between Anxiety Levels and Loss of Inhibitory Cognitive Control in Ageing and Dementia

Authors: Nasreen Basoudan, Andrea Tales, Frederic Boy

Abstract:

Non-clinical anxiety may comprise state anxiety - temporarily experienced anxiety related to a specific situation - and trait anxiety - a longer-lasting response or a general disposition to anxiety. While temporary and occasional anxiety, whether as a mood state or a personality dimension, is normal, non-clinical anxiety may influence many more components of information processing than previously recognized. In ageing and dementia-related research, disease characterization now involves attempts to understand a much wider range of brain function, such as loss of inhibitory control, as against the more common focus on memory and cognition. However, in many studies, the tendency has been to include individuals with clinical anxiety disorders while excluding persons with lower levels of state or trait anxiety. Loss of inhibitory cognitive control can lead to behaviors such as aggression, reduced sensitivity to others, and sociopathic thoughts and actions. Anxiety has also been linked to inhibitory control, with research suggesting that people with anxiety are less capable of inhibiting their emotions than the average person. This study investigates the relationship between anxiety and loss of inhibitory control in younger and older adults, using a variety of questionnaires and computer-based tests. Based on the premise that, irrespective of classification, anxiety is associated with a wide range of physical, affective, and cognitive responses, this study explores evidence indicative of the potential influence of anxiety per se on loss of inhibitory control, in order to contribute to discussion and appropriate consideration of anxiety-related factors in methodological practice.

Keywords: anxiety, ageing, dementia, inhibitory control

Procedia PDF Downloads 236
9505 The Phosphatidate Phosphatase Pah1 and Its Regulator Nem1/spo7 Protein Phosphatase Required for Nucleophagy

Authors: Muhammad Arifur Rahman, Talukdar M. Waliullah, Takashi Ushimaru

Abstract:

Nucleophagy selectively degrades nuclear materials, especially the nucleolus, after nutrient starvation or inactivation of TORC1 kinase in budding yeast. The budding yeast phosphatidate (PA) phosphatase Pah1, which converts PA to diacylglycerol, is essential for partitioning lipid precursors between membrane and storage pools, which is crucial for many aspects of cell growth and development. Pah1 is required for nuclear/ER membrane biogenesis and vacuole function, but whether Pah1 and its activator, the Nem1/Spo7 protein phosphatase complex, are involved in autophagy is largely unknown. Loss of Pah1 causes expansion of the nucleus and fragmentation of the vacuole. Here we show that Pah1 is required for bulk autophagy and nucleophagy after TORC1 inactivation. Loss of Pah1 impaired nucleophagy severely and bulk autophagy to a lesser extent. Loss of the Pah1 activator Nem1/Spo7 protein phosphatase produced similar defects.

Keywords: autophagy, Nem1/Spo7 phosphatase, Pah1, nucleophagy, TORC1

Procedia PDF Downloads 213
9504 Development of a General Purpose Computer Programme Based on Differential Evolution Algorithm: An Application towards Predicting Elastic Properties of Pavement

Authors: Sai Sankalp Vemavarapu

Abstract:

This paper discusses the application of machine learning in the field of transportation engineering for predicting engineering properties of pavement more accurately and efficiently. Predicting the elastic properties aids us in assessing current road conditions and taking appropriate measures to avoid any inconvenience to commuters. This improves the longevity and sustainability of the pavement layer while reducing its overall life-cycle cost. As an example, we have implemented differential evolution (DE) in the back-calculation of the elastic modulus of multi-layered pavement. The proposed DE global optimization back-calculation approach is integrated with a forward response model. This approach treats back-calculation as a global optimization problem where the cost function to be minimized is defined as the root mean square error between measured and computed deflections. The optimal solution, which is the elastic modulus in this case, is searched for in the solution space by the DE algorithm. The best DE parameter combinations and optimal values are reported so that the results are reproducible whenever the need arises. The algorithm’s performance in varied scenarios was analyzed by changing the input parameters. The prediction was well within the permissible error, demonstrating the suitability of DE.
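
A minimal sketch of the approach, assuming a deliberately simplified one-parameter forward model in place of a real multilayer pavement response program: differential evolution searches the modulus space for the value minimizing the RMSE between "measured" and computed deflections. The load, sensor offsets, and modulus bounds below are all hypothetical.

```python
import numpy as np

def deflection(modulus, radii, load=40e3):
    """Toy forward response model (a stand-in for a real multilayer
    pavement program): deflection falls off with sensor offset and is
    inversely proportional to the layer modulus. Units are illustrative."""
    return load / (modulus * radii)

def rmse(modulus, radii, measured):
    """Cost function: root mean square error between measured and
    computed deflections, as in the back-calculation formulation."""
    return float(np.sqrt(np.mean((measured - deflection(modulus, radii)) ** 2)))

def differential_evolution(cost, bounds, pop=20, gens=150, f=0.7, cr=0.9, seed=0):
    """Minimal DE (rand/1 mutation, greedy selection) over one scalar parameter."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, pop)                  # candidate moduli
    fx = np.array([cost(v) for v in x])
    for _ in range(gens):
        for i in range(pop):
            others = [j for j in range(pop) if j != i]
            a, b, c = rng.choice(others, size=3, replace=False)
            mutant = np.clip(x[a] + f * (x[b] - x[c]), lo, hi)
            trial = mutant if rng.random() < cr else x[i]
            f_trial = cost(trial)
            if f_trial <= fx[i]:                  # keep the better candidate
                x[i], fx[i] = trial, f_trial
    best = int(np.argmin(fx))
    return x[best], fx[best]

# Synthetic "measured" deflections generated from a known modulus
radii = np.array([0.3, 0.6, 0.9, 1.2])            # hypothetical sensor offsets
true_modulus = 200e6
measured = deflection(true_modulus, radii)
best_e, best_err = differential_evolution(
    lambda e: rmse(e, radii, measured), bounds=(50e6, 500e6))
```

In the paper's setting the forward model would be a layered-elastic response program and the search would run over one modulus per layer, but the DE loop and cost function keep the same shape.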

Keywords: cost function, differential evolution, falling weight deflectometer, genetic algorithm, global optimization, metaheuristic algorithm, multilayered pavement, pavement condition assessment, pavement layer moduli back calculation

Procedia PDF Downloads 159
9503 Collagen Deposition in Lung Parenchyma Driven by Depletion of LYVE-1+ Macrophages Protects against Emphysema and Loss of Airway Function

Authors: Yinebeb Mezgebu Dagnachew, Hwee Ying Lim, Liao Wupeng, Sheau Yng Lim, Lim Sheng Jie Natalie, Veronique Angeli

Abstract:

Collagen is essential for maintaining lung structure and function, and its remodeling has been associated with respiratory diseases, including chronic obstructive pulmonary disease (COPD). However, the cellular mechanisms driving collagen remodeling and the functional implications of this process in the pathophysiology of pulmonary diseases remain poorly understood. Using a mouse model of Lyve-1-expressing macrophage depletion, we found that the absence of this subpopulation of tissue-resident macrophages led to the preferential deposition of type I collagen fibers around the alveoli and bronchi in the steady state. Further analysis by polarized light microscopy revealed that the collagen fibers accumulating in the lungs depleted of Lyve-1+ macrophages were thicker and crosslinked. A decrease in MMP-9 gene expression and proteolytic activity, together with an increase in Col1a1, Timp-3, and Lox gene expression, accompanied the collagen alterations. Next, we investigated the effect of the collagen remodeling on the pathophysiology of COPD and airway function in mice lacking Lyve-1+ macrophages exposed chronically to cigarette smoke (CS), a well-established animal model of COPD. We showed that the deposition of collagen protected mice against the destruction of alveoli (emphysema) and bronchi thickening after CS exposure and prevented loss of airway function. Thus, we demonstrate that interstitial Lyve-1+ macrophages regulate the composition, amount, and architecture of the collagen network in the lungs and that such collagen remodeling functionally impacts the development of COPD. This study further supports the potential of targeting collagen as a promising approach to treating respiratory diseases.

Keywords: lung, extracellular matrix, chronic obstructive pulmonary disease, matrix metalloproteinases, collagen

Procedia PDF Downloads 26
9502 Enhancing Predictive Accuracy in Pharmaceutical Sales through an Ensemble Kernel Gaussian Process Regression Approach

Authors: Shahin Mirshekari, Mohammadreza Moradi, Hossein Jafari, Mehdi Jafari, Mohammad Ensaf

Abstract:

This research employs Gaussian Process Regression (GPR) with an ensemble kernel, integrating Exponential Squared, Revised Matern, and Rational Quadratic kernels to analyze pharmaceutical sales data. Bayesian optimization was used to identify optimal kernel weights: 0.76 for Exponential Squared, 0.21 for Revised Matern, and 0.13 for Rational Quadratic. The ensemble kernel demonstrated superior performance in predictive accuracy, achieving an R² score near 1.0, and significantly lower values in MSE, MAE, and RMSE. These findings highlight the efficacy of ensemble kernels in GPR for predictive analytics in complex pharmaceutical sales datasets.
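
The ensemble construction can be illustrated with a small NumPy Gaussian process regressor whose kernel is the weighted sum of a squared exponential, a Matern-3/2 (standing in for the paper's "Revised Matern"), and a rational quadratic term, using the reported weights. The length-scales, toy data, and noise level are invented for the sketch.

```python
import numpy as np

def k_sqexp(a, b, ls=0.3):
    """Squared exponential (exponential squared) kernel."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / ls**2)

def k_matern32(a, b, ls=0.3):
    """Matern-3/2 kernel (assumed stand-in for the 'Revised Matern')."""
    s = np.sqrt(3.0) * np.abs(a[:, None] - b[None, :]) / ls
    return (1.0 + s) * np.exp(-s)

def k_ratquad(a, b, ls=0.3, alpha=1.0):
    """Rational quadratic kernel."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return (1.0 + d2 / (2.0 * alpha * ls**2)) ** (-alpha)

def ensemble_kernel(a, b, weights=(0.76, 0.21, 0.13)):
    w1, w2, w3 = weights  # the reported kernel weights
    return w1 * k_sqexp(a, b) + w2 * k_matern32(a, b) + w3 * k_ratquad(a, b)

def gpr_predict(x_train, y_train, x_test, noise=1e-6):
    """Posterior mean of GP regression with the ensemble kernel."""
    K = ensemble_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = ensemble_kernel(x_test, x_train)
    return Ks @ np.linalg.solve(K, y_train)

x_train = np.linspace(0.0, 1.0, 10)
y_train = np.sin(2 * np.pi * x_train)   # toy stand-in for a sales series
pred = gpr_predict(x_train, y_train, x_train)
```

With a near-noiseless fit the posterior mean interpolates the training targets, which is what the test below checks; a production pipeline would instead tune the weights and hyperparameters by Bayesian optimization as the abstract describes.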

Keywords: Gaussian process regression, ensemble kernels, bayesian optimization, pharmaceutical sales analysis, time series forecasting, data analysis

Procedia PDF Downloads 66
9501 An Assessment of Vegetable Farmers’ Perceptions about Post-harvest Loss Sources in Ghana

Authors: Kofi Kyei, Kenchi Matsui

Abstract:

Loss of vegetable products has been a major constraint in the post-harvest chain. Sources of post-harvest loss in the vegetable industry range from the time of harvesting to handling and the various market centers. Identifying vegetable farmers’ perceptions about post-harvest loss sources is one way of addressing this issue. In this paper, we assessed farmers’ perceptions about sources of post-harvest losses in the Ashanti Region of Ghana. We also identified the factors that influence their perceptions. To clearly understand farmers’ perceptions, we selected Sekyere-Kumawu District in the Ashanti Region. Sekyere-Kumawu District is one of the major producers of vegetables in the Region. Based on a questionnaire survey, 100 vegetable farmers growing tomato, pepper, okra, cabbage, and garden egg were purposively selected from five communities in Sekyere-Kumawu District. For farmers’ perceptions, a five-point Likert scale was employed. On a scale from 1 (no loss) to 5 (extremely high loss), we processed the scores for each vegetable harvest. To clarify the factors influencing farmers’ perceptions, Pearson correlation analysis was used. Our findings revealed that farmers perceive post-harvest loss by pest infestation as the most extreme loss. However, vegetable farmers did not perceive loss during transportation as a serious source of post-harvest loss. The Pearson correlation analysis results further revealed that farmers’ age, gender, level of education, and years of experience had an influence on their perceptions. This paper then discusses some recommendations to minimize post-harvest loss in the region.

Keywords: Ashanti Region, pest infestation, post-harvest loss, vegetable farmers

Procedia PDF Downloads 167
9500 Integrating Deterministic and Probabilistic Safety Assessment to Decrease Risk & Energy Consumption in a Typical PWR

Authors: Ebrahim Ghanbari, Mohammad Reza Nematollahi

Abstract:

Integrated deterministic and probabilistic safety assessment (IDPSA) is one of the most widely used approaches in the safety analysis of power plant accidents. It is also recognized today that the role of human error in causing these accidents is no smaller than that of systemic errors, so both human actions and system errors must be represented in fault and event sequences. The integration of these analyses is reflected in the core damage frequency and also in the study of the use of water resources during an accident such as the loss of all electrical power at the plant. In this regard, the station blackout (SBO) accident was simulated for a pressurized water reactor in the deterministic analysis, and by analyzing the operator's behavior in controlling the accident, the results of the combined deterministic and probabilistic assessment were identified. The results showed that the best performance of the plant operator would reduce the risk of the accident by 10% and decrease consumption of the plant's water sources by 6.82 liters/second.

Keywords: IDPSA, human error, SBO, risk

Procedia PDF Downloads 125
9499 Evaluating Generative Neural Attention Weights-Based Chatbot on Customer Support Twitter Dataset

Authors: Sinarwati Mohamad Suhaili, Naomie Salim, Mohamad Nazim Jambli

Abstract:

Sequence-to-sequence (seq2seq) models augmented with attention mechanisms are playing an increasingly important role in automated customer service. These models, which are able to recognize complex relationships between input and output sequences, are crucial for optimizing chatbot responses. Central to these mechanisms are neural attention weights that determine the focus of the model during sequence generation. Despite their widespread use, there remains a gap in the comparative analysis of different attention weighting functions within seq2seq models, particularly in the domain of chatbots using the Customer Support Twitter (CST) dataset. This study addresses this gap by evaluating four distinct attention-scoring functions (dot, multiplicative/general, additive, and an extended multiplicative function with a tanh activation parameter) in neural generative seq2seq models. Utilizing the CST dataset, these models were trained and evaluated over 10 epochs with the AdamW optimizer. Evaluation criteria included validation loss and BLEU scores implemented under both greedy and beam search strategies with a beam size of k=3. Results indicate that the model with the tanh-augmented multiplicative function significantly outperforms its counterparts, achieving the lowest validation loss (1.136484) and the highest BLEU scores (0.438926 under greedy search, 0.443000 under beam search, k=3). These results emphasize the crucial influence of selecting an appropriate attention-scoring function on the performance of seq2seq models for chatbots. In particular, the model that integrates tanh activation proves to be a promising approach to improving the quality of chatbots in the customer support context.
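
The four attention-scoring functions compared in the study can be sketched in NumPy as follows; the dimensions, weight matrices, and the scale constant of the tanh-augmented multiplicative variant are illustrative assumptions, not the paper's trained parameters.

```python
import numpy as np

def softmax(z):
    """Turn raw scores into attention weights that sum to one."""
    e = np.exp(z - np.max(z))
    return e / e.sum()

def dot_score(q, keys):
    """Dot-product attention: s_i = k_i . q"""
    return keys @ q

def general_score(q, keys, w):
    """Multiplicative/general attention: s_i = k_i^T W q"""
    return keys @ (w @ q)

def additive_score(q, keys, w1, w2, v):
    """Additive attention: s_i = v^T tanh(W1 k_i + W2 q)"""
    return np.tanh(keys @ w1.T + q @ w2.T) @ v

def tanh_general_score(q, keys, w, c=1.0):
    """Extended multiplicative score with a tanh activation (the variant
    reported best in the study); the scale c is an assumed parameter."""
    return c * np.tanh(keys @ (w @ q))

# Toy dimensions and random parameters (illustrative, not trained values)
rng = np.random.default_rng(0)
d, h, T = 8, 6, 5
q = rng.normal(size=d)            # decoder query state
keys = rng.normal(size=(T, d))    # encoder hidden states
w = rng.normal(size=(d, d))
w1 = rng.normal(size=(h, d))
w2 = rng.normal(size=(h, d))
v = rng.normal(size=h)

weights = {
    "dot": softmax(dot_score(q, keys)),
    "general": softmax(general_score(q, keys, w)),
    "additive": softmax(additive_score(q, keys, w1, w2, v)),
    "tanh-general": softmax(tanh_general_score(q, keys, w)),
}
```

In a full seq2seq decoder these weights would form a context vector (a weighted sum of the encoder states) at every generation step; the sketch isolates only the scoring functions being compared.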

Keywords: attention weight, chatbot, encoder-decoder, neural generative attention, score function, sequence-to-sequence

Procedia PDF Downloads 73
9498 Influence of Annealing Temperature on Optical, Anticandidal, Photocatalytic and Dielectric Properties of ZnO/TiO2 Nanocomposites

Authors: Wasi Khan, Suboohi Shervani, Swaleha Naseem, Mohd. Shoeb, J. A. Khan, B. R. Singh, A. H. Naqvi

Abstract:

We have successfully synthesized ZnO/TiO2 nanocomposites using a two-step solochemical synthesis method. The influence of annealing temperature on the microstructural, optical, anticandidal, and photocatalytic activities and dielectric properties was investigated. X-ray diffraction (XRD) and scanning electron microscopy (SEM) show the formation of the nanocomposite and the uniform surface morphology of all samples. The UV-Vis spectra indicate a decrease in band gap energy with an increase in annealing temperature. The anticandidal activity of the ZnO/TiO2 nanocomposite was evaluated against MDR C. albicans 077. The in-vitro killing assay revealed that the ZnO/TiO2 nanocomposite efficiently inhibits the growth of C. albicans 077. The nanocomposite also exhibited photocatalytic activity for the degradation of methyl orange as a function of time at 465 nm wavelength. The electrical behaviour of the composite has been studied over a wide range of frequencies at room temperature using complex impedance spectroscopy. The dielectric constant, dielectric loss, and ac conductivity (σac) were studied as functions of frequency and explained by the Maxwell-Wagner model. The data reveal that the dielectric constant and loss (tanδ) exhibit normal dielectric behavior and decrease with increasing frequency.

Keywords: ZnO/TiO2 nanocomposites, SEM, photocatalytic activity, dielectric properties

Procedia PDF Downloads 398
9497 In silico Analysis of a Causative Mutation in Cadherin-23 Gene Identified in an Omani Family with Hearing Loss

Authors: Mohammed N. Al Kindi, Mazin Al Khabouri, Khalsa Al Lamki, Tommasso Pappuci, Giovani Romeo, Nadia Al Wardy

Abstract:

Hereditary hearing loss is a heterogeneous group of complex disorders, with an overall incidence of one in every five hundred newborns, presenting as syndromic and non-syndromic forms. Cadherin-related 23 (CDH23) is one of the listed deafness causative genes. CDH23 is expressed in the stereocilia of hair cells and the retina photoreceptor cells. Defective CDH23 has been associated mostly with prelingual severe-to-profound sensorineural hearing loss (SNHL) in either syndromic (USH1D) or non-syndromic SNHL (DFNB12). An Omani family diagnosed clinically with severe-to-profound sensorineural hearing loss was genetically analysed by whole exome sequencing. A novel homozygous missense variant, c.A7451C (p.D2484A), in exon 53 of CDH23 was detected. One hundred and thirty control samples were analysed, and all were negative for the detected variant. The variant was analysed in silico for pathogenicity verification using several mutation prediction software tools. The variant proved to be a pathogenic mutation and is reported for the first time in Oman and worldwide. It is concluded that in silico mutation prediction analysis might be used as a useful molecular diagnostics tool benefiting both genetic counseling and mutation verification. The aspartic acid 2484 alanine missense substitution might be the main disease-causing mutation that damages CDH23 function and could be used as a genetic hearing loss marker for this particular Omani family.

Keywords: Cdh23, d2484a, in silico, Oman

Procedia PDF Downloads 209
9496 Deep Neural Network Approach for Navigation of Autonomous Vehicles

Authors: Mayank Raj, V. G. Narendra

Abstract:

Ever since the DARPA challenge on autonomous vehicles in 2005, there has been a lot of buzz about ‘Autonomous Vehicles’ amongst the major tech giants such as Google, Uber, and Tesla. Numerous approaches have been adopted to solve this problem, which can have a long-lasting impact on mankind. In this paper, we have used Deep Learning techniques and the TensorFlow framework to build a neural network model that predicts the features needed for navigation of autonomous vehicles (speed, acceleration, steering angle, and brake). The Deep Neural Network was trained on images and sensor data obtained from the comma.ai dataset. A heatmap was used to check for correlation among the features, and finally, four important features were selected. This was a multivariate regression problem. The final model had five convolutional layers, followed by five dense layers. Finally, the predicted values were tested against the labeled data, where the mean squared error was used as a performance metric.
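
The paper's model (five convolutional plus five dense layers in TensorFlow) is too large to reproduce here, but the underlying idea, a network trained to minimize mean squared error on four regression targets, can be sketched with a one-hidden-layer NumPy network on synthetic data; every array and hyperparameter below is invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 8 "sensor" features -> 4 regression targets
# (speed, acceleration, steering angle, brake); all values are invented.
X = rng.normal(size=(256, 8))
true_w = rng.normal(size=(8, 4))
Y = np.tanh(X @ true_w)

# One hidden layer, trained by full-batch gradient descent on MSE
W1 = rng.normal(scale=0.1, size=(8, 32))
W2 = rng.normal(scale=0.1, size=(32, 4))
lr = 0.5

def forward(X):
    H = np.tanh(X @ W1)
    return H, H @ W2

def mse(pred, Y):
    """Mean squared error, the performance metric used in the paper."""
    return float(np.mean((pred - Y) ** 2))

_, pred = forward(X)
initial = mse(pred, Y)
for _ in range(1000):
    H, pred = forward(X)
    grad_out = 2.0 * (pred - Y) / Y.size          # dMSE/dpred
    grad_W2 = H.T @ grad_out
    grad_H = (grad_out @ W2.T) * (1.0 - H ** 2)   # backprop through tanh
    grad_W1 = X.T @ grad_H
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1
_, pred = forward(X)
final = mse(pred, Y)
```

The convolutional front end of the real model would replace the raw feature matrix here with learned image representations; the loss, targets, and training loop keep the same structure.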

Keywords: autonomous vehicles, deep learning, computer vision, artificial intelligence

Procedia PDF Downloads 151
9495 A Comparative Study on a Tilt-Integral-Derivative Controller with Proportional-Integral-Derivative Controller for a Pacemaker

Authors: Aysan Esgandanian, Sabalan Daneshvar

Abstract:

This study compares a proportional-integral-derivative (PID) controller with a tilt-integral-derivative (TID) controller for cardiac pacemaker systems, which can automatically control the heart rate to accurately track a desired preset profile. The controller offers good adaptation of the heart to the physiological needs of the patient. The parameters of both controllers are tuned by the particle swarm optimization (PSO) algorithm, which uses the integral of time-multiplied squared error as a fitness function to be minimized. Simulations were performed on a developed model of the human cardiovascular system, and the results demonstrate that the TID controller produces superior control performance to the PID controller. In this paper, all simulations were performed in Matlab.
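
A minimal sketch of the fitness evaluation inside such a PSO loop, assuming a generic first-order plant in place of the cardiovascular model: simulate the closed loop under PID control and score it with the integral of time-multiplied squared error. The plant, gains, and horizon are illustrative.

```python
import numpy as np

def simulate_pid(kp, ki, kd, setpoint=1.0, tau=1.0, dt=0.001, t_end=15.0):
    """Euler simulation of a first-order plant tau*dy/dt = -y + u under
    PID control (a generic stand-in for the cardiovascular model)."""
    n = int(t_end / dt)
    t = np.arange(n) * dt
    y = np.zeros(n)
    integral = 0.0
    prev_err = setpoint - y[0]
    for k in range(1, n):
        err = setpoint - y[k - 1]
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv
        y[k] = y[k - 1] + dt * (-y[k - 1] + u) / tau
        prev_err = err
    return t, y

def itse(t, y, setpoint=1.0):
    """Integral of time-multiplied squared error (the PSO fitness),
    approximated with a rectangle rule on the simulation grid."""
    e = setpoint - y
    return float(np.sum(t * e ** 2) * (t[1] - t[0]))

# Well-tuned vs. poorly tuned gains: the better loop scores lower ITSE
t, y_good = simulate_pid(2.0, 1.0, 0.05)
_, y_bad = simulate_pid(0.2, 0.05, 0.0)
itse_good, itse_bad = itse(t, y_good), itse(t, y_bad)
```

A PSO would repeatedly call `itse(...)` with candidate gain vectors and keep the minimizer; a TID controller would replace the proportional term with a tilted (fractional-power) term while reusing the same fitness.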

Keywords: integral of time square error, pacemaker systems, proportional-integral-derivative controller, PSO algorithm, tilt-integral-derivative controller

Procedia PDF Downloads 459
9494 The Non-Existence of Perfect 2-Error Correcting Lee Codes of Word Length 7 over Z

Authors: Catarina Cruz, Ana Breda

Abstract:

Tiling problems have been capturing the attention of many mathematicians due to their real-life applications. In this study, we deal with tilings of Zⁿ by Lee spheres, where n is a positive integer, these tilings being related to error correcting codes for the transmission of information over a noisy channel. We focus our attention on the question ‘for what values of n and r does the n-dimensional Lee sphere of radius r tile Zⁿ?’. It seems that the n-dimensional Lee sphere of radius r does not tile Zⁿ for n ≥ 3 and r ≥ 2. Here, we prove that it is not possible to tile Z⁷ with Lee spheres of radius 2, presenting a proof based on a combinatorial method and faithful to the geometric idea of the problem. The non-existence of such tilings has been studied by several authors, the most difficult cases being considered those in which the radius of the Lee spheres is equal to 2. The relation between these tilings and error correcting codes is established by considering the center of a Lee sphere as a codeword and the other elements of the sphere as words which are decoded to the central codeword. When the Lee spheres of radius r centered at elements of a set M ⊂ Zⁿ tile Zⁿ, M is a perfect r-error correcting Lee code of word length n over Z, denoted by PL(n, r). Our strategy to prove the non-existence of PL(7, 2) codes is based on the assumption that such a code M exists. Without loss of generality, we suppose that O ∈ M, where O = (0, ..., 0). In this sense, and taking into account that we are dealing with Lee spheres of radius 2, O covers all words which are distant two or fewer units from it. By the definition of a PL(7, 2) code, each word which is distant three units from O must be covered by a unique codeword of M. These words have to be covered by codewords which are distant five units from O. We prove the non-existence of PL(7, 2) codes by showing that it is not possible to cover all the referred words without superposition of Lee spheres whose centers are distant five units from O, contradicting the definition of a PL(7, 2) code. We achieve this contradiction by combining the cardinalities of particular subsets of codewords which are distant five units from O. There exists an extensive literature on codes in the Lee metric. Here, we present a new approach to prove the non-existence of PL(7, 2) codes.
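
A quick computational companion to the argument: the number of words covered by one codeword, i.e. the size of the n-dimensional Lee sphere of radius r, can be counted by brute force and checked against the closed form Σₖ 2ᵏ C(n,k) C(r,k). For a PL(7, 2) code each codeword would cover 113 words; counts of this kind are the bookkeeping behind the cardinality arguments in the abstract.

```python
from itertools import product
from math import comb

def lee_sphere_size_bruteforce(n, r):
    """Count x in Z^n with ||x||_1 <= r by direct enumeration."""
    return sum(1 for x in product(range(-r, r + 1), repeat=n)
               if sum(abs(c) for c in x) <= r)

def lee_sphere_size_formula(n, r):
    """Closed form: sum over k of 2^k * C(n,k) * C(r,k)
    (choose k nonzero coordinates, their signs, and their magnitudes)."""
    return sum(2**k * comb(n, k) * comb(r, k) for k in range(min(n, r) + 1))

words_per_codeword = lee_sphere_size_formula(7, 2)  # size of the radius-2 sphere in Z^7
```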

Keywords: Golomb-Welch conjecture, Lee metric, perfect Lee codes, tilings

Procedia PDF Downloads 157
9493 Advances in Fiber Optic Technology for High-Speed Data Transmission

Authors: Salim Yusif

Abstract:

Fiber optic technology has revolutionized telecommunications and data transmission, providing unmatched speed, bandwidth, and reliability. This paper presents the latest advancements in fiber optic technology, focusing on innovations in fiber materials, transmission techniques, and network architectures that enhance the performance of high-speed data transmission systems. Key advancements include the development of ultra-low-loss optical fibers, multi-core fibers, advanced modulation formats, and the integration of fiber optics into next-generation network architectures such as Software-Defined Networking (SDN) and Network Function Virtualization (NFV). Additionally, recent developments in fiber optic sensors are discussed, extending the utility of optical fibers beyond data transmission. Through comprehensive analysis and experimental validation, this research offers valuable insights into the future directions of fiber optic technology, highlighting its potential to drive innovation across various industries.

Keywords: fiber optics, high-speed data transmission, ultra-low-loss optical fibers, multi-core fibers, modulation formats, coherent detection, software-defined networking, network function virtualization, fiber optic sensors

Procedia PDF Downloads 51
9492 Optimization of E-motor Control Parameters for Electrically Propelled Vehicles by Integral Squared Method

Authors: Ibrahim Cicek, Melike Nikbay

Abstract:

Electrically propelled vehicles, whether road or aerial, are studied contemporarily for their robust maneuvers and cost-efficient transport operations. The main power-generating systems of such vehicles are electrified by selecting proper components and assembled as an e-powertrain. Generally, e-powertrain components are selected considering the target performance requirements. Since the main component of propulsion is the drive unit, the e-motor control system is designed to achieve the performance targets. In this paper, the optimization of e-motor control parameters is studied by the integral squared error (ISE) method. The overall aim is to minimize the power consumption of such vehicles depending on the mission profile while maintaining smooth maneuvers for passenger comfort. The sought-after values of the control parameters are computed using Optimal Control Theory. The system is modeled as a closed-loop linear control system with calibratable parameters.

Keywords: optimization, e-powertrain, optimal control, electric vehicles

Procedia PDF Downloads 121
9491 Seasonal and Species Variations in Incidence of Foetal Loss at the Maiduguri Abattoir in Northern Nigeria

Authors: Abdulrazaq O. Raji, Abba Mohammed, Ibrahim D. Mohammed

Abstract:

This study was conducted to investigate foetal loss among slaughtered livestock species at the Maiduguri abattoir from 2009 to 2013. Records of animals slaughtered monthly and foetuses recovered were collected from the management of the Maiduguri abattoir. Data were subjected to Analysis of Variance using the General Linear Model of SPSS 13.0 with season, species, and their interaction as fixed factors. The average yearly slaughter at the Maiduguri abattoir was 63,225 animals, with cattle, camel, goat, and sheep accounting for 19737, 7374, 19281, and 17540 of the total. The corresponding numbers of pregnant animals were 3117, 839, 2281, and 2432, out of a total of 8522 animals. Thus, cattle, camel, goat, and sheep accounted for 30.87, 11.53, 30.16, and 27.44%, respectively, of the animals slaughtered at the abattoir and 35.96, 9.68, 26.31, and 28.05% of the foetal loss. The effects of season and species on foetal loss were significant (P < 0.05). The number of pregnant animals slaughtered and foetal loss were higher during the wet than the dry season. Similarly, foetal loss at the abattoir was higher in the month of May in respect of camel, goat, and sheep, and in August for cattle. Camel was the least slaughtered animal and had the least number of pregnant females. Foetal loss (%) was higher (P < 0.05) for cattle compared to other species. The interaction showed that camel was the least slaughtered species in both seasons and that cattle in the wet season had the highest foetal loss.

Keywords: abattoir, foetal loss, season, species

Procedia PDF Downloads 526
9490 Estimation of Optimum Parameters of Non-Linear Muskingum Model of Routing Using Imperialist Competition Algorithm (ICA)

Authors: Davood Rajabi, Mojgan Yazdani

Abstract:

The non-linear Muskingum model is an efficient method for flood routing; however, its efficiency depends on three applied parameters. This study therefore assessed the efficiency of the Imperialist Competition Algorithm (ICA) in evaluating the optimum parameters of the non-linear Muskingum model. In addition to ICA, the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO) were also used, to provide a benchmark against which to judge ICA. In this regard, ICA was first applied to the Wilson flood routing problem; then, the routing of two flood events of the Doab Samsami River was investigated. For the Wilson flood, the objective function was the sum of squared deviations (SSQ) between observed and calculated discharges. For the routing of the two other floods, in addition to SSQ, the sum of absolute deviations (SAD) between observed and calculated discharges was also considered as an objective function. For the first floodwater, GA showed the best performance based on SSQ, whereas ICA ranked first based on SAD. For the second floodwater, ICA performed better under both objective functions. According to the results obtained, ICA can be used as an appropriate method to evaluate the parameters of the non-linear Muskingum model.
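The model being calibrated can be sketched as follows. The storage relation S = K·[x·I + (1−x)·O]^m and the SSQ objective are standard; the explicit routing scheme and sample values below are an illustrative simplification, not the authors' code:

```python
def route(inflow, K, x, m, dt=1.0):
    """Non-linear Muskingum routing with storage S = K*[x*I + (1-x)*O]**m.
    Inverts the storage relation for O, then updates S by continuity."""
    S = K * inflow[0] ** m          # steady-state initial storage (O = I)
    out = [inflow[0]]
    for I in inflow[1:]:
        # from S = K*[x*I + (1-x)*O]**m, solve for the outflow O
        O = max(((S / K) ** (1.0 / m) - x * I) / (1.0 - x), 0.0)
        S += dt * (I - O)           # continuity: dS/dt = I - O
        out.append(O)
    return out

def ssq(observed, calculated):
    """Sum of squared deviations between observed and routed discharges,
    the objective that ICA, GA and PSO minimize over (K, x, m)."""
    return sum((o - c) ** 2 for o, c in zip(observed, calculated))
```

A metaheuristic such as ICA would repeatedly call `route` with candidate (K, x, m) triples and keep the one minimizing `ssq` (or the analogous sum of absolute deviations).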

Keywords: Doab Samsami River, genetic algorithm, imperialist competition algorithm, metaheuristic algorithms, particle swarm optimization, Wilson flood

Procedia PDF Downloads 497
9489 Discrete Estimation of Spectral Density for Alpha Stable Signals Observed with an Additive Error

Authors: R. Sabre, W. Horrigue, J. C. Simon

Abstract:

This paper addresses two difficulties encountered in practice when observing a continuous-time process. The first is that we cannot observe a process over an entire time interval; we only take discrete observations. The second is that the process is frequently observed with a constant additive error. It is important to give an estimator of the spectral density of such a process that takes into account both the additive observation error and the choice of the discrete observation times. In this work, we propose an estimator based on spectral smoothing of the periodogram by the polynomial Jackson kernel, which reduces the additive error. In order to overcome the aliasing phenomenon, this estimator is constructed from observations taken at well-chosen times, so as to restrict the estimator to the domain where the spectral density is non-zero. We show that the proposed estimator is asymptotically unbiased and consistent. We thus obtain an estimate that resolves the two difficulties concerning the choice of observation instants of a continuous-time process and observations affected by a constant error.
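Smoothing a periodogram with a Jackson-type polynomial kernel can be sketched as below. The kernel form proportional to (sin(Mu/2)/sin(u/2))^4 is the classical Jackson kernel, but the discretization, bandwidth, and parameter choices here are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def periodogram(x):
    """Raw periodogram of a discretely observed signal."""
    n = len(x)
    return np.abs(np.fft.fft(x)) ** 2 / (2 * np.pi * n)

def jackson_kernel(u, M):
    """Polynomial Jackson-type kernel, proportional to (sin(M*u/2)/sin(u/2))^4;
    the ratio tends to M as u -> 0, which is handled explicitly."""
    u = np.asarray(u, dtype=float)
    num, den = np.sin(M * u / 2.0), np.sin(u / 2.0)
    small = np.abs(den) < 1e-12
    ratio = np.where(small, float(M), num / np.where(small, 1.0, den))
    return ratio ** 4

def smoothed_periodogram(x, M=4, half_width=8):
    """Smooth the periodogram with normalized Jackson-kernel weights,
    reducing the variance contributed by the additive observation error."""
    p = periodogram(x)
    n = len(x)
    j = np.arange(-half_width, half_width + 1)
    w = jackson_kernel(2 * np.pi * j / n, M)
    w /= w.sum()                       # weights sum to one
    # circular (frequency-domain) convolution of the periodogram with the weights
    return sum(wj * np.roll(p, -jj) for wj, jj in zip(w, j))
```

Because the weights are normalized, the smoothing redistributes but does not change the total power in the periodogram.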

Keywords: spectral density, stable processes, aliasing, periodogram

Procedia PDF Downloads 133
9488 The Underestimate of the Annual Maximum Rainfall Depths Due to Coarse Time Resolution Data

Authors: Renato Morbidelli, Carla Saltalippi, Alessia Flammini, Tommaso Picciafuoco, Corrado Corradini

Abstract:

A considerable part of the rainfall data used in hydrological practice is available in aggregated form over constant time intervals. This can produce undesirable effects, such as the underestimation of the annual maximum rainfall depth, Hd, associated with a given duration, d, which is the basic quantity in the development of rainfall depth-duration-frequency relationships and in determining whether climate change is affecting extreme event intensities and frequencies. The errors in the evaluation of Hd from data characterized by a coarse temporal aggregation, ta, and a procedure to reduce the non-homogeneity of the Hd series are investigated here. Our results indicate that: 1) in the worst conditions, for d=ta, the estimate of a single Hd value can be affected by an underestimation error of up to 50%, while the average underestimation error for a series with at least 15-20 Hd values is less than or equal to 16.7%; 2) the underestimation error values follow an exponential probability density function; 3) every very long time series of Hd contains many underestimated values; 4) relationships between the non-dimensional ratio ta/d and the average underestimate of Hd, derived from continuous rainfall data observed at many stations in Central Italy, may overcome this issue; and 5) these relationships should allow improvement of the Hd estimates and of the associated depth-duration-frequency curves, at least in areas with similar climatic conditions.
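The source of the underestimation can be demonstrated numerically: maxima computed from fixed aggregation intervals can never exceed the true moving-window maxima, because the fixed windows are a subset of all possible windows. The synthetic record below is an illustrative stand-in for a real fine-resolution rainfall series:

```python
import numpy as np

def annual_max_depth(rain, d):
    """True H_d: maximum rainfall depth over ALL moving windows of length d."""
    c = np.concatenate(([0.0], np.cumsum(rain)))
    return float(np.max(c[d:] - c[:-d]))

def aggregated_max_depth(rain, ta):
    """H_d computed from data pre-aggregated over fixed intervals of length ta
    (the worst case d = ta discussed in the abstract)."""
    n = (len(rain) // ta) * ta
    return float(np.max(rain[:n].reshape(-1, ta).sum(axis=1)))

rng = np.random.default_rng(0)
rain = rng.exponential(1.0, size=24 * 365)     # synthetic "hourly" record
true_hd = annual_max_depth(rain, 6)
agg_hd = aggregated_max_depth(rain, 6)
underestimate = 100.0 * (1.0 - agg_hd / true_hd)   # percent underestimation
```

Whenever the true maximum straddles a boundary between two fixed intervals, the aggregated estimate misses part of it, which is exactly the coarse-resolution error the study quantifies.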

Keywords: central Italy, extreme events, rainfall data, underestimation errors

Procedia PDF Downloads 187
9487 Soil Loss Assessment at Steep Slope: A Case Study at the Guthrie Corridor Expressway, Selangor, Malaysia

Authors: Rabiul Islam

Abstract:

The study assessed soil erosion at the plot scale; the Universal Soil Loss Equation (USLE) erosion model and Geographic Information System (GIS) techniques were applied to 8 plots along the Guthrie Corridor Expressway, Selangor, Malaysia. The USLE model estimates the average soil loss by integrating several factors: the rainfall erosivity factor (R), the soil erodibility factor (K), the slope length and steepness factor (LS), the vegetation cover factor (C) and the conservation practice factor (P). The results show that four plots, NLDNM, NDNM, PLDM and NDM, have very low rates of soil loss, with averages of 0.059, 0.106, 0.386 and 0.372 ton/ha/year, respectively. The NBNM, PLDNM and NLDM plots had relatively higher rates of soil loss, with averages of 0.678, 0.757 and 0.493 ton/ha/year. The NBM plot had the highest rate of soil loss, ranging from 0.842 ton/ha/year up to a maximum of 16.466 ton/ha/year. The NBM plot was located on bare land; hence the magnitude of its C factor (C = 0.15) was the highest.
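The USLE estimate is the product of the factors listed above, A = R·K·LS·C·P. The factor values in the sketch below are hypothetical, chosen only to mirror a bare-land plot with a high C factor; they are not the study's calibrated values:

```python
def usle_soil_loss(R, K, LS, C, P):
    """USLE average annual soil loss A = R * K * LS * C * P
    (ton/ha/year when the factors use consistent units)."""
    return R * K * LS * C * P

# hypothetical factor values for a bare-land plot (high C, as for NBM)
A = usle_soil_loss(R=900.0, K=0.05, LS=2.5, C=0.15, P=1.0)
```

In a GIS workflow each factor is a raster layer and the multiplication is applied cell by cell to map soil loss across the plots.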

Keywords: USLE model, GIS, Guthrie Corridor Expressway (GCE), Malaysia

Procedia PDF Downloads 522
9486 Monte Carlo Estimation of Heteroscedasticity and Periodicity Effects in a Panel Data Regression Model

Authors: Nureni O. Adeboye, Dawud A. Agunbiade

Abstract:

This research investigates the effects of heteroscedasticity and periodicity in a Panel Data Regression Model (PDRM) by extending previous work on balanced panel data estimation to the context of fitting a PDRM for bank audit fees. The estimation of this model was achieved through the derivation of a joint Lagrange Multiplier (LM) test for homoscedasticity and zero serial correlation, a conditional LM test for zero serial correlation given heteroscedasticity of varying degrees, and a conditional LM test for homoscedasticity given first-order positive serial correlation, via a two-way error component model. Monte Carlo simulations were carried out for 81 different variations, whose design assumed a uniform distribution under a linear heteroscedasticity function. Each variation was iterated 1,000 times, and the three estimators considered were assessed based on the variance, absolute bias (ABIAS), mean square error (MSE) and root mean square error (RMSE) of the parameter estimates. Eighteen different models were fitted under different specified conditions, and the best-fitted model is the within estimator when heteroscedasticity is severe at either zero or positive serial correlation. The LM test results showed that the tests have good size and power, as all three tests are significant at 5% for the specified linear form of the heteroscedasticity function, establishing that banks' operations are severely heteroscedastic in nature with little or no periodicity effects.
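A stripped-down version of such a Monte Carlo scheme is sketched below, for a single-equation OLS slope under a linear heteroscedasticity function rather than the paper's two-way error component model; the design values (slope, sample size, heteroscedasticity strength) are illustrative assumptions:

```python
import numpy as np

def monte_carlo_ols(beta=2.0, n=200, reps=1000, het=1.0, seed=0):
    """Monte Carlo assessment of the OLS slope estimator when the error
    standard deviation grows linearly in x: sd(e_i) = 1 + het*x_i.
    Returns (absolute bias, MSE, RMSE) of the slope estimates."""
    rng = np.random.default_rng(seed)
    est = np.empty(reps)
    for r in range(reps):
        x = rng.uniform(0.0, 1.0, n)            # uniform design, as in the study
        e = rng.normal(0.0, 1.0 + het * x)      # linearly heteroscedastic errors
        y = beta * x + e
        est[r] = np.cov(x, y, bias=True)[0, 1] / np.var(x)  # OLS slope
    abias = abs(est.mean() - beta)              # ABIAS
    mse = float(np.mean((est - beta) ** 2))     # MSE
    return abias, mse, float(np.sqrt(mse))      # RMSE

abias, mse, rmse = monte_carlo_ols()
```

The paper's scheme repeats this kind of loop for 81 design variations and compares within, between, and GLS-type panel estimators on the same four criteria.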

Keywords: audit fee, heteroscedasticity, Lagrange multiplier test, Monte-Carlo scheme, periodicity

Procedia PDF Downloads 137
9485 Deep Learning for Renewable Power Forecasting: An Approach Using LSTM Neural Networks

Authors: Fazıl Gökgöz, Fahrettin Filiz

Abstract:

Load forecasting has become crucial in recent years and is now a popular research area. Many different power forecasting models have been tried for this purpose. Electricity load forecasting is necessary for energy policies and for healthy, reliable grid systems. Effective forecasting of renewable energy loads enables decision makers to minimize the costs of electric utilities and power plants, and forecasting tools are required that can predict how much renewable energy can be utilized. The purpose of this study is to explore the effectiveness of LSTM-based neural networks for estimating renewable energy loads. We present models for predicting renewable energy loads based on deep neural networks, in particular Long Short-Term Memory (LSTM) algorithms. Deep learning allows multiple layers of models to learn representations of data, and LSTM networks are able to store information over long periods of time. Deep learning models have recently been used to forecast renewable energy sources, for example wind and solar power. Historical load and weather information are the most important input variables in power forecasting models. The dataset contains power consumption measurements gathered between January 2016 and December 2017 with one-hour resolution, using publicly available data from the Turkish Renewable Energy Resources Support Mechanism. Forecasting studies were carried out on these data via a deep neural network approach, including the LSTM technique, for the Turkish electricity market. 432 different models were created by varying layer cell counts and dropout. The adaptive moment estimation (ADAM) algorithm was used for training as a gradient-based optimizer instead of stochastic gradient descent (SGD); ADAM performed better than SGD in terms of faster convergence and lower error rates. Model performance was compared according to MAE (Mean Absolute Error) and MSE (Mean Squared Error). The best MAE results among the 432 tested models were 0.66, 0.74, 0.85 and 1.09. The forecasting performance of the proposed LSTM models is successful compared to the literature.
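Why an LSTM can "store information for long periods" is visible in its gate equations: the cell state is carried forward and only modified through multiplicative gates. The following is a minimal numpy sketch of a single LSTM step, not the authors' 432 trained models; the stacked weight layout (i, f, o, g) is an assumption for illustration:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell update. The input (i), forget (f) and output (o) gates
    decide what the cell state c stores and exposes. W, U, b stack the four
    gate blocks (i, f, o, g) row-wise; H is the hidden size."""
    z = W @ x + U @ h + b
    H = len(h)
    i, f, o = sigmoid(z[:H]), sigmoid(z[H:2*H]), sigmoid(z[2*H:3*H])
    g = np.tanh(z[3*H:])            # candidate memory content
    c_new = f * c + i * g           # forget old memory, write new candidate
    h_new = o * np.tanh(c_new)      # expose a gated view of the memory
    return h_new, c_new

# run a toy sequence (e.g. 10 hourly load/weather vectors) through one cell
rng = np.random.default_rng(1)
H, D = 4, 3
W = rng.normal(0.0, 0.1, (4 * H, D))
U = rng.normal(0.0, 0.1, (4 * H, H))
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for x in rng.normal(size=(10, D)):
    h, c = lstm_step(x, h, c, W, U, b)
```

When the forget gate f stays near 1, the cell state passes through the recurrence almost unchanged, which is what lets gradients and information survive over long horizons.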

Keywords: deep learning, long short term memory, energy, renewable energy load forecasting

Procedia PDF Downloads 258
9484 Pin Count Aware Volumetric Error Detection in Arbitrary Microfluidic Bio-Chip

Authors: Kunal Das, Priya Sengupta, Abhishek K. Singh

Abstract:

Pin assignment, scheduling, routing and error detection for arbitrary biochemical protocols on a Digital Microfluidic Biochip are reported in this paper. The work concentrates on pin assignment for routing 2 or 3 droplets in an arbitrary biochemical protocol, and on scheduling and routing in an m × n biochip. Volumetric errors arise due to droplet splitting on the biochip. Volumetric error detection is addressed using a biochip AND logic gate, known as the microfluidic AND (mAND) gate. The proposed pin assignment algorithm for an m × n biochip requires m + n − 1 pins. Its basic principle is that the same pin is never placed in the same column, the same row, or in diagonally or directly adjacent cells; identical pins are kept a distance apart so that interference is reduced. A case study is also reported.
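The adjacency condition described above can be checked mechanically. The checker below is an illustrative sketch of the constraint only; the paper's actual assignment procedure and its row/column rule are not reproduced here:

```python
def pin_count(m, n):
    """Pins required by the described scheme for an m x n electrode array."""
    return m + n - 1

def shares_pin_with_neighbour(grid):
    """Return True if any two edge- or corner-adjacent electrodes share a
    control pin, which would risk interference during droplet routing."""
    m, n = len(grid), len(grid[0])
    for i in range(m):
        for j in range(n):
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    if (di, dj) == (0, 0):
                        continue
                    ni, nj = i + di, j + dj
                    if 0 <= ni < m and 0 <= nj < n and grid[i][j] == grid[ni][nj]:
                        return True
    return False
```

Such a predicate can validate any candidate pin map before scheduling and routing are attempted on it.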

Keywords: digital microfluidic biochip, cross-contamination, pin assignment, microfluidic AND gate

Procedia PDF Downloads 270
9483 Solution of the Nonrelativistic Radial Wave Equation of Hydrogen Atom Using the Green's Function Approach

Authors: F. U. Rahman, R. Q. Zhang

Abstract:

This work aims to develop a systematic numerical technique that can be easily extended to the many-body problem. The Lippmann-Schwinger equation (the integral form of the Schrodinger wave equation) is solved for the nonrelativistic radial wave of the hydrogen atom using an iterative integration scheme. As the unknown wave function appears on both sides of the Lippmann-Schwinger equation, an approximate wave function is used in order to solve it. The Green's function is obtained by the method of Laplace transform for the radial wave equation with the potential term excluded. Using the Lippmann-Schwinger equation, the product of the approximate wave function, the Green's function and the potential term is integrated iteratively. Finally, the wave function is normalized and plotted against the standard radial wave for comparison. The resulting wave function converges to the standard wave function as the number of iterations increases. Results are verified for the first fifteen states of the hydrogen atom. The method is efficient and consistent and can be applied to complex systems in the future.
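The iteration the abstract describes can be written schematically as follows, in atomic units with the hydrogen Coulomb potential; the notation (u_k for the radial function at iteration k, u_0 for the solution of the potential-free equation) is an assumption, not taken from the paper:

```latex
u_{k+1}(r) \;=\; u_0(r) \;+\; \int_0^{\infty} G(r, r')\, V(r')\, u_k(r')\, \mathrm{d}r',
\qquad V(r) = -\frac{1}{r},
```

with the result normalized after the final pass so that \(\int_0^{\infty} |u(r)|^2 \,\mathrm{d}r = 1\) before comparison with the standard radial wave.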

Keywords: Green’s function, hydrogen atom, Lippmann Schwinger equation, radial wave

Procedia PDF Downloads 389
9482 Resilience among Children with and without Hearing Loss: A Comparative Study in Pakistan

Authors: Bushra Akram, Amina Tariq

Abstract:

Objective: This cross-sectional descriptive study aimed to compare the level of resilience among children with and without hearing loss. Methodology: A total of 500 children (250 with hearing loss and 250 without) were recruited through convenience sampling. Children with hearing loss were recruited from special schools, whereas children without hearing loss were selected from regular schools located in the cities of Gujrat and Jhelum, Pakistan. Respondents' ages ranged from 9 to 14 years. The resiliency scale RSCA (Resiliency Scales for Children and Adolescents), developed by Sandra Prince-Embury (2006), was used. The RSCA consists of three core theoretical areas: the Sense of Mastery Scale (MAS), the Sense of Relatedness Scale (REL) and the Emotional Reactivity Scale (REA). Results: The findings indicated a significant difference in the resilience levels of participants with and without hearing loss. The mean comparison showed that children with hearing loss scored lower on MAS (X̅ = 43.32, SD = 7.55) and REL (X̅ = 49.96, SD = 7.65) than their counterparts on MAS (X̅ = 53.96, SD = 9.90, t = -7.31***) and REL (X̅ = 68.43, SD = 14.57, t = -10.18***). However, children with hearing loss scored higher on REA (X̅ = 42.12, SD = 5.84) than hearing participants (X̅ = 28.84, SD = 13.97, t = -8.20***). The findings revealed no significant difference in the resilience levels of hearing and deaf children on the basis of gender or age. Research Outcomes and Future Scope: Children with hearing loss showed a lower level of resilience and therefore need a resilience-development program for better social-emotional adjustment and enhancement of their psychological well-being. Recommendations for future research are given.

Keywords: children with hearing loss, psychological wellbeing, resiliency scales for children and adolescents, resilience

Procedia PDF Downloads 179
9481 A Game of Information in Defense/Attack Strategies: Case of Poisson Attacks

Authors: Asma Ben Yaghlane, Mohamed Naceur Azaiez

Abstract:

In this paper, we briefly introduce the concept of Poisson attacks in the context of defense/attack strategies where attacks are assumed to occur continuously over time. We suggest a game model in which the attacker combines two criteria before launching an attack: a sufficient confidence level of a successful attack and a reasonably small estimation error. Here, the estimation error arises from assessing system failure upon attack using aggregate data at the system level; the corresponding error is referred to as the aggregation error. The defender, on the other hand, attempts to deter the attack by making one or both criteria inapplicable, building his or her strategy both by strengthening the targeted system and by increasing the size of the error. We formulate the defender's problem using appropriate optimization models. The attacker uses Bayesian updating to assess the impact of the improvements made by the defender, and then evaluates the feasibility of the attack before deciding whether or not to launch it. We provide illustrations to better explain the process.
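The attacker's two launch criteria can be sketched as a simple decision rule. The closed-form Poisson confidence and the threshold values below are illustrative assumptions, not the paper's actual model:

```python
import math

def attack_success_confidence(rate, horizon):
    """P(at least one successful attack event in [0, horizon]) when successes
    arrive as a Poisson process with the given rate: 1 - exp(-rate*horizon)."""
    return 1.0 - math.exp(-rate * horizon)

def launch_decision(rate, horizon, agg_error,
                    conf_threshold=0.95, max_error=0.1):
    """The attacker launches only if BOTH criteria hold: the confidence of a
    successful attack is high enough, and the aggregation error in the
    system-level failure assessment is small enough."""
    return (attack_success_confidence(rate, horizon) >= conf_threshold
            and agg_error <= max_error)
```

The defender's two levers map directly onto this rule: hardening the system lowers `rate`, while obscuring system-level data raises `agg_error`, either of which can make the decision come out negative.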

Keywords: attacker, defender, game theory, information

Procedia PDF Downloads 461