Search results for: propagation of error
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2589

1899 Phytochemical Investigation of Butanol Extract from Launeae Arborescens

Authors: Khaled Sekoum, Nasser Belboukhari, Abelkrim Cheriti

Abstract:

Launeae arborescens (L. arborescens) is a medicinal plant with a strong capacity for propagation. Depending on its biotope, and in association with different species, it occurs frequently throughout the southwest of Algeria, from Wadi Namous to the region of Karzaz. According to our ethnopharmacological survey, L. arborescens is used to treat gastric illnesses. Following our earlier phytochemical work on the polyphenols of the methanolic extract of the aerial part of L. arborescens, we investigated the butanol fraction of the water/acetone extract and isolated new flavonoids from this plant.

Keywords: Launeae arborescens, asteraceae, flavanone, isoflavanone, flavanone glycoside

Procedia PDF Downloads 465
1898 Influence of Model Hydrometeor Form on Probability of Discharge Initiation from Artificial Charged Water Aerosol Cloud

Authors: A. G. Temnikov, O. S. Belova, L. L. Chernensky, T. K. Gerastenok, N. Y. Lysov, A. V. Orlov, D. S. Zhuravkova

Abstract:

Hypotheses of lightning initiation on arrays of large hydrometeors are under consideration. There is no agreement on which hydrometeor form is most favourable for lightning initiation from a thundercloud. Artificial charged water aerosol clouds of positive or negative polarity make it possible to investigate the influence of hydrometeor form on the peculiarities and probability of lightning discharge initiation between the thundercloud and the ground. Artificial charged aerosol clouds that create electric field strengths in the range of 5-6 kV/cm to 16-18 kV/cm have been used in the experiments. Arrays of model hydrometeors of volume and plate forms were placed near the bottom cloud boundary. It was established that different kinds of discharge can be initiated in the presence of a model hydrometeor array, from cloud discharges up to diffuse and channel discharges between the charged cloud and the ground. It was found that the form of the model hydrometeors can significantly influence channel discharge initiation from artificial charged aerosol clouds of negative or positive polarity. Analysis and generalization of the experimental results showed that the maximal probability of channel discharge initiation and propagation was observed for the artificial charged cloud of positive polarity when arrays of model hydrometeors shaped as cylindrical bodies of revolution were used. At the same time, for artificial charged clouds of negative polarity, a model hydrometeor array of plate rhombus form provided the maximal probability of channel discharge formation between the charged cloud and the ground. The established influence of model hydrometeor form on channel discharge initiation from the artificial charged water aerosol cloud, and on its subsequent successful propagation, is attributed to the different character of positive and negative streamer and volume leader development on the model hydrometeor array near the bottom boundary of the charged cloud. The experimental results indicate the potentially important role of the form of large hail particles precipitating in a thundercloud in discharge initiation.

Keywords: cloud and channel discharges, hydrometeor form, lightning initiation, negative and positive artificial charged aerosol cloud

Procedia PDF Downloads 312
1897 An Automatic Speech Recognition of Conversational Telephone Speech in Malay Language

Authors: M. Draman, S. Z. Muhamad Yassin, M. S. Alias, Z. Lambak, M. I. Zulkifli, S. N. Padhi, K. N. Baharim, F. Maskuriy, A. I. A. Rahim

Abstract:

The performance of a Malay automatic speech recognition (ASR) system for the call centre environment is presented. The system utilizes the Kaldi toolkit as the platform for the entire library and algorithms used in performing the ASR task. The acoustic model in this system uses a deep neural network (DNN) to model the acoustic signal, and a standard n-gram model is used for language modelling. With 80 hours of training data from call centre recordings, the ASR system achieves 72% accuracy, corresponding to a 28% word error rate (WER). Testing was done using 20 hours of audio data. Despite the implementation of the DNN, the system shows low accuracy owing to the variety of noises, accents and dialects that typically occur in the Malaysian call centre environment. This significant speaker variation is reflected by the large standard deviation of the average word error rate (WERav) (i.e., ~10%). The lowest WER (13.8%) was obtained from a recording sample of a native speaker with a standard Malay dialect (central Malaysia), compared with 49% for the sample with the highest WER, which contains conversation of a speaker using a non-standard Malay dialect.
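
For readers unfamiliar with the metric, the sketch below computes WER as the word-level Levenshtein distance divided by the reference length; this is the standard definition rather than code from the study.

```python
# Minimal WER sketch: edit distance over words, divided by reference length.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i ref words into the first j hyp words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(sub, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("saya nak tanya", "saya tanya"))  # one deletion -> 1/3
```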

Keywords: conversational speech recognition, deep neural network, Malay language, speech recognition

Procedia PDF Downloads 318
1896 The Effect of Exposure to High Noise Level on the Performance and Rate of Error in Manual Activities

Authors: Zahra Zamanian, Alireza Zamanian, Jafar Hasanzadeh

Abstract:

Background: Unwanted sound, as one of the most important physical factors in the majority of production units, imposes a great number of problems on industrial workers. Sound is one of the environmental factors that can cause physical as well as psychological damage and also affects individuals’ performance and productivity. Therefore, the present study aimed to determine the effect of noise exposure on human performance. Methods: The study assessed the effect of noise on the performance of 50 students of Shiraz University of Medical Sciences (25 males and 25 females) at sound pressure levels of 70, 90, and 110 dB, using two factors, the physical features of the sound and different conditions of the sound pressure source, and applying the Two-Arm Coordination Test. Results: The results revealed no significant difference between male and female subjects, or between the different sound pressure source conditions, regarding the time needed to complete the task (p > 0.05). In addition, as the sound pressure increased, the time needed to complete the task increased as well. No significant difference was found between performance at 70 and 90 dB; on the other hand, performance at 110 dB was significantly different from performance at 70 and 90 dB (p < 0.05 and p < 0.001, respectively). Conclusion: In general, as the sound pressure increases, performance decreases, which results in a considerable increase in the individuals’ rate of error.

Keywords: physical factors, two-arm coordination test, Shiraz University of Medical Sciences, noise

Procedia PDF Downloads 299
1895 Cross-Sectional Study Investigating the Prevalence of Uncorrected Refractive Error and Visual Acuity through Mobile Vision Screening in the Homeless in Wales

Authors: Pakinee Pooprasert, Wanxin Wang, Tina Parmar, Dana Ahnood, Tafadzwa Young-Zvandasara, James Morgan

Abstract:

Homelessness has been shown to be correlated with poor health outcomes, including increased visual health morbidity. Despite this, there are relatively few studies of visual health in the homeless population, especially in the UK. This research aims to investigate visual disability and access barriers prevalent in the homeless population in Cardiff, South Wales. Data were collected from 100 homeless participants in three different shelters. Visual outcomes included near and distance visual acuity as well as non-cycloplegic refraction. Qualitative data were collected via a questionnaire and included socio-demographic profile, ocular history, subjective visual acuity and level of access to healthcare facilities. Based on the participants’ presenting visual acuity, the total prevalence of myopia and hyperopia was 17.0% and 19.0%, respectively, based on the spherical equivalent of the eye with the greatest absolute value. The prevalence of astigmatism was 8.0%. The mean absolute spherical equivalent was 0.841 D and 0.853 D for the right and left eyes, respectively. The proportion of participants with sight loss (defined as VA of 6/12 to 6/60 in the better-seeing eye) was 27.0%, in comparison to 0.89% and 1.1% in the general Cardiff and Wales populations, respectively (p < 0.05). Additionally, 1.0% of the homeless subjects were registered blind (VA less than 3/60), in comparison to 0.17% in the national census after age standardization. Most participants had good knowledge of how to access prescription glasses and eye examination services. Despite this, 85.0% had never had their eyes examined by a doctor, and 73.0% had their last optometrist appointment more than 5 years ago. These findings suggest that there is a significant disparity in ocular health, including visual acuity and refractive error, between the homeless and the general population. Further, the homeless were less likely to receive the same level of support and continued care in the community due to access barriers. These included a number of socio-economic factors, such as travel expenses and regional availability of services, as well as administrative shortcomings. In conclusion, this research demonstrated unmet visual health needs within the homeless population, and inclusive policy changes may need to be implemented for better healthcare outcomes within this marginalized community.
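
The spherical equivalent used above follows the standard definition SE = sphere + cylinder / 2; the sketch below illustrates it, with the ±0.5 D classification cut-offs given as assumptions rather than the study’s criteria.

```python
# Standard spherical-equivalent calculation (not specific to this study):
# SE = sphere + cylinder / 2. Classification cut-offs below are assumptions.
def spherical_equivalent(sphere_d: float, cylinder_d: float) -> float:
    return sphere_d + cylinder_d / 2.0

def classify(se: float, cutoff: float = 0.5) -> str:
    if se <= -cutoff:
        return "myopia"
    if se >= cutoff:
        return "hyperopia"
    return "emmetropia"

se_right = spherical_equivalent(-1.25, -0.50)
se_left = spherical_equivalent(-0.75, 0.0)
worse = max(se_right, se_left, key=abs)   # eye with the greatest absolute value
print(f"SE = {worse:+.2f} D -> {classify(worse)}")
```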

Keywords: homelessness, refractive error, visual disability, Wales

Procedia PDF Downloads 165
1894 Proposal of Optimality Evaluation for Quantum Secure Communication Protocols by Taking the Average of the Main Protocol Parameters: Efficiency, Security and Practicality

Authors: Georgi Bebrov, Rozalina Dimova

Abstract:

In the field of quantum secure communication, there is no evaluation that characterizes quantum secure communication (QSC) protocols in a complete, general manner. The current paper addresses the lack of such an evaluation for QSC protocols by introducing an optimality evaluation, expressed as the average over the three main parameters of QSC protocols: efficiency, security, and practicality. For the efficiency evaluation, the common expression for this parameter is used, which incorporates all the classical and quantum resources (bits and qubits) utilized for transferring a certain amount of information (bits) in a secure manner. Using a criteria-based approach (whether or not certain criteria are met), an expression for the practicality evaluation is presented, which accounts for the complexity of the practical realization of a QSC protocol. Based on the error rates that common quantum attacks (measure-and-resend, intercept-and-resend, probe attack, and entanglement-swapping attack) induce, the security evaluation for a QSC protocol is proposed as the minimum function taken over the error rates of the mentioned quantum attacks. For the sake of clarity, an example is presented to show how the optimality is calculated.
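
A minimal sketch of the proposed scoring, assuming each parameter is already normalized to a comparable scale: the security term is the minimum over the attack-induced error rates, and the optimality is the plain average of the three parameters. The example values are illustrative, not from the paper.

```python
# Hedged sketch of the optimality score described above.
def optimality(efficiency: float, practicality: float,
               attack_error_rates: dict) -> float:
    security = min(attack_error_rates.values())   # weakest-attack error rate
    return (efficiency + security + practicality) / 3.0

score = optimality(
    efficiency=0.5,
    practicality=0.75,
    attack_error_rates={"measure-resend": 0.25, "intercept-resend": 0.25,
                        "probe": 0.17, "entanglement-swapping": 0.25},
)
print(f"optimality = {score:.3f}")
```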

Keywords: quantum cryptography, quantum secure communication, quantum secure direct communication security, quantum secure direct communication efficiency, quantum secure direct communication practicality

Procedia PDF Downloads 179
1893 Effects of Various Wavelet Transforms in Dynamic Analysis of Structures

Authors: Seyed Sadegh Naseralavi, Sadegh Balaghi, Ehsan Khojastehfar

Abstract:

Time history dynamic analysis of structures is considered an exact method, but it is computationally intensive. Filtering earthquake strong ground motions by applying the wavelet transform is an approach to reducing the computational effort, particularly in the optimization of structures against seismic effects. Wavelet transforms are categorized into continuous and discrete transforms. Since an earthquake strong ground motion record is a discrete function, the discrete wavelet transform is applied in the present paper. The wavelet transform reduces analysis time by filtering out non-effective frequencies of the strong ground motion. The filtration process may be repeated several times, although the approximation then induces more error. In this paper, the strong ground motion has been filtered once with each wavelet. The strong ground motion of the Northridge earthquake is filtered using various wavelets, and dynamic analysis of sample shear and moment frames is carried out. The error for each wavelet is computed by comparing the dynamic response of the sample structures with the exact responses, which are computed by dynamic analysis of the structures using the non-filtered strong ground motion.
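
As an illustration of a single filtration step of the kind described above, the sketch below applies a one-level discrete wavelet decomposition with PyWavelets and discards the detail (high-frequency) band before reconstruction; the synthetic signal and the db4 wavelet are assumptions, not the study’s Northridge record.

```python
import numpy as np
import pywt

# Hypothetical accelerogram: 20 s sampled at 100 Hz. One level of DWT
# removes the upper half of the frequency band.
t = np.linspace(0, 20, 2000)
accel = np.sin(2 * np.pi * 1.5 * t) + 0.3 * np.random.randn(t.size)

coeffs = pywt.wavedec(accel, "db4", level=1)
coeffs[-1][:] = 0.0                      # discard the detail (high-frequency) band
filtered = pywt.waverec(coeffs, "db4")   # reconstruct from the approximation only

print(accel.size, filtered.size)
```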

Keywords: wavelet transform, computational error, computational duration, strong ground motion data

Procedia PDF Downloads 375
1892 Fracture And Fatigue Crack Growth Analysis and Modeling

Authors: Volkmar Nolting

Abstract:

Fatigue crack growth prediction has become an important topic in both engineering and non-destructive evaluation. Crack propagation is influenced by the mechanical properties of the material and is conveniently modelled by the Paris-Erdogan equation. The critical crack size and the total number of load cycles are calculated. From a Larson-Miller plot, the maximum operational temperature at which failure does not occur within a given time interval t can be determined for a given stress level. The study is used to determine a reasonable inspection cycle and thus enhances operational safety and reduces costs.
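
A minimal numerical sketch of the Paris-Erdogan life integration, da/dN = C(dK)^m with dK = Y * dSigma * sqrt(pi * a): the number of cycles to failure follows from integrating da / (C * dK^m) between the initial and critical crack sizes. The material constants and geometry factor below are illustrative, not values from the paper.

```python
import numpy as np

C, m = 1e-11, 3.0        # Paris constants (units consistent with MPa, m)
Y = 1.12                 # geometry factor (edge-crack assumption)
d_sigma = 100.0          # stress range, MPa
a0, a_crit = 1e-3, 2e-2  # initial and critical crack sizes, m

a_grid = np.linspace(a0, a_crit, 10_000)
dK = Y * d_sigma * np.sqrt(np.pi * a_grid)     # stress intensity range, MPa*sqrt(m)
da = a_grid[1] - a_grid[0]
N = np.sum(1.0 / (C * dK**m)) * da             # N = integral of da / (C * dK^m)
print(f"cycles to failure: {N:.3e}")
```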

Keywords: fracture mechanics, crack growth prediction, lifetime of a component, structural health monitoring

Procedia PDF Downloads 42
1891 Towards Automatic Calibration of In-Line Machine Processes

Authors: David F. Nettleton, Elodie Bugnicourt, Christian Wasiak, Alejandro Rosales

Abstract:

In this presentation, preliminary results are given for the modeling and calibration of two different industrial winding MIMO (Multiple Input Multiple Output) processes using machine learning techniques. In contrast to previous approaches, which have typically used ‘black-box’ linear statistical methods together with a definition of the mechanical behavior of the process, we use non-linear machine learning algorithms together with a ‘white-box’ rule induction technique to create a supervised model of the fitting error between the expected and real force measures. The final objective is to build a precise model of the winding process in order to control the tension of the material being wound, in the first case, and the friction of the material passing through the die, in the second case. Case 1, tension control of a winding process: a plastic web is unwound from a first reel, goes over a traction reel and is rewound onto a third reel. The objectives are (i) to train a model to predict the web tension and (ii) calibration, that is, to find the input values which result in a given tension. Case 2, friction force control of a micro-pullwinding process: a core plus resin passes through a first die, two winding units then wind an outer layer around the core, and there is a final pass through a second die. The objectives are (i) to train a model to predict the friction on die 2 and (ii) calibration to find the input values which result in a given friction on die 2. Different machine learning approaches are tested to build the models: Kernel Ridge Regression, Support Vector Regression (with a radial basis function kernel) and MPART (rule induction with a continuous value as output). As a preliminary step, the MPART rule induction algorithm was used to build an explicative model of the error (the difference between the expected and real friction on die 2). Modeling the error behavior with explicative rules helps improve the overall process model. Once the models are built, the inputs are calibrated by generating Gaussian random numbers for each input (taking into account its mean and standard deviation) and comparing the output to a target (desired) output until the closest fit is found. The results of empirical testing show that high precision is obtained for the trained models and for the calibration process. The learning step is the slowest part of the process (at most 5 minutes for these data), but it can be done offline just once. The calibration step is much faster and, in under one minute, obtained a precision error of less than 1e-3 for both outputs. To summarize, in the present work two processes have been modeled and calibrated. Fast processing times and high precision have been achieved, which can be further improved by using heuristics to guide the Gaussian calibration. Error behavior has been modeled to help improve overall process understanding. This is relevant for the quick, optimal set-up of many different industrial processes that use a pull-winding type process to manufacture fibre-reinforced plastic parts. Acknowledgements to the Openmind project, which is funded by the Horizon 2020 European Union funding programme for Research & Innovation, Grant Agreement number 680820.
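
A minimal sketch of the Gaussian random-search calibration described above, assuming a Kernel Ridge surrogate and synthetic data: candidate inputs are drawn from per-input Gaussians, and the candidate whose predicted output lies closest to the target is kept.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

# Synthetic stand-in for the process data; 4 hypothetical process inputs.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = X @ np.array([1.0, -0.5, 0.3, 2.0]) + 0.1 * rng.normal(size=500)

model = KernelRidge(kernel="rbf", alpha=1e-2).fit(X, y)

# Calibration: sample inputs from each input's Gaussian, keep the closest fit.
target = 1.5
mu, sigma = X.mean(axis=0), X.std(axis=0)
candidates = rng.normal(mu, sigma, size=(10_000, 4))
pred = model.predict(candidates)
best = candidates[np.argmin(np.abs(pred - target))]
print("calibrated inputs:", np.round(best, 3))
```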

Keywords: data model, machine learning, industrial winding, calibration

Procedia PDF Downloads 238
1890 Developing an ANN Model to Predict Anthropometric Dimensions Based on Real Anthropometric Database

Authors: Waleed A. Basuliman, Khalid S. AlSaleh, Mohamed Z. Ramadan

Abstract:

Applying anthropometric dimensions is considered one of the important factors when designing any human-machine system. In this study, the estimation of anthropometric dimensions has been improved by developing an artificial neural network that predicts the anthropometric measurements of males in Saudi Arabia. A total of 1427 Saudi males from ages 6 to 60 participated in the measurement of twenty anthropometric dimensions. These anthropometric measurements are important for designing the majority of work and life applications in Saudi Arabia. The data were collected over 8 months from different locations in Riyadh City. Five of these dimensions were used as predictor variables (inputs) of the model, and the remaining fifteen dimensions were set as the measured variables (outcomes). The hidden layers were varied during the structuring stage, and the best performance was achieved with the network structure 6-25-15. The results showed that the developed neural network model was able to predict the body dimensions of the population of Saudi Arabia significantly well. The network mean absolute percentage error (MAPE) and root mean squared error (RMSE) were found to be 0.0348 and 3.225, respectively. The accuracy of the developed neural network was evaluated by comparing the predicted outcomes with those of a multiple regression model. The ANN model performed better and yielded excellent correlation coefficients between the predicted and actual dimensions.
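
A rough stand-in for the described network, using scikit-learn’s MLPRegressor with one hidden layer of 25 units; the synthetic data and train/test split are assumptions, since the study trains on 1427 real measurements with backpropagation.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in data: 5 predictor dimensions -> 15 outcome dimensions.
rng = np.random.default_rng(7)
X = rng.normal(size=(1427, 5))
W = rng.normal(size=(5, 15))
Y = X @ W + 0.05 * rng.normal(size=(1427, 15))

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(25,), max_iter=2000, random_state=0)
net.fit(X_tr, Y_tr)                       # multi-output regression
print(f"R^2 on held-out data: {net.score(X_te, Y_te):.3f}")
```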

Keywords: artificial neural network, anthropometric measurements, backpropagation, real anthropometric database

Procedia PDF Downloads 569
1889 Enhancing Signal Reception in a Mobile Radio Network Using Adaptive Beamforming Antenna Arrays Technology

Authors: Ugwu O. C., Mamah R. O., Awudu W. S.

Abstract:

This work is aimed at enhancing signal reception and minimizing outage probability in a mobile radio network using adaptive beamforming antenna arrays. In this research work, an empirical real-time drive measurement was carried out in a cellular network of Globalcom Nigeria Limited located at Ikeja, the capital of Lagos State, Nigeria, with reference base station number KJA 004. The empirical measurement included the Received Signal Strength and Bit Error Rate, which were recorded for exact prediction of the signal strength of the network at the time of carrying out this research. The Received Signal Strength and Bit Error Rate were measured with a spectrum monitoring van, with the help of a ray tracer, at intervals of 100 meters up to 700 meters from the transmitting base station. The distance and angular location measurements from the reference network were made with the help of the Global Positioning System (GPS). The other equipment used comprised transmitting equipment measurement software (Temsoftware), laptops and log files, which showed received signal strength versus distance from the base station. Results from the real-time experiment showed an improvement of about 11%, indicating that mobile radio networks are prone to signal failure, which can be minimized using an adaptive beamforming antenna array through a significant reduction in Bit Error Rate, implying improved performance of the mobile radio network. In addition, this work not only included experiments done through empirical measurement but also enhanced mathematical models that were developed and implemented as reference models for accurate prediction. The proposed signal models were based on the analysis of continuous time and discrete space, among other assumptions. These enhanced (proposed) models were validated using MATLAB (version 7.6.3.35) and compared with a conventional antenna for accuracy. The outage models were used to manage the blocked-call experience in the mobile radio network. A 20% improvement was obtained when the adaptive beamforming antenna arrays were implemented on the wireless mobile radio network.
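
Since the abstract does not give the authors’ MATLAB models, the sketch below shows a generic LMS adaptive beamformer on an 8-element uniform linear array as one common way such arrays steer gain toward a wanted user; geometry, signals and step size are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
M, spacing = 8, 0.5                         # elements, spacing in wavelengths

def steering(theta_deg):
    k = 2 * np.pi * spacing * np.sin(np.radians(theta_deg))
    return np.exp(1j * k * np.arange(M))

desired = np.sign(rng.normal(size=5000))            # known training symbols
x = (np.outer(desired, steering(10))                # wanted user at +10 deg
     + 0.5 * np.outer(rng.normal(size=5000), steering(-40))  # interferer
     + 0.1 * (rng.normal(size=(5000, M)) + 1j * rng.normal(size=(5000, M))))

w, mu = np.zeros(M, dtype=complex), 1e-3
for n in range(5000):                       # LMS update: w <- w + mu * conj(e) * x
    y = np.vdot(w, x[n])                    # array output, w^H x
    e = desired[n] - y
    w += mu * np.conj(e) * x[n]
print("array gain toward +10 deg:", abs(np.vdot(w, steering(10))))
```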

Keywords: beamforming algorithm, adaptive beamforming, simulink, reception

Procedia PDF Downloads 34
1888 Simulation of Channel Models for Device-to-Device Application of 5G Urban Microcell Scenario

Authors: H. Zormati, J. Chebil, J. Bel Hadj Tahar

Abstract:

Next-generation wireless transmission technology (5G) is expected to support the development of channel models for higher frequency bands, so characterization of high frequency bands is the most important issue in radio propagation research for 5G. Multiple urban microcellular measurements have been carried out at 60 GHz. In this paper, the collected data are uniformly analyzed with a focus on the path loss (PL). The objective is to compare the simulation results of several studied channel models in order to test the performance of each one.
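
As one example of the kind of path loss model compared in such studies, the sketch below implements the close-in (CI) free-space reference distance model commonly used for millimeter-wave microcells; the path loss exponent is an assumed value, not a result from this paper.

```python
import numpy as np

# Close-in (CI) model: PL(d) = FSPL(d0) + 10 * n * log10(d / d0).
def pathloss_ci_db(d_m, f_ghz=60.0, ple=2.9, d0=1.0):
    fspl_d0 = 20 * np.log10(4 * np.pi * d0 * f_ghz * 1e9 / 3e8)  # FSPL at 1 m
    return fspl_d0 + 10 * ple * np.log10(d_m / d0)

for d in (10, 50, 100, 200):
    print(f"d = {d:4d} m  ->  PL = {pathloss_ci_db(d):.1f} dB")
```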

Keywords: 5G, channel model, 60 GHz channel, millimeter-wave, urban microcell

Procedia PDF Downloads 314
1887 Measuring the Height of a Person in Closed Circuit Television Video Footage Using 3D Human Body Model

Authors: Dojoon Jung, Kiwoong Moon, Joong Lee

Abstract:

The height of a criminal is one of the important clues that can determine the scope of a suspect search or exclude a suspect from the search target. Although measuring the height of criminals from video alone is limited for various reasons, if the 3D data of the scene and the Closed Circuit Television (CCTV) footage are matched, the height of the criminal can be measured. However, it is still difficult to measure height from CCTV footage with this non-contact measurement method because of variables such as the position, posture, and head shape of criminals. In this paper, we propose a method of matching the CCTV footage with the 3D data of the crime scene and measuring the height of the person using a 3D human body model in the matched data. In the proposed method, the height is measured using the 3D human model in various scenes of the person in the CCTV footage, and the measurement value of the target person is corrected by the measurement error obtained from replayed CCTV footage of a reference person. We tested the method on walking CCTV footage of 20 people captured indoors and outdoors and corrected the measurement values with 5 reference persons. Experimental results show that the average measurement error (true value minus measured value) is 0.45 cm, and that this method is effective for measuring a person's height in CCTV footage.
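
A minimal sketch of the reference-person correction step, under the assumption that the bias measured on a reference person of known height transfers to the target estimate; the numbers are illustrative.

```python
# Hedged sketch: correct the target's estimate by the bias observed on a
# reference person of known height filmed by the same camera.
def corrected_height(measured_target_cm: float,
                     reference_true_cm: float,
                     reference_measured_cm: float) -> float:
    bias = reference_true_cm - reference_measured_cm   # error on the reference
    return measured_target_cm + bias

print(corrected_height(176.2, reference_true_cm=170.0,
                       reference_measured_cm=169.5))   # -> 176.7
```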

Keywords: human height, CCTV footage, 2D/3D matching, 3D human body model

Procedia PDF Downloads 246
1886 Simulation of Reflectometry in Alborz Tokamak

Authors: S. Kohestani, R. Amrollahi, P. Daryabor

Abstract:

Microwave diagnostics such as reflectometry are receiving growing attention in magnetic confinement fusion research. In order to obtain a better understanding of plasma confinement physics, more detailed measurements of the density profile and its fluctuations may be required. A 2D full-wave simulation of ordinary mode propagation has been written in an effort to model effects seen in reflectometry experiments. The code uses the finite-difference time-domain method with a perfectly matched layer absorbing boundary to solve Maxwell's equations. The code has been used to simulate the reflectometer measurement in the Alborz Tokamak.
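
The paper’s code is 2-D with a perfectly matched layer; as a much-reduced illustration of the FDTD core, the sketch below steps a 1-D wave in normalized units, with simple reflecting ends instead of an absorbing boundary.

```python
import numpy as np

# Bare-bones 1-D FDTD (normalized units, Courant number 1); the full code in
# the paper is 2-D with a PML absorbing boundary, omitted here.
nx, nt = 400, 800
ez, hy = np.zeros(nx), np.zeros(nx - 1)

for n in range(nt):
    hy += np.diff(ez)                         # update H from the curl of E
    ez[1:-1] += np.diff(hy)                   # update E from the curl of H
    ez[200] += np.exp(-((n - 60) / 20) ** 2)  # soft Gaussian-pulse source
print(f"peak |Ez| after {nt} steps: {np.abs(ez).max():.3f}")
```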

Keywords: reflectometry, simulation, ordinary mode, tokamak

Procedia PDF Downloads 417
1885 Optimization of the Numerical Fracture Mechanics

Authors: H. Hentati, R. Abdelmoula, Li Jia, A. Maalej

Abstract:

In this work, we present numerical simulations of quasi-static crack propagation based on the variational approach. We perform numerical simulations of a piece of brittle material without an initial crack. An alternate minimization algorithm is used. Based on these numerical results, we determine the influence of the numerical parameters on the location of the crack. We show the importance of trying to optimize the computation time, and we present a first attempt to develop a simple numerical method to optimize this time.

Keywords: fracture mechanics, optimization, variational approach, mechanics

Procedia PDF Downloads 602
1884 A Weighted Sum Particle Swarm Approach (WPSO) Combined with a Novel Feasibility-Based Ranking Strategy for Constrained Multi-Objective Optimization of Compact Heat Exchangers

Authors: Milad Yousefi, Moslem Yousefi, Ricarpo Poley, Amer Nordin Darus

Abstract:

Design optimization of heat exchangers is a very complicated task that has traditionally been carried out by a trial-and-error procedure. To overcome the difficulties of conventional design approaches, especially when a large number of variables, constraints and objectives are involved, a new method based on a well-established evolutionary algorithm, particle swarm optimization (PSO), a weighted sum approach and a novel constraint handling strategy is presented in this study. Since conventional constraint handling strategies are neither effective nor easy to implement in multi-objective algorithms, a novel feasibility-based ranking strategy is introduced which is both extremely user-friendly and effective. A case study from industry has been investigated to illustrate the performance of the presented approach. The results show that the proposed algorithm can find near Pareto-optimal solutions with higher accuracy than the conventional non-dominated sorting genetic algorithm II (NSGA-II). Moreover, the difficulty of a trial-and-error process for setting the penalty parameters is resolved in this algorithm.
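
A compact sketch of the two ingredients named above, on a toy problem rather than the paper’s heat-exchanger case study: the objectives are scalarized by a weighted sum, and candidates are compared by a feasibility-based rule (a feasible candidate beats an infeasible one; among infeasible candidates, less constraint violation wins).

```python
import numpy as np

rng = np.random.default_rng(0)
w1, w2 = 0.6, 0.4

def objective(x):                       # weighted sum of two objectives
    return w1 * np.sum(x**2) + w2 * np.sum((x - 2) ** 2)

def violation(x):                       # toy constraint g(x) = x0 + x1 - 3 <= 0
    return max(0.0, x[0] + x[1] - 3.0)

def better(xa, xb):                     # feasibility-based ranking rule
    va, vb = violation(xa), violation(xb)
    if va == 0 and vb == 0:
        return objective(xa) < objective(xb)
    return va < vb                      # less violation wins (covers mixed case)

pos = rng.uniform(-5, 5, (30, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
gbest = min(pos, key=lambda x: (violation(x), objective(x)))

for _ in range(200):                    # standard PSO velocity/position update
    r1, r2 = rng.random((2, 30, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    for i in range(30):
        if better(pos[i], pbest[i]):
            pbest[i] = pos[i]
        if better(pbest[i], gbest):
            gbest = pbest[i].copy()
print("best point:", np.round(gbest, 3), "violation:", violation(gbest))
```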

Keywords: heat exchanger, multi-objective optimization, particle swarm optimization, NSGA-II, constraint handling

Procedia PDF Downloads 554
1883 Analytical Performance of Cobas C 8000 Analyzer Based on Sigma Metrics

Authors: Sairi Satari

Abstract:

Introduction: Six-sigma is a metric that quantifies the performance of processes as a rate of Defects-Per-Million Opportunities. Sigma methodology can be applied in the chemical pathology laboratory to evaluate process performance and provide evidence for process improvement in the quality assurance program. In the laboratory, these methods have been used to improve the timeliness of troubleshooting, reduce the cost and frequency of quality control, and minimize pre- and post-analytical errors. Aim: The aim of this study is to evaluate the sigma values of the Cobas 8000 analyzer based on the minimum requirements of the specification. Methodology: Twenty-one analytes were chosen for this study. The analytes were alanine aminotransferase (ALT), albumin, alkaline phosphatase (ALP), amylase, aspartate transaminase (AST), total bilirubin, calcium, chloride, cholesterol, HDL-cholesterol, creatinine, creatine kinase, glucose, lactate dehydrogenase (LDH), magnesium, potassium, protein, sodium, triglyceride, uric acid and urea. Total allowable error was obtained from the Clinical Laboratory Improvement Amendments (CLIA). The bias was calculated from the end-of-cycle reports of the Royal College of Pathologists of Australasia (RCPA) cycle from July to December 2016, and the coefficient of variation (CV) from six months of internal quality control (IQC) data. The sigma was calculated based on the formula: Sigma = (Total Error - Bias) / CV. The analytical performance was evaluated based on the sigma value: sigma > 6 is world class, sigma > 5 is excellent, sigma > 4 is good, sigma > 3 is satisfactory, and sigma < 3 indicates poor performance. Results: Based on the calculation, we found that 76% are world class (ALT, albumin, ALP, amylase, AST, total bilirubin, cholesterol, HDL-cholesterol, creatinine, creatine kinase, glucose, LDH, magnesium, potassium, triglyceride and uric acid), 14% are excellent (calcium, protein and urea), and 10% (chloride and sodium) require more frequent IQC, performed daily. Conclusion: Based on this study, we found that IQC should be performed more frequently only for chloride and sodium to ensure accurate and reliable analysis for patient management.
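
A minimal sketch of the sigma calculation and rating bands described above; the input values are illustrative, not results from the study.

```python
# Sigma metric: sigma = (allowable total error - |bias|) / CV, all in percent.
def sigma_metric(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
    return (tea_pct - abs(bias_pct)) / cv_pct

def rate(sigma: float) -> str:
    if sigma > 6: return "world class"
    if sigma > 5: return "excellent"
    if sigma > 4: return "good"
    if sigma > 3: return "satisfactory"
    return "poor"

# Illustrative values only (TEa from CLIA, bias from EQA, CV from IQC).
s = sigma_metric(tea_pct=10.0, bias_pct=1.2, cv_pct=1.3)
print(f"sigma = {s:.1f} ({rate(s)})")
```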

Keywords: sigma metrics, analytical performance, total error, bias

Procedia PDF Downloads 167
1882 Spatial Climate Changes in the Province of Macerata, Central Italy, Analyzed by GIS Software

Authors: Matteo Gentilucci, Marco Materazzi, Gilberto Pambianchi

Abstract:

Climate change is an increasingly central issue in the world because it affects many human activities. In this context, regional studies are of great importance because they sometimes differ from the general trend. This research focuses on a small area of central Italy that overlooks the Adriatic Sea, the province of Macerata. The aim is to analyze spatial climate change, for precipitation and temperature, over the last 3 climatological standard normals (1961-1990; 1971-2000; 1981-2010) through GIS software. The data collected from 30 weather stations for temperature and 61 rain gauges for precipitation were subject to quality controls: validation and homogenization. These data were fundamental for the spatialization of the variables (temperature and precipitation) through geostatistical techniques. To assess the best geostatistical technique for interpolation, the results of cross validation were used. The co-kriging method with altitude as the independent variable produced the best cross validation results for all time periods among the methods analysed, with 'root mean square error standardized' close to 1, 'mean standardized error' close to 0, and similar values of 'average standard error' and 'root mean square error'. The maps resulting from the analysis were compared by subtraction between rasters, producing 3 maps of annual variation and three further maps for each month of the year (1961/1990-1971/2000; 1971/2000-1981/2010; 1961/1990-1981/2010). The results show an increase in average annual temperature of about 0.1 °C between 1961-1990 and 1971-2000 and 0.6 °C between 1961-1990 and 1981-2010. Annual precipitation instead shows an opposite trend, with an average difference from 1961-1990 to 1971-2000 of about 35 mm and from 1961-1990 to 1981-2010 of about 60 mm. Furthermore, the differences between the areas have been highlighted with area graphs and summarized in several tables as descriptive analysis. For temperature between 1961-1990 and 1971-2000, the most areally represented frequency is 0.08 °C (77.04 km² out of a total of about 2800 km²), with a kurtosis of 3.95 and a skewness of 2.19. The differences for temperature from 1961-1990 to 1981-2010 instead show a most areally represented frequency of 0.83 °C (36.9 km²), with -0.45 kurtosis and 0.92 skewness. Therefore, it can be said that the distribution is more peaked for 1961/1990-1971/2000 and smoother, but with stronger growth, for 1961/1990-1981/2010. In contrast, precipitation shows a very similar shape of distribution, although with different intensities, for both variation periods (1961/1990-1971/2000 and 1961/1990-1981/2010), with similar values of kurtosis (1st = 1.93; 2nd = 1.34), skewness (1st = 1.81; 2nd = 1.62) and area of the most represented frequency (1st = 60.72 km²; 2nd = 52.80 km²). In conclusion, this methodology of analysis allows the assessment of small-scale climate change for each month of the year and could be further investigated in relation to regional atmospheric dynamics.
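
As a small illustration of the raster-difference analysis, the sketch below subtracts two hypothetical interpolated grids and computes the distribution statistics reported above (skewness, and Pearson kurtosis, where 3 corresponds to a normal distribution); the arrays are synthetic stand-ins for the co-kriged maps.

```python
import numpy as np
from scipy import stats

# Hypothetical interpolated temperature rasters (same grid), standing in for
# the 1961-1990 and 1981-2010 co-kriged climatologies.
rng = np.random.default_rng(0)
t_6190 = rng.normal(13.0, 1.0, (200, 250))
t_8110 = t_6190 + rng.normal(0.6, 0.3, (200, 250))

diff = t_8110 - t_6190                       # change map (raster subtraction)
vals = diff.ravel()
print(f"mean change: {vals.mean():.2f} C")
print(f"skewness:    {stats.skew(vals):.2f}")
print(f"kurtosis:    {stats.kurtosis(vals, fisher=False):.2f}")  # Pearson
```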

Keywords: climate change, GIS, interpolation, co-kriging

Procedia PDF Downloads 121
1881 Deep Learning for Renewable Power Forecasting: An Approach Using LSTM Neural Networks

Authors: Fazıl Gökgöz, Fahrettin Filiz

Abstract:

Load forecasting has become crucial in recent years and is a popular topic in the forecasting area. Many different power forecasting models have been tried out for this purpose. Electricity load forecasting is necessary for energy policies and healthy, reliable grid systems. Effective power forecasting of renewable energy load leads decision makers to minimize the costs of electric utilities and power plants. Forecasting tools are required that can be used to predict how much renewable energy can be utilized. The purpose of this study is to explore the effectiveness of LSTM-based neural networks for estimating renewable energy loads. In this study, we present models for predicting renewable energy loads based on deep neural networks, especially the Long Short-Term Memory (LSTM) algorithm. Deep learning allows multiple layers of models to learn representations of data, and LSTM networks are able to store information for long periods of time. Deep learning models have recently been used to forecast renewable energy sources, for example in predicting wind and solar power. Historical load and weather information represent the most important input variables for power forecasting models. The dataset contains power consumption measurements gathered between January 2016 and December 2017 with one-hour resolution; the models use publicly available data from the Turkish Renewable Energy Resources Support Mechanism. Forecasting studies have been carried out with these data via a deep neural network approach, including the LSTM technique, for Turkish electricity markets. 432 different models were created by varying the layer cell counts and dropout. The adaptive moment estimation (ADAM) algorithm is used for training as a gradient-based optimizer instead of SGD (stochastic gradient descent); ADAM performed better than SGD in terms of faster convergence and lower error rates. Model performance is compared according to MAE (Mean Absolute Error) and MSE (Mean Squared Error). The best MAE results out of the 432 tested models are 0.66, 0.74, 0.85 and 1.09. The forecasting performance of the proposed LSTM models gives successful results compared to the literature.
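
A minimal Keras sketch of the kind of LSTM forecaster described, assuming 24 hourly lags as input and the next-hour load as target; the layer size and dropout rate are placeholders for the hyperparameters the study sweeps over its 432 models.

```python
import numpy as np
import tensorflow as tf

# Synthetic stand-in data: 24 hourly lags -> next-hour load.
X = np.random.rand(1000, 24, 1).astype("float32")
y = np.random.rand(1000, 1).astype("float32")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(24, 1)),
    tf.keras.layers.LSTM(64),          # cell count is a tuned hyperparameter
    tf.keras.layers.Dropout(0.2),      # dropout rate is a tuned hyperparameter
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])  # ADAM optimizer
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)
```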

Keywords: deep learning, long short term memory, energy, renewable energy load forecasting

Procedia PDF Downloads 260
1880 Subpixel Corner Detection for Monocular Camera Linear Model Research

Authors: Guorong Sui, Xingwei Jia, Fei Tong, Xiumin Gao

Abstract:

Camera calibration is a fundamental issue in high-precision noncontact measurement, and it is necessary to analyze and study the reliability and application range of the linear model that is often used in camera calibration. According to the imaging features of monocular cameras, a camera model based on image pixel coordinates and three-dimensional space coordinates is built. Using our own customized template, the image pixel coordinates are obtained by a subpixel corner detection method. Without considering the aberration of the optical system, feature extraction and linearity analysis of the line segments in the template are performed. Moreover, the experiment is repeated 11 times, varying the measuring distance each time. Finally, the linearity of the camera is obtained by fitting the 11 groups of data. The camera model measurement results show that the relative error does not exceed 1% and the repeated measurement error is no more than 0.1 mm in magnitude. Meanwhile, it is found that the model shows some measurement differences in different regions and at different object distances. The experimental results show that this linear model is simple and practical and has good linearity within a certain object distance. These results provide a powerful basis for establishing the linear camera model and will have potential value for actual engineering measurement.
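
A short OpenCV sketch of subpixel corner refinement of the kind described above: coarse corners are detected first and then refined with cornerSubPix. The template image path and the detector parameters are assumptions.

```python
import cv2
import numpy as np

img = cv2.imread("template.png")          # hypothetical calibration template
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Coarse corner locations first, then refinement to subpixel accuracy.
corners = cv2.goodFeaturesToTrack(gray, maxCorners=100,
                                  qualityLevel=0.01, minDistance=10)
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 40, 0.001)
refined = cv2.cornerSubPix(gray, np.float32(corners), (5, 5), (-1, -1), criteria)
print(refined[:5].reshape(-1, 2))         # subpixel (x, y) image coordinates
```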

Keywords: camera linear model, geometric imaging relationship, image pixel coordinates, three dimensional space coordinates, sub-pixel corner detection

Procedia PDF Downloads 274
1879 The Mirage of Progress? A Longitudinal Study of Japanese Students’ L2 Oral Grammar

Authors: Robert Long, Hiroaki Watanabe

Abstract:

This longitudinal study examines the grammatical errors in Japanese university students’ dialogues with a native speaker over an academic year. The L2 interactions of 15 Japanese speakers were taken from the JUSFC2018 corpus (April/May 2018) and the JUSFC2019 corpus (January/February). The corpora were based on a self-introduction monologue and a three-question dialogue; however, this study examines the grammatical accuracy found in the dialogues. The research questions focused on whether there was a significant difference in grammatical accuracy between the first interview session in 2018 and the second one the following year, specifically regarding errors in clauses per 100 words, global and local errors, and errors related to particular parts of speech. The investigation also asked which forms showed the least improvement or had worsened. Descriptive statistics showed that error-free clauses per 100 words decreased slightly, while clauses with errors per 100 words increased by one clause. Global errors showed a significant decline, while local errors increased from 97 to 158. For errors related to parts of speech, a t-test confirmed a significant difference between the two speech corpora, with greater error frequency occurring in the 2019 corpus. These data highlight the difficulty students have in editing their own output.

Keywords: clause analysis, global vs. local errors, grammatical accuracy, L2 output, longitudinal study

Procedia PDF Downloads 126
1878 Determinants of Aggregate Electricity Consumption in Ghana: A Multivariate Time Series Analysis

Authors: Renata Konadu

Abstract:

In Ghana, electricity has become the main form of energy that all sectors of the economy rely on for their businesses. Therefore, as the economy grows, the demand for and consumption of electricity also grow due to the heavy dependence on it. However, since the supply of electricity has not increased to match demand, there have been frequent power outages and load shedding, affecting business performance. To solve this problem and advance policies to secure electricity in Ghana, it is imperative that the factors that cause consumption to increase be analysed, considering the three classes of consumers: residential, industrial and non-residential. The main argument, however, is that exports of electricity to neighbouring countries should be included in the electricity consumption model and considered as one of the significant factors that can decrease or increase consumption. The author made use of multivariate time series data from 1980-2010 and econometric models such as Ordinary Least Squares (OLS) and the Vector Error Correction Model (VECM). Findings show that GDP growth, urban population growth, electricity exports and industry value added to GDP were cointegrated. The results also showed that there is unidirectional causality from electricity exports, GDP growth and industry value added to GDP to electricity consumption in the long run. However, in the short run, causality was found between all the variables and electricity consumption. The results have useful implications for energy policy makers, especially with regard to electricity consumption, demand, and supply.
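
A minimal statsmodels sketch of the cointegration-rank selection and VECM estimation pipeline described above, on synthetic stand-in series; the variable names are illustrative.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

# Synthetic annual series standing in for the study's variables.
rng = np.random.default_rng(0)
data = pd.DataFrame(
    rng.normal(size=(31, 4)).cumsum(axis=0),
    columns=["elec_consumption", "gdp_growth", "urban_pop", "elec_exports"],
)

rank = select_coint_rank(data, det_order=0, k_ar_diff=1, signif=0.05)
model = VECM(data, k_ar_diff=1, coint_rank=rank.rank, deterministic="ci")
res = model.fit()
print(res.alpha)  # error-correction (speed-of-adjustment) coefficients
```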

Keywords: electricity consumption, energy policy, GDP growth, vector error correction model

Procedia PDF Downloads 430
1877 Modeling of the Attitude Control Reaction Wheels of a Spacecraft in Software in the Loop Test Bed

Authors: Amr AbdelAzim Ali, G. A. Elsheikh, Moutaz M. Hegazy

Abstract:

Reaction wheels (RWs) are generally used as the main actuators in the attitude control system (ACS) of a spacecraft (SC) for fast orientation and high pointing accuracy. In order to achieve the required accuracy of the RW model, the main characteristics of the RWs that necessitate analysis during the ACS design phase include the technical features, the operating sequence and the RW control logic, which are included in the function (behavior) model. A mathematical model is developed that includes the various error sources. The errors in control torque include relative and absolute errors and the error due to time delay, while the errors in angular velocity are due to differences between average and real speed, resolution error, looseness in the installation of the angular sensor, and synchronization errors. The friction torque presented in the model includes the different features of the friction phenomenon: steady-velocity friction, static friction and break-away torque, and frictional lag. The model response is compared with the experimental torque and frequency-response characteristics of the tested RWs. Based on the created RW model, some criteria for the optimization-based control torque allocation problem can be recommended, such as avoiding the zero-speed crossing, biasing the angular velocity, or preventing the wheel from running at the same angular velocity.
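
As a hedged illustration of the friction features listed above (static/break-away torque, steady-velocity friction, and a viscous component), the sketch below uses a Stribeck-type curve; the parameters are illustrative, not values identified from the tested wheels.

```python
import numpy as np

def friction_torque(omega, T_coulomb=2e-3, T_static=3e-3, omega_s=5.0, b=1e-5):
    """Stribeck-type friction sketch (N*m): break-away torque decaying to
    steady Coulomb friction, plus a viscous term. Parameters are illustrative."""
    stribeck = T_coulomb + (T_static - T_coulomb) * np.exp(-(omega / omega_s) ** 2)
    return np.sign(omega) * stribeck + b * omega

print(friction_torque(np.array([-100.0, -1.0, 1.0, 100.0])))  # rad/s inputs
```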

Keywords: friction torque, reaction wheels modeling, software in the loop, spacecraft attitude control

Procedia PDF Downloads 262
1876 Examining the Changes in Complexity, Accuracy, and Fluency in Japanese L2 Writing Over an Academic Semester

Authors: Robert Long

Abstract:

This study presents the results of an investigation of the evolution of complexity, accuracy, and fluency (CAF) in the compositions of Japanese L2 university students over an academic semester. One goal was to determine whether writing abilities improved over the term; another was to examine methods of editing. Participants had 30 minutes to write each essay, with an additional 10 minutes allotted for editing. As for editing, participants were divided into two groups, one of which utilized an online grammar checker while the other half self-edited their initial manuscripts. There was a total of 159 students from the three institutions. The research questions focused on determining whether CAF changed over the semester, identifying potential differences between editing techniques, and describing the connections between the CAF dimensions. According to the findings, there was some improvement in accuracy (fewer errors in all three measures), whereas there was a marked decline in complexity and fluency. As for the second research aim, concerning the interaction among the three dimensions and whether increases in fluency are offset by decreases in grammatical accuracy, the results showed a logically high correlation between clauses and word counts, between mean length of T-unit (MLT) and coordinate phrases per T-unit (CP/T), and between MLT and clauses per T-unit (C/T); furthermore, word counts and the errors-per-100-words ratio correlated highly with error-free clause totals (EFCT). Syntactic complexity had a negative correlation with EFCT, indicating that greater syntactic complexity is related to decreased accuracy. Concerning the difference in error correction between those who self-edited and those who used an online grammar correction tool, the results indicated that the variable of error-free clause ratios (EFCR) showed the greatest difference in accuracy, with fewer errors noted among writers using an online grammar checker. As for differences between the first and second (edited) drafts regarding CAF, the results indicated positive changes in accuracy, with the most significant change seen in complexity (CP/T and MLT) and relatively insignificant changes in fluency. The results also indicated significant differences among the three institutions, with Fujian University of Technology showing the most fluency and accuracy. These findings suggest that to raise students' awareness of their overall writing development, teachers should support them in developing more complex syntactic structures, improving their fluency, and making more effective use of online grammar checkers.

Keywords: complexity, accuracy, fluency, writing

Procedia PDF Downloads 32
1875 Performance of High Efficiency Video Codec over Wireless Channels

Authors: Mohd Ayyub Khan, Nadeem Akhtar

Abstract:

Due to recent advances in wireless communication technologies and hand-held devices, there is a huge demand for video-based applications such as video surveillance, video conferencing, remote surgery, Digital Video Broadcast (DVB), IPTV, online learning courses, YouTube, WhatsApp, Instagram, Facebook, and interactive video games. However, raw videos require very high bandwidth, which makes compression a must before their transmission over wireless channels. The High Efficiency Video Codec (HEVC) (also called H.265) is the latest state-of-the-art video coding standard, developed by the joint effort of the ITU-T and ISO/IEC teams. HEVC targets high-resolution videos, such as 4K or 8K, that can fulfil recent demands for video services. The compression ratio achieved by HEVC is twice that of its predecessor H.264/AVC at the same quality level. The compression efficiency is generally increased by removing more correlation between frames and pixels using complex techniques such as extensive intra and inter prediction. As more correlation is removed, the interdependency among coded bits increases. Thus, bit errors may have a large effect on the reconstructed video; sometimes even a single bit error can lead to catastrophic failure of the reconstructed video. In this paper, we study the performance of the HEVC bitstream over an additive white Gaussian noise (AWGN) channel. Moreover, HEVC over Quadrature Amplitude Modulation (QAM) combined with forward error correction (FEC) schemes is also explored over the noisy channel. The video is encoded using HEVC, and the coded bitstream is channel coded to provide some redundancy. The channel coded bitstream is then modulated using QAM and transmitted over the AWGN channel. At the receiver, the symbols are demodulated and channel decoded to obtain the video bitstream, which is then used to reconstruct the video using the HEVC decoder. It is observed that as the signal-to-noise ratio of the channel decreases, the quality of the reconstructed video decreases drastically. Using proper FEC codes, the quality of the video can be restored to a certain extent. Thus, the performance analysis of HEVC presented in this paper may assist in designing the optimized FEC code rate such that the quality of the reconstructed video is maximized over wireless channels.
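
A miniature version of the transmission chain studied, without HEVC or FEC: QPSK (4-QAM) symbols over an AWGN channel, with the bit error rate measured at the receiver. The Eb/N0 value and block sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, 200_000)
# Gray-mapped QPSK: one bit on each quadrature branch, unit symbol energy.
symbols = ((2 * bits[0::2] - 1) + 1j * (2 * bits[1::2] - 1)) / np.sqrt(2)

ebn0_db = 6.0
ebn0 = 10 ** (ebn0_db / 10)
sigma = np.sqrt(1 / (4 * ebn0))          # per-dimension noise std, 2 bits/symbol
noisy = symbols + sigma * (rng.normal(size=symbols.size)
                           + 1j * rng.normal(size=symbols.size))

rx = np.empty_like(bits)                 # hard-decision demodulation
rx[0::2] = (noisy.real > 0).astype(int)
rx[1::2] = (noisy.imag > 0).astype(int)
print(f"BER at Eb/N0 = {ebn0_db} dB: {np.mean(rx != bits):.2e}")
```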

Keywords: AWGN, forward error correction, HEVC, video coding, QAM

Procedia PDF Downloads 146
1874 The Influence of Using Soft Knee Pads on Static and Dynamic Balance among Male Athletes and Non-Athletes

Authors: Yaser Kazemzadeh, Keyvan Molanoruzy, Mojtaba Izady

Abstract:

Balance is the key component of motor skills for maintaining postural control and executing complex skills. The present study was designed to evaluate the impact of soft knee pads on the static and dynamic balance of male athletes. For this aim, thirty young athletes from different sport fields with 3 years of professional sport training background and thirty healthy young non-athletic men (age: 24.5 ± 2.9 and 24.3 ± 2.4 years; weight: 77.2 ± 4.3 and 80.9 ± 6.3 kg; height: 175 ± 2.84 and 172 ± 5.44 cm, respectively) were selected as subjects. Subjects then performed, under two conditions (without knee pads and with soft knee pads made of neoprene), the Balance Error Scoring System (BESS) test to assess static balance and the star excursion test to assess dynamic balance. For data analysis, t-tests and one-way ANOVA were used at a significance level of α = 0.05. The results showed that the use of soft knee pads significantly reduced the error rate in the static balance test (p ≤ 0.05). Also, the use of soft knee pads decreased the score of the athlete group and increased the score of the non-athletic group in the star test (p ≤ 0.05). These findings indicate that the use of knee pads affects static and dynamic balance in athletes and non-athletes in different ways, and may increase performance in sports that rely on static balance while decreasing performance in sports that rely on dynamic balance.

Keywords: static balance, dynamic balance, soft knee pads, athletic men, non-athletic men

Procedia PDF Downloads 285
1873 The Impact of Natural Resources on Financial Development: The Global Perspective

Authors: Remy Jonkam Oben

Abstract:

Using a time series approach, this study investigates how natural resources impact financial development from a global perspective over the 1980-2019 period. Some important determinants of financial development (economic growth, trade openness, population growth, and investment) have been added to the model as control variables. Unit root tests have revealed that all the variables are integrated of order one. Johansen's cointegration test has shown that the variables are in a long-run equilibrium relationship. The vector error correction model (VECM) has estimated the coefficient of the error correction term (ECT), which suggests that the short-run values of natural resources, economic growth, trade openness, population growth, and investment contribute to financial development converging to its long-run equilibrium level at a 23.63% annual speed of adjustment. The estimated coefficients suggest that global natural resource rent has a statistically significant negative impact on global financial development in the long run (thereby validating the financial resource curse) but not in the short run. Causality test results imply that neither global natural resource rent nor global financial development Granger-causes the other.

Keywords: financial development, natural resources, resource curse hypothesis, time series analysis, Granger causality, global perspective

Procedia PDF Downloads 159
1872 Movie Genre Preference Prediction Using Machine Learning for Customer-Based Information

Authors: Haifeng Wang, Haili Zhang

Abstract:

Most movie recommendation systems have been developed to help customers find items of interest. This work introduces a predictive model usable by small and medium-sized enterprises (SMEs) that need a data-based, analytical approach to stock movies suited to local audiences and to retain more customers. We used classification models to extract features from thousands of customers’ demographic, behavioral and social information to predict their movie genre preferences. In the implementation, a Gaussian kernel support vector machine (SVM) classification model and a logistic regression model were built to extract features from sample data, and their in-sample test errors were compared. Out-of-sample error was also compared under different Vapnik-Chervonenkis (VC) dimensions in the machine learning algorithm to find and prevent overfitting. The Gaussian kernel SVM prediction model correctly predicts movie genre preferences in 85% of positive cases, and the accuracy of the algorithm increased to 93% with a smaller VC dimension and less overfitting. These findings advance our understanding of how to use a machine learning approach to predict customers’ preferences with a small data set and to design prediction tools for these enterprises.
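
A minimal scikit-learn sketch of an RBF-kernel SVM genre-preference classifier on synthetic customer features; the feature dimensions and labels are stand-ins for the study’s demographic, behavioral and social variables.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic customer features; label = 1 if the customer prefers a genre.
rng = np.random.default_rng(42)
X = rng.normal(size=(2000, 12))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=2000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```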

Keywords: computational social science, movie preference, machine learning, SVM

Procedia PDF Downloads 254
1871 Deflagration and Detonation Simulation in Hydrogen-Air Mixtures

Authors: Belyayev P. E., Makeyeva I. R., Mastyuk D. A., Pigasov E. E.

Abstract:

Previously, the phrase "hydrogen safety" was mostly used in the context of NPP safety. Due to the rise of interest in "green" and, particularly, hydrogen power engineering, the problem of hydrogen safety at industrial facilities has become ever more urgent. In Russia, the industrial production of hydrogen is planned to be performed by placing a chemical engineering plant near an NPP, which supplies the plant with the necessary energy. In this approach, the production of hydrogen involves a wide range of combustible gases, such as methane, carbon monoxide, and hydrogen itself. Considering probable incidents, a sudden combustible gas outburst into open space with subsequent ignition is less dangerous in itself than ignition of the combustible mixture in the presence of many pipelines, reactor vessels, and fitting frames. Even the ignition of 2100 cubic meters of hydrogen-air mixture in open space produces velocities and pressures that are much lower than the velocity and pressure under Chapman-Jouguet conditions, not exceeding 80 m/s and 6 kPa, respectively. However, blockage of the space, significant changes of channel diameter along the path of flame propagation, and the presence of gas suspension lead to significant deflagration acceleration and to its transition into detonation or quasi-detonation. At the same time, process parameters acquired from experiments at specific experimental facilities are not general, and their application to different facilities can only be conventional and qualitative in character. Yet conducting experimental deflagration and detonation investigations for each specific industrial facility project, in order to determine safe placement of infrastructure units, is not feasible due to the high cost and hazard, while numerical experiments are significantly cheaper and safer. Hence, the development of a numerical method that allows the description of reacting flows in domains with complex geometry seems promising. The basis for this method is the authors' recently developed modification of the Kuropatenko method for calculating shock waves, which allows its use in Eulerian coordinates. The current work contains the results of the development process. In addition, a comparison of numerical simulation results with experimental series on flame propagation in shock tubes with orifice plates is presented.

Keywords: CFD, reacting flow, DDT, gas explosion

Procedia PDF Downloads 82
1870 Implication of Soil and Seismic Ground Motion Variability on Dynamic Pile Group Impedance for Bridges

Authors: Muhammad Tariq Chaudhary

Abstract:

Bridges constitute a vital link in a transportation system, and their functionality after an earthquake is critical to reducing disruption to the social and economic activities of society. Bridges supported on pile foundations are commonly used in many earthquake-prone regions. In order to properly design or investigate the performance of such structures, it is imperative that the effect of soil-foundation-structure interaction be properly taken into account. This study focused on the influence of soil and seismic ground motion variability on the dynamic impedance of the pile-group foundations typically used for medium-span (about 30 m) urban viaduct bridges. Soil profiles corresponding to various AASHTO soil classes were selected from actual data for such bridges and/or from the literature. The selected soil profiles were subjected to 1-D wave propagation analysis to determine effective values of soil shear modulus and damping ratio for a suite of properly selected actual seismic ground motions varying in PGA from 0.01 g to 0.64 g and having variable velocity and frequency content. The effective values of the soil parameters were then employed to determine the dynamic impedance of pile groups in the horizontal, vertical and rocking modes in the various soil profiles. Pile diameter was kept constant across soil profiles, while pile length and number of piles were changed based on AASHTO design requirements for the various soil profiles and earthquake ground motions. Conclusions were drawn regarding the variability in effective soil shear modulus, soil damping, shear wave velocity and pile group impedance for the various soil profiles and ground motions, and regarding its implications for the design and evaluation of pile-supported bridges. It was found that even though the effective soil parameters underwent drastic variation with increasing PGA, the pile group impedance was not affected much in properly designed pile foundations, because pile length, the number of piles, or both increase correspondingly when foundations are subjected to increasing PGA or founded in weaker soil profiles.

Keywords: bridge, pile foundation, dynamic foundation impedance, soil profile, shear wave velocity, seismic ground motion, seismic wave propagation

Procedia PDF Downloads 319