Search results for: radial installation limit error
3571 Position and Speed Tracking of DC Motor Based on Experimental Analysis in LabVIEW
Authors: Muhammad Ilyas, Awais Khan, Syed Ali Raza Shah
Abstract:
DC motors are widely used in industry to provide mechanical power in the form of speed and torque. The position and speed control of DC motors is attracting the interest of the scientific community in robotics, especially for the robotic arm, a flexible-joint manipulator. The current research work is based on position control of DC motors using experimental investigations in LabVIEW. A linear control strategy is applied to track the position and speed of the DC motor, with comparative analysis in the LabVIEW platform and simulation analysis in MATLAB. The tracking error in the hardware setup based on LabVIEW programming is slightly greater than in the MATLAB simulation analysis due to the inertial load of the motor during steady-state conditions. The controller output shows that the input voltage applied to the DC motor varies between 0-8 V to ensure minimal steady-state error while tracking the position and speed of the DC motor.
Keywords: DC motor, LabVIEW, proportional integral derivative control, position tracking, speed tracking
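As a rough, self-contained illustration of the linear tracking strategy this abstract describes, the sketch below closes a PID loop around a first-order DC-motor model. The motor gain, time constant and controller gains are illustrative assumptions, not the authors' values, and the controller output is clipped to an 8 V magnitude (bidirectional, rather than the strictly unipolar 0-8 V range reported):

```python
import numpy as np

def simulate_pid_position(kp=8.0, ki=2.0, kd=0.5, target=1.0,
                          dt=0.001, t_end=3.0, v_max=8.0):
    """Simulate PID position control of a simplified DC motor.

    The motor is modeled as a first-order speed response plus an
    integrator for position; gain K and time constant tau are
    illustrative values, not taken from the paper.
    """
    K, tau = 2.0, 0.1                     # assumed motor gain and time constant
    pos, speed, integral, prev_err = 0.0, 0.0, 0.0, target
    for _ in range(int(t_end / dt)):
        err = target - pos
        integral += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        # Controller output clipped to an 8 V magnitude
        v = float(np.clip(kp * err + ki * integral + kd * deriv, -v_max, v_max))
        speed += (K * v - speed) / tau * dt   # first-order speed dynamics
        pos += speed * dt
    return pos

final_pos = simulate_pid_position()
steady_error = abs(1.0 - final_pos)
```

With these assumed gains the loop is stable and the residual tracking error after a few seconds is small, mirroring the "minimal steady-state error" behaviour the abstract reports.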
Procedia PDF Downloads 106
3570 Signal Processing Techniques for Adaptive Beamforming with Robustness
Authors: Ju-Hong Lee, Ching-Wei Liao
Abstract:
Adaptive beamforming using an antenna array of sensors is useful in the process of adaptively detecting and preserving the presence of the desired signal while suppressing the interference and the background noise. Conventional adaptive array beamforming requires prior information on either the impinging direction or the waveform of the desired signal to adapt the weights. The adaptive weights of an antenna array beamformer under a steered-beam constraint are calculated by minimizing the output power of the beamformer subject to the constraint that forces the beamformer to make a constant response in the steering direction. Hence, the performance of the beamformer is very sensitive to the accuracy of the steering operation. In the literature, it is well known that the performance of an adaptive beamformer deteriorates under any steering angle error encountered in many practical applications, e.g., wireless communication systems with massive antennas deployed at the base station and user equipment. Hence, developing effective signal processing techniques to deal with the problem of steering angle error in array beamforming systems has become an important research topic. In this paper, we present an effective signal processing technique for constructing an adaptive beamformer robust against steering angle error. The proposed array beamformer adaptively estimates the actual direction of the desired signal by using the presumed steering vector and the received array data snapshots. Based on the presumed steering vector and a preset angle range for steering mismatch tolerance, we first create a matrix related to the direction vectors of the signal sources. Two projection matrices are generated from this matrix. The projection matrix associated with the desired signal information and the received array data are utilized to iteratively estimate the actual direction vector of the desired signal.
The estimated direction vector of the desired signal is then used to appropriately determine the quiescent weight vector. The other projection matrix serves as the signal blocking matrix required for performing adaptive beamforming. Accordingly, the proposed beamformer consists of adaptive quiescent weights and partially adaptive weights. Several computer simulation examples are provided for evaluating and comparing the proposed technique with existing robust techniques.
Keywords: adaptive beamforming, robustness, signal blocking, steering angle error
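A minimal numerical sketch of the projection idea sketched above (a presumed steering vector plus a preset mismatch range) might look as follows. The array geometry, subspace rank, grid spacings and matching step are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 8, 200                       # sensors and snapshots (illustrative)
d = 0.5                             # element spacing in wavelengths

def steer(theta_deg):
    """Steering vector of a uniform linear array, unit-normalized."""
    k = 2.0 * np.pi * d * np.sin(np.radians(theta_deg))
    return np.exp(1j * k * np.arange(N)) / np.sqrt(N)

theta_true, theta_presumed = 12.0, 10.0      # actual vs presumed direction
s = rng.standard_normal(M) + 1j * rng.standard_normal(M)
noise = 0.1 * (rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M)))
X = np.outer(steer(theta_true), s) + noise   # received snapshots

# Subspace spanned by steering vectors over the presumed mismatch range
grid = np.arange(theta_presumed - 5.0, theta_presumed + 5.01, 0.5)
A = np.column_stack([steer(t) for t in grid])
U, _, _ = np.linalg.svd(A, full_matrices=False)
P = U[:, :3] @ U[:, :3].conj().T             # rank-3 projection (assumed)

# Principal eigenvector of the sample covariance, cleaned by the projection
R = X @ X.conj().T / M
_, V = np.linalg.eigh(R)
v = P @ V[:, -1]

# Estimate the actual direction by matching against a fine steering grid
fine = np.arange(5.0, 15.01, 0.05)
scores = [abs(steer(t).conj() @ v) for t in fine]
theta_hat = float(fine[int(np.argmax(scores))])
```

Under this toy setup the estimate lands near the actual 12° direction rather than the presumed 10°, which is the qualitative behaviour the abstract claims for its iterative direction estimation.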
Procedia PDF Downloads 124
3569 High-Pressure Steam Turbine for Medium-Scale Concentrated Solar Power Plants
Authors: Ambra Giovannelli, Coriolano Salvini
Abstract:
Much effort has been spent on the design and development of Concentrated Solar Power (CSP) plants worldwide. Most of them are for on-grid electricity generation, and they are large plants which can benefit from economies of scale. Nevertheless, several potential applications for small and medium-scale CSP plants are relevant in the industrial sector as well as for off-grid purposes (i.e., in rural contexts). In a wide range of industrial processes, CSP technologies can be used for heat generation, replacing conventional primary sources. For such a market, proven technologies (usually hybrid solutions) already exist: more than 100 installations, especially in developing countries, are in operation, and their performance can be verified. On the other hand, concerning off-grid applications, solar technologies are not so mature. Even if the market offers a potential deployment of such systems, especially in countries where access to the grid is strongly limited, optimized solutions have not been developed yet. In this context, steam power plants can be taken into consideration for medium-scale installations, due to the recent results achieved with direct steam generation systems based on paraboloidal dish or Fresnel lens solar concentrators. Steam at 4.0-4.5 MPa and 500°C can be produced directly by means of innovative solar receivers (some prototypes already exist). Although this could seem a promising technology, steam turbines commercially available at present do not cover the required cycle specifications. In particular, while low-pressure turbines already exist on the market, high-pressure groups, necessary for the abovementioned applications, are not available. The present paper deals with the preliminary design of a high-pressure steam turbine group for a medium-scale CSP plant (200-1000 kWe). Such a group is arranged in a single geared package composed of four radial expander wheels.
Such wheels have been chosen on the basis of automotive turbocharging technology and then modified to take the new requirements into account. Results related to the preliminary geometry selection and to the analysis of the high-pressure turbine group performance are reported and widely discussed.
Keywords: concentrated solar power (CSP) plants, steam turbine, radial turbine, medium-scale power plants
Procedia PDF Downloads 217
3568 Permeability Prediction Based on Hydraulic Flow Unit Identification and Artificial Neural Networks
Authors: Emad A. Mohammed
Abstract:
The concept of hydraulic flow units (HFU) has been used for decades in the petroleum industry to improve the prediction of permeability. This concept is strongly related to the flow zone indicator (FZI), which is a function of the reservoir quality index (RQI). Both indices are based on the porosity and permeability of reservoir core samples. It is assumed that core samples with similar FZI values belong to the same HFU. Thus, after dividing the porosity-permeability data based on the HFU, transformations can be done in order to estimate the permeability from the porosity. The conventional practice is to use a power law transformation based on conventional HFU, where the percentage error is considerably high. In this paper, a neural network technique is employed as a soft computing transformation method to predict permeability instead of the power law method, in order to reduce the percentage error. This technique is based on HFU identification, for which the method of Amaefule et al. (1993) is utilized. In this regard, the Kozeny-Carman (K-C) model and the modified K-C model of Hasan and Hossain (2011) are employed. A comparison is made between the two transformation techniques for the two porosity-permeability models. Results show that the modified K-C model helps in getting better results with a lower percentage error in predicting permeability. The results also show that the use of artificial intelligence techniques gives more accurate predictions than the power law method. This study was conducted on a heterogeneous complex carbonate reservoir in Oman. Data were collected from seven wells to obtain the permeability correlations for the whole field. The findings of this study will help in getting a better estimation of the permeability of a complex reservoir.
Keywords: permeability, hydraulic flow units, artificial intelligence, correlation
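The RQI/FZI definitions of Amaefule et al. (1993) referenced above can be sketched directly. The core data below are synthetic, not the Oman field data used in the study:

```python
import numpy as np

# Synthetic core data (porosity as a fraction, permeability in mD);
# values are illustrative only.
phi = np.array([0.08, 0.10, 0.12, 0.15, 0.18, 0.20, 0.22, 0.25])
k   = np.array([0.5,  1.2,  3.0,  12.0, 40.0, 90.0, 180.0, 400.0])

# Amaefule et al. (1993): RQI = 0.0314*sqrt(k/phi), phi_z = phi/(1-phi),
# FZI = RQI/phi_z; samples with similar FZI share a hydraulic flow unit.
rqi = 0.0314 * np.sqrt(k / phi)
phi_z = phi / (1.0 - phi)
fzi = rqi / phi_z

# Inverting the definition gives the permeability transform per HFU:
# k = 1014 * FZI^2 * phi^3 / (1 - phi)^2   (1014 ~ 1/0.0314^2)
k_pred = 1014.0 * fzi**2 * phi**3 / (1.0 - phi)**2
```

Here `k_pred` recovers the input permeabilities, confirming the algebra; in practice a per-HFU mean FZI (or, as in the paper, a neural network) replaces the sample-wise FZI when predicting permeability from porosity alone.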
Procedia PDF Downloads 136
3567 Four-Electron Auger Process for Hollow Ions
Authors: Shahin A. Abdel-Naby, James P. Colgan, Michael S. Pindzola
Abstract:
A time-dependent close-coupling method is developed to calculate total, double, and triple autoionization rates for hollow atomic ions of four-electron systems. This work was motivated by recent observations of the four-electron Auger process in near K-edge photoionization of C+ ions. The time-dependent close-coupled equations are solved using lattice techniques to obtain a discrete representation of radial wave functions and all operators on a four-dimensional grid with uniform spacing. Initial excited states are obtained by relaxation of the Schrodinger equation in imaginary time using a Schmidt orthogonalization method involving interior subshells. The radial wave function grids are partitioned over the cores of a massively parallel computer, which is essential due to the large memory requirements needed to store the coupled-wave functions and the long run times needed to reach convergence of the ionization process. Total, double, and triple autoionization rates are obtained by propagating the time-dependent close-coupled equations in real time using integration over bound and continuum single-particle states. These states are generated by matrix diagonalization of one-electron Hamiltonians. The total autoionization rate for each L excited state is found to be slightly above the single autoionization rate for the excited configuration obtained using configuration-average distorted-wave theory. As expected, we find the double and triple autoionization rates to be much smaller than the total autoionization rates. Future work can extend this approach to study electron-impact triple ionization of atoms or ions. The work was supported in part by grants from the American University of Sharjah and the US Department of Energy. Computational work was carried out at the National Energy Research Scientific Computing Center (NERSC) in Berkeley, California, USA.
Keywords: hollow atoms, autoionization, Auger rates, time-dependent close-coupling method
Procedia PDF Downloads 153
3566 The Control of Wall Thickness Tolerance during Pipe Purchase Stage Based on Reliability Approach
Authors: Weichao Yu, Kai Wen, Weihe Huang, Yang Yang, Jing Gong
Abstract:
Metal-loss corrosion is a major threat to the safety and integrity of gas pipelines, as it may result in burst failures which can cause severe consequences, including enormous economic losses as well as personnel casualties. Therefore, it is important to ensure the integrity and efficiency of a corroding pipeline, considering the value of the wall thickness, which plays an important role in the failure probability of the corroding pipeline. In practice, the wall thickness is controlled during the pipe purchase stage. For example, the API SPEC 5L standard regulates the allowable tolerance of the wall thickness from the specified value during pipe purchase. The allowable wall thickness tolerance will be used to determine the wall thickness distribution characteristics, such as the mean value, standard deviation and distribution type. Taking the uncertainties of the input variables in the burst limit-state function into account, the reliability approach rather than the deterministic approach will be used to evaluate the failure probability. Moreover, the cost of pipe purchase will be influenced by the allowable wall thickness tolerance: stricter control of the wall thickness usually corresponds to a higher pipe purchase cost. Therefore, changing the wall thickness tolerance will vary both the probability of a burst failure and the cost of the pipe. This paper describes an approach to optimize the wall thickness tolerance considering both the safety and economy of corroding pipelines. In this paper, the corrosion burst limit-state function in Annex O of CSA Z662-7 is employed to evaluate the failure probability using the Monte Carlo simulation technique. By changing the allowable wall thickness tolerance, the parameters of the wall thickness distribution in the limit-state function will be changed. Using the reliability approach, the corresponding variations in the burst failure probability will be shown.
On the other hand, changing the wall thickness tolerance will lead to a change in pipe purchase cost. Using the variation of the failure probability and pipe cost caused by changing the wall thickness tolerance specification, the optimal allowable tolerance can be obtained and used to define pipe purchase specifications.
Keywords: allowable tolerance, corroding pipeline segment, operation cost, production cost, reliability approach
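The reliability idea described above (tolerance drives the wall thickness spread, which drives the burst failure probability) can be sketched with Monte Carlo sampling. This uses a simple Barlow-type burst model, not the more detailed CSA Z662 Annex O limit state, and all numerical values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Illustrative inputs (assumed, not from the paper)
D, sigma_u, p_op = 0.610, 480e6, 8.0e6   # diameter (m), strength (Pa), MOP (Pa)
t_nom = 0.0095                           # nominal wall thickness (m)

def burst_failure_prob(tol):
    """Burst failure probability for a given +/- wall thickness
    tolerance, treating the tolerance band as a 3-sigma interval."""
    t = rng.normal(t_nom, tol * t_nom / 3.0, n)          # wall thickness
    d_corr = rng.normal(0.25 * t_nom, 0.05 * t_nom, n)   # corrosion depth
    p_burst = 2.0 * (t - d_corr) * sigma_u / D           # Barlow-type burst pressure
    return float(np.mean(p_burst < p_op))                # P(burst), g < 0

pf_loose = burst_failure_prob(0.10)   # +/-10 % tolerance
pf_tight = burst_failure_prob(0.05)   # +/-5 % tolerance
```

Tightening the tolerance narrows the thickness distribution and lowers the estimated failure probability, which is exactly the trade-off against purchase cost that the optimization in the abstract exploits.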
Procedia PDF Downloads 396
3565 Study of Strontium Sorption onto Indian Bentonite
Authors: Pankaj Pathak, Susmita Sharma
Abstract:
Incessant industrial growth fulfills the energy demand of present-day society; at the same time, it produces a huge amount of waste, which could be hazardous or non-hazardous in nature. These wastes come from different sources, viz. nuclear power, thermal power and coal mines, and contain different types of contaminants. One of the emergent contaminants, strontium, is used in the present study. The isotope Sr-90 is radioactive, with a half-life of 28.8 years, and the permissible limit of strontium in drinking water is 1.5 ppm. Concentrations above the permissible limit cause several types of diseases in human beings. Therefore, the safe disposal of strontium into the ground becomes a major challenge for researchers. In this context, bentonite is used as an efficient material to retain strontium in the ground due to its specific physical, chemical and mineralogical properties, which exhibit a high cation exchange capacity and specific surface area. These properties influence the interaction between strontium and bentonite, which is quantified by employing a parameter known as the distribution coefficient. A batch test was conducted, and sorption isotherms were modelled at different interaction times. The pseudo-first-order and pseudo-second-order kinetic models have been used to fit the experimental data, which helps to determine the sorption rate and mechanism.
Keywords: bentonite, interaction time, sorption, strontium
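The pseudo-second-order kinetic fit mentioned above is commonly done through its linearized form. The batch-test data below are synthetic, not the study's measurements:

```python
import numpy as np

# Illustrative data: time (min) and sorbed amount q_t (mg/g), generated
# here from an exact pseudo-second-order (PSO) model with assumed
# parameters qe = 9 mg/g and k2 = 0.01 g/(mg min).
t = np.array([5.0, 10.0, 20.0, 40.0, 60.0, 120.0, 240.0])
qe_true, k2_true = 9.0, 0.01
qt = (k2_true * qe_true**2 * t) / (1.0 + k2_true * qe_true * t)

# PSO linearization: t/q_t = 1/(k2*qe^2) + t/qe, so a straight line of
# t/qt against t has slope 1/qe and intercept 1/(k2*qe^2).
# (The pseudo-first-order model, qt = qe*(1 - exp(-k1*t)), is usually
# fitted the same way via log(qe - qt) against t.)
slope, intercept = np.polyfit(t, t / qt, 1)
qe_fit = 1.0 / slope
k2_fit = 1.0 / (intercept * qe_fit**2)
```

Because the synthetic data lie exactly on the PSO curve, the fit returns the generating parameters; with real batch-test data the quality of the two linear fits is what discriminates between the first- and second-order mechanisms.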
Procedia PDF Downloads 305
3564 Experimental Study on Single Bay RC Frame Designed Using EC8 under In-Plane Cyclic Loading
Authors: N. H. Hamid, M. S. Syaref, M. I. Adiyanto, M. Mohamed
Abstract:
A one-half scale single-bay, two-storey RC frame together with a foundation beam and a mass concrete block is investigated. The moment-resisting RC frame was designed using EC8, including the provisions for seismic loading and the detailing of its connections. The objective of the experimental work is to determine the seismic behaviour of the RC frame under in-plane lateral cyclic loading using the displacement control method. A double actuator is placed at the centre of the mass concrete block at the top of the frame to represent the seismic load. The drifts start from ±0.01% and go up to ±2.25%, with increments of ±0.25% drift. The ultimate lateral load of 158.48 kN was recorded at +2.25% drift in the pushing direction and -126.09 kN in the pulling direction. From the experimental hysteresis loops, parameters such as lateral strength capacity, stiffness, ductility and equivalent viscous damping can be obtained. The RC frame behaves in an elastic manner, followed by inelastic behaviour after reaching the yield limit. The ductility value for this type of frame is 4, which lies between the limits of 3 and 6. Therefore, it is recommended to build this RC frame in moderate seismic regions under Ductility Class Medium (DCM), such as Sabah, East Malaysia.
Keywords: single bay, moment resisting RC frame, ductility class medium, inelastic behavior, seismic load
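One of the quantities the abstract extracts from hysteresis loops, the equivalent viscous damping, follows from the loop area. The sketch below uses an idealized elliptical loop with assumed amplitudes, not the test data:

```python
import numpy as np

# Idealized hysteresis loop (illustrative): displacement amplitude
# 20 mm, in-phase force amplitude 150 kN, loop "width" 30 kN.
theta = np.linspace(0.0, 2.0 * np.pi, 1000)
disp = 20.0 * np.cos(theta)                            # mm
force = 150.0 * np.cos(theta) + 30.0 * np.sin(theta)   # kN

# Dissipated energy per cycle = loop area (shoelace formula on the
# closed displacement-force curve)
E_d = 0.5 * abs(np.sum(disp * np.roll(force, -1) - force * np.roll(disp, -1)))

# Elastic strain energy at peak response
E_s = 0.5 * 150.0 * 20.0

# Equivalent viscous damping ratio: xi = E_d / (4*pi*E_s)
xi_eq = E_d / (4.0 * np.pi * E_s)
```

For this ellipse the analytical loop area is pi*20*30, so `xi_eq` comes out at 0.10; on measured loops the same shoelace computation is applied cycle by cycle.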
Procedia PDF Downloads 389
3563 Physical Theory for One-Dimensional Correlated Electron Systems
Authors: Nelson Nenuwe
Abstract:
The behavior of interacting electrons in one dimension was studied by calculating correlation functions and critical exponents at zero and finite external magnetic fields for arbitrary band filling. The technique employed in this study is based on conformal field theory (CFT). The charge and spin degrees of freedom are separated and described by two independent conformal theories. A detailed comparison of the t-J model with the repulsive Hubbard model was then undertaken, with emphasis on their Tomonaga-Luttinger (TL) liquid properties. Near half-filling, the exponents of the t-J model take the values of the strong-correlation limit of the Hubbard model, and in the low-density limit the exponents are those of a non-interacting system. The critical exponents obtained in this study belong to the repulsive TL liquid (conducting phase) and the attractive TL liquid (superconducting phase). The theoretical results from this study find applications in one-dimensional organic conductors (TTF-TCNQ), organic superconductors (Bechgaard salts) and carbon nanotubes (SWCNTs, DWCNTs and MWCNTs). For instance, the critical exponent obtained in this study is consistent with the experimental result from optical and photoemission evidence of TL liquid behavior in the one-dimensional metallic Bechgaard salt (TMTSF)2PF6.
Keywords: critical exponents, conformal field theory, Hubbard model, t-J model
Procedia PDF Downloads 343
3562 Comparison between Some of Robust Regression Methods with OLS Method with Application
Authors: Sizar Abed Mohammed, Zahraa Ghazi Sadeeq
Abstract:
The classical least squares (OLS) method is used to estimate linear regression parameters when its assumptions hold, yielding estimators with good properties such as unbiasedness, minimum variance and consistency. Alternative statistical techniques, known as robust (or resistant) methods, have been developed to estimate the parameters when the data are contaminated with outliers. In this paper, three robust methods are studied: the maximum likelihood type estimator (M-estimator), the modified maximum likelihood type estimator (MM-estimator) and the least trimmed squares estimator (LTS-estimator), and their results are compared with the OLS method. These methods were applied to real data taken from the Duhok company for manufacturing furniture, and the results were compared using the criteria mean squared error (MSE), mean absolute percentage error (MAPE) and mean sum of absolute errors (MSAE). Important conclusions of this study are the following. A number of outlying values were detected by the four methods in the furniture-line data, and the fitted values are very close to the data; this indicates that the distribution of the errors is close to normal. In the doors-line data, however, OLS detected fewer outlying values than the robust methods, which means that the distribution of the errors departs from normality. Another important conclusion is that the parameter estimates obtained by OLS are very far from those obtained by the robust methods for the doors line: the LTS-estimator gave better results under the MSE criterion, and the M-estimator gave better results under the MAPE criterion. Moreover, under the MSAE criterion, the MM-estimator performed best. The programs S-PLUS (version 8.0, Professional 2007), Minitab (version 13.2) and SPSS (version 17) were used to analyze the data.
Keywords: robust regression, LTS-estimator, M-estimator, MSE
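The contrast between OLS and an M-estimator on contaminated data can be sketched with iteratively reweighted least squares and Huber weights. The data below are synthetic (not the Duhok company data), and only the M-estimator of the three robust methods is illustrated:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
x = np.linspace(0.0, 10.0, n)
y = 3.0 + 2.0 * x + rng.normal(0.0, 0.5, n)   # true intercept 3, slope 2
y[::10] += 25.0                               # contaminate 10 % with outliers

X = np.column_stack([np.ones(n), x])

# OLS via the normal equations (least squares solution)
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

def huber_m_estimate(X, y, c=1.345, n_iter=50):
    """M-estimation with Huber weights via iteratively reweighted
    least squares -- a sketch of one of the robust methods compared."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iter):
        r = y - X @ beta
        s = np.median(np.abs(r)) / 0.6745 + 1e-12   # robust MAD scale
        u = np.abs(r / s)
        w = np.where(u <= c, 1.0, c / u)            # Huber weights
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return beta

beta_m = huber_m_estimate(X, y)
```

The outliers pull the OLS intercept well away from its true value, while the downweighting in the M-estimator keeps both coefficients close to the generating model, which is the qualitative pattern the study reports for the doors-line data.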
Procedia PDF Downloads 232
3561 Anomalies of Visual Perceptual Skills Amongst School Children in Foundation Phase in Olievenhoutbosch, Gauteng Province, South Africa
Authors: Maria Bonolo Mathevula
Abstract:
Background: Children are important members of communities, playing a major role in the future of any given country (Pera, Fails, Gelsomini, & Garzotto, 2018). Visual perceptual skills (VPSs) in children are an important health aspect of early childhood development through the Foundation Phase in school. Consequently, children should undergo visual screening before the commencement of schooling for early diagnosis of VPS anomalies, because the primary role of VPSs is to equip children for academic performance in general. Aim: The aim of this study was to determine the anomalies of VPSs amongst school children in the Foundation Phase. The study's objectives were to determine the prevalence of VPS anomalies amongst school children in the Foundation Phase; to determine the relationship between children's academic performance and VPS anomalies; and to investigate the relationship between VPS anomalies and refractive error. Methodology: This study used a mixed method, triangulating qualitative (interviews) and quantitative (questionnaire and clinical data) approaches, and was therefore descriptive by nature. The study's target population was school children in the Foundation Phase, selected by purposive sampling: children were included provided their parents had signed the consent form. Data were collected by means of standardized interviews, a questionnaire, a clinical data card, and the TVPS standard data card. Results: Although the study is still ongoing, the preliminary outcome based on data collected from one of the Foundation Phase schools suggests the following: While VPS anomalies are not prevalent, they nevertheless have an indirect relationship with children's academic performance in the Foundation Phase. Notably, VPS anomalies and refractive error are directly related, since the majority of children with refractive error, specifically compound hyperopic astigmatism, failed most subtests of the TVPS standard tests.
Conclusion: Based on the study's preliminary findings, it is clear that optometrists still have a lot to do as far as research on VPSs is concerned. Furthermore, the researcher recommends that optometrists, as primary healthcare professionals, should also conduct school-readiness pre-assessments on children before the commencement of their grades in the Foundation Phase.
Keywords: foundation phase, visual perceptual skills, school children, refractive error
Procedia PDF Downloads 102
3560 Accuracy/Precision Evaluation of Excalibur I: A Neurosurgery-Specific Haptic Hand Controller
Authors: Hamidreza Hoshyarmanesh, Benjamin Durante, Alex Irwin, Sanju Lama, Kourosh Zareinia, Garnette R. Sutherland
Abstract:
This study reports on a proposed method to evaluate the accuracy and precision of Excalibur I, a neurosurgery-specific haptic hand controller designed and developed at Project neuroArm. Efficient and successful robot-assisted telesurgery is considerably contingent on how accurately and precisely the haptic hand controller (master/local robot) interprets the kinematic indices of motion, i.e., position and orientation, from the surgeon's upper limb to the slave/remote robot. A proposed test rig is designed and manufactured according to standard ASTM F2554-10 to determine the accuracy and precision range of Excalibur I at four different locations within its workspace: the central workspace, extreme forward, far left and far right. The test rig is metrologically characterized by a coordinate measuring machine (accuracy and repeatability < ±5 µm). Only the serial linkage of the haptic device is examined due to the use of the Structural Length Index (SLI). The results indicate that accuracy decreases when moving from the central area of the workspace towards its borders. In a comparative study, Excalibur I performs on par with the PHANToM Premium 3.0 and is more accurate/precise than the PHANToM Premium 1.5. The error in the Cartesian coordinate system shows a dominant component in one direction (δx, δy or δz) for movements on horizontal, vertical and inclined surfaces. The average error magnitude of three attempts is recorded, considering all three error components. This research is a first promising step towards quantifying the kinematic performance of Excalibur I.
Keywords: accuracy, advanced metrology, hand controller, precision, robot-assisted surgery, tele-operation, workspace
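The distinction between accuracy (systematic bias from the commanded pose) and precision (scatter across repeated visits) that the evaluation above relies on can be sketched as follows. The offsets and scatter are illustrative assumptions, not the ASTM F2554-10 procedure or the Excalibur I measurements:

```python
import numpy as np

rng = np.random.default_rng(5)

# One commanded pose (mm) and thirty repeated measurements of it, with
# an assumed systematic offset (accuracy error) plus random scatter
# (precision error) -- all values hypothetical.
target = np.array([100.0, 50.0, 25.0])
offset = np.array([0.12, -0.05, 0.08])           # assumed bias (mm)
measured = target + offset + rng.normal(0.0, 0.03, (30, 3))

mean_pt = measured.mean(axis=0)
# Accuracy: distance between the mean measured point and the command
accuracy = float(np.linalg.norm(mean_pt - target))
# Precision: mean scatter of the measurements about their own mean
precision = float(np.mean(np.linalg.norm(measured - mean_pt, axis=1)))
```

Repeating this at several workspace locations (central, extreme forward, far left, far right) yields the location-dependent accuracy/precision map the abstract describes.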
Procedia PDF Downloads 336
3559 The Study of Formal and Semantic Errors of Lexis by Persian EFL Learners
Authors: Mohammad J. Rezai, Fereshteh Davarpanah
Abstract:
Producing a text in a language which is not one's mother tongue can be a demanding task for language learners. Examining lexical errors committed by EFL learners is a challenging area of investigation which can shed light on the process of second language acquisition. Despite the considerable number of investigations into grammatical errors, few studies have tackled formal and semantic errors of lexis committed by EFL learners. The current study aimed at examining Persian learners' formal and semantic errors of lexis in English. To this end, 60 students at three different proficiency levels were asked to write on 10 different topics in 10 separate sessions. Finally, 600 essays written by Persian EFL learners were collected, acting as the corpus of the study. An error taxonomy comprising formal and semantic errors was selected to analyze the corpus. The formal category covered misselection and misformation errors, while the semantic errors were classified into lexical, collocational and lexicogrammatical categories. Each category was further classified into subcategories depending on the identified errors. The results showed that there were 2583 errors in the corpus of 9600 words, among which 2030 were formal errors and 553 were semantic errors. Formal errors were the most frequent in the corpus (78.6%) and were more prevalent at the advanced level (42.4%), while semantic errors (21.4%) were more frequent at the low intermediate level (40.5%). Among formal errors of lexis, the highest number of errors belonged to misformation errors (98%), while misselection errors constituted 2% of the errors. Additionally, no significant differences were observed among the three semantic error subcategories, namely collocational, lexical choice and lexicogrammatical errors.
The results of the study can shed light on the challenges faced by EFL learners in the second language acquisition process.
Keywords: collocational errors, lexical errors, Persian EFL learners, semantic errors
Procedia PDF Downloads 142
3558 Numerical Study of Piled Raft Foundation Under Vertical Static and Seismic Loads
Authors: Hamid Oumer Seid
Abstract:
A piled raft foundation (PRF) is a union of piles and raft working together through soil-pile, pile-raft, soil-raft and pile-pile interaction to provide adequate bearing capacity and controlled settlement. A uniform pile positioning is typically used in PRFs; however, there is ample room for optimization through a parametric study under vertical load, resulting in a safer and more economical foundation. Addis Ababa is located in seismic zone 3, with a peak ground acceleration (PGA) above the threshold of damage, which makes it vital to investigate the performance of PRFs under seismic load considering dynamic kinematic soil-structure interaction (SSI). The study area is located in Addis Ababa around Mexico (Commercial Bank) and Kirkos (Nib, Zemen and United Bank), from which the input parameters (pile length, pile diameter, pile spacing, raft area, raft thickness and load) are taken. A finite-difference-based numerical software, FLAC3D V6, was used for the analysis. The Kobe (1995) and Northridge (1994) earthquakes were selected, and deconvolution analysis was done. A close load sharing between pile and raft was achieved at a spacing of 7D with different pile lengths and diameters. The maximum settlement reduction achieved is 9% for a pile of 2 m diameter when increasing the length from 10 m to 20 m, which shows that pile length is not effective in reducing settlement. The installation of piles results in an increase in the negative bending moment of the raft compared with an unpiled raft. Hence, the optimized design depends on pile spacing and the raft edge length, while pile length and diameter are not significant parameters. An optimized piled raft configuration (AG/AR = 0.25 at the center, with piles provided around the edge) reduced the pile number by 40% and the differential settlement by 95%. The dynamic analysis shows that the acceleration plot at the top of the piled raft has a PGA of 0.25 m/s² and 0.63 m/s² for the Northridge (1994) and Kobe (1995) earthquakes, respectively, due to attenuation of the seismic waves.
Pile head displacement (the maximum is 2 mm, which is under the allowable limit) is affected by the PGA rather than the duration of an earthquake. End-bearing and friction PRFs performed similarly under the two different earthquakes, except for their vertical settlement considering SSI. Hence, the PRF has shown adequate resistance to seismic loads.
Keywords: FLAC3D V6, earthquake, optimized piled raft foundation, pile head displacement
Procedia PDF Downloads 26
3557 Continuous Wave Interference Effects on Global Positioning System Signal Quality
Authors: Fang Ye, Han Yu, Yibing Li
Abstract:
Radio interference is one of the major concerns in using the Global Positioning System (GPS) for civilian and military applications. Interference signals are produced not only by all kinds of electronic systems but also by illegal jammers. Among different types of interference, continuous wave (CW) interference has a strong adverse impact on the quality of the received signal. In this paper, we present a more detailed analysis of CW interference effects on GPS signal quality. Based on the C/A code spectrum lines, the influence of CW interference on the acquisition performance of GPS receivers is further analysed. This influence is supported by simulation results using a GPS software receiver. As the most important user parameter of GPS receivers, the mathematical expression of the bit error probability is also derived in the presence of CW interference, and the expression is consistent with Monte Carlo simulation results. The research on CW interference provides a theoretical basis and new insights for monitoring the radio noise environment and improving the anti-jamming ability of GPS receivers.
Keywords: GPS, CW interference, acquisition performance, bit error probability, Monte Carlo
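The Monte Carlo check of a bit error probability expression under CW interference can be sketched for a simplified post-correlation model: a BPSK decision variable plus Gaussian noise plus a CW tone whose phase is uniform per bit (assuming an incommensurate CW frequency). All parameter values are assumptions, not the paper's derivation:

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(3)
n_bits = 400_000

# Illustrative model parameters (assumed): bit amplitude, noise std,
# and CW interference amplitude at the correlator output.
A, sigma, I = 1.0, 0.5, 0.4

bits = rng.choice([-1.0, 1.0], n_bits)
phase = rng.uniform(0.0, 2.0 * np.pi, n_bits)      # CW phase per bit
r = A * bits + I * np.cos(phase) + rng.normal(0.0, sigma, n_bits)
ber_mc = float(np.mean(np.sign(r) != bits))        # Monte Carlo BER

# Phase-averaged theoretical BER: E_phi[ Q((A + I*cos(phi)) / sigma) ],
# with Q(x) = 0.5*erfc(x/sqrt(2))
phis = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
ber_theory = float(np.mean(
    [0.5 * erfc((A + I * np.cos(p)) / (sigma * sqrt(2.0))) for p in phis]))
```

The agreement between `ber_mc` and `ber_theory` mirrors the consistency check between the derived expression and Monte Carlo simulation reported in the abstract, albeit for this toy model rather than the full C/A-code analysis.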
Procedia PDF Downloads 260
3556 Levels of Toxic Metals in Different Tissues of Lethrinus miniatus Fish from Arabian Gulf
Authors: Muhammad Waqar Ashraf, Atiq A. Mian
Abstract:
In the present study, the accumulation of eight heavy metals, lead (Pb), cadmium (Cd), manganese (Mn), copper (Cu), zinc (Zn), iron (Fe), nickel (Ni) and chromium (Cr), was determined in the kidney, heart, liver and muscle tissues of Lethrinus miniatus fish caught from the Arabian Gulf. Metal concentrations in all the samples were measured using atomic absorption spectroscopy. Analytical validation of the data was carried out by applying the same digestion procedure to a standard reference material (NIST SRM 1577b bovine liver). Levels of lead (Pb) in the liver tissue (0.60 µg/g) exceeded the limit set by the European Commission (2005) at 0.30 µg/g. Zinc concentrations in all tissue samples were below the maximum permissible limit (50 µg/g) set by the FAO. The maximum mean cadmium concentration, 0.15 µg/g, was found in the kidney tissues. The highest content of Mn in the studied tissues was seen in the kidney tissue (2.13 µg/g), whereas the minimum was found in muscle tissue (0.87 µg/g). The present study led to the conclusion that muscle tissue is the least contaminated tissue in Lethrinus miniatus and that consumption of the organs should be avoided as much as possible.
Keywords: Lethrinus miniatus, Arabian Gulf, heavy metals, atomic absorption spectroscopy
Procedia PDF Downloads 356
3555 [Keynote Speech]: Feature Selection and Predictive Modeling of Housing Data Using Random Forest
Authors: Bharatendra Rai
Abstract:
Predictive data analysis and modeling involving machine learning techniques become challenging in the presence of too many explanatory variables or features. Too many features in machine learning are known not only to slow algorithms down but also to decrease model prediction accuracy. This study involves a housing dataset with 79 quantitative and qualitative features that describe various aspects people consider while buying a new house. The Boruta algorithm, which supports feature selection using a wrapper approach built around random forest, is used in this study. This feature selection process leads to 49 confirmed features, which are then used for developing predictive random forest models. The study also explores five different data partitioning ratios; their impact on model accuracy is captured using the coefficient of determination (r-square) and root mean square error (RMSE).
Keywords: housing data, feature selection, random forest, Boruta algorithm, root mean square error
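The wrapper idea behind Boruta can be sketched briefly: every real feature competes against a "shadow" copy of itself with shuffled values, and only features whose importance beats the best shadow are confirmed. The sketch below is a simplified, hypothetical illustration that substitutes absolute correlation for random forest importance; `select_features` and `importance` are invented names, not the Boruta implementation itself.

```python
import random

def importance(xs, ys):
    """Toy importance score: absolute Pearson correlation with the target."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return abs(cov / (sx * sy)) if sx and sy else 0.0

def select_features(columns, target, rng):
    """Confirm features whose importance beats the best shadow feature.

    A shadow feature is a shuffled copy of a real column: shuffling
    destroys any genuine association with the target, so the best
    shadow score estimates how high importance can get by chance.
    """
    shadow_scores = []
    for col in columns.values():
        shadow = col[:]
        rng.shuffle(shadow)
        shadow_scores.append(importance(shadow, target))
    threshold = max(shadow_scores)
    return [name for name, col in columns.items()
            if importance(col, target) > threshold]
```

Real Boruta repeats this comparison over many random forest runs and uses a statistical test before confirming or rejecting a feature; the single-pass version above only conveys the shadow-feature principle.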
Procedia PDF Downloads 323
3554 Levels of Heavy Metals in Different Tissues of Lethrinus Miniatus Fish from Arabian Gulf
Authors: Muhammad Waqar Ashraf
Abstract:
In the present study, the accumulation of eight heavy metals, lead (Pb), cadmium (Cd), manganese (Mn), copper (Cu), zinc (Zn), iron (Fe), nickel (Ni) and chromium (Cr), was determined in kidney, heart, liver and muscle tissues of Lethrinus miniatus fish caught from the Arabian Gulf. Metal concentrations in all the samples were measured using graphite furnace atomic absorption spectroscopy (GF-AAS). Analytical validation of the data was carried out by applying the same digestion procedure to a standard reference material (NIST-SRM 1577b bovine liver). The level of lead (Pb) in the liver tissue (0.60 µg/g) exceeded the limit set by the European Commission (2005) at 0.30 µg/g. Zinc concentrations in all tissue samples were below the maximum permissible limit (50 µg/g) set by the FAO. The maximum mean cadmium concentration was found to be 0.15 µg/g in the kidney tissues. The highest content of Mn in the studied tissues was seen in the kidney tissue (2.13 µg/g), whereas the minimum was found in muscle tissue (0.87 µg/g). The present study led to the conclusion that muscle tissue is the least contaminated tissue in Lethrinus miniatus and that consumption of organs should be avoided as much as possible.
Keywords: Arabian Gulf, Lethrinus miniatus, heavy metals, atomic absorption spectroscopy
Procedia PDF Downloads 273
3553 The Link between Money Market and Economic Growth in Nigeria: Vector Error Correction Model Approach
Authors: Uyi Kizito Ehigiamusoe
Abstract:
The paper examines the impact of the money market on economic growth in Nigeria using data for the period 1980-2012. Econometric techniques such as the ordinary least squares method, Johansen's co-integration test and the vector error correction model were used to examine both the long-run and short-run relationships. Evidence from the study suggests that although a long-run relationship exists between the money market and economic growth, the present state of the Nigerian money market is significantly and negatively related to economic growth. The link between the money market and the real sector of the economy remains very weak. This implies that the market is not yet developed enough to produce the growth needed to propel the Nigerian economy, because of several challenges. It was therefore recommended that the government create appropriate macroeconomic policies and a legal framework, and sustain the present reforms with a view to developing the market so as to promote productive activities, investments and, ultimately, economic growth.
Keywords: economic growth, investments, money market, money market challenges, money market instruments
Procedia PDF Downloads 344
3552 Modernization of the Economic Price Adjustment Software
Authors: Roger L. Goodwin
Abstract:
The US Consumer Price Indices (CPIs) measure hundreds of items in the US economy. Many social programs and government benefits are indexed to the CPIs. In the mid-to-late 1990s, a Congressional Advisory Committee conducted extensive research into changes to the CPI. One conclusion from that research is that, aside from the existence of alternative estimators for the CPI, any fundamental change to the CPI will affect many government programs. The purpose of this project is to modernize an existing process. This paper shows the development of a small, visual software product that documents the Economic Price Adjustment (EPA) for long-term contracts. The existing workbook does not provide the flexibility to calculate EPAs where the base month and the option month are different, nor does it provide automated error checking. The small, visual software product provides the additional flexibility and error checking. This paper presents feedback on the project.
Keywords: Consumer Price Index, Economic Price Adjustment, contracts, visualization tools, database, reports, forms, event procedures
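A minimal sketch of the kind of EPA computation described above, assuming a simple price-adjustment clause in which the contract price scales with the ratio of the option-month CPI to the base-month CPI. Both the function name and the clause itself are assumptions for illustration; actual contract clauses vary.

```python
def adjusted_price(base_price, cpi, base_month, option_month):
    """Scale a contract price by the CPI ratio between two months.

    cpi: mapping of month label -> index value. The flexibility the
    abstract calls for is simply allowing base_month != option_month.
    The KeyError handling stands in for the automated error checking
    the modernized tool adds.
    """
    try:
        factor = cpi[option_month] / cpi[base_month]
    except KeyError as missing:
        raise ValueError(f"no CPI value available for {missing}")
    return round(base_price * factor, 2)
```

For example, with a base-month index of 100.0 and an option-month index of 104.0, a $1,000 price adjusts to $1,040; a month missing from the index table raises a clear error instead of silently producing a wrong figure.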
Procedia PDF Downloads 317
3551 Soil Stress State under Tractive Tire and Compaction Model
Authors: Prathuang Usaborisut, Dithaporn Thungsotanon
Abstract:
Soil compaction induced by a tractor towing a trailer has become a major problem for sugarcane productivity. Soil beneath the tractor's tire is under not only compressive stress but also shearing stress. Therefore, to help understand such effects on soil, this research aimed to determine the stress state in soil and predict the compaction of soil under a tractive tire. The octahedral stress ratios under the tires were higher than one, and much higher under higher draft forces. Moreover, the ratio increased with the number of tire passes. A soil compaction model was developed using data acquired from triaxial tests. The model was then used to predict soil bulk density under a tractive tire. The maximum error was about 4% at 15 cm depth under the lower draft force and tended to increase with depth and draft force. At a depth of 30 cm and under the higher draft force, the maximum error was about 16%.
Keywords: draft force, soil compaction model, stress state, tractive tire
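The octahedral stresses referred to above have standard textbook definitions in terms of the principal stresses; the sketch below computes them from those formulas. It illustrates the quantities involved and is not the authors' code; interpreting the reported ratio as tau_oct/sigma_oct is an assumption.

```python
import math

def octahedral_stresses(s1, s2, s3):
    """Octahedral normal and shear stress from principal stresses.

    sigma_oct = (s1 + s2 + s3) / 3
    tau_oct   = sqrt((s1-s2)^2 + (s2-s3)^2 + (s3-s1)^2) / 3
    """
    sigma_oct = (s1 + s2 + s3) / 3.0
    tau_oct = math.sqrt((s1 - s2) ** 2 + (s2 - s3) ** 2
                        + (s3 - s1) ** 2) / 3.0
    return sigma_oct, tau_oct
```

Under pure compression (equal principal stresses) the octahedral shear stress is zero; the large shear components under a tractive tire are what push the ratio above one.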
Procedia PDF Downloads 352
3550 Impact of Zn/Cr Ratio on ZnCrOx-SAPO-34 Bifunctional Catalyst for Direct Conversion of Syngas to Light Olefins
Authors: Yuxuan Huang, Weixin Qian, Hongfang Ma, Haitao Zhang, Weiyong Ying
Abstract:
Light olefins are important building blocks for the chemical industry. Direct conversion of syngas to light olefins has been investigated for decades. Meanwhile, the limit on light olefin selectivity described by the Anderson-Schulz-Flory (ASF) distribution model is still a great challenge for conventional Fischer-Tropsch synthesis. The emerging strategy called the oxide-zeolite concept (OX-ZEO) is a promising way to overcome this limit. ZnCrOx was prepared by a co-precipitation method with (NH4)2CO3 as the precipitant. SAPO-34 was prepared by hydrothermal synthesis with tetraethylammonium hydroxide (TEAOH) as the template, while silica sol, pseudo-boehmite, and phosphoric acid were the Si, Al and P sources, respectively. The bifunctional catalyst was prepared by mechanically mixing ZnCrOx and SAPO-34. Catalytic reactions were carried out under H2/CO = 2, 380 °C, 1 MPa and 6000 mL·gcat-1·h-1 in a fixed-bed reactor with a quartz lining. Catalysts were characterized by XRD, N2 adsorption-desorption, NH3-TPD, H2-TPR, and CO-TPD. The addition of Al as a structure promoter enhances CO conversion and selectivity to light olefins. The Zn/Cr ratio, which determines the active component content and chemisorption properties of the catalyst, influences both CO conversion and selectivity to light olefins. A C2-4= distribution of 86% among hydrocarbons at a CO conversion of 14% was reached when Zn/Cr = 1.5.
Keywords: light olefins, OX-ZEO, syngas, ZnCrOₓ
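The ASF limit mentioned above can be made concrete. Under the ASF model, the weight fraction of chains with carbon number n is W_n = n(1-α)²αⁿ⁻¹, and maximizing the C2-C4 sum over the chain-growth probability α gives roughly 57%, which is why an 86% C2-4= distribution among hydrocarbons is noteworthy. A short sketch (function name illustrative):

```python
def asf_weight_fraction(alpha, n_min, n_max):
    """Weight fraction of chains with n_min <= carbon number <= n_max
    under the Anderson-Schulz-Flory distribution:
        W_n = n * (1 - alpha)**2 * alpha**(n - 1)
    where alpha is the chain-growth probability.
    """
    return sum(n * (1 - alpha) ** 2 * alpha ** (n - 1)
               for n in range(n_min, n_max + 1))

# Scan alpha to find the maximum attainable C2-C4 weight fraction.
best_c2_c4 = max(asf_weight_fraction(a / 100.0, 2, 4)
                 for a in range(30, 61))
```

The scan peaks near α ≈ 0.46 at a C2-C4 fraction of about 0.57, illustrating the selectivity ceiling that OX-ZEO catalysts are designed to escape.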
Procedia PDF Downloads 181
3549 A Multilayer Perceptron Neural Network Model Optimized by Genetic Algorithm for Significant Wave Height Prediction
Authors: Luis C. Parra
Abstract:
Significant wave height prediction is an issue of great interest in the field of coastal activities because of the non-linear behavior of the wave height and the complexity of its prediction. This study presents a machine learning model to forecast the significant wave height recorded by the oceanographic wave-measuring buoys anchored at Mooloolaba, using Queensland Government data. Modeling was performed by a multilayer perceptron neural network optimized by a genetic algorithm (GA-MLP), with ReLU(x) as the activation function of the MLP. The GA is in charge of optimizing the MLP hyperparameters (learning rate, hidden layers, neurons, and activation functions) and of wrapper feature selection for the window width size. Results are assessed using the mean square error (MSE), root mean square error (RMSE), mean absolute error (MAE), and mean absolute percentage error (MAPE). The GA-MLP algorithm was run with a population size of thirty individuals for eight generations for the prediction optimization of 5 steps forward, obtaining a performance evaluation of 0.00104 MSE, 0.03222 RMSE, 0.02338 MAE, and 0.71163% MAPE. The results of the analysis suggest that the GA-MLP model is effective in predicting significant wave height in a one-step forecast with distant time windows, presenting 0.00014 MSE, 0.01180 RMSE, 0.00912 MAE, and 0.52500% MAPE with a correlation factor of 0.99940. The GA-MLP algorithm was compared with the ARIMA forecasting model and showed better performance on all criteria, validating the potential of this algorithm.
Keywords: significant wave height, machine learning optimization, multilayer perceptron neural networks, evolutionary algorithms
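The four performance criteria reported above have standard definitions; a plain sketch of how they are computed (the function name is illustrative):

```python
import math

def regression_metrics(y_true, y_pred):
    """MSE, RMSE, MAE and MAPE (in %), the four criteria reported above.

    MAPE divides by the true values, so it assumes no true value is zero.
    """
    n = len(y_true)
    errors = [t - p for t, p in zip(y_true, y_pred)]
    mse = sum(e * e for e in errors) / n
    rmse = math.sqrt(mse)
    mae = sum(abs(e) for e in errors) / n
    mape = 100.0 * sum(abs(e / t) for e, t in zip(errors, y_true)) / n
    return {"MSE": mse, "RMSE": rmse, "MAE": mae, "MAPE": mape}
```

Note that RMSE is always the square root of MSE, so the pairs reported in the abstract (e.g., 0.00104 MSE and 0.03222 RMSE) can be cross-checked directly.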
Procedia PDF Downloads 107
3548 Vitamin A Status and Its Correlation with the Dietary Intake of Young Females of Lahore, Pakistan
Authors: Sarah Fatima, Ahmad A. Malik, Saima Sadaf
Abstract:
This study was conducted to assess the dietary record and vitamin A status of young females of Lahore. The sample of 376 consisted of unmarried, college-going females aged 16-20 years. Three main tools were adopted: a questionnaire, a 3-day food diary and a serum retinol test. The anthropometric measurements showed that 32.6% of the sample was underweight (BMI < 18.5) and 54.5% had a healthy weight (BMI 18.5-22.9). The average vitamin A intake of the sample was 257.95 µg/day, while the RDA for the selected age group is 700 µg/day. The mean energy intake of the adolescents was 1153.64 kcal/day, whereas the Estimated Energy Requirement (EER) for this age group is 2368 kcal/day. The mean serum vitamin A level was 24.81 µg/dL. 69.6% of the sample was deficient in serum vitamin A, i.e., serum retinol < 24 µg/dL. 30.4% had serum retinol within the normal limit (24-84 µg/dL), of which 25.3% lay in the lower range (24-44 µg/dL) and only 5.1% had serum retinol in the 44-64 µg/dL range. A slightly negative correlation (r = -0.21, 95% confidence interval) was found between dietary intake of vitamin A and serum vitamin A. It was concluded that the dietary intake of major nutrients and vitamin A is not adequate in the selected group. This is also confirmed by the lower serum retinol levels. Hence, vitamin A intake and status are generally inadequate, and vitamin deficiency is prevalent among unmarried young females of Lahore.
Keywords: vitamin A, young females, vitamin deficiency, Lahore
Procedia PDF Downloads 314
3547 Neuroevolution Based on Adaptive Ensembles of Biologically Inspired Optimization Algorithms Applied for Modeling a Chemical Engineering Process
Authors: Sabina-Adriana Floria, Marius Gavrilescu, Florin Leon, Silvia Curteanu, Costel Anton
Abstract:
Neuroevolution is a subfield of artificial intelligence used to solve various problems in different application areas. Specifically, neuroevolution is a technique that applies biologically inspired methods to generate neural network architectures and optimize their parameters automatically. In this paper, we use different biologically inspired optimization algorithms in an ensemble strategy with the aim of training multilayer perceptron neural networks, resulting in regression models used to simulate the industrial chemical process of obtaining bricks from silicone-based materials. Installations in the raw ceramics industry, i.e., bricks, are characterized by significant energy consumption and large quantities of emissions. In addition, the initial conditions that were taken into account during the design and commissioning of the installation can change over time, which leads to the need to add new mixes to adjust the operating conditions for the desired purpose, e.g., material properties and energy saving. The present approach follows the study by simulation of a process of obtaining bricks from silicone-based materials, i.e., the modeling and optimization of the process. Optimization aims to determine the working conditions that minimize the emissions represented by nitrogen monoxide. We first use a search procedure to find the best values for the parameters of various biologically inspired optimization algorithms. Then, we propose an adaptive ensemble strategy that uses only a subset of the best algorithms identified in the search stage. The adaptive ensemble strategy combines the results of selected algorithms and automatically assigns more processing capacity to the more efficient algorithms. Their efficiency may also vary at different stages of the optimization process. In a given ensemble iteration, the most efficient algorithms aim to maintain good convergence, while the less efficient algorithms can improve population diversity. 
The proposed adaptive ensemble strategy outperforms the individual optimizers and the non-adaptive ensemble strategy in convergence speed, and the obtained results provide lower error values.
Keywords: optimization, biologically inspired algorithm, neuroevolution, ensembles, bricks, emission minimization
Procedia PDF Downloads 116
3546 Parametric Optimization of High-Performance Electric Vehicle E-Gear Drive for Radiated Noise Using 1-D System Simulation
Authors: Sanjai Sureshkumar, Sathish G. Kumar, P. V. V. Sathyanarayana
Abstract:
For an e-gear drivetrain, the transmission error and the resulting variation in mesh stiffness are among the main sources of excitation in a high-performance electric vehicle. These vibrations are transferred through the shaft to the bearings and then to the e-gear drive housing, eventually radiating noise. A parametric model is developed in 1-D system simulation by optimizing the micro and macro geometry along with the bearing properties and oil filtration to achieve the least transmission error and a high contact ratio. Histogram analysis is performed to condense the actual road load data into a condensed duty cycle to find the bearing forces. The structural vibration generated by these forces will be simulated in a nonlinear solver to obtain the normal surface velocity of the housing, and the results will be carried forward to acoustic software, wherein a virtual environment of the surroundings (the actual testing scenario) with accurate microphone positions will be maintained to predict the sound pressure level of the radiated noise and the directivity plot of the e-gear drive. Order analysis will be carried out to find the root cause of the vibration and whine noise. The broadband spectrum will be checked to find the rattle noise source. Further, with the available results, the design will be optimized, and the next loop of simulation will be performed to build the best e-gear drive from an NVH perspective. Structural analysis will also be carried out to check the robustness of the e-gear drive.
Keywords: 1-D system simulation, contact ratio, e-Gear, mesh stiffness, micro and macro geometry, transmission error, radiated noise, NVH
Procedia PDF Downloads 149
3545 Verification of Satellite and Observation Measurements to Build Solar Energy Projects in North Africa
Authors: Samy A. Khalil, U. Ali Rahoma
Abstract:
For measurements of solar radiation, satellite data have been routinely utilized to estimate solar energy. However, the temporal coverage of satellite data has some limits. Reanalysis, also known as "retrospective analysis" of the atmosphere's parameters, is produced by fusing the output of NWP (Numerical Weather Prediction) models with observation data from a variety of sources, including ground, satellite, ship, and aircraft observations. The result is a comprehensive record of the parameters affecting weather and climate. The effectiveness of the reanalysis dataset (ERA-5) for North Africa was evaluated against high-quality surface measurements using statistical analysis. The distribution of global solar radiation (GSR) was estimated over five chosen areas in North Africa over ten years, from 2011 to 2020. To investigate seasonal changes in dataset performance, a seasonal statistical analysis was conducted, which showed a considerable difference in errors throughout the year. Altering the temporal resolution of the data used for comparison alters the performance of the dataset: the monthly mean values indicate better performance, but data accuracy is degraded. Solar resource assessment and power estimation are discussed using the ERA-5 solar radiation data. The average values of the mean bias error (MBE), root mean square error (RMSE) and mean absolute error (MAE) of the reanalysis solar radiation data vary from 0.079 to 0.222, 0.055 to 0.178, and 0.0145 to 0.198, respectively, over the study period. The correlation coefficient (R2) varies from 93% to 99% over the study period. The objective of this research is to provide a reliable representation of the world's solar radiation to aid in the use of solar energy in all sectors.
Keywords: solar energy, ERA-5 analysis data, global solar radiation, North Africa
Procedia PDF Downloads 98
3544 Forecasting Free Cash Flow of an Industrial Enterprise Using Fuzzy Set Tools
Authors: Elena Tkachenko, Elena Rogova, Daria Koval
Abstract:
The paper examines ways of forecasting cash flows in a dynamic external environment. The so-called new reality in the economy lowers the predictability of companies' performance indicators due to the lack of long-term steady trends in the external conditions of development and fast changes in the markets. Traditional methods based on trend analysis lead to a very high approximation error. The macroeconomic situation of the last 10 years has been defined by the continuous consequences of one financial crisis and the emergence of another. In these conditions, forecasting instruments based on fuzzy sets show good results. Fuzzy-set-based models turn out to lower the approximation error to an acceptable level and to provide companies with a reliable cash flow estimation that helps them reach financial stability. In the paper, the applicability of a model of cash flow forecasting based on fuzzy logic is analyzed.
Keywords: cash flow, industrial enterprise, forecasting, fuzzy sets
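The abstract does not specify the exact fuzzy model used, but a common device in fuzzy cash-flow forecasting is to represent each uncertain component as a triangular fuzzy number (pessimistic, most likely, optimistic) and defuzzify by the centroid. A minimal sketch under that assumption, with invented function names:

```python
def centroid(tfn):
    """Centroid defuzzification of a triangular fuzzy number (a, m, b)."""
    a, m, b = tfn
    return (a + m + b) / 3.0

def add_tfn(x, y):
    """Fuzzy addition: component-wise for triangular fuzzy numbers."""
    return tuple(p + q for p, q in zip(x, y))

def forecast_free_cash_flow(operating_cf, capex):
    """Free cash flow as fuzzy operating cash flow minus fuzzy capex.

    Fuzzy subtraction flips the bounds: (a1 - b2, m1 - m2, b1 - a2),
    because the pessimistic FCF pairs the worst inflow with the
    largest outflow.
    """
    a1, m1, b1 = operating_cf
    a2, m2, b2 = capex
    return (a1 - b2, m1 - m2, b1 - a2)
```

For example, an operating cash flow of (80, 100, 120) less capital expenditure of (20, 30, 40) yields a fuzzy free cash flow of (40, 70, 100), with a crisp centroid estimate of 70.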
Procedia PDF Downloads 208
3543 A Study on Accident Result Contribution of Individual Major Variables Using Multi-Body System of Accident Reconstruction Program
Authors: Donghun Jeong, Somyoung Shin, Yeoil Yun
Abstract:
A large-scale traffic accident refers to an accident in which more than three people die or more than thirty people are killed or injured. To prevent a large-scale traffic accident from causing a great loss of life, and to establish effective improvement measures, it is important to analyze accident situations in depth and understand the effects of major accident variables on the accident. This study aims to analyze the contribution of individual accident variables to accident results, based on the accurate reconstruction of traffic accidents using the Multi-Body system of PC-Crash, an accident reconstruction program, and simulation of each scenario. The Multi-Body (MB) system of PC-Crash is used for multi-body accident reconstruction, showing motions in diverse directions that previous approaches could not capture; it designs and reproduces a body form that exhibits realistic motions using several linked bodies. Targeting the 'freight truck cargo drop accident around the Changwon Tunnel' that happened in November 2017, this study conducted a simulation of the freight truck cargo drop accident and analyzed the contribution of the individual major variables. Then, on the basis of driving speed, cargo load, and stacking method, six scenarios were devised. The simulation analysis showed that the freight truck was driven at a speed of 118 km/h (speed limit: 70 km/h) right before the accident, carried 196 oil containers with a weight of 7,880 kg (maximum load: 4,600 kg), and was not fully equipped with the anchoring equipment that could prevent a drop of cargo. The vehicle speed, cargo load, and cargo anchoring equipment were the major accident variables, and the accident contribution analysis results for the individual variables are as follows. When the freight truck obeyed only the speed limit, the scattering distance of the oil containers decreased by 15%, and the number of dropped oil containers decreased by 39%.
When the freight truck obeyed only the cargo load limit, the scattering distance of the oil containers decreased by 5%, and the number of dropped oil containers decreased by 34%. When the freight truck obeyed both the speed limit and the cargo load limit, the scattering distance of the oil containers fell by 38%, and the number of dropped oil containers fell by 64%. The analysis of each scenario revealed that the overspeed and excessive cargo load of the freight truck contributed to the spread of accident damage; for a truck that did not allow a fall of cargo, a different type of accident occurred when it was driven too fast with an excessive cargo load, and when the freight truck obeyed both the speed limit and the cargo load limit, the possibility of causing an accident was lowest.
Keywords: accident reconstruction, large-scale traffic accident, PC-Crash, MB system
Procedia PDF Downloads 200
3542 Improving Human Hand Localization in Indoor Environment by Using Frequency Domain Analysis
Authors: Wipassorn Vinicchayakul, Pichaya Supanakoon, Sathaporn Promwong
Abstract:
A human hand localization scheme is revised using radar cross section (RCS) measurements with a minimum root mean square (RMS) error matching algorithm on a touchless keypad mock-up model. RCS and frequency transfer function measurements are carried out in an indoor environment over the frequency range from 3.0 to 11.0 GHz to cover Federal Communications Commission (FCC) standards. The touchless keypad model is tested at two different distances between the hand and the keypad. The initial distance of 19.50 cm is identical to the heights of the transmitting (Tx) and receiving (Rx) antennas, while the second distance is 29.50 cm from the keypad. Moreover, the effects of the Rx angles relative to the human hand are considered. The RCS input parameters are compared with power loss parameters at each frequency. From the results, the performance of the RCS input parameters at the second distance, 29.50 cm, at 3 GHz is better than the others.
Keywords: radar cross section, fingerprint-based localization, minimum root mean square (RMS) error matching algorithm, touchless keypad model
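The minimum RMS error matching step can be sketched as a fingerprint lookup: the measured frequency response is compared against stored reference responses for each candidate position, and the position with the smallest RMS difference wins. The sketch below is an illustration with hypothetical names, not the authors' implementation:

```python
import math

def rms_error(measured, reference):
    """Root-mean-square difference between two equal-length responses."""
    n = len(measured)
    return math.sqrt(sum((m - r) ** 2
                         for m, r in zip(measured, reference)) / n)

def locate(measured, fingerprints):
    """Return the label of the stored fingerprint (e.g., a keypad
    position) that matches the measured response with minimum RMS error."""
    return min(fingerprints,
               key=lambda label: rms_error(measured, fingerprints[label]))
```

In the touchless-keypad setting, each fingerprint would be the RCS (or power loss) response recorded with the hand over one key, and `locate` picks the key whose stored response is closest to the live measurement.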
Procedia PDF Downloads 342