Search results for: total vector error
10693 The Use of Performance Indicators for Evaluating Models of Drying Jackfruit (Artocarpus heterophyllus L.): Page, Midilli, and Lewis
Authors: D. S. C. Soares, D. G. Costa, J. T. S., A. K. S. Abud, T. P. Nunes, A. M. Oliveira Júnior
Abstract:
Mathematical models of drying are used to understand the drying process and to determine parameters important for the design and operation of the dryer. The jackfruit is a highly perishable fruit with high consumption in the Northeast, and techniques to extend its conservation are needed so that it can be distributed to regions with low consumption. This study aimed to analyse several mathematical models (Page, Lewis, and Midilli) to indicate the one that best fits the convective drying process, using performance indicators associated with each model: accuracy factor (Af), bias factor (Bf), root mean square error (RMSE) and standard error of prediction (%SEP). Jackfruit drying was carried out in a convective tray dryer at a temperature of 50°C for 9 hours. The Midilli model was the most accurate, with Af: 1.39, Bf: 1.33, RMSE: 0.01%, and SEP: 5.34. However, the Midilli model is not appropriate for process control purposes because it needs four tuning parameters. With the performance indicators used in this paper, the Page model showed similar results with only two parameters. It is concluded that the best correlation between the experimental and estimated data is given by the Page model.
Keywords: drying, models, jackfruit, biotechnology
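As an illustration of fitting a thin-layer drying model and computing the error indicators the abstract names (RMSE, %SEP), here is a minimal sketch; the data and the parameter values k = 0.35, n = 1.2 are synthetic stand-ins, not the authors' measurements:

```python
import numpy as np

def fit_page(t, mr):
    """Fit the Page thin-layer drying model MR = exp(-k * t**n)
    by linearising: ln(-ln MR) = ln k + n * ln t."""
    slope, intercept = np.polyfit(np.log(t), np.log(-np.log(mr)), 1)
    return np.exp(intercept), slope          # k, n

def rmse(obs, pred):
    return np.sqrt(np.mean((np.asarray(obs) - np.asarray(pred)) ** 2))

def sep_percent(obs, pred):
    """Standard error of prediction as a percentage of the mean observation."""
    return 100.0 * rmse(obs, pred) / np.mean(obs)

# Synthetic moisture-ratio curve (hypothetical k = 0.35, n = 1.2; time in hours)
rng = np.random.default_rng(0)
t = np.linspace(0.5, 6.0, 12)
mr = np.clip(np.exp(-0.35 * t ** 1.2) + rng.normal(0, 0.003, t.size), 1e-6, 0.999)

k_hat, n_hat = fit_page(t, mr)
mr_fit = np.exp(-k_hat * t ** n_hat)
```

The Lewis model is the special case n = 1; the four-parameter Midilli model (MR = a*exp(-k*t**n) + b*t) needs a nonlinear solver instead of this log-linearisation.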
Procedia PDF Downloads 379
10692 Parallel Pipelined Conjugate Gradient Algorithm on Heterogeneous Platforms
Authors: Sergey Kopysov, Nikita Nedozhogin, Leonid Tonkov
Abstract:
The article presents a parallel iterative solver for large sparse linear systems on heterogeneous platforms. Traditionally, the problem of solving linear systems does not scale well on multi-CPU/multi-GPU clusters; for example, most attempts to implement the classical conjugate gradient method ran, at best, in the same amount of time as the problem was enlarged. The paper proposes the pipelined variant of the conjugate gradient method (PCG), a formulation that is potentially better suited to hybrid CPU/GPU computing since it requires only one synchronization point per iteration instead of two for standard CG. Both the standard and pipelined CG methods need vector entries generated by the current GPU and by other GPUs for matrix-vector products, so communication between GPUs becomes a major performance bottleneck on a multi-GPU cluster. The article presents an approach to minimizing the communication between the parallel parts of the algorithm. Additionally, computation and communication can be overlapped to reduce the impact of data exchange. Using the pipelined version of the CG method with one synchronization point, together with asynchronous calculations and communications and load balancing between the CPU and GPU, makes the solution of large linear systems scalable. The algorithm is implemented with the combined use of MPI, OpenMP, and CUDA. We show that near-optimal speedup on 8 CPUs/2 GPUs can be reached (relative to single-GPU execution). The parallelized solver achieves a speedup of up to 5.49 times on 16 NVIDIA Tesla GPUs compared to one GPU.
Keywords: conjugate gradient, GPU, parallel programming, pipelined algorithm
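To make the synchronization argument concrete, here is a minimal single-process sketch of classical CG with the two per-iteration dot products marked; on a cluster each dot product is a global reduction, and the pipelined reformulation (in the style of Ghysels and Vanroose) rearranges the recurrences so both collapse into one synchronization point. This is an illustration of standard CG, not the authors' parallel implementation:

```python
import numpy as np

def cg(A, b, tol=1e-10, max_iter=200):
    """Classical conjugate gradient for a symmetric positive definite A."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rr = r @ r                      # dot product 1: a global reduction in parallel
    for _ in range(max_iter):
        Ap = A @ p                  # local SpMV (plus halo exchange across GPUs)
        alpha = rr / (p @ Ap)       # dot product 2: the second sync point per iteration
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r              # dot product 1 of the next iteration
        if np.sqrt(rr_new) < tol:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x

# Small SPD test system
rng = np.random.default_rng(1)
M = rng.normal(size=(50, 50))
A = M @ M.T + 50 * np.eye(50)
b = rng.normal(size=50)
x = cg(A, b)
```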
Procedia PDF Downloads 165
10691 Dutch Disease and Industrial Development: An Investigation of the Determinants of Manufacturing Sector Performance in Nigeria
Authors: Kayode Ilesanmi Ebenezer Bowale, Dominic Azuh, Busayo Aderounmu, Alfred Ilesanmi
Abstract:
There has been a debate among scholars and policymakers about the effects of oil exploration and production on industrial development. In Nigeria, many reforms resulted in an increase in crude oil production in the recent past, and the importance of oil production for the development of the manufacturing sector is controversial: some scholars claim that oil has been a blessing for the manufacturing sector, while others regard it as a curse. The objective of the study is to determine whether empirical analysis supports the presence of Dutch Disease and de-industrialisation in the Nigerian manufacturing sector between 2019 and 2022. The study employed data on manufactured exports and on manufacturing, agricultural, and service employment, sourced from the World Development Indicators, the Nigeria Bureau of Statistics, and the Central Bank of Nigeria Statistical Bulletin, in line with the theory of Dutch Disease. A unit root test was used to establish the level of stationarity of the series, and the Engle and Granger cointegration test was used to check their long-run relationship; the Autoregressive Distributed Lag (ARDL) bound test was also applied. A Vector Error Correction Model was estimated to determine the speed of adjustment of manufacturing exports and the resource movement effect. The results showed that the Nigerian manufacturing industry suffered from both direct and indirect de-industrialisation over the period. The findings also revealed resource movement, with labour moving away from the manufacturing sector to both the oil sector and the services sector. The study concluded that Dutch Disease was present in the manufacturing industry and that the problem of de-industrialisation led to the crowding out of manufacturing output. The study recommends that efforts be made to diversify the Nigerian economy. Furthermore, a conducive business environment should be provided to encourage more involvement of the private sector in the agriculture and manufacturing sectors of the economy.
Keywords: Dutch disease, resource movement, manufacturing sector performance, Nigeria
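The Engle-Granger two-step procedure and the error-correction adjustment coefficient the abstract relies on can be sketched on simulated data; the series, coefficients, and interpretation (oil output driving a tracking employment series) are illustrative assumptions, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 400

# x: a driving random walk (think oil output); y tracks x with a
# stationary AR(1) deviation, so the pair is cointegrated by construction.
x = np.cumsum(rng.normal(size=T))
u = np.zeros(T)
for t in range(1, T):
    u[t] = 0.5 * u[t - 1] + rng.normal()
y = 2.0 + 0.8 * x + u

# Step 1 (Engle-Granger): cointegrating regression y_t = a + b*x_t + u_t
A = np.column_stack([np.ones(T), x])
(a_hat, b_hat), *_ = np.linalg.lstsq(A, y, rcond=None)
resid = y - (a_hat + b_hat * x)

# Step 2: error-correction regression dy_t = gamma * resid_{t-1} + e_t;
# a significantly negative gamma is the speed of adjustment back to equilibrium.
dy = np.diff(y)
gamma = (resid[:-1] @ dy) / (resid[:-1] @ resid[:-1])
```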
Procedia PDF Downloads 79
10690 A Machine Learning Approach for Earthquake Prediction in Various Zones Based on Solar Activity
Authors: Viacheslav Shkuratskyy, Aminu Bello Usman, Michael O’Dea, Saifur Rahman Sabuj
Abstract:
This paper examines relationships between solar activity and earthquakes by applying machine learning techniques: k-nearest neighbour, support vector regression, random forest regression, and a long short-term memory network. Data from the SILSO World Data Center, the NOAA National Center, the GOES satellite, NASA OMNIWeb, and the United States Geological Survey were used for the experiment. The dataset covered the 23rd and 24th solar cycles and included the daily sunspot number, solar wind velocity, proton density, and proton temperature. The study also examined sunspots, solar wind, and solar flares, which all reflect solar activity, and the earthquake frequency distribution by magnitude and depth. The findings showed that the long short-term memory network predicts earthquakes more accurately than the other models applied in the study, and that solar activity is more likely to affect earthquakes of lower magnitude and shallow depth than earthquakes of magnitude 5.5 or larger at intermediate or deep depth.
Keywords: k-nearest neighbour, support vector regression, random forest regression, long short-term memory network, earthquakes, solar activity, sunspot number, solar wind, solar flares
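Of the four techniques listed, k-nearest-neighbour regression is the simplest to sketch from scratch; the toy features and target below are hypothetical stand-ins for the solar-activity inputs (e.g. sunspot number, wind velocity) and an earthquake-rate target, not the study's dataset:

```python
import numpy as np

def knn_predict(X_train, y_train, X_query, k=5):
    """k-nearest-neighbour regression: predict the mean target of the
    k closest training points (Euclidean distance)."""
    preds = []
    for q in np.atleast_2d(X_query):
        d = np.linalg.norm(X_train - q, axis=1)
        nearest = np.argsort(d)[:k]
        preds.append(y_train[nearest].mean())
    return np.array(preds)

# Toy regression problem on two normalized features
rng = np.random.default_rng(3)
X = rng.uniform(0, 1, size=(300, 2))
y = np.sin(2 * np.pi * X[:, 0]) + X[:, 1] + rng.normal(0, 0.05, 300)
y_hat = knn_predict(X, y, X[:10], k=5)
```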
Procedia PDF Downloads 73
10689 Correlation between the Ratios of House Dust Mite-Specific IgE/Total IgE and Asthma Control Test Score as a Biomarker of Immunotherapy Response Effectiveness in Pediatric Allergic Asthma Patients
Authors: Bela Siska Afrida, Wisnu Barlianto, Desy Wulandari, Ery Olivianto
Abstract:
Background: Allergic asthma, caused by IgE-mediated allergic reactions, remains a global health issue with high morbidity and mortality rates. Immunotherapy is the only etiology-based approach to treating asthma, but no standard biomarkers have been established to evaluate the therapy's effectiveness. This study aims to determine the correlation between the ratio of serum HDM-specific IgE to total IgE and the Asthma Control Test (ACT) score as a biomarker of the response to immunotherapy in pediatric allergic asthma patients. Patients and Methods: This retrospective cohort study involved 26 pediatric allergic asthma patients who underwent HDM-specific subcutaneous immunotherapy for 14 weeks at the Pediatric Allergy Immunology Outpatient Clinic at Saiful Anwar General Hospital, Malang. Serum levels of HDM-specific IgE and total IgE were measured before and after immunotherapy using chemiluminescence immunoassay and enzyme-linked immunosorbent assay (ELISA) methods. Changes in asthma control were assessed using the ACT score. The Wilcoxon signed-rank test and Spearman correlation test were used for data analysis. Results: There were 14 boys and 12 girls with a mean age of 6.48 ± 2.54 years. Serum HDM-specific IgE levels decreased significantly from before immunotherapy [9.88 ± 5.74 kUA/L] to 14 weeks after immunotherapy [4.51 ± 3.98 kUA/L], p = 0.000. Serum total IgE levels decreased significantly from before immunotherapy [207.6 ± 120.8 IU/mL] to 14 weeks after immunotherapy [109.83 ± 189.39 IU/mL], p = 0.000. The ratio of serum HDM-specific IgE to total IgE decreased significantly from before immunotherapy [0.063 ± 0.05] to 14 weeks after immunotherapy [0.041 ± 0.039], p = 0.012. There was also a significant increase in ACT scores before versus after immunotherapy (15.5 ± 1.79 and 20.96 ± 2.049, respectively, p = 0.000). The correlation test showed a weak negative correlation between the HDM-specific IgE/total IgE ratio and the ACT score (p = 0.034 and r = -0.29). Conclusion: This study showed that a decrease in HDM-specific IgE levels, total IgE levels, and the HDM-specific IgE/total IgE ratio, and an increase in ACT score, were observed after 14 weeks of HDM-specific subcutaneous immunotherapy. The weak negative correlation between the HDM-specific IgE/total IgE ratio and the ACT score suggests that this ratio can serve as a potential biomarker of the effectiveness of immunotherapy in pediatric allergic asthma patients.
Keywords: HDM-specific IgE/total IgE ratio, ACT score, immunotherapy, allergic asthma
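The Spearman statistic used here is just the Pearson correlation of the ranks; a minimal sketch follows, with entirely hypothetical paired values (a falling IgE ratio against a rising ACT score), not the study's patient data:

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    (Double argsort ranks the data; this simple form assumes no ties.)"""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return (rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry))

# Hypothetical paired observations
ratio = np.array([0.09, 0.08, 0.07, 0.065, 0.05, 0.04, 0.035, 0.03])
act   = np.array([14,   16,   15,   18,    19,   20,   22,    21])
rho = spearman_rho(ratio, act)
```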
Procedia PDF Downloads 69
10688 Minimizing Total Completion Time in No-Wait Flowshops with Setup Times
Authors: Ali Allahverdi
Abstract:
The m-machine no-wait flowshop scheduling problem is addressed in this paper. The objective is to minimize total completion time subject to the constraint that the makespan value is not greater than a certain value. Setup times are treated as separate from processing times. Several recent algorithms are adapted and proposed for the problem. An extensive computational analysis has been conducted for the evaluation of the proposed algorithms. The computational analysis indicates that the best proposed algorithm performs significantly better than the earlier existing best algorithm.
Keywords: scheduling, no-wait flowshop, algorithm, setup times, total completion time, makespan
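To make the objective concrete, here is a sketch of how completion times arise in a no-wait flowshop: each job must flow through all machines without waiting, so consecutive jobs are separated by the minimum delay that avoids machine conflicts. The instance is a tiny hypothetical one (no setup times, which the paper treats separately), and exhaustive search stands in for the paper's algorithms:

```python
import numpy as np
from itertools import permutations

def nowait_starts(p, order):
    """Machine-1 start times for a job sequence in a no-wait flowshop."""
    p = np.asarray(p, dtype=float)          # p[j, m]: processing times
    cum = np.cumsum(p, axis=1)              # cum[j, m]: finish offset on machine m
    starts = [0.0]
    for prev, nxt in zip(order, order[1:]):
        # machine m must finish `prev` before `nxt` arrives at it
        delay = max(cum[prev, m] - (cum[nxt, m - 1] if m > 0 else 0.0)
                    for m in range(p.shape[1]))
        starts.append(starts[-1] + delay)
    return starts

def total_completion_time(p, order):
    cum = np.cumsum(np.asarray(p, dtype=float), axis=1)
    return sum(s + cum[j, -1] for s, j in zip(nowait_starts(p, order), order))

p = [[2, 3], [4, 1], [3, 2]]                # 3 jobs x 2 machines (hypothetical)
best = min(permutations(range(3)), key=lambda o: total_completion_time(p, o))
tct_012 = total_completion_time(p, (0, 1, 2))
tct_best = total_completion_time(p, best)
```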
Procedia PDF Downloads 340
10687 Photo-Fenton Decolorization of Methylene Blue Adsolubilized on Co2+ -Embedded Alumina Surface: Comparison of Process Modeling through Response Surface Methodology and Artificial Neural Network
Authors: Prateeksha Mahamallik, Anjali Pal
Abstract:
In the present study, Co(II)-adsolubilized surfactant-modified alumina (SMA) was prepared, and methylene blue (MB) degradation was carried out on the Co-SMA surface by a visible-light photo-Fenton process. The entire reaction proceeded on the solid surface, as MB was embedded on the Co-SMA surface. The reaction followed zero-order kinetics. Response surface methodology (RSM) and an artificial neural network (ANN) were used to model the decolorization of MB by the photo-Fenton process as a function of the dose of Co-SMA (10, 20 and 30 g/L), initial concentration of MB (10, 20 and 30 mg/L), concentration of H2O2 (174.4, 348.8 and 523.2 mM) and reaction time (30, 45 and 60 min). The prediction capabilities of the two methodologies (RSM and ANN) were compared on the basis of the correlation coefficient (R2), root mean square error (RMSE), standard error of prediction (SEP), and relative percent deviation (RPD). Due to the lower values of RMSE (1.27), SEP (2.06) and RPD (1.17) and the higher value of R2 (0.9966), ANN proved more accurate than RSM in predicting decolorization efficiency.
Keywords: adsolubilization, artificial neural network, methylene blue, photo-Fenton process, response surface methodology
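The four comparison statistics can be computed in a few lines; the observed/predicted decolorization values below are made up for illustration, and note that RPD is defined in more than one way in the literature (mean absolute relative deviation is used here):

```python
import numpy as np

def fit_metrics(obs, pred):
    """R2, RMSE, %SEP and relative percent deviation for a model's predictions."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    res = obs - pred
    rmse = np.sqrt(np.mean(res ** 2))
    sep = 100.0 * rmse / obs.mean()                  # standard error of prediction, %
    rpd = 100.0 * np.mean(np.abs(res) / obs)         # one common RPD definition
    r2 = 1.0 - (res @ res) / ((obs - obs.mean()) @ (obs - obs.mean()))
    return r2, rmse, sep, rpd

obs  = [62.1, 71.4, 80.2, 55.9, 90.3]    # hypothetical decolorization (%)
pred = [61.0, 72.5, 79.0, 57.1, 89.5]
r2, rmse, sep, rpd = fit_metrics(obs, pred)
```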
Procedia PDF Downloads 254
10686 A Novel RLS Based Adaptive Filtering Method for Speech Enhancement
Authors: Pogula Rakesh, T. Kishore Kumar
Abstract:
Speech enhancement is a long-standing problem with numerous applications such as teleconferencing, VoIP, hearing aids, and speech recognition. The motivation behind this research work is to obtain a clean speech signal of higher quality by applying the optimal noise cancellation technique. Real-time adaptive filtering algorithms seem to be the best candidates among all categories of speech enhancement methods. In this paper, we propose a speech enhancement method based on a Recursive Least Squares (RLS) adaptive filter for speech signals. Experiments were performed on noisy data prepared by adding AWGN, babble and pink noise to clean speech samples at -5 dB, 0 dB, 5 dB, and 10 dB SNR levels. We then compare the noise cancellation performance of the proposed RLS algorithm with the existing NLMS algorithm in terms of Mean Squared Error (MSE), Signal-to-Noise Ratio (SNR), and SNR loss. Based on the performance evaluation, the proposed RLS algorithm was found to be the better optimal noise cancellation technique for speech signals.
Keywords: adaptive filter, adaptive noise canceller, mean squared error, noise reduction, NLMS, RLS, SNR, SNR loss
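The standard RLS recursion (gain vector, a priori error, inverse-correlation update) can be sketched compactly; the example below identifies a known FIR system rather than enhancing real speech, and the filter order, forgetting factor and channel taps are illustrative choices:

```python
import numpy as np

def rls_filter(x, d, order=4, lam=0.99, delta=100.0):
    """Recursive least squares adaptive filter with forgetting factor lam;
    returns the final weights and the filter output."""
    w = np.zeros(order)
    P = delta * np.eye(order)               # inverse correlation matrix estimate
    y = np.zeros(len(d))
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]    # newest sample first
        k = P @ u / (lam + u @ P @ u)       # gain vector
        y[n] = w @ u
        e = d[n] - y[n]                     # a priori error
        w = w + k * e
        P = (P - np.outer(k, u @ P)) / lam
    return w, y

# System identification: recover a known 4-tap FIR channel from noisy data
rng = np.random.default_rng(4)
h = np.array([0.6, -0.3, 0.2, 0.1])
x = rng.normal(size=2000)
d = np.convolve(x, h)[:2000] + rng.normal(0, 0.01, 2000)
w, y = rls_filter(x, d)
```

NLMS replaces the matrix update with a normalized gradient step, which is cheaper per sample but converges more slowly, which is the trade-off the abstract's comparison probes.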
Procedia PDF Downloads 481
10685 Analysis of Human Mental and Behavioral Models for Development of an Electroencephalography-Based Human Performance Management System
Authors: John Gaber, Youssef Ahmed, Hossam A. Gabbar, Jing Ren
Abstract:
Accidents at Nuclear Power Plants (NPPs) occur due to various factors, notable among them poor safety management and poor safety culture. During abnormal situations, the likelihood of human error is many times higher due to the higher cognitive workload. The most common cause of human error and high cognitive workload is mental fatigue. Electroencephalography (EEG) is a method of gathering the electromagnetic waves emitted by the human brain. We propose a safety system that monitors brainwaves for signs of mental fatigue using an EEG system. This requires an analysis of the mental model of the NPP operator, of changes in brain wave power in response to certain stimuli, and of the risk factors for mental fatigue and attention that NPP operators face when performing their tasks. We analyzed these factors and developed an EEG-based monitoring system, which aims to alert NPP operators when their levels of mental fatigue and attention hinder their ability to maintain safety.
Keywords: brain imaging, EEG, power plant operator, psychology
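"Brain wave power" monitoring usually means band power extracted from the EEG spectrum; a minimal sketch on a synthetic channel follows. The theta/beta ratio used as a fatigue proxy here is a simplified assumption for illustration, not the authors' published index:

```python
import numpy as np

def band_power(signal, fs, band):
    """Power of a channel in a frequency band via the FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].sum()

# Synthetic 10 s channel at 256 Hz: strong 6 Hz (theta) plus weak 20 Hz (beta)
fs = 256
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(5)
eeg = (2.0 * np.sin(2 * np.pi * 6 * t)
       + 0.5 * np.sin(2 * np.pi * 20 * t)
       + 0.2 * rng.normal(size=t.size))

theta = band_power(eeg, fs, (4, 8))
beta = band_power(eeg, fs, (13, 30))
fatigue_index = theta / beta      # a simplified, hypothetical fatigue proxy
```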
Procedia PDF Downloads 101
10684 The Relationships between Carbon Dioxide (CO2) Emissions, Energy Consumption and GDP for Israel: Time Series Analysis, 1980-2010
Authors: Jinhoa Lee
Abstract:
The relationships between environmental quality, energy use and economic output have attracted growing attention over the past decades among researchers and policy makers. Focusing on the empirical aspects of the role of CO2 emissions and energy use in affecting economic output, this paper is an effort to fill the gap with a comprehensive country-level case study using modern econometric techniques. To achieve this goal, this country-specific study examines the short-run and long-run relationships among energy consumption (using disaggregated energy sources: crude oil, coal, natural gas, electricity), carbon dioxide (CO2) emissions and gross domestic product (GDP) for Israel, using time series analysis for the years 1980-2010. To investigate the relationships between the variables, this paper employs the Phillips-Perron (PP) test for stationarity, the Johansen maximum likelihood method for cointegration, and a Vector Error Correction Model (VECM) for both short- and long-run causality among the research variables. The long-run equilibrium in the VECM suggests significant positive impacts of coal and natural gas consumption on GDP in Israel. In the short run, GDP positively affects coal consumption. While there is a positive unidirectional causality running from coal consumption to the consumption of petroleum products and the direct combustion of crude oil, there is a negative unidirectional causality running from natural gas consumption to the consumption of petroleum products and the direct combustion of crude oil in the short run. Overall, the results support arguments that there are relationships among environmental quality, energy use and economic output, but the associations can differ by the source of energy in the case of Israel over the period 1980-2010.
Keywords: CO2 emissions, energy consumption, GDP, Israel, time series analysis
Procedia PDF Downloads 651
10683 Evaluation of Solid-Gas Separation Efficiency in Natural Gas Cyclones
Authors: W. I. Mazyan, A. Ahmadi, M. Hoorfar
Abstract:
Objectives/Scope: This paper proposes a mathematical model for calculating the solid-gas separation efficiency in cyclones. The model agrees better with experimental results than existing mathematical models. Methods: The separation ratio efficiency, ϵsp, is evaluated by calculating the outlet-to-inlet particle count ratio. Similar to mathematical derivations in the literature, the inlet and outlet particle counts were evaluated based on an Eulerian approach. The model also includes the external forces acting on the particle (i.e., centrifugal and drag forces). In addition, the proposed model evaluates the exact length that the particle travels inside the cyclone for the evaluation of the number of turns inside the cyclone. The separation efficiency model derivation using Stokes' law considers the effect of the inlet tangential velocity on the separation performance. In cyclones, the inlet velocity is a very important factor in determining the performance of the cyclone separation; therefore, the proposed model provides an accurate estimation of the actual cyclone separation efficiency. Results/Observations/Conclusion: The separation ratio efficiency, ϵsp, is studied to evaluate the performance of the cyclone for particles ranging from 1 to 10 microns. The proposed model is compared with results in the literature. It is shown that the proposed mathematical model indicates an error of 7% between its efficiency and the efficiency obtained from the experimental results for 1 micron particles. At the same time, the proposed model gives the user the flexibility to analyze the separation efficiency at different inlet velocities. Additional Information: The proposed model determines the separation efficiency accurately and could also be used to optimize the separation efficiency of cyclones at low cost through trial-and-error testing, through dimensional changes to enhance separation, and through increasing the particle centrifugal forces. Ultimately, the proposed model provides a powerful tool to optimize and enhance existing cyclones at low cost.
Keywords: cyclone efficiency, solid-gas separation, mathematical model, model error comparison
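For context, the classical Stokes-law treatment the abstract builds on is captured by Lapple's textbook cut-diameter model (not the authors' improved model); the geometry and flow numbers below are assumed illustrative values in SI units:

```python
import numpy as np

def lapple_d50(mu, W, N, v_in, rho_p, rho_g):
    """Lapple cut diameter: the particle size collected with 50% efficiency.
    Derived from Stokes' law; N is the effective number of turns."""
    return np.sqrt(9 * mu * W / (2 * np.pi * N * v_in * (rho_p - rho_g)))

def lapple_efficiency(d, d50):
    """Lapple grade-efficiency curve for particle diameter d."""
    return 1.0 / (1.0 + (d50 / d) ** 2)

# Illustrative gas-cyclone numbers (assumed, SI units)
mu, W = 1.8e-5, 0.05          # gas viscosity (Pa.s), inlet width (m)
N, v_in = 6, 15.0             # effective turns, inlet velocity (m/s)
rho_p, rho_g = 2500.0, 1.2    # particle / gas density (kg/m^3)

d50 = lapple_d50(mu, W, N, v_in, rho_p, rho_g)
eta_5um = lapple_efficiency(5e-6, d50)
d50_fast = lapple_d50(mu, W, N, 2 * v_in, rho_p, rho_g)  # higher inlet velocity
```

The last line reproduces the velocity effect the abstract emphasizes: doubling the inlet velocity shrinks the cut size, so smaller particles are captured.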
Procedia PDF Downloads 392
10682 Improved Acoustic Source Sensing and Localization Based On Robot Locomotion
Authors: V. Ramu Reddy, Parijat Deshpande, Ranjan Dasgupta
Abstract:
This paper presents a methodology for acoustic source sensing and localization in an unknown environment. The developed methodology includes an acoustic sensing and localization system, a converging target localization based on recursive direction-of-arrival (DOA) error minimization, and a regressive obstacle avoidance function. Our method is able to augment existing, proven localization techniques and improve results incrementally by utilizing robot locomotion, and it is capable of converging to a position estimate with greater accuracy using fewer measurements. The results also demonstrated the DOA error minimization at each iteration, the improvement in time to reach the destination, and the efficiency of this target localization method in gradually converging to the real target position. Initially, the system is tested using a Kinect mounted on a turntable with DOA markings, which serve as ground truth; our approach is then validated using a FireBird VI (FBVI) mobile robot on which the Kinect is used to obtain bearing information.
Keywords: acoustic source localization, acoustic sensing, recursive direction of arrival, robot locomotion
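The core geometric idea, fusing DOA measurements taken from several robot poses into one position estimate, can be sketched as a bearing-only least-squares intersection; the poses, source position and noise level are hypothetical, and this omits the paper's recursive refinement and obstacle avoidance:

```python
import numpy as np

def locate_from_bearings(positions, bearings):
    """Least-squares intersection of bearing lines: a DOA measured at robot
    position p constrains the source to the line through p with direction
    (cos b, sin b); the line's normal gives one linear equation."""
    A, c = [], []
    for p, b in zip(positions, bearings):
        n = np.array([-np.sin(b), np.cos(b)])   # normal to the bearing line
        A.append(n)
        c.append(n @ p)
    est, *_ = np.linalg.lstsq(np.array(A), np.array(c), rcond=None)
    return est

# Source at (3, 2); noisy DOAs measured from three robot poses
src = np.array([3.0, 2.0])
rng = np.random.default_rng(6)
poses = np.array([[0.0, 0.0], [1.0, -1.0], [-1.0, 1.5]])
doas = [np.arctan2(*(src - p)[::-1]) + rng.normal(0, 0.01) for p in poses]
est = locate_from_bearings(poses, doas)
```

Each new robot pose adds one row to the system, which is why moving the robot lets the estimate converge with fewer measurements per location.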
Procedia PDF Downloads 492
10681 Forecasting Regional Data Using Spatial VARs
Authors: Taisiia Gorshkova
Abstract:
Since the 1980s, spatial correlation models have been used increasingly often to model regional indicators. An increasingly popular method for studying regional indicators is modeling that takes into account spatial relationships between objects that are part of the same economic zone. In the 2000s, a new class of models, spatial vector autoregressions, was developed. The main difference between standard and spatial vector autoregressions is that in the spatial VAR (SpVAR), the values of indicators at time t may depend on the values of explanatory variables at the same time t in neighboring regions and on the values of explanatory variables at time t-k in neighboring regions. Thus, the VAR is a special case of the SpVAR in the absence of spatial lags, and the spatial panel data model is a special case of the spatial VAR in the absence of time lags. Two specifications of the SpVAR were applied to Russian regional data for 2000-2017. The values of GRP and regional CPI are used as endogenous variables, and the lags of GRP, CPI and the unemployment rate are used as explanatory variables. For comparison purposes, a standard VAR without spatial correlation was used as the "naïve" model. In the first specification of the SpVAR, the unemployment rate and the values of the dependent variables, GRP and CPI, in neighboring regions at the same moment of time t were included in the equations for GRP and CPI, respectively. To account for the values of indicators in neighboring regions, an adjacency weight matrix is used, in which regions with a common sea or land border are assigned a value of 1 and the rest 0. In the second specification, the values of the dependent variables in neighboring regions at time t were replaced by their values at the previous time t-1. According to the results obtained, when the inflation and GRP of neighbors are added to the model, both inflation and GRP are significantly affected by their previous values; inflation is also positively affected by an increase in unemployment in the previous period and negatively affected by an increase in GRP in the previous period, which corresponds to economic theory. GRP is affected by neither the inflation lag nor the unemployment lag. When the model takes into account lagged values of GRP and inflation in neighboring regions, the results of the inflation modeling are practically unchanged: all indicators except the unemployment lag are significant at the 5% significance level. For GRP, in turn, GRP lags in neighboring regions also become significant at the 5% significance level. The RMSE was calculated for both the spatial and "naïve" VARs; the minimum RMSE is obtained via the SpVAR with lagged explanatory variables. Thus, according to the results of the study, it can be concluded that SpVARs can accurately model both the actual values of macro indicators (particularly CPI and GRP) and the general situation in the regions.
Keywords: forecasting, regional data, spatial econometrics, vector autoregression
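The second specification (lagged spatial terms only) is the easier one to sketch, because with no contemporaneous spatial term everything is estimable by OLS. Below is a one-indicator simplification with four simulated regions on a line and a row-normalized adjacency matrix; the coefficients and layout are assumptions for illustration, not the Russian data:

```python
import numpy as np

rng = np.random.default_rng(7)
R, T = 4, 500

# Row-normalized adjacency: regions on a line, neighbours share a border
Wm = np.array([[0, 1, 0, 0],
               [1, 0, 1, 0],
               [0, 1, 0, 1],
               [0, 0, 1, 0]], dtype=float)
Wm /= Wm.sum(axis=1, keepdims=True)

# Simulate y_t = phi * y_{t-1} + rho * W y_{t-1} + noise for each region
phi, rho = 0.5, 0.3
y = np.zeros((T, R))
for t in range(1, T):
    y[t] = phi * y[t - 1] + rho * Wm @ y[t - 1] + rng.normal(0, 1, R)

# Estimate (phi, rho) by OLS, stacking all regions and periods
own = y[:-1].ravel()                    # own lag y_{t-1,r}
spatial = (y[:-1] @ Wm.T).ravel()       # lagged spatial lag (W y_{t-1})_r
X = np.column_stack([own, spatial])
(phi_hat, rho_hat), *_ = np.linalg.lstsq(X, y[1:].ravel(), rcond=None)
```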
Procedia PDF Downloads 141
10680 Virtual Chemistry Laboratory as Pre-Lab Experiences: Stimulating Student's Prediction Skill
Authors: Yenni Kurniawati
Abstract:
Prediction skill in chemistry experiments is important for pre-service chemistry students, stimulating reflective thinking at each stage of many chemistry experiments, qualitatively and quantitatively. A virtual chemistry laboratory was designed to give students the opportunity and time to practice many kinds of chemistry experiments repeatedly, everywhere and anytime, before they perform a real experiment. The content of the virtual chemistry laboratory was constructed using the Model of Educational Reconstruction and developed to enhance students' ability to predict experimental results and analyze the causes of error, calculating accuracy and precision with care in the use of chemicals. This research showed changes in students' decision making, with students becoming extremely aware of accuracy but still showing low concern for precision. It enhanced students' level of reflective thinking skill related to their prediction skill by one to two stages on average. Most of the students could predict the characteristics of the product of an experiment, and even whether the result was going to be erroneous. In addition, they took experiments more seriously and were more curious about the experimental results. This study recommends providing different subject matter to give students more opportunities to learn about other kinds of chemistry experiment design.
Keywords: virtual chemistry laboratory, chemistry experiments, prediction skill, pre-lab experiences
Procedia PDF Downloads 340
10679 Determinants of Economic Growth in Pakistan: A Structural Vector Auto Regression Approach
Authors: Muhammad Ajmair
Abstract:
This empirical study followed the structural vector autoregression (SVAR) approach of the so-called AB-model of Amisano and Giannini (1997) to check the impact of the relevant macroeconomic determinants on economic growth in Pakistan. Before that, the autoregressive distributed lag (ARDL) bound testing technique and a time-varying parametric approach, along with a general-to-specific approach, were employed to find the relevant significant determinants of economic growth. To the best of our knowledge, no study in the empirical literature has employed ARDL bound testing together with a time-varying parametric approach and a general-to-specific approach, and the current study bridges this gap. Annual data were taken from the World Development Indicators (2014) for the period 1976-2014. The widely used Schwarz information criterion and Akaike information criterion were considered for the lag length in each estimated equation. The main findings of the study are that remittances received, gross national expenditures and inflation are the most relevant positive and significant determinants of economic growth. Based on these empirical findings, we conclude that the government should focus on factors that augment overall economic growth when formulating any policy relevant to the concerned sector.
Keywords: economic growth, gross national expenditures, inflation, remittances
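An SVAR starts from a reduced-form VAR estimated by OLS and then identifies structural shocks from the residual covariance. The sketch below uses recursive (Cholesky) identification, which is a simpler special case than the AB-model the study applies, on a simulated two-variable system with assumed coefficients:

```python
import numpy as np

rng = np.random.default_rng(8)
T = 500

# Simulated reduced-form VAR(1) in two variables (e.g. growth, inflation)
A1 = np.array([[0.5, 0.1],
               [0.2, 0.4]])
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A1 @ y[t - 1] + rng.normal(0, 1, 2)

# Reduced-form OLS: y_t = A1 y_{t-1} + u_t
X, Y = y[:-1], y[1:]
A1_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
U = Y - X @ A1_hat.T
Sigma = U.T @ U / (T - 1)

# Structural shocks via Cholesky: u_t = B e_t with B lower triangular,
# so e_t = B^{-1} u_t should be unit-variance and uncorrelated.
B = np.linalg.cholesky(Sigma)
eps = U @ np.linalg.inv(B).T
C = eps.T @ eps / (T - 1)       # sample covariance of the structural shocks
```

The AB-model generalizes this step by letting economic restrictions on two matrices (A and B) replace the purely recursive ordering.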
Procedia PDF Downloads 199
10678 Kinematic Optimization of Energy Extraction Performances for Flapping Airfoil by Using Radial Basis Function Method and Genetic Algorithm
Authors: M. Maatar, M. Mekadem, M. Medale, B. Hadjed, B. Imine
Abstract:
In this paper, numerical simulations have been carried out to study the performance of a flapping wing used as an energy collector. Metamodeling and genetic algorithms are used to detect the optimal configuration, improving the power coefficient and/or efficiency. Radial basis functions and genetic algorithms have been applied to solve this problem. Three optimization factors are controlled, namely the dimensionless heave amplitude h₀, the pitch amplitude θ₀ and the flapping frequency f. ANSYS FLUENT software has been used to solve the governing equations at a Reynolds number of 1100, while the heave and pitch motion of a NACA0015 airfoil has been implemented using a user-defined function (UDF). The results reveal an average power coefficient and efficiency of 0.78 and 0.338 with an inexpensive low-fidelity model and a total relative error of 4.1% versus the simulation. The performance of the simulated RBF-NSGA-II optimum has been improved by 1.2% compared with the validated model.
Keywords: numerical simulation, flapping wing, energy extraction, power coefficient, efficiency, RBF, NSGA-II
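The metamodeling step, replacing costly CFD evaluations with a cheap RBF surrogate that the genetic algorithm can then query freely, can be sketched as Gaussian RBF interpolation; the analytic "expensive" function, the shape parameter, and the two-variable design space are assumptions for illustration, not the paper's CFD model:

```python
import numpy as np

def rbf_fit(X, y, eps=4.0):
    """Gaussian RBF metamodel: solve Phi @ w = y on the samples,
    Phi_ij = exp(-eps * ||x_i - x_j||^2)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    w, *_ = np.linalg.lstsq(np.exp(-eps * d2), y, rcond=None)
    return w

def rbf_predict(X, w, Xq, eps=4.0):
    d2 = ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-eps * d2) @ w

def expensive(x):
    """Hypothetical stand-in for a costly CFD run: power coefficient
    as a function of (heave amplitude, frequency), both normalized."""
    return np.sin(3 * x[:, 0]) * np.cos(2 * x[:, 1])

rng = np.random.default_rng(9)
X = rng.uniform(0, 1, size=(40, 2))             # 40 "CFD" samples
w = rbf_fit(X, expensive(X))
Xq = rng.uniform(0.1, 0.9, size=(20, 2))        # unseen query points
test_err = np.mean(np.abs(rbf_predict(X, w, Xq) - expensive(Xq)))
train_err = np.mean(np.abs(rbf_predict(X, w, X) - expensive(X)))
```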
Procedia PDF Downloads 43
10677 Measurement of Ionospheric Plasma Distribution over Myanmar Using Single Frequency Global Positioning System Receiver
Authors: Win Zaw Hein, Khin Sandar Linn, Su Su Yi Mon, Yoshitaka Goto
Abstract:
The Earth's ionosphere is located at altitudes from about 70 km to several hundred km above the ground, and it is composed of ions and electrons, called plasma. In the ionosphere, this plasma delays GPS (Global Positioning System) signals and reflects radio waves. The delay along the signal path from the satellite to the receiver is directly proportional to the total electron content (TEC) of the plasma, and this delay is the largest error factor in satellite positioning and navigation. Sounding observation from the top and bottom of the ionosphere was long the standard way to investigate such ionospheric plasma. Recently, continuous monitoring of the TEC using networks of GNSS (Global Navigation Satellite System) observation stations, which are primarily built for land survey, has been conducted in several countries. However, these stations install multi-frequency receivers to estimate the plasma delay from its frequency dependence, and the cost of multi-frequency receivers is much higher than that of single-frequency GPS receivers. In this research, a single-frequency GPS receiver was used instead of expensive multi-frequency GNSS receivers to measure ionospheric plasma variation such as the vertical TEC distribution. In this measurement, a single-frequency u-blox GPS receiver was used to probe the ionospheric TEC. The observation site was at Mandalay Technological University in Myanmar. In the method, the ionospheric TEC distribution is represented by polynomial functions of latitude and longitude, and the parameters of the functions are determined by least-squares fitting on pseudorange data obtained at a known location under the assumption of a thin-layer ionosphere. The validity of the method was evaluated against measurements obtained by the Japanese GNSS observation network GEONET, and the performance of the single-frequency measurements was compared with the results of dual-frequency measurement.
Keywords: ionosphere, global positioning system, GPS, ionospheric delay, total electron content, TEC
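The least-squares step described above can be sketched on synthetic data: under a thin-layer model, slant TEC is the vertical TEC at the pierce point scaled by a mapping function, and the polynomial coefficients fall out of one linear solve. The 1/sin(elevation) mapping function, the patch geometry, and the "true" surface are simplifying assumptions for illustration:

```python
import numpy as np

def fit_vtec_surface(lat, lon, stec, elev):
    """Thin-layer model: slant TEC = M(e) * VTEC(lat, lon), with the simple
    single-layer mapping M(e) = 1/sin(e) and VTEC a quadratic polynomial."""
    M = 1.0 / np.sin(elev)
    terms = [np.ones_like(lat), lat, lon, lat ** 2, lat * lon, lon ** 2]
    A = np.column_stack([M * t for t in terms])
    coef, *_ = np.linalg.lstsq(A, stec, rcond=None)
    return coef

# Synthetic pierce points over a 10 x 10 degree patch (hypothetical truth)
rng = np.random.default_rng(10)
lat = rng.uniform(-5, 5, 200)
lon = rng.uniform(-5, 5, 200)
elev = rng.uniform(np.deg2rad(30), np.deg2rad(85), 200)
vtec_true = 20 + 0.8 * lat - 0.5 * lon + 0.05 * lat * lon
stec = vtec_true / np.sin(elev) + rng.normal(0, 0.3, 200)

coef = fit_vtec_surface(lat, lon, stec, elev)
```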
Procedia PDF Downloads 137
10676 Numerical Study on Ultimate Capacity of Bi-Modulus Beam-Column
Authors: Zhiming Ye, Dejiang Wang, Huiling Zhao
Abstract:
The development of technology demands higher-level research on the mechanical behavior of materials. Structural members made of bi-modulus materials have different elastic moduli under tension and under compression, and the stress and strain states at a point determine the elastic modulus and Poisson ratio at every point in the bi-modulus material body. The uncertainty and nonlinearity of the elastic constitutive relation make the analysis of bi-modulus members a complicated nonlinear problem. In this paper, small-displacement and large-displacement finite element methods for bi-modulus members are proposed, with displacement nonlinearity considered in the elastic constitutive equation. The mechanical behavior of slender bi-modulus beam-columns under different boundary conditions and loading patterns has been simulated by the proposed method, and the factors influencing the ultimate bearing capacity of slender beams and columns have been studied. The results show that as the ratio of tensile modulus to compressive modulus increases, the error of a simulation employing the classical same-modulus theory exceeds the engineering permissible error.
Keywords: bi-modulus, ultimate capacity, beam-column, nonlinearity
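A worked instance of why the same-modulus assumption fails: in pure bending of a rectangular bi-modulus section, force balance over the cross-section gives Et*h_t^2 = Ec*h_c^2 with h_t + h_c = h, so the neutral axis shifts toward the stiffer side instead of sitting at mid-depth. This is a standard textbook result, sketched with assumed moduli, not the paper's FEM:

```python
import numpy as np

def bimodulus_neutral_axis(h, Et, Ec):
    """Pure bending of a rectangular bi-modulus section: return the depths
    of the tension zone (h_t) and compression zone (h_c) from force balance,
    Et * h_t**2 = Ec * h_c**2 with h_t + h_c = h."""
    r = np.sqrt(Ec / Et)          # ratio h_t / h_c
    h_c = h / (1.0 + r)
    return h - h_c, h_c           # h_t, h_c

# Hypothetical section: depth 0.1 m, compression 4x stiffer than tension
h_t, h_c = bimodulus_neutral_axis(0.1, Et=2.0e9, Ec=8.0e9)

# Sanity check: with equal moduli the neutral axis returns to mid-depth
h_t_eq, h_c_eq = bimodulus_neutral_axis(0.1, Et=3.0e9, Ec=3.0e9)
```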
Procedia PDF Downloads 41210675 Comparison of Different Intraocular Lens Power Calculation Formulas in People With Very High Myopia
Authors: Xia Chen, Yulan Wang
Abstract:
Purpose: To compare the accuracy of the Haigis, SRK/T, T2, Holladay 1, Hoffer Q, Barrett Universal II, Emmetropia Verifying Optical (EVO), and Kane formulas for intraocular lens (IOL) power calculation in patients with axial length (AL) ≥ 28 mm. Methods: In this retrospective single-center study, 50 eyes of 41 patients with AL ≥ 28 mm that underwent uneventful cataract surgery were enrolled. The actual postoperative refractive results were compared with the refraction predicted by each formula (Haigis, SRK/T, T2, Holladay 1, Hoffer Q, Barrett Universal II, EVO, and Kane), and the mean absolute prediction errors (MAE) at 1 month postoperatively were compared. Results: The MAEs of the formulas were as follows: Haigis (0.509), SRK/T (0.705), T2 (0.999), Holladay 1 (0.714), Hoffer Q (0.583), Barrett Universal II (0.552), EVO (0.463), and Kane (0.441). No significant difference in MAE was found among the formulas (P = .122). The Kane and EVO formulas achieved the lowest mean prediction error (PE) and median absolute error (MedAE) (p < 0.05). Conclusion: In this study, the Kane and EVO formulas predicted IOL power in highly myopic eyes with AL longer than 28 mm more successfully than the other formulas. Keywords: cataract, power calculation formulas, intraocular lens, long axial length
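The three error metrics the abstract reports (MAE, MedAE, mean PE) are simple summaries of the per-eye prediction errors. The sketch below computes them on hypothetical signed refraction errors (predicted minus actual, in diopters), not the study's data.

```python
import statistics

# Hypothetical per-eye prediction errors (predicted - actual refraction, D).
pe = [0.35, -0.50, 0.10, -0.75, 0.45, -0.20]

abs_err = [abs(e) for e in pe]
mae = statistics.mean(abs_err)      # mean absolute error
medae = statistics.median(abs_err)  # median absolute error
me = statistics.mean(pe)            # mean (signed) prediction error
```

MAE is sensitive to the occasional large miss, MedAE is robust to it, and the signed mean PE reveals a systematic hyperopic or myopic bias, which is why IOL formula studies usually report all three.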
Procedia PDF Downloads 8310674 Statically Fused Unbiased Converted Measurements Kalman Filter
Authors: Zhengkun Guo, Yanbin Li, Wenqing Wang, Bo Zou
Abstract:
The statically fused converted position and Doppler measurements Kalman filter (SF-CMKF) with additive debiased measurement conversion has previously been presented: it combines the state estimates of the converted position measurements Kalman filter (CPMKF) and the converted Doppler measurement Kalman filter (CDMKF) into final state estimates under the minimum mean squared error (MMSE) criterion. However, the exact compensation for the bias in the polar-to-Cartesian and spherical-to-Cartesian conversions is multiplicative and depends on the statistics of the cosine of the angle measurement errors. As a result, the consistency and performance of the SF-CMKF may be suboptimal in large angle-error situations. In this paper, multiplicative unbiased position and Doppler measurement conversions for 2D (polar-to-Cartesian) tracking are derived, and the SF-CMKF is modified to use them. Monte Carlo simulations demonstrate the statistical consistency of the multiplicative unbiased conversion and the superior performance of the modified filter (SF-UCMKF). Keywords: measurement conversion, Doppler, Kalman filter, estimation, tracking
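The multiplicative debiasing idea can be seen in one dimension of the 2D conversion. For zero-mean Gaussian angle noise with standard deviation sigma, E[cos(noise)] = exp(-sigma^2/2), so the converted coordinate r*cos(theta) is biased toward zero by that factor; dividing by it removes the bias. The numbers below are a Monte Carlo sketch with made-up range/bearing values, not the paper's scenario.

```python
import math
import random

# Sketch of the multiplicative unbiased polar-to-Cartesian conversion.
random.seed(7)
sigma = 0.3                     # rad; deliberately large bearing noise
r, theta = 1000.0, math.pi / 4  # true range (m) and bearing (hypothetical)
lam = math.exp(-sigma ** 2 / 2)  # bias factor E[cos(noise)] for Gaussian noise

n = 200_000
x_biased = sum(r * math.cos(theta + random.gauss(0.0, sigma))
               for _ in range(n)) / n    # raw converted x, shrunk by lam
x_unbiased = x_biased / lam              # multiplicative debiasing
x_true = r * math.cos(theta)
```

With sigma = 0.3 rad the raw conversion underestimates x by roughly 4%, which is exactly the large-angle-error regime where the abstract says the additive debiasing of SF-CMKF becomes inconsistent.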
Procedia PDF Downloads 20810673 Extraction of Polystyrene from Styrofoam Waste: Synthesis of Novel Chelating Resin for the Enrichment and Speciation of Cr(III)/Cr(VI) Ions in Industrial Effluents
Authors: Ali N. Siyal, Saima Q. Memon, Latif Elçi, Aydan Elçi
Abstract:
Polystyrene (PS) was extracted from Styrofoam (expanded polystyrene foam) waste, the so-called 'white pollutant'. The PS was functionalized with the N,N-bis(2-aminobenzylidene)benzene-1,2-diamine (ABA) ligand through an azo spacer. The resin was characterized by FT-IR spectroscopy and elemental analysis. The PS-N=N-ABA resin was used for the enrichment and speciation of Cr(III)/Cr(VI) ions and for total Cr determination in aqueous samples by flame atomic absorption spectrometry (FAAS). Separation of Cr(III)/Cr(VI) ions was achieved at pH 2. Recovery of Cr(VI) ions of ≥ 95.0% was achieved at the optimum parameters: pH 2; resin amount 300 mg; flow rates of 2.0 mL min⁻¹ for the sample solution and 2.0 mL min⁻¹ for the eluent (2.0 mol L⁻¹ HNO₃). Total Cr was determined after oxidation of Cr(III) to Cr(VI) with H₂O₂. The limits of detection (LOD) and quantification (LOQ) for Cr(VI) were found to be 0.40 and 1.20 μg L⁻¹, respectively, with a preconcentration factor of 250. The total saturation and breakthrough capacities of the resin for Cr(VI) ions were found to be 0.181 and 0.531 mmol g⁻¹, respectively. The proposed method was successfully applied to the preconcentration/speciation of Cr(III)/Cr(VI) ions and the determination of total Cr in industrial effluents. Keywords: Styrofoam waste, polymeric resin, preconcentration, speciation, Cr(III)/Cr(VI) ions, FAAS
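LOD and LOQ figures like those quoted above are conventionally obtained from the 3σ and 10σ criteria on the blank signal and the calibration slope. The blank standard deviation and slope below are hypothetical, chosen only to show the arithmetic, and are not the study's FAAS calibration values.

```python
# Common 3-sigma / 10-sigma criterion for detection and quantification limits.
# Inputs are hypothetical, not the study's calibration data.
s_blank = 0.0040   # standard deviation of blank absorbance (illustrative)
slope = 0.0300     # calibration slope, absorbance per (µg/L) (illustrative)

lod = 3 * s_blank / slope    # limit of detection, µg/L
loq = 10 * s_blank / slope   # limit of quantification, µg/L
```

Note that under this criterion LOQ is always 10/3 of LOD; the abstract's LOD/LOQ pair (0.40 and 1.20 µg/L) was derived from the authors' own calibration procedure.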
Procedia PDF Downloads 29410672 Performance Analysis of MIMO-OFDM Using Convolution Codes with QAM Modulation
Authors: I Gede Puja Astawa, Yoedy Moegiharto, Ahmad Zainudin, Imam Dui Agus Salim, Nur Annisa Anggraeni
Abstract:
The performance of an Orthogonal Frequency Division Multiplexing (OFDM) system can be improved by adding channel coding (an error-correcting code) to detect and correct errors that occur during data transmission; one option is a convolutional code. This paper presents the performance of OFDM with the Space Time Block Code (STBC) diversity technique and QAM modulation at code rate 1/2. The evaluation analyzes the Bit Error Rate (BER) versus the energy per bit to noise power spectral density ratio (Eb/No). The scheme uses 256 subcarriers transmitted over a Rayleigh multipath channel. Achieving a BER of 10⁻³ requires 30 dB SNR for the SISO-OFDM scheme, 10 dB for the 2x2 MIMO-OFDM scheme, and 5 dB for the 4x4 MIMO-OFDM scheme, while adding convolutional coding to 4x4 MIMO-OFDM improves performance to 0 dB for the same BER. This demonstrates a power saving of 3 dB over the uncoded 4x4 MIMO-OFDM system, 7 dB over the uncoded 2x2 MIMO-OFDM system, and a significant saving over the SISO-OFDM system. Keywords: convolution code, OFDM, MIMO, QAM, BER
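For orientation on what a BER-versus-Eb/No curve looks like, the sketch below evaluates the closed-form QPSK/BPSK bit-error probability over an AWGN channel, Pb = 0.5*erfc(sqrt(Eb/N0)). This is only a baseline: the abstract's Rayleigh multipath channel needs far higher SNR for the same BER, which is precisely the gap that MIMO diversity and convolutional coding close.

```python
import math

def qpsk_ber_awgn(ebno_db):
    """Theoretical QPSK (per-bit) error rate over AWGN: 0.5*erfc(sqrt(Eb/N0))."""
    ebno = 10 ** (ebno_db / 10)
    return 0.5 * math.erfc(math.sqrt(ebno))

# BER falls monotonically as Eb/No rises; ~1e-3 is reached near 6.8 dB on AWGN.
curve = {db: qpsk_ber_awgn(db) for db in (0, 5, 10)}
```

Comparing this AWGN floor with the 30 dB the abstract reports for SISO-OFDM on a Rayleigh channel makes the size of the fading penalty, and of the MIMO/coding gains, concrete.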
Procedia PDF Downloads 38810671 Breast Cancer Sensing and Imaging Utilized Printed Ultra Wide Band Spherical Sensor Array
Authors: Elyas Palantei, Dewiani, Farid Armin, Ardiansyah
Abstract:
A high-precision printed microwave sensor for sensing and monitoring potential breast cancer in women's breast tissue was numerically optimized. A single UWB printed sensor element, modeled through several numerical optimizations, was fabricated in multiple copies and incorporated into a bra to form a spherical sensor array. One UWB microwave sensor design obtained from the numerical computation and optimization was chosen for fabrication. In total, the spherical sensor array consists of twelve stair patch structures, and each element was individually measured to characterize its electrical properties, especially the return loss; the S11 profiles of all UWB sensor elements are compared and discussed. The constructed UWB sensor was verified with HFSS simulations, CST simulations, and experimental measurement. Numerically, both HFSS and CST predict a potential operating bandwidth of roughly 4.5 GHz, whereas the measured bandwidth is about 1.2 GHz due to technical difficulties during manufacturing. The implemented UWB microwave sensing and monitoring system consists of the 12-element UWB printed sensor array, a vector network analyzer (VNA) serving as the transceiver and signal-processing unit, and a desktop PC or laptop acting as the image-processing and display unit. In practice, the reflected power collected over the whole surface of an artificial breast model is grouped into a number of pixel color classes positioned at the corresponding rows and columns (pixel numbers). The total number of power pixels used in the 2D imaging process was set to 100 (a 10x10 power-distribution grid), a choice determined by the total area of a breast phantom of average Asian breast size together with the physical dimensions of a single UWB sensor. 
Representative microwave imaging results are plotted, and some technical problems that arose in developing the breast sensing and monitoring system are examined in the paper. Keywords: UWB sensor, UWB microwave imaging, spherical array, breast cancer monitoring, 2D-medical imaging
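The pixel-mapping step described above reduces to reshaping 100 reflected-power samples into a 10x10 grid and quantizing each cell into a small number of color classes. The sketch below uses random hypothetical power values in dBm and arbitrary class boundaries, purely to illustrate the bookkeeping.

```python
import numpy as np

# Sketch of the 10x10 power-pixel image: 100 reflected-power samples
# (hypothetical values) reshaped into the grid and binned into four
# color classes for display.
rng = np.random.default_rng(42)
power_dbm = rng.uniform(-60.0, -30.0, 100)  # reflected power samples, dBm
grid = power_dbm.reshape(10, 10)            # 10x10 pixel layout
classes = np.digitize(grid, bins=[-52.5, -45.0, -37.5])  # class indices 0..3
```

Each class index would then be mapped to a display color; anomalously strong reflections cluster in adjacent pixels, which is what flags a suspect region on the 2D image.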
Procedia PDF Downloads 19410670 Improving Fingerprinting-Based Localization System Using Generative Artificial Intelligence
Authors: Getaneh Berie Tarekegn
Abstract:
A precise localization system is crucial for many artificial intelligence Internet of Things (AI-IoT) applications in the era of smart cities, including traffic monitoring, emergency alarming, environmental monitoring, location-based advertising, intelligent transportation, and smart health care. The most common way to provide continuous positioning services outdoors is a global navigation satellite system (GNSS), but due to non-line-of-sight propagation, multipath, and weather conditions, GNSS does not perform well in dense urban, urban, and suburban areas. This paper proposes a generative AI-based positioning scheme for large-scale wireless settings using fingerprinting techniques. We present a novel semi-supervised deep convolutional generative adversarial network (S-DCGAN)-based radio map construction method for real-time device localization, and we employ a reliable signal-fingerprint feature extraction method based on t-distributed stochastic neighbor embedding (t-SNE), which extracts dominant features while eliminating noise from hybrid WLAN and long-term evolution (LTE) fingerprints. The proposed scheme reduced the site-survey workload required to build the fingerprint database by up to 78.5% and significantly improved positioning accuracy. The results show that the average positioning error of GAILoc is less than 39 cm, and more than 90% of the errors are below 82 cm. These numerical results prove that, compared with traditional methods, the proposed method can significantly improve positioning performance and reduce radio map construction costs. Keywords: location-aware services, feature extraction technique, generative adversarial network, long short-term memory, support vector machine
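The two accuracy statistics the abstract reports (average error, and the fraction of errors below a threshold, i.e. a point on the error CDF) are computed as below. The error samples are hypothetical, not the paper's results.

```python
# Sketch of the reported localization statistics: mean positioning error
# and the fraction of errors under a threshold (one point on the error CDF).
# Error samples (cm) are hypothetical.
errors_cm = [12, 25, 31, 44, 58, 63, 70, 79, 85, 120]

mean_err = sum(errors_cm) / len(errors_cm)
frac_under_82 = sum(e < 82 for e in errors_cm) / len(errors_cm)
```

Reporting a CDF percentile alongside the mean is standard in fingerprinting papers because a few large outliers can dominate the mean while leaving the 90th percentile nearly unchanged.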
Procedia PDF Downloads 7110669 Experimental Research and Analyses of Yoruba Native Speakers’ Chinese Phonetic Errors
Authors: Obasa Joshua Ifeoluwa
Abstract:
Phonetics is the foundation and one of the most important parts of language learning. Through an acoustic experiment using Praat software, this article visually compares Yoruba students' pronunciation of Chinese consonants, vowels, and tones with that of native Chinese speakers. Because the article targets Yoruba native speakers learning Chinese phonetics, Yoruba students were selected. The students surveyed were required to be at an elementary level, having studied Chinese for less than six months; all are undergraduates majoring in Chinese Studies at the University of Lagos who have already learned Chinese Pinyin and are familiar with the pinyin used in the questionnaire provided. The Chinese participants selected have passed the level-two Mandarin proficiency examination, which serves as assurance that their pronunciation is standard. The study finds that, for Mandarin consonants, Yoruba students cannot distinguish the voiced/voiceless or the aspirated/unaspirated phonetic features. For instance, when pronouncing [pʰ], the spectrogram clearly shows that the voice onset time (VOT) of a Chinese speaker is longer than that of a Yoruba native speaker, meaning the Yoruba speaker is producing the unaspirated counterpart [p]. Another difficulty is pronouncing sounds such as [tʂ], [tʂʰ], [ʂ], [ʐ], [tɕ], [tɕʰ], and [ɕ], because these are absent from the phonetic system of the Yoruba language. As for vowels, some students find it difficult to pronounce the allophonic high vowels [ɿ] and [ʅ], replacing them with the phoneme [i]; another error is pronouncing [y] as [u] or, as one student's spectrogram shows, as [iu]. 
As for tone, it is most difficult for students to differentiate the second (rising) and third (falling-rising) tones, because both involve a rising pitch. The work concludes that the major errors Yoruba students make in pronouncing Chinese sounds are caused by interference from their first language (L1) and sometimes from their lingua franca. Keywords: Chinese, Yoruba, error analysis, experimental phonetics, consonant, vowel, tone
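The VOT comparison described above amounts to a simple threshold decision once the onset times have been measured in Praat. The sketch below uses an illustrative 30 ms cut-off and made-up VOT values, not thresholds or measurements from the study.

```python
# Sketch: classifying a stop as aspirated vs unaspirated from its voice
# onset time (VOT, in ms). The 30 ms cut-off and the sample values are
# illustrative only, not measurements from the study.
def is_aspirated(vot_ms, threshold_ms=30.0):
    """Long-lag VOT suggests an aspirated stop like [pʰ]; short-lag, [p]."""
    return vot_ms > threshold_ms

labels = [is_aspirated(v) for v in (75.0, 12.0)]  # long-lag vs short-lag VOT
```

In the study's terms, a Yoruba learner's [pʰ] landing in the short-lag region is what identifies it acoustically as the unaspirated [p].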
Procedia PDF Downloads 11110668 Field-Programmable Gate Array-Based Baseband Signals Generator of X-Band Transmitter for Micro Satellite/CubeSat
Authors: Shih-Ming Wang, Chun-Kai Yeh, Ming-Hwang Shie, Tai-Wei Lin, Chieh-Fu Chang
Abstract:
This paper introduces an FPGA-based baseband signal generator (BSG) for the X-band transmitter developed for earth observation by the National Space Organization (NSPO), Taiwan. To gain flexibility for various applications, a number of modulation schemes are included: QPSK, DeQPSK, and 8PSK 4D-TCM. For the micro satellite scenario, the maximum symbol rate is up to 150 Msps with an EVM as low as 1.9%; for the CubeSat scenario, the maximum symbol rate is up to 60 Msps with an EVM below 1.7%. The maximum data rates are 412.5 Mbps and 165 Mbps, respectively. In addition, a triple modular redundancy (TMR) scheme is implemented to mitigate single event effects (SEE) induced by radiation. Finally, the theoretical error performance is derived through comprehensive analysis, particularly for BERs at and well below 10⁻⁶, as required by the low error-bit tolerance of modern high-resolution earth remote-sensing instruments. Keywords: X-band transmitter, FPGA (Field-Programmable Gate Array), CubeSat, micro satellite
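The TMR mitigation mentioned above is, at its core, a bitwise majority vote over three copies of each register, which masks any single upset bit. The sketch below shows the voter logic in software form; in the actual design this is combinational logic replicated in the FPGA fabric.

```python
# Triple modular redundancy voter: bitwise majority over three register
# copies masks any single flipped bit, the mitigation used against
# radiation-induced single event effects.
def tmr_vote(a: int, b: int, c: int) -> int:
    return (a & b) | (a & c) | (b & c)

word = 0b1011_0010
upset = word ^ 0b0000_0100          # one bit flipped by a radiation event
recovered = tmr_vote(word, word, upset)  # majority restores the word
```

The scheme protects against a single upset per vote; two simultaneous upsets in the same bit position of different copies would defeat it, which is why TMR is usually paired with scrubbing.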
Procedia PDF Downloads 29510667 In-vitro Antioxidant Activity of Two Selected Herbal Medicines
Authors: S. Vinotha, I. Thabrew, S. Sri Ranjani
Abstract:
Hot aqueous and methanol extracts of two selected herbal medicines, Vellarugu Chooranam (V.C) and Amukkirai Chooranam (A.C), were examined for total phenolic and flavonoid contents and for in-vitro antioxidant activity using four different methods. The total phenolic and flavonoid contents of the methanol extract of V.C (44.41±1.26 mg GAE⁄g; 174.44±9.32 mg QE⁄g) were higher than those of the methanol extract of A.C (20.56±0.67 mg GAE⁄g; 7.21±0.85 mg QE⁄g). Hot methanol and aqueous extracts of both medicines showed low antioxidant activity in the DPPH, ABTS, and FRAP assays, and no iron-chelating activity was found at the highest concentration tested. V.C contains higher total phenolic and flavonoid contents than A.C and also exerts greater antioxidant activity, although the activities demonstrated were lower than that of the positive control Trolox. The in-vitro antioxidant activity was not correlated with the total phenolic and flavonoid contents of the methanol and aqueous extracts of either herbal medicine. Keywords: activity, different extracts, herbal medicines, in-vitro antioxidant
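Assays such as DPPH report activity as percent inhibition of the radical's absorbance. The standard formula is sketched below with hypothetical absorbance readings, not values from the study.

```python
# Standard DPPH percent-inhibition calculation (absorbance values are
# hypothetical, not the study's data):
#   inhibition (%) = (A_control - A_sample) / A_control * 100
a_control = 0.820   # DPPH absorbance without extract (illustrative)
a_sample = 0.615    # DPPH absorbance with extract (illustrative)

inhibition_pct = (a_control - a_sample) / a_control * 100
```

Measuring inhibition over a concentration series then yields the IC50 (the concentration giving 50% inhibition), which is how extracts are ranked against a standard such as Trolox.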
Procedia PDF Downloads 40510666 Groundwater Level Prediction Using hybrid Particle Swarm Optimization-Long-Short Term Memory Model and Performance Evaluation
Authors: Sneha Thakur, Sanjeev Karmakar
Abstract:
This paper proposes a hybrid Particle Swarm Optimization (PSO) and Long Short-Term Memory (LSTM) model for groundwater level prediction, with performance evaluated using root mean square error (RMSE) and mean absolute error (MAE). Accurate groundwater level forecasting is very effective for planning water harvesting and can mitigate drought and flood problems to some extent. The objective of this work is to develop a groundwater level forecasting model using a deep learning technique integrated with the PSO optimization technique, applied to 29 years of data from Chhattisgarh state, India. Precise groundwater level forecasts are important so that water resource planning and water harvesting can be managed effectively. Keywords: long short-term memory, particle swarm optimization, prediction, deep learning, groundwater level
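The two evaluation metrics named in the abstract are computed as below, here on hypothetical observed and predicted groundwater levels rather than the study's data.

```python
import math

# RMSE and MAE on hypothetical observed/predicted groundwater levels (m).
obs  = [5.2, 5.0, 4.8, 4.9, 5.1]
pred = [5.0, 5.1, 4.7, 5.0, 5.0]
n = len(obs)

rmse = math.sqrt(sum((o - p) ** 2 for o, p in zip(obs, pred)) / n)
mae  = sum(abs(o - p) for o, p in zip(obs, pred)) / n
```

RMSE penalizes large misses quadratically while MAE weights all errors equally, so RMSE ≥ MAE always holds and the gap between them signals how outlier-prone the forecasts are.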
Procedia PDF Downloads 7810665 Open Source, Open Hardware Ground Truth for Visual Odometry and Simultaneous Localization and Mapping Applications
Authors: Janusz Bedkowski, Grzegorz Kisala, Michal Wlasiuk, Piotr Pokorski
Abstract:
Ground-truth data is essential for quantitative evaluation of VO (Visual Odometry) and SLAM (Simultaneous Localization and Mapping) using metrics such as ATE (Absolute Trajectory Error) and RPE (Relative Pose Error). Many open-access data sets provide raw and ground-truth data for benchmarking. The issue arises when one would like to validate Visual Odometry and/or SLAM approaches on data captured with the device the algorithm targets, for example a mobile phone, and then disseminate that data to other researchers. For this reason, we propose an open-source, open-hardware ground-truth system that provides an accurate and precise trajectory with a 3D point cloud. It is based on a Livox Mid-360 LiDAR with a non-repetitive scanning pattern, an on-board Raspberry Pi 4B computer, a battery, and software for off-line calculations (camera-to-LiDAR calibration, LiDAR odometry, SLAM, and georeferencing). We show how this system can be used to evaluate various state-of-the-art algorithms (Stella SLAM, ORB SLAM3, DSO) in typical indoor monocular VO/SLAM. Keywords: SLAM, ground truth, navigation, LiDAR, visual odometry, mapping
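The ATE metric mentioned above is the RMSE of positional differences between time-aligned estimated and ground-truth poses. The sketch below shows that computation on tiny hypothetical 2D trajectories, omitting the rigid alignment (and timestamp association) step that a full evaluation performs first.

```python
import math

# Absolute Trajectory Error sketch: RMSE of position differences between
# time-aligned estimated and ground-truth poses (rigid alignment omitted;
# 2D points with hypothetical values).
gt  = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.1)]   # ground-truth positions (m)
est = [(0.1, 0.0), (1.0, 0.1), (2.1, 0.1)]   # estimated positions (m)

ate = math.sqrt(sum((gx - ex) ** 2 + (gy - ey) ** 2
                    for (gx, gy), (ex, ey) in zip(gt, est)) / len(gt))
```

RPE, by contrast, compares relative motions over fixed time or distance intervals, so it measures local drift independently of accumulated global error; benchmarks typically report both.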
Procedia PDF Downloads 6910664 A Study on the Impact of Artificial Intelligence on Human Society and the Necessity for Setting up the Boundaries on AI Intrusion
Authors: Swarna Pundir, Prabuddha Hans
Abstract:
As AI has already stepped into the daily life of human society, one cannot be ignorant of the data it collects and uses to provide services tailored to individuals' choices. It also assists decision-making and choice selection through calculations based on our search history. Over the past decade or so, the impact of Artificial Intelligence (AI) on society has undoubtedly been large. AI has changed the way we shop, the way we entertain and challenge ourselves, and the way information is handled, and it has automated some sections of our lives. We have answered what AI is, but not why one may see it as useful. AI is useful because it is capable of learning and predicting outcomes, using Machine Learning (ML) and Deep Learning (DL) with the help of Artificial Neural Networks (ANN). AI can also be a system that acts like a human. One of the major impacts is joblessness through automation via AI, seen mostly in manufacturing sectors, especially in routine manual and blue-collar occupations and among workers without a college degree. This raises serious concerns about AI regarding reduced employment, ethics in moral decision-making, individual privacy, human judgment, natural emotions, biased decisions, and discrimination. So the question is: if an error occurs, who will be responsible? Or will it simply be waved off as a 'machine error', with no one taking responsibility for any wrongdoing? It is essential to form rules for using AI wherever both machines and humans are involved. Procedia PDF Downloads 97