Search results for: discretization error
1485 Progression of Myopia in School Going Children During COVID Era
Authors: Sony Singh M. Optom, Vivekananda U. Warkad, Debasmita Majhi
Abstract:
Purpose: The purpose is to observe the progression of myopia in school-aged children during the COVID-19 era, when home confinement brought high exposure to screen time and fewer outdoor activities. Method: A retrospective analysis was done for all mild, moderate, and high myopic school-going children who presented to L V Prasad Eye Institute (MTC campus) from December 2019 to March 2021 with a minimum of 2 follow-ups (6-month and 1-year follow-up). The mean age was 11.47±2.73 years, the refractive error at presentation was 2.31±1.66 D in OD and 2.375±1.83 D in OS, and the mean BCVA was 0.32±0.06 in OD and 0.31±0.06 in OS. The refractive error at the last follow-up was 3.23±1.71 D in OD and 3.30±1.90 D in OS, and the mean BCVA was 0.013±0.039 in OD and 0.015±0.043 in OS. Altogether, data of 131 patients who met our inclusion and exclusion criteria were analyzed, and a questionnaire was designed regarding average screen-time exposure, for which all the parents gave feedback either face-to-face or over the phone. Mean spherical values and annual myopia progression by gender, age, severity of myopia, and interview data were analyzed with the Kruskal-Wallis and Mann-Whitney tests. Conclusion: When compared based on the severity of myopia, progression was found to be greater in emmetropes than in mild, moderate, and high myopes and was statistically significant with a p-value of <0.001. 69% of subjects who used mobile phones for more than 4 hours per day had myopia progression of 0.75 D, which was statistically significant (p-value <0.001) compared to those who did not attend online classes (myopia progression of -0.25 D).
Keywords: myopia, school going children, annual progression, COVID era
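A minimal sketch of the kind of non-parametric comparison described above, using SciPy; all group values below are illustrative placeholders, not the study's data.

```python
# Sketch: Kruskal-Wallis across myopia-severity groups and Mann-Whitney
# between two exposure groups. All numbers are hypothetical.
from scipy import stats

# Annual progression (D) per severity group -- illustrative values.
mild = [0.50, 0.75, 0.25, 0.50]
moderate = [0.75, 1.00, 0.50, 0.75]
high = [1.00, 0.75, 1.25, 1.00]

h_stat, p_kw = stats.kruskal(mild, moderate, high)
print(f"Kruskal-Wallis: H={h_stat:.3f}, p={p_kw:.4f}")

# Progression for >4 h/day phone users vs. no online classes.
heavy_screen = [0.75, 1.00, 0.75, 0.50]
no_online = [-0.25, 0.00, -0.25, 0.25]
u_stat, p_mw = stats.mannwhitneyu(heavy_screen, no_online)
print(f"Mann-Whitney: U={u_stat:.1f}, p={p_mw:.4f}")
```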
Procedia PDF Downloads 10
1484 Localization of Buried People Using Received Signal Strength Indication Measurement of Wireless Sensor
Authors: Feng Tao, Han Ye, Shaoyi Liao
Abstract:
Buildings collapse after earthquakes, and people can be buried under the ruins. Search and rescue should be conducted as soon as possible to save them. Therefore, considering the complicated environment, the irregular aftershocks, and the fact that rescue allows of no delay, a target localization method based on RSSI (Received Signal Strength Indication) is proposed in this article. Target localization technology based on RSSI, with its features of low cost and low complexity, has been widely applied to node localization in WSN (Wireless Sensor Networks). Based on the theory of RSSI transmission and the environmental impact on RSSI, this article conducts experiments in five scenes, and multiple filtering algorithms are applied to the original RSSI values in order to establish the signal propagation model with minimum test error for each scene. The target location can then be calculated, through an improved centroid algorithm, from the distances estimated with the signal propagation model. Results show that localization technology based on RSSI is suitable for large-scale node localization. Among the filtering algorithms, the mixed filtering algorithm (the average of average, median, and Gaussian filtering) performs better than any single filtering algorithm, and by using the signal propagation model, the minimum error of the distance between the known nodes and the target node across the five scenes is about 3.06 m.
Keywords: signal propagation model, centroid algorithm, localization, mixed filtering, RSSI
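A hedged sketch of the two building blocks named above: the log-distance path-loss model for turning RSSI into distance, and a weighted centroid estimate. The model constants (RSSI at 1 m, path-loss exponent) are assumed values, normally fitted per scene, and the inverse-distance weighting is one common choice rather than the paper's specific "improved" variant.

```python
import numpy as np

# Log-distance path-loss model: RSSI(d) = RSSI0 - 10*n*log10(d/d0).
# RSSI0 and the exponent n are assumed constants here.
RSSI0, N_EXP, D0 = -40.0, 2.5, 1.0

def rssi_to_distance(rssi):
    return D0 * 10 ** ((RSSI0 - rssi) / (10 * N_EXP))

def weighted_centroid(anchors, rssi_values):
    # Weight each anchor by the inverse of its estimated distance,
    # one common weighting for centroid localization.
    d = np.array([rssi_to_distance(r) for r in rssi_values])
    w = 1.0 / d
    return (anchors * w[:, None]).sum(axis=0) / w.sum()

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
rssi = [-55.0, -62.0, -60.0, -68.0]  # filtered RSSI readings (illustrative)
print("estimated target position:", weighted_centroid(anchors, rssi))
```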
Procedia PDF Downloads 306
1483 Classification of Barley Varieties by Artificial Neural Networks
Authors: Alper Taner, Yesim Benal Oztekin, Huseyin Duran
Abstract:
In this study, an Artificial Neural Network (ANN) was developed in order to classify barley varieties. For this purpose, physical properties of barley varieties were determined and ANN techniques were used. The physical properties of 8 barley varieties grown in Turkey, namely thousand kernel weight, geometric mean diameter, sphericity, kernel volume, surface area, bulk density, true density, porosity and colour parameters of grain, were determined, and it was found that these properties were statistically significant with respect to varieties. As ANN models, three models, N-1, N-2 and N-3, were constructed. The performances of these models were compared, and the best-fit model was determined to be N-1. The N-1 model was designed with an 11-neuron input layer, 2 hidden layers and 1 output layer. Thousand kernel weight, geometric mean diameter, sphericity, kernel volume, surface area, bulk density, true density, porosity and colour parameters of grain were used as input parameters, and the varieties as the output parameter. R², Root Mean Square Error and Mean Error for the N-1 model were found to be 99.99%, 0.00074 and 0.009%, respectively. All results obtained by the N-1 model were observed to be quite consistent with the real data. With this model, it would be possible to construct automation systems for classification and cleaning in flour mills.
Keywords: physical properties, artificial neural networks, barley, classification
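A minimal scikit-learn sketch of the classification setup described: 11 physical-property inputs, two hidden layers, and a variety label as output. The hidden-layer sizes and the synthetic data are assumptions for illustration only.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_features, n_varieties = 400, 11, 8  # 11 properties, 8 varieties

# Synthetic stand-in for measured physical properties (illustrative only).
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, n_varieties, size=n_samples)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Two hidden layers, mirroring the N-1 topology; sizes are assumed.
clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```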
Procedia PDF Downloads 182
1482 Design of an 80 Gbps Passive Optical Network Using Time and Wavelength Division Multiplexing
Authors: Malik Muhammad Arslan, Muneeb Ullah, Dai Shihan, Faizan Khan, Xiaodong Yang
Abstract:
Internet Service Providers face endless demands for higher bandwidth and data throughput as new services and applications require higher bandwidth, and users want immediate and accurate data delivery. This article focuses on converting old conventional networks into passive optical networks based on time-division and wavelength-division multiplexing. The main focus of this research is to use a hybrid of time-division multiplexing and wavelength-division multiplexing to improve network efficiency and performance. In this paper, we design an 80 Gbps Passive Optical Network (PON) that meets the requirements of Next Generation PON Stage 2 (NGPON2). The hybrid of time and wavelength division multiplexing (TWDM) is said to be the best solution for the implementation of NGPON2, according to the Full-Service Access Network (FSAN) group. To co-exist with or replace the current PON technologies, many wavelengths of the TWDM scheme can be implemented simultaneously. Eight pairs of wavelengths are multiplexed and transmitted over 40 km of optical fiber, and on the receiving side they are distributed among 256 users, which shows that the solution is reliable for implementation with an acceptable data rate. From the results, it can be concluded that the overall performance, quality factor, and bandwidth of the network are increased, and the bit error rate is minimized by the integration of this approach.
Keywords: bit error rate, fiber to the home, passive optical network, time and wavelength division multiplexing
Procedia PDF Downloads 73
1481 Effect of Two Radial Fins on Heat Transfer and Flow Structure in a Horizontal Annulus
Authors: Anas El Amraoui, Abdelkhalek Cheddadi, Mohammed Touhami Ouazzani
Abstract:
Laminar natural convection in a cylindrical annular cavity filled with air and provided with two fins is studied numerically using the discretization of the governing equations with the centered finite difference method based on the Alternating Direction Implicit (ADI) scheme. The fins are attached to the inner cylinder of radius ri (hot wall at temperature Ti). The outer cylinder of radius ro is maintained at a temperature To (To < Ti). Two values of the dimensionless thickness of the fins are considered: 0.015 and 0.203. We consider a low fin height equal to 0.078 and medium fin heights equal to 0.093 and 0.203. The position of the fin is 0.82π and the radius ratio is equal to 2. The effect of the Rayleigh number, Ra, on the flow structure and heat transfer is analyzed for a range of Ra from 10³ to 10⁴. The results for established flow structures and heat transfer at the low fin height indicate that the flow regime that occurs is unicellular for all Ra and fin thicknesses; in addition, the heat transfer rate increases with increasing Rayleigh number and is the same for both thicknesses. At the medium fin heights of 0.093 and 0.203, the increase of the Rayleigh number leads to transitions of the flow structure which correspond to significant variations of the heat transfer. The critical Rayleigh numbers, Rac,app and Rac,disp, corresponding to the appearance of the bicellular flow regime and its disappearance, are determined, and their influence on the change of the heat transfer rate is analyzed.
Keywords: natural convection, fins, critical Rayleigh number, heat transfer, fluid flow regime, horizontal annulus
Procedia PDF Downloads 407
1480 Impact Position Method Based on Distributed Structure Multi-Agent Coordination with JADE
Authors: YU Kaijun, Liang Dong, Zhang Yarong, Jin Zhenzhou, Yang Zhaobao
Abstract:
For the impact monitoring of distributed structures, the traditional positioning methods are based on the time difference and include the four-point arc positioning method and the triangulation positioning method. In actual operation, however, these two methods have errors. In this paper, the multi-agent blackboard coordination principle is used to combine the two methods. The fusion steps are: (1) the four-point arc locating agent calculates the initial point and records it to the blackboard module; (2) the triangulation agent gets its initial parameters by accessing the initial point; (3) the triangulation agent constantly accesses the blackboard module to update its initial parameters, and it also logs its calculated point into the blackboard; (4) when the subsequent calculation point and the initial calculation point are within the allowable error, the whole coordination fusion process is finished. This paper presents a multi-agent collaboration method whose agent framework is JADE. The JADE platform consists of several agent containers, with agents running in each container. Because of the excellent management and debugging tools of JADE, it is very convenient to deal with complex data in a large structure. Finally, based on the data in JADE, the results show that the impact location method based on multi-agent coordination fusion can reduce the error of the two methods.
Keywords: impact monitoring, structural health monitoring (SHM), multi-agent system (MAS), blackboard coordination, JADE
Procedia PDF Downloads 181
1479 Relationship between Electricity Consumption and Economic Growth: Evidence from Nigeria (1971-2012)
Authors: N. E Okoligwe, Okezie A. Ihugba
Abstract:
Few scholars disagree that electricity consumption is an important supporting factor for economic growth. However, the relationship between electricity consumption and economic growth manifests differently in different countries, according to previous studies. This paper examines the causal relationship between electricity consumption and economic growth for Nigeria. In an attempt to do this, the paper tests the validity of the modernization (or dependency) hypothesis by employing various econometric tools, such as the Augmented Dickey-Fuller (ADF) and Johansen co-integration tests, the Error Correction Mechanism (ECM), and the Granger causality test, on time series data from 1971-2012. Granger causality is found to run neither from electricity consumption to real GDP nor from GDP to electricity consumption during the years of study. The null hypothesis is accepted at the 5 per cent level of significance, where the probability values (0.2251 and 0.8251) are greater than the 5 per cent level of significance, because both variables are probably determined by other factors, such as the increase in urban population, the unemployment rate, and the number of Nigerians that benefit from the increase in GDP; likewise, the increase in electricity demand is not determined by the increase in GDP (income) over the period of study, because electricity demand has always been greater than consumption. Consequently, policy makers in Nigeria should place priority, in the early stages of reconstruction, on building capacity additions and infrastructure development of the electric power sector, as this would foster sustainable economic growth in Nigeria.
Keywords: economic growth, electricity consumption, error correction mechanism, Granger causality test
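A hedged sketch of the ADF and Granger-causality steps using statsmodels; the random series stand in for the GDP and electricity-consumption data, and the lag order is an assumption.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller, grangercausalitytests

rng = np.random.default_rng(42)
n = 42  # annual observations, 1971-2012

# Illustrative stand-ins for log real GDP and log electricity consumption.
gdp = np.cumsum(rng.normal(0.03, 0.02, n))
elec = np.cumsum(rng.normal(0.04, 0.03, n))
df = pd.DataFrame({"gdp": gdp, "elec": elec})

# Unit-root check on each series in levels.
for col in df:
    stat, pval = adfuller(df[col])[:2]
    print(f"ADF {col}: stat={stat:.3f}, p={pval:.3f}")

# Does electricity consumption Granger-cause GDP? (lag order assumed = 2)
grangercausalitytests(df[["gdp", "elec"]], maxlag=2)
```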
Procedia PDF Downloads 314
1478 Research on Pilot Sequence Design Method of Multiple Input Multiple Output Orthogonal Frequency Division Multiplexing System Based on High Power Joint Criterion
Authors: Linyu Wang, Jiahui Ma, Jianhong Xiang, Hanyu Jiang
Abstract:
For the pilot design of the sparse channel estimation model in Multiple Input Multiple Output Orthogonal Frequency Division Multiplexing (MIMO-OFDM) systems, the observation matrices constructed according to the matrix cross-correlation criterion, the total correlation criterion, and other optimization criteria are not optimal, resulting in inaccurate channel estimation and a high bit error rate at the receiver. This paper proposes a pilot design method combining the high-power sum and high-power variance criteria, which can estimate the channel more accurately. First, the pilot insertion positions are designed according to the high-power variance criterion under the condition of equal power. Then, according to the high-power sum criterion, the pilot power allocation is converted into a cone programming problem, and the power allocation is carried out. Finally, the optimal pilot is determined by calculating the weighted sum of the high-power sum and the high-power variance. Compared with the traditional pilot, under the same conditions, the constructed MIMO-OFDM system using the optimal pilot for channel estimation obtains a gain of 6-7 dB in communication bit error rate performance.
Keywords: MIMO-OFDM, pilot optimization, compressed sensing, channel estimation
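For context, a small sketch of how a pilot pattern's observation matrix is commonly scored in compressed-sensing channel estimation, via its mutual (cross-)correlation. The partial-DFT construction and all dimensions are assumptions, and this illustrates the correlation criteria the paper argues against rather than its own high-power criteria.

```python
import numpy as np

def mutual_coherence(A):
    # Largest absolute cross-correlation between distinct columns of A.
    A = A / np.linalg.norm(A, axis=0, keepdims=True)
    G = np.abs(A.conj().T @ A)
    np.fill_diagonal(G, 0.0)
    return G.max()

n_subcarriers, n_taps, n_pilots = 256, 32, 16
F = np.fft.fft(np.eye(n_subcarriers)) / np.sqrt(n_subcarriers)

rng = np.random.default_rng(1)
pilot_pos = np.sort(rng.choice(n_subcarriers, n_pilots, replace=False))

# Observation matrix: pilot rows of the DFT, first n_taps columns
# (the sparse channel impulse response is assumed shorter than n_taps).
A = F[pilot_pos, :n_taps]
print("mutual coherence of this pilot pattern:", mutual_coherence(A))
```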
Procedia PDF Downloads 155
1477 Usage of the Point Analysis Algorithm (SANN) in Drought Analysis
Authors: Khosro Shafie Motlaghi, Amir Reza Salemian
Abstract:
In arid and semi-arid regions like our country, evapotranspiration accounts for the greatest portion of the water resource. Therefore, knowledge of its changes, and of other climate parameters, plays an important role in the planning, development, and management of water resources. In this research, the long-term trends of evapotranspiration (ET0), average temperature, and monthly rainfall were tested. To do so, all synoptic stations in Iran were classified according to the Domarton climate classification. The present research was done in the semi-arid climate of Iran, in which 14 synoptic stations with 30-year periods of statistics were investigated with 3 methods: minimum square error, Mann-Kendall, and Wald-Wolfowitz. Evapotranspiration was calculated using the FAO-Penman method. The results over the statistical periods have shown that the trend of the evapotranspiration parameter is positive for 24 percent of the stations, negative for 2 percent, and without any trend for 47 percent. Similarly, the trend of the temperature parameter was positive for 22 percent of the stations, negative for 19 percent, and without any trend for 64 percent. The results of the rainfall trend have shown that the amount of rainfall at most stations did not show a meaningful trend. The results of the Mann-Kendall method were similar to those of the minimum square error method. Regarding the acquired results, we can conclude that in future years some regions will face increases in temperature and evapotranspiration.
Keywords: analysis, algorithm, SANN, ET0
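A minimal implementation sketch of the Mann-Kendall trend test mentioned above, using the standard S statistic and normal approximation (tie corrections ignored for simplicity); the sample ET0 series is illustrative.

```python
import numpy as np
from scipy import stats

def mann_kendall(x):
    x = np.asarray(x, dtype=float)
    n = len(x)
    # S statistic: sum of signs over all ordered pairs.
    s = 0.0
    for i in range(n - 1):
        s += np.sign(x[i + 1:] - x[i]).sum()
    # Variance of S under H0 (no trend), ignoring tie corrections.
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - stats.norm.cdf(abs(z)))  # two-sided p-value
    return s, z, p

# Illustrative 30-year annual ET0 series with a mild upward drift.
rng = np.random.default_rng(7)
et0 = 1400 + 2.0 * np.arange(30) + rng.normal(0, 25, 30)
print("S=%.0f, Z=%.2f, p=%.4f" % mann_kendall(et0))
```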
Procedia PDF Downloads 301
1476 Error Analysis of Pronunciation of French by Sinhala Speaking Learners
Authors: Chandeera Gunawardena
Abstract:
The present research analyzes the pronunciation errors encountered by thirty Sinhala-speaking learners of French, on the assumption that the pronunciation errors are systematic and reflect the interference of the learners' native language. The thirty participants were selected using the random sampling method. At the time of the study, the subjects were studying French as a foreign language for their Bachelor of Arts degree at the University of Kelaniya, Sri Lanka. The participants were from a homogeneous linguistic background: all spoke the same native language (Sinhala), had completed their secondary education in the Sinhala medium, and during it had also learnt French as a foreign language. A battery-operated audio tape recorder and 120-minute blank cassettes were used for recording. A list of 60 words representing all French phonemes was used to diagnose pronunciation difficulties. Before the recording process commenced, the subjects were requested to familiarize themselves with the words by reading them several times. The recording was conducted individually in a quiet classroom, and each recording took approximately fifteen minutes. Each subject was required to read at a normal speed. After the completion of the recording, the recordings were replayed to identify common errors, which were immediately transcribed using the International Phonetic Alphabet. Results show that Sinhala-speaking learners face problems with French nasal vowels and French initial consonant clusters. The learners also exhibit errors which occur because of interference from their second language (English).
Keywords: error analysis, pronunciation difficulties, pronunciation errors, Sinhala speaking learners of French
Procedia PDF Downloads 214
1475 A Comparative Study of Sampling-Based Uncertainty Propagation with First Order Error Analysis and Percentile-Based Optimization
Authors: M. Gulam Kibria, Shourav Ahmed, Kais Zaman
Abstract:
In system analysis, the information on the uncertain input variables causes uncertainty in the system responses. Different probabilistic approaches for uncertainty representation and propagation in such cases exist in the literature, and different uncertainty representation approaches result in different outputs. Some of the approaches might result in a better estimation of the system response than others. The NASA Langley Multidisciplinary Uncertainty Quantification Challenge (MUQC) has posed challenges about uncertainty quantification. Subproblem A of the challenge, the uncertainty characterization subproblem, is addressed in this study. In this subproblem, the challenge is to gather knowledge about unknown model inputs, which have inherent aleatory and epistemic uncertainties, from the responses (output) of the given computational model. We use two different methodologies to approach the problem. In the first methodology, we use sampling-based uncertainty propagation with first-order error analysis. In the other approach, we place emphasis on the use of Percentile-Based Optimization (PBO). The NASA Langley MUQC's subproblem A is developed in such a way that both aleatory and epistemic uncertainties need to be managed. The challenge problem classifies each uncertain parameter as belonging to one of the following three types: (i) an aleatory uncertainty modeled as a random variable, with a fixed functional form and known coefficients; this uncertainty cannot be reduced; (ii) an epistemic uncertainty modeled as a fixed but poorly known physical quantity that lies within a given interval; this uncertainty is reducible; (iii) a parameter might be aleatory, but sufficient data might not be available to adequately model it as a single random variable; for example, the parameters of a normal variable, e.g., the mean and standard deviation, might not be precisely known but could be assumed to lie within some intervals. This results in a distributional p-box: the physical parameter carries an aleatory uncertainty, but the parameters prescribing its mathematical model are subject to epistemic uncertainties, each being an unknown element of a known interval; this uncertainty is reducible. From the study, it is observed that, due to practical limitations or computational expense, the sampling in the sampling-based methodology is not exhaustive. That is why the sampling-based methodology has a high probability of underestimating the output bounds. Therefore, an optimization-based strategy to convert uncertainty described by interval data into a probabilistic framework is necessary. This is achieved in this study by using PBO.
Keywords: aleatory uncertainty, epistemic uncertainty, first order error analysis, uncertainty quantification, percentile-based optimization
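A small sketch contrasting the two propagation ideas named above on a toy model: a Monte Carlo sample of the output versus a first-order (Taylor) variance estimate. The model function and input distributions are assumptions for illustration.

```python
import numpy as np

def model(x1, x2):
    # Toy response function standing in for the challenge model.
    return x1 ** 2 + np.sin(x2)

rng = np.random.default_rng(3)
mu = np.array([1.0, 0.5])
sigma = np.array([0.1, 0.2])

# Sampling-based propagation.
samples = model(rng.normal(mu[0], sigma[0], 100_000),
                rng.normal(mu[1], sigma[1], 100_000))
print("Monte Carlo: mean=%.4f, std=%.4f" % (samples.mean(), samples.std()))

# First-order error analysis: Var[f] ~ sum_i (df/dx_i)^2 * sigma_i^2,
# with derivatives taken by central finite differences at the mean.
h = 1e-6
d1 = (model(mu[0] + h, mu[1]) - model(mu[0] - h, mu[1])) / (2 * h)
d2 = (model(mu[0], mu[1] + h) - model(mu[0], mu[1] - h)) / (2 * h)
std_fo = np.sqrt((d1 * sigma[0]) ** 2 + (d2 * sigma[1]) ** 2)
print("First order:  mean=%.4f, std=%.4f" % (model(*mu), std_fo))
```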
Procedia PDF Downloads 244
1474 Continuous Measurement of Spatial Exposure Based on Visual Perception in Three-Dimensional Space
Authors: Nanjiang Chen
Abstract:
Against the backdrop of expanding urban landscapes, accurately assessing spatial openness is critical. Traditional visibility analysis methods grapple with discretization errors and inefficiencies, creating a gap in truly capturing the human experience of space. Addressing these gaps, this paper introduces a distinct continuous visibility algorithm, a leap in measuring urban spaces from a human-centric perspective. This study presents a methodological breakthrough by applying this algorithm to urban visibility analysis. Unlike conventional approaches, this technique allows for a continuous range of visibility assessment, closely mirroring human visual perception. By eliminating the need for predefined subdivisions in ray casting, it offers a more accurate and efficient tool for urban planners and architects. The proposed algorithm not only reduces computational errors but also demonstrates faster processing capabilities, validated through a case study in Beijing's urban setting. Its key distinction lies in its potential to benefit a broad spectrum of stakeholders, ranging from urban developers to public policymakers, aiding in the creation of urban spaces that prioritize visual openness and quality of life. This advancement in urban analysis methods could lead to more inclusive, comfortable, and well-integrated urban environments, enhancing the spatial experience for communities worldwide.
Keywords: visual openness, spatial continuity, ray-tracing algorithms, urban computation
Procedia PDF Downloads 52
1473 In-Flight Aircraft Performance Model Enhancement Using Adaptive Lookup Tables
Authors: Georges Ghazi, Magali Gelhaye, Ruxandra Botez
Abstract:
Over the years, the Flight Management System (FMS) has experienced continuous improvement of its many features, to the point of becoming the pilot's primary interface for flight planning operations on the airplane. With the assistance of the FMS, the concepts of distance and time have been completely revolutionized, providing the crew members with the determination of the optimized route (or flight plan) from the departure airport to the arrival airport. To accomplish this function, the FMS needs an accurate Aircraft Performance Model (APM) of the aircraft. In general, the APMs that equip most modern FMSs are established before the entry into service of an individual aircraft and result from the combination of a set of ordinary differential equations and a set of performance databases. Unfortunately, an aircraft in service is constantly exposed to dynamic loads that degrade its flight characteristics. These degradations have two main origins: airframe deterioration (control surface rigging, seals missing or damaged, etc.) and engine performance degradation (fuel consumption increase for a given thrust). Thus, after several years of service, the performance databases and the APM associated with a specific aircraft are no longer representative enough of the actual aircraft performance. It is important to monitor the trend of the performance deterioration and correct the uncertainties of the aircraft model in order to improve the accuracy of the flight management system predictions. The basis of this research lies in the new ability to continuously update an Aircraft Performance Model (APM) during flight using an adaptive lookup table technique. This methodology was developed and applied to the well-known Cessna Citation X business aircraft. For the purpose of this study, a level D Research Aircraft Flight Simulator (RAFS) was used as a test aircraft; according to the Federal Aviation Administration, level D is the highest certification level for flight dynamics modeling. Basically, using data available in the Flight Crew Operating Manual (FCOM), a first APM describing the variation of the engine fan speed and aircraft fuel flow with respect to flight conditions was derived. This model was next improved using the proposed methodology. To do that, several cruise flights were performed using the RAFS. An algorithm was developed to frequently sample the aircraft sensor measurements during the flight and compare the model predictions with the actual measurements. Based on these comparisons, a correction was performed on the actual APM in order to minimize the error between the predicted data and the measured data. In this way, as the aircraft flies, the APM is continuously enhanced, making the FMS more and more precise and the prediction of trajectories more realistic and more reliable. The results obtained are very encouraging. Indeed, using the tables initialized with the FCOM data, only a few iterations were needed to reduce the fuel flow prediction error from an average relative error of 12% to 0.3%. Similarly, the FCOM prediction error regarding the engine fan speed was reduced from a maximum deviation of 5.0% to 0.2% after only ten flights.
Keywords: aircraft performance, cruise, trajectory optimization, adaptive lookup tables, Cessna Citation X
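A hedged sketch of the adaptive-lookup-table idea: a fuel-flow table indexed by flight condition is nudged toward in-flight measurements each time a sample arrives. The grid values, learning rate, and nearest-node update rule are all assumptions; the abstract does not specify the paper's exact correction scheme.

```python
import numpy as np

# Fuel-flow table indexed by (altitude, Mach); seeded as if from the FCOM.
alts = np.array([30000.0, 35000.0, 40000.0])       # ft
machs = np.array([0.70, 0.75, 0.80])
table = np.array([[1400.0, 1350.0, 1320.0],
                  [1300.0, 1260.0, 1240.0],
                  [1220.0, 1190.0, 1170.0]])        # kg/h (illustrative)

def predict(alt, mach):
    # Nearest-node lookup; a real FMS would interpolate bilinearly.
    i = np.abs(alts - alt).argmin()
    j = np.abs(machs - mach).argmin()
    return table[i, j], (i, j)

def update(alt, mach, measured, lr=0.2):
    # Move the nearest node a fraction of the observed prediction error.
    pred, (i, j) = predict(alt, mach)
    table[i, j] += lr * (measured - pred)

# Simulated cruise samples: true fuel flow is 3% above the initial table.
for _ in range(30):
    update(35000.0, 0.75, measured=1260.0 * 1.03)
print("corrected fuel flow estimate:", predict(35000.0, 0.75)[0])
```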
Procedia PDF Downloads 267
1472 An Application of Vector Error Correction Model to Assess Financial Innovation Impact on Economic Growth of Bangladesh
Authors: Md. Qamruzzaman, Wei Jianguo
Abstract:
Over the decades, it has been observed that financial development, through financial innovation, has not only accelerated the development of an efficient and effective financial system but also acted as a catalyst in the economic development process. In this study, we try to explore how financial innovation causes economic growth in Bangladesh by using a Vector Error Correction Model (VECM) for the period 1990-2014. The cointegration test confirms the existence of a long-run association between financial innovation and economic growth. To investigate directional causality, we apply the Granger causality test, and the estimation shows that long-run growth is affected by capital flow from non-bank financial institutions and by inflation in the economy, but changes in the growth rate have no impact on capital flow in the economy or on the level of inflation in the long run; whereas growth and market capitalization, as well as market capitalization and capital flow, confirm the feedback hypothesis. Variance decomposition suggests that any innovation in the financial sector can cause GDP fluctuations in both the long run and the short run. Financial innovation promotes efficiency and lowers the cost of financial transactions in the financial system and can boost the economic development process. The study proposes two policy recommendations for further development. First, an innovation-friendly financial policy should be formulated to encourage the adoption and diffusion of financial innovation in the financial system. Second, the operation of the financial market and capital market should be regulated, with the implementation of rules and regulations, to create a conducive environment.
Keywords: financial innovation, economic growth, GDP, financial institution, VECM
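A hedged statsmodels sketch of the VECM workflow the abstract describes: a Johansen cointegration test followed by fitting a VECM. The three simulated series stand in for GDP, capital flow, and inflation, and the lag and rank choices are assumptions.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM

rng = np.random.default_rng(5)
n = 100
common = np.cumsum(rng.normal(size=n))  # shared stochastic trend

# Illustrative cointegrated stand-ins for GDP, capital flow, inflation.
data = pd.DataFrame({
    "gdp": common + rng.normal(scale=0.5, size=n),
    "capital_flow": 0.8 * common + rng.normal(scale=0.5, size=n),
    "inflation": 0.3 * common + rng.normal(scale=0.5, size=n),
})

# Johansen test for the number of cointegrating relations.
jres = coint_johansen(data, det_order=0, k_ar_diff=1)
print("trace statistics:", jres.lr1)
print("95% critical values:", jres.cvt[:, 1])

# Fit a VECM with one cointegrating relation (rank assumed from the test).
vecm = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="co").fit()
print(vecm.summary())
```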
Procedia PDF Downloads 275
1471 Neural Network Models for Actual Cost and Actual Duration Estimation in Construction Projects: Findings from Greece
Authors: Panagiotis Karadimos, Leonidas Anthopoulos
Abstract:
Predicting the actual cost and duration of construction projects is a continuous and existing problem for the construction sector. This paper addresses this problem with modern methods and data available from past public construction projects. 39 bridge projects constructed in Greece, with a similar type of available data, were examined. Considering each project's attributes together with the actual cost and the actual duration, correlation analysis is performed and the most appropriate predictive project variables are defined. Additionally, the most efficient subgroup of variables is selected with the use of the WEKA application, through its attribute selection function. The selected variables are then used as input neurons for the neural network models. For constructing the neural network models, the application FANN Tool is used. The optimum neural network model for predicting the actual cost produced a mean squared error of 3.84886e-05 and was based on the budgeted cost and the quantity of deck concrete. The optimum neural network model for predicting the actual duration produced a mean squared error of 5.89463e-05 and was also based on the budgeted cost and the amount of deck concrete.
Keywords: actual cost and duration, attribute selection, bridge construction, neural networks, predicting models, FANN TOOL, WEKA
Procedia PDF Downloads 137
1470 On Influence of Web Openings Presence on Structural Performance of Steel and Concrete Beams
Authors: Jakub Bartus, Jaroslav Odrobinak
Abstract:
In general, composite steel and concrete structures present an effective structural solution utilizing the full potential of both materials. As they have numerous advantages on the construction side, they can greatly reduce the overall cost of construction, which has been the main objective of the last decade, highlighted by the current economic and social crisis. The study represents not only an analysis of the behavior of composite beams having web openings but emphasizes the influence of these openings on the total strain distribution at the level of the steel bottom flange as well. The major investigation was focused on the change in structural performance with respect to various layouts of openings. In examining this structural modification, an improvement of the load-carrying capacity of composite beams was a prime objective. The study is divided into analytical and numerical parts. The analytical part served as an initial step in the design process of the composite beam samples, in which optimal dimensions and specific levels of utilization in individual stress states were taken into account. The numerical part covered the discretization of the structural problem under consideration in the form of a finite element (FE) model using beam and shell elements accounting for material non-linearities. As an outcome, several conclusions were drawn describing and explaining the effect of the presence of web openings on the structural performance of composite beams.
Keywords: beam, steel flange, total strain, web opening
Procedia PDF Downloads 82
1469 A Comparative Study of Optimization Techniques and Models for Forecasting Dengue Fever
Abstract:
Dengue is a serious public health issue that causes significant annual economic and welfare burdens on nations. However, enhanced optimization techniques and quantitative modeling approaches can predict the incidence of dengue. By advocating for a data-driven approach, public health officials can make informed decisions, thereby improving the overall effectiveness of sudden disease outbreak control efforts. The National Oceanic and Atmospheric Administration and the Centers for Disease Control and Prevention are the two U.S. Federal Government agencies from which this study uses environmental data. Based on environmental data that describe changes in temperature, precipitation, vegetation, and other factors known to affect dengue incidence, several predictive models are constructed that use different machine learning methods to estimate weekly dengue cases. The first step involves preparing the data, which includes handling outliers and missing values to make sure the data is ready for subsequent processing and the creation of an accurate forecasting model. In the second phase, multiple feature selection procedures are applied using various machine learning models and optimization techniques. In the third phase of the research, machine learning models like the Huber Regressor, Support Vector Machine, Gradient Boosting Regressor (GBR), and Support Vector Regressor (SVR) are compared with several optimization techniques for feature selection, such as Harmony Search and the Genetic Algorithm. In the fourth stage, the model's performance is evaluated using Mean Square Error (MSE), Mean Absolute Error (MAE), and Root Mean Square Error (RMSE) as metrics. The goal is to select an optimization strategy with the fewest errors, lowest cost, greatest productivity, or maximum potential results; optimization is widely employed in a variety of fields, including engineering, science, management, mathematics, finance, and medicine. An effective optimization method based on harmony search and an integrated genetic algorithm is introduced for input feature selection, and it shows a significant improvement in the model's predictive accuracy. The predictive models with the Huber Regressor as the foundation perform the best for optimization and prediction.
Keywords: deep learning model, dengue fever, prediction, optimization
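A compact scikit-learn sketch of the model-comparison step: the regressors named above evaluated with MSE, MAE, and RMSE on a synthetic stand-in for the environmental features; the data and hyperparameters are assumptions.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import HuberRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, mean_absolute_error

# Synthetic stand-in for weekly environmental features -> dengue cases.
X, y = make_regression(n_samples=300, n_features=8, noise=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "Huber": HuberRegressor(),
    "GBR": GradientBoostingRegressor(random_state=0),
    "SVR": SVR(kernel="rbf", C=100),
}
for name, m in models.items():
    pred = m.fit(X_tr, y_tr).predict(X_te)
    mse = mean_squared_error(y_te, pred)
    mae = mean_absolute_error(y_te, pred)
    print(f"{name}: MSE={mse:.1f}  MAE={mae:.1f}  RMSE={np.sqrt(mse):.1f}")
```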
Procedia PDF Downloads 70
1468 Performance Analysis of Geophysical Database Referenced Navigation: The Combination of Gravity Gradient and Terrain Using Extended Kalman Filter
Authors: Jisun Lee, Jay Hyoun Kwon
Abstract:
As an alternative way to compensate for the INS (inertial navigation system) error in a non-GNSS (Global Navigation Satellite System) environment, geophysical database referenced navigation is being studied. In this study, both gravity gradient and terrain data were combined to complement the weakness of a single type of geophysical data as well as to improve the stability of the positioning. The main process to compensate the INS error using a geophysical database was constructed on the basis of the EKF (Extended Kalman Filter). In detail, two types of combination methods, the centralized and the decentralized filter, were applied to check the pros and cons of each algorithm and to find more robust results. The performance of each navigation algorithm was evaluated by simulation, supposing that the aircraft flies with a precise geophysical database and sensors above nine different trajectories. In particular, the results were compared to the ones from navigation referenced to a single geophysical database to check the improvement due to the combination of the heterogeneous geophysical databases. It was found that the overall navigation performance was improved, but not all trajectories generated better navigation results through the combination of gravity gradient and terrain data. It was also found that the centralized filter generally showed more stable results. This is because the weight allocation for the decentralized filter could not be optimized due to the local inconsistency of the geophysical data. In the future, switching between geophysical data sources or combining different navigation algorithms will be necessary to obtain more robust navigation results.
Keywords: Extended Kalman Filter, geophysical database referenced navigation, gravity gradient, terrain
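A bare-bones sketch of the EKF measurement update underlying this kind of map-referenced navigation: the predicted INS position is corrected by comparing a map lookup (here a toy terrain function) with the measured value. The map, linearization, and noise levels are all assumptions.

```python
import numpy as np

# Toy terrain map h(x, y) and its analytic gradient (stands in for the
# terrain / gravity-gradient database).
def terrain(p):
    return 100.0 + 5.0 * np.sin(0.01 * p[0]) + 3.0 * np.cos(0.02 * p[1])

def terrain_grad(p):
    return np.array([0.05 * np.cos(0.01 * p[0]), -0.06 * np.sin(0.02 * p[1])])

x = np.array([1000.0, 2000.0])      # predicted horizontal position (INS)
P = np.diag([400.0, 400.0])         # position error covariance, m^2
R = 1.0                             # measurement noise variance, m^2

true_pos = np.array([1012.0, 1990.0])
z = terrain(true_pos)               # measured terrain height

# EKF measurement update: H is the map gradient at the predicted position.
H = terrain_grad(x).reshape(1, 2)
S = H @ P @ H.T + R
K = P @ H.T / S                     # Kalman gain (scalar innovation)
x = x + (K * (z - terrain(x))).ravel()
P = (np.eye(2) - K @ H) @ P
print("corrected position:", x)
```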
Procedia PDF Downloads 352
1467 Analysis of Spectral Radiative Entropy Generation in a Non-Gray Participating Medium with Heat Source (Furnaces)
Authors: Asadollah Bahrami
Abstract:
In the present study, spectral radiative entropy generation is analyzed in a furnace filled with a mixture of H₂O, CO₂ and soot at radiative equilibrium. For the angular and spatial discretization of the radiative transfer equation and the radiative entropy generation equations, the discrete ordinates method and the finite volume method are used, respectively. Spectral radiative properties are obtained using the correlated-k (CK) non-gray model with updated parameters based on the HITEMP2010 high-resolution database. In order to evaluate the effects of the location of the heat source, the boundary conditions and the wall emissivity on radiative entropy generation, five cases with different conditions are considered. The spectral and total radiative entropy generation in the system are calculated for all cases, the effects of the mentioned parameters on radiative entropy generation are carefully analyzed, and finally the optimum condition is presented. The most important results can be stated as follows: the wall emissivity has a considerable effect on the radiative entropy generation; irreversible radiative transfer at the wall with lower temperatures is the main source of radiative entropy generation in furnaces; and the effect of the location of the heat source on total radiative entropy generation is smaller than that of the other factors. Eventually, it can be said that characterizing the effective parameters of radiative entropy generation provides an approach to minimizing the radiative entropy generation and enhancing the furnace's practical performance.
Keywords: spectral radiative entropy generation, non-gray medium, correlated k(CK) model, heat source
Procedia PDF Downloads 111
1466 Novel Animal Drawn Wheel-Axle Mechanism Actuated Knapsack Boom Sprayer
Authors: Ibrahim O. Abdulmalik, Michael C. Amonye, Mahdi Makoyo
Abstract:
The manual knapsack sprayer is the most popular means of farm spraying in Nigeria, but it has its limitations. Apart from the human fatigue, which leads to unsteady walking steps, its field capacity is small: knapsack sprayers barely cover about 0.2 hectare per hour, and their small swath implies that a sizeable farm would take several days to cover. Weather changes are erratic, and often it is desired to spray a large farm within hours or a few days for an even effect and uniformity, and to avoid adverse weather interference. It is also often required that a large farm be covered within a short period to avoid the re-emergence of weeds before crop emergence. Deployment of many knapsack operators to large farms has not been successful: human error in taking equally spaced swaths usually results in overdosage at overlaps and in unsprayed areas due to errors at edge overlaps. Large farm spraying therefore requires boom equipment with a larger swath, which reduces errors from swath overlaps and enables spraying within the shortest possible time. Tractor boom sprayers would readily overcome these problems and achieve greater coverage, but they are not available in the country. Tractor hire for cultivation is very costly, with the attendant lack of spare parts and specialized maintenance technicians, wherefore farmers find it difficult to engage tractors for cultivation and would avoid considering the employment of a tractor boom sprayer. Animal traction in farming is predominant in Nigeria, especially in the northern part of the country, and the development of boom sprayers drawn by work animals implies the maximization of animal utilization in farming. The Hydraulic Equipment Development Institute, Kano, in keeping with its mandate of targeted R&D in hydraulic and pneumatic systems, has developed an animal-drawn knapsack boom sprayer with four nozzles, using the axle mechanism of a two-wheeled cart to actuate the piston pumps of two knapsack sprayers, in line with the appropriate technology demand of the country. It is hoped that the introduction of this novel contrivance will enhance crop protection practice and lead to greater crop and food production in Nigeria.
Keywords: boom, knapsack, farm, sprayer, wheel axle
Procedia PDF Downloads 287
1465 The Influence of Different Flux Patterns on Magnetic Losses in Electric Machine Cores
Authors: Natheer Alatawneh
Abstract:
The finite element analysis of magnetic fields in electromagnetic devices shows that machine cores experience different flux patterns, including alternating and rotating fields. The rotating fields are generated in different configurations, ranging between circular and elliptical with different ratios between the major and minor axes of the flux locus. Experimental measurements on electrical steel exposed to different flux patterns disclose different magnetic losses in the samples under test. Consequently, electric machines require special attention during the core loss calculation process to account for the flux patterns. In this study, a circular rotational single sheet tester is employed to measure the core losses in an electrical steel sample of M36G29. The sample was exposed to an alternating field, a circular field, and elliptical fields with axis ratios of 0.2, 0.4, 0.6 and 0.8. The measured data were applied to a 6-4 switched reluctance motor at three different frequencies of interest to the industry: 60 Hz, 400 Hz, and 1 kHz. The results disclose the high margin of error that may occur during loss calculations if the flux pattern issue is neglected. The error in different parts of the machine associated with neglecting the flux patterns can be around 50%, 10%, and 2% at 60 Hz, 400 Hz, and 1 kHz, respectively. Future work will focus on the optimization of the machine's geometrical shape, which has a primary effect on the flux pattern, in order to minimize the magnetic losses in machine cores.
Keywords: alternating core losses, electric machines, finite element analysis, rotational core losses
Procedia PDF Downloads 254
1464 Video Compression Using Contourlet Transform
Authors: Delara Kazempour, Mashallah Abasi Dezfuli, Reza Javidan
Abstract:
Video compression is used for channels with limited bandwidth and for storage devices with limited capacity. One of the most popular approaches in video compression is the usage of different transforms. The discrete cosine transform is one video compression method, but it has some problems, such as blocking, noise, and high distortion, with an inappropriate effect on the compression ratio. The wavelet transform is another approach that is better than cosine transforms at balancing compression and quality, but its ability to recognize curve curvature is limited. Because of the importance of compression and the problems of the cosine and wavelet transforms, the contourlet transform has become popular in video compression. In the newly proposed method, we used the contourlet transform for video image compression. The contourlet transform can preserve details of the image better than the previous transforms because it is multi-scale and oriented, and it can recognize discontinuities such as edges; in this approach, less data is lost than in previous approaches. The contourlet transform captures discrete-space structure and is useful for representing two-dimensional smooth images. This transform produces compressed images with a high compression ratio along with texture and edge preservation. Finally, the results show that for the majority of the images, the mean square error and maximum signal-to-noise ratio of the new contourlet-based method are improved compared to the wavelet transform, but for most of the images, the mean square error and maximum signal-to-noise ratio of the cosine transform are better than those of the method based on the contourlet transform.
Keywords: video compression, contourlet transform, discrete cosine transform, wavelet transform
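For reference, a short sketch of the two quality metrics used in the comparison, MSE and peak signal-to-noise ratio, computed between an original and a reconstructed frame; the random frames are placeholders for actual compressed video.

```python
import numpy as np

def mse(a, b):
    return np.mean((a.astype(float) - b.astype(float)) ** 2)

def psnr(a, b, peak=255.0):
    m = mse(a, b)
    return np.inf if m == 0 else 10 * np.log10(peak ** 2 / m)

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
# Stand-in for a compress/decompress round trip: small reconstruction error.
recon = np.clip(frame + rng.normal(0, 3, frame.shape), 0, 255).astype(np.uint8)

print(f"MSE  = {mse(frame, recon):.2f}")
print(f"PSNR = {psnr(frame, recon):.2f} dB")
```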
Procedia PDF Downloads 448
1463 Prediction of PM₂.₅ Concentration in Ulaanbaatar with Deep Learning Models
Authors: Suriya
Abstract:
Rapid socio-economic development and urbanization have led to an increasingly serious air pollution problem in Ulaanbaatar (UB), the capital of Mongolia. PM₂.₅ pollution has become the most pressing aspect of UB air pollution. Therefore, monitoring and predicting the PM₂.₅ concentration in UB is of great significance for the health of the local people and for environmental management. As of yet, very few studies have used models to predict PM₂.₅ concentrations in UB. Using data from 0:00 on June 1, 2018, to 23:00 on April 30, 2020, we proposed two deep learning models based on a Bayesian-optimized LSTM (Bayes-LSTM) and a CNN-LSTM. We utilized hourly observed data, including Himawari-8 (H8) aerosol optical depth (AOD), meteorology, and PM₂.₅ concentration, as input for the prediction of PM₂.₅ concentrations. The correlation strengths between meteorology, AOD, and PM₂.₅ were analyzed using the gray correlation analysis method; the performance improvement obtained by using AOD as an input was tested; and the performance of these models was evaluated using the mean absolute error (MAE) and the root mean square error (RMSE). The prediction accuracies of the Bayes-LSTM and CNN-LSTM deep learning models were both improved when AOD was included as an input parameter. The improvement of the prediction accuracy of the CNN-LSTM model was particularly pronounced in the non-heating season; in the heating season, the prediction accuracy of the Bayes-LSTM model slightly improved, while the prediction accuracy of the CNN-LSTM model slightly decreased. We propose two novel deep learning models for PM₂.₅ concentration prediction in UB, the Bayes-LSTM and the CNN-LSTM, pioneering the use of AOD data from H8 and demonstrating that including AOD input data improves the performance of both proposed models.
Keywords: deep learning, AOD, PM2.5, prediction, Ulaanbaatar
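A minimal Keras sketch of a CNN-LSTM of the kind described: hourly windows of meteorological, AOD, and PM₂.₅ features feeding a Conv1D layer, then an LSTM, then a dense output. The window length, layer sizes, and synthetic data are assumptions, not the paper's architecture.

```python
import numpy as np
from tensorflow.keras import layers, models

window, n_features = 24, 6      # 24 hourly steps; met + AOD + PM2.5 features
rng = np.random.default_rng(0)
X = rng.normal(size=(500, window, n_features)).astype("float32")
y = rng.normal(size=(500, 1)).astype("float32")   # next-hour PM2.5 (scaled)

model = models.Sequential([
    layers.Input(shape=(window, n_features)),
    layers.Conv1D(32, kernel_size=3, activation="relu"),  # local patterns
    layers.MaxPooling1D(2),
    layers.LSTM(32),                                      # temporal memory
    layers.Dense(1),                                      # PM2.5 estimate
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [MSE, MAE]
```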
Procedia PDF Downloads 52
1462 Track and Trace Solution on Land Certificate Production: Indonesian Land Certificate
Authors: Adrian Rifqi, Febe Napitupulu, Erdi Hermawan, Edwin Putra, Yang Leprilian
Abstract:
This article focuses on the implementation of an improved production process for the Indonesian land certificate product, which is printed by Perum Peruri, a state-owned enterprise. Based on the data obtained, there were several complaints from customers about the 2019 land certificate production, and these complaints became a negative mark with Perum Peruri's loyal customers. Almost all the complaints referred to 'defective printouts and differences between the products in the packaging and the packaging labels, both in terms of type and quantity'. To overcome this problem, we intend to improve the production process, focusing on the complaint that 'there is a difference between the products in the packaging and the packaging labels'. The improvements to the land certificate production process rely on weighing-scale technology and a QR code on the packaging label. In addition, using the QR code on the packaging label will facilitate the process of tracking product data. With this method, we hope to reduce to 0% the error rate between the products in the packaging and the packaging label, in terms of quantity, type, and product number on the land certificate, as well as the error rate of sending land certificates, which are dispatched to many places. With this solution, we also hope to obtain precise data and real-time reports on the production of land certificates in the near future, so that track and trace can be implemented as the solution for land certificate production.
Keywords: land certificates, QR code, track and trace, packaging
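A small sketch of generating a packaging-label QR code with the Python qrcode library; the payload fields (batch ID, certificate type, quantity) are hypothetical examples of what such a label might encode.

```python
import json
import qrcode

# Hypothetical label payload: what the packaging label claims to contain.
payload = {
    "batch_id": "BATCH-0001",
    "certificate_type": "land",
    "quantity": 250,
}

# Encode the payload as JSON inside the QR code and save the label image.
img = qrcode.make(json.dumps(payload))
img.save("packaging_label.png")

# At a verification station, the decoded payload can then be compared with
# the scanned contents of the package to catch type/quantity mismatches.
```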
Procedia PDF Downloads 164
1461 Perfectly Matched Layer Boundary Stabilized Using Multiaxial Stretching Functions
Authors: Adriano Trono, Federico Pinto, Diego Turello, Marcelo A. Ceballos
Abstract:
Numerical modeling of dynamic soil-structure interaction problems requires an adequate representation of the unbounded characteristics of the ground, the material non-linearity of soils, and geometrical non-linearities such as large displacements due to rocking of the structure. In order to account for these effects simultaneously, it is often required that the equations of motion be solved in the time domain. However, boundary conditions in conventional finite element codes generally present shortcomings in fully absorbing the energy of outgoing waves. In this sense, the Perfectly Matched Layer (PML) technique allows a satisfactory absorption of inclined body waves, as well as surface waves. However, the PML domain is inherently unstable, meaning that its instability does not depend upon the discretization considered. One way to stabilize the PML domain is to use multiaxial stretching functions. This development is questionable because some Jacobian terms of the coordinate transformation are not accounted for, which is why the resulting absorbing layer element is often referred to as the "uncorrected M-PML" in the literature. In this work, the strong formulation of the "corrected M-PML" absorbing layer is proposed using multiaxial stretching functions that incorporate all terms of the coordinate transformation. The results of the stable model are compared with reference solutions obtained from extended domain models.
Keywords: mixed finite elements, multiaxial stretching functions, perfectly matched layer, soil-structure interaction
Procedia PDF Downloads 76
1460 Pilot-Assisted Direct-Current Biased Optical Orthogonal Frequency Division Multiplexing Visible Light Communication System
Authors: Ayad A. Abdulkafi, Shahir F. Nawaf, Mohammed K. Hussein, Ibrahim K. Sileh, Fouad A. Abdulkafi
Abstract:
Visible light communication (VLC) is a new approach to optical wireless communication proposed to relieve the congested radio frequency (RF) spectrum. VLC systems are combined with orthogonal frequency division multiplexing (OFDM) to achieve high-rate transmission and high spectral efficiency. In this paper, we investigate Pilot-Assisted Channel Estimation for DC-biased Optical OFDM (PACE-DCO-OFDM) systems to reduce the effects of distortion on the transmitted signal. Least-squares (LS) and linear minimum mean-squared error (LMMSE) estimators are implemented in MATLAB/Simulink to enhance the bit-error-rate (BER) of PACE-DCO-OFDM. Results show that the DCO-OFDM system based on the PACE scheme achieves better BER performance compared to the conventional system without pilot-assisted channel estimation. Simulation results show that the proposed PACE-DCO-OFDM based on the LMMSE algorithm can estimate the channel more accurately and achieves better BER performance when compared to the LS-based PACE-DCO-OFDM and the traditional system without PACE. For the same signal-to-noise ratio (SNR) of 25 dB, the achieved BER is about 5×10⁻⁴ for LMMSE-PACE and 4.2×10⁻³ with LS-PACE, while it is about 2×10⁻¹ for the system without the PACE scheme.
Keywords: channel estimation, OFDM, pilot-assist, VLC
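A numpy sketch contrasting the two pilot-based estimators named above on a toy frequency-domain channel: LS simply divides received pilots by transmitted pilots, while LMMSE additionally uses an (assumed known) channel correlation matrix and noise variance to smooth the LS estimate. Dimensions and statistics are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n_pilots, noise_var = 16, 0.05

# Toy correlated frequency-domain channel across pilot subcarriers.
idx = np.arange(n_pilots)
R = np.exp(-0.3 * np.abs(idx[:, None] - idx[None, :]))  # assumed covariance
L = np.linalg.cholesky(R)
h = L @ (rng.normal(size=n_pilots) + 1j * rng.normal(size=n_pilots)) / np.sqrt(2)

x = np.ones(n_pilots)                                   # unit-power pilots
y = x * h + np.sqrt(noise_var / 2) * (rng.normal(size=n_pilots)
                                      + 1j * rng.normal(size=n_pilots))

h_ls = y / x                                            # LS estimate
# LMMSE: W = R (R + sigma^2 I)^(-1), applied to the LS estimate.
W = R @ np.linalg.inv(R + noise_var * np.eye(n_pilots))
h_lmmse = W @ h_ls

print("LS    MSE:", np.mean(np.abs(h - h_ls) ** 2))
print("LMMSE MSE:", np.mean(np.abs(h - h_lmmse) ** 2))
```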
Procedia PDF Downloads 185
1459 High Accuracy Analytic Approximations for Modified Bessel Functions I₀(x)
Authors: Pablo Martin, Jorge Olivares, Fernando Maass
Abstract:
A method to obtain analytic approximations for special functions of interest in engineering and physics is described here. Each approximate function is valid for every positive value of the variable, and the accuracy is high and increases with the number of parameters to determine. The general technique is shown through an application to the modified Bessel function of order zero, I₀(x). The form of the approximation and the calculation of the parameters are performed with the simultaneous use of the power series and the asymptotic expansion. As in the Padé method, rational functions are used, but now they are combined with other elementary functions, such as fractional powers, hyperbolic, trigonometric and exponential functions, and others. The elementary function is determined by considering that the approximate function should be a bridge between the power series and the asymptotic expansion. In the case of the I₀(x) function, two analytic approximations have already been determined. The simplest one is (1+x²/4)⁻¹/⁴(1+0.24273x²) cosh(x)/(1+0.43023x²). Its parameters were determined using the leading term of the asymptotic expansion and two coefficients of the power series, and the maximum relative error is 0.05. In a second case, two terms of the asymptotic expansion and 4 coefficients of the power series were used, and the maximum relative error is 0.001 at x≈9.5. Approximations with much higher accuracy will also be shown. In conclusion, a new technique is described to obtain analytic approximations to some functions of interest in the sciences, such that they have high accuracy, they are valid for every positive value of the variable, they can be integrated and differentiated as usual functions, and furthermore they can be calculated easily, even with a regular pocket calculator.
Keywords: analytic approximations, mathematical-physics applications, quasi-rational functions, special functions
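A quick numerical check of the simplest approximation quoted in the abstract against SciPy's reference implementation of I₀(x); only the formula itself comes from the text, and the test grid is arbitrary.

```python
import numpy as np
from scipy.special import i0

def i0_approx(x):
    # (1 + x^2/4)^(-1/4) * (1 + 0.24273 x^2) * cosh(x) / (1 + 0.43023 x^2)
    x = np.asarray(x, dtype=float)
    return ((1 + x**2 / 4) ** -0.25 * (1 + 0.24273 * x**2)
            * np.cosh(x) / (1 + 0.43023 * x**2))

x = np.linspace(0.01, 20, 2000)
rel_err = np.abs(i0_approx(x) - i0(x)) / i0(x)
print("max relative error on (0, 20]: %.4f at x = %.2f"
      % (rel_err.max(), x[rel_err.argmax()]))
```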
Procedia PDF Downloads 254
1458 Monte Carlo Estimation of Heteroscedasticity and Periodicity Effects in a Panel Data Regression Model
Authors: Nureni O. Adeboye, Dawud A. Agunbiade
Abstract:
This research attempts to investigate the effects of heteroscedasticity and periodicity in a Panel Data Regression Model (PDRM) by extending previous works on balanced panel data estimation within the context of fitting a PDRM for banks' audit fees. The estimation of such a model was achieved through the derivation of a joint Lagrange Multiplier (LM) test for homoscedasticity and zero serial correlation, a conditional LM test for zero serial correlation given heteroscedasticity of varying degrees, and a conditional LM test for homoscedasticity given first-order positive serial correlation, via a two-way error component model. Monte Carlo simulations were carried out for 81 different variations, whose design assumed a uniform distribution under a linear heteroscedasticity function. Each of the variations was iterated 1000 times, and the assessment of the three estimators considered is based on the variance, absolute bias (ABIAS), mean square error (MSE) and root mean square error (RMSE) of the parameter estimates. Eighteen different models at different specified conditions were fitted, and the best-fitted model is that of the within estimator when heteroscedasticity is severe at either zero or positive serial correlation values. The LM test results showed that the tests have good size and power, as all three tests are significant at 5% for the specified linear form of the heteroscedasticity function, which establishes the fact that banks' operations are severely heteroscedastic in nature, with little or no periodicity effects.
Keywords: audit fee, heteroscedasticity, Lagrange multiplier test, Monte-Carlo scheme, periodicity
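A simplified Monte Carlo sketch of checking an LM test's size, using the Breusch-Pagan heteroscedasticity test from statsmodels on pooled data as a stand-in for the panel-specific joint tests derived in the paper; the data-generating process is an assumption.

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(11)
n, reps, alpha = 200, 1000, 0.05

rejections = 0
for _ in range(reps):
    x = rng.normal(size=(n, 1))
    X = sm.add_constant(x)
    y = X @ np.array([1.0, 0.5]) + rng.normal(size=n)  # homoscedastic H0
    resid = sm.OLS(y, X).fit().resid
    lm_stat, lm_pval, _, _ = het_breuschpagan(resid, X)
    rejections += lm_pval < alpha

# Empirical size should be near the nominal 5% under H0.
print("empirical size:", rejections / reps)
```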
Procedia PDF Downloads 145
1457 Knowledge-Attitude-Practice Survey Regarding High Alert Medication in a Teaching Hospital in Eastern India
Authors: D. S. Chakraborty, S. Ghosh, A. Hazra
Abstract:
Objective: Medication errors are a reality in all settings where medicines are prescribed, dispensed and used. High Alert Medications (HAM) are those that bear a heightened risk of causing significant patient harm when used in error. We conducted a knowledge-attitude-practice survey among residents working in a teaching hospital to assess the ground situation with regard to the handling of HAM. Methods: We plan to approach 242 residents, among the approximately 600 currently working in the hospital, through purposive sampling. Residents in all disciplines (clinical, paraclinical and preclinical) are being targeted. A structured questionnaire, pretested on 5 volunteer residents, is being used for data collection. The questionnaire is being administered to residents individually through face-to-face interviews, by two raters, while the residents are on duty but not during rush hours. Results: Of the 156 residents approached so far, data from 140 have been analyzed, the rest having refused participation. Although background knowledge exists for the majority of respondents, awareness levels regarding HAM are moderate, and attitudes are non-uniform. The number of respondents able to correctly identify most (>80%) HAM in three common settings (accident and emergency, obstetrics, and the intensive care unit) is less than 70%. Several potential errors in practice have been identified. The study is ongoing. Conclusions: The situation requires corrective action. There is an urgent need to improve awareness regarding HAM for the sake of patient safety. The pharmacology department can take the lead in designing an awareness campaign with support from the hospital administration.
Keywords: high alert medication, medication error, questionnaire, resident
Procedia PDF Downloads 132
1456 Estimation of Maize Yield by Using a Process-Based Model and Remote Sensing Data in the Northeast China Plain
Authors: Jia Zhang, Fengmei Yao, Yanjing Tan
Abstract:
The accurate estimation of crop yield is of great importance for food security. In this study, a process-based mechanistic model was modified to estimate the yield of a C4 crop by modifying the carbon metabolic pathway in the photosynthesis sub-module of the RS-P-YEC (Remote-Sensing-Photosynthesis-Yield estimation for Crops) model. The yield was calculated by multiplying the net primary productivity (NPP) by the harvest index (HI) derived from the ratio of grain to stalk yield. The modified RS-P-YEC model was used to simulate maize yield in the Northeast China Plain during the period 2002-2011. Statistical data on maize yield from the study area were used to validate the simulated results at the county level. The results showed that the Pearson correlation coefficient (R) between the simulated yield and the statistical data was 0.827 (P < 0.01), and the root mean square error (RMSE) was 712 kg/ha, with a relative error (RE) of 9.3%. From 2002-2011, the yield of the maize planting zone in the Northeast China Plain was increasing, with a smaller coefficient of variation (CV). The spatial pattern of the simulated maize yield was consistent with the actual distribution in the Northeast China Plain, with an increasing trend from the northeast to the southwest. Hence, the results demonstrated that the modified process-based model coupled with remote sensing data is suitable for yield prediction of maize in the Northeast China Plain at the spatial scale.
Keywords: process-based model, C4 crop, maize yield, remote sensing, Northeast China Plain
Procedia PDF Downloads 381