Search results for: error bound
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2322

1752 Video Compression Using Contourlet Transform

Authors: Delara Kazempour, Mashallah Abasi Dezfuli, Reza Javidan

Abstract:

Video compression is used for channels with limited bandwidth and for storage devices with limited capacity. One of the most popular approaches in video compression is the use of transforms. The discrete cosine transform is one such method, but it suffers from problems such as blocking and noise artifacts and high distortion, which adversely affect the compression ratio. The wavelet transform is another approach that balances compression and quality better than the cosine transform, but its ability to capture curved contours is limited. Because of the importance of compression and the shortcomings of the cosine and wavelet transforms, the contourlet transform has become popular in video compression. In the proposed method, we use the contourlet transform for video image compression. The contourlet transform preserves image detail better than the previous transforms because it is multi-scale and directional, and it can capture discontinuities such as edges; with this approach, less data is lost than with previous approaches. The contourlet transform provides a discrete-space structure and is useful for representing two-dimensional smooth images. It produces compressed images with a high compression ratio while preserving texture and edges. Finally, the results show that for the majority of images the mean square error and peak signal-to-noise ratio of the contourlet-based method are improved compared to the wavelet transform, but for most images the mean square error and peak signal-to-noise ratio of the cosine transform are better than those of the contourlet-based method.
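
As a minimal illustration of the two quality metrics used in this comparison (mean square error and peak signal-to-noise ratio), the sketch below scores two synthetic reconstructions of a frame; it is not the authors' implementation, and the frames and noise levels are invented for demonstration:

```python
import numpy as np

def mse(original, reconstructed):
    """Mean square error between two images (any numeric arrays)."""
    return np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)

def psnr(original, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB, for 8-bit imagery by default."""
    err = mse(original, reconstructed)
    return float("inf") if err == 0 else 10.0 * np.log10(peak**2 / err)

# Invented example: score two candidate reconstructions of one frame.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64))
rec_a = np.clip(frame + rng.normal(0, 4, frame.shape), 0, 255)  # stand-in "contourlet"
rec_b = np.clip(frame + rng.normal(0, 6, frame.shape), 0, 255)  # stand-in "wavelet"
print(f"A: MSE={mse(frame, rec_a):.1f}, PSNR={psnr(frame, rec_a):.1f} dB")
print(f"B: MSE={mse(frame, rec_b):.1f}, PSNR={psnr(frame, rec_b):.1f} dB")
```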

Keywords: video compression, contourlet transform, discrete cosine transform, wavelet transform

Procedia PDF Downloads 441
1751 Prediction of PM₂.₅ Concentration in Ulaanbaatar with Deep Learning Models

Authors: Suriya

Abstract:

Rapid socio-economic development and urbanization have led to an increasingly serious air pollution problem in Ulaanbaatar (UB), the capital of Mongolia. PM₂.₅ pollution has become the most pressing aspect of UB air pollution. Therefore, monitoring and predicting PM₂.₅ concentration in UB is of great significance for the health of the local people and for environmental management. To date, very few studies have used models to predict PM₂.₅ concentrations in UB. Using data from 0:00 on June 1, 2018, to 23:00 on April 30, 2020, we propose two deep learning models based on Bayesian-optimized LSTM (Bayes-LSTM) and CNN-LSTM. We utilized hourly observed data, including Himawari-8 (H8) aerosol optical depth (AOD), meteorology, and PM₂.₅ concentration, as input for the prediction of PM₂.₅ concentrations. The correlation strengths between meteorology, AOD, and PM₂.₅ were analyzed using the gray correlation analysis method; the performance improvement gained by using AOD as an input was tested, and the models were evaluated using mean absolute error (MAE) and root mean square error (RMSE). The prediction accuracies of both the Bayes-LSTM and CNN-LSTM deep learning models improved when AOD was included as an input parameter. The improvement was particularly pronounced for the CNN-LSTM model in the non-heating season; in the heating season, the prediction accuracy of the Bayes-LSTM model improved slightly, while that of the CNN-LSTM model decreased slightly. This work pioneers the use of H8 AOD data for PM₂.₅ prediction in UB and demonstrates that including AOD input data improves the performance of both proposed deep learning models.
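
A minimal sketch of a CNN-LSTM forecaster of the kind described, written in Keras with random stand-in data; the window length, feature count, and layer sizes are assumptions, not the authors' architecture:

```python
import numpy as np
from tensorflow.keras import layers, models

# Invented shapes: 24-hour input windows, 6 features per hour
# (e.g. H8 AOD, meteorological variables, lagged PM2.5).
n_samples, window, n_features = 1000, 24, 6
X = np.random.rand(n_samples, window, n_features).astype("float32")
y = np.random.rand(n_samples, 1).astype("float32")    # next-hour PM2.5 (scaled)

model = models.Sequential([
    layers.Input(shape=(window, n_features)),
    layers.Conv1D(32, kernel_size=3, activation="relu"),   # CNN front end
    layers.MaxPooling1D(2),
    layers.LSTM(64),                                       # temporal memory
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X, y, epochs=2, batch_size=32, verbose=0)

pred = model.predict(X, verbose=0)
print(f"MAE={np.mean(np.abs(pred - y)):.3f}, "
      f"RMSE={np.sqrt(np.mean((pred - y) ** 2)):.3f}")
```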

Keywords: deep learning, AOD, PM2.5, prediction, Ulaanbaatar

Procedia PDF Downloads 46
1750 Track and Trace Solution on Land Certificate Production: Indonesian Land Certificate

Authors: Adrian Rifqi, Febe Napitupulu, Erdi Hermawan, Edwin Putra, Yang Leprilian

Abstract:

This article focuses on improving the production process of the Indonesian land certificate printed by Perum Peruri, an Indonesian state-owned enterprise. Based on the data obtained, there were several customer complaints about the 2019 land certificate production. These complaints count as a negative mark with Perum Peruri's loyal customers. Almost all of the complaints refer to 'defective printouts and discrepancies between the products in the packaging and the packaging labels, in both type and quantity'. To overcome this problem, we intend to improve the production process, focusing on the complaint that 'there is a difference between the products in the packaging and the packaging labels'. The improvements to the land certificate production process rely on weighing-scale technology and a QR code on the packaging label. In addition, using the QR code on the packaging label facilitates the tracking of product data. With this method, we hope to reduce to 0% both the mismatch rate between the products in the packaging and the packaging label, in terms of quantity, type, and product number on the land certificate, and the error rate in dispatching land certificates, which are sent to many destinations. With this solution, we also hope to obtain precise data and real-time reports on land certificate production in the near future, so that track and trace can be implemented as the solution for land certificate production.
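
A minimal sketch of the label-side idea, assuming a JSON payload and a truncated SHA-256 digest as the cross-check (the actual payload format used by Perum Peruri is not given in the abstract); it uses the Python `qrcode` package:

```python
import json
import hashlib
import qrcode  # pip install qrcode[pil]

# Invented label payload: certificate numbers and quantity inside one package.
package = {
    "batch": "2019-07-A12",
    "cert_numbers": ["NIB-0001", "NIB-0002", "NIB-0003"],
    "quantity": 3,
}
payload = json.dumps(package, sort_keys=True)
digest = hashlib.sha256(payload.encode()).hexdigest()[:12]  # printed beside the code

qrcode.make(payload).save("label_2019-07-A12.png")  # QR image for the label

def verify(scanned_payload: str, printed_digest: str) -> bool:
    """Re-hash the scanned payload and compare it with the printed digest."""
    return hashlib.sha256(scanned_payload.encode()).hexdigest()[:12] == printed_digest

print(verify(payload, digest))  # True when the label matches the package record
```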

Keywords: land certificates, QR code, track and trace, packaging

Procedia PDF Downloads 158
1749 Energy Consumption by the Transport Sector and Its Impact on Atmospheric Pollution

Authors: Mme Hamani Née Guessas Ghaniya

Abstract:

Transport underpins the development of trade and business and is a recognized determinant of economic and social development. The development of transport is central to the major development challenges facing countries, but it is also at the heart of major contradictions once we take into account the environmental issues bound up with it, in particular questions of energy. Indeed, energy consumption by the transport sector is one of the greatest concerns, because it keeps increasing and has a major impact on our environment. The main consequence is atmospheric pollution, which intensifies the greenhouse effect and causes global warming. This warming risks partially melting the polar ice caps, thereby raising sea levels and flooding low-lying coastal zones, certain islands and deltas. The purpose of this communication is therefore to present the impact of energy consumption by the transport sector on air quality, showing its effects on health and on global warming.

Keywords: energy consumption, sector of transport, air quality, atmospheric pollution

Procedia PDF Downloads 328
1748 Pilot-Assisted Direct-Current Biased Optical Orthogonal Frequency Division Multiplexing Visible Light Communication System

Authors: Ayad A. Abdulkafi, Shahir F. Nawaf, Mohammed K. Hussein, Ibrahim K. Sileh, Fouad A. Abdulkafi

Abstract:

Visible light communication (VLC) is a new approach to optical wireless communication proposed to relieve the congested radio frequency (RF) spectrum. VLC systems are combined with orthogonal frequency division multiplexing (OFDM) to achieve high-rate transmission and high spectral efficiency. In this paper, we investigate pilot-assisted channel estimation for DC-biased optical OFDM (PACE-DCO-OFDM) systems to reduce the effects of distortion on the transmitted signal. Least-squares (LS) and linear minimum mean-squared error (LMMSE) estimators are implemented in MATLAB/Simulink to enhance the bit-error-rate (BER) of PACE-DCO-OFDM. Simulation results show that the proposed PACE-DCO-OFDM based on the LMMSE algorithm estimates the channel more accurately and achieves better BER performance than both the LS-based PACE-DCO-OFDM and the traditional system without PACE. For the same signal-to-noise ratio (SNR) of 25 dB, the achieved BER is about 5×10⁻⁴ for LMMSE-PACE and 4.2×10⁻³ with LS-PACE, while it is about 2×10⁻¹ for the system without the PACE scheme.
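
For illustration only, a real-valued NumPy toy of the two pilot-based estimators compared above; the pilot pattern, channel correlation model, and SNR are assumptions, and the paper's actual DCO-OFDM chain in MATLAB/Simulink is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(1)
n_pilots = 16
snr = 10 ** (25 / 10)                     # SNR of 25 dB, as in the abstract

# Assumed real-valued pilots and an exponentially correlated random channel.
x = np.ones(n_pilots)                     # unit-power pilot symbols
idx = np.arange(n_pilots)
R = np.exp(-0.3 * np.abs(idx[:, None] - idx[None, :]))   # channel correlation
h = np.linalg.cholesky(R) @ rng.standard_normal(n_pilots)
y = x * h + rng.standard_normal(n_pilots) / np.sqrt(snr)  # received pilots

h_ls = y / x                              # least-squares estimate
W = R @ np.linalg.inv(R + np.eye(n_pilots) / snr)         # LMMSE smoother
h_lmmse = W @ h_ls

for name, est in (("LS", h_ls), ("LMMSE", h_lmmse)):
    print(f"{name:5s} MSE: {np.mean((est - h) ** 2):.4f}")
```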

Keywords: channel estimation, OFDM, pilot-assist, VLC

Procedia PDF Downloads 178
1747 High Accuracy Analytic Approximations for Modified Bessel Functions I₀(x)

Authors: Pablo Martin, Jorge Olivares, Fernando Maass

Abstract:

A method to obtain analytic approximations for special functions of interest in engineering and physics is described here. Each approximate function is valid for every positive value of the variable, and accuracy is high and increases with the number of parameters to be determined. The general technique is shown through an application to the modified Bessel function of order zero, I₀(x). The form and the calculation of the parameters are obtained using the power series and the asymptotic expansion simultaneously. As in the Padé method, rational functions are used, but here they are combined with other elementary functions such as fractional powers and hyperbolic, trigonometric and exponential functions. The elementary function is chosen by requiring that the approximate function be a bridge between the power series and the asymptotic expansion. For I₀(x), two analytic approximations have already been determined. The simplest one is (1 + x²/4)^(−1/4) (1 + 0.24273x²) cosh(x)/(1 + 0.43023x²). Its parameters were determined using the leading term of the asymptotic expansion and two coefficients of the power series, and the maximum relative error is 0.05. In a second case, two terms of the asymptotic expansion and four coefficients of the power series were used, and the maximum relative error is 0.001 at x ≈ 9.5. Approximations with much higher accuracy will also be shown. In conclusion, a new technique is described for obtaining analytic approximations to functions of interest in the sciences, such that they have high accuracy, are valid for every positive value of the variable, can be integrated and differentiated like ordinary functions, and can be calculated easily, even with a regular pocket calculator.
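
The quoted approximation can be checked directly against a reference implementation; a short sketch using SciPy, which should reproduce a maximum relative error near the stated 0.05:

```python
import numpy as np
from scipy.special import i0

def i0_approx(x):
    """The simplest quasi-rational approximation quoted in the abstract."""
    return (1 + x**2 / 4) ** -0.25 * (1 + 0.24273 * x**2) \
        * np.cosh(x) / (1 + 0.43023 * x**2)

x = np.linspace(1e-6, 30.0, 20000)
rel_err = np.abs(i0_approx(x) - i0(x)) / i0(x)
print(f"max relative error ≈ {rel_err.max():.3f} at x ≈ {x[rel_err.argmax()]:.2f}")
```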

Keywords: analytic approximations, mathematical-physics applications, quasi-rational functions, special functions

Procedia PDF Downloads 249
1746 Monte Carlo Estimation of Heteroscedasticity and Periodicity Effects in a Panel Data Regression Model

Authors: Nureni O. Adeboye, Dawud A. Agunbiade

Abstract:

This research investigates the effects of heteroscedasticity and periodicity in a Panel Data Regression Model (PDRM) by extending previous works on balanced panel data estimation within the context of fitting a PDRM for banks' audit fees. The estimation of this model was achieved through the derivation of a joint Lagrange Multiplier (LM) test for homoscedasticity and zero serial correlation, a conditional LM test for zero serial correlation given heteroscedasticity of varying degrees, and a conditional LM test for homoscedasticity given first-order positive serial correlation, via a two-way error component model. Monte Carlo simulations were carried out for 81 different variations, whose design assumed a uniform distribution under a linear heteroscedasticity function. Each variation was iterated 1000 times, and the three estimators considered were assessed on the variance, absolute bias (ABIAS), mean square error (MSE) and root mean square error (RMSE) of the parameter estimates. Eighteen models were fitted under different specified conditions, and the best-fitting model is the within estimator when heteroscedasticity is severe at either zero or positive serial correlation. LM test results showed that the tests have good size and power, as all three tests are significant at 5% for the specified linear form of the heteroscedasticity function, establishing that banks' operations are severely heteroscedastic in nature with little or no periodicity effect.
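
As a toy analogue of the simulation design (ordinary least squares on a single regressor rather than the panel two-way error component model, with an assumed linear heteroscedasticity function), the following sketch shows how variance, ABIAS, MSE and RMSE of an estimator are accumulated over 1000 iterations:

```python
import numpy as np

rng = np.random.default_rng(42)
n, reps, beta = 200, 1000, 2.0

est = np.empty(reps)
for r in range(reps):
    x = rng.uniform(0.0, 1.0, n)             # design drawn from a uniform distribution
    sigma = 1.0 + 2.0 * x                    # linear heteroscedasticity function
    y = beta * x + sigma * rng.standard_normal(n)
    est[r] = (x @ y) / (x @ x)               # OLS slope (no intercept, for brevity)

err = est - beta
print(f"Variance={est.var():.4f}, ABIAS={abs(err.mean()):.4f}, "
      f"MSE={np.mean(err**2):.4f}, RMSE={np.sqrt(np.mean(err**2)):.4f}")
```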

Keywords: audit fee, heteroscedasticity, Lagrange multiplier test, Monte Carlo scheme, periodicity

Procedia PDF Downloads 140
1745 Knowledge-Attitude-Practice Survey Regarding High Alert Medication in a Teaching Hospital in Eastern India

Authors: D. S. Chakraborty, S. Ghosh, A. Hazra

Abstract:

Objective: Medication errors are a reality in all settings where medicines are prescribed, dispensed and used. High Alert Medications (HAM) are those that bear a heightened risk of causing significant patient harm when used in error. We conducted a knowledge-attitude-practice survey among residents working in a teaching hospital to assess the ground situation with regard to the handling of HAM. Methods: We plan to approach 242 residents among the approximately 600 currently working in the hospital through purposive sampling. Residents in all disciplines (clinical, paraclinical and preclinical) are being targeted. A structured questionnaire, pretested on 5 volunteer residents, is being used for data collection. The questionnaire is administered to residents individually through face-to-face interviews by two raters, while the residents are on duty but not during rush hours. Results: Of the 156 residents approached so far, data from 140 have been analyzed, the rest having refused participation. Although background knowledge exists for the majority of respondents, awareness levels regarding HAM are moderate, and attitude is non-uniform. Fewer than 70% of respondents could correctly identify most (>80%) of the HAM in three common settings: accident and emergency, obstetrics, and the intensive care unit. Several potential errors in practice have been identified. The study is ongoing. Conclusions: The situation requires corrective action. There is an urgent need to improve awareness regarding HAM for the sake of patient safety. The pharmacology department can take the lead in designing an awareness campaign with support from the hospital administration.

Keywords: high alert medication, medication error, questionnaire, resident

Procedia PDF Downloads 130
1744 Estimation of Maize Yield by Using a Process-Based Model and Remote Sensing Data in the Northeast China Plain

Authors: Jia Zhang, Fengmei Yao, Yanjing Tan

Abstract:

The accurate estimation of crop yield is of great importance for food security. In this study, a process-based mechanistic model was modified to estimate the yield of a C4 crop by modifying the carbon metabolic pathway in the photosynthesis sub-module of the RS-P-YEC (Remote-Sensing-Photosynthesis-Yield Estimation for Crops) model. The yield was calculated by multiplying net primary productivity (NPP) by the harvest index (HI) derived from the ratio of grain to stalk yield. The modified RS-P-YEC model was used to simulate maize yield in the Northeast China Plain during the period 2002-2011. Statistical maize yield data from the study area were used to validate the simulated results at the county level. The results showed that the Pearson correlation coefficient (R) between the simulated yield and the statistical data was 0.827 (P < 0.01), and the root mean square error (RMSE) was 712 kg/ha with a relative error (RE) of 9.3%. From 2002-2011, the yield of the maize planting zone in the Northeast China Plain was increasing, with a smaller coefficient of variation (CV). The spatial pattern of simulated maize yield was consistent with the actual distribution in the Northeast China Plain, with an increasing trend from the northeast to the southwest. Hence, the results demonstrate that the modified process-based model coupled with remote sensing data is suitable for spatial prediction of maize yield in the Northeast China Plain.
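
A minimal sketch of the yield step and the validation statistics described above (yield = NPP × HI, then R, RMSE and RE against statistical data); all numbers are invented stand-ins, not the study's county data:

```python
import numpy as np

# Invented county-level values: simulated NPP (kg/ha) and harvest index.
npp = np.array([14800.0, 16200.0, 15100.0, 17400.0])   # net primary productivity
hi = np.array([0.48, 0.50, 0.47, 0.51])                # from the grain-to-stalk ratio
simulated = npp * hi                                   # modelled yield, kg/ha

observed = np.array([7300.0, 8000.0, 6900.0, 9100.0])  # statistical yield data

r = np.corrcoef(simulated, observed)[0, 1]
rmse = np.sqrt(np.mean((simulated - observed) ** 2))
re = 100.0 * rmse / observed.mean()
print(f"R={r:.3f}, RMSE={rmse:.0f} kg/ha, RE={re:.1f}%")
```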

Keywords: process-based model, C4 crop, maize yield, remote sensing, Northeast China Plain

Procedia PDF Downloads 372
1743 Modeling of Age Hardening Process Using Adaptive Neuro-Fuzzy Inference System: Results from Aluminum Alloy A356/Cow Horn Particulate Composite

Authors: Chidozie C. Nwobi-Okoye, Basil Q. Ochieze, Stanley Okiy

Abstract:

This research reports on the modeling of the age hardening process using an adaptive neuro-fuzzy inference system (ANFIS). The age hardening output (hardness) was predicted using ANFIS. The input parameters were ageing time, temperature and percentage composition of cow horn particles (CHp%). The results show that the correlation coefficient (R) of the predicted hardness values versus the measured values was 0.9985. Subsequently, values outside the experimental data points were predicted. When the temperature was kept constant and the other input parameters were varied, the average relative error of the predicted values was 0.0931%. When the temperature was varied and the other input parameters kept constant, the average relative error of the hardness predictions was 80%. The results show that ANFIS trained on coarse experimental data points is not very effective at predicting process outputs in the age hardening operation of the A356 alloy/CHp particulate composite. The fine experimental data required by ANFIS make it more expensive for modeling and optimization of age hardening operations of the A356 alloy/CHp particulate composite.

Keywords: adaptive neuro-fuzzy inference system (ANFIS), age hardening, aluminum alloy, metal matrix composite

Procedia PDF Downloads 153
1742 Unified Power Quality Conditioner Presentation and Dimensioning

Authors: Abderrahmane Kechich, Othmane Abdelkhalek

Abstract:

Static converters behave as nonlinear loads that inject harmonic currents into the grid and increase the consumption of reactive power. On the other hand, the increased use of sensitive equipment requires sinusoidal supply voltages. As a result, control of electrical power quality has become a major concern in the field of power electronics. In this context, the unified power quality conditioner (UPQC) was developed. It combines series and parallel structures: the series filter can protect sensitive loads and compensate voltage disturbances such as voltage harmonics, voltage dips or flicker, while the shunt filter compensates current disturbances such as current harmonics, reactive currents and imbalance. This dual capability makes it one of the most appropriate devices. Calculating the parameters is an important step and, at the same time, not an easy one. For that reason, several researchers have relied on trial and error to calculate the parameters, but this method is hard on beginning researchers, especially for the controller parameters. This paper therefore gives a mathematical way to calculate almost all of the UPQC parameters without resorting to trial and error. It also gives a new approach for calculating the PI regulator parameters, with the aim of obtaining a stable UPQC able to compensate disturbances acting on the waveform of the line voltage and the load current, in order to improve electrical power quality.
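
As one standard example of replacing trial and error with a direct calculation, the sketch below sizes a PI current regulator for a first-order filter plant by pole-zero cancellation; the plant values and target bandwidth are assumptions, and this is a textbook design rather than the paper's specific derivation:

```python
import math

# Hypothetical shunt-filter current loop: plant G(s) = 1 / (L s + R).
L = 2.5e-3     # interface inductance, H (assumed)
R = 0.1        # series resistance, ohm (assumed)
f_c = 1000.0   # desired closed-loop bandwidth, Hz (assumed)

# Pole-zero cancellation: choosing ki/kp = R/L cancels the plant pole,
# so the open loop becomes wc/s and the closed loop has bandwidth wc.
wc = 2.0 * math.pi * f_c
kp = wc * L
ki = wc * R
print(f"kp = {kp:.3f} V/A, ki = {ki:.1f} V/(A s), PI zero at {ki / kp:.1f} rad/s")
```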

Keywords: UPQC, shunt active filter, series active filter, PI controller, PWM control, dual-loop control

Procedia PDF Downloads 401
1741 Determination of Direct Solar Radiation Using Atmospheric Physics Models

Authors: Pattra Pukdeekiat, Siriluk Ruangrungrote

Abstract:

This work set out to determine direct solar radiation precisely using atmospheric physics models, since accurate prediction of solar radiation is necessary and useful for solar energy applications, including atmospheric research. Models and techniques for calculating regional direct solar radiation are essential where instrumental measurements are unavailable. The investigation was mathematically governed by six astronomical parameters, i.e., declination (δ), hour angle (ω), solar time, solar zenith angle (θz), extraterrestrial radiation (Iso) and eccentricity (E0), along with two atmospheric parameters, i.e., air mass (mr) and dew point temperature, at Bangna meteorological station (13.67° N, 100.61° E) in Bangkok, Thailand. Five solar radiation models were analyzed under a clear-sky assumption, accompanied by three statistical tests, mean bias difference (MBD), root mean square difference (RMSD) and coefficient of determination (R²), in order to validate the accuracy of the results. The calculated direct solar radiation was in the range 491-505 W/m² with a relative percentage error of 8.41% for winter and 532-540 W/m² with a relative percentage error of 4.89% for summer 2014. Additionally, a dataset of seven continuous days representing each season was considered, with MBD, RMSD and R² of -0.08, 0.25, 0.86 and -0.14, 0.35, 3.29, respectively, belonging to the Kumar model for winter and the CSR model for summer. In summary, the determination of direct solar radiation based on atmospheric models and empirical equations can advantageously provide immediate and reliable values of the solar components for any site in the region without the constraint of actual measurement.
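
A sketch of the standard clear-sky geometry underlying such models, using Cooper's declination formula, the eccentricity correction factor, and the Kasten-Young air mass; the formulas are textbook ones, and their pairing with the paper's five models is illustrative only:

```python
import math

def solar_geometry(day_of_year, solar_hour, lat_deg):
    """Textbook clear-sky geometry: Cooper declination, eccentricity factor,
    zenith angle, extraterrestrial irradiance, Kasten-Young air mass."""
    n = day_of_year
    decl = 23.45 * math.sin(math.radians(360.0 * (284 + n) / 365.0))
    e0 = 1.0 + 0.033 * math.cos(math.radians(360.0 * n / 365.0))
    omega = 15.0 * (solar_hour - 12.0)                  # hour angle, degrees
    lat, d, w = map(math.radians, (lat_deg, decl, omega))
    cos_tz = math.sin(lat) * math.sin(d) + math.cos(lat) * math.cos(d) * math.cos(w)
    tz = math.degrees(math.acos(cos_tz))                # solar zenith angle
    i_on = 1367.0 * e0                                  # W/m2, normal incidence
    m_r = 1.0 / (cos_tz + 0.50572 * (96.07995 - tz) ** -1.6364)  # air mass
    return decl, omega, tz, i_on, m_r

# Bangna station (13.67 N), solar noon in mid-January (day 15).
print(solar_geometry(15, 12.0, 13.67))
```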

Keywords: atmospheric physics models, astronomical parameters, atmospheric parameters, clear sky condition

Procedia PDF Downloads 404
1740 Prediction of Compressive Strength of Concrete from Early Age Test Result Using Design of Experiments (RSM)

Authors: Salem Alsanusi, Loubna Bentaher

Abstract:

Response surface methods (RSM) provide statistically validated predictive models that can then be manipulated to find optimal process configurations. Variation transmitted to responses from poorly controlled process factors can be accounted for by the mathematical technique of propagation of error (POE), which facilitates 'finding the flats' on the surfaces generated by RSM. The dual response approach to RSM captures the standard deviation of the output as well as the average, accounting for unknown sources of variation. Dual response plus propagation of error (POE) provides a more useful model of overall response variation. In our case, we applied this technique to predict the 28-day compressive strength of concrete, since waiting 28 days is time-consuming while quality control must be ensured. This paper investigates the potential of using design of experiments (DOE-RSM) to predict the compressive strength of concrete at the 28th day. Data used for this study came from experimental schemes at the University of Benghazi, Civil Engineering Department. A total of 114 data sets were used. The ACI mix design method was utilized for the mix design. No admixtures were used; only the main concrete mix constituents, namely cement, coarse aggregate, fine aggregate and water, were utilized in all mixes. Different mix proportions of the ingredients and different water-cement ratios were used. The proposed mathematical models are capable of predicting the required concrete compressive strength from early-age test results.
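
A minimal sketch of fitting a second-order response surface of the kind RSM produces, using scikit-learn; the mix data below are invented stand-ins for the 114 experimental sets, so the fit is purely illustrative:

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# Invented mixes: [cement kg/m3, water-cement ratio, coarse/fine aggregate ratio]
# and 28-day strength (MPa); the real study used 114 ACI-designed data sets.
X = np.array([[350, 0.45, 1.6], [400, 0.40, 1.5], [300, 0.55, 1.7],
              [420, 0.38, 1.4], [330, 0.50, 1.6], [380, 0.42, 1.5]])
y = np.array([38.0, 45.5, 28.0, 49.0, 32.5, 43.0])

# Second-order polynomial surface: the usual RSM model form.
rsm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
rsm.fit(X, y)
print("predicted 28-day strength:", rsm.predict([[360, 0.44, 1.55]]))
```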

Keywords: mix proportioning, response surface methodology, compressive strength, optimal design

Procedia PDF Downloads 266
1739 Hygro-Thermal Modelling of Timber Decks

Authors: Stefania Fortino, Petr Hradil, Timo Avikainen

Abstract:

Timber bridges have an excellent environmental performance, are economical and relatively easy to build, and can have a long service life. However, the durability of these bridges is the main problem because of their exposure to outdoor climate conditions. The moisture content accumulated in wood over long periods, in combination with certain temperatures, may create conditions suitable for timber decay. In addition, moisture content variations affect the structural integrity, serviceability and loading capacity of timber bridges. Therefore, monitoring the moisture content in wood is important for the durability of the material, and also for the whole superstructure. The measurements obtained by the usual sensor-based techniques provide hygro-thermal data only at specific locations of the wood components. In this context, monitoring can be assisted by numerical modelling to obtain more information on the hygro-thermal response of the bridges. This work presents a hygro-thermal model based on a multi-phase moisture transport theory to predict the distribution of moisture content, relative humidity and temperature in wood. Below the fibre saturation point, the multi-phase theory simulates three phenomena in cellular wood during moisture transfer, i.e., the diffusion of water vapour in the pores, the sorption of bound water and the diffusion of bound water in the cell walls. In the multi-phase model, the two water phases are separate, and the coupling between them is defined through a sorption rate. Furthermore, an average of the temperature-dependent adsorption and desorption isotherms is used. In previous works by some of the authors, this approach was found very suitable for studying moisture transport in uncoated and coated stress-laminated timber decks. Compared to those works, the hygro-thermal fluxes on the external surfaces here include the influence of the absorbed solar radiation over time; consequently, the temperatures on the surfaces exposed to the sun are higher. This affects the whole hygro-thermal response of the timber component. The multi-phase model, implemented in a user subroutine of the Abaqus FEM code, provides the distribution of the moisture content, the temperature and the relative humidity in a volume of the timber deck. As a case study, hygro-thermal data in wood are collected from the ongoing monitoring of the stress-laminated timber deck of Tapiola Bridge in Finland, based on integrated humidity-temperature sensors, and the numerical results are found to be in good agreement with the measurements. The proposed model, used to assist the monitoring, can contribute to reducing the maintenance costs of bridges, as well as the cost of instrumentation, and to increasing safety.
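
A heavily simplified 1D sketch of the two-phase idea below the fibre saturation point (vapour diffusing in the pores, bound water diffusing in the cell walls, a sorption rate coupling them); all coefficients are assumed round numbers, not the calibrated values used in the Abaqus user subroutine:

```python
import numpy as np

# Assumed round-number coefficients; not the calibrated values of the model.
nx, dx, dt, steps = 50, 1e-3, 0.5, 20000      # 50 mm depth, 10 000 s simulated
D_v, D_b, k_sorp, alpha = 5e-7, 2e-10, 1e-4, 16.0
c_v = np.full(nx, 0.005)    # water vapour in the pores (kg/m3)
c_b = alpha * c_v.copy()    # bound water in the cell walls, initially in equilibrium

def laplacian(c):
    """Second difference with zero-flux (mirrored) boundaries."""
    cp = np.pad(c, 1, mode="edge")
    return (cp[:-2] - 2.0 * c + cp[2:]) / dx**2

for _ in range(steps):
    s = k_sorp * (alpha * c_v - c_b)          # sorption rate couples the two phases
    c_v += dt * (D_v * laplacian(c_v) - s)    # vapour diffusion in the pores
    c_b += dt * (D_b * laplacian(c_b) + s)    # bound-water diffusion in cell walls
    c_v[0] = 0.006                            # wetter climate at the exposed face

print("bound water near the surface:", np.round(c_b[:5], 4))
```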

Keywords: moisture content, multi-phase models, solar radiation, timber decks, FEM

Procedia PDF Downloads 174
1738 Numerical Simulation and Experimental Validation of Tire-Road Separation in a Quarter-Car Model

Authors: Quy Dang Nguyen, Reza Nakhaie Jazar

Abstract:

The paper investigates the vibration dynamics of tire-road separation for a quarter-car model; the separation model is developed to be close to the real situation by allowing the tire to separate from the ground plane. A set of piecewise linear mathematical models is developed, matching the in-contact and no-contact states, to serve as mother models for further investigations. The bound dynamics are numerically simulated as time responses and phase portraits. The separation analysis can determine which values of the suspension parameters delay or avoid the no-contact phenomenon, which improves ride comfort and eliminates potentially dangerous oscillations. Finally, model verification is carried out in the MSC-ADAMS environment.
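
A minimal sketch of the piecewise in-contact/no-contact idea: the tire spring can push but not pull, so its force is clipped at zero, and separation occurs whenever the clipped force vanishes. The parameters and road input below are assumed, not the paper's values:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed quarter-car parameters (not the paper's values).
ms, mu = 300.0, 40.0                      # sprung / unsprung mass, kg
ks, cs, kt = 20e3, 1.2e3, 180e3           # suspension stiffness, damping, tire stiffness
g = 9.81

def road(t):
    return 0.05 * np.sin(8 * np.pi * t)   # hypothetical 4 Hz harmonic road input, m

def rhs(t, z):
    xs, vs, xu, vu = z
    fs = ks * (xu - xs) + cs * (vu - vs)  # suspension force on the sprung mass
    ft = max(kt * (road(t) - xu), 0.0)    # piecewise law: the tire cannot pull
    return [vs, (fs - ms * g) / ms, vu, (-fs + ft - mu * g) / mu]

# Start from static equilibrium so only the road input excites the system.
xu0 = -(ms + mu) * g / kt
xs0 = xu0 - ms * g / ks
sol = solve_ivp(rhs, (0.0, 2.0), [xs0, 0.0, xu0, 0.0], max_step=1e-3)

separated = kt * (road(sol.t) - sol.y[2]) <= 0.0   # no-contact samples
print(f"fraction of time separated: {separated.mean():.1%}")
```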

Keywords: quarter-car vibrations, tire-road separation, separation analysis, separation dynamics, ride comfort, ADAMS validation

Procedia PDF Downloads 89
1737 Reliability and Validity of a Portable Inertial Sensor and Pressure Mat System for Measuring Dynamic Balance Parameters during Stepping

Authors: Emily Rowe

Abstract:

Introduction: Balance assessments can be used to help evaluate a person's risk of falls, determine causes of balance deficits and inform intervention decisions. It is widely accepted that instrumented quantitative analysis can be more reliable and specific than semi-qualitative ordinal scales or itemised scoring methods. However, the uptake of quantitative methods is hindered by expense, lack of portability, and set-up requirements. During stepping, foot placement is actively coordinated with the body centre of mass (COM) kinematics during pre-initiation. Based on this, the potential to use COM velocity just prior to foot-off and foot placement error as outcome measures of dynamic balance is currently being explored using complex 3D motion capture. Inertial sensors and pressure mats might be more practical technologies for measuring these parameters in clinical settings. Objective: The aim of this study was to test the criterion validity and test-retest reliability of a synchronised inertial sensor and pressure mat-based approach to measuring foot placement error and COM velocity while stepping. Methods: Trials were held with 15 healthy participants, who each attended two sessions. The task was to step onto one of 4 targets (2 for each foot) multiple times in a random, unpredictable order. The stepping target was cued using an auditory prompt and electroluminescent panel illumination. Data were collected using 3D motion capture and a combined inertial sensor-pressure mat system simultaneously in both sessions. To assess the reliability of each system, ICC estimates and their 95% confidence intervals were calculated based on a mean-rating (k = 2), absolute-agreement, two-way mixed-effects model. To test the criterion validity of the combined inertial sensor-pressure mat system against the motion capture system, multi-factorial two-way repeated measures ANOVAs were carried out. Results: Foot placement error was not reliably measured between sessions by either system (ICC 95% CIs; motion capture: 0 to >0.87 and pressure mat: <0.53 to >0.90). This could be due to genuine within-subject variability given the nature of the stepping task, and it brings into question the suitability of average foot placement error as an outcome measure. Additionally, the results suggest the pressure mat is not a valid measure of this parameter, since it was statistically significantly different from, and much less precise than, the motion capture system (p=0.003). The inertial sensor was found to be a moderately reliable (ICC 95% CIs >0.46 to >0.95) but not valid measure of anteroposterior and mediolateral COM velocities (AP velocity: p=0.000; ML velocity, targets 1 to 4: p=0.734, 0.001, 0.000 & 0.376). However, it is thought that the validity of the COM velocity measure could be improved with further development. Options that could be investigated include whether inertial sensor placement with respect to pelvic marker placement has an effect, or implementing more complex data processing to manage inherent accelerometer and gyroscope limitations. Conclusion: The pressure mat is not a suitable alternative for measuring foot placement errors. The inertial sensors have the potential for measuring COM velocity; however, further development work is needed.
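
For reference, a sketch of how the described ICC (mean-rating k = 2, absolute agreement) can be computed with the pingouin package on invented two-session data; ICC2k is the closest match in pingouin's output to the model named above:

```python
import pandas as pd
import pingouin as pg

# Invented test-retest data: one COM velocity (m/s) per participant per session.
values_s1 = [0.52, 0.61, 0.47, 0.55, 0.63, 0.49, 0.58, 0.51,
             0.60, 0.54, 0.48, 0.57, 0.62, 0.50, 0.56]
values_s2 = [0.50, 0.64, 0.45, 0.57, 0.60, 0.51, 0.59, 0.49,
             0.62, 0.52, 0.50, 0.55, 0.65, 0.48, 0.58]
data = pd.DataFrame({
    "subject": list(range(15)) * 2,
    "session": ["s1"] * 15 + ["s2"] * 15,
    "com_velocity": values_s1 + values_s2,
})

icc = pg.intraclass_corr(data=data, targets="subject",
                         raters="session", ratings="com_velocity")
# ICC2k: average of k = 2 measurements with absolute agreement.
print(icc.loc[icc["Type"] == "ICC2k", ["Type", "ICC", "CI95%"]])
```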

Keywords: dynamic balance, inertial sensors, portable, pressure mat, reliability, stepping, validity, wearables

Procedia PDF Downloads 151
1736 Life Time Improvement of Clamp Structural by Using Fatigue Analysis

Authors: Pisut Boonkaew, Jatuporn Thongsri

Abstract:

In the hard disk drive manufacturing industry, the processes of removing unnecessary parts and qualifying part quality before assembly are important. Thus, a clamp was designed and fabricated as a fixture for holding parts during the testing process. Improving the design by physical trial and error takes a long time, so simulation was brought in to improve the part and reduce the time taken. The problem is that the present clamp has a low life expectancy because of the critical stress that occurs. Hence, simulation was used to study the behavior of stress and compressive force in order to improve the clamp's life expectancy across all candidate designs, of which there are 27 once repeated designs are excluded. The number of candidates was obtained following the full factorial design rules used within the six sigma methodology. Six sigma is a well-structured methodology for improving quality by detecting and reducing process variability, so that defects decrease while process capability increases. This research focuses on reducing stress and fatigue while the compressive force remains within the acceptable range set by the company. In the simulation, ANSYS models the 3D CAD geometry under the same conditions as the experiment, and the force at each displacement from 0.01 to 0.1 mm is recorded. The ANSYS setup was verified by a mesh convergence study, and the percentage error compared with the experimental result was required not to exceed the acceptable range. The improvement process then varies the degree, radius, and length to reduce stress while remaining within the acceptable force range. Fatigue analysis is performed next, to guarantee that the lifetime is extended; it is simulated in ANSYS and confirmed against the actual clamp in order to observe the difference in fatigue between the two designs. This brings a lifetime improvement of up to 57% compared with the actual clamp used in manufacturing. The study provides a setting precise and trustworthy enough to serve as a reference methodology for future designs. Because of the combination and adaptation of the six sigma method, finite element, fatigue and linear regression analysis, leading to accurate calculation, this project can save up to 60 million dollars annually.

Keywords: clamp, finite element analysis, structural, six sigma, linear regression analysis, fatigue analysis, probability

Procedia PDF Downloads 233
1735 Artificial Intelligence in the Design of a Retaining Structure

Authors: Kelvin Lo

Abstract:

Nowadays, numerical modelling in geotechnical engineering is very common but sophisticated. Many advanced input settings and considerable computational effort are required to optimize a design to reduce the construction cost, and optimization usually requires huge numerical models. If the optimization is conducted manually, there is a potentially dangerous consequence from human errors, and the time spent on input and on data extraction from the output is significant. This paper presents an automation process applied to numerical modelling (Plaxis 2D) of a trench excavation supported by a secant-pile retaining structure for a top-down tunnel project. Python code is adopted to control the process, and numerical modelling is conducted automatically at every 20 m chainage along the 200 m tunnel, with the maximum retained height occurring at the middle chainage. The Python code changes the geological stratum and excavation depth under groundwater flow conditions in each 20 m section, and automatically conducts trial and error to determine the required pile length and the use of props to achieve the required factor of safety and target displacement. Once the bending moment of the pile exceeds its capacity, the pile size is increased. When the pile embedment reaches the default maximum length, the prop system is turned on. Results showed that the approach saves time, increases efficiency, lowers design costs, and replaces manual labor so as to minimize error.
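
A condensed sketch of the automation loop described above; the `run_fe_model` stub stands in for the Plaxis 2D scripting calls (the real API names are not used here), and the thresholds and stub numbers are invented so the loop runs on its own:

```python
def run_fe_model(chainage, depth, pile_len, use_prop):
    """Hypothetical stand-in for the Plaxis 2D scripting calls (not the real API);
    returns an invented factor of safety and wall displacement so the loop runs."""
    return {"fos": 1.0 + 0.05 * pile_len - 0.05 * depth + (0.3 if use_prop else 0.0),
            "disp": 0.050 - 0.0015 * pile_len - (0.010 if use_prop else 0.0)}

def design_section(chainage, depth, pile_len=12.0, max_len=30.0,
                   fos_target=1.4, disp_limit=0.025):
    """Automated trial and error for one 20 m tunnel section."""
    use_prop = False
    while True:
        res = run_fe_model(chainage, depth, pile_len, use_prop)
        if res["fos"] >= fos_target and res["disp"] <= disp_limit:
            return pile_len, use_prop
        if pile_len < max_len:
            pile_len += 1.0            # lengthen the pile first
        elif not use_prop:
            use_prop = True            # at maximum embedment, turn on the props
        else:
            raise RuntimeError(f"no feasible design at chainage {chainage} m")

for chainage in range(0, 201, 20):     # every 20 m along the 200 m tunnel
    depth = 8.0 + 4.0 * (1.0 - abs(chainage - 100) / 100.0)   # deepest mid-tunnel
    print(chainage, design_section(chainage, depth))
```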

Keywords: automation, numerical modelling, Python, retaining structures

Procedia PDF Downloads 50
1734 Estimation of PM10 Concentration Using Ground Measurements and Landsat 8 OLI Satellite Image

Authors: Salah Abdul Hameed Saleh, Ghada Hasan

Abstract:

The aim of this work is to produce an empirical model for the determination of particulate matter (PM10) concentration in the atmosphere using the visible bands of a Landsat 8 OLI satellite image over Kirkuk city, Iraq. The suggested algorithm is based on the aerosol optical reflectance model; the reflectance model is a function of the optical properties of the atmosphere, which can be related to its concentrations. PM10 concentrations were measured using a Particle Mass Profiler and Counter in a Single Handheld Unit (Aerocet 531) simultaneously with the Landsat 8 OLI image acquisition. The PM10 measurement locations were recorded with a handheld global positioning system (GPS). The reflectance values obtained for the visible bands (coastal aerosol, blue, green and red) of the Landsat 8 OLI image were correlated with the in-situ measured PM10. The feasibility of the proposed algorithms was investigated based on the correlation coefficient (R) and root mean square error (RMSE) against the PM10 ground measurement data. Our proposed multispectral model was chosen on the basis of the highest correlation coefficient (R) and the lowest root mean square error (RMSE) with the PM10 ground data. The outcomes of this research showed that the visible bands of Landsat 8 OLI are capable of estimating PM10 concentration with an acceptable level of accuracy.
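
A minimal sketch of the empirical step (regressing ground PM10 on the four visible-band reflectances and scoring R and RMSE); the five samples below are invented stand-ins for the Kirkuk measurements:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Invented co-located samples: TOA reflectance in the four visible OLI bands
# (coastal aerosol, blue, green, red) and ground PM10 (ug/m3).
refl = np.array([[0.11, 0.10, 0.09, 0.08],
                 [0.14, 0.13, 0.11, 0.10],
                 [0.18, 0.16, 0.14, 0.12],
                 [0.21, 0.19, 0.16, 0.14],
                 [0.25, 0.22, 0.19, 0.17]])
pm10 = np.array([55.0, 78.0, 112.0, 140.0, 171.0])

model = LinearRegression().fit(refl, pm10)
pred = model.predict(refl)
r = np.corrcoef(pred, pm10)[0, 1]
rmse = np.sqrt(np.mean((pred - pm10) ** 2))
print(f"R={r:.3f}, RMSE={rmse:.1f} ug/m3")
```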

Keywords: air pollution, PM10 concentration, Landsat 8 OLI image, reflectance, multispectral algorithms, Kirkuk area

Procedia PDF Downloads 441
1733 Exploring Time-Series Phosphoproteomic Datasets in the Context of Network Models

Authors: Sandeep Kaur, Jenny Vuong, Marcel Julliard, Sean O'Donoghue

Abstract:

Time-series data are useful for modelling as they enable model evaluation. However, when reconstructing models from phosphoproteomic data, non-exact methods are often utilised, as knowledge of the network structure, such as which kinases and phosphatases lead to the observed phosphorylation state, is incomplete. Such reactions are therefore often hypothesised, which gives rise to uncertainty. Here, we propose a framework, implemented via a web-based tool (as an extension to Minardo), which, given time-series phosphoproteomic datasets, can generate κ models. The incompleteness and uncertainty in the generated model and reactions are clearly presented to the user visually. Furthermore, we demonstrate, via a toy EGF signalling model, the use of algorithmic verification to verify κ models. Manually formulated requirements were evaluated against the model, leading to the highlighting of the nodes causing unsatisfiability (i.e., error-causing nodes). We aim to integrate such methods into our web-based tool and demonstrate how the identified erroneous nodes can be presented to the user visually. Thus, in this research we present a framework that enables a user to explore phosphoproteomic time-series data in the context of models. The observer can visualise which reactions in the model are highly uncertain and which nodes cause incorrect simulation outputs. Such a tool enables an end-user to determine which empirical analyses to perform to reduce uncertainty in the presented model, thus enabling a better understanding of the underlying system.

Keywords: κ-models, model verification, time-series phosphoproteomic datasets, uncertainty and error visualisation

Procedia PDF Downloads 252
1732 Effect of Slope Height and Horizontal Forces on the Bearing Capacity of Strip Footings near Slopes in Cohesionless Soil

Authors: Sven Krabbenhoft, Kristian Krabbenhoft, Lars Damkilde

Abstract:

The problem of determining the bearing capacity of a strip foundation located near a slope of infinite height has been dealt with by several authors. Very often in practical problems the slope is of limited height, and furthermore the resulting load may be inclined at an angle to the horizontal; in such cases the bearing capacity of the footing cannot be found using the existing methods. The present work comprises finite element based upper- and lower-bound calculations, using the geotechnical software OptumG2, to investigate the effect of slope height and horizontal forces on the total bearing capacity, both with and without the superposition presupposed in the traditional bearing capacity equation. The results for friction angles of 30, 35 and 40 degrees and slope inclinations of 1:2, 1:3 and 1:4, for self-weight and surcharge, are given as charts showing the slope inclination factors suitable for design.

Keywords: footings, bearing capacity, slopes, cohesionless soil

Procedia PDF Downloads 462
1731 Temporal Variation of PM10-Bound Benzo(a)Pyrene Concentration in an Urban and a Rural Site of Northwestern Hungary

Authors: Zs. Csanádi, A. Szabó Nagy, J. Szabó, J. Erdős

Abstract:

The main objective of this study was to assess the annual concentration and seasonal variation of benzo(a)pyrene (BaP) associated with PM10 at an urban site in Győr and at a rural site in Sarród over the sampling period 2008-2012. A total of 280 PM10 aerosol samples were collected at each sampling site and analyzed for BaP by gas chromatography. The BaP concentrations ranged from undetected to 8 ng/m³ with a mean value of 1.01 ng/m³ at the Győr site, and from undetected to 4.07 ng/m³ with a mean value of 0.52 ng/m³ at the Sarród site. Relatively higher concentrations of BaP were detected in samples collected at both sites in the heating seasons compared with the non-heating periods. The annual mean BaP concentrations were comparable with published data from other Hungarian sites.

Keywords: air quality, benzo(a)pyrene, PAHs, polycyclic aromatic hydrocarbons

Procedia PDF Downloads 390
1730 Investigation on the Stability of Rock Slopes Subjected to Tension Cracks via Limit Analysis

Authors: Weigao. Wu, Stefano. Utili

Abstract:

Based on the kinematic approach of limit analysis, a full set of upper-bound solutions for the stability of homogeneous rock slopes subjected to tension cracks is obtained. The generalized Hoek-Brown failure criterion is employed to describe the non-linear strength envelope of rocks. In this paper, critical failure mechanisms are determined for cracks of known depth but unspecified location, cracks of known location but unknown depth, and cracks of unspecified location and depth. It is shown that there is a drop of up to nearly 50% in the stability factors for rock slopes intersected by a tension crack compared with intact ones. Tables and charts of solutions in dimensionless form are presented for ease of use by practitioners.

Keywords: Hoek-Brown failure criterion, limit analysis, rock slope, tension cracks

Procedia PDF Downloads 343
1729 Gravitationally Confined Relativistic Neutrinos and Mathematical Modeling of the Structure of Pions

Authors: Constantinos Vayenas, Athanasios Fokas, Dimitrios Grigoriou

Abstract:

We use special relativity to compute the inertial, and thus gravitational, mass of relativistic electron and muon neutrinos, and we find that, for neutrino kinetic energies above 150 MeV, these masses are in the Planck mass range. Consequently, we develop a simple Bohr-type model, using gravitational rather than electrostatic forces between the rotating neutrinos as the centripetal force, in order to examine the bound rotational states formed by two or three such relativistic neutrinos. We find that the masses of the composite rotational structures formed are in the meson and baryon mass ranges, respectively. These models contain no adjustable parameters, and by comparing their predictions with the experimental values of the masses of protons and pions, we compute a mass of 0.0437 eV/c² for the heaviest electron neutrino and of 1.1×10⁻³ eV/c² for the heaviest muon neutrino.

Keywords: geons, gravitational confinement, neutrino masses, special relativity

Procedia PDF Downloads 261
1728 Seismic Bearing Capacity Estimation of Shallow Foundations on Dense Sand Underlain by Loose Sand Strata by Using Finite Elements Limit Analysis

Authors: Pragyan Paramita Das, Vishwas N. Khatri

Abstract:

By using lower- and upper-bound finite element limit analysis in conjunction with second-order cone programming (SOCP), the effect of seismic forces on the bearing capacity of a surface strip footing resting on dense sand underlain by a loose sand deposit is explored. The soil is assumed to obey the Mohr-Coulomb yield criterion and an associated flow rule. The angle of internal friction (ϕ) of the top and bottom layers is varied from 42° to 44° and 32° to 34°, respectively. The coefficient of seismic acceleration is varied from 0 to 0.3. The variation of bearing capacity with the thickness of the top layer is generated for various seismic acceleration coefficients. A comparison is made with available solutions from the literature wherever applicable.

Keywords: bearing capacity, conic programming, finite elements, seismic forces

Procedia PDF Downloads 169
1727 Effect of Temperature on the Permeability and Time-Dependent Change in Thermal Volume of Bentonite Clay During the Heating-Cooling Cycle

Authors: Nilufar Chowdhury, Fereydoun Najafian Jazi, Omid Ghasemi-Fare

Abstract:

The thermal effect on soil properties induces significant variations in hydraulic conductivity, attributable to temperature-dependent transitions in soil properties. With the elevation of temperature, there can be a notable increase in intrinsic permeability due to the release of bound water molecules into a free state, facilitated by thermal energy input. Conversely, thermal consolidation may cause a reduction in intrinsic permeability as soil particles undergo densification. This thermal response of soil permeability exhibits pronounced heterogeneity across different soil types. Furthermore, temperature-induced disruption of the bound water within clay matrices can enhance mineral-to-mineral contact, initiating irreversible deformation within the clay structure. This indicates that when soil undergoes heating-cooling cycles, plastic strain can develop, which needs to be investigated for every soil type in order to properly understand the thermo-hydro-mechanical behavior of clay. This research aims to study the effect of the heating-cooling cycle on the intrinsic permeability and the time-dependent evolution of thermal volume change of sodium bentonite clay. A temperature-controlled triaxial permeameter cell is used in this study. The selected temperatures are 20°C, 40°C, 40°C and 80°C. The hydraulic conductivity of bentonite clay under 100 kPa confining stress was measured. Hydraulic conductivity analysis was performed on a saturated sample with a void ratio e = 0.9, corresponding to a dry density of 1.2 Mg/m³. Different hydraulic gradients were applied between the top and bottom of the sample to obtain a measurable flow through it; the hydraulic gradient used for the experiment was 4000. The diameter and thickness of the sample are 101.6 mm and 25.4 mm, respectively. For both heating and cooling, the hydraulic conductivity at each temperature was measured after the flow reached the steady-state condition, to make sure the volume change due to thermal loading had stabilized. Thus, soil specimens were kept at a constant temperature during both the heating and cooling phases for at least 10-18 days to allow hydraulic transients to equilibrate. To assess the influence of temperature-induced volume changes of bentonite clay, the void ratio change over this period was monitored. It is observed that the intrinsic permeability increases by 30-40% during the heating cycle. The permeability during the cooling cycle is 10-12% lower than the permeability observed during the heating cycle at a given temperature. This reduction in permeability implies a change in soil fabric due to the thermal effect. An initial increase followed by a rapid decrease in void ratio was observed, representing possible osmotic swelling followed by thermal consolidation. After a complete heating-cooling cycle, there is a significant change in the void ratio compared to the initial void ratio of the sample. The results suggest that the microstructure of bentonite clay can change during a complete heating-cooling process, which governs macro-scale behavior such as its permeability.
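
One way to separate the viscosity effect from fabric change is to convert measured hydraulic conductivity K to intrinsic permeability k = Kμ/(ρg) with temperature-dependent water properties; a sketch using the Vogel-type viscosity correlation, with invented K values and water density held at a rough constant:

```python
import math

def water_viscosity(T_celsius):
    """Vogel-type correlation for liquid water viscosity, Pa s (T in kelvin)."""
    T = T_celsius + 273.15
    return 2.414e-5 * 10 ** (247.8 / (T - 140.0))

def intrinsic_permeability(K, T_celsius, rho=1000.0, g=9.81):
    """k = K mu / (rho g); separates viscosity effects from fabric change."""
    return K * water_viscosity(T_celsius) / (rho * g)

# Invented hydraulic conductivities (m/s) along a heating leg.
for T, K in [(20, 4.0e-13), (40, 6.5e-13), (60, 9.5e-13), (80, 1.3e-12)]:
    print(f"{T:2d} C: mu = {water_viscosity(T):.2e} Pa s, "
          f"k = {intrinsic_permeability(K, T):.2e} m^2")
```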

Keywords: bentonite, permeability, temperature, thermal volume change

Procedia PDF Downloads 49
1726 Surface Elevation Dynamics Assessment Using Digital Elevation Models, Light Detection and Ranging, GPS and Geospatial Information Science Analysis: Ecosystem Modelling Approach

Authors: Ali K. M. Al-Nasrawi, Uday A. Al-Hamdany, Sarah M. Hamylton, Brian G. Jones, Yasir M. Alyazichi

Abstract:

Surface elevation dynamics have always responded to disturbance regimes. Creating digital elevation models (DEMs) to detect surface dynamics has led to the development of several methods, devices and data clouds. DEMs can provide accurate and quick results cost-efficiently, in comparison to traditional geomatics survey techniques. Nowadays, remote sensing datasets have become a primary source for creating DEMs, including LiDAR point clouds with GIS analytic tools. However, these data need to be tested for error detection and correction. This paper evaluates various DEMs from different data sources over time for Apple Orchard Island, a coastal site in southeastern Australia, in order to detect surface dynamics. Subsequently, 30 chosen locations were examined in the field to test the error of the DEM surface detection using high-resolution global positioning systems (GPS). Results show significant surface elevation changes on Apple Orchard Island. Accretion occurred on most of the island, while surface elevation loss due to erosion is limited to the northern and southern parts. Concurrently, the projected differential correction and validation method aimed to identify errors in the dataset. The resultant DEMs demonstrated a small error ratio (≤ 3%) across the gathered datasets when compared with the fieldwork survey using RTK-GPS. As modern modelling approaches need to become more effective and accurate, applying several tools to create different DEMs on a multi-temporal scale allows easy predictions in time-cost frames with more comprehensive coverage and greater accuracy. With a DEM technique for the eco-geomorphic context, such insights into ecosystem dynamics at such a coastal intertidal system would be valuable for assessing the accuracy of the predicted eco-geomorphic risk for conservation management sustainability. This framework for evaluating historical and current anthropogenic and environmental stressors on coastal surface elevation dynamics could profitably be applied worldwide.
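
A minimal sketch of the DEM-differencing step such an assessment rests on (a DEM of difference thresholded by a level of detection); the grids, resolution and detection threshold below are invented stand-ins:

```python
import numpy as np

# Invented 5 m resolution DEMs of the same area at two survey dates.
rng = np.random.default_rng(7)
dem_t1 = rng.normal(1.2, 0.3, size=(200, 200))            # elevations, m
dem_t2 = dem_t1 + rng.normal(0.05, 0.08, size=dem_t1.shape)

dod = dem_t2 - dem_t1        # DEM of difference (DoD)
lod = 0.10                   # level of detection from the DEM error budget, m
cell_area = 5.0 * 5.0        # m2 per cell

accretion = (dod > lod).sum() * cell_area
erosion = (dod < -lod).sum() * cell_area
print(f"accretion: {accretion:.0f} m2, erosion: {erosion:.0f} m2, "
      f"mean change: {dod.mean():+.3f} m")
```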

Keywords: DEMs, eco-geomorphic-dynamic processes, geospatial information science, remote sensing, surface elevation changes

Procedia PDF Downloads 266
1725 Generative Adversarial Network Based Fingerprint Anti-Spoofing Limitations

Authors: Yehjune Heo

Abstract:

Fingerprint anti-spoofing approaches have been actively developed and applied in real-world applications. One of the main problems in fingerprint anti-spoofing is a lack of robustness to unseen samples, especially in real-world scenarios. A possible solution is to generate artificial but realistic fingerprint samples and use them for training in order to achieve good generalization. This paper contains experimental and comparative results with currently popular GAN-based methods and uses realistic synthesis of fingerprints in training in order to increase performance. Among various GAN models, the popular StyleGAN is used for the experiments. The CNN models were first trained with a dataset that did not contain generated fake images, and the accuracy along with the mean average error rate were recorded. Then, the generated fake images (fake images of live fingerprints and fake images of spoof fingerprints) were each combined with the original images (real images of live fingerprints and real images of spoof fingerprints), and various CNN models were trained. The best performance of each CNN model trained with the dataset of generated fake images was recorded, along with the accuracy and mean average error rate. We observe that current GAN-based approaches need significant improvement in anti-spoofing performance, although the overall quality of the synthesized fingerprints seems reasonable. We include an analysis of this performance degradation, especially with a small number of samples. In addition, we suggest several approaches towards improved generalization with a small number of samples, by focusing on what GAN-based approaches should and should not learn.
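
A minimal sketch of the dataset-mixing step described above, combining real and GAN-generated samples into one training loader in PyTorch; the shapes and labels are random stand-ins, and the CNN itself is omitted:

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader, TensorDataset

# Invented tensors: real live/spoof crops plus StyleGAN-generated fakes,
# all labelled live = 0 / spoof = 1 for the anti-spoofing CNN.
real_x = torch.rand(800, 1, 96, 96); real_y = torch.randint(0, 2, (800,))
fake_x = torch.rand(800, 1, 96, 96); fake_y = torch.randint(0, 2, (800,))

train_set = ConcatDataset([TensorDataset(real_x, real_y),
                           TensorDataset(fake_x, fake_y)])
loader = DataLoader(train_set, batch_size=64, shuffle=True)

x, y = next(iter(loader))
print(x.shape, y.shape)    # the CNN is then trained on this mixed loader
```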

Keywords: anti-spoofing, CNN, fingerprint recognition, GAN

Procedia PDF Downloads 183
1724 Integrating Machine Learning and Rule-Based Decision Models for Enhanced B2B Sales Forecasting and Customer Prioritization

Authors: Wenqi Liu, Reginald Bailey

Abstract:

This study explores an advanced approach to enhancing B2B sales forecasting by integrating machine learning models with a rule-based decision framework. The methodology begins with the development of a machine learning classification model to predict conversion likelihood, aiming to improve accuracy over traditional methods like logistic regression. The classification model's effectiveness is measured using metrics such as accuracy, precision, recall, and F1 score, alongside a feature importance analysis to identify key predictors. Following this, a machine learning regression model is used to forecast sales value, with the objective of reducing mean absolute error (MAE) compared to linear regression techniques. The regression model's performance is assessed using MAE, root mean square error (RMSE), and R-squared metrics, emphasizing feature contribution to the prediction. To bridge the gap between predictive analytics and decision-making, a rule-based decision model is introduced that prioritizes customers based on predefined thresholds for conversion probability and predicted sales value. This approach significantly enhances customer prioritization and improves overall sales performance by increasing conversion rates and optimizing revenue generation. The findings suggest that this combined framework offers a practical, data-driven solution for sales teams, facilitating more strategic decision-making in B2B environments.
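
A compact sketch of the combined pipeline: a classifier for conversion likelihood, a regressor for deal value, and a rule layer that maps the two predictions to priority tiers; the data, models, and thresholds are assumptions rather than the study's specification:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 5))                         # invented lead features
converted = (X[:, 0] + rng.normal(size=500) > 0).astype(int)
deal_value = np.exp(1.0 + 0.5 * X[:, 1]) * 1e4        # invented deal sizes

clf = GradientBoostingClassifier().fit(X, converted)  # conversion likelihood
reg = GradientBoostingRegressor().fit(X[converted == 1], deal_value[converted == 1])

p_convert = clf.predict_proba(X)[:, 1]
pred_value = reg.predict(X)

# Rule-based layer: predefined thresholds map the two predictions to tiers.
P_MIN, V_MIN = 0.6, 30_000.0                          # assumed business thresholds
priority = np.where((p_convert >= P_MIN) & (pred_value >= V_MIN), "A",
                    np.where(p_convert >= P_MIN, "B", "C"))
print(dict(zip(*np.unique(priority, return_counts=True))))
```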

Keywords: sales forecasting, machine learning, rule-based decision model, customer prioritization, predictive analytics

Procedia PDF Downloads 14
1723 Students' Errors in Translating Algebra Word Problems to Mathematical Structure

Authors: Ledeza Jordan Babiano

Abstract:

Translating statements into mathematical notation is one of the processes in word problem-solving. However, based on the literature, students still have difficulties with this skill. The purpose of this study was to investigate the translation errors students make when they translate algebraic word problems into mathematical structures, and to locate the errors through the lens of the Translation-Verification Model. This qualitative research study employed content analysis. During the data-gathering process, the students were asked to answer a six-item algebra word problem questionnaire, and their answers were analyzed by experts through blind coding using the Translation-Verification Model to determine their translation errors. After this, a focus group discussion was conducted, and the data gathered were analyzed through thematic analysis to determine the causes of the students' translation errors. It was found that the students' most prevalent translation error was the interpretation error, situated in the Attribute construct. The themes emerging from the FGD were: (1) the procedure of translation is strategically incorrect; (2) lack of comprehension; (3) algebra concepts related to difficulty; (4) lack of spatial skills; (5) unpreparedness for independent learning; and (6) the content of the problem is developmentally inappropriate. These themes boiled down to the major concept of preparedness for independent learning in solving mathematical problems. This concept has subcomponents, which include contextual and conceptual factors in translation. Consequently, the results provide implications for instructors and professors of mathematics to innovate their teaching pedagogies and strategies to address translation gaps among students.

Keywords: mathematical structure, algebra word problems, translation, errors

Procedia PDF Downloads 46