Search results for: apparent error rate
9561 High Accuracy Analytic Approximation for Special Functions Applied to Bessel Functions J₀(x) and Its Zeros
Authors: Fernando Maass, Pablo Martin, Jorge Olivares
Abstract:
The Bessel function J₀(x), as well as its zeros, is very important in electrodynamics and physics. In this work, a method to obtain high-accuracy analytic approximations is presented through an application to that function. In most applications of this function, the values of the zeros are very important. Analytic approximations have been obtained that are valid for all positive values of the variable x and have high accuracy both for the function and for its zeros. The approximation is determined by the simultaneous use of the power series and the asymptotic expansion. The structure of the approximation is a combination of two rational functions with elementary functions such as trigonometric functions and fractional powers. As in the Padé method, rational functions are used, but here they are combined with elementary functions such as fractional powers, hyperbolic or trigonometric functions, and others. The reason for this is that the power series of the exact function is now used together with the asymptotic expansion, which usually includes fractional powers, trigonometric functions and other types of elementary functions. The approximation must be a bridge between both expansions, and this cannot be accomplished with rational functions alone. In the simplest approximation, using 4 parameters, the maximum absolute error is less than 0.006 at x ∼ 4.9. In this case, the maximum relative error for the zeros is less than 0.003, which occurs for the second zero, but that value decreases rapidly for the other zeros. The same kind of behaviour occurs for the relative error of the maxima and minima of the function. Approximations with higher accuracy and more parameters will also be shown. All the approximations are valid for any positive value of x, and they can be calculated easily. Keywords: analytic approximations, asymptotic approximations, Bessel functions, quasirational approximations
Procedia PDF Downloads 257
9560 Strategy in Practice: Strategy Development, Strategic Error and Project Delivery
Authors: Nipun Agarwal, David Paul, Fareed Un Din
Abstract:
Strategy development and implementation is the key to an organization’s success in today’s competitive marketplace. Many organizations develop excellent strategies but are unable to implement them in order to succeed. The difference between strategic goals and their implementation is called strategic error. Strategic error occurs when an organization does not have structures in place to implement its strategy. Strategy implementation happens through projects, and having a project management method that provides certainty and agility helps an organization become more competitive in implementing strategy. Numerous project management methods exist in theory and practice. In the past, however, projects mainly used the Waterfall method, which provides certainty in terms of budget, delivery date and resourcing. It is common practice now to utilise Agile-based methods. However, Agile-based methods do not provide specific deadlines and budgets; instead, they provide agility in product design and project delivery, which is useful to companies. In some respects, the Waterfall and Agile methods are opposites of each other. Executive management prefer agility in delivering projects as the competitive landscape changes frequently. However, they also appreciate the certainty of being able to quantify budgets, deadlines and resources, which is harder for an Agile-based method to provide. This paper develops a hybrid project management method that merges the Waterfall and Agile methods to provide the positives of both approaches. Keywords: strategy, project management, strategy implementation, agile
Procedia PDF Downloads 120
9559 Adiabatic Flame Temperature: New Calculation Method
Authors: Muthana Abdul Mjed Jamel Al-gburi
Abstract:
The present paper introduces the methane-air flame and its main chemical reaction, the mass burning rate, the burning velocity, and the most important parameter, the adiabatic flame temperature, together with its evaluation. These major flame parameters are mathematically formulated and computed using a MATLAB program. The program establishes a new technique to determine the true adiabatic flame temperature. The technique implements a trial-and-error procedure: the calculated total internal energy of the product species is obtained and then that of the reactants; from both, two energy lines are drawn whose intersection determines the true required temperature. The obtained results show an accurate evaluation for the atmospheric stoichiometric (Φ = 1.05) methane-air flame, with a value of 2136.36 K. Keywords: methane-air flame, adiabatic flame temperature, reaction model, MATLAB program, new technique
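A minimal sketch of the trial-and-error balance described above (not the paper's MATLAB code): the heat released by combustion is set against the sensible enthalpy of the products, and the temperature at which the two "energy lines" cross is found by bisection. The heating value and the constant mean heat capacities are illustrative assumptions; the paper evaluates temperature-dependent internal energies instead.

```python
# Minimal sketch: adiabatic flame temperature of a stoichiometric CH4-air flame
# by trial and error. Assumptions: CH4 lower heating value and constant mean
# molar heat capacities of the products (a deliberate simplification).

LHV = 802_300.0   # J released per mol CH4 at 298 K (lower heating value)
T_REF = 298.0     # K

# CH4 + 2 O2 + 7.52 N2 -> CO2 + 2 H2O + 7.52 N2
# product moles per mol CH4 and assumed mean molar heat capacities, J/(mol K)
PRODUCTS = {"CO2": (1.0, 54.0), "H2O": (2.0, 43.0), "N2": (7.52, 33.0)}

def products_enthalpy_rise(temperature):
    """Sensible enthalpy needed to heat the products from T_REF to temperature."""
    return sum(n * cp * (temperature - T_REF) for n, cp in PRODUCTS.values())

def adiabatic_flame_temperature(tol=0.1):
    """Bisect between two trial temperatures until the two energy lines cross."""
    lo, hi = 1000.0, 3500.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if products_enthalpy_rise(mid) < LHV:  # products could absorb more heat
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(f"Estimated adiabatic flame temperature: {adiabatic_flame_temperature():.0f} K")
```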
Procedia PDF Downloads 81
9558 A Crystal Plasticity Approach to Model Dynamic Strain Aging
Authors: Burak Bal, Demircan Canadinc
Abstract:
Dynamic strain aging (DSA), resulting from the reorientation of C-Mn clusters in the core of dislocations, can provide a strain hardening mechanism. In addition, in Hadfield steel, negative strain rate sensitivity is observed due to DSA. In our study, we incorporated dynamic strain aging into crystal plasticity computations to predict the local instabilities and the corresponding negative strain rate sensitivity. Specifically, the material response of Hadfield steel was obtained from monotonic and strain-rate jump experiments under tensile loading. The strain rate range was adjusted from 10⁻⁴ to 10⁻¹ s⁻¹. The crystal plasticity modeling of the material response was carried out based on a Voce-type hardening law, and the corresponding Voce hardening parameters were determined. The solute pinning effect of carbon atoms was incorporated into the crystal plasticity simulations at the microscale by computing the shear stress contribution imposed on an arrested dislocation by a carbon atom. After crystal plasticity simulations with the modified hardening rule, which takes into account the contribution of DSA, it was seen that the model successfully predicts both the role of DSA and the corresponding strain rate sensitivity. Keywords: crystal plasticity, dynamic strain aging, Hadfield steel, negative strain rate sensitivity
Procedia PDF Downloads 264
9557 Household Size and Poverty Rate: Evidence from Nepal
Authors: Basan Shrestha
Abstract:
The relationship between household size and poverty is not well understood. Followers of Malthus argue that an increasing population adds pressure to a dwindling resource base through increasing demand, which leads to poverty. Others claim that bigger households are richer due to the availability of household labour for income-generating activities. Facts from Nepal were analyzed to examine the relationship between household size and poverty rate. The analysis of data from 3,968 Village Development Committees (VDCs)/municipalities (MPs) located in 75 districts of all five development regions revealed that the average household size had a moderate positive correlation with the poverty rate (Karl Pearson's correlation coefficient = 0.44). In a regression analysis, household size determined 20% of the variation in the poverty rate. A higher positive correlation was observed in eastern Nepal (Karl Pearson's correlation coefficient = 0.66), where the regression analysis showed that household size determined 43% of the variation in the poverty rate. The relation was weak in the far-west, possibly because the incidence of poverty there was high irrespective of household size. Overall, the facts revealed that bigger households were relatively poorer. With the increasing level of awareness and interventions for family planning, it is anticipated that household size will decrease, leading to a decreased poverty rate. In addition, the government needs to devise a mechanism to create employment opportunities for the household labour force in order to reduce poverty. Keywords: household size, poverty rate, Nepal, regional development
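A minimal sketch of the two computations reported above, Pearson correlation and a simple regression of poverty rate on average household size. The data points are hypothetical stand-ins, not the VDC/municipality dataset.

```python
# Minimal sketch: Pearson correlation and simple linear regression of poverty
# rate on average household size (illustrative data only).
import numpy as np
from scipy import stats

household_size = np.array([4.1, 4.5, 4.8, 5.2, 5.6, 5.9, 6.3])  # hypothetical
poverty_rate   = np.array([18., 22., 21., 27., 30., 33., 35.])  # hypothetical, %

r, p_value = stats.pearsonr(household_size, poverty_rate)
slope, intercept, r_lin, p_reg, se = stats.linregress(household_size, poverty_rate)

print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")
print(f"Poverty rate ~ {intercept:.1f} + {slope:.1f} x household size")
print(f"Variation explained (R^2) = {r_lin**2:.2f}")
```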
Procedia PDF Downloads 365
9556 Video Heart Rate Measurement for the Detection of Trauma-Related Stress States
Authors: Jarek Krajewski, David Daxberger, Luzi Beyer
Abstract:
Finding objective and non-intrusive measurements of emotional and psychopathological states (e.g., post-traumatic stress disorder, PTSD) is an important challenge. The approach proposed here uses photoplethysmographic imaging (PPGI), applying facial RGB camera videos to estimate heart rate levels. A pipeline for processing the raw image signal is proposed, containing different preprocessing approaches, e.g., Independent Component Analysis, Non-negative Matrix Factorization, and various other artefact correction approaches. Under resting and constant light conditions, we reached a sensitivity of 84% for pulse peak detection. The results indicate that PPGI can be a suitable solution for providing heart rate data from which these trauma-related stress states can be indirectly assessed. Keywords: heart rate, PTSD, PPGI, stress, preprocessing
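A minimal sketch of the final stage of such a pipeline, not the authors' implementation: band-pass filtering a PPGI-like trace to the pulse band and counting peaks to estimate heart rate. The synthetic 1.2 Hz signal and the 30 fps frame rate are assumptions standing in for a real facial video recording.

```python
# Minimal sketch: estimate heart rate from a PPGI-like trace by band-pass
# filtering and pulse-peak counting.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

fs = 30.0                                    # camera frame rate, Hz (assumed)
t = np.arange(0, 30, 1 / fs)                 # 30 s recording
signal = 0.02 * np.sin(2 * np.pi * 1.2 * t) + 0.01 * np.random.randn(t.size)

# Band-pass 0.7-3.0 Hz (42-180 bpm), the usual pulse band
b, a = butter(3, [0.7 / (fs / 2), 3.0 / (fs / 2)], btype="band")
filtered = filtfilt(b, a, signal)

peaks, _ = find_peaks(filtered, distance=fs * 0.4)   # >= 0.4 s between beats
heart_rate = 60.0 * (len(peaks) - 1) / (t[peaks[-1]] - t[peaks[0]])
print(f"Estimated heart rate: {heart_rate:.0f} bpm")
```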
Procedia PDF Downloads 129
9555 Gas Lift Optimization to Improve Well Performance
Authors: Mohamed A. G. H. Abdalsadig, Amir Nourian, G. G. Nasr, Meisam Babaie
Abstract:
Gas lift optimization is becoming more important nowadays in the petroleum industry. A proper lift optimization can reduce the operating cost, increase the net present value (NPV) and maximize recovery from the asset. A widely accepted definition of gas lift optimization is to obtain the maximum output under specified operating conditions. In addition, gas lift, a costly and indispensable means of recovering oil from deep reservoirs, entails solving gas lift optimization problems. Gas lift optimization is a continuous process, and there are two levels of production optimization. Total field optimization involves optimizing the surface facilities and the injection rate, which can be achieved with standard software tools. Well-level optimization can be achieved by optimizing well parameters such as the point of injection, injection rate, and injection pressure. All these aspects have been investigated and are presented in this study using experimental data and the PROSPER simulation program. The results show that the wellhead pressure has a large influence on gas lift performance and also prove that a smart gas lift valve can be used to improve gas lift performance by controlling gas injection downhole. Obtaining the optimum gas injection rate is important because excessive gas injection reduces the production rate and consequently increases the operating cost. Keywords: optimization, production rate, reservoir pressure effect, gas injection rate effect, gas injection pressure
Procedia PDF Downloads 419
9554 Characteristics of Regional Issues in Local Municipalities of Japan in Consideration of Socio-Economic Condition
Authors: Akiko Kondo, Akio Kondo
Abstract:
We are facing serious problems related to long-term depopulation and an aging society with a falling birth rate in Japan. In this situation, we are suffering from a shortfall in human resources as well as a shortage of workforce in rural regions. In addition, we are struggling with a protracted economic slump and an excessive concentration of population in the Tokyo metropolitan area. It is an urgent national issue to consider how to live in this country and what kind of structure of society and administrative policy is needed. It is necessary to clarify people’s desires for their way of living and the social assistance to be provided. The aim of this study is to clarify the characteristics of regional issues and the degree of their seriousness in local municipalities of Japan. We conducted a questionnaire survey about the regional agenda in all local municipalities in Japan and obtained responses concerning the degree of seriousness of regional issues and the degree of importance of policies. Based on the data gathered from the survey, it is apparent that many local municipalities are facing aging and declining populations. We constructed a model to analyze the factors behind population decline. Using the model, it was clarified that a population’s age structure, job opportunities, and income level affect population decline. In addition, we show a method for evaluating the state of a local municipality. Keywords: evaluation, local municipality, regional analysis, regional issue
Procedia PDF Downloads 294
9553 Design and Performance Analysis of Advanced B-Spline Algorithm for Image Resolution Enhancement
Authors: M. Z. Kurian, M. V. Chidananda Murthy, H. S. Guruprasad
Abstract:
An approach to super-resolve a low-resolution (LR) image is presented in this paper, which is very useful in multimedia communication, medical image enhancement and satellite image enhancement for obtaining a clear view of the information in the image. The proposed Advanced B-Spline method generates a high-resolution (HR) image from a single LR image and tries to retain the higher-frequency components, such as edges, in the image. The method uses the B-Spline technique and crispening. The work is evaluated qualitatively and quantitatively using Mean Square Error (MSE) and Peak Signal to Noise Ratio (PSNR). The method is also suitable for real-time applications. Different combinations of decimation and super-resolution algorithms in the presence of different noise types and noise factors are tested. Keywords: advanced b-spline, image super-resolution, mean square error (MSE), peak signal to noise ratio (PSNR), resolution down converter
Procedia PDF Downloads 402
9552 Statistical Time-Series and Neural Architecture of Malaria Patients Records in Lagos, Nigeria
Authors: Akinbo Razak Yinka, Adesanya Kehinde Kazeem, Oladokun Oluwagbenga Peter
Abstract:
Time series data are sequences of observations collected over a period of time. Such data can be used to predict health outcomes, such as disease progression, mortality, hospitalization, etc. The statistical approach is based on mathematical models that capture the patterns and trends of the data, such as autocorrelation, seasonality, and noise, while neural methods are based on artificial neural networks, which are computational models that mimic the structure and function of biological neurons. This paper compares parametric and non-parametric time series models of patients treated for malaria in Maternal and Child Health Centres in Lagos State, Nigeria. The forecasting methods considered were linear regression, integrated moving average, ARIMA and SARIMA modeling for the parametric approach, while a Multilayer Perceptron (MLP) and a Long Short-Term Memory (LSTM) network were used for the non-parametric models. The performance of each method is evaluated using the Mean Absolute Error (MAE), R-squared (R²) and Root Mean Square Error (RMSE) as criteria to determine the accuracy of each model. The study revealed that the best performance in terms of error was found in the MLP, followed by the LSTM and ARIMA models. In addition, the bootstrap aggregating technique was used to make robust forecasts when there are uncertainties in the data. Keywords: ARIMA, bootstrap aggregation, MLP, LSTM, SARIMA, time-series analysis
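A minimal sketch of the kind of comparison described above, contrasting one parametric model (ARIMA) with one non-parametric model (an MLP on lagged values) and scoring both with MAE and RMSE. The synthetic monthly series is an assumption, not the Lagos malaria records, and the LSTM, SARIMA and bagging steps are omitted for brevity.

```python
# Minimal sketch: ARIMA vs. MLP forecast comparison on a synthetic monthly series.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(0)
t = np.arange(120)
series = 200 + 0.5 * t + 30 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 10, t.size)
train, test = series[:108], series[108:]

# Parametric: ARIMA(2, 1, 1)
arima_pred = ARIMA(train, order=(2, 1, 1)).fit().forecast(steps=len(test))

# Non-parametric: MLP on 12 lagged values
def make_lags(x, n_lags=12):
    X = np.column_stack([x[i:len(x) - n_lags + i] for i in range(n_lags)])
    return X, x[n_lags:]

X_train, y_train = make_lags(train)
mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000, random_state=0)
mlp.fit(X_train, y_train)

history = list(train[-12:])
mlp_pred = []
for _ in range(len(test)):                         # recursive multi-step forecast
    nxt = mlp.predict(np.array(history[-12:]).reshape(1, -1))[0]
    mlp_pred.append(nxt)
    history.append(nxt)

for name, pred in [("ARIMA", arima_pred), ("MLP", mlp_pred)]:
    mae = mean_absolute_error(test, pred)
    rmse = np.sqrt(mean_squared_error(test, pred))
    print(f"{name}: MAE = {mae:.1f}, RMSE = {rmse:.1f}")
```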
Procedia PDF Downloads 82
9551 Growth Performance, Survival Rate and Feed Efficacy of Climbing Perch, Anabas testudineus, Fed Experimental Diet with Several Dosages of Papain Enzyme
Authors: Zainal A. Muchlisin, Muhammad Iqbal, Abdullah A. Muhammadar
Abstract:
The objective of the present study was to determine the optimum dose of papain enzyme in the diet for the growth, survival rate and feed efficacy of climbing perch (Anabas testudineus). The study was conducted at the Aquatic Laboratory of the Faculty of Veterinary, Syiah Kuala University, from January to March 2016. A completely randomized design was used. Six dosage levels of papain enzyme were tested with 4 replications, i.e. 0 g kg⁻¹ of feed, 20.0 g kg⁻¹ of feed, 22.5 g kg⁻¹ of feed, 25.0 g kg⁻¹ of feed, 27.5 g kg⁻¹ of feed, and 30.0 g kg⁻¹ of feed. The experimental fish were fed twice a day at a feeding level of 5% for 60 days. The results showed that weight gain ranged from 2.41 g to 7.37 g, total length gain from 0.67 cm to 3.17 cm, specific growth rate from 1.46% day⁻¹ to 3.41% day⁻¹, daily growth rate from 0.04 g day⁻¹ to 0.13 g day⁻¹, feed conversion ratio from 1.94 to 3.59, feed efficiency from 27.99% to 51.37%, protein retention from 3.38% to 28.28%, protein digestibility from 50.63% to 90.38%, and survival rate from 88.89% to 100%. The best values for all parameters were found at the dosage of 30.0 g papain enzyme kg⁻¹ of feed (3.0%). The ANOVA test showed that papain enzyme had a significant effect on the weight gain, total length gain, daily growth rate, specific growth rate, feed conversion ratio, feed efficiency, protein retention, protein digestibility, and survival rate of the climbing perch (Anabas testudineus). The best papain enzyme dosage was 3.0%. Keywords: betok, feed conversion ratio, freshwater fish, nutrition, feeding
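A minimal sketch of the standard formulas behind three of the reported indicators (specific growth rate, feed conversion ratio and feed efficiency); the fish weights and feed amount used here are hypothetical, not the study's measurements.

```python
# Minimal sketch of standard fish growth and feed-efficiency formulas.
import math

def specific_growth_rate(w_initial, w_final, days):
    """SGR in % per day: 100 * (ln Wf - ln Wi) / t."""
    return 100.0 * (math.log(w_final) - math.log(w_initial)) / days

def feed_conversion_ratio(feed_given, weight_gain):
    """FCR: feed consumed divided by weight gained."""
    return feed_given / weight_gain

def feed_efficiency(feed_given, weight_gain):
    """Feed efficiency in %: the inverse of FCR."""
    return 100.0 * weight_gain / feed_given

w_i, w_f, t = 1.9, 9.3, 60     # g, g, days (hypothetical fish)
feed = 18.5                    # g of feed offered (hypothetical)

print(f"SGR = {specific_growth_rate(w_i, w_f, t):.2f} % per day")
print(f"FCR = {feed_conversion_ratio(feed, w_f - w_i):.2f}")
print(f"Feed efficiency = {feed_efficiency(feed, w_f - w_i):.1f} %")
```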
Procedia PDF Downloads 241
9550 Constructions of Linear and Robust Codes Based on Wavelet Decompositions
Authors: Alla Levina, Sergey Taranov
Abstract:
The classical approach to providing noise immunity and integrity of information processed in computing devices and communication channels is to use linear codes. Linear codes have fast and efficient algorithms for encoding and decoding information, but these codes concentrate their detection and correction abilities on certain error configurations. Robust codes can protect against any configuration of errors with a predetermined probability. This is accomplished by the use of perfect nonlinear and almost perfect nonlinear functions to calculate the code redundancy. The paper presents an error-correcting coding scheme using the biorthogonal wavelet transform. The wavelet transform is applied in various fields of science; some of its applications are cleaning a signal from noise, data compression, and spectral analysis of signal components. The article suggests methods for constructing linear codes based on wavelet decomposition. For the developed constructions, we build generator and check matrices that contain the scaling function coefficients of the wavelet. Based on the linear wavelet codes, we develop robust codes that provide uniform protection against all errors. We propose two constructions of robust codes: the first class is based on the multiplicative inverse in a finite field, while in the second construction the redundancy part is the cube of the information part. This paper also investigates the characteristics of the proposed robust and linear codes. Keywords: robust code, linear code, wavelet decomposition, scaling function, error masking probability
Procedia PDF Downloads 492
9549 Improved Performance of Cooperative Scheme in the Cellular and Broadcasting System
Authors: Hyun-Jee Yang, Bit-Na Kwon, Yong-Jun Kim, Hyoung-Kyu Song
Abstract:
The cooperative transmission scheme considered here combines the cellular system and the broadcasting system. In the conventional scheme, two cellular base stations (CBSs) communicating with a user at the cell edge use a cooperative transmission scheme. When the distance between the two CBSs and the user is large, the conventional scheme does not guarantee the quality of the communication because the channel condition is bad. Therefore, if the distance between the CBSs and a user is large, the performance of the conventional scheme is degraded, and the bad channel condition further impairs performance. The proposed scheme uses two relays to communicate reliably with the CBSs when the channel condition between the CBSs and the user is poor. Using relays in a high-attenuation environment yields gains in both bit error rate (BER) and throughput performance. Keywords: cooperative communications, diversity gain, OFDM, interworking system
Procedia PDF Downloads 577
9548 Accurate Positioning Method of Indoor Plastering Robot Based on Line Laser
Authors: Guanqiao Wang, Hongyang Yu
Abstract:
There is a lot of repetitive work in the traditional construction industry, and replacing such manual tasks with robots can significantly improve production efficiency. Therefore, robots appear more and more frequently in the construction industry. Navigation and positioning are very important tasks for construction robots, and the requirements for positioning accuracy are very high. Traditional indoor robots mainly use radio-frequency or vision methods for positioning. Compared with ordinary robots, an indoor plastering robot needs to be positioned closer to the wall for wall plastering, so the requirements for positioning accuracy in construction are higher; the traditional navigation and positioning methods have a large error, which causes the robot to move without knowing its exact position, so that the wall either cannot be plastered or is plastered with a large error. A new positioning method is proposed, which is assisted by line lasers and uses image-processing-based positioning to refine the traditional positioning result. In actual work, filtering, edge detection, the Hough transform and other operations are performed on the images captured by the camera. Each time the position of the laser line is found, it is compared with the standard value, and the robot is moved or rotated to complete the positioning work. The experimental results show that the actual positioning error is reduced to less than 0.5 mm by this accurate positioning method. Keywords: indoor plastering robot, navigation, precise positioning, line laser, image processing
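A minimal sketch of the image-processing chain named above (blur, edge detection, probabilistic Hough transform, comparison with a reference position), not the robot's actual software; the file name, reference column and Hough parameters are illustrative assumptions.

```python
# Minimal sketch: detect a projected laser line and compute its offset from a
# reference position to derive a positioning correction.
import cv2
import numpy as np

def laser_line_offset(image_path, reference_x=320.0):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    blurred = cv2.GaussianBlur(img, (5, 5), 0)           # suppress noise
    edges = cv2.Canny(blurred, 50, 150)                   # edge detection
    lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                            minLineLength=100, maxLineGap=10)
    if lines is None:
        return None
    # take the longest detected segment as the laser line
    x1, y1, x2, y2 = max(lines[:, 0],
                         key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))
    line_x = 0.5 * (x1 + x2)                               # mean horizontal position
    return line_x - reference_x                            # pixels to move/rotate

offset = laser_line_offset("laser_frame.png")              # hypothetical file
if offset is not None:
    print(f"Laser line offset from reference: {offset:.1f} px")
```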
Procedia PDF Downloads 151
9547 Secure Optical Communication System Using Quantum Cryptography
Authors: Ehab AbdulRazzaq Hussein
Abstract:
Quantum cryptography (QC) is an emerging technology for secure key distribution with single-photon transmissions. In contrast to classical cryptographic schemes, the security of QC schemes is guaranteed by the fundamental laws of nature; it stems from the impossibility of distinguishing non-orthogonal quantum states with certainty. A potential eavesdropper introduces errors into the transmissions, which can later be discovered by the legitimate participants of the communication. In this paper, a modeling approach is proposed for the QC protocol BB84 using polarization coding. A single-photon source is assumed in the designed models; thus, Eve cannot use a beam-splitting strategy to eavesdrop on the quantum channel transmission, and the only eavesdropping strategy available to Eve is the intercept/resend strategy. After the quantum transmission of the QC protocol, the quantum bit error rate (QBER) is estimated and compared with a threshold value. If it is above this value, the procedure must be stopped and repeated later. Keywords: security, key distribution, cryptography, quantum protocols, Quantum Cryptography (QC), Quantum Key Distribution (QKD)
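A minimal sketch of the post-transmission step described above: simulating BB84 basis sifting and comparing the estimated QBER against an abort threshold. The noise level and the 11% threshold are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: BB84 sifting and QBER check (idealised, no physical channel model).
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

alice_bits = rng.integers(0, 2, n)
alice_bases = rng.integers(0, 2, n)           # 0 = rectilinear, 1 = diagonal
bob_bases = rng.integers(0, 2, n)

# Bob's raw results: correct when bases match, random otherwise;
# a small flip probability stands in for channel noise / eavesdropping.
noise = rng.random(n) < 0.03
bob_bits = np.where(bob_bases == alice_bases,
                    alice_bits ^ noise,
                    rng.integers(0, 2, n))

sifted = alice_bases == bob_bases              # keep only matching-basis rounds
qber = np.mean(alice_bits[sifted] != bob_bits[sifted])

THRESHOLD = 0.11                               # assumed abort threshold
print(f"Sifted key length: {sifted.sum()}, QBER = {qber:.3f}")
print("Abort and retry later" if qber > THRESHOLD else "Proceed with key distillation")
```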
Procedia PDF Downloads 412
9546 Ragging and Sludging Measurement in Membrane Bioreactors
Authors: Pompilia Buzatu, Hazim Qiblawey, Albert Odai, Jana Jamaleddin, Mustafa Nasser, Simon J. Judd
Abstract:
Membrane bioreactor (MBR) technology is challenged by the tendency for the membrane permeability to decrease due to ‘clogging’. Clogging includes ‘sludging’, the filling of the membrane channels with sludge solids, and ‘ragging’, the aggregation of short filaments to form long rag-like particles. Both sludging and ragging demand manual intervention to clear out the solids, which is time-consuming, labour-intensive and potentially damaging to the membranes. These factors impact on costs more significantly than membrane surface fouling which, unlike clogging, is largely mitigated by the chemical clean. However, practical evaluation of MBR clogging has thus far been limited. This paper presents the results of recent work attempting to quantify sludging and clogging based on simple bench-scale tests. Results from a novel ragging simulation trial indicated that rags can be formed within 24-36 hours from dispersed < 5 mm-long filaments at concentrations of 5-10 mg/L under gently agitated conditions. Rag formation occurred for both a cotton wool standard and samples taken from an operating municipal MBR, with between 15% and 75% of the added fibrous material forming a single rag. The extent of rag formation depended both on the material type or origin – lint from laundering operations forming zero rags – and the filament length. Sludging rates were quantified using a bespoke parallel-channel test cell representing the membrane channels of an immersed flat sheet MBR. Sludge samples were provided from two local MBRs, one treating municipal and the other industrial effluent. Bulk sludge properties measured comprised mixed liquor suspended solids (MLSS) concentration, capillary suction time (CST), particle size, soluble COD (sCOD) and rheology (apparent viscosity μₐ vs shear rate γ). The fouling and sludging propensity of the sludge was determined using the test cell, ‘fouling’ being quantified as the pressure incline rate against flux via the flux step test (for which clogging was absent) and sludging by photographing the channel and processing the image to determine the ratio of the clogged to unclogged regions. A substantial difference in rheological and fouling behaviour was evident between the two sludge sources, the industrial sludge having a higher viscosity but less shear-thinning than the municipal. Fouling, as manifested by the pressure increase Δp/Δt, as a function of flux from classic flux-step experiments (where no clogging was evident), was more rapid for the industrial sludge. Across all samples of both sludge origins the expected trend of increased fouling propensity with increased CST and sCOD was demonstrated, whereas no correlation was observed between clogging rate and these parameters. The relative contribution of fouling and clogging was appraised by adjusting the clogging propensity via increasing the MLSS both with and without a commensurate increase in the COD. Results indicated that whereas for the municipal sludge the fouling propensity was affected by the increased sCOD, there was no associated increased in the sludging propensity (or cake formation). The clogging rate actually decreased on increasing the MLSS. Against this, for the industrial sludge the clogging rate dramatically increased with solids concentration despite a decrease in the soluble COD. From this was surmised that sludging did not relate to fouling.Keywords: clogging, membrane bioreactors, ragging, sludge
Procedia PDF Downloads 185
9545 Relay Mining: Verifiable Multi-Tenant Distributed Rate Limiting
Authors: Daniel Olshansky, Ramiro Rodríguez Colmeiro
Abstract:
Relay Mining presents a scalable solution employing probabilistic mechanisms and crypto-economic incentives to estimate RPC volume usage, facilitating decentralized multitenant rate limiting. Network traffic from individual applications can be concurrently serviced by multiple RPC service providers, with costs, rewards, and rate limiting governed by a native cryptocurrency on a distributed ledger. Building upon established research in token bucket algorithms and distributed rate-limiting penalty models, our approach harnesses a feedback loop control mechanism to adjust the difficulty of mining relay rewards, dynamically scaling with network usage growth. By leveraging crypto-economic incentives, we reduce coordination overhead costs and introduce a mechanism for providing RPC services that are both geopolitically and geographically distributed.Keywords: remote procedure call, crypto-economic, commit-reveal, decentralization, scalability, blockchain, rate limiting, token bucket
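A minimal sketch of the classic token bucket limiter that the approach above builds upon, shown here as a single-node version; the distributed, probabilistic and crypto-economic aspects of Relay Mining are not reproduced, and the rate and capacity values are arbitrary.

```python
# Minimal sketch: single-node token bucket rate limiter.
import time

class TokenBucket:
    def __init__(self, rate_per_sec: float, capacity: float):
        self.rate = rate_per_sec          # token refill rate
        self.capacity = capacity          # burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        # refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False

bucket = TokenBucket(rate_per_sec=100, capacity=200)   # e.g. 100 relays/s per app
allowed = sum(bucket.allow() for _ in range(500))
print(f"{allowed} of 500 burst requests admitted")
```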
Procedia PDF Downloads 58
9544 A More Powerful Test Procedure for Multiple Hypothesis Testing
Authors: Shunpu Zhang
Abstract:
We propose a new multiple test called the minPOP test for testing multiple hypotheses simultaneously. Under the assumption that the test statistics are independent, we show that the minPOP test has higher global power than the existing multiple testing methods. We further propose a stepwise multiple-testing procedure based on the minPOP test and two of its modified versions (the Double Truncated and Left Truncated minPOP tests). We show that these multiple tests have strong control of the family-wise error rate (FWER). A method for finding the p-values of the proposed tests after adjusting for multiplicity is also developed. Simulation results show that the Double Truncated and Left Truncated minPOP tests, in general, have a higher number of rejections than the existing multiple testing procedures.Keywords: multiple test, single-step procedure, stepwise procedure, p-value for multiple testing
Procedia PDF Downloads 87
9543 Optimal Image Representation for Linear Canonical Transform Multiplexing
Authors: Navdeep Goel, Salvador Gabarda
Abstract:
Digital images are widely used in computer applications. Storing or transmitting uncompressed images requires considerable storage capacity and transmission bandwidth. Image compression is a means of performing transmission or storage of visual data in the most economical way. This paper explains how images can be encoded to be transmitted over a multiplexing time-frequency domain channel. Multiplexing involves packing together signals whose representations are compact in the working domain. In order to optimize transmission resources, each 4×4 pixel block of the image is transformed, by a suitable polynomial approximation, into a minimal number of coefficients. Using fewer than 4×4 coefficients per block spares a significant amount of transmitted information, but some information is lost. Different approximations for the image transformation have been evaluated: polynomial representation (Vandermonde matrix), least squares + gradient descent, 1-D Chebyshev polynomials, 2-D Chebyshev polynomials and singular value decomposition (SVD). Results have been compared in terms of nominal compression rate (NCR), compression ratio (CR) and peak signal-to-noise ratio (PSNR) in order to minimize the error function defined as the difference between the original pixel gray levels and the approximated polynomial output. The polynomial coefficients are later encoded and handled to generate chirps at a target rate of about two chirps per 4×4 pixel block and then submitted to a transmission multiplexing operation in the time-frequency domain. Keywords: chirp signals, image multiplexing, image transformation, linear canonical transform, polynomial approximation
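A minimal sketch of one of the block approximations listed above, truncated SVD of 4×4 tiles, scored by PSNR; the synthetic image, the rank-1 truncation and the omission of the chirp encoding stage are all simplifying assumptions.

```python
# Minimal sketch: rank-1 SVD approximation of 4x4 image blocks, scored by PSNR.
import numpy as np

def compress_blocks_svd(img, k=1, block=4):
    """Keep only the k largest singular values of every block x block tile."""
    out = np.empty_like(img, dtype=float)
    for r in range(0, img.shape[0], block):
        for c in range(0, img.shape[1], block):
            tile = img[r:r + block, c:c + block]
            u, s, vt = np.linalg.svd(tile, full_matrices=False)
            out[r:r + block, c:c + block] = (u[:, :k] * s[:k]) @ vt[:k, :]
    return out

def psnr(original, approx, peak=255.0):
    mse = np.mean((original - approx) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

# smooth synthetic test image (stand-in for real image data)
y, x = np.mgrid[0:64, 0:64]
image = 128 + 60 * np.sin(x / 9.0) + 40 * np.cos(y / 7.0)

approx = compress_blocks_svd(image, k=1)
# rank-1 stores 4 + 4 + 1 = 9 numbers per 16-pixel block; quantisation and the
# chirp encoding stage of the paper are omitted here.
print(f"PSNR of rank-1 block approximation: {psnr(image, approx):.1f} dB")
```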
Procedia PDF Downloads 418
9542 Numerical Simulation of Axially Loaded to Failure Large Diameter Bored Pile
Authors: M. Ezzat, Y. Zaghloul, T. Sorour, A. Hefny, M. Eid
Abstract:
The ultimate capacity of large diameter bored piles is usually determined from pile loading tests, as recommended by several international codes and foundation design standards. However, loading piles of this type until apparent failure is achieved is seldom done in practice. In this paper, numerical analyses are carried out to simulate a load test of a large diameter bored pile performed at the location of the Alzey highway bridge project (Germany). Test results for the pile load-settlement relationship up to failure, as well as results for the base and shaft resistances, are available. Apparent failure was indicated in this test by the significant increase in the induced settlement during the last load increment applied to the pile head. Measurements from this pile load test are used to assess the quality of the numerical models investigated. Three different soil material models are implemented in the analyses: Mohr-Coulomb (MC), Soft Soil (SS), and Modified Mohr-Coulomb (MMC). Very good agreement is obtained between the field-measured settlement and the settlement calculated using the MMC model. The results of the analysis also showed that the MMC constitutive model is superior to the MC and SS models in predicting the ultimate base and shaft resistances of the large diameter bored pile. After calibrating the numerical model, the behavior of large diameter bored piles under axial loads is discussed and the formation of the plastic zone around the pile is explored. The results obtained showed that the plastic zone below the base of the pile at failure extended laterally to about four times the pile diameter and vertically to about three times the pile diameter. Keywords: ultimate capacity, large diameter bored piles, plastic zone, failure, pile load test
Procedia PDF Downloads 146
9541 Homeless Population Modeling and Trend Prediction Through Identifying Key Factors and Machine Learning
Authors: Shayla He
Abstract:
Background and Purpose: According to Chamie (2017), it’s estimated that no less than 150 million people, or about 2 percent of the world’s population, are homeless. The homeless population in the United States has grown rapidly in the past four decades. In New York City, the sheltered homeless population has increased from 12,830 in 1983 to 62,679 in 2020. Knowing the trend on the homeless population is crucial at helping the states and the cities make affordable housing plans, and other community service plans ahead of time to better prepare for the situation. This study utilized the data from New York City, examined the key factors associated with the homelessness, and developed systematic modeling to predict homeless populations of the future. Using the best model developed, named HP-RNN, an analysis on the homeless population change during the months of 2020 and 2021, which were impacted by the COVID-19 pandemic, was conducted. Moreover, HP-RNN was tested on the data from Seattle. Methods: The methodology involves four phases in developing robust prediction methods. Phase 1 gathered and analyzed raw data of homeless population and demographic conditions from five urban centers. Phase 2 identified the key factors that contribute to the rate of homelessness. In Phase 3, three models were built using Linear Regression, Random Forest, and Recurrent Neural Network (RNN), respectively, to predict the future trend of society's homeless population. Each model was trained and tuned based on the dataset from New York City for its accuracy measured by Mean Squared Error (MSE). In Phase 4, the final phase, the best model from Phase 3 was evaluated using the data from Seattle that was not part of the model training and tuning process in Phase 3. Results: Compared to the Linear Regression based model used by HUD et al (2019), HP-RNN significantly improved the prediction metrics of Coefficient of Determination (R2) from -11.73 to 0.88 and MSE by 99%. HP-RNN was then validated on the data from Seattle, WA, which showed a peak %error of 14.5% between the actual and the predicted count. Finally, the modeling results were collected to predict the trend during the COVID-19 pandemic. It shows a good correlation between the actual and the predicted homeless population, with the peak %error less than 8.6%. Conclusions and Implications: This work is the first work to apply RNN to model the time series of the homeless related data. The Model shows a close correlation between the actual and the predicted homeless population. There are two major implications of this result. First, the model can be used to predict the homeless population for the next several years, and the prediction can help the states and the cities plan ahead on affordable housing allocation and other community service to better prepare for the future. Moreover, this prediction can serve as a reference to policy makers and legislators as they seek to make changes that may impact the factors closely associated with the future homeless population trend.Keywords: homeless, prediction, model, RNN
Procedia PDF Downloads 123
9540 Design of Membership Ranges for Fuzzy Logic Control of Refrigeration Cycle Driven by a Variable Speed Compressor
Authors: Changho Han, Jaemin Lee, Li Hua, Seokkwon Jeong
Abstract:
The design of membership function ranges in fuzzy logic control (FLC) is presented for robust control of a variable speed refrigeration system (VSRS). The criterion values of the membership function ranges can be derived from static experimental data, and two different sets of values are offered to compare control performance. Simulations and real experiments on the VSRS were conducted to verify the validity of the designed membership functions. The experimental results showed good agreement with the simulation results, and the error change rate and its sampling time strongly affected the control performance in the transient state of the VSRS. Keywords: variable speed refrigeration system, fuzzy logic control, membership function range, control performance
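A minimal sketch of what a membership-range design looks like in code: triangular membership functions over an error signal, where widening or narrowing the ranges is the design choice studied above. The variable names and numeric ranges are assumptions, not the paper's values.

```python
# Minimal sketch: triangular membership functions for a fuzzified error signal.
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership: rises from a to the peak at b, falls to c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# assumed membership ranges for the control error e (e.g. superheat error in K)
error_sets = {"negative": (-6, -3, 0), "zero": (-3, 0, 3), "positive": (0, 3, 6)}

def fuzzify(e, sets):
    return {label: float(triangular(e, *abc)) for label, abc in sets.items()}

print(fuzzify(1.2, error_sets))   # degrees of membership for e = 1.2
# Widening or narrowing each (a, b, c) range changes how aggressively the
# controller reacts -- this is the membership-range design choice examined above.
```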
Procedia PDF Downloads 266
9539 Heart Rate Variability as a Measure of Dairy Calf Welfare
Authors: J. B. Clapp, S. Croarkin, C. Dolphin, S. K. Lyons
Abstract:
Chronic pain or stress in farm animals impacts both on their welfare and productivity. Measuring chronic pain or stress can be problematic using hormonal or behavioural changes because hormones are modulated by homeostatic mechanisms and observed behaviour can be highly subjective. We propose that heart rate variability (HRV) can quantify chronic pain or stress in farmed animal and represents a more robust and objective measure of their welfare.Keywords: dairy calf, welfare, heart rate variability, non-invasive, biomonitor
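A minimal sketch of two standard time-domain HRV measures (SDNN and RMSSD) that such a non-invasive biomonitor could report; the RR-interval values below are hypothetical, and the interpretation comment reflects common practice rather than results from this study.

```python
# Minimal sketch: time-domain HRV measures from inter-beat (RR) intervals.
import numpy as np

rr_ms = np.array([820, 810, 835, 790, 805, 845, 815, 800, 830, 812])  # ms, hypothetical

sdnn = np.std(rr_ms, ddof=1)                     # overall variability
rmssd = np.sqrt(np.mean(np.diff(rr_ms) ** 2))    # short-term (vagal) variability

print(f"Mean HR = {60000 / rr_ms.mean():.0f} bpm, "
      f"SDNN = {sdnn:.1f} ms, RMSSD = {rmssd:.1f} ms")
# Lower HRV (smaller SDNN/RMSSD) sustained over time is commonly read as a
# marker of chronic stress or pain load.
```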
Procedia PDF Downloads 603
9538 Determinants for Discontinuing Contraceptive Use and Regional Variations in Bangladesh: A Sociological Perspective
Authors: Md. Shahriar Sabuz
Abstract:
Bangladesh, a South Asian developing country, has experienced an increasing rate of contraceptive use in the last few decades. However, one-third of pregnancies are still unintended, and the fertility rate exceeds the desired number of children. This may be because of the discontinuation of contraceptive methods, so it is necessary to find out the reasons for discontinuation. Moreover, the rate of contraception discontinuation varies between rural and urban areas and from region to region. The objectives of this study are to find out the reasons behind the discontinuation of contraceptive methods and the regional variations in those reasons. We use the Bangladesh Demographic and Health Survey (BDHS) 2014 dataset, covering ever-married women of Bangladesh aged 15-49 who have discontinued the use of contraceptive methods. The data were collected from seven districts of the country. The findings show that 23% of women have currently stopped using contraception. The most common reasons for stopping are that the women are either pregnant or want to become pregnant, and a significant number are not using a contraceptive method because of fear of side effects. Although the rate of non-use is higher in rural areas than in urban areas, the reasons for method discontinuation do not differ significantly between urban and rural areas. However, the reasons for discontinuing contraceptive methods vary significantly from region to region. Keywords: discontinuation of contraceptive, health, pregnant, fertility
Procedia PDF Downloads 98
9537 An Experimental Study of the Parameters Affecting the Compression Index of Clay Soil
Authors: Rami Rami Mahmoud Bakr
Abstract:
The constant rate of strain (CRS) test is a rapid technique that effectively measures specific properties of cohesive soil, including the rate of consolidation, hydraulic conductivity, compressibility, and stress history. Its simple operation and frequent readings enable efficient definition, especially of the compression curve. However, its limitations include an inability to handle strain-rate-dependent soil behavior, initial transient conditions, and pore pressure evaluation errors. There are currently no effective techniques for interpreting CRS data. In this study, experiments were performed to evaluate the effects of different parameters on CRS results. Extensive tests were performed on two types of clay to analyze the soil behavior during strain consolidation at a constant rate. The results were used to evaluate the transient conditions and pore pressure system.Keywords: constant rate of strain (CRS), resedimented boston blue clay (RBBC), resedimented vicksburg buckshot clay (RVBC), compression index
Procedia PDF Downloads 46
9536 Beneficiation of Low Grade Chromite Ore and Its Characterization for the Formation of Magnesia-Chromite Refractory by Economically Viable Process
Authors: Amit Kumar Bhandary, Prithviraj Gupta, Siddhartha Mukherjee, Mahua Ghosh Chaudhuri, Rajib Dey
Abstract:
Chromite ores are primarily used for the extraction of chromium, which is an expensive metal. For low grade chromite ores (containing less than 40% Cr2O3), chromium extraction is not usually economically viable. India possesses huge quantities of low grade chromite reserves. These deposits can be utilized after proper physical beneficiation. Magnetic separation techniques may be useful, after reduction, for the beneficiation of low grade chromite ore. The sample collected from the Sukinda mines was characterized by XRD, which shows predominant phases such as maghemite, chromite, silica, magnesia and alumina. The raw ore is crushed and ground to below 75 micrometer size. The microstructure of the ore shows chromite grains surrounded by a silicate matrix, with porosity observed on the exposed side of the ore. However, this ore may be utilized in refractory applications. Chromite ores contain Cr2O3, FeO, Al2O3 and other oxides, and systems like Fe-Cr and Mg-Cr have a high tendency to form spinel compounds, which usually show high refractoriness. Initially, the low grade chromite ore (containing 34.8% Cr2O3) was reduced at 1200 °C for 80 minutes with 30% coke fines by weight, before being subjected to magnetic separation. The reduction by coke converts the iron oxides from higher to lower oxidation states. The pre-reduced samples were then characterized by XRD. The magnetically inert mass was then reacted with 20% MgO by weight at 1450 °C for 2 hours. The resultant product was then tested for various refractoriness parameters such as apparent porosity, slag resistance, etc. The results were satisfactory, indicating that the resultant spinel compounds are suitable for refractory applications in elevated temperature processes. Keywords: apparent porosity, beneficiation, low-grade chromite, refractory, spinel compounds, slag resistance
Procedia PDF Downloads 390
9535 Determination of Power and Sample Size for the Zero-Inflated Negative Binomial Dependent Death Rate of Age Model (ZINBD): Regression Analysis of Mortality from Acquired Immune Deficiency Syndrome (AIDS)
Authors: Mohd Asrul Affendi Bin Abdullah
Abstract:
Sample size calculation is especially important for zero-inflated models because a large sample size is required to detect a significant effect with this type of model. This paper verifies how to present the percentage power approximation for categorical data and then extends it to zero-inflated models. The Wald test was chosen to determine the power and sample size for the AIDS death rate because it is frequently used, due to its approachability and its natural fit with several major recent contributions to sample size calculation for this test. The power calculation can be conducted when covariates are used in modeling 'excess zero' data and includes categorical covariates. An analysis of an AIDS death rate study is used in this paper. The aim of this study is to determine the power for the sample size (N = 945) of the categorical death rate based on the parameter estimates in the simulation of the study. Keywords: power sample size, Wald test, standardized rate, ZINBDR
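A minimal sketch of the normal-approximation power and sample-size calculation that underlies a Wald test of a single regression coefficient; the effect size and standard-error scale are illustrative assumptions, and the zero-inflated negative binomial variance structure of the paper is not modelled here.

```python
# Minimal sketch: power and required sample size for a two-sided Wald test.
from scipy.stats import norm

def wald_power(beta, se_per_sqrt_n, n, alpha=0.05):
    """Power of a two-sided Wald test of H0: beta = 0 at sample size n."""
    z_crit = norm.ppf(1 - alpha / 2)
    z_effect = abs(beta) / (se_per_sqrt_n / n ** 0.5)
    return norm.cdf(z_effect - z_crit) + norm.cdf(-z_effect - z_crit)

def required_n(beta, se_per_sqrt_n, target_power=0.80, alpha=0.05):
    n = 10
    while wald_power(beta, se_per_sqrt_n, n, alpha) < target_power:
        n += 1
    return n

# assumed effect size and unit-level standard error, for illustration only
print(f"Power at n = 945: {wald_power(0.15, 1.6, 945):.2f}")
print(f"n needed for 80% power: {required_n(0.15, 1.6)}")
```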
Procedia PDF Downloads 438
9534 The Influence of Active Breaks on the Attention/Concentration Performance in Eighth-Graders
Authors: Christian Andrä, Luisa Zimmermann, Christina Müller
Abstract:
Introduction: The positive relation between physical activity and cognition is commonly known. Relevant studies show that in everyday school life active breaks can lead to improvement in certain abilities (e.g. attention and concentration). A beneficial effect is in particular attributed to moderate activity. It is still unclear whether active breaks are beneficial after relatively short phases of cognitive load and whether the postulated effects of activity really have an immediate impact. The objective of this study was to verify whether an active break after 18 minutes of cognitive load leads to enhanced attention/concentration performance, compared to inactive breaks with voluntary mobile phone activity. Methodology: For this quasi-experimental study, 36 students [age: 14.0 (mean value) ± 0.3 (standard deviation); male/female: 21/15] of a secondary school were tested. In week 1, every student’s maximum heart rate (Hfmax) was determined through maximum effort tests conducted during physical education classes. The task was to run 3 laps of 300 m with increasing subjective effort (lap 1: 60%, lap 2: 80%, lap 3: 100% of the maximum performance capacity). Furthermore, first attention/concentration tests (D2-R) took place (pretest). The groups were matched on the basis of the pretest results. During week 2 and 3, crossover testing was conducted, comprising of 18 minutes of cognitive preload (test for concentration performance, KLT-R), a break and an attention/concentration test after a 2-minutes transition. Different 10-minutes breaks (active break: moderate physical activity with 65% Hfmax or inactive break: mobile phone activity) took place between preloading and transition. Major findings: In general, there was no impact of the different break interventions on the concentration test results (symbols processed after physical activity: 185.2 ± 31.3 / after inactive break: 184.4 ± 31.6; errors after physical activity: 5.7 ± 6.3 / after inactive break: 7.0. ± 7.2). There was, however, a noticeable development of the values over the testing periods. Although no difference in the number of processed symbols was detected (active/inactive break: period 1: 49.3 ± 8.8/46.9 ± 9.0; period 2: 47.0 ± 7.7/47.3 ± 8.4; period 3: 45.1 ± 8.3/45.6 ± 8.0; period 4: 43.8 ± 7.8/44.6 ± 8.0), error rates decreased successively after physical activity and increased gradually after an inactive break (active/inactive break: period 1: 1.9 ± 2.4/1.2 ± 1.4; period 2: 1.7 ± 1.8/ 1.5 ± 2.0, period 3: 1.2 ± 1.6/1.8 ± 2.1; period 4: 0.9 ± 1.5/2.5 ± 2.6; p= .012). Conclusion: Taking into consideration only the study’s overall results, the hypothesis must be dismissed. However, more differentiated evaluation shows that the error rates decreased after active breaks and increased after inactive breaks. Obviously, the effects of active intervention occur with a delay. The 2-minutes transition (regeneration time) used for this study seems to be insufficient due to the longer adaptation time of the cardio-vascular system in untrained individuals, which might initially affect the concentration capacity. To use the positive effects of physical activity for teaching and learning processes, physiological characteristics must also be considered. Only this will ensure optimum ability to perform.Keywords: active breaks, attention/concentration test, cognitive performance capacity, heart rate, physical activity
Procedia PDF Downloads 318
9533 Comparing Machine Learning Estimation of Fuel Consumption of Heavy-Duty Vehicles
Authors: Victor Bodell, Lukas Ekstrom, Somayeh Aghanavesi
Abstract:
Fuel consumption (FC) is one of the key factors in determining the expenses of operating a heavy-duty vehicle. A customer may therefore request an estimate of the FC of a desired vehicle. The modular design of heavy-duty vehicles allows their construction by specifying the building blocks, such as gear box, engine and chassis type. If the combination of building blocks is unprecedented, it is unfeasible to measure the FC, since this would first require the construction of the vehicle. This paper proposes a machine learning approach to predict FC. The study uses vehicle-specific and operational environmental conditions information, such as road slopes and driver profiles, for around 40,000 vehicles. All vehicles have diesel engines and a mileage of more than 20,000 km. The data are used to investigate the accuracy of the machine learning algorithms linear regression (LR), k-nearest neighbor (KNN) and artificial neural networks (ANN) in predicting fuel consumption for heavy-duty vehicles. The performance of the algorithms is evaluated by reporting the prediction error on both simulated data and operational measurements, and the algorithms are compared using nested cross-validation and statistical hypothesis testing. The statistical evaluation procedure finds that ANNs have the lowest prediction error compared to LR and KNN in estimating fuel consumption on both simulated and operational data. The models have a mean relative prediction error of 0.3% on simulated data and 4.2% on operational data. Keywords: artificial neural networks, fuel consumption, Friedman test, machine learning, statistical hypothesis testing
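A minimal sketch of a nested cross-validation comparison of LR, KNN and ANN regressors, scored by mean relative prediction error as above; the synthetic regression data, hyperparameter grids and fold counts are assumptions, not the study's setup, and the statistical hypothesis-testing step (e.g. the Friedman test) is omitted.

```python
# Minimal sketch: nested CV comparison of LR, KNN and ANN on synthetic data.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import GridSearchCV, KFold, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_regression(n_samples=600, n_features=10, noise=5.0, random_state=0)
y = y - y.min() + 20.0                     # keep targets positive, like FC values

models = {
    "LR": (make_pipeline(StandardScaler(), LinearRegression()), {}),
    "KNN": (make_pipeline(StandardScaler(), KNeighborsRegressor()),
            {"kneighborsregressor__n_neighbors": [3, 5, 9]}),
    "ANN": (make_pipeline(StandardScaler(), MLPRegressor(max_iter=3000, random_state=0)),
            {"mlpregressor__hidden_layer_sizes": [(16,), (32, 16)]}),
}

outer = KFold(n_splits=5, shuffle=True, random_state=0)
inner = KFold(n_splits=3, shuffle=True, random_state=0)

for name, (pipe, grid) in models.items():
    tuned = GridSearchCV(pipe, grid, cv=inner) if grid else pipe   # inner loop tunes
    pred = cross_val_predict(tuned, X, y, cv=outer)                # outer loop scores
    rel_err = np.mean(np.abs(pred - y) / y) * 100
    print(f"{name}: mean relative error = {rel_err:.1f}%")
```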
Procedia PDF Downloads 185
9532 3D CFD Modelling of the Airflow and Heat Transfer in Cold Room Filled with Dates
Authors: Zina Ghiloufi, Tahar Khir
Abstract:
A transient three-dimensional computational fluid dynamics (CFD) model is developed to determine the velocity and temperature distributions at different positions in a cold room during the pre-cooling of dates. The turbulence model used is the k-ω Shear Stress Transport (SST) model with the standard wall function, with air as the working fluid. The numerical results obtained show that the cooling rate is not uniform inside the room; the product in the middle of the room has a slower cooling rate. This cooling heterogeneity has a large effect on the energy consumption during cold storage. Keywords: CFD, cold room, cooling rate, dates, numerical simulation, k-ω (SST)
Procedia PDF Downloads 238