Search results for: Weibull distribution model
19799 Numerical Simulation of the Flowing of Ice Slurry in Seawater Pipe of Polar Ships
Authors: Li Xu, Huanbao Jiang, Zhenfei Huang, Lailai Zhang
Abstract:
In recent years, with global warming, the sea-ice extent of the Arctic has undergone an evident decrease, and the Arctic channel has attracted the attention of the shipping industry. Ice crystals present in the seawater of the Arctic channel enter the ship's seawater system with the seawater and have been found to block the seawater pipe. In severe cases, this can lead to cooler failure, auxiliary machine errors, and even paralysis of the ship's power system. In order to reduce the effect of high temperature in auxiliary equipment, the seawater system uses external ice-water to participate in the cooling cycle and achieve a flowing state, from which the distribution of ice crystals in the seawater pipe can be obtained. As the ice slurry system is a solid-liquid two-phase system, the flow process of the ice-water mixture is very complex and diverse. In this paper, the flow process of ice slurry in a seawater pipe is simulated with fluid dynamics simulation software based on the k-ε turbulence model. As the ice packing fraction is a key factor affecting the distribution of ice crystals, the influence of the ice packing fraction on the flow process of ice slurry is analyzed. The simulation results show that when the ice packing fraction is relatively large, the distribution of ice crystals is uneven during the flow of the seawater, which increases the possibility of blocking. This provides a scientific forecasting method for the formation of ice blockages in seawater piping systems and has important significance for the operational reliability of polar ships in the future.
Keywords: ice slurry, seawater pipe, ice packing fraction, numerical simulation
Procedia PDF Downloads 367

19798 Optimization of Multi Commodities Consumer Supply Chain: Part 1-Modelling
Authors: Zeinab Haji Abolhasani, Romeo Marian, Lee Luong
Abstract:
This paper and its companions (Part II, Part III) concentrate on optimizing a class of supply chain problems known as the Multi-Commodities Consumer Supply Chain (MCCSC) problem. The MCCSC problem belongs to the production-distribution (P-D) planning category. It aims to determine facility locations, consumer allocations, and facility configurations so as to minimize the total cost (CT) of the entire network. These facilities can be, but are not limited to, manufacturer units (MUs), distribution centres (DCs), and retailers/end-users (REs). To address this problem, three major tasks are undertaken. First, a mixed-integer non-linear programming (MINP) mathematical model is developed. Then, the system's behaviour under different conditions is observed using a simulation modelling tool. Finally, the most optimal solution (minimum CT) of the system is obtained using a multi-objective optimization technique. Due to the large size of the problem and the uncertainties in finding the optimal solution, an integration of modelling and simulation methodologies is proposed, followed by the development of a new approach known as GASG, a genetic algorithm based on granular simulation, which is the subject of the methodology of this research. In Part II, the MCCSC is simulated using a discrete-event simulation (DES) device within an integrated environment of SimEvents and Simulink of the MATLAB® software package, followed by a comprehensive case study to examine the given strategy. The effect of genetic operators on the optimal/near-optimal solution obtained by the simulation model is discussed in Part III.
Keywords: supply chain, genetic algorithm, optimization, simulation, discrete event system
Procedia PDF Downloads 316
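To illustrate the genetic-algorithm core that GASG builds on, the sketch below evolves a binary open/closed vector for candidate facilities against a toy total-cost function. The costs, the encoding, and the GA parameters are illustrative assumptions, not the authors' GASG implementation or their MCCSC cost model.

```python
import random

random.seed(8)

# Toy stand-in for the network total cost CT: fixed opening costs plus a
# crude transport term; the real MCCSC model is a mixed-integer non-linear
# program over MUs, DCs and REs.
OPEN_COST = [40, 55, 30, 70, 45]
DEMAND = 100

def total_cost(genome):
    n_open = sum(genome)
    if n_open == 0:
        return float("inf")               # at least one facility must open
    fixed = sum(c for c, g in zip(OPEN_COST, genome) if g)
    return fixed + DEMAND / n_open * 10   # fewer facilities -> longer hauls

def ga(pop_size=30, generations=50, p_mut=0.1):
    pop = [[random.randint(0, 1) for _ in OPEN_COST] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=total_cost)
        elite = pop[: pop_size // 2]      # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, len(a))             # one-point crossover
            child = [1 - g if random.random() < p_mut else g
                     for g in a[:cut] + b[cut:]]          # bit-flip mutation
            children.append(child)
        pop = elite + children
    best = min(pop, key=total_cost)
    return best, total_cost(best)

print(ga())
```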

19797 The Effect of Excel on Undergraduate Students’ Understanding of Statistics and the Normal Distribution
Authors: Masomeh Jamshid Nejad
Abstract:
Nowadays, statistical literacy is not merely a useful skill but an essential one, with broad applications across diverse fields, especially in operational decision areas such as business management, finance, and economics. As such, learning and a deep understanding of statistical concepts are essential in the context of business studies. One of the crucial topics in statistical theory and its applications is the normal distribution, often called the bell-shaped curve. To interpret data and conduct hypothesis tests, comprehending the properties of the normal distribution (the mean and standard deviation) is essential for business students. This requires undergraduate students in economics and business management to visualize and work with data following a normal distribution. Since technology is now interconnected with education, it is important to teach statistics topics to undergraduate students in the context of Python, R-Studio, and Microsoft Excel. This research endeavours to shed light on the effect of Excel-based instruction on learners’ knowledge of statistics, specifically the central concept of the normal distribution. Two groups of undergraduate students (from the Business Management program) were compared in this study: one group underwent Excel-based instruction, and the other relied only on traditional teaching methods. We analyzed experiential data and BBA participants’ responses to statistics-related questions focusing on the normal distribution, including its key attributes, such as the mean and standard deviation. The results of our study indicate that exposing students to Excel-based learning supports them in comprehending statistical concepts more effectively than the group taught with the traditional method. In addition, students receiving Excel-based instruction showed a stronger ability in visualizing and interpreting data following a normal distribution.
Keywords: statistics, excel-based instruction, data visualization, pedagogy
Procedia PDF Downloads 53

19796 On Modeling Data Sets by Means of a Modified Saddlepoint Approximation
Authors: Serge B. Provost, Yishan Zhang
Abstract:
A moment-based adjustment to the saddlepoint approximation is introduced in the context of density estimation. First applied to univariate distributions, this methodology is extended to the bivariate case. It then entails estimating the density function associated with each marginal distribution by means of the saddlepoint approximation and applying a bivariate adjustment to the product of the resulting density estimates. The connection to the distribution of empirical copulas is pointed out, and a novel approach is proposed for estimating the support of a distribution. As these results rely solely on sample moments and empirical cumulant-generating functions, they are particularly well suited for modeling massive data sets. Several illustrative applications are presented.
Keywords: empirical cumulant-generating function, endpoints identification, saddlepoint approximation, sample moments, density estimation
Procedia PDF Downloads 162

19795 Two-Dimensional Observation of Oil Displacement by Water in a Petroleum Reservoir through Numerical Simulation and Application to a Petroleum Reservoir
Authors: Ahmad Fahim Nasiry, Shigeo Honma
Abstract:
We examine two-dimensional oil displacement by water in a petroleum reservoir. The pore fluids are immiscible, and the porous medium is homogeneous and isotropic in the horizontal direction. Buckley-Leverett theory and a combination of the Laplacian and Darcy’s law are used to study fluid flow through the porous medium, and the Laplacian that defines the dispersion and diffusion of fluid in sand containing heavy oil is discussed. The reservoir is homogeneous in the horizontal direction, as expressed by the governing partial differential equation. The two main quantities observed are the water saturation and the pressure distribution in the reservoir, which are evaluated for predicting oil recovery in two dimensions by a physical and mathematical simulation model. We review the numerical simulation that solves the difficult partial differential reservoir equations. In the numerical simulations, the saturation and pressure equations are solved by the iterative alternating direction implicit (IADI) method and the iterative alternating direction explicit (IDE) method, respectively, according to the finite difference discretization. Moreover, to better understand the displacement of oil by water and the amount of water dispersion in the reservoir, an interpolated contour line of the water distribution of the five-spot pattern, which provides an approximate solution that agrees well with the experimental results, is also presented. Finally, a computer program is developed to solve the pressure and water saturation equations and to draw the pressure and water distribution contour lines for the reservoir.
Keywords: numerical simulation, immiscible, finite difference, IADI, IDE, waterflooding
Procedia PDF Downloads 331

19794 Effect of Gender on Carcass Parameters in Japanese Quail
Authors: M. Bolacali
Abstract:
This study was conducted to determine the effects of sex on the carcass characteristics of Japanese quails. A total of 320 one-day-old quail chicks (160 for each sex group) were randomly allocated to the sex groups according to a completely randomized design. Each sex group was then divided into five replicate groups of 32 chicks, and the chicks of all replicate groups were housed in cages. The normality of the distribution of all data was tested with the Shapiro-Wilk test at a 95% confidence interval, and a P value of ≤ 0.05 was interpreted as significant. The statistical analysis of the normally distributed data was carried out with the general linear model procedure of the SPSS software, with results expressed as mean ± standard deviation of five replications. Duncan’s multiple range test was used for multiple comparisons where group effects were significant; data points bearing different letters differ significantly at P ≤ 0.05. For data whose distribution differed from normal, the Kruskal-Wallis H-test was applied as a nonparametric test, with results expressed as median, minimum, and maximum values; pairwise comparisons of groups were made when the Kruskal-Wallis H-test was significant. The study period lasted 42 days. Hot carcass, cold carcass, heart, and leg percentages were higher in male quails than in female quails (P < 0.05), whereas liver and breast percentages were higher in female quails than in male quails (P > 0.05). The highest slaughter and carcass weight values were determined in the female quails in the cage. In conclusion, quail meat producers who would like to obtain higher carcass weights and make more economic profit may be recommended to raise female quails in cages.
Keywords: carcass yield, chick, gender, management
Procedia PDF Downloads 188

19793 Reducing Uncertainty of Monte Carlo Estimated Fatigue Damage in Offshore Wind Turbines Using FORM
Authors: Jan-Tore H. Horn, Jørgen Juncher Jensen
Abstract:
Uncertainties related to fatigue damage estimation of non-linear systems are highly dependent on the tail behaviour and extreme values of the stress-range distribution. By using a combination of the First Order Reliability Method (FORM) and Monte Carlo simulations (MCS), the accuracy of the fatigue estimates may be improved for the same computational effort. The method is applied to a bottom-fixed, monopile-supported large offshore wind turbine, which is a non-linear and dynamically sensitive system. Different curve-fitting techniques for the fatigue damage distribution have been used depending on the sea-state-dependent response characteristics, and the effect of a bi-linear S-N curve is discussed. Finally, analyses are performed for several environmental conditions to investigate the long-term applicability of this multistep method. Wave loads are calculated using state-of-the-art theory, while wind loads are applied with a simplified model based on rotor thrust coefficients.
Keywords: fatigue damage, FORM, monopile, Monte Carlo, simulation, wind turbine
Procedia PDF Downloads 260
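As a rough illustration of why the tail matters, the sketch below estimates Miner-rule fatigue damage by plain Monte Carlo, with stress ranges drawn from an assumed Weibull model and a one-slope S-N curve; all parameter values are invented for illustration and do not come from the paper's turbine model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed Weibull stress-range model and one-slope S-N curve (illustrative
# values only): N(S) = K * S**(-m); damage accumulates by Miner's rule.
m, K = 3.0, 1e12
shape, scale = 0.8, 30.0        # heavy-tailed stress ranges (MPa)

def mc_damage_per_cycle(n_cycles):
    s = scale * rng.weibull(shape, n_cycles)
    return np.sum(s**m / K) / n_cycles   # mean Miner increment per cycle

# Convergence of the plain MCS estimate: the Weibull tail dominates the sum,
# which is why FORM-assisted tail fitting reduces the samples needed.
for n in (10**3, 10**4, 10**5, 10**6):
    print(n, mc_damage_per_cycle(n))
```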

19792 Understanding the Influence of Fibre Meander on the Tensile Properties of Advanced Composite Laminates
Authors: Gaoyang Meng, Philip Harrison
Abstract:
When manufacturing composite laminates, the fibre directions within the laminate are never perfectly straight and inevitably contain some degree of stochastic in-plane waviness or ‘meandering’. In this work, we aim to understand the relationship between the degree of meandering of the fibre paths and the resulting uncertainty in the laminate’s final mechanical properties. To do this, a numerical tool is developed to automatically generate meandering fibre paths in each of the laminate's 8 plies (using Matlab), and after mapping this information into finite element simulations (using Abaqus), the statistical variability of the tensile mechanical properties of a [45°/90°/-45°/0°]s carbon/epoxy (IM7/8552) laminate is predicted. The stiffness, first-ply failure strength, and ultimate failure strength are obtained. Results are generated by inputting the degree of variability in the fibre paths, and the laminate is then examined in all directions (from 0° to 359° in increments of 1°). The resulting predictions are output as flower (polar) plots for convenient analysis. The average fibre orientation of each ply in a given laminate is determined by the laminate layup code [45°/90°/-45°/0°]s; however, in each case, the plies contain increasingly large amounts of in-plane waviness (quantified by the standard deviation of the fibre direction in each ply across the laminate). Four different amounts of variability in the fibre direction are tested (2°, 4°, 6° and 8°). Results show that both the average tensile stiffness and the average tensile strength decrease, while the standard deviations increase, with an increasing degree of fibre meander. The variability in stiffness is found to be relatively insensitive to the rotation angle, but the variability in strength is sensitive. Specifically, the uncertainty in laminate strength is relatively low at orientations centred around multiples of the 45° rotation angle and relatively high between these rotation angles. To concisely represent all the information contained in the various polar plots, rotation-angle-dependent Weibull distribution equations are fitted to the data. The resulting equations can be used to quickly estimate the size of the error bars for the different mechanical properties resulting from the amount of fibre directional variability contained within the laminate. A longer-term goal is to use these equations to quickly introduce realistic variability at the component level.
Keywords: advanced composite laminates, FE simulation, in-plane waviness, tensile properties, uncertainty quantification
Procedia PDF Downloads 89
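A minimal sketch of the final fitting step, assuming strength samples at one rotation angle and a conventional two-parameter Weibull with the location fixed at zero; the sample values and parameters are synthetic stand-ins for the Abaqus outputs.

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(1)

# Hypothetical ultimate-strength samples (MPa) for one rotation angle; in the
# paper these would come from the Abaqus runs at a given meander level.
strengths = weibull_min.rvs(c=25.0, scale=800.0, size=40, random_state=rng)

# Fit a two-parameter Weibull (location fixed at zero), as is conventional
# for composite strength data.
c_hat, loc, scale_hat = weibull_min.fit(strengths, floc=0.0)
print(f"Weibull modulus (shape) = {c_hat:.1f}, scale = {scale_hat:.1f} MPa")

# The fitted distribution gives quick error bars, e.g. a central 95% interval:
lo, hi = weibull_min.ppf([0.025, 0.975], c_hat, loc, scale_hat)
print(f"95% of strengths expected within [{lo:.0f}, {hi:.0f}] MPa")
```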

19791 Comparison of Receiver Operating Characteristic Curve Smoothing Methods
Authors: D. Sigirli
Abstract:
The Receiver Operating Characteristic (ROC) curve is a commonly used statistical tool for evaluating the diagnostic performance of screening and diagnostic tests with continuous or ordinal-scale results, which aim to predict the probability of the presence or absence of a condition, usually a disease. When the test results are measured as numeric values, sensitivity and specificity can be computed across all possible threshold values which discriminate the subjects as diseased or non-diseased. There are infinitely many possible decision thresholds along the continuum of the test results, and the ROC curve presents the trade-off between sensitivity and 1-specificity as the threshold changes. The empirical ROC curve, a non-parametric estimator of the ROC curve, is robust and represents the data accurately. However, especially for small sample sizes, it suffers from variability, and since it is a step function, there can be different false positive rates for one true positive rate value and vice versa. Moreover, because the estimated ROC curve has a jagged form while the true ROC curve is a smooth curve, it underestimates the true ROC curve. Since the true ROC curve is assumed to be smooth, several smoothing methods have been explored. These include using kernel estimates or log-concave densities, fitting the parameters of a specified density function to the data by maximum-likelihood fitting of univariate distributions, or creating a probability distribution by fitting the specified distribution to the data and using smooth versions of the empirical distribution functions. In the present paper, we propose a smooth ROC curve estimate based on a boundary-corrected kernel function and compare the performance of ROC curve smoothing methods for diagnostic test results coming from different distributions and different sample sizes. We performed a simulation study with 1000 repetitions to compare the performance of the different methods in different scenarios. The performance of the proposed method was typically better than that of the empirical ROC curve and only slightly worse than the binormal model when the underlying samples were in fact generated from the normal distribution.
Keywords: empirical estimator, kernel function, smoothing, receiver operating characteristic curve
Procedia PDF Downloads 152
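A minimal sketch of kernel-based ROC smoothing (without the paper's boundary correction), assuming two synthetic samples of test results and Silverman's rule-of-thumb bandwidth:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, 50)       # non-diseased test results
y = rng.normal(1.0, 1.0, 50)       # diseased test results

def kernel_cdf(t, data, h):
    """Gaussian-kernel estimate of P(X <= t); no boundary correction here."""
    return norm.cdf((t - data[:, None]) / h).mean(axis=0)

h_x = 1.06 * x.std() * len(x) ** -0.2    # Silverman's rule-of-thumb bandwidth
h_y = 1.06 * y.std() * len(y) ** -0.2
grid = np.linspace(-4.0, 5.0, 400)

fpr = 1.0 - kernel_cdf(grid, x, h_x)     # 1 - specificity across thresholds
tpr = 1.0 - kernel_cdf(grid, y, h_y)     # sensitivity
auc = -np.trapz(tpr, fpr)                # fpr decreases along the grid
print(f"smoothed AUC ~ {auc:.3f}")
```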

19790 Parametric Analysis of Solid Oxide Fuel Cell Using Lattice Boltzmann Method
Authors: Abir Yahya, Hacen Dhahri, Khalifa Slimi
Abstract:
The present paper deals with a numerical simulation of the temperature field inside the components of a solid oxide fuel cell (SOFC). The temperature distribution is investigated for a co-flow planar SOFC comprising air and fuel channels and two ceramic electrodes, the anode and the cathode, separated by a dense ceramic electrolyte. The Lattice Boltzmann method (LBM) is used for the numerical simulation of the physical problem. The effects of the inlet temperature, the anode thermal conductivity, and the current density on the temperature distribution are discussed. It was found that the temperature distribution is very sensitive to the inlet temperature and the current density.
Keywords: heat sources, Lattice Boltzmann method, solid oxide fuel cell, temperature
Procedia PDF Downloads 309

19789 Structural Damage Detection Using Modal Data Employing Teaching Learning Based Optimization
Authors: Subhajit Das, Nirjhar Dhang
Abstract:
Structural damage detection is a challenging task in the field of structural health monitoring (SHM). Damage detection methods mainly focus on determining the location and severity of the damage. Model updating is a well-known method to locate and quantify damage: an error function is defined in terms of the difference between the signal measured in an ‘experiment’ and the signal obtained from the undamaged finite element model; this error function is minimised with a suitable algorithm, and the finite element model is updated accordingly to match the measured response. The damage location and severity can then be identified from the updated model. In this paper, an error function is defined in terms of modal data, viz. frequencies and the modal assurance criterion (MAC), which is derived from the eigenvectors. This error function is minimized by the teaching-learning-based optimization (TLBO) algorithm, and the finite element model is updated accordingly to locate and quantify the damage. Damage is introduced in the model by reducing the stiffness of a structural member. The ‘experimental’ data are simulated by finite element modelling, and the error due to experimental measurement is introduced into the synthetic ‘experimental’ data by adding random noise following a Gaussian distribution. The efficiency and robustness of this method are demonstrated through three examples: a truss, a beam, and a frame problem. The results show that the TLBO algorithm efficiently detects both the damage location and the severity of damage using modal data.
Keywords: damage detection, finite element model updating, modal assurance criteria, structural health monitoring, teaching learning based optimization
Procedia PDF Downloads 215
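The teacher and learner phases of TLBO can be sketched as below for a generic minimization problem; the sphere function stands in for the frequency/MAC error function, and the population size and iteration count are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(3)

def objective(X):
    # Stand-in for the model-updating error built from frequency and MAC
    # residuals; here simply the sphere function for illustration.
    return np.sum(X**2, axis=-1)

def tlbo(dim=5, pop=20, iters=100, lb=-1.0, ub=1.0):
    X = rng.uniform(lb, ub, (pop, dim))
    for _ in range(iters):
        # Teacher phase: move learners toward the best solution.
        f = objective(X)
        teacher = X[np.argmin(f)]
        Tf = rng.integers(1, 3, (pop, 1))       # teaching factor in {1, 2}
        Xnew = np.clip(X + rng.random((pop, dim)) * (teacher - Tf * X.mean(0)),
                       lb, ub)
        better = objective(Xnew) < f
        X[better] = Xnew[better]
        # Learner phase: each learner interacts with a random partner.
        f = objective(X)
        j = rng.permutation(pop)
        step = np.where((f < f[j])[:, None], X - X[j], X[j] - X)
        Xnew = np.clip(X + rng.random((pop, dim)) * step, lb, ub)
        better = objective(Xnew) < f
        X[better] = Xnew[better]
    f = objective(X)
    return X[np.argmin(f)], float(f.min())

print(tlbo()[1])   # near-zero error indicates successful minimisation
```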

19788 Identifying Model to Predict Deterioration of Water Mains Using Robust Analysis
Authors: Go Bong Choi, Shin Je Lee, Sung Jin Yoo, Gibaek Lee, Jong Min Lee
Abstract:
In South Korea, it is difficult to obtain the data needed for statistical pipe assessment. In this paper, to address this issue, we examine how various previously proposed statistical models respond to data mixed with noise and whether they are applicable in South Korea. Three major types of model are studied; where data are presented in the original papers, we add noise to the data and observe how the model response changes. Moreover, we generate data from the models in those papers and analyse the effect of noise. From this, we can assess the robustness and the applicability in Korea of each model.
Keywords: proportional hazard model, survival model, water main deterioration, ecological sciences
Procedia PDF Downloads 743

19787 The Effects of Distribution Channels on the Selling Prices of Hotels in Time of Crisis
Authors: Y. Yılmaz, C. Ünal, A. Dursun
Abstract:
Distribution channels play a significant role for hotels. Direct and indirect options for selling hotel rooms have increased, especially with the help of new technologies, i.e., hotels’ own websites and online booking sites. Although these options emerged as tools for diversifying the distribution channels, a vast number of hotels, mostly resort hotels, are still heavily dependent upon international tour operators when selling their products. On the other hand, the hotel sector is highly vulnerable to crises: an economic, political, or any other crisis can affect hotels very badly, so it is critical to have the right balance of distribution channels to avoid the adverse impacts of a crisis. This study aims to examine the impact of a general crisis on the selling prices of hotels with different distribution-channel weights. The study was conducted in Turkey, where various crises occurred in 2015 and 2016 that had great negative impacts on Turkish tourism and led to enormous reductions in occupancy rates and selling prices. 112 upscale resort hotels in Antalya, the most popular tourism destination in Turkey, participated in the research. According to the results, hotels highly dependent on international tour operators are forced to reduce their room prices in times of crisis more than hotels which make greater use of their own websites. It was also found that the decline in room prices is limited for hotels working with national tour operators and travel agencies in times of crisis.
Keywords: marketing channels, crisis, hotel, international tour operators, online travel agencies
Procedia PDF Downloads 320

19786 Evaluation of Reliability Flood Control System Based on Uncertainty of Flood Discharge, Case Study Wulan River, Central Java, Indonesia
Authors: Anik Sarminingsih, Krishna V. Pradana
Abstract:
The failure of a flood control system can be caused by various factors, such as not considering the uncertainty of the design flood, causing the capacity of the flood control system to be exceeded. The presence of uncertainty is recognized as a serious issue in hydrological studies. Uncertainty in hydrological analysis is influenced by many factors, from the reading of water elevation data and rainfall data to the selection of the method of analysis. In hydrological modelling, the selection of models and parameters corresponding to the watershed conditions should be evaluated with a hydraulic model of the river as a drainage channel. River cross-section capacity is the first line of defence in assessing the reliability of a flood control system, and the reliability of the river capacity describes the potential magnitude of flood risk. The case study in this research is the Wulan River in Central Java, where floods occur almost every year despite flood control efforts such as levees, a floodway, and diversion. The flood-affected areas include several sub-districts, mainly in Kabupaten Kudus and Kabupaten Demak. The first step is a frequency analysis of the discharge observations from the Klambu weir, for which time series data are available from 1951-2013. The frequency analysis is performed using several frequency distribution models, such as the Gumbel, Normal, Log-Normal, Pearson Type III, and Log-Pearson distributions. Because the results of these models overlap within one standard deviation, the maximum flood discharge for lower return periods may exceed the average discharge for larger return periods. The next step is a hydraulic analysis to evaluate the reliability of the river capacity based on the flood discharges resulting from the several methods. The design flood discharge of the flood control system is selected as the result of the method closest to the bankfull capacity of the river.
Keywords: design flood, hydrological model, reliability, uncertainty, Wulan river
Procedia PDF Downloads 294
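For the frequency-analysis step, a sketch of fitting one of the named models (Gumbel) to annual maximum discharges and reading off design floods for several return periods; the discharge series here is invented, not the Klambu weir record.

```python
import numpy as np
from scipy.stats import gumbel_r

# Hypothetical annual maximum discharges (m^3/s); the study would use the
# 1951-2013 series observed at the Klambu weir instead.
q_annual_max = np.array([820, 640, 910, 770, 1050, 690, 880, 720, 990, 840,
                         760, 930, 670, 1120, 800])

loc, scale = gumbel_r.fit(q_annual_max)

# Design discharge for a return period T is the quantile at 1 - 1/T.
for T in (2, 5, 10, 25, 50, 100):
    qT = gumbel_r.ppf(1.0 - 1.0 / T, loc, scale)
    print(f"T = {T:3d} yr  ->  Q ~ {qT:6.0f} m^3/s")
```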

19785 Extraction of Strontium Ions through Ligand Assisted Ionic Liquids
Authors: Pradeep Kumar, Abhishek Kumar Chandra, Ashok Khanna
Abstract:
The extraction of strontium by the crown ether DCH18C6 has been investigated in the [BMIM][TF2N] ionic liquid (IL), giving higher extraction (~98%) and distribution ratios compared to other organic solvents (dodecane, hexane, and isodecyl alcohol + dodecane). The distribution ratio of Sr in the IL at 0.15 M DCH18C6 indicates an enhancement of 20000, 2000, and 500 times over dodecane, hexane, and 5% isodecyl alcohol + 95% dodecane, respectively, at 0.01 M aqueous acidity. In the presence of the IL, Sr extraction decreases with increasing HNO3 concentration in the aqueous phase, whereas the opposite trend was observed with the organic solvents. The extraction of Sr initially increases with increasing DCH18C6 concentration in the IL, finally reaching an asymptotic constant.
Keywords: distribution ratio, ionic liquid, ligand, organic solvent, stripping
Procedia PDF Downloads 443

19784 Recommendations for Data Quality Filtering of Opportunistic Species Occurrence Data
Authors: Camille Van Eupen, Dirk Maes, Marc Herremans, Kristijn R. R. Swinnen, Ben Somers, Stijn Luca
Abstract:
In ecology, species distribution models are commonly implemented to study species-environment relationships. These models increasingly rely on opportunistic citizen science data when high-quality species records collected through standardized recording protocols are unavailable. While these opportunistic data are abundant, their uncertainty is usually high, e.g., due to observer effects or a lack of metadata. Data quality filtering is often used to reduce these types of uncertainty in an attempt to increase the value of studies relying on opportunistic data. However, filtering should not be performed blindly. In this study, recommendations are developed for data quality filtering of opportunistic species occurrence data used as input for species distribution models. Using an extensive database of 5.7 million citizen science records of 255 species in Flanders, the impact on model performance was quantified by applying three data quality filters, and the results were linked to species traits. More specifically, presence records were filtered based on record attributes that provide information on the observation process or post-entry data validation, and changes in the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity were analyzed using the Maxent algorithm with and without filtering. Controlling for sample size enabled us to study the combined impact of data quality filtering, i.e., the simultaneous impact of an increase in data quality and a decrease in sample size. Further, the variation among species in their response to data quality filtering was explored by clustering species based on four traits often related to data quality: commonness, popularity, difficulty, and body size. The findings show that model performance is affected by (i) the quality of the filtered data, (ii) the proportional reduction in sample size caused by filtering and the remaining absolute sample size, and (iii) a species ‘quality profile’ resulting from a species classification based on the four traits related to data quality. The findings resulted in recommendations on when and how to filter volunteer-generated and opportunistically collected data. This study confirms that correctly processed citizen science data can make a valuable contribution to ecological research and species conservation.
Keywords: citizen science, data quality filtering, species distribution models, trait profiles
Procedia PDF Downloads 202

19783 Temperature Distribution for Asphalt Concrete-Concrete Composite Pavement
Authors: Tetsya Sok, Seong Jae Hong, Young Kyu Kim, Seung Woo Lee
Abstract:
The temperature distribution in asphalt concrete (AC)-concrete composite pavement is one of the main factors affecting the performance life of the pavement. The temperature gradient in the concrete slab underneath the AC layer produces critical curling stresses and leads to de-bonding of the AC-concrete interface. These stresses, when enhanced by repetitive axial loadings, also contribute to fatigue damage and eventual crack development within the slab. Moreover, temperature changes within the concrete slab cause the slab to contract and expand, which significantly induces reflective cracking in the AC layer. In this paper, the numerical prediction of pavement temperature is investigated using a one-dimensional finite difference method (FDM) in a fully explicit scheme. The numerical model provides a fundamental and clear understanding of the heat energy balance, including incoming and outgoing thermal energies in addition to the heat dissipated in the system. Using reliable meteorological data for daily air temperature, solar radiation, and wind speed, together with variable pavement surface properties, the predicted pavement temperature profile was validated against field-measured data. Additionally, the effects of AC thickness and daily air temperature on the temperature profile in the underlying concrete were investigated. Based on the results obtained, the numerically predicted temperature of the AC-concrete composite pavement using the FDM showed good accuracy compared to the field-measured data, and a thicker AC layer significantly insulates the temperature distribution in the underlying concrete slab.
Keywords: asphalt concrete, finite difference method (FDM), curling effect, heat transfer, solar radiation
Procedia PDF Downloads 269
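A minimal sketch of a fully explicit one-dimensional FDM temperature model, reduced to pure conduction with an assumed sinusoidal surface temperature in place of the paper's full surface energy balance (solar radiation, wind, air temperature); material properties and dimensions are illustrative.

```python
import numpy as np

# Explicit 1D FDM for the pavement temperature profile: conduction only.
alpha = 1.0e-6                # thermal diffusivity (m^2/s), assumed
dz, dt = 0.01, 30.0           # 1 cm grid, 30 s time step
r = alpha * dt / dz**2
assert r <= 0.5, "explicit scheme requires r <= 1/2 for stability"

n = 60                        # 60 cm of AC + concrete, one material here
T = np.full(n, 20.0)          # initial temperature (degC)

for step in range(int(24 * 3600 / dt)):                   # march one day
    t = step * dt
    T[0] = 20.0 + 15.0 * np.sin(2 * np.pi * t / 86400.0)  # surface forcing
    T[1:-1] += r * (T[2:] - 2.0 * T[1:-1] + T[:-2])       # interior update
    T[-1] = T[-2]             # zero-flux bottom boundary

print("temperatures at 0, 10 and 30 cm depth:", T[0], T[10], T[30])
```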

19782 Assessing Vertical Distribution of Soil Organic Carbon Stocks in Westleigh Soil under Shrub Encroached Rangeland, Limpopo Province, South Africa
Authors: Abel L. Masotla, Phesheya E. Dlamini, Vusumuzi E. Mbanjwa
Abstract:
Accurate quantification of the vertical distribution of soil organic carbon (SOC) in relation to land cover transformations associated with shrub encroachment is crucial because deeper-lying horizons have been shown to have a greater capacity to sequester SOC. Despite this, in-depth soil carbon dynamics remain poorly understood, especially in arid and semi-arid rangelands. The objective of this study was to quantify and compare the vertical distribution of soil organic carbon stocks (SOCs) in shrub-encroached and open grassland sites. To achieve this, soil samples were collected at 10 cm depth intervals at both sites. The results showed that SOC was on average 19% and 13% greater in the topsoil and subsoil, respectively, under shrub-encroached grassland compared to open grassland. In both topsoil and subsoil, higher SOC stocks were found under the shrub-encroached grassland (4.53 kg m⁻² and 3.90 kg m⁻²) relative to the open grassland (4.39 kg m⁻² and 3.67 kg m⁻²). These results demonstrate that deeper soil horizons play a critical role in the storage of SOC in savanna grasslands.
Keywords: savanna grasslands, shrub-encroachment, soil organic carbon, vertical distribution
Procedia PDF Downloads 139

19781 Electromagnetic Modeling of a MESFET Transistor Using the Moments Method Combined with Generalised Equivalent Circuit Method
Authors: Takoua Soltani, Imen Soltani, Taoufik Aguili
Abstract:
The demands of communications and radar systems give rise to new developments in the domain of active integrated antennas (AIA) and arrays. The main advantages of AIA arrays are simplicity of fabrication, low manufacturing cost, and the combination of free-space power with beam scanning without a phase shifter. Active integrated antenna modeling involves the coupling between the electromagnetic model and the transport model, which becomes significant at high frequencies. Global modeling of active circuits is important for simulating EM coupling, the interaction between active devices and EM waves, and the effects of EM radiation on active and passive components. The present work focuses on the modeling of the active element, a MESFET transistor immersed in a rectangular waveguide. The proposed EM analysis is based on the Method of Moments combined with the Generalised Equivalent Circuit method (MOM-GEC). The Method of Moments (MOM) is among the most common and powerful numerical techniques for resolving electromagnetic problems, and in this class of techniques it is the dominant approach for solving Maxwell's and transport integral equations for an active integrated antenna. In this situation, the equivalent circuit is introduced to develop an integral-method formulation based on transposing the field problem into a generalised equivalent circuit that is simpler to treat. The method of Generalised Equivalent Circuit (MGEC) was suggested in order to represent integral equations by circuits that describe the unknown electromagnetic boundary conditions; the equivalent circuit presents a true electrical image of the studied structure, describing the discontinuity and its environment. The aim of our method is to investigate antenna parameters such as the input impedance, the current density distribution, and the electric field distribution. In this work, we propose a global EM model of the GaAs MESFET transistor using an integral method. We begin by describing the modeling structure, which allows an equivalent EM scheme translating the electromagnetic equations considered to be defined. Secondly, the projection of these equations onto common-type test functions leads to a linear matrix equation in which the unknown variable represents the amplitudes of the current density. Solving this equation provides the input impedance, the distribution of the current density, and the electric field distribution. From the electromagnetic calculations, we were able to present the convergence of the input impedance for different numbers of test functions as a function of the number of guide modes. This paper presents a pilot study mapping out the variation of the current as evaluated by the MOM-GEC. The essential improvement of our method is the reduction of the computing time and memory requirements needed to provide a sufficient global model of the MESFET transistor.
Keywords: active integrated antenna, current density, input impedance, MESFET transistor, MOM-GEC method
Procedia PDF Downloads 198

19780 Characteristics and Item Parameters Fitness on Chemistry Teacher-Made Test Instrument
Authors: Rizki Nor Amelia, Farida A. Setiawati
Abstract:
This study aimed to: (1) describe the characteristics of a teacher-made test instrument used to measure students’ ability in chemistry, and (2) identify whether the difficulty levels set by the teachers match the difficulty levels obtained empirically. Based on these objectives, this was a descriptive study. The analysis used the Rasch model and Chi-square statistics and was based on the response patterns of high school students to the teacher-made test instrument for the chemistry subject in the 2015/2016 academic year in Yogyakarta. The sample comprised 358 students selected by the cluster random sampling technique. The analysis showed that: (1) the teacher-made test instrument has a medium mean difficulty level and is capable of measuring ability on the interval -0.259 ≤ θ ≤ 0.659 logit, with the maximum of the test information function, 18.187, obtained at an ability of +0.2 logit; (2) 100% of the items categorized as either easy or difficult by the Rasch model matched the teachers’ judgment, while of the 37 items categorized as medium by the Rasch model, 8.10% and 10.81% were categorized by the teachers as easy and difficult items, respectively, with the remainder categorized as medium. Overall, the distribution of the difficulty levels formulated by the teachers differs from (does not match) the distribution of the difficulty levels based on the empirical results.
Keywords: chemistry, items parameter fitness, Rasch model, teacher-made test
Procedia PDF Downloads 238

19779 Secure Optical Communication System Using Quantum Cryptography
Authors: Ehab AbdulRazzaq Hussein
Abstract:
Quantum cryptography (QC) is an emerging technology for secure key distribution with single-photon transmissions. In contrast to classical cryptographic schemes, the security of QC schemes is guaranteed by the fundamental laws of nature: it stems from the impossibility of distinguishing non-orthogonal quantum states with certainty. A potential eavesdropper introduces errors in the transmissions, which can later be discovered by the legitimate participants of the communication. In this paper, a modeling approach is proposed for the QC protocol BB84 using polarization coding. A single-photon source is assumed in the designed models, so Eve cannot use a beam-splitting strategy to eavesdrop on the quantum channel transmission; the only eavesdropping strategy available to Eve is the intercept/resend strategy. After the quantum transmission of the QC protocol, the quantum bit error rate (QBER) is estimated and compared with a threshold value; if it is above this value, the procedure must be stopped and performed again later.
Keywords: security, key distribution, cryptography, quantum protocols, Quantum Cryptography (QC), Quantum Key Distribution (QKD)
Procedia PDF Downloads 404
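A sketch of the intercept/resend analysis: simulating BB84 basis sifting and the resulting QBER as a function of the fraction of photons Eve intercepts. The protocol logic follows the standard BB84 description; the photon count and tap fractions are arbitrary choices.

```python
import random

random.seed(4)
N = 20000

def qber(tap):
    """Simulate BB84 sifting with intercept/resend on a fraction `tap`."""
    errors = sifted = 0
    for _ in range(N):
        bit, a_basis = random.randint(0, 1), random.randint(0, 1)
        if random.random() < tap:                    # Eve intercepts
            e_basis = random.randint(0, 1)
            bit_out = bit if e_basis == a_basis else random.randint(0, 1)
        else:                                        # photon passes untouched
            e_basis, bit_out = a_basis, bit
        b_basis = random.randint(0, 1)
        if b_basis == a_basis:                       # kept after basis sifting
            measured = bit_out if b_basis == e_basis else random.randint(0, 1)
            sifted += 1
            errors += measured != bit
    return errors / sifted

# Full intercept/resend gives the textbook ~25% QBER, far above threshold.
for tap in (0.0, 0.5, 1.0):
    print(f"intercepted fraction {tap:.1f}: QBER ~ {qber(tap):.3f}")
```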

19778 A Source Point Distribution Scheme for Wave-Body Interaction Problem
Authors: Aichun Feng, Zhi-Min Chen, Jing Tang Xing
Abstract:
A two-dimensional linear wave-body interaction problem can be solved using a desingularized integral method by placing free-surface Rankine sources over the calm water surface and satisfying boundary conditions at prescribed collocation points on the calm water surface. A new free-surface Rankine source distribution scheme, determined by the intersection points of the free surface and the body surface, is developed to reduce the numerical computation cost, and a new treatment of the intersection point is given. The results of the present scheme are in good agreement with traditional numerical results and measurements.
Keywords: source point distribution, panel method, Rankine source, desingularized algorithm
Procedia PDF Downloads 365

19777 Ground Short Circuit Contributions of a MV Distribution Line Equipped with PWMSC
Authors: Mohamed Zellagui, Heba Ahmed Hassan
Abstract:
This paper proposes a new approach for the calculation of short-circuit parameters in the presence of a Pulse Width Modulation based Series Compensator (PWMSC). The PWMSC is a new Flexible Alternating Current Transmission System (FACTS) device that can modulate the impedance of a transmission line by varying the duty cycle (D) of a train of pulses with fixed frequency. This improves system performance, as the device provides virtual compensation of the distribution line impedance by injecting a controllable apparent reactance in series with the distribution line. This controllable reactance can operate in both capacitive and inductive modes, which makes the PWMSC highly effective in controlling the power flow and increasing system stability. The purpose of this work is to study the impact of the fault resistance (RF), varied from 0 to 30 Ω, on the fault current calculations for a ground fault at a fixed fault location. The case study is a medium voltage (MV) Algerian distribution line compensated by a PWMSC in the 30 kV Algerian distribution power network. The analysis is based on the symmetrical components method, which involves the calculation of the symmetrical components of currents and voltages, without and with the PWMSC, in both cases for the maximum and minimum duty cycle values in the capacitive and inductive modes. The paper presents simulation results which are verified by the theoretical analysis.
Keywords: pulse width modulated series compensator (PWMSC), duty cycle, distribution line, short-circuit calculations, ground fault, symmetrical components method
Procedia PDF Downloads 500
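For a single line-to-ground fault, the symmetrical-components result is I_f = 3E / (Z1 + Z2 + Z0 + 3RF). The sketch below sweeps RF from 0 to 30 Ω with assumed sequence impedances and a placeholder series reactance for the PWMSC; these values are illustrative, not the Algerian network data.

```python
import numpy as np

# Single line-to-ground fault by symmetrical components:
#   I_f = 3*E / (Z1 + Z2 + Z0 + 3*RF)
# Sequence impedances are assumed placeholders for a 30 kV feeder; the PWMSC
# is represented crudely as a reactance added to the positive sequence.
E = 30e3 / np.sqrt(3)                  # phase voltage (V)
Z1 = Z2 = complex(1.2, 4.0)            # positive/negative sequence (ohm)
Z0 = complex(3.5, 12.0)                # zero sequence (ohm)

def fault_current(rf, x_pwmsc=0.0):
    return 3 * E / (Z1 + 1j * x_pwmsc + Z2 + Z0 + 3 * rf)

for rf in (0, 5, 15, 30):
    i_cap = abs(fault_current(rf, x_pwmsc=-2.0))   # capacitive mode
    i_ind = abs(fault_current(rf, x_pwmsc=+2.0))   # inductive mode
    print(f"RF = {rf:2d} ohm: |If| ~ {i_cap:7.0f} A (cap), {i_ind:7.0f} A (ind)")
```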

19776 Spatial Emission of Ions Produced by the APF Plasma Focus Device
Authors: M. Habibi
Abstract:
The angular distribution of ion beam emission from the APF plasma focus device (15 kV, 40 μF, 115 nH) filled with nitrogen gas has been examined by investigating the effect of ion beams on aluminum thin foils at different angular positions. The samples were studied at different distances from the anode end with different shots. The optimum pressure obtained at an applied voltage of 12 kV was 0.7 torr. The ion flux declined as the pressure increased, and the maximum ion density at 0.7 torr was about 10.26 × 10²² ions/steradian. The irradiated foils were analyzed by SEM in order to study their surface and morphological changes. The results of the analysis showed melting and surface evaporation effects and the generation of some cracks in the specimens. The ion patterns obtained on the samples in this study can be useful in determining ion spatial distributions on the top of the anode.
Keywords: plasma focus, spatial distribution, high energy ions, ion angular distribution
Procedia PDF Downloads 458

19775 Reconstruction of Age-Related Generations of Siberian Larch to Quantify the Climatogenic Dynamics of Woody Vegetation Close the Upper Limit of Its Growth
Authors: A. P. Mikhailovich, V. V. Fomin, E. M. Agapitov, V. E. Rogachev, E. A. Kostousova, E. S. Perekhodova
Abstract:
Woody vegetation near the upper limit of its habitat is a sensitive indicator of the reaction of biota to regional climate changes. Quantitative assessment of temporal and spatial changes in the distribution of trees and plant biocenoses calls for the development of new modeling approaches based upon data selected from ground-level measurements and ultra-high-resolution aerial photography. Statistical models were developed for the study area located in the Polar Urals. These models allow probabilistic estimates to be obtained for placing Siberian larch trees into one of three age intervals, namely 1-10, 11-40, and over 40 years, based on the Weibull distribution of the maximum horizontal crown projection. The authors developed a distribution map for larch trees with crown diameters exceeding twenty centimeters by deciphering aerial photographs taken by a UAV from an altitude of fifty meters. The total number of larches was 88608, distributed across the abovementioned intervals as follows: 16980, 51740, and 19889 trees. The results demonstrate that two processes can be observed over recent decades: first, the intensive forestation of previously barren or lightly wooded fragments of the study area located within the patches of wood, woodlands, and sparse stands, and second, expansion into mountain tundra. The current expansion of the Siberian larch in the region has replaced the depopulation process that occurred during the Little Ice Age from the late 13ᵗʰ to the end of the 20ᵗʰ century. Using data from field measurements of Siberian larch specimen biometric parameters (including height, diameter at the root collar and at 1.3 meters, and the maximum projection of the crown in two orthogonal directions) and data on tree ages obtained at nine circular test sites, the authors developed an artificial neural network model comprising two layers with three and two neurons, respectively. The model allows quantitative assessment of a specimen's age based on its height and maximum crown projection, both of which can be assessed quantitatively using data from aerial photographs and lidar scans. The resulting model can be used to assess the age of all Siberian larch trees. The proposed approach, after validation, can be applied to assessing the age of other tree species growing near the upper tree line in other mountainous regions. This research was collaboratively funded by the Russian Ministry for Science and Education (project No. FEUG-2023-0002) and the Russian Science Foundation (project No. 24-24-00235) in the field of data modeling on the basis of artificial neural networks.
Keywords: treeline, dynamic, climate, modeling
Procedia PDF Downloads 82
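A sketch of the probabilistic age-class assignment described above: given per-class Weibull distributions of the maximum crown projection and the class counts as priors, Bayes' rule yields the probability of each age interval for a measured crown diameter. The Weibull shape/scale pairs here are assumed for illustration, not the study's fitted parameters.

```python
from scipy.stats import weibull_min

# Per-class Weibull models of the maximum crown projection (metres); the
# shape/scale values are illustrative assumptions.
classes = {
    "1-10 yr":  (1.8, 0.5),
    "11-40 yr": (2.2, 1.5),
    ">40 yr":   (2.5, 3.0),
}
priors = {"1-10 yr": 16980, "11-40 yr": 51740, ">40 yr": 19889}  # tree counts

def age_class_probabilities(crown_d):
    """Posterior P(age class | crown diameter) by Bayes' rule."""
    lik = {k: weibull_min.pdf(crown_d, c, scale=s) * priors[k]
           for k, (c, s) in classes.items()}
    z = sum(lik.values())
    return {k: v / z for k, v in lik.items()}

for d in (0.3, 1.2, 2.5):
    probs = {k: round(p, 2) for k, p in age_class_probabilities(d).items()}
    print(f"crown diameter {d} m -> {probs}")
```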

19774 Input Data Balancing in a Neural Network PM-10 Forecasting System
Authors: Suk-Hyun Yu, Heeyong Kwon
Abstract:
Recently, PM-10 has become a social and global issue. It is one of the major air pollutants affecting human health and therefore needs to be forecast rapidly and precisely. However, PM-10 comes from various emission sources, and its concentration level depends largely on meteorological and geographical factors of the local and global region, so forecasting PM-10 concentration is very difficult. A neural network model can be used in this case, but there are few cases of high PM-10 concentration, which makes the learning of the neural network model difficult. In this paper, we suggest a simple input balancing method for when the data distribution is uneven, based on the probability of appearance of the data. Experimental results show that input balancing makes the neural networks’ learning easier and improves the forecasting rates.
Keywords: artificial intelligence, air quality prediction, neural networks, pattern recognition, PM-10
Procedia PDF Downloads 231
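A minimal sketch of appearance-probability balancing, assuming binned PM-10 classes and replication of samples in inverse proportion to class frequency; the bin edges and replication rule are illustrative, not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(5)

# Replicate training samples in inverse proportion to how often their PM-10
# class appears, so rare high-concentration cases are not swamped in training.
pm10 = rng.gamma(2.0, 25.0, 5000)            # skewed synthetic PM-10 targets
edges = [30, 80, 150]                        # assumed class boundaries
bins = np.digitize(pm10, edges)              # 4 concentration classes
counts = np.bincount(bins, minlength=4)
appear_prob = counts / counts.sum()          # probability of appearance

# Repeat each sample so that every class contributes roughly equally.
repeats = np.maximum(1, np.round(appear_prob.max() / appear_prob)).astype(int)
idx = np.repeat(np.arange(len(pm10)), repeats[bins])
balanced = pm10[idx]

print("class counts before:", counts)
print("class counts after: ",
      np.bincount(np.digitize(balanced, edges), minlength=4))
```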

19773 The Location-Routing Problem with Pickup Facilities and Heterogeneous Demand: Formulation and Heuristics Approach
Authors: Mao Zhaofang, Xu Yida, Fang Kan, Fu Enyuan, Zhao Zhao
Abstract:
Nowadays, last-mile distribution plays an increasingly important role in the delivery link of the whole industrial chain and accounts for a large proportion of the total distribution cost. Promoting the upgrading of logistics networks and improving the layout of final distribution points have become trends in the development of modern logistics. Due to the discrete and heterogeneous nature and spatial distribution of customer demand, which lead to a higher delivery failure rate and lower vehicle utilization, last-mile delivery has become a time-consuming and uncertain process. As a result, courier companies have introduced a range of innovative parcel storage facilities, including pick-up points and lockers. The introduction of pick-up points and lockers has not only improved the user experience but has also helped logistics and courier companies achieve economies of scale. Against the backdrop of the COVID-19 pandemic, contactless delivery became a new hotspot, which also created new opportunities for the development of collection services. Therefore, a key issue for logistics companies is how to design or redesign their last-mile distribution network systems to create integrated logistics and distribution networks that take pick-up points and lockers into account. This paper focuses on the introduction of self-pickup facilities in new logistics and distribution scenarios and on the heterogeneous demands of customers. We consider two types of demand, ordinary products and refrigerated products, as well as the corresponding transportation vehicles. We account for the constraints associated with self-pickup points and lockers and then address the location-routing problem with self-pickup facilities and heterogeneous demands (LRP-PFHD). To solve this challenging problem, we propose a mixed-integer linear programming (MILP) model that aims to minimize the total cost, which comprises the facility opening cost, the variable transport cost, and the fixed transport cost. Due to the NP-hardness of the problem, we propose a hybrid adaptive large neighbourhood search algorithm to solve the LRP-PFHD. We evaluate the effectiveness and efficiency of the proposed algorithm using instances generated from benchmark instances. The results demonstrate that the hybrid adaptive large neighbourhood search algorithm is more efficient than MILP solvers such as Gurobi for the LRP-PFHD, especially for large-scale instances. In addition, we conducted a comprehensive analysis of some important parameters (e.g., facility opening cost and transportation cost) to explore their impacts on the results, and we suggest helpful managerial insights for courier companies.
Keywords: city logistics, last-mile delivery, location-routing, adaptive large neighborhood search
Procedia PDF Downloads 78
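To make the opening-versus-transport cost trade-off concrete, the sketch below solves a toy facility-opening and assignment instance by brute-force enumeration; the full LRP-PFHD MILP additionally covers routing, vehicle types, and the ordinary/refrigerated split, and all numbers here are invented.

```python
from itertools import combinations

# Toy instance: choose which pickup facilities to open and assign each
# customer to the nearest open one, minimizing opening + transport cost.
open_cost = {"A": 50, "B": 60, "C": 40}
dist = {  # customer -> facility distance
    1: {"A": 4, "B": 9, "C": 7},
    2: {"A": 8, "B": 2, "C": 6},
    3: {"A": 6, "B": 5, "C": 3},
}
UNIT = 5  # transport cost per unit distance

best = None
for k in range(1, len(open_cost) + 1):
    for opened in combinations(open_cost, k):
        fixed = sum(open_cost[f] for f in opened)
        transport = sum(UNIT * min(dist[c][f] for f in opened) for c in dist)
        if best is None or fixed + transport < best[0]:
            best = (fixed + transport, opened)

print("minimum total cost:", best[0], "open facilities:", best[1])
```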

19772 Association Between Short-term NOx Exposure and Asthma Exacerbations in East London: A Time Series Regression Model
Authors: Hajar Hajmohammadi, Paul Pfeffer, Anna De Simoni, Jim Cole, Chris Griffiths, Sally Hull, Benjamin Heydecker
Abstract:
Background: There is strong interest in the relationship between short-term air pollution exposure and human health. Most studies in this field focus on serious health outcomes such as death or hospital admission, but air pollution exposure affects many people through less severe impacts, such as exacerbations of respiratory conditions. A lack of quantitative analysis and inconsistent findings suggest that improved methodology is needed to understand these effects more fully. Method: We developed a time series regression model to quantify the relationship between daily NOx concentration and asthma exacerbations requiring oral steroids in primary care settings. Explanatory variables included daily NOx concentration measurements extracted from the 8 available background and roadside monitoring stations in east London and daily ambient temperature from London City Airport, located in east London. Lags of NOx concentrations up to 21 days (3 weeks) were used in the model. The dependent variable was the daily number of oral steroid courses prescribed for GP-registered patients with asthma in east London. A mixed distribution model was then fitted to the significant lags of the regression model. Results: The time series modelling showed a significant relationship between the NOx concentration on each day and the number of oral steroid courses prescribed over the following three weeks. In addition, the model using only roadside stations performs better than the model with a mixture of roadside and background stations.
Keywords: air pollution, time series modeling, public health, road transport
Procedia PDF Downloads 142
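A sketch of the distributed-lag idea on synthetic data: regressing daily prescription counts on NOx at lags 0-21 days by ordinary least squares. Real NOx series are strongly autocorrelated, so the lag estimates here are only indicative; the data-generating values are assumptions, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(6)
days = 400

# Synthetic stand-ins for daily mean NOx (ug/m^3) and daily counts of oral
# steroid courses; assumed effects placed at lags 0, 7 and 14 days.
nox = 40 + 10 * rng.random(days) + np.cumsum(rng.normal(0, 0.5, days))
counts = (20 + 0.15 * nox + 0.10 * np.roll(nox, 7) + 0.05 * np.roll(nox, 14)
          + rng.normal(0, 1, days))

# Distributed-lag regression: prescriptions on NOx at lags 0..21 days (OLS).
max_lag = 21
n = days - max_lag
X = np.column_stack([np.ones(n)] +
                    [nox[max_lag - l:days - l] for l in range(max_lag + 1)])
beta, *_ = np.linalg.lstsq(X, counts[max_lag:], rcond=None)
top = np.argsort(-beta[1:])[:3]
print("largest estimated lag effects at lags", sorted(top.tolist()), "days")
```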

19771 Analysis of Spatial Heterogeneity of Residential Prices in Guangzhou: An Actual Study Based on Point of Interest Geographically Weighted Regression Model
Authors: Zichun Guo
Abstract:
Guangzhou's house prices have long been lower than those of the other three major Chinese cities. With the gradual increase in Guangzhou's house prices, their influencing factors have gradually received attention. This paper uses house price data and POI (Point of Interest) data to explore the distribution of house prices and their influencing factors by applying the Kriging spatial interpolation method and a geographically weighted regression model in ArcGIS. The results show that the interpolated house prices have a significant relationship with the economic development and development potential of a region, and that different POI types have different impacts on the growth of house prices in different regions.
Keywords: POI, house price, spatial heterogeneity, Guangzhou
Procedia PDF Downloads 55
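A minimal sketch of the geographically weighted regression step: at each location, an ordinary least-squares fit is computed with observations down-weighted by a Gaussian kernel of their distance, recovering a spatially varying POI coefficient. The data and bandwidth are synthetic assumptions, not the Guangzhou data.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200

# Synthetic data: price depends on a POI-derived score with a coefficient
# that varies over space, which is what GWR is designed to recover.
xy = rng.uniform(0.0, 10.0, (n, 2))          # observation locations
poi = rng.random(n)                          # e.g. a POI density score
true_beta = 1.0 + 0.3 * xy[:, 0]             # effect grows eastwards
price = 5.0 + true_beta * poi + rng.normal(0.0, 0.1, n)

def gwr_at(u, v, bandwidth=2.0):
    """Locally weighted least squares at (u, v) with a Gaussian kernel."""
    d2 = ((xy - (u, v)) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth**2))
    X = np.column_stack([np.ones(n), poi])
    XtW = X.T * w                            # weight each observation
    return np.linalg.solve(XtW @ X, XtW @ price)

for u in (1.0, 5.0, 9.0):
    b0, b1 = gwr_at(u, 5.0)
    print(f"x = {u}: local POI coefficient ~ {b1:.2f} (true ~ {1 + 0.3*u:.2f})")
```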

19770 Examination of the Influence of the Near-Surface Geology on the Initial Infrastructural Development Using High-Resolution Seismic Method
Authors: Collins Chiemeke, Stephen Ibe, Godwin Onyedim
Abstract:
This research work using a high-resolution seismic tomography method was carried out with the aim of investigating how near-surface geology influences the initial distribution of infrastructural development in an area like Otuoke and its environs. To achieve this objective, the seismic tomography method was employed. The results revealed that the overburden (highly weathered layer) thickness ranges from 27 m to 50 m within the survey area, with an average value of 37 m. The 3D surface analysis of the overburden thickness distribution within the survey area showed that the overburden is thicker in regions with less infrastructural development and thinnest in built-up areas. The velocity distribution from the surface to a depth of 5 m ranges from about 660 m/s to 1160 m/s, with an average value of 946 m/s. The 3D surface analysis of the velocity distribution also revealed that the areas with extensive infrastructural development are characterized by high velocity values, whereas the undeveloped regions have, on average, low velocity values. Hence, one can conclude that the initial settlement of Otuoke and its environs and the subsequent infrastructural development were influenced by the underlying near-surface geology (rigid earth), among other factors.
Keywords: geology, seismic, infrastructural, near-surface
Procedia PDF Downloads 307