Search results for: threshold estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2546

656 Estimated Human Absorbed Dose of 111In-BPAMD as a New Bone-Seeking SPECT-Imaging Agent

Authors: H. Yousefnia, S. Zolghadri

Abstract:

An early diagnosis of bone metastases is very important for providing a sound decision on a subsequent therapy. A prerequisite for the clinical application of a new diagnostic radiopharmaceutical is the measurement of organ radiation exposure dose from biodistribution data in animals. In this study, the human organ absorbed doses of a novel agent for SPECT imaging of bone metastases, the 111In-(4-{[(bis(phosphonomethyl))carbamoyl]methyl}-7,10-bis(carboxymethyl)-1,4,7,10-tetraazacyclododec-1-yl) acetic acid (111In-BPAMD) complex, have been estimated based on mice data. The radiolabeled complex was prepared with high radiochemical purity under the optimal conditions. Biodistribution studies of the complex were investigated in male Syrian mice at selected times after injection (2, 4, 24 and 48 h). The human absorbed dose estimation of the complex was performed based on the mice data by the radiation absorbed dose assessment resource (RADAR) method. The 111In-BPAMD complex was prepared with high radiochemical purity (>95%, ITLC) and a specific activity of 2.85 TBq/mmol. The total body effective absorbed dose for 111In-BPAMD was 0.205 mSv/MBq. This value is comparable to that of other clinically used 111In complexes. The results show that the dose to critical organs from the complex is well within the range considered acceptable for diagnostic nuclear medicine procedures. Overall, 111In-BPAMD has interesting characteristics and can be considered a viable agent for SPECT imaging of bone metastases in the near future.
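
The RADAR method referred to above reduces, for each target organ, to multiplying the number of disintegrations in each source organ (residence time times administered activity) by a tabulated dose factor and summing the contributions. The sketch below only illustrates that bookkeeping; the residence times, dose factors and organ names are hypothetical placeholders, not data from this study.

```python
# Illustrative RADAR-style dose bookkeeping: D(target) = sum over sources of N(source) * DF(source -> target)
# All numbers below are hypothetical placeholders, not data from the study.

administered_activity_MBq = 100.0

# Residence times (MBq-h per MBq administered) per source organ
residence_time_h = {"bone": 1.2, "liver": 0.3, "kidneys": 0.15, "remainder": 2.0}

# Dose factors DF(source -> target) in mGy per MBq-h (hypothetical values)
dose_factor = {
    ("bone", "red_marrow"): 1.1e-2,
    ("liver", "red_marrow"): 4.0e-4,
    ("kidneys", "red_marrow"): 2.0e-4,
    ("remainder", "red_marrow"): 1.5e-4,
}

def organ_dose(target):
    """Absorbed dose to a target organ, summed over all source organs."""
    dose_per_MBq = sum(
        residence_time_h[src] * df
        for (src, tgt), df in dose_factor.items()
        if tgt == target
    )
    return dose_per_MBq * administered_activity_MBq  # mGy

print(f"Red marrow dose: {organ_dose('red_marrow'):.3f} mGy")
```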

Keywords: In-111, BPAMD, absorbed dose, RADAR

Procedia PDF Downloads 456
655 Analysis of Accurate Direct-Estimation of the Maximum Power Point and Thermal Characteristics of High Concentration Photovoltaic Modules

Authors: Yan-Wen Wang, Chu-Yang Chou, Jen-Cheng Wang, Min-Sheng Liao, Hsuan-Hsiang Hsu, Cheng-Ying Chou, Chen-Kang Huang, Kun-Chang Kuo, Joe-Air Jiang

Abstract:

Performance-related parameters of high concentration photovoltaic (HCPV) modules (e.g., current and voltage) are required when estimating the maximum power point using numerical and approximation methods. The maximum power point on the characteristic curve of a photovoltaic module varies with temperature and solar radiation. It is also difficult to estimate the output performance and maximum power point (MPP) due to the special characteristics of HCPV modules. Based on p-n junction semiconductor theory, a new and simple method is presented in this study to directly evaluate the MPP of HCPV modules. The MPP of HCPV modules can be determined directly from an irradiated I-V characteristic curve, because the temperature of a solar cell has a non-linear relationship with solar radiation. Numerical simulations and field tests are conducted to examine the characteristics of HCPV modules during maximum output power tracking. The performance of the presented method is evaluated by examining the dependence of the MPP characteristics of HCPV modules on temperature and irradiation intensity. The results show that the presented method allows HCPV modules to achieve their maximum power and perform power tracking under various operating conditions. A 0.1% error is found between the estimated and the actual maximum power point.
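
A rough illustration of direct MPP evaluation from an irradiated I-V curve is given below, using a simple explicit single-diode model (series and shunt resistances neglected) and a scan of the voltage axis for the maximum of P = V·I. The photocurrent, saturation current, ideality factor, cell count and temperature are hypothetical stand-ins for the module parameters used in the paper.

```python
import numpy as np

# Hypothetical single-diode parameters (series/shunt resistances neglected for brevity)
I_ph = 5.0        # photocurrent [A], roughly proportional to irradiance
I_0 = 1e-9        # diode saturation current [A]
n = 1.3           # ideality factor
N_s = 20          # cells in series
T = 330.0         # cell temperature [K]
k, q = 1.380649e-23, 1.602176634e-19
V_t = n * N_s * k * T / q                   # diode thermal voltage scaled by ideality and cell count [V]

V = np.linspace(0.0, 18.0, 5000)            # candidate operating voltages [V]
I = I_ph - I_0 * (np.exp(V / V_t) - 1.0)    # explicit single-diode I-V curve
I = np.clip(I, 0.0, None)                   # discard the region beyond open circuit
P = V * I

i_mpp = np.argmax(P)
print(f"Estimated MPP: V = {V[i_mpp]:.2f} V, I = {I[i_mpp]:.2f} A, P = {P[i_mpp]:.2f} W")
```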

Keywords: energy performance, high concentrated photovoltaic, maximum power point, p-n junction semiconductor

Procedia PDF Downloads 547
654 Utilizing Experiential Teaching Strategies to Reduce the Incidence of Falls in Patients in Orthopedic Wards

Authors: Yu-Shi Ye, Jia-Min Wu, Jhih-Ci Li

Abstract:

Background: Most orthopedic inpatients and primary caregivers are elderly, and patients are at high risk of falls. We set up a quality control team to analyze the root causes and found the following issues: 1. The nursing staff did not conduct cognitive assessments of patients and their primary caregivers to ensure that health education content was understood. 2. Nurses prefer to use spoken language in health education but lack the skills to use diverse teaching materials. 3. Newly recruited nurses have insufficient awareness of fall prevention. Methods: The study subjects were 16 nurses in the orthopedic ward of a teaching hospital in central Taiwan. We implemented the following strategies: 1. Developed a fall simulation teaching plan and conducted teaching courses and assessments in the morning meeting; 2. Designed and used a "fall prevention awareness card" to improve the prevention awareness of elderly patients; 3. All staff (including new staff) received experiential education training. Results: In 2021, 40% of patients in the orthopedic wards were aged 60-79 years (792/1979), with a high risk of falls. According to the collected data, the incidence of falls among hospitalized patients was 0.04% (5/12651), which exceeded the threshold of 0.02% in our ward. After completing the on-the-job education training in October, the nursing staff reported that they were more aware of specific fall-prevention situations. Through practical sharing and drills, combined with experiential teaching strategies, nurses can rebuild their safety awareness of fall prevention and deepen their cognitive memory. Participants scored between 30 and 80 on the pretest (16 participants, mean: 72.6) and between 90 and 100 on the post-test (16 participants, mean: 92.6), resulting in a 73.8% improvement in overall scores. A total of 4 new employees have all completed the first 3 months of compulsory PGY courses. From January to April 2022, the incidence of falls among hospitalized patients was 0.025% (1/3969). We have made good improvements and will continue to track the outcome. Discussion: In addition to enhancing the awareness of falls among nursing staff, how to guide patients and primary caregivers to prevent falls is also a focus for improvement. Proper health education can be better understood through practical exercises and case sharing.

Keywords: experiential teaching strategies, fall prevention, cognitive card, elderly patients, orthopedic wards

Procedia PDF Downloads 35
653 An Automatic Bayesian Classification System for File Format Selection

Authors: Roman Graf, Sergiu Gordea, Heather M. Ryan

Abstract:

This paper presents an approach for the classification of unstructured format descriptions for the identification of file formats. The main contribution of this work is the employment of data mining techniques to support file format selection using just the unstructured text description that comprises the most important format features for a particular organisation. Subsequently, the file format identification method employs a file format classifier and associated configurations to support digital preservation experts with an estimation of the required file format. Our goal is to make use of a format specification knowledge base aggregated from different Web sources in order to select a file format for a particular institution. Using the naive Bayes method, the decision support system recommends a file format to an expert for their institution. The proposed methods facilitate the selection of file formats and improve the quality of the digital preservation process. The presented approach is meant to facilitate decision making for the preservation of digital content in libraries and archives using domain expert knowledge and specifications of file formats. To facilitate decision-making, the aggregated information about the file formats is presented as a file format vocabulary that comprises the most common terms that are characteristic of all researched formats. The goal is to suggest a particular file format based on this vocabulary for analysis by an expert. A sample file format calculation and the calculation results, including probabilities, are presented in the evaluation section.
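
A minimal sketch of this kind of naive Bayes decision support, assuming scikit-learn is available: format descriptions become a bag-of-words vocabulary and an institution's free-text requirement is scored against each format class. The tiny training corpus and the format labels are invented for illustration and do not come from the paper's knowledge base.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented unstructured format descriptions (the real knowledge base is aggregated from Web sources)
descriptions = [
    "lossless raster image open specification widely supported archival",
    "compressed raster image lossy photographic web viewing",
    "page layout document embedded fonts long term preservation standard",
    "editable text document proprietary office software",
]
formats = ["PNG", "JPEG", "PDF/A", "DOC"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(descriptions, formats)

# Free-text requirement supplied by a digital preservation expert
query = ["open archival image format, lossless, long term preservation"]
classes = model.named_steps["multinomialnb"].classes_
probs = model.predict_proba(query)[0]
for fmt, p in sorted(zip(classes, probs), key=lambda t: -t[1]):
    print(f"{fmt:6s} {p:.2f}")
```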

Keywords: data mining, digital libraries, digital preservation, file format

Procedia PDF Downloads 472
652 Intercultural Trainings for Future Global Managers: Evaluating the Effect on the Global Mind-Set

Authors: Nina Dziatzko, Christopher Stehr, Franziska Struve

Abstract:

Intercultural competence as an explicitly required skill nearly never appears in job advertisements in international or even global contexts. But especially those who have to deal with different nationalities and cultures in their everyday business need several intercultural competencies and, furthermore, a global mind-set. The question thus arises of how potential future global managers can be trained to learn these competencies. In this regard, it might be helpful to see whether different types of intercultural training have different effects on those skills. This paper outlines lessons learned from the evaluation of two different intercultural trainings for management students. The main differences between the observed intercultural trainings are the amount of theoretical input in relation to hands-on experiences, the number of trainers, and the methods used to teach implicit cultural rules. Both groups contain management students with the willingness and perspective to work abroad or in an international context. The research is carried out with a pre-training survey and a post-training survey, which consist of questions on the international context of the students and a self-assessment of 19 identified intercultural and global mind-set skills, such as cosmopolitanism, empathy, differentiation and adaptability. Whereas there is no clear result as to which training yields an overall significantly higher increase in skills, there is a clear difference in the focus of the competencies trained by each of the intercultural trainings. In this way, the research provides a guideline for both academic institutions and companies when deciding between different types of intercultural training, provided the required skills to be trained are defined. This optimizes the efficiency and the accuracy of fit of the education of future global managers.

Keywords: global mind-set, intercultural competencies, intercultural training, learning experiences

Procedia PDF Downloads 252
651 Estimation of Elastic Modulus of Soil Surrounding Buried Pipeline Using Multi-Response Surface Methodology

Authors: Won Mog Choi, Seong Kyeong Hong, Seok Young Jeong

Abstract:

The stress on a buried pipeline under pavement is significantly affected by vehicle loads and the elastic modulus of the soil surrounding the pipeline. The correct elastic modulus of the soil has to be applied to the finite element model to investigate the effect of vehicle loads on the buried pipeline using finite element analysis. The purpose of this study is to establish an approach to calculating the correct elastic modulus of the soil using an optimization process. The optimal elastic modulus of the soil, which minimizes the difference between the strain measured in a vehicle driving test at a velocity of 35 km/h and the strain calculated from finite element analyses, was obtained through an optimization process using multi-response surface methodology. Three elastic moduli of the soil (road layer, original soil, dense sand) surrounding the pipeline were defined as the variables for the optimization. Further analyses with the optimal elastic moduli at velocities of 4.27 km/h, 15.47 km/h and 24.18 km/h were performed and compared to the test results to verify the applicability of multi-response surface methodology. The results indicated that the strain of the buried pipeline was most affected by the elastic modulus of the original soil, followed by the dense sand and the road layer, and the results of the further analyses with the optimal elastic moduli show good agreement with the tests.
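
Conceptually, the back-calculation above searches for the three soil moduli that minimize the gap between measured and FE-predicted strains. The sketch below shows that optimization loop with scipy, using a hypothetical linear surrogate in place of the finite element model (in practice, the fitted multi-response surface plays that role); the gauge strains and surrogate coefficients are invented.

```python
import numpy as np
from scipy.optimize import minimize

measured_strain = np.array([120.0, 95.0, 60.0])  # microstrain at three gauges (hypothetical)

def fe_surrogate(E):
    """Hypothetical response-surface stand-in for the FE model.
    E = [E_road, E_soil, E_sand] in MPa; returns predicted gauge strains."""
    E_road, E_soil, E_sand = E
    return np.array([
        200.0 - 0.05 * E_road - 0.90 * E_soil - 0.20 * E_sand,
        160.0 - 0.04 * E_road - 0.70 * E_soil - 0.15 * E_sand,
        110.0 - 0.03 * E_road - 0.50 * E_soil - 0.10 * E_sand,
    ])

def objective(E):
    # Sum of squared differences between measured and predicted strains
    return float(np.sum((fe_surrogate(E) - measured_strain) ** 2))

res = minimize(objective, x0=[100.0, 50.0, 80.0],
               bounds=[(10.0, 500.0)] * 3, method="L-BFGS-B")
print("Optimal moduli [MPa]:", np.round(res.x, 1), " residual:", round(res.fun, 3))
```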

Keywords: pipeline, optimization, elastic modulus of soil, response surface methodology

Procedia PDF Downloads 355
650 Techno-Economic Analysis of the Production of Aniline

Authors: Dharshini M., Hema N. S.

Abstract:

The plant for the production of aniline is designed for a feed of 295.46 tons per day of nitrobenzene. The material and energy balance calculations for the different equipment, such as the distillation column, heat exchangers, reactor and mixer, are carried out by simulation via DWSIM. The conversion of nitrobenzene to aniline by the hydrogenation process is considered to be 96%, and the total production of the plant was found to be 215 TPD. The cost estimation of the process is carried out to assess the feasibility of the plant. The net profit and percentage return on investment are estimated to be ₹27 crore and 24.6%, respectively. The payback period was estimated to be 4.05 years, and the unit production cost is ₹113/kg. A techno-economic analysis was performed for the production of aniline; the results include an economic analysis and a sensitivity analysis of critical factors. The economic analysis shows that a larger plant scale increases the total capital investment and annual operating cost, even though the unit production cost decreases. An uncertainty analysis was performed to predict the influence of economic factors on profitability, and scenario analysis is one way to quantify this uncertainty. In the scenario analysis, the best-case and worst-case scenarios are compared with the base-case scenario. The best-case scenario was found at a feed rate of 120 kmol/hr with a unit production cost of ₹112.05/kg, and the worst-case scenario was found at a feed rate of 60 kmol/hr with a unit production cost of ₹115.9/kg. The base case is within 99.2% of the best case in terms of unit production cost. Since the unit production cost is low and the profitability is high with a short payback time, it is feasible to construct a plant at this capacity.
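
The headline economics follow from standard definitions of return on investment, payback period and unit production cost. A small sketch with hypothetical inputs chosen only to be of the same order as the reported figures (the capital investment and operating cost are assumptions, not the paper's cost sheet):

```python
# Hypothetical inputs; currency in ₹ crore unless noted (1 crore = 1e7 rupees)
total_capital_investment = 110.0          # ₹ crore, assumed
annual_net_profit = 27.0                  # ₹ crore
annual_operating_cost = 800.0             # ₹ crore, assumed
annual_production_t = 215.0 * 330         # t/yr, assuming ~330 operating days at 215 TPD

roi_percent = 100.0 * annual_net_profit / total_capital_investment
payback_years = total_capital_investment / annual_net_profit
unit_cost_rs_per_kg = annual_operating_cost * 1e7 / (annual_production_t * 1000.0)

print(f"ROI: {roi_percent:.1f}%  payback: {payback_years:.2f} yr  unit cost: ₹{unit_cost_rs_per_kg:.0f}/kg")
```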

Keywords: aniline, nitrobenzene, economic analysis, unit production cost

Procedia PDF Downloads 82
649 Analysis of Sickle Cell Disease and Maternal Mortality in the United Kingdom

Authors: Basma Hassabo, Sarah Ahmed, Aisha Hameed

Abstract:

Aims and Objectives: To determine the incidence of maternal mortality among pregnant women with sickle cell disease (SCD) in the United Kingdom and to determine the exact cause of death in these women. Background: SCD is caused by the ‘sickle’ gene and is characterized by episodes of severe bone pain and other complications such as acute chest syndrome, chronic pulmonary hypertension, stroke, retinopathy, chronic renal failure, hepato-splenic crises, avascular bone necrosis, sepsis and leg ulcers. SCD is a continual cause of maternal mortality and fetal complications, and it comprises 1.5% of all Direct and Indirect deaths in the UK. Sepsis following premature rupture of membranes with ascending infection, post-partum infection and pre-labour overwhelming septic shock is one of its leading causes of death. Over the last fifty years of maternal mortality reports in the UK, between one and four pregnant women died in each triennium. Material and Method: This is a retrospective study involving pregnant women who died from SCD complications in the UK between 1952 and 2012. Data were collected from the UK Confidential Enquiries into Maternal Deaths and their causes between 1952 and 2012. Prior to 1985, the exact cause of death in this cohort was not recorded. Results: 33 deaths were reported between 1964 and 1984. 17 deaths due to sickle cell disease were reported between 1985 and 2012. Five women in this group died of sickle cell crisis, one woman had a liver sequestration crisis, two women died of venous thromboembolism, two had myocardial fibrosis and three died of sepsis. The remaining women died of amniotic fluid embolism, SUDEP, myocardial ischemia and intracranial haemorrhage. Conclusion: The leading causes of death in pregnant women with sickle cell disease are sickle cell crises, sepsis, venous thrombosis and thromboembolism. Prenatal care for women with SCD should be managed by a multidisciplinary team that includes an obstetrician, nutritionist, primary care physician and haematologist. In every acutely unwell woman with SCD, sickle cell crisis should be at the top of the list of differential diagnoses. Aggressive treatment of complications, with a low threshold to commence broad-spectrum antibiotics and LMWH, contributes to better outcomes.

Keywords: incidence, maternal mortality, sickle cell disease (SCD), UK

Procedia PDF Downloads 201
648 Estimation of PM10 Concentration Using Ground Measurements and Landsat 8 OLI Satellite Image

Authors: Salah Abdul Hameed Saleh, Ghada Hasan

Abstract:

The aim of this work is to produce an empirical model for the determination of particulate matter (PM10) concentration in the atmosphere using the visible bands of a Landsat 8 OLI satellite image over Kirkuk city, Iraq. The suggested algorithm is based on the aerosol optical reflectance model. The reflectance model is a function of the optical properties of the atmosphere, which can be related to its concentrations. The PM10 concentration measurements were collected using a Particle Mass Profiler and Counter in a Single Handheld Unit (Aerocet 531) simultaneously with the Landsat 8 OLI satellite image acquisition. The PM10 measurement locations were defined with a handheld global positioning system (GPS). The obtained reflectance values for the visible bands (coastal aerosol, blue, green and red bands) of the Landsat 8 OLI image were correlated with the in-situ measured PM10. The feasibility of the proposed algorithms was investigated based on the correlation coefficient (R) and root-mean-square error (RMSE) against the PM10 ground measurement data. The proposed multispectral model was chosen based on the highest correlation coefficient (R) and the lowest root-mean-square error (RMSE) with respect to the PM10 ground data. The outcomes of this research showed that the visible bands of Landsat 8 OLI were capable of estimating PM10 concentration with an acceptable level of accuracy.
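
The empirical model described above is essentially a multispectral regression of ground-measured PM10 on the visible-band reflectances, judged by R and RMSE. A minimal sketch on synthetic data (none of the coefficients or values are those of the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "observations": reflectance of the coastal-aerosol, blue, green and red bands at n sites
n = 40
reflectance = rng.uniform(0.05, 0.30, size=(n, 4))
true_coeffs = np.array([300.0, 150.0, -80.0, 120.0])
pm10 = 20.0 + reflectance @ true_coeffs + rng.normal(0.0, 5.0, n)  # µg/m³, synthetic

# Least-squares fit: PM10 = a0 + a1*B1 + a2*B2 + a3*B3 + a4*B4
X = np.column_stack([np.ones(n), reflectance])
coeffs, *_ = np.linalg.lstsq(X, pm10, rcond=None)
pred = X @ coeffs

R = np.corrcoef(pm10, pred)[0, 1]
RMSE = np.sqrt(np.mean((pm10 - pred) ** 2))
print(f"R = {R:.3f}, RMSE = {RMSE:.2f} µg/m³")
```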

Keywords: air pollution, PM10 concentration, Landsat 8 OLI image, reflectance, multispectral algorithms, Kirkuk area

Procedia PDF Downloads 422
647 Transmission of Values among Polish Young Adults and Their Parents: Pseudo Dyad Analysis and Gender Differences

Authors: Karolina Pietras, Joanna Fryt, Aleksandra Gronostaj, Tomasz Smolen

Abstract:

Young women and men differ from their parents in their preferred values. Those differences enable their adaptability to a new socio-cultural context and help with fulfilling developmental tasks specific to young adulthood. At the same time, core values with special importance to family members are transmitted within families. Intergenerational similarities in values may thus be both an effect of value transmission within a family and a consequence of sharing the same socio-cultural context. These processes are difficult to separate. In our study we assessed similarities and differences in values within four intergenerational family dyads (mothers-daughters, fathers-daughters, mothers-sons, fathers-sons). Sixty Polish young adults (30 women and 30 men aged 19-25), along with their parents (a total of 180 participants), completed the Schwartz Portrait Value Questionnaire (PVQ-21). To determine which values may be transmitted within families, we used a correlation analysis and a pseudo dyad analysis, which allows for the estimation of a baseline likeness between all tested subjects and consequently makes it possible to determine whether similarities between actual family members are greater than chance. We also assessed whether different strategies for measuring similarity between family members render different results, and checked whether resemblances in family dyads are influenced by the child’s and the parent’s gender. The reported similarities were interpreted in light of the evolutionary and the value salience perspectives.
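
The pseudo dyad logic can be illustrated with a permutation scheme: similarity is computed for the true parent-child dyads and compared against the distribution obtained when parents and children are randomly re-paired. The sketch below uses within-pair correlation of value profiles on synthetic data; it illustrates the idea rather than the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic value profiles (e.g., 10 value scores) for 60 parent-child dyads
n_dyads, n_values = 60, 10
parents = rng.normal(size=(n_dyads, n_values))
children = 0.5 * parents + 0.5 * rng.normal(size=(n_dyads, n_values))  # built-in similarity

def mean_dyad_similarity(p, c):
    """Average within-pair profile correlation."""
    return np.mean([np.corrcoef(p[i], c[i])[0, 1] for i in range(len(p))])

observed = mean_dyad_similarity(parents, children)

# Pseudo dyads: re-pair children with random parents to obtain a chance baseline
n_perm = 2000
baseline = np.array([
    mean_dyad_similarity(parents, children[rng.permutation(n_dyads)])
    for _ in range(n_perm)
])
p_value = np.mean(baseline >= observed)
print(f"true-dyad similarity {observed:.3f}, baseline {baseline.mean():.3f}, p = {p_value:.4f}")
```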

Keywords: intergenerational differences in values, gender differences, pseudo dyad analysis, transmission of values

Procedia PDF Downloads 484
646 Hounsfield-Based Automatic Evaluation of Volumetric Breast Density on Radiotherapy CT-Scans

Authors: E. M. D. Akuoko, Eliana Vasquez Osorio, Marcel Van Herk, Marianne Aznar

Abstract:

Radiotherapy is an integral part of treatment for many patients with breast cancer. However, side effects can occur, e.g., fibrosis or erythema. If patients at higher risk of radiation-induced side effects could be identified before treatment, they could be given more individual information about the risks and benefits of radiotherapy. We hypothesize that breast density is correlated with the risk of side effects and present a novel method for its automatic evaluation based on radiotherapy planning CT scans. Methods: 799 supine CT scans of breast radiotherapy patients were available from the REQUITE dataset. The methodology was first established in a subset of 114 patients (cohort 1) before being applied to the whole dataset (cohort 2). All patients were scanned in the supine position, with arms up, and the treated (ipsilateral) breast was identified. Manual expert contours were available for 96 patients in cohort 1, for both the ipsilateral and contralateral breast. Breast tissue was segmented using atlas-based automatic contouring software, ADMIRE® v3.4 (Elekta AB, Sweden). Once validated, the automatic segmentation method was applied to cohort 2. Breast density was then investigated by thresholding voxels within the contours, using an Otsu threshold and pixel intensity ranges based on Hounsfield units (-200 to -100 for fatty tissue, and -99 to +100 for fibro-glandular tissue). Volumetric breast density (VBD) was defined as the volume of fibro-glandular tissue / (volume of fibro-glandular tissue + volume of fatty tissue). A sensitivity analysis was performed to verify whether the calculated VBD was affected by the choice of breast contour. In addition, we investigated the correlation between VBD and patient age and breast size. VBD values were compared between ipsilateral and contralateral breast contours. Results: Estimated VBD values were 0.40 (range 0.17-0.91) in cohort 1, and 0.43 (range 0.096-0.99) in cohort 2. We observed ipsilateral breasts to be denser than contralateral breasts. Breast density was negatively associated with breast volume (Spearman: R=-0.5, p-value < 2.2e-16) and age (Spearman: R=-0.24, p-value = 4.6e-10). Conclusion: VBD estimates could be obtained automatically on a large CT dataset. Patients’ age or breast volume may not be the only variables that explain breast density. Future work will focus on assessing the usefulness of VBD as a predictive variable for radiation-induced side effects.
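
The VBD definition above is straightforward to compute once a breast contour and the HU thresholds are fixed. A minimal numpy sketch on a synthetic CT sub-volume, using the fixed HU ranges quoted in the abstract (the Otsu-threshold variant is omitted):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic CT sub-volume in Hounsfield units and a boolean breast mask (placeholders)
ct_hu = rng.normal(loc=-120.0, scale=80.0, size=(60, 128, 128))
breast_mask = np.zeros(ct_hu.shape, dtype=bool)
breast_mask[10:50, 20:100, 20:100] = True

hu_in_breast = ct_hu[breast_mask]
fatty = (hu_in_breast >= -200) & (hu_in_breast <= -100)
fibroglandular = (hu_in_breast >= -99) & (hu_in_breast <= 100)

vbd = fibroglandular.sum() / (fibroglandular.sum() + fatty.sum())
print(f"Volumetric breast density (VBD) = {vbd:.2f}")
```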

Keywords: breast cancer, automatic image segmentation, radiotherapy, big data, breast density, medical imaging

Procedia PDF Downloads 108
645 Different Stages for the Creation of Electric Arc Plasma through Slow Rate Current Injection to Single Exploding Wire, by Simulation and Experiment

Authors: Ali Kadivar, Kaveh Niayesh

Abstract:

This work simulates the voltage drop and resistance during the explosion of copper wires of diameters 25, 40, and 100 µm, surrounded by nitrogen at 1 bar and exposed to a 150 A current, before plasma formation. The absorption of electrical energy in an exploding wire is greatly diminished once the plasma is formed. This study shows the importance of considering radiation and heat conductivity for the accuracy of the circuit simulations. The radiation of the dense plasma formed on the wire surface is modeled with the Net Emission Coefficient (NEC) and combined with heat conductivity through PLASIMO® software. A time-transient code for analyzing wire explosions driven by a slow current rise rate is developed. It solves a circuit equation coupled with one-dimensional (1D) equations for the copper electrical conductivity as a function of its physical state and NEC radiation. At first, the initial voltage drop over the copper wire, the current, and the temperature distribution at the time of expansion are derived. The experiments have demonstrated that wires remain rather uniform lengthwise during the explosion and can therefore be simulated utilizing 1D simulations. Data from the first stage are then used as the initial conditions of the second stage, in which a simplified 1D model for high-Mach-number flows is adopted to describe the expansion of the core. The current is carried by the vaporized wire material before it is dispersed in nitrogen by the shock wave. In the third stage, using a three-dimensional model of the test bench, the streamer threshold is estimated. The electrical breakdown voltage is calculated without solving a full-blown plasma model by integrating Townsend growth coefficients (TdGC) along electric field lines. The BOLSIG⁺ and LAPLACE databases are used to calculate the TdGC at different mixture ratios of nitrogen/copper vapor. The simulations show that both radiation and heat conductivity should be considered for an adequate description of the wire resistance, and that gaseous discharges start at lower voltages than expected due to ultraviolet radiation and the exploding shocks, which may have ionized the nitrogen.
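
The streamer-threshold estimate mentioned in the third stage amounts to integrating the effective Townsend ionization coefficient along an electric field line and checking the result against a critical value, often taken around 18-20. The sketch below performs that integral for a hypothetical field profile and a purely illustrative alpha_eff(E) curve; in the actual work the coefficients come from BOLSIG⁺/LAPLACE swarm data for the nitrogen/copper-vapour mixture.

```python
import numpy as np

# Hypothetical electric field magnitude along one field line (10 mm gap), illustrative only
l = np.linspace(0.0, 0.01, 1000)          # position along the field line [m]
E = 8e6 * np.exp(-l / 0.004)              # field profile [V/m]

def alpha_eff(E_field):
    """Illustrative effective Townsend coefficient [1/m] vs field strength; a stand-in
    for swarm data tabulated from BOLSIG+/LAPLACE for the real gas mixture."""
    return np.where(E_field > 2.5e6, 2.0e5 * np.exp(-1.0e7 / E_field), 0.0)

K = np.sum(alpha_eff(E[:-1]) * np.diff(l))   # Meek-type streamer integral
K_crit = 18.0                                # commonly used critical value
print(f"K = {K:.1f} -> {'streamer inception likely' if K >= K_crit else 'below threshold'}")
```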

Keywords: exploding wire, Townsend breakdown mechanism, streamer, metal vapor, shock waves

Procedia PDF Downloads 61
644 Accuracy of Small Field of View CBCT in Determining Endodontic Working Length

Authors: N. L. S. Ahmad, Y. L. Thong, P. Nambiar

Abstract:

An in vitro study was carried out to evaluate the feasibility of small field of view (FOV) cone beam computed tomography (CBCT) in determining endodontic working length. The objectives were to determine the accuracy of CBCT in measuring the estimated preoperative working lengths (EPWL), endodontic working lengths (EWL) and file lengths. Access cavities were prepared in 27 molars. For each root canal, the baseline electronic working length was determined using an EAL (Raypex 5). The teeth were then divided into overextended, non-modified and underextended groups and the lengths were adjusted accordingly. Imaging and measurements were made using the respective software of the RVG (Kodak RVG 6100) and CBCT units (Kodak 9000 3D). Root apices were then shaved and the apical constrictions viewed under magnification to measure the control working lengths. The paired t-test showed a statistically significant difference between the CBCT EPWL and the control length, but the difference was too small to be clinically significant. From the Bland-Altman analysis, the CBCT method had the widest range of 95% limits of agreement, reflecting its greater potential for error. In measuring file lengths, RVG had wider 95% limits of agreement than CBCT. Conclusions: (1) The clinically insignificant underestimation of the preoperative working length using small FOV CBCT showed that it is acceptable for use in the estimation of preoperative working length. (2) Small FOV CBCT may be used in working length determination, but it is not as accurate as the currently practiced method of using the EAL. (3) It is also more accurate than RVG in measuring file lengths.
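
For reference, the Bland-Altman comparison quoted above is defined by the mean difference between two measurement methods and the 95% limits of agreement (mean ± 1.96 SD of the differences). A short sketch with placeholder working-length data, not the study's measurements:

```python
import numpy as np

# Placeholder working-length measurements (mm) of the same canals by two methods
cbct = np.array([20.1, 19.4, 21.0, 18.7, 22.3, 20.8, 19.9, 21.5])
control = np.array([20.4, 19.6, 20.8, 19.1, 22.0, 21.1, 20.2, 21.3])

diff = cbct - control
bias = diff.mean()
sd = diff.std(ddof=1)
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd

print(f"bias = {bias:.2f} mm, 95% limits of agreement: [{loa_low:.2f}, {loa_high:.2f}] mm")
```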

Keywords: accuracy, CBCT, endodontics, measurement

Procedia PDF Downloads 279
643 Estimation of Geotechnical Parameters by Comparing Monitoring Data with Numerical Results: Case Study of Arash–Esfandiar-Niayesh Under-Passing Tunnel, Africa Tunnel, Tehran, Iran

Authors: Aliakbar Golshani, Seyyed Mehdi Poorhashemi, Mahsa Gharizadeh

Abstract:

Underpassing tunnels are strongly influenced by the surrounding soils. There are some complexities in the specification of real soil behavior, owing to the fact that many uncertainties exist in soil properties and, additionally, in the choice of appropriate soil constitutive models. Such factors may cause the settlements obtained in numerical analysis to be incompatible with the values observed in actual construction. This paper reports a case study of a specific tunnel constructed by NATM. The tunnel has a depth of 11.4 m, a height of 12.2 m, and a width of 14.4 m with 2.5 lanes. The numerical modeling was based on a 2D finite element program. The soil material behavior was modeled by the hardening soil model. According to the field observations, the numerically estimated settlement at the ground surface was approximately four times the measured one after the entire installation of the initial lining, indicating that some unknown factors affect the values. Consequently, the geotechnical parameters were accurately revised by a numerical back-analysis using laboratory and field test data and based on the obtained monitoring data. The obtained result confirms that, typically, the soil parameters are conservatively under-estimated and, additionally, that the constitutive models cannot be applied properly to all soil conditions.

Keywords: NATM tunnel, initial lining, laboratory test data, numerical back-analysis

Procedia PDF Downloads 342
642 Profit Efficiency and Technology Adoption of Boro Rice Production in Bangladesh

Authors: Fazlul Hoque, Tahmina Akter Joya, Asma Akter, Supawat Rungsuriyawiboon

Abstract:

Rice is the staple food in Bangladesh, and therefore self-sufficiency in rice production remains a major concern. However, Bangladesh is experiencing insufficiency in rice production due to high production costs and a low national average productivity of 2.848 ton/ha in comparison to other rice-growing countries in the world. This study aims to find out the profit efficiency and the determinants of profit efficiency in Boro rice cultivation in the Manikganj and Dhaka districts of Bangladesh. It also focuses on technology adoption and the effect of technology adoption on the profit efficiency of Boro rice cultivation in Bangladesh. The data were collected from 300 households growing Boro rice through face-to-face interviews using a structured questionnaire; Frontier version 4.1 and STATA 15 software were employed to analyze the data according to the purpose of the study. Maximum likelihood estimates of the specified profit model showed that the profit efficiency of the farmers varied between 23% and 97% with a mean of 76%, implying that 24% of potential profit is lost due to a combination of technical and allocative inefficiencies in Boro rice cultivation in the study area. The inefficiency model revealed that the education level of the farmer, farm size, seed variety, and training and extension services significantly influence profit inefficiency. The study also showed that the level of the technology adoption index affects profit efficiency. Technology adoption in Boro rice cultivation is influenced by the education level of the farmer, farm size and farm capital.

Keywords: farmer, maximum likelihood estimation, profit efficiency, rice

Procedia PDF Downloads 113
641 Comparative Fragility Analysis of Shallow Tunnels Subjected to Seismic and Blast Loads

Authors: Siti Khadijah Che Osmi, Mohammed Ahmad Syed

Abstract:

Underground structures are crucial components which require detailed analysis and design. Tunnels, for instance, are massively constructed as transportation infrastructure and utility networks, especially in urban environments. Considering their prime importance to the economy and public safety, which cannot be compromised, any instability in these tunnels will be highly detrimental to their performance. Recent experience suggests that tunnels become vulnerable during earthquake and blast scenarios. However, a very limited number of studies has been carried out to understand the dynamic response and performance of underground tunnels under these unpredictable extreme hazards. In view of the importance of enhancing the resilience of these structures, the overall aim of the study is to evaluate the probabilistic future performance of shallow tunnels subjected to seismic and blast loads by developing a detailed fragility analysis. Critical non-linear time-history numerical analyses using the sophisticated finite element software Midas GTS NX have been presented alongside current methods of analysis, taking into consideration structural typology, ground motion and explosive characteristics, the effect of soil conditions, and other associated uncertainties on the tunnel integrity, which may ultimately lead to catastrophic failure of the structures. The proposed fragility curves for both extreme loadings are discussed and compared; they provide significant information on the performance of the tunnel under extreme hazards, which may be beneficial for future risk assessment and loss estimation.
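
Fragility curves of the kind proposed here are commonly parameterized as a lognormal CDF, P(damage | IM) = Φ((ln IM − ln θ)/β), with the median capacity θ and dispersion β fitted by maximum likelihood to the binary damage outcomes of the time-history analyses. A minimal sketch on synthetic intensity-measure/damage pairs (not the study's results):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(3)

# Synthetic analysis results: intensity measure (e.g., PGA in g) and binary damage outcome
im = rng.uniform(0.05, 1.5, 200)
true_theta, true_beta = 0.6, 0.45
damaged = (rng.random(200) < norm.cdf(np.log(im / true_theta) / true_beta)).astype(float)

def neg_log_lik(params):
    theta, beta = params
    p = norm.cdf(np.log(im / theta) / beta)
    p = np.clip(p, 1e-9, 1 - 1e-9)        # guard the logarithm
    return -np.sum(damaged * np.log(p) + (1 - damaged) * np.log(1 - p))

res = minimize(neg_log_lik, x0=[0.5, 0.5], bounds=[(0.01, 5.0), (0.05, 2.0)])
theta_hat, beta_hat = res.x
print(f"median capacity θ = {theta_hat:.2f} g, dispersion β = {beta_hat:.2f}")
```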

Keywords: fragility analysis, seismic loads, shallow tunnels, blast loads

Procedia PDF Downloads 316
640 Validation of SWAT Model for Prediction of Water Yield and Water Balance: Case Study of Upstream Catchment of Jebba Dam in Nigeria

Authors: Adeniyi G. Adeogun, Bolaji F. Sule, Adebayo W. Salami, Michael O. Daramola

Abstract:

Estimation of water yield and water balance in a river catchment is critical to the sustainable management of water resources at the watershed level in any country. Therefore, in the present study, the Soil and Water Assessment Tool (SWAT) interfaced with a Geographical Information System (GIS) was applied as a tool to predict the water balance and water yield of a catchment area in Nigeria. The catchment area, which covers 12,992 km², is located upstream of the Jebba hydropower dam in the north-central part of Nigeria. In this study, data on the observed flow were collected and compared with the simulated flow from SWAT. The correlation between the two data sets was evaluated using statistical measures such as the Nash-Sutcliffe Efficiency (NSE) and the coefficient of determination (R²). The model output shows good agreement between the observed flow and the simulated flow, as indicated by NSE and R² values greater than 0.7 for both the calibration and validation periods. A total of 42,733 mm of water was predicted by the calibrated model as the water yield potential of the basin for the simulation period 1985 to 2010. This performance obtained with the SWAT model suggests that it could be a promising tool to predict water balance and water yield in the sustainable management of water resources. In addition, SWAT could be applied to other basins in Nigeria as a decision-support tool for sustainable water management.
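
For reference, the goodness-of-fit statistics quoted above are computed directly from the observed and simulated flow series: NSE = 1 − Σ(Qobs − Qsim)² / Σ(Qobs − Q̄obs)², with R² taken as the squared correlation. A short sketch on placeholder flows:

```python
import numpy as np

q_obs = np.array([12.0, 30.5, 55.2, 80.1, 64.3, 41.0, 22.7, 15.4])  # placeholder observed flows
q_sim = np.array([14.1, 28.0, 50.9, 84.6, 60.2, 44.5, 20.3, 17.0])  # placeholder simulated flows

nse = 1.0 - np.sum((q_obs - q_sim) ** 2) / np.sum((q_obs - q_obs.mean()) ** 2)
r2 = np.corrcoef(q_obs, q_sim)[0, 1] ** 2

print(f"NSE = {nse:.3f}, R² = {r2:.3f}")
```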

Keywords: GIS, modeling, sensitivity analysis, SWAT, water yield, watershed level

Procedia PDF Downloads 393
639 Estimation of Asphalt Pavement Surfaces Using Image Analysis Technique

Authors: Mohammad A. Khasawneh

Abstract:

Asphalt concrete pavements gradually lose their skid resistance, causing safety problems, especially under wet conditions and at high driving speeds. In order to reproduce the actual field polishing and wearing process of asphalt pavement surfaces in a laboratory setting, several laboratory-scale accelerated polishing devices have been developed by different agencies. To mimic the actual process, friction and texture measuring devices are needed to quantify surface deterioration at different polishing intervals that reflect different stages of the pavement life. The test could still be considered lengthy and, to some extent, labor-intensive. Therefore, there is a need to come up with another method that can assist in investigating bituminous pavement surface characteristics in a practical and time-efficient test procedure. The purpose of this paper is to utilize a well-developed image analysis technique to characterize asphalt pavement surfaces without the need to use conventional friction and texture measuring devices, in an attempt to shorten and simplify the polishing procedure in the lab. Promising findings showed the possibility of using image analysis in lieu of the labor-intensive and inherently variable friction and texture measurements. It was found that the exposed aggregate surface area of asphalt specimens made from limestone and gravel aggregates provided solid evidence of the validity of this method in describing asphalt pavement surfaces. The image analysis results correlated well with the British Pendulum Numbers (BPN), Polish Values (PV) and Mean Texture Depth (MTD) values.
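
The image-analysis step described above amounts to thresholding the specimen image, measuring the exposed aggregate area fraction, and correlating it with the friction/texture indices. A toy sketch with synthetic grayscale images and placeholder BPN values (the threshold of 150 is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic grayscale specimen images (0-255); brighter pixels are treated as exposed aggregate
images = [np.clip(rng.normal(loc=100 + 15 * i, scale=50, size=(200, 200)), 0, 255) for i in range(6)]
area_fraction = np.array([(img > 150).mean() for img in images])

# Placeholder British Pendulum Numbers measured on the same specimens
bpn = np.array([62.0, 58.5, 66.1, 60.3, 63.8, 57.2])

r = np.corrcoef(area_fraction, bpn)[0, 1]
print("exposed-aggregate area fractions:", np.round(area_fraction, 3))
print(f"correlation with BPN: r = {r:.2f}")
```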

Keywords: friction, image analysis, polishing, statistical analysis, texture

Procedia PDF Downloads 286
638 Realistic Modeling of the Preclinical Small Animal Using Commercial Software

Authors: Su Chul Han, Seungwoo Park

Abstract:

With the increasing incidence of cancer, the technology and modalities of radiotherapy have advanced, and the importance of preclinical models is increasing in cancer research. Furthermore, small animal dosimetry is an essential part of evaluating the relationship between the absorbed dose in a preclinical small animal and the biological effect in a preclinical study. In this study, we carried out realistic modeling of a preclinical small animal phantom that makes it possible to verify the irradiated dose, using commercial software. The small animal phantom was modeled from the 4D digital mouse whole-body (Moby) phantom. To manipulate the Moby phantom in commercial software (Mimics, Materialise, Leuven, Belgium), we converted it to DICOM CT image files using MATLAB; the two-dimensional CT images were converted to a three-dimensional image, making it possible to segment and crop the CT image in sagittal, coronal and axial views. The CT images of the small animal were modeled by the following process. Based on the profile line values, thresholding was carried out to make a mask that connected all the regions within the same threshold range. Using the thresholding method, we segmented the images into three parts (bone, body tissue, lung); to separate neighboring pixels between lung and body tissue, we used the region-growing function of the Mimics software. We acquired a 3D object by 3D calculation on the segmented images. The generated 3D object was smoothed by a remeshing operation with a smoothing factor of 0.4 and 5 iterations. The edge mode was selected to perform triangle reduction; the parameters were a tolerance of 0.1 mm, an edge angle of 15 degrees and 5 iterations. The processed 3D object file was converted to an STL file for output on a 3D printer. We modified the 3D small animal file using 3-matic Research (Materialise, Leuven, Belgium) to make space for radiation dosimetry chips. We thus acquired a 3D object of a realistic small animal phantom. The width of the small animal phantom was 2.631 cm, the thickness was 2.361 cm, and the length was 10.817 cm. The Mimics software supported efficient 3D object generation and easy conversion to STL files for the user. The development of the small preclinical animal phantom would increase the reliability of absorbed dose verification in small animals for preclinical studies.

Keywords: mimics, preclinical small animal, segmentation, 3D printer

Procedia PDF Downloads 343
637 Optimization of Economic Order Quantity of Multi-Item Inventory Control Problem through Nonlinear Programming Technique

Authors: Prabha Rohatgi

Abstract:

To obtain efficient control over the huge inventory of drugs in the pharmacy department of a hospital, the medicines are generally first categorized on the basis of their cost, ‘ABC’ (Always Better Control), and then on the basis of their criticality, ‘VED’ (Vital, Essential, Desirable), for prioritization. About one-third of the annual expenditure of a hospital is spent on medicines. To minimize the inventory investment, the hospital management may like to keep the medicines inventory low, as medicines are perishable items. The main aim of every hospital is to provide better services to patients under limited resources. To achieve a satisfactory level of health care services for outpatients, a hospital has to keep an eye on the wastage of medicines, because the expiry of medicines causes a great loss of money from a budget that is limited and allocated for a particular period of time. The objective of this study is to identify the categories of medicines requiring intensive managerial control. In this paper, to minimize the total inventory cost and the cost associated with the wastage of money due to the expiry of medicines, an inventory control model is used as an estimation tool, and then a nonlinear programming technique is applied under a limited budget and a fixed number of orders to be placed in a limited time period. Numerical computations have been given and show that, by using scientific methods in hospital services, inventory can be managed more effectively under limited resources and better health care services can be provided. Secondary data have been collected from a hospital to provide empirical evidence.
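
The constrained multi-item EOQ problem sketched above (minimize ordering plus holding cost subject to a budget and a cap on the number of orders) can be posed directly as a nonlinear program. A small illustration with scipy's SLSQP solver; the demand, cost and budget figures are invented:

```python
import numpy as np
from scipy.optimize import minimize

# Invented data for three drug categories
demand = np.array([1200.0, 800.0, 500.0])     # annual demand (units)
order_cost = 150.0                            # cost per order placed
hold_cost = np.array([4.0, 6.0, 10.0])        # holding cost per unit per year
unit_price = np.array([20.0, 45.0, 90.0])     # purchase price per unit
budget = 10000.0                              # limit on average capital tied up in stock
max_orders = 40.0                             # limit on total orders per year

def total_cost(q):
    # ordering cost + holding cost for order quantities q
    return np.sum(demand / q * order_cost + q / 2.0 * hold_cost)

constraints = [
    {"type": "ineq", "fun": lambda q: budget - np.sum(unit_price * q / 2.0)},  # budget constraint
    {"type": "ineq", "fun": lambda q: max_orders - np.sum(demand / q)},        # order-count constraint
]
res = minimize(total_cost, x0=np.full(3, 100.0), bounds=[(1.0, None)] * 3,
               constraints=constraints, method="SLSQP")
print("optimal order quantities:", np.round(res.x, 1), " total cost:", round(res.fun, 1))
```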

Keywords: ABC-VED inventory classification, multi item inventory problem, nonlinear programming technique, optimization of EOQ

Procedia PDF Downloads 230
636 Big Data in Telecom Industry: Effective Predictive Techniques on Call Detail Records

Authors: Sara ElElimy, Samir Moustafa

Abstract:

Mobile network operators are starting to face many challenges in the digital era, especially the high demands from customers. Since mobile network operators are considered a source of big data, traditional techniques are not effective in the new era of big data, the Internet of Things (IoT) and 5G; as a result, handling different big datasets effectively becomes a vital task for operators with the continuous growth of data and the move from Long Term Evolution (LTE) to 5G. So, there is an urgent need for effective big data analytics to predict future demands, traffic, and network performance to fulfil the requirements of the fifth generation of mobile network technology. In this paper, we introduce data science techniques using machine learning and deep learning algorithms: the autoregressive integrated moving average (ARIMA), Bayesian-based curve fitting, and a recurrent neural network (RNN) are employed for a data-driven application for mobile network operators. The main framework of the models includes the identification of the parameters of each model, estimation, prediction, and the final data-driven application of this prediction to business and network-performance applications. These models are applied to the Telecom Italia Big Data Challenge call detail records (CDRs) datasets. The performance of these models, assessed using specific well-known evaluation criteria, shows that ARIMA (a machine learning-based model) is more accurate as a predictive model on such a dataset than the RNN (a deep learning model).
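
An ARIMA forecast of the kind applied to the CDR traffic series can be set up in a few lines with statsmodels; the synthetic hourly call-volume series and the (2,1,2) order below are placeholders rather than the configuration tuned in the paper.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(5)

# Synthetic hourly call-volume series with a daily cycle plus noise (stand-in for real CDR counts)
idx = pd.date_range("2014-01-01", periods=24 * 28, freq="h")
hours = np.arange(len(idx))
calls = 500 + 200 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 30, len(idx))
series = pd.Series(calls, index=idx)

model = ARIMA(series, order=(2, 1, 2)).fit()
forecast = model.forecast(steps=24)          # next-day hourly prediction
print(forecast.head())
```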

Keywords: big data analytics, machine learning, CDRs, 5G

Procedia PDF Downloads 113
635 Prevalence of Workplace Bullying in Hong Kong: A Latent Class Analysis

Authors: Catalina Sau Man Ng

Abstract:

Workplace bullying is generally defined as a form of direct and indirect maltreatment at work, including harassing, offending, socially isolating someone or negatively affecting someone’s work tasks. Workplace bullying is unfortunately commonplace around the world, which makes it a social phenomenon worth researching. However, the measurements and estimation methods of workplace bullying differ across studies, leading to dubious results. Hence, this paper attempts to examine the prevalence of workplace bullying in Hong Kong using the latent class analysis approach. It is often argued that the traditional classification of workplace bullying into the dichotomous ‘victims’ and ‘non-victims’ may not be able to fully represent the complex phenomenon of bullying. By treating workplace bullying as one latent variable and examining the potential categorical distribution within the latent variable, a more thorough understanding of workplace bullying in real-life situations may hence be provided. As a result, this study adopts a latent class analysis method, which has previously been shown to have higher construct and predictive validity. In the present study, a representative sample of 2814 employees (male: 54.7%, female: 45.3%) in Hong Kong was recruited. The participants were asked to fill in a self-reported questionnaire which included measurements such as the Chinese Workplace Bullying Scale (CWBS) and the Chinese version of the Depression Anxiety Stress Scale (DASS). It is estimated that four latent classes will emerge: ‘non-victims’, ‘seldom bullied’, ‘sometimes bullied’, and ‘victims’. The results for each latent class and the implications of the study will also be discussed in this working paper.

Keywords: latent class analysis, prevalence, survey, workplace bullying

Procedia PDF Downloads 293
634 Unlocking the Puzzle of Borrowing Adult Data for Designing Hybrid Pediatric Clinical Trials

Authors: Rajesh Kumar G

Abstract:

A challenging aspect of any clinical trial is to carefully plan the study design to meet the study objective in an optimal way and to validate the assumptions made during protocol design. When it is a pediatric study, there is the added challenge of stringent guidelines and difficulty in recruiting the necessary subjects. Unlike adult trials, there is not much historical data available for pediatrics, which is required to validate assumptions for planning pediatric trials. Typically, pediatric studies are initiated as soon as approval is obtained for a drug to be marketed for adults, so with the historical information from the adult study and with the available pediatric pilot study data or simulated pediatric data, the pediatric study can be well planned. Generalizing a historical adult study to a new pediatric study is a tedious task; however, it is possible by integrating various statistical techniques and taking advantage of a hybrid study design, which will help to achieve the study objective in a smoother way even in the presence of many constraints. This research paper explains how the hybrid study design can be planned along with an integrated technique (SEV) to plan the pediatric study. In brief, the SEV technique (Simulation, Estimation (using borrowed adult data and applying Bayesian methods), Validation) incorporates simulating the planned study data and obtaining the desired estimates to validate the assumptions. This method of validation can be used to improve the accuracy of data analysis, ensuring that results are as valid and reliable as possible, which allows us to make informed decisions well ahead of study initiation. With professional precision, this technique, based on the collected data, allows one to gain insight into best practices when using data from a historical study and simulated data alike.
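
One simple, well-known way to borrow adult data in the Bayesian estimation step is a power prior on a response rate: the adult counts enter the posterior down-weighted by a factor a0 between 0 and 1, and the planned pediatric trial is then simulated to check its operating characteristics. The sketch below is a conjugate beta-binomial version with invented counts; it illustrates the borrowing idea, not the authors' SEV implementation.

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(6)

# Historical adult data and a hypothetical pediatric scenario (all counts invented)
x_adult, n_adult = 60, 100          # adult responders / enrolled
a0 = 0.3                            # power-prior discount applied to the adult data
true_ped_rate = 0.55                # assumed true pediatric response rate for the simulation
n_ped = 40                          # planned pediatric sample size
n_sim = 5000

success = 0
for _ in range(n_sim):
    x_ped = rng.binomial(n_ped, true_ped_rate)       # simulate the planned pediatric trial
    # Beta(1,1) prior + discounted adult data + pediatric data (conjugate update)
    a_post = 1 + a0 * x_adult + x_ped
    b_post = 1 + a0 * (n_adult - x_adult) + (n_ped - x_ped)
    # Declare success if the posterior probability that the rate exceeds 0.4 is at least 0.95
    if 1.0 - beta.cdf(0.4, a_post, b_post) >= 0.95:
        success += 1

print(f"Estimated probability of trial success: {success / n_sim:.3f}")
```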

Keywords: adaptive design, simulation, borrowing data, Bayesian model

Procedia PDF Downloads 42
633 Application of Nonparametric Geographically Weighted Regression to Evaluate the Unemployment Rate in East Java

Authors: Sifriyani Sifriyani, I Nyoman Budiantara, Sri Haryatmi, Gunardi Gunardi

Abstract:

East Java Province ranks first among Indonesian provinces in its number of counties and cities and has the largest population. In 2015, the population reached 38,847,561; this figure showed very high population growth. High population growth is feared to lead to an increase in the level of unemployment. In this study, the researchers mapped and modeled the unemployment rate with six variables that were presumed to influence it. Modeling was done by nonparametric geographically weighted regression methods with a truncated spline approach. This method was chosen because the spline method is flexible; these models tend to find their own estimates from the data. In this modeling, there are knot points, the points at which the behavior of the data changes. The selection of the optimum knot points was done by choosing the minimum value of the Generalized Cross Validation (GCV). Based on the research, six variables were found to affect the level of unemployment in East Java. They were the percentage of the population educated above high school level, the rate of economic growth, the population density, the ratio of investment to the total labor force, the regional minimum wage, and the ratio of the number of large and medium-scale industries to the work force. The nonparametric geographically weighted regression model with the truncated spline approach had a coefficient of determination of 98.95% and an MSE value of 0.0047.
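
The GCV-based knot selection described above can be illustrated in one dimension: for each candidate knot k, fit the linear truncated spline basis [1, x, (x − k)₊] by least squares and keep the knot with the smallest GCV = (RSS/n) / (1 − tr(H)/n)². The sketch below uses synthetic data and a single knot; the model in the paper is geographically weighted and multivariable.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic predictor/response with a change of slope at x = 0.6
x = np.sort(rng.uniform(0, 1, 120))
y = 2.0 + 1.5 * x + 4.0 * np.clip(x - 0.6, 0, None) + rng.normal(0, 0.3, x.size)

def gcv_for_knot(k):
    X = np.column_stack([np.ones_like(x), x, np.clip(x - k, 0, None)])  # truncated spline basis
    H = X @ np.linalg.pinv(X)                 # hat matrix of the least-squares fit
    resid = y - H @ y
    n = x.size
    return (resid @ resid / n) / (1.0 - np.trace(H) / n) ** 2

candidates = np.linspace(0.1, 0.9, 81)
best = min(candidates, key=gcv_for_knot)
print(f"optimum knot point: {best:.2f}, GCV = {gcv_for_knot(best):.4f}")
```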

Keywords: East Java, nonparametric geographically weighted regression, spatial, spline approach, unemployment rate

Procedia PDF Downloads 291
632 The Signaling Power of ESG Accounting in Sub-Sahara Africa: A Dynamic Model Approach

Authors: Haruna Maama

Abstract:

Environmental, social and governance (ESG) reporting is gaining considerable attention despite being voluntary. Meanwhile, providing ESG reporting consumes resources, raising the question of its value relevance. The study examined the impact of ESG reporting on the market value of listed firms in SSA, using the annual and integrated reports of 276 listed sub-Saharan Africa (SSA) firms. The integrated reporting scores of the firms were analysed using a content analysis method. A multiple regression estimation technique using a GMM approach was employed for the analysis. The results revealed that ESG has a positive relationship with firms’ market value, suggesting that investors are interested in the ESG information disclosure of firms in SSA. This suggests that extensive ESG disclosures are attempts by firms to obtain the approval of powerful social, political and environmental stakeholders, especially institutional investors. Furthermore, the market value analysis evidence is consistent with signalling theory, which postulates that firms provide integrated reports as a signal to influence the behaviour of stakeholders. This finding reflects the value investors place on social, environmental and governance disclosures, affirming the view that conventional investors care about the social, environmental and governance issues of their potential or existing investee firms. Overall, the evidence is consistent with the prediction of signalling theory. In the context of this theory, integrated reporting is seen as part of firms’ overall competitive strategy to influence investors’ behaviour. The findings of this study make unique contributions to knowledge and practice in corporate reporting.

Keywords: environmental accounting, ESG accounting, signalling theory, sustainability reporting, sub-saharan Africa

Procedia PDF Downloads 44
631 Investigation of Several New Ionic Liquids’ Behaviour during ²¹⁰Pb/²¹⁰Bi Cherenkov Counting in Waters

Authors: Nataša Todorović, Jovana Nikolov, Ivana Stojković, Milan Vraneš, Jovana Panić, Slobodan Gadžurić

Abstract:

The detection of ²¹⁰Pb levels in aquatic environments evokes interest in various scientific studies. Its precise determination is important not only for the radiological assessment of drinking waters; the distributions of ²¹⁰Pb and ²¹⁰Po in the marine environment are also significant for assessing the removal rates of particles from the ocean and particle fluxes during transport along the coast, as well as particulate organic carbon export in the upper ocean. Measurement techniques for ²¹⁰Pb determination (gamma spectrometry, alpha spectrometry, or liquid scintillation counting (LSC)) are either time-consuming or demand expensive equipment or complicated chemical pre-treatments. However, another possibility is to measure ²¹⁰Pb on an LS counter, if it is in equilibrium with its progeny ²¹⁰Bi, through the Cherenkov counting method. It is unaffected by chemical quenching and allows easy sample preparation, but has the drawback of lower counting efficiencies than standard LSC methods, typically from 10% up to 20%. The aim of the research presented in this paper is to investigate a possible increase in the detection efficiency of Cherenkov counting during ²¹⁰Pb/²¹⁰Bi detection on a Quantulus 1220 LS counter. Considering the naturally low levels of ²¹⁰Pb in aqueous samples, the addition of ionic liquids to the counting vials with the analysed samples has the benefit of decreasing the detection limit during ²¹⁰Pb quantification. Our results demonstrated that the ionic liquid 1-butyl-3-methylimidazolium salicylate is more efficient at increasing the Cherenkov counting efficiency than the previously explored 2-hydroxypropan-1-aminium salicylate. Consequently, the impact of a few other ionic liquids that were synthesized with the same cation group (1-butyl-3-methylimidazolium benzoate, 1-butyl-3-methylimidazolium 3-hydroxybenzoate, and 1-butyl-3-methylimidazolium 4-hydroxybenzoate) was explored in order to test their potential influence on the Cherenkov counting efficiency. It was confirmed that, among the explored ones, only ionic liquids in the form of salicylates exhibit a wavelength-shifting effect. Namely, the addition of small amounts (around 0.8 g) of 1-butyl-3-methylimidazolium salicylate increases the detection efficiency from 16% to >70%, consequently reducing the detection threshold by more than four times. Moreover, the addition of ionic liquids could find application in the quantification of other radionuclides besides ²¹⁰Pb/²¹⁰Bi via the Cherenkov counting method.
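
The practical payoff of a higher counting efficiency is a lower detection threshold. As a rough illustration, the sketch below derives the Cherenkov counting efficiency from a standard measurement and compares a Currie-type minimum detectable activity at ~16% and ~70% efficiency; the count rates, counting time and the use of the Currie approximation are assumptions for illustration, not the paper's procedure.

```python
import numpy as np

# Invented measurement of a 210Pb/210Bi standard and a blank on the LS counter
standard_activity_Bq = 5.0
count_time_s = 3600.0
gross_counts_std = 13_200.0
background_counts = 540.0

efficiency = (gross_counts_std - background_counts) / (standard_activity_Bq * count_time_s)

def mda_bq(eff, bkg_counts, t_s):
    """Currie-type minimum detectable activity for a paired blank measurement."""
    return (2.71 + 4.65 * np.sqrt(bkg_counts)) / (eff * t_s)

print(f"measured Cherenkov counting efficiency = {efficiency:.1%}")
for eff in (0.16, 0.70):
    print(f"  at {eff:.0%} efficiency -> MDA = {mda_bq(eff, background_counts, count_time_s) * 1000:.0f} mBq")
```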

Keywords: liquid scintillation counting, ionic liquids, Cherenkov counting, ²¹⁰Pb/²¹⁰Bi in water

Procedia PDF Downloads 75
630 Uncertainty Assessment in Building Energy Performance

Authors: Fally Titikpina, Abderafi Charki, Antoine Caucheteux, David Bigaud

Abstract:

The building sector is one of the largest energy consumers, accounting for about 40% of the final energy consumption in the European Union. Ensuring building energy performance is a scientific, technological and sociological matter. To assess a building's energy performance, the consumption predicted or estimated during the design stage is compared with the measured consumption when the building is operational. When evaluating this performance, many buildings show significant differences between the calculated and measured consumption. In order to assess the performance accurately and ensure the thermal efficiency of the building, it is necessary to evaluate the uncertainties involved not only in measurement but also those induced by the propagation of dynamic and static input data in the model being used. The evaluation of measurement uncertainty is based on both the knowledge about the measurement process and the input quantities which influence the result of measurement. Measurement uncertainty can be evaluated within the framework of conventional statistics presented in the Guide to the Expression of Uncertainty in Measurement (GUM) as well as by Bayesian Statistical Theory (BST). Another choice is the use of numerical methods like Monte Carlo Simulation (MCS). In this paper, we propose to evaluate the uncertainty associated with the use of a simplified model for the estimation of the energy consumption of a given building. A detailed review and discussion of these three approaches (GUM, MCS and BST) is given. An office building has therefore been monitored and multiple sensors have been mounted at candidate locations to obtain the required data. The monitored zone is composed of six offices and has an overall surface of 102 m². Temperature data, electrical and heating consumption, window opening and occupancy rate are the features used in this research work.
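
Of the three approaches discussed, the Monte Carlo route is the easiest to sketch: sample the uncertain inputs of the simplified consumption model, propagate them, and read off the mean and a 95% coverage interval. The heat-loss model and the input distributions below are hypothetical placeholders, not the monitored building's model.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 100_000

# Hypothetical simplified heating-consumption model: Q = U * A * degree_hours / 1000  [kWh]
U = rng.normal(1.2, 0.15, n)                  # overall heat-loss coefficient [W/m².K]
A = rng.normal(102.0, 2.0, n)                 # heated surface [m²] (measurement uncertainty)
degree_hours = rng.normal(60_000, 6_000, n)   # K·h over the season (climate uncertainty)

Q = U * A * degree_hours / 1000.0             # propagated consumption [kWh]

mean = Q.mean()
lo, hi = np.percentile(Q, [2.5, 97.5])
print(f"estimated consumption: {mean:.0f} kWh, 95% interval [{lo:.0f}, {hi:.0f}] kWh")
```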

Keywords: building energy performance, uncertainty evaluation, GUM, bayesian approach, monte carlo method

Procedia PDF Downloads 431
629 Object Oriented Classification Based on Feature Extraction Approach for Change Detection in Coastal Ecosystem across Kochi Region

Authors: Mohit Modi, Rajiv Kumar, Manojraj Saxena, G. Ravi Shankar

Abstract:

Change detection of coastal ecosystems plays a vital role in monitoring and managing natural resources along coastal regions. The present study mainly focuses on the decadal change in the Kochi islands, connecting the urban flatland areas and the coastal regions where sand deposits have taken place. With this in view, change detection has been carried out in the Kochi area to apprehend the urban growth and industrialization leading to a decrease in the wetland ecosystem. The region lies between 76°11'19.134"E and 76°25'42.193"E and between 9°52'35.719"N and 10°5'51.575"N on the south-western coast of India. The IRS LISS-IV satellite image has been processed using a rule-based algorithm to classify the LULC and to interpret the changes between 2005 and 2015. The approach takes two steps, i.e., extracting features as a single GIS vector layer using different parametric values and then dissolving them. The multi-resolution segmentation has been carried out on scales ranging from 10 to 30. The different classes, such as aquaculture, agricultural land, built-up and wetlands, were extracted using parameters like NDVI, mean layer values and texture-based features with corresponding threshold values in a rule-set algorithm. The objects obtained in the segmentation process were visualized overlaid on the satellite image at a scale of 15. This layer was further segmented using the spectral difference segmentation rule between the objects. These individual class layers were dissolved into the basic segmented layer of the image and were interpreted in a vector-based GIS programme to achieve higher accuracy. The results show a rapid increase of 40% in the industrial area based on the industrial area statistics of 2005. There is a decrease in the wetland area, which has been converted into built-up land. New roads have been constructed, connecting the islands to urban areas as well as highways. An increase in the coastal region has been observed due to sand deposition. The outcome is well supported by quantitative assessments, which will enable a rich understanding of land use land cover change for appropriate policy intervention and further monitoring.
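
The rule-set idea above (thresholds on NDVI and other layer values) can be illustrated per pixel with plain numpy. The band arrays and threshold values below are invented for illustration and do not reproduce the study's rule set or class list.

```python
import numpy as np

rng = np.random.default_rng(9)
h, w = 100, 100

# Invented red and near-infrared reflectance layers
red = rng.uniform(0.02, 0.4, (h, w))
nir = rng.uniform(0.02, 0.6, (h, w))
ndvi = (nir - red) / (nir + red + 1e-9)

# Simple illustrative rule set (thresholds are placeholders)
classes = np.full((h, w), "other", dtype=object)
classes[ndvi > 0.4] = "vegetation/agriculture"
classes[(ndvi <= 0.4) & (ndvi > 0.05)] = "built-up/bare"
classes[ndvi <= 0.05] = "water/wetland"

labels, counts = np.unique(classes, return_counts=True)
for lab, c in zip(labels, counts):
    print(f"{lab:25s} {100.0 * c / (h * w):5.1f}%")
```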

Keywords: land use land cover, multiresolution segmentation, NDVI, object based classification

Procedia PDF Downloads 163
628 Effect of Packing Ratio on Fire Spread across Discrete Fuel Beds: An Experimental Analysis

Authors: Qianqian He, Naian Liu, Xiaodong Xie, Linhe Zhang, Yang Zhang, Weidong Yan

Abstract:

In the wild, the vegetation layer, with its exceptionally complex fuel composition and heterogeneous spatial distribution, strongly affects the rate of fire spread (ROS) and fire intensity. Clarifying the influence of fuel bed structure on fire spread behavior is of great significance for wildland fire management and prediction. The packing ratio is one of the key physical parameters describing the properties of the fuel bed. There is a threshold value of the packing ratio for ROS, but little is known about the controlling mechanism. In this study, to address this deficiency, a series of fire spread experiments were performed across a discrete fuel bed composed of regularly arranged laser-cut cardboards, with a constant wind speed and different packing ratios (0.0125-0.0375). The experiment aims to explore the relative importance of internal and surface heat transfer as the packing ratio varies. The dependence of the measured ROS on the packing ratio was largely consistent with previous research. The radiative and total heat flux data show that both internal heat transfer and surface heat transfer are enhanced with increasing packing ratio (referred to as ‘Stage 1’). The trend agrees well with the variation of the flame length. The results extracted from the video show that the flame length markedly increases with increasing packing ratio in Stage 1. Combustion intensity is suggested to be increased, which, in turn, enhances the heat radiation. The heat flux data show that surface heat transfer appears to be more important than internal heat transfer (fuel preheating inside the fuel bed) in Stage 1. On the contrary, internal heat transfer dominates the fuel preheating mechanism when the packing ratio further increases (referred to as ‘Stage 2’), because the surface heat flux remains almost stable with the packing ratio in Stage 2. As for heat convection, the flow velocity was measured using Pitot tubes both inside and on the upper surface of the fuel bed during the fire spread. Based on the gas velocity distribution ahead of the flame front, it is found that the airflow inside the fuel bed is restricted in Stage 2, which can reduce the internal heat convection in theory. However, the analysis indicates that it is not the influence of the internal flow on convection and combustion, but the decreased internal radiation per unit fuel, that is responsible for the decrease in ROS.

Keywords: discrete fuel bed, fire spread, packing ratio, wildfire

Procedia PDF Downloads 109
627 Identification of Outliers in Flood Frequency Analysis: Comparison of Original and Multiple Grubbs-Beck Test

Authors: Ayesha S. Rahman, Khaled Haddad, Ataur Rahman

Abstract:

At-site flood frequency analysis is used to estimate flood quantiles when the at-site record length is reasonably long. In Australia, the FLIKE software has been introduced for at-site flood frequency analysis. The advantage of FLIKE is that, for a given application, the user can compare a number of the most commonly adopted probability distributions and parameter estimation methods relatively quickly using a Windows interface. The new version of FLIKE incorporates the multiple Grubbs and Beck test, which can identify multiple potentially influential low flows. This paper presents a case study considering six catchments in eastern Australia, which compares two outlier identification tests (the original Grubbs and Beck test and the multiple Grubbs and Beck test) and two commonly applied probability distributions (Generalized Extreme Value (GEV) and Log Pearson type 3 (LP3)) using the FLIKE software. It has been found that the multiple Grubbs and Beck test, when used with the LP3 distribution, provides more accurate flood quantile estimates than when the LP3 distribution is used with the original Grubbs and Beck test. Between these two methods, the differences in flood quantile estimates have been found to be up to 61% for the six study catchments. It has also been found that the GEV distribution (with L moments) and the LP3 distribution with the multiple Grubbs and Beck test provide quite similar results in most of the cases; however, a difference of up to 38% has been noted in the flood quantile for an annual exceedance probability (AEP) of 1 in 100 for one catchment. These findings need to be confirmed with a greater number of stations across other Australian states.
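
For context, an LP3 flood quantile of the kind compared above is obtained by fitting a Pearson type 3 distribution to the base-10 logarithms of the annual maxima and reading off the 1-in-100 AEP value; low outliers flagged by a Grubbs-Beck-type test would be censored before the fit. A minimal scipy sketch on synthetic annual maxima (the multiple Grubbs-Beck procedure implemented in FLIKE is not reproduced here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(10)

# Synthetic annual maximum flood series (m³/s)
annual_max = rng.lognormal(mean=5.0, sigma=0.7, size=60)
log_q = np.log10(annual_max)

# Fit LP3: Pearson type 3 distribution on the log-flows (skew, loc, scale by MLE)
skew, loc, scale = stats.pearson3.fit(log_q)

# 1-in-100 AEP quantile: 99th percentile of the fitted distribution, back-transformed
q100 = 10 ** stats.pearson3.ppf(0.99, skew, loc=loc, scale=scale)
print(f"LP3 estimate of the 1-in-100 AEP flood: {q100:.0f} m³/s")
```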

Keywords: floods, FLIKE, probability distributions, flood frequency, outlier

Procedia PDF Downloads 415