Search results for: prediction of publications
476 Comparing Practices of Swimming in the Netherlands against a Global Model for Integrated Development of Mass and High Performance Sport: Perceptions of Coaches
Authors: Melissa de Zeeuw, Peter Smolianov, Arnold Bohl
Abstract:
This study was designed to help improve international performance as well as to increase swimming participation in the Netherlands. Over 200 sources of literature on sport delivery systems from 28 Australasian, North and South American, and Western and Eastern European countries were analyzed to construct a globally applicable model of high performance swimming integrated with mass participation, comprising the following seven elements across three levels: Micro level (operations, processes, and methodologies for development of individual athletes): 1. Talent search and development, 2. Advanced athlete support. Meso level (infrastructures, personnel, and services enabling sport programs): 3. Training centers, 4. Competition systems, 5. Intellectual services. Macro level (socio-economic, cultural, legislative, and organizational): 6. Partnerships with supporting agencies, 7. Balanced and integrated funding and structures of mass and elite sport. This model emerged from the integration of instruments that have been used to analyse and compare national sport systems. The model has received scholarly validation and was shown to be a framework for program analysis that is not culturally bound. It has recently been accepted as a model for further understanding North American sport systems, including (in chronological order of publication) US rugby, tennis, soccer, swimming and volleyball. The above model was used to design a questionnaire of 42 statements reflecting desired practices. The statements were validated by 12 international experts, including executives from sport governing bodies, academics who have published on high performance and sport development, and swimming coaches and administrators. In this study, both highly structured and open-ended qualitative analysis tools were used. This included a survey of swim coaches in which open responses accompanied structured questions. After collection of the surveys, semi-structured discussions with Federation coaches were conducted to add triangulation to the findings. Lastly, a content analysis of Dutch Swimming's website and organizational documentation was conducted. A representative sample of 1,600 Dutch swim coaches and administrators was reached via email addresses from the Royal Dutch Swimming Federation's database. Fully completed questionnaires were returned by 122 coaches from all of the country's key regions, for a response rate of 7.63% - higher than the response rates of the previously mentioned US studies which used the same model and method. Results suggest possible enhancements at the macro level (e.g., greater public and corporate support to prepare and hire more coaches and to address the lack of facilities, funds and publicity at the mass participation level in order to make swimming affordable for all), at the meso level (e.g., comprehensive education for all coaches and a full spectrum of swimming pools, particularly 50-meter pools), and at the micro level (e.g., better preparation of athletes for a future outside swimming and better use of swimmers to stimulate swimming development). Best Dutch swimming management practices (e.g., comprehensive support to the most talented swimmers, who win Olympic medals) as well as relevant international practices available for transfer to the Netherlands (e.g., high school competitions) are discussed.
Keywords: sport development, high performance, mass participation, swimming
475 Recurrent Neural Networks for Complex Survival Models
Authors: Pius Marthin, Nihal Ata Tutkun
Abstract:
Survival analysis has become one of the paramount procedures in the modeling of time-to-event data. When complex survival problems are encountered, the traditional approach remains limited in accounting for the complex correlational structure between the covariates and the outcome, due to strong assumptions that limit the inference and prediction ability of the resulting models. Several studies exist on deep learning approaches to survival modeling; however, their application to complex survival problems still needs improvement. In addition, the existing models fail to fully address the complexity of the data structure and are subject to noise and redundant information. In this study, we design a deep learning technique (CmpXRnnSurv_AE) that removes the limitations imposed by traditional approaches and addresses the above issues to jointly predict the risk-specific probabilities and the survival function for recurrent events with competing risks. We introduce a component termed Risks Information Weights (RIW) as an attention mechanism to compute the weighted cumulative incidence function (WCIF), and an external auto-encoder (ExternalAE) as a feature selector to extract the complex characteristics among the set of covariates responsible for the cause-specific events. We train our model using synthetic and real data sets and employ the appropriate metrics for complex survival models for evaluation. As benchmarks, we selected both traditional and machine learning models, and our model demonstrates better performance across all datasets.
Keywords: cumulative incidence function (CIF), risk information weight (RIW), autoencoders (AE), survival analysis, recurrent events with competing risks, recurrent neural networks (RNN), long short-term memory (LSTM), self-attention, multilayer perceptrons (MLPs)
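As an illustration of the kind of building block involved (not the authors' CmpXRnnSurv_AE itself), the sketch below combines an LSTM with a softmax attention over time steps that weights the event history before predicting cause-specific risk probabilities; all layer sizes, the two-risk setup, and the class name are assumptions.

```python
# Minimal sketch of an attention-weighted LSTM risk head; an illustrative
# assumption, not the published CmpXRnnSurv_AE architecture.
import torch
import torch.nn as nn

class AttentionRiskLSTM(nn.Module):
    def __init__(self, n_features, hidden=32, n_risks=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)        # scores each time step
        self.head = nn.Linear(hidden, n_risks)  # cause-specific risk logits

    def forward(self, x):                        # x: (batch, time, features)
        h, _ = self.lstm(x)                      # (batch, time, hidden)
        w = torch.softmax(self.attn(h), dim=1)   # attention weights over time
        context = (w * h).sum(dim=1)             # weighted summary of history
        return torch.softmax(self.head(context), dim=-1)  # risk probabilities

model = AttentionRiskLSTM(n_features=10)
risks = model(torch.randn(4, 20, 10))            # 4 subjects, 20 time steps
print(risks.shape)                               # torch.Size([4, 2])
```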
474 Nuclear Fuel Safety Threshold Determined by Logistic Regression Plus Uncertainty
Authors: D. S. Gomes, A. T. Silva
Abstract:
Analysis of the uncertainty quantification related to nuclear safety margins applied to the nuclear reactor is an important concept to prevent future radioactive accidents. Nuclear fuel performance codes may involve tolerance levels determined by traditional deterministic models, producing acceptable results at burn cycles under 62 GWd/MTU. The behavior of nuclear fuel can be simulated by applying a series of material properties under irradiation and physics models to calculate the safety limits. In this study, theoretical predictions of nuclear fuel failure under transient conditions investigate extended radiation cycles at 75 GWd/MTU, considering the behavior of fuel rods in light-water reactors under reactivity accident conditions. The fuel pellet can melt due to the quick increase of reactivity during a transient. Large power excursions in the reactor are the subject of interest, leading to a treatment that is known as the Fuchs-Hansen model. The point kinetics neutron equations show the characteristics of non-linear differential equations. In this investigation, multivariate logistic regression is employed for a probabilistic forecast of fuel failure. A comparison of computational simulations and experimental results showed acceptable agreement. The experiments carried out used pre-irradiated fuel rods subjected to a rapid energy pulse, which exhibits the same behavior as during a nuclear accident. The propagation of uncertainty utilizes Wilks' formulation. The variables chosen as essential to failure prediction were the fuel burnup, the applied peak power, the pulse width, the oxidation layer thickness, and the cladding type.
Keywords: logistic regression, reactivity-initiated accident, safety margins, uncertainty propagation
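A minimal sketch of a multivariate logistic regression over the five predictors named above follows; the synthetic data, value ranges, and encoding are placeholders, not the study's dataset.

```python
# Sketch of a logistic fuel-failure model; all numbers are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = np.column_stack([
    rng.uniform(20, 75, 500),     # fuel burnup (GWd/MTU)
    rng.uniform(50, 200, 500),    # applied peak power
    rng.uniform(5, 80, 500),      # pulse width
    rng.uniform(5, 100, 500),     # oxidation layer thickness
    rng.integers(0, 2, 500),      # cladding type (encoded)
])
y = rng.integers(0, 2, 500)       # 1 = rod failure (placeholder labels)

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)
print(model.predict_proba(X[:3])[:, 1])   # failure probabilities
```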
473 Sensitivity Analysis of the Thermal Properties in Early Age Modeling of Mass Concrete
Authors: Farzad Danaei, Yilmaz Akkaya
Abstract:
In many civil engineering applications, especially in the construction of large concrete structures, the early-age behavior of concrete has been shown to be a crucial problem. The uneven rise in temperature within the concrete in these constructions is the fundamental issue for quality control. Therefore, developing accurate and fast temperature prediction models is essential. The thermal properties of concrete fluctuate over time as it hardens, but taking all of these fluctuations into account makes numerical models more complex. Experimental measurement of the thermal properties under laboratory conditions also cannot accurately predict the variance of these properties under site conditions. Therefore, the specific heat capacity and the heat conductivity coefficient are two variables that are considered constant values in many of the models previously recommended. The proposed equations demonstrate that these two quantities decrease linearly as cement hydrates, and their values are related to the degree of hydration. The effects of changing the thermal conductivity and specific heat capacity values on the maximum temperature, and on the time it takes for the concrete to reach that temperature, are examined in this study using numerical sensitivity analysis, and the results are compared to models that take a fixed value for these two thermal properties. The current study is conducted on 7 different mix designs of concrete with varying amounts of supplementary cementitious materials (fly ash and ground granulated blast furnace slag). It is concluded that the maximum temperature will not change as a result of assuming a constant conductivity coefficient, but a variable specific heat capacity must be taken into account; regarding the time at which the concrete's central node reaches its maximum value, a variable specific heat capacity can likewise have a considerable effect on the final result. Also, the usage of GGBFS has more influence compared to fly ash.
Keywords: early-age concrete, mass concrete, specific heat capacity, thermal conductivity coefficient
472 Correlation between Neck Circumference and Other Anthropometric Indices as a Predictor of Obesity
Authors: Madhur Verma, Meena Rajput, Kamal Kishore
Abstract:
Background: The general view that obesity is a problem of prosperous Western countries has been refuted, with substantial evidence showing that middle-income countries like India are now at the heart of a fat explosion. Neck circumference has evolved as a promising index to measure obesity because of the convenience of its use, even in culture-sensitive populations. Objectives: To determine whether neck circumference (NC) was associated with overweight and obesity and contributed to their prediction like other classical anthropometric indices. Methodology: Cross-sectional study consisting of 1080 adults (> 19 years) selected through multi-stage random sampling between August 2013 and September 2014 using a pretested semi-structured questionnaire. After recruitment, the demographic and anthropometric parameters [BMI, waist and hip circumference (WC, HC), waist-to-hip ratio (WHR), waist-to-height ratio (WHtR), body fat percentage (BF%), neck circumference (NC)] were recorded and calculated as per standard procedures. Analysis was done using appropriate statistical tests (SPSS, version 21). Results: Mean age of study participants was 44.55±15.65 years. Overall prevalence of overweight and obesity as per the modified criteria for Asian Indians (BMI ≥ 23 kg/m2) was 49.62% (females, 51.48%; males, 47.77%). Also, the numbers of participants having high WHR, WHtR, BF%, WC and NC were 827 (76.57%), 530 (49.07%), 513 (47.5%), 537 (49.72%) and 376 (34.81%), respectively. Variation of NC, BMI and BF% with age was non-significant. In both genders, as per Pearson's correlational analysis, neck circumference was positively correlated with BMI (men, r=0.670 {p < 0.05}; women, r=0.564 {p < 0.05}), BF% (men, r=0.407 {p < 0.05}; women, r=0.283 {p < 0.05}), WC (men, r=0.598 {p < 0.05}; women, r=0.615 {p < 0.05}), HC (men, r=0.512 {p < 0.05}; women, r=0.523 {p < 0.05}), WHR (men, r=0.380 {p > 0.05}; women, r=0.022 {p > 0.05}) and WHtR (men, r=0.318 {p < 0.05}; women, r=0.396 {p < 0.05}). On ROC analysis, NC showed good discriminatory power to identify obesity (AUC for males, 0.822; for females, 0.873; p-value < 0.001), with maximum sensitivity and specificity at a cut-off value of 36.55 cm for males and 34.05 cm for females. Conclusion: NC has fair validity as a community-based screener for overweight and obese individuals in the study context and also correlated well with other classical indices.
Keywords: neck circumference, obesity, anthropometric indices, body fat percentage
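The ROC cut-off search described above can be sketched as follows; the simulated neck-circumference values and the Youden-index criterion (maximizing sensitivity + specificity - 1) are illustrative assumptions.

```python
# Sketch of the ROC-based cut-off search; data are simulated, not the study's.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
nc = np.r_[rng.normal(33, 2, 300), rng.normal(37, 2, 200)]  # neck circumference (cm)
obese = np.r_[np.zeros(300), np.ones(200)]                   # 1 = obese (BMI-based)

fpr, tpr, thresholds = roc_curve(obese, nc)
j = tpr - fpr                                 # Youden's J at each threshold
best = thresholds[np.argmax(j)]
print(f"AUC = {roc_auc_score(obese, nc):.3f}, best cut-off = {best:.2f} cm")
```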
471 In silico Subtractive Genomics Approach for Identification of Strain-Specific Putative Drug Targets among Hypothetical Proteins of Drug-Resistant Klebsiella pneumoniae Strain 825795-1
Authors: Umairah Natasya Binti Mohd Omeershffudin, Suresh Kumar
Abstract:
Klebsiella pneumoniae is a Gram-negative enteric bacterium that causes nosocomial and urinary tract infections. Of particular concern is the global emergence of multidrug-resistant (MDR) strains of Klebsiella pneumoniae. Characterization of antibiotic resistance determinants at the genomic level plays a critical role in understanding, and potentially controlling, the spread of multidrug-resistant (MDR) pathogens. In this study, drug-resistant Klebsiella pneumoniae strain 825795-1 was investigated with extensive computational approaches aimed at identifying novel drug targets among hypothetical proteins. We analyzed the 1099 hypothetical proteins available in the genome. We used an in-silico genome subtraction methodology to design potential and pathogen-specific drug targets against Klebsiella pneumoniae. We employed bioinformatics tools to subtract the strain-specific paralogous and host-specific homologous sequences from the bacterial proteome. The sorted 645 proteins were further refined to identify the essential genes in the pathogenic bacterium using the database of essential genes (DEG). We found 135 unique essential proteins in the target proteome that could be utilized as novel targets to design newer drugs. Further, we identified 49 cytoplasmic proteins as potential drug targets through sub-cellular localization prediction. Furthermore, we investigated these proteins in the DrugBank database, and 11 of the unique essential proteins showed druggability according to the FDA-approved drug bank databases, with diverse broad-spectrum properties. The results of this study will facilitate the discovery of new drugs against Klebsiella pneumoniae.
Keywords: pneumonia, drug target, hypothetical protein, subtractive genomics
470 Predicting Stem Borer Density in Maize Using RapidEye Data and Generalized Linear Models
Authors: Elfatih M. Abdel-Rahman, Tobias Landmann, Richard Kyalo, George Ong’amo, Bruno Le Ru
Abstract:
Maize (Zea mays L.) is a major staple food crop in Africa, particularly in the eastern region of the continent. The maize growing area in Africa spans over 25 million ha, and 84% of rural households in Africa cultivate maize, mainly as a means to generate food and income. Average maize yields in Sub-Saharan Africa are 1.4 t/ha, as compared to a global average of 2.5–3.9 t/ha, due to biotic and abiotic constraints. Amongst the biotic production constraints in Africa, stem borers are the most injurious. In East Africa, yield losses due to stem borers are currently estimated at between 12% and 40% of the total production. The objective of the present study was therefore to predict stem borer larvae density in maize fields using RapidEye reflectance data and generalized linear models (GLMs). RapidEye images were captured for a test site in Kenya (Machakos) in January and in February 2015. Stem borer larva numbers were modeled using GLMs assuming Poisson (Po) and negative binomial (NB) error distributions with a logarithmic link. Root mean square error (RMSE) and ratio of prediction to deviation (RPD) statistics were employed to assess the models' performance using a leave-one-out cross-validation approach. Results showed that NB models outperformed Po ones at all study sites. RMSE and RPD ranged between 0.95 and 2.70, and between 2.39 and 6.81, respectively. Overall, all models performed similarly whether the January or the February image data were used. We conclude that reflectance data from RapidEye can be used to estimate stem borer larvae density. The developed models could improve decision making regarding controlling maize stem borers using various integrated pest management (IPM) protocols.
Keywords: maize, stem borers, density, RapidEye, GLM
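A minimal sketch of the Poisson-versus-negative-binomial GLM comparison follows, with simulated band reflectances standing in for the RapidEye data; both families use the (default) log link.

```python
# Sketch of the count-model comparison; data are simulated placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
X = sm.add_constant(rng.uniform(0, 1, size=(120, 5)))   # 5 RapidEye bands
counts = rng.poisson(lam=np.exp(X @ np.r_[0.5, 1, -1, 0.3, 0.2, -0.4]))

poisson_fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()
nb_fit = sm.GLM(counts, X, family=sm.families.NegativeBinomial()).fit()

rmse = lambda fit: np.sqrt(np.mean((counts - fit.fittedvalues) ** 2))
print(f"Poisson RMSE: {rmse(poisson_fit):.2f}, NB RMSE: {rmse(nb_fit):.2f}")
```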
469 Linear Decoding Applied to V5/MT Neuronal Activity on Past Trials Predicts Current Sensory Choices
Authors: Ben Hadj Hassen Sameh, Gaillard Corentin, Andrew Parker, Kristine Krug
Abstract:
Perceptual decisions about sequences of sensory stimuli often show serial dependence: the behavioural choice on one trial is often affected by the choice on previous trials. We investigated whether the neuronal signals in extrastriate visual area V5/MT on preceding trials might influence choice on the current trial and thereby reveal the neuronal mechanisms of sequential choice effects. We analysed data from 30 single neurons recorded from V5/MT in three Rhesus monkeys making sequential choices about the direction of rotation of a three-dimensional cylinder. We focused exclusively on the responses of neurons that showed significant choice-related firing (mean choice probability = 0.73) while the monkey viewed perceptually ambiguous stimuli. Application of a wavelet transform to the choice-related firing revealed differences in the frequency band of neuronal activity that depended on whether the previous trial resulted in a correct choice for an unambiguous stimulus that was in the neuron's preferred direction (low alpha and high beta and gamma) or non-preferred direction (high alpha and low beta and gamma). To probe this in further detail, we applied a regularized linear decoder to predict the choice on an ambiguous trial by referencing the neuronal activity of the preceding unambiguous trial. Neuronal activity on a previous trial provided a significant prediction of the current choice (61% correct, 95% CI ~52%), even when limiting the analysis to preceding trials that were correct and rewarded. These findings provide a potential neuronal signature of sequential choice effects in the primate visual cortex.
Keywords: perception, decision making, attention, decoding, visual system
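A regularized linear decoder of the kind described can be sketched as follows; the firing-derived features and the cross-validation setup are illustrative assumptions, not the recorded data.

```python
# Sketch of an L2-regularized linear decoder of current choice from
# previous-trial activity; features and labels are simulated placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
prev_activity = rng.normal(size=(200, 8))    # 8 features from the preceding trial
choice = (prev_activity[:, 0] + rng.normal(scale=2, size=200) > 0).astype(int)

decoder = LogisticRegression(penalty="l2", C=1.0)   # ridge-style regularization
acc = cross_val_score(decoder, prev_activity, choice, cv=10)
print(f"decoding accuracy: {acc.mean():.2f} ± {acc.std():.2f}")
```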
468 Numerical Investigation of Entropy Signatures in Fluid Turbulence: Poisson Equation for Pressure Transformation from Navier-Stokes Equation
Authors: Samuel Ahamefula Mba
Abstract:
Fluid turbulence is a complex and nonlinear phenomenon that occurs in various natural and industrial processes. Understanding turbulence remains a challenging task due to its intricate nature. One approach to gaining insight into turbulence is through the study of entropy, which quantifies the disorder or randomness of a system. This research presents a numerical investigation of entropy signatures in fluid turbulence. The aim of the work is to develop a numerical framework to describe and analyse fluid turbulence in terms of entropy. The framework decomposes the turbulent flow field into different scales, ranging from large energy-containing eddies to small dissipative structures, thus establishing a correlation between entropy and other turbulence statistics. This entropy-based framework provides a powerful tool for understanding the underlying mechanisms driving turbulence and its impact on various phenomena. The work necessitates the derivation of the Poisson equation for pressure from the Navier-Stokes equation and the use of Chebyshev finite-difference techniques to resolve it effectively. For the mathematical analysis, bounded domains with smooth solutions and non-periodic boundary conditions are considered. A hybrid computational approach combining direct numerical simulation (DNS) and large eddy simulation with wall models (LES-WM) is utilized to perform extensive simulations of turbulent flows. The potential impact ranges from industrial process optimization to improved prediction of weather patterns.
Keywords: turbulence, Navier-Stokes equation, Poisson pressure equation, numerical investigation, Chebyshev-finite difference, hybrid computational approach, large Eddy simulation with wall models, direct numerical simulation
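For reference (the abstract names the derivation but not the equation), taking the divergence of the Navier-Stokes momentum equation and assuming incompressibility eliminates the unsteady and viscous terms, leaving the pressure Poisson equation; constant density and incompressible flow are assumptions here.

```latex
% Momentum equation: \partial_t u_i + u_j \partial_j u_i
%                    = -\tfrac{1}{\rho}\,\partial_i p + \nu \nabla^2 u_i
% Taking the divergence and using \partial_i u_i = 0 gives:
\nabla^2 p \;=\; -\rho\,\frac{\partial u_i}{\partial x_j}\,\frac{\partial u_j}{\partial x_i}
```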
467 CO₂ Absorption Studies Using Amine Solvents with Fourier Transform Infrared Analysis
Authors: Avoseh Funmilola, Osman Khalid, Wayne Nelson, Paramespri Naidoo, Deresh Ramjugernath
Abstract:
The increasing global atmospheric temperature is of great concern, and this has led to the development of technologies to reduce the emission of greenhouse gases into the atmosphere. Flue gas emissions from fossil fuel combustion are major sources of greenhouse gases. One of the ways to reduce the emission of CO₂ from flue gases is by a post-combustion capture process, and this can be done by absorbing the gas into suitable chemical solvents before emitting the gas into the atmosphere. Alkanolamines are promising solvents for this capture process. Vapour-liquid equilibrium of CO₂-alkanolamine systems is often represented by CO₂ loading and the partial pressure of CO₂ without considering the liquid phase. The liquid phase of this system is a complex one comprising nine species. Online analysis of the process is important to monitor the concentrations of the reacting and product species in the liquid phase. Liquid-phase analysis of CO₂-diethanolamine (DEA) solutions was performed by attenuated total reflection Fourier transform infrared (ATR-FTIR) spectroscopy. A robust calibration was performed for the CO₂-aqueous DEA system prior to an online monitoring experiment. The partial least squares regression method was used for the analysis of the calibration spectra obtained. The models obtained were used for the prediction of DEA and CO₂ concentrations in the online monitoring experiment. The experiment was performed with a newly built recirculating experimental setup in the laboratory. The setup consists of a 750 ml equilibrium cell and an ATR-FTIR liquid flow cell. Measurements were performed at 40.0°C. The results obtained indicated that FTIR spectroscopy combined with the partial least squares method is an effective tool for online monitoring of speciation.
Keywords: ATR-FTIR, CO₂ capture, online analysis, PLS regression
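A minimal sketch of the PLS calibration step follows, assuming simulated spectra and two target concentrations (DEA and CO₂) in place of the measured ATR-FTIR data; the wavenumber count and the number of latent components are guesses.

```python
# Sketch of PLS calibration: absorbance spectra -> species concentrations.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(4)
spectra = rng.normal(size=(60, 400))             # 60 calibration spectra, 400 wavenumbers
conc = spectra[:, :5] @ rng.normal(size=(5, 2))  # [DEA, CO2] concentrations (synthetic)

pls = PLSRegression(n_components=5)
pred = cross_val_predict(pls, spectra, conc, cv=10)
rmsep = np.sqrt(((pred - conc) ** 2).mean(axis=0))
print("cross-validated RMSEP per species:", rmsep)
```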
466 Teaching Practices for Subverting Significant Retentive Learner Errors in Arithmetic
Authors: Michael Lousis
Abstract:
The systematic identification of the most conspicuous and significant errors made by learners over three years of testing of their progress in learning Arithmetic, throughout the development of the Kassel Project in England and Greece, was accomplished. How retentive these errors were over the three years of the officially provided school instruction of Arithmetic in these countries has also been shown. The learners' errors in Arithmetic stemmed from a sample comprised of two hundred (200) English students and one hundred and fifty (150) Greek students. The sample was purposefully selected according to the students' participation in each testing session in the development of the three-year project, in both Arithmetic and Algebra simultaneously. Specific teaching practices have been devised and are presented in this study for subverting these learners' errors, which were found to be retentive at the level of the nationally provided mathematical education of each country. The invention and development of these proposed teaching practices were founded on the rationale of the theoretical accounts concerning the explanation, prediction and control of the errors, on the conceptual metaphor, and on an analysis that tried to identify the required cognitive components and skills of the specific tasks, in terms of Psychology and Cognitive Science as applied to information-processing. The aim of the implementation of these instructional practices is not only the subversion of these errors but the achievement of mathematical competence, as this was defined to be constituted of three elements: appropriate representations - appropriate meaning - appropriately developed schemata. However, praxis is of paramount importance, because there is no 'real-truth' independent of science and because praxis serves as quality control when it takes the form of a cognitive method.
Keywords: arithmetic, cognitive science, cognitive psychology, information-processing paradigm, Kassel project, level of the nationally provided mathematical education, praxis, remedial mathematical teaching practices, retentiveness of errors
465 A 3D Cell-Based Biosensor for Real-Time and Non-Invasive Monitoring of 3D Cell Viability and Drug Screening
Authors: Yuxiang Pan, Yong Qiu, Chenlei Gu, Ping Wang
Abstract:
In the past decade, three-dimensional (3D) tumor cell models have attracted increasing interest in the field of drug screening due to their great advantages in simulating more accurately the heterogeneous tumor behavior in vivo. Drug sensitivity testing based on 3D tumor cell models can provide more reliable in vivo efficacy prediction. The gold standard, fluorescence staining, can hardly achieve real-time, label-free monitoring of the viability of 3D tumor cell models. In this study, a micro-groove impedance sensor (MGIS) was specially developed for dynamic and non-invasive monitoring of 3D cell viability. 3D tumor cells were trapped in micro-grooves with opposite gold electrodes for in-situ impedance measurement. A change in the live cell number causes an inversely proportional change in the impedance magnitude of the entire cell/Matrigel construct, reflecting the proliferation and apoptosis of the 3D cells. It was confirmed that 3D cell viability detected by the MGIS platform is highly consistent with standard live/dead staining. Furthermore, the accuracy of the MGIS platform was demonstrated quantitatively using a 3D lung cancer model and sophisticated drug sensitivity testing. In addition, the parameters of the micro-groove impedance chip processing and measurement experiments were optimized in detail. The results demonstrated that the MGIS-based 3D cell biosensor would be a promising platform to improve the efficiency and accuracy of cell-based anti-cancer drug screening in vitro.
Keywords: micro-groove impedance sensor, 3D cell-based biosensors, 3D cell viability, micro-electromechanical systems
464 Evaluation of NASA POWER and CRU Precipitation and Temperature Datasets over a Desert-prone Yobe River Basin: An Investigation of the Impact of Drought in the North-East Arid Zone of Nigeria
Authors: Yusuf Dawa Sidi, Abdulrahman Bulama Bizi
Abstract:
The most dependable and precise source of climate data is often gauge observation. However, long-term records of gauge observations are unavailable in many regions around the world. In recent years, a number of gridded climate datasets with high spatial and temporal resolutions have emerged as viable alternatives to gauge-based measurements. However, it is crucial to thoroughly evaluate their performance prior to utilising them in hydroclimatic applications. Therefore, this study aims to assess the effectiveness of the NASA Prediction of Worldwide Energy Resources (NASA POWER) and Climatic Research Unit (CRU) datasets in accurately estimating precipitation and temperature patterns within the dry region of Nigeria from 1990 to 2020. The study employs widely used statistical metrics and the Standardised Precipitation Index (SPI) to effectively capture the monthly variability of precipitation and temperature and inter-annual anomalies in rainfall. The findings suggest that CRU exhibited superior performance compared to NASA POWER in terms of monthly precipitation and minimum and maximum temperatures, demonstrating a high correlation and much lower error values for both RMSE and MAE. Nevertheless, NASA POWER exhibited moderate agreement with gauge observations in replicating monthly precipitation. The analysis of the SPI reveals that the CRU product exhibits superior performance compared to NASA POWER in accurately reflecting inter-annual variations in rainfall anomalies. The findings of this study indicate that the CRU gridded product is the more favourable of the two gridded precipitation products.
Keywords: CRU, climate change, precipitation, SPI, temperature
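A basic SPI computation can be sketched as follows; the gamma-fit-then-normal-transform route is standard, while the simulated rainfall series and the neglect of zero-rainfall months and aggregation windows are simplifying assumptions.

```python
# Sketch of a minimal SPI: gamma fit, then standard-normal transform.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
precip = rng.gamma(shape=2.0, scale=30.0, size=31 * 12)   # monthly totals, 1990-2020

a, loc, scale = stats.gamma.fit(precip, floc=0)           # fit gamma, location fixed at 0
cdf = stats.gamma.cdf(precip, a, loc=loc, scale=scale)    # non-exceedance probability
spi = stats.norm.ppf(cdf)                                 # standard-normal quantiles

print(f"driest month SPI: {spi.min():.2f}, wettest: {spi.max():.2f}")
```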
463 Dispersion Rate of Spilled Oil in Water Column under Non-Breaking Water Waves
Authors: Hanifeh Imanian, Morteza Kolahdoozan
Abstract:
The purpose of this study is to present a mathematical expression for calculating the dispersion rate of spilled oil in the water column under non-breaking waves. In this regard, a multiphase numerical model, in which the wave and oil phases are computed concurrently and whose hydraulic calculations have proven accuracy, is applied. More than 200 different scenarios of oil spilling in wavy waters were simulated using the multiphase numerical model, and the outcomes were collected in a database. The recorded results were investigated to identify the major parameters affecting vertical oil dispersion, and finally six parameters were identified as the main independent factors. Furthermore, statistical tests were conducted to identify any relationship between the dependent variable (dispersed oil mass in the water column) and the independent variables (water wave specifications, comprising height, length and wave period, and spilled oil characteristics, including density, viscosity and spilled oil mass). Finally, a mathematical-statistical relationship is proposed to predict dispersed oil in marine waters. To verify the proposed relationship, a laboratory case available in the literature was selected; the oil mass rate penetrating the water body computed by the suggested regression showed good agreement with the experimental data. The validated mathematical-statistical expression is a useful tool for oil dispersion prediction in oil spill events in marine areas.
Keywords: dispersion, marine environment, mathematical-statistical relationship, oil spill
462 The Use of Non-Parametric Bootstrap in Computing of Microbial Risk Assessment from Lettuce Consumption Irrigated with Contaminated Water by Sanitary Sewage in Infulene Valley
Authors: Mario Tauzene Afonso Matangue, Ivan Andres Sanchez Ortiz
Abstract:
The Metropolitan Area of Maputo (Mozambique's capital city) is located in a semi-arid zone (800 mm annual rainfall) and has 1,101,170 inhabitants. On the west side are the flatlands of Infulene, where the Mulauze River flows towards the Indian Ocean, receiving at this site the stormwater contaminated with sanitary sewage from Maputo, transported through a concrete open channel. In Infulene, local communities grow salad crops such as tomato, onion, garlic, lettuce, and cabbage, which are then commercialized and consumed in several markets in Maputo City. Lettuce is the most daily consumed salad crop in different meals, generally in fast food, breakfasts, lunches, and dinners. However, the risk of infection by several pathogens due to the consumption of lettuce, using Quantitative Microbial Risk Assessment (QMRA) tools, is still unknown, since there are few studies or publications concerning this matter in Mozambique. This work is aimed at determining the annual risk arising from the consumption of lettuce grown in the Infulene valley, in Maputo, using QMRA tools. The exposure model was constructed upon the volume of contaminated water remaining on the lettuce leaves, the empirical relations between the number of pathogens and the indicator microorganism (E. coli), the consumption of lettuce (g), and the reduction of pathogens (days). The reference pathogens were Vibrio cholerae, Cryptosporidium, norovirus, and Ascaris. Water quality samples (E. coli) were collected from the stormwater channel from January 2016 to December 2018, comprising 65 samples, and the urban lettuce consumption data were collected through an inquiry in the Maputo metropolis covering 350 persons. A non-parametric bootstrap was performed, involving 10,000 iterations over the collected dataset, namely water quality (E. coli) and lettuce consumption. The dose-response models were: exponential for Cryptosporidium, the Kummer confluent hypergeometric function (₁F₁) for Vibrio and Ascaris, and the Gaussian hypergeometric function (₂F₁(a,b;c;z)) for norovirus. The annual infection risk estimates were performed using R 3.6.0 (R Core Team) software by Monte Carlo (Latin hypercube) sampling involving 10,000 iterations. The annual infection risk values, expressed by the median and the 95th percentile, per person per year (pppy), arising from the consumption of lettuce are as follows: Vibrio cholerae (1.00, 1.00), Cryptosporidium (3.91x10⁻³, 9.72x10⁻³), norovirus (5.22x10⁻¹, 9.99x10⁻¹), and Ascaris (2.59x10⁻¹, 9.65x10⁻¹). Thus, the consumption of the lettuce would result in greater risks than the tolerable levels (< 10⁻³ pppy or 10⁻⁶ DALY) for all pathogens, and Vibrio cholerae is the most virulent pathogen according to the single-hit models, followed by Ascaris lumbricoides and norovirus. The sensitivity analysis carried out in this work pointed out that, in the whole QMRA, the most important input variable was the reduction of pathogens between harvest and consumption (Spearman rank value 0.69), followed by water quality (Spearman rank value 0.69). The decision-makers (the Mozambique Government) must strengthen the prevention measures related to pathogen reduction in lettuce (i.e., washing) and engage in wastewater treatment engineering.
Keywords: annual infection risk, lettuce, non-parametric bootstrapping, quantitative microbial risk assessment tools
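The bootstrap-plus-Monte-Carlo machinery can be sketched for the exponential dose-response case; every numerical value below (E. coli levels, pathogen-to-indicator ratio, intake figures, dose-response parameter) is an illustrative assumption, not the study's data.

```python
# Sketch of non-parametric bootstrap feeding a Monte Carlo annual-risk
# estimate with the exponential model P_inf(d) = 1 - exp(-r * d).
import numpy as np

rng = np.random.default_rng(6)
ecoli = rng.lognormal(mean=10, sigma=1.5, size=65)   # observed E. coli (CFU/100 mL)
consumption = rng.normal(50, 15, size=350).clip(5)   # lettuce intake (g/day), survey

n_iter = 10_000
ecoli_bs = rng.choice(ecoli, size=n_iter, replace=True)    # bootstrap resampling
cons_bs = rng.choice(consumption, size=n_iter, replace=True)

pathogens_per_g = ecoli_bs * 1e-5        # assumed pathogen-to-indicator ratio
dose = pathogens_per_g * cons_bs          # daily ingested dose
r = 0.0042                                # assumed exponential parameter
p_daily = 1 - np.exp(-r * dose)           # single-hit infection probability
p_annual = 1 - (1 - p_daily) ** 365       # annualized risk

print(f"median: {np.median(p_annual):.2e}, 95th pct: {np.percentile(p_annual, 95):.2e}")
```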
461 Recent Findings of Late Bronze Age Mining and Archaeometallurgy Activities in the Mountain Region of Colchis (Southern Lechkhumi, Georgia)
Authors: Rusudan Chagelishvili, Nino Sulava, Tamar Beridze, Nana Rezesidze, Nikoloz Tatuashvili
Abstract:
The South Caucasus is one of the most important centers of prehistoric metallurgy, known for its Colchian bronze culture. Modern Lechkhumi - historical Mountainous Colchis, where the existence of prehistoric metallurgy is confirmed by the discovery of many artifacts - is a part of this area. Studies focused on prehistoric smelting sites, related artefacts, and ore deposits have been conducted in Lechkhumi during the last ten years. More than 20 prehistoric smelting sites and artefacts associated with metallurgical activities (ore roasting furnaces, slags, crucible and tuyère fragments) have been identified so far. Within the framework of integrated studies, it was established that these sites were operating in the 13th-9th centuries B.C. and were used for copper smelting. Palynological studies of slags revealed that chestnut (Castanea sativa) and hornbeam (Carpinus sp.) wood were used as smelting fuel. Geological exploration and analytical studies revealed that the copper ore mining, processing, and smelting sites were distributed close to each other. Despite recent complex data, signs of prehistoric mines (trenches) had not been found in this part of the study area until recently. Since 2018, the archaeological-geological exploration has focused on the southern part of Lechkhumi, covering the areas of the villages of Okureshi and Opitara. Several copper smelting sites (Okureshi 1 and 2, Opitara 1), as well as a Colchian Bronze culture settlement, have been identified here. Three mine workings have been found in the narrow gorge of the river Rtkhmelebisgele in the vicinity of the village of Opitara. In order to establish a link between the Opitara-Okureshi archaeometallurgical sites, Late Bronze Age settlements, and mines, various analytical methods - petrography of mineralized rocks and slags, and atomic absorption spectrophotometry (AAS) - have been applied. The careful examination of the Opitara mine workings revealed a striking difference between mine #1 on the right bank of the river and mines #2 and #3 on the left bank. The first has all the characteristic features of a Soviet-period mine working (e.g., a high portal with angular ribs and a roof showing signs of blasting). In contrast, mines #2 and #3, which are located very close to each other, have round-shaped portals/entrances, low roofs, and fairly smooth ribs, and are filled with thick layers of river sediments and collapsed weathered rock mass. A thorough review of the publications related to prehistoric mine workings revealed striking similarities between mines #2 and #3 and their worldwide analogues. Apparently, ore extraction from these mines was conducted by fire-setting, applying primitive tools. It was also established that the mines are cut into Jurassic mineralized volcanic rocks. The ore minerals (chalcopyrite, pyrite, galena) are related to calcite and quartz veins. The results obtained through the petrochemical and petrographic studies of mineralized rock samples from the Opitara mines and of the prehistoric slags are in complete correlation with each other, establishing a direct link between copper mining and smelting within the study area. Acknowledgment: This work was supported by the Shota Rustaveli National Science Foundation of Georgia (grant # FR-19-13022).
Keywords: archaeometallurgy, Mountainous Colchis, mining, ore minerals
460 Space Telemetry Anomaly Detection Based on Statistical PCA Algorithm
Authors: Bassem Nassar, Wessam Hussein, Medhat Mokhtar
Abstract:
The crucial concern of satellite operations is to ensure the health and safety of satellites. The worst case in this perspective is probably the loss of a mission, but the more common interruption of satellite functionality can result in compromised mission objectives. All the data acquired from the spacecraft are known as telemetry (TM), which contains a wealth of information related to the health of all its subsystems. Each single item of information is contained in a telemetry parameter, which represents a time-variant property (i.e., a status or a measurement) to be checked. As a consequence, there is continuous improvement of TM monitoring systems in order to reduce the time required to respond to changes in a satellite's state of health. A fast grasp of the current state of the satellite is thus very important in order to respond to occurring failures. Statistical multivariate latent techniques are among the vital learning tools used to tackle the aforementioned problem coherently. Information extraction from such rich data sources using advanced statistical methodologies is a challenging task due to the massive volume of data. To solve this problem, in this paper, we present a proposed unsupervised learning algorithm based on the Principal Component Analysis (PCA) technique. The algorithm is applied to an actual remote sensing spacecraft. Data from the Attitude Determination and Control System (ADCS) were acquired under two operating conditions: normal and faulty states. The models were built and tested under these conditions, and the results show that the algorithm could successfully differentiate between these operating conditions. Furthermore, the algorithm provides competent information for prediction as well as adding more insight and physical interpretation to the ADCS operation.
Keywords: space telemetry monitoring, multivariate analysis, PCA algorithm, space operations
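A sketch of PCA-based anomaly detection in this spirit follows: fit PCA on normal-state telemetry, then flag samples whose reconstruction error (squared prediction error, SPE) exceeds a threshold; the channel count, simulated values, and 99th-percentile threshold rule are assumptions.

```python
# Sketch of PCA reconstruction-error anomaly detection on telemetry data.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
normal = rng.normal(size=(1000, 12))                 # 12 ADCS telemetry channels
faulty = normal[:50] + rng.normal(3, 1, (50, 12))    # shifted "faulty" samples

pca = PCA(n_components=4).fit(normal)

def spe(x):
    """Squared prediction error of samples against the PCA subspace."""
    recon = pca.inverse_transform(pca.transform(x))
    return ((x - recon) ** 2).sum(axis=1)

threshold = np.percentile(spe(normal), 99)
print(f"flagged {np.mean(spe(faulty) > threshold):.0%} of faulty samples")
```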
459 Study on the Pile Height Loss of Tunisian Handmade Carpets under Dynamic Loading
Authors: Fatma Abidi, Taoufik Harizi, Slah Msahli, Faouzi Sakli
Abstract:
Nine different Tunisian handmade carpets were used for the investigation. The raw material of the carpet pile yarns was wool. The influence of the different structure parameters (linear density and pile height) on carpet compression was investigated. Carpets were tested under dynamic loading in order to evaluate and observe the thickness loss and carpet behavior under dynamic loads. To determine the loss of pile height under dynamic loading, the pile heights of the carpets were measured. The test method followed the Tunisian standard NT 12.165 (corresponding to the standard ISO 2094). The pile height measurements were taken and recorded at intervals of up to 1000 impacts (measurements in this study were made after 50, 100, 200, 500, and 1000 impacts). The loss of pile height is calculated as the variation between the initial height and that measured after the reported number of impacts. The experimental results were statistically evaluated using Design-Expert analysis of variance (ANOVA) software. As regards the deformation, results showed that both the structure parameters of the pile yarn and the pile height have an influence. The carpet with the higher pile and the lower linear density of pile yarn showed the worst performance. Results of a polynomial regression analysis are highlighted. There is a good correlation between the loss of pile height and the number of impacts of the dynamic loads. These equations are in good agreement with the measured data. Because the prediction is reasonably accurate for all samples, these equations can also be taken into account when calculating the theoretical loss of pile height for the considered carpet samples. Statistical evaluations of the experimental data showed that the pile material and the number of impacts have a significant effect on the mean thickness and thickness loss variations.
Keywords: Tunisian handmade carpet, loss of pile height, dynamic loads, performance
458 Comparison of Cervical Length Using Transvaginal Ultrasonography and Bishop Score to Predict Successful Induction
Authors: Lubena Achmad, Herman Kristanto, Julian Dewantiningrum
Abstract:
Background: The Bishop score is a standard method used to predict the success of induction. This examination tends to be subjective, with high inter- and intra-observer variability, so it was presumed to have a low predictive value in terms of the outcome of labor induction. Cervical length measurement using transvaginal ultrasound is considered to be more objective for assessing the cervix. Moreover, this examination is not a complicated procedure and is less invasive than vaginal touch. Objective: To compare transvaginal ultrasound and the Bishop score in predicting successful induction. Methods: This study was a prospective cohort study. One hundred and twenty women with singleton pregnancies undergoing induction of labor at 37-42 weeks who met the inclusion and exclusion criteria were enrolled in this study. Cervical assessment by both transvaginal ultrasound and the Bishop score was conducted prior to induction. Successful labor induction was defined as the ability to achieve the active phase ≤ 12 hours after induction. To determine the best cut-off points for cervical length and Bishop score, receiver operating characteristic (ROC) curves were plotted. Logistic regression analysis was used to determine which factors best predicted induction success. Results: This study showed significant differences in terms of age, premature rupture of the membranes, the Bishop score, cervical length, and funneling as predictors of successful induction. Using ROC curves, it was found that the best cut-off point for prediction of successful induction was 25.45 mm for cervical length and 3 for the Bishop score. Logistic regression analysis showed that only premature rupture of the membranes and cervical length ≤ 25.45 mm significantly predicted the success of labor induction. Excluding premature rupture of the membranes as the indication for induction, a cervical length less than 25.3 mm was a better predictor of successful induction. Conclusion: Compared to the Bishop score, cervical length measured using transvaginal ultrasound was a better predictor of successful induction.
Keywords: Bishop score, cervical length, induction, successful induction, transvaginal sonography
457 Machine Learning Techniques for COVID-19 Detection: A Comparative Analysis
Authors: Abeer A. Aljohani
Abstract:
The spread of the COVID-19 virus has been one of the most extreme pandemics across the globe. It is also referred to as coronavirus, a contagious disease that continuously mutates into numerous variants. Currently, the B.1.1.529 variant, labeled omicron, has been detected in South Africa. The huge spread of COVID-19 disease has affected several lives and has put exceptional pressure on healthcare systems worldwide. Also, everyday life and the global economy have been at stake. This research aims to predict COVID-19 disease in its initial stage to reduce the death count. Machine learning (ML) is nowadays used in almost every area. Numerous COVID-19 cases have placed a huge burden on hospitals as well as health workers. To reduce this burden, this paper predicts COVID-19 disease based on the symptoms and medical history of the patient. This research presents a unique architecture for COVID-19 detection using ML techniques integrated with feature dimensionality reduction. This paper uses a standard UCI dataset for predicting COVID-19 disease. This dataset comprises the symptoms of 5434 patients. This paper also compares several supervised ML techniques to the presented architecture. The architecture utilizes a 10-fold cross-validation process for generalization and the principal component analysis (PCA) technique for feature reduction. Standard parameters are used to evaluate the proposed architecture, including F1-score, precision, accuracy, recall, receiver operating characteristic (ROC), and area under the curve (AUC). The results show that decision trees, random forests, and neural networks outperform all other state-of-the-art ML techniques. The achieved results can help effectively in identifying COVID-19 infection cases.
Keywords: supervised machine learning, COVID-19 prediction, healthcare analytics, random forest, neural network
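The described architecture (PCA feature reduction, a supervised classifier, 10-fold cross-validation) can be sketched as follows, with simulated symptom flags standing in for the UCI dataset; the feature count and pipeline details are assumptions.

```python
# Sketch of the PCA + classifier + 10-fold CV pipeline; data are simulated.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)
X = rng.integers(0, 2, size=(5434, 20)).astype(float)  # binary symptom flags
y = rng.integers(0, 2, size=5434)                      # 1 = COVID-positive

clf = make_pipeline(StandardScaler(), PCA(n_components=10),
                    RandomForestClassifier(n_estimators=200, random_state=0))
scores = cross_val_score(clf, X, y, cv=10, scoring="f1")
print(f"10-fold F1: {scores.mean():.3f} ± {scores.std():.3f}")
```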
456 Fluvial Stage-Discharge Rating of a Selected Reach of Jamuna River
Authors: Makduma Zahan Badhan, M. Abdul Matin
Abstract:
A study has been undertaken to develop a fluvial stage-discharge rating curve for the Jamuna River. Past cross-sectional surveys of the Jamuna River reach between Sirajgonj and Tangail have been analyzed. The analysis includes the estimation of the discharge carrying capacity, the possible maximum scour depth, and the sediment transport capacity of the selected reaches. To predict the discharge and sediment carrying capacity, streamflow data, including the cross-sectional area, top width, water surface slope, and median diameter of the bed material of selected stations, have been collected, and some have been calculated from reduced level data. A well-known resistance equation has been adopted and modified to a simple form in order to be used in the present analysis. The modified resistance equation has been used to calculate the mean velocity through the channel sections. In addition, a sediment transport equation has been applied for the prediction of the transport capacity of the various sections. Results show that the existing drainage sections of the Jamuna channel reach under study have adequate carrying capacity under existing bank-full conditions, but these reaches are subject to bed erosion even in low-flow situations. Regarding the sediment transport rate, it can be estimated that the channel flow has a relatively high range of bed material concentration. Finally, stage-discharge curves for the various sections have been developed. Based on the stage-discharge rating data of the various sections, the water surface profile and sediment rating curve of the Jamuna River have been developed, and the flooding conditions have been analyzed from the predicted water surface profile.
Keywords: discharge rating, flow profile, fluvial, sediment rating
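The abstract does not name its resistance equation; assuming the commonly used Manning formula, V = (1/n) R^(2/3) S^(1/2), a simple stage-discharge rating for an idealized wide rectangular section could be sketched as below. The width, slope, and roughness values are purely illustrative.

```python
# Sketch of a Manning-based stage-discharge rating (assumed resistance law).
import numpy as np

def rating_point(stage, width=500.0, slope=7e-5, n=0.025):
    """Discharge (m^3/s) at a given stage (m) for a wide rectangular channel."""
    area = width * stage                      # flow area
    radius = area / (width + 2 * stage)       # hydraulic radius
    velocity = (1.0 / n) * radius ** (2 / 3) * slope ** 0.5
    return area * velocity

for h in np.arange(2.0, 12.0, 2.0):           # stages in metres
    print(f"stage {h:5.1f} m -> Q = {rating_point(h):10.0f} m^3/s")
```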
455 Prediction of Ionizing Radiation Doses in Irradiated Red Pepper (Capsicum annuum) and Mint (Mentha piperita) by Gel Electrophoresis
Authors: Şeyma Özçirak Ergün, Ergün Şakalar, Emrah Yalazi̇, Nebahat Şahi̇n
Abstract:
Food irradiation is the practice of exposing food to ionising radiation (IR), such as gamma rays. IR has been used to decrease the number of harmful microorganisms in foods such as spices. Excessive use of IR can cause damage both to the food and to the people who consume it, and it also damages food DNA. The IR detection techniques generally utilized in the literature for spices are electron spin resonance (ESR) and thermoluminescence (TL). Storage has a negative effect on IR detection methods, so analyses of samples are generally performed without storage. In the experimental part, red pepper (Capsicum annuum) and mint (Mentha piperita) were exposed as spices to 0, 0.272, 0.497, 1.06, 3.64, 8.82, and 17.42 kGy of ionizing radiation. ESR was applied to the irradiated samples. DNA isolation from the irradiated samples was performed using the GIDAGEN Multi Fast DNA isolation kit. The DNA concentration was measured using a microplate reader spectrophotometer (Infinite® 200 PRO - Life Science - Tecan). The concentration of each DNA sample was adjusted to 50 ng/µL. Genomic DNA was imaged by a UV transilluminator (Gel Doc XR System, Bio-Rad) for the estimation of genomic DNA bp-fragment size after IR. Thus, agarose gel profiles of the irradiated spices were obtained to determine the change in band profiles. Besides, the samples were examined at three different time periods (0, 3, and 6 months of storage) to show the feasibility of the developed method. Results of gel electrophoresis showed, in particular, degradation of the DNA of the irradiated samples. In conclusion, this study with gel electrophoresis can be used as a basis for the identification of the irradiation dose by examining degradation profiles at specific amounts of irradiation. The agarose gel results of the irradiated samples were confirmed by ESR analysis. This method can be applied widely, not only to food products but to all biological materials containing DNA, to predict radiation-induced DNA damage.
Keywords: DNA, electrophoresis, gel electrophoresis, ionizing radiation
454 A Machine Learning-Based Model to Screen Antituberculosis Compound Targeted against LprG Lipoprotein of Mycobacterium tuberculosis
Authors: Syed Asif Hassan, Syed Atif Hassan
Abstract:
Multidrug-resistant tuberculosis (MDR-TB) is an infection caused by resistant strains of Mycobacterium tuberculosis that do not respond to either isoniazid or rifampicin, which are the most important anti-TB drugs. The increase in the occurrence of drug-resistant strains of MTB calls for an intensive search for novel target-based therapeutics. In this context, LprG (Rv1411c), a lipoprotein from MTB, plays a pivotal role in the immune evasion of Mtb, leading to the survival and propagation of the bacterium within the host cell. Therefore, a machine learning method will be developed for generating a computational model that could predict the potential anti-LprG activity of novel antituberculosis compounds. The present study will utilize a dataset from the PubChem database maintained by the National Center for Biotechnology Information (NCBI). The dataset comprises compounds screened against MTB, categorized as active and inactive based upon the PubChem activity score. PowerMV, a molecular descriptor generator and visualization tool, will be used to generate the 2D molecular descriptors for the active and inactive compounds present in the dataset. The 2D molecular descriptors generated from PowerMV will be used as features. We feed these features into three different classifiers, namely a random forest, a deep neural network, and a recurrent neural network, to build separate predictive models, choosing the best performing model based on the accuracy of predicting novel antituberculosis compounds with anti-LprG activity. Additionally, the predicted active compounds will be screened using a SMARTS filter to choose molecules with drug-like features.
Keywords: antituberculosis drug, classifier, machine learning, molecular descriptors, prediction
453 Features of the Functional and Spatial Organization of Railway Hubs as a Part of the Urban Nodal Area
Authors: Khayrullina Yulia Sergeevna, Tokareva Goulsine Shavkatovna
Abstract:
The article analyzes modern major railway hubs as a main part of the Urban Nodal Area (UNA). The term was introduced into the theory of urban planning at the end of the XX century. Tokareva G.S., jointly with Gutnov A.E., investigated the structure-forming elements of the city. The UNA is the basic unit, the "cell", of the city structure. Its specialization depends on its position in the frame or the fabric of the city, and this is related to the features of its organization. This paper proposes to investigate the spatial and functional features of UNAs. The base objects for the research are railway hubs, as connective nodes of inner- and extern-city communications. The research used a stratified sampling type with the selection of typical objects. It was conducted on 14 railway hubs, from native and foreign experience, in the largest cities with populations over 1 million people, located in climate zones identical or close to the Russian ones. Features of the organization are identified through complex research of functional and spatial characteristics, based on the hypothesis of the existence of dual characteristics of the organization of urban nodes. In the analysis, the approximation method is used, which enables general conclusions from a representative selection for the entire population of railway hubs and their development areas. Results of the research show a specific ratio of the functional and spatial organization of UNAs based on railway hubs. Based on it, a typology of spaces and urban nodal areas is proposed. Identification of the spatial diversity and the features of the functional organization of the greatest railway hubs and their development areas gives an indication of the different evolutionary stages of formation approaches. It helps to identify new patterns for complex and effective design, as a prediction of the direction of native hub development.
Keywords: urban nodal area, railway hubs, features of structural, functional organization
452 Comparison of Various Landfill Ground Improvement Techniques for Redevelopment of Closed Landfills to Cater Transport Infrastructure
Authors: Michael D. Vinod, Hadi Khabbaz
Abstract:
Construction of infrastructure above or adjacent to landfills is becoming more common to capitalize on the limited space available within urban areas. However, development above landfills is a challenging task due to large voids, the presence of organic matter, the heterogeneous nature of waste, and the ambiguity surrounding landfill settlement prediction. Prior to the construction of infrastructure above landfills, ground improvement techniques are employed to improve the geotechnical properties of the landfill material. Although ground improvement techniques have little impact on long-term biodegradation and creep-related landfill settlement, they have shown notable short-term success with a variety of techniques, including methods for verifying the level of effectiveness of the ground improvement. This paper provides geotechnical and landfill engineers with a guideline for the selection of landfill ground improvement techniques and their suitability to project-specific sites. The ground improvement methods assessed and compared in this paper include concrete injected columns (CIC), dynamic compaction, rapid impact compaction (RIC), preloading, high energy impact compaction (HEIC), vibro compaction, vibro replacement, chemical stabilization, and the inclusion of geosynthetics such as geocells. For each ground improvement technique, a summary of the existing theory, benefits, limitations, suitable modern ground improvement monitoring methods, the applicability of the technique to landfills, and supporting case studies is provided. The authors highlight the importance of implementing cost-effective monitoring techniques to allow observation and, where necessary, remediation of the subsidence effects associated with long-term landfill settlement. These ground improvement techniques are considered primarily for construction above closed landfills to cater for transport infrastructure loading.
Keywords: closed landfills, ground improvement, monitoring, settlement, transport infrastructure
Procedia PDF Downloads 224451 Investigations of Bergy Bits and Ship Interactions in Extreme Waves Using Smoothed Particle Hydrodynamics
Authors: Mohammed Islam, Jungyong Wang, Dong Cheol Seo
Abstract:
The Smoothed Particle Hydrodynamics (SPH) method is a novel, meshless, Lagrangian numerical method that has shown promise in accurately predicting the hydrodynamics of water-structure interactions in violent flow conditions. The main goal of this study is to build confidence in the versatility of an SPH-based tool, so that it can complement physical model testing capabilities and support research needs for the performance evaluation of ships and offshore platforms exposed to extreme and harsh environments. In the current endeavor, an open-source SPH-based tool was used and validated for modeling and predicting the hydrodynamic interactions of a 6-DOF ship and bergy bits. The study involved modeling a modern generic drillship and simplified bergy bits in floating and towing scenarios, in regular and irregular wave conditions. The predictions were validated against model-scale measurements of a moored ship towed at multiple oblique angles approaching a floating bergy bit in waves. Overall, this study provides a thorough comparison between the model-scale measurements and the predictions from the SPH tool in terms of performance and accuracy. The SPH-predicted ship motions and forces were primarily within ±5% of the measurements. The velocity and pressure distributions and the wave characteristics over the free surface depict realistic interactions of the wave, the ship, and the bergy bit. This work identifies and presents several challenges in preparing the input file, particularly in defining the mass properties of complex geometry, the computational requirements, and the post-processing of the outcomes.Keywords: SPH, ship and bergy bit, hydrodynamic interactions, model validation, physical model testing
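A minimal sketch of the kind of validation check behind the ±5% figure is shown below: compare a predicted motion time series against its measured counterpart and report the relative amplitude discrepancy. The workflow and the synthetic signals are assumptions for illustration, not the authors' post-processing code.

```python
# Assumed validation sketch: relative error between measured and SPH-predicted
# ship motion amplitudes. The sinusoidal records below are synthetic stand-ins.
import numpy as np

def relative_error(measured: np.ndarray, predicted: np.ndarray) -> float:
    """Peak-to-peak amplitude discrepancy, in percent of the measurement."""
    amp_meas = measured.max() - measured.min()
    amp_pred = predicted.max() - predicted.min()
    return 100.0 * (amp_pred - amp_meas) / amp_meas

t = np.linspace(0.0, 60.0, 2000)                    # 60 s record
heave_meas = 0.50 * np.sin(2 * np.pi * 0.1 * t)     # synthetic "measurement"
heave_sph  = 0.52 * np.sin(2 * np.pi * 0.1 * t)     # synthetic "SPH prediction"

print(f"heave amplitude error: {relative_error(heave_meas, heave_sph):+.1f}%")
```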
Procedia PDF Downloads 133450 Analysis of Travel Behavior Patterns of Frequent Passengers after the Section Shutdown of Urban Rail Transit - Taking the Huaqiao Section of Shanghai Metro Line 11 Shutdown During the COVID-19 Epidemic as an Example
Authors: Hongyun Li, Zhibin Jiang
Abstract:
The travel of passengers in an urban rail transit network is influenced by changes in network structure and operational status, and individual travel preferences respond to these changes in different ways. Firstly, the influence of the suspension of an urban rail transit line section on passenger travel along the line is analyzed. Secondly, passenger travel trajectories containing multi-dimensional semantics are described based on network UD data. Next, passenger panel data based on spatio-temporal sequences are constructed to cluster frequent passengers. Then, a Graph Convolutional Network (GCN) is used to model and identify changes in the travel modes of different types of frequent passengers. Finally, taking Shanghai Metro Line 11 as an example, the travel behavior patterns of frequent passengers after the Huaqiao section shutdown during the COVID-19 epidemic are analyzed. The results show that after the section shutdown, most passengers transferred to the nearest station, Anting, for boarding, while some transferred to other stations or cancelled their trips altogether. Among the passengers who transferred to Anting station, most maintained their original, regular travel pattern; a small number waited a few days before switching to Anting station; and only a few stopped traveling from Anting station, or moved on to other stations, after boarding there for a few days. The results can provide a basis for understanding urban rail transit passenger travel patterns and for improving the accuracy of passenger flow prediction in abnormal operation scenarios.Keywords: urban rail transit, section shutdown, frequent passenger, travel behavior pattern
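As a rough illustration of the frequent-passenger clustering step, the sketch below builds a per-passenger feature panel and clusters it with k-means. The features, the synthetic data, and the use of k-means are illustrative stand-ins; the paper's pipeline (multi-dimensional trajectory semantics plus a GCN) is considerably richer.

```python
# Hypothetical sketch of clustering frequent passengers from per-passenger
# spatio-temporal features; data and feature choice are assumed, not the
# paper's. Requires numpy and scikit-learn.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_passengers = 500

# Assumed features: [trips per week, mean boarding hour,
#                    share of trips from home station, route entropy]
features = np.column_stack([
    rng.poisson(8, n_passengers).astype(float),
    rng.normal(8.5, 1.5, n_passengers),
    rng.uniform(0.5, 1.0, n_passengers),
    rng.uniform(0.0, 1.5, n_passengers),
])

km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(features)
for label in range(4):
    members = features[km.labels_ == label]
    print(f"cluster {label}: {len(members)} passengers, "
          f"mean trips/week = {members[:, 0].mean():.1f}")
```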
Procedia PDF Downloads 84449 An Exponential Field Path Planning Method for Mobile Robots Integrated with Visual Perception
Authors: Magdy Roman, Mostafa Shoeib, Mostafa Rostom
Abstract:
Global vision, whether provided by overhead fixed cameras, on-board aerial vehicle cameras, or satellite images, can provide detailed information on the environment around mobile robots. In this paper, an intelligent vision-based method for path planning and obstacle avoidance for mobile robots is presented. The method integrates visual perception with a newly proposed field-based path-planning method to overcome common path-planning problems such as local minima, unreachable destinations, and unnecessarily lengthy paths around obstacles. The method proposes an exponential angle deviation field around each obstacle that affects the orientation of a nearby robot. As the robot heads toward the goal point, obstacles are classified into right and left groups, and a deviation angle is exponentially added to or subtracted from the robot's orientation. The exponential field parameters are chosen based on the Lyapunov stability criterion to guarantee the robot's convergence to the destination. The proposed method uses the obstacles' shape and location, extracted from the global vision system, through a collision prediction mechanism to decide whether to activate or deactivate each obstacle's field. In addition, a search mechanism is developed for the case where the robot or the goal point is trapped among obstacles, to find a suitable exit or entrance. The proposed algorithm is validated both in simulation and through experiments. The algorithm shows effectiveness in obstacle avoidance and destination convergence, overcoming common path-planning problems found in classical methods.Keywords: path planning, collision avoidance, convergence, computer vision, mobile robots
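The core idea of the exponential deviation field can be sketched in a few lines: each obstacle bends the robot's heading away from its own side of the goal line, with an influence that decays exponentially with distance. The functional form and the gain parameters below are assumptions for illustration, not the authors' exact field or their Lyapunov-derived parameter choices.

```python
# Sketch (assumed form) of an exponential angle-deviation field: obstacles on
# the left subtract from the goal heading, obstacles on the right add to it,
# with influence decaying exponentially with clearance distance.
import math

def heading_update(robot, goal, obstacles, k=1.0, lam=0.8):
    """Return a heading toward `goal`, deviated by nearby obstacles.

    robot, goal: (x, y) positions; obstacles: list of (x, y, radius).
    k (deviation gain) and lam (decay rate) are hypothetical parameters.
    """
    base = math.atan2(goal[1] - robot[1], goal[0] - robot[0])
    deviation = 0.0
    for ox, oy, r in obstacles:
        clearance = math.hypot(ox - robot[0], oy - robot[1]) - r
        bearing = math.atan2(oy - robot[1], ox - robot[0])
        side = math.sin(bearing - base)        # > 0: obstacle on the left
        sign = -1.0 if side > 0 else 1.0       # steer away from that side
        deviation += sign * k * math.exp(-lam * max(clearance, 0.0))
    return base + deviation

# Robot at origin, goal straight ahead, one obstacle slightly left of the path:
# the returned heading deviates to the right, away from the obstacle.
print(heading_update((0.0, 0.0), (10.0, 0.0), [(5.0, 0.5, 1.0)]))
```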
Procedia PDF Downloads 194448 Analysing the Interactive Effects of Factors Influencing Sand Production on Drawdown Time in High Viscosity Reservoirs
Authors: Gerald Gwamba, Bo Zhou, Yajun Song, Dong Changyin
Abstract:
The challenges that sand production presents to the oil and gas industry, particularly when working in poorly consolidated reservoirs, cannot be overstated. From restricting production to blocking production tubing, sand production increases production costs as it raises the cost of servicing production equipment over time. Production in reservoirs that exhibit high viscosity, flow rate, cementation, clay content, and fine sand content is even more complex and challenging. As opposed to one-factor-at-a-time testing, investigating the interactive effects arising from a combination of several factors offers increased reliability of results as well as better representation of actual field conditions. It is thus paramount to investigate the conditions leading to the onset of sanding during production to ensure the future sustainability of hydrocarbon production operations under viscous conditions. We adopt the Design of Experiments (DOE) approach, using Taguchi factorial designs, to analyse the most significant interactive effects on sanding, and we propose an optimized regression model to predict the drawdown time at sand production. The results obtained underscore that reservoirs characterized by varying (high and low) levels of viscosity, flow rate, cementation, clay, and fine sand content show a corresponding impact on sand production. The only significant interactive effect recorded arises from the interaction BD (fine sand content and flow rate), while the significant main effects were fluid viscosity and cementation; the percentage significances were recorded as 31.3%, 37.76%, and 30.94%, respectively. The drawdown time model presented could be useful for predicting the time to reach the maximum drawdown pressure under viscous conditions at the onset of sand production.Keywords: factorial designs, DOE optimization, sand production prediction, drawdown time, regression model
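For readers unfamiliar with how main and interaction effects are extracted from a factorial design, the sketch below estimates them from a coded two-level experiment. The coded design and response values are made up for illustration; only the lettering convention follows the abstract, where B and D are taken here to denote fine sand content and flow rate.

```python
# Illustrative two-level factorial (Taguchi-style) effect estimation.
# The -1/+1 coding and the hypothetical drawdown-time responses are assumed,
# not data from the paper.
import numpy as np

# Coded levels for a 2^2 slice of the design: factors B and D.
B = np.array([-1, -1, +1, +1])
D = np.array([-1, +1, -1, +1])
y = np.array([42.0, 30.0, 35.0, 12.0])   # hypothetical drawdown times (min)

# An effect is the mean response at the high level minus the mean at the low
# level; the BD interaction uses the sign of the product of the two columns.
effect_B  = y[B == +1].mean() - y[B == -1].mean()
effect_D  = y[D == +1].mean() - y[D == -1].mean()
effect_BD = y[B * D == +1].mean() - y[B * D == -1].mean()

print(f"main effect B:  {effect_B:+.1f}")
print(f"main effect D:  {effect_D:+.1f}")
print(f"interaction BD: {effect_BD:+.1f}")
```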
Procedia PDF Downloads 152447 Implications of Optimisation Algorithm on the Forecast Performance of Artificial Neural Network for Streamflow Modelling
Authors: Martins Y. Otache, John J. Musa, Abayomi I. Kuti, Mustapha Mohammed
Abstract:
The performance of an artificial neural network (ANN) is contingent on a host of factors, for instance, the network optimisation scheme. In view of this, the study examined the general implications of the ANN training optimisation algorithm for its forecast performance. To this end, the Bayesian regularisation (Br), Levenberg-Marquardt (LM), and adaptive-learning gradient descent with momentum (GDM) algorithms were employed under different ANN structural configurations: (1) single-hidden-layer and (2) double-hidden-layer feedforward back-propagation networks. Results revealed that, in general, the GDM algorithm, with its adaptive learning capability, required a relatively shorter time in both the training and validation phases than the LM and Br algorithms, although learning may not be consummated; this held in all instances, including the prediction of extreme flow conditions 1 day and 5 days ahead. In specific statistical terms, average model performance efficiency measured by the coefficient of efficiency (CE) was Br: 98%, 94%; LM: 98%, 95%; and GDM: 96%, 96%, for the training and validation phases respectively. However, on the basis of relative error distribution statistics (MAE, MAPE, and MSRE), GDM performed better than the others overall. Based on these findings, the adoption of ANNs for real-time forecasting should employ training algorithms without the computational overhead of LM, which requires computation of the Hessian matrix, protracted time, and is sensitive to initial conditions; to this end, Br and other forms of gradient descent with momentum should be adopted, considering overall time expenditure, forecast quality, and the mitigation of network overfitting. On the whole, it is recommended that evaluation also consider the implications of (i) data quality and quantity and (ii) transfer functions for overall network forecast performance.Keywords: streamflow, neural network, optimisation, algorithm
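The evaluation statistics named above are standard and easy to reproduce; the sketch below implements CE (the Nash-Sutcliffe coefficient of efficiency) alongside MAE, MAPE, and MSRE on a synthetic observed/forecast pair. The sample data are invented for illustration.

```python
# Standard forecast-evaluation statistics from the abstract, applied to a
# synthetic streamflow series (observed vs. ANN forecast).
import numpy as np

def ce(obs: np.ndarray, sim: np.ndarray) -> float:
    """Coefficient of efficiency (Nash-Sutcliffe): 1 is a perfect fit."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def mae(obs: np.ndarray, sim: np.ndarray) -> float:
    """Mean absolute error."""
    return float(np.mean(np.abs(obs - sim)))

def mape(obs: np.ndarray, sim: np.ndarray) -> float:
    """Mean absolute percentage error."""
    return float(100.0 * np.mean(np.abs((obs - sim) / obs)))

def msre(obs: np.ndarray, sim: np.ndarray) -> float:
    """Mean squared relative error."""
    return float(np.mean(((obs - sim) / obs) ** 2))

obs = np.array([12.0, 15.5, 20.1, 34.7, 28.3, 18.9])   # observed flows
sim = np.array([11.4, 16.2, 19.5, 31.9, 29.8, 19.6])   # ANN forecasts

for name, fn in [("CE", ce), ("MAE", mae), ("MAPE", mape), ("MSRE", msre)]:
    print(f"{name}: {fn(obs, sim):.3f}")
```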
Procedia PDF Downloads 152