Inclusion Body Refolding at High Concentration for Large-Scale Applications
Authors: J. Gabrielczyk, J. Kluitmann, T. Dammeyer, H. J. Jördening
Abstract:
High-level expression of proteins in bacteria often causes production of insoluble protein aggregates, called inclusion bodies (IB). They contain mainly one type of protein and offer an easy and efficient way to obtain purified protein. On the other hand, proteins in IB are normally devoid of function and therefore need special treatment to become active. Most refolding techniques aim at diluting the solubilizing chaotropic agents. Unfortunately, optimal refolding conditions have to be found empirically for every protein. For large-scale applications, a simple refolding process with high yields and high final enzyme concentrations is still missing. The constructed plasmid pASK-IBA63b containing the sequence of fructosyltransferase (FTF, EC 2.4.1.162) from Bacillus subtilis NCIMB 11871 was transformed into E. coli BL21 (DE3) Rosetta. The bacterium was cultivated in a fed-batch bioreactor. The produced FTF was obtained mainly as IB. For refolding experiments, five different amounts of IBs were solubilized in urea buffer at protein concentrations of 0.2-8.5 g/L. Solubilizates were refolded with batch or continuous dialysis. The refolding yield was determined by measuring the protein concentration of the clear supernatant before and after the dialysis. Particle size was measured by dynamic light scattering. We tested the solubilization properties of fructosyltransferase IBs. The particle size measurements revealed that solubilization of the aggregates is achieved at urea concentrations of 5 M or higher, which was confirmed by absorption spectroscopy. All results confirm previous investigations showing that refolding yields depend on the initial protein concentration. As initial concentrations increased from 0.2 to 8.5 g/L, the yields dropped from 67% to 12% in batch dialysis and from 72% to 19% in continuous dialysis. Frequently used additives such as sucrose and glycerol had no effect on refolding yields. Buffer screening indicated a significant increase in both activity and temperature stability of FTF with citrate/phosphate buffer. By adding citrate to the dialysis buffer, we were able to increase the refolding yields to 82-47% in batch and 90-74% in the continuous process. Further experiments showed that, in general, higher ionic strength of the buffers had a major impact on refolding yields; doubling the buffer concentration increased the yields up to threefold. Finally, we achieved correspondingly high refolding yields while reducing the chamber volume, and thus the amount of buffer needed, by 75%. The refolded enzyme had an optimal activity of 12.5±0.3 × 10⁴ units/g. However, detailed experiments with native FTF revealed a reaggregation of the molecules and a loss in specific activity depending on the enzyme concentration and particle size. For that reason, we are currently focusing on developing a process of simultaneous enzyme refolding and immobilization. The results of this study show a new approach to finding optimal refolding conditions for inclusion bodies at high concentrations. Straightforward buffer screening and an increase of the ionic strength can improve the refolding yield of the target protein by 400%. Gentle removal of the chaotrope with continuous dialysis increases the yields by an additional 65%, independent of the refolding buffer applied. In general, time is the crucial parameter for successful refolding of solubilized proteins.
Keywords: dialysis, inclusion body, refolding, solubilization
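The refolding yield used throughout the abstract above is, in essence, the fraction of solubilized protein remaining in the clear supernatant after dialysis; a minimal formulation (the symbols are assumed here for illustration, not taken from the paper) is:

```latex
% Refolding yield from supernatant protein concentrations before and after dialysis
Y_{\mathrm{refold}}\,(\%) \;=\; \frac{c_{\mathrm{supernatant,\ after\ dialysis}}}{c_{\mathrm{supernatant,\ before\ dialysis}}} \times 100
```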
Estimating Poverty Levels from Satellite Imagery: A Comparison of Human Readers and an Artificial Intelligence Model
Authors: Ola Hall, Ibrahim Wahab, Thorsteinn Rognvaldsson, Mattias Ohlsson
Abstract:
The subfield of poverty and welfare estimation that applies machine learning tools and methods on satellite imagery is a nascent but rapidly growing one. This is in part driven by the sustainable development goal, whose overarching principle is that no region is left behind. Among other things, this requires that welfare levels can be accurately and rapidly estimated at different spatial scales and resolutions. Conventional tools of household surveys and interviews do not suffice in this regard. While they are useful for gaining a longitudinal understanding of the welfare levels of populations, they do not offer adequate spatial coverage for the accuracy that is needed, nor are their implementation sufficiently swift to gain an accurate insight into people and places. It is this void that satellite imagery fills. Previously, this was near-impossible to implement due to the sheer volume of data that needed processing. Recent advances in machine learning, especially the deep learning subtype, such as deep neural networks, have made this a rapidly growing area of scholarship. Despite their unprecedented levels of performance, such models lack transparency and explainability and thus have seen limited downstream applications as humans generally are apprehensive of techniques that are not inherently interpretable and trustworthy. While several studies have demonstrated the superhuman performance of AI models, none has directly compared the performance of such models and human readers in the domain of poverty studies. In the present study, we directly compare the performance of human readers and a DL model using different resolutions of satellite imagery to estimate the welfare levels of demographic and health survey clusters in Tanzania, using the wealth quintile ratings from the same survey as the ground truth data. The cluster-level imagery covers all 608 cluster locations, of which 428 were classified as rural. The imagery for the human readers was sourced from the Google Maps Platform at an ultra-high resolution of 0.6m per pixel at zoom level 18, while that of the machine learning model was sourced from the comparatively lower resolution Sentinel-2 10m per pixel data for the same cluster locations. Rank correlation coefficients of between 0.31 and 0.32 achieved by the human readers were much lower when compared to those attained by the machine learning model – 0.69-0.79. This superhuman performance by the model is even more significant given that it was trained on the relatively lower 10-meter resolution satellite data while the human readers estimated welfare levels from the higher 0.6m spatial resolution data from which key markers of poverty and slums – roofing and road quality – are discernible. It is important to note, however, that the human readers did not receive any training before ratings, and had this been done, their performance might have improved. The stellar performance of the model also comes with the inevitable shortfall relating to limited transparency and explainability. The findings have significant implications for attaining the objective of the current frontier of deep learning models in this domain of scholarship – eXplainable Artificial Intelligence through a collaborative rather than a comparative framework.Keywords: poverty prediction, satellite imagery, human readers, machine learning, Tanzania
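A rank-correlation comparison of the kind reported above can be reproduced in outline as follows. The data are simulated and all variable names are assumptions for illustration; the sketch only shows how Spearman coefficients for human and model ratings against a wealth-index ground truth would be computed.

```python
# Illustrative sketch (hypothetical data): Spearman rank correlation between
# estimated welfare ratings and a DHS wealth-index ground truth, the kind of
# statistic reported above for human readers (~0.31-0.32) and the model (~0.69-0.79).
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)
ground_truth = rng.normal(size=608)                                   # wealth index per cluster
human_rating = 0.3 * ground_truth + rng.normal(size=608)              # noisy human estimates
model_rating = ground_truth + rng.normal(scale=0.8, size=608)         # model estimates

rho_human, _ = spearmanr(human_rating, ground_truth)
rho_model, _ = spearmanr(model_rating, ground_truth)
print(f"human rho={rho_human:.2f}, model rho={rho_model:.2f}")
```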
Convolutional Neural Network Based on Random Kernels for Analyzing Visual Imagery
Authors: Ja-Keoung Koo, Kensuke Nakamura, Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Byung-Woo Hong
Abstract:
The machine learning techniques based on a convolutional neural network (CNN) have been actively developed and successfully applied to a variety of image analysis tasks, including reconstruction, noise reduction, resolution enhancement, segmentation, motion estimation, and object recognition. The classical visual information processing that ranges from low-level tasks to high-level ones has been widely developed in the deep learning framework. It is generally considered a challenging problem to derive visual interpretation from high-dimensional imagery data. A CNN is a class of feed-forward artificial neural network that usually consists of deep layers whose connections are established by a series of non-linear operations. The CNN architecture is known to be shift invariant due to its shared weights and translation-invariance characteristics. However, it is often computationally intractable to optimize the network, in particular with a large number of convolution layers, due to the large number of unknowns to be optimized with respect to a training set that generally has to be large enough to effectively generalize the model under consideration. It is also necessary to limit the size of the convolution kernels due to the computational expense, despite the recent development of effective parallel processing machinery, which leads to the use of consistently small convolution kernels throughout the deep CNN architecture. However, it is often desirable to consider different scales in the analysis of visual features at different layers in the network. Thus, we propose a CNN model where convolution kernels of different sizes are applied at each layer based on random projection. We apply random filters with varying sizes and associate the filter responses with scalar weights that correspond to the standard deviation of the random filters. This allows us to use a large number of random filters at the cost of only one scalar unknown per filter. The computational cost of the back-propagation procedure does not increase with larger filters, even though additional cost is incurred in computing the convolutions in the feed-forward procedure. The use of random kernels with varying sizes allows image features to be analyzed effectively at multiple scales, leading to better generalization. The robustness and effectiveness of the proposed CNN based on random kernels are demonstrated by numerical experiments in which well-known CNN architectures are compared quantitatively with our models, which simply replace the convolution kernels with random filters. The experimental results indicate that our model achieves better performance with fewer unknown weights. The proposed algorithm has high potential for application to a variety of visual tasks based on the CNN framework. Acknowledgement—This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by IITP, and NRF-2014R1A2A1A11051941, NRF2017R1A2B4006023.
Keywords: deep learning, convolutional neural network, random kernel, random projection, dimensionality reduction, object recognition
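A minimal sketch in the spirit of the approach described above (not the authors' code) is shown below: fixed random convolution kernels of several sizes are registered as non-trainable buffers, and only one scalar weight per random filter is optimized. Kernel sizes, initial scaling, and names are assumptions.

```python
# Illustrative sketch only: a PyTorch layer with frozen random kernels of varying
# sizes, where each random filter contributes through a single trainable scalar.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RandomKernelConv(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.kernel_sizes = kernel_sizes
        # Frozen random filter banks, one bank per kernel size (never optimized).
        for k in kernel_sizes:
            bank = torch.randn(out_ch, in_ch, k, k) / (k * (in_ch ** 0.5))
            self.register_buffer(f"bank_{k}", bank)
        # One trainable scalar per random filter (out_ch scalars per bank).
        self.scales = nn.ParameterDict(
            {str(k): nn.Parameter(torch.ones(out_ch)) for k in kernel_sizes}
        )

    def forward(self, x):
        out = 0.0
        for k in self.kernel_sizes:
            bank = getattr(self, f"bank_{k}")
            resp = F.conv2d(x, bank, padding=k // 2)                     # fixed random filters
            out = out + resp * self.scales[str(k)].view(1, -1, 1, 1)     # scalar weights
        return out

# Usage: only the scalar weights (and downstream layers) receive gradients.
layer = RandomKernelConv(in_ch=3, out_ch=16)
y = layer(torch.randn(1, 3, 32, 32))   # -> shape (1, 16, 32, 32)
```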
Review of Concepts and Tools Applied to Assess Risks Associated with Food Imports
Authors: A. Falenski, A. Kaesbohrer, M. Filter
Abstract:
Introduction: Risk assessments can be performed in various ways and in different degrees of complexity. In order to assess risks associated with imported foods additional information needs to be taken into account compared to a risk assessment on regional products. The present review is an overview on currently available best practise approaches and data sources used for food import risk assessments (IRAs). Methods: A literature review has been performed. PubMed was searched for articles about food IRAs published in the years 2004 to 2014 (English and German texts only, search string “(English [la] OR German [la]) (2004:2014 [dp]) import [ti] risk”). Titles and abstracts were screened for import risks in the context of IRAs. The finally selected publications were analysed according to a predefined questionnaire extracting the following information: risk assessment guidelines followed, modelling methods used, data and software applied, existence of an analysis of uncertainty and variability. IRAs cited in these publications were also included in the analysis. Results: The PubMed search resulted in 49 publications, 17 of which contained information about import risks and risk assessments. Within these 19 cross references were identified to be of interest for the present study. These included original articles, reviews and guidelines. At least one of the guidelines of the World Organisation for Animal Health (OIE) and the Codex Alimentarius Commission were referenced in any of the IRAs, either for import of animals or for imports concerning foods, respectively. Interestingly, also a combination of both was used to assess the risk associated with the import of live animals serving as the source of food. Methods ranged from full quantitative IRAs using probabilistic models and dose-response models to qualitative IRA in which decision trees or severity tables were set up using parameter estimations based on expert opinions. Calculations were done using @Risk, R or Excel. Most heterogeneous was the type of data used, ranging from general information on imported goods (food, live animals) to pathogen prevalence in the country of origin. These data were either publicly available in databases or lists (e.g., OIE WAHID and Handystatus II, FAOSTAT, Eurostat, TRACES), accessible on a national level (e.g., herd information) or only open to a small group of people (flight passenger import data at national airport customs office). In the IRAs, an uncertainty analysis has been mentioned in some cases, but calculations have been performed only in a few cases. Conclusion: The current state-of-the-art in the assessment of risks of imported foods is characterized by a great heterogeneity in relation to general methodology and data used. Often information is gathered on a case-by-case basis and reformatted by hand in order to perform the IRA. This analysis therefore illustrates the need for a flexible, modular framework supporting the connection of existing data sources with data analysis and modelling tools. Such an infrastructure could pave the way to IRA workflows applicable ad-hoc, e.g. in case of a crisis situation.Keywords: import risk assessment, review, tools, food import
Theoretical and Experimental Investigation of Structural, Electrical and Photocatalytic Properties of K₀.₅Na₀.₅NbO₃ Lead-Free Ceramics Prepared via Different Synthesis Routes
Authors: Manish Saha, Manish Kumar Niranjan, Saket Asthana
Abstract:
The K₀.₅Na₀.₅NbO₃ (KNN) system has emerged over the years as one of the most promising lead-free piezoelectrics. In this work, we perform a comprehensive investigation of the electronic structure, lattice dynamics and dielectric/ferroelectric properties of the room temperature phase of KNN by combining ab-initio DFT-based theoretical analysis and experimental characterization. We assign symmetry labels to the KNN vibrational modes and obtain ab-initio polarized Raman spectra, infrared (IR) reflectivity, Born effective charge tensors, oscillator strengths, etc. The computed Raman spectrum is found to agree well with the experimental spectrum. In particular, the results suggest that the mode in the range ~840-870 cm⁻¹ reported in the experimental studies is longitudinal optical (LO) with A_1 symmetry. The Raman mode intensities are calculated for different light polarization set-ups, which suggests that different symmetry modes are observed in different polarization set-ups. The electronic structure of KNN is investigated, and an optical absorption spectrum is obtained. Further, the performance of DFT semi-local, meta-GGA and hybrid exchange-correlation (XC) functionals in the estimation of KNN band gaps is investigated. The KNN band gaps computed using the GGA-1/2 and HSE06 hybrid functional schemes are found to be in excellent agreement with the experimental value. COHP, electron localization function and Bader charge analyses are also performed to deduce the nature of chemical bonding in KNN. The solid-state reaction and hydrothermal methods are used to prepare the KNN ceramics, and the effects of grain size on the physical characteristics of these ceramics are examined, providing a comprehensive study of the impact of different synthesis techniques on the structural, electrical, and photocatalytic properties of ferroelectric KNN ceramics. The KNN-S ceramics prepared by the solid-state method have a significantly larger grain size than the KNN-H ceramics prepared by the hydrothermal method. Furthermore, KNN-S is found to exhibit higher dielectric, piezoelectric and ferroelectric properties than KNN-H. On the other hand, increased photocatalytic activity is observed in KNN-H compared to KNN-S. Compared to the hydrothermal synthesis, the solid-state synthesis increases the relative dielectric permittivity (ε′) from 2394 to 3286, the remnant polarization (Pr) from 15.38 to 20.41 μC/cm², the planar electromechanical coupling factor (kp) from 0.19 to 0.28 and the piezoelectric coefficient (d33) from 88 to 125 pC/N. The KNN-S ceramics are also found to have a lower leakage current density and a higher grain resistance than the KNN-H ceramics. The enhanced photocatalytic activity of KNN-H is attributed to its relatively smaller particle size. The KNN-S and KNN-H samples are found to have degradation efficiencies for RhB solution of 20% and 65%, respectively. The experimental study highlights the importance of the synthesis method and how it can be exploited to tailor the dielectric, piezoelectric and photocatalytic properties of KNN. Overall, our study provides several important benchmark results on KNN that have not been reported so far.
Keywords: lead-free piezoelectric, Raman intensity spectrum, electronic structure, first-principles calculations, solid state synthesis, photocatalysis, hydrothermal synthesis
The Effect of Extensive Mosquito Migration on Dengue Control as Revealed by Phylogeny of Dengue Vector Aedes aegypti
Authors: M. D. Nirmani, K. L. N. Perera, G. H. Galhena
Abstract:
Dengue has become one of the most important arboviral diseases in all tropical and subtropical regions of the world. Aedes aegypti, the principal vector of the virus, varies in both epidemiological and behavioral characteristics, which can be finely measured through DNA sequence comparison at the population level. Such knowledge of population differences can assist in the implementation of effective vector control strategies, allowing estimates of gene flow and adaptive genomic changes, which are important predictors of the spread of Wolbachia infection or insecticide resistance. As such, this study was undertaken to investigate the phylogenetic relationships of Ae. aegypti from Galle and Colombo, Sri Lanka, based on the ribosomal protein region which spans two exons, in order to understand the geographical distribution of genetically distinct mosquito clades and its impact on mosquito control measures. A 320 bp DNA region spanning 681-930 bp, corresponding to the ribosomal protein, was sequenced in 62 Ae. aegypti larvae collected from Galle (N=30) and Colombo (N=32), Sri Lanka. The sequences were aligned using ClustalW, and the haplotypes were determined with DnaSP 5.10. Phylogenetic relationships among haplotypes were constructed using the maximum likelihood method under the Tamura 3-parameter model in MEGA 7.0.14, including three previously reported sequences of Australian (N=2) and Brazilian (N=1) Ae. aegypti. The bootstrap support was calculated using 1000 replicates, and the tree was rooted using Aedes notoscriptus (GenBank accession No. KJ194101). Among all sequences, nineteen different haplotypes were found, of which five were shared by 80% of the mosquitoes in the two populations. Seven haplotypes were unique to each population. The phylogenetic tree revealed two basal clades and a single derived clade. The observed haplotypes of the two Ae. aegypti populations were distributed across all three clades, indicating a lack of genetic differentiation between the populations. The Brazilian Ae. aegypti haplotype and one of the Australian haplotypes were grouped together with the Sri Lankan basal haplotype in the same basal clade, whereas the other Australian haplotype was found in the derived clade. The phylogram showed that the Galle and Colombo Ae. aegypti populations are highly related to each other despite the large geographic distance (129 km), indicating substantial genetic similarity between them. This has probably arisen from passive migration assisted by human travel and trade through both land and water, as the two areas are bordered by the sea. In addition, the studied Sri Lankan mosquito populations were closely related to the Australian and Brazilian samples. This might have been caused by the shipping industry connecting the three countries, as all of them are fully or partially enclosed by sea. For example, illegal fishing boats migrating to Australia by sea are perhaps a good means of transportation for all life stages of mosquitoes from Sri Lanka. These findings indicate that extensive mosquito migrations occur between populations not only within the country but also among other countries of the world, which might be a main barrier to successful vector control measures.
Keywords: Aedes aegypti, dengue control, extensive mosquito migration, haplotypes, phylogeny, ribosomal protein
Dietary Exposure Assessment of Potentially Toxic Trace Elements in Fruits and Vegetables Grown in Akhtala, Armenia
Authors: Davit Pipoyan, Meline Beglaryan, Nicolò Merendino
Abstract:
Mining industry is one of the priority sectors of Armenian economy. Along with the solution of some socio-economic development, it brings about numerous environmental problems, especially toxic element pollution, which largely influences the safety of agricultural products. In addition, accumulation of toxic elements in agricultural products, mainly in edible parts of plants represents a direct pathway for their penetration into the human food chain. In Armenia, the share of plant origin food in overall diet is significantly high, so estimation of dietary intakes of toxic trace elements via consumption of selected fruits and vegetables are of great importance for observing the underlying health risks. Therefore, the present study was aimed to assess dietary exposure of potentially toxic trace elements through the intake of locally grown fruits and vegetables in Akhtala community (Armenia), where not only mining industry is developed, but also cultivation of fruits and vegetables. Moreover, this investigation represents one of the very first attempts to estimate human dietary exposure of potentially toxic trace elements in the study area. Samples of some commonly grown fruits and vegetables (fig, cornel, raspberry, grape, apple, plum, maize, bean, potato, cucumber, onion, greens) were randomly collected from several home gardens located near mining areas in Akhtala community. The concentration of Cu, Mo, Ni, Cr, Pb, Zn, Hg, As and Cd in samples were determined by using an atomic absorption spectrophotometer (AAS). Precision and accuracy of analyses were guaranteed by repeated analysis of samples against NIST Standard Reference Materials. For a diet study, individual-based approach was used, so the consumption of selected fruits and vegetables was investigated through food frequency questionnaire (FFQ). Combining concentration data with contamination data, the estimated daily intakes (EDI) and cumulative daily intakes were assessed and compared with health-based guidance values (HBGVs). According to the determined concentrations of the studied trace elements in fruits and vegetables, it can be stressed that some trace elements (Cu, Ni, Pb, Zn) among the majority of samples exceeded maximum allowable limits set by international organizations. Meanwhile, others (Cr, Hg, As, Cd, Mo) either did not exceed these limits or still do not have established allowable limits. The obtained results indicated that only for Cu the EDI values exceeded dietary reference intake (0.01 mg/kg/Bw/day) for some investigated fruits and vegetables in decreasing order of potato > grape > bean > raspberry > fig > greens. In contrast to this, for combined consumption of selected fruits and vegetables estimated cumulative daily intakes exceeded reference doses in the following sequence: Zn > Cu > Ni > Mo > Pb. It may be concluded that habitual and combined consumption of the above mentioned fruits and vegetables can pose a health risk to the local population. Hence, further detailed studies are needed for the overall assessment of potential health implications taking into consideration adverse health effects posed by more than one toxic trace element.Keywords: daily intake, dietary exposure, fruits, trace elements, vegetables
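The estimated daily intake (EDI) comparison described above reduces to a simple ratio of contaminant intake to body weight. The sketch below uses hypothetical numbers and an assumed function name; only the 0.01 mg/kg bw/day reference value for Cu is taken from the abstract.

```python
# Illustrative sketch with made-up numbers (not the study's data): estimated daily
# intake (EDI) of a trace element from one food item, and a simple hazard quotient.
def estimated_daily_intake(conc_mg_per_kg, intake_kg_per_day, body_weight_kg):
    """EDI (mg/kg bw/day) = concentration x daily consumption / body weight."""
    return conc_mg_per_kg * intake_kg_per_day / body_weight_kg

# Hypothetical example: Cu at 8 mg/kg in potato, 0.2 kg/day consumption, 70 kg adult.
edi_cu = estimated_daily_intake(8.0, 0.2, 70.0)
rfd_cu = 0.01                      # reference intake quoted in the abstract, mg/kg bw/day
print(edi_cu, edi_cu / rfd_cu)     # ratio > 1 indicates potential concern
```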
A Study on Aquatic Bycatch Mortality Estimation Due to Prawn Seed Collection and Alteration of Collection Method through Sustainable Practices in Selected Areas of Sundarban Biosphere Reserve (SBR), India
Authors: Samrat Paul, Satyajit Pahari, Krishnendu Basak, Amitava Roy
Abstract:
Fishing has been one of the pivotal livelihood activities, especially in developing countries, and has been considered an important occupation for human society since the era of human settlement began. In simple terms, non-target catches of any species during fishing can be considered 'bycatch,' and fishing bycatch is neither a new fishery management issue nor a new problem. Sundarban is one of the world's largest mangrove areas, extending over 10,200 sq. km across India and Bangladesh. This mangrove biome resource is used commercially by local inhabitants, especially forest fringe villagers (FFVs), to sustain their livelihoods. In Sundarban, over-fishing, especially the collection of wild Penaeus monodon post-larvae, is one of the major concerns: during the collection of P. monodon, other aquatic species are destroyed through bycatch mortality, which alters productivity and may negatively impact the biodiversity of the entire ecosystem. Wild prawn seed collection gear, such as small-mesh nets, poses a serious threat to aquatic stocks, since the catch is not limited to prawn seed larvae. As prawn seed collection is inexpensive, requires little monetary investment, and is lucrative, people readily take it up as a source of income. The Wildlife Trust of India's (WTI) intervention in selected forest fringe villages of Sundarban Tiger Reserve (STR) aimed to estimate and reduce the mortality of aquatic bycatch by involving local communities in a newly developed release method and by reducing their time engagement in prawn seed collection (PSC) through Alternate Income Generation (AIG). The study for taxonomic identification of the bycatch was conducted during the period March to October 2019. Collected samples were preserved in 70% ethyl alcohol for identification, and all preserved bycatch samples were identified morphologically by experts of the Zoological Survey of India (ZSI), Kolkata. Around 74 different aquatic species were recorded: 11 mollusc species, 41 fish species (of which 31 were identified), and 22 crustacean species (of which 18 were identified). Around 13 species belonging to different orders and families could not be identified morphologically, as they were collected at the juvenile stage. The study reveals that for every single prawn seed collected, eight individuals of associated fauna are lost. Zero bycatch mortality is not practical; rather, collectors should focus on bycatch reduction by avoiding capture, allowing escape, and reducing mortality, and should change their fishing method by increasing net mesh size to avoid non-target captures. However, as the prawns are small (generally 1-1.5 inches in length), increasing the mesh size would yield little or no profit for collectors. In this case, returning the bycatch to the water is considered one of the best ways to reduce bycatch mortality and is a more sustainable practice.
Keywords: bycatch mortality, biodiversity, mangrove biome resource, sustainable practice, Alternate Income Generation (AIG)
New Findings on the Plasma Electrolytic Oxidation (PEO) of Aluminium
Authors: J. Martin, A. Nominé, T. Czerwiec, G. Henrion, T. Belmonte
Abstract:
The plasma electrolytic oxidation (PEO) is a particular electrochemical process to produce protective oxide ceramic coatings on light-weight metals (Al, Mg, Ti). When applied to aluminum alloys, the resulting PEO coating exhibit improved wear and corrosion resistance because thick, hard, compact and adherent crystalline alumina layers can be achieved. Several investigations have been carried out to improve the efficiency of the PEO process and one particular way consists in tuning the suitable electrical regime. Despite the considerable interest in this process, there is still no clear understanding of the underlying discharge mechanisms that make possible metal oxidation up to hundreds of µm through the ceramic layer. A key parameter that governs the PEO process is the numerous short-lived micro-discharges (micro-plasma in liquid) that occur continuously over the processed surface when the high applied voltage exceeds the critical dielectric breakdown value of the growing ceramic layer. By using a bipolar pulsed current to supply the electrodes, we previously observed that micro-discharges are delayed with respect to the rising edge of the anodic current. Nevertheless, explanation of the origin of such phenomena is still not clear and needs more systematic investigations. The aim of the present communication is to identify the relationship that exists between this delay and the mechanisms responsible of the oxide growth. For this purpose, the delay of micro-discharges ignition is investigated as the function of various electrical parameters such as the current density (J), the current pulse frequency (F) and the anodic to cathodic charge quantity ratio (R = Qp/Qn) delivered to the electrodes. The PEO process was conducted on Al2214 aluminum alloy substrates in a solution containing potassium hydroxide [KOH] and sodium silicate diluted in deionized water. The light emitted from micro-discharges was detected by a photomultiplier and the micro-discharge parameters (number, size, life-time) were measured during the process by means of ultra-fast video imaging (125 kfr./s). SEM observations and roughness measurements were performed to characterize the morphology of the elaborated oxide coatings while XRD was carried out to evaluate the amount of corundum -Al203 phase. Results show that whatever the applied current waveform, the delay of micro-discharge appearance increases as the process goes on. Moreover, the delay is shorter when the current density J (A/dm2), the current pulse frequency F (Hz) and the ratio of charge quantity R are high. It also appears that shorter delays are associated to stronger micro-discharges (localized, long and large micro-discharges) which have a detrimental effect on the elaborated oxide layers (thin and porous). On the basis of the results, a model for the growth of the PEO oxide layers will be presented and discussed. Experimental results support that a mechanism of electrical charge accumulation at the oxide surface / electrolyte interface takes place until the dielectric breakdown occurs and thus until micro-discharges appear.Keywords: aluminium, micro-discharges, oxidation mechanisms, plasma electrolytic oxidation
Teleconnection between El Nino-Southern Oscillation and Seasonal Flow of the Surma River and Possibilities of Long Range Flood Forecasting
Authors: Monika Saha, A. T. M. Hasan Zobeyer, Nasreen Jahan
Abstract:
El Nino-Southern Oscillation (ENSO) is the interaction between atmosphere and ocean in tropical Pacific which causes inconsistent warm/cold weather in tropical central and eastern Pacific Ocean. Due to the impact of climate change, ENSO events are becoming stronger in recent times, and therefore it is very important to study the influence of ENSO in climate studies. Bangladesh, being in the low-lying deltaic floodplain, experiences the worst consequences due to flooding every year. To reduce the catastrophe of severe flooding events, non-structural measures such as flood forecasting can be helpful in taking adequate precautions and steps. Forecasting seasonal flood with a longer lead time of several months is a key component of flood damage control and water management. The objective of this research is to identify the possible strength of teleconnection between ENSO and river flow of Surma and examine the potential possibility of long lead flood forecasting in the wet season. Surma is one of the major rivers of Bangladesh and is a part of the Surma-Meghna river system. In this research, sea surface temperature (SST) has been considered as the ENSO index and the lead time is at least a few months which is greater than the basin response time. The teleconnection has been assessed by the correlation analysis between July-August-September (JAS) flow of Surma and SST of Nino 4 region of the corresponding months. Cumulative frequency distribution of standardized JAS flow of Surma has also been determined as part of assessing the possible teleconnection. Discharge data of Surma river from 1975 to 2015 is used in this analysis, and remarkable increased value of correlation coefficient between flow and ENSO has been observed from 1985. From the cumulative frequency distribution of the standardized JAS flow, it has been marked that in any year the JAS flow has approximately 50% probability of exceeding the long-term average JAS flow. During El Nino year (warm episode of ENSO) this probability of exceedance drops to 23% and while in La Nina year (cold episode of ENSO) it increases to 78%. Discriminant analysis which is known as 'Categoric Prediction' has been performed to identify the possibilities of long lead flood forecasting. It has helped to categorize the flow data (high, average and low) based on the classification of predicted SST (warm, normal and cold). From the discriminant analysis, it has been found that for Surma river, the probability of a high flood in the cold period is 75% and the probability of a low flood in the warm period is 33%. A synoptic parameter, forecasting index (FI) has also been calculated here to judge the forecast skill and to compare different forecasts. This study will help the concerned authorities and the stakeholders to take long-term water resources decisions and formulate policies on river basin management which will reduce possible damage of life, agriculture, and property.Keywords: El Nino-Southern Oscillation, sea surface temperature, surma river, teleconnection, cumulative frequency distribution, discriminant analysis, forecasting index
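The core of the analysis described above, correlating JAS flow with Nino-4 SST and conditioning exceedance probabilities on ENSO phase, can be sketched as follows with simulated data; the column names, the ±0.5 phase cut-off, and the regression coefficients are assumptions for illustration, not the study's values.

```python
# Illustrative sketch (simulated data, not the study's records): correlation of JAS
# flow with Nino-4 SST and probability of exceeding the long-term mean flow by phase.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
years = np.arange(1975, 2016)
sst = rng.normal(size=years.size)                            # standardized Nino-4 SST (JAS)
flow = -0.6 * sst + rng.normal(scale=0.8, size=years.size)   # standardized JAS flow

df = pd.DataFrame({"year": years, "sst": sst, "flow": flow})
print("correlation:", round(df["sst"].corr(df["flow"]), 2))

phase = np.where(df["sst"] > 0.5, "El Nino",
         np.where(df["sst"] < -0.5, "La Nina", "Neutral"))
exceed = df["flow"] > df["flow"].mean()
print(exceed.groupby(phase).mean())                          # P(flow > mean | ENSO phase)
```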
Optimization of the Performance of a Solar Concentrator System with a Cavity Receiver Using the Genetic Algorithm
Authors: Foozhan Gharehkhani
Abstract:
The use of solar energy as a sustainable renewable energy source has gained significant attention in recent years. Solar concentrating systems (CSP), which direct solar radiation onto a receiver, are an effective means of producing high-temperature thermal energy. Cavity receivers, known for their high thermal efficiency and reduced heat losses, are particularly noteworthy in these systems. Optimizing their design can enhance energy efficiency and reduce costs. This study leverages the genetic algorithm, a powerful optimization tool inspired by natural evolution, to optimize the performance of a solar concentrator system with a cavity receiver, aiming for a more efficient and cost-effective design. In this study, a system consisting of a solar concentrator and a cavity receiver was analyzed. The concentrator was designed as a parabolic dish, and the receiver had a cylindrical cavity with a helical structure. The primary parameters were defined as the cavity diameter (D), the receiver height (h), and the helical pipe diameter (d). Initially, the system was optimized to achieve the maximum heat flux, and the optimal parameter values along with the maximum heat flux were obtained. Subsequently, a multi-objective optimization approach was applied, aiming to maximize the heat flux while minimizing the system construction cost. The optimization process was conducted using the genetic algorithm implemented in MATLAB with precise execution. The results of this study revealed that the optimal dimensions of the receiver, including the cavity diameter (D), receiver height (h), and helical pipe diameter (d), were determined to be 0.142 m, 0.1385 m, and 0.011 m, respectively. This optimization resulted in improvements of 3% in the cavity diameter, 8% in the height, and 5% in the helical pipe diameter. Furthermore, the results indicated that the primary focus of this research was the accurate thermal modeling of the solar collection system. The simulations and the obtained results demonstrated that the optimization applied to this system maximized its thermal performance and elevated its energy efficiency to a desirable level. Moreover, this study successfully modeled and controlled effective temperature variations at different angles of solar irradiation, highlighting significant improvements in system efficiency. The significance of this research lies in leveraging solar energy as one of the prominent renewable energy sources, playing a key role in replacing fossil fuels. Considering the environmental and economic challenges associated with the excessive use of fossil resources—such as increased greenhouse gas emissions, environmental degradation, and the depletion of fossil energy reserves—developing technologies related to renewable energy has become a vital priority. Among these, solar concentrating systems, capable of achieving high temperatures, are particularly important for industrial and heating applications. This research aims to optimize the performance of such systems through precise design and simulation, making a significant contribution to the advancement of advanced technologies and the efficient utilization of solar energy in Iran, thereby addressing the country's future energy needs effectively.Keywords: cavity receiver, genetic algorithm, optimization, solar concentrator system performance
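As an illustration of the optimization loop described above, the following is a minimal genetic-algorithm sketch over the three receiver parameters (D, h, d) within assumed bounds; the fitness function is a toy placeholder centred on the reported optimum, not the thermal model used in the study.

```python
# Illustrative sketch only: a minimal GA over (cavity diameter D, height h, pipe
# diameter d). Bounds, population size, and mutation scale are assumptions.
import numpy as np

rng = np.random.default_rng(1)
bounds = np.array([[0.10, 0.20],     # D (m)
                   [0.10, 0.20],     # h (m)
                   [0.008, 0.015]])  # d (m)

def fitness(x):                      # placeholder for the heat-flux model
    D, h, d = x
    return -(D - 0.142) ** 2 - (h - 0.1385) ** 2 - (d - 0.011) ** 2

pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(40, 3))
for _ in range(100):
    scores = np.array([fitness(x) for x in pop])
    parents = pop[np.argsort(scores)[-20:]]                    # selection (top half)
    kids = 0.5 * parents[rng.integers(0, 20, 20)] + 0.5 * parents[rng.integers(0, 20, 20)]
    kids += rng.normal(scale=0.002, size=kids.shape)           # mutation
    kids = np.clip(kids, bounds[:, 0], bounds[:, 1])
    pop = np.vstack([parents, kids])
print(pop[np.argmax([fitness(x) for x in pop])])               # best (D, h, d)
```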
Hybrid Data-Driven Drilling Rate of Penetration Optimization Scheme Guided by Geological Formation and Historical Data
Authors: Ammar Alali, Mahmoud Abughaban, William Contreras Otalvora
Abstract:
Optimizing the drilling process for cost and efficiency requires the optimization of the rate of penetration (ROP). ROP is the measurement of the speed at which the wellbore is created, in units of feet per hour. It is the primary indicator of drilling efficiency. Maximization of the ROP can indicate fast and cost-efficient drilling operations; however, high ROPs may induce unintended events, which may lead to nonproductive time (NPT) and higher net costs. The proposed ROP optimization solution is a hybrid, data-driven system that aims to improve the drilling process, maximize the ROP, and minimize NPT. The system consists of two phases: (1) utilizing existing geological and drilling data to train the model prior to drilling, and (2) real-time adjustment of the controllable dynamic drilling parameters [weight on bit (WOB), rotary speed (RPM), and pump flow rate (GPM)] that directly influence the ROP. During the first phase, geological and historical drilling data are aggregated. Then, the top-rated wells, ranked by high ROP instances, are distinguished. Those wells are filtered based on NPT incidents, and a cross-plot is generated for the controllable dynamic drilling parameters per ROP value. Subsequently, the parameter values (WOB, GPM, RPM) are calculated as a conditioned mean based on physical distance, following the Inverse Distance Weighting (IDW) interpolation methodology. The first phase concludes by producing a model of drilling best practices from the offset wells, prioritizing the optimum ROP value; this phase is performed before drilling commences. Starting with the model produced in phase one, the second phase runs an automated drill-off test, delivering adjustments in real time. Those adjustments are made by directing the driller to deviate two of the controllable parameters (WOB and RPM) by a small percentage (0-5%), following the Constrained Random Search (CRS) methodology. These minor incremental variations reveal new drilling conditions not explored before in the offset wells. The data are then consolidated into a heat-map as a function of ROP. A more optimal ROP performance is identified through the heat-map and amended in the model. The validation process involved the selection of a planned well in an onshore oil field with hundreds of offset wells. The first-phase model was built by utilizing the data points from the top-performing historical wells (20 wells). The model allows drillers to enhance decision-making by leveraging existing data and blending it with live data in real time. An empirical relationship between the controllable dynamic parameters and ROP was derived using Artificial Neural Networks (ANN). The adjustments improved ROP efficiency by over 20%, translating to at least a 10% saving in drilling costs. The novelty of the proposed system lies in its ability to integrate historical data, calibrate based on geological formations, and run real-time global optimization through CRS. These factors position the system to work for any newly drilled well in a developing field.
Keywords: drilling optimization, geological formations, machine learning, rate of penetration
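The conditioned-mean step of phase one, Inverse Distance Weighting of offset-well parameters, can be sketched as follows; the coordinates, parameter values, and function names are hypothetical, not taken from the field data.

```python
# Illustrative sketch only: IDW of offset-well drilling parameters, giving a
# distance-conditioned mean of WOB/RPM/GPM for a planned well location.
import numpy as np

def idw_parameters(offset_xy, offset_params, target_xy, power=2.0, eps=1e-9):
    """offset_xy: (n, 2) well coordinates; offset_params: (n, 3) [WOB, RPM, GPM];
    target_xy: (2,) planned-well location. Returns the IDW-weighted mean."""
    d = np.linalg.norm(offset_xy - target_xy, axis=1) + eps   # distances to offsets
    w = 1.0 / d ** power                                      # inverse-distance weights
    w /= w.sum()
    return w @ offset_params                                  # weighted average per parameter

# Hypothetical example: three offset wells, each filtered to its best-ROP record.
offset_xy = np.array([[0.0, 0.0], [1.2, 0.4], [0.3, 2.1]])        # km
offset_params = np.array([[25.0, 120.0, 900.0],                   # [WOB klbf, RPM, GPM]
                          [28.0, 110.0, 950.0],
                          [22.0, 130.0, 880.0]])
print(idw_parameters(offset_xy, offset_params, np.array([0.5, 0.5])))
```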
Performance and Limitations of Likelihood Based Information Criteria and Leave-One-Out Cross-Validation Approximation Methods
Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer
Abstract:
Model assessment, in the Bayesian context, involves evaluation of the goodness-of-fit and the comparison of several alternative candidate models for predictive accuracy and improvements. In posterior predictive checks, data simulated under the fitted model are compared with the actual data. Predictive model accuracy is estimated using information criteria such as the Akaike information criterion (AIC), the Bayesian information criterion (BIC), the Deviance information criterion (DIC), and the Watanabe-Akaike information criterion (WAIC). The goal of an information criterion is to obtain an unbiased measure of out-of-sample prediction error. Since posterior checks use the data twice, once for model estimation and once for testing, a bias correction which penalises model complexity is incorporated in these criteria. Cross-validation (CV) is another method used for examining out-of-sample prediction accuracy. Leave-one-out cross-validation (LOO-CV) is the most computationally expensive of the CV variants, as it fits as many models as there are observations. Importance sampling (IS), truncated importance sampling (TIS) and Pareto-smoothed importance sampling (PSIS) are generally used as approximations to the exact LOO-CV; they utilise the existing MCMC results and avoid the expensive computation. The reciprocals of the predictive densities calculated over posterior draws for each observation are treated as the raw importance weights. These are in turn used to calculate the approximate LOO-CV of the observation as a weighted average of posterior densities. In IS-LOO, the raw weights are used directly, whereas in TIS-LOO and PSIS-LOO the larger weights are replaced by truncated or smoothed weights. Although information criteria and LOO-CV are unable to reflect the goodness-of-fit in an absolute sense, their differences can be used to measure the relative performance of the models of interest. However, the use of these measures is only valid under specific circumstances. This study has developed 11 models using normal, log-normal, gamma, and Student's t distributions to improve PCR stutter prediction with forensic data. These models comprise four with profile-wide variances, four with locus-specific variances, and three two-component mixture models. The mean stutter ratio in each model is modeled as a locus-specific simple linear regression against a feature of the alleles under study known as the longest uninterrupted sequence (LUS). The use of AIC, BIC, DIC, and WAIC in model comparison has some practical limitations. Even though IS-LOO, TIS-LOO, and PSIS-LOO are considered approximations of the exact LOO-CV, the study observed some drastic deviations in the results. However, there are some interesting relationships among the logarithms of pointwise predictive densities (lppd) calculated under WAIC and the LOO approximation methods. The estimated overall lppd is a relative measure that reflects the overall goodness-of-fit of the model. Parallel log-likelihood profiles were observed for the models, conditional on equal posterior variances in lppds. This study illustrates the limitations of the information criteria in practical model comparison problems. In addition, the relationships among the LOO-CV approximation methods and WAIC, together with their limitations, are discussed. Finally, useful recommendations that may help in practical model comparisons with these methods are provided.
Keywords: cross-validation, importance sampling, information criteria, predictive accuracy
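The importance-sampling LOO approximation discussed above can be sketched as follows, assuming a matrix of pointwise log-likelihoods over posterior draws; the truncation rule shown is a simple TIS-style cap, and in practice the Pareto-smoothed variant is available in libraries such as ArviZ.

```python
# Illustrative sketch only (assumed variable names, not the study's code):
# importance-sampling LOO from existing posterior draws. log_lik has shape (S, N):
# pointwise log-likelihoods for S posterior draws and N observations.
import numpy as np

def is_loo(log_lik, truncate=True):
    S, N = log_lik.shape
    # Raw importance weights: reciprocals of the pointwise predictive densities.
    log_w = -log_lik
    log_w -= log_w.max(axis=0, keepdims=True)           # stabilise before exponentiating
    w = np.exp(log_w)
    if truncate:                                         # simple TIS-style truncation
        w = np.minimum(w, np.sqrt(S) * w.mean(axis=0, keepdims=True))
    w /= w.sum(axis=0, keepdims=True)
    # LOO predictive density per observation: weighted average over posterior draws.
    loo_dens = (w * np.exp(log_lik)).sum(axis=0)
    return np.log(loo_dens).sum()                        # sum of pointwise log densities

# Usage with simulated draws: 2000 posterior draws, 50 observations.
rng = np.random.default_rng(0)
fake_log_lik = rng.normal(-1.0, 0.3, size=(2000, 50))
print(is_loo(fake_log_lik))
```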
Hydrogeomatic System for the Economic Evaluation of Damage by Flooding in Mexico
Authors: Alondra Balbuena Medina, Carlos Diaz Delgado, Aleida Yadira Vilchis Fránces
Abstract:
In Mexico, news is disseminated each year about the ravages of floods: the total loss of housing, damage to the fields, increases in the cost of food derived from lost harvests, and health problems such as skin infections, in addition to social problems such as delinquency and damage to educational institutions and the population in general. Flooding is a consequence of heavy rains, tropical storms, and/or hurricanes that generate excess water in drainage systems exceeding their capacity. In urban areas, heavy rains can be one of the main factors causing flooding, in addition to excessive precipitation, dam breakage, and human activities, for example, excessive garbage in the drains. In agricultural areas, floods can extend over large areas of cultivation. It should be mentioned that, for both areas, one of the significant impacts of floods is that they can permanently affect the livelihoods of many families, causing damage, for example, to workplaces such as farmland, commercial or industrial areas, and places where services are provided. In recent years, Information and Communication Technologies (ICT) have developed at an accelerated pace, reflected in exponential growth in innovation and, as a result, the daily generation of new technologies, updates, and applications. Innovation in the development of information technology applications has impacted all areas of human activity. It influences all facets of individuals' lives, reconfiguring the way the world is perceived and analyzed and the way people interrelate as individuals and as a society in economic, political, social, cultural, educational, and environmental terms. Therefore, the present work describes the creation of a system for calculating flood costs for housing areas, retail establishments, and agricultural areas of the Mexican Republic, based on the use and application of geomatic tools, intended to be useful to the public, educational, and private sectors. To analyze hydrometeorological impacts and make use of the obtained results, the geoinformatic tool was constructed from two different points of view: the geoinformatic one (design and development of GIS software) and the methodology of flood damage validation, in order to integrate a tool that provides the user with a monetary estimate of the effects caused by floods. With information from the period 2000-2014, the functionality of the application was corroborated. For the years 2000 to 2009, only the analysis of agricultural and housing areas was carried out, with information on commercial establishments incorporated for the period 2010-2014. The method proposed for this research project is, in addition to the tool itself, a fundamental contribution to society. In summary, approaching the problems of the physical-geographical environment from the point of view of spatial analysis makes it possible to offer different alternative solutions and also opens up avenues for academia and research.
Keywords: floods, technological innovation, monetary estimation, spatial analysis
Assessing P0.1 and Occlusion Pressures in Brain-Injured Patients on Pressure Support Ventilation: A Study Protocol
Authors: S. B. R. Slagmulder
Abstract:
Monitoring inspiratory effort and dynamic lung stress in patients on pressure support ventilation in the ICU is important for protecting against patient self-inflicted lung injury (P-SILI) and diaphragm dysfunction. Strategies to address the detrimental effects of respiratory drive and effort can lead to improved patient outcomes. Two non-invasive estimation methods, occlusion pressure (Pocc) and P0.1, have been proposed for achieving lung- and diaphragm-protective ventilation. However, their relationship and interpretation in neuro ICU patients are not well understood. P0.1 is the airway pressure measured during a 100-millisecond occlusion of the inspiratory port. It reflects the neural drive from the respiratory centers to the diaphragm and respiratory muscles, indicating the patient's respiratory drive during the initiation of each breath. Occlusion pressure, measured during a brief inspiratory pause against a closed airway, provides information about the inspiratory muscles' strength and the system's total resistance and compliance. Research Objective: Understanding the relationship between Pocc and P0.1 in brain-injured patients can provide insights into the interpretation of these values in pressure support ventilation. This knowledge can contribute to determining extubation readiness and optimizing ventilation strategies to improve patient outcomes. The central goal is to assess a study protocol for determining the relationship between Pocc and P0.1 in brain-injured patients on pressure support ventilation and their ability to predict successful extubation. Additionally, comparing these values between brain-damaged and non-brain-damaged patients may provide valuable insights. Key Areas of Inquiry: 1. How do Pocc and P0.1 values correlate within brain-injured patients undergoing pressure support ventilation? 2. To what extent can Pocc and P0.1 values serve as predictive indicators for successful extubation in patients with brain injuries? 3. What differentiates the Pocc and P0.1 values between patients with brain injuries and those without? Methodology: P0.1 and occlusion pressures are standard measurements for pressure support ventilation patients, taken by attending doctors as per protocol. We utilize electronic patient records for existing data. An unpaired t-test will be conducted to compare P0.1 and Pocc values between the two study groups. Associations between P0.1 and Pocc and other study variables, such as extubation, will be explored with simple regression and correlation analysis. Depending on how the data evolve, subgroup analysis will be performed for patients with and without extubation failure. Results: While it is anticipated that neuro patients may exhibit high respiratory drive, the linkage between such elevation, quantified by P0.1, and successful extubation remains unknown. The analysis will focus on determining the ability of these values to predict successful extubation and their potential impact on ventilation strategies. Conclusion: Further research is pending to fully understand the potential of these indices and their impact on mechanical ventilation in different patient populations and clinical scenarios. Understanding these relationships can aid in determining extubation readiness and tailoring ventilation strategies to improve patient outcomes in this specific patient population. Additionally, it is vital to account for the influence of sedatives, neurological scores, and BMI on respiratory drive and occlusion pressures to ensure a comprehensive analysis.
Keywords: brain damage, diaphragm dysfunction, occlusion pressure, p0.1, respiratory drive
A Hybrid of BioWin and Computational Fluid Dynamics Based Modeling of Biological Wastewater Treatment Plants for Model-Based Control
Authors: Komal Rathore, Kiesha Pierre, Kyle Cogswell, Aaron Driscoll, Andres Tejada Martinez, Gita Iranipour, Luke Mulford, Aydin Sunol
Abstract:
Modeling of Biological Wastewater Treatment Plants requires several parameters for kinetic rate expressions, thermo-physical properties, and hydrodynamic behavior. The kinetics and associated mechanisms become complex due to several biological processes taking place in wastewater treatment plants at varying times and spatial scales. A dynamic process model that incorporated the complex model for activated sludge kinetics was developed using the BioWin software platform for an Advanced Wastewater Treatment Plant in Valrico, Florida. Due to the extensive number of tunable parameters, an experimental design was employed for judicious selection of the most influential parameter sets and their bounds. The model was tuned using both the influent and effluent plant data to reconcile and rectify the forecasted results from the BioWin Model. Amount of mixed liquor suspended solids in the oxidation ditch, aeration rates and recycle rates were adjusted accordingly. The experimental analysis and plant SCADA data were used to predict influent wastewater rates and composition profiles as a function of time for extended periods. The lumped dynamic model development process was coupled with Computational Fluid Dynamics (CFD) modeling of the key units such as oxidation ditches in the plant. Several CFD models that incorporate the nitrification-denitrification kinetics, as well as, hydrodynamics was developed and being tested using ANSYS Fluent software platform. These realistic and verified models developed using BioWin and ANSYS were used to plan beforehand the operating policies and control strategies for the biological wastewater plant accordingly that further allows regulatory compliance at minimum operational cost. These models, with a little bit of tuning, can be used for other biological wastewater treatment plants as well. The BioWin model mimics the existing performance of the Valrico Plant which allowed the operators and engineers to predict effluent behavior and take control actions to meet the discharge limits of the plant. Also, with the help of this model, we were able to find out the key kinetic and stoichiometric parameters which are significantly more important for modeling of biological wastewater treatment plants. One of the other important findings from this model were the effects of mixed liquor suspended solids and recycle ratios on the effluent concentration of various parameters such as total nitrogen, ammonia, nitrate, nitrite, etc. The ANSYS model allowed the abstraction of information such as the formation of dead zones increases through the length of the oxidation ditches as compared to near the aerators. These profiles were also very useful in studying the behavior of mixing patterns, effect of aerator speed, and use of baffles which in turn helps in optimizing the plant performance.Keywords: computational fluid dynamics, flow-sheet simulation, kinetic modeling, process dynamics
Worldwide GIS Based Earthquake Information System/Alarming System for Microzonation/Liquefaction and Its Application for Infrastructure Development
Authors: Rajinder Kumar Gupta, Rajni Kant Agrawal, Jaganniwas
Abstract:
One of the most frightening natural phenomena is the occurrence of earthquakes, with their terrible and disastrous effects. Many earthquakes occur every day worldwide, and there is a need for knowledge of the trends in earthquake occurrence worldwide. The recording and interpretation of data obtained from the worldwide network of seismological stations has made this possible. From the analysis of recorded earthquake data, the earthquake parameters and source parameters can be computed and earthquake catalogues can be prepared. These catalogues provide information on origin time, epicenter location (in terms of latitude and longitude), focal depth, magnitude, and other related details of the recorded earthquakes, and they are used for seismic hazard estimation. Manual interpretation and analysis of these data are tedious and time-consuming. A geographical information system (GIS) is a computer-based system designed to store, analyze, and display geographic information. The implementation of integrated GIS technology provides an approach which permits rapid evaluation of complex inventory databases under a variety of earthquake scenarios and allows the user to view results interactively, almost immediately. GIS technology provides a powerful tool for displaying outputs and permits users to see the graphical distribution of impacts for different earthquake scenarios and assumptions. In the present study, an endeavor has been made to compile earthquake data for the whole world in Visual Basic on the ArcGIS platform so that the data can be used easily for further analysis by earthquake engineers. The basic data on time of occurrence, location, and size of earthquakes have been compiled for further querying based on various parameters. A preliminary analysis tool is also provided in the user interface to interpret earthquake recurrence in a region. The user interface also includes the seismic hazard information already worked out under the GSHAP program; the seismic hazard, in terms of probability of exceedance for definite return periods, is provided for the world. The seismic zones of the Indian region are included in the user interface from the IS 1893-2002 code on earthquake-resistant design of buildings. City-wise satellite images have been inserted in the map, and based on actual data the following information can be extracted in real time:
• Analysis of soil parameters and their effect
• Microzonation information
• Seismic hazard and strong ground motion
• Soil liquefaction and its effect on the surrounding area
• Impacts of liquefaction on buildings and infrastructure
• Occurrence of future earthquakes and their effect on existing soil
• Propagation of ground vibration due to the occurrence of an earthquake
A GIS-based earthquake information system has been prepared for the whole world in Visual Basic on the ArcGIS platform and further extended to the micro level based on actual soil parameters. Individual tools have been developed for liquefaction, earthquake frequency, etc. All of this information can be used in real time for the development of present and future infrastructure, e.g., multi-storey structures, irrigation dams and their components, and hydropower facilities.
Keywords: GIS based earthquake information system, microzonation, analysis and real time information about liquefaction, infrastructure development
Procedia PDF Downloads 31789 DNA Barcoding for Identification of Dengue Vectors from Assam and Arunachal Pradesh: North-Eastern States in India
Authors: Monika Soni, Shovonlal Bhowmick, Chandra Bhattacharya, Jitendra Sharma, Prafulla Dutta, Jagadish Mahanta
Abstract:
Aedes aegypti and Aedes albopictus are considered the two major vectors transmitting dengue virus. In North-east India, two states, Assam and Arunachal Pradesh, are known to be highly endemic zones for dengue and chikungunya viral infection. The taxonomic classification of medically important vectors is important for mapping actual evolutionary trends and for epidemiological studies. However, misidentification of mosquito species in field-collected specimens could have a negative impact on vector-borne disease control policy. DNA barcoding is a prominent method to record available species, differentiate new additions and detect changes in population structure. In this study, a combined approach of morphology and the molecular technique of DNA barcoding was adopted to explore sequence variation in the mitochondrial cytochrome c oxidase subunit I (COI) gene within dengue vectors. The study mapped the distribution of dengue vectors in the two states, Assam and Arunachal Pradesh, India. Approximately five hundred mosquito specimens were collected from different parts of the two states, and their morphological features were compared with taxonomic keys. The detailed taxonomic study identified two species, Aedes aegypti and Aedes albopictus. Aedes aegypti comprised 66.6% of the specimens and was the dominant dengue vector species. The sequences obtained through the standard DNA barcoding protocol were compared with the public databases GenBank and BOLD. All Aedes albopictus sequences showed 100% similarity, whereas the Aedes aegypti sequences showed 99.77-100% COI similarity with sequences of the same species from different geographical locations, based on BOLD database searches. Fifty-nine sequences of the same and related taxa from different dengue-prevalent geographical regions were retrieved from the NCBI and BOLD databases to determine the evolutionary distance model in the phylogenetic analysis. Neighbor-Joining (NJ) and Maximum Likelihood (ML) phylogenetic trees were constructed in MEGA 6.06 software with 1000 bootstrap replicates using the Kimura-2-Parameter model. Sequence divergence analysis showed that intraspecific divergence ranged from 0.0 to 2.0% and interspecific divergence from 11.0 to 12.0%. Transitional and transversional substitutions were tested individually. The sequences were deposited in the NCBI GenBank database. This is claimed to be the first DNA barcoding analysis of Aedes mosquitoes from the North-eastern states of India, and it also confirms the range expansion of two important mosquito species. Overall, this study gives insight into the molecular ecology of dengue vectors in North-eastern India, which will enhance understanding and improve the existing entomological surveillance and vector incrimination program. Keywords: COI, dengue vectors, DNA barcoding, molecular identification, North-east India, phylogenetics
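The distances behind the NJ tree were computed in MEGA 6.06 with the Kimura-2-Parameter model; as a standalone illustration of that calculation (not the authors' pipeline), the sketch below derives the K2P distance from the proportions of transitions and transversions between two aligned sequences. The sequence fragments shown are placeholders, not real barcode records.

```python
# Illustrative Kimura-2-Parameter distance between two aligned DNA fragments.
import math

PURINES, PYRIMIDINES = {"A", "G"}, {"C", "T"}

def k2p_distance(seq1: str, seq2: str) -> float:
    pairs = [(a, b) for a, b in zip(seq1.upper(), seq2.upper())
             if a in "ACGT" and b in "ACGT"]            # skip gaps/ambiguities
    n = len(pairs)
    transitions = sum(1 for a, b in pairs if a != b and
                      ({a, b} <= PURINES or {a, b} <= PYRIMIDINES))
    transversions = sum(1 for a, b in pairs if a != b) - transitions
    P, Q = transitions / n, transversions / n
    # Kimura (1980): d = -1/2 * ln[(1 - 2P - Q) * sqrt(1 - 2Q)]
    return -0.5 * math.log((1 - 2 * P - Q) * math.sqrt(1 - 2 * Q))

# toy example with two short placeholder fragments
print(k2p_distance("ATGGCAGGTATTTCC", "ATGGCGGGCATTTCC"))
```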
Procedia PDF Downloads 30488 Accelerated Carbonation of Construction Materials by Using Slag from Steel and Metal Production as Substitute for Conventional Raw Materials
Authors: Karen Fuchs, Michael Prokein, Nils Mölders, Manfred Renner, Eckhard Weidner
Abstract:
The production of sand-lime bricks is of great concern because of its high energy consumption and the associated CO₂ emissions. In particular, the production of quicklime from limestone and the energy consumed for hydrothermal curing contribute to high CO₂ emissions; hydrothermal curing is carried out under a saturated steam atmosphere at about 15 bar and 200°C for 12 hours. We are therefore investigating the opportunity to replace quicklime and sand in the production of building materials with different types of slag, a calcium-rich waste from steel production. We are also investigating the possibility of substituting conventional hydrothermal curing with CO₂ curing. Six different slags (Linz-Donawitz (LD), ferrochrome (FeCr), ladle (LS), stainless steel (SS), ladle furnace (LF), electric arc furnace (EAF)) provided by "thyssenkrupp MillServices & Systems GmbH" were ground at "Loesche GmbH". Cylindrical blocks with a diameter of 100 mm were pressed at 12 MPa. The composition of the blocks varied between pure slag and mixtures of slag and sand. The effects of pressure, temperature, and time on the CO₂ curing process were studied in a 2-liter high-pressure autoclave. Pressures between 0.1 and 5 MPa, temperatures between 25 and 140°C, and curing times between 1 and 100 hours were considered. The quality of the CO₂-cured blocks was determined by measuring the compressive strength at "Ruhrbaustoffwerke GmbH & Co. KG". The degree of carbonation was determined by total inorganic carbon (TIC) and X-ray diffraction (XRD) measurements. The pH trends in the cross-section of the blocks were monitored using phenolphthalein as a liquid pH indicator. The parameter set that yielded the best-performing material was tested on all slag types. In addition, the method was scaled up to steel slag-based building blocks (240 mm x 115 mm x 60 mm) provided by "Ruhrbaustoffwerke GmbH & Co. KG" and CO₂-cured in a 20-liter high-pressure autoclave. The results show that CO₂ curing of building blocks consisting of pure wetted LD slag leads to severe cracking of the cylindrical specimens; the high CO₂ uptake causes the specimens to expand. However, if LD slag is used only in part of the mixture, replacing quicklime completely and sand partially, dimensionally stable bricks with high compressive strength are produced. The tests to determine the optimum pressure and temperature show 2 MPa and 50°C to be promising parameters for the CO₂ curing process. At these parameters and after 3 h, the compressive strength of LD slag blocks reaches the highest average value of almost 50 N/mm², more than double that of conventional sand-lime bricks. Longer CO₂ curing times do not result in higher compressive strengths. XRD and TIC measurements confirmed the formation of carbonates. All tested slag-based bricks show higher compressive strengths than conventional sand-lime bricks; however, the type of slag has a significant influence on the compressive strength values. The results of the tests in the 20-liter plant agreed well with those of the 2-liter tests. With its comparatively moderate operating conditions, the CO₂ curing process has a high potential for saving CO₂ emissions. Keywords: CO₂ curing, carbonation, CCU, steel slag
Procedia PDF Downloads 10487 Application of Harris Hawks Optimization Metaheuristic Algorithm and Random Forest Machine Learning Method for Long-Term Production Scheduling Problem under Uncertainty in Open-Pit Mines
Authors: Kamyar Tolouei, Ehsan Moosavi
Abstract:
In open-pit mines, the long-term production scheduling optimization problem (LTPSOP) is a complicated problem involving many constraints, large datasets, and uncertainties. Uncertainty in the output is caused by several geological, economic, or technical factors. Due to its dimensions and NP-hard nature, it is usually difficult to find an ideal solution to the LTPSOP. The optimal schedule generally restricts the ore, metal, and waste tonnages, average grades, and cash flows of each period. Past decades have witnessed substantial work on long-term production scheduling and optimization algorithms as researchers have become highly cognizant of the issue; even so, the LTPSOP cannot be considered a well-solved problem. Traditional production scheduling methods in open-pit mines apply a single estimated orebody model to produce optimal schedules. The smoothing effect of some geostatistical estimation procedures causes most such mine schedules and production predictions to be unrealistic and imperfect. With the expansion of simulation procedures, the risk arising from grade uncertainty in ore reserves can be evaluated and organized through a set of equally probable orebody realizations. In this paper, to incorporate grade uncertainty into the strategic mine schedule, a stochastic integer programming framework is presented for the LTPSOP. The objective function of the model is to maximize the net present value and simultaneously minimize the risk of deviation from the production targets under grade uncertainty, while satisfying all technical constraints and operational requirements. Instead of applying one estimated orebody model as input to optimize the production schedule, a set of equally probable orebody realizations is applied to capture grade uncertainty and to produce a more profitable, risk-based production schedule. A mixture of metaheuristic procedures and mathematical methods paves the way to an appropriate solution. This paper introduces a hybrid model combining the augmented Lagrangian relaxation (ALR) method with a metaheuristic algorithm, Harris Hawks optimization (HHO), to solve the LTPSOP under grade uncertainty. In this study, the HHO is employed to update the Lagrange coefficients. In addition, a machine learning method, Random Forest, is applied to estimate the gold grade in a mineral deposit. The Monte Carlo method is used as the simulation method, with 20 realizations. The results indicate that the proposed approaches are considerably improved in comparison with the traditional methods. The outcomes were also compared with the ALR-genetic algorithm and ALR-sub-gradient approaches. To demonstrate the applicability of the model, a case study of an open-pit gold mining operation is implemented. The framework shows the capability to minimize risk and to improve the expected net present value and financial profitability for the LTPSOP, and it controls geological risk more effectively than the traditional procedure by considering grade uncertainty within the hybrid model framework. Keywords: grade uncertainty, metaheuristic algorithms, open-pit mine, production scheduling optimization
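The abstract names Random Forest as the grade-estimation step feeding the stochastic scheduling model. The sketch below is a hedged illustration of that step only, under assumed inputs (drillhole coordinates as features, assayed gold grade as target) and synthetic data; it is not the authors' model configuration, and the ALR-HHO scheduling stage is not reproduced here.

```python
# Illustrative Random Forest grade estimation on synthetic drillhole data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
coords = rng.uniform(0, 500, size=(2000, 3))              # x, y, z in metres (assumed)
trend = 2.0 * np.exp(-((coords[:, 0] - 250) ** 2 + (coords[:, 1] - 250) ** 2) / 2e4)
grade = np.clip(trend + 0.3 * rng.lognormal(sigma=0.5, size=2000), 0, None)  # g/t Au

X_train, X_test, y_train, y_test = train_test_split(coords, grade, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0)
rf.fit(X_train, y_train)
print("R^2 on held-out samples:", round(rf.score(X_test, y_test), 3))

# Block-model grades predicted this way would then feed the stochastic scheduling
# model, with Monte Carlo realizations capturing grade uncertainty.
```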
Procedia PDF Downloads 10686 Virtual Reality Applications for Building Indoor Engineering: Circulation Way-Finding
Authors: Atefeh Omidkhah Kharashtomi, Rasoul Hedayat Nejad, Saeed Bakhtiyari
Abstract:
Circulation paths and the indoor connection network of a building play an important role both in the daily operation of the building and during evacuation in emergency situations. The legibility of these paths for navigation inside the building is deeply connected to the human perceptive and cognitive system and to the way the surrounding environment is perceived. Human perception of space relies on the sensory systems operating in a three-dimensional environment and is non-linear, so reducing its representation in architectural design to a two-dimensional, linear issue should be avoided. Today, advances in virtual reality (VR) technology have led to various applications, and architecture and building science can benefit greatly from these capabilities. Especially in cases where the design solution requires a detailed and complete understanding of human perception of the environment and of the behavioral response, special attention to VR technologies could be a priority. Way-finding in the indoor circulation network is a fitting example of such an application: way-finding succeeds when human perception of the route and the behavioral reaction have been considered in advance and reflected in the architectural design. This paper discusses VR technology applications for way-finding improvements in the indoor engineering of buildings. In a systematic review drawing on a database of numerous studies, four categories of VR application for circulation way-finding were first identified: 1) data collection of key parameters, 2) comparison of the effect of each parameter in the virtual environment versus the real world (in order to improve the design), 3) comparison of experiment results across different VR devices/methods, or against the results of building simulation, and 4) training and planning. Since the costs of the technical equipment and the knowledge required to use VR tools limit their use to selected design projects, priority buildings for the use of VR during design are introduced based on case-study analysis. The results indicate that VR technology provides opportunities for designers to solve complex building design challenges in an effective and efficient manner. Environmental parameters and the architecture of the circulation routes (indicators such as route configuration, topology, signs, structural and non-structural components, etc.) and the characteristics of each (metrics such as dimensions, proportions, color, transparency, texture, etc.) are then classified for the VR way-finding experiments. Next, in view of human behavior and reaction in movement-related issues, the necessity of scenario-based experiment design for using VR technology to improve the design and to receive feedback from test participants is described. The parameters related to scenario design are presented in a flowchart covering test design, data determination and interpretation, recording of results, analysis, errors, validation and reporting. The design of the experiment environment is also discussed, covering equipment selection according to the scenario and the parameters under study, as well as creating the sense of illusion in terms of place illusion, plausibility and the illusion of body ownership. Keywords: virtual reality (VR), way-finding, indoor, circulation, design
Procedia PDF Downloads 7585 Relationship between Hepatokines and Insulin Resistance in Childhood Obesity
Authors: Mustafa Metin Donma, Orkide Donma
Abstract:
Childhood obesity is an important clinical problem because it may lead to chronic diseases in adulthood. Obesity is a metabolic disease associated with low-grade inflammation, and the liver lies at the center of metabolic pathways. Adropin, fibroblast growth factor-21 (FGF-21), and fetuin-A are hepatokines. Because of the liver's immense participation in glucose metabolism, these liver-derived factors may be associated with insulin resistance (IR), a phenomenon central to obesity. The aim of this study is to determine the concentrations of adropin, FGF-21, and fetuin-A in childhood obesity, to point out possible differences between the obesity groups, and to investigate possible associations among these three hepatokines in obese and morbidly obese children. A total of one hundred and thirty-two children were included in the study. Two obese groups were constituted and matched in terms of the mean ± SD of their ages. Body mass index values of the obese and morbidly obese groups were 25.0 ± 3.5 kg/m² and 29.8 ± 5.7 kg/m², respectively. Anthropometric measurements including waist circumference, hip circumference, head circumference, and neck circumference were recorded. Informed consent forms were obtained from the parents of the participants, and the ethics committee of the institution approved the study protocol. Blood samples were obtained after overnight fasting. Routine biochemical tests, including glucose- and lipid-related parameters, were performed. Concentrations of the hepatokines (adropin, FGF-21, fetuin-A) were determined by enzyme-linked immunosorbent assay. Insulin resistance indices such as the homeostasis model assessment for IR (HOMA-IR), the alanine transaminase-to-aspartate transaminase ratio (ALT/AST), the diagnostic obesity notation model assessment laboratory index and the diagnostic obesity notation model assessment metabolic syndrome index, as well as obesity indices such as the diagnostic obesity notation model assessment-II index and the fat mass index, were calculated using previously derived formulas. Statistical evaluation of the study data and findings was performed with SPSS for Windows; differences were accepted as statistically significant when p was smaller than 0.05. Statistically significant differences were found between the groups for insulin, triglyceride, and high-density lipoprotein cholesterol levels. A significant increase was observed for FGF-21 concentrations in the morbidly obese group. Higher adropin and fetuin-A concentrations were also observed in this group in comparison with the values detected in the obese group (p > 0.05). There was no statistically significant difference between the ALT/AST values of the groups. For all of the remaining IR and obesity indices, significantly increased values were calculated for the morbidly obese children. Significant correlations were detected between HOMA-IR and each of the hepatokines; the strongest was the association with fetuin-A (r=0.373, p=0.001). In conclusion, the increases observed in adropin, FGF-21, and fetuin-A show that these hepatokines have increasing potential going from the obese to the morbidly obese state. Among the correlations found with the IR index, the hepatokine most affected was fetuin-A, a parameter that could possibly be used as an indicator of the advanced obesity stage. Keywords: adropin, fetuin A, fibroblast growth factor-21, insulin resistance, pediatric obesity
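The abstract refers to "previously derived formulas" for the IR indices without reproducing them. As a hedged illustration only, the sketch below uses the conventional published forms of HOMA-IR and the ALT/AST ratio with invented example values; the authors' exact formulas and units may differ.

```python
# Conventional forms of two IR indices named in the abstract (illustrative only).
def homa_ir(glucose_mg_dl: float, insulin_uU_ml: float) -> float:
    """Homeostasis model assessment of insulin resistance (glucose in mg/dL)."""
    return glucose_mg_dl * insulin_uU_ml / 405.0

def alt_ast_ratio(alt_u_l: float, ast_u_l: float) -> float:
    """ALT/AST ratio, one of the liver-related indices compared between groups."""
    return alt_u_l / ast_u_l

print(homa_ir(glucose_mg_dl=92.0, insulin_uU_ml=14.5))   # ~3.3 (invented values)
print(alt_ast_ratio(alt_u_l=28.0, ast_u_l=24.0))          # ~1.17 (invented values)
```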
Procedia PDF Downloads 17684 Characterization of Agroforestry Systems in Burkina Faso Using an Earth Observation Data Cube
Authors: Dan Kanmegne
Abstract:
Africa will become the most populated continent by the end of the century, with around 4 billion inhabitants. Food security and climate change will become continental issues, since agricultural practices depend on climate but also contribute to global emissions and land degradation. Agroforestry has been identified as a cost-efficient and reliable strategy to address these two issues. It is defined as the integrated management of trees and crops/animals in the same land unit. Agroforestry provides benefits in terms of goods (fruits, medicine, wood, etc.) and services (windbreaks, fertility, etc.) and is acknowledged to have great potential for carbon sequestration; therefore, it can be integrated into carbon emission reduction mechanisms. In sub-Saharan Africa in particular, the constraint is the lack of information, at the country level, about both the areas under agroforestry and the characterization (composition, structure, and management) of each agroforestry system. This study describes and quantifies "what is where?" as a prerequisite to the quantification of carbon stocks in the different systems. Remote sensing (RS) is the most efficient approach to map such a dynamic practice as agroforestry, since it gives relatively adequate and consistent information over a large area at nearly no cost. RS data also fulfill the good practice guidelines of the Intergovernmental Panel on Climate Change (IPCC) for use in carbon estimation. Satellite data are becoming more and more accessible, and the archives are growing exponentially. To retrieve useful, decision-relevant information out of this large amount of data, satellite data need to be organized to ensure fast processing, quick accessibility, and ease of use. A new solution is a data cube, which can be understood as a multi-dimensional stack (space, time, data type) of spatially aligned pixels used for efficient access and analysis. A data cube for Burkina Faso has been set up through the cooperation project between the international service provider WASCAL and Germany, which provides an accessible exploitation architecture for multi-temporal satellite data. The aim of this study is to map and characterize agroforestry systems using the Burkina Faso earth observation data cube. In its initial stage, the approach is based on an unsupervised image classification of a normalized difference vegetation index (NDVI) time series from 2010 to 2018, to stratify the country based on vegetation. Fifteen strata were identified, and four samples per location were randomly assigned to define the sampling units. For safety reasons, the northern part of the country will not be part of the fieldwork. A total of 52 locations will be visited by the end of the dry season in February-March 2020. The field campaigns will consist of identifying and describing the different agroforestry systems, together with qualitative interviews. A multi-temporal supervised image classification will then be done with a random forest algorithm, with the field data used both for training the algorithm and for accuracy assessment. The expected outputs are (i) map(s) of agroforestry dynamics, (ii) characteristics of the different systems (main species, management, area, etc.), and (iii) an assessment report of the Burkina Faso data cube. Keywords: agroforestry systems, Burkina Faso, earth observation data cube, multi-temporal image classification
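The stratification step is an unsupervised classification of the 2010-2018 NDVI time series; the abstract does not name the clustering algorithm, so the sketch below uses k-means purely as one common choice, on a synthetic per-pixel NDVI stack standing in for the Burkina Faso data cube.

```python
# Hedged sketch of NDVI time-series stratification into 15 strata (algorithm assumed).
import numpy as np
from sklearn.cluster import KMeans

n_pixels, n_dates = 10_000, 9 * 12          # monthly NDVI, 2010-2018 (assumed cadence)
rng = np.random.default_rng(42)
seasonal = 0.3 + 0.25 * np.sin(2 * np.pi * np.arange(n_dates) / 12)
ndvi_stack = np.clip(seasonal + 0.1 * rng.standard_normal((n_pixels, n_dates)), -1, 1)

kmeans = KMeans(n_clusters=15, n_init=10, random_state=0)   # 15 strata, as in the study
strata = kmeans.fit_predict(ndvi_stack)                     # one stratum label per pixel
print(np.bincount(strata))                                  # pixels per stratum
```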
Procedia PDF Downloads 14683 Development of a Psychometric Testing Instrument Using Algorithms and Combinatorics to Yield Coupled Parameters and Multiple Geometric Arrays in Large Information Grids
Authors: Laith F. Gulli, Nicole M. Mallory
Abstract:
The undertaking of developing a psychometric instrument is monumental. Understanding the relationship between variables and events is important in the structural and exploratory design of psychometric instruments. With this in mind, we describe a method used to group, pair and combine multiple Philosophical Assumption statements that assisted in the development of a 13-item psychometric screening instrument. We abbreviated our Philosophical Assumptions (PAs) and added parameters, which were then condensed and mathematically modeled in a specific process. This model produced clusters of combinatorics, which were utilized in design and development for 1) information retrieval and categorization, 2) item development and 3) estimation of interactions among variables and the likelihood of events. The psychometric screening instrument measured Knowledge, Assessment (education) and Beliefs (KAB) of New Addictions Research (NAR), which we called KABNAR. We obtained an overall internal consistency for the seven Likert belief items, as measured by Cronbach's α, of .81 in the final study of 40 clinicians, calculated with SPSS 14.0.1 for Windows. We constructed the instrument to begin with demographic items (degree/addictions certifications) for identification of target populations practicing within Outpatient Substance Abuse Counseling (OSAC) settings. We then devised education items, belief items (seven items) and a modifiable "barrier from learning" item consisting of six "choose any" choices. We also conceptualized a close relationship between the various degrees and certifications held by Outpatient Substance Abuse Therapists (OSAT) (the demographics domain) and all aspects of their education related to EB-NAR (past and present education and desired future training). We placed a descriptive (PA)1tx in both the demographic and education domains to trace relationships of therapist education within these two domains. The two perception domains B1/b1 and B2/b2 represented different but interrelated perceptions from the therapist perspective. The belief items measured therapist perceptions concerning EB-NAR and therapist perceptions of using EB-NAR at the beginning of outpatient addictions counseling. The PAs were written in simple words, descriptively accurate and concise. We then devised a list of parameters, matched them appropriately to each PA, and devised descriptive parametric PAs in a domain-categorized information grid. Descriptive parametric PAs were reduced to simple mathematical symbols, which made it easy to use parametric PAs in algorithms, combinatorics and clusters to develop larger information grids. Using matching combinatorics, we took the paired demographic and education domains with a subscript of 1 and matched them to the column of each B domain with subscript 1. Our algorithmic matching formed larger information grids with organized clusters in columns and rows. We repeated the process using different demographic, education and belief domains and devised multiple information grids with different parametric clusters and geometric arrays. We found benefit in combining clusters by different geometric arrays, which enabled us to trace parametric variables and concepts. We were able to understand potential differences between dependent and independent variables and to trace relationships of maximum likelihoods. Keywords: psychometric, parametric, domains, grids, therapists
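The internal-consistency figure reported above was computed in SPSS 14.0.1; as an illustration of that statistic only, the sketch below applies the standard Cronbach's α formula to an invented 40-respondent by 7-item Likert matrix, not to the study's data.

```python
# Standard Cronbach's alpha on an invented respondents x items matrix.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=(40, 1))                                   # shared belief construct
responses = np.clip(np.rint(3 + latent + 0.8 * rng.normal(size=(40, 7))), 1, 5)
print(round(cronbach_alpha(responses), 2))
```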
Procedia PDF Downloads 27982 Heat Transfer Modeling of 'Carabao' Mango (Mangifera indica L.) during Postharvest Hot Water Treatments
Authors: Hazel James P. Agngarayngay, Arnold R. Elepaño
Abstract:
Mango is the third most important export fruit in the Philippines. Despite the expanding mango trade in the world market, postharvest losses caused by pests and diseases are still prevalent. Many disease control and pest disinfestation methods have been studied and adopted. Heat treatment is necessary to eliminate pests and diseases in order to pass the quarantine requirements of importing countries. During heat treatments, temperature and time are critical because fruits can easily be damaged by over-exposure to heat. Modeling the process enables researchers and engineers to study the behaviour of the temperature distribution within the fruit over time, and understanding physical processes through modeling and simulation also saves time and resources because of reduced experimentation. This research aimed to simulate the heat transfer mechanism and predict the temperature distribution in 'Carabao' mangoes during hot water treatment (HWT) and extended hot water treatment (EHWT). The simulation was performed in ANSYS CFD software, using the ANSYS CFX solver. The simulation process involved model creation, mesh generation, defining the physics of the model, solving the problem, and visualizing the results. Boundary conditions consisted of the convective heat transfer coefficient and a constant free-stream temperature. The three-dimensional energy equation for transient conditions was numerically solved to obtain heat flux and transient temperature values, with the solver using the finite volume method of discretization. To validate the simulation, actual data were obtained through experiment. The goodness of fit was evaluated using the mean temperature difference (MTD), and a t-test was used to detect significant differences between the data sets. Results showed that the simulations were able to estimate temperatures accurately, with MTDs of 0.50 and 0.69 °C for the HWT and EHWT, respectively, indicating good agreement between the simulated and actual temperature values. The data included in the analysis were taken at different probe-puncture locations within the fruit. Moreover, t-tests showed no significant differences between the two data sets. Maximum heat fluxes obtained at the beginning of the treatments were 394.15 and 262.77 J.s-1 for HWT and EHWT, respectively. These values decreased abruptly in the first 10 seconds, with a gradual decrease observed thereafter. Data on heat flux are necessary in the design of heaters: if underestimated, the heating component of a machine will not be able to provide the heat required by the operation, while over-estimation results in wasted energy and resources. This study demonstrated that the simulation was able to estimate temperatures accurately; thus, it can be used to evaluate the influence of various treatment conditions on the temperature-time history in mangoes. When combined with information on insect mortality and quality degradation kinetics, it could predict the efficacy of a particular treatment and guide appropriate selection of treatment conditions. The effect of various parameters on heat transfer rates, such as the boundary and initial conditions as well as the thermal properties of the material, can be systematically studied without performing experiments. Furthermore, the use of ANSYS software in modeling and simulation can be explored for modeling various systems and processes. Keywords: heat transfer, heat treatment, mango, modeling and simulation
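The study solved the full 3-D transient energy equation in ANSYS CFX. As a much-reduced illustration of the same physics, the sketch below integrates 1-D radial conduction in a sphere with a convective hot-water boundary; the geometry, tissue properties and heat transfer coefficient are assumed typical values, not the paper's, and the scheme is a simple explicit finite-difference analogue rather than the finite-volume CFX model.

```python
# Simplified 1-D radial analogue of fruit heating in hot water (assumed parameters).
import numpy as np

R, N = 0.04, 80                    # fruit "radius" (m), radial nodes (assumed)
k, rho, cp = 0.55, 1050.0, 3700.0  # W/m-K, kg/m^3, J/kg-K (assumed tissue values)
h, T_water, T0 = 400.0, 48.0, 27.0 # W/m^2-K, hot-water and initial temperatures (degC)

alpha = k / (rho * cp)
dr = R / (N - 1)
dt = 0.1 * dr**2 / alpha           # explicit stability margin
r = np.linspace(0.0, R, N)
T = np.full(N, T0)

t, t_end = 0.0, 600.0              # simulate 10 minutes
while t < t_end:
    Tn = T.copy()
    # interior nodes: dT/dt = alpha * (T'' + (2/r) * T')
    T[1:-1] = Tn[1:-1] + alpha * dt * (
        (Tn[2:] - 2 * Tn[1:-1] + Tn[:-2]) / dr**2
        + (2.0 / r[1:-1]) * (Tn[2:] - Tn[:-2]) / (2 * dr)
    )
    T[0] = Tn[0] + 6 * alpha * dt * (Tn[1] - Tn[0]) / dr**2   # symmetry at centre
    # convective surface: -k dT/dr = h (T_surf - T_water), via a ghost node
    T[-1] = Tn[-1] + alpha * dt * (
        2 * (Tn[-2] - Tn[-1]) / dr**2
        + (2 / dr + 2 / R) * (h / k) * (T_water - Tn[-1])
    )
    t += dt

print(f"centre {T[0]:.1f} degC, surface {T[-1]:.1f} degC after {t_end/60:.0f} min")
```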
Procedia PDF Downloads 24781 Assessment of Cytogenetic Damage as a Function of Radiofrequency Electromagnetic Radiation Exposure Measured by Electric Field Strength: A Gender Based Study
Authors: Ramanpreet, Gursatej Gandhi
Abstract:
Background: Dependence on the electromagnetic radiation involved in communication and information technologies has increased incredibly in the personal and professional world. Among the numerous radiation sources are fixed-site transmitters, mobile phone base stations, and power lines, besides indoor devices like cordless phones, WiFi, Bluetooth, TV, radio, microwave ovens, etc. Indeed, radiofrequency radiation (RFR) from mobile phone base stations is continuously emitted even to those not using the devices. The consistent and widespread usage of wireless devices has built up electromagnetic fields everywhere. The radiofrequency electromagnetic field (RF-EMF) has insidiously become part of the environment and, like any contaminant, may be health-hazardous and requires assessment. Materials and Methods: In the present study, cytogenetic damage was assessed using the Buccal Micronucleus Cytome (BMCyt) assay as a function of radiation exposure, after Institutional Ethics Committee clearance of the study and written voluntary informed consent from the participants. General information, lifestyle patterns (diet, physical activity, smoking, drinking, use of mobile phones, internet, Wi-Fi, etc.), and genetic, reproductive (pedigrees) and medical histories were recorded on a pre-designed questionnaire. 24-hour personal exposimeter measurements (PEM) were recorded for 60 unrelated healthy adults (40 cases residing in the vicinity of mobile phone base stations since their installation and 20 controls residing in areas with no base stations). The personal exposimeter collects information from all the sources generating EMF (TETRA, GSM, UMTS, DECT, and WLAN) as total RF-EMF uplink and downlink. Findings: The cases (n=40; 23-90 years) and the controls (n=20; 19-65 years) were matched for alcohol drinking, smoking habits, and mobile and cordless phone usage. The PEM in the cases (149.28 ± 8.98 mV/m) revealed significantly higher (p=0.000) electric field strength than the value recorded in the controls (80.40 ± 0.30 mV/m). The GSM 900 uplink (p=0.000), GSM 1800 downlink (p=0.000), UMTS (uplink, p=0.013; downlink, p=0.001) and DECT (p=0.000) electric field strengths were significantly elevated in the cases compared to the controls. The electric field strength in the cases derived mainly from GSM 1800 (52.26 ± 4.49 mV/m), followed by GSM 900 (45.69 ± 4.98 mV/m), UMTS (25.03 ± 3.33 mV/m), DECT (18.02 ± 2.14 mV/m) and, least, WLAN (8.26 ± 2.35 mV/m). The exposure of the cases was significantly (p=0.000) higher from GSM (97.96 ± 6.97 mV/m) than from UMTS, DECT, and WLAN. The frequencies of micronuclei (1.86X, p=0.007), nuclear buds (2.95X, p=0.002) and the cell-death parameter of condensed chromatin cells (1.75X, p=0.007) were significantly elevated in the cases compared to the controls, probably as a function of radiofrequency radiation exposure. Conclusion: In the absence of other exposure(s), any cytogenetic damage, if unrepaired, is a cause of concern, as it can cause malignancy. A larger sample size with clinical assessment would give more insight into such an effect. Keywords: Buccal micronucleus cytome assay, cytogenetic damage, electric field strength, personal exposimeter
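As an illustration of the kind of case-control comparison reported for the exposimeter readings, the sketch below runs a two-sample test from the group summaries quoted in the abstract. The dispersion terms are assumed here to be standard deviations (the abstract does not say), and Welch's t-test from summary statistics may differ from the exact procedure the authors applied.

```python
# Case-control comparison of total RF-EMF field strength from summary statistics.
from scipy.stats import ttest_ind_from_stats

t, p = ttest_ind_from_stats(
    mean1=149.28, std1=8.98, nobs1=40,   # cases: total RF-EMF field strength (mV/m)
    mean2=80.40, std2=0.30, nobs2=20,    # controls
    equal_var=False,                      # Welch's correction for unequal variances
)
print(f"t = {t:.2f}, p = {p:.3g}")
```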
Procedia PDF Downloads 15880 Impact of Anthropogenic Stresses on Plankton Biodiversity in Indian Sundarban Megadelta: An Approach towards Ecosystem Conservation and Sustainability
Authors: Dibyendu Rakshit, Santosh K. Sarkar
Abstract:
The study provides a comprehensive account of large-scale changes in plankton community structure in relation to water quality characteristics under anthropogenic stresses, mainly those associated with the Annual Gangasagar Festival (AGF) at the southern tip of Sagar Island in the Indian Sundarban wetland, over a 3-year duration (2012-2014; n=36). This prograding, vulnerable and tide-dominated megadelta has formed in the estuarine phase of the Hooghly Estuary and carries the largest continuous tract of luxuriant mangrove forest, enriched with high native flora and fauna. The sampling strategy was designed to characterize the changes in the plankton community and water quality over three distinct phases, namely the festival period (January) and the pre- (December) and post-festival (February) periods. Surface water samples were collected for the estimation of different environmental variables as well as for phytoplankton and microzooplankton biodiversity measurements. Preservation and identification of both biotic and abiotic parameters were carried out by standard chemical and biological methods. The intensive human activities led to sharp ecological changes, reflected in a poor water quality index (WQI) due to high turbidity (14.02±2.34 NTU) coupled with low chlorophyll a (1.02±0.21 mg m-3) and dissolved oxygen (3.94±1.1 mg l-1) compared to the pre- and post-festival periods. Sharp reductions in the abundance (4140 to 2997 cells l-1) and diversity (H′=2.72 to 1.33) of phytoplankton and of microzooplankton tintinnids (450 to 328 ind l-1; H′=4.31 to 2.21) were very pronounced. Small-sized tintinnids (average lorica length=29.4 µm; average LOD=10.5 µm), composed of Tintinnopsis minuta, T. lobiancoi, T. nucula and T. gracilis, were predominant and reached some of their greatest abundances during the festival period. ANOVA revealed significant variation across the festival periods in phytoplankton (F=1.77; p=0.006) and tintinnid abundance (F=2.41; p=0.022). RELATE analysis revealed a significant correlation between the variations in the planktonic communities and the environmental data (R=0.107; p=0.005). Three distinct groups were delineated by principal component analysis, in which a set of hydrological parameters acted as the causative factor(s) maintaining the diversity and distribution of the planktonic organisms. The pronounced adverse impact of anthropogenic stresses on the plankton community could lead to environmental deterioration, disrupting the productivity of benthic and pelagic ecosystems as well as the fishery potential that is directly related to livelihood services. The festival can be considered a driver of multiple changes relating to beach erosion, shoreline changes, pollution from discarded plastic and electronic wastes, and destruction of natural habitats resulting in loss of biodiversity. In addition, deterioration in water quality was also evident from the immersion of idols, causing detrimental effects on aquatic biota. The authors strongly recommend adopting integrated scientific and administrative strategies for the resilience, sustainability and conservation of this megadelta. Keywords: Gangasagar festival, phytoplankton, Sundarban megadelta, tintinnid
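The diversity values (H′) quoted above are Shannon indices. As a hedged illustration of that calculation only, the sketch below computes H′ (natural-log form, one common convention) from invented species counts; these are not the study's species tables.

```python
# Shannon diversity index H' from a vector of species counts (invented data).
import numpy as np

def shannon_index(counts) -> float:
    """H' = -sum(p_i * ln p_i) over species proportions p_i (natural log assumed)."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log(p)).sum())

festival = [820, 640, 310, 150, 60, 17]                         # hypothetical counts per taxon
pre_festival = [420, 390, 350, 330, 300, 280, 250, 220, 180, 120]
print(round(shannon_index(pre_festival), 2), round(shannon_index(festival), 2))
```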
Procedia PDF Downloads 23579 Confidence Envelopes for Parametric Model Selection Inference and Post-Model Selection Inference
Authors: I. M. L. Nadeesha Jayaweera, Adao Alex Trindade
Abstract:
In choosing a candidate model in likelihood-based modeling via an information criterion, the practitioner is often faced with the difficult task of deciding just how far up the ranked list to look. Motivated by this pragmatic necessity, we construct an uncertainty band for a generalized (model selection) information criterion (GIC), defined as a criterion for which the limit in probability is identical to that of the normalized log-likelihood. This includes common special cases such as AIC and BIC. The method starts from the asymptotic normality of the GIC for the joint distribution of the candidate models in an independent and identically distributed (IID) data framework and proceeds by deriving the (asymptotically) exact distribution of the minimum. The calculation of an upper quantile for its distribution then involves the computation of multivariate Gaussian integrals, which is amenable to efficient implementation via the R package "mvtnorm". The performance of the methodology is tested on simulated data by checking the coverage probability of nominal upper quantiles and compared to the bootstrap. Both methods give coverages close to nominal for large samples, but the bootstrap is two orders of magnitude slower. The methodology is subsequently extended to two other commonly used model structures: regression and time series. In the regression case, we derive the corresponding asymptotically exact distribution of the minimum GIC by invoking Lindeberg-Feller type conditions for triangular arrays and are thus able to similarly calculate upper quantiles for its distribution via multivariate Gaussian integration. The bootstrap once again provides a default competing procedure, and we find that similar comparative performance metrics hold as in the IID case. The time series case is complicated by a far more intricate asymptotic regime for the joint distribution of the model GIC statistics. Under a Gaussian likelihood, the default in most packages, one needs to derive the limiting distribution of a normalized quadratic form for a realization from a stationary series. Under conditions on the process satisfied by ARMA models, a multivariate normal limit is once again achieved. The bootstrap can, however, be employed for its computation, whence we are once again in the multivariate Gaussian integration paradigm for upper quantile evaluation. Comparisons of this bootstrap-aided semi-exact method with the full-blown bootstrap once again reveal similar performance but faster computation speeds. One of the most difficult problems in contemporary statistical methodological research is to account for the extra variability introduced by model selection uncertainty, the so-called post-model selection inference (PMSI). We explore ways in which the GIC uncertainty band can be inverted to make inferences on the parameters. This is attempted in the IID case by pivoting the CDF of the asymptotically exact distribution of the minimum GIC. For inference one parameter at a time and a small number of candidate models, this works well, whence the attained PMSI confidence intervals are wider than the MLE-based Wald intervals, as expected. Keywords: model selection inference, generalized information criteria, post model selection, Asymptotic Theory
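The core numerical step described above is a multivariate Gaussian orthant probability, implemented by the authors with the R package "mvtnorm". The sketch below transposes that step to Python/SciPy: given an assumed asymptotic mean vector and covariance for the GIC values of K candidate models, it reads off an upper quantile of their minimum. The numbers are placeholders, not taken from the paper, and the toy equicorrelated covariance is an assumption for illustration.

```python
# Upper quantile of the minimum of a multivariate normal vector (toy GIC example).
import numpy as np
from scipy.optimize import brentq
from scipy.stats import multivariate_normal

mu = np.array([10.0, 10.5, 11.2, 12.0])                 # assumed asymptotic GIC means
rho = 0.6
cov = rho * np.ones((4, 4)) + (1 - rho) * np.eye(4)     # equicorrelated toy covariance

def cdf_min(q: float) -> float:
    """P(min_i GIC_i <= q) = 1 - P(all GIC_i > q) via an orthant probability."""
    upper_orthant = multivariate_normal(mean=-mu, cov=cov).cdf(np.full(4, -q))
    return 1.0 - upper_orthant

level = 0.95
q_upper = brentq(lambda q: cdf_min(q) - level, mu.min() - 10, mu.max() + 10)
print(f"{level:.0%} upper quantile of the minimum GIC: {q_upper:.3f}")
```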
Procedia PDF Downloads 9078 Changes of Mitochondrial Potential in the Midgut Epithelium of Lithobius forficatus (Myriapoda, Chilopoda) Exposed to Cadmium Concentrated in Soil
Authors: Magdalena Rost-Roszkowska, Izabela Poprawa, Alina Chachulska-Zymelka, Lukasz Chajec, Grazyna Wilczek, Piotr Wilczek, Malgorzata Lesniewska
Abstract:
Lithobius forficatus, commonly known as the brown centipede, is a widespread European species which lives in the upper layers of soil, under stones, litter, rocks, and leaves. As a soil organism, it is exposed to numerous stressors such as xenobiotics (including heavy metals), temperature, starvation, pathogens, etc. Heavy metals are treated as environmental pollutants of the soil because of their toxic effects on plants, animals and human beings. Cadmium is one such xenobiotic heavy metal that can be taken up from the soil by plants or animals. The digestive system of centipedes is composed of three distinct regions: the fore-, mid- and hindgut. The salivary glands of centipedes belong to the anterior region of the digestive system and take part in the synthesis, accumulation, and secretion of many substances. The middle region, which is in contact with the food masses, is treated as one of the barriers protecting the organism against stressors originating from the external environment, e.g., toxic metals. As the material for our studies, we chose two organs of the digestive system of the brown centipede that take part in the maintenance of homeostasis: the salivary glands and the midgut. The main purpose of the project was to investigate the relationship between the percentage of depolarized mitochondria, mitophagy, and ATP level in the cells of these organs. The animals were divided into experimental groups: K, the control group, animals cultured under laboratory conditions in horticultural soil and fed with Acheta domesticus larvae; Cd1, animals cultured in horticultural soil supplemented with 80 mg/kg (dry weight) of CdCl2 and fed with A. domesticus larvae maintained in tap water, for 12 days (short-term exposure); and Cd2, animals cultured in horticultural soil supplemented with 80 mg/kg (dry weight) of CdCl2 and fed with A. domesticus larvae maintained in tap water, for 45 days (long-term exposure). The studies were conducted using transmission electron microscopy (TEM), flow cytometry and confocal microscopy. Quantitative analysis revealed a progressive increase in the percentage of cells with depolarized mitochondria in both organs, although the changes were statistically significant relative to the control only in the salivary glands. In both organs, there were no differences in the level of the analyzed parameter depending on the duration of exposure to cadmium. Changes in the ultrastructure of the mitochondria were observed. With the extension of the exposure time to the metal, an increase in the ADP/ATP index was recorded; changes statistically significant relative to the control were demonstrated in the intestine and the salivary glands. In the Cd2 group, this index was about thirty times higher in the intestine and about twenty times higher in the salivary glands than in the control. Acknowledgment: The study has been financed by the National Science Centre, Poland, grant no 2017/25/B/NZ4/00420. Keywords: cadmium, digestive system, ultrastructure, centipede
Procedia PDF Downloads 13877 Regional Hydrological Extremes Frequency Analysis Based on Statistical and Hydrological Models
Authors: Hadush Kidane Meresa
Abstract:
Hydrological extremes frequency analysis is the foundation for hydraulic engineering design, flood protection, drought management, and water resources management and planning, enabling the available water resources to be used to meet the objectives of different organizations and sectors in a country. The spatial variation of the statistical characteristics of extreme flood and drought events is a key element of regional flood and drought analysis and mitigation management. For regions with different hydro-climates, where the data sets are short, scarce, of poor quality or insufficient, regionalization methods are applied to transfer at-site data to a region. This study aims at regional high- and low-flow frequency analysis for the river basins of Poland. The frequent occurrence of hydrological extremes in the region and the rapid development of water resources in these basins have caused serious concerns over the flood and drought magnitudes and frequencies of the rivers in Poland, and the magnitudes and frequencies of high and low flows are needed for flood and drought planning, management and protection at present and in the future. Hydrologically homogeneous high- and low-flow regions were formed by cluster analysis of site characteristics, using hierarchical and C-means clustering and the PCA method. Statistical tests for regional homogeneity were applied through discordancy and heterogeneity measures. In compliance with the test results, the river basins of the region were divided into ten homogeneous regions. In this study, frequency analysis of high and low flows was conducted using annual maximum (AM) series for high flows and 7-day minimum series for low flows, with six statistical distributions. The L-moment and LL-moment methods showed a homogeneous region over the entire province, with the Generalized Logistic (GLOG), Generalized Extreme Value (GEV), Pearson type III (P-III), Generalized Pareto (GPAR), Weibull (WEI) and Power (PR) distributions identified as the regional drought and flood frequency distributions. The 95th-percentile and 1-, 7-, 10- and 30-day flow duration curves were plotted for 10 stations. The cluster analysis, however, produced two regions in the west and east of the province, where the L-moment and LL-moment methods demonstrated the homogeneity of the regions, with the GLOG and Pearson type III (PIII) distributions as the regional frequency distributions for each region, respectively. To examine the spatial variation and regional frequency distribution of flood and drought characteristics, the 10 best catchments from the whole region were selected, and besides the main variables (high and low streamflow) we used variables more related to physiographic and drainage characteristics to identify and delineate homogeneous pools and to derive the best regression models for ungauged sites. These are mean annual rainfall, seasonal flow, average slope, NDVI, aspect, flow length, flow direction, maximum soil moisture, elevation, and drainage order. The regional relationship between one streamflow characteristic (AM high flow or 7-day mean annual low flow) and selected basin characteristics was developed using a Generalized Linear Mixed Model (GLMM) and Generalized Least Squares (GLS) regression, providing a simple and effective method for estimating floods and droughts of desired return periods for ungauged catchments. Keywords: flood, drought, frequency, magnitude, regionalization, stochastic, ungauged, Poland
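One building block named above is the fitting of a regional frequency distribution by L-moments. The sketch below is a hedged, at-site illustration of that step only: it fits a GEV distribution to a synthetic annual-maximum flow series with Hosking's L-moment approximations and reads off a design quantile. The flow record is invented, and the full regional procedure (discordancy and heterogeneity tests, index-flood scaling, LL-moments) is not reproduced.

```python
# At-site GEV fit by L-moments (Hosking's approximations) on a synthetic AM series.
import numpy as np
from math import gamma, log

def sample_lmoments(x):
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    b0 = x.mean()
    b1 = np.sum((i - 1) / (n - 1) * x) / n
    b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
    l1, l2, l3 = b0, 2 * b1 - b0, 6 * b2 - 6 * b1 + b0
    return l1, l2, l3 / l2                       # l1, l2, L-skewness t3

def gev_from_lmoments(l1, l2, t3):
    c = 2.0 / (3.0 + t3) - log(2) / log(3)
    k = 7.8590 * c + 2.9554 * c**2               # Hosking's approximation for the shape
    sigma = l2 * k / ((1 - 2.0 ** (-k)) * gamma(1 + k))
    xi = l1 - sigma * (1 - gamma(1 + k)) / k
    return xi, sigma, k

def gev_quantile(xi, sigma, k, T):
    """Return-period-T quantile (Hosking parameterization)."""
    return xi + sigma / k * (1 - (-np.log(1 - 1.0 / T)) ** k)

rng = np.random.default_rng(3)
annual_maxima = 200 + 80 * rng.gumbel(size=60)   # synthetic 60-year AM flow record (m3/s)
xi, sigma, k = gev_from_lmoments(*sample_lmoments(annual_maxima))
print("100-year flood estimate:", round(gev_quantile(xi, sigma, k, T=100), 1), "m3/s")
```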
Procedia PDF Downloads 603