Search results for: spectrophotometric determination
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1993

103 Edge Enhancement Visual Methodology for Fat Amount and Distribution Assessment in Dry-Cured Ham Slices

Authors: Silvia Grassi, Stefano Schiavon, Ernestina Casiraghi, Cristina Alamprese

Abstract:

Dry-cured ham is an uncooked meat product particularly appreciated for its peculiar sensory traits, among which the lipid component plays a key role in defining quality and, consequently, consumers’ acceptability. Usually, fat content and distribution are chemically determined by expensive, time-consuming, and destructive analyses. Moreover, different sensory techniques are applied to assess product conformity to desired standards. In this context, visual systems are getting a foothold in the meat market, envisioning more reliable and time-saving assessment of food quality traits. The present work aims at developing a simple but systematic and objective visual methodology to assess the fat amount of dry-cured ham slices, in terms of total, intermuscular, and intramuscular fractions. To this aim, 160 slices from 80 PDO dry-cured hams were evaluated by digital image analysis and Soxhlet extraction. RGB images were captured by a flatbed scanner, converted to grey-scale images, and segmented based on intensity histograms as well as on a multi-stage algorithm aimed at edge enhancement. The latter was performed applying the Canny algorithm, which consists of image noise reduction, calculation of the intensity gradient for each image, spurious response removal, actual thresholding on corrected images, and confirmation of strong edge boundaries. The approach allowed for the automatic calculation of total, intermuscular, and intramuscular fat fractions as percentages of the total slice area. Linear regression models were run to estimate the relationships between the image analysis results and the chemical data, thus allowing for the prediction of the total, intermuscular, and intramuscular fat content from the dry-cured ham images. The goodness of fit of the obtained models was confirmed in terms of coefficient of determination (R²), hypothesis testing, and pattern of residuals. Good regression models were found, with R² values of 0.73, 0.82, and 0.73 for the total fat, the sum of intermuscular and intramuscular fat, and the intermuscular fraction, respectively. In conclusion, the edge enhancement procedure led to a good fat segmentation, making the visual approach for the quantification of the different fat fractions in dry-cured ham slices simple, accurate, and precise. The presented image analysis approach steers towards the development of instruments that can overcome destructive, tedious, and time-consuming chemical determinations. As future perspectives, the results of the proposed image analysis methodology will be compared with those of sensory tests in order to develop a fast grading method for dry-cured hams based on fat distribution. Therefore, the system will be able not only to predict the actual fat content but also to reflect the visual appearance of samples as perceived by consumers.
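
As an illustration of the segmentation pipeline described above, the following Python sketch applies OpenCV's Canny detector and histogram-based (Otsu) thresholding to estimate a fat-area percentage; the file name, kernel size, and threshold values are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch of Canny-based fat segmentation; file name and all
# threshold values are illustrative, not the study's parameters.
import cv2
import numpy as np

img = cv2.imread("ham_slice.png")                      # hypothetical RGB scan
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)           # convert to grey scale

# Canny pipeline: Gaussian noise reduction, intensity gradient,
# non-maximum suppression, double threshold, hysteresis edge tracking.
blurred = cv2.GaussianBlur(gray, (5, 5), 0)
edges = cv2.Canny(blurred, 50, 150)   # edge map that could help split
                                      # intermuscular vs intramuscular regions

# Intensity-histogram (Otsu) threshold to separate bright fat from darker
# muscle, plus a low threshold to mask the whole slice against background.
_, fat_mask = cv2.threshold(blurred, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)
_, slice_mask = cv2.threshold(blurred, 10, 255, cv2.THRESH_BINARY)

slice_area = np.count_nonzero(slice_mask)
fat_area = np.count_nonzero(cv2.bitwise_and(fat_mask, slice_mask))
print(f"Total fat fraction: {100 * fat_area / slice_area:.1f}% of slice area")
```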

Keywords: dry-cured ham, edge detection algorithm, fat content, image analysis

Procedia PDF Downloads 176
102 Treatment with Triton-X 100: An Enhancement Approach for Cardboard Bioprocessing

Authors: Ahlam Said Al Azkawi, Nallusamy Sivakumar, Saif Nasser Al Bahri

Abstract:

Diverse approaches and pathways are under development with the determination to produce cellulosic biofuels and other bio-products at commercial scale in “bio-refineries”; however, the key challenge is the high complexity of processing the feedstock, which is complicated and energy consuming. To overcome the complications in utilizing naturally occurring lignocellulose biomass, using waste paper as a feedstock for bio-production may solve the problem. Besides being abundant and cheap, waste paper bioprocessing has evolved in response to public concern over rising landfill costs caused by shrinking landfill capacity. Cardboard (CB) is one of the major components of municipal solid waste and one of the most important items to recycle. Although cellulose and hemicellulose are known to constitute 50-70% of cardboard, the lignin surrounding them causes hydrophobic cross-linking, which physically obstructs hydrolysis by rendering the material resistant to enzymatic cleavage. Therefore, pretreatment is required to disrupt this resistance and to enhance the exposure of the targeted carbohydrates to the hydrolytic enzymes. Several pretreatment approaches have been explored, and the best ones would be those that can improve cellulose conversion rates and hydrolytic enzyme performance with minimal cost and fewer downstream processes. One of the promising strategies in this field is the application of surfactants, especially non-ionic surfactants. In this study, Triton-X 100 was used as a surfactant to treat cardboard prior to enzymatic hydrolysis, and the treatment was compared with acid treatment using 0.1% H2SO4. The effect of the surfactant enhancement was evaluated through its effect on the hydrolysis rate with respect to time, in addition to evaluating the structural changes and modifications by scanning electron microscopy (SEM) and X-ray diffraction (XRD) and through compositional analysis. Further work was performed to produce ethanol from CB treated with Triton-X 100 via separate hydrolysis and fermentation (SHF) and simultaneous saccharification and fermentation (SSF). The hydrolysis studies demonstrated an enhancement in saccharification of 35%. After 72 h of hydrolysis, a saccharification rate of 98% was achieved from CB enhanced with Triton-X 100, while only 89% saccharification was achieved from acid pre-treated CB. At 120 h, the saccharification percentage exceeded 100% as reducing sugars continued to increase with time. This enhancement was not supported by any significant changes in the cardboard composition, as the cellulose, hemicellulose, and lignin contents remained the same after treatment, but obvious structural changes were observed through SEM images. The cellulose fibers were clearly exposed, with far less debris and deposits compared to cardboard without Triton-X 100. The XRD pattern also revealed the ability of the surfactant to remove calcium carbonate, a filler found in waste paper known to have a negative effect on enzymatic hydrolysis. The cellulose crystallinity without surfactant was 73.18% and was reduced to 66.68%, rendering it more amorphous and susceptible to enzymatic attack. Triton-X 100 proved to effectively enhance CB hydrolysis and eventually had a positive effect on the ethanol yield via SSF. Treating cardboard with only Triton-X 100 was a sufficient treatment to enhance enzymatic hydrolysis and ethanol production.
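
The saccharification figures above can be reproduced conceptually with the standard formula relating released reducing sugars to initial carbohydrate content; the 0.9 anhydro-correction factor is a common convention and the input values below are invented, not the study's data.

```python
# Sketch of a saccharification (%) calculation commonly used in hydrolysis
# studies; 0.9 converts glucose mass back to anhydro-glucan equivalents.
# Input values are illustrative, not measurements from this study.
def saccharification_percent(reducing_sugar_g, carbohydrate_g):
    """Percent of cellulose/hemicellulose converted to reducing sugars."""
    return 100.0 * (reducing_sugar_g * 0.9) / carbohydrate_g

# Example: 1.1 g reducing sugars released per 1.0 g initial carbohydrate;
# values can exceed 100%, matching the over-100% readings reported at 120 h.
print(saccharification_percent(1.1, 1.0))  # 99.0
```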

Keywords: cardboard, enhancement, ethanol, hydrolysis, treatment, Triton-X 100

Procedia PDF Downloads 152
101 Methodology for the Determination of Triterpenic Compounds in Apple Extracts

Authors: Mindaugas Liaudanskas, Darius Kviklys, Kristina Zymonė, Raimondas Raudonis, Jonas Viškelis, Norbertas Uselis, Pranas Viškelis, Valdimaras Janulis

Abstract:

Apples are among the most commonly consumed fruits in the world. Based on data from the year 2014, approximately 84.63 million tons of apples are grown per annum. Apples are widely used in the food industry to produce various products and drinks (juice, wine, and cider); they are also consumed unprocessed. Apples in the human diet are an important source of different groups of biologically active compounds that can positively contribute to the prevention of various diseases. They are a source of various biologically active substances, especially vitamins, organic acids, micro- and macro-elements, pectins, and phenolic, triterpenic, and other compounds. Triterpenic compounds, which are characterized by versatile biological activity, are among the most promising and most significant biologically active compounds found in apples for human health. A specific analytical procedure including sample preparation and High Performance Liquid Chromatography (HPLC) analysis was developed, optimized, and validated for the detection of triterpenic compounds in samples of whole apples, their peels, and flesh from the widespread apple cultivars 'Aldas', 'Auksis', 'Connel Red', 'Ligol', 'Lodel', and 'Rajka' grown under Lithuanian climatic conditions. The conditions for triterpenic compound extraction were optimized: the extraction solvent was 100% (v/v) acetone, and the extraction was performed in an ultrasound bath for 10 min. Isocratic elution (with an eluent ratio of 88% solvent A and 12% solvent B) was performed for a rapid separation of triterpenic compounds. The validation of the methodology was performed on the basis of the ICH recommendations. The following validation characteristics were evaluated: the selectivity of the method (specificity), precision, the detection and quantitation limits of the analytes, and linearity. The obtained parameter values confirm the suitability of the methodology for the analysis of triterpenic compounds. Using the optimized and validated HPLC technique, four triterpenic compounds were separated and identified, and their specificity was confirmed. These compounds were corosolic acid, betulinic acid, oleanolic acid, and ursolic acid. Ursolic acid was the dominant compound in all the tested apple samples. The detected amount of betulinic acid was the lowest of all the identified triterpenic compounds. The greatest amounts of triterpenic compounds were detected in whole apple and apple peel samples of the 'Lodel' cultivar, and thus apples and apple extracts of this cultivar are potentially valuable for use in medical practice, for the prevention of various diseases, for adjunct therapy, for the isolation of individual compounds with a specific biological effect, and for the development and production of dietary supplements and functional food enriched in biologically active compounds. Acknowledgements: This work was supported by a grant from the Research Council of Lithuania, project No. MIP-17-8.
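
A hedged sketch of the ICH-style validation arithmetic mentioned above: calibration linearity (R²) and the ICH Q2 limits LOD = 3.3σ/S and LOQ = 10σ/S, computed on an invented calibration series (the real standards and responses are not reported in the abstract).

```python
# ICH Q2-style linearity and detection/quantitation limits for an HPLC
# calibration curve; the concentration/area pairs below are illustrative.
import numpy as np
from scipy import stats

conc = np.array([5.0, 10.0, 25.0, 50.0, 100.0])    # standard conc., ug/mL
area = np.array([12.1, 24.6, 60.9, 122.5, 244.0])  # peak areas (invented)

fit = stats.linregress(conc, area)
residual_sd = np.std(area - (fit.slope * conc + fit.intercept), ddof=2)

print(f"R^2 = {fit.rvalue**2:.4f}")
print(f"LOD = {3.3 * residual_sd / fit.slope:.2f} ug/mL")  # ICH: 3.3*sigma/S
print(f"LOQ = {10 * residual_sd / fit.slope:.2f} ug/mL")   # ICH: 10*sigma/S
```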

Keywords: apples, HPLC, triterpenic compounds, validation

Procedia PDF Downloads 173
100 Impact of Traffic Restrictions due to Covid19, on Emissions from Freight Transport in Mexico City

Authors: Oscar Nieto-Garzón, Angélica Lozano

Abstract:

In urban areas, on-road freight transportation creates several social and environmental externalities. It is therefore crucial that freight transport considers not only economic aspects, like retailer distribution cost reduction and service improvement, but also environmental effects such as global CO2 and local emissions (e.g., particulate matter, NOx, CO) and noise. Inadequate infrastructure development, a high rate of urbanization, increasing motorization, and the lack of transportation planning are characteristics that urban areas in developing countries share. The Metropolitan Area of Mexico City (MAMC), the Metropolitan Area of São Paulo (MASP), and Bogota are three of the largest urban areas in Latin America where air pollution is often a problem associated with emissions from mobile sources. The effect of the lockdown due to COVID-19 was analyzed for these urban areas, comparing the same period (January to August) of the years 2016-2019 with 2020. A strong reduction in the concentration of primary criteria pollutants emitted by road traffic was observed at the beginning of 2020 and after the lockdown measures. The daily mean concentration of NOx decreased 40% in the MAMC, 34% in the MASP, and 62% in Bogota. Daily mean ozone levels increased after the lockdown measures in the three urban areas: 25% in the MAMC, 30% in the MASP, and 60% in Bogota. These changes in emission patterns from mobile sources drastically changed the ambient atmospheric concentrations of CO and NOx. The CO/NOx ratio in the morning hours is often used as an indicator of mobile source emissions. In 2020, traffic from cars and light vehicles was significantly reduced due to the first lockdown, but buses and trucks had no restrictions. In theory, this implies a decrease in CO and NOx from cars and light vehicles, while NOx levels from trucks were maintained (or lowered due to the congestion reduction). At rush hours, traffic was reduced between 50% and 75%, so trucks could reach higher speeds, which would reduce their emissions. By means of an emission model, it was found that an increase in the average speed (75%) would reduce the emissions (CO, NOx, and PM) from diesel trucks by up to 30%. It was expected that the value of the CO/NOx ratio would change due to the lockdown restrictions. However, although there was a significant reduction in traffic, CO/NOx kept its trend, decreasing to 8-9 in 2020. Hence, traffic restrictions had no impact on the CO/NOx ratio, although they did reduce vehicle emissions of CO and NOx. Therefore, these emissions may not adequately represent the change in vehicle emission patterns, or this ratio may not be a good indicator of emissions generated by vehicles. From the comparison of the theoretical data and those observed during the lockdown, it results that the real NOx reduction was lower than the theoretical reduction. The reasons could be that there are other sources of NOx emissions, so there would be an over-representation of NOx emissions generated by diesel vehicles, or there is an underestimation of CO emissions. Further analysis needs to consider this ratio to evaluate the emission inventories and then to extend these results to the determination of emission control policies for non-mobile sources.
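
The morning CO/NOx indicator discussed above could be computed from hourly monitoring data along these lines; the file name, column names, and the 06:00-09:00 window are assumptions about how such data might be organized.

```python
# Illustrative computation of the morning-hour CO/NOx ratio used as a
# mobile-source indicator; "air_quality_hourly.csv" and its columns are
# hypothetical stand-ins for a monitoring network's data export.
import pandas as pd

df = pd.read_csv("air_quality_hourly.csv", parse_dates=["datetime"])
morning = df[df["datetime"].dt.hour.between(6, 9)]

ratio = (morning.groupby(morning["datetime"].dt.year)
                .apply(lambda g: g["CO"].mean() / g["NOx"].mean()))
print(ratio)  # e.g. a drop toward 8-9 in 2020, as reported above
```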

Keywords: COVID-19, emissions, freight transport, Latin American metropolis

Procedia PDF Downloads 136
99 Implementation of Deep Neural Networks for Pavement Condition Index Prediction

Authors: M. Sirhan, S. Bekhor, A. Sidess

Abstract:

In-service pavements deteriorate with time due to traffic wheel loads, environment, and climate conditions. Pavement deterioration leads to a reduction in their serviceability and structural behavior. Consequently, proper maintenance and rehabilitation (M&R) are necessary actions to keep the in-service pavement network at the desired level of serviceability. Due to resource and financial constraints, the pavement management system (PMS) prioritizes the roads most in need of maintenance and rehabilitation action. It recommends a suitable action for each pavement based on the performance and surface condition of each road in the network. Pavement performance and condition are usually quantified and evaluated by different types of roughness-based and stress-based indices. Examples of such indices are the Pavement Serviceability Index (PSI), Pavement Serviceability Ratio (PSR), Mean Panel Rating (MPR), Pavement Condition Rating (PCR), Ride Number (RN), Profile Index (PI), International Roughness Index (IRI), and Pavement Condition Index (PCI). PCI is commonly used in PMS as an indicator of the extent of the distresses on the pavement surface. PCI values range between 0 and 100, where 0 and 100 represent a highly deteriorated pavement and a newly constructed pavement, respectively. The PCI value is a function of distress type, severity, and density (measured as a percentage of the total pavement area). PCI is usually calculated iteratively using the 'Paver' program developed by the US Army Corps of Engineers. The use of soft computing techniques, especially Artificial Neural Networks (ANN), has become increasingly popular in the modeling of engineering problems. ANN techniques have successfully modeled the performance of in-service pavements, due to their efficiency in predicting and solving non-linear relationships and dealing with large amounts of uncertain data. Typical regression models, which require a pre-defined relationship, can be replaced by ANN, which was found to be an appropriate tool for predicting the different pavement performance indices versus different factors as well. Accordingly, the objective of the presented study is to develop and train an ANN model that predicts PCI values. The model’s input consists of percentage areas of 11 different damage types: alligator cracking, swelling, rutting, block cracking, longitudinal/transverse cracking, edge cracking, shoving, raveling, potholes, patching, and lane drop-off, at three severity levels (low, medium, high) for each. The developed model was trained using 536,000 samples and tested on 134,000 samples. The samples were collected and prepared by the National Transport Infrastructure Company. The predicted results showed satisfactory agreement with field measurements. The proposed model predicted PCI values with relatively low standard deviations, suggesting that it could be incorporated into the PMS for PCI determination. It is worth mentioning that the most influential variables for PCI prediction are damages related to alligator cracking, swelling, rutting, and potholes.
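
A minimal sketch of an ANN of the kind described, mapping the 33 distress inputs (11 damage types × 3 severity levels, as percentage areas) to a PCI value; the scikit-learn architecture and the synthetic training data are assumptions, not the authors' configuration.

```python
# Toy ANN regressor from 33 distress-area inputs to PCI (0-100); the network
# shape and the random data are illustrative stand-ins for the real samples.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(0, 20, size=(5000, 33))          # % area per damage/severity
y = np.clip(100 - X.sum(axis=1) * 0.6, 0, 100)   # toy PCI target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500,
                     random_state=0).fit(X_tr, y_tr)
print(f"R^2 on held-out samples: {model.score(X_te, y_te):.3f}")
```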

Keywords: artificial neural networks, computer programming, pavement condition index, pavement management, performance prediction

Procedia PDF Downloads 137
98 Thermosensitive Hydrogel Development for Its Possible Application in Cardiac Cell Therapy

Authors: Lina Paola Orozco Marin, Yuliet Montoya Osorio, John Bustamante Osorno

Abstract:

Ischemic events can culminate in acute myocardial infarction, with irreversible cardiac lesions that cannot be restored due to the limited regenerative capacity of the heart. Cell therapy seeks to replace these injured or necrotic cells by transplanting healthy and functional cells. The therapeutic alternatives proposed by tissue engineering and cardiovascular regenerative medicine include the use of biomaterials to mimic the native extracellular medium, which is full of proteins, proteoglycans, and glycoproteins. The selected biomaterials must provide structural support to the encapsulated cells to avoid their migration and death in the host tissue. In this context, the present research work focused on developing a natural thermosensitive hydrogel, its physical and chemical characterization, and the determination of its biocompatibility in vitro. The hydrogel was developed by mixing hydrolyzed bovine or porcine collagen at 2% w/v, chitosan at 2.5% w/v, and beta-glycerolphosphate at 8.5% w/w or 10.5% w/w under magnetic stirring at 4°C. Once the hydrogels were obtained, thermosensitivity and gelation time were determined by incubating the samples at 37°C and evaluating them through the inverted tube method. The morphological characterization of the hydrogels was carried out through scanning electron microscopy. Chemical characterization was carried out employing infrared spectroscopy. Biocompatibility was determined using the MTT cytotoxicity test according to the ISO 10993-5 standard for the hydrogel’s precursors, using the fetal human ventricular cardiomyocyte cell line RL-14. The RL-14 cells were also seeded on top of the hydrogels, and the supernatants were subcultured at different periods for observation under a bright-field microscope. Four types of thermosensitive hydrogels were obtained, which differ in their composition and concentration, called A1 (chitosan/bovine collagen/beta-glycerolphosphate 8.5% w/w), A2 (chitosan/porcine collagen/beta-glycerolphosphate 8.5% w/w), B1 (chitosan/bovine collagen/beta-glycerolphosphate 10.5% w/w), and B2 (chitosan/porcine collagen/beta-glycerolphosphate 10.5% w/w). A1 and A2 had a gelation time of 40 minutes, and B1 and B2 had a gelation time of 30 minutes at 37°C. Electron micrographs revealed a three-dimensional internal structure with interconnected pores for the four types of hydrogels. This facilitates the exchange of nutrients and oxygen and the exit of metabolites, preserving a microenvironment suitable for cell proliferation. In the infrared spectra, it was possible to observe the interaction that occurs between the amides of the polymeric compounds and the phosphate groups of beta-glycerolphosphate. Finally, the biocompatibility tests indicated that cells in contact with the hydrogel or with each of its precursors were not affected in their proliferation capacity over a period of 16 days. These results show the potential of the hydrogel to increase the cell survival rate in the cardiac cell therapies under investigation. Moreover, the results lay the foundations for its characterization and biological evaluation in both in vitro and in vivo models.

Keywords: cardiac cell therapy, cardiac ischemia, natural polymers, thermosensitive hydrogel

Procedia PDF Downloads 191
97 Effect of Different Contaminants on Mineral Insulating Oil Characteristics

Authors: H. M. Wilhelm, P. O. Fernandes, L. P. Dill, C. Steffens, K. G. Moscon, S. M. Peres, V. Bender, T. Marchesan, J. B. Ferreira Neto

Abstract:

Deterioration of insulating oil is a natural process that occurs during transformer operation. However, this process can be accelerated by certain factors, such as oxygen, high temperatures, metals, and moisture, which rapidly reduce the oil's insulating capacity and favor transformer faults. Parts of the building materials of a transformer can degrade and yield soluble compounds and insoluble particles that shorten the equipment's life. Physicochemical tests, dissolved gas analysis (including propane, propylene, and butane), determination of volatile and furanic compounds, and quantitative and morphological analyses of particulate matter are proposed in this study in order to correlate the degradation of transformer building materials with insulating oil characteristics. The present investigation involves tests simulating medium-temperature overheating by means of an electric resistance wrapped with the following materials immersed in mineral insulating oil: test I) copper, tin, lead, and paper (heated at 350-400 °C for 8 h); test II) only copper (at 250 °C for 11 h); and test III) only paper (at 250 °C for 8 h and at 350 °C for 8 h). A different experiment simulated an electric arc involving copper, using an electric welding machine at two distinct energy settings (low and high). The analysis results showed that dielectric loss was highest in the sample from test I; a higher neutralization index and higher values of hydrogen and hydrocarbons, including propane and butane, were also observed. Test III oil presented a higher particle count; in addition, ferrographic analysis revealed contamination with fibers and carbonized paper. However, these particles had little influence on the oil's physicochemical parameters (dielectric loss and neutralization index) and on gas production, which was very low. Test II oil showed high levels of methane, ethane, and propylene, indicating the effect of metal on oil degradation. CO2 and CO were formed in the highest concentrations in test III, as expected. Regarding volatile compounds, acetone, benzene, and toluene were detected in test I; these are oil oxidation products. In test III, methanol was identified due to cellulose degradation, as expected. The electric arc simulation test showed the highest oil oxidation in the presence of copper and at high temperature, since these samples had very high concentrations of hydrogen, ethylene, and acetylene. The particle count was also very high, showing the greatest release of copper under such conditions. When comparing high and low energy, the former presented more hydrogen, ethylene, and acetylene. This sample had results more similar to test I, pointing out that the generation of different particles can be the cause of faults such as electric arcing. Ferrography showed more evident copper and exfoliation particles than in the other samples. Therefore, in this study, by using different combined analytical techniques, it was possible to correlate insulating oil characteristics with possible contaminants, which can lead to transformer failures.
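
Although the study reports raw gas trends rather than ratio diagnostics, dissolved-gas results like these are commonly screened with IEC 60599-style gas ratios; the sketch below computes three classic ratios from invented ppm values, with the interpretation thresholds deliberately omitted.

```python
# Hedged sketch of dissolved-gas-analysis (DGA) screening ratios; the gas
# concentrations (ppm) are invented, and no fault thresholds are asserted.
def dga_ratios(gases):
    """Compute three classic IEC-style DGA ratios from ppm concentrations."""
    return {
        "C2H2/C2H4": gases["C2H2"] / gases["C2H4"],
        "CH4/H2": gases["CH4"] / gases["H2"],
        "C2H4/C2H6": gases["C2H4"] / gases["C2H6"],
    }

sample = {"H2": 800, "CH4": 120, "C2H6": 40, "C2H4": 300, "C2H2": 450}
print(dga_ratios(sample))  # high C2H2/C2H4 with high H2 is typical of arcing
```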

Keywords: ferrography, gas analysis, insulating mineral oil, particle contamination, transformer failures

Procedia PDF Downloads 225
96 Storage of Organic Carbon in Chemical Fractions in Acid Soil as Influenced by Different Liming

Authors: Ieva Jokubauskaite, Alvyra Slepetiene, Danute Karcauskiene, Inga Liaudanskiene, Kristina Amaleviciute

Abstract:

Soil organic carbon (SOC) is the key indicator of soil quality and ecological stability; therefore, carbon accumulation in stable forms not only supports and increases the organic matter content in the soil but also has a positive effect on the quality of the soil and the whole ecosystem. Soil liming is one of the most common ways to improve carbon sequestration in the soil. Determination of the optimum intensity and combinations of liming in order to ensure optimal quantitative and qualitative carbon parameters is one of the most important tasks of this work. The field experiments were carried out at the Vezaiciai Branch of the Lithuanian Research Centre for Agriculture and Forestry (LRCAF) during the 2011-2013 period. The effect of liming at different intensities (at a 0.5 rate every 7 years and a 2.0 rate every 3-4 years) was investigated in the topsoil of an acid moraine loam Bathygleyic Dystric Glossic Retisol. Chemical analyses were carried out at the Chemical Research Laboratory of the Institute of Agriculture, LRCAF. Soil samples for chemical analyses were taken from the topsoil after harvesting. SOC was determined by the Tyurin method modified by Nikitin, measured with a Cary 50 spectrometer (VARIAN) at 590 nm wavelength using glucose standards. SOC fractional composition was determined by the Ponomareva and Plotnikova version of the classical Tyurin method. Dissolved organic carbon (DOC) was analyzed using a SKALAR ion chromatograph in water extract at a soil-water ratio of 1:5. Spectral properties (E4/E6 ratio) of humic acids were determined by measuring the absorbance of humic and fulvic acid solutions at 465 and 665 nm. Our study showed a statistically significant negative effect of periodical liming (at the 0.5 and 2.0 liming rates) on the SOC content in the soil. The SOC content was 1.45% in the unlimed treatment, while in the treatment periodically limed at the 2.0 rate every 3-4 years it was approximately 0.18 percentage points lower. It was revealed that liming significantly decreased the DOC concentration in the soil. The lowest DOC concentration (0.156 g kg-1) was established in the most intensively limed treatment (2.0 liming rate every 3-4 years). Soil liming increased the content of all humic acid fractions and of the fulvic acid fraction bound with calcium in the topsoil. Soil liming resulted in the accumulation of valuable humic acids. Due to the applied liming, the HR/FR ratio, indicating the quality of humus, increased to 1.08 compared with that in unlimed soil (0.81). Intensive soil liming promoted the formation of humic acids in which groups of carboxylic and phenolic compounds predominated. These humic acids are characterized by a higher degree of condensation of aromatic compounds and in this way determine the intensive organic matter humification processes in the soil. The results of this research provide clear information on the characteristics of SOC change, which could be very useful for guiding climate policy and sustainable soil management.
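
The E4/E6 index described above is simply the ratio of absorbances at 465 nm and 665 nm; a trivial sketch follows, with illustrative absorbance values.

```python
# Direct sketch of the E4/E6 humic-acid index from the text: absorbance at
# 465 nm divided by absorbance at 665 nm. Input values are illustrative.
def e4_e6_ratio(abs_465: float, abs_665: float) -> float:
    """Lower E4/E6 generally indicates more condensed aromatic structures."""
    return abs_465 / abs_665

print(e4_e6_ratio(0.42, 0.10))  # 4.2
```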

Keywords: acid soil, carbon sequestration, long–term liming, soil organic carbon

Procedia PDF Downloads 229
95 Gauging Floral Resources for Pollinators Using High Resolution Drone Imagery

Authors: Nicholas Anderson, Steven Petersen, Tom Bates, Val Anderson

Abstract:

Under the multiple-use management regime established in the United States for federally owned lands, government agencies have come under pressure from commercial apiaries to grant permits for the summer pasturing of honeybees on government lands. Federal agencies have struggled to integrate honeybees into their management plans and have little information with which to make regulations that resolve how many colonies should be allowed in a single location and at what distance sets of hives should be placed. Many conservation groups have voiced their concerns regarding the introduction of honeybees to these natural lands, as they may outcompete and displace native pollinating species. Assessing the quality of an area in regard to its floral resources, pollen, and nectar can be important when attempting to create regulations for the integration of commercial honeybee operations into a native ecosystem. Areas with greater floral resources may be able to support larger numbers of honeybee colonies, while poorer resource areas may be less resilient to introduced disturbances. This study attempts to determine flower cover using high-resolution drone imagery to help assess the floral resource availability to pollinators in high-elevation, tall forb communities. This knowledge will help in determining the potential that different areas may have for honeybee pasturing and honey production. Roughly 700 images were captured at 23 m above ground level using a drone equipped with a Sony QX1 RGB 20-megapixel camera. These images were stitched together using Pix4D, resulting in a 60 m diameter high-resolution mosaic of a tall forb meadow. Using the program ENVI, a supervised maximum likelihood classification was conducted to calculate the percentage of total flower cover and flower cover by color (blue, white, and yellow). A complete vegetation inventory was taken on site, and the major flowers contributing to each color class were noted. An accuracy assessment was performed on the classification, yielding an 89% overall accuracy and a Kappa statistic of 0.855. With this level of accuracy, drones provide an affordable and time-efficient method for the assessment of floral cover in large areas. The next step of this project will be to determine the average pollen and nectar loads carried by each flower species. The addition of this knowledge will result in a quantifiable method of measuring the pollen and nectar resources of entire landscapes. This information will not only help land managers determine stocking rates for honeybees on public lands but also has applications in the agricultural setting, aiding producers in determining the number of honeybee colonies necessary for proper pollination of fruit and nut crops.
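
The accuracy-assessment step can be reproduced with standard metrics; the sketch below computes overall accuracy and Cohen's kappa (the statistics reported above as 89% and 0.855) on invented stand-in labels rather than ENVI's actual reference pixels.

```python
# Overall accuracy and Cohen's kappa for a pixel classification; the label
# arrays here are invented stand-ins for the study's reference pixels.
from sklearn.metrics import accuracy_score, cohen_kappa_score

reference = ["flower_white", "flower_yellow", "background", "flower_blue",
             "background", "flower_white"]
predicted = ["flower_white", "flower_yellow", "background", "flower_blue",
             "flower_white", "flower_white"]

print(f"Overall accuracy: {accuracy_score(reference, predicted):.2f}")
print(f"Kappa statistic:  {cohen_kappa_score(reference, predicted):.2f}")
```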

Keywords: honeybee, flower, pollinator, remote sensing

Procedia PDF Downloads 141
94 Navigating AI in Higher Education: Exploring Graduate Students’ Perspectives on Teacher-Provided AI Guidelines

Authors: Mamunur Rashid, Jialin Yan

Abstract:

Recent years have witnessed a rapid evolution and integration of artificial intelligence (AI) in various fields, prominently influencing the education industry. Acknowledging this transformative wave, AI tools like ChatGPT and Grammarly have undeniably introduced new perspectives and skills, enriching the educational experiences of higher education students. The prevalence of AI use in higher education has also drawn the attention of an increasing number of researchers across various dimensions. University departments, offices, and professors have likewise designed and released policies and guidelines on using AI effectively. Accordingly, this study explores and analyzes graduate students' perspectives regarding AI guidelines set by teachers. A mixed-methods design is employed, using in-depth interviews and focus groups to investigate and collect students' perspectives. Relevant materials, such as syllabi and course instructions, are also analyzed through documentary analysis to facilitate understanding of the study. Surveys are also used for data collection and students' background statistics. The integration of both interviews and surveys provides a comprehensive array of student perspectives across various academic disciplines. The study is anchored in the theoretical framework of self-determination theory (SDT), which emphasizes and explains students' perspectives under AI guidelines through three core needs: autonomy, competence, and relatedness. This framework is instrumental in understanding how AI guidelines influence students' intrinsic motivation and sense of empowerment in their learning environments. Through qualitative analysis, the study reveals a sense of confusion and uncertainty among students regarding the appropriate application and ethical considerations of AI tools, indicating potential challenges in meeting their needs for competence and autonomy. The quantitative data further elucidate these findings, highlighting a significant communication gap between students and educators in the formulation and implementation of AI guidelines. The critical findings of this study come mainly from two aspects: first, the majority of graduate students are uncertain and confused about the relevant AI guidelines given by teachers. Second, this study also demonstrates that the design and effectiveness of course materials, such as syllabi and instructions, need to adapt to AI policies. It indicates that some of the existing guidelines provided by teachers lack consideration of students' perspectives, leading to a misalignment with students' needs for autonomy, competence, and relatedness. More emphasis and effort need to be dedicated to both teacher and student training on AI policies and ethical considerations. To conclude, this study explores and reflects upon graduate students' perspectives on teacher-provided AI guidelines, calling for additional training and strategies to improve how these guidelines can be disseminated for effective integration and adoption. Although AI guidelines provided by teachers may be helpful and provide new insights for students, educational institutions should take a stronger anchoring role to foster a motivating, empowering, and student-centered learning environment. The study also provides relevant recommendations, including guidance for students on the ethical use of AI and AI policy training for teachers in higher education.

Keywords: higher education policy, graduate students’ perspectives, higher education teacher, AI guidelines, AI in education

Procedia PDF Downloads 74
93 Detection and Identification of Antibiotic Resistant UPEC Using FTIR-Microscopy and Advanced Multivariate Analysis

Authors: Uraib Sharaha, Ahmad Salman, Eladio Rodriguez-Diaz, Elad Shufan, Klaris Riesenberg, Irving J. Bigio, Mahmoud Huleihel

Abstract:

Antimicrobial drugs have played an indispensable role in controlling illness and death associated with infectious diseases in animals and humans. However, the increasing resistance of bacteria to a broad spectrum of commonly used antibiotics has become a global healthcare problem. Many antibiotics have lost their effectiveness since the beginning of the antibiotic era because many bacteria have developed defenses against them. Rapid determination of the antimicrobial susceptibility of a clinical isolate is often crucial for the optimal antimicrobial therapy of infected patients and in many cases can save lives. The conventional methods for susceptibility testing require the isolation of the pathogen from a clinical specimen by culturing on the appropriate media (this first culturing stage lasts 24 h). Then, chosen colonies are grown on media containing antibiotic(s), using micro-diffusion discs (the second culturing stage also takes 24 h), in order to determine bacterial susceptibility. Other approaches, such as genotyping methods, the E-test, and automated methods, have also been developed for testing antimicrobial susceptibility. Most of these methods are expensive and time-consuming. Fourier transform infrared (FTIR) microscopy is a rapid, safe, effective, and low-cost method that has been widely and successfully used in different studies for the identification of various biological samples, including bacteria; nonetheless, its true potential in routine clinical diagnosis has not yet been established. Modern infrared (IR) spectrometers with high spectral resolution enable the measurement of unprecedented biochemical information from cells at the molecular level. Moreover, the development of new bioinformatics analyses combined with IR spectroscopy makes it a powerful technique that enables the detection of structural changes associated with resistance. The main goal of this study is to evaluate the potential of FTIR microscopy in tandem with machine learning algorithms for rapid and reliable identification of bacterial susceptibility to antibiotics within a time span of a few minutes. The UTI E. coli bacterial samples, which were identified at the species level by MALDI-TOF and examined for their susceptibility by the routine assay (micro-diffusion discs), were obtained from the bacteriology laboratories of Soroka University Medical Center (SUMC). These samples were examined by FTIR microscopy and analyzed by advanced statistical methods. Our results, based on 700 E. coli samples, were promising and showed that by using the infrared spectroscopic technique together with multivariate analysis, it is possible to classify the tested bacteria into sensitive and resistant with a success rate higher than 90% for eight different antibiotics. Based on these preliminary results, it is worthwhile to continue developing the FTIR microscopy technique as a rapid and reliable method for identifying antibiotic susceptibility.
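
One plausible chemometrics pipeline for such FTIR spectra is dimensionality reduction followed by a discriminant classifier; the abstract does not specify the multivariate method, so the sketch below, with random stand-in spectra, is an assumption about the general approach rather than the authors' analysis.

```python
# Hedged sketch of a PCA + LDA pipeline for classifying FTIR spectra into
# sensitive vs resistant; the random "spectra" are placeholders only.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
spectra = rng.normal(size=(700, 900))   # 700 isolates x 900 wavenumber bins
labels = rng.integers(0, 2, size=700)   # 0 = sensitive, 1 = resistant

pipe = make_pipeline(PCA(n_components=20), LinearDiscriminantAnalysis())
scores = cross_val_score(pipe, spectra, labels, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f}")
```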

Keywords: antibiotics, E. coli, FTIR, multivariate analysis, susceptibility, UTI

Procedia PDF Downloads 171
92 In-Flight Aircraft Performance Model Enhancement Using Adaptive Lookup Tables

Authors: Georges Ghazi, Magali Gelhaye, Ruxandra Botez

Abstract:

Over the years, the Flight Management System (FMS) has experienced continuous improvement of its many features, to the point of becoming the pilot’s primary interface for flight planning operations on the airplane. With the assistance of the FMS, the concepts of distance and time have been completely revolutionized, providing the crew members with the optimized route (or flight plan) from the departure airport to the arrival airport. To accomplish this function, the FMS needs an accurate Aircraft Performance Model (APM) of the aircraft. In general, the APMs that equip most modern FMSs are established before the entry into service of an individual aircraft and result from the combination of a set of ordinary differential equations and a set of performance databases. Unfortunately, an aircraft in service is constantly exposed to dynamic loads that degrade its flight characteristics. These degradations have two main origins: airframe deterioration (control surface rigging, seals missing or damaged, etc.) and engine performance degradation (fuel consumption increase for a given thrust). Thus, after several years of service, the performance databases and the APM associated with a specific aircraft are no longer representative enough of the actual aircraft performance. It is important to monitor the trend of the performance deterioration and correct the uncertainties of the aircraft model in order to improve the accuracy of the flight management system predictions. The basis of this research lies in the new ability to continuously update an Aircraft Performance Model (APM) during flight using an adaptive lookup table technique. This methodology was developed and applied to the well-known Cessna Citation X business aircraft. For the purpose of this study, a level D Research Aircraft Flight Simulator (RAFS) was used as a test aircraft. According to the Federal Aviation Administration, level D is the highest certification level for flight dynamics modeling. Basically, using data available in the Flight Crew Operating Manual (FCOM), a first APM describing the variation of the engine fan speed and aircraft fuel flow with respect to flight conditions was derived. This model was then improved using the proposed methodology. To do so, several cruise flights were performed using the RAFS. An algorithm was developed to frequently sample the aircraft sensor measurements during the flight and compare the model predictions with the actual measurements. Based on these comparisons, a correction was applied to the current APM in order to minimize the error between the predicted data and the measured data. In this way, as the aircraft flies, the APM is continuously enhanced, making the FMS more and more precise and the prediction of trajectories more realistic and more reliable. The results obtained are very encouraging. Indeed, using the tables initialized with the FCOM data, only a few iterations were needed to reduce the fuel flow prediction error from an average relative error of 12% to 0.3%. Similarly, the FCOM prediction error for the engine fan speed was reduced from a maximum deviation of 5.0% to 0.2% after only ten flights.
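
A minimal sketch of the adaptive lookup-table idea: a fuel-flow table indexed by altitude and Mach is nudged toward in-flight measurements. The grid values, learning rate, and nearest-cell update rule below are assumptions, not the study's implementation.

```python
# Adaptive lookup-table sketch: each new cruise measurement pulls the
# nearest table cell a fraction lr toward it; all numbers are illustrative.
import numpy as np

altitudes = np.array([30000.0, 35000.0, 40000.0])    # ft
machs = np.array([0.70, 0.75, 0.80])
fuel_flow = np.array([[1900.0, 1850.0, 1820.0],      # kg/h, FCOM-style seed
                      [1750.0, 1700.0, 1680.0],
                      [1650.0, 1600.0, 1590.0]])

def update_table(table, alt, mach, measured, lr=0.2):
    """Move the nearest table cell a fraction lr toward the measurement."""
    i = np.abs(altitudes - alt).argmin()
    j = np.abs(machs - mach).argmin()
    table[i, j] += lr * (measured - table[i, j])

for measured in (1725.0, 1730.0, 1728.0):            # repeated cruise samples
    update_table(fuel_flow, 35000.0, 0.75, measured)
print(f"Corrected cell: {fuel_flow[1, 1]:.1f} kg/h")  # converges toward data
```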

Keywords: aircraft performance, cruise, trajectory optimization, adaptive lookup tables, Cessna Citation X

Procedia PDF Downloads 264
91 Determination Optimum Strike Price of FX Option Call Spread with USD/IDR Volatility and Garman–Kohlhagen Model Analysis

Authors: Bangkit Adhi Nugraha, Bambang Suripto

Abstract:

In September 2016, Bank Indonesia (BI) released regulation No.18/18/PBI/2016, which permits bank clients to use the FX option call spread on USD/IDR. Basically, this product is a combination in which the client buys an FX call option (paying a premium) and sells an FX call option (receiving a premium) to protect against currency depreciation while capping the potential upside at a cheap premium cost. BI classifies this product as a structured product. A structured product is a combination of at least two financial instruments, either derivative or non-derivative. The call spread is the first structured product against IDR permitted by BI since 2009, in response to increasing demand from Indonesian firms for FX hedging through derivatives to protect their foreign currency assets or liabilities against market risk. The share of hedging products in the Indonesian FX market increased from 35% in 2015 to 40% in 2016, the majority in swap products (FX forward, FX swap, cross-currency swap). Swap pricing is driven by the interest rate differential of the two currencies in the pair. The cost of the swap product is 7% for USD/IDR, with a one-year USD/IDR volatility of 13%. That cost level makes swap products seem expensive for hedging buyers. Because the call spread cost (around 1.5-3%) is cheaper than the swap, most Indonesian firms use offshore NDF FX call spreads on USD/IDR, with an outstanding amount of around 10 billion USD. The cheaper cost of the call spread is its main advantage for hedging buyers. The problem arises because the BI regulation requires the call spread buyer to perform dynamic hedging. That means that if the call spread buyer chooses strike price 1 and strike price 2 and the USD/IDR exchange rate surpasses strike price 2, then the buyer must buy another call spread with strike price 1’ (strike price 1’ = strike price 2) and strike price 2’ (strike price 2’ > strike price 1’). This could double the premium cost of the call spread or more and defeat the hedging buyer’s purpose of finding the cheapest hedging cost. It is therefore crucial for the buyer to choose the optimum strike prices before entering into the transaction. To help the hedging buyer find the optimum strike price and avoid expensive multiple premium costs, we examined ten years (2005-2015) of historical USD/IDR volatility data and compared them with the price movement of the USD/IDR call spread using the Garman–Kohlhagen model (a common formula for FX option pricing). We used statistical tools to analyze data correlations, understand the nature of call spread price movements over the ten years, and determine the factors affecting price movements. We selected a range of strike prices and tenors and calculated the probability of dynamic hedging occurring and its cost. We found that the USD/IDR currency pair is too uncertain, making dynamic hedging riskier and more expensive. We validated this result using one year of data, which showed a small RMS error. The study results can be used to understand the nature of the FX call spread and to determine the optimum strike price for a hedging plan.
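
The Garman–Kohlhagen call price, and the call spread premium as a difference of two calls, can be written directly from the standard formula; the spot, rates, volatility, and strikes below are illustrative numbers, not the study's inputs.

```python
# Garman-Kohlhagen price of a European FX call and a call spread built from
# two such calls; all market inputs below are illustrative only.
from math import log, sqrt, exp
from statistics import NormalDist

N = NormalDist().cdf

def gk_call(S, K, T, r_dom, r_for, sigma):
    """Garman-Kohlhagen FX call value in domestic-currency units."""
    d1 = (log(S / K) + (r_dom - r_for + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * exp(-r_for * T) * N(d1) - K * exp(-r_dom * T) * N(d2)

S, T, r_idr, r_usd, vol = 13300.0, 1.0, 0.07, 0.01, 0.13   # USD/IDR example
k1, k2 = 13500.0, 14500.0                                   # buy k1, sell k2
premium = gk_call(S, k1, T, r_idr, r_usd, vol) - gk_call(S, k2, T, r_idr, r_usd, vol)
print(f"Call spread cost: {premium:.0f} IDR per USD ({100 * premium / S:.2f}% of spot)")
```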

Keywords: FX call spread USD/IDR, USD/IDR volatility statistical analysis, Garman–Kohlhagen Model on FX Option USD/IDR, Bank Indonesia Regulation no.18/18/PBI/2016

Procedia PDF Downloads 379
90 Case-Based Reasoning for Modelling Random Variables in the Reliability Assessment of Existing Structures

Authors: Francesca Marsili

Abstract:

The reliability assessment of existing structures with probabilistic methods is becoming an increasingly important and frequent engineering task. However, probabilistic reliability methods are based on an exhaustive knowledge of the stochastic modeling of the variables involved in the assessment; at the moment, standards for the modeling of variables are absent, representing an obstacle to the dissemination of probabilistic methods. The framework according to which probability distribution functions (PDFs) are established is Bayesian statistics, which uses Bayes' theorem: a prior PDF for the considered parameter is established based on information derived from the design stage and qualitative judgments based on the engineer's past experience; then, the prior model is updated with the results of investigations carried out on the considered structure, such as material testing and the determination of action and structural properties. The application of Bayesian statistics raises two different kinds of problems: 1. the results of the updating depend on the engineer's previous experience; 2. the updating of the prior PDF can be performed only if the structure has been tested and quantitative data that can be statistically manipulated have been collected; performing tests is always an expensive and time-consuming operation; furthermore, if the considered structure is an ancient building, destructive tests could compromise its cultural value and should therefore be avoided. In order to solve those problems, an interesting research path is to investigate Artificial Intelligence (AI) techniques that can be useful for automating the modeling of variables and for updating material parameters without performing destructive tests. Among these, one that raises particular attention in relation to the object of this study is Case-Based Reasoning (CBR). In this application, cases will be represented by existing buildings where material tests have already been carried out and updated PDFs for the material mechanical parameters have been computed through a Bayesian analysis. Each case will then be composed of a qualitative description of the material under assessment and the posterior PDFs that describe its material properties. The problem to be solved is the definition of PDFs for the material parameters involved in the reliability assessment of the considered structure. A CBR system represents a good candidate for automating the modeling of variables because: 1. engineers already draw an estimation of the material properties based on the experience collected during the assessment of similar structures, or based on similar cases collected in the literature or in databases; 2. material tests carried out on structures can easily be collected from laboratory databases or from the literature; 3. the system will provide the user with a reliable probabilistic description of the variables involved in the assessment, which will also serve as a tool in support of the engineer's qualitative judgments. Automated modeling of variables can help spread the probabilistic reliability assessment of existing buildings in common engineering practice and target the best interventions and further tests on the structure; CBR represents a technique which may help to achieve this.
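
The Bayesian updating step that each retrieved case would encode can be sketched with the conjugate normal model (a known measurement variance is assumed for conjugacy); all numbers below are illustrative, not data from any real assessment.

```python
# Conjugate normal-normal Bayesian update of a material parameter: a prior
# from the design stage is combined with test results. Values are invented.
import numpy as np

mu0, sd0 = 30.0, 5.0          # prior mean / sd of strength (MPa)
sigma = 3.0                   # assumed known test scatter (MPa)
tests = np.array([26.5, 28.0, 27.2])

n = len(tests)
post_var = 1.0 / (1.0 / sd0**2 + n / sigma**2)
post_mean = post_var * (mu0 / sd0**2 + tests.sum() / sigma**2)
print(f"Posterior: mean {post_mean:.1f} MPa, sd {np.sqrt(post_var):.2f} MPa")
```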

Keywords: reliability assessment of existing buildings, Bayesian analysis, case-based reasoning, historical structures

Procedia PDF Downloads 337
89 Production and Characterization of Biochars from Torrefaction of Biomass

Authors: Serdar Yaman, Hanzade Haykiri-Acma

Abstract:

Biomass is a CO₂-neutral fuel that is renewable and sustainable and has very large global potential. Efficient use of biomass in power generation and in the production of biomass-based biofuels can mitigate greenhouse gases (GHG) and reduce dependency on fossil fuels. There are also other beneficial effects of biomass energy use, such as employment creation and pollutant reduction. However, most biomass materials are not capable of competing with fossil fuels in terms of energy content. The high moisture content and high volatile matter yield of biomass make it a low-calorific fuel, which is a very significant concern compared with fossil fuels. Besides, the density of biomass is generally low, which brings difficulty in transportation and storage. These negative aspects of biomass can be overcome by thermal pretreatments that upgrade the fuel properties of biomass. Torrefaction is such a thermal process, in which biomass is heated up to 300ºC under non-oxidizing conditions to avoid burning of the material. The treated biomass is called biochar and has considerably lower contents of moisture, volatile matter, and oxygen compared to the parent biomass. Accordingly, the carbon content and the calorific value of biochar increase to a level comparable with that of coal. Moreover, the hydrophilic nature of untreated biomass, which leads to decay in the structure, is mostly eliminated, and the surface properties of biochar become hydrophobic upon torrefaction. In order to investigate the effectiveness of the torrefaction process on biomass properties, several biomass species such as olive milling residue (OMR), Rhododendron (a small shrubby tree with bell-shaped flowers), and ash tree (a timber tree) were chosen. The fuel properties of these biomasses were analyzed through proximate and ultimate analyses as well as higher heating value (HHV) determination. For this, samples were first chopped and ground to a particle size lower than 250 µm. Then, the samples were subjected to torrefaction in a horizontal tube furnace by heating from ambient temperature up to 200, 250, and 300ºC at a heating rate of 10ºC/min. The biochars obtained from this process were tested by the same methods applied to the parent biomass species, and the improvements in fuel properties were interpreted. Increasing torrefaction temperature led to regular increases in the HHV of OMR, and the highest HHV (6065 kcal/kg) was obtained at 300ºC. In contrast, torrefaction at 250ºC was found to be optimal for Rhododendron and ash tree, since torrefaction at 300ºC had a detrimental effect on HHV. In parallel, increases in carbon content and reductions in oxygen content were determined. The burning characteristics of the biochars were also studied using thermal analysis techniques. For this purpose, a TA Instruments SDT Q600 thermal analyzer was used, and the thermogravimetric analysis (TGA), derivative thermogravimetry (DTG), differential scanning calorimetry (DSC), and differential thermal analysis (DTA) curves were compared and interpreted. It was concluded that torrefaction is an efficient method to upgrade the fuel properties of biomass and that the resulting biochars have superior characteristics compared to the parent biomasses.

Keywords: biochar, biomass, fuel upgrade, torrefaction

Procedia PDF Downloads 373
88 The Efficiency of Mechanization in Weed Control in Artificial Regeneration of Oriental Beech (Fagus orientalis Lipsky.)

Authors: Tuğrul Varol, Halil Barış Özel

Abstract:

In this study, conducted in the Akçasu Forest Range District of the Devrek Forest Directorate, three methods (cover removal with human power, cover removal with a Hitachi F20 excavator, and cover removal with agricultural equipment mounted on a Ferguson 240S agricultural tractor) utilized in weed control efforts in the regeneration of degraded oriental beech forests were compared. In this respect, the three methods were compared by determining work hours and standard durations per unit area (1 hectare). For this purpose, by evaluating the tasks performed with human and machine power in terms of duration, productivity, and cost, it was aimed to determine the most productive method under the actual ecological conditions of the research area. Within the scope of the study, time studies were conducted for the three methods used in weed control efforts. While carrying out those studies, the implementations were evaluated by dividing them into work stages. The actual data were used when calculating the cost accounts; in those calculations, the latest formulas and equations that are also used in developed countries were utilized. Analysis of variance (ANOVA) was used in order to determine whether there is any statistically significant difference among the obtained results, and the Duncan test was used for grouping where a significant difference existed. According to the measurements and findings of this study, it was found that during living cover removal in the regeneration of degraded oriental beech forests, removing the weed layer on 1 hectare of land took 920 hours with human power, 15.1 hours with the excavator, and 60 hours with the equipment mounted on a tractor. On the other hand, the cost of removal of living cover per unit area (1 hectare) was determined to be 3220.00 TL for human power, 788.70 TL for the excavator, and 2227.20 TL for the equipment mounted on a tractor. According to the obtained results, the utilization of the excavator in weed control in the regeneration of degraded oriental beech regions under the actual ecological conditions of the research area was found to be more productive in terms of both duration and cost. These determinations should be repeated in weed control efforts in degraded forest areas with different ecological conditions, as this is essential for finding the most efficient weed control method. The findings will light the way for the technical staff of forestry directorates in determining the most effective and economical weed control method. Thus, more realistic data will be used when preparing weed control budgets, making significant contributions to the national economy. The results of this and similar studies are also very important for developing short- and long-term policies for our forestry.
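
The ANOVA comparison described above could be run as follows; the per-plot replicate values are invented around the reported per-hectare figures (920 h manual, 15.1 h excavator, 60 h tractor equipment) and are not the study's measurements.

```python
# One-way ANOVA across the three weed-control methods; replicate durations
# are illustrative values centered on the reported per-hectare figures.
from scipy import stats

manual = [905, 930, 925]          # h/ha, invented replicate plots
excavator = [14.8, 15.3, 15.2]
tractor = [58, 61, 61]

f, p = stats.f_oneway(manual, excavator, tractor)
print(f"F = {f:.1f}, p = {p:.4f}")  # p < 0.05 would motivate Duncan grouping
```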

Keywords: artificial regeneration, weed control, oriental beech, productivity, mechanization, manpower, cost analysis

Procedia PDF Downloads 418
87 Nutritional Genomics Profile Based Personalized Sport Nutrition

Authors: Eszter Repasi, Akos Koller

Abstract:

Our genetic information determines our appearance, physiology, sports performance, and all our other features. Efforts to maximize the performance of athletes have adopted a science-based approach to nutritional support. Nowadays, genetic studies have blended with nutritional sciences, and a dynamically evolving new research field has appeared. Nutritional genomics needs to be used by nutrition experts. This recent field of nutritional science can provide a solution for reaching the best sports performance by using correlations between the athlete’s genome, nutrients, and molecules, including the human microbiome (links between food, microbiome, and epigenetics), nutrigenomics, and nutrigenetics. Nutritional genomics has tremendous potential to change the future of dietary guidelines and personal recommendations. Experts need to use new technology to get information about athletes, such as a nutritional genomics profile (including determination of the oral and gut microbiome and DNA-coded reactions to food components), which can modify the preparation period and sports performance. The influence of nutrients on gene expression is called nutrigenomics. The heterogeneous response of gene variants to nutrients and dietary components is called nutrigenetics. The human microbiome plays a critical role in the state of health and well-being, and there are further links between food or nutrition and the composition of the human microbiome, which can lead to diseases and epigenetic changes as well. A nutritional genomics-based profile of athletes can be the best technique for a dietitian to create a unique sports nutrition diet plan. Using functional foods and the right food components can affect health status and thus sports performance. Scientists need to determine the best response, given the effect of nutrients on health, whereby altering the genome promotes metabolites and results in changes in physiology. Nutritional biochemistry explains why polymorphisms in genes for the absorption, circulation, or metabolism of essential nutrients (such as n-3 polyunsaturated fatty acids or epigallocatechin-3-gallate) would affect the efficacy of that nutrient. Nutritional deficiencies and failures that are controlled, deteriorations in health status that are prevented, or a newly discovered food intolerance, when observed by a proper medical team, can support better sports performance. It is important that the dietetics profession be informed about gene-diet interactions, which may lead to optimal health and a reduced risk of injury or disease. A dedicated medical application for the documentation and monitoring of health status data and risk factors can support and alert the medical team for early action and help deliver proper health services in time. This model can provide personalized nutrition advice from status assessment, through recovery, to monitoring. However, more studies are needed to understand the mechanisms and to be able to change the composition of the microbiome and the environmental and genetic risk factors in the case of athletes.

Keywords: gene-diet interaction, multidisciplinary team, microbiome, diet plan

Procedia PDF Downloads 172
86 Near-Miss Deep Learning Approach for Neuro-Fuzzy Risk Assessment in Pipelines

Authors: Alexander Guzman Urbina, Atsushi Aoyama

Abstract:

The sustainability of traditional technologies employed in energy and chemical infrastructure poses a big challenge for our society. In making decisions related to the safety of industrial infrastructure, accidental risk values are becoming relevant points of discussion. The challenge, however, is the reliability of the models employed to obtain the risk data: such models usually involve a large number of variables and large amounts of uncertainty. The most efficient techniques to overcome these problems are built using Artificial Intelligence (AI), and more specifically hybrid systems such as neuro-fuzzy algorithms. Therefore, this paper introduces a hybrid algorithm for risk assessment trained on near-miss accident data. As mentioned above, the sustainability of traditional technologies related to energy and chemical infrastructure constitutes one of the major challenges that today's societies and firms are facing. Besides that, the adaptation of those technologies to the effects of climate change in sensitive environments represents a critical concern for safety and risk management. In this regard, it can be argued that the social consequences of catastrophic risks are increasing rapidly, due mainly to the concentration of people and energy infrastructure in hazard-prone areas, aggravated by a lack of knowledge about the risks. In addition to these social consequences, and considering the industrial sector as critical infrastructure because of its large impact on the economy in case of failure, industrial safety has become a critical issue for today's society. Regarding this safety concern, pipeline operators and regulators have been performing risk assessments in attempts to evaluate accurately the probabilities of failure of the infrastructure and the consequences associated with those failures. However, estimating accidental risks in critical infrastructure involves substantial effort and cost due to the number of variables involved, the complexity, and the lack of information. This paper therefore introduces a well-trained algorithm for risk assessment using deep learning, capable of dealing efficiently with this complexity and uncertainty. The advantage of deep learning on near-miss accident data is that it can be employed in risk assessment as an efficient engineering tool to treat the uncertainty of risk values in complex environments. The basic idea of a near-miss deep learning approach for neuro-fuzzy risk assessment in pipelines is to improve the validity of risk values by learning from near-miss accidents and imitating how human experts score risks and set tolerance levels. In summary, the deep learning method for neuro-fuzzy risk assessment involves a regression analysis called the group method of data handling (GMDH), which consists in determining the optimal configuration of the risk assessment model and its parameters using polynomial theory.
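
To make the GMDH step concrete, here is a minimal sketch of one GMDH selection layer: every pair of inputs feeds a small quadratic polynomial model fitted by least squares, and only the units that generalize best to a held-out split survive. The data and split are synthetic illustrations, not the pipeline risk data; a full GMDH would stack several such layers.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                 # e.g. near-miss risk factors
y = 1.5 * X[:, 0] * X[:, 1] - X[:, 2] ** 2 + 0.1 * rng.normal(size=200)

train, valid = slice(0, 150), slice(150, 200)  # external validation split

def poly_features(x1, x2):
    # GMDH "partial description": quadratic polynomial in two inputs
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1*x2])

candidates = []
for i, j in combinations(range(X.shape[1]), 2):
    A = poly_features(X[train, i], X[train, j])
    coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)
    pred = poly_features(X[valid, i], X[valid, j]) @ coef
    mse = np.mean((y[valid] - pred) ** 2)      # external selection criterion
    candidates.append((mse, (i, j)))

candidates.sort(key=lambda c: c[0])
for mse, pair in candidates[:2]:               # keep the best-performing units
    print(f"inputs {pair}: validation MSE = {mse:.3f}")
```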

Keywords: deep learning, risk assessment, neuro-fuzzy, pipelines

Procedia PDF Downloads 292
85 Crafting Robust Business Model Innovation Path with Generative Artificial Intelligence in Start-up SMEs

Authors: Ignitia Motjolopane

Abstract:

Small and medium enterprises (SMEs) play an important role in economies by contributing to economic growth and employment. In the fourth industrial revolution, the convergence of technologies and the changing nature of work have created pressures on economies globally. Generative artificial intelligence (AI) may support SMEs in exploring, exploiting, and transforming business models to align with their growth aspirations. SMEs' growth aspirations fall into four categories: subsistence, income, growth, and speculative. Subsistence-oriented firms focus on meeting basic financial obligations and show less motivation for business model innovation, whereas SMEs focused on income, growth, and speculation are more likely to pursue business model innovation to support growth strategies. SMEs' strategic goals link to distinct business model innovation paths depending on whether they are starting a new business, pursuing growth, or seeking profitability. Integrating generative AI in start-up SME business model innovation enhances value creation, user-oriented innovation, and SMEs' ability to adapt to dynamic changes in the business environment. The existing literature, however, lacks comprehensive frameworks and guidelines for effectively integrating generative AI into start-ups' reiterative business model innovation paths. This paper examines the start-up business model innovation path with generative artificial intelligence. A theoretical approach is used to examine the reiterative business model innovation path of start-up-focused SMEs with generative AI, articulating how generative AI may support SMEs in systematically and cyclically building the business model, covering most or all business model components, and analysing and testing the business model's viability throughout the process. As such, the paper explores generative AI usage in market exploration; market exploration poses unique challenges for start-ups compared to established companies due to a lack of extensive customer data, sales history, and market knowledge. Furthermore, the paper examines the use of generative AI in developing and testing viable value propositions and business models, and in identifying and selecting partners, since selecting the right partners is crucial for start-ups and may significantly impact success. The paper also examines generative AI usage in choosing the right information technology, in the funding process, in revenue model determination, and in stress-testing business models. Stress testing validates a business model's strong and weak points by applying scenarios and evaluating the robustness of individual business model components and the interrelations between them; it may thus address key uncertainties, as misalignment between an organisation and its environment has been recognised as a leading cause of company failure. Generative AI may be used to generate the business model stress-testing scenarios. The paper is expected to make a theoretical and practical contribution to theory and approaches for crafting a robust business model innovation path with generative AI in start-up SMEs.
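
As a minimal illustration of the scenario-generation idea, the sketch below asks a generative model to produce adverse scenarios against a stated business model component. The openai client, model name, prompt wording, and component description are all illustrative assumptions, not something prescribed by the paper; any LLM API would serve the same purpose.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical business model component to stress-test
component = "revenue model: subscription tiers for a B2B analytics start-up"
prompt = (
    "Generate five stress-testing scenarios for this business model "
    f"component: {component}. For each, state the external shock, the "
    "component it strains, and an early-warning indicator."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```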

Keywords: business models, innovation, generative AI, small medium enterprises

Procedia PDF Downloads 71
84 Determination of Energy and Nutrients Composition of Potential Ready-to-Use Therapeutic Food Formulated from Locally Available Resources

Authors: Amina Sa'id Muhammad, Asmau Ishaq Alhassan, Beba Raymond, Fatima Bello

Abstract:

Severe acute malnutrition (SAM) remains a major killer of children under five years of age. Nigeria has the second highest burden of stunted children in the world, with a national prevalence rate of 32 percent among children under five. An estimated 2 million children in Nigeria suffer from SAM, and 3.9% of children in northwest Nigeria suffer from SAM, significantly higher than the national average of 2.1%. Community-Based Management of Acute Malnutrition (CMAM) has proven to be an effective intervention for treating SAM in children using Ready-to-Use Therapeutic Food (RUTF). RUTF is a key component of the treatment of severe acute malnutrition: it contains all the energy and nutrients required for rapid catch-up growth and is used particularly in the treatment of children over 6 months of age with SAM without medical complications. However, almost all RUTF is currently imported into Nigeria from other countries. Shortages of RUTF due to logistics (shipping costs, delays, donor fatigue, etc.) and funding issues threaten the achievement of the 2030 World Health Assembly (WHA) targets for reducing malnutrition, as well as the 2030 SDGs 2 (Zero Hunger), 3 (Good Health and Wellbeing), 12 (Responsible Consumption and Production), and 17 (Partnerships for the Goals), thus undermining RUTF's effectiveness in combating malnutrition. On the other hand, the availability of human and material resources that can support local production of RUTF presents an opportunity to fill the gap in the regular RUTF supply. About one thousand Nigerian children die of malnutrition-related causes every day, reaching a total of 361,000 each year. Owing to this high burden of malnutrition, local production of RUTF is a logical step that will ensure increased availability, acceptability, access, and efficiency of supply, at lower cost. Objectives: The objectives of this study were therefore to formulate RUTF from locally available resources and to determine its energy and nutrient composition in comparison with the standard/commercial RUTF. Methods: Three samples of RUTF were formulated using locally available resources (soya beans, wheat, rice, baobab, brown sugar, date palm, and soya oil) and subjected to various analyses to determine their energy/proximate composition and vitamin and mineral contents; organoleptic properties were also determined using sensory evaluation. Results: The energy values of the three locally produced RUTF samples conformed to the WHO recommendation of ≥ 500 kcal per 100 g, at 563.08, 503.67, and 528.98 kcal, respectively. Samples A, B, and C had protein contents of 13.56%, 16.71%, and 14.62%, respectively, all higher than that of commercial RUTF (10.9%). Conclusions/recommendations: The locally formulated RUTF samples had energy values of more than 500 kcal per 100 g, with appreciable amounts of macro- and micronutrients. The appearance, taste, flavor, and general acceptability of the formulated RUTF samples were also commendable.
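
As a small sketch of the energy check applied to such formulations, energy per 100 g can be estimated from proximate composition with the standard Atwater factors (4 kcal/g for protein and carbohydrate, 9 kcal/g for fat) and compared against the WHO floor of 500 kcal/100 g. The composition values below are illustrative placeholders, not the study's measured data.

```python
ATWATER = {"protein": 4.0, "carbohydrate": 4.0, "fat": 9.0}  # kcal per g

samples = {  # hypothetical grams per 100 g of product
    "A": {"protein": 13.6, "carbohydrate": 45.0, "fat": 36.5},
    "B": {"protein": 16.7, "carbohydrate": 48.0, "fat": 27.0},
}

for name, comp in samples.items():
    kcal = sum(ATWATER[m] * grams for m, grams in comp.items())
    status = "meets" if kcal >= 500 else "fails"
    print(f"Sample {name}: {kcal:.0f} kcal/100 g ({status} WHO >= 500 kcal)")
```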

Keywords: energy, malnutrition, nutrients, RUTF

Procedia PDF Downloads 41
83 Assessing the Plant Diversity's Quality, Threats and Opportunities for the Support of Sustainable City Development of the City Raipur, India

Authors: Katharina Lapin, Debashis Sanyal

Abstract:

Urban areas are growing worldwide. Urbanization has a great impact on social and economic development and on ecosystem services, and this global trend also has a significant impact on habitat and biodiversity. The impact of urbanization on the biodiversity of cities in Europe and North America is well studied, while there is a lack of data from cities in currently fast-growing urban areas. Indian cities are expanding, and the scientific community and governmental authorities view the ongoing urbanization process as an opportunity for the environment. This case study supports the evaluation of the urban biodiversity of the city of Raipur in central India. The aim of this study is to provide an overview of the environmental and ecological implications of urbanization; the collected data and analysis were used to discuss the challenges for sustainable city development. Vascular plants were chosen as an appropriate indicator for the assessment of local biodiversity changes: on the one hand, the vegetation cover is sensitive to anthropogenic influence, and on the other hand, the local species composition is comparable to changes at the regional and national scale, using the plant index of India. Further information on abiotic conditions can be gathered by determining indicator species. In order to calculate the influence of urbanization on native plant diversity, the Shannon diversity index H′ was chosen. Pielou's pooled quadrat method was used for estimating diversity when a random sample cannot be expected, and it was used to calculate Pielou's index of evenness J. The estimated species coverage was used for calculating H′ and J. Pearson correlation was performed to test the relationship between urbanization patterns and plant diversity. Further, a SWOT analysis was used for analyzing the internal and external factors impinging on the decision-making process. The city of Raipur (21.25°N 81.63°E) has a population of 1,010,087 inhabitants living in an urban area of 226 km² in the Indian state of Chhattisgarh. Within the last decade, the urban area of Raipur has increased. The results show that various novel ecosystems exist in the urban area of Raipur. The highest share of native flora is found mainly along the shores of the urban lakes and the river Karun; these areas with a high biodiversity index should be protected as urban biodiversity hot spots. The governmental authorities are well informed about the environmental challenges for the sustainable development of the city, and together with the scientific community of the Technical University of Raipur, many engineering solutions are being discussed for future implementation. The case study helped to point out the importance of environmental measures that support the ecosystem services of green infrastructure. The fast process of urbanization is difficult to control, and the uncontrolled creation of urban housing leads to unsustainable use of natural resources; this is the major threat to urban biodiversity.
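
For reference, the two diversity measures used in the study can be computed directly from estimated species cover: Shannon diversity H′ = -Σ pᵢ ln pᵢ and Pielou's evenness J = H′ / ln S. A compact sketch follows; the cover values are illustrative, not survey data.

```python
import numpy as np

cover = np.array([40.0, 25.0, 15.0, 10.0, 5.0, 5.0])  # % cover per species

p = cover / cover.sum()                  # relative abundances p_i
H = -np.sum(p * np.log(p))               # Shannon diversity H'
J = H / np.log(len(p))                   # Pielou's evenness J = H' / ln(S)

print(f"H' = {H:.3f}, J = {J:.3f}  (S = {len(p)} species)")
```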

Keywords: India, novel ecosystems, plant diversity, urban ecology

Procedia PDF Downloads 277
82 Understanding Language Teachers’ Motivations towards Research Engagement: A Qualitative Case Study of Vietnamese Tertiary English Teachers

Authors: My T. Truong

Abstract:

Among the various professional development (PD) options available for English as a second language (ESL) teachers, especially those at the tertiary level, research engagement has recently been recommended as an innovative model with a transformative force for both individual teachers' PD and wider school improvement. Teachers who conduct research themselves tend to develop critical and analytical thinking about their instructional practices and enhance their ability to make autonomous pedagogical judgments and decisions. With such capabilities, teacher researchers are more likely to contribute to the curriculum innovation of their schools and the improvement of the whole educational process. The extent to which ESL teachers engage in research, however, depends largely on their research motivation, which can not only decide teachers' choice of a PD activity to pursue but also affect the degree and duration of effort they are willing to invest in pursuing it. To understand language teachers' research practices, and to inform educational authorities about ways to promote a research culture among their ESL teaching staff, it is therefore vital to investigate teachers' research motivation. Despite its importance, this individual-difference construct has not received due attention, especially in ESL contexts. To fill this gap, this study explores Vietnamese tertiary ESL teachers' motivations towards research. Guided by self-determination theory and the process model of motivation, it investigates teachers' initial motivations for conducting research and the factors that sustained or degraded their motivation during the research engagement process. Adopting a qualitative case-study approach, the study collected longitudinal data via semi-structured interviews and guided diary entries from three ESL tertiary teachers who were conducting their own research projects. The respondents attended two semi-structured interviews (one at the beginning of their project and the other three months afterwards) and wrote six guided diary entries between the two interviews. The results confirm the significant role motivation plays in driving teachers to initiate and maintain their participation in research, and challenge some common assumptions in the teacher motivation literature. For instance, the quality of past and current research experience unsurprisingly emerged as an important factor that both motivated and demotivated teachers in their research engagement process. Contrary to general suggestions in the motivation literature, however, external demand was found in this study to be a critical motivation-sustaining factor, while intrinsic research interest did not suffice to help a teacher fulfil his research endeavor. With such findings, the study is expected to widen the motivational perspective in understanding language teacher research practice, given the paucity of related studies. Practically, it should help teacher educators, PD program designers, and educational policy makers in Vietnam and similar contexts to approach feasibly the question of whether and how to promote research activities among ESL teachers. For practicing and in-service teachers, the findings may elucidate the motivational conditions under which they can engage in research, and the motivational factors that might hinder or encourage them in so doing.

Keywords: teacher motivation, teacher professional development, teacher research engagement, English as a second language (ESL)

Procedia PDF Downloads 190
81 Use of Artificial Neural Networks to Estimate Evapotranspiration for Efficient Irrigation Management

Authors: Adriana Postal, Silvio C. Sampaio, Marcio A. Villas Boas, Josué P. Castro

Abstract:

This study deals with the estimation of reference evapotranspiration (ET₀) in an agricultural context, focusing on efficient irrigation management to meet the growing interest in the sustainable management of water resources. Given the importance of water in agriculture and its scarcity in many regions, efficient use of this resource is essential to ensure food security and environmental sustainability. The methodology involved the application of artificial intelligence techniques, specifically Multilayer Perceptron (MLP) Artificial Neural Networks (ANNs), to predict ET₀ in the state of Paraná, Brazil. The models were trained and validated with meteorological data from the Brazilian National Institute of Meteorology (INMET), together with data obtained from a producer's weather station in the western region of Paraná. Two optimizers (SGD and Adam) and different meteorological variables, such as temperature, humidity, solar radiation, and wind speed, were explored as inputs to the models. Nineteen configurations with different input variables were tested; among them, configuration 9, with 8 input variables, was identified as the most efficient overall, while configuration 10, with 4 input variables, was considered the most effective given its smaller number of variables. The main conclusions of this study show that MLP ANNs are capable of accurately estimating ET₀, providing a valuable tool for irrigation management in agriculture. Both configurations (9 and 10) showed promising performance in predicting ET₀, and the validation of the models with cultivator data underlined the practical relevance of these tools and confirmed their ability to generalize to different field conditions. The statistical metrics, including Mean Absolute Error (MAE), Mean Squared Error (MSE), Root Mean Squared Error (RMSE), and Coefficient of Determination (R²), showed excellent agreement between the model predictions and the observed data, with MAEs as low as 0.01 mm/day and 0.03 mm/day for configurations 9 and 10, respectively. In addition, the models achieved an R² between 0.99 and 1, indicating a satisfactory fit to the real data. This agreement was also confirmed by the Kolmogorov-Smirnov test, which evaluates the agreement of the predictions with the statistical behavior of the real data and yielded values between 0.02 and 0.04 for the producer data. The results of this study suggest that the developed technique can be applied to other locations by using site-specific data to further improve ET₀ predictions and thus contribute to sustainable irrigation management in different agricultural regions. The study has some limitations, such as the use of a single ANN architecture and two optimizers, validation with data from only one producer, and the possible underestimation of the influence of seasonality and local climate variability. An irrigation management application using the most efficient models from this study is already under development. Future research can explore different ANN architectures and optimization techniques, validate models with data from multiple producers and regions, and investigate the models' response to different seasonal and climatic conditions.
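
A minimal sketch of this modelling pipeline follows: an MLP regressor mapping daily weather variables to ET₀, evaluated with MAE, RMSE, R², and a two-sample Kolmogorov-Smirnov test. The data here are synthetic stand-ins for the INMET and on-farm station records, and the network size is an illustrative choice.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
X = rng.uniform(size=(1000, 4))      # temperature, humidity, radiation, wind
y = 2 + 3 * X[:, 2] + X[:, 0] - X[:, 1] + rng.normal(0, 0.05, size=1000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# solver="adam" here; "sgd" is the other optimizer explored in the study
model = MLPRegressor(hidden_layer_sizes=(32, 16), solver="adam",
                     max_iter=2000, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)

print(f"MAE  = {mean_absolute_error(y_te, pred):.3f} mm/day")
print(f"RMSE = {mean_squared_error(y_te, pred) ** 0.5:.3f} mm/day")
print(f"R^2  = {r2_score(y_te, pred):.3f}")
print(f"KS   = {ks_2samp(y_te, pred).statistic:.3f}")
```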

Keywords: agricultural technology, neural networks in agriculture, water efficiency, water use optimization

Procedia PDF Downloads 48
80 Spectral Responses of the Laser Generated Coal Aerosol

Authors: Tibor Ajtai, Noémi Utry, Máté Pintér, Tomi Smausz, Zoltán Kónya, Béla Hopp, Gábor Szabó, Zoltán Bozóki

Abstract:

Characterization of the spectral responses of light-absorbing carbonaceous particulate matter (LAC) is of great importance both in modelling its climate effect and in interpreting remote sensing measurement data. The residential or domestic combustion of coal is one of the dominant LAC constituents; according to some assessments, residential coal burning accounts for roughly half of the anthropogenic BC emitted from fossil fuel burning. Despite its significance for climate, comprehensive investigation of the optical properties of residential coal aerosol is very limited in the literature. There are many reasons for that, starting from the difficulties associated with controlled burning conditions of the fuel, through the lack of detailed supplementary proximate and ultimate chemical analysis and difficulties interpreting the measured optical data, to the many analytical and methodological difficulties of in-situ measurement of coal aerosol spectral responses. Since the ambient gas matrix can significantly mask the physicochemical characteristics of the generated coal aerosol, the accurate and controlled generation of residential coal particulates is one of the most topical issues in this research area. Most laboratory imitations of residential coal combustion are simply based on coal burning in a stove with ambient air support, allowing one to measure only the apparent spectral features of the particulates. However, the recently introduced methodology based on laser ablation of a solid coal target opens up novel possibilities to model the real combustion procedure under well-controlled laboratory conditions and also makes the investigation of the inherent optical properties possible. Most methodologies for the spectral characterization of LAC are based either on transmission measurements of filter-accumulated aerosol or on indirect deduction from parallel measurements of the scattering and extinction coefficients using free-floating sampling; in the former the accuracy, and in the latter the sensitivity, limits the applicability of the approach. Although the scientific community agrees that aerosol-phase photoacoustic spectroscopy (PAS) is the only method for precise and accurate determination of light absorption by LAC, PAS-based instrumentation for the spectral characterization of absorption has only recently been introduced. In this study, the inherent spectral features of laser-generated and chemically characterized residential coal aerosols are investigated. The experimental set-up and its characteristics for residential coal aerosol generation are introduced. The optical absorption and scattering coefficients, as well as their wavelength dependencies, are determined by our state-of-the-art multi-wavelength PAS instrument (4λ-PAS) and a multi-wavelength cosine sensor (Aurora 3000). The quantified wavelength dependencies (AAE and SAE) are deduced from the measured data. Finally, correlations between the proximate and ultimate chemical parameters and the measured or deduced optical parameters are also revealed.
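
For context, the wavelength dependencies quantified in such studies (AAE and SAE) come from a power-law fit b(λ) ∝ λ^(-AE), i.e. a linear regression in log-log space. The short sketch below shows the calculation; the wavelengths and coefficient values are illustrative, not the instrument's measured data.

```python
import numpy as np

wavelengths = np.array([266.0, 355.0, 532.0, 1064.0])    # nm (illustrative)
b_abs = np.array([95.0, 62.0, 35.0, 15.0])                # absorption, Mm^-1
b_sca = np.array([120.0, 80.0, 45.0, 18.0])               # scattering, Mm^-1

def angstrom_exponent(wl, b):
    # slope of ln(b) vs ln(wl); the negated slope is the Angstrom exponent
    slope, _ = np.polyfit(np.log(wl), np.log(b), 1)
    return -slope

print(f"AAE = {angstrom_exponent(wavelengths, b_abs):.2f}")
print(f"SAE = {angstrom_exponent(wavelengths, b_sca):.2f}")
```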

Keywords: absorption, scattering, residential coal, aerosol generation by laser ablation

Procedia PDF Downloads 361
79 Contrastive Analysis of Parameters Registered in Training Rowers and the Impact on the Olympic Performance

Authors: Gheorghe Braniste

Abstract:

The management of the training process in sports is closely related to awareness of the close connection between performance and the morphological, functional, and psychological characteristics of the athlete's body. Achieving high results in Olympic sports is influenced, on the one hand, by the genetically determined characteristics of the body and, on the other hand, by the morphological, functional, and motor abilities of the athlete. Taking into account the importance of properly understanding the evolutionary specificity of athletes in order to assess their competitive potential, this study provides a comparative analysis of the parameters that characterize the growth, development, and level of adaptation of sweep rowers over the age interval between 12 and 20 years. The study established that, during the multi-annual training process, the bodies of the studied athletes register significant adaptive changes in morphological, functional, psychomotor, and sport-technical parameters. Under the influence of physical effort, both specific and non-specific, there is an increase in the adaptability of the body and its transfer to a much higher level of functionality, with useful and economical adaptive reactions influenced by internal and external environmental factors. The research was carried out over 7 years on a group of 28 athletes, following their evolution and recording the specific parameters of each age stage. In order to determine the level of physical, morpho-functional, and psychomotor development and the technical training of the rowers, the screening protocols of the State University of Physical Education and Sports in the Republic of Moldova were applied. During the research, measurements were made of stature in standing and sitting positions, arm span, weight, chest circumference and perimeter, and vital capacity of the lungs, with subsequent determination of the vital index; tolerance to oxygen deficiency in venous blood was assessed with the Stange and Genchi breath-holding tests, which characterize the level of oxygen saturation; absolute and relative strength of the hand and back were measured; body mass and morphological maturity indices (Quetelet index) and body surface area were calculated; psychomotor tests were applied (Romberg test, 10 s tapping test, reaction to a moving object, and visual and auditory motor reactions); and the technical parameters of rowing were recorded over a competitive distance of 200 m. At the end of the study, it was found that high performance in sports is associated, on the one hand, with the genetically determined characteristics of the body and, on the other hand, with favorable adaptive reactions and energy saving, as well as morphofunctional changes influenced by internal and external environmental factors. The results obtained were positively reflected in reaching the maximum level of training of the athletes in order to achieve performance in large-scale competitions, above all the Olympic Games.
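
Two of the derived indices mentioned above can be computed as in the small sketch below: the Quetelet index (body mass index) and body surface area. BSA is computed here with the Du Bois formula as one common choice; the abstract does not specify which BSA equation was used, and the athlete values are hypothetical.

```python
def quetelet_index(weight_kg: float, height_m: float) -> float:
    # Quetelet index (BMI) = weight / height^2, in kg/m^2
    return weight_kg / height_m ** 2

def bsa_du_bois(weight_kg: float, height_cm: float) -> float:
    # Du Bois formula: BSA = 0.007184 * W^0.425 * H^0.725, in m^2
    return 0.007184 * weight_kg ** 0.425 * height_cm ** 0.725

# e.g. a hypothetical 18-year-old rower, 86 kg and 1.92 m tall
print(f"Quetelet index: {quetelet_index(86, 1.92):.1f} kg/m^2")
print(f"Body surface area: {bsa_du_bois(86, 192):.2f} m^2")
```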

Keywords: olympics, parameters, performance, peak

Procedia PDF Downloads 123
78 Metagenomic Analysis of Irish Cattle Faecal Samples Using Oxford Nanopore MinION Next Generation Sequencing

Authors: Niamh Higgins, Dawn Howard

Abstract:

The Irish agri-food sector is of major importance to Ireland's manufacturing sector and to the Irish economy through employment and the export of animal products worldwide. Infectious diseases and parasites have an impact on farm animal health, affecting profitability and productivity. For the sustainability of Irish dairy farming, there must be the highest standard of animal health. Culture-based methods of microbial identification account for only a small fraction of the complete microbial diversity in an environment and tend to overestimate the prevalence of species that grow easily on an agar surface. New technologies are needed to address these issues and assist with animal health. Metagenomic approaches provide information on both the whole genome and the transcriptome present, through DNA sequencing of total DNA from environmental samples, producing highly resolved functional and taxonomic information. Nanopore next-generation platforms have the potential to be powerful sequencing technologies: they provide high throughput and low material requirements and produce ultra-long reads, simplifying the experimental process. The aim of this study is to use a metagenomics approach to analyze dairy cattle faecal samples using the Oxford Nanopore MinION next-generation sequencer and to establish an in-house pipeline for the metagenomic characterization of complex samples. Faecal samples will be obtained from Irish dairy farms, DNA extracted, the MinION used for sequencing, and bioinformatics analysis then performed. Of particular interest will be the parasite Buxtonella sulcata, on which there has been little research and whose presence on Irish dairy farms has not been studied. Preliminary results have shown the ability of the MinION to produce hundreds of reads in a relatively short time frame of eight hours. The faecal samples were obtained from 90 dairy cows on a Galway farm. The results from the Oxford Nanopore 'What's in my Pot?' (WIMP) workflow in EPI2ME show that, of a total of 926 classified reads, 87% were from the kingdom Bacteria, 10% from the kingdom Eukaryota, 3% from the kingdom Archaea, and < 1% from the kingdom Viruses. The most prevalent bacteria were those from the genera Acholeplasma (71 reads), Bacteroides (35 reads), Clostridium (33 reads), and Acinetobacter (20 reads). The most prevalent species were those from the genus Acholeplasma, including Acholeplasma laidlawii (39 reads) and Acholeplasma brassicae (26 reads). These preliminary results show the ability of the MinION to identify microorganisms to species level from a complex sample; with ongoing optimization of the pipeline, the number of classified reads is likely to increase. Metagenomics has potential in animal health for the diagnosis of microorganisms present on farms. This would support a prevention-rather-than-cure approach, as outlined in the DAFM's National Farmed Animal Health Strategy 2017-2022.
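
A minimal sketch of the kind of post-classification summary reported above follows: aggregating per-read taxonomic assignments to kingdom-level counts and percentages. The table schema is an assumption about what a WIMP-style export might contain, not a documented format, and the rows are dummy reads.

```python
import pandas as pd

# Hypothetical per-read classification table, mimicking a per-read export
# from a taxonomic classification workflow such as WIMP
reads = pd.DataFrame({
    "read_id": range(10),
    "kingdom": ["Bacteria"] * 7 + ["Eukaryota", "Archaea", "Viruses"],
})

by_kingdom = reads.groupby("kingdom")["read_id"].count()
percentages = (100 * by_kingdom / by_kingdom.sum()).round(1)

summary = pd.DataFrame({"reads": by_kingdom, "percent": percentages})
print(summary.sort_values("reads", ascending=False))
```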

Keywords: animal health, Buxtonella sulcata, infectious disease, Irish dairy cattle, metagenomics, MinION, next generation sequencing

Procedia PDF Downloads 150
77 Parameter Selection and Monitoring for Water-Powered Percussive Drilling in Green-Fields Mineral Exploration

Authors: S. J. Addinell, T. Richard, B. Evans

Abstract:

The Deep Exploration Technologies Cooperative Research Centre (DET CRC) is researching and developing a new coiled-tubing-based greenfields mineral exploration drilling system utilising downhole water-powered percussive drill tooling. This new drilling system is aimed at significantly reducing the costs associated with identifying mineral resource deposits beneath deep, barren cover. The system has shown superior rates of penetration in water-rich hard rock formations at depths exceeding 500 meters. Several key challenges exist regarding the deployment and use of these bottom hole assemblies for mineral exploration, and this paper discusses some of the key technical challenges. It presents experimental results obtained from the research program during laboratory and field testing of the prototype drilling system. A study of the morphological aspects of the cuttings generated during the percussive drilling process is presented and shows a strong power-law relationship in the particle size distributions. Several percussive drilling parameters, such as RPM, applied fluid pressure, and weight on bit, have been shown to influence the particle size distributions of the cuttings generated; this directly influences other drilling parameters such as flow loop performance, cuttings dewatering, and solids control. Real-time, accurate knowledge of the percussive system's operating parameters will assist the driller in maximising the efficiency of the drilling process. The applied fluid flow, fluid pressure, and rock properties are known to influence the natural oscillating frequency of the percussive hammer, but this paper also shows that drill bit design, drill bit wear, and the applied weight on bit can influence the oscillation frequency. Because drilling conditions, and therefore operating parameters, change continuously, real-time understanding of the natural operating frequency is paramount to achieving system optimisation. Several techniques for determining the oscillating frequency have been investigated and are presented. With a conventional top drive drilling rig, spectral analysis of the applied fluid pressure, hydraulic feed force pressure, hold-back pressure, and drill string vibrations has shown the presence of the operating frequency of the bottom hole tooling. With a coiled tubing drilling rig, however, which implements a positive displacement downhole motor to provide drill bit rotation, these signals are not available for interrogation at the surface, and another method must therefore be considered. The investigation and analysis of ground vibrations using geophone sensors, similar to seismic-while-drilling techniques, has indicated the presence of the natural oscillating frequency of the percussive hammer. This method is shown to provide a robust technique for determining the downhole percussive oscillation frequency when used with a coiled tubing drill rig.
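
The spectral step described above can be sketched as follows: estimate the power spectral density of a surface signal (pump pressure or a geophone trace) with Welch's method and pick the dominant peak as the hammer's oscillation frequency. The signal here is synthetic, a 55 Hz oscillation buried in noise, standing in for real rig data.

```python
import numpy as np
from scipy.signal import welch

fs = 1000.0                                   # sample rate, Hz
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
signal = np.sin(2 * np.pi * 55 * t) + 0.8 * rng.normal(size=t.size)

freqs, psd = welch(signal, fs=fs, nperseg=2048)   # Welch PSD estimate
dominant = freqs[np.argmax(psd)]                  # peak = hammer frequency
print(f"Dominant oscillation frequency: {dominant:.1f} Hz")
```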

Keywords: cuttings characterization, drilling optimization, oscillation frequency, percussive drilling, spectral analysis

Procedia PDF Downloads 230
76 Automatic Adult Age Estimation Using Deep Learning of the ResNeXt Model Based on CT Reconstruction Images of the Costal Cartilage

Authors: Ting Lu, Ya-Ru Diao, Fei Fan, Ye Xue, Lei Shi, Xian-e Tang, Meng-jun Zhan, Zhen-hua Deng

Abstract:

Accurate adult age estimation (AAE) is a significant and challenging task in the forensic and archaeological fields. Attempts have been made to explore optimal adult age metrics, and the rib is considered a potential age marker. The traditional way is to extract expert-designed, age-related features from macroscopic or radiological images, followed by classification or regression analysis. Those results still have not met the high-level requirements for practice, and a limitation of feature design and manual extraction methods is loss of information, since the features are likely not designed explicitly for extracting information relevant to age. Deep learning (DL) has recently garnered much interest in image analysis and computer vision. It enables learning important features without a prior bias or hypothesis and could therefore support AAE. This study aimed to develop DL models for AAE based on CT images and compare their performance to a manual visual scoring method. Chest CT data were reconstructed using volume rendering (VR). Retrospective data of 2500 patients aged 20.00-69.99 years were obtained between December 2019 and September 2021. Five-fold cross-validation was performed, and datasets were randomly split into training and validation sets in a 4:1 ratio for each fold. Before feeding the inputs into the networks, all images were augmented with random rotation and vertical flip, normalized, and resized to 224×224 pixels. ResNeXt was chosen as the DL baseline due to its higher efficiency and accuracy in image classification. Mean absolute error (MAE) was the primary evaluation metric. Independent data from 100 patients acquired between March and April 2022 were used as a test set. The manual method followed a prior study, which reported the lowest MAEs (5.31 in males and 6.72 in females) among similar studies; CT data and VR images were used. The radiation density of the first costal cartilage was recorded from the CT data on the workstation, and the osseous and calcified projections of the first to seventh costal cartilages were scored on the VR images using an eight-stage staging technique. Following the prior study, the optimal manual models were a decision tree regression model in males and a stepwise multiple linear regression equation in females, and predicted ages for the test set were calculated separately by sex using the corresponding models. A total of 2600 patients (training and validation sets, mean age = 45.19 years ± 14.20 [SD]; test set, mean age = 46.57 ± 9.66) were evaluated in this study. In ResNeXt model training, MAEs of 3.95 in males and 3.65 in females were obtained. On the test set, DL achieved MAEs of 4.05 in males and 4.54 in females, far better than the MAEs of 8.90 and 6.42, respectively, for the manual method. These results show that DL with the ResNeXt model outperformed the manual method in AAE based on CT reconstructions of the costal cartilage, and the developed system may be a supportive tool for AAE.
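
A minimal sketch of the network setup this abstract describes follows: a ResNeXt backbone with its classifier head replaced by a single regression output for age, trained with an L1 (MAE) objective on 224×224 inputs. The weight initialization, batch, and training-loop details are placeholders, not the authors' exact code.

```python
import torch
import torch.nn as nn
from torchvision import models

model = models.resnext50_32x4d(weights=None)      # ResNeXt backbone, no pretrained init assumed
model.fc = nn.Linear(model.fc.in_features, 1)     # regression head: predicted age

criterion = nn.L1Loss()                           # L1 loss optimizes MAE directly
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of 224x224 reconstructions
images = torch.randn(8, 3, 224, 224)
ages = torch.tensor([[34.2], [51.0], [27.5], [63.1],
                     [45.8], [38.9], [22.4], [58.0]])

optimizer.zero_grad()
loss = criterion(model(images), ages)
loss.backward()
optimizer.step()
print(f"batch MAE: {loss.item():.2f} years")
```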

Keywords: forensic anthropology, age determination by the skeleton, costal cartilage, CT, deep learning

Procedia PDF Downloads 73
75 Biological Monitoring: Vegetation Cover, Bird Assemblages, Rodents, Terrestrial and Aquatic Invertebrates from a Closed Landfill

Authors: A. Cittadino, P. Gantes, C. Coviella, M. Casset, A. Sanchez Caro

Abstract:

Three currently active landfills receive the waste from Buenos Aires city and the Greater Buenos Aires suburbs. One of the first landfills to receive solid waste from this area was located in Villa Dominico, some 7 km south of Buenos Aires city. With an area of some 750 ha, including riparian habitats, divided into 14 cells, it received solid waste from June 1979 through February 2004. In December 2010, a biological monitoring program was set up by CEAMSE and Universidad Nacional de Lujan that is still operational to date. The aim of the monitoring program is to assess the state of several biological groups within the landfill and to follow their dynamics over time in order to identify early signs, if any, of damage that the landfill activities might cause to the biota present. Bird and rodent populations, aquatic and terrestrial invertebrate populations, cell vegetation coverage, and the vegetation coverage and main composition of the surrounding areas are followed by quarterly samplings. Bird species richness and abundance were estimated by observation along walking transects in each environment. A total of 74 different bird species were identified; species richness and diversity were high both for the riparian surrounding areas and within the landfill. Several grassland bird species typical of the Pampa were found within the landfill, as well as some migratory and endangered species. Sherman and Tomahawk traps are set overnight for small mammal sampling. Rodent populations are just above detection limits, and the few specimens captured belong mainly to species common to rural areas rather than city-dwelling species; the two marsupial species present in the region were captured on occasion. Aquatic macroinvertebrates were sampled in a watercourse upstream and downstream of the outlet of the landfill's wastewater treatment plant and are used to follow water quality using biological indices. Water quality ranged between weak and severe pollution; benthic invertebrates sampled before and after the landfill show no significant differences in water quality using the IBMWP index. Insect biota from yellow sticky cards and pitfall traps comprised over 90 different morphospecies, with Shannon diversity index values running from 1.9 to 3.9, strongly affected by the season. An easy-to-perform method that does not demand expert knowledge was used to assess vegetation coverage, at two scales of determination: field observation (1 m resolution) and Google Earth images (which allow better than 5 m resolution). Over the eight-year period of the study, vegetation coverage of the landfill cells ran from a low of 83% to 100% on different cells, with an average between 95 and 99% for the entire landfill depending on the season. The surrounding area vegetation showed almost 100% coverage during the entire period, with an average density of 2 to 6 species per square meter and no signs of leachate-damaged vegetation.
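
For context, an IBMWP-style score is assembled as in the compact sketch below: each macroinvertebrate family (or group) found at a site contributes a fixed tolerance score, and the site total maps to a pollution class. The scores here are illustrative placeholders, not the official IBMWP table, and the family lists are hypothetical.

```python
FAMILY_SCORES = {          # illustrative tolerance scores (1-10)
    "Chironomidae": 2, "Oligochaeta": 1, "Baetidae": 4,
    "Hydropsychidae": 5, "Gammaridae": 6,
}

def ibmwp_score(families_present):
    # site score = sum of scores of the taxa recorded at the site
    return sum(FAMILY_SCORES.get(f, 0) for f in families_present)

upstream = ibmwp_score(["Baetidae", "Hydropsychidae", "Gammaridae",
                        "Chironomidae"])
downstream = ibmwp_score(["Chironomidae", "Oligochaeta"])
print(f"upstream = {upstream}, downstream = {downstream}")
```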

Keywords: biological indicators, biota monitoring, landfill species diversity, waste management

Procedia PDF Downloads 139
74 A Vision Making Exercise for Twente Region; Development and Assessment

Authors: Gelareh Ghaderi

Abstract:

The overall objective of this study is to develop two alternative plans of spatial and infrastructural development for the Netwerkstad Twente (Twente region) until 2040 and to assess the impacts of those two alternative plans. This region is located on the eastern border of the Netherlands and comprises five municipalities. Based on the strengths and opportunities of the five municipalities of the Netwerkstad Twente, and in order to develop the region internationally, strengthen the job market, and retain a skilled and knowledgeable young population, two alternative visions have been developed: an environment-oriented vision and a market-oriented vision. The environment-oriented vision is based mostly on preserving beautiful landscapes; Twente would be recognized as an educational center driven by green technologies and an environment-friendly economy. The market-oriented vision is based on attracting and developing different economic activities in the region, building on the visions of the five cities of the Netwerkstad Twente, in order to improve the competitiveness of the region on a national and international scale. On the basis of the two developed visions and the strategies for achieving them, land use and infrastructural development are modeled and assessed. Based on a SWOT analysis, criteria were formulated and employed in modeling the two contrasting land use visions for the year 2040. Land use modeling consists of determining future land use demand, assessing the suitability of land (suitability analysis), and allocating land uses on suitable land. Suitability analysis aims to determine the available supply of land for future development as well as to assess its suitability for specific types of land use on the basis of the formulated set of criteria. The suitability analysis was operated using CommunityViz, a planning support system application for spatially explicit land suitability and allocation. The Netwerkstad Twente has a highly developed transportation infrastructure, consisting of a highway network, national road network, regional road network, street network, local road network, railway network, and bike-path network. Based on assumed speed limits for the different road types, the infrastructure accessibility level of the predicted land use parcels is investigated for four different transport modes. For the evaluation of the two development scenarios, the multi-criteria evaluation (MCE) method is used; the first step was to determine the criteria used for the evaluation of each vision, with all factors categorized as economic, ecological, or social. The results of the multi-criteria evaluation show that the environment-oriented scenario has the higher overall score, with impressive scores on the economic and ecological factors; this is due to the fact that a large percentage of housing tends towards compact housing. The Twente region has immense potential, and the success of this project would define the eastern part of the Netherlands and create a truly competitive local economy with innovation and an attractive environment as its backbone.
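
A minimal sketch of the multi-criteria evaluation step follows: each scenario's normalized criterion scores are combined with weights spanning the economic, ecological, and social factors named above. The criteria, weights, and scores are illustrative assumptions, not the study's actual values.

```python
import numpy as np

criteria = ["jobs", "land take", "habitat quality", "housing access"]
weights = np.array([0.3, 0.2, 0.3, 0.2])          # must sum to 1

# normalized scores in [0, 1] per scenario (hypothetical)
scores = {
    "environment-oriented": np.array([0.6, 0.8, 0.9, 0.7]),
    "market-oriented": np.array([0.9, 0.5, 0.4, 0.6]),
}

for name, s in scores.items():
    # weighted-sum MCE: overall score = sum_i w_i * s_i
    print(f"{name}: overall = {weights @ s:.2f}")
```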

Keywords: economy-oriented vision, environment-oriented vision, infrastructure, land use, multi-criteria assessment, vision

Procedia PDF Downloads 227