Search results for: forest fire monitor
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2305


145 Evaluation of the Effect of Learning Disabilities and Accommodations on the Prediction of the Exam Performance: Ordinal Decision-Tree Algorithm

Authors: G. Singer, M. Golan

Abstract:

Providing students with learning disabilities (LD) with extra time to grant them equal access to the exam is a necessary but insufficient condition to compensate for their LD; there should also be a clear indication that the additional time was actually used. For example, if students with LD use more time than students without LD and yet receive lower grades, this may indicate that a different accommodation is required. If they achieve higher grades but use the same amount of time, then the effectiveness of the accommodation has not been demonstrated. The main goal of this study is to evaluate the effect of including parameters related to LD and extended exam time, along with other commonly-used characteristics (e.g., student background and ability measures such as high-school grades), on the ability of ordinal decision-tree algorithms to predict exam performance. We use naturally-occurring data collected from hundreds of undergraduate engineering students. The sub-goals are i) to examine the improvement in prediction accuracy when the indicator of exam performance includes 'actual time used' in addition to the conventional indicator (exam grade) employed in most research; ii) to explore the effectiveness of extended exam time on exam performance for different courses and for LD students with different profiles (i.e., sets of characteristics). This is achieved by using the patterns (i.e., subgroups) generated by the algorithms to identify pairs of subgroups that differ in just one characteristic (e.g., course or type of LD) but have different outcomes in terms of exam performance (grade and time used). Since grade and time used exhibit an ordinal structure, we propose a method based on ordinal decision-trees, which applies a weighted information-gain ratio (WIGR) measure for selecting the classifying attributes. Unlike other known ordinal algorithms, our method does not assume monotonicity in the data.
The proposed WIGR is an extension of an information-theoretic measure, in the sense that it adjusts to the case of an ordinal target and takes into account the error severity between two different target classes. Specifically, we use ordinal C4.5, random-forest, and AdaBoost algorithms, as well as an ensemble technique composed of ordinal and non-ordinal classifiers. Firstly, we find that the inclusion of LD and extended exam-time parameters improves prediction of exam performance (compared to specifications of the algorithms that do not include these variables). Secondly, when the indicator of exam performance includes 'actual time used' together with grade (as opposed to grade only), the prediction accuracy improves. Thirdly, our subgroup analyses show clear differences in the effect of extended exam time on exam performance among different courses and different student profiles. From a methodological perspective, we find that the ordinal decision-tree based algorithms outperform their conventional, non-ordinal counterparts. Further, we demonstrate that the ensemble-based approach leverages the strengths of each type of classifier (ordinal and non-ordinal) and yields better performance than each classifier individually.
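The abstract does not give the WIGR formula. As a rough illustration only, the sketch below shows one way a weighted information-gain ratio could scale a split's gain by the ordinal distance between classes, so that mixing distant grade levels costs more than mixing adjacent ones; the function names and the severity weighting are assumptions, not the authors' definition.

```python
import numpy as np

def entropy(y):
    """Shannon entropy of a label array."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def ordinal_weighted_entropy(y):
    # Scale each class's surprise by its mean ordinal distance to the other
    # classes, so confusing distant grade levels is penalized more heavily
    # than confusing adjacent ones (an assumed weighting, not the authors' WIGR).
    classes, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    severity = np.array([np.abs(classes - c).mean() for c in classes]) + 1.0
    return float(-(severity * p * np.log2(p)).sum())

def wigr(y, children):
    """Weighted information-gain ratio of a candidate split into children."""
    n = len(y)
    gain = ordinal_weighted_entropy(y) - sum(
        len(c) / n * ordinal_weighted_entropy(c) for c in children)
    # Normalize by the split's intrinsic information, as in C4.5's gain ratio.
    split_info = entropy(np.concatenate(
        [np.full(len(c), i) for i, c in enumerate(children)]))
    return gain / split_info if split_info > 0 else 0.0
```

A split that separates the ordinal classes cleanly scores higher than one that mixes them, which is the property an ordinal splitting criterion needs.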

Keywords: actual exam time usage, ensemble learning, learning disabilities, ordinal classification, time extension

Procedia PDF Downloads 78
144 Training for Safe Tree Felling in the Forest with Symmetrical Collaborative Virtual Reality

Authors: Irene Capecchi, Tommaso Borghini, Iacopo Bernetti

Abstract:

One of the most common pieces of equipment still used today for pruning, felling, and processing trees in forestry is the chainsaw. However, chainsaw use carries serious dangers and one of the highest accident rates in both professional and non-professional work. Felling is proportionally the most dangerous phase, both in severity and frequency, because of the risk of being struck by the tree the operator wants to cut down. To avoid this, a correct sequence of chainsaw cuts must be taught for the different conditions of the tree. Virtual reality (VR) makes it possible to simulate chainsaw use without danger of injury. The limitations of existing applications are as follows. First, existing platforms are not symmetrically collaborative: the trainee is alone in virtual reality while the trainer can only watch the virtual environment on a laptop or PC, which results in an inefficient teacher-learner relationship. Second, most applications involve only a virtual chainsaw, so the trainee cannot feel the weight and inertia of a real chainsaw. Finally, existing applications simulate only a few cases of tree felling. The objectives of this research were to implement and test a symmetrical collaborative training application based on VR and mixed reality (MR), with an overlap between the real and virtual chainsaw in MR. The research and training platform was developed for the Meta Quest 2 head-mounted display and is based on the Unity 3D engine and the Present Platform Interaction SDK (PPI-SDK) developed by Meta. PPI-SDK avoids the use of controllers and enables hand tracking and MR. The combination of these two technologies made it possible to overlay a virtual chainsaw on a real chainsaw in MR and synchronize their movements in VR.
This ensures that the user feels the weight of the actual chainsaw, engages the muscles, and performs the appropriate movements during the exercise, allowing the user to learn the correct body posture. The chainsaw works only if the right sequence of cuts is made to fell the tree. Contact detection is done by Unity's physics system, which allows the interaction of objects that simulate real-world behavior. Each cut of the chainsaw is defined by a so-called collider, and the felling of the tree can only occur if the colliders are activated in the right order, simulating a safe felling technique. In this way, the user can learn how to use the chainsaw safely. The system is also multiplayer, so the student and the instructor can experience VR together in a symmetrical and collaborative way. The platform simulates the following tree-felling situations with safe techniques: cutting a tree tilted forward, cutting a medium-sized tree tilted backward, cutting a large tree tilted backward, sectioning the trunk on the ground, and cutting branches. The application is being evaluated on a sample of university students through a dedicated questionnaire. The results are expected to assess both the increase in learning compared to a theoretical lecture and the immersiveness and telepresence of the platform.
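The ordered-collider logic described above can be mirrored outside Unity as a simple state machine. The Python stand-in below is purely illustrative (the platform implements this with Unity colliders, and the cut names here are hypothetical): the tree only falls when cuts arrive in the prescribed safe order, and a wrong cut resets the exercise.

```python
# Hypothetical safe cutting sequence; the real platform defines one
# sequence per felling scenario via Unity colliders.
SAFE_SEQUENCE = ["top_notch_cut", "bottom_notch_cut", "back_felling_cut"]

class FellingExercise:
    def __init__(self, safe_sequence):
        self.safe_sequence = list(safe_sequence)
        self.progress = 0

    def register_cut(self, cut_name):
        """Record one chainsaw cut and report whether the tree falls."""
        if self.tree_felled():
            return True
        if cut_name == self.safe_sequence[self.progress]:
            self.progress += 1
        else:
            self.progress = 0  # unsafe order: start the exercise over
        return self.tree_felled()

    def tree_felled(self):
        return self.progress == len(self.safe_sequence)
```

In the platform itself each cut corresponds to a collider activation; the class above only mirrors the ordering check.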

Keywords: chainsaw, collaborative symmetric virtual reality, mixed reality, operator training

Procedia PDF Downloads 87
143 Hydrographic Mapping Based on the Concept of Fluvial-Geomorphological Auto-Classification

Authors: Jesús Horacio, Alfredo Ollero, Víctor Bouzas-Blanco, Augusto Pérez-Alberti

Abstract:

Rivers have traditionally been classified, assessed and managed in terms of hydrological, chemical and/or biological criteria. In the past, geomorphological classifications played a secondary role, although proposals like the River Styles Framework, Catchment Baseline Survey or Stroud Rural Sustainable Drainage Project did incorporate geomorphology into management decision-making. In recent years, many studies have turned to the geomorphological component. The geomorphological processes and their associated forms determine the structure of a river system. Understanding these processes and forms is a critical component of the sustainable rehabilitation of aquatic ecosystems. The fluvial auto-classification approach suggests that a river is a self-built natural system, with processes and forms that effectively preserve its ecological function (hydrologic, sedimentological and biological regime). Fluvial systems are formed by a wide range of elements with multiple non-linear interactions on different spatial and temporal scales. Moreover, the fluvial auto-classification concept is built using data from the river itself, so that each classification developed is particular to the river studied. The variables used in the classification are specific stream power and mean grain size. A discriminant analysis showed that these variables best characterize the processes and forms. The statistical technique applied yields an individual discriminant equation for each geomorphological type. The geomorphological classification was developed using sites with a high degree of naturalness. Each site is a control point of high ecological and geomorphological quality. Changes in the conditions of the control points will be quickly recognizable, making it easy to apply the right management measures to recover the geomorphological type. The study focused on Galicia (NW Spain), and the mapping was made by analyzing 122 control points (sites) distributed over eight river basins.
In sum, this study provides a method for fluvial geomorphological classification that works as an open and flexible tool underlying the fluvial auto-classification concept. The hydrographic mapping is the visual expression of the results, such that each river has a particular map according to its geomorphological characteristics. Each geomorphological type is represented by a particular type of hydraulic geometry (channel width, width-depth ratio, hydraulic radius, etc.). An alteration of this geometry is indicative of a geomorphological disturbance (whether natural or anthropogenic). Hydrographic mapping is also dynamic, because its meaning changes if there is a modification in the specific stream power and/or the mean grain size, that is, in the value of their equations. The researcher has to check some of the control points annually. This procedure makes it possible to monitor the geomorphological quality of the rivers and to detect any alterations. The maps are useful to researchers and managers, especially for conservation work and river restoration.
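The abstract's core statistical step, a discriminant analysis on specific stream power and mean grain size, can be sketched with a minimal two-class Fisher discriminant. The numbers below are invented stand-ins, not the study's data, and the two "river types" are hypothetical:

```python
import numpy as np

def fisher_lda(X0, X1):
    """Two-class Fisher discriminant: weight vector maximizing the ratio of
    between-class to within-class scatter, plus a midpoint threshold."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1)
          + np.cov(X1, rowvar=False) * (len(X1) - 1))
    w = np.linalg.solve(Sw, m1 - m0)
    threshold = w @ (m0 + m1) / 2.0
    return w, threshold

rng = np.random.default_rng(0)
# Hypothetical values for the two classifying variables: specific stream
# power (W/m^2) and mean grain size (mm), for two invented river types.
type_a = np.column_stack([rng.normal(120, 15, 60), rng.normal(45, 8, 60)])
type_b = np.column_stack([rng.normal(35, 10, 60), rng.normal(12, 4, 60)])
w, threshold = fisher_lda(type_a, type_b)
# A site is assigned to type B when its projection exceeds the threshold.
pred_b = (np.vstack([type_a, type_b]) @ w) > threshold
accuracy = (pred_b == np.array([False] * 60 + [True] * 60)).mean()
```

The weight vector and threshold together form the per-type discriminant equation the abstract refers to; with more types, one such equation is fitted per geomorphological class.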

Keywords: fluvial auto-classification concept, mapping, geomorphology, river

Procedia PDF Downloads 349
142 Anti-Graft Instruments and Their Role in Curbing Corruption: Integrity Pact and Its Impact on Indian Procurement

Authors: Jot Prakash Kaur

Abstract:

The paper aims to showcase that with the introduction of anti-graft instruments and the willingness of governments towards their implementation, a significant change can be witnessed in the anti-corruption landscape of any country. Over the past decade, anti-graft instruments have been introduced by several international non-governmental organizations with the vision of curbing corruption. Transparency International’s ‘Integrity Pact’ has been one such initiative. Integrity Pact has been described as a tool for preventing corruption in public contracting. Integrity Pact has found its relevance in a developing country like India, where public procurement constitutes 25-30 percent of Gross Domestic Product. Corruption in public procurement has been a cause of concern even though India has in place a whole architecture of rules and regulations governing public procurement. Integrity Pact was first adopted by a leading oil and gas government company in 2006. By May 2015, over ninety organizations had adopted Integrity Pact, the majority of them central government units. The methodology undertaken to understand the impact of Integrity Pact on public procurement is the analysis of information received from the instrument’s important stakeholders. Government: information was sought through the Right to Information Act 2005 about the details of adoption of this instrument by various government organizations and departments. Contractors: company websites and annual reports were used to find out the steps taken towards implementation of Integrity Pact. Civil society: Transparency International India’s resource materials, which include publications and reports on Integrity Pact, were also used to understand its impact. Among the findings of the study: some organizations have adopted Integrity Pact in all kinds of contracts, such that 90% of their procurement falls under Integrity Pact.
Indian state governments have found merit in Integrity Pact and have adopted it in their procurement contracts. Integrity Pact has been instrumental in creating a brand image for companies. External Monitors, an essential feature of Integrity Pact, have emerged as arbitrators for the bidders and are the first line of procurement auditors for the organizations. India has cancelled two defense contracts after finding conflicts with the provisions of Integrity Pact. Some of the clauses of Integrity Pact have been included in the proposed public procurement legislation. Integrity Pact has slowly but steadily grown to become an integral part of big-ticket procurement in India. The government’s commitment to implement Integrity Pact has changed the way in which public procurement is conducted in India. Public procurement was a segment infested with corruption, but with the adoption of Integrity Pact, a number of clean-up measures have been taken to make procurement transparent. The paper is divided into five sections. The first section elaborates on Integrity Pact. The second discusses the stakeholders of the instrument and the roles they play in its implementation. The third describes the efforts taken by the government to implement Integrity Pact in India. The fourth examines the role of the External Monitor as arbitrator. The final section puts forth suggestions to strengthen the existing form of Integrity Pact and increase its reach.

Keywords: corruption, integrity pact, procurement, vigilance

Procedia PDF Downloads 313
141 Generating Individualized Wildfire Risk Assessments Utilizing Multispectral Imagery and Geospatial Artificial Intelligence

Authors: Gus Calderon, Richard McCreight, Tammy Schwartz

Abstract:

Forensic analysis of community wildfire destruction in California has shown that reducing or removing flammable vegetation in proximity to buildings and structures is one of the most important wildfire defenses available to homeowners. State laws specify the requirements for homeowners to create and maintain defensible space around all structures. Unfortunately, this decades-long effort had limited success due to noncompliance and minimal enforcement. As a result, vulnerable communities continue to experience escalating human and economic costs along the wildland-urban interface (WUI). Quantifying vegetative fuels at both the community and parcel scale requires detailed imaging from an aircraft with remote sensing technology to reduce uncertainty. FireWatch has been delivering high spatial resolution (5” ground sample distance) wildfire hazard maps annually to the community of Rancho Santa Fe, CA, since 2019. FireWatch uses a multispectral imaging system mounted onboard an aircraft to create georeferenced orthomosaics and spectral vegetation index maps. Using proprietary algorithms, the vegetation type, condition, and proximity to structures are determined for 1,851 properties in the community. Secondary data processing combines object-based classification of vegetative fuels, assisted by machine learning, to prioritize mitigation strategies within the community. The remote sensing data for the 10 sq. mi. community is divided into parcels and sent to all homeowners in the form of defensible space maps and reports. Follow-up aerial surveys are performed annually using repeat station imaging of fixed GPS locations to address changes in defensible space, vegetation fuel cover, and condition over time. These maps and reports have increased wildfire awareness and mitigation efforts from 40% to over 85% among homeowners in Rancho Santa Fe. 
To assist homeowners fighting increasing insurance premiums and non-renewals, FireWatch has partnered with Black Swan Analytics, LLC, to leverage the multispectral imagery and increase homeowners’ understanding of wildfire risk drivers. For this study, a subsample of 100 parcels was selected to gain a comprehensive understanding of wildfire risk and the elements which can be mitigated. Geospatial data from FireWatch’s defensible space maps was combined with Black Swan’s patented approach using 39 other risk characteristics into a 4score Report. The 4score Report helps property owners understand risk sources and potential mitigation opportunities by assessing four categories of risk: fuel sources, ignition sources, susceptibility to loss, and hazards to fire protection efforts (FISH). This study has shown that susceptibility to loss is the category on which residents and property owners must focus their efforts. The 4score Report also provides a tool to measure the impact of homeowner actions on risk levels over time. Resiliency is the only solution to breaking the cycle of community wildfire destruction, and it starts with high-quality data and education.
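The spectral vegetation index maps mentioned above are typically computed per pixel from the red and near-infrared bands of the multispectral imagery. The abstract does not name the specific index used; as an illustration, a minimal NDVI sketch (NDVI being a common choice for mapping vegetation condition):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index per pixel: values near 1
    indicate dense green vegetation, values near 0 bare soil or structures."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)  # eps avoids division by zero

# Toy 2x2 reflectance bands: healthy vegetation reflects strongly in NIR.
nir_band = np.array([[0.50, 0.60], [0.10, 0.40]])
red_band = np.array([[0.10, 0.10], [0.09, 0.30]])
index_map = ndvi(nir_band, red_band)
```

Thresholding such an index map, combined with distance-to-structure data, is one straightforward way to flag vegetative fuels inside a parcel's defensible-space zone.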

Keywords: defensible space, geospatial data, multispectral imaging, Rancho Santa Fe, susceptibility to loss, wildfire risk

Procedia PDF Downloads 83
140 Design of a Small and Medium Enterprise Growth Prediction Model Based on Web Mining

Authors: Yiea Funk Te, Daniel Mueller, Irena Pletikosa Cvijikj

Abstract:

Small and medium enterprises (SMEs) play an important role in the economy of many countries. When the overall world economy is considered, SMEs represent 95% of all businesses in the world, accounting for 66% of total employment. Existing studies show that the current business environment is highly turbulent and strongly influenced by modern information and communication technologies, forcing SMEs to face more severe challenges in maintaining their existence and expanding their business. To support SMEs in improving their competitiveness, researchers have recently turned their focus to applying data mining techniques to build risk and growth prediction models. However, the data used to assess risk and growth indicators is primarily obtained via questionnaires, which is laborious and time-consuming, or is provided by financial institutions and is thus highly sensitive to privacy issues. Recently, web mining (WM) has emerged as a new approach towards obtaining valuable insights into the business world. WM enables automatic and large-scale collection and analysis of potentially valuable data from various online platforms, including companies’ websites. While WM methods have been frequently studied to anticipate growth of sales volume for e-commerce platforms, their application for the assessment of SME risk and growth indicators is still scarce. Considering that a vast proportion of SMEs own a website, WM bears great potential in revealing valuable information hidden in SME websites, which can further be used to understand SME risk and growth indicators, as well as to enhance current SME risk and growth prediction models. This study aims at developing an automated system to collect business-relevant data from the Web and predict future growth trends of SMEs by means of WM and data mining techniques. The envisioned system should serve as an 'early recognition system' for future growth opportunities.
In an initial step, we examine how structured and semi-structured Web data in governmental or SME websites can be used to explain the success of SMEs. WM methods are applied to extract Web data in the form of additional input features for the growth prediction model. Data on SMEs provided by a large Swiss insurance company is used as ground truth data (i.e., growth-labeled data) to train the growth prediction model. Different machine learning classification algorithms, such as the Support Vector Machine, Random Forest and Artificial Neural Network, are applied and compared, with the goal of optimizing the prediction performance. The results are compared to those from previous studies in order to assess the contribution of growth indicators retrieved from the Web to increasing the predictive power of the model.
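The model-comparison step described above, training several classifiers on growth-labeled data and comparing cross-validated accuracy, can be sketched generically. The sketch below uses a nearest-centroid stand-in and a majority-class baseline on synthetic data, not the study's SVM/Random Forest/ANN models or insurance data:

```python
import numpy as np

def kfold_accuracy(fit_predict, X, y, k=5, seed=0):
    """Mean accuracy over k folds for a classifier supplied as a
    fit-then-predict callable: fit_predict(Xtr, ytr, Xte) -> labels."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), k)
    scores = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        scores.append((fit_predict(X[train], y[train], X[test]) == y[test]).mean())
    return float(np.mean(scores))

def nearest_centroid(Xtr, ytr, Xte):
    # Stand-in model: assign each point to the class with the closest centroid.
    labels = np.unique(ytr)
    centroids = np.stack([Xtr[ytr == c].mean(axis=0) for c in labels])
    dists = np.linalg.norm(Xte[:, None, :] - centroids[None, :, :], axis=2)
    return labels[dists.argmin(axis=1)]

def majority_baseline(Xtr, ytr, Xte):
    # Baseline: always predict the most frequent training class.
    vals, counts = np.unique(ytr, return_counts=True)
    return np.full(len(Xte), vals[counts.argmax()])

# Synthetic stand-in for growth-labeled SME feature vectors.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(3, 1, (50, 3))])
y = np.array([0] * 50 + [1] * 50)
```

Swapping `nearest_centroid` for an SVM, random forest, or neural network keeps the comparison protocol identical, which is what makes the cross-validated scores comparable across models.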

Keywords: data mining, SME growth, success factors, web mining

Procedia PDF Downloads 243
139 Wood Energy, Trees outside Forests and Agroforestry Wood Harvesting and Conversion Residues Preparing and Storing

Authors: Adeiza Matthew, Oluwadamilola Abubakar

Abstract:

Wood energy, also known as wood fuel, is a renewable energy source that is derived from woody biomass, which is organic matter that is harvested from forests, woodlands, and other lands. Woody biomass includes trees, branches, twigs, and other woody debris that can be used as fuel. Wood energy can be classified based on its sources, such as trees outside forests, residues from wood harvesting and conversion, and energy plantations. There are several policy frameworks that support the use of wood energy, including participatory forest management and agroforestry. These policies aim to promote the sustainable use of woody biomass as a source of energy while also protecting forests and wildlife habitats. There are several options for using wood as a fuel, including central heating systems, pellet-based systems, wood chip-based systems, log boilers, fireplaces, and stoves. Each of these options has its own benefits and drawbacks, and the most appropriate option will depend on factors such as the availability of woody biomass, the heating needs of the household or facility, and the local climate. In order to use wood as a fuel, it must be harvested and stored properly. Hardwood or softwood can be used as fuel, and the heating value of firewood depends on the species of tree and the degree of moisture content. Proper harvesting and storage of wood can help to minimize environmental impacts and improve wildlife habitats. The use of wood energy has several environmental impacts, including the release of greenhouse gases during combustion and the potential for air pollution from combustion by-products. However, wood energy can also have positive environmental impacts, such as the sequestration of carbon in trees and the reduction of reliance on fossil fuels. The regulation and legislation of wood energy vary by country and region, and there is an ongoing debate about the potential use of wood energy in renewable energy technologies. 
Wood energy is a renewable energy source that can be used to generate electricity, heat, and transportation fuels. Woody biomass is abundant and widely available, making it a potentially significant source of energy for many countries. The use of wood energy can create local economic and employment opportunities, particularly in rural areas. Wood energy can be used to reduce reliance on fossil fuels and reduce greenhouse gas emissions. Properly managed forests can provide a sustained supply of woody biomass for energy, helping to reduce the risk of deforestation and habitat loss. Wood energy can be produced using a variety of technologies, including direct combustion, co-firing with fossil fuels, and the production of biofuels. The environmental impacts of wood energy can be minimized through the use of best practices in harvesting, transportation, and processing. Wood energy is regulated and legislated at the national and international levels, and there are various standards and certification systems in place to promote sustainable practices. Wood energy has the potential to play a significant role in the transition to a low-carbon economy and the achievement of climate change mitigation goals.

Keywords: biomass, timber, charcoal, firewood

Procedia PDF Downloads 73
138 Designing Automated Embedded Assessment to Assess Student Learning in a 3D Educational Video Game

Authors: Mehmet Oren, Susan Pedersen, Sevket C. Cetin

Abstract:

Despite its frequently criticized disadvantages, traditional paper-and-pencil assessment is the most frequently used method in our schools. Although such assessments provide acceptable measurement, they are not capable of capturing all the aspects and the richness of learning and knowledge. Also, many assessments used in schools decontextualize the assessment from the learning; they focus on learners’ standing on a particular topic but do not capture how student learning changes over time. For these reasons, many scholars advocate that using simulations and games (S&G) as a tool for assessment has significant potential to overcome the problems of traditionally used methods. S&G can benefit from the change in technology and provide a contextualized medium for assessment and teaching. Furthermore, S&G can serve as an instructional tool rather than a method to test students’ learning at a particular time point. To investigate the potential of using educational games as an assessment and teaching tool, this study presents the implementation and validation of an automated embedded assessment (AEA), which can constantly monitor student learning in the game and assess performance without interrupting learning. The experiment was conducted in an undergraduate engineering course (Digital Circuit Design) with 99 participating students over a period of five weeks in the Spring 2016 semester. The purpose of this research study is to examine whether the proposed method of AEA is valid for assessing student learning in a 3D educational game, and to present the implementation steps. To address this question, this study inspects three aspects of the AEA for validation. First, the evidence-centered design model was used to lay out the design and measurement steps of the assessment. Then, a confirmatory factor analysis was conducted to test if the assessment can measure the targeted latent constructs.
Finally, the scores of the assessment were compared with an external measure (a validated test measuring student learning on digital circuit design) to evaluate the convergent validity of the assessment. The results of the confirmatory factor analysis showed that the fit of the model with three latent factors and one higher-order factor was acceptable (RMSEA = 0.00, CFI = 1, TLI = 1.013, WRMR = 0.390). All of the observed variables loaded significantly on the latent factors in the latent factor model. In the second analysis, a multiple regression analysis was used to test whether the external measure significantly predicts students’ performance in the game. The results of the regression indicated that the two predictors explained 36.3% of the variance (R² = .36, F(2, 96) = 27.42, p < .001). It was found that students’ posttest scores significantly predicted game performance (β = .60, p < .001). The statistical results of the analyses show that the AEA can distinctly measure three major components of the digital circuit design course. It is hoped that this study can help researchers understand how to design an AEA; it showcases an implementation by providing an example methodology for validating this type of assessment.
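The regression step, checking how much variance in game performance the two predictors explain, can be reproduced in outline with ordinary least squares. The data below are synthetic (99 invented student scores with an invented effect size), used only to exercise the computation of R² and coefficients:

```python
import numpy as np

def ols_r2(X, y):
    """Fit y = b0 + X @ b by least squares; return R^2 and coefficients."""
    X1 = np.column_stack([np.ones(len(y)), X])  # prepend intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    residuals = y - X1 @ beta
    ss_res = residuals @ residuals
    ss_tot = ((y - y.mean()) ** 2).sum()
    return float(1 - ss_res / ss_tot), beta

# Synthetic scores for 99 students (matching the study's sample size);
# the true relationship here is invented, not the study's estimate.
rng = np.random.default_rng(2)
posttest = rng.normal(70, 10, 99)
pretest = rng.normal(60, 10, 99)
game_score = 0.6 * posttest + 0.1 * pretest + rng.normal(0, 8, 99)
r2, beta = ols_r2(np.column_stack([posttest, pretest]), game_score)
```

`r2` corresponds to the proportion of variance explained that the abstract reports, and `beta[1]` to the posttest coefficient.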

Keywords: educational video games, automated embedded assessment, assessment validation, game-based assessment, assessment design

Procedia PDF Downloads 401
137 Walking in a Web of Animality: An Animality Informed Ethnography for an Inclusive Coexistence With (Other) Animals

Authors: Francesco De Giorgio

Abstract:

As different groups of wild animals are moving from natural to more anthropic environments, the need to overcome the human-animal gap for an ethical coexistence becomes a public concern. Ethnology and ethnography play fundamental roles in the understanding of dynamics, perspective and movement in our interaction with (other) animals. In this effort, the Animality perspective provides an essential ethical lens and quality guidance for ethnography. It deconstructs the human/animal distinction and creates an inclusive approach to society. It further transgresses the rigid lines of normalizing images in human cultures, in which individuals are easily marginalized as ‘different’. Just like labeling an animal with species-specific behavior, judging and categorizing humans according to culture-specific expectations is easier than recognizing subjectivity. A fusion of anti-speciesist ethnology and ethnography of natural and social sciences can redress the shortcomings of current practices of multispecies ethnography, which largely remain within an exclusively normalized human perspective. Empirically, the paper is based on current research on wild urban animals and human movement in Genoa (Italy), collecting data from systematic field observations of wild boars, together with ethnographic data collection over an 18-month period in which the humans involved are educated in a changing perspective of coexistence. An “animality-ethnography” starts from observing our animal movement: how much and when we move, how we intersect our movement with that of other animals cohabiting with us, how we can observe and know others by moving, and our ways of walking. The research will show how (interspecies) socio-cognition implies motion and movement and animal journeys between nature and the city, but also within the cities themselves, where a web of motion becomes the basic cultural matrix for cohabiting spaces, places, and systems.
Here, the term "cognition" does not refer just to the brain or mind or intelligence. Indeed, cognition has a lot to do with movement, space, motion, proprioception, and the body. The ability to be informed, not only through what you see but also through the information you get from being in tune with the motion of a shared dynamic. To be an informative presence instead of an active stimulus or passive expectation, where the latter leaves too much space for projections and interpretations. What is proposed here is an understanding of our own animal movement linked to our own animal cognition. The result of breaking down your own culturally prescribed way in ethnographic research is breaking the barrier of limited options for observation and comprehension of the Other. Walking in the same way results in seeing others in the same way, studying them through only one channel of perception, causing a one-dimensional life instead of a multidimensional web. Returning to an understanding of our Animality, our animal movement, being in tune to improve a socio-cognitive context of cohabitation, both with domestic and wild animals, both in a forest or in a metropolis, represents the challenge of the coming years, and the evolution of the next centuries, to both preserve and share cultures, beyond the boundaries of species.

Keywords: antispeciesist ethology, interspecies coexistence, socio-cognition, intersectionality, animality

Procedia PDF Downloads 47
136 3D Classification Optimization of Low-Density Airborne Light Detection and Ranging Point Cloud by Parameters Selection

Authors: Baha Eddine Aissou, Aichouche Belhadj Aissa

Abstract:

Light detection and ranging (LiDAR) is an active remote sensing technology used for several applications. Airborne LiDAR is becoming an important technology for the acquisition of highly accurate dense point clouds. The classification of airborne laser scanning (ALS) point clouds is a very important task that still remains a real challenge for many scientists. Support vector machine (SVM) is one of the most used statistical learning algorithms based on kernels. SVM is a non-parametric method, and it is recommended in cases where the data distribution cannot be well modeled by a standard parametric probability density function. Using a kernel, it performs a robust non-linear classification of samples. Often, the data are rarely linearly separable. SVMs are able to map the data into a higher-dimensional space in which they become linearly separable, while performing all the computations in the original space. This is one of the main reasons that SVMs are well suited for high-dimensional classification problems. Only a few training samples, called support vectors, are required. SVM has also shown its potential to cope with uncertainty in data caused by noise and fluctuation, and it is computationally efficient compared to several other methods. Such properties are particularly suited to remote sensing classification problems and explain their recent adoption. In this poster, the SVM classification of ALS LiDAR data is proposed. Firstly, connected component analysis is applied to cluster the point cloud. Secondly, the resulting clusters are fed into the SVM classifier. The radial basis function (RBF) kernel is used due to the small number of parameters (C and γ) that need to be chosen, which decreases the computation time. In order to optimize the classification rates, parameter selection is explored. It consists of finding the parameters (C and γ) leading to the best overall accuracy, using grid search and 5-fold cross-validation.
The exploited LiDAR point cloud is provided by the German Society for Photogrammetry, Remote Sensing, and Geoinformation. The ALS data used are characterized by a low density (4-6 points/m²) and cover an urban area in residential parts of the city of Vaihingen in southern Germany. The ground class and three roof-superstructure classes are considered, i.e., four classes in total. The training and test sets were selected randomly several times. The results demonstrated that parameter selection can narrow the search to a restricted interval of (C, γ) worth further exploration, but does not systematically lead to the optimal rates. The SVM classifier with tuned hyper-parameters is compared with the classifiers most used in the literature for LiDAR data: random forest, AdaBoost, and decision tree. The comparison showed the superiority of the SVM classifier with parameter selection for LiDAR data over the other classifiers.
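The grid search with 5-fold cross-validation described above can be sketched with scikit-learn; the synthetic data, the four-class setup, and the grid bounds for C and γ are illustrative assumptions, not the study's actual features or settings.

```python
# Hyper-parameter selection for an RBF-kernel SVM: grid search over (C, γ)
# with 5-fold cross-validation. Synthetic stand-in features; in the study
# these would be per-cluster ALS LiDAR descriptors.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=8, n_informative=6,
                           n_classes=4, random_state=0)

param_grid = {"C": [0.1, 1, 10, 100],          # assumed grid, not the paper's
              "gamma": [0.001, 0.01, 0.1, 1]}
search = GridSearchCV(SVC(kernel="rbf"), param_grid, cv=5, scoring="accuracy")
search.fit(X, y)

print(search.best_params_)              # best (C, gamma) found by CV
print(round(search.best_score_, 3))     # best cross-validated overall accuracy
```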

Keywords: classification, airborne LiDAR, parameters selection, support vector machine

Procedia PDF Downloads 128
135 Processing of Flexible Dielectric Nanocomposites Using Nanocellulose and Recycled Alum Sludge for Wearable Technology Applications

Authors: D. Sun, L. Saw, A. Onyianta, D. O’Rourke, Z. Lu, C. See, C. Wilson, C. Popescu, M. Dorris

Abstract:

With the rapid development of wearable technology (e.g., smartwatches, activity trackers and health monitoring devices), flexible dielectric materials that are environmentally friendly, low-cost and energy-efficient are in increasing demand. In this work, a flexible dielectric nanocomposite was processed by incorporating two components, cellulose nanofibrils and alum sludge, in a polymer matrix. The two components served as the reinforcement phase and enhanced the dielectric properties; both were derived from waste materials that would otherwise be disposed of in landfills. Alum sludge is a by-product of the water treatment process, in which aluminum sulfate is prevalently used as the primary coagulant. According to data from a project partner, Scottish Water, approximately 10,000 tonnes of alum sludge are generated as waste from water treatment works and landfilled every year in Scotland. The industry has been facing escalating financial and environmental pressure to develop more sustainable strategies for dealing with alum sludge waste. In the available literature, some work on reusing alum sludge has been reported (e.g., aluminum recovery, or agriculture and land reclamation). However, little work can be found on applying it to energy materials (e.g., dielectrics) for enhanced energy density and efficiency. The alum sludge was collected directly from a Scottish Water treatment plant and was heat-treated and refined before being used in preparing the composites. Cellulose nanofibrils were derived from water hyacinth, an invasive aquatic weed that causes significant ecological problems in tropical regions. The harvested water hyacinth was dried and processed using a cost-effective method comprising chemical extraction followed by homogenization to extract the cellulose nanofibrils.
The biodegradable elastomer polydimethylsiloxane (PDMS) was used as the polymer matrix, and the nanocomposites were processed by casting the raw materials in Petri dishes. The processed composites were characterized using various methods, including scanning electron microscopy (SEM), rheological analysis, thermogravimetric analysis (TGA) and X-ray diffraction (XRD) analysis. SEM showed that cellulose nanofibrils of approximately 20 nm in diameter and 100 nm in length were obtained, and that the alum sludge particles were approximately 200 µm in diameter. TGA/DSC analysis showed a weight loss of up to 48% in the raw alum sludge, with its crystallization beginning at approximately 800 °C; this observation is consistent with the XRD results. Other experiments also showed that the composites exhibit good combined mechanical and dielectric performance. This work demonstrates a sustainable practice of reusing such waste materials to prepare flexible, lightweight and miniature dielectric materials for wearable technology applications.

Keywords: cellulose, biodegradable, sustainable, alum sludge, nanocomposite, wearable technology, dielectric

Procedia PDF Downloads 63
134 Solids and Nutrient Loads Exported by Preserved and Impacted Low-Order Streams: A Comparison among Water Bodies in Different Latitudes in Brazil

Authors: Nicolas R. Finkler, Wesley A. Saltarelli, Taison A. Bortolin, Vania E. Schneider, Davi G. F. Cunha

Abstract:

Estimating the relative contributions of nonpoint and point sources of pollution in low-order streams is an important tool for water resources management. Headwaters located in areas affected by urbanization and agriculture are a common scenario in developing countries. This condition can lead to conflicts among different water users and compromise ecosystem services. Water pollution also contributes to the export of organic loads to downstream areas, including higher-order rivers. The purpose of this research is to preliminarily assess the nutrient and solids loads exported by water bodies located in watersheds with different land uses in São Carlos - SP (latitude -22.0087, longitude -47.8909) and Caxias do Sul - RS (latitude -29.1634, longitude -51.1796), Brazil, using regression analysis. The variables analyzed in this study were Total Kjeldahl Nitrogen (TKN), nitrate (NO3-), total phosphorus (TP) and total suspended solids (TSS). Data were obtained in October and December 2015 for São Carlos (SC) and in November 2012 and March 2013 for Caxias do Sul (CXS); these periods had similar weather patterns regarding precipitation and temperature. Altogether, 11 sites were divided into two groups: some classified as more pristine (SC1, SC4, SC5, SC6 and CXS2), with a predominance of native forest, and others considered impacted (SC2, SC3, CXS1, CXS3, CXS4 and CXS5), with larger urban and/or agricultural areas. A preliminary linear regression between flow and drainage area at each site (R² = 0.9741) suggested that the loads to be assessed had a significant relationship with the drainage areas. Thereafter, regression analysis was conducted between the drainage areas and the total loads for the two land use groups. The R² values were 0.070, 0.830, 0.752 and 0.455 for the TSS, TKN, NO3- and TP loads, respectively, in the more preserved areas, suggesting that the loads generated by runoff are significant in these locations.
However, the respective R² values for sites located in impacted areas were 0.488, 0.054, 0.519 and 0.059 for the TSS, TKN, NO3- and TP loads, indicating a weaker relationship between total loads and runoff than in the previous scenario. This study suggests three possible conclusions that will be further explored in the full-text article, with more sampling sites and periods: a) in preserved areas, nonpoint sources of pollution are more significant in determining water quality with respect to the studied variables; b) the nutrient (TKN and TP) loads in impacted areas may be associated with point sources such as domestic wastewater discharges with inadequate treatment levels; and c) the presence of NO3- in impacted areas can be associated with runoff, particularly in agricultural areas, where fertilizers are commonly applied at certain times of the year.
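The regression step, an exported load regressed on drainage area with R² as the fit measure, can be sketched as follows; the data values are hypothetical, not the study's measurements.

```python
# Linear regression of an exported load against drainage area, reporting
# R² as in the abstract. The arrays are made-up illustrative numbers.
import numpy as np
from scipy import stats

drainage_area_km2 = np.array([2.1, 3.4, 5.0, 7.2, 9.8, 12.5])   # hypothetical sites
tkn_load_kg_day = np.array([0.8, 1.3, 1.9, 2.6, 3.7, 4.6])      # hypothetical loads

res = stats.linregress(drainage_area_km2, tkn_load_kg_day)
r_squared = res.rvalue ** 2
print(f"slope = {res.slope:.3f} kg/day per km², R² = {r_squared:.3f}")
```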

Keywords: land use, linear regression, point and non-point pollution sources, streams, water resources management

Procedia PDF Downloads 286
133 Investigation of Ground Disturbance Caused by Pile Driving: Case Study

Authors: Thayalan Nall, Harry Poulos

Abstract:

Piling is the most widely used foundation method for heavy structures in poor soil conditions. The geotechnical engineer can choose among a variety of piling methods, but in most cases, driving piles with an impact hammer is the most cost-effective alternative. Under unfavourable conditions, pile driving can cause environmental problems, such as noise, ground movements and vibrations, with the risk of ground disturbance leading to potential damage to proposed structures. At one of the project sites in which the authors were involved, three offshore container terminals, CT1, CT2 and CT3, were constructed over thick compressible marine mud. The seabed was around 6 m deep, and the soft clay thickness within the project site varied between 9 m and 20 m. CT2 and CT3 were connected, together forming a rectangle 2,600 m × 800 m in size. CT1, 400 m × 800 m in size, was located to the south of CT2, opposite its eastern end. CT1 was constructed first and, due to time and environmental limitations, was supported on a “forest” of large-diameter driven piles. CT2 and CT3 are now under construction using a traditional dredging and reclamation approach, with ground improvement by surcharging with vertical drains. A few months after the installation of the CT1 piles, a 2,600 m long sand bund rising to 2 m above mean sea level was constructed along the southern perimeter of CT2 and CT3 to contain the dredged mud that was to be pumped. The sand bund was constructed by sand spraying and pumping using a dredging vessel. About 2,000 m of the sand bund in the west section was constructed without any major stability issues or noticeable distress. However, as the sand bund approached the section parallel to CT1, it underwent a series of deep-seated failures that caused the displaced soft clay to heave above the standing water level. The crest of the sand bund was about 100 m away from the last row of piles.
There was no plausible geological reason to conclude that the marine mud across the CT1 region alone was weaker than over the rest of the site. Hence, it was suspected that pile driving by impact hammer may have caused ground movements and vibrations, leading to the generation of excess pore pressures and cyclic softening of the marine mud. This paper investigates the probable cause of failure by reviewing: (1) all ground investigation data within the region; (2) soil displacement caused by pile driving, using theories similar to spherical cavity expansion; (3) the transfer of stresses and vibrations through the entire system, including vibrations transmitted from the hammer to the pile, and the dynamic properties of the soil; and (4) the generation of excess pore pressure due to ground vibration and the resulting cyclic softening. The evidence suggests that the problems encountered at the site were primarily caused by the “side effects” of the pile driving operations.

Keywords: pile driving, ground vibration, excess pore pressure, cyclic softening

Procedia PDF Downloads 209
132 The Latest Salt Caravans: The Chinese Presence between Danakil and Tigray: Interdisciplinary Study to Integrate Chinese and African Relations in Ethiopia: Analyzing Road Evolution and Ethnographic Contexts

Authors: Erika Mattio

Abstract:

The aim of this project is to study the Chinese presence in Ethiopia, in the area between the Saba River and the Coptic areas of Tigray, with detailed documentation of the Danakil region, from which the salt-picker caravans departed. The study was conceived to understand the relationships and consequences of the Chinese advance in these areas, which are inhabited by tribes linked to ancient, still-practiced religious rituals and are home to unique landscapes and archaeological sites. Official estimates of the number of Chinese in Africa vary widely; on the continent, there are increasingly diverse groups of Chinese migrants in terms of language, dialect, class, education, and employment. Based on this, and on a very general state of the art, it was decided to expand the study of this phenomenon, focusing on one of the most interesting countries for its diversity, cultural wealth, and strong Chinese presence: Ethiopia. The study will integrate interdisciplinary methods of investigation, such as landscape archaeology, historiographic research, participatory anthropology, geopolitics, cultural anthropology and ethnology. The research has two main objectives. The first is to predict what will happen to these populations and how the territory will be modified, by monitoring the growth of infrastructure in the country and its effects on the population. Risk analyses will be carried out to understand what the foreign presence may entail, such as the loss of sustenance for local populations, the ghettoization of foreigners, unemployment among natives and an exodus of the population to the capital; the relationships between families and the local population will be analyzed to understand the dynamics of socialization and interaction.
Using GIS, the areas affected by the Chinese presence will be geo-referenced and mapped, delimiting the areas most affected and supporting a risk analysis, both in desert areas and in archaeologically and historically relevant areas. The second objective is to document the life and rituals of Ethiopian populations so as not to lose aspects of uniqueness that risk disappearing. Local interviews will collect impressions and criticisms from the population to understand whether the Chinese presence is perceived as a threat or as a solution. Furthermore, in truly exclusive research, Afar leaders in the Logya area will be interviewed to understand their links with the foreign presence. From the north, along the Saba River, the study will move northwest to the Tigray region to gauge impressions in the Coptic area, currently less threatened by the Chinese presence but still affected by urbanization proposals. The Coptic rituals of Gennà and Timkat, unique expressions of a millennial tradition, will also be documented. This will allow an understanding of whether the Chinese presence could influence the religious rites and forms of belief present in the country, or whether the country will maintain its cultural independence.

Keywords: Ethiopia, GIS, risk perceptions, salt caravans

Procedia PDF Downloads 158
131 Aberrant Acetylation/Methylation of Homeobox (HOX) Family Genes in Cumulus Cells of Infertile Women with Polycystic Ovary Syndrome (PCOS)

Authors: P. Asiabi, M. Shahhoseini, R. Favaedi, F. Hassani, N. Nassiri, B. Movaghar, L. Karimian, P. Eftekhariyazdi

Abstract:

Introduction: Polycystic ovary syndrome (PCOS) is a common gynecologic disorder. Many factors, including environment, metabolism, hormones and genetics, are involved in the etiopathogenesis of PCOS. Among the genes with altered expression in human reproductive system disorders are the HOX family genes, which act as transcription factors regulating cell proliferation, differentiation, adhesion and migration. Since recent evidence implicates epigenetic factors as causative mechanisms of PCOS, evaluating the association of the known epigenetic marks of histone 3 acetylation/methylation (H3K9ac/me) with the regulatory regions of these genes can provide better insight into PCOS. In the current study, cumulus cells (CCs), which play critical roles during folliculogenesis, oocyte maturation, ovulation and fertilization, were used to monitor epigenetic alterations of HOX genes. Material and methods: CCs were collected from 20 PCOS patients and 20 fertile women (18-36 years) from couples with male-factor infertility, referred to the Royan Institute for ICSI under a GnRH antagonist protocol. Informed consent was obtained from the participants. Thirty-six hours after hCG injection, the ovaries were punctured and cumulus-oocyte complexes were dissected. Soluble chromatin was extracted from the CCs, and chromatin immunoprecipitation (ChIP) coupled with real-time PCR was performed to quantify the epigenetic marks of histone H3K9 acetylation/methylation (H3K9ac/me) on the regulatory regions of 15 HOX genes from the A-D subfamilies. Results: The data showed a significant increase of the H3K9ac mark on the regulatory regions of HOXA1, HOXB2, HOXC4, HOXD1, HOXD3 and HOXD4 (P < 0.01) and HOXC5 (P < 0.05), and a significant decrease of H3K9ac in the regulatory regions of HOXA2, HOXA4, HOXA5, HOXB1 and HOXB5 (P < 0.01) and HOXB3 (P < 0.05) in PCOS patients vs. the control group.
On the other hand, there was a significant decrease in the H3K9me level on the regulatory regions of HOXA2, HOXA3, HOXA4, HOXA5, HOXB3 and HOXC4 (P ≤ 0.01) and HOXB5 (P < 0.05) in PCOS patients vs. the control group, while this mark showed a significant increase on the regulatory regions of HOXB1, HOXB2, HOXC5, HOXD1, HOXD3 and HOXD4 (P ≤ 0.01) and HOXB4 (P < 0.05) in patients vs. the control group. There were no significant changes in the H3K9 acetylation/methylation levels on the regulatory regions of the other studied genes. Conclusion: The current study suggests that epigenetic alterations of HOX genes may be correlated with PCOS and, consequently, female infertility. This finding might refine the definition of PCOS and eventually provide insight for novel treatments of this disease with epidrugs.

Keywords: epigenetic, HOX genes, PCOS, female infertility

Procedia PDF Downloads 296
130 Importance of Macromineral Ratios and Products in Association with Vitamin D in Pediatric Obesity Including Metabolic Syndrome

Authors: Mustafa M. Donma, Orkide Donma

Abstract:

The metabolism of the macrominerals calcium, phosphorus and magnesium is closely associated with the metabolism of vitamin D. Magnesium in particular, the second most abundant intracellular cation, is involved in biochemical and metabolic processes in the body, such as those of carbohydrates, proteins and lipids. The status of each mineral in obesity has been investigated to some extent; their products and ratios may give much more detailed information about the matter. The aim of this study is to investigate possible relations between each macromineral and some obesity-related parameters. The study was performed on 235 children aged 6-18 years. Aside from anthropometric measurements, hematological analyses were performed. A TANITA body composition monitor using bioelectrical impedance analysis technology was used to establish some obesity-related parameters, including basal metabolic rate (BMR) and total fat, mineral and muscle masses. World Health Organization body mass index (BMI) percentiles for age and sex were used to constitute the groups. Values above the 99th percentile were defined as morbid obesity; those between the 95th and 99th percentiles were included in the obese group; the overweight group comprised children between the 85th and 95th percentiles; and children between the 15th and 85th percentiles were defined as normal. Metabolic syndrome (MetS) components (waist circumference, fasting blood glucose, triacylglycerol, high-density lipoprotein cholesterol, systolic pressure, diastolic pressure) were determined. High-performance liquid chromatography was used to determine vitamin D status by measuring 25-hydroxycholecalciferol (25-hydroxyvitamin D3, 25(OH)D); vitamin D values above 30.0 ng/ml were accepted as sufficient. The SPSS statistical package was used for data evaluation, with statistical significance accepted at p < 0.05.
Notable were the correlations found between vitamin D and magnesium, as well as phosphorus (p < 0.05), in the group with normal BMI values; these correlations were lost in the other groups. The ratio of phosphorus to magnesium was even more highly correlated with vitamin D (p < 0.001). The negative correlation between magnesium and total fat mass (p < 0.01) was confined to the MetS group, showing the inverse relationship between magnesium levels and degree of obesity. In this group, the calcium*magnesium product exhibited the highest correlation with total fat mass (p < 0.001) among all groups. Only in the MetS group was a negative correlation found between BMR and the calcium*magnesium product (p < 0.05). In conclusion, magnesium stands at the center of the relationships among vitamin D, fat mass and MetS. Ratios and products derived from macrominerals, including magnesium, pointed to stronger associations than each element alone. The unique correlations of magnesium, as well as of the calcium*magnesium product, with total fat mass drew attention particularly in the MetS group, possibly due to derangements in some basic elements of carbohydrate and lipid metabolism.

Keywords: macrominerals, metabolic syndrome, pediatric obesity, vitamin D

Procedia PDF Downloads 90
129 Motivations, Communication Dimensions, and Perceived Outcomes in the Multi-Sectoral Collaboration of the Visitor Management Program of Mount Makiling Forest Reserve in Los Banos, Laguna, Philippines

Authors: Charmaine B. Distor

Abstract:

Collaboration has long been recognized in different fields, but to the author's best knowledge, there has been little research on operationalizing it, especially in a multi-sectoral setting. Communication is also one of the factors usually overlooked in its study. Specifically, this study aimed to describe the organizational profile and tasks of collaborators in the visitor management program of Make It Makiling (MIM). It also identified the factors that motivated collaborators to collaborate in MIM, determined the communication dimensions in the collaborative process, determined the communication channels used by collaborators, and identified the outcomes of collaboration in MIM. The study also examined whether relationships exist between collaborators' motivations for collaboration and their perceived outcomes of collaboration, and between collaborators' communication dimensions and their perceived outcomes. Lastly, it provided recommendations to improve communication in MIM. Data were gathered using a self-administered survey patterned after Mattessich and Monsey's (1992) collaboration experience questionnaire. Interviews and secondary sources, mainly provided by the Makiling Center for Mountain Ecosystems (MCME), were also used. From the seven MIM collaborating organizations, selected through purposive sampling, 86 respondents were chosen. Data were then analyzed through frequency counts, percentages, measures of central tendency, and Pearson's and Spearman's rho correlations. Collaborators' length of collaboration ranged from seven to twenty years, and six of the seven collaborators were involved in the task of 'emergency, rescue, and communication'. Among the antecedents, a history of previous collaboration efforts ranked as the highest-rated motivation for collaboration.
In line with this, the top communication dimension was governance, while perceived effectiveness garnered the highest overall average among the perceived outcomes of collaboration. Results also showed that the collaborators rely heavily on formal communication channels: meetings and memos were the most commonly used channels throughout all tasks under the four phases of MIM. Additionally, although collaborators hold their co-collaborators in high regard, they still rely on MCME to act as their manager when coordinating with one another indirectly. Based on the correlation analysis, the antecedent (motivations)-outcome relationships were generally positive. However, for the process (communication dimensions)-outcome relationships, both positive and negative relationships were observed. In conclusion, this study exhibited the same trend as existing literature that used the same framework. For the antecedent-outcome relationship, it can be deduced that MCME, as the main organizer of MIM, can focus on these variables to achieve its desired outcomes because of the positive relationships. For the process-outcome relationship, MCME should also note the negative relationships, where an increase in a given communication dimension may result in a decrease in the desired outcome. Recommendations for further study include a methodology with complete enumeration or parametric sampling, a researcher-administered survey, and direct observation; these might require additional funding, but all may yield richer data.
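The Pearson's and Spearman's rho correlations used in the analysis can be computed as below; the score arrays are hypothetical survey values, not the study's data.

```python
# Pearson's and Spearman's rho correlations between motivation scores and
# perceived-outcome scores. Hypothetical Likert-style averages for
# illustration only.
import numpy as np
from scipy import stats

motivation_scores = np.array([3.2, 4.1, 3.8, 4.5, 2.9, 4.0, 3.5])  # hypothetical
outcome_scores = np.array([3.0, 4.3, 3.6, 4.4, 3.1, 3.9, 3.4])     # hypothetical

pearson_r, pearson_p = stats.pearsonr(motivation_scores, outcome_scores)
spearman_rho, spearman_p = stats.spearmanr(motivation_scores, outcome_scores)

print(f"Pearson r = {pearson_r:.2f} (p = {pearson_p:.4f})")
print(f"Spearman rho = {spearman_rho:.2f} (p = {spearman_p:.4f})")
```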

Keywords: antecedent-outcome relationship, carrying capacity, organizational communication, process-outcome relationship

Procedia PDF Downloads 100
128 The Power of in situ Characterization Techniques in Heterogeneous Catalysis: A Case Study of Deacon Reaction

Authors: Ramzi Farra, Detre Teschner, Marc Willinger, Robert Schlögl

Abstract:

Introduction: The conventional approach of characterizing solid catalysts under static conditions, i.e., before and after reaction, does not provide sufficient knowledge of the physicochemical processes occurring under dynamic conditions at the molecular level. Hence, developing new in situ characterization techniques that can be used under real catalytic reaction conditions is highly desirable. In situ prompt gamma activation analysis (PGAA) is a rapidly developing chemical analytical technique that enables us to experimentally assess the coverage of surface species under catalytic turnover and correlate it with reactivity. The catalytic HCl oxidation (Deacon reaction) over bulk ceria serves as our example. Furthermore, in situ transmission electron microscopy (TEM) is a powerful technique that can contribute to the study of atmosphere- and temperature-induced morphological or compositional changes of a catalyst at atomic resolution. The application of such techniques (PGAA and TEM) paves the way to a deeper understanding of the dynamic nature of active catalysts. Experimental/Methodology: In situ PGAA experiments were carried out to determine the Cl uptake and the degree of surface chlorination under reaction conditions by varying p(O2), p(HCl), p(Cl2), and the reaction temperature. The abundance and dynamic evolution of OH groups on the working catalyst under various steady-state conditions were studied by means of in situ FTIR with a specially designed, homemade transmission cell. For in situ TEM, we use a commercial in situ holder with a home-built gas feeding system and gas analytics. Conclusions: Two complementary in situ techniques, in situ PGAA and in situ FTIR, were utilized to investigate the surface coverage of the two most abundant species (Cl and OH).
The OH density and Cl uptake were followed under multiple steady-state conditions as a function of p(O2), p(HCl), p(Cl2), and temperature. These experiments showed that OH density correlates positively with reactivity, whereas Cl coverage correlates negatively. The p(HCl) experiments gave rise to increased activity accompanied by an increase in Cl coverage (the opposite trend to p(O2) and T). Cl2 strongly inhibits the reaction, but no measurable increase in Cl uptake was found. Considering all of these observations, we conclude that only a minority of the available adsorption sites contribute to the reactivity. In addition, a mechanism for the catalyzed reaction is proposed: the chlorine-oxygen competition for the available active sites renders re-oxidation the rate-determining step. Further investigations using in situ TEM are planned for the near future; such experiments will allow us to monitor active catalysts at the atomic scale under the most realistic conditions of temperature and pressure. The talk will shed light on the potential and limitations of in situ PGAA and in situ TEM in the study of catalyst dynamics.

Keywords: CeO2, deacon process, in situ PGAA, in situ TEM, in situ FTIR

Procedia PDF Downloads 267
127 The Intensity of Root and Soil Respiration Is Significantly Determined by the Organic Matter and Moisture Content of the Soil

Authors: Zsolt Kotroczó, Katalin Juhos, Áron Béni, Gábor Várbíró, Tamás Kocsis, István Fekete

Abstract:

Soil organic matter plays an extremely important role in the functioning and regulation processes of ecosystems; the C content of soil organic matter is therefore one of the most important indicators of soil fertility. Part of the carbon stored in soils is returned to the atmosphere during soil respiration, and climate change and inappropriate land use can accelerate these processes. Our work aimed to determine how soil CO2 emissions change over ten years as a result of organic matter manipulation treatments. This allowed us to examine not only the effects of different organic matter inputs but also the effects of the different microclimates that arise from the treatments. We carried out our investigations in the area of the Síkfőkút DIRT (Detritus Input and Removal Treatment) Project. The research area is located in the southern, hilly landscape of the Bükk Mountains, northeast of Eger (Hungary); GPS coordinates: 47°55′34′′ N, 20°26′29′′ E, altitude 320-340 m. The soil of the area is a Luvisol. The 27-hectare protected forest area is now under the supervision of the Bükki National Park. The experimental plots in Síkfőkút were established in 2000: six litter manipulation treatments, each with three 7×7 m replicate plots under complete canopy cover. Two treatments added detritus (Double Wood, DW; Double Litter, DL), three removed detritus inputs (No Litter, NL; No Roots, NR; No Inputs, NI), and the sixth served as the Control (Co). After the establishment of the plots, the NR and NI treatments showed the highest CO2 emissions during the drier periods. In the first few years, the effect of this process was evident because, in the absence of living vegetation, evapotranspiration on the NR and NI plots was much lower, and transpiration practically ceased on these plots.
In the wetter periods, the NL and NI treatments showed the lowest soil respiration values, significantly lower than in the Co, DW, and DL treatments. Due to their lower organic matter content and the lack of surface litter cover, the water storage capacity of these soils was significantly limited; accordingly, after ten years we measured the lowest average moisture content among the treatments there. Soil respiration is significantly influenced by temperature. Furthermore, the nutrient supply of the soil microorganisms is also a determining factor, which in this case is governed by the litter production dictated by the treatments. In dry soils with a moisture content of less than 20% in the initial period, soil respiration in the litter removal treatments showed a strong correlation with soil moisture (r = 0.74). In very dry soils, a small increase in moisture does not cause a significant increase in soil respiration, while it does in a slightly higher moisture range. In wet soils, temperature is the main regulating factor: above a certain moisture limit, water displaces soil air from the pores, which inhibits aerobic decomposition processes, and heterotrophic soil respiration also declines.

Keywords: soil biology, organic matter, nutrition, DIRT, soil respiration

Procedia PDF Downloads 43
126 Inverted Diameter-Limit Thinning: A Promising Alternative for Mixed Populus tremuloides Stands Management

Authors: Ablo Paul Igor Hounzandji, Benoit Lafleur, Annie DesRochers

Abstract:

Introduction: Populus tremuloides Michx. regenerates rapidly and abundantly by root suckering after harvest, creating stands with interconnected stems. Pre-commercial thinning can be used to concentrate growth on fewer stems so that they reach merchantability faster than in un-thinned stands. However, conventional thinning methods are typically designed to achieve even spacing between residual stems (1,100 stems ha⁻¹, evenly distributed), which can leave treated stands consisting of weaker or smaller stems than the original stands. Given the nature of P. tremuloides regeneration, with its large underground biomass of interconnected roots, inverted diameter-limit thinning, which keeps the largest and most vigorous stems regardless of their spatial distribution, could be more beneficial to post-thinning stand productivity because it reduces the imbalance between roots and leaf area caused by thinning. Aims: This study aimed to compare stand and stem productivity of P. tremuloides stands thinned with a conventional thinning treatment (CT; 1,100 stems ha⁻¹, evenly distributed), two levels of inverted diameter-limit thinning (DL1 and DL2, keeping the largest 1,100 or 2,200 stems ha⁻¹, respectively, regardless of their spatial distribution), and an un-thinned control. Because the DL treatments can create substantial or frequent gaps in the thinned stands, we also aimed to evaluate the potential of this treatment to recreate mixed conifer-broadleaf stands by fill-planting Picea glauca seedlings. Methods: Three replicate 21-year-old sucker-regenerated aspen stands were thinned in 2010 according to four treatments: CT, DL1, DL2, and un-thinned control. Picea glauca seedlings were underplanted in gaps created by the DL1 and DL2 treatments. Stand productivity per hectare, stem quality (diameter, height, and volume per stem), and the survival and height growth of the fill-planted P. glauca seedlings were measured 8 years post-treatment.
Results: Productivity, volume, diameter, and height were greater in the treated stands (CT, DL1, and DL2) than in the un-thinned control. Productivity of CT and DL1 stands was similar (4.8 m³ ha⁻¹ year⁻¹). At the tree level, diameter and height of the trees in the DL1 treatment were 5% greater than those in the CT treatment, and the average volume of trees in the DL1 treatment was 11% higher than in the CT treatment. Survival after 8 years of fill-planted P. glauca seedlings was 2% greater in the DL1 than in the DL2 treatment. The DL1 treatment also produced taller seedlings (+20 cm). Discussion: Results showed that DL treatments were effective in producing post-thinned stands with larger stems without affecting stand productivity. In addition, we showed that these treatments are suitable for introducing slower-growing conifer seedlings such as Picea glauca in order to re-create or maintain mixed stands despite the aggressive nature of P. tremuloides sucker regeneration.

Keywords: aspen, inverted diameter-limit, mixed forest, Populus tremuloides, silviculture, thinning

Procedia PDF Downloads 121
125 Comparison of a Capacitive Sensor Functionalized with Natural or Synthetic Receptors Selective towards Benzo(a)Pyrene

Authors: Natalia V. Beloglazova, Pieterjan Lenain, Martin Hedstrom, Dietmar Knopp, Sarah De Saeger

Abstract:

In recent years, polycyclic aromatic hydrocarbons (PAHs), which represent a hazard to humans and entire ecosystems, have been receiving increased interest due to their mutagenic, carcinogenic, and endocrine-disrupting properties. They are formed in all incomplete combustion processes of organic matter and are, as a consequence, ubiquitous in the environment. Benzo(a)pyrene (BaP) is on the priority list published by the US Environmental Protection Agency (US EPA) as the first PAH to be identified as a carcinogen and has often been used as a marker for PAH contamination in general. It can be found in different types of water samples; therefore, the European Commission set a limit value of 10 ng L⁻¹ (10 ppt) for BaP in water intended for human consumption. Generally, different chromatographic techniques are used for PAH determination, but these assays require pre-concentration of the analyte, create large amounts of solvent waste, and are relatively time-consuming and difficult to perform on-site. An alternative robust, stand-alone, and preferably cheap solution is needed: for example, a sensing unit which can be submerged in a river to monitor and continuously sample BaP. An affinity sensor based on capacitive transduction was therefore developed. Natural antibodies or their synthetic analogues can be used as ligands. Ideally, the sensor should operate independently over a longer period of time, e.g., several weeks or months; therefore, the use of molecularly imprinted polymers (MIPs) was considered. MIPs are synthetic antibodies which are selective for a chosen target molecule. Their robustness allows application in environments for which biological recognition elements are unsuitable or denature, and they can be reused multiple times, which is essential to meet the stand-alone requirement. BaP is a highly lipophilic compound and does not contain any functional groups in its structure, thus excluding non-covalent imprinting methods based on ionic interactions.
Instead, the MIP syntheses were based on non-covalent hydrophobic and π-π interactions. Different polymerization strategies were compared, and the best results were demonstrated by the MIPs produced using electropolymerization. 4-Vinylpyridine (VP) and divinylbenzene (DVB) were used as monomer and cross-linker in the polymerization reaction. The selectivity and recovery of the MIP were compared to those of a non-imprinted polymer (NIP). Electrodes were functionalized with a natural receptor (monoclonal anti-BaP antibody) and with MIPs selective towards BaP. Different sets of electrodes were evaluated, and their properties, such as sensitivity, selectivity, and linear range, were determined and compared. It was found that both receptors can reach a cut-off level comparable to the established maximum limit, and although the antibody showed better cross-reactivity and affinity, the MIPs were the more convenient receptor owing to their regenerability and their stability in river water for up to 7 days.

Keywords: antibody, benzo(a)pyrene, capacitive sensor, MIPs, river water

Procedia PDF Downloads 288
124 Analysis of the Evolution of the Behavior of Land Users Linked to the Surge in the Prices of Cash Crops: Case of the Northeast Region of Madagascar

Authors: Zo Hasina Rabemananjara

Abstract:

The North-East of Madagascar is the pillar of Madagascar's foreign trade, providing 41% and 80% of world exports of cloves and vanilla, respectively, in 2016. The north-eastern escarpment is home to the island's last large-scale humid forest massifs, surrounded by a small-scale agricultural mosaic. In the sites where this study took place, located in the peripheral zones of protected areas, cash-crop production aims to supply international markets. Importers of the cash crops produced in these areas are located mainly in India, Singapore, France, Germany, and the United States. Recently, the prices of these products have increased significantly, especially from the year 2015. For vanilla, the price has skyrocketed, from an approximate price of 73 USD per kilo in 2015 to more than 250 USD per kilo in 2016. The value of clove exports increased sharply by 49.4% in 2017, largely to Singapore and India, due to the sharp increase in exported volume (+47.6%) in 2017. While the relationship between the rise in cash-crop prices and the change in physical environments is known, the evolution of the behavior of land users linked to this aspect had not yet been addressed by research. Indeed, the consequence of this price increase for the organization of land use at the local level still raises questions. Hence, the research question is: to what extent does this improvement in the price of exported products affect user behavior linked to the local organization of access to land as a factor of production? To fully appreciate this change in behavior, surveys of 144 land-user households were carried out, and group interviews were also conducted. The results of this research showed that the rise in cash-crop prices from the year 2015 caused significant changes in the behavior of land users in the study sites.
Young people, who for a long time had not been attracted to farming, have started to show interest in it since the period of rising vanilla and clove prices. They have set up their own vanilla and clove fields. This revival of interest conferred an important value on land and caused conflicts, especially between family members, because cultivated land was acquired by inheritance or donation. This change in user behavior has also affected farmers' life strategies, since they have decided to abandon rain-fed rice farming, long considered a guaranteed subsistence activity, in favor of cash crops. This research will contribute to scientific reflection on the management of land use and also support policy-makers in decision-making on spatial planning.

Keywords: behavior of land users, North-eastern Madagascar, price of export products, spatial planning

Procedia PDF Downloads 94
123 Monitoring of Disease-Vector Mosquitoes in Areas Influenced by Energy Projects in the Amazon (Amapá State), Brazil

Authors: Ribeiro Tiago Magalhães

Abstract:

Objective: The objective of this study was to evaluate the influence of a hydroelectric power plant in the state of Amapá and to present the results obtained by characterizing the diversity of the main mosquito vectors involved in the transmission of pathogens that cause diseases such as malaria, dengue, and leishmaniasis. Methodology: The study was conducted on the banks of the Araguari River, in the municipalities of Porto Grande and Ferreira Gomes in the southern region of Amapá State. Nine monitoring campaigns were conducted, the first in April 2014 and the last in March 2016. Catch sites were selected to prioritize areas with possible occurrence of the species considered of greatest importance for public health and areas of contact between the wild environment and humans. Sampling efforts aimed to identify the local vector fauna and to relate it to the transmission of diseases. Three collection phases were established, covering the times of greatest hematophagous activity. Sampling was carried out using Shannon and CDC light traps and by means of specimen collection using the hold method. This procedure was carried out in the morning (between 08:00 and 11:00), in the afternoon-twilight (between 15:30 and 18:30), and at night (between 18:30 and 22:00). For captures with the CDC equipment, the delimited period was from 18:00 until 06:00 the following day. Results: A total of 32 mosquito species was identified, and the 2,962 collected specimens were taxonomically subdivided into three families (Culicidae, Psychodidae, and Simuliidae) and genera including Psorophora, Sabethes, Simulium, Uranotaenia, and Wyeomyia; specimens of the family Psychodidae, owing to their morphological complexity, could be safely identified (without diaphanization and slide mounting for microscopy) only at the subfamily level (Phlebotominae).
Conclusion: The nine monitoring campaigns provided the basis for outlining the possible epidemiological structure in the areas of influence of the Cachoeira Caldeirão HPP and for identifying, among the established sampling points, those representing the greatest possibilities of disease acquisition according to the group of mosquitoes identified. However, the main consideration should be future events arising from reservoir filling. This argument is based on the fact that the reproductive success of Culicidae is intrinsically related to the aquatic environment, where its larvae develop until adulthood. As the water surface expands into new environments during reservoir formation, the development and hatching of eggs deposited in the substrate can be modified, causing a sudden explosion in the abundance of some genera, especially Anopheles, which prefers denser forest environments close to bodies of water.

Keywords: Amazon, hydroelectric power plants

Procedia PDF Downloads 158
122 Enhancing Food Quality and Safety Management in Ethiopia's Food Processing Industry: Challenges, Causes, and Solutions

Authors: Tuji Jemal Ahmed

Abstract:

Food quality and safety challenges are prevalent in Ethiopia's food processing industry and can have adverse effects on consumers' health and well-being. The country is known for its diverse range of agricultural products, which are essential to its economy. However, poor food quality and safety policies and management systems in the food processing industry have led to health problems, foodborne illnesses, and economic losses. This paper aims to highlight the causes and effects of food safety and quality issues in the food processing industry of Ethiopia and to discuss potential solutions to address these issues. One of the main causes of poor food quality and safety in Ethiopia's food processing industry is the lack of adequate regulations and enforcement mechanisms. The absence of comprehensive food safety and quality policies and guidelines has led to substandard practices in the food manufacturing process. Moreover, the lack of monitoring and enforcement of existing regulations has created a conducive environment for unscrupulous businesses to engage in unsafe practices that endanger the public's health. The effects of poor food quality and safety are significant, including the loss of human lives, increased healthcare costs, and loss of consumer confidence in the food processing industry. Foodborne illnesses, such as diarrhea, typhoid fever, and cholera, are prevalent in Ethiopia, and poor food quality and safety practices contribute significantly to their prevalence. Additionally, food recalls due to contamination or mislabeling often result in significant economic losses for businesses in the food processing industry. To address these challenges, the Ethiopian government has begun to take steps to improve food quality and safety in the food processing industry.
One of the most notable initiatives is the Ethiopian Food and Drug Administration (EFDA), which was established in 2010 to regulate and monitor the quality and safety of food and drug products in the country. The EFDA has implemented several measures to enhance food safety, such as conducting routine inspections, monitoring the importation of food products, and enforcing strict labeling requirements. Another potential solution to improve food quality and safety in Ethiopia's food processing industry is the implementation of food safety management systems (FSMS). An FSMS is a set of procedures and policies designed to identify, assess, and control food safety hazards throughout the food manufacturing process. Implementing an FSMS can help businesses in the food processing industry identify and address potential hazards before they cause harm to consumers. Additionally, the implementation of an FSMS can help businesses comply with existing food safety regulations and guidelines. In conclusion, improving food quality and safety policies and management systems in Ethiopia's food processing industry is critical to protecting public health and enhancing the country's economy. Addressing the root causes of poor food quality and safety and implementing effective solutions, such as the establishment of regulatory agencies and the implementation of food safety management systems, can help to improve the overall safety and quality of the country's food supply.

Keywords: food quality, food safety, policy, management system, food processing industry

Procedia PDF Downloads 56
121 Influence of Torrefied Biomass on Co-Combustion Behaviors of Biomass/Lignite Blends

Authors: Aysen Caliskan, Hanzade Haykiri-Acma, Serdar Yaman

Abstract:

Co-firing of coal and biomass blends is an effective method to reduce the carbon dioxide emissions released by burning coal, thanks to the carbon-neutral nature of biomass. Besides, the use of biomass, a renewable and sustainable energy resource, mitigates the dependency on fossil fuels for power generation. However, most biomass species have negative aspects such as low calorific value and high moisture and volatile matter contents compared to coal. Torrefaction is a promising technique for upgrading the fuel properties of biomass through thermal treatment; it improves the calorific value of biomass along with substantial reductions in moisture and volatile matter contents. In this context, several woody biomass materials, including Rhododendron, hybrid poplar, and ash-tree, were subjected to a torrefaction process in a horizontal tube furnace at 200°C under nitrogen flow. The solid residue obtained from torrefaction, also called 'biochar', was analyzed to monitor the variations taking place in biomass properties. On the other hand, Turkish lignites from the Elbistan, Adıyaman-Gölbaşı, and Çorum-Dodurga deposits were chosen as coal samples since these lignites are of great importance to lignite-fired power stations in Turkey. These lignites were blended with the obtained biochars, with the blending ratio of biochar kept at 10 wt% so that the lignites were the dominant constituents in the fuel blends. Burning tests of the lignites, biomasses, biochars, and blends were performed using a thermogravimetric analyzer up to 900°C with a heating rate of 40°C/min under a dry air atmosphere. Based on these burning tests, properties relevant to burning characteristics, such as burning reactivity and burnout yields, were compared to assess the effects of torrefaction and blending.
Besides, some characterization techniques, including X-Ray Diffraction (XRD), Fourier Transform Infrared (FTIR) spectroscopy, and Scanning Electron Microscopy (SEM), were also applied to the untreated biomass and torrefied biomass (biochar) samples, the lignites, and their blends to examine the co-combustion characteristics in detail. Results of this study revealed that blending lignite with 10 wt% biochar created synergistic behaviors during co-combustion in comparison to the individual burning of the ingredient fuels. Burnout and ignition performances of each blend were compared by taking into account the lignite and biomass structures and characteristics, and the blend with the best co-combustion profile and ignition properties was selected. Even though the final burnouts of the lignites decreased due to the addition of biomass, co-combustion is a reasonable and sustainable solution thanks to its environmental benefits, such as reductions in net carbon dioxide (CO2), SOx, and hazardous organic chemicals derived from volatiles.
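The synergy test described above — comparing a blend's measured burning behavior against what the individual fuels would predict additively — can be sketched as follows. All numbers here are illustrative placeholders, not the study's actual TGA data; real mass-loss curves would come from the thermogravimetric analyzer.

```python
import numpy as np

# Hypothetical TGA mass-loss fractions (0-1) for the individual fuels at a
# common set of temperatures; real curves would come from the analyzer.
temps = np.linspace(200, 900, 8)                    # °C
lignite_loss = np.array([0.02, 0.08, 0.20, 0.38, 0.55, 0.68, 0.74, 0.76])
biochar_loss = np.array([0.01, 0.05, 0.15, 0.45, 0.70, 0.85, 0.92, 0.94])
blend_loss   = np.array([0.02, 0.09, 0.22, 0.43, 0.61, 0.73, 0.79, 0.81])

# Additive (no-interaction) prediction for a 90/10 wt% lignite/biochar blend
w_biochar = 0.10
predicted = (1 - w_biochar) * lignite_loss + w_biochar * biochar_loss

# A measured curve consistently above the additive prediction indicates
# synergistic interaction between the blend components during co-combustion.
deviation = blend_loss - predicted
for t, d in zip(temps, deviation):
    print(f"{t:6.0f} °C  deviation {d:+.3f}")
```

The same weighted-sum comparison extends directly to derived quantities such as burnout yield or ignition temperature.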

Keywords: burnout performance, co-combustion, thermal analysis, torrefaction pretreatment

Procedia PDF Downloads 317
120 Recent Findings of Late Bronze Age Mining and Archaeometallurgy Activities in the Mountain Region of Colchis (Southern Lechkhumi, Georgia)

Authors: Rusudan Chagelishvili, Nino Sulava, Tamar Beridze, Nana Rezesidze, Nikoloz Tatuashvili

Abstract:

The South Caucasus is one of the most important centers of prehistoric metallurgy, known for its Colchian bronze culture. Modern Lechkhumi – historical Mountainous Colchis, where the existence of prehistoric metallurgy is confirmed by the discovery of many artifacts – is a part of this area. Studies focused on prehistoric smelting sites, related artefacts, and ore deposits have been conducted over the last ten years in Lechkhumi. More than 20 prehistoric smelting sites and artefacts associated with metallurgical activities (ore roasting furnaces, slags, crucible and tuyère fragments) have been identified so far. Within the framework of integrated studies, it was established that these sites were operating in the 13th–9th centuries B.C. and were used for copper smelting. Palynological studies of slags revealed that chestnut (Castanea sativa) and hornbeam (Carpinus sp.) wood were used as smelting fuel. Geological exploration and analytical studies revealed that copper ore mining, processing, and smelting sites were distributed close to each other. Despite these recent complex data, signs of prehistoric mines (trenches) had not been found in this part of the study area. Since 2018, archaeological-geological exploration has focused on the southern part of Lechkhumi, covering the areas of the villages Okureshi and Opitara. Several copper smelting sites (Okureshi 1 and 2, Opitara 1), as well as a Colchian Bronze culture settlement, have been identified here. Three mine workings have been found in the narrow gorge of the river Rtkhmelebisgele in the vicinity of the village Opitara. In order to establish a link between the Opitara-Okureshi archaeometallurgical sites, Late Bronze Age settlements, and mines, various analytical methods – petrography of mineralized rocks and slags and atomic absorption spectrophotometry (AAS) – have been applied.
The careful examination of the Opitara mine workings revealed a striking difference between mine #1 on the right bank of the river and mines #2 and #3 on the left bank. The first has all the characteristic features of a Soviet-period mine working (e.g., a high portal with angular ribs and a roof showing signs of blasting). In contrast, mines #2 and #3, which are located very close to each other, have round-shaped portals/entrances, low roofs, and fairly smooth ribs, and are filled with thick layers of river sediments and collapsed weathered rock mass. A thorough review of the publications related to prehistoric mine workings revealed striking similarities between mines #2 and #3 and their worldwide analogues. Apparently, ore extraction from these mines was conducted by fire-setting, applying primitive tools. It was also established that the mines are cut into Jurassic mineralized volcanic rocks. Ore minerals (chalcopyrite, pyrite, galena) are related to calcite and quartz veins. The results obtained through the petrochemical and petrographic studies of mineralized rock samples from the Opitara mines and of prehistoric slags are in complete agreement with each other, establishing a direct link between copper mining and smelting within the study area. Acknowledgment: This work was supported by the Shota Rustaveli National Science Foundation of Georgia (grant # FR-19-13022).

Keywords: archaeometallurgy, Mountainous Colchis, mining, ore minerals

Procedia PDF Downloads 156
119 Predicting Provider Service Time in Outpatient Clinics Using Artificial Intelligence-Based Models

Authors: Haya Salah, Srinivas Sharan

Abstract:

Healthcare facilities use appointment systems to schedule their appointments and to manage access to their medical services. With the growing demand for outpatient care, it is now imperative to manage physicians' time effectively. However, high variation in consultation duration affects the clinical scheduler's ability to estimate the appointment duration and allocate provider time appropriately. Underestimating consultation times can lead to physician burnout, misdiagnosis, and patient dissatisfaction. On the other hand, appointment durations that are longer than required lead to doctor idle time and fewer patient visits. Therefore, a good estimation of consultation duration has the potential to improve timely access to care, resource utilization, quality of care, and patient satisfaction. Although the literature on factors influencing consultation length abounds, little work has been done to predict it using data-driven approaches. This study therefore aims to predict consultation duration using supervised machine learning (ML) algorithms, which predict an outcome variable (e.g., consultation duration) from potential features that influence the outcome. In particular, ML algorithms learn from a historical dataset without being explicitly programmed and uncover the relationship between the features and the outcome variable. A subset of the data used in this study was obtained from the electronic medical records (EMR) of four different outpatient clinics located in central Pennsylvania, USA. In addition, publicly available information on doctors' characteristics, such as gender and experience, was extracted from online sources. This research develops three popular ML algorithms (deep learning, random forest, gradient boosting machine) to predict the treatment time required for a patient and conducts a comparative analysis of these algorithms with respect to predictive performance.
The findings of this study indicate that ML algorithms have the potential to predict provider service time with superior accuracy. While the current approach of experience-based appointment duration estimation adopted by the clinics resulted in a mean absolute percentage error (MAPE) of 25.8%, the deep learning algorithm developed in this study yielded the best performance with a MAPE of 12.24%, followed by the gradient boosting machine (13.26%) and random forest (14.71%). This research also identified the critical variables affecting consultation duration to be patient type (new vs. established), doctor's experience, zip code, appointment day, and doctor's specialty. Moreover, several practical insights were obtained from the comparative analysis of the ML algorithms. The machine learning approach presented in this study can serve as a decision support tool and could be integrated into the appointment system for effectively managing patient scheduling.
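The model-comparison workflow described above can be sketched as follows. This is a minimal illustration on synthetic visit data: the feature set, dataset, and hyperparameters are assumptions for demonstration, not the study's actual EMR pipeline, and scikit-learn models stand in for the authors' implementations.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(0)
n = 2000
# Illustrative features: patient type (new=1), provider experience (years),
# appointment weekday, specialty code
X = np.column_stack([
    rng.integers(0, 2, n),
    rng.uniform(1, 30, n),
    rng.integers(0, 5, n),
    rng.integers(0, 4, n),
])
# Synthetic consultation duration (minutes) with noise
y = 15 + 10 * X[:, 0] - 0.2 * X[:, 1] + rng.normal(0, 3, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for name, model in [("random forest", RandomForestRegressor(random_state=0)),
                    ("gradient boosting", GradientBoostingRegressor(random_state=0))]:
    model.fit(X_tr, y_tr)
    mape = mean_absolute_percentage_error(y_te, model.predict(X_te))
    print(f"{name}: MAPE = {mape:.2%}")
```

Comparing each model's held-out MAPE against the clinic's experience-based baseline, as the study does, turns the choice of scheduler into a measurable decision.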

Keywords: clinical decision support system, machine learning algorithms, patient scheduling, prediction models, provider service time

Procedia PDF Downloads 95
118 Developing a Three-Dimensional Digital Image Correlation Method to Detect Crack Variation at the Joints of Welded Steel Plate

Authors: Ming-Hsiang Shih, Wen-Pei Sung, Shih-Heng Tung

Abstract:

The purpose of a hydraulic gate is to store and drain water. It bears long-term hydraulic pressure and earthquake forces and is very important for reservoirs and hydropower plants. High-tensile-strength steel plate is used as the construction material of hydraulic gates. Cracks and rust, induced by material defects, poor construction, seismic excitation, and underwater service, cause stress concentration and high crack growth rates, affecting the safety and usability of hydroelectric power plants. Stress distribution analysis is therefore a very important and essential surveying technique for analyzing bi-material and singular-point problems. The finite difference infinitely small element method has been demonstrated to be suitable for analyzing the buckling phenomena of welding seams and steel plates with cracks; in particular, this method can easily analyze the singularity of a kink crack. Nevertheless, the construction form and deformation shape of some gates are three-dimensional. Therefore, three-dimensional Digital Image Correlation (DIC) has been developed and applied to analyze the strain variation of steel plate with a crack at the weld joint. Digital image correlation is a non-contact method for measuring the deformation of a test object. With the rapid development of digital cameras, the cost of this technique has been reduced. Moreover, DIC offers wide practical applicability in both laboratory and field tests without restriction on the size of the test object. Thus, the purpose of this research is to develop and apply this technique to monitor crack variations of welded steel hydraulic gates and their deformation under loading.
Images can be extracted from the real-time monitoring process to analyze the strain change at each loading stage. The proposed three-dimensional digital image correlation method developed in this study is applied to analyze the post-buckling phenomenon and buckling tendency of a welded steel plate with a crack. The stress intensity of the three-dimensional analysis of different materials and enhanced materials in steel plate is then analyzed in this paper. The test results show that the proposed three-dimensional DIC method can precisely detect the crack variation of a welded steel plate under different loading stages. In particular, the proposed DIC method can detect and identify the crack position and other flaws of the welded steel plate that traditional test methods can hardly detect. Therefore, the proposed three-dimensional DIC method can be applied to observe the mechanical behavior of composite materials subjected to loading and operation.
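The subset-matching step at the core of any DIC method can be sketched as below: a reference subset is located in the deformed image by maximizing zero-normalized cross-correlation (ZNCC). This is a simplified 2-D, integer-pixel illustration on a synthetic speckle image; the authors' 3-D method additionally involves stereo calibration and sub-pixel interpolation, which are omitted here.

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation between two equal-size patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom else 0.0

def match_subset(ref, deformed, top, left, size, search=5):
    """Find the integer-pixel displacement of the subset at (top, left)."""
    subset = ref[top:top + size, left:left + size]
    best, best_uv = -np.inf, (0, 0)
    for du in range(-search, search + 1):
        for dv in range(-search, search + 1):
            cand = deformed[top + du:top + du + size,
                            left + dv:left + dv + size]
            if cand.shape != subset.shape:
                continue  # candidate window fell outside the image
            score = zncc(subset, cand)
            if score > best:
                best, best_uv = score, (du, dv)
    return best_uv

# Synthetic speckle pattern shifted by (2, 3) pixels to mimic deformation
rng = np.random.default_rng(1)
ref = rng.random((64, 64))
deformed = np.roll(ref, shift=(2, 3), axis=(0, 1))
print(match_subset(ref, deformed, top=20, left=20, size=15))  # expected (2, 3)
```

Repeating this match over a grid of subsets yields the displacement field from which strains are differentiated.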

Keywords: welded steel plate, crack variation, three-dimensional digital image correlation (DIC), cracked steel plate

Procedia PDF Downloads 498
117 Nanoparticle Exposure Levels in Indoor and Outdoor Demolition Sites

Authors: Aniruddha Mitra, Abbas Rashidi, Shane Lewis, Jefferson Doehling, Alexis Pawlak, Jacob Schwartz, Imaobong Ekpo, Atin Adhikari

Abstract:

Working or living close to demolition sites can increase the risk of dust-related health problems. Demolition of concrete buildings may produce crystalline silica dust, which is associated with a broad range of respiratory diseases, including silicosis and lung cancers. Previous studies demonstrated significant associations between demolition dust exposure and an increase in the incidence of mesothelioma or asbestos cancer. Dust is a generic term for minute solid particles, typically <500 µm in diameter. Dust particles in demolition sites vary over a wide range of sizes. Larger particles tend to settle out of the air, whereas the smaller and lighter solid particles remain dispersed in the air for long periods and pose sustained exposure risks. Submicron ultrafine particles and nanoparticles are respirable deep into the alveoli, beyond the body's natural respiratory cleaning mechanisms such as cilia and mucous membranes, and are likely to be retained in the lower airways. To our knowledge, how various demolition tasks release nanoparticles is largely unknown, and previous studies mostly focused on coarse dust, PM2.5, and PM10. The general belief is that the dust generated during demolition tasks consists mostly of large particles formed through crushing, grinding, or sawing of various concrete and wooden structures. Therefore, little consideration has been given to the generated submicron ultrafine and nanoparticles and their exposure levels. These data are, however, critically important because recent laboratory studies have demonstrated the cytotoxicity of nanoparticles on lung epithelial cells. The above-described knowledge gaps were addressed in this study using a newly developed nanoparticle monitor at two adjacent indoor and outdoor building demolition sites in southern Georgia.
Nanoparticle levels were measured (n = 10) by a TSI NanoScan SMPS Model 3910 at four different distances (5, 10, 15, and 30 m) from the work location as well as at control sites. Temperature and relative humidity levels were recorded. Indoor demolition work included acetylene torch use, masonry drilling, ceiling panel removal, and other miscellaneous tasks, whereas outdoor demolition work included acetylene torch and skid-steer loader use to remove an HVAC system. Concentration ranges of nanoparticles in 13 particle size channels at the indoor demolition site were: 11.5 nm: 63 – 1,054/cm³; 15.4 nm: 170 – 1,690/cm³; 20.5 nm: 321 – 730/cm³; 27.4 nm: 740 – 3,255/cm³; 36.5 nm: 1,220 – 17,828/cm³; 48.7 nm: 1,993 – 40,465/cm³; 64.9 nm: 2,848 – 58,910/cm³; 86.6 nm: 3,722 – 62,040/cm³; 115.5 nm: 3,732 – 46,786/cm³; 154 nm: 3,022 – 21,506/cm³; 205.4 nm: 12 – 15,482/cm³; 273.8 nm:

Keywords: demolition dust, industrial hygiene, aerosol, occupational exposure

Procedia PDF Downloads 404
116 Meta-Analysis of Previously Unsolved Cases of Aviation Mishaps Employing Molecular Pathology

Authors: Michael Josef Schwerer

Abstract:

Background: Analyzing any aircraft accident is mandatory based on the regulations of the International Civil Aviation Organization and the respective country's criminal prosecution authorities. Legal medicine investigations are unavoidable when fatalities involve the flight crew or when doubts arise concerning the pilot's aeromedical health status before the event. As a result of frequently tremendous blunt and sharp force trauma from the impact of the aircraft with the ground, consecutive blast or fire exposure of the occupants, or putrefaction of the dead bodies in cases of delayed recovery, relevant findings can be masked or destroyed and therefore inaccessible in standard pathology practice, which comprises just forensic autopsy and histopathology. Such cases are at considerable risk of remaining unsolved, without legal consequences for those responsible. Further, no lessons can be drawn from these scenarios to improve flight safety and prevent future mishaps. Aims and Methods: To learn from previously unsolved aircraft accidents, re-evaluations of the investigation files and modern molecular pathology studies were performed. Genetic testing involved predominantly PCR-based analysis of gene regulation, studying DNA promoter methylation, RNA transcription, and post-transcriptional regulation. In addition, the presence or absence of infective agents, particularly DNA and RNA viruses, was studied. Technical adjustments of the molecular genetic procedures were necessary when working with archived sample material, and standards for the proper interpretation of the respective findings had to be settled. Results and Discussion: Additional molecular genetic testing significantly contributes to the quality of forensic pathology assessment in aviation mishaps. Previously undetected cardiotropic viruses can potentially explain, for example, a pilot's sudden incapacitation resulting from cardiac failure or myocardial arrhythmia.
In contrast, negative results for infective agents help rule out concerns about an accident pilot's fitness to fly and support the aeromedical examiner's precedent decision to issue him or her an aeromedical certificate. Care must be taken in the interpretation of genetic testing for pre-existing diseases such as hypertrophic cardiomyopathy or ischemic heart disease. Molecular markers such as mRNAs or miRNAs, which can establish these diagnoses in clinical patients, might be misleading in flight crew members because of adaptive changes in their tissues resulting from, for instance, repeated mild hypoxia during flight. Military pilots especially demonstrate significant physiological adjustments to their somatic burdens in flight, such as cardiocirculatory stress and air combat maneuvers. Their non-pathogenic alterations in gene regulation and expression could easily be mistaken for genuine disease by inexperienced investigators. Conclusions: The growing influence of molecular pathology on legal medicine practice has found its way into aircraft accident investigation. Provided that appropriate quality standards for laboratory work and data interpretation are maintained, forensic genetic testing supports the medico-legal analysis of aviation mishaps and can potentially reduce the number of unsolved events in the future.

Keywords: aviation medicine, aircraft accident investigation, forensic pathology, molecular pathology

Procedia PDF Downloads 19