Search results for: quantification accuracy

2971 Data Centers’ Temperature Profile Simulation Optimized by Finite Elements and Discretization Methods

Authors: José Alberto García Fernández, Zhimin Du, Xinqiao Jin

Abstract:

Nowadays, the data center industry faces strong challenges: increasing processing speed and data capacity while keeping devices at a suitable working temperature without penalizing that capacity. Consequently, the cooling systems of such facilities use a large amount of energy to dissipate the heat generated inside the servers, and developing new cooling techniques or perfecting existing ones would be a great advance for this industry. Installing a matrix of temperature sensors distributed throughout the structure of each server would provide the data required to obtain an instantaneous temperature profile inside it. However, the number of temperature probes required to obtain temperature profiles with sufficient accuracy is very high, and such instrumentation is expensive. Therefore, less intrusive techniques are employed, in which each point that characterizes the server temperature profile is obtained by solving differential equations through simulation, simplifying data collection but increasing the time needed to obtain results. In order to reduce these calculation times, complicated and slow computational fluid dynamics simulations are replaced by simpler and faster finite element method simulations, which solve the Burgers' equation by backward, forward, and central discretization techniques after simplifying the energy and enthalpy conservation differential equations. The discretization methods employed for solving the first- and second-order derivatives of the resulting Burgers' equation are the key to obtaining results with greater or lesser accuracy, regardless of the characteristic truncation error.
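
As a rough illustration of the discretization techniques mentioned above, the sketch below (an assumption for illustration, not the authors' solver) advances a 1D viscous Burgers' equation in time using a backward (upwind) difference for the first-order convective term and a central difference for the second-order diffusive term; the grid size, viscosity, and initial profile are invented parameters.

```python
import numpy as np

# 1D viscous Burgers' equation u_t + u u_x = nu u_xx, explicit time stepping.
nx, nt = 101, 500
dx, dt, nu = 1.0 / (nx - 1), 1e-4, 0.07

x = np.linspace(0.0, 1.0, nx)
u = np.sin(2 * np.pi * x) + 1.5              # assumed initial profile

for _ in range(nt):
    un = u.copy()
    u[1:-1] = (un[1:-1]
               - dt / dx * un[1:-1] * (un[1:-1] - un[:-2])              # backward difference
               + nu * dt / dx**2 * (un[2:] - 2 * un[1:-1] + un[:-2]))   # central difference
    u[0], u[-1] = u[1], u[-2]                # simple zero-gradient boundaries

print(u.min(), u.max())
```

A forward difference for the convective term would use (un[2:] - un[1:-1]) instead; the choice among backward, forward, and central stencils trades stability against the truncation error the abstract refers to.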

Keywords: Burgers' equations, CFD simulation, data center, discretization methods, FEM simulation, temperature profile

Procedia PDF Downloads 148
2970 A Multi-Output Network with U-Net Enhanced Class Activation Map and Robust Classification Performance for Medical Imaging Analysis

Authors: Jaiden Xuan Schraut, Leon Liu, Yiqiao Yin

Abstract:

Computer vision in medical diagnosis has achieved a high level of success in diagnosing diseases with high accuracy. However, conventional classifiers that produce an image-to-label result provide insufficient information for medical professionals to judge, and they raise concerns over the trust and reliability of a model whose results cannot be explained. In order to gain local insight into cancerous regions, separate tasks such as image segmentation need to be implemented to aid doctors in treating patients; this doubles the training time and costs, rendering the diagnosis system inefficient and difficult for the public to accept. To tackle this issue and drive AI-first medical solutions further, this paper proposes a multi-output network that follows a U-Net architecture for the image segmentation output and features an additional convolutional neural network (CNN) module for an auxiliary classification output. Class activation maps (CAMs) provide insight into the feature maps that lead to a convolutional neural network's classification; in the case of lung diseases, the region of interest is enhanced here by U-Net-assisted CAM visualization. Our proposed model therefore combines an image segmentation model and a classifier to crop out only the lung region of a chest X-ray's class activation map, providing a visualization that improves explainability while generating classification results simultaneously, which builds trust in AI-led diagnosis systems. The proposed U-Net model achieves 97.61% accuracy and a Dice coefficient of 0.97 on testing data from the COVID-QU-Ex dataset, which includes both diseased and healthy lungs.
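
For reference, the Dice coefficient reported above is computed from the overlap of predicted and ground-truth masks; a minimal sketch (the masks here are placeholders, not the COVID-QU-Ex data):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|A n B| / (|A| + |B|) for binary segmentation masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

pred = np.random.rand(256, 256) > 0.5    # placeholder predicted lung mask
target = pred.copy()                     # identical masks -> Dice = 1.0
print(dice_coefficient(pred, target))
```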

Keywords: multi-output network model, U-net, class activation map, image classification, medical imaging analysis

Procedia PDF Downloads 179
2969 Current Approach in Biodosimetry: Electrochemical Detection of DNA Damage

Authors: Marcela Jelicova, Anna Lierova, Zuzana Sinkorova, Radovan Metelka

Abstract:

At present, electrochemical methods are used in various research fields, especially for the analysis of biological molecules. This offers the possibility of using the detection of oxidative DNA damage, induced indirectly by γ rays, in biodosimetry. The main goal of our study is to optimize the detection of 8-hydroxyguanine by differential pulse voltammetry. The level of this stable and specific indicator of DNA damage could be determined in DNA isolated from the peripheral blood lymphocytes, plasma, or urine of irradiated individuals. Screen-printed carbon electrodes modified with carboxy-functionalized multi-walled carbon nanotubes were utilized for highly sensitive electrochemical detection of 8-hydroxyguanine. Electrochemical oxidation of 8-hydroxyguanine monitored by differential pulse voltammetry was found to be pH-dependent, and the most intensive signal was recorded at pH 7. After recalculating the current density, several times higher sensitivity was attained in comparison with already published results obtained using screen-printed carbon electrodes with unmodified carbon ink. Subsequently, the modified electrochemical technique was used for the detection of 8-hydroxyguanine in calf thymus DNA samples irradiated by a 60Co gamma source in the dose range from 0.5 to 20 Gy, using various types of sample pretreatment and measurement conditions. This method could serve for fast retrospective quantification of the absorbed dose in cases of accidental exposure to ionizing radiation and may play an important role in biodosimetry.
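
To make the retrospective dose-quantification step concrete, a sketch of a linear calibration over the abstract's 0.5-20 Gy range is shown below; the peak-current values are invented placeholders, not measured data:

```python
import numpy as np

# Illustrative sketch only: fit a linear dose-response calibration so that
# an unknown absorbed dose can be read back from a measured peak current.
doses = np.array([0.5, 1, 2, 5, 10, 20])                   # Gy
peak_current = np.array([0.8, 1.1, 1.9, 4.2, 8.3, 16.5])   # uA, assumed values

slope, intercept = np.polyfit(doses, peak_current, 1)

def estimate_dose(current_uA: float) -> float:
    """Invert the calibration line to estimate absorbed dose (Gy)."""
    return (current_uA - intercept) / slope

print(estimate_dose(6.0))   # dose corresponding to a 6 uA signal
```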

Keywords: biodosimetry, electrochemical detection, voltammetry, 8-hydroxyguanine

Procedia PDF Downloads 263
2968 A More Sustainable Decellularized Plant Scaffold for Lab Grown Meat with Ocean Water

Authors: Isabella Jabbour

Abstract:

The world's population is expected to reach over 10 billion by 2050, creating a significant demand for food production, particularly in the agricultural industry. Cellular agriculture presents a solution to this challenge by producing meat that resembles traditionally produced meat, but with significantly less land use. Decellularized plant scaffolds, such as spinach leaves, have been shown to be suitable edible scaffolds for growing animal muscle, enabling cultured cells to grow and organize into three-dimensional structures that mimic the texture and flavor of conventionally produced meat. However, the use of freshwater to remove the intact extracellular material from these plants remains a concern, particularly when considering scaling up the production process. In this study, two protocols, 1X SDS and Boom Sauce, were used to decellularize spinach leaves with both distilled water and ocean water. The decellularization process was confirmed by histology, which showed an absence of cell nuclei, and by DNA and protein quantification. Results showed that spinach decellularized with ocean water contained 9.9 ± 1.4 ng DNA/mg tissue, which is comparable to the 9.2 ± 1.1 ng DNA/mg tissue obtained with DI water. These findings suggest that spinach leaves decellularized using ocean water hold promise as an eco-friendly and cost-effective scaffold for laboratory-grown meat production, which could ultimately transform the meat industry by providing a sustainable alternative to traditional animal farming practices while reducing freshwater use.
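
The comparability claim can be checked with a two-sample t-test computed from the reported summary statistics; a sketch (the number of replicates n is an assumption, as the abstract does not state it):

```python
from scipy.stats import ttest_ind_from_stats

n = 3  # assumed replicates per condition
t, p = ttest_ind_from_stats(mean1=9.9, std1=1.4, nobs1=n,   # ocean water, ng DNA/mg
                            mean2=9.2, std2=1.1, nobs2=n)   # DI water, ng DNA/mg
print(f"t = {t:.2f}, p = {p:.3f}")   # a large p-value supports comparable DNA removal
```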

Keywords: cellular agriculture, plant scaffold, decellularization, ocean water usage

Procedia PDF Downloads 68
2967 Ulnar Nerve Changes Associated with Carpal Tunnel Syndrome and Effect on Median Versus Ulnar Comparative Studies

Authors: Emmanuel K. Aziz Saba, Sarah S. El-Tawab

Abstract:

Objectives: Carpal tunnel syndrome (CTS) has been found to be associated with high pressure within Guyon's canal. The aim of this study was to assess the involvement of sensory and/or motor ulnar nerve fibers in patients with CTS and whether this affects the accuracy of the median versus ulnar sensory and motor comparative tests. Patients and methods: The present study included 145 CTS hands and 71 asymptomatic control hands. Clinical examination was done for all patients. The following tests were done for the patients and controls: (1) Sensory conduction studies: median nerve, ulnar nerve, dorsal ulnar cutaneous nerve, and median versus ulnar digit (D) four sensory comparative study; (2) Motor conduction studies: median nerve, ulnar nerve, and median versus ulnar motor comparative study. Results: There were no statistically significant differences between the patients and the control group as regards the parameters of the ulnar motor study and the dorsal ulnar cutaneous sensory conduction study. It was found that 17 CTS hands (11.7%) in 17 different patients had ulnar sensory abnormalities. The median versus ulnar sensory and motor comparative studies were abnormal in all these 17 CTS hands. There were statistically significant negative correlations between median motor latency and both ulnar sensory amplitudes recording D5 and D4. There were statistically significant positive correlations between median sensory conduction velocity and both ulnar sensory nerve action potential amplitudes recording D5 and D4. Conclusions: There is ulnar sensory nerve abnormality among CTS patients. This abnormality affects the amplitude of the ulnar sensory nerve action potential. Abnormalities in the ulnar nerve occur in moderate and severe degrees of CTS. This does not affect the accuracy and validity of the median versus ulnar sensory and motor comparative tests for use in the electrophysiological diagnosis of CTS.

Keywords: carpal tunnel syndrome, ulnar nerve, median nerve, median versus ulnar comparative study, dorsal ulnar cutaneous nerve

Procedia PDF Downloads 550
2966 Experimental Optimization in Diamond Lapping of Plasma Sprayed Ceramic Coatings

Authors: S. Gowri, K. Narayanasamy, R. Krishnamurthy

Abstract:

Plasma spraying, from the point of view of value engineering, is considered a cost-effective technique to deposit high-performance ceramic coatings on ferrous substrates for use in the aero, automobile, electronics, and semiconductor industries. High-performance ceramics such as alumina, zirconia, and titania-based ceramics have become a key part of turbine blades, automotive cylinder liners, and microelectronic and semiconductor components due to their ability to insulate and distribute heat. However, as these industries continue to advance, improved methods are needed to increase both the flexibility and speed of ceramic processing in these applications. The ceramics mentioned were individually coated on a structural steel substrate with a NiCr bond coat of 50-70 micron thickness, with the final thickness in the range of 150 to 200 microns. Optimal spray parameters were selected based on bond strength and porosity. The optimally processed specimens were superfinished by lapping using diamond and green SiC abrasives. Interesting results could be observed as follows: green SiC could improve the surface finish of lapped surfaces almost as well as diamond in the case of alumina- and titania-based ceramics, but diamond abrasives improved the surface finish of PSZ better than green SiC did. The conventional random scratches were absent in alumina and titania ceramics, while in PSZ those marks were found to be fewer. The flatness accuracy could be improved by up to 60 to 85%. The surface finish and geometrical accuracy were measured and modeled. Abrasives in the mid-range of particle size improved the surface quality faster and better than particles in the low and high size ranges. From the experimental investigations of the lapping process, the optimal lapping time, abrasive size, lapping pressure, etc. could be evaluated.

Keywords: atmospheric plasma spraying, ceramics, lapping, surface quality, optimization

Procedia PDF Downloads 402
2965 An Absolute Femtosecond Rangefinder for Metrological Support in Coordinate Measurements

Authors: Denis A. Sokolov, Andrey V. Mazurkevich

Abstract:

In the modern world, there is an increasing demand for highly precise measurements in various fields, such as aircraft, shipbuilding, and rocket engineering. This has resulted in the development of measuring instruments that are capable of measuring the coordinates of objects within a range of up to 100 meters, with an accuracy of up to one micron. The calibration process for such optoelectronic measuring devices (trackers and total stations) involves comparing the measurement results from these devices to a reference measurement based on a linear or spatial basis. The reference used in such measurements could be a reference base or a reference range finder with the capability to measure angle increments (EDM); the base would serve as a set of reference points for this purpose. The concept of the EDM for replicating the unit of measurement has been implemented on a mobile platform, which allows for angular changes in the direction of laser radiation in two planes. To determine the distance to an object, a high-precision interferometer of our own design is employed. The laser radiation travels to the corner reflectors, which form a spatial reference with precisely known positions. When the femtosecond pulses from the reference arm and the measuring arm coincide, an interference signal is created, repeating at the frequency of the laser pulses. The distance between reference points determined by interference signals is calculated in accordance with recommendations from the International Bureau of Weights and Measures for the indirect measurement of the time of passage of light, according to the definition of the meter. This distance is D/2 = c/(2nF), approximately 2.5 meters, where c is the speed of light in a vacuum, n is the refractive index of the medium, and F is the frequency of femtosecond pulse repetition. The achieved Type A measurement uncertainty for the distance to reflectors 64 m away (N·D/2, where N is an integer), spaced apart relative to each other at a distance of 1 m, does not exceed 5 microns. The angular uncertainty is calculated theoretically, since standard high-precision ring encoders will be used and are not a focus of research in this study. The Type B uncertainty components are not taken into account either, as the components that contribute most do not depend on the selected coordinate measuring method. This technology is being explored in the context of laboratory applications under controlled environmental conditions, where it is possible to achieve an advantage in terms of accuracy. In general, the EDM tests showed high accuracy, and theoretical calculations and experimental studies on an EDM prototype have shown that the Type A uncertainty of distance measurements to reflectors can be less than 1 micrometer. The results of this research will be utilized to develop a highly accurate mobile absolute range finder designed for the calibration of high-precision laser trackers and laser rangefinders, as well as other equipment, using a 64-meter laboratory comparator as a reference.
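
The stated relation can be checked directly; in the sketch below, the repetition frequency F is an assumed value chosen to reproduce the ~2.5 m figure:

```python
c = 299_792_458.0   # speed of light in vacuum, m/s
n = 1.00027         # assumed refractive index of air
F = 60e6            # assumed femtosecond pulse repetition frequency, Hz

half_D = c / (2 * n * F)
print(f"D/2 = {half_D:.4f} m")   # ~2.4976 m, i.e., approximately 2.5 meters
```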

Keywords: femtosecond laser, pulse correlation, interferometer, laser absolute range finder, coordinate measurement

Procedia PDF Downloads 37
2964 Evaluation of Classification Algorithms for Diagnosis of Asthma in Iranian Patients

Authors: Taha SamadSoltani, Peyman Rezaei Hachesu, Marjan GhaziSaeedi, Maryam Zolnoori

Abstract:

Introduction: Data mining is defined as a process to find patterns and relationships in the data of a database in order to build predictive models. Applications of data mining extend to vast sectors such as healthcare services. Medical data mining aims to solve real-world problems in the diagnosis and treatment of diseases. This method applies various techniques and algorithms which have different accuracy and precision. The purpose of this study was to apply knowledge discovery and data mining techniques to the diagnosis of asthma based on patient symptoms and history. Method: Data mining includes several steps and decisions to be made by the user. It starts with creating an understanding of the scope and application of previous knowledge in this area and identifying the knowledge-discovery process from the point of view of the stakeholders, and it finishes with acting on the discovered knowledge: conducting the knowledge, integrating it with other systems, and documenting and reporting it. In this study, a stepwise methodology was followed to achieve a logical outcome. Results: The sensitivity, specificity, and accuracy of the KNN, SVM, Naïve Bayes, NN, classification tree, and CN2 algorithms and related similar studies were evaluated, and ROC curves were plotted to show the performance of the system. Conclusion: The results show that we can accurately diagnose asthma, approximately ninety percent of the time, based on demographic and clinical data. The study also showed that methods based on pattern discovery and data mining have a higher sensitivity compared to expert and knowledge-based systems. On the other hand, medical guidelines and evidence-based medicine should be the basis of diagnostic methods; it is therefore recommended that machine learning algorithms be used in combination with knowledge-based algorithms.
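
A minimal sketch of the evaluation described above, using scikit-learn and synthetic stand-in data (the asthma dataset itself is not public here); sensitivity and specificity are recalls on the positive and negative classes:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score, recall_score

# Synthetic placeholder for demographic/clinical features and diagnoses.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=300) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("KNN", KNeighborsClassifier()),
                  ("SVM", SVC()),
                  ("Naive Bayes", GaussianNB())]:
    y_pred = clf.fit(X_tr, y_tr).predict(X_te)
    sens = recall_score(y_te, y_pred)                 # sensitivity
    spec = recall_score(y_te, y_pred, pos_label=0)    # specificity
    acc = accuracy_score(y_te, y_pred)
    print(f"{name}: sensitivity={sens:.2f} specificity={spec:.2f} accuracy={acc:.2f}")
```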

Keywords: asthma, data mining, classification, machine learning

Procedia PDF Downloads 432
2963 Quantification of Dowel-Concrete Interaction in Jointed Plain Concrete Pavements Using 3D Numerical Simulation

Authors: Lakshmana Ravi Raj Gali, K. Sridhar Reddy, M. Amaranatha Reddy

Abstract:

Load transfer between adjacent slabs of the jointed plain concrete pavement (JPCP) system is essential for long-lasting performance. Dowel bars are generally used to ensure a sufficient degree of load transfer, in addition to the load transferred by the aggregate interlock mechanism at the joints. Joint efficiency is the measure of joint quality and a major concern; therefore, the dowel bar system should be designed and constructed well. The interaction between the dowel bars and the concrete, which involves various parameters of both, determines the degree of joint efficiency. The present study focuses on a methodology for selecting the contact stiffness, which quantifies the dowel-concrete interaction. In addition, a parametric study was performed on the effect of dowel diameter, dowel shape, the spacing between dowel bars, joint opening, the thickness of the slab, the elastic modulus of concrete, and concrete cover on contact stiffness. The results indicated that the thickness of the slab is the most critical among the various parameters in explaining joint efficiency. Further, a displacement equivalency method was proposed to find the contact stiffness. The proposed methodology was validated with available field surface deflection data collected with a falling weight deflectometer (FWD).
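
As a heavily simplified illustration of what a contact stiffness is (the numbers below are invented placeholders; the paper's displacement equivalency method matches spring-supported and fully modeled displacements rather than using a single measurement):

```python
def contact_stiffness(load_N: float, relative_displacement_m: float) -> float:
    """k = P / delta for the dowel-concrete interface."""
    return load_N / relative_displacement_m

P = 12_000.0      # assumed load carried by one dowel, N
delta = 0.25e-3   # assumed dowel-concrete relative displacement, m
print(f"k = {contact_stiffness(P, delta):.3e} N/m")
```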

Keywords: contact stiffness, displacement equivalency method, Dowel-concrete interaction, joint behavior, 3D numerical simulation

Procedia PDF Downloads 133
2962 ARABEX: Automated Dotted Arabic Expiration Date Extraction using Optimized Convolutional Autoencoder and Custom Convolutional Recurrent Neural Network

Authors: Hozaifa Zaki, Ghada Soliman

Abstract:

In this paper, we introduce an approach for Automated Dotted Arabic Expiration Date Extraction using an Optimized Convolutional Autoencoder (ARABEX) with bidirectional LSTM. This approach is used for translating Arabic dot-matrix expiration dates into their corresponding filled-in dates. A custom lightweight Convolutional Recurrent Neural Network (CRNN) model is then employed to extract the expiration dates. Due to the lack of available dataset images for the Arabic dot-matrix expiration date, we generated synthetic images by creating an Arabic dot-matrix TrueType Font (TTF) to address this limitation. Our model was trained on a realistic synthetic dataset of 3287 images, covering the period from 2019 to 2027, represented in the yyyy/mm/dd format. We then trained our custom CRNN model using the generated synthetic images to assess the performance of our model (ARABEX) by extracting expiration dates from the translated images. Our proposed approach achieved an accuracy of 99.4% on the test dataset of 658 images, while also achieving a Structural Similarity Index (SSIM) of 0.46 for image translation on our dataset. The ARABEX approach demonstrates its ability to be applied to various downstream learning tasks, including image translation and reconstruction. Moreover, this pipeline (ARABEX+CRNN) can be seamlessly integrated into automated sorting systems to extract expiry dates and sort products accordingly during the manufacturing stage. By eliminating the need for manual entry of expiration dates, which can be time-consuming and inefficient for merchants, our approach offers significant gains in efficiency and accuracy for Arabic dot-matrix expiration date recognition.
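
A sketch of the synthetic-data generation idea, rendering date strings with a dot-matrix TrueType font via Pillow; the font file name is hypothetical, and any dot-matrix TTF would play the same role:

```python
from PIL import Image, ImageDraw, ImageFont

FONT_PATH = "dot_matrix_arabic.ttf"   # hypothetical dot-matrix font file

def render_date(date_str: str, size=(256, 64)) -> Image.Image:
    img = Image.new("L", size, color=255)             # white grayscale canvas
    draw = ImageDraw.Draw(img)
    font = ImageFont.truetype(FONT_PATH, 32)
    draw.text((10, 10), date_str, fill=0, font=font)  # black dot-matrix text
    return img

# Dates in the abstract's yyyy/mm/dd format, spanning 2019-2027
for d in ["2019/01/31", "2023/07/15", "2027/12/01"]:
    render_date(d).save(f"synthetic_{d.replace('/', '-')}.png")
```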

Keywords: computer vision, deep learning, image processing, character recognition

Procedia PDF Downloads 60
2961 Italian Speech Vowels Landmark Detection through the Legacy Tool 'xkl' with Integration of Combined CNNs and RNNs

Authors: Kaleem Kashif, Tayyaba Anam, Yizhi Wu

Abstract:

This paper introduces a methodology for advancing Italian speech vowel landmark detection within the distinctive feature-based speech recognition domain. By integrating combined convolutional neural networks (CNNs) and recurrent neural networks (RNNs) into the legacy tool 'xkl', the study presents a comprehensive enhancement to the 'xkl' legacy software. This integration incorporates reassigned spectrogram methodologies, enabling detailed acoustic analysis, and particularly enhances the precision of vowel formant estimation, which in turn improves the accuracy of landmark detection and yields a substantial performance gain compared to conventional methods. On the deep learning side, the combined CNN-RNN model is equipped with specialized temporal embeddings, self-attention mechanisms, and positional embeddings, allowing it to capture intricate dependencies within Italian speech vowels and making it highly adaptable in the distinctive feature domain. Furthermore, our temporal modeling approach employs Bayesian temporal encoding, refining the measurement of inter-landmark intervals. Comparative analysis against state-of-the-art models reveals a substantial improvement in accuracy, highlighting the robustness and efficacy of the proposed methodology. Upon rigorous testing on the LaMIT database, consisting of speech recorded in a silent room by four Italian native speakers, the landmark detector demonstrates exceptional performance, achieving a 95% true detection rate and a 10% false detection rate. The majority of missed landmarks were observed in proximity to reduced vowels. These promising results underscore the robust identifiability of landmarks within the speech waveform, establishing the feasibility of employing a landmark detector as the front end of a speech recognition system. The synergistic integration of reassigned spectrogram fusion, CNNs, RNNs, and Bayesian temporal encoding constitutes a significant advancement in Italian speech vowel landmark detection. This work contributes to the broader scientific community by presenting a methodologically rigorous framework for enhancing landmark detection accuracy in Italian speech vowels, and it establishes a foundation for future advancements in speech signal processing across domains requiring robust speech recognition systems.
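
A minimal sketch of a combined CNN + RNN landmark detector of the kind described, written in PyTorch under assumed dimensions (the actual architecture, embeddings, and attention details of the paper are not reproduced here):

```python
import torch
import torch.nn as nn

class CnnRnnLandmarkDetector(nn.Module):
    """1D CNN extracts local spectral features; a BiLSTM models temporal
    context before a per-frame landmark/no-landmark decision."""
    def __init__(self, n_mels=80, hidden=128, n_classes=2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_mels, 128, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(128, 128, kernel_size=5, padding=2), nn.ReLU(),
        )
        self.rnn = nn.LSTM(128, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, spec):                      # spec: (batch, n_mels, time)
        feats = self.cnn(spec).transpose(1, 2)    # -> (batch, time, 128)
        out, _ = self.rnn(feats)
        return self.head(out)                     # per-frame landmark logits

logits = CnnRnnLandmarkDetector()(torch.randn(2, 80, 200))
print(logits.shape)   # torch.Size([2, 200, 2])
```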

Keywords: landmark detection, acoustic analysis, convolutional neural network, recurrent neural network

Procedia PDF Downloads 40
2960 Improved Distance Estimation in Dynamic Environments through Multi-Sensor Fusion with Extended Kalman Filter

Authors: Iffat Ara Ebu, Fahmida Islam, Mohammad Abdus Shahid Rafi, Mahfuzur Rahman, Umar Iqbal, John Ball

Abstract:

The application of multi-sensor fusion for enhanced distance estimation accuracy in dynamic environments is crucial for advanced driver assistance systems (ADAS) and autonomous vehicles. Limitations of single sensors such as cameras or radar in adverse conditions motivate the use of combined camera and radar data to improve reliability, adaptability, and object recognition. A multi-sensor fusion approach using an extended Kalman filter (EKF) is proposed to combine sensor measurements with a dynamic system model, achieving robust and accurate distance estimation. The research utilizes the Mississippi State University Autonomous Vehicular Simulator (MAVS) to create a controlled environment for data collection. Data analysis is performed using MATLAB. Qualitative (visualization of fused data vs ground truth) and quantitative metrics (RMSE, MAE) are employed for performance assessment. Initial results with simulated data demonstrate accurate distance estimation compared to individual sensors. The optimal sensor measurement noise variance and plant noise variance parameters within the EKF are identified, and the algorithm is validated with real-world data from a Chevrolet Blazer. In summary, this research demonstrates that multi-sensor fusion with an EKF significantly improves distance estimation accuracy in dynamic environments. This is supported by comprehensive evaluation metrics, with validation transitioning from simulated to real-world data, paving the way for safer and more reliable autonomous vehicle control.
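
A simplified sketch of the fusion step (not the authors' implementation): a constant-velocity state [distance, speed] is predicted and then updated with camera and radar range measurements. With this linear measurement model the EKF update reduces to the standard Kalman form, and all noise values are assumptions:

```python
import numpy as np

dt = 0.1
F = np.array([[1, dt], [0, 1]])          # state transition
Q = np.diag([0.05, 0.1])                 # plant (process) noise
H = np.array([[1.0, 0.0]])               # both sensors measure distance only
R_cam, R_rad = 4.0, 0.25                 # assumed sensor noise variances

x = np.array([20.0, -1.0])               # initial distance (m) and speed (m/s)
P = np.eye(2)

def update(x, P, z, R):
    y = z - H @ x                         # innovation
    S = H @ P @ H.T + R
    K = P @ H.T / S                       # Kalman gain (S is scalar here)
    return x + (K * y).ravel(), (np.eye(2) - K @ H) @ P

for z_cam, z_rad in [(19.6, 19.9), (19.2, 19.8), (18.5, 19.6)]:
    x, P = F @ x, F @ P @ F.T + Q         # predict
    x, P = update(x, P, z_cam, R_cam)     # fuse camera range
    x, P = update(x, P, z_rad, R_rad)     # fuse radar range
    print(f"fused distance: {x[0]:.2f} m")
```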

Keywords: sensor fusion, EKF, MATLAB, MAVS, autonomous vehicle, ADAS

Procedia PDF Downloads 12
2959 Static Test Pad for Solid Rocket Motors

Authors: Svanik Garg

Abstract:

Static test pads (STPs) are stationary mechanisms that hold a solid rocket motor and measure the different parameters of its operation, including thrust and temperature, to better calibrate it for launch. This paper outlines a specific STP designed to test high-powered rocket motors with a thrust upwards of 4000 N and limited to 6500 N. The design is a portable mechanism, with cost an integral part of the design process, to make it accessible to small-scale rocket developers with limited resources. Using curved surfaces and an ergonomic design, the STP has a carefully engineered facade/case with a focus on stability and axial calibration of thrust. This paper describes the design, operation, and working of the STP and its wide-scale uses, given the growing market of aviation enthusiasts. Simulations on the CAD model in Fusion 360 provided promising results, with a safety factor of 2 established and stress limits verified along with the load coefficient. A PCB was also designed as part of the test pad design process to help obtain results, with visual output and various virtual terminals to collect data on different parameters. The circuitry was simulated using Proteus, and a special virtual interface with auditory commands was also created for accessibility and wide-scale implementation. Along with this description of the design, the paper also emphasizes the design principle behind the STP, including its vertical orientation to maximize thrust accuracy and a stable base to prevent micromovements. Given the rise of students and professionals alike building high-powered rockets, the STP described in this paper is an appropriate option, offering limited cost, portability, accuracy, and versatility. There are two types of STPs, vertical and horizontal; the one discussed in this paper is vertical, to utilize the axial component of thrust.

Keywords: static test pad, rocket motor, thrust, load, circuit, avionics, drag

Procedia PDF Downloads 349
2958 Vehicle Activity Characterization Approach to Quantify On-Road Mobile Source Emissions

Authors: Hatem Abou-Senna, Essam Radwan

Abstract:

Transportation agencies and researchers have in the past estimated emissions using one average speed and volume on a long stretch of roadway. Other methods provided better accuracy utilizing annual average estimates. Travel demand models provided an intermediate level of detail through average daily volumes. Currently, higher accuracy can be achieved through microscopic analyses by splitting the network links into sub-links and utilizing second-by-second trajectories to calculate emissions. The need to accurately quantify transportation-related emissions from vehicles is essential. This paper presents an examination of four different approaches to capture the environmental impacts of vehicular operations on a 10-mile stretch of Interstate 4 (I-4), an urban limited-access highway in Orlando, Florida. First, at the most basic level, emissions were estimated for the entire 10-mile section 'by hand' using one average traffic volume and average speed. Then, three advanced levels of detail were studied using VISSIM/MOVES to analyze smaller links: average speeds and volumes (AVG), second-by-second link drive schedules (LDS), and second-by-second operating mode distributions (OPMODE). This paper analyzes how the various approaches affect predicted emissions of CO, NOx, PM2.5, PM10, and CO2. The results demonstrate that obtaining precise and comprehensive operating mode distributions on a second-by-second basis provides more accurate emission estimates. Specifically, emission rates are highly sensitive to stop-and-go traffic and the associated driving cycles of acceleration, deceleration, and idling. Using the AVG or LDS approach may overestimate or underestimate emissions, respectively, compared to an operating mode distribution approach.
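
The OPMODE approach rests on binning second-by-second vehicle specific power (VSP); a sketch using the widely cited light-duty coefficients (verify against the MOVES documentation before real use; the trajectory and bin edges below are illustrative assumptions):

```python
import math

def vsp(speed_ms: float, accel_ms2: float, grade: float = 0.0) -> float:
    """Vehicle specific power in kW/tonne (light-duty coefficients)."""
    return speed_ms * (1.1 * accel_ms2 + 9.81 * math.sin(math.atan(grade)) + 0.132) \
           + 0.000302 * speed_ms ** 3

# One simulated second-by-second trajectory snippet: (speed m/s, accel m/s^2)
trajectory = [(0.0, 0.0), (3.0, 1.5), (8.0, 1.0), (14.0, 0.2), (14.0, -1.2)]
for v, a in trajectory:
    p = vsp(v, a)
    mode = "idle" if v < 0.45 else ("braking" if a < -0.89 else f"VSP={p:.1f}")
    print(f"v={v:4.1f} m/s  a={a:+.1f} m/s^2  ->  {mode}")
```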

Keywords: limited access highways, MOVES, operating mode distribution (OPMODE), transportation emissions, vehicle specific power (VSP)

Procedia PDF Downloads 325
2957 Determining Water Quantity from Sprayer Nozzle Using Particle Image Velocimetry (PIV) and Image Processing Techniques

Authors: M. Nadeem, Y. K. Chang, C. Diallo, U. Venkatadri, P. Havard, T. Nguyen-Quang

Abstract:

Uniform distribution of agro-chemicals is highly important because a significant portion of agro-chemicals such as pesticides is lost during spraying due to droplet non-uniformity and off-target drift. Improving the efficiency of spray patterns for different cropping systems would reduce energy use and costs and minimize environmental pollution. In this paper, we examine water jet patterns in order to study the performance and uniformity of water distribution during the spraying process. We present a method to quantify the water amount from a sprayer jet by using a Particle Image Velocimetry (PIV) system. The results of the study will be used to optimize sprayer and nozzle design for chemical application. For this study, ten sets of images were acquired using the following PIV system settings: double-frame mode, a trigger rate of 4 Hz, and a time between pulsed signals of 500 µs. Each set contained a different number of double-framed images (10, 20, 30, 40, 50, 60, 70, 80, 90, and 100) at eight different pressures: 25, 50, 75, 100, 125, 150, 175, and 200 kPa. The PIV images obtained were analysed using custom-made image processing software for droplet and volume calculations. The results showed good agreement between manual and PIV measurements and suggested that the PIV technique coupled with image processing can be used for precise quantification of flow through nozzles. The results also revealed that measuring fluid flow through PIV is reliable and accurate for sprayer patterns.
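
A sketch of the droplet-quantification step with OpenCV: threshold a frame, find droplet contours, and convert pixel areas to volumes assuming spherical droplets; the file name and pixel-to-mm calibration are assumptions:

```python
import math
import cv2

MM_PER_PX = 0.05  # assumed spatial calibration, mm per pixel

frame = cv2.imread("piv_frame.png", cv2.IMREAD_GRAYSCALE)   # hypothetical PIV frame
_, binary = cv2.threshold(frame, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

total_volume = 0.0
for c in contours:
    area_mm2 = cv2.contourArea(c) * MM_PER_PX ** 2
    r = math.sqrt(area_mm2 / math.pi)              # equivalent-circle radius, mm
    total_volume += (4.0 / 3.0) * math.pi * r**3   # spherical-droplet volume, mm^3

print(f"{len(contours)} droplets, ~{total_volume:.2f} mm^3 total")
```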

Keywords: image processing, PIV, quantifying the water volume from nozzle, spraying pattern

Procedia PDF Downloads 218
2956 Multi-Label Approach to Facilitate Test Automation Based on Historical Data

Authors: Warda Khan, Remo Lachmann, Adarsh S. Garakahally

Abstract:

The increasing complexity of software and its applicability in a wide range of industries, e.g., automotive, call for enhanced quality assurance techniques. Test automation is one option to tackle the prevailing challenges by supporting test engineers with fast, parallel, and repetitive test executions. A high degree of test automation allows for a shift from mundane (manual) testing tasks to a more analytical assessment of the software under test. However, a high initial investment of test resources is required to establish test automation, which, in most cases, conflicts with the time constraints provided for quality assurance of complex software systems. Hence, computer-aided creation of automated test cases is crucial to increase the benefit of test automation. This paper proposes the application of machine learning for the generation of automated test cases. It is based on supervised learning to analyze test specifications and existing test implementations. The analysis facilitates the identification of patterns between test steps and their implementation with test automation components. For the test case generation, this approach exploits historical data of test automation projects. The identified patterns are the foundation to predict the implementation of unknown test case specifications. Based on this support, a test engineer solely has to review and parameterize the test automation components instead of writing them manually, resulting in a significant time reduction for establishing test automation. Compared to other generation approaches, this ML-based solution can handle different writing styles, authors, application domains, and even languages. Furthermore, test automation tools require expert knowledge in the form of programming skills, whereas this approach only requires historical data to generate test cases. The proposed solution is evaluated using various multi-label evaluation criteria (EC) and two small-sized real-world systems. The most prominent EC is 'Subset Accuracy'. The promising results show an accuracy of at least 86% for test cases where a 1:1 relationship (multi-class) between test step specification and test automation component exists. For complex multi-label problems, i.e., where one test step can be implemented by several components, the prediction accuracy is still at 60%, better than the current state-of-the-art results. The prediction quality is expected to increase for larger systems with corresponding historical data. Consequently, this technique facilitates the time reduction for establishing test automation and is independent of the application domain and project. As a work in progress, the next steps are to investigate incremental and active learning as additions to increase the usability of this approach, e.g., in case labelled historical data is scarce.
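
For reference, 'Subset Accuracy' is exactly what scikit-learn's accuracy_score computes on multi-label indicator matrices: a test step counts as correct only if all of its predicted automation components match the ground truth. The label matrices below are invented examples:

```python
import numpy as np
from sklearn.metrics import accuracy_score, hamming_loss

y_true = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 1, 0],
                   [0, 0, 1]])
y_pred = np.array([[1, 0, 1],
                   [0, 1, 1],   # one wrong label -> the whole row counts as wrong
                   [1, 1, 0],
                   [0, 0, 1]])

print("subset accuracy:", accuracy_score(y_true, y_pred))  # 0.75
print("hamming loss:   ", hamming_loss(y_true, y_pred))    # per-label error rate
```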

Keywords: machine learning, multi-class, multi-label, supervised learning, test automation

Procedia PDF Downloads 114
2955 Application of Argumentation for Improving the Classification Accuracy in Inductive Concept Formation

Authors: Vadim Vagin, Marina Fomina, Oleg Morosin

Abstract:

This paper describes an argumentation approach for the problem of inductive concept formation. It is proposed to use argumentation, based on defeasible reasoning with justification degrees, to improve the quality of classification models obtained by generalization algorithms. Experimental results on both clean and noisy data are also presented.

Keywords: argumentation, justification degrees, inductive concept formation, noise, generalization

Procedia PDF Downloads 421
2954 Vibro-Tactile Equalizer for Musical Energy-Valence Categorization

Authors: Dhanya Nair, Nicholas Mirchandani

Abstract:

Musical haptic systems can enhance a listener's musical experience while providing an alternative platform for the hearing impaired to experience music. Current music tactile technologies focus on representing tactile metronomes to synchronize performers or encoding musical notes into distinguishable (albeit distracting) tactile patterns. There is growing interest in the development of musical haptic systems to augment the auditory experience, although the haptic-music relationship is still not well understood. This paper presents a tactile music interface that provides vibrations to multiple fingertips in synchronicity with auditory music. Like an audio equalizer, different frequency bands are filtered out, and the power in each frequency band is computed and converted to a corresponding vibrational strength. These vibrations are felt on different fingertips, each corresponding to a different frequency band. Songs with music from different spectrums, as classified by their energy and valence, were used to test the effectiveness of the system and to understand the relationship between music and tactile sensations. Three participants were trained on one song categorized as sad (low energy and low valence score) and one song categorized as happy (high energy and high valence score). They were trained both with and without auditory feedback (listening to the song while experiencing the tactile music on their fingertips and then experiencing the vibrations alone without the music). The participants were then tested on three songs from both categories, without any auditory feedback, and were asked to classify the tactile vibrations they felt into either category. The participants were blinded to the songs being tested and were not provided any feedback on the accuracy of their classification. These participants were able to classify the music with 100% accuracy. Although the songs tested were on two opposite spectrums (sad/happy), the preliminary results show the potential of utilizing a vibrotactile equalizer, like the one presented, for augmenting musical experience while furthering the current understanding of the music-tactile relationship.
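
A sketch of the equalizer idea: band-pass the audio into a few bands, take the RMS power of each band per block, and map it to a per-fingertip vibration strength; the band edges and the drive-level scaling are illustrative assumptions:

```python
import numpy as np
from scipy.signal import butter, sosfilt

FS = 44_100
BANDS = [(20, 250), (250, 2_000), (2_000, 8_000)]   # low / mid / high, Hz

def band_strengths(block: np.ndarray) -> list:
    strengths = []
    for lo, hi in BANDS:
        sos = butter(4, [lo, hi], btype="bandpass", fs=FS, output="sos")
        rms = np.sqrt(np.mean(sosfilt(sos, block) ** 2))
        strengths.append(min(255, int(rms * 4000)))   # assumed 8-bit drive scaling
    return strengths

audio_block = np.random.randn(4096) * 0.1   # placeholder for one music frame
print(band_strengths(audio_block))          # one vibration level per fingertip
```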

Keywords: haptic music relationship, tactile equalizer, tactile music, vibrations and mood

Procedia PDF Downloads 159
2953 Nowcasting Indonesian Economy

Authors: Ferry Kurniawan

Abstract:

In this paper, we nowcast quarterly output growth in Indonesia by exploiting higher-frequency data (monthly indicators) using a mixed-frequency factor model, exploiting both quarterly and monthly data. Nowcasting quarterly GDP in Indonesia is particularly relevant for the central bank of Indonesia, which sets the policy rate at its monthly Board of Governors Meeting, where one of the important steps is the assessment of the current state of the economy. Thus, having an accurate and up-to-date quarterly GDP nowcast every time new monthly information becomes available would clearly be of interest for the central bank of Indonesia, as the initial assessment of the current state of the economy, including the nowcast, is used as input for longer-term forecasts. We consider a small-scale mixed-frequency factor model to produce nowcasts. In particular, we specify variables as year-on-year growth rates; thus, the relation between quarterly and monthly data is expressed in year-on-year growth rates. To assess the performance of the model, we compare the nowcasts with two other approaches: an autoregressive model, which is often difficult to beat when forecasting output growth, and Mixed Data Sampling (MIDAS) regression. In particular, both the mixed-frequency factor model and MIDAS nowcasts are produced by exploiting the same set of monthly indicators; hence, we compare the nowcast performance of the two approaches directly. To preview the results, we find that by exploiting monthly indicators using the mixed-frequency factor model and MIDAS regression, we improve the nowcast accuracy over a benchmark simple autoregressive model that uses only quarterly-frequency data. However, it is not clear whether the MIDAS or the mixed-frequency factor model is better. Neither set of nowcasts encompasses the other, suggesting that both nowcasts are valuable in nowcasting GDP but neither is sufficient. By combining the two individual nowcasts, we find that the nowcast combination not only increases the accuracy relative to the individual nowcasts but also lowers the risk of the worst performance of the individual nowcasts.
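
The combination step can be as simple as inverse-MSE weighting of the two nowcasts estimated on past errors; a sketch with synthetic placeholder series (not the paper's data or exact weighting scheme):

```python
import numpy as np

rng = np.random.default_rng(1)
actual = 5.0 + rng.normal(scale=0.6, size=20)          # past quarters, % y/y growth
nc_factor = actual + rng.normal(scale=0.40, size=20)   # factor-model nowcasts
nc_midas = actual + rng.normal(scale=0.45, size=20)    # MIDAS nowcasts

mse_f = np.mean((nc_factor - actual) ** 2)
mse_m = np.mean((nc_midas - actual) ** 2)
w_f = (1 / mse_f) / (1 / mse_f + 1 / mse_m)            # inverse-MSE weight

combined = w_f * nc_factor + (1 - w_f) * nc_midas
rmse = np.sqrt(np.mean((combined - actual) ** 2))
print(f"w_factor={w_f:.2f}  RMSE of combined nowcast={rmse:.3f}")
```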

Keywords: nowcasting, mixed-frequency data, factor model, nowcasts combination

Procedia PDF Downloads 319
2952 Quantification of Enzymatic Activities of Proteins, Peroxidase and Phenylalanine Ammonia Lyase, in Growing Phaseolus vulgaris L. with Application of a Bacterial Consortium to Control Fusarium and Rhizoctonia

Authors: Arredondo Valdés Roberto, Hernández Castillo Francisco Daniel, Laredo Alcalá Elan Iñaky, Gonzalez Gallegos Esmeralda, Castro Del Angel Epifanio

Abstract:

The common bean, Phaseolus vulgaris L., is the most important food legume for direct consumption in the world. Fusarium dry rot is the major fungal disease affecting Phaseolus vulgaris L. after planting. Rhizoctonia, on the other hand, can be found on all underground parts of the plant at various times during the growing season. In recent years, studies have been conducted worldwide on the use of natural products as substitutes for herbicides and pesticides, because of possible ecological and economic benefits. Plants respond to fungal invasion by activating defense responses associated with the accumulation of several enzymes and inhibitors, which prevent pathogen infection. This study focused on the role of proteins, peroxidase (POD), and phenylalanine ammonia lyase (PAL) in imparting resistance to soft rot pathogens by applying different bacterial consortia, formulated and provided by Biofertilizantes de Méxicanos industries, and analyzing the enzyme activity at different times after application (6 h, 12 h, and 24 h). The resistance of these treatments was correlated with high POD and PAL enzyme activity as well as increased protein concentrations. These findings show that PAL, POD, and protein synthesis play a role in imparting resistance in Phaseolus vulgaris L. to soft rot infection by Fusarium and Rhizoctonia.

Keywords: fusarium, peroxidase, phenylalanine ammonia lyase, rhizoctonia

Procedia PDF Downloads 335
2951 [Keynote] Implementation of Quality Control Procedures in Radiotherapy CT Simulator

Authors: B. Petrović, L. Rutonjski, M. Baucal, M. Teodorović, O. Čudić, B. Basarić

Abstract:

Purpose/Objective: Radiotherapy treatment planning requires the use of a CT simulator in order to acquire CT images. The overall performance of the CT simulator determines the quality of the radiotherapy treatment plan and, in the end, the outcome of treatment for every single patient. Therefore, it is strongly advised by international recommendations to set up quality control procedures for every machine involved in the radiotherapy treatment planning process, including the CT scanner/simulator. The overall process requires a number of tests, which are used on a daily, weekly, monthly, or yearly basis, depending on the feature tested. Materials/Methods: Two phantoms were used: a dedicated phantom, CIRS 062QA, and a QA phantom obtained with the CT simulator. The examined CT simulator was a Siemens Somatom Definition AS Open, dedicated to radiation therapy treatment planning. The CT simulator has built-in software which enables fast and simple evaluation of CT QA parameters using the phantom provided with the CT simulator. On the other hand, the recommendations contain additional tests, which were done with the CIRS phantom. Also, legislation on ionizing radiation protection requires CT testing at defined periods of time. Taking into account the requirements of the law, the built-in tests of the CT simulator, and international recommendations, the institutional QC programme for the CT simulator was defined and implemented. Results: The CT simulator parameters evaluated in the study were the following: CT number accuracy, field uniformity, the complete CT-to-ED conversion curve, spatial and contrast resolution, image noise, slice thickness, and patient table stability. The following limits were established and implemented: CT number accuracy limits are +/- 5 HU of the value at commissioning. Field uniformity: +/- 10 HU in selected ROIs. The complete CT-to-ED curve for each tube voltage must comply with the curve obtained at commissioning, with deviations of not more than 5%. Spatial and contrast resolution tests must comply with the tests obtained at commissioning; otherwise, the machine requires service. The result of the image noise test must fall within 20% of the base value. Slice thickness must meet manufacturer specifications, and patient table stability with longitudinal transfer of the loaded table must not show more than 2 mm vertical deviation. Conclusion: The implemented QA tests gave an overall basic understanding of CT simulator functionality and its clinical effectiveness in radiation treatment planning. The legal requirement for the clinic is to set up its own QA programme with minimum testing, but it remains the user's decision whether additional testing, as recommended by international organizations, will be implemented, so as to improve the overall quality of the radiation treatment planning procedure, as the quality of the CT images used for radiation treatment planning influences the delineation of a tumor and the calculation accuracy of the treatment planning system, and finally the delivery of radiation treatment to a patient.
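
The stated tolerances lend themselves to simple automated checks against commissioning baselines; an illustrative sketch (not the clinic's actual QC software, and the measured values below are invented):

```python
TOLERANCES = {
    "ct_number_hu": 5.0,     # +/- 5 HU of commissioning value
    "uniformity_hu": 10.0,   # +/- 10 HU in selected ROIs
    "ct_to_ed_pct": 5.0,     # CT-to-ED curve deviation, %
    "noise_pct": 20.0,       # image noise vs baseline, %
    "table_sag_mm": 2.0,     # vertical deviation of loaded table
}

def check(name: str, measured: float, baseline: float, relative: bool = False) -> bool:
    dev = abs(measured - baseline)
    if relative:
        dev = 100.0 * dev / abs(baseline)
    ok = dev <= TOLERANCES[name]
    print(f"{name}: deviation {dev:.2f} (limit {TOLERANCES[name]}) -> {'PASS' if ok else 'FAIL'}")
    return ok

check("ct_number_hu", measured=3.0, baseline=0.0)               # water HU vs commissioning
check("noise_pct", measured=11.5, baseline=10.0, relative=True)  # noise vs base value
```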

Keywords: CT simulator, radiotherapy, quality control, QA programme

Procedia PDF Downloads 515
2950 Assessment of Hemostatic Activity of the Aqueous Extract of Leaves of Marrubium vulgare L.: A Mediterranean Lamiaceae from Algeria

Authors: Nabil Ghedadba, Abdessemed Samira, Leila Hambaba, Sidi Mohamed Ould Mokhtar, Nassima Fercha, Houas Bousselsela

Abstract:

The overall objective of this study was to evaluate in vitro the hemostatic activity of secondary metabolites (polyphenols, flavonoids, and tannins) of Marrubium vulgare leaves, an aromatic plant widely used in traditional medicine for the treatment of asthma, cough, diabetes (through its effect on the pancreas to secrete insulin), heart disease, and fever, and which has high efficiency against inflammation. Qualitative analysis of the aqueous extract (AQE) by thin layer chromatography revealed the presence of quercetin, kaempferol, and rutin. Quantification of total phenols by the Folin-Ciocalteu method and of flavonoids by the AlCl3 method gave high values for AQE: 175±0.80 mg GAE per 100 g of dry matter and 23.86±0.36 mg QE per 100 g of dry matter, respectively. Moreover, the assay of condensed tannins by the vanillin method showed that AQE contains the highest value: 16.55±0.03 mg e-catechin per 100 g of dry matter. Assessment of hemostatic activity by the plasma recalcification method (Howell time) revealed a surprising dose-dependent anticoagulant effect of the lyophilized AQE of M. vulgare leaves. A positive linear correlation (r = 0.96) between the two parameters studied, the condensed tannin content and the hemostatic activity, was used to highlight a possible role in hemostasis for these compounds, which have potent vasoconstrictor activity. From these results, we can see that Marrubium vulgare could be a candidate for therapeutic use.

Keywords: Marrubium vulgare L., aqueous extract, phenolic compounds dosing, hemostatic activity, condensed tannins

Procedia PDF Downloads 228
2949 A Robust Visual Simultaneous Localization and Mapping for Indoor Dynamic Environment

Authors: Xiang Zhang, Daohong Yang, Ziyuan Wu, Lei Li, Wanting Zhou

Abstract:

Visual Simultaneous Localization and Mapping (VSLAM) uses cameras to collect information in unknown environments to realize simultaneous localization and environment map construction, which has a wide range of applications in autonomous driving, virtual reality, and other related fields. At present, related research on VSLAM can maintain high accuracy in static environments. In dynamic environments, however, the movement of objects in the scene reduces the stability of the VSLAM system, resulting in inaccurate localization and mapping, or even failure. In this paper, a robust VSLAM method is proposed to deal effectively with this problem in dynamic environments. We propose a dynamic region removal scheme based on a semantic segmentation neural network and geometric constraints. First, the semantic segmentation network is used to extract the prior active motion regions, prior static regions, and prior passive motion regions in the environment. Then, a lightweight frame tracking module initializes the transform pose between the previous frame and the current frame on the prior static regions. A motion consistency detection module based on multi-view geometry and scene flow is used to divide the environment into static and dynamic regions, and the dynamic object regions are thus successfully eliminated. Finally, only the static regions are used by the tracking thread. Our research is based on ORB-SLAM3, one of the most effective VSLAM systems available. We evaluated our method on the TUM RGB-D benchmark, and the results demonstrate that the proposed VSLAM method improves the accuracy of the original ORB-SLAM3 by 70% to 98.5% in highly dynamic environments.
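
The core of the dynamic-region-removal idea can be sketched with OpenCV: keep ORB features only where a semantic mask marks the scene as static before tracking. The image and mask files below are hypothetical, and any per-pixel segmentation output would fit here:

```python
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=2000)

frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)                  # hypothetical RGB-D frame
dynamic_mask = cv2.imread("dynamic_mask.png", cv2.IMREAD_GRAYSCALE)    # 255 = prior dynamic object

# Invert the dynamic mask so features are detected only in static regions.
static_mask = np.where(dynamic_mask > 0, 0, 255).astype(np.uint8)
keypoints, descriptors = orb.detectAndCompute(frame, mask=static_mask)
print(f"{len(keypoints)} static-region features kept for the tracking thread")
```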

Keywords: dynamic scene, dynamic visual SLAM, semantic segmentation, scene flow, VSLAM

Procedia PDF Downloads 94
2948 Numerical Investigation of Tsunami Flow Characteristics and Energy Reduction through Flexible Vegetation

Authors: Abhishek Mukherjee, Juan C. Cajas, Jenny Suckale, Guillaume Houzeaux, Oriol Lehmkuhl, Simone Marras

Abstract:

The investigation of tsunami flow characteristics and the quantification of tsunami energy reduction through coastal vegetation are important for understanding the protective benefits of nature-based mitigation parks. In the present study, a three-dimensional non-hydrostatic incompressible computational fluid dynamics model with a two-way-coupled fluid-structure interaction (FSI) approach is used. After validating the numerical model against experimental data, tsunami flow characteristics were investigated by varying the vegetation density, modulus of elasticity, gap between stems, and arrangement or distribution of vegetation patches. Streamwise depth-averaged velocity profiles, turbulent kinetic energy, and energy flux reflection and dissipation extracted from the simulations are presented. These diagnostics are essential for assessing the importance of different parameters in designing proper coastal defense systems. When a tsunami wave reaches the shore, it transforms into undular bores, which induce scour around offshore structures and sediment transport. The bed shear stress, instantaneous turbulent kinetic energy, and near-bed vorticity are presented to estimate the importance of vegetation in preventing tsunami-induced scour and sediment transport.

Keywords: coastal defense, energy flux, fluid-structure interaction, natural hazards, sediment transport, tsunami mitigation

Procedia PDF Downloads 134
2947 Association of Transmission Risk Factors Among HCV-infected Bangladeshi Patients With Different Genotypes

Authors: Nahida Sultana

Abstract:

Globally, an estimated 58 million people have chronic hepatitis C virus infection, with about 1.5 million new infections occurring per year. The hepatitis C virus is a blood-borne virus, and most infections occur through exposure to blood from unsafe injection practices, unsafe health care, unscreened blood transfusion, injection drug use, and sexual practices that lead to exposure to blood. Hepatitis C virus (HCV) causes chronic infections that mainly affect the liver, leading to liver disease. This study aimed to determine whether there is any significant association between HCV transmission risk factors and genotype in HCV-infected Bangladeshi patients. After quantification of HCV viral load, 36 samples were randomly selected for HCV genotyping and risk factor assessment. A greater proportion of genotype 1 patients (40%) than genotype 3 patients (22.6%) underwent blood transfusion (p > 0.05). More genotype 1 patients (20%) than genotype 3 patients (16.1%) underwent surgery and invasive procedures. A history of injection drug use (25.8%) and sexual exposure (3.2%) was prevalent only in genotype 3 patients and absent in patients with genotype 1 (p > 0.05). No statistically significant difference in HCV transmission risk factors (blood transfusion, injection drug use, surgery and interventions, sexual transmission) was found between patients infected with genotypes 1 and 3. In HCV infection, genotype may have no relation to transmission risk factors among Bangladeshi patients.

Keywords: HCV genotype, alanine aminotransferase (ALT), HCV viral load, IDUs

Procedia PDF Downloads 70
2946 Clinical Impact of Ultra-Deep Versus Sanger Sequencing Detection of Minority Mutations on the HIV-1 Drug Resistance Genotype Interpretations after Virological Failure

Authors: S. Mohamed, D. Gonzalez, C. Sayada, P. Halfon

Abstract:

Drug resistance mutations are routinely detected using standard Sanger sequencing, which does not detect minor variants with a frequency below 20%. The impact of detecting minor variants generated by ultra-deep sequencing (UDS) on HIV drug-resistance (DR) interpretations has not yet been studied. Fifty HIV-1 patients who experienced virological failure were included in this retrospective study. The HIV-1 UDS protocol allowed the detection and quantification of HIV-1 protease and reverse transcriptase variants related to genotypes A, B, C, E, F, and G. DeepChek®-HIV simplified DR interpretation software was used to compare Sanger sequencing and UDS. The total time required for the UDS protocol was found to be approximately three times that of Sanger sequencing, with equivalent reagent costs. UDS detected all of the mutations found by population sequencing and identified additional resistance variants in all patients. An analysis of DR revealed a total of 643 and 224 clinically relevant mutations by UDS and Sanger sequencing, respectively. Three resistance mutations with > 20% prevalence were detected solely by UDS: A98S (23%), E138A (21%), and V179I (25%). A significant difference in the DR interpretations for 19 antiretroviral drugs was observed between the UDS and Sanger sequencing methods. Y181C and T215Y were the mutations most frequently associated with interpretation differences. A combination of UDS and DeepChek® software for the interpretation of DR results would help clinicians provide suitable treatments. A cut-off of 1% allowed a better characterisation of the viral population by identifying additional resistance mutations and improving the DR interpretation.
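
Applying the 1% cut-off is a simple frequency filter over the UDS variant calls; a sketch with an invented call list (the frequencies are placeholders, not study data):

```python
calls = [
    {"mutation": "A98S",  "frequency": 0.23},
    {"mutation": "E138A", "frequency": 0.21},
    {"mutation": "V179I", "frequency": 0.25},
    {"mutation": "Y181C", "frequency": 0.012},
    {"mutation": "T215Y", "frequency": 0.004},   # below the 1% cut-off, dropped
]

CUTOFF = 0.01
retained = [c for c in calls if c["frequency"] >= CUTOFF]
print([c["mutation"] for c in retained])
```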

Keywords: HIV-1, ultra-deep sequencing, Sanger sequencing, drug resistance

Procedia PDF Downloads 316
2945 Collaboration During Planning and Reviewing in Writing: Effects on L2 Writing

Authors: Amal Sellami, Ahlem Ammar

Abstract:

Writing is acknowledged to be a cognitively demanding and complex task. Indeed, the writing process is composed of three iterative sub-processes, namely planning, translating (writing), and reviewing. Not only do second or foreign language learners need to write according to this process, but they also need to respect the norms and rules of language and writing in the text to be produced. Accordingly, researchers have suggested approaching writing as a collaborative task in order to alleviate its complexity. Consequently, collaboration has been implemented during the whole writing process or only during planning or reviewing. Researchers report that implementing collaboration during the whole process can be demanding in terms of time in comparison to individual writing tasks. Consequently, because of time constraints, teachers may avoid it. For this reason, it might be pedagogically more realistic to limit collaboration to one of the writing sub-processes (i.e., planning or reviewing). However, previous research implementing collaboration in planning or reviewing is limited and fails to explore the effects of these conditions on the written text. Consequently, the present study examines the effects of collaboration in planning and collaboration in reviewing on the written text. To reach this objective, quantitative as well as qualitative methods are deployed to examine the written texts holistically and in terms of fluency, complexity, and accuracy. Participants of the study include 4 pairs in each group (n=8). They participated in two experimental conditions: (1) collaborative planning followed by individual writing and individual reviewing, and (2) individual planning followed by individual writing and collaborative reviewing. The comparative research findings indicate that while collaborative planning resulted in better overall text quality (specifically, better content and organization ratings), better fluency, better complexity, and fewer lexical errors, collaborative reviewing produced better accuracy and fewer syntactical and mechanical errors. The discussion of the findings suggests the need to conduct more comparative research in order to further explore the effects of collaboration in planning or in reviewing. Pedagogical implications of the current study include advising teachers to choose between implementing collaboration in planning or in reviewing depending on their students' needs and what they need to improve.

Keywords: collaboration, writing, collaborative planning, collaborative reviewing

Procedia PDF Downloads 85
2944 Regularizing Software for Aerosol Particles

Authors: Christine Böckmann, Julia Rosemann

Abstract:

We present an inversion algorithm that is used in the European Aerosol Lidar Network for the inversion of data collected with multi-wavelength Raman lidar. These instruments measure backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm. The algorithm is based on manually controlled inversion of optical data, which allows for detailed sensitivity studies and thus provides comparably high quality in the derived data products. The algorithm allows us to derive the particle effective radius as well as volume and surface-area concentrations with comparably high confidence. The retrieval of the real and imaginary parts of the complex refractive index is still a challenge in view of the accuracy required for these parameters in climate change studies, in which light absorption needs to be known with high accuracy. The single-scattering albedo (SSA) can be computed from the retrieved microphysical parameters and allows us to categorize aerosols into high- and low-absorbing aerosols. From a mathematical point of view, the algorithm is based on the concept of truncated singular value decomposition as a regularization method. This method was adapted to work for the retrieval of the particle size distribution function (PSD) and is called a hybrid regularization technique since it uses a triple of regularization parameters. The inversion of an ill-posed problem, such as the retrieval of the PSD, is always a challenging task because very small measurement errors are often hugely amplified during the solution process unless an appropriate regularization method is used. Even with a regularization method, the difficulty remains of determining appropriate regularization parameters. Therefore, in the next stage of our work, we decided to use two regularization techniques in parallel for comparison purposes. The second method is an iterative regularization method based on Padé iteration, in which the number of iteration steps serves as the regularization parameter. We successfully developed semi-automated software for spherical particles which is able to run even on a parallel processor machine. From a mathematical point of view, it is also very important (as a selection criterion for an appropriate regularization method) to investigate the degree of ill-posedness of the problem, which we found to be moderate. We computed the optical data from mono-modal logarithmic PSDs and investigated particles of spherical shape in our simulations. We considered particle radii as large as 6 µm, which not only covers the size range of particles in the fine-mode fraction of naturally occurring PSDs but also covers a part of the coarse-mode fraction. We considered errors of 15% in the simulation studies. For the SSA, 100% of all cases achieve relative errors below 12%. In more detail, 87% of all cases for 355 nm and 88% of all cases for 532 nm are well below 6%. With respect to the absolute error for non- and weakly absorbing particles with real parts of 1.5 and 1.6, the accuracy limit of ±0.03 is achieved in all modes. In sum, 70% of all cases stay below ±0.03, which is sufficient for climate change studies.
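
To make the truncated-SVD idea concrete, here is a minimal Python sketch of how dropping small singular values regularizes a generic ill-posed linear inversion; the smoothing-kernel forward model and the truncation level are illustrative assumptions, not the lidar kernel or the authors' hybrid parameter triple.

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Truncated-SVD solution of the ill-posed system A x = b, keeping
    only the k largest singular values (k is the regularization parameter)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Discard small singular values that would hugely amplify noise.
    coeffs = (U.T @ b)[:k] / s[:k]
    return Vt[:k].T @ coeffs

# Illustrative ill-conditioned forward model (a Gaussian smoothing kernel).
n = 50
grid = np.linspace(0.0, 1.0, n)
A = np.exp(-100.0 * (grid[:, None] - grid[None, :]) ** 2)

x_true = np.exp(-((grid - 0.5) ** 2) / 0.01)   # mono-modal "PSD" stand-in
b = A @ x_true + 1e-3 * np.random.default_rng(0).standard_normal(n)

x_rec = tsvd_solve(A, b, k=12)  # truncation level chosen by inspection here
```

Without the truncation (k = n), the small singular values divide the noise and the reconstruction blows up, which is exactly the error amplification the regularization is meant to suppress.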

Keywords: aerosol particles, inverse problem, microphysical particle properties, regularization

Procedia PDF Downloads 332
2943 Extraction of Forest Plantation Resources in Selected Forest of San Manuel, Pangasinan, Philippines Using LiDAR Data for Forest Status Assessment

Authors: Mark Joseph Quinto, Roan Beronilla, Guiller Damian, Eliza Camaso, Ronaldo Alberto

Abstract:

Forest inventories are essential to assess the composition, structure, and distribution of forest vegetation, which can serve as baseline information for management decisions. Classical forest inventory is labor-intensive, time-consuming, and sometimes even dangerous. The use of Light Detection and Ranging (LiDAR) in forest inventory would overcome these restrictions. This study was conducted to determine the feasibility of using LiDAR-derived data to extract forest biophysical parameters with high accuracy and as a non-destructive method for the forest status analysis of San Manuel, Pangasinan. Forest resource extraction was carried out using LAStools, GIS, ENVI, and .bat scripts with the available LiDAR data. The process includes the generation of derivatives such as the Digital Terrain Model (DTM), Canopy Height Model (CHM), and Canopy Cover Model (CCM) in .bat scripts, followed by the generation of 17 composite bands used to extract forest cover classes in ENVI 4.8 and GIS software. The Diameter at Breast Height (DBH), Above Ground Biomass (AGB), and Carbon Stock (CS) were estimated for each classified forest cover, and tree count extraction was carried out using GIS. Subsequently, field validation was conducted for accuracy assessment. Results showed that the forest of San Manuel has 73% forest cover, much higher than the 10% canopy cover requirement. For the extracted canopy height, 80% of tree heights range from 12 m to 17 m. The CS of the three forest covers based on AGB were 20,819.59 kg per 20 m × 20 m plot for closed broadleaf, 8,609.82 kg for broadleaf plantation, and 15,545.57 kg for open broadleaf. The average tree count for the forest plantation was 413 trees/ha. As such, the forest of San Manuel has a high percentage of forest cover and a high CS.
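
As a minimal sketch of the CHM derivation step, the Python snippet below subtracts a Digital Terrain Model from a Digital Surface Model and applies a canopy-height threshold to estimate percent cover; the tiny grids and the 5 m threshold are illustrative assumptions, whereas the actual workflow gridded these surfaces from classified LiDAR returns via .bat scripts.

```python
import numpy as np

# Digital Surface Model (top of canopy) and Digital Terrain Model (bare
# ground), here tiny synthetic 3x3 grids in metres, standing in for
# rasters gridded from LiDAR first and ground returns respectively.
dsm = np.array([[212.4, 215.1, 213.8],
                [211.9, 216.6, 214.2],
                [210.7, 213.3, 212.0]])
dtm = np.array([[199.5, 199.8, 200.1],
                [199.6, 199.9, 200.2],
                [199.7, 200.0, 200.3]])

chm = dsm - dtm                 # canopy height above ground per cell
chm[chm < 0] = 0.0              # clamp noise that dips below the terrain

canopy_mask = chm >= 5.0        # assumed threshold: >= 5 m counts as canopy
forest_cover = canopy_mask.mean() * 100
print(f"Forest cover: {forest_cover:.0f}% of cells")
```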

Keywords: carbon stock, forest inventory, LiDAR, tree count

Procedia PDF Downloads 364
2942 Automated Evaluation Approach for Time-Dependent Question Answering Pairs on Web Crawler Based Question Answering System

Authors: Shraddha Chaudhary, Raksha Agarwal, Niladri Chatterjee

Abstract:

This work demonstrates a web crawler-based, generalized, end-to-end open-domain Question Answering (QA) system. An efficient QA system requires a significant amount of domain knowledge to answer any question, with the aim of finding an exact and correct answer in the form of a number, a noun, a short phrase, or a brief piece of text for the user's questions. Analysis of the question, searching the relevant documents, and choosing an answer are three important steps in a QA system. This work uses a web scraper (Beautiful Soup) to extract K documents from the web; the value of K can be calibrated on the basis of a trade-off between time and accuracy. This is followed by a passage ranking step, using a model trained on the 500K queries of the MS MARCO dataset, to extract the most relevant text passage and thereby shorten the lengthy documents. Further, a QA system is used to extract the answers from the shortened documents based on the query and to return the top 3 answers. For the evaluation of such systems, accuracy is judged by the exact match between predicted answers and gold answers. However, automatic evaluation methods fail due to the linguistic ambiguities inherent in the questions. Moreover, reference answers are often not exhaustive or are out of date. Hence, correct answers predicted by the system are often judged incorrect according to the automated metrics. One such scenario arises from the original Google Natural Questions (GNQ) dataset, which was collected and made available in the year 2016. Any such dataset proves unreliable for questions that have time-varying answers. For illustration, consider the query "Where will the next Olympics be?" The gold answer given in the GNQ dataset is "Tokyo". Since the dataset was collected in 2016, and the next Olympics after 2016 were held in 2020 in Tokyo, this answer was correct at the time. But if the same question is asked in 2022, the answer is "Paris, 2024". Consequently, any evaluation based on the GNQ dataset will be incorrect for such questions. Such seemingly erroneous predictions are usually given to human evaluators for further validation, which is expensive and time-consuming. To address this erroneous evaluation, the present work proposes an automated approach for evaluating time-dependent question-answer pairs. In particular, it proposes a metric using the current timestamp along with the top-n predicted answers from a given QA system. To test the proposed approach, the GNQ dataset was used, and the system achieved an accuracy of 78% on a test set comprising 100 QA pairs. This test data was automatically extracted, using an analysis-based approach, from 10K QA pairs of the GNQ dataset. The results obtained are encouraging. The proposed technique appears to have the potential to develop into a useful scheme for gathering precise, reliable, and specific information in a real-time and efficient manner. Our subsequent experiments will be directed towards establishing the efficacy of the above system for a larger set of time-dependent QA pairs.
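
Since the abstract only outlines the timestamp-based metric, the following Python sketch shows one plausible form of it, assuming a gold answer that is valid over a date range and crediting a prediction if any of the top-n answers matches the gold answer current at evaluation time; the validity periods and the matching rule are illustrative assumptions, not the authors' exact metric.

```python
from datetime import date

# Hypothetical timestamp-aware gold answers for "Where will the next
# Olympics be?": each entry is (valid_from, valid_to, answer). These
# periods are illustrative placeholders, not dataset annotations.
gold_by_period = [
    (date(2016, 1, 1), date(2021, 8, 8), "tokyo"),
    (date(2021, 8, 9), date(2024, 8, 11), "paris"),
]

def current_gold(today):
    """Return the gold answer valid on the given date, if any."""
    for start, end, answer in gold_by_period:
        if start <= today <= end:
            return answer
    return None

def score(top_n_predictions, today=None):
    """Exact match against the time-appropriate gold answer: 1 if any of
    the system's top-n answers matches, else 0."""
    today = today or date.today()
    gold = current_gold(today)
    return int(any(gold == p.strip().lower() for p in top_n_predictions))

# A system evaluated in 2022 is credited for "Paris", not "Tokyo".
print(score(["Paris", "Tokyo", "Los Angeles"], today=date(2022, 6, 1)))  # 1
```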

Keywords: web-based information retrieval, open domain question answering system, time-varying QA, QA evaluation

Procedia PDF Downloads 88