Search results for: quantification accuracy
3142 Application of Federated Learning in the Health Care Sector for Malware Detection and Mitigation Using Software-Defined Networking Approach
Authors: A. Dinelka Panagoda, Bathiya Bandara, Chamod Wijetunga, Chathura Malinda, Lakmal Rupasinghe, Chethana Liyanapathirana
Abstract:
This research combines Federated Learning and Software-Defined Networking (SDN) to introduce an efficient malware detection technique and a mitigation mechanism, producing a resilient, automated healthcare-sector network system with extended privacy preservation. Because new malware attacks on hospital Integrated Clinical Environments (ICEs) emerge daily, the healthcare industry can never be certain of its operational continuity. Moreover, most healthcare operators and patients do not yet fully understand the risks hidden behind the array of indispensable opportunities that new medical device inventions and their connected coordination offer daily. The solution involves four clients, in the form of hospital networks at different geographical locations, which build up the federated learning experimental architecture and reach the most reasonable accuracy rate with privacy preservation. Logistic regression with cross-entropy loss performs the detection, while SDN comes into play in the second half of the research, supporting the initial development phases of the system with policy-based malware mitigation. The overall evaluation demonstrates a system that delivers accuracy with added privacy, so there is no longer a need for traditional centralized systems that offer almost everything except privacy.
Keywords: software-defined network, federated learning, privacy, integrated clinical environment, decentralized learning, malware detection, malware mitigation
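The abstract gives no implementation details, but the setup it describes, four hospital clients jointly training a logistic regression detector with cross-entropy loss, can be illustrated with a minimal federated-averaging sketch in Python. All names, the learning rate, and the round count below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's local training: logistic regression with cross-entropy loss."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))        # sigmoid predictions
        w -= lr * X.T @ (p - y) / len(y)        # gradient of the cross-entropy
    return w

def federated_round(w_global, clients):
    """FedAvg: each hospital trains locally; only weights are shared and averaged."""
    local_weights = [local_update(w_global.copy(), X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    return np.average(local_weights, axis=0, weights=sizes)

rng = np.random.default_rng(0)
# Four hypothetical hospital networks, each with its own (features, malware label) data
clients = [(rng.normal(size=(200, 8)), rng.integers(0, 2, 200)) for _ in range(4)]
w = np.zeros(8)
for _ in range(20):                              # communication rounds
    w = federated_round(w, clients)
```

The raw traffic data never leaves a hospital; only the model weights are exchanged, which is the privacy-preservation mechanism the abstract refers to.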
Procedia PDF Downloads 187
3141 Bias-Corrected Estimation Methods for Receiver Operating Characteristic Surface
Authors: Khanh To Duc, Monica Chiogna, Gianfranco Adimari
Abstract:
With three diagnostic categories, assessment of the performance of diagnostic tests is achieved by the analysis of the receiver operating characteristic (ROC) surface, which generalizes the ROC curve for binary diagnostic outcomes. The volume under the ROC surface (VUS) is a summary index usually employed for measuring the overall diagnostic accuracy. When the true disease status can be exactly assessed by means of a gold standard (GS) test, unbiased nonparametric estimators of the ROC surface and VUS are easily obtained. In practice, unfortunately, disease status verification via the GS test may not be available for all study subjects, due to the expense or invasiveness of the GS test. Thus, often only a subset of patients undergoes disease verification. Statistical evaluations of diagnostic accuracy based only on data from subjects with verified disease status are typically biased. This bias is known as verification bias. Here, we consider the problem of correcting for verification bias when continuous diagnostic tests for three-class disease status are considered. We assume that selection for disease verification does not depend on disease status, given test results and other observed covariates, i.e., we assume that the true disease status, when missing, is missing at random. Under this assumption, we discuss several solutions for ROC surface analysis based on imputation and re-weighting methods. In particular, verification bias-corrected estimators of the ROC surface and of VUS are proposed, namely, full imputation, mean score imputation, inverse probability weighting and semiparametric efficient estimators. Consistency and asymptotic normality of the proposed estimators are established, and their finite sample behavior is investigated by means of Monte Carlo simulation studies. Two illustrations using real datasets are also given.
Keywords: imputation, missing at random, inverse probability weighting, ROC surface analysis
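For readers unfamiliar with the VUS, a minimal sketch of the nonparametric estimator (the fraction of correctly ordered triples across the three classes) and of how inverse-probability weights would enter is given below; the weighting shown is an illustrative assumption, not the authors' exact semiparametric estimator.

```python
import numpy as np

def vus(x1, x2, x3, w1=None, w2=None, w3=None):
    """Nonparametric VUS: (weighted) proportion of triples with x1 < x2 < x3."""
    w1 = np.ones(len(x1)) if w1 is None else np.asarray(w1, float)
    w2 = np.ones(len(x2)) if w2 is None else np.asarray(w2, float)
    w3 = np.ones(len(x3)) if w3 is None else np.asarray(w3, float)
    num = den = 0.0
    for i, a in enumerate(x1):
        for j, b in enumerate(x2):
            for k, c in enumerate(x3):
                w = w1[i] * w2[j] * w3[k]
                num += w * (a < b < c)
                den += w
    return num / den

rng = np.random.default_rng(0)
# Test results for the three (fully verified) disease classes
x1, x2, x3 = rng.normal(0, 1, 30), rng.normal(1, 1, 30), rng.normal(2, 1, 30)
print(vus(x1, x2, x3))
# IPW variant: restrict to verified subjects and pass w = 1 / pi as weights,
# where pi is the modeled probability of verification given test and covariates.
```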
Procedia PDF Downloads 416
3140 Methodology and Credibility of Unmanned Aerial Vehicle-Based Cadastral Mapping
Authors: Ajibola Isola, Shattri Mansor, Ojogbane Sani, Olugbemi Tope
Abstract:
The cadastral map is the rationale behind city management planning and development. For years, cadastral maps have been produced by ground and photogrammetry platforms. Recent evolution in photogrammetry and remote sensing sensors has spurred the use of Unmanned Aerial Vehicle systems (UAVs) for cadastral mapping. Despite the time-saving and multi-dimensional cost-effectiveness of the UAV platform, issues related to cadastral map accuracy are a hindrance to the wide applicability of UAV cadastral mapping. This study aims to present an approach for generating UAV cadastral maps and assessing their credibility. Different sets of Red, Green, and Blue (RGB) photos were obtained from the Tarot 680 hexacopter UAV platform flown over the Universiti Putra Malaysia campus sports complex at altitudes of 70 m, 100 m, and 250 m. Before flying the UAV, twenty-eight ground control points were evenly established in the study area with a real-time kinematic differential global positioning system. The second phase of the study utilizes an image-matching algorithm for photo alignment, wherein camera calibration parameters and ten of the established ground control points were used for estimating the inner, relative, and absolute orientations of the photos. The resulting orthoimages are exported to ArcGIS software for digitization. Visual, tabular, and graphical assessments of the resulting cadastral maps showed different levels of accuracy. The results of the study provide a step-by-step approach for generating UAV cadastral maps and show that the cadastral map acquired at 70 m altitude produced the best results.
Keywords: aerial mapping, orthomosaic, cadastral map, flying altitude, image processing
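Credibility assessment of this kind typically reduces to residuals at independent check points; a minimal RMSE sketch with hypothetical coordinates follows.

```python
import numpy as np

# Hypothetical check-point coordinates: surveyed (RTK-DGPS) vs. digitized from the orthoimage
surveyed  = np.array([[452101.32, 310207.85], [452188.10, 310311.42], [452260.77, 310154.09]])
digitized = np.array([[452101.40, 310207.79], [452188.03, 310311.55], [452260.90, 310154.01]])

residuals = digitized - surveyed
rmse_xy = np.sqrt((residuals**2).sum(axis=0) / len(surveyed))       # per-axis RMSE (m)
rmse_planimetric = np.sqrt((residuals**2).sum() / len(surveyed))    # combined horizontal RMSE
print(rmse_xy, rmse_planimetric)
```

Repeating this computation for the 70 m, 100 m, and 250 m orthoimages is what supports the conclusion that the lowest flying altitude gives the best accuracy.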
Procedia PDF Downloads 82
3139 Fin Efficiency of Helical Fin with Fixed Fin Tip Temperature Boundary Condition
Authors: Richard G. Carranza, Juan Ospina
Abstract:
The fin efficiency for a helical fin with a fixed (or arbitrary) fin tip temperature boundary condition is presented. Firstly, the temperature profile throughout the fin is determined via an energy balance around the fin itself. Secondly, the fin efficiency is formulated by integrating across the entire surface of the helical fin. An analytical expression for the fin efficiency is presented and compared with the literature for accuracy.
Keywords: efficiency, fin, heat, helical, transfer
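The analytical expression itself is not reproduced in the abstract; for reference, the quantity being derived is the standard fin efficiency, which in the notation assumed here reads:

```latex
\eta_f \;=\; \frac{q_{\text{fin}}}{q_{\text{max}}}
       \;=\; \frac{\displaystyle\int_{A_{\text{fin}}} h\,\bigl(T - T_\infty\bigr)\,\mathrm{d}A}
                  {h\,A_{\text{fin}}\,\bigl(T_b - T_\infty\bigr)}
```

where T is the fin temperature profile obtained from the energy balance, T_b the base temperature, T∞ the ambient temperature, h the convection coefficient, and A_fin the helical fin surface over which the integration is carried out.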
Procedia PDF Downloads 684
3138 Application of a Universal Distortion Correction Method in Stereo-Based Digital Image Correlation Measurement
Authors: Hu Zhenxing, Gao Jianxin
Abstract:
Stereo-based digital image correlation (also referred to as three-dimensional (3D) digital image correlation (DIC)) is a technique for both 3D shape and surface deformation measurement of a component, which has found increasing applications in academia and industry. The accuracy of the reconstructed coordinates depends on many factors, such as the configuration of the setup, stereo-matching, distortion, etc. Most of these factors have been investigated in the literature. For instance, the configuration of a binocular vision system determines the systematic errors. The stereo-matching errors depend on the speckle quality and the matching algorithm, which can only be controlled in a limited range. And the distortion is non-linear, particularly in a complex imaging acquisition system. Thus, the distortion correction should be carefully considered. Moreover, the distortion function is difficult to formulate in a complex imaging acquisition system using conventional models in cases where microscopes and other complex lenses are involved. The errors of the distortion correction will propagate to the reconstructed 3D coordinates. To address the problem, an accurate mapping method based on 2D B-spline functions is proposed in this study. The mapping functions are used to convert the distorted coordinates into an ideal plane without distortions. This approach is suitable for any image acquisition distortion model. It is used as a prior step to convert a distorted coordinate to an ideal position, which enables the camera to conform to the pin-hole model. A procedure of this approach is presented for stereo-based DIC. Using 3D speckle image generation, numerical simulations were carried out to compare the accuracy of both the conventional method and the proposed approach.
Keywords: distortion, stereo-based digital image correlation, b-spline, 3D, 2D
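A minimal sketch of the idea, fitting 2D B-spline mapping functions from distorted to ideal image coordinates on a calibration grid, is shown below using SciPy; the toy distortion and grid are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import SmoothBivariateSpline

# Calibration target: distorted coordinates (xd, yd) observed by the camera,
# with known ideal (pin-hole) positions (xi, yi) for the same grid points.
rng = np.random.default_rng(1)
xd = rng.uniform(0, 2048, 400)
yd = rng.uniform(0, 2048, 400)
r = np.hypot(xd - 1024, yd - 1024) / 1024
xi = xd + 0.002 * (xd - 1024) * r          # toy radial distortion
yi = yd + 0.002 * (yd - 1024) * r

# One B-spline surface per output coordinate: (xd, yd) -> ideal plane
fx = SmoothBivariateSpline(xd, yd, xi, kx=3, ky=3)
fy = SmoothBivariateSpline(xd, yd, yi, kx=3, ky=3)

# Correct an arbitrary measured point before triangulation / 3D reconstruction
x_corr, y_corr = fx.ev(812.3, 1440.7), fy.ev(812.3, 1440.7)
```

Because the splines make no assumption about the lens model, the same fitting step applies to microscopes and other complex optics where parametric distortion models fail.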
Procedia PDF Downloads 498
3137 Modeling Atmospheric Correction for Global Navigation Satellite System Signal to Improve Urban Cadastre 3D Positional Accuracy Case of: TANA and ADIS IGS Stations
Authors: Asmamaw Yehun
Abstract:
TANA is the name of an International GNSS Service (IGS) Global Positioning System (GPS) station located at the Institute of Land Administration of Bahir Dar University. The station is named after Lake Tana, one of the largest lakes in Africa. The Institute of Land Administration (ILA) is part of Bahir Dar University, located in Bahir Dar, the capital of the Amhara National Regional State. The institute is the first of its kind in East Africa. The station was installed through the cooperation of ILA and the Swedish International Development Agency (SIDA), with SIDA funding support. A Continuously Operating Reference Station (CORS) network provides global navigation satellite system data to support three-dimensional positioning, meteorology, space weather, and geophysical applications throughout the globe. TANA has operated as a CORS since 2013; such sites are independently owned and operated by governments, research and education facilities, and others. The data collected by the reference station can be downloaded over the Internet for post-processing by interested parties who carry out GNSS measurements and want to achieve higher accuracy. We made the first observations at the TANA monitor station on May 29th, 2013. We used Leica 1200 receivers and AX1202GG antennas and observed from 11:30 until 15:20, about 3 h 50 min. Processing of the data was done with the automatic post-processing service CSRS-PPP of Natural Resources Canada (NRCan). Post-processing was done on June 27th, 2013, so precise ephemerides were used 30 days after observation. We found Latitude (ITRF08): 11 34 08.6573 (dms) / 0.008 (m), Longitude (ITRF08): 37 19 44.7811 (dms) / 0.018 (m), and Ellipsoidal Height (ITRF08): 1850.958 (m) / 0.037 (m). We compared this result with GAMIT/GLOBK-processed data, and the two were very close and accurate. TANA has been the second IGS station in Ethiopia since 2015. It provides data to civilian users, researchers, and governmental and non-governmental users. TANA is equipped with a very advanced choke-ring antenna and a Leica GR25 receiver, and the site offers very good satellite visibility. To test the effect of hydrostatic and wet zenith delays on positional data quality, we used GAMIT/GLOBK and found that TANA is the most accurate IGS station in East Africa. Due to lower tropospheric zenith and ionospheric delays, the TANA and ADIS IGS stations have 3D positional accuracies of 2 and 1.9 meters, respectively.
Keywords: atmosphere, GNSS, neutral atmosphere, precipitable water vapour
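As background for the zenith-delay discussion, the hydrostatic component is commonly computed with the standard Saastamoinen model; the sketch below illustrates that model (not the exact GAMIT/GLOBK implementation), with an assumed surface pressure.

```python
import numpy as np

def zenith_hydrostatic_delay(pressure_hpa, lat_deg, height_m):
    """Saastamoinen zenith hydrostatic delay (m) from surface pressure,
    latitude, and ellipsoidal height."""
    f = 1 - 0.00266 * np.cos(2 * np.radians(lat_deg)) - 0.00028 * height_m / 1000.0
    return 0.0022768 * pressure_hpa / f

# TANA-like site: ~1851 m ellipsoidal height, latitude ~11.57 deg,
# with an assumed surface pressure of 810 hPa
print(zenith_hydrostatic_delay(pressure_hpa=810.0, lat_deg=11.57, height_m=1851.0))
```

The station's high elevation lowers the surface pressure and hence the hydrostatic delay, which is consistent with the paper's point about lower tropospheric zenith delay at TANA.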
Procedia PDF Downloads 69
3136 Baseline Study for Performance Evaluation of New Generation Solar Insulation Films for Windows: A Test Bed in Singapore
Authors: Priya Pawar, Rithika Susan Thomas, Emmanuel Blonkowski
Abstract:
Due to the solar geometry of Singapore, which lies within the equatorial tropics, a great deal of thermal energy is transferred to the inside of buildings. With the changing face of economic development in cities like Singapore, more and more buildings are designed to be lightweight, using transparent construction materials such as glass. Increasing demands for energy efficiency and reduced cooling loads make it important for building designers and operators to adopt new, non-invasive technologies to achieve building energy efficiency targets. A real-time performance evaluation study was undertaken at the School of Art, Design and Media (SADM), Singapore, to determine the efficiency potential of a new generation solar insulation film. The building has a window-to-wall ratio (WWR) of 100% and is fitted with high-performance (low-emissivity) double-glazed units. The empirical data collected were then used to calibrate a computerized simulation model to estimate the annual energy consumption under existing conditions (baseline performance). It was found that quantifying the correlations of parameters such as solar irradiance, solar heat flux, and outdoor air temperature is significantly important for determining the cooling load during a particular testing period.
Keywords: solar insulation film, building energy efficiency, tropics, cooling load
Procedia PDF Downloads 193
3135 Comparison of Number of Waves Surfed and Duration Using Global Positioning System and Inertial Sensors
Authors: João Madureira, Ricardo Lagido, Inês Sousa, Fraunhofer Portugal
Abstract:
Surfing is an increasingly popular sport, and its performance evaluation is often qualitative. This work aims at using a smartphone to collect and analyze GPS and inertial sensor data in order to obtain quantitative metrics of surfing performance. Two approaches are compared for the detection of wave rides: computing the number of waves ridden in a surfing session, the starting time of each wave, and its duration. The first approach is based on computing the velocity from the Global Positioning System (GPS) signal and finding the velocity thresholds that allow identifying the start and end of each wave ride. The second approach adds information from the Inertial Measurement Unit (IMU) of the smartphone to the velocity thresholds obtained from the GPS unit to determine the start and end of each wave ride. The two methods were evaluated using GPS and IMU data from two surfing sessions and validated with similar metrics extracted from video data collected from the beach. The second method, combining GPS and IMU data, was found to be more accurate in determining the number of waves, start times, and durations. This paper shows that it is feasible to use smartphones for the quantification of performance metrics during surfing. In particular, the waves ridden and their durations can be accurately determined using the smartphone GPS and IMU.
Keywords: inertial measurement unit (IMU), global positioning system (GPS), smartphone, surfing performance
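The first, GPS-only approach lends itself to a compact sketch: compute the speed from consecutive fixes and find contiguous runs above a threshold. The threshold values and sampling rate below are assumptions for illustration, not the values tuned in the paper.

```python
import numpy as np

def detect_waves(speed, fs=1.0, v_start=2.5, v_end=1.5, min_dur=3.0):
    """Detect wave rides from GPS speed (m/s) with start/end thresholds (hysteresis).
    Returns (start_time_s, duration_s) per detected wave."""
    waves, riding, t0 = [], False, 0
    for i, v in enumerate(speed):
        if not riding and v >= v_start:
            riding, t0 = True, i
        elif riding and v < v_end:
            riding = False
            dur = (i - t0) / fs
            if dur >= min_dur:
                waves.append((t0 / fs, dur))
    return waves

speed = np.abs(np.random.default_rng(2).normal(1.0, 1.2, 600))  # stand-in for GPS speed
for start, dur in detect_waves(speed):
    print(f"wave at {start:.0f} s, {dur:.1f} s long")
```

The second method described in the abstract would additionally gate these detections with IMU features (e.g., acceleration variance) to reject paddling bursts that exceed the speed threshold.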
Procedia PDF Downloads 401
3134 Isoflavone and Mineral Content in Conventional Commercial Soybean Cultivars and Transgenic Soybean Planted in Minas Gerais, Brazil
Authors: Renata Adriana Labanca, Gabriela Rezende Costa, Nilton de Oliveira Couto e Silva, José Marcos Gontijo Mandarino, Rodrigo Santos Leite, Nilson César Castanheira Guimarães, Roberto Gonçalves Junqueira
Abstract:
The objective of this study was to evaluate the differences in composition between six brands of conventional soybean and six genetically modified (GM) cultivars, all from Minas Gerais State, Brazil. We focused on the isoflavone profile and mineral content, questioning the substantial equivalence between conventional and GM organisms. The compliance statement on the label of the conventional grains was verified by testing for the presence of genetically modified genes by real-time polymerase chain reaction (PCR). We did not detect the presence of the 35S promoter in the commercial samples, indicating the absence of transgene insertion. For mineral analysis, we used inductively coupled plasma-optical emission spectrometry (ICP-OES). Isoflavone quantification was performed by high performance liquid chromatography (HPLC). The results showed no statistical difference between the conventional and transgenic soybean groups concerning isoflavone content and mineral composition. The concentration of potassium, the main mineral component of soy, was highest in conventional soybeans, while the GM samples presented the highest concentrations of iron.
Keywords: glycine max, genetically modified organism, bioactive compounds, ICP-OES, HPLC
Procedia PDF Downloads 457
3133 A Simple Model for Solar Panel Efficiency
Authors: Stefano M. Spagocci
Abstract:
The efficiency of photovoltaic panels can be calculated with software packages such as RETScreen, which allow design engineers to take financial as well as technical considerations into account. RETScreen is interfaced with meteorological databases, so that efficiency calculations can be carried out realistically. The author has recently contributed to the development of solar modules with accumulation capability and an embedded water purifier, aimed at off-grid users such as those in developing countries. The software packages examined do not allow ancillary equipment to be taken into account, hence the decision to implement a technical and financial model of the system. The author realized that, rather than re-implementing the quite sophisticated model of RETScreen - a mathematical description of which is, in any case, not publicly available - it was possible to drastically simplify it, including the meteorological factors which, in RETScreen, are presented in numerical form. The day-by-day efficiency of a photovoltaic solar panel was parametrized as the product of factors expressing, respectively, daytime duration, solar right ascension motion, solar declination motion, cloudiness, and temperature. For the sun-motion-dependent factors, positional astronomy formulae, simplified by the author, were employed. Meteorology-dependent factors were fitted by simple trigonometric functions, employing numerical data supplied by RETScreen. The accuracy of the model was tested by comparing it to the predictions of RETScreen; agreement within 11% was obtained. In conclusion, the study resulted in a model that can be easily implemented in a spreadsheet - thus being easily manageable by non-specialist personnel - or in more sophisticated software packages. The model was used in a number of design exercises concerning photovoltaic solar panels and ancillary equipment like the above-mentioned water purifier.
Keywords: clean energy, energy engineering, mathematical modelling, photovoltaic panels, solar energy
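As an illustration of the factorized day-by-day model, the sketch below combines a day-length factor from standard positional-astronomy formulae with a placeholder cloudiness factor; the trigonometric coefficients are assumed, since the paper's actual fits are not given in the abstract.

```python
import numpy as np

def declination(n):
    """Solar declination (rad) for day-of-year n (Cooper's formula)."""
    return np.radians(23.45) * np.sin(2 * np.pi * (284 + n) / 365)

def day_length_factor(n, lat_deg):
    """Daytime duration normalized to 24 h, from the sunset hour angle."""
    phi = np.radians(lat_deg)
    ws = np.arccos(np.clip(-np.tan(phi) * np.tan(declination(n)), -1, 1))
    return (2 * np.degrees(ws) / 15) / 24      # hours of daylight / 24

def cloudiness_factor(n):
    """Placeholder seasonal cloudiness fit (illustrative trigonometric form)."""
    return 0.75 + 0.15 * np.cos(2 * np.pi * (n - 30) / 365)

n = np.arange(1, 366)
relative_output = day_length_factor(n, lat_deg=45.0) * cloudiness_factor(n)
```

Each factor is a simple closed form, which is exactly what makes the model implementable in a spreadsheet, as the abstract notes.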
Procedia PDF Downloads 68
3132 Determination of Gold in Microelectronics Waste Pieces
Authors: S. I. Usenko, V. N. Golubeva, I. A. Konopkina, I. V. Astakhova, O. V. Vakhnina, A. A. Korableva, A. A. Kalinina, K. B. Zhogova
Abstract:
Gold can be determined in natural objects and manufactured articles of different origins. The current status of research and the problems of determining high gold levels in alloys and manufactured articles are described in detail in the literature. No less important is the determination of this metal in minerals, process products, and waste pieces. The latter, as objects of chemical analysis for gold content, are the hardest to study, for two reasons: the high requirements for the accuracy of the analysis results, and the differences in chemical and phase composition. As a rule, such objects are characterized by a compound, variable, and very often unknown matrix composition, which leads to unpredictable and uncontrolled effects on the accuracy and other analytical characteristics of the analysis technique. In this paper, methods for the determination of gold are described, using flame atomic absorption spectrophotometry and a gravimetric analysis technique. The techniques are aimed at gold determination in a solution for gold etching (KJ+J2), in the technological mixture formed after cleaning stainless steel members of a vacuum-deposition installation with concentrated nitric and hydrochloric acids, as well as in gold-containing powder resulting from liquid waste reprocessing. Optimal conditions for sample preparation and analysis of liquid and solid waste specimens of compound and variable matrix composition were chosen. The boundaries of the relative resultant error were determined for the methods within the range of gold mass concentrations from 0.1 to 30 g/dm3 in the specimens of liquid wastes and mass fractions from 3 to 80% in the specimens of solid wastes.
Keywords: microelectronics waste pieces, gold, sample preparation, atomic-absorption spectrophotometry, gravimetric analysis technique
Procedia PDF Downloads 204
3131 Decoding Kinematic Characteristics of Finger Movement from Electrocorticography Using Classical Methods and Deep Convolutional Neural Networks
Authors: Ksenia Volkova, Artur Petrosyan, Ignatii Dubyshkin, Alexei Ossadtchi
Abstract:
Brain-computer interfaces are a growing research field producing many implementations that find use in different fields for research and practical purposes. Despite the popularity of implementations using non-invasive neuroimaging methods, radical improvement of the channel bandwidth and, thus, decoding accuracy is only possible by using invasive techniques. Electrocorticography (ECoG) is a minimally invasive neuroimaging method that provides highly informative brain activity signals, effective analysis of which requires the use of machine learning methods that are able to learn representations of complex patterns. Deep learning is a family of machine learning algorithms that allow learning representations of data with multiple levels of abstraction. This study explores the potential of deep learning approaches for ECoG processing, decoding movement intentions and the perception of proprioceptive information. To obtain synchronous recordings of kinematic movement characteristics and the corresponding electrical brain activity, a series of experiments was carried out in which subjects performed finger movements at their own pace. Finger movements were recorded with a three-axis accelerometer, while ECoG was synchronously registered from electrode strips implanted over the contralateral sensorimotor cortex. Then, multichannel ECoG signals were used to track the finger movement trajectory characterized by the accelerometer signal. This was carried out both causally and non-causally, using different positions of the ECoG data segment with respect to the accelerometer data stream. The recorded data were split into training and testing sets containing continuous non-overlapping fragments of the multichannel ECoG. A deep convolutional neural network was implemented and trained, using 1-second segments of ECoG data from the training dataset as input. To assess the decoding accuracy, the correlation coefficient r between the output of the model and the accelerometer readings was computed. After optimization of hyperparameters and training, the deep learning model allowed reasonably accurate causal decoding of finger movement with a correlation coefficient of r = 0.8. In contrast, the classical Wiener-filter-like approach was able to achieve only 0.56 in the causal decoding mode. In the non-causal case, the traditional approach reached an accuracy of r = 0.69, which may be due to the presence of additional proprioceptive information. This result demonstrates that the deep neural network was able to effectively find a representation of the complex top-down information related to the actual movement rather than proprioception. The sensitivity analysis shows physiologically plausible pictures of the extent to which individual features (channel, wavelet subband) are utilized during the decoding procedure. In conclusion, the results of this study demonstrate that a combination of a minimally invasive neuroimaging technique such as ECoG and advanced machine learning approaches allows decoding of motion with high accuracy. Such a setup provides means for the control of devices with a large number of degrees of freedom as well as exploratory studies of the complex neural processes underlying movement execution.
Keywords: brain-computer interface, deep learning, ECoG, movement decoding, sensorimotor cortex
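The network architecture is not specified in the abstract; the sketch below shows a generic 1D convolutional decoder of the kind described, mapping a 1-second multichannel ECoG window to one kinematic value and scored by the correlation coefficient r. Channel count, sampling rate, and layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class ECoGDecoder(nn.Module):
    """Maps a 1-s ECoG window (channels x samples) to one kinematic value."""
    def __init__(self, n_channels=32, n_samples=1000):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=25, stride=5), nn.ReLU(),
            nn.Conv1d(16, 16, kernel_size=9, stride=3), nn.ReLU(),
            nn.Flatten(),
        )
        with torch.no_grad():                     # infer flattened size once
            flat = self.net(torch.zeros(1, n_channels, n_samples)).shape[1]
        self.head = nn.Linear(flat, 1)

    def forward(self, x):                         # x: (batch, channels, samples)
        return self.head(self.net(x)).squeeze(-1)

def corr(pred, target):
    """Decoding accuracy: correlation r between model output and accelerometer."""
    pred, target = pred - pred.mean(), target - target.mean()
    return (pred * target).sum() / (pred.norm() * target.norm())

model = ECoGDecoder()
y_hat = model(torch.randn(4, 32, 1000))           # four 1-s windows
```

Causal versus non-causal decoding corresponds simply to whether the 1-s window ends at, or is centered on, the accelerometer sample being predicted.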
Procedia PDF Downloads 177
3130 Dimensionality Reduction in Modal Analysis for Structural Health Monitoring
Authors: Elia Favarelli, Enrico Testi, Andrea Giorgetti
Abstract:
Autonomous structural health monitoring (SHM) of many structures and bridges has become a topic of paramount importance for maintenance purposes and safety reasons. This paper proposes a set of machine learning (ML) tools to perform automatic feature selection and detection of anomalies in a bridge from vibrational data, and compares different feature extraction schemes to increase the accuracy and reduce the amount of data collected. As a case study, the Z-24 bridge is considered because of its extensive database of accelerometric data in both standard and damaged conditions. The proposed framework starts from the first four fundamental frequencies extracted through operational modal analysis (OMA) and clustering, followed by density-based time-domain filtering (tracking). The extracted fundamental frequencies are then fed to a dimensionality reduction block implemented through two different approaches: feature selection (an intelligent multiplexer) that tries to estimate the most reliable frequencies based on the evaluation of some statistical features (i.e., mean value, variance, kurtosis), and feature extraction (an auto-associative neural network (ANN)) that combines the fundamental frequencies to extract new damage-sensitive features in a low-dimensional feature space. Finally, one-class classifier (OCC) algorithms perform anomaly detection, trained with standard-condition points and tested with normal and anomalous ones. In particular, a new anomaly detector strategy is proposed, namely one-class classifier neural network two (OCCNN2), which exploits the classification capability of standard classifiers in an anomaly detection problem, finding the standard class (the boundary of the feature space in normal operating conditions) through a two-step approach: coarse and fine boundary estimation. The coarse estimation uses classic OCC techniques, while the fine estimation is performed through a feedforward neural network (NN) trained on the boundaries estimated in the coarse step. The detection algorithms are then compared with known methods based on principal component analysis (PCA), kernel principal component analysis (KPCA), and the auto-associative neural network (ANN). In many cases, the proposed solution increases the performance with respect to the standard OCC algorithms in terms of F1 score and accuracy. In particular, by evaluating the correct features, anomalies can be detected with an accuracy and an F1 score greater than 96% with the proposed method.
Keywords: anomaly detection, frequencies selection, modal analysis, neural network, sensor network, structural health monitoring, vibration measurement
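The two-step boundary estimation behind OCCNN2 can be sketched as follows: a classical OCC produces a coarse in/out labeling of the feature space, and a feedforward NN is then trained on those labels to refine the boundary. The use of a One-Class SVM for the coarse step and the network size are assumptions for illustration.

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
F_train = rng.normal(0, 1, (500, 4))        # fundamental frequencies, standard conditions

# Step 1 (coarse): a classical OCC estimates the boundary of the normal class
coarse = OneClassSVM(nu=0.05, gamma="scale").fit(F_train)

# Step 2 (fine): sample the feature space, label it with the coarse OCC,
# and train a feedforward NN to learn a sharper boundary
grid = rng.uniform(-4, 4, (20000, 4))
labels = (coarse.predict(grid) == 1).astype(int)
fine = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500).fit(grid, labels)

# Anomaly detection: points the fine model rejects are flagged as damage candidates
F_test = rng.normal(0.5, 1.5, (100, 4))
anomaly = fine.predict(F_test) == 0
```

The appeal of the second step is that an ordinary two-class classifier, which is well understood and easy to train, ends up solving a one-class problem.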
Procedia PDF Downloads 123
3129 Optimized Weight Selection of Control Data Based on Quotient Space of Multi-Geometric Features
Authors: Bo Wang
Abstract:
The geometric processing of multi-source remote sensing data using control data of different scales and different accuracies is an important research direction for multi-platform earth observation systems. In existing block bundle adjustment methods, the control information is used at a single observation scale and precision, which makes it impossible to screen the control information and assign reasonable, effective weights, reducing the convergence and reliability of the adjustment results. Drawing on the theory and technology of quotient space, several subjects are researched in this project. A multi-layer quotient space of multi-geometric features is constructed to describe and filter control data. A normalized granularity merging mechanism for multi-layer control information is studied and, based on the normalized scale factor, a strategy is realized to optimize the weight selection of control data that is less relevant to the adjustment system. At the same time, geometric positioning experiments are conducted using multi-source remote sensing data, aerial images, and multiclass control data to verify the theoretical research results. This research is expected to break through the limitation of single-scale, single-accuracy control data in the adjustment process and to expand the theory and technology of photogrammetry, thus solving the problem of processing multi-source remote sensing data both theoretically and practically.
Keywords: multi-source image geometric process, high precision geometric positioning, quotient space of multi-geometric features, optimized weight selection
Procedia PDF Downloads 284
3128 High Resolution Solid State NMR Structural Study of a Ternary Hydraulic Mixture
Authors: Rym Sassi, Franck Fayon, Mohend Chaouche, Emmanuel Veron, Valerie Montouillout
Abstract:
The chemical phenomena occurring during cement hydration are complex and interdependent, and even after almost two centuries of study, they remain difficult to resolve for complex mixtures combining different hydraulic binders. Powder XRD has been widely used for characterizing the crystalline phases in both anhydrous and hydrated cement, but it provides only limited information on strongly disordered and amorphous phases. In contrast, local spectroscopies like solid-state NMR can provide a quantitative description of non-crystalline phases. In this work, the structural modifications occurring during hydration of a fast-setting ternary binder based on white Portland cement, white calcium aluminate cement, and calcium sulfate were investigated using advanced solid-state NMR methods. We particularly focused on the early stage of hydration, up to 28 days, working with samples whose hydration was controlled and stopped. ²⁷Al MQ-MAS as well as {¹H}-²⁷Al and {¹H}-²⁹Si Cross-Polarization MAS NMR techniques were combined to distinguish all of the aluminum and silicon species formed during hydration. The NMR quantification of the different phases was conducted in parallel with the XRD analyses. The consumption of initial products, as well as the precipitation of hydrated phases (ettringite, monosulfate, strätlingite, CSH, and CASH), was unambiguously quantified. Finally, the consumption and formation profiles of the phases were correlated with mechanical strength measurements.
Keywords: cement, hydration, hydrates structure, mechanical strength, NMR
Procedia PDF Downloads 154
3127 Fast Aerodynamic Evaluation of Transport Aircraft in Early Phases
Authors: Xavier Bertrand, Alexandre Cayrel
Abstract:
The early phase of an aircraft development is instrumental, as it really drives the potential of a new concept. Any weakness in the high-level design (wing planform, moveable surfaces layout, etc.) will be extremely difficult and expensive to recover later in the aircraft development process. Aerodynamic evaluation in this very early development phase is driven by two main criteria: a short lead time, to allow quick iterations of the geometrical design, and a high quality of the calculations, to get an accurate and reliable assessment of the current status. These two criteria are usually quite contradictory. A short lead time of a couple of hours end-to-end can be obtained with very simple tools (semi-empirical methods, for instance), although their accuracy is limited, whereas higher-quality calculations require heavier, more complex tools, which obviously need more complex inputs as well, and a significantly longer lead time. At this point, a choice has to be made between accuracy and lead time. A brand new approach has been developed within Airbus, aiming at obtaining quickly high-quality evaluations of the aerodynamics of an aircraft. This methodology is based on the joint use of surrogate modelling and a lifting line code. The surrogate modelling is used to get the wing section characteristics (e.g. lift coefficient vs. angle of attack), whatever the airfoil geometry, the status of the moveable surfaces (aileron/spoilers), or the high-lift device deployment. From these characteristics, the lifting line code is used to get the 3D effects on the wing, whatever the flow conditions (low/high Mach numbers, etc.). This methodology has been applied successfully to a medium-range aircraft concept.
Keywords: aerodynamics, lifting line, surrogate model, CFD
Procedia PDF Downloads 359
3126 Dicotyledon Weed Quantification Algorithm for Selective Herbicide Application in Maize Crops: Statistical Evaluation of the Potential Herbicide Savings
Authors: Morten Stigaard Laursen, Rasmus Nyholm Jørgensen, Henrik Skov Midtiby, Anders Krogh Mortensen, Sanmohan Baby
Abstract:
This work contributes a statistical model and simulation framework yielding the best possible estimate of the potential herbicide reduction when using the MoDiCoVi algorithm, while requiring an efficacy comparable to conventional spraying. In June 2013, a maize field located in Denmark was seeded. The field was divided into parcels, which were assigned to one of two main groups: 1) control, consisting of subgroups of no spray and full-dose spray; 2) the MoDiCoVi algorithm, subdivided into five different leaf cover thresholds for spray activation. In addition, approximately 25% of the parcels were seeded with additional weeds perpendicular to the maize rows. In total, 299 parcels were randomly assigned to the 28 different treatment combinations. In the statistical analysis, bootstrapping was used for balancing the number of replicates. The achieved potential herbicide saving was found to be 70% to 95%, depending on the initial weed coverage. However, additional field trials covering more seasons and locations are needed to verify the generalisation of these results. There is potential for further herbicide savings, as the time interval between the first and second spraying sessions was not long enough for the weeds to turn yellow; instead, they only stagnated in growth.
Keywords: herbicide reduction, macrosprayer, weed crop discrimination, site-specific, sprayer boom
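The balancing step can be illustrated with a short bootstrap sketch: parcels in each group are resampled with replacement to equalize replicate counts and to attach a confidence interval to the estimated saving. The group sizes and dose data below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical herbicide dose per parcel (fraction of full dose actually sprayed)
modicovi = rng.beta(2, 8, 40)      # algorithm-controlled parcels
full_dose = np.ones(25)            # conventional full-dose parcels

n_boot, n_bal = 10000, 30          # bootstrap replicates; balanced count per group
savings = np.empty(n_boot)
for b in range(n_boot):
    m = rng.choice(modicovi, n_bal, replace=True).mean()
    f = rng.choice(full_dose, n_bal, replace=True).mean()
    savings[b] = 1.0 - m / f       # fractional herbicide reduction

lo, hi = np.percentile(savings, [2.5, 97.5])
print(f"saving: {savings.mean():.0%} (95% CI {lo:.0%}-{hi:.0%})")
```

Resampling both groups to the same size removes the imbalance in replicate counts that the unequal parcel assignment would otherwise introduce.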
Procedia PDF Downloads 297
3125 Method Development for the Determination of Gamma-Aminobutyric Acid in Rice Products by LC-MS-MS
Authors: Cher Rong Matthew Kong, Edmund Tian, Seng Poon Ong, Chee Sian Gan
Abstract:
Gamma-aminobutyric acid (GABA) is a non-protein amino acid that is a functional constituent of certain rice varieties. When consumed, it decreases blood pressure and reduces the risk of hypertension-related diseases. This has led to more research dedicated to the development of functional food products (e.g. germinated brown rice) with enhanced GABA content, and the development of these functional food products has led to increased demand for instrument-based methods that can efficiently and effectively determine GABA content. Current analytical methods require analyte derivatisation and have significant disadvantages: they are labour-intensive and time-consuming, and subject to analyte loss due to the increased complexity of the sample preparation process. To address this, an LC-MS-MS method for the determination of GABA in rice products has been developed and validated. The developed method involves a relatively simple sample preparation process before analysis by HILIC LC-MS-MS. This method eliminates the need for derivatisation, thereby significantly reducing the labour and time associated with such an analysis. Using LC-MS-MS also allows for better differentiation of GABA from any potential co-eluting compounds in the sample matrix. Results obtained from the developed method demonstrated high linearity, accuracy, and precision for the determination of GABA (1 ng/L to 8 ng/L) in a variety of brown rice products. The method can significantly simplify sample preparation steps, improve the accuracy of quantitation, and increase the throughput of analyses, thereby providing a quick but effective alternative to established instrumental analysis methods for GABA in rice.
Keywords: functional food, gamma-aminobutyric acid, germinated brown rice, method development
Procedia PDF Downloads 268
3124 A Multi-Stage Learning Framework for Reliable and Cost-Effective Estimation of Vehicle Yaw Angle
Authors: Zhiyong Zheng, Xu Li, Liang Huang, Zhengliang Sun, Jianhua Xu
Abstract:
The yaw angle plays a significant role in many vehicle safety applications, such as collision avoidance and lane-keeping systems. Although the estimation of the yaw angle has been extensively studied in the existing literature, it remains a challenge to simultaneously achieve a reliable and cost-effective solution in complex urban environments. This paper proposes a multi-stage learning framework to estimate the yaw angle with a monocular camera, which can deal with the challenge in a more reliable manner. In the first stage, an efficient road detection network is designed to extract the road region, providing a highly reliable reference for the estimation. In the second stage, a variational auto-encoder (VAE) is proposed to learn the distribution patterns of road regions, which is particularly suitable for modeling the changing patterns of the yaw angle under different driving maneuvers and inherently enhances the generalization ability. In the last stage, a gated recurrent unit (GRU) network is used to capture the temporal correlations of the learned patterns, which can further improve the estimation accuracy because changes in the deflection angle are relatively easier to recognize among continuous frames. Afterward, the yaw angle can be obtained by combining the estimated deflection angle and the road direction stored in a roadway map. Through effective multi-stage learning, the proposed framework presents high reliability while maintaining good accuracy. Road-test experiments with different driving maneuvers were performed in complex urban environments, and the results validate the effectiveness of the proposed framework.
Keywords: gated recurrent unit, multi-stage learning, reliable estimation, variational auto-encoder, yaw angle
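The last stage, capturing temporal correlations of the learned patterns, can be sketched as a GRU head over per-frame VAE latents; all dimensions below are assumptions, as the paper's hyperparameters are not given in the abstract.

```python
import torch
import torch.nn as nn

class DeflectionGRU(nn.Module):
    """Per-frame VAE latents -> temporally smoothed deflection-angle estimate."""
    def __init__(self, latent_dim=64, hidden=128):
        super().__init__()
        self.gru = nn.GRU(latent_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, z_seq):                  # z_seq: (batch, frames, latent_dim)
        out, _ = self.gru(z_seq)
        return self.head(out[:, -1])           # deflection angle for the latest frame

model = DeflectionGRU()
deflection = model(torch.randn(8, 10, 64))     # 8 clips of 10 consecutive frames
# yaw angle = estimated deflection angle + map-stored road direction (as in the paper)
```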
Procedia PDF Downloads 142
3123 Erosion Susceptibility Zoning and Prioritization of Micro-Watersheds: A Remote Sensing-GIS Based Study of Asan River Basin, Western Doon Valley, India
Authors: Pijush Roy, Vinay Kumar Rai
Abstract:
The present study highlights that the estimation of soil loss and the identification of critical areas for the implementation of best management practices are central to the success of a soil conservation programme. Morphometric and Universal Soil Loss Equation (USLE) factors are quantified using remote sensing and GIS for the prioritization of micro-watersheds in the Asan River catchment, western Doon valley, at the foothills of the Siwalik ranges in the Dehradun district of Uttarakhand, India. The watershed is classified as dendritic in pattern, with a sixth-order stream. The area is divided into very high, high, moderately high, medium, and low susceptibility zones. High to very high erosion zones occur in the urban area and agricultural land. An average annual soil loss of 64 tons/ha/year has been estimated for the watershed. The optimum management practices proposed for the micro-watersheds of the Asan River basin are afforestation, contour bunding, suitable sites for water harvesting structures such as check dams for soil conservation, agronomical measures, and bench terracing.
Keywords: erosion susceptibility zones, morphometric characteristics, prioritization, remote sensing and GIS, universal soil loss equation
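The USLE underlying the zoning is the standard multiplicative model A = R·K·LS·C·P; a minimal per-cell sketch with hypothetical factor rasters follows.

```python
import numpy as np

# Hypothetical factor grids for a small raster window of the catchment
R  = np.full((3, 3), 650.0)                       # rainfall erosivity
K  = np.array([[0.28, 0.30, 0.25]] * 3)           # soil erodibility
LS = np.array([[1.2, 2.8, 4.5]] * 3)              # slope length-steepness
C  = np.array([[0.45, 0.20, 0.05]] * 3)           # cover management
P  = np.ones((3, 3))                              # support practice (none)

A = R * K * LS * C * P                            # soil loss (t/ha/yr) per cell
zones = np.digitize(A, bins=[10, 25, 50, 100])    # low ... very high susceptibility
print(A.round(1), zones, sep="\n")
```

In a GIS workflow, each factor is a raster layer derived from rainfall records, soil maps, the DEM, and land-use classification; the class breaks used for zoning here are illustrative.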
Procedia PDF Downloads 302
3122 Sensor Registration in Multi-Static Sonar Fusion Detection
Authors: Longxiang Guo, Haoyan Hao, Xueli Sheng, Hanjun Yu, Jingwei Yin
Abstract:
In order to prevent target splitting and ensure the accuracy of fusion, system error registration is an important step in a multi-static sonar fusion detection system. To eliminate the inherent system errors of each sonar in detection, including distance error and angle error, this paper uses an offline estimation method for error registration. Suppose several sonars from different platforms work together to detect a target. The target position detected by each sonar is given in that sonar's own reference coordinate system. Based on the two-dimensional stereo projection method, this paper uses the real-time quality control (RTQC) method and the least squares (LS) method to estimate sensor biases. The RTQC method takes the average value of each sonar's data as the observation value, while the LS method applies least-squares processing to each sonar's data to obtain the observation value. For the underwater acoustic environment, a MATLAB simulation is carried out, and the simulation results show that both algorithms can estimate the distance and angle errors of the sonar system. The performance of the two algorithms is also compared through the root mean square error, and the influence of measurement noise on registration accuracy is explored by simulation. The system error convergence of the RTQC method is rapid, but the distribution of targets has a serious impact on its performance. The LS method is not affected by the target distribution, but increasing random noise slows down its convergence rate. The LS method is an improvement on the RTQC method and is widely used in two-dimensional registration. The improved method can be used for underwater multi-target detection registration.
Keywords: data fusion, multi-static sonar detection, offline estimation, sensor registration problem
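The offline LS registration step can be illustrated by estimating a sonar's constant range and bearing biases against reference positions; the sketch below is a simplified illustration with simulated measurements, not the paper's two-dimensional stereo-projection formulation.

```python
import numpy as np

rng = np.random.default_rng(5)
true_r = rng.uniform(500, 1500, 50)              # true ranges to reference targets (m)
true_th = rng.uniform(-np.pi, np.pi, 50)         # true bearings (rad)

db, dth = 12.0, 0.02                             # unknown range / angle biases
meas_r = true_r + db + rng.normal(0, 2.0, 50)    # biased, noisy sonar measurements
meas_th = true_th + dth + rng.normal(0, 0.005, 50)

# LS estimate: regress the residuals on a constant (design matrix of ones)
H = np.ones((50, 1))
bias_r, *_ = np.linalg.lstsq(H, (meas_r - true_r)[:, None], rcond=None)
bias_th, *_ = np.linalg.lstsq(H, (meas_th - true_th)[:, None], rcond=None)
print(bias_r.item(), bias_th.item())             # ~12.0, ~0.02
```

Because the estimate pools all measurements, adding random noise widens the residual spread and slows convergence, which matches the LS behaviour reported in the simulations.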
Procedia PDF Downloads 169
3121 Determination of Marbofloxacin in Pig Plasma Using LC-MS/MS and Its Application to the Pharmacokinetic Studies
Authors: Jeong Woo Kang, MiYoung Baek, Ki-Suk Kim, Kwang-Jick Lee, ByungJae So
Abstract:
Introduction: A fast, easy, and sensitive detection method was developed and validated by liquid chromatography tandem mass spectrometry for the determination of marbofloxacin in pig plasma, which was further applied to study the pharmacokinetics of marbofloxacin. Materials and Methods: The plasma sample (500 μL) was mixed with 1.5 mL of 0.1% formic acid in MeCN to precipitate plasma proteins. After shaking for 20 min, the mixture was centrifuged at 5,000 × g for 30 min. It was then dried under a nitrogen flow at 50℃. A 500 μL aliquot of the sample was injected into the LC-MS/MS system. Chromatographic analysis was carried out with a mobile phase gradient consisting of 0.1% formic acid in D.W. (A) and 0.1% formic acid in MeCN (B) on a C18 reverse phase column. Mass spectrometry was performed using the positive ion mode and multiple reaction monitoring (MRM). Results and Conclusions: The method validation was performed in the sample matrix. Good linearity (R² > 0.999) was observed, and the quantified average recoveries of marbofloxacin were 87-92% at levels of 10-100 ng g⁻¹. The coefficient of variation (CV) for the described method was less than 10% over the range of concentrations studied. The limits of detection (LOD) and quantification (LOQ) were 2 and 5 ng g⁻¹, respectively. This method has also been applied successfully to the pharmacokinetic analysis of marbofloxacin after intravenous (IV), intramuscular (IM), and oral (PO) administration. The mean peak plasma concentration (Cmax) was 2,597 ng g⁻¹ at 0.25 h, 2,587 ng g⁻¹ at 0.44 h, and 2,355 ng g⁻¹ at 1.58 h for IV, IM, and PO, respectively. The area under the plasma concentration-time curve (AUC0-t) was 24.8, 29.0, and 25.2 μg·h/mL for IV, IM, and PO, respectively. The elimination half-life (T1/2) was 8.6, 13.1, and 9.5 h for IV, IM, and PO, respectively. The bioavailability (F) of marbofloxacin in pigs was 117% and 101% for IM and PO, respectively. Based on these results, marbofloxacin presents no pharmacokinetic obstacles to the development of oral formulations such as tablets and capsules.
Keywords: marbofloxacin, LC-MS/MS, pharmacokinetics, chromatographic
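The reported pharmacokinetic parameters can be reproduced from a concentration-time profile with standard non-compartmental formulas; the sketch below uses hypothetical sampling times and concentrations to compute Cmax, AUC by the trapezoidal rule, the terminal half-life, and bioavailability as an AUC ratio (equal doses assumed).

```python
import numpy as np

t = np.array([0.25, 0.5, 1, 2, 4, 8, 12, 24, 36])                   # h, hypothetical
c_po = np.array([900, 1500, 2100, 2300, 1900, 1300, 900, 380, 160])  # ng/mL, oral
c_iv = np.array([2600, 2400, 2100, 1800, 1400, 900, 600, 240, 100])  # ng/mL, IV

cmax, tmax = c_po.max(), t[c_po.argmax()]
auc_po = np.trapz(c_po, t)                                 # ng*h/mL, AUC(0-t)
auc_iv = np.trapz(c_iv, t)

k = -np.polyfit(t[-4:], np.log(c_po[-4:]), 1)[0]           # terminal elimination rate
t_half = np.log(2) / k                                     # elimination half-life (h)
F = auc_po / auc_iv                                        # bioavailability, equal doses
print(cmax, tmax, auc_po, t_half, F)
```

An F slightly above 100%, as reported for the IM route, simply reflects sampling and assay variability in the AUC ratio.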
Procedia PDF Downloads 548
3120 YOLO-IR: Infrared Small Object Detection in High Noise Images
Authors: Yufeng Li, Yinan Ma, Jing Wu, Chengnian Long
Abstract:
Infrared object detection aims at separating small, dim targets from a cluttered background, and its capabilities extend beyond the limits of visible light, making it invaluable in a wide range of applications for improving safety, security, efficiency, and functionality. However, existing methods are usually sensitive to noise in the input infrared image, leading to a decrease in target detection accuracy and an increase in the false alarm rate in high-noise environments. To address this issue, an infrared small target detection algorithm called YOLO-IR is proposed in this paper to improve robustness to high infrared noise. To address the problem that high noise significantly reduces the clarity and reliability of target features in infrared images, we design a soft-threshold coordinate attention mechanism to improve the model's ability to extract target features and its robustness to noise. Since the noise may overwhelm the local details of the target, resulting in the loss of small target features during depth down-sampling, we propose a deep and shallow feature fusion neck to improve detection accuracy. In addition, because generalized Intersection over Union (IoU)-based loss functions may be sensitive to noise and lead to unstable training in high-noise environments, we introduce a Wasserstein-distance-based loss function to improve the training of the model. The experimental results show that YOLO-IR achieves a 5.0% improvement in recall and a 6.6% improvement in F1-score over existing state-of-the-art models.
Keywords: infrared small target detection, high noise, robustness, soft-threshold coordinate attention, feature fusion
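The soft-thresholding at the heart of the proposed attention mechanism is the standard shrinkage operator; a minimal sketch follows, with the threshold produced per channel by a small attention branch. The branch layout and sizes are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

def soft_threshold(x, tau):
    """Shrinkage operator: sign(x) * max(|x| - tau, 0); suppresses noise below tau."""
    return torch.sign(x) * torch.relu(torch.abs(x) - tau)

class SoftThresholdAttention(nn.Module):
    """Learns a per-channel threshold from global context, then denoises features."""
    def __init__(self, channels):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels), nn.ReLU(),
            nn.Linear(channels, channels), nn.Sigmoid(),
        )

    def forward(self, x):                       # x: (batch, channels, H, W)
        ctx = x.abs().mean(dim=(2, 3))          # global average of |features|
        tau = (ctx * self.fc(ctx))[:, :, None, None]  # learned fraction of the context
        return soft_threshold(x, tau)
```

Because the threshold is learned from the feature statistics of each image, the same layer adapts to both low-noise and high-noise inputs.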
Procedia PDF Downloads 73
3119 MRCP as a Pre-Operative Tool for Predicting Variant Biliary Anatomy in Living Related Liver Donors
Authors: Awais Ahmed, Atif Rana, Haseeb Zia, Maham Jahangir, Rashed Nazir, Faisal Dar
Abstract:
Purpose: Biliary complications represent the most common cause of morbidity in living related liver donor transplantation, and detailed preoperative evaluation of biliary anatomic variants is crucial for safe patient selection and improved surgical outcomes. The purpose of this study is to determine the accuracy of preoperative MRCP in predicting biliary variations, compared with intraoperative cholangiography, in living related liver donors. Materials and Methods: From 44 potential donors, 40 consecutive living related liver donors (13 females and 28 males) underwent donor hepatectomy at our centre from April 2012 to August 2013. The MRCP and IOC of all patients were retrospectively reviewed separately by two radiologists and a transplant surgeon. MRCP was performed on 1.5 Tesla MR magnets using a breath-hold, heavily T2-weighted radial slab technique. One patient was excluded due to suboptimal MRCP. The accuracy of MRCP for variant biliary anatomy was calculated. Results: MRCP accurately predicted the biliary anatomy in 38 of 39 cases (97%). Standard biliary anatomy was predicted by MRCP in 25 (64%) donors (100% sensitivity). Variant biliary anatomy was noted in 14 (36%) IOCs, of which MRCP predicted the precise anatomy of 13 variants (93% sensitivity). The two most common variations were drainage of the RPSD into the LHD (50%) and the triple confluence of the RASD, RPSD, and LHD (21%). Conclusion: MRCP is a sensitive imaging tool for precise preoperative mapping of biliary variations, which is critical to surgical decision-making in living related liver transplantation.
Keywords: intraoperative cholangiogram, liver transplantation, living related donors, magnetic resonance cholangio-pancreaticogram (MRCP)
Procedia PDF Downloads 397
3118 Computer-Aided Depression Screening: A Literature Review on Optimal Methodologies for Mental Health Screening
Authors: Michelle Nighswander
Abstract:
Suicide can be a tragic response to mental illness. It is difficult for people to disclose or discuss suicidal impulses, and the stigma surrounding mental health can create a reluctance to seek help for mental illness. Patients may feel pressure to exhibit a socially desirable demeanor rather than reveal these issues, especially if they sense their healthcare provider is pressed for time or if they do not have an extensive history with that provider. Overcoming these barriers can be challenging. Although there are several validated depression and suicide risk instruments, the varying processes used to administer these tools may impact the truthfulness of the responses. A literature review was conducted to find evidence of the impact of the environment on the accuracy of depression screening. Many investigations do not describe the environment, and fewer studies use a comparison design. However, three studies demonstrated that computerized self-reporting may be more likely to elicit truthful and accurate responses, due to the increased privacy it offers compared to a face-to-face interview. These studies showed that patients reported positive reactions to computerized screening for other stigmatizing health conditions, such as alcohol use during pregnancy. Computerized self-screening for depression offers the possibility of more privacy and patient reflection, which could then send a targeted message of risk to the healthcare provider. This could potentially increase accuracy while also increasing time efficiency for the clinic. Considering the persistent effects of mental health stigma, how these screening questions are posed can impact patients' responses. This literature review analyzes trends in depression screening methodologies, the impact of setting on the results, and how this may assist in overcoming one barrier caused by stigma.
Keywords: computerized self-report, depression, mental health stigma, suicide risk
Procedia PDF Downloads 129
3117 Emotion Detection in Twitter Messages Using Combination of Long Short-Term Memory and Convolutional Deep Neural Networks
Authors: Bahareh Golchin, Nooshin Riahi
Abstract:
One of the most significant issues to have attracted much attention in recent years is the recognition of sentiments and emotions in social media texts. The analysis of sentiments and emotions is intended to recognize conceptual information such as the opinions, feelings, attitudes, and emotions of people towards products, services, organizations, people, topics, events, and features in written text. This indicates the breadth of the problem space. In the real world, businesses and organizations are always looking for tools to gather the ideas, emotions, and directions of people about their products, services, or related events. This article uses the Twitter social network, one of the most popular social networks with about 420 million active users, to extract data. Using this social network, users can share their information and opinions about personal issues, policies, products, events, etc. Its data lends itself to the classification of emotional states due to its ready availability. In this study, supervised learning and deep neural network algorithms are used to classify the emotional states of Twitter users. The use of deep learning methods to increase the learning capacity of the model is an advantage, given the large amount of available data. Tweets collected on various topics are classified into four classes using a combination of two bidirectional Long Short-Term Memory networks and a convolutional network. The results obtained from this study, with an average accuracy of 93%, show good results extracted from the proposed framework and improved accuracy compared to previous work.
Keywords: emotion classification, sentiment analysis, social networks, deep neural networks
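A model of the kind described, two bidirectional LSTM layers combined with a convolutional block and a four-class softmax, can be sketched in Keras as follows; the vocabulary size, sequence handling, and layer widths are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Embedding(input_dim=20000, output_dim=128),          # token ids -> vectors
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    layers.Conv1D(64, kernel_size=3, activation="relu"),        # local n-gram patterns
    layers.GlobalMaxPooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(4, activation="softmax"),                      # four emotion classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(padded_token_ids, emotion_labels, epochs=..., validation_split=0.1)
```

The BiLSTM layers capture long-range context in both directions, while the convolutional block picks up short local phrases; the combination is what the paper credits for the accuracy gain.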
Procedia PDF Downloads 137
3116 UV-Vis Spectroscopy as a Tool for Online Tar Measurements in Wood Gasification Processes
Authors: Philip Edinger, Christian Ludwig
Abstract:
The formation and control of tars remain one of the major challenges in the implementation of biomass gasification technologies. Robust, on-line analytical methods are needed to investigate the fate of tar compounds when different measures for their reduction are applied. This work establishes an on-line UV-Vis method, based on a liquid quench sampling system, to monitor tar compounds in biomass gasification processes. Recorded spectra from the liquid phase were analyzed for their tar composition by means of classical least squares (CLS) and partial least squares (PLS) approaches. This allowed for the detection of UV-Vis active tar compounds with detection limits in the low part per million by volume (ppmV) region. The developed method was then applied to two case studies. The first involved a lab-scale reactor, intended to investigate the decomposition of a limited number of tar compounds across a catalyst. The second study involved a gas scrubber as part of a pilot scale wood gasification plant. Tar compound quantification results showed good agreement with off-line based reference methods (GC-FID) when the complexity of tar composition was limited. The two case studies show that the developed method can provide rapid, qualitative information on the tar composition for the purpose of process monitoring. In cases with a limited number of tar species, quantitative information about the individual tar compound concentrations provides an additional benefit of the analytical method.
Keywords: biomass gasification, on-line, tar, UV-Vis
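The CLS step admits a compact linear-algebra sketch: given a matrix of pure-component reference spectra, the tar concentrations that best reconstruct a measured UV-Vis spectrum follow from ordinary least squares. The component set and Gaussian band shapes below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(6)
wavelengths = np.linspace(200, 400, 201)            # nm

def band(center, width):                            # toy Gaussian absorption band
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

# Columns: pure-component reference spectra (e.g., naphthalene, phenol, toluene)
S = np.column_stack([band(275, 12), band(270, 20), band(260, 8)])

c_true = np.array([40.0, 15.0, 25.0])               # ppmV, hypothetical
a_meas = S @ c_true + rng.normal(0, 0.01, len(wavelengths))  # measured spectrum

# CLS: solve a = S c in the least-squares sense
c_hat, *_ = np.linalg.lstsq(S, a_meas, rcond=None)
print(c_hat.round(1))                               # ~ [40, 15, 25]
```

CLS works well exactly when, as the case studies found, the number of tar species is limited and their reference spectra are known; PLS relaxes that requirement by building latent factors from calibration mixtures instead.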
Procedia PDF Downloads 259
3115 Generalized Additive Model for Estimating Propensity Score
Authors: Tahmidul Islam
Abstract:
The Propensity Score Matching (PSM) technique has been widely used for estimating the causal effect of a treatment in observational studies. One major step in implementing PSM is estimating the propensity score (PS). A logistic regression model with additive linear terms for the covariates is the most used technique in many studies. The logistic regression model is also used with cubic splines to retain flexibility in the model. However, choosing the functional form of the logistic regression model has been an open question, since the effectiveness of PSM depends on how accurately the PS has been estimated. In many situations, the linearity assumption of linear logistic regression may not hold, and a non-linear relation between the logit and the covariates may be appropriate. One can estimate the PS using machine learning techniques such as random forests or neural networks for more accuracy in non-linear situations. In this study, an attempt has been made to compare the efficacy of the Generalized Additive Model (GAM) in various linear and non-linear settings and compare its performance with usual logistic regression. GAM is a non-parametric technique where the functional form of the covariates can be left unspecified and a flexible regression model can be fitted. In this study, various simple and complex models have been considered for treatment under several situations (small/large sample, low/high number of treatment units), and it is examined which method leads to more covariate balance in the matched dataset. It is found that the logistic regression model is impressively robust against the inclusion of quadratic and interaction terms and reduces the mean difference between the treatment and control sets as efficiently as GAM does. GAM provided no significantly better covariate balance than logistic regression in either simple or complex models. The analysis also suggests that a larger proportion of controls than treatment units leads to better balance for both methods.
Keywords: accuracy, covariate balances, generalized additive model, logistic regression, non-linearity, propensity score matching
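A minimal sketch of the comparison, propensity scores from a linear-logit logistic regression versus a smooth-term GAM, is given below, assuming the pygam package is available; predict_mu returns the fitted probability, which serves as the PS.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from pygam import LogisticGAM, s

rng = np.random.default_rng(7)
X = rng.normal(size=(1000, 2))
# Treatment assignment with a non-linear logit in the first covariate
logit = 0.8 * X[:, 0] ** 2 - 1.0 + 0.5 * X[:, 1]
treat = rng.binomial(1, 1 / (1 + np.exp(-logit)))

ps_logreg = LogisticRegression().fit(X, treat).predict_proba(X)[:, 1]
ps_gam = LogisticGAM(s(0) + s(1)).fit(X, treat).predict_mu(X)

# Either score can now feed a nearest-neighbour match; covariate balance in the
# matched sets (e.g., standardized mean differences) decides which PS model wins.
```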
Procedia PDF Downloads 367
3114 A Robust System for Foot Arch Type Classification from Static Foot Pressure Distribution Data Using Linear Discriminant Analysis
Authors: R. Periyasamy, Deepak Joshi, Sneh Anand
Abstract:
Foot posture assessment is important for evaluating foot types that cause gait and postural defects in all age groups. Although different methods are used for the classification of foot arch type in clinical/research examinations, there is no clear approach for selecting the most appropriate measurement system. Therefore, the aim of this study was to develop a system for the evaluation of foot type, as a clinical decision-making aid for the diagnosis of flat and normal arches, based on the Arch Index (AI) and a foot pressure distribution parameter, the Power Ratio (PR). The accuracy of the system was evaluated for 27 subjects with ages ranging from 24 to 65 years. Foot area measurements (hindfoot, midfoot, and forefoot) were acquired simultaneously from foot pressure intensity images using the portable PedoPowerGraph system, and the images were analyzed in the frequency domain to obtain the foot pressure distribution parameter, the PR. From our results, we obtain 100% classification accuracy for normal and flat feet by using the linear discriminant analysis method. We observe no misclassification of foot types because foot pressure distribution data are incorporated instead of only the arch index (AI). We found that the midfoot pressure distribution ratio data and arch index (AI) values correlate well with foot arch type based on visual analysis. Therefore, this paper suggests that the proposed system is accurate and makes it easy to determine foot arch type from the arch index (AI) combined with midfoot pressure distribution ratio data, rather than from the physical area of contact alone. Hence, such a computational-tool-based system can help clinicians assess foot structure and cross-check their diagnosis of flat foot from the midfoot pressure distribution.
Keywords: arch index, computational tool, static foot pressure intensity image, foot pressure distribution, linear discriminant analysis
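The decision aid reduces to a two-feature LDA; a minimal sketch with hypothetical AI and midfoot PR values follows.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical training data: [arch index AI, mid-foot power ratio PR]
X = np.array([[0.21, 0.08], [0.23, 0.10], [0.25, 0.12], [0.24, 0.09],   # normal arch
              [0.29, 0.22], [0.31, 0.25], [0.33, 0.27], [0.30, 0.24]])  # flat foot
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])        # 0 = normal, 1 = flat

lda = LinearDiscriminantAnalysis().fit(X, y)
print(lda.predict([[0.27, 0.18]]))            # classify a new subject
print(lda.score(X, y))                        # training accuracy
```

Adding the PR as a second feature is what separates borderline AI values, which is how the paper explains its freedom from misclassification.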
Procedia PDF Downloads 499
3113 Modeling Engagement with Multimodal Multisensor Data: The Continuous Performance Test as an Objective Tool to Track Flow
Authors: Mohammad H. Taheri, David J. Brown, Nasser Sherkat
Abstract:
Engagement is one of the most important factors in determining successful outcomes and deep learning in students. Existing approaches to detecting student engagement involve periodic human observations that are subject to inter-rater reliability issues. Our solution uses real-time multimodal multisensor data, labeled by objective performance outcomes, to infer the engagement of students. The study involves four students with a combined diagnosis of cerebral palsy and a learning disability who took part in a 3-month trial over 59 sessions. Multimodal multisensor data were collected while they participated in a continuous performance test. Eye gaze, electroencephalogram, body pose, and interaction data were used to create a model of student engagement through objective labeling from the continuous performance test outcomes. In order to achieve this, a new type of continuous performance test is introduced, the Seek-X type. Nine features were extracted, including high-level handpicked compound features. Using leave-one-out cross-validation, a series of different machine learning approaches were evaluated. Overall, the random forest classification approach achieved the best classification results. Using random forest, 93.3% classification accuracy for engagement and 42.9% accuracy for disengagement were achieved. We compared these results to outcomes from different models: AdaBoost, decision tree, k-nearest neighbor, naïve Bayes, neural network, and support vector machine. We showed that using a multisensor approach achieved higher accuracy than using features from any reduced set of sensors. We found that using high-level handpicked features can improve the classification accuracy in every sensor mode. Our approach is robust to both sensor fallout and occlusions. The single most important sensor feature for the classification of engagement and distraction was shown to be eye gaze. It has been shown that we can accurately predict the level of engagement of students with learning disabilities with a real-time approach that is not subject to inter-rater reliability, human observation, or reliance on a single mode of sensor input. This will help teachers design interventions for a heterogeneous group of students, where teachers cannot possibly attend to each of their individual needs. Our approach can be used to identify those with the greatest learning challenges so that all students are supported to reach their full potential.
Keywords: affective computing in education, affect detection, continuous performance test, engagement, flow, HCI, interaction, learning disabilities, machine learning, multimodal, multisensor, physiological sensors, student engagement
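The evaluation pipeline described, a random forest over nine multimodal features scored with leave-one-out cross-validation, can be sketched with scikit-learn as follows; the feature matrix below is a random stand-in for the real sensor features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(8)
X = rng.normal(size=(59, 9))                 # 59 sessions x 9 multimodal features
y = rng.integers(0, 2, 59)                   # engaged (1) vs. disengaged (0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(f"LOOCV accuracy: {scores.mean():.1%}")

# Feature importances indicate which sensor mode drives the classification
clf.fit(X, y)
print(clf.feature_importances_)              # eye gaze expected to dominate, per the paper
```

Leave-one-out is a natural choice here because the number of sessions is small; every session serves once as the test case while the rest train the model.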
Procedia PDF Downloads 94