Search results for: calibration definitions
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 756

636 Strict Stability of Fuzzy Differential Equations by Lyapunov Functions

Authors: Mustafa Bayram Gücen, Coşkun Yakar

Abstract:

In this study, we investigate the strict stability of fuzzy differential systems and compare the classical notion of strict stability for ordinary differential equations with the corresponding notion for fuzzy differential systems. In addition, we present definitions of stability and strict stability of fuzzy differential equations, together with theorems and comparison results. Strict stability is a distinct stability notion that provides information about the rate of decay of the solutions. Lyapunov's second method is a standard technique used in the study of the qualitative behavior of fuzzy differential systems, along with a comparison result that allows the behavior of a fuzzy differential system to be predicted when the behavior of the null solution of a fuzzy comparison system is known. This method is useful for investigating the strict stability of fuzzy systems. First, we present definitions and the necessary background material. Second, we discuss and compare the differences between the classical notion of stability and the more recent notion of strict stability. We then give a comparison result in which the stability properties of the null solution of the comparison system imply the corresponding stability properties of the fuzzy differential system. Finally, we give the strict stability results and a comparison theorem, proved using Lyapunov's second method together with a comparison result for scalar differential equations.
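For readers unfamiliar with the contrast drawn above, a worked definition may help. The following is one common formulation of strict stability for the null solution in the classical (crisp) setting, following the comparison-principle literature; the fuzzy version replaces the norm with a suitable metric on fuzzy numbers. The notation below is ours, not the authors'.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb,amsthm}
\newtheorem*{definition}{Definition}
\begin{document}
% One common formulation of strict stability for x' = f(t,x), f(t,0)=0.
% The second condition is what distinguishes strict stability from
% ordinary stability: solutions are bounded from below as well as
% above, which is why the notion carries rate-of-decay information.
\begin{definition}[Strict stability of the null solution]
For every $\varepsilon_1 > 0$ and $t_0 \ge 0$ there exists
$\delta_1 = \delta_1(t_0, \varepsilon_1) > 0$ such that
$\lVert x_0 \rVert < \delta_1$ implies
$\lVert x(t) \rVert < \varepsilon_1$ for all $t \ge t_0$; and for every
$\delta_2 \le \delta_1$ there exists $\varepsilon_2 \in (0, \delta_2)$
such that $\delta_2 < \lVert x_0 \rVert < \delta_1$ implies
$\varepsilon_2 < \lVert x(t) \rVert < \varepsilon_1$ for all $t \ge t_0$.
\end{definition}
\end{document}
```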

Keywords: fuzzy systems, fuzzy differential equations, fuzzy stability, strict stability

Procedia PDF Downloads 225
635 Calibration of Contact Model Parameters and Analysis of Microscopic Behaviors of Cuxhaven Sand Using the Discrete Element Method

Authors: Anjali Uday, Yuting Wang, Andres Alfonso Pena Olare

Abstract:

The Discrete Element Method is a promising approach to modeling the microscopic behavior of granular materials. The quality of the simulations, however, depends on the model parameters used. The present study focuses on the calibration and validation of discrete element parameters for Cuxhaven sand based on experimental data from triaxial and oedometer tests. A sensitivity analysis was conducted for the sample preparation stage and the shear stage of the triaxial tests. The influence of parameters such as rolling resistance, inter-particle friction coefficient, confining pressure, and effective modulus on the void ratio of the generated sample was investigated. For the shear stage, the effects of the inter-particle friction coefficient, effective modulus, rolling resistance friction coefficient, and normal-to-shear stiffness ratio were examined. The parameters were calibrated so that the simulations reproduce macro-mechanical characteristics such as dilation angle, peak stress, and stiffness. The calibrated parameters were then validated by simulating an oedometer test on the sand. The oedometer test results are in good agreement with the experiments, which demonstrates the suitability of the calibrated parameters. In the next step, the calibrated and validated model parameters were applied to forecast the micromechanical behavior, including the evolution of contact force chains, buckling of columns of particles, non-coaxiality, and sample inhomogeneity, during a simple shear test. The evolution of contact force chains vividly shows the distribution and alignment of strong contact forces. The changes in coordination number are in good agreement with the volumetric strain exhibited during the simple shear test. The vertical inhomogeneity of void ratios is documented throughout the shearing phase, showing looser structures in the top and bottom layers. Buckling of columns is not observed, owing to the small rolling resistance coefficient adopted for the simulations. The non-coaxiality of principal stress and strain rate is also well captured. Thus, the micromechanical behavior is well described using the calibrated and validated material parameters.
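As an illustration of the calibration loop described above, the sketch below fits three contact parameters to target macro-mechanical responses by least squares. The `run_triaxial` surrogate, target values, and bounds are invented placeholders; in a real study this function would wrap the DEM simulation itself.

```python
# Minimal sketch of macro-response calibration, assuming a callable
# that maps DEM contact parameters to (dilation angle, peak stress,
# stiffness). All numbers are illustrative, not the paper's data.
import numpy as np
from scipy.optimize import least_squares

TARGETS = np.array([12.0, 480.0, 35.0])  # deg, kPa, MPa (assumed lab values)

def run_triaxial(friction, eff_modulus, roll_resist):
    # Toy analytic surrogate standing in for a full DEM triaxial run.
    dilation = 8.0 + 10.0 * roll_resist + 4.0 * friction
    peak = 300.0 + 250.0 * friction + 0.2 * eff_modulus
    stiffness = 0.15 * eff_modulus + 5.0 * friction
    return dilation, peak, stiffness

def residuals(params):
    macro = np.array(run_triaxial(*params))
    return (macro - TARGETS) / TARGETS      # normalized misfit

fit = least_squares(residuals, x0=[0.4, 120.0, 0.2],
                    bounds=([0.1, 50.0, 0.0], [1.0, 500.0, 0.5]))
print("calibrated friction, modulus, rolling resistance:", fit.x.round(3))
```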

Keywords: discrete element model, parameter calibration, triaxial test, oedometer test, simple shear test

Procedia PDF Downloads 100
634 Comparison between Photogrammetric and Structure from Motion Techniques in Processing Unmanned Aerial Vehicle Imagery

Authors: Ahmed Elaksher

Abstract:

Over the last few years, significant progress has been made and new approaches have been proposed for efficient collection of 3D spatial data from unmanned aerial vehicles (UAVs), at reduced cost compared to imagery from satellites or manned aircraft. In these systems, a low-cost GPS unit provides the position and velocity of the vehicle, a low-quality inertial measurement unit (IMU) determines its orientation, and off-the-shelf cameras capture the images. Structure from Motion (SfM) and photogrammetry are the main tools for 3D surface reconstruction from images collected by these systems. Unlike traditional techniques, SfM allows the computation of calibration parameters using point correspondences across images without a rigorous laboratory or field calibration process, and it is more flexible in that it does not require consistent image overlap or the same rotation angles between successive photos. These benefits make SfM ideal for UAV aerial mapping. In this paper, a direct comparison between SfM digital elevation models (DEMs) and those generated through traditional photogrammetric techniques was performed. Data were collected by a 3DR IRIS+ quadcopter with a Canon PowerShot S100 digital camera. Twenty ground control points were randomly distributed on the ground and surveyed with a total station in a local coordinate system. Images were collected from an altitude of 30 meters with a ground resolution of nine mm/pixel. Data were processed with PhotoScan, VisualSFM, Imagine Photogrammetry, and a photogrammetric algorithm developed by the author. The algorithm starts with a laboratory camera calibration; the acquired imagery then undergoes an orientation procedure to determine the cameras' positions and orientations. After the orientation is attained, correlation-based image matching is conducted to automatically generate three-dimensional surface models, followed by a refining step using sub-pixel image information for high matching accuracy. Tests with different numbers and configurations of the control points were conducted. Camera calibration parameters estimated from the commercial software and those obtained with laboratory procedures were comparable. Exposure station positions agreed to within a few centimeters, and differences among orientation angles were insignificant, within less than three seconds of arc. DEM differencing between the generated DEMs revealed vertical shifts of a few centimeters.
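The DEM-differencing check at the end of the abstract reduces to simple raster statistics. A minimal sketch follows, assuming both DEMs are co-registered numpy arrays on the same grid; the synthetic arrays merely stand in for the PhotoScan/VisualSFM exports.

```python
import numpy as np

def dem_difference_stats(dem_a, dem_b):
    """Summarize vertical differences between two co-registered DEMs."""
    diff = dem_a - dem_b
    d = diff[np.isfinite(diff)]            # ignore nodata cells
    return {"mean_vertical_shift_m": float(d.mean()),
            "rmse_m": float(np.sqrt((d ** 2).mean())),
            "max_abs_m": float(np.abs(d).max())}

rng = np.random.default_rng(0)
photo_dem = rng.normal(100.0, 5.0, (200, 200))              # synthetic stand-in
sfm_dem = photo_dem + 0.03 + rng.normal(0, 0.01, (200, 200))
print(dem_difference_stats(sfm_dem, photo_dem))
```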

Keywords: UAV, photogrammetry, SfM, DEM

Procedia PDF Downloads 266
633 Comparison and Validation of a dsDNA Biomimetic Quality Control Reference for NGS-Based BRCA CNV Analysis versus MLPA

Authors: A. Delimitsou, C. Gouedard, E. Konstanta, A. Koletis, S. Patera, E. Manou, K. Spaho, S. Murray

Abstract:

Background: There remains a lack of international standard control reference materials for next generation sequencing-based approaches or device calibration. We have designed and validated dsDNA biomimetic reference materials for such targeted approaches, incorporating proprietary motifs (patent pending) for device/test calibration. They enable internal single-sample calibration, alleviating sample comparisons to pooled historical population-based data assemblies or statistical modelling approaches. We have validated such an approach for BRCA copy number variation analytics using iQRS™-CNVSUITE versus multiplex ligation-dependent probe amplification (MLPA). Methods: Standard BRCA copy number variation analysis was compared between multiplex ligation-dependent probe amplification and next generation sequencing using a cohort of 198 breast/ovarian cancer patients. Next generation sequencing-based copy number variation analyses of samples spiked with iQRS™ dsDNA biomimetics were analysed using the proprietary CNVSUITE software. Multiplex ligation-dependent probe amplification analyses were performed on an ABI-3130 sequencer and analysed with Coffalyser software. Results: Concordance of BRCA copy number variation events between multiplex ligation-dependent probe amplification and CNVSUITE indicated an overall sensitivity of 99.88% and specificity of 100% for iQRS™-CNVSUITE. The negative predictive value of iQRS™-CNVSUITE for BRCA was 100%, allowing for accurate exclusion of any event. The positive predictive value was 99.88%, with no discrepancy between multiplex ligation-dependent probe amplification and iQRS™-CNVSUITE. For device calibration purposes, precision was 100%; spiking of patient DNA demonstrated linearity to 1% (±2.5%) and a range from 100 copies. Traditional training was supplemented by predefining the calibrator-to-sample cut-off (lock-down) for amplicon gain or loss based upon a relative ratio threshold, following training of iQRS™-CNVSUITE using spiked iQRS™ calibrator and control mocks. BRCA copy number variation analysis using iQRS™-CNVSUITE was successfully validated and ISO 15189 accredited and now enters CE-IVD performance evaluation. Conclusions: The inclusion of a reference control competitor (iQRS™ dsDNA mimetic) in next generation sequencing-based analysis offers a more robust, sample-independent approach for the assessment of copy number variation events compared to multiplex ligation-dependent probe amplification. The approach simplifies data analyses, improves independent sample data analyses, and allows for direct comparison to an internal reference control for sample-specific quantification. Our iQRS™ biomimetic reference materials allow for single-sample copy number variation analytics and further decentralisation of diagnostics to single-patient sample assessment.
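The concordance figures quoted above are standard confusion-matrix metrics computed against MLPA as the reference. The sketch below shows the arithmetic; the counts are illustrative placeholders, not the study's raw data.

```python
# Sensitivity, specificity, PPV and NPV from pooled per-event counts,
# with MLPA calls taken as ground truth. Counts are invented.
def diagnostic_metrics(tp, fp, tn, fn):
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),    # positive predictive value
            "npv": tn / (tn + fn)}    # negative predictive value

print(diagnostic_metrics(tp=840, fp=1, tn=7400, fn=1))
```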

Keywords: validation, diagnostics, oncology, copy number variation, reference material, calibration

Procedia PDF Downloads 52
632 Development and Validation of a Carbon Dioxide TDLAS Sensor for Studies on Fermented Dairy Products

Authors: Lorenzo Cocola, Massimo Fedel, Dragiša Savić, Bojana Danilović, Luca Poletto

Abstract:

An instrument for the detection and evaluation of gaseous carbon dioxide in the headspace of closed containers has been developed in the context of the Packsensor Italian-Serbian joint project. The device is based on tunable diode laser absorption spectroscopy (TDLAS) with a wavelength modulation spectroscopy (WMS) technique, in order to accomplish a non-invasive measurement inside closed containers of fermented dairy products (yogurts and fermented cheese in cups and bottles). The purpose of this instrument is the continuous monitoring of carbon dioxide concentration during incubation and storage of products over the whole shelf life of the product, in the presence of different microorganisms. The instrument's optical front end has been designed to be integrated in a thermally stabilized incubator. An embedded computer provides processing of spectral artifacts and storage of an arbitrary set of calibration data, allowing a properly calibrated measurement on many samples (cups and bottles) of the different shapes and sizes commonly found in retail distribution. A calibration protocol has been developed in order to be able to calibrate the instrument in the field, including on containers which are notoriously difficult to seal properly. This calibration protocol is described and evaluated against reference measurements obtained with an industry-standard (sampling) carbon dioxide metering technique. Several sets of validation test measurements on different containers are reported. Two test recordings of carbon dioxide concentration evolution are shown as examples of instrument operation. The first demonstrates the ability to monitor rapid yeast growth in a contaminated sample through the increase of headspace carbon dioxide. The other experiment shows the dissolution transient with a non-saturated liquid medium in the presence of a carbon dioxide rich headspace atmosphere.
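The stored calibration data described above amount to a per-container lookup from geometry to a fitted signal-to-concentration curve. The following is a minimal sketch of that idea; container names, coefficients, and the use of a plain polynomial are assumptions for illustration only.

```python
import numpy as np

# container id -> polynomial coefficients (highest power first),
# as would be stored by the embedded computer after calibration
CALIBRATIONS = {
    "yogurt_cup_125ml": [0.0, 41.2, 0.8],
    "pet_bottle_500ml": [0.0, 38.7, 1.1],
}

def co2_percent(container: str, wms_signal: float) -> float:
    """Convert a normalized WMS signal into headspace %CO2."""
    return float(np.polyval(CALIBRATIONS[container], wms_signal))

print(co2_percent("yogurt_cup_125ml", 0.12))
```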

Keywords: TDLAS, carbon dioxide, cups, headspace, measurement

Procedia PDF Downloads 296
631 An Experimental Study on the Variability of Nonnative and Native Inference of Word Meanings in Timed and Untimed Conditions

Authors: Swathi M. Vanniarajan

Abstract:

Reading research suggests that online contextual vocabulary comprehension while reading is an interactive and integrative process. One's success in it depends on a variety of factors, including the amount and the nature of available linguistic and nonlinguistic cues, one's analytical and integrative skills, schema memory (content familiarity), and processing speed, characterized along the continuum from controlled to automatic processing. The experiment reported here, conducted with 30 native speakers as one group and 30 nonnative speakers as another (all graduate students), hypothesized that while working on 24 tasks that required them to comprehend an unfamiliar word in real time without backtracking, the nonnative subjects, due to the differences in the nature of their respective reading processes, would be less able than the native subjects to construct the meanings of the unknown words by integrating the multiple but sufficient contextual cues provided in the text. The results indicated that there were significant inter-group as well as intra-group differences in the quality of the definitions given. However, when given additional time, the nonnative speakers could significantly improve the quality of their definitions, while the native speakers in general did not, suggesting that, all things being equal, time is a significant factor for success in nonnative vocabulary and reading comprehension processes, and that accuracy precedes automaticity in the development of nonnative reading processes as well.

Keywords: reading, second language processing, vocabulary comprehension

Procedia PDF Downloads 144
630 Study of Sub-Surface Flow in an Unconfined Carbonate Aquifer in a Tropical Karst Area in Indonesia: A Modeling Approach Using Finite Difference Groundwater Model

Authors: Dua K. S. Y. Klaas, Monzur A. Imteaz, Ika Sudiayem, Elkan M. E. Klaas, Eldav C. M. Klaas

Abstract:

Due to its porous nature, karst terrain, geomorphologically developed from dissolved formations, is vulnerable to water shortage and deteriorated water quality. Therefore, a solid comprehension of the sub-surface flow of a karst landscape is essential to assess the long-term availability of groundwater resources. In this paper, a single-continuum model using a finite difference code, MODFLOW, was constructed to represent an unconfined carbonate aquifer on the tropical karst island of Rote in Indonesia. The model, spatially discretized in 20 x 20 m grid cells, was calibrated and validated using available groundwater levels and atmospheric variables. In the calibration and validation steps, Parameter Estimation (PEST) and geostatistical pilot point methods were employed to estimate hydraulic conductivity and specific yield values. The results show that the model is able to represent the sub-surface flow, as indicated by good model performance in both the calibration and validation steps. The final model can be used as a robust representation of the system for future studies on climate and land use scenarios.

Keywords: carbonate aquifer, karst, sub-surface flow, groundwater model

Procedia PDF Downloads 128
629 Non-Contact Measurement of Soil Deformation in a Cyclic Triaxial Test

Authors: Erica Elice Uy, Toshihiro Noda, Kentaro Nakai, Jonathan Dungca

Abstract:

Deformation in a conventional cyclic triaxial test is normally measured using a point-wise measuring device. In this study, a non-contact measurement technique was applied to monitor and measure the occurrence of non-homogeneous behavior of the soil under cyclic loading. The non-contact measurement is executed through image processing. Two-dimensional measurements were performed using the Lucas-Kanade optical flow algorithm, implemented in LabVIEW. In this technique, the non-homogeneous deformation was monitored using a mirrorless camera. A mirrorless camera was used because it is economical and has the capacity to take pictures at a fast rate. The camera was first calibrated to remove the distortion introduced by the lens and by the testing environment. Calibration was divided into two phases. The first phase was the calibration of the camera parameters and of the distortion caused by the lens. The second phase eliminated the distortion introduced by the triaxial plexiglass cell; a correction factor was established from this phase. A series of consolidated undrained cyclic triaxial tests was performed using a coarse soil. The results from the non-contact measurement technique were compared to the deformation measured by the linear variable displacement transducer. It was observed that deformation was higher in the area where failure occurs.
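The measurement core, sparse Lucas-Kanade optical flow between successive frames, can be sketched with OpenCV (the study used a LabVIEW implementation). The frames below are synthetic; in practice they come from the mirrorless camera after the two-phase distortion correction.

```python
import cv2
import numpy as np

prev = np.random.randint(0, 255, (480, 640), np.uint8)
curr = np.roll(prev, 2, axis=0)            # fake 2-pixel settlement

pts0 = cv2.goodFeaturesToTrack(prev, maxCorners=200,
                               qualityLevel=0.01, minDistance=7)
pts1, status, _err = cv2.calcOpticalFlowPyrLK(prev, curr, pts0, None,
                                              winSize=(21, 21), maxLevel=3)
ok = status.ravel() == 1
disp = (pts1[ok] - pts0[ok]).reshape(-1, 2)   # per-point displacement
print("mean vertical displacement [px]:", disp[:, 1].mean())
```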

Keywords: cyclic loading, non-contact measurement, non-homogeneous, optical flow

Procedia PDF Downloads 284
628 Evaluation of Vehicle Classification Categories: Florida Case Study

Authors: Ren Moses, Jaqueline Masaki

Abstract:

This paper addresses the need for an accurate and updated vehicle classification system through a thorough evaluation of vehicle class categories, identifying errors arising from the existing system and proposing modifications. Data collected from two permanent traffic monitoring sites in Florida were used to evaluate the performance of the existing vehicle classification table. The vehicle data were collected and classified by an automatic vehicle classifier (AVC), and a video camera was used to obtain ground truth data. The Federal Highway Administration (FHWA) vehicle classification definitions were used to define vehicle classes from the video and compare them to the data generated by the AVC in order to identify the sources of misclassification. Six types of errors were identified. Modifications were made to the classification table to improve the classification accuracy. The results of this study include an updated vehicle classification table with a reduction in total error of 5.1%, a step-by-step procedure for evaluating vehicle classification studies, and recommendations to improve the FHWA 13-category rule set. The recommendations for the FHWA 13-category rule set indicate that the vehicle classification definitions in this scheme need to be updated to reflect the distribution of current traffic. The presented results will be of interest to state transportation departments and consultants, researchers, engineers, designers, and planners who require accurate vehicle classification information for the planning, design, and maintenance of transportation infrastructure.
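The error-source analysis described above boils down to tallying class confusions between AVC output and video ground truth. A minimal sketch under the FHWA 13-class scheme follows; the records are invented.

```python
from collections import Counter

avc   = [5, 9, 2, 3, 9, 5, 8, 2, 5, 9]    # classifier output (invented)
truth = [5, 9, 2, 2, 8, 5, 9, 2, 4, 9]    # video ground truth (invented)

confusions = Counter((t, a) for t, a in zip(truth, avc) if t != a)
print(f"total misclassification rate: "
      f"{sum(confusions.values()) / len(truth):.1%}")
for (t, a), n in confusions.most_common():
    print(f"FHWA class {t} misread as class {a}: {n} vehicle(s)")
```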

Keywords: vehicle classification, traffic monitoring, pavement design, highway traffic

Procedia PDF Downloads 164
627 Astronomical Object Classification

Authors: Alina Muradyan, Lina Babayan, Arsen Nanyan, Gohar Galstyan, Vigen Khachatryan

Abstract:

We present a photometric method for identifying stars, galaxies, and quasars in multi-color surveys, which uses a library of more than 65,000 color templates for comparison with observed objects. The method aims to extract the information content of object colors in a statistically correct way, and performs a classification as well as a redshift estimation for galaxies and quasars in a unified approach based on the same probability density functions. For the redshift estimation, we employ an advanced version of the minimum error variance estimator, which determines the redshift error from the redshift-dependent probability density function itself. The method was originally developed for the Calar Alto Deep Imaging Survey (CADIS) but is now used in a wide variety of survey projects. We checked its performance by spectroscopy of CADIS objects, where the method provides high reliability (6 errors among 151 objects with R < 24), especially for the quasar selection, and redshifts accurate within σz ≈ 0.03 for galaxies and σz ≈ 0.1 for quasars. For an optimization of future survey efforts, a few model surveys are compared, designed to use the same total amount of telescope time but different sets of broad-band and medium-band filters. Their performance is investigated by Monte Carlo simulations as well as by analytic evaluation in terms of classification and redshift estimation. If photon noise were the only error source, broad-band surveys and medium-band surveys should perform equally well, as long as they provide the same spectral coverage. In practice, medium-band surveys show superior performance due to their higher tolerance for calibration errors and cosmic variance. Finally, we discuss the relevance of color calibration and derive important conclusions for the issues of library design and choice of filters. The calibration accuracy poses strong constraints on an accurate classification, which are most critical for surveys with few, broad, and deeply exposed filters, but less severe for surveys with many, narrow, and less deep filters.
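The template comparison underlying this method can be sketched as a chi-square match in color space: under Gaussian photometric errors, chi-square values map to the probability densities used for joint classification and redshift estimation. The library, class labels, and error model below are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
templates = rng.normal(0.0, 1.0, (65000, 5))   # template colors (synthetic)
labels = rng.integers(0, 3, 65000)             # 0=star, 1=galaxy, 2=quasar

obs = templates[123] + rng.normal(0, 0.05, 5)  # observed object colors
sigma = 0.05                                   # photometric error

chi2 = ((templates - obs) ** 2 / sigma ** 2).sum(axis=1)
like = np.exp(-0.5 * (chi2 - chi2.min()))      # relative likelihoods

post = np.array([like[labels == k].sum() for k in range(3)])
post /= post.sum()                             # class posterior
print(dict(zip(["star", "galaxy", "quasar"], post.round(3))))
```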

Keywords: VO, ArVO, DFBS, FITS, image processing, data analysis

Procedia PDF Downloads 51
626 Development and Validation of First Derivative Method and Artificial Neural Network for Simultaneous Spectrophotometric Determination of Two Closely Related Antioxidant Nutraceuticals in Their Binary Mixture

Authors: Mohamed Korany, Azza Gazy, Essam Khamis, Marwa Adel, Miranda Fawzy

Abstract:

Background: Two new, simple, and specific methods were developed and validated in accordance with ICH guidelines: first, a zero-crossing first-derivative technique and, second, a chemometric-assisted spectrophotometric artificial neural network (ANN). Both methods were used for the simultaneous estimation of two closely related antioxidant nutraceuticals, Coenzyme Q10 (Q), also known as ubidecarenone or ubiquinone-10, and vitamin E (E), alpha-tocopherol acetate, in their pharmaceutical binary mixture. Results: For the first method, applying the first derivative allowed Q and E to be determined alternately, each at the zero-crossing of the other. The D1 amplitudes of Q and E, at 285 nm and 235 nm respectively, were recorded and correlated to their concentrations. The calibration curve is linear over the concentration ranges of 10-60 and 5.6-70 μg mL⁻¹ for Q and E, respectively. For the second method, an ANN (as a multivariate calibration method) was developed and applied for the simultaneous determination of both analytes. A training set (or concentration set) of 90 different synthetic mixtures containing Q and E, in wide concentration ranges between 0-100 μg/mL and 0-556 μg/mL respectively, was prepared in ethanol. The absorption spectra of the training sets were recorded in the spectral region of 230-300 nm. A gradient descent back-propagation ANN chemometric calibration was computed by relating the concentration sets (x-block) to their corresponding absorption data (y-block). Another set of 45 synthetic mixtures of the two drugs, in a defined range, was used to validate the proposed network. Neither chemical separation, a preparation stage, nor mathematical graphical treatment was required. Conclusions: The proposed methods were successfully applied for the assay of Q and E in laboratory-prepared mixtures and in a combined pharmaceutical tablet, with excellent recoveries. The ANN method was superior to the derivative technique, as the former determined both drugs under the non-linear experimental conditions. It also offers rapidity, high accuracy, and savings in effort and cost, and requires little analyst intervention. Although the ANN technique needed a large training set, it is the method of choice in the routine analysis of Q and E tablets. No interference was observed from common pharmaceutical additives. The results of the two methods were compared with each other.
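The zero-crossing measurement in the first method can be sketched numerically: differentiate the spectrum and read the D1 amplitude of one analyte at the wavelength where the other's derivative crosses zero. The Gaussian bands and the calibration slope below are invented stand-ins for the real absorption spectra.

```python
import numpy as np

wl = np.arange(230.0, 300.0, 0.5)                       # wavelength grid, nm
gauss = lambda c, w: np.exp(-0.5 * ((wl - c) / w) ** 2)
spectrum = 0.8 * gauss(275, 12) + 0.5 * gauss(250, 10)  # synthetic Q + E mix

d1 = np.gradient(spectrum, wl)              # first-derivative spectrum
amp_285 = np.interp(285.0, wl, d1)          # D1 amplitude of Q at 285 nm,
                                            # where E's derivative is ~zero
slope, intercept = -0.004, 0.0              # assumed linear calibration
print(f"estimated Q concentration: {(amp_285 - intercept) / slope:.1f} ug/mL")
```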

Keywords: coenzyme Q10, vitamin E, chemometry, quantitative analysis, first derivative spectrophotometry, artificial neural network

Procedia PDF Downloads 423
625 Subpixel Corner Detection for Monocular Camera Linear Model Research

Authors: Guorong Sui, Xingwei Jia, Fei Tong, Xiumin Gao

Abstract:

Camera calibration is a fundamental issue in high-precision non-contact measurement, and it is necessary to analyze and study the reliability and application range of the linear model that is often used in camera calibration. According to the imaging features of monocular cameras, a camera model based on image pixel coordinates and three-dimensional space coordinates is built. Using our own customized template, the image pixel coordinates are obtained by a sub-pixel corner detection method. Without considering the aberration of the optical system, feature extraction and linearity analysis of the line segments in the template are performed. The experiment is repeated 11 times while varying the measuring distance, and the linearity of the camera is obtained by fitting the 11 groups of data. The camera model measurement results show that the relative error does not exceed 1%, and the repeated measurement error is no more than 0.1 mm in magnitude. Meanwhile, it is found that the model exhibits some measurement differences across regions and object distances. The experimental results show that this linear model is simple and practical and has good linearity within a certain object distance. These results provide a powerful basis for the establishment of the linear camera model and will have potential value for actual engineering measurement.
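The sub-pixel corner extraction step can be sketched with OpenCV's refinement routine; since the study used its own customized template, the synthetic checkerboard and refinement settings below are placeholders.

```python
import cv2
import numpy as np

# synthetic 10x7-square checkerboard with a white border
pattern = ((np.indices((7, 10)).sum(axis=0) % 2) * 255).astype(np.uint8)
board = np.kron(pattern, np.ones((50, 50), dtype=np.uint8))
img = cv2.copyMakeBorder(board, 40, 40, 40, 40,
                         cv2.BORDER_CONSTANT, value=255)

found, corners = cv2.findChessboardCorners(img, (9, 6))
if found:
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
    corners = cv2.cornerSubPix(img, corners, (11, 11), (-1, -1), criteria)
    # `corners` now holds sub-pixel image coordinates, the input to the
    # pixel-to-space linear model fit described in the abstract
    print(corners.reshape(-1, 2)[:3])
```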

Keywords: camera linear model, geometric imaging relationship, image pixel coordinates, three dimensional space coordinates, sub-pixel corner detection

Procedia PDF Downloads 258
624 University Building: Discussion about the Effect of Numerical Modelling Assumptions for Occupant Behavior

Authors: Fabrizio Ascione, Martina Borrelli, Rosa Francesca De Masi, Silvia Ruggiero, Giuseppe Peter Vanoli

Abstract:

The refurbishment of public buildings is one of the key factors of the energy efficiency policies of European states. Educational buildings account for the largest share of the oldest building stock, with interesting potential for demonstrating best practice with regard to high-performance, low- and zero-carbon design and for becoming exemplar cases within the community. In this context, this paper discusses the critical issue of the energy refurbishment of a university building in the heating-dominated climate of southern Italy. More in detail, the importance of using validated models is examined exhaustively by proposing an analysis of the uncertainties due to modelling assumptions, mainly referring to the adoption of stochastic schedules for occupant behavior and equipment or lighting usage. Indeed, today, most commercial tools provide designers with a library of possible schedules with which thermal zones can be described. Very often, users do not pay close attention to diversifying thermal zones or to modifying or adapting the predefined profiles, and design results are affected, positively or negatively, without any warning. Data such as occupancy schedules, internal loads, and the interaction between people and windows or plant systems represent some of the largest sources of variability in energy modelling and in understanding calibration results. This is mainly due to the adoption of discrete, standardized, conventional schedules, with important consequences for the prediction of energy consumption. The problem is surely difficult to examine and to solve. In this paper, a sensitivity analysis is presented to understand the order of magnitude of the error committed by varying the deterministic schedules used for occupancy, internal loads, and the lighting system. This is a typical uncertainty for a case study such as the one presented, where there is no regulation system for the HVAC system and thus the occupants cannot interact with it. More in detail, starting from the adopted schedules, created according to questionnaire responses, which allowed a good calibration of the energy simulation model, several different scenarios are tested. Two types of analysis are presented: the reference building is compared with these scenarios in terms of the percentage difference in the projected total electric energy need and natural gas request; then the different consumption entries are analyzed and, for the more interesting cases, the calibration indexes are also compared. Moreover, the same simulations are carried out for the optimal refurbishment solution, and the variation in the predicted energy saving and global cost reduction is evidenced. This parametric study aims to underline the effect of the modelling assumptions made when describing thermal zones on the evaluation of performance indexes.
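The "calibration indexes" mentioned above are not specified in the abstract; building energy models are commonly judged with NMBE and CV(RMSE) as in ASHRAE Guideline 14, so those are sketched here as a plausible example. The monthly gas data are invented.

```python
import numpy as np

def nmbe(measured, simulated):
    """Normalized mean bias error, %, with an (n - 1) convention."""
    m, s = np.asarray(measured, float), np.asarray(simulated, float)
    return 100.0 * (s - m).sum() / ((len(m) - 1) * m.mean())

def cv_rmse(measured, simulated):
    """Coefficient of variation of the RMSE, %."""
    m, s = np.asarray(measured, float), np.asarray(simulated, float)
    return 100.0 * np.sqrt(((s - m) ** 2).sum() / (len(m) - 1)) / m.mean()

gas_measured  = [910, 840, 700, 420, 150, 0, 0, 0, 90, 380, 640, 880]
gas_simulated = [935, 815, 690, 445, 140, 0, 0, 0, 80, 395, 660, 905]
print(f"NMBE = {nmbe(gas_measured, gas_simulated):+.1f} %")
print(f"CV(RMSE) = {cv_rmse(gas_measured, gas_simulated):.1f} %")
```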

Keywords: energy simulation, modelling calibration, occupant behavior, university building

Procedia PDF Downloads 120
623 Assessment of Environmental Risk Factors of Railway Using Integrated ANP-DEMATEL Approach in Fuzzy Conditions

Authors: Mehrdad Abkenari, Mehmet Kunt, Mahdi Nourollahi

Abstract:

Evaluating environmental risk factors is part of the analysis of transportation effects. Various definitions of risk can be found in different scientific sources; each depends on a specific perspective or dimension. The effects of the potential risks present along new proposed routes and existing infrastructures of large transportation projects like railways should be studied within comprehensive engineering frameworks. Despite the various definitions provided for 'risk', all share a uniform concept: two obvious aspects, loss and unreliability, are pointed out in all definitions of the term, while selection, as the third aspect, is usually implied and concerns how one perceives the risk. Currently, conducting engineering studies on the environmental effects of railway projects has become obligatory according to the Environmental Assessment Act in developing countries. Considering the longitudinal nature of these projects and the probable passage of railways through various ecosystems, scientific research on the environmental risk of these projects has become of great interest. The present study used the environmental risks identified in previous studies and at existing stations as input for the next step, which proposes a new hybrid approach combining the analytic network process (ANP) and DEMATEL under fuzzy conditions for the assessment of the identified risks. Since the evaluation of the identified risks is not straightforward, a network structure was an appropriate approach for analyzing such complex systems and was accordingly employed for problem description and modeling. The researchers faced a shortage of real field data; moreover, because of the ambiguity of experts' opinions and judgments, these were expressed in linguistic variables instead of numerical ones. Since fuzzy logic is appropriate for ambiguity and uncertainty, formulating the experts' opinions as fuzzy numbers was an appropriate approach. The fuzzy DEMATEL method was used to extract the relations between major and minor risk factors. Considering the internal relations of the major risk factors and their sub-factors in the fuzzy network analysis, the weights of the major risk factors and sub-factors were determined. In general, the findings of the present study, in which effective railway environmental risk indicators were theoretically identified and rated through the first use of a combined model of DEMATEL and fuzzy network analysis, indicate that environmental risks can be evaluated more accurately and employed in railway projects.
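The crisp core of the DEMATEL step used (in its fuzzy variant) above is short enough to show directly: the direct-influence matrix is normalized and the total-relation matrix T = X(I − X)⁻¹ is computed; row and column sums of T then give each factor's prominence and net relation. The 4×4 matrix is an invented example, not the paper's expert data.

```python
import numpy as np

A = np.array([[0, 3, 2, 1],        # expert direct-influence scores
              [2, 0, 3, 2],        # (invented 0-4 ratings)
              [1, 2, 0, 3],
              [2, 1, 2, 0]], dtype=float)

X = A / max(A.sum(axis=1).max(), A.sum(axis=0).max())  # normalize
T = X @ np.linalg.inv(np.eye(len(A)) - X)              # total relations

D, R = T.sum(axis=1), T.sum(axis=0)
print("prominence D+R:", np.round(D + R, 3))   # how important each factor is
print("relation   D-R:", np.round(D - R, 3))   # net cause (+) or effect (-)
```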

Keywords: DEMATEL, ANP, fuzzy, risk

Procedia PDF Downloads 392
622 Development of a Combustible Gas Detector with Two Sensor Modules to Enable Measuring Range of Low Concentration

Authors: Young Gyu Kim, Sangguk Ahn, Gyoutae Park, Hiesik Kim

Abstract:

In industrial gas fields, it is difficult to detect extremely small amounts of combustible gas (CH₄) with a conventional semiconductor sensor: measurement is difficult at low concentration levels, the stabilization time is long, and the initial response time is slow. In this study, we propose a method to solve these issues using two specific sensors that together overcome the influence of temperature and humidity. The idea is to combine a catalytic sensor and a semiconductor-type sensor and to exploit the advantages of each sensor's characteristics. To achieve this goal, we reduced the fluctuations of the gas sensors with temperature and humidity by applying circuits designed for sensing temperature and humidity, and we derived the best calibration line for the gas sensors by adjusting a weight value corresponding to changing patterns of temperature and humidity, using previously acquired and stored data. We propose and have developed a gas leak detector using two sensor modules: it first operates the semiconductor sensor for measuring small gas quantities, and the catalytic-type sensor takes over if the measuring range of the first sensor is exceeded. Experiments conclusively verified sharper sensitivity and faster response times than a conventional gas sensor, even at lower gas concentration levels. We believe the proposed idea would also be useful in developing gas leak detectors for measuring extremely small quantities of other toxic and flammable gases.
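The two-module measuring logic described above can be sketched as a simple range handover with weight-based compensation. Thresholds, weights, and conversion details are all invented for illustration; the actual calibration lines are derived from the stored temperature/humidity data.

```python
SEMI_MAX_PPM = 10_000        # assumed upper limit of the semiconductor module

def compensate(raw_ppm, temp_c, rh_pct, w_t=0.002, w_h=0.001):
    """Weight-based temperature/humidity correction (illustrative)."""
    return raw_ppm * (1.0 - w_t * (temp_c - 25.0) - w_h * (rh_pct - 50.0))

def read_ch4_ppm(semi_raw, cat_raw, temp_c, rh_pct):
    semi_ppm = compensate(semi_raw, temp_c, rh_pct)
    if semi_ppm < SEMI_MAX_PPM:          # low range: semiconductor sensor
        return semi_ppm, "semiconductor"
    return compensate(cat_raw, temp_c, rh_pct), "catalytic"  # handover

print(read_ch4_ppm(semi_raw=350.0, cat_raw=0.0, temp_c=30.0, rh_pct=60.0))
```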

Keywords: gas sensor, leak detector, low concentration, calibration

Procedia PDF Downloads 219
621 Assessment of Pre-Processing Influence on Near-Infrared Spectra for Predicting the Mechanical Properties of Wood

Authors: Aasheesh Raturi, Vimal Kothiyal, P. D. Semalty

Abstract:

We studied the mechanical properties of Eucalyptus tereticornis using FT-NIR spectroscopy. First, the spectra were pre-processed to eliminate useless information; then a prediction model was constructed by partial least squares regression. To study the influence of pre-processing on the prediction of mechanical properties in NIR analysis of wood samples, we applied various pretreatment methods: straight line subtraction, constant offset elimination, vector normalization, min-max normalization, multiplicative scatter correction, first derivative, second derivative, and their combinations with other treatments, such as first derivative + straight line subtraction, first derivative + vector normalization, and first derivative + multiplicative scatter correction. For each combination of preprocessing method and NIR region, RMSECV, RMSEP, and the optimum number of factors/rank were obtained during the model development optimization process. More than 350 combinations were evaluated during this optimization. More than one pre-processing method gave good calibration/cross-validation and prediction/test models, but only the best calibration/cross-validation and prediction/test models are reported here. The results show that one can safely use the NIR region between 4000 and 7500 cm⁻¹ with straight line subtraction, constant offset elimination, first derivative, or second derivative preprocessing, which were found to be the most appropriate for model development.
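One of the pipelines named above, first derivative followed by PLS on the 4000-7500 cm⁻¹ window, is sketched below with a Savitzky-Golay derivative and a cross-validated RMSECV. Spectra and property values are synthetic stand-ins for the wood data, and the derivative settings are assumptions.

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(2)
X = rng.normal(1.0, 0.05, (60, 700)).cumsum(axis=1) / 100  # synthetic spectra
y = X[:, 350] * 40 + rng.normal(0, 0.2, 60)                # synthetic property

X_d1 = savgol_filter(X, window_length=15, polyorder=2,
                     deriv=1, axis=1)                      # first derivative

pls = PLSRegression(n_components=6)                        # assumed rank
y_cv = cross_val_predict(pls, X_d1, y, cv=10).ravel()
print(f"RMSECV = {np.sqrt(((y_cv - y) ** 2).mean()):.3f}")
```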

Keywords: FT-NIR, mechanical properties, pre-processing, PLS

Procedia PDF Downloads 319
620 X-Ray Dosimetry by a Low-Cost Current Mode Ion Chamber

Authors: Ava Zarif Sanayei, Mustafa Farjad-Fard, Mohammad-Reza Mohammadian-Behbahani, Leyli Ebrahimi, Sedigheh Sina

Abstract:

The fabrication and testing of a low-cost air-filled ion chamber for X-ray dosimetry are studied. The chamber is made of a metal cylinder, a central wire, a BC517 Darlington transistor, a 9 V DC battery, and a voltmeter, in order to have a cost-effective means of measuring the dose. The output current of the dosimeter is amplified by the transistor and then fed to the large internal resistance of the voltmeter, producing a readable voltage signal. The dose-response linearity of the ion chamber is evaluated for different exposure scenarios with the X-ray tube: kVp values of 70, 90, and 120, and mAs values up to 20 are considered. In all experiments, a solid-state dosimeter (Solidose 400, Elimpex Medizintechnik) is used as the reference device for chamber calibration. Each exposure case is repeated three times; the voltmeter and Solidose readings are recorded, and the mean and standard deviation values are calculated. The calibration curve, derived by plotting voltmeter readings against Solidose readings, provided a linear fit for all tube kVps, with linear relationships of 99%, 98%, and 100%, respectively, for kVp values of 70, 90, and 120. The study shows the feasibility of achieving acceptable dose measurements with a simplified setup. Further enhancements to the proposed setup include solutions for limiting the leakage current, optimizing the chamber dimensions, utilizing electronic microcontrollers for dedicated data readout, and minimizing the impact of stray electromagnetic fields on the system.
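The calibration step above is an ordinary linear regression of reference dose on voltmeter reading, summarized by R². A sketch follows; the readings are invented values of roughly the right shape, not the paper's data.

```python
import numpy as np

ref_dose_mgy = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 12.0])      # Solidose 400
volts        = np.array([0.11, 0.21, 0.44, 0.86, 1.75, 2.60])

slope, intercept = np.polyfit(volts, ref_dose_mgy, 1)
pred = slope * volts + intercept
r2 = 1 - ((ref_dose_mgy - pred) ** 2).sum() / \
         ((ref_dose_mgy - ref_dose_mgy.mean()) ** 2).sum()
print(f"dose [mGy] = {slope:.3f} * V + {intercept:.3f},  R^2 = {r2:.4f}")
```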

Keywords: dosimetry, ion chamber, radiation detection, X-ray

Procedia PDF Downloads 45
619 Positive Energy Districts in the Swedish Energy System

Authors: Vartan Ahrens Kayayan, Mattias Gustafsson, Erik Dotzauer

Abstract:

The European Union is introducing the positive energy district concept, which aims to reduce overall carbon dioxide emissions. Other studies have already mapped the make-up of such districts and reviewed their definitions and where they are positioned. The Swedish energy system is unique compared to others in Europe, due to the implementation of low-carbon electricity and heat sources and the high uptake of district heating. The goal of this paper is to start the discussion about how the concept of positive energy districts can best be applied to the Swedish context and meet its mitigation goals. To explore how these differences affect the formation of positive energy districts, two cases were analyzed for their methods and for how these integrate into the Swedish energy system: a district in Uppsala with a focus on energy and another in Helsingborg with a focus on climate. The case in Uppsala uses primary energy calculations, which can be criticized but take a virtual border that allows the surrounding system to be considered. The district in Helsingborg has a complex methodology for considering the life cycle emissions of the neighborhood. It successfully considers the energy balance on a monthly basis, but it can be problematized in terms of creating sub-optimized systems due to its tight geographical constraints. The discussion on shaping the definitions and methodologies for positive energy districts is taking place in Europe and in Sweden. We identify three pitfalls that must be avoided so that positive energy districts meet their mitigation goals in the Swedish context: the goal of pushing out fossil fuels is not relevant in the current energy system; the mismatch between summer electricity production and winter energy demands should be addressed; and further implementations should consider collaboration with the established district heating grid.

Keywords: positive energy districts, energy system, renewable energy, European Union

Procedia PDF Downloads 61
618 Omni-Modeler: Dynamic Learning for Pedestrian Redetection

Authors: Michael Karnes, Alper Yilmaz

Abstract:

This paper presents the application of the Omni-Modeler to pedestrian redetection. The pedestrian redetection task creates several challenges for deep neural networks (DNNs) due to the variation of pedestrian appearance with camera position, the variety of environmental conditions, and the specificity required to recognize one pedestrian from another. DNNs require significant training sets and are not easily adapted to changes in class appearance or to changes in the set of classes held in their knowledge domain. Pedestrian redetection requires an algorithm that can actively manage its knowledge domain as individuals move in and out of the scene, as well as learn individual appearances from a few frames of a video. The Omni-Modeler is a dynamically learning few-shot visual recognition algorithm developed for tasks with limited training data availability. It adapts the knowledge domain of pre-trained deep neural networks to novel concepts with a calculated localized language encoder. The Omni-Modeler knowledge domain is generated by creating a dynamic dictionary of concept definitions, which is directly updatable as new information becomes available. Query images are identified through nearest-neighbor comparison to the learned object definitions. The study presented in this paper evaluates its performance in re-identifying individuals as they move through a scene in both single-camera and multi-camera tracking applications. The results demonstrate that the Omni-Modeler shows potential for across-camera-view pedestrian redetection and is highly effective for single-camera redetection, with 93% accuracy across 30 individuals using 64 example images for each individual.
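The dynamic-dictionary idea can be sketched as follows: per-person feature vectors (random placeholders here for the DNN embeddings) are stored in an updatable gallery, and queries are matched by nearest-neighbor cosine similarity. The Omni-Modeler's actual encoder and update rule are not detailed in the abstract, so everything below is an assumption-laden illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
gallery: dict[str, np.ndarray] = {}       # person id -> normalized prototype

def enroll(pid: str, feats: np.ndarray) -> None:
    """Add or update a person from a few example feature vectors."""
    proto = feats.mean(axis=0)
    gallery[pid] = proto / np.linalg.norm(proto)

def redetect(query: np.ndarray) -> str:
    """Return the enrolled identity most similar to the query."""
    q = query / np.linalg.norm(query)
    return max(gallery, key=lambda pid: float(gallery[pid] @ q))

for pid in ("person_01", "person_02"):
    enroll(pid, rng.normal(size=(64, 128)))   # 64 example embeddings each
print(redetect(rng.normal(size=128)))
```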

Keywords: dynamic learning, few-shot learning, pedestrian redetection, visual recognition

Procedia PDF Downloads 54
617 Comparing Two Unmanned Aerial Systems in Determining Elevation at the Field Scale

Authors: Brock Buckingham, Zhe Lin, Wenxuan Guo

Abstract:

Accurate elevation data are critical in deriving topographic attributes for the precision management of crop inputs, especially water and nutrients. Traditional ground-based elevation data acquisition is time-consuming, labor-intensive, and often inconvenient at the field scale. Various unmanned aerial systems (UAS) provide the capability of generating digital elevation data from high-resolution images. The objective of this study was to compare the performance of two UAS with different global positioning system (GPS) receivers in determining elevation at the field scale. A DJI Phantom 4 Pro and a DJI Phantom 4 RTK (real-time kinematic) were used to acquire images at three heights: 40 m, 80 m, and 120 m above ground. Forty ground control panels were placed in the field, and their geographic coordinates were determined using an RTK GPS survey unit. For each image acquisition with a UAS at a particular height, two elevation datasets were generated using the Pix4D stitching software: a calibrated dataset using the surveyed coordinates of the ground control panels, and an uncalibrated dataset without them. Elevation values for each panel derived from the elevation model of each dataset were compared to the corresponding coordinates of the ground control panels. The coefficient of determination (R²) and the root mean squared error (RMSE) were used as evaluation metrics to assess the performance of each image acquisition scenario. RMSE values for the uncalibrated elevation datasets were 26.613 m, 31.141 m, and 25.135 m for images acquired at 120 m, 80 m, and 40 m, respectively, using the Phantom 4 Pro UAS. With calibration for the same UAS, the accuracies were significantly improved, with RMSE values of 0.161 m, 0.165 m, and 0.030 m, respectively. The best results showed an RMSE of 0.032 m and an R² of 0.998 for the calibrated dataset generated using the Phantom 4 RTK UAS at 40 m height. The accuracy of elevation determination decreased as the flight height increased for both UAS, with RMSE values greater than 0.160 m for the datasets acquired at 80 m and 120 m. The results of this study show that calibration with ground control panels improves the accuracy of elevation determination, especially for the UAS with a regular GPS receiver. The Phantom 4 Pro provides accurate elevation data with substantial surveyed ground control panels for the 40 m dataset. The Phantom 4 RTK UAS provides accurate elevation at 40 m without calibration for practical precision agriculture applications. This study provides valuable information on selecting appropriate UAS and flight heights for determining elevation in precision agriculture applications.

Keywords: unmanned aerial system, elevation, precision agriculture, real-time kinematic (RTK)

Procedia PDF Downloads 145
616 Physical Activity, Exercise and Physical Fitness in Different Generation

Authors: Carl J. Caspersen, Kenneth E. Powell, Gregory M. Christenson, Kirupa V. Patel

Abstract:

‘Physical activity’, ‘exercise’, and ‘physical fitness’ are terms that describe different concepts. However, they are often confused with one another, and the terms are sometimes used interchangeably. This paper proposes definitions to distinguish them. Physical activity is defined as any bodily movement produced by skeletal muscles that results in energy expenditure; the energy expenditure can be measured in kilocalories. Physical activity in daily life can be categorized into occupational, sports, conditioning, household, or other activities. Exercise is a subset of physical activity that is planned, structured, and repetitive and has as a final or an intermediate objective the improvement or maintenance of physical fitness. Physical fitness is a set of attributes that are either health- or skill-related; the degree to which people have these attributes can be measured with specific tests. These definitions are offered as an interpretational framework for comparing studies that relate physical activity, exercise, and physical fitness to health. Physical inactivity has been identified as the fourth leading risk factor for global mortality, causing an estimated 3.2 million deaths globally. Regular moderate-intensity physical activity, such as walking, cycling, or participating in sports, has significant benefits for health. For instance, it can reduce the risk of cardiovascular diseases, diabetes, colon and breast cancer, and depression. Moreover, adequate levels of physical activity decrease the risk of hip or vertebral fracture and help control weight. In health guidelines, physical activity generally refers to the subset of physical activity that enhances health: any bodily movement produced by the contraction of skeletal muscle that increases energy expenditure above a basal level.

Keywords: physical activity, exercise, physical fitness, sports

Procedia PDF Downloads 330
615 Social Entrepreneurship Core Dimensions and Influential Perspectives: An Exploratory Study

Authors: Filipa Lancastre, Carmen Lages, Filipe Santos

Abstract:

The concept of social entrepreneurship (SE) remains ambiguous and deprived of a widely accepted operational definition. We argue that awareness of the consensual constituent elements of SE among all key players in its ecosystem, as well as a deeper understanding of apparently divergent perspectives, will allow the different stakeholders (social entrepreneurs, corporations, investors, policymakers, and the beneficiaries themselves) to bridge and cooperate for societal value co-creation in trying to solve our most pressing societal issues. To address our research question – what are the dimensions of SE that are consensual and controversial across existing perspectives? – we designed a two-step qualitative study. In the first step, we conducted an extensive literature review, collecting and analyzing 155 different SE definitions. From this initial step, we extracted and characterized three consensual and six controversial dimensions of the SE concept. In the second step, we conducted 20 semi-structured interviews with practitioners actively involved in the SE field. The goal of this second step was to verify whether the literature had missed any key dimension, to understand how the dimensions relate to each other, and to understand the rationale behind them. The dimensions of the SE concept were extracted based on the relevance of each theme and on the theoretical relationships among them. To identify relevance, we used as a proxy the frequency with which each theme was referred to in our sample of definitions. To understand relationships, we included concepts from the management and psychology literatures, such as the Entrepreneurial Orientation concept from the entrepreneurship literature, the Subjective Well-Being construct from the psychology literature, and the Resource-Based Theory from the strategy literature. This study has two main contributions: first, the identification of the (consensual and controversial) dimensions of SE that exist across scattered definitions in the academic and practitioner literature; second, a framework that parsimoniously synthesizes four dominant perspectives of SE and relates them to the SE dimensions. Given the contested nature of the SE concept, it is not expected that these views will be reconciled at the academic or practitioner level. In future research, academics can, however, be aware of the existence of different understandings of SE and avoid bias towards a single view, developing holistic studies of SE phenomena or comparing differences by studying their underlying assumptions. Additionally, it is important that researchers make explicit the perspective they are embracing to ensure consistency among the research question, sampling procedures, and implications of results. At the practitioner level, individuals or groups following different logics are predictably mutually suspicious and might benefit from taking stock of other perspectives on SE, building bridges and fostering cross-fertilization to the benefit of the SE ecosystem to which all contribute.

Keywords: social entrepreneurship, conceptualization, dimensions, perspectives

Procedia PDF Downloads 148
614 Estimation of Ribb Dam Catchment Sediment Yield and Reservoir Effective Life Using Soil and Water Assessment Tool Model and Empirical Methods

Authors: Getalem E. Haylia

Abstract:

The Ribb dam is one of the irrigation projects in the Upper Blue Nile basin, Ethiopia, built to irrigate the Fogera plain. Reservoir sedimentation is a major problem because it reduces the useful reservoir capacity through the accumulation of sediments coming from the watershed. Estimates of sediment yield are needed for studies of reservoir sedimentation and for planning soil and water conservation measures. The objective of this study was to simulate the sediment yield of the Ribb dam catchment using the SWAT model and to estimate the effective life of the Ribb reservoir using trap efficiency methods. The Ribb dam catchment is found in the north-western highlands of Ethiopia and belongs to the Upper Blue Nile and Lake Tana basins. The Soil and Water Assessment Tool (SWAT) was selected to simulate flow and sediment yield in the Ribb dam catchment. Model sensitivity, calibration, and validation analyses at the Ambo Bahir site were performed with Sequential Uncertainty Fitting (SUFI-2). The flow data at this site were obtained by transforming the Lower Ribb gauge station (2002-2013) flow data using the area ratio method. The sediment load was derived from the sediment concentration yield curve of the Ambo site. Streamflow results showed that the Nash-Sutcliffe efficiency coefficient (NSE) was 0.81 and the coefficient of determination (R²) was 0.86 in the calibration period (2004-2010), and 0.74 and 0.77 in the validation period (2011-2013), respectively. Using the same periods, the NSE and R² for the sediment load calibration were 0.85 and 0.79 and, for the validation, 0.83 and 0.78, respectively. The simulated average daily flow rate and sediment yield generated from the Ribb dam watershed were 3.38 m³/s and 1772.96 tons/km²/yr, respectively. The effective life of the Ribb reservoir was estimated using the empirical methods of Brune (1953), Churchill (1948), and Brown (1958) and found to be 30, 38, and 29 years, respectively. To conclude, massive sediment loads come from the steep-slope agricultural areas, and approximately 98-100% of the incoming annual sediment load has been trapped by the Ribb reservoir. In the Ribb catchment as well as the reservoir, systematic and thorough consideration of technical, social, environmental, and catchment management practices should be made to lengthen the useful life of the Ribb reservoir.
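The efficiency measure quoted above is quick to compute; a minimal implementation of the Nash-Sutcliffe efficiency used to judge the SWAT calibration follows, with invented flows.

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 matches the mean."""
    o, s = np.asarray(observed, float), np.asarray(simulated, float)
    return 1.0 - ((o - s) ** 2).sum() / ((o - o.mean()) ** 2).sum()

obs = [3.1, 4.0, 6.5, 9.8, 7.2, 4.4, 3.0]   # m3/s, invented
sim = [2.8, 4.3, 6.1, 9.2, 7.6, 4.1, 3.3]
print(f"NSE = {nse(obs, sim):.2f}")
```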

Keywords: catchment, reservoir effective life, reservoir sedimentation, Ribb, sediment yield, SWAT model

Procedia PDF Downloads 154
613 Procedural Protocol for Dual Energy Computed Tomography (DECT) Inversion

Authors: Rezvan Ravanfar Haghighi, S. Chatterjee, Pratik Kumar, V. C. Vani, Priya Jagia, Sanjiv Sharma, Susama Rani Mandal, R. Lakshmy

Abstract:

Dual energy computed tomography (DECT) aims at recording the HU(V) values for a sample at two different voltages, V = V1, V2, and thus obtaining the electron density (ρe) and effective atomic number (Zeff) of the substance. In the present paper, we aim to obtain a numerical algorithm by which (ρe, Zeff) can be obtained from the HU(100) and HU(140) data, where V = 100, 140 kVp. The idea is to use this inversion method to characterize and distinguish between lipid and fibrous coronary artery plaques. With the idea of developing the inversion algorithm for low-Zeff materials, as is the case for non-calcified coronary artery plaque, we prepared aqueous samples whose calculated values of (ρe, Zeff) lie in the ranges 2.65×10²³ ≤ ρe ≤ 3.64×10²³ per cc and 6.80 ≤ Zeff ≤ 8.90. We filled the phantom with these known samples and experimentally determined HU(100) and HU(140) for the same pixels. Knowing that the HU(V) values are related to the attenuation coefficient of the system, we present an algorithm by which (ρe, Zeff) is calibrated with respect to (HU(100), HU(140)). The calibration is done with a known set of 20 samples; its accuracy is checked with a different set of 23 known samples. We find that the calibration gives ρe with an accuracy of ±4%, while Zeff is found within ±1% of the actual value, with 95% confidence. In this inversion method, the (ρe, Zeff) of the scanned sample can be found by eliminating the effects of the CT machine and by ensuring that the determination of the two unknowns (ρe, Zeff) does not interfere with each other. It is found that this algorithm can be used for the prediction of the chemical characteristics (ρe, Zeff) of unknown scanned materials with a 95% confidence level, by inversion of the DECT data.
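A simplified stand-in for the calibration/inversion described above: (HU(100), HU(140)) pairs from known samples are regressed onto (ρe, Zeff) with low-order polynomial features, and the fitted map is then applied to unknown pixels. The toy forward model, sample data, and choice of regression are assumptions; the paper's actual algorithm is not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(4)
rho_e = rng.uniform(2.65e23, 3.64e23, 20)      # known aqueous samples
z_eff = rng.uniform(6.80, 8.90, 20)

# toy forward model: HU grows with rho_e and an energy-dependent Z^3 term
hu100 = 1e-21 * rho_e * (1 + 4e-3 * z_eff ** 3) - 1000
hu140 = 1e-21 * rho_e * (1 + 2e-3 * z_eff ** 3) - 1000

X = np.column_stack([hu100, hu140])
Y = np.column_stack([rho_e, z_eff])
model = make_pipeline(PolynomialFeatures(2), LinearRegression()).fit(X, Y)

rho_pred, z_pred = model.predict([[hu100[0], hu140[0]]])[0]
print(f"rho_e = {rho_pred:.3e} per cc, Z_eff = {z_pred:.2f}")
```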

Keywords: chemical composition, dual-energy computed tomography, inversion algorithm

Procedia PDF Downloads 409
612 Carbohydrate-Based Recommendations as a Basis for Dietary Guidelines

Authors: A. E. Buyken, D. J. Mela, P. Dussort, I. T. Johnson, I. A. Macdonald, A. Piekarz, J. D. Stowell, F. Brouns

Abstract:

Recently, a number of renewed dietary guidelines have been published by various health authorities. The aim of the present work was 1) to review the processes (systematic approach/review, inclusion of public consultation) and methodological approaches used to identify and select the evidence base underpinning the established recommendations for total carbohydrate (CHO), fiber, and sugar consumption, and 2) to examine how differences in the methods and processes applied may have influenced the final recommendations. A search of WHO, US, Canadian, Australian, and European sources identified 13 authoritative dietary guidelines with the desired detailed information. Each of these guidelines was evaluated for its scientific basis (types and grading of the evidence) and the processes by which the guidelines were developed. Based on the data retrieved, the following conclusions can be drawn: 1) Generally, a relatively high total CHO and fiber intake and a limited intake of sugars (added or free) are recommended. 2) Even where recommendations are quite similar, the specific justifications for quantitative/qualitative recommendations differ across authorities. 3) Differences appear to be due to inconsistencies in the underlying definitions of CHO exposure and in the concurrent appraisal of CHO-providing foods and nutrients, as well as the choice and number of health outcomes selected for the evidence appraisal. 4) Differences in the selected articles, time frames, or data aggregation methods appeared to be of rather minor influence. From this assessment, the main recommendations are for: 1) more explicit quantitative justifications for numerical guidelines and communication of uncertainty; and 2) greater international harmonization, particularly with regard to the underlying definitions of exposures and the range of relevant nutrition-related outcomes.

Keywords: carbohydrates, dietary fibres, dietary guidelines, recommendations, sugars

Procedia PDF Downloads 237
611 Manufacturing and Calibration of Material Standards for Optical Microscopy in Industrial Environments

Authors: Alberto Mínguez-Martínez, Jesús De Vicente Y Oliva

Abstract:

It seems that we live in a world in which the trend in industrial environments is the miniaturization of systems and materials and the fabrication of parts at the micro- and nano-scale. The problem arises when manufacturers want to assess the quality of their production. This issue is becoming crucial due to the evolution of industry and the development of Industry 4.0. As Industry 4.0 is based on digital models of production and processes, having accurate measurements becomes essential. At this point, the field of metrology plays an important role, as it is a powerful tool for ensuring more stable production and reducing scrap and the cost of non-conformities. The most widespread measuring instruments that allow us to carry out accurate measurements at these scales are optical microscopes, whether traditional, confocal, or focus-variation microscopes, profile projectors, or any other similar measurement system. However, the accuracy of measurements depends on their traceability to the SI unit of length (the meter). Providing adequate traceability for 2D and 3D dimensional measurements at the micro- and nano-scale in industrial environments is a problem under active study, and it does not have a unique answer. In addition, if commercial material standards for the micro- and nano-scale are considered, two main problems emerge. On the one hand, those material standards that could be considered complete and very interesting do not provide traceability of dimensional measurements; on the other hand, their calibration is very expensive. This situation implies that these kinds of standards will not succeed in industrial environments and, as a result, manufacturers will work in the absence of traceability. To solve this problem in industrial environments, it becomes necessary to have material standards that are easy to use, agile, adaptable to different forms, cheap to manufacture and, of course, traceable to the definition of the meter by simple methods. By using these ‘customized standards’, it would be possible to adapt and design measuring procedures for each application, and manufacturers would work with some traceability. It is important to note that, although this traceability is clearly incomplete, this situation is preferable to working in the absence of it. Recently, the versatility and utility of using laser technology and other additive manufacturing (AM) technologies to manufacture customized material standards have been demonstrated. In this paper, the authors propose manufacturing a customized material standard using an ultraviolet laser system, together with a method to calibrate it. To conclude, the results of the calibration carried out in an accredited dimensional metrology laboratory are presented.

Keywords: industrial environment, material standards, optical measuring instrument, traceability

Procedia PDF Downloads 99
610 Determination of MDA by HPLC in Blood of Levofloxacin Treated Rats

Authors: D. S. Mohale, A. P. Dewani, A. S.tripathi, A. V. Chandewar

Abstract:

The present work demonstrates the applicability of high-performance liquid chromatography (HPLC) with UV-Vis detection for the quantification of malondialdehyde, as the malondialdehyde-thiobarbituric acid complex (MDA-TBA), in-vivo in rats. The HPLC method for MDA-TBA was run in isocratic mode on a reverse-phase C18 column (250 mm × 4.6 mm) at a flow rate of 1.0 mL min−1, followed by detection at 532 nm. The chromatographic conditions were optimized by varying the concentration and pH of the water component, followed by changes in the percentage of the organic phase. The optimal mobile phase consisted of a mixture of water (0.2% triethylamine, pH adjusted to 2.3 with ortho-phosphoric acid) and acetonitrile in a ratio of 80:20 v/v. The retention time of the MDA-TBA complex was 3.7 min. The developed method was sensitive: the limits of detection and quantification (LOD and LOQ) for the MDA-TBA complex, estimated from the standard deviation and slope of the calibration curve, were 110 ng/ml and 363 ng/ml, respectively. Calibration studies were performed by spiking MDA into rat plasma at concentrations ranging from 500 to 1000 ng/ml. The precision of the developed method, measured in terms of relative standard deviations for intra-day and inter-day studies, was 1.6–5.0% and 1.9–3.6%, respectively. The HPLC method was applied to monitor MDA levels in rats subjected to chronic treatment with levofloxacin (LEV) (5 mg/kg/day) for 21 days. The results were compared with the findings in control-group rats. The mean peak areas of both study groups were subjected to an unpaired Student's t-test to obtain p-values. The p-value was <0.001, indicating significant results and suggesting increased MDA levels in rats subjected to the chronic 21-day LEV treatment.
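
The quoted LOD and LOQ (110 and 363 ng/ml, a ratio of roughly 10/3.3) are consistent with the common ICH-style estimates LOD = 3.3σ/S and LOQ = 10σ/S, where σ is the residual standard deviation and S the slope of the calibration curve. The sketch below shows that calculation on hypothetical spiked-plasma data; the peak areas are illustrative, not the study's measurements.

```python
import numpy as np

# Hypothetical calibration data: spiked MDA concentrations (ng/ml) in rat
# plasma and the corresponding MDA-TBA peak areas. Values are illustrative.
conc = np.array([500, 600, 700, 800, 900, 1000], dtype=float)
area = np.array([1020, 1230, 1415, 1640, 1830, 2050], dtype=float)

# Least-squares calibration line: area = slope * conc + intercept
slope, intercept = np.polyfit(conc, area, 1)

# Residual standard deviation of the fit (n - 2 degrees of freedom)
residuals = area - (slope * conc + intercept)
sigma = residuals.std(ddof=2)

# ICH-style estimates when LOD/LOQ are derived from the standard deviation
# and slope of the calibration curve
lod = 3.3 * sigma / slope
loq = 10.0 * sigma / slope

print(f"slope = {slope:.3f}, LOD = {lod:.1f} ng/ml, LOQ = {loq:.1f} ng/ml")
```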

Keywords: malondialdehyde-thiobarbituric acid complex, levofloxacin, HPLC, oxidative stress

Procedia PDF Downloads 315
609 Assessing the Resilience of the Insurance Industry under Solvency II

Authors: Vincenzo Russo, Rosella Giacometti

Abstract:

The paper aims to assess the insurance industry's resilience under Solvency II against adverse scenarios. Starting from the economic balance sheet available under Solvency II for insurance and reinsurance undertakings, we assume that assets and liabilities follow a bivariate geometric Brownian motion (GBM). Then, using the results available under Margrabe's formula, we establish an analytical solution for calibrating the volatility of the asset-liability ratio. In this way, we can estimate the probability of default and the probability of breaching the undertaking's Solvency Capital Requirement (SCR). Furthermore, since estimating the volatility of the Solvency Ratio has become crucial for insurers in light of the financial crises of recent decades, we introduce a novel measure that we call the Resiliency Ratio. The Resiliency Ratio can be used, in addition to the Solvency Ratio, to evaluate the insurance industry's resilience under adverse scenarios. Finally, we introduce a simplified stress-test tool to evaluate the economic balance sheet under stressed conditions. The proposed model features analytical tractability and a fast calibration procedure in which only the data disclosed under Solvency II public reporting are needed. Using the data published regularly by the European Insurance and Occupational Pensions Authority (EIOPA) in aggregated form by country, an empirical analysis has been performed to calibrate the model and provide the related results at the country level.
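
Under the stated bivariate-GBM assumption, the asset-liability ratio is itself lognormal with variance rate σ_A² + σ_L² − 2ρσ_Aσ_L, the same volatility that appears in Margrabe's exchange-option setup. A minimal sketch of a breach-probability calculation under this assumption follows; the parameter values and the threshold convention are illustrative, not taken from the paper or from EIOPA data.

```python
import math
from statistics import NormalDist

def breach_probability(r0, mu, sigma_a, sigma_l, rho, horizon, threshold):
    """Probability that the asset-liability ratio R = A/L, modeled as a
    geometric Brownian motion, lies below `threshold` at `horizon` years.

    threshold = 1.0 gives a default-style event (assets < liabilities);
    a higher threshold can proxy a breach of the SCR coverage level.
    All parameter names and values here are assumptions for illustration.
    """
    # Volatility of the ratio of two correlated GBMs (Margrabe's setup)
    sigma = math.sqrt(sigma_a**2 + sigma_l**2 - 2 * rho * sigma_a * sigma_l)
    d = (math.log(threshold / r0) - (mu - 0.5 * sigma**2) * horizon) / (
        sigma * math.sqrt(horizon)
    )
    return NormalDist().cdf(d)

# Illustrative figures: initial ratio 1.8, zero drift, asset vol 8%,
# liability vol 5%, correlation 0.6, one-year horizon, default threshold
print(breach_probability(1.8, 0.0, 0.08, 0.05, 0.6, 1.0, 1.0))
```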

Keywords: Solvency II, solvency ratio, volatility of the asset-liability ratio, probability of default, probability of breaching the SCR, resiliency ratio, stress test

Procedia PDF Downloads 61
608 Analysis, Evaluation and Optimization of Food Management: Minimization of Food Losses and Food Wastage along the Food Value Chain

Authors: G. Hafner

Abstract:

A method developed at the University of Stuttgart will be presented: ‘Analysis, Evaluation and Optimization of Food Management’. A major focus is the quantification of food losses and food waste, as well as their classification and evaluation with regard to system optimization through waste prevention. For the quantification and accounting of food, food losses and food waste along the food chain, a clear definition of core terms is required at the outset. This includes their methodological classification and demarcation within the sectors of the food value chain. The food chain is divided into agriculture, industry and crafts, trade, and consumption (at home and out of home). To align the core terms, the authors have cooperated with relevant stakeholders in Germany with the goal of achieving holistic and agreed definitions for the whole food chain. This includes the modeling of subsystems within the food value chain, the definition of terms, the differentiation between food losses and food wastage, and the methodological approaches. ‘Food losses’ and ‘food wastes’ are assigned to the individual sectors of the food chain, including a description of the respective methods. The method for the analysis, evaluation and optimization of food management systems consists of the following parts: Part I: Terms and Definitions. Part II: System Modeling. Part III: Procedure for Data Collection and Accounting. Part IV: Methodological Approaches for Classification and Evaluation of Results. Part V: Evaluation Parameters and Benchmarks. Part VI: Measures for Optimization. Part VII: Monitoring of Success. The method will be demonstrated using the example of an investigation of food losses and food wastage in the Federal State of Bavaria, including an extrapolation of the respective results to quantify food wastage in Germany.

Keywords: food losses, food waste, resource management, waste management, system analysis, waste minimization, resource efficiency

Procedia PDF Downloads 381
607 Calibration of 2D and 3D Optical Measuring Instruments in Industrial Environments at Submillimeter Range

Authors: Alberto Mínguez-Martínez, Jesús de Vicente y Oliva

Abstract:

Modern manufacturing processes have led to the miniaturization of systems and, as a result, parts at the micro- and nanoscale are produced. This trend seems set to become increasingly important in the near future. Moreover, as a requirement of Industry 4.0, the digitalization of production models and processes makes it very important to ensure that the dimensions of newly manufactured parts meet the specifications of the models. In this way, it is possible to reduce scrap and the cost of non-conformities while ensuring the stability of production. To ensure the quality of manufactured parts, it becomes necessary to carry out traceable measurements at scales below one millimeter. Providing adequate traceability to the SI unit of length (the meter) for 2D and 3D measurements at this scale is a problem that does not have a unique solution in industrial environments. Researchers in the field of dimensional metrology all around the world are working on this issue. A solution for industrial environments, even if incomplete, would enable working with some traceability. At this point, we believe that the study of surfaces could provide a first approximation to a solution. Among the different options proposed in the literature, areal topography methods may be the most relevant because they can be compared to measurements performed using coordinate measuring machines (CMMs). These measuring methods give (x, y, z) coordinates for each point, expressed in two different ways: either the z coordinate is given as a function of x, denoted z(x), for each Y-axis coordinate, or as a function of both the x and y coordinates, denoted z(x, y). Among others, optical measuring instruments, mainly microscopes, are extensively used to carry out measurements at scales below one millimeter because they are non-destructive. In this paper, the authors propose a calibration procedure for the scales of optical measuring instruments, particularized for a confocal microscope, using material standards that are easy to find and calibrate in metrology and quality laboratories in industrial environments. Confocal microscopes are measuring instruments capable of filtering out the out-of-focus reflected light, so that only the in-focus part of the surface reaches the detector and is imaged. By taking pictures at different Z levels of focus, specialized software interpolates between the different planes and reconstructs the surface geometry as a 3D model. As is easy to deduce, it is necessary to give traceability to each axis. As a complementary result, the roughness parameter Ra will be traced to the reference. Although the solution is designed for a confocal microscope, it may be applied to the calibration of other optical measuring instruments with minor changes.
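
As a rough illustration of the complementary roughness result, the arithmetic mean roughness Ra of a measured profile z(x) is the mean absolute deviation of the heights from the mean line. The sketch below computes it for a hypothetical profile; a standards-compliant Ra would additionally apply form removal and filtering before evaluation, which are omitted here.

```python
import numpy as np

def roughness_ra(z):
    """Arithmetic mean roughness Ra of a measured profile z(x): the mean
    absolute deviation of the heights from the mean line. Form removal and
    filtering, required by the roughness standards, are omitted here."""
    z = np.asarray(z, dtype=float)
    return np.mean(np.abs(z - z.mean()))

# Hypothetical profile heights (micrometres) sampled along the X axis of a
# confocal image; values are illustrative only.
profile = [0.12, -0.05, 0.08, -0.10, 0.03, -0.02, 0.07, -0.09]
print(f"Ra = {roughness_ra(profile):.3f} um")
```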

Keywords: industrial environment, confocal microscope, optical measuring instrument, traceability

Procedia PDF Downloads 127