Search results for: probability of detection (PD)
2532 Development of Wide Bandgap Semiconductor Based Particle Detector
Authors: Rupa Jeena, Pankaj Chetry, Pradeep Sarin
Abstract:
The study of fundamental particles and the forces governing them has always been an attractive field of theoretical study. With the advancement of new technologies and instruments, it is now possible to perform large-scale particle physics experiments to validate theoretical predictions. These experiments are generally carried out in a highly intense beam environment, which in turn requires the development of a detector prototype possessing radiation tolerance, thermal stability, and a fast timing response. Semiconductors such as silicon, germanium, diamond, and gallium nitride (GaN) have been widely used for particle detection applications. Silicon and germanium, being narrow bandgap semiconductors, require pre-cooling to suppress the noise from thermally generated intrinsic charge carriers. The application of diamond in large-scale experiments is rare owing to its high fabrication cost, while GaN is one of the most extensively explored candidates. Considering all of these requirements, we aim to introduce another wide bandgap semiconductor into this active area of research: we have made an attempt to utilize the wide bandgap and other properties of rutile titanium dioxide (TiO2) for particle detection purposes. The thermal evaporation-oxidation technique (in a PID furnace) is used for deposition of the film, and the Metal-Semiconductor-Metal (MSM) electrical contacts are made using titanium/gold (Ti/Au, 20/80 nm). Characterization comprising X-Ray Diffraction (XRD), Atomic Force Microscopy (AFM), Ultraviolet (UV)-Visible spectroscopy, and Laser Raman Spectroscopy (LRS) has been performed on the film to obtain detailed information about its surface morphology. In addition, electrical characterizations such as current-voltage (IV) measurements in dark and light conditions and tests with a laser are performed to better understand the working of the detector prototype.
All these preliminary tests of the detector will be presented.
Keywords: particle detector, rutile titanium dioxide, thermal evaporation, wide bandgap semiconductors
Procedia PDF Downloads 79
2531 Performance Analysis of M-Ary Pulse Position Modulation in Multihop Multiple Input Multiple Output-Free Space Optical System over Uncorrelated Gamma-Gamma Atmospheric Turbulence Channels
Authors: Hechmi Saidi, Noureddine Hamdi
Abstract:
The performance of a Decode and Forward (DF) multihop Free Space Optical (FSO) scheme deploying a Multiple Input Multiple Output (MIMO) configuration under the Gamma-Gamma (GG) statistical distribution, adopting M-ary Pulse Position Modulation (MPPM) coding, is investigated. Exact and approximate values of the Symbol Error Rate (SER) are derived. A closed-form formula for the Probability Density Function (PDF) is expressed for the designed system. Thanks to the use of the DF multihop MIMO FSO configuration and MPPM signaling, atmospheric turbulence is combated, and the transmitted signal quality is thereby improved.
Keywords: free space optical, multiple input multiple output, M-ary pulse position modulation, multihop, decode and forward, symbol error rate, gamma-gamma channel
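The Gamma-Gamma channel model used in this abstract can be illustrated with a short simulation: the irradiance is commonly modeled as the product of two independent, unit-mean Gamma random variables representing large- and small-scale scintillation. This is a minimal sketch, not the authors' code; the shaping parameters alpha and beta below are illustrative assumptions.

```python
import random

# Hedged illustration (not the paper's implementation): a Gamma-Gamma (GG)
# FSO channel gain I = X * Y, where X and Y are independent unit-mean
# Gamma variates modeling large- and small-scale scintillation.
random.seed(0)

def gamma_gamma_samples(alpha, beta, n):
    """Draw n unit-mean Gamma-Gamma channel gains I = X * Y."""
    return [random.gammavariate(alpha, 1.0 / alpha) *
            random.gammavariate(beta, 1.0 / beta)
            for _ in range(n)]

alpha, beta = 4.0, 2.0          # illustrative turbulence parameters
samples = gamma_gamma_samples(alpha, beta, 100_000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
# Theory: E[I] = 1 and Var[I] = 1/alpha + 1/beta + 1/(alpha*beta)
print(round(mean, 2), round(var, 2))
```

The sample variance can be checked against the theoretical scintillation index, which for these parameters is 1/4 + 1/2 + 1/8 = 0.875.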
Procedia PDF Downloads 199
2530 An Investigation of Tourists’ Destination Loyalty: A Case Study of Bangkok, Thailand
Authors: Sukritta Larsen, Kevin Wongleedee
Abstract:
The purposes of this research were to study tourists’ destination loyalty from the perspective of international tourists in Bangkok and to study their level of interest in revisiting Bangkok in the near future. A probability random sample of 200 international tourists was utilized; half of the sample group was male and the other half was female. A five-point Likert-scale questionnaire was designed to collect the data, and short in-depth interviews were also used to obtain their opinions. The findings revealed that the majority of respondents had a medium level of loyalty. Examined in detail, the destination loyalty indicators can be ranked by mean average from high to low as follows: to recommend the visit, to say positive things, to revisit in the next three years, to refer the information, and to plan to visit regularly. Finally, the findings from the in-depth interviews with a small group of international tourists revealed that the major obstacles preventing international tourists who may be interested in revisiting Thailand included traffic congestion, a high crime rate, and political instability.
Keywords: destination loyalty, international tourists, revisit, Bangkok
Procedia PDF Downloads 338
2529 Feasibility of an Extreme Wind Risk Assessment Software for Industrial Applications
Authors: Francesco Pandolfi, Georgios Baltzopoulos, Iunio Iervolino
Abstract:
The impact of extreme winds on industrial assets and the built environment is gaining increasing attention from stakeholders, including the corporate insurance industry. This has led to progressively more in-depth study of building vulnerability and fragility to wind. Wind vulnerability models are used in probabilistic risk assessment to relate a loss metric to an intensity measure of the natural event, usually a gust or a mean wind speed. Vulnerability models can be integrated with the wind hazard, which consists of associating a probability with each intensity level in a time interval (e.g., by means of return periods), to provide an assessment of future losses due to extreme wind. This has also given impetus to world- and regional-scale wind hazard studies. Another approach often adopted for the probabilistic description of building vulnerability to wind is the use of fragility functions, which provide the conditional probability that selected building components will exceed certain damage states, given the wind intensity. In fact, in the wind engineering literature it is more common to find structural system- or component-level fragility functions than wind vulnerability models for an entire building. Loss assessment based on component fragilities requires logical combination rules that define the building’s damage state given the damage state of each component, and the availability of a consequence model that provides the losses associated with each damage state. When risk calculations are based on numerical simulation of a structure’s behavior during extreme wind scenarios, the interaction of component fragilities is intertwined with the computational procedure. However, simulation-based approaches are usually computationally demanding and case-specific. In this context, the present work introduces the ExtReMe wind risk assESsment prototype Software, ERMESS, which is being developed at the University of Naples Federico II.
ERMESS is a wind risk assessment tool for insurance applications to industrial facilities, collecting a wide assortment of available wind vulnerability models and fragility functions to facilitate their incorporation into risk calculations based on in-built or user-defined wind hazard data. This software implements an alternative method for building-specific risk assessment based on existing component-level fragility functions and on a number of simplifying assumptions for their interactions. The applicability of this alternative procedure is explored by means of an illustrative proof-of-concept example, which considers four main building components, namely: the roof covering, roof structure, envelope wall and envelope openings. The application shows that, despite the simplifying assumptions, the procedure can yield risk evaluations that are comparable to those obtained via more rigorous building-level simulation-based methods, at least in the considered example. The advantage of this approach is shown to lie in the fact that a database of building component fragility curves can be put to use for the development of new wind vulnerability models to cover building typologies not yet adequately covered by existing works and whose rigorous development is usually beyond the budget of portfolio-related industrial applications.
Keywords: component wind fragility, probabilistic risk assessment, vulnerability model, wind-induced losses
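One simple combination rule of the kind the abstract alludes to can be sketched as follows. Treating the four components as independent is an assumption made here purely for illustration (the paper's actual interaction assumptions are not given in the abstract); under it, the probability that at least one component reaches its damage state at a given wind intensity is the complement of the product of the component survival probabilities.

```python
# Hedged sketch, assuming independent components (an illustrative
# simplification, not necessarily the rule used in ERMESS).
def p_any_component_damaged(component_probs):
    """P(at least one component exceeds its damage state)."""
    survive = 1.0
    for p in component_probs:
        survive *= (1.0 - p)   # all components must survive
    return 1.0 - survive

# illustrative exceedance probabilities at one gust speed
probs = {"roof covering": 0.30, "roof structure": 0.10,
         "envelope wall": 0.05, "envelope openings": 0.20}
print(round(p_any_component_damaged(probs.values()), 3))
```

Any dependence between component failures (e.g., opening breach raising internal pressure on the roof) would require a richer rule than this product form.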
Procedia PDF Downloads 181
2528 An Adaptive Opportunistic Transmission for Unlicensed Spectrum Sharing in Heterogeneous Networks
Authors: Daehyoung Kim, Pervez Khan, Hoon Kim
Abstract:
Efficient utilization of spectrum resources is a fundamental issue in wireless communications due to spectrum scarcity. To improve the efficiency of spectrum utilization, spectrum sharing in unlicensed bands is regarded as one of the key technologies for next-generation wireless networks. A number of schemes such as Listen-Before-Talk (LBT) and Carrier Sense Adaptive Transmission (CSAT) have been suggested for this purpose, but more efficient sharing schemes are required to improve spectrum utilization efficiency. This work considers an opportunistic transmission approach and a dynamic Contention Window (CW) adjustment scheme for LTE-U users sharing the unlicensed spectrum with Wi-Fi, in order to enhance the overall system throughput. The decision criteria for the dynamic adjustment of the CW are based on a collision evaluation derived from the collision probability of the system. The overall performance can be improved due to the adaptive adjustment of the CW. Simulation results show that the proposed scheme outperforms the Distributed Coordination Function (DCF) mechanism of the IEEE 802.11 MAC.
Keywords: spectrum sharing, adaptive opportunistic transmission, unlicensed bands, heterogeneous networks
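The kind of collision-driven contention-window adjustment described above can be sketched in a few lines. The threshold, the doubling/halving policy, and the CW bounds below are illustrative assumptions, not the authors' exact rule.

```python
# Hedged sketch of a dynamic CW adjustment driven by an observed collision
# ratio; all constants here are illustrative, not from the paper.
CW_MIN, CW_MAX = 16, 1024

def adjust_cw(cw, collisions, transmissions, threshold=0.1):
    """Grow CW when the measured collision ratio exceeds the threshold,
    shrink it when the channel looks clear."""
    if transmissions == 0:
        return cw                     # nothing observed, keep current CW
    p_collision = collisions / transmissions
    if p_collision > threshold:
        return min(cw * 2, CW_MAX)    # back off harder
    return max(cw // 2, CW_MIN)       # contend more aggressively

print(adjust_cw(64, 20, 100))    # 20% collisions -> 128
print(adjust_cw(64, 5, 100))     # 5% collisions  -> 32
print(adjust_cw(1024, 90, 100))  # already capped at CW_MAX -> 1024
```

In a real coexistence scheme the collision ratio would be estimated per decision epoch from ACK feedback or channel sensing rather than passed in directly.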
Procedia PDF Downloads 350
2527 Detection of Temporal Change of Fishery and Island Activities by DNB and SAR on the South China Sea
Authors: I. Asanuma, T. Yamaguchi, J. Park, K. J. Mackin
Abstract:
Fishery lights on the sea surface can be detected by the Day/Night Band (DNB) of the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (Suomi-NPP) satellite. The DNB covers the spectral range of 500 to 900 nm and offers high sensitivity. A difficulty with the DNB is distinguishing fishing lights from lunar light reflected by clouds, which affects observations for half of each month. Fishery lights and other surface lights are separated from cloud-reflected lunar light by a method using the DNB and the infrared band, where the detection limits are defined as a function of the brightness temperature, with a difference from the maximum temperature for each level of DNB radiance and with the contrast of DNB radiance against the background radiance. Fishery boats and structures on islands can be detected by Synthetic Aperture Radar (SAR) on polar-orbiting satellites using the microwaves reflected by surface targets. The SAR faces a tradeoff between spatial resolution and coverage when detecting small targets like fishery boats. The distribution of fishery boats and island activities was detected with the ScanSAR narrow mode of Radarsat-2, which covers 300 km by 300 km with various combinations of polarizations. The fishing boats were detected as single pixels of highly scattering targets with the ScanSAR narrow mode, whose spatial resolution is 30 m. As the look-angle-dependent scattering signals exhibit significant differences, the standard deviations of the scattered signals for each look angle were taken into account as a threshold to separate the signals of fishing boats and structures on the islands from background noise. It was difficult to validate the targets detected by DNB against the SAR data because of the time lag of about 6 hours between the observations: midnight for DNB and morning or evening for SAR.
The temporal changes of island activities were detected as a change of the mean DNB intensity over a circular area corresponding to a certain scale of activities. An increase in mean DNB intensity corresponded to the beginning of dredging, and subsequent changes in intensity indicated the end of reclamation and the following construction of facilities.
Keywords: day night band, SAR, fishery, South China Sea
Procedia PDF Downloads 235
2526 Modern Detection and Description Methods for Natural Plants Recognition
Authors: Masoud Fathi Kazerouni, Jens Schlemper, Klaus-Dieter Kuhnert
Abstract:
"Green planet" is one of Earth’s names; Earth is a terrestrial planet and the fifth largest planet of the solar system. Plants do not have a constant and uniform distribution around the world, and even the variation of plant species differs within a single region. The presence of plants is not limited to one field like botany; they appear in fields such as literature and mythology, and they hold useful and inestimable historical records. No one can imagine the world without oxygen, which is produced mostly by plants, and their influence is even more manifest since no other living species can exist on Earth without plants, which also form the basic food staples. Regulation of the water cycle and oxygen production are further roles of plants, and these roles affect the environment and climate. Plants are the main components of agricultural activities, from which many countries benefit; plants therefore have impacts on the political and economic situations and the future of countries. Due to the importance of plants and their roles, the study of plants is essential in various fields, and consideration of their different applications leads to a focus on their details as well. Automatic recognition of plants is a novel field that contributes to other research and future studies. Moreover, plants survive in different places and regions by means of adaptations, which are special factors that help them in hard situations. Weather conditions are among the parameters that affect plant life and existence in an area. Recognition of plants under different weather conditions is a new window of research in the field; only natural images are usable for considering weather conditions as new factors, which makes the system generalized and useful. In order to have a general system, the distance from the camera to the plants is considered as another factor.
The other considered factor is the change of light intensity in the environment over the course of the day. Adding these factors poses a substantial challenge for building an accurate and robust system. Development of an efficient plant recognition system is essential and effective. One important component of a plant is the leaf, which can be used to implement automatic systems for plant recognition without any human interaction. Due to the nature of the images used, a characteristic investigation of the plants is performed, with leaves selected as the first, most reliable characteristic. Four plant species are specified, with the goal of classifying them with an accurate system. The current paper is devoted to the principal directions of the proposed methods and implemented system, the image dataset, and the results. The procedure of the algorithm and classification is explained in detail. The first steps, feature detection and description of visual information, are performed using the Scale-Invariant Feature Transform (SIFT), HARRIS-SIFT, and FAST-SIFT methods. The accuracy of the implemented methods is computed. In addition to this comparison, the robustness and efficiency of the results under different conditions are investigated and explained.
Keywords: SIFT combination, feature extraction, feature detection, natural images, natural plant recognition, HARRIS-SIFT, FAST-SIFT
Procedia PDF Downloads 276
2525 Automated Adaptions of Semantic User- and Service Profile Representations by Learning the User Context
Authors: Nicole Merkle, Stefan Zander
Abstract:
Ambient Assisted Living (AAL) describes a technological and methodological stack (e.g., formal model-theoretic semantics, rule-based reasoning, and machine learning) for capturing different aspects of the behavior, activities, and characteristics of humans. Hence, a semantic representation of the user environment and its relevant elements is required in order to allow assistive agents to recognize situations and deduce appropriate actions. Furthermore, the user and his/her characteristics (e.g., physical, cognitive, preferences) need to be represented with a high degree of expressiveness in order to allow software agents a precise evaluation of the users’ context models. The correct interpretation of these context models highly depends on temporal and spatial circumstances as well as individual user preferences. In most AAL approaches, model representations of real-world situations represent the current state of a universe of discourse at a given point in time, neglecting transitions between states. The AAL domain currently lacks sufficient approaches that contemplate the dynamic adaptation of context-related representations. Semantic representations of relevant real-world excerpts (e.g., user activities) help cognitive, rule-based agents to reason and make decisions in order to assist users in appropriate tasks and situations. However, rules and reasoning on semantic models are not sufficient for handling uncertainty and fuzzy situations. A given situation can require different (re-)actions in order to achieve the best results with respect to the user and his/her needs. But what is the best result? To answer this question, we need to consider that every smart agent is required to achieve an objective, but this objective is mostly defined by domain experts, who can also fail in their estimation of what is desired by the user and what is not.
Hence, a smart agent has to be able to learn from context history data and estimate or predict what is most likely in certain contexts. Furthermore, different agents with contrary objectives can cause collisions, as their actions influence the user’s context and its constituting conditions in unintended or uncontrolled ways. We present an approach for dynamically updating a semantic model with respect to the current user context that allows flexibility of the software agents and enhances their conformance in order to improve the user experience. The presented approach adapts rules by learning from sensor evidence and user actions using probabilistic reasoning approaches, based on given expert knowledge. The semantic domain model consists basically of device-, service-, and user-profile representations. In this paper, we present how this semantic domain model can be used to compute the probability of matching rules and actions. We apply this probability estimation to compare the current domain model representation with the computed one in order to adapt the formal semantic representation. Our approach aims at minimizing the likelihood of unintended interferences in order to eliminate conflicts and unpredictable side effects by updating pre-defined expert knowledge according to the most probable context representation. This enables agents to adapt to dynamic changes in the environment, which enhances the provision of adequate assistance and positively affects user satisfaction.
Keywords: ambient intelligence, machine learning, semantic web, software agents
Procedia PDF Downloads 281
2524 Reducing Sexism Promotes Female Navy with Agreeableness Personality Traits to Increase Bystander Attitudes Towards Sexual Harassment
Authors: Chia-Chun Wu, Pei-Shan Lee
Abstract:
Gender equality is an important issue in the workplace today. This study aimed to explore whether female navy personnel with agreeableness personality traits can increase bystander attitudes towards sexual harassment when sexism is reduced. A total of 281 female navy personnel in Taiwan participated in this study and completed the BFI-10 scale and questionnaires on sexism and bystander attitudes towards sexual harassment. Path analysis was performed using AMOS version 23. The results demonstrated that female navy personnel with an agreeableness personality predicted bystander attitudes towards sexual harassment, and when sexism was reduced, it was more helpful to increase bystander attitudes towards sexual harassment. These results inform perspectives on female navy personnel. It is suggested that when promoting gender equality in the military in the future, people with an agreeableness personality can be selected to attend gender equality courses to improve bystander attitudes towards sexual harassment. This provides the Navy with strategies to reduce the probability of sexual harassment.
Keywords: sexism, agreeableness, female, bystander attitude
Procedia PDF Downloads 91
2523 Machine Learning Driven Analysis of Kepler Objects of Interest to Identify Exoplanets
Authors: Akshat Kumar, Vidushi
Abstract:
This paper identifies 27 KOIs, 26 of which are currently classified as candidates and one as a false positive, that have a high probability of being confirmed. For this purpose, 11 machine learning algorithms were implemented on the cumulative Kepler dataset sourced from the NASA Exoplanet Archive. The best-performing models were HistGradientBoosting and XGBoost, with a test accuracy of 93.5%, and the lowest-performing model was Gaussian NB, with a test accuracy of 54%; to test model performance, the F1 score, cross-validation score, and ROC curve were calculated. Based on the learned models, the significant characteristics of confirmed exoplanets were identified, with emphasis on the object’s transit and stellar properties; these characteristics were koi_count, koi_prad, koi_period, koi_dor, koi_ror, and koi_smass, which were later used to filter the potential KOIs. The paper also calculates the Earth Similarity Index, based on the planetary radius and equilibrium temperature, for each identified KOI to aid in their classification.
Keywords: Kepler objects of interest, exoplanets, space exploration, machine learning, earth similarity index, transit photometry
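An Earth Similarity Index restricted to radius and equilibrium temperature, as described above, can be sketched with the commonly cited ESI formulation; whether the paper uses these exact weight exponents (0.57 for radius, 5.58 for temperature) is an assumption.

```python
# Hedged sketch of a two-property Earth Similarity Index. The weight
# exponents follow the widely used ESI definition and are an assumption
# here, not confirmed values from the paper.
EARTH_RADIUS = 1.0   # Earth radii
EARTH_TEQ = 255.0    # Earth's equilibrium temperature, K

def esi(radius, t_eq, weights=(0.57, 5.58)):
    """Geometric combination of similarity terms, each in [0, 1]."""
    r_term = (1 - abs(radius - EARTH_RADIUS) / (radius + EARTH_RADIUS)) ** (weights[0] / 2)
    t_term = (1 - abs(t_eq - EARTH_TEQ) / (t_eq + EARTH_TEQ)) ** (weights[1] / 2)
    return r_term * t_term

print(round(esi(1.0, 255.0), 3))   # Earth itself -> 1.0
print(round(esi(11.2, 122.0), 3))  # a Jupiter-like object scores low
```

Each term is a normalized distance from the Earth value raised to half its weight, so identical-to-Earth inputs give exactly 1 and large departures in either property pull the index toward 0.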
Procedia PDF Downloads 75
2522 A Two Tailed Secretary Problem with Multiple Criteria
Authors: Alaka Padhye, S. P. Kane
Abstract:
The following study considers some variations of the secretary problem (SP). In a multiple criteria secretary problem (MCSP), the selection of a unit is based on two independent characteristics. The number of units that appear before an observer is known, say N, the best rank of a unit being N. A unit is selected if it is better with respect to either the first or the second or both characteristics. When the number of units is large, then due to constraints like time and cost, the observer might want to stop earlier instead of inspecting all the available units. Let the process terminate at the r2-th unit, where r1
2521 The Effect of Initial Sample Size and Increment in Simulation Samples on a Sequential Selection Approach
Authors: Mohammad H. Almomani
Abstract:
In this paper, we examine the effect of the initial sample size and the increment in simulation samples on the performance of a sequential approach used in selecting the top m designs when the number of alternative designs is very large. The sequential approach consists of two stages. In the first stage, ordinal optimization is used to select a subset that overlaps with the set of the actual best k% designs with high probability. Then, in the second stage, optimal computing budget allocation is used to select the top m designs from the selected subset. We apply the selection approach to a generic example under several parameter settings, with different choices of the initial sample size and the increment in simulation samples, to explore the impact on the performance of this approach. The results show that the choice of the initial sample size and the increment in simulation samples does affect the performance of the selection approach.
Keywords: large-scale problems, optimal computing budget allocation, ordinal optimization, simulation optimization
Procedia PDF Downloads 355
2520 3D Label-Free Bioimaging of Native Tissue with Selective Plane Illumination Optical Microscopy
Authors: Jing Zhang, Yvonne Reinwald, Nick Poulson, Alicia El Haj, Chung See, Mike Somekh, Melissa Mather
Abstract:
Biomedical imaging of native tissue using light offers the potential to obtain excellent structural and functional information in a non-invasive manner with good temporal resolution. Image contrast can be derived from intrinsic absorption, fluorescence, or scatter, or through the use of extrinsic contrast agents. A major challenge in applying optical microscopy to in vivo tissue imaging is the effect of light attenuation, which limits the light penetration depth and achievable imaging resolution. Recently, Selective Plane Illumination Microscopy (SPIM) has been used to map the 3D distribution of fluorophores dispersed in biological structures. In this approach, a focused sheet of light illuminates the sample from the side to excite fluorophores within the sample of interest. Images are formed based on the detection of fluorescence emission orthogonal to the illumination axis. By scanning the sample along the detection axis and acquiring a stack of images, 3D volumes can be obtained. The combination of rapid image acquisition with the low photon dose delivered to samples and the optical sectioning SPIM provides makes it an attractive approach for imaging biological samples in 3D. To date, all implementations of SPIM rely on the use of fluorescence reporters, be they endogenous or exogenous. This approach has the disadvantage that, in the case of exogenous probes, the specimens are altered from their native state, rendering them unsuitable for in vivo studies, and in general fluorescence emission is weak and transient. Here we present, for the first time to our knowledge, a label-free implementation of SPIM that has downstream applications in the clinical setting. The experimental setup used in this work incorporates both label-free and fluorescent illumination arms, in addition to a high-specification camera that can be partitioned for simultaneous imaging of both fluorescent emission and light scattered from intrinsic sources of optical contrast in the sample being studied.
This work first involved calibration of the imaging system and validation of the label-free method with well-characterised fluorescent microbeads embedded in agarose gel. 3D constructs of mammalian cells cultured in agarose gel with varying cell concentrations were then imaged. A time-course study to track cell proliferation in the 3D construct was also carried out, and finally a native tissue sample was imaged. For each sample, multiple images were obtained by scanning the sample along the axis of detection, and 3D maps were reconstructed. The results obtained validated label-free SPIM as a viable approach for imaging cells in a 3D gel construct and in native tissue. This technique has potential for use in a near-patient environment, where it can provide results quickly and be implemented in an easy-to-use manner to deliver more information, with improved spatial resolution and depth penetration, than current approaches.
Keywords: bioimaging, optics, selective plane illumination microscopy, tissue imaging
Procedia PDF Downloads 247
2519 Detection of Phoneme [S] Mispronunciation for Sigmatism Diagnosis in Adults
Authors: Michal Krecichwost, Zauzanna Miodonska, Pawel Badura
Abstract:
The diagnosis of sigmatism is mostly based on observation of the articulatory organs. It is, however, not always possible to precisely observe the vocal apparatus, in particular in the oral cavity of the patient. Speech processing can help objectify the therapy and simplify the verification of its progress. In the described study, a methodology for the classification of the incorrectly pronounced phoneme [s] is proposed. The recordings come from adults; they were registered with a speech recorder at a sampling rate of 44.1 kHz and a resolution of 16 bits. A database of pathological and normative speech has been collected for the study, including reference assessments provided by speech therapy experts. Ten adult subjects were asked to simulate a certain type of sigmatism under the supervision of a speech therapy expert. In the recordings, the analyzed phone [s] was surrounded by vowels, viz.: ASA, ESE, ISI, OSO, USU, YSY. Thirteen MFCC (mel-frequency cepstral coefficient) and RMS (root mean square) values are calculated within each frame of the analyzed phoneme. Additionally, 3 fricative formants along with their corresponding amplitudes are determined for the entire segment. In order to aggregate the information within the segment, the average value of each MFCC coefficient is calculated, while all features of other types are aggregated by means of their 75th percentile. The proposed method of feature aggregation reduces the size of the feature vector used in classification. A binary SVM (support vector machine) classifier is employed at the phoneme recognition stage; the first class consists of pathological phones, while the other of the normative ones. The proposed feature vector yields classification sensitivity and specificity above the 90% level for individual logatomes. The employment of fricative-formant-based information improves the MFCC-only classification results by an average of 5 percentage points.
The study shows that the employment of specific parameters for the selected phones improves the efficiency of pathology detection relative to traditional methods of speech signal parameterization.
Keywords: computer-aided pronunciation evaluation, sibilants, sigmatism diagnosis, speech processing
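The feature-aggregation step described above (per-frame MFCCs averaged over the segment, other features collapsed to their 75th percentile) can be sketched with the standard library. The frame values below are synthetic illustrations, not data from the study.

```python
# Hedged sketch of segment-level feature aggregation, assuming per-frame
# 13-dimensional MFCC vectors plus one scalar feature (e.g. RMS) per frame.
import statistics

def aggregate_segment(mfcc_frames, rms_frames):
    """Return (mean of each MFCC coefficient, 75th percentile of RMS)."""
    n_coeffs = len(mfcc_frames[0])
    mfcc_means = [statistics.mean(frame[i] for frame in mfcc_frames)
                  for i in range(n_coeffs)]
    rms_p75 = statistics.quantiles(rms_frames, n=4)[2]  # 75th percentile
    return mfcc_means, rms_p75

# two synthetic frames of 13 MFCCs each, plus four per-frame RMS values
mfcc = [[1.0] * 13, [3.0] * 13]
rms = [0.1, 0.2, 0.3, 0.4]
means, p75 = aggregate_segment(mfcc, rms)
print(means[0], round(p75, 3))
```

Note that `statistics.quantiles` defaults to the exclusive method, so the 75th percentile of four values interpolates rather than picking an element; the paper's exact percentile convention is not specified in the abstract.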
Procedia PDF Downloads 283
2518 Optimal Bayesian Chart for Controlling Expected Number of Defects in Production Processes
Abstract:
In this paper, we develop an optimal Bayesian chart to control the expected number of defects per inspection unit in production processes with long production runs. We formulate this control problem in the optimal stopping framework. The objective is to determine the optimal stopping rule minimizing the long-run expected average cost per unit time considering partial information obtained from the process sampling at regular epochs. We prove the optimality of the control limit policy, i.e., the process is stopped and the search for assignable causes is initiated when the posterior probability that the process is out of control exceeds a control limit. An algorithm in the semi-Markov decision process framework is developed to calculate the optimal control limit and the corresponding average cost. Numerical examples are presented to illustrate the developed optimal control chart and to compare it with the traditional u-chart.
Keywords: Bayesian u-chart, economic design, optimal stopping, semi-Markov decision process, statistical process control
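The control-limit policy described above can be illustrated with a one-line Bayes update per sampling epoch. This is a minimal sketch: the Poisson in- and out-of-control defect rates, the per-epoch shift probability, and the control limit below are illustrative assumptions, and the optimal limit in the paper comes from a semi-Markov decision process, not from a fixed value.

```python
# Hedged sketch: posterior probability that the process is out of control,
# updated from Poisson defect counts; all constants are illustrative.
import math

LAMBDA_IN, LAMBDA_OUT = 2.0, 5.0   # defects per unit, in/out of control
Q = 0.05                           # per-epoch probability of shifting out

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def update_posterior(pi, k):
    """One Bayes step: account for a possible shift, then condition on
    the observed defect count k."""
    prior = pi + (1 - pi) * Q
    num = prior * poisson_pmf(k, LAMBDA_OUT)
    den = num + (1 - prior) * poisson_pmf(k, LAMBDA_IN)
    return num / den

pi, limit = 0.0, 0.9
for k in [2, 3, 8, 9]:             # defect counts at successive epochs
    pi = update_posterior(pi, k)
print(pi > limit)                  # prints True: high counts push pi past the limit
```

Under the control-limit policy, crossing `limit` is the point at which sampling stops and the search for assignable causes begins.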
Procedia PDF Downloads 573
2517 Influence of Iron Ore Mineralogy on Cluster Formation inside the Shaft Furnace
Authors: M. Bahgat, H. A. Hanafy, S. Lakdawala
Abstract:
The clustering of pellets is observed frequently in shaft processes operating at higher temperatures. Clustering is a result of the growth of fibrous iron precipitates (iron whiskers) that become hooked to each other and finally crystallize during the initial stages of metallization. If pellet clustering is pronounced, it sometimes leads to blocking inside the furnace, and a forced shutdown takes place. This work further clarifies the relation between metallic iron whisker growth and iron ore mineralogy. Various pellet sizes (6-12.0 and +12.0 mm) from three different ores (A, B, and C) were completely and partially reduced at 985 °C with an H2/CO gas mixture using a thermogravimetric technique. It was found that reducibility increases with decreasing iron ore pellet size, and ore A has higher reducibility than ore B and ore C. Increasing the iron ore pellet size increases the probability of metallic iron whisker formation, and ore A has a higher tendency for metallic iron whisker formation than ore B and ore C. The reduction reactions for all iron ores A, B, and C are mainly controlled by a diffusion reaction mechanism.
Keywords: shaft furnace, cluster, metallic iron whisker, mineralogy, ferrous metallurgy
Procedia PDF Downloads 470
2516 Seismic Performance of a Framed Structure Retrofitted with Damped Cable Systems
Authors: Asad Naeem, Minsung Kim, Jinkoo Kim
Abstract:
In this work, the effectiveness of damped cable systems (DCS) in mitigating the earthquake-induced response of a framed structure is investigated. The seismic performance of DCS is investigated using fragility analysis and life cycle cost evaluation of an existing building retrofitted with DCS, and the results are compared with those of the structure retrofitted with viscous dampers. The comparison of the analysis results reveals that, due to the self-centering capability of the DCS, residual displacement becomes nearly zero in the structure retrofitted with the DCS. According to the fragility analysis, the structure retrofitted with the DCS has a smaller probability of reaching a limit state than the structure with viscous dampers. It is also observed that both the initial and life cycle costs of the DCS method required for the seismic retrofit are smaller than those of the structure retrofitted with viscous dampers. Acknowledgment: This research was supported by a grant (17CTAP-C132889-01) from the Technology Advancement Research Program (TARP) funded by the Ministry of Land, Infrastructure, and Transport of the Korean government.
Keywords: damped cable system, seismic retrofit, self-centering, fragility analysis
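Fragility analysis of the kind used above is commonly based on a lognormal fragility function giving the probability of reaching a limit state at a given intensity measure. A minimal sketch, with median capacity and dispersion values chosen purely for illustration (not taken from the paper):

```python
import math

def fragility(im, theta, beta):
    """Lognormal fragility curve: P(reach limit state | intensity measure im),
    with median capacity theta and log-standard deviation (dispersion) beta."""
    z = math.log(im / theta) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF of z
```

By construction, the probability is exactly 0.5 at the median capacity and increases monotonically with the intensity measure.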
Procedia PDF Downloads 453
2515 Wind Resource Classification and Feasibility of Distributed Generation for Rural Community Utilization in North Central Nigeria
Authors: O. D. Ohijeagbon, Oluseyi O. Ajayi, M. Ogbonnaya, Ahmeh Attabo
Abstract:
This study analyzed the electricity generation potential from wind at seven sites spread across seven states of the North-Central region of Nigeria. Twenty-one years (1987 to 2007) of wind speed data at a height of 10 m were obtained from the Nigeria Meteorological Department, Oshodi. The data were subjected to different statistical tests and also compared with the two-parameter Weibull probability density function. The outcome shows that the monthly average wind speeds ranged between 2.2 m/s in November for Bida and 10.1 m/s in December for Jos. The yearly average ranged between 2.1 m/s in 1987 for Bida and 11.8 m/s in 2002 for Jos. The power density for each site was determined to range between 29.66 W/m² for Bida and 864.96 W/m² for Jos. The two parameters (k and c) of the Weibull distribution were found to range between 2.3 in Lokoja and 6.5 in Jos for k, while c ranged between 2.9 m/s in Bida and 9.9 m/s in Jos. These outcomes indicate that wind speeds at Jos, Minna, Ilorin, Makurdi and Abuja are compatible with the cut-in speeds of modern wind turbines and hence may be economically feasible for wind-to-electricity generation at and above a height of 10 m. The study further assessed the potential and economic viability of standalone wind generation systems for off-grid rural communities located at each of the studied sites. A specific electric load profile was developed to suit hypothetical communities, each consisting of 200 homes, a school and a community health center. The design that would optimally meet the daily load demand with a loss of load probability (LOLP) of 0.01 was assessed, considering two stand-alone applications: wind and diesel. The diesel standalone system (DSS) was taken as the basis of comparison, since the experimental locations have no connection to a distribution network. The HOMER® optimization software was used to determine the combination of system components that yields the lowest life cycle cost.
Following the analysis for rural community utilization, a Distributed Generation (DG) analysis was carried out for each site, considering the possibility of generating wind power in the MW range in order to take advantage of Nigeria's tariff regime for embedded generation. The DG design incorporated each community of 200 homes, supplied free of charge from the excess electrical energy generated above the minimum requirement, with the surplus sold to a nearby distribution grid. Wind DG systems were found suitable and viable for producing environmentally friendly energy, in terms of life cycle cost and levelised value of producing energy, at Jos ($0.14/kWh), Minna ($0.12/kWh), Ilorin ($0.09/kWh), Makurdi ($0.09/kWh), and Abuja ($0.04/kWh) at a particular turbine hub height. These outputs reveal the value retrievable from the project after the breakeven point as a function of energy consumed. Based on the results, the study demonstrated that including renewable energy in the rural development plan will accelerate the upgrading of rural communities.
Keywords: wind speed, wind power, distributed generation, cost per kilowatt-hour, clean energy, North-Central Nigeria
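The two-parameter Weibull fit and the power densities quoted above are related in a standard way: the density f(v) = (k/c)(v/c)^(k-1) exp(-(v/c)^k) implies a mean power density of ½ρc³Γ(1+3/k). A small sketch, assuming the standard air density; it will not reproduce the site figures exactly, since those depend on the measured records:

```python
import math

RHO = 1.225  # standard air density, kg/m^3 (site-specific in practice)

def weibull_pdf(v, k, c):
    """Two-parameter Weibull probability density for wind speed v (m/s)."""
    return (k / c) * (v / c) ** (k - 1) * math.exp(-((v / c) ** k))

def power_density(k, c, rho=RHO):
    """Mean wind power density (W/m^2) implied by the Weibull parameters:
    0.5 * rho * c^3 * Gamma(1 + 3/k)."""
    return 0.5 * rho * c ** 3 * math.gamma(1.0 + 3.0 / k)
```

The density integrates to one over wind speed, and the power density grows with the cube of the scale parameter c, which is why the high-c Jos site dominates the ranking.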
Procedia PDF Downloads 512
2514 A Hybrid Fuzzy Clustering Approach for Fertile and Unfertile Analysis
Authors: Shima Soltanzadeh, Mohammad Hosain Fazel Zarandi, Mojtaba Barzegar Astanjin
Abstract:
Diagnosis of male infertility by laboratory tests is expensive and sometimes intolerable for patients. Filling out a questionnaire and then applying a classification method can be the first step in the decision-making process, so that laboratory tests are used only in cases with a high probability of infertility. In this paper, we evaluated the performance of four classification methods, including naive Bayesian, neural network, logistic regression and fuzzy c-means clustering used as a classifier, in the diagnosis of male infertility due to environmental factors. Since the data are unbalanced, ROC curves are the most suitable method for the comparison. We also selected the more important features using a filtering method and examined the impact of this feature reduction on the performance of each method; generally, most of the methods performed better after applying the filter. We showed that fuzzy c-means clustering used as a classifier performs well according to the ROC curves, and its performance is comparable to that of other classification methods like logistic regression.
Keywords: classification, fuzzy c-means, logistic regression, naive Bayesian, neural network, ROC curve
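Using fuzzy c-means as a classifier, as the abstract describes, amounts to clustering the feature vectors and assigning each case to its highest-membership cluster. A minimal 1-D, two-cluster sketch (the real study works on multi-feature questionnaire data, and the initialisation here is a simplifying assumption):

```python
def fcm2(xs, m=2.0, iters=50):
    """Minimal 1-D, two-cluster fuzzy c-means: returns the cluster centers and
    the membership row of each sample (each row sums to 1)."""
    centers = [min(xs), max(xs)]                    # deterministic initialisation
    u = []
    for _ in range(iters):
        u = []
        for x in xs:
            d = [abs(x - c) + 1e-12 for c in centers]   # avoid division by zero
            u.append([1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0)) for j in range(2))
                      for i in range(2)])
        centers = [sum(u[k][i] ** m * xs[k] for k in range(len(xs))) /
                   sum(u[k][i] ** m for k in range(len(xs))) for i in range(2)]
    return centers, u
```

A case would then be labelled according to the cluster with the larger membership in its row of u, which is what turns the clustering into a classifier.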
Procedia PDF Downloads 337
2513 A Topological Approach for Motion Track Discrimination
Authors: Tegan H. Emerson, Colin C. Olson, George Stantchev, Jason A. Edelberg, Michael Wilson
Abstract:
Detecting small targets at range is difficult because there is not enough spatial information present in an image sub-region containing the target to use correlation-based methods to differentiate it from dynamic confusers present in the scene. Moreover, this lack of spatial information also disqualifies the use of most state-of-the-art deep learning image-based classifiers. Here, we use characteristics of target tracks extracted from video sequences as data from which to derive distinguishing topological features that help robustly differentiate targets of interest from confusers. In particular, we calculate persistent homology from time-delayed embeddings of dynamic statistics calculated from motion tracks extracted from a wide field-of-view video stream. In short, we use topological methods to extract features related to target motion dynamics that are useful for classification and disambiguation and show that small targets can be detected at range with high probability.
Keywords: motion tracks, persistence images, time-delay embedding, topological data analysis
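The time-delayed embedding step mentioned above can be sketched as a sliding window over the stream of track statistics; persistent homology would then be computed on the resulting point cloud with a TDA library such as ripser (not shown here):

```python
def time_delay_embed(series, dim=3, tau=2):
    """Takens-style sliding-window embedding of a scalar statistic stream:
    each output point is (x[i], x[i+tau], ..., x[i+(dim-1)*tau])."""
    n = len(series) - (dim - 1) * tau
    return [tuple(series[i + j * tau] for j in range(dim)) for i in range(n)]
```

The embedding dimension and delay are tuning parameters; the values above are illustrative defaults, not the paper's choices.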
Procedia PDF Downloads 114
2512 A Survey of Domain Name System Tunneling Attacks: Detection and Prevention
Authors: Lawrence Williams
Abstract:
As the mechanism that converts domain names to internet protocol (IP) addresses, the Domain Name System (DNS) is an essential part of internet usage. It was not designed with security in mind and is subject to attack. DNS attacks have become more frequent and sophisticated, and detecting and preventing them is increasingly important for the modern network. DNS tunneling attacks are one type of attack, primarily used for distributed denial-of-service (DDoS) attacks and data exfiltration. We discuss different techniques to detect and prevent DNS tunneling attacks, covering the methods, models, experiments, and data behind each technique, assess their feasibility, and propose directions for future research.
Keywords: DNS, tunneling, exfiltration, botnet
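One widely used detection heuristic for DNS tunneling, of the kind such surveys cover, flags queries whose leftmost label is long and high-entropy, since encoded payloads look random. The length and entropy thresholds below are illustrative choices, not standard values:

```python
import math
from collections import Counter

def label_entropy(domain):
    """Shannon entropy (bits/char) of the leftmost DNS label."""
    label = domain.split(".")[0]
    counts = Counter(label)
    n = len(label)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_like_tunnel(domain, min_len=20, min_entropy=4.0):
    """Flag long, high-entropy first labels as possible tunnel traffic."""
    label = domain.split(".")[0]
    return len(label) >= min_len and label_entropy(domain) >= min_entropy
```

In practice such a per-query check is combined with traffic-volume and timing features, since short or low-entropy tunnels evade it.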
Procedia PDF Downloads 75
2511 Rapid Detection of the Etiology of Infection as Bacterial or Viral Using Infrared Spectroscopy of White Blood Cells
Authors: Uraib Sharaha, Guy Beck, Joseph Kapelushnik, Adam H. Agbaria, Itshak Lapidot, Shaul Mordechai, Ahmad Salman, Mahmoud Huleihel
Abstract:
Infectious diseases have placed a significant burden on public health and the economic stability of societies all over the world for centuries. Reliable detection of the causative agent of an infection is not possible based on clinical features alone, since many of these infections have similar symptoms, including fever, sneezing, inflammation, vomiting, diarrhea, and fatigue. Moreover, physicians usually encounter difficulties in distinguishing between viral and bacterial infections on the basis of symptoms. Therefore, there is an ongoing need for sensitive, specific, and rapid methods for identifying the etiology of an infection. This intricate issue has perplexed doctors and researchers, since it has serious repercussions. In this study, we evaluated the potential of mid-infrared spectroscopy for rapid and reliable identification of bacterial and viral infections based on simple peripheral blood samples. Fourier transform infrared (FTIR) spectroscopy is considered a successful diagnostic method in the biological and medical fields. Many studies have confirmed the great potential of combining FTIR spectroscopy with machine learning as a powerful diagnostic tool in medicine, since it is a very sensitive method that can detect and monitor molecular and biochemical changes in biological samples. We believe this method could play a major role in improving public health and in reducing the economic burden on the health sector resulting from the indiscriminate use of antibiotics. We collected peripheral blood samples from 364 young patients (younger than 18 years) diagnosed with a fever-producing illness, of whom 93 were controls, 126 had bacterial infections, and 145 had viral infections.
Our preliminary results showed that it is possible to determine the infectious agent with high success rates of 82% for sensitivity and 80% for specificity, based on the WBC data.
Keywords: infectious diseases, FTIR spectroscopy, viral infections, bacterial infections
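The reported 82% sensitivity and 80% specificity follow the usual definitions over the bacterial/viral predictions; a small sketch, treating the bacterial class as "positive" purely for illustration:

```python
def sensitivity_specificity(y_true, y_pred, positive="bacterial"):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)
```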
Procedia PDF Downloads 138
2510 Pawn or Potentates: Corporate Governance Structure in Indian Central Public Sector Enterprises
Authors: Ritika Jain, Rajnish Kumar
Abstract:
The Department of Public Enterprises had made the submission of Self Evaluation Reports, for the purpose of corporate governance, mandatory for all central government owned enterprises. Despite this, an alarming 40% of the enterprises did not submit them. This study examines the impact of external policy tools and internal firm-specific factors on the corporate governance of central public sector enterprises (CPSEs). We use a dataset of all manufacturing and non-financial services enterprises owned by the central government of India for the year 2010-11. Using probit, ordered logit and Heckman's sample selection models, the study finds that the probability and quality of corporate governance are positively influenced by the CPSE entering into a Memorandum of Understanding (MoU) with the central government of India, and hence enjoying more autonomy in day-to-day operations. Besides these, internal factors, including larger size and lower debt, contribute significantly to better corporate governance.
Keywords: corporate governance, central public sector enterprises (CPSEs), sample selection, Memorandum of Understanding (MoU), ordered logit, disinvestment
Procedia PDF Downloads 257
2509 A Short Dermatoscopy Training Increases Diagnostic Performance in Medical Students
Authors: Magdalena Chrabąszcz, Teresa Wolniewicz, Cezary Maciejewski, Joanna Czuwara
Abstract:
BACKGROUND: Dermoscopy is a clinical tool known to improve the early detection of melanoma and other malignancies of the skin. Over the past few years, melanoma has grown into a disease of socio-economic importance due to its increasing incidence and persistently high mortality rates. Early diagnosis remains the best way to reduce melanoma and non-melanoma skin cancer-related mortality and morbidity. Dermoscopy is a noninvasive technique that consists of viewing pigmented skin lesions through a hand-held lens. This simple procedure increases melanoma diagnostic accuracy by up to 35%. Dermoscopy is currently the standard for the clinical differential diagnosis of cutaneous melanoma and for qualifying lesions for excision biopsy. Like any clinical tool, training is required for effective use. The introduction of small and handy dermoscopes contributed significantly to establishing dermatoscopy as a useful first-line tool. Non-dermatologist physicians are well positioned for opportunistic melanoma detection; however, education in the skin cancer examination is limited during medical school and traditionally lecture-based. AIM: The aim of this randomized study was to determine whether adding dermoscopy to the standard fourth-year medical curriculum improves the ability of medical students to distinguish between benign and malignant lesions, and to assess acceptability of and satisfaction with the intervention. METHODS: We performed a prospective study in 2 cohorts of fourth-year medical students at the Medical University of Warsaw. Groups taking the dermatology course were randomly assigned to cohort A, with limited access to dermatoscopy through their teacher only (1 dermatoscope per 15 people), or cohort B, with full access to dermatoscopy during their clinical classes (1 dermatoscope per 4 people, constantly available, plus a 15-minute dermoscopy tutorial).
Students in both study arms took an image-based test of 10 lesions to assess their ability to differentiate benign from malignant lesions, and a post-intervention survey collecting minimal background information, attitudes about the skin cancer examination, and course satisfaction. RESULTS: Cohort B had higher scores than cohort A in the recognition of nonmelanocytic (P < 0.05) and melanocytic (P < 0.05) lesions. Medical students who could use the dermatoscope themselves also reported higher satisfaction after the dermatology course than the group with limited access to this diagnostic tool. Moreover, according to our results, they were more motivated to learn dermatoscopy and to use it in their future everyday clinical practice. LIMITATIONS: The number of participants was limited. Further study of the application in clinical practice is still needed. CONCLUSION: Although the use of the dermatoscope in dermatology as a specialty is widely accepted, sufficiently validated clinical tools for the examination of potentially malignant skin lesions are lacking in general practice. Introducing medical students to dermoscopy in the fourth-year curriculum of medical school may improve their ability to differentiate benign from malignant lesions. It can also encourage students to use dermatoscopy in their future practice, which can significantly improve the early recognition of malignant lesions and thus decrease melanoma mortality.
Keywords: dermatoscopy, early detection of melanoma, medical education, skin cancer
Procedia PDF Downloads 114
2508 Development of a Bead Based Fully Automated Multiplex Tool to Simultaneously Diagnose FIV, FeLV and FIP/FCoV
Authors: Andreas Latz, Daniela Heinz, Fatima Hashemi, Melek Baygül
Abstract:
Introduction: Feline leukemia virus (FeLV), feline immunodeficiency virus (FIV), and feline coronavirus (FCoV) cause serious infectious diseases affecting cats worldwide. Transmission of these viruses occurs primarily through close contact with infected cats (via saliva, nasal secretions, faeces, etc.). FeLV, FIV, and FCoV infections can occur in combination and are expressed in similar clinical symptoms. Diagnosis can therefore be challenging: symptoms are variable and often non-specific, and sick cats show very similar clinical signs: apathy, anorexia, fever, immunodeficiency syndrome, anemia, etc. Sufficient sample volume for diagnostic purposes can be difficult to collect from small companion animals. In addition, multiplex diagnosis of diseases can contribute to an easier, cheaper, and faster workflow in the lab as well as to better differential diagnosis. For these reasons, we wanted to develop a new diagnostic tool that uses less sample volume, fewer reagents, and fewer consumables than multiple singleplex ELISA assays. Methods: The Multiplier from Dynex Technologies (USA) was used as the platform to develop a multiplex diagnostic tool for the detection of antibodies against FIV and FCoV/FIP and antigens for FeLV. The Dynex® Multiplier® is a fully automated chemiluminescence immunoassay analyzer that significantly simplifies laboratory workflow. Its ease of use reduces pre-analytical steps by combining the power of efficiently multiplexing multiple assays with the simplicity of automated microplate processing. Plastic beads were coated with antigens for FIV and FCoV/FIP, as well as antibodies for FeLV. Feline blood samples are incubated with the beads, and readout of the results is performed via chemiluminescence. Results: Bead coating was optimized for each individual antigen or capture antibody and then combined in the multiplex diagnostic tool.
HRP-antibody conjugates for FIV and FCoV antibodies, as well as detection antibodies for the FeLV antigen, were adjusted and mixed. Three individual prototype batches of the assay were produced. For each disease, we analyzed 50 well-defined positive and negative samples. The results show excellent diagnostic performance for the simultaneous detection of antibodies or antigens against these feline diseases in a fully automated system, with 100% concordance with singleplex methods like ELISA or IFA. Intra- and inter-assay tests showed high precision, with CV values below 10% for each individual bead. Accelerated stability testing indicates a shelf life of at least 1 year. Conclusion: The new tool can be used for multiplex diagnostics of the most important feline infectious diseases. Only a very small sample volume is required, and full automation results in a very convenient and fast method for diagnosing animal diseases. With its large specimen capacity to process over 576 samples per 8-hour shift and provide up to 3,456 results, very high laboratory productivity and reagent savings can be achieved.
Keywords: multiplex, FIV, FeLV, FCoV, FIP
Procedia PDF Downloads 104
2507 Failure Analysis of the Gasoline Engines Injection System
Authors: Jozef Jurcik, Miroslav Gutten, Milan Sebok, Daniel Korenciak, Jerzy Roj
Abstract:
The paper presents research results on an electronic fuel injection system that can be used for the diagnostics of automotive systems. The construction and operation of a typical fuel injection system are described, and its electronic part is analyzed. A method for detecting injector malfunction, based on the analysis of differential current or voltage characteristics, is also proposed. Detecting the fault state requires a self-learning process driven by an appropriate self-learning algorithm.
Keywords: electronic fuel injector, diagnostics, measurement, testing device
Procedia PDF Downloads 552
2506 Laser-Ultrasonic Method for the Measurement of Residual Stresses in Metals
Authors: Alexander A. Karabutov, Natalia B. Podymova, Elena B. Cherepetskaya
Abstract:
A theoretical analysis is carried out to obtain the relation between the ultrasonic wave velocity and the value of residual stresses. The laser-ultrasonic method is developed to evaluate residual stresses and subsurface defects in metals. The method is based on laser thermooptical excitation of longitudinal ultrasonic waves and their detection by a broadband piezoelectric detector. A laser pulse with a duration of 8 ns (full width at half maximum) and an energy of 300 µJ is absorbed in a thin layer of the special generator, which is inclined relative to the object under study. The non-uniform heating of the generator causes the formation of a broadband, powerful pulse of longitudinal ultrasonic waves. It is shown that the temporal profile of this pulse is the convolution of the temporal envelope of the laser pulse and the profile of the in-depth distribution of the heat sources. The ultrasonic waves reach the surface of the object through a prism that serves as an acoustic duct. At the 'laser-ultrasonic transducer-object' interface, most of the longitudinal wave energy is converted into shear, subsurface longitudinal and Rayleigh waves. These spread within the subsurface layer of the studied object and are detected by the piezoelectric detector. The electrical signal corresponding to the detected acoustic signal is acquired by an analog-to-digital converter and then mathematically processed and visualized on a personal computer. The distance between the generator and the piezodetector, as well as the propagation times of acoustic waves in the acoustic ducts, are the characteristic parameters of the laser-ultrasonic transducer and are determined using calibration samples. The relative precision of the measurement of the longitudinal ultrasonic wave velocity is 0.05%, which corresponds to approximately ±3 m/s for steels of conventional quality.
This precision allows one to determine the mechanical stress in the steel samples with a minimal detection threshold of approximately 22.7 MPa. Results are presented for the measured dependence of the longitudinal ultrasonic wave velocity in the samples on the applied compression stress in the range of 20-100 MPa.
Keywords: laser-ultrasonic method, longitudinal ultrasonic waves, metals, residual stresses
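The link between the stated ±3 m/s velocity precision and the 22.7 MPa detection threshold is consistent with a linear acoustoelastic calibration, sigma = (v − v0)/k, whose slope k is back-computed here purely for illustration (the v0 in the example is likewise a placeholder, not a value from the paper):

```python
K_AE = 3.0 / 22.7   # acoustoelastic slope in (m/s)/MPa, back-computed for illustration

def stress_from_velocity(v, v0, k_ae=K_AE):
    """Linear acoustoelastic model: stress (MPa) from the measured longitudinal
    velocity v and the stress-free reference velocity v0 (both in m/s)."""
    return (v - v0) / k_ae

# the stated +/-3 m/s velocity precision then maps to the minimal detectable stress
MIN_STRESS = 3.0 / K_AE   # ~22.7 MPa
```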
Procedia PDF Downloads 325
2505 Asymptotic Confidence Intervals for the Difference of Coefficients of Variation in Gamma Distributions
Authors: Patarawan Sangnawakij, Sa-Aat Niwitpong
Abstract:
In this paper, we propose two new confidence intervals, CIw and CIs, for the difference of coefficients of variation in two independent gamma distributions. These confidence intervals use the closed-form method of variance estimation presented by Donner and Zou (2010), based on the concepts of the Wald and Score confidence intervals, respectively. A Monte Carlo simulation study is used to evaluate the performance (coverage probability and expected length) of these confidence intervals. The results indicate that the coverage probabilities of both new confidence intervals satisfy the nominal coverage and are close to the nominal level of 0.95 in various situations; in particular, the Wald-based interval is better when sample sizes are small. Moreover, the expected lengths of the proposed confidence intervals differ little when sample sizes are moderate to large. Therefore, in this study, the Wald-based confidence interval for the difference of coefficients of variation is preferable to the other.
Keywords: confidence interval, Score's interval, Wald's interval, coefficient of variation, gamma distribution, simulation study
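A generic Wald-type interval for the difference of two coefficients of variation can be sketched as follows. Note that the variance approximation used here is the common normal-theory form, standing in for the paper's gamma-specific closed-form estimates (Donner and Zou, 2010):

```python
import math
import statistics

def cv_and_var(xs):
    """Sample CV and a large-sample variance approximation,
    Var(cv) ~ cv^2 * (0.5 + cv^2) / n (normal-theory form, for illustration)."""
    n = len(xs)
    cv = statistics.stdev(xs) / statistics.mean(xs)
    return cv, cv ** 2 * (0.5 + cv ** 2) / n

def wald_ci_cv_diff(xs, ys, z=1.96):
    """Wald-type 95% CI for the difference of two coefficients of variation."""
    cv1, v1 = cv_and_var(xs)
    cv2, v2 = cv_and_var(ys)
    half = z * math.sqrt(v1 + v2)
    return cv1 - cv2 - half, cv1 - cv2 + half
```

Two samples differing only by a scale factor have equal CVs, so the interval should cover zero.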
Procedia PDF Downloads 427
2504 Detection of Triclosan in Water Based on Nanostructured Thin Films
Authors: G. Magalhães-Mota, C. Magro, S. Sério, E. Mateus, P. A. Ribeiro, A. B. Ribeiro, M. Raposo
Abstract:
Triclosan [5-chloro-2-(2,4-dichlorophenoxy)phenol], belonging to the class of Pharmaceuticals and Personal Care Products (PPCPs), is a broad-spectrum antimicrobial agent and bactericide. Because of its antimicrobial efficacy, it is widely used in personal health and skin care products, such as soaps, detergents, hand cleansers, cosmetics, toothpastes, etc. However, it is considered an endocrine disruptor, interfering, for instance, with thyroid hormone homeostasis and possibly with the reproductive system. Considering the widespread use of triclosan, it is expected that environmental and food safety problems regarding triclosan will increase dramatically. Triclosan has been found in river water samples in both North America and Europe and is likely widely distributed wherever triclosan-containing products are used. Although significant amounts are removed in sewage plants, considerable quantities remain in the sewage effluent, initiating widespread environmental contamination. Triclosan undergoes bioconversion to methyl-triclosan, which has been demonstrated to bioaccumulate in fish. In addition, triclosan has been found in human urine samples from persons with no known industrial exposure and in significant amounts in samples of mother's milk, demonstrating its presence in humans. The action of sunlight in river water is known to turn triclosan into dioxin derivatives and raises the possibility of pharmacological dangers not envisioned when the compound was originally utilized. The aim of this work is to detect low concentrations of triclosan in a complex aqueous matrix through the use of a sensor array system, following the electronic tongue concept based on impedance spectroscopy. To achieve this goal, we selected molecules for the sensor with a high affinity for triclosan and a sensitivity that ensures the detection of at least nanomolar concentrations.
Thin films of organic molecules and oxides were produced by the layer-by-layer (LbL) technique and by sputtering, respectively, onto glass solid supports already covered by gold interdigitated electrodes. By submerging the films in complex aqueous solutions with different concentrations of triclosan, resistance and capacitance values were obtained at different frequencies. The preliminary results showed that an array of interdigitated electrode sensors, coated or uncoated with different LbL and sputtered films, can be used to detect TCS traces in aqueous solutions over a wide concentration range, from 10⁻¹² to 10⁻⁶ M. The PCA method was applied to the measured data in order to differentiate the solutions with different concentrations of TCS. Moreover, it was also possible to plot the logarithm of resistance versus the logarithm of concentration and fit the data points with a decreasing straight line with a slope of 0.022 ± 0.006, which corresponds to the best sensitivity of our sensor. To find the sensor resolution near the smallest concentration used (Cs = 1 pM), note that the minimum value that can be measured with resolution is 0.006, so ΔlogC = 0.006/0.022 = 0.273 and therefore C − Cs ≈ 0.9 pM. This leads to a sensor resolution of 0.9 pM near the smallest concentration used, 1 pM. This attained detection limit is lower than the values obtained in the literature.
Keywords: triclosan, layer-by-layer, impedance spectroscopy, electronic tongue
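The slope and resolution arithmetic above can be reproduced directly: fit a line to (log C, log R), then convert the minimum resolvable log-resistance step into a concentration difference near a working point. The numbers below use the stated slope magnitude (0.022) and step (0.006); the synthetic calibration data are illustrative only.

```python
import math

def fit_loglog(concs, resistances):
    """Least-squares line through (log10 C, log10 R); returns (slope, intercept)."""
    pts = [(math.log10(c), math.log10(r)) for c, r in zip(concs, resistances)]
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    slope = (sum((x - mx) * (y - my) for x, y in pts) /
             sum((x - mx) ** 2 for x, _ in pts))
    return slope, my - slope * mx

def resolution_near(conc, slope, min_delta_log_r):
    """Smallest distinguishable concentration change near `conc`, given the
    minimum resolvable step in log10(resistance)."""
    d_log_c = min_delta_log_r / abs(slope)
    return conc * (10.0 ** d_log_c - 1.0)
```

With a slope magnitude of 0.022 and a 0.006 step, the resolution near 1 pM comes out just under 0.9 pM, matching the figure quoted in the abstract.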
Procedia PDF Downloads 252
2503 Possibilities of Postmortem CT for the Detection of Gas Accumulations in the Vessels of Dead Newborns with Congenital Sepsis
Authors: Uliana N. Tumanova, Viacheslav M. Lyapin, Vladimir G. Bychenko, Alexandr I. Shchegolev, Gennady T. Sukhikh
Abstract:
It is well known that the gas formed as a result of the postmortem decomposition of tissues can be detected as early as 24-48 hours after death. In addition, the conditions of keeping and storing the corpse (temperature and humidity of the environment) significantly determine the rate of occurrence and development of postmortem changes. The presence of sepsis is accompanied by faster postmortem decomposition and decay of the organs and tissues of the body. The presence of gas in the vessels and cavities can be fully revealed by postmortem CT. Radiologists must report the detection of intraorganic or intravascular gas at postmortem CT to forensic experts or pathologists before the autopsy. This gas cannot be detected during autopsy, but it can be very important for establishing a diagnosis. Our aim was to explore the potential of postmortem CT for the evaluation of gas accumulations in the vessels of newborns who died from congenital sepsis. We examined the bodies of 44 newborns (25 male and 19 female, aged from 6 hours to 27 days) 6-12 hours after death. The bodies were stored in a refrigerator at a temperature of +4°C in the supine position. The study group comprised 12 bodies of newborns that died from congenital sepsis; the control group consisted of 32 bodies of newborns that died without signs of sepsis. Postmortem CT examination was performed on a GEMINI TF TOF16 scanner before the autopsy. The localizations of gas accumulations in the vessels were determined on the CT tomograms. The diagnosis of sepsis was made on the basis of clinical and laboratory data and autopsy results. Gas in the vessels was detected in 33.3% of cases in the sepsis group and in 34.4% in the control group. In the sepsis group, gas was most often localized in the vessels of the heart and liver (50% each of the observations with gas detected in the vessels), and in the heart cavities, aorta and mesenteric vessels (25% each).
In the control group, gas was most often detected in the vessels of the liver (63.6%) and abdominal cavity (54.5%). In 45.5% of cases the gas localized in the cavities of the heart, and in 36.4% in its vessels. In the cerebral vessels and in the aorta, gas was detected in 27.3% and 9.1% of cases, respectively. Postmortem CT has high diagnostic capability for detecting free gas in vessels. Postmortem changes in newborns that died from sepsis do not affect intravascular gas production within 6-12 hours of death. Radiation methods should be used as a supplement to the autopsy, including as a kind of 'guide', indicating to the forensic medical expert the changes identified during CT studies, for a better definition of the pathological processes during the autopsy. Postmortem CT can be recommended as a first stage of the autopsy.
Keywords: congenital sepsis, gas, newborn, postmortem CT
Procedia PDF Downloads 146