Search results for: permittivity measurement techniques
7325 Public Behavior When Encountered with a Road Traffic Accident
Authors: H. N. S. Silva, S. N. Silva
Abstract:
Introduction: The latest WHO data, published in 2014, state that Sri Lanka records 2,773 deaths and over 14,000 injuries due to RTAs each year. Previous studies noted that policemen, three-wheel drivers and pedestrians were the first to respond to RTAs, but that victims' conditions were aggravated by the responders' unskilled attempts at wound management, moving and positioning of the victims, and, above all, their transportation. Objective: To observe the practices of the urban public in Sri Lanka when encountering RTAs. Methods: A qualitative study was conducted to analyze public behavior seen in video recordings of accident scenes, purposefully selected from social media, news websites, YouTube and Google. Results: All individuals who tried to help during the RTAs were middle-aged men, mainly pedestrians, motorcyclists and policemen present at that moment. The vast majority were keen to get the victims to hospital as soon as possible and actively participated in providing 'aid'. The main problem was that the first aid attempts were disorganized and uncoordinated. Although all individuals knew how to control external bleeding, none was aware of spinal injury prevention techniques or the management of limb injuries. Most of the transportation methods and transfer techniques used were inappropriate and likely to cause further injury. Conclusions: The public actively engages in providing aid despite inappropriate first aid practices. Keywords: encountered, pedestrians, road traffic accidents, urban public
Procedia PDF Downloads 290
7324 Sensor and Sensor System Design, Selection and Data Fusion Using Non-Deterministic Multi-Attribute Tradespace Exploration
Authors: Matthew Yeager, Christopher Willy, John Bischoff
Abstract:
The conceptualization and design phases of a system lifecycle consume a significant share of the lifecycle budget in the form of direct tasking and capital, as well as the implicit costs associated with unforeseeable design errors that are only realized during downstream phases. Ad hoc or iterative approaches to generating system requirements often fail to consider the full array of feasible system or product designs for a variety of reasons, including, but not limited to: initial conceptualization that incorporates a priori or legacy features; the inability to capture, communicate and accommodate stakeholder preferences; inadequate technical designs and/or feasibility studies; and locally-, but not globally-, optimized subsystems and components. These design pitfalls can beget unanticipated developmental or system alterations with added costs, risks and support activities, heightening the risk of suboptimal system performance, premature obsolescence or forgone development. Supported by rapid advances in learning algorithms and hardware technology, sensors and sensor systems have become commonplace in both commercial and industrial products. The evolving array of hardware components (i.e., sensors, CPUs, modular/auxiliary access, etc.) as well as recognition, data fusion and communication protocols have become increasingly complex and critical for design engineers during both conceptualization and implementation. This work seeks to develop and apply a non-deterministic approach to sensor system design within the multi-attribute tradespace exploration (MATE) paradigm, a technique that incorporates decision theory into model-based techniques in order to explore complex design environments and discover better system designs. Developed to address the inherent design constraints in complex aerospace systems, MATE techniques enable project engineers to examine all viable system designs, assess attribute utility and system performance, and better align with stakeholder requirements. Whereas previous work has focused on aerospace systems and been conducted in a deterministic fashion, this study addresses a wider array of system design elements by incorporating both traditional tradespace elements (e.g., hardware components) and popular multi-sensor data fusion models and techniques. Furthermore, adding statistical performance features to this model-based MATE approach will enable non-deterministic techniques for various commercial systems that range in application, complexity and system behavior, demonstrating significant utility within the realm of formal systems decision-making. Keywords: multi-attribute tradespace exploration, data fusion, sensors, systems engineering, system design
Procedia PDF Downloads 190
7323 Changes in the Properties of Composites Caused by Chemical Treatment of Hemp Hurds
Authors: N. Stevulova, I. Schwarzova
Abstract:
The possibility of using industrial hemp as a source of natural fibers for purpose of construction, mainly for the preparation of lightweight composites based on hemp hurds is described. In this article, an overview of measurement results of important technical parameters (compressive strength, density, thermal conductivity) of composites based on organic filler - chemically modified hemp hurds in three solutions (EDTA, NaOH and Ca(OH)2) and inorganic binder MgO-cement after 7, 28, 60, 90 and 180 days of hardening is given. The results of long-term water storage of 28 days hardened composites at room temperature were investigated. Changes in the properties of composites caused by chemical treatment of hemp material are discussed.Keywords: hemp hurds, chemical modification, lightweight composites, testing material properties
Procedia PDF Downloads 354
7322 New Variational Approach for Contrast Enhancement of Color Image
Authors: Wanhyun Cho, Seongchae Seo, Soonja Kang
Abstract:
In this work, we propose a variational technique for image contrast enhancement that utilizes global and local information around each pixel. The energy functional is defined as a weighted linear combination of three terms: a local contrast term, a global contrast term and a dispersion term. The first is a local contrast term that improves the contrast of an input image by increasing the grey-level differences between each pixel and its neighbors, thereby utilizing contextual information around each pixel. The second is a global contrast term that enhances the contrast of the image by minimizing the difference between its empirical distribution function and a target cumulative distribution function, so that the probability distribution of pixel values becomes symmetric about the median. The third is a dispersion term that controls the departure between new pixel values and the pixel values of the original image, preserving the original image characteristics as far as possible. Second, we derive the Euler-Lagrange equation for the true image that achieves the minimum of the proposed functional by using the fundamental lemma of the calculus of variations. We then describe a procedure in which this equation is solved by a gradient descent method, one of the dynamic approximation techniques. Finally, through various experiments, we demonstrate that the proposed method enhances the contrast of colour images better than existing techniques. Keywords: color image, contrast enhancement technique, variational approach, Euler-Lagrange equation, dynamic approximation method, EME measure
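To make the structure of such a functional concrete, one plausible form is sketched below in LaTeX. The exact weights, neighbourhood definition and target distribution used by the authors are not stated in the abstract, so the terms shown are assumptions that only illustrate the local-contrast / global-contrast / dispersion decomposition described above.

```latex
E(u) \;=\; -\,\lambda_{1}\underbrace{\int_{\Omega}\sum_{y\in\mathcal{N}(x)}\big(u(x)-u(y)\big)^{2}\,dx}_{\text{local contrast}}
\;+\;\lambda_{2}\underbrace{\int_{\Omega}\big(F_{u}(u(x))-F^{\ast}(u(x))\big)^{2}\,dx}_{\text{global contrast}}
\;+\;\lambda_{3}\underbrace{\int_{\Omega}\big(u(x)-u_{0}(x)\big)^{2}\,dx}_{\text{dispersion}}
```

Here u0 is the input image, F_u the empirical distribution function of u and F* a target distribution symmetric about the median; minimisation would follow the gradient-descent update u^(k+1) = u^(k) - τ ∂E/∂u obtained from the Euler-Lagrange equation.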
Procedia PDF Downloads 456
7321 Equivalent Circuit Modelling of Active Reflectarray Antenna
Authors: M. Y. Ismail, M. Inam
Abstract:
This paper presents equivalent circuit modeling of active planar reflectors which can be used for the detailed analysis and characterization of reflector performance in terms of lumped components. Equivalent circuit representation has been proposed for PIN diodes and liquid crystal based active planar reflectors designed within X-band frequency range. A very close agreement has been demonstrated between equivalent circuit results, 3D EM simulated results as well as measured scattering parameter results. In the case of measured results, a maximum discrepancy of 1.05dB was observed in the reflection loss performance, which can be attributed to the losses occurred during measurement process.Keywords: Equivalent circuit modelling, planar reflectors, reflectarray antenna, PIN diode, liquid crystal
Procedia PDF Downloads 289
7320 Design and Manufacture of Removable Nosecone Tips with Integrated Pitot Tubes for High Power Sounding Rocketry
Authors: Bjorn Kierulf, Arun Chundru
Abstract:
Over the past decade, collegiate rocketry teams have emerged across the country with various goals: space, liquid-fueled flight, etc. A critical piece of the development of knowledge within a club is the use of so-called "sounding rockets," whose goal is to take in-flight measurements that inform future rocket design. Common measurements include acceleration from inertial measurement units (IMU's), and altitude from barometers. With a properly tuned filter, these measurements can be used to find velocity, but are susceptible to noise, offset, and filter settings. Instead, velocity can be measured more directly and more instantaneously using a pitot tube, which operates by measuring the stagnation pressure. At supersonic speeds, an additional thermodynamic property is necessary to constrain the upstream state. One possibility is the stagnation temperature, measured by a thermocouple in the pitot tube. The routing of the pitot tube from the nosecone tip down to a pressure transducer is complicated by the nosecone's structure. Commercial-off-the-shelf (COTS) nosecones come with a removable metal tip (without a pitot tube). This provides the opportunity to make custom tips with integrated measurement systems without making the nosecone from scratch. The main design constraint is how the nosecone tip is held down onto the nosecone, using the tension in a threaded rod anchored to a bulkhead below. Because the threaded rod connects into a threaded hole in the center of the nosecone tip, the pitot tube follows a winding path, and the pressure fitting is off-center. Two designs will be presented in the paper, one with a curved pitot tube and a coaxial design that eliminates the need for the winding path by routing pressure through a structural tube. Additionally, three manufacturing methods will be presented for these designs: bound powder filament metal 3D printing, stereo-lithography (SLA) 3D printing, and traditional machining. These will employ three different materials, copper, steel, and proprietary resin. These manufacturing methods and materials are relatively low cost, thus accessible to student researchers. These designs and materials cover multiple use cases, based on how fast the sounding rocket is expected to travel and how important heating effects are - to measure and to avoid melting. This paper will include drawings showing key features and an overview of the design changes necessitated by the manufacture. It will also include a look at the successful use of these nosecone tips and the data they have gathered to date.Keywords: additive manufacturing, machining, pitot tube, sounding rocketry
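As a rough illustration of how a pitot measurement is reduced to velocity, the Python sketch below uses the textbook incompressible relation and, for higher subsonic speeds, the isentropic total-pressure ratio; it is not the team's flight-data pipeline, and the example numbers are assumptions.

```python
import math

def pitot_velocity_incompressible(p_total, p_static, rho):
    """Low-speed estimate: dynamic pressure q = p_total - p_static = 0.5*rho*v^2."""
    q = p_total - p_static
    return math.sqrt(2.0 * q / rho)

def pitot_mach_subsonic(p_total, p_static, gamma=1.4):
    """Subsonic compressible estimate from the isentropic total-pressure ratio."""
    ratio = p_total / p_static
    return math.sqrt(2.0 / (gamma - 1.0) * (ratio ** ((gamma - 1.0) / gamma) - 1.0))

# Example: 3 kPa of dynamic pressure at sea level (rho ~ 1.225 kg/m^3)
print(pitot_velocity_incompressible(101325 + 3000, 101325, 1.225))  # ~70 m/s
print(pitot_mach_subsonic(101325 + 3000, 101325))                   # ~Mach 0.2
```

At supersonic speeds the normal shock ahead of the probe invalidates both relations, which is why the design described above adds a stagnation-temperature thermocouple to constrain the upstream state.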
Procedia PDF Downloads 170
7319 Contextual Toxicity Detection with Data Augmentation
Authors: Julia Ive, Lucia Specia
Abstract:
Understanding and detecting toxicity is an important problem for supporting safer human interactions online. Our work focuses on the important problem of contextual toxicity detection, where automated classifiers are tasked with determining whether a short textual segment (usually a sentence) is toxic within its conversational context. We use "toxicity" as an umbrella term to denote a number of variants commonly named in the literature, including hate, abuse and offence, among others. Detecting toxicity in context is a non-trivial problem and has been addressed by very few previous studies. These previous studies analysed the influence of conversational context on human perception of toxicity in controlled experiments and concluded that humans rarely change their judgements in the presence of context. They also evaluated contextual detection models based on state-of-the-art Deep Learning and Natural Language Processing (NLP) techniques. Counterintuitively, they reached the general conclusion that computational models tend to suffer performance degradation in the presence of context. We challenge these empirical observations by devising better contextual predictive models that also rely on NLP data augmentation techniques to create larger and better data. In our study, we start by further analysing the human perception of toxicity in conversational data (i.e., tweets), in the absence versus presence of context, in this case, previous tweets in the same conversational thread. We observed that the conclusions of previous work on human perception are mainly due to data issues: the contextual data available do not provide sufficient evidence that context is indeed important (even for humans). The data problem is common in current toxicity datasets: cases labelled as toxic are either obviously toxic (i.e., overt toxicity with swear words, racist terms, etc.), so that context is not needed for a decision, or are ambiguous, vague or unclear even in the presence of context; in addition, the data contain labelling inconsistencies. To address this problem, we propose to automatically generate contextual samples where toxicity is not obvious (i.e., covert cases) without context, or where different contexts can lead to different toxicity judgements for the same tweet. We generate toxic and non-toxic utterances conditioned on the context or on target tweets using a range of techniques for controlled text generation (e.g., Generative Adversarial Networks and steering techniques). Regarding the contextual detection models, we posit that their poor performance is due to limitations of both the data they are trained on (the same problems stated above) and the architectures they use, which are not able to leverage context in effective ways. To improve on this, we propose text classification architectures that take the hierarchy of conversational utterances into account. In experiments benchmarking our models against previous ones on existing and automatically generated data, we show that both data and architectural choices are very important. Our model achieves substantial performance improvements compared to baselines that are non-contextual, or contextual but agnostic of the conversation structure. Keywords: contextual toxicity detection, data augmentation, hierarchical text classification models, natural language processing
Procedia PDF Downloads 175
7318 Road Traffic Accidents Analysis in Mexico City through Crowdsourcing Data and Data Mining Techniques
Authors: Gabriela V. Angeles Perez, Jose Castillejos Lopez, Araceli L. Reyes Cabello, Emilio Bravo Grajales, Adriana Perez Espinosa, Jose L. Quiroz Fabian
Abstract:
Road traffic accidents are among the principal causes of traffic congestion, causing human losses, damage to health and the environment, economic losses and material damage. Traditional studies of road traffic accidents in urban zones represent a very large investment of time and money, and their results are quickly outdated. Nowadays, however, in many countries crowdsourced GPS-based traffic and navigation apps have emerged as an important, low-cost source of information for studying road traffic accidents and the urban congestion they cause. In this article we identified the zones, roads and specific times in Mexico City (CDMX) in which the largest number of road traffic accidents were concentrated during 2016. We built a database compiling information obtained from the social navigation network Waze. The methodology employed was Knowledge Discovery in Databases (KDD) for the discovery of patterns in the accident reports, together with data mining techniques implemented in Weka. The selected algorithms were Expectation Maximization (EM), to obtain the ideal number of clusters for the data, and k-means as the grouping method. Finally, the results were visualized with the Geographic Information System QGIS. Keywords: data mining, k-means, road traffic accidents, Waze, Weka
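The KDD workflow above (EM to choose the cluster count, k-means to group the reports) was carried out in Weka; the sketch below is a rough Python analogue using scikit-learn, where the cluster count is chosen by the best BIC of a Gaussian mixture fitted by EM. The accident-report matrix here is a random placeholder, not the Waze data.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.cluster import KMeans

# Hypothetical accident reports: [longitude, latitude, hour_of_day], rescaled to [0, 1]
reports = np.random.rand(500, 3)  # placeholder for the Waze-derived table

# EM step: pick the number of clusters with the lowest BIC
bics = {k: GaussianMixture(n_components=k, random_state=0).fit(reports).bic(reports)
        for k in range(2, 11)}
k_best = min(bics, key=bics.get)

# Grouping step: k-means with the EM-suggested cluster count
labels = KMeans(n_clusters=k_best, n_init=10, random_state=0).fit_predict(reports)
print(k_best, np.bincount(labels))
```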
Procedia PDF Downloads 420
7317 Dietary Anion-Cation Balance of Grass and Net Acid-Base Excretion in Urine of Suckler Cows
Authors: H. Scholz, P. Kuehne, G. Heckenberger
Abstract:
The Dietary Anion-Cation Balance (DCAB) of grass in grazing systems under German conditions tends to decrease from May until September, and values lower than 100 meq per kg dry matter are often measured. A lower DCAB in a grass feeding system can change the metabolic status of suckler cows and often results in an acidotic metabolism. Measurement of acid-base excretion in dairy cows has proved to be a suitable method to evaluate acid-base status. The hypothesis was that metabolic imbalances could be identified by urine measurement in suckler cows. The farm study was conducted during the grazing seasons of 2017 and 2018 and involved 7 suckler cow farms in Germany. Suckler cows were grazing during the whole investigation period and had no access to other feed components. Cows had free access to water, a salt block and loose minerals. The dry matter of the grass was determined at 60 °C, and samples were then analysed for energy and nutrient content and for the DCAB. Urine was collected in 50 ml glasses and analysed in the laboratory for net acid-base excretion (NSBA) and the concentrations of creatinine and urea. Statistical analysis was performed with ANOVA with fixed effects of farm (1-7), month (May until September) and number of lactations (1, 2, and ≥ 3) using SPSS version 25.0 for Windows. An alpha of 0.05 was used for all statistical tests. During the grazing periods of 2017 and 2018, an average DCAB of 167 meq per kg DM was observed in the grass, with a very wide range from -42 meq/kg to +439 meq/kg. Reference values for DCAB are described as between 150 meq and 400 meq per kg DM. A high chlorine content combined with a reduced potassium level led to the reduction in DCAB at the end of the grazing period. Between the DCAB of the grass and the NSBA in the urine of suckler cows, a correlation of r = 0.478 (p ≤ 0.001) according to Pearson, or r = 0.601 (p ≤ 0.001) according to Spearman, was observed. For the monitoring of urine values of grazing suckler cows, the wide spread of the values poses a challenge for interpretation, especially since the DCAB is unknown. The influence of feed components such as chlorine, sulfur, potassium and sodium (the ions entering the DCAB) and of dry matter intake during the grazing period of suckler cows should be taken into account in further research. The results obtained show that a decrease in the DCAB is related to a decrease in the NSBA in the urine of suckler cows. Monitoring of metabolic disturbances should include analysis of urine, blood, milk and ruminal fluid. Keywords: dietary anion-cation balance, DCAB, net acid-base excretion, NSBA, suckler cow, grazing period
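For readers unfamiliar with DCAB, the sketch below shows the commonly used conversion from elemental concentrations (g/kg DM) to milliequivalents and the resulting balance; the exact DCAB variant used in the study is not stated in the abstract, and the grass values in the example are purely illustrative.

```python
def dcab_meq_per_kg_dm(na_g, k_g, cl_g, s_g):
    """DCAB = (Na+ + K+) - (Cl- + S--), with each element converted from
    g/kg DM to meq/kg DM using its equivalent weight (Na 23, K 39.1, Cl 35.5, S 16)."""
    meq = {"Na": na_g / 0.023, "K": k_g / 0.0391, "Cl": cl_g / 0.0355, "S": s_g / 0.016}
    return (meq["Na"] + meq["K"]) - (meq["Cl"] + meq["S"])

# Illustrative grass composition (g/kg DM), not values taken from the study
print(round(dcab_meq_per_kg_dm(na_g=2.0, k_g=25.0, cl_g=8.0, s_g=2.5)))  # ~345 meq/kg DM
```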
Procedia PDF Downloads 153
7316 A Review: Detection and Classification Defects on Banana and Apples by Computer Vision
Authors: Zahow Muoftah
Abstract:
Traditional manual visual grading of fruits has been one of the agricultural industry's major challenges due to its laborious nature as well as inconsistency in the inspection and classification process. The main requirements for computer vision and visual processing are effective techniques for identifying defects and estimating defect areas. Automated defect detection using computer vision and machine learning has emerged as a promising area of research with a high and direct impact on the visual inspection domain. Grading, sorting and disease detection are important factors in determining the quality of fruits after harvest. Many studies have used computer vision to evaluate the quality level of fruits during post-harvest, and many have been conducted to identify the diseases and pests that affect the fruits of agricultural crops. However, most previous studies concentrated solely on the diagnosis of a single lesion or disease. This study provides a comprehensive review of the identification of pests and diseases of apple and banana fruits through computer-vision-based defect detection and classification, and the current article therefore includes research from these domains as well. Finally, various pattern recognition techniques for detecting apple and banana defects are discussed. Keywords: computer vision, banana, apple, detection, classification
Procedia PDF Downloads 110
7315 Innovative Housing Construction Technologies in Slum Upgrading
Authors: Edmund M. Muthigani
Abstract:
Innovation in the construction industry has been characterized by new products and processes, especially in slum upgrading. The need for low-cost housing has motivated stakeholders to think outside the box in coming up with solutions. This paper explored innovative construction technologies that have been used in slum upgrading. The main objectives of the paper were to examine innovations in the housing construction sector and to show how incremental derived demand for decent housing has led to the adoption of innovative technologies and materials. A systematic literature review was used to review studies on innovative construction technologies in slum upgrading. The review revealed a slow process of innovation in the construction industry due to risk aversion and hesitance to adopt by firms and individuals. Low profit margins in low-cost housing and a lack of sufficient political support remain the major hurdles to the adoption of innovative techniques that could actualize the right to decent housing. Conventional construction materials have remained unaffordable to many people, and this has denied them decent housing. This has necessitated the exploration of innovative materials to realize low-cost housing. Stabilized soil blocks and sisal-cement roofing blocks are some of the innovative construction materials that have been utilized in slum upgrading. These innovative materials have not only lowered the cost of producing building elements but have also eased transport costs, as the raw materials to produce them are readily available in or near the slum sites. Despite their shortcomings in durability and compressive strength, they have proved worthwhile in slum upgrading. The production of innovative construction materials and the use of innovative techniques in slum upgrading have also provided employment to the locals. Keywords: construction, housing, innovation, slum, technology
Procedia PDF Downloads 216
7314 The Impact of Undisturbed Flow Speed on the Correlation of Aerodynamic Coefficients as a Function of the Angle of Attack for the Gyroplane Body
Authors: Zbigniew Czyz, Krzysztof Skiba, Miroslaw Wendeker
Abstract:
This paper discusses the results of an aerodynamic investigation of the Tajfun gyroplane body designed by the Polish company Aviation Artur Trendak. This gyroplane has been studied as a 1:8 scale model. Scaling objects for aerodynamic investigation is an inherent procedure in any kind of designing. When scaling, the criteria of similarity need to be satisfied; the basic criteria are geometric, kinematic and dynamic similarity. Although the results of aerodynamic research are often reduced to aerodynamic coefficients, one should pay attention to how the values of these coefficients behave if certain criteria are to be satisfied. To satisfy the dynamic criterion, for example, the Reynolds number should be considered: the ratio of inertial to viscous forces. Since the flow speed multiplied by the characteristic dimension appears in the numerator (with a constant kinematic viscosity coefficient), the flow speed in wind tunnel research should be increased by the same factor by which the object is scaled down. The aerodynamic coefficients specified in this research depend on the real forces that act on the object, its characteristic dimension, the flow speed and variations in density. Rapid prototyping with a 3D printer was applied to create the research object. The research was performed in the T-1 low-speed wind tunnel (the diameter of its measurement volume is 1.5 m) with a six-component internal aerodynamic balance, WDP1, at the Institute of Aviation in Warsaw. The T-1 is a low-speed, continuous-operation wind tunnel with an open test section. The research covered a number of selected speeds of undisturbed flow, i.e. V = 20, 30 and 40 m/s, corresponding to Reynolds numbers (referred to 1 m) of Re = 1.31·10⁶, 1.96·10⁶ and 2.62·10⁶, for angles of attack ranging over -15° ≤ α ≤ 20°. Our research yielded the basic aerodynamic characteristics and revealed the impact of undisturbed flow speed on the correlation of the aerodynamic coefficients as a function of the angle of attack of the gyroplane body. When the speed of undisturbed flow in the wind tunnel changes, the aerodynamic coefficients are significantly affected. Between 20 m/s and 30 m/s, the drag coefficient, Cx, changes by 2.4% up to 9.9%, whereas the lift coefficient, Cz, changes by -25.5% up to 15.7% if the angle of attack of 0° is excluded, or by -25.5% up to 236.9% if it is included. Within the same speed range, the pitching moment coefficient, Cmy, changes by -21.1% up to 7.3% if the angles of attack of -15° and -10° are excluded, or by -142.8% up to 618.4% if they are included. These discrepancies in the aerodynamic force coefficients definitely need to be considered while designing the aircraft; for example, if the load on certain aircraft surfaces is calculated, additional correction factors need to be applied. This study allows us to estimate the discrepancies in the aerodynamic forces when scaling the aircraft. This work has been financed by the Polish Ministry of Science and Higher Education. Keywords: aerodynamics, criteria of similarity, gyroplane, research tunnel
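The Reynolds numbers quoted above can be reproduced from Re = V·L/ν with the 1 m reference length; the short check below assumes a kinematic viscosity of air of about 1.53·10⁻⁵ m²/s, which is an assumption about ambient conditions rather than a value given by the authors.

```python
nu_air = 1.53e-5  # kinematic viscosity of air, m^2/s (assumed ambient conditions)
L_ref = 1.0       # reference length quoted in the abstract, m

for V in (20, 30, 40):
    Re = V * L_ref / nu_air
    print(f"V = {V} m/s -> Re = {Re:.2e}")
# ~1.31e6, 1.96e6 and 2.61e6, consistent with the values quoted above
```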
Procedia PDF Downloads 397
7313 Making of Alloy Steel by Direct Alloying with Mineral Oxides during Electro-Slag Remelting
Authors: Vishwas Goel, Kapil Surve, Somnath Basu
Abstract:
In-situ alloying in steel during the electro-slag remelting (ESR) process has already been achieved by the addition of necessary ferroalloys into the electro-slag remelting mold. However, the use of commercially available ferroalloys during ESR processing is often found to be financially less favorable, in comparison with the conventional alloying techniques. However, a process of alloying steel with elements like chromium and manganese using the electro-slag remelting route is under development without any ferrochrome addition. The process utilizes in-situ reduction of refined mineral chromite (Cr₂O₃) and resultant enrichment of chromium in the steel ingot produced. It was established in course of this work that this process can become more advantageous over conventional alloying techniques, both economically and environmentally, for applications which inherently demand the use of the electro-slag remelting process, such as manufacturing of superalloys. A key advantage is the lower overall CO₂ footprint of this process relative to the conventional route of production, storage, and the addition of ferrochrome. In addition to experimentally validating the feasibility of the envisaged reactions, a mathematical model to simulate the reduction of chromium (III) oxide and transfer to chromium to the molten steel droplets was also developed as part of the current work. The developed model helps to correlate the amount of chromite input and the magnitude of chromium alloying that can be achieved through this process. Experiments are in progress to validate the predictions made by this model and to fine-tune its parameters.Keywords: alloying element, chromite, electro-slag remelting, ferrochrome
Procedia PDF Downloads 225
7312 Optimum Design of Steel Space Frames by Hybrid Teaching-Learning Based Optimization and Harmony Search Algorithms
Authors: Alper Akin, Ibrahim Aydogdu
Abstract:
This study presents a hybrid metaheuristic algorithm to obtain optimum designs for steel space buildings. The optimum design problem of three-dimensional steel frames is mathematically formulated according to provisions of LRFD-AISC (Load and Resistance factor design of American Institute of Steel Construction). Design constraints such as the strength requirements of structural members, the displacement limitations, the inter-story drift and the other structural constraints are derived from LRFD-AISC specification. In this study, a hybrid algorithm by using teaching-learning based optimization (TLBO) and harmony search (HS) algorithms is employed to solve the stated optimum design problem. These algorithms are two of the recent additions to metaheuristic techniques of numerical optimization and have been an efficient tool for solving discrete programming problems. Using these two algorithms in collaboration creates a more powerful tool and mitigates each other’s weaknesses. To demonstrate the powerful performance of presented hybrid algorithm, the optimum design of a large scale steel building is presented and the results are compared to the previously obtained results available in the literature.Keywords: optimum structural design, hybrid techniques, teaching-learning based optimization, harmony search algorithm, minimum weight, steel space frame
Procedia PDF Downloads 548
7311 Experimental Study of Different Types of Concrete in Uniaxial Compression Test
Authors: Khashayar Jafari, Mostafa Jafarian Abyaneh, Vahab Toufigh
Abstract:
Polymer concrete (PC) is a distinct concrete with superior characteristics in comparison to ordinary cement concrete. It has become well-known for its applications in thin overlays, floors and precast components. In this investigation, the mechanical properties of PC with different epoxy resin contents, ordinary cement concrete (OCC) and lightweight concrete (LC) have been studied under uniaxial compression test. The study involves five types of concrete, with each type being tested four times. Their complete elastic-plastic behavior was compared with each other through the measurement of volumetric strain during the tests. According to the results, PC showed higher strength, ductility and energy absorption with respect to OCC and LC.Keywords: polymer concrete, ordinary cement concrete, lightweight concrete, uniaxial compression test, volumetric strain
Procedia PDF Downloads 398
7310 Power Iteration Clustering Based on Deflation Technique on Large Scale Graphs
Authors: Taysir Soliman
Abstract:
One of the currently popular clustering techniques is Spectral Clustering (SC), because of its advantages over conventional approaches such as hierarchical clustering, k-means and other techniques. However, one of the disadvantages of SC is that it is time-consuming, because it requires computing eigenvectors. To overcome this disadvantage, a number of approaches have been proposed, such as the Power Iteration Clustering (PIC) technique, a variant of SC. Some of PIC's advantages are: 1) scalability and efficiency, 2) finding one pseudo-eigenvector instead of computing the full set of eigenvectors, and 3) forming a linear combination of the eigenvectors in linear time. However, its main disadvantage is an inter-class collision problem, because it uses only one pseudo-eigenvector, which is not enough. Previous researchers developed Deflation-based Power Iteration Clustering (DPIC) to overcome PIC's inter-class collision problem while retaining PIC's efficiency. In this paper, we developed Parallel DPIC (PDPIC) to improve the time and memory complexity; it runs on the Apache Spark framework using sparse matrices. To test the performance of PDPIC, we compared it to the SC, ESCG and ESCALG algorithms on four small and nine large graph benchmark datasets, where PDPIC achieved higher accuracy and lower running time than the other algorithms. Keywords: spectral clustering, power iteration clustering, deflation-based power iteration clustering, Apache Spark, large graph
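For orientation, the sketch below implements the basic single-vector PIC iteration (row-normalised affinity matrix, L1-normalised power iteration, k-means on the resulting pseudo-eigenvector); the deflation step of DPIC and the Spark parallelisation of PDPIC are not shown, and the toy affinity matrix is an assumption.

```python
import numpy as np
from sklearn.cluster import KMeans

def power_iteration_clustering(A, k, n_iter=100, tol=1e-6):
    """Basic PIC: run power iteration on the row-normalized affinity matrix
    W = D^-1 A and cluster the resulting one-dimensional pseudo-eigenvector."""
    W = A / A.sum(axis=1, keepdims=True)      # row-normalized affinity
    v = np.random.rand(A.shape[0])
    v /= np.abs(v).sum()
    prev_delta = None
    for _ in range(n_iter):
        v_new = W @ v
        v_new /= np.abs(v_new).sum()          # L1 normalization each step
        delta = np.abs(v_new - v).max()
        v = v_new
        if prev_delta is not None and abs(prev_delta - delta) < tol:
            break                             # rate of change has stabilized
        prev_delta = delta
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(v.reshape(-1, 1))

# Toy affinity matrix with two obvious groups
A = np.array([[1., .9, .1, .1], [.9, 1., .1, .1], [.1, .1, 1., .9], [.1, .1, .9, 1.]])
print(power_iteration_clustering(A, k=2))
```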
Procedia PDF Downloads 194
7309 Rhythm-Reading Success Using Conversational Solfege
Authors: Kelly Jo Hollingsworth
Abstract:
Conversational Solfege, a research-based, 12-step music literacy instructional method using the sound-before-sight approach, was used to teach rhythm-reading to 128 second-grade students at a public school in the southeastern United States. For each step, multiple scripted techniques are supplied to teach each skill. Unit one, which covers quarter note and barred eighth note rhythms, was the focus of this study. During regular weekly music instruction, students completed method steps one through five, which include aural discrimination, decoding familiar and unfamiliar rhythm patterns, and improvising rhythmic phrases using quarter notes and barred eighth notes. Intact classes were randomly assigned to two treatment groups for teaching steps six through eight: the visual presentation and identification of quarter notes and barred eighth notes, visually presenting and decoding familiar patterns, and visually presenting and decoding unfamiliar patterns using said notation. For three weeks, students practiced steps six through eight during regular weekly music class. One group spent five minutes of class time on steps six through eight technique work, while the other group spent ten minutes of class time practicing the same techniques. A pretest and posttest were administered, and ANOVA results reveal that both the five-minute (p < .001) and ten-minute (p < .001) groups reached statistical significance, suggesting Conversational Solfege is an efficient, effective approach to teach rhythm-reading to second-grade students. After two weeks of no instruction, students were retested to measure retention. Using a repeated-measures ANOVA, both groups reached statistical significance (p < .001) on the second posttest, suggesting both the five-minute and ten-minute groups retained rhythm-reading skill after two weeks of no instruction. Statistical significance was not reached between groups (p = .252), suggesting five minutes is equally as effective as ten minutes of rhythm-reading practice using Conversational Solfege techniques. Future research includes replicating the study with other grades and units in the text. Keywords: conversational solfege, length of instructional time, rhythm-reading, rhythm instruction
Procedia PDF Downloads 162
7308 Iterative Reconstruction Techniques as a Dose Reduction Tool in Pediatric Computed Tomography Imaging: A Phantom Study
Authors: Ajit Brindhaban
Abstract:
Background and Purpose: Computed Tomography (CT) scans have become the largest source of radiation in radiological imaging. The purpose of this study was to compare the quality of pediatric CT images reconstructed using Filtered Back Projection (FBP) with images reconstructed using different strengths of the Iterative Reconstruction (IR) technique, and to perform a feasibility study to assess the use of IR techniques as a dose reduction tool. Materials and Methods: An anthropomorphic phantom representing a 5-year-old child was scanned, in two stages, using a Siemens Somatom CT unit. In stage one, scans of the head, chest and abdomen were performed using standard protocols recommended by the scanner manufacturer. Images were reconstructed using FBP and 5 different strengths of IR. Contrast-to-Noise Ratios (CNR) were calculated from the average CT number and its standard deviation measured in regions of interest created in the lung, bone and soft tissue regions of the phantom. A paired t-test and one-way ANOVA were used to compare the CNR of FBP images with that of IR images, at the p = 0.05 level. The lowest IR strength that produced the highest CNR was identified. In the second stage, scans of the head were performed with mA(s) values decreased in proportion to the increase in CNR relative to the standard FBP protocol. CNR values were compared in this stage using a paired t-test at the p = 0.05 level. Results: Images reconstructed using the IR technique had higher CNR values (p < 0.01) in all regions compared to the FBP images, at all strengths of IR. The CNR increased with increasing IR strength up to strength 3 in the head and chest images; increases beyond this strength were insignificant. In abdomen images, CNR continued to increase up to strength 5. The results also indicated that IR techniques improve CNR by up to a factor of 1.5. Based on the CNR values of IR images at strength 3 and the CNR values of FBP images, a reduction in mA(s) of about 20% was identified. The head images acquired at 20% reduced mA(s) and reconstructed using IR at strength 3 had a similar CNR to FBP images at standard mA(s). In the head scans of the phantom used in this study, it was demonstrated that a similar CNR can be achieved even when the mA(s) is reduced by about 20%, provided the IR technique with strength 3 is used for reconstruction. Conclusions: The IR technique produced better image quality at all strengths of IR in comparison to FBP. The IR technique can provide approximately 20% dose reduction in pediatric head CT while maintaining the same image quality as the FBP technique. Keywords: filtered back projection, image quality, iterative reconstruction, pediatric computed tomography imaging
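One common way to compute a contrast-to-noise ratio from ROI statistics is sketched below; the exact CNR definition and ROI placement used in the study are not given in the abstract, so both the formula variant and the sample values are assumptions.

```python
import numpy as np

def cnr(roi, background):
    """Contrast-to-noise ratio from mean CT numbers and background noise
    (one common definition: |mean_roi - mean_bg| / sd_bg)."""
    return abs(np.mean(roi) - np.mean(background)) / np.std(background)

# Hypothetical Hounsfield-unit samples from two regions of interest
lung = np.random.normal(-750, 25, 500)
soft_tissue = np.random.normal(40, 20, 500)
print(round(cnr(lung, soft_tissue), 1))
```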
Procedia PDF Downloads 152
7307 Dynamic Characterization of Shallow Aquifer Groundwater: A Lab-Scale Approach
Authors: Anthony Credoz, Nathalie Nief, Remy Hedacq, Salvador Jordana, Laurent Cazes
Abstract:
Groundwater monitoring is classically performed in a network of piezometers at industrial sites. Groundwater flow parameters, such as direction, sense and velocity, are deduced from indirect measurements between two or more piezometers. Groundwater sampling is generally done over the whole column of water inside each borehole to provide concentration values for each piezometer location. These flow and concentration values give a global 'static' image of the potential evolution of a contaminant plume in the shallow aquifer, with large uncertainties in time and space scales and in mass discharge dynamics. The TOTAL R&D Subsurface Environmental team is challenging this classical approach with an innovative, dynamic way of characterizing shallow aquifer groundwater. The current study aims at optimizing the tools and methodologies for (i) a direct and multilevel measurement of groundwater velocities in each piezometer and (ii) a calculation of the potential flux of dissolved contaminants in the shallow aquifer. Lab-scale experiments were designed to test commercial and R&D tools in a controlled sandbox. Multiphysics modeling was performed, taking into account the Darcy equation in the porous medium and the Navier-Stokes equation in the borehole. The first step of the current study focused on groundwater flow at the porous medium/piezometer interface. Large uncertainties between direct flow rate measurements in the borehole and the Darcy flow rate in the porous medium were characterized during the experiments and modeling. The structure and location of the tools in the borehole also affected the results and the uncertainties of the velocity measurement. In parallel, a direct-push tool was tested and gave more accurate results. The second step of the study focused on the mass flux of dissolved contaminants in groundwater. Several active and passive commercial and R&D tools were tested in the sandbox, and reactive transport modeling was performed to validate the experiments at the lab scale. Some tools will be selected and deployed in field assays to better assess the mass discharge of dissolved contaminants at an industrial site. The long-term subsurface environmental strategy targets in-situ, real-time, remote and cost-effective monitoring of groundwater. Keywords: dynamic characterization, groundwater flow, lab-scale, mass flux
Procedia PDF Downloads 169
7306 Geomatic Techniques to Filter Vegetation from Point Clouds
Authors: M. Amparo Núñez-Andrés, Felipe Buill, Albert Prades
Abstract:
More and more frequently, geomatic techniques such as terrestrial laser scanning or digital photogrammetry, either terrestrial or from drones, are being used to obtain digital terrain models (DTM) for the monitoring of geological phenomena that cause natural disasters, such as landslides, rockfalls and debris flows. One of the main multitemporal analyses developed from these models is the quantification of volume changes on slopes and hillsides, whether caused by erosion, fall or land movement in the source area, or by sedimentation in the deposition zone. To carry out this task, it is necessary to filter from the point clouds all those elements that do not belong to the slopes. Among these elements, vegetation stands out, as it has the greatest presence and changes constantly, both seasonally and daily, being affected by factors such as wind. One of the best-known indices to detect vegetation in an image is the NDVI (Normalized Difference Vegetation Index), which is obtained from the combination of the infrared and red channels and therefore requires a multispectral camera. These cameras are generally of lower resolution than conventional RGB cameras, while their cost is much higher; therefore, we have to look for alternative indices based on RGB. In this communication, we present the results obtained in the Georisk project (PID2019‐103974RB‐I00/MCIN/AEI/10.13039/501100011033) using the GLI (Green Leaf Index) and ExG (Excess Greenness), as well as the transformation to the Hue-Saturation-Value (HSV) color space, in which the H coordinate is the one that gives the most information for vegetation filtering. These filters are applied both to the images, creating binary masks to be used when applying the SfM algorithms, and to the point cloud obtained directly by the photogrammetric process without any previous filter, or to the one obtained by TLS (Terrestrial Laser Scanning). In this last case, we have also worked with a Riegl VZ400i sensor that allows the reception, as in aerial LiDAR, of several returns of the signal, information that can be used for classification of the point cloud. After applying all the techniques at different locations, the results show that the color-based filters allow correct filtering in those areas where the presence of shadows is not excessive and there is a contrast between the color of the slope lithology and the vegetation. As noted above, when using the HSV color space, it is the H coordinate that responds best for this filtering. Finally, the use of the various returns of the TLS signal allows filtering with some limitations. Keywords: RGB index, TLS, photogrammetry, multispectral camera, point cloud
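A minimal sketch of the RGB-based filters mentioned above (ExG, GLI and an HSV hue band) is given below; the thresholds and the OpenCV-based HSV conversion are assumptions for illustration, not the values tuned in the Georisk project.

```python
import numpy as np
import cv2  # OpenCV, assumed available for the RGB -> HSV conversion

def vegetation_mask(rgb):
    """Illustrative RGB-based vegetation filters; thresholds are assumptions."""
    r, g, b = [rgb[..., i].astype(float) for i in range(3)]
    exg = 2 * g - r - b                                # Excess Greenness
    gli = (2 * g - r - b) / (2 * g + r + b + 1e-6)     # Green Leaf Index
    hsv = cv2.cvtColor(rgb, cv2.COLOR_RGB2HSV)
    hue = hsv[..., 0] * 2.0                            # OpenCV stores H in [0, 180)
    is_green_hue = (hue > 70) & (hue < 160)            # rough "green" hue band
    return (exg > 20) | (gli > 0.05) | is_green_hue    # True where vegetation is likely

img = (np.random.rand(100, 100, 3) * 255).astype(np.uint8)  # stand-in for a slope photo
print(vegetation_mask(img).mean())                           # fraction of masked pixels
```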
Procedia PDF Downloads 160
7305 Analytical Study and Conservation Processes of Scribe Box from Old Kingdom
Authors: Mohamed Moustafa, Medhat Abdallah, Ramy Magdy, Ahmed Abdrabou, Mohamed Badr
Abstract:
The scribe box under study dates back to the Old Kingdom. It was excavated by the Italian expedition in Qena (1935-1937). The box consists of two pieces, the lid and the body. The inner side of the lid is decorated with ancient Egyptian inscriptions written in a black pigment. The box was made from several panels assembled together with wooden dowels and secured with plant-fibre ropes. The entire box is covered with a red pigment. This study aims to use analytical techniques in order to identify and gain a deeper understanding of the box components. Moreover, the authors were particularly interested in using infrared reflectance transformation imaging (RTI-IR) to enhance the hidden inscriptions on the lid. The identification of wood species was included in this study. Visual observation and assessment were carried out to understand the condition of the box, and 3D and 2D programs were used to illustrate the wood joint techniques. Optical microscopy (OM), X-ray diffraction (XRD), portable X-ray fluorescence (XRF) and Fourier Transform Infrared spectroscopy (FTIR) were used in order to identify the wood species, the remains of insect bodies, the red pigment, the plant fibers and the previous conservation adhesives; the RTI-IR technique was also very effective in enhancing the hidden inscriptions. The analysis results proved that the wooden panels and dowels were Acacia nilotica and the wooden rail was Salix sp.; the insects were identified as Lasioderma serricorne and Gibbium psylloides; the red pigment was hematite; the plant fibers were linen; and the previous adhesive was cellulose nitrate. The historical study of the inscriptions showed that they are hieratic writings of a funerary text. After its transportation from the Egyptian Museum storage to the wood conservation laboratory of the Grand Egyptian Museum Conservation Center (GEM-CC), conservation techniques were applied with high accuracy in order to restore the object, including cleaning, consolidation of friable pigments and writings, removal of the previous adhesive, and reassembly. The conservation processes applied were extremely effective, and the box is now ready for display or storage in the Grand Egyptian Museum. Keywords: scribe box, hieratic, 3D program, Acacia nilotica, XRD, cellulose nitrate, conservation
Procedia PDF Downloads 275
7304 Identification of the Antimicrobial Property of Double Metal Oxide/Bioactive Glass Nanocomposite Against Multi Drug Resistant Staphylococcus aureus Causing Implant Infections
Authors: M. H. Pazandeh, M. Doudi, S. Barahimi, L. Rahimzadeh Torabi
Abstract:
The use of antibiotics is essential in reducing the occurrence of adverse effects and inhibiting the emergence of antibiotic resistance in microbial populations. The necessity for a novel methodology concerning local administration of antibiotics has arisen, with particular focus on dealing with localized infections prompted by bacterial colonization of medical devices or implant materials. Bioactive glasses (BG) are extensively employed in the field of regenerative medicine, encompassing a diverse range of materials utilized for drug delivery systems. In the present investigation, various drug carriers for imipenem and tetracycline, namely single systems BG/SnO2, BG/NiO with varying proportions of metal oxide, and nanocomposite BG/SnO2/NiO, were synthesized through the sol-gel technique. The antibacterial efficacy of the synthesized samples was assessed through the utilization of the disk diffusion method with the aim of neutralizing Staphylococcus aureus as the bacterial model. The current study involved the examination of the bioactivity of two samples, namely BG10SnO2/10NiO and BG20SnO2, which were chosen based on their heightened bacterial inactivation properties. This evaluation entailed the employment of two techniques: the measurement of the pH of simulated body fluid (SBF) solution and the analysis of the sample tablets through X-ray diffraction (XRD), scanning electron microscopy (SEM), and Fourier transform infrared (FTIR) spectroscopy. The sample tablets were submerged in SBF for varying durations of 7, 14, and 28 days. The bioactivity of the composite bioactive glass sample was assessed through characterization of alterations in its surface morphology, structure, and chemical composition. This evaluation was performed using scanning electron microscopy (SEM), Fourier-transform infrared (FTIR) spectroscopy, and X-ray diffraction spectroscopy. Subsequently, the sample was immersed in simulated liquids to simulate its behavior in biological environments. The specific body fat percentage (SBF) was assessed over a 28-day period. The confirmation of the formation of a hydroxyapatite surface layer serves as a distinct indicator of bioactivity. The infusion of antibiotics into the composite bioactive glass specimen was done separately, and then the release kinetics of tetracycline and imipenem were tested in simulated body fluid (SBF). Antimicrobial effectiveness against various bacterial strains have been proven in numerous instances using both melt and sol-gel techniques to create multiple bioactive glass compositions. An elevated concentration of calcium ions within a solution has been observed to cause an increase in the pH level. In aqueous suspensions, bioactive glass particles manifest a significant antimicrobial impact. The composite bioactive glass specimen exhibits a gradual and uninterrupted release, which is highly desirable for a drug delivery system over a span of 72 hours. The reduction in absorption, which signals the loss of a portion of the antibiotic during the loading process from the initial phosphate-buffered saline solution, indicates the successful bonding of the two antibiotics to the surfaces of the bioactive glass samples. The sample denoted as BG/10SnO2/10NiO exhibits a higher loading of particles compared to the sample designated as BG/20SnO2 in the context of bioactive glass. The enriched sample demonstrates a heightened bactericidal impact on the bacteria under investigation while concurrently preserving its antibacterial characteristics. 
Tailored bioactive glass that incorporates hydroxyapatite, with a regulated and efficient release of drugs targeting bacterial infections, holds promise as a potential framework for bone implant scaffolds following rigorous clinical evaluation, thereby establishing potential future biomedical uses. During the modification process, the introduction of metal oxides into bioactive glass resulted in improved antibacterial characteristics, particularly in the composite bioactive glass sample that displayed the highest level of efficiency.Keywords: antibacterial, bioactive glasses, implant infections, multi drug resistant
Procedia PDF Downloads 103
7303 A Dynamic Solution Approach for Heart Disease Prediction
Authors: Walid Moudani
Abstract:
The healthcare environment is generally perceived as being information rich yet knowledge poor, and there is a lack of effective analysis tools to discover hidden relationships and trends in the data. In fact, valuable knowledge can be discovered by applying data mining techniques to healthcare systems. In this study, a proficient methodology is presented for the extraction of significant patterns from coronary heart disease warehouses for heart attack prediction, which unfortunately continues to be a leading cause of mortality worldwide. For this purpose, we propose to enumerate dynamically the optimal subsets of the reduced features of high interest by using the rough sets technique combined with dynamic programming. We then propose to validate the classification using a Random Forest (RF) decision tree ensemble to identify the risky heart disease cases. This work is based on a large amount of data collected from several clinical institutions, based on the medical profiles of patients. Moreover, experts' knowledge in this field has been taken into consideration in order to define the disease and its risk factors and to establish significant knowledge relationships among the medical factors. A computer-aided system is developed for this purpose based on a population of 525 adults. The performance of the proposed model is analyzed and evaluated against a set of benchmark techniques applied to this classification problem. Keywords: multi-classifier decision trees, feature reduction, dynamic programming, rough sets
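The sketch below illustrates only the final classification stage with a Random Forest and a crude feature-subset stand-in; the rough-set reduct computed with dynamic programming, which is the core of the proposed methodology, is not reproduced here, and the patient matrix is synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical patient matrix: rows = 525 adults, columns = medical risk factors
rng = np.random.default_rng(0)
X = rng.normal(size=(525, 12))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=525) > 0).astype(int)

# Crude stand-in for the rough-set reduct: keep the features the forest ranks highest
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
keep = np.argsort(rf.feature_importances_)[-5:]          # reduced feature subset

score = cross_val_score(RandomForestClassifier(n_estimators=200, random_state=0),
                        X[:, keep], y, cv=5).mean()
print(keep, round(score, 3))
```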
Procedia PDF Downloads 412
7302 Video Stabilization Using Feature Point Matching
Authors: Shamsundar Kulkarni
Abstract:
Video capturing by non-professionals often leads to unanticipated effects such as image distortion and image blurring; hence, many researchers study such drawbacks to enhance the quality of videos. In this paper, an algorithm is proposed to stabilize jittery videos. A stable output video is attained without the jitter caused by the shaking of a handheld camera during recording. First, salient points in each frame of the input video are identified and processed, after which the video is optimized and stabilized; the optimization concerns the quality of the stabilization. This method has shown good results in terms of stabilization and removed distortion from output videos recorded under different circumstances. Keywords: video stabilization, point feature matching, salient points, image quality measurement
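A typical point-feature stabilization pipeline of the kind described above is sketched below with OpenCV (corner detection, Lucas-Kanade tracking, per-frame rigid transform, moving-average smoothing of the trajectory, re-warping); it is an illustration of the general approach, not the authors' exact algorithm, and the smoothing radius and file names are assumptions.

```python
import cv2
import numpy as np

def stabilize(in_path, out_path, radius=15):
    """Sketch of feature-point video stabilization: track corners between frames,
    estimate a rigid transform per frame, smooth the cumulative trajectory and
    warp each frame by the difference between raw and smoothed motion."""
    cap = cv2.VideoCapture(in_path)
    frames = []
    ok, frame = cap.read()
    while ok:
        frames.append(frame)
        ok, frame = cap.read()
    cap.release()

    # 1) Per-frame motion from matched salient points
    motions = []
    for prev, cur in zip(frames, frames[1:]):
        g0 = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
        g1 = cv2.cvtColor(cur, cv2.COLOR_BGR2GRAY)
        p0 = cv2.goodFeaturesToTrack(g0, maxCorners=200, qualityLevel=0.01, minDistance=30)
        p1, st, _ = cv2.calcOpticalFlowPyrLK(g0, g1, p0, None)
        m, _ = cv2.estimateAffinePartial2D(p0[st == 1], p1[st == 1])
        motions.append([m[0, 2], m[1, 2], np.arctan2(m[1, 0], m[0, 0])])  # dx, dy, dtheta

    # 2) Smooth the cumulative trajectory with a moving average
    traj = np.cumsum(motions, axis=0)
    k = np.ones(2 * radius + 1) / (2 * radius + 1)
    smooth = np.column_stack([np.convolve(traj[:, i], k, mode="same") for i in range(3)])
    correction = smooth - traj

    # 3) Re-warp each frame with the corrected motion and write the output
    h, w = frames[0].shape[:2]
    out = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), 30, (w, h))
    out.write(frames[0])
    for (dx, dy, da), (cdx, cdy, cda), frame in zip(motions, correction, frames[1:]):
        a = da + cda
        M = np.array([[np.cos(a), -np.sin(a), dx + cdx],
                      [np.sin(a),  np.cos(a), dy + cdy]])
        out.write(cv2.warpAffine(frame, M, (w, h)))
    out.release()

# stabilize("shaky_input.mp4", "stable_output.mp4")  # hypothetical file names
```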
Procedia PDF Downloads 315
7301 Autism Disease Detection Using Transfer Learning Techniques: Performance Comparison between Central Processing Unit vs. Graphics Processing Unit Functions for Neural Networks
Authors: Mst Shapna Akter, Hossain Shahriar
Abstract:
Neural network approaches are machine learning methods used in many domains, such as healthcare and cyber security, and are particularly well known for dealing with image datasets. While training on images, several fundamental mathematical operations are carried out in the neural network, including a number of algebraic functions such as derivatives, convolutions, and matrix inversion and transposition. Such operations require far more processing power than typical computer usage. A Central Processing Unit (CPU) is not well suited to large image datasets because it is built for serial processing, whereas a Graphics Processing Unit (GPU) has parallel processing capabilities and is therefore faster. This paper uses advanced neural network techniques such as VGG16, ResNet50, DenseNet, InceptionV3, Xception, MobileNet, XGBoost-VGG16, and our proposed models to compare CPU and GPU resources. A system for classifying autism using face images of autistic and non-autistic children was used to compare performance during testing. We used evaluation metrics such as accuracy, F1 score, precision, recall and execution time. It has been observed that the GPU runs faster than the CPU in all tests performed. Moreover, the accuracy of the neural network models increases on the GPU compared to the CPU. Keywords: autism disease, neural network, CPU, GPU, transfer learning
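A rough way to reproduce this CPU-versus-GPU timing comparison is sketched below with PyTorch and a VGG16 backbone; the framework, batch size and two-class head are assumptions, since the abstract does not state which toolkit was used, and the measured times depend entirely on the hardware.

```python
import time
import torch
import torchvision

def time_training_step(device, batch=16, steps=10):
    """Rough per-step timing of VGG16 fine-tuning on a given device."""
    model = torchvision.models.vgg16(weights=None)   # random weights keep the sketch offline
    model.classifier[6] = torch.nn.Linear(4096, 2)   # 2 classes: autistic / non-autistic
    model = model.to(device).train()
    opt = torch.optim.SGD(model.parameters(), lr=1e-3)
    x = torch.randn(batch, 3, 224, 224, device=device)
    y = torch.randint(0, 2, (batch,), device=device)
    start = time.time()
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.cross_entropy(model(x), y)
        loss.backward()
        opt.step()
    if device == "cuda":
        torch.cuda.synchronize()
    return (time.time() - start) / steps

print("CPU s/step:", time_training_step("cpu"))
if torch.cuda.is_available():
    print("GPU s/step:", time_training_step("cuda"))
```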
Procedia PDF Downloads 123
7300 Effect of Plasma Treatment on UV Protection Properties of Fabrics
Authors: Sheila Shahidi
Abstract:
UV protection by fabrics has recently become a focus of great interest, particularly in connection with environmental degradation and ozone layer depletion. Fabrics provide simple and convenient protection against UV radiation (UVR), but not all fabrics offer sufficient UV protection. To describe the degree of UVR protection offered by clothing materials, the ultraviolet protection factor (UPF) is commonly used. A UV-protective fabric can be produced by applying a chemical finish using normal wet-processing methodologies. However, traditional wet-processing techniques are known to consume large quantities of water and energy and may lead to adverse alterations of the bulk properties of the substrate. Recently, the use of plasmas to generate physicochemical surface modifications of textile substrates has become an intriguing approach to replace or enhance conventional wet-processing techniques. In this research work, the effect of plasma treatment on the UV protection properties of fabrics was investigated. DC magnetron sputtering was used, and plasma parameters such as gas type, electrodes, exposure time and power were studied. The morphological and chemical properties of the samples were analyzed using Scanning Electron Microscopy (SEM) and Fourier Transform Infrared Spectroscopy (FTIR), respectively. The transmittance and UPF values of the original and plasma-treated samples were measured using a Shimadzu UV3101 PC (UV-Vis-NIR scanning spectrophotometer, 190-2,100 nm range). It was concluded that plasma, an eco-friendly, cost-effective and dry technique already used in different branches of industry, will conquer the textile industry in the near future, and that it is a promising method for the preparation of UV-protective textiles. Keywords: fabric, plasma, textile, UV protection
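For reference, the UPF computed from the measured spectral transmittance T(λ) follows the standard definition (e.g., AS/NZS 4399), shown below, where E(λ) is the erythemal action spectrum, S(λ) the solar spectral irradiance and Δλ the wavelength step over the UV range.

```latex
\mathrm{UPF} \;=\; \frac{\displaystyle\sum_{\lambda=290\,\mathrm{nm}}^{400\,\mathrm{nm}} E(\lambda)\,S(\lambda)\,\Delta\lambda}
{\displaystyle\sum_{\lambda=290\,\mathrm{nm}}^{400\,\mathrm{nm}} E(\lambda)\,S(\lambda)\,T(\lambda)\,\Delta\lambda}
```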
Procedia PDF Downloads 523
7299 Dynamic Modeling of the Exchange Rate in Tunisia: Theoretical and Empirical Study
Authors: Chokri Slim
Abstract:
The relative failure of simultaneous equation models in the seventies led researchers to turn to other approaches that take into account the dynamics of economic and financial systems. In this paper, we use an approach based on vector autoregressive (VAR) models, which have been widely used in recent years. Their popularity is due to their flexible nature and the ease with which they produce models with useful descriptive characteristics; they are also easy to use for testing economic hypotheses. Standard econometric techniques assume that the series studied are stable over time (the stationarity hypothesis). Most economic series do not satisfy this hypothesis, which means that specific techniques must be implemented when one wishes to study the relationships that bind them. Cointegration, which characterizes non-stationary (integrated) series of which a linear combination is stationary, is also presented in this paper. Since the work of Johansen, this approach is generally presented as part of a multivariate analysis, used to specify stable long-term relationships while at the same time analyzing the short-term dynamics of the variables considered. In the empirical part, we apply these concepts to study the dynamics of the exchange rate in Tunisia, one of the most important economic policy variables of a country open to the outside. According to the results of the empirical study using the cointegration method, there is a cointegrating relationship between the exchange rate and its determinants. This relationship shows that the variables have a significant influence in determining the exchange rate in Tunisia. Keywords: stationarity, cointegration, dynamic models, causality, VECM models
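A rough Python sketch of the Johansen test and VECM estimation described above is given below using statsmodels; the series are synthetic stand-ins sharing one stochastic trend, and the variable names, lag order and deterministic term are assumptions rather than the Tunisian dataset actually used.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM

# Hypothetical monthly series: the exchange rate and two assumed determinants
rng = np.random.default_rng(1)
n = 200
common = np.cumsum(rng.normal(size=n))                 # shared stochastic trend
data = pd.DataFrame({
    "exchange_rate": common + rng.normal(scale=0.5, size=n),
    "inflation_diff": 0.8 * common + rng.normal(scale=0.5, size=n),
    "terms_of_trade": -0.5 * common + rng.normal(scale=0.5, size=n),
})

# Johansen trace test for the cointegration rank
jres = coint_johansen(data, det_order=0, k_ar_diff=2)
print("trace stats:", jres.lr1, "\n95% critical:", jres.cvt[:, 1])

# VECM: long-run relation plus short-run dynamics
vecm = VECM(data, k_ar_diff=2, coint_rank=1, deterministic="co").fit()
print(vecm.beta)   # cointegrating vector (long-run relation)
print(vecm.alpha)  # adjustment (loading) coefficients
```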
Procedia PDF Downloads 3707298 Modern Scotland Yard: Improving Surveillance Policies Using Adversarial Agent-Based Modelling and Reinforcement Learning
Authors: Olaf Visker, Arnout De Vries, Lambert Schomaker
Abstract:
Predictive policing refers to the use of analytical techniques to identify potential criminal activity. It has been widely implemented by various police departments. Being a relatively new area of research, there are, to the authors' knowledge, no absolutely tried-and-true methods, and existing approaches still exhibit a variety of potential problems. One of those problems is closely related to the lack of understanding of how acting on these predictions influences crime itself. The goal of law enforcement is ultimately crime reduction. As such, a policy needs to be established that best facilitates this goal. This research aims to find such a policy by using adversarial agent-based modeling in combination with modern reinforcement learning techniques. A baseline model is presented here for both law enforcement and criminal agents, and their performance is compared to that of their respective reinforcement learning models. The experiments show that our smart law enforcement model is capable of reducing crime by making more deliberate choices regarding the locations of potential criminal activity. Furthermore, it is shown that the smart criminal model presents behavior consistent with popular crime theories and outperforms the baseline model in terms of crimes committed and time to capture. It does, however, still suffer from the difficulties of capturing long-term rewards and learning how to handle multiple opposing goals.Keywords: adversarial, agent based modelling, predictive policing, reinforcement learning
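By way of illustration only (a simplified single-state sketch with assumed reward and crime-propensity definitions, not the authors' model), a value-learning patrol agent over a hypothetical grid of city cells could be set up as follows:

import numpy as np

rng = np.random.default_rng(0)

N_CELLS = 25        # hypothetical city grid flattened to 25 cells
N_EPISODES = 5000
ALPHA, EPSILON = 0.1, 0.1

# Hypothetical, fixed crime propensity per cell; in an adversarial
# agent-based model this would itself adapt to police behaviour.
crime_prob = rng.uniform(0.05, 0.4, size=N_CELLS)

# Expected patrol value per cell (a single-state special case of Q-learning).
q = np.zeros(N_CELLS)

for _ in range(N_EPISODES):
    # epsilon-greedy choice of patrol location
    if rng.random() < EPSILON:
        cell = int(rng.integers(N_CELLS))
    else:
        cell = int(np.argmax(q))
    # reward 1 if a crime attempt occurs in the patrolled cell (intercepted/deterred)
    reward = float(rng.random() < crime_prob[cell])
    q[cell] += ALPHA * (reward - q[cell])

print("Learned patrol preferences (top 5 cells):", np.argsort(q)[::-1][:5])

In the full adversarial setting described in the abstract, the criminal agents would learn in parallel, so the crime propensities would shift in response to patrol choices rather than stay fixed as in this sketch.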
Procedia PDF Downloads 1527297 Quality of Age Reporting from Tanzania 2012 Census Results: An Assessment Using Whipple’s Index, Myer’s Blended Index, and Age-Sex Accuracy Index
Authors: A. Sathiya Susuman, Hamisi F. Hamisi
Abstract:
Background: Many socio-economic and demographic data are attributed by age and sex. However, a variety of irregularities and misstatements are noted with respect to age-related data, and less so with respect to sex data because of the biological differences between the genders. Noting the misstatement/misreporting of age data despite its significant importance in demographic and epidemiological studies, this study aims at assessing the quality of the 2012 Tanzania Population and Housing Census results. Methods: Data for the analysis were downloaded from the Tanzania National Bureau of Statistics. Age heaping and digit preference were measured using summary indices, viz., Whipple's index, Myers' blended index, and the age-sex accuracy index. Results: The recorded Whipple's index for both sexes was 154.43; males had the lowest index of about 152.65 while females had the highest index of about 156.07. For Myers' blended index, the preferences were at digits '0' and '5' while the avoidances were at digits '1' and '3' for both sexes. Finally, the age-sex accuracy index stood at 59.8, where the sex ratio score was 5.82 and the age ratio scores were 20.89 and 21.4 for males and females, respectively. Conclusion: The evaluation of the 2012 PHC data using these demographic techniques has qualified the data as inaccurate as a result of systematic age heaping and digit preferences/avoidances. Thus, innovative methods in data collection, along with measuring and minimizing errors using statistical techniques, should be used to ensure the accuracy of age data.Keywords: age heaping, digit preference/avoidance, summary indices, Whipple’s index, Myer’s index, age-sex accuracy index
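For reference, Whipple's index summarizes heaping on ages ending in 0 and 5 over the range 23–62 (100 means no preference, 500 means all ages end in 0 or 5); a minimal sketch computing it from a hypothetical array of single-year age counts (illustrative data, not the census figures) is:

import numpy as np

def whipples_index(counts_by_age):
    """Whipple's index for preference of terminal digits 0 and 5.

    counts_by_age: array indexed by single year of age (0, 1, 2, ...).
    """
    ages = np.arange(len(counts_by_age))
    in_range = (ages >= 23) & (ages <= 62)
    ends_0_or_5 = np.isin(ages % 10, (0, 5))
    numerator = counts_by_age[in_range & ends_0_or_5].sum()
    denominator = counts_by_age[in_range].sum() / 5.0
    return 100.0 * numerator / denominator

# Hypothetical age distribution with mild heaping on 0s and 5s.
rng = np.random.default_rng(1)
counts = rng.integers(900, 1100, size=80).astype(float)
counts[np.isin(np.arange(80) % 10, (0, 5))] *= 1.5
print(round(whipples_index(counts), 2))

Myers' blended index and the age-sex accuracy index follow analogous tabulations over all ten terminal digits and over age/sex ratios by five-year age group, respectively.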
Procedia PDF Downloads 4797296 Survey of Indoor Radon/Thoron Concentrations in High Lung Cancer Incidence Area in India
Authors: Zoliana Bawitlung, P. C. Rohmingliana, L. Z. Chhangte, Remlal Siama, Hming Chungnunga, Vanram Lawma, L. Hnamte, B. K. Sahoo, B. K. Sapra, J. Malsawma
Abstract:
Mizoram state has the highest lung cancer incidence rate in India due to its high consumption of tobacco and tobacco products, compounded by local food habits. While smoking is mainly responsible for this incidence, the effect of inhaling indoor radon gas cannot be discarded, as the hazardous nature of this radioactive gas and its progeny for human populations has been well established worldwide; radiation damage to bronchial cells makes radon the second leading cause of lung cancer after smoking. It is also known that the effect of radiation, however small the concentration may be, cannot be neglected, as it can bring about a risk of cancer incidence. Hence, estimation of indoor radon concentration is important to provide a useful reference for radiation effects, to establish safety measures, and to create a baseline for further case-control studies. The indoor radon/thoron concentrations in Mizoram were measured in 41 dwellings selected during 2015-2016 on the basis of spot gamma background radiation and house construction type. The dwellings were monitored for one year, in 4-month cycles to capture seasonal variations, for the indoor concentrations of radon gas and its progeny as well as the indoor and outdoor gamma doses. A time-integrated method using Solid State Nuclear Track Detector (SSNTD) based single-entry pin-hole dosimeters was used for the measurement of the indoor radon/thoron concentration. Indoor and outdoor gamma dose measurements were carried out using Geiger-Muller survey meters. The seasonal variation of the indoor radon/thoron concentration was monitored. The results show that the annual average radon concentration varied from 54.07 – 144.72 Bq/m³ with an average of 90.20 Bq/m³, and the annual average thoron concentration varied from 17.39 – 54.19 Bq/m³ with an average of 35.91 Bq/m³, which are below the permissible limit. The spot survey of the gamma background radiation level varied between 9 and 24 µR/h inside and outside the dwellings throughout Mizoram, which is within acceptable limits. From the above results, there is no direct indication that radon/thoron is responsible for the high lung cancer incidence in the area. In order to find epidemiological evidence linking natural radiation to the high cancer incidence in the area, a case-control study would be needed, which is beyond the scope of this work. However, the measured data will provide a baseline for further studies.Keywords: background gamma radiation, indoor radon/thoron, lung cancer, seasonal variation
Procedia PDF Downloads 147