Search results for: three dimensional emission spectral studies
14465 Generation of Waste Streams in Small Model Reactors
Authors: Sara Mostofian
Abstract:
Nuclear power is a technology that can fulfill future energy needs but requires special attention to ensure safety and reliability while minimizing any environmental impact. To meet these expectations, the nuclear industry is exploring different reactor technologies for power production. Several designs are under development, and the technical viability of these new designs is the subject of many ongoing studies. One of these studies considers the radioactive emissions and radioactive waste generated during the life of a nuclear power production plant, to allow a successful licensing process. For all of these modern technologies, a good understanding of the radioactivity generated in the process systems of the plant is essential. Some of that understanding may be gleaned from the performance of prototype reactors of similar design that operated decades ago. This paper presents how, with that understanding, a model can be developed to estimate the emissions as well as the radioactive waste during the normal operation of a nuclear power plant. The model would predict the radioactive material concentrations in different waste streams. Using this information, the radioactive emissions and waste generated during the life of these new technologies can be estimated during the early stages of the design of the plant.
Keywords: SMRs, activity transport, model, radioactive waste
Procedia PDF Downloads 109
14464 Extraction of Urban Building Damage Using Spectral, Height and Corner Information
Authors: X. Wang
Abstract:
Timely and accurate information on urban building damage caused by earthquakes is an important basis for disaster assessment and emergency relief. Very high resolution (VHR) remotely sensed imagery containing abundant fine-scale information offers a large quantity of data for detecting and assessing urban building damage in the aftermath of earthquake disasters. However, the accuracy obtained using spectral features alone is comparatively low, since building damage, intact buildings and pavements are spectrally similar. Therefore, it is of great significance to detect urban building damage effectively using multi-source data. Considering that the height or geometric structure of buildings generally changes dramatically in devastated areas, a novel multi-stage urban building damage detection method, using bi-temporal spectral, height and corner information, was proposed in this study. The pre-event height information was generated using stereo VHR images acquired from two different satellites, while the post-event height information was produced from airborne LiDAR data. The corner information was extracted from pre- and post-event panchromatic images. The proposed method can be summarized as follows. To reduce the classification errors caused by spectral similarity and errors in extracting height information, ground surface, shadows, and vegetation were first extracted using the post-event VHR image and height data and were masked out. Two different types of building damage were then extracted from the remaining areas: the height difference between pre- and post-event data was used for detecting building damage showing significant height change, while the difference in the density of corners between pre- and post-event images was used for extracting building damage showing drastic change in geometric structure. The initial building damage result was generated by combining the above two results.
Finally, a post-processing procedure was adopted to refine the obtained initial result. The proposed method was quantitatively evaluated and compared to two existing methods in Port-au-Prince, Haiti, which was heavily hit by an earthquake in January 2010, using a pre-event GeoEye-1 image, a pre-event WorldView-2 image, a post-event QuickBird image and post-event LiDAR data. The results showed that the method proposed in this study significantly outperformed the two comparative methods in terms of urban building damage extraction accuracy. The proposed method provides a fast and reliable way to detect urban building collapse, which is also applicable to related applications.
Keywords: building damage, corner, earthquake, height, very high resolution (VHR)
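As a rough illustration of the two-criterion damage logic described above, the sketch below combines a height-drop mask and a corner-density-change mask over a toy 2×2 scene. All arrays, thresholds, and names are invented for illustration and are not the paper's calibrated values.

```python
import numpy as np

def detect_damage(pre_height, post_height, pre_corners, post_corners,
                  excluded, dh_thresh=3.0, dc_thresh=5.0):
    """Flag damaged pixels by height loss OR corner-density change.

    `excluded` marks pixels already removed (ground, shadow, vegetation).
    Thresholds are illustrative placeholders, not the paper's values.
    """
    height_drop = (pre_height - post_height) > dh_thresh            # collapse
    corner_change = np.abs(pre_corners - post_corners) > dc_thresh  # structure change
    return (height_drop | corner_change) & ~excluded

# Toy 2x2 scene: pixel (0,0) lost 8 m of height, pixel (1,0) changed
# corner density, pixel (1,1) is masked out as vegetation.
pre_h = np.array([[10.0, 10.0], [10.0, 10.0]])
post_h = np.array([[2.0, 10.0], [10.0, 10.0]])
pre_c = np.zeros((2, 2))
post_c = np.array([[0.0, 0.0], [10.0, 0.0]])
mask = np.array([[False, False], [False, True]])
damage = detect_damage(pre_h, post_h, pre_c, post_c, mask)
```

The union of the two masks mirrors the abstract's combination step; the paper then refines this initial result with post-processing.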
Procedia PDF Downloads 213
14463 Preparation and Characterization of Photocatalyst for the Conversion of Carbon Dioxide to Methanol
Authors: D. M. Reddy Prasad, Nur Sabrina Binti Rahmat, Huei Ruey Ong, Chin Kui Cheng, Maksudur Rahman Khan, D. Sathiyamoorthy
Abstract:
Carbon dioxide (CO2) emission to the environment is inevitable and is responsible for global warming. Photocatalytic reduction of CO2 to fuels such as methanol, methane, etc. is a promising way to reduce the emission of the greenhouse gas CO2. In the present work, Bi2S3/CdS was synthesized as an effective visible-light-responsive photocatalyst for CO2 reduction into methanol. The Bi2S3/CdS photocatalyst was prepared by hydrothermal reaction. The catalyst was characterized by X-ray diffraction (XRD). The photocatalytic activity of the catalyst was investigated for methanol production as a function of time. A gas chromatograph with flame ionization detector (GC-FID) was employed to analyze the product. The yield of methanol was found to increase with higher CdS concentration in Bi2S3/CdS, and the maximum yield, obtained for 45 wt% Bi2S3/CdS under visible light irradiation, was 20 μmol/g. The results establish that Bi2S3/CdS is a favorable catalyst for reducing CO2 to methanol.
Keywords: photocatalyst, CO2 reduction, methanol, visible light, XRD, GC-FID
Procedia PDF Downloads 501
14462 Sensitivity Analysis of the Heat Exchanger Design in Net Power Oxy-Combustion Cycle for Carbon Capture
Authors: Hirbod Varasteh, Hamidreza Gohari Darabkhani
Abstract:
Global warming and its impact on climate change are among the main challenges of the current century. Global warming is mainly due to the emission of greenhouse gases (GHG), and carbon dioxide (CO2) is known to be the major contributor to the GHG emission profile. While the energy sector is the primary source of CO2 emissions, Carbon Capture and Storage (CCS) is believed to be the solution for controlling them. Oxyfuel combustion (oxy-combustion) is one of the major technologies for capturing CO2 from power plants. For gas turbines, several oxy-combustion power cycles (oxyturbine cycles) have been investigated by means of thermodynamic analysis. The NetPower cycle is one of the leading oxyturbine power cycles, with almost full carbon capture capability from a natural-gas-fired power plant. In this manuscript, a sensitivity analysis of the heat exchanger design in the NetPower cycle is completed by means of process modelling. The heat capacity variation and supercritical CO2 with gaseous admixtures are considered for multi-zone analysis with Aspen Plus software. It is found that the heat exchanger design plays a major role in increasing the efficiency of the NetPower cycle. Pinch-point analysis is performed to extract the composite and grand composite curves for the heat exchanger. In this paper, the relationship between the cycle efficiency and the minimum approach temperature (∆Tmin) of the heat exchanger is also evaluated. An increase in ∆Tmin causes a decrease in the temperature of the recycled flue gases (RFG) and an overall decrease in the required power for the recycled gas compressor. The main challenge in the design of heat exchangers in power plants is the tradeoff between capital and operational costs. To achieve a lower ∆Tmin, a larger heat exchanger is required, which means a higher capital cost but leads to better heat recovery and a lower operational cost.
To balance these costs, ∆Tmin is selected at the minimum point of the combined capital and operational cost curves. This study provides an insight into the NetPower oxy-combustion cycle's performance analysis and operational conditions based on its heat exchanger design.
Keywords: carbon capture and storage, oxy-combustion, NetPower cycle, oxyturbine cycles, zero emission, heat exchanger design, supercritical carbon dioxide, oxy-fuel power plant, pinch point analysis
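The capital/operational tradeoff described above can be illustrated with a toy numerical sketch. The two cost curves below are arbitrary placeholder models, not the paper's data, but they show how a minimum-total-cost ∆Tmin emerges.

```python
import numpy as np

# Illustrative cost models (arbitrary units): capital cost grows as the
# heat-exchanger area, roughly ~ 1/dT_min; operational cost grows with
# dT_min through lost heat recovery. Neither curve comes from the paper.
dT_min = np.linspace(2.0, 30.0, 200)      # candidate approach temperatures (K)
capital = 500.0 / dT_min                  # larger exchanger at small dT_min
operational = 4.0 * dT_min                # poorer recovery at large dT_min
total = capital + operational

best = dT_min[np.argmin(total)]           # cost-minimizing approach temperature
```

For these placeholder curves, the minimum falls near sqrt(500/4) ≈ 11.2 K; with real cost models, the same argmin step yields the design ∆Tmin.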
Procedia PDF Downloads 204
14461 Environmental Protection by Optimum Utilization of Car Air Conditioners
Authors: Sanchita Abrol, Kunal Rana, Ankit Dhir, S. K. Gupta
Abstract:
According to NREL's findings, 7 billion gallons of petrol are used annually to run the air conditioners of passenger vehicles (nearly 6% of total fuel consumption in the USA). Beyond fuel use, the Environmental Protection Agency reported that refrigerant leaks from auto air conditioning units add an additional 50 million metric tons of carbon emissions to the atmosphere each year. The objective of our project is to deal with this vital issue by carefully modifying the interior of a car, thereby increasing its mileage and the efficiency of its engine. This would consequently result in a decrease in tailpipe emissions and generated pollution, along with improved car performance. An automatic mechanism deployed between the front and the rear seats, consisting of a transparent thermally insulating sheet/curtain, would roll down as per the requirement of the driver in order to optimize the volume for effective air conditioning when travelling alone or with one passenger. The reduction in effective volume will yield favourable results. Even on a mild sunny day, the temperature inside a parked car can quickly spike to life-threatening levels. For a stationary parked car, insulation would be provided beneath its metal body so as to reduce the rate of heat transfer and increase the transmissivity. As a result, the car would not require a large amount of air conditioning to maintain a lower temperature, which would provide similar benefits. The authors established the feasibility studies, the system engineering, and primarily theoretical and experimental results confirming the idea and the motivation to fabricate and test the actual product.
Keywords: automation, car, cooling insulating curtains, heat optimization, insulation, reduction in tail emission, mileage
Procedia PDF Downloads 277
14460 Neck Thinning Dynamics of Janus Droplets under Multiphase Interface Coupling in Cross Junction Microchannels
Authors: Jiahe Ru, Yan Pang, Zhaomiao Liu
Abstract:
The necking process of Janus droplet generation in cross-junction microchannels is experimentally and theoretically investigated. The two dispersed phases, simultaneously sheared by the continuous phase, are liquid paraffin wax and 100 cSt silicone oil, with an 80% aqueous glycerin solution used as the continuous phase. According to the variation of the minimum neck width and thinning rate, the necking process is divided into two stages: two-dimensional extrusion and three-dimensional extrusion. In the two-dimensional extrusion stage, the evolutions of the tip extension length for the two dispersed phases begin with the same trend, after which the length for liquid paraffin becomes larger than that for silicone oil. The upper and lower neck interface profiles in the Janus necking process are asymmetrical when the tip extension velocity of the paraffin oil is greater than that of the silicone oil. In the three-dimensional extrusion stage, the neck of the liquid paraffin lags behind that of the silicone oil because of its higher surface tension, and finally the necking fracture positions gradually synchronize. When the Janus droplets pinch off, the interfacial tension becomes a positive driving force for neck thinning. The interface coupling of the three phases can cause asymmetric necking of the neck interface, which affects the necking time and, ultimately, the droplet volume. This paper mainly investigates the thinning dynamics of the liquid-liquid interface in confined microchannels. The revealed results could help to enhance the physical understanding of the droplet generation phenomenon.
Keywords: neck interface, interface coupling, Janus droplets, multiphase flow
Procedia PDF Downloads 128
14459 An Atomistic Approach to Define Continuum Mechanical Quantities in One Dimensional Nanostructures at Finite Temperature
Authors: Smriti, Ajeet Kumar
Abstract:
We present a variant of the Irving-Kirkwood procedure to obtain microscopic expressions of cross-section-averaged continuum fields, such as internal force and moment, in one-dimensional nanostructures in the non-equilibrium setting. In one-dimensional continuum theories for slender bodies, we deal with quantities such as mass, linear momentum, angular momentum, and strain energy densities, all defined per unit length. These quantities are obtained by integrating the corresponding pointwise (per unit volume) quantities over the cross-section of the slender body. However, no well-defined cross-section exists for these nanostructures at finite temperature. We thus define the cross-section of a nanorod to be an infinite plane which remains fixed in space as time progresses, and define the above continuum quantities by integrating the pointwise microscopic quantities over this infinite plane. The method yields explicit expressions of both the potential and kinetic parts of the above quantities. We further specialize these expressions to helically repeating one-dimensional nanostructures in order to use them in molecular dynamics studies of extension, torsion, and bending of such nanostructures. As the Irving-Kirkwood procedure does not yield expressions for stiffnesses, we resort to a thermodynamic equilibrium approach to obtain the expressions of axial force, twisting moment, bending moment, and the associated stiffnesses by taking the first and second derivatives of the Helmholtz free energy with respect to conjugate strain measures. The equilibrium approach yields expressions independent of kinetic terms. We then establish the equivalence of the expressions obtained using the two approaches. The derived expressions are used to understand the extension, torsion, and bending of single-walled carbon nanotubes at non-zero temperatures.
Keywords: thermoelasticity, molecular dynamics, one dimensional nanostructures, nanotube buckling
Procedia PDF Downloads 125
14458 Electroencephalogram during Natural Reading: Theta and Alpha Rhythms as Analytical Tools for Assessing a Reader's Cognitive State
Authors: D. Zhigulskaya, V. Anisimov, A. Pikunov, K. Babanova, S. Zuev, A. Latyshkova, K. Chernozatonskiy, A. Revazov
Abstract:
The electrophysiology of information processing in reading is certainly a popular research topic. Natural reading, however, has been relatively poorly studied, despite having broad potential applications for learning and education. In the current study, we explore the relationship between text categories and the spontaneous electroencephalogram (EEG) while reading. Thirty healthy volunteers (mean age 26.68 ± 1.84) participated in this study. 15 Russian-language texts were used as stimuli. The first text was used for practice and was excluded from the final analysis. The remaining 14 formed opposite pairs of texts in one of 7 categories, the most important of which were: interesting/boring, fiction/non-fiction, free reading/reading with an instruction, and reading a text/reading a pseudo text (consisting of strings of letters that formed meaningless words). Participants had to read the texts sequentially on an Apple iPad Pro. EEG was recorded from 12 electrodes simultaneously with eye movement data via ARKit technology by Apple. EEG spectral amplitude was analyzed in Fz for the theta band (4-8 Hz) and in C3, C4, P3, and P4 for the alpha band (8-14 Hz) using the Friedman test. We found that reading an interesting text was accompanied by an increase in theta spectral amplitude in Fz compared to reading a boring text (3.87 ± 0.12 µV and 3.67 ± 0.11 µV, respectively). When instructions are given for reading, we see less alpha activity than during free reading of the same text (3.34 ± 0.20 µV and 3.73 ± 0.28 µV, respectively, for C4 as the most representative channel). The non-fiction text elicited less activity in the alpha band (C4: 3.60 ± 0.25 µV) than the fiction text (C4: 3.66 ± 0.26 µV). A significant difference in alpha spectral amplitude was also observed between the regular text (C4: 3.64 ± 0.29 µV) and the pseudo text (C4: 3.38 ± 0.22 µV). These results suggest that some brain activity we see on EEG is sensitive to particular features of the text.
We propose that changes in the theta and alpha bands during reading may serve as electrophysiological tools for assessing the reader's cognitive state as well as his or her attitude to the text and the perceived information. These physiological markers have prospective practical value for developing technological solutions and biofeedback systems for reading in particular and for education in general.
Keywords: EEG, natural reading, reader's cognitive state, theta-rhythm, alpha-rhythm
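A minimal sketch of the kind of band-limited spectral amplitude measure discussed above: a generic FFT-based estimate over a chosen frequency band. The study's exact preprocessing (windowing, artifact rejection, electrode montage) is not reproduced here, and the synthetic signal is purely illustrative.

```python
import numpy as np

def band_amplitude(signal, fs, f_lo, f_hi):
    """Mean FFT amplitude of `signal` within the [f_lo, f_hi] Hz band.

    A generic estimate of band spectral amplitude; the study's exact
    pipeline is not reproduced here.
    """
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal)) / n        # one-sided amplitude
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return spectrum[band].mean()

fs = 250.0                                  # a typical EEG sampling rate
t = np.arange(0, 4, 1 / fs)                 # 4 s epoch
# Synthetic "EEG": strong 6 Hz theta component plus a weak 20 Hz component.
eeg = np.sin(2 * np.pi * 6 * t) + 0.1 * np.sin(2 * np.pi * 20 * t)
theta = band_amplitude(eeg, fs, 4, 8)
beta = band_amplitude(eeg, fs, 16, 24)
```

Comparing such band amplitudes across text conditions (per channel, per subject) is the shape of the analysis the abstract reports.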
Procedia PDF Downloads 80
14457 Potential of Hyperion (EO-1) Hyperspectral Remote Sensing for Detection and Mapping Mine-Iron Oxide Pollution
Authors: Abderrazak Bannari
Abstract:
Acid Mine Drainage (AMD) from mine wastes and the contamination of soils and water with metals are considered a major environmental problem in mining areas. AMD is produced by interactions of water, air, and sulphidic mine wastes. This environmental problem results from a series of chemical and biochemical oxidation reactions of sulfide minerals, e.g. pyrite and pyrrhotite. These reactions lead to acidity as well as the dissolution of toxic and heavy metals (Fe, Mn, Cu, etc.) from tailings, waste rock piles, and open pits. Soil and aquatic ecosystems can be contaminated and, consequently, human health and wildlife will be affected. Furthermore, secondary minerals, typically formed during weathering of mine waste storage areas when the concentration of soluble constituents exceeds the corresponding solubility product, are also important. The most common secondary mineral compositions are hydrous iron oxides (goethite, etc.) and hydrated iron sulfates (jarosite, etc.). The objectives of this study focus on the detection and mapping of mine-iron oxide pollution (MIOP) in the soil using Hyperion EO-1 (Earth Observing-1) hyperspectral data and the constrained linear spectral mixture analysis (CLSMA) algorithm. The abandoned Kettara mine, located approximately 35 km northwest of Marrakech city (Morocco), was chosen as the study area. For 44 years (from 1938 to 1981) this mine was exploited for iron oxide and iron sulphide minerals. Previous studies have shown that the soils surrounding Kettara are contaminated by heavy metals (Fe, Cu, etc.) as well as by secondary minerals. To achieve our objectives, several soil samples representing different MIOP classes were collected and located using accurate GPS (≤ ±30 cm). Then, endmember spectra were acquired over each sample using an Analytical Spectral Device (ASD) covering the spectral domain from 350 to 2500 nm.
Considering each soil sample separately, the average of forty spectra was resampled and convolved using Gaussian response profiles to match the bandwidths and band centers of the Hyperion sensor. Moreover, the MIOP content in each sample was estimated by geochemical analyses in the laboratory, and a ground truth map was generated using simple kriging in a GIS environment for validation purposes. The Hyperion data were corrected for the spatial shift between the VNIR and SWIR detectors, striping, dead columns, noise, and gain and offset errors. They were then atmospherically corrected using the MODTRAN 4.2 radiative transfer code and transformed to surface reflectance, corrected for sensor smile (1-3 nm shift in VNIR and SWIR), and post-processed to remove residual errors. Finally, geometric distortions and relief displacement effects were corrected using a digital elevation model. The MIOP fraction map was extracted using CLSMA considering the entire spectral range (427-2355 nm) and validated by reference to the ground truth map generated by kriging. The obtained results show the promising potential of the proposed methodology for the detection and mapping of mine iron oxide pollution in the soil.
Keywords: Hyperion EO-1, hyperspectral, mine iron oxide pollution, environmental impact, unmixing
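The Gaussian band-response resampling step mentioned above can be sketched as follows. The band centers and FWHMs here are caller-supplied stand-ins, not Hyperion's actual band table, and the ramp spectrum is synthetic.

```python
import numpy as np

def resample_to_sensor(wl, refl, centers, fwhms):
    """Convolve a fine lab spectrum with Gaussian band-response profiles.

    Simulates broader sensor bands from a 1 nm ASD spectrum; centers and
    FWHMs are illustrative inputs, not the real sensor's table.
    """
    out = []
    for c, fwhm in zip(centers, fwhms):
        sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # FWHM -> sigma
        w = np.exp(-0.5 * ((wl - c) / sigma) ** 2)          # band response
        out.append(np.sum(w * refl) / np.sum(w))            # weighted mean
    return np.array(out)

wl = np.arange(350.0, 2501.0)          # ASD range, 1 nm sampling
refl = wl / 2500.0                     # a simple ramp spectrum for the demo
bands = resample_to_sensor(wl, refl, centers=[1000.0, 2000.0], fwhms=[10.0, 10.0])
```

For a symmetric Gaussian fully inside the spectral range, the resampled value of a linear ramp equals the ramp value at the band center, which makes the sketch easy to check.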
Procedia PDF Downloads 228
14456 The Marker Active Compound Identification of Calotropis gigantea Roots Extract as an Anticancer
Authors: Roihatul Mutiah, Sukardiman, Aty Widyawaruyanti
Abstract:
Calotropis gigantea (L.) R. Br. (Apocynaceae), commonly called "Biduri" or "giant milkweed", is a weed well known to many cultures for treating various disorders. Several studies have reported that C. gigantea roots have anticancer activity. The main aim of this research was to isolate and identify an active marker compound of C. gigantea roots for quality control purposes as its extract is developed as an anticancer natural product. The isolation methods were bioactivity-guided column chromatography, TLC, and HPLC. The anticancer activity of the isolated substances was evaluated using the MTT assay. The structure of the active compound was identified by UV, 1H-NMR, 13C-NMR, HMBC, and HMQC spectra and by comparison with reference data. The results showed that the active marker compound was identified as calotropin.
Keywords: calotropin, Calotropis gigantea, anticancer, active marker
Procedia PDF Downloads 334
14455 Generalized Approach to Linear Data Transformation
Authors: Abhijith Asok
Abstract:
This paper presents a generalized approach to the simple linear data transformation, Y = bX, through an integration of multidimensional coordinate geometry, vector space theory, and polygonal geometry. The scaling is performed by adding an additional 'Dummy Dimension' to the n-dimensional data, which helps plot two-dimensional component-wise straight lines on pairs of dimensions. The end result is a set of scaled extensions of observations in any of the 2n spatial divisions, where n is the total number of applicable dimensions/dataset variables, created by shifting the n-dimensional plane along the 'Dummy Axis'. The derived scaling factor was found to depend on the coordinates of the common point of origin of the diverging straight lines and on the plane of extension, chosen on and perpendicular to the 'Dummy Axis', respectively. This result indicates the geometrical interpretation of a linear data transformation and, hence, opportunities for a more informed choice of the factor b, based on a better choice of these coordinate values. The paper goes on to identify the effect of this transformation on certain popular distance metrics, wherein for many of them the distance metric retained the same scaling factor as that of the features.
Keywords: data transformation, dummy dimension, linear transformation, scaling
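The closing claim, that many distance metrics rescale by the same factor b under Y = bX, is easy to check numerically; a minimal sketch with made-up data:

```python
import numpy as np

b = 2.5
X = np.array([[1.0, 4.0, 2.0],
              [3.0, 0.0, 5.0]])     # two observations in n = 3 dimensions
Y = b * X                           # the transformation Y = bX

# Euclidean (L2) and Manhattan (L1) distances both rescale by exactly b.
d2_X = np.linalg.norm(X[0] - X[1])
d2_Y = np.linalg.norm(Y[0] - Y[1])
d1_X = np.abs(X[0] - X[1]).sum()
d1_Y = np.abs(Y[0] - Y[1]).sum()
```

This holds for any absolutely homogeneous metric (all Minkowski distances included), which is the family the paper's observation covers.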
Procedia PDF Downloads 297
14454 Partial M-Sequence Code Families Applied in Spectral Amplitude Coding Fiber-Optic Code-Division Multiple-Access Networks
Authors: Shin-Pin Tseng
Abstract:
Nowadays, numerous spectral amplitude coding (SAC) fiber-optic code-division multiple-access (FO-CDMA) techniques are appealing because they can provide moderate security and relieve the effects of multiuser interference (MUI). Nonetheless, the performance of previous networks is degraded due to a fixed in-phase cross-correlation (IPCC) value. Motivated by these problems, a new SAC FO-CDMA network using partial M-sequence (PMS) codes is presented in this study. Because the proposed PMS code originates from the M-sequence code, a system using the PMS code can effectively suppress the effects of MUI. In addition, a two-code keying (TCK) scheme can be applied in the proposed SAC FO-CDMA network to enhance the whole network's performance. For system flexibility, simple optical encoders/decoders (codecs) using fiber Bragg gratings (FBGs) were also developed. First, we constructed a diagram of the SAC FO-CDMA network, including (N/2-1) optical transmitters, (N/2-1) optical receivers, and one N×N star coupler for broadcasting the transmitted optical signals to the input port of each optical receiver; the parameter N for the PMS code is the code length. In addition, the proposed SAC network uses superluminescent diodes (SLDs) as light sources, which saves considerable system cost compared with other FO-CDMA methods. Each optical transmitter is composed of an SLD, one optical switch, and two optical encoders according to the assigned PMS codewords. On the other hand, each optical receiver includes a 1 × 2 splitter, two optical decoders, and one balanced photodiode for mitigating the effect of MUI. To simplify the analysis, some assumptions were made. First, the unpolarized SLD has a flat power spectral density (PSD). Second, the received optical power at the input port of each optical receiver is the same.
Third, all photodiodes in the proposed network have the same electrical properties. Fourth, transmitting '1' and '0' has equal probability. Subsequently, taking into account phase-induced intensity noise (PIIN) and thermal noise, the corresponding performance was evaluated and compared with that of previous SAC FO-CDMA networks. The numerical results show that the proposed network performs about 25% better than those using other codes at a BER of 10^-9. This is because the effect of PIIN is effectively mitigated and the received power is doubled. As a result, the SAC FO-CDMA network using PMS codes is a promising candidate for next-generation optical networks.
Keywords: spectral amplitude coding, SAC, fiber-optic code-division multiple-access, FO-CDMA, partial M-sequence, PMS code, fiber Bragg grating, FBG
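The fixed in-phase cross-correlation property of M-sequences that the PMS construction builds on can be demonstrated with a short LFSR sketch. This uses a length-7 sequence for illustration, not the paper's actual code family.

```python
import numpy as np

def m_sequence(taps):
    """Maximal-length binary sequence from a linear-feedback shift register.

    `taps` are 1-indexed register stages XORed for feedback; (3, 2)
    corresponds to x^3 + x^2 + 1 and yields length 2**3 - 1 = 7.
    """
    m = max(taps)
    state = [1] * m                       # any nonzero seed works
    seq = []
    for _ in range(2 ** m - 1):
        seq.append(state[-1])             # output the last stage
        fb = 0
        for t in taps:
            fb ^= state[t - 1]            # XOR the tapped stages
        state = [fb] + state[:-1]         # shift in the feedback bit
    return np.array(seq)

code = m_sequence((3, 2))
# Any nonzero cyclic shift of an m-sequence shares a fixed number of '1'
# positions with the original -- the fixed in-phase cross-correlation
# that SAC detection relies on (here, 2 for every shift of a length-7 code).
overlaps = [int(np.sum(code & np.roll(code, s))) for s in range(1, 7)]
```

Shifted versions of one m-sequence thus form a code family with constant pairwise overlap, which is what lets a balanced receiver cancel the multiuser interference term.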
Procedia PDF Downloads 384
14453 Efficient Tuning Parameter Selection by Cross-Validated Score in High Dimensional Models
Authors: Yoonsuh Jung
Abstract:
As DNA microarray data contain relatively few samples compared to the number of genes, high dimensional models are often employed. In high dimensional models, the selection of the tuning parameter (or penalty parameter) is often one of the crucial parts of the modeling. Cross-validation is one of the most common methods for tuning parameter selection: it selects the parameter value with the smallest cross-validated score. However, selecting a single value as the "optimal" value for the parameter can be very unstable due to sampling variation, since the sample sizes of microarray data are often small. Our approach is to first choose multiple candidates for the tuning parameter, then average the candidates with different weights depending on their performance. The additional step of estimating the weights and averaging the candidates rarely increases the computational cost, while it can considerably improve on traditional cross-validation. We show that the value selected by the suggested methods often leads to stable parameter selection as well as improved detection of significant genetic variables compared to traditional cross-validation, via real and simulated data sets.
Keywords: cross validation, parameter averaging, parameter selection, regularization parameter search
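A minimal sketch of the candidate-averaging idea, using ridge regression on synthetic data. The exponential weighting rule below is one illustrative choice, not the paper's estimator, and all data are simulated.

```python
import numpy as np

def cv_errors(X, y, lambdas, k=5):
    """K-fold cross-validated MSE of ridge regression for each candidate lambda."""
    n = len(y)
    folds = np.array_split(np.arange(n), k)
    errs = np.zeros(len(lambdas))
    for j, lam in enumerate(lambdas):
        for test_idx in folds:
            train = np.setdiff1d(np.arange(n), test_idx)
            A = X[train].T @ X[train] + lam * np.eye(X.shape[1])
            beta = np.linalg.solve(A, X[train].T @ y[train])
            errs[j] += np.mean((y[test_idx] - X[test_idx] @ beta) ** 2)
    return errs / k

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 10))         # small-n, moderate-p toy data
beta_true = np.zeros(10)
beta_true[:3] = 2.0                       # three truly active variables
y = X @ beta_true + rng.standard_normal(60)

lambdas = np.logspace(-3, 3, 20)
errs = cv_errors(X, y, lambdas)
# Rather than keeping only argmin(errs), average several candidates with
# weights that decay with CV error (one simple, illustrative weighting).
w = np.exp(-errs / errs.min())
w /= w.sum()
lam_avg = float(np.sum(w * lambdas))
```

The averaged value changes far less across resamples than the single argmin, which is the stability gain the abstract describes.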
Procedia PDF Downloads 415
14452 Comprehensive Study of X-Ray Emission by APF Plasma Focus Device
Authors: M. Habibi
Abstract:
Time-resolved studies of soft and hard X-rays were carried out over a wide range of argon pressures by simultaneously employing an array of eight filtered PIN photodiodes and a scintillation detector. In 50% of the discharges, the soft X-rays are seen to be emitted in short multiple pulses corresponding to different compressions, whereas there is a single pulse for hard X-rays corresponding only to the first strong compression. It should be noted that multiple compressions occur predominantly at low pressures, while high pressures mostly produce a single-compression regime. In 43% of the discharges, at all pressures except the optimum pressure, the first period is characterized by two or more sharp peaks. The X-ray signal intensity during the second and subsequent compressions is much smaller than during the first compression.
Keywords: plasma focus device, SXR, HXR, PIN diode, argon plasma
Procedia PDF Downloads 408
14451 The Autonomy Use of Preparatory School Students to Learn English Language
Authors: Mihriban Müge Aras
Abstract:
The present study aims to investigate the learner autonomy of prep school students. This research focuses on prep school students' autonomy habits in relation to their self-regulated studies, age, and duration of learning English. The research also analyzes whether prep school students have strong autonomy in learning the English language or depend only on teachers and English classes. The participants of the study were 32 prep school students. A Likert-type questionnaire was adapted by the researcher from the survey of Dede (2017). The scale was a one-dimensional 4-point Likert type with the options 1 = never, 2 = sometimes, 3 = often, and 4 = always. There are 19 questions in the questionnaire to gauge the autonomy of students when they try to learn English. Descriptive statistics and one-way ANOVA were used to analyze the data. The results of the study showed that there is no significant correlation between students' age or duration of learning English and their autonomous study of English.
Keywords: learner autonomy, self-regulated learning, independent learning, English language learning, prep school students
Procedia PDF Downloads 242
14450 Mapping of Siltations of AlKhod Dam, Muscat, Sultanate of Oman Using Low-Cost Multispectral Satellite Data
Authors: Sankaran Rajendran
Abstract:
Remote sensing plays a vital role in the mapping of resources and the monitoring of environments of the earth. In the present research study, mapping and monitoring of the clay siltation that has occurred in the Alkhod Dam of Muscat, Sultanate of Oman, are carried out using low-cost multispectral Landsat and ASTER data. The dam is constructed across the Wadi Samail catchment for groundwater recharge. The occurrence and spatial distribution of siltation in the dam are studied at five-year intervals from the dam's construction in 1987 to 2014. The deposits are mainly due to the clay, sand, and silt derived from the weathering rocks of the ophiolite sequences of the Wadi Samail catchment. The occurrences of clays are confirmed by mineral identification using ASTER VNIR-SWIR spectral bands and the Spectral Angle Mapper supervised image processing method. The presence of clays and their spatial distribution are verified in the field. The study recommends the technique and the low-cost satellite data for similar regions of the world.
Keywords: Alkhod Dam, ASTER, siltation, Landsat, remote sensing, Oman
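The Spectral Angle Mapper classification used above reduces to comparing angles between spectra; a minimal sketch with made-up 4-band endmember spectra (not real library spectra):

```python
import numpy as np

def spectral_angle(pixel, ref):
    """Angle (radians) between a pixel spectrum and a reference spectrum."""
    cos = np.dot(pixel, ref) / (np.linalg.norm(pixel) * np.linalg.norm(ref))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

# Illustrative 4-band spectra (invented for the demo, not real endmembers).
endmembers = {
    "clay": np.array([0.30, 0.45, 0.50, 0.35]),
    "quartz": np.array([0.60, 0.62, 0.61, 0.63]),
}
pixel = np.array([0.29, 0.44, 0.52, 0.34])   # unknown pixel spectrum

angles = {name: spectral_angle(pixel, ref) for name, ref in endmembers.items()}
label = min(angles, key=angles.get)          # smallest angle wins
```

Because the angle ignores vector magnitude, SAM is relatively insensitive to illumination differences, which is why it suits multi-date mapping of the same target.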
Procedia PDF Downloads 437
14449 Investigating the Effects of Data Transformations on a Bi-Dimensional Chi-Square Test
Authors: Alexandru George Vaduva, Adriana Vlad, Bogdan Badea
Abstract:
In this research, we conduct a Monte Carlo analysis of a two-dimensional χ2 test, which is used to determine the minimum distance required for independent sampling in the context of chaotic signals. We investigate the impact on the χ2 test of transforming initial data sets from any probability distribution to new signals with a uniform distribution using the Spearman rank correlation. This transformation removes the randomness of the data pairs, and as a result, the observed distribution of χ2 test values differs from the expected distribution. We propose a solution to this problem and evaluate it using another chaotic signal.
Keywords: chaotic signals, logistic map, Pearson's test, chi-square test, bivariate distribution, statistical independence
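The rank-based uniformizing transform and the two-dimensional χ2 statistic can be sketched as follows. This is a generic implementation under stated simplifying assumptions (no ties, fixed 4×4 binning), not the authors' exact test.

```python
import numpy as np

def rank_uniform(x):
    """Map samples to (0, 1) by normalized ranks (assumes no ties)."""
    ranks = np.argsort(np.argsort(x)) + 1     # rank of each sample, 1..n
    return ranks / (len(x) + 1)

def chi2_stat_2d(u, v, bins=4):
    """Pearson chi-square statistic on a bins x bins table of (u, v) pairs."""
    counts, _, _ = np.histogram2d(u, v, bins=bins, range=[[0, 1], [0, 1]])
    row = counts.sum(axis=1, keepdims=True)
    col = counts.sum(axis=0, keepdims=True)
    expected = row * col / counts.sum()       # independence-based expectation
    return float(((counts - expected) ** 2 / expected).sum())

rng = np.random.default_rng(1)
x = rng.exponential(size=400)        # any continuous starting distribution
y = rng.exponential(size=400)        # an independent second signal
u, v = rank_uniform(x), rank_uniform(y)
stat = chi2_stat_2d(u, v)            # compare against chi2 with (4-1)^2 = 9 df
```

Note that after the rank transform the marginal bin counts are exactly equal by construction; this loss of marginal randomness is precisely the distortion of the χ2 null distribution the abstract discusses.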
Procedia PDF Downloads 97
14448 Cooling of Exhaust Gases Emitted Into the Atmosphere as the Possibility to Reduce the Helicopter Radiation Emission Level
Authors: Mateusz Paszko, Mirosław Wendeker, Adam Majczak
Abstract:
Every material body whose temperature is above 0 K (absolute zero) emits infrared radiation to its surroundings. Infrared radiation is highly significant in military aviation, especially in military applications of helicopters. Helicopters, in comparison to other aircraft, have much lower flight speeds and maneuverability, which makes them easy targets for current combat assets like infrared-guided missiles. When designing new helicopter types, especially for combat applications, it is essential to pay close attention to the infrared emissions of the solid parts composing the helicopter’s structure, as well as to the exhaust gases leaving the engine’s exhaust system. Due to their high temperature, exhaust gases released to the surroundings are a major factor in infrared radiation emission and, in consequence, in the detectability of a helicopter performing air combat operations. Protection of the helicopter in flight from early detection, tracking and, finally, destruction can be realized in many ways. This paper presents an analysis of the possibilities to decrease the level of infrared radiation emitted to the environment by a helicopter in flight by cooling the exhaust in special ejection-based coolers. The paper also presents a concept 3D model and the results of a numerical analysis of the ejection-based cooler operating with the PA-10W turbine engine. The numerical analysis showed promising results in decreasing the infrared emission level of the PA W-3 helicopter in flight.Keywords: exhaust cooler, helicopter propulsion, infrared radiation, stealth
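The physical effect the paper exploits, that a cooler exhaust plume radiates far less in-band infrared power, follows directly from Planck's law. The sketch below is our illustration, not the authors' model: it integrates blackbody spectral radiance over the 3–5 µm mid-wave infrared window, and the two exhaust temperatures are assumed values chosen only to show the trend.

```python
import numpy as np

# Physical constants (SI units)
H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temp_k):
    """Blackbody spectral radiance B(lambda, T) in W / (m^2 sr m)."""
    a = 2.0 * H * C**2 / wavelength_m**5
    b = H * C / (wavelength_m * KB * temp_k)
    return a / np.expm1(b)

def band_radiance(temp_k, lo_um=3.0, hi_um=5.0, n=2000):
    """In-band radiance over [lo, hi] micrometres, trapezoidal integration."""
    lam = np.linspace(lo_um, hi_um, n) * 1e-6
    vals = planck_radiance(lam, temp_k)
    dlam = lam[1] - lam[0]
    return float(np.sum(0.5 * (vals[:-1] + vals[1:])) * dlam)

hot = band_radiance(900.0)     # uncooled exhaust, assumed ~900 K
cooled = band_radiance(600.0)  # exhaust after ejector cooling, assumed ~600 K
```

Even this crude estimate shows a several-fold drop in mid-wave radiance for a few hundred kelvin of cooling, which is why exhaust coolers are attractive for signature reduction.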
Procedia PDF Downloads 34714447 Speech Identification Test for Individuals with High-Frequency Sloping Hearing Loss in Telugu
Authors: S. B. Rathna Kumar, Sandya K. Varudhini, Aparna Ravichandran
Abstract:
Telugu is a south-central Dravidian language spoken in Andhra Pradesh, a southern state of India. The available speech identification tests in Telugu have been developed to determine the communication problems of individuals with a flat-frequency hearing loss. These conventional speech audiometric tests provide redundant information when used on individuals with high-frequency sloping hearing loss because of their better hearing sensitivity in the low- and mid-frequency regions. Hence, conventional speech identification tests do not indicate the true nature of the communication problem of individuals with high-frequency sloping hearing loss. It is highly possible that a person with a high-frequency sloping hearing loss may obtain maximum scores if conventional speech identification tests are used. Hence, there is a need to develop speech identification test materials that are specifically designed to assess the speech identification performance of individuals with high-frequency sloping hearing loss. The present study aimed to develop a speech identification test for individuals with high-frequency sloping hearing loss in Telugu. Individuals with high-frequency sloping hearing loss have difficulty in the perception of voiceless consonants whose spectral energy is above 1000 Hz. Hence, word lists constructed with phonemes having mid- and high-frequency spectral energy will better estimate the speech identification performance of such individuals. The phonemes /k/, /g/, /c/, /ṭ/, /t/, /p/, /s/, /ś/, /ṣ/ and /h/ are preferred for the construction of words, as these phonemes have their spectral energy distributed predominantly in frequencies above 1000 Hz. The present study developed two word lists in Telugu (each containing 25 words) for evaluating the speech identification performance of individuals with high-frequency sloping hearing loss.
The performance of individuals with high-frequency sloping hearing loss was evaluated using both the conventional and the high-frequency word lists under a recorded voice condition. The results revealed that the developed word lists were more sensitive in identifying the true nature of the communication problem of individuals with high-frequency sloping hearing loss.Keywords: speech identification test, high-frequency sloping hearing loss, recorded voice condition, Telugu
Procedia PDF Downloads 41914446 ROSgeoregistration: Aerial Multi-Spectral Image Simulator for the Robot Operating System
Authors: Andrew R. Willis, Kevin Brink, Kathleen Dipple
Abstract:
This article describes a software package called ROSgeoregistration intended for use with the robot operating system (ROS) and the Gazebo 3D simulation environment. ROSgeoregistration provides tools for the simulation, test, and deployment of aerial georegistration algorithms and is available at github.com/uncc-visionlab/rosgeoregistration. A model creation package is provided which downloads multi-spectral images from the Google Earth Engine database and, if necessary, incorporates these images into a single, possibly very large, reference image. Additionally, a Gazebo plugin which uses the real-time sensor pose and image formation model to generate simulated imagery from the specified reference image is provided, along with related plugins for UAV-relevant data. The novelty of this work is threefold: (1) this is the first system to link the massive multi-spectral imaging database of Google’s Earth Engine to the Gazebo simulator; (2) this is the first example of a system that can simulate geospatially and radiometrically accurate imagery from multiple sensor views of the same terrain region; and (3) integration with other UAS tools creates a new holistic UAS simulation environment to support UAS system and subsystem development where real-world testing would generally be prohibitive. Sensed imagery and ground truth registration information are published to client applications, which can receive imagery synchronously with telemetry from other payload sensors, e.g., IMU, GPS/GNSS, barometer, and windspeed sensor data. To highlight its functionality, we demonstrate ROSgeoregistration for simulating Electro-Optical (EO) and Synthetic Aperture Radar (SAR) image sensors and an example use case for developing and evaluating image-based UAS position feedback, i.e., pose estimation for image-based Guidance Navigation and Control (GNC) applications.Keywords: EO-to-EO, EO-to-SAR, flight simulation, georegistration, image generation, robot operating system, vision-based navigation
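The geometric core of the simulated-imagery pipeline, mapping a sensor pose to a location in a georeferenced reference image, can be sketched independently of the package. The following is a simplified stand-in, not ROSgeoregistration's actual API: it assumes a nadir-looking pinhole camera over flat ground and a north-up affine geotransform, and all parameter values are illustrative.

```python
import numpy as np

def ground_point(cam_xyz, f_px, cx, cy, u, v):
    """Intersect the ray through pixel (u, v) of a nadir-looking pinhole
    camera at cam_xyz = (x, y, altitude) with the flat ground plane z = 0."""
    x, y, h = cam_xyz
    # Ray direction scaled by altitude (camera optical axis points straight down).
    dx = (u - cx) / f_px
    dy = (v - cy) / f_px
    return np.array([x + dx * h, y + dy * h])

def world_to_ref_pixel(pt_xy, origin_xy, metres_per_px):
    """Map a world coordinate to (col, row) in the reference image, assuming
    a north-up affine geotransform with the origin at the top-left corner."""
    col = (pt_xy[0] - origin_xy[0]) / metres_per_px
    row = (origin_xy[1] - pt_xy[1]) / metres_per_px
    return col, row

# UAV at 100 m altitude directly above world coordinate (500, 500); the
# principal-point pixel should georegister to the point below the camera.
g = ground_point((500.0, 500.0, 100.0), f_px=800.0, cx=320.0, cy=240.0,
                 u=320.0, v=240.0)
col, row = world_to_ref_pixel(g, origin_xy=(0.0, 1000.0), metres_per_px=1.0)
```

Cropping the reference image around (col, row) per simulated frame is the basic operation a pose-driven image-generation plugin of this kind performs.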
Procedia PDF Downloads 10314445 How to Reach Net Zero Emissions? On the Permissibility of Negative Emission Technologies and the Danger of Moral Hazards
Authors: Hanna Schübel, Ivo Wallimann-Helmer
Abstract:
In order to reach the goal of the Paris Agreement not to overshoot 1.5 °C of warming above pre-industrial levels, various countries, including the UK and Switzerland, have committed themselves to net zero emissions by 2050. The employment of negative emission technologies (NETs) is very likely to be necessary for meeting these national objectives as well as other internationally agreed climate targets. NETs are methods of removing carbon from the atmosphere and are thus a means of addressing climate change. They range from afforestation to technological measures such as direct air capture and carbon storage (DACCS), where CO2 is captured from the air and stored underground. Like all so-called geoengineering technologies, the development and deployment of NETs are often subject to moral hazard arguments. As these technologies could be perceived as an alternative to mitigation efforts, so the argument goes, they are potentially a dangerous distraction from the main target of mitigating emissions. We think that this is a dangerous argument to make, as it may hinder the development of NETs, which are an essential element of net zero emission targets. In this paper we argue that the moral hazard argument is only problematic if we do not reflect upon which levels of emissions are at stake in meeting net zero emissions. In response to the moral hazard argument, we develop an account of which levels of emissions in given societies should be mitigated rather than targeted by NETs, and which levels of emissions can legitimately be a target of NETs. For this purpose, we define four different levels of emissions: the current level of individual emissions, the level individuals emit in order to appear in public without shame, the level of a fair share of individual emissions in the global budget, and finally the baseline of net zero emissions.
At each level of emissions, different subjects are assigned responsibilities if societies and/or individuals are committed to the target of net zero emissions. We argue that emissions within one’s fair share do not demand individual mitigation efforts. The same holds with regard to individuals and the baseline level of emissions necessary to appear in public in their societies without shame. Individuals are only under a duty to reduce their emissions if they exceed this baseline level. This is different for whole societies. Societies in which appearing in public without shame demands more emissions than the individual fair share are under a duty to foster emission reductions and may not legitimately achieve them by introducing NETs. NETs are legitimate for reducing emissions only below the level of fair shares and for reaching net zero emissions. Since access to NETs to achieve net zero emissions demands technology not affordable to individuals, there are also no full individual responsibilities to achieve net zero emissions. This is mainly a responsibility of societies as a whole.Keywords: climate change, mitigation, moral hazard, negative emission technologies, responsibility
Procedia PDF Downloads 11814444 Concealed Objects Detection in Visible, Infrared and Terahertz Ranges
Authors: M. Kowalski, M. Kastek, M. Szustakowski
Abstract:
Multispectral screening systems are becoming more popular because of their very interesting properties and applications. One of the most significant applications of multispectral screening systems is the prevention of terrorist attacks. There are many kinds of threats and many methods of detection. Visual detection of objects hidden under a person’s clothing is one of the most challenging problems of threat detection. There are various solutions to the problem; however, the most effective ones utilize multispectral surveillance imagers. The development of imaging devices and the exploration of new spectral bands are a chance to introduce new equipment for assuring public safety. We investigate the possibility of long-lasting detection of potentially dangerous objects covered with various types of clothing. In the article, we present the results of comparative studies of passive imaging in three spectral ranges: visible, infrared and terahertz.Keywords: terahertz, infrared, object detection, screening camera, image processing
Procedia PDF Downloads 35714443 Biogeography Based CO2 and Cost Optimization of RC Cantilever Retaining Walls
Authors: Ibrahim Aydogdu, Alper Akin
Abstract:
In this study, minimization of the cost and the CO2 emission of RC retaining wall designs has been performed with the Biogeography-Based Optimization (BBO) algorithm. This has been achieved by developing computer programs utilizing the BBO algorithm to minimize the cost and the CO2 emission of RC retaining walls. The objective functions of the optimization problem are defined as the minimized cost, the CO2 emission, and a weighted aggregate of the cost and CO2 functions of the RC retaining walls. In the formulation of the optimum design problem, the height and thickness of the stem, the length of the toe projection, the thickness of the stem at base level, the length and thickness of the base, the depth and thickness of the key, the distance from the toe to the key, and the number and diameter of the reinforcement bars are treated as design variables. In the formulation of the optimization problem, flexural and shear strength constraints and minimum/maximum limitations for the reinforcement bar areas are derived from the American Concrete Institute (ACI 318-14) design code. Moreover, the development length conditions for suitable detailing of reinforcement are treated as a constraint. The obtained optimum designs must satisfy the factors of safety against the failure modes (overturning, sliding and bearing) as well as strength, serviceability and other required limitations to attain practically acceptable shapes. To demonstrate the efficiency and robustness of the presented BBO algorithm, an optimum design example for retaining walls is presented, and the results are compared to results previously available in the literature.Keywords: biogeography, meta-heuristic search, optimization, retaining wall
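The BBO loop the abstract relies on can be sketched in a few lines. This is a generic illustration, not the authors' program: it optimizes a toy sphere function rather than the retaining-wall cost/CO2 model (the wall geometry, ACI 318-14 constraints, and emission coefficients are not reproduced), and the population size, migration rates, and mutation rate are assumed values.

```python
import numpy as np

def bbo_minimize(cost, bounds, pop=20, gens=100, p_mut=0.02, seed=0):
    """Biogeography-Based Optimization: habitats exchange solution features
    (SIVs) via rank-based immigration/emigration rates, plus random mutation."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    dim = lo.size
    x = rng.uniform(lo, hi, size=(pop, dim))
    f = np.array([cost(h) for h in x])
    for _ in range(gens):
        order = np.argsort(f)               # rank habitats, best first
        x, f = x[order], f[order]
        ranks = np.arange(pop)
        lam = (ranks + 1) / pop             # immigration: high for poor habitats
        mu = 1.0 - ranks / pop              # emigration: high for good habitats
        p_mu = mu / mu.sum()                # roulette-wheel emigration weights
        new_x = x.copy()
        for i in range(1, pop):             # elitism: best habitat kept unchanged
            for d in range(dim):
                if rng.random() < lam[i]:
                    j = rng.choice(pop, p=p_mu)   # pick an emigrating habitat
                    new_x[i, d] = x[j, d]
                if rng.random() < p_mut:
                    new_x[i, d] = rng.uniform(lo[d], hi[d])
        x = new_x
        f = np.array([cost(h) for h in x])
    best = int(np.argmin(f))
    return x[best], float(f[best])

# Toy stand-in cost: a shifted sphere function (NOT the wall cost/CO2 model).
best_x, best_f = bbo_minimize(lambda v: float(np.sum((v - 1.0) ** 2)),
                              bounds=([-5.0] * 5, [5.0] * 5))
```

In the paper's setting, the habitat vector would hold the ten wall design variables, and the cost function would add penalty terms for violated strength, serviceability, and stability constraints.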
Procedia PDF Downloads 39814442 Hydraulics of 3D Aerators with Lateral Enlargements
Authors: Nirmala Lama
Abstract:
The construction of high dams has led to significant challenges in managing flow rates discharged over spillways, resulting in cavitation damage on hydraulic surfaces. To address this, aerator devices are designed and installed to promote forced aeration, thereby controlling and mitigating the damage caused by cavitation. Among such devices, three-dimensional aerators (3DAEs) have demonstrated superior efficiency in introducing forced air into the flow. This research focuses on the installation and evaluation of three-dimensional aerator devices on a high-discharge spillway surface. In the laboratory, the air concentration downstream of the hydraulic structures was extensively measured, and the data were analyzed in detail. Multiple flow scenarios and structural arrangements of the aerators were adopted for the study. The outcomes of these experiments are as follows. In terms of air concentration, the comparison between a 3DAE with an offset only and one with an offset and ramp reveals significant differences; the concentration values on the side wall were also examined. The side cavity length was found to increase with higher approach Froude numbers and lateral enlargement widths. Furthermore, the 3DAE exhibited shorter side cavity lengths than three-dimensional aerator devices without ramps (3DADs), a beneficial feature for controlling water fins. An empirical formula for the side cavity length was derived from the measured data. Comparisons were also made on the basis of water fin formation between the different arrangements of 3D aerators. In conclusion, this research provides valuable insights into the performance of three-dimensional aerators in mitigating cavitation damage and controlling water fins in high dam spillways.
The findings offer practical implications for designers and engineers seeking to enhance the efficiency and safety of hydraulic structures subjected to high flow rates.Keywords: three-dimensional aerator, cavity, water fin, air entrainment
Procedia PDF Downloads 6814441 Spectroscopic and 1.08 µm Laser Properties of Nd3+ Doped Oxy-Fluoroborate Glasses
Authors: Swapna Koneru, Srinivasa Rao Allam, Vijaya Prakash Gaddem
Abstract:
Oxy-fluoroborate (OFB) glasses doped with different concentrations of neodymium (Nd) were prepared by the melt quenching method and characterized through optical absorption, emission and decay curve measurements to understand the lasing potential of these glasses. Optical absorption spectra were recorded and analyzed using Judd–Ofelt theory. The dipole strengths are parameterized in terms of three phenomenological Judd–Ofelt intensity parameters Ωλ (λ = 2, 4 and 6) to elucidate the glassy matrix around the Nd3+ ion, and the radiative properties of the 4F3/2 metastable state, such as the transition probability (AR), radiative lifetime (τR), branching ratios (βR) and integrated absorption cross-section (σa), have been determined for most of the fluorescent levels of Nd3+. The emission spectra recorded for these glasses upon 808 nm diode laser excitation exhibit two peaks in the near-infrared region, at 1085 and 1328 nm, corresponding to the 4F3/2 → 4I11/2 and 4F3/2 → 4I13/2 transitions. The emission intensity of the 4F3/2 → 4I11/2 transition increases with Nd3+ concentration up to 1 mol%, and concentration quenching is then observed at 2.0 mol% Nd3+. The lifetimes of the 4F3/2 level are found to decrease with increasing Nd2O3 concentration in the glasses due to concentration quenching. The decay curves of all these glasses show single-exponential behavior. The spectroscopy of Nd3+ in these glasses is well understood, and the laser properties can be accurately determined from the measured spectroscopic properties. The results obtained are compared with reports on similar glasses. The results indicate that the present glasses could be useful for 1.08 µm laser applications.Keywords: glasses, luminescence, optical properties, photoluminescence spectroscopy
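For reference, the Judd–Ofelt quantities named in the abstract are related as follows. These are the standard textbook forms, not equations reproduced from the paper itself: the electric-dipole line strength of a transition from state (ψ, J) to (ψ′, J′) is parameterized by the three intensity parameters, and the branching ratio and radiative lifetime follow from the radiative transition probabilities A.

```latex
% Electric-dipole line strength in terms of the intensity parameters
S_{ed} = e^{2}\sum_{\lambda=2,4,6}\Omega_{\lambda}
         \left|\left\langle \psi J \middle\| U^{(\lambda)} \middle\| \psi' J' \right\rangle\right|^{2}

% Branching ratio and radiative lifetime of the metastable level
\beta_{R} = \frac{A(\psi J,\psi' J')}{\sum_{\psi' J'} A(\psi J,\psi' J')},
\qquad
\tau_{R} = \frac{1}{\sum_{\psi' J'} A(\psi J,\psi' J')}
```

Fitting the measured dipole strengths of the absorption bands to the first expression is what yields the Ωλ values that the abstract uses to characterize the glassy host.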
Procedia PDF Downloads 28914440 Multi-Objective Four-Dimensional Traveling Salesman Problem in an IoT-Based Transport System
Authors: Arindam Roy, Madhushree Das, Apurba Manna, Samir Maity
Abstract:
In this research paper, an algorithmic approach is developed to solve a novel multi-objective four-dimensional traveling salesman problem (MO4DTSP), in which different paths with various numbers of conveyances are available for travel between two cities. NSGA-II and decomposition-based algorithms are modified to solve the MO4DTSP in an IoT-based transport system. Such an IoT-based transport system can be widely observed, analyzed, and controlled through an extensive distribution of traffic networks consisting of various types of sensors and actuators. Due to urbanization, most cities are connected using an intelligent traffic management system. Practically, for a traveler, multiple routes and vehicles are available to travel between any two cities. Thus, the classical TSP is reformulated as a multi-route and multi-vehicle problem, i.e., the 4DTSP. The proposed MO4DTSP is designed with traveling cost, time, and customer satisfaction as objectives. In reality, customer satisfaction is an important parameter that depends on travel cost and time, which is reflected in the present model.Keywords: multi-objective four-dimensional traveling salesman problem (MO4DTSP), decomposition, NSGA-II, IoT-based transport system, customer satisfaction
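To make the four-dimensional structure concrete, a candidate solution can be encoded as a city permutation plus a (path, conveyance) choice per leg, and evaluated against the three objectives. This is an illustrative sketch, not the authors' encoding: the cost, time, and satisfaction tables below are random hypothetical data, and the dominance check is the standard relation used by NSGA-II's non-dominated sorting.

```python
import numpy as np

# Hypothetical data: between each ordered city pair there are 2 paths x 2
# conveyances, each with its own cost, time, and satisfaction value.
rng = np.random.default_rng(1)
n_cities, n_paths, n_vehicles = 5, 2, 2
cost = rng.uniform(10, 50, (n_cities, n_cities, n_paths, n_vehicles))
time = rng.uniform(1, 8, (n_cities, n_cities, n_paths, n_vehicles))
satisf = rng.uniform(0, 1, (n_cities, n_cities, n_paths, n_vehicles))

def evaluate(tour, choices):
    """Objectives of a 4DTSP solution: (total cost, total time, -satisfaction).
    `tour` is a city permutation; `choices[k] = (path, vehicle)` for leg k."""
    c = t = s = 0.0
    for k in range(len(tour)):
        i, j = tour[k], tour[(k + 1) % len(tour)]
        p, v = choices[k]
        c += cost[i, j, p, v]
        t += time[i, j, p, v]
        s += satisf[i, j, p, v]
    return (c, t, -s)    # negate satisfaction so all objectives are minimized

def dominates(a, b):
    """Pareto dominance: a is no worse in every objective, better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

obj = evaluate([0, 1, 2, 3, 4], [(0, 0)] * 5)
```

NSGA-II or a decomposition method would then evolve a population of such (tour, choices) pairs, ranking them with the dominance relation to approximate the Pareto front.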
Procedia PDF Downloads 11014439 Spectral Analysis Approaches for Simultaneous Determination of Binary Mixtures with Overlapping Spectra: An Application on Pseudoephedrine Sulphate and Loratadine
Authors: Sara El-Hanboushy, Hayam Lotfy, Yasmin Fayez, Engy Shokry, Mohammed Abdelkawy
Abstract:
Simple, specific, accurate and precise spectrophotometric methods are developed and validated for the simultaneous determination of pseudoephedrine sulphate (PSE) and loratadine (LOR) in a combined dosage form based on spectral analysis techniques. Pseudoephedrine (PSE) in the binary mixture could be analyzed either by using its resolved zero-order absorption spectrum at its λmax of 256.8 nm after subtraction of the LOR spectrum, or in the presence of the LOR spectrum by the absorption correction method at 256.8 nm, the dual wavelength (DWL) method at 254 nm and 273 nm, the induced dual wavelength (IDWL) method at 256 nm and 272 nm, and the ratio difference (RD) method at 256 nm and 262 nm. Loratadine (LOR) in the mixture could be analyzed directly at 280 nm without any interference from the PSE spectrum, or at 250 nm using its recovered zero-order absorption spectrum obtained by constant multiplication (CM). In addition, simultaneous determination of PSE and LOR in their mixture could be achieved by the induced amplitude modulation (IAM) method coupled with amplitude multiplication (PM).Keywords: dual wavelength (DW), induced amplitude modulation method (IAM) coupled with amplitude multiplication (PM), loratadine, pseudoephedrine sulphate, ratio difference (RD)
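The dual wavelength principle used above can be demonstrated numerically. The sketch below is our illustration with synthetic Gaussian absorbance bands, not the real PSE/LOR spectra or the paper's wavelengths: the band centers, widths, and concentrations are assumed values chosen so the cancellation is easy to verify.

```python
import numpy as np

wl = np.linspace(220, 320, 1001)           # wavelength grid, nm

def band(center, width, eps=1.0):
    """Synthetic Gaussian absorbance band (stand-in for a real spectrum)."""
    return eps * np.exp(-((wl - center) / width) ** 2)

A_analyte = band(257, 8)       # hypothetical analyte band near 257 nm
A_interf = band(280, 15)       # hypothetical interferent band near 280 nm

# Dual wavelength method: pick two wavelengths where the INTERFERENT absorbs
# equally (here symmetric about its 280 nm peak), so the absorbance difference
# cancels the interferent and responds only to the analyte.
i1 = int(np.argmin(np.abs(wl - 270.0)))
i2 = int(np.argmin(np.abs(wl - 290.0)))

c_a, c_i = 0.7, 1.3                         # assumed concentrations
mixture = c_a * A_analyte + c_i * A_interf  # Beer-Lambert additivity

delta_mix = mixture[i1] - mixture[i2]
delta_unit = A_analyte[i1] - A_analyte[i2]  # per-unit-concentration response
c_est = delta_mix / delta_unit              # recovered analyte concentration
```

Because the interferent contributes equally at both wavelengths, the difference signal is linear in the analyte concentration alone, which is the basis of the DWL calibration.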
Procedia PDF Downloads 32114438 Verifying the Performance of the Argon-41 Monitoring System from Fluorine-18 Production for Medical Applications
Authors: Nicole Virgili, Romolo Remetti
Abstract:
The aim of this work is to characterize, from a radiation protection point of view, the emission of argon-41-contaminated air into the environment. In this research work, 41Ar is produced by a TR19PET cyclotron, operated at 19 MeV, installed at the 'A. Gemelli' University Hospital, Rome, Italy, for fluorine-18 production. The production rate of 41Ar has been calculated on the basis of the scheduled operation cycles of the cyclotron and by utilising proper production algorithms. Extensive Monte Carlo calculations, carried out with the MCNP code, have then allowed determination of the absolute detection efficiency for 41Ar gamma rays of a Geiger-Müller detector placed in the terminal part of the chimney. The results showed unsatisfactory detection efficiency values and the need to integrate the detection system with more efficient detectors.Keywords: cyclotron, Geiger-Müller detector, MCNPX, argon-41, emission of radioactive gas, detection efficiency determination
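The production-rate estimate described above rests on the standard activation buildup law. The sketch below is a hedged illustration, not the authors' calculation: the production rate and run length are placeholders, and the 41Ar half-life used here, roughly 109.6 minutes, is an approximate literature value rather than one taken from the paper.

```python
import math

T_HALF_MIN = 109.6                  # Ar-41 half-life in minutes (approximate)
LAMBDA = math.log(2) / T_HALF_MIN   # decay constant, 1/min

def activity(prod_rate_bq, t_min):
    """Buildup toward saturation: A(t) = R (1 - exp(-lambda t)), where R is
    the saturation activity (Bq) for a constant production rate."""
    return prod_rate_bq * (1.0 - math.exp(-LAMBDA * t_min))

def decayed(a0_bq, t_min):
    """Free decay of the accumulated activity after the cyclotron run ends."""
    return a0_bq * math.exp(-LAMBDA * t_min)

R = 1.0e6                              # placeholder saturation activity, Bq
a_run = activity(R, 120.0)             # activity after a 2 h production run
a_later = decayed(a_run, T_HALF_MIN)   # one half-life after shutdown
```

Scheduling such buildup/decay segments over the cyclotron's operating cycles gives the time profile of 41Ar activity in the released air, which the stack monitor must be efficient enough to detect.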
Procedia PDF Downloads 15114437 Methodology: A Review in Modelling and Predictability of Embankment in Soft Ground
Authors: Bhim Kumar Dahal
Abstract:
Transportation network development in developing countries is proceeding at a rapid pace. The majority of such networks belong to railways and expressways, which pass through diverse topography, landforms and geological conditions despite the avoidance principle applied during route selection. Construction of such networks demands many low to high embankments, which require improvement of the foundation soil. This paper is mainly focused on the various advanced ground improvement techniques used to improve soft soil, the modelling approaches and their predictability for embankment construction. The ground improvement techniques can be broadly classified into three groups, i.e. the densification group, the drainage and consolidation group, and the reinforcement group, which are discussed with some case studies. Various methods have been used in modelling the embankments, from simple one-dimensional to complex three-dimensional models, using a variety of constitutive models. However, the reliability of the predictions is not found to improve systematically with the level of sophistication, and sometimes the predictions deviate by more than 60% from the monitored values despite the same level of sophistication. This deviation is found to be mainly due to the selection of the constitutive model, assumptions made during different stages, deviations in the selection of model parameters, and simplifications during physical modelling of the ground conditions. The deviation can be reduced by using an optimization process, optimization tools and sensitivity analysis of the model parameters, which will guide the selection of appropriate model parameters.Keywords: cement, improvement, physical properties, strength
Procedia PDF Downloads 17414436 Effects of a Cooler on the Sampling Process in a Continuous Emission Monitoring System
Authors: J. W. Ahn, I. Y. Choi, T. V. Dinh, J. C. Kim
Abstract:
A cooler has been widely employed in the extractive system of the continuous emission monitoring system (CEMS) to remove water vapor from the gas stream. The effect of the cooler on analytical target gases was investigated in this research. A commercial cooler for the CEMS, operated at 4 °C, was used. Several gases emitted from a coal power plant (i.e. CO2, SO2, NO, NO2 and CO) were mixed with humid air and then introduced into the cooler to observe its effect. The concentrations of SO2, NO, NO2 and CO were set to 200 ppm, and the CO2 concentration was 8%. The inlet absolute humidity was produced as 12.5% at 100 °C using a bubbling method. It was found that the reduction rate of SO2 was the highest (~21%), followed by NO2 (~17%), CO2 (~11%) and CO (~10%). In contrast, NO gas was not affected by the cooler. The results indicated that the cooler had a significant effect on the water-soluble gases due to condensate water in the cooler. To overcome this problem, a correction factor may be applied. However, the water vapor content may differ, and the emitted concentrations of the target gases also vary. Therefore, a correction factor alone is not a sufficient solution, and a better method should be employed.Keywords: cooler, CEMS, monitoring, reproducibility, sampling
Procedia PDF Downloads 361