Search results for: fault detector
450 Hamiltonian Related Properties with and without Faults of the Dual-Cube Interconnection Network and Their Variations
Authors: Shih-Yan Chen, Shin-Shin Kao
Abstract:
In this paper, a thorough review of dual-cubes, DCn, the related studies and their variations is given. DCn was introduced as a network that retains the pleasing properties of the hypercube Qn but has a much smaller diameter. In fact, it is constructed so that the number of vertices of DCn is equal to the number of vertices of Q2n+1. However, each vertex in DCn is adjacent to n + 1 neighbors, and so DCn has (n + 1) × 2^2n edges in total, which is roughly half the number of edges of Q2n+1. In addition, the diameter of any DCn is 2n + 2, which is of the same order as that of Q2n+1. For self-completeness, basic definitions, construction rules and symbols are provided. We chronicle the results, where eleven significant theorems are presented, and include some open problems at the end.
Keywords: dual-cubes, dual-cube extensive networks, dual-cube-like networks, hypercubes, fault-tolerant hamiltonian property
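The vertex, edge and diameter counts quoted above follow directly from the construction rules; a minimal sketch (illustrative only, not from the paper) that reproduces them:

```python
def dual_cube_params(n):
    """Vertex count, vertex degree, edge count and diameter of DC_n,
    as quoted in the abstract (illustrative helper)."""
    vertices = 2 ** (2 * n + 1)      # same vertex count as Q_{2n+1}
    degree = n + 1                   # each vertex has n + 1 neighbors
    edges = degree * vertices // 2   # = (n + 1) * 2^(2n)
    diameter = 2 * n + 2
    return vertices, degree, edges, diameter

def hypercube_edges(m):
    """Edge count of the hypercube Q_m: m * 2^(m - 1)."""
    return m * 2 ** (m - 1)
```

For n = 3, DC3 has 128 vertices and 256 edges versus 448 edges for Q7, a bit over half, matching the "roughly half" claim in the abstract.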
Procedia PDF Downloads 466
449 Reconstruction of Signal in Plastic Scintillator of PET Using Tikhonov Regularization
Authors: L. Raczynski, P. Moskal, P. Kowalski, W. Wislicki, T. Bednarski, P. Bialas, E. Czerwinski, A. Gajos, L. Kaplon, A. Kochanowski, G. Korcyl, J. Kowal, T. Kozik, W. Krzemien, E. Kubicz, Sz. Niedzwiecki, M. Palka, Z. Rudy, O. Rundel, P. Salabura, N.G. Sharma, M. Silarski, A. Slomski, J. Smyrski, A. Strzelecki, A. Wieczorek, M. Zielinski, N. Zon
Abstract:
The J-PET scanner, which allows for single-bed imaging of the whole human body, is currently under development at the Jagiellonian University. The J-PET detector improves the TOF resolution due to the use of fast plastic scintillators. Since registration of the waveform of signals with duration times of a few nanoseconds is not feasible, novel front-end electronics allowing for sampling in the voltage domain at four thresholds was developed. To take full advantage of these fast signals, a novel scheme for recovery of the signal waveform, based on ideas from the Tikhonov regularization (TR) and Compressive Sensing methods, is presented. The prior distribution of the sparse representation is evaluated based on the linear transformation of a training set of signal waveforms using Principal Component Analysis (PCA) decomposition. Besides the advantage of including the additional information from training signals, a further benefit of the TR approach is that the problem of signal recovery has an optimal solution which can be determined explicitly. Moreover, from Bayesian theory, the properties of the regularized solution, especially its covariance matrix, may be easily derived. This step is crucial to introduce and prove the formula for calculating the signal recovery error. It has been proven that the average recovery error is approximately inversely proportional to the number of samples at voltage levels. The method is tested using signals registered by means of a single detection module of the J-PET detector built from a 30 cm long BC-420 plastic scintillator strip. It is demonstrated that the experimental and theoretical functions describing the recovery errors in the J-PET scenario are largely consistent. The specificity and limitations of the signal recovery method in this application are discussed.
It is shown that the PCA basis offers a high level of information compression and an accurate recovery with just eight samples, from four voltage levels, for each signal waveform. Moreover, it is demonstrated that using the recovered waveform of the signals, instead of the samples at four voltage levels alone, improves the spatial resolution of the hit position reconstruction. The experiment shows that the spatial resolution evaluated based on information from four voltage levels, without recovery of the signal waveform, is equal to 1.05 cm. After applying the information from the four voltage levels to the recovery of the signal waveform, the spatial resolution improves to 0.94 cm. Moreover, the obtained result is only slightly worse than the one evaluated using the original raw signal, for which the spatial resolution is 0.93 cm. This is an important result, since limiting the number of threshold levels in the electronic devices to four leads to a significant reduction of the overall cost of the scanner. The developed recovery scheme is general and may be incorporated in any other investigation where prior knowledge about the signals of interest may be utilized.
Keywords: plastic scintillators, positron emission tomography, statistical analysis, Tikhonov regularization
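The explicit optimal solution mentioned in the abstract is the standard ridge-type closed form; a minimal numpy sketch under the simplifying assumption of an identity prior covariance (the J-PET prior is PCA-derived) and a generic sampling matrix A:

```python
import numpy as np

def tikhonov_recover(A, y, lam):
    # Closed-form Tikhonov-regularized solution:
    #   x_hat = argmin ||y - A x||^2 + lam * ||x||^2
    #         = (A^T A + lam * I)^(-1) A^T y
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
```

With a PCA-derived prior, the lam * I term would be replaced by the inverse covariance of the training-set representation; the closed-form structure is unchanged.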
Procedia PDF Downloads 445
448 Smoker Recognition from Lung X-Ray Images Using Convolutional Neural Network
Authors: Moumita Chanda, Md. Fazlul Karim Patwary
Abstract:
Smoking is one of the most widespread recreational drug use behaviors, and it contributes to birth defects, COPD, heart attacks, and erectile dysfunction. To curb this habit, it is imperative that it be identified and treated. Numerous smoking cessation programs have been created, and they demonstrate how beneficial it may be to help someone stop smoking at the ideal time. A tomography meter is an effective smoking detector. Other wearables, such as RF-based proximity sensors worn on the collar and wrist to detect when the hand is close to the mouth, have been proposed in the past, but they are not impervious to deceptive variables. In this study, we create a system that can discriminate between smokers and non-smokers in real time with high sensitivity and specificity by collecting lung X-ray images and analyzing the data using machine learning. Provided it achieves sufficiently high accuracy, this system could be utilized in hospitals, in the selection of candidates for the army or police, or in university admissions.
Keywords: CNN, smoker detection, non-smoker detection, OpenCV, artificial intelligence, X-ray image detection
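The core operation of the proposed CNN is spatial convolution over the X-ray image; a toy numpy sketch of a single-channel "valid" convolution (hypothetical kernel, not the authors' trained network):

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Single-channel 2D 'valid' convolution (cross-correlation),
    the basic building block of a CNN feature extractor."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out
```

A real network stacks many such learned kernels with nonlinearities and pooling before a final smoker/non-smoker classification layer.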
Procedia PDF Downloads 84
447 Waveguiding in an InAs Quantum Dots Nanomaterial for Scintillation Applications
Authors: Katherine Dropiewski, Michael Yakimov, Vadim Tokranov, Allan Minns, Pavel Murat, Serge Oktyabrsky
Abstract:
InAs Quantum Dots (QDs) in a GaAs matrix are a well-documented luminescent material with high light yield, as well as thermal and ionizing radiation tolerance due to quantum confinement. These benefits can be leveraged for high-efficiency, room-temperature scintillation detectors. The proposed scintillator is composed of InAs QDs acting as luminescence centers in a GaAs stopping medium, which also acts as a waveguide. This system has appealing potential properties, including high light yield (~240,000 photons/MeV) and fast capture of photoelectrons (2–5 ps), orders of magnitude better than currently used inorganic scintillators, such as LYSO or BaF2. The high refractive index of the GaAs matrix (n = 3.4) ensures light emitted by the QDs is waveguided and can be collected by an integrated photodiode (PD). Scintillation structures were grown using Molecular Beam Epitaxy (MBE) and consist of thick GaAs waveguiding layers with embedded sheets of modulation-doped (p-type) InAs QDs. An AlAs sacrificial layer is grown between the waveguide and the GaAs substrate for epitaxial lift-off, which separates the scintillator film so it can be transferred to a low-index substrate for waveguiding measurements. One consideration when using a comparatively low-density material like GaAs (~5.32 g/cm³) as a stopping medium is the matrix thickness in the dimension of radiation collection. Therefore, the luminescence properties of very thick (4–20 micron) waveguides with up to 100 QD layers were studied. The optimization of the medium included QD shape, density, doping, and AlGaAs barriers at the waveguide surfaces to prevent non-radiative recombination. To characterize the efficiency of QD luminescence, temperature-dependent photoluminescence (PL) (77–450 K) was measured and fitted using a kinetic model. The PL intensity degrades by only 40% at RT, with an activation energy for electron escape from the QDs to the barrier of ~60 meV.
Attenuation within the waveguide (WG) is a limiting factor for the lateral size of a scintillation detector, so PL spectroscopy in the waveguiding configuration was studied. Spectra were measured while the laser (630 nm) excitation point was scanned away from the collecting fiber coupled to the edge of the WG. The QD ground-state PL peak at 1.04 eV (1190 nm) was inhomogeneously broadened with a FWHM of 28 meV (33 nm) and showed a distinct red-shift due to self-absorption in the QDs. Attenuation stabilized at about 3 cm⁻¹ after the light had traveled over 1 mm through the WG. Finally, a scintillator sample was used to test detection and evaluate timing characteristics using 5.5 MeV alpha particles. With a 2D waveguide and a small area of integrated PD, the collected charge averaged 8.4 × 10⁴ electrons, corresponding to a collection efficiency of about 7%. The scintillation response had 80 ps noise-limited time resolution and a QD decay time of 0.6 ns. These data confirm the unique properties of this scintillation detector, which can potentially be much faster than any currently used inorganic scintillator.
Keywords: GaAs, InAs, molecular beam epitaxy, quantum dots, III-V semiconductor
Procedia PDF Downloads 255
446 Development of a Non-Dispersive Infrared Multi Gas Analyzer for a TMS
Authors: T. V. Dinh, I. Y. Choi, J. W. Ahn, Y. H. Oh, G. Bo, J. Y. Lee, J. C. Kim
Abstract:
A Non-Dispersive Infrared (NDIR) multi-gas analyzer has been developed to monitor the emission of carbon monoxide (CO) and sulfur dioxide (SO2) from various industries. The NDIR technique for gas measurement is based on wavelength-specific absorption in the infrared spectrum to detect particular gases. NDIR analyzers have been widely applied in Tele-Monitoring Systems (TMS). The advantages of the NDIR analyzer are its low energy consumption and cost compared with other spectroscopic methods. However, zero/span drift and interference remain pressing issues to be solved. In this work, a multi-path technique based on an optical White cell was employed to improve the sensitivity of the analyzer. A pyroelectric detector was used to detect the infrared radiation. The analytical range of the analyzer was 0–200 ppm. The instrument response time was < 2 min. The detection limits of CO and SO2 were < 4 ppm and < 6 ppm, respectively. The zero and span drift over 24 h was less than 3%. The linearity of the analyzer was within 2.5% of reference values. The precision and accuracy of both the CO and SO2 channels were < 2.5% relative standard deviation. In general, the analyzer performed well; however, the detection limits and 24 h drift should be improved for it to be a more competitive instrument.
Keywords: analyzer, CEMS, monitoring, NDIR, TMS
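NDIR concentration readout rests on the Beer–Lambert law, which also explains why the White cell's multi-path (longer effective optical path) improves sensitivity; a simplified single-line sketch (the real instrument calibration is more involved):

```python
import math

def ndir_concentration(I, I0, eps, path_cm):
    """Estimate gas concentration from transmitted IR intensity via
    the Beer-Lambert law: I = I0 * exp(-eps * c * L), so
    c = ln(I0 / I) / (eps * L).
    eps: absorption coefficient per (concentration unit * cm) -- an
    assumed effective value; path_cm: effective optical path, which
    the White cell lengthens by multiple reflections."""
    return math.log(I0 / I) / (eps * path_cm)
```

Doubling the effective path doubles the absorbance for the same concentration, which is the sensitivity gain the multi-path design exploits.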
Procedia PDF Downloads 257
445 A Review Paper for Detecting Zero-Day Vulnerabilities
Authors: Tshegofatso Rambau, Tonderai Muchenje
Abstract:
Zero-day attacks (ZDA) are increasing day by day; there are many vulnerabilities in systems and software that date back decades. Companies keep discovering vulnerabilities in their systems and software and work to release patches and updates. A zero-day vulnerability is a software fault that is not widely known and is unknown to the vendor; attackers work very quickly to exploit such vulnerabilities. These are major security threats with a high success rate because businesses lack the essential safeguards to detect and prevent them. This study focuses on the factors and techniques that can help detect zero-day attacks. There are various methods and techniques for detecting vulnerabilities, and various vendors offer penetration testing and smart vulnerability management solutions. As part of the study process, we undertake literature studies on zero-day attacks and detection methods, as well as modeling approaches and simulations.
Keywords: zero-day attacks, exploitation, vulnerabilities
Procedia PDF Downloads 102
444 The Application of Data Mining Technology in Building Energy Consumption Data Analysis
Authors: Liang Zhao, Jili Zhang, Chongquan Zhong
Abstract:
Energy consumption data, in particular those involving public buildings, are impacted by many factors: the building structure, climate/environmental parameters, construction, system operating conditions, and user behavior patterns. Traditional methods for data analysis are insufficient. This paper delves into data mining technology to determine its application in the analysis of building energy consumption data, including energy consumption prediction, fault diagnosis, and optimal operation. Recent literature is reviewed and summarized, the problems faced by data mining technology in the area of energy consumption data analysis are enumerated, and research directions for future studies are given.
Keywords: data mining, data analysis, prediction, optimization, building operational performance
Procedia PDF Downloads 852
443 Defects Estimation of Embedded Systems Components by a Bond Graph Approach
Authors: I. Gahlouz, A. Chellil
Abstract:
The paper concerns the estimation of system component faults by using an unknown-inputs observer. To reach this goal, we used the Bond Graph approach to physical modelling. We showed that this graphical tool allows the representation of system component faults as unknown inputs within the state representation of the considered physical system. The study of the causal and structural features of the system (controllability, observability, finite structure, and infinite structure) based on the Bond Graph approach was therefore carried out in order to design an unknown-inputs observer, which is used for the system component fault estimation.
Keywords: estimation, bond graph, controllability, observability
Procedia PDF Downloads 412
442 Cooperative CDD Scheme Based On Hierarchical Modulation in OFDM System
Authors: Seung-Jun Yu, Yeong-Seop Ahn, Young-Min Ko, Hyoung-Kyu Song
Abstract:
In order to achieve a high data rate and increase the spectral efficiency, multiple-input multiple-output (MIMO) systems have been proposed. However, multiple antennas are limited by size and cost. Therefore, the recently developed cooperative diversity scheme, which obtains transmit diversity with the existing hardware by constituting a virtual antenna array, can be a solution. However, most of the introduced cooperative techniques share a common fault of decreased transmission rate, because the destination must receive decodable compositions of symbols from both the source and the relay. In this paper, we propose a cooperative cyclic delay diversity (CDD) scheme that uses hierarchical modulation. This scheme is free from the rate loss and allows seamless cooperative communication.
Keywords: MIMO, cooperative communication, CDD, hierarchical modulation
Procedia PDF Downloads 549
441 Production and Characterization of Silver Doped Hydroxyapatite Thin Films for Biomedical Applications
Authors: C. L Popa, C.S. Ciobanu, S. L. Iconaru, P. Chapon, A. Costescu, P. Le Coustumer, D. Predoi
Abstract:
In this paper, the preparation of silver-doped hydroxyapatite thin films and the characterization of their antimicrobial activity are reported. The resultant Ag:HAp films, coated on commercially pure Si disk substrates, were systematically characterized by Scanning Electron Microscopy (SEM) coupled with an X-ray Energy Dispersive Spectroscopy detector (X-EDS), Glow Discharge Optical Emission Spectroscopy (GDOES) and Fourier Transform Infrared spectroscopy (FT-IR). GDOES measurements show that a substantial Ag content has been deposited in the films. The X-EDS and GDOES spectra revealed the presence of a material composed mainly of phosphate, calcium, oxygen, hydrogen and silver. The antimicrobial efficiency of Ag:HAp thin films against Escherichia coli and Staphylococcus aureus bacteria was demonstrated. Ag:HAp thin films could lead to a decrease of infections, especially in the case of bone and dental implants, through surface modification of implantable medical devices.
Keywords: silver, hydroxyapatite, thin films, GDOES, SEM, FTIR, antimicrobial effect
Procedia PDF Downloads 425
440 Geological Characteristics and Hydrocarbon Potential of M’Rar Formation Within NC-210, Atshan Saddle Ghadamis-Murzuq Basins, Libya
Authors: Sadeg M. Ghnia, Mahmud Alghattawi
Abstract:
The NC-210 study area is located in the Atshan Saddle between the Ghadamis and Murzuq basins, west Libya. The preserved Palaeozoic successions are predominantly clastics, reaching a thickness of more than 20,000 ft in the northern Ghadamis Basin depocenter. The Carboniferous series consist of interbedded sandstone, siltstone, shale, claystone and minor limestone deposited in a fluctuating shallow marine to brackish lacustrine/fluviatile environment, attaining a maximum thickness of over 5,000 ft in the area of the Atshan Saddle and 3,500 ft recorded in outcrops on the Murzuq Basin flanks. The Carboniferous strata were uplifted and eroded during Late Paleozoic and early Mesozoic time in the northern Ghadamis Basin and Atshan Saddle. The M'rar Formation is Tournaisian to Late Serpukhovian in age, based on palynological markers, and contains about 12 cycles of sandstone and shale deposited in shallow to outer neritic deltaic settings. The hydrocarbons in the M'rar reservoirs were possibly sourced from the Lower Silurian and possibly the Frasnian radioactive hot shales. The lateral, vertical and thickness distribution of the M'rar Formation is possibly influenced by the reactivation of the Tumarline strike-slip fault and its conjugate faults. Pronounced structural paleohighs and paleolows, trending SE and NW through the Gargaf Saddle, are possibly indicative of the presence of two sub-basins in the area of the Atshan Saddle. A number of seismic reflectors identified from the existing 2D seismic covering the Atshan Saddle reflect the 12 M’rar deltaic sandstone cycles. M’rar7, M’rar9, M’rar10 and M’rar12 are characterized by high-amplitude reflectors, while M’rar2 and M’rar6 are characterized by medium-amplitude reflectors. These horizons are productive reservoirs in the study area. Available seismic data in the study area contributed significantly to the identification of M’rar potential traps, which are predominantly 3-way dip closures against fault zones.
Seismic data also indicate the presence of a significant strike-slip component with the development of flower structures. The M'rar Formation hydrocarbon discoveries are concentrated mainly in the Atshan Saddle, located in the southern Ghadamis Basin, Libya, and in the Illizi Basin in southeast Algeria. Significant additional hydrocarbons may be present in areas adjacent to the Gargaf Uplift, along structural highs, and fringing the Hoggar Uplift, providing suitable migration pathways.
Keywords: hydrocarbon potential, stratigraphy, Ghadamis basin, seismic, well data integration
Procedia PDF Downloads 74
439 Cognitive Radio in Aeronautics: Comparison of Some Spectrum Sensing Techniques
Authors: Abdelkhalek Bouchikhi, Elyes Benmokhtar, Sebastien Saletzki
Abstract:
The aeronautical field is experiencing RF spectrum congestion due to the constant increase in the number of flights, aircraft and on-board telecom systems. In addition, these systems are costly in size, weight and energy consumption. Cognitive radio helps solve the spectrum congestion issue in particular through its capacity to detect idle frequency channels, allowing opportunistic exploitation of the RF spectrum. The present work aims to propose a new use case for aeronautical spectrum sharing and to study the performance of three different detection techniques: the energy detector, the matched filter and the cyclostationary detector, within the aeronautical use case. The spectrum in the proposed cognitive radio is allocated dynamically, where each cognitive radio follows a cognitive cycle. Spectrum sensing is a crucial step whose goal is gathering data about the surrounding environment. Cognitive radio can use different sensors: antennas, cameras, accelerometers, thermometers, etc. In the IEEE 802.22 standard, for example, a primary user (PU) always has the priority to communicate. When a frequency channel used by the primary user is idle, the secondary user (SU) is allowed to transmit in this channel. The Distance Measuring Equipment (DME) is composed of a UHF transmitter/receiver (interrogator) in the aircraft and a UHF receiver/transmitter on the ground, while future cognitive radio will be used jointly to alleviate the spectrum congestion issue in the aeronautical field. LDACS, for example, is a good candidate; it provides two isolated data links: ground-to-air and air-to-ground. The first contribution of the present work is a strategy for sharing the L-band. The adopted spectrum sharing strategy is as follows: the DME plays the role of the PU, which is the licensed user, and the LDACS1 systems are the SUs.
The SUs may use the L-band channels opportunistically as long as they do not cause harmful interference affecting the QoS of the DME system. Spectrum sensing is a key step: it detects spectrum holes by determining whether the primary signal is present or not in a given frequency channel. A missed detection of the primary user's presence creates interference between the PU and SU and seriously affects the QoS of the legacy radio. In this study, brief definitions, concepts and the state of the art of cognitive radio are first presented. Then, a study of three communication channel detection algorithms in a cognitive radio context is carried out, from the point of view of functions, material requirements and signal detection capability in the aeronautical field. We then present a modeling of the detection problem for the three methods (energy, matched filter, and cyclostationary), as well as an algorithmic description of these detectors. Finally, we study and compare the performance of the algorithms. Simulations were carried out using MATLAB software. We analyzed the results based on ROC curves for SNR between -10 dB and 20 dB. The three detectors have been tested with synthetic and real-world signals.
Keywords: aeronautic, communication, navigation, surveillance systems, cognitive radio, spectrum sensing, software defined radio
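Of the three detectors compared, the energy detector is the simplest to sketch: it averages sample energies and compares the statistic against a threshold (illustrative Python rather than the MATLAB used in the study, with an arbitrary threshold):

```python
import numpy as np

def energy_detect(samples, threshold):
    """Energy detection: declare the channel occupied when the
    average sample energy exceeds a threshold."""
    stat = np.mean(np.abs(samples) ** 2)
    return stat > threshold, stat
```

In practice the threshold is chosen from the noise power and a target false-alarm probability; sweeping it traces out the ROC curves discussed above.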
Procedia PDF Downloads 174
438 Simple Rheological Method to Estimate the Branch Structures of Polyethylene under Reactive Modification
Authors: Mahdi Golriz
Abstract:
The aim of this work is to show that the change in molecular structure of linear low-density polyethylene (LLDPE) during peroxide modification can be detected by a simple rheological method. For this purpose, a commercial grade LLDPE (ExxonMobil™ LL4004EL) was reacted with different doses of dicumyl peroxide (DCP). The samples were analyzed by size-exclusion chromatography coupled with a light scattering detector. The dynamic shear oscillatory measurements showed a deviation of the δ–|G*| curve from that of the linear LLDPE, which can be attributed to the presence of long-chain branching (LCB). By the use of a simple rheological method based on melt rheology, transformations in molecular architecture induced on an originally linear low-density polyethylene during the early stages of reactive modification were detected. Reasonable and consistent estimates are obtained concerning the degree of LCB and the volume fraction of the various molecular species produced in the peroxide modification of LLDPE.
Keywords: linear low-density polyethylene, peroxide modification, long-chain branching, rheological method
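The δ–|G*| (van Gurp–Palmen) curve referred to above is computed point by point from the measured dynamic moduli; a minimal sketch (illustrative helper, not the authors' analysis code):

```python
import math

def vgp_point(G_storage, G_loss):
    """One point of the van Gurp-Palmen (delta vs |G*|) plot:
    phase angle delta (degrees) and complex-modulus magnitude |G*|
    from the storage modulus G' and loss modulus G''."""
    G_star = math.hypot(G_storage, G_loss)
    delta = math.degrees(math.atan2(G_loss, G_storage))
    return delta, G_star
```

Long-chain branching typically shows up as a downward deviation of δ at intermediate |G*| relative to the linear polymer's curve, which is the signature exploited here.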
Procedia PDF Downloads 153
437 Stability of Ochratoxin A During Bread Making Process
Authors: Sara Heidari, Jafar Mohammadzadeh Milani, Elmira Pouladi Borj
Abstract:
In this research, the stability of Ochratoxin A (OTA) during the bread making process, including fermentation with yeast (Saccharomyces cerevisiae) or sourdough (Lactobacillus casei, Lactobacillus rhamnosus, Lactobacillus acidophilus and Lactobacillus fermentum) and baking at 200°C, was examined. Bread was prepared on a pilot-plant scale using wheat flour spiked with a standard solution of OTA. Mycotoxin levels were determined after fermentation of the dough with sourdough and with three types of yeast (active dry yeast, instant dry yeast and compressed yeast), and after baking at 200°C, by high performance liquid chromatography (HPLC) with a fluorescence detector, following extraction and clean-up on an immunoaffinity column. According to the results, the highest stability of OTA was observed in the first fermentation (first proof), while the lowest stability was observed in the baking stage, in comparison to the contaminated flour. In addition, compressed yeast showed the greatest impact on the stability of OTA during the bread making process.
Keywords: Ochratoxin A, bread, dough, yeast, sourdough
Procedia PDF Downloads 576
436 Solar Power Generation in a Mining Town: A Case Study for Australia
Authors: Ryan Chalk, G. M. Shafiullah
Abstract:
Climate change is a pertinent issue facing governments and societies around the world. The industrial revolution has resulted in a steady increase in the average global temperature. The mining and energy production industries have been significant contributors to this change, prompting governments to intervene by promoting low-emission technology within these sectors. This paper initially reviews the energy problem in Australia and the mining sector, with a focus on the energy requirements and production methods utilised in Western Australia (WA). Renewable energy in the form of utility-scale solar photovoltaics (PV) provides a solution to these problems by providing emission-free energy, which can be used to supplement the existing natural gas turbines in operation at the proposed site. This research presents a custom renewable solution for the mining site, considering the specific township network, local weather conditions, and seasonal load profiles. A summary of the required PV output is presented to supply slightly over 50% of the town's power requirements during the peak (summer) period, resulting in close to full coverage in the trough (winter) period. DIgSILENT PowerFactory software has been used to simulate the characteristics of the existing infrastructure and produce results of integrating PV. Large-scale PV penetration in the network introduces technical challenges that include voltage deviation, increased harmonic distortion, increased available fault current and degraded power factor. Results also show that cloud cover has a dramatic and unpredictable effect on the output of a PV system. The preliminary analyses conclude that mitigation strategies are needed to overcome voltage deviations, unacceptable levels of harmonics, excessive fault current and low power factor. Mitigation strategies are proposed to control these issues, predominantly through the use of high-quality, made-for-purpose inverters.
Results show that the use of inverters with harmonic filtering reduces the level of harmonic injections to an acceptable level according to Australian standards. Furthermore, configuring the inverters to supply active and reactive power assists in mitigating low power factor problems. The use of FACTS devices (SVC and STATCOM) also reduces the harmonics and improves the power factor of the network, and finally, energy storage helps to smooth the power supply.
Keywords: climate change, mitigation strategies, photovoltaic (PV), power quality
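The harmonic levels checked against the standards are typically summarized as total harmonic distortion (THD); a small helper, assuming per-harmonic RMS magnitudes are available from the simulation:

```python
import math

def thd_percent(fundamental_rms, harmonic_rms):
    """Total harmonic distortion in percent:
    THD = sqrt(sum of squared harmonic magnitudes) / fundamental * 100.
    harmonic_rms: RMS magnitudes of the 2nd, 3rd, ... harmonics."""
    return 100.0 * math.sqrt(sum(h * h for h in harmonic_rms)) / fundamental_rms
```

Grid codes commonly cap inverter voltage or current THD at levels around a few percent, which is the kind of limit the harmonic-filtering inverters above are shown to meet.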
Procedia PDF Downloads 166
435 Determination of Gross Alpha and Gross Beta Activity in Water Samples by iSolo Alpha/Beta Counting System
Authors: Thiwanka Weerakkody, Lakmali Handagiripathira, Poshitha Dabare, Thisari Guruge
Abstract:
The determination of gross alpha and beta activity in water is important in a wide array of environmental studies, and these parameters are considered in international legislation on the quality of water. The technique is commonly applied as a screening method in radioecology, environmental monitoring, industrial applications, etc. Measuring gross alpha and beta emitters with the iSolo alpha/beta counting system is an adequate nuclear technique to assess radioactivity levels in natural and waste water samples, due to its simplicity and low cost compared with other methods. Twelve water samples (six samples of commercially available bottled drinking water and six samples of industrial waste water) were measured by standard method EPA 900.0, using the gas-less, firmware-based, single-sample, manual iSolo alpha/beta counter (Model: SOLO300G) with a solid-state silicon PIPS detector. Am-241 and Sr-90/Y-90 calibration standards were used to calibrate the detector. The minimum detectable activities are 2.32 mBq/L and 406 mBq/L for alpha and beta activity, respectively. Each 2 L water sample was evaporated (at low heat) to a small volume, transferred evenly into a 50 mm stainless steel counting planchet (for homogenization), and heated by an IR lamp until a constant-weight residue was obtained. The samples were then counted for gross alpha and beta. The sample density on the planchet area was maintained below 5 mg/cm². Large quantities of solid waste sludges and waste water are generated every year by various industries, and this water can be reused for different applications. Therefore, implementing water treatment plants and measuring water quality parameters in industrial waste water discharge is very important before release into the environment. This waste may contain different types of pollutants, including radioactive substances.
All the measured waste water samples had gross alpha and beta activities lower than the maximum tolerance limits for discharge of industrial waste into inland surface water, that is, 10⁻⁹ µCi/mL and 10⁻⁸ µCi/mL for gross alpha and beta, respectively (National Environmental Act, No. 47 of 1980), according to the extraordinary gazette of the Democratic Socialist Republic of Sri Lanka of February 2008. The measured water samples were below the recommended radioactivity levels and do not pose any radiological hazard when released to the environment. Drinking water is an essential requirement of life. All the drinking water samples were below the permissible levels of 0.5 Bq/L for gross alpha activity and 1 Bq/L for gross beta activity. These values were proposed by the World Health Organization in 2011; therefore, the water is acceptable for human consumption without any further clarification with respect to its radioactivity. As these screening levels are very low, the individual dose criterion (IDC) would usually not be exceeded (0.1 mSv y⁻¹). The IDC is a criterion for evaluating health risks from long-term exposure to radionuclides in drinking water; the recommended level of 0.1 mSv/y expresses a very low level of health risk. This monitoring work will be continued further for environmental protection purposes.
Keywords: drinking water, gross alpha, gross beta, waste water
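Comparing the gazette limits (quoted in µCi/mL) with the Bq/L values used elsewhere in the abstract requires a unit conversion (1 µCi = 3.7 × 10⁴ Bq, 1 mL = 10⁻³ L); a one-line helper:

```python
def uci_per_ml_to_bq_per_l(x):
    """Convert an activity concentration from microcuries per millilitre
    to becquerels per litre: 1 uCi = 3.7e4 Bq and 1 mL = 1e-3 L,
    so multiply by 3.7e4 * 1e3."""
    return x * 3.7e4 * 1e3
```

The gross alpha discharge limit of 10⁻⁹ µCi/mL thus corresponds to 0.037 Bq/L (37 mBq/L), and the beta limit of 10⁻⁸ µCi/mL to 0.37 Bq/L.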
Procedia PDF Downloads 198
434 BER Analysis of Energy Detection Spectrum Sensing in Cognitive Radio Using GNU Radio
Authors: B. Siva Kumar Reddy, B. Lakshmi
Abstract:
Cognitive radio is an emerging technology that enables effective usage of the spectrum. Energy-detector-based sensing is the most broadly utilized spectrum sensing strategy. Moreover, it is quite generic, as the receiver does not need any information on the primary user's signals, channel data, or even the type of modulation. This paper presents the implementation of energy detection sensing for an AM (Amplitude Modulated) signal at 710 kHz, an FM (Frequency Modulated) signal at 103.45 MHz (local station frequency), a Wi-Fi signal at 2.4 GHz and WiMAX signals at 6 GHz. The OFDM/OFDMA-based WiMAX physical layer with convolutional channel coding is implemented using a USRP N210 (Universal Software Radio Peripheral) and GNU Radio based Software Defined Radio (SDR). Test results demonstrate the increase of BER (Bit Error Rate) with channel noise, and BER performance is analyzed for different Eb/N0 (energy per bit to noise power spectral density ratio) values.
Keywords: BER, cognitive radio, GNU Radio, OFDM, SDR, WiMAX
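Measured BER-versus-Eb/N0 sweeps like the one described are normally checked against a theoretical AWGN reference; for uncoded BPSK (an illustrative baseline, simpler than the paper's coded OFDM/WiMAX chain) the closed form is BER = 0.5 · erfc(√(Eb/N0)):

```python
import math

def bpsk_ber_awgn(ebn0_db):
    """Theoretical uncoded BPSK bit error rate over an AWGN channel:
    BER = 0.5 * erfc(sqrt(Eb/N0)), with Eb/N0 given in dB."""
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    return 0.5 * math.erfc(math.sqrt(ebn0))
```

A measured curve lying close to (or, with coding gain, below) such a reference over the swept Eb/N0 range is the usual sanity check for an SDR BER experiment.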
Procedia PDF Downloads 500433 Bidirectional Dynamic Time Warping Algorithm for the Recognition of Isolated Words Impacted by Transient Noise Pulses
Authors: G. Tamulevičius, A. Serackis, T. Sledevič, D. Navakauskas
Abstract:
We consider the biggest challenge in speech recognition – noise reduction. Traditionally, detected transient noise pulses are removed together with the corrupted speech using pulse models. In this paper we propose to cope with the problem directly in the Dynamic Time Warping domain. A Bidirectional Dynamic Time Warping algorithm for the recognition of isolated words impacted by transient noise pulses is proposed. It uses a simple transient noise pulse detector, employs bidirectional computation of dynamic time warping and directly manipulates the warping results. Experimental investigation with several alternative solutions confirms the effectiveness of the proposed algorithm in reducing the impact of noise on the recognition process – a 3.9% increase in noisy speech recognition accuracy is achieved.Keywords: transient noise pulses, noise reduction, dynamic time warping, speech recognition
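For context, the accumulated-cost computation that both directions of the proposed algorithm build on is ordinary dynamic time warping; a minimal sketch follows (the test sequences are invented for illustration, and the paper's bidirectional variant additionally runs the recursion backward from the sequence ends and joins the two cost maps around the detected pulse frames).

```python
def dtw_distance(a, b):
    """Plain dynamic time warping distance between two 1-D sequences:
    minimal accumulated absolute difference over all monotone alignments."""
    n, m = len(a), len(b)
    INF = float("inf")
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible predecessor paths
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

a = [0, 1, 2, 3, 2, 1, 0]
b = [0, 1, 1, 2, 3, 2, 1, 0]   # a time-stretched copy of a
print(dtw_distance(a, b))      # 0.0 — warping absorbs the stretching
```

Because the accumulated cost is built row by row, a corrupted region in the middle of a word poisons everything after it in a one-directional pass; computing from both ends is what lets the algorithm discard only the pulse-affected cells.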
Procedia PDF Downloads 558432 Analytical Study of the Structural Response to Near-Field Earthquakes
Authors: Isidro Perez, Maryam Nazari
Abstract:
Numerous earthquakes across the world have led to catastrophic damage and collapse of structures (e.g., the 1971 San Fernando, 1995 Kobe (Japan), and 2010 Chile earthquakes). Engineers are constantly studying methods to moderate the effect this phenomenon has on structures in order to further reduce damage and costs, and ultimately to provide life safety to occupants. However, there are regions where structures, cities, or water reservoirs are built near fault lines. Earthquakes that occur near these fault lines can be categorized as near-field earthquakes, whereas a far-field earthquake occurs further away from the seismic source. A near-field earthquake generally has a higher initial peak, resulting in a larger seismic response than a far-field earthquake ground motion. These larger responses may cause serious structural damage, posing a high risk to public safety. Unfortunately, the response of structures subjected to near-field records is not properly reflected in current building design specifications. For example, in ASCE 7-10, the design response spectrum is mostly based on far-field design-level earthquakes. This may result in catastrophic damage to structures that are not properly designed for near-field earthquakes. This research investigates the effect that near-field earthquakes have on the response of structures. To fully examine this topic, a structure was designed following the current seismic building design specifications, e.g., ASCE 7-10 and ACI 318-14, and analytically modeled using the SAP2000 software. Next, utilizing the FEMA P695 report, several near-field and far-field earthquakes were selected, and the near-field earthquake records were scaled to represent the design-level ground motions. The prototype structural model created in SAP2000 was then subjected to the scaled ground motions.
A linear time history analysis and a pushover analysis were conducted in SAP2000 to evaluate the structural seismic responses. On average, the structure experienced an 8% and 1% increase in story drift and absolute acceleration, respectively, when subjected to the near-field earthquake ground motions. The pushover analysis was run to identify and properly define the hinge formation in the structure for the nonlinear time history analysis. A near-field ground motion is characterized by a high-energy pulse, making it unique among earthquake ground motions. Therefore, pulse extraction methods were used in this research to estimate the maximum response of structures subjected to near-field motions. The results will be utilized in the generation of a design spectrum for the estimation of design forces for buildings subjected to NF ground motions.Keywords: near-field, pulse, pushover, time-history
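As a rough illustration of the linear time-history step, a damped single-degree-of-freedom oscillator can be integrated against a ground-acceleration record with the central-difference scheme. The period, damping ratio and input below are illustrative assumptions, not values from the study, which analyzed a full SAP2000 building model.

```python
import math

def sdof_response(ground_acc, dt, period_s=0.5, zeta=0.05):
    """Displacement history of a unit-mass damped SDOF oscillator,
    u'' + c u' + k u = -ag(t), integrated with central differences —
    the basic operation behind a linear time-history analysis."""
    wn = 2.0 * math.pi / period_s
    k, c = wn * wn, 2.0 * zeta * wn
    a0 = 1.0 / dt**2 + c / (2.0 * dt)   # coefficient of u_{i+1}
    b0 = 1.0 / dt**2 - c / (2.0 * dt)   # coefficient of u_{i-1}
    u_prev, u_curr = 0.0, 0.0           # at-rest initial conditions
    out = [0.0]
    for ag in ground_acc[:-1]:
        u_next = (-ag - (k - 2.0 / dt**2) * u_curr - b0 * u_prev) / a0
        out.append(u_next)
        u_prev, u_curr = u_curr, u_next
    return out

# A sudden, sustained ground acceleration: the response rings, then the
# 5% damping lets it settle at the static offset p/k.
dt, n = 0.005, 2000
u = sdof_response([-1.0] * n, dt)
print(round(u[-1] * (2 * math.pi / 0.5) ** 2, 2))  # ≈ 1.0, i.e. u ≈ p/k
```

Feeding a scaled near-field record into `ground_acc` instead of the step input would reproduce, for one oscillator period, the kind of drift comparison reported above.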
Procedia PDF Downloads 146431 Warfield Spying Robot Using LoRa
Authors: Madhavi T., Sireesha Sakhamuri, Hema Sri A., Harika K.
Abstract:
Today, as technological advancements take place, they are being used by the armed forces to reduce the risk of their losses and to defeat their enemies. The development of sophisticated technology relies mostly on the use of high-tech weapons or machinery. Robotics is one of the hot spheres of the modern age, in which nations concentrate on the state of war and peace for military purposes. Robots have been in use for demining and rescue operations for some time now, and are increasingly being propelled into combat and spy missions. This project focuses on creating a LoRa-based spying robot with a wireless IP camera attached to it that can track a human target. The robot transmits video via the IP camera to the base station, where it can be analyzed on a PC that is also used to control the robot's movement. The base station sends control signals through its LoRa transceiver to the LoRa transceiver mounted on the robot. With this function, the robot can relay videos in real time with anti-collision capabilities, and enemies in the war zone cannot recognize it. More importantly, this project focuses on increasing the communication range using LoRa.Keywords: lora, IP cam, metal detector, laser shoot
Procedia PDF Downloads 111430 Crash and Injury Characteristics of Riders in Motorcycle-Passenger Vehicle Crashes
Authors: Z. A. Ahmad Noor Syukri, A. J. Nawal Aswan, S. V. Wong
Abstract:
The motorcycle has become one of the most common types of vehicles used on the road, particularly in the Asia region, including Malaysia, due to its convenient size and affordable price. This study focuses only on crashes involving motorcycles with passenger cars, consisting of 43 real-world crashes obtained through an in-depth crash investigation process from June 2016 till July 2017. The study collected and analyzed vehicle and site parameters obtained during crash investigation, together with injury information acquired from the patient-treating hospital. The investigation team, consisting of two personnel, was stationed at the Emergency Department of the treatment facility and was dispatched to the crash scene upon notification of a related crash. The injury information retrieved was coded according to the level of severity using the Abbreviated Injury Scale (AIS) and classified into different body regions. The data revealed that weekend crashes were significantly higher during the night time period, while for weekdays the crash occurrence was highest during morning hours (the commuting-to-work period). Bad weather conditions had a minimal effect on the occurrence of motorcycle-passenger vehicle crashes, and nearly 90% involved motorcycles with single riders. Riders up to 25 years old are heavily involved in crashes with passenger vehicles (60%), followed by the 26-55 year age group with 35%. Male riders were dominant in each of the age segments. The majority of the crashes involved side impacts, followed by rear impacts, and cars outnumbered the other passenger vehicle types in terms of crash involvement with motorcycles. The investigation data also revealed that passenger vehicles were the most at-fault counterpart (62%) when involved in crashes with motorcycles, and most of the crashes involved situations whereby both vehicles were travelling in the same direction and one of them was in a turning maneuver.
More than 80% of the involved motorcycle riders were assigned the yellow severity level during the triage process. The study also found that nearly 30% of the riders sustained injuries to the lower extremities, while MAIS level 3 injuries were recorded for all body regions except the thorax region. The results showed that crashes in which the motorcycle was at fault were more likely to occur at night and in rainy conditions. These types of crashes were also more likely to involve passenger vehicle types other than cars, and are more likely to result in a higher ISS (>6) for the involved rider. To reduce motorcycle fatalities, the characteristics concerned must first be understood, and focus may be given to crashes involving passenger vehicles as the most dominant crash partner on Malaysian roads.Keywords: motorcycle crash, passenger vehicle, in-depth crash investigation, injury mechanism
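The ISS values referred to above follow the standard definition: the sum of squares of the three highest per-region maximum AIS scores, with any AIS of 6 forcing ISS = 75. A sketch (the example rider's scores are invented, not taken from the study's data):

```python
def injury_severity_score(region_max_ais):
    """ISS from the maximum AIS per body region (standard definition:
    sum of squares of the three highest region scores; any AIS of 6,
    an unsurvivable injury, sets ISS to the maximum of 75)."""
    if any(a == 6 for a in region_max_ais):
        return 75
    top3 = sorted(region_max_ais, reverse=True)[:3]
    return sum(a * a for a in top3)

# e.g. a rider with MAIS 3 lower-extremity, AIS 2 head and AIS 1 thorax
# injuries across the six ISS body regions:
print(injury_severity_score([3, 2, 1, 0, 0, 0]))  # 14
```

The squared terms mean that one severe injury (a single AIS 3 gives ISS 9) outweighs several minor ones, which is why the ISS > 6 split used above separates riders with at least one moderate-to-serious injury.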
Procedia PDF Downloads 322429 Bi-Lateral Comparison between NIS-Egypt and NMISA-South Africa for the Calibration of an Optical Time Domain Reflectometer
Authors: Osama Terra, Mariesa Nel, Hatem Hussein
Abstract:
Calibration of an Optical Time Domain Reflectometer (OTDR) plays a crucial role in the accurate determination of fault locations and the accurate calculation of the loss budget of long-haul optical fibre links during installation and repair. A comparison has been made between the Egyptian National Institute for Standards (NIS-Egypt) and the National Metrology Institute of South Africa (NMISA-South Africa) for the calibration of an OTDR. The distance and attenuation scales of a transfer OTDR were calibrated by both institutes using their standards according to the standard IEC 61746-1 (2009). The results of this comparison are compiled in this report.Keywords: OTDR calibration, recirculating loop, concatenated method, standard fiber
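For background, an OTDR's distance scale maps the two-way time of flight of a backscattered pulse to a one-way distance via the fibre's group index; errors in the instrument's timebase or in the index setting are exactly what the distance-scale calibration quantifies. A sketch with an assumed group index (the value below is typical for standard single-mode fibre at 1550 nm, not a value from this comparison):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def otdr_event_distance(round_trip_time_s, group_index=1.4681):
    """One-way fibre distance from the measured two-way time of flight:
    d = c * t / (2 * n_g). The factor 2 accounts for the pulse
    travelling out to the event and back."""
    return C * round_trip_time_s / (2.0 * group_index)

# A reflection seen 98 microseconds after the pulse left the front panel:
print(round(otdr_event_distance(98e-6), 1))  # ≈ 10006.0 m, i.e. ~10 km
```

A miscalibrated group index scales every reported distance by the same factor, which is why fault-location accuracy on long-haul links depends directly on this calibration.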
Procedia PDF Downloads 448428 Phenolic-Based Chemical Production from Catalytic Depolymerization of Alkaline Lignin over Fumed Silica Catalyst
Authors: S. Totong, P. Daorattanachai, N. Laosiripojana
Abstract:
Lignin depolymerization into phenolic-based chemicals is an interesting process for utilizing lignin and upgrading its value. In this study, the depolymerization reaction was performed to convert alkaline lignin into smaller-molecule compounds. Fumed SiO₂ was used as a catalyst to improve the catalytic activity of lignin decomposition. The important parameters of the depolymerization process (i.e., reaction temperature, reaction time, etc.) were also investigated. In addition, gas chromatography with mass spectrometry (GC-MS), gas chromatography with a flame-ionization detector (GC-FID), and Fourier transform infrared spectroscopy (FT-IR) were used to analyze and characterize the lignin products. It was found that the fumed SiO₂ catalyst exhibited good catalytic activity in lignin depolymerization. The main products of catalytic depolymerization were guaiacol, syringol, vanillin, and phenols. Additionally, metals supported on fumed SiO₂, such as Cu/SiO₂ and Ni/SiO₂, increased the catalyst activity in terms of phenolic product yield.Keywords: alkaline lignin, catalytic, depolymerization, fumed SiO₂, phenolic-based chemicals
Procedia PDF Downloads 246427 Non-Invasive Imaging of Human Tissue Using NIR Light
Authors: Ashwani Kumar
Abstract:
The use of NIR light for imaging biological tissue and quantifying its optical properties is a good choice over invasive methods. Optical tomography involves two steps. One is the forward problem and the other is the reconstruction problem. The forward problem consists of finding the measurements of light transmitted through the tissue from source to detector, given the spatial distribution of absorption and scattering properties. The second step is the reconstruction problem. In X-ray tomography, there are standard reconstruction methods such as filtered back projection or the algebraic reconstruction methods. However, these cannot be applied as such in optical tomography due to the highly scattering nature of biological tissue. A hybrid reconstruction algorithm has been implemented in this work which takes into account the highly scattered paths taken by photons while back-projecting the forward data obtained during Monte Carlo simulation. The reconstructed image suffers from blurring due to the point spread function.Keywords: NIR light, tissue, blurring, Monte Carlo simulation
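A heavily simplified version of the Monte Carlo forward step can illustrate why the photon paths are so scattered: photons take exponentially distributed steps, shed weight to absorption at each interaction, and rescatter in random directions. The optical properties below are invented, and real tissue simulations use anisotropic (Henyey-Greenstein) scattering and a proper weight-termination scheme rather than the isotropic scattering and hard cutoff assumed here.

```python
import math
import random

def simulate_photons(n_photons, mu_a, mu_s, thickness):
    """Toy Monte Carlo photon transport through a slab: exponential free
    paths, partial absorption of the photon weight at each interaction,
    isotropic rescattering (g = 0). Returns the transmitted weight
    fraction, i.e. the diffusely transmitted light."""
    mu_t = mu_a + mu_s
    rng = random.Random(1)
    transmitted = 0.0
    for _ in range(n_photons):
        z, cos_t, w = 0.0, 1.0, 1.0   # launch straight into the slab
        while w > 1e-3:               # crude weight cutoff
            step = -math.log(rng.random()) / mu_t
            z += cos_t * step
            if z >= thickness:
                transmitted += w      # escaped through the far face
                break
            if z < 0:
                break                 # escaped back through the entry face
            w *= mu_s / mu_t          # deposit the absorbed fraction
            cos_t = 2.0 * rng.random() - 1.0  # isotropic new direction
    return transmitted / n_photons

# A 10-scattering-mean-free-path slab with weak absorption:
t_frac = simulate_photons(5000, mu_a=0.1, mu_s=10.0, thickness=1.0)
print(t_frac)
```

Even this toy version shows the core difficulty for reconstruction: the photons that reach the detector have wandered through large volumes of the slab, so a straight back-projection line, as used in X-ray CT, does not describe where the measured light actually travelled.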
Procedia PDF Downloads 493426 Optical Heterodyning of Injection-Locked Laser Sources: A Novel Technique for Millimeter-Wave Signal Generation
Authors: Subal Kar, Madhuja Ghosh, Soumik Das, Antara Saha
Abstract:
A novel technique has been developed to generate an ultra-stable millimeter-wave signal by optical heterodyning of the outputs from two slave laser (SL) sources injection-locked to the sidebands of a frequency-modulated (FM) master laser (ML). Precise thermal tuning of the SL sources is required to lock each slave laser frequency to the desired FM sideband of the ML. When the output signals from the injection-locked SLs are coherently heterodyned in a fast-response photodetector such as a high electron mobility transistor (HEMT), an extremely stable millimeter-wave signal with very narrow linewidth can be generated. The scheme may also be used to generate ultra-stable sub-millimeter-wave/terahertz signals.Keywords: FM sideband injection locking, master-slave injection locking, millimetre-wave signal generation, optical heterodyning
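The heterodyning step itself can be illustrated numerically: a square-law detector responds to the intensity of the summed fields, which contains a term beating at the difference frequency of the two lasers. The sketch below uses audio-range stand-ins for the optical frequencies (a simplifying assumption, since the actual optical carriers are hundreds of THz) and a single-bin DFT to pick out the beat line.

```python
import cmath
import math

# Square-law detection of two summed carriers: the detected intensity
# |E1 + E2|^2 contains a slowly varying term at f2 - f1.
f1, f2, fs, n = 1000.0, 1060.0, 100_000.0, 50_000
i_t = [(math.cos(2 * math.pi * f1 * k / fs)
        + math.cos(2 * math.pi * f2 * k / fs)) ** 2
       for k in range(n)]

def dft_mag(x, freq, fs):
    """Magnitude of a single DFT bin at `freq` (a one-bin spectrum probe)."""
    return abs(sum(v * cmath.exp(-2j * math.pi * freq * k / fs)
                   for k, v in enumerate(x))) / len(x)

beat = dft_mag(i_t, f2 - f1, fs)   # difference-frequency line
harm = dft_mag(i_t, 2 * f1, fs)    # a second-harmonic line for comparison
print(beat, harm)  # the beat line dominates the AC spectrum
```

Expanding the square gives 1 + ½cos(2πf₁·2t) + ½cos(2πf₂·2t) + cos(2π(f₂−f₁)t) + cos(2π(f₁+f₂)t); with optical carriers only the difference term falls inside the detector bandwidth, which is why the scheme delivers a clean millimeter-wave tone whose linewidth is set by the mutual coherence of the injection-locked slaves.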
Procedia PDF Downloads 391425 Metal (Loids) Speciation Using HPLC-ICP-MS Technique in Klodnica River, Upper Silesia, Poland
Authors: Magdalena Jabłońska-Czapla
Abstract:
This work provides knowledge about the redox and speciation changes of the ionic forms of As, Cr, and Sb in Klodnica River water. This kind of study has never been conducted in this region of Poland. Previously optimized and validated HPLC-ICP-MS methods for the determination of As, Sb and Cr were used. The separation step was performed using a high-performance liquid chromatograph equipped with an ion-exchange column, followed by an ICP-MS spectrometer as detector. Preliminary studies included determination of the total concentrations of As, Sb and Cr, as well as the pH, Eh, temperature and conductivity of the water samples. The study was conducted monthly from March to August 2014 at six points on the Klodnica River. The results indicate that the acceptable concentrations of total Cr and Sb were exceeded in the Klodnica River, and its waters should be classified below the second purity class. Oxidized antimony and arsenic forms dominate in Klodnica River waters, together with the two chromium forms Cr(VI) and Cr(III). The studies have also shown the presence of methyl derivatives of arsenic.Keywords: antimony, arsenic, chromium, HPLC-ICP-MS, river water, speciation
Procedia PDF Downloads 411424 Effects of Heat Treatment on the Elastic Constants of Cedar Wood
Authors: Tugba Yilmaz Aydin, Ergun Guntekin, Murat Aydin
Abstract:
Effects of heat treatment on the elastic constants of cedar wood (Cedrus libani) were investigated. Specimens were exposed to heat under atmospheric pressure at four different temperatures (120, 150, 180, 210 °C) and for three different durations (2, 5, 8 hours). Three Young's moduli (EL, ER, ET) and six Poisson ratios (μLR, μLT, μRL, μRT, μTL, μTR) were determined from compression tests using a bi-axial extensometer at constant moisture content (12%). Three shear moduli were determined using ultrasound: six shear wave velocities propagating along the principal axes of anisotropy were measured using an EPOCH 650 ultrasonic flaw detector with 1 MHz transverse transducers. The properties of the tested samples were significantly affected by heat treatment to different degrees. Softer treatments yielded some increase in Young's modulus and shear modulus values, but increasing time and temperature resulted in significant decreases in both. Poisson ratios seemed insensitive to heat treatment.Keywords: cedar wood, elastic constants, heat treatment, ultrasound
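For reference, the ultrasonic determination of each shear modulus follows from the measured shear-wave velocity and the density as G = ρv². A sketch with illustrative numbers (the density and velocity below are plausible ballpark values for softwood, not the paper's measured data):

```python
def shear_modulus(density_kg_m3, shear_velocity_m_s):
    """Shear modulus from density and the ultrasonic shear-wave velocity
    along a principal axis of anisotropy: G = rho * v^2."""
    return density_kg_m3 * shear_velocity_m_s ** 2

# Illustrative values only: ~420 kg/m^3 wood with a 1.3 km/s shear wave.
g_pa = shear_modulus(420.0, 1300.0)
print(round(g_pa / 1e9, 2))  # 0.71 — G in GPa, a sub-GPa value typical of wood
```

Because G scales with the square of the velocity, a heat-treatment-induced drop in wave speed translates into a roughly twice-as-large relative drop in the reported shear modulus, which is consistent with the pronounced decreases seen at longer times and higher temperatures.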
Procedia PDF Downloads 384423 Performances of the Double-Crystal Setup at CERN SPS Accelerator for Physics beyond Colliders Experiments
Authors: Andrii Natochii
Abstract:
We present recent results from the CERN accelerator facilities obtained in the frame of the UA9 Collaboration. The UA9 experiment investigates how a tiny bent silicon crystal (a few millimeters long) can be used for various high-energy physics applications. Due to the huge electrostatic field (tens of GV/cm) between crystalline planes, charged particles impinging on the crystal have a probability of being trapped in the channeling regime. This makes it possible to steer a high-intensity, high-momentum beam by bending the crystal: channeled particles follow the crystal curvature and are deflected by a certain angle (from tens of microradians for the LHC to a few milliradians for the SPS energy range). The measurements at the SPS, performed in 2017 and 2018, confirmed that protons deflected by the first crystal, inserted in the primary beam halo, can be caught and channeled by the second crystal. In this configuration, we measure the single-pass deflection efficiency of the second crystal and demonstrate our ability to perform a fixed-target experiment at the SPS accelerator (and at the LHC in the future).Keywords: channeling, double-crystal setup, fixed target experiment, Timepix detector
Procedia PDF Downloads 150422 Enhanced Visual Sharing Method for Medical Image Security
Authors: Kalaivani Pachiappan, Sabari Annaji, Nithya Jayakumar
Abstract:
In recent years, information security has emerged as one of the foremost challenges in many fields. Security is a major issue especially in medical information systems, which handle sensitive data such as patients' diagnoses and medical images. These sensitive data require confidentiality during transmission. Image sharing is a secure and fault-tolerant method for protecting digital images, which can use cryptographic techniques to reduce information loss. In this paper, a visual sharing method is proposed which embeds the patient's details into a medical image. The medical image can then be divided into numerous shared images and protected by various users. The original patient details and medical image can be retrieved by gathering the shared images.Keywords: information security, medical images, cryptography, visual sharing
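As a minimal illustration of the sharing idea (an n-of-n XOR scheme, deliberately simpler than the threshold visual-cryptography constructions such a method would build on), a byte string standing in for image data can be split into random shares whose XOR recovers the original, while any proper subset of shares is statistically independent of it.

```python
import os

def make_shares(secret: bytes, n: int):
    """Split `secret` into n shares: n-1 uniformly random pads plus the
    XOR of the secret with all pads. All n shares are needed to recover
    the secret; fewer reveal nothing about it."""
    pads = [os.urandom(len(secret)) for _ in range(n - 1)]
    last = bytes(secret)
    for p in pads:
        last = bytes(a ^ b for a, b in zip(last, p))
    return pads + [last]

def recover(shares):
    """XOR all shares together to rebuild the original bytes."""
    out = bytes(len(shares[0]))
    for s in shares:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

chunk = b"IMG bytes + patient-1234 details"  # stands in for the embedded image
shares = make_shares(chunk, 4)
print(recover(shares) == chunk)  # True
```

The fault tolerance claimed for image sharing comes from threshold variants of this construction, in which any k of n shares suffice; the all-or-nothing XOR version shown here trades that robustness for simplicity.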
Procedia PDF Downloads 414421 Machine Learning Development Audit Framework: Assessment and Inspection of Risk and Quality of Data, Model and Development Process
Authors: Jan Stodt, Christoph Reich
Abstract:
The usage of machine learning models for prediction is growing rapidly, and proof that the intended requirements are met is essential. Audits are a proven method to determine whether requirements or guidelines are met. However, machine learning models have intrinsic characteristics, such as the quality of the training data, that make it difficult to demonstrate the required behavior and make audits more challenging. This paper describes an ML audit framework that evaluates and reviews the risks of machine learning applications, the quality of the training data, and the machine learning model itself. We evaluate and demonstrate the functionality of the proposed framework by auditing a steel plate fault prediction model.Keywords: audit, machine learning, assessment, metrics
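A tiny sketch of the training-data-quality side of such an audit: record row counts, missing values, duplicates and class imbalance for the audit report. The checks and the toy rows below are illustrative assumptions, not the framework's actual metrics.

```python
def audit_training_data(rows, label_index):
    """Toy data-quality checks of the kind an ML audit would record:
    missing values, duplicate rows and class imbalance."""
    report = {"n_rows": len(rows)}
    report["n_missing"] = sum(1 for r in rows for v in r if v is None)
    report["n_duplicates"] = len(rows) - len({tuple(r) for r in rows})
    counts = {}
    for r in rows:
        counts[r[label_index]] = counts.get(r[label_index], 0) + 1
    report["majority_class_share"] = max(counts.values()) / len(rows)
    return report

# Invented feature rows for a fault-vs-ok labelled dataset:
rows = [[1.0, 0.2, "fault"], [1.0, 0.2, "fault"],
        [0.9, None, "ok"], [0.5, 0.1, "ok"], [0.4, 0.3, "ok"]]
report = audit_training_data(rows, label_index=2)
print(report)
```

An auditor would compare such numbers against thresholds agreed in the requirements (e.g. a maximum tolerated missing-value rate or class imbalance) and flag violations as findings.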
Procedia PDF Downloads 271