Search results for: incident detector
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 765

675 Non-Contact Digital Music Instrument Using Light Sensing Technology

Authors: Aishwarya Ravichandra, Kirtana Kirtivasan, Adithi Mahesh, Ashwini S. Savanth

Abstract:

A Non-Contact Digital Music System has been conceptualized and implemented to create a new era of digital music. The system replaces the strings of a traditional stringed instrument with laser beams to avoid bruising of the user’s hand. It consists of seven laser modules, detector modules, and distance sensors that form the basic hardware blocks of the instrument. An Arduino ATmega2560 microcontroller is used as the primary interface between the hardware and the software. MIDI (Musical Instrument Digital Interface) is used as the protocol to establish communication between the instrument and the virtual synthesizer software.
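
As a rough illustration of the note-triggering and pitch-bend logic described above, the following sketch (written in Python for brevity rather than Arduino C) shows how interrupted laser beams can be turned into raw MIDI messages. The note mapping, thresholds, and helper names are illustrative assumptions, not taken from the paper; hardware I/O is stubbed out.

```python
# Minimal sketch of the note-on/note-off and pitch-bend logic for a 7-"string"
# laser instrument. Scale mapping and distance range are assumptions only.
NOTE_ON, NOTE_OFF, PITCH_BEND = 0x90, 0x80, 0xE0
STRING_NOTES = [60, 62, 64, 65, 67, 69, 71]  # 7 laser "strings" -> MIDI notes (assumed C major)

def midi_messages(beam_blocked, prev_blocked, distance_cm):
    """Return raw MIDI messages for one scan of the 7 laser/detector pairs."""
    msgs = []
    for ch, (now, before) in enumerate(zip(beam_blocked, prev_blocked)):
        if now and not before:                       # beam just interrupted -> note on
            msgs.append(bytes([NOTE_ON, STRING_NOTES[ch], 100]))
        elif before and not now:                     # beam released -> note off
            msgs.append(bytes([NOTE_OFF, STRING_NOTES[ch], 0]))
    # Map the IR distance reading (assumed 10-40 cm) onto the 14-bit pitch-bend range.
    bend = int(max(0.0, min(1.0, (distance_cm - 10) / 30.0)) * 16383)
    msgs.append(bytes([PITCH_BEND, bend & 0x7F, (bend >> 7) & 0x7F]))
    return msgs

# Example scan: string 3 newly blocked, hand 25 cm above the distance sensor.
print(midi_messages([0, 0, 1, 0, 0, 0, 0], [0] * 7, 25))
```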

Keywords: Arduino, detector, laser, MIDI, note on, note off, pitch bend, Sharp IR distance sensor

Procedia PDF Downloads 408
674 X-Ray Detector Technology Optimization in CT Imaging

Authors: Aziz Ikhlef

Abstract:

Most multi-slice CT scanners are built with detectors composed of scintillator-photodiode arrays. The photodiode arrays are mainly based on front-illuminated technology for detectors under 64 slices and on back-illuminated photodiodes for systems of 64 slices or more. Designs based on back-illuminated photodiodes were investigated for CT machines to overcome the challenge of the higher number of runs and connections required in front-illuminated diodes. In backlit diodes, the electronic noise has already been improved because of the reduction in load capacitance that follows from the reduced routing. This translates into better image quality in low-signal applications, improving low-dose imaging in a large patient population. With the fast development of multi-detector-row CT (MDCT) scanners and the increasing number of examinations, both the medical and regulatory communities have raised significant concerns about the radiation dose received by the patient. In order to reduce individual exposure, and in response to the recommendations of the International Commission on Radiological Protection (ICRP), which suggests that all exposures should be kept as low as reasonably achievable (ALARA), every manufacturer is trying to implement strategies and solutions to optimize dose efficiency and image quality based on x-ray emission and scanning parameters. Added demands on CT detector performance also come from the increased utilization of spectral CT or dual-energy CT, in which projection data at two different tube potentials are collected. One approach utilizes a technology called fast-kVp switching, in which the tube voltage is switched between 80 kVp and 140 kVp in a fraction of a millisecond. To reduce the cross-contamination of signals, the temporal response of the scintillator-based detector has to be extremely fast to minimize the residual signal from previous samples. In addition, this paper will present an overview of detector technologies and image-chain improvements which have been investigated in the last few years to improve the signal-to-noise ratio and the dose efficiency of CT scanners in regular examinations and in energy-discrimination techniques. Several parameters of the image chain in general, and of the detector technology in particular, contribute to the optimization of the final image quality. We will go through the properties of the post-patient collimation to improve the scatter-to-primary ratio; the scintillator material properties such as light output, afterglow, primary speed, and crosstalk to improve spectral imaging; the photodiode design characteristics; and the data acquisition system (DAS), to optimize for crosstalk, noise, and temporal/spatial resolution.

Keywords: computed tomography, X-ray detector, medical imaging, image quality, artifacts

Procedia PDF Downloads 274
673 X-Ray Detector Technology Optimization in Computed Tomography

Authors: Aziz Ikhlef

Abstract:

Most multi-slice Computed Tomography (CT) scanners are built with detectors composed of scintillator-photodiode arrays. The photodiode arrays are mainly based on front-illuminated technology for detectors under 64 slices and on back-illuminated photodiodes for systems of 64 slices or more. Designs based on back-illuminated photodiodes were investigated for CT machines to overcome the challenge of the higher number of runs and connections required in front-illuminated diodes. In backlit diodes, the electronic noise has already been improved because of the reduction in load capacitance that follows from the reduced routing. This translates into better image quality in low-signal applications, improving low-dose imaging in a large patient population. With the fast development of multi-detector-row CT (MDCT) scanners and the increasing number of examinations, both the medical and regulatory communities have raised significant concerns about the radiation dose received by the patient. In order to reduce individual exposure, and in response to the recommendations of the International Commission on Radiological Protection (ICRP), which suggests that all exposures should be kept as low as reasonably achievable (ALARA), every manufacturer is trying to implement strategies and solutions to optimize dose efficiency and image quality based on x-ray emission and scanning parameters. Added demands on CT detector performance also come from the increased utilization of spectral CT or dual-energy CT, in which projection data at two different tube potentials are collected. One approach utilizes a technology called fast-kVp switching, in which the tube voltage is switched between 80 kVp and 140 kVp in a fraction of a millisecond. To reduce the cross-contamination of signals, the temporal response of the scintillator-based detector has to be extremely fast to minimize the residual signal from previous samples. In addition, this paper will present an overview of detector technologies and image-chain improvements which have been investigated in the last few years to improve the signal-to-noise ratio and the dose efficiency of CT scanners in regular examinations and in energy-discrimination techniques. Several parameters of the image chain in general, and of the detector technology in particular, contribute to the optimization of the final image quality. We will go through the properties of the post-patient collimation to improve the scatter-to-primary ratio; the scintillator material properties such as light output, afterglow, primary speed, and crosstalk to improve spectral imaging; the photodiode design characteristics; and the data acquisition system (DAS), to optimize for crosstalk, noise, and temporal/spatial resolution.

Keywords: computed tomography, X-ray detector, medical imaging, image quality, artifacts

Procedia PDF Downloads 195
672 Numerical Response of Coaxial HPGe Detector for Skull and Knee Measurement

Authors: Pabitra Sahu, M. Manohari, S. Priyadharshini, R. Santhanam, S. Chandrasekaran, B. Venkatraman

Abstract:

Radiation workers in reprocessing plants have a potential for internal exposure due to actinides and fission products. Radionuclides such as americium, lead, polonium, and europium are bone seekers and accumulate in the skeleton. As the major skeletal content is in the skull (13%) and knee (22%), measurements of old intakes have to be carried out on the skull and knee. At the Indira Gandhi Centre for Atomic Research, a twin HPGe-based actinide monitor is used for the measurement of actinides present in bone. Efficiency estimation, which is one of the prerequisites for the quantification of radionuclides, requires anthropomorphic phantoms, and such phantoms are very limited. Hence, in this study, efficiency curves for the twin HPGe-based actinide monitoring system are established theoretically using the FLUKA Monte Carlo method and the ICRP adult male voxel phantom. For skull measurement, the detector is placed over the forehead, and for knee measurement, one detector is placed over each knee. The efficiency values for radionuclides present in the knee and skull vary from 3.72E-04 to 4.19E-04 CPS/photon and from 5.22E-04 to 7.07E-04 CPS/photon, respectively, over the energy range 17 to 3000 keV. The efficiency curves for the measurement are established, and it is found that the efficiency value initially increases up to 100 keV and then starts decreasing. The skull efficiency values are 4% to 63% higher than those of the knee, depending on the energy, for all energies except 17.74 keV; the reason is the closeness of the detector to the skull compared to the knee. At 17.74 keV, however, the knee efficiency exceeds that of the skull because the greater thickness of the skull bones causes higher attenuation. The Minimum Detectable Activity (MDA) for 241Am present in the skull and knee is 9 Bq. 239Pu has an MDA of 950 Bq and 1270 Bq for the knee and skull, respectively, for a counting time of 1800 s. This paper discusses the simulation method and the results obtained in the study.

Keywords: FLUKA Monte Carlo method, ICRP adult male voxel phantom, knee, skull

Procedia PDF Downloads 51
671 Research and Design on a Portable Intravehicular Ultrasonic Leak Detector for Manned Spacecraft

Authors: Yan Rongxin, Sun Wei, Li Weidan

Abstract:

Based on the cascade sound theory of acoustics, the mechanisms of air-leak sound production, transmission, and signal detection have been analyzed. A formula relating the sound power, leak size, and air pressure in the spacecraft has been established, and the relationship between the leak sound pressure and the receiving direction and distance has been studied. The center frequency for a millimeter-diameter leak is above 20 kHz. The situation of air leaking from the spacecraft to space has been simulated, and experiments with different leak sizes, testing distances, and directions have been carried out. The sound pressure is directly proportional to the cosine of the angle between the leak and the sensor. A portable ultrasonic leak detector has been developed; its minimum detectable leak rate is 10⁻¹ Pa·m³/s, its testing radius is longer than 20 mm, its mass is less than 1.0 kg, and its electric power consumption is less than 2.2 W.

Keywords: leak testing, manned spacecraft, sound transmitting, ultrasonic

Procedia PDF Downloads 329
670 A Text Classification Approach Based on Natural Language Processing and Machine Learning Techniques

Authors: Rim Messaoudi, Nogaye-Gueye Gning, François Azelart

Abstract:

Automatic text classification applies mostly natural language processing (NLP) and other AI-guided techniques to classify text automatically in a faster and more accurate manner. This paper discusses the use of predictive maintenance to manage incident tickets within the company. It proposes a tool that processes and analyses the comments and notes written by administrators after resolving an incident ticket, with the goal of increasing the quality of these comments. The tool is based on NLP and machine learning techniques to perform textual analytics on the extracted data. The approach was tested using real data taken from the French National Railways (SNCF) company and yielded high-quality results.
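
A minimal sketch of the kind of NLP-plus-machine-learning pipeline the abstract describes is given below. The sample comments, labels, and the choice of a TF-IDF/logistic-regression model are illustrative assumptions; the paper does not state which classifier it uses.

```python
# Minimal text-classification sketch: TF-IDF features + a linear classifier
# over incident-resolution comments. Data and model choice are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

comments = [
    "restarted the ticketing service, incident resolved and root cause noted",
    "closed without analysis",
    "replaced faulty switch and verified connectivity with the client",
    "done",
]
quality = ["good", "poor", "good", "poor"]  # target: quality label of the comment

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(comments, quality)
print(model.predict(["rebooted server, incident fixed and documented"]))
```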

Keywords: machine learning, text classification, NLP techniques, semantic representation

Procedia PDF Downloads 103
669 Theoretical Analysis of Photoassisted Field Emission near the Metal Surface Using Transfer Hamiltonian Method

Authors: Rosangliana Chawngthu, Ramkumar K. Thapa

Abstract:

A model calculation of the photoassisted field emission current (PFEC) using the transfer Hamiltonian method will be presented here. Radiation is incident on the metal surface with a photon energy that is usually less than the work function of the metal under investigation. The incident radiation photoexcites electrons to a final state which lies below the vacuum level, so the electrons remain confined within the metal surface. A strong static electric field is then applied to the surface of the metal, which causes the photoexcited electrons to tunnel through the surface potential barrier into the vacuum region and constitutes the considerable current called the photoassisted field emission current. The incident radiation, usually a laser beam, causes the transition of electrons from the initial state to the final state, and the matrix element for this transition will be written down. For the calculation of the PFEC, the transfer Hamiltonian method is used. The initial-state wavefunction is calculated by using the Kronig-Penney potential model. The effect of the matrix element will also be studied. An appropriate dielectric model for the surface region of the metal will be used for the evaluation of the vector potential. A FORTRAN program is used for the calculation of the PFEC. The results will be checked against experimental data and other theoretical results.
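
For context, the Kronig-Penney model mentioned above fixes the allowed energies of the initial state through the standard band condition cos(ka) = cos(qa) + P·sin(qa)/(qa). The small numerical sketch below uses the generic delta-function (textbook) form of the model with assumed values of P and the lattice constant; it is not the paper's FORTRAN implementation.

```python
# Sketch of the Kronig-Penney band condition cos(ka) = cos(qa) + P*sin(qa)/(qa),
# used to locate allowed energy bands for the initial-state wavefunction.
# P, the lattice constant a, and the energy grid are generic assumed values.
import numpy as np

hbar = 1.054571817e-34   # J*s
m_e = 9.1093837015e-31   # kg
eV = 1.602176634e-19     # J
a = 3.0e-10              # lattice constant (assumed), m
P = 3.0                  # dimensionless barrier strength (assumed)

E = np.linspace(0.01, 40.0, 20000) * eV           # trial electron energies
q = np.sqrt(2.0 * m_e * E) / hbar                 # wave number inside the well
rhs = np.cos(q * a) + P * np.sin(q * a) / (q * a) # right-hand side of the condition
allowed = np.abs(rhs) <= 1.0                      # |cos(ka)| <= 1 -> allowed band

# Print the edges of the first few allowed bands, in eV.
edges = np.flatnonzero(np.diff(allowed.astype(int)) != 0)
print((E[edges] / eV).round(2)[:8])
```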

Keywords: photoassisted field emission, transfer Hamiltonian, vector potential, wavefunction

Procedia PDF Downloads 226
668 Blind Speech Separation Using SRP-PHAT Localization and Optimal Beamformer in Two-Speaker Environments

Authors: Hai Quang Hong Dam, Hai Ho, Minh Hoang Le Ngo

Abstract:

This paper investigates the problem of blind speech separation from a speech mixture of two speakers. A voice activity detector employing the Steered Response Power - Phase Transform (SRP-PHAT) is presented for detecting the activity information of the speech sources, and the desired speech signals are then extracted from the speech mixture by using an optimal beamformer. To evaluate the algorithm's effectiveness, a simulation using real speech recordings was performed in a double-talk situation where two speakers are active all the time. Evaluations show that the proposed blind speech separation algorithm offers a good interference suppression level whilst maintaining a low distortion level of the desired signal.
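
As an illustration of the PHAT weighting at the core of SRP-PHAT, the sketch below computes a two-microphone GCC-PHAT delay estimate on synthetic signals. The full detector described above steers over a grid of candidate locations and sums such phase-transformed correlations; the signals, sample rate, and delay here are assumptions for demonstration only.

```python
# Minimal GCC-PHAT sketch: the phase-transform-weighted cross-correlation that
# SRP-PHAT accumulates over candidate steering locations. Signals are synthetic.
import numpy as np

fs = 16000
rng = np.random.default_rng(0)
src = rng.standard_normal(fs)                   # 1 s of a noise-like "speech" source
delay = 12                                      # true inter-microphone delay, samples
mic1 = src
mic2 = np.r_[np.zeros(delay), src[:-delay]]     # delayed copy at the second microphone

def gcc_phat(ref, sig):
    """Estimate the delay (in samples) of `sig` relative to `ref`."""
    n = len(ref) + len(sig)
    R = np.fft.rfft(sig, n) * np.conj(np.fft.rfft(ref, n))
    R /= np.abs(R) + 1e-12                      # PHAT weighting: keep phase only
    cc = np.fft.irfft(R, n)
    lags = np.concatenate((np.arange(0, n - n // 2), np.arange(-(n // 2), 0)))
    return lags[np.argmax(cc)]                  # lag of maximum correlation

print(gcc_phat(mic1, mic2))                     # should recover +12 samples
```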

Keywords: blind speech separation, voice activity detector, SRP-PHAT, optimal beamformer

Procedia PDF Downloads 283
667 An Infrared Inorganic Scintillating Detector Applied in Radiation Therapy

Authors: Sree Bash Chandra Debnath, Didier Tonneau, Carole Fauquet, Agnes Tallet, Julien Darreon

Abstract:

Purpose: Inorganic scintillating dosimetry is the most recent promising technique to solve several dosimetric issues and provide quality assurance in radiation therapy. Despite several advantages, the major issue with scintillating detectors is the Cerenkov effect, typically induced in the visible emission range. In this context, the purpose of this research work is to evaluate the performance of a novel infrared inorganic scintillator detector (IR-ISD) in radiation therapy treatment to ensure a Cerenkov-free signal and the best match between the delivered and prescribed doses during treatment. Methods: A simple and small-scale infrared inorganic scintillating detector of 100 µm diameter with a sensitive scintillating volume of 2×10⁻⁶ mm³ was developed. A prototype of the dose verification system has been introduced based on the PTIR1470/F material (provided by Phosphor Technology®) used in the proposed novel IR-ISD. The detector was tested on an Elekta LINAC system tuned at 6 MV/15 MV and on a brachytherapy source (Ir-192) used in the patient treatment protocol. The associated dose rate was measured as a count rate (photons/s) using a highly sensitive photon counter (sensitivity ~20 ph/s). All measurements were performed in IBA water tank phantoms following the international Technical Reports series recommendations (TRS 381) for radiotherapy and the TG-43U1 recommendations for brachytherapy. The performance of the detector was tested through several dosimetric parameters such as PDD, beam profiling, Cerenkov measurement, dose linearity, dose-rate linearity, repeatability, and scintillator stability. Finally, a comparative study is also shown using a reference microdiamond dosimeter, Monte Carlo (MC) simulation, and data from recent literature. Results: This study highlights the complete removal of the Cerenkov effect, especially for small-field radiation beam characterization. The detector provides a fully linear response with dose in the 4 cGy to 800 cGy range, independently of the field size selected, from 5 x 5 cm² down to 0.5 x 0.5 cm². Excellent repeatability (0.2% variation from average) with day-to-day reproducibility (0.3% variation) was observed. Measurements demonstrated that the ISD has a superlinear behavior with dose rate (R² = 1) varying from 50 cGy/s to 1000 cGy/s. PDD profiles obtained in water present identical behavior, with a build-up maximum depth dose at 15 mm, for different small-field irradiations. Field profiles as small as 0.5 x 0.5 cm² have been characterized, and the field cross-profile presents a Gaussian-like shape. The standard deviation (1σ) of the scintillating signal remains within 0.02% while having a very low convolution effect, thanks to the small sensitive volume. Finally, during brachytherapy, a comparison with MC simulations shows that, considering energy dependency, the measurement agrees within 0.8% down to a 0.2 cm source-to-detector distance. Conclusion: The proposed scintillating detector shows no Cerenkov radiation and efficient performance for several radiation therapy measurement parameters. It is therefore anticipated that the IR-ISD system can be taken forward for validation in direct clinical investigations, such as appropriate dose verification and quality control in the Treatment Planning System (TPS).

Keywords: IR-Scintillating detector, dose measurement, micro-scintillators, Cerenkov effect

Procedia PDF Downloads 183
666 Implications of Creating a 3D Vignette as a Reflective Practice for Continuous Professional Development of Foreign Language Teachers

Authors: Samiah H. Ghounaim

Abstract:

The topic of this paper is significant because of the increasing need for intercultural training for foreign language teachers due to the continuous challenges they face in their diverse classrooms. First, the structure of the intercultural training program is briefly described, and the structure of a 3D vignette and its intended purposes are elaborated. In this first stage, the program was designed and implemented over a period of three months with a group of local and expatriate foreign language teachers/practitioners at a university in the Middle East. After that, a set of primary data collected during the first stage of this research on the design and co-construction process of a 3D vignette is reviewed and analysed in depth. Each practitioner designed a personal incident into a 3D vignette, where each dimension of the vignette viewed the same incident from a totally different perspective. Finally, the results and implications of having participants construct their personal incidents into a 3D vignette as a reflective practice are discussed in detail, along with possible extensions of the research. The process proved to be an effective reflective practice in which the participants were stimulated to view their incidents in a different light. Co-constructing one's own critical incidents, be it a positive experience or not, into a structured 3D vignette encouraged participants to decentralise themselves from the incidents, thus creating a personal reflective space where they had the opportunity to see different potential outcomes for each incident, as well as to prepare for the reflective discussion of their vignette with their peers. This provides implications for future developments in reflective writing practices and possibilities for educators' continuous professional development (CPD).

Keywords: 3D vignettes, intercultural competence training, reflective practice, teacher training

Procedia PDF Downloads 109
665 Effectiveness of Variable Speed Limit Signs in Reducing Crash Rates on Roadway Construction Work Zones in Alaska

Authors: Osama Abaza, Tanay Datta Chowdhury

Abstract:

As a driver's speed increases, so do the probability of an incident and the likelihood of injury. The presence of equipment, personnel, and a changing landscape in construction zones creates greater potential for incidents. This is especially concerning in Alaska, where summer construction activity, coinciding with the peak annual traffic volumes, cannot be avoided. In order to reduce vehicular speeding in work zones, and therefore the probability of crash and incident occurrence, variable speed limit (VSL) systems can be implemented in the form of radar speed display trailers, since such trailers have been shown to be effective at reducing vehicular speed in construction zones. Deployment of VSL not only helps reduce the 85th percentile speed but also predominantly reduces the mean speed. A total of 2,147 incidents along with 385 crashes occurred in only one month around construction zones in Alaska, which seriously requires proper attention. This research provided a thorough crash analysis to better understand the causes and provide proper countermeasures. Crashes were predominantly recorded as vehicle-object collisions and sideswipes, and thus a significant share of crashes fell into the no-injury to minor-injury groups of the severity class. Still, 35 major crashes, including 7 fatal ones, in a one-month period call for immediate action such as the implementation of a VSL system, since it has proved to reduce speeds in construction zones on Alaskan roadways.

Keywords: speed, construction zone, crash, severity

Procedia PDF Downloads 253
664 Development of a Combustible Gas Detector with Two Sensor Modules to Enable Measuring Range of Low Concentration

Authors: Young Gyu Kim, Sangguk Ahn, Gyoutae Park, Hiesik Kim

Abstract:

In industrial gas applications, there are many problems in detecting extremely small amounts of combustible gas (CH₄) if a conventional semiconductor sensor is used: measurement is difficult at low concentration levels, the stabilization time is long, and the initial response time is slow. In this study, we propose a method to solve these issues using two specific sensors while overcoming the effects of temperature and humidity. The idea is to combine a catalytic-type and a semiconductor-type sensor and to exploit the advantages of each sensor's characteristics. In order to achieve this goal, we reduced the fluctuations of the gas sensors with temperature and humidity by applying circuits designed for sensing temperature and humidity, and we derived the best calibration line for the gas sensors by adjusting a weight value corresponding to the changing patterns of temperature and humidity, using data acquired and stored in advance. We developed a gas leak detector using two sensor modules: it is first operated by the semiconductor sensor for measuring small gas quantities, and the catalytic-type sensor takes over when the measuring range of the first sensor is exceeded. Experiments conclusively verified sharper sensitivity and faster response times than a conventional gas sensor, even at lower gas concentration levels. We think that the proposed idea would also be very useful for developing leak detectors that measure extremely small quantities of other toxic and flammable gases.
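
A minimal sketch of the two-module hand-off logic described above is shown below. The threshold value, the linear temperature/humidity compensation weights, and the sensor-reading interface are illustrative assumptions only, not the calibration actually stored in the instrument.

```python
# Sketch of the two-sensor hand-off: the semiconductor module covers trace
# concentrations and the catalytic module takes over beyond its range.
# Thresholds, weights and the linear compensation model are assumptions.
SEMI_MAX_PPM = 10_000   # assumed upper limit of the semiconductor module
T_REF, RH_REF = 25.0, 50.0

def compensate(raw_ppm, temp_c, rh, w_t=0.02, w_rh=0.01):
    """Apply a stored weight per degree / per %RH to flatten T/RH drift."""
    return raw_ppm * (1.0 - w_t * (temp_c - T_REF) - w_rh * (rh - RH_REF) / 10.0)

def read_ch4_ppm(semi_raw, cata_raw, temp_c, rh):
    semi = compensate(semi_raw, temp_c, rh)
    if semi < SEMI_MAX_PPM:          # low range: trust the semiconductor sensor
        return semi, "semiconductor"
    return compensate(cata_raw, temp_c, rh), "catalytic"   # high-range hand-off

print(read_ch4_ppm(semi_raw=120.0, cata_raw=0.0, temp_c=31.0, rh=62.0))
print(read_ch4_ppm(semi_raw=15_000.0, cata_raw=14_500.0, temp_c=31.0, rh=62.0))
```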

Keywords: gas sensor, leak detector, low concentration, calibration

Procedia PDF Downloads 240
663 Probing Extensive Air Shower Primaries and Their Interactions by Combining Individual Muon Tracks and Shower Depth

Authors: Moon Moon Devi, Ran Budnik

Abstract:

Current large-area cosmic-ray surface detector arrays typically measure only the net flux and arrival time of the charged particles produced in an extensive air shower (EAS). Measurement of the individual charged particles at a surface array would provide additional distinguishing parameters to identify the primary and to map the very high energy interactions in the upper layers of the atmosphere. In turn, these may probe anomalies in QCD interactions at energies beyond the reach of current accelerators. Recent attempts at studying individual muon tracks are limited in their scalability to larger arrays and can only probe primary particles with energies up to about 10^15.5 eV. New developments in detector technology allow large-area detectors at realistic cost, however with limitations on energy resolution, directional information, and dynamic range. In this study, we perform a simulation study using CORSIKA to combine the energy spectrum and lateral spread of the muons with the depth of shower maximum (Xmax) of an EAS initiated by a primary at ultra-high energies (10¹⁶–10¹⁹ eV). Using proton and iron as the shower primaries, we show that the muon observables and Xmax together can be used to distinguish the primary. This study can be used to design a future surface-array detector, which will be able to enhance our knowledge of primaries and QCD interactions.

Keywords: ultra high energy extensive air shower, muon tracking, air shower primaries, QCD interactions

Procedia PDF Downloads 229
662 A Real-Time Snore Detector Using Neural Networks and Selected Sound Features

Authors: Stelios A. Mitilineos, Nicolas-Alexander Tatlas, Georgia Korompili, Lampros Kokkalas, Stelios M. Potirakis

Abstract:

Obstructive Sleep Apnea Hypopnea Syndrome (OSAHS) is a widespread chronic disease that mostly remains undetected, mainly because it is diagnosed via polysomnography, which is a time- and resource-intensive procedure. Screening the disease's symptoms at home could be used as an alternative approach to alert individuals who potentially suffer from OSAHS without compromising their everyday routine. Since snoring is usually linked to OSAHS, developing a snore detector is appealing as an enabling technology for screening OSAHS at home using ubiquitous equipment like commodity microphones (included in, e.g., smartphones). In this context, this study developed a snore detection tool, and we herein present the approach, the selection of specific sound features that discriminate snoring from environmental sounds, and the performance of the proposed tool. Furthermore, a Real-Time Snore Detector (RTSD) is built upon the snore detection tool and applied to whole-night sleep sound recordings, resulting in a large dataset of snoring sound excerpts that are made freely available to the public. The RTSD may be used either as a stand-alone tool that offers insight into an individual's sleep quality or as an independent component of OSAHS screening applications in future developments.
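
The general pattern the tool follows, frame-wise sound features fed to a neural network, is sketched below. The frame length, the two example features, the synthetic training data, and the tiny MLP are illustrative assumptions; the paper's actual feature set and network are not reproduced here.

```python
# Sketch of a frame-based snore/non-snore classifier: extract a couple of
# spectral features per audio frame and feed them to a small neural network.
# Features, synthetic data and network size are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier

fs, frame = 8000, 2048
rng = np.random.default_rng(1)

def features(x):
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    centroid = np.sum(freqs * spec) / (np.sum(spec) + 1e-12)   # spectral centroid
    energy = np.log(np.sum(x ** 2) + 1e-12)                    # log frame energy
    return [centroid, energy]

# Synthetic stand-ins: "snores" as low-frequency bursts, "background" as hiss.
snore = [np.sin(2 * np.pi * 120 * np.arange(frame) / fs) * rng.uniform(0.5, 1.0)
         for _ in range(40)]
noise = [rng.standard_normal(frame) * 0.05 for _ in range(40)]
X = np.array([features(f) for f in snore + noise])
y = [1] * 40 + [0] * 40

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)
print(clf.predict([features(np.sin(2 * np.pi * 110 * np.arange(frame) / fs))]))
```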

Keywords: obstructive sleep apnea hypopnea syndrome, apnea screening, snoring detection, machine learning, neural networks

Procedia PDF Downloads 208
661 Association of Alcohol Consumption with Active Tuberculosis in Taiwanese Adults: A Nationwide Population-Based Cohort Study

Authors: Yung-Feng Yen, Yun-Ju Lai

Abstract:

Background: Animal studies have shown that alcohol exposure may cause immunosuppression and increase susceptibility to tuberculosis (TB) infection. However, the temporal relationship between alcohol consumption and subsequent TB development remains unclear. This nationwide population-based cohort study aimed to investigate the impact of alcohol exposure on TB development in Taiwanese adults. Methods: We included 46,196 adult participants from three rounds (2001, 2005, 2009) of the Taiwan National Health Interview Survey. Alcohol consumption was classified as heavy, regular, social, or never alcohol use. Heavy alcohol consumption was defined as intoxication at least once per week. Alcohol consumption and other covariates were collected by in-person interviews at baseline. Incident cases of active TB were identified from the National Health Insurance database. Multivariate logistic regression was used to estimate the association between alcohol consumption and active TB, with adjustment for age, sex, smoking, socioeconomic status, and other covariates. Results: A total of 279 new cases of active TB occurred during the study follow-up period. Heavy (adjusted odds ratio [AOR], 5.21; 95% confidence interval [CI], 2.41-11.26) and regular alcohol use (AOR, 1.73; 95% CI, 1.26-2.38) were associated with higher risks of incident TB after adjusting for subject demographics and comorbidities. Moreover, a strong dose-response effect was observed between increasing alcohol consumption and incident TB (AOR, 2.26; 95% CI, 1.59-3.21; P < .001). Conclusion: Heavy and regular alcohol consumption were associated with higher risks of active TB. Future TB control programs should consider strategies to lower the overall level of alcohol consumption to reduce the TB disease burden.
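
For readers unfamiliar with how adjusted odds ratios of this kind are obtained, a schematic sketch of a multivariate logistic regression is given below. The variable names, the simulated data, and the built-in effect sizes are placeholders, not the Taiwanese survey data or the paper's model specification.

```python
# Schematic sketch of estimating adjusted odds ratios (AOR) for incident TB by
# multivariate logistic regression. All data below are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "alcohol": rng.choice(["never", "social", "regular", "heavy"], n,
                          p=[0.5, 0.3, 0.15, 0.05]),
    "age": rng.integers(20, 80, n),
    "male": rng.integers(0, 2, n),
    "smoker": rng.integers(0, 2, n),
})
# Simulated outcome with a built-in alcohol effect, for illustration only.
lin = -6 + 0.03 * df.age + 0.5 * df.smoker + df.alcohol.map(
    {"never": 0.0, "social": 0.1, "regular": 0.5, "heavy": 1.6})
df["tb"] = (rng.random(n) < 1 / (1 + np.exp(-lin))).astype(int)

fit = smf.logit("tb ~ C(alcohol, Treatment('never')) + age + male + smoker",
                data=df).fit(disp=0)
print(np.exp(fit.params).round(2))   # exponentiated coefficients = adjusted ORs
```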

Keywords: alcohol consumption, tuberculosis, risk factor, cohort study

Procedia PDF Downloads 228
660 Verifying the Performance of the Argon-41 Monitoring System from Fluorine-18 Production for Medical Applications

Authors: Nicole Virgili, Romolo Remetti

Abstract:

The aim of this work is to characterize, from a radiation protection point of view, the emission into the environment of air contaminated by argon-41. In this research work, 41Ar is produced by a TR19PET cyclotron, operated at 19 MeV and installed at the 'A. Gemelli' University Hospital, Rome, Italy, for fluorine-18 production. The production rate of 41Ar has been calculated on the basis of the scheduled operation cycles of the cyclotron and by utilising proper production algorithms. Extensive Monte Carlo calculations, carried out with the MCNP code, have then allowed the determination of the absolute detection efficiency, for 41Ar gamma rays, of a Geiger-Müller detector placed in the terminal part of the chimney. Results showed unsatisfactory detection efficiency values and the need to integrate the detection system with more efficient detectors.

Keywords: Cyclotron, Geiger Muller detector, MCNPX, argon-41, emission of radioactive gas, detection efficiency determination

Procedia PDF Downloads 152
659 Liquid Chromatographic Determination of Alprazolam with ACE Inhibitors in Bulk, Respective Pharmaceutical Products and Human Serum

Authors: Saeeda Nadir Ali, Najma Sultana, Muhammad Saeed Arayne, Amtul Qayoom

Abstract:

The present study describes a simple and fast liquid chromatographic method using an ultraviolet detector for the simultaneous determination of the anxiety-relief medicine alprazolam with the ACE inhibitors lisinopril, captopril, and enalapril, employing a Purospher Star C18 column (25 cm × 0.46 cm, 5 µm). Separation was achieved within 5 min at ambient temperature using methanol:water (8:2 v/v) adjusted to pH 2.9, monitoring the detector response at 220 nm. Optimum parameters were set up as per ICH (2006) guidelines. The calibration range was found to be 0.312-10 µg mL⁻¹ for alprazolam and 0.625-20 µg mL⁻¹ for all the ACE inhibitors, with correlation coefficients > 0.998 and detection limits of 85, 37, 68, and 32 ng mL⁻¹ for lisinopril, captopril, enalapril, and alprazolam, respectively. Intra-day and inter-day precision and accuracy of the assay were in the acceptable ranges of 0.05-1.62% RSD and 98.85-100.76% recovery. The method was determined to be robust and effectively useful for the estimation of the studied drugs in dosage formulations and human serum without interference from excipients or serum components.
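
As a worked illustration of how detection limits of this kind relate to the calibration data, the sketch below fits a linear calibration curve and applies the ICH 3.3·σ/slope and 10·σ/slope conventions. The peak areas are invented, and the choice of the ICH formula is an assumption, since the abstract does not state how the limits were derived.

```python
# Worked sketch: linear calibration curve plus ICH-style LOD/LOQ estimates
# (LOD = 3.3*sigma/slope, LOQ = 10*sigma/slope). Peak areas are invented.
import numpy as np

conc = np.array([0.312, 0.625, 1.25, 2.5, 5.0, 10.0])       # µg/mL, alprazolam range
area = np.array([41.0, 80.0, 163.0, 330.0, 655.0, 1310.0])  # detector response (a.u.)

slope, intercept = np.polyfit(conc, area, 1)
pred = slope * conc + intercept
sigma = np.sqrt(np.sum((area - pred) ** 2) / (len(conc) - 2))  # residual SD
r = np.corrcoef(conc, area)[0, 1]

print(f"slope={slope:.1f}, r={r:.4f}")
print(f"LOD={3.3 * sigma / slope * 1000:.0f} ng/mL, LOQ={10 * sigma / slope * 1000:.0f} ng/mL")
```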

Keywords: alprazolam, ACE inhibitors, RP HPLC, serum

Procedia PDF Downloads 515
658 Integrating a Security Operations Centre with an Organization’s Existing Procedures, Policies and Information Technology Systems

Authors: M. Mutemwa

Abstract:

A Cybersecurity Operations Centre (SOC) is a centralized hub for network event monitoring and incident response. SOCs are critical when determining an organization's cybersecurity posture because they can be used to detect, analyze, and report on various malicious activities. For most organizations, a SOC is not part of the initial design and implementation of the Information Technology (IT) environment but rather an afterthought. As a result, it is not natively a plug-and-play component; therefore, there are integration challenges when a SOC is introduced into an organization. A SOC is an independent hub that needs to be integrated with the existing procedures, policies, and IT systems of an organization, such as the service desk, ticket-logging system, reporting, etc. This paper discusses the challenges of integrating a newly developed SOC into an organization's existing IT environment. Firstly, the paper looks at which data sources should be incorporated into the Security Information and Event Management (SIEM) system, such as which host machines, servers, network endpoints, software, applications, web servers, etc., should be included for security posture monitoring; that is, which systems need to be monitored first and the order in which the rest of the systems follow. Secondly, the paper describes how to integrate the organization's ticket-logging system with the SOC SIEM, that is, how cybersecurity-related incidents should be logged by both analysts and non-technical employees of an organization, as well as the priority matrix for incident types and incident notifications. Thirdly, the paper looks at how to communicate awareness campaigns from the SOC and how to report on incidents that are found inside the SOC. Lastly, the paper looks at how to show value for the large investments that are poured into designing, building, and running a SOC.
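
As an illustration of the kind of priority matrix referred to above, a small sketch follows. The impact/urgency levels, the mapping to P1-P4, and the notification targets are generic ITIL-style assumptions for demonstration, not the matrix defined in the paper.

```python
# Sketch of an incident priority matrix and its notification routing, of the
# kind a SOC ties into the ticket-logging system. Levels/routing are assumptions.
PRIORITY_MATRIX = {            # (impact, urgency) -> priority
    ("high", "high"): "P1", ("high", "medium"): "P2", ("high", "low"): "P3",
    ("medium", "high"): "P2", ("medium", "medium"): "P3", ("medium", "low"): "P4",
    ("low", "high"): "P3", ("low", "medium"): "P4", ("low", "low"): "P4",
}
NOTIFY = {"P1": "on-call SOC analyst + CISO", "P2": "SOC shift lead",
          "P3": "service desk queue", "P4": "weekly report"}

def log_incident(summary: str, impact: str, urgency: str) -> dict:
    """Create a ticket record the way analysts or non-technical staff would."""
    p = PRIORITY_MATRIX[(impact, urgency)]
    return {"summary": summary, "impact": impact, "urgency": urgency,
            "priority": p, "notify": NOTIFY[p]}

print(log_incident("Phishing e-mail reported by finance", "medium", "high"))
```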

Keywords: cybersecurity operation centre, incident response, priority matrix, procedures and policies

Procedia PDF Downloads 155
657 Dependence of the Photoelectric Exponent on the Source Spectrum of the CT

Authors: Rezvan Ravanfar Haghighi, V. C. Vani, Suresh Perumal, Sabyasachi Chatterjee, Pratik Kumar

Abstract:

The X-ray attenuation coefficient µ(E) of any substance, at energy E, is the sum of contributions from Compton scattering, µCom(E), and the photoelectric effect, µPh(E). In terms of the electron density (ρe) and the effective atomic number (Zeff), µCom(E) is proportional to ρe·fKN(E), while µPh(E) is proportional to (ρe·Zeff^x)/E^y, where fKN(E) is the Klein-Nishina formula and x and y are the exponents for the photoelectric effect. By taking the sample's HU at two different excitation voltages (V = V1, V2) of the CT machine, we can solve for X = ρe and Y = ρe·Zeff^x from these two independent equations, as is attempted in DECT inversion. Since µCom(E) and µPh(E) are both energy dependent, the coefficients of inversion also depend on (a) the source spectrum S(E,V) and (b) the detector efficiency D(E) of the CT machine. In the present paper we tabulate these coefficients of inversion for different practical manifestations of S(E,V) and D(E). The HU(V) values from the CT follow <µ(V)> = <µw(V)>[1 + HU(V)/1000], where the subscript 'w' refers to water and the averaging process <...> accounts for the source spectrum S(E,V) and the detector efficiency D(E). Linearity of µ(E) with respect to X and Y implies that (a) <µ(V)> is a linear combination of X and Y and (b) for inversion, X and Y can be written as linear combinations of two independent observations <µ(V1)> and <µ(V2)> with V1 ≠ V2. These coefficients of inversion naturally depend upon S(E,V) and D(E). We numerically investigate this dependence for some practical cases, taking V = 100 and 140 kVp, as used for cardiological investigations. The S(E,V) are generated by using the Boone-Seibert source spectrum, superposed on aluminium filters of different thickness l_Al with 7 mm ≤ l_Al ≤ 12 mm, and D(E) is taken to be that of a typical Si[Li] solid-state or GdOS scintillator detector. In the values of X and Y found by using the calculated inversion coefficients, errors are below 2% for data with solutions of glycerol, sucrose, and glucose. For low-Zeff materials like propionic acid, Zeff^x is overestimated by 20%, with X within 1%. For high-Zeff materials like KOH, the value of Zeff^x is underestimated by 22%, while the error in X is +15%. These results imply that the source may have additional filtering beyond the aluminium filter specified by the manufacturer. It is also found that the difference between the inversion coefficients for the two types of detectors is negligible; the type of detector does not affect the DECT inversion algorithm used to find the unknown chemical characteristics of the scanned materials. The effect of the source, however, should be considered an important factor when calculating the coefficients of inversion.
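
To make the two-voltage inversion explicit, the small numeric sketch below solves for X and Y from two spectrum-averaged measurements. The coefficient and measurement values are made-up placeholders chosen to be roughly water-like, and the exponent x is assumed; in the paper these coefficients are computed from S(E,V) and D(E) as described.

```python
# Sketch of the two-voltage DECT inversion: <mu(V)> = a(V)*X + b(V)*Y with
# X = rho_e and Y = rho_e * Zeff^x, solved from measurements at V1 and V2.
# Coefficients, measured <mu> values and the exponent x are placeholders.
import numpy as np

a1, b1 = 0.170, 3.2e-5    # spectrum-averaged coefficients at V1 = 100 kVp (assumed)
a2, b2 = 0.162, 1.3e-5    # spectrum-averaged coefficients at V2 = 140 kVp (assumed)
mu1, mu2 = 0.190, 0.170   # <mu(V)> derived from HU (assumed), 1/cm, water-like

A = np.array([[a1, b1], [a2, b2]])
X, Y = np.linalg.solve(A, np.array([mu1, mu2]))   # linear 2x2 inversion
x = 3.2                                           # photoelectric exponent (assumed)
print(f"X = rho_e-like term = {X:.3f},  Zeff = {(Y / X) ** (1 / x):.2f}")
```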

Keywords: attenuation coefficient, computed tomography, photoelectric effect, source spectrum

Procedia PDF Downloads 402
656 The BNCT Project Using the Cf-252 Source: Monte Carlo Simulations

Authors: Marta Błażkiewicz-Mazurek, Adam Konefał

Abstract:

The project can be divided into three main parts: (i) modeling the Cf-252 neutron source and conducting an experiment to verify the correctness of the obtained results, (ii) design of the BNCT system infrastructure, and (iii) analysis of the results from the logical detector. Modeling of the Cf-252 source included designing the shape and size of the source as well as the energy and spatial distribution of the emitted neutrons. Two options were considered: a point source and a cylindrical spatial source. The energy distribution corresponded to various spectra taken from the specialized literature. Directionally isotropic neutron emission was simulated. The simulation results were compared with experimental values determined using the activation-detector method with indium foils and cadmium shields. The relative fluence rates of thermal and resonance neutrons were compared at chosen places in the vicinity of the source. The second part of the project, related to the modeling of the BNCT infrastructure, consisted of developing a simulation program taking into account all the essential components of this system. Materials with neutron-moderating, absorbing, and backscattering properties were adopted in the design. Additionally, a gamma-radiation filter was introduced into the beam output system. The analysis of the simulation results obtained using a logical detector located at the beam exit of the BNCT infrastructure included the neutron energies and their spatial distribution. Optimization of the system involved changing its size and materials to obtain a suitably collimated beam of thermal neutrons.

Keywords: BNCT, Monte Carlo, neutrons, simulation, modeling

Procedia PDF Downloads 34
655 Modeling Reflection and Transmission of Elastodiffusive Waves at a Semiconductor Interface

Authors: Amit Sharma, J. N. Sharma

Abstract:

This paper deals with the study of the reflection and transmission characteristics of acoustic waves at the interface of a semiconductor half-space and an elastic solid. The amplitude ratios (reflection and transmission coefficients) of the reflected and transmitted waves to that of the incident wave have been examined as functions of the angle of incidence for the case of a quasi-longitudinal wave. The special cases of normal and grazing incidence have also been derived with the help of the Gauss elimination method. The mathematical model, consisting of the governing partial differential equations of motion and of charge-carrier diffusion in n-type semiconductors and the elastic solid, has been solved both analytically and numerically in the study. The numerical computation of the reflection and transmission coefficients has been carried out using the MATLAB programming software for a silicon (Si) semiconductor and a copper elastic solid. The computer-simulated results have been plotted graphically for the Si semiconductor. The study may be useful in semiconductors, geology, and seismology, in addition to surface acoustic wave (SAW) devices.

Keywords: quasilongitudinal, reflection and transmission, semiconductors, acoustics

Procedia PDF Downloads 393
654 Artificial Neural Networks Based Calibration Approach for Six-Port Receiver

Authors: Nadia Chagtmi, Nejla Rejab, Noureddine Boulejfen

Abstract:

This paper presents a calibration approach based on artificial neural networks (ANN) to determine the envelope signal (I+jQ) of a six-port based receiver (SPR). The memory effects, also called dynamic behavior, and the nonlinearity introduced by the diode-based power detectors have been taken into consideration by the ANN. An experimental set-up was built to validate the efficiency of this method, and the efficiency of the approach has been confirmed by the obtained results in terms of waveforms. Moreover, the error vector magnitude (EVM) and the mean absolute error (MAE) have been calculated in order to confirm and test the ANN's performance in achieving I/Q recovery using the output voltage detected by the power detector. The baseband signal has been recovered using the ANN with EVMs no higher than 1% and an MAE no higher than 17.26 for the SPR excited with different types of signals such as QAM (quadrature amplitude modulation) and LTE (Long Term Evolution).
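
For reference, the two figures of merit mentioned above can be computed as in the sketch below. The "ideal" and "recovered" I/Q samples are synthetic placeholders, and the RMS normalisation convention used for EVM is an assumption, since the abstract does not state it.

```python
# Sketch of the EVM and MAE figures of merit used to judge I/Q recovery.
# Constellation and noise level are synthetic; EVM normalisation is assumed.
import numpy as np

rng = np.random.default_rng(0)
ideal = (rng.choice([-3, -1, 1, 3], 1000) + 1j * rng.choice([-3, -1, 1, 3], 1000)) / 3
recovered = ideal + 0.01 * (rng.standard_normal(1000) + 1j * rng.standard_normal(1000))

err = recovered - ideal
evm = 100 * np.sqrt(np.mean(np.abs(err) ** 2) / np.mean(np.abs(ideal) ** 2))  # % RMS
mae = np.mean(np.abs(err))                                                    # mean absolute error

print(f"EVM = {evm:.2f} %, MAE = {mae:.4f}")
```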

Keywords: six-port based receiver, calibration, nonlinearity, memory effect, artificial neural network

Procedia PDF Downloads 78
653 High Purity Germanium Detector Characterization by Means of Monte Carlo Simulation through Application of Geant4 Toolkit

Authors: Milos Travar, Jovana Nikolov, Andrej Vranicar, Natasa Todorovic

Abstract:

Over the years, High Purity Germanium (HPGe) detectors have proved to be an excellent practical tool and, as such, are today widely used in low-background γ-spectrometry. One of the advantages of gamma-ray spectrometry is its easy sample preparation, as chemical processing and separation of the studied subject are not required. Thus, with a single measurement, one can simultaneously perform both qualitative and quantitative analysis. One of the most prominent features of HPGe detectors, besides their excellent efficiency, is their superior resolution. This feature allows a researcher to perform a thorough analysis by discriminating photons of similar energies in the studied spectra where they would otherwise superimpose within a single energy peak and, as such, could potentially compromise the analysis and produce wrongly assessed results. Naturally, this feature is of great importance when the identification of radionuclides, as well as their activity concentrations, is being carried out and high precision is a necessity. In measurements of this nature, in order to be able to produce good and trustworthy results, one has to have initially performed an adequate full-energy peak (FEP) efficiency calibration of the equipment used. However, experimental determination of the response, i.e., the efficiency curves for a given detector-sample configuration and geometry, is not always easy and requires a certain set of reference calibration sources in order to account for and cover the broader energy ranges of interest. With the goal of overcoming these difficulties, many researchers have turned towards the application of different software toolkits that implement the Monte Carlo method (e.g., MCNP, FLUKA, PENELOPE, Geant4, etc.), as it has proven time and time again to be a very powerful tool. In the process of creating a reliable model, one has to have well-established and well-described specifications of the detector. Unfortunately, the documentation that manufacturers provide alongside the equipment is rarely sufficient for this purpose. Furthermore, certain parameters tend to evolve and change over time, especially with older equipment. Deterioration of these parameters consequently decreases the active volume of the crystal and can thus affect the efficiencies by a large margin if not properly taken into account. In this study, the optimisation method for two HPGe detectors through the implementation of the Geant4 toolkit developed by CERN is described, with the goal of further improving simulation accuracy in calculations of FEP efficiencies by investigating the influence of certain detector variables (e.g., crystal-to-window distance, dead-layer thicknesses, inner crystal void dimensions, etc.). The detectors on which the optimisation procedures were carried out were a standard traditional coaxial extended-range detector (XtRa HPGe, CANBERRA) and a broad-energy-range planar detector (BEGe, CANBERRA). The optimised models were verified through comparison with experimentally obtained data from measurements of a set of point-like radioactive sources. The results for both detectors displayed good agreement with the experimental data, falling within average statistical uncertainties of ~4.6% for the XtRa and ~1.8% for the BEGe detector, in the energy ranges of 59.4-1836.1 keV and 59.4-1212.9 keV, respectively.

Keywords: HPGe detector, γ spectrometry, efficiency, Geant4 simulation, Monte Carlo method

Procedia PDF Downloads 121
652 Quantification of Hydrogen Sulfide and Methyl Mercaptan in Air Samples from a Waste Management Facility

Authors: R. F. Vieira, S. A. Figueiredo, O. M. Freitas, V. F. Domingues, C. Delerue-Matos

Abstract:

The presence of sulphur compounds like hydrogen sulphide and mercaptans is one of the reasons why waste-water treatment and waste management are associated with odour emissions. In this context, having a quantification method for these compounds helps in the optimization of treatments aimed at their elimination, namely biofiltration processes. The aim of this study was the development of a method for the quantification of odorous gases in air samples from waste treatment plants. A method based on headspace solid-phase microextraction (HS-SPME) coupled with gas chromatography - flame photometric detection (GC-FPD) was used to analyse H2S and methyl mercaptan (MM). The extraction was carried out with a 75-μm Carboxen-polydimethylsiloxane fiber coating at 22 ºC for 20 min, and the analysis was performed on a Shimadzu GC 2010 Plus A with a sulphur filter detector: splitless mode (0.3 min), with a column temperature program from 60 ºC increased at 15 ºC/min to 100 ºC (2 min). The injector temperature was held at 250 ºC, and the detector at 260 ºC. For the calibration curve, a gas diluter (digital Hovagas G2 Multi-Component Gas Mixer) was used to prepare the standards. This unit had two input connections, one for a stream of the gas to be diluted and another for a stream of nitrogen, and an output connected to a glass bulb. A 40 ppm H2S cylinder and a 50 ppm MM cylinder were used. The equipment was programmed to the selected concentration, and it automatically carried out the dilution into the glass bulb. The mixture was left flowing through the glass bulb for 5 min, and then the extremities were closed. This method allowed calibration between 1 and 20 ppm for H2S and in the ranges 0.02-0.1 ppm and 1-3.5 ppm for MM. Several quantifications of air samples from the inlet and outlet of a biofilter operating in a waste management facility in the north of Portugal allowed the evaluation of the biofilter's performance.
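
The dilution performed by the gas mixer follows a simple mass balance, sketched below. The flow values are illustrative only, not the settings actually programmed on the Hovagas G2.

```python
# Sketch of the mass balance behind the gas-diluter standards: the cylinder
# stream is blended with nitrogen to reach the target concentration.
# Flow rates below are illustrative, not the programmed Hovagas G2 settings.

def diluted_ppm(c_cyl_ppm: float, q_cyl: float, q_n2: float) -> float:
    """Concentration after mixing a cylinder stream with a nitrogen stream."""
    return c_cyl_ppm * q_cyl / (q_cyl + q_n2)

def flows_for_target(c_cyl_ppm: float, c_target_ppm: float, q_total: float):
    """Split a fixed total flow into cylinder and N2 streams for a target ppm."""
    q_cyl = q_total * c_target_ppm / c_cyl_ppm
    return q_cyl, q_total - q_cyl

print(diluted_ppm(40.0, q_cyl=25.0, q_n2=175.0))      # 40 ppm H2S diluted to 5 ppm
print(flows_for_target(50.0, 0.05, q_total=200.0))    # MM standard at 0.05 ppm
```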

Keywords: biofiltration, hydrogen sulphide, mercaptans, quantification

Procedia PDF Downloads 477
651 Design and Simulation of a Radiation Spectrometer Using Scintillation Detectors

Authors: Waleed K. Saib, Abdulsalam M. Alhawsawi, Essam Banoqitah

Abstract:

The idea of this research is to design a radiation spectrometer using an LSO scintillation detector coupled to a C-series SiPM (silicon photomultiplier). The device can be used to detect gamma and X-ray radiation and is also designed to estimate the activity of the source contamination. The SiPM detects light in the visible range above the threshold and reads it as counts. Three gamma sources were used for these experiments, Cs-137, Am-241, and Co-60, with various activities. These sources were used in four experiments: operating the SiPM as a spectrometer, energy resolution, pile-up setting, and efficiency. The SiPM is connected to an MCA to perform as a spectrometer. Cerium-doped lutetium silicate (Lu₂SiO₅), with a light yield of 26000 photons/MeV, was coupled with the SiPM. As a result, all the main features of Cs-137, Am-241, and Co-60 are identified in the MCA. The experiment shows how photon energy and the probability of interaction are inversely related: the total attenuation decreases as the photon energy increases. An analytical calculation was made to obtain the FWHM resolution for each gamma source. The FWHM resolution for Am-241 (59 keV) is 28.75%, for Cs-137 (662 keV) is 7.85%, for Co-60 (1173 keV) is 4.46%, and for Co-60 (1332 keV) is 3.70%. Moreover, the experiment shows that the dead time and the number of counts decreased when pile-up rejection was disabled, and the FWHM decreased when pile-up rejection was enabled. The efficiencies were calculated at four different distances from the detector: 2, 4, 8, and 16 cm. The detection efficiency was observed to decline exponentially with increasing distance from the detector face. Conclusively, the SiPM board coupled with an LSO scintillator crystal operated as a spectrometer, and the SiPM energy resolution for the three gamma sources compared decently with that of other PMTs.
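
The quoted FWHM resolutions follow directly from the peak widths via resolution = FWHM/E × 100%. In the small sketch below, the FWHM values in keV are back-calculated from the percentages quoted in the abstract, purely for illustration.

```python
# Sketch of the FWHM energy-resolution calculation, resolution = FWHM/E * 100%.
# The FWHM widths (keV) are back-calculated from the quoted percentages.
peaks = {                     # label: (energy keV, FWHM keV)
    "Am-241   59 keV": (59.0, 16.96),
    "Cs-137  662 keV": (662.0, 51.97),
    "Co-60  1173 keV": (1173.0, 52.32),
    "Co-60  1332 keV": (1332.0, 49.28),
}
for name, (energy, fwhm) in peaks.items():
    print(f"{name}: resolution = {100 * fwhm / energy:.2f} %")
```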

Keywords: PMT, radiation, radiation detection, scintillation detectors, silicon photomultiplier, spectrometer

Procedia PDF Downloads 155
650 Interaction of Metals with Non-Conventional Solvents

Authors: Evgeny E. Tereshatov, C. M. Folden

Abstract:

Ionic liquids and deep eutectic mixtures represent so-called non-conventional solvents. The former, composed of discrete ions, are salts with melting temperatures below 100 °C. The latter, consisting of hydrogen bond donors and acceptors, are mixtures of at least two compounds, resulting in a melting temperature depression in comparison with those of the individual components. These systems can also be water-immiscible, which makes them applicable to metal extraction. This work covers the interactions of In, Tl, Ir, and Rh in hydrochloric acid media with eutectic mixtures, and of Er, Ir, and At in the gas phase with chemically modified α-detectors. The purpose is to study chemical systems based on non-conventional solvents in terms of their interaction with metals. Once promising systems are found, the next step is to modify the surface of the α-detectors used in online element production at cyclotrons in order to give the detectors chemical selectivity. Initially, the metal interactions are studied by means of the liquid-liquid extraction technique. Appropriate molecules are then first chemisorbed on a surrogate surface to understand the coating quality. Finally, a detector is covered with the same molecule, and the metal sorption on such detectors is studied in the online regime. It was found that chemical treatment of the surface can result in 99% coverage with monolayer formation. This surface is chemically active and can adsorb metals from hydrochloric acid solutions. Similarly, a detector surface was modified and tested during cyclotron-based experiments. Thus, a procedure for detector functionalization has been developed, and this opens an interesting opportunity for studying the chemisorption of elements which do not have stable isotopes.

Keywords: mechanism, radioisotopes, solvent extraction, gas phase sorption

Procedia PDF Downloads 103
649 The MCNP Simulation of Prompt Gamma-Ray Neutron Activation Analysis at TRR-1/M1

Authors: S. Sangaroon, W. Ratanatongchai, S. Khaweerat, R. Picha, J. Channuie

Abstract:

A prompt gamma-ray neutron activation analysis (PGNAA) system has been constructed and installed at a 6-inch-diameter neutron beam port of the Thai Research Reactor-1/Modification 1 (TRR-1/M1) since 1989. It was designed for reactor operation at a power of 1.2 MW. The purpose of the system is elemental and isotopic analysis. In 2016, the PGNAA facility will be upgraded to reduce the leakage and background of neutrons and gamma radiation at the sample and detector positions. In this work, the design of this facility is carried out based on the Monte Carlo method using the MCNP5 computer code. Conditions with different modification materials, thicknesses, and structures of the PGNAA facility, including the gamma collimator and the radiation shields of the detector, are simulated, and the optimal structural parameters, giving a significantly improved performance of the facility, are then obtained.

Keywords: MCNP simulation, PGNAA, Thai research reactor (TRR-1/M1), radiation shielding

Procedia PDF Downloads 384
648 X-Corner Detection for Camera Calibration Using Saddle Points

Authors: Abdulrahman S. Alturki, John S. Loomis

Abstract:

This paper discusses a corner detection algorithm for camera calibration. Calibration is a necessary step in many computer vision and image processing applications. Robust corner detection for an image of a checkerboard is required to determine intrinsic and extrinsic parameters. In this paper, an algorithm for fully automatic and robust X-corner detection is presented. Checkerboard corner points are automatically found in each image without user interaction or any prior information regarding the number of rows or columns. The approach represents each X-corner with a quadratic fitting function. Using the fact that the X-corners are saddle points, the coefficients in the fitting function are used to identify each corner location. The automation of this process greatly simplifies calibration. Our method is robust against noise and different camera orientations. Experimental analysis shows the accuracy of our method using actual images acquired at different camera locations and orientations.
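
A minimal numeric sketch of the saddle-point idea follows: fit a quadratic surface to a window of intensities, accept the point as an X-corner when the fitted surface is a saddle (negative Hessian determinant), and solve the zero-gradient condition for the sub-pixel location. The window size, fitting procedure, and acceptance test are a generic reconstruction, not the paper's exact algorithm.

```python
# Sketch of X-corner refinement: fit z = ax^2 + by^2 + cxy + dx + ey + f to a
# patch of intensities, test for a saddle (4ab - c^2 < 0), and solve grad = 0
# for the sub-pixel corner location. Generic reconstruction for illustration.
import numpy as np

def refine_x_corner(patch):
    h, w = patch.shape
    ys, xs = np.mgrid[0:h, 0:w]
    x, y, z = xs.ravel().astype(float), ys.ravel().astype(float), patch.ravel().astype(float)
    A = np.column_stack([x * x, y * y, x * y, x, y, np.ones_like(x)])
    a, b, c, d, e, f = np.linalg.lstsq(A, z, rcond=None)[0]
    if 4 * a * b - c * c >= 0:               # Hessian determinant >= 0 -> not a saddle
        return None
    # Gradient = 0:  [2a c; c 2b] [x y]^T = -[d e]^T
    cx, cy = np.linalg.solve([[2 * a, c], [c, 2 * b]], [-d, -e])
    return cx, cy

# Synthetic checkerboard X-corner centred at (5.0, 5.0) in an 11x11 patch.
yy, xx = np.mgrid[0:11, 0:11]
patch = np.tanh(xx - 5.0) * np.tanh(yy - 5.0)
print(refine_x_corner(patch))                # approximately (5.0, 5.0)
```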

Keywords: camera calibration, corner detector, edge detector, saddle points

Procedia PDF Downloads 409
647 Simulation of the Gamma-Ray Attenuation Coefficient for Some Common Shielding Materials Using a Monte Carlo Program

Authors: Cherief Houria, Fouka Mourad

Abstract:

In this work, a simulation of radiation attenuation is carried out for a photon detector with different common shielding materials, using a Monte Carlo program called PTM. The aim of the study is to investigate the effect of the atomic weight and the thickness of the shielding materials on their ability to attenuate gamma radiation. The linear attenuation coefficients of aluminum (Al), iron (Fe), and lead (Pb) were evaluated at a photon energy of 661.7 keV, corresponding to the emission of a standard Cs-137 radioactive point source. Experimental measurements were performed for the three materials to obtain these linear attenuation coefficients, using a NaI(Tl) gamma scintillation detector. Our results have been compared with simulations of the linear attenuation coefficient using the XCOM database and the Geant4 code, and they agree well with both sets of simulation data.
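
As a reminder of the quantity being simulated, the short sketch below extracts a linear attenuation coefficient from transmission data via the Beer-Lambert law, µ = ln(I0/I)/t. The count values are invented for illustration; they are not the paper's PTM or experimental results.

```python
# Sketch of the Beer-Lambert extraction mu = ln(I0/I)/t that underlies both the
# simulation and the NaI(Tl) measurement. The counts below are invented.
import numpy as np

I0 = 100000.0                                       # counts with no absorber, 661.7 keV
thick_cm = np.array([0.5, 1.0, 2.0])                # absorber thicknesses
counts_pb = np.array([55990.0, 31350.0, 9827.0])    # invented transmitted counts for Pb

mu = np.log(I0 / counts_pb) / thick_cm              # linear attenuation coefficient, 1/cm
print(mu.round(3))                                  # roughly constant ~1.16 /cm
half_value_layer = np.log(2) / mu.mean()
print(f"HVL = {half_value_layer:.2f} cm")
```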

Keywords: gamma photon, Monte Carlo program, radiation attenuation, shielding material, the linear attenuation coefficient

Procedia PDF Downloads 204
646 Investigation of Scattered Dose Rate and Exposure Parameters during Diagnostic Examinations Done with an Overcouch X-Ray Tube in a Nigerian Teaching Hospital

Authors: Gbenga Martins, Christopher J. Olowookere, Lateef Bamidele, Kehinde O. Olatunji

Abstract:

The aims of this research are to measure the scattered dose rate during X-ray examinations in an X-ray room, to compare the scattered dose rate with the exposure parameters based on the body region examined, and to examine X-ray examinations done with an overcouch tube. The research was carried out using the Gamma Scout software installed on a laptop computer to record the radiation counts, pulse rate, and dose rate. The measurement was performed by placing the detector at 90° to the incident X-ray beam. A proforma was used for the collection of patients' data such as age, sex, examination type, and initial diagnosis. Data such as focus-to-skin distance (FSD), body mass index (BMI), patient body thickness, and beam output (kVp) were collected at Obafemi Awolowo University, Ile-Ife, Western Nigeria. A total of 136 patients were considered during this research. The dose rate ranged between 14.21 and 86.78 µSv/h for the plain abdominal region, 2.86 and 85.70 µSv/h for the lumbosacral region, 1.3 and 3.6 µSv/yr in the pelvis region, 2.71 and 28.88 µSv/yr for the leg region, and 3.06 and 29.98 µSv/yr in the hand region. The results of this study were compared with those of other studies carried out in other countries. The findings indicated that the exposure parameters selected for each diagnostic examination contributed to the dose rate recorded. Therefore, these results call for a quality assurance program (QAP) in diagnostic X-ray units in Nigerian hospitals.

Keywords: X-radiation, exposure parameters, dose rate, pulse rate, number of counts, tube current, tube potential, diagnostic examination, scattered radiation

Procedia PDF Downloads 117