Search results for: leak detector
410 Development of a Pain Detector Using Microwave Radiometry Method
Authors: Nanditha Rajamani, Anirudhaa R. Rao, Divya Sriram
Abstract:
One of the greatest difficulties in treating patients with pain is the highly subjective nature of pain sensation. The measurement of pain intensity depends primarily on the patient’s report, often with little physical evidence to provide objective corroboration. This is further complicated by the fact that the few existing technologies, such as functional magnetic resonance imaging (fMRI), are expensive. The need is thus clear and urgent for a reliable, non-invasive, non-painful, objective, readily adoptable, and cost-efficient diagnostic platform that supplements the current regime with additional information to assist doctors in diagnosing these patients. Thus, our idea of developing a pain detector was conceived to take the detection and diagnosis of chronic and acute pain a step further.
Keywords: pain sensor, microwave radiometry, pain sensation, fMRI
Procedia PDF Downloads 456
409 Development of Wide Bandgap Semiconductor Based Particle Detector
Authors: Rupa Jeena, Pankaj Chetry, Pradeep Sarin
Abstract:
The study of fundamental particles and the forces governing them has always been an attractive field of theoretical study. With the advancement of new technologies and instruments, it is now possible to perform large-scale particle physics experiments for the validation of theoretical predictions. These experiments are generally carried out in a highly intense beam environment, which in turn requires the development of detector prototypes possessing properties like radiation tolerance, thermal stability, and fast timing response. Semiconductors like silicon, germanium, diamond, and gallium nitride (GaN) have been widely used for particle detection applications. Silicon and germanium, being narrow-bandgap semiconductors, require pre-cooling to suppress the noise from thermally generated intrinsic charge carriers. The application of diamond in large-scale experiments is rare owing to its high cost of fabrication, while GaN is one of the most extensively explored potential candidates. Considering all these requirements, we aim to introduce another wide bandgap semiconductor into this active area of research: we exploit the wide bandgap of rutile titanium dioxide (TiO2), along with its other properties, for particle detection. The thermal evaporation-oxidation (in a PID furnace) technique is used for the deposition of the film, and the Metal Semiconductor Metal (MSM) electrical contacts are made using Titanium+Gold (Ti+Au) (20/80 nm). Characterization comprising X-Ray Diffraction (XRD), Atomic Force Microscopy (AFM), Ultraviolet (UV)-Visible spectroscopy, and Laser Raman Spectroscopy (LRS) has been performed on the film to obtain detailed information about its surface morphology. On the other hand, electrical characterizations such as Current-Voltage (IV) measurements in dark and light conditions and tests with a laser are performed to better understand the working of the detector prototype. All these preliminary tests of the detector will be presented.
Keywords: particle detector, rutile titanium dioxide, thermal evaporation, wide bandgap semiconductors
Procedia PDF Downloads 79408 Evaluation of a Data Fusion Algorithm for Detecting and Locating a Radioactive Source through Monte Carlo N-Particle Code Simulation and Experimental Measurement
Authors: Hadi Ardiny, Amir Mohammad Beigzadeh
Abstract:
Through the utilization of a combination of various sensors and data fusion methods, the detection of potential nuclear threats can be significantly enhanced by extracting more information from different data sources. In this research, an experimental and modeling approach was employed to track a radioactive source by combining a surveillance camera and a radiation detector (NaI). To run this experiment, three mobile robots were utilized, one of them equipped with a radioactive source. An algorithm was developed to identify the contaminated robot through correlation between the camera images and the detector data. The computer vision method extracts the movements of all robots in the XY plane coordinate system, and the detector system records the gamma-ray count. The position of the robots and the corresponding count of the moving source were modeled using the MCNPX simulation code while considering the experimental geometry. The results demonstrated a high level of accuracy in finding and locating the target in both the simulation model and the experimental measurement. The modeling techniques prove to be valuable in designing different scenarios and intelligent systems before initiating any experiments.
Keywords: nuclear threats, radiation detector, MCNPX simulation, modeling techniques, intelligent systems
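For illustration, the contaminated-robot identification described above can be reduced to correlating each robot's camera-derived track against the measured count-rate profile. The sketch below assumes a simple inverse-square count model and an arbitrary detector position; all names and numbers are hypothetical rather than taken from the paper.

```python
import numpy as np

def identify_contaminated_robot(tracks, counts, det_xy=(0.0, 0.0)):
    """Pick the robot whose motion best explains the measured count rate.

    tracks : dict mapping robot id -> (N, 2) array of XY positions per time bin
    counts : (N,) array of gamma counts recorded by the NaI detector per time bin
    det_xy : assumed detector position in the same XY frame
    """
    det = np.asarray(det_xy)
    scores = {}
    for robot, xy in tracks.items():
        d2 = np.sum((np.asarray(xy) - det) ** 2, axis=1)  # squared distance per bin
        expected = 1.0 / np.clip(d2, 1e-6, None)          # inverse-square count model
        # Pearson correlation between the modelled and measured count profiles
        scores[robot] = np.corrcoef(expected, counts)[0, 1]
    return max(scores, key=scores.get), scores

# Illustrative use with three synthetic robot tracks (all names are hypothetical)
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 200)
tracks = {
    "robot_A": np.c_[2 + np.cos(t), 2 + np.sin(t)],
    "robot_B": np.c_[4 + 0.5 * np.cos(t), -3 + 0.5 * np.sin(t)],
    "robot_C": np.c_[-3 + 2 * np.cos(2 * t), 1 + 2 * np.sin(2 * t)],
}
true_d2 = np.sum(tracks["robot_B"] ** 2, axis=1)
counts = rng.poisson(5e3 / true_d2)                       # source carried by robot_B
print(identify_contaminated_robot(tracks, counts)[0])     # expected: robot_B
```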
Procedia PDF Downloads 123
407 Cosmic Muon Tomography at the Wylfa Reactor Site Using an Anti-Neutrino Detector
Authors: Ronald Collins, Jonathon Coleman, Joel Dasari, George Holt, Carl Metelko, Matthew Murdoch, Alexander Morgan, Yan-Jie Schnellbach, Robert Mills, Gareth Edwards, Alexander Roberts
Abstract:
At the Wylfa Magnox Power Plant, between 2014 and 2016, the VIDARR prototype anti-neutrino detector was deployed. It is composed of extruded plastic scintillating bars measuring 4 cm × 1 cm × 152 cm and utilises wavelength shifting fibres (WLS) and multi-pixel photon counters (MPPCs) to detect and quantify radiation. During deployment, it took cosmic muon data in accidental coincidence with the anti-neutrino measurements, with the power plant site buildings obscuring the muon sky. Cosmic muons have a significantly higher probability of being attenuated and/or absorbed by denser objects, and so one-sided cosmic muon tomography was utilised to image the reactor site buildings. In order to achieve clear building outlines, a control data set was taken at the University of Liverpool from 2016 to 2018, which had minimal occlusion of the cosmic muon flux by dense objects. By taking the ratio of these two data sets and using GEANT4 simulations, it is possible to perform a one-sided cosmic muon tomography analysis. This analysis can be used to discern specific buildings, building heights, and features at the Wylfa reactor site, including the reactor core/reactor core shielding, using ∼3 hours' worth of cosmic-ray detector live time. This result demonstrates the feasibility of using cosmic muon analysis to determine a segmented detector’s location with respect to surrounding buildings, assisted by aerial photography or satellite imagery.
Keywords: anti-neutrino, GEANT4, muon, tomography, occlusion
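As a sketch of the ratio step described above (not the authors' code), the one-sided transmission can be estimated by binning the site and reference muon tracks in azimuth and zenith and dividing the two histograms; the bin ranges and the equal-live-time simplification here are illustrative assumptions.

```python
import numpy as np

def transmission_map(site_tracks, ref_tracks, bins=(36, 18)):
    """Ratio of site to reference muon counts binned in (azimuth, zenith).

    site_tracks, ref_tracks : arrays of shape (N, 2) with (azimuth_deg, zenith_deg)
    per reconstructed muon track; the reference set is the unobstructed exposure.
    Live times are assumed equal here; in practice each histogram would first be
    normalised by its detector live time.
    """
    rng = [[0, 360], [0, 90]]
    h_site, _, _ = np.histogram2d(site_tracks[:, 0], site_tracks[:, 1], bins=bins, range=rng)
    h_ref, _, _ = np.histogram2d(ref_tracks[:, 0], ref_tracks[:, 1], bins=bins, range=rng)
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(h_ref > 0, h_site / h_ref, np.nan)  # < 1 where buildings attenuate
    return ratio
```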
Procedia PDF Downloads 186
406 Stratafix Barbed Suture Versus Polydioxanone Suture on the Rate of Pancreatic Fistula After Pancreaticoduodenectomy
Authors: Saniya Ablatt, Matthew Jacobsson, Jamie Whisler, Austin Forbes
Abstract:
Postoperative pancreatic fistula (POPF) is a complication that occurs in up to 41% of patients after pancreaticoduodenectomy. Although certain characteristics such as individual patient anatomy are known risk factors for POPF, the effect of barbed suture techniques remains underexplored. This study examines whether the use of Stratafix barbed suture versus PDS impacts the risk of developing POPF. After obtaining IRB exemption, a retrospective chart review was initiated involving patients who underwent pancreaticoduodenectomy for the treatment of malignant or premalignant lesions of the pancreas at our institution between April 1st, 2020 and April 30th, 2022. Patients were stratified into 2 groups according to the technique used to suture the pancreatico-jejunal anastomosis: Group 1 was composed of patients in whom 4.0 Stratafix® suture was used (n=41); Group 2 was composed of patients in whom 4.0 PDS suture was used (n=42). Data regarding patient age, sex, BMI, presence or absence of biochemical leak, presence or absence of grade B & C postoperative pancreatic fistulas, rate and type of in-hospital complications, rate of reoperation, 30-day readmission rate, 90-day mortality, and total mortality were compared between groups. 83 patients were included in our study, with 42 receiving Stratafix and 41 receiving PDS (50.6% vs 49.4%). Stratafix patients had fewer biochemical leaks (0.0% vs 4.8%, p=0.19) and higher rates of POPF, but this was not statistically significant (7.2% vs 2.4%, p=0.26). Additionally, there was no difference between the use of Stratafix versus PDS on the risk of clinically relevant grade B or C POPF (p=0.26, OR=3.25 [CI=0.74-16.43]). Of the independent variables including age, race, sex, BMI, and ASA class, BMI greater than 25 increased the risk of clinically relevant POPF by 7.7 times compared to patients with BMI less than 25 (p=0.03, OR=7.79 [1.04-88.51]). Despite no significant difference in primary outcomes, the Stratafix group had lower rates of secondary outcomes including 90-day mortality; bleeding, cardiac, and infectious complications; reoperation; and 30-day readmission. On statistical analysis, Stratafix decreased the risk of 30-day readmission (p=0.04, OR=0.21, CI=0.04-0.97) and had a marginally significant effect on the risk of reoperation (p=0.08, OR=0.24, CI=0.04-1.26). There was no difference between the use of Stratafix versus PDS on the risk of POPF (p=0.26). However, Stratafix decreased the risk of 30-day readmission (p=0.04), and BMI greater than 25 increased the risk of clinically relevant POPF (p=0.03).
Keywords: pancreas, hepatobiliary surgery, hepatobiliary, pancreatic leak, biochemical leak, fistula, pancreatic fistula
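For readers unfamiliar with the reported statistics, the sketch below shows how an odds ratio and a Woolf-type 95% confidence interval are typically computed from a 2×2 table; the counts are illustrative stand-ins, not the study data, and any covariate adjustment used in the study is not reproduced.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with Woolf (log) 95% CI for a 2x2 table:
        exposed:   a events, b non-events
        unexposed: c events, d non-events
    A 0.5 continuity correction is applied if any cell is zero.
    """
    if 0 in (a, b, c, d):
        a, b, c, d = (x + 0.5 for x in (a, b, c, d))
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo, hi = (math.exp(math.log(or_) + s * z * se) for s in (-1, 1))
    return or_, (lo, hi)

# Illustrative counts only (not taken from the study tables)
print(odds_ratio_ci(3, 39, 1, 41))
```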
Procedia PDF Downloads 128
405 Advanced Technologies for Detector Readout in Particle Physics
Authors: Y. Venturini, C. Tintori
Abstract:
Given the continuous demand for improved readout performance in particle and dark matter physics, CAEN SpA is pushing the development of advanced technologies for detector readout. We present Digitizers 2.0, which builds on the success of the previous generation of Digitizers and combines expanded capabilities with a redesigned user experience introducing the open FPGA. The first product of the family is the VX2740 (64 ch, 125 MS/s, 16 bit) for advanced waveform recording and Digital Pulse Processing, fitting the special requirements of Dark Matter and Neutrino experiments. In parallel, CAEN is developing the FERS-5200 platform, a Front-End Readout System designed to read out large multi-detector arrays, such as SiPMs, multi-anode PMTs, silicon strip detectors, wire chambers, GEMs, gas tubes, and others. This is a highly scalable distributed platform, based on small front-end cards synchronized and read out by a concentrator board, allowing extremely large experimental setups to be built. We plan to develop a complete family of cost-effective front-end cards tailored to specific detectors and applications. The first one available is the A5202, a 64-channel unit for SiPM readout based on the CITIROC ASIC by Weeroc.
Keywords: dark matter, digitizers, front-end electronics, open FPGA, SiPM
Procedia PDF Downloads 128
404 Improving 99mTc-tetrofosmin Myocardial Perfusion Images by Time Subtraction Technique
Authors: Yasuyuki Takahashi, Hayato Ishimura, Masao Miyagawa, Teruhito Mochizuki
Abstract:
Quantitative measurement of myocardial perfusion is possible with single photon emission computed tomography (SPECT) using a semiconductor detector. However, accumulation of 99mTc-tetrofosmin in the liver may make it difficult to assess perfusion accurately in the inferior myocardium. Our idea is to reduce the high accumulation in the liver by using dynamic SPECT imaging and a technique called time subtraction. We evaluated the performance of a new SPECT system with a cadmium-zinc-telluride solid-state semiconductor detector (Discovery NM 530c; GE Healthcare). Our system acquired list-mode raw data over 10 minutes for a typical patient. From the data, ten SPECT images were reconstructed, one for every minute of acquired data. Reconstruction with the semiconductor detector was based on an implementation of a 3-D iterative Bayesian reconstruction algorithm. We studied 20 patients with coronary artery disease (mean age 75.4 ± 12.1 years; range 42-86; 16 males and 4 females). In each subject, 259 MBq of 99mTc-tetrofosmin was injected intravenously. We performed both a phantom and a clinical study using dynamic SPECT. An approximation to a liver-only image is obtained by reconstructing an image from the early projections, during which the liver accumulation dominates (0.5~2.5 minutes SPECT image - 5~10 minutes SPECT image). The extracted liver-only image is then subtracted from a later SPECT image that shows both the liver and the myocardial uptake (5~10 minutes SPECT image - liver-only image). The time subtraction of the liver was possible in both the phantom and the clinical study, and the visualization of the inferior myocardium was improved. In past reports, the high accumulation overlapping the myocardium due to the liver rendered it un-diagnosable. Using our time subtraction method, the image quality of the 99mTc-tetrofosmin myocardial SPECT image is considerably improved.
Keywords: 99mTc-tetrofosmin, dynamic SPECT, time subtraction, semiconductor detector
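The time-subtraction arithmetic described above amounts to simple voxel-wise image algebra; a minimal sketch is given below, assuming the early (0.5-2.5 min) and late (5-10 min) reconstructions are already co-registered NumPy volumes and that negative voxels are clipped, which is our assumption rather than a stated part of the method.

```python
import numpy as np

def liver_subtracted(early, late):
    """Time-subtraction sketch for dynamic SPECT volumes.

    early : reconstruction from the early window (liver uptake dominates)
    late  : reconstruction from the later window (liver + myocardium)
    liver_only ~ early - late; subtracting it from the late image suppresses
    the hepatic activity that overlaps the inferior myocardial wall.
    """
    liver_only = np.clip(early - late, 0, None)   # negative voxels are not physical
    return np.clip(late - liver_only, 0, None)
```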
Procedia PDF Downloads 335
403 Non-Contact Digital Music Instrument Using Light Sensing Technology
Authors: Aishwarya Ravichandra, Kirtana Kirtivasan, Adithi Mahesh, Ashwini S. Savanth
Abstract:
A Non-Contact Digital Music System has been conceptualized and implemented to create a new era of digital music. This system replaces the strings of a traditional stringed instrument with laser beams to avoid bruising of the user’s hand. The system consists of seven laser modules, detector modules and distance sensors that form the basic hardware blocks of this instrument. An Arduino ATmega2560 microcontroller is used as the primary interface between the hardware and the software. MIDI (Musical Instrument Digital Interface) is used as the protocol to establish communication between the instrument and the virtual synthesizer software.
Keywords: Arduino, detector, laser, MIDI, note on, note off, pitch bend, Sharp IR distance sensor
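The firmware in the paper runs on the Arduino; purely as an illustration of the MIDI message flow (note on when a beam is broken, note off when restored, pitch bend from the distance reading), here is a hypothetical host-side sketch using the Python mido library, with the beam-to-note mapping and sensor range invented for the example.

```python
import mido

NOTE_FOR_BEAM = {0: 60, 1: 62, 2: 64, 3: 65, 4: 67, 5: 69, 6: 71}  # hypothetical 7-note mapping

def beam_event(beam_index, broken, distance_cm=None):
    """Build the MIDI messages for one laser 'string'.

    Breaking a beam produces a Note On, restoring it a Note Off, and an
    optional Sharp IR distance reading is mapped onto the pitch-bend range.
    """
    note = NOTE_FOR_BEAM[beam_index]
    msgs = [mido.Message("note_on" if broken else "note_off", note=note, velocity=100)]
    if broken and distance_cm is not None:
        frac = min(max((distance_cm - 5.0) / 35.0, 0.0), 1.0)   # 5-40 cm assumed sensor range
        msgs.append(mido.Message("pitchwheel", pitch=int(-8192 + frac * 16383)))
    return msgs

for m in beam_event(2, True, distance_cm=20):
    print(m.bytes())   # raw bytes that would be forwarded to the virtual synthesizer
```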
Procedia PDF Downloads 407
402 Predicting Loss of Containment in Surface Pipeline using Computational Fluid Dynamics and Supervised Machine Learning Model to Improve Process Safety in Oil and Gas Operations
Authors: Muhammmad Riandhy Anindika Yudhy, Harry Patria, Ramadhani Santoso
Abstract:
Loss of containment is the primary hazard that process safety management is concerned with in the oil and gas industry. Escalation to more serious consequences begins with loss of containment: oil and gas released through leakage or spillage from the primary containment can result in pool fires, jet fires and even explosions when the release meets an ignition source in the operations. Therefore, the heart of process safety management is avoiding loss of containment and mitigating its impact through the implementation of safeguards. The most effective safeguard in this case is an early detection system that alerts Operations to take action prior to a potential loss of containment. The value of a detection system increases when it is applied to a long surface pipeline that is naturally difficult to monitor at all times and is exposed to multiple causes of loss of containment, from natural corrosion to illegal tapping. Based on prior research and studies, detecting loss of containment accurately in a surface pipeline is difficult. The trade-off between cost-effectiveness and high accuracy has been the main issue when selecting a traditional detection method. The current best-performing method, the Real-Time Transient Model (RTTM), requires analysis of closely positioned pressure, flow and temperature (PVT) points in the pipeline to be accurate. Having multiple adjacent PVT sensors along the pipeline is expensive, hence generally not a viable alternative from an economic standpoint. A conceptual approach that combines mathematical modeling using computational fluid dynamics with a supervised machine learning model has shown promising results for predicting leakage in the pipeline. Mathematical modeling is used to generate simulation data, and this data is used to train the leak detection and localization models. Mathematical models and simulation software have also been shown to provide results comparable with experimental data at very high levels of accuracy. While the supervised machine learning model requires a large training dataset for the development of accurate models, mathematical modeling has been shown to be able to generate the required datasets to justify the application of data analytics for the development of model-based leak detection systems for petroleum pipelines. This paper presents a review of key leak detection strategies for oil and gas pipelines, with a specific focus on crude oil applications, and presents the opportunities for the use of data analytics tools and mathematical modeling for the development of a robust real-time leak detection and localization system for surface pipelines. A case study is also presented.
Keywords: pipeline, leakage, detection, AI
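As a hedged illustration of the simulation-trained supervised model described above (the paper does not commit to a specific learner here), the sketch below trains a random-forest classifier on a synthetic stand-in for CFD-generated PVT sensor data; the feature layout and leak signature are invented for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical training table standing in for CFD/transient-model output:
# each row holds pressure/flow/temperature (PVT) readings from a few sensor
# stations plus a label for whether a leak is present in that scenario.
rng = np.random.default_rng(1)
n = 5000
X = rng.normal(size=(n, 9))                 # e.g. 3 stations x (P, V, T), standardised
leak = rng.integers(0, 2, size=n)           # 0 = intact, 1 = leaking
X[leak == 1, 0] -= 0.8                      # toy signature: upstream pressure drop
X[leak == 1, 3] += 0.5                      # and a downstream flow imbalance

X_tr, X_te, y_tr, y_te = train_test_split(X, leak, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"hold-out accuracy: {clf.score(X_te, y_te):.2f}")
```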
Procedia PDF Downloads 190
401 X-Ray Detector Technology Optimization in CT Imaging
Authors: Aziz Ikhlef
Abstract:
Most multi-slice CT scanners are built with detectors composed of scintillator-photodiode arrays. The photodiode arrays are mainly based on front-illuminated technology for detectors under 64 slices and on back-illuminated photodiodes for systems of 64 slices or more. Designs based on back-illuminated photodiodes were investigated for CT machines to overcome the challenge of the higher number of runs and connections required in front-illuminated diodes. In backlit diodes, the electronic noise has already been improved because of the reduction of the load capacitance due to the reduced routing. This translates into better image quality in low-signal applications, improving low-dose imaging in a large patient population. With the fast development of multi-detector-row CT (MDCT) scanners and the increasing number of examinations, both the medical and regulatory communities have raised significant concerns about the radiation dose received by the patient. In order to reduce individual exposure and in response to the recommendations of the International Commission on Radiological Protection (ICRP), which suggests that all exposures should be kept as low as reasonably achievable (ALARA), every manufacturer is trying to implement strategies and solutions to optimize dose efficiency and image quality based on x-ray emission and scanning parameters. Added demands on CT detector performance also come from the increased utilization of spectral CT or dual-energy CT, in which projection data at two different tube potentials are collected. One of the approaches utilizes a technology called fast-kVp switching, in which the tube voltage is switched between 80 kVp and 140 kVp in a fraction of a millisecond. To reduce the cross-contamination of signals, the scintillator-based detector temporal response has to be extremely fast to minimize the residual signal from previous samples. In addition, this paper will present an overview of detector technologies and image chain improvements which have been investigated in the last few years to improve the signal-to-noise ratio and the dose efficiency of CT scanners in regular examinations and in energy discrimination techniques. Several parameters of the image chain in general and of the detector technology in particular contribute to the optimization of the final image quality. We will go through the properties of the post-patient collimation to improve the scatter-to-primary ratio; the scintillator material properties such as light output, afterglow, primary speed and crosstalk to improve spectral imaging; the photodiode design characteristics; and the data acquisition system (DAS), to optimize for crosstalk, noise and temporal/spatial resolution.
Keywords: computed tomography, X-ray detector, medical imaging, image quality, artifacts
Procedia PDF Downloads 271
400 X-Ray Detector Technology Optimization in Computed Tomography
Authors: Aziz Ikhlef
Abstract:
Most multi-slice Computed Tomography (CT) scanners are built with detectors composed of scintillator-photodiode arrays. The photodiode arrays are mainly based on front-illuminated technology for detectors under 64 slices and on back-illuminated photodiodes for systems of 64 slices or more. Designs based on back-illuminated photodiodes were investigated for CT machines to overcome the challenge of the higher number of runs and connections required in front-illuminated diodes. In backlit diodes, the electronic noise has already been improved because of the reduction of the load capacitance due to the reduced routing. This is translated into better image quality in low-signal applications, improving low-dose imaging in a large patient population. With the fast development of multi-detector-row CT (MDCT) scanners and the increasing number of examinations, both the medical and regulatory communities have raised significant concerns about the radiation dose received by the patient. In order to reduce individual exposure and in response to the recommendations of the International Commission on Radiological Protection (ICRP), which suggests that all exposures should be kept as low as reasonably achievable (ALARA), every manufacturer is trying to implement strategies and solutions to optimize dose efficiency and image quality based on x-ray emission and scanning parameters. Added demands on CT detector performance also come from the increased utilization of spectral CT or dual-energy CT, in which projection data at two different tube potentials are collected. One of the approaches utilizes a technology called fast-kVp switching, in which the tube voltage is switched between 80 kVp and 140 kVp in a fraction of a millisecond. To reduce the cross-contamination of signals, the scintillator-based detector temporal response has to be extremely fast to minimize the residual signal from previous samples. In addition, this paper will present an overview of detector technologies and image chain improvements which have been investigated in the last few years to improve the signal-to-noise ratio and the dose efficiency of CT scanners in regular examinations and in energy discrimination techniques. Several parameters of the image chain in general and of the detector technology in particular contribute to the optimization of the final image quality. We will go through the properties of the post-patient collimation to improve the scatter-to-primary ratio; the scintillator material properties such as light output, afterglow, primary speed and crosstalk to improve spectral imaging; the photodiode design characteristics; and the data acquisition system (DAS), to optimize for crosstalk, noise and temporal/spatial resolution.
Keywords: computed tomography, X-ray detector, medical imaging, image quality, artifacts
Procedia PDF Downloads 194
399 Numerical Response of Coaxial HPGe Detector for Skull and Knee Measurement
Authors: Pabitra Sahu, M. Manohari, S. Priyadharshini, R. Santhanam, S. Chandrasekaran, B. Venkatraman
Abstract:
Radiation workers in reprocessing plants have a potential for internal exposure due to actinides and fission products. Radionuclides like americium, lead, polonium and europium are bone seekers and accumulate in the skeleton. As the major skeletal content is in the skull (13%) and knee (22%), measurements of old intakes have to be carried out in the skull and knee. At the Indira Gandhi Centre for Atomic Research, a twin HPGe-based actinide monitor is used for the measurement of actinides present in bone. Efficiency estimation, which is one of the prerequisites for the quantification of radionuclides, requires anthropomorphic phantoms; such phantoms are very limited. Hence, in this study, efficiency curves for the twin HPGe-based actinide monitoring system are established theoretically using the FLUKA Monte Carlo method and the ICRP adult male voxel phantom. In the case of skull measurement, the detector is placed over the forehead, and for knee measurement, one detector is placed over each knee. The efficiency values for radionuclides present in the knee and skull vary from 3.72E-04 to 4.19E-04 CPS/photon and 5.22E-04 to 7.07E-04 CPS/photon, respectively, for the energy range 17 to 3000 keV. The efficiency curves for the measurement are established, and it is found that the efficiency value initially increases up to 100 keV and then starts decreasing. The skull efficiency values are 4% to 63% higher than those of the knee, depending on the energy, for all energies except 17.74 keV. The reason is the closeness of the detector to the skull compared to the knee. At 17.74 keV, however, the efficiency of the knee is higher than that of the skull due to the higher attenuation caused in the skull bones because of their greater thickness. The Minimum Detectable Activity (MDA) for 241Am present in the skull and knee is 9 Bq. 239Pu has an MDA of 950 Bq and 1270 Bq for the knee and skull, respectively, for a counting time of 1800 sec. This paper discusses the simulation method and the results obtained in the study.
Keywords: FLUKA Monte Carlo method, ICRP adult male voxel phantom, knee, skull
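The MDA figures quoted above are consistent with the usual Currie-type expression; assuming that form (the paper does not spell out its formula), a minimal calculation looks like the sketch below, with purely illustrative input numbers.

```python
import math

def currie_mda(background_counts, efficiency, gamma_yield, live_time_s):
    """Currie minimum detectable activity (Bq), assuming the common form
    MDA = (2.71 + 4.65*sqrt(B)) / (eff * yield * t),
    where B is the background accumulated in the peak region over live_time_s,
    eff is in counts/photon and yield in photons per decay.
    """
    return (2.71 + 4.65 * math.sqrt(background_counts)) / (
        efficiency * gamma_yield * live_time_s
    )

# Illustrative numbers only (not the values used in the paper)
print(currie_mda(background_counts=400, efficiency=5e-4, gamma_yield=0.36,
                 live_time_s=1800))
```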
Procedia PDF Downloads 51
398 Enhanced Water Vapor Flow in Silica Microtubes Explained by Maxwell’s Tangential Momentum Accommodation and Langmuir’s Adsorption
Authors: Wenwen Lei, David R. Mckenzie
Abstract:
Recent findings of anomalously high gas flow rates in carbon nanotubes show that smooth hydrophobic walls can increase specular reflection of molecules and reduce the tangential momentum accommodation coefficient (TMAC). Here we report the first measurements of water vapor flows in microtubes over a wide humidity range and show that for hydrophobic silica there is a range of humidity over which an adsorbed water layer reduces the TMAC and accelerates the flow. Our results show that this association between hydrophobicity and accelerated moisture flow occurs in readily available materials. We develop a hierarchical theory that unifies Maxwell’s ideas on TMAC with Langmuir’s ideas on adsorption. We fit the TMAC data as a function of humidity with the hierarchical theory based on two stages of Langmuir adsorption and derive total adsorption isotherms for water on hydrophobic silica that agree with direct observations. We propose structures for each stage of the water adsorption: the first reduces the TMAC by passivating adsorptive patches and smoothing the surface, while the second resembles bulk water with a large TMAC. We find that leak testing of moisture barriers with an ideal gas such as helium may not be accurate enough for critical applications and that direct measurements of the water leak rate should be made.
Keywords: water vapor flows, silica microtubes, TMAC, enhanced flow rates
Procedia PDF Downloads 274
397 Blind Speech Separation Using SRP-PHAT Localization and Optimal Beamformer in Two-Speaker Environments
Authors: Hai Quang Hong Dam, Hai Ho, Minh Hoang Le Ngo
Abstract:
This paper investigates the problem of blind speech separation from a mixture of two speakers. A voice activity detector employing the Steered Response Power - Phase Transform (SRP-PHAT) is presented for detecting the activity information of the speech sources, and the desired speech signals are then extracted from the mixture by using an optimal beamformer. To evaluate the algorithm's effectiveness, a simulation using real speech recordings was performed in a double-talk situation where the two speakers are active all the time. Evaluations show that the proposed blind speech separation algorithm offers a good interference suppression level whilst maintaining a low distortion level of the desired signal.
Keywords: blind speech separation, voice activity detector, SRP-PHAT, optimal beamformer
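The SRP-PHAT functional is built from PHAT-weighted generalized cross-correlations summed over microphone pairs; a minimal sketch of the per-pair GCC-PHAT step is given below (the paper's full steered-power search and beamformer are not reproduced).

```python
import numpy as np

def gcc_phat(x1, x2, fs, max_tau=None):
    """GCC-PHAT between two microphone channels: the building block of the
    SRP-PHAT functional, which sums this correlation over all microphone pairs
    at the delays implied by each candidate source position."""
    n = len(x1) + len(x2)
    X1, X2 = np.fft.rfft(x1, n=n), np.fft.rfft(x2, n=n)
    cross = X1 * np.conj(X2)
    cross /= np.abs(cross) + 1e-12          # PHAT weighting: keep phase only
    cc = np.fft.irfft(cross, n=n)
    max_shift = n // 2 if max_tau is None else min(int(fs * max_tau), n // 2)
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    tau = (np.argmax(np.abs(cc)) - max_shift) / fs
    return tau, cc

# Toy check: a 3-sample delayed copy of white noise at 16 kHz
fs = 16000
rng = np.random.default_rng(0)
s = rng.normal(size=4096)
print(gcc_phat(s, np.roll(s, 3), fs)[0] * fs)   # about -3 samples of lag
```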
Procedia PDF Downloads 283
396 An Infrared Inorganic Scintillating Detector Applied in Radiation Therapy
Authors: Sree Bash Chandra Debnath, Didier Tonneau, Carole Fauquet, Agnes Tallet, Julien Darreon
Abstract:
Purpose: Inorganic scintillating dosimetry is the most recent promising technique to solve several dosimetric issues and provide quality assurance in radiation therapy. Despite several advantages, the major issue with scintillating detectors is the Cerenkov effect, typically induced in the visible emission range. In this context, the purpose of this research work is to evaluate the performance of a novel infrared inorganic scintillator detector (IR-ISD) in radiation therapy to ensure a Cerenkov-free signal and the best match between the delivered and prescribed doses during treatment. Methods: A simple and small-scale infrared inorganic scintillating detector of 100 µm diameter with a sensitive scintillating volume of 2x10-6 mm3 was developed. A prototype of the dose verification system has been introduced based on the PTIR1470/F material (provided by Phosphor Technology®) used in the proposed novel IR-ISD. The detector was tested on an Elekta LINAC system tuned at 6 MV/15 MV and a brachytherapy source (Ir-192) used in the patient treatment protocol. The associated dose rate was measured as a count rate (photons/s) using a highly sensitive photon counter (sensitivity ~20 ph/s). Overall measurements were performed in IBA water tank phantoms by following the international Technical Reports Series recommendations (TRS 381) for radiotherapy and the TG-43U1 recommendations for brachytherapy. The performance of the detector was tested through several dosimetric parameters such as PDD, beam profiling, Cerenkov measurement, dose linearity, dose rate linearity, repeatability, and scintillator stability. Finally, a comparative study is also shown using a reference microdiamond dosimeter, Monte Carlo (MC) simulation, and data from recent literature. Results: This study highlights the complete removal of the Cerenkov effect, especially for small-field radiation beam characterization. The detector provides a fully linear response with dose in the 4 cGy to 800 cGy range, independently of the field size selected from 5 x 5 cm² down to 0.5 x 0.5 cm². Excellent repeatability (0.2% variation from average) with day-to-day reproducibility (0.3% variation) was observed. Measurements demonstrated that the ISD has superlinear behavior with dose rate (R²=1) varying from 50 cGy/s to 1000 cGy/s. PDD profiles obtained in water present identical behavior, with a build-up maximum depth dose at 15 mm for different small-field irradiations. Field profiles as small as 0.5 x 0.5 cm² have been characterized, and the field cross profile presents a Gaussian-like shape. The standard deviation (1σ) of the scintillating signal remains within 0.02% while having a very low convolution effect, thanks to the small sensitive volume. Finally, during brachytherapy, a comparison with MC simulations shows that, considering energy dependency, measurements agree within 0.8% down to a 0.2 cm source-to-detector distance. Conclusion: The proposed scintillating detector shows no Cerenkov radiation and efficient performance for several radiation therapy measurement parameters. Therefore, it is anticipated that the IR-ISD system can proceed to validation in direct clinical investigations, such as appropriate dose verification and quality control in the Treatment Planning System (TPS).
Keywords: IR-scintillating detector, dose measurement, micro-scintillators, Cerenkov effect
Procedia PDF Downloads 182
395 Probing Extensive Air Shower Primaries and Their Interactions by Combining Individual Muon Tracks and Shower Depth
Authors: Moon Moon Devi, Ran Budnik
Abstract:
Current large-area cosmic ray surface detector arrays typically measure only the net flux and arrival time of the charged particles produced in an extensive air shower (EAS). Measurement of the individual charged particles at a surface array would provide additional distinguishing parameters to identify the primary and to map the very high energy interactions in the upper layers of the atmosphere. In turn, these may probe anomalies in QCD interactions at energies beyond the reach of current accelerators. Recent attempts at studying individual muon tracks are limited in their expandability to larger arrays and can only probe primary particles with energies up to about 10^15.5 eV. New developments in detector technology allow for a realistic cost of large-area detectors, however with limitations on energy resolution, directional information, and dynamic range. In this study, we perform a simulation study using CORSIKA to combine the energy spectrum and lateral spread of the muons with the longitudinal depth (Xmax) of an EAS initiated by a primary at ultra-high energies (10¹⁶–10¹⁹ eV). Using proton and iron as the shower primaries, we show that the muon observables and Xmax together can be used to distinguish the primary. This study can be used to design a future surface-array detector, which will be able to enhance our knowledge of primaries and QCD interactions.
Keywords: ultra high energy extensive air shower, muon tracking, air shower primaries, QCD interactions
Procedia PDF Downloads 228
394 A Real-Time Snore Detector Using Neural Networks and Selected Sound Features
Authors: Stelios A. Mitilineos, Nicolas-Alexander Tatlas, Georgia Korompili, Lampros Kokkalas, Stelios M. Potirakis
Abstract:
Obstructive Sleep Apnea Hypopnea Syndrome (OSAHS) is a widespread chronic disease that mostly remains undetected, mainly because it is diagnosed via polysomnography, which is a time- and resource-intensive procedure. Screening the disease’s symptoms at home could be used as an alternative approach to alert individuals who potentially suffer from OSAHS without compromising their everyday routine. Since snoring is usually linked to OSAHS, developing a snore detector is appealing as an enabling technology for screening OSAHS at home using ubiquitous equipment like commodity microphones (included in, e.g., smartphones). In this context, this study developed a snore detection tool and herein presents the approach and selection of specific sound features that discriminate snoring vs. environmental sounds, as well as the performance of the proposed tool. Furthermore, a Real-Time Snore Detector (RTSD) is built upon the snore detection tool and employed in whole-night sleep sound recordings, resulting in a large dataset of snoring sound excerpts that are made freely available to the public. The RTSD may be used either as a stand-alone tool that offers insight into an individual’s sleep quality or as an independent component of OSAHS screening applications in future developments.
Keywords: obstructive sleep apnea hypopnea syndrome, apnea screening, snoring detection, machine learning, neural networks
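A minimal sketch of the feature-plus-neural-network pipeline is shown below; the spectral features, the synthetic "snore" and "noise" clips, and the small MLP are stand-ins chosen for illustration, not the feature set or network actually selected in the study.

```python
import numpy as np
from scipy.signal import welch
from sklearn.neural_network import MLPClassifier

def frame_features(frame, fs=16000):
    """A few generic spectral features per audio frame (stand-ins for the
    paper's selected features): spectral centroid, roll-off frequency,
    low-band energy ratio and zero-crossing rate."""
    f, pxx = welch(frame, fs=fs, nperseg=min(1024, len(frame)))
    pxx = pxx / (pxx.sum() + 1e-12)
    centroid = float(np.sum(f * pxx))
    rolloff = float(f[np.searchsorted(np.cumsum(pxx), 0.85)])
    low_ratio = float(pxx[f < 800].sum())            # snores carry mostly low-band energy
    zcr = float(np.mean(np.abs(np.diff(np.sign(frame))) > 0))
    return [centroid, rolloff, low_ratio, zcr]

# Synthetic stand-in data: low-frequency bursts vs broadband noise
rng = np.random.default_rng(0)
fs, n = 16000, 200
t = np.arange(fs) / fs
snore = [np.sin(2 * np.pi * rng.uniform(80, 300) * t) * rng.normal(1, 0.1, fs) for _ in range(n)]
noise = [rng.normal(size=fs) for _ in range(n)]
X = np.array([frame_features(x, fs) for x in snore + noise])
y = np.r_[np.ones(n), np.zeros(n)]
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0).fit(X, y)
print(f"training accuracy: {clf.score(X, y):.2f}")
```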
Procedia PDF Downloads 207
393 Verifying the Performance of the Argon-41 Monitoring System from Fluorine-18 Production for Medical Applications
Authors: Nicole Virgili, Romolo Remetti
Abstract:
The aim of this work is to characterize, from a radiation protection point of view, the emission into the environment of air contaminated by argon-41. In this research work, 41Ar is produced by a TR19PET cyclotron, operated at 19 MeV and installed at the 'A. Gemelli' University Hospital, Rome, Italy, for fluorine-18 production. The production rate of 41Ar has been calculated on the basis of the scheduled operation cycles of the cyclotron and by utilising proper production algorithms. Extensive Monte Carlo calculations, carried out with the MCNP code, have then allowed the absolute detection efficiency to 41Ar gamma rays of a Geiger-Müller detector placed in the terminal part of the chimney to be determined. Results showed unsatisfactory detection efficiency values and the need to integrate the detection system with more efficient detectors.
Keywords: cyclotron, Geiger-Müller detector, MCNPX, argon-41, emission of radioactive gas, detection efficiency determination
Procedia PDF Downloads 149
392 A Pilot Study of Umbilical Cord Mini-Clamp
Authors: Seng Sing Tan
Abstract:
Clamping of the umbilical cord after birth is widely practiced as a part of labor management. Further improvements were proposed to produce a smaller, lighter and more comfortable clamp while still maintaining current standards of clamping. A detachable holder was also developed to facilitate the clamping process. This pilot study on the efficacy of the mini-clamp was conducted to evaluate the tightness of the seal and the firmness of the clamp's grip on the umbilical cord. The study was carried out at the National University Hospital, using 5 sets of placental cord; 18 samples of approximately 10 cm each were harvested. The test results showed that the mini-clamp was able to stop the flow through the cord after clamping without rupturing the cord. All slip tests passed with a load of 0.2 kg. In the pressure testing, saline at 30 kPa was applied to the umbilical veins. Although there was no physical sign of fluid leaking through the end secured by the mini-clamp, the results showed that the set pressure could not be sustained: 12 out of the 18 test samples had more than a 7% pressure drop in 30 seconds. During the pressure leak test, it was observed in several samples that, when pressurized, small droplets of saline grew on the outer surface of the cord lining membrane. It was thus hypothesized that the pressure drop was likely caused by the perfusion of the injected saline through the Wharton’s jelly and the cord lining membrane. The average pressure in the umbilical vein is roughly 2.67 kPa (20 mmHg), less than 10% of the 30 kPa (~225 mmHg) set for the pressure testing. As such, the set pressure could be over-specified, leading to undesirable outcomes. The development of the mini-clamp was an attempt to increase the comfort of newborn babies while maintaining the usability and efficacy of hospital-grade umbilical cord clamps. It would be unfair to fully attribute the pressure leak in this study to the design and efficacy of the mini-clamp. Considering the unexpected leakage of saline through the umbilical membrane due to the over-specified pressure exerted on the umbilical veins, improvements can definitely be made to the existing experimental setup to obtain a more accurate and conclusive outcome. If proven conclusive and effective, the mini-clamp with a detachable holder could be a smaller and potentially cheaper alternative to existing umbilical cord clamps. In addition, future clinical trials could be conducted to determine the user-friendliness of the mini-clamp and evaluate its practicality in the clinical setting by labor ward clinicians. A further potential improvement could address the sustainability of the mini-clamp: a biodegradable clamp would revolutionise the industry in an increasingly environmentally conscious world.
Keywords: leak test, mini-clamp, slip test, umbilical cord
Procedia PDF Downloads 132
391 Liquid Chromatographic Determination of Alprazolam with ACE Inhibitors in Bulk, Respective Pharmaceutical Products and Human Serum
Authors: Saeeda Nadir Ali, Najma Sultana, Muhammad Saeed Arayne, Amtul Qayoom
Abstract:
The present study describes a simple and fast liquid chromatographic method using an ultraviolet detector for the simultaneous determination of the anxiety relief medicine alprazolam with ACE inhibitors, i.e., lisinopril, captopril and enalapril, employing a Purospher STAR C18 column (25 cm x 0.46 cm, 5 µm). Separation was achieved within 5 min at ambient temperature via methanol:water (8:2 v/v) with the pH adjusted to 2.9, monitoring the detector response at 220 nm. Optimum parameters were set up as per ICH (2006) guidelines. The calibration range was found to be 0.312-10 µg mL-1 for alprazolam and 0.625-20 µg mL-1 for all the ACE inhibitors, with correlation coefficients > 0.998 and detection limits of 85, 37, 68 and 32 ng mL-1 for lisinopril, captopril, enalapril and alprazolam, respectively. Intra-day and inter-day precision and accuracy of the assay were in the acceptable range of 0.05-1.62% RSD and 98.85-100.76% recovery. The method was determined to be robust and effectively useful for the estimation of the studied drugs in dosage formulations and human serum without interference from excipients or serum components.
Keywords: alprazolam, ACE inhibitors, RP HPLC, serum
Procedia PDF Downloads 513
390 Dependence of the Photoelectric Exponent on the Source Spectrum of the CT
Authors: Rezvan Ravanfar Haghighi, V. C. Vani, Suresh Perumal, Sabyasachi Chatterjee, Pratik Kumar
Abstract:
The X-ray attenuation coefficient [µ(E)] of any substance, for energy (E), is a sum of the contributions from Compton scattering [µCom(E)] and the photoelectric effect [µPh(E)]. In terms of the electron density (ρe) and the effective atomic number (Zeff), µCom(E) is proportional to [ρe·fKN(E)], while µPh(E) is proportional to [(ρe·Zeff^x)/E^y], with fKN(E) being the Klein-Nishina formula and x and y being the exponents for the photoelectric effect. By taking the sample's HU at two different excitation voltages (V=V1, V2) of the CT machine, we can solve for X=ρe and Y=ρe·Zeff^x from these two independent equations, as is attempted in DECT inversion. Since µCom(E) and µPh(E) are both energy dependent, the coefficients of inversion also depend on (a) the source spectrum S(E,V) and (b) the detector efficiency D(E) of the CT machine. In the present paper we tabulate these coefficients of inversion for different practical manifestations of S(E,V) and D(E). The HU(V) values from the CT follow: <µ(V)> = <µw(V)>[1 + HU(V)/1000], where the subscript 'w' refers to water and the averaging process <...> accounts for the source spectrum S(E,V) and the detector efficiency D(E). Linearity of µ(E) with respect to X and Y implies that (a) <µ(V)> is a linear combination of X and Y and (b) for inversion, X and Y can be written as linear combinations of two independent observations <µ(V1)>, <µ(V2)> with V1≠V2. These coefficients of inversion naturally depend upon S(E,V) and D(E). We numerically investigate this dependence for some practical cases, by taking V = 100, 140 kVp, as are used for cardiological investigations. The S(E,V) are generated by using the Boone-Seibert source spectrum, superposed on aluminium filters of different thickness lAl with 7 mm ≤ lAl ≤ 12 mm, and the D(E) is taken to be that of a typical Si[Li] solid-state or GdOS scintillator detector. In the values of X and Y, found by using the calculated inversion coefficients, errors are below 2% for data with solutions of glycerol, sucrose and glucose. For low-Zeff materials like propionic acid, Zeff^x is overestimated by 20%, with X being within 1%. For high-Zeff materials like KOH, the value of Zeff^x is underestimated by 22%, while the error in X is +15%. These results imply that the source may have additional filtering beyond the aluminium filter specified by the manufacturer. It is also found that the difference in the values of the inversion coefficients for the two types of detectors is negligible: the type of detector does not affect the ability of the DECT inversion algorithm to find the unknown chemical characteristics of the scanned materials. The effect of the source should be considered as an important factor in calculating the coefficients of inversion.
Keywords: attenuation coefficient, computed tomography, photoelectric effect, source spectrum
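Given pre-tabulated inversion coefficients, the two-voltage step described above reduces to solving a 2×2 linear system for X = ρe and Y = ρe·Zeff^x; the sketch below illustrates that algebra with made-up coefficients and water attenuation values, not those tabulated in the paper.

```python
import numpy as np

def dect_invert(hu_100, hu_140, A, mu_w):
    """Solve the two-voltage system for X = rho_e and Y = rho_e * Zeff**x.

    The spectrum- and detector-averaged attenuation is modelled as
        <mu(V)> = a_V * X + b_V * Y,
    and the HU definition gives <mu(V)> = <mu_w(V)> * (1 + HU(V)/1000).
    A    : 2x2 matrix [[a_100, b_100], [a_140, b_140]] of inversion coefficients
           (illustrative placeholders here, not the paper's tabulated values)
    mu_w : (<mu_w(100)>, <mu_w(140)>) for water under the same spectra
    """
    mu = np.array([mu_w[0] * (1 + hu_100 / 1000.0),
                   mu_w[1] * (1 + hu_140 / 1000.0)])
    X, Y = np.linalg.solve(np.asarray(A), mu)
    return X, Y, Y / X          # Y/X = Zeff**x

# Purely illustrative numbers: coefficients and water attenuations are made up
A = [[1.00, 0.12], [1.00, 0.05]]
X, Y, zeff_x = dect_invert(hu_100=45.0, hu_140=38.0, A=A, mu_w=(0.170, 0.154))
print(X, Y, zeff_x)
```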
Procedia PDF Downloads 400
389 The BNCT Project Using the Cf-252 Source: Monte Carlo Simulations
Authors: Marta Błażkiewicz-Mazurek, Adam Konefał
Abstract:
The project can be divided into three main parts: i. modeling the Cf-252 neutron source and conducting an experiment to verify the correctness of the obtained results, ii. design of the BNCT system infrastructure, iii. analysis of the results from the logical detector. Modeling of the Cf-252 source included designing the shape and size of the source as well as the energy and spatial distribution of the emitted neutrons. Two options were considered: a point source and a cylindrical spatial source. The energy distribution corresponded to various spectra taken from the specialized literature. Directionally isotropic neutron emission was simulated. The simulation results were compared with experimental values determined using the activation detector method, with indium foils and cadmium shields. The relative fluence rates of thermal and resonance neutrons were compared at chosen places in the vicinity of the source. The second part of the project, related to the modeling of the BNCT infrastructure, consisted of developing a simulation program taking into account all the essential components of this system. Materials with neutron-moderating, -absorbing, and -backscattering properties were adopted in the design. Additionally, a gamma radiation filter was introduced into the beam output system. The analysis of the simulation results obtained using a logical detector located at the beam exit of the BNCT infrastructure included the neutron energies and their spatial distribution. Optimization of the system involved changing the sizes and materials of the system to obtain a suitably collimated beam of thermal neutrons.
Keywords: BNCT, Monte Carlo, neutrons, simulation, modeling
Procedia PDF Downloads 29
388 Artificial Neural Networks Based Calibration Approach for Six-Port Receiver
Authors: Nadia Chagtmi, Nejla Rejab, Noureddine Boulejfen
Abstract:
This paper presents a calibration approach based on artificial neural networks (ANN) to determine the envelope signal (I+jQ) of a six-port based receiver (SPR). The memory effects, also called dynamic behavior, and the nonlinearity introduced by the diode-based power detector have been taken into consideration by the ANN. An experimental set-up has been used to validate the efficiency of this method, which has been confirmed by the obtained results in terms of waveforms. Moreover, the obtained error vector magnitude (EVM) and the mean absolute error (MAE) have been calculated in order to confirm and test the ANN's performance in achieving I/Q recovery using the output voltage detected by the power detector. The baseband signal has been recovered using the ANN with EVMs no higher than 1% and an MAE no higher than 17.26 for the SPR excited with different types of signals such as QAM (quadrature amplitude modulation) and LTE (Long Term Evolution).
Keywords: six-port based receiver, calibration, nonlinearity, memory effect, artificial neural network
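A hedged sketch of the idea (not the authors' network or data): the power-detector readings, stacked with the previous sample to expose memory effects, are regressed onto the (I, Q) pair with a small MLP, and an EVM-style figure is computed. The synthetic six-port model, network size and training set here are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Illustrative stand-in for measured data: four detector voltages per sample.
rng = np.random.default_rng(0)
n = 4000
iq = rng.uniform(-1, 1, size=(n, 2))                       # true (I, Q) baseband
phases = np.array([0, np.pi / 2, np.pi, 3 * np.pi / 2])
lin = np.abs(iq[:, 0:1] + 1j * iq[:, 1:2] + np.exp(1j * phases)) ** 2
v = np.tanh(1.2 * lin) + 0.05 * np.vstack([lin[:1], lin[:-1]])  # nonlinearity + memory
X = np.hstack([v, np.vstack([v[:1], v[:-1]])])             # current + previous voltages
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                     random_state=0).fit(X, iq)
pred = model.predict(X)
evm = 100 * np.sqrt(np.mean(np.sum((pred - iq) ** 2, axis=1))
                    / np.mean(np.sum(iq ** 2, axis=1)))
print(f"EVM on training data: {evm:.2f} %")
```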
Procedia PDF Downloads 76
387 High Purity Germanium Detector Characterization by Means of Monte Carlo Simulation through Application of Geant4 Toolkit
Authors: Milos Travar, Jovana Nikolov, Andrej Vranicar, Natasa Todorovic
Abstract:
Over the years, High Purity Germanium (HPGe) detectors have proved to be an excellent practical tool and, as such, have established their wide use today in low-background γ-spectrometry. One of the advantages of gamma-ray spectrometry is its easy sample preparation, as chemical processing and separation of the studied subject are not required. Thus, with a single measurement, one can simultaneously perform both qualitative and quantitative analysis. One of the most prominent features of HPGe detectors, besides their excellent efficiency, is their superior resolution. This feature virtually allows a researcher to perform a thorough analysis by discriminating photons of similar energies in the studied spectra where they would otherwise superimpose within a single-energy peak and, as such, could potentially compromise the analysis and produce wrongly assessed results. Naturally, this feature is of great importance when the identification of radionuclides, as well as their activity concentrations, is being practiced, where high precision is a necessity. In measurements of this nature, in order to be able to reproduce good and trustworthy results, one has to have initially performed an adequate full-energy peak (FEP) efficiency calibration of the used equipment. However, experimental determination of the response, i.e., efficiency curves, for a given detector-sample configuration and geometry is not always easy and requires a certain set of reference calibration sources in order to account for and cover broader energy ranges of interest. With the goal of overcoming these difficulties, many researchers have turned towards the application of different software toolkits that implement the Monte Carlo method (e.g., MCNP, FLUKA, PENELOPE, Geant4, etc.), as it has proven time and time again to be a very powerful tool. In the process of creating a reliable model, one has to have well-established and described specifications of the detector. Unfortunately, the documentation that manufacturers provide alongside the equipment is rarely sufficient for this purpose. Furthermore, certain parameters tend to evolve and change over time, especially with older equipment. Deterioration of these parameters consequently decreases the active volume of the crystal and can thus affect the efficiencies by a large margin if not properly taken into account. In this study, the optimisation method for two HPGe detectors through the implementation of the Geant4 toolkit developed by CERN is described, with the goal of further improving simulation accuracy in calculations of FEP efficiencies by investigating the influence of certain detector variables (e.g., crystal-to-window distance, dead layer thicknesses, inner crystal void dimensions, etc.). The detectors on which the optimisation procedures were carried out were a standard traditional co-axial extended range detector (XtRa HPGe, CANBERRA) and a broad energy range planar detector (BEGe, CANBERRA). The optimised models were verified through comparison with experimentally obtained data from measurements of a set of point-like radioactive sources. The acquired results for both detectors displayed good agreement with the experimental data, falling within an average statistical uncertainty of ∼4.6% for the XtRa detector and ∼1.8% for the BEGe detector within the energy ranges of 59.4−1836.1 keV and 59.4−1212.9 keV, respectively.
Keywords: HPGe detector, γ spectrometry, efficiency, Geant4 simulation, Monte Carlo method
Procedia PDF Downloads 119
386 Quantification of Hydrogen Sulfide and Methyl Mercaptan in Air Samples from a Waste Management Facility
Authors: R. F. Vieira, S. A. Figueiredo, O. M. Freitas, V. F. Domingues, C. Delerue-Matos
Abstract:
The presence of sulphur compounds like hydrogen sulphide and mercaptans is one of the reasons why waste-water treatment and waste management are associated with odour emissions. In this context, having a quantification method for these compounds helps in the optimization of treatments aimed at their elimination, namely biofiltration processes. The aim of this study was the development of a method for the quantification of odorous gases in air samples from waste treatment plants. A method based on headspace solid phase microextraction (HS-SPME) coupled with gas chromatography - flame photometric detector (GC-FPD) was used to analyse H2S and methyl mercaptan (MM). The extraction was carried out with a 75-μm Carboxen-polydimethylsiloxane fiber coating at 22 ºC for 20 min, and analysed on a GC 2010 Plus A from Shimadzu with a sulphur filter detector: splitless mode (0.3 min); the column temperature program started at 60 ºC and increased by 15 ºC/min to 100 ºC (2 min). The injector temperature was held at 250 ºC, and the detector at 260 ºC. For the calibration curve, a gas dilution unit (digital Hovagas G2 - Multi Component Gas Mixer) was used to prepare the standards. This unit had two input connections, one for a stream of the gas to be diluted and another for a stream of nitrogen, and an output connected to a glass bulb. A 40 ppm H2S and a 50 ppm MM cylinder were used. The equipment was programmed to the selected concentration, and it automatically carried out the dilution into the glass bulb. The mixture was left flowing through the glass bulb for 5 min, and then the extremities were closed. This method allowed calibration between 1-20 ppm for H2S and between 0.02-0.1 ppm and 1-3.5 ppm for MM. Several quantifications of air samples from the inlet and outlet of a biofilter operating in a waste management facility in the north of Portugal allowed the evaluation of the biofilter's performance.
Keywords: biofiltration, hydrogen sulphide, mercaptans, quantification
Procedia PDF Downloads 476
385 Design and Simulation of a Radiation Spectrometer Using Scintillation Detectors
Authors: Waleed K. Saib, Abdulsalam M. Alhawsawi, Essam Banoqitah
Abstract:
The idea of this research is to design a radiation spectrometer using an LSO scintillation detector coupled to a C-series SiPM (silicon photomultiplier). The device can be used to detect gamma and X-ray radiation and is also designed to estimate the activity of the source contamination. The SiPM detects light in the visible range above the threshold and reads it as counts. Three gamma sources were used for these experiments, Cs-137, Am-241 and Co-60, with various activities. These sources were applied in four experiments: operating the SiPM as a spectrometer, energy resolution, pile-up settings, and efficiency. The SiPM is connected to an MCA to perform as a spectrometer. Cerium-doped lutetium silicate (Lu₂SiO₅), with a light yield of 26000 photons/MeV, was coupled with the SiPM. As a result, all the main features of Cs-137, Am-241 and Co-60 are identified in the MCA. The experiment shows how photon energy and the probability of interaction are inversely related: the total attenuation reduces as the photon energy increases. An analytical calculation was made to obtain the FWHM resolution for each gamma source. The FWHM resolution for Am-241 (59 keV) is 28.75%, for Cs-137 (662 keV) is 7.85%, for Co-60 (1173 keV) is 4.46%, and for Co-60 (1332 keV) is 3.70%. Moreover, the experiment shows that the dead time and the number of counts decreased when the pile-up rejection was disabled, and the FWHM decreased when the pile-up rejection was enabled. The efficiencies were calculated at four different distances from the detector: 2, 4, 8 and 16 cm. The detection efficiency was observed to decline exponentially with increasing distance from the detector face. Conclusively, the SiPM board operated with an LSO scintillator crystal as a spectrometer, and the SiPM energy resolution for the three gamma sources used compares decently with that of other PMTs.
Keywords: PMT, radiation, radiation detection, scintillation detectors, silicon photomultiplier, spectrometer
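For reference, the percent FWHM resolution quoted above is FWHM/E × 100; a small sketch, estimating the FWHM of a single photopeak from its second moment under a Gaussian assumption, is given below with a synthetic peak (the analytical procedure actually used in the paper may differ).

```python
import numpy as np

def fwhm_resolution(channels, counts):
    """Percent resolution of an isolated photopeak from its second moment,
    assuming a Gaussian shape (FWHM = 2.355 * sigma) and a channel-to-energy
    calibration through the origin, so FWHM/E equals FWHM(channels)/centroid."""
    p = counts / counts.sum()
    centroid = np.sum(channels * p)
    sigma = np.sqrt(np.sum(p * (channels - centroid) ** 2))
    return 100.0 * 2.355 * sigma / centroid

# Synthetic 662 keV photopeak generated with about 7.8 % resolution (illustrative only)
rng = np.random.default_rng(0)
ch = np.arange(600, 800)
peak = rng.poisson(5000 * np.exp(-0.5 * ((ch - 700) / 23.3) ** 2))
print(f"{fwhm_resolution(ch, peak):.1f} %")
```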
Procedia PDF Downloads 154
384 Interaction of Metals with Non-Conventional Solvents
Authors: Evgeny E. Tereshatov, C. M. Folden
Abstract:
Ionic liquids and deep eutectic mixtures represent so-called non-conventional solvents. The former, composed of discrete ions, are salts with melting temperatures below 100°C. The latter, consisting of hydrogen bond donors and acceptors, are mixtures of at least two compounds, resulting in a melting temperature depression in comparison with that of the individual moieties. These systems can also be water-immiscible, which makes them applicable for metal extraction. This work covers the interactions of In, Tl, Ir, and Rh in hydrochloric acid media with eutectic mixtures, and of Er, Ir, and At in the gas phase with chemically modified α-detectors. The purpose is to study chemical systems based on non-conventional solvents in terms of their interaction with metals. Once promising systems are found, the next step is to modify the surface of the α-detectors used in online element production at cyclotrons in order to give the detectors chemical selectivity. Initially, the metal interactions are studied by means of the liquid-liquid extraction technique. Then appropriate molecules are chemisorbed on a surrogate surface first to understand the coating quality. Finally, a detector is covered with the same molecule, and the metal sorption on such detectors is studied in the online regime. It was found that chemical treatment of the surface can result in 99% coverage with monolayer formation. This surface is chemically active and can adsorb metals from hydrochloric acid solutions. Similarly, a detector surface was modified and tested during cyclotron-based experiments. Thus, a procedure for detector functionalization has been developed, and this opens an interesting opportunity for studying the chemisorption of elements which do not have stable isotopes.
Keywords: mechanism, radioisotopes, solvent extraction, gas phase sorption
Procedia PDF Downloads 102
383 The MCNP Simulation of Prompt Gamma-Ray Neutron Activation Analysis at TRR-1/M1
Authors: S. Sangaroon, W. Ratanatongchai, S. Khaweerat, R. Picha, J. Channuie
Abstract:
The prompt gamma-ray neutron activation analysis (PGNAA) system has been constructed and installed at a 6-inch-diameter neutron beam port of the Thai Research Reactor-1/Modification 1 (TRR-1/M1) since 1989. It was designed for a reactor operating power of 1.2 MW. The purpose of the system is elemental and isotopic analysis. In 2016, the PGNAA facility will be developed to reduce the leakage and background of neutrons and gamma radiation at the sample and detector positions. In this work, the design of the facility is carried out based on the Monte Carlo method using the MCNP5 computer code. Configurations with different modification materials, thicknesses and structures of the PGNAA facility, including the gamma collimator and the radiation shields of the detector, are simulated, and the optimal structural parameters giving a significantly improved performance of the facility are then obtained.
Keywords: MCNP simulation, PGNAA, Thai research reactor (TRR-1/M1), radiation shielding
Procedia PDF Downloads 383
382 X-Corner Detection for Camera Calibration Using Saddle Points
Authors: Abdulrahman S. Alturki, John S. Loomis
Abstract:
This paper discusses a corner detection algorithm for camera calibration. Calibration is a necessary step in many computer vision and image processing applications. Robust corner detection for an image of a checkerboard is required to determine intrinsic and extrinsic parameters. In this paper, an algorithm for fully automatic and robust X-corner detection is presented. Checkerboard corner points are automatically found in each image without user interaction or any prior information regarding the number of rows or columns. The approach represents each X-corner with a quadratic fitting function. Using the fact that the X-corners are saddle points, the coefficients in the fitting function are used to identify each corner location. The automation of this process greatly simplifies calibration. Our method is robust against noise and different camera orientations. Experimental analysis shows the accuracy of our method using actual images acquired at different camera locations and orientations.
Keywords: camera calibration, corner detector, edge detector, saddle points
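A minimal sketch of the saddle-point test implied above: fit a bivariate quadratic to a local grey-level patch and accept it as an X-corner when the Hessian determinant is negative, refining the corner to the stationary point. The patch size and the synthetic example are arbitrary choices, not the paper's implementation.

```python
import numpy as np

def is_x_corner(patch):
    """Fit z = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f to a grey-level patch and
    test whether its stationary point is a saddle (negative Hessian determinant),
    which is the defining property of a checkerboard X-corner."""
    h, w = patch.shape
    yy, xx = np.mgrid[:h, :w]
    x = xx.ravel() - (w - 1) / 2.0
    y = yy.ravel() - (h - 1) / 2.0
    z = patch.ravel().astype(float)
    A = np.c_[x**2, y**2, x * y, x, y, np.ones_like(x)]
    a, b, c, d, e, f = np.linalg.lstsq(A, z, rcond=None)[0]
    if 4 * a * b - c**2 >= 0:                 # Hessian [[2a, c], [c, 2b]] is not indefinite
        return False, None
    # Stationary point: grad = 0  ->  [[2a, c], [c, 2b]] @ [x, y] = [-d, -e]
    sx, sy = np.linalg.solve([[2 * a, c], [c, 2 * b]], [-d, -e])
    return True, (sx + (w - 1) / 2.0, sy + (h - 1) / 2.0)

# Synthetic checkerboard junction: bright and dark quadrants meeting at the centre
g = np.zeros((15, 15))
g[:7, :7] = g[8:, 8:] = 1.0
print(is_x_corner(g))   # expected: (True, (~7.0, ~7.0))
```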
Procedia PDF Downloads 406
381 The Study of Implications on Modern Businesses Performances by Digital Communities: Case of Data Leak
Authors: Asim Majeed, Anwar Ul Haq, Ayesha Asim, Mike Lloyd-Williams, Arshad Jamal, Usman Butt
Abstract:
This study aims to investigate the impact of the M&S customer data leak on digital communities. Modern businesses use digital communities as an important public relations tool for marketing purposes. This form of communication helps companies to build better relationships with their customers, and it also acts as another source of information. The communication between the customers and the organizations is not regulated, so users may post positive and negative comments. New platforms are being developed on a daily basis, and it is crucial for businesses not only to familiarize themselves with these but also to know how to reach their existing and prospective consumers. The driving force of marketing and communication in modern businesses is digital communities, and these are continuously increasing and developing; this phenomenon is changing the way marketing is conducted. The current research discusses the implications for M&S's business performance after its customer data was exploited on digital communities; users contacted M&S and raised security concerns. M&S closed down its website for a few hours to try to resolve the issue, and the next day M&S made a public apology about this incident. This information proliferated across various digital communities and negatively impacted the M&S brand name, sales and customers. A content analysis approach is used to collect qualitative data from 100 digital bloggers, including social media communities such as Facebook and Twitter. The results and findings provide useful new insights into the nature and form of the security concerns of digital users. The findings have theoretical and practical implications. This research showcases a large corporation utilizing various digital community platforms and can serve as a model for future organizations.
Keywords: digital communities, performance, dissemination, implications, data, exploitation
Procedia PDF Downloads 401