Search results for: hazard of noise
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1731

201 Principal Component Analysis Combined with Machine Learning Techniques on Pharmaceutical Samples by Laser-Induced Breakdown Spectroscopy

Authors: Kemal Efe Eseller, Göktuğ Yazici

Abstract:

Laser-induced breakdown spectroscopy (LIBS) is a rapid optical atomic emission spectroscopy technique used for material identification and analysis, with the advantages of in-situ analysis, elimination of intensive sample preparation, and micro-destructive character for the material under test. LIBS delivers short laser pulses onto the material in order to create a plasma by exciting the material beyond a certain threshold. The plasma characteristics, which consist of wavelength values and intensity amplitudes, depend on the material and the experimental environment. In the present work, spectrum profiles of medicine samples were obtained via LIBS. The medicine datasets include two different concentrations of each of two paracetamol-based medicines, namely Aferin and Parafon. The spectra were preprocessed by filling outliers based on quartiles, smoothing to eliminate noise, and normalizing both the wavelength and intensity axes. Statistical information was obtained, and principal component analysis (PCA) was applied to both the preprocessed and raw datasets. The machine learning models were set up with two different train-test splits, 70% training and 30% test, and 80% training and 20% test. Cross-validation was preferred to protect the models against overfitting, since the sample count is small. The machine learning results for the preprocessed and raw datasets were compared for both splits. This is the first time that all supervised machine learning classification algorithms, namely decision trees, discriminant analysis, naïve Bayes, support vector machines (SVM), k-NN (k-nearest neighbor), ensemble learning, and neural network algorithms, have been applied to LIBS data of paracetamol-based pharmaceutical samples and their different concentrations, on both preprocessed and raw datasets, in order to observe the effect of preprocessing.
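
As a rough illustration of the workflow described in this abstract, the sketch below runs PCA followed by the listed families of classifiers on placeholder spectra using scikit-learn; the array shapes, component count, and random data are assumptions, not the authors' settings (RandomForest stands in for the ensemble learner).

```python
# Minimal sketch (not the authors' code): PCA followed by several supervised
# classifiers, assuming X holds preprocessed intensity spectra (rows = shots,
# columns = wavelength channels) and y holds medicine/concentration labels.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 2048))   # placeholder spectra; replace with LIBS data
y = rng.integers(0, 4, size=120)   # placeholder labels (two medicines x two concentrations)

classifiers = {
    "decision tree": DecisionTreeClassifier(),
    "discriminant": LinearDiscriminantAnalysis(),
    "naive Bayes": GaussianNB(),
    "SVM": SVC(),
    "k-NN": KNeighborsClassifier(),
    "ensemble": RandomForestClassifier(),
    "neural network": MLPClassifier(max_iter=2000),
}

for test_size in (0.3, 0.2):       # 70/30 and 80/20 splits
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=test_size, stratify=y, random_state=0)
    for name, clf in classifiers.items():
        pipe = make_pipeline(StandardScaler(), PCA(n_components=10), clf)
        cv = cross_val_score(pipe, X_tr, y_tr, cv=5).mean()   # cross-validation on the training split
        test_acc = pipe.fit(X_tr, y_tr).score(X_te, y_te)
        print(f"split {1-test_size:.0%}/{test_size:.0%}  {name:14s}  cv={cv:.2f}  test={test_acc:.2f}")
```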

Keywords: machine learning, laser-induced breakdown spectroscopy, medicines, principal component analysis, preprocessing

Procedia PDF Downloads 68
200 Urban Design as a Tool in Disaster Resilience and Urban Hazard Mitigation: Case of Cochin, Kerala, India

Authors: Vinu Elias Jacob, Manoj Kumar Kini

Abstract:

Disasters of all types are occurring more frequently and are becoming more costly than ever due to various man-made factors, including climate change. Better use of the concepts of governance and management within disaster risk reduction is inevitable and of utmost importance. There is a need to explore the role of pre- and post-disaster public policies. The role of urban planning and design in shaping the opportunities of households, individuals, and, collectively, settlements to achieve recovery has to be explored. Governance strategies that can better support the integration of disaster risk reduction and management have to be examined. The main aim is thereby to build the resilience of individuals and communities and, thus, of the states too. Resilience is a term usually linked to the fields of disaster management and mitigation, but today it has become an integral part of the planning and design of cities. Disaster resilience broadly describes the ability of an individual or community to 'bounce back' from disaster impacts through improved mitigation, preparedness, response, and recovery. The growing population of the world has increased the inflow and use of resources, creating pressure on natural systems and inequity in the distribution of resources. This makes cities vulnerable to multiple attacks by both natural and man-made disasters. Each urban area needs elaborate studies and study-based strategies to proceed in the discussed direction. Cochin in Kerala is the fastest-growing and largest city, with a population of more than 26 lakh (2.6 million). The main concern addressed in this paper is making cities resilient by designing a framework of strategies based on urban design principles for an immediate response system, focusing especially on the city of Cochin, Kerala, India. The paper discusses the spatial transformations due to disasters and the role of spatial planning in the context of significant disasters. The paper also aims to develop a model, taking into consideration factors such as land use, open spaces, transportation networks, physical and social infrastructure, building design, density, and ecology, that can be implemented in any city and any context. Guidelines are made for the smooth evacuation of people through hassle-free transport networks, protecting vulnerable areas in the city, providing adequate open spaces for shelters and gatherings, making basic amenities available to the affected population within reachable distance, etc., using the tool of urban design. Strategies at the city level and neighbourhood level have been developed with inferences from vulnerability analysis and case studies.

Keywords: disaster management, resilience, spatial planning, spatial transformations

Procedia PDF Downloads 264
199 Microbial Contamination of Cell Phones of Health Care Workers: Case Study in Mampong Municipal Government Hospital, Ghana

Authors: Francis Gyapong, Denis Yar

Abstract:

Cell phones have become an indispensable tool in hospital settings. They are used in hospitals without restriction, regardless of their unknown microbial load. However, the indiscriminate use of mobile devices, especially at health facilities, can act as a vehicle for transmitting pathogenic bacteria and other microorganisms. These potential pathogens become exogenous sources of infection for patients and are also a potential health hazard for the users themselves as well as their family members. This is a growing problem in many health care institutions. Innovations in mobile communication have led to better patient care in diabetes and asthma and increased vaccine uptake via SMS. Notwithstanding, the use of cell phones can be a major potential source of nosocomial infections. Many studies have reported heavy microbial contamination of cell phones among healthcare workers and communities. However, few studies on bacterial contamination of cell phones among healthcare workers have been reported in our region. This study assessed microbial contamination of cell phones of health care workers (HCWs) at the Mampong Municipal Government Hospital (MMGH), Ghana. A cross-sectional design was used to characterize the bacterial microflora on cell phones of HCWs at the MMGH. A total of thirty-five (35) swab samples of cell phones of HCWs at the Laboratory, Dental Unit, Children's Ward, Theatre, and Male Ward were randomly collected for laboratory examination. Each swab sample suspension was streaked on blood and MacConkey agar and incubated at 37 °C for 48 hours. Bacterial isolates were identified using appropriate laboratory and biochemical tests. The Kirby-Bauer disc diffusion method was used to determine the antimicrobial sensitivity of the isolates. Data analysis was performed using SPSS version 16. All mobile phones sampled were contaminated with one or more bacterial isolates. Cell phones from the Male Ward, Dental Unit, Laboratory, Theatre, and Children's Ward carried at least three different bacterial isolates: 85.7%, 71.4%, 57.1%, and 28.6% for both the Theatre and the Children's Ward, respectively. Bacterial contaminants identified were Staphylococcus epidermidis (37%), Staphylococcus aureus (26%), E. coli (20%), Bacillus spp. (11%), and Klebsiella spp. (6%). Except for the Children's Ward, E. coli was isolated at all study sites and was predominant (42.9%) at the Dental Unit, while Klebsiella spp. (28.6%) was isolated only at the Children's Ward. Antibiotic sensitivity testing of Staphylococcus aureus indicated that the isolates were highly sensitive to cephalexin (89%), tetracycline (80%), gentamycin (75%), lincomycin (70%), and ciprofloxacin (67%), and highly resistant to ampicillin (75%). Some of the bacteria isolated are potential pathogens, and their presence on cell phones of HCWs could lead to transmission to patients and their families. Hence, strict hand washing before and after every contact with a patient and the phone should be enforced to reduce the risk of nosocomial infections.

Keywords: mobile phones, bacterial contamination, patients, MMGH

Procedia PDF Downloads 74
198 A Correlational Study on Nursing Staff's Shift Systems, Workplace Fatigue, and Quality of Working Life

Authors: Jui Chen Wu, Ming Yi Hsu

Abstract:

Background and Purpose: Shift work among nursing staff is inevitable in hospitals in order to provide continuous medical care. However, shift work is considered a health hazard that may cause physical and psychological problems. Serious workplace fatigue from nursing shift work may affect family, social, and work life and, moreover, cause a serious reduction in the quality of medical care, or even malpractice. This study aims to explore the relationships among nursing staff's shift work, workplace fatigue, and quality of working life. Method: Structured questionnaires were used to explore the relationships among shift work, workplace fatigue, and quality of working life in nursing staff. We recruited 590 nursing staff from different community teaching hospitals in Taiwan. Data were analysed by descriptive statistics, single-sample t-tests, single-factor analysis, Pearson correlation coefficients, hierarchical regression, etc. Results: The overall workplace fatigue score is 50.59 points. In further analysis, the scores for personal burnout, work-related burnout, over-commitment, and client-related burnout are 57.86, 53.83, 45.95, and 44.71. Workplace fatigue differs significantly across nursing staff's basic attributes, including age, license, sleep quality, self-perceived health status, number of chronic-disease patients cared for, and number of patients cared for in the obstetric ward. The shift variables revealed no significant influence on workplace fatigue in the hierarchical regression analysis. Regarding the analysis of nursing staff's basic attributes and shift work in relation to quality of working life, descriptive results show that the overall quality of working life of nursing staff is 3.23 points. Comparing the average scores of the six aspects, the ranked average scores are 3.47 (SD = .43) for interrelationship, 3.40 (SD = .46) for self-actualisation, 3.30 (SD = .40) for self-efficacy, 3.15 (SD = .38) for vocational concept, 3.07 (SD = .37) for work aspects, and 3.02 (SD = .56) for organizational aspects. Quality of working life differs significantly across nursing staff's basic attributes, including marital status, education level, years of nursing work, occupation area, sleep quality, self-perceived health status, and number of patients cared for in the medical ward. There are significant differences between shift mode and shift rate with respect to quality of working life. The hierarchical regression analysis reveals that one of the shift variables, 'shift mode', does affect staff's quality of working life. Workplace fatigue is negatively correlated with quality of working life, and over-commitment within workplace fatigue is positively related to the vocational concept dimension of quality of working life. The regression analysis of nursing staff's basic attributes, shift mode, workplace fatigue, and quality of working life shows that workplace fatigue has a significant impact on nursing staff's quality of working life. Conclusion: According to our study, shift work is correlated with workplace fatigue in nursing staff. These results serve as an important reference for human resources management in hospitals in establishing more positive and healthy work arrangement policies.
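
To make the stepwise analysis concrete, the sketch below illustrates a hierarchical (blockwise) regression of the kind described, with synthetic variables standing in for the survey data; the variable names and effect sizes are assumptions, not results from the study.

```python
# Hedged illustration of hierarchical regression: predictor blocks are entered
# step by step and the change in R^2 shows whether shift variables and fatigue
# add explanatory power over basic attributes. All data here are synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 590
age = rng.normal(35, 8, n)
sleep_quality = rng.normal(3, 1, n)            # block 1: basic attributes
shift_mode = rng.integers(0, 3, n)             # block 2: shift variables
fatigue = rng.normal(50, 15, n)                # block 3: workplace fatigue
qwl = 4 - 0.02 * fatigue + 0.1 * sleep_quality + rng.normal(0, 0.3, n)  # quality of working life

blocks = [np.column_stack([age, sleep_quality]),
          np.column_stack([age, sleep_quality, shift_mode]),
          np.column_stack([age, sleep_quality, shift_mode, fatigue])]

prev_r2 = 0.0
for i, X in enumerate(blocks, 1):
    model = sm.OLS(qwl, sm.add_constant(X)).fit()
    print(f"step {i}: R^2 = {model.rsquared:.3f}  (delta R^2 = {model.rsquared - prev_r2:.3f})")
    prev_r2 = model.rsquared
```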

Keywords: nursing staff, shift, workplace fatigue, quality of working life

Procedia PDF Downloads 249
197 Vertical Urbanization Over Public Structures: The Example of Mostar Junction in Belgrade, Serbia

Authors: Sladjana Popovic

Abstract:

The concept of vertical space urbanization, referred to in English as "air rights development," can be considered a mechanism for the development of public spaces in high-density urban areas. This paper gives a chronological overview of the transformation of space within the vertical projection of existing traffic infrastructure that penetrates the central areas of a city, through the analysis of two illustrative case studies: a more advanced and recent one, "Plot 13" in Boston, and a less well-known European example of structures erected above highways throughout Italy, the "Pavesi autogrill" chain. The backbone of the analysis is an examination of the possibility of yielding air rights within the vertical projection of public structures in these two examples, considering the factors that would enable their potential application in the capitals of Southeastern Europe. The cession of air rights in the Southeastern European region has not been a recognized practice in urban planning. In a formal sense, legal and physical feasibility can be seen to some extent in local models of structures built above protected historical heritage (e.g., archaeological sites); however, the legal process of assigning the right to use and develop air rights above public structures is not a recognized concept. The goal of the analysis is to shed light on the influence of institutional participants in the implementation of innovative solutions for vertical urbanization, as well as on strategic planning mechanisms in public-private partnership models that would enable the implementation of the concept in the region. The main question is whether the manipulation of the vertical projection of space could provide innovative urban solutions that overcome the deficit and excessive use of available construction land, particularly above the dominant public spaces and traffic infrastructure that penetrate central parts of a city. The conclusions reflect on how vertical urbanization can bridge the spatial separation of the city, reduce noise pollution, and contribute to more efficient urban planning along main transportation corridors.

Keywords: air rights development, innovative urbanism, public-private partnership, transport infrastructure, vertical urbanization

Procedia PDF Downloads 54
196 Distant Speech Recognition Using Laser Doppler Vibrometer

Authors: Yunbin Deng

Abstract:

Most existing applications of automatic speech recognition rely on cooperative subjects at a short distance from a microphone. Standoff speech recognition using microphone arrays can extend the subject-to-sensor distance somewhat, but it is still limited to only a few feet. As such, most deployed applications of standoff speech recognition are limited to indoor use at short range. Moreover, these applications require an air path between the subject and the sensor to achieve a reasonable signal-to-noise ratio. This study reports long-range (50 feet) automatic speech recognition experiments using a Laser Doppler Vibrometer (LDV) sensor. The study shows that the LDV sensor modality can extend the speech acquisition standoff distance far beyond microphone arrays, to hundreds of feet. In addition, LDV enables 'listening' through windows for uncooperative subjects. This enables new capabilities in automatic audio and speech intelligence, surveillance, and reconnaissance (ISR) for law enforcement, homeland security, and counter-terrorism applications. The Polytec LDV model OFV-505 is used in this study. To investigate the impact of different vibrating materials, five parallel LDV speech corpora, each consisting of 630 speakers, were collected from the vibrations of a glass window, a metal plate, a plastic box, a wood slat, and a concrete wall. These are common materials the application could encounter in daily life. These data were compared with their microphone counterparts to show the impact of the various materials on the spectrum of the LDV speech signal. State-of-the-art deep neural network modeling approaches are used to conduct continuous speaker-independent speech recognition on these LDV speech datasets. Preliminary phoneme recognition results using time-delay neural networks, bi-directional long short-term memory, and model fusion show great promise for using LDV for long-range speech recognition. To the authors' best knowledge, this is the first time an LDV has been reported for a long-distance speech recognition application.

Keywords: covert speech acquisition, distant speech recognition, DSR, laser Doppler vibrometer, LDV, speech intelligence surveillance and reconnaissance, ISR

Procedia PDF Downloads 154
195 Segmenting 3D Optical Coherence Tomography Images Using a Kalman Filter

Authors: Deniz Guven, Wil Ward, Jinming Duan, Li Bai

Abstract:

Over the past two decades or so, Optical Coherence Tomography (OCT) has been used to diagnose retina and optic nerve diseases. The retinal nerve fibre layer, for example, is a powerful diagnostic marker for detecting and staging glaucoma. With the advances in optical imaging hardware, the adoption of OCT is now commonplace in clinics. More and more OCT images are being generated, and for these OCT images to have clinical applicability, accurate automated OCT image segmentation software is needed. OCT image segmentation is still an active research area, as OCT images are inherently noisy due to multiplicative speckle noise. Simple edge detection algorithms are unsuitable for detecting retinal layer boundaries in OCT images. Intensity fluctuation, motion artefacts, and the presence of blood vessels further decrease OCT image quality. In this paper, we introduce a new method for segmenting three-dimensional (3D) OCT images. It involves the use of a Kalman filter, which is commonly used in computer vision for object tracking. The Kalman filter is applied to the 3D OCT image volume to track the retinal layer boundaries through the slices within the volume and thus segment the 3D image. Specifically, after some pre-processing of the OCT images, points on the retinal layer boundaries in the first image are identified, and curve fitting is applied to them such that the layer boundaries can be represented by the coefficients of the curve equations. These coefficients then form the state space for the Kalman filter. The filter then produces an optimal estimate of the current state of the system by updating its previous state using the measurements available, in the form of a feedback control loop. The results show that the algorithm can be used to segment the retinal layers in OCT images. One limitation of the current algorithm is that the curve representation of the retinal layer boundary does not work well when the layer boundary splits into two, e.g., at the optic nerve. This may be resolved by using a different approach to representing the boundaries, such as B-splines or level sets. The use of a Kalman filter shows promise for developing accurate and effective 3D OCT segmentation methods.
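
The sketch below illustrates, under stated assumptions, how the boundary-coefficient state described above can be tracked slice by slice with a Kalman filter; the identity dynamics, noise covariances, and synthetic boundary are placeholders rather than the authors' settings.

```python
# Minimal sketch (assumptions, not the authors' implementation): a Kalman filter
# whose state is the polynomial coefficients of a retinal layer boundary, updated
# slice by slice through the 3D OCT volume.
import numpy as np

def fit_boundary(x, z, degree=3):
    """Fit boundary points (x, z) from one B-scan with a polynomial; return coefficients."""
    return np.polyfit(x, z, degree)

def kalman_update(state, P, measurement, Q, R):
    """One predict/update cycle; the state is assumed roughly constant between slices."""
    # Predict: identity dynamics with process noise Q
    state_pred = state
    P_pred = P + Q
    # Update with the coefficients measured on the current slice (H = I)
    K = P_pred @ np.linalg.inv(P_pred + R)            # Kalman gain
    state_new = state_pred + K @ (measurement - state_pred)
    P_new = (np.eye(len(state)) - K) @ P_pred
    return state_new, P_new

# Example with synthetic slices
n_coef = 4
state = np.zeros(n_coef)                 # initial coefficient estimate
P = np.eye(n_coef) * 1.0                 # initial uncertainty
Q = np.eye(n_coef) * 1e-3                # process noise: boundary changes slowly between slices
R = np.eye(n_coef) * 1e-1                # measurement noise: noisy per-slice fits

x = np.linspace(-1, 1, 200)
for slice_idx in range(50):
    true_z = 0.2 * x**3 - 0.1 * x + 0.5  # synthetic boundary, stand-in for detected points
    z = true_z + np.random.normal(scale=0.05, size=x.size)
    measurement = fit_boundary(x, z)
    state, P = kalman_update(state, P, measurement, Q, R)

print("tracked coefficients:", np.round(state, 3))
```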

Keywords: optical coherence tomography, image segmentation, Kalman filter, object tracking

Procedia PDF Downloads 455
194 Influence of Auditory Visual Information in Speech Perception in Children with Normal Hearing and Cochlear Implant

Authors: Sachin, Shantanu Arya, Gunjan Mehta, Md. Shamim Ansari

Abstract:

The cross-modal influence of visual information on speech perception can be illustrated by the McGurk effect, an illusion in which a listener hears the syllable /ta/ when listening to one syllable, e.g., /pa/, while watching a synchronized video recording of another syllable, /ka/. The McGurk effect is an excellent tool to investigate multisensory integration in speech perception in both normal-hearing and hearing-impaired populations. As the visual cue is unaffected by noise, individuals with hearing impairment rely on visual cues more than normal listeners do. However, when non-congruent visual and auditory cues are processed together, audiovisual interaction seems to occur differently in normal-hearing persons and persons with hearing impairment. Therefore, this study aims to observe the audiovisual interaction in speech perception in cochlear implant users and to compare it with that of normal-hearing children. Auditory stimuli were routed through a calibrated clinical audiometer in a sound-field condition, and visual stimuli were presented on a laptop screen placed at a distance of 1 m at 0 degrees azimuth. Out of 4 presentations, if 3 responses were a fusion, the McGurk effect was considered to be present. The congruent audiovisual stimuli /pa/ /pa/ and /ka/ /ka/ were perceived correctly as ''pa'' and ''ka,'' respectively, by both groups. For the non-congruent stimuli /da/ /pa/, 23 of 35 children with normal hearing and 9 of 35 children with cochlear implants had a fusion of sounds, i.e., the McGurk effect was present. For the non-congruent stimulus /pa/ /ka/, 25 of 35 children with normal hearing and 8 of 35 children with cochlear implants had a fusion of sounds. The children who had used cochlear implants for less than three years did not exhibit fusion of sounds, i.e., the McGurk effect was absent in this group of children. To conclude, the results demonstrate that consistent fusion of visual with auditory information for speech perception is shaped by experience with bimodal spoken language during early life. When auditory experience with speech is mediated by a cochlear implant, the likelihood of acquiring bimodal fusion is increased, and it greatly depends on the age of implantation. All the above results strongly support the need for screening children for hearing capabilities and providing cochlear implants and aural rehabilitation as early as possible.

Keywords: cochlear implant, congruent stimuli, McGurk effect, non-congruent stimuli

Procedia PDF Downloads 281
193 Intermittent Effect of Coupled Thermal and Acoustic Sources on Combustion: A Spatial Perspective

Authors: Pallavi Gajjar, Vinayak Malhotra

Abstract:

Rockets have played a predominant role in spacecraft propulsion. The quintessential combustion-related requirement of a rocket engine is the minimization of surrounding risks and hazards. Over time, it has become imperative to understand the variation of the combustion rate in the presence of external energy source(s). Rocket propulsion represents a special domain of chemical propulsion assisted by high-speed flows in the presence of acoustic and thermal source(s). Jet noise leads to a significant loss of resources, and every year a huge amount of funding is spent to prevent it. External heat source(s) induce a high possibility of fire risks and hazards, which can sufficiently endanger the operation of a space vehicle. Appreciable work has been done with justifiable simplification and emphasis on the linear variation of external energy source(s), which yields good physical insight but does not cater to accurate predictions. The present work experimentally attempts to understand the correlation between inter-energy conversions and the non-linear placement of external energy source(s). The work is motivated by the need for better fire safety and enhanced combustion. The specific objectives of the work are a) to interpret the related energy transfer for combustion in the presence of alternate external energy source(s), viz., thermal and acoustic, and b) to fundamentally understand the role of key controlling parameters, viz., separation distance, the number of source(s), and selected configurations and their non-linear variation to resemble real-life cases. An experimental setup was prepared using incense sticks as the fuel and paraffin wax candles as the external energy source(s). The acoustics was generated using a frequency generator, and the source(s) were placed at selected locations. Non-equidistant parametric experimentation was carried out, and the effects on the regression rate were noted. The results are expected to be very helpful in offering a new perspective on futuristic rocket designs and safety.

Keywords: combustion, acoustic energy, external energy sources, regression rate

Procedia PDF Downloads 114
192 Pattern Recognition Approach Based on Metabolite Profiling Using In vitro Cancer Cell Line

Authors: Amanina Iymia Jeffree, Reena Thriumani, Mohammad Iqbal Omar, Ammar Zakaria, Yumi Zuhanis Has-Yun Hashim, Ali Yeon Md Shakaff

Abstract:

Metabolite profiling is the strategy adopted in this pattern recognition study, which focuses on the three types of cancer cell lines responsible for the most deaths, specifically lung, breast, and colon cancer. The purpose of this study was to discriminate the VOC patterns between the cancerous and control groups based on metabolite profiling. Sampling was carried out using the cell culture technique. All culture flasks were incubated for up to 72 hours, and data collection started after 24 hours. Each sample run took 24 minutes to complete. The comparative metabolite patterns were identified by headspace solid-phase micro-extraction (HS-SPME) sampling coupled with gas chromatography-mass spectrometry (GC-MS). The main experimental variables, such as oven temperature and time, were optimized by response surface methodology (RSM) to obtain the optimal condition. Volatiles were identified using the National Institute of Standards and Technology (NIST) mass spectral database and retention time libraries. To improve the reliability of the results, it is of crucial importance to eliminate background noise; hence, data from the 3rd to the 17th minute were selected for statistical analysis. Targeted metabolites, annotated as known compounds with a peak area greater than 0.5 percent, were highlighted and subsequently treated statistically. The volatiles produced contain hundreds to thousands of compounds; therefore, the data were reduced by chemometric analysis, such as principal component analysis (PCA), as a preliminary step before being subjected to a pattern classifier for identification of the VOC samples. The volatile organic compound profiles were shown to be significantly distinguishable between the cancerous and control groups based on metabolite profiling.
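
A small sketch of the filtering and dimensionality-reduction steps described above is given below; the peak table, compound names, and retention times are hypothetical stand-ins for GC-MS output, not the study's data.

```python
# Illustrative sketch (assumed data layout, not the authors' pipeline): keep peaks
# eluting between the 3rd and 17th minute, drop peaks below 0.5% relative area,
# and run PCA on the resulting sample-by-compound matrix.
import pandas as pd
from sklearn.decomposition import PCA

# Hypothetical long-format GC-MS peak table: one row per detected peak
peaks = pd.DataFrame({
    "sample":   ["lung_1", "lung_1", "breast_1", "breast_1", "colon_1", "control_1"],
    "compound": ["hexanal", "benzene", "hexanal", "toluene", "benzene", "hexanal"],
    "rt_min":   [5.2, 12.8, 5.3, 9.4, 12.9, 5.1],
    "area_pct": [1.4, 0.7, 2.0, 0.9, 1.1, 0.6],
})

# Retention-time window (3-17 min) and relative-area filter (> 0.5%)
selected = peaks[(peaks.rt_min >= 3) & (peaks.rt_min <= 17) & (peaks.area_pct > 0.5)]

# Pivot into a sample-by-compound matrix, then reduce with PCA
matrix = selected.pivot_table(index="sample", columns="compound",
                              values="area_pct", fill_value=0.0)
scores = PCA(n_components=2).fit_transform(matrix.values)
print(pd.DataFrame(scores, index=matrix.index, columns=["PC1", "PC2"]))
```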

Keywords: in vitro cancer cell line, metabolite profiling, pattern recognition, volatile organic compounds

Procedia PDF Downloads 344
191 Gradient Length Anomaly Analysis for Landslide Vulnerability Analysis of Upper Alaknanda River Basin, Uttarakhand Himalayas, India

Authors: Hasmithaa Neha, Atul Kumar Patidar, Girish Ch Kothyari

Abstract:

The northward convergence of the Indian plate has a dominating influence on the structural and geomorphic development of the Himalayan region. The highly deformed and complex stratigraphy in the area arises from a confluence of exogenic and endogenic geological processes. This region frequently experiences natural hazards such as debris flows, flash floods, avalanches, landslides, and earthquakes due to its harsh, steep topography and fragile rock formations. Therefore, remote-sensing-based examination and real-time monitoring of tectonically sensitive regions may provide crucial early warnings and invaluable data for effective hazard mitigation strategies. In order to identify unusual changes in river gradients, the current study presents a spatial quantitative geomorphic analysis of the upper Alaknanda River basin, Uttarakhand Himalaya, India, using gradient length anomaly analysis (GLAA). This basin is highly vulnerable to ground creep and landslides due to the presence of active faults and thrusts, toe-cutting of slopes for road widening, development of heavy engineering projects on highly sheared bedrock, and periodic earthquakes. The intersecting joint sets developed in the bedrock have formed wedges that have facilitated the recurrence of several landslides. The main objective of the current research is to identify abnormal gradient lengths indicating potential landslide-prone zones. High-resolution digital elevation data and geospatial techniques are used to perform this analysis. The results of the GLAA are corroborated with historical landslide events and ultimately used for the generation of landslide susceptibility maps of the study area. The preliminary results indicate that approximately 3.97% of the basin is stable, while about 8.54% is classified as moderately stable and suitable for human habitation. However, roughly 19.89% falls within the zone of moderate vulnerability, 38.06% is classified as vulnerable, and 29% falls within the highly vulnerable zones, posing risks of geohazards including landslides, glacial avalanches, and earthquakes. This research provides valuable insights into the spatial distribution of landslide-prone areas. It offers a basis for implementing proactive measures for landslide risk reduction, including land-use planning, early warning systems, and infrastructure development techniques.

Keywords: landslide vulnerability, geohazard, GLA, upper Alaknanda Basin, Uttarakhand Himalaya

Procedia PDF Downloads 41
190 Development of Wide Bandgap Semiconductor Based Particle Detector

Authors: Rupa Jeena, Pankaj Chetry, Pradeep Sarin

Abstract:

The study of fundamental particles and the forces governing them has always remained an attractive field of theoretical study. With the advancement and development of new technologies and instruments, it is now possible to perform large-scale particle physics experiments for the validation of theoretical predictions. These experiments are generally carried out in a highly intense beam environment. This, in turn, requires the development of a detector prototype possessing properties such as radiation tolerance, thermal stability, and fast timing response. Semiconductors like silicon, germanium, diamond, and gallium nitride (GaN) have been widely used for particle detection applications. Silicon and germanium, being narrow-bandgap semiconductors, require pre-cooling to suppress the noise from thermally generated intrinsic charge carriers. The application of diamond in large-scale experiments is rare owing to its high cost of fabrication, while GaN is one of the most extensively explored potential candidates. We aim to introduce another wide-bandgap semiconductor into this active area of research by considering all of these requirements. We have made an attempt to utilize the wide bandgap of rutile titanium dioxide (TiO2), along with its other properties, for particle detection purposes. The thermal evaporation-oxidation technique (in a PID furnace) is used for the deposition of the film, and the metal-semiconductor-metal (MSM) electrical contacts are made using titanium/gold (Ti+Au) (20/80 nm). Characterization comprising X-ray diffraction (XRD), atomic force microscopy (AFM), ultraviolet (UV)-visible spectroscopy, and laser Raman spectroscopy (LRS) has been performed on the film to obtain detailed information about its surface morphology. On the other hand, electrical characterization such as current-voltage (IV) measurements in dark and light conditions and tests with a laser has been performed to better understand the working of the detector prototype. All these preliminary tests of the detector will be presented.

Keywords: particle detector, rutile titanium dioxide, thermal evaporation, wide bandgap semiconductors

Procedia PDF Downloads 53
189 Predicting and Optimizing the Mechanical Behavior of a Flax Reinforced Composite

Authors: Georgios Koronis, Arlindo Silva

Abstract:

This study seeks to understand the mechanical behavior of a natural fiber reinforced composite (epoxy/flax) in more depth, utilizing both experimental and numerical methods. The aim is to identify relationships between the design parameters and the product performance, understand the effect of noise factors, and reduce process variations. Optimization of the mechanical performance of manufactured goods has recently been addressed in numerous studies of green composites. However, these studies are limited and have mainly explored mass-production processes. It is expected here to uncover knowledge about composite manufacturing that can be used to design artifacts that are produced in low batches and tailored to niche markets. The goal is to reach greater consistency in performance and to further understand which factors play significant roles in obtaining the best mechanical performance. A prediction of the response function of the process (under various operating conditions) is modeled by design of experiments (DoE). Normally, a full factorial designed experiment is required, consisting of all possible combinations of levels for all factors. An analytical assessment is, however, possible with just a fraction of the full factorial experiment. The research approach comprises evaluating the influence these variables have and how they affect the composite's mechanical behavior. The coupons will be fabricated by the vacuum infusion process, defined by three process parameters: flow rate, injection point position, and fiber treatment. Each process parameter is studied at two levels, along with their interactions. Moreover, the tensile and flexural properties will be obtained through mechanical testing to discover the key process parameters. In this setting, an experimental phase will follow in which a number of fabricated coupons will be tested to allow for a validation of the design of experiments setup. Finally, the results are validated by performing the optimum set of parameters, as indicated by the DoE, in a final set of experiments. It is expected that, after a good agreement between the predicted and verification experimental values, the optimal processing parameters of the biocomposite lamina will be effectively determined.
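
To make the design concrete, the sketch below lays out a two-level full factorial in the three stated infusion parameters and estimates main effects and two-factor interactions by least squares; the response values are placeholders, not measured coupon data.

```python
# Hedged sketch of the experimental design described above: a 2-level full factorial
# in three vacuum-infusion parameters, with main effects and two-factor interactions
# estimated by least squares on coded (+1/-1) factor levels.
import itertools
import numpy as np

factors = ["flow_rate", "injection_point", "fiber_treatment"]
design = np.array(list(itertools.product([-1, 1], repeat=3)))   # 2^3 = 8 coded runs

# Placeholder tensile-strength responses for the 8 runs (replace with test results)
y = np.array([55.0, 58.2, 54.1, 60.5, 56.3, 59.0, 55.7, 62.1])

# Model matrix: intercept, main effects, and two-factor interactions
cols = [np.ones(len(design))] + [design[:, i] for i in range(3)] + \
       [design[:, i] * design[:, j] for i, j in itertools.combinations(range(3), 2)]
X = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# For coded +/-1 factors, the effect of a term is twice its regression coefficient
names = ["intercept"] + factors + [f"{a}x{b}" for a, b in itertools.combinations(factors, 2)]
for n, c in zip(names, coef):
    print(f"{n:30s} estimate = {(2 * c if n != 'intercept' else c):+.2f}")
```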

Keywords: design of experiments, flax fabrics, mechanical performance, natural fiber reinforced composites

Procedia PDF Downloads 183
188 Detection of Temporal Change of Fishery and Island Activities by DNB and SAR on the South China Sea

Authors: I. Asanuma, T. Yamaguchi, J. Park, K. J. Mackin

Abstract:

Fishery lights on the sea surface can be detected by the Day/Night Band (DNB) of the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar-orbiting Partnership (Suomi-NPP). The DNB covers the spectral range of 500 to 900 nm and achieves a high sensitivity. The DNB has difficulty distinguishing fishing lights from lunar light reflected by clouds, which affects observations for half of the month. Fishery lights and surface lights are separated from lunar light reflected by clouds by a method using the DNB and the infrared band, where the detection limits are defined as a function of the brightness temperature, expressed as a difference from the maximum temperature for each level of DNB radiance, and of the contrast of the DNB radiance against the background radiance. Fishery boats and structures on islands can be detected by Synthetic Aperture Radar (SAR) on polar-orbiting satellites using the microwave signal reflected by surface targets. SAR faces a trade-off between spatial resolution and coverage when detecting small targets like fishery boats. The distribution of fishery boats and island activities was detected with the ScanSAR narrow mode of Radarsat-2, which covers 300 km by 300 km with various combinations of polarizations. The fishing boats were detected as single pixels of strongly scattering targets with the ScanSAR narrow mode, whose spatial resolution is 30 m. As the look-angle-dependent scattering signals exhibit significant differences, the standard deviations of the scattered signals for each look angle were taken into account as a threshold to separate the signals of fishing boats and structures on the islands from background noise. It was difficult to validate the targets detected by the DNB against the SAR data because of the 6-hour time lag between the DNB observations at midnight and the SAR observations in the morning or evening. The temporal changes of island activities were detected as changes in the mean DNB intensity over a circular area corresponding to a certain scale of activity. The increase in mean DNB intensity corresponded to the beginning of dredging, and the change in intensity indicated the end of reclamation and the subsequent construction of facilities.
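
One possible reading of the DNB detection criteria described above is sketched below: a pixel is flagged as a light source when its radiance stands out against a local background estimate while the infrared brightness temperature does not indicate thick cloud. The thresholds, window size, and synthetic scene are assumptions, not the authors' values.

```python
# Hedged sketch of contrast-plus-brightness-temperature screening of DNB pixels.
import numpy as np

def detect_lights(dnb, bt, contrast_thresh=3.0, bt_diff_thresh=15.0, window=15):
    """dnb: DNB radiance image; bt: IR brightness temperature image (same shape)."""
    pad = window // 2
    padded = np.pad(dnb, pad, mode="reflect")
    background = np.zeros_like(dnb)
    for i in range(dnb.shape[0]):                    # simple local-median background
        for j in range(dnb.shape[1]):
            background[i, j] = np.median(padded[i:i + window, j:j + window])
    contrast = dnb / np.maximum(background, 1e-12)   # contrast against background radiance
    bt_diff = bt.max() - bt                          # difference from the scene maximum temperature
    return (contrast > contrast_thresh) & (bt_diff < bt_diff_thresh)

# Tiny synthetic scene: uniform background with two bright point sources, cloud-free IR
dnb = np.full((64, 64), 1.0); dnb[20, 20] = 12.0; dnb[40, 45] = 9.0
bt = np.full((64, 64), 290.0)
print(np.argwhere(detect_lights(dnb, bt)))           # expected: [[20 20] [40 45]]
```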

Keywords: day night band, SAR, fishery, South China Sea

Procedia PDF Downloads 214
187 An Advanced Automated Brain Tumor Diagnostics Approach

Authors: Berkan Ural, Arif Eser, Sinan Apaydin

Abstract:

Medical image processing has generally become a challenging task nowadays. Indeed, processing of brain MRI images is one of the difficult parts of this area. This study proposes a well-defined hybrid approach consisting of tumor detection, extraction, and analysis steps. The approach is mainly built around a computer-aided diagnostics system for identifying and detecting tumor formation in any region of the brain, and such systems are commonly used for the early prediction of brain tumors using advanced image processing and probabilistic neural network methods. In this approach, some advanced noise removal functions and image processing methods, such as automatic segmentation and morphological operations, are used to detect the brain tumor boundaries and to obtain the important feature parameters of the tumor region. All stages of the approach are implemented in MATLAB. First, the tumor is detected, and the tumor area is contoured with a specific colored circle by the computer-aided diagnostics program. Then, the tumor is segmented, and some morphological processes are applied to increase the visibility of the tumor area. Meanwhile, the tumor area and important shape-based features are also calculated. Finally, using the probabilistic neural network method and some advanced classification steps, the tumor area and the type of the tumor are clearly obtained. A future aim of this study is to detect the severity of lesions across classes of brain tumor through advanced multi-classification and neural network stages and to create a user-friendly environment using a GUI in MATLAB. In the experimental part of the study, 100 images are used to train the diagnostics system, and 100 out-of-sample images are used to test and check the overall results. The preliminary results demonstrate high classification accuracy for the neural network structure. Finally, these results also motivate us to extend this framework to detect and localize tumors in other organs.
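
As a loose illustration of the segmentation and feature-extraction stage described above (the authors work in MATLAB; this Python/scikit-image version is only a sketch with an illustrative threshold and structuring-element size), the code below thresholds a slice, cleans the mask morphologically, and reports shape-based features of the largest region.

```python
# Hedged sketch: threshold-based segmentation, morphological clean-up, and
# shape-based features of the largest candidate region in a 2D MRI slice.
import numpy as np
from skimage import filters, measure, morphology

def segment_tumor(mri_slice):
    """mri_slice: 2D grayscale MRI array scaled to [0, 1]."""
    thresh = filters.threshold_otsu(mri_slice)                   # automatic segmentation threshold
    mask = mri_slice > thresh
    mask = morphology.binary_opening(mask, morphology.disk(3))   # remove small speckles
    mask = morphology.binary_closing(mask, morphology.disk(3))   # fill small gaps
    labels = measure.label(mask)
    regions = measure.regionprops(labels, intensity_image=mri_slice)
    if not regions:
        return None, None
    tumor = max(regions, key=lambda r: r.area)                   # keep the largest candidate
    features = {"area": tumor.area, "eccentricity": tumor.eccentricity,
                "solidity": tumor.solidity, "mean_intensity": tumor.mean_intensity}
    return labels == tumor.label, features

# Synthetic example: a bright blob on a dark background
img = np.zeros((128, 128)); img[40:70, 50:85] = 0.9
img += np.random.default_rng(0).normal(scale=0.05, size=img.shape)
mask, feats = segment_tumor(np.clip(img, 0, 1))
print(feats)
```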

Keywords: image processing algorithms, magnetic resonance imaging, neural network, pattern recognition

Procedia PDF Downloads 386
186 Proposals of Exposure Limits for Infrasound From Wind Turbines

Authors: M. Pawlaczyk-Łuszczyńska, T. Wszołek, A. Dudarewicz, P. Małecki, M. Kłaczyński, A. Bortkiewicz

Abstract:

Human tolerance to infrasound is defined by the hearing threshold. Infrasound that cannot be heard (or felt) is not annoying and is not thought to have any other adverse health effects. Recent research has largely confirmed earlier findings. ISO 7196:1995 recommends the use of the G-weighting characteristic for the assessment of infrasound, and there is a strong correlation between the G-weighted SPL and annoyance perception. The aim of this study was to propose exposure limits for infrasound from wind turbines. However, only a few countries have set limits for infrasound. These limits are usually no higher than 85-92 dBG, and none of them are specific to wind turbines. Over the years, a number of studies have been carried out to determine hearing thresholds below 20 Hz. It has been recognized that 10% of young people are able to perceive 10 Hz at around 90 dB, and it has also been found that the difference in median hearing thresholds between young adults aged around 20 years and older adults aged over 60 years is around 10 dB, irrespective of frequency. This shows that older people (up to about 60 years of age) retain good hearing in the low-frequency range, while their sensitivity to higher frequencies is often significantly reduced. In terms of exposure limits for infrasound, the average hearing threshold corresponds to a tone with a G-weighted SPL of about 96 dBG. In contrast, infrasound at Lp,G levels below 85-90 dBG is usually inaudible. The individual hearing threshold can, therefore, be 10-15 dB lower than the average threshold, so the recommended limits for environmental infrasound could be 75 dBG or 80 dBG. It is worth noting that the G86 curve has been taken as the threshold of auditory perception of infrasound reached by 90-95% of the population, so the G75 and G80 curves can be taken as criterion curves for wind turbine infrasound. Finally, two assessment methods and corresponding exposure limit values have been proposed for wind turbine infrasound: Method I, based on G-weighted sound pressure level measurements, and Method II, based on frequency analysis in 1/3-octave bands in the frequency range 4-20 Hz. Separate limit values have been set for outdoor living areas in the open countryside (Area A) and for noise-sensitive areas (Area B). For Method I, infrasound limit values of 80 dBG (for Area A) and 75 dBG (for Area B) have been proposed, while for Method II the criterion curves G80 and G75 have been chosen (for Areas A and B, respectively).
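
A minimal sketch of Method II is given below: the measured 1/3-octave band levels between 4 and 20 Hz are G-weighted and summed energetically into a single Lp,G value for comparison with the proposed limits. The weighting corrections used here are approximate placeholders; the exact values should be taken from ISO 7196, and the example band levels are hypothetical.

```python
# Hedged sketch of Method II: energetic summation of G-weighted 1/3-octave levels.
import math

# Approximate G-weighting corrections (dB) at 1/3-octave centre frequencies.
# Placeholder values for illustration only; use the tabulated ISO 7196 figures in practice.
g_weight = {4: -16.0, 5: -12.0, 6.3: -8.0, 8: -4.0, 10: 0.0, 12.5: 4.0, 16: 7.7, 20: 9.0}

def g_weighted_level(band_levels):
    """band_levels: dict {centre frequency in Hz: unweighted band SPL in dB}."""
    return 10 * math.log10(sum(10 ** ((spl + g_weight[f]) / 10)
                               for f, spl in band_levels.items()))

# Hypothetical band levels measured near a wind turbine
measured = {4: 72.0, 5: 70.0, 6.3: 69.0, 8: 68.0, 10: 66.0, 12.5: 64.0, 16: 62.0, 20: 60.0}
lpg = g_weighted_level(measured)
print(f"Lp,G = {lpg:.1f} dBG -> "
      f"{'exceeds' if lpg > 75 else 'within'} the proposed 75 dBG limit for Area B")
```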

Keywords: infrasound, exposure limit, hearing thresholds, wind turbines

Procedia PDF Downloads 52
185 Effects of Safety Intervention Program towards Behaviors among Rubber Wood Processing Workers Using Theory of Planned Behavior

Authors: Junjira Mahaboon, Anongnard Boonpak, Nattakarn Worrasan, Busma Kama, Mujalin Saikliang, Siripor Dankachatarn

Abstract:

Rubber wood processing is one of the most important industries in southern Thailand. The process has several safety hazards, for example, unsafe wood-cutting machine guarding, wood dust, noise, and heavy lifting. However, occupational health and safety measures to promote workers' behaviors are still limited. This quasi-experimental research aimed to determine factors affecting workers' safety behaviors, using the theory of planned behavior, after implementing a job safety intervention program. The purposes were (1) to determine factors affecting workers' behaviors and (2) to evaluate the effectiveness of the intervention program. The study sample was 66 workers from a rubber wood processing factory. Factors in the Theory of Planned Behavior (TPB) model were measured before and after the intervention. The TPB factors included attitude towards the behavior, subjective norm, perceived behavioral control, intention, and behavior. Firstly, a Job Safety Analysis (JSA) was conducted, and Safety Standard Operation Procedures (SSOPs) were established. A questionnaire was also used to collect workers' characteristics and the TPB factors. Then, a job safety intervention program to promote workers' behavior according to the SSOPs was implemented over a four-month period. The program included SSOP training, personal protective equipment use, and a safety promotion campaign. After that, the TPB factors were collected again. Paired-sample t-tests and independent t-tests were used to analyze the data. The results revealed that attitude towards the behavior and intention increased significantly after the intervention at p < 0.05. These factors also significantly determined the workers' safety behavior according to the SSOPs at p < 0.05. However, subjective norm and perceived behavioral control neither changed significantly nor were related to safety behaviors. In conclusion, attitude towards the behavior and workers' intention should be promoted to encourage workers' safety behaviors. The SSOP intervention program, e.g., short meetings, safety training, and promotion campaigns, should be implemented continuously on a routine basis to improve workers' behavior.

Keywords: job safety analysis, rubber wood processing workers, safety standard operation procedure, theory of planned behavior

Procedia PDF Downloads 165
184 FMCW Doppler Radar Measurements with Microstrip Tx-Rx Antennas

Authors: Yusuf Ulaş Kabukçu, Sinan Çelik, Onur Salan, Maide Altuntaş, Mert Can Dalkiran, Göksenin Bozdağ, Metehan Bulut, Fatih Yaman

Abstract:

This study presents a more compact implementation of the 2.4 GHz MIT Coffee Can Doppler Radar for a 2.6 GHz operating frequency. The main difference of our prototype lies in the use of microstrip antennas, which makes it possible to transport it on a small robotic vehicle. We have designed our radar system with two channels: Tx and Rx. The system mainly consists of a voltage-controlled oscillator (VCO) source, low-noise amplifiers, microstrip antennas, a splitter, a mixer, a low-pass filter, and the necessary RF connectors and cables. The two microstrip antennas, a single element for the transmitter and an array for the receiver channel, were designed, fabricated, and verified by experiments. The system has two operation modes: speed detection and range detection. If the operation-mode switch is 'Off', only a CW signal is transmitted, for speed measurement. When the switch is 'On', the CW signal is frequency-modulated and range detection is possible. In speed detection mode, the high frequency (2.6 GHz) is generated by the VCO and then amplified to reach a reasonable level of transmit power. Before the amplified signal is transmitted through a microstrip patch antenna, a splitter is used in order to compare the frequencies of the transmitted and received signals. Half of the amplified signal (LO) is forwarded to a mixer, which compares the frequencies of the transmitted (LO) and received (RF) signals and produces the IF output, in other words the Doppler frequency information. The IF output is then filtered and amplified so that the signal can be processed digitally. The filtered and amplified signal carrying the Doppler frequency is fed to the audio input of a computer. From this data, the Doppler frequency is shown as a speed change in a figure via a MATLAB script. According to experimental field measurements, the accuracy of the speed measurement is approximately 90%. In range detection mode, a chirp signal is used to form an FM chirp. This FM chirp helps to determine the range of the target, since the Doppler frequency measured with CW alone is not enough for range detection. Such an FMCW Doppler radar may be used in border security, since it is capable of both speed and range detection.
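
A short worked example of the standard relations behind the two modes is given below (CW Doppler speed and FMCW beat-frequency range); the Doppler shift and chirp parameters in the example are illustrative, not values from the prototype.

```python
# Standard CW-Doppler and FMCW relations; not project-specific code.
C = 3e8          # speed of light, m/s
F0 = 2.6e9       # operating frequency, Hz

def speed_from_doppler(fd_hz):
    """CW mode: target radial speed (m/s) from the measured Doppler shift."""
    return fd_hz * C / (2 * F0)

def range_from_beat(fb_hz, chirp_bw_hz, chirp_time_s):
    """FMCW mode: target range (m) from the beat frequency of a linear chirp."""
    return C * fb_hz * chirp_time_s / (2 * chirp_bw_hz)

print(f"100 Hz Doppler shift -> {speed_from_doppler(100):.2f} m/s")
print(f"10 kHz beat, 100 MHz / 10 ms chirp -> {range_from_beat(10e3, 100e6, 10e-3):.1f} m")
```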

Keywords: doppler radar, FMCW, range detection, speed detection

Procedia PDF Downloads 364
183 Predicting Loss of Containment in Surface Pipeline using Computational Fluid Dynamics and Supervised Machine Learning Model to Improve Process Safety in Oil and Gas Operations

Authors: Muhammmad Riandhy Anindika Yudhy, Harry Patria, Ramadhani Santoso

Abstract:

Loss of containment is the primary hazard that process safety management is concerned with in the oil and gas industry. Escalation to more serious consequences begins with loss of containment: oil and gas released through leakage or spillage from primary containment can result in pool fires, jet fires, and even explosions when it meets the various ignition sources present in operations. Therefore, the heart of process safety management is avoiding loss of containment and mitigating its impact through the implementation of safeguards. The most effective safeguard in this case is an early detection system that alerts Operations to take action before a potential loss of containment occurs. The value of a detection system increases when it is applied to a long surface pipeline, which is naturally difficult to monitor at all times and is exposed to multiple causes of loss of containment, from natural corrosion to illegal tapping. Based on prior research and studies, detecting loss of containment accurately in a surface pipeline is difficult. The trade-off between cost-effectiveness and high accuracy has been the main issue when selecting a traditional detection method. The current best-performing method, the Real-Time Transient Model (RTTM), requires analysis of closely positioned pressure, flow, and temperature (PVT) points in the pipeline to be accurate. Having multiple adjacent PVT sensors along the pipeline is expensive, and hence generally not a viable alternative from an economic standpoint. A conceptual approach combining mathematical modeling using computational fluid dynamics and a supervised machine learning model has shown promising results for predicting leakage in pipelines. Mathematical modeling is used to generate simulation data, which are then used to train the leak detection and localization models. Mathematical models and simulation software have also been shown to provide results comparable with experimental data, with very high levels of accuracy. While a supervised machine learning model requires a large training dataset for the development of accurate models, mathematical modeling has been shown to be able to generate the required datasets, justifying the application of data analytics to the development of model-based leak detection systems for petroleum pipelines. This paper presents a review of key leak detection strategies for oil and gas pipelines, with a specific focus on crude oil applications, and presents the opportunities for the use of data analytics tools and mathematical modeling for the development of a robust real-time leak detection and localization system for surface pipelines. A case study is also presented.
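
The sketch below illustrates the general idea of training a supervised classifier on simulator-generated leak/no-leak examples; the three PVT-style features and the toy "simulator" are hypothetical stand-ins for CFD output, not the authors' model.

```python
# Hedged sketch: label examples generated by a flow simulation as leak / no-leak
# and train a supervised classifier on them.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

def simulate_pvt(n, leak):
    """Stand-in for CFD output: pressure drop, flow imbalance, and temperature change."""
    dp = rng.normal(0.5 if leak else 0.0, 0.2, n)      # extra pressure drop along the segment
    dq = rng.normal(0.3 if leak else 0.0, 0.15, n)     # inlet/outlet flow-rate imbalance
    dt = rng.normal(-0.2 if leak else 0.0, 0.1, n)     # temperature change near the leak
    return np.column_stack([dp, dq, dt])

X = np.vstack([simulate_pvt(500, leak=False), simulate_pvt(500, leak=True)])
y = np.array([0] * 500 + [1] * 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), target_names=["no leak", "leak"]))
```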

Keywords: pipeline, leakage, detection, AI

Procedia PDF Downloads 148
182 Comparison of a Capacitive Sensor Functionalized with Natural or Synthetic Receptors Selective towards Benzo(a)Pyrene

Authors: Natalia V. Beloglazova, Pieterjan Lenain, Martin Hedstrom, Dietmar Knopp, Sarah De Saeger

Abstract:

In recent years, polycyclic aromatic hydrocarbons (PAHs), which represent a hazard to humans and the entire ecosystem, have been receiving increased interest due to their mutagenic, carcinogenic, and endocrine-disrupting properties. They are formed in all incomplete combustion processes of organic matter and are, as a consequence, ubiquitous in the environment. Benzo(a)pyrene (BaP) is on the priority list published by the US Environmental Protection Agency (US EPA) as the first PAH to be identified as a carcinogen and has often been used as a marker for PAH contamination in general. It can be found in different types of water samples; therefore, the European Commission set a limit value of 10 ng L–1 (10 ppt) for BaP in water intended for human consumption. Generally, different chromatographic techniques are used for PAH determination, but these assays require pre-concentration of the analyte, create large amounts of solvent waste, are relatively time consuming, and are difficult to perform on-site. An alternative robust, stand-alone, and preferably cheap solution is needed, for example, a sensing unit which can be submerged in a river to monitor and continuously sample BaP. An affinity sensor based on capacitive transduction was developed. Natural antibodies or their synthetic analogues can be used as ligands. Ideally, the sensor should operate independently over a longer period of time, e.g., several weeks or months; therefore, the use of molecularly imprinted polymers (MIPs) was considered. MIPs are synthetic antibodies which are selective for a chosen target molecule. Their robustness allows application in environments for which biological recognition elements are unsuitable or would denature. They can be reused multiple times, which is essential to meet the stand-alone requirement. BaP is a highly lipophilic compound and does not contain any functional groups in its structure, thus excluding non-covalent imprinting methods based on ionic interactions. Instead, the MIP syntheses were based on non-covalent hydrophobic and π-π interactions. Different polymerization strategies were compared, and the best results were obtained with the MIPs produced by electropolymerization. 4-vinylpyridine (VP) and divinylbenzene (DVB) were used as monomer and cross-linker in the polymerization reaction. The selectivity and recovery of the MIP were compared to a non-imprinted polymer (NIP). Electrodes were functionalized with a natural receptor (a monoclonal anti-BaP antibody) and with MIPs selective towards BaP. Different sets of electrodes were evaluated, and their properties, such as sensitivity, selectivity, and linear range, were determined and compared. It was found that both receptors can reach a cut-off level comparable to the established ML, and although the antibody showed better cross-reactivity and affinity, the MIPs were the more convenient receptor due to their ability to regenerate and their stability in river water for up to 7 days.

Keywords: antibody, benzo(a)pyrene, capacitive sensor, MIPs, river water

Procedia PDF Downloads 286
181 Applying Simulation-Based Digital Teaching Plans and Designs in Operating Medical Equipment

Authors: Kuo-Kai Lin, Po-Lun Chang

Abstract:

Background: The Emergency Care Research Institute released a list of the top 10 medical technology hazards in 2017, with the following hazard topping the list: 'infusion errors can be deadly if simple safety steps are overlooked.' In addition, hospitals use various assessment items to evaluate the safety of their medical equipment, confirming the importance of medical equipment safety. In recent years, the topic of patient safety has garnered increasing attention. Accordingly, various agencies have established patient safety-related committees to coordinate, collect, and analyze information regarding abnormal events associated with medical practice. Activities to promote and improve employee training have been introduced to diminish the recurrence of medical malpractice. Objective: To allow nursing personnel to acquire the skills needed to operate common medical equipment and to update and review such skills whenever necessary, in order to elevate the quality of medical care and reduce patient injuries caused by medical equipment operation errors. Method: In this study, a quasi-experimental design was adopted, and nurses from a regional teaching hospital were selected as the study sample. Online videos instructing on the operation of common medical equipment were made, and quick response codes were designed for the nursing personnel to quickly access the videos when necessary. Senior nursing supervisors and equipment experts were invited to formulate a 'Scale-based Questionnaire for Assessing Nursing Personnel's Operational Knowledge of Common Medical Equipment' to evaluate the nursing personnel's literacy regarding the operation of the medical equipment. From March to October 2017, an employee training on medical equipment operation and a practice course (simulation course) were implemented, after which the effectiveness of the training and practice course was assessed. Results: Prior to and after the training and practice course, the 66 participating nurses scored 58 and 87 on 'operational knowledge of common medical equipment,' respectively (showing a statistically significant difference; t = -9.407, p < .001); 53.5 and 86.3 on 'operational knowledge of 12-lead electrocardiography' (z = -2.087, p < .01), respectively; 40 and 79.5 on 'operational knowledge of cardiac defibrillators' (z = -3.849, p < .001), respectively; 90 and 98 on 'operational knowledge of Abbott pumps' (z = -1.841, p = 0.066), respectively; and 8.7 and 13.7 on 'perceived competence' (showing a statistically significant difference; t = -2.77, p < .05). In the participating hospital, medical equipment operation errors were observed in both 2016 and 2017. However, since the implementation of the intervention, no medical equipment operation errors had been observed up to October 2017, which can be regarded as a secondary outcome of this study. Conclusion: In this study, innovative teaching strategies were adopted to effectively enhance the professional literacy and skills of nursing personnel in operating medical equipment. The training and practice course also elevated the nursing personnel's related literacy and perceived competence in operating medical equipment. The nursing personnel were thus able to accurately operate the medical equipment and avoid operational errors that might jeopardize patient safety.

Keywords: medical equipment, digital teaching plan, simulation-based teaching plan, operational knowledge, patient safety

Procedia PDF Downloads 119
180 Ground Motion Modeling Using the Least Absolute Shrinkage and Selection Operator

Authors: Yildiz Stella Dak, Jale Tezcan

Abstract:

Ground motion models that relate a strong motion parameter of interest to a set of predictive seismological variables describing the earthquake source, the propagation path of the seismic wave, and the local site conditions constitute a critical component of seismic hazard analyses. When a sufficient number of strong motion records are available, ground motion relations are developed using statistical analysis of the recorded ground motion data. In regions lacking a sufficient number of recordings, a synthetic database is developed using stochastic, theoretical or hybrid approaches. Regardless of the manner in which the database was developed, ground motion relations are developed using regression analysis. Development of a ground motion relation is a challenging process which inevitably requires the modeler to make subjective decisions regarding the inclusion criteria of the recordings, the functional form of the model and the set of seismological variables to be included in the model. Because these decisions are critically important to the validity and applicability of the model, there is continuing interest in procedures that facilitate the development of ground motion models. This paper proposes the use of the Least Absolute Shrinkage and Selection Operator (LASSO) in selecting the set of predictive seismological variables to be used in developing a ground motion relation. The LASSO can be described as a penalized regression technique with a built-in capability for variable selection. Similar to ridge regression, the LASSO is based on the idea of shrinking the regression coefficients to reduce the variance of the model. Unlike ridge regression, where the coefficients are shrunk but never set equal to zero, the LASSO sets some of the coefficients exactly to zero, effectively performing variable selection. Given a set of candidate input variables and the output variable of interest, LASSO allows ranking the input variables in terms of their relative importance, thereby facilitating the selection of the set of variables to be included in the model. Because the risk of overfitting increases as the ratio of the number of predictors to the number of recordings increases, selection of a compact set of variables is important when only a small number of recordings are available. In addition, identification of a small set of variables can improve the interpretability of the resulting model, especially when there is a large number of candidate predictors. A practical application of the proposed approach is presented, using more than 600 recordings from the Next Generation Attenuation (NGA) database, in which the effect of a set of seismological predictors on the 5% damped maximum direction spectral acceleration is investigated. The candidate predictors considered are Magnitude, Rrup, and Vs30. Using LASSO, the relative importance of the candidate predictors has been ranked. Regression models with increasing levels of complexity were constructed using the one, two, three, and four best predictors, and the models’ ability to explain the observed variance in the target variable has been compared. The bias-variance trade-off in the context of model selection is discussed.
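
As an editorial illustration of the variable-selection step described above, the following Python sketch computes a LASSO regularization path and ranks candidate predictors by the penalty level at which they enter the model. The synthetic records, the placeholder target model, and the use of scikit-learn are assumptions for illustration only; they do not reproduce the NGA data or the authors' implementation.

```python
# Hedged sketch: LASSO-based ranking of candidate seismological predictors.
# The data below are synthetic placeholders; the paper uses the NGA database.
import numpy as np
from sklearn.linear_model import lasso_path
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 600
magnitude = rng.uniform(4.0, 7.5, n)
rrup = rng.uniform(1.0, 200.0, n)       # rupture distance [km]
vs30 = rng.uniform(150.0, 1200.0, n)    # site shear-wave velocity [m/s]

X = np.column_stack([magnitude, np.log(rrup), np.log(vs30)])
names = ["Magnitude", "ln(Rrup)", "ln(Vs30)"]

# Illustrative target: log spectral acceleration with noise (placeholder model).
y = 1.2 * magnitude - 1.5 * np.log(rrup) - 0.4 * np.log(vs30) + rng.normal(0, 0.5, n)

# Standardize so the penalty treats all predictors on the same scale.
Xs = StandardScaler().fit_transform(X)

# Compute the LASSO path: predictors entering at larger alpha are more important.
alphas, coefs, _ = lasso_path(Xs, y)
entry_alpha = [alphas[np.argmax(np.abs(coefs[j]) > 0)] for j in range(len(names))]
for name, a in sorted(zip(names, entry_alpha), key=lambda t: -t[1]):
    print(f"{name}: enters the model at alpha ~ {a:.3f}")
```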

Keywords: ground motion modeling, least absolute shrinkage and selection operator, penalized regression, variable selection

Procedia PDF Downloads 305
179 Living at Density: Resident Perceptions in Auckland, New Zealand

Authors: Errol J. Haarhoff

Abstract:

Housing in New Zealand, particularly in Auckland, is dominated by low-density suburbs. Over the past 20 years, housing intensification policies aimed at curbing outward low-density sprawl and concentrating development within an urban boundary have been implemented. This requires the greater deployment of attached housing typologies such as apartments, duplexes and terrace housing. There has been a strong market response and uptake for higher density development, with the proportion of building approvals received by the Auckland Council for attached housing units increasing from around 15 percent in 2012/13 to 54 percent in 2017/18. A key question about intensification and strong market uptake in a city where lower density has been the norm is whether higher density neighborhoods will deliver the necessary housing satisfaction. This paper reports on the findings of a questionnaire survey and focus group discussions probing resident perceptions of living at higher density in relation to their dwellings, the neighborhood and their sense of community. The findings reveal strong overall housing satisfaction, including key aspects such as privacy, noise and living in close proximity to neighbors. However, when residents are differentiated in terms of length of tenure, age or whether they are bringing up children, greater variation in satisfaction is detected. For example, residents in the 65-plus age cohort express much higher levels of satisfaction when compared with the 18-44 year cohorts, who are more likely to be bringing up children. This suggests the need for greater design sensitivity to better accommodate the range of household types. Those who have lived in the area longer express greater satisfaction than those with a shorter duration of residence, indicating adaptation over time to living at higher density. The findings strongly underpin the instrumental role that public amenities play in overall housing satisfaction and in the emergence of a strong sense of community. This underscores the necessity for appropriate investment in the public amenities often lacking in market-led higher density housing development. We conclude with an evaluation of the PPP model and its part in delivering housing satisfaction. The findings should be of interest to cities, housing developers and built environment professionals pursuing housing policies promoting intensification and higher density.

Keywords: medium density, housing satisfaction, neighborhoods, sense of community

Procedia PDF Downloads 114
178 Systematic Analysis of Logistics Location Search Methods under Aspects of Sustainability

Authors: Markus Pajones, Theresa Steiner, Matthias Neubauer

Abstract:

Selecting a logistics location is vital for logistics providers, food retailers and other trading companies, since the selection is an essential factor for economic success. Various location search methods, such as cost-benefit analysis, are therefore well known and in use. The development of a logistics location can entail considerable negative effects for the ecosystem, such as surface sealing, loss of biodiversity, and the CO2 and noise emissions generated by freight and commuting traffic. The increasing importance of sustainability demands an informed decision when selecting a logistics location for the future. Sustainability considers economic, ecological and social aspects, which should be equally integrated into the process of location search. The objectives of this paper are to define various methods which support the selection of sustainable logistics locations and to generate knowledge about the suitability, assets and limitations of these methods within the selection process. This paper investigates the role of economic, ecological and social aspects when searching for new logistics locations. To this end, related work on location search is analyzed with respect to the sustainability aspects it encodes. In addition, this research aims to gain knowledge on how to include aspects of sustainability and take an informed decision when searching for a logistics location. As a result, a decomposition of the various location search methods into their components leads to a comparative analysis in the form of a matrix. The comparison within a matrix enables a transparent overview of the assets and limitations of the methods and their suitability for selecting sustainable logistics locations. A further result is knowledge on how to combine the separate methods into a new method for a more efficient selection of logistics locations in the context of sustainability. Future work will especially investigate the above-mentioned combination of various location search methods. The objective is to develop an innovative instrument which supports the search for logistics locations with a focus on balanced sustainability (economy, ecology, social). Through an ideal selection of logistics locations, induced traffic should be reduced and a modal shift to rail and public transport facilitated.
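
As a minimal editorial sketch of how a comparison matrix over economic, ecological and social criteria might be operationalized, the following Python snippet scores hypothetical candidate sites with a weighted sum. The site names, scores and equal weights are illustrative assumptions, not results from the paper.

```python
# Hedged sketch: weighted-sum comparison matrix for candidate logistics
# locations across economic, ecological and social criteria.
# Sites, scores and weights are illustrative placeholders only.
criteria = ["economic", "ecological", "social"]
weights = {"economic": 1 / 3, "ecological": 1 / 3, "social": 1 / 3}  # balanced sustainability

# Normalized scores in [0, 1] for each hypothetical candidate site.
sites = {
    "Site A (motorway hub)":   {"economic": 0.9, "ecological": 0.4, "social": 0.5},
    "Site B (rail-connected)": {"economic": 0.7, "ecological": 0.8, "social": 0.7},
    "Site C (greenfield)":     {"economic": 0.8, "ecological": 0.3, "social": 0.6},
}

for site, scores in sites.items():
    total = sum(weights[c] * scores[c] for c in criteria)
    print(f"{site}: weighted sustainability score = {total:.2f}")
```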

Keywords: commuting traffic, freight traffic, logistics location search, location search method

Procedia PDF Downloads 296
177 Experience of Inpatient Life in Korean Complex Regional Pain Syndrome: A Phenomenological Study

Authors: Se-Hwa Park, En-Kyung Han, Jae-Young Lim, Hye-Jung Ahn

Abstract:

Purpose: The objective of this study is to provide basic data for understanding the substance of inpatient life with CRPS (Complex Regional Pain Syndrome) and for developing efficient and effective nursing interventions. Methods: From September to November 2018, we interviewed 10 CRPS patients about their inpatient experiences. To understand the meaning and intrinsic structure of the inpatient life experience with CRPS, we used the guiding question: 'What is your experience of inpatient life with CRPS?'. For data analysis, the phenomenological method suggested by Colaizzi was applied. Results: According to the analysis, the study participants' inpatient life experience was structured into six categories: (a) experience of breakthrough pain, (b) the limitations of pain treatment, (c) factors that worsen pain during the inpatient period, (d) methods of treating pain, (e) positive experiences during the inpatient period, and (f) requirements directed at the medical team, family and other people in the hospital room. Conclusion: Inpatients with CRPS experienced breakthrough pain. They expected immediate treatment for breakthrough pain, but experienced severe pain because immediate treatment was not implemented. The pain-worsening factors reported by patients with CRPS are as follows: personal factors arising from negative emotions such as insomnia, stress and a sensitive character; touch of the painful area or vibration stimuli on the bed; physical factors such as high thresholds or rapid speed during transfers; conflict with other people; climate factors such as humidity or low temperature; noise; smell; and lack of space because of many visitors. Some patients actively managed the pain by committing to other tasks or diversion, while others managed it passively, simply suppressing it or giving up. They thought positively about rehabilitation treatment. They also required understanding and sympathy from other people, as well as emotional support and immediate intervention from the medical team. Based on the results of this study, we propose a guideline for systematic breakthrough pain management to relieve sudden pain, including notices cautioning against touch or vibration. Non-pharmacological pain management nursing interventions also need to be developed.

Keywords: breakthrough pain, CRPS, complex regional pain syndrome, inpatient life experiences, phenomenological method

Procedia PDF Downloads 110
176 Gadolinium-Based Polymer Nanostructures as Magnetic Resonance Imaging Contrast Agents

Authors: Franca De Sarno, Alfonso Maria Ponsiglione, Enza Torino

Abstract:

Recent advances in diagnostic imaging technology have significantly contributed to a better understanding of the specific changes associated with disease progression. Among the different imaging modalities, Magnetic Resonance Imaging (MRI) is a noninvasive medical diagnostic technique with low sensitivity and long acquisition times, which can discriminate between healthy and diseased tissues by providing 3D data. In order to improve the enhancement of MRI signals, some imaging exams require the intravenous administration of contrast agents (CAs). Recently, emerging research has reported a progressive deposition of these drugs, in particular gadolinium-based contrast agents (GBCAs), in the body many years after multiple MRI scans. These findings confirm the need for a biocompatible system able to boost a clinically relevant Gd-chelate. To this aim, several approaches based on engineered nanostructures have been proposed to overcome the common limitations of conventional CAs, such as insufficient signal-to-noise ratios due to limited relaxivity and a poor safety profile. In particular, nanocarriers labeled or loaded with CAs and capable of carrying high payloads of CAs have been developed. Currently, there is no comprehensive understanding of the thermodynamic contributions that enable the efficacy of conventional CAs to be boosted by a biopolymer matrix. Thus, considering the importance of MRI in diagnosing diseases, a successful example of the next generation of these agents is reported here, in which a commercial gadolinium chelate is incorporated into a biopolymer nanostructure, formed by cross-linked hyaluronic acid (HA), with improved relaxation properties. In addition, the basic principles ruling biopolymer-CA interactions are highlighted from the perspective of their influence on the relaxometric properties of the CA, adopting a multidisciplinary experimental approach. On the basis of these findings, it is clear that the main point consists in increasing the rigidification of readily available Gd-CAs within the biopolymer matrix by controlling the water dynamics, the physicochemical interactions, and the polymer conformations. In the end, the knowledge acquired about polymer-CA systems has been applied to the development of Gd-based HA nanoparticles with enhanced relaxometric properties.
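
Improved relaxation properties of this kind are commonly quantified through the longitudinal relaxivity r1, obtained as the slope of the relaxation rate 1/T1 versus Gd concentration. The following Python sketch illustrates that standard fit; the concentration and T1 values are hypothetical placeholders, not measurements from this work.

```python
# Hedged sketch: estimating longitudinal relaxivity r1 from the standard
# relation 1/T1 = 1/T1_dia + r1 * [Gd]. Measurement values are placeholders.
import numpy as np

conc_mM = np.array([0.0, 0.25, 0.5, 1.0, 2.0])    # hypothetical Gd concentrations [mM]
t1_s = np.array([2.80, 1.05, 0.66, 0.37, 0.20])   # hypothetical measured T1 [s]

rate = 1.0 / t1_s                                  # longitudinal relaxation rate R1 [1/s]
r1, r1_dia = np.polyfit(conc_mM, rate, 1)          # slope = relaxivity, intercept = diamagnetic rate

print(f"r1 ~ {r1:.2f} mM^-1 s^-1, diamagnetic rate ~ {r1_dia:.2f} s^-1")
```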

Keywords: biopolymers, MRI, nanoparticles, contrast agent

Procedia PDF Downloads 123
175 Astronomical Object Classification

Authors: Alina Muradyan, Lina Babayan, Arsen Nanyan, Gohar Galstyan, Vigen Khachatryan

Abstract:

We present a photometric method for identifying stars, galaxies and quasars in multi-color surveys, which uses a library of more than 65,000 color templates for comparison with observed objects. The method aims to extract the information content of object colors in a statistically correct way, and performs both classification and redshift estimation for galaxies and quasars in a unified approach based on the same probability density functions. For the redshift estimation, we employ an advanced version of the Minimum Error Variance estimator, which determines the redshift error from the redshift-dependent probability density function itself. The method was originally developed for the Calar Alto Deep Imaging Survey (CADIS), but is now used in a wide variety of survey projects. We checked its performance by spectroscopy of CADIS objects, where the method provides high reliability (6 errors among 151 objects with R < 24), especially for quasar selection, and redshifts accurate to within σz ≈ 0.03 for galaxies and σz ≈ 0.1 for quasars. For an optimization of future survey efforts, a few model surveys are compared, which are designed to use the same total amount of telescope time but different sets of broad-band and medium-band filters. Their performance is investigated by Monte Carlo simulations as well as by analytic evaluation in terms of classification and redshift estimation. If photon noise were the only error source, broad-band surveys and medium-band surveys should perform equally well, as long as they provide the same spectral coverage. In practice, medium-band surveys show superior performance due to their higher tolerance for calibration errors and cosmic variance. Finally, we discuss the relevance of color calibration and derive important conclusions for the issues of library design and choice of filters. The calibration accuracy poses strong constraints on an accurate classification, which are most critical for surveys with few, broad and deeply exposed filters, but less severe for surveys with many, narrow and less deep filters.
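
As an editorial sketch of template-based classification of this kind, the following Python snippet compares an observed color vector against a synthetic template library via chi-square, sums the resulting likelihoods per class, and forms a likelihood-weighted redshift estimate. The template grid, class labels and photometric errors are random placeholders and do not represent the CADIS library or the Minimum Error Variance estimator itself.

```python
# Hedged sketch: minimum-chi-square / probability-density template classification
# of an observed object's colors. The template library here is synthetic.
import numpy as np

rng = np.random.default_rng(2)
n_templates, n_colors = 65000, 4
templates = rng.normal(0.0, 1.0, size=(n_templates, n_colors))     # template colors
template_class = rng.choice(["star", "galaxy", "quasar"], size=n_templates)
template_z = np.where(template_class == "star", 0.0, rng.uniform(0.0, 4.0, n_templates))

observed = templates[12345] + rng.normal(0, 0.05, n_colors)        # fake observed colors
sigma = 0.05 * np.ones(n_colors)                                   # photometric errors

# Chi-square against every template, converted to a relative likelihood;
# summing per class approximates a class posterior.
chi2 = np.sum(((observed - templates) / sigma) ** 2, axis=1)
like = np.exp(-0.5 * (chi2 - chi2.min()))
posterior = {c: like[template_class == c].sum() for c in ("star", "galaxy", "quasar")}
total = sum(posterior.values())

best = max(posterior, key=posterior.get)
mask = template_class == best
z_hat = np.average(template_z[mask], weights=like[mask])           # likelihood-weighted redshift
print({c: round(p / total, 3) for c, p in posterior.items()}, "z ~", round(z_hat, 3))
```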

Keywords: VO, ArVO, DFBS, FITS, image processing, data analysis

Procedia PDF Downloads 45
174 Effects of Sensory Integration Techniques in Science Education of Autistic Students

Authors: Joanna Estkowska

Abstract:

Sensory integration methods are very useful and improve the daily functioning of autistic and intellectually disabled children. Autism is a neurobiological disorder that impairs one's ability to communicate with and relate to others, as well as the sensory system. Children with autism, even highly functioning kids, can find it difficult to process language against surrounding noise or smells. They are hypersensitive to stimuli that others can ignore, such as sights, sounds and touch. Adolescents with highly functioning autism or Asperger Syndrome can study science and mathematics, but the social aspect is difficult for them. Natural science is an area of study that attracts many of these children. It is a systematic field in which the children can focus on one small aspect at a time; if the rules are followed, an expected result emerges. A sensory integration program and systematic classroom observation are quantitative methods of measuring classroom functioning and behaviors from direct observation. These methods specify both the events and behaviors that are to be observed and how they are to be recorded. Our students with and without autism attended lessons in the natural science classroom at the school and in the laboratory of the University of Science and Technology in Bydgoszcz. The aim of this study is to investigate the effects of sensory integration methods in teaching students with autism. The students were observed during experimental lessons in the classroom and in the laboratory. Their physical characteristics, sensory dysfunction, and behavior in class were taken into consideration by comparing their similarities and differences. In the chemistry classroom, every autistic student is paired with a mentor from their school. In the laboratory, the children are expected to wear goggles, gloves and a lab coat. The chemistry classes in the laboratory were held for four hours with a lunch break, and according to the assistants, the children were engaged the whole time. In the natural science classroom, the students are encouraged to use the interactive exhibition of chemical, physical and mathematical models constructed by the author of this paper. The teacher's goals are to assist the child in inhibiting and modulating sensory information and to support the child in processing a response to sensory stimulation.

Keywords: autism spectrum disorder, science education, sensory integration techniques, student with special educational needs

Procedia PDF Downloads 172
173 Pervasive Computing: Model to Increase Arable Crop Yield through Detection Intrusion System (IDS)

Authors: Idowu Olugbenga Adewumi, Foluke Iyabo Oluwatoyinbo

Abstract:

Presently, there is much discussion of food security and of increasing the yield of arable crops throughout the world. This article briefly presents research efforts to create digital interfaces to nature, in particular in the area of crop production in agriculture, with the aim of increasing yield and with an interest in pervasive computing. The approach goes beyond the use of sensor networks for environmental monitoring by also emphasizing the development of a system architecture that detects intruders (the intrusion process), which reduce the farmer's yield at the end of the planting/harvesting period. The objective of the work is to set out a model for a handheld or portable device for increasing the quality and quantity of arable crops. The process incorporates an infrared motion image sensor with a security alarm system which can send a noise signal to intruders on the farm. This model of a portable image-sensing device for monitoring or scaring off humans, rodents, birds and even pests will reduce post-harvest losses, which will increase the yield on the farm. Nano-intelligence technology is proposed to combat and minimize the intrusion process that usually leads to low quality and quantity of farm produce. An intranet system will be in place with a wireless radio (WLAN), router, server, and client computer systems or handheld devices, e.g., PDAs or mobile phones. This approach enables the development of hybrid systems which will be effective as a security measure on the farm. Precision agriculture has developed with the computerization of agricultural production systems and the networking of computerized control systems. In the intelligent plant production systems of controlled greenhouses, information on plant responses, measured by sensors, is used to optimize the system. Further work must be carried out on modeling using a pervasive computing environment to solve problems in agriculture, as the use of electronics in agriculture will attract more youth involvement in the industry.
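
As a rough editorial sketch of the intrusion-detection idea, the following Python snippet shows a simple frame-differencing motion check of the kind a portable image sensor could run before triggering the noise alarm. The frames are synthetic arrays and the thresholds are arbitrary assumptions; a real device would read frames from its camera and notify clients over the WLAN.

```python
# Hedged sketch: frame-differencing motion check for a portable image sensor.
# Frames are synthetic arrays here; a real device would read from its camera.
import numpy as np

def motion_detected(prev_frame: np.ndarray, frame: np.ndarray,
                    pixel_thresh: int = 25, area_thresh: float = 0.02) -> bool:
    """Return True if enough pixels changed between two grayscale frames."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed_fraction = (diff > pixel_thresh).mean()
    return changed_fraction > area_thresh

rng = np.random.default_rng(3)
background = rng.integers(0, 40, size=(120, 160), dtype=np.uint8)   # quiet field of view
intruder = background.copy()
intruder[40:80, 60:100] += 120                                      # bright moving object

if motion_detected(background, intruder):
    print("Intrusion detected: trigger noise alarm and notify handheld client")
```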

Keywords: pervasive computing, intrusion detection, precision agriculture, security, arable crop

Procedia PDF Downloads 381
172 Density Measurement of Underexpanded Jet Using Stripe Patterned Background Oriented Schlieren Method

Authors: Shinsuke Udagawa, Masato Yamagishi, Masanori Ota

Abstract:

The Schlieren method, which has conventionally been used to visualize high-speed flows, has disadvantages such as the complexity of the experimental setup and the inability to quantitatively analyze the amount of light refraction. The Background Oriented Schlieren (BOS) method proposed by Meier is one of the measurement methods that solves the problems mentioned above. The BOS method exploits the refraction of light, as does the Schlieren method. The BOS method is characterized by the use of a digital camera to capture images of the background behind the observation area. The images are later analyzed by a computer to quantitatively detect the amount of shift of the background image. The experimental setup for BOS does not require the concave mirrors, pinholes, or color filters that are necessary in the conventional Schlieren method, thus simplifying the experimental setup. However, the BOS method introduces defocus into the observation results, because focusing the camera on the background image defocuses the observed object. The defocus of the object becomes greater as the distance between the background and the object increases; on the other hand, higher sensitivity can be obtained. Therefore, it is necessary to adjust the distance between the background and the object to be appropriate for the experiment, considering the relation between defocus and sensitivity. The purpose of this study is to experimentally clarify the effect of defocus on density field reconstruction. In this study, a visualization experiment on an underexpanded jet was performed using the BOS measurement system we constructed, with a Ronchi ruling as the background. The reservoir pressure of the jet and the distance between the camera and the jet axis were fixed, and the distance between the background and the jet axis was changed as the parameter. The images were later analyzed on a personal computer to quantitatively detect the amount of shift of the background image by comparing the background pattern with the captured image of the underexpanded jet. The quantitatively measured shift was then reconstructed into a density field using the Abel transformation and the Gladstone-Dale equation. From the experimental results, it is found that the reconstructed density image becomes more blurred and the noise decreases as the distance between the background and the axis of the underexpanded jet increases. Consequently, it is clarified that the sensitivity constant should be greater than 20 and the circle of confusion diameter should be less than 2.7 mm, at least in this experimental setup.
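
As an editorial sketch of the reconstruction step, the following Python snippet applies an onion-peeling form of the inverse Abel transform to a synthetic line-of-sight projection of (n - 1) and converts the result to density with the Gladstone-Dale relation. The radial grid, the placeholder jet profile and the Gladstone-Dale constant for air are assumptions for illustration; the snippet does not reproduce the authors' processing chain.

```python
# Hedged sketch: axisymmetric density reconstruction via onion-peeling Abel
# inversion and the Gladstone-Dale relation rho = (n - 1) / K.
# The projection data below are synthetic placeholders; in the experiment they
# are derived from the measured background-image shift.
import numpy as np

K_AIR = 2.26e-4          # approximate Gladstone-Dale constant for air [m^3/kg]
N, DR = 50, 1.0e-3       # number of radial shells, shell thickness [m]

r_edges = np.arange(N + 1) * DR
true_n_minus_1 = 3.0e-4 * np.exp(-(np.arange(N) / 10.0) ** 2)   # placeholder jet profile

# Path-length matrix for chords through concentric shells (onion peeling).
y = np.arange(N) * DR
L = np.zeros((N, N))
for j in range(N):
    for k in range(j, N):
        L[j, k] = 2.0 * (np.sqrt(r_edges[k + 1] ** 2 - y[j] ** 2)
                         - np.sqrt(max(r_edges[k] ** 2 - y[j] ** 2, 0.0)))

projection = L @ true_n_minus_1          # synthetic line-of-sight integral of (n - 1)

# Invert the triangular system to recover n(r) - 1, then apply Gladstone-Dale.
n_minus_1 = np.linalg.solve(L, projection)
density = n_minus_1 / K_AIR              # density field [kg/m^3]

print(f"centerline density ~ {density[0]:.3f} kg/m^3")
```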

Keywords: BOS method, underexpanded jet, abel transformation, density field visualization

Procedia PDF Downloads 43