Search results for: distance measurement error
1070 Effect of Punch Diameter on Optimal Loading Profiles in Hydromechanical Deep Drawing Process
Authors: Mehmet Halkaci, Ekrem Öztürk, Mevlüt Türköz, H. Selçuk Halkacı
Abstract:
The hydromechanical deep drawing (HMD) process is an advanced manufacturing process used to form deep parts in only one forming step. In this process, the sheet metal blank can be drawn deeper by means of fluid pressure acting on the sheet surface in the direction opposite to the punch movement. A high limiting drawing ratio, good surface quality, less springback and high dimensional accuracy are some of the advantages of this process. The performance of the HMD process is affected by various process parameters such as fluid pressure, blank holder force, punch-die radius, pre-bulging pressure and height, punch diameter, and friction between sheet and die and between sheet and punch. The fluid pressure and blank holder force are the main loading parameters and significantly affect the formability of the HMD process. The punch diameter also influences the limiting drawing ratio (the ratio of initial sheet diameter to punch diameter) of the sheet metal blank. In this research, optimal loading (fluid pressure and blank holder force) profiles were determined for AA 5754-O sheet material through a fuzzy control algorithm developed in a previous study using LS-DYNA finite element analysis (FEA) software. In the preceding study, the fuzzy control algorithm was developed utilizing geometrical criteria such as thinning and wrinkling. In order to obtain the final desired part with the developed algorithm for the requested punch diameter, the effect of punch diameter, which is one of the process parameters, on loading profiles was investigated separately using a blank thickness of 1 mm. Thus, the practicality of the previously developed fuzzy control algorithm with different punch diameters was clarified. Also, thickness distributions of the sheet metal blank along a curvilinear distance were compared for the FEA in which different punch diameters were used. 
Consequently, it was found that the use of different punch diameters did not significantly affect the optimal loading profiles.
Keywords: Finite Element Analysis (FEA), fuzzy control, hydromechanical deep drawing, optimal loading profiles, punch diameter
Procedia PDF Downloads 431
1069 MIMO Radar-Based System for Structural Health Monitoring and Geophysical Applications
Authors: Davide D’Aria, Paolo Falcone, Luigi Maggi, Aldo Cero, Giovanni Amoroso
Abstract:
The paper presents a methodology for real-time structural health monitoring and geophysical applications. The key elements of the system are a high-performance MIMO radar sensor, an optical camera and a dedicated set of software algorithms encompassing interferometry, tomography and photogrammetry. The MIMO radar sensor proposed in this work provides an extremely high sensitivity to displacements, making the system able to react to tiny deformations (as small as tens of microns) on a time scale which spans from milliseconds to hours. The MIMO feature makes the system capable of providing a set of two-dimensional images of the observed scene, each mapped on the azimuth-range directions with notable resolution in both dimensions and with an outstanding repetition rate. The back-scattered energy, which is distributed in the 3D space, is projected on a 2D plane, where each pixel has as coordinates the line-of-sight distance and the cross-range azimuthal angle. At the same time, the high-performance processing unit allows the system to sense the observed scene with remarkable refresh periods (down to milliseconds), thus opening the way for combined static and dynamic structural health monitoring. Thanks to the smart TX/RX antenna array layout, the MIMO data can be processed through a tomographic approach to reconstruct the three-dimensional map of the observed scene. This 3D point cloud is then accurately mapped on a 2D digital optical image through photogrammetric techniques, allowing for easy and straightforward interpretation of the measurements. Once the three-dimensional image is reconstructed, a 'repeat-pass' interferometric approach is exploited to provide the user of the system with high-frequency three-dimensional motion/vibration estimation of each point of the reconstructed image. 
At this stage, the methodology leverages consolidated atmospheric correction algorithms to provide reliable displacement and vibration measurements.
Keywords: interferometry, MIMO RADAR, SAR, tomography
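The repeat-pass interferometric step described above recovers motion from the phase difference between acquisitions. As an illustrative sketch (not the authors' implementation), the standard two-way line-of-sight displacement relation d = λ·Δφ/(4π) can be written as:

```python
import math

def los_displacement(delta_phase_rad: float, wavelength_m: float) -> float:
    """Line-of-sight displacement from a repeat-pass interferometric
    phase difference, using the two-way relation d = lambda * dphi / (4*pi)."""
    return wavelength_m * delta_phase_rad / (4 * math.pi)

# A full phase cycle (2*pi) corresponds to half a wavelength of displacement
print(los_displacement(2 * math.pi, 0.0175))  # half of a 17.5 mm wavelength: 0.00875 m
```

The wavelength value is a made-up example; the abstract does not state the radar band used.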
Procedia PDF Downloads 195
1068 Genetic Variation of Lactoferrin Gene and Its Association with Productive Traits in Egyptian Goats
Authors: Othman E. Othman, Hassan R. Darwish, Amira M. Nowier
Abstract:
Lactoferrin (LF) is a multifunctional protein involved in economically important production traits such as milk protein composition and skeletal structure in small ruminants, including sheep and goats. Thus, the LF gene, with its genetic polymorphisms associated with production traits, is considered a candidate genetic marker for marker-assisted selection in goats. This study aimed to identify the different alleles and genotypes of this gene in three Egyptian goat breeds using PCR-SSCP (polymerase chain reaction-single-strand conformation polymorphism) and DNA sequencing. Genomic DNA was extracted from 120 animals belonging to the Barki, Zaraibi, and Damascus goat breeds. Using specific primers, PCR amplified 247-bp fragments from exon 2 of the goat LF gene. The PCR products were subjected to the Single-Strand Conformation Polymorphism (SSCP) technique. The results showed the presence of two genotypes, GG and AG, in the tested animals. The frequencies of both genotypes varied among the three tested breeds, with the GG genotype showing the highest frequencies in all tested goat breeds. The sequence analysis of PCR products representing these two detected genotypes revealed the presence of an SNP (single nucleotide polymorphism) substitution (G/A) distinguishing the G and A alleles of this gene. The association between the different LF genotypes and milk composition as well as body measurements was estimated. The comparison showed that animals possessing the AG genotype are superior to those with the GG genotype for different parameters of milk protein composition and skeletal structure. This finding indicates that allele A of the LF gene is a promising marker for productive traits in goats. In conclusion, Egyptian goat breeds can enhance their milk protein composition and growth trait parameters by increasing the frequency of allele A in their herds, given the superior production traits associated with this allele.
Keywords: lactoferrin gene, PCR-SSCP, SNPs, Egyptian goat
Procedia PDF Downloads 155
1067 Time Domain Dielectric Relaxation Microwave Spectroscopy
Authors: A. C. Kumbharkhane
Abstract:
Time domain dielectric relaxation microwave spectroscopy (TDRMS) is a technique for observing the time-dependent response of a sample after application of a time-dependent electromagnetic field. A TDRMS probes the interaction of a macroscopic sample with a time-dependent electrical field. The resulting complex permittivity spectrum characterizes the amplitude (voltage) and time scale of the charge-density fluctuations within the sample. These fluctuations may arise from the reorientation of the permanent dipole moments of individual molecules or from the rotation of dipolar moieties in flexible molecules, like polymers. The time scale of these fluctuations depends on the sample and its relaxation mechanism. Relaxation times range from a few picoseconds in low-viscosity liquids to hours in glasses; therefore, the TDRS technique covers an extensive range of dynamical processes. The corresponding frequencies range from 10⁻⁴ Hz to 10¹² Hz. This inherent ability to monitor the cooperative motion of a molecular ensemble distinguishes dielectric relaxation from methods like NMR or Raman spectroscopy, which yield information on the motions of individual molecules. Recently, we have developed and established the TDR technique in the laboratory, providing information on dielectric permittivity in the frequency range 10 MHz to 30 GHz. The TDR method involves the generation of a step pulse with a rise time of 20 picoseconds in a coaxial line system and monitoring the change in pulse shape after reflection from the sample placed at the end of the coaxial line. There is great interest in studying dielectric relaxation behaviour in liquid systems to understand the role of hydrogen bonding. The intermolecular interaction through hydrogen bonds in molecular liquids results in peculiar dynamical properties. The dynamics of hydrogen-bonded liquids have been studied. 
The theoretical model to explain the experimental results will be discussed.
Keywords: microwave, time domain reflectometry (TDR), dielectric measurement, relaxation time
Procedia PDF Downloads 336
1066 Method for Assessing an Aspect of Sustainable Development: Walkability
Authors: Amna Ali Nasser Al-Saadi, Riken Homma, Kazuhisa Iki
Abstract:
The need to generate objective communication among researchers, practitioners and policy makers is a top concern of sustainability. Despite the fact that many places have succeeded in achieving some aspects of sustainable urban development, there are no scientific facts to convince policy makers in the rest of the world to apply their guides and manuals, because each of these was developed to fulfill the needs of a specific city. The questions are: how to learn the lesson from each case study? How to distinguish between potential criteria and negative ones? And how to quantify their effects on future development? Walkability has been found to be a solution for achieving a healthy lifestyle as well as social, environmental and economic sustainability. Moreover, it is as complicated as every other aspect of sustainable development. This research uses a quantitative-comparative methodology to assess pedestrian-oriented development. Three Analyzed Areas (AAs) were selected. One site is located in Oman, hypothesized to be motorized-oriented development, while two sites are in Japan, where the development is pedestrian friendly. The study used the Multi-Criteria Evaluation Method (MCEM). Initially, MCEM stands on the Analytic Hierarchy Process (AHP). The latter was structured into a main goal (walkability), objectives (functions and layout) and attributes (the urban form criteria). Secondly, GIS was used to evaluate the attributes in multi-criteria maps. Since each criterion has a different scale of measurement, all results were standardized by z-score and used to measure the correlations among criteria. A different scenario was generated from each AA. After that, MCEM (AHP-OWA) based on GIS measured the walkability score and determined the priority of criteria development in the non-walker-friendly environment. As a result, the comparison of criteria z-scores presented a measurably distinguished orientation of development. 
This result was used to demonstrate that the Oman site is a motorized environment while the Japanese sites are walkable. It also identified the strong and the weak criteria regardless of the AA, and was used to generalize the priorities for walkable development.
Keywords: walkability, sustainable development, multi-criteria evaluation method, GIS
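Because the urban-form criteria come on different measurement scales, the study standardizes them by z-score before computing correlations. A minimal sketch of that standardization step (with made-up scores, not the study's data) could be:

```python
import math

def z_scores(values):
    """Standardize a list of criterion scores to z-scores
    (subtract the mean, divide by the population standard deviation)."""
    n = len(values)
    mean = sum(values) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    return [(v - mean) / sd for v in values]

# Hypothetical raw scores of one criterion across analyzed areas
print(z_scores([1.0, 2.0, 3.0]))  # mean 0, unit spread
```

After this step every criterion is dimensionless, so scores measured in metres, counts or percentages can be compared and correlated directly.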
Procedia PDF Downloads 453
1065 Relationships of Functional Status and Subjective Health Status among Stable Chronic Obstructive Pulmonary Disease Patients Residing in the Community
Authors: Hee-Young Song
Abstract:
Background and objectives: In 2011, the Global Initiative for Chronic Obstructive Lung Disease (GOLD) recommendations proposed a multidimensional assessment of patients' conditions that included both functional parameters and patient-reported outcomes, with the aim of providing a comprehensive assessment of the disease, thus meeting both the needs of the patient and the role of the physician. However, few studies have evaluated patient-reported outcomes as well as objective functional assessments among individuals with chronic obstructive pulmonary disease (COPD) in clinical practice in Korea. This study was undertaken to explore the relationship between functional status assessed by the 6-minute walking distance (MWD) test and subjective health status reported by stable patients with COPD residing in the community. Methods: A cross-sectional descriptive study was conducted with 118 stable COPD patients (mean age 69.4 years) selected by convenience sampling from an outpatient department of pulmonology in a tertiary hospital. The 6-MWD test was conducted according to standardized instructions. Participants also completed a structured questionnaire covering general characteristics, smoking history, dyspnea by the modified Medical Research Council (mMRC) scale, and health status by the COPD assessment test (CAT). Anthropometric measurements were performed for body mass index (BMI). Medical records were reviewed to obtain disease-related characteristics including duration of the disease and forced expiratory volume in 1 second (FEV1). Data were analyzed using PASW Statistics 20.0. Results: The mean FEV1% of participants was 63.51%, and the mean 6-MWD and CAT scores were 297.54 m and 17.7, respectively. The 6-MWD and CAT showed a significant negative correlation (r = -.280, p = .002), as did FEV1 and CAT (r = -.347, p < .001). 
Conclusions: Findings suggest that the better the functional status of an individual with COPD, the better their subjective health status, and they provide support for using patient-reported outcomes along with functional parameters to facilitate comprehensive assessment of COPD patients in real clinical practice.
Keywords: chronic obstructive pulmonary disease, COPD assessment test, functional status, patient-reported outcomes
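The reported associations (e.g., r = -.280 between the 6-MWD and CAT) are Pearson correlation coefficients. As a reference sketch of the computation on hypothetical paired observations (not the study's data):

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical pairs: 6-MWD (metres) against CAT score
mwd = [250, 280, 300, 320, 350, 400]
cat = [25, 22, 18, 17, 12, 8]
print(pearson_r(mwd, cat))  # negative: longer walking distance, lower CAT score
```

A negative r here mirrors the study's finding that better functional status goes with better (lower) CAT scores.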
Procedia PDF Downloads 366
1064 Neuroevolution Based on Adaptive Ensembles of Biologically Inspired Optimization Algorithms Applied for Modeling a Chemical Engineering Process
Authors: Sabina-Adriana Floria, Marius Gavrilescu, Florin Leon, Silvia Curteanu, Costel Anton
Abstract:
Neuroevolution is a subfield of artificial intelligence used to solve various problems in different application areas. Specifically, neuroevolution is a technique that applies biologically inspired methods to generate neural network architectures and optimize their parameters automatically. In this paper, we use different biologically inspired optimization algorithms in an ensemble strategy with the aim of training multilayer perceptron neural networks, resulting in regression models used to simulate the industrial chemical process of obtaining bricks from silicone-based materials. Installations in the raw ceramics industry, i.e., bricks, are characterized by significant energy consumption and large quantities of emissions. In addition, the initial conditions that were taken into account during the design and commissioning of the installation can change over time, which leads to the need to add new mixes to adjust the operating conditions for the desired purpose, e.g., material properties and energy saving. The present approach studies, by simulation, a process of obtaining bricks from silicone-based materials, i.e., the modeling and optimization of the process. Optimization aims to determine the working conditions that minimize the emissions, represented by nitrogen monoxide. We first use a search procedure to find the best values for the parameters of various biologically inspired optimization algorithms. Then, we propose an adaptive ensemble strategy that uses only a subset of the best algorithms identified in the search stage. The adaptive ensemble strategy combines the results of the selected algorithms and automatically assigns more processing capacity to the more efficient algorithms. Their efficiency may also vary at different stages of the optimization process. In a given ensemble iteration, the most efficient algorithms aim to maintain good convergence, while the less efficient algorithms can improve population diversity. 
The proposed adaptive ensemble strategy outperforms both the individual optimizers and the non-adaptive ensemble strategy in convergence speed, and the results obtained have lower error values.
Keywords: optimization, biologically inspired algorithm, neuroevolution, ensembles, bricks, emission minimization
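The core idea above, reallocating processing capacity toward the currently more efficient optimizers while keeping weaker ones alive for diversity, can be sketched with generic random-search "optimizers" standing in for the biologically inspired algorithms. This is a toy analogue under stated assumptions (the names, reward rule and decay factor are illustrative choices, not the authors' algorithm):

```python
import random

def adaptive_ensemble(optimizers, objective, rounds=10, budget=50):
    """Toy adaptive ensemble: in each round, allocate more of the evaluation
    budget to the optimizers that recently improved the best-known value,
    while keeping less efficient optimizers alive for diversity."""
    weights = {name: 1.0 for name, _ in optimizers}
    best = float("inf")
    for _ in range(rounds):
        total = sum(weights.values())
        for name, propose in optimizers:
            # each optimizer's share of the budget follows its current weight
            share = max(1, int(budget * weights[name] / total))
            improvements = 0
            for _ in range(share):
                value = objective(propose())
                if value < best:
                    best = value
                    improvements += 1
            # reward recent success, decay old reputation
            weights[name] = 0.5 * weights[name] + 0.5 * (1.0 + improvements)
    return best
```

Minimizing a simple quadratic with two candidate "optimizers" illustrates the mechanism: the proposer whose samples land near the optimum accumulates weight and receives more of the budget in later rounds.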
Procedia PDF Downloads 116
1063 A Web Service Based Sensor Data Management System
Authors: Rose A. Yemson, Ping Jiang, Oyedeji L. Inumoh
Abstract:
The deployment of wireless sensor networks has rapidly increased; however, the increased capacity and diversity of sensors, with applications ranging from biological and environmental to military, generates a tremendous volume of data, where more attention is placed on the distributed sensing and little on how to manage, analyze, retrieve and understand the data generated. This makes it quite difficult to process live sensor data and run concurrent control and updates, because sensor data are heavyweight, complex and slow. This work focuses on developing a web service platform for automatic detection of sensors, acquisition of sensor data, storage of sensor data in a database, and processing of sensor data using reconfigurable software components. This work also creates a web service based sensor data management system to monitor the physical movement of an individual wearing wireless network sensor technology (SunSPOT). The sensor detects the movement of the individual by sensing the acceleration along the X, Y and Z axes and then sends the sensed readings to a database interfaced with an internet platform. The collected sensed data determine the posture of the person, such as standing, sitting and lying down. The system is designed using the Unified Modeling Language (UML) and implemented using Java, JavaScript, HTML and MySQL. This system allows real-time monitoring of an individual and obtaining their physical activity details without being physically present for in-situ measurement, which enables remote monitoring instead of the time-consuming checking of an individual. These details can help in evaluating an individual's physical activity and generating feedback on medication. It can also help in keeping track of any mandatory physical activities required of the individual. 
These evaluations and feedback can help in maintaining a better health status for the individual and providing improved health care.
Keywords: HTML, Java, JavaScript, MySQL, SunSPOT, UML, web-based, wireless network sensor
Procedia PDF Downloads 212
1062 Suitability of Satellite-Based Data for Groundwater Modelling in Southwest Nigeria
Authors: O. O. Aiyelokun, O. A. Agbede
Abstract:
Numerical modelling of groundwater flow can be susceptible to calibration errors due to the lack of adequate ground-based hydro-meteorological stations in river basins. Groundwater resources management in Southwest Nigeria is currently challenged by overexploitation, lack of planning and monitoring, urbanization and climate change; hence, for models to be adopted as decision support tools for the sustainable management of groundwater, they must be adequately calibrated. Since river basins in Southwest Nigeria are characterized by missing data and a lack of adequate ground-based hydro-meteorological stations, adopting satellite-based data for constructing distributed models is crucial. This study evaluates the suitability of satellite-based data as a substitute for ground-based data in computing boundary conditions, by determining whether ground- and satellite-based meteorological data fit well in the Ogun and Oshun River basins. The Climate Forecast System Reanalysis (CFSR) global meteorological dataset was first obtained in daily form and converted to monthly form for a period of 432 months (January 1979 to June 2014). Afterwards, ground-based meteorological data for Ikeja (1981-2010), Abeokuta (1983-2010), and Oshogbo (1981-2010) were compared with CFSR data using Goodness of Fit (GOF) statistics. The study revealed that, based on the mean absolute error (MAE), coefficient of correlation (r) and coefficient of determination (R²), all meteorological variables except wind speed fit well. It was further revealed that maximum and minimum temperature, relative humidity and rainfall had a high range of the index of agreement (d) and ratio of standard deviations (rSD), implying that the CFSR dataset could be used to compute boundary conditions such as groundwater recharge and potential evapotranspiration. 
The study concluded that satellite-based data such as the CFSR should be used as input when constructing groundwater flow models in river basins in Southwest Nigeria, where the majority of river basins are partially gauged and characterized by long gaps in hydro-meteorological data.
Keywords: boundary condition, goodness of fit, groundwater, satellite-based data
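The goodness-of-fit comparison relies on standard statistics: MAE, Pearson r (and hence R²), Willmott's index of agreement d, and the ratio of standard deviations rSD. A self-contained sketch of how these could be computed for a pair of monthly series (illustrative, not the study's code):

```python
import math

def gof(obs, sim):
    """Goodness-of-fit statistics for comparing a ground-based series (obs)
    with a satellite-based series (sim): MAE, Pearson r, R^2, Willmott's
    index of agreement d, and the ratio of standard deviations rSD."""
    n = len(obs)
    mo = sum(obs) / n
    ms = sum(sim) / n
    mae = sum(abs(o - s) for o, s in zip(obs, sim)) / n
    cov = sum((o - mo) * (s - ms) for o, s in zip(obs, sim))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    ss = math.sqrt(sum((s - ms) ** 2 for s in sim))
    r = cov / (so * ss)
    # Willmott's index of agreement: 1 means perfect agreement
    d = 1.0 - (sum((o - s) ** 2 for o, s in zip(obs, sim))
               / sum((abs(s - mo) + abs(o - mo)) ** 2 for o, s in zip(obs, sim)))
    return {"MAE": mae, "r": r, "R2": r * r, "d": d, "rSD": ss / so}
```

For identical series the sketch returns MAE = 0 and r = d = rSD = 1, the benchmark against which the CFSR-versus-station comparisons are judged.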
Procedia PDF Downloads 130
1061 Short Life Cycle Time Series Forecasting
Authors: Shalaka Kadam, Dinesh Apte, Sagar Mainkar
Abstract:
The life cycle of products is becoming shorter and shorter due to increased competition in the market, shorter product development time and increased product diversity. Short life cycles are normal in the retail industry, style business, entertainment media, and the telecom and semiconductor industries. The subject of accurate demand forecasting for short life cycle products is of special interest to many researchers and organizations. Due to the short life cycle of products, the amount of historical data available for forecasting is minimal or even absent when new or modified products are launched in the market. The companies dealing with such products want to increase the accuracy of demand forecasting so that they can utilize the full potential of the market while not oversupplying. This poses the challenge of developing a forecasting model that can forecast accurately while handling large variations in data and considering the complex relationships between various parameters of the data. Many statistical models have been proposed in the literature for forecasting time series data. Traditional time series forecasting models do not work well for short life cycles due to lack of historical data. Also, artificial neural network (ANN) models are very time-consuming for forecasting. We have studied the existing models used for forecasting and their limitations. This work proposes an effective and powerful approach for short life cycle time series forecasting. We have proposed an approach which takes into consideration different scenarios related to data availability for short life cycle products. We then suggest a methodology which combines statistical analysis with structured judgement. The defined approach can also be applied across domains. We then describe the method of creating a profile from analogous products. This profile can then be used for forecasting new products using the historical data of analogous products. 
We have designed an application which combines data, analytics and domain knowledge using point-and-click technology. The forecasting results generated are compared using MAPE, MSE and RMSE error scores. Conclusion: based on the results, it is observed that no single approach is sufficient for short life-cycle forecasting, and two or more approaches need to be combined to achieve the desired accuracy.
Keywords: forecast, short life cycle product, structured judgement, time series
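The comparison of forecasting results uses MAPE, MSE and RMSE. For reference, these scores can be computed as follows (a generic sketch with made-up numbers, not the application's output):

```python
import math

def forecast_errors(actual, forecast):
    """Compute MAPE (%), MSE and RMSE for a pair of equal-length series.
    MAPE assumes no actual value is zero."""
    n = len(actual)
    mape = 100.0 / n * sum(abs((a - f) / a) for a, f in zip(actual, forecast))
    mse = sum((a - f) ** 2 for a, f in zip(actual, forecast)) / n
    return {"MAPE": mape, "MSE": mse, "RMSE": math.sqrt(mse)}

# Made-up demand (actual) vs. forecast for a short-lived product
print(forecast_errors([100.0, 200.0, 150.0], [110.0, 180.0, 150.0]))
```

Lower values on all three scores indicate a better forecast; RMSE penalizes large individual errors more heavily than MAPE.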
Procedia PDF Downloads 358
1060 Constructing a Semi-Supervised Model for Network Intrusion Detection
Authors: Tigabu Dagne Akal
Abstract:
While advances in computer and communications technology have made the network ubiquitous, they have also rendered networked systems vulnerable to malicious attacks devised from a distance. These attacks or intrusions start with attackers infiltrating a network through a vulnerable host and then launching further attacks on the local network or Intranet. Nowadays, system administrators and network professionals can attempt to prevent such attacks by developing intrusion detection tools and systems using data mining technology. In this study, the experiments were conducted following the Knowledge Discovery in Databases process model, which starts from the selection of the datasets. The dataset used in this study was taken from the Massachusetts Institute of Technology Lincoln Laboratory. The data were then pre-processed. The major pre-processing activities performed for this study include filling in missing values, removing outliers, resolving inconsistencies, integrating data containing both labelled and unlabelled records, dimensionality reduction, size reduction, and data transformation activities such as discretization. A total of 21,533 intrusion records are used for training the models. For validating the performance of the selected model, a separate set of 3,397 records is used as a testing set. For building a predictive model for intrusion detection, the J48 decision tree and Naïve Bayes algorithms were tested as classification approaches, both with and without feature selection. The model created using 10-fold cross-validation with the J48 decision tree algorithm and default parameter values showed the best classification accuracy. The model has a prediction accuracy of 96.11% on the training datasets and 93.2% on the test dataset, classifying new instances as normal, DOS, U2R, R2L or probe classes. 
The findings of this study show that data mining methods generate interesting rules that are crucial for intrusion detection and prevention in the networking industry. Future research directions are suggested to develop an applicable system in the area of the study.
Keywords: intrusion detection, data mining, computer science
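One of the two classifiers compared above, Naïve Bayes, labels each connection record with the class maximizing the product of a class prior and per-feature likelihoods. A minimal categorical Naïve Bayes with Laplace smoothing (a toy illustration with made-up features, not the study's actual setup) could look like:

```python
import math
from collections import Counter, defaultdict

def train_naive_bayes(rows, labels):
    """Train a minimal categorical Naive Bayes classifier with Laplace
    smoothing; returns a predict(row) function."""
    n = len(labels)
    class_counts = Counter(labels)
    # (feature index, class) -> counts of observed feature values
    feat_counts = defaultdict(Counter)
    for row, y in zip(rows, labels):
        for i, v in enumerate(row):
            feat_counts[(i, y)][v] += 1

    def predict(row):
        best_y, best_logp = None, float("-inf")
        for y, cnt in class_counts.items():
            logp = math.log(cnt / n)  # class prior
            for i, v in enumerate(row):
                c = feat_counts[(i, y)]
                # Laplace-smoothed conditional likelihood P(value | class)
                logp += math.log((c[v] + 1) / (cnt + len(c) + 1))
            if logp > best_logp:
                best_y, best_logp = y, logp
        return best_y

    return predict

# Toy connection records: (protocol, service)
rows = [("tcp", "http"), ("tcp", "http"), ("udp", "dns"), ("udp", "dns")]
labels = ["normal", "normal", "dos", "dos"]
predict = train_naive_bayes(rows, labels)
print(predict(("tcp", "http")))  # normal
```

Log-probabilities are used instead of raw products to avoid numeric underflow on records with many features, and the Laplace term keeps unseen feature values from zeroing out a class.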
Procedia PDF Downloads 296
1059 Effect of Perceived Importance of a Task in the Prospective Memory Task
Authors: Kazushige Wada, Mayuko Ueda
Abstract:
In the present study, we reanalyzed lapse errors in the last phase of a job, by re-counting near lapse errors and increasing the number of participants. We also examined the results of this study from the perspective of prospective memory (PM), which concerns future actions. This study was designed to investigate whether perceiving the importance of PM tasks caused lapse errors in the last phase of a job and to determine if such errors could be explained from the perspective of PM processing. Participants (N = 34) conducted a computerized clicking task, in which they clicked on 10 figures that they had learned in advance in 8 blocks of 10 trials. Participants were requested to click the check box in the start display of a block and to click the checking off box in the finishing display. This task was a PM task. As a measure of PM performance, we counted the number of omission errors caused by forgetting to check off in the finishing display, which was defined as a lapse error. The perceived importance was manipulated by different instructions. Half the participants in the highly important task condition were instructed that checking off was very important, because equipment would be overloaded if it were not done. The other half in the not important task condition was instructed only about the location and procedure for checking off. Furthermore, we controlled workload and the emotion of surprise to confirm the effect of demand capacity and attention. To manipulate emotions during the clicking task, we suddenly presented a photo of a traffic accident and the sound of a skidding car followed by an explosion. Workload was manipulated by requesting participants to press the 0 key in response to a beep. Results indicated too few forgetting induced lapse errors to be analyzed. However, there was a weak main effect of the perceived importance of the check task, in which the mouse moved to the “END” button before moving to the check box in the finishing display. 
In particular, the highly important task group showed more such near-lapse errors than the not-important task group. Neither surprise nor workload affected the occurrence of near-lapse errors. These results imply that high perceived importance of PM tasks impairs task performance. On the basis of the multiprocess framework of PM theory, we suggest that PM task performance in this experiment relied not on monitoring PM tasks but on spontaneous retrieval.
Keywords: prospective memory, perceived importance, lapse errors, multiprocess framework of prospective memory
Procedia PDF Downloads 446
1058 Impact of Gamma Irradiation on Biological Activities of Artemisia herba alba from Algeria
Authors: Abir Mohamed Mohamed Ibrahim, Amina Titouche, Mohamed Hazzit
Abstract:
Phytotherapy is based on the use of plant natural products, which are a main source of drugs with healing properties for the treatment of human, animal or plant diseases. With these aims, and to replace chemical preservatives with natural products, we are interested in using essential oils from an Algerian endemic plant belonging to the Asteraceae family: Artemisia herba alba Asso, which underwent hydro-distillation after irradiation by gamma rays at doses of 10, 20 and 30 kGy, giving essential oil yields of 1.087%, 1.087% and 1.085% respectively, compared with the untreated sample, which gave a yield of 1.27%. The in vitro antioxidant activity of the essential oil of A. herba alba was assessed by two different methods: inhibition of the DPPH radical and measurement of reducing power. The first method did not reveal a very large difference regardless of the irradiation dose; the IC50 was about 4000 mg/l and the maximum inhibition was around 49.4%. Likewise, the reducing power test gave a maximum reducing capacity of 0.76%; both results were registered by the specimen irradiated at 20 kGy, which has slightly better antioxidant power than the non-irradiated sample. To combat Fusarium culmorum, which causes wilts and rots, we focused on the antifungal screening of this aromatic plant. The results obtained, followed by measurements of Minimal Inhibitory Concentrations (MIC), showed a promising inhibitory effect against the tested pathogen. With a yield above 1%, the essential oil showed remarkable efficiency against the strain, mainly for the sample irradiated at 30 kGy (MICs = 625 µg/ml; MICc = 1250 µg/ml) with a MIC of 2%. 
These results demonstrate good antifungal activity, limiting and even stopping the development of the pathogenic microorganism, as well as a positive effect of the irradiation dose in enhancing this capacity while upholding the antioxidant capacity.
Keywords: Artemisia herba alba Asso, essential oil yield, gamma ray, antioxidant activity, antifungal activity
Procedia PDF Downloads 519
1057 A Study of NT-ProBNP and ETCO2 in Patients Presenting with Acute Dyspnoea
Authors: Dipti Chand, Riya Saboo
Abstract:
OBJECTIVES: Early and correct diagnosis can present a significant clinical challenge in patients presenting to the Emergency Department with acute dyspnoea. The common causes of acute dyspnoea and respiratory distress in the Emergency Department are decompensated Heart Failure (HF), Chronic Obstructive Pulmonary Disease (COPD), asthma, pneumonia, Acute Respiratory Distress Syndrome (ARDS), Pulmonary Embolism (PE), and other causes such as anaemia. The aim of the study was to measure N-terminal pro-Brain Natriuretic Peptide (NT-proBNP) and exhaled End-Tidal Carbon Dioxide (ETCO2) in patients presenting with dyspnoea. MATERIAL AND METHODS: This prospective, cross-sectional and observational study was performed at the Government Medical College and Hospital, Nagpur, between October 2019 and October 2021 in patients admitted to the Medicine Intensive Care Unit. Three groups of patients were compared: (1) an HF-related acute dyspnoea group (n = 52), (2) a pulmonary (COPD/PE)-related acute dyspnoea group (n = 31) and (3) a sepsis with ARDS-related dyspnoea group (n = 13). All patients underwent initial clinical examination with a recording of initial vital parameters along with on-admission ETCO2 measurement, NT-proBNP testing, arterial blood gas analysis, lung ultrasound examination, 2D echocardiography, chest X-rays, and other relevant diagnostic laboratory testing. RESULTS: 96 patients were included in the study. Median NT-proBNP was found to be highest in the Heart Failure group (11,480 pg/ml), followed by the sepsis group (780 pg/ml), while the pulmonary group had an NT-proBNP of 231 pg/ml. The mean ETCO2 value was highest in the pulmonary group (48.610 mmHg), followed by the Heart Failure group (31.51 mmHg) and the sepsis group (19.46 mmHg). The results were found to be statistically significant (P < 0.05). CONCLUSION: NT-proBNP has high diagnostic accuracy in differentiating acute HF-related dyspnoea from pulmonary (COPD and ARDS)-related acute dyspnoea. 
Higher levels of ETCO2 help in diagnosing patients with COPD.
Keywords: NT PRO BNP, ETCO2, dyspnoea, lung USG
Procedia PDF Downloads 77
1056 Experimental Study of Reflective Roof as a Passive Cooling Method in Homes Under the Paradigm of Appropriate Technology
Authors: Javier Ascanio Villabona, Brayan Eduardo Tarazona Romero, Camilo Leonardo Sandoval Rodriguez, Arly Dario Rincon, Omar Lengerke Perez
Abstract:
Efficient energy consumption in the housing sector in relation to refrigeration is a concern in the construction and rehabilitation of houses in tropical areas. Thermal comfort is aggravated by heat gain through the roof surface. Thus, within the group of passive cooling techniques, practices and technologies in solar control that improve comfort conditions include thermal insulation and geometric changes to the roofs. On the other hand, reflection and radiation methods are used to decrease heat gain by facilitating the removal of excess heat from inside a building to maintain a comfortable environment. Since the potential of these techniques varies across climatic zones, their application in different zones should be examined. This research is based on the experimental study of a prototype of a roof radiator as a passive cooling method in homes. It was developed through an experimental research methodology, taking measurements in a prototype built under the paradigm of appropriate technology, with the aim of establishing the initial behavior of the internal temperature resulting from the external climate. As a starting point, a selection matrix was made to identify the typologies of passive cooling systems in order to model the system and its subsequent implementation, establishing its constructive characteristics. This was followed by the measurement of climatic variables (outside the prototype) and microclimatic variables (inside the prototype) to obtain a database for analysis. As a final result, the decrease in temperature inside the chamber with respect to the outside temperature was evidenced, as was a linearity in its behavior in relation to the variations of the climatic variables.
Keywords: appropriate technology, enveloping, energy efficiency, passive cooling
Procedia PDF Downloads 94
1055 Competitive DNA Calibrators as Quality Reference Standards (QRS™) for Germline and Somatic Copy Number Variations/Variant Allelic Frequencies Analyses
Authors: Eirini Konstanta, Cedric Gouedard, Aggeliki Delimitsou, Stefania Patera, Samuel Murray
Abstract:
Introduction: Quality reference DNA standards (QRS) for molecular testing by next-generation sequencing (NGS) are essential for accurate quantitation of copy number variations (CNV) in germline analyses and variant allelic frequencies (VAF) in somatic analyses. Objectives: Presently, several molecular analytics for oncology patients are reliant upon quantitative metrics. Test validation and standardisation are also reliant upon the availability of surrogate control materials allowing for understanding of test LOD (limit of detection), sensitivity, and specificity. We have developed a dual calibration platform allowing for QRS pairs to be included in analysed DNA samples, enabling accurate quantitation of CNV and VAF metrics within and between patient samples. Methods: QRS™ blocks up to 500 nt were designed for common NGS panel targets incorporating ≥ 2 identification tags (IDTDNA.com). These were analysed upon spiking into gDNA, somatic DNA, and ctDNA using a proprietary CalSuite™ platform adaptable to common LIMS. Results: We demonstrate QRS™ calibration reproducibility spiked to 5–25% at ± 2.5% in gDNA and ctDNA. Furthermore, we demonstrate CNV and VAF within and between samples (gDNA and ctDNA) with the same reproducibility (± 2.5%) in clinical samples of lung cancer and HBOC (EGFR and BRCA1, respectively). CNV analytics were performed with similar accuracy using a single pair of QRS calibrators as when using multiple single-targeted sequencing controls. Conclusion: Dual paired QRS™ calibrators allow for accurate and reproducible quantitative analyses of CNV, VAF, and intrinsic sample allele measurement, both within and between samples, not only simplifying NGS analytics but also allowing monitoring of clinically relevant biomarker VAF across patient ctDNA samples with improved accuracy.
Keywords: calibrator, CNV, gene copy number, VAF
Procedia PDF Downloads 152
1054 Promoting Health and Academic Achievement: Mental Health Promoting Online Education
Authors: Natalie Frandsen
Abstract:
Pursuing post-secondary education is a milestone for many Canadian youths. This transition involves many changes and opportunities for growth. However, this may also be a period where challenges arise. Perhaps not surprisingly, mental health challenges for post-secondary students are common. This poses difficulties for students and instructors. Common mental-health-related symptoms (e.g., low motivation, fatigue, inability to concentrate) can affect academic performance, and instructors may need to provide accommodations for these students without the necessary expertise. ‘Distance education’ has been growing and gaining momentum in Canada for three decades. As a consequence of the COVID-19 pandemic, post-secondary institutions have been required to deliver courses using ‘remote’ methods (i.e., various online delivery modalities). The learning challenges and subsequent academic performance issues experienced by students with mental-health-related disabilities studying online are not well understood. However, we can postulate potential factors drawing from learning theories, the relationship between mental-health-related symptoms and academic performance, and learning design. Identifying barriers and opportunities to academic performance is an essential step in ensuring that students with mental-health-related disabilities are able to achieve their academic goals. Completing post-secondary education provides graduates with more employment opportunities. It is imperative that our post-secondary institutions take a holistic view of learning by providing learning and mental health support while reducing structural barriers. Health-promoting universities and colleges infuse health into their daily operations and academic mandates. Acknowledged in this Charter is the notion that all sectors must take an active role in favour of health, social justice, and equity for all. 
Drawing from mental health promotion and Universal Design for Learning (UDL) frameworks, relevant adult learning concepts, and critical digital pedagogy, considerations for mental-health-promoting online learning community development will be summarized. The education sector has the opportunity to create and foster equitable and mental-health-promoting learning environments. This is of particular importance during a global pandemic, when the mental health of students is being disproportionately impacted.
Keywords: academic performance, community, mental health promotion, online learning
Procedia PDF Downloads 136
1053 Physics-Based Earthquake Source Models for Seismic Engineering: Analysis and Validation for Dip-Slip Faults
Authors: Percy Galvez, Anatoly Petukhin, Paul Somerville, Ken Miyakoshi, Kojiro Irikura, Daniel Peter
Abstract:
Physics-based dynamic rupture modelling is necessary for estimating parameters such as rupture velocity and slip rate function that are important for ground motion simulation but poorly resolved by observations, e.g. by seismic source inversion. In order to generate a large number of physically self-consistent rupture models, whose rupture process is consistent with the spatio-temporal heterogeneity of past earthquakes, we use multicycle simulations under the heterogeneous rate-and-state (RS) friction law for a 45° dip-slip fault. We performed a parametrization study by fully dynamic rupture modeling, and then a set of spontaneous source models was generated in a large magnitude range (Mw > 7.0). In order to validate the rupture models, we compare the scaling relations with seismic moment Mo for the modeled rupture area S, as well as for the average slip Dave and the slip asperity area Sa, against similar scaling relations from source inversions. Ground motions were also computed from our models. Their peak ground velocities (PGV) agree well with the GMPE values. We obtained good agreement of the permanent surface offset values with empirical relations. From the heterogeneous rupture models, we analyzed parameters which are critical for ground motion simulations, i.e. distributions of slip, slip rate, rupture initiation points, rupture velocities, and source time functions. We studied cross-correlations between them and with the friction weakening distance Dc, the only initial heterogeneity parameter in our modeling. The main findings are: (1) high slip-rate areas coincide with or are located on an outer edge of the large slip areas, (2) ruptures have a tendency to initiate in small Dc areas, and (3) high slip-rate areas correlate with areas of small Dc, large rupture velocity, and short rise-time.
Keywords: earthquake dynamics, strong ground motion prediction, seismic engineering, source characterization
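This abstract relates rupture area S and average slip Dave to the seismic moment Mo without showing the conversion. As a hedged illustration (not taken from the paper), the standard definitions Mo = μ·S·D and Mw = (2/3)(log10 Mo − 9.1) can be sketched in Python; the rigidity, rupture dimensions, and slip below are invented example values.

```python
import math

def seismic_moment(mu_pa, area_m2, avg_slip_m):
    # Scalar seismic moment: M0 = mu * S * D (N*m)
    return mu_pa * area_m2 * avg_slip_m

def moment_magnitude(m0_nm):
    # Moment magnitude (Kanamori), M0 in N*m
    return (2.0 / 3.0) * (math.log10(m0_nm) - 9.1)

# Hypothetical 60 km x 20 km rupture, 2 m average slip, rigidity 30 GPa
m0 = seismic_moment(30e9, 60e3 * 20e3, 2.0)
mw = moment_magnitude(m0)
```

With these example numbers the rupture lands just above the Mw 7.0 threshold discussed in the abstract.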
Procedia PDF Downloads 144
1052 In-vitro Metabolic Fingerprinting Using Plasmonic Chips by Laser Desorption/Ionization Mass Spectrometry
Authors: Vadanasundari Vedarethinam, Kun Qian
Abstract:
Metabolic analysis is more distal than proteomics and genomics in clinical practice and requires rationally distinct techniques, designed materials, and devices for clinical diagnosis. Conventional techniques such as spectroscopic techniques, biochemical analyzers, and electrochemical methods have been used for metabolic diagnosis. Currently, there are four major challenges: (I) long sample pretreatment; (II) difficulties in direct metabolic analysis of biosamples due to their complexity; (III) accurate detection of low-molecular-weight metabolites; and (IV) construction of diagnostic tools on material- and device-based platforms for real-case use in biomedical applications. Development of chips with nanomaterials is promising to address these critical issues. Mass spectrometry (MS) has displayed high sensitivity and accuracy, throughput, reproducibility, and resolution for molecular analysis. In particular, laser desorption/ionization mass spectrometry (LDI MS) combined with devices affords desirable speed for mass measurement in seconds and high sensitivity with low cost towards large-scale use. We developed a plasmonic chip for clinical metabolic fingerprinting as a hot carrier in LDI MS through a series of chips with gold nanoshells on the surface, via controlled particle synthesis, dip-coating, and gold sputtering for mass production. We integrated the optimized chip with microarrays for laboratory automation and nanoscaled experiments, which afforded direct high-performance metabolic fingerprinting by LDI MS using 500 nL of serum, urine, cerebrospinal fluid (CSF), and exosomes. Further, we demonstrated on-chip direct in-vitro metabolic diagnosis of early-stage lung cancer patients using serum and exosomes without any pretreatment or purification.
To the best of our knowledge, this work initiates a bionanotechnology-based platform for advanced metabolic analysis toward large-scale diagnostic use.
Keywords: plasmonic chip, metabolic fingerprinting, LDI MS, in-vitro diagnostics
Procedia PDF Downloads 163
1051 Computational Fluid Dynamics Simulation of Turbulent Convective Heat Transfer in Rectangular Mini-Channels for Rocket Cooling Applications
Authors: O. Anwar Beg, Armghan Zubair, Sireetorn Kuharat, Meisam Babaie
Abstract:
In this work, motivated by rocket channel cooling applications, we describe recent CFD simulations of turbulent convective heat transfer in mini-channels at different aspect ratios. ANSYS FLUENT software has been employed, with a mean average error of 5.97% relative to Forrest’s MIT cooling channel study (2014) at a Reynolds number of 50,443 and a Prandtl number of 3.01. This suggests that the simulation model created for turbulent flow was a suitable foundation for the study of different aspect ratios in the channel. Multiple aspect ratios were considered to understand the influence of high aspect ratios and to identify the best-performing cooling channel, which was determined to be the highest-aspect-ratio channel. Hence, the approximately 28:1 aspect ratio provided the best characteristics to ensure effective cooling. A mesh convergence study was performed to assess the optimum mesh density for accurate results. For this study, an element size of 0.05 mm was used to generate 579,120 elements for proper turbulent flow simulation. Deploying a greater bias factor would increase the mesh density toward the furthest edges of the channel, which would be useful if the focus of the study were just a single side of the wall. Since a bulk temperature is involved in the calculations, it is essential that a suitable bias factor is used to ensure the reliability of the results. Hence, in this study we opted for a bias factor of 5 to allow greater mesh density at both edges of the channel. However, limitations on mesh density and hardware curtailed the sophistication achievable for the turbulence characteristics. Also, only straight rectangular channels were considered, i.e. curvature was ignored. Furthermore, we considered only conventional water coolant. From this CFD study, the variation of aspect ratio provided a deeper appreciation of the effect of small to high aspect ratios with regard to cooling channels.
Hence, when considering an application for the channel, the aspect-ratio geometry must play a crucial role in optimizing cooling performance.
Keywords: rocket channel cooling, ANSYS FLUENT CFD, turbulence, convection heat transfer
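The channel aspect ratio enters the Reynolds number through the hydraulic diameter of the rectangular cross-section. A minimal sketch of that relation follows; the 5.6 mm × 0.2 mm channel dimensions (a 28:1 ratio) and the flow conditions are assumed for illustration only, not taken from the study.

```python
def hydraulic_diameter(width_m, height_m):
    # D_h = 4*A / P for a rectangular duct
    area = width_m * height_m
    perimeter = 2.0 * (width_m + height_m)
    return 4.0 * area / perimeter

def reynolds(rho, velocity, d_h, mu):
    # Re = rho * V * D_h / mu
    return rho * velocity * d_h / mu

# Hypothetical 28:1 channel (5.6 mm x 0.2 mm), water at ~20 C, 20 m/s
d_h = hydraulic_diameter(5.6e-3, 0.2e-3)
re = reynolds(998.0, 20.0, d_h, 1.0e-3)
```

Note how a high aspect ratio shrinks the hydraulic diameter, so a mini-channel needs very high velocities to reach Reynolds numbers of the order reported above.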
Procedia PDF Downloads 150
1050 Reproducibility of Shear Strength Parameters Determined from CU Triaxial Tests: Evaluation of Results from Regression of Different Failure Stress Combinations
Authors: Henok Marie Shiferaw, Barbara Schneider-Muntau
Abstract:
Test repeatability and data reproducibility are a concern in many geotechnical laboratory tests due to inherent soil variability, inhomogeneous sample preparation, and measurement inaccuracy. Test results on comparable specimens vary to a considerable extent; thus, the shear strength parameters derived from triaxial tests are also affected. In this contribution, we present the reproducibility of effective shear strength parameters from consolidated undrained triaxial tests on plain soil and cement-treated soil specimens. Six remolded test specimens were prepared for the plain soil and six for the cement-treated soil. Conventional testing at three levels of consolidation pressure was considered, with effective consolidation pressures of 100 kPa, 200 kPa, and 300 kPa, respectively. At each effective consolidation pressure, two tests were performed on comparable specimens. Focus was laid on achieving the same mean dry density and the same water content during sample preparation for the two specimens. The cement-treated specimens were tested after 28 days of curing. Shearing of the specimens was carried out at a deformation rate of 0.4 mm/min after sample saturation at a back pressure of 900 kPa, followed by consolidation. The effective peak and residual shear strength parameters were then estimated by regression analysis of 21 different combinations of the failure stresses from the six tests conducted, for both the plain soil and cement-treated soil samples. The 21 different stress combinations were constructed by picking three, four, five, or six failure stresses at a time in different combinations. Results indicate that the effective shear strength parameters estimated from the regression of different combinations of the failure stresses vary. The effective critical friction angle was found to be more consistent than the effective peak friction angle, with a smaller standard deviation.
The reproducibility of the shear strength parameters for the cement-treated specimens was even lower than that of the untreated specimens.
Keywords: shear strength parameters, test repeatability, data reproducibility, triaxial soil testing, cement improvement of soils
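One common way to regress a set of failure stresses into effective shear strength parameters, as this abstract describes, is a least-squares fit of the failure points in s'–t space followed by conversion of the Kf-line back to c' and φ'. This is a sketch under invented failure stresses, not the authors' actual data or necessarily their exact regression scheme.

```python
import math

def mohr_coulomb_from_failure(sig1_eff, sig3_eff):
    """Estimate effective c' (kPa) and phi' (deg) from failure stresses by
    least-squares regression of the Kf-line t = a + s' * tan(psi) in s'-t space,
    where s' = (sig1'+sig3')/2, t = (sig1'-sig3')/2, sin(phi') = tan(psi),
    and c' = a / cos(phi')."""
    s = [(a + b) / 2.0 for a, b in zip(sig1_eff, sig3_eff)]
    t = [(a - b) / 2.0 for a, b in zip(sig1_eff, sig3_eff)]
    n = len(s)
    sm, tm = sum(s) / n, sum(t) / n
    slope = (sum((si - sm) * (ti - tm) for si, ti in zip(s, t))
             / sum((si - sm) ** 2 for si in s))
    intercept = tm - slope * sm
    phi = math.asin(slope)
    return intercept / math.cos(phi), math.degrees(phi)

# Hypothetical effective failure stresses (kPa) for three consolidation pressures
sig1 = [310.0, 560.0, 800.0]
sig3 = [100.0, 200.0, 300.0]
c_kpa, phi_deg = mohr_coulomb_from_failure(sig1, sig3)
```

Running the fit on different subsets of the failure points, as the study does with its 21 combinations, would show directly how much c' and φ' scatter.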
Procedia PDF Downloads 33
1049 Experimental Study of an Isobaric Expansion Heat Engine with Hydraulic Power Output for Conversion of Low-Grade-Heat to Electricity
Authors: Maxim Glushenkov, Alexander Kronberg
Abstract:
The isobaric expansion (IE) process is an alternative to conventional gas/vapor expansion accompanied by a pressure decrease, typical of all state-of-the-art heat engines. Eliminating the pressure-decreasing expansion stage means that the most critical and expensive parts of ORC systems (turbine, screw expander, etc.) are also eliminated. In many cases, IE heat engines can be more efficient than conventional expansion machines. In addition, IE machines have a very simple, reliable, and inexpensive design. They can also perform all the known operations of existing heat engines and provide usable energy in a very convenient hydraulic or pneumatic form. This paper reports measurements made with the engine operating as a heat-to-shaft-power or electricity converter and a comparison of the experimental results to a thermodynamic model. Experiments were carried out at heat source temperatures in the range 30–85 °C and a heat sink temperature around 20 °C; refrigerant R134a was used as the engine working fluid. The pressure difference generated by the engine varied from 2.5 bar at a heat source temperature of 40 °C to 23 bar at a heat source temperature of 85 °C. Using a differential piston, the generated pressure was quadrupled to pump hydraulic oil through a hydraulic motor that generates shaft power and is connected to an alternator. At a frequency of about 0.5 Hz, the engine operates with useful powers up to 1 kW and an oil pumping flowrate of 7 L/min. Depending on the temperature of the heat source, the obtained efficiency was 3.5–6%. This efficiency looks very high considering such a low temperature difference (10–65 °C) and low power (< 1 kW). The engine’s observed performance is in good agreement with the predictions of the model.
The results are very promising, showing that the engine is a simple and low-cost alternative to ORC plants and other known energy conversion systems, especially in the low temperature (< 100 °C) and low power (< 500 kW) range, where other known technologies are not economic. Thus, low-grade solar and geothermal energy, biomass combustion, and waste heat with a temperature above 30 °C can be harnessed in various energy conversion processes.
Keywords: isobaric expansion, low-grade heat, heat engine, renewable energy, waste heat recovery
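For context, the reported 3.5–6% efficiency can be compared against the Carnot limit for the same temperature pair. The sketch below uses the 85 °C source and 20 °C sink quoted in the abstract; the second-law comparison is our own illustration, not a figure from the paper.

```python
def carnot_efficiency(t_hot_c, t_cold_c):
    # Ideal (Carnot) limit for a heat engine, temperatures given in Celsius
    t_hot = t_hot_c + 273.15
    t_cold = t_cold_c + 273.15
    return 1.0 - t_cold / t_hot

# Reported operating point: 85 C source, 20 C sink, ~6% measured efficiency
eta_carnot = carnot_efficiency(85.0, 20.0)
second_law_eff = 0.06 / eta_carnot  # fraction of the ideal limit achieved
```

The Carnot limit here is roughly 18%, so a 6% measured efficiency corresponds to about a third of the ideal value, which supports the abstract's claim that the figure is high for such a small temperature difference.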
Procedia PDF Downloads 226
1048 Dosimetric Application of α-Al2O3:C for Food Irradiation Using TA-OSL
Authors: A. Soni, D. R. Mishra, D. K. Koul
Abstract:
α-Al2O3:C has been reported to have deep traps at 600 °C and 900 °C. These traps have been reported to be accessed at relatively lower temperatures (122 °C and 322 °C, respectively) using thermally assisted OSL (TA-OSL). In this work, the dose response of α-Al2O3:C was studied in the range of 10 Gy to 10 kGy for its application to food irradiation in the low (up to 1 kGy) and medium (1 to 10 kGy) dose ranges. The TOL (thermo-optically stimulated luminescence) measurements were carried out on a Risø TL/OSL system, model TL-DA-15, having a blue light-emitting diode (λ = 470 ± 30 nm) stimulation source with the power level set at 90% of the maximum stimulation intensity for the blue LEDs (40 mW/cm2). The observations were carried out on commercial α-Al2O3:C phosphor. The TOL experiments were carried out with 300 active channels and 1 inactive channel. Using these settings, the sample is subjected to linear thermal heating and constant optical stimulation. The detection filter used in all observations was a Hoya U-340 (λp ~ 340 nm, FWHM ~ 80 nm). Irradiation of the samples was carried out using a 90Sr/90Y β-source housed in the system. A heating rate of 2 °C/s was preferred in the TL measurements so as to reduce the temperature lag between the heater plate and the samples. To study the dose response of the deep traps of α-Al2O3:C, samples were irradiated with various doses ranging from 10 Gy to 10 kGy. For each dose, three samples were irradiated. In order to record the TA-OSL, TL was initially recorded up to a temperature of 400 °C to deplete the signal due to the 185 °C main dosimetry TL peak in α-Al2O3:C, which is also associated with the basic OSL traps. After taking the TL readout, the sample was subsequently subjected to the TOL measurement. As a result, two well-defined TA-OSL peaks at 121 °C and 232 °C occur in the time as well as the temperature domain, which are distinct from the main dosimetric TL peak at ~185 °C.
The integrated TOL signal has been measured as a function of absorbed dose and found to be linear up to 10 kGy. Thus, it can be used in the low and intermediate dose ranges for application to food irradiation. The deep energy level defects of the α-Al2O3:C phosphor can be accessed using the TOL section of the Risø reader system.
Keywords: α-Al2O3:C, deep traps, food irradiation, TA-OSL
Procedia PDF Downloads 300
1047 Collocation Errors in English as Second Language (ESL) Essay Writing
Authors: Fatima Muhammad Shitu
Abstract:
In language learning, second language learners, like their native-speaker counterparts, commit errors in their attempt to achieve competence in the target language. The realm of collocation has to do with meaning relations between lexical items. In all human languages, there is a kind of ‘natural order’ in which words are arranged or relate to one another in sentences, so much so that when a word occurs in a given context, the related or naturally co-occurring word will automatically come to mind. It becomes an error, therefore, if students inappropriately pair or arrange such ‘naturally’ co-occurring lexical items in a text. It has been observed that most of the second language learners in this research group commit collocational errors. A study of this kind is very significant as it gives insight into the kinds of errors committed by learners. This will help the language teacher to identify the sources and causes of such errors as well as correct them, thereby guiding, helping, and leading the learners towards achieving some level of competence in the language. The aim of the study is to understand the nature of these errors as stumbling blocks to effective essay writing. The objective of the study is to identify the errors and analyse their structural compositions so as to determine whether there are similarities between students in this regard, and to find out whether there are patterns to these kinds of errors which will enable the researcher to understand their sources and causes. As a descriptive study, the researcher sampled nine hundred essays collected from three hundred undergraduate learners of English as a second language at the Federal College of Education, Kano, North-West Nigeria, i.e. three essays per student. The essays, which were given at three different lecture times, were of similar thematic preoccupations (i.e. same topics) and length (i.e. same number of words).
The essays were written during the lecture hour on three different occasions. The errors were identified in a systematic manner, whereby each identified error was recorded only once even if it occurred several times in a student's essays. The data were collated by converting the identified numbers of occurrences into percentages. The findings from the study indicate that there are similarities as well as regular and repeated errors which provide a pattern. Based on the pattern identified, the conclusion is that students' collocational errors are attributable to poor teaching and learning, which resulted in wrong generalisation of rules.
Keywords: collocations, errors, second language learning, ESL students
Procedia PDF Downloads 330
1046 Representational Competence Profile of Secondary Students in Understanding Selected Chemical Principles
Authors: Ryan Villafuerte Lansangan
Abstract:
Assessing students’ understanding at the microscopic level of an abstract subject like chemistry poses a challenge to teachers. The literature reveals that the use of representations serves as an essential avenue for measuring the extent of understanding in the discipline, as an alternative to traditional assessment methods. This undertaking explored the representational competence profile of high school students from the University of Santo Tomas High School in understanding selected chemical principles and correlated this with their academic profile in chemistry, based on their performance in the academic achievement examination in chemistry administered by the Center for Education Measurement (CEM). The common misconceptions of the students on the selected chemistry principles, based on their representations, were taken into consideration, as well as the students’ views regarding their understanding of the role of chemical representations in their learning. A students' level-of-representation task instrument, consisting of the main lessons in chemistry with a corresponding scoring guide, was prepared and utilized in the study. The study revealed that most of the students under study were rated as Level 2 (symbolic level) in terms of their representational competence in understanding the selected chemical principles through the use of chemical representations. Alternative misrepresentations were most often observed in the students’ representations of chemical bonding concepts, while the concept of the chemical equation appeared to be the most comprehensible topic in chemistry for the students. The data imply that teachers’ representations play an important role in helping students understand a concept at the microscopic level. Results also showed that the students' academic achievement in chemistry, based on the standardized CEM examination, has a significant association with their representational competence.
In addition, the students' responses to the questionnaire on students' views of chemical representations evidently showed a good understanding of what a chemical representation or a mental model is, by giving a negative response to the notion that these tools should be an exact replica. Moreover, the students confirmed a greater appreciation of chemical representations as explanatory tools.
Keywords: chemical representations, representational competence, academic profile in chemistry, secondary students
Procedia PDF Downloads 406
1045 The Concept of Birthday: A Theoretical, Historical, and Social Overview, in Judaism and Other Cultures
Authors: Orly Redlich
Abstract:
In an age of social distance, added to an individualistic and competitive worldview, it has become important to find ways to promote closeness and personal touch. A sense of social belonging and positive interaction with others have recently become a considerable necessity. Therefore, this theoretical paper reviews one of the familiar and common concepts among different cultures around the world: the birthday. The paper's theoretical contribution deepens the understanding of the birthday concept. Birthday rituals are historical and universal events, noted since prehistoric eras. In ancient history, birthday rituals were reserved solely for kings and members of the nobility, but over the years, birthday celebrations have evolved into a worldwide tradition. Some familiar birthday customs and symbols are currently common among most cultures, while some cultures have adopted unique birthday customs that characterize their values and traditions. The birthday concept also has a unique significance in Judaism, historically, religiously, and socially: it is considered a lucky day and a private holiday for the celebrant. Therefore, the present paper reviews diverse birthday customs around the world in different cultures, including Judaism, and marks important birthdays throughout history. The paper also describes how the concept of the birthday has appeared over the years in songs, novels, and art, and presents quotes from distinguished sages. The theoretical review suggests that the birthday has a special meaning as a time-mark in the cycle of life and as a means of socialization in human development. Moreover, the birthday serves as a symbol of belonging and group cohesiveness, a day on which the celebrant's sense of belonging and sense of importance are strengthened and nurtured.
Thus, the reappearance of these elements in a family or group interaction during the birthday ceremony allows celebrants to absorb positive impressions about themselves. In view of the extensive theoretical review, it seems that the unique importance of birthdays can serve as the foundation for intervention programs that may affect participants' sense of belonging and empowerment. In the group aspect, it can perhaps also yield therapeutic factors within a group. Concrete recommendations are presented at the end of the paper.
Keywords: birthday, universal events, positive interaction, group cohesiveness, rituals
Procedia PDF Downloads 141
1044 Evaluation of Traffic Noise Level: A Case Study in Residential Area of Ishbiliyah, Kuwait
Authors: Jamal Almatawah, Hamad Matar, Abdulsalam Altemeemi
Abstract:
The World Health Organization (WHO) has recognized environmental noise as harmful pollution that causes adverse psychosocial and physiological effects on human health. The motor vehicle is considered one of the main sources of noise pollution. It is a universal phenomenon, and it has grown to the point that it has become a major concern for both the public and policymakers. The aim of this paper, therefore, is to investigate traffic noise levels and the contributing factors that affect them, such as traffic volume, heavy vehicles, and speed, as well as meteorological factors, in Ishbiliyah as a sample residential area in Kuwait. Three types of roads were selected in Ishbiliyah: expressway, major arterial, and collector street. Other noise sources that interfere with traffic noise have also been considered in this study. The traffic noise level was measured and analyzed using the Bruel & Kjaer outdoor sound level meter 2250-L (2250 Light). A Count-Cam2 video camera was used to collect the peak and off-peak traffic counts. An Ambient Weather WM-5 handheld weather station was used for meteorological factors such as temperature, humidity, and wind speed. Spot speeds were obtained using a radar speed gun (Decatur Genesis model GHD-KPH). All measurements were taken at the same time (simultaneously). The results showed that the traffic noise level is over the allowable limit on all types of roads. The average equivalent noise level (LAeq) for the expressway, major arterial, and collector street was 74.3 dB(A), 70.47 dB(A), and 60.84 dB(A), respectively. In addition, positive correlation coefficients were obtained between traffic noise and traffic volume and between traffic noise and 85th percentile speed. However, there was no significant relation with meteorological factors.
Abnormal vehicle noise due to poor maintenance or user-enhanced exhausts was found to be one of the strongest factors affecting the overall traffic noise readings.
Keywords: traffic noise, residential area, pollution, vehicle noise
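The LAeq values quoted in this abstract are energy averages rather than arithmetic means of the sampled levels. A minimal sketch of the standard combination formula follows; the sample values are hypothetical, not the study's data.

```python
import math

def laeq(levels_db):
    """Equivalent continuous sound level from equal-duration samples:
    L_Aeq = 10 * log10( (1/N) * sum 10^(Li/10) )."""
    mean_energy = sum(10.0 ** (l / 10.0) for l in levels_db) / len(levels_db)
    return 10.0 * math.log10(mean_energy)

# Hypothetical dB(A) samples on a major arterial
samples = [68.0, 72.0, 71.0, 70.5, 69.5]
overall = laeq(samples)
```

Because the average is taken in the energy domain, loud events dominate: the result always sits at or above the arithmetic mean of the samples.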
Procedia PDF Downloads 67
1043 Spatial Analysis of Survival Pattern and Treatment Outcomes of Multi-Drug Resistant Tuberculosis (MDR-TB) Patients in Lagos, Nigeria
Authors: Akinsola Oluwatosin, Udofia Samuel, Odofin Mayowa
Abstract:
The study is aimed at a Geographic Information System (GIS)-based spatial analysis of the survival pattern and treatment outcomes of Multi-Drug Resistant Tuberculosis (MDR-TB) cases in Lagos, Nigeria, with the objective of informing priority areas for public health planning and resource allocation. Multi-drug resistant tuberculosis (MDR-TB) develops due to problems such as irregular drug supply, poor drug quality, inappropriate prescription, and poor adherence to treatment. The shapefiles for this study were already georeferenced to the Minna datum. The patient information from various hospitals was compiled in MS Excel and later converted to a .CSV file for easy processing in ArcMap. To superimpose the patient information on the spatial data, the addresses were geocoded to generate the longitude and latitude of the patients. The database was used for SQL queries on the various patterns of treatment. To show the pattern of disease spread, spatial autocorrelation analysis was used. The result was displayed in a graphical format showing the areas of dispersed, random, and clustered patients in the study area. Hot and cold spot analysis was performed to show high-density areas. The distance between these patients and the closest health facility was examined using buffer analysis. The results show that 22% of the points were successfully matched, while 15% were tied. However, the result table shows that a greater percentage were unmatched; this reflects the fact that most of the streets within the State are unnamed, and, then again, many patients are likely to have supplied wrong addresses. MDR-TB patients of all age groups are concentrated within Lagos-Mainland, Shomolu, Mushin, Surulere, Oshodi-Isolo, and Ifelodun LGAs. MDR-TB patients in the 30–47 years age group were the most numerous, identified as about 184 in number.
The outcome of patients on ART treatment revealed that a high number of patients (300) were not on ART treatment, while only 45 patients were on ART treatment. The result shows that the Z-score of the distribution is greater than 2.58, which means that the distribution is highly clustered at a significance level of 0.01.
Keywords: tuberculosis, patients, treatment, GIS, MDR-TB
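The clustering conclusion above follows from the standard interpretation of a Global Moran's I z-score against critical values of the normal distribution. A minimal sketch of that interpretation step (not the authors' ArcMap workflow; the thresholds are the usual two-tailed critical values):

```python
# Hedged sketch: map a Global Moran's I z-score to a spatial pattern.
# Thresholds are the standard two-tailed critical values: 2.58 (p < 0.01)
# and 1.65 (p < 0.10); |z| <= 1.65 is consistent with spatial randomness.
def classify_pattern(z_score: float) -> str:
    """Classify a spatial distribution from its Moran's I z-score."""
    if z_score > 2.58:
        return "clustered (p < 0.01)"
    if z_score < -2.58:
        return "dispersed (p < 0.01)"
    if abs(z_score) <= 1.65:
        return "random (not significant)"
    return "weakly clustered/dispersed (0.01 <= p < 0.10)"

# A z-score above 2.58, as reported for the MDR-TB point pattern,
# indicates significant clustering at the 0.01 level.
print(classify_pattern(3.1))  # → clustered (p < 0.01)
```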
Procedia PDF Downloads 152
1042 Monitoring the Thin Film Formation of Carrageenan and PNIPAm Microgels
Authors: Selim Kara, Ertan Arda, Fahrettin Dolastir, Önder Pekcan
Abstract:
Biomaterials and thin film coatings play a fundamental role in the medical, food, and pharmaceutical industries. Carrageenan is a linear sulfated polysaccharide extracted from algae and seaweeds. To date, such biomaterials have been used in many smart drug delivery systems due to their biocompatibility and antimicrobial activity. Poly(N-isopropylacrylamide) (PNIPAm) gels and copolymers have also been used in medical applications. PNIPAm exhibits a lower critical solution temperature (LCST) at about 32-34 °C, which is very close to human body temperature. Below and above the LCST, PNIPAm gels exhibit distinct phase transitions between swollen and collapsed states. A special class of gels are microgels, which can react to environmental changes significantly faster than macroscopic gels due to their small size. The quartz crystal microbalance (QCM) is one of the attractive measurement techniques used for monitoring thin-film formation. A sensitive QCM system was designed to detect a 0.1 Hz difference in resonance frequency and a 10⁻⁷ change in energy dissipation, which are measures of the deposited mass and the film rigidity, respectively. PNIPAm microgels with diameters of around a few hundred nanometers were produced in water via a precipitation polymerization process. 5 MHz quartz crystals with functionalized gold surfaces were used for the deposition of the carrageenan molecules and microgels from solutions that were slowly pumped through a flow cell. Interactions between charged carrageenan and microgel particles were monitored during the formation of the film layers, and the Sauerbrey masses of the deposited films were calculated.
It was shown that it is possible to monitor the interactions between PNIPAm microgels and biopolymer molecules, and also to determine the critical phase transition temperatures, using a QCM system.
Keywords: carrageenan, phase transitions, PNIPAm microgels, quartz crystal microbalance (QCM)
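The Sauerbrey mass mentioned above relates the measured frequency shift linearly to the deposited areal mass. A minimal sketch of that calculation, assuming the standard sensitivity constant C ≈ 17.7 ng cm⁻² Hz⁻¹ for a 5 MHz AT-cut crystal (matching the crystals described, and valid only for thin, rigid films):

```python
# Hedged sketch: Sauerbrey mass from a QCM resonance-frequency shift.
# delta_f is negative when mass is deposited, so the returned mass is positive.
def sauerbrey_mass(delta_f_hz: float, c_ng_per_cm2_hz: float = 17.7) -> float:
    """Areal deposited mass (ng/cm^2) from a frequency shift (Hz),
    for a 5 MHz AT-cut quartz crystal by default."""
    return -c_ng_per_cm2_hz * delta_f_hz

# A -10 Hz shift corresponds to ~177 ng/cm^2 of deposited film.
print(sauerbrey_mass(-10.0))  # → 177.0
```

The linear relation breaks down for soft, highly hydrated layers such as swollen microgels, which is why the dissipation channel is monitored alongside the frequency.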
Procedia PDF Downloads 231
1041 Flame Propagation Velocity of Selected Gas Mixtures Depending on the Temperature
Authors: Kaczmarzyk Piotr, Anna Dziechciarz, Wojciech Klapsa
Abstract:
The purpose of this paper is to present the results of research on the influence of temperature on the flame propagation velocity of selected gas-air mixtures. The research was conducted on a test apparatus in the form of a 2 m long duct. The test apparatus was funded by the project “Development of methods to neutralize threats of explosion for determined tanks contained technical gases, including alternative sources of supply in the fire environment, taking into account needs of rescuers”, number DOB-BIO6/02/50/2014. The project is funded by The National Centre for Research and Development. This paper presents the results of measurements of the rate of pressure rise and the flame propagation velocity obtained with this apparatus for air-methane and air-propane mixtures, the duct being instrumented to track both the flame front and the overpressure wave. Studies were performed using gas mixtures with different concentrations: methane (3% to 8% vol.) and propane (3% to 6% vol.). For the above concentrations, tests were carried out at temperatures of 20 and 30 °C. The gas mixture was supplied to the inside of the duct by the partial pressure method. Data acquisition was performed using 5 dynamic pressure transducers and 5 ionization probes arranged along the duct. Temperature changes were produced using a heater mounted on the bottom of the duct. During the tests, the following parameters were recorded: maximum explosion pressure, maximum pressure recorded by the sensors, and voltage recorded by the ionization probes. The tests performed on flammable gas-air mixtures indicate that temperature changes have an influence on the overpressure wave velocity. It should be noted that temperature changes do not have a major impact on the flame front velocity. In the case of propane-air mixtures at 30 °C, DDT (deflagration-to-detonation transition) phenomena were observed.
The velocity increased from 2 to 20 m/s. This kind of explosion could turn into a detonation, but the duct length (2 m) is too short for the transition to complete.
Keywords: flame propagation, flame propagation velocity, explosion, propane, methane
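Flame front velocity in such a duct is typically obtained from the arrival times of the flame at successive ionization probes, as v = Δx/Δt between neighbouring probes. A minimal sketch of that reduction step; the probe positions and arrival times below are hypothetical, not the project's data:

```python
# Hedged sketch: average flame-front velocity between consecutive ionization
# probes from their positions (m) and flame arrival times (s).
def front_velocities(positions_m, times_s):
    """Return v = dx/dt for each pair of consecutive probes along the duct."""
    pairs = list(zip(positions_m, times_s))
    return [
        (x1 - x0) / (t1 - t0)
        for (x0, t0), (x1, t1) in zip(pairs, pairs[1:])
    ]

# Hypothetical example: five probes spaced 0.4 m apart along the 2 m duct;
# shrinking time intervals indicate the flame accelerating toward the outlet.
pos = [0.2, 0.6, 1.0, 1.4, 1.8]
t = [0.00, 0.20, 0.30, 0.36, 0.40]
print(front_velocities(pos, t))  # velocities rise along the duct
```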
Procedia PDF Downloads 226