Search results for: fast pyrolysis
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2063

203 Fast and Non-Invasive Patient-Specific Optimization of Left Ventricle Assist Device Implantation

Authors: Huidan Yu, Anurag Deb, Rou Chen, I-Wen Wang

Abstract:

The use of left ventricle assist devices (LVADs) has been a proven and effective therapy for patients with severe end-stage heart failure. Given the limited availability of suitable donor hearts, LVADs will probably become the alternative solution for patients with heart failure in the near future. While the LVAD is being continuously improved toward enhanced performance, increased device durability, and reduced size, a better understanding of implantation management becomes critical in order to achieve a better long-term blood supply and fewer post-surgical complications such as thrombus formation. Important issues related to LVAD implantation include the location of the outflow graft (OG), the angle of the OG, the pumping combination between the LVAD and the native heart, and uniform versus pulsatile flow at the OG. We have hypothesized that the optimal implantation of an LVAD is patient-specific. To test this hypothesis, we employ a novel in-house computational modeling technique, named InVascular, to conduct a systematic evaluation of cardiac output at the aortic arch, together with other pertinent hemodynamic quantities, for each patient under various implantation scenarios, aiming at an optimal implantation strategy. InVascular is a powerful computational modeling technique that integrates unified mesoscale modeling for both image segmentation and fluid dynamics with cutting-edge GPU parallel computing. It first segments the aorta from the patient's CT image, then seamlessly feeds the extracted morphology, together with the velocity waveform from an echocardiographic ultrasound image of the same patient, into the computational model to quantify 4-D (time + space) velocity and pressure fields. Using one NVIDIA Tesla K40 GPU card, InVascular completes a computation from CT image to 4-D hemodynamics within 30 minutes. It thus has great potential for massive numerical simulation and analysis.
The systematic evaluation for one patient includes three OG anastomosis sites (ascending aorta, descending thoracic aorta, and subclavian artery), three pumping combinations of the LVAD and the native heart (1:1, 1:2, and 1:3), three OG anastomosis angles (inclined upward, perpendicular, and inclined downward), and two LVAD inflow conditions (uniform and pulsatile). The optimal LVAD implantation is suggested through a comprehensive analysis of the cardiac output and related hemodynamics from the simulations over the resulting fifty-four scenarios. To confirm the hypothesis, five randomly selected patient cases will be evaluated.
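
The four implantation parameters above define a full-factorial grid; a minimal Python sketch (with labels taken from the abstract) enumerates it and confirms the fifty-four scenarios:

```python
from itertools import product

# Parameter levels as listed in the abstract; the string labels are
# shorthand for illustration only.
og_sites = ["ascending aorta", "descending thoracic aorta", "subclavian artery"]
pump_ratios = ["1:1", "1:2", "1:3"]  # LVAD : native heart pumping
og_angles = ["inclined upward", "perpendicular", "inclined downward"]
inflows = ["uniform", "pulsatile"]

# The Cartesian product of all parameter choices gives the evaluation grid.
scenarios = list(product(og_sites, pump_ratios, og_angles, inflows))
print(len(scenarios))  # 3 * 3 * 3 * 2 = 54
```

Each tuple in `scenarios` corresponds to one simulation run of the InVascular pipeline.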

Keywords: graphic processing unit (GPU) parallel computing, left ventricle assist device (LVAD), lumped-parameter model, patient-specific computational hemodynamics

Procedia PDF Downloads 133
202 Pre-Cooling Strategies for the Refueling of Hydrogen Cylinders in Vehicular Transport

Authors: C. Hall, J. Ramos, V. Ramasamy

Abstract:

Hydrocarbon-fueled vehicles are a major contributor to air pollution due to the harmful emissions they produce, leading to a demand for cleaner fuel types. A leader in this pursuit is hydrogen, whose use in vehicles produces zero harmful emissions, the only by-product being water. To compete with the performance of conventional vehicles, hydrogen gas must be stored on board in cylinders at high pressure (35–70 MPa) and have a short refueling duration (approximately 3 minutes). However, the fast filling of hydrogen cylinders causes a significant rise in temperature due to the combination of the negative Joule-Thomson effect and the compression of the gas. This can lead to structural failure, and therefore a maximum allowable internal temperature of 85 °C has been imposed by the International Organization for Standardization. The technological solution to the rapid temperature rise during refueling is to decrease the temperature of the gas entering the cylinder. Pre-cooling of the gas uses a heat exchanger and requires energy to operate; it is therefore imperative to determine the least energy input required to lower the gas temperature. A validated universal thermodynamic model is used to identify an energy-efficient pre-cooling strategy. The model requires negligible computational time and is applied to previously validated experimental cases to optimize the pre-cooling requirements. The pre-cooling characteristics considered are its location within the refueling timeline and its duration. A constant pressure-ramp rate is imposed to eliminate the effects of rapid changes in mass flow rate. A pre-cooled gas temperature of -40 °C, the lowest allowable temperature, is applied. The heat exchanger is assumed to be ideal, with no energy losses. The refueling of the cylinders is modeled with the pre-cooling split into ten-percent time intervals.
Furthermore, bursts of varying duration are applied in both the early and late stages of the refueling procedure. The model shows that pre-cooling in the later stages of the refueling process is more energy-efficient than early pre-cooling. In addition, the efficiency of pre-cooling towards the end of the refueling process is independent of the pressure profile at the inlet. This leads to the hypothesis that pre-cooled gas should be applied as late as possible in the refueling timeline and at very low temperatures. The model showed a 31% reduction in energy demand, while achieving the same final gas temperature, for a refueling scenario in which pre-cooling was applied towards the end of the process. Identifying the most energy-efficient refueling approaches while adhering to the safety guidelines is imperative to reducing the operating cost of hydrogen refueling stations. Heat exchangers are energy-intensive, so reducing the energy requirement would reduce cost. This investigation shows that pre-cooling should be applied as late as possible and for short durations.
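
As an illustration only (not the authors' validated thermodynamic model), a back-of-the-envelope sensible-heat estimate shows why cooling a smaller fraction of the dispensed gas costs proportionally less energy; the dispensed mass, fill time, and constant-flow assumption are all invented for the example:

```python
# Sensible-heat duty for pre-cooling hydrogen from ambient to -40 degC.
# All numbers below are assumptions for illustration, not from the study.
CP_H2 = 14.3e3            # J/(kg K), approximate cp of hydrogen gas
T_IN, T_PRE = 25.0, -40.0 # inlet and pre-cooled gas temperatures, degC
M_TOTAL = 5.0             # kg dispensed over the whole fill (assumed)

def precool_energy(frac_cooled):
    """Cooling duty (J) if a fraction of the dispensed mass is pre-cooled,
    assuming a constant mass flow rate (a simplification)."""
    m_cooled = M_TOTAL * frac_cooled
    return m_cooled * CP_H2 * (T_IN - T_PRE)

# Pre-cooling only the last 30% of the fill vs. the whole fill:
print(precool_energy(0.3), precool_energy(1.0))
```

The real model additionally captures when the cooled gas matters most for the final cylinder temperature, which is what makes late pre-cooling the efficient choice.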

Keywords: cylinder, hydrogen, pre-cooling, refueling, thermodynamic model

Procedia PDF Downloads 96
201 Development and Validation of a Rapid Turbidimetric Assay to Determine the Potency of Cefepime Hydrochloride in Powder Injectable Solution

Authors: Danilo F. Rodrigues, Hérida Regina N. Salgado

Abstract:

Introduction: The emergence of microorganisms resistant to a large number of clinically approved antimicrobials has been increasing, restricting the options for the treatment of bacterial infections. As a consequence, drugs with high antimicrobial activity are in the spotlight. Among them, the cephalosporin class stands out; its fourth-generation member cefepime (CEF) is a semi-synthetic product with activity against various aerobic Gram-positive (e.g., oxacillin-resistant Staphylococcus aureus) and Gram-negative (e.g., Pseudomonas aeruginosa) bacteria. There are few studies in the literature on the development of microbiological methodologies for the analysis of this antimicrobial, so research in this area is highly relevant for optimizing the analysis of this drug in industry and ensuring the quality of the marketed product. The development of microbiological methods for the analysis of antimicrobials has gained strength in recent years and has been highlighted relative to physicochemical methods, especially because such methods make it possible to determine the bioactivity of the drug against a microorganism. In this context, the aim of this work was the development and validation of a microbiological method for the quantitative analysis of CEF in lyophilized powder for injectable solution by turbidimetric assay. Method: Staphylococcus aureus ATCC 6538 IAL 2082 was used as the test microorganism, and the culture medium chosen was Casoy broth. The test was performed under temperature control (35.0 °C ± 2.0 °C) and incubated for 4 hours in a shaker. Readings were taken at a wavelength of 530 nm with a spectrophotometer. The turbidimetric microbiological method was validated by determining the following parameters: linearity, precision (repeatability and intermediate precision), accuracy, and robustness, according to ICH guidelines.
Results and discussion: Among the parameters evaluated for method validation, linearity showed results suitable for both statistical analyses, with correlation coefficients (r) of 0.9990 for the CEF reference standard and 0.9997 for the CEF sample. Precision gave values of 1.86% (intra-day), 0.84% (inter-day), and 0.71% (between analysts). The accuracy of the method was demonstrated by a recovery test, in which the mean value obtained was 99.92%. Robustness was verified by changing the volume of culture medium, the brand of culture medium, the incubation time in the shaker, and the wavelength. The potency of CEF in the samples of lyophilized powder for injectable solution was 102.46%. Conclusion: The proposed turbidimetric microbiological method for the quantification of CEF in lyophilized powder for injectable solution proved to be fast, linear, precise, accurate, and robust, in accordance with all the requirements, and can be used in routine quality-control analysis in the pharmaceutical industry as an option for microbiological analysis.
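
The validation statistics quoted above (correlation coefficient for linearity, %RSD for precision, % recovery for accuracy) can be sketched as follows; the calibration data here are hypothetical, not the study's measurements:

```python
import numpy as np

# Hypothetical calibration data (absorbance vs. CEF concentration);
# the real validation figures are those reported in the abstract.
conc = np.array([1.0, 2.0, 3.0, 4.0, 5.0])           # ug/mL (assumed range)
absorbance = np.array([0.101, 0.198, 0.305, 0.399, 0.502])

# Linearity: Pearson correlation coefficient of the calibration curve.
r = np.corrcoef(conc, absorbance)[0, 1]

# Precision: relative standard deviation (%RSD) of replicate readings.
replicates = np.array([0.301, 0.305, 0.299, 0.303])   # hypothetical replicates
rsd = 100 * replicates.std(ddof=1) / replicates.mean()

# Accuracy: % recovery of a spiked amount (found / added * 100).
found, added = 4.99, 5.00
recovery = 100 * found / added

print(round(r, 4), round(rsd, 2), round(recovery, 2))
```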

Keywords: cefepime hydrochloride, quality control, turbidimetric assay, validation

Procedia PDF Downloads 361
200 Human Beta Defensin 1 as Potential Antimycobacterial Agent against Active and Dormant Tubercle Bacilli

Authors: Richa Sharma, Uma Nahar, Sadhna Sharma, Indu Verma

Abstract:

Counteracting the deadly pathogen Mycobacterium tuberculosis (M. tb) effectively is still a global challenge. Scrutinizing alternative weapons, such as antimicrobial peptides, to strengthen the existing tuberculosis artillery is urgently required. Considering the antimycobacterial potential of Human Beta Defensin 1 (HBD-1) alongside isoniazid, the present study was designed to explore the ability of HBD-1 to act against active and dormant M. tb. HBD-1 was screened in silico using antimicrobial peptide prediction servers to identify its short antimicrobial motif. The activity of both HBD-1 and its selected motif (Pep B) was determined at different concentrations against actively growing M. tb in vitro and ex vivo in monocyte-derived macrophages (MDMs). Log-phase M. tb was grown along with HBD-1 and Pep B for 7 days. M. tb-infected MDMs were treated with HBD-1 and Pep B for 72 hours. Thereafter, colony forming unit (CFU) enumeration was performed to determine the activity of both peptides against actively growing in vitro and intracellular M. tb. The dormant M. tb models were prepared following two approaches and treated with different concentrations of HBD-1 and Pep B. Firstly, 20-22 day old M. tb H37Rv was grown in potassium-deficient Sauton medium for 35 days. The presence of dormant bacilli was confirmed by Nile red staining. Dormant bacilli were further treated with rifampicin, isoniazid, HBD-1, and its motif for 7 days. The effect of both peptides on latent bacilli was assessed by CFU and most probable number (MPN) enumeration. Secondly, a human PBMC granuloma model was prepared by infecting PBMCs seeded on a collagen matrix with M. tb (MOI 0.1) for 10 days. Histopathology was done to confirm granuloma formation. The granuloma thus formed was incubated for 72 hours with rifampicin, HBD-1, and Pep B individually. The difference in bacillary load was determined by CFU enumeration.
The minimum inhibitory concentrations of HBD-1 and Pep B restricting mycobacterial growth in vitro were 2 μg/ml and 20 μg/ml, respectively. The intracellular mycobacterial load was reduced significantly by HBD-1 and Pep B at 1 μg/ml and 5 μg/ml, respectively. A Nile red-positive bacterial population, a high MPN/low CFU count, and tolerance to isoniazid confirmed the establishment of the potassium deficiency-based dormancy model. HBD-1 (8 μg/ml) showed 96% and 99% killing, and Pep B (40 μg/ml) lowered the dormant bacillary load by 68.89% and 92.49%, based on CFU and MPN enumeration respectively. Further, H&E-stained aggregates of macrophages and lymphocytes, acid-fast bacilli surrounded by cellular aggregates, and rifampicin resistance indicated the establishment of the human granuloma dormancy model. HBD-1 (8 μg/ml) led to an 81.3% reduction in CFU, whereas its motif Pep B (40 μg/ml) showed only a 54.66% decrease in bacterial load inside the granuloma. Thus, the present study indicates that HBD-1 and its motif are effective antimicrobial agents against both actively growing and dormant M. tb. They should be further explored to tap their potential in designing a powerful weapon for combating tuberculosis.

Keywords: antimicrobial peptides, dormant, human beta defensin 1, tuberculosis

Procedia PDF Downloads 263
199 Spectroscopy and Electron Microscopy for the Characterization of CdSxSe1-x Quantum Dots in a Glass Matrix

Authors: C. Fornacelli, P. Colomban, E. Mugnaioli, I. Memmi Turbanti

Abstract:

When semiconductor particles are reduced in scale to nanometer dimensions, their optical and electro-optical properties differ strongly from those of bulk crystals of the same composition. Since sampling is often not allowed for cultural heritage artefacts, the potential of two non-invasive techniques, Raman spectroscopy and Fiber Optic Reflectance Spectroscopy (FORS), has been investigated, and the results of the analysis of some original glasses of different colours (from yellow to orange and deep red) and periods (from the second decade of the 20th century to the present day) are reported in this study. In order to evaluate the applicability of non-invasive techniques to the investigation of the structure and distribution of nanoparticles dispersed in a glass matrix, Scanning Electron Microscopy (SEM) with energy-dispersive spectroscopy (EDS) mapping, together with Transmission Electron Microscopy (TEM) and Electron Diffraction Tomography (EDT), have also been used. Raman spectroscopy allows a fast and non-destructive measurement of quantum dot composition and size, through the evaluation of the frequencies and the broadening/asymmetry of the LO phonon bands, respectively, though the important role of the compressive strain arising from the glass matrix and the possible diffusion of zinc from the matrix to the nanocrystals should be taken into account when considering the optical-phonon frequency values. The incorporation of Zn has been inferred from an upward shift of the LO band related to the most abundant anion (S or Se), while the role of surface phonons, as well as of confinement-induced scattering by phonons with non-zero wavevectors, in the broadening of the Raman peaks has been verified. The optical band gap varies from 2.42 eV (pure CdS) to 1.70 eV (CdSe).
For the compositional range 0.2≤x≤0.5, the presence of two absorption edges has been related to the contributions of both pure CdS and the CdSxSe1-x solid solution; this particular feature is probably due to the presence of unaltered cubic zinc-blende CdS that does not take part in the formation of the solid solution, which occurs only between hexagonal CdS and CdSe. Moreover, the band-edge tailing originating from the disorder due to the formation of weak bonds, characterized by the Urbach edge energy, has been studied and, together with the FWHM of the Raman signal, has been taken as a good parameter for evaluating the degree of topological disorder. SEM-EDS mapping showed a peculiar distribution of the major constituents of the glass matrix (fluxes and stabilizers), especially in those samples for which a layered structure had been inferred from the spectroscopic study. Finally, TEM-EDS and EDT were used to obtain high-resolution information about the nanocrystals (NCs) and the heterogeneous glass layers. The presence of ZnO NCs (< 4 nm) dispersed in the matrix was verified for most of the samples, while, for those samples where disorder due to a more complex size and/or composition distribution of the NCs had been assumed, TEM clearly confirmed most of the assumptions made by the spectroscopic techniques.
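
As a rough illustration of the composition dependence quoted above, a linear (Vegard-like) interpolation between the two end-member band gaps can be sketched; real CdSxSe1-x alloys show some band-gap bowing, so this is only a first approximation:

```python
# End-member band gaps from the abstract (eV).
EG_CDS, EG_CDSE = 2.42, 1.70

def band_gap(x):
    """Approximate band gap (eV) for sulfur fraction x in CdS_x Se_(1-x),
    assuming linear interpolation (no bowing term)."""
    return EG_CDSE + x * (EG_CDS - EG_CDSE)

def edge_nm(eg_ev):
    """Wavelength of the corresponding absorption edge, lambda = hc / Eg;
    1239.84 nm*eV is hc in convenient units."""
    return 1239.84 / eg_ev

print(round(band_gap(0.5), 2))  # 2.06 eV for the x = 0.5 alloy
```

The shift of the absorption edge with x is what moves the glass colour from deep red (Se-rich) toward yellow (S-rich).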

Keywords: CdSxSe1-x, EDT, glass, spectroscopy, TEM-EDS

Procedia PDF Downloads 299
198 Tuberculosis (TB) and Lung Cancer

Authors: Asghar Arif

Abstract:

Lung cancer is recognized as one of the most common cancers, causing an annual mortality of about 1.2 million people worldwide. It is the most prevalent cancer in men and the third most common cancer among women (after breast and digestive cancers). Recent evidence points to the inflammatory process as one of the potential factors in cancer. Tuberculosis (TB), pneumonia, and chronic bronchitis are among the most important inflammation-inducing factors in the lungs, among which TB has the most profound role in the emergence of cancer. TB is one of the important mortality factors throughout the world, and 205,000 deaths are reported annually due to this disease. Chronic inflammation and fibrosis due to TB can induce genetic mutations and alterations. The lung parenchyma is involved in both TB and lung cancer, and the continuous cough in lung cancer, morphological vascular variations, lymphocytosis processes, and the generation of immune-system mediators such as interleukins are all among the factors leading to the hypothesis of a role of TB in lung cancer. Some reports have shown that the induction of necrosis and apoptosis or TB reactivation, especially in patients with immune deficiency, may increase IL-17 and TNF-α, which will either decrease P53 activity or increase the expression of Bcl-2, decrease Bax-T, and inhibit caspase-3 expression by decreasing the expression of mitochondrial cytochrome oxidase. It has also been indicated that following injection of the BCG vaccine, the host immune system is reinforced; in particular, the levels of interferon gamma, nitric oxide, and interleukin-2 are increased.
Therefore, CD4+ lymphocyte function will be improved, and the person will be immune against cancer. Numerous prospective studies have been conducted on the role of TB in lung cancer, and it seems that this disease is a factor in that particular cancer. One of the main challenges of lung cancer is its correct and timely diagnosis. Unfortunately, the clinical symptoms (such as continuous cough, hemoptysis, weight loss, fever, chest pain, dyspnea, and loss of appetite) and radiological images are similar in TB and lung cancer. Therefore, anti-TB drugs are routinely prescribed in countries with a high prevalence of TB, like Pakistan. Given the similarity in clinical symptoms and radiological findings, proper differential diagnosis is necessary between lung cancer, TB, and respiratory infections due to nontuberculous mycobacteria (NTM). Some apparent drug-resistant TB cases are, in fact, lung cancer or NTM lung infections. Acid-fast staining and histological study of sputum and bronchial washings, culture, and polymerase chain reaction for TB are among the most important tools for the differential diagnosis of these diseases. In brief, TB is assumed to be one of the risk factors for cancer. Numerous studies have been conducted in this regard throughout the world, and a significant relationship has been observed between previous TB infection and lung cancer. However, to prove this hypothesis, further and more extensive studies are required. In addition, as the clinical symptoms and radiological findings of TB, lung cancer, and NTM lung infections are similar, the latter two can be misdiagnosed as TB.

Keywords: TB and lung cancer, TB patients, TB survivors, TB and HIV/AIDS

Procedia PDF Downloads 72
197 Dynamic EEG Desynchronization in Response to Vicarious Pain

Authors: Justin Durham, Chanda Rooney, Robert Mather, Mickie Vanhoy

Abstract:

The psychological construct of empathy is to understand a person's cognitive perspective and experience the other person's emotional state. Deciphering emotional states is conducive to interpreting vicarious pain. Observing others' physical pain activates neural networks related to the actual experience of pain itself. This study addresses empathy as a nonlinear dynamic process of simulation by which individuals understand the mental states of others and experience vicarious pain, exhibiting self-organized criticality. Such criticality follows from a combination of neural networks with an excitatory feedback loop generating bistability to resonate permutated empathy. Cortical networks exhibit diverse patterns of activity, including oscillations, synchrony, and waves; however, the temporal dynamics of the neurophysiological activity underlying empathic processes remain poorly understood. Mu rhythms are EEG oscillations with dominant frequencies of 8-13 Hz that become synchronized when the body is relaxed with eyes open and the sensorimotor system is idle; thus, mu-rhythm synchrony is expected to be highest in baseline conditions. When the sensorimotor system is activated, either by performing or by simulating action, mu rhythms become suppressed (desynchronized), and thus should be suppressed while participants observe video clips of painful injuries if previous research on mirror-system activation holds. Twelve undergraduates contributed EEG data and survey responses on empathy and psychopathy scales, in addition to watching consecutive video clips of sports injuries. Participants watched a blank, black image on a computer monitor before and after observing a video of consecutive sports-injury incidents. Each video condition lasted five minutes. A BIOPAC MP150 recorded EEG signals from sensorimotor and thalamocortical regions related to a complex neural network called the 'pain matrix'.
Physical and social pain activate this network, producing vicarious pain responses during empathic processing. Five single-electrode EEG sites over sensorimotor regions recorded electrical activity in microvolts (μV) to monitor mu rhythms. EEG signals were sampled at a rate of 200 Hz. Mu-rhythm desynchronization was measured in the 8-13 Hz band at electrode sites F3 and F4. Each participant's mu-rhythm data were analyzed via fast Fourier transformation (FFT) and multifractal time-series analysis.
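
The 8-13 Hz band-power step of such an FFT analysis can be sketched as follows, using a synthetic 200 Hz trace in place of real EEG data:

```python
import numpy as np

FS = 200.0                      # Hz, sampling rate from the abstract
t = np.arange(0, 4.0, 1 / FS)   # 4 s of synthetic data

# Synthetic signal: a 10 Hz mu-band component plus broadband noise
# (illustrative stand-in for a real EEG recording).
rng = np.random.default_rng(0)
eeg = 5.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1.0, t.size)

# One-sided FFT power spectrum.
freqs = np.fft.rfftfreq(eeg.size, d=1 / FS)
power = np.abs(np.fft.rfft(eeg)) ** 2

# Fraction of total spectral power falling in the 8-13 Hz mu band;
# a drop in this quantity between conditions indicates desynchronization.
mu_band = (freqs >= 8) & (freqs <= 13)
mu_power = power[mu_band].sum() / power.sum()
print(round(mu_power, 3))
```

Comparing `mu_power` between the baseline and injury-video conditions is the desynchronization contrast described above.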

Keywords: desynchronization, dynamical systems theory, electroencephalography (EEG), empathy, multifractal time series analysis, mu waveform, neurophysiology, pain simulation, social cognition

Procedia PDF Downloads 283
196 A Machine Learning Approach for Assessment of Tremor: A Neurological Movement Disorder

Authors: Rajesh Ranjan, Marimuthu Palaniswami, A. A. Hashmi

Abstract:

With the changing lifestyle and environment around us, the prevalence of critical and incurable diseases has proliferated. One such group is the neurological disorders, which are rampant among the elderly population and increasing at an alarming rate. Most patients with a neurological disorder suffer from some movement disorder affecting parts of their body. Tremor is the most common movement disorder prevalent in such patients; it affects the upper or lower limbs or both extremities. Tremor symptoms are commonly visible in Parkinson's disease patients, but tremor can also occur on its own (essential tremor). Patients suffering from tremor face enormous difficulty in performing daily activities and always need a caretaker for assistance. In the clinic, tremor is assessed through manual clinical rating tasks such as the Unified Parkinson's Disease Rating Scale, which is time-consuming and cumbersome. Neurologists have also reported difficulty in differentiating a Parkinsonian tremor from a pure (essential) tremor, which is essential for providing an accurate diagnosis. Therefore, there is a need to develop a monitoring and assistive tool for tremor patients that continuously checks their health condition, coordinating with clinicians and caretakers for early diagnosis and assistance in daily activities. Our research focuses on developing a system for the automatic classification of tremor that can accurately differentiate pure tremor from Parkinsonian tremor using a wearable accelerometer-based device, so that an adequate diagnosis can be provided to the right patient. In this research, a study was conducted in a neurology clinic to assess the upper-wrist movement of patients suffering from pure (essential) tremor and Parkinsonian tremor using a wearable accelerometer-based device.
Four tasks were designed in accordance with the Unified Parkinson's Disease motor rating scale to assess rest, postural, intentional, and action tremor in such patients. Various features, such as time-frequency-domain, wavelet-based, and fast-Fourier-transform-based cross-correlation features, were extracted from the tri-axial signal and used as the input feature vector for different supervised and unsupervised learning tools to quantify tremor severity. A minimum-covariance maximum-correlation energy comparison index was also developed and used as an input feature for various classification tools to distinguish the PT and ET tremor types. An automatic system for the efficient classification of tremor was developed using these feature extraction methods, with the best performance achieved by the K-nearest neighbors and support vector machine classifiers.
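
The final classification step can be illustrated with a toy K-nearest-neighbour rule on a single invented feature (dominant tremor frequency); the abstract's actual feature set and classifiers are far richer:

```python
import numpy as np

# Invented training data: rest tremor in Parkinson's disease (PT) is
# typically ~4-6 Hz and essential tremor (ET) ~6-12 Hz, which is the kind
# of separation that makes frequency-based features useful.
train_x = np.array([4.5, 5.0, 5.5, 7.5, 8.0, 9.0])       # dominant freq, Hz
train_y = np.array(["PT", "PT", "PT", "ET", "ET", "ET"])  # labels

def knn_predict(x, k=1):
    """Majority vote among the k training samples nearest to feature x."""
    idx = np.argsort(np.abs(train_x - x))[:k]
    labels, counts = np.unique(train_y[idx], return_counts=True)
    return labels[np.argmax(counts)]

print(knn_predict(4.8), knn_predict(8.5))  # -> PT ET
```

In the real system the feature vector is multi-dimensional, so the distance is Euclidean over all extracted features rather than over one frequency value.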

Keywords: machine learning approach for neurological disorder assessment, automatic classification of tremor types, feature extraction method for tremor classification, neurological movement disorder, parkinsonian tremor, essential tremor

Procedia PDF Downloads 154
195 Interactivity as a Predictor of Intent to Revisit Sports Apps

Authors: Young Ik Suh, Tywan G. Martin

Abstract:

Sports apps on a smartphone provide up-to-date information and fast, convenient access to live games. The market for sports apps has emerged as the second-fastest-growing app category worldwide. Many sports fans use their smartphones to check the schedules of sporting events, players' positions and bios, and videos and highlights. In recent years, a growing number of scholars and practitioners alike have emphasized the importance of interactivity in sports apps, hypothesizing that interactivity plays a significant role in attracting sports app users and that it is a key component in measuring the success of sports apps. Interactivity in sports apps focuses primarily on two functions: (1) two-way communication and (2) active user control, neither of which was available through traditional mass media and communication technologies. The purpose of this study is therefore to examine whether the interactivity functions of sports apps lead to positive outcomes such as intent to revisit. More specifically, this study investigates how three major interactivity functions (i.e., two-way communication, active user control, and real-time information) influence the attitudes of sports app users and their intent to revisit the apps. The following hypothesis is proposed: the interactivity functions will be positively associated with both attitudes toward sports apps and intent to revisit them. The survey questionnaire includes four parts: (1) an interactivity scale, (2) an attitude scale, (3) a behavioral intention scale, and (4) demographic questions. Data are to be collected from users of the ESPN apps. To examine the relationships among the observed and latent variables and determine the reliability and validity of the constructs, confirmatory factor analysis (CFA) is conducted. Structural equation modeling (SEM) is utilized to test the hypothesized relationships among the constructs.
Additionally, this study compares the proposed interactivity model with a rival model to identify the role of attitude as a mediating factor. The findings provide several theoretical and practical contributions, extending the research and literature on the role of interactivity functions in sports apps and sports-media consumption behavior. Specifically, this study may improve the theoretical understanding of whether interactivity functions influence user attitudes and intent to revisit sports apps, and it identifies which dimensions of interactivity are most important to sports app users. From a practitioner's perspective, the findings have significant implications: entrepreneurs and investors in the sport industry need to recognize that high-resolution photos, live streams, and up-to-date stats are in the sports app, right at sports fans' fingertips. The results imply that sport practitioners may need to develop mobile sports apps that offer greater interactivity functions to attract sports fans.

Keywords: interactivity, two-way communication, active user control, real time information, sports apps, attitude, intent to revisit

Procedia PDF Downloads 147
197 Continuous and Discontinuous Modeling of Wellbore Instability in Anisotropic Rocks

Authors: C. Deangeli, P. Obentaku Obenebot, O. Omwanghe

Abstract:

This study focuses on the analysis of wellbore instability in rock masses affected by weakness planes. In such rocks, failure can occur in the rock matrix and/or along the weakness planes, depending on the mud weight gradient. In this case, the simple Kirsch solution coupled with a failure criterion cannot provide a suitable scenario for borehole instabilities. Two different numerical approaches have been used to investigate the onset of local failure at the wall of a borehole. For each approach, the influence of the inclination of the weakness planes has been investigated by considering joint sets at 0°, 35°, and 90° to the horizontal. The first set of models was carried out with FLAC 2D (Fast Lagrangian Analysis of Continua), treating the rock material as a continuous medium, with a Mohr-Coulomb criterion for the rock matrix and the ubiquitous-joint model to account for the presence of the weakness planes. In this model, yield may occur in the solid, along the weak plane, or both, depending on the stress state, the orientation of the weak plane, and the material properties of the solid and the weak plane. The second set of models was performed with PFC2D (Particle Flow Code). This code is based on the discrete element method and treats the rock material as an assembly of grains bonded by cement-like material, with pore spaces. The presence of weakness planes is simulated by degrading the bonds between grains along given directions. In general, the results of the two approaches are in agreement. However, the discrete approach seems to capture more complex phenomena related to local failure, in the form of grain detachment at the wall of the borehole. In fact, the presence of weakness planes in the discontinuous medium leads to local instability along the weak planes even in conditions not predicted by the continuous solution.
In general, slip-failure locations and directions do not follow the conventional wellbore breakout direction but depend upon the internal friction angle and the orientation of the bedding planes. When the weakness planes are at 0° or 90°, the behaviour is similar to that of a continuous rock material, but borehole instability is more severe when the weakness planes are inclined at an angle between 0° and 90° to the horizontal. In conclusion, the results of the numerical simulations show that the prediction of local failure at the wall of the wellbore cannot disregard the presence of weakness planes, and consequently the higher mud weight required for stability at any specific inclination of the joints. Although the discrete approach can only simulate smaller areas, because of the large number of particles required to generate the rock material, it seems to capture more correctly the occurrence of failure at the microscale and, eventually, the propagation of the failed zone to a large portion of rock around the wellbore.
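
For reference, the simple Kirsch solution that the study argues is insufficient for jointed rock gives the hoop stress at the wall of a borehole in an isotropic continuum as follows (the stress values are illustrative, not from the study):

```python
import math

def hoop_stress(sigma_H, sigma_h, p_w, theta_deg):
    """Kirsch tangential stress at the borehole wall (r = a), with theta
    measured from the direction of the maximum horizontal stress sigma_H
    and p_w the wellbore (mud) pressure. All stresses in MPa."""
    th = math.radians(theta_deg)
    return sigma_H + sigma_h - 2 * (sigma_H - sigma_h) * math.cos(2 * th) - p_w

# Assumed far-field stresses and mud pressure (MPa) for illustration:
sH, sh, pw = 30.0, 20.0, 15.0
print(hoop_stress(sH, sh, pw, 90))  # maximum, 3*sH - sh - pw, breakout direction
```

In the ubiquitous-joint and particle-flow models, failure can also occur along the weakness planes, which is why breakouts deviate from the direction this continuum formula predicts.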

Keywords: continuous-discontinuous, numerical modelling, weakness planes, wellbore, FLAC 2D

Procedia PDF Downloads 499
193 Real-Time Monitoring of Complex Multiphase Behavior in a High Pressure and High Temperature Microfluidic Chip

Authors: Renée M. Ripken, Johannes G. E. Gardeniers, Séverine Le Gac

Abstract:

Controlling the multiphase behavior of aqueous biomass mixtures is essential when working in the biomass conversion industry. Here, the vapor/liquid equilibria (VLE) of ethylene glycol, glycerol, and xylitol were studied for temperatures between 25 and 200 °C and pressures of 1 to 10 bar. These experiments were performed in a microfluidic platform, which exhibits excellent heat transfer properties, so that equilibrium is reached quickly. Firstly, the saturated vapor pressure as a function of temperature and substrate mole fraction was calculated using AspenPlus with a Redlich-Kwong-Soave Boston-Mathias (RKS-BM) model. Secondly, we developed a high-pressure and high-temperature microfluidic set-up for experimental validation. Furthermore, we studied the multiphase flow pattern that occurs after the saturation temperature is reached. A glass-silicon microfluidic device containing a 0.4 or 0.2 m long meandering channel with a depth of 250 μm and a width of 250 or 500 μm was fabricated using standard microfabrication techniques. This device was placed in a dedicated chip-holder, which includes a ceramic heater on the silicon side. The temperature was controlled and monitored by three K-type thermocouples: two were located between the heater and the silicon substrate, one to set the temperature and one to measure it, and the third was placed in a 300 μm wide and 450 μm deep groove on the glass side to determine the heat loss over the silicon. An adjustable back pressure regulator and a pressure meter were added to control and evaluate the pressure during the experiment. Aqueous biomass solutions (10 wt%) were pumped at a flow rate of 10 μL/min using a syringe pump, and the temperature was slowly increased until the theoretical saturation temperature for the pre-set pressure was reached. First and surprisingly, a significant difference was observed between our theoretical saturation temperature and the experimental results.
The experimental values were tens of degrees higher than the calculated ones and, in some cases, saturation could not be achieved. This discrepancy can be explained in two ways. Firstly, the pressure in the microchannel is locally higher due to both the thermal expansion of the liquid and the Laplace pressure that has to be overcome before a gas bubble can form. Secondly, superheating effects are likely to be present. Next, once saturation was reached, the flow pattern of the gas/liquid multiphase system was recorded. In our device, the point of nucleation can be controlled by taking advantage of the pressure drop across the channel and the accurate control of the temperature. Specifically, a higher temperature resulted in nucleation further upstream in the channel. As the void fraction increases downstream, the flow regime changes along the channel from bubbly flow to Taylor flow and later to annular flow; all three flow regimes were observed simultaneously. The findings of this study are key for the development and optimization of a microreactor for hydrogen production from biomass.
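The Laplace-pressure argument can be illustrated with a minimal sketch. It assumes the surface tension of water near 100 °C and compares a bubble nucleus spanning the 250 μm channel with a micron-scale nucleus; the numbers are order-of-magnitude estimates, not measurements from the study.

```python
# Young-Laplace pressure a spherical bubble nucleus must overcome before growing:
#   delta_p = 2 * sigma / r   (sigma: surface tension, r: bubble radius)
def laplace_pressure(surface_tension, radius):
    return 2.0 * surface_tension / radius

SIGMA_WATER_100C = 0.0589  # N/m, surface tension of water near 100 degC (assumed value)

for r in (125e-6, 1e-6):  # channel half-width vs. a micron-scale nucleus
    dp = laplace_pressure(SIGMA_WATER_100C, r)
    print(f"r = {r:.0e} m -> delta_p = {dp / 1e5:.3f} bar")
```

A micron-scale nucleus thus requires on the order of an extra bar of pressure, which shifts the effective saturation temperature upward, in qualitative agreement with the observed superheating.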

Keywords: biomass conversion, high pressure and high temperature microfluidics, multiphase, phase diagrams, superheating

Procedia PDF Downloads 217
192 To Assess the Knowledge, Awareness and Factors Associated With Diabetes Mellitus in Buea, Cameroon

Authors: Franck Acho

Abstract:

Diabetes mellitus is a chronic metabolic disorder and a fast-growing global problem with huge social, health, and economic consequences. It is estimated that in 2010 there were globally 285 million people (approximately 6.4% of the adult population) suffering from this disease. This number is estimated to increase to 430 million in the absence of better control or cure. An ageing population and obesity are two main reasons for the increase. Diabetes mellitus is a chronic heterogeneous metabolic disorder with a complex pathogenesis. It is characterized by elevated blood glucose levels, or hyperglycemia, which results from abnormalities in either insulin secretion or insulin action, or both. Hyperglycemia manifests in various forms with a varied presentation and results in carbohydrate, fat, and protein metabolic dysfunctions. Long-term hyperglycemia often leads to various microvascular and macrovascular diabetic complications, which are mainly responsible for diabetes-associated morbidity and mortality. Hyperglycemia also serves as the primary biomarker for the diagnosis of diabetes. Furthermore, it has been shown that almost 50% of putative diabetics are not diagnosed until 10 years after onset of the disease; hence the real prevalence of global diabetes must be considerably higher than reported. This study was conducted to assess the level of knowledge, awareness and risk factors among people living with diabetes mellitus in the locality. A month before the screening, it was advertised in selected churches, on the local community radio, and on relevant WhatsApp groups. A general health talk was delivered by the head of the screening unit to all attendees, who were educated on the procedure to be carried out, with its benefits and any possible discomforts, after which each attendee's consent was obtained.
Participants selected for the screening were evaluated for any indicators of diabetes by taking an adequate history and performing physical examinations, looking for excessive thirst, increased urination, tiredness, hunger, unexplained weight loss, irritability or other mood changes, blurry vision, slow-healing sores, and frequent infections, such as gum, skin and vaginal infections. Of the 94 participants, 78 were female and 16 were male, and 70.21% of participants with diabetes were between the ages of 60-69 years. The study found that only 10.63% of respondents declared a good level of knowledge of diabetes. Of the 3 symptoms of diabetes analyzed in this study, high blood sugar (58.5%) and chronic fatigue (36.17%) were the most recognized. Of the 4 diabetes risk factors analyzed, obesity (21.27%) and unhealthy diet (60.63%) were the most recognized, while only 10.6% of respondents indicated tobacco use. The diabetic foot was the most recognized diabetes complication (50.57%), but some of the participants indicated vision problems (30.8%) or cardiovascular diseases (20.21%) as diabetes complications.
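Most of the reported percentages are consistent with integer head counts out of the 94 screened participants, as the quick sketch below shows. The counts themselves are inferred for illustration and were not stated in the abstract.

```python
# Back-of-the-envelope check of the reported percentages, assuming integer
# head counts out of 94 screened participants (counts inferred, not reported).
TOTAL = 94

def pct(count, total=TOTAL):
    return round(100.0 * count / total, 2)

print(pct(66))  # 70.21 -> participants aged 60-69
print(pct(55))  # 58.51 -> recognized high blood sugar (reported as 58.5)
print(pct(34))  # 36.17 -> recognized chronic fatigue
print(pct(19))  # 20.21 -> cardiovascular diseases as complication
```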

Keywords: diabetes mellitus, non-communicable disease, general health talk, hyperglycemia

Procedia PDF Downloads 56
191 Magnetron Sputtered Thin-Film Catalysts with Low Noble Metal Content for Proton Exchange Membrane Water Electrolysis

Authors: Peter Kus, Anna Ostroverkh, Yurii Yakovlev, Yevheniia Lobko, Roman Fiala, Ivan Khalakhan, Vladimir Matolin

Abstract:

Hydrogen economy is a concept of a low-emission society that harvests most of its energy from renewable sources (e.g., wind and solar) and, in case of overproduction, electrochemically turns the excess into hydrogen, which serves as an energy carrier. Proton exchange membrane water electrolyzers (PEMWE) are the backbone of this concept. By fast-response electricity-to-hydrogen conversion, PEMWEs will not only stabilize the electrical grid but also provide high-purity hydrogen for a variety of fuel cell powered devices, ranging from consumer electronics to vehicles. Wider commercialization of PEMWE technology is, however, hindered by the high prices of the noble metals necessary for catalyzing the redox reactions within the cell: platinum for the hydrogen evolution reaction (HER) on the cathode, and iridium for the oxygen evolution reaction (OER) on the anode. A possible way to lower the loading of Pt and Ir is to use conductive high-surface nanostructures as catalyst supports in conjunction with thin-film catalyst deposition. The presented study discusses an unconventional technique of membrane electrode assembly (MEA) preparation. Noble metal catalysts (Pt and Ir) were magnetron sputtered in very low loadings onto the surface of porous sublayers (located on the gas diffusion layer or directly on the membrane), forming a localized three-phase boundary. An ultrasonically sprayed, corrosion-resistant TiC-based sublayer was used as the support material on the anode, whereas magnetron sputtered, nanostructured, etched nitrogenated carbon (CNx) served the same role on the cathode. By using this configuration, we were able to significantly decrease the amount of noble metals (to a thickness of just tens of nanometers) while keeping the performance comparable to that of average state-of-the-art catalysts.
Complex characterization of the prepared supported catalysts includes in-cell performance and durability tests and electrochemical impedance spectroscopy (EIS), as well as scanning electron microscopy (SEM) imaging and X-ray photoelectron spectroscopy (XPS) analysis. Our research proves that magnetron sputtering is a suitable method for thin-film deposition of electrocatalysts. The tested set-up of thin-film supported anode and cathode catalysts, with a combined loading of just 120 µg·cm⁻², yields remarkable values of specific current. The described approach of thin-film low-loading catalyst deposition might be relevant when noble metal reduction is the topmost priority.
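The "tens of nanometers" figure can be cross-checked with a simple loading-to-thickness conversion. This is only a sketch: it assumes a dense, flat film of bulk density, whereas real sputtered layers on porous supports deviate from this.

```python
# Equivalent-film-thickness estimate: t = loading / density.
DENSITY = {"Pt": 21.45, "Ir": 22.56}  # g/cm^3, bulk densities of the metals

def thickness_nm(loading_ug_per_cm2, metal):
    grams_per_cm2 = loading_ug_per_cm2 * 1e-6
    return grams_per_cm2 / DENSITY[metal] * 1e7  # cm -> nm

# The combined 120 ug/cm2 loading, if it were all platinum:
print(f"{thickness_nm(120, 'Pt'):.1f} nm")  # ~55.9 nm, i.e. tens of nanometers
```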

Keywords: hydrogen economy, low-loading catalyst, magnetron sputtering, proton exchange membrane water electrolyzer

Procedia PDF Downloads 163
190 Microalgae Technology for Nutraceuticals

Authors: Weixing Tan

Abstract:

Production of nutraceuticals from microalgae—a virtually untapped natural phyto-based source of which there are 200,000 to 1,000,000 species—offers a sustainable and healthy alternative to conventionally sourced nutraceuticals for the market. Microalgae can be grown organically, using only natural sunlight, water and nutrients, at an extremely fast rate, e.g., 10-100 times more efficiently than crops or trees. However, the commercial success of microalgae products at scale remains limited, largely due to the lack of economically viable technologies. There are two major microalgae production systems, or technologies, currently available: 1) the open system, as represented by open pond technology, and 2) the closed system, such as photobioreactors (PBR). Each carries its own unique features and challenges. Although an open system requires a lower initial capital investment than a PBR, it carries many unavoidable drawbacks: for example, much lower productivity, difficulty in contamination control and cleaning, inconsistent product quality, inconvenience in automation, restriction in site selection, and unsuitability for cold areas – all directly linked to the openness of the system and its flat, ground-level design. On the other hand, a PBR system has characteristics almost entirely opposite to the open system: higher initial capital investment, better productivity, better contamination and environmental control, wider suitability in different climates, ease of automation, higher and more consistent product quality, higher energy demand (particularly if using artificial lights), and variable operational expenses if not automated. Although closed systems like PBRs are not yet highly competitive in the current nutraceutical supply market, technological advances can be made, in particular for PBR technology, to narrow the gap significantly.
One example is a readily scalable P2P Microalgae PBR Technology at Grande Prairie Regional College, Canada, developed over 11 years with return on investment (ROI) considered for key production processes. The P2P PBR system is approaching economic viability at a pre-commercial stage thanks to five ROI-integrated major components: (1) optimum use of free sunlight through attenuation (patented); (2) simple, economical, and chemical-free harvesting (patent ready to file); (3) an optimum pH- and nutrient-balanced culture medium (published); (4) a reliable water and nutrient recycling system (trade secret); and (5) a low-cost automated system design (trade secret). These innovations have allowed P2P Microalgae Technology to increase daily yield to 106 g/m²/day of Chlorella vulgaris, which contains 50% protein and 2-3% omega-3. Based on current market prices and scale-up factors, the P2P PBR system is a promising microalgae technology for a market-competitive nutraceutical supply.
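The reported yield and composition figures imply the following per-area outputs. This is simple arithmetic on the stated values, not additional data from the study.

```python
# Implied per-area outputs from the reported figures:
# 106 g/m2/day dry biomass, 50% protein, 2-3% omega-3.
YIELD_G_M2_DAY = 106.0

protein = YIELD_G_M2_DAY * 0.50
omega3_low, omega3_high = YIELD_G_M2_DAY * 0.02, YIELD_G_M2_DAY * 0.03
annual_kg = YIELD_G_M2_DAY * 365 / 1000.0

print(protein)                       # 53.0 g protein / m2 / day
print(omega3_low, omega3_high)       # 2.12 to 3.18 g omega-3 / m2 / day
print(round(annual_kg, 2))           # 38.69 kg dry biomass / m2 / year
```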

Keywords: microalgae technology, nutraceuticals, open pond, photobioreactor PBR, return on investment ROI, technological advances

Procedia PDF Downloads 157
189 Analyzing the Mission Drift of Social Business: Case Study of Restaurant Providing Professional Training to At-Risk Youth

Authors: G. Yanay-Ventura, H. Desivilya Syna, K. Michael

Abstract:

Social businesses are based on the idea that an enterprise can be established for the sake of profit and, at the same time, with the aim of fulfilling social goals. Yet, the question of how these goals can be integrated in practice to derive parallel benefit in both realms still needs to be examined. Particularly notable in this context is the ‘governance challenge’ of social businesses, meaning the danger of mission drift from the social goal in the pursuit of good business. This study is based on an evaluation study of a social business that operates as a restaurant providing professional training to at-risk youth. The evaluation was based on the collection of a variety of data through interviews with stakeholders in the enterprise (directors and managers, business partners, social partners, and position holders in the restaurant and the social enterprise), a focus group consisting of the youth receiving the professional training, observations of the restaurant’s operation, and analysis of the social enterprise’s primary documents. The evaluation highlighted significant strengths of the social enterprise, including relatively fast achievement of business sustainability, effective management of the restaurant, stable employment of the restaurant staff, and effective management of the social project. The social enterprise and business management have both enjoyed positive evaluations from a variety of stakeholders. Clearly, the restaurant was deemed by all a promising young business. However, the social project suffered from a 90% dropout rate among the youth entering its ranks, extreme monthly fluctuation in the number of youths participating, and a distinct minority of youth who succeeded in completing their training period.
Possible explanations of the high dropout rate included the small number of cooks, which impeded the effectiveness of the training process and the provision of advanced cooking skills; lack of clarity regarding the essence and the elements of the training; and lack of a meaningful peer group for the youth engaged in the program. Paradoxically, despite the stakeholders’ great appreciation for the social enterprise, the challenge of governability was also formidable, revealing a tangible risk of mission drift in the reduction of the social enterprise’s target population and a breach of the commitment made to the youth with regard to practical training. The risk of mission drift emerged as a hidden and evasive issue for the stakeholders, who revealed a deep appreciation for the management and the outcomes of the social enterprise. The challenge of integration, therefore, requires an in-depth examination of how to maintain a successful business without hindering the achievement of the social goal. The study concludes that clear conceptualization of the training process and its aims, increased cooks’ participation in the social project, and novel conceptions with regard to the evaluation of success could serve to benefit the youth and impede mission drift.

Keywords: evaluation study, management, mission drift, social business

Procedia PDF Downloads 112
188 Distribution and Diversity of Pyrenocarpous Lichens in India with Special Reference to Forest Health

Authors: Gaurav Kumar Mishra, Sanjeeva Nayaka, Dalip Kumar Upreti

Abstract:

Nature exhibits a number of unique plants that can be used as indicators of the environmental condition of a particular place. Lichens are unique organisms with the ability to absorb not only organic and inorganic substances, including metals, but also radioactive nuclides present in the environment. In the present study, pyrenocarpous lichens are used as indicators of good forest health in a particular place. Pyrenocarpous lichens are simple, crust-forming lichens with black dot-like perithecia, and have few characters for their taxonomical segregation compared with their foliose and fruticose brethren. The colour and nature of the thallus and the presence or absence of a hypothallus are among the few thallus characters used to segregate pyrenocarpous taxa. The fruiting bodies of pyrenolichens, i.e., the ascocarps, are perithecia. The perithecia and the contents found within them possess many important criteria for the segregation of pyrenocarpous lichen taxa. Ascocarp morphology, ascocarp arrangement, the perithecial wall, ascocarp shape and colour, ostiole shape and position, ostiole colour, and ascocarp anatomy, including the type of paraphyses, asci shape and size, ascospore septation, ascospore wall and periphyses, are the valuable characters used for segregation of different pyrenocarpous lichen taxa. India is represented by the occurrence of 350 species belonging to 44 genera and eleven families. Among the different genera, Pyrenula is dominant with 82 species, followed by Porina with 70 species. Recently, the systematics of the pyrenocarpous lichens has been revised by American and European lichenologists using phylogenetic methods. Still, the taxonomy of pyrenocarpous lichens is in flux, and the information generated after the completion of this study will play a vital role in the settlement of the taxonomy of this peculiar group of lichens worldwide. The Indian Himalayan region exhibits a rich diversity of pyrenocarpous lichens.
The western Himalayan region has a luxuriance of pyrenocarpous lichens due to its unique topography and climatic conditions, while the eastern Himalayan region has a rich diversity of pyrenocarpous lichens due to its warmer and moister climate. The moist and warm climate in the eastern Himalayan region supports forests dominated by evergreen tree vegetation. Pyrenocarpous lichen communities are good indicators of young and regenerating forest types. The rich diversity of lichens clearly indicates that most of the forests within the eastern Himalayan region are in good health. The fast pace of urbanization and other developmental activities will definitely have adverse effects on the diversity and distribution of pyrenocarpous lichens in different forest types, and the present distribution pattern will act as baseline data for carrying out future biomonitoring studies in the area.

Keywords: lichen diversity, indicator species, environmental factors, pyrenocarpous

Procedia PDF Downloads 147
187 Multi-Label Approach to Facilitate Test Automation Based on Historical Data

Authors: Warda Khan, Remo Lachmann, Adarsh S. Garakahally

Abstract:

The increasing complexity of software and its applicability in a wide range of industries, e.g., automotive, calls for enhanced quality assurance techniques. Test automation is one option to tackle the prevailing challenges by supporting test engineers with fast, parallel, and repetitive test executions. A high degree of test automation allows for a shift from mundane (manual) testing tasks to a more analytical assessment of the software under test. However, a high initial investment of test resources is required to establish test automation, which is, in most cases, a limitation given the time constraints provided for quality assurance of complex software systems. Hence, computer-aided creation of automated test cases is crucial to increase the benefit of test automation. This paper proposes the application of machine learning to the generation of automated test cases. It is based on supervised learning to analyze test specifications and existing test implementations. The analysis facilitates the identification of patterns between test steps and their implementation with test automation components. For the test case generation, this approach exploits historical data of test automation projects. The identified patterns are the foundation to predict the implementation of unknown test case specifications. With this support, a test engineer only has to review and parameterize the test automation components instead of writing them manually, resulting in a significant time reduction for establishing test automation. Compared to other generation approaches, this ML-based solution can handle different writing styles, authors, application domains, and even languages. Furthermore, test automation tools require expert knowledge in the form of programming skills, whereas this approach only requires historical data to generate test cases. The proposed solution is evaluated using various multi-label evaluation criteria (EC) and two small-sized real-world systems.
The most prominent EC is ‘Subset Accuracy’. The promising results show an accuracy of at least 86% for test cases where a 1:1 relationship (multi-class) between test step specification and test automation component exists. For complex multi-label problems, i.e., where one test step can be implemented by several components, the prediction accuracy is still 60%, which is better than the current state-of-the-art results. The prediction quality is expected to increase for larger systems with respective historical data. Consequently, this technique facilitates the time reduction for establishing test automation and is independent of the application domain and project. As a work in progress, the next steps are to investigate incremental and active learning as additions to increase the usability of this approach, e.g., in case labelled historical data is scarce.
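To make the ‘Subset Accuracy’ criterion concrete, the sketch below scores a multi-label prediction: a test step only counts as correct if the predicted set of automation components matches the specified set exactly. The component labels are hypothetical; the study's actual data and label vocabulary are not shown in the abstract.

```python
def subset_accuracy(y_true, y_pred):
    """Fraction of samples whose predicted label set matches the true set exactly."""
    assert len(y_true) == len(y_pred)
    exact = sum(1 for t, p in zip(y_true, y_pred) if set(t) == set(p))
    return exact / len(y_true)

# Hypothetical component labels per test step (illustrative only)
spec      = [{"click"}, {"click", "assert"}, {"read_signal"}]
predicted = [{"click"}, {"click"},           {"read_signal"}]
print(subset_accuracy(spec, predicted))  # 2 of 3 steps exactly right
```

Note how strict the criterion is: the second step is penalized even though one of its two labels was predicted correctly, which is why multi-label scores (60%) sit well below the multi-class ones (86%).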

Keywords: machine learning, multi-class, multi-label, supervised learning, test automation

Procedia PDF Downloads 132
186 Integrating High-Performance Transport Modes into Transport Networks: A Multidimensional Impact Analysis

Authors: Sarah Pfoser, Lisa-Maria Putz, Thomas Berger

Abstract:

In the EU, the transport sector accounts for roughly one fourth of total greenhouse gas emissions, making it one of the main contributors of greenhouse gas emissions. Climate protection targets aim to reduce the negative effects of greenhouse gas emissions (e.g., climate change, global warming) worldwide. Achieving a modal shift toward environmentally friendly modes of transport, such as rail and inland waterways, is an important strategy for fulfilling the climate protection targets. The present paper goes beyond these conventional transport modes and reflects upon currently emerging high-performance transport modes that have the potential to complement future transport systems in an efficient way. It will be defined which properties describe high-performance transport modes, which types of technology are included, and what their potential is to contribute to a sustainable future transport network. The first step of this paper is to compile state-of-the-art information about high-performance transport modes to find out which technologies are currently emerging. A multidimensional impact analysis will then be conducted to evaluate which of the technologies is most promising. This analysis will be performed from a spatial, social, economic and environmental perspective. Frequently used instruments such as cost-benefit analysis and SWOT analysis will be applied for the multidimensional assessment. The estimations for the analysis will be derived from desktop research and discussions in an interdisciplinary team of researchers. For the purpose of this work, high-performance transport modes are characterized as transport modes with very fast and very high-throughput connections that could act as an efficient extension to the existing transport network. The recently proposed hyperloop system represents a potential high-performance transport mode that might be an innovative supplement to current transport networks.
The idea of the hyperloop is that people and freight are shipped in a tube at more than airline speed. Another innovative technology is drones for freight transport. Amazon is already testing drones for its parcel shipments, aiming for delivery times of 30 minutes; drones can therefore be considered high-performance transport modes as well. The Trans-European Transport Network program (TEN-T) addresses the expansion of transport grids in Europe and also includes high-speed rail connections to better connect important European cities. These services should increase the competitiveness of rail and are intended to replace aviation, which is known to be a polluting transport mode. In this sense, the integration of high-performance transport modes as described above facilitates the objectives of the TEN-T program. The results of the multidimensional impact analysis will reveal the potential future effects of integrating high-performance modes into transport networks. Building on that, a recommendation can be given on the subsequent (research) steps necessary to ensure the most efficient implementation and integration processes.
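A multidimensional assessment of this kind is often operationalized as a weighted scoring model over the four stated dimensions. The sketch below shows only the mechanics; the weights and 1-5 scores are invented placeholders, not findings of the study.

```python
# Weighted multi-criteria scoring over the four assessment dimensions.
# All weights and scores are illustrative placeholders.
WEIGHTS = {"spatial": 0.2, "social": 0.2, "economic": 0.3, "environmental": 0.3}

def weighted_score(scores):
    return sum(WEIGHTS[dim] * s for dim, s in scores.items())

candidates = {
    "hyperloop":       {"spatial": 2, "social": 3, "economic": 2, "environmental": 4},
    "freight drones":  {"spatial": 4, "social": 3, "economic": 3, "environmental": 3},
    "high-speed rail": {"spatial": 3, "social": 4, "economic": 3, "environmental": 5},
}
ranked = sorted(candidates, key=lambda name: -weighted_score(candidates[name]))
for name in ranked:
    print(name, round(weighted_score(candidates[name]), 2))
```

In practice, the weights themselves would come from the interdisciplinary expert discussions mentioned above, and the scores from the cost-benefit and SWOT analyses.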

Keywords: drones, future transport networks, high performance transport modes, hyperloops, impact analysis

Procedia PDF Downloads 332
185 Vertical Village Buildings as Sustainable Strategy to Re-Attract Mega-Cities in Developing Countries

Authors: M. J. Eichner, Y. S. Sarhan

Abstract:

The overall purpose of this study has been the evaluation of ‘Vertical Villages’ as a new sustainable building typology, significantly reducing the negative impacts of rapid urbanization processes in third-world capital cities. Commonly in fast-growing cities, housing and job supply, educational and recreational opportunities, as well as public transportation infrastructure, do not accommodate rapid population growth, exposing people to living environments with high noise and emission pollution, low-quality neighborhoods and a lack of recreational areas. Like many others, Egypt’s capital city Cairo, which according to the UN faces annual population growth of up to 428,000 people, is struggling to address the general deterioration of urban living conditions. New settlement typologies and urban reconstruction approaches hardly follow sustainable urbanization principles or socio-ecological urbanization models, with severe effects not only for inhabitants but also for the local environment and the global climate. The authors show that ‘Vertical Village’ buildings can offer a sustainable solution for increasing urban density while at the same time significantly improving living quality and the urban environment. By inserting them within high-density urban fabrics, the ecological and socio-cultural conditions of low-quality neighborhoods can be transformed into districts that consider all needs of sustainable and social urban life. This study analyzes existing building typologies in Cairo’s «low quality - high density» districts Ard el Lewa, Dokki and Mohandesen according to benchmarks for sustainable residential buildings, identifying major problems and deficits. In three case study design projects, the sustainable transformation potential of ‘Vertical Village’ buildings is laid out, and comparative studies show the improvement of the urban microclimate, safety, social diversity, sense of community, aesthetics, privacy, efficiency, healthiness and accessibility.
The main result of the paper is that, with ‘Vertical Village’ buildings, the disadvantages of density and overpopulation in developing countries can be converted into advantages, achieving attractive and environmentally friendly living environments with multiple synergies. The paper documents, based on scientific criteria, that mixed-use vertical building structures, designed according to the sustainable principles of low-rise housing, can serve as an alternative for converting «low quality - high density» districts in megacities, opening a pathway for governments to achieve sustainable urban transformation goals. Neglected informal urban districts, home to millions of the poorer population groups, can be converted into healthier living and working environments.

Keywords: sustainable, architecture, urbanization, urban transformation, vertical village

Procedia PDF Downloads 124
184 Sustainability of the Built Environment of Ranchi District

Authors: Vaidehi Raipat

Abstract:

A city is an expression of coexistence between its users and the built environment. The way in which its spaces are animated signifies the quality of this coexistence. Urban sustainability is the ability of a city to respond efficiently to its people, culture, environment, visual image, history, visions and identity. The quality of the built environment determines the quality of our lifestyles, but a poor ability of the built environment to adapt and sustain itself through change leads to the degradation of cities. Ranchi was created in November 2000 as the capital of the newly formed state of Jharkhand, on the eastern side of India. Before this, Ranchi was known as the summer capital of Bihar and was little larger than a town in terms of development. Since then, it has been expanding vigorously in size, infrastructure and population. This sudden expansion has created stress on the existing built environment. The large forest cover, agricultural land, diverse culture and pleasant climatic conditions have degraded and diminished to a large extent. Narrow roads and old buildings are unable to bear the load of changing requirements, fast-improving technology and a growing population. The built environment has hence been rendered unsustainable and unadaptable by the rapid changes of the present era. Some of the common hazards that can easily be spotted in the built environment are half-finished built forms; pedestrians and vehicles moving on the same part of the road; unpaved areas on street edges; over-sized, bright and randomly placed hoardings; and negligible trees or green spaces. The old buildings have been poorly maintained, and new ones are being constructed over them. Roads are too narrow to cater to the increasing traffic, both pedestrian and vehicular. The streets host a large variety of activities, but haphazardly. Trees are being cut down for road widening and new construction.
There is no space for greenery in the commercial areas or the old residential areas. The old infrastructure is deteriorating because of poor maintenance and economic limitations, while a pseudo-understanding of functionality and aesthetics drives the new infrastructure. It is hence necessary to evaluate the extent of sustainability of the existing built environment of the city and to regenerate it into a more sustainable and adaptable one. For this purpose, the research titled “Sustainability of the Built Environment of Ranchi District” has been carried out. In this research, the condition of the built environment of Ranchi is explored so as to identify the problems and shortcomings existing in the city and to provide design strategies that can make the existing built environment sustainable. The built environment of Ranchi, which includes its outdoor spaces such as streets, parks and other open areas, its built forms, as well as its users, has been analyzed in terms of various urban design parameters, based on which strategies have been suggested to make the city environmentally, socially, culturally and economically sustainable.

Keywords: adaptable, built-environment, sustainability, urban

Procedia PDF Downloads 237
183 Thermal Imaging of Aircraft Piston Engine in Laboratory Conditions

Authors: Lukasz Grabowski, Marcin Szlachetka, Tytus Tulwin

Abstract:

The main task of the engine cooling system is to maintain its average operating temperatures within strictly defined limits. Too high or too low average temperatures result in accelerated wear or even damage to the engine or its individual components. In order to avoid local overheating or significant temperature gradients, which lead to high stresses in a component, the aim is to ensure an even flow of air. In analyses related to heat exchange, one of the main problems is the comparison of temperature fields, because standard measuring instruments such as thermocouples or thermistors only provide information about the temperature at a given point. Thermal imaging tests can be helpful in this case. With appropriate camera settings, and taking environmental conditions into account, accurate temperature fields can be obtained in the form of thermograms. Emission of heat from the engine to the engine compartment is an important issue when designing a cooling system; in the case of liquid cooling, the main sources of heat, such as emissions from the engine block, cylinders, etc., should also be identified. This is likewise important when redesigning the engine compartment ventilation system. Ensuring proper cooling of an aircraft reciprocating engine is difficult not only because of its variable operating range but mainly because of the different cooling conditions related to changes in flight speed or altitude. Engine temperature also has a direct and significant impact on the properties of engine oil, whose viscosity, in particular, changes with this parameter. Values that are too low or too high can result in fast wear of engine parts. One of the ways to determine the temperatures occurring on individual parts of the engine is the use of thermal imaging measurements. The article presents the results of preliminary thermal imaging tests of an aircraft piston diesel engine with a maximum power of about 100 HP.
In order to perform the heat emission tests of the tested engine, the ThermaCAM S65 thermal imaging system from FLIR (Forward-Looking Infrared), together with the ThermaCAM Researcher Professional software, was used. The measurements were carried out after the engine warm-up. The engine speed was 5300 rpm. The measurements were taken for the following environmental parameters: air temperature 17 °C, ambient pressure 1004 hPa, relative humidity 38%. The temperature distributions on the engine cylinder and on the exhaust manifold were analysed. Thermal imaging tests made it possible to relate the results of simulation tests to the real object by measuring the rib temperature of the cylinders. The results obtained are necessary to develop a CFD (Computational Fluid Dynamics) model of heat emission from the engine bay. The project/research was financed in the framework of the project Lublin University of Technology-Regional Excellence Initiative, funded by the Polish Ministry of Science and Higher Education (contract no. 030/RID/2018/19).

Keywords: aircraft, piston engine, heat, emission

Procedia PDF Downloads 118
182 Implications of Social Rights Adjudication on the Separation of Powers Doctrine: Colombian Case

Authors: Mariam Begadze

Abstract:

Separation of Powers (SOP) has often been the most frequently posed objection against the judicial enforcement of socio-economic rights. Although much has been written to refute these objections, it has very rarely been assessed what effect the current practice of social rights adjudication has had on the construction of the SOP doctrine in specific jurisdictions. Colombia is an appropriate case study for this question. The notion of collaborative SOP in the 1991 Constitution has affected the court’s conception of its role. On the other hand, trends in the jurisprudence have further shaped the collaborative notion of SOP. Other institutional characteristics of Colombian constitutional law have played their part as well. The tutela action, a particularly flexible and fast judicial remedy for individuals, has placed the judiciary in a more confrontational relation vis-à-vis the political branches. Later interventions through abstract review of austerity measures further contributed to that development. Logically, the court’s activism in this sphere has attracted attacks from the political branches, which have turned out to be unsuccessful precisely due to the court’s outreach to the middle class, whose direct reliance on the court has turned into its direct democratic legitimacy. Only later have the structural judgments attempted to revive the collaborative notion behind the SOP doctrine. However, the court-supervised monitoring process of implementation has itself manifested fluctuations in the mode of collaboration, moving into more managerial supervision recently. This is not surprising considering the highly dysfunctional political system in Colombia, where distrust seems to be the default starting point in the interaction of the branches.
The paper aims to answer the question of what the appropriate judicial tools are to realize the collaborative notion of SOP in a context where the court has to strike a balance between a strong executive and a weak and largely dysfunctional legislative branch. If the recurrent abuse lies in the indifference and inaction of the legislative branch when it comes to engaging with political issues seriously, what tools does the court have to activate the political process? The answer to this question partly lies in the court’s other strand of jurisprudence, in which it combines substantive objections with procedural ones concerning the operation of the legislative branch. The primary example is the decision on value-added tax on basic goods, in which the court invalidated the law based on the absence of sufficient deliberation in Congress on the bill’s implications for the equity and progressiveness of the entire taxing system. The decision led to the Congressional rejection of an identical bill based on the arguments put forward by the court. The case is perhaps the best illustration of the collaborative notion of SOP, in which the court refrains from categorical pronouncements while doing its part to activate the political process. This also legitimizes the court’s activism based on its role in countering the most perilous abuse in the Colombian context: the failure of the political system to engage seriously with political questions.

Keywords: Colombian constitutional court, judicial review, separation of powers, social rights

Procedia PDF Downloads 104
181 Trainability of Executive Functions during Preschool Age Analysis of Inhibition of 5-Year-Old Children

Authors: Christian Andrä, Pauline Hähner, Sebastian Ludyga

Abstract:

Introduction: In the recent past, discussions on the importance of physical activity for child development have contributed to a growing interest in executive functions, which refer to cognitive processes. By controlling, modulating and coordinating sub-processes, they make it possible to achieve superior goals. Major components include working memory, inhibition and cognitive flexibility. While executive functions can be trained easily in school children, there are still research deficits regarding their trainability during preschool age. Methodology: This quasi-experimental study with pre- and post-design analyzes 23 children [age: 5.0 (mean value) ± 0.7 (standard deviation)] from four different sports groups. The intervention group was made up of 13 children (IG: 4.9 ± 0.6), while the control group consisted of ten children (CG: 5.1 ± 0.9). Between pre-test and post-test, children from the intervention group participated in special games that train executive functions (i.e., changing the rules of a game, introducing new stimuli in familiar games) for ten units of their weekly sports program. The sports program of the control group was not modified. A computer-based version of the Eriksen Flanker Task was employed in order to analyze the participants’ inhibition ability. In two rounds, the participants had to respond 50 times, as fast as possible, to a certain target (the direction of sight of a fish; the target was always the fish placed in the central position among five fish). Congruent (all fish have the same direction of sight) and incongruent (the central fish faces the opposite direction) stimuli were used. The relevant parameters were response time and accuracy. The main objective was to investigate whether children from the intervention group show more improvement in the two parameters than children from the control group.
Major findings: The intervention group revealed significant improvements in congruent response time (pre: 1.34 s, post: 1.12 s, p<.01), while the control group did not show any statistically relevant difference (pre: 1.31 s, post: 1.24 s). The comparison of incongruent response times indicates a comparable result (IG: pre: 1.44 s, post: 1.25 s, p<.05 vs. CG: pre: 1.38 s, post: 1.38 s). In terms of accuracy for congruent stimuli, the intervention group showed significant improvements (pre: 90.1 %, post: 95.9 %, p<.01). In contrast, no significant improvement was found for the control group (pre: 88.8 %, post: 92.9 %). Conversely, the intervention group did not display any significant results for incongruent stimuli (pre: 74.9 %, post: 83.5 %), while the control group revealed a significant difference (pre: 68.9 %, post: 80.3 %, p<.01). The analysis of three out of four criteria demonstrates that children who took part in the special sports program improved more than children who did not. The contrary result for the last criterion could be caused by the control group’s low pre-test scores. Conclusion: The findings illustrate that inhibition can be trained as early as preschool age. The combination of familiar games with increased requirements for attention and control processes appears to be particularly suitable.
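The pre/post comparison described above can be illustrated with a short sketch. The per-child response times below are hypothetical (the abstract reports only group means; the values were chosen so the group means match the reported 1.34 s and 1.12 s), and the paired t statistic is computed directly with standard-library routines:

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical per-child mean congruent response times (s) for 13 children,
# pre-test vs. post-test; group means match the abstract (1.34 s -> 1.12 s)
pre_rt  = [1.41, 1.29, 1.38, 1.30, 1.25, 1.44, 1.33, 1.36, 1.28, 1.40, 1.31, 1.35, 1.32]
post_rt = [1.18, 1.05, 1.15, 1.10, 1.02, 1.22, 1.09, 1.14, 1.08, 1.19, 1.07, 1.13, 1.10]

# Paired t statistic: mean within-child difference over its standard error
diffs = [a - b for a, b in zip(pre_rt, post_rt)]
t_stat = mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))
print(f"pre {mean(pre_rt):.2f} s -> post {mean(post_rt):.2f} s, paired t = {t_stat:.1f}")
```

A within-subject (paired) comparison is the natural choice here because each child contributes both a pre- and a post-test score; the abstract does not name the exact test used.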

Keywords: executive functions, flanker task, inhibition, preschool children

Procedia PDF Downloads 253
180 Studies of the Reaction Products Resulted from Glycerol Electrochemical Conversion under Galvanostatic Mode

Authors: Ching Shya Lee, Mohamed Kheireddine Aroua, Wan Mohd Ashri Wan Daud, Patrick Cognet, Yolande Peres, Mohammed Ajeel

Abstract:

In recent years, with the decreasing supply of fossil fuel, demand for renewable energy has risen significantly. Biodiesel, well known as vegetable-oil-based fatty acid methyl ester, is an alternative fuel to diesel. It can be produced from the transesterification of vegetable oils, such as palm oil, sunflower oil, rapeseed oil, etc., with methanol. During the transesterification process, crude glycerol is formed as a by-product, amounting to 10 wt% of the total biodiesel production. To date, due to the fast growth of biodiesel production worldwide, the crude glycerol supply has also increased rapidly, resulting in a significant price drop for glycerol. Therefore, extensive research has been devoted to using glycerol as a feedstock to produce various added-value chemicals, such as tartronic acid, mesoxalic acid, glycolic acid, glyceric acid, propanediol, acrolein, etc. The industrial processes usually involved are selective oxidation, biofermentation, esterification, and hydrolysis. However, the conversion of glycerol into added-value compounds by an electrochemical approach is rarely discussed. Currently, this approach mainly focuses on the electro-oxidation of glycerol under potentiostatic mode for cogenerating energy with other chemicals; electro-organic synthesis from glycerol under galvanostatic mode is seldom reviewed. In this study, glycerol was converted into various added-value compounds by an electrochemical method under galvanostatic mode. This work aimed to study the possible compounds produced from glycerol by an electrochemical technique in a one-pot electrolysis cell. The electro-organic synthesis study from glycerol was carried out in a single-compartment reactor for 8 hours, over platinum cathode and anode electrodes under acidic conditions. Various parameters such as electric current (1.0 A to 3.0 A) and reaction temperature (27 °C to 80 °C) were evaluated.
The products obtained were characterized using gas chromatography-mass spectrometry equipped with an aqueous-stable polyethylene glycol stationary-phase column. Under the optimized reaction conditions, the glycerol conversion reached as high as 95%. The glycerol was successfully converted into various added-value chemicals such as ethylene glycol, glycolic acid, glyceric acid, acetaldehyde, formic acid, and glyceraldehyde, with yields of 1%, 45%, 27%, 4%, 0.7% and 5%, respectively. Based on the products obtained in this study, a reaction mechanism for the process is proposed. In conclusion, this study has successfully converted glycerol into a wide variety of added-value compounds. These chemicals have high market value; they can be used in the pharmaceutical, food and cosmetic industries. This study effectively opens a new approach for the electrochemical conversion of glycerol. For further enhancement of product selectivity, the electrode material is an important parameter to be considered.
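The relation between the reported conversion and yields can be made explicit with a short sketch. Taking selectivity simply as yield divided by conversion (an assumption about how these figures relate; the abstract reports only conversion and yields), the identified products account for under 83% of the converted glycerol, i.e. the balance is not fully closed:

```python
# Figures reported in the abstract: 95% glycerol conversion and product yields (%)
conversion = 95.0
yields = {
    "ethylene glycol": 1.0, "glycolic acid": 45.0, "glyceric acid": 27.0,
    "acetaldehyde": 4.0, "formic acid": 0.7, "glyceraldehyde": 5.0,
}

# Selectivity toward each product, taken here as yield / conversion (in %)
selectivity = {name: 100.0 * y / conversion for name, y in yields.items()}

# Share of glycerol accounted for by the identified products
identified = sum(yields.values())
print(f"identified products: {identified:.1f}% total yield; "
      f"glycolic acid selectivity ~{selectivity['glycolic acid']:.1f}%")
```

The gap between 82.7% identified yield and 95% conversion suggests unidentified products or gaseous losses, which is consistent with the authors proposing a reaction mechanism from the observed products.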

Keywords: biodiesel, glycerol, electrochemical conversion, galvanostatic mode

Procedia PDF Downloads 192
179 Detection of Egg Proteins in Food Matrices (2011-2021)

Authors: Daniela Manila Bianchi, Samantha Lupi, Elisa Barcucci, Sandra Fragassi, Clara Tramuta, Lucia Decastelli

Abstract:

Introduction: The detection of undeclared allergens in food products plays a fundamental role in the safety of the allergic consumer. In Europe, the protection of allergic consumers is guaranteed by Regulation (EU) No 1169/2011 of the European Parliament, which governs the consumer's right to information and identifies 14 food allergens to be mandatorily indicated on food labels; among these, egg is included. Egg can be present as an ingredient or as contamination in raw and cooked products. The main allergenic egg proteins are ovomucoid, ovalbumin, lysozyme, and ovotransferrin. This study presents the results of a survey conducted in Northern Italy aimed at detecting the presence of undeclared egg proteins in food matrices over the last ten years (2011-2021). Method: In the period January 2011 - October 2021, a total of 1205 different types of food matrices (ready-to-eat, meats and meat products, bakery and pastry products, baby foods, food supplements, pasta, fish and fish products, preparations for soups and broths) were delivered to the Food Control Laboratory of Istituto Zooprofilattico Sperimentale of Piemonte, Liguria and Valle d’Aosta to be analyzed as official samples in the frame of the Regional Monitoring Plan of Food Safety or in the context of food poisoning investigations. The laboratory is ISO 17025 accredited and, since 2019, has represented the National Reference Centre for the detection in foods of substances causing food allergies or intolerances (CreNaRiA). All samples were stored in the laboratory according to food business operator instructions and analyzed within the expiry date for the detection of undeclared egg proteins. Analyses were performed with the RIDASCREEN®FAST Ei/Egg (R-Biopharm® Italia srl) kit: the method was internally validated and accredited with a Limit of Detection (LOD) equal to 2 ppm (mg/kg). It is a sandwich enzyme immunoassay for the quantitative analysis of whole egg powder in foods.
Results: The results obtained through this study showed that egg proteins were found in 2% (n = 28) of food matrices, including meats and meat products (n = 16), fish and fish products (n = 4), bakery and pastry products (n = 4), pasta (n = 2), preparations for soups and broths (n = 1) and ready-to-eat products (n = 1). In particular, egg proteins were detected in 5% of samples in 2011, 4% in 2012, 2% in 2013, 2016 and 2018, and 3% in 2014, 2015 and 2019. No egg protein traces were detected in 2017, 2020, and 2021. Discussion: Food allergies occur in the Western world in 2% of adults and up to 8% of children. Egg allergy is one of the most common food allergies in the pediatric context. The percentage of positivity obtained in this study is, however, low. The trend over the ten years has been slightly variable, with comparable data.

Keywords: allergens, food, egg proteins, immunoassay

Procedia PDF Downloads 136
178 Use of Misoprostol in Pregnancy Termination in the Third Trimester: Oral versus Vaginal Route

Authors: Saimir Cenameri, Arjana Tereziu, Kastriot Dallaku

Abstract:

Introduction: Intra-uterine death is a common problem in obstetric practice and can lead to complications if left to resolve spontaneously. The cervix is unprepared, making the induction of labor difficult. Misoprostol, a synthetic prostaglandin E1 analogue, is inexpensive and has proven valuable thanks to its ability to bring about changes in the cervix that lead to the induction of uterine contractions. Misoprostol is quickly absorbed when taken orally, resulting in a high initial peak serum concentration compared with the vaginal route. The vaginal misoprostol peak serum concentration is not as high and demonstrates a more gradual decline. This is associated with many benefits for the patient: fast induction of labor, smaller doses, and fewer (dose-dependent) side effects. The most commonly used regimen has been 50 μg every 4 hours, with a high percentage of success and limited side effects. Objective: To evaluate the efficiency of oral and vaginal misoprostol in inducing labor, comparing protocol-based use with use not following a previously defined protocol. Methods: Participants in this study included patients at U.H.O.G. 'Koco Gliozheni', Tirana, from April 2004 to July 2006, presenting with an indication for inducing labor in the third trimester for pregnancy termination. A total of 37 patients were included: 26 were randomly assigned to labor induction according to the protocol (10 oral vs. 16 vaginal), while a control group of 11 patients not subject to the protocol was created. Oral or vaginal misoprostol was administered at a dose of 50 μg/4 h, while the fourth group (control) participants were treated individually by the members of the medical staff. The main outcome of interest was the time from induction of labor to birth. The Kruskal-Wallis test was used to compare the average age, parity, women's weight, gestational age, Bishop score, size of the uterus and weight of the fetus between the four groups in the study.
Fisher's exact test was used to compare day-stay and causes across the four groups. The Mann-Whitney test was used to compare the time to expulsion and the number of doses between the oral and vaginal groups. For all statistical tests used, a value of P ≤ 0.05 was considered statistically significant. Results: The four groups were comparable with regard to woman's age and weight, parity, abortion indication, Bishop score, fetal weight and gestational age. There was a significant difference in the percentage of deliveries within 24 hours. The average times from induction to birth per route (vaginal, oral, according to protocol and not according to the protocol) were 10.43 h, 21.10 h, 15.77 h and 21.57 h, respectively. There was no difference in maternal complications between the groups. Conclusions: Use of vaginal misoprostol for inducing labor in the third trimester for termination of pregnancy appears to be more effective than the oral route, and even more so than use not following previously approved protocols, where complications are greater and unjustified.
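The Mann-Whitney comparison used for the oral and vaginal groups can be sketched directly from its definition: the U statistic counts, over all cross-group pairs, how often one group's value beats the other's. The induction-to-birth times below are hypothetical values consistent with the reported group averages (10.43 h vs. 21.10 h), not the study data:

```python
# Hypothetical induction-to-birth times (hours) for the two misoprostol routes
vaginal = [8.5, 9.0, 10.0, 10.5, 11.0, 12.5]
oral    = [18.0, 19.5, 21.0, 22.0, 23.5]

# Mann-Whitney U for the vaginal group: count pairs where a vaginal time is
# shorter than an oral time (ties contribute 1/2)
u = sum(1.0 if v < o else 0.5 if v == o else 0.0 for v in vaginal for o in oral)
n1, n2 = len(vaginal), len(oral)
print(f"U = {u} out of {n1 * n2} pairs")  # U == n1*n2 means complete separation
```

In practice one would use a library routine (e.g. scipy.stats.mannwhitneyu) to also obtain the p-value; the sketch only shows what the statistic measures.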

Keywords: inducing labor, misoprostol, pregnancy termination, third trimester

Procedia PDF Downloads 185
177 Determination of Physical Properties of Crude Oil Distillates by Near-Infrared Spectroscopy and Multivariate Calibration

Authors: Ayten Ekin Meşe, Selahattin Şentürk, Melike Duvanoğlu

Abstract:

Petroleum refineries are a highly complex process industry with continuous production and high operating costs. Physical separation of crude oil starts with the crude oil distillation unit, continues with various conversion and purification units, and passes through many stages until the final product is obtained. To meet the desired product specifications, process parameters are strictly followed. To ensure the quality of distillates, routine analyses are performed in quality control laboratories based on appropriate international standards such as American Society for Testing and Materials (ASTM) standard methods and European Standard (EN) methods. The cut point of distillates in the crude distillation unit is crucial for the efficiency of the downstream processes. In order to maximize process efficiency, the determination of distillate quality should be as fast, reliable, and cost-effective as possible. In this sense, an alternative study was carried out on the crude oil distillation unit that serves the entire refinery process. In this work, studies were conducted with three different crude oil distillates: Light Straight Run Naphtha (LSRN), Heavy Straight Run Naphtha (HSRN), and Kerosene. These products are classified after separation by the number of carbon atoms they contain: LSRN consists of five- to six-carbon hydrocarbons, HSRN of six- to ten-carbon hydrocarbons, and kerosene of sixteen- to twenty-two-carbon hydrocarbons. Physical properties of the three crude distillation unit products (LSRN, HSRN, and Kerosene) were determined using Near-Infrared Spectroscopy with multivariate calibration. The absorbance spectra of the petroleum samples were obtained in the range from 10000 cm⁻¹ to 4000 cm⁻¹, employing a quartz transmittance flow-through cell with a 2 mm light path and a resolution of 2 cm⁻¹. A total of 400 samples were collected for each petroleum product over almost four years.
Several different crude oil grades were processed during the sample collection period. Extended Multiplicative Signal Correction (EMSC) and Savitzky-Golay (SG) preprocessing techniques were applied to the FT-NIR spectra of the samples to eliminate baseline shifts and suppress unwanted variation. Two different multivariate calibration approaches (Partial Least Squares Regression, PLS, and Genetic Inverse Least Squares, GILS) and an ensemble model were applied to the preprocessed FT-NIR spectra. The predictive performance of each multivariate calibration technique and preprocessing technique was compared, and the best models were chosen according to the reproducibility of the ASTM reference methods. This work demonstrates that the developed models can be used for routine analysis instead of conventional analytical methods, with over 90% accuracy.
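Of the preprocessing steps named above, Savitzky-Golay smoothing is easy to sketch. The minimal version below uses the classic five-point, quadratic-polynomial convolution weights [-3, 12, 17, 12, -3]/35 on plain Python lists; the spectrum values are illustrative, and a production pipeline would instead use a library routine (e.g. scipy.signal.savgol_filter) followed by the PLS calibration step, which is omitted here:

```python
# Savitzky-Golay smoothing, window 5, polynomial degree 2:
# each interior point is replaced by a local least-squares quadratic fit,
# which reduces noise while preserving peak shape better than a moving average.
def savgol5(y):
    w = [-3, 12, 17, 12, -3]  # classic SG weights for window 5, degree 2
    out = list(y)             # endpoints are left unsmoothed in this sketch
    for i in range(2, len(y) - 2):
        out[i] = sum(c * y[i + k - 2] for k, c in enumerate(w)) / 35.0
    return out

spectrum = [0.10, 0.12, 0.50, 0.14, 0.11, 0.13, 0.12]  # noise spike at index 2
smoothed = savgol5(spectrum)
print(f"spike reduced from {spectrum[2]:.2f} to {smoothed[2]:.3f}")
```

Because the weights sum to 35 (i.e. normalize to 1), a constant or linear baseline passes through unchanged, which is why SG smoothing is usually paired with a separate baseline correction such as EMSC.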

Keywords: crude distillation unit, multivariate calibration, near infrared spectroscopy, data preprocessing, refinery

Procedia PDF Downloads 129
176 [Keynote] Implementation of Quality Control Procedures in Radiotherapy CT Simulator

Authors: B. Petrović, L. Rutonjski, M. Baucal, M. Teodorović, O. Čudić, B. Basarić

Abstract:

Purpose/Objective: Radiotherapy treatment planning requires the use of a CT simulator in order to acquire CT images. The overall performance of the CT simulator determines the quality of the radiotherapy treatment plan and, in the end, the outcome of treatment for every single patient. Therefore, it is strongly advised by international recommendations to set up quality control procedures for every machine involved in the radiotherapy treatment planning process, including the CT scanner/simulator. The overall process requires a number of tests, which are performed on a daily, weekly, monthly or yearly basis, depending on the feature tested. Materials/Methods: Two phantoms were used: a dedicated phantom, CIRS 062QA, and a QA phantom supplied with the CT simulator. The examined CT simulator was a Siemens Somatom Definition AS Open, dedicated to radiation therapy treatment planning. The CT simulator has built-in software, which enables fast and simple evaluation of CT QA parameters using the phantom provided with the CT simulator. On the other hand, the recommendations contain additional tests, which were done with the CIRS phantom. Also, legislation on ionizing radiation protection requires CT testing at defined intervals. Taking into account the requirements of the law, the built-in tests of the CT simulator, and the international recommendations, the institutional QC programme for the CT simulator was defined and implemented. Results: The CT simulator parameters evaluated through the study were the following: CT number accuracy, field uniformity, the complete CT to ED conversion curve, spatial and contrast resolution, image noise, slice thickness, and patient table stability. The following limits were established and implemented: CT number accuracy limits are +/- 5 HU of the value at commissioning. Field uniformity: +/- 10 HU in selected ROIs. The complete CT to ED curve for each tube voltage must comply with the curve obtained at commissioning, with deviations of not more than 5%.
Spatial and contrast resolution tests must comply with the tests obtained at commissioning; otherwise, the machine requires service. The result of the image noise test must fall within the limit of a 20% difference from the base value. Slice thickness must meet manufacturer specifications, and patient table stability with longitudinal transfer of the loaded table must not show a vertical deviation of more than 2 mm. Conclusion: The implemented QA tests gave an overall basic understanding of the CT simulator's functionality and its clinical effectiveness in radiation treatment planning. The legal requirement for the clinic is to set up its own QA programme with minimum testing, but it remains the user's decision whether additional testing, as recommended by international organizations, will be implemented, so as to improve the overall quality of the radiation treatment planning procedure, as the quality of the CT images used for radiation treatment planning influences the delineation of a tumor and the calculation accuracy of the treatment planning system, and finally the delivery of radiation treatment to a patient.
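The tolerance scheme described above lends itself to a simple automated check. The sketch below encodes the stated limits as deviations from commissioning/baseline values; the field names and the example measurement set are illustrative, not part of the published programme:

```python
# QA tolerances from the abstract, expressed as maximum allowed deviations
# from the commissioning / baseline values
LIMITS = {
    "ct_number_hu":  5.0,   # +/- 5 HU vs. commissioning CT number
    "uniformity_hu": 10.0,  # +/- 10 HU across selected ROIs
    "ct_to_ed_pct":  5.0,   # <= 5% deviation of the CT-to-ED curve
    "noise_pct":     20.0,  # <= 20% difference from baseline noise
    "table_sag_mm":  2.0,   # <= 2 mm vertical deviation of the loaded table
}

def qa_failures(measured_deviations):
    """Return the tests whose measured deviation exceeds its tolerance."""
    return [t for t, dev in measured_deviations.items() if abs(dev) > LIMITS[t]]

# Hypothetical daily measurement set: only field uniformity is out of tolerance
failures = qa_failures({"ct_number_hu": 3.2, "uniformity_hu": 11.5,
                        "ct_to_ed_pct": 1.0, "noise_pct": 8.0,
                        "table_sag_mm": 0.5})
print("out of tolerance:", failures)
```

Encoding the limits as data rather than scattered comparisons makes it straightforward to log trends per test and to tighten a tolerance without touching the checking logic.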

Keywords: CT simulator, radiotherapy, quality control, QA programme

Procedia PDF Downloads 531
175 Re-Presenting the Egyptian Informal Urbanism in Films between 1994 and 2014

Authors: R. Mofeed, N. Elgendy

Abstract:

Cinema constructs mind-spaces that reflect inherent human thoughts and emotions. As a representational art, Cinema introduces comprehensive images of life phenomena in different ways. The term “represent” suggests a variety of meanings: bring into presence, replace, or typify. In that sense, Cinema may present a phenomenon through direct embodiment, introduce a substitute image that replaces the original phenomenon, or typify it by relating the produced image to a more general category through a process of abstraction. This research is interested in questioning the type of images that Egyptian Cinema introduces of informal urbanism and how these images were conditioned and reshaped over the last twenty years. The informalities/slums phenomenon first appeared in Egypt, and particularly Cairo, in the early sixties; however, it was completely ignored by the state and society until the eighties, and its evident representation in Cinema appeared only by the mid-nineties. The informal city represents illegal housing developments and is a fast-growing form of urbanization in Cairo. Yet, this expanding phenomenon is still depicted as minor, exceptional and marginal through the Cinematic lens. This paper aims at tracing the forms of representation of urban informalities in the Egyptian Cinema between 1994 and 2014, and how they affected the popular mind and its perception of these areas. The paper runs two main lines of inquiry; the first traces the phenomenon through a chronological and geographical mapping of how informal urbanism has been portrayed in films. This analysis is based on academic research work at Cairo University in Fall 2014. The visual tracing through maps and timelines allowed a reading of the phases of ignorance, presence, typifying and repetition in the representation of this huge sector of the city through the more than 50 films that have been investigated.
The analysis clearly revealed the “portrayed image” of informality presented by the Cinema through the examined period. The second part of the paper explores the “perceived image”. A designed questionnaire is applied to highlight the main features of the image perceived by both inhabitants of informalities and other Cairenes, based on watching selected films. The questionnaire covers the different images of informalities proposed in the Cinema, whether against a comic or a melodramatic background, and highlights the descriptive terms used, to see which of them resonate with mass perceptions and affected mental images. The two images, “portrayed” and “perceived”, are then confronted to reflect on issues of repetition, stereotyping and reality. The formulated stereotype of informal urbanism is finally outlined and justified in relation to both the production and consumption mechanisms of films and the State's official vision of informalities.

Keywords: cinema, informal urbanism, popular mind, representation

Procedia PDF Downloads 296
174 Fire Safe Medical Oxygen Delivery for Aerospace Environments

Authors: M. A. Rahman, A. T. Ohta, H. V. Trinh, J. Hyvl

Abstract:

Atmospheric pressure and oxygen (O2) concentration are critical life support parameters for human-occupied aerospace vehicles and habitats. Various medical conditions may require medical O2; for example, the American Medical Association has determined that commercial air travel exposes passengers to altitude-related hypoxia and gas expansion. This may cause some passengers to experience significant symptoms and medical complications during the flight, requiring supplemental medical-grade O2 to maintain adequate tissue oxygenation and prevent hypoxemic complications. Although supplemental medical-grade O2 is a successful lifesaver for respiratory and cardiac failure, O2-enriched exhaled air can contain more than 95% O2, increasing the likelihood of a fire. In an aerospace environment, a localized high-concentration O2 bubble forms around a patient being treated for hypoxia, raising the cabin O2 beyond the safe limit. To address this problem, this work describes a medical O2 delivery system that can reduce the O2 concentration of patient-exhaled O2-rich air to safe levels while maintaining the prescribed O2 administration to the patient. The O2 delivery system is designed to be part of the medical O2 kit. The system uses cationic multimetallic cobalt complexes to reversibly, selectively, and stoichiometrically chemisorb O2 from the exhaled air. An air-release sub-system monitors the exhaled air, and as soon as the O2 percentage falls below 21%, the air is released to the room air. The O2-enriched exhaled air is channeled through a layer of porous, thin-film heaters coated with the cobalt complex. The complex absorbs O2, and when saturated, it is heated to 100 °C using the thin-film heater. Upon heating, the complex desorbs O2 and is once again ready to absorb and remove the excess O2 from exhaled air. The O2 absorption is a sub-second process, while desorption is a multi-second process. While heating at 0.685 °C/s, the complex desorbs ~90% of its O2 in 110 s.
These fast reaction times mean that a simultaneous absorb/desorb process in the O2 delivery system will provide continuous absorption of O2. Moreover, the complex can concentrate O2 by a factor of 160 relative to air and desorb over 90% of the O2 at 100 °C. Over 12 cycles of thermogravimetric measurement, less than a 0.1% decrease in the reversibility of O2 uptake was observed. One kilogram of the complex can desorb over 20 L of O2, so simultaneous O2 desorption by 0.5 kg of complex and absorption by another 0.5 kg can potentially remove 9 L/min of O2 (~90% desorbed at 100 °C) from exhaled air continuously. The complex was synthesized and characterized for reversible O2 absorption and efficacy. The complex changes its color from dark brown to light gray after O2 desorption. In addition to thermogravimetric analysis, the O2 absorption/desorption cycle was characterized using optical imaging, showing stable color changes over ten cycles. The complex was also tested at room temperature in a low-O2 environment in its O2-desorbed state and was observed to hold the deoxygenated state under these conditions. The results show the feasibility of using the complex in the proposed fire-safe medical O2 delivery system.
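The stated throughput can be checked with back-of-envelope arithmetic: at ~20 L of O2 per kilogram and ~90% release at 100 °C, each 0.5 kg bed yields about 9 L per desorption cycle, matching the 9 L/min figure if the two beds alternate roughly once per minute (an assumption about cycle time; the abstract does not state it explicitly):

```python
# Back-of-envelope check of the sorbent throughput figures from the abstract
capacity_l_per_kg = 20.0  # ~20 L of O2 desorbed per kg of complex
desorb_fraction = 0.90    # ~90% of absorbed O2 released at 100 °C
bed_mass_kg = 0.5         # one of the two alternating 0.5 kg beds

recoverable_l = bed_mass_kg * capacity_l_per_kg * desorb_fraction
print(f"one 0.5 kg bed releases about {recoverable_l:.0f} L of O2 per cycle")
```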

Keywords: fire risk, medical oxygen, oxygen removal, reversible absorption

Procedia PDF Downloads 104