Search results for: continuous blood pressure measurement
764 Regulation of Desaturation of Fatty Acid and Triglyceride Synthesis by Myostatin through Swine-Specific MEF2C/miR222/SCD5 Pathway
Authors: Wei Xiao, Gangzhi Cai, Xingliang Qin, Hongyan Ren, Zaidong Hua, Zhe Zhu, Hongwei Xiao, Ximin Zheng, Jie Yao, Yanzhen Bi
Abstract:
Myostatin (MSTN) is the master regulator of the double-muscling phenotype, with overgrown muscle and decreased fatness in animals, but its mode of action in regulating fat deposition remains to be elucidated. In this study, a swine-specific pathway through which MSTN acts to regulate fat deposition was deciphered. Deep sequencing of the mRNA and miRNA of fat tissues of MSTN knockout (KO) and wildtype (WT) pigs revealed a positive correlation between myocyte enhancer factor 2C (MEF2C) and fat-inhibiting miR222 expression, and an inverse correlation between miR222 and stearoyl-CoA desaturase 5 (SCD5) expression. SCD5 is absent in rodents and expressed only in pig, sheep and cattle. The fatty acid spectrum of fat tissues revealed a lower percentage of oleoyl-CoA (18:1) and palmitoleoyl-CoA (16:1) in MSTN KO pigs, which are the products of SCD5-mediated desaturation of stearoyl-CoA (18:0) and palmitoyl-CoA (16:0). Blood metrics demonstrated a 45% decline in triglyceride (TG) content in MSTN KO pigs. In light of these observations, we hypothesized that MSTN might act through the MEF2C/miR222/SCD5 pathway to regulate desaturation of fatty acids as well as triglyceride synthesis in pigs. To this end, real-time PCR and Western blotting were carried out to detect the expression of the three genes stated above. These experiments showed that MEF2C expression was up-regulated by nearly 2-fold, miR222 up-regulated by nearly 3-fold and SCD5 down-regulated by nearly 50% in MSTN KO pigs. These data were consistent with the expression changes in the deep sequencing analysis. A dual luciferase reporter was then used to confirm the regulation by MEF2C of the miR222 promoter. Ectopic expression of MEF2C in preadipocyte cells enhanced miR222 expression by 3.48-fold. ChIP-PCR identified a putative binding site of MEF2C in the -2077 to -2066 region of the miR222 promoter. Electrophoretic mobility shift assay (EMSA) demonstrated the interaction of MEF2C and the miR222 promoter in vitro.
These data indicated that MEF2C transcriptionally regulates the expression of miR222. Next, the regulation of SCD5 mRNA by miR222, as well as its physiological consequences, was examined. Dual luciferase reporter testing revealed translational inhibition by miR222 through the 3´ UTR (untranslated region) of SCD5 in preadipocyte cells. Transfection of miR222 mimics and inhibitors resulted in the down-regulation and up-regulation of SCD5 in preadipocyte cells, respectively, consistent with the results from reporter testing. RNA interference of SCD5 in preadipocyte cells caused a 26.2% reduction of TG, in agreement with the TG content results in MSTN KO pigs. In summary, the results above support the existence of a molecular pathway in which MSTN signals through MEF2C/miR222/SCD5 to regulate fat deposition in pigs. This swine-specific pathway offers potential molecular markers for the development and breeding of a new pig line with optimised fatty acid composition. This would benefit human health by decreasing the intake of saturated fatty acids.
Keywords: fat deposition, MEF2C, miR222, myostatin, SCD5, pig
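The ~2-fold and ~3-fold expression changes above were measured by real-time PCR. As an illustration only, the sketch below shows the widely used 2^(-ΔΔCt) relative-quantification calculation; the abstract does not state which quantification method was used, and the Ct values here are invented.

```python
# Hypothetical illustration of relative quantification by the 2^(-ΔΔCt) method,
# a common way to derive qPCR fold changes such as those reported for MEF2C
# and miR222. All Ct values below are invented for illustration only.

def fold_change(ct_target_ko, ct_ref_ko, ct_target_wt, ct_ref_wt):
    """Fold change of a target gene in KO vs WT, normalised to a reference gene."""
    delta_ko = ct_target_ko - ct_ref_ko   # ΔCt in knockout samples
    delta_wt = ct_target_wt - ct_ref_wt   # ΔCt in wildtype samples
    return 2 ** -(delta_ko - delta_wt)    # 2^(-ΔΔCt)

# Example: the target amplifies one cycle earlier (relative to the reference)
# in KO than in WT, i.e. roughly a 2-fold up-regulation:
print(fold_change(22.0, 18.0, 23.0, 18.0))  # → 2.0
```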
Procedia PDF Downloads 129
763 Effect of Water Addition on Catalytic Activity for CO2 Purification from Oxyfuel Combustion
Authors: Joudia Akil, Stephane Siffert, Laurence Pirault-Roy, Renaud Cousin, Christophe Poupin
Abstract:
Oxyfuel combustion is a promising method that yields a CO2-rich stream containing water vapor (~10%) and unburned components such as CO and NO, which must be removed before the CO2 can be used. Our objective is the final treatment of CO and NO by catalysis. Three-way catalysts are well-developed materials for the simultaneous conversion of NO, CO and hydrocarbons. Pt and/or Rh ensure a quasi-complete removal of NOx, CO and HC, and there is also a growing interest in partly replacing Pt with less expensive Pd. The use of alumina and ceria as supports ensures, respectively, the stabilization of such species in the active state and the discharging or storing of oxygen to control the oxidation of CO and HC and the reduction of NOx. In this work, we compare different metals (Pd, Rh and Pt) supported on Al2O3 and CeO2 for CO2 purification from oxyfuel combustion. The catalyst must reduce NO by CO in an oxidizing environment, in the presence of a CO2-rich stream, and be resistant to water. In this study, Al2O3 and CeO2 were used as support materials for the catalysts. 1 wt% M/Support catalysts, where M = Pd, Rh or Pt, were obtained by wet impregnation of the supports with a precursor of palladium [Pd(acac)2], rhodium [Rh(NO3)3] or platinum [Pt(NO2)2(NO3)2]. Materials were characterized by BET surface area, H2 chemisorption, and TEM. Catalytic activity was evaluated in CO2 purification carried out in a fixed-bed flow reactor containing 150 mg of catalyst at atmospheric pressure. The flow of reactant gases is composed of 20% CO2, 10% O2, 0.5% CO, 0.02% NO and 8.2% H2O (He as eluent gas) with a total flow of 200 mL.min−1 and the same GHSV (2.24×10⁴ h⁻¹). The catalytic performances of the samples were investigated with and without water. The results show that total oxidation of CO occurred over the different materials.
This study evidenced an important effect of the nature of the metals, the supports, and the presence or absence of H2O during the reduction of NO by CO under oxyfuel combustion conditions. Rh-based catalysts show that the addition of water has a very positive influence, especially for the Rh catalyst on CeO2. Pt-based catalysts keep a good activity despite the addition of water, on both supports studied. For NO reduction, the addition of water acts as a poison with Pd catalysts. The interesting results of Rh-based catalysts with water can be explained by the production of hydrogen through the water gas shift reaction (WGSR). The produced hydrogen acts as a more effective reductant than CO for NO removal. Furthermore, in TWCs, Rh is the main component responsible for NOx reduction due to its especially high activity for NO dissociation. Moreover, cerium oxide is a promoter for the WGSR.
Keywords: carbon dioxide, environmental chemistry, heterogeneous catalysis
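The reactor conditions above tie together the total flow (200 mL/min) and the gas hourly space velocity (GHSV = volumetric flow rate divided by catalyst bed volume). The sketch below back-calculates the implied bed volume from the reported figures; the bed volume is a derived assumption, not a measured value from the study.

```python
# A minimal sketch of the GHSV relation: GHSV = volumetric flow / bed volume.
# The catalyst bed volume below is back-calculated from the reported numbers
# (200 mL/min, GHSV = 2.24e4 h^-1) and is an assumption for illustration.

def ghsv(flow_ml_per_min: float, bed_volume_ml: float) -> float:
    """GHSV in h^-1 from volumetric flow (mL/min) and catalyst bed volume (mL)."""
    return flow_ml_per_min * 60.0 / bed_volume_ml

# A GHSV of 2.24e4 h^-1 at 200 mL/min implies a bed volume of roughly 0.54 mL:
bed_volume = 200 * 60 / 2.24e4
print(round(bed_volume, 2))          # → 0.54
print(round(ghsv(200, bed_volume)))  # → 22400
```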
Procedia PDF Downloads 182
762 Phantom and Clinical Evaluation of Block Sequential Regularized Expectation Maximization Reconstruction Algorithm in Ga-PSMA PET/CT Studies Using Various Relative Difference Penalties and Acquisition Durations
Authors: Fatemeh Sadeghi, Peyman Sheikhzadeh
Abstract:
Introduction: The Block Sequential Regularized Expectation Maximization (BSREM) reconstruction algorithm was recently developed to suppress excessive noise by applying a relative difference penalty. The aim of this study was to investigate the effect of various strengths of the noise penalization factor in the BSREM algorithm under different acquisition durations and lesion sizes, in order to determine an optimum penalty factor considering both quantitative and qualitative image evaluation parameters for clinical use. Materials and Methods: The NEMA IQ phantom and 15 clinical whole-body patients with prostate cancer were evaluated. The phantom and patients were injected with Gallium-68 Prostate-Specific Membrane Antigen (68Ga-PSMA) and scanned on a non-time-of-flight Discovery IQ Positron Emission Tomography/Computed Tomography (PET/CT) scanner with BGO crystals. The data were reconstructed using BSREM with β-values of 100-500 at intervals of 100. These reconstructions were compared to OSEM as a widely used reconstruction algorithm. Following the standard NEMA measurement procedure, background variability (BV), recovery coefficient (RC), contrast recovery (CR) and residual lung error (LE) were measured from phantom data, and signal-to-noise ratio (SNR), signal-to-background ratio (SBR) and tumor SUV were measured from clinical data. Qualitative features of clinical images were visually ranked by one nuclear medicine expert. Results: The β-value acts as a noise suppression factor, so BSREM showed decreasing image noise with an increasing β-value. BSREM with a β-value of 400 at a decreased acquisition duration (2 min/bp) produced an approximately equal noise level to OSEM at an increased acquisition duration (5 min/bp). For the β-value of 400 at 2 min/bp, SNR increased by 43.7% and LE decreased by 62% compared with OSEM at 5 min/bp. In both phantom and clinical data, an increase in the β-value translated into a decrease in SUV.
The lowest levels of SUV and noise were reached with the highest β-value (β=500), resulting in the highest SNR and lowest SBR, due to the greater reduction in noise than in SUV at the highest β-value. In the comparison of BSREM with different β-values, the relative difference in the quantitative parameters was generally larger for smaller lesions. As the β-value decreased from 500 to 100, the increase in CR was 160.2% for the smallest sphere (10 mm) and 12.6% for the largest sphere (37 mm), and the trend was similar for SNR (-58.4% and -20.5%, respectively). BSREM was visually ranked higher than OSEM in all qualitative features. Conclusions: The BSREM algorithm, using more iterations, leads to greater quantitative accuracy without excessive noise, which translates into higher overall image quality and lesion detectability. This improvement can be used to shorten acquisition time.
Keywords: BSREM reconstruction, PET/CT imaging, noise penalization, quantification accuracy
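The β-value above scales a relative difference penalty added to the reconstruction objective. The sketch below is a minimal 1-D illustration of that penalty term, not the scanner vendor's implementation; the edge-preservation parameter gamma and the toy pixel values are assumptions.

```python
import numpy as np

# A 1-D sketch of the relative difference penalty (RDP) that BSREM regularises
# with: beta * sum over neighbour pairs of (x_j - x_k)^2 / (x_j + x_k + gamma*|x_j - x_k|).
# beta plays the role of the β-value in the study; gamma and the toy images are assumed.

def relative_difference_penalty(x, beta=400.0, gamma=2.0, eps=1e-9):
    """Penalty over neighbouring-pixel pairs of a 1-D image x (nonnegative values)."""
    diffs = x[1:] - x[:-1]                                   # neighbour differences
    denom = x[1:] + x[:-1] + gamma * np.abs(diffs) + eps     # relative-difference denominator
    return beta * np.sum(diffs ** 2 / denom)

# A noisier image incurs a larger penalty, which is what drives noise
# suppression as the β-value increases:
smooth = np.array([10.0, 10.5, 11.0, 10.5])
noisy  = np.array([10.0, 14.0,  7.0, 12.0])
print(relative_difference_penalty(smooth) < relative_difference_penalty(noisy))  # → True
```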
Procedia PDF Downloads 96
761 Long Non-Coding RNA-Mediated Regulation of Diabetes in Humanized Mouse
Authors: Md. M. Hossain, Regan Roat, Jenica Christopherson, Colette Free, Zhiguang Guo
Abstract:
Long noncoding RNA (lncRNA) mediated post-transcriptional gene regulation and its epigenetic landscape have been shown to be involved in many human diseases. However, the role of lncRNAs in diabetes, through governing islet β-cell function and survival, needs to be elucidated. Due to technical and ethical constraints, it is difficult to study their role in β-cell function and survival in humans under in vivo conditions. In this study, humanized mice were developed by transplanting human pancreatic islets under the kidney capsule of NOD.SCID mice, and β-cell death leading to a diabetic condition was induced to study lncRNA-mediated regulation. For this, human islets from 3 donors (3000 IEQ, purity > 80%) were transplanted under the kidney capsule of STZ-induced diabetic NOD.SCID mice. After at least 2 weeks of normoglycemia, lymphocytes from diabetic NOD mice were adoptively transferred, and islet grafts were collected once blood glucose reached > 200 mg/dl. RNA from human donor islets and from islet grafts of humanized mice with either adoptive lymphocyte transfer (ALT) or PBS control (CTL) was ribodepleted; barcoded fragment libraries were constructed and sequenced on the Ion Proton sequencer. lncRNA expression in isolated human islets and in islet grafts from humanized mice with and without induced β-cell death, and its regulation of human islet function in vitro under glucose challenge, cytokine-mediated inflammation and induced apoptotic conditions, were investigated. Out of 3155 detected lncRNAs, 299 that were highly expressed in islets were found to be significantly downregulated and 224 upregulated in ALT compared to CTL. Most of these were found to be collocated within 5 kb upstream and 1 kb downstream of 788 up- and 624 down-regulated mRNAs. Genomic Regions Enrichment of Annotations analysis revealed that the deregulated and collocated genes are related to pancreatic endocrine development; insulin synthesis, processing, and secretion; pancreatitis; and diabetes.
Many of them, found to be located within enhancer domains for islet-specific gene activity, are associated with the deregulation of known islet/β-cell specific transcription factors and genes that are important for β-cell differentiation, identity, and function. RNA sequencing analysis revealed aberrant lncRNA expression associated with the deregulated mRNAs in β-cell function as well as with molecular pathways related to diabetes. A distinct set of candidate lncRNA isoforms was identified as highly enriched in and specific to human islets, and these are deregulated in human islets from donors with different BMIs and with type 2 diabetes. These RNAs show an interesting regulation in cultured human islets under glucose stimulation and with cytokine-induced β-cell death. Aberrant expression of these lncRNAs was detected in exosomes from the media of islets cultured with cytokines. The results of this study suggest that islet-specific lncRNAs are deregulated in human islets undergoing β-cell death and are hence important in diabetes. These lncRNAs might be important for human β-cell function and survival and thus could be used as biomarkers and novel therapeutic targets for diabetes.
Keywords: β-cell, humanized mouse, pancreatic islet, lncRNAs
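The collocation criterion described above (a lncRNA within 5 kb upstream or 1 kb downstream of an mRNA) can be sketched as a simple interval test. The coordinates below are invented, and strand orientation is deliberately ignored for brevity.

```python
# A hedged sketch of the collocation criterion: a lncRNA counts as collocated
# with a gene if its interval overlaps the window [gene_start - 5 kb, gene_end + 1 kb].
# Coordinates are invented and strand handling is omitted for illustration.

def is_collocated(lnc_start, lnc_end, gene_start, gene_end,
                  upstream=5_000, downstream=1_000):
    """True if the lncRNA interval overlaps the extended gene window."""
    window_start = gene_start - upstream
    window_end = gene_end + downstream
    return lnc_end >= window_start and lnc_start <= window_end

print(is_collocated(96_500, 97_000, 100_000, 105_000))  # → True (3.5 kb upstream)
print(is_collocated(80_000, 85_000, 100_000, 105_000))  # → False (15 kb upstream)
```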
Procedia PDF Downloads 163
760 Assessing Social Sustainability for Biofuels Supply Chains: The Case of Jet Biofuel in Brazil
Authors: Z. Wang, F. Pashaei Kamali, J. A. Posada Duque, P. Osseweijer
Abstract:
Globally, the aviation sector is seeking sustainable solutions to comply with the pressure to reduce greenhouse gas emissions. Jet fuels derived from biomass are generally perceived as a sustainable alternative to their fossil counterparts. However, the establishment of jet biofuel supply chains will have impacts on the environment, the economy, and society. While existing studies have predominantly evaluated the environmental impacts and techno-economic feasibility of jet biofuels, very few have taken the social or socioeconomic aspect into consideration. Therefore, this study aims to provide a focused evaluation of social sustainability for aviation biofuels from a supply chain perspective. Three potential jet biofuel supply chains based on different feedstocks, i.e. sugarcane, eucalyptus, and macauba, were analyzed in the context of Brazil. The assessment of social sustainability was performed with a process-based approach combined with input-output analysis. Over the supply chains, a set of social sustainability issues including employment, working conditions (occupational accidents and wage level), labour rights, education, equity, social development (GDP and trade balance) and food security was evaluated in a (semi-)quantitative manner. The selection of these social issues was based on two criteria: (1) the issues are highly relevant and important to jet biofuel production; and (2) methodologies are available for assessing them. The results show that the three jet biofuel supply chains lead to differentiated levels of social effects. The sugarcane-based supply chain creates the highest number of jobs, whereas the biggest contributor to GDP turns out to be the macauba-based supply chain. In comparison, the eucalyptus-based supply chain stands out regarding working conditions.
It is also worth noting that a jet biofuel supply chain with a high level of social benefits could also entail a high level of social concerns (such as occupational accidents, violation of labour rights and trade imbalance). Further research is suggested to investigate the possible interactions between different social issues. In addition, the exploration of a wider range of social effects is needed to expand the comprehension of social sustainability for biofuel supply chains.
Keywords: biobased supply chain, jet biofuel, social assessment, social sustainability, socio-economic impacts
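The input-output step mentioned above typically solves the Leontief relation x = (I − A)⁻¹d for total sectoral output, then applies a social coefficient (e.g. jobs per unit of output) to x. The sketch below illustrates this with an invented two-sector example; the matrix A, demand vector d, and employment intensities are assumptions, not data from the study.

```python
import numpy as np

# A minimal two-sector sketch of the input-output step: total output needed to
# meet final demand is x = (I - A)^(-1) d, and a social indicator (jobs per unit
# output) is applied to x. All numbers below are invented for illustration.

A = np.array([[0.1, 0.2],    # inter-sector input coefficients (assumed)
              [0.3, 0.1]])
d = np.array([100.0, 50.0])  # final demand, e.g. jet biofuel and a co-product

x = np.linalg.solve(np.eye(2) - A, d)   # total sectoral output (Leontief)
jobs_per_unit = np.array([0.05, 0.02])  # assumed employment intensities
print(round(jobs_per_unit @ x, 1))      # total employment effect → 8.7
```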
Procedia PDF Downloads 265
759 A Versatile Data Processing Package for Ground-Based Synthetic Aperture Radar Deformation Monitoring
Authors: Zheng Wang, Zhenhong Li, Jon Mills
Abstract:
Ground-based synthetic aperture radar (GBSAR) represents a powerful remote sensing tool for deformation monitoring of various geohazards, e.g. landslides, mudflows, avalanches, infrastructure failures, and the subsidence of residential areas. Unlike spaceborne SAR with a fixed revisit period, GBSAR data can be acquired with an adjustable temporal resolution through either continuous or discontinuous operation. However, challenges arise in processing high temporal-resolution continuous GBSAR data, including the extreme cost of computational random-access memory (RAM), the delay of displacement maps, and the loss of temporal evolution. Moreover, repositioning errors between discontinuous campaigns impede the accurate measurement of surface displacements. Therefore, a versatile package with two complete chains is developed in this study to process both continuous and discontinuous GBSAR data and address the aforementioned issues. The first chain is based on a small-baseline subset concept and processes continuous GBSAR images unit by unit: images within a window form a basic unit. With this strategy, the RAM requirement is reduced to a single unit of images, and the chain can theoretically process an infinite number of images. The evolution of surface displacements can be detected because the chain keeps temporarily coherent pixels, which are present only in certain units rather than throughout the whole observation period. The chain supports real-time processing of continuous data, so the delay in creating displacement maps can be shortened without waiting for the entire dataset. The other chain aims to measure deformation between discontinuous campaigns. Temporal averaging is carried out on a stack of images from a single campaign in order to improve the signal-to-noise ratio of discontinuous data and minimise the loss of coherence.
The temporally averaged images are then processed by a dedicated interferometry procedure integrating advanced interferometric SAR algorithms such as robust coherence estimation, non-local filtering, and selection of partially coherent pixels. Experiments were conducted using both synthetic and real-world GBSAR data. Displacement time series at the level of a few sub-millimetres were achieved in several applications (e.g. a coastal cliff, a sand dune, a bridge, and a residential area), indicating the feasibility of the developed GBSAR data processing package for deformation monitoring in a wide range of scientific and practical applications.
Keywords: ground-based synthetic aperture radar, interferometry, small baseline subset algorithm, deformation monitoring
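The unit-by-unit strategy of the first chain can be sketched as a windowed generator: only one window of images is ever held in memory, so an arbitrarily long acquisition stream can be processed. The window size and the stand-in per-unit computation below are illustrative assumptions, not the package's actual interferometric processing.

```python
# A simplified sketch of the unit-by-unit strategy: a long stream of GBSAR
# acquisitions is consumed in fixed-size windows so at most `window` images
# are held in RAM at once. The per-unit mean is a stand-in for the real
# interferometric processing; window size is an assumption.

def process_in_units(image_stream, window=5):
    """Yield one result per unit without holding more than `window` images."""
    unit = []
    for image in image_stream:
        unit.append(image)
        if len(unit) == window:
            yield sum(unit) / len(unit)   # stand-in for per-unit processing
            unit = []
    if unit:                               # flush a final partial unit
        yield sum(unit) / len(unit)

# 12 acquisitions processed as units of 5, 5 and 2 images — results are
# available after each unit, without waiting for the entire dataset:
print(list(process_in_units(range(12), window=5)))  # → [2.0, 7.0, 10.5]
```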
Procedia PDF Downloads 161
758 Results of Three-Year Operation of 220kV Pilot Superconducting Fault Current Limiter in Moscow Power Grid
Authors: M. Moyzykh, I. Klichuk, L. Sabirov, D. Kolomentseva, E. Magommedov
Abstract:
Modern city electrical grids are forced to increase their density due to the increasing number of customers and requirements for reliability and resiliency. However, progress in this direction is often limited by the capabilities of existing network equipment. New energy sources or grid connections increase the level of short-circuit currents in the adjacent network, which can exceed the maximum ratings of equipment: the breaking capacity of circuit breakers and the thermal and dynamic current withstand capabilities of disconnectors, cables, and transformers. A superconducting fault current limiter (SFCL) is a modern solution designed to deal with increasing fault current levels in power grids. The key feature of this device is its near-instant (less than 2 ms) limitation of the current level due to the nature of the superconductor. In 2019, Moscow utilities installed a SuperOx SFCL in the city power grid to test the capabilities of this novel technology. It became the first SFCL in the Russian energy system and is currently the most powerful SFCL in the world. The modern SFCL uses second-generation high-temperature superconductor (2G HTS). Despite its name, HTS still requires the low temperature of liquid nitrogen for operation. As a result, the Moscow SFCL is built with a cryogenic system to cool the superconductor. The cryogenic system consists of three cryostats that contain the superconductor parts and are filled with liquid nitrogen (one per phase), three cryocoolers, one water chiller, three cryopumps, and pressure builders. All these components are controlled by an automatic control system. The SFCL has been continuously operating on the city grid for over three years. During that period of operation, numerous faults occurred, including cryocooler failure, chiller failure, pump failure, and others (such as a cryogenic system power outage).
All these faults were resolved without an SFCL shutdown, thanks to the specially designed cryogenic system backups and the quick responses of the grid operator and the SuperOx crew. The paper will describe in detail the results of SFCL operation and cryogenic system maintenance, and the measures taken to solve and prevent similar faults in the future.
Keywords: superconductivity, current limiter, SFCL, HTS, utilities, cryogenics
Procedia PDF Downloads 80
757 Perinatal Optimisation for Preterm Births Less than 34 Weeks at OLOL, Drogheda, Ireland
Authors: Stephane Maingard, Babu Paturi, Maura Daly, Finnola Armstrong
Abstract:
Background: Perinatal optimization involves the implementation of twelve intervention bundles of care at Our Lady of Lourdes Hospital, reliably delivering evidence-based interventions in the antenatal, intrapartum, and neonatal periods to improve preterm outcomes. These key interventions (e.g. antenatal steroids, antenatal counselling, optimal cord management, respiratory management) are based on recommendations from the WHO (World Health Organization), BAPM (British Association of Perinatal Medicine), and the latest 2022 European Consensus guidelines. Methodology: In February 2023, a quality improvement project team (pediatricians, neonatologists, obstetricians, clinical skills managers) was established, and a project implementation plan was developed. Plan-Do-Study-Act cycles implemented the following: 1. an antenatal consultation pathway; 2. creation and implementation of a perinatal checklist for preterm births less than 34 weeks of gestation; 3. process changes to ensure the checklist is completed; 4. completion of parent and staff surveys; 5. ongoing training. We collected and compared a range of data before and after implementation. Results: Preliminary analysis at 1 month demonstrates improvement in the following areas: a 50% increase in antenatal counselling; right place of birth increased from 85% to 100%; magnesium sulphate administration increased from 56% to 100%. No change was observed in buccal colostrum administration (28%), delayed cord clamping (75%), caffeine administration (100%), or blood glucose level at one hour of life > 2.6 mmol (85%). There was also no change noted in respiratory support at resuscitation: CPAP only (47%), IPPV with CPAP (45%), IPPV with intubation (20%), and surfactant administration (28%). A slight decrease was noted in the following: steroid administration from 80% to 75%, and thermal care achieving optimal temperature on admission (65% to 50%).
Discussion: Even though the findings are preliminary, the directional improvement shows promise. Improved communication has been achieved between all stakeholders, including our patients, who are key team members. Adherence to the bundles of care will help to improve survival and neurodevelopmental outcomes as well as reduce the length of stay, thereby reducing overall financial cost, considering that the lifetime cost of cerebral palsy is estimated at €800,000 and that reducing the length of stay can result in savings of up to €206,000. Conclusion: Preliminary results demonstrate improvements across a range of patient, process, staff, and financial outcomes. Our future goal is a seamless pathway of patient-centered care for babies and their families. This project is an interdisciplinary collaboration to implement best practices for a vulnerable patient cohort. Our two main challenges are changing our organization’s culture and ensuring the sustainability of the project.
Keywords: perinatal, optimization, antenatal, counselling, IPPV
Procedia PDF Downloads 18
756 Placement of Inflow Control Valve for Horizontal Oil Well
Authors: S. Thanabanjerdsin, F. Srisuriyachai, J. Chewaroungroj
Abstract:
Drilling a horizontal well is one of the most cost-effective methods to exploit a reservoir by increasing the exposure area between the well and the formation. Together with horizontal well technology, intelligent completion is often co-utilized to increase petroleum production by monitoring and controlling downhole production. Combining both technologies creates an opportunity to reduce the water cresting phenomenon, a detrimental problem that not only lowers oil recovery but also causes environmental problems due to water disposal. Flow of reservoir fluid results from the difference between reservoir and wellbore pressure. In a horizontal well, reservoir fluid around the heel enters the wellbore at a higher rate than at the toe. As a consequence, the Oil-Water Contact (OWC) at the heel side moves upward relatively faster than at the toe side. This causes the well to encounter an early water encroachment problem. Installation of Inflow Control Valves (ICVs) in particular sections of a horizontal well involves several parameters, such as the number of ICVs, the water cut constraint of each valve, and the length of each section. This study is mainly focused on optimization of the ICV configuration to minimize water production and, at the same time, enhance oil production. A reservoir model with a high aspect ratio of oil-bearing zone to underlying aquifer is drilled with a horizontal well and completed with variations of ICV segments. Optimization of the horizontal well configuration is first performed by varying the number of ICVs, the segment length, and the individual preset water cut for each segment. Simulation results show that installing ICVs can increase the oil recovery factor by up to 5% of the Original Oil In Place (OOIP) and can reduce produced water, depending on the ICV segment length as well as the ICV parameters. For equally partitioned ICV segments, a larger number of segments results in better oil recovery. However, exceeding 10 segments may not give significant additional recovery.
In the first production period, deformation of the OWC strongly depends on the number of segments along the well: a higher number of segments results in smoother deformation of the OWC. After water breakthrough at the heel segment, the second production period begins, in which deformation of the OWC is principally dominated by the ICV parameters. In certain situations where the OWC is unstable, such as a high production rate, high-viscosity fluid above the aquifer, or a strong aquifer, the second production period may give a wide enough window for the ICV parameters to play their role.
Keywords: horizontal well, water cresting, inflow control valve, reservoir simulation
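The heel-toe imbalance described above can be illustrated with a simple inflow relation: segment inflow is proportional to drawdown (reservoir pressure minus flowing wellbore pressure), and an ICV adds a segment-specific choke. All pressures, the productivity index, and the choke factors below are invented illustrative numbers, not simulation inputs from the study.

```python
# A hedged sketch of why the heel produces more than the toe: wellbore
# friction lowers the flowing pressure toward the heel, so drawdown (and
# hence inflow) is largest there; an ICV chokes the heel segment to even
# out the inflow profile. All numbers are invented for illustration.

def segment_inflow(p_res, p_wellbore, productivity_index, choke=1.0):
    """Segment inflow rate: q = choke * PI * (p_res - p_wellbore)."""
    return choke * productivity_index * (p_res - p_wellbore)

p_res, pi = 250.0, 2.0    # reservoir pressure (bar), PI (m3/day/bar), assumed
heel, toe = 230.0, 240.0  # flowing wellbore pressures along the lateral, assumed

print(segment_inflow(p_res, heel, pi))             # → 40.0 (heel, no ICV)
print(segment_inflow(p_res, toe, pi))              # → 20.0 (toe)
print(segment_inflow(p_res, heel, pi, choke=0.5))  # → 20.0 (heel choked to match toe)
```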
Procedia PDF Downloads 418
755 Desulphurization of Waste Tire Pyrolytic Oil (TPO) Using Photodegradation and Adsorption Techniques
Authors: Moshe Mello, Hilary Rutto, Tumisang Seodigeng
Abstract:
The nature of tires makes them extremely challenging to recycle due to the chemically cross-linked polymer they contain; they are therefore neither fusible nor soluble and, consequently, cannot be remolded into other shapes without serious degradation. Open dumping of tires pollutes the soil, contaminates underground water, and provides ideal breeding grounds for disease-carrying vermin. The thermal decomposition of tires by pyrolysis produces char, gases and oil. The composition of oils derived from waste tires has properties in common with commercial diesel fuel. The problem associated with the light oil derived from pyrolysis of waste tires is its high sulfur content (> 1.0 wt.%), because of which it emits harmful sulfur oxide (SOx) gases to the atmosphere when combusted in diesel engines. Desulphurization of TPO is necessary due to increasingly stringent environmental regulations worldwide. Hydrodesulphurization (HDS) is the commonly practiced technique for the removal of sulfur species from liquid hydrocarbons. However, the HDS technique fails in the presence of complex sulfur species such as dibenzothiophene (DBT) present in TPO. This study aims to investigate the viability of photodegradation (photocatalytic oxidative desulphurization) and adsorptive desulphurization technologies for efficient removal of complex and non-complex sulfur species from TPO. This study focuses on optimizing the cleaning (removal of impurities and asphaltenes) process by varying the process parameters: temperature, stirring speed, acid/oil ratio and time. The treated TPO will then be sent for vacuum distillation to attain the desired diesel-like fuel. The effect of temperature, pressure and time will be determined for vacuum distillation of both raw TPO and the acid-treated oil for comparison purposes.
Polycyclic sulfides present in the distilled (diesel-like) light oil will be oxidized predominantly to the corresponding sulfoxides and sulfones via a photocatalyzed system using TiO2 as a catalyst and hydrogen peroxide as an oxidizing agent; finally, acetonitrile will be used as an extraction solvent. Adsorptive desulphurization will then be used to adsorb traces of sulfurous compounds remaining after the photocatalytic desulphurization step. This combined desulphurization scheme is expected to give high desulphurization efficiency with reasonable oil recovery.
Keywords: adsorption, asphaltenes, photocatalytic oxidation, pyrolysis
Procedia PDF Downloads 272
754 The Impact of Emotional Intelligence on Organizational Performance
Authors: El Ghazi Safae, Cherkaoui Mounia
Abstract:
Within companies, emotions have long been forgotten as key elements of successful management systems, seen instead as factors which disturb judgment, provoke reckless acts or negatively affect decision-making. This is because management systems were influenced by the Taylorist image of the worker, which made work regular and plain and considered employees as executing machines. Recently, however, in a globalized economy characterized by a variety of uncertainties, emotions have proved to be useful, even necessary, elements for attaining high-level management. The work of Elton Mayo and Kurt Lewin reveals the importance of emotions, and since then emotions have attracted considerable attention. These studies have shown that emotions influence, directly or indirectly, many organizational processes: for example, the quality of interpersonal relationships, job satisfaction, absenteeism, stress, leadership, performance and team commitment. Emotions have become fundamental and indispensable to individual performance and thus to management efficiency. The idea that a person's potential is determined by intellectual intelligence, measured by IQ, as the main factor of social, professional and even sentimental success, was the main assumption that needed to be questioned. The literature on emotional intelligence has made clear that success at work does not depend only on intellectual intelligence but also on other factors. Several studies investigating the impact of emotional intelligence on performance have shown that emotionally intelligent managers perform better, attain remarkable results, are able to achieve organizational objectives, influence the mood of their subordinates, and create a friendly work environment. An improvement in the emotional intelligence of managers is therefore linked to the professional development of the organization and not only to the personal development of the manager. In this context, it would be interesting to question the importance of emotional intelligence. Does it impact organizational performance?
What is the importance of emotional intelligence, and how does it impact organizational performance? The literature highlights that the measurement and conceptualization of emotional intelligence are difficult to define. Efforts to measure emotional intelligence have identified three models that are most prominent: the ability model, which treats emotional intelligence as a cognitive skill; the mixed model, which combines emotional skills with personality-related aspects; and the trait model, which ties it to personality traits. But despite strong claims about the importance of emotional intelligence in the workplace, few studies have empirically examined its impact on organizational performance, because even though the concept of performance is at the heart of all evaluation processes of companies and organizations, performance remains a multidimensional concept, and many authors insist on the vagueness that surrounds it. Given the above, this article provides an overview of the research related to emotional intelligence, particularly focusing on studies that investigated its impact on organizational performance, in order to contribute to the emotional intelligence literature, highlight its importance, and show how it impacts companies' performance.
Keywords: emotions, performance, intelligence, firms
753 Shear Strength Envelope Characteristics of Lime-Treated Clays
Authors: Mohammad Moridzadeh, Gholamreza Mesri
Abstract:
The effectiveness of lime treatment of soils has commonly been evaluated in terms of improved workability and increased undrained unconfined compressive strength in connection with road and airfield construction. The most common method of strength measurement has been the unconfined compression test. However, if the objective of lime treatment is to improve the long-term stability of first-time or reactivated landslides in stiff clays and shales, permanent changes in the size and shape of clay particles must be realized to increase drained frictional resistance. Lime-soil interactions that may produce less platy and larger soil particles begin and continue with time under a highly alkaline pH environment. In this research, pH measurements are used to monitor the chemical environment and the progress of reactions. Atterberg limits are measured to identify changes in particle size and shape indirectly. Fully softened and residual strength measurements are also used to examine the improvement in frictional resistance due to lime-soil interactions. The main variables are soil plasticity and mineralogy, lime content, water content, and curing period. The effect of lime on frictional resistance is examined using samples of clays with different mineralogies and characteristics, which may react with lime to various extents. Drained direct shear tests on reconstituted lime-treated clay specimens with various properties were performed to measure the fully softened shear strength. To measure the residual shear strength, drained multiple-reversal direct shear tests on precut specimens were conducted. In this way, soil particles are oriented along the direction of shearing to the maximum possible extent and provide minimum frictional resistance, a condition applicable to reactivated landslides and to part of first-time landslides. The Brenna clay, the highly plastic lacustrine clay of Lake Agassiz that causes slope instability along the banks of the Red River, is one of the soil samples used in this study.
The Brenna Formation, characterized as a uniform, soft to firm, dark grey glaciolacustrine clay with little or no visible stratification, is full of slickensided surfaces. The major source of sediment for the Brenna Formation was the highly plastic montmorillonitic Pierre Shale bedrock. The other soil used in this study, one of the main sources of slope instability in the Harris County Flood Control District (HCFCD), is the Beaumont clay. The shear strengths of untreated and treated clays were obtained under various normal pressures to evaluate the nonlinearity of the shear strength envelope.
Keywords: Brenna clay, friction resistance, lime treatment, residual
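Envelope nonlinearity of the kind measured above is commonly summarized by a secant friction angle that decreases with normal stress. A minimal sketch with illustrative numbers (not the study's data), assuming a cohesionless (c' = 0) envelope:

```python
import math

def secant_friction_angle(shear_stress_kpa, normal_stress_kpa):
    """Secant friction angle (degrees) at one point of a c' = 0 strength
    envelope: phi_sec = arctan(tau / sigma'_n)."""
    return math.degrees(math.atan(shear_stress_kpa / normal_stress_kpa))

# A curved (nonlinear) envelope yields a lower secant angle at higher stress
phi_low = secant_friction_angle(30.0, 50.0)    # low normal stress
phi_high = secant_friction_angle(250.0, 600.0) # high normal stress
```

Plotting the secant angle against normal stress is one conventional way of presenting the envelope nonlinearity the abstract refers to.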
752 Numerical Analysis of Charge Exchange in an Opposed-Piston Engine
Authors: Zbigniew Czyż, Adam Majczak, Lukasz Grabowski
Abstract:
The paper presents a description of the geometric models, computational algorithms, and results of numerical analyses of charge exchange in a two-stroke opposed-piston engine. The research engine was a newly designed diesel engine. The unit is characterized by three cylinders in which three pairs of opposed pistons operate. The engine will generate a power output of 100 kW at a crankshaft rotation speed of 3800-4000 rpm. The numerical investigations were carried out using the ANSYS Fluent solver. Numerical research, in contrast to experimental research, allows us to validate project assumptions and avoid costly prototype preparation for experimental tests. This makes it possible to optimize the geometric model in countless variants with no production costs. The geometric model includes an intake manifold, a cylinder, and an outlet manifold. The study was conducted for a series of modifications of the manifolds and the intake and exhaust ports to optimize the charge exchange process in the engine. The calculations specified a swirl coefficient obtained under stationary conditions for a full opening of the intake and exhaust ports as well as a CA value of 280° for all cylinders. In addition, mass flow rates were identified separately in all of the intake and exhaust ports to achieve the best possible uniformity of flow in the individual cylinders. For the models under consideration, velocity, pressure, and streamline contours were generated in important cross sections. The developed models are designed primarily to minimize the flow drag through the intake and exhaust ports while the mass flow rate increases. To calculate the dimensionless swirl ratio, the tangential velocity v [m/s] and then the angular velocity ω [rad/s] of the charge were first calculated as the mean over the mesh elements. The paper contains comparative analyses of all the intake and exhaust manifolds of the designed engine.
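The swirl-ratio procedure described above can be read as averaging a per-element angular velocity (ω = v/r) over the charge and normalizing by the crankshaft angular velocity. The mass weighting and the crankshaft reference speed below are assumptions, since the abstract does not state the exact post-processing:

```python
import math

def swirl_ratio(tangential_velocities, radii, masses, engine_rpm):
    """Swirl ratio sketch: mass-weighted mean angular velocity of the charge,
    normalized by the crankshaft angular velocity.

    Assumed conventions (not given in the abstract): omega = v_t / r per mesh
    element, mass weighting for the average, crankshaft speed as reference."""
    omegas = [v / r for v, r in zip(tangential_velocities, radii)]
    omega_charge = sum(w * m for w, m in zip(omegas, masses)) / sum(masses)
    omega_crank = 2.0 * math.pi * engine_rpm / 60.0  # rpm -> rad/s
    return omega_charge / omega_crank

# Two illustrative mesh elements at radii 20 mm and 40 mm, at 3800 rpm
ratio = swirl_ratio([12.0, 18.0], [0.02, 0.04], [1e-6, 1e-6], 3800)
```

In a real CFD post-processing step the sums would run over every cell of the in-cylinder mesh rather than two hand-picked elements.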
Acknowledgement: This work was realized in cooperation with the Construction Office of WSK "PZL-KALISZ" S.A. and is part of Grant Agreement No. POIR.01.02.00-00-0002/15, financed by the Polish National Centre for Research and Development.
Keywords: computational fluid dynamics, engine swirl, fluid mechanics, mass flow rates, numerical analysis, opposed-piston engine
751 Quantum Mechanics as a Limiting Case of Relativistic Mechanics
Authors: Ahmad Almajid
Abstract:
The idea of unifying quantum mechanics with general relativity remains a dream for many researchers, as modern physics offers only two paths: Einstein's path, based mainly on particle mechanics, and the path of Paul Dirac and others, based on wave mechanics. The incompatibility of the two approaches is due to the radical difference in their initial assumptions and in the mathematical nature of each. Logical thinking in modern physics leads us to two problems. First, in quantum mechanics, despite its success, the measurement problem and the interpretation of the wave function remain obscure. Second, in special relativity, despite the success of the equivalence of rest mass and energy, the fact that the energy becomes infinite at the speed of light is contrary to logic, because the speed of light is not infinite, nor is the mass of the particle. These contradictions arise from the overlap of relativistic and quantum mechanics in the neighborhood of the speed of light, and in order to solve these problems, one must understand how to move from relativistic mechanics to quantum mechanics, or rather how to unify them in a way different from Dirac's method, in order to go along with God or Nature, since, as Einstein said, "God doesn't play dice." From De Broglie's hypothesis of wave-particle duality, Léon Brillouin's definition of the new proper time was deduced, and thus the quantum Lorentz factor was obtained. Finally, using the Euler-Lagrange equation, new equations of quantum mechanics were derived. In this paper, the two problems in modern physics mentioned above are addressed; this new approach to quantum mechanics may enable us to unify it with general relativity quite simply. If experiments prove the validity of these results, it may become possible in the future to transport matter at speeds close to the speed of light.
Finally, this research yielded three important results: (1) the Lorentz quantum factor; (2) Planck energy as a limiting case of Einstein energy; and (3) real quantum mechanics, in which new equations match and exceed Dirac's equations, reached in a completely different way from Dirac's method. These equations show that quantum mechanics is a limiting case of relativistic mechanics. At the Solvay Conference in 1927, the debate about quantum mechanics between Bohr, Einstein, and others reached its climax: while Bohr suggested that unobserved particles are in a probabilistic state, Einstein made his famous claim that "God does not play dice." Thus, Einstein was right, especially in not accepting the indeterminacy principle of quantum theory, although experiments support quantum mechanics. However, the results of this research indicate that God really does not play dice; when the electron disappears, it turns into amicable particles or an elastic medium, according to the equations above. Likewise, Bohr was also right when he indicated that a science like quantum mechanics is needed to monitor and study the motion of subatomic particles, but the picture before him was blurry and unclear, so he resorted to the probabilistic interpretation.
Keywords: Lorentz quantum factor, Planck's energy as a limiting case of Einstein's energy, real quantum mechanics, new equations for quantum mechanics
750 Safety of Mesenchymal Stem Cell Therapy: Potential Risk of Spontaneous Transformations
Authors: Katarzyna Drela, Miroslaw Wielgos, Mikolaj Wrobel, Barbara Lukomska
Abstract:
Mesenchymal stem cells (MSCs) have great potential in regenerative medicine. Since the initial number of isolated MSCs is limited, in vitro propagation is often required to reach sufficient cell numbers for therapeutic applications. During long-term culture, MSCs may undergo genetic or epigenetic alterations that subsequently increase the probability of spontaneous malignant transformation. Thus, the factors that influence the genomic stability of MSCs following long-term expansion need to be clarified before cultured MSCs are employed for clinical application. The aim of our study was to investigate the potential for spontaneous transformation of MSCs derived from human neonatal cord blood (HUCB-MSCs) and adult bone marrow (BM-MSCs). Materials and Methods: HUCB-MSCs and BM-MSCs were isolated by the standard Ficoll gradient centrifugation method. Isolated cells were initially plated at a high density of 10⁶ cells per cm². After 48 h, the medium was changed and non-adherent cells were removed. The malignant transformation of MSCs in vitro was evaluated by morphological changes, proliferation rate, the ability to enter cell senescence, telomerase expression, and chromosomal abnormality. Proliferation of MSCs was analyzed with the WST-1 reduction method, and the population doubling time (PDT) was calculated at different culture stages. The expression pattern of genes characteristic of mesenchymal or epithelial cells, as well as transcription factors, was then examined by RT-PCR. Concomitantly, immunocytochemical analysis of gene-related proteins was employed. Results: Our studies showed that MSCs from all bone marrow isolations ultimately entered senescence and did not undergo spontaneous malignant transformation. However, HUCB-MSCs from one of the 15 donors displayed an increased proliferation rate, failed to enter senescence, and exhibited an altered cell morphology.
In this sample we observed two different cell phenotypes: a mesenchymal-like one exhibiting spindle-shaped morphology, expressing specific mesenchymal surface markers (CD73, CD90, CD105, CD166), and showing a low proliferation rate; and a second one of round, densely packed epithelial-like cells with a significantly increased proliferation rate. The PDT of the epithelial-like population was around 1 day, and 100% of the cells were positive for the proliferation marker Ki-67. Moreover, these HUCB-MSCs showed positive expression of human telomerase reverse transcriptase (hTERT) and c-MYC and exhibited an increased number of CFUs during long-term culture in vitro. Furthermore, karyotype analysis revealed chromosomal abnormalities, including duplications. Conclusions: Our studies demonstrate that HUCB-MSCs are susceptible to spontaneous malignant transformation during long-term culture. Spontaneous malignant transformation following in vitro culture has an enormous effect on the biosafety of future cell-based therapies and regenerative medicine regimens.
Keywords: mesenchymal stem cells, spontaneous transformation, long-term culture
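The population doubling time reported above is conventionally computed from the fold-expansion over a culture interval; the formula below is the standard one, with illustrative cell counts rather than the study's data:

```python
import math

def population_doubling_time(hours_in_culture, n_final, n_initial):
    """Population doubling time: PDT = t * ln(2) / ln(Nf / Ni).
    Standard formula; the study's exact counting protocol is not given."""
    return hours_in_culture * math.log(2) / math.log(n_final / n_initial)

# Illustrative: a culture that expands 16-fold in 96 h doubles every 24 h
pdt_h = population_doubling_time(96.0, 1.6e6, 1.0e5)
```

A PDT of roughly 24 h, as computed here, corresponds to the "around 1 day" doubling reported for the epithelial-like population.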
749 The Revival of Asakusa Entertainment Streets and Social Conflicts Since Its Inceptive Point, the Post-war Time
Authors: Seung Oh, Satoshi Okada
Abstract:
Today, religious organizations that have long existed alongside local people are being challenged by social changes in the districts they influence. The influence of religious organizations is declining everywhere, as locals seeking diversity and economic benefits become more interested in development projects that attract investors and increase market value than in conservation. Religions whose moral and philosophical stance rejects materialism have a limited capacity to act as agents of local development in modern society. In Tokyo, however, the city's oldest temple, Senso-Ji, played a vital role in the local development of Asakusa as an entertainment district while nevertheless retaining the area's traditional character, despite the almost complete destruction caused by the Tokyo air raids. The temple acted vigorously as a mediator between the community and the Tokyo Metropolitan Government and as a spokesman for common interests. This research therefore examines the social conflicts that Senso-Ji has confronted with regard to the pressures of development in Asakusa on the one hand, and the legitimacy of perpetuating its traditional religious and cultural role in local society on the other. First, this article examines Senso-Ji's place in society based on its position in the history of Japanese Buddhism, which existed to offer spiritual and practical help to ordinary people, and investigates its social legitimacy as a local stakeholder and historical institution. Second, it considers the impact of the social changes that Asakusa underwent during the Meiji and Taisho eras, examining the social conflicts and changes in the Asakusa entertainment district and taking the Tokyo air raids as the inceptive point (IP). Third, it reconsiders how Senso-Ji responded to today's growth-oriented local developments, as proposed by Tokyo's metropolitan planning authorities along lines commonly seen in all cities.
Studying the role of Senso-Ji in the development of Asakusa serves as a case study justifying the involvement of religious institutions in local issues, and as a practical example of progressive development that nevertheless permitted the conservation of traditional features under pressure from social groups, in a way that may be useful to other places facing similar problems.
Keywords: architecture, urban design, urban planning, preservation, conservation, social science
748 Performance Evaluation of On-Site Sewage Treatment System (Johkasou)
Authors: Aashutosh Garg, Ankur Rajpal, A. A. Kazmi
Abstract:
The efficiency of an on-site wastewater treatment system named Johkasou was evaluated on the basis of its pollutant removal efficiency over 10 months. The system was installed at IIT Roorkee and had a capacity of treating 7 m³/d of sewage, sufficient for a group of 30-50 people. It was fed actual wastewater through an equalization tank to eliminate fluctuations over the day. Methanol and ammonium chloride were added to this equalization tank to increase the chemical oxygen demand (COD) and ammonia content of the influent. The outlet from the Johkasou is sent to a tertiary unit consisting of a pressure sand filter and an activated carbon filter for further treatment. Samples were collected on alternate days from Monday to Friday, and the following parameters were evaluated: COD, biochemical oxygen demand (BOD), total suspended solids (TSS), and total nitrogen (TN). The average removal efficiencies for COD, BOD, TSS, and TN were 89.6%, 97.7%, 96%, and 80%, respectively. The cost of treating the wastewater comes to Rs 23/m³, which includes electricity, cleaning and maintenance, chemicals, and desludging. Tests for coliforms were also performed, and the removal efficiency for total and fecal coliforms was 100%. The sludge generation rate is approximately 20% of the BOD removed, and the sludge needed to be removed twice a year. The system also responded very well to hydraulic shock loads. We performed a vacation stress analysis to evaluate the performance of the system when there is no influent for 8 consecutive days; from the results, we concluded that the system needs a recovery time of about 48 hours to stabilize.
After about 2 days, the system returned to its original condition, and all parameters in the effluent fell within the limits of the National Green Tribunal (NGT) standards. We also performed another stress analysis to save electricity, in which the main aeration blower was turned off for 2 to 12 hours a day; the results showed that the blower can be turned off for about 4-6 hours a day, reducing electricity costs by about 25%. It was concluded that the Johkasou system can remove a sufficient amount of all the physicochemical parameters tested to satisfy the prescribed limits set by the Indian Standard.
Keywords: on-site treatment, domestic wastewater, Johkasou, nutrient removal, pathogens removal
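The removal efficiencies quoted above follow the usual definition, the relative drop from influent to effluent concentration. A minimal sketch with illustrative BOD values (not the study's raw data):

```python
def removal_efficiency(c_in, c_out):
    """Percent removal across a treatment train: 100 * (Cin - Cout) / Cin.
    Concentrations in any consistent unit (e.g. mg/L)."""
    return 100.0 * (c_in - c_out) / c_in

# Illustrative influent/effluent BOD (mg/L); 300 -> 6.9 gives 97.7 % removal
eta_bod = removal_efficiency(300.0, 6.9)
```

The same function applies unchanged to COD, TSS, and TN, which is how a single sampling campaign yields the four efficiency figures reported.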
747 Linguistic World Order in the 21st Century: Need of Alternative Linguistics
Authors: Shailendra Kumar Singh
Abstract:
In the 21st century, we are living through extraordinary times: each sociolinguistic example of living appears refreshingly new, without precedent in the past. The term 'new linguistic world order' is no longer an intangible fascination but an indication of an emerging reality in which 'linguistic purism' no longer invokes a sense of self-categorization and self-identification. The contemporary world is linguistically rewarding, and the very notions of the global, the powerful, and the local need to be revisited in the context of shifts in power, demography, social psychology, and technology. Hence, the old linguistic world view has to be challenged in the midst of the 21st century. The first years of the century have thus far been marked by the rise of the global economy, a technological revolution, and demographic shifts; we are now witnessing a linguistic shift that is leading towards a new linguistic world order. On the other hand, with the rising powers of China and India in Asia, the notion of an alternative West is set to become far more interesting linguistically. This comes at a point when the world is moving towards inclusive globalization, as the power corridor of the West recedes and the geopolitical impact of an emerging superpower, and of a superpower in waiting, ascends. It is now a reality that the Western world will no longer continue to rise unchallenged; it will face more pressure to act as the alternative West seeks balanced globalization. It is more than likely that the demographically strong languages of the alternative West will be at an advantage. The paper challenges our preconceptions about the sociolinguistic nature of the world in the 21st century.
It investigates what the linguistic world is likely to be in the future, in contrast to what it was before the 21st century. In particular, the paper tries to answer the following questions: (a) What will be the common linguistic thread across the world? (b) How can unprecedented transformations be mapped linguistically? (c) Do we need an alternative linguistics to define inclusive globalization, given that the linguistic reality of the contemporary world has already been reshaped by an increasingly integrated world economy, the linguistic revolution, and the alternative West? (d) In which ways can these issues be addressed holistically? (e) Why is the linguistic world order changing dramatically? (f) Is it true that the linguistic world around us is changing faster than we can really cope with? (g) Is it true that what is coming next is linguistically greater than ever? (h) Do we need to prepare ourselves with new theoretical strategies to address the emerging sociolinguistic reality?
Keywords: alternative linguistics, new linguistic world order, power shift, demographic shift, social psychological shift, technological shift
746 Aerosol Characterization in a Coastal Urban Area in Rimini, Italy
Authors: Dimitri Bacco, Arianna Trentini, Fabiana Scotto, Flavio Rovere, Daniele Foscoli, Cinzia Para, Paolo Veronesi, Silvia Sandrini, Claudia Zigola, Michela Comandini, Marilena Montalti, Marco Zamagni, Vanes Poluzzi
Abstract:
The Po Valley, in the north of Italy, is one of the most polluted areas in Europe. The air quality of the area is linked not only to anthropic activities but also to its geographical characteristics and stagnant weather conditions, with frequent inversions, especially in the cold season. Even the coastal areas present high values of particulate matter (PM10 and PM2.5), because the area enclosed between the Adriatic Sea and the Apennines does not favor the dispersion of air pollutants. The aim of the present work was to identify the main sources of particulate matter in Rimini, a tourist city in northern Italy. Two sampling campaigns were carried out in 2018, one in winter (60 days) and one in summer (30 days), at 4 sites: an urban background, a city hotspot, a suburban background, and a rural background. The samples were characterized by the ionic composition of the particulate and by the main anhydrosugars, in particular levoglucosan, a marker of biomass burning, because one of the most important anthropogenic sources in the area, both in winter and, surprisingly, in summer, is biomass burning. Furthermore, three sampling points were chosen in order to maximize the contribution of a specific biomass source: one in a residential area (domestic cooking and domestic heating), one in an agricultural area (weed fires), and one in the tourist area (restaurant cooking). At these sites, the analyses were enriched with the quantification of the carbonaceous component (organic and elemental carbon) and with measurements of the particle number concentration and aerosol size distribution (6-600 nm). The results showed a very significant impact of biomass combustion due to domestic heating in the winter period, along with many intense peaks attributable to episodic wood fires.
In the summer season, an appreciable signal linked to biomass combustion was also measured, although much less intense than in winter, attributable to domestic cooking activities. A further interesting result was the total absence of a sea salt contribution in the finer particulate (PM2.5), while in PM10 the contribution became appreciable only under particular wind conditions (strong wind from the north or north-east). Finally, it is interesting to note that in a small town like Rimini, the summer traffic source appears even more relevant than that measured in a much larger city (Bologna), due to tourism.
Keywords: aerosol, biomass burning, seacoast, urban area
745 The Relationship between Risk and Capital: Evidence from Indian Commercial Banks
Authors: Seba Mohanty, Jitendra Mahakud
Abstract:
The capital ratio is one of the major indicators of the stability of commercial banks. Given its pervasive importance, regulators and policy makers have focused over the years on the maintenance of a particular level of capital ratio to minimize solvency and liquidation risk. In this context, it is very important to identify the relationship between capital and risk and to find out the factors that determine the capital ratios of commercial banks. The study examines the relationship between capital and risk for the commercial banks operating in India. Other bank-specific variables such as bank size, deposits, profitability, non-performing assets, bank liquidity, net interest margin, loan loss reserves, deposit variability, and regulatory pressure are also considered in the analysis. The study period is 1997-2015, i.e., the post-liberalization period. To identify the impact of the financial crisis and the implementation of Basel II on the capital ratio, we have divided the whole period into two sub-periods, 1997-2008 and 2008-2015. The study considers all three types of commercial banks, i.e., public sector, private sector, and foreign banks, that have continuous data for the whole period. The main sources of data are the Prowess database maintained by the Centre for Monitoring Indian Economy (CMIE) and Reserve Bank of India publications. We use a simultaneous equation model, and more specifically the two-stage least squares (2SLS) method, to find out the relationship between capital and risk. From the econometric analysis, we find that capital and risk affect each other simultaneously, and this is consistent across the time periods and across the types of banks. Moreover, regulation has a positive significant impact on the ratio of capital to risk-weighted assets, but no significant impact on the banks' risk-taking behaviour.
Our empirical findings also suggest that size has a negative impact on capital and risk, indicating that larger banks increase their capital less than other banks, consistent with the too-big-to-fail hypothesis. This study contributes to the existing body of literature by establishing a strong relationship between capital and risk in an emerging economy, where the banking sector plays a major role in financial development. Further, this study may be considered a primary study for identifying the macroeconomic factors affecting risk and capital in India.
Keywords: capital, commercial bank, risk, simultaneous equation model
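The two-stage least squares estimation named above works by first projecting the endogenous regressor on the instruments, then running OLS with the fitted values in place of the endogenous variable. A generic textbook sketch in NumPy; the study's actual specification, bank-specific controls, and instruments are not reproduced here:

```python
import numpy as np

def two_stage_least_squares(y, X_exog, x_endog, Z):
    """2SLS for y = [x_endog, X_exog] @ beta + e, with instruments Z
    for the endogenous regressor x_endog.

    y       : (n,) dependent variable
    X_exog  : (n, k) exogenous regressors (include a constant column)
    x_endog : (n,) endogenous regressor
    Z       : (n, m) instruments
    Returns beta = [coef on x_endog, coefs on X_exog]."""
    # Stage 1: project the endogenous regressor on instruments + exogenous vars
    W = np.column_stack([Z, X_exog])
    gamma, *_ = np.linalg.lstsq(W, x_endog, rcond=None)
    x_hat = W @ gamma
    # Stage 2: OLS of y on the fitted endogenous regressor and exogenous vars
    X2 = np.column_stack([x_hat, X_exog])
    beta, *_ = np.linalg.lstsq(X2, y, rcond=None)
    return beta
```

In the study's setting, the capital equation and the risk equation would each be estimated this way, with the other endogenous variable instrumented.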
744 Verification of Low-Dose Diagnostic X-Ray as a Tool for Relating Vital Internal Organ Structures to External Body Armour Coverage
Authors: Natalie A. Sterk, Bernard van Vuuren, Petrie Marais, Bongani Mthombeni
Abstract:
Injuries to the internal structures of the thorax and abdomen remain a leading cause of death among soldiers. Body armour is a standard-issue piece of military equipment designed to protect the vital organs against ballistic and stab threats. When configured for maximum protection, the excessive weight and size of the armour may limit soldier mobility and increase physical fatigue and discomfort. Providing soldiers with more armour than necessary may therefore hinder their ability to react rapidly in life-threatening situations. The capability to determine the optimal trade-off between the amount of essential anatomical coverage and the hindrance to soldier performance may significantly enhance the design of armour systems. The current study aimed to develop and pilot a methodology for relating internal anatomical structures to actual armour plate coverage in real time using low-dose diagnostic X-ray scanning. Several pilot scanning sessions were held at the Lodox Systems (Pty) Ltd head office in South Africa. Testing involved using the Lodox eXero-dr to scan dummy trunk rigs at various angles and heights of measurement, as well as human participants wearing correctly fitted body armour while positioned in supine, prone shooting, seated, and kneeling shooting postures. The sizing and metrics obtained from the Lodox eXero-dr were then verified against a board with known dimensions. Results indicated that the low-dose diagnostic X-ray can clearly identify the vital internal structures of the aortic arch, heart, and lungs in relation to the position of the external armour plates. Further testing is still required in order to fully and accurately identify the inferior liver boundary, inferior vena cava, and spleen. The scans produced in the supine, prone, and seated postures provided superior image quality over the kneeling posture.
The distances from the X-ray source and detector to the object must be standardised to control for possible magnification changes and to allow comparison. To account for this, specific scanning heights and angles were identified to allow parallel scanning of the relevant areas. The low-dose diagnostic X-ray provides a non-invasive, safe, and rapid technique for relating vital internal structures to external structures. This capability can be used for the re-evaluation of the anatomical coverage required for essential protection while optimising armour design and fit for soldier performance.
Keywords: body armour, low-dose diagnostic X-ray, scanning, vital organ coverage
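The magnification effect that motivates standardising the source and detector distances follows the usual projection-radiography geometry, M = SID/SOD (source-to-image distance over source-to-object distance). A minimal sketch with illustrative distances (the Lodox system's actual geometry is not given in the abstract):

```python
def magnification(source_to_image_mm, source_to_object_mm):
    """Geometric magnification in projection radiography: M = SID / SOD.
    An object at SOD projects onto the detector at SID enlarged by M."""
    return source_to_image_mm / source_to_object_mm

# Illustrative: detector at 1000 mm, object plane at 800 mm -> 25 % enlargement
m = magnification(1000.0, 800.0)
```

Keeping both distances fixed across scans keeps M constant, which is why standardised heights and angles make measurements comparable between sessions.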
743 Kinetic Energy Recovery System Using Spring
Authors: Mayuresh Thombre, Prajyot Borkar, Mangirish Bhobe
Abstract:
New advancements in technology and the never-satisfied demands of civilization are putting huge pressure on natural fuel resources, and these resources are under constant threat to their sustainability. To get the best out of an automobile, the optimum balance between performance and fuel economy is important. In the present state of the art, only one of these two aspects is considered during the design and development process, which puts the other at a loss, as an increase in fuel economy leads to a decrement in performance and vice versa. In-depth observation of vehicle dynamics shows that a large amount of energy is lost during braking, and likewise a large amount of fuel is consumed to reclaim the initial state; this leads to lower fuel efficiency for the same performance. Current use of kinetic energy recovery systems is limited to sports vehicles because of their high cost. They are also temporary in nature, as power can be extracted only during a small time duration, and the use of superior parts leads to high cost, which results in a concentration on performance only, neglecting fuel economy. In this paper, a kinetic energy recovery system for storing power and then using it while accelerating is discussed. The major storage element in this system is a flat spiral spring that stores energy by compression and torsion. The use of a spring ensures permanent storage of the energy until it is used by the driver, unlike present mechanical regeneration systems, in which the stored energy decreases with time and is eventually lost. A combination of internal gears and spur gears is used in order to make the energy release uniform, which leads to safe usage. The system can be used to improve fuel efficiency by assisting in overcoming the vehicle's inertia after braking or by providing instant acceleration whenever required by the driver.
The performance characteristics of the system, including response time, mechanical efficiency, and the overall increase in efficiency, are demonstrated. This technology makes the KERS (kinetic energy recovery system) more flexible and economical, allowing application-specific use while at the same time increasing the time frame and ease of usage.
Keywords: electric control unit, energy, mechanical KERS, planetary gear system, power, smart braking, spiral spring
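For a linear torsion spring, the recoverable energy scales with the square of the wind-up angle, E = ½kθ². The stiffness and wind-up values below are illustrative, since the paper does not give the spring's specification, and real spiral springs deviate from linearity near full wind-up:

```python
import math

def torsion_spring_energy(stiffness_nm_per_rad, wind_up_rad):
    """Energy (J) stored in a torsion spring: E = 0.5 * k * theta^2.
    Linear stiffness assumed over the whole wind-up range."""
    return 0.5 * stiffness_nm_per_rad * wind_up_rad ** 2

# Illustrative: a 5 N*m/rad spiral spring wound through 4 full turns
energy_j = torsion_spring_energy(5.0, 4.0 * 2.0 * math.pi)
```

The quadratic dependence on θ is one reason a gear train for uniform release matters: torque, and hence assist force, would otherwise fall off steeply as the spring unwinds.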
742 Association between Noise Levels, Particulate Matter Concentrations and Traffic Intensities in a Near-Highway Urban Area
Authors: Mohammad Javad Afroughi, Vahid Hosseini, Jason S. Olfert
Abstract:
Both traffic-generated particles and noise have been associated with the development of cardiovascular diseases, especially in near-highway environments. Although noise and particulate matter (PM) have different mechanisms of dispersion, sharing the same emission source in urban areas (road traffic) can result in a similar degree of variability in their levels. This study investigated the temporal variation of, and correlation between, noise levels, PM concentrations, and traffic intensities near a major highway in Tehran, Iran. Tehran's particulate concentration is highly influenced by road traffic, and its ultrafine particles (UFP, PM < 0.1 µm) are mostly emitted from the combustion processes of motor vehicles. This gives a high probability of a strong association between traffic-related noise and UFP in near-highway environments of this megacity. Hourly averages of the equivalent continuous sound pressure level (Leq), the total number concentration of UFPs, the mass concentrations of PM2.5 and PM10, and the traffic count and speed were simultaneously measured over a period of three days in winter. Additionally, meteorological data including temperature, relative humidity, wind speed, and wind direction were collected at a weather station located 3 km from the monitoring site. Noise levels showed relatively low temporal variability in the near-highway environment compared to PM concentrations. The hourly average Leq ranged from 63.8 to 69.9 dB(A) (mean ~ 68 dB(A)), while hourly concentrations ranged from 30,800 to 108,800 cm⁻³ for UFP (mean ~ 64,500 cm⁻³), 41 to 75 µg/m³ for PM2.5 (mean ~ 53 µg/m³), and 62 to 112 µg/m³ for PM10 (mean ~ 88 µg/m³). The Pearson correlation coefficient revealed a strong overall relationship between noise and UFP (r ~ 0.61). Under downwind conditions, UFP number concentration showed the strongest association with noise level (r ~ 0.63).
The correlation was considerably weaker under upwind conditions (r ~ 0.24) due to the significant role of wind and humidity in UFP dynamics. Furthermore, PM2.5 and PM10 correlated moderately with noise (r ~ 0.52 and 0.44, respectively). In general, traffic counts were more strongly associated with noise and PM than traffic speeds. It was concluded that noise levels combined with meteorological data can be used as a proxy to estimate PM concentrations (specifically UFP number concentration) in near-highway environments of Tehran. However, it is important to measure the joint variability of noise and particles when studying their health effects in epidemiological studies. Keywords: noise, particulate matter, PM10, PM2.5, ultrafine particle
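The Pearson analysis above can be reproduced from paired hourly series. A minimal sketch in Python, where the hourly values are illustrative placeholders (only their ranges match the abstract, not the measured data):

```python
from math import sqrt

def pearson_r(x, y):
    # Pearson correlation coefficient between two equal-length series
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Illustrative hourly values: Leq in dB(A), UFP number concentration in cm^-3
leq = [63.8, 65.1, 66.9, 68.0, 69.9, 68.4]
ufp = [30800, 41000, 55000, 70500, 108800, 82000]
print(round(pearson_r(leq, ufp), 2))
```

With real data, the same function applied separately to downwind and upwind hours would yield the stratified coefficients (r ~ 0.63 vs. r ~ 0.24) reported above.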
Procedia PDF Downloads 192741 Predicting the Exposure Level of Airborne Contaminants in Occupational Settings via the Well-Mixed Room Model
Authors: Alireza Fallahfard, Ludwig Vinches, Stephane Halle
Abstract:
In the workplace, the exposure level of airborne contaminants should be evaluated due to health and safety issues. This can be done by numerical models or experimental measurements, but the numerical approach is useful when experiments are challenging to perform. One of the simplest models is the well-mixed room (WMR) model, which has proven useful for predicting inhalation exposure in many situations. However, since the WMR model is limited to gases and vapors, it cannot be used to predict exposure to aerosols. The main objective is to modify the WMR model to expand its application to exposure scenarios involving aerosols. To reach this objective, the standard WMR model has been modified to consider the deposition of particles by gravitational settling and by Brownian and turbulent deposition. Three deposition models were implemented in the model. The time-dependent concentrations of airborne particles predicted by the model were compared to experimental results obtained in a 0.512 m3 chamber. Polystyrene particles of 1, 2, and 3 µm in aerodynamic diameter were generated with a nebulizer under two air change rates (ACH). The well-mixed condition and chamber ACH were determined by the tracer gas decay method. The mean friction velocity on the chamber surfaces, one of the input variables for the deposition models, was determined by computational fluid dynamics (CFD) simulation. For the experimental procedure, the particles were generated until reaching the steady-state condition (emission period). Then generation was stopped, and concentration measurements continued until reaching the background concentration (decay period). The results of the tracer gas decay tests revealed that the ACHs of the chamber were 1.4 and 3.0 and that the well-mixed condition was achieved. The CFD results showed that the mean friction velocities and their standard deviations for the lowest and highest ACH were (8.87 ± 0.36) ×10-2 m/s and (8.88 ± 0.38) ×10-2 m/s, respectively. 
The numerical results indicated that the deposition rates predicted by the three deposition models differed by less than 2%. The experimental and numerical aerosol concentrations were compared in the emission and decay periods. In both periods, the prediction accuracy of the modified model improved in comparison with the classic WMR model. However, a difference remains between the measured and predicted values. In the emission period, the modified WMR results closely follow the experimental data, but the model significantly overestimates the experimental results during the decay period. This finding is mainly due to an underestimation of the deposition rate in the model and to uncertainties related to the measurement devices and the particle size distribution. Comparing the experimental and numerical deposition rates revealed that the actual particle deposition rate is significant, but the rate accounted for by the deposition mechanisms considered in the model was about ten times lower than the experimental value. Thus, particle deposition is significant, affects airborne concentrations in occupational settings, and should be considered in airborne exposure prediction models. The role of other removal mechanisms should be investigated. Keywords: aerosol, CFD, exposure assessment, occupational settings, well-mixed room model, zonal model
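The modified mass balance described above adds a first-order deposition loss to the standard WMR model. A minimal sketch, assuming a single lumped deposition loss rate β (in 1/h) that stands in for the three deposition mechanisms; all numerical values are illustrative, not the study's:

```python
import math

def wmr_concentration(t, G, Q, V, beta, C0=0.0):
    """Analytic solution of the modified well-mixed room balance
        V * dC/dt = G - (Q + beta * V) * C
    t: time (h), G: emission rate (particles/h), Q: ventilation flow (m^3/h),
    V: room volume (m^3), beta: lumped deposition loss rate (1/h)."""
    k = Q / V + beta            # total first-order loss rate (1/h)
    c_ss = G / (Q + beta * V)   # steady-state concentration (particles/m^3)
    return c_ss + (C0 - c_ss) * math.exp(-k * t)

# Illustrative case: 0.512 m^3 chamber at ACH = 3.0, so Q = 3.0 * V
V, ach, beta, G = 0.512, 3.0, 0.5, 1e9
Q = ach * V
c_ss = G / (Q + beta * V)
c_emission = wmr_concentration(2.0, G, Q, V, beta)           # emission period
c_decay = wmr_concentration(1.0, 0.0, Q, V, beta, C0=c_ss)   # decay period, G = 0
```

Setting beta = 0 recovers the classic WMR model; with beta > 0 the loss rate is larger, which illustrates why neglecting deposition leads the classic model to overestimate concentrations during the decay period.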
Procedia PDF Downloads 103740 Thermo-Hydro-Mechanical-Chemical Coupling in Enhanced Geothermal Systems: Challenges and Opportunities
Authors: Esmael Makarian, Ayub Elyasi, Fatemeh Saberi, Olusegun Stanley Tomomewo
Abstract:
Geothermal reservoirs (GTRs) have garnered global recognition as a sustainable energy source. Thermo-Hydro-Mechanical-Chemical (THMC) coupling proves to be a practical and effective method for optimizing production in GTRs. The study outcomes demonstrate that THMC coupling serves as a versatile and valuable tool, offering in-depth insights into GTRs and enhancing their operational efficiency. This is achieved through the analysis of temperature and pressure changes and their impacts on mechanical properties, structural integrity, fracture aperture, permeability, and heat extraction efficiency. Moreover, THMC coupling facilitates the assessment of the potential benefits and risks associated with different geothermal technologies, considering the complex thermal, hydraulic, mechanical, and chemical interactions within the reservoirs. However, utilizing THMC coupling in GTRs presents a multitude of challenges. These include accurately modeling and predicting behavior due to the interconnected nature of the processes, limited data availability leading to uncertainties, the risks that induced seismic events pose to nearby communities, scaling and mineral deposition reducing operational efficiency, and the long-term sustainability of the reservoirs. In addition, material degradation, environmental impacts, technical challenges in monitoring and control, accurate assessment of resource potential, and regulatory and social acceptance further complicate geothermal projects. Addressing these multifaceted challenges is crucial for the successful and sustainable utilization of geothermal energy resources. This paper aims to illuminate the challenges and opportunities associated with THMC coupling in enhanced geothermal systems. Practical solutions and strategies for mitigating these challenges are discussed, emphasizing the need for interdisciplinary approaches, improved data collection and modeling techniques, and advanced monitoring and control systems. 
Overcoming these challenges is imperative for unlocking the full potential of geothermal energy and making a substantial contribution to the global energy transition and sustainable development. Keywords: geothermal reservoirs, THMC coupling, interdisciplinary approaches, challenges and opportunities, sustainable utilization
Procedia PDF Downloads 69739 Biomimetic Systems to Reveal the Action Mode of Epigallocatechin-3-Gallate in Lipid Membrane
Authors: F. Pires, V. Geraldo, O. N. Oliveira Jr., M. Raposo
Abstract:
Catechins are powerful antioxidants with attractive properties useful for tumor therapy. Owing to their antioxidant activity, these molecules can act as scavengers of reactive oxygen species (ROS), alleviating the damage to the cell membrane induced by oxidative stress. The complexity and dynamic nature of the cell membrane complicate the analysis of the biophysical interactions between a drug and the cell membrane and restrict the transport or uptake of the drug by intracellular targets. To avoid the complexity of the cell membrane, we used biomimetic systems such as liposomes and Langmuir monolayers to study the interaction between catechin and membranes at the molecular level. Liposomes were formed by dispersing anionic 1,2-dipalmitoyl-sn-glycero-3-[phospho-rac-(1-glycerol)] (sodium salt) (DPPG) phospholipids in an aqueous solution; they mimic the arrangement of lipids in natural cell membranes and allow the entrapment of catechins. Langmuir monolayers were formed by dropping amphiphilic molecules (DPPG phospholipids) dissolved in an organic solvent onto the water surface. In this work, we mixed epigallocatechin-3-gallate (EGCG) with DPPG liposomes and exposed them to ultraviolet radiation in order to evaluate the antioxidant potential of these molecules against radiation-induced oxidative stress. The presence of EGCG in the mixture decreased the rate of lipid peroxidation, showing that EGCG protects membranes through the quenching of reactive oxygen species. Considering the large number of hydroxyl (OH) groups in the structure of EGCG, a possible mechanism by which these molecules interact with the membrane is hydrogen bonding. We also investigated the effect of EGCG at various concentrations on DPPG Langmuir monolayers. The surface pressure isotherms and polarization-modulated infrared reflection-absorption spectroscopy (PM-IRRAS) results corroborate the absorbance results obtained on the liposome model, showing that EGCG interacts with the polar heads of the monolayers. 
This study elucidates the physiological action of EGCG, which can be incorporated into lipid membranes. These results are also relevant for improving the current protocols used to incorporate catechins into drug delivery systems. Keywords: catechins, lipid membrane, anticancer agent, molecular interactions
Procedia PDF Downloads 233738 Right Atrial Tissue Morphology in Acquired Heart Diseases
Authors: Edite Kulmane, Mara Pilmane, Romans Lacis
Abstract:
Introduction: Acquired heart diseases remain one of the leading health care problems in the world. Changes in the myocardium of diseased hearts are complex, and their pathogenesis is still not fully clear. The aim of this study was to identify the appearance and distribution of apoptosis, homeostasis-regulating factors, and innervation and ischemia markers in right atrial tissue in different acquired heart diseases. Methods: Right atrial tissue fragments were taken from 12 patients during elective open heart surgery. All patients were operated on because of acquired heart diseases: aortic valve stenosis (5 patients), coronary heart disease (5 patients), coronary heart disease with secondary mitral insufficiency (1 patient) and mitral disease (1 patient). The mean age was (mean ± SD) 70.2 ± 7.0 years (range 58-83 years). The tissues were stained with haematoxylin and eosin for routine light-microscopic examination and for immunohistochemical detection of protein gene product 9.5 (PGP 9.5), human atrial natriuretic peptide (hANUP), vascular endothelial growth factor (VEGF), chromogranin A and endothelin. Apoptosis was detected by the TUNEL method. Results: All specimens showed degeneration of cardiomyocytes with lysis of myofibrils, diffuse vacuolization (especially in the perinuclear region), and variation in the size of the cells and their nuclei. Severe invasion of connective tissue was observed in the main part of all fragments. The apoptotic index ranged from 24 to 91%. One specimen showed a region of newly formed microvessels with cube-shaped endotheliocytes that were positive for PGP 9.5, endothelin, chromogranin A and VEGF. Numerous PGP 9.5-containing nerve fibres were observed in all fragments taken from patients with coronary heart disease, except in the patient with secondary mitral insufficiency, who showed just a few PGP 9.5-positive nerves. In the majority of specimens, regions with cube-shaped VEGF-immunoreactive endocardial and epicardial cells were observed. 
VEGF-positive endothelial cells were observed in only a few specimens. There was no significant difference in hANUP-secreting cells among the specimens. In all patients operated on for coronary heart disease, moderate to numerous chromogranin A-positive cells were seen, while tissue from patients with aortic valve stenosis demonstrated just a few factor-positive cells. Conclusions: Complex detection of different factors may selectively indicate disordered morphopathogenetic events of heart disease: the decrease of PGP 9.5-positive nerves suggests decreased innervation of the organ; increased apoptosis indicates cell death without ingrowth of connective tissue; the persistent presence of hANUP proves the unchanged homeostasis of cardiomyocytes, probably supported by the expression of chromogranins. Finally, the decrease of VEGF marks the regions of affected blood vessels in hearts affected by acquired heart disease. Keywords: heart, apoptosis, protein gene product 9.5, atrial natriuretic peptide, vascular endothelial growth factor, chromogranin A, endothelin
Procedia PDF Downloads 295737 The Potential Role of Some Nutrients and Drugs in Providing Protection from Neurotoxicity Induced by Aluminium in Rats
Authors: Azza A. Ali, Abeer I. Abd El-Fattah, Shaimaa S. Hussein, Hanan A. Abd El-Samea, Karema Abu-Elfotuh
Abstract:
Background: Aluminium (Al) represents an environmental risk factor. Exposure to high levels of Al causes neurotoxic effects and different diseases. Vinpocetine is widely used to improve cognitive functions; it possesses memory-protective and memory-enhancing properties and has the ability to increase cerebral blood flow and glucose uptake. Cocoa bean is a rich source of iron as well as a potent antioxidant. It can protect from the impact of free radicals, reduces stress as well as depression, and promotes better memory and concentration. Wheatgrass is primarily used as a concentrated source of nutrients. It contains vitamins, minerals, carbohydrates and amino acids, and possesses antioxidant and anti-inflammatory activities. Coenzyme Q10 (CoQ10) is an intracellular antioxidant and mitochondrial membrane stabilizer. It is effective in improving cognitive disorders and has been used as an anti-aging agent. Zinc is a structural element of many proteins and a signaling messenger released by neural activity at many central excitatory synapses. Objective: To study the role of some nutrients and drugs, namely Vinpocetine, Cocoa, Wheatgrass, CoQ10 and Zinc, against neurotoxicity induced by Al in rats, as well as to compare their potency in providing protection. Methods: Seven groups of rats were used. The Al-toxicity model groups received AlCl3 (70 mg/kg, IP) daily for three weeks, while the control group received saline. All Al-toxicity model groups except one (non-treated) were orally co-administered, together with AlCl3, one of the following treatments: Vinpocetine (20 mg/kg), Cocoa powder (24 mg/kg), Wheatgrass (100 mg/kg), CoQ10 (200 mg/kg) or Zinc (32 mg/kg). Biochemical changes in the rat brain, including acetylcholinesterase (ACHE), Aβ, brain-derived neurotrophic factor (BDNF), inflammatory mediators (TNF-α, IL-1β) and oxidative parameters (MDA, SOD, TAC), were estimated for all groups, besides histopathological examinations of different brain regions. 
Results: Neurotoxicity and neurodegeneration in the rat brain after three weeks of Al exposure were indicated by the significant increase in Aβ, ACHE, MDA, TNF-α, IL-1β and DNA fragmentation, together with the significant decrease in SOD, TAC and BDNF, and were confirmed by the histopathological changes in the brain. On the other hand, co-administration of each of Vinpocetine, Cocoa, Wheatgrass, CoQ10 or Zinc together with AlCl3 provided protection against the hazards of Al-induced neurotoxicity and neurodegeneration; their protection was indicated by the decrease in Aβ, ACHE, MDA, TNF-α, IL-1β and DNA fragmentation, together with the increase in SOD, TAC and BDNF, and was confirmed by the histopathological examinations of different brain regions. Vinpocetine and Cocoa showed the most pronounced protection, while Zinc provided the least protective effect of the nutrients and drugs used. Conclusion: Different degrees of protection from Al-induced neurotoxicity and neuronal degeneration can be achieved through the co-administration of some nutrients and drugs during exposure. Vinpocetine and Cocoa provided more protection than Wheatgrass, CoQ10 or Zinc, with Zinc showing the least protective effect. Keywords: aluminum, neurotoxicity, vinpocetine, cocoa, wheat grass, coenzyme Q10, zinc, rats
Procedia PDF Downloads 249736 Contrastive Analysis of Parameters Registered in Training Rowers and the Impact on the Olympic Performance
Authors: Gheorghe Braniste
Abstract:
The management of the training process in sports is closely related to awareness of the close connection between performance and the morphological, functional and psychological characteristics of the athlete's body. Achieving high results in Olympic sports is influenced, on the one hand, by the genetically determined characteristics of the body and, on the other hand, by the morphological, functional and motor abilities of the athlete. Taking into account the importance of properly understanding the evolutionary specificity of athletes in order to assess their competitive potential, this study provides a comparative analysis of the parameters that characterize the growth and development of the level of adaptation of sweep rowers over the age interval between 12 and 20 years. The study established that, over the multi-annual training process, the bodies of the athletes concerned register significant adaptive changes in morphological, functional, psychomotor and sports-technical parameters. As a result of the influence of physical efforts, both specific and non-specific, the adaptability of the body increases, transferring it to a much higher level of functionality, with useful and economical adaptive reactions influenced by internal and external environmental factors. The research was carried out over 7 years on a group of 28 athletes, following their evolution and recording the specific parameters of each age stage. The level of physical, morpho-functional and psychomotor development and of technical training of the rowers was determined from screening data collected at the State University of Physical Education and Sports of the Republic of Moldova. 
During the research, measurements were made of standing and sitting height, arm span, weight, and chest circumference and perimeter; of the vital capacity of the lungs, with subsequent determination of the vital index; of the tolerance to oxygen deficiency in venous blood (Stange and Genchi breath-holding tests, which characterize the level of oxygen saturation); of the absolute and relative strength of the hand and back; of body mass and morphological maturity indices (Quetelet index) and body surface area; of psychomotor parameters (Romberg test, 10 s tapping test, reaction to a moving object, and visual and auditory motor reactions); and of the technical parameters of rowing over a competitive distance of 200 m. At the end of the study, it was found that high performance in sports is associated, on the one hand, with the genetically determined characteristics of the body and, on the other hand, with favorable adaptive reactions and energy saving, as well as with morphofunctional changes influenced by internal and external environmental factors. The importance of the results obtained was positively reflected in attaining the maximum level of training of the athletes in order to achieve performance in large-scale competitions, above all the Olympic Games. Keywords: olympics, parameters, performance, peak
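The index calculations listed above can be sketched in Python. The abstract does not give the formulas, so the ones below are standard assumptions: the Quetelet index as mass over height squared, the vital index as vital capacity per kilogram of body mass, and the Du Bois formula for body surface area; the example values are hypothetical, not study data:

```python
def quetelet_index(mass_kg, height_m):
    # Quetelet (body mass) index: mass / height^2, in kg/m^2
    return mass_kg / height_m ** 2

def vital_index(vital_capacity_ml, mass_kg):
    # Vital index: lung vital capacity per kilogram of body mass, in mL/kg
    return vital_capacity_ml / mass_kg

def body_surface_area(mass_kg, height_cm):
    # Du Bois estimate of body surface area, in m^2 (assumed formula;
    # the abstract does not specify which BSA formula was used)
    return 0.007184 * mass_kg ** 0.425 * height_cm ** 0.725

# Illustrative values for a junior rower (hypothetical)
print(round(quetelet_index(72.0, 1.84), 1))      # kg/m^2
print(round(vital_index(5200.0, 72.0), 1))       # mL/kg
print(round(body_surface_area(72.0, 184.0), 2))  # m^2
```

Tracking these indices per age stage is one way the longitudinal comparison described above could be organized.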
Procedia PDF Downloads 123735 Associated Factors of Hypercholesterolemia, Hyperuricemia and Double Burden of Hypercuricémia-Hypercholesterolemia in Gout Patients: Hospital Based Study
Authors: Pierre Mintom, Armel Assiene Agamou, Leslie Toukem, William Dakam, Christine Fernande Nyangono Biyegue
Abstract:
Context: Hyperuricemia, the presence of high levels of uric acid in the blood, is a known precursor to the development of gout. Recent studies have suggested a strong association between hyperuricemia and disorders of lipoprotein metabolism, specifically hypercholesterolemia. Understanding the factors associated with these conditions in gout patients is essential for effective treatment and management. Research Aim: The objective of this study was to determine the prevalence of hyperuricemia, hypercholesterolemia, and the double burden of hyperuricemia-hypercholesterolemia in the gouty population, and to identify the factors associated with these conditions. Methodology: The study utilized a database from a survey of 150 gouty patients recruited at the Laquintinie Hospital in Douala between August 2017 and February 2018. The database contained information on anthropometric parameters, biochemical markers, and the foods and drugs consumed by the patients. Hyperuricemia and hypercholesterolemia were defined based on specific serum uric acid and total cholesterol thresholds, and the double burden was defined as the co-occurrence of hyperuricemia and hypercholesterolemia. Findings: The study found that the prevalence rates of hyperuricemia, hypercholesterolemia, and the double burden were 61.3%, 76%, and 50.7%, respectively. Factors associated with these conditions included hypertriglyceridemia, the TC/HDL atherogenicity index, the LDL/HDL atherogenicity index, family history, and the consumption of specific foods and drinks. Theoretical Importance: The study highlights the strong association between hyperuricemia and dyslipidemia, providing important insights for guiding treatment strategies in gout patients. Additionally, it emphasizes the significance of nutritional education in managing these metabolic disorders, suggesting the need to address eating habits in gout patients. 
Data Collection and Analysis Procedures: Data were collected through surveys and the medical records of gouty patients. Information on anthropometric parameters, biochemical markers, and dietary habits was recorded. Prevalence rates and associated factors were determined through statistical analysis, employing odds ratios to assess the risks. Question Addressed: The study aimed to determine the prevalence rates and associated factors of hyperuricemia, hypercholesterolemia, and the double burden in gouty patients. It sought to understand the relationships between these conditions and their implications for treatment and nutritional education. Conclusion: The findings show that an association exists between hyperuricemia and hypercholesterolemia in gout patients, thus creating a double burden. They underscore the importance of considering family history and eating habits in addressing the double burden of hyperuricemia-hypercholesterolemia. This study provides valuable insights for guiding treatment approaches and emphasizes the need for nutritional education in gout patients. The study specifically focused on the sick population; a case-control study between gouty and non-gouty populations would be interesting to better compare and explain the results observed. Keywords: gout, hyperuricemia, hypercholesterolemia, double burden
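The reported prevalences correspond, on a sample of 150 patients, to roughly 92, 114 and 76 affected individuals. A minimal sketch of the prevalence and odds ratio computations used in analyses of this kind; the 2x2 table counts for the odds ratio are hypothetical placeholders, since the abstract does not report them:

```python
def prevalence_pct(cases, n):
    # Prevalence as a percentage of the sample
    return 100.0 * cases / n

def odds_ratio(a, b, c, d):
    # 2x2 table: a = exposed with condition, b = exposed without,
    # c = unexposed with condition, d = unexposed without
    return (a * d) / (b * c)

n = 150
print(round(prevalence_pct(92, n), 1))   # hyperuricemia, ~61.3%
print(round(prevalence_pct(114, n), 1))  # hypercholesterolemia, 76.0%
print(round(prevalence_pct(76, n), 1))   # double burden, ~50.7%

# Hypothetical counts: 40 of 60 patients with a family history have the
# double burden, versus 36 of 90 patients without one
print(round(odds_ratio(40, 20, 36, 54), 2))
```

An odds ratio above 1 would flag family history as a factor associated with the double burden, as reported in the findings above.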
Procedia PDF Downloads 61