Search results for: large pipes
1861 Influence of Ammonia Emissions on Aerosol Formation in Northern and Central Europe
Authors: A. Aulinger, A. M. Backes, J. Bieser, V. Matthias, M. Quante
Abstract:
High concentrations of particles pose a threat to human health. Thus, legal maximum concentrations of PM10 and PM2.5 in ambient air have been steadily decreased over the years. In central Europe, the inorganic species ammonium sulphate and ammonium nitrate make up a large fraction of fine particles. Many studies investigate the influence of emission reductions of sulfur and nitrogen oxides on aerosol concentration. Here, we focus on the influence of ammonia (NH3) emissions. While emissions of sulfur and nitrogen oxides are quite well known, ammonia emissions are subject to high uncertainty. This is due to the uncertainty in the location, amount, and timing of fertilizer application in agriculture, and in the storage and treatment of manure from animal husbandry. For this study, we implemented a crop growth model into the SMOKE emission model. Depending on temperature, local legislation, and crop type, individual temporal profiles for fertilizer and manure application are calculated for each model grid cell. Additionally, the diffusion from soils and plants and the direct release from open and closed barns are determined. The emission data were used as input for the Community Multiscale Air Quality (CMAQ) model. Comparisons to observations from the EMEP measurement network indicate that the new ammonia emission module leads to better agreement between model and observation (for both ammonia and ammonium). Finally, the ammonia emission model was used to create emission scenarios, including emissions based on future European legislation as well as a dynamic evaluation of the influence of different agricultural sectors on particle formation. It was found that a reduction of ammonia emissions by 50% led to a 24% reduction of total PM2.5 concentrations during wintertime in the model domain. The observed reduction was mainly driven by reduced formation of ammonium nitrate.
Moreover, emission reductions during winter had a larger impact than during the rest of the year.
Keywords: ammonia, ammonia abatement strategies, ctm, seasonal impact, secondary aerosol formation
Procedia PDF Downloads 351
1860 Radar Fault Diagnosis Strategy Based on Deep Learning
Authors: Bin Feng, Zhulin Zong
Abstract:
Radar systems are critical to modern military, aviation, and maritime operations, and their proper functioning is essential for the success of these operations. However, due to the complexity and sensitivity of radar systems, they are susceptible to various faults that can significantly affect their performance. Traditional radar fault diagnosis strategies rely on expert knowledge and rule-based approaches, which are often limited in effectiveness and require considerable time and resources. Deep learning has recently emerged as a promising approach for fault diagnosis due to its ability to automatically learn features and patterns from large amounts of data. In this paper, we propose a radar fault diagnosis strategy based on deep learning that can accurately identify and classify faults in radar systems. Our approach uses convolutional neural networks (CNNs) to extract features from radar signals and to classify faults from those features. The proposed strategy is trained and validated on a dataset of measured radar signals with various types of faults. The results show that it achieves high accuracy in fault diagnosis. To further evaluate the effectiveness of the proposed strategy, we compare it with traditional rule-based approaches and other machine learning-based methods, including decision trees, support vector machines (SVMs), and random forests. The results demonstrate that our deep learning-based approach outperforms the traditional approaches in terms of accuracy and efficiency. Finally, we discuss the potential applications and limitations of the proposed strategy, as well as future research directions. Our study highlights the importance and potential of deep learning for radar fault diagnosis and suggests that it can be a valuable tool for improving the performance and reliability of radar systems.
In summary, this paper presents a radar fault diagnosis strategy based on deep learning that achieves high accuracy and efficiency in identifying and classifying faults in radar systems. The proposed strategy has significant potential for practical applications and can pave the way for further research.
Keywords: radar system, fault diagnosis, deep learning, radar fault
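The pipeline the abstract describes, convolutional feature extraction from a 1-D signal followed by classification, can be sketched minimally in NumPy. Everything below is illustrative: the signals, kernels, and the saturation fault are invented, and this toy stands in for a trained CNN rather than reproducing the authors' network.

```python
import numpy as np

def conv1d_relu(signal, kernel):
    """One 'valid' 1-D convolution followed by ReLU, the basic CNN building block."""
    n = len(signal) - len(kernel) + 1
    out = np.array([np.dot(signal[i:i + len(kernel)], kernel) for i in range(n)])
    return np.maximum(out, 0.0)

def extract_features(signal, kernels):
    """One conv layer + global max pooling -> a fixed-length feature vector."""
    return np.array([conv1d_relu(signal, k).max() for k in kernels])

# Toy radar-like signals: a clean sine versus a saturated ("faulty") version.
t = np.linspace(0.0, 1.0, 200)
clean = np.sin(2 * np.pi * 10 * t)
faulty = np.clip(clean, -0.3, 0.3)            # simulated saturation fault

# Hand-picked slope/curvature kernels; a real CNN would learn these from data.
kernels = [np.array([1.0, -1.0]), np.array([1.0, -2.0, 1.0])]
f_clean = extract_features(clean, kernels)
f_faulty = extract_features(faulty, kernels)  # curvature feature spikes at the clip points
```

A classifier (the abstract compares CNNs with decision trees, SVMs, and random forests) would then be trained on such feature vectors.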
Procedia PDF Downloads 90
1859 Strong Ground Motion Characteristics Revealed by Accelerograms in Ms8.0 Wenchuan Earthquake
Authors: Jie Su, Zhenghua Zhou, Yushi Wang, Yongyi Li
Abstract:
The ground motion characteristics revealed by the analysis of acceleration records underlie the formulation and revision of seismic design codes for structural engineering. The China Digital Strong Motion Network recorded a large number of main-shock accelerograms from 478 permanent seismic stations during the Ms8.0 Wenchuan earthquake on 12 May 2008. These accelerograms provided a large body of essential data for the analysis of the ground motion characteristics of the event. The spatial distribution characteristics, rupture directivity effect, and hanging-wall and footwall effects were studied based on these acceleration records. The results showed that the contours of horizontal peak ground acceleration and peak velocity were approximately parallel to the seismogenic fault, demonstrating that the distribution of ground motion intensity was clearly controlled by the spatial extension direction of the seismogenic fault. Compared with the peak ground acceleration (PGA) recorded at sites the rupture front propagated away from, the PGA recorded at sites the rupture front propagated toward had larger amplitudes and shorter durations, indicating a significant rupture directivity effect. At similar fault distances, the PGA on the hanging wall is apparently greater than that on the footwall, while the peak velocity does not follow this pattern. Taking into account the seismic intensity distribution of the Wenchuan Ms8.0 earthquake, the shape of the strong ground motion contours was significantly affected by the directivity effect in regions with Chinese seismic intensity levels VI to VIII.
However, in regions where the Chinese seismic intensity level was VIII or greater, the mutual positional relationship between the strong ground motion contours and the surface outcrop trace of the fault was evidently influenced by the hanging-wall and footwall effect.
Keywords: hanging-wall and foot-wall effect, peak ground acceleration, rupture directivity effect, strong ground motion
Procedia PDF Downloads 350
1858 Metallography of Remelted A356 Aluminium Following Squeeze Casting
Authors: Azad Hussain, Andrew Cobley
Abstract:
The demand for lightweight parts with high mechanical strength and integrity, in sectors such as aerospace and automotive, is ever increasing, motivated by the need for weight reduction in order to increase fuel efficiency; such components are usually manufactured from a high-grade primary metal or alloy. For components manufactured using the squeeze casting process, this alloy is usually A356 aluminium (Al). It is one of the most versatile Al alloys and is used extensively in castings for demanding environments. A356 castings provide a good strength-to-weight ratio, making the alloy an attractive option for components where strength has to be maintained, with the added advantage of weight reduction. In addition, its castability, weldability, and corrosion resistance allow the A356 cast alloy to be used in a large array of industrial applications. Conversely, it is rare to use remelted Al in these cases, due to the nature of the applications of components in demanding environments, where material properties must be defined to meet certain specifications, for example a known strength or ductility. However, the use of remelted Al, especially primary-grade Al such as A356, would offer significant cost and energy savings for manufacturers using primary alloys, provided that remelted aluminium can offer similar benefits in terms of material microstructure and mechanical properties. This study presents the results of the material microstructure and properties of 100% primary A356 Al and 100% remelted A356 Al castings, manufactured via the direct squeeze cast method. The microstructures of the castings made from remelted A356 Al were then compared with the microstructures of primary A356 Al. The effect of using remelted Al on the microstructure was examined via different analytical techniques: optical microscopy of polished and etched surfaces, and scanning electron microscopy.
Microstructural analysis of the 100% remelted Al, when compared with primary Al, shows similar α-Al phase, primary Al dendrites, particles, and eutectic constituents. Mechanical testing of cast samples will provide further information on the suitability of using 100% remelted Al for casting.
Keywords: A356, microstructure, remelt, squeeze casting
Procedia PDF Downloads 208
1857 Safe and Scalable Framework for Participation of Nodes in Smart Grid Networks in a P2P Exchange of Short-Term Products
Authors: Maciej Jedrzejczyk, Karolina Marzantowicz
Abstract:
The traditional utility value chain has been transformed over the last few years into unbundled markets. Increased distributed generation of energy is one of the considerable challenges faced by Smart Grid networks. New sources of energy introduce a volatile demand response, which has a considerable impact on traditional middlemen in the E&U market. The purpose of this research is to search for ways to allow near-real-time electricity markets to transact with surplus energy based on accurate, time-synchronous measurements. The proposed framework evaluates the use of secure peer-to-peer (P2P) communication and distributed transaction ledgers to provide a flat hierarchy and allow real-time insight into present and forecasted grid operations, as well as the state and health of the network. The objective is to achieve dynamic grid operations with more efficient resource usage, higher security of supply, and a longer grid infrastructure life cycle. The methods used for this study are based on a comparative analysis of different distributed ledger technologies in terms of scalability, transaction performance, pluggability with external data sources, data transparency, privacy, end-to-end security, and adaptability to various market topologies. The intended output of this research is the design of a framework for a safer, more efficient, and scalable Smart Grid network which bridges the gap between traditional components of the energy network and individual energy producers. The results of this study are ready for detailed measurement testing, a likely follow-up in separate studies. New platforms for Smart Grids achieving measurable efficiencies will allow for the development of new types of grid KPIs, multi-smart-grid branches, markets, and businesses.
Keywords: autonomous agents, distributed computing, distributed ledger technologies, large-scale systems, micro grids, peer-to-peer networks, self-organization, self-stabilization, smart grids
Procedia PDF Downloads 300
1856 Hidden Hot Spots: Identifying and Understanding the Spatial Distribution of Crime
Authors: Lauren C. Porter, Andrew Curtis, Eric Jefferis, Susanne Mitchell
Abstract:
A wealth of research has been generated examining the variation in crime across neighborhoods. However, there is also a striking degree of crime concentration within neighborhoods. A number of studies show that a small percentage of street segments, intersections, or addresses account for a large portion of crime. Not surprisingly, a focus on these crime hot spots can be an effective strategy for reducing community-level crime and related ills, such as health problems. However, research is also limited in an important respect. Studies tend to use official data to identify hot spots, such as 911 calls or calls for service. While call data may be more representative of the actual level and distribution of crime than some other official measures (e.g., arrest data), call data still suffer from the 'dark figure of crime.' That is, there is most certainly a degree of error between crimes that occur and crimes that are reported to the police. In this study, we present an alternative method of identifying crime hot spots that does not rely on official data. In doing so, we highlight the potential utility of neighborhood insiders for identifying and understanding crime dynamics within geographic spaces. Specifically, we use spatial video and geo-narratives to record the crime insights of 36 police officers, ex-offenders, and residents of a high-crime neighborhood in northeast Ohio. Spatial mentions of crime are mapped to identify participant-identified hot spots, and these are juxtaposed with calls for service (CFS) data. While there are bound to be differences between these two sources of data, we find that one location in particular, a corner store, emerges as a hot spot for all three groups of participants. Yet it does not emerge when we examine CFS data.
A closer examination of the space around this corner store and a qualitative analysis of narrative data reveal important clues as to why this store may indeed be a hot spot yet not generate disproportionate calls to the police. In short, our results suggest that researchers who rely solely on official data to study crime hot spots may risk missing some of the most dangerous places.
Keywords: crime, narrative, video, neighborhood
Procedia PDF Downloads 238
1855 Computational Fluid Dynamics Simulations of Air Pollutant Dispersion: Validation of the Fire Dynamics Simulator Against the CUTE Experiments of the COST ES1006 Action
Authors: Virginie Hergault, Siham Chebbah, Bertrand Frere
Abstract:
Following in-house objectives, the Central Laboratory of the Paris Police Prefecture (LCPP) conducted a general review of the models and Computational Fluid Dynamics (CFD) codes used to simulate pollutant dispersion in the atmosphere. Starting from that review and considering the main features of Large Eddy Simulation, the LCPP postulated that the Fire Dynamics Simulator (FDS) model, from the National Institute of Standards and Technology (NIST), should be well suited for air pollutant dispersion modeling. This paper focuses on the implementation and evaluation of FDS in the frame of the European COST ES1006 Action, which aimed at quantifying the performance of modeling approaches. In this paper, the CUTE dataset, collected in the city of Hamburg, and its mock-up have been used. We have compared FDS results with wind tunnel measurements from the CUTE trials on the one hand, and with the results of the models involved in the COST Action on the other. The most time-consuming part of creating input data for simulations is the transfer of obstacle geometry information to the format required by FDS. Thus, we have developed Python code to automatically convert building and topographic data to the FDS input file. In order to evaluate the predictions of FDS against observations, statistical performance measures have been used. These metrics include the fractional bias (FB), the normalized mean square error (NMSE), and the fraction of predictions within a factor of two of observations (FAC2). Like the CFD models tested in the COST Action, FDS results demonstrate good agreement with measured concentrations. Furthermore, the metrics assessment indicates that FB and NMSE fall within acceptable tolerances.
Keywords: numerical simulations, atmospheric dispersion, COST ES1006 Action, CFD model, CUTE experiments, wind tunnel data, numerical results
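The three metrics named above have standard closed forms in the dispersion-model evaluation literature, e.g. FB = (mean_obs - mean_pred) / (0.5 * (mean_obs + mean_pred)). A minimal sketch with made-up observation/prediction values (the acceptance thresholds in the final comment are common rules of thumb, not figures taken from this paper):

```python
import numpy as np

def fractional_bias(obs, pred):
    """FB = (mean_obs - mean_pred) / (0.5 * (mean_obs + mean_pred)); 0 is perfect."""
    return (obs.mean() - pred.mean()) / (0.5 * (obs.mean() + pred.mean()))

def nmse(obs, pred):
    """NMSE = mean((obs - pred)^2) / (mean_obs * mean_pred); 0 is perfect."""
    return np.mean((obs - pred) ** 2) / (obs.mean() * pred.mean())

def fac2(obs, pred):
    """Fraction of predictions within a factor of two of the observations."""
    ratio = pred / obs
    return float(np.mean((ratio >= 0.5) & (ratio <= 2.0)))

# Made-up paired concentrations at four receptors.
obs = np.array([1.0, 2.0, 4.0, 8.0])
pred = np.array([1.2, 1.8, 3.0, 9.0])

fb, err, frac = fractional_bias(obs, pred), nmse(obs, pred), fac2(obs, pred)
# Rule-of-thumb acceptance criteria often quoted: |FB| < 0.3, NMSE < 1.5, FAC2 > 0.5.
```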
Procedia PDF Downloads 133
1854 Community Development and Empowerment
Authors: Shahin Marjan Nanaje
Abstract:
The present century presents social workers with complicated issues in their areas of work. The focus on bringing change to the lives of those who live on the margins or in poverty has led us to forget to look at ourselves and to change the way we address issues. There appears to be a new area of need that social workers should respond to. To address the issues and needs of a community, both individually and as a group, we need a new method of dialogue as a tool to reach collaboration. Social workers, as a link between community, organizations, and government, play multiple roles. They need new communication abilities to convey the narratives of the community to organizations and government, and vice versa; this concerns not only language but changing the nature of the dialogue itself. Migration to big cities by job seekers in search of survival creates its own issues and difficulties, and therefore new needs. Collaboration is required not only between the government and non-government sectors but also, in a new way, among government, non-government organizations, and communities. To reach this collaboration, we need healthy, productive, and meaningful dialogue, and in this new collaboration there should be no hierarchy between members. The methodology selected by the researcher focused on observation in the first place and a questionnaire in the second. The research lasted three months and included home visits, group discussions, and communal narratives, which provided enough evidence to understand the real needs of the community. The randomly selected sample included 70 immigrant families who work as sweepers in a slum community in Bangalore, Karnataka.
The results reveal that there is a gap between what a community is and what organizations, government, and members of society outside this community think about it. Consequently, it is learnt that to supply any service or bring any change to a slum community, we need new skills of dialogue and mutual understanding before providing any services. To bring change to the lives of marginalized groups at large, we also need collaboration, as their challenges are collective and need to be addressed by different groups together. The outcome of the research helped the researcher to identify the need for a new method of dialogue and collaboration, as well as a framework for both, which were the main focus of the paper. The researcher drew on observation experience from ten NGOs and their activities to create the framework for dialogue and collaboration.
Keywords: collaboration, dialogue, community development, empowerment
Procedia PDF Downloads 588
1853 Development of an Experimental Model of Diabetes Co-Existing with Metabolic Syndrome in Rats
Authors: Rajesh Kumar Suman, Ipseeta Ray Mohanty, Manjusha K. Borde, Ujjawala Maheswari, Y. A. Deshmukh
Abstract:
Background: Metabolic syndrome encompasses a cluster of risk factors for cardiovascular disease, including abdominal obesity, dyslipidemia, hypertension, and hyperglycemia. The incidence of metabolic syndrome is on the rise globally. Objective: The present study was designed to develop a unique animal model that mimics the pathological features seen in the large pool of individuals with diabetes and metabolic syndrome, suitable for pharmacological screening of drugs beneficial in this condition. Material and Methods: A combination of high fat diet (HFD) and a low dose of streptozotocin (STZ) at 30, 35, and 40 mg/kg was used to induce metabolic syndrome co-existing with diabetes mellitus in Wistar rats. Results: The 40 mg/kg STZ dose produced sustained hyperglycemia and was thus selected to induce diabetes mellitus in our study. Rats in the HFD-fed diabetic (HFD-DC) group showed a significant (p < 0.001) increase in body weight at the 4th and 7th weeks compared with normal control (NC) group rats. However, the increase in body weight of the HFD-DC group rats was not sustained at the end of the 10th week. Various components of metabolic syndrome, such as dyslipidemia (increased triglycerides, total cholesterol, and LDL cholesterol, and decreased HDL cholesterol), diabetes mellitus (blood glucose, HbA1c, serum insulin, C-peptide), and hypertension (systolic blood pressure, p < 0.001), were mimicked in the developed model of metabolic syndrome co-existing with diabetes mellitus. In addition, significant cardiac injury was indicated by CPK-MB levels, the atherogenic index, and hs-CRP. A decline in hepatic function (p < 0.01 increase in SGPT levels, U/L) and renal function (p < 0.01 increase in creatinine levels) was observed compared to NC group rats. Histopathological assessment confirmed the presence of edema, necrosis, and inflammation in the heart, pancreas, liver, and kidney of the HFD-DC group as compared to NC.
Conclusion: The present study has developed a unique rodent model of metabolic syndrome, with diabetes as an essential component.
Keywords: diabetes, metabolic syndrome, high fat diet, streptozotocin, rats
Procedia PDF Downloads 348
1852 Metagenomic Identification of Cave Microorganisms in Lascaux and Other Périgord Caves
Authors: Lise Alonso, Audrey Dubost, Patricia Luis, Thomas Pommier, Yvan Moënne-Loccoz
Abstract:
The Lascaux Cave in southwestern France is an archeological landmark renowned for its Paleolithic paintings, dating back c. 18,000 years. Extensive tourist frequentation and repeated chemical treatments have resulted in the development of microbial stains on the cave walls, which is a major issue in terms of art conservation. It is therefore of prime importance to better understand the microbiology specific to the Lascaux Cave in comparison to regional situations. To this end, we compared the microbial community (i.e. both prokaryotic and eukaryotic microbial populations) of Lascaux Cave with three other anthropized Périgord caves as well as three pristine caves from the same area. We used state-of-the-art metagenomic analyses of cave wall samples to obtain a global view of the composition of the microbial community colonizing cave walls. We measured the relative abundance and diversity of four DNA markers targeting different fractions of the ribosomal genes of bacteria, archaea, fungi, and other micro-eukaryotes. All groups were highly abundant and diverse in all Périgord caves, as several hundred genera of microorganisms were identified in each. However, Lascaux Cave displayed a specific microbial community, which differed from those of both the pristine and the anthropized caves. Comparison of stained versus non-stained samples from the Passage area of the Lascaux Cave indicated that a few taxa (e.g. the Sordariomycetes amongst fungi) were more prevalent within than outside stains; yet the main difference was in the relative proportions of the different microbial taxonomic groups and genera, which supports the biological origin of the stains. Overall, metagenomic sequencing of cave wall samples was effective in evidencing the extensive colonization of caves by a diverse range of microorganisms.
It also showed that Lascaux Cave represents a very particular situation in comparison with neighboring caves, probably related to the extent of disturbance it has undergone. Our results provide key baseline information to guide conservation efforts in anthropized caves such as Lascaux and pave the way to modern monitoring of ornamented caves.
Keywords: cave conservation, Lascaux cave, microbes, paleolithic paintings
Procedia PDF Downloads 244
1851 Approaches to Estimating the Radiation and Socio-Economic Consequences of the Fukushima Daiichi Nuclear Power Plant Accident Using the Data Available in the Public Domain
Authors: Dmitry Aron
Abstract:
Major radiation accidents carry potential risks of negative consequences for public health, not only because of exposure but also because of the large-scale emergency measures taken by authorities to protect the population, which can lead to unreasonable social and economic damage. As a rule, it is technically difficult to assess the possible costs and damages of decisions on the evacuation or resettlement of residents in the shortest possible time, since this requires specially prepared information systems containing relevant demographic and economic parameters and incoming data on the radiation situation. Foreign observers also face difficulties in assessing the consequences of an accident on foreign territory, since they usually do not have official and detailed statistical data on the territory of a foreign state beforehand. They may also assume that using unofficial data from open Internet sources is an unreliable and overly labor-intensive procedure. This paper describes an approach to the prompt creation of a relational database containing detailed, actual data on the economics, demographics, and radiation situation in Fukushima Prefecture during the Fukushima Daiichi NPP accident, collected by the author from open Internet sources. This database was developed and used to assess the number of evacuated residents, radiation doses, expected financial losses, and other parameters of the affected areas. The costs for the areas with temporarily evacuated and long-term resettled populations were investigated, and the radiological and economic effectiveness of the measures taken to protect the population was estimated. Some of the results are presented in the article.
The study showed that such a tool for analyzing the consequences of radiation accidents can be prepared in a short space of time for the entire territory of Japan, and that it can serve for modeling the social and economic consequences of hypothetical accidents at any nuclear power plant on that territory.
Keywords: Fukushima, radiation accident, emergency measures, database
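As an illustration of the kind of relational structure such a tool needs, linking municipalities, population, and radiation measurements so that doses and affected populations can be queried together, here is a minimal sketch in SQLite. The schema, place names, dose figures, and the 5 µSv/h trigger are all hypothetical, not taken from the author's actual database.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE municipality (
    id INTEGER PRIMARY KEY,
    name TEXT,
    population INTEGER
);
CREATE TABLE dose_rate (
    municipality_id INTEGER REFERENCES municipality(id),
    measured_on TEXT,          -- ISO date of the measurement
    microsv_per_h REAL         -- ambient dose rate
);
""")
con.executemany("INSERT INTO municipality VALUES (?, ?, ?)",
                [(1, "Town A", 20000), (2, "Town B", 8000)])
con.executemany("INSERT INTO dose_rate VALUES (?, ?, ?)",
                [(1, "2011-03-20", 12.0), (2, "2011-03-20", 0.8)])

# Population living where the dose rate exceeds a (hypothetical) evacuation trigger.
affected = con.execute("""
    SELECT SUM(m.population)
    FROM municipality m
    JOIN dose_rate d ON d.municipality_id = m.id
    WHERE d.microsv_per_h > 5.0
""").fetchone()[0]
```

With economic tables added (per-capita resettlement costs, regional output), the same join-and-aggregate pattern yields the cost estimates the abstract describes.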
Procedia PDF Downloads 191
1850 Generalized Correlation Coefficient in Genome-Wide Association Analysis of Cognitive Ability in Twins
Authors: Afsaneh Mohammadnejad, Marianne Nygaard, Jan Baumbach, Shuxia Li, Weilong Li, Jesper Lund, Jacob v. B. Hjelmborg, Lene Christensen, Qihua Tan
Abstract:
Cognitive impairment in the elderly is a key issue affecting quality of life. Despite a strong genetic background to cognition, only a limited number of single nucleotide polymorphisms (SNPs) have been found. These explain a small proportion of the genetic component of cognitive function, thus leaving a large proportion unaccounted for. We hypothesize that one reason for this missing heritability is misspecified modeling in data analysis concerning the phenotype distribution as well as the relationship between SNP dosage and the phenotype of interest. In an attempt to overcome these issues, we introduced a model-free method based on the generalized correlation coefficient (GCC) in a genome-wide association study (GWAS) of cognitive function in twin samples and compared its performance with two popular linear regression models. The GCC-based GWAS identified two genome-wide significant (p-value < 5e-8) SNPs: rs2904650 near ZDHHC2 on chromosome 8 and rs111256489 near CD6 on chromosome 11. The kinship model also detected two genome-wide significant SNPs, rs112169253 on chromosome 4 and rs17417920 on chromosome 7, whereas no genome-wide significant SNPs were found by the linear mixed model (LME). Compared to the linear models, more meaningful biological pathways, such as GABA receptor activation, ion channel transport, neuroactive ligand-receptor interaction, and the renin-angiotensin system, were found to be enriched by SNPs from the GCC. The GCC model outperformed the linear regression models by identifying more genome-wide significant genetic variants and more meaningful biological pathways related to cognitive function. Moreover, the GCC-based GWAS was robust in handling genetically related twin samples, which is an important feature for handling genetic confounding in association studies.
Keywords: cognition, generalized correlation coefficient, GWAS, twins
Procedia PDF Downloads 124
1849 Some Issues of Measurement of Impairment of Non-Financial Assets in the Public Sector
Authors: Mariam Vardiashvili
Abstract:
The economic significance of the asset impairment process is quite large. Impairment reflects the reduction of the future economic benefits or service potential embodied in an asset. The assets owned by public sector entities either bring economic benefits or are used for the delivery of free-of-charge services. Consequently, they are classified as cash-generating and non-cash-generating assets. IPSAS 21, Impairment of Non-Cash-Generating Assets, and IPSAS 26, Impairment of Cash-Generating Assets, have been designed with this specificity in mind. When measuring the impairment of assets, it is important to select the relevant methods. For measurement of impaired non-cash-generating assets, IPSAS 21 recommends three methods: the depreciated replacement cost approach, the restoration cost approach, and the service units approach. Value in use of cash-generating assets (as per IPSAS 26) is measured as the discounted value of the cash flows to be received in the future. The article classifies assets in the public sector as non-cash-generating and cash-generating assets, and also deals with the factors that should be considered when evaluating the impairment of assets. The essence of impairment of non-financial assets and the methods of its measurement are formulated according to IPSAS 21 and IPSAS 26. The main emphasis is put on the different methods of measuring the value in use of impaired cash-generating and non-cash-generating assets and on the methods of their selection. The traditional and the expected cash flow approaches for calculating the discounted value are reviewed. The article also discusses the issues of recognition of impairment loss and its reflection in financial reporting.
The article concludes that, whatever the functional purpose of the impaired asset and whichever method is used for measuring it, the presentation of realistic information regarding the value of the assets should be ensured in the financial reporting. In the theoretical development of the issue, the methods of scientific abstraction, analysis, and synthesis were used, and the research was carried out with a systemic approach. The research draws on international accounting standards, theoretical research, and publications of Georgian and foreign scientists.
Keywords: cash-generating assets, non-cash-generating assets, recoverable (usable restorative) value, value in use
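The traditional and expected cash flow approaches to calculating the discounted value mentioned above can be illustrated with a short sketch. The cash flows, probabilities, and discount rate are hypothetical; note that when the probability-weighted scenarios average to the single best-estimate series, the two approaches give the same value in use.

```python
def present_value(cash_flows, rate):
    """Traditional approach: discount a single best-estimate cash-flow series."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

def expected_present_value(scenarios, rate):
    """Expected cash flow approach: probability-weight the scenarios, then discount."""
    horizon = len(scenarios[0][1])
    expected = [sum(p * cfs[t] for p, cfs in scenarios) for t in range(horizon)]
    return present_value(expected, rate)

# Hypothetical figures: 100 per year for three years at a 5% discount rate...
viu_traditional = present_value([100.0, 100.0, 100.0], 0.05)

# ...versus two scenarios whose probability-weighted cash flow is also 100 per year.
scenarios = [(0.6, [120.0, 120.0, 120.0]),
             (0.4, [70.0, 70.0, 70.0])]
viu_expected = expected_present_value(scenarios, 0.05)
```

In practice the expected cash flow approach pairs the probability weighting with a rate that excludes risks already reflected in the cash flows, which is where the two approaches genuinely diverge.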
Procedia PDF Downloads 143
1848 Derivation of Human NK Cells from T Cell-Derived Induced Pluripotent Stem Cells Using Xenogeneic Serum-Free and Feeder Cell-Free Culture System
Authors: Aliya Sekenova, Vyacheslav Ogay
Abstract:
The derivation of human induced pluripotent stem cells (iPSCs) from somatic cells by direct reprogramming opens wide perspectives in regenerative medicine. It offers the possibility of developing personalized and, consequently, immunologically compatible cells for applications in cell-based therapy. The purpose of our study was to develop a technology for the production of NK cells from T cell-derived induced pluripotent stem cells (TiPSCs) for subsequent application in adoptive cancer immunotherapy. Methods: In this study, iPSCs were derived from peripheral blood T cells using Sendai virus vectors expressing Oct4, Sox2, Klf4, and c-Myc. The pluripotent characteristics of the TiPSCs were examined and confirmed with alkaline phosphatase staining, immunocytochemistry, and RT-PCR analysis. For NK cell differentiation, embryoid bodies (EBs) formed from TiPSCs were cultured in xenogeneic serum-free medium containing human serum, IL-3, IL-7, IL-15, SCF, and FLT3L, without using M210-B4 and AFT-024 stromal feeder cells. After differentiation, the NK cells were characterized with immunofluorescence analysis, flow cytometry, and a cytotoxicity assay. Results: Here we demonstrate for the first time that TiPSCs can effectively differentiate into functionally active NK cells without M210-B4 and AFT-024 xenogeneic stromal cells. Immunofluorescence and flow cytometry analysis showed that EB-derived cells can differentiate into a homogeneous population of NK cells expressing high levels of the specific markers CD56, CD45, and CD16. Moreover, these cells significantly express activating receptors such as NKp44 and NKp46. In a comparative analysis, we observed that NK cells derived using the feeder-free culture system have higher killing activity against K-562 tumor cells than NK cells derived by the feeder-dependent method.
Thus, we think that the obtained data will be useful for the development of large-scale production of NK cells for translation into cancer immunotherapy.
Keywords: induced pluripotent stem cells, NK cells, T cells, cell differentiation, feeder cell-free culture system
Procedia PDF Downloads 326
1847 Military Leadership: Emotion Culture and Emotion Coping in Morally Stressful Situations
Authors: Sofia Nilsson, Alicia Ohlsson, Linda-Marie Lundqvist, Aida Alvinius, Peder Hyllengren, Gerry Larsson
Abstract:
In irregular warfare contexts, military personnel are often presented with morally ambiguous situations where they are aware of the morally correct choice but may feel prevented from following through with it due to organizational demands. Moral stress and/or injury can be the outcome of the individual’s experienced dissonance. These types of challenges put a large demand on the individual to manage their own emotions and the emotions of others, particularly in the case of a leader. Both the ability and the inability to regulate emotions can result in different combinations of short- and long-term reactions after morally stressful events, which can be either positive or negative. Our study analyzed the combination of these reactions based upon the types of morally challenging events that were described by the subjects, guided by two questions: 1) What institutionalized norms concerning emotion regulation are favorable in short- and long-term perspectives after a morally stressful event? 2) What individual emotion-focused coping strategies are favorable in short- and long-term perspectives after a morally stressful event? To address these questions, we conducted a quantitative study in military contexts in Sweden and Norway on upcoming or current military officers (n=331). We tested a theoretical model built upon a recently developed qualitative study. The data were analyzed using factor analysis, multiple regression analysis and subgroup analyses. The results indicated that an individual’s restriction of emotion in order to achieve an organizational goal, which results in emotional dissonance, can be an effective short-term strategy for both the individual and the organization; however, it appears to be unfavorable in a long-term perspective, which can result in negative reactions.
Our results are intriguing because they showed an increased percentage of reported negative long-term reactions (13%) indicating PTSD-related symptoms, in comparison to previous Swedish studies, which found lower PTSD symptomatology.
Keywords: emotion culture, emotion coping, emotion management, military
Procedia PDF Downloads 598
1846 Ill-Posed Inverse Problems in Molecular Imaging
Authors: Ranadhir Roy
Abstract:
Inverse problems arise in medical (molecular) imaging. These problems are characterized by their large size in three dimensions and by the diffusion equation that models the physical phenomena within the media. The inverse problems are posed as a nonlinear optimization in which the unknown parameters are found by minimizing the difference between the predicted data and the measured data. To obtain a unique and stable solution to an ill-posed inverse problem, a priori information must be used. Mathematical conditions to obtain stable solutions are established in Tikhonov’s regularization method, where the a priori information is introduced via a stabilizing functional, which may be designed to incorporate relevant information about an inverse problem. Effective determination of the Tikhonov regularization parameter requires knowledge of the true solution or, in the case of optical imaging, the true image. Yet, in clinically-based imaging, the true image is not known. To alleviate these difficulties, we have applied the penalty/modified barrier function (PMBF) method instead of the Tikhonov regularization technique to make the inverse problems well-posed. Unlike the Tikhonov regularization method, the constrained optimization technique, which is based on simple bounds on the optical parameter properties of the tissue, can easily be implemented in the PMBF method. Imposing the constraints on the optical properties of the tissue explicitly restricts solution sets and can restore uniqueness. Like the Tikhonov regularization method, the PMBF method limits the size of the condition number of the Hessian matrix of the given objective function. The accuracy and the rapid convergence of the PMBF method require a good initial guess of the Lagrange multipliers. To obtain the initial guess of the multipliers, we use a least-squares unconstrained minimization problem.
Three-dimensional images of fluorescence absorption coefficients and lifetimes were reconstructed from contact and noncontact experimentally measured data.
Keywords: constrained minimization, ill-conditioned inverse problems, Tikhonov regularization method, penalty modified barrier function method
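The Tikhonov step that the abstract contrasts with the PMBF method can be sketched in a few lines; this is a minimal illustration of regularized least squares, assuming a generic linear forward operator (the matrix A, data b, and alpha below are invented for illustration, not taken from the paper):

```python
import numpy as np

# Hedged sketch: Tikhonov regularization minimizes ||Ax - b||^2 + alpha*||x||^2,
# whose closed-form solution is x = (A^T A + alpha I)^(-1) A^T b.

def tikhonov_solve(A, b, alpha):
    """Solve the regularized normal equations for a given alpha > 0."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

# A nearly rank-deficient forward operator: the naive inverse is unstable,
# but the regularized solution stays bounded.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
b = np.array([2.0, 2.0001])
x = tikhonov_solve(A, b, alpha=1e-3)
```

The regularization term shrinks the solution norm at the cost of a small residual; choosing alpha is exactly the difficulty the abstract raises, since it ideally requires knowledge of the true image.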
Procedia PDF Downloads 271
1845 Multi-Stakeholder Involvement in Construction and Challenges of Building Information Modeling Implementation
Authors: Zeynep Yazicioglu
Abstract:
Project development is a complex process where many stakeholders work together. Employers and main contractors are the base stakeholders, whereas designers, engineers, sub-contractors, suppliers, supervisors, and consultants are other stakeholders. A combination of the complexity of the building process with a large number of stakeholders often leads to time and cost overruns and irregular resource utilization. Failure to comply with the work schedule and inefficient use of resources in the construction processes indicate that it is necessary to accelerate production and increase productivity. The development of computer software called Building Information Modeling, abbreviated as BIM, is a major technological breakthrough in this area. The use of BIM enables architectural, structural, mechanical, and electrical projects to be drawn in coordination. BIM is a tool that should be considered by every stakeholder for the opportunities it offers, such as minimizing construction errors, reducing construction time, forecasting, and determining the final construction cost. Its adoption is a process spreading over years, enabling all stakeholders associated with the project and construction to use it. The main goal of this paper is to explore the problems associated with the adoption of BIM in multi-stakeholder projects. The paper is a conceptual study, summarizing the author’s practical experience with design offices and construction firms working with BIM. Three of the challenges in the transition period to BIM will be examined in this paper: 1. The compatibility of supplier companies with BIM, 2. The need for two-dimensional drawings, 3. Contractual issues related to BIM. The paper reviews the literature on BIM usage and reviews the challenges in the transition stage to BIM. Even on an international scale, suppliers that can work in harmony with BIM are not very common, which means that the transition to BIM is still ongoing.
In parallel, employers, local approval authorities, and material suppliers still need a 2-D drawing. In the BIM environment, different stakeholders can work on the same project simultaneously, giving rise to design ownership issues. Practical applications and problems encountered are also discussed, providing a number of suggestions for the future.
Keywords: BIM opportunities, collaboration, contract issues about BIM, stakeholders of project
Procedia PDF Downloads 102
1844 Use of SUDOKU Design to Assess the Implications of the Block Size and Testing Order on Efficiency and Precision of Dulce De Leche Preference Estimation
Authors: Jéssica Ferreira Rodrigues, Júlio Silvio De Sousa Bueno Filho, Vanessa Rios De Souza, Ana Carla Marques Pinheiro
Abstract:
This study aimed to evaluate the implications of the block size and testing order on the efficiency and precision of preference estimation for Dulce de leche samples. Efficiency was defined as the inverse of the average variance of pairwise comparisons among treatments. Precision was defined as the inverse of the variance of treatment mean (or effect) estimates. The experiment was originally designed to test 16 treatments as a series of 8 Sudoku 16x16 designs, 4 randomized independently and 4 others in the reverse order, to yield balance in testing order. Linear mixed models were assigned to the whole experiment with 112 testers and all their grades, as well as to their partially balanced subgroups, namely: a) experiment with the four initial EU; b) experiment with EU 5 to 8; c) experiment with EU 9 to 12; and d) experiment with EU 13 to 16. To record responses, we used a nine-point hedonic scale. A mixed linear model analysis was assumed, with random tester and treatment effects and a fixed test-order effect. Analysis of a cumulative random-effects probit link model was very similar, with essentially no different conclusions, so for simplicity we present the results under the Gaussian assumption. The R-CRAN library lme4 and its function lmer (Fit Linear Mixed-Effects Models) were used for the mixed models, and the libraries Bayesthresh (default Gaussian threshold function) and ordinal, with the function clmm (Cumulative Link Mixed Model), were used to check Bayesian analysis of threshold models and cumulative link probit models. It was noted that the number of samples tested in the same session can influence the acceptance level, underestimating the acceptance. However, providing a large number of samples can help to improve sample discrimination.
Keywords: acceptance, block size, mixed linear model, testing order
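The two criteria defined in the abstract can be computed directly from the variance-covariance matrix of the treatment estimates; a minimal sketch, assuming a toy covariance matrix V (in practice V would come from the fitted mixed model, e.g. lme4::lmer in R):

```python
import numpy as np
from itertools import combinations

# Hedged sketch of the abstract's definitions:
#   efficiency = 1 / mean variance of pairwise treatment comparisons, where
#       var(t_i - t_j) = V[i, i] + V[j, j] - 2 * V[i, j]
#   precision  = 1 / mean variance of the treatment estimates (diagonal of V)
# The matrix V below is illustrative, not estimated from the study's data.

def design_efficiency(V):
    t = V.shape[0]
    pair_vars = [V[i, i] + V[j, j] - 2 * V[i, j]
                 for i, j in combinations(range(t), 2)]
    return 1.0 / np.mean(pair_vars)

def design_precision(V):
    return 1.0 / np.mean(np.diag(V))

# Toy 3-treatment example: variances 0.5 on the diagonal, covariances 0.1.
V = np.full((3, 3), 0.1) + np.eye(3) * 0.4
```

With this V, every pairwise comparison has variance 0.5 + 0.5 - 0.2 = 0.8, so the efficiency is 1.25 and the precision 2.0; comparing these numbers across block sizes and testing orders is the kind of evaluation the study describes.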
Procedia PDF Downloads 321
1843 Magnetic Nanoparticles Coated with Modified Polysaccharides for the Immobilization of Glycoproteins
Authors: Kinga Mylkie, Pawel Nowak, Marta Z. Borowska
Abstract:
The most important proteins in human serum responsible for drug binding are human serum albumin (HSA) and α1-acid glycoprotein (AGP). The AGP molecule is a glycoconjugate containing a single polypeptide chain composed of 183 amino acids (the core of the protein) and five branched glycan chains (the sugar part) covalently linked by an N-glycosidic bond to aspartyl residues (Asp(N)-15, -38, -54, -75, -85) of the polypeptide chain. This protein plays an important role in binding alkaline drugs, a large group of drugs used in psychiatry, some acid drugs (e.g., coumarin anticoagulants), and neutral drugs (steroid hormones). The main goal of the research was to obtain magnetic nanoparticles coated with biopolymers in a chemically modified form, which will have highly reactive functional groups able to effectively immobilize the glycoprotein (α1-acid glycoprotein) without losing the ability to bind active substances. The first phase of the project involved the chemical modification of the biopolymer starch. Modification of starch was carried out by methods of organic synthesis, leading to the preparation of a polymer enriched on its surface with aldehyde groups, which in the next step was coupled with 3-aminophenylboronic acid. Magnetite nanoparticles coated with starch were prepared by in situ co-precipitation and then oxidized with a 1 M sodium periodate solution to form a dialdehyde starch coating. Afterward, the reaction between the magnetite nanoparticles coated with dialdehyde starch and 3-aminophenylboronic acid was carried out. The obtained materials consist of a magnetite core surrounded by a layer of modified polymer, which contains on its surface dihydroxyboryl groups of boronic acids capable of binding glycoproteins. The magnetic nanoparticles obtained as carriers for plasma protein immobilization were fully characterized by ATR-FTIR, TEM, SEM, and DLS. The glycoprotein was immobilized on the obtained nanoparticles.
The amount of immobilized protein was determined by the Bradford method.
Keywords: glycoproteins, immobilization, magnetic nanoparticles, polysaccharides
Procedia PDF Downloads 130
1842 Energy Efficiency Approach to Reduce Costs of Ownership of Air Jet Weaving
Authors: Corrado Grassi, Achim Schröter, Yves Gloy, Thomas Gries
Abstract:
Air jet weaving is the most productive, but also the most energy-consuming, weaving method. Increasing energy costs and environmental impact are a constant challenge for the manufacturers of weaving machines. Current technological developments concern low energy costs, low environmental impact, high productivity, and constant product quality. The high energy consumption of the method can be ascribed to its high demand for compressed air. An energy efficiency method is applied to the air jet weaving technology. The method identifies and classifies the main relevant energy consumers and processes from the exergy point of view, and it leads to the identification of energy efficiency potentials during the weft insertion process. Starting from the design phase, energy efficiency is considered the central requirement to be satisfied. The initial phase of the method consists of an analysis of the state of the art of the main weft insertion components in order to prioritize the most energy-demanding components and processes. The identified major components are investigated to reduce the high energy demand of the weft insertion process. During the interaction of the flow field coming from the relay nozzles within the profiled reed, only a minor part of the stream actually accelerates the weft yarn, resulting in large energy inefficiency. Different tools such as FEM analysis, CFD simulation models and experimental analysis are used in order to design a more energy-efficient version of the components involved in the filling insertion. A different concept for the metal strip of the profiled reed is developed. The developed metal strip allows a reduction of the machine's energy consumption. Based on a parametric and aerodynamic study, the designed reed transmits higher values of the flow power to the filling yarn.
The innovative reed fulfills both the requirement of raising energy efficiency and compliance with the weaving constraints.
Keywords: air jet weaving, aerodynamic simulation, energy efficiency, experimental validation, weft insertion
Procedia PDF Downloads 197
1841 Using Geographic Information System and Analytic Hierarchy Process for Detecting Forest Degradation in Benslimane Forest, Morocco
Authors: Loubna Khalile, Hicham Lahlaoi, Hassan Rhinane, A. Kaoukaya, S. Fal
Abstract:
Green spaces are an essential element; they contribute to improving the quality of life in the towns around them. They are places of relaxation, walking and rest, and a playground for sport and youth. According to the United Nations, forests cover 31% of the land. In Morocco, forests covered 12.65% of the total land area in 2013, still a small proportion compared to the natural need for forests as a green lung of our planet. The Benslimane Forest is a large green area belonging to the Chaouia-Ouardigha Region and the Greater Casablanca Region; it is located geographically between Casablanca, considered the economic and business capital of Morocco, and Rabat, the national political capital, with an area of 12,261.80 hectares. The essential problem usually encountered in suburban forests is visitation and tourism pressure, i.e., anthropogenic actions, as well as other ecological and environmental factors. In recent decades, Morocco has experienced drought years that have affected the forest, and with increasing human pressure it suffers heavy losses every day, as well as over-exploitation. Moroccan forest ecosystems are fragile, with intense ecological variation and domanial and imposed usage rights granted to the population; forests are experiencing significant deterioration due to neglect and immoderate use of forest resources, which can lead to the destruction of animal habitats, vegetation, the water cycle and the climate. The purpose of this study is to model the degree of degradation of the forest and identify its causes for prevention, using remote sensing and geographic information systems and introducing climate and ancillary data. The analytic hierarchy process was used to find the degree of influence and the weight of each parameter; in this case, it was found that anthropogenic activities have a fairly significant impact and have thus influenced the climate.
Keywords: analytic hierarchy process, degradation, forest, geographic information system
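The analytic hierarchy process weighting step can be sketched briefly: priority weights are the normalized principal eigenvector of a reciprocal pairwise-comparison matrix, with a consistency check on the judgments. The comparison values below are hypothetical, not the ones elicited in the study:

```python
import numpy as np

# Hedged sketch of the AHP weighting step.  The consistency ratio
# CR = ((lambda_max - n) / (n - 1)) / RI flags incoherent judgments
# (CR < 0.1 is the usual cutoff; RI values from Saaty's table).

RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}  # random consistency index

def ahp_weights(M):
    vals, vecs = np.linalg.eig(M)
    k = np.argmax(vals.real)            # principal eigenvalue
    w = np.abs(vecs[:, k].real)
    w = w / w.sum()                     # normalize to a priority vector
    n = M.shape[0]
    ci = (vals[k].real - n) / (n - 1)   # consistency index
    return w, ci / RI[n]

# Hypothetical judgments: anthropogenic pressure judged 3x as important as
# climate and 5x as important as soil factors.
M = np.array([[1.0, 3.0, 5.0],
              [1 / 3, 1.0, 2.0],
              [1 / 5, 1 / 2, 1.0]])
w, cr = ahp_weights(M)
```

For this matrix the anthropogenic factor receives the largest weight, consistent with the study's finding that anthropogenic activities dominate the degradation model.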
Procedia PDF Downloads 326
1840 Exploring Sexual Behavior among Unmarried Male Youth in Bangladesh: A Cross-Sectional Study
Authors: Subas Chandra Biswas, Kazi Sameen Naser, Farzana Misha
Abstract:
Little is known about the sexual behavior of male youth, particularly unmarried young men, in Bangladesh, as most sexual and reproductive health and rights-related research and interventions focus mainly on females and married couples. To understand unmarried youth’s sexual behavior, data from a nationwide survey conducted in all 64 districts of Bangladesh were analyzed. Using multistage systematic random sampling, the survey was conducted among 11,113 male youth aged 15-24 years from May to August 2019. This article analyzes and presents findings on the sexual behavior of unmarried respondents, based on data collected from 10,026 unmarried male youth. Findings showed that 18% had ever had a sexual relationship, and the reported mean age of first sexual intercourse was 16.5 years. Among unmarried male youth with sexual experience, first sexual partners were female friends/classmates (57%), female neighbors (16%), female sex workers (12%), relatives (6%) and girlfriends with whom they had a love relationship (4%). About 36% reported having a love relationship with a girlfriend, and among them, 23% reported sexual intercourse with their girlfriend. Of those who had sexual relations with their girlfriend, 47% reported not using a condom in their last sex with her. Furthermore, 29% reported sexual relationships with others besides their girlfriends; the other reported partners were female sex workers (32%), neighbors (29%), female friends (19%), relatives (12%), and cousins (5%). Also, 46% reported not using a condom during sex with these other partners. About 9% used some sort of sexual stimulant to increase their libido. Among the respondents, 376 reported buying sex in the last six months, with a mean expenditure of 1,140 Taka (13.46 US dollars).
Though premarital sexual relations are not socially accepted, the findings showed that a large portion of male youth engage in such relationships and in risky sexual behavior. Lack of awareness of sexual and reproductive health, unprotected sexual intercourse, and drug use during sexual intercourse also increase threats to health. These findings are thus important for understanding the sexual behavior of male youth and carry policy and programmatic implications. Therefore, to ensure a healthy sexual life and wellbeing, an immediate and culturally sensitive sexual health promotion intervention is needed for male youth in Bangladesh.
Keywords: Bangladesh, male youth, sexual and reproductive health, sexual behavior
Procedia PDF Downloads 141
1839 Effectiveness of the Community Health Assist Scheme in Reducing Market Failure in Singapore’s Healthcare Sector
Authors: Matthew Scott Lau
Abstract:
This study addresses the research question: How effective has the Community Health Assist Scheme (CHAS) been in reducing market failure in Singapore’s healthcare sector? The CHAS policy, introduced in 2012 in Singapore, aims to improve accessibility and affordability of healthcare by offering subsidies to low and middle-income groups and elderly individuals for general practice consultations and healthcare. The investigation was undertaken by acquiring and analysing primary and secondary research data from 3 main sources, including handwritten survey responses of 334 individuals who were valid CHAS subsidy recipients (CHAS cardholders) from 5 different locations in Singapore, interview responses from two established general practitioner doctors with working knowledge of the scheme, and information from literature available online. Survey responses were analysed to determine how CHAS has affected the affordability and consumption of healthcare, and other benefits or drawbacks for CHAS users. The interview responses were used to explain the benefits of healthcare consumption and provide different perspectives on the impacts of CHAS on the various parties involved. Online sources provided useful information on changes in healthcare consumerism and Singapore’s government policies. The study revealed that CHAS has been largely effective in reducing market failure as the subsidies granted to consumers have improved the consumption of healthcare. This has allowed for the external benefits of healthcare consumption to be realized, thus reducing market failure. However, the study also revealed that CHAS cannot be fully effective in reducing market failure as the scope of CHAS prevents healthcare consumption from fully reaching the socially optimal level. Hence, the study concluded that CHAS has been effective to a large extent in reducing market failure in Singapore’s healthcare sector, albeit with some benefits to third parties yet to be realised. 
There are certain elements of the investigation which may limit the validity of the conclusion, such as the means used to determine the socially optimal level of healthcare consumption, and the survey sample size.
Keywords: healthcare consumption, health economics, market failure, subsidies
Procedia PDF Downloads 159
1838 Modelling Volatility Spillovers and Cross Hedging among Major Agricultural Commodity Futures
Authors: Roengchai Tansuchat, Woraphon Yamaka, Paravee Maneejuk
Abstract:
In the recent past, the global financial crisis, economic instability, and large fluctuations in agricultural commodity prices have led to increased concerns about volatility transmission among commodities. The problem is further exacerbated by commodity volatility caused by other commodity price fluctuations, so that decisions on hedging strategy have become both costly and less effective. Thus, this paper analyzes the volatility spillover effect among major agricultural commodities, including corn, soybeans, wheat and rice, to help commodity suppliers hedge their portfolios and manage their risk and co-volatility. We provide a switching-regime approach to analyzing the issue of volatility spillovers in different economic conditions, namely economic upturns and downturns. In particular, we investigate relationships and volatility transmissions between these commodities in different economic conditions. We propose a Copula-based multivariate Markov Switching GARCH model with two regimes that depend on the economic condition and perform a simulation study to check the accuracy of the proposed model. In this study, the correlation term in the cross-hedge ratio is obtained from six copula families – two elliptical copulas (Gaussian and Student-t) and four Archimedean copulas (Clayton, Gumbel, Frank, and Joe). We use one-step maximum likelihood estimation techniques to estimate our models and compare the performance of these copulas using the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). In the application study of agricultural commodities, the weekly data used run from 4 January 2005 to 1 September 2016, covering 612 observations. The empirical results indicate that the volatility spillover effects among cereal futures differ in response to different economic conditions.
In addition, the hedge-effectiveness results also suggest optimal cross-hedge strategies under different economic conditions, especially economic upturns and downturns.
Keywords: agricultural commodity futures, cereal, cross-hedge, spillover effect, switching regime approach
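The copula model-selection step described in the abstract reduces to comparing penalized log-likelihoods; a minimal sketch, where the log-likelihoods and parameter counts are invented placeholders rather than the study's estimates (only n = 612 matches the stated sample size):

```python
import math

# Hedged sketch of ranking fitted copulas by AIC = 2k - 2*logL and
# BIC = k*ln(n) - 2*logL; lower is better for both criteria.

def aic(loglik, k):
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    return k * math.log(n) - 2 * loglik

n = 612  # weekly observations, 4 Jan 2005 to 1 Sep 2016
candidates = {
    "gaussian":  {"loglik": -1510.0, "k": 1},   # placeholder values
    "student_t": {"loglik": -1498.0, "k": 2},
    "clayton":   {"loglik": -1505.0, "k": 1},
}
best_by_aic = min(candidates,
                  key=lambda c: aic(candidates[c]["loglik"], candidates[c]["k"]))
best_by_bic = min(candidates,
                  key=lambda c: bic(candidates[c]["loglik"], candidates[c]["k"], n))
```

BIC penalizes the extra degrees-of-freedom parameter of the Student-t copula more heavily than AIC does, which is why the paper reports both criteria.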
Procedia PDF Downloads 202
1837 Between Buddha and Tsar: Kalmyk Buddhist Sangha in Late Russian Empire
Authors: Elzyata Kuberlinova
Abstract:
This study explores how the Kalmyk Buddhist sangha responded to the Russian empire’s administrative integration and how the Buddhist clerical institutions were shaped in the process of interaction with representatives of the predominantly Orthodox state. The eighteenth- and nineteenth-century Russian imperial regime adhered to a religion-centred framework to govern its diverse subjects. Within this framework, any form of religious authority was considered a useful tool in the imperial quest for legibility. As such, rather than imposing religious homogeneity, the Russian administration engineered a framework of religious toleration and integrated the non-Orthodox clerical institutions into the empire’s administration. In its attempt to govern the large body of the Kalmyk Buddhist sangha, the Russian government had to incorporate the sangha into the imperial institutional establishment. To this end, the Russian government founded the Lamaist Spiritual Governing Board in 1834, which became a part of the civil administration, where Kalmyk Buddhist affairs were managed under the supervision of the Russian secular authorities. In 1847 the Lamaist Spiritual Board was abolished and Buddhist religious authority was transferred to the Lama of the Kalmyk people. From 1847 until the end of the empire in 1917, the Lama was the manager and intermediary figure between the Russian authorities and the Kalmyks where religious affairs were concerned. Substantial evidence collected in archives in Elista, Astrakhan, Stavropol and St. Petersburg shows that despite being on the government’s payroll, first the Lamaist Spiritual Governing Board and later the Lama did not always serve the interests of the state and did not always comply with the Russian authorities’ orders. Although incorporated into the state administrative system, the Lama often found ways to manoeuvre the web of the Russian imperial bureaucracy in order to achieve his own goals.
The Lama often used ‘everyday forms of resistance’ such as feigned misinterpretation, evasion, false compliance, feigned ignorance, and sabotage in order to resist without directly confronting or challenging the state orders.
Keywords: Buddhist Sangha, intermediary, Kalmyks, Lama, legibility, resistance, reform, Russian empire
Procedia PDF Downloads 222
1836 The Political Economy of Green Trade in the Context of US-China Trade War: A Case Study of US Biofuels and Soybeans
Authors: Tonghua Li
Abstract:
Under the neoliberal corporate food regime, biofuels are a double-edged sword that exacerbates tensions between national food security and trade in green agricultural products. Biofuels have the potential to help achieve green sustainable development goals, but they threaten food security by exacerbating competition for land and changing global food trade patterns. The U.S.-China trade war complicates this debate. Under the influence of different political and corporate coordination mechanisms in China and the US, trade disputes can have different impacts on sustainable agricultural practices. This paper develops an actor-centred ‘network governance framework’ focusing on trade in soybean and corn-based biofuels to explain how trade wars can change the actions of governmental and non-governmental actors in the context of oligopolistic competition and market concentration in agricultural trade. There is evidence that the US-China trade decoupling exacerbates the conflict between national security, free trade in agriculture, and the realities and needs of green and sustainable energy development. The US government's trade policies reflect concerns about China's relative gains, leading to a loss of trade profits and making it impossible for the parties involved to find a balance among the three objectives, consequently driving the biofuels and soybean industries into a dilemma. Within a setting that prioritizes national security and strategic interests, the government has replaced the dominant position of large agribusiness in the neoliberal food system, and the goal of environmental sustainability has been marginalized by high politics. In contrast, China faces tensions in the trade war between its food security self-sufficiency policy and liberal sustainable trade, but the state-capitalist model ensures policy coordination and coherence in trade diversion and supply chain adjustment.
Despite ongoing raw material shortages and technological challenges, China remains committed to playing a role in global environmental governance and promoting green trade objectives.
Keywords: food security, green trade, biofuels, soybeans, US-China trade war
Procedia PDF Downloads 7
1835 Heterogeneous-Resolution and Multi-Source Terrain Builder for CesiumJS WebGL Virtual Globe
Authors: Umberto Di Staso, Marco Soave, Alessio Giori, Federico Prandi, Raffaele De Amicis
Abstract:
The increasing availability of information about earth surface elevation (Digital Elevation Models, DEM) generated from different sources (remote sensing, aerial images, Lidar) poses the question of how to integrate this huge amount of data and make it available to the widest possible audience. In order to exploit the potential of 3D elevation representation, the quality of data management plays a fundamental role. Due to the high acquisition costs and the huge amount of generated data, high-resolution terrain surveys tend to be small or medium sized and available only for limited portions of the earth. Hence the need to merge large-scale height maps, which typically are made available for free at the worldwide level, with very specific high-resolution datasets. On the other hand, the third dimension increases the user experience and the data representation quality, unlocking new possibilities in data analysis for civil protection, real estate, urban planning, environment monitoring, etc. Open-source 3D virtual globes, a trending topic in Geovisual Analytics, aim at improving the visualization of geographical data provided by standard web services or in proprietary formats. Typically, however, 3D virtual globes do not offer an open-source tool that allows the generation of a terrain elevation data structure starting from heterogeneous-resolution terrain datasets. This paper describes a technological solution aimed to set up a so-called “Terrain Builder”. This tool is able to merge heterogeneous-resolution datasets and to provide a multi-resolution worldwide terrain service fully compatible with CesiumJS and therefore accessible via the web using a traditional browser without any additional plug-in.
Keywords: Terrain Builder, WebGL, Virtual Globe, CesiumJS, Tiled Map Service, TMS, Height-Map, Regular Grid, Geovisual Analytics, DTM
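The Tiled Map Service pyramid mentioned in the keywords indexes the world by zoom level, column, and row; a minimal sketch of the standard slippy-map tile formulas with the TMS row origin at the south edge (an illustration of the general tiling scheme, not code from the paper):

```python
import math

# Hedged sketch: convert a lon/lat coordinate to TMS tile indices at a zoom
# level.  XYZ tiles count rows from the north; TMS counts from the south,
# hence the flip on the row index.

def lonlat_to_tile(lon, lat, zoom):
    n = 2 ** zoom
    x = int((lon + 180.0) / 360.0 * n)
    y_xyz = int((1.0 - math.asinh(math.tan(math.radians(lat))) / math.pi) / 2.0 * n)
    y_tms = n - 1 - y_xyz  # flip the row origin from north (XYZ) to south (TMS)
    return x, y_tms

# Example: the tile holding Trento, Italy (~11.12 E, 46.07 N) at zoom 10.
x, y = lonlat_to_tile(11.12, 46.07, 10)
```

A terrain builder walks this pyramid top-down, sampling the finest DEM available in each tile's footprint, which is how heterogeneous-resolution sources can coexist in one service.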
Procedia PDF Downloads 426
1834 Heterogeneity of Genes Encoding the Structural Proteins of Avian Infectious Bronchitis Virus
Authors: Shahid Hussain Abro, Siamak Zohari, Lena H. M. Renström, Désirée S. Jansson, Faruk Otman, Karin Ullman, Claudia Baule
Abstract:
Infectious bronchitis is an acute, highly contagious respiratory, nephropathogenic and reproductive disease of poultry that is caused by infectious bronchitis virus (IBV). The present study used a large data set of structural gene sequences, including newly generated ones and sequences available in the GenBank database, to further analyze the diversity and to identify selective pressures and recombination spots. There were some deletions or insertions in the analyzed regions in isolates of the Italy-02 and D274 genotypes, whereas no insertions or deletions were observed in the isolates of the Massachusetts and 4/91 genotypes. The hypervariable nucleotide sequence regions spanned positions 152–239, 554–582, 686–737 and 802–912 in the S1 sub-unit of all the analyzed genotypes. The nucleotide sequence data of the E gene showed that this gene was comparatively unstable and subject to a high frequency of mutations. The M gene showed substitutions consistently distributed, except for a region between nucleotide positions 250–680 that remained conserved. The lowest variation in the nucleotide sequences of ORF5a was observed in the isolates of the D274 genotype, while ORF5b and N gene sequences showed highly conserved regions and were less subject to variation. Genes ORF3a, ORF3b, M, ORF5a, ORF5b and N presented negative selective pressure among the analyzed isolates; however, some regions of these ORFs showed favorable selective pressures. The S1 and E proteins were subject to a high rate of mutational substitutions and non-synonymous amino acid changes. Strong signals of recombination beginning and ending breakpoints were observed in the S and N genes. Overall, the results of this study revealed that the strong selective pressures on E and M and the high frequency of substitutions in the S gene can probably be considered the main determinants in the evolution of IBV.
Keywords: IBV, avian infectious bronchitis, structural genes, genotypes, genetic diversity
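One common way hypervariable regions like those reported for the S1 sub-unit are flagged is per-column Shannon entropy over a multiple alignment; a minimal sketch, using a toy alignment rather than IBV sequence data (the threshold of 1.0 bit is an illustrative choice, not the study's criterion):

```python
import math
from collections import Counter

# Hedged sketch: entropy of an alignment column, in bits; conserved columns
# score 0, fully variable columns score up to log2(4) = 2 for nucleotides.

def column_entropy(column):
    counts = Counter(column)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def hypervariable_positions(alignment, threshold=1.0):
    """Return 0-based alignment columns whose entropy exceeds the threshold."""
    return [i for i, col in enumerate(zip(*alignment))
            if column_entropy(col) > threshold]

alignment = [
    "ATGCA",
    "ATGTA",
    "ATGGA",
    "ATGCA",
]
variable_cols = hypervariable_positions(alignment)  # only column 3 varies
```

Runs of consecutive high-entropy columns would then delimit hypervariable regions such as positions 152–239 reported above.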
Procedia PDF Downloads 435
1833 Fermentation of Pretreated Herbaceous Cellulosic Wastes to Ethanol by Anaerobic Cellulolytic and Saccharolytic Thermophilic Clostridia
Authors: Lali Kutateladze, Tamar Urushadze, Tamar Dudauri, Besarion Metreveli, Nino Zakariashvili, Izolda Khokhashvili, Maya Jobava
Abstract:
Lignocellulosic waste streams from agriculture and the paper and wood industries are renewable, plentiful and low-cost raw materials that can be used for large-scale production of liquid and gaseous biofuels. As opposed to the prevailing multi-stage biotechnological processes developed for bioconversion of cellulosic substrates to ethanol, in which high-cost cellulase preparations are used, Consolidated Bioprocessing (CBP) accomplishes cellulose and xylan hydrolysis followed by fermentation of both C6 and C5 sugars to ethanol in a single-stage process. A syntrophic microbial consortium comprising anaerobic, thermophilic, cellulolytic, and saccharolytic bacteria of the genus Clostridium, with improved ethanol productivity and high tolerance to fermentation end-products, has been proposed for achieving CBP. Sixty-five new strains of anaerobic thermophilic cellulolytic and saccharolytic Clostridia were isolated from different wetlands and hot springs in Georgia. Using the new isolates, fermentation of mechanically pretreated wheat straw and corn stalks was carried out under an oxygen-free nitrogen atmosphere in thermophilic conditions (T = 55 °C) at pH 7.1. Process duration was 120 hours. Liquid and gaseous fermentation products were analyzed daily using Perkin-Elmer gas chromatographs with flame ionization and thermal conductivity detectors. Residual cellulose, xylan, xylose, and glucose were determined using standard methods. The cellulolytic and saccharolytic bacterial strains degraded the mechanically pretreated herbaceous cellulosic wastes and fermented glucose and xylose to ethanol, acetic acid and gaseous products such as hydrogen and CO2. The maximum yield of ethanol was reached at 96 h of fermentation and varied between 2.9–3.2 g per 10 g of substrate. The content of acetic acid did not exceed 0.35 g/l. Other volatile fatty acids were detected in trace quantities.
Keywords: anaerobic bacteria, cellulosic wastes, Clostridia sp, ethanol
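The reported yields of 2.9–3.2 g ethanol per 10 g of substrate can be put in context with a back-of-envelope stoichiometric check. The sketch below is an illustration added here, not part of the study, and it makes the simplifying (and pessimistic) assumption that the whole substrate mass were fermentable hexose; since straw and stalks are only partly carbohydrate, the true conversion efficiencies are higher than the percentages shown:

```python
# C6H12O6 -> 2 C2H5OH + 2 CO2: two ethanol molecules per glucose.
M_GLUCOSE = 180.16  # g/mol
M_ETHANOL = 46.07   # g/mol

# Stoichiometric maximum: ~0.511 g ethanol per g of glucose fermented.
max_yield = 2 * M_ETHANOL / M_GLUCOSE

for grams_per_10g in (2.9, 3.2):
    y = grams_per_10g / 10.0  # convert to g ethanol per g substrate
    print(f"{y:.2f} g/g = {y / max_yield:.0%} of stoichiometric maximum")
```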
Procedia PDF Downloads 294
1832 Cupric Oxide Thin Films for Optoelectronic Application
Authors: Sanjay Kumar, Dinesh Pathak, Sudhir Saralch
Abstract:
Copper oxide is a semiconductor that has been studied for several reasons: the natural abundance of the starting material copper (Cu), the ease of production by Cu oxidation, its non-toxic nature, and its reasonably good electrical and optical properties. Cupric oxide (CuO) is a p-type semiconductor having a band gap energy of 1.21 to 1.51 eV. As a p-type semiconductor, its conduction arises from the presence of holes in the valence band (VB) due to doping/annealing. CuO is attractive as a selective solar absorber since it has high solar absorbency and low thermal emittance, and it is a very promising candidate for photovoltaic energy conversion in solar cell applications. It has been demonstrated that the dip technique can be used to deposit CuO films in a simple manner using the metallic chloride CuCl₂.2H₂O as a starting material. In this work, copper oxide films were prepared from a methanolic solution of cupric chloride (CuCl₂.2H₂O) at three baking temperatures. Three samples were made, which turned black after heating. XRD data confirm that the films are of the CuO phase at a particular temperature. The optical band gap of the CuO films calculated from optical absorption measurements is 1.90 eV, which is quite comparable to the reported value. The dip technique is a very simple and low-cost method that requires no sophisticated specialized setup. Substrates with a large surface area can easily be coated by this technique, in contrast to physical evaporation techniques and spray pyrolysis. Another advantage of the dip technique is that it is very easy to coat both sides of the substrate instead of only one and to reach otherwise inaccessible surfaces; the method is well suited for coating the inner and outer surfaces of tubes of various diameters and shapes.
The main advantage of the dip coating method lies in the fact that it is possible to deposit a variety of layers with good homogeneity and mechanical and chemical stability using a very simple setup. In this paper, the preparation of CuO thin films by the dip coating method and their characterization are presented.
Keywords: absorber material, cupric oxide, dip coating, thin film
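Optical band gaps such as the 1.90 eV value quoted above are commonly extracted from absorption spectra with a Tauc plot: for a direct-gap material, (αhν)² is plotted against photon energy hν and the linear region is extrapolated to zero, with the intercept giving Eg. The sketch below is an illustration added here, not the authors' analysis; the absorption data are synthetic and assume Eg = 1.90 eV purely so the recovered value can be checked:

```python
import numpy as np

E_GAP = 1.90                      # eV, assumed for the synthetic data
hv = np.linspace(2.0, 2.6, 50)    # photon energies above the gap (eV)
# Direct-allowed transition model: alpha ~ sqrt(hv - Eg) / hv
alpha = np.sqrt(hv - E_GAP) / hv

tauc = (alpha * hv) ** 2          # Tauc quantity; linear in hv above Eg
slope, intercept = np.polyfit(hv, tauc, 1)
eg_estimate = -intercept / slope  # x-intercept of the linear fit

print(f"Estimated band gap: {eg_estimate:.2f} eV")  # -> 1.90 eV
```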
Procedia PDF Downloads 309