Search results for: export advantage
417 Preliminary Study of Gold Nanostars/Enhanced Filter for Keratitis Microorganism Raman Fingerprint Analysis
Authors: Chi-Chang Lin, Jian-Rong Wu, Jiun-Yan Chiu
Abstract:
Myopia, a ubiquitous condition that requires optical lenses to correct eyesight, affects many people in their daily lives. In recent years, younger people have taken a growing interest in contact lenses because of their convenience and aesthetics. Clinically, however, incorrect use of contact lenses and unsupervised cleaning increase the risk of corneal infection, known as ocular keratitis. To meet the resulting identification needs, new detection and analysis methods offering rapid and more accurate identification of clinical microorganisms are urgently required. In this study, we take advantage of the unique Raman fingerprints of different functional groups to use Raman spectroscopy as a distinct and fast examination tool for microorganisms. Raman scattering signals are normally too weak for detection, especially in the biological field, so we applied special SERS enhancement substrates to generate stronger Raman signals. The SERS filter designed in this work was prepared by depositing silver nanoparticles directly onto a cellulose filter surface, and suspended nanoparticles, gold nanostars (AuNSs), were introduced together with it to achieve better enhancement for low-concentration analytes (i.e., various bacteria). The research also focuses on the shape effect of the synthesized AuNSs: a needle-like surface morphology may create more hot spots and thus a higher SERS enhancement ability. We used the newly designed SERS technology to distinguish the bacteria responsible for ocular keratitis at the strain level, and the specific Raman and SERS fingerprints were grouped by a pattern recognition process. We report a new method combining different SERS substrates that can be applied to clinical microorganism detection at the strain level with simple, rapid preparation and low cost.
Our SERS technology not only shows great potential for clinical bacteria detection but can also be used for environmental pollution and food safety analysis.
Keywords: bacteria, gold nanostars, Raman spectroscopy, surface-enhanced Raman scattering filter
Procedia PDF Downloads 168
416 Educating the Educators: Interdisciplinary Approaches to Enhance Science Teaching
Authors: Denise Levy, Anna Lucia C. H. Villavicencio
Abstract:
In a rapidly changing world, science teachers face considerable challenges. In addition to the basic curriculum, several transversal themes must be included, which demand creative and innovative strategies to be arranged and integrated into traditional disciplines. In Brazil, nuclear science is still a controversial theme, and teachers themselves often seem unaware of the issue, perpetuating prejudice, errors and misconceptions. This article presents the authors' experience in developing an interdisciplinary pedagogical proposal to include nuclear science in the basic curriculum in a transversal and integrating way. The methodology applied was based on the analysis of several normative documents that define the requirements of essential learning, competences and skills of basic education for all schools in Brazil. The didactic materials and resources were developed according to best practices for improving learning processes, privileging constructivist educational techniques with an emphasis on active learning, collaborative learning and learning through research. The material consists of an illustrated book for students, a book for teachers and a manual with activities that can articulate nuclear science with different disciplines: Portuguese, mathematics, science, art, English, history and geography. The content maintains high scientific rigor and articulates nuclear technology with topics of interest to society in the most diverse spheres, such as food supply, public health, food safety and foreign trade. Moreover, this pedagogical proposal takes advantage of the potential value of digital technologies, implementing QR codes that excite and challenge students of all ages, improving interaction and engagement. The expected results include the education of the educators for nuclear science communication in a transversal and integrating way, demystifying nuclear technology in a contextualized and significant approach.
It is expected that the interdisciplinary pedagogical proposal contributes to improving attitudes towards knowledge construction, privileging reconstructive questioning, fostering a culture of systematic curiosity and encouraging critical thinking skills.
Keywords: science education, interdisciplinary learning, nuclear science, scientific literacy
Procedia PDF Downloads 133
415 A Study on Adsorption Ability of MnO2 Nanoparticles to Remove Methyl Violet Dye from Aqueous Solution
Authors: Zh. Saffari, A. Naeimi, M. S. Ekrami-Kakhki, Kh. Khandan-Barani
Abstract:
The textile industries are becoming a major source of environmental contamination because an alarming amount of dye pollutants is generated during the dyeing processes. Organic dyes are among the largest pollutants released into wastewater from textile and other industrial processes and have shown severe impacts on human physiology. Nanostructured compounds have gained importance in this category due to their high surface area and improved reactive sites. In recent years, several novel adsorbents have been reported to possess great adsorption potential due to their enhanced adsorptive capacity. Nano-MnO2 has great potential in the environmental protection field and has gained importance in this category because it offers a wide variety of structures with a large surface area. The diverse structures and chemical properties of manganese oxides are exploited in applications such as adsorbents and sensor catalysis, as well as in a broad range of catalytic applications such as the degradation of dyes. In this study, the adsorption of Methyl Violet (MV) dye from aqueous solutions onto MnO2 nanoparticles (MNP) has been investigated. The surface of these nanoparticles was characterized by particle size analysis, scanning electron microscopy (SEM), Fourier transform infrared (FTIR) spectroscopy and X-ray diffraction (XRD). The effects of process parameters such as initial concentration, pH, temperature and contact duration on the adsorption capacity have been evaluated, with pH found to be the most effective parameter. The data were analyzed using the Langmuir and Freundlich models to explain the equilibrium characteristics of adsorption, and kinetic models such as the pseudo-first-order model, the pseudo-second-order model and the Elovich equation were used to describe the kinetic data. The experimental data were well fitted by the Langmuir adsorption isotherm model and the pseudo-second-order kinetic model.
The thermodynamic parameters, such as the free energy of adsorption (ΔG°), enthalpy change (ΔH°) and entropy change (ΔS°), were also determined and evaluated.
Keywords: MnO2 nanoparticles, adsorption, methyl violet, isotherm models, kinetic models, surface chemistry
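As an illustration of the equilibrium analysis described above, a linearized Langmuir fit can be sketched as follows. The data and parameter values are synthetic, generated from assumed constants purely to show the procedure; they are not the study's measurements.

```python
import numpy as np

# Linearized Langmuir isotherm: Ce/qe = Ce/qmax + 1/(KL*qmax),
# so a straight-line fit of Ce/qe against Ce yields qmax and KL.
qmax_true, KL_true = 50.0, 0.12                       # assumed: mg/g, L/mg
Ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])     # equilibrium conc., mg/L
qe = qmax_true * KL_true * Ce / (1.0 + KL_true * Ce)  # adsorbed amount, mg/g

slope, intercept = np.polyfit(Ce, Ce / qe, 1)         # linear regression
qmax_fit = 1.0 / slope                                # maximum capacity, mg/g
KL_fit = slope / intercept                            # Langmuir constant, L/mg
```

With real data one would compare this fit against the Freundlich model and the kinetic models on goodness-of-fit before drawing conclusions.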
Procedia PDF Downloads 258
414 Mechanistic Understanding of the Difference between Two Cholera-Causing Strains and Predicting Therapeutic Strategies for Cholera Patients Affected with the New Strain Vibrio Cholerae El Tor Using Constraint-Based Modelling
Authors: Faiz Khan Mohammad, Saumya Ray Chaudhari, Raghunathan Rengaswamy, Swagatika Sahoo
Abstract:
Cholera, caused by the pathogenic gut bacterium Vibrio cholerae (VC), is a major health problem in developing countries. Different strains of VC exhibit variable responses subject to different extracellular media (Nag et al., Infect Immun, 2018). In this study, we present a new approach to modelling the variable VC responses in mono- and co-cultures, subject to a continuously changing growth medium, which is otherwise difficult with a simple FBA model. Nine VC strain models and seven E. coli (EC) models were assembled and considered. The continuously changing medium is modelled using a new iterative-based controlled medium technique (ITC). The medium is appropriately prefixed with the VC model secretome. As the flux through the bacterial biomass reaction increases, the bacterium secretes certain by-products. These products add to the medium, either shifting its nutrient potential or blocking certain metabolic components of the model, effectively forming a controlled feedback loop. Different VC models were set up as monocultures of VC in a glucose-enriched medium and as co-cultures of VC strains and EC. Constrained to a glucose-enriched medium, (i) the VC_Classical model showed higher flux through its acidic secretome, suggesting a pH change of the medium leading to a lowering of its biomass, in consonance with literature reports. (ii) When compared for the neutral secretome, flux through the acetoin exchange was higher in VC_El Tor than in the classical models, suggesting that El Tor requires an acidic partner to lower its biomass. (iii) Seven of the nine VC models predicted that 3-methyl-2-oxovaleric acid, myristic acid, folic acid and acetate significantly affect the corresponding biomass reactions. (iv) V. parahaemolyticus and V. vulnificus were found to be phenotypically similar to the VC Classical strain across the nine VC strains. The work demonstrates the advantage of the ITC over regular flux balance analysis for modelling a varying growth medium.
Future expansion to co-cultures potentiates the identification of novel interacting partners as effective cholera therapeutics.
Keywords: cholera, Vibrio cholerae El Tor, Vibrio cholerae classical, acetate
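The feedback idea behind the ITC can be sketched with a toy flux balance analysis model. The network, coefficients and inhibition rule below are hypothetical stand-ins for the authors' nine-strain VC models; the sketch only shows how a secreted by-product can feed back to constrain the next round of optimization.

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometry. Metabolites: glucose (glc), acid by-product (acid).
# Reactions: v0 = glucose uptake, v1 = growth (objective), v2 = acid secretion.
#   uptake:  -> glc;   growth: glc -> 0.5 acid;   secrete: acid ->
S = np.array([
    [1.0, -1.0,  0.0],   # glc mass balance
    [0.0,  0.5, -1.0],   # acid mass balance
])

uptake_max = 10.0        # initial glucose availability in the medium
acid_pool = 0.0          # acid accumulated in the shared medium
history = []

for _ in range(5):
    bounds = [(0.0, uptake_max), (0.0, None), (0.0, None)]
    # maximize growth flux v1  <=>  minimize -v1, subject to S v = 0
    res = linprog(c=[0.0, -1.0, 0.0], A_eq=S, b_eq=np.zeros(2), bounds=bounds)
    history.append(res.x[1])
    acid_pool += res.x[2]
    # Feedback: accumulated acid lowers medium pH and inhibits the next
    # round of uptake (a simple stand-in for the controlled feedback loop).
    uptake_max = 10.0 / (1.0 + 0.1 * acid_pool)
```

Each iteration re-solves the linear program with a medium modified by the previous secretome, so growth declines round by round, mimicking the acid-driven biomass lowering described for the classical strain.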
Procedia PDF Downloads 162
413 Smart Books as a Supporting Tool for Developing Skills of Designing and Employing Webquest 2.0
Authors: Huda Alyami
Abstract:
The present study aims to measure the effectiveness of an interactive eBook in developing the skills of designing and employing webquests among female intern teachers. The study uses a descriptive analytical methodology as well as a quasi-experimental methodology. The sample consists of 30 female intern teachers from the Department of Special Education (in the tracks of Gifted Education and Learning Difficulties), during the first semester of the academic year 2015, at King Abdul-Aziz University in Jeddah. The sample is divided into 15 intern teachers for the experimental group and 15 for the control group. A set of qualitative and quantitative instruments was prepared and validated for the study: a list of webquest design skills, a list of webquest employment skills, a webquest knowledge achievement test, a product rating card, an observation card, and an interactive eBook. The study reached the following results: 1. After pre-test control, there are statistically significant differences, at the significance level of (α ≤ 0.05), between the mean scores of the experimental and control groups in the post measurement of the webquest knowledge achievement test, in favor of the experimental group. 2. There are statistically significant differences, at the significance level of (α ≤ 0.05), between the mean scores of the experimental and control groups in the post measurement of the product rating card, in favor of the experimental group. 3. There are statistically significant differences, at the significance level of (α ≤ 0.05), between the mean scores of the experimental and control groups in the post measurement of the observation card, in favor of the experimental group.
In light of these findings, the study recommends taking advantage of interactive eBooks when teaching educational courses across disciplines at the university level, and creating participative educational platforms to share interactive eBooks for various disciplines at the local and regional levels. The study suggests conducting further qualitative studies on the effectiveness of interactive eBooks, in addition to studies on the use of Web 2.0 in webquests.
Keywords: interactive eBook, webquest, design, employing, develop skills
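The group comparisons above follow the usual two-independent-samples pattern. A minimal sketch of such a comparison is below; the scores are invented, and Welch's t-test stands in for whatever specific test the study actually applied.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical post-test scores for two groups of 15 intern teachers each
# (synthetic numbers; the study's real data are not reproduced here).
experimental = rng.normal(loc=85.0, scale=5.0, size=15)
control = rng.normal(loc=75.0, scale=5.0, size=15)

# Welch's t-test: does not assume equal variances between groups
t_stat, p_value = stats.ttest_ind(experimental, control, equal_var=False)
significant = p_value < 0.05   # the study's significance level (alpha <= 0.05)
```

A positive t statistic with p below the chosen alpha corresponds to a difference "in favor of the experimental group" as reported in the results.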
Procedia PDF Downloads 184
412 End-Users Tools to Empower and Raise Awareness of Behavioural Change towards Energy Efficiency
Authors: G. Calleja-Rodriguez, N. Jimenez-Redondo, J. J. Peralta Escalante
Abstract:
This research work aims at developing a solution that takes advantage of the potential energy savings related to occupant behaviour, estimated at between 5 and 30% according to existing studies. For that purpose, the following methodology has been followed: 1) literature review and gap analysis, 2) definition of the concept and functional requirements, and 3) evaluation and feedback by experts. As a result, the concept has been defined for a tool-box that implements continuous behaviour-change interventions, named engagement methods, based on increasing energy literacy, increasing energy visibility, using bonus systems, etc. These engagement methods are deployed through a set of ICT tools: Building Automation and Control System (BACS) add-on services installed in buildings and user apps installed on smartphones, smart TVs or dashboards. The tool-box, called eTEACHER, identifies energy conservation measures (ECMs) based on behavioural change through a what-if analysis that collects information about the building and its users (comfort feedback, behaviour, etc.) and carries out cost-effectiveness calculations to provide outputs such as efficient control settings for building systems. This information is processed and shown in an attractive way as tailored advice to the energy end-users. eTEACHER's goal is therefore to change the behaviour of buildings' energy users towards energy efficiency, comfort and better health conditions by deploying customized ICT-based interventions that take into account building typology (schools, residential, offices, health care centres, etc.), user profile (occupants, owners, facility managers, employers, etc.) as well as cultural and demographic factors. One of the main findings of this work is that technological interventions for behavioural change commonly fail when users are not consulted, trained and supported regarding the technological changes, leading to poor performance in practice.
In conclusion, a strong need has been identified to carry out social studies to identify relevant behavioural issues and effective pro-environmental behaviour-change strategies.
Keywords: energy saving, behavioural change, building users, engagement methods, energy conservation measures
Procedia PDF Downloads 170
411 Repeatable Surface Enhanced Raman Spectroscopy Substrates from SERSitive for Wide Range of Chemical and Biological Substances
Authors: Monika Ksiezopolska-Gocalska, Pawel Albrycht, Robert Holyst
Abstract:
Surface Enhanced Raman Spectroscopy (SERS) is a technique used to analyze very low concentrations of substances in solutions, even in aqueous solutions, which is its advantage over IR. This technique can be used in pharmacy (to check the purity of products), in forensics (to determine whether any illegal substances were present at a crime scene), in medicine (serving as a medical test), and more. Due to the high potential of this technique, its increasing popularity in analytical laboratories, and, at the same time, the absence of appropriate platforms enhancing the SERS signal (crucial for observing the Raman effect at low analyte concentrations in solution, e.g., 1 ppm), we decided to develop our own SERS platforms. As the enhancing layer we chose gold and silver nanoparticles, because these two have the best SERS properties and each shows affinity for different kinds of particles, which increases the range of research capabilities. The next step was to commercialize them, which resulted in the creation of the company 'SERSitive.eu', focused on the production of highly sensitive (EF = 10⁵-10⁶), homogeneous and reproducible (70-80%) substrates. SERSitive SERS substrates are made by the electrodeposition of silver or silver-gold nanoparticles. Through a detailed analysis of data from studies optimizing parameters such as deposition time, temperature of the reaction solution, applied potential, reducer used, and reagent concentrations, using a standard compound, p-mercaptobenzoic acid (PMBA), at a concentration of 10⁻⁶ M, we have developed a high-performance process for depositing noble-metal nanoparticles on the surface of ITO glass. To verify the quality of the SERSitive platforms, we examined a wide range of chemical compounds and biological substances. Apart from analytes that have a great affinity for metal surfaces (e.g., PMBA), we obtained very good results for those less suited to SERS measurements.
We successfully obtained intense and, more importantly, highly repeatable spectra for amino acids (phenylalanine, 10⁻³ M), drugs (amphetamine, 10⁻⁴ M), designer drugs (cathinone derivatives, 10⁻³ M) and medicines, as well as for bacteria (Listeria, Salmonella, Escherichia coli) and fungi.
Keywords: nanoparticles, Raman spectroscopy, SERS, SERS applications, SERS substrates, SERSitive
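The enhancement factor (EF) quoted above is conventionally defined as the SERS signal per molecule divided by the normal Raman signal per molecule. The sketch below applies that standard definition; all the numbers are invented for illustration and are not SERSitive's measured values.

```python
# Analytical SERS enhancement-factor estimate:
#   EF = (I_SERS / N_SERS) / (I_ref / N_ref)
I_sers = 2.0e5   # SERS peak intensity (counts) for the analyte on the substrate
N_sers = 1.0e6   # estimated number of molecules probed on the substrate
I_ref = 1.0e4    # normal Raman intensity for the same band, bulk reference
N_ref = 5.0e10   # estimated number of molecules probed in the bulk reference

EF = (I_sers / N_sers) / (I_ref / N_ref)
# With these illustrative numbers EF lands at 1e6, the top of the
# 1e5-1e6 range quoted for the substrates.
```

The hard part in practice is estimating N_sers and N_ref (laser spot size, surface coverage, confocal volume); the arithmetic itself is trivial.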
Procedia PDF Downloads 151
410 Knowledge Loss Risk Assessment for Departing Employees: An Exploratory Study
Authors: Muhammad Saleem Ullah Khan Sumbal, Eric Tsui, Ricky Cheong, Eric See To
Abstract:
Organizations are exposed to the threat of losing valuable knowledge when employees leave, whether due to retirement, resignation, job change, or death. Owing to changing economic conditions, globalization and an aging workforce, organizations face challenges regarding the retention of valuable knowledge. On the one hand, a large number of employees are going to retire from organizations; on the other hand, the younger generation does not want to work in one company for a long time, and there is an increasing trend of frequent job changes among the new generation. Because of these factors, organizations need to make sure that they capture an employee's knowledge before he or she walks out of the door. The first step in this process is to know what type of knowledge the employee possesses and whether this knowledge is important for the organization. The literature reveals that despite the serious consequences of knowledge loss for organizational productivity and competitive advantage, little work has been done in the area of knowledge loss assessment for departing employees. An important step in the knowledge retention process is to determine the critical 'at risk' knowledge. Knowledge loss risk assessment is thus a process by which organizations can gauge the importance of a departing employee's knowledge. The purpose of this study is to explore knowledge loss risk assessment by conducting a qualitative study in the oil and gas sector. By engaging in dialogues with managers and executives of the organizations through in-depth interviews and adopting a grounded methodology approach, the research explores: i) Are there any measures adopted by organizations to assess the risk of knowledge loss from departing employees? ii) Which factors are crucial for knowledge loss assessment in organizations? iii) How can employees be prioritized for knowledge retention according to their criticality?
A grounded theory approach is used when little knowledge is available in the area under research; new knowledge about the topic is then generated through in-depth exploration, using methods such as interviews and a systematic approach to analyzing the data. The outcome of the study will be a model of knowledge loss risk based on factors such as the likelihood of knowledge loss, the consequence/impact of knowledge loss, and the quality of the knowledge lost with departing employees. Initial results show that knowledge loss assessment is crucial for organizations, as it helps determine what types of knowledge employees possess, e.g., organizational knowledge, subject matter expertise or relationship knowledge. On that basis, it can be assessed which employees are most important to the organization and how to prioritize the knowledge retention process for departing employees.
Keywords: knowledge loss, risk assessment, departing employees, Hong Kong organizations
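The risk model outlined above (likelihood, consequence/impact, quality) can be sketched as a simple multiplicative score. The 1-5 scales, weights and example employee profiles below are hypothetical illustrations, not the model the study will actually derive.

```python
# Minimal sketch: knowledge-loss risk as the product of the three factors
# named in the abstract, each scored on an assumed 1 (low) to 5 (high) scale.

def knowledge_loss_risk(likelihood, impact, quality):
    """Return a risk score; a higher product means a higher retention priority."""
    for factor in (likelihood, impact, quality):
        if not 1 <= factor <= 5:
            raise ValueError("each factor must be on a 1-5 scale")
    return likelihood * impact * quality

# Hypothetical employee profiles: (likelihood, impact, quality)
employees = {
    "subject-matter expert, retiring soon": (5, 5, 4),
    "relationship holder, stable role":     (2, 4, 3),
    "new hire, generic skills":             (3, 1, 2),
}

# Prioritize retention effort by descending risk score
ranked = sorted(employees, key=lambda e: knowledge_loss_risk(*employees[e]),
                reverse=True)
```

A real instrument would likely weight the factors and anchor each scale level in interview evidence; the multiplicative form simply makes a high score on any one factor insufficient on its own.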
Procedia PDF Downloads 408
409 Development of Electrochemical Biosensor Based on Dendrimer-Magnetic Nanoparticles for Detection of Alpha-Fetoprotein
Authors: Priyal Chikhaliwala, Sudeshna Chandra
Abstract:
Liver cancer is one of the most common malignant tumors, with a poor prognosis because it does not exhibit symptoms in the early stage of the disease. An increased serum level of AFP is clinically considered a diagnostic marker for liver malignancy. Present diagnostic modalities include various types of immunoassays, radiological studies, and biopsy. However, these tests suffer from slow response times, require significant sample volumes, achieve limited sensitivity, and are ultimately expensive and burdensome for patients. Considering all these aspects, an electrochemical biosensor based on dendrimer-magnetic nanoparticles (MNPs) was designed. Dendrimers are novel nano-sized, three-dimensional molecules with monodisperse structures. Poly-amidoamine (PAMAM) dendrimers with eight -NH₂ groups were synthesized from an ethylenediamine core by a Michael addition reaction. Dendrimers provide the added advantage of not only stabilizing Fe₃O₄ NPs but also being capable of multiple electron redox events and of binding multiple biological ligands to the dendritic end-surface. Fe₃O₄ NPs, owing to their superparamagnetic behavior, can be exploited for magneto-separation processes. The Fe₃O₄ NPs were stabilized with PAMAM dendrimer by an in situ co-precipitation method. The surface coating was examined by FT-IR, XRD, VSM, and TGA analysis. Electrochemical behavior and kinetics were evaluated using CV, which revealed that the dendrimer-Fe₃O₄ NPs can be regarded as electrochemically active materials. The electrochemical immunosensor was constructed by immobilizing anti-AFP onto the dendrimer-MNPs through a glutaraldehyde conjugation reaction. The bioconjugates were then incubated with AFP antigen. The immunosensor was characterized electrochemically, indicating successful immuno-binding events.
The binding events were further studied using magnetic particle imaging (MPI), a novel imaging modality in which Fe₃O₄ NPs are used as tracer molecules with positive contrast. Multicolor MPI was able to clearly localize the AFP antigen and antibody and their binding. The results demonstrate immense potential for biosensing and for enabling MPI of AFP in clinical diagnosis.
Keywords: alpha-fetoprotein, dendrimers, electrochemical biosensors, magnetic nanoparticles
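CV kinetic studies like the one mentioned above often lean on the Randles-Sevcik relation for a reversible, diffusion-controlled wave; the abstract does not state which analysis the authors used, so the sketch below is a generic illustration with invented parameter values.

```python
import math

def randles_sevcik(n, A_cm2, D_cm2_s, C_mol_cm3, scan_rate_V_s):
    """Peak current (A) at 25 C: ip = 2.69e5 * n^(3/2) * A * D^(1/2) * C * v^(1/2).

    Units: electrode area in cm^2, diffusion coefficient in cm^2/s,
    bulk concentration in mol/cm^3, scan rate in V/s.
    """
    return (2.69e5 * n**1.5 * A_cm2 * math.sqrt(D_cm2_s)
            * C_mol_cm3 * math.sqrt(scan_rate_V_s))

# Illustrative numbers: one-electron probe, 0.07 cm^2 electrode,
# D = 6.7e-6 cm^2/s, 1 mM analyte (1e-6 mol/cm^3), 100 mV/s scan rate
i_p = randles_sevcik(n=1, A_cm2=0.07, D_cm2_s=6.7e-6,
                     C_mol_cm3=1.0e-6, scan_rate_V_s=0.1)
```

The diagnostic use is the scaling: for a diffusion-controlled process, peak current grows with the square root of the scan rate, so quadrupling the scan rate should double ip.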
Procedia PDF Downloads 136
408 Departing beyond the Orthodoxy: An Integrative Review and Future Research Avenues of Human Capital Resources Theory
Authors: Long Zhang, Ian Hampson, Loretta O' Donnell
Abstract:
Practitioners in various industries, especially in the finance industry, which conventionally benefits from financial capital and resources, appear to be increasingly aware of the importance of human capital resources (HCR) after the 2008 Global Financial Crisis. Scholars from diverse fields have conducted extensive and fruitful research on HCR within their own disciplines. This review suggests that mainstream, purely quantitative research alone is insufficient to provide a precise or comprehensive understanding of HCR. The complex relationships and interactions in HCR call for more integrative and cross-disciplinary research to understand HCR more holistically. The complex nature of HCR requires deep qualitative exploration based on in-depth data to capture the everydayness of organizational activities and to register its individuality and variety. Despite previous efforts, a systematic and holistic integration of HCR research across multiple disciplines is lacking. Using a retrospective analysis of articles published in the fields of economics, finance and management, including psychology, human resources management (HRM), organizational behaviour (OB), industrial and organizational psychology (I-O psychology), organizational theory, and strategy, this study summarizes and compares the major perspectives, theories, and findings of HCR research. A careful examination of the progress of the debates over HCR definitions and measurements in distinct disciplines enables the identification of limitations and gaps in existing research. It enables an analysis of the interplay of these concepts, as well as of the related concepts of intellectual capital, social capital, and Chinese guanxi, and of how they provide a broader perspective on the HCR-related influences on firms' competitive advantage.
The study also introduces the theme of Environmental, Social and Governance (ESG)-based investing, as the burgeoning body of ESG studies illustrates the rising importance of human and non-financial capital in the investment process. The ESG literature places HCR in the broader research context of the value of non-financial capital in explaining firm performance. The study concludes with a discussion of new directions for future research that may help advance our knowledge of HCR.
Keywords: human capital resources, social capital, Chinese guanxi, human resources management
Procedia PDF Downloads 359
407 The Problematic Transfer of Classroom Creativity in Business to the Workplace
Authors: Kym Drady
Abstract:
This paper considers whether creativity is the missing link that would allow organisational behaviour and profitability to evolve if it were 'released'. It suggests that although many organisations try to engage their workforce and expect innovation, they fail to provide the means for its achievement. The paper suggests that creative thinking is the 'glue' that links organisational performance to profitability. A key role of a university today is to produce skilled and capable graduates. Increasing competition and internationalisation have meant that the employability agenda has never been more prominent within the field of education; as such, it should be a key consideration when designing and developing a curriculum. It has been suggested that creativity is a valuable personal skill and should perhaps be the focus of an organisation's business strategy in order to increase its competitive advantage in the twenty-first century. Flexible and agile graduates are now required to be creative in their use of skills and resources in an increasingly complex and sophisticated global market. The paper therefore asks why, if this is the case, creativity fails to appear as a key curriculum subject in many business schools. It also considers why policy makers continue to neglect this critical issue when it could offer the key to economic prosperity. Recent literature goes some way towards addressing this by suggesting that small clusters of UK universities have started including some creativity in their PDP work. However, this paper builds on that work and proposes that creativity should become a central component of the curriculum: it should appear in every area of the curriculum and act as the link that connects productivity to profitability, rather than being marginalised as an additional part of the curriculum.
A range of data-gathering methods has been used, each drawn from a qualitative base, as it was felt that, given the nature of the study, individuals' thoughts and feelings needed to be examined and reflection was important. The author also recognises the importance of her own reflection, both on the experiences of the students and their later working experiences and on the creative elements within the programme that she delivered. This paper is drawn from research undertaken by the author for her PhD study, which explores the potential benefits of including creativity in the curriculum within business schools and the added value this could bring to graduates' employability. To conclude, creativity is, in the opinion of the author, the missing link to organisational profitability and as such should be prioritised, especially by higher education providers.
Keywords: business curriculum, higher education, creative thinking and problem-solving, creativity
Procedia PDF Downloads 275
406 Object-Scene: Deep Convolutional Representation for Scene Classification
Authors: Yanjun Chen, Chuanping Hu, Jie Shao, Lin Mei, Chongyang Zhang
Abstract:
Traditional image classification is based on encoding schemes (e.g., Fisher Vector, Vector of Locally Aggregated Descriptors) with low-level image features (e.g., SIFT, HoG). Compared to these low-level local features, deep convolutional features obtained at the mid-level layers of convolutional neural networks (CNNs) carry richer information but lack geometric invariance. In scene classification, scenes contain scattered objects of different size, category, layout, number and so on. It is crucial to find the distinctive objects in a scene as well as their co-occurrence relationships. In this paper, we propose a method that takes advantage of both deep convolutional features and the traditional encoding scheme while taking object-centric and scene-centric information into consideration. First, to exploit object-centric and scene-centric information, two CNNs, trained on the ImageNet and Places datasets respectively, are used as pre-trained models to extract deep convolutional features at multiple scales, producing dense local activations. By analyzing the performance of different CNNs at multiple scales, we find that each CNN works better in a different scale range. A scale-wise CNN adaptation is reasonable, since objects in a scene appear at their own specific scales. Second, a Fisher kernel is applied to aggregate a global representation at each scale, and the per-scale representations are then merged into a single vector by a post-processing method called scale-wise normalization. The essence of the Fisher Vector lies in the accumulation of first- and second-order differences; hence, scale-wise normalization followed by average pooling balances the influence of each scale, since different numbers of features are extracted at each scale. Third, the Fisher Vector representation based on the deep convolutional features is fed to a linear Support Vector Machine, a simple yet efficient way to classify the scene categories.
Experimental results show that scale-specific feature extraction and normalization with CNNs trained on object-centric and scene-centric datasets boost the results from 74.03% up to 79.43% on MIT Indoor67 when only two scales are used (compared to results at a single scale). The result is comparable to state-of-the-art performance, which shows that the representation can be applied to other visual recognition tasks.
Keywords: deep convolutional features, Fisher Vector, multiple scales, scale-specific normalization
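The scale-wise normalization and pooling step described above can be sketched as follows. Random vectors stand in for the real per-scale Fisher Vector encodings, and the signed power normalization exponent (0.5, i.e., signed square root) is the choice commonly paired with Fisher Vectors, assumed here rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
num_scales, dim = 3, 8
# Stand-ins for per-scale Fisher Vector encodings of one image
per_scale = [rng.normal(size=dim) for _ in range(num_scales)]

def normalize(v, alpha=0.5):
    """Signed power normalization followed by L2 normalization."""
    v = np.sign(v) * np.abs(v) ** alpha   # damps bursty components
    return v / np.linalg.norm(v)

# Normalize each scale separately, then average-pool across scales so that
# no single scale dominates regardless of how many features it contributed.
normalized = [normalize(v) for v in per_scale]
pooled = np.mean(normalized, axis=0)      # single vector for the linear SVM
```

Because every scale is forced onto the unit sphere before pooling, the averaged vector weights the scales equally, which is the balancing effect the abstract attributes to scale-wise normalization.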
Procedia PDF Downloads 331
405 Portable and Parallel Accelerated Development Method for Field-Programmable Gate Array (FPGA)-Central Processing Unit (CPU)-Graphics Processing Unit (GPU) Heterogeneous Computing
Authors: Nan Hu, Chao Wang, Xi Li, Xuehai Zhou
Abstract:
The field-programmable gate array (FPGA) has been widely adopted in the high-performance computing domain. In recent years, embedded systems-on-chip (SoCs) have come to contain coarse-granularity multi-core CPUs (central processing units) and mobile GPUs (graphics processing units) that can be used as general-purpose accelerators. The motivation is that algorithms with various parallel characteristics can be efficiently mapped to a heterogeneous architecture coupling these three processors. The CPU and GPU offload some computationally intensive tasks from the FPGA to reduce resource consumption and lower the overall cost of the system. However, in common present-day scenarios, applications utilize only one type of accelerator, because development approaches supporting the collaboration of heterogeneous processors face challenges. A systematic approach is therefore needed that takes advantage of write-once-run-anywhere portability and the high execution performance of modules mapped to the various architectures, and that facilitates the exploration of the design space. In this paper, a servant-execution-flow model is proposed for abstracting the cooperation of the heterogeneous processors; it supports task partition, communication and synchronization. At first run, the intermediate language, represented by a data flow diagram, can generate executable code for the target processor or be converted into high-level programming languages. Instantiation parameters efficiently control the relationship between the modules and the computational units, including the mapping of the two hierarchical processing units and the adjustment of data-level parallelism. An embedded system for a three-dimensional waveform oscilloscope is selected as a case study, and the performance of algorithms such as contrast stretching is analyzed with implementations on various combinations of these processors.
The experimental results show that the heterogeneous computing system achieves performance similar to that of the pure-FPGA implementation, with approximately the same energy efficiency, while using less than 35% of the resources.
Keywords: FPGA-CPU-GPU collaboration, design space exploration, heterogeneous computing, intermediate language, parameterized instantiation
Procedia PDF Downloads 118
404 Development of a Turbulent Boundary Layer Wall-Pressure Fluctuations Power Spectrum Model Using a Stepwise Regression Algorithm
Authors: Zachary Huffman, Joana Rocha
Abstract:
Wall-pressure fluctuations induced by the turbulent boundary layer (TBL) that develops over an aircraft are a significant source of cabin noise. Since the power spectral density (PSD) of these pressure fluctuations is directly correlated with the amount of sound radiated into the cabin, the development of accurate empirical models that predict the PSD has been an important ongoing research topic. The radiated sound can be derived from the pressure-fluctuation term in the Reynolds-averaged Navier-Stokes (RANS) equations. Accordingly, early TBL empirical models (including those of Lowson, Robertson, Chase, and Howe) were derived primarily by simplifying and solving the RANS equations for the pressure fluctuation and adding appropriate scales. Most subsequent models (including the Goody, Efimtsov, Laganelli, Smol’yakov, and Rackl and Weston models) were derived by modifying these early models or from physical principles. Overall, these models have had varying levels of accuracy: in general, they are most accurate at the specific Reynolds and Mach numbers for which they were developed and less accurate under other flow conditions. Despite this, research into alternative methods for deriving such models has been rather limited. Recent studies have demonstrated that an artificial neural network model was more accurate than the traditional models and could be applied more generally, but the accuracy of other machine learning techniques has not been explored. In the current study, an original model is derived using a stepwise regression algorithm in the statistical programming language R, using TBL wall-pressure fluctuation PSD data gathered at the Carleton University wind tunnel. The theoretical advantage of a stepwise regression approach is that it automatically filters out redundant or uncorrelated input variables (through feature selection), and it is computationally cheaper than machine learning methods.
The main disadvantage is the potential risk of overfitting. The accuracy of the developed model is assessed by comparing it to independently sourced datasets.
Keywords: aircraft noise, machine learning, power spectral density models, regression models, turbulent boundary layer wall-pressure fluctuations
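The stepwise feature-selection idea underlying this abstract can be sketched in a few lines. The following is a minimal forward-selection illustration in plain Python (not the authors' R implementation; the feature names and data are synthetic): at each step the candidate variable that most reduces the residual sum of squares is added, and selection stops when the improvement becomes negligible, which is how redundant or uncorrelated inputs get filtered out.

```python
def ols_rss(X, y):
    """Least-squares fit of y on the columns of X via the normal equations;
    returns the residual sum of squares (pure Python, no dependencies)."""
    n, p = len(X), len(X[0])
    A = [[sum(X[i][j] * X[i][k] for i in range(n)) for k in range(p)] for j in range(p)]
    c = [sum(X[i][j] * y[i] for i in range(n)) for j in range(p)]
    for col in range(p):                      # Gaussian elimination, partial pivoting
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        c[col], c[piv] = c[piv], c[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for k in range(col, p):
                A[r][k] -= f * A[col][k]
            c[r] -= f * c[col]
    b = [0.0] * p
    for j in reversed(range(p)):
        b[j] = (c[j] - sum(A[j][k] * b[k] for k in range(j + 1, p))) / A[j][j]
    return sum((y[i] - sum(X[i][j] * b[j] for j in range(p))) ** 2 for i in range(n))

def forward_stepwise(features, y, tol=1e-6):
    """Greedy forward selection: repeatedly add the feature that most
    reduces the RSS; stop when the improvement becomes negligible."""
    remaining = dict(features)
    chosen = []
    cols = [[1.0] for _ in y]                 # start from an intercept-only model
    best = ols_rss(cols, y)
    while remaining:
        name, rss = min(
            ((nm, ols_rss([row + [f[i]] for i, row in enumerate(cols)], y))
             for nm, f in remaining.items()),
            key=lambda t: t[1])
        if best - rss <= tol * max(best, 1e-9):
            break                             # redundant/uncorrelated input: drop it
        best, chosen = rss, chosen + [name]
        f = remaining.pop(name)
        cols = [row + [f[i]] for i, row in enumerate(cols)]
    return chosen
```

With a response that depends only on two of three candidate predictors, the procedure selects those two and leaves the irrelevant one out, which is exactly the feature-selection behavior the abstract attributes to stepwise regression.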
Procedia PDF Downloads 135
403 Numerical Simulation on Airflow Structure in the Human Upper Respiratory Tract Model
Authors: Xiuguo Zhao, Xudong Ren, Chen Su, Xinxi Xu, Fu Niu, Lingshuai Meng
Abstract:
Respiratory diseases such as asthma, emphysema, and bronchitis are connected with air pollution, and their incidence tends to increase; this may be attributed to toxic aerosol deposition in the human upper respiratory tract or in the bifurcations of the human lung. The therapy of these diseases mostly uses pharmaceuticals in aerosol form delivered into the human upper respiratory tract or the lung. Understanding the airflow structures in the human upper respiratory tract plays a very important role in analyzing the “filtering” effect in the pharynx/larynx and in obtaining correct air-particle inlet conditions for the lung. Numerical simulation based on CFD (Computational Fluid Dynamics) technology has a distinct advantage for studying airflow structure in the human upper respiratory tract. In this paper, a representative human upper respiratory tract model was built, and CFD technology was used to investigate the air-movement characteristics within it. The airflow-movement characteristics, the effect of the airflow on the shear-stress distribution, and the probability of wall injury caused by the shear stress are discussed. Experimentally validated computational fluid-aerosol dynamics results showed the following: airflow separation appears near the outer wall of the pharynx and the trachea. A high-velocity zone is created near the inner wall of the trachea. The airflow splits at the divider, and a new boundary layer is generated at the inner wall downstream of the bifurcation, with high velocity near the inner wall of the trachea. The maximum velocity appears at the exterior of the boundary layer. The secondary swirls and the axial velocity distribution result in high shear stress acting on the inner wall of the trachea and the bifurcation, finally leading to inner-wall injury.
Increased breathing intensity increases the shear stress acting on the inner wall of the trachea and the bifurcation. If a person maintains high breathing intensity for a long time, not only does the capacity for gas transport and regulation through the trachea and the bifurcation fall, but the probability of wall strain and tissue injury also increases.
Keywords: airflow structure, computational fluid dynamics, human upper respiratory tract, wall shear stress, numerical simulation
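As a back-of-the-envelope companion to the shear-stress discussion (illustrative only; the study itself uses full CFD of a realistic geometry), the laminar Poiseuille relation τ_w = 8μU/D shows directly why higher breathing intensity raises wall shear stress: τ_w grows linearly with the mean velocity. The trachea diameter and inlet velocity below are assumed round numbers, not values from the paper.

```python
def reynolds(rho, u, d, mu):
    """Reynolds number for pipe-like flow: Re = rho * U * D / mu."""
    return rho * u * d / mu

def wall_shear_laminar(mu, u_mean, d):
    """Wall shear stress for fully developed laminar pipe (Poiseuille) flow:
    tau_w = 8 * mu * U_mean / D."""
    return 8.0 * mu * u_mean / d

# Illustrative trachea-scale numbers (assumed, not from the study):
rho, mu = 1.2, 1.8e-5    # air density [kg/m^3], dynamic viscosity [Pa*s]
d = 0.018                # trachea diameter [m]
u = 1.5                  # mean velocity [m/s], quiet breathing
print(f"Re    = {reynolds(rho, u, d, mu):.0f}")
print(f"tau_w = {wall_shear_laminar(mu, u, d):.4f} Pa")
```

Doubling the breathing intensity doubles τ_w in this regime, consistent with the abstract's observation that stronger breathing raises the stress on the tracheal wall; at these velocities the Reynolds number is already near transition, which is why the actual study needs CFD rather than this laminar estimate.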
Procedia PDF Downloads 247
402 Web-Based Tools to Increase Public Understanding of Nuclear Technology and Food Irradiation
Authors: Denise Levy, Anna Lucia C. H. Villavicencio
Abstract:
Food irradiation is a processing and preservation technique used to eliminate insects and parasites and to reduce disease-causing microorganisms. Moreover, the process helps to inhibit sprouting and delay ripening, extending the shelf-life of fresh fruits and vegetables. Nevertheless, most Brazilian consumers seem to misunderstand the difference between irradiated food and radioactive food, and the general public has major concerns about negative health effects and environmental contamination. Society's judgment and decision making are directly linked to perceived benefits and risks. The web-based project entitled ‘Scientific information about food irradiation: Internet as a tool to approach science and society’ was created by the Nuclear and Energy Research Institute (IPEN) in order to offer an interdisciplinary approach to science education, integrating the economic, ethical, social, and political aspects of food irradiation. The project takes into account that misinformation and unfounded preconceived ideas weigh heavily on the acceptance of irradiated food and on Brazilian consumers' purchase intentions. Taking advantage of the potential of the Internet to enhance communication and education among the general public, a research study was carried out on the possibilities and trends of information and communication technologies among the Brazilian population. The content includes concepts, definitions, and Frequently Asked Questions (FAQ) about the processes, safety, advantages, limitations, and possibilities of food irradiation, including health issues as well as environmental impacts. The project comprises eight self-instructional interactive web courses that situate scientific content in relevant social contexts in order to encourage self-learning and further reflection. Communication is a must for improving public understanding of science.
The use of information technology for high-quality science communication can contribute greatly to spreading information throughout the country to as many people as possible, minimizing geographic distances and stimulating communication and development.
Keywords: food irradiation, multimedia learning tools, nuclear science, society and education
Procedia PDF Downloads 248
401 Intracellular Sphingosine-1-Phosphate Receptor 3 Contributes to Lung Tumor Cell Proliferation
Authors: Michela Terlizzi, Chiara Colarusso, Aldo Pinto, Rosalinda Sorrentino
Abstract:
Sphingosine-1-phosphate (S1P) is a membrane-derived bioactive phospholipid exerting a multitude of effects on respiratory cell physiology and pathology through five S1P receptors (S1PR1-5). Higher levels of S1P have been registered in a broad range of respiratory diseases, including inflammatory disorders and cancer, although its exact role is still elusive. Building on our previous study, in which we found that S1P/S1PR3 is involved in an inflammatory pattern via the activation of Toll-like Receptor 9 (TLR9), which is highly expressed on lung cancer cells, the main goal of the current study was to better understand the involvement of the S1P/S1PR3 pathway during lung carcinogenesis, taking advantage of a mouse model of first-hand smoke exposure and of carcinogen-induced lung cancer. We used human samples of Non-Small Cell Lung Cancer (NSCLC), a mouse model of first-hand smoking, Benzo(a)pyrene (BaP)-induced tumor-bearing mice, and A549 lung adenocarcinoma cells. We found that the intranuclear, but not the membrane, localization of S1PR3 was associated with the proliferation of lung adenocarcinoma cells, a mechanism that correlated with the human samples and with the mouse models of smoke exposure and carcinogen-induced lung cancer, both characterized by higher utilization of S1P. Indeed, inhibition of membrane S1PR3 did not alter tumor cell proliferation after TLR9 activation. Instead, consistent with the nuclear localization of sphingosine kinase II (SPHK II), the enzyme that catalyzes the last step of S1P synthesis, inhibition of the kinase completely blocked endogenous S1P-induced tumor cell proliferation.
These results prove that endogenous, TLR9-induced S1P can, on the one hand, favor pro-inflammatory mechanisms in the tumor microenvironment via the activation of cell-surface receptors and, on the other, promote tumor progression via the nuclear S1PR3/SPHK II axis. This highlights a novel molecular mechanism that identifies S1P as one of the crucial mediators of lung carcinogenesis-associated inflammatory processes and that could suggest differential therapeutic approaches, especially for non-responsive lung cancer patients.
Keywords: sphingosine-1-phosphate (S1P), S1P Receptor 3 (S1PR3), smoking-mice, lung inflammation, lung cancer
Procedia PDF Downloads 201
400 Gaze Behaviour of Individuals with and without Intellectual Disability for Nonaccidental and Metric Shape Properties
Authors: S. Haider, B. Bhushan
Abstract:
The eye-gaze behaviour of individuals with and without intellectual disability is investigated in an eye-tracking study in terms of sensitivity to nonaccidental (NAP) and metric (MP) shape properties. Total fixation time is used as an indirect measure of attention allocation. Previous studies have found mean reaction times for nonaccidental properties (NAPs) to be shorter than for metric properties (MPs) when the MP and NAP differences were equalized. Methods: Twenty-five individuals with intellectual disability (mild and moderate levels of mental retardation) and twenty-seven normal individuals were compared on mean total fixation duration, accuracy, and mean reaction time for mild-NAP, extreme-NAP, and metric-property images. 2D images of cylinders were adapted and made into forced-choice match-to-sample tasks. A Tobii TX300 eye tracker was used to record total fixation duration, with data obtained from the Areas of Interest (AOI). Variable trial duration (each participant's total reaction time) and fixed trial duration (data taken at each second from one to fifteen seconds) were used for the analyses. The two groups did not differ in fixation times (fixed or variable) across any of the three image manipulations, but they did differ in reaction time and accuracy. Normal individuals had longer reaction times than individuals with intellectual disability across all image types. The groups also differed significantly on the accuracy measure across all image types, with normal individuals performing better on all three. Mild NAP vs. metric differences: there was a significant difference between mild-NAP and metric-property images in reaction time. Mild-NAP images had significantly longer reaction times than metric images for normal individuals, but this difference was not found for individuals with intellectual disability. Mild-NAP images yielded significantly better accuracy than metric images in both groups.
In conclusion, the type of image manipulation did not produce differences in attention allocation between individuals with and without intellectual disability. Mild nonaccidental properties facilitated better accuracy than metric properties in both groups, but this advantage was seen only in the normal group in terms of mean reaction time.
Keywords: eye gaze fixations, eye movements, intellectual disability, stimulus properties
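For a fixed-rate eye tracker such as the 300 Hz Tobii TX300, the "total fixation time in an AOI" measure used above reduces to counting gaze samples that fall inside the area of interest and multiplying by the sampling interval. A minimal sketch (the gaze coordinates and AOI below are hypothetical, purely for illustration):

```python
def total_fixation_time(samples, aoi, dt):
    """Total time gaze falls inside a rectangular Area of Interest (AOI).
    samples: gaze points (x, y); aoi: (x0, y0, x1, y1); dt: sampling interval [s]."""
    x0, y0, x1, y1 = aoi
    return dt * sum(1 for x, y in samples if x0 <= x <= x1 and y0 <= y <= y1)

# Hypothetical 300 Hz recording: 6 samples, 4 of them inside the AOI.
gaze = [(10, 10), (12, 11), (50, 50), (13, 9), (11, 12), (90, 5)]
aoi = (0, 0, 20, 20)
print(total_fixation_time(gaze, aoi, dt=1.0 / 300.0))   # ~0.0133 s
```

Real pipelines first aggregate raw samples into fixations with a filter (velocity- or dispersion-based) before summing durations, but the AOI accounting is the same.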
Procedia PDF Downloads 553
399 Alphabet Recognition Using Pixel Probability Distribution
Authors: Vaidehi Murarka, Sneha Mehta, Dishant Upadhyay
Abstract:
Our project topic is “Alphabet Recognition using pixel probability distribution”. The project uses techniques from image processing and machine learning in computer vision. Alphabet recognition is the mechanical or electronic translation of scanned images of handwritten, typewritten, or printed text into machine-encoded text. It is widely used to convert books and documents into electronic files. Alphabet-recognition-based OCR applications are sometimes used in signature recognition, which is employed in banks and other high-security buildings. One popular mobile application reads a visiting card and stores it directly to the contacts. OCR is also used in radar systems for reading the license plates of speeding vehicles, among many other applications. Our implementation was done using Visual Studio and OpenCV (Open Source Computer Vision). The algorithm is based on neural networks (machine learning). The project was implemented in three modules: (1) Training: this module performs database generation. The database was generated using two methods: (a) run-time generation, in which the database is generated using the inbuilt fonts of the OpenCV library; human intervention is not necessary for generating this database. (b) Contour detection, in which a ‘jpeg’ template containing different fonts of an alphabet is converted to the weight matrix using specialized OpenCV functions (contour detection and blob detection). The main advantage of this type of database generation is that the algorithm becomes self-learning and the final database requires little memory to store (119 KB, precisely). (2) Preprocessing: the input image is pre-processed using image-processing techniques such as adaptive thresholding, binarization, and dilation, and is made ready for segmentation. Segmentation includes the extraction of lines, words, and letters from the processed text image.
(3) Testing and prediction: the extracted letters are classified and predicted using the neural network algorithm. The algorithm recognizes an alphabet based on mathematical parameters calculated using the database and the weight matrix of the segmented image.
Keywords: contour-detection, neural networks, pre-processing, recognition coefficient, runtime-template generation, segmentation, weight matrix
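The "pixel probability distribution" idea in the title can be illustrated with a toy Bernoulli model, independent of the neural network the project actually uses: per-class pixel "on" probabilities are estimated from binarized training grids, and a segmented letter is assigned to the class with the highest log-likelihood. The 3x3 glyphs below are made up for illustration; real grids would come from the segmentation step.

```python
import math

def train_pixel_probs(samples_by_class):
    """Estimate per-class pixel 'on' probabilities with Laplace smoothing.
    samples_by_class maps a label to a list of flattened 0/1 grids."""
    probs = {}
    for label, grids in samples_by_class.items():
        n = len(grids)
        probs[label] = [(sum(g[i] for g in grids) + 1.0) / (n + 2.0)
                        for i in range(len(grids[0]))]
    return probs

def classify(grid, probs):
    """Assign the grid to the class maximizing the Bernoulli log-likelihood."""
    def loglik(p):
        return sum(math.log(pi if bit else 1.0 - pi) for bit, pi in zip(grid, p))
    return max(probs, key=lambda label: loglik(probs[label]))

# Made-up 3x3 'glyphs' standing in for segmented letters:
I = [0, 1, 0,  0, 1, 0,  0, 1, 0]
O = [1, 1, 1,  1, 0, 1,  1, 1, 1]
model = train_pixel_probs({"I": [I, I], "O": [O, O]})
print(classify([0, 1, 0, 0, 1, 0, 0, 1, 0], model))   # "I"
```

The per-class probability maps play the role of the "weight matrix" mentioned above: each template font contributes counts, and the stored model is tiny relative to the raw images.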
Procedia PDF Downloads 389
398 Attitude to the Types of Organizational Change
Authors: O. Y. Yurieva, O. V. Yurieva, O. V. Kiselkina, A. V. Kamaseva
Abstract:
Since the early 2000s, innovative changes have been underway in the Russian civil service as a result of administrative reform. The reform agenda includes a fundamental change in the personnel component: raising the professionalism of officials and increasing their capacity for self-organization and self-regulation. To achieve this, the civil service must be able to change continuously. Organizational change has long been a subject of scientific inquiry; research in the field covers the methodological aspects of implementing change, the specifics of change in different types of organizations (business, government, and so on), and the design of change in organizations, including change grounded in organizational culture. Organizational change in the civil service, however, remains among the least studied areas, and research on its transformation is fragmentary. According to Herbert Simon's theory of resistance, the root of opposition to and rejection of change lies in the individual, who will resist any change that threatens to undermine his or her satisfaction as a member of the organization, regardless of the reasons for the change. Thus, a condition for successful adaptation to organizational change is the ability of staff to accept innovation. Within this problem, the study sought to assess the innovativeness of civil servants and to determine their readiness to develop proposals for implementing organizational change in the public service. To identify attitudes to organizational change, a case study was carried out using I. Motovilina's 'Attitudes to organizational change' method, which makes it possible to predict the type of resistance to change and to reveal contradictions and hidden results. The advantages of Motovilina's method are its brevity and simplicity, the analysis of responses to each question, and the use of 'overlapping' questions to probe potentially conflicting factors. The study found that respondents have a more positive attitude to local changes than to those taking place in reality, such as 'increased opportunities for professional growth', 'increased requirements for the level of professionalism', and 'the emergence of initiatives from below'. The diagnostics of attitudes to organizational change in the public service revealed specific problem areas rooted in a lack of understanding of the importance of personnel innovation amid the bureaucratization of innovation in public service organizations.
Keywords: innovative changes, self-organization, self-regulation, civil service
Procedia PDF Downloads 460
397 Product Separation of Green Processes and Catalyst Recycling of a Homogeneous Polyoxometalate Catalyst Using Nanofiltration Membranes
Authors: Dorothea Voß, Tobias Esser, Michael Huber, Jakob Albert
Abstract:
The growing world population and the associated increases in demand for energy and consumer goods, as well as increasing waste production, require the development of sustainable processes. In addition, the increasing environmental awareness of our society is a driving force behind the requirement that processes be as resource- and energy-efficient as possible. In this context, the use of polyoxometalate (POM) catalysts has emerged as a promising approach for the development of green processes. POMs are bifunctional polynuclear metal-oxo-anion clusters characterized by strong Brønsted acidity, high proton mobility combined with fast multi-electron transfer, and a tunable redox potential. In addition, POMs are soluble in many common solvents and exhibit resistance to hydrolytic and oxidative degradation. Owing to their structure and excellent physicochemical properties, POMs are efficient acid and oxidation catalysts that have attracted much attention in recent years; oxidation processes with molecular oxygen are particularly worth mentioning. However, the fact that POM catalysts are homogeneous poses a challenge for the downstream processing of product solutions and for recycling the catalysts. In this regard, nanofiltration membranes have gained increasing interest in recent years, particularly due to their relative sustainability advantage over other technologies and their unique properties, such as increased selectivity towards multivalent ions. In order to establish an efficient downstream process for the highly selective separation of homogeneous POM catalysts from aqueous solutions using nanofiltration membranes, a laboratory-scale membrane system was designed and constructed. By varying various process parameters, a sensitivity analysis was performed on a model system to develop an optimized method for the recovery of POM catalysts. From this, process-relevant key figures, such as the rejection of the various system components, were derived.
These results form the basis for further experiments on other systems, to test transferability to several separation tasks with different POMs and products, as well as for catalyst-recycling experiments in laboratory-scale processes.
Keywords: downstream processing, nanofiltration, polyoxometalates, homogeneous catalysis, green chemistry
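One of the "process-relevant key figures" named above, the observed rejection of a membrane, has a simple standard definition that can be sketched directly; the concentrations below are hypothetical, chosen only to illustrate a catalyst-retention scenario.

```python
def observed_rejection(c_feed, c_permeate):
    """Observed solute rejection of a membrane: R = 1 - Cp/Cf.
    R -> 1 means the solute (e.g. the POM catalyst) is fully retained
    by the membrane; R -> 0 means it passes freely into the permeate."""
    return 1.0 - c_permeate / c_feed

# Hypothetical catalyst concentrations [g/L] on the feed and permeate side:
r = observed_rejection(c_feed=1.0, c_permeate=0.02)
print(f"POM rejection: {r:.1%}")   # 98.0%
```

In a recycling context, a high rejection for the bulky multivalent POM anion combined with a low rejection for the small product molecules is exactly the selectivity that makes nanofiltration attractive for this separation.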
Procedia PDF Downloads 89
396 Youth and Radicalization: Main Causes That Lead Young People to Radicalize in a Context with a Background of Radicalization
Authors: Zineb Emrane
Abstract:
This abstract addresses the radicalization of young people in a context with a background of radicalization: northern Morocco, the region from which five of the terrorists involved in the Madrid attacks of 11 March came. A pilot study was developed describing young people's perceptions of the main causes that lead to and motivate radicalization. Whenever this topic is discussed, information is usually drawn from studies and investigations by specialists in the field, but voice is rarely given to the protagonists, who in many cases are victims: specifically, young people at social risk because of social factors. Extremist radicalization is an expanding phenomenon that affects young people in northern Morocco. They live in a context with a radical background and at risk of social exclusion; their social, economic, and family needs make them vulnerable. Extremist groups take advantage of this vulnerability to involve them in a process of radicalization, offering them an alternative environment where they can find everything they are looking for. This pilot study approaches the main causes that lead and motivate young people to become radicalized, analyzing their context with emphasis on influencing factors, and bearing in mind the young people's own analysis of how the radical background affects them and their opinion of this phenomenon. The pilot study was carried out through the following actions: group dynamics with young people to analyze the process of violent radicalization; a participatory workshop with members of organizations that work directly with young people at risk of radicalization; interviews with institutional managers; and participant observation. The implementation of these actions led to the conclusion that young people define violent radicalization as a sequential process which, depending on the stage, can be deconstructed.
Young people report that they stop feeling a sense of belonging to their family, school, and neighborhood when they see behavior contrary to their notions of good and evil. The emotional rupture, and the search for references outside their circle, push them to sympathize with groups that hold an extremist ideology and that offer them what they need. Radicalization is a process with different stages; the main causes and factors that lead young people to use extremist violence are related to a low sense of belonging to their context and a lack of critical thinking about important issues. Young people are at a vulnerable stage, searching for their identity and for a space in which they can be accepted, and when they do not find it, they are easily manipulated and susceptible to being attracted by extremist groups.
Keywords: exclusion, radicalization, vulnerability, youth
Procedia PDF Downloads 163
395 Design and Development of Permanent Magnet Quadrupoles for Low Energy High Intensity Proton Accelerator
Authors: Vikas Teotia, Sanjay Malhotra, Elina Mishra, Prashant Kumar, R. R. Singh, Priti Ukarde, P. P. Marathe, Y. S. Mayya
Abstract:
Bhabha Atomic Research Centre, Trombay, is developing the Low Energy High Intensity Proton Accelerator (LEHIPA) as a pre-injector for a 1 GeV proton accelerator for an accelerator-driven sub-critical reactor system (ADSS). LEHIPA consists of an RFQ (Radio Frequency Quadrupole) and a DTL (Drift Tube Linac) as the major accelerating structures. The DTL is an RF resonator operating in the TM010 mode that provides a longitudinal E-field for the acceleration of charged particles. The RF design of the DTL drift tubes was carried out to maximize the shunt impedance; this demands that the diameter of the drift tubes (DTs) be as small as possible. The width of each DT is, however, determined by the particle β and a trade-off between the transit time factor and the effective accelerating voltage in the DT gap. The array of drift tubes inside the DTL shields the accelerated particles from the decelerating RF phase and provides transverse focusing to the charged particles, which otherwise tend to diverge due to Coulombic repulsion and the transverse E-field at the entry of the DTs. The magnetic lenses housed inside the DTs control the transverse emittance of the beam. Quadrupole magnets are preferred over solenoid magnets because of the relatively higher focusing strength of the former. The small volume available inside the DTs for housing magnetic quadrupoles has motivated the use of permanent magnet quadrupoles rather than electromagnetic quadrupoles (EMQs). This provides a further advantage, since Joule heating, which would have added a thermal load in a continuous-cycle accelerator, is avoided. Beam dynamics requires the uniformity of the integral magnetic gradient to be better than ±0.5%, with a nominal value of 2.05 tesla. The paper describes the magnetic design of the PMQ using Sm2Co17 rare-earth permanent magnets, and discusses the results of five pre-series prototype fabrications, the qualification of the prototype permanent magnet quadrupoles, and a full-scale DT developed with embedded PMQs.
The paper discusses the magnetic pole design for optimizing the integral Gdl uniformity and the values of the higher-order multipoles. A novel but simple method of tuning the integral Gdl is also discussed.
Keywords: DTL, focusing, PMQ, proton, rare earth magnets
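The trade-off between drift-tube gap width and acceleration mentioned above is conventionally captured by the transit time factor. For a field assumed uniform across a gap of length g, the textbook expression (standard linac theory, not a LEHIPA-specific result) is:

```latex
\Delta W = q\,E_0\,T\,L\,\cos\varphi_s,
\qquad
T = \frac{\sin\!\left(\pi g / \beta\lambda\right)}{\pi g / \beta\lambda}
```

Here $\Delta W$ is the energy gain per cell, $E_0$ the average axial field, $L$ the cell length, $\varphi_s$ the synchronous phase, $\beta$ the particle velocity ratio, and $\lambda$ the RF wavelength. Widening the gap lowers $T$ and hence the effective accelerating voltage $E_0 T L$, which is why the DT width cannot simply be shrunk to maximize shunt impedance.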
Procedia PDF Downloads 472
394 Comparing Stability Index MAPping (SINMAP) Landslide Susceptibility Models in the Río La Carbonera, Southeast Flank of Pico de Orizaba Volcano, Mexico
Authors: Gabriel Legorreta Paulin, Marcus I. Bursik, Lilia Arana Salinas, Fernando Aceves Quesada
Abstract:
In volcanic environments, landslides and debris flows occur continually along the stream systems of large stratovolcanoes. This is the case on Pico de Orizaba volcano, the highest mountain in Mexico. The volcano has great potential to impact and damage human settlements and economic activities through landslides. People living along the lower valleys of Pico de Orizaba volcano are under continuous hazard from the coalescence of upstream landslide sediments, which increases the destructive power of debris flows. These debris flows not only produce floods but also cause the loss of lives and property. Despite the importance of assessing such processes, there are few landslide inventory maps and landslide susceptibility assessments. As a result, no assessment of landslide susceptibility models has been conducted in Mexico to evaluate their advantages and disadvantages. In this study, a comprehensive assessment of landslide susceptibility models using GIS technology is carried out on the SE flank of Pico de Orizaba volcano. A detailed multi-temporal landslide inventory map of the watershed is used as the framework for the quantitative comparison of two landslide susceptibility maps. The maps are created with the Stability Index MAPping (SINMAP) model, 1) using default geotechnical parameters and 2) using geotechnical properties of the volcanic soils obtained in the field. SINMAP combines the factor of safety derived from the infinite-slope stability model with the theory of a hydrologic model to produce the susceptibility map. It has been claimed that SINMAP analysis is reasonably successful in defining areas that intuitively appear to be susceptible to landsliding in regions with sparse information. The validation of the resulting susceptibility maps is performed by comparing them with the inventory map under the LOGISNET system, which provides tools for comparison using a histogram and a contingency table.
The results of the experiment establish how the individual models predict landslide locations, along with their advantages and limitations. The results also show that although the model tends to improve with the use of calibrated field data, the landslide susceptibility map does not perfectly represent existing landslides.
Keywords: GIS, landslide, modeling, LOGISNET, SINMAP
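At the core of SINMAP sits the dimensionless infinite-slope factor of safety, with the relative wetness supplied by the hydrologic model. A minimal sketch of that core relation follows; the functional form is the commonly cited SINMAP-style expression, and the parameter values are illustrative defaults, not the calibrated field data of the study.

```python
import math

def infinite_slope_fs(theta_deg, phi_deg, cohesion, wetness, r=0.5):
    """Dimensionless infinite-slope factor of safety of the SINMAP family:
        FS = (C + cos(theta) * (1 - w*r) * tan(phi)) / sin(theta)
    theta_deg: slope angle; phi_deg: soil friction angle;
    cohesion:  dimensionless combined root/soil cohesion C;
    wetness:   relative wetness w in [0, 1] from the hydrologic model;
    r:         water-to-soil density ratio (illustrative default 0.5).
    FS < 1 flags potential instability."""
    t = math.radians(theta_deg)
    p = math.radians(phi_deg)
    return (cohesion + math.cos(t) * (1.0 - wetness * r) * math.tan(p)) / math.sin(t)

# A dry, cohesionless slope standing exactly at its friction angle is at
# limit equilibrium (FS ~ 1); saturating the soil column pushes FS below 1.
print(infinite_slope_fs(30.0, 30.0, cohesion=0.0, wetness=0.0))   # ~1.0
print(infinite_slope_fs(30.0, 30.0, cohesion=0.0, wetness=1.0))
```

Replacing the default geotechnical parameters (C, φ) with field-measured values for the volcanic soils changes FS cell by cell, which is precisely the difference between the two susceptibility maps compared in the study.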
Procedia PDF Downloads 313
393 Thermal Analysis of Adsorption Refrigeration System Using Silicagel–Methanol Pair
Authors: Palash Soni, Vivek Kumar Gaba, Shubhankar Bhowmick, Bidyut Mazumdar
Abstract:
Refrigeration technology is a fast-developing field in the present era, since it has very wide application in both domestic and industrial areas. It started with the use of simple ice coolers to store foodstuffs and has progressed to today's sophisticated cold storages and air-conditioning systems. A variety of techniques are used to bring the temperature below ambient. Adsorption refrigeration technology is a novel, advanced, and promising technique developed in the past few decades. It has gained attention due to its attractive ability to exploit unlimited natural sources such as solar energy and geothermal energy, or even waste-heat recovery from plants or from the exhaust of locomotives, to fulfill its energy needs. This reduces the exploitation of non-renewable resources and hence reduces pollution as well. This work aims to develop a model for a solar adsorption refrigeration system and to simulate it for different operating conditions. In this system, the mechanical compressor is replaced by a thermal compressor. The thermal compressor uses renewable energy such as solar and geothermal energy, which makes it useful for areas where electricity is not available. Refrigerants in common use, such as chlorofluorocarbons and perfluorocarbons, have harmful effects such as ozone depletion and greenhouse warming. A further advantage of adsorption systems is that they can replace these refrigerants with less harmful natural refrigerants such as water, methanol, and ammonia. Thus the double benefit of reduced energy consumption and reduced pollution can be achieved. A thermodynamic model was developed for the proposed adsorber, and a universal MATLAB code was used to simulate the model. Simulations were carried out for different operating conditions for the silicagel-methanol working pair.
Various graphs are plotted relating regeneration temperature, adsorption capacity, coefficient of performance, desorption rate, specific cooling power, adsorption/desorption times, and mass. The results proved that an adsorption system can be installed successfully for refrigeration purposes, as it offers savings in power and a reduction in carbon emissions, even though its efficiency is comparatively low relative to conventional systems. The model was tested for compliance in a cold-storage refrigeration application with a cooling load of 12 TR.
Keywords: adsorption, refrigeration, renewable energy, silicagel-methanol
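The coefficient of performance reported for such a cycle follows from a first-order energy balance: useful cooling is the cycled adsorbate times the refrigerant's latent heat, while the heat input covers desorption plus sensible heating of the bed. The sketch below uses rough, assumed property values for a silica gel-methanol pair, not the paper's simulated ones.

```python
def adsorption_cop(dx, h_evap, h_des, cp_bed, dT):
    """First-order COP of an ideal adsorption cycle, per kg of adsorbent:
    cooling  = adsorbate cycled * latent heat of the refrigerant;
    heat in  = desorption heat of that adsorbate + sensible heating of the bed."""
    q_cool = dx * h_evap                # [kJ per kg adsorbent]
    q_in = dx * h_des + cp_bed * dT     # [kJ per kg adsorbent]
    return q_cool / q_in

# Rough assumed values: cycled uptake 0.15 kg/kg, methanol latent heat ~1100 kJ/kg,
# desorption heat ~1800 kJ/kg, bed cp ~0.92 kJ/(kg K), 60 K regeneration swing.
print(f"COP ~ {adsorption_cop(0.15, 1100.0, 1800.0, 0.92, 60.0):.2f}")
```

The resulting value of roughly 0.5 sits well below the COP of a vapor-compression machine, which is consistent with the abstract's remark that the efficiency is comparatively low even though the heat input can come from free solar or waste heat.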
Procedia PDF Downloads 206
392 Investigation for Pixel-Based Accelerated Aging of Large Area Picosecond Photo-Detectors
Authors: I. Tzoka, V. A. Chirayath, A. Brandt, J. Asaadi, Melvin J. Aviles, Stephen Clarke, Stefan Cwik, Michael R. Foley, Cole J. Hamel, Alexey Lyashenko, Michael J. Minot, Mark A. Popecki, Michael E. Stochaj, S. Shin
Abstract:
Micro-channel plate photo-multiplier tubes (MCP-PMTs) have become ubiquitous and are widely considered potential candidates for next-generation high energy physics experiments due to their picosecond timing resolution, ability to operate in strong magnetic fields, and low noise rates. A key factor that determines the applicability of MCP-PMTs is their lifetime, especially when they are used in high-event-rate experiments. We have developed a novel method for investigating the aging behavior of an MCP-PMT on an accelerated basis. The method involves exposing a localized region of the MCP-PMT to photons at a high repetition rate. This pixel-based method was inspired by earlier results showing that damage to the photocathode of the MCP-PMT occurs primarily at the site of light exposure, while the surrounding region undergoes minimal damage. One advantage of the pixel-based method is that it allows the dynamics of photocathode damage to be studied at multiple locations within the same MCP-PMT under different operating conditions. In this work, we use the pixel-based accelerated lifetime test to investigate the aging behavior of a 20 cm x 20 cm Large Area Picosecond Photo Detector (LAPPD) manufactured by INCOM Inc. at multiple locations within the same device under different operating conditions. We compare the aging behavior of the MCP-PMT obtained from the first lifetime test, conducted under high-gain conditions, to the lifetime obtained at a different gain. Through this work, we aim to correlate the lifetime of the MCP-PMT with the rate of ion feedback, which is a function of the gain of each MCP and which can also vary from point to point across a large-area (400 cm²) MCP. The tests were made possible by the uniqueness of the LAPPD design, which allows independent control of the gain of the chevron-stacked MCPs.
We will further discuss the implications of our results for optimizing the operating conditions of the detector when used in high event rate experiments.
Keywords: electron multipliers (vacuum), LAPPD, lifetime, micro-channel plate photo-multiplier tubes, photoemission, time-of-flight
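The accelerated test hinges on accumulating anode charge quickly at a single illuminated pixel. As a rough illustration of the bookkeeping involved, the sketch below computes the integrated anode charge density, a conventional aging metric for MCP-PMTs. All numeric values and the function name are illustrative assumptions, not figures from this study.

```python
# Hypothetical sketch: integrated anode charge density, a standard aging
# metric in MCP-PMT lifetime tests. Numbers are illustrative assumptions.

E_CHARGE = 1.602e-19  # electron charge, C

def accumulated_anode_charge(gain, photon_rate_hz, qe, hours, area_cm2):
    """Integrated anode charge density (C/cm^2) under steady illumination.

    gain           -- combined gain of the chevron MCP stack
    photon_rate_hz -- incident photon rate on the illuminated pixel
    qe             -- photocathode quantum efficiency (0..1)
    hours          -- accumulated exposure time
    area_cm2       -- illuminated (pixel) area
    """
    photoelectron_rate = photon_rate_hz * qe
    charge = gain * E_CHARGE * photoelectron_rate * hours * 3600.0
    return charge / area_cm2

# Example: gain 1e7, 1 MHz photon rate, 20% QE, 1000 h on a 1 cm^2 pixel
q = accumulated_anode_charge(1e7, 1e6, 0.20, 1000, 1.0)
```

Raising the local photon rate shortens the wall-clock time needed to reach a target charge density, which is what makes the pixel-based test "accelerated".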
Procedia PDF Downloads 178
391 A Metric to Evaluate Conventional and Electrified Vehicles in Terms of Customer-Oriented Driving Dynamics
Authors: Stephan Schiffer, Andreas Kain, Philipp Wilde, Maximilian Helbing, Bernard Bäker
Abstract:
Automobile manufacturers progressively focus on a downsizing strategy to meet the EU's CO2 requirements concerning type-approval consumption cycles. The reduction in naturally aspirated engine power is compensated by increased levels of turbocharging. By downsizing conventional engines, CO2 emissions are reduced. However, downsizing also brings major challenges regarding longitudinal dynamic characteristics. An example of this is the delayed turbocharger-induced torque reaction, which leads to a partially poor response behavior of the vehicle during acceleration. That is why it is important to refocus conventional drive train design on real customer driving. The dynamic maneuvers currently discussed by journals and car manufacturers, such as the 0-100 km/h acceleration time, describe the longitudinal dynamics experienced by a driver inadequately. For that reason, we present the realization and evaluation of a comprehensive test-subject study. Subjects are provided with different vehicle concepts (electrified vehicles, vehicles with naturally aspirated engines, vehicles with different turbocharger concepts, etc.) in order to find out which dynamic criteria are decisive for a subjectively strong acceleration and response behavior of a vehicle. Subsequently, realistic acceleration criteria are derived. By weighting the criteria, an evaluation metric is developed to objectify customer-oriented transient dynamics. Fully electrified vehicles are the benchmark in terms of customer-oriented longitudinal dynamics. The electric machine provides the desired torque almost without delay. This advantage over combustion engines is especially noticeable at low engine speeds. In conclusion, we show the extent to which the customer-relevant longitudinal dynamics of conventional vehicles can be approximated to those of electrified vehicle concepts. Therefore, various technical measures (turbocharger concepts, 48V electric chargers, etc.)
and drive train designs (e.g. varying the final drive) are presented and evaluated in order to strengthen the vehicle’s customer-relevant transient dynamics. The newly developed evaluation metric serves as the rating measure.
Keywords: 48V, customer-oriented driving dynamics, electric charger, electrified vehicles, vehicle concepts
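The weighted evaluation metric described above can be illustrated with a minimal sketch: normalized criterion scores combined by weights into a single rating. The criterion names, scores and weights below are invented for illustration; the actual criteria and weights were derived from the subject study.

```python
# Illustrative sketch of a weighted evaluation metric for customer-oriented
# longitudinal dynamics. Criterion names, scores and weights are assumptions.

def dynamics_score(criteria, weights):
    """Weighted average of normalized criterion scores (each in 0..1)."""
    assert set(criteria) == set(weights), "criteria and weights must match"
    total_w = sum(weights.values())
    return sum(weights[k] * criteria[k] for k in criteria) / total_w

# Hypothetical normalized scores for an EV and a turbocharged vehicle
ev = {"torque_delay": 0.95, "initial_jerk": 0.90, "sustained_accel": 0.85}
turbo = {"torque_delay": 0.55, "initial_jerk": 0.60, "sustained_accel": 0.80}
weights = {"torque_delay": 0.5, "initial_jerk": 0.3, "sustained_accel": 0.2}

ev_score = dynamics_score(ev, weights)
turbo_score = dynamics_score(turbo, weights)
```

With response-related criteria weighted heavily, the EV scores higher, mirroring the abstract's observation that electric machines deliver torque almost without delay.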
Procedia PDF Downloads 407
390 Introduction to Two Artificial Boundary Conditions for Transient Seepage Problems and Their Application in Geotechnical Engineering
Authors: Shuang Luo, Er-Xiang Song
Abstract:
Many problems in geotechnical engineering, such as foundation deformation, groundwater seepage, seismic wave propagation and geothermal transfer, may involve analysis of ground that can be regarded as extending to infinity. Consideration therefore has to be given to how the unbounded domain is treated when using numerical methods such as the finite element method (FEM), finite difference method (FDM) or finite volume method (FVM). A simple artificial boundary approach, derived from the analytical solutions for transient radial seepage problems, is introduced. It should be noted, however, that the analytical solutions used to derive the artificial boundary are particular solutions under certain boundary conditions, such as constant hydraulic head at the origin or constant pumping rate of the well. For unbounded domains with unsteady boundary conditions, a more sophisticated artificial boundary approach to deal with the infinity of the domain is presented. By applying Laplace transforms and introducing some specially defined auxiliary variables, the global artificial boundary conditions (ABCs) are simplified to local ones, so that the computational efficiency is enhanced significantly. The two local ABCs are implemented in a finite element computer program so that various seepage problems can be calculated. The two approaches are first verified by the computation of a one-dimensional radial flow problem, and then tentatively applied to more general two-dimensional cylindrical and plane problems. Numerical calculations show that the local ABCs not only give good results for one-dimensional axisymmetric transient flow, but are also applicable to more general problems, such as axisymmetric two-dimensional cylindrical problems and even more general planar two-dimensional flow problems for well doublets and well groups.
An important advantage of the latter local boundary is its applicability to seepage under rapidly changing unsteady boundary conditions; even then, the computational results on the truncated boundary are usually quite satisfactory. In this respect, it is superior to the former local boundary. Simulation of relatively long operational times demonstrates, to a certain extent, the numerical stability of the local boundary. The solutions of the two local ABCs are compared with each other and with those obtained using a large element mesh, which confirms their satisfactory performance and clear superiority over the large-mesh model.
Keywords: transient seepage, unbounded domain, artificial boundary condition, numerical simulation
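A representative analytical transient radial seepage solution of the kind from which such artificial boundaries can be derived is the classical Theis well solution. The sketch below evaluates it with a series expansion of the well function; this is generic textbook material, not the authors' ABC formulation, and the parameter values in the example are illustrative assumptions.

```python
# Sketch of the classical Theis solution for transient radial flow to a well
# pumped at constant rate Q (a textbook solution, not the authors' ABCs).
import math

def well_function(u, terms=50):
    """Theis well function W(u) = -gamma - ln(u) + sum_{n>=1} (-1)^(n+1) u^n / (n * n!)."""
    gamma = 0.5772156649015329  # Euler-Mascheroni constant
    s = -gamma - math.log(u)
    for n in range(1, terms + 1):
        s += ((-1) ** (n + 1)) * u ** n / (n * math.factorial(n))
    return s

def drawdown(Q, T, S, r, t):
    """Drawdown s = Q / (4*pi*T) * W(u), with u = r^2 * S / (4*T*t).

    Q -- pumping rate, T -- transmissivity, S -- storativity,
    r -- radial distance from the well, t -- time since pumping began.
    """
    u = r * r * S / (4.0 * T * t)
    return Q / (4.0 * math.pi * T) * well_function(u)
```

An artificial boundary of the simple kind described above prescribes, on a truncated circle r = R, a relation between head and flux consistent with such an analytical far-field solution, so the mesh need not extend toward infinity.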
Procedia PDF Downloads 294
389 Analysis and Optimized Design of a Packaged Liquid Chiller
Authors: Saeed Farivar, Mohsen Kahrom
Abstract:
The purpose of this work is to develop a physical simulation model for studying the effect of various design parameters on the performance of packaged liquid chillers. This paper presents a steady-state model for predicting the performance of a packaged liquid chiller over a wide range of operating conditions. The model inputs are the inlet conditions and geometry; the outputs include system performance variables such as power consumption, coefficient of performance (COP) and the states of the refrigerant through the refrigeration cycle. A computer model that simulates the steady-state cyclic performance of a vapor compression chiller is developed for the purpose of performing detailed physical design analysis of actual industrial chillers. The model can be used for optimizing design and for detailed energy efficiency analysis of packaged liquid chillers. The simulation model accounts for all chiller components, such as the compressor, shell-and-tube condenser and evaporator heat exchangers, thermostatic expansion valve and connecting pipes and tubing, by thermo-hydraulic modeling of the heat transfer, fluid flow and thermodynamic processes in each of these components. To verify the validity of the developed model, a 7.5 USRT packaged liquid chiller is used, and a laboratory test stand for bringing the chiller to its standard steady-state performance condition is built. Experimental results obtained from testing the chiller under various load and temperature conditions are shown to be in good agreement with those obtained from simulating the performance of the chiller using the computer prediction model. An entropy-minimization-based optimization analysis is performed based on the developed analytical performance model of the chiller.
The variation of design parameters in the construction of the shell-and-tube condenser and evaporator heat exchangers is studied using the developed performance and optimization analysis and simulation model, and a best-match condition between the physical design of the chiller heat exchangers and its compressor is found to exist. It is expected that manufacturers of chillers and research organizations interested in developing energy-efficient designs and analyses of compression chillers can take advantage of the presented study and its results.
Keywords: optimization, packaged liquid chiller, performance, simulation
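The core performance outputs such a model reports (cooling capacity, compressor power, COP) reduce to enthalpy differences around the vapor compression cycle. A minimal sketch follows, with illustrative enthalpy values that are assumptions for the example, not outputs of the authors' model.

```python
# Minimal sketch of the steady-state performance figures a vapor compression
# chiller model reports. State-point enthalpies below are illustrative.

def chiller_performance(h_evap_in, h_evap_out, h_comp_out, m_dot):
    """Return (cooling capacity kW, compressor power kW, COP).

    h_evap_in  -- refrigerant enthalpy entering the evaporator, kJ/kg
    h_evap_out -- enthalpy leaving the evaporator (compressor suction), kJ/kg
    h_comp_out -- enthalpy at compressor discharge, kJ/kg
    m_dot      -- refrigerant mass flow rate, kg/s
    """
    q_evap = m_dot * (h_evap_out - h_evap_in)   # cooling capacity
    w_comp = m_dot * (h_comp_out - h_evap_out)  # ideal compression work
    return q_evap, w_comp, q_evap / w_comp

# Example sized near 7.5 USRT (~26.4 kW of cooling); values are assumed
q, w, cop = chiller_performance(h_evap_in=250.0, h_evap_out=400.0,
                                h_comp_out=435.0, m_dot=0.176)
```

In a full model of the kind described, these enthalpies would come from the heat exchanger, expansion valve and compressor sub-models rather than being given directly.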
Procedia PDF Downloads 278
388 Assessment of Genetic Fidelity of Micro-Clones of an Aromatic Medicinal Plant Murraya koenigii (L.) Spreng
Authors: Ramesh Joshi, Nisha Khatik
Abstract:
Murraya koenigii (L.) Spreng, locally known as “Curry patta” or “Meetha neem” and belonging to the family Rutaceae, grows wild in Southern Asia. Its aromatic leaves are commonly used as the raw material for traditional medicinal formulations in India. The leaves contain essential oil and are also used as a condiment. Several monomeric and binary carbazole alkaloids are present in the various plant parts. These alkaloids have been reported to possess anti-microbial, mosquitocidal, topoisomerase-inhibition and antioxidant properties. Some of the alkaloids reported in this plant have shown anti-carcinogenic and anti-diabetic properties. The conventional method of propagation of this tree is limited to seeds, which retain their viability for only a short period. Hence, a biotechnological approach may have an edge over traditional breeding for the genetic improvement of M. koenigii within a short period. The development of a reproducible regeneration protocol is the prerequisite for ex situ conservation and micropropagation. An efficient protocol for high-frequency regeneration of in vitro plants of Murraya koenigii via different explants, such as nodal segments, internodal segments, leaves, root segments, hypocotyls, cotyledons and cotyledonary node explants, is described. In the present investigation, assessment of clonal fidelity in the micropropagated plantlets of Murraya koenigii was attempted using RAPD and ISSR markers at different stages of the plant tissue culture process. About 20 ISSR and 40 RAPD primers were used for all the samples. Genomic DNA was extracted by the CTAB method. ISSR primers were found to be more suitable than RAPD for the analysis of clonal fidelity of M. koenigii. The amplifications, however, were finally performed using both RAPD and ISSR markers owing to their better performance in terms of generation of amplification products.
Among the RAPD primers, the maximum polymorphism (75%) was recorded with the OPU-2 primer, for which three of four scorable bands were polymorphic, with band sizes ranging from 600 to 1500 bp. Among the ISSR primers, UBC 857 showed 50% polymorphism, with two polymorphic bands ranging in size from 400 to 1000 bp.
Keywords: genetic fidelity, Murraya koenigii, aromatic plants, ISSR primers
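The polymorphism percentages quoted above follow directly from the band counts (polymorphic bands divided by scorable bands, times 100). A small sketch, where the UBC 857 scorable-band count of four is inferred from the reported 50% rather than stated in the abstract:

```python
# Sketch of the polymorphism percentage used when assessing clonal fidelity
# from RAPD/ISSR banding profiles.

def polymorphism_percent(polymorphic_bands, scorable_bands):
    """Percentage of scorable bands that are polymorphic."""
    if scorable_bands == 0:
        raise ValueError("no scorable bands")
    return 100.0 * polymorphic_bands / scorable_bands

# Values from the abstract: OPU-2 (RAPD) had 3 of 4 scorable bands polymorphic
opu2 = polymorphism_percent(3, 4)
# UBC 857 (ISSR): 2 polymorphic bands at 50% implies 4 scorable bands (inferred)
ubc857 = polymorphism_percent(2, 4)
```

For clonal fidelity studies, lower polymorphism among micropropagated plantlets indicates closer genetic uniformity with the mother plant.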
Procedia PDF Downloads 501