Search results for: sensor technologies
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4917

1047 Experimental Validation of Computational Fluid Dynamics Used for Pharyngeal Flow Patterns during Obstructive Sleep Apnea

Authors: Pragathi Gurumurthy, Christina Hagen, Patricia Ulloa, Martin A. Koch, Thorsten M. Buzug

Abstract:

Obstructive sleep apnea (OSA) is a sleep disorder in which the patient suffers disturbed airflow during sleep due to partial or complete occlusion of the pharyngeal airway. Recently, numerical simulations have been used to better understand the mechanism of pharyngeal collapse. However, to gain confidence in the solutions so obtained, an experimental validation is required. Therefore, in this study an experimental validation of computational fluid dynamics (CFD) used for the study of human pharyngeal flow patterns during OSA is performed. The stationary incompressible Navier-Stokes equations, solved using the finite element method, were used to numerically study the flow patterns in a computed tomography-based human pharynx model. The inlet flow rate was set to 250 ml/s with a flat velocity profile maintained at the inlet, and the outlet pressure was set to 0 Pa. The experimental technique used to validate the CFD flow patterns is phase contrast-MRI (PC-MRI). Using the same computed tomography data of the human pharynx as in the simulations, a phantom for the experiment was 3D printed. Glycerol in water (55.27% by weight) was used as the test fluid at 25°C. Inflow conditions matching the CFD study were reproduced using an MRI-compatible flow pump (CardioFlow-5000MR, Shelley Medical Imaging Technologies). The entire experiment was performed on a 3 T MR system (Ingenia, Philips) with a 108-channel body coil using an RF-spoiled gradient echo sequence. A comparison of the axial velocity obtained in the pharynx from the numerical simulations and from PC-MRI shows good agreement. The regions of jet impingement and recirculation also coincide, thereby validating the numerical simulations. Hence, the experimental validation establishes the reliability and correctness of the numerical simulations.
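As an illustration of the kind of agreement check described above, the sketch below (not taken from the paper; the velocity samples are invented) quantifies the match between simulated and PC-MRI axial velocity profiles with a peak-normalized RMS error:

```python
import numpy as np

def normalized_rmse(v_cfd, v_pcmri):
    """Root-mean-square difference between two axial-velocity profiles,
    normalized by the peak measured velocity magnitude."""
    v_cfd = np.asarray(v_cfd, dtype=float)
    v_pcmri = np.asarray(v_pcmri, dtype=float)
    rmse = np.sqrt(np.mean((v_cfd - v_pcmri) ** 2))
    return rmse / np.max(np.abs(v_pcmri))

# Illustrative values only (m/s), sampled across a pharyngeal cross-section
v_sim = [0.0, 1.8, 2.4, 1.9, 0.1]
v_meas = [0.0, 1.7, 2.5, 2.0, 0.1]
print(round(normalized_rmse(v_sim, v_meas), 3))
```

A small normalized error (a few percent here) is consistent with the "good agreement" the authors report, though the paper itself compares full velocity fields, not a single profile.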

Keywords: computational fluid dynamics, experimental validation, phase contrast-MRI, obstructive sleep apnea

Procedia PDF Downloads 309
1046 Cascaded Transcritical/Supercritical CO2 Cycles and Organic Rankine Cycles to Recover Low-Temperature Waste Heat and LNG Cold Energy Simultaneously

Authors: Haoshui Yu, Donghoi Kim, Truls Gundersen

Abstract:

Low-temperature waste heat is abundant in the process industries, and large amounts of Liquefied Natural Gas (LNG) cold energy are discarded without being recovered properly in LNG terminals. Power generation is an effective way to utilize low-temperature waste heat and LNG cold energy simultaneously. Organic Rankine Cycles (ORCs) and CO2 power cycles are promising technologies to convert low-temperature waste heat and LNG cold energy into electricity. If waste heat and LNG cold energy are utilized simultaneously in one system, the combined system may outperform two separate systems utilizing low-temperature waste heat and LNG cold energy, respectively. Low-temperature waste heat acts as the heat source and LNG regasification acts as the heat sink in the combined system. Due to the large temperature difference between the heat source and the heat sink, cascaded power cycle configurations are proposed in this paper. Cascaded power cycles can improve the energy efficiency of the system considerably. In this study, the cycle operating at a higher temperature to recover waste heat is called the top cycle, and the cycle operating at a lower temperature to utilize LNG cold energy is called the bottom cycle. The condensation heat of the top cycle serves as the heat source of the bottom cycle. The top cycle can be an ORC, a transcritical CO2 (tCO2) cycle or a supercritical CO2 (sCO2) cycle, while the bottom cycle can only be an ORC due to its low temperature range. The thermodynamic paths of the tCO2 and sCO2 cycles differ from that of an ORC: both perform better than an ORC for sensible waste heat recovery due to a better temperature match with the waste heat source. Different combinations of the tCO2 cycle, sCO2 cycle and ORC are compared to screen the best configurations of the cascaded power cycles. The influence of the working fluid and the operating conditions is also investigated in this study. Each configuration is modeled and optimized in Aspen HYSYS. The results show that the cascaded tCO2/ORC performs best for the case study, compared with the cascaded ORC/ORC and cascaded sCO2/ORC.
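The energetic benefit of cascading follows from a standard first-law argument: the heat rejected by the top cycle, (1 − η_top)·Q_in, becomes the heat input of the bottom cycle, so the combined efficiency exceeds either cycle alone. A minimal sketch with illustrative efficiencies (not results from the paper):

```python
def cascaded_efficiency(eta_top, eta_bottom):
    """Overall thermal efficiency of a cascade in which the top cycle's
    condensation heat drives the bottom cycle (ideal coupling, no losses):
    eta = eta_top + (1 - eta_top) * eta_bottom."""
    return eta_top + (1.0 - eta_top) * eta_bottom

# Hypothetical efficiencies for a tCO2 top cycle and an ORC bottom cycle
eta = cascaded_efficiency(0.15, 0.20)
print(round(eta, 4))
```

Even modest individual efficiencies combine to a noticeably higher overall value, which is why the cascade recovers more of the waste-heat/LNG temperature span than a single cycle could.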

Keywords: LNG cold energy, low-temperature waste heat, organic Rankine cycle, supercritical CO₂ cycle, transcritical CO₂ cycle

Procedia PDF Downloads 257
1045 Molecularly Imprinted Nanoparticles (MIP NPs) as Non-Animal Antibodies Substitutes for Detection of Viruses

Authors: Alessandro Poma, Kal Karim, Sergey Piletsky, Giuseppe Battaglia

Abstract:

The recent increase in the public health threat posed by infectious influenza diseases has prompted interest in the detection of avian influenza virus (AIV) H5N1 in humans as well as animals. A variety of technologies for diagnosing AIV infection have been developed. However, various disadvantages (cost, lengthy analyses, and the need for high-containment facilities) make these methods less than ideal in practical application. Molecularly Imprinted Polymeric Nanoparticles (MIP NPs) are suited to overcoming these limitations, offering high affinity, selectivity, versatility, scalability and cost-effectiveness, with the option of post-modification (fluorescent, magnetic or optical labeling) opening the way to the potential introduction of improved diagnostic tests capable of providing rapid differential diagnosis. Here we present our first results in the production and testing of MIP NPs for the detection of AIV H5N1. Recent developments in the solid-phase synthesis of MIP NPs mean that, for the first time, a reliable supply of ‘soluble’ synthetic antibodies can be made available for testing as potential biologically or diagnostically active molecules. The MIP NPs have the potential to detect viruses that are widely circulating in farm animals and indeed humans. Early and accurate identification of the infectious agent will expedite appropriate control measures. Thus, diagnosis at an early stage of infection of a herd, flock or individual maximizes the efficiency with which containment, prevention and possibly treatment strategies can be implemented. More importantly, substantiating the practicability of these novel reagents should lead to an initial reduction, and eventually a potential total replacement, of the animals, both large and small, currently used to raise such specific serological materials.

Keywords: influenza virus, molecular imprinting, nanoparticles, polymers

Procedia PDF Downloads 361
1044 Integration of Thermal Energy Storage and Electric Heating with Combined Heat and Power Plants

Authors: Erich Ryan, Benjamin McDaniel, Dragoljub Kosanovic

Abstract:

Combined heat and power (CHP) plants are an efficient technology for meeting the heating and electric needs of large campus energy systems, but they have come under greater scrutiny as the world pushes for emissions reductions and lower consumption of fossil fuels. The electrification of heating and cooling systems offers a great deal of potential for carbon savings, but these systems can be costly endeavors due to increased electric consumption and peak demand. Thermal energy storage (TES) has been shown to be an effective means of improving the viability of electrified systems by shifting heating and cooling load to off-peak hours and reducing peak demand charges. In this study, we analyze the integration of an electrified heating and cooling system with thermal energy storage into a campus CHP plant, to investigate the potential of aligning existing infrastructure and technologies with the climate goals of the 21st century. A TRNSYS model was built to simulate a ground source heat pump (GSHP) system with TES using measured campus heating and cooling loads. The GSHP-with-TES system is modeled to follow industry-standard parameters and sized to provide an optimal balance of capital and operating costs. Using known CHP production information, costs and emissions were investigated for a unique large-energy-user rate structure that operates a CHP plant. The results highlight the cost and emissions benefits of a targeted integration of heat pump technology within the framework of existing CHP systems, along with the performance impacts and value of TES capability within the combined system.
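As a rough illustration of the demand-charge mechanism mentioned above (the load profile, shaved amount, and rate are hypothetical, not values from the study), TES that clips the peak hours by a fixed amount reduces the billed peak directly:

```python
def demand_charge_savings(hourly_load_kw, tes_shave_kw, rate_per_kw):
    """Monthly demand-charge savings when TES shaves the peak demand by
    tes_shave_kw, assuming the shifted energy is absorbed off-peak
    (illustrative accounting only, not the TRNSYS model)."""
    peak_before = max(hourly_load_kw)
    cap = peak_before - tes_shave_kw
    # Load above the cap is assumed to be served from storage
    peak_after = max(min(kw, cap) for kw in hourly_load_kw)
    return (peak_before - peak_after) * rate_per_kw

load = [800, 950, 1200, 1100, 900]   # hypothetical campus electric load, kW
print(demand_charge_savings(load, 150, 20.0))  # 150 kW shaved at $20/kW-month
```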

Keywords: thermal energy storage, combined heat and power, heat pumps, electrification

Procedia PDF Downloads 87
1043 Spatial Data Mining: Unsupervised Classification of Geographic Data

Authors: Chahrazed Zouaoui

Abstract:

In recent years, the volume of geospatial information has been increasing due to the evolution of information and communication technologies. This information is often presented through geographic information systems (GIS) and stored in spatial databases. Classical data mining has shown a weakness in extracting knowledge from such enormous amounts of data, owing to the particularity of spatial entities, which are characterized by their interdependence (the first law of geography). This gave rise to spatial data mining. Spatial data mining is a process of analyzing geographic data that allows the extraction of knowledge and spatial relationships from geospatial data; among its methods we distinguish the monothematic and the thematic. Geo-clustering, one of the main tasks of spatial data mining, belongs to the monothematic category. It groups similar geo-spatial entities into the same class and assigns dissimilar entities to different classes; in other words, it maximizes intra-class similarity and minimizes inter-class similarity, while taking into account the particularity of geo-spatial data. Two approaches to geo-clustering exist: dynamic processing, which applies algorithms designed for the direct treatment of spatial data, and pre-processing, which applies classic clustering algorithms to data that have been pre-processed by integrating the spatial relationships. Since the pre-processing approach is quite complex in several cases, the search for approximate solutions involves the use of approximation algorithms, among which we are interested in dedicated approaches (partitioning and density-based clustering methods) and the bees algorithm (a biomimetic approach). Our study proposes to address this problem by using different algorithms for automatically detecting geo-spatial neighborhoods in order to implement geo-clustering by pre-processing, and by applying the bees algorithm to this problem for the first time in the geo-spatial field.
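A minimal sketch of the pre-processing approach described above (toy data, not the authors' algorithm): each entity's attribute is augmented with the mean attribute of its spatial neighbors, so that a classic clustering algorithm such as k-means, applied afterwards, implicitly respects spatial interdependence:

```python
import math

def spatial_preprocess(points, radius):
    """Pre-treatment for classic clustering: pair each entity's attribute
    with the mean attribute of its spatial neighbors (entities within
    `radius`), encoding the first law of geography into the feature space."""
    out = []
    for i, (x, y, attr) in enumerate(points):
        neigh = [a for j, (px, py, a) in enumerate(points)
                 if j != i and math.hypot(px - x, py - y) <= radius]
        neigh_mean = sum(neigh) / len(neigh) if neigh else attr
        out.append((attr, neigh_mean))
    return out

# Toy geographic entities: (x, y, attribute value)
pts = [(0, 0, 10.0), (1, 0, 12.0), (10, 10, 50.0)]
print(spatial_preprocess(pts, 2.0))
```

Entities that are spatially close end up with correlated feature vectors, so a standard algorithm tends to keep neighbors in the same class, which is the intent of the pre-processing approach.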

Keywords: mining, GIS, geo-clustering, neighborhood

Procedia PDF Downloads 374
1042 Developing Digital Skills in Museum Professionals through Digital Education: International Good Practices and Effective Learning Experiences

Authors: Antonella Poce, Deborah Seid Howes, Maria Rosaria Re, Mara Valente

Abstract:

Education contexts in the Creative Industries, Museum Education in particular, generally place little emphasis on the use of new digital technologies and on the development of digital abilities and transversal skills. The spread of the Covid-19 pandemic has underlined the importance of these abilities and skills in cultural heritage education contexts: by gaining digital skills, museum professionals can improve their career opportunities through access to new distribution markets via internet access and e-commerce, through new entrepreneurial tools, or by adding new forms of digital expression to their work. Moreover, the use of web, mobile, social and analytical tools is becoming more and more essential in the Heritage field, and in museums in particular, to face the challenges posed by the current worldwide health emergency. Recent studies highlight the need for stronger partnerships between the cultural and creative sectors, social partners, and education and training providers, in order to provide these sectors with the combination of skills needed for creative entrepreneurship in a rapidly changing environment. Given these conditions, the paper presents different examples of digital learning experiences carried out in Italian and USA contexts with the aim of promoting digital skills in museum professionals. In particular, a quali-quantitative research study has been conducted on two international postgraduate courses, “Advanced Studies in Museum Education” (2 years) and “Museum Education” (1 year), in order to identify the educational effectiveness of the online learning strategies used (e.g., OBL, digital storytelling, peer evaluation) for the development of digital skills and the acquisition of specific content. More than 50 museum professionals participating in the mentioned educational pathways took part in the learning activity, providing evaluation data useful for research purposes.

Keywords: digital skills, museum professionals, technology, education

Procedia PDF Downloads 174
1041 Combination between Intrusion Systems and Honeypots

Authors: Majed Sanan, Mohammad Rammal, Wassim Rammal

Abstract:

Today, security is a major concern. Intrusion Detection Systems, Intrusion Prevention Systems and honeypots can be used to moderate attacks. Researchers have proposed many IDSs (Intrusion Detection Systems) over time, some of which combine the features of two or more IDSs and are called Hybrid Intrusion Detection Systems. Most researchers combine the features of the signature-based and anomaly-based detection methodologies. For a signature-based IDS, if an attacker attacks slowly and in an organized way, the attack may pass through the IDS undetected, since signatures include factors based on the duration of events and the attacker's actions do not match them. Sometimes no signature has yet been added for an unknown attack, or an attacker strikes while the signature database is being updated; thus, signature-based IDSs fail to detect unknown attacks. Anomaly-based IDSs suffer from many false-positive readings. So there is a need to hybridize IDSs so that they can overcome each other's shortcomings. In this paper we propose a new approach to intrusion detection that is more efficient than a traditional IDS, based on honeypot technology and an anomaly-based detection methodology. We designed an architecture for the IDS in Packet Tracer and then implemented it in real time. The experimental results show that, while both the honeypot and the anomaly-based IDS have shortcomings on their own, hybridizing the two technologies yields a Hybrid Intrusion Detection System (HIDS) capable of overcoming these shortcomings with much-enhanced performance. The proposed HIDS thus combines the positive features of two different detection methodologies: the honeypot methodology and anomaly-based intrusion detection. In the experiment, we first ran each intrusion detection system individually and then ran them together, recording data over time. From the data, we conclude that the resulting IDS is much better at detecting intrusions than the existing IDSs.
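The hybridization logic can be sketched as follows (the traffic feature and threshold are illustrative assumptions, not the authors' implementation): an anomaly score flags statistically unusual traffic, while any contact with the honeypot, which no legitimate user has reason to touch, raises an alert even when the traffic itself looks normal:

```python
def hybrid_alert(packet_rate, baseline_mean, baseline_std, honeypot_hit):
    """Combine anomaly-based detection (z-score on a traffic feature) with
    honeypot evidence: honeypot contact is suspicious by definition, so it
    overrides a below-threshold anomaly score. This catches the slow,
    organized attacks that evade signature matching."""
    z = abs(packet_rate - baseline_mean) / baseline_std
    return honeypot_hit or z > 3.0

# Slow, organized attack: traffic looks normal, but the honeypot was touched
print(hybrid_alert(packet_rate=105.0, baseline_mean=100.0, baseline_std=10.0,
                   honeypot_hit=True))
```

The complement also holds: a noisy attack that never touches the honeypot is still caught by the anomaly branch, which is the sense in which the two methodologies cover each other's blind spots.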

Keywords: security, intrusion detection, intrusion prevention, honeypot, anomaly-based detection, signature-based detection, cloud computing, kfsensor

Procedia PDF Downloads 377
1040 Gender Bias in Natural Language Processing: Machines Reflect Misogyny in Society

Authors: Irene Yi

Abstract:

Machine learning, natural language processing, and neural network models of language are becoming more and more prevalent in the fields of technology and linguistics today. Training data for machines are, at best, large corpora of human literature and, at worst, a reflection of the ugliness in society. Machines have been trained on millions of human books, only to find that across human history and literature, derogatory and sexist adjectives are used significantly more frequently when describing females than when describing males. This is extremely problematic, both as training data and as the outcome of natural language processing. As machines take on more responsibilities, it is crucial to ensure that they do not carry with them historical sexist and misogynistic notions. This paper gathers data and algorithms from neural network models of language dealing with syntax, semantics, sociolinguistics, and text classification. The results are significant in showing the existing intentional and unintentional misogynistic notions used to train machines, as well as in developing better technologies that take into account the semantics and syntax of text so as to be more mindful and to reflect gender equality. Further, this paper deals with non-binary gender pronouns and how machines can process these pronouns correctly, given their semantic and syntactic context. It also delves into the implications of gendered grammar and its effect, cross-linguistically, on natural language processing. Languages such as French or Spanish not only have rigid gendered grammar rules but also historically patriarchal societies. The progression of a society goes hand in hand not only with its language but with how machines process that natural language. These ideas are all vital to the development of natural language models in technology, and they must be taken into account immediately.
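A crude, illustrative version of the corpus measurement described above (the token window and word lists are assumptions for the sketch, not the paper's method) counts adjectives appearing shortly after gendered pronouns:

```python
def adjective_counts(tokens, adjectives, window=3):
    """Count adjectives appearing within `window` tokens after a gendered
    pronoun: a crude proxy for the descriptor-frequency imbalance that
    corpus studies of training data report."""
    female, male = {"she", "her", "hers"}, {"he", "him", "his"}
    counts = {"female": 0, "male": 0}
    for i, tok in enumerate(tokens):
        group = "female" if tok in female else "male" if tok in male else None
        if group:
            counts[group] += sum(1 for w in tokens[i + 1:i + 1 + window]
                                 if w in adjectives)
    return counts

toks = "she was shrill but he was brilliant and she was hysterical".split()
print(adjective_counts(toks, {"shrill", "brilliant", "hysterical"}))
```

Run over millions of books with a curated derogatory-adjective list, an imbalance in such counts is exactly the kind of evidence the abstract describes; the sketch also shows why the measurement itself is sensitive to window size and word-list choices.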

Keywords: gendered grammar, misogynistic language, natural language processing, neural networks

Procedia PDF Downloads 118
1039 Hybrid Energy System for the German Mining Industry: An Optimized Model

Authors: Kateryna Zharan, Jan C. Bongaerts

Abstract:

In recent years, the economic attractiveness of renewable energy (RE) for the mining industry, especially for off-grid mines, and the negative environmental impact of fossil energy have stimulated the use of RE for mining needs. Because remote-area mines have higher energy expenses than grid-connected mines, integrating RE may give a mine economic benefits. The literature review reveals a lack of business models for adopting RE at mines. The main aim of this paper is to develop an optimized model of RE integration into the German mining industry (GMI). With around 800 million tonnes of resources extracted annually, Germany is among the 15 major mining countries in the world. Accordingly, the mining potential of Germany is evaluated in this paper as a prospective market for RE implementation. The GMI has been classified in order to determine the location of resources, the quantity and types of mines, the amount of extracted resources, and the access of the mines to energy resources. Additionally, weather conditions have been analyzed in order to determine where wind and solar generation technologies can be integrated into a mine with the highest efficiency. Although the electricity demand of the GMI is almost completely covered by grid connections, a hybrid energy system (HES) based on a mix of RE and fossil energy is developed in order to demonstrate environmental and economic benefits. The HES for the GMI combines wind turbine, solar PV, battery and diesel generation. The model has been calculated using the HOMER software. Furthermore, the demonstrated HES contains a forecasting model that predicts solar and wind generation in advance. The main result of the HES, the reduction in CO2 emissions, is estimated in order to make mining processes more environmentally friendly.
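The role of each component in such a hybrid system can be sketched with a greedy hourly dispatch (hypothetical profiles and a much simpler rule than HOMER's actual optimization): renewables serve the load first, surplus charges the battery, the battery covers shortfalls, and diesel takes the residual:

```python
def dispatch(load, wind, solar, batt_capacity):
    """Greedy hourly dispatch for a wind/solar/battery/diesel hybrid.
    Returns the diesel energy required (kWh) over the horizon; the CO2
    result then follows from an emission factor per diesel kWh."""
    soc = 0.0          # battery state of charge, kWh
    diesel = 0.0
    for l, w, s in zip(load, wind, solar):
        re = w + s
        if re >= l:
            soc = min(batt_capacity, soc + (re - l))   # store the surplus
        else:
            deficit = l - re
            draw = min(soc, deficit)                   # discharge battery
            soc -= draw
            diesel += deficit - draw                   # diesel covers the rest
    return diesel

# Hypothetical 4-hour profile for an off-grid mine (kWh per hour)
print(dispatch(load=[100, 100, 100, 100],
               wind=[80, 120, 60, 20],
               solar=[30, 40, 0, 0],
               batt_capacity=50))
```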

Keywords: diesel generation, German mining industry, hybrid energy system, hybrid optimization model for electric renewables, optimized model, renewable energy

Procedia PDF Downloads 342
1038 Towards a Robust Patch Based Multi-View Stereo Technique for Textureless and Occluded 3D Reconstruction

Authors: Ben Haines, Li Bai

Abstract:

Patch based reconstruction methods have been, and still are, among the top performing approaches to 3D reconstruction to date. Their local approach to refining the position and orientation of a patch, free of global minimisation and independent of surface smoothness, makes patch based methods extremely powerful in recovering fine grained detail of an object's surface. However, patch based approaches still fail to faithfully reconstruct textureless or highly occluded surface regions; thus, though performing well under lab conditions, they deteriorate in industrial or real world situations. They are also computationally expensive. Current patch based methods generate point clouds with holes in textureless or occluded regions that require expensive energy minimisation techniques to fill and interpolate into a high fidelity reconstruction. Such shortcomings hinder the adaptation of these methods for industrial applications, where object surfaces are often highly textureless and the speed of reconstruction is an important factor. This paper presents on-going work towards a multi-resolution approach to address these problems, utilizing particle swarm optimisation to reconstruct high fidelity geometry and increasing robustness to textureless features through an adapted approach to the normalised cross correlation. The work also aims to speed up the reconstruction using advances in GPU technologies and to remove the need for costly initialization and expansion. Through the combination of these enhancements, the intention of this work is to create denser patch clouds, even in textureless regions, within a reasonable time. Initial results show the potential of such an approach to construct denser point clouds with accuracy comparable to that of the current top-performing algorithms.
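The photo-consistency measure at the heart of patch refinement is the normalised cross correlation. A minimal version (the standard formulation, not the authors' adapted one) shows its invariance to brightness offsets and also hints at the textureless-region problem: for a near-constant patch the denominator approaches zero and the score becomes unstable:

```python
import numpy as np

def ncc(patch_a, patch_b, eps=1e-9):
    """Normalised cross-correlation between two image patches; 1 means
    identical up to brightness and contrast. Used as the photo-consistency
    score when refining a patch's position and orientation."""
    a = np.asarray(patch_a, dtype=float).ravel()
    b = np.asarray(patch_b, dtype=float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

p = [[10, 20], [30, 40]]
q = [[110, 120], [130, 140]]   # same pattern with a brightness offset
print(round(ncc(p, q), 3))
```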

Keywords: 3D reconstruction, multiview stereo, particle swarm optimisation, photo consistency

Procedia PDF Downloads 202
1037 Numerical Analysis of Solar Cooling System

Authors: Nadia Allouache, Mohamed Belmedani

Abstract:

Solar energy is a sustainable, practically inexhaustible and environmentally friendly alternative to the available fossil fuels. It is a renewable and economical energy that can be harnessed sustainably over the long term and thus stabilizes energy costs. Solar cooling technologies have been developed to curb the growing electricity consumption for air conditioning and to displace the peak load during hot summer days. A numerical analysis of the thermal and solar performances of an annular finned adsorber, the most important component of an adsorption solar refrigerating system, is carried out in this work. Different adsorbent/adsorbate pairs, namely activated carbon AC35/methanol, activated carbon AC35/ethanol, and activated carbon BPL/ammonia, are studied. Modeling the adsorption cooling machine requires solving the equations describing the energy and mass transfer in the tubular finned adsorber. The Wilson and Dubinin-Astakhov models of the solid-adsorbate equilibrium are used to calculate the adsorbed quantity. The porous medium and the fins are contained in the annular space, and the adsorber is heated by solar energy. The effects of key parameters on the adsorbed quantity and on the thermal and solar performances are analysed and discussed. The AC35/methanol pair gives the best system performance, compared to the BPL/ammonia and AC35/ethanol pairs. The system performances are sensitive to the fin geometry. For data measured on clear days of July 2023 in Algeria and Morocco, the performances of the cooling system are notably higher in Algeria.
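The Dubinin-Astakhov equilibrium model mentioned above has the closed form x = x0·exp(−D·(T·ln(Ps/P))^n), giving the adsorbed quantity from the adsorber temperature and the saturation and evaporator pressures. A minimal sketch with illustrative (not measured) coefficients for an activated-carbon/methanol-like pair:

```python
import math

def dubinin_astakhov(x0, d, n, temp_k, p_sat, p):
    """Adsorbed quantity (kg adsorbate per kg adsorbent) from the
    Dubinin-Astakhov equilibrium model: x = x0 * exp(-D * (T*ln(Ps/P))^n).
    The coefficients x0, D and n are fitted per adsorbent/adsorbate pair."""
    return x0 * math.exp(-d * (temp_k * math.log(p_sat / p)) ** n)

# Illustrative coefficients only; real values come from fitting isotherm data
x = dubinin_astakhov(x0=0.45, d=5.02e-7, n=2.15,
                     temp_k=300.0, p_sat=16.9e3, p=8.0e3)
print(round(x, 3))
```

Evaluating this expression across the adsorber's temperature cycle is what yields the adsorbed-quantity swing from which the cooling performance coefficients are computed.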

Keywords: activated carbon AC35-methanol pair, activated carbon AC35-ethanol pair, activated carbon BPL-ammonia pair, annular finned adsorber, performance coefficients, numerical analysis, solar cooling system

Procedia PDF Downloads 53
1036 The Role of Financial Literacy in Driving Consumer Well-Being

Authors: Amin Nazifi, Amir Raki, Doga Istanbulluoglu

Abstract:

The incorporation of technological advancements into financial services, commonly referred to as Fintech, is primarily aimed at promoting services that are accessible, convenient, and inclusive, thereby benefiting both consumers and businesses. Fintech services employ a variety of technologies, including Artificial Intelligence (AI), blockchain, and big data, to enhance the efficiency and productivity of traditional services. Cryptocurrency, a component of Fintech, is projected to be a trillion-dollar industry, with over 320 million consumers globally investing in various forms of cryptocurrencies. However, these potentially transformative services can also lead to adverse outcomes: recent Fintech innovations have increasingly been linked to misconduct and disservice, with serious implications for consumer well-being. This could be attributed to the ease of access to Fintech, which enables adults to trade cryptocurrencies, shares, and stocks via mobile applications. Nevertheless, little is known about these darker aspects of technological advancements such as Fintech. Hence, this study aims to generate scholarly insights into the design of robust and resilient Fintech services that can add value to businesses and enhance consumer well-being. Using a mixed-method approach, the study will investigate the personal and contextual factors influencing consumers’ adoption and usage of technology innovations and the impacts of these factors on consumer well-being. First, semi-structured interviews will be conducted with a sample of Fintech users until theoretical saturation is achieved. Subsequently, based on the findings of the first study, a quantitative study will be conducted to develop and empirically test the impacts of these factors on consumers’ well-being, using an online survey with a sample of 300 participants experienced in using Fintech services.
This study will contribute to the growing Transformative Service Research (TSR) literature by addressing the latest priorities in service research and shedding light on the impact of fintech services on consumer well-being.

Keywords: consumer well-being, financial literacy, Fintech, service innovation

Procedia PDF Downloads 63
1035 A Perspective on Education to Support Industry 4.0: An Exploratory Study in the UK

Authors: Sin Ying Tan, Mohammed Alloghani, A. J. Aljaaf, Abir Hussain, Jamila Mustafina

Abstract:

Industry 4.0 is a term frequently used to describe the upcoming industry era. Higher education institutions aim to prepare students to fulfil future industry needs. The advancement of digital technology has paved the way for the evolution of education and technology, yet the evolution of education has proven conservative, with a high level of resistance to change and transformation. The gap between the industry's needs and the competencies generally offered by education reveals an increasing need to find new educational models for the future. The aim of this study was to identify the main issues faced by both universities and students in preparing the future workforce. From December 2018 to April 2019, a regional qualitative study was undertaken in Liverpool, United Kingdom (UK). Interviews were conducted with employers, faculty members and undergraduate students, and the results were analyzed using the open coding method. Four main issues were identified: the characteristics of the future workforce, students' readiness to work, expectations of the different roles played at the tertiary education level, and awareness of the latest trends. This paper concludes that employers and academic practitioners agree that their expectations of each other's roles differ, and that in order to face the rapidly changing technology era, students should have not only the right skills but also the right attitude toward learning. The authors therefore address this issue by proposing a learning framework, the 'ASK SUMA' framework, as a guideline to support students, academicians and employers in meeting the needs of Industry 4.0. Furthermore, this technology era requires employers, academic practitioners and students to work together in order to face the upcoming challenges and fast-changing technologies. It is also suggested that an interactive system be provided as a platform to support the three parties in playing their roles.

Keywords: attitude, expectations, industry needs, knowledge, skills

Procedia PDF Downloads 123
1034 Impact of Soot on NH3-SCR, NH3 Oxidation and NH3 TPD over Cu/SSZ-13 Zeolite

Authors: Lidija Trandafilovic, Kirsten Leistner, Marie Stenfeldt, Louise Olsson

Abstract:

Ammonia Selective Catalytic Reduction (NH3-SCR) is one of the most efficient post-combustion abatement technologies for removing NOx from diesel engines. To remove soot, diesel particulate filters (DPFs) are used. Recently, SCR-coated filters have been introduced, which capture soot and are simultaneously active for ammonia SCR. SCR-coated filters offer large advantages, such as decreased volume and better light-off characteristics, since both the SCR and filter functions sit close to the engine. The objective of this work was to examine the effect of soot, produced in an engine bench, on Cu/SSZ-13 catalysts. The impact of soot on Cu/SSZ-13 in standard SCR, NH3 oxidation, NH3 temperature programmed desorption (TPD), and soot oxidation (with and without water) was examined using flow reactor measurements. In all experiments, prior to the soot loading, the fresh activity of Cu/SSZ-13 was recorded while increasing the temperature stepwise from 100°C to 600°C. Thereafter, the sample was loaded with soot and the experiment was repeated in the temperature range from 100°C to 700°C. The amounts of CO and CO2 produced in each experiment are used to calculate the soot oxidized at each steady-state temperature. The soot oxidized during the heating to the next temperature step is included in that step; e.g., the CO + CO2 produced while increasing the temperature to 600°C is added to the 600°C step. Two factors appear most important for soot oxidation: ammonia and water. Water shifts the maxima of CO2 and CO production towards lower temperatures; thus water increases the soot oxidation. Moreover, when ammonia is added to the system, the soot oxidation is clearly lowered in the presence of ammonia: the integrated COx at 500°C is larger for the O2 + H2O case, while the opposite result was obtained at 600°C, where more soot was oxidised in the O2 + H2O + NH3 case. To conclude, the presence of ammonia reduces the soot oxidation, which is in line with the ammonia TPD results, where we found ammonia storage on the soot. Interestingly, under ammonia SCR conditions the activity for soot oxidation is regained at 500°C. At this high temperature the SCR zone is very short; thus the majority of the catalyst is not exposed to ammonia, and the inhibiting effect of ammonia is therefore not observed.
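The soot-oxidation bookkeeping described above reduces to a carbon balance: every mole of CO or CO2 leaving the reactor carries one carbon atom taken from the soot. A minimal sketch with hypothetical integrated amounts (not the study's measurements):

```python
def soot_oxidized_g(co_mol, co2_mol, molar_mass_c=12.011):
    """Mass of soot (carbon) consumed at one steady-state temperature step,
    from the integrated CO and CO2 produced; each mole of either product
    removes one mole of carbon from the soot layer."""
    return (co_mol + co2_mol) * molar_mass_c

# Hypothetical integrated amounts at a 600 °C step (mol), including the
# CO + CO2 released while heating up to that step, as described above
print(round(soot_oxidized_g(0.002, 0.010), 3))
```

Comparing these per-step masses between the O2 + H2O and O2 + H2O + NH3 experiments is what quantifies the inhibiting effect of ammonia.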

Keywords: NH3-SCR, Cu/SSZ-13, soot, zeolite

Procedia PDF Downloads 234
1033 Tritium Activities in Romania, Potential Support for Development of ITER Project

Authors: Gheorghe Ionita, Sebastian Brad, Ioan Stefanescu

Abstract:

In any fusion device, tritium plays a key role, both as a fuel component and, due to its radioactivity and its easy incorporation as tritiated water (HTO), as a safety concern. For the ITER project, to reduce the constant potential for tritium emission, a Water Detritiation System (WDS) and an Isotopic Separation System (ISS) will be implemented. At the same time, during the operation of fission CANDU reactors, the tritium content of the heavy water used as moderator and cooling agent increases (due to neutron activation) and has to be reduced as well. In Romania, at the National Institute for Cryogenics and Isotopic Technologies (ICIT Rm-Valcea), there is an Experimental Pilot Plant for Tritium Removal (ExpTRF), with the aim of providing technical data for the design and operation of an industrial plant for the detritiation of heavy water from the CANDU reactors at Cernavoda NPP. The selected technology is based on the catalyzed isotopic exchange process between deuterium and liquid water (LPCE) combined with the cryogenic distillation process (CD). This paper presents an updated review of activities in the field carried out in Romania after the year 2000, in particular those related to the development and operation of the Tritium Removal Experimental Pilot Plant. A comparison between the experimental pilot plant and the industrial plant to be implemented at Cernavoda NPP is also presented. The similarities between the experimental pilot plant at ICIT Rm-Valcea and the water detritiation and isotopic separation systems of ITER are likewise presented and discussed. Many aspects, or 'open issues', relating to the WDS and ISS could be checked and clarified by a special research program developed within ExpTRF. Through these achievements and results, ICIT Rm-Valcea has proved its expertise and capability concerning tritium management, and its competence may therefore be used within the ITER project.

Keywords: ITER project, heavy water detritiation, tritium removal, isotopic exchange

Procedia PDF Downloads 411
1032 Mitigation Strategies in the Urban Context of Sydney, Australia

Authors: Hamed Reza Heshmat Mohajer, Lan Ding, Mattheos Santamouris

Abstract:

One of the worst environmental dangers for people who live in cities is the Urban Heat Island (UHI) effect, which is anticipated to become stronger in the coming years as a result of climate change. Accordingly, the key aim of this paper is to study the interaction between urban configuration and mitigation strategies, including increasing the albedo of the urban environment (reflective materials), implementing Urban Green Infrastructure (UGI), and a combination thereof. To analyse microclimate models of different urban categories in the metropolis of Sydney, this study assesses meteorological parameters using ENVI-met, a 3D computational fluid dynamics (CFD) simulation tool. Four parameters are considered when assessing the effectiveness of UHI mitigation strategies: ambient air temperature, wind speed, wind direction, and outdoor thermal comfort. Simulations of present conditions based on the basic model (scenario one) are taken as the benchmark, and the relative percentage variation of each scenario is calculated against this base model. The findings showed that combining high-albedo materials with vegetation can reduce ambient air temperature by up to 2.15 °C across the different urban layouts. Layouts with open arrangements (OT1) show a highly remarkable improvement in ambient air temperature and outdoor thermal comfort when mitigation technologies are applied, compared to their compact counterparts. In addition, all layouts show a greater reduction in the maximum ambient air temperature than in the minimum. On the other hand, scenarios associated with an increase in greenery are anticipated to have only a slight cooling effect, especially in high-rise layouts.

Keywords: sustainable urban development, urban green infrastructure, high-albedo materials, heat island effect

Procedia PDF Downloads 93
1031 Factors Affecting M-Government Deployment and Adoption

Authors: Saif Obaid Alkaabi, Nabil Ayad

Abstract:

Governments constantly seek to offer faster, more secure, and more efficient and effective services to their citizens. Recent changes and developments in communication services and technologies, mainly due to the Internet, have led to immense improvements in the way the governments of advanced countries carry out their internal operations. Therefore, advanced e-government services have been broadly adopted and used in various developed countries, as well as being adapted to developing countries. The implementation of these advances depends on the utilization of the most innovative data techniques, mainly in web-dependent applications, to enhance the main functions of government. These functions, in turn, have spread to mobile and wireless techniques, generating a new direction called m-government. This paper discusses a selection of available m-government applications and several business modules and frameworks in various fields. In practice, m-government models, techniques, and methods have become the improved version of e-government. M-government offers the potential for applications that work better, providing citizens with services that utilize mobile communication and data models incorporating several government entities. Developing countries can benefit greatly from this innovation because a large percentage of their population is young and can adapt to new technology, and because mobile computing devices are more affordable. The use of mobile transaction models encourages effective participation through mobile portals by businesses, organizations, and individual citizens. Although the application of m-government has great potential, it also has major limitations. These include the implementation of wireless networks and the related communications, the encouragement of mobile diffusion, the administration of complicated security tasks (including the ability to keep information private), and the management of legal issues concerning mobile applications and the utilization of services.

Keywords: e-government, m-government, system dependability, system security, trust

Procedia PDF Downloads 380
1030 Evaluation of Possible Application of Cold Energy in Liquefied Natural Gas Complexes

Authors: А. I. Dovgyalo, S. O. Nekrasova, D. V. Sarmin, A. A. Shimanov, D. A. Uglanov

Abstract:

Liquefied natural gas (LNG) is usually gasified using atmospheric heat, yet a considerable amount of energy (about 1 kW∙h per 1 kg of LNG) has to be consumed to produce the liquefied gas in the first place. This study offers a number of solutions that allow the cold energy of LNG to be used. First, the application of turbines installed behind the evaporator in an LNG complex is evaluated; additional energy can be obtained from their expansion work and then converted into electricity. At an LNG consumption of G = 1000 kg/h, an expansion work capacity of about 10 kW can be reached. Here an open Rankine cycle is realized, in which a low-capacity cryo-pump (about 500 W) performs its normal function of providing the cycle pressure. The application of a Stirling engine within the LNG complex is also discussed as another way to realize the cold energy. Considering that the efficiency coefficient of a Stirling engine reaches 50%, an LNG consumption of G = 1000 kg/h may yield a capacity of about 142 kW from such a thermal machine. The capacity of the pump required to compensate for pressure losses as the LNG passes through the hydraulic channel is 500 W. Apart from the above-mentioned converters, thermoelectric generating packages (TGP), which are now widely used, can be proposed. At present, the modern thermoelectric generator line provides electric capacity with an efficiency coefficient of up to 15%. In the proposed complex, it is suggested to install the thermoelectric generators on the evaporator surface in such a way that the cold side is in contact with the evaporator's surface and the hot side with the atmosphere. At an LNG consumption of G = 1000 kg/h and the specified efficiency coefficient, the heat flow capacity Qh is about 32 kW. The derivable net electric power is P = 4.2 kW, and the number of packages amounts to about 104 pieces. The calculations carried out demonstrate the promise of research in this field of propulsion plant development and show how the energy-saving potential of liquefied natural gas and other cryogenic technologies can be realized.
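As a rough sanity check, the quoted TGP figures can be reproduced with a few lines of arithmetic. The effective efficiency and per-package output below are assumptions chosen to be consistent with the figures in the abstract (an effective efficiency somewhat below the 15% maximum, and roughly 40 W per package), not values reported by the study:

```python
# Back-of-the-envelope check of the TGP figures quoted above.
Q_h_kW = 32.0        # heat flow through the evaporator surface, from the abstract
eta_eff = 0.13       # ASSUMED effective conversion efficiency (below the 15% max)
per_package_W = 40.0 # ASSUMED electric output of one thermoelectric package

P_kW = eta_eff * Q_h_kW                      # net electric power, ~4.2 kW
n_packages = round(P_kW * 1000 / per_package_W)  # ~104 packages
print(P_kW, n_packages)
```

Under these assumptions the computation gives about 4.2 kW and 104 packages, matching the numbers quoted in the abstract.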

Keywords: cold energy, gasification, liquefied natural gas, electricity

Procedia PDF Downloads 272
1029 CRYPTO COPYCAT: A Fashion Centric Blockchain Framework for Eliminating Fashion Infringement

Authors: Magdi Elmessiry, Adel Elmessiry

Abstract:

The fashion industry represents a significant portion of the global gross domestic product; however, it is plagued by cheap imitators that infringe on trademarks, destroying the fashion industry's hard work and investment. While the copycats are eventually found and stopped, by then the damage has already been done: sales are missed and direct and indirect jobs are lost. The infringer thrives on two main facts: the time it takes to discover them, and the lack of tracking technologies that can help the consumer distinguish them. Blockchain is a newly emerging technology that provides a distributed, encrypted, immutable, and fault-resistant ledger, which makes it a ripe technology for resolving the infringement epidemic facing the fashion industry. The significance of the study is that a new approach, leveraging state-of-the-art blockchain technology coupled with artificial intelligence, is used to create a framework addressing the fashion infringement problem. It transforms the current focus on legal enforcement, which is difficult at best, into consumer awareness, which is far more effective. The framework, Crypto CopyCat, creates an immutable digital asset representing the actual product to empower the customer with a near-real-time query system. This combination emphasizes the consumer's awareness and appreciation of the product's authenticity, while providing real-time feedback to the producer regarding fake replicas. The main finding of this study is that implementing this approach can delay the penetration of fake products into the original product's market, thus allowing the original product time to take advantage of that market. The shift in fake adoption results in reduced returns, which impedes the copycat market and moves the emphasis to original product innovation.

Keywords: fashion, infringement, blockchain, artificial intelligence, textiles supply chain

Procedia PDF Downloads 258
1028 Re-Constructing the Research Design: Dealing with Problems and Re-Establishing the Method in User-Centered Research

Authors: Kerem Rızvanoğlu, Serhat Güney, Emre Kızılkaya, Betül Aydoğan, Ayşegül Boyalı, Onurcan Güden

Abstract:

This study addresses the re-construction and implementation of the methodological framework developed to evaluate how locative media applications accompany the urban experiences of international students coming to Istanbul on exchange programs in 2022. The research design was built on a three-stage model. In the first stage, the research team conducted a qualitative questionnaire to gain exploratory data. These data were then used to form three persona groups representing the sample by applying cluster analysis. In the second stage, a semi-structured digital diary study, organized around a gamified task list, was carried out with a sample selected from the persona groups. This stage proved to be the most difficult in terms of obtaining valid data from the participant group. The research team re-evaluated the design of this second stage in order to reach the participants who would perform the assigned tasks while sharing their momentary city experiences, to ensure a daily data flow for two weeks, and to increase the quality of the obtained data. The final stage, which elaborates on the findings, is the 'Walk & Talk', completed with face-to-face, in-depth interviews. The multiple methods used in the research process were seen to contribute to the depth and data diversity of research conducted in the context of urban experience and locative technologies. In addition, by adapting the research design to the experiences of the users included in the sample, the differences and similarities between the initial research design and the design actually applied are shown.
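The persona-forming step described above (clustering questionnaire responses into three groups) can be sketched with a plain k-means pass. The two numeric scores per respondent and the fixed initial centroids below are illustrative assumptions, not the study's actual questionnaire variables:

```python
def dist2(p, q):
    """Squared Euclidean distance between two score vectors."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kmeans(points, centers, iters=20):
    """Plain k-means: assign each point to its nearest centroid,
    then move each centroid to the mean of its members."""
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [min(range(len(centers)), key=lambda c: dist2(p, centers[c]))
                  for p in points]
        for c in range(len(centers)):
            members = [p for p, lbl in zip(points, labels) if lbl == c]
            if members:  # keep the old centroid if a cluster empties out
                centers[c] = tuple(sum(v) / len(members) for v in zip(*members))
    return labels

# Hypothetical questionnaire scores, e.g. (app-usage, exploration tendency)
respondents = [(1.0, 1.2), (0.8, 1.0), (1.1, 0.9), (1.3, 1.1),   # persona A
               (5.0, 5.2), (4.8, 5.1), (5.2, 4.9), (5.1, 5.3),   # persona B
               (9.0, 1.1), (8.8, 0.9), (9.2, 1.2), (9.1, 1.0)]   # persona C
initial = [respondents[0], respondents[4], respondents[8]]
labels = kmeans(respondents, list(initial))
```

On well-separated toy data like this, the three recovered clusters correspond to the three persona groups; a production analysis would use a vetted implementation and a principled choice of k.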

Keywords: digital diary study, gamification, multi-model research, persona analysis, research design for urban experience, user-centered research, “Walk & Talk”

Procedia PDF Downloads 169
1027 Risks beyond Cyber in IoT Infrastructure and Services

Authors: Mattias Bergstrom

Abstract:

Significance of the study: This research provides new insights into the risks of digital embedded infrastructure. We analyze each risk and its potential mitigation strategies, especially for AI and autonomous automation. Moreover, the analysis presented in this paper conveys valuable information for future research toward more stable, secure, and efficient autonomous systems. To learn and understand the risks, a large IoT system was envisioned, and risks relating to hardware, tampering, and cyberattacks were collected, researched, and evaluated to create a comprehensive understanding of the potential risks. Potential solutions were then evaluated on an open-source IoT hardware setup. The list below shows the passive and active risks identified and evaluated in the research. Passive risks: (1) Hardware failures: critical systems relying on high-rate, high-quality data are growing in number; SCADA systems for infrastructure are good examples. (2) Hardware delivering erroneous data: sensors break, and when they do, they don't always go silent; they can keep going, except that the data they deliver is garbage, and if that data is not filtered out, it becomes disruptive noise in the system. (3) Bad hardware injection: erroneously generated sensor data can be pumped into a system by malicious actors with the intent of creating disruptive noise in critical systems. (4) Data gravity: the weight of the data collected affects data mobility. (5) Cost inhibitors: running services that need huge centralized computing is cost-inhibiting; large, complex AI can be extremely expensive to run. Active risks: Denial of service: one of the simplest attacks, in which an attacker overloads the system with bogus requests so that valid requests disappear in the noise. Malware: anything from simple viruses to complex botnets created with specific goals, where the creator steals computing power and bandwidth from you to attack someone else. Ransomware: a kind of malware, but so different in its implementation that it is worth its own mention; the goal of these pieces of software is to encrypt your system so that it can only be unlocked with a key that is held for ransom. DNS spoofing: by spoofing DNS calls, valid requests and data dumps can be sent to bad destinations, where the data can be extracted for extortion, or corrupted and re-injected into a running system, creating a data echo noise loop. After testing multiple potential solutions, we found that the most prominent solution to these risks was to use a peer-to-peer consensus algorithm over a blockchain to validate the data and behavior of the devices (sensors, storage, and computing) in the system. With the devices autonomously policing themselves for deviant behavior, all the risks listed above can be negated. In conclusion, an Internet middleware that provides these features would be an easy and secure solution for any future autonomous IoT deployment, as it provides separation from the open Internet while remaining accessible via blockchain keys.

Keywords: IoT, security, infrastructure, SCADA, blockchain, AI

Procedia PDF Downloads 106
1026 Understanding the Utilization of Luffa Cylindrica in the Adsorption of Heavy Metals to Clean Up Wastewater

Authors: Akanimo Emene, Robert Edyvean

Abstract:

In developing countries, low-cost methods of wastewater treatment are highly desirable, and adsorption is an efficient and economically viable treatment process for wastewater. The utilisation of this process is based on understanding the relationship between the growth environment and the metal capacity of the biomaterial. Luffa cylindrica (LC), a plant material, was used as the adsorbent in an adsorption system for heavy metals. Chemically modified LC was used to adsorb the heavy metal ions lead and cadmium from aqueous solution under varying experimental conditions. The experimental factors studied were adsorption time, initial metal ion concentration, ionic strength, and solution pH. The chemical nature and surface area of the tissues adsorbing heavy metals in the LC biosorption system were characterised using electron microscopy and infrared spectroscopy, which showed an increase in surface area and improved adhesion capacity after chemical treatment. Metal speciation showed the binary interaction between the ions and the LC surface as the pH increases, with maximum adsorption occurring between pH 5 and pH 6. The ionic strength of the metal ion solution affects the adsorption capacity through the surface charge and the availability of adsorption sites on the LC. The nature of the metal-surface complexes formed was analysed by fitting kinetic and isotherm models to the experimental data; the pseudo-second-order kinetic model and the two-site Langmuir isotherm model showed the best fit. Through an understanding of this process, there will be an opportunity to provide an alternative method of water purification, offering an option when expensive water treatment technologies are not viable in developing countries.
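The pseudo-second-order fit mentioned above is commonly performed on the linearized form t/qt = 1/(k2·qe²) + t/qe, so that a straight-line fit of t/qt against t yields qe from the slope and k2 from the intercept. A minimal sketch, using synthetic data with assumed qe and k2 values (the study's measured values are not given in the abstract):

```python
# Pseudo-second-order kinetics: qt = k2*qe^2*t / (1 + k2*qe*t)
qe_true, k2_true = 25.0, 0.01  # ASSUMED mg/g and g/(mg*min), illustrative only
ts = [5.0, 10.0, 20.0, 40.0, 60.0, 90.0, 120.0]  # contact times, min
qts = [k2_true * qe_true**2 * t / (1 + k2_true * qe_true * t) for t in ts]

# Ordinary least-squares fit of t/qt against t
ys = [t / q for t, q in zip(ts, qts)]
n = len(ts)
mt, my = sum(ts) / n, sum(ys) / n
slope = (sum((t - mt) * (y - my) for t, y in zip(ts, ys))
         / sum((t - mt) ** 2 for t in ts))
intercept = my - slope * mt

qe_fit = 1.0 / slope             # qe = 1/slope
k2_fit = slope**2 / intercept    # since intercept = 1/(k2*qe^2)
```

On noise-free synthetic data the fit recovers the assumed parameters exactly; with real batch-adsorption data, the quality of this linear fit (R²) is what justifies calling the kinetics pseudo-second-order.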

Keywords: adsorption, luffa cylindrica, metal-surface complexes, pH

Procedia PDF Downloads 87
1025 The Changes in Motivations and the Use of Translation Strategies in Crowdsourced Translation: A Case Study on Global Voices’ Chinese Translation Project

Authors: Ya-Mei Chen

Abstract:

Online crowdsourced translation, an innovative translation practice enabled by Web 2.0 technologies and the democratization of information, has become increasingly popular in the Internet era. Carried out by grassroots Internet users, crowdsourced translation has fundamentally different features from its offline, traditional counterpart, such as voluntary participation and parallel collaboration. To better understand this participatory and collaborative nature, this paper will use the Chinese translation project of Global Voices as a case study to investigate the following issues: (1) the changes in volunteer translators' and reviewers' motivations for participation; (2) translators' and reviewers' use of translation strategies; and (3) the correlations of translators' and reviewers' motivations and strategies with the organizational mission, the translation style guide, the translator-reviewer interaction, the mediation of the translation platform, and the various types of capital within the translation field. With the aim of systematically exploring these three issues, this paper will collect both quantitative and qualitative data and then draw on Engeström's activity theory and Bourdieu's field theory as a theoretical framework to analyze the data in question. An anonymous online questionnaire will be conducted to obtain the quantitative data. The questionnaire will contain questions related to volunteer translators' and reviewers' backgrounds, participation motivations, translation strategies, and mutual relations, as well as the operation of the translation platform. The qualitative data will come from (1) a comparative study between English news texts published on Global Voices and their Chinese translations, (2) an analysis of the online discussion forum associated with Global Voices' Chinese translation project, and (3) information about the project's translation mission and guidelines.
It is hoped that this research, through a detailed sociological analysis of a cause-driven crowdsourced translation project, can enable translation researchers and practitioners to adequately meet the translation challenges appearing in the digital age.

Keywords: crowdsourced translation, global voices, motivation, translation strategies

Procedia PDF Downloads 369
1024 How Vernacular Attributes of Traditional Buildings Can Be Integrated Into Modern Designs - A Case Study of Thirumayilai, Mylapore

Authors: Divya Ramaseshan

Abstract:

The indigenous beauty of a space supported by its local context is unmatched. India, known as a hub of varied cultural significance, has some of the best examples of vernacularism. This paper focuses on the traditional houses of Thirumayilai, Mylapore, one of the oldest and most populous neighbourhoods of Chennai. The Mylapore houses are known for their Agraharam style, characterized by the thinnai, the courtyard, and sloping roofs, and their design shows a combined influence of Indian, Islamic, and Neo-classical architecture. The design of the houses reflects the lives of the Brahmin communities, which have almost vanished from sight now. In response to the growing demands of local residents as well as urbanization, many houses have been renovated. Some of these structures have been conserved in certain streets, showcasing their historical identity; others have either been demolished or redesigned based on people's needs. These structures have been identified and studied to understand the comparative features that have changed, many of which relate directly to the city's climate, family size, socializing habits, and local materials. Being a temple town, Mylapore has contour variations sloping towards various water bodies, and these factors have also been considered in building homes. The study aims to list the design guidelines that could be effective in today's construction field; the pros and cons are analyzed, and the respective methodologies are framed. Modern construction technologies can deliver the best visual aesthetics in a short span of time, but the serene touch of teak wood, walking on paved stones, daydreaming in sunlit courtyards, and chit-chatting in porticos are always cherished. Architects around the world are trying hard to achieve such appreciated design elements in upcoming projects through the best use of modern technology, which will also improve people's mental health in the comfort of their homes.

Keywords: Agraharam, Mylapore, traditional, vernacularism

Procedia PDF Downloads 99
1023 An AI-generated Semantic Communication Platform in HCI Course

Authors: Yi Yang, Jiasong Sun

Abstract:

Almost every aspect of our daily lives is now intertwined with some degree of human-computer interaction (HCI). HCI courses draw on knowledge from disciplines as diverse as computer science, psychology, design, anthropology, and more. Our HCI course, named the Media and Cognition course, is constantly updated to reflect state-of-the-art technological advances such as virtual reality, augmented reality, and artificial intelligence-based interaction. For more than a decade, the course has used an interest-based approach to teaching, in which students proactively propose research-based questions and collaborate with teachers, using course knowledge to explore potential solutions. Semantic communication plays a key role in facilitating understanding and interaction between users and computer systems, ultimately enhancing system usability and user experience. Advances in AI-generation technology, which have gained significant attention from both academia and industry in recent years, are exemplified by language models like GPT-3 that generate human-like dialogue from given prompts. The latest version of our course practices a semantic communication platform based on AI-generation techniques. The purpose of this semantic communication is twofold: to extract and transmit task-specific information, and to ensure efficient end-to-end communication with minimal latency. The platform evaluates the retainability of signal sources: visual signals with low retainability are converted into textual prompts, transmitted, and reconstructed at the receiving end using AI-generation techniques, while visual signals with a high retainability rate are compressed and transmitted according to their respective regions. The platform and the associated research are a testament to our students' growing ability to independently investigate state-of-the-art technologies.
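The retainability-based routing described above can be pictured as a simple dispatcher. The threshold, field names, and the `to_prompt`/`compress` stand-ins below are all hypothetical, since the abstract does not specify the platform's API; they only illustrate the two transmission paths:

```python
def to_prompt(signal):
    # Stand-in for an AI captioning model that turns an image into a text prompt.
    return f"prompt describing {signal['name']}"

def compress(signal):
    # Stand-in for region-wise compression of the raw visual signal.
    return signal["pixels"][::2]

def route(signal, threshold=0.5):
    """Send low-retainability signals as textual prompts,
    high-retainability signals as compressed visual data."""
    if signal["retainability"] < threshold:
        return ("prompt", to_prompt(signal))
    return ("compressed", compress(signal))

low = {"name": "street scene", "retainability": 0.2, "pixels": [1, 2, 3, 4]}
high = {"name": "diagram", "retainability": 0.9, "pixels": [1, 2, 3, 4]}
```

Here `route(low)` takes the prompt path and `route(high)` the compression path; the actual platform would replace both stand-ins with learned models.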

Keywords: human-computer interaction, media and cognition course, semantic communication, retainability, prompts

Procedia PDF Downloads 113
1022 Physiological and Biochemical Based Analysis to Assess the Efficacy of Mulch under Partial Root Zone Drying in Wheat

Authors: Salman Ahmad, Muhammad Aown Sammar Raza, Muhammad Farrukh Saleem, Rashid Iqbal, Muhammad Saqlain Zaheer, Muhammad Usman Aslam, Imran Haider, Muhammad Adnan Nazar, Muhammad Ali

Abstract:

Among the various abiotic stresses, drought is one of the most challenging for field crops. Wheat, one of the world's major staple foods, is severely affected by water deficit stress in the current scenario of climate change. In order to ensure food security with depleting water resources, there is an urgent need to adopt technologies that give sufficient crop yield with less water consumption. Mulching and partial root zone drying (PRD) are two important management techniques used for water conservation and to mitigate the negative impacts of drought. The experiment was conducted to screen for the mulch best suited to wheat under a PRD system. Two water application techniques (I1 = full irrigation, I2 = PRD irrigation) and four mulch treatments (M0 = un-mulched, M1 = black plastic mulch, M2 = wheat straw mulch, and M3 = cotton sticks mulch) were arranged in a completely randomized design with four replications. The black plastic mulch treatment performed better than the other mulch treatments. Among irrigation levels, higher values of growth, physiological, and water-related parameters were recorded in the control treatment, while quality traits and enzymatic activities were higher under partial root zone drying. The study concluded that the adverse effects of drought on wheat can be significantly mitigated by using mulches, and that black plastic mulch is best suited to the partial root zone drying irrigation system in wheat.

Keywords: antioxidants, leaf water relations, Mulches, osmolytes, partial root zone drying, photosynthesis

Procedia PDF Downloads 263
1021 Alternating Expectation-Maximization Algorithm for a Bilinear Model in Isoform Quantification from RNA-Seq Data

Authors: Wenjiang Deng, Tian Mou, Yudi Pawitan, Trung Nghia Vu

Abstract:

Estimation of isoform-level gene expression from RNA-seq data depends on simplifying assumptions, such as a uniform read distribution, that are easily violated in real data. Such violations typically lead to biased estimates. Most existing methods provide bias correction steps that are based on biological considerations, such as GC content, and are applied to single samples separately. The main problem is that not all biases are known. For example, new technologies such as single-cell RNA-seq (scRNA-seq) may introduce sources of bias not seen in bulk-cell data. This study introduces a method called XAEM based on a more flexible and robust statistical model. Existing methods are essentially based on a linear model Xβ, where the design matrix X is known and derived from the simplifying assumptions. In contrast, XAEM treats Xβ as a bilinear model with both X and β unknown. Joint estimation of X and β is made possible by the simultaneous analysis of multi-sample RNA-seq data. Compared to existing methods, XAEM automatically performs empirical correction of potentially unknown biases. XAEM implements an alternating expectation-maximization (AEM) algorithm that alternates between the estimation of X and of β. For speed, XAEM utilizes quasi-mapping for read alignment, leading to a fast algorithm. Overall, XAEM performs favorably compared to other recent advanced methods. For simulated datasets, XAEM obtains higher accuracy for multiple-isoform genes, particularly for paralogs. In a differential-expression analysis of a real scRNA-seq dataset, XAEM achieves substantially greater rediscovery rates in an independent validation set.
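The alternating idea behind the bilinear model can be illustrated with a toy alternating least-squares loop. This is a simplification of XAEM's actual AEM updates, which the abstract does not spell out: here Y plays the role of observed multi-sample data, X the unknown design matrix, and B the per-sample coefficients, and each half-step solves an ordinary least-squares problem with the other factor held fixed:

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_isoforms, n_samples = 30, 3, 8
X_true = rng.random((n_obs, n_isoforms))
B_true = rng.random((n_isoforms, n_samples))
Y = X_true @ B_true                  # noiseless toy multi-sample data

# Alternate: fix X and solve for B, then fix B and solve for X.
X = rng.random((n_obs, n_isoforms))  # random initial design matrix
for _ in range(200):
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    Xt, *_ = np.linalg.lstsq(B.T, Y.T, rcond=None)
    X = Xt.T

# X and B are only identified up to an invertible transform, so we
# check the reconstruction of Y rather than the factors themselves.
rel_err = np.linalg.norm(Y - X @ B) / np.linalg.norm(Y)
```

On noiseless low-rank data the reconstruction error drives to near zero; the real method additionally handles count noise, non-negativity, and the expectation step over read assignments.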

Keywords: alternating EM algorithm, bias correction, bilinear model, gene expression, RNA-seq

Procedia PDF Downloads 140
1020 Isolation and Expansion of Human Periosteum-Derived Mesenchymal Stem Cells in Defined Serum-Free Culture Medium

Authors: Ainur Mukhambetova, Miras Karzhauov, Vyacheslav Ogay

Abstract:

Introduction: Mesenchymal stem cells (MSCs) have the capacity to differentiate into several cell lineages and are a promising source for cell therapy and tissue engineering. However, most current MSC culturing protocols use media supplemented with fetal bovine serum (FBS), which limits their clinical application due to the possibility of zoonotic infection, contamination, and immunological reactions. Consequently, formulating an effective serum-free culture medium has become one of the important problems in contemporary cell biotechnology. Objectives: The aim of this study was to identify an optimal serum-free medium for culturing periosteum-derived MSCs. Materials and methods: The MSCs were extracted from human periosteum and transferred to culture flasks pretreated with CELLstart™. Immunophenotypic characterization, proliferation, and in vitro differentiation of cells grown in STEM PRO® MSC SFM were compared to those of cells cultured in standard FBS-containing media. Chromosome analysis and flow cytometry were also performed. Results: We have shown that cells grown in STEM PRO® MSC SFM retained all the morphological, immunophenotypic (CD73, CD90, CD105, vimentin, and Stro-1), and cell differentiation characteristics specific to MSCs. Chromosome analysis indicated no anomalies in chromosome structure. Flow cytometry showed high expression of the cell adhesion molecules CD44 (98.8%), CD90 (97.4%), and CD105 (99.1%). In addition, cells grown in STEM PRO® MSC SFM showed a higher proliferation capacity than cells expanded in the standard FBS-containing medium. Conclusion: We have shown that STEM PRO® MSC SFM is optimal for culturing periosteum-derived human MSCs, which can subsequently be safely used in cell therapy.

Keywords: cell technologies, periosteum-derived MSCs, regenerative medicine, serum-free medium

Procedia PDF Downloads 296
1019 Isolation and Characterization of an Ethanol Resistant Bacterium from Sap of Saccharum officinarum for Efficient Fermentation

Authors: Rukshika S Hewawasam, Sisira K. Weliwegamage, Sanath Rajapakse, Subramanium Sotheeswaran

Abstract:

Biofuel is one of the emerging industries around the world owing to the crisis in petroleum fuel. Fermentation is a cost-effective and eco-friendly process for producing biofuel, so advances in microbes, substrates, and fermentation technologies drive new modifications to the process. One major problem in microbial ethanol fermentation is the low resistance of conventional microorganisms to high ethanol concentrations, which ultimately decreases the efficiency of the process. In the present investigation, an ethanol-resistant bacterium was isolated from the sap of Saccharum officinarum (sugar cane). The optimal culture conditions (pH, temperature, incubation period) and the microbiological, morphological, and biochemical characteristics, ethanol tolerance, sugar tolerance, and growth curve were investigated. The isolated microorganism tolerated an ethanol concentration of 18% (v/v) and a glucose concentration of 40% (v/v) in the medium. Biochemical characterization revealed the isolate to be Gram-negative, non-motile, and negative for the indole, methyl red, Voges-Proskauer, citrate utilization, and urease tests; the oxidase test was positive. Sucrose, glucose, fructose, maltose, dextrose, arabinose, raffinose, lactose, and saccharose can be utilized by this bacterium, a significant feature for effective fermentation. The fermentation process was carried out in a glucose medium under optimum conditions (pH 4, temperature 30˚C) and incubated for 72 hours. Maximum ethanol production was recorded as 12.0 ± 0.6% (v/v), and methanol was not detected in the final product of the fermentation process. Because of its high ethanol tolerance, this bacterium is especially useful in biofuel production and can be used to enhance the fermentation process over conventional microorganisms. Investigations are currently being conducted to establish the identity of the bacterium.

Keywords: bacterium, bio-fuel, ethanol tolerance, fermentation

Procedia PDF Downloads 339
1018 Green Transport Solutions for Developing Cities: A Case Study of Nairobi, Kenya

Authors: Benedict O. Muyale, Emmanuel S. Murunga

Abstract:

Cities have always been loci of national growth as well as of cultural fusion and innovation. Over 50% of the global population dwells in cities and urban centres. This means that cities are prolific users of natural resources and generators of waste; hence they produce most of the greenhouse gases that are causing global climate change. The root cause of the increase in the transport sector's carbon curve is mainly the greater number of individually owned cars. Development in these cities is geared towards economic progress, while environmental sustainability is ignored. Infrastructure projects focus on road expansion, electrification, and more parking spaces, which lead to more carbon emissions, traffic congestion, and air pollution. Recent development plans for the city of Nairobi concentrate on road expansion, with little priority given to electric train solutions. Vision 2030, Kenya's development guide, has shed some light on the city with numerous road expansion projects. This chapter seeks to realize the following objectives: (1) to assess the current transport situation in Nairobi; (2) to review the green transport solutions being undertaken in the city; (3) to give an overview of alternative green transport solutions; and (4) to provide a green transportation framework matrix. This preliminary study will utilize primary and secondary data, mainly through desktop research and analysis of literature, books, magazines, and online information. This forms the basis for formulating approaches to be incorporated into the green transportation framework matrix of the main study report. The main goal is the achievement of a practical green transportation system for implementation by the City County of Nairobi, to reduce carbon emissions and congestion and to promote environmental sustainability.

Keywords: cities, transport, Nairobi, green technologies

Procedia PDF Downloads 321