Search results for: delays resulting from two separate causes at the same time
21004 Modelling Heat Transfer Characteristics in the Pasteurization Process of Medium Long Necked Bottled Beers
Authors: S. K. Fasogbon, O. E. Oguegbu
Abstract:
Pasteurization is one of the most important steps in the preservation of beer products; it improves shelf life by inactivating almost all the spoilage organisms present. However, it is always difficult to determine the slowest heating zone, the temperature profile and the pasteurization units inside bottled beer during pasteurization, and the problem has therefore attracted significant experimental and ANSYS Fluent studies. This work developed a computational fluid dynamics (CFD) model using COMSOL Multiphysics. The model was simulated to determine the slowest heating zone, the temperature profile and the pasteurization units inside the bottled beer during the pasteurization process, and the results were compared with existing data in the literature. The results showed that the location and size of the slowest heating zone depend on the time-temperature combination of each zone. They also showed that the temperature profile of the bottled beer is affected by natural convection resulting from density variation during the pasteurization process, and that the pasteurization units increase with time subject to the temperature reached by the beer. Although the results of this work agreed with the literature on the slowest heating zone and temperature profiles, the pasteurization-unit results did not agree; this discrepancy was attributed to the bottle geometry, specific heat capacity and density of the beer in question. The work concludes that, for effective pasteurization, the spray water temperature and the time spent by the bottled product in each of the pasteurization zones need to be optimized.
Keywords: modeling, heat transfer, temperature profile, pasteurization process, bottled beer
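The pasteurization-unit bookkeeping the abstract describes can be sketched numerically. The formula below is the conventional brewing definition, PU = Δt · 1.393^(T − 60) with Δt in minutes and T the beer temperature in °C; treating it as the model's definition is an assumption, since the abstract does not state which one was used:

```python
def pasteurization_units(temps_c, dt_min=1.0, ref_c=60.0, base=1.393):
    """Accumulate pasteurization units over a temperature history.

    Assumes the conventional brewing definition PU = dt * 1.393**(T - 60),
    with dt in minutes and T in degrees Celsius (an assumption, not taken
    from the abstract itself).
    """
    return sum(dt_min * base ** (t - ref_c) for t in temps_c)

# Toy temperature history of the slowest heating zone, one reading per minute:
history = [50, 55, 58, 60, 62, 62, 61, 58]
pu = pasteurization_units(history)
```

Accumulating minute by minute like this reproduces the abstract's observation that PU grows with time subject to the temperature reached: minutes spent above 60 °C dominate the sum.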
Procedia PDF Downloads 203
21003 Influence of Variable Calcium Content on Mechanical Properties of Geopolymer Synthesized at Different Temperature and Moisture Conditions
Authors: Suraj D. Khadka, Priyantha W. Jayawickrama
Abstract:
In the search for a sustainable construction material, geopolymer has been investigated over the past decades to evaluate its advantages over conventional products. Synthesis of geopolymer requires a source of aluminosilicate mixed with sodium hydroxide and sodium silicate in proportions that maintain a Si/Al molar ratio of 1-3 and a Na/Al molar ratio of unity. A comprehensive geopolymer study was performed with metakaolin and Class C fly ash as primary aluminosilicate sources. The synthesized geopolymer was analyzed for time-dependent viscosity, setting period and strength at varying initial moisture content, curing temperature and humidity. Different concentrations of Ca(OH)₂ and CaSO₄.2H₂O were added to vary the amount of calcium contained in the synthesized geopolymer, and the influence of calcium content on the unconfined compressive strength was analyzed. Finally, Scanning Electron Microscopy-Energy Dispersive Spectroscopy (SEM-EDS) was performed to investigate the hardened product. It was observed that the fly ash based geopolymer had a shorter setting time and a faster increase in viscosity than the geopolymer synthesized from metakaolin. This was primarily attributed to the higher calcium content, resulting in the formation of calcium silicate hydrates (CSH); SEM-EDS was performed to verify the presence of CSH phases. Spectral analysis of geopolymer prepared with added Ca(OH)₂ and CaSO₄.2H₂O indicated more CSH phases at higher concentrations. Lower concentrations of added calcium favored strength gain, whereas at higher calcium concentrations a decrease in strength was observed. Strength variation was also observed with humidity at the initial curing condition: at 100% humidity, geopolymer with added calcium showed higher strength than samples cured at ambient humidity (40%). The reduction in strength at lower humidity was primarily attributed to the loss of moisture in the specimen through the formation of CSH phases and through evaporation. For low calcium content geopolymers, strength increased with temperature, with the maximum strength observed at 200 °C; samples with higher calcium content, however, demonstrated severe cracking, resulting in low strength at elevated temperatures.
Keywords: calcium silicate hydrates, geopolymer, humidity, Scanning Electron Microscopy-Energy Dispersive Spectroscopy, unconfined compressive strength
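The Si/Al and Na/Al molar ratio targets stated above can be checked with a small mix-design sketch. It assumes the composition is expressed as moles of the oxides SiO2, Al2O3 and Na2O (an assumption about the bookkeeping, not a procedure from the abstract):

```python
def molar_ratios(n_sio2, n_al2o3, n_na2o):
    """Si/Al and Na/Al molar ratios from oxide amounts (mol).

    Each mole of Al2O3 carries 2 mol of Al; each mole of Na2O carries
    2 mol of Na; each mole of SiO2 carries 1 mol of Si.
    """
    n_al = 2.0 * n_al2o3
    n_si = n_sio2
    n_na = 2.0 * n_na2o
    return n_si / n_al, n_na / n_al

# Hypothetical mix: 4 mol SiO2, 1 mol Al2O3, 1 mol Na2O
si_al, na_al = molar_ratios(n_sio2=4.0, n_al2o3=1.0, n_na2o=1.0)
# si_al = 2.0 (inside the 1-3 window), na_al = 1.0 (unity, as targeted)
```

Such a check is useful when dosing the sodium hydroxide and sodium silicate solutions so that the activator keeps Na/Al at unity while the aluminosilicate source sets Si/Al.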
Procedia PDF Downloads 127
21002 Voyage Analysis of a Marine Gas Turbine Engine Installed to Power and Propel an Ocean-Going Cruise Ship
Authors: Mathias U. Bonet, Pericles Pilidis, Georgios Doulgeris
Abstract:
A gas turbine-powered cruise liner is scheduled to transport pilgrim passengers from Lagos, Nigeria to the Islamic port city of Jeddah in Saudi Arabia. Since the gas turbine is an air-breathing machine, changes in the density and/or mass flow at the compressor inlet due to variations in weather conditions degrade the performance of the power plant during the voyage. In practice, all deviations from the reference atmospheric conditions of 15 °C and 1.013 bar tend to affect the power output and other thermodynamic parameters of the gas turbine cycle. Therefore, this paper evaluates how a simple cycle marine gas turbine power plant would react under a variety of scenarios that may be encountered during a voyage as the ship sails across the Atlantic Ocean and the Mediterranean Sea before arriving at its designated port of discharge. The assessment also covers the effect of varying aerodynamic and hydrodynamic conditions that deteriorate the efficient operation of the propulsion system through the increase in resistance resulting from projected levels of ship hull fouling. The investigated passenger ship is designed to run at a service speed of 22 knots and cover a distance of 5787 nautical miles. The performance evaluation consists of three separate voyages covering a variety of weather conditions in the winter, spring and summer seasons. Real-time daily temperatures and sea states for the selected transit route were obtained and used to simulate the voyage under the aforementioned operating conditions. Changes in engine firing temperature, power output and total fuel consumed per voyage, among other performance variables, were separately predicted under both calm and adverse weather conditions. The collated data were obtained online from the UK Meteorological Office and UK Hydrographic Office websites, with the Beaufort scale adopted for determining the magnitude of sea waves resulting from rough weather. The gas turbine performance and voyage analysis were carried out using the integrated Cranfield-University-developed computer codes 'Turbomatch' and 'Poseidon'. The project aims at developing a method for predicting the off-design behavior of the marine gas turbine when installed and operated as the main prime mover for both propulsion and powering of all other auxiliary services onboard a passenger cruise liner. Furthermore, it is a techno-economic and environmental assessment that seeks to enable the forecast of the marine gas turbine part- and full-load performance as it relates to the fuel requirement for a complete voyage.
Keywords: cruise ship, gas turbine, hull fouling, performance, propulsion, weather
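The effect of ambient deviations from the ISO reference conditions is conventionally handled with the non-dimensional correction parameters θ = T/288.15 K and δ = p/1.013 bar. The sketch below is that textbook correction only, not the Turbomatch/Poseidon model used in the paper:

```python
T_REF_K = 288.15   # 15 degC, ISO reference temperature
P_REF_BAR = 1.013  # ISO reference pressure

def corrected_power(power_mw, t_amb_k, p_amb_bar):
    """Refer a measured power output back to ISO day conditions using the
    conventional theta/delta correction (a screening simplification, not
    the engine model of the paper)."""
    theta = t_amb_k / T_REF_K
    delta = p_amb_bar / P_REF_BAR
    return power_mw / (delta * theta ** 0.5)

# Low ambient pressure reduces inlet air density and hence measured output;
# dividing by delta < 1 refers the figure back up to the ISO-day level.
p_iso = corrected_power(power_mw=20.0, t_amb_k=288.15, p_amb_bar=0.95)
```

In voyage terms, applying the correction day by day separates the weather-driven part of the power variation from genuine engine deterioration.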
Procedia PDF Downloads 165
21001 Deep Learning Based Unsupervised Sport Scene Recognition and Highlights Generation
Authors: Ksenia Meshkova
Abstract:
With the increasing amount of multimedia data, it is very important to automate and speed up the process of obtaining metadata. This process means not just recognition of some object or its movement, but recognition of the entire scene rather than separate frames, with timeline segmentation as the final result. Labeling datasets is time consuming; besides, attributing characteristics to particular scenes is clearly difficult due to their nature. In this article, we consider the application of autoencoders to unsupervised scene recognition and clustering based on interpretable features, and then focus on the particular types of autoencoders relevant to our study. We take a look at the specificity of deep learning related to information theory and rate-distortion theory, and describe solutions addressing the poor interpretability of deep learning in media content processing. In conclusion, we present the results of a custom framework, based on autoencoders, that is capable of the scene recognition studied above, with highlight generation based on this recognition. We do not describe the mathematics of neural networks in detail, but clarify the necessary concepts and pay attention to important nuances.
Keywords: neural networks, computer vision, representation learning, autoencoders
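The downstream step the abstract describes, clustering per-frame representations and compressing them into timeline segments, can be sketched with toy data. The 2-D "embeddings" and centroids below are stand-ins; in the framework they would come from the autoencoder's bottleneck:

```python
def nearest_centroid(vec, centroids):
    """Index of the closest centroid by squared Euclidean distance."""
    dists = [sum((v - c) ** 2 for v, c in zip(vec, cen)) for cen in centroids]
    return dists.index(min(dists))

def segment_timeline(embeddings, centroids):
    """Label each frame by its nearest centroid, then merge consecutive
    frames with the same label into (start, end, label) scene segments."""
    labels = [nearest_centroid(e, centroids) for e in embeddings]
    segments, start = [], 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[start]:
            segments.append((start, i - 1, labels[start]))
            start = i
    return segments

# Toy frame embeddings: two visually distinct scenes, then a cut back to the first.
frames = [(0.1, 0.0), (0.2, 0.1), (0.9, 1.0), (1.0, 0.9), (0.0, 0.2)]
scenes = segment_timeline(frames, centroids=[(0.0, 0.0), (1.0, 1.0)])
# scenes -> [(0, 1, 0), (2, 3, 1), (4, 4, 0)]
```

The merge step is what turns per-frame recognition into the timeline segmentation named as the final result; highlight generation can then pick segments by label or length.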
Procedia PDF Downloads 127
21000 Comparative Study of Globalization and Homogenous Society: South Korea and Greek Society Reaction to Foreign Culture
Authors: Putri Mentari Racharjo
Abstract:
The development of current technology is simplifying the globalization process. Easier globalization and mobilization increase interactions among individuals and societies in different countries, and it is also easier for a foreign culture to enter a country and create changes in its society. Differences brought by a foreign culture will most likely affect any society. A heterogeneous society, having various cultures and being used to differences, will find it easier to accept a new culture as long as that culture is not contrary to its essential values. For a homogeneous society, however, with only one language and culture, it will take a longer adjustment time to fully accept a new culture, and there will be a tendency to react more negatively. Greece and South Korea are examples of homogeneous societies. Greece, a destination country for immigrants, is having a hard time adjusting to the arrival of many immigrants with many cultures; there are various cases of discrimination against immigrants in Greece, where Greek society cannot fully accept the culture brought by immigrants. South Korea, newly popular through K-pop and K-dramas, is attracting people from all over the world. However, the homogeneous South Korean society is also having a hard time fully accepting foreign cultures, resulting in many cases of discrimination based on race and culture in South Korea. Using a qualitative method through a case study and literature review, this article discusses Greek and South Korean societies' reactions to new cultures as an effect of globalization.
Keywords: foreign culture, globalization, greece, homogenous society, South Korea
Procedia PDF Downloads 334
20999 Evaluation of Three Digital Graphical Methods of Baseflow Separation Techniques in the Tekeze Water Basin in Ethiopia
Authors: Alebachew Halefom, Navsal Kumar, Arunava Poddar
Abstract:
The purpose of this work is to specify the parameter values and the base flow index (BFI), and to rank the methods that should be used for base flow separation. Three different digital graphical approaches were chosen and used in this study for the purpose of comparison. Daily time series discharge data were collected from the site for a period of 30 years (1986 up to 2015) and used to evaluate the algorithms. In order to separate the base flow and the surface runoff, the daily recorded streamflow (m³/s) data were used to calibrate the procedures and obtain parameter values for the basin. The performance of the methods was assessed using the standard error (SE), the coefficient of determination (R²), the flow duration curve (FDC) and baseflow indexes. The findings indicate that, in general, each strategy can be used worldwide to differentiate base flow; however, the Sliding Interval Method (SIM) performs significantly better than the other two techniques in this basin. The average base flow index was calculated to be 0.72 using the local minimum method, 0.76 using the fixed interval method, and 0.78 using the sliding interval method.
Keywords: baseflow index, digital graphical methods, streamflow, Emba Madre Watershed
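The sliding-interval idea can be sketched in a few lines: the baseflow on each day is taken as the minimum discharge within a window centred on that day, and the BFI is the ratio of baseflow volume to total flow volume. The window half-width is left as a free parameter here; in the standard HYSEP formulation it is derived from the drainage area, which this sketch does not reproduce:

```python
def sliding_interval_baseflow(q, half_width):
    """Baseflow via the sliding-interval idea: for each day, take the
    minimum discharge within a window of (2*half_width + 1) days
    centred on that day (clipped at the record ends)."""
    n = len(q)
    base = []
    for i in range(n):
        lo = max(0, i - half_width)
        hi = min(n, i + half_width + 1)
        base.append(min(q[lo:hi]))
    return base

def baseflow_index(q, base):
    """BFI = baseflow volume / total streamflow volume."""
    return sum(base) / sum(q)

# Toy daily discharge record (m3/s) with one storm peak:
q = [5.0, 8.0, 20.0, 12.0, 7.0, 6.0, 5.5, 9.0, 6.0, 5.0]
base = sliding_interval_baseflow(q, half_width=2)
bfi = baseflow_index(q, base)
```

Because the separated baseflow can never exceed the recorded discharge, the BFI always lands between 0 and 1, which is the scale on which the 0.72-0.78 values above are reported.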
Procedia PDF Downloads 79
20998 Time Management in the Public Sector in Nigeria
Authors: Sunny Ewankhiwimen Aigbomian
Abstract:
Time is a scarce resource, and in everything we do, time is required to accomplish any given task. The need for this presentation is predicated on the way the majority of Nigerians, especially public sector operators, see "time management". Time as a resource cannot be regained if lost or managed badly. As a significant aspect of human life, it should be handled with diligence and utmost seriousness if the public sector is to function as a coordinated entity. In our homes, private lives and offices, we schedule different things to ensure that they do not go unexpectedly wrong. When it comes to service delivery on the part of government, it ought to be taken more seriously, because government is all about effective and efficient service delivery, and time is a significant variable for successful accomplishment. There is a need for the Nigerian government to re-examine time management in its public sector with a view to repositioning the sector to compete well with other public sectors in the world. The peculiarities of time management in the public sector in the Nigerian context are examined, and some useful recommendations are proffered.
Keywords: Nigeria, public sector, time management, task
Procedia PDF Downloads 99
20997 Aquinas Be Damned: Tension between Nothingness and Suffering
Authors: Elizabeth Latham
Abstract:
Aquinas has long been revered by the Catholic Church as one of the greatest theologians of all time. His most well-known and widely respected theological work, the Summa Theologica, has been referenced by countless members of the clergy in support of arguments for and about the existence of God. It is surprising, then, and important that one component of his ontological arguments seems to contradict a precept upheld by the Catechism, the Catholic Church's comprehensive document detailing its theological positions and laws. In the Summa Theologica, Thomas Aquinas argued that God's eternal existence is both an observable and a necessary quality. In the Catechism, the Catholic Church holds that souls in Hell are separated from God, and only souls in Heaven are like him. After introducing research on philosophical psychology and the natures of consciousness and pain, this paper comes to the conclusion that in order to reconcile the theology of the Catholic Church at large with that of Thomas Aquinas, one must somehow solve the following problem: if a soul must exist eternally to suffer eternally, it must be like God; and, if a soul is in Hell, it is completely separate from God and not like him at all. Thomas Aquinas deviates at this point from the current theological holdings of the Catholic Church, and this apparent discrepancy must be resolved if the Church hopes to use him going forward as a standard for natural theology.
Keywords: aquinas, catholic catechism, consciousness, philosophical psychology, summa theologica
Procedia PDF Downloads 211
20996 Assessment of Artists’ Socioeconomic and Working Conditions: The Empirical Case of Lithuania
Authors: Rusne Kregzdaite, Erika Godlevska, Morta Vidunaite
Abstract:
The main aim of this research is to explore existing methodologies for studying the artistic labour force and to construct an assessment model of artists' socio-economic and creative conditions. Artists have dual aims in their creative working process: 1) income and 2) artistic self-expression. The valuation of their conditions takes both sides into consideration: the factors related to income, and the satisfaction with the creative process and its result. The problem addressed in the study is the combination of tangible and intangible criteria used to assess artists' creative conditions. The proposed model includes objective factors (working time, income, etc.) and subjective factors (salary covering essential needs, self-satisfaction). Other intangible indicators are also taken into account: the impact on the common culture, social values, and the possibility to receive awards and to represent the country in the international market. The empirical model consists of 59 separate indicators, grouped into eight categories. The deviation of each indicator from the general evaluation allows for identifying the strongest and the weakest components of artists' conditions.
Keywords: artist conditions, artistic labour force, cultural policy, indicator, assessment model
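The deviation step at the end of the abstract, comparing each group of indicators against the general evaluation, can be sketched as below. The category names and 0-1 normalized scores are hypothetical; the model's actual 59 indicators and eight categories are not reproduced:

```python
def category_deviations(scores_by_category):
    """Mean score per category minus the overall mean across all
    indicators; negative values flag the weakest components."""
    all_scores = [s for scores in scores_by_category.values() for s in scores]
    overall = sum(all_scores) / len(all_scores)
    return {cat: sum(s) / len(s) - overall
            for cat, s in scores_by_category.items()}

# Hypothetical normalized indicator scores in three of the eight categories:
devs = category_deviations({
    "income":          [0.40, 0.50, 0.45],
    "self_expression": [0.80, 0.70],
    "recognition":     [0.60, 0.60],
})
weakest = min(devs, key=devs.get)
```

Ranking categories by these deviations is one simple way to surface the strongest and weakest components of artists' conditions that the model is meant to identify.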
Procedia PDF Downloads 152
20995 Analyzing the Impact of DCF and PCF on WLAN Network Standards 802.11a, 802.11b, and 802.11g
Authors: Amandeep Singh Dhaliwal
Abstract:
Networking solutions, particularly wireless local area networks, have revolutionized technological advancement. Wireless Local Area Networks (WLANs) have gained a lot of popularity as they provide location-independent network access between computing devices. A number of access methods are used in wireless networks, among which DCF and PCF are the fundamental ones. This paper emphasizes the impact of the DCF and PCF access mechanisms on the performance of the IEEE 802.11a, 802.11b and 802.11g standards. Performance is evaluated between these three standards using both access mechanisms on the basis of various parameters, viz. throughput, delay and load. The analysis revealed superior throughput performance with low delays for the 802.11g standard as compared to the 802.11a/b standards, using both the DCF and PCF access methods.
Keywords: DCF, IEEE, PCF, WLAN
Procedia PDF Downloads 425
20994 Time "And" Dimension(s) - Visualizing the 4th and 4+ Dimensions
Authors: Siddharth Rana
Abstract:
As we know so far, there are 3 dimensions that we are capable of interpreting and perceiving, and there is a 4th dimension, called time, about which we don’t know much yet. We, as humans, live in the 4th dimension, not the 3rd. We travel 3 dimensionally but cannot yet travel 4 dimensionally; perhaps if we could, then visiting the past and the future would be like climbing a mountain or going down a road. So far, we humans are not even capable of imagining any higher dimensions than the three dimensions in which we can travel. We are the beings of the 4th dimension; we are the beings of time; that is why we can travel 3 dimensionally; however, if, say, there were beings of the 5th dimension, then they would easily be able to travel 4 dimensionally, i.e., they could travel in the 4th dimension as well. Beings of the 5th dimension can easily time travel. However, beings of the 4th dimension, like us, cannot time travel because we live in a 4-D world, traveling 3 dimensionally. That means to ever do time travel, we just need to go to a higher dimension and not only perceive it but also be able to travel in it. However, traveling to the past is not very possible, unlike traveling to the future. Even if traveling to the past were possible, it would be very unlikely that an event in the past would be changed. In this paper, some approaches are provided to define time, our movement in time to the future, some aspects of time travel using dimensions, and how we can perceive a higher dimension.
Keywords: time, dimensions, String theory, relativity
Procedia PDF Downloads 107
20993 Substitution of Phosphate with Liquid Smoke as a Binder on the Quality of Chicken Nugget
Authors: E. Abustam, M. Yusuf, M. I. Said
Abstract:
One of the functional properties of meat is the decrease of water holding capacity (WHC) during rigor mortis: at pre-rigor, WHC is higher than at post-rigor. The decline of WHC has implications for other functional properties such as cooking loss and yield, resulting in lower elasticity and compactness of processed meat products. In many cases, the addition of phosphate to the meat will increase functional properties such as WHC, and liquid smoke is also known to increase the WHC of fresh meat. For food safety reasons, liquid smoke in the present study was used as a substitute for phosphate in the production of chicken nuggets. This study aimed to determine the effect of substituting phosphate with liquid smoke on the quality of nuggets made from post-rigor chicken thigh and breast. The study was arranged as a completely randomized design with a 2×3 factorial pattern and three replications. Factor 1 was the thigh and breast parts of the chicken, and factor 2 was the level of liquid smoke substituted for phosphate (0%, 50%, and 100%). Post-rigor thigh and breast from broilers aged 40 days were used as the main raw materials in making the nuggets. Auxiliary materials besides the meat were phosphate, liquid smoke at a concentration of 10%, tapioca flour, salt, eggs and ice. The variables measured were flexibility, shear force value, cooking loss, elasticity and preference. The results showed that substituting phosphate with 100% liquid smoke produced high-quality nuggets, and the breast meat gave higher-quality nuggets than the thigh. This is indicated by high elasticity, low shear force value, low cooking loss, and a high level of preference. It can be concluded that liquid smoke can be used as a binder in making nuggets from post-rigor chicken.
Keywords: liquid smoke, nugget quality, phosphate, post-rigor
Procedia PDF Downloads 241
20992 The Temporal Dimension of Narratives: A Construct of Qualitative Time
Authors: Ani Thomas
Abstract:
Every narrative is a temporal construct, and every narrative creates a qualitative experience of time for the viewer. The paper argues for the concept of a qualified time that emerges from the interaction between the narrative and the audience, and challenges the conventional understanding of narrative time as either story time, real time or discourse time. Looking at narratives through the medium of cinema, the study examines how narratives create and manipulate duration or durée, the qualitative experience of time as theorized by Henri Bergson. The paper further analyzes how cinema, and by extension narratives, are nothing but durée, and the filmmaker the artist of durée, shaping the perception and emotions of the viewer through the construction and manipulation of durée. The paper draws on cinematic works to demonstrate how filmmakers use techniques such as editing, sound, composition and production design to create various modes of durée that challenge, amplify or unsettle the viewer's sense of time. Bringing together the viewer's durée and exploring its interaction with the narrative construct, the paper explores the emergence of a new qualitative time, the narrative durée, that defines the audience experience.
Keywords: cinema, time, bergson, duree
Procedia PDF Downloads 148
20991 The Effect of the Archeological and Architectural Nature of the Cities on the Design of Public Transportation Vehicles
Authors: Mohamed Moheyeldin Mahmoud
Abstract:
Various Islamic, Coptic and Jewish archeological places are located in many Egyptian neighborhoods, such as Alsayeda Zainab, Aldarb Alahmar and Algammaleya, where they are daily exposed to great traffic intensity causing vibrations. Vibrations are among the most important challenges that face archeological buildings and threaten their survival. Their impact varies according to the nature of the soil, the nature and condition of the building, the distance to the source of vibration, and the period of exposure. Traffic vibrations are among the most common types of vibrations and have the greatest impact on buildings and archaeological installations. They result from the way vehicles interact with different types of roads, which vary according to the shape, nature and type of obstacles. Other elements concerning the vehicle itself, such as speed, weight and load, have a direct impact on the resulting vibrations and cannot be neglected. The research aims to determine some of the requirements that must be observed when designing the public means of transport operating in archaeological areas, in order to preserve the archaeological nature of the place. The research concludes that lightweight, slow-moving vehicles (25-50 km/h at maximum) with a multi-leaf steel spring suspension system instead of an air-bag one should be used in order to reduce the generated vibrations that could destroy the archeological buildings. Isolation layers could be used in the engine chamber in order to reduce the noise causing vibrations, and electrically operated engines powered by solar photovoltaic cells could be used instead of combustion ones in order to reduce the resulting engine noise.
Keywords: archeological, design, isolation layers, suspension, vibrations
Procedia PDF Downloads 191
20990 Studying the Evolution of Soot and Precursors in Turbulent Flames Using Laser Diagnostics
Authors: Muhammad A. Ashraf, Scott Steinmetz, Matthew J. Dunn, Assaad R. Masri
Abstract:
This study focuses on the evolution of soot and soot precursors in three different piloted turbulent diffusion flames. The fuel compositions are as follows: flame A (ethylene/nitrogen, 2:3 by volume), flame B (ethylene/air, 2:3 by volume), and flame C (pure methane). These flames are stabilized using a 4 mm diameter jet surrounded by a pilot annulus with an outer diameter of 15 mm. The pilot issues combustion products from stoichiometric premixed flames of hydrogen, acetylene, and air. In all cases, the jet Reynolds number is 10,000, and air flows in the coflow stream at a velocity of 5 m/s. Time-resolved laser-induced fluorescence (LIF) is collected in two wavelength bands, in the visible (445 nm) and UV (266 nm) regions, along with laser-induced incandescence (LII). The combined results are employed to study the concentration, size, and growth of soot and its precursors. A set of four fast photo-multiplier tubes is used to record emission data in the temporal domain. A 266 nm laser pulse preferentially excites smaller nanoparticles, which emit a fluorescence spectrum that is analysed to track the presence, evolution, and destruction of nanoparticles. A 1064 nm laser pulse excites sufficiently large soot particles, and the resulting incandescence is collected at 1064 nm. At downstream and outer radial locations, intermittency becomes a relevant factor; the data collected in turbulent flames are therefore conditioned to account for intermittency, so that the resulting mean profiles for scattering, fluorescence, and incandescence are shown for the events that contain traces of soot. It is found that in the upstream regions of the ethylene-air and ethylene-nitrogen flames, the presence of soot precursors is rather similar. However, further downstream, soot concentration grows larger in the ethylene-air flame.
Keywords: laser induced incandescence, laser induced fluorescence, soot, nanoparticles
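The intermittency conditioning mentioned above amounts to averaging only over samples that contain soot. A minimal sketch, assuming a simple fixed detection threshold on the LII signal (the actual thresholding criterion is not given in the abstract):

```python
def conditioned_mean(samples, threshold):
    """Intermittency-conditioned statistics: the mean over 'sooting'
    samples (signal above threshold) and the fraction of such samples
    (the intermittency factor)."""
    sooting = [s for s in samples if s > threshold]
    if not sooting:
        return 0.0, 0.0
    return sum(sooting) / len(sooting), len(sooting) / len(samples)

# Toy LII record at an outer radial location: mostly soot-free, a few sooting events.
lii = [0.0, 0.1, 3.2, 0.0, 2.8, 0.1, 0.0, 4.0]
mean_cond, intermittency = conditioned_mean(lii, threshold=0.5)
```

Reporting the conditional mean together with the intermittency factor avoids the unconditional mean being dragged toward zero by the many soot-free samples at intermittent locations.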
Procedia PDF Downloads 146
20989 Changing Emphases in Mental Health Research Methodology: Opportunities for Occupational Therapy
Authors: Jeffrey Chase
Abstract:
Historically, the profession of Occupational Therapy was closely tied to the treatment of those suffering from mental illness; more recently, and especially in the U.S., the percentage of OTs identifying as working in the mental health area has declined significantly, despite the estimate that by 2020 behavioral health disorders will surpass physical illnesses as the major cause of disability worldwide. In the U.S., less than 10% of OTs identify themselves as working with the mentally ill and/or practicing in mental health settings. Such a decline has implications both for those suffering from mental illness and for the profession of Occupational Therapy. One reason cited for the decline of OT in mental health has been the limited research in the discipline addressing mental health practice. Despite significant advances in technology and growth in the field of neuroscience, major institutions and funding sources such as the National Institute of Mental Health (NIMH) have noted that research into the etiology and treatment of mental illness has met with limited success over the past 25 years. One major reason posited by NIMH is that research has been limited by how we classify individuals, which is based mostly on what is observable. A new classification system being developed by NIMH, the Research Domain Criteria (RDoC), aims to look beyond mere descriptors of disorders for common neural, genetic, and physiological characteristics that cut across multiple supposedly separate disorders. The hope is that by classifying individuals along RDoC measures, both reliability and validity will improve, resulting in greater advances in the field. Multiple disciplines across many different settings will be required for RDoC or similar classification systems to be developed.
During this shift in research methodology, OT has an opportunity to reassert itself in the research and treatment of mental illness, both by developing new ways to classify individuals more validly and by documenting the legitimacy of previously ill-defined and poorly validated disorders such as sensory integration.
Keywords: global mental health and neuroscience, research opportunities for ot, greater integration of ot in mental health research, research and funding opportunities, research domain criteria (rdoc)
Procedia PDF Downloads 275
20988 Effects of Applied Pressure and Heat Treatment on the Microstructure of Squeeze Cast Al-Si Alloy Were Examined
Authors: Mohamed Ben Amar, Henda Barhoumi, Hokia Siala, Foued Elhalouani
Abstract:
The present contribution is a purely experimental investigation of the effect of squeeze casting on the microstructural and mechanical properties of Al-Si alloys destined for the automotive industry. Accordingly, we carried out all the thermal treatments ourselves, consisting of solution treatment at 540°C for 8 h and aging at 160°C for 4 h. The various thermal treatments were carried out in order to monitor the formation and dissolution processes accompanying the solid-state phase transformations, as well as the resulting changes in the mechanical properties. Examination of the micrographs of the aluminum alloys reveals the dominant presence of dendrites. Concerning the mechanical characteristics, the Vickers micro-hardness increases as a function of the applied pressure, and the heat treatment likewise improves mechanical properties such as micro-hardness. The curves are explained in terms of structural hardening resulting from the formation of the various compounds.
Keywords: squeeze casting, process parameters, heat treatment, ductility, microstructure
Procedia PDF Downloads 431
20987 Catalytic Soot Gasification in Single and Mixed Atmospheres of CO2 and H2O in the Presence of CO and H2
Authors: Yeidy Sorani Montenegro Camacho, Samir Bensaid, Nunzio Russo, Debora Fino
Abstract:
LiFeO2 nano-powders were prepared via solution combustion synthesis (SCS) method and were used as carbon gasification catalyst in a reduced atmosphere. The gasification of soot with CO2 and H2O in the presence of CO and H2 (syngas atmosphere) were also investigated under atmospheric conditions using a fixed-bed micro-reactor placed in an electric, PID-regulated oven. The catalytic bed was composed of 150 mg of inert silica, 45 mg of carbon (Printex-U) and 5 mg of catalyst. The bed was prepared by ball milling the mixture at 240 rpm for 15 min to get an intimate contact between the catalyst and soot. A Gas Hourly Space Velocity (GHSV) of 38.000 h-1 was used for the tests campaign. The furnace was heated up to the desired temperature, a flow of 120 mL/min was sent into the system and at the same time the concentrations of CO, CO2 and H2 were recorded at the reactor outlet using an EMERSON X-STREAM XEGP analyzer. Catalytic and non-catalytic soot gasification reactions were studied in a temperature range of 120°C – 850°C with a heating rate of 5 °C/min (non-isothermal case) and at 650°C for 40 minutes (isothermal case). Experimental results show that the gasification of soot with H2O and CO2 are inhibited by the H2 and CO, respectively. The soot conversion at 650°C decreases from 70.2% to 31.6% when the CO is present in the feed. Besides, the soot conversion was 73.1% and 48.6% for H2O-soot and H2O-H2-soot gasification reactions, respectively. Also, it was observed that the carbon gasification in mixed atmosphere, i.e., when simultaneous carbon gasification with CO2 and steam take place, with H2 and CO as co-reagents; the gasification reaction is strongly inhibited by CO and H2, as well has been observed in single atmospheres for the isothermal and non-isothermal reactions. 
Further, it was observed that when CO2 and H2O react with carbon at the same time, there is a passive cooperation of steam and carbon dioxide in the gasification reaction, meaning that the two gases operate on separate active sites without influencing each other. Finally, despite the strongly reducing operating conditions, it was demonstrated that 32.9% of the initial carbon was gasified using the LiFeO2 catalyst, while in the non-catalytic case only 8% of the soot was gasified at 650°C.Keywords: soot gasification, nanostructured catalyst, reducing environment, syngas
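As a rough illustration of how conversion figures like those above can be obtained from the outlet analyzer readings, the carbon-bearing gases (CO and CO2) can be integrated over time and related to the initial soot mass. The sketch below is not taken from the study: the function name, the ideal-gas molar volume, and the assumption that the measured fractions represent net carbon release from the soot are all illustrative.

```python
# Illustrative sketch (not from the study): estimate soot conversion by
# integrating the carbon-bearing outlet gases over time. Assumes the
# CO/CO2 mole fractions are net of the feed and near-ambient ideal gas.

def soot_conversion(times_min, co_frac, co2_frac, flow_ml_min, carbon_mg):
    """Trapezoid-integrate the molar flow of CO + CO2 and express the
    gasified carbon as a fraction of the initial soot mass."""
    MOLAR_VOL_ML = 24000.0  # mL/mol, ideal gas near ambient (assumption)
    M_C = 12.011            # g/mol, carbon
    n_dot = flow_ml_min / MOLAR_VOL_ML  # total molar flow, mol/min
    gasified_mol = 0.0
    for i in range(1, len(times_min)):
        dt = times_min[i] - times_min[i - 1]
        rate_prev = (co_frac[i - 1] + co2_frac[i - 1]) * n_dot
        rate_curr = (co_frac[i] + co2_frac[i]) * n_dot
        gasified_mol += 0.5 * (rate_prev + rate_curr) * dt  # mol C in interval
    return gasified_mol * M_C * 1000.0 / carbon_mg  # fraction of initial C
```

With the 120 mL/min flow and 45 mg soot bed of the paper, such an integral of the analyzer traces is what a conversion percentage like 70.2% would summarize.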
Procedia PDF Downloads 26120986 Seismic Impact and Design on Buried Pipelines
Authors: T. Schmitt, J. Rosin, C. Butenweg
Abstract:
Seismic design of buried pipeline systems for energy and water supply is important not only for plant and operational safety, but in particular for maintaining the supply infrastructure after an earthquake. Past earthquakes have shown the vulnerability of pipeline systems: after the Kobe earthquake in Japan in 1995, for instance, the water supply in some regions was interrupted for almost two months. The present paper discusses particular issues of seismic wave impacts on buried pipelines, describes calculation methods, proposes approaches, and gives calculation examples. Buried pipelines are exposed to several kinds of seismic effects. This paper considers the effects of transient displacement differences and the resulting stresses within the pipeline due to the wave propagation of the earthquake. Other effects are permanent displacements due to fault rupture displacements at the surface, soil liquefaction, landslides, and seismic soil compaction. The presented model can also be used to calculate fault-rupture-induced displacements. Based on a three-dimensional finite element model, parameter studies are performed to show the influence of parameters such as the incoming wave angle, the wave velocity, the soil depth, and selected displacement time histories. In the computer model, the interaction between the pipeline and the surrounding soil is modeled with non-linear soil springs. A propagating wave is simulated as point loads acting on the pipeline independently in time and space. The resulting stresses are mainly caused by displacement differences between neighboring pipeline segments and by soil-structure interaction. The calculation examples focus on pipeline bends as the most critical parts. Special attention is given to the calculation of long-distance heat pipeline systems, where expansion bends are arranged at regular distances to accommodate movements of the pipeline due to high temperature.
Such expansion bends are usually designed with small bending radii, which in the event of an earthquake lead to high bending stresses at the cross-section of the pipeline. Therefore, Kármán flexibility factors, as well as the stress intensification factors for curved pipe sections, must be taken into account. The seismic verification of the pipeline for wave propagation in the soil can be achieved by checking normative strain criteria. Finally, an interpretation of the results and recommendations are given, taking into account the most critical parameters.Keywords: buried pipeline, earthquake, seismic impact, transient displacement
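The Kármán flexibility and stress intensification factors mentioned above can be illustrated with the standard piping-code expressions (flexibility characteristic h = tR/r², flexibility factor k = 1.65/h, in-plane stress intensification factor i = 0.9/h^(2/3)). The paper does not state which formulation it uses, so the following is only a sketch under that assumption:

```python
# Sketch using the common piping-code (ASME B31-style) expressions for a
# pipe bend; treat this as an illustrative assumption, since the paper
# does not specify its exact formulation.

def bend_factors(wall_t, bend_radius, mean_pipe_radius):
    """Return (flexibility factor k, stress intensification factor i)
    for a smooth pipe bend. All lengths in consistent units."""
    # Flexibility characteristic: h = t * R / r^2
    h = wall_t * bend_radius / mean_pipe_radius ** 2
    k = 1.65 / h            # Karman flexibility factor
    i = 0.9 / h ** (2 / 3)  # in-plane stress intensification factor
    # Codes floor both factors at 1.0 for stiff (large-h) bends.
    return max(k, 1.0), max(i, 1.0)
```

A small bending radius R lowers h and therefore raises both factors, which is exactly why the tightly bent expansion loops become the critical locations under seismic strain.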
Procedia PDF Downloads 18720985 A Proteomic Approach for Discovery of Microbial Cellulolytic Enzymes
Authors: M. S. Matlala, I. Ignatious
Abstract:
Environmental sustainability has taken center stage in human life all over the world. Energy is the most essential component of our life. The conventional sources of energy are non-renewable and have a detrimental environmental impact. Therefore, there is a need to move from conventional to non-conventional, renewable energy sources to satisfy the world’s energy demands. The study aimed at screening for microbial cellulolytic enzymes using a proteomic approach. The objectives were to screen for microbial cellulases with high specific activity and to separate the cellulolytic enzymes using a combination of zymography and two-dimensional (2-D) gel electrophoresis, followed by tryptic digestion, Matrix-Assisted Laser Desorption Ionisation-Time of Flight (MALDI-TOF) analysis, and bioinformatics analysis. Fungal and bacterial isolates were cultured in M9 minimal and Mandel media for a period of 168 hours at 60°C and 30°C with cellobiose and Avicel as carbon sources. Microbial cells were separated from supernatants through centrifugation, and the crude enzyme from the cultures was used for the determination of cellulase activity, zymography, SDS-PAGE, and two-dimensional gel electrophoresis. Five isolates with lytic action on the carbon sources studied were identified: a bacterial strain (BARK) and fungal strains (VCFF1, VCFF14, VCFF17, and VCFF18). Peak cellulase production by the selected isolates was 3.8 U/ml, 2.09 U/ml, 3.38 U/ml, 3.18 U/ml, and 1.95 U/ml, respectively. Two-dimensional gel protein maps resulted in the separation and quantitative expression of different proteins by the microbial isolates. MALDI-TOF analysis and a database search showed that the proteins expressed in this study closely relate to different glycoside hydrolases produced by other microbial species, with an acceptable confidence level of 100%.Keywords: cellulases, energy, two-dimensional gel electrophoresis, matrix-assisted laser desorption ionisation-time of flight, MALDI-TOF MS
Procedia PDF Downloads 13420984 Improvement of Fatigue and Fatigue Corrosion Resistances of Turbine Blades Using Laser Cladding
Authors: Sami I. Jafar, Sami A. Ajeel, Zaman A. Abdulwahab
Abstract:
The turbine blades used in electric power plants are made of low alloy steel type 52. With aging, these blades are subjected to fatigue and, at other times, to fatigue corrosion, owing to their continuous exposure to cyclic rotational stresses in corrosive steam environments. The current research addresses this problem using the laser cladding method for low alloy steel type 52, which re-forms the metallurgical structure and improves the mechanical properties by strengthening the resulting structure. This leads to an increase in fatigue and wear resistance and therefore to a longer service life for these blades.Keywords: fatigue, fatigue corrosion, turbine blades, laser cladding
Procedia PDF Downloads 19920983 Dataset Quality Index: Development of a Composite Indicator Based on Standard Data Quality Indicators
Authors: Sakda Loetpiparwanich, Preecha Vichitthamaros
Abstract:
Nowadays, poor data quality is considered one of the major costs of a data project. A data project with data quality awareness devotes almost as much time to data quality processes, while a data project without such awareness suffers negative impacts on financial resources, efficiency, productivity, and credibility. One of the processes that takes a long time is defining the expectations and measurements of data quality, because expectations differ according to the purpose of each data project. This is especially true for big data projects, which may involve many datasets and stakeholders and therefore require a long time to discuss and define quality expectations and measurements. This study therefore aimed at developing meaningful indicators that describe the overall data quality of each dataset, allowing quick comparison and prioritization. The objectives of this study were to: (1) develop practical data quality indicators and measurements, (2) develop data quality dimensions based on statistical characteristics, and (3) develop a composite indicator that can describe the overall data quality of each dataset. The sample consisted of more than 500 datasets from public sources obtained by random sampling. After the datasets were collected, five steps were followed to develop the Dataset Quality Index (SDQI). First, we defined standard data quality expectations. Second, we identified indicators that can directly measure the data within the datasets. Third, the indicators were aggregated into dimensions using factor analysis. Next, the indicators and dimensions were weighted by the effort of the data preparation process and by usability. Finally, the dimensions were aggregated into the composite indicator. The results of these analyses showed that: (1) the developed indicators and measurements comprise ten indicators; (2) for the data quality dimensions based on statistical characteristics, the ten indicators can be reduced to four dimensions.
(3) With the developed composite indicator, the SDQI can describe the overall quality of each dataset and separate datasets into three levels: Good Quality, Acceptable Quality, and Poor Quality. In conclusion, the SDQI provides an overall, meaningfully composed description of data quality within datasets. It can be used to assess all data in a data project, for effort estimation, and for prioritization. The SDQI also works well with the Agile method, for example by applying it for assessment in the first sprint; after passing the initial evaluation, more specific data quality indicators can be added in the next sprint.Keywords: data quality, dataset quality, data quality management, composite indicator, factor analysis, principal component analysis
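The final aggregation described above — indicator scores averaged into dimensions, dimensions combined by weights, and the index mapped to three quality levels — could be sketched as follows. The function name, the equal-weight example, and the level cut-offs are illustrative assumptions, not values from the study.

```python
# Illustrative sketch of a composite dataset-quality index: indicators are
# averaged into dimensions, dimensions are weight-combined, and the result
# is mapped to three levels. Thresholds and weights are assumptions.

def sdqi(indicator_scores, dim_map, dim_weights, levels=(0.8, 0.5)):
    """indicator_scores: {indicator: score in [0, 1]}
    dim_map: {dimension: [indicator names]}
    dim_weights: {dimension: weight}
    Returns (index value, quality label)."""
    # Step 1: aggregate indicators into dimension scores (simple mean here;
    # the study uses factor analysis for this grouping).
    dims = {d: sum(indicator_scores[n] for n in names) / len(names)
            for d, names in dim_map.items()}
    # Step 2: weighted combination of dimensions into the composite index.
    total_w = sum(dim_weights.values())
    index = sum(dims[d] * dim_weights[d] for d in dims) / total_w
    # Step 3: map the index to the three quality levels.
    good, acceptable = levels
    if index >= good:
        label = "Good Quality"
    elif index >= acceptable:
        label = "Acceptable Quality"
    else:
        label = "Poor Quality"
    return index, label
```

In an Agile setting, running such a function over every candidate dataset in the first sprint gives the quick comparison and prioritization the abstract describes.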
Procedia PDF Downloads 14020982 Flood Monitoring in the Vietnamese Mekong Delta Using Sentinel-1 SAR with Global Flood Mapper
Authors: Ahmed S. Afifi, Ahmed Magdy
Abstract:
Satellite monitoring is an essential tool to study, understand, and map large-scale environmental changes that affect humans, climate, and biodiversity. The Sentinel-1 Synthetic Aperture Radar (SAR) instrument provides a rich collection of data with all-weather capability, short revisit time, and high spatial resolution that can be used effectively in flood management. Floods occur when an overflow of water submerges dry land, and the flooded areas must be distinguished from permanent water bodies. In this study, we use the Global Flood Mapper (GFM), a new Google Earth Engine application that allows users to quickly map floods using Sentinel-1 SAR. The GFM enables users to manually adjust the flood map parameters, e.g., the Z-value thresholds for the VV and VH bands and the elevation and slope mask thresholds. The composite R:G:B image obtained by coupling the Sentinel-1 bands (VH:VV:VH) reduces false classification to a large extent compared to using a single band (e.g., the VH polarization band). The flood mapping algorithm in the GFM and Otsu thresholding are compared against Sentinel-2 optical data, and the results show that the GFM algorithm can overcome the misclassification of a flooded area in An Giang, Vietnam.Keywords: SAR backscattering, Sentinel-1, flood mapping, disaster
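Otsu thresholding, used here as the comparison method, selects from the backscatter histogram the cut that maximizes the between-class variance. A minimal sketch follows; the function name, the binning, and the "flood water is darker" mask convention are assumptions, not details taken from the GFM.

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Otsu's method: return the threshold that maximizes the
    between-class variance of the value histogram."""
    hist, edges = np.histogram(values, bins=nbins)
    p = hist.astype(float) / hist.sum()          # bin probabilities
    centers = 0.5 * (edges[:-1] + edges[1:])     # bin midpoints
    w0 = np.cumsum(p)                            # class-0 (below cut) weight
    mu = np.cumsum(p * centers)                  # cumulative mean
    mu_t = mu[-1]                                # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1.0 - w0))
    sigma_b = np.nan_to_num(sigma_b)             # empty classes -> 0
    return centers[np.argmax(sigma_b)]

# Smooth open water backscatters weakly, so a flood mask would keep the
# low-backscatter side of the cut:
# flood_mask = sar_backscatter_db < otsu_threshold(sar_backscatter_db)
```

For a clearly bimodal water/land backscatter histogram this picks a cut between the two modes automatically, which is why it serves as a natural baseline for the GFM's manually tunable Z-value thresholds.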
Procedia PDF Downloads 10620981 Role-Governed Categorization and Category Learning as a Result from Structural Alignment: The RoleMap Model
Authors: Yolina A. Petrova, Georgi I. Petkov
Abstract:
The paper presents a symbolic model for category learning and categorization called RoleMap. Unlike other models, which implement learning in a separate working mode, role-governed category learning and categorization emerge in RoleMap while it carries out its usual reasoning. The model is based on several basic mechanisms known to reflect the sub-processes of analogy-making. It builds on the assumption that in everyday life people constantly compare what they experience with what they know. Various commonalities between the incoming information (current experience) and the stored information (long-term memory) emerge from these comparisons. Some of these commonalities are considered highly important and are transformed into concepts for further use; this process constitutes category learning. When knowledge is missing from the incoming information (i.e., the perceived object is not yet recognized), the model makes anticipations about what is missing, based on similar episodes from its long-term memory. Various such anticipations may emerge for different reasons; over time, however, only one of them wins and is transformed into a category member. This process constitutes the act of categorization.Keywords: analogy-making, categorization, category learning, cognitive modeling, role-governed categories
Procedia PDF Downloads 14320980 Eco-Index for Assessing Ecological Disturbances Downstream of a Hydropower Project
Authors: Chandra Upadhyaya, Arup Kumar Sarma
Abstract:
In the north-eastern part of India, several hydropower projects are being proposed, and execution has already been initiated for some of them. There are controversies surrounding these constructions. The impact of these dams on the downstream reaches of the rivers needs to be assessed so that the ecosystem and the people living downstream are protected, with the projects redesigned if necessary. This may reduce the stresses on the affected ecosystem and the people living downstream. At present, many index-based ecological methods exist to assess impacts on ecology. However, none of these methods can assess the effects of dam-induced diurnal flow variation downstream. We need an environmental flow methodology based on a hydrological index that can address the effects of dam-induced diurnal flow variation, play an important role in riverine ecosystem management, and provide a qualitative idea of the changes in habitat for aquatic and riparian species.Keywords: ecosystem, environmental flow assessment, entropy, IHA, TNC
Procedia PDF Downloads 38420979 Literary Interpretation and Systematic-Structural Analysis of the Titles of the Works “The Day Lasts More than a Hundred Years”, “Doomsday”
Authors: Bahor Bahriddinovna Turaeva
Abstract:
The article provides a structural analysis of the titles of the famous Kyrgyz writer Chingiz Aitmatov’s works “The Day Lasts More Than a Hundred Years” and “Doomsday”. The author’s creative purpose in naming a work of art, and the role of the elements of the plot and the composition of the novels in revealing the essence of the title, are explained. The criteria important in the naming of the author’s works in different genres are classified, and the titles denoting artistic time and artistic space are studied separately. In world literary studies, the chronotope is regarded as a literary-aesthetic category expressing the scope of interpretation of the universe, the author’s outlook and imagination regarding the foundation of the world, the definition of personages, and the compositional means of expressing the sequence and duration of events. A creative comprehension of the chronotope as a means of arranging the composition of a work, structuring it, and constructing the epic field of the text demands a special approach to understanding the aesthetic character of the work. Since the chronotope includes all the elements of a fictional work, it is impossible to present the plot, composition, conflict, system of characters, and the feelings and moods of the characters without describing the chronotope. In the subsequent development of scientific-theoretical thought worldwide, the chronotope has come to be accepted as one of the poetic means of depicting reality, and as a literary device fundamental to the expression of reality in the compositional construction and illustration of the plot, relying on the writer’s intention and the ideological conception of the literary work. Literary time enables one to comprehend the literary world picture created by the author in terms of the descriptive subject and object of the work.
Therefore, one of the topical tasks of modern Uzbek literary studies is to describe historical evidence, events, the lives of outstanding people, and the chronology of the recent past on the basis of literary time; the works of a certain period, of groups of writers, or of an individual writer are analyzed separately or in a comparative-typological aspect.Keywords: novel, title, chronotope, motive, epigraph, analepsis, structural analysis, plot line, composition
Procedia PDF Downloads 7620978 The Lived Experience of Siblings of Autistic Children: From the Private to the Public Sphere
Authors: Kiana Taghikhan, Shamim Sherafat, Mostafa Taheri
Abstract:
Although many people with autism spectrum disorder around the world face numerous problems and challenges, their condition may also unintentionally affect the lives of the people around them. In this research, the experiences of siblings of autistic children have been investigated in both the public and private spheres of their lives. The "private sphere" includes the experiences of the research participants in socializing with relatives and family, their assignments and responsibilities, and how they spend their leisure time and lead their lifestyle. The "public sphere" includes the experience of their presence in society, such as at university or in the workplace, and any outdoor activities that could have been affected by their sibling’s disorder. The research was conducted using a qualitative method and the in-depth interview technique with siblings of autistic children. The sample comprises 15 individuals selected through theoretical and purposive sampling. Based on the findings, the private and social experiences of these individuals are very different from those of peers who do not have a sibling with autism disorder in the family. The difference is such that it causes them to separate and distance themselves from other members of society and, depending on their particular circumstances, can affect their goals and life opportunities such as employment, marriage, having children, etc.Keywords: autism spectrum disorder, siblings, private sphere, public sphere
Procedia PDF Downloads 3120977 Synthesis and Optimization of Bio Metal-Organic Framework with Permanent Porosity
Authors: Tia Kristian Tajnšek, Matjaž Mazaj, Nataša Zabukovec Logar
Abstract:
Metal-organic frameworks (MOFs), with their specific properties and the possibility of tuning their structure, are excellent candidates for use in the biomedical field. Their advantage lies in large pore surfaces and volumes, as well as in the possibility of using bio-friendly or bioactive constituents. So-called bioMOFs are MOFs constructed from at least one biomolecule (the metal, a small bioactive molecule in the metal clusters, and/or the linker) and intended for bio-application (usually in the field of medicine, most commonly drug delivery). When designing a bioMOF for biomedical applications, some guidelines should be followed for an improved toxicological profile of the material, such as choosing (i) an endogenous/nontoxic metal, (ii) a GRAS (generally recognized as safe) linker, and (iii) nontoxic solvents. The design and synthesis of bioNICS-1 (bioMOF of the National Institute of Chemistry Slovenia – 1) follow all these guidelines. Zinc (Zn) was chosen as an endogenous metal with favorable recommended daily intake (RDI) and LD50 values, and ascorbic acid (vitamin C) was chosen as a GRAS and bioactive linker. With these building blocks, we synthesized the bioNICS-1 material in ethanol using a solvothermal method. The synthesis protocol was further optimized in three separate ways: optimization of (i) the synthesis parameters to improve the synthesis yield, (ii) the input reactant ratio and the addition of specific modulators to produce larger crystals, and (iii) the heating source (conventional, microwave, and ultrasound) to produce nano-crystals. With these optimization strategies, the synthesis yield was increased; larger crystals for structural analysis were prepared by using a suitable species and amount of modulator; and the synthesis protocol was adapted to different heating sources, resulting in the production of bioNICS-1 nano-crystals.
BioNICS-1 was further activated in ethanol and structurally characterized, resolving the crystal structure of the new material.Keywords: ascorbic acid, bioMOF, MOF, optimization, synthesis, zinc ascorbate
Procedia PDF Downloads 14120976 Identification of the Relationship Between Signals in Continuous Monitoring of Production Systems
Authors: Maciej Zaręba, Sławomir Lasota
Abstract:
Understanding the dependencies between the input signals that control a production system and the signals that capture its output is of great importance in intelligent systems. This paper describes a method for identifying the relationship between signals in the continuous monitoring of production systems. The method discovers the correlation between changes in the states derived from the input signals and the resulting changes in the states of the output signals of the production system. The method is able to handle system inertia, which determines the time shift in the relationship between the input and the output.Keywords: manufacturing operation management, signal relationship, continuous monitoring, production systems
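One common way to realize such inertia-aware correlation — offered here only as an illustration, not as the authors' actual method — is to cross-correlate the mean-removed input and output signals and read the time shift off the correlation peak:

```python
import numpy as np

def estimate_lag(input_sig, output_sig):
    """Estimate the time shift (in samples) at which the output best
    correlates with the input, via full cross-correlation.
    A positive result means the output lags the input (system inertia)."""
    x = np.asarray(input_sig, float) - np.mean(input_sig)
    y = np.asarray(output_sig, float) - np.mean(output_sig)
    corr = np.correlate(y, x, mode="full")       # all relative shifts
    return int(np.argmax(corr)) - (len(x) - 1)   # shift of the peak
```

Once the lag is known, the output states can be shifted back by that amount so that cause and effect are compared at matching times before any state-change correlation is computed.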
Procedia PDF Downloads 9320975 Estimation of Train Operation Using an Exponential Smoothing Method
Authors: Taiyo Matsumura, Kuninori Takahashi, Takashi Ono
Abstract:
The purpose of this research is to improve the convenience of waiting for trains at level crossings and stations, and to prevent accidents resulting from forcible entry into level crossings, by providing level crossing users and passengers with information on when the next train will pass through or arrive. In this paper, we proposed methods for estimating train operation by means of an average value method, a variable response smoothing method, and an exponential smoothing method, on the basis of open data, which has low accuracy but for which performance schedules are distributed in real time. We then examined the accuracy of the estimations. The results showed that the application of an exponential smoothing method is valid.Keywords: exponential smoothing method, open data, operation estimation, train schedule
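Simple exponential smoothing, the method found to be valid here, updates the running estimate as s_t = αx_t + (1 − α)s_{t−1}. A minimal sketch follows; the smoothing constant α and the function name are illustrative choices, not taken from the paper.

```python
# Minimal sketch of simple exponential smoothing over a series of noisy
# open-data observations (e.g., observed delays or passage times).
# The smoothing constant alpha is an illustrative assumption.

def exponential_smoothing(observations, alpha=0.3):
    """Return the smoothed series s with s_t = alpha*x_t + (1-alpha)*s_{t-1};
    the last element serves as the one-step-ahead estimate."""
    smoothed = [float(observations[0])]  # initialize with the first value
    for x in observations[1:]:
        smoothed.append(alpha * float(x) + (1 - alpha) * smoothed[-1])
    return smoothed
```

With α close to 1 the estimate tracks the latest (noisy) open-data observation; with α close to 0 it responds slowly to real schedule changes, and this trade-off is what the paper's accuracy comparison against the average-value and variable-response methods probes.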
Procedia PDF Downloads 388