Search results for: heat transport
345 Possibilities to Evaluate the Climatic and Meteorological Potential for Viticulture in Poland: The Case Study of the Jagiellonian University Vineyard
Authors: Oskar Sekowski
Abstract:
Current global warming causes changes in the traditional zones of viticulture worldwide. During the 20th century, the average global air temperature increased by 0.89 °C. Models of climate change indicate that viticulture, currently concentrated in narrow geographic niches, may move towards the poles, to higher geographic latitudes. Global warming may cause changes in traditional viticulture regions. Therefore, there is a need to estimate the climatic conditions and climate change in areas that are not traditionally associated with viticulture, e.g., Poland. The primary objective of this paper is to prepare a methodology to evaluate the climatic and meteorological potential for viticulture in Poland based on a case study. An additional aim is to evaluate the climatic potential of the mesoregion where a university vineyard is located. Daily data on temperature, precipitation, insolation, and wind speed (1988-2018) from the meteorological station located in Łazy, southern Poland, were used to evaluate 15 climatological parameters and indices connected with viticulture. The next steps of the methodology are based on Geographic Information System methods. Topographical factors such as slope gradient and slope exposure were derived from Digital Elevation Models. The spatial distribution of climatological elements was interpolated by ordinary kriging. The values of each factor and index were also ranked and classified. The viticultural potential was determined by integrating two suitability maps, i.e., the topographical and climatic ones, and by calculating the average for each pixel. Data analysis shows significant changes in heat accumulation indices that are driven by increases in maximum temperature, mostly an increasing number of days with Tmax > 30 °C. The climatic conditions of this mesoregion are sufficient for Vitis vinifera viticulture. The values of the indicators and insolation are similar to those in known wine regions located at similar geographical latitudes in Europe. The smallest threat to viticulture in the study area is the occurrence of hail, and the greatest is the occurrence of frost in winter. This research provides the basis for evaluating general suitability and climatological potential for viticulture in Poland. To characterize the climatic potential for viticulture, it is necessary to assess the suitability of all climatological and topographical factors that can influence viticulture. The methodology used in this case study shows places where it is possible to create vineyards. It may also help wine-makers to select grape varieties.
Keywords: climatologic potential, climatic classification, Poland, viticulture
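The final integration step described above lends itself to a short illustration. The sketch below averages a topographic and a climatic suitability raster pixel by pixel; the 1-5 suitability scale and the sample values are illustrative assumptions, not data from the study.

```python
import numpy as np

# Minimal sketch of the map-integration step: two suitability rasters,
# each already ranked and classified to a common (assumed) 1-5 scale,
# are averaged pixel by pixel to give the viticultural potential map.
topographic = np.array([[3, 4, 5],
                        [2, 3, 4],
                        [1, 2, 3]], dtype=float)
climatic = np.array([[4, 4, 4],
                     [3, 3, 3],
                     [2, 2, 2]], dtype=float)

viticultural_potential = (topographic + climatic) / 2.0
print(viticultural_potential)
```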
Procedia PDF Downloads 106
344 Upgrading of Bio-Oil by Bio-Pd Catalyst
Authors: Sam Derakhshan Deilami, Iain N. Kings, Lynne E. Macaskie, Brajendra K. Sharma, Anthony V. Bridgwater, Joseph Wood
Abstract:
This paper reports the application of a bacteria-supported palladium catalyst to the hydrodeoxygenation (HDO) of pyrolysis bio-oil, towards producing an upgraded transport fuel. Biofuels are key to the timely replacement of fossil fuels in order to mitigate the emissions of greenhouse gases and the depletion of non-renewable resources. The process is an essential step in the upgrading of bio-oils derived from industrial by-products such as agricultural and forestry wastes, as the crude oil from pyrolysis contains a large amount of oxygen that must be removed in order to create a fuel resembling fossil-derived hydrocarbons. The manufacture of bacteria-supported catalysts is a means of utilizing recycled metals and second-life bacteria, and the metal can also be easily recovered from the spent catalysts after use. Comparisons are made between bio-Pd and a conventional activated-carbon-supported Pd/C catalyst. Bio-oil was produced by fast pyrolysis of beechwood at 500 °C at a residence time below 2 seconds, provided by Aston University. 5 wt% bio-Pd/C was prepared under reducing conditions, exposing cells of E. coli MC4100 to a solution of sodium tetrachloropalladate (Na2PdCl4), followed by rinsing, drying and grinding to form a powder. Pd/C was procured from Sigma-Aldrich. The HDO experiments were carried out in a 100 mL Parr batch autoclave using ~20 g bio-crude oil and 0.6 g bio-Pd/C catalyst. Experimental variables investigated for optimization included temperature (160-350 °C) and reaction time (up to 5 h) at a hydrogen pressure of 100 bar. Most of the experiments resulted in an aqueous phase (~40%) and an organic phase (~50-60%), as well as a gas phase (<5%) and coke (<2%). Study of the effects of temperature and time upon the process showed that the degree of deoxygenation increased (from ~20% up to 60%) at higher temperatures in the region of 350 °C and longer reaction times up to 5 h. However, minimum viscosity (~0.035 Pa.s) occurred at 250 °C and 3 h reaction time, indicating that some polymerization of the oil product occurs at the higher temperatures. Bio-Pd showed a similar degree of deoxygenation (~20%) to Pd/C at the lower temperature of 160 °C, but it did not rise as steeply with temperature. More coke was formed over bio-Pd/C than Pd/C at temperatures above 250 °C, suggesting that bio-Pd/C may be more susceptible to coke formation than Pd/C. Reactions occurring during bio-oil upgrading include catalytic cracking, decarbonylation, decarboxylation, hydrocracking, hydrodeoxygenation and hydrogenation. In conclusion, it was shown that bio-Pd/C displays an acceptable rate of HDO, which increases with reaction time and temperature. However, some undesirable reactions also occur, leading to a deleterious increase in viscosity at higher temperatures. Comparisons are also drawn with earlier work on the HDO of Chlorella-derived bio-oil manufactured from micro-algae via hydrothermal liquefaction. Future work will analyze the kinetics of the reaction and investigate the effect of bi-metallic catalysts.
Keywords: bio-oil, catalyst, palladium, upgrading
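The degree of deoxygenation quoted above is conventionally computed from the oxygen content of feed and product. A minimal sketch, assuming the standard definition (oxygen contents in wt% on a dry basis); the example numbers are illustrative, not measurements from this work:

```python
def degree_of_deoxygenation(o_feed_wt, o_product_wt):
    """Degree of deoxygenation (%) from oxygen contents (wt%, dry basis).

    A standard definition used in HDO studies; the values passed below
    are illustrative only.
    """
    return (1.0 - o_product_wt / o_feed_wt) * 100.0

# Example: a feed with 40 wt% oxygen upgraded to a product with 16 wt%
print(degree_of_deoxygenation(40.0, 16.0))  # -> 60.0 %
```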
Procedia PDF Downloads 175
343 An Analytical Systematic Design Approach to Evaluate Ballistic Performance of Armour Grade AA7075 Aluminium Alloy Using Friction Stir Processing
Authors: Lahari Ramya Pa, Sudhakar Ib, Madhu Vc, Madhusudhan Reddy Gd, Srinivasa Rao E.
Abstract:
Selection of suitable armour materials for defence applications is crucial with respect to increasing the mobility of the systems as well as maintaining safety. Therefore, determining the material with the lowest possible areal density that successfully resists the predefined threat is required in armour design studies. A number of light metals and alloys have come to the forefront, especially as substitutes for armour grade steels. AA5083 aluminium alloy, which fits the military standards imposed by the US Army, is the foremost nonferrous alloy considered as a possible replacement for steel to increase the mobility of armour vehicles and enhance fuel economy. The growing need for AA5083 aluminium alloy paves the way to develop supplementary aluminium alloys maintaining the military standards. It has been observed that AA2xxx, AA6xxx and AA7xxx aluminium alloys are potential materials to supplement AA5083 aluminium alloy. Among these aluminium series alloys, AA7xxx aluminium alloy (heat treatable) possesses high strength and can compete with armour grade steels. Earlier investigations revealed that layering of AA7xxx aluminium alloy can prevent spalling of the rear portion of the armour during ballistic impacts. Hence, the present investigation deals with fabrication of a hard layer (made of boron carbide) on AA7075 aluminium alloy using friction stir processing, with the intention of blunting the projectile in the initial impact, while the tough backing portion (AA7xxx aluminium alloy) dissipates the residual kinetic energy. An analytical approach has been adopted to unfold the ballistic performance against the projectile. Penetration of the projectile inside the armour has been resolved by strain energy model analysis. The perforation shearing area, i.e., the interface of projectile and armour, is taken into account for evaluation of penetration inside the armour. Fabricated surface composites (targets) were tested as per the military standard (JIS.0108.01) in a ballistic testing tunnel at the Defence Metallurgical Research Laboratory (DMRL), Hyderabad, under standardized testing conditions. Analytical results were well validated with experimentally obtained ones.
Keywords: AA7075 aluminium alloy, friction stir processing, boron carbide, ballistic performance, target
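The energy-balance idea behind a strain-energy penetration analysis can be sketched in a much-simplified form: the projectile's kinetic energy is reduced by the work of shearing a plug over the perforation (interface) area. This is a generic plugging model with illustrative numbers, not the analytical model of the paper:

```python
import math

def residual_velocity(mass_kg, v0_ms, tau_pa, diameter_m, thickness_m):
    """Simplified plugging-shear energy balance: the shear work is
    approximated as tau * (pi * d * t) * t over the perforation interface.
    All inputs below are illustrative assumptions."""
    e_kinetic = 0.5 * mass_kg * v0_ms ** 2
    e_shear = tau_pa * math.pi * diameter_m * thickness_m * thickness_m
    e_residual = max(0.0, e_kinetic - e_shear)
    return math.sqrt(2.0 * e_residual / mass_kg)

# e.g. a 10 g, 7.62 mm projectile at 830 m/s against a 20 mm plate
print(residual_velocity(0.010, 830.0, 350e6, 0.00762, 0.020))
```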
Procedia PDF Downloads 330
342 Seismic Impact and Design on Buried Pipelines
Authors: T. Schmitt, J. Rosin, C. Butenweg
Abstract:
Seismic design of buried pipeline systems for energy and water supply is important not only for plant and operational safety, but in particular for the maintenance of supply infrastructure after an earthquake. Past earthquakes have shown the vulnerability of pipeline systems. After the Kobe earthquake in Japan in 1995, for instance, the water supply in some regions was interrupted for almost two months. The present paper discusses particular issues of seismic wave impacts on buried pipelines, describes calculation methods, proposes approaches and gives calculation examples. Buried pipelines are exposed to different effects of seismic impacts. This paper considers the effects of transient displacement differences and the resulting stresses within the pipeline due to the wave propagation of the earthquake. Other effects are permanent displacements due to fault rupture displacements at the surface, soil liquefaction, landslides and seismic soil compaction. The presented model can also be used to calculate fault rupture induced displacements. Based on a three-dimensional finite element model, parameter studies are performed to show the influence of several parameters such as the incoming wave angle, wave velocity, soil depth and selected displacement time histories. In the computer model, the interaction between the pipeline and the surrounding soil is modeled with non-linear soil springs. A propagating wave is simulated, affecting the pipeline point by point, independently in time and space. The resulting stresses are mainly caused by displacement differences between neighboring pipeline segments and by soil-structure interaction. The calculation examples focus on pipeline bends as the most critical parts. Special attention is given to the calculation of long-distance heat pipeline systems. Here, expansion bends are arranged at regular distances to accommodate movements of the pipeline due to high temperature. Such expansion bends are usually designed with small bending radii, which in the event of an earthquake lead to high bending stresses at the cross-section of the pipeline. Therefore, Karman's elasticity factors, as well as the stress intensity factors for curved pipe sections, must be taken into account. The seismic verification of the pipeline for wave propagation in the soil can be achieved by observing normative strain criteria. Finally, an interpretation of the results and recommendations are given, taking into account the most critical parameters.
Keywords: buried pipeline, earthquake, seismic impact, transient displacement
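Two of the modeling ingredients named above can be sketched compactly: an elastic-perfectly-plastic soil spring for the pipe-soil interaction, and the classical free-field estimate of the axial ground strain imposed by a travelling wave (strain = peak ground velocity / wave propagation velocity). Parameter values are illustrative assumptions, not values from the paper:

```python
def soil_spring_force(u_m, k_n_per_m2, f_ult_n_per_m):
    """Elastic-perfectly plastic axial soil spring: force per metre of pipe
    as a function of relative pipe-soil displacement u. Stiffness k and
    ultimate resistance f_ult are illustrative parameters."""
    return max(-f_ult_n_per_m, min(f_ult_n_per_m, k_n_per_m2 * u_m))

def free_field_axial_strain(pgv_ms, wave_speed_ms):
    """Classical estimate of the ground strain a travelling wave imposes
    on a buried pipeline: epsilon = v_max / c."""
    return pgv_ms / wave_speed_ms

# e.g. 0.5 m/s peak ground velocity, 500 m/s apparent wave speed
print(free_field_axial_strain(0.5, 500.0))  # -> 0.001, i.e. 0.1 % strain
```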
Procedia PDF Downloads 187
341 Modification of Aliphatic-Aromatic Copolyesters with Polyether Block for Segmented Copolymers with Elastothermoplastic Properties
Authors: I. Irska, S. Paszkiewicz, D. Pawlikowska, E. Piesowicz, A. Linares, T. A. Ezquerra
Abstract:
Due to a number of advantages, such as high tensile strength, sensitivity to hydrolytic degradation, and biocompatibility, poly(lactic acid) (PLA) is one of the most common polyesters for biomedical and pharmaceutical applications. However, PLA is a rigid, brittle polymer with a low heat distortion temperature and a slow crystallization rate. In order to broaden the range of PLA applications, it is necessary to improve these properties. In recent years, a number of new strategies have evolved to obtain PLA-based materials with improved characteristics, including manipulation of crystallinity, plasticization, blending, and incorporation into block copolymers. Among the other methods, synthesis of aliphatic-aromatic copolyesters has been attracting considerable attention, as they may combine the mechanical performance of aromatic polyesters with the biodegradability known from aliphatic ones. Given the need for highly flexible biodegradable polymers, in this contribution, a series of aromatic-aliphatic copolyesters based on poly(butylene terephthalate) and poly(lactic acid) (PBT-b-PLA), exhibiting superior mechanical properties, were copolymerized with an additional poly(tetramethylene oxide) (PTMO) soft block. The structure and properties of both series were characterized by means of attenuated total reflectance Fourier transform infrared spectroscopy (ATR-FTIR), nuclear magnetic resonance spectroscopy (¹H NMR), differential scanning calorimetry (DSC), wide-angle X-ray scattering (WAXS) and dynamic mechanical thermal analysis (DMTA). Moreover, the related changes in tensile properties have been evaluated and discussed. Lastly, the viscoelastic properties of the synthesized poly(ester-ether) copolymers were investigated in detail by step-cycle tensile tests. The block lengths decreased with the advance of the treatment, and block-random (PBT-ran-PLA)-b-PTMO terpolymers were obtained. DSC and DMTA analysis confirmed unambiguously that the synthesized poly(ester-ether) copolymers are microphase-separated systems. The introduction of polyether co-units resulted in a decrease in crystallinity degree and melting temperature. X-ray diffraction patterns revealed that only the PBT blocks are able to crystallize. The mechanical properties of the (PBT-ran-PLA)-b-PTMO copolymers are a result of a unique arrangement of immiscible hard and soft blocks, providing both strength and elasticity.
Keywords: aliphatic-aromatic copolymers, multiblock copolymers, phase behavior, thermoplastic elastomers
Procedia PDF Downloads 140
340 A Review of Atomization Mechanisms Used for Spray Flash Evaporation: Their Effectiveness and Proposal of Rotary Bell Atomizer for Flashing Application
Authors: Murad A. Channa, Mehdi Khiadani, Yasir Al-Abdeli
Abstract:
Considering the severity of water scarcity around the world and its widening at an alarming rate, practical improvements in desalination techniques need to be engineered as soon as possible. Atomization is a major aspect of the flashing phenomenon, yet it has received little attention until now. There is a need to test efficient ways of atomization for the flashing process. Flash evaporation, together with reverse osmosis, is a commercially mature desalination technique, commonly known as Multi-Stage Flash (MSF). Even though reverse osmosis is widely practised, it is not as economical or sustainable as flash evaporation. However, flash evaporation has its drawbacks as well, such as lower water production per unit of power and time consumed. Flash evaporation is simply the instant boiling of a subcooled liquid which is introduced as droplets into a well-maintained negative-pressure environment. This reduced pressure inside the vacuum chamber lowers the boiling point far below the temperature of the liquid droplets, which results in the release of latent heat, and the liquid droplets turn into vapor, which is collected and condensed back into an impurity-free liquid in a condenser. Atomization is the main difference between pool and spray flash evaporation. Atomization is the heart of the flash evaporation process, as it increases the evaporating surface area per drop atomized. Atomization can be categorized into many levels depending on drop size, which again becomes crucial for increasing the droplet density (drop count) per given flow rate. This review comprehensively summarizes selected results relating to the methods of atomization and their effectiveness on the evaporation rate, from earlier works to date. In addition, the reviewers propose using centrifugal atomization for the flashing application, which brings several advantages, viz. ultra-fine droplets, uniform droplet density, and a swirling spray geometry with kinetically more energetic sprays during their flight. Finally, several challenges of using a rotary bell atomizer (RBA) and RBA sprays inside the chamber have been identified, which will be explored in detail. A schematic of rotary bell atomizer (RBA) integration with the chamber has been designed. This powerful centrifugal atomization has the potential to increase potable water production in commercial multi-stage flash evaporators, where it would be preferably advantageous.
Keywords: atomization, desalination, flash evaporation, rotary bell atomizer
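The claim that atomization drives flash evaporation through surface area can be made concrete: a fixed liquid volume V atomized into uniform spherical droplets of diameter d exposes a total area A = 6V/d, so halving the droplet diameter doubles the evaporating surface. A minimal sketch with illustrative numbers:

```python
import math

def total_surface_area(volume_m3, droplet_diameter_m):
    """Total surface area of a liquid volume atomized into uniform
    spherical droplets of a given diameter: A = 6 V / d."""
    n = volume_m3 / (math.pi / 6.0 * droplet_diameter_m ** 3)  # droplet count
    return n * math.pi * droplet_diameter_m ** 2

# Halving the droplet diameter doubles the evaporating surface area:
print(total_surface_area(1e-6, 100e-6))  # 1 mL at 100 um -> 0.06 m^2
print(total_surface_area(1e-6, 50e-6))   # 1 mL at 50 um  -> 0.12 m^2
```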
Procedia PDF Downloads 84
339 A Method to Predict the Thermo-Elastic Behavior of Laser-Integrated Machine Tools
Authors: C. Brecher, M. Fey, F. Du Bois-Reymond, S. Neus
Abstract:
Additive manufacturing has emerged as a fast-growing segment within manufacturing technologies. Established machine tool manufacturers, such as DMG MORI, recently presented machine tools combining milling and laser welding. By this, machine tools can realize a higher degree of flexibility and a shorter production time. Still, there are challenges that have to be accounted for in terms of maintaining the necessary machining accuracy, especially due to thermal effects arising from the use of high-power laser processing units. To study the thermal behavior of laser-integrated machine tools, it is essential to analyze and simulate the thermal behavior of machine components, individually and assembled. This information will help to design a geometrically stable machine tool under the influence of high-power laser processes. This paper presents an approach to decrease the loss of machining precision due to thermal impacts. Real effects of laser machining processes are considered and thus enable an optimized design of the machine tool, and respectively its components, in the early design phase. The core element of this approach is a matched FEM model considering all relevant variables, e.g. laser power, angle of the laser beam, reflection coefficients and heat transfer coefficient. Hence, a systematic approach to obtain this matched FEM model is essential. Characterizing the thermal behavior of the structural components, and predicting the laser beam path in order to determine the relevant beam intensity on the structural components, are the two constituent aspects of the method. To match the model, both aspects have to be combined and verified empirically. In this context, an essential machine component of a five-axis machine tool, the turn-swivel table, serves as the demonstration object for the verification process. Therefore, a turn-swivel table test bench as well as an experimental set-up to measure the beam propagation were developed and are described in the paper. In addition to the empirical investigation, a simulative approach to the described types of experimental examination is presented. Concluding, it is shown that the method and a good understanding of the two core aspects, the thermo-elastic machine behavior and the laser beam path, as well as their combination, help designers to minimize the loss of precision in the early stages of the design phase.
Keywords: additive manufacturing, laser beam machining, machine tool, thermal effects
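A first-order relation connects several of the FEM model variables named above (laser power, beam angle, reflection coefficient): the heat flux absorbed by a struck surface. The sketch below is a hedged closed-form estimate with illustrative values; the paper's actual approach is a matched FEM model, not this formula:

```python
import math

def absorbed_heat_flux(laser_power_w, reflectivity, incidence_deg, spot_area_m2):
    """First-order estimate of the heat flux absorbed by a structural
    component hit by the beam: the absorbed fraction (1 - R) of the
    incident power, projected by the angle of incidence, spread over the
    spot area. All values below are illustrative assumptions."""
    absorbed = laser_power_w * (1.0 - reflectivity)
    return absorbed * math.cos(math.radians(incidence_deg)) / spot_area_m2

# e.g. 2 kW beam, 80 % reflective surface, 30 deg incidence, 1 cm^2 spot
print(absorbed_heat_flux(2000.0, 0.8, 30.0, 1e-4))  # W/m^2
```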
Procedia PDF Downloads 265
338 Miniaturized PVC Sensors for Determination of Fe2+, Mn2+ and Zn2+ in Buffalo-Cows’ Cervical Mucus Samples
Authors: Ahmed S. Fayed, Umima M. Mansour
Abstract:
Three polyvinyl chloride membrane sensors were developed for the electrochemical evaluation of ferrous, manganese and zinc ions. The sensors were used for assaying metal ions in the cervical mucus (CM) of Egyptian river buffalo-cows (Bubalus bubalis), as their levels vary depending on cyclical hormone variation during the different phases of the estrus cycle. The presented sensors are based on the ionophores β-cyclodextrin (β-CD), hydroxypropyl β-cyclodextrin (HP-β-CD) and sulfocalix[4]arene (SCAL) for sensors 1, 2 and 3, for Fe2+, Mn2+ and Zn2+, respectively. Dioctyl phthalate (DOP) was used as the plasticizer in a polymeric matrix of polyvinyl chloride (PVC). To increase the selectivity and sensitivity of the sensors, each sensor was enriched with a suitable complexing agent, which enhanced the sensor's response. For sensor 1, β-CD was mixed with bathophenanthroline; for sensor 2, porphyrin was incorporated with HP-β-CD; while for sensor 3, oxine was the complexing agent used with SCAL. Linear responses over 10-7-10-2 M with cationic slopes of 53.46, 45.01 and 50.96 over the pH range 4-8 were obtained using coated graphite sensors for ferrous, manganese and zinc ionic solutions, respectively. The three sensors were validated according to the IUPAC guidelines. The results obtained by the presented potentiometric procedures were statistically analyzed and compared with those obtained by an atomic absorption spectrophotometric method (AAS). No significant differences in either accuracy or precision were observed between the two techniques. Successful application to the determination of the three studied cations in CM, for the purpose of determining the proper time for artificial insemination (AI), was achieved. The results were compared with those obtained upon analyzing the samples by AAS. Proper detection of estrus and the correct timing of AI are necessary to maximize the production of buffaloes. In this experiment, 30 multiparous buffalo-cows, in their second to third lactation and weighing 415-530 kg, were synchronized with the OVSynch protocol. Samples were taken at three times around ovulation: on day 8 of the OVSynch protocol, on day 9 (20 h before AI) and on day 10 (1 h before AI). Besides the analysis of trace elements (Fe2+, Mn2+ and Zn2+) in CM using the three sensors, the samples were analyzed for the three cations and also Cu2+ by AAS in the CM samples and blood samples. The results obtained were correlated with hormonal analysis of serum samples and ultrasonography for the purpose of determining the optimum time of AI. The results showed significant differences and a powerful correlation between the Zn2+ composition of CM during the heat phase and the ovulation time, indicating that the parameter could be used as a tool to decide the optimal time of AI in buffalo-cows.
Keywords: PVC sensors, buffalo-cows, cyclodextrins, atomic absorption spectrophotometry, artificial insemination, OVSynch protocol
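The linear response reported above corresponds to a calibration of the form E = E0 + S·log10(C). The sketch below inverts that calibration to recover a concentration from a measured potential, using the quoted slope for the Fe2+ sensor as S; the standard potential E0 and the example reading are invented for illustration, and mV units are assumed:

```python
def concentration_from_potential(e_mv, e0_mv, slope_mv_per_decade):
    """Invert the linear ion-selective electrode calibration
    E = E0 + S * log10(C) to get the concentration C (mol/L).
    E0 and the example potential below are illustrative assumptions;
    the slope is the value quoted in the abstract for the Fe2+ sensor."""
    return 10 ** ((e_mv - e0_mv) / slope_mv_per_decade)

print(concentration_from_potential(239.62, 400.0, 53.46))  # -> ~1e-3 M
```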
Procedia PDF Downloads 219
337 Integration of Corporate Social Responsibility Criteria in Employee Variable Remuneration Plans
Authors: Jian Wu
Abstract:
For a few years now, some French companies have integrated CSR (corporate social responsibility) criteria into their variable remuneration plans to 'restore a good working atmosphere' and 'preserve the natural environment'. These CSR criteria are based on concerns about environmental protection, social aspects, and corporate governance. In June 2012, a report on this practice was published jointly by ORSE (the French acronym for the Observatory on CSR) and PricewaterhouseCoopers. Facing this initiative from the business world, we need to examine whether it has real economic utility. We adopt a theoretical approach for our study. First, we examine the debate between the 'orthodox' point of view in economics and the CSR school of thought. The classical economic model asserts that in a capitalist economy there exists a certain 'invisible hand' which helps to resolve all problems. When companies seek to maximize their profits, they are also fulfilling, de facto, their duties towards society. As a result, the only social responsibility that firms should have is profit-seeking while respecting the minimum legal requirements. However, the CSR school considers that, as long as the economic system is not perfect, there is no 'invisible hand' which can arrange everything in good order. This means that we cannot count on any 'divine force' to make corporations responsible towards society. Something more needs to be done in addition to firms' economic and legal obligations. Then, we rely on financial theories and empirical evidence to examine the sound foundation of CSR. Three theories developed in corporate governance can be used. Stakeholder theory tells us that corporations owe a duty to all of their stakeholders, including stockholders, employees, clients, suppliers, government, the environment, and society. Social contract theory tells us that there are tacit 'social contracts' between a company and society itself. A firm has to respect these contracts if it does not want to be punished in the form of fines, resource constraints, or bad reputation. Legitimacy theory tells us that corporations have to 'legitimize' their actions towards society if they want to continue to operate in good conditions. As regards empirical results, we present a literature review on the relationship between the CSR performance and the financial performance of a firm. We note that, due to difficulties in defining these performances, this relationship remains ambiguous despite the numerous research works carried out in the field. Finally, we are curious to know whether the integration of CSR criteria in variable remuneration plans, which is practiced so far in big companies, should be extended to other ones. After investigation, we note that two groups of firms have the greatest need. The first involves industrial sectors whose activities have a direct impact on the environment, such as petroleum and transport companies. The second involves companies which are under pressure in terms of returns to deal with international competition.
Keywords: corporate social responsibility, corporate governance, variable remuneration, stakeholder theory
Procedia PDF Downloads 186
336 Using Real Truck Tours Feedback for Address Geocoding Correction
Authors: Dalicia Bouallouche, Jean-Baptiste Vioix, Stéphane Millot, Eric Busvelle
Abstract:
When researchers or logistics software developers deal with vehicle routing optimization, they mainly focus on minimizing the total travelled distance or the total time spent in the tours by the trucks, and on maximizing the number of visited customers. They assume that the upstream real data given to carry out the optimization of a transporter's tours is free from errors, such as customers' real constraints, customers' addresses and their GPS coordinates. However, in real transporter situations, upstream data is often of bad quality because of address geocoding errors and the irrelevance of addresses received from EDI (Electronic Data Interchange). In fact, geocoders are not exempt from errors and can return incorrect GPS coordinates. Also, even with a good geocoder, an inaccurate address can lead to a bad geocoding. For instance, when the geocoder has trouble geocoding an address, it returns the coordinates of the city center. Another obvious geocoding issue is that the maps used by the geocoders are not regularly updated; thus, new buildings may not exist on the maps until the next update. Even so, trying to optimize tours with incorrect customer GPS coordinates, which are the most important and basic input data for solving a vehicle routing problem, is not really useful and will lead to a bad and incoherent tour solution, because the locations of the customers used for the optimization are very different from their real positions. Our work is supported by a logistics software publisher, Tedies, and a transport company, Upsilon. We work with Upsilon's truck route data to carry out our experiments. These trucks are equipped with TomTom GPS units that continuously record their tour data (positions, speeds, tachograph information, etc.). We then retrieve these data to extract the real truck routes to work with. The aim of this work is to use the experience of the driver and the feedback of the real truck tours to validate the GPS coordinates of well-geocoded addresses, and to correct badly geocoded addresses. Thereby, when a vehicle makes its tour, for each visited customer, the vehicle should have trouble finding that customer's address at most once. In other words, the vehicle would be wrong at most once for each customer's address. Our method significantly improves the quality of the geocoding: on average, 70% of the GPS coordinates of a tour's addresses are corrected automatically. The remaining GPS coordinates are corrected manually, with the user given indications to help correct them. This study shows the importance of taking into account the feedback of the trucks to gradually correct address geocoding errors. Indeed, the accuracy of a customer's address and its GPS coordinates plays a major role in tour optimization. Unfortunately, address writing errors are very frequent. This feedback is naturally and usually taken into account by transporters (by asking drivers, calling customers, etc.) to learn about their tours and bring corrections to the upcoming tours. Hence, we develop a method to do a large part of that automatically.
Keywords: driver experience feedback, geocoding correction, real truck tours
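The core correction idea, comparing the geocoded position of a customer with where the truck actually stopped, can be sketched as follows. The 150 m tolerance is an illustrative assumption, not a parameter from the study:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS84 points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def correct_geocoding(geocoded, observed_stop, tolerance_m=150.0):
    """If the truck's recorded stop lies farther from the geocoded position
    than the tolerance, trust the driver feedback and return the observed
    position; otherwise the tour validates the geocoding."""
    if haversine_m(*geocoded, *observed_stop) > tolerance_m:
        return observed_stop  # geocoding judged wrong; corrected by feedback
    return geocoded           # geocoding validated by the real tour
```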
Procedia PDF Downloads 674
335 Urban Design as a Tool in Disaster Resilience and Urban Hazard Mitigation: Case of Cochin, Kerala, India
Authors: Vinu Elias Jacob, Manoj Kumar Kini
Abstract:
Disasters of all types are occurring more frequently and are becoming more costly than ever due to various man-made factors, including climate change. A better utilisation of the concepts of governance and management within disaster risk reduction is inevitable and of utmost importance. There is a need to explore the role of pre- and post-disaster public policies. The role of urban planning and design in shaping the opportunities of households, individuals and, collectively, settlements for achieving recovery has to be explored. Governance strategies that can better support the integration of disaster risk reduction and management have to be examined. The main aim is thereby to build the resilience of individuals and communities and, thus, of states too. Resilience is a term usually linked to the fields of disaster management and mitigation, but today it has become an integral part of the planning and design of cities. Disaster resilience broadly describes the ability of an individual or community to 'bounce back' from disaster impacts, through improved mitigation, preparedness, response, and recovery. The growing population of the world has resulted in the inflow and use of resources, creating pressure on the various natural systems and inequity in the distribution of resources. This makes cities vulnerable to multiple attacks by both natural and man-made disasters. Each urban area needs elaborate studies and study-based strategies to proceed in the discussed direction. Cochin, in Kerala, is the fastest and largest growing city in the state, with a population of more than 2.6 million (26 lakh). The main concern addressed in this paper is making cities resilient by designing a framework of strategies based on urban design principles for an immediate response system, focussing especially on the city of Cochin, Kerala, India. The paper discusses understanding the spatial transformations due to disasters and the role of spatial planning in the context of significant disasters. The paper also aims to develop a model, taking into consideration various factors such as land use, open spaces, transportation networks, physical and social infrastructure, building design, density and ecology, that can be implemented in any city of any context. Guidelines are made for the smooth evacuation of people through hassle-free transport networks, protecting vulnerable areas in the city, providing adequate open spaces for shelters and gatherings, making basic amenities available to the affected population within reachable distance, etc., by using the tool of urban design. Strategies at the city level and neighbourhood level have been developed with inferences from vulnerability analysis and case studies.
Keywords: disaster management, resilience, spatial planning, spatial transformations
Procedia PDF Downloads 296
334 Using Google Distance Matrix Application Programming Interface to Reveal and Handle Urban Road Congestion Hot Spots: A Case Study from Budapest
Authors: Peter Baji
Abstract:
In recent years, a growing body of literature has emphasized the increasingly negative impacts of urban road congestion on the everyday life of citizens. Although there are different responses from the public sector to decrease traffic congestion in urban regions, the most effective public intervention is congestion charging. Because travel is an economic asset, its consumption can be controlled effectively by extra taxes or prices, but this demand-side intervention is often unpopular. Measuring traffic flows with the help of different methods has a long history in the transport sciences, but until recently, there was not sufficient data for evaluating road traffic flow patterns at the scale of the entire road system of a larger urban area. European cities (e.g., London, Stockholm, Milan) in which congestion charges have already been introduced designated a particular zone in their downtown for paying, but this protects only the users and inhabitants of the CBD (Central Business District) area. Through the use of Google Maps data as a resource for revealing urban road traffic flow patterns, this paper aims to provide a solution for a fairer and smarter congestion pricing method in cities. The case study area of the research contains three bordering districts of Budapest which are linked by one main road. The first district (5th) is the original downtown that is affected by the congestion charge plans of the city. The second district (13th) lies in the transition zone, and it has recently been transformed into a new CBD containing the biggest office zone in Budapest. The third district (4th) is a mainly residential area on the outskirts of the city. The raw data of the research was collected with the help of Google's Distance Matrix API (Application Programming Interface), which provides estimated future traffic data via travel times between freely fixed coordinate pairs. From the difference between free-flow and congested travel time data, the daily congestion patterns and hot spots are detectable in all measured roads within the area. The results suggest that the distribution of congestion peak times and hot spots is uneven in the examined area; however, there are frequently congested areas which lie outside the downtown, and their inhabitants also need some protection. The conclusion of this case study is that cities can develop a real-time and place-based congestion charge system that encourages car users to avoid frequently congested roads by changing their routes or travel modes. This would be a fairer solution for decreasing the negative environmental effects of urban road transportation instead of protecting a very limited downtown area.
Keywords: Budapest, congestion charge, distance matrix API, application programming interface, pilot study
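The congestion measure described above, the surplus of traffic-aware travel time over free-flow travel time for a coordinate pair, can be sketched as below. Field and parameter names follow the public Distance Matrix API documentation as I understand it; the API key is a placeholder, and duration_in_traffic is only returned when a departure_time is set and traffic data is available:

```python
import requests

URL = "https://maps.googleapis.com/maps/api/distancematrix/json"

def congestion_delay_s(origin, destination, departure_time, api_key):
    """Estimated congestion delay (seconds) for one origin/destination
    pair: traffic-aware travel time minus free-flow travel time."""
    params = {
        "origins": origin,            # e.g. "47.5316,19.0691"
        "destinations": destination,  # e.g. "47.5616,19.0800"
        "departure_time": departure_time,  # unix timestamp or "now"
        "key": api_key,               # placeholder, not a real key
    }
    element = requests.get(URL, params=params).json()["rows"][0]["elements"][0]
    free_flow = element["duration"]["value"]             # seconds
    congested = element["duration_in_traffic"]["value"]  # seconds
    return congested - free_flow
```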
Procedia PDF Downloads 198
333 Transition Dynamic Analysis of the Urban Disparity in Iran “Case Study: Iran Provinces Center”
Authors: Marzieh Ahmadi, Ruhullah Alikhan Gorgani
Abstract:
The usual methods of measuring regional inequalities cannot reflect the internal changes of the country in terms of the displacement of regions between different development groups, and indicators of inequality are not effective in demonstrating the dynamics of the distribution of inequality. For this purpose, this paper examines the transition dynamics of urban disparity in the country during the period 2006-2016, using the CIRD multidimensional index and the stochastic kernel density method. It first selects 25 indicators in five dimensions, including macroeconomic conditions, science and innovation, environmental sustainability, human capital and public facilities, and develops a two-stage principal component analysis methodology to create a composite index of inequality. Then, in the second stage, using a nonparametric analytical approach to internal distribution dynamics and the stochastic kernel density method, the convergence hypothesis of the CIRD index of the Iranian province centers is tested, and then, based on the ergodic density, the long-run equilibrium is shown. Also, at this stage, for the purpose of adopting accurate regional policies, the distribution dynamics and the process of convergence or divergence of the Iranian provinces are examined for each of the five dimensions. According to the results of the first stage, in 2006 and 2016, the highest level of development is related to Tehran, and Zahedan is at the lowest level of development. The results show that the central cities of the country are at the highest level of development, due to the effects of Tehran's knowledge spillover, and the border cities of the country are at the lowest level of development. The main reason for this may be the lack of access to markets in the border provinces. Based on the results of the second stage, which examines the dynamics of regional inequality transition in the country during 2006-2016, the distribution in the first year (2006) is not multi-peaked, and according to the kernel density graph, the CIRD index of about 70% of the cities lies between -1.1 and -0.1, with the rest of the distribution on the right lying above -0.1. In the kernel distribution, a convergence process is observed, and the graph points to a single main peak: there tends to be a small peak at about 3, but the main peak is at about -0.6. According to the graph for the final year (2016), there is no mobility in the lower-level groups, but at the higher level, the CIRD index of about 45% of the provinces lies at about -0.4. This year clearly shows a twin-peak density pattern, which indicates that the cities tend to separate into distinct groups in terms of development. Also, according to the distribution dynamics results, the provinces of Iran follow a single-peak density pattern in 2006 and a double-peak density pattern in 2016 at low and moderate levels of the inequality index, and, in terms of the development index, the country diverges during the years 2006 to 2016.
Keywords: urban disparity, CIRD index, convergence, distribution dynamics, random kernel density
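The two computational stages named above, a PCA-based composite index and a kernel density estimate of its cross-sectional distribution, can be sketched on synthetic data. The 31 provinces x 25 indicators shape is an assumption, the random numbers stand in for the real data, and the single-component PCA below simplifies the paper's two-stage procedure:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
indicators = rng.normal(size=(31, 25))  # synthetic: 31 provinces, 25 indicators

# Stage 1 (simplified): standardize, then compress into a composite index.
index = PCA(n_components=1).fit_transform(
    StandardScaler().fit_transform(indicators)).ravel()

# Stage 2: kernel density of the cross-sectional index distribution,
# inspected for single- vs twin-peak patterns across years.
density = gaussian_kde(index)
grid = np.linspace(index.min(), index.max(), 200)
print(grid[np.argmax(density(grid))])  # location of the main density peak
```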
Procedia PDF Downloads 124
332 Traumatic Brain Injury Induced Lipid Profiling of Lipids in Mice Serum Using UHPLC-Q-TOF-MS
Authors: Seema Dhariwal, Kiran Maan, Ruchi Baghel, Apoorva Sharma, Poonam Rana
Abstract:
Introduction: Traumatic brain injury (TBI) is defined as a temporary or permanent alteration in brain function and pathology caused by an external mechanical force. It represents a leading cause of mortality and morbidity among children and young adults. Various models of TBI in rodents have been developed in the laboratory to mimic injury scenarios. Blast overpressure injury, following accidents or explosive devices, is common among civilians and military personnel. In addition to this, the lateral controlled cortical impact (CCI) model mimics blunt, penetrating injury. Method: In the present study, we developed two different mild TBI models using blast and CCI injury. In the blast model, helium gas was used to create an overpressure of 130 kPa (±5) via a shock tube, and CCI injury was induced with an impact depth of 1.5 mm, to create diffuse and focal injury, respectively. C57BL/6J male mice (10-12 weeks) were divided into three groups: (1) control, (2) blast-treated, (3) CCI-treated, and were exposed to the respective injury models. Serum was collected on day 1 and day 7, followed by biphasic extraction using MTBE/methanol/water. Prepared samples were separated on a Charged Surface Hybrid (CSH) C18 column and acquired on a UHPLC-Q-TOF-MS using an ESI probe with in-house optimized parameters and methods. The MS peak list was generated using MarkerView™. Data were normalized, Pareto-scaled, and log-transformed, followed by multivariate and univariate analysis in MetaboAnalyst. Result and discussion: Untargeted profiling of lipids generated extensive data features, which were annotated through LIPID MAPS® based on their m/z and further confirmed by their fragmentation patterns using LipidBlast. Finally, 269 features were annotated in the positive and 182 features in the negative mode of ionization. PCA and PLS-DA score plots showed clear segregation of the injury groups from controls. Among the various lipids in mild blast and CCI, five lipids (the glycerophospholipids PC 30:2, PE O-33:3, PG 28:3;O3 and PS 36:1, and the fatty acyl FA 21:3;O2) were significantly altered in both injury groups at day 1 and day 7, and also had a VIP score >1. Pathway analysis by BioPAN also showed hampered synthesis of glycerolipids and glycerophospholipids, which coincides with earlier reports. This could be a direct result of alteration in the acetylcholine signaling pathway in response to TBI. Understanding the role of specific classes of lipid metabolism, regulation and transport could be beneficial to TBI research, since it could provide new targets and help determine the best therapeutic intervention. This study demonstrates potential lipid biomarkers which can be used for injury severity diagnosis and injury identification irrespective of injury type (diffuse or focal).
Keywords: LipidBlast, lipidomic biomarker, LIPID MAPS®, TBI
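The preprocessing chain named above (normalization, Pareto scaling, log transformation) is standard in MS-based lipidomics. A minimal sketch on a synthetic matrix; the order shown (total-intensity normalization, log transform, then Pareto scaling) and the matrix dimensions are assumptions for illustration, since the abstract lists the steps but not their exact parameters:

```python
import numpy as np

def pareto_scale(x):
    """Pareto scaling: mean-center each feature and divide by the square
    root of its standard deviation. x is a samples-by-features matrix."""
    centered = x - x.mean(axis=0)
    return centered / np.sqrt(x.std(axis=0, ddof=1))

# Synthetic stand-in: 12 serum samples x 451 lipid features
# (269 positive-mode + 182 negative-mode features, as annotated above).
raw = np.random.default_rng(1).lognormal(size=(12, 451))
normalized = raw / raw.sum(axis=1, keepdims=True)  # total-intensity norm
processed = pareto_scale(np.log10(normalized))     # ready for PCA / PLS-DA
```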
Procedia PDF Downloads 113
331 Flow Visualization and Mixing Enhancement in Y-Junction Microchannel with 3D Acoustic Streaming Flow Patterns Induced by Trapezoidal Triangular Structure using High-Viscous Liquids
Authors: Ayalew Yimam Ali
Abstract:
The Y-shaped microchannel is used to mix fluids, miscible or immiscible, with different viscosities. However, mixing at the entrance of the Y-junction microchannel can be difficult due to micro-scale laminar flow, particularly with two miscible high-viscosity water-glycerol fluids. One of the most promising methods to improve mixing performance and diffusive mass transfer under laminar flow is acoustic streaming (AS): a time-averaged, second-order steady streaming that can produce a rolling motion in the microchannel by oscillating a low-frequency acoustic transducer and inducing an acoustic wave in the flow field. The 3D trapezoidal triangular structure spine developed in this study was created using sophisticated CNC machine cutting tools to produce a microchannel mold with the 3D trapezoidal triangular spine along the longitudinal mixing region of the Y-junction. The molds for the 3D trapezoidal structure, with sharp edge tip angles of 30° and a 0.3 mm trapezoidal triangular sharp edge tip depth, were machined from PMMA (polymethyl methacrylate) glass using an advanced CNC machine, and the channel was manufactured in PDMS (polydimethylsiloxane), grown longitudinally on the top surface of the Y-junction microchannel using soft-lithography nanofabrication strategies. Flow visualization of the 3D rolling steady acoustic streaming, and of the mixing enhancement with high-viscosity miscible fluids for different trapezoidal triangular structure longitudinal lengths, channel widths, volume flow rates, oscillation frequencies, and amplitudes, was carried out using micro-particle image velocimetry (μPIV) to study the 3D acoustic streaming flow patterns and the mixing enhancement. The streaming velocity fields and vorticity fields show 16 times higher vorticity than in the absence of acoustic streaming, and the mixing performance was evaluated at various amplitudes, flow rates, and frequencies using the grayscale value of pixel intensity with MATLAB software. Mixing experiments were performed using a fluorescent green dye solution with de-ionized water on one inlet side of the channel and the de-ionized water-glycerol mixture on the other inlet side of the Y-channel, and the degree of mixing was found to improve greatly, from 67.42% without acoustic streaming to 96.83% with acoustic streaming. The results show that a new, three-dimensional, intense steady streaming rolling motion forms at high volume flow rates around the entrance junction mixing zone with the two miscible high-viscosity fluids, which are otherwise governed by laminar fluid transport phenomena.
Keywords: microfabrication, 3D acoustic streaming flow visualization, micro-particle image velocimetry, mixing enhancement
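The degree-of-mixing figures quoted above come from grayscale pixel intensity. A common intensity-based definition, sketched here, scores a probed region by how much its pixel-intensity standard deviation has collapsed relative to an unmixed reference frame; the exact formula used in the study is not given in the abstract, so this is an assumption, and Python stands in for the MATLAB processing named above:

```python
import numpy as np

def degree_of_mixing(gray, gray_unmixed):
    """Intensity-based mixing index: 1 minus the ratio of the
    pixel-intensity standard deviation in the probed region to that of
    a fully unmixed reference image. A perfectly mixed frame has uniform
    intensity, so the index approaches 1."""
    return 1.0 - np.std(gray) / np.std(gray_unmixed)

# Illustrative frames: sharp two-stream interface vs nearly uniform field
unmixed = np.concatenate([np.zeros(500), np.full(500, 255.0)])
mixed = np.full(1000, 127.5) + np.random.default_rng(2).normal(0, 5, 1000)
print(degree_of_mixing(mixed, unmixed))  # close to 1
```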
Procedia PDF Downloads 21
330 Analysis of Splicing Methods for High Speed Automated Fibre Placement Applications
Authors: Phillip Kearney, Constantina Lekakou, Stephen Belcher, Alessandro Sordon
Abstract:
The focus in the automotive industry is to reduce human operator and machine interaction, so manufacturing becomes more automated and safer. The aim is to lower part cost and construction time, as well as defects in the parts, which sometimes occur due to the physical limitations of human operators. A move to automate the layup of reinforcement material in composites manufacturing has resulted in the use of tapes that are placed in position by a robotic deposition head, a process also described as Automated Fibre Placement (AFP). The AFP process is limited by the finite amount of material that can be loaded into the machine at any one time. Joining two batches of tape material together involves a splice to secure the end of the finishing tape to the starting edge of the new tape. The splicing method of choice for the majority of prepreg applications is a hand-stitch method, which, as the name suggests, requires human input. This investigation explores three methods for automated splicing, namely adhesive, binding and stitching. The adhesive technique uses an additional adhesive placed on the tape ends to be joined. Binding uses the binding agent already impregnated into the tape, activated through the application of heat. The stitching method is used as a baseline to compare the new splicing methods against the traditional technique currently in use. As the methods will be used within a High Speed Automated Fibre Placement (HSAFP) process, the splices have to meet certain specifications: (a) the splice must be able to endure a load of 50 N in tension, applied at a rate of 1 mm/s; (b) the splice must be created in less than 6 seconds, dictated by the capacity of the tape accumulator within the system. The samples for experimentation were manufactured with controlled overlaps, alignment and splicing parameters; these were then tested in tension using a tensile testing machine. Initial analysis explored the use of the impregnated binding agent present on the tape, as in the binding splicing technique, and analysed the effect of temperature and overlap on the strength of the splice. It was found that the optimum splicing temperature was at the higher end of the activation range of the binding agent, 100 °C. The optimum overlap was found to be 25 mm; there was no improvement in bond strength from 25 mm to 30 mm overlap. The final analysis compared the different splicing methods to the stitched-bond baseline. It was found that the addition of an adhesive was the best splicing method, achieving a maximum load of over 500 N, compared to the 26 N achieved by a stitching splice and 94 N by the binding method.
Keywords: analysis, automated fibre placement, high speed, splicing
Procedia PDF Downloads 155
329 Highly Conducting Ultra Nanocrystalline Diamond Nanowires Decorated ZnO Nanorods for Long Life Electronic Display and Photo-Detectors Applications
Authors: A. Saravanan, B. R. Huang, C. J. Yeh, K. C. Leou, I. N. Lin
Abstract:
A new class of ultra-nano diamond-graphite nano-hybrid (DGH) composite materials containing nano-sized diamond needles was developed via a low-temperature process. This kind of diamond-graphite nano-hybrid composite nanowire exhibits high electrical conductivity and excellent electron field emission (EFE) properties. A few earlier reports mention that the addition of N2 gas to the growth plasma requires a high growth temperature (800 °C) to activate the dopants and generate conductivity in the films. High growth temperatures are not compatible with Si-based device fabrication. We have used a novel bias-enhanced-growth (BEG) MPECVD process to grow diamond films at a low substrate temperature (450 °C). We observed that the BEG N-UNCD films thus obtained possess a high conductivity of σ = 987 S/cm, the highest ever reported for diamond films, with excellent EFE properties. TEM investigation indicated that these films contain needle-like diamond grains about 5 nm in diameter and hundreds of nanometers in length. Each of the grains is encased in graphitic layers tens of nanometers in thickness. These material properties are suitable for specific applications, such as high conductivity for electron field emitters, high robustness for microplasma cathodes and high electrochemical activity for electrochemical sensing. Subsequently, the highly conducting DGH films were coated on vertically aligned ZnO nanorods (ZNRs); no prior nucleation or seeding process was needed due to the use of the BEG method. Such a composite structure provides significant enhancement in the field emission characteristics of the cold cathode, with an ultralow turn-on voltage of 1.78 V/μm and a high EFE current density of 3.68 mA/cm2 (at 4.06 V/μm), due to the decoration of the DGH material on the ZnO nanorods. The DGH/ZNRs-based device provides stable emission for a longer duration (562 min) than bare ZNRs (104 min), without any current degradation, because the diamond coating protects the ZNRs from ion bombardment when they are used as the cathode for microplasma devices. The potential application of these materials is demonstrated by plasma illumination measurements, which ignited the plasma at a minimum voltage of 290 V. The photoresponse (Iphoto/Idark) of the DGH/ZNRs-based photodetectors is much higher (1202) than that of bare ZNRs (229). In this process, electron transport from the ZNRs to the DGH through the graphitic layers is easy, and the EFE properties of these materials are comparable to other widely used field emitters such as carbon nanotubes and graphene. The DGH/ZNRs composite also offers the possibility of use in flat panel, microplasma and vacuum microelectronic devices.
Keywords: bias-enhanced nucleation and growth, ZnO nanorods, electrical conductivity, electron field emission, photo-detectors
Procedia PDF Downloads 370
328 Using Computer Vision and Machine Learning to Improve Facility Design for Healthcare Facility Worker Safety
Authors: Hengameh Hosseini
Abstract:
The design of large healthcare facilities, such as hospitals, multi-service-line clinics, and nursing facilities, that can accommodate patients with wide-ranging disabilities is a challenging endeavor, and one that is poorly understood among healthcare facility managers, administrators, and executives. An even less understood extension of this problem is the implications of weakly or insufficiently accommodative facility design for healthcare workers in physically intensive jobs, who may also suffer from a range of disabilities and who are therefore at increased risk of workplace accident and injury. Combine this reality with the vast range of facility types, ages, and designs, and the problem of universal accommodation becomes even more daunting and complex. In this study, we focus on the implications of facility design for healthcare workers with low vision who also have physically active jobs. The points of difficulty are myriad: health service infrastructure, the equipment used in health facilities, and transport to and from appointments and other services can all pose a barrier to health care if they are inaccessible, less accessible, or even simply less comfortable for people with various disabilities. We conducted a series of surveys and interviews with employees and administrators of 7 facilities of a range of sizes and ownership models in the Northeastern United States, and combined that corpus with in-facility observations and data collection to identify five major points of failure, common to all the facilities, that we concluded could pose safety threats to employees with vision impairments, ranging from very minor to severe. We determine that lack of design empathy is a major commonality among facility management and ownership. We subsequently propose three methods for remedying this lack of empathy-informed design and the dangers it poses to employees: the use of an existing open-source augmented reality application to simulate the low-vision experience for designers and managers; the use of a machine learning model we develop to automatically infer facility shortcomings from large datasets of recorded patient and employee reviews and feedback; and the use of a computer vision model, fine-tuned on images of each facility, to infer and predict facility features, locations, and workflows that could likewise pose meaningful dangers to visually impaired employees of each facility. After conducting a series of real-world comparative experiments with each of these approaches, we conclude that each is a viable solution under particular sets of conditions, and finally characterize the range of facility types, workforce composition profiles, and work conditions under which each of these methods would be most apt and successful.
Keywords: artificial intelligence, healthcare workers, facility design, disability, visually impaired, workplace safety
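The second proposed method, inferring facility shortcomings from review text, can be illustrated with a minimal supervised sketch. The four labeled snippets below are invented for illustration; the real model described above was trained on large datasets of patient and employee reviews:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set: 1 = describes a facility shortcoming
texts = [
    "the corridor signage is too small to read under the dim lighting",
    "supply room shelving blocks the walkway near the loading dock",
    "staff at reception were friendly and check-in was quick",
    "parking was easy and the cafeteria food was good",
]
labels = [1, 1, 0, 0]

# TF-IDF features feeding a linear classifier, a minimal stand-in for
# the review-mining model described in the abstract.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)
print(model.predict(["the ramp to the east wing is poorly lit"]))
```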
Procedia PDF Downloads 116
327 Energy Efficiency Measures in Canada’s Iron and Steel Industry
Authors: A. Talaei, M. Ahiduzzaman, A. Kumar
Abstract:
In Canada, an increase in the production of iron and steel is anticipated to satisfy the increasing demand for iron and steel in the oil sands and automobile industries. It is predicted that GHG emissions from the iron and steel sector will show a continuous increase until 2030 and, with emissions of 20 million tonnes of carbon dioxide equivalent, the sector will account for more than 2% of total national GHG emissions, or 12% of industrial emissions (i.e., a 25% increase from 2010 levels). Therefore, there is an urgent need to improve the energy intensity and to implement energy efficiency measures in the industry to reduce its GHG footprint. This paper analyzes the current energy consumption in the Canadian iron and steel industry and identifies energy efficiency opportunities to improve the energy intensity and mitigate greenhouse gas emissions from this industry. In order to do this, a demand tree is developed representing the different iron and steel production routes and the technologies within each route. The main energy consumer within the industry is found to be fired heaters, accounting for 81% of overall energy consumption, followed by motor systems and steam generation, each accounting for 7% of total energy consumption. Eighteen different energy efficiency measures are identified which will help efficiency improvement in various subsectors of the industry. In the sintering process, heat recovery from coolers provides a high potential for energy saving and can be integrated into both new and existing plants. Coke dry quenching (CDQ) has the same advantages. Within the blast furnace iron-making process, injection of large amounts of coal into the furnace appears to be more effective than any other option in this category. In addition, because coal-powered electricity is being phased out in Ontario (where the majority of iron and steel plants are located), there will be surplus coal that could be used in iron and steel plants. In the steel-making processes, the recovery of Basic Oxygen Furnace (BOF) gas and scrap preheating provide considerable potential for energy savings in the BOF and Electric Arc Furnace (EAF) steel-making processes, respectively. However, despite the energy savings potential, BOF gas recovery is not applicable in existing plants using steam recovery processes. Given that the share of EAF in steel production is expected to increase, the application potential of that technology will be limited. On the other hand, the long lifetime of the technology and the expected capacity increase of EAF make scrap preheating a justified energy saving option. This paper presents the results of the assessment of the above-mentioned options in terms of their costs and GHG mitigation potential.
Keywords: iron and steel sector, energy efficiency improvement, blast furnace iron-making process, GHG mitigation
Procedia PDF Downloads 397
326 Enhancing of Antibacterial Activity of Essential Oil by Rotating Magnetic Field
Authors: Tomasz Borowski, Dawid Sołoducha, Agata Markowska-Szczupak, Aneta Wesołowska, Marian Kordas, Rafał Rakoczy
Abstract:
Essential oils (EOs) are fragrant volatile oils obtained from plants. They are used in cooking (for flavor and aroma), cleaning, beauty (e.g., rosemary essential oil is used to promote hair growth), health (e.g., thyme essential oil cures arthritis, normalizes blood pressure, reduces stress on the heart, and cures chest infection and cough), and in the food industry as preservatives and antioxidants. Rosemary and thyme essential oils are considered the most eminent herbs based on their history and medicinal properties. They possess a wide range of activity against different types of bacteria and fungi compared with other oils in both in vitro and in vivo studies. However, traditional uses of EOs are limited because rosemary and thyme oils can be toxic at high concentrations. In light of the accessible data, the following hypothesis was put forward: a low-frequency rotating magnetic field (RMF) increases the antimicrobial potential of EOs. The aim of this work was to investigate the antimicrobial activity of commercial Salvia rosmarinus L. and Thymus vulgaris L. essential oils from the Polish company Avicenna-Oil under an RMF at f = 25 Hz. A self-constructed reactor (MAP) was used for this study. The chemical composition of the oils was determined by gas chromatography coupled with mass spectrometry (GC-MS). The model bacterium Escherichia coli K12 (ATCC 25922) was used. Minimum inhibitory concentrations (MIC) against E. coli were determined for the essential oils. The tested oils were prepared at very low concentrations (from 1 to 3 drops of essential oil per 3 mL of working suspension). From the results of the disc diffusion assay and MIC tests, it can be concluded that thyme oil had the highest antibacterial activity against E. coli. Moreover, the study indicates that exposure to the RMF, as compared to unexposed controls, increases the efficacy of the antibacterial properties of the tested oils. Extended exposure to the RMF at f = 25 Hz beyond 160 minutes resulted in a significant increase in antibacterial potential against E. coli. Bacteria were killed within 40 minutes in thyme oil at the lower tested concentration (1 drop of essential oil per 3 mL of working suspension). A rapid decrease (>3 log) in bacterial numbers was observed with rosemary oil within 100 minutes (at a concentration of 3 drops of essential oil per 3 mL of working suspension). Thus, a method for improving the antimicrobial performance of essential oils at low concentrations was developed. However, it remains to be investigated how bacteria are killed by EOs treated with an electromagnetic field. A possible mechanism relying on alteration of the permeability of ionic channels in the bacterial cell walls, which mediate transport into the cells, is proposed. For further studies, it is proposed to examine other types of essential oils and other antibiotic-resistant bacteria (ARB), which are causing serious concern throughout the world.
Keywords: rotating magnetic field, rosemary, thyme, essential oils, Escherichia coli
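The ">3 log" reduction reported above follows the standard microbiological convention log10(N0/Nt) for initial and surviving colony counts. A minimal illustration (our own, with hypothetical CFU values, not the study's data):

```python
# Log-reduction arithmetic: log reduction = log10(N0 / Nt).
# The CFU values below are hypothetical, chosen only to illustrate a >3 log kill.

import math

N0 = 1.0e7   # initial CFU/mL (hypothetical)
Nt = 5.0e3   # surviving CFU/mL after treatment (hypothetical)

log_reduction = math.log10(N0 / Nt)
print(f"log reduction: {log_reduction:.2f}")  # >3 means more than 99.9% of cells killed
```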
Procedia PDF Downloads 156
325 Quality of Service of Transportation Networks: A Hybrid Measurement of Travel Time and Reliability
Authors: Chin-Chia Jane
Abstract:
In a transportation network, travel time refers to the transmission time from source node to destination node, whereas reliability refers to the probability of a successful connection from source node to destination node. With an increasing emphasis on quality of service (QoS), both performance indexes are significant in the design and analysis of transportation systems. In this work, we extend the well-known flow network model for transportation networks so that travel time and reliability are integrated into the QoS measurement simultaneously. In the extended model, in addition to the general arc capacities, each intermediate node has a time weight, which is the travel time per unit of commodity going through the node. Meanwhile, arcs and nodes are treated as binary random variables that switch between operation and failure with associated probabilities. For a pre-specified travel time limitation and demand requirement, the QoS of a transportation network is the probability that the source can successfully transport the demand requirement to the destination while the total transmission time is under the travel time limitation. This work is pioneering in that, whereas the existing literature evaluates travel time reliability via a single optimal path, the proposed QoS measure focuses on the performance of the whole network system. To compute the QoS of a transportation network, we first transform the extended network model into an equivalent min-cost max-flow network model. In the transformed network, each original arc is given a new travel time weight of 0. Each intermediate node is replaced by two nodes u and v and an arc directed from u to v. The newly generated nodes u and v are perfect nodes. The new direct arc has three weights: travel time, capacity, and operation probability. Then the universal set of state vectors is recursively decomposed into disjoint subsets of reliable, unreliable, and stochastic vectors until no stochastic vector is left. The decomposition is made possible by applying an existing efficient min-cost max-flow algorithm. Because the reliable subsets are disjoint, the QoS can be obtained directly by summing their probabilities. Computational experiments are conducted on a benchmark network with 11 nodes and 21 arcs. Five travel time limitations and five demand requirements are set to compute the QoS value. For comparison, we also test the exhaustive complete enumeration method. Computational results reveal that the proposed algorithm is much more efficient than complete enumeration. In this work, a transportation network is analyzed by an extended flow network model where each arc has a fixed capacity, each intermediate node has a time weight, and both arcs and nodes are independent binary random variables. The quality of service of the transportation network is an integration of customer demands, travel time, and the probability of connection. We present a decomposition algorithm to compute the QoS efficiently. Computational experiments conducted on a prototype network show that the proposed algorithm is superior to existing complete enumeration methods.
Keywords: quality of service, reliability, transportation network, travel time
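The node-splitting transformation described in this abstract can be sketched in a few lines. The following is a minimal illustration (not the author's code; the Arc type and function name are our assumptions) of replacing each intermediate node by a u-to-v arc carrying travel time, capacity, and operation probability, while original arcs receive a travel time of 0:

```python
# Sketch of the node-splitting step: each intermediate node with a time
# weight becomes an arc n_in -> n_out carrying (travel time, capacity,
# operation probability); original arcs keep capacity/probability and get
# travel time 0. Names here are hypothetical, not from the paper.

from dataclasses import dataclass

@dataclass
class Arc:
    tail: str
    head: str
    time: float      # travel time per unit of commodity
    capacity: float  # maximum flow the arc can carry
    prob: float      # probability the arc is operational

def split_intermediate_nodes(arcs, node_time, node_prob, node_cap, source, sink):
    """Build the transformed arc list for the min-cost max-flow model."""
    new_arcs = []
    for a in arcs:
        # Original arcs keep capacity and probability; their time weight becomes 0.
        tail = a.tail if a.tail in (source, sink) else a.tail + "_out"
        head = a.head if a.head in (source, sink) else a.head + "_in"
        new_arcs.append(Arc(tail, head, 0.0, a.capacity, a.prob))
    for n in node_time:
        # Intermediate node n becomes the perfect node pair n_in, n_out joined
        # by an arc carrying the node's time, capacity, and probability.
        new_arcs.append(Arc(n + "_in", n + "_out", node_time[n], node_cap[n], node_prob[n]))
    return new_arcs

# Tiny example: s -> a -> t, where intermediate node a has time weight 2.
arcs = [Arc("s", "a", 0.0, 5.0, 0.90), Arc("a", "t", 0.0, 5.0, 0.95)]
for arc in split_intermediate_nodes(arcs, {"a": 2.0}, {"a": 0.98}, {"a": 5.0}, "s", "t"):
    print(arc)
```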
Procedia PDF Downloads 221
324 The Rise of Blue Water Navy and its Implication for the Region
Authors: Riddhi Chopra
Abstract:
Alfred Thayer Mahan described the sea as a ‘great common,’ which would serve as a medium for communication, trade, and transport. The seas of Asia are witnessing an intriguing historical anomaly: the rise of an indigenous maritime power against the backdrop of US domination over the region. As China transforms from an inward-leaning economy to an outward-leaning one, it has become increasingly dependent on the global sea; as a result, we witness an evolution in its maritime strategy from near-seas defense to far-seas deployment. It is not only patrolling international waters but has also built a network of civilian and military infrastructure across the disputed oceanic expanse. The paper analyses the reorientation of China from a naval power to a blue water navy in an era of extensive globalisation. The actions of the Chinese have created a zone of high alert among its neighbors, such as Japan, the Philippines, Vietnam, and North Korea. These nations are trying to align themselves so as to counter China’s growing brinkmanship, but China has been pursuing its claims through a carefully calibrated strategy in the region, shunning any coercive measures taken by other forces. If China continues to expand its maritime boundaries, its neighbors, all smaller and weaker Asian nations, would be limited to a narrow band of sea along their coastlines. Hence it is essential for the US to intervene and support its allies to offset Chinese supremacy. The paper intends to provide an in-depth analysis of the disputes in the South China Sea and East China Sea, focusing on the Philippines and Japan, respectively. Moreover, the paper attempts to give an account of US involvement in the region and its alignment with its South Asian allies. The geographic dynamics are said to breed a national coalition dominating the strategic ambitions of China as well as the weak littoral states. China has conducted behind-the-scenes diplomacy, trying to persuade its neighbors to support its position on the territorial disputes. These efforts have been successful in creating fault lines in ASEAN, thereby undermining the regional integrity needed to reach a consensus on the issue. Chinese diplomatic efforts have also forced the US to revisit its foreign policy and engage with players like Cambodia and Laos. The current scenario in the SCS points to a strong Chinese hold attempting to outpace all others with no regard to international law. Chinese activities stand in contrast to US principles like freedom of navigation, thereby signaling the US to take bold action to prevent Chinese hegemony in the region. The paper ultimately seeks to explore the changing power dynamics among the various claimants, where a rival superpower like the US, pursuing the traditional policy of alliance formation, can play a decisive role in changing the status quo in the arena and consequently determining its future trajectory.
Keywords: China, East China Sea, South China Sea, USA
Procedia PDF Downloads 241
323 A Sustainability Benchmarking Framework Based on the Life Cycle Sustainability Assessment: The Case of the Italian Ceramic District
Authors: A. M. Ferrari, L. Volpi, M. Pini, C. Siligardi, F. E. Garcia Muina, D. Settembre Blundo
Abstract:
A long tradition of ceramic manufacturing since the 18th century, primarily due to the availability of raw materials and an efficient transport system, led to the birth and development of the Italian ceramic tile district, which nowadays represents a reference point for this sector even at the global level. This economic growth has been coupled with attention to environmental sustainability issues through various initiatives undertaken over the years at the level of the production sector, such as certification activities and sustainability policies. Starting from an evaluation of sustainability in all its aspects, the present work aims to develop a benchmark helping both producers and consumers. In the present study, through the Life Cycle Sustainability Assessment (LCSA) framework, sustainability has been assessed in all its dimensions: environmental with the Life Cycle Assessment (LCA), economic with the Life Cycle Costing (LCC), and social with the Social Life Cycle Assessment (S-LCA). The annual district production of stoneware tiles during the 2016 reference year has been taken as the reference flow for all three assessments, and the system boundaries cover the entire life cycle of the tiles, except for the LCC, for which only the production costs have been considered at the moment. In addition, a preliminary method for the evaluation of local and indoor emissions has been introduced in order to assess the impact of atmospheric emissions on both workers and people living in the area surrounding the factories. The Life Cycle Assessment results, obtained with a modified IMPACT 2002+ assessment method, highlight that the manufacturing process is responsible for the main impact, especially because of atmospheric emissions at a local scale, followed by the distribution to end users, the installation, and the ordinary maintenance of the tiles. With regard to the economic evaluation, both internal and external costs have been considered. For the LCC, primary data from the analysis of the financial statements of Italian ceramic companies show that the largest cost items are expenses for goods and services and the cost of human resources. The analysis of externalities with the EPS 2015dx method attributes the main damages to the distribution and installation of the tiles. The social dimension has been investigated with a preliminary approach using the Social Hotspots Database, and the results indicate that the most affected damage categories are health and safety, and labor rights and decent work. This study shows the potential of the LCSA framework applied to an industrial sector; in particular, it can be a useful tool for building a comprehensive benchmark for the sustainability of the ceramic industry, and it can help companies to actively integrate sustainability principles into their business models.
Keywords: benchmarking, Italian ceramic industry, life cycle sustainability assessment, porcelain stoneware tiles
Procedia PDF Downloads 128
322 Catalytic Ammonia Decomposition: Cobalt-Molybdenum Molar Ratio Effect on Hydrogen Production
Authors: Elvis Medina, Alejandro Karelovic, Romel Jiménez
Abstract:
Catalytic ammonia decomposition represents an attractive alternative due to its high H₂ content (17.8% w/w) and a product stream free of COₓ, among other advantages; however, challenges need to be addressed for its consolidation as an H₂ chemical storage technology, especially those focused on the synthesis of efficient bimetallic catalytic systems as an alternative to ruthenium, the most active catalyst reported, given its price and scarcity. In this sense, from the perspective of rational catalyst design, adjusting the main catalytic activity descriptor, a screening of supported catalysts with different compositional settings of cobalt and molybdenum is presented to evaluate their effect on the catalytic decomposition rate of ammonia. Subsequently, a kinetic study on the supported monometallic Co and Mo catalysts, as well as on the bimetallic CoMo catalyst with the highest activity, is shown. The synthesis of catalysts supported on γ-alumina was carried out using the Charge Enhanced Dry Impregnation (CEDI) method, all with 5% w/w metal loading. Seeking to maintain uniform dispersion, the catalysts were oxidized and activated (in-situ activation) using flows of anhydrous air and hydrogen, respectively, under the same conditions: 40 ml min⁻¹ and 5 °C min⁻¹ from room temperature to 600 °C. Catalytic tests were carried out in a fixed-bed reactor, confirming the absence of transport limitations as well as an approach to equilibrium (< 1 × 10⁻⁴). The reaction rate on all catalysts was measured between 400 and 500 °C at 53.09 kPa NH₃. The synergy theoretically reported (DFT) for bimetallic catalysts was confirmed experimentally. Specifically, the catalyst composed of 75 mol% cobalt proved to be the most active in the experiments, followed by the monometallic cobalt and molybdenum catalysts, in the order of activity referred to in the literature. A kinetic study was performed at 10.13 – 101.32 kPa NH₃ and at four equidistant temperatures between 437 and 475 °C; the data were fitted to an LHHW-type model, which considered the desorption of nitrogen atoms from the active phase surface as the rate-determining step (RDS). The regression analyses were carried out in an integral regime, using a minimization algorithm based on SLSQP. The physical meaning of the parameters adjusted in the kinetic model, such as the RDS rate constant (k₅) and the lumped adsorption constant of the quasi-equilibrated steps (α), was confirmed through their Arrhenius and van't Hoff-type behavior (R² > 0.98), respectively. From an energetic perspective, the activation energies for cobalt, cobalt-molybdenum, and molybdenum were 115.2, 106.8, and 177.5 kJ mol⁻¹, respectively. With this evidence, and considering the volcano shape described by the ammonia decomposition rate in relation to the metal composition ratio, the synergistic behavior of the system is clearly observed. However, since characterizations by XRD and TEM were inconclusive, the formation of intermetallic compounds should still be verified using HRTEM-EDS. From this point onwards, our objective is to incorporate parameters into the kinetic expressions that consider both compositional and structural elements and to explore how these can maximize or influence H₂ production.
Keywords: CEDI, hydrogen carrier, LHHW, RDS
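The Arrhenius behavior of the fitted parameters can be illustrated with a small SLSQP-based regression of the same general kind the abstract describes. This is a hedged sketch only: the rate constants below are invented for illustration, not the study's data, and the simple Arrhenius form stands in for the full LHHW model:

```python
# Fitting k = A * exp(-Ea / (R * T)) to rate constants at several temperatures
# with an SLSQP minimizer, as a stand-in for the integral LHHW regression.
# All data values are hypothetical.

import numpy as np
from scipy.optimize import minimize

R = 8.314  # J mol^-1 K^-1

T = np.array([710.0, 723.0, 735.0, 748.0])      # temperatures, K (hypothetical)
k_obs = np.array([0.012, 0.019, 0.028, 0.042])  # rate constants (hypothetical)

def residual_ss(params):
    lnA, Ea = params  # fit ln(A) for numerical stability
    ln_k_pred = lnA - Ea / (R * T)
    return np.sum((ln_k_pred - np.log(k_obs)) ** 2)

res = minimize(residual_ss, x0=[0.0, 100e3], method="SLSQP",
               bounds=[(-20, 20), (10e3, 400e3)])
lnA, Ea = res.x
print(f"A = {np.exp(lnA):.3e}, Ea = {Ea / 1000:.1f} kJ/mol")
```

A linear fit of ln k against 1/T would give the same activation energy here; the SLSQP route mirrors the constrained-minimization approach named in the abstract and extends naturally to multi-parameter models.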
Procedia PDF Downloads 56
321 Current Deflecting Wall: A Promising Structure for Minimising Siltation in Semi-Enclosed Docks
Authors: A. A. Purohit, A. Basu, K. A. Chavan, M. D. Kudale
Abstract:
Many estuarine harbours in the world are facing the problem of siltation in docks, channel entrances, etc. The harbours in India are no exception and require maintenance dredging to achieve navigable depths to keep them operable. Hence, dredging is inevitable and is a costly affair. The heavy siltation in docks in well-mixed, tide-dominated estuaries is mainly due to the settlement of cohesive sediments in suspension. As such, there is a need for a permanent solution that minimises siltation in such docks by altering the hydrodynamic flow field responsible for it through structures constructed outside the dock. One such dock on the west coast of India, wherein siltation of about 2.5-3 m/annum prevails, was considered to understand the hydrodynamic flow field responsible for siltation. The dock is situated in a region where a macro-type semi-diurnal tide (range of about 5 m) prevails. In order to change the flow field responsible for siltation inside the dock, the suitability of a Current Deflecting Wall (CDW) outside the dock was studied; the CDW is intended to minimise the sediment exchange rate and siltation in the dock. A well-calibrated physical tidal model was used to understand the flow field during various phases of the tide for the existing dock in Mumbai harbour. At the harbour entrance, where the tidal flux exchanges in and out of the dock, measurements of water level and current were made to estimate the sediment transport capacity. A distorted-scale model (1:400 horizontal, 1:80 vertical) of the Mumbai area was used to study the tidal flow phenomenon, with tides generated by an automatic tide generator. Hydraulic model studies carried out under the existing condition (without the CDW) reveal that, during the initial hours of the flood tide, the flow hugs the dock's breakwater, and the part of the flow that enters the dock forms a number of eddies of varying sizes inside the basin, while the remaining flow bypasses the dock entrance. During ebb, the flow direction reverses, and part of the flow re-enters the dock from outside and creates eddies at its entrance. These eddies do not allow the water/sediment mass to come out and result in the settlement of sediments in the dock, both because of the eddies themselves and because of longer sediment retention. In later hours, the current strength outside the dock entrance reduces and allows the water mass of the dock to come out. In order to improve the flow field inside the dockyard, two CDWs of lengths 300 m and 40 m were proposed outside the dock breakwater, in line with the pier wall at the dock entrance. Model studies reveal that, during flood, the major flow is deflected away from the entrance and no eddies are formed inside the dock, while during ebb the flow does not re-enter the dock, and the sediment flux immediately starts emptying it during the initial hours of ebb. This reduces not only the entry of sediment into the dock, by about 40%, but also the deposition, by about 42%, owing to reduced retention. Thus, the CDW is a promising solution to significantly reduce siltation in docks.
Keywords: current deflecting wall, eddies, hydraulic model, macro tide, siltation
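For a distorted Froude-scaled model such as the one described (1:400 horizontal, 1:80 vertical), the standard scaling relations give the velocity and time ratios directly. A short sketch (textbook relations, not taken from the paper) of what these scales imply for reproducing a semi-diurnal tide in the model:

```python
# Standard distorted-model Froude scaling: velocity scales with the square
# root of the vertical ratio, and time with horizontal ratio / velocity ratio.
# These relations are textbook hydraulics, assumed here, not quoted from the paper.

import math

LH = 400.0  # horizontal scale ratio (prototype / model)
LV = 80.0   # vertical scale ratio

velocity_ratio = math.sqrt(LV)     # ~8.94
time_ratio = LH / velocity_ratio   # ~44.7

tide_period_proto_h = 12.42        # semi-diurnal tide period, hours (typical M2 value)
tide_period_model_min = tide_period_proto_h * 60 / time_ratio

print(f"velocity ratio: {velocity_ratio:.2f}")
print(f"time ratio: {time_ratio:.1f}")
print(f"model tide period: {tide_period_model_min:.1f} min")  # ~16.7 min per tidal cycle
```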
Procedia PDF Downloads 298
320 Case Study of Mechanised Shea Butter Production in South-Western Nigeria Using the LCA Approach from Gate-to-Gate
Authors: Temitayo Abayomi Ewemoje, Oluwamayowa Oluwafemi Oluwaniyi
Abstract:
Agriculture and the food processing industry are among the largest industrial sectors using large amounts of energy. Thus, a large amount of gases from their fuel combustion technologies is being released into the environment. The choice of input energy supply not only directly affects the environment but also poses a threat to human health. The study was therefore designed to assess each unit production process in order to identify hotspots using the life cycle assessment (LCA) approach in South-western Nigeria. Data such as machine power ratings, operation durations, and inputs and outputs of shea butter materials for unit processes, obtained on site, were used to model the Life Cycle Impact Analysis in GaBi6 (Holistic Balancing) software. Four scenarios were drawn for the impact assessments: material sourcing from Kaiama (Scenarios 1 and 3) or Minna (Scenarios 2 and 4), with different heat supply sources (liquefied petroleum gas, LPG, in Scenarios 1 and 2, and a 10.8 kW diesel heater in Scenarios 3 and 4). Shea butter production was modelled in GaBi6 for a functional unit of 1 kg of shea butter produced, and the Tool for the Reduction and Assessment of Chemical and other Environmental Impacts (TRACI) midpoint assessment was used to analyse the life cycle inventories of the four scenarios. Eight impact categories were observed in all four scenarios, out of which three had the greatest environmental impacts in Scenarios 1-4, respectively: Global Warming Potential (GWP) (0.613, 0.751, 0.661, 0.799) kg CO2-Equiv., Acidification Potential (AP) (0.112, 0.132, 0.129, 0.149) kg H+ moles-Equiv., and Smog (0.044, 0.059, 0.049, 0.063) kg O3-Equiv. Impacts from transportation activities were also seen to contribute strongly to these impact categories, owing to the large volume of petrol combusted, which leads to releases of gases such as CO2, CH4, N2O, SO2, and NOx into the environment during the transportation of the purchased raw shea kernels. The ratio of the transportation distances from Minna and Kaiama to the production site was approximately 3.5. The shea butter unit processes with the greatest impacts in all categories, namely packaging, milling, and churning in ascending order of magnitude, were identified as hotspots that may require attention. From the 1 kg shea butter functional unit, it was inferred that locating the production site at the shortest travelling distance from raw material sources and combusting LPG for heating would reduce all the assessed environmental impact categories.
Keywords: GaBi6, life cycle assessment, shea butter production, TRACI
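The three dominant TRACI midpoint results quoted above can be tabulated and ranked per scenario. A small sketch (values transcribed from this abstract; the ranking logic is ours):

```python
# Tabulating the three dominant TRACI midpoint categories per 1 kg shea butter
# and ranking the four scenarios in each; impact values are from the abstract.

impacts = {
    # category: values for Scenarios 1-4
    "GWP (kg CO2-eq)":   (0.613, 0.751, 0.661, 0.799),
    "AP (kg H+ mol-eq)": (0.112, 0.132, 0.129, 0.149),
    "Smog (kg O3-eq)":   (0.044, 0.059, 0.049, 0.063),
}

for cat, vals in impacts.items():
    best = min(range(4), key=lambda i: vals[i]) + 1
    worst = max(range(4), key=lambda i: vals[i]) + 1
    print(f"{cat}: best = Scenario {best}, worst = Scenario {worst}")

# Scenario 1 (Kaiama sourcing + LPG heat) is lowest in every category,
# consistent with the abstract's conclusion about short transport and LPG.
```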
Procedia PDF Downloads 324
319 Microstructure and Mechanical Properties of Nb: Si: (a-C) Thin Films Prepared Using Balanced Magnetron Sputtering System
Authors: Sara Khamseh, Elahe Sharifi
Abstract:
321 alloy steel is an austenitic stainless steel with high oxidation resistance and is commonly used to fabricate heat exchangers and steam generators. However, its low hardness and weak tribological performance can cause dangerous failures during industrial operations. Well-designed protective coatings on 321 alloy steel surfaces, with high hardness and good tribological performance, can guarantee safe applications. Surface protection of metal substrates using protective coatings has shown high efficiency in overcoming these problems. Carbon-based multicomponent coatings, such as metal-added amorphous carbon coatings, are of crucial interest because of their remarkable mechanical and tribological performance. In the current study, (Nb: Si: a-C) multicomponent coatings (a-C: amorphous carbon) were deposited on 321 alloy steel using a balanced magnetron (BM) sputtering system at room temperature. The effects of the Si/Nb ratio on the microstructure and the mechanical and tribological characteristics of the (Nb: Si: a-C) composite coatings were investigated. The XRD and Raman analysis results showed that the coatings formed a composite structure of cubic diamond (C-D), NbC, and graphite-like carbon (GLC). The abundance of the NbC phase decreased while that of the C-D phase increased with an increasing Si/Nb ratio. The indentation hardness and plasticity index (H³/E² ratio) of the coatings increased with an increasing Si/Nb ratio. The better mechanical properties of the coatings with higher Si content can be attributed to the higher cubic diamond (C-D) content. Cubic diamond is a hard phase and can positively affect the mechanical performance of the coatings. It is well documented that in hard protective coatings, Si encourages amorphization. In addition, studies have shown that Nb and Mo can act as catalysts for the nucleation and growth of hard cubic (C-D) and hexagonal (H-D) diamond phases in a-C coatings. In the current study, it appears that a well-arranged nanocomposite coating is formed, containing hard C-D and NbC phases embedded in the amorphous carbon (GLC) phase. This unique structure decreased grain boundary density and defects and resulted in high hardness and a high H³/E² ratio. Moreover, the coefficient of friction (COF) and wear rate of the coatings decreased with an increasing Si/Nb ratio. This can be attributed to the good mechanical properties of the coatings and the formation of a graphite-like carbon (GLC) structure with a lamellar arrangement in the coatings. Complex, self-lubricating coatings were successfully formed on the surface of 321 alloy steel. The results of the present study clarify that Si addition to (Nb: a-C) coatings improves the mechanical and tribological performance of the coatings on 321 alloy steel.
Keywords: COF, mechanical properties, microstructure, (Nb: Si: a-C) coatings, wear rate
Procedia PDF Downloads 90
318 Expression of CASK Antibody in Non-Mucinous Colorectal Adenocarcinoma and Its Relation to Clinicopathological Prognostic Factors
Authors: Reham H. Soliman, Noha Noufal, Howayda AbdelAal
Abstract:
Calcium/calmodulin-dependent serine protein kinase (CASK) belongs to the membrane-associated guanylate kinase (MAGUK) family and has been proposed as a mediator of cell-cell adhesion and proliferation, which can contribute to tumorigenesis. CASK has been linked to good prognosis in some tumor subtypes, while being considered a poor prognostic marker in others. To our knowledge, no sufficient evidence on the role of CASK in colorectal cancer is available. The aim of this study is to evaluate the expression of CASK in non-mucinous colorectal adenocarcinoma and in adenomatous polyps as precursor lesions, and to assess its prognostic significance. The study included 42 cases of conventional colorectal adenocarcinoma and 15 biopsies of adenomatous polyps with variable degrees of dysplasia. They were reviewed for clinicopathological prognostic factors and stained with a mouse monoclonal CASK antibody using heat-induced antigen retrieval immunohistochemical techniques. The results showed that the CASK protein was significantly overexpressed (p < 0.05) in CRC compared with adenoma samples. The CASK protein was overexpressed in the majority of CRC samples, with 85.7% of cases showing moderate to strong expression, while 46.7% of adenomas were positive. CASK overexpression was significantly correlated with both TNM stage and grade of differentiation (p < 0.05). There was significantly higher expression in tumor samples of early stages (I/II) than of advanced stages (III/IV), and of low grade (59.5%) than of high grade (40.5%). Another interesting finding emerged in the adenoma group, where stronger staining intensity was observed in samples with high-grade dysplasia (33.3%) than in those of lower grades (13.3%). In conclusion, this study shows that there is significant overexpression of the CASK protein in CRC as well as in adenomas with high-grade dysplasia. This indicates that CASK is involved in the process of carcinogenesis and functions as a potential trigger of the adenoma-carcinoma cascade. CASK was significantly overexpressed in early-stage and low-grade tumors rather than in tumors with advanced stage and higher histological grades, suggesting that CASK is a good prognostic factor. We suggest that CASK affects CRC in two different ways derived from its physiology: as part of the MAGUK family, CASK can stimulate proliferation, while through its cell membrane localization and its role as a mediator of cell-cell adhesion it might contribute to tumor confinement and localization.
Keywords: CASK, colorectal cancer, overexpression, prognosis
Procedia PDF Downloads 279
317 The Efficacy of Preoperative Thermal Pulsation Treatment in Reducing Post Cataract Surgery Dry Eye Disease: A Systematic Review and Meta-analysis
Authors: Lugean K. Alomari, Rahaf K. Sharif, Basil K. Alomari, Hind M. Aljabri, Faisal F. Aljahdali, Amal A. Alomari, Saeed A. Alghamdi
Abstract:
Background: The thermal pulsation system is a therapy that uses heat and massage to treat dry eye disease, and several trials have been published comparing it with conventional treatment. The aim of this study is to conduct a systematic review and meta-analysis comparing the efficacy of thermal pulsation systems with conventional treatment in patients undergoing cataract surgery. Methods: The Medline, Embase, and Cochrane Central Register of Controlled Trials (CENTRAL) databases were searched for eligible trials. We included three randomized controlled trials (RCTs) that compared the thermal pulsation system with conventional treatment in patients undergoing cataract surgery. A table of characteristics was compiled, and the quality of the studies was assessed using the Cochrane risk-of-bias tool for randomized trials (RoB 2). Forest plots were generated using the random-effects inverse variance method. The χ² test and the Higgins I² statistic were used to assess heterogeneity. A total of 201 cataract surgery patients were included, with 105 undergoing preoperative pulsation therapy and 96 receiving conventional treatment. Demographic analysis revealed comparable distributions across groups. Results: All the studies in our analysis are of good quality with a low risk of bias. Of the 201 patients included in the analysis, 105 underwent pulsation therapy and 96 were in the control group. Tear break-up time (TBUT) analysis revealed no significant baseline differences, but pulsation therapy was better at 1 month (SMD 0.42 [95% CI 0.14 to 0.70], p = 0.004). This positive trend continued at three months (SMD 0.52 [95% CI 0.20 to 0.84], p = 0.002). Corneal fluorescein staining scores and Meibomian gland-yielding secretion scores showed no significant differences at baseline. However, at one month, pulsation therapy significantly improved Meibomian gland function (SMD -0.86 [95% CI -1.20 to -0.53], p < 0.00001), indicating a reduced risk of dry eye syndrome. Conclusion: Preoperative pulsation therapy appears to enhance post-cataract surgery outcomes, particularly in terms of tear film stability and Meibomian gland secretory function. The sustained positive effects observed at one and three months post-surgery suggest the potential for long-term benefits.
Keywords: lipiflow, cataract, thermal pulsation, dry eye
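The random-effects inverse variance pooling named in the Methods can be illustrated compactly. The sketch below (our own, with hypothetical study-level SMDs and standard errors, not the review's data) uses the DerSimonian-Laird estimate of between-study variance and reports I²:

```python
# Random-effects inverse-variance pooling of standardized mean differences:
# between-study variance tau^2 via DerSimonian-Laird, heterogeneity via I^2.
# The SMDs and standard errors below are hypothetical.

import numpy as np

smd = np.array([0.35, 0.50, 0.44])  # per-study SMDs (hypothetical)
se = np.array([0.20, 0.18, 0.25])   # per-study standard errors (hypothetical)

w_fixed = 1 / se**2
pooled_fixed = np.sum(w_fixed * smd) / np.sum(w_fixed)

# Cochran's Q, degrees of freedom, and the DerSimonian-Laird tau^2
Q = np.sum(w_fixed * (smd - pooled_fixed) ** 2)
df = len(smd) - 1
C = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
tau2 = max(0.0, (Q - df) / C)
I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0

# Random-effects weights, pooled SMD, and 95% CI
w_re = 1 / (se**2 + tau2)
pooled_re = np.sum(w_re * smd) / np.sum(w_re)
se_re = np.sqrt(1 / np.sum(w_re))
lo, hi = pooled_re - 1.96 * se_re, pooled_re + 1.96 * se_re

print(f"pooled SMD = {pooled_re:.2f}, 95% CI = ({lo:.2f}, {hi:.2f}), I^2 = {I2:.0f}%")
```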
Procedia PDF Downloads 20
316 Molecular Dynamics Simulation Study of the Influence of Potassium Salts on the Adsorption and Surface Hydration Inhibition Performance of Hexane-1,6-Diamine Clay Mineral Inhibitor onto Sodium Montmorillonite
Authors: Justine Kiiza, Xu Jiafang
Abstract:
The world’s demand for energy is increasing rapidly due to population growth and a reduction in shallow conventional oil and gas reservoirs, leading to a resort to deeper and mostly unconventional reserves like shale oil and gas. Most shale formations contain a large amount of expansive sodium montmorillonite (Na-Mnt) with high water adsorption and hydration; when drilling fluid filtrate enters a formation with high Mnt content, the wellbore wall can become unstable due to hydration and swelling, resulting in shrinkage, sticking, balling, lost time, etc., and, in extreme cases, well collapse, causing complex downhole accidents and high well costs. Recently, polyamines like 1,6-hexanediamine (HEDA) have been used as typical drilling fluid shale inhibitors to minimize and/or curb clay mineral swelling and maintain wellbore stability. However, their application is limited to shallow drilling due to their sensitivity to elevated temperature and pressure. Inorganic potassium salts, i.e., KCl, have long been applied to restrict hydration expansion of shale formations in deep wells, but their use is limited due to toxicity. Understanding the adsorption behaviour of HEDA on Na-Mnt surfaces in the presence of organo-salts, i.e., organic K-salts such as HCO₂K (the main component of organo-salt drilling fluids), is of great significance in explaining the inhibitory performance of polyamine inhibitors. Molecular dynamics (MD) simulations were applied to investigate the influence of HCO₂K and KCl on the adsorption mechanism of HEDA on the Na-Mnt surface. Simulation results showed that HEDA adsorbs mainly through its terminal amine groups, with the hydrophobic alkyl chain lying flat. Its interaction with the clay surface decreased the number of H₂O-clay H-bonds and neutralized the negative charge of the Mnt surface, thus weakening the surface hydration ability of Na-Mnt. The introduction of HCO₂K greatly improved the inhibition ability: interlayer ions coordinating with H₂O were replaced by K⁺, and H₂O-HCOO⁻ coordination reduced H₂O-Mnt interactions, further decreasing the mobility and transport capability of the H₂O molecules. KCl, by contrast, showed little inhibition ability and also caused more hydration with time. HCO₂K can therefore be used as an alternative to toxic KCl for offshore drilling, with a maximum concentration noted in this study of 1.65 wt%. This study provides a theoretical elucidation of the inhibition mechanism and adsorption characteristics of the HEDA inhibitor on Na-Mnt surfaces in the presence of K⁺ salts and may provide more insight into the evaluation, selection, and molecular design of new high-performance clay-swelling-inhibiting WBDF systems used in complex offshore drilling well sections.
Keywords: shale, hydration, inhibition, polyamines, organo-salts, simulation
Procedia PDF Downloads 48