Search results for: large carnivores
6076 Enhanced Disk-Based Databases towards Improved Hybrid in-Memory Systems
Authors: Samuel Kaspi, Sitalakshmi Venkatraman
Abstract:
In-memory database systems are becoming popular due to the availability and affordability of sufficiently large RAM and processors in modern high-end servers with the capacity to manage large in-memory database transactions. While fast and reliable in-memory systems are still being developed to overcome cache misses, CPU/IO bottlenecks and distributed transaction costs, disk-based data stores still serve as the primary persistence layer. In addition, with the recent growth in multi-tenancy cloud applications and associated security concerns, many organisations consider the trade-offs and continue to require fast and reliable transaction processing of disk-based database systems as an available choice. For these organisations, the only way of increasing throughput is by improving the performance of disk-based concurrency control. This warrants a hybrid database system with the ability to selectively apply an enhanced disk-based data management within the context of in-memory systems that would help improve overall throughput. The general view is that in-memory systems substantially outperform disk-based systems. We question this assumption and examine how a modified variation of access invariance that we call enhanced memory access (EMA) can be used to allow very high levels of concurrency in the pre-fetching of data in disk-based systems. We demonstrate how this prefetching in disk-based systems can yield close to in-memory performance, which paves the way for improved hybrid database systems. This paper proposes a novel EMA technique and presents a comparative study between disk-based EMA systems and in-memory systems running on hardware configurations of equivalent power in terms of the number of processors and their speeds. The results of the experiments conducted clearly substantiate that when used in conjunction with all concurrency control mechanisms, EMA can increase the throughput of disk-based systems to levels quite close to those achieved by in-memory systems.
The promising results of this work show that enhanced disk-based systems help improve hybrid data management within the broader context of in-memory systems.
Keywords: in-memory database, disk-based system, hybrid database, concurrency control
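The prefetching idea underlying this abstract can be sketched with a toy model. This is our illustration of access-invariance-style prefetching, not the authors' EMA implementation; the class and page layout below are invented for the example:

```python
# Toy model of prefetch-based execution (our illustration, not the
# authors' EMA algorithm): pages named in a transaction's predicted
# read set are loaded into memory before execution, so the transaction
# itself never waits on disk.

class Database:
    def __init__(self, pages):
        self.disk = pages   # page_id -> data (simulated disk)
        self.cache = {}     # in-memory buffer pool
        self.disk_reads = 0

    def prefetch(self, page_ids):
        """Warm the cache with the predicted read set."""
        for pid in page_ids:
            if pid not in self.cache:
                self.cache[pid] = self.disk[pid]  # one "slow" disk read
                self.disk_reads += 1

    def read(self, pid):
        """Memory-speed when the page was prefetched; slow path otherwise."""
        if pid not in self.cache:
            self.disk_reads += 1
            self.cache[pid] = self.disk[pid]
        return self.cache[pid]

db = Database({i: f"row-{i}" for i in range(100)})
txn_read_set = [3, 14, 15]
db.prefetch(txn_read_set)   # done before (or overlapped with) execution
values = [db.read(pid) for pid in txn_read_set]  # all cache hits
```

If the read set can be predicted accurately, the transaction's own reads never touch the slow path, which is the sense in which a disk-based system can approach in-memory performance.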
Procedia PDF Downloads 417
6075 Tailoring the Parameters of the Quantum MDS Codes Constructed from Constacyclic Codes
Authors: Jaskarn Singh Bhullar, Divya Taneja, Manish Gupta, Rajesh Kumar Narula
Abstract:
The existence conditions of dual containing constacyclic codes have opened a new path for finding quantum maximum distance separable (MDS) codes. Using these conditions, the parameters of quantum MDS codes of length n=(q²+1)/2 were improved. A class of quantum MDS codes of length n=(q²+q+1)/h, where h>1 is an odd prime, has also been constructed; these codes have large minimum distance and are new in the sense that they are not available in the literature.
Keywords: Hermitian construction, constacyclic codes, cyclotomic cosets, quantum MDS codes, Singleton bound
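The two code-length families quoted above can be enumerated for small q. This is a hedged illustration of the length formulas only; it does not reproduce the constacyclic construction itself, and the helper names are our own:

```python
# Illustration (not from the paper): enumerating the code lengths
# discussed in the abstract for small prime powers q.

def lengths_q2_plus_1_over_2(q):
    """Length n = (q^2 + 1)/2, defined when q is odd."""
    assert q % 2 == 1
    return (q * q + 1) // 2

def odd_prime_divisors(n):
    """Odd prime factors of n (trial division)."""
    factors, d = set(), 3
    while d * d <= n:
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 2
    if n > 2:
        factors.add(n)
    return sorted(factors)

def lengths_q2_plus_q_plus_1_over_h(q):
    """Lengths n = (q^2 + q + 1)/h for odd primes h > 1 dividing q^2 + q + 1."""
    m = q * q + q + 1
    return [m // h for h in odd_prime_divisors(m)]

print(lengths_q2_plus_1_over_2(5))         # (25 + 1)/2 = 13
print(lengths_q2_plus_q_plus_1_over_h(4))  # 21 = 3 * 7, so n in [7, 3]
```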
Procedia PDF Downloads 389
6074 E4D-MP: Time-Lapse Multiphysics Simulation and Joint Inversion Toolset for Large-Scale Subsurface Imaging
Authors: Zhuanfang Fred Zhang, Tim C. Johnson, Yilin Fang, Chris E. Strickland
Abstract:
A variety of geophysical techniques are available to image the opaque subsurface with little or no contact with the soil. It is common to conduct time-lapse surveys of different types for a given site for improved results of subsurface imaging. Regardless of the chosen survey methods, it is often a challenge to process the massive amount of survey data. The currently available software applications are generally based on one-dimensional assumptions and designed for desktop personal computers. Hence, they are usually incapable of imaging three-dimensional (3D) processes/variables in the subsurface at reasonable spatial scales, and the maximum amount of data that can be inverted simultaneously is often very small due to the capability limitations of personal computers. High-performance, integrated software that enables real-time integration of multi-process geophysical methods is therefore needed. E4D-MP enables the integration and inversion of time-lapse, large-scale data surveys from geophysical methods. Using supercomputing capability and parallel computation algorithms, E4D-MP is capable of processing data across vast spatiotemporal scales and in near real time. The main code and the modules of E4D-MP for inverting individual or combined data sets of time-lapse 3D electrical resistivity, spectral induced polarization, and gravity surveys have been developed and demonstrated for subsurface imaging. E4D-MP provides the capability to image the processes (e.g., liquid or gas flow, solute transport, cavity development) and subsurface properties (e.g., rock/soil density, conductivity) critical for successful control of environmental engineering efforts such as environmental remediation, carbon sequestration, geothermal exploration, and mine land reclamation, among others.
Keywords: gravity survey, high-performance computing, sub-surface monitoring, electrical resistivity tomography
Procedia PDF Downloads 157
6073 The Impact of Heat Waves on Human Health: State of Art in Italy
Authors: Vito Telesca, Giuseppina A. Giorgio
Abstract:
The earth system is subject to a wide range of human activities that have changed the ecosystem more rapidly and extensively over the last five decades. These global changes have a large impact on human health. The relationship between extreme weather events and mortality is widely documented in different studies. In particular, a number of studies have investigated the relationship between climatological variations and the cardiovascular and respiratory systems. Researchers have become interested in evaluating the effect of environmental variations on the occurrence of different diseases (such as infarction, ischemic heart disease, asthma, respiratory problems, etc.) and on mortality. Among changes in weather conditions, heat waves have been used to investigate the association between weather conditions and cardiovascular and cerebrovascular events, using thermal indices that combine air temperature, relative humidity, and wind speed. The effects of heat waves on human health are mainly found in urban areas and are aggravated by the presence of atmospheric pollution. The consequences of these changes for human health are of growing concern. In particular, meteorological conditions are an important environmental aspect because cardiovascular diseases are more common among the elderly, who are more sensitive to weather changes. In addition, heat waves, or extreme heat events, are predicted to increase in frequency, intensity, and duration with climate change. In this context, the connections between public health and climate change, increasingly recognized by medical research, are very important because they can help inform the public at large. Policy experts claim that a growing awareness of the relationship between public health and climate change could be key in breaking through political logjams impeding action on mitigation and adaptation.
The aims of this study are to investigate the importance of interactions between weather variables and their effects on human health, focusing on Italy, and to highlight the need to define strategies and practical actions for monitoring, adaptation, and mitigation of the phenomenon.
Keywords: climate change, illness, Italy, temperature, weather
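As one concrete example of a thermal index combining air temperature, relative humidity, and wind speed, the sketch below implements Steadman's apparent temperature in the non-radiation form published by the Australian Bureau of Meteorology; the Italian studies may use different indices, so treat this as illustrative only:

```python
import math

def apparent_temperature(ta_c, rh_pct, wind_ms):
    """Steadman apparent temperature (non-radiation version):
    AT = Ta + 0.33*e - 0.70*ws - 4.00, where e is the water vapour
    pressure (hPa) derived from temperature and relative humidity."""
    e = (rh_pct / 100.0) * 6.105 * math.exp(17.27 * ta_c / (237.7 + ta_c))
    return ta_c + 0.33 * e - 0.70 * wind_ms - 4.00

# A humid, still 35 degC day "feels" several degrees hotter than measured
at_humid = apparent_temperature(35.0, 60.0, 1.0)
```

Such an index captures why a humid, windless heat-wave day is more dangerous than the air temperature alone suggests, which is the basis for the epidemiological associations discussed above.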
Procedia PDF Downloads 247
6072 Development of Digital Twin Concept to Detect Abnormal Changes in Structural Behaviour
Authors: Shady Adib, Vladimir Vinogradov, Peter Gosling
Abstract:
Digital Twin (DT) technology is a new technology that appeared in the early 21st century. The DT is defined as the digital representation of living and non-living physical assets. By connecting the physical and virtual assets, data are transmitted smoothly, allowing the virtual asset to fully represent the physical asset. Although many studies have been conducted on the DT concept, there is still limited information about the ability of DT models to monitor and detect unexpected changes in structural behaviour in real time. This is due to the large computational effort required for the analysis and the excessively large amount of data transferred from sensors. This paper aims to develop the DT concept to detect abnormal changes in structural behaviour in real time using advanced modelling techniques, deep learning algorithms, and data acquisition systems, taking model uncertainties into consideration. Finite element (FE) models were first developed offline to be used with a reduced basis (RB) model order reduction technique for the construction of a low-dimensional space to speed up the analysis during the online stage. The RB model was validated against experimental test results for the establishment of a DT model of a two-dimensional truss. The established DT model and deep learning algorithms were used to identify the location of damage once it appeared during the online stage. Finally, the RB model was used again to identify the damage severity. It was found that using the RB model, constructed offline, speeds up the FE analysis during the online stage. The constructed RB model showed higher accuracy for predicting the damage severity, while deep learning algorithms were found to be useful for estimating the location of damage of small severity.
Keywords: data acquisition system, deep learning, digital twin, model uncertainties, reduced basis, reduced order model
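A generic way to build the low-dimensional space that a reduced basis (RB) method uses is proper orthogonal decomposition (POD) of full-model snapshots. The abstract does not specify the authors' RB construction, so the sketch below (with synthetic snapshot data standing in for offline FE solutions) is only an assumption-laden illustration of the idea:

```python
# Snapshot-based model order reduction via proper orthogonal
# decomposition (POD). Names and sizes are assumptions; synthetic
# snapshots stand in for offline finite element solutions.
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """Return a basis capturing the requested fraction of snapshot
    energy. snapshots: (n_dof, n_snapshots) matrix of full-model states."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(cum, energy)) + 1  # smallest r reaching 'energy'
    return U[:, :r]

rng = np.random.default_rng(1)
modes = rng.normal(size=(500, 3))   # pretend the system has 3 dominant modes
coeffs = rng.normal(size=(3, 40))   # 40 offline snapshots
snaps = modes @ coeffs + 1e-6 * rng.normal(size=(500, 40))  # plus tiny noise
V = pod_basis(snaps)  # low-dimensional space used during the online stage
```

Online-stage analyses are then performed in the span of `V` (a few columns) instead of the full degree-of-freedom space, which is what makes near-real-time evaluation feasible.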
Procedia PDF Downloads 99
6071 Relationships of Plasma Lipids, Lipoproteins and Cardiovascular Outcomes with Climatic Variations: A Large 8-Year Period Brazilian Study
Authors: Vanessa H. S. Zago, Ana Maria H. de Avila, Paula P. Costa, Welington Corozolla, Liriam S. Teixeira, Eliana C. de Faria
Abstract:
Objectives: The outcome of cardiovascular disease is affected by environment and climate. This study evaluated the possible relationships between climatic and environmental changes and the occurrence of biological rhythms in serum lipids and lipoproteins in a large population sample in the city of Campinas, State of Sao Paulo, Brazil. In addition, it determined the temporal variations of death due to atherosclerotic events in Campinas during the time window examined. Methods: A large 8-year retrospective study was carried out to evaluate the lipid profiles of individuals attended at the University of Campinas (Unicamp). The study population comprised 27,543 individuals of both sexes and of all ages. Normolipidemic and dyslipidemic individuals, classified according to Brazilian guidelines on dyslipidemias, participated in the study. For the same period, the temperature, relative humidity and daily brightness records were obtained from the Centro de Pesquisas Meteorologicas e Climaticas Aplicadas a Agricultura/Unicamp, and frequencies of death due to atherosclerotic events in Campinas were acquired from the Brazilian official database DATASUS, according to the International Classification of Diseases. Statistical analyses were performed using both Cosinor and ARIMA temporal analysis methods. For cross-correlation analysis between climatic and lipid parameters, cross-correlation functions were used. Results: Preliminary results indicated that rhythmicity was significant for LDL-C and HDL-C in both normolipidemic and dyslipidemic subjects (n = 11,892 and 15,651, respectively), with both measures increasing in the winter and decreasing in the summer. On the other hand, in dyslipidemic subjects triglycerides increased in summer and decreased in winter, in contrast to normolipidemic ones, in which triglycerides did not show rhythmicity.
The number of deaths due to atherosclerotic events showed significant rhythmicity, with maximum and minimum frequencies in winter and summer, respectively. Cross-correlation analyses showed that low humidity and temperature, higher thermal amplitude and dark cycles are associated with increased levels of LDL-C and HDL-C during winter. In contrast, TG showed moderate cross-correlations with temperature and minimum humidity in an inverse way: maximum temperature and humidity increased TG during the summer. Conclusions: This study showed a coincident rhythmicity between low temperatures, high concentrations of LDL-C and HDL-C, and the number of deaths due to atherosclerotic cardiovascular events in individuals from the city of Campinas. The opposite behavior of cholesterol and TG suggests different physiological mechanisms in their metabolic modulation by changes in climate parameters. Thus, new analyses are underway to better elucidate these mechanisms, as well as variations in lipid concentrations in relation to climatic variations and their associations with atherosclerotic disease and death outcomes in Campinas.
Keywords: atherosclerosis, climatic variations, lipids and lipoproteins, associations
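A single-component cosinor model of the kind mentioned in the Methods fits y(t) = M + A·cos(2πt/T + φ) by least squares. The sketch below assumes an evenly sampled monthly series over whole 12-month periods (so the regressors are orthogonal and the fit reduces to simple projections); the data are synthetic, not the study's:

```python
import math
import random

def cosinor_fit(t, y, period=12.0):
    """Single-component cosinor on an evenly sampled series covering
    whole periods: the cosine/sine regressors are orthogonal to the
    intercept, so least squares reduces to projections."""
    n = len(y)
    w = 2.0 * math.pi / period
    mesor = sum(y) / n                                   # rhythm-adjusted mean
    beta = 2.0 / n * sum(yi * math.cos(w * ti) for ti, yi in zip(t, y))
    gamma = 2.0 / n * sum(yi * math.sin(w * ti) for ti, yi in zip(t, y))
    amplitude = math.hypot(beta, gamma)
    acrophase = math.atan2(-gamma, beta)                 # peak timing (radians)
    return mesor, amplitude, acrophase

# Synthetic monthly LDL-C-like series peaking mid-winter (illustrative only)
rng = random.Random(0)
t = list(range(96))  # 8 years of monthly values
y = [120 + 8 * math.cos(2 * math.pi * ti / 12) + rng.gauss(0, 1) for ti in t]
M, A, phi = cosinor_fit(t, y)
```

The MESOR, amplitude, and acrophase recovered this way are the quantities a Cosinor analysis reports when testing a series for significant seasonal rhythmicity.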
Procedia PDF Downloads 117
6070 Mastering Digital Transformation with the Strategy Tandem Innovation Inside-Out/Outside-In: An Approach to Drive New Business Models, Services and Products in the Digital Age
Authors: S. N. Susenburger, D. Boecker
Abstract:
In the age of Volatility, Uncertainty, Complexity, and Ambiguity (VUCA), where digital transformation is challenging long-standing traditional hardware and manufacturing companies, innovation needs a different methodology, strategy, mindset, and culture. What used to be a mindset of scaling by quantity is now shifting to orchestrating ecosystems, platform business models and service bundles. While large corporations are trying to mimic the nimbleness and versatile mindset of startups at the core of their digital strategies, they’re facing one of the largest organizational and cultural changes in their history. This paper elaborates on how a manufacturing giant transformed its Corporate Information Technology (IT) to enable digital and Internet of Things (IoT) business while establishing the mindset and the approaches of the Innovation Inside-Out/Outside-In Strategy. It gives insights into the core elements of an innovation culture and the tactics and methodologies leveraged to support the cultural shift and transformation into an IoT company, and outlines how the persona 'Connected Engineer' thrives in the digital innovation environment. Further, it explores how tapping domain-focused ecosystems in vibrant, innovative cities can be used as part of the strategy to facilitate partner co-innovation. Findings from several use cases, observations and surveys led to conclusions about the strategy tandem of Innovation Inside-Out/Outside-In. The findings indicate that the phases and maturity levels at which the Innovation Inside-Out/Outside-In Strategy is activated are crucial: cultural aspects of the business and the regional ecosystem need to be considered, as well as cultural readiness of management and active contributors.
The 'not invented here' syndrome is a barrier in large corporations that needs to be addressed and managed in order to successfully drive partnerships, embrace co-innovation, and shift the mindset away from physical products toward new business models, services, and IoT platforms. This paper elaborates on various methodologies and approaches tested in different countries and cultures, including the U.S., Brazil, Mexico, and Germany.
Keywords: innovation management, innovation culture, innovation methodologies, digital transformation
Procedia PDF Downloads 146
6069 Quantifying Uncertainties in an Archetype-Based Building Stock Energy Model by Use of Individual Building Models
Authors: Morten Brøgger, Kim Wittchen
Abstract:
Focus on reducing energy consumption in existing buildings at large scale, e.g. in cities or countries, has been increasing in recent years. In order to reduce energy consumption in existing buildings, political incentive schemes are put in place and large-scale investments are made by utility companies. Prioritising these investments requires a comprehensive overview of the energy consumption in the existing building stock, as well as of potential energy savings. However, a building stock comprises thousands of buildings with different characteristics, making it difficult to model energy consumption accurately. Moreover, the complexity of the building stock makes it difficult to convey model results to policymakers and other stakeholders. In order to manage this complexity, building archetypes are often employed in building stock energy models (BSEMs). Building archetypes are formed by segmenting the building stock according to specific characteristics. Segmenting the building stock according to building type and building age is common, among other things because this information is often easily available. This segmentation makes it easy to convey results to non-experts. However, using a single archetypical building to represent all buildings in a segment of the building stock is associated with loss of detail: thermal characteristics are aggregated, while other characteristics that could affect the energy efficiency of a building are disregarded. Thus, using a simplified representation of the building stock could come at the expense of the accuracy of the model. The present study evaluates the accuracy of a conventional archetype-based BSEM that segments the building stock according to building type and age. The accuracy is evaluated in terms of the archetypes’ ability to accurately emulate the average energy demands of the buildings they are meant to represent.
This is done for the buildings’ energy demands as a whole as well as for relevant sub-demands, both evaluated in relation to the type and age of the building. This should provide researchers who use archetypes in BSEMs with an indication of the expected accuracy of the conventional archetype model, as well as of the accuracy lost in specific parts of the calculation due to use of the archetype method.
Keywords: building stock energy modelling, energy-savings, archetype
Procedia PDF Downloads 154
6068 Window Opening Behavior in High-Density Housing Development in Subtropical Climate
Authors: Minjung Maing, Sibei Liu
Abstract:
This research discusses the results of a study of window opening behavior in large housing developments in the high-density megacity of Hong Kong. The methods used for the study involved field observations using photo documentation of the four cardinal elevations (north, south, east, and west) of two large housing developments in a very dense urban area of approximately 46,000 persons per square kilometre within the city of Hong Kong. The targeted housing developments (A and B) are large lower-income public housing estates, each with a population of about 13,000. However, the mean income level in development A is about 40% higher than in development B, and home ownership is 60% in development A and 0% in development B. The surrounding amenities and the layout of the developments were also mapped to understand the activities available to the residents. The photo documentation of the elevations was taken from November 2016 to February 2018 to gather a full spectrum of seasons, in both the morning and the afternoon. From the photographs, window opening behavior was measured by counting the number of open windows as a percentage of all the windows on that façade. For each survey date, temperature, humidity and wind speed were recorded from weather stations located in the same region. To further understand the behavior, simulation studies of the microclimate conditions of the housing developments were conducted using the software ENVI-met, a simulation tool widely used by researchers studying urban climate. Four major conclusions can be drawn from the data analysis and simulation results. Firstly, there is little change in the amount of window opening across the seasons within a temperature range of 10 to 35 degrees Celsius. This means that people who tend to open their windows have consistent window opening behavior throughout the year and a high tolerance of indoor thermal conditions.
Secondly, for all four elevations, the lower-income development B opened more windows (almost two times more units) than the higher-income development A, indicating that window opening behavior correlates strongly with income level. Thirdly, there is a lack of correlation between outdoor horizontal wind speed and window opening behavior, as changes in wind speed do not seem to affect the action of opening windows in most conditions. Similarly, vertical wind speed also cannot explain the window opening behavior of occupants. Fourthly, there is a slightly higher average of window opening on the south elevation than on the north elevation, which may be because the south elevation is well shaded from high-angle sun during the summer and admits heat into units from lower-angle sun during the winter. These findings are important for providing insight into how to better design urban environments and indoor thermal environments for a liveable high-density city.
Keywords: high-density housing, subtropical climate, urban behavior, window opening
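The two quantities behind these findings, the per-façade window opening percentage and its correlation with a weather variable, can be computed as follows; the survey numbers are hypothetical, not the study's data:

```python
# Illustrative computation (hypothetical numbers, not the study's data):
# window opening percentage per survey date and its Pearson correlation
# with outdoor temperature.

def opening_pct(open_count, total_windows):
    """Open windows as a percentage of all windows on a facade."""
    return 100.0 * open_count / total_windows

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical survey dates: (temperature degC, windows open, windows total)
surveys = [(12, 150, 900), (18, 160, 900), (25, 155, 900), (33, 162, 900)]
temps = [s[0] for s in surveys]
pcts = [opening_pct(s[1], s[2]) for s in surveys]
r = pearson_r(temps, pcts)
```

A near-zero `r` for wind speed against `pcts` is the kind of result behind the abstract's third conclusion.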
Procedia PDF Downloads 125
6067 Evaluation of the Gasification Process for the Generation of Syngas Using Solid Waste at the Autónoma de Colombia University
Authors: Yeraldin Galindo, Soraida Mora
Abstract:
Solid urban waste represents one of the largest sources of global environmental pollution due to the large quantities produced every day; its elimination is thus a major problem for environmental authorities, who must look for alternatives that reduce the volume of waste while offering the possibility of energy recovery. At the Autónoma de Colombia University, approximately 423.27 kg/d of solid waste is generated, mainly paper, cardboard, and plastic. A large proportion of this waste ends up in the city's sanitary landfill, wasting its energy potential; this, added to the emissions generated by its collection and transport, increases atmospheric pollution. One of the alternative processes used in recent years to generate electrical energy from solid waste such as paper, cardboard, plastic and, mainly, organic waste or biomass, replacing fossil fuels, is gasification. This is a thermal conversion process in which a combustible gas is generated through a series of chemical reactions driven by the addition of heat and reaction agents. This project was developed with the intention of putting the waste (paper, cardboard, and plastic) produced inside the university to energetic use, generating a synthesis gas with a gasifier prototype. The gas produced was evaluated to determine its potential for electricity generation or as a raw material for the chemical industry. In this process, air was used as the gasifying agent. The characterization of the synthesis gas was carried out by gas chromatography at the Chemical Engineering Laboratory of the National University of Colombia. Taking into account the results obtained, it was concluded that the gas generated is of acceptable quality in terms of the concentration of its components, but it is a gas of low calorific value.
For this reason, the syngas generated in this project is not viable for the production of electrical energy, but rather for the production of methanol via Fischer-Tropsch synthesis.
Keywords: alternative energies, gasification, gasifying agent, solid urban waste, syngas
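The 'low calorific value' conclusion can be illustrated by estimating the lower heating value (LHV) of a syngas from its molar composition. The component LHVs are common textbook values and the composition shown is hypothetical (typical of air-blown gasification, where nitrogen dilution keeps the heating value low), not the measured one:

```python
# Hedged sketch: estimating the lower heating value (LHV) of a syngas
# from its molar composition. Component LHVs (MJ/Nm^3) are textbook
# values; the composition below is hypothetical, not the measured one.

LHV_MJ_PER_NM3 = {"H2": 10.8, "CO": 12.6, "CH4": 35.8}  # N2, CO2 are inert

def syngas_lhv(mole_fractions):
    """Volume-weighted LHV of the gas mixture in MJ/Nm^3."""
    return sum(LHV_MJ_PER_NM3.get(gas, 0.0) * x
               for gas, x in mole_fractions.items())

# Typical air-blown gasification product, heavily diluted with N2
composition = {"H2": 0.12, "CO": 0.18, "CH4": 0.02, "CO2": 0.14, "N2": 0.54}
lhv = syngas_lhv(composition)  # roughly 4.3 MJ/Nm^3, a low-calorific gas
```

For comparison, natural gas is on the order of 35 MJ/Nm³, which is why an air-blown syngas like this one is marginal for power generation.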
Procedia PDF Downloads 258
6066 Banking Union: A New Step towards Completing the Economic and Monetary Union
Authors: Marijana Ivanov, Roman Šubić
Abstract:
The single rulebook, together with the Single Supervisory Mechanism (SSM) and the Single Resolution Mechanism (SRM), the two main pillars of the banking union, represents an important step towards completing the Economic and Monetary Union. It should provide a consistent application of common rules and administrative standards for the supervision, recovery and resolution of banks, with the final aim of replacing the former bail-out practice with a bail-in system, through which bank failures will be resolved with banks' own funds, i.e. with minimal costs for taxpayers and the real economy. It should reduce the financial fragmentation recorded in the years of crisis as the result of divergent behavior in risk premia, lending activity, and interest rates between the core and the periphery. In addition, it should strengthen the effectiveness of monetary transmission channels, in particular the credit channel and flows of liquidity on the single interbank money market. However, contrary to all the positive expectations related to the future functioning of the banking union, low and unbalanced economic growth rates remain a challenge for the maintenance of financial stability in the euro area, and this problem cannot be resolved by single supervision alone. In many countries bank assets exceed GDP by several times, and large banks are still a matter of concern because of their systemic importance for individual countries and the euro area as a whole. The creation of the SSM and the SRM should increase the transparency of the banking system in the euro area and restore the confidence that was disturbed during the crisis. It would provide a new opportunity to strengthen economic and financial systems in the peripheral countries.
On the other hand, there is a potential threat that the future focus of the ECB, the resolution mechanism and other relevant institutions will be heavily oriented toward large and significant banks (half of which operate in the core and most important euro area countries), while it is questionable to what extent the common resolution funds will be used for the rescue of less important institutions.
Keywords: banking union, financial integration, single supervision mechanism (SSM)
Procedia PDF Downloads 470
6065 Investigation of Oscillation Mechanism of a Large-scale Solar Photovoltaic and Wind Hybrid Power Plant
Authors: Ting Kai Chia, Ruifeng Yan, Feifei Bai, Tapan Saha
Abstract:
This research presents a real-world power system oscillation incident from 2022, originating from a hybrid solar photovoltaic (PV) and wind renewable energy farm with a rated capacity of approximately 300 MW in Australia. The voltage and reactive power outputs recorded at the point of common coupling (PCC) oscillated in a sub-synchronous frequency region, and the oscillation persisted in the network for approximately five hours. The reactive power oscillation gradually increased over time and reached a recorded maximum of approximately 250 MVar peak-to-peak (from inductive to capacitive). The network service provider was not able to quickly identify the location of the oscillation source because the issue was widespread across the network. After the incident, the original equipment manufacturer (OEM) concluded that the oscillation problem was caused by incorrect setting recovery of the hybrid power plant controller (HPPC) voltage and reactive power control loop after a loss-of-communication event. The voltage controller normally outputs a reactive power (Q) reference value to the Q controller, which controls the Q dispatch setpoint of the PV and wind plants in the hybrid farm. Meanwhile, a feed-forward (FF) configuration is used to bypass the Q controller in case of a loss of communication. Further study found that the FF control mode was still engaged when communication was re-established, which ultimately resulted in the oscillation event. However, there was no detailed explanation of why the FF control mode can cause instability in the hybrid farm, and the event was not reproduced in simulation to analyze its root cause. Therefore, this research aims to model and replicate the oscillation event in a simulation environment and investigate the underlying behavior of the HPPC and the consequent oscillation mechanism during the incident.
The outcome of this research will provide significant benefits to the safe operation of large-scale renewable energy generators and power networks.
Keywords: PV, oscillation, modelling, wind
Procedia PDF Downloads 37
6064 High Level Expression of Fluorinase in Escherichia coli and Pichia pastoris
Authors: Lee A. Browne, K. Rumbold
Abstract:
The first fluorinating enzyme, 5'-fluoro-5'-deoxyadenosine synthase (fluorinase), was isolated from the soil bacterium Streptomyces cattleya. Such an enzyme, with the ability to catalyze C-F bond formation, presents great potential as a biocatalyst. Naturally fluorinated compounds are extremely rare in nature; as a result, the number of fluorinases identified remains small, and the field of fluorination is almost completely synthetic. However, with the increasing demand for fluorinated organic compounds of commercial value in the agrochemical, pharmaceutical and materials industries, it has become necessary to utilize biologically based methods such as biocatalysts. A key step is the large-scale production of the fluorinase enzyme in quantities sufficient for industrial applications. Thus, this study aimed to optimize expression of the fluorinase enzyme in both prokaryotic and eukaryotic expression systems in order to obtain high protein yields. The fluorinase gene was cloned into the pET 41b(+) and pPinkα-HC vectors and used to transform the expression hosts E. coli BL21(DE3) and Pichia pastoris (PichiaPink™ strains), respectively. Expression trials were conducted to select optimal conditions in both expression systems. Fluorinase catalyses a reaction between S-adenosyl-L-methionine (SAM) and fluoride ion to produce 5'-fluorodeoxyadenosine (5'-FDA) and L-methionine. The activity of the enzyme was determined by HPLC, measuring the reaction product 5'-FDA. A gradient mobile phase was used, running from 95:5 (v/v) 50 mM potassium phosphate buffer:acetonitrile to a final 80:20 (v/v) composition. This resulted in the complete separation of SAM and 5'-FDA, which eluted at 1.3 minutes and 3.4 minutes, respectively, and proved that the fluorinase enzyme was active.
Optimisation of fluorinase expression was successful in both E. coli and PichiaPink™, with high expression levels achieved in both systems. Protein production will be scaled up in PichiaPink™ using fermentation to achieve large-scale production. High-level protein expression is essential in biocatalysis to make enzymes available for industrial applications.
Keywords: biocatalyst, expression, fluorinase, PichiaPink™
Procedia PDF Downloads 552
6063 Development of a New Method for the Evaluation of Heat Tolerant Wheat Genotypes for Genetic Studies and Wheat Breeding
Authors: Hameed Alsamadany, Nader Aryamanesh, Guijun Yan
Abstract:
Heat is one of the major abiotic stresses limiting wheat production worldwide. To identify heat-tolerant genotypes, a newly designed system was developed and used to study heat tolerance: a large plastic box holding many layers of filter paper positioned vertically, with wheat seeds sown in between, allowing large numbers of wheat genotypes to be screened easily. A collection of 499 wheat genotypes was screened under heat stress (35ºC) and non-stress (25ºC) conditions using the new method. Compared with non-stress conditions, a substantial and very significant reduction in seedling length (SL) under heat stress was observed, with an average reduction of 11.7 cm (P<0.01). A damage index (DI) for each genotype, based on SL under the two temperatures, was calculated and used to rank the genotypes. Three hexaploid genotypes of Triticum aestivum [Perenjori (DI = -0.09), Pakistan W 20B (-0.18) and SST16 (-0.28)], all growing better at 35ºC than at 25ºC, were identified as extremely heat tolerant (EHT). Two hexaploid genotypes of T. aestivum [Synthetic wheat (0.93) and Stiletto (0.92)] and two tetraploid genotypes of T. turgidum ssp. dicoccoides [G3211 (0.98) and G3100 (0.93)] were identified as extremely heat susceptible (EHS). Another 14 genotypes were classified as heat tolerant (HT) and 478 as heat susceptible (HS). Extremely heat tolerant and extremely heat susceptible genotypes were used to develop recombinant inbred line (RIL) populations for genetic studies. Four major QTLs, HTI4D, HTI3B.1, HTI3B.2 and HTI3A, located on wheat chromosomes 4D, 3B (x2) and 3A and explaining up to 34.67%, 28.93%, 13.46% and 11.34% of phenotypic variation, respectively, were detected. The four QTLs together accounted for 88.40% of the total phenotypic variation.
Random wheat genotypes possessing the four heat tolerance alleles performed significantly better under the heat condition than those lacking them, indicating the importance of the four QTLs in conferring heat tolerance in wheat. Molecular markers are being developed for marker-assisted breeding of heat tolerant wheat.
Keywords: bread wheat, heat tolerance, screening, RILs, QTL mapping, association analysis
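The abstract does not give the exact damage-index formula, but a plausible reading is that DI is the relative reduction in seedling length under heat stress, so that it is negative when a genotype grows better at 35ºC than at 25ºC (as for Perenjori with DI = -0.09). A minimal sketch of that ranking step, with hypothetical seedling lengths:

```python
def damage_index(sl_control: float, sl_heat: float) -> float:
    """Relative reduction in seedling length under heat stress.

    Negative values mean the genotype grew better at 35 C than at 25 C.
    This formula is an assumption; the paper does not state the exact DI.
    """
    return (sl_control - sl_heat) / sl_control

# Hypothetical seedling lengths (cm) at 25 C and 35 C for two genotypes.
genotypes = {"A": (20.0, 21.8), "B": (20.0, 1.4)}

# Rank from most heat tolerant (lowest DI) to most susceptible.
ranked = sorted(genotypes, key=lambda g: damage_index(*genotypes[g]))
```

With these made-up lengths, genotype "A" gets DI = -0.09 and ranks as more tolerant than "B" (DI = 0.93).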
Procedia PDF Downloads 551
6062 Near-Miss Deep Learning Approach for Neuro-Fuzzy Risk Assessment in Pipelines
Authors: Alexander Guzman Urbina, Atsushi Aoyama
Abstract:
The sustainability of traditional technologies employed in energy and chemical infrastructure poses a big challenge for our society. When making decisions related to the safety of industrial infrastructure, the values of accidental risk become relevant points for discussion. However, the challenge is the reliability of the models employed to obtain the risk data. Such models usually involve a large number of variables and large amounts of uncertainty. The most efficient techniques to overcome those problems are built using Artificial Intelligence (AI), and more specifically hybrid systems such as Neuro-Fuzzy algorithms. Therefore, this paper aims to introduce a hybrid algorithm for risk assessment trained on near-miss accident data. As mentioned above, the sustainability of traditional technologies related to energy and chemical infrastructure constitutes one of the major challenges that today’s societies and firms are facing. Besides that, the adaptation of those technologies to the effects of climate change in sensitive environments represents a critical concern for safety and risk management. Regarding this issue, it can be argued that the social consequences of catastrophic risks are increasing rapidly, due mainly to the concentration of people and energy infrastructure in hazard-prone areas, aggravated by a lack of knowledge about the risks. In addition to these social consequences, and considering the industrial sector as critical infrastructure due to its large impact on the economy in case of failure, industrial safety has become a critical issue for today's society. Regarding this safety concern, pipeline operators and regulators have been performing risk assessments in attempts to evaluate accurately the probabilities of failure of the infrastructure and the consequences associated with those failures. 
However, estimating accidental risks in critical infrastructure involves substantial effort and cost due to the number of variables involved, the complexity, and the lack of information. Therefore, this paper aims to introduce a well-trained algorithm for risk assessment using deep learning, capable of dealing efficiently with this complexity and uncertainty. The advantage of deep learning on near-miss accident data is that it can be employed in risk assessment as an efficient engineering tool to treat the uncertainty of risk values in complex environments. The basic idea of the Near-Miss Deep Learning Approach for Neuro-Fuzzy Risk Assessment in Pipelines is to improve the validity of the risk values by learning from near-miss accidents and imitating human expertise in scoring risks and setting tolerance levels. In summary, the Deep Learning for Neuro-Fuzzy Risk Assessment method involves a regression analysis called the group method of data handling (GMDH), which determines the optimal configuration of the risk assessment model and its parameters employing polynomial theory.
Keywords: deep learning, risk assessment, neuro-fuzzy, pipelines
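The GMDH step mentioned above builds, for each pair of input variables, a quadratic (Ivakhnenko) polynomial and keeps the best-fitting candidates. A minimal sketch of one such pair model, fitted by least squares on hypothetical risk drivers (not the authors' data or their full network-growing procedure):

```python
import numpy as np

def gmdh_pair_model(x1, x2, y):
    """Fit the classic GMDH quadratic polynomial for one input pair:
    y = a0 + a1*x1 + a2*x2 + a3*x1*x2 + a4*x1^2 + a5*x2^2."""
    X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef, X @ coef

# Hypothetical risk drivers for pipeline segments (e.g. a corrosion indicator
# and a near-miss frequency) and a risk score the polynomial should reproduce.
rng = np.random.default_rng(0)
x1, x2 = rng.random(50), rng.random(50)
y = 0.3 * x1 + 0.5 * x2 + 0.2 * x1 * x2
coef, y_hat = gmdh_pair_model(x1, x2, y)
```

In a full GMDH run, many such pair models are generated per layer and selected on a validation split; only the fitting of a single candidate is shown here.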
Procedia PDF Downloads 292
6061 Relation of Optimal Pilot Offsets in the Shifted Constellation-Based Method for the Detection of Pilot Contamination Attacks
Authors: Dimitriya A. Mihaylova, Zlatka V. Valkova-Jarvis, Georgi L. Iliev
Abstract:
One possible approach for maintaining the security of communication systems relies on Physical Layer Security mechanisms. However, in wireless time division duplex systems, where uplink and downlink channels are reciprocal, the channel estimation procedure is exposed to attacks known as pilot contamination, whose aim is to have an enhanced data signal sent to the malicious user. The Shifted 2-N-PSK method involves two random legitimate pilots in the training phase, each of which belongs to a constellation shifted from the original N-PSK symbols by a certain number of degrees. In this paper, the legitimate pilots’ offset values and their influence on the detection capabilities of the Shifted 2-N-PSK method are investigated. As the implementation of the technique depends on the relation between the shift angles rather than on their specific values, the optimal interconnection between the two legitimate constellations is investigated. The results show that no regularity exists in the relation between the pilot contamination attack (PCA) detection probability and the choice of offset values. Therefore, an adversary who aims to obtain the exact offset values can only employ a brute-force attack, but the large number of possible combinations for the shifted constellations makes such an attack difficult to mount successfully. For this reason, the number of optimal shift value pairs is also studied for both 100% and 98% probabilities of detecting pilot contamination attacks. Although the Shifted 2-N-PSK method has been broadly studied in different signal-to-noise ratio scenarios, in multi-cell systems the interference from signals in other cells should also be taken into account. Therefore, the impact of inter-cell interference on the performance of the method is investigated by means of a large number of simulations. 
The results show that the detection probability of the Shifted 2-N-PSK method decreases as the signal-to-interference-plus-noise ratio decreases.
Keywords: channel estimation, inter-cell interference, pilot contamination attacks, wireless communications
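The shifted constellations at the heart of the method can be illustrated directly: each legitimate pilot set is an N-PSK alphabet rotated by its own offset angle, and, as the abstract notes, only the relative shift between the two constellations matters. A small sketch with hypothetical offsets (10° and 25°):

```python
import numpy as np

def shifted_npsk(n: int, shift_deg: float) -> np.ndarray:
    """Unit-energy N-PSK constellation rotated by a pilot offset angle."""
    k = np.arange(n)
    return np.exp(1j * (2 * np.pi * k / n + np.deg2rad(shift_deg)))

# Two legitimate pilot constellations; the offsets 10 and 25 degrees are
# illustrative only. Detection depends on their 15-degree relative shift.
c1 = shifted_npsk(4, 10.0)
c2 = shifted_npsk(4, 25.0)
relative_shift_deg = np.angle(c2[0] / c1[0], deg=True)
```

Sweeping `relative_shift_deg` and measuring the resulting PCA detection probability would reproduce the kind of offset study described in the abstract.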
Procedia PDF Downloads 217
6060 Risk Assessment on Construction Management with “Fuzzy Logic”
Authors: Mehrdad Abkenari, Orod Zarrinkafsh, Mohsen Ramezan Shirazi
Abstract:
Construction projects are initiated in complicated, dynamic environments and, due to the close relationships between project parameters and the unknown outer environment, they face several uncertainties and risks. Success in time, cost and quality in large-scale construction projects is uncertain as a consequence of technological constraints, the large number of stakeholders, long durations, great capital requirements and poor definition of the extent and scope of the project. Projects facing such environments and uncertainties can be well managed through utilization of the concept of risk management across the project’s life cycle. Although the concept of risk depends on the opinions and ideas of management, it also covers the risk of not achieving the project objectives. Furthermore, a project’s risk analysis addresses the risk of developing inappropriate responses. Since the evaluation and prioritization of construction projects is a difficult task, the network structure is considered an appropriate approach for analyzing complex systems; therefore, we have used this structure for analyzing and modeling the issue. On the other hand, we face inadequacy of data in deterministic circumstances, and additionally experts’ opinions are usually mathematically vague and are introduced in the form of linguistic variables instead of numerical expressions. Since fuzzy logic is used for expressing vagueness and uncertainty, formulating experts’ opinions in the form of fuzzy numbers is an appropriate approach. In other words, the evaluation and prioritization of construction projects on the basis of risk factors in the real world is a complicated issue with many ambiguous qualitative characteristics. 
In this study, the risk parameters and factors in construction management are evaluated and prioritized with a fuzzy logic method combining three techniques: DEMATEL (Decision-Making Trial and Evaluation Laboratory), ANP (Analytic Network Process) and TOPSIS (Technique for Order Preference by Similarity to Ideal Solution).
Keywords: fuzzy logic, risk, prioritization, assessment
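The TOPSIS component ranks alternatives by their relative closeness to an ideal solution. Below is a crisp (non-fuzzy) sketch of that scoring step; the fuzzy variant used in such studies would replace the crisp scores with, e.g., triangular fuzzy numbers. The decision matrix, weights and criteria are hypothetical:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Closeness of each alternative to the ideal solution (higher = closer)."""
    m = matrix / np.linalg.norm(matrix, axis=0)      # vector normalisation
    v = m * weights                                  # weighted normalised matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)        # distance to ideal
    d_neg = np.linalg.norm(v - anti, axis=1)         # distance to anti-ideal
    return d_neg / (d_pos + d_neg)

# Hypothetical scores for three construction risks on two criteria
# (e.g. probability and impact), both treated as "higher = more critical".
scores = topsis(np.array([[7.0, 8.0], [5.0, 4.0], [9.0, 6.0]]),
                np.array([0.6, 0.4]), np.array([True, True]))
```

The highest-scoring row would be treated as the top-priority risk; DEMATEL and ANP would supply the criteria weights instead of the fixed vector used here.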
Procedia PDF Downloads 594
6059 Approaches to Reduce the Complexity of Mathematical Models for the Operational Optimization of Large-Scale Virtual Power Plants in Public Energy Supply
Authors: Thomas Weber, Nina Strobel, Thomas Kohne, Eberhard Abele
Abstract:
In the context of the energy transition in Germany, the importance of so-called virtual power plants in the energy supply continues to increase. The progressive dismantling of large power plants and the ongoing construction of many new decentralized plants result in great potential for optimization through synergies between the individual plants. These potentials can be exploited by mathematical optimization algorithms that calculate the optimal operational planning of decentralized power and heat generators and storage systems. This includes linear and mixed-integer linear optimization. In this paper, procedures for reducing the number of decision variables to be calculated are explained and validated. On the one hand, this includes combining n similar installation types into one aggregated unit. This aggregated unit is described by the same constraints and objective function terms as a single plant. This reduces the number of decision variables per time step, and the complexity of the problem to be solved, by a factor of n. The exact operating mode of the individual plants can then be calculated in a second optimization in such a way that the output of the individual plants corresponds to the calculated output of the aggregated unit. Another way to reduce the number of decision variables in an optimization problem is to reduce the number of time steps to be calculated. This is useful if a high temporal resolution is not necessary for all time steps. For example, the volatility or the forecast quality of environmental parameters may justify a high or low temporal resolution of the optimization. Both approaches are examined for the resulting calculation time as well as for optimality. 
Several optimization models for virtual power plants (combined heat and power plants, heat storage, power storage, gas turbine) with different numbers of plants are used as a reference for the investigation of both approaches with regard to calculation duration and optimality.
Keywords: CHP, Energy 4.0, energy storage, MILP, optimization, virtual power plant
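The aggregation idea can be sketched without a full MILP: n identical units with output bounds [0, p_max] are replaced by one aggregated unit bounded by [0, n * p_max], and the per-unit schedule is recovered afterwards. The even split used in the disaggregation step below is a simplification; the paper recovers the exact operating mode with a second optimization.

```python
def aggregate_dispatch(n_units: int, p_max: float, demand: float):
    """Dispatch n identical units as one aggregated unit, then disaggregate.

    One decision variable per time step replaces n of them; here the
    second-stage recovery is a simple even split among identical units.
    """
    agg_output = min(demand, n_units * p_max)   # single aggregated variable
    per_unit = agg_output / n_units             # naive disaggregation
    return agg_output, [per_unit] * n_units

# Hypothetical fleet: 10 identical 2 MW CHP units serving a 15 MW demand.
agg, schedule = aggregate_dispatch(10, 2.0, 15.0)
```

With real, non-identical units the disaggregation must respect each unit's own constraints, which is exactly why the paper performs it as a separate optimization.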
Procedia PDF Downloads 178
6058 Diagnosis, Treatment, and Prognosis in Cutaneous Anaplastic Lymphoma Kinase-Positive Anaplastic Large Cell Lymphoma: A Narrative Review Apropos of a Case
Authors: Laura Gleason, Sahithi Talasila, Lauren Banner, Ladan Afifi, Neda Nikbakht
Abstract:
Primary cutaneous anaplastic large cell lymphoma (pcALCL) accounts for 9% of all cutaneous T-cell lymphomas. pcALCL classically presents as a solitary papulonodule that often enlarges, ulcerates, and can be locally destructive, but it exhibits an indolent course, with 5-year survival estimated at 90%. Distinguishing pcALCL from systemic ALCL (sALCL) is essential, as sALCL confers a poorer prognosis, with average 5-year survival of 40-50%. Although extremely rare, there have been several cases of ALK-positive ALCL diagnosed on skin biopsy without evidence of systemic involvement, which poses several challenges in the classification, prognostication, treatment, and follow-up of these patients. Objectives: We present a case of cutaneous ALK-positive ALCL without evidence of systemic involvement, and a narrative review of the literature to further characterize ALK-positive ALCL limited to the skin as a distinct variant with a unique presentation, history, and prognosis. A 30-year-old woman presented for evaluation of an erythematous-violaceous papule present on her right chest for two months. With the development of multifocal disease and persistent lymphadenopathy, a bone marrow biopsy and lymph node excisional biopsy were performed to assess for systemic disease. Both biopsies were unrevealing. The patient was counseled on pursuing systemic therapy consisting of brentuximab, cyclophosphamide, doxorubicin, and prednisone, given the concern for sALCL. Apropos of this patient, we searched the English literature for clinically evident cutaneous ALK-positive ALCL cases, with and without systemic involvement. Risk factors, such as tumor location, number, size, ALK localization, ALK translocations, and recurrence, were evaluated in cases of cutaneous ALK-positive ALCL. The majority of patients with cutaneous ALK-positive ALCL did not progress to systemic disease. 
The majority of adult cases that progressed to systemic disease had recurring skin lesions and cytoplasmic localization of ALK. ALK translocations did not influence disease progression. Mean time to disease progression was 16.7 months, and significant mortality (50%) was observed in those cases that progressed to systemic disease. Pediatric cases did not exhibit a trend similar to adult cases. In both the adult and pediatric cases, a subset of cutaneous-limited ALK-positive ALCL was treated with chemotherapy. None of the cases treated with chemotherapy progressed to systemic disease. Apropos of an ALK-positive ALCL patient with clinically cutaneous-limited disease in the histologic presence of systemic markers, we discuss the literature, highlighting the crucial issues related to developing a clinical strategy for approaching this rare subtype of ALCL. Physicians need to be aware of the overall spectrum of ALCL, including cutaneous-limited disease, systemic disease, disease with NPM-ALK translocation, disease with ALK and EMA positivity, and disease with skin recurrence.
Keywords: anaplastic large cell lymphoma, systemic, cutaneous, anaplastic lymphoma kinase, ALK, ALCL, sALCL, pcALCL, cALCL
Procedia PDF Downloads 83
6057 Modeling of Conjugate Heat Transfer including Radiation in a Kerosene/Air Certification Burner
Authors: Lancelot Boulet, Pierre Benard, Ghislain Lartigue, Vincent Moureau, Nicolas Chauvet, Sheddia Didorally
Abstract:
International aeronautic standards demand a fire certification for engines that demonstrates their resistance. This demonstration relies on tests performed with prototype engines in the late stages of development. The hardest tests require placing a standardized kerosene flame in front of the engine casing for a given time with imposed temperature and heat flux. The purpose of this work is to provide a better characterization of a kerosene/air certification burner in order to minimize the risk of test failure. A first Large-Eddy Simulation (LES) study of the certification burner made it possible to model and simulate this burner, including both adiabatic and Conjugate Heat Transfer (CHT) computations. Carried out on unstructured grids with 40 million tetrahedral cells, using the finite-volume YALES2 code, spray combustion, forced convection on walls and conduction in the solid parts of the burner were coupled to achieve a detailed description of heat transfer. This highlighted the fact that conduction inside the solid has a real impact on the flame topology and the combustion regime. However, in the absence of radiative heat transfer, unrealistic equipment temperatures were obtained. The aim of the present study is to include radiative heat transfer in order to reach the temperatures given by experimental measurements. First, various test cases are conducted to validate the coupling between the different heat solvers. Then, the adiabatic case, the CHT case, and the CHT case including radiative transfer are studied and compared. The LES model is finally applied to investigate the heat transfer in a flame impingement configuration. The aim is to progress on fire test modeling so as to reach a good confidence level as far as success of the certification test is concerned.
Keywords: conjugate heat transfer, fire resistance test, large-eddy simulation, radiative transfer, turbulent combustion
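Why radiation matters for the wall temperature can be illustrated with a much simpler model than the coupled LES/CHT solvers used in the study: a grey-body flux balance on a casing wall. This sketch is not the authors' method, and all values are hypothetical:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def wall_heat_flux(h, t_gas, t_wall, emissivity, t_surr):
    """Convective plus grey-body radiative flux onto a wall (W/m^2).

    h: convective coefficient, t_gas/t_wall/t_surr: temperatures in K,
    emissivity: wall emissivity (0 disables radiation).
    """
    q_conv = h * (t_gas - t_wall)
    q_rad = emissivity * SIGMA * (t_surr**4 - t_wall**4)
    return q_conv + q_rad

# Hypothetical values: 1400 K flame gases over a 600 K wall.
q_no_rad = wall_heat_flux(120.0, 1400.0, 600.0, 0.0, 1400.0)
q_with_rad = wall_heat_flux(120.0, 1400.0, 600.0, 0.8, 1400.0)
```

Because of the fourth-power dependence, the radiative term is comparable to or larger than the convective one at flame temperatures, which is consistent with the unrealistic temperatures the authors report when radiation is omitted.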
Procedia PDF Downloads 223
6056 Research and Development of Net-Centric Information Sharing Platform
Authors: Wang Xiaoqing, Fang Youyuan, Zheng Yanxing, Gu Tianyang, Zong Jianjian, Tong Jinrong
Abstract:
Compared with a traditional distributed environment, the net-centric environment poses more demanding challenges for information sharing, with its characteristics of ultra-large scale and strong distribution, dynamism, autonomy, heterogeneity and redundancy. This paper realizes an information sharing model and a series of core services, which together provide an open, flexible and scalable information sharing platform.
Keywords: net-centric environment, information sharing, metadata registry and catalog, cross-domain data access control
Procedia PDF Downloads 570
6055 Investigation of External Pressure Coefficients on Large Antenna Parabolic Reflector Using Computational Fluid Dynamics
Authors: Varun K, Pramod B. Balareddy
Abstract:
Estimation of wind forces plays a significant role in the design of large antenna parabolic reflectors. The gain of the antenna system at higher frequencies is very sensitive to reflector surface accuracy. Hence, accurate estimation of wind forces becomes important, as it is the primary input for design and analysis of the reflector system. In the present work, numerical simulation of wind flow using Computational Fluid Dynamics (CFD) software is used to investigate the external pressure coefficients. An extensive comparative study has been made between the CFD results and the published wind tunnel data for different wind angles of attack (α) acting over the concave and convex surfaces respectively. Flow simulations using CFD are carried out to estimate the coefficients of drag, lift and moment for the parabolic reflector. Pressure coefficients (Cp) over the front and rear faces of the reflector are extracted to study the net pressure variations. These resultant pressure variations are compared with the published wind tunnel data for different angles of attack. It was observed from the CFD simulations that both the convex and concave faces of the reflector system experience a band of pressure variations for positive and negative angles of attack respectively. In the published wind tunnel data, pressure variations over convex surfaces are assumed to be uniform, and vice versa. Chordwise and spanwise pressure variations were calculated and compared with the published experimental data. In the present work, it was observed that the maximum pressure coefficients for α ranging from +30° to -90° and for α=+90° were lower, while for α ranging from +45° to +75° the maximum pressure coefficients were higher compared to the wind tunnel data. This variation is due to the non-uniform pressure distribution observed over the front and back faces of the reflector. 
Variations in Cd, Cl and Cm over α=+90° to α=-90° were in close agreement with the experimental data.
Keywords: angle of attack, drag coefficient, lift coefficient, pressure coefficient
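The pressure-coefficient extraction described above uses the standard definition Cp = (p - p_inf) / (0.5 * rho * V^2), with the net loading on the dish obtained from the difference between the front and rear face values. A small sketch with hypothetical CFD samples (the freestream conditions and pressures are illustrative, not the paper's data):

```python
def pressure_coefficient(p: float, p_inf: float, rho: float, v_inf: float) -> float:
    """Cp = (p - p_inf) / (0.5 * rho * v_inf**2), the standard definition."""
    return (p - p_inf) / (0.5 * rho * v_inf**2)

def net_cp(cp_front: float, cp_rear: float) -> float:
    """Net pressure loading across the reflector at one surface point."""
    return cp_front - cp_rear

# Hypothetical samples: near-stagnation pressure on the concave (front) face,
# suction on the convex (rear) face, in standard air at 30 m/s.
cp_front = pressure_coefficient(p=701.0, p_inf=150.0, rho=1.225, v_inf=30.0)
cp_rear = pressure_coefficient(p=-125.5, p_inf=150.0, rho=1.225, v_inf=30.0)
cp_net = net_cp(cp_front, cp_rear)
```

Integrating `cp_net` over the surface (with area and direction weighting) would yield the drag, lift and moment coefficients compared in the study.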
Procedia PDF Downloads 257
6054 Designing Nickel Coated Activated Carbon (Ni/AC) Based Electrode Material for Supercapacitor Applications
Authors: Zahid Ali Ghazi
Abstract:
Supercapacitors (SCs) have emerged as promising energy storage devices because of their fast charge-discharge characteristics and high power densities. In the current study, a simple approach is used to coat activated carbon (AC) with a thin layer of nickel (Ni) by an electroless deposition process to enhance the electrochemical performance of the SC. The synergistic combination of the large surface area and high electrical conductivity of the AC with the pseudocapacitive behavior of the metallic Ni has shown great potential to overcome the limitations of traditional SC materials. First, the materials were characterized using X-ray diffraction (XRD) for crystallography, scanning electron microscopy (SEM) for surface morphology and energy-dispersive X-ray (EDX) spectroscopy for elemental analysis. The electrochemical performance of the nickel-coated activated carbon (Ni/AC) is systematically evaluated through various techniques, including galvanostatic charge-discharge (GCD), cyclic voltammetry (CV), and electrochemical impedance spectroscopy (EIS). The GCD results revealed that Ni/AC has a higher specific capacitance (1559 F/g) than bare AC (222 F/g) at 1 A/g current density in a 2 M KOH electrolyte. Even at a higher current density of 20 A/g, the Ni/AC showed a high capacitance of 944 F/g compared to 77 F/g for AC. The specific capacitance (1318 F/g) calculated from CV measurements for Ni/AC at 10 mV/s was in close agreement with the GCD data. Furthermore, the bare AC exhibited a low energy density of 15 Wh/kg at a power density of 356 W/kg, whereas an energy density of 111 Wh/kg at a power density of 360 W/kg was achieved by the Ni/AC-850 electrode, which demonstrated a long cycle life with 94% capacitance retention over 50000 charge/discharge cycles at 10 A/g. In addition, the EIS study disclosed that the Rs and Rct values of Ni/AC electrodes were much lower than those of bare AC. 
The superior performance of Ni/AC is mainly attributed to the presence of abundant redox-active sites, the large electroactive surface area and the corrosion resistance of Ni. We believe that this study will provide new insights into the controlled coating of ACs and other porous materials with metals for developing high-performance SCs and other energy storage devices.
Keywords: supercapacitor, cyclic voltammetry, coating, energy density, activated carbon
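The GCD-derived figures of merit quoted above follow from two textbook relations: C = I·Δt / (m·ΔV) for the specific capacitance and E = C·ΔV² / (2·3.6) for the energy density in Wh/kg (the factor 3.6 converts J/g to Wh/kg). A small sketch with hypothetical discharge data, not the paper's measurements:

```python
def specific_capacitance(current_a: float, discharge_time_s: float,
                         mass_g: float, delta_v: float) -> float:
    """C = I * dt / (m * dV), in F/g, from a galvanostatic discharge."""
    return current_a * discharge_time_s / (mass_g * delta_v)

def energy_density_wh_per_kg(c_f_per_g: float, delta_v: float) -> float:
    """E = C * dV^2 / (2 * 3.6): 0.5*C*V^2 in J/g converted to Wh/kg."""
    return c_f_per_g * delta_v**2 / (2.0 * 3.6)

# Hypothetical discharge: 1 A on a 1 g electrode over a 1 V window for 1559 s.
c = specific_capacitance(current_a=1.0, discharge_time_s=1559.0,
                         mass_g=1.0, delta_v=1.0)
e = energy_density_wh_per_kg(c, delta_v=1.0)
```

The matching power density would be E divided by the discharge time in hours, which is how the Ragone-type figures quoted in the abstract are usually obtained.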
Procedia PDF Downloads 63
6053 Development of Alternative Fuels Technologies for Transportation
Authors: Szymon Kuczynski, Krystian Liszka, Mariusz Laciak, Andrii Oliinyk, Adam Szurlej
Abstract:
Currently, almost exclusively hydrocarbon-based fuels are used to power vehicles in automotive transport. Due to the increase in hydrocarbon fuel consumption, quality parameters are being tightened for a clean environment. At the same time, efforts are being undertaken to develop alternative fuels. The reasons for seeking alternatives to petrol and diesel are: to increase vehicle efficiency and reduce the environmental impact, to reduce greenhouse gas emissions, and to save on the consumption of limited oil resources. Significant progress has been made on the development of alternative fuels such as methanol, ethanol, natural gas (CNG/LNG), LPG, dimethyl ether (DME) and biodiesel. In addition, the biggest vehicle manufacturers are working on fuel cell vehicles and their introduction to the market. Alcohols such as methanol and ethanol make excellent fuels for spark-ignition engines. Their advantages are a high antiknock value, which determines their application as an additive (10%) to unleaded petrol, and the relative purity of the exhaust gases produced. Ethanol is produced by distillation of plant products whose use as food may make this diversion questionable. Ethanol production can also be costly for the entire economy of a country, because it requires large, complex distillation plants, large amounts of biomass and, finally, a significant amount of fuel to sustain the process. At the same time, the fermentation of plants releases large quantities of carbon dioxide into the atmosphere. Natural gas cannot be directly converted into liquid fuels, although such arrangements have been proposed in the literature; going through intermediate products is, as yet, inevitable. The most popular route is conversion to methanol, which can be processed further to dimethyl ether (DME) or olefins (ethylene and propylene) for the petrochemical sector. Methanol production uses natural gas as a raw material but requires expensive and advanced production processes. 
In relation to pollutant emissions, an optimal vehicle fuel is LPG, which is used in many countries as an engine fuel. Production of LPG is inextricably linked with the production and processing of oil and gas, of which it represents a small percentage; its potential as an alternative to traditional fuels is therefore proportionately limited. Biogas may also be an excellent engine fuel; however, it is subject to the same limitations as ethanol, since similar production processes and raw materials are used. The most essential fuel in the campaign to protect the environment against pollution is natural gas. Natural gas as a fuel may be either compressed (CNG) or liquefied (LNG). Natural gas can also be used for hydrogen production by steam reforming. Hydrogen can be used as a basic starting material for the chemical industry, an important raw material in refinery processes, as well as a fuel for vehicle transportation. Natural gas in the form of CNG represents an excellent compromise, with a technology that is proven and relatively cheap to use in many areas of the automotive industry. Natural gas can also be seen as an important bridge to other alternative energy sources that are harmless to the environment. For these reasons, CNG as a fuel attracts considerable interest worldwide.
Keywords: alternative fuels, CNG (Compressed Natural Gas), LNG (Liquefied Natural Gas), NGVs (Natural Gas Vehicles)
Procedia PDF Downloads 181
6052 Computational Elucidation of β-endo-Acetylglucosaminidase (LytB) Inhibition by Kaempferol, Apigenin, and Quercetin in Streptococcus pneumoniae: Anti-Pneumonia Mechanism
Authors: Singh Divya, Rohan Singh, Anjana Pandey
Abstract:
This study extends earlier in-silico findings through experimental validation. It uses the Streptococcus pneumoniae D39 strain and examines the anti-pneumonia effect of kaempferol, quercetin and apigenin at concentrations ranging from 9 µg/mL to 200 µg/mL. From the results, it can be concluded that kaempferol showed the highest cytotoxic effect (72.1% inhibition) against S. pneumoniae at a concentration of 40 µg/mL, compared to apigenin and quercetin. Treatment of S. pneumoniae with a concoction of kaempferol, quercetin and apigenin was also performed; a concentration of 200 µg/mL was most effective, achieving 75% inhibition. As S. pneumoniae D39 is a virulent encapsulated strain, the capsule interferes with the uptake of large-size drug formulations. For instance, S. pneumoniae D39 was treated with a kaempferol and gold nano-urchin (GNU) formulation, but the large size of the GNU resulted in a reduced cytotoxic effect of kaempferol (27%). To achieve a near 100% cytotoxic effect on the MDR S. pneumoniae D39 strain, the study will target the development of kaempferol-engineered gold nano-urchin conjugates, where the gold nanocrystal will be of small size (5 nm or less) and decorated with hydroxyl, sulfhydryl, carboxyl and amine groups. This approach is expected to enhance the anti-pneumonia effect of kaempferol (a polyhydroxylated flavonoid). The study will also examine the interactions among the lung epithelial cell line (A549), kaempferol-engineered gold nano-urchins, and S. pneumoniae, exploring the colonization, invasion, and biofilm formation of S. 
pneumoniae on A549 cells, resembling the upper respiratory surface of humans.
Keywords: Streptococcus pneumoniae, β-endo-acetylglucosaminidase, apigenin, quercetin, kaempferol, molecular dynamics simulation, interactome study, GROMACS
Procedia PDF Downloads 4
6051 Study of the Adsorptive Properties of Zeolites X Exchanged by the Cations Cu2+ and/or Zn2+
Authors: H. Hammoudi, S. Bendenia, I. Batonneau-Gener, A. Khelifa
Abstract:
The growing application of zeolites is due to their intrinsic physicochemical properties: a regular porous structure generating a large free volume, a high specific surface area, acidic properties of interest at the origin of their activity, and energetic and dimensional selectivity leading to a sieving phenomenon, hence the name molecular sieves generally attributed to them. Most of the special properties of zeolites have been exploited in direct applications such as ion exchange, adsorption, separation and catalysis. Due to their stable crystalline structure, their large pore volume and their high cation content, X zeolites are widely used in adsorption and separation processes. The acidic properties of X zeolites, and the interesting selectivity conferred on them by their porous structure, also make them potential catalysts. The study presented in this manuscript is devoted to the chemical modification of an X zeolite by cation exchange. Ion exchange of zeolite NaX by Zn2+ and/or Cu2+ cations is conducted gradually, following the evolution of some of its characteristics: crystallinity by XRD and micropore volume by nitrogen adsorption. Once characterized, the different samples are used for the adsorption of propane and propylene. Particular attention is paid thereafter to the modeling of the adsorption isotherms. In this vein, various localized and mobile adsorption isotherm equations, some taking into account adsorbate-adsorbate interactions, are used to describe the experimental isotherms. We also used the Toth equation, a mathematical model with three parameters whose adjustment requires nonlinear regression. The last part is dedicated to the study of the acid properties of Cu(x)X, Zn(x)X and CuZn(x)X, by adsorption-desorption of pyridine followed by IR. The effect of substitution of Na+ by Cu2+ and/or Zn2+ cations at different rates on the crystallinity and on the textural properties is treated. 
Some results on the morphology of the crystallites and on the thermal effects during a temperature rise, obtained by scanning electron microscopy and DTA-TGA thermal analysis respectively, are also reported. The acidity of the different samples was also studied; thus, the nature and strength of each type of acidity are estimated. The evaluation of these various features provides a comparison between Cu(x)X, Zn(x)X and CuZn(x)X. A study of the adsorption of C3H8 and C3H6 on NaX, Cu(x)X, Zn(x)X and CuZn(x)X has been undertaken.
Keywords: adsorption, acidity, ion exchange, zeolite
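The three-parameter Toth equation mentioned above has the form q = q_m · b·p / (1 + (b·p)^t)^(1/t). A minimal sketch of evaluating and adjusting it is below; the uptake data are synthetic and a coarse grid search stands in for the nonlinear regression used in the study:

```python
import numpy as np

def toth(p, q_m, b, t):
    """Toth isotherm: q = q_m * b * p / (1 + (b*p)**t)**(1/t)."""
    return q_m * b * p / (1.0 + (b * p)**t)**(1.0 / t)

# Hypothetical propane uptake on an exchanged X zeolite (synthetic data
# generated from known parameters, so the fit can be checked exactly).
p = np.linspace(0.01, 1.0, 40)
q_obs = toth(p, q_m=3.0, b=5.0, t=0.7)

# Coarse grid search minimising the sum of squared residuals.
best = min(((qm, b, t) for qm in (2.0, 3.0, 4.0)
            for b in (1.0, 5.0, 10.0)
            for t in (0.5, 0.7, 0.9)),
           key=lambda prm: float(np.sum((toth(p, *prm) - q_obs)**2)))
```

In practice the parameters would be refined with a proper nonlinear least-squares routine rather than a grid, as the abstract notes.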
Procedia PDF Downloads 197
6050 Dynamic Analysis of Turbine Foundation
Authors: Mogens Saberi
Abstract:
This paper presents different approaches for the design of turbine foundations. In the design process, several unknown factors must be considered, such as the soil stiffness at the site. The main static and dynamic loads are presented, and the results of a dynamic simulation are given for a turbine foundation that is currently being built. A turbine foundation is an important part of a power plant, since non-optimal behavior of the foundation can damage the turbine itself and thereby stop power production, with large consequences.
Keywords: dynamic turbine design, harmonic response analysis, practical turbine design experience, concrete foundation
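The harmonic response analysis named in the keywords can be illustrated with the single-degree-of-freedom amplification factor |H(r)| = 1 / sqrt((1 - r²)² + (2ζr)²), where r is the ratio of the machine's operating frequency to the foundation's natural frequency and ζ the damping ratio. This is a standard textbook check, not the paper's full model, and the numbers are hypothetical:

```python
import math

def dynamic_amplification(freq_ratio: float, damping_ratio: float) -> float:
    """|H| for a single-degree-of-freedom foundation under harmonic load."""
    r, z = freq_ratio, damping_ratio
    return 1.0 / math.sqrt((1.0 - r**2)**2 + (2.0 * z * r)**2)

# Hypothetical check: an operating speed well below resonance keeps the
# amplification of the static deflection modest; at resonance it is 1/(2*z).
daf_operating = dynamic_amplification(0.5, 0.05)
daf_resonance = dynamic_amplification(1.0, 0.05)
```

The design goal is to tune the foundation stiffness (hence natural frequency) so the turbine's operating speed range avoids the resonance peak.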
Procedia PDF Downloads 316
6049 Micro-Filtration with an Inorganic Membrane
Authors: Benyamina, Ouldabess, Bensalah
Abstract:
The aim of this study is to use a membrane technique for the filtration of a coloring solution. The preparation of the micro-filtration membranes is based on a low-cost natural clay powder deposited on macro-porous ceramic supports. The micro-filtration membrane provided a very large permeation flux. Indeed, the filtration effectiveness of the membrane was proved by the total discoloration of a bromothymol blue solution with an initial concentration of 10⁻³ mg/L after the first minutes.
Keywords: inorganic membrane, micro-filtration, coloring solution, natural clay powder
Procedia PDF Downloads 513
6048 Evaluation of Duncan-Chang Deformation Parameters of Granular Fill Materials Using Non-Invasive Seismic Wave Methods
Authors: Ehsan Pegah, Huabei Liu
Abstract:
Characterizing the deformation properties of fill materials over a wide stress range has always been an important issue in geotechnical engineering. The hyperbolic Duncan-Chang model is a very popular stress-strain relationship that captures the nonlinear deformation of granular geomaterials in a tractable manner. It involves a particular set of model parameters, which are generally measured through an extensive series of laboratory triaxial tests. This practice is both time-consuming and costly, especially in large projects; in addition, soil disturbance during the sampling procedure may introduce a large degree of uncertainty into the results. Accordingly, non-invasive geophysical seismic surveys may be utilized as an appropriate alternative for measuring the model parameters on the basis of seismic wave velocities. To this end, conventional seismic refraction profiles were carried out at test sites with granular fill materials to collect seismic wave data. The acquired shot gathers were processed to derive the P- and S-wave velocities: the P-wave velocities were extracted using the Seismic Refraction Tomography (SRT) technique, while the S-wave velocities were obtained by the Multichannel Analysis of Surface Waves (MASW) method. The velocity values were then combined with equations from the theories of elasticity and soil mechanics to evaluate the Duncan-Chang model parameters. The derived parameters were finally compared with those from laboratory tests to validate the reliability of the results. The findings of this study may serve as useful references for the determination of the nonlinear deformation parameters of granular fill geomaterials; the surface seismic methods are environmentally friendly and economical, and can yield accurate results under actual in-situ conditions.
Keywords: Duncan-Chang deformation parameters, granular fill materials, seismic wave velocity, multichannel analysis of surface waves, seismic refraction tomography
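The link between measured wave velocities and elastic parameters rests on standard elasticity relations: the small-strain shear modulus is G = ρVs², Poisson's ratio follows from the Vp/Vs ratio, and Young's modulus is E = 2G(1 + ν). A minimal sketch of this step, using illustrative (not site-measured) values:

```python
def elastic_params(vp, vs, rho):
    """Small-strain elastic parameters from P- and S-wave velocities.

    vp, vs : wave velocities [m/s]; rho : bulk density [kg/m^3].
    Returns (G, nu, E): shear modulus [Pa], Poisson's ratio, Young's modulus [Pa].
    """
    g = rho * vs**2                                    # shear modulus
    nu = (vp**2 - 2 * vs**2) / (2 * (vp**2 - vs**2))   # Poisson's ratio
    e = 2 * g * (1 + nu)                               # Young's modulus
    return g, nu, e

# Illustrative values for a compacted granular fill (hypothetical)
g, nu, e = elastic_params(vp=800.0, vs=400.0, rho=1800.0)
print(f"G = {g/1e6:.0f} MPa, nu = {nu:.3f}, E = {e/1e6:.0f} MPa")
# -> G = 288 MPa, nu = 0.333, E = 768 MPa
```

The Duncan-Chang parameters themselves (e.g. the modulus number and exponent) are then fitted from how these moduli vary with confining stress; the exact mapping used in the paper is not reproduced here.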
Procedia PDF Downloads 182
6047 Construction Port Requirements for Floating Wind Turbines
Authors: Alan Crowle, Philipp Thies
Abstract:
As the floating offshore wind turbine industry continues to develop and grow, the capabilities of established port facilities need to be assessed with respect to their ability to support the expanding construction and installation requirements. This paper assesses current infrastructure requirements and the projected changes to port facilities that may be required to support the floating offshore wind industry. Understanding the infrastructure needs of the floating offshore renewables industry will help to identify the port-related requirements. Floating offshore wind turbines can be installed further out to sea and in deeper waters than traditional fixed offshore wind arrays, and can therefore take advantage of stronger winds. Separate ports are required for substructure construction, fit-out of the turbines, moorings, subsea cables and maintenance. Large areas are required for the laydown of mooring equipment, inter-array cables, turbine blades and nacelles. The capabilities of established port facilities to support floating wind farms are assessed by evaluating the size of the substructures, the height of the wind turbine with regard to the cranes needed for fitting the blades, the distance to the offshore site, and the characteristics of offshore installation vessels. The paper discusses the advantages and disadvantages of using large land-based cranes, inshore floating crane vessels or offshore crane vessels at the fit-out port for the installation of the turbine. Water depth requirements for the import of materials and the export of completed structures are also considered. There are additional costs associated with any emerging technology; however, part of the popularity of floating offshore wind turbines stems from the cost savings compared with permanent structures such as fixed wind turbines.
Floating offshore wind turbine developers can benefit from lighter, more cost-effective equipment that can be assembled in port and towed to site, rather than relying on large, expensive installation vessels to transport and erect fixed-bottom turbines. The ability to assemble floating offshore wind turbine equipment onshore minimizes highly weather-dependent operations such as offshore heavy lifts and assembly, saving time and cost and reducing safety risks for offshore workers. Maintenance of barges and semi-submersibles can take place in safer onshore conditions. Offshore renewables, such as floating wind, can take advantage of this wealth of experience, while oil and gas operators can deploy that experience as they enter the renewables space. The floating offshore wind industry is in the early stages of development, and port facilities are required for substructure fabrication, turbine manufacture, turbine construction and maintenance support. The paper discusses the potential floating wind substructures, as this provides a snapshot of the requirements at the present time and of the technological developments required for commercial deployment. Scaling effects of demonstration-scale projects will be addressed; however, the primary focus will be on commercial-scale (30+ units) floating wind energy farms.
Keywords: floating wind, port, marine construction, offshore renewables
Procedia PDF Downloads 291