Search results for: large woody debris
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7179

6039 Single Stage “Fix and Flap” Orthoplastic Approach to Severe Open Tibial Fractures: A Systematic Review of the Outcomes

Authors: Taylor Harris

Abstract:

Gustilo-Anderson grade III tibial fractures are exceptionally difficult injuries to manage, as they require extensive soft tissue repair in addition to fracture fixation. These injuries are best managed collaboratively by orthopedic and plastic surgeons. While an orthoplastic approach has decreased the rates of adverse outcomes in these injuries, there is considerable variation in exactly how an orthoplastic team approaches such complex cases. It is sometimes recommended that definitive bone fixation and soft tissue coverage be completed simultaneously in a single stage, but there is a paucity of large-scale studies to support this recommendation. The aim of this study is to report the outcomes of a single-stage "fix-and-flap" approach through a systematic review of the available literature, in the hope of better informing an evidence-based orthoplastic approach to managing open tibial fractures. A systematic review of the literature was performed using Medline and Google Scholar; all studies published in English since 2000 were included, and 103 studies were initially evaluated for inclusion. Reference lists of all included studies were also examined for potentially eligible studies. Gustilo grade III tibial shaft fractures in adults managed with a single-stage orthoplastic approach were identified and evaluated with regard to the outcomes of interest. Exclusion criteria were studies with patients under 16 years old, case studies, systematic reviews, and meta-analyses. Primary outcomes of interest were the rates of deep infection and of limb salvage. Secondary outcomes of interest included time to bone union, rates of non-union, and rates of re-operation. Fifteen studies were eligible. Eleven reported rates of deep infection, ranging from 0.98% to 20%, with a pooled rate of 7.34%. Seven studies reported rates of limb salvage, ranging from 96.25% to 100%, with a pooled rate of 97.8%. Six reported rates of non-union, ranging from 0% to 14% (pooled rate 6.6%); six reported time to bone union, ranging from 24 to 40.3 weeks (pooled average 34.2 weeks); and four reported rates of re-operation, ranging from 7% to 55% (pooled rate 31.1%). The few studies that directly compared single-stage and multi-stage approaches unanimously favored the single-stage approach. Gustilo grade III open tibial fractures managed with a specifically single-stage orthoplastic approach show low rates of adverse outcomes. Large-scale studies of orthoplastic collaboration not completed strictly in a single stage, or completed in multiple stages, have not reported outcomes as favorable. We recommend not only that orthopedic and plastic surgeons collaborate in the management of severe open tibial fractures, but that they plan to perform definitive fixation and coverage in a single stage for improved outcomes.
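Pooled rates of the kind reported above can be reproduced with a simple sample-size-weighted pooled proportion. The abstract does not state which pooling model was used, so this is a minimal fixed-effect sketch, and the study counts in the example are invented for illustration only:

```python
def pooled_rate(studies):
    """Sample-size-weighted pooled proportion (%) across studies.

    `studies` is a list of (events, sample_size) tuples, one per study.
    """
    events = sum(e for e, n in studies)
    total = sum(n for e, n in studies)
    return 100.0 * events / total

# Hypothetical deep-infection counts from three studies (illustrative only).
example = [(2, 204), (5, 60), (4, 20)]
rate = pooled_rate(example)
```

A random-effects model would weight studies differently; the sketch only shows the simplest interpretation of "pooled rate between studies".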

Keywords: orthoplastic, gustilo grade iii, single-stage, trauma, systematic review

Procedia PDF Downloads 86
6038 Temperature Effect on Changing of Electrical Impedance and Permittivity of Ouargla (Algeria) Dunes Sand at Different Frequencies

Authors: Naamane Remita, Mohammed laïd Mechri, Nouredine Zekri, Smaïl Chihi

Abstract:

The goal of this study is the estimation of the real and imaginary components of both the electrical impedance (z', z'') and the permittivity (ε', ε'') of Ouargla dune sand at different temperatures and frequencies, under an alternating-current (AC) excitation of 1 volt, using impedance spectroscopy (IS). This method is simple and non-destructive, and its results can frequently be correlated with a number of physical and dielectric properties and with the impact of composition on the electrical conductivity of solids. The experimental results revealed that the real part of the impedance is higher at higher temperatures in the low-frequency region and gradually decreases with increasing frequency. At high frequencies, all values of the real part of the impedance were positive. At low frequency, however, the values of the imaginary part were positive at all temperatures except 1200 °C, where they were negative. At medium frequencies, the reactance values were negative at temperatures of 25, 400, 200, and 600 °C, then became positive at the remaining temperatures. At high frequencies, of the order of MHz, the values of the imaginary part of the electrical impedance behaved opposite to what was recorded at the middle frequencies. The results showed that the electrical permittivity decreases with increasing frequency: at low frequency, permittivity values of the order of 10¹¹ were recorded; at medium frequencies, of the order of 10⁷; and at high frequencies, of the order of 10². The real part of the electrical permittivity took large values at temperatures of 200 and 600 °C at the lowest frequency, while the smallest permittivity value was recorded at 400 °C at the highest frequency.
The imaginary part of the electrical permittivity likewise takes large values at the lowest frequency and then decreases as the frequency increases (the higher the frequency, the lower the imaginary part of the permittivity). The character of the electrical impedance variation offers an opportunity to determine the polarization of Ouargla dune sand and whether this material consumes or produces energy. It is also possible to determine a satisfactory equivalent electrical circuit and whether its behaviour is inductive or capacitive.
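The conversion from a measured complex impedance to a relative complex permittivity can be sketched as follows. The study does not give its sample geometry, so a parallel-plate cell is assumed here, and the electrode area, sample thickness, and impedance values in the example are purely illustrative:

```python
import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def complex_permittivity(z, freq, area, thickness):
    """Relative complex permittivity from a measured impedance.

    Assumes a parallel-plate geometry, so eps* = d / (j*w*eps0*A*Z).
    z: complex impedance Z' + jZ'' (ohm); freq in Hz; area in m^2;
    thickness in m. Returns (eps', eps'') with eps* = eps' - j*eps''.
    """
    w = 2 * np.pi * freq
    eps = thickness / (1j * w * EPS0 * area * z)
    return eps.real, -eps.imag

# Illustrative values (not from the paper): 1 kHz, 1 cm^2 electrode, 2 mm sample.
er, ei = complex_permittivity(1e6 - 1j * 5e5, 1e3, 1e-4, 2e-3)
```

For a purely capacitive sample this reduces to ε' = C·d/(ε₀·A) with ε'' ≈ 0, which is a convenient sanity check on the geometry factors.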

Keywords: electrical impedance, electrical permittivity, temperature, impedance spectroscopy, dunes sand ouargla

Procedia PDF Downloads 48
6037 The Role of Architectural Firms in Enhancing Building Energy Efficiency in Emerging Countries: Processes and Tools Evaluation of Architectural Firms in Egypt

Authors: Mahmoud F. Mohamadin, Ahmed Abdel Malek, Wessam Said

Abstract:

Achieving energy-efficient architecture in general, and in emerging countries in particular, is a challenging process that requires the contribution of various governmental, institutional, and individual entities. The role of architectural design is essential in this process, as it is one of the earliest steps on the road to sustainability. Architectural firms have a moral and professional responsibility to respond to these challenges and deliver buildings that consume less energy. This study aims to evaluate the design processes and tools in practice at Egyptian architectural firms, based on a limited survey, to investigate whether their processes and methods can lead to projects that meet the Egyptian Code of Energy Efficiency Improvement. A case study of twenty architectural firms in Cairo was selected and categorized by scale: large, medium, and small. A questionnaire was designed and distributed to the firms, and personal meetings with the firms' representatives took place. The questionnaire addressed three main points: the design processes adopted, the use of performance-based simulation tools, and the use of BIM tools for energy-efficiency purposes. The results revealed that only a small percentage of the large-scale firms have clear strategies for building energy efficiency in their building design, and even there the application is limited to certain project types or to cases where the client requests it. The percentage is much lower among medium-scale firms and almost absent among small-scale ones. This demonstrates the urgent need to raise the awareness of the Egyptian architectural design community of the great importance of implementing these methods from the early stages of building design. Finally, the study proposes recommendations for such firms to help create a healthy built environment and improve the quality of life in emerging countries.

Keywords: architectural firms, emerging countries, energy efficiency, performance-based simulation tools

Procedia PDF Downloads 283
6036 Satellite Derived Evapotranspiration and Turbulent Heat Fluxes Using Surface Energy Balance System (SEBS)

Authors: Muhammad Tayyab Afzal, Muhammad Arslan, Mirza Muhammad Waqar

Abstract:

One of the key components of the water cycle is evapotranspiration (ET), which represents water consumption by vegetated and non-vegetated surfaces. Conventional techniques for measuring ET are point-based and representative of the local scale only. Satellite remote sensing data, with large area coverage and high temporal frequency, provide representative measurements of several relevant biophysical parameters required for estimating ET at regional scales. The objective of this research is to exploit satellite data in order to estimate evapotranspiration. This study uses the Surface Energy Balance System (SEBS) model to calculate daily actual evapotranspiration (ETa) in Larkana District, Sindh, Pakistan, using Landsat TM data for cloud-free days. As there is no flux tower in the study area for direct measurement of latent heat flux (evapotranspiration) or sensible heat flux, the model estimates of ET were compared with reference evapotranspiration (ETo) computed by the FAO-56 Penman-Monteith method using meteorological data. For a country like Pakistan, irrigated agriculture in the river basins is the largest user of fresh water. For better assessment and management of irrigation water requirements, estimating the consumptive use of water for agriculture is very important, because agriculture is the main consumer of water. ET is also a central issue in the water balance, being a major loss path for irrigation water and for precipitation on cropland. As a large amount of irrigation water is lost through ET, its accurate estimation can support efficient management of irrigation water. Results of this study can be used to analyse surface conditions, i.e., temperature, energy budgets, and related characteristics. With this information, vegetation health and agricultural suitability can be monitored and steps taken to increase agricultural production.
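The FAO-56 Penman-Monteith reference evapotranspiration used for comparison can be sketched directly from its standard daily form. The input values and the sea-level psychrometric constant below are illustrative assumptions, not data from the study:

```python
import math

def fao56_eto(t_mean, rn, g, u2, es, ea, gamma=0.067):
    """Daily reference evapotranspiration (mm/day), FAO-56 Penman-Monteith.

    t_mean: mean air temperature (deg C); rn: net radiation (MJ m-2 day-1);
    g: soil heat flux (MJ m-2 day-1); u2: wind speed at 2 m height (m/s);
    es / ea: saturation / actual vapour pressure (kPa); gamma: psychrometric
    constant (kPa per deg C, roughly 0.067 at sea level).
    """
    # Slope of the saturation vapour pressure curve (kPa per deg C)
    delta = 4098 * (0.6108 * math.exp(17.27 * t_mean / (t_mean + 237.3))) \
            / (t_mean + 237.3) ** 2
    num = 0.408 * delta * (rn - g) \
          + gamma * (900 / (t_mean + 273)) * u2 * (es - ea)
    return num / (delta + gamma * (1 + 0.34 * u2))
```

Comparing SEBS-derived ETa against this ETo is then a point-wise check at the meteorological station's location.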

Keywords: SEBS, remote sensing, evapotranspiration, ETa

Procedia PDF Downloads 333
6035 GIS Mapping of Sheep Population and Distribution Pattern in the Derived Savannah of Nigeria

Authors: Sosina Adedayo O., Babyemi Olaniyi J.

Abstract:

The location, population, and distribution pattern of sheep pose serious challenges to agribusiness investment and policy formulation in the livestock industry. There is a significant disconnect between farmers' needs and the policy framework for ameliorating sheep production constraints, and information on the population, production, and distribution pattern of sheep remains scanty. A multi-stage sampling technique was used to elicit information from 180 purposively selected respondents in a study area comprising Oluyole, Ona-ara, Akinyele, Egbeda, Ido, and Ibarapa East LGAs. The Global Positioning System (GPS) coordinates of the farmers' locations (distribution) and the average sheep herd sizes in Total Livestock Units (TLU; population) were recorded, taking the longitude and latitude of the locations in question. The recorded GPS data were transferred into ArcGIS and processed using the ArcGIS 10.0 model. Sheep population and distribution (TLU) ranged from 4.1 (Oluyole) to 25.0 (Ibarapa East), with Oluyole, Akinyele, Ona-ara, and Egbeda having TLUs of 5, 7, 8, and 20, respectively. Herd sizes were classified as less than 8 (smallholder), 9-25 (medium), 26-50 (large), and above 50 (commercial). The largest group of farmers (45%) were smallholders. The crude protein (CP, %) of the feed resources ranged from 5.81±0.26 (cassava leaf) to 24.91±0.91 (Amaranthus spinosus); NDF (%) ranged from 22.38±4.43 (Amaranthus spinosus) to 67.96±2.58 (Althemanthe dedentata); and ME ranged from 7.88±0.24 (Althemanthe dedentata) to 10.68±0.18 (cassava leaf). Smallholder sheep farmers predominated and were evenly distributed across rural areas, owing to the availability of abundant feed resources (crop residues, tree crops, shrubs, natural pastures, and feed ingredients) coupled with the large expanse of land in the study area. Most of the available feed resources were below the protein requirement of sheep; hence supplementation is necessary for productivity.
Bio-informatics can provide relevant information on sheep production for policy frameworks and intervention strategies.
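The herd-size bands stated above translate directly into a classification rule. Note the source leaves a TLU of exactly 8 between the "less than 8" and "9-25" bands; the sketch below groups it with smallholders, which is an assumption:

```python
def herd_class(tlu):
    """Classify a herd by Total Livestock Units using the study's bands.

    Bands from the abstract: <8 smallholder, 9-25 medium, 26-50 large,
    >50 commercial. A TLU of exactly 8 is unassigned in the source and is
    grouped with smallholders here (assumption).
    """
    if tlu <= 8:
        return "smallholder"
    if tlu <= 25:
        return "medium"
    if tlu <= 50:
        return "large"
    return "commercial"
```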

Keywords: sheep enterprise, agribusiness investment, policy, bio-informatics, ecological zone

Procedia PDF Downloads 82
6034 Enhanced Disk-Based Databases towards Improved Hybrid in-Memory Systems

Authors: Samuel Kaspi, Sitalakshmi Venkatraman

Abstract:

In-memory database systems are becoming popular due to the availability and affordability of sufficiently large RAM and processors in modern high-end servers, with the capacity to manage large in-memory database transactions. While fast and reliable in-memory systems are still being developed to overcome cache misses, CPU/IO bottlenecks, and distributed transaction costs, disk-based data stores still serve as the primary persistence layer. In addition, with the recent growth in multi-tenancy cloud applications and the associated security concerns, many organisations weigh the trade-offs and continue to require the fast and reliable transaction processing of disk-based database systems. For these organisations, the only way of increasing throughput is to improve the performance of disk-based concurrency control. This warrants a hybrid database system with the ability to selectively apply enhanced disk-based data management within the context of in-memory systems, helping to improve overall throughput. The general view is that in-memory systems substantially outperform disk-based systems. We question this assumption and examine how a modified variation of access invariance, which we call enhanced memory access (EMA), can be used to allow very high levels of concurrency in the pre-fetching of data in disk-based systems. We demonstrate how this prefetching can yield close to in-memory performance, which paves the way for improved hybrid database systems. This paper proposes a novel EMA technique and presents a comparative study between disk-based EMA systems and in-memory systems running on hardware configurations of equivalent power in terms of the number and speed of processors. The results of the experiments clearly substantiate that, when used in conjunction with all concurrency control mechanisms, EMA can increase the throughput of disk-based systems to levels quite close to those achieved by in-memory systems.
The promising results of this work show that enhanced disk-based systems facilitate improved hybrid data management within the broader context of in-memory systems.
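The abstract does not publish the EMA algorithm itself, but the underlying idea, that access invariance lets a transaction's read set be fetched concurrently before execution, so the transaction then runs against in-memory copies, can be illustrated with a minimal sketch. All names here (`run_transaction`, `fetch_page`) are hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

def run_transaction(read_set, fetch_page, cache):
    """Toy illustration of latency hiding via concurrent pre-fetching.

    `read_set` lists every page the transaction will touch, known up front
    (the access-invariance assumption). Missing pages are fetched in
    parallel, hiding individual disk latencies behind one another, and the
    transaction then executes against cached, in-memory copies only.
    `fetch_page` stands in for a slow disk read; `cache` is a plain dict.
    """
    missing = [p for p in read_set if p not in cache]
    with ThreadPoolExecutor(max_workers=8) as pool:
        for page, data in zip(missing, pool.map(fetch_page, missing)):
            cache[page] = data
    # All reads now hit the cache -- close to in-memory execution speed.
    return {p: cache[p] for p in read_set}
```

A real implementation would of course interact with the buffer manager and the concurrency control protocol; this sketch only shows why prefetching can approach in-memory latencies.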

Keywords: in-memory database, disk-based system, hybrid database, concurrency control

Procedia PDF Downloads 417
6033 Tailoring the Parameters of the Quantum MDS Codes Constructed from Constacyclic Codes

Authors: Jaskarn Singh Bhullar, Divya Taneja, Manish Gupta, Rajesh Kumar Narula

Abstract:

The existence conditions of dual-containing constacyclic codes have opened a new path for finding quantum maximum distance separable (MDS) codes. Using these conditions, the parameters of quantum MDS codes of length n=(q²+1)/2 were improved. A class of quantum MDS codes of length n=(q²+q+1)/h, where h>1 is an odd prime, has also been constructed with large minimum distance; these codes are new in the sense that they are not available in the literature.
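For reference, the Singleton bound mentioned in the keywords can be stated explicitly; a quantum $[[n,k,d]]_q$ code satisfies

```latex
k \le n - 2(d - 1),
```

and a quantum MDS code is one attaining equality, so its minimum distance is

```latex
d = \frac{n - k}{2} + 1 .
```

Improving the "parameters" of a quantum MDS code of a given length therefore means pushing $d$ (equivalently, lowering $k$'s slack) up to this bound.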

Keywords: hermitian construction, constacyclic codes, cyclotomic cosets, quantum MDS codes, singleton bound

Procedia PDF Downloads 388
6032 E4D-MP: Time-Lapse Multiphysics Simulation and Joint Inversion Toolset for Large-Scale Subsurface Imaging

Authors: Zhuanfang Fred Zhang, Tim C. Johnson, Yilin Fang, Chris E. Strickland

Abstract:

A variety of geophysical techniques are available to image the opaque subsurface with little or no contact with the soil. It is common to conduct time-lapse surveys of different types for a given site to improve subsurface imaging. Regardless of the chosen survey methods, processing the massive amount of survey data is often a challenge. Currently available software applications generally assume one-dimensional conditions and are designed for desktop personal computers. Hence, they are usually incapable of imaging three-dimensional (3D) processes and variables in the subsurface at reasonable spatial scales, and the maximum amount of data that can be inverted simultaneously is often very small due to the limitations of personal computers. High-performance, integrative software that enables real-time integration of multiple geophysical methods is therefore needed. E4D-MP enables the integration and inversion of time-lapse, large-scale survey data from geophysical methods. Using supercomputing capability and a parallel computation algorithm, E4D-MP can process data across vast spatiotemporal scales in near real time. The main code and the modules of E4D-MP for inverting individual or combined data sets of time-lapse 3D electrical resistivity, spectral induced polarization, and gravity surveys have been developed and demonstrated for subsurface imaging. E4D-MP provides the capability of imaging the processes (e.g., liquid or gas flow, solute transport, cavity development) and subsurface properties (e.g., rock/soil density, conductivity) critical for successful environmental engineering efforts such as environmental remediation, carbon sequestration, geothermal exploration, and mine land reclamation, among others.

Keywords: gravity survey, high-performance computing, sub-surface monitoring, electrical resistivity tomography

Procedia PDF Downloads 157
6031 The Impact of Heat Waves on Human Health: State of Art in Italy

Authors: Vito Telesca, Giuseppina A. Giorgio

Abstract:

The earth system is subject to a wide range of human activities that have changed the ecosystem more rapidly and extensively in the last five decades. These global changes have a large impact on human health. The relationship between extreme weather events and mortality is widely documented in different studies. In particular, a number of studies have investigated the relationship between climatological variations and the cardiovascular and respiratory systems. Researchers have become interested in evaluating the effect of environmental variations on the occurrence of different diseases (such as infarction, ischemic heart disease, asthma, and respiratory problems) and on mortality. Among changes in weather conditions, heat waves have been used to investigate the association between weather conditions and cardiovascular and cerebrovascular events, using thermal indices that combine air temperature, relative humidity, and wind speed. The effects of heat waves on human health are found mainly in urban areas and are aggravated by atmospheric pollution. The consequences of these changes for human health are of growing concern. Meteorological conditions are a particularly important environmental aspect because cardiovascular diseases are more common among the elderly, who are more sensitive to weather changes. In addition, heat waves, or extreme heat events, are predicted to increase in frequency, intensity, and duration with climate change. In this context, the connections between public health and climate change, increasingly recognized by medical research, are very important, because they can help inform the public at large. Policy experts argue that a growing awareness of the relationship between public health and climate change could be key to breaking through the political logjams impeding action on mitigation and adaptation.
The aims of this study are to investigate the interactions between weather variables and their effects on human health, focusing on Italy, and to highlight the need to define strategies and practical actions for monitoring, adaptation, and mitigation of the phenomenon.
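One widely used thermal index of the kind described above, combining air temperature, relative humidity, and wind speed, is Steadman's apparent temperature in its non-radiation form. The abstract does not specify which index the reviewed studies use, so this is purely an illustration of the class of index:

```python
import math

def apparent_temperature(ta, rh, ws):
    """Steadman-type apparent temperature (deg C), non-radiation form.

    ta: air temperature (deg C); rh: relative humidity (%);
    ws: wind speed at 10 m (m/s). Humidity raises the perceived
    temperature via the vapour pressure term e; wind lowers it.
    """
    e = (rh / 100.0) * 6.105 * math.exp(17.27 * ta / (237.7 + ta))  # hPa
    return ta + 0.33 * e - 0.70 * ws - 4.00
```

Heat-wave definitions then typically flag runs of consecutive days on which such an index exceeds a climatological percentile threshold.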

Keywords: climate change, illness, Italy, temperature, weather

Procedia PDF Downloads 247
6030 Development of Digital Twin Concept to Detect Abnormal Changes in Structural Behaviour

Authors: Shady Adib, Vladimir Vinogradov, Peter Gosling

Abstract:

Digital Twin (DT) technology is a new technology that appeared in the early 21st century. A DT is defined as the digital representation of a living or non-living physical asset. By connecting the physical and virtual assets, data are transmitted smoothly, allowing the virtual asset to fully represent the physical one. Although many studies have been conducted on the DT concept, there is still limited information about the ability of DT models to monitor and detect unexpected changes in structural behaviour in real time. This is due to the large computational effort required for the analysis and the excessively large amount of data transferred from sensors. This paper aims to develop the DT concept to detect abnormal changes in structural behaviour in real time using advanced modelling techniques, deep learning algorithms, and data acquisition systems, taking model uncertainties into consideration. Finite element (FE) models were first developed offline to be used with a reduced basis (RB) model order reduction technique for the construction of a low-dimensional space that speeds up the analysis during the online stage. The RB model was validated against experimental test results to establish a DT model of a two-dimensional truss. The established DT model and deep learning algorithms were used to identify the location of damage once it appeared during the online stage. Finally, the RB model was used again to identify the damage severity. It was found that using the RB model, constructed offline, speeds up the FE analysis during the online stage. The constructed RB model showed higher accuracy in predicting the damage severity, while deep learning algorithms were found to be useful for estimating the location of damage of small severity.
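The offline/online split of the reduced basis technique can be sketched as follows: offline, FE solution snapshots are compressed into a low-dimensional basis; online, the full system is Galerkin-projected onto that basis and solved cheaply. The matrix names and shapes here are generic assumptions, not the paper's actual truss model:

```python
import numpy as np

def build_reduced_basis(snapshots, r):
    """Offline stage: truncated SVD of an FE snapshot matrix.

    `snapshots` is n x m, one precomputed FE state per column; the first
    r left singular vectors form the n x r reduced basis V.
    """
    u, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return u[:, :r]

def reduce_system(K, f, V):
    """Online stage: Galerkin projection K_r q = f_r in r dimensions.

    Solves the r x r reduced system and lifts the solution back to the
    full n-dimensional space.
    """
    K_r = V.T @ K @ V
    f_r = V.T @ f
    q = np.linalg.solve(K_r, f_r)
    return V @ q
```

The online cost scales with r rather than n, which is what makes real-time anomaly detection against live sensor data feasible.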

Keywords: data acquisition system, deep learning, digital twin, model uncertainties, reduced basis, reduced order model

Procedia PDF Downloads 99
6029 Relationships of Plasma Lipids, Lipoproteins and Cardiovascular Outcomes with Climatic Variations: A Large 8-Year Period Brazilian Study

Authors: Vanessa H. S. Zago, Ana Maria H. de Avila, Paula P. Costa, Welington Corozolla, Liriam S. Teixeira, Eliana C. de Faria

Abstract:

Objectives: The outcome of cardiovascular disease is affected by environment and climate. This study evaluated the possible relationships between climatic and environmental changes and the occurrence of biological rhythms in serum lipids and lipoproteins in a large population sample in the city of Campinas, State of Sao Paulo, Brazil. In addition, it determined the temporal variation of deaths due to atherosclerotic events in Campinas during the time window examined. Methods: A large 8-year retrospective study was carried out to evaluate the lipid profiles of individuals attended at the University of Campinas (Unicamp). The study population comprised 27,543 individuals of both sexes and of all ages. Normolipidemic and dyslipidemic individuals, classified according to the Brazilian guidelines on dyslipidemias, participated in the study. For the same period, temperature, relative humidity, and daily brightness records were obtained from the Centro de Pesquisas Meteorologicas e Climaticas Aplicadas a Agricultura/Unicamp, and frequencies of death due to atherosclerotic events in Campinas were acquired from the Brazilian official database DATASUS, according to the International Classification of Diseases. Statistical analyses were performed using both Cosinor and ARIMA temporal analysis methods. For cross-correlation analysis between climatic and lipid parameters, cross-correlation functions were used. Results: Preliminary results indicated that rhythmicity was significant for LDL-C and HDL-C in both normolipidemic and dyslipidemic subjects (n = 11,892 and 15,651, respectively), both measures increasing in the winter and decreasing in the summer. In contrast, in dyslipidemic subjects triglycerides increased in summer and decreased in winter, whereas in normolipidemic subjects triglycerides showed no rhythmicity.
The number of deaths due to atherosclerotic events showed significant rhythmicity, with maximum and minimum frequencies in winter and summer, respectively. Cross-correlation analyses showed that low humidity and temperature, higher thermal amplitude, and dark cycles are associated with increased levels of LDL-C and HDL-C during winter. In contrast, TG showed moderate cross-correlations with temperature and minimum humidity in the opposite direction: maximum temperature and humidity increased TG during the summer. Conclusions: This study showed a coincident rhythmicity between low temperatures and high concentrations of LDL-C and HDL-C and the number of deaths due to atherosclerotic cardiovascular events in individuals from the city of Campinas. The opposite behaviour of cholesterol and TG suggests different physiological mechanisms in their metabolic modulation by changes in climate parameters. New analyses are therefore underway to better elucidate these mechanisms, as well as the variations in lipid concentrations in relation to climatic variations and their associations with atherosclerotic disease and death outcomes in Campinas.
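The single-component Cosinor method used here fits a sinusoid of known period to a time series by linear least squares. A minimal sketch, with the annual period and all data purely illustrative:

```python
import numpy as np

def cosinor_fit(t, y, period=365.25):
    """Single-component cosinor: y ~ M + R*cos(w*t - phi), w = 2*pi/period.

    Linearised as M + b*cos(w t) + g*sin(w t) and solved by ordinary least
    squares. Returns (mesor M, amplitude R, acrophase phi in radians).
    """
    w = 2 * np.pi / period
    X = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
    m, b, g = np.linalg.lstsq(X, y, rcond=None)[0]
    return m, np.hypot(b, g), np.arctan2(g, b)
```

A significant amplitude R (tested, e.g., against a zero-amplitude null) is what "significant rhythmicity" of LDL-C or HDL-C means in this framework.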

Keywords: atherosclerosis, climatic variations, lipids and lipoproteins, associations

Procedia PDF Downloads 117
6028 Mastering Digital Transformation with the Strategy Tandem Innovation Inside-Out/Outside-In: An Approach to Drive New Business Models, Services and Products in the Digital Age

Authors: S. N. Susenburger, D. Boecker

Abstract:

In the age of Volatility, Uncertainty, Complexity, and Ambiguity (VUCA), where digital transformation is challenging long-standing traditional hardware and manufacturing companies, innovation needs a different methodology, strategy, mindset, and culture. What used to be a mindset of scaling by quantity is now shifting to orchestrating ecosystems, platform business models, and service bundles. While large corporations are trying to mimic the nimbleness and versatile mindset of startups at the core of their digital strategies, they are facing one of the largest organizational and cultural changes in their history. This paper elaborates on how a manufacturing giant transformed its corporate information technology (IT) to enable digital and Internet of Things (IoT) business while establishing the mindset and approaches of the Innovation Inside-Out/Outside-In strategy. It gives insights into the core elements of an innovation culture and the tactics and methodologies leveraged to support the cultural shift and the transformation into an IoT company, and it outlines how the persona of the 'Connected Engineer' thrives in the digital innovation environment. Further, it explores how tapping domain-focused ecosystems in vibrant, innovative cities can be used as part of the strategy to facilitate partner co-innovation. Findings from several use cases, observations, and surveys led to conclusions about the Innovation Inside-Out/Outside-In strategy tandem. The findings indicate that the phases and maturity levels at which the strategy is activated are crucial: cultural aspects of the business and of the regional ecosystem need to be considered, as well as the cultural readiness of management and active contributors.
The 'not invented here' syndrome is a barrier in large corporations that needs to be addressed and managed in order to drive partnerships successfully, embrace co-innovation, and shift the mindset away from physical products toward new business models, services, and IoT platforms. The paper elaborates on various methodologies and approaches tested in different countries and cultures, including the U.S., Brazil, Mexico, and Germany.

Keywords: innovation management, innovation culture, innovation methodologies, digital transformation

Procedia PDF Downloads 144
6027 Rethinking Urban Voids: An Investigation beneath the Kathipara Flyover, Chennai into a Transit Hub by Adaptive Utilization of Space

Authors: V. Jayanthi

Abstract:

Urbanization and its pace have increased tremendously in the last few decades; more towns are now being converted into cities. The urbanization trend is seen all over the world but is most dominant in Asia. Today, the scale of urbanization in India is so large that Indian cities, including Bangalore, Hyderabad, Pune, Chennai, Delhi, and Mumbai, are among the fastest-growing in the world. Urbanization remains the single predominant factor continuously linked to the destruction of urban green spaces. With reference to Chennai as a case study, a city suffering rapid deterioration of its green spaces, this paper explores the key factors besides urbanization that are responsible for their destruction. The paper relied on triangulated data collection techniques such as interviews, focus group discussions, personal observation, and retrieval of archival data. It was observed that, apart from urbanization, problems of green-space land ownership, the low priority given to green spaces, poor maintenance, weak enforcement of development controls, wastage of underpass spaces, and uncooperative attitudes of the general public play a critical role in the destruction of urban green spaces. The paper therefore concludes that broader city development plans are essential for a city to have proper, sustainable urban green space. Though rapid urbanization is an indicator of positive development, it is also accompanied by a host of challenges. Chennai lost a great deal of greenery as the city urbanized rapidly, leading to a steep fall in vegetation cover. Environmental deterioration will be the price paid if Chennai continues to grow at the expense of greenery.
Soaring skyscrapers, multi-storied complexes, gated communities, and villas frame the iconic skyline of today's Chennai, revealing how the importance of the green cover, needed to balance the city's urban and lung spaces, is overlooked. Chennai, with a clumped landscape at the center of the city, is predicted to convert 36% of its total area into urban areas by 2026. One major issue is that a city designed and planned in isolation creates underused, neglected spaces throughout. These urban voids are dead, underused, or unused spaces in cities, formed by inefficient decision-making, poor land management, and poor coordination. Urban voids have huge potential for creating a stronger urban fabric: they can be exploited as public gathering spaces, pocket parks, or plazas, or can simply enhance the public realm, rather than serving for the dumping of debris and encroachments. Flyovers need to justify their existence by being more than just traffic and transport solutions. The vast, unused space below the Kathipara flyover is a case in point; this flyover connects three major routes: Tambaram, Koyambedu, and Adyar. This research focuses on the concept of urban voids and on how the neglected spaces beneath flyovers can be used in the placemaking process and become part of the urban realm through urban design and landscaping.

Keywords: landscape design, flyovers, public spaces, reclaiming lost spaces, urban voids

Procedia PDF Downloads 282
6026 Quantifying Uncertainties in an Archetype-Based Building Stock Energy Model by Use of Individual Building Models

Authors: Morten Brøgger, Kim Wittchen

Abstract:

Focus on reducing energy consumption in existing buildings at large scale, e.g. in cities or countries, has been increasing in recent years. In order to reduce energy consumption in existing buildings, political incentive schemes are put in place and large-scale investments are made by utility companies. Prioritising these investments requires a comprehensive overview of the energy consumption in the existing building stock, as well as of potential energy savings. However, a building stock comprises thousands of buildings with different characteristics, making it difficult to model energy consumption accurately. Moreover, the complexity of the building stock makes it difficult to convey model results to policymakers and other stakeholders. In order to manage this complexity, building archetypes are often employed in building stock energy models (BSEMs). Building archetypes are formed by segmenting the building stock according to specific characteristics. Segmenting the building stock according to building type and building age is common, among other things because this information is often easily available. This segmentation also makes it easy to convey results to non-experts. However, using a single archetypical building to represent all buildings in a segment of the building stock is associated with a loss of detail: thermal characteristics are aggregated while other characteristics, which could affect the energy efficiency of a building, are disregarded. Thus, using a simplified representation of the building stock could come at the expense of the accuracy of the model. The present study evaluates the accuracy of a conventional archetype-based BSEM that segments the building stock according to building type and age. The accuracy is evaluated in terms of the archetypes' ability to emulate the average energy demands of the buildings they are meant to represent. 
This is done for the buildings' energy demands as a whole as well as for relevant sub-demands. Both are evaluated in relation to the type and age of the building. This should provide researchers who use archetypes in BSEMs with an indication of the expected accuracy of the conventional archetype model, as well as of the accuracy lost in specific parts of the calculation due to use of the archetype method.
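The accuracy question the study raises can be illustrated with a small sketch (all figures are hypothetical, not taken from the study): each archetype carries a single demand value, which generally differs from the true mean demand of the buildings in its segment.

```python
# Sketch: error introduced when one archetype represents a building segment.
# All numbers are illustrative, not from the study.

# Individual buildings: (type, construction period, heat demand in kWh/m2/yr)
buildings = [
    ("detached", "1961-1972", 155), ("detached", "1961-1972", 170),
    ("detached", "1961-1972", 120), ("apartment", "1961-1972", 95),
    ("apartment", "1961-1972", 110), ("apartment", "1961-1972", 80),
]

# Archetype demand: e.g. modelled from aggregated (averaged) characteristics,
# which need not equal the average demand of the segment's buildings.
archetypes = {("detached", "1961-1972"): 140, ("apartment", "1961-1972"): 100}

for key, archetype_demand in archetypes.items():
    segment = [d for (t, p, d) in buildings if (t, p) == key]
    true_mean = sum(segment) / len(segment)
    error = archetype_demand - true_mean  # accuracy lost by aggregation
    print(key, round(true_mean, 1), round(error, 1))
```

Comparing `archetype_demand` against `true_mean` per segment is the core of the evaluation described above, applied here to toy data.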

Keywords: building stock energy modelling, energy-savings, archetype

Procedia PDF Downloads 154
6025 Window Opening Behavior in High-Density Housing Development in Subtropical Climate

Authors: Minjung Maing, Sibei Liu

Abstract:

This research discusses the results of a study of window opening behavior in large housing developments in the high-density megacity of Hong Kong. The study involved field observations using photo documentation of the four cardinal elevations (north, south, east, and west) of two large housing developments in a very dense urban area of approximately 46,000 persons per square kilometre within the city of Hong Kong. The targeted housing developments (A and B) are large lower-income public housing estates, each with a population of about 13,000. However, the mean income level in development A is about 40% higher than in development B, and home ownership is 60% in development A versus 0% in development B. The surrounding amenities and the layout of the developments were also mapped to understand the activities available to residents. Photo documentation of the elevations was collected from November 2016 to February 2018 to cover a full spectrum of seasons, in both the morning and the afternoon. From the photographs, window opening behavior was measured by counting the number of windows opened as a percentage of all windows on that façade. For each survey date, temperature, humidity, and wind speed were recorded from weather stations located in the same region. To further understand the behavior, simulation studies of the microclimate conditions of the housing developments were conducted using ENVI-met, a simulation tool widely used by researchers studying urban climate. Four major conclusions can be drawn from the data analysis and simulation results. Firstly, there is little change in the amount of window opening across the seasons, within a temperature range of 10 to 35 degrees Celsius. This means that people who tend to open their windows have consistent window opening behavior throughout the year and a high tolerance of indoor thermal conditions. 
Secondly, on all four elevations the lower-income development B opened more windows (almost twice as many units) than the higher-income development A, meaning that window opening behavior correlated strongly with income level. Thirdly, there is a lack of correlation between outdoor horizontal wind speed and window opening behavior, as changes in wind speed do not appear to affect the action of opening windows in most conditions. Vertical wind speed likewise cannot explain the window opening behavior of occupants. Fourthly, there is a slightly higher average of window opening on the south elevation than on the north elevation, which may be because the south elevation is well shaded from the high-angle sun during summer while admitting heat from the lower-angle sun during winter. These findings provide insight into how to better design urban environments and indoor thermal environments for a liveable high-density city.
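The opening-ratio metric described above is simple to state; a minimal sketch follows (the façade counts are invented for illustration, not observed data):

```python
# Sketch: window-opening percentage per facade from photo counts.
# Counts below are hypothetical, not from the Hong Kong survey.
def opening_pct(opened, total):
    """Opened windows as a percentage of all windows on a facade."""
    return 100.0 * opened / total

facade_counts = {"north": (62, 480), "south": (88, 480),
                 "east": (70, 480), "west": (65, 480)}
for facade, (opened, total) in facade_counts.items():
    print(facade, round(opening_pct(opened, total), 1))
```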

Keywords: high-density housing, subtropical climate, urban behavior, window opening

Procedia PDF Downloads 125
6024 Evaluation of the Gasification Process for the Generation of Syngas Using Solid Waste at the Autónoma de Colombia University

Authors: Yeraldin Galindo, Soraida Mora

Abstract:

Solid urban waste represents one of the largest sources of global environmental pollution due to the large quantities produced every day; thus, the elimination of such waste is a major problem for environmental authorities, who must look for alternatives that reduce the volume of waste, with the possibility of energy recovery. At the Autónoma de Colombia University, approximately 423.27 kg/d of solid waste are generated, mainly paper, cardboard, and plastic. A large share of this solid waste ends up in the city's sanitary landfill, wasting its energy potential; this, added to the emissions generated by its collection and transport, increases atmospheric pollutants. One of the alternative processes used in recent years to generate electrical energy from solid waste such as paper, cardboard, plastic and, mainly, organic waste or biomass, replacing the use of fossil fuels, is gasification. This is a thermal conversion process for biomass whose objective is to generate a combustible gas through a series of chemical reactions driven by the addition of heat and reaction agents. This project was developed to give an energetic use to the waste (paper, cardboard, and plastic) produced inside the university, using it to generate a synthesis gas with a gasifier prototype. The gas produced was evaluated to determine its suitability for electricity generation or as a raw material for the chemical industry. In this process, air was used as the gasifying agent. The characterization of the synthesis gas was carried out by gas chromatography performed by the Chemical Engineering Laboratory of the National University of Colombia. Taking into account the results obtained, it was concluded that the gas generated is of acceptable quality in terms of the concentration of its components, but it is a gas of low calorific value. 
For this reason, the syngas generated in this project is not viable for the production of electrical energy, but rather for the production of methanol via the Fischer-Tropsch cycle.
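As a rough plausibility check on the "low calorific value" conclusion, the lower heating value (LHV) of a syngas mixture follows from simple mixture arithmetic. The composition below is hypothetical for an air-blown gasifier (it is not the measured composition from this project), and the component heating values are typical literature figures:

```python
# Sketch: lower heating value of a syngas mixture from its composition.
# Component LHVs (MJ/Nm3) are typical literature values; the mole fractions
# are hypothetical for an air-blown gasifier (high N2 dilution).
LHV = {"H2": 10.8, "CO": 12.6, "CH4": 35.8}          # MJ/Nm3
composition = {"H2": 0.12, "CO": 0.18, "CH4": 0.02,  # mole fractions
               "N2": 0.55, "CO2": 0.13}              # inerts contribute 0

lhv_mix = sum(frac * LHV.get(gas, 0.0) for gas, frac in composition.items())
print(round(lhv_mix, 2), "MJ/Nm3")
```

A mixture this dilute lands in the low single digits of MJ/Nm³, far below natural gas (roughly 36 MJ/Nm³), which is why air-blown syngas is usually classed as a low-calorific-value gas.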

Keywords: alternative energies, gasification, gasifying agent, solid urban waste, syngas

Procedia PDF Downloads 258
6023 Banking Union: A New Step towards Completing the Economic and Monetary Union

Authors: Marijana Ivanov, Roman Šubić

Abstract:

The single rulebook, together with the Single Supervisory Mechanism and the Single Resolution Mechanism as the two main pillars of the banking union, represents an important step towards completing the Economic and Monetary Union. It should provide consistent application of common rules and administrative standards for the supervision, recovery, and resolution of banks, with the final aim that the former practice of bail-out is replaced with a bail-in system, through which bank failures will be resolved with banks' own funds, i.e. with minimal costs for taxpayers and the real economy. It has to reduce the financial fragmentation recorded in the crisis years as a result of divergent behavior in risk premia, lending activity, and interest rates between the core and the periphery. In addition, it should strengthen the effectiveness of monetary transmission channels, in particular the credit channel, and the flow of liquidity on the single interbank money market. However, contrary to all the positive expectations for the future functioning of the banking union, low and unbalanced economic growth rates remain a challenge for the maintenance of financial stability in the euro area, and this problem cannot be resolved by single supervision alone. In many countries bank assets exceed GDP several times over, and large banks remain a matter of concern because of their systemic importance for individual countries and the euro area as a whole. The creation of the SSM and the SRM should increase the transparency of the banking system in the euro area and restore the confidence that was disturbed during the crisis. It would provide a new opportunity to strengthen the economic and financial systems of the peripheral countries. 
On the other hand, there is a potential threat that the future focus of the ECB, the resolution mechanism, and other relevant institutions will be oriented predominantly towards the large and significant banks (half of which operate in the core and most important euro area countries), while it is questionable to what extent the common resolution funds will be used to rescue less significant institutions.

Keywords: banking union, financial integration, single supervision mechanism (SSM)

Procedia PDF Downloads 470
6022 Environmental Forensic Analysis of the Shoreline Microplastics Debris on the Limbe Coastline, Cameroon

Authors: Ndumbe Eric Esongami, Manga Veronica Ebot, Foba Josepha Tendo, Yengong Fabrice Lamfu, Tiku David Tambe

Abstract:

The prevalence and unpleasant nature of plastic pollution, constantly observed on beach shores after stormy events, has prompted researchers worldwide to work on sustainable economic and environmental designs for plastics, especially in Cameroon, a major touristic destination in the Central Africa Region. The inconsistent protocols developed by researchers have added to this burden; thus, the morphological characterization of microplastics for remediation is a cause for concern. The prime aim of the study is to morphologically identify, quantify, and forensically understand the distribution of each plastic polymer composition. Duplicate 2 × 2 m (4 m²) quadrats were sampled at each beach per month over an 8-month period across five purposively selected beaches along the Limbe - Idenau coastline, Cameroon. Collected plastic samples were thoroughly washed and separated using a 2 mm sieve. Only particles of size < 2 mm were considered and then taken through the microplastics laboratory analytical process. Established step-by-step methodological procedures of particle filtration, organic matter digestion, density separation, particle extraction, and polymer identification, including microscopy, were applied to the beach microplastics samples. Microplastics were observed in every sample/beach/month, with an overall abundance of 241 particles weighing 89.15 g in total, a mean abundance of 2 particles/m² (0.69 g/m²), and 6 particles/month (2.0 g/m²). The accumulation of beach shoreline MPs rose dramatically towards decreasing size, with microbeads and fibres found only in the < 1 mm size fraction. Approximately 75% of beach MPs contamination (by average particle number) was found at the LDB 2, LDB 1, and IDN beaches, while the polymer types most frequently observed across all morphological parameters analysed were PP, PE, and PS. Beach MPs accumulation varied significantly both temporally and spatially at p = 0.05. 
ANOVA and Spearman's rank correlation showed linear relationships between the size categories considered in this study. In terms of polymer analysis, the colour class showed that white MPs were dominant at 50 particles (22.25 g), with abundances by number of PP (25), PE (15), and PS (5). The shape class revealed that irregularly shaped MPs were dominant at 98 particles (30.5 g), with higher abundances by number of PP (39), PE (33), and PS (11). Similarly, the type class showed that fragmented MPs were dominant at 80 particles (25.25 g), with higher abundances by number of PP (30), PE (28), and PS (15). Equally, the size class revealed that MPs in the 1.5 - 1.99 mm range had the highest abundance at 102 particles (51.77 g), with higher concentrations of PP (47), PE (41), and PS (7), and finally, the weight class showed that MPs weighing 0.01 g were dominant at 98 particles (56.57 g), with abundances by number of PP (49), PE (29), and PS (13). The forensic investigation indicated that the majority of the beach microplastics are sourced from the site or nearby areas, allowing useful conclusions to be drawn regarding the pathways of pollution. Fragmented microplastics, a significant component of the sample, were found to be sourced from recreational activities and partly from fishing boat installation and repair activities carried out close to the shore.

Keywords: forensic analysis, beach MPs, particle/number, polymer composition, Cameroon

Procedia PDF Downloads 78
6021 Investigation of Oscillation Mechanism of a Large-scale Solar Photovoltaic and Wind Hybrid Power Plant

Authors: Ting Kai Chia, Ruifeng Yan, Feifei Bai, Tapan Saha

Abstract:

This research presents a real-world power system oscillation incident in 2022, originating from a hybrid solar photovoltaic (PV) and wind renewable energy farm with a rated capacity of approximately 300 MW in Australia. The voltage and reactive power outputs recorded at the point of common coupling (PCC) oscillated in a sub-synchronous frequency region, and the oscillation persisted for approximately five hours in the network. The reactive power oscillation gradually increased over time and reached a recorded maximum of approximately 250 MVar peak-to-peak (from inductive to capacitive). The network service provider was not able to quickly identify the location of the oscillation source because the issue was widespread across the network. After the incident, the original equipment manufacturer (OEM) concluded that the oscillation was caused by the incorrect setting recovery of the hybrid power plant controller (HPPC) in the voltage and reactive power control loop after a loss-of-communication event. The voltage controller normally outputs a reactive power (Q) reference value to the Q controller, which controls the Q dispatch setpoint of the PV and wind plants in the hybrid farm. Meanwhile, a feed-forward (FF) configuration is used to bypass the Q controller in case of a loss of communication. Further study found that the FF control mode was still engaged when communication was re-established, which ultimately resulted in the oscillation event. However, there was no detailed explanation of why the FF control mode can cause instability in the hybrid farm, nor was the event duplicated in simulation to analyze the root cause of the oscillation. Therefore, this research aims to model and replicate the oscillation event in a simulation environment and investigate the underlying behavior of the HPPC and the consequent oscillation mechanism during the incident. 
The outcome of this research will provide significant benefits to the safe operation of large-scale renewable energy generators and power networks.
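Although the abstract does not give the controller equations, the destabilizing effect of a stuck feed-forward path can be illustrated with a toy discrete-time loop (not the actual HPPC): a discrete integral controller is stable only while its effective loop gain stays below 2, and an extra gain path left engaged can push it past that limit, producing a growing oscillation.

```python
# Toy sketch (not the real HPPC): a discrete integral Q-control loop.
# The stuck feed-forward path is modelled as extra gain that pushes the
# effective loop gain past the discrete stability limit (gain < 2).
def simulate(gain, steps=20, setpoint=1.0):
    q, history = 0.0, []
    for _ in range(steps):
        q += gain * (setpoint - q)   # integral action on the Q error
        history.append(q)
    return history

normal = simulate(gain=0.6)    # FF disengaged: error shrinks by |1-g| = 0.4
stuck_ff = simulate(gain=2.4)  # FF stuck on: error grows by |1-g| = 1.4,
                               # alternating sign, i.e. a growing oscillation
print(round(normal[-1], 3), round(abs(stuck_ff[-1] - 1.0), 1))
```

In this toy model the tracking error evolves as e(k+1) = (1 - gain) · e(k), so any effective gain above 2 makes the error alternate in sign while growing, qualitatively like the sustained, growing reactive power oscillation described in the incident.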

Keywords: PV, oscillation, modelling, wind

Procedia PDF Downloads 37
6020 High Level Expression of Fluorinase in Escherichia Coli and Pichia Pastoris

Authors: Lee A. Browne, K. Rumbold

Abstract:

The first fluorinating enzyme, 5'-fluoro-5'-deoxyadenosine synthase (fluorinase), was isolated from the soil bacterium Streptomyces cattleya. Such an enzyme, with the ability to catalyze C-F bond formation, presents great potential as a biocatalyst. Naturally fluorinated compounds are extremely rare in nature; as a result, the number of fluorinases identified remains relatively small, and the field of fluorination is almost completely synthetic. However, with the increasing demand for fluorinated organic compounds of commercial value in the agrochemical, pharmaceutical, and materials industries, it has become necessary to utilize biologically based methods such as biocatalysts. A key step in this process is the large-scale production of the fluorinase enzyme in quantities sufficient for industrial applications. Thus, this study aimed to optimize expression of the fluorinase enzyme in both a prokaryotic and a eukaryotic expression system in order to obtain high protein yields. The fluorinase gene was cloned into the pET 41b(+) and pPinkα-HC vectors and used to transform the expression hosts E. coli BL21(DE3) and Pichia pastoris (PichiaPink™ strains), respectively. Expression trials were conducted to select optimal conditions in both expression systems. Fluorinase catalyses a reaction between S-adenosyl-L-methionine (SAM) and fluoride ion to produce 5'-fluorodeoxyadenosine (5'-FDA) and L-methionine. The activity of the enzyme was determined by HPLC, measuring the reaction product 5'-FDA. A gradient mobile phase from 95:5 v/v 50 mM potassium phosphate buffer to a final mobile phase of 80:20 v/v 50 mM potassium phosphate buffer and acetonitrile was used. This resulted in the complete separation of SAM and 5'-FDA, which eluted at 1.3 minutes and 3.4 minutes, respectively, demonstrating that the fluorinase enzyme was active. 
Optimisation of fluorinase expression was successful in both E. coli and PichiaPink™, with high expression levels achieved in both systems. Protein production will be scaled up in PichiaPink™ using fermentation to achieve large-scale production. High-level expression of protein is essential in biocatalysis to make enzymes available for industrial applications.

Keywords: biocatalyst, expression, fluorinase, PichiaPink™

Procedia PDF Downloads 552
6019 Development of a New Method for the Evaluation of Heat Tolerant Wheat Genotypes for Genetic Studies and Wheat Breeding

Authors: Hameed Alsamadany, Nader Aryamanesh, Guijun Yan

Abstract:

Heat is one of the major abiotic stresses limiting wheat production worldwide. To identify heat-tolerant genotypes, a newly designed system was developed and used to study heat tolerance: a large plastic box holding many layers of filter paper positioned vertically, with wheat seeds sown in between, for ease of screening large numbers of wheat genotypes. A collection of 499 wheat genotypes was screened under heat-stress (35ºC) and non-stress (25ºC) conditions using the new method. Compared with non-stress conditions, a substantial and highly significant reduction in seedling length (SL) under heat stress was observed, with an average reduction of 11.7 cm (P<0.01). A damage index (DI) for each genotype, based on SL at the two temperatures, was calculated and used to rank the genotypes. Three hexaploid genotypes of Triticum aestivum [Perenjori (DI = -0.09), Pakistan W 20B (-0.18) and SST16 (-0.28)], all growing better at 35ºC than at 25ºC, were identified as extremely heat tolerant (EHT). Two hexaploid genotypes of T. aestivum [Synthetic wheat (0.93) and Stiletto (0.92)] and two tetraploid genotypes of T. turgidum ssp. dicoccoides [G3211 (0.98) and G3100 (0.93)] were identified as extremely heat susceptible (EHS). Another 14 genotypes were classified as heat tolerant (HT) and 478 as heat susceptible (HS). The extremely heat-tolerant and heat-susceptible genotypes were used to develop recombinant inbred line populations for genetic studies. Four major QTLs, HTI4D, HTI3B.1, HTI3B.2 and HTI3A, located on wheat chromosomes 4D, 3B (×2) and 3A and explaining up to 34.67%, 28.93%, 13.46% and 11.34% of the phenotypic variation, respectively, were detected. The four QTLs together accounted for 88.40% of the total phenotypic variation. 
Random wheat genotypes possessing the four heat-tolerant alleles performed significantly better under heat than those lacking them, indicating the importance of the four QTLs in conferring heat tolerance in wheat. Molecular markers are being developed for marker-assisted breeding of heat-tolerant wheat.
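The abstract does not state the DI formula. Assuming it is the relative reduction in seedling length under heat, so that genotypes growing better at 35ºC get negative values (consistent with the reported range of -0.28 to 0.98), the ranking step can be sketched with illustrative SL values:

```python
# Sketch: damage index (DI) from seedling length (SL) at 25 and 35 deg C.
# Formula is assumed to be the relative SL reduction under heat (negative
# DI = grew better at 35 C); the SL values below are illustrative only.
def damage_index(sl_control, sl_heat):
    return (sl_control - sl_heat) / sl_control

genotypes = {"tolerant-like":    (20.0, 21.8),  # SL at 25 C, SL at 35 C (cm)
             "susceptible-like": (20.0, 1.4)}
for name, (ctrl, heat) in genotypes.items():
    print(name, round(damage_index(ctrl, heat), 2))
```

With these made-up lengths the two genotypes reproduce DI values of -0.09 and 0.93, matching the magnitudes reported for the extreme classes above.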

Keywords: bread wheat, heat tolerance, screening, RILs, QTL mapping, association analysis

Procedia PDF Downloads 551
6018 Near-Miss Deep Learning Approach for Neuro-Fuzzy Risk Assessment in Pipelines

Authors: Alexander Guzman Urbina, Atsushi Aoyama

Abstract:

The sustainability of traditional technologies employed in energy and chemical infrastructure poses a big challenge for our society. In making decisions related to the safety of industrial infrastructure, accidental risk values are becoming relevant points for discussion. However, the challenge lies in the reliability of the models employed to obtain the risk data. Such models usually involve a large number of variables and large amounts of uncertainty. The most efficient techniques to overcome those problems are built using Artificial Intelligence (AI), and more specifically hybrid systems such as Neuro-Fuzzy algorithms. Therefore, this paper aims to introduce a hybrid algorithm for risk assessment trained using near-miss accident data. As mentioned above, the sustainability of traditional technologies related to energy and chemical infrastructure constitutes one of the major challenges that today's societies and firms are facing. Besides that, the adaptation of those technologies to the effects of climate change in sensitive environments represents a critical concern for safety and risk management. On this issue, it has been argued that the social consequences of catastrophic risks are increasing rapidly, due mainly to the concentration of people and energy infrastructure in hazard-prone areas, aggravated by a lack of knowledge about the risks. In addition to these social consequences, and considering the industrial sector as critical infrastructure because of the large economic impact of a failure, industrial safety has become a critical issue for society. Regarding this safety concern, pipeline operators and regulators have been performing risk assessments in attempts to evaluate accurately the probabilities of failure of the infrastructure and the consequences associated with those failures. 
However, estimating accidental risks in critical infrastructure involves substantial effort and cost due to the number of variables involved, the complexity, and the lack of information. Therefore, this paper aims to introduce a well-trained algorithm for risk assessment using deep learning, capable of dealing efficiently with this complexity and uncertainty. The advantage of deep learning on near-miss accident data is that it can serve in risk assessment as an efficient engineering tool to treat the uncertainty of risk values in complex environments. The basic idea of the Near-Miss Deep Learning Approach for Neuro-Fuzzy Risk Assessment in Pipelines is to improve the validity of risk values by learning from near-miss accidents and imitating human expertise in scoring risks and setting tolerance levels. In summary, the method involves a regression analysis called the group method of data handling (GMDH), which consists in determining the optimal configuration of the risk assessment model and its parameters using polynomial theory.
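The GMDH step can be sketched in miniature (an illustrative toy, not the authors' implementation): fit a candidate polynomial for every pair of inputs and keep the candidate with the lowest error on held-out validation data, which is GMDH's "external criterion" idea.

```python
# Minimal GMDH-style sketch: fit candidate bilinear polynomials
# y = a0 + a1*x1 + a2*x2 + a3*x1*x2 for each input pair and keep the
# pair with the lowest validation error. Toy data, not pipeline data.
from itertools import combinations

def solve(A, b):                      # Gauss-Jordan for small systems
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def fit_pair(X, y, i, j):             # least squares via normal equations
    F = [[1.0, x[i], x[j], x[i] * x[j]] for x in X]
    A = [[sum(f[r] * f[c] for f in F) for c in range(4)] for r in range(4)]
    b = [sum(f[r] * yi for f, yi in zip(F, y)) for r in range(4)]
    a = solve(A, b)
    return lambda x: a[0] + a[1] * x[i] + a[2] * x[j] + a[3] * x[i] * x[j]

# Toy training/validation data with known structure: y = 2 + x0*x1
train = [((1.0, 2.0, 0.5), 4.0), ((2.0, 1.0, 3.0), 4.0),
         ((3.0, 2.0, 1.0), 8.0), ((1.0, 1.0, 2.0), 3.0),
         ((2.0, 3.0, 0.0), 8.0)]
valid = [((4.0, 2.0, 1.0), 10.0), ((1.0, 3.0, 2.0), 5.0)]

X, y = [t[0] for t in train], [t[1] for t in train]
best = min(combinations(range(3), 2),
           key=lambda p: sum((fit_pair(X, y, *p)(x) - t) ** 2
                             for x, t in valid))
print("selected input pair:", best)
```

The external validation split is what distinguishes GMDH-style selection from plain regression: the surviving polynomial is the one that generalizes, not the one that best fits the training points. A full GMDH stacks such selected units into further layers.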

Keywords: deep learning, risk assessment, neuro fuzzy, pipelines

Procedia PDF Downloads 292
6017 Relation of Optimal Pilot Offsets in the Shifted Constellation-Based Method for the Detection of Pilot Contamination Attacks

Authors: Dimitriya A. Mihaylova, Zlatka V. Valkova-Jarvis, Georgi L. Iliev

Abstract:

One possible approach to maintaining the security of communication systems relies on Physical Layer Security mechanisms. However, in wireless time division duplex systems, where uplink and downlink channels are reciprocal, the channel estimation procedure is exposed to attacks known as pilot contamination, whose aim is to have an enhanced data signal sent to the malicious user. The Shifted 2-N-PSK method involves two random legitimate pilots in the training phase, each belonging to a constellation shifted from the original N-PSK symbols by a certain number of degrees. In this paper, the legitimate pilots' offset values and their influence on the detection capabilities of the Shifted 2-N-PSK method are investigated. As the implementation of the technique depends on the relation between the shift angles rather than on their specific values, the optimal interconnection between the two legitimate constellations is investigated. The results show that no regularity exists in the relation between the pilot contamination attack (PCA) detection probability and the choice of offset values. Therefore, an adversary who aims to obtain the exact offset values can only employ a brute-force attack, but the large number of possible combinations of shifted constellations makes such an attack difficult to mount successfully. For this reason, the number of optimal shift value pairs is also studied, for both 100% and 98% probabilities of detecting pilot contamination attacks. Although the Shifted 2-N-PSK method has been broadly studied in different signal-to-noise ratio scenarios, in multi-cell systems the interference from signals in other cells should also be taken into account. Therefore, the impact of inter-cell interference on the performance of the method is investigated by means of a large number of simulations. The results show that the detection probability of the Shifted 2-N-PSK decreases as the signal-to-interference-plus-noise ratio decreases.
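The brute-force argument can be made concrete with a count of candidate offset pairs; the angular resolutions below are assumptions for illustration, not values from the paper:

```python
# Sketch: size of the search space an attacker faces when guessing the two
# legitimate constellation offsets. The angular resolutions are assumed
# figures for illustration; the point is the quadratic growth in candidates.
def offset_pairs(resolution_deg):
    steps = round(360 / resolution_deg)   # candidate angles per constellation
    return steps * steps                  # independent choices for the pair

for res in (1.0, 0.5, 0.1):
    print(f"{res} deg resolution -> {offset_pairs(res):,} candidate pairs")
```

Even at a coarse 1-degree grid the attacker faces over 10^5 pair candidates, and since the results above show no regularity linking offsets to detection probability, the search cannot be pruned.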

Keywords: channel estimation, inter-cell interference, pilot contamination attacks, wireless communications

Procedia PDF Downloads 217
6016 Risk Assessment in Construction Management with “Fuzzy Logic”

Authors: Mehrdad Abkenari, Orod Zarrinkafsh, Mohsen Ramezan Shirazi

Abstract:

Construction projects are initiated in complicated, dynamic environments and, owing to the close relationships between project parameters and an unknown outer environment, they face several uncertainties and risks. Success in time, cost, and quality in large-scale construction projects is uncertain as a consequence of technological constraints, the large number of stakeholders, long durations, great capital requirements, and poor definition of the extent and scope of the project. Projects facing such environments and uncertainties can be well managed through application of the concept of risk management across the project's life cycle. Although the concept of risk depends on managerial opinion and judgment, it also encompasses the risk of not achieving the project objectives. Furthermore, project risk analysis considers the risk of developing inappropriate responses. Since the evaluation and prioritization of construction projects is a difficult task, a network structure is considered an appropriate approach for analyzing complex systems; therefore, we have used this structure for analyzing and modeling the issue. On the other hand, we face a shortage of data in deterministic circumstances, and experts' opinions are usually mathematically vague, introduced in the form of linguistic variables instead of numerical expressions. Since fuzzy logic is used for expressing vagueness and uncertainty, formulating experts' opinions in the form of fuzzy numbers is an appropriate approach. In other words, the evaluation and prioritization of construction projects on the basis of risk factors in the real world is a complicated issue with many ambiguous qualitative characteristics. 
In this study, risk parameters and factors in construction management were evaluated and prioritized with a fuzzy logic method combining three techniques: DEMATEL (Decision-Making Trial and Evaluation Laboratory), ANP (Analytic Network Process), and TOPSIS (Technique for Order Preference by Similarity to Ideal Solution).
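As a simplified stand-in for the fuzzy DEMATEL-ANP-TOPSIS chain, a crisp (non-fuzzy) TOPSIS ranking of three hypothetical risks shows the final prioritization step; all scores and weights below are invented:

```python
# Sketch: crisp TOPSIS ranking of three hypothetical project risks.
# Scores (1-10, higher = riskier) and weights are illustrative; in the
# study the inputs would come from fuzzy DEMATEL/ANP instead.
from math import sqrt

risks = ["schedule delay", "cost overrun", "design change"]
scores = [[7.0, 6.0, 8.0],      # criteria: probability, impact, exposure
          [5.0, 9.0, 6.0],
          [4.0, 5.0, 3.0]]
weights = [0.5, 0.3, 0.2]       # e.g. derived from an ANP step

# 1) vector-normalize each column, then apply criterion weights
norms = [sqrt(sum(row[j] ** 2 for row in scores)) for j in range(3)]
V = [[w * v / n for v, n, w in zip(row, norms, weights)] for row in scores]

# 2) ideal (riskiest) and anti-ideal points, 3) closeness coefficients
ideal = [max(col) for col in zip(*V)]
anti = [min(col) for col in zip(*V)]
def dist(a, b): return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
closeness = [dist(v, anti) / (dist(v, anti) + dist(v, ideal)) for v in V]

ranking = sorted(zip(risks, closeness), key=lambda t: -t[1])
print(ranking)  # highest closeness = highest-priority risk
```

The fuzzy variant replaces the crisp scores with fuzzy numbers (e.g. triangular) and the distances with fuzzy distance measures, but the normalize-weight-rank pipeline is the same.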

Keywords: fuzzy logic, risk, prioritization, assessment

Procedia PDF Downloads 594
6015 Approaches to Reduce the Complexity of Mathematical Models for the Operational Optimization of Large-Scale Virtual Power Plants in Public Energy Supply

Authors: Thomas Weber, Nina Strobel, Thomas Kohne, Eberhard Abele

Abstract:

In the context of the energy transition in Germany, the importance of so-called virtual power plants in the energy supply continues to increase. The progressive dismantling of large power plants and the ongoing construction of many new decentralized plants result in great potential for optimization through synergies between the individual plants. These potentials can be exploited by mathematical optimization algorithms that calculate the optimal dispatch planning of decentralized power and heat generators and storage systems, including via linear or mixed-integer linear optimization. In this paper, procedures for reducing the number of decision variables to be calculated are explained and validated. The first combines n similar installations of one type into one aggregated unit, which is described by the same constraints and objective function terms as a single plant. This reduces the number of decision variables per time step, and thereby the complexity of the problem to be solved, by a factor of n. The exact operating mode of the individual plants can then be calculated in a second optimization, such that the combined output of the individual plants corresponds to the calculated output of the aggregated unit. The second way to reduce the number of decision variables is to reduce the number of time steps to be calculated. This is useful when a high temporal resolution is not necessary for all time steps; for example, the volatility or the forecast quality of environmental parameters may justify a high or low temporal resolution of the optimization. Both approaches are examined with regard to the resulting calculation time as well as optimality. Several optimization models of virtual power plants (combined heat and power plants, heat storage, power storage, gas turbines) with different numbers of plants are used as references for the investigation of both procedures.
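The first reduction technique can be sketched as follows (a toy with invented plant data, not the paper's model): treat one aggregated unit per time step in stage one, then split its dispatch back onto the n identical units in stage two:

```python
# Sketch of the aggregation idea (illustrative, not the paper's MILP):
# n identical units are replaced by one aggregated unit in stage 1,
# shrinking per-timestep decision variables from n to 1; stage 2 then
# splits the aggregated dispatch back onto the individual units.
n_plants, p_min, p_max = 10, 2.0, 5.0            # MW per unit (invented)
timesteps = 24

# Stage 1: one variable per timestep for the aggregate; bounds scale by n.
# The "optimal" aggregate schedule is faked here with a simple heuristic.
demand = [min(n_plants * p_max, 20.0 + t) for t in range(timesteps)]
aggregate = [max(n_plants * p_min, min(n_plants * p_max, d)) for d in demand]

# Stage 2: disaggregate - run as few units as possible at feasible output.
def disaggregate(p_agg):
    for k in range(1, n_plants + 1):             # try k active units
        if k * p_min <= p_agg <= k * p_max:
            return [p_agg / k] * k + [0.0] * (n_plants - k)
    raise ValueError("infeasible aggregate dispatch")

schedule = [disaggregate(p) for p in aggregate]
vars_full, vars_reduced = n_plants * timesteps, timesteps
print(vars_full, "->", vars_reduced, "stage-1 decision variables")
```

The stage-1 problem carries 24 instead of 240 dispatch variables here, and stage 2 guarantees that the individual unit outputs sum exactly to the aggregated schedule, mirroring the two-step procedure described above.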

Keywords: CHP, Energy 4.0, energy storage, MILP, optimization, virtual power plant

6014 Diagnosis, Treatment, and Prognosis in Cutaneous Anaplastic Lymphoma Kinase-Positive Anaplastic Large Cell Lymphoma: A Narrative Review Apropos of a Case

Authors: Laura Gleason, Sahithi Talasila, Lauren Banner, Ladan Afifi, Neda Nikbakht

Abstract:

Primary cutaneous anaplastic large cell lymphoma (pcALCL) accounts for 9% of all cutaneous T-cell lymphomas. pcALCL classically presents as a solitary papulonodule that often enlarges, ulcerates, and can be locally destructive, but it follows an indolent course, with 5-year survival estimated at 90%. Distinguishing pcALCL from systemic ALCL (sALCL) is essential, as sALCL confers a poorer prognosis, with average 5-year survival of 40-50%. Although extremely rare, several cases of ALK-positive ALCL have been diagnosed on skin biopsy without evidence of systemic involvement, which poses challenges in the classification, prognostication, treatment, and follow-up of these patients. Objectives: We present a case of cutaneous ALK-positive ALCL without evidence of systemic involvement, together with a narrative review of the literature, to support the view that ALK-positive ALCL limited to the skin is a distinct variant with a unique presentation, history, and prognosis. A 30-year-old woman presented for evaluation of an erythematous-violaceous papule on her right chest of two months' duration. After multifocal disease and persistent lymphadenopathy developed, a bone marrow biopsy and a lymph node excisional biopsy were performed to assess for systemic disease. Both biopsies were unrevealing. Given the concern for sALCL, the patient was counseled on pursuing systemic therapy consisting of Brentuximab, Cyclophosphamide, Doxorubicin, and Prednisone. Apropos of this patient, we searched the English literature for clinically evident cutaneous ALK-positive ALCL cases with and without systemic involvement. Risk factors, such as tumor location, number, size, ALK localization, ALK translocations, and recurrence, were evaluated in cases of cutaneous ALK-positive ALCL. The majority of patients with cutaneous ALK-positive ALCL did not progress to systemic disease.
The majority of adult cases that progressed to systemic disease had recurring skin lesions and cytoplasmic localization of ALK. ALK translocations did not influence disease progression. Mean time to disease progression was 16.7 months, and significant mortality (50%) was observed in the cases that progressed to systemic disease. Pediatric cases did not exhibit a trend similar to adult cases. In both the adult and pediatric groups, a subset of cutaneous-limited ALK-positive ALCL cases were treated with chemotherapy, and none of these progressed to systemic disease. Apropos of an ALK-positive ALCL patient with clinically cutaneous-limited disease in the histologic presence of systemic markers, we discuss the literature, highlighting the crucial issues in developing a clinical strategy for this rare subtype of ALCL. Physicians need to be aware of the overall spectrum of ALCL, including cutaneous-limited disease, systemic disease, disease with NPM-ALK translocation, disease with ALK and EMA positivity, and disease with skin recurrence.

Keywords: anaplastic large cell lymphoma, systemic, cutaneous, anaplastic lymphoma kinase, ALK, ALCL, sALCL, pcALCL, cALCL

6013 Modeling of Conjugate Heat Transfer including Radiation in a Kerosene/Air Certification Burner

Authors: Lancelot Boulet, Pierre Benard, Ghislain Lartigue, Vincent Moureau, Nicolas Chauvet, Sheddia Didorally

Abstract:

International aeronautic standards demand a fire certification for engines demonstrating their fire resistance. This demonstration relies on tests performed with prototype engines in the late stages of development. The most demanding tests require placing a standardized kerosene flame in front of the engine casing for a given time at imposed temperature and heat flux. The purpose of this work is to provide a better characterization of a kerosene/air certification burner in order to minimize the risk of test failure. A first Large-Eddy Simulation (LES) study made it possible to model and simulate this burner with both adiabatic and Conjugate Heat Transfer (CHT) computations. The computations, carried out with the finite-volume YALES2 code on unstructured grids of 40 million tetrahedral cells, coupled spray combustion, forced convection on the walls, and conduction in the solid parts of the burner to achieve a detailed description of heat transfer. This study highlighted that conduction inside the solid has a real impact on the flame topology and the combustion regime. However, in the absence of radiative heat transfer, unrealistic equipment temperatures were obtained. The aim of the present study is to include radiative heat transfer in order to match the temperatures given by experimental measurements. First, various test cases are conducted to validate the coupling between the different heat solvers. Then, the adiabatic case, the CHT case, and the CHT case including radiative transfer are studied and compared. The LES model is finally applied to investigate the heat transfer in a flame-impingement configuration. The aim is to advance fire-test modeling so as to reach a good level of confidence in the success of the certification test.
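Why neglecting radiation overpredicts equipment temperature can be seen from a deliberately simplified 0-D wall energy balance (an illustrative sketch with assumed values for gas temperature, heat transfer coefficient, and emissivity; it is not the authors' coupled LES/CHT model): convective heating from the hot gases is balanced by radiative losses to the surroundings, and dropping the radiative term drives the wall toward the gas temperature.

```python
# Steady 0-D wall balance: h * (T_gas - T_w) = eps * sigma * (T_w^4 - T_amb^4),
# solved for T_w by bisection. With eps = 0 (no radiation) the only solution
# is T_w = T_gas, i.e. the wall heats up to the gas temperature.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def wall_temperature(t_gas, t_amb, h, eps, tol=1e-6):
    """Bisect on T_w between ambient and gas temperature (all in kelvin)."""
    lo, hi = t_amb, t_gas
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        balance = h * (t_gas - mid) - eps * SIGMA * (mid**4 - t_amb**4)
        if balance > 0:
            lo = mid   # convective heating dominates: wall equilibrates hotter
        else:
            hi = mid   # radiative cooling dominates: wall equilibrates cooler
    return 0.5 * (lo + hi)

t_rad = wall_temperature(t_gas=2000.0, t_amb=300.0, h=50.0, eps=0.8)
t_no_rad = wall_temperature(t_gas=2000.0, t_amb=300.0, h=50.0, eps=0.0)
# Including radiation lowers the predicted wall temperature substantially.
```

In the full problem the same effect appears distributed over the casing surface, which is why the radiative solver must be coupled to the convection and conduction solvers to reproduce the measured temperatures.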

Keywords: conjugate heat transfer, fire resistance test, large-eddy simulation, radiative transfer, turbulent combustion

6012 Research and Development of Net-Centric Information Sharing Platform

Authors: Wang Xiaoqing, Fang Youyuan, Zheng Yanxing, Gu Tianyang, Zong Jianjian, Tong Jinrong

Abstract:

Compared with a traditional distributed environment, the net-centric environment poses more demanding challenges for information sharing, being characterized by ultra-large scale, strong distribution, dynamism, autonomy, heterogeneity, and redundancy. This paper realizes an information sharing model and a series of core services that together provide an open, flexible, and scalable information sharing platform.

Keywords: net-centric environment, information sharing, metadata registry and catalog, cross-domain data access control

6011 Investigation of External Pressure Coefficients on Large Antenna Parabolic Reflector Using Computational Fluid Dynamics

Authors: Varun K, Pramod B. Balareddy

Abstract:

Estimation of wind forces plays a significant role in the design of large antenna parabolic reflectors. The gain of the antenna system at higher frequencies is very sensitive to reflector surface accuracy, so accurate estimation of wind forces becomes important as the primary input for the design and analysis of the reflector system. In the present work, numerical simulation of wind flow using Computational Fluid Dynamics (CFD) software is used to investigate the external pressure coefficients. An extensive comparative study has been made between the CFD results and published wind tunnel data for different wind angles of attack (α) acting over the concave and convex surfaces, respectively. Flow simulations using CFD are carried out to estimate the coefficients of drag, lift, and moment for the parabolic reflector. Pressure coefficients (Cp) are extracted over the front and rear faces of the reflector to study the net pressure variations, and these resultant pressure variations are compared with the published wind tunnel data for different angles of attack. From the CFD simulations it was observed that the convex and concave faces of the reflector system experience a band of pressure variations for positive and negative angles of attack, respectively, whereas in the published wind tunnel data the pressure variations over the convex surface are assumed to be uniform, and vice versa. Chordwise and spanwise pressure variations were calculated and compared with the published experimental data. In the present work, the maximum pressure coefficients for α ranging from +30° to -90° and for α = +90° were lower than the wind tunnel data, while for α ranging from +45° to +75° they were higher. This variation is due to the non-uniform pressure distribution observed over the front and back faces of the reflector. Variations in Cd, Cl, and Cm over α = +90° to α = -90° were in close agreement with the experimental data.
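The Cp extraction described above follows the standard definition (the free-stream values below are illustrative assumptions, not conditions from the paper): each surface pressure is normalized by the free-stream dynamic pressure, and the net loading at a point on the reflector is the difference between the front (concave) and rear (convex) face coefficients.

```python
# Standard pressure-coefficient post-processing for a surface pressure field.

def pressure_coefficient(p, p_inf, rho_inf, u_inf):
    """Cp = (p - p_inf) / (0.5 * rho_inf * u_inf**2)."""
    return (p - p_inf) / (0.5 * rho_inf * u_inf**2)

def net_cp(cp_front, cp_rear):
    """Net loading coefficient: front (concave) minus rear (convex) face."""
    return [f - r for f, r in zip(cp_front, cp_rear)]

# Example: a 200 Pa overpressure in a 20 m/s wind at sea-level density.
cp = pressure_coefficient(p=101525.0, p_inf=101325.0, rho_inf=1.225, u_inf=20.0)
# cp = 200 / (0.5 * 1.225 * 20**2) = 200 / 245 ≈ 0.816
```

Integrating the net Cp distribution over the reflector surface (weighted by the local area normal) then yields the drag, lift, and moment coefficients compared in the study.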

Keywords: angle of attack, drag coefficient, lift coefficient, pressure coefficient

6010 Designing Nickel Coated Activated Carbon (Ni/AC) Based Electrode Material for Supercapacitor Applications

Authors: Zahid Ali Ghazi

Abstract:

Supercapacitors (SCs) have emerged as promising energy storage devices because of their fast charge-discharge characteristics and high power densities. In the current study, a simple approach is used to coat activated carbon (AC) with a thin layer of nickel (Ni) by an electroless deposition process to enhance the electrochemical performance of the SC. The synergistic combination of the large surface area and high electrical conductivity of the AC with the pseudocapacitive behavior of the metallic Ni shows great potential to overcome the limitations of traditional SC materials. First, the materials were characterized using X-ray diffraction (XRD) for crystallography, scanning electron microscopy (SEM) for surface morphology, and energy-dispersive X-ray spectroscopy (EDX) for elemental analysis. The electrochemical performance of the nickel-coated activated carbon (Ni/AC) was then systematically evaluated using galvanostatic charge-discharge (GCD), cyclic voltammetry (CV), and electrochemical impedance spectroscopy (EIS). The GCD results revealed that Ni/AC has a higher specific capacitance (1559 F/g) than bare AC (222 F/g) at 1 A/g current density in a 2 M KOH electrolyte. Even at a higher current density of 20 A/g, Ni/AC retained a high capacitance of 944 F/g, compared with 77 F/g for AC. The specific capacitance calculated from CV measurements for Ni/AC at 10 mV/s (1318 F/g) was in close agreement with the GCD data. Furthermore, the bare AC exhibited a low energy density of 15 Wh/kg at a power density of 356 W/kg, whereas the Ni/AC-850 electrode achieved an energy density of 111 Wh/kg at a power density of 360 W/kg and demonstrated a long cycle life with 94% capacitance retention over 50000 charge/discharge cycles at 10 A/g. In addition, the EIS study showed that the Rs and Rct values of the Ni/AC electrodes were much lower than those of bare AC.
The superior performance of Ni/AC is mainly attributed to the presence of abundant redox-active sites, a large electroactive surface area, and the corrosion resistance of Ni. We believe that this study will provide new insights into the controlled coating of ACs and other porous materials with metals for developing high-performance SCs and other energy storage devices.
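The GCD-derived figures above come from the standard single-electrode formulas, sketched below with illustrative inputs (the electrode mass, discharge time, and voltage window are assumed for the example; the abstract does not report them):

```python
# Standard GCD post-processing:
#   C = I * dt / (m * dV)        specific capacitance [F/g]
#   E = 0.5 * C * dV**2 / 3.6    energy density [Wh/kg]; /3.6 converts J/g -> Wh/kg

def specific_capacitance(current_a, discharge_time_s, mass_g, voltage_window_v):
    """Specific capacitance from a linear galvanostatic discharge."""
    return current_a * discharge_time_s / (mass_g * voltage_window_v)

def energy_density_wh_kg(capacitance_f_g, voltage_window_v):
    """Gravimetric energy density for a given voltage window."""
    return 0.5 * capacitance_f_g * voltage_window_v**2 / 3.6

# Reproducing the reported 1559 F/g at 1 A/g would require, e.g., a 1559 s
# discharge of a 1 g electrode over an assumed 1 V window:
c = specific_capacitance(current_a=1.0, discharge_time_s=1559.0,
                         mass_g=1.0, voltage_window_v=1.0)
e = energy_density_wh_kg(c, voltage_window_v=1.0)
```

These relations also explain the rate behavior reported in the abstract: at 20 A/g the discharge time shrinks disproportionately, so the computed capacitance drops (944 F/g for Ni/AC, 77 F/g for bare AC).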

Keywords: supercapacitor, cyclic voltammetry, coating, energy density, activated carbon
