Search results for: Step
300 The Hijras of Odisha: A Study of the Self-Identity of the Eunuchs and Their Identification with Stereotypical Feminine Roles
Authors: Purnima Anjali Mohanty, Mousumi Padhi
Abstract:
Background of the study: Against the background of the passage of the Transgender Bill 2016, which is the first formal step toward recognizing the rights of transgender persons, the Hijras have been recognized under the wider definition of transgender. Interestingly, in the Hindu social context, Hijras have long held a ceremonial standing at marriages and childbirths; ironically, outside these occasions they live an ostracized life. Rather than recognizing their unique characteristics and needs, the Bill reinforces societal dualism by paralleling their legal rights with the rights available to women. Purpose of the paper: The research objective was to probe why, and to what extent, they identify themselves with feminine gender roles. Originality of the paper: In the Indian context, the subject of eunuchs has received relatively little attention. Among the studies that exist, there has been a preponderance of work from the perspective of social exclusion, rights, and physical health; research studying the self-identity of Hijras from a gender perspective has been absent. Methodology: The paper adopts the grounded theory method to investigate and discuss the underlying gender identity of transgender persons. Participants in the study were 30 Hijras from various parts of Odisha. Four focus group discussions were held to collect data. The participants were approached in their natural habitat. Following the methodological recommendations of grounded theory, care was taken to select respondents with varying experiences. The recorded discourses were transcribed verbatim. The transcripts were analysed sentence by sentence and coded. Common themes were identified, and responses were categorized under the themes. Data collected in the later group discussions were added until the themes were saturated. Finally, the themes were put together to show that, despite the demand for recognition as a third gender, the eunuchs of Odisha identify themselves with feminine roles.
Findings: The Hijras have their own social structure and norms, which are unique and contrast with the mainstream culture. These eunuchs live in KOTHIS (houses), where the family is led by a matriarch addressed as Maa (mother) together with her daughters (the daughters are eunuchs or effeminate men, both castrated and uncastrated). They all dress as women, perform womanly duties, expect to be considered and recognized as women and wives, and display the behavioural traits of women. From a feminist standpoint, one may ask on what grounds the Hijras are granted recognition as a third gender when they identify themselves with the gender 'woman'. As self-identified women, their claim for recognition as a third gender falls flat. Significance of the study: Academically, it extends the understanding of the gender identity and psychology of the Hijras in the Indian context. Practically, its significance is far-reaching: the findings can be used to address legal and social issues with regard to the rights available to the Hijras. Keywords: feminism, gender perspective, Hijras, rights, self-identity
Procedia PDF Downloads 432
299 Efficiency of Different Types of Addition onto the Hydration Kinetics of Portland Cement
Authors: Marine Regnier, Pascal Bost, Matthieu Horgnies
Abstract:
Some of the problems to be solved by the concrete industry are linked to the use of low-reactivity cement, the hardening of concrete in cold weather, and the manufacture of precast concrete without a costly heating step. These applications require accelerating the hydration kinetics in order to decrease the setting time and to obtain significant compressive strengths as soon as possible. The mechanisms enhancing the hydration kinetics of alite or Portland cement (e.g. the creation of nucleation sites) have already been studied in the literature (e.g. using distinct additions such as titanium dioxide nanoparticles, calcium carbonate fillers, water-soluble polymers, C-S-H, etc.). However, the goal of this study was to establish a clear ranking of the efficiency of several types of additions by using a robust and reproducible methodology based on isothermal calorimetry (performed at 20°C). The cement was a CEM I 52.5N PM-ES (Blaine fineness of 455 m²/kg). To ensure the reproducibility of the experiments and avoid any decrease of reactivity before use, the cement was stored in waterproof, sealed bags to avoid any contact with moisture and carbon dioxide. The experiments were performed on Portland cement pastes with a water-to-cement ratio of 0.45, incorporating different compounds (industrially available or laboratory-synthesized) that were selected according to their main composition and their specific surface area (SSA, calculated using the Brunauer-Emmett-Teller (BET) model and nitrogen adsorption isotherms performed at 77 K). The intrinsic effects of (i) dry powders (e.g. fumed silica, activated charcoal, nano-precipitates of calcium carbonate, afwillite germs, nanoparticles of iron and iron oxides, etc.) and (ii) aqueous solutions (e.g. containing calcium chloride, hydrated Portland cement or Master X-SEED 100, etc.) were investigated.
The influence of the amount of addition, calculated relative to the dry extract of each addition compared to cement (while conserving the same water-to-cement ratio), was also studied. The results demonstrated that the X-SEED®, the hydrated calcium nitrate and the calcium chloride (and, to a lesser extent, a solution of hydrated Portland cement) were able to accelerate the hydration kinetics of Portland cement, even at low concentration (e.g. 1 wt.% of dry extract compared to cement). At higher rates of addition, the fumed silica, the precipitated calcium carbonate and the titanium dioxide can also accelerate the hydration. In the case of the nano-precipitates of calcium carbonate, a correlation was established between the SSA and the accelerating effect. On the contrary, the nanoparticles of iron or iron oxides, the activated charcoal and the dried crystallised hydrates did not show any accelerating effect. Future experiments are planned to establish the ranking of these additions, in terms of accelerating effect, using low-reactivity cements and other water-to-cement ratios. Keywords: acceleration, hydration kinetics, isothermal calorimetry, Portland cement
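The calorimetric ranking described above reduces, in practice, to two summary statistics per paste: the cumulative heat released and the time of the main hydration peak (an effective accelerator shifts the peak earlier and raises the early heat release). A minimal sketch of extracting such statistics from a heat-flow curve (illustrative only; the function name, sampling and units are our assumptions, not the authors' actual data processing):

```python
def calorimetry_summary(times_h, heat_flow_mw_per_g):
    """Cumulative heat in J/g (trapezoidal rule) and the time (h) of the
    main hydration peak, from an isothermal calorimetry curve sampled as
    heat flow in mW per gram of cement."""
    cumulative_j_per_g = 0.0
    for k in range(1, len(times_h)):
        dt_s = (times_h[k] - times_h[k - 1]) * 3600.0  # hours -> seconds
        # trapezoidal mean of the two samples, mW -> W
        mean_flow_w = 0.5 * (heat_flow_mw_per_g[k] + heat_flow_mw_per_g[k - 1]) / 1000.0
        cumulative_j_per_g += mean_flow_w * dt_s
    peak_index = max(range(len(heat_flow_mw_per_g)),
                     key=lambda k: heat_flow_mw_per_g[k])
    return cumulative_j_per_g, times_h[peak_index]
```

Ranking accelerators then amounts to comparing these statistics against the reference paste: an earlier peak and higher cumulative heat at a given age indicate faster hydration kinetics.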
Procedia PDF Downloads 257
298 Linkages between Innovation Policies and SMEs' Innovation Activities: Empirical Evidence from 15 Transition Countries
Authors: Anita Richter
Abstract:
Innovation is one of the key foundations of competitive advantage, generating growth and welfare worldwide. Consequently, all firms should innovate to bring new ideas to the market. Innovation is a vital growth driver, particularly for transition countries moving towards knowledge-based, high-income economies. However, numerous barriers, such as financial, regulatory or infrastructural constraints, prevent new and small firms in transition countries, in particular, from innovating. Thus, SMEs' innovation output may benefit substantially from government support. This research paper aims to assess the effect of government interventions on innovation activities in SMEs in emerging countries. Until now, academic research on innovation policies has focused on single-country and/or high-income-country assessments, and less on cross-country and/or low- and middle-income countries. The paper therefore seeks to close this research gap by providing empirical evidence from 8,500 firms in 15 transition countries (Eastern Europe, South Caucasus, South East Europe, Middle East and North Africa). Using firm-level data from the Business Environment and Enterprise Performance Survey of the World Bank and EBRD, and policy data from the SME Policy Index of the OECD, the paper investigates how government interventions affect SMEs' likelihood of investing in technological and non-technological innovation. Using standard linear regression, the impact of government interventions on SMEs' innovation output and R&D activities is measured. The empirical analysis suggests that a firm's decision to invest in innovative activities is sensitive to government interventions. A firm's likelihood of investing in innovative activities increases by 3% to 8% if the innovation eco-system noticeably improves (measured by an increase of one level in the SME Policy Index). At the same time, a better eco-system encourages SMEs to invest more in R&D.
Government reforms establishing a dedicated policy framework (IP legislation), institutional infrastructure (science and technology parks, incubators) and financial support (public R&D grants, innovation vouchers) are particularly relevant for stimulating innovation performance in SMEs. Particular segments of the SME population, namely micro and manufacturing firms, are more likely to benefit from improved innovation framework conditions. The marginal effects are particularly strong on product, process and marketing innovation, but weaker on management innovation. In conclusion, government interventions supporting innovation will likely lead to higher innovation performance of SMEs. They increase productivity at both the firm and country level, which is a vital step in transitioning towards knowledge-based market economies. Keywords: innovation, research and development, government interventions, economic development, small and medium-sized enterprises, transition countries
Procedia PDF Downloads 324
297 Machine Learning Techniques in Seismic Risk Assessment of Structures
Authors: Farid Khosravikia, Patricia Clayton
Abstract:
The main objective of this work is to evaluate the advantages and disadvantages of various machine learning techniques in two key steps of seismic hazard and risk assessment of different types of structures. The first step is the development of ground-motion models, which are used for forecasting ground-motion intensity measures (IMs) for future events, given source characteristics, source-to-site distance, and local site conditions. IMs such as peak ground acceleration and velocity (PGA and PGV, respectively), as well as 5% damped elastic pseudo-spectral accelerations at different periods (PSA), are indicators of the strength of shaking at the ground surface. Typically, linear regression-based models, with pre-defined equations and coefficients, are used in ground-motion prediction. However, due to the restrictions of linear regression methods, such models may not capture the more complex nonlinear behaviors that exist in the data. Thus, this study comparatively investigates the potential benefits of employing other machine learning techniques as statistical methods in ground-motion prediction, such as Artificial Neural Networks, Random Forests, and Support Vector Machines. The results indicate that the algorithms satisfy some physically sound characteristics, such as magnitude scaling and distance dependency, without requiring pre-defined equations or coefficients. Moreover, it is shown that, when sufficient data are available, all the alternative algorithms tend to provide more accurate estimates than the conventional linear regression-based method, and in particular, Random Forest outperforms the other algorithms. However, the conventional method is a better tool when limited data are available. Second, it is investigated how machine learning techniques could be beneficial for developing probabilistic seismic demand models (PSDMs), which provide the relationship between the structural demand responses (e.g., component deformations, accelerations, internal forces, etc.)
and the ground-motion IMs. In the risk framework, such models are used to develop fragility curves estimating the probability of exceeding damage for pre-defined limit states, and therefore control the reliability of the predictions in the risk assessment. In this study, machine learning algorithms such as artificial neural networks, random forests, and support vector machines are adopted and trained on the demand parameters to derive PSDMs. It is observed that such models can provide more accurate predictions in a relatively shorter amount of time compared to conventional methods. Moreover, they can be used for sensitivity analysis of fragility curves with respect to many modeling parameters without necessarily requiring more intensive numerical response-history analysis. Keywords: artificial neural network, machine learning, random forest, seismic risk analysis, seismic hazard analysis, support vector machine
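The conventional baseline that the machine learning methods are compared against is a linear regression with a pre-defined functional form, for example ln(PGA) = c0 + c1·M + c2·ln(R) in magnitude M and distance R. A minimal pure-Python sketch of fitting such a model by ordinary least squares via the normal equations (the functional form and coefficient names are illustrative assumptions, not the study's actual ground-motion model):

```python
import math

def fit_gmm(records):
    """Fit ln(PGA) = c0 + c1*M + c2*ln(R) by ordinary least squares.
    records: iterable of (magnitude, distance, pga) tuples."""
    X = [[1.0, m, math.log(r)] for m, r, _ in records]
    y = [math.log(pga) for _, _, pga in records]
    # Normal equations: (X^T X) c = X^T y
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(3)] for i in range(3)]
    Xty = [sum(row[i] * yk for row, yk in zip(X, y)) for i in range(3)]
    # Gaussian elimination with partial pivoting on the augmented matrix
    A = [XtX[i][:] + [Xty[i]] for i in range(3)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            for j in range(col, 4):
                A[r][j] -= f * A[col][j]
    # Back substitution
    c = [0.0] * 3
    for i in (2, 1, 0):
        c[i] = (A[i][3] - sum(A[i][j] * c[j] for j in range(i + 1, 3))) / A[i][i]
    return c
```

The restriction the abstract points to is visible here: the model can only fit the coefficients of an equation chosen in advance, whereas the tree- and network-based alternatives learn the magnitude and distance dependencies from the data directly.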
Procedia PDF Downloads 106
296 Rainfall and Flood Forecast Models for Better Flood Relief Plan of the Mae Sot Municipality
Authors: S. Chuenchooklin, S. Taweepong, U. Pangnakorn
Abstract:
This research was conducted in the Mae Sot Watershed, located in the Moei River Basin within the Upper Salween River Basin in Tak Province, Thailand. The Mae Sot Municipality is the largest urbanized area in Tak Province and is situated in the midstream of the Mae Sot Watershed. It usually faces flash flooding after heavy rain; poor flood management has been reported since the economy boomed in recent years. Its catchment can be classified as an ungauged basin, lacking rainfall data, with no stream gauging station reported. It was struck by its most severe flood event in 2013, the worst case studied for the communities in this municipality. Moreover, the watershed faces other problems, such as water supply shortages for domestic consumption and agricultural use, as well as deterioration of water quality and landslides. The research aimed to build capacity and strengthen the participation of local community leaders and related agencies in better urban water management, starting with data collection and the demonstration of an appropriate short-period rainfall forecasting model, with the aim of a better flood relief plan and management through hydrologic modeling and river analysis programs. The authors applied global rainfall data via the Integrated Data Viewer (IDV) program from Unidata, with the aim of forecasting rainfall 7-10 days in advance during the rainy season instead of relying on real-time records. The IDV product, which provides rainfall forecasts at time steps of 3-6 hours, was introduced to the communities. The result can be used as input to either the Hydrologic Modeling System (HEC-HMS) or the Soil and Water Assessment Tool (SWAT) for synthesizing flood hydrographs and for flood forecasting.
The authors applied the River Analysis System model (HEC-RAS) to present flood flow behaviors in the reach of the Mae Sot stream through downtown Mae Sot, showing flood extents and water surface levels at every cross-sectional profile of the stream. Both the HMS and RAS models were tested against 2013 observed rainfall and inflow-outflow data from the Mae Sot Dam. The HMS results fitted the observed data at the dam and were applied as the upstream boundary discharge to RAS in order to simulate flood extents; when tested in the field, the results were satisfactory. The IDV rainfall forecast data were compared to observed data and found to be fair. Nevertheless, IDV is an appropriate tool for use in an ungauged catchment, together with flood hydrograph and river analysis models, for future efficient flood relief planning and management. Keywords: global rainfall, flood forecast, hydrologic modeling system, river analysis system
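The core operation behind synthesizing a flood hydrograph in HEC-HMS-style rainfall-runoff modeling is the convolution of effective rainfall with a unit hydrograph. A minimal sketch of that step (illustrative of the principle only, not the authors' actual HEC-HMS configuration or loss model):

```python
def flood_hydrograph(effective_rain, unit_hydrograph):
    """Discrete convolution of effective rainfall depths (one value per
    time step) with a unit hydrograph's ordinates: each rainfall pulse
    launches a scaled, time-shifted copy of the unit hydrograph, and the
    copies are summed to give the runoff hydrograph."""
    n = len(effective_rain) + len(unit_hydrograph) - 1
    flow = [0.0] * n
    for i, p in enumerate(effective_rain):
        for j, u in enumerate(unit_hydrograph):
            flow[i + j] += p * u
    return flow
```

Fed with the 3-6 hour IDV rainfall forecasts described above, this kind of convolution yields a forecast hydrograph whose peaks can then be routed through the HEC-RAS cross-sections to map flood extents.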
Procedia PDF Downloads 349
295 Biocompatibility of Calcium Phosphate Coatings With Different Crystallinity Deposited by Sputtering
Authors: Ekaterina S. Marchenko, Gulsharat A. Baigonakova, Kirill M. Dubovikov, Igor A. Khlusov
Abstract:
NiTi alloys combine favourable biomechanical and biochemical properties, which makes them strong candidates for medical applications. However, a serious problem with these alloys is the release of Ni from the matrix. Ni ions are known to be toxic to living tissues and leach from the matrix into the tissues surrounding the implant due to corrosion after prolonged use. To prevent the release of Ni ions, strongly corrosion-resistant coatings are usually used. Titanium nitride-based coatings are excellent corrosion inhibitors and also have good bioactive properties. However, there is an opportunity to improve the biochemical compatibility of the surface by depositing another layer consisting of elements such as calcium and phosphorus. Ca and P ions form various calcium phosphate phases, which are present in the mineral part of human bone; we therefore expect these elements to promote osteogenesis and osseointegration. In view of the above, the aim of this study is to investigate the effect of crystallinity on the biocompatibility of a two-layer coating deposited on a NiTi substrate by sputtering. The first step of the research, after polishing the NiTi, is the layer-by-layer deposition of Ti-Ni-Ti by magnetron sputtering and the subsequent synthesis of this composite in a nitrogen atmosphere at 900 °C. The total thickness of the corrosion-resistant layer is 150 nm. Plasma-assisted RF sputtering was then used to deposit a bioactive film on the titanium nitride layer, using a Ca-P powder target. We deposited three types of Ca-P layers with different crystallinity and compared their cytotoxicity; one group of samples had no Ca-P coating and was used as a control. Different crystallinities were obtained by varying sputtering parameters such as the bias voltage, plasma source current and pressure.
XRD analysis showed that all coatings are calcium phosphate, but the sample obtained at maximum bias voltage and plasma source current and minimum pressure has the most intense peaks from the coating phase. SEM and EDS showed that all three coatings have a homogeneous, dense structure without cracks and consist of calcium, phosphorus and oxygen. Cytotoxicity tests carried out on the three types of Ca-P-coated samples and the control group showed that the control sample and the sample with the Ca-P coating obtained at maximum bias voltage and plasma source current and minimum pressure had the lowest fraction of dead cells on the surface, around 11 ± 4%. The other two types of Ca-P-coated samples had 40 ± 9% and 21 ± 7% dead cells on the surface. It can therefore be concluded that those two sputtering modes have a negative effect on the corrosion resistance of the samples, whereas the third mode does not affect the corrosion resistance and shows the same level of cytotoxicity as the control. The most suitable sputtering mode is thus the third one, with maximum bias voltage and plasma source current and minimum pressure. Keywords: calcium phosphate coating, cytotoxicity, NiTi alloy, two-layer coating
Procedia PDF Downloads 67
294 Educational Institutional Approach for Livelihood Improvement and Sustainable Development
Authors: William Kerua
Abstract:
The PNG University of Technology (Unitech) has a mandate covering teaching, research and extension education. Given this function, the Agriculture Department established the South Pacific Institute of Sustainable Agriculture and Rural Development (SPISARD) in 2004. SPISARD is a vehicle for improving the farming systems practiced in selected villages through a pluralistic extension method, the 'educational institutional approach'. Unlike other models, SPISARD's educational institutional approach stresses improving the whole farming system in a holistic manner and has a two-fold focus. The first is to understand the farming communities and improve the productivity of their farming systems in a sustainable way, to increase income, improve nutrition and food security, and provide livelihood enhancement training. The second is to enrich the Department's curriculum through teaching, research and extension, drawing input from the farming community. SPISARD has established a number of model villages in various provinces of Papua New Guinea (PNG), with many positive outcomes and success stories. The educational institutional approach thus binds research, extension and training into one package, using students and academic staff to deliver development and extension to communities through the establishment of model villages. The SPISARD centre coordinates the activities of the model village programs and their linkages. The key to developing the farming systems is establishing and coordinating linkages, collaboration and partnerships, both within the university and with external institutions, organizations and agencies. SPISARD follows a six-step strategy for the development of sustainable agriculture and rural development.
These steps are: (i) establish contact and identify model villages; (ii) develop model village resource centres for research and training; (iii) conduct baseline surveys to identify the problems and needs of the model villages; (iv) develop solution strategies; (v) implement them; and (vi) evaluate the impact of the solution programs. SPISARD envisages that the farming systems practiced will improve if the villages are made the centre of SPISARD activities; it has therefore developed a model village approach to channel rural development. Once established, the model villages become the conduit points where teaching, training, research and technology transfer take place. This approach is different from and unique among existing ones in that the development process takes place in the farmers' environment, with immediate 'real time' feedback mechanisms based on the farmers' perspective and satisfaction. So far, we have developed 14 model villages and conducted 75 training sessions on 21 different topics in 8 provinces, reaching a total of 2,832 participants of both sexes. The aim of these trainings is to participate directly with farmers in improving their farming systems to increase productivity and income and to secure food security and nutrition, thus improving their livelihoods. Keywords: development, educational institutional approach, livelihood improvement, sustainable agriculture
Procedia PDF Downloads 154
293 Hardware Implementation on Field Programmable Gate Array of Two-Stage Algorithm for Rough Set Reduct Generation
Authors: Tomasz Grzes, Maciej Kopczynski, Jaroslaw Stepaniuk
Abstract:
The rough sets theory developed by Prof. Z. Pawlak is one of the tools that can be used in intelligent systems for data analysis and processing. Banking, medicine, image recognition and security are among the possible fields of application. In all these fields, the amount of collected data is increasing quickly, and with this increase the computation speed becomes the critical factor. Data reduction is one solution to this problem; in rough sets, redundancy can be removed with a reduct. Many algorithms for generating reducts have been developed, but most of them are software implementations only and therefore have many limitations: a microprocessor uses a fixed word length and spends considerable time fetching and processing instructions and data, so software-based implementations are relatively slow. Hardware systems do not have these limitations and can process data faster than software. A reduct is a subset of the condition attributes that preserves the discernibility of the objects; a given decision table can have more than one reduct. The core is the set of all indispensable condition attributes: none of its elements can be removed without affecting the classification power of all condition attributes, and every reduct contains all the attributes of the core. In this paper, a hardware implementation of a two-stage greedy algorithm for finding one reduct is presented. The decision table is used as input; the output of the algorithm is a superreduct, i.e., a reduct with some additional, removable attributes. The first stage of the algorithm calculates the core using the discernibility matrix. The second stage generates the superreduct by enriching the core with the most common attributes, i.e., attributes that are more frequent in the decision table.
The algorithm described above has two disadvantages: (i) it generates a superreduct instead of a reduct, and (ii) the additional first stage may be unnecessary if the core is empty. However, for systems focused on fast computation of a reduct, the first disadvantage is not a key problem. The core calculation can be achieved with a combinational logic block and thus adds relatively little time to the whole process. The algorithm presented in this paper was implemented in a Field Programmable Gate Array (FPGA) as a digital device consisting of blocks that process the data in a single step. The core is calculated by comparators connected to a block called the 'singleton detector', which detects whether the input word contains only a single 'one'. The number of occurrences of each attribute is calculated in a combinational block made up of a cascade of adders. The superreduct generation process is iterative and thus needs a sequential circuit to control the calculations. For research purposes, the algorithm was also implemented in C and run on a PC, and the execution times of the reduct calculation in hardware and software were compared. The results show an increase in data processing speed. Keywords: data reduction, digital systems design, field programmable gate array (FPGA), reduct, rough set
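As a reference point for the hardware design, the two-stage greedy algorithm can be sketched in software form: stage 1 mirrors the singleton detector (discernibility-matrix entries of size one go straight into the core), and stage 2 mirrors the adder cascade that counts attribute occurrences. A minimal illustration (function and variable names are our own; the C implementation used for the benchmark is not shown in the abstract):

```python
from collections import Counter

def discernibility_matrix(table, decision):
    """For each pair of objects with different decision values, record the
    set of condition attributes (by index) on which the objects differ."""
    entries = []
    for i in range(len(table)):
        for j in range(i + 1, len(table)):
            if decision[i] != decision[j]:
                diff = {a for a in range(len(table[i]))
                        if table[i][a] != table[j][a]}
                if diff:
                    entries.append(diff)
    return entries

def greedy_superreduct(table, decision):
    entries = discernibility_matrix(table, decision)
    # Stage 1: the core = attributes appearing as singleton entries
    # (the hardware 'singleton detector' finds these in one step).
    core = {next(iter(e)) for e in entries if len(e) == 1}
    chosen = set(core)
    remaining = [e for e in entries if not (e & chosen)]
    # Stage 2: enrich the core with the most frequent attribute among the
    # still-undiscerned pairs until every pair is discerned.
    while remaining:
        counts = Counter(a for e in remaining for a in e)
        best = counts.most_common(1)[0][0]
        chosen.add(best)
        remaining = [e for e in remaining if best not in e]
    return core, chosen
```

The result is a superreduct: it discerns all object pairs but may still contain removable attributes, which is the first disadvantage discussed above.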
Procedia PDF Downloads 219
292 Teachers’ Instructional Decisions When Teaching Geometric Transformations
Authors: Lisa Kasmer
Abstract:
Teachers’ instructional decisions shape the structure and content of mathematics lessons and influence the mathematics that students are given the opportunity to learn. Therefore, it is important to better understand how teachers make instructional decisions and thus find new ways to help practicing and future teachers give their students a more effective and robust learning experience. Understanding the relationship between teachers’ instructional decisions and their goals, resources, and orientations (beliefs) is important given the heightened focus on geometric transformations in the middle school mathematics curriculum. This work is significant because current and future teachers need more effective ways to teach geometry to their students. The following research questions frame this study: (1) As middle school mathematics teachers plan and enact instruction related to teaching transformations, what thinking processes do they engage in to make decisions about teaching transformations with or without a coordinate system? (2) How do the goals, resources and orientations of these teachers impact their instructional decisions, and what do they reveal about their understanding of teaching transformations? Teachers and students alike struggle with understanding transformations; many teachers skip or hurriedly teach transformations at the end of the school year. However, transformations are an important mathematical topic, as they support students’ understanding of geometric and spatial reasoning. Geometric transformations are a foundational concept in mathematics, not only for understanding congruence and similarity but also for proofs, algebraic functions, calculus, etc. Geometric transformations also underpin the secondary mathematics curriculum, as features of transformations transfer to other areas of mathematics.
Teachers’ instructional decisions, in terms of the goals, orientations, and resources that support them, were analyzed using open coding. Open coding is recognized as an initial first step in qualitative analysis, where comparisons are made and preliminary categories are considered. Initial codes and categories from current research on the thinking processes related to the decisions teachers make while planning and reflecting on lessons were also noted. Emerging ideas and additional themes common across teachers were compared and analyzed while seeking patterns. Finally, attributes of teachers’ goals, orientations and resources were identified in order to begin to build a picture of the reasoning behind their instructional decisions. These categories became the basis for the organization and conceptualization of the data. Preliminary results suggest that teachers often rely on their own orientations about teaching geometric transformations. These beliefs are underpinned by the teachers’ own mathematical knowledge related to teaching transformations. When a teacher does not have a robust understanding of transformations, they are limited by this lack of knowledge. These shortcomings impact students’ opportunities to learn and thus disadvantage students’ understanding of transformations. Teachers’ goals are also limited by their paucity of knowledge regarding transformations, as these goals do not fully represent the range of comprehension a teacher needs to teach this topic well. Keywords: coordinate plane, geometric transformations, instructional decisions, middle school mathematics
Procedia PDF Downloads 88
291 Spatial Conceptualization in French and Italian Speakers: A Contrastive Approach in the Context of the Linguistic Relativity Theory
Authors: Camilla Simoncelli
Abstract:
The connection between language and cognition has been one of the main interests of linguistics for several years. According to the Sapir-Whorf linguistic relativity theory, the way we perceive reality depends on the language we speak, which in turn plays a central role in human cognition. This paper is in line with this research tradition, with the aim of analyzing how language structures reflect on our cognitive abilities even in the description of space, which is generally considered a natural and universal human domain. The main objective is to identify the differences in the encoding of spatial inclusion relationships by French and Italian speakers, to show that significant variation exists at various levels even between two similar systems. Starting from the constitution of a corpus, the first step of the study was to establish the relevant complex prepositions marking an inclusion relation in French and Italian: au centre de, au cœur de, au milieu de, au sein de, à l'intérieur de and the opposition entre/parmi in French; al centro di, al cuore di, nel mezzo di, in seno a, all'interno di and the fra/tra contrast in Italian. These prepositions were classified according to the type of noun following them (e.g. mass nouns, concrete nouns, abstract nouns, body-part nouns, etc.), following the collostructional analysis of lexemes, with the purpose of analyzing the preferred construction of each preposition and comparing the relations construed. By comparing the Italian and French results, it was possible to define the degree of representativeness of each target noun for each preposition studied. Lexicostatistics and statistical association measures give the values of attraction or repulsion between lexemes and a given preposition, highlighting which words are over-represented or under-represented in a specific context compared to the expected results.
For instance, a noun such as Dibattiti has a negative value for the Italian al cuore di (-1.91) but is strongly over-represented with the corresponding French au cœur de (+677.76). The value, positive or negative, results from a hypergeometric distribution law and reflects the current use of relevant nouns in relations of spatial inclusion by French and Italian speakers. Differences in how location is conceptualized point to syntactic and semantic constraints based on spatial features, as well as on linguistic peculiarities. The aim of this paper is to demonstrate that the domain of spatial relations is basic to human experience and is linked to universally shared perceptual mechanisms which create mental representations depending on language use. Therefore, linguistic coding strongly correlates with the way spatial distinctions are conceptualized in non-verbal tasks, even in closely related language systems such as Italian and French. Keywords: cognitive semantics, cross-linguistic variations, locational terms, non-verbal spatial representations
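The signed association values quoted above (e.g. -1.91 versus +677.76) are of the kind collostructional analysis produces: a signed, log-transformed p-value from the hypergeometric (Fisher-exact) distribution over a 2x2 contingency table of noun-by-preposition counts. A minimal sketch of such a measure (illustrative; the paper's exact counts, corpus and software are not given in the abstract):

```python
from math import comb, log10

def collostruction_strength(a, b, c, d):
    """Signed -log10 Fisher-exact p-value for a 2x2 contingency table:
    a = tokens of the noun after the preposition, b = other tokens of the
    noun, c = other nouns after the preposition, d = everything else.
    Positive values mean attraction, negative values repulsion."""
    n = a + b + c + d
    row1, col1 = a + b, a + c  # marginals: noun total, preposition total

    def hyper(k):
        # hypergeometric probability of observing k co-occurrences
        return comb(col1, k) * comb(n - col1, row1 - k) / comb(n, row1)

    expected = row1 * col1 / n
    if a >= expected:  # over-represented: sum the upper tail
        p = sum(hyper(k) for k in range(a, min(row1, col1) + 1))
        return -log10(p) if p > 0 else float("inf")
    else:              # under-represented: sum the lower tail, negate sign
        p = sum(hyper(k) for k in range(0, a + 1))
        return log10(p) if p > 0 else float("-inf")
```

With counts like these, a large positive score plays the role of the +677.76 reported for Dibattiti with au cœur de, and a negative score the role of its -1.91 with al cuore di.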
Procedia PDF Downloads 113
290 Application of Industrial Ecology to the INSPIRA Zone: Territory Planification and New Activities
Authors: Mary Hanhoun, Jilla Bamarni, Anne-Sophie Bougard
Abstract:
INSPIR’ECO is an 18-month research and innovation project that aims to specify and develop a tool offering new services to industrial actors and territorial planners/managers based on industrial ecology principles. The project is carried out on the territory of Salaise-Sablons, and the services are designed to be deployable on other territories. The Salaise-Sablons area is located at the boundary of five departments on a major European economic axis with multimodal traffic (river, rail, and road). The perimeter of 330 ha includes 90 hectares occupied by 20 companies with a total of 900 jobs, and represents a significant potential basin of development. The project involves five multi-disciplinary partners (Syndicat Mixte INSPIRA, ENGIE, IDEEL, IDEAs Laboratory, and TREDI). The INSPIR’ECO project is based on the premise that local stakeholders need services to pool and share their activities, equipment, purchases, and materials. These services aim to: 1. initiate and promote exchanges between existing companies, and 2. identify synergies between pre-existing industries and future companies that could be established in INSPIRA. These eco-industrial synergies can relate to: the recovery/exchange of industrial flows (industrial wastewater, waste, by-products, etc.); the pooling of business services (collective waste management, stormwater collection and reuse, transport, etc.); the sharing of equipment (boilers, steam production, wastewater treatment units, etc.) or resources (splitting job costs, etc.); and the creation of new activities (interface activities necessary for by-product recovery, development of products or services from a newly identified resource, etc.). These services are based on an IT tool intended to support the interested local stakeholders in their decision-making. 
Thus, this IT tool: includes an economic and environmental assessment of each implantation or pooling/sharing scenario for existing or future industries; is meant for industrial and territorial managers/planners; and is designed to be used for each new industrial project. The specification of the IT tool follows an agile process throughout the INSPIR’ECO project, fed with: users' expectations, gathered in workshop sessions where mock-up interfaces are displayed; and data availability, based on a local and industrial data inventory. These inputs allow the tool to be specified not only against technical and methodological constraints (notably those of the economic and environmental assessments) but also against data availability and users' expectations. A review of innovative resource management initiatives in port areas was carried out at the beginning of the project to inform the service design step.
Keywords: development opportunities, INSPIR’ECO, INSPIRA, industrial ecology, planification, synergy identification
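The kind of scenario assessment such a tool performs can be sketched minimally as follows: each pooling/sharing scenario gets an economic and an environmental score and the scenarios are ranked. All scenario names, figures, and weights below are invented for illustration; the actual tool's criteria and data are not described in the abstract.

```python
# Hypothetical pooling/sharing scenarios with invented cost and CO2 figures.
scenarios = {
    "standalone_boilers": {"annual_cost_keur": 420, "co2_t": 1300},
    "shared_steam_network": {"annual_cost_keur": 310, "co2_t": 900},
    "shared_steam_plus_wastewater_reuse": {"annual_cost_keur": 290, "co2_t": 760},
}

def rank(scens, w_cost=0.5, w_co2=0.5):
    """Rank scenarios by a weighted, normalised cost/CO2 score (lower is better)."""
    max_cost = max(s["annual_cost_keur"] for s in scens.values())
    max_co2 = max(s["co2_t"] for s in scens.values())

    def score(s):
        return (w_cost * s["annual_cost_keur"] / max_cost
                + w_co2 * s["co2_t"] / max_co2)

    return sorted(scens, key=lambda name: score(scens[name]))

best = rank(scenarios)[0]
```

In practice the weights would reflect stakeholder priorities, which is precisely the kind of trade-off the workshop sessions are meant to surface.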
289 Investigating the Flow Physics within Vortex-Shockwave Interactions
Authors: Frederick Ferguson, Dehua Feng, Yang Gao
Abstract:
No doubt, current CFD tools have many technical limitations, and active research is being done to overcome them. Current areas of limitation include vortex-dominated flows, separated flows, and turbulent flows. In general, turbulent flows are unsteady solutions of the fluid dynamic equations, and instances of these solutions can be computed directly from the equations. One commonly implemented approach is known as ‘direct numerical simulation’, DNS. This approach requires a spatial grid fine enough to capture the smallest length scale of the turbulent fluid motion, known as the ‘Kolmogorov scale’. It is of interest to note that the Kolmogorov scale must be resolved throughout the domain of interest and at a correspondingly small time step. In typical problems of industrial interest, the ratio of the length scale of the domain to the Kolmogorov length scale is so great that the required grid becomes prohibitively large. As a result, the available computational resources are usually inadequate for DNS-related tasks, and at this stage in its development, DNS is not applicable to industrial problems. In this research, an attempt is made to develop a numerical technique capable of delivering DNS-quality solutions at the scale required by industry. To date, this technique has delivered very accurate preliminary results for steady and unsteady, viscous and inviscid, compressible and incompressible flow fields at both high and low Reynolds numbers. Herein, it is proposed that the Integro-Differential Scheme (IDS) be applied to a set of vortex-shockwave interaction problems with the goal of investigating the nonstationary physics within the resulting interaction regions. In the proposed paper, the IDS formulation and its numerical error capability will be described. 
Further, the IDS will be used to solve the inviscid and viscous Burgers equations, with the goal of analyzing their solutions over a considerable length of time and thus demonstrating the unsteady capabilities of the IDS. Finally, the IDS will be used to solve a set of fluid dynamic problems involving strong vortex interactions. Plans are to solve the following problems: the travelling wave and vortex problems over considerable lengths of time, the normal shockwave–vortex interaction problem at low supersonic conditions, and the reflected oblique shock–vortex interaction problem. The IDS solutions obtained for each of these cases will be explored further in an effort to determine the distributed density gradients and vorticity, as well as the Q-criterion. Parametric studies will be conducted to determine the effect of the Mach number on the intensity of vortex-shockwave interactions.
Keywords: vortex dominated flows, shockwave interactions, high Reynolds number, integro-differential scheme
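The viscous Burgers equation mentioned as a validation case is a standard 1-D benchmark for unsteady solvers. The sketch below is a plain explicit finite-difference reference solver for it, not the IDS itself; the grid sizes, initial profile, and boundary treatment are illustrative assumptions chosen so the scheme is stable.

```python
import numpy as np

def burgers_viscous(nx=201, nt=500, L=2.0, nu=0.07, dt=1e-4):
    """Explicit reference solver for u_t + u*u_x = nu*u_xx on [0, L]:
    first-order upwind convection (valid for u >= 0), central diffusion,
    fixed end values. A baseline one might compare an IDS result against."""
    dx = L / (nx - 1)
    x = np.linspace(0.0, L, nx)
    u = np.sin(np.pi * x) + 1.0          # smooth, non-negative initial profile
    for _ in range(nt):
        un = u.copy()
        conv = un[1:-1] * (un[1:-1] - un[:-2]) / dx            # upwind u*u_x
        diff = nu * (un[2:] - 2.0 * un[1:-1] + un[:-2]) / dx**2  # nu*u_xx
        u[1:-1] = un[1:-1] - dt * conv + dt * diff
        u[0], u[-1] = un[0], un[-1]                             # fixed ends
    return x, u
```

With these parameters the diffusion number nu*dt/dx^2 ≈ 0.07 and the CFL number stay well inside the explicit stability limits, so the solution remains bounded over the integration time.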
288 Carboxyfullerene-Modified Titanium Dioxide Nanoparticles in Singlet Oxygen and Hydroxyl Radicals Scavenging Activity
Authors: Kai-Cheng Yang, Yen-Ling Chen, Er-Chieh Cho, Kuen-Chan Lee
Abstract:
Titanium dioxide nanomaterials offer superior protection for human skin against the full spectrum of ultraviolet light. However, some literature reviews indicate that they may be associated with adverse effects such as cytotoxicity or the generation of reactive oxygen species (ROS) owing to their nanoscale dimensions. The surface of fullerene is covered with π electrons constituting aromatic structures, which can effectively scavenge large amounts of radicals. Unfortunately, the poor water solubility, severe aggregation, and toxicity of fullerenes when dispersed in solvent have limited their use in biological applications. Carboxyfullerene has served as a radical scavenger for several years. Some reports indicate that carboxyfullerene not only decreases the concentration of free radicals in the environment but also protects cells from population loss or apoptosis under UV irradiation. The aim of this study is to decorate fullerene C70-carboxylic acid (C70-COOH) onto the surface of titanium dioxide nanoparticles (P25) for the purpose of scavenging ROS during irradiation. The modified material is prepared through the esterification of C70-COOH with P25 (P25/C70-COOH). The binding edge and structure are studied using transmission electron microscopy (TEM) and Fourier transform infrared (FTIR) spectroscopy. The diameter of P25 is about 30 nm, and C70-COOH is found to be conjugated on the edge of P25 in an aggregated morphology with a size of ca. 100 nm. In the next step, FTIR was used to confirm the binding structure between P25 and C70-COOH. Two new peaks are seen at 1427 and 1720 cm-1 for P25/C70-COOH, resulting from the C–C stretch and C=O stretch formed during esterification with dilute sulfuric acid. The IR results further confirm the chemically bonded interaction between C70-COOH and P25. 
To provide evidence of the radical-scavenging ability of P25/C70-COOH, we chose pyridoxine (vitamin B6) and terephthalic acid (TA) to react with singlet oxygen and hydroxyl radicals. We used these chemicals to monitor the radical-scavenging state by detecting the intensity of ultraviolet absorption or fluorescence emission. The UV spectra were measured using different concentrations of C70-COOH-modified P25 with 1 mM pyridoxine under UV irradiation for various durations. The results revealed that the concentration of pyridoxine remained higher with P25/C70-COOH after three hours compared with the control (P25 only), indicating that fewer radicals reacted with pyridoxine because they were absorbed by P25/C70-COOH. The fluorescence spectra were obtained by measuring P25/C70-COOH with 1 mM terephthalic acid under UV irradiation for various durations. The fluorescence intensity of TAOH decreased within ten minutes with P25/C70-COOH. It was found that the fluorescence intensity increased again after thirty minutes, which could be attributed to the saturation of C70-COOH in the absorption of radicals. Nevertheless, the results showed that the modified P25/C70-COOH can reduce the radicals in the environment. We therefore expect P25/C70-COOH to be a potential antioxidant material.
Keywords: titanium dioxide, fullerene, radical scavenging activity, antioxidant
287 Balanced Score Card a Tool to Improve Naac Accreditation – a Case Study in Indian Higher Education
Authors: CA Kishore S. Peshori
Abstract:
Introduction: India, a country with vast diversity and a huge population, is going to have the largest young population by 2020. Higher education has always been, and will remain, the basic requirement for turning a developing nation into a developed one. To improve any system, it needs to be benchmarked, and various tools exist for benchmarking systems. Education in India is delivered by universities that are mainly funded by the government. These universities, to deliver education, set up colleges, which are again funded mainly by the government. Recently, however, autonomy has also been given to universities and colleges, and foreign universities are waiting to enter Indian boundaries. With a large number of universities and colleges, it has become more and more necessary to measure these institutes for benchmarking. In India, college assessments have been made compulsory by the UGC, and NAAC has been officially recognised as the accreditation body. NAAC assessment is based on seven criteria, namely: 1. curricular assessments, 2. teaching, learning and evaluation, 3. research, consultancy and extension, 4. infrastructure and learning resources, 5. student support and progression, 6. governance, leadership and management, 7. innovation and best practices. NAAC tries to benchmark the institution for the identification, sustainability, dissemination and adoption of best practices. It grades the institution according to these seven criteria, and the funding of the institution is based on these grades. Many colleges are struggling to get the best grades, but they have not come across a systematic tool to achieve that result. 
The Balanced Scorecard, developed by Kaplan, has been a successful tool for corporates to develop best practices so as to increase their financial performance and also retain and grow their customer base, taking the organization to the next level. It is time to test this tool for an educational institute. Methodology: The paper tries to develop a prototype for a college based on secondary data. Once a prototype is developed, the researcher will try to test this tool for successful implementation on the basis of a questionnaire. The success of this research will depend on the implementation of the BSC at an institute and on its grading improving as a result. Limitation of time is a major constraint in this research, as the NAAC cycle takes a minimum of four years for accreditation and reaccreditation; the methodology will therefore limit itself to secondary data and a questionnaire to be circulated to colleges along with the prototype BSC model. Conclusion: The BSC is a successful tool for enhancing the growth of an organization, and educational institutes are no exception. The BSC only has to be realigned to suit the NAAC criteria. Once this prototype is developed, its success can be tested only on implementation, but this research paper will be the first step towards developing the tool and will also initiate that success by developing a questionnaire and evaluating the responses before moving to the next level of actual implementation.
Keywords: balanced scorecard, bench marking, Naac, UGC
286 Physicochemical Properties and Toxicity Studies on a Lectin from the Bulb of Dioscorea bulbifera
Authors: Uchenna Nkiruka Umeononihu, Adenike Kuku, Oludele Odekanyin, Olubunmi Babalola, Femi Agboola, Rapheal Okonji
Abstract:
In this study, a lectin from the bulb of Dioscorea bulbifera was purified and characterised, and its acute and sub-acute toxicity was investigated with a view to evaluating its toxic effects in mice. The protein was extracted by homogenising 50 g of the bulb in 500 ml of phosphate-buffered saline (0.025 M, pH 7.2), stirring for 3 h, and centrifuging at 3000 rpm. Blood group and sugar specificity assays of the crude extract were performed. The lectin was purified in a two-step procedure: gel filtration on Sephadex G-75 and affinity chromatography on Sepharose 4B-arabinose. The purity of the lectin was ascertained by SDS-polyacrylamide gel electrophoresis. Detection of covalently bound carbohydrate was carried out with the periodic acid-Schiff (PAS) staining technique. The effects of temperature, pH, and EDTA on the lectin were determined using standard methods. This was followed by acute toxicity studies via oral and subcutaneous routes in mice; the animals were monitored for mortality and signs of toxicity. The sub-acute toxicity studies were carried out in rats: different concentrations of the lectin were administered twice daily for 5 days via the subcutaneous route, the animals were sacrificed on the sixth day, and blood samples and liver tissues were collected. Biochemical assays (determination of total protein, direct bilirubin, alanine aminotransferase (ALT), aspartate aminotransferase (AST), catalase (CAT), and superoxide dismutase (SOD)) were carried out on the serum and liver homogenates. The collected organs (heart, liver, kidney, and spleen) were subjected to histopathological analysis. The results showed that the lectin from the bulbs of Dioscorea bulbifera non-specifically agglutinated erythrocytes of the human ABO system as well as rabbit erythrocytes. The haemagglutinating activity was strongly inhibited by arabinose and dulcitol, with minimum inhibitory concentrations of 0.781 and 6.25, respectively. 
The lectin was purified to homogeneity, with native and subunit molecular weights of 56,273 and 29,373 Daltons, respectively. The lectin was thermostable up to 30 °C and lost 25%, 33.3%, and 100% of its haemagglutinating activity at 40 °C, 50 °C, and 60 °C, respectively. The lectin was maximally active at pH 4 and 5 but lost all activity at pH 8, while EDTA (10 mM) had no effect on its haemagglutinating activity. PAS staining showed that the lectin was not a glycoprotein. The sub-acute studies in rats showed elevated levels of ALT, AST, serum bilirubin, and total protein in serum and liver homogenates, suggesting damage to the liver and spleen. The study concluded that the aerial bulb lectin of D. bulbifera was non-specific in its haemagglutinating activity and dimeric in structure. The lectin shared some physicochemical characteristics with lectins from other Dioscorea species and was moderately toxic to the liver and spleen of treated animals.
Keywords: Dioscorea bulbifera, haemagglutinin, lectin, toxicity
285 “Divorced Women are Like Second-Hand Clothes” - Hate Language in Media Discourse
Authors: Sopio Totibadze
Abstract:
Although the legal framework of Georgia reflects the main principles of gender equality and is in line with international standards, Georgia remains a male-dominated society. This means that men prevail in many areas of social, economic, and political life, which frequently gives women a subordinate status in society and the family. According to the latest studies, “violence against women and girls in Georgia is also recognized as a public problem, and it is necessary to focus on it”. Moreover, the Public Defender's report (2019) reveals that “in the last five years, 151 women were killed in Georgia due to gender and family violence”. Unfortunately, crimes based on gender-based oppression are frequent in Georgia, and they pose a threat not only to women but also to people of any gender whose desires and aspirations do not correspond to the gender norms and roles prevailing in society. It is well known that language is often used as a tool of gender oppression. Therefore, feminist and gender studies in linguistics ultimately serve to represent the problem, reflect on it, and propose ways to solve it. Together with technical advancement in communication, a new form of discrimination has arisen: hate language against women in electronic media discourse. Due to the nature of social media and the internet, messages containing hate language can spread in seconds and reach millions of people, yet only a few know about the detrimental effects they may have on the addressee and on society. This paper aims to analyse hateful comments directed at women on various media platforms in order to determine the linguistic strategies used while attacking women and the reasons why women may fall victim to this type of hate language. The data have been collected over six months, and overall, 500 comments will be examined for the paper. Qualitative and quantitative analysis was chosen as the methodology of the study. 
The comments posted on various media platforms have been selected manually for several reasons, the most important being the difficulty of identifying hate speech, as it can disguise itself in different ways: humour, memes, etc. The comments selected for sociolinguistic analysis accompany articles, posts, pictures, and videos that depict a woman, a taboo topic, or a scandalous event centred on a woman that triggered hate language towards the person to whom the post or article was dedicated. The study has revealed that women can become victims of hatred directed at them if they do something considered a deviation from a societal norm, namely, get a divorce, be sexually active, be vocal about feminist values, or talk about taboos. Interestingly, the people who use hate language are not only men trying to “normalize” prejudiced patriarchal values but also women, who are equally active in bringing down a "strong" woman. The paper also aims to raise awareness of the hate language directed at women, as being knowledgeable about the issue at hand is the first step to tackling it.
Keywords: femicide, hate language, media discourse, sociolinguistics
284 Phenomena-Based Approach for Automated Generation of Process Options and Process Models
Authors: Parminder Kaur Heer, Alexei Lapkin
Abstract:
Due to the global challenges of increased competition and demand for more sustainable products and processes, there is rising pressure on industry to develop innovative processes. Through Process Intensification (PI), existing and new processes may attain higher efficiency. However, very few PI options are generally considered, because processes are typically analysed at the unit operation level, thus limiting the search space for potential process options. PI performed at more detailed levels of a process can increase the size of the search space. The levels at which PI can be achieved are the unit operation, functional, and phenomena levels. Physical/chemical phenomena form the lowest level of aggregation and are thus expected to give the highest impact, because all intensification options can be described by their enhancement. The objective of the current work is therefore the generation of numerous process alternatives based on phenomena, and the development of their corresponding computer-aided models. The methodology comprises: a) automated generation of process options, and b) automated generation of process models. The process under investigation is disintegrated into functions, viz. reaction, separation, etc., and these functions are further broken down into the phenomena required to perform them. E.g., separation may be performed via vapour-liquid or liquid-liquid equilibrium. A list of phenomena for the process is formed, and new phenomena, which can overcome the difficulties or drawbacks of the current process or enhance its effectiveness, are added to the list. For instance, the catalyst separation issue can be handled by using solid catalysts; the corresponding phenomena are identified and added. The phenomena are then combined to generate all possible combinations. However, not all combinations make sense, and hence screening is carried out to discard the combinations that are meaningless. 
For example, phase change phenomena need the co-presence of energy transfer phenomena. Feasible combinations of phenomena are then assigned to the functions they execute. A combination may accomplish a single function or multiple functions, i.e. it might perform reaction alone or reaction with separation. The combinations are then allotted to the functions needed for the process, creating a series of options for carrying out each function. Combining these options for the different functions in the process leads to the generation of a superstructure of process options. These process options, each defined by a list of phenomena per function, are passed to the model generation algorithm in the form of binaries (1, 0). The algorithm gathers the active phenomena and couples them to generate the model. A series of models is generated for the functions, which are combined to obtain the process model. The most promising process options are then chosen subject to a performance criterion, for example purity of product, or via a multi-objective Pareto optimisation. The methodology was applied to a two-step process, and the best route was determined based on the highest product yield. The current methodology can identify, produce and evaluate process intensification options from which the optimal process can be determined. It can be applied to any chemical or biochemical process because of its generic nature.
Keywords: phenomena, process intensification, process models, process options
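The combine-then-screen step described above can be sketched as plain combinatorial enumeration with feasibility rules. The phenomena names and the single screening rule below (a phase change requires a co-present energy transfer phenomenon, as the abstract's example states) are illustrative; the authors' actual phenomena library and rule set are not given.

```python
from itertools import combinations

# Hypothetical phenomena list for illustration.
PHENOMENA = ["mixing", "reaction", "vapour-liquid_equilibrium",
             "phase_change", "energy_transfer", "liquid-liquid_equilibrium"]

def feasible(combo):
    """Screen out meaningless combinations (one illustrative rule only):
    phase change phenomena need co-present energy transfer phenomena."""
    if "phase_change" in combo and "energy_transfer" not in combo:
        return False
    return True

def generate_options(phenomena, max_size=3):
    """Enumerate all phenomena combinations up to max_size, then screen."""
    options = []
    for r in range(1, max_size + 1):
        for combo in combinations(phenomena, r):
            if feasible(combo):
                options.append(combo)
    return options

options = generate_options(PHENOMENA)
```

Each surviving combination would then be assigned to the function(s) it can execute, and the options per function combined into the superstructure described in the abstract.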
283 Making the Neighbourhood: Analyzing Mapping Procedures to Deal with Plurality and Conflict
Authors: Barbara Roosen, Oswald Devisch
Abstract:
Spatial projects are often contested. Despite participatory trajectories in official spatial development processes, citizens often engage through their power to say no. Participatory mapping helps to produce more legible and democratic ways of decision-making. It has proven its value in producing a multitude of knowledges and views, allowing individuals, community groups, and local stakeholders to imagine desired and undesired futures and giving them the rhetorical power to present their views throughout the development process. From this perspective, mapping works as a social process in which individuals and groups share their knowledge, learn from each other, and negotiate their relationship with each other as well as with space and power. In this way, these processes eventually aim to activate communities to intervene cooperatively in real problems. However, these are fragile and bumpy processes, sometimes leading to (local) conflict and intractable situations. The heterogeneous subjectivities and knowledge that become visible during the mapping process, and which are contested by members of the community, are often the first trigger. This paper discusses a participatory mapping project conducted in a residential subdivision in Flanders to provide a deeper understanding of how, or under which conditions, the mapping process can moderate discordant situations amongst inhabitants, local organisations, and local authorities towards a more constructive outcome. In our opinion, this implies thorough documentation and presentation of the different steps of the mapping process in order to design and moderate an open and transparent dialogue. The mapping project ‘Make the Neighbourhood’ is set up in the aftermath of a socio-spatial design intervention in the neighbourhood that led to polarization within the community. 
To start negotiation between the diverse claims that came to the fore, we co-create a desired future map of the neighbourhood together with local organisations and inhabitants as a way to engage them in the development of a new spatial development plan for the area. This mapping initiative sets up a new ‘common’ goal or concern, as a first step to bridge the gap that we experienced between different sociocultural groups, between bottom-up and top-down initiatives, and between professionals and non-professionals. An atlas of elements (materials), an atlas of actors with different roles, and an atlas of ways of cooperation and organisation form the working and building material of the future neighbourhood map, assembled in two co-creation sessions. Firstly, we consider how the mapping procedures articulate the plurality of claims and agendas. Secondly, we elaborate upon how social relations and spatialities are negotiated and reproduced during the different steps of the map-making. Thirdly, we reflect on the role of the rules, format, and structure of the mapping process in moderating negotiations between deeply divided claims. To conclude, we discuss the challenges of visualizing the different steps of the mapping process as a strategy to moderate tense negotiations in a more constructive direction in the context of spatial development processes.
Keywords: conflict, documentation, participatory mapping, residential subdivision
282 Separating Landform from Noise in High-Resolution Digital Elevation Models through Scale-Adaptive Window-Based Regression
Authors: Anne M. Denton, Rahul Gomes, David W. Franzen
Abstract:
High-resolution elevation data are becoming increasingly available, but typical approaches for computing topographic features, like slope and curvature, still assume small sliding windows, for example of size 3x3. That means that the digital elevation model (DEM) has to be resampled to the scale of the landform features that are of interest; any higher resolution is lost in this resampling. When the topographic features are instead computed through regression performed at the resolution of the original data, the accuracy can be much higher, and the reported result can be adjusted to the length scale that is relevant locally. Slope and variance are calculated for overlapping windows, meaning that one regression result is computed per raster point; the number of window centers per area is the same for the output as for the original DEM. Slope and variance are computed by performing regression on the points in the surrounding window. Such an approach is computationally feasible because of the additive nature of the regression parameters and variance. Any doubling of the window size in each direction takes only a single pass over the data, so the resulting algorithm scales logarithmically as a function of the window size. Slope and variance are stored for each aggregation step, allowing the reported slope to be selected to minimize variance. The approach thereby adjusts the effective window size to the landform features that are characteristic of the area within the DEM. Starting with a window size of 2x2, each iteration aggregates 2x2 non-overlapping windows from the previous iteration. Regression results are stored for each iteration, and the slope at minimal variance is reported in the final result. As such, the reported slope is adjusted to the length scale that is characteristic of the landform locally. The length scale itself and the variance at that length scale are also visualized to aid in interpreting the results for slope. 
The relevant length scale is taken to be half of the window size of the window over which the minimum variance was achieved. The resulting process was evaluated for 1-meter DEM data and for artificial data constructed to have defined length scales and added noise. A comparison with ESRI ArcMap showed the potential of the proposed algorithm: the resolution of the resulting output is much higher, and the slope and aspect are much less affected by noise. Additionally, the algorithm adjusts to the scale of interest within each region of the image. These benefits are gained without additional computational cost in comparison with resampling the DEM and computing the slope over 3x3 windows in ESRI ArcMap for each resolution. In summary, the proposed approach extracts the slope and aspect of DEMs at the length scales that are characteristic locally. The result is of higher resolution and less affected by noise than with existing techniques.
Keywords: high resolution digital elevation models, multi-scale analysis, slope calculation, window-based regression
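The additive-sums idea behind the doubling aggregation can be shown in a 1-D simplification: the sums needed for a least-squares fit (of 1, x, z, x*x, x*z, z*z) aggregate pairwise at each level, so each window doubling is a single pass, and the slope at the level of minimal residual variance is reported. This is a sketch under those assumptions, not the authors' 2-D plane-fitting implementation, and the function names are invented.

```python
import numpy as np

def fit_from_sums(s):
    """Least-squares line fit per window from additive sums of
    1, x, z, x*x, x*z, z*z (vectorized over windows)."""
    n, sx, sz, sxx, sxz, szz = (s[k] for k in ("n", "x", "z", "xx", "xz", "zz"))
    denom = n * sxx - sx * sx
    slope = (n * sxz - sx * sz) / denom
    intercept = (sz - slope * sx) / n
    # residual sum of squares, expanded algebraically from the sums
    rss = (szz - 2 * slope * sxz - 2 * intercept * sz
           + slope ** 2 * sxx + 2 * slope * intercept * sx + intercept ** 2 * n)
    return slope, rss / n

def scale_adaptive(z, cell=1.0, levels=5):
    """Aggregate sums pairwise (one window doubling per level, one pass each)
    and report the slopes from the level with minimal mean residual variance."""
    x = np.arange(z.size, dtype=float) * cell
    sums = {"n": np.ones_like(x), "x": x, "z": z.astype(float),
            "xx": x * x, "xz": x * z, "zz": z * z}
    best = None
    for level in range(1, levels + 1):
        sums = {k: v[0::2] + v[1::2] for k, v in sums.items()}  # one doubling
        if 2 ** level < 4:
            continue  # a 2-point window fits any line exactly (zero variance)
        slope, var = fit_from_sums(sums)
        score = float(np.mean(var))
        if best is None or score < best[0]:
            best = (score, 2 ** level, slope)
    return best[1], best[2]  # chosen window size, slopes at that scale
```

On a noiseless linear profile every scale recovers the true slope, so the smallest evaluated window wins; on noisy terrain the variance criterion pushes the reported slope towards the window size matching the local landform, which is the behaviour the abstract describes.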
281 Redefining Intellectual Humility in Indian Context: An Experimental Investigation
Authors: Jayashree And Gajjam
Abstract:
Intellectual humility (IH) is defined as a virtuous mean between intellectual arrogance and intellectual self-diffidence by the ‘doxastic account of IH’ studied, researched, and developed by Western scholars no earlier than 2015 at the University of Edinburgh. Ancient Indian philosophical texts, the Upanisads, written in the Sanskrit language during the later Vedic period (circa 600-300 BCE), have long addressed the virtue of being humble in several stories and narratives. The current research paper questions and revisits these character traits in an Indian context following an experimental method. Based on the subjective reports of more than 400 Indian teenagers and adults, it argues that while a few traits of IH (such as trustworthiness, respectfulness, intelligence, politeness, etc.) are panhuman and pancultural, a few are not. Some attributes of IH (such as proper pride, open-mindedness, awareness of one's own strength, etc.) may be taken for arrogance by the Indian population, while some qualities of intellectual diffidence, such as agreeableness and surrendering, can be regarded as characteristic of IH. The paper then gives a reasoning for this discrepancy, which can be traced back to the ancient Indian (Upaniṣadic) teachings that are still prevalent in many Indian families and still anchor their views on IH. The name Upanisad itself means ‘sitting down near’ (the Guru, to gain the supreme knowledge of the Self and the Universe and to set ignorance to rest), which is equivalent to three traits among the BIG SEVEN characterized as IH by Western scholars, viz. ‘being a good listener’, ‘curiosity to learn’, and ‘respect for others' opinions’. The story of Satyakama Jabala (Chandogya Upanisad 4.4-8), who seeks the truth for several years even from the bull, the fire, the swan, and the waterfowl, suggests nothing but the ‘need for cognition’ or ‘desire for knowledge’. 
Nachiketa (Katha Upanisad), a boy with a pure mind and heart, follows his father's words and offers himself to Yama (the God of Death), where, after waiting for Yama for three days and nights, he seeks the knowledge of the mysteries of life and death. Although the main aim of these Upaniṣadic stories is to impart the knowledge of life and death, the supreme reality, which can be identified with traits such as ‘curiosity to learn’, one cannot deny that they have a lot more to offer than mere information about true knowledge, e.g., ‘politeness’, ‘being a good listener’, ‘awareness of one's own limitations’, etc. The possible future scope of this research includes (1) finding other socio-cultural factors that affect ideas on IH, such as age, gender, caste, type of education, highest qualification, place of residence, and source of income, which may be predominant in current Indian society despite the great teachings of the Upaniṣads, and (2) devising different measures to impart IH to Indian children, teenagers, and younger adults for a harmonious future. The current experimental research can be considered the first step towards these goals.
Keywords: ethics and virtue epistemology, Indian philosophy, intellectual humility, upaniṣadic texts in ancient India
Procedia PDF Downloads 92
280 Hypersensitivity Reactions Following Intravenous Administration of Contrast Medium
Authors: Joanna Cydejko, Paulina Mika
Abstract:
Hypersensitivity reactions are side effects of medications that resemble an allergic reaction. Anaphylaxis is a generalized, severe allergic reaction of the body caused by exposure to a specific agent at a dose tolerated by a healthy body. The most common causes of anaphylaxis are food (about 70%), Hymenoptera venoms (22%), and medications (7%); in about 1% of cases, the cause of the anaphylactic reaction cannot be identified despite detailed diagnostics. Contrast media are anaphylactic agents whose mechanism is not fully understood: hypersensitivity reactions can occur through both immunological and non-immunological mechanisms. Symptoms of anaphylaxis appear within a few seconds to several minutes after exposure to the allergen. Contrast agents are chemical compounds that make it possible to visualize, or improve the visibility of, anatomical structures. In computed tomography, the preparations currently used are derivatives of the tri-iodinated benzene ring. Their pharmacokinetic and pharmacodynamic properties, i.e., osmolality, viscosity, low chemotoxicity, and high hydrophilicity, contribute to better tolerance of the substance by the patient's body. In MRI diagnostics, macrocyclic gadolinium contrast agents are administered during examinations. The aim of this study is to present the number and severity of anaphylactic reactions that occurred in patients of all age groups undergoing diagnostic imaging with intravenous administration of contrast agents: non-ionic iodinated agents in CT and macrocyclic gadolinium agents in MRI. A retrospective assessment of the number of adverse reactions after contrast administration was carried out on the basis of data from the Department of Radiology of the University Clinical Center in Gdańsk, and it was assessed whether the agents' different physicochemical properties had an impact on the incidence of acute complications. 
Adverse reactions were classified according to the severity of the patient's condition and the diagnostic method used. Complications following the administration of a contrast medium in the form of acute anaphylaxis accounted for less than 0.5% of all diagnostic procedures performed with a contrast agent. In the analysis period from January to December 2022, 34,053 CT scans and 15,279 MRI examinations with contrast medium were performed. The total number of acute complications was 21, of which 17 were complications of iodine-based contrast agents and 5 of gadolinium preparations. The introduction of state-of-the-art contrast formulations was an important step toward improving the safety and tolerability of contrast agents used in imaging. Currently, contrast agents administered to patients are considered among the best-tolerated preparations used in medicine. However, like any drug, they can cause adverse reactions resulting from their toxic effects. The increase in the number of imaging tests performed with contrast agents has a direct impact on the number of adverse events associated with their administration. Although the risk of anaphylaxis is low, it should not be marginalized: the growing volume of radiological procedures performed with contrast agents makes knowledge of the rules of conduct in the event of hypersensitivity symptoms essential.
Keywords: anaphylaxis, contrast medium, diagnostics, medical imaging
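The incidence figures above can be cross-checked with simple arithmetic; the sketch below uses only the counts quoted in this abstract to compute the per-modality acute reaction rates:

```python
# Per-modality acute reaction rates, computed from the counts reported above.
ct_exams, mri_exams = 34_053, 15_279
ct_reactions, mri_reactions = 17, 5    # iodine-based vs. gadolinium preparations

for label, reactions, exams in [("CT (iodine)", ct_reactions, ct_exams),
                                ("MRI (gadolinium)", mri_reactions, mri_exams)]:
    print(f"{label}: {100 * reactions / exams:.3f}% of {exams:,} examinations")
# Both rates sit comfortably below the 0.5% ceiling cited above.
```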
Procedia PDF Downloads 62
279 Bioflavonoids Derived from Mandarin Processing Wastes: Functional Hydrogels as Sustainable Food Systems
Authors: Niharika Kaushal, Minni Singh
Abstract:
Fruit crops are widely cultivated throughout the world, with citrus being one of the most common. Mandarins, oranges, grapefruits, lemons, and limes are among the most frequently grown varieties. Citrus cultivars are industrially processed into juice, leaving approximately 25-40% by weight of the biomass in the form of peels and seeds, generally considered waste. Consequently, a significant amount of this nutraceutical-enriched biomass goes to waste; if utilized wisely, it could revolutionize the functional food industry, as this biomass possesses a wide range of bioactive compounds, mainly polyphenols and terpenoids, making it an abundant source of functional bioactives. Mandarin is a potential source of bioflavonoids with putative antioxidative properties, and its potential for developing value-added products is obvious. In this study, ‘kinnow’ mandarin (Citrus nobilis X Citrus deliciosa) biomass was studied for its flavonoid profile. Dried and pulverized peels were subjected to a green and sustainable extraction technique, namely supercritical fluid extraction, carried out at a pressure of 330 bar and a temperature of 40 °C with 10% ethanol as co-solvent. The obtained extract contained 47.3±1.06 mg/ml rutin equivalents as total flavonoids. Mass spectral analysis revealed the prevalence of polymethoxyflavones (PMFs), chiefly tangeretin and nobiletin. Furthermore, the antioxidant potential was analyzed by the 2,2-diphenyl-1-picrylhydrazyl (DPPH) method and estimated at an IC₅₀ of 0.55 μg/ml. The pre-systemic metabolism of flavonoids limits their functionality, as was observed in this study through in vitro gastrointestinal studies in which nearly 50.0% of the flavonoids were degraded within 2 hours of gastric exposure. 
We proposed nanoencapsulation as a means to overcome this problem: flavonoid-laden poly(lactic-co-glycolic) acid (PLGA) nanoencapsulates were bioengineered using the solvent evaporation method and furnished to a particle size between 200-250 nm, which protected the flavonoids in the gastric environment, allowing only 20% to be released in 2 h. A further step involved impregnating the nanoencapsulates within alginate hydrogels, fabricated by ionic cross-linking, to act as delivery vehicles within the gastrointestinal (GI) tract. As a result, 100% protection from the pre-systemic release of bioflavonoids was achieved. These alginate hydrogels had key features, i.e., low porosity of nearly 20.0%, and Cryo-SEM (cryo-scanning electron microscopy) images of the composite corroborate the packing ability of the alginate hydrogel. It is concluded from this work that the waste can be used to develop functional biomaterials while retaining the functionality of the bioactive itself.
Keywords: bioflavonoids, gastrointestinal, hydrogels, mandarins
Procedia PDF Downloads 80
278 A Method-Intensive Top-Down Approach for Generating Guidelines for an Energy-Efficient Neighbourhood: A Case of Amaravati, Andhra Pradesh, India
Authors: Rituparna Pal, Faiz Ahmed
Abstract:
Neighbourhood energy efficiency is a newly emerged term addressing the quality of the urban stratum of the built environment in terms of various covariates of sustainability. The sustainability paradigm in developed nations has encouraged policymakers in developing urban-scale cities to envision plans under the aegis of urban-scale sustainability. The importance of neighbourhood energy efficiency has been realized only lately, just as the cities, towns, and other areas comprising this massive global urban stratum have begun to face strong blows from climate change, energy crises, cost hikes, and an alarming shortfall in the justice that urban areas require. This step toward urban sustainability can therefore be regarded more as a ‘retrofit action’, covering up an already affected urban structure. So even if we pursue energy efficiency for existing cities and urban areas, the initial layer remains one for which a complete model of urban sustainability still lacks definition. Urban sustainability is a broadly used term with countless parameters and policies through which the loop can be closed. Among these, neighbourhood energy efficiency can be an integral part, where neighbourhood-scale indicators, block-level indicators, and building-physics parameters can be understood, analyzed, and synthesized to help derive guidelines for urban-scale sustainability. The future of neighbourhood energy efficiency lies not only in energy efficiency but also in important parameters such as quality of life, access to green space, access to daylight, outdoor comfort, and natural ventilation. So apart from designing less energy-hungry buildings, it is necessary to create a built environment that puts less stress on buildings to consume more energy. A substantial body of literature exists in Western countries, prominently in Spain and Paris, and also in Hong Kong, leaving a distinct gap in the Indian scenario in exploring sustainability at the urban stratum. 
The site for the study has been selected in the upcoming capital city of Amaravati, and the approach can be replicated for similar neighbourhood typologies in the area. The paper proposes a methodical approach to quantify energy and sustainability indices in detail, involving several macro-, meso-, and micro-level covariates and parameters. Several iterations have been made at both macro and micro levels and subjected to simulation, computation, and mathematical models, and finally to comparative analysis. Parameters at all levels are analyzed to identify the best-case scenarios, which are in turn extrapolated to the macro level, finally yielding a proposed model for an energy-efficient neighbourhood and a set of worked-out guidelines with their significance and derived correlations.
Keywords: energy quantification, macro scale parameters, meso scale parameters, micro scale parameters
Procedia PDF Downloads 176
277 A Quantitative Analysis of Rural to Urban Migration in Morocco
Authors: Donald Wright
Abstract:
The ultimate goal of this study is to reinvigorate the philosophical underpinnings of the study of urbanization with scientific data, with the aim of circumventing what seems an inevitable future clash between rural and urban populations. To that end, urban infrastructure must be sustainable economically, politically, and ecologically over the course of several generations as cities continue to grow with the incorporation of climate refugees. Our research will provide data concerning the projected increase in population over the coming two decades in Morocco and the population shift from rural areas to urban centers during that period. As a result, urban infrastructure will need to be adapted, developed, or built to meet the demand of future internal migrations from rural areas to urban centers in Morocco. This paper will also examine how past experiences of internally displaced people give insight into the challenges faced by future migrants and, beyond the gathering of data, how people react to internal migration. This study employs four different sets of research tools. First, a large part of this study is archival, which involves compiling the relevant literature on the topic and its complex history. This step also includes gathering data about migrations in Morocco from public data sources. Once the datasets are collected, the next part of the project involves populating the attribute fields and preprocessing the data to make it understandable and usable by machine learning algorithms. In tandem with the mathematical interpretation of data and projected migrations, this study benefits from a theoretical understanding of the critical apparatus surrounding urban development in the 20th and 21st centuries, which gives us insight into past infrastructure development and the rationale behind it. 
Once the data is ready to be analyzed, different machine learning algorithms will be tested (k-means clustering, support vector regression, random forest analysis) and the results compared for visualization of the data. The final computational part of this study involves analyzing the data and determining what we can learn from it. This paper helps us to understand future trends in population movements within and between regions of North Africa, which will have an impact on various sectors such as urban development, food distribution, and water purification, not to mention the creation of public policy in the countries of this region. One of the strengths of this project is its multi-pronged, cross-disciplinary methodology, which enables an interchange of knowledge and experiences to facilitate innovative solutions to this complex problem. Multiple and diverse intersecting viewpoints allow an exchange of methodological models that provide fresh and informed interpretations of otherwise objective data.
Keywords: climate change, machine learning, migration, Morocco, urban development
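The clustering step among the algorithms listed above can be illustrated with a bare-bones one-dimensional k-means; this is only a sketch with invented regional out-migration rates, not the study's actual data or pipeline:

```python
# Minimal 1-D k-means sketch, illustrating the clustering step the study
# proposes. The rates below are invented numbers, not Moroccan migration data.
def kmeans_1d(values, k, iters=50):
    centroids = sorted(set(values))[:k]   # seed on the k smallest distinct values
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda j: abs(v - centroids[j]))
            clusters[nearest].append(v)
        # Recompute each centroid as its cluster mean (keep it if the cluster is empty)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Invented regional out-migration rates (% of population per year)
rates = [2.1, 2.3, 8.9, 9.4, 2.0, 9.1]
centroids, clusters = kmeans_1d(rates, 2)
print(sorted(round(c, 2) for c in centroids))  # → [2.13, 9.13]
```

The same grouping idea scales to multi-feature records (rainfall anomaly, wage indices, distance to the nearest city) once the attribute fields described above are populated.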
Procedia PDF Downloads 150
276 Upgrading of Bio-Oil by Bio-Pd Catalyst
Authors: Sam Derakhshan Deilami, Iain N. Kings, Lynne E. Macaskie, Brajendra K. Sharma, Anthony V. Bridgwater, Joseph Wood
Abstract:
This paper reports the application of a bacteria-supported palladium catalyst to the hydrodeoxygenation (HDO) of pyrolysis bio-oil, towards producing an upgraded transport fuel. Biofuels are key to the timely replacement of fossil fuels in order to mitigate greenhouse gas emissions and the depletion of non-renewable resources. HDO is an essential step in the upgrading of bio-oils derived from industrial by-products such as agricultural and forestry wastes: crude pyrolysis oil contains a large amount of oxygen that must be removed to create a fuel resembling fossil-derived hydrocarbons. Manufacturing the bacteria-supported catalyst is a means of utilizing recycled metals and second-life bacteria, and the metal can easily be recovered from the spent catalyst after use. Comparisons are made between bio-Pd and a conventional activated-carbon-supported Pd/C catalyst. Bio-oil was produced by fast pyrolysis of beechwood at 500 °C at a residence time below 2 seconds, provided by Aston University. 5 wt% bio-Pd/C was prepared under reducing conditions by exposing cells of E. coli MC4100 to a solution of sodium tetrachloropalladate (Na2PdCl4), followed by rinsing, drying, and grinding to form a powder. Pd/C was procured from Sigma-Aldrich. The HDO experiments were carried out in a 100 mL Parr batch autoclave using ~20 g bio-crude oil and 0.6 g bio-Pd/C catalyst. Experimental variables investigated for optimization included temperature (160-350 °C) and reaction time (up to 5 h) at a hydrogen pressure of 100 bar. Most of the experiments resulted in an aqueous phase (~40%), an organic phase (~50-60%), a gas phase (<5%), and coke (<2%). Study of temperature and time showed that the degree of deoxygenation increased (from ~20% up to 60%) at higher temperatures, in the region of 350 °C, and at longer residence times, up to 5 h. 
However, minimum viscosity (~0.035 Pa·s) occurred at 250 °C and 3 h residence time, indicating that some polymerization of the oil product occurs at the higher temperatures. Bio-Pd showed a similar degree of deoxygenation (~20%) to Pd/C at the lower temperature of 160 °C, but it did not rise as steeply with temperature. More coke was formed over bio-Pd/C than Pd/C at temperatures above 250 °C, suggesting that bio-Pd/C may be more susceptible to coke formation than Pd/C. Reactions occurring during bio-oil upgrading include catalytic cracking, decarbonylation, decarboxylation, hydrocracking, hydrodeoxygenation, and hydrogenation. In conclusion, it was shown that bio-Pd/C displays an acceptable rate of HDO, which increases with residence time and temperature; however, some undesirable reactions also occur, leading to a deleterious increase in viscosity at higher temperatures. Comparisons are also drawn with earlier work on the HDO of Chlorella-derived bio-oil manufactured from microalgae via hydrothermal liquefaction. Future work will analyze the kinetics of the reaction and investigate the effect of bimetallic catalysts.
Keywords: bio-oil, catalyst, palladium, upgrading
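The degree of deoxygenation quoted above is conventionally computed from the oxygen content of the feed and the upgraded oil; the sketch below uses this standard definition with purely illustrative numbers, not measurements from this study:

```python
# Conventional degree of deoxygenation (DOD): the fraction of the feed's
# oxygen removed by HDO, on an oxygen-content (wt%) basis. The numbers in
# the example are illustrative, not data from this work.
def degree_of_deoxygenation(o_feed_wt_pct, o_product_wt_pct):
    return 100.0 * (1.0 - o_product_wt_pct / o_feed_wt_pct)

# A bio-oil with 40 wt% oxygen upgraded to 16 wt% oxygen:
print(f"{degree_of_deoxygenation(40.0, 16.0):.0f}% deoxygenation")  # → 60% deoxygenation
```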
Procedia PDF Downloads 175
275 Optimization of Ultrasound-Assisted Extraction of Oil from Spent Coffee Grounds Using a Central Composite Rotatable Design
Authors: Malek Miladi, Miguel Vegara, Maria Perez-Infantes, Khaled Mohamed Ramadan, Antonio Ruiz-Canales, Damaris Nunez-Gomez
Abstract:
Coffee is the second most consumed commodity worldwide, yet it also generates colossal waste. Proper management of coffee waste is proposed by converting it into products with higher added value, to achieve sustainability of the economic and ecological footprint and protect the environment. On this basis, studies looking at the recovery of coffee waste have become more relevant in recent decades. Spent coffee grounds (SCGs) resulting from brewing coffee represent the major waste produced by the coffee industry. The facts that SCGs have no economic value, are abundant in nature and industry, do not compete with agriculture, and especially have a high oil content (between 7-15% of total dry matter weight, depending on the coffee variety, Arabica or Robusta) encourage their use as a sustainable feedstock for bio-oil production. Bio-oil extraction is a crucial step towards biodiesel production by the transesterification process. However, conventional methods of oil extraction are not recommended due to their high consumption of energy and time and their generation of toxic volatile organic solvents. Thus, finding a sustainable, economical, and efficient extraction technique is crucial to scale up the process and to ensure more environment-friendly production. In this perspective, the aim of this work was a statistical study to identify an efficient strategy for oil extraction by n-hexane using indirect sonication. The coffee waste used in this work was a mix of Arabica and Robusta. The effects of temperature, sonication time, and solvent-to-solid ratio on the oil yield were statistically investigated by a 2³ Central Composite Rotatable Design (CCRD). The results were analyzed using STATISTICA 7 StatSoft software. The CCRD showed the significance of all the variables tested (P < 0.05) on the process output. 
The validation of the model by analysis of variance (ANOVA) showed good adjustment of the results for a 95% confidence interval, and the graph of predicted vs. experimental values confirmed the satisfactory correlation of the model. The optimum experimental conditions were identified from the response surface graphs (2-D and 3-D) and the critical statistical values. Based on the CCRD results, 29 °C, 56.6 min, and a solvent-to-solid ratio of 16 were the best experimental conditions defined statistically for coffee waste oil extraction using n-hexane as solvent. Under these conditions, the oil yield was >9% in all cases. The results confirmed the efficiency of using an ultrasound bath for oil extraction as a more economical, green, and efficient alternative to the Soxhlet method.
Keywords: coffee waste, optimization, oil yield, statistical planning
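The run layout of the design itself can be sketched in a few lines; the snippet below is an illustration independent of the study's software, generating the points of a rotatable central composite design for the three factors named above (the number of center runs is an assumption, as the abstract does not state it):

```python
# Illustrative geometry of a rotatable central composite design (CCRD)
# for k = 3 factors (here: temperature, sonication time, solvent-to-solid
# ratio). The center-run count is an assumption.
from itertools import product

def ccrd_points(k, center_runs=6):
    alpha = (2 ** k) ** 0.25                 # axial distance for rotatability
    factorial = list(product([-1, 1], repeat=k))
    axial = []
    for i in range(k):
        for s in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = s
            axial.append(tuple(pt))
    center = [(0.0,) * k] * center_runs
    return factorial + axial + center, alpha

points, alpha = ccrd_points(3)
print(len(points), round(alpha, 3))          # → 20 1.682
```

Each coded point is then mapped to real factor levels (e.g., ±1 spanning the temperature range) before running the extractions.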
Procedia PDF Downloads 119
274 Strength Evaluation by Finite Element Analysis of Mesoscale Concrete Models Developed from CT Scan Images of Concrete Cube
Authors: Nirjhar Dhang, S. Vinay Kumar
Abstract:
Concrete is a non-homogeneous mix of coarse aggregates, sand, cement, air voids, and the interfacial transition zone (ITZ) around aggregates. Adopting these complex structures and material properties in numerical simulation would lead to better understanding and design of concrete. In this work, a mesoscale model of concrete has been prepared from X-ray computed tomography (CT) images. These images are converted into a computer model and numerically simulated using commercially available finite element software. The mesoscale models are simulated under compressive displacement. The effects of the shape and distribution of aggregates, continuous and discrete ITZ thickness, voids, and variation of mortar strength have been investigated. The CT scan of a concrete cube consists of a series of two-dimensional slices: a total of 49 slices are obtained from a 150 mm cube, at an interval of approximately 3 mm. The same cube can be CT scanned in a non-destructive manner and later compression-tested in a universal testing machine (UTM) to find its strength. The image processing and extraction of mortar and aggregates from the CT scan slices are performed in Python. A digital colour image consists of red, green, and blue (RGB) pixels; it is converted to a black-and-white (BW) image, and the mesoscale constituents are identified by thresholding values between 0-255. The pixel matrix is created for modeling the mortar, aggregates, and ITZ. Pixels are normalized to a 0-9 scale reflecting relative strength: zero is assigned to voids, 4-6 to mortar, and 7-9 to aggregates, while values between 1-3 identify the boundary between aggregates and mortar. In the next step, triangular and quadrilateral elements for plane stress and plane strain models are generated, depending on the option given. 
Material properties, boundary conditions, and the analysis scheme are specified in this module. Responses such as displacements, stresses, and damage are evaluated by importing the input file into ABAQUS. This simulation evaluates the compressive strengths of the 49 slices of the cube. The model is meshed with more than sixty thousand elements. The effects of the shape and distribution of aggregates, the inclusion of voids, and the variation of ITZ layer thickness on load-carrying capacity, stress-strain response, and strain localization of concrete have been studied. The plane strain condition carried more load than the plane stress condition due to confinement. The CT scan technique can be used to obtain slices from concrete cores taken from an actual structure, and digital image processing can be used to find the shape and content of aggregates in concrete. This may be further compared with test results of concrete cores and can be used as an important tool for strength evaluation of concrete.
Keywords: concrete, image processing, plane strain, interfacial transition zone
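The 0-9 labelling scheme described above can be sketched as follows; the exact mapping from 8-bit intensities to the 0-9 scale is an assumption here (simple proportional scaling), since the abstract does not state it:

```python
# Hedged sketch of the pixel-labelling scheme described above: an 8-bit grey
# value is normalised to the 0-9 scale and mapped to a mesoscale constituent.
def classify_pixel(grey):
    scaled = grey * 9 // 255      # proportional 0-255 -> 0-9 (assumed mapping)
    if scaled == 0:
        return "void"             # 0: air void
    if scaled <= 3:
        return "ITZ"              # 1-3: aggregate-mortar boundary
    if scaled <= 6:
        return "mortar"           # 4-6: mortar
    return "aggregate"            # 7-9: aggregate

print([classify_pixel(g) for g in (0, 80, 140, 250)])
# → ['void', 'ITZ', 'mortar', 'aggregate']
```

Applying this function over the full pixel matrix of each slice yields the phase map from which the finite elements are generated.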
Procedia PDF Downloads 241
273 The Role of Anti-corruption Clauses in the Fight Against Corruption in Petroleum Sector
Authors: Azar Mahmoudi
Abstract:
Despite the rise of global anti-corruption movements and the strong emergence of international and national anti-corruption laws, corrupt practices are still prevalent in most places, and countries still struggle to translate these laws into practice. Moreover, in most countries, political and economic elites oppose anti-corruption reforms. In such a situation, the role of external actors, such as other States, international organizations, and transnational actors, becomes essential. Among them, Transnational Corporations [TNCs] can develop their own regime-like framework to govern their internal activities and, through this, contribute to the regimes established by State actors to solve transnational issues. Among various regimes, TNCs may choose to comply with the transnational anti-corruption legal regime to avoid the cost of non-compliance with anti-corruption laws. As a result, they decide to strengthen their anti-corruption compliance as they expand into new overseas markets. Such a decision extends anti-corruption standards among their employees and third-party agents and within their projects across countries. To better address the challenges posed by corruption, TNCs have adopted a comprehensive anti-corruption toolkit. Among the various instruments, anti-corruption clauses have become one of the most widely used anti-corruption means in international commercial agreements. Anti-corruption clauses, acting as a due diligence tool, can protect TNCs against the engagement of third-party agents in corrupt practices and further promote anti-corruption standards among businesses operating across countries. An anti-corruption clause allows parties to create a contractual commitment to exclude corrupt practices during the term of their agreement, including all levels of negotiation and implementation. 
Such a clause offers companies a mechanism to reduce the risk of potential corruption in their dealings with third parties while avoiding civil and administrative penalties. There have been few attempts to examine the role of anti-corruption clauses in the fight against corruption; this paper therefore aims to fill this gap and examines anti-corruption clauses in a specific sector where corrupt practices are widespread and endemic, i.e., the petroleum industry. This paper argues that anti-corruption clauses are a positive step in ensuring that the petroleum industry operates in an ethical and transparent manner, helping to reduce the risk of corruption and promote integrity in this sector. Contractual anti-corruption clauses vary in the types of commitment they create, so parties have a wide range of options from which to choose the clauses incorporated into their contracts. This paper proposes a categorization of anti-corruption clauses in the petroleum sector. It examines, in particular, the anti-corruption clauses incorporated in transnational hydrocarbon contracts published by the Resource Contract Portal, an online repository of extractive contracts. The paper then offers a quantitative assessment of anti-corruption clauses according to type of contract, date of conclusion, and geographical distribution.
Keywords: anti-corruption, oil and gas, transnational corporations, due diligence, contractual clauses, hydrocarbon, petroleum sector
Procedia PDF Downloads 131
272 Integrating High-Performance Transport Modes into Transport Networks: A Multidimensional Impact Analysis
Authors: Sarah Pfoser, Lisa-Maria Putz, Thomas Berger
Abstract:
In the EU, the transport sector accounts for roughly one fourth of total greenhouse gas emissions, making it one of the main contributors. Climate protection targets aim to reduce the negative effects of greenhouse gas emissions (e.g., climate change and global warming) worldwide. Achieving a modal shift to environmentally friendly modes of transport, such as rail and inland waterways, is an important strategy for fulfilling these targets. The present paper goes beyond these conventional transport modes and reflects upon currently emerging high-performance transport modes that have the potential to complement future transport systems efficiently. The paper defines which properties describe high-performance transport modes, which types of technology are included, and what their potential is to contribute to a sustainable future transport network. The first step is to compile state-of-the-art information about high-performance transport modes to find out which technologies are currently emerging. A multidimensional impact analysis will then be conducted to evaluate which of the technologies is most promising. This analysis will be performed from spatial, social, economic, and environmental perspectives. Frequently used instruments such as cost-benefit analysis and SWOT analysis will be applied for the multidimensional assessment. The estimations for the analysis will be derived from desktop research and discussions in an interdisciplinary team of researchers. For the purpose of this work, high-performance transport modes are characterized as transport modes with very fast and very high-throughput connections that could act as an efficient extension to the existing transport network. The recently proposed hyperloop system represents a potential high-performance transport mode that might be an innovative supplement to current transport networks. 
The idea of the hyperloop is that persons and freight are shipped in a tube at more than airline speed. Another innovative technology is drones for freight transport: Amazon is already testing drones for parcel shipments, aiming for delivery times of 30 minutes. Drones can therefore be considered high-performance transport modes as well. The Trans-European Transport Network programme (TEN-T) addresses the expansion of transport grids in Europe and also includes high-speed rail connections to better connect important European cities. These services should increase the competitiveness of rail and are intended to replace aviation, which is known to be a polluting transport mode. In this sense, the integration of high-performance transport modes as described above facilitates the objectives of the TEN-T programme. The results of the multidimensional impact analysis will reveal the potential future effects of integrating high-performance modes into transport networks. Building on that, a recommendation can be given on the subsequent (research) steps necessary to ensure the most efficient implementation and integration processes.
Keywords: drones, future transport networks, high performance transport modes, hyperloops, impact analysis
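The multidimensional assessment described above can, in its simplest form, be reduced to a weighted multi-criteria score across the four perspectives named earlier; the weights and ratings below are invented purely for illustration, not results of the study:

```python
# Invented weights and ratings: a weighted multi-criteria score across the
# four assessment dimensions named above. Illustration only, not study data.
weights = {"spatial": 0.2, "social": 0.2, "economic": 0.3, "environmental": 0.3}

def overall_score(ratings):
    # ratings: dimension -> score on a 0-10 scale
    return sum(weights[d] * ratings[d] for d in weights)

hyperloop = {"spatial": 4, "social": 6, "economic": 3, "environmental": 8}
drones = {"spatial": 8, "social": 5, "economic": 7, "environmental": 6}
print(f"hyperloop: {overall_score(hyperloop):.1f}, drones: {overall_score(drones):.1f}")
```

In practice the ratings would be fed by the cost-benefit and SWOT results rather than set by hand.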
Procedia PDF Downloads 332
271 Dynamic Exergy Analysis for the Built Environment: Fixed or Variable Reference State
Authors: Valentina Bonetti
Abstract:
Exergy analysis successfully helps optimize processes in various sectors. In the built environment, a second-law approach can enhance potential interactions between constructions and their surrounding environment and minimise fossil fuel requirements. Despite the research done in this field in recent decades, practical applications are hard to encounter, and few integrated exergy simulators are available for building designers. Undoubtedly, one obstacle to the diffusion of exergy methods is the strong dependency of results on the definition of the ‘reference state’, a highly controversial issue. Since exergy combines energy and entropy by means of a reference state (also called the ‘reference environment’ or ‘dead state’), the choice of reference is crucial. Compared to other classical applications, buildings present two challenging elements: they operate very near to the reference state, which means that small variations have relevant impacts, and their behaviour is dynamic in nature. Not surprisingly, then, the reference state definition for the built environment is still debated, especially for dynamic assessments. Among the several characteristics that need to be defined, a crucial decision for a dynamic analysis is between a fixed reference environment (constant in time) and a variable state whose fluctuations follow the local climate. Even though the latter selection prevails in research and is recommended by recent, widely diffused guidelines, the fixed reference has been analytically demonstrated to be the only choice that defines exergy as a proper function of state in a fluctuating environment. This study investigates the impact of that crucial choice: fixed or variable reference. The basic element of the building energy chain, the envelope, is chosen as the object of investigation, as it is common to any building analysis. 
Exergy fluctuations in the building envelope of a case study (a typical house located in a Mediterranean climate) are compared at each time-step of a significant summer day, when the building behaviour is highly dynamic. Since exergy efficiencies and fluxes are not familiar numbers, the more intuitive concept of exergy storage is used to summarize the results. Trends obtained with a fixed and a variable reference (outside air) are compared, and their meaning is discussed in light of the underpinning dynamic energy analysis. In conclusion, a fixed reference state is considered the best choice for dynamic exergy analysis. Even though the fixed reference is generally only contemplated as the simpler selection, and the variable state is often stated to be more accurate without explicit justification, the analytical considerations supporting the adoption of a fixed reference are confirmed by the usefulness and clarity of interpretation of its results. Further discussion is needed to address the conflict between the evidence supporting a fixed reference state and the wide adoption of a fluctuating one. A more robust theoretical framework, including selection criteria for the reference state in dynamic simulations, could push the development of integrated dynamic tools and thus spread exergy analysis for the built environment into common practice.
Keywords: exergy, reference state, dynamic, building
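The fixed-versus-variable distinction discussed above can be made concrete with the standard thermal exergy of a thermal mass, Ex = m*c*((T - T0) - T0*ln(T/T0)); the wall mass and temperatures below are invented for illustration and are not the case-study values:

```python
# Thermal exergy of a thermal mass, Ex = m*c*((T - T0) - T0*ln(T/T0)),
# evaluated with a fixed versus a climate-following reference temperature.
# Masses and temperatures are invented for illustration.
import math

def thermal_exergy(m, c, T, T0):
    """Exergy (J) stored in mass m (kg) with heat capacity c (J/(kg K))."""
    return m * c * ((T - T0) - T0 * math.log(T / T0))

m, c = 1000.0, 900.0                   # an illustrative heavyweight wall
T0_fixed = 293.15                      # fixed reference: assumed annual mean (K)
wall_T = [300.0, 303.0, 306.0]         # wall temperature over three time-steps (K)
T0_var = [291.0, 298.0, 304.0]         # outdoor air following the day's climate (K)

for T, T0v in zip(wall_T, T0_var):
    print(f"T={T:.0f} K: fixed-ref Ex={thermal_exergy(m, c, T, T0_fixed):.0f} J, "
          f"variable-ref Ex={thermal_exergy(m, c, T, T0v):.0f} J")
# With a fixed T0, Ex is a function of the wall state alone; with a variable
# T0 the same wall temperature maps to different exergy values over the day.
```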
Procedia PDF Downloads 226