Search results for: linear complexity
Paper Count: 4848

1068 Application of Single Tuned Passive Filters in Distribution Networks at the Point of Common Coupling

Authors: M. Almutairi, S. Hadjiloucas

Abstract:

The harmonic distortion of voltage is an important power quality concern due to the interaction between the large diffusion of non-linear and time-varying single-phase and three-phase loads and power supply systems. Harmonic distortion levels can, however, be reduced by improving the design of the polluting loads or by adding filters. The application of passive filters is an effective solution for harmonic mitigation, mainly because such filters offer high efficiency and simplicity and are economical. Additionally, their different possible frequency response characteristics can be exploited to meet specific harmonic filtering targets. With these ideas in mind, the objective of this paper is to determine the size of single tuned passive filters that works best in distribution networks, in order to economically limit violations at a given point of common coupling (PCC). This article suggests that a single tuned passive filter could be employed in typical industrial power systems. Furthermore, constrained optimization can be used to find the optimal sizing of the passive filter in order to reduce both harmonic voltages and harmonic currents in the power system to an acceptable level and thus improve the load power factor. The optimization technique minimizes the voltage total harmonic distortion (VTHD) and the current total harmonic distortion (ITHD) while maintaining a given power factor within a specified range. In accordance with IEEE Standard 519, both indices are viewed as constraints for the optimal passive filter design problem. The performance of this technique is discussed using numerical examples taken from previous publications.
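
As a rough illustration of the constrained-optimization step described above, the sketch below sizes a single tuned filter by minimizing a combined distortion measure subject to IEEE 519-style THD limits and a power factor window. The distortion and power-factor functions, limits, and load data are hypothetical placeholders, not the authors' network model.

```python
# Toy sketch of sizing a single-tuned passive filter by constrained optimization.
# The distortion and power-factor models below are illustrative placeholders.
import numpy as np
from scipy.optimize import minimize

def vthd(qc):          # hypothetical voltage THD (%) vs. filter size qc (Mvar)
    return 8.0 / (1.0 + 0.8 * qc)

def ithd(qc):          # hypothetical current THD (%) vs. filter size qc (Mvar)
    return 12.0 / (1.0 + 1.1 * qc)

def power_factor(qc):  # hypothetical displacement power factor vs. filter size
    p_mw, q_mvar = 4.0, 3.0            # assumed load P and Q
    q_net = max(q_mvar - qc, 1e-6)     # reactive power left after compensation
    return p_mw / np.hypot(p_mw, q_net)

objective = lambda x: vthd(x[0]) + ithd(x[0])   # minimize total distortion

constraints = [
    {"type": "ineq", "fun": lambda x: 5.0 - vthd(x[0])},           # VTHD <= 5 % (IEEE 519-style)
    {"type": "ineq", "fun": lambda x: 8.0 - ithd(x[0])},           # ITHD <= 8 %
    {"type": "ineq", "fun": lambda x: power_factor(x[0]) - 0.95},  # PF >= 0.95
    {"type": "ineq", "fun": lambda x: 1.00 - power_factor(x[0])},  # PF <= 1.00
]

res = minimize(objective, x0=[1.0], bounds=[(0.1, 3.0)], constraints=constraints)
print(f"optimal filter size ~ {res.x[0]:.2f} Mvar, PF = {power_factor(res.x[0]):.3f}")
```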

Keywords: harmonics, passive filter, power factor, power quality

Procedia PDF Downloads 299
1067 The Effect of Ingredient Mixing Sequence in Rubber Compounding on the Formation of Bound Rubber and Cross-Link Density of Natural Rubber

Authors: Abu Hasan, Rochmadi, Hary Sulistyo, Suharto Honggokusumo

Abstract:

The purpose of this research is to study the effect of the ingredient mixing sequence in rubber compounding on the formation of bound rubber and the cross-link density of natural rubber, as well as the relationship between bound rubber and cross-link density. Bound rubber formation in the rubber compound and the cross-link density of the rubber vulcanizates were analysed for a natural rubber formula that was masticated and mixed, followed by curing. There were four mixing methods, and each mixing process was followed by four mixing sequences for adding carbon black into the rubber. In the first method, rubber was masticated for 5 min and then rubber chemicals and carbon black N 330 were added simultaneously. In the second, rubber was masticated for 1 min, followed by the simultaneous addition of rubber chemicals and carbon black N 330 using a different mixing method than the first. In the third, carbon black N 660 was used with the same mixing procedure as the second, and in the last, rubber was masticated for 3 min, after which carbon black N 330 and rubber chemicals were added subsequently. The addition of rubber chemicals and carbon black into the masticated rubber was distinguished by the sequence and the time allocated to each mixing step. Carbon black was added in two stages. In the first method, 10 phr was added in the first stage and the remaining 40 phr was added later along with oil. In the second to fourth methods, the carbon black added in the first and second stages was in phr ratios of 20:30, 30:20, and 40:10, respectively. The results showed that the mixing process influenced bound rubber formation and cross-link density. In three of the mixing methods, bound rubber formation was proportional to cross-link density; in contrast, in the fourth, bound rubber formation and cross-link density showed an inverse relation. Regardless of the mixing method, bound rubber had a non-linear relationship with cross-link density: high cross-link density was formed at low bound rubber formation, and the cross-link density became constant at high bound rubber content.

Keywords: bound-rubber, cross-link density, natural rubber, rubber mixing process

Procedia PDF Downloads 406
1066 Transformation of Periodic Fuzzy Membership Function to Discrete Polygon on Circular Polar Coordinates

Authors: Takashi Mitsuishi

Abstract:

Fuzzy logic has gained acceptance in recent years in fields of the social sciences and humanities such as psychology and linguistics because it can manage the fuzziness of words and human subjectivity in a logical manner. However, the major field of application of fuzzy logic is control engineering, as it is part of set theory and mathematical logic. The Mamdani method, which is the most popular technique for approximate reasoning in the field of fuzzy control, is one way to numerically represent the control afforded by human language and sensitivity, and it has been applied in various practical control plants. Fuzzy logic has also been gradually developing as an artificial intelligence technique in applications such as neural networks, expert systems, and operations research. The objects of inference vary across application fields and include time, angle, color, symptom, and medical condition, whose fuzzy membership functions are periodic. In the defuzzification stage, the domain of the membership function should be unique in order to obtain a unique defuzzified value. However, if the domain of a periodic membership function is forced to be unique, an unintuitive defuzzified value may be obtained as the inference result when using the center-of-gravity method. Therefore, the authors propose a method of circular-polar-coordinate transformation and defuzzification of periodic membership functions in this study. The transformation to circular polar coordinates simplifies the domain of the periodic membership function, and the defuzzified value in circular polar coordinates is an argument (angle). Furthermore, the argument must be calculated from a closed plane figure, which is the periodic membership function mapped onto the circular polar coordinates. If the closed plane figure is treated as continuous, in keeping with the continuity of the membership function, a significant amount of computation is required. Therefore, to simplify the practical example and significantly reduce the computational complexity, the continuous interval and the membership function are discretized in this study. The following three methods are proposed to determine the argument from the discrete polygon into which the continuous plane figure is transformed. The first method provides the argument of a straight line passing through the origin and the coordinate of the arithmetic mean of the polygon vertices (the physical center of gravity). The second provides the argument of a straight line passing through the origin and the coordinate of the geometric center of gravity of the polygon. The third provides the argument of a straight line passing through the origin that bisects the perimeter of the polygon (or of the closed continuous plane figure).
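
The sketch below illustrates the first two of the three proposed methods, assuming the periodic membership function has already been discretized into polygon vertices on the circular polar plane; the sample membership function is an arbitrary toy example.

```python
# Sketch: defuzzifying a periodic membership function that has been discretized
# into a closed polygon on circular polar coordinates.  Methods 1 and 2 from the
# abstract: argument of the vertex mean vs. argument of the polygon's area centroid.
import numpy as np

def argument_of_vertex_mean(xy):
    """Method 1: line through the origin and the arithmetic mean of the vertices."""
    cx, cy = xy.mean(axis=0)
    return np.arctan2(cy, cx)

def argument_of_area_centroid(xy):
    """Method 2: line through the origin and the polygon's geometric centroid."""
    x, y = xy[:, 0], xy[:, 1]
    x1, y1 = np.roll(x, -1), np.roll(y, -1)
    cross = x * y1 - x1 * y                     # shoelace terms
    area = cross.sum() / 2.0
    cx = ((x + x1) * cross).sum() / (6.0 * area)
    cy = ((y + y1) * cross).sum() / (6.0 * area)
    return np.arctan2(cy, cx)

# Toy periodic membership function sampled at discrete angles (radius = grade).
theta = np.linspace(0.0, 2.0 * np.pi, 36, endpoint=False)
delta = np.angle(np.exp(1j * (theta - np.pi / 3)))   # wrapped distance to the peak angle
radius = 0.2 + np.exp(-2.0 * delta ** 2)             # membership grade as radius
poly = np.column_stack([radius * np.cos(theta), radius * np.sin(theta)])

print("method 1 (vertex mean) :", np.degrees(argument_of_vertex_mean(poly)))
print("method 2 (area centroid):", np.degrees(argument_of_area_centroid(poly)))
```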

Keywords: defuzzification, fuzzy membership function, periodic function, polar coordinates transformation

Procedia PDF Downloads 353
1065 Creativity and Innovation in Postgraduate Supervision

Authors: Rajendra Chetty

Abstract:

The paper aims to address two aspects of postgraduate studies: interdisciplinary research and creative models of supervision. Interdisciplinary research can be viewed as a key imperative to solve complex problems. While excellent research requires a context of disciplinary strength, the cutting edge is often found at the intersection between disciplines. Interdisciplinary research foregrounds a team approach, and information, methodologies, designs, and theories from different disciplines are integrated to advance fundamental understanding or to solve problems whose solutions are beyond the scope of a single discipline. Our aim should also be to generate research that transcends the original disciplines, i.e., transdisciplinary research. Complexity is characteristic of the knowledge economy; hence, postgraduate research and engaged scholarship should be viewed by universities as primary vehicles through which knowledge can be generated to have a meaningful impact on society. There are far too many ‘ordinary’ studies that fall into the realm of credentialism and certification as opposed to significant studies that generate new knowledge and provide a trajectory for further academic discourse. Secondly, the paper will look at models of supervision that differ from the dominant ‘apprentice’ or individual approach. A reflective practitioner approach is used to discuss a range of supervision models that resonate well with the principles of interdisciplinarity, growth in the postgraduate sector, and a commitment to engaged scholarship. The global demand for postgraduate education has resulted in increased intake and new demands on the limited supervision capacity of institutions. Team supervision lodged within large-scale research projects, working with a cohort of students within a research theme, the journal article route to the doctorate, and the professional PhD are some of the models that provide an alternative to the traditional approach. International cooperation should be encouraged in the production of high-impact research, and institutions should be committed to stimulating international linkages which would result in co-supervision, mobility of postgraduate students, and global significance of postgraduate research. International linkages are also valuable in increasing the capacity for supervision at new and developing universities. Innovative co-supervision and joint-degree options with global partners should be explored within strategic planning for innovative postgraduate programmes. Co-supervision of PhD students is probably the strongest driver (besides funding) for collaborative research, as it provides the glue of shared interest, advantage, and commitment between supervisors. The students’ fields serve and inform the co-supervisors’ own research agendas and help to shape over-arching research themes through shared research findings.

Keywords: interdisciplinarity, internationalisation, postgraduate, supervision

Procedia PDF Downloads 229
1064 Learning with Music: The Effects of Musical Tension on Long-Term Declarative Memory Formation

Authors: Nawras Kurzom, Avi Mendelsohn

Abstract:

The effects of background music on learning and memory are inconsistent, partly due to the intrinsic complexity and variety of music and partly due to individual differences in music perception and preference. A prominent musical feature that is known to elicit strong emotional responses is musical tension. Musical tension can be brought about by building anticipation of rhythm, harmony, melody, and dynamics. Delaying the resolution of dominant-to-tonic chord progressions, as well as using dissonant harmonics, can elicit feelings of tension, which can, in turn, affect memory formation for concomitant information. The aim of the presented studies was to explore how the formation of declarative memory is influenced by musical tension, brought about within continuous music as well as in the form of isolated chords with varying degrees of dissonance/consonance. The effects of musical tension on long-term memory for declarative information were studied in two ways: 1) by evoking tension within continuous music pieces by delaying the release of harmonic progressions from dominant to tonic chords, and 2) by using isolated single complex chords with various degrees of dissonance/roughness. Musical tension was validated through subjective reports of tension, as well as physiological measurements of the skin conductance response (SCR) and pupil dilation responses to the chords. In addition, music information retrieval (MIR) was used to quantify musical properties associated with tension and its release. Each experiment included an encoding phase, wherein individuals studied stimuli (words or images) under different musical conditions. Memory for the studied stimuli was tested 24 hours later via recognition tasks. In three separate experiments, we found positive relationships between tension perception and the physiological measurements of SCR and pupil dilation. As for memory performance, we found that background music, in general, led to superior memory performance as compared to silence. We detected a trade-off effect between tension perception and memory, such that individuals who perceived musical tension as such displayed reduced memory performance for images encoded during musical tension, whereas tense music benefited memory for those who were less sensitive to the perception of musical tension. Musical tension exerts complex interactions with perception, emotional responses, and cognitive performance in individuals with and without musical training. Delineating the conditions and mechanisms that underlie the interactions between musical tension and memory can benefit our understanding of musical perception at large and of the diverse effects that music has on the ongoing processing of declarative information.

Keywords: musical tension, declarative memory, learning and memory, musical perception

Procedia PDF Downloads 89
1063 Comparative Study of Electronic and Optical Properties of Ammonium and Potassium Dinitramide Salts through Ab-Initio Calculations

Authors: J. Prathap Kumar, G. Vaitheeswaran

Abstract:

The present study investigates the role of the ammonium and potassium ions in the electronic, bonding, and optical properties of dinitramide salts, which are of interest due to their stability and non-toxic nature. A detailed analysis of the bonding between NH₄ and K with dinitramide, the optical transitions from the valence band to the conduction band, the absorption spectra, refractive indices, reflectivity, and loss function is reported. These materials are well known as oxidizers in solid rocket propellants. In the present work, we use the full potential linear augmented plane wave (FP-LAPW) method, as implemented in the Wien2k package, within the framework of density functional theory. The standard DFT functionals, the local density approximation (LDA) and the generalized gradient approximation (GGA), always underestimate the band gap by 30-40% due to the lack of derivative discontinuities of the exchange-correlation potential with respect to occupation number. In order to obtain reliable results, one must use hybrid functionals (HSE-PBE), GW calculations, or the Tran-Blaha modified Becke-Johnson (TB-mBJ) potential. It is well known that hybrid functionals and GW calculations are computationally very expensive, whereas the latter method is computationally cheap. The newly developed TB-mBJ functional uses the kinetic energy density along with the charge density employed in DFT. The TB-mBJ functional cannot be used for total energy calculations but instead yields a much improved band gap. The electronic band gaps at the gamma point obtained with the GGA functional are 2.78 eV for ammonium dinitramide and 3.014 eV for potassium dinitramide. After the inclusion of TB-mBJ, the band gap improved to 4.162 eV for potassium dinitramide and 4.378 eV for ammonium dinitramide. The nature of the band gap is direct in ADN and indirect in KDN. The optical constants, such as the dielectric constant, absorption, refractive indices, and birefringence values, are presented. Overall, as there are no experimental studies available, we present the improved band gap obtained with the TB-mBJ functional, followed by the optical properties.

Keywords: ammonium dinitramide, potassium dinitramide, DFT, propellants

Procedia PDF Downloads 145
1062 Analyzing the Influence of Hydrometeorological Extremes, Geological Setting, and Social Demographics on Public Health

Authors: Irfan Ahmad Afip

Abstract:

The main research objective is to accurately identify the possible severity of a Leptospirosis outbreak in a given area based on the area's input features in a multivariate regression model. The research question is whether the possibility of an outbreak in a specific area is influenced by features such as social demographics and hydrometeorological extremes. If the occurrence of an outbreak is subject to these features, then the epidemic severity for an area will differ depending on its environmental setting, because the features will influence both the possibility and the severity of an outbreak. Specifically, the research objectives were three-fold, namely: (a) to identify the relevant multivariate features and visualize the patterns in the data, (b) to develop a multivariate regression model based on the selected features and determine the possibility of a Leptospirosis outbreak in an area, and (c) to compare the predictive ability of the multivariate regression model with that of machine learning algorithms. Several secondary data features were collected from locations in the state of Negeri Sembilan, Malaysia, based on their possible relevance for determining outbreak severity in the area. The relevant features then become the inputs to a multivariate regression model; a linear regression model is a simple and quick solution for creating prognostic capabilities, and a multivariate regression model has proven to have more precise prognostic capabilities than univariate models. The expected outcome of this research is to establish a correlation between the sociodemographic and hydrometeorological features and the Leptospirosis bacteria; it will also contribute to understanding the underlying relationship between the pathogen and the ecosystem. The relationship established can help the health department or urban planners to inspect and prepare for future outcomes in event detection and system health monitoring.
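
A minimal sketch of the modelling comparison outlined above, fitting a multivariate linear regression and a machine-learning baseline to hypothetical feature data; the feature names and synthetic records are placeholders, not the Negeri Sembilan dataset.

```python
# Sketch: multivariate linear regression vs. a machine-learning baseline for
# predicting outbreak severity from sociodemographic and hydrometeorological
# features.  The data here are synthetic placeholders.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
X = pd.DataFrame({
    "rainfall_mm":        rng.gamma(5.0, 30.0, n),     # hydrometeorological proxy
    "population_density": rng.uniform(50, 5000, n),    # sociodemographic proxy
    "flood_prone":        rng.integers(0, 2, n),       # geological/land-setting proxy
})
# Hypothetical "severity" response with noise.
y = (0.02 * X["rainfall_mm"] + 0.001 * X["population_density"]
     + 3.0 * X["flood_prone"] + rng.normal(0, 2, n))

for name, model in [("linear regression", LinearRegression()),
                    ("random forest", RandomForestRegressor(n_estimators=200, random_state=0))]:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name:18s} mean cross-validated R^2 = {r2:.3f}")
```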

Keywords: geographical information system, hydrometeorological, leptospirosis, multivariate regression

Procedia PDF Downloads 106
1061 Artificial Neural Network-Based Prediction of Effluent Quality of Wastewater Treatment Plant Employing Data Preprocessing Approaches

Authors: Vahid Nourani, Atefeh Ashrafi

Abstract:

Prediction of treated wastewater quality is a matter of growing importance in the water treatment procedure. In this regard, the artificial neural network (ANN), as a robust data-driven approach, has been widely used for forecasting the effluent quality of wastewater treatment. However, developing an ANN model based on appropriate input variables is a major concern due to the numerous parameters collected from the treatment process, the number of which is increasing with the development of electronic sensors. Various studies have been conducted, using different clustering methods, in order to classify the most related and effective input variables. Nevertheless, the selection of dominant input variables among wastewater treatment parameters, which could effectively lead to more accurate prediction of water quality, has often been overlooked. In the presented study, two ANN models were developed with the aim of forecasting the effluent quality of Tabriz city’s wastewater treatment plant. Biochemical oxygen demand (BOD) was used as the target water quality parameter. Model A used Principal Component Analysis (PCA), a linear variance-based clustering method, for input selection. Model B used the variables identified by the mutual information (MI) measure. When the results of the two models were compared, the optimal ANN structure showed up to a 15% increment in the Determination Coefficient (DC). Thus, this study highlights the advantage of the PCA method in selecting dominant input variables for ANN modeling of wastewater treatment plant performance.
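
A minimal sketch of the two input-selection routes compared above: model A feeds PCA components to a small neural network, while model B feeds the variables ranked highest by mutual information. The synthetic influent data and the choice of four inputs are assumptions standing in for the real Tabriz plant records.

```python
# Sketch: two ANN models for effluent BOD prediction, differing only in how the
# input variables are selected -- PCA components (model A) vs. the predictors
# ranked highest by mutual information (model B).  Data are synthetic stand-ins.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import mutual_info_regression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
n, p = 500, 12                      # 12 hypothetical process parameters
X = rng.normal(size=(n, p))
bod = X[:, 0] * 2 + X[:, 3] - 0.5 * X[:, 7] + rng.normal(0, 0.5, n)   # toy target

X_tr, X_te, y_tr, y_te = train_test_split(X, bod, random_state=0)

# Model A: PCA (linear, variance-based) input selection.
pca = PCA(n_components=4).fit(X_tr)
ann_a = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
ann_a.fit(pca.transform(X_tr), y_tr)
print("model A (PCA) DC:", r2_score(y_te, ann_a.predict(pca.transform(X_te))))

# Model B: keep the 4 raw variables with the highest mutual information with BOD.
mi = mutual_info_regression(X_tr, y_tr, random_state=0)
top = np.argsort(mi)[-4:]
ann_b = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
ann_b.fit(X_tr[:, top], y_tr)
print("model B (MI)  DC:", r2_score(y_te, ann_b.predict(X_te[:, top])))
```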

Keywords: artificial neural networks, biochemical oxygen demand, principal component analysis, mutual information, Tabriz wastewater treatment plant, wastewater treatment plant

Procedia PDF Downloads 119
1060 An Experimental Investigation on the Fuel Characteristics of Nano-Aluminium Oxide and Nano-Cobalt Oxide Particles Blended in Diesel Fuel

Authors: S. Singh, P. Patel, D. Kachhadiya, Swapnil Dharaskar

Abstract:

The research objective is to integrate nanoparticles into fuels, i.e., diesel, biodiesel, biodiesel blended with diesel, plastic-derived fuels, etc., to increase the fuel efficiency. The metal oxide nanoparticles reduce carbon monoxide emissions by donating oxygen atoms from their lattices to catalyze the combustion reactions and to aid complete combustion; due to this, there is an increase in the calorific value of the blend (fuel + metal nanoparticles). Aluminium oxide and cobalt oxide nanoparticles were synthesized by the sol-gel method. Characterization was done by Fourier Transform Infrared Spectroscopy (FTIR), X-Ray Diffraction (XRD), Scanning Electron Microscopy (SEM), and Energy Dispersive X-ray Spectroscopy (EDS). The particle sizes determined by XRD were 28.6 nm and 28.06 nm for the aluminium oxide and cobalt oxide nanoparticles, respectively. Blends of different concentrations (50, 100, and 150 ppm) were prepared by adding the required weight of metal oxide to 1 liter of diesel and sonicating for 30 minutes at 500 W. The blend properties (calorific value, viscosity, and flash point) were determined with a bomb calorimeter, a Brookfield viscometer, and a Pensky-Martens apparatus, respectively. For the aluminium oxide blended diesel, there was a maximum increase of 5.544% in the calorific value, but at the same time, the flash point increased from 43°C to 58.5°C and the viscosity increased from 2.45 cP to 3.25 cP. On the other hand, for the cobalt oxide blended diesel, there was a maximum increase of 2.012% in the calorific value, while the flash point increased from 43°C to 51.5°C and the viscosity increased from 2.45 cP to 2.94 cP. The calorific value, viscosity, and flash point increased linearly as the concentration of metal oxide nanoparticles in the blend was increased. For the blend containing 50 ppm Al₂O₃ and 50 ppm Co₃O₄, the increase in the calorific value was 1.228%, the viscosity changed from 2.45 cP to 2.64 cP, and the flash point increased from 43°C to 50.5°C. Clearly, the aluminium oxide nanoparticles increase the calorific value but at the cost of flash point and viscosity; thus, it is preferable to use the diesel blended with 50 ppm aluminium oxide and 50 ppm cobalt oxide.

Keywords: aluminium oxide nanoparticles, cobalt oxide nanoparticles, fuel additives, fuel characteristics

Procedia PDF Downloads 309
1059 A Semidefinite Model to Quantify Dynamic Forces in the Powertrain of Torque Regulated Bascule Bridge Machineries

Authors: Kodo Sektani, Apostolos Tsouvalas, Andrei Metrikine

Abstract:

The reassessment of existing movable bridges in the Netherlands has created the need for acceptance/rejection criteria to assess whether the machineries meet certain design demands. However, the existing design code defines a different limit state design, meant for new machineries, which is based on a simple linear spring-mass model. Observations show that existing bridges do not conform to the model predictions. In fact, movable bridges are nonlinear systems consisting of mechanical components such as gears, electric motors, and brakes. In addition, each movable bridge is characterized by a unique set of parameters, yet in the existing code various variables that describe the physical characteristics of the bridge are neglected or replaced by partial factors. For instance, the damping ratio ζ, which is different for drawbridges compared to bascule bridges, is taken as a constant for all bridge types. In this paper, a model is developed that overcomes some of the limitations of existing modelling approaches in capturing the dynamics of the powertrain of a class of bridge machineries. First, a semidefinite dynamic model is proposed, which accounts for stiffness, damping, and some additional characteristics of the physical system that are neglected by the code, such as nonlinear braking torques. The model gives an upper bound on the peak forces/torques occurring in the powertrain during emergency braking. Second, a discrete nonlinear dynamic model is discussed, with realistic motor torque characteristics during normal operation. This model succeeds in accurately predicting the full time history of the stress state over the opening and closing cycle for fatigue purposes.

Keywords: dynamics of movable bridges, bridge machinery, powertrains, torque measurements

Procedia PDF Downloads 144
1058 The Associations of Ankle and Brachial Systolic Blood Pressures with Obesity Parameters

Authors: Matei Tudor Berceanu, Hema Viswambharan, Kirti Kain, Chew Weng Cheng

Abstract:

Background: Obesity parameters, particularly visceral obesity as measured by the waist-to-height ratio (WHtR), correlate with insulin resistance. The metabolic microvascular changes associated with insulin resistance cause increased peripheral arteriolar resistance, primarily in the lower limb vessels. We hypothesize that ankle systolic blood pressures (SBPs) are more significantly associated with visceral obesity than brachial SBPs. Methods: 1098 adults, enriched for South Asians and Europeans with diabetes (T2DM), were recruited from a primary care practice in West Yorkshire. Their medical histories, including T2DM and cardiovascular disease (CVD) status, were gathered from an electronic database. The brachial, dorsalis pedis, and posterior tibial SBPs were measured using a Doppler machine. Their body mass index (BMI) and WHtR were calculated after measuring their weight, height, and waist circumference. Linear regressions were performed between the six SBPs and both obesity parameters, after adjusting for covariates. Results: Generally, the left posterior tibial SBP (P=4.559×10⁻¹⁵) and right posterior tibial SBP (P=1.114×10⁻¹³) were the pressures most significantly associated with the BMI, overall as well as in South Asians (P < 0.001) and Europeans (P < 0.001) specifically. In South Asians, although the left (P=0.032) and right brachial SBPs (P=0.045) were associated with the WHtR, the left posterior tibial SBP (P=0.023) showed the strongest association. Conclusion: Regardless of ethnicity, ankle SBPs are more significantly associated with generalized obesity than brachial SBPs, suggesting their potential for screening for early detection of T2DM and CVD. A combination of ankle SBPs with WHtR is proposed in South Asians.

Keywords: ankle blood pressures, body mass index, insulin resistance, waist-to-height-ratio

Procedia PDF Downloads 135
1057 Mathematical Modelling of Biogas Dehumidification by Using of Counterflow Heat Exchanger

Authors: Staņislavs Gendelis, Andris Jakovičs, Jānis Ratnieks, Aigars Laizāns, Dāvids Vardanjans

Abstract:

Dehumidification of biogas at biomass plants is very important for energy-efficient burning of the biomethane at the outlet. A few methods are widely used to reduce the water content of biogas, e.g. chiller/heat-exchanger-based cooling, the use of different adsorbents, as in PSA, or the combination of such approaches. A quite different method of biogas dehumidification is offered and analyzed in this paper. The main idea is to direct the flow of biogas from the plant around it downwards, thus creating an additional insulation layer. As the temperature in the gas shell layer around the plant decreases from ~38°C to 20°C in the summer, or even to 0°C in the winter, condensation of water vapor occurs. The water at the bottom of the gas shell can be collected and drained away. In addition, another, upward shell layer is created after the condensate drainage point on the outer side to further reduce heat losses. Thus, a counterflow biogas heat exchanger is created around the biogas plant. This research work deals with the numerical modelling of the biogas flow, taking into account heat exchange and condensation on cold surfaces. Different kinds of boundary conditions (air and ground temperatures in summer/winter) and various physical properties of the construction (insulation between layers, wall thickness) are included in the model to make it more general and useful for different biogas flow conditions. The complexity of this problem lies in the fact that the temperatures in both channels are conjugated in the case of low thermal resistance between the layers. The MATLAB programming language is used for the multiphysical model development, numerical calculations, and result visualization. An experimental installation on a biogas plant’s vertical wall, with two additional layers of polycarbonate sheets and controlled gas flow, was set up to verify the modelling results. The gas flow at the inlet/outlet, the temperatures between the layers, and the humidity were controlled and measured during a number of experiments. The good correlation with the modelling results for the vertical wall section allows the developed numerical model to be used for estimating the parameters of the whole biogas dehumidification system. Numerical modelling of the biogas counterflow heat exchanger system placed on the plant’s wall for various cases allows the thicknesses of the gas layers and the insulation layer to be optimized to ensure the necessary dehumidification of the gas under different climatic conditions. Modelling of a defined system configuration with known conditions helps to predict the temperature and humidity content of the biogas at the outlet.

Keywords: biogas dehumidification, numerical modelling, condensation, biogas plant experimental model

Procedia PDF Downloads 542
1056 Assessing the Suitability of South African Waste Foundry Sand as an Additive in Clay Masonry Products

Authors: Nthabiseng Portia Mahumapelo, Andre van Niekerk, Ndabenhle Sosibo, Nirdesh Singh

Abstract:

The foundry industry generates large quantities of solid waste in the form of waste foundry sand. The ever-increasing quantities of this type of industrial waste put pressure on landfill space, and its proper management has become a global concern. The South African foundry industry is no different when it comes to this solid waste generation. Utilizing the foundry waste sand in other applications has become an attractive avenue to deal with this waste stream. In the present paper, an evaluation was done on the suitability of foundry waste sand as an additive in clay masonry products. Purchased clay was added to the foundry waste sand sample in a 50/50 ratio, and the mixture was named the FC sample. The FC sample was mixed with water in a pan mixer until the mixture was consistent and suitable for extrusion. The FC sample was then extruded and cut into briquettes. Water absorption, shrinkage, and modulus of rupture tests were conducted on the resultant briquettes. The foundry waste sand and FC samples were characterized mineralogically using X-Ray Diffraction, and their major and trace elements were determined using Inductively Coupled Plasma Optical Emission Spectroscopy. Adding purchased clay to the foundry waste sand positively influenced the workability of the test sample. Another positive characteristic was the low linear shrinkage, which indicated that products manufactured from the FC sample would not be susceptible to cracking. The water absorption values were acceptable, as were the unfired and fired strength values of the briquette samples. In conclusion, the tests showed that foundry waste sand can be used as an additive in masonry clay bricks, provided it is blended with good quality clay.

Keywords: foundry waste sand, masonry clay bricks, modulus of rupture, shrinkage

Procedia PDF Downloads 221
1055 Socioeconomic Inequality in Physical Activity: The CASPIAN-V Study

Authors: Roya Kelishadi, Mostafa Amini-Rarani, Mostafa Qorbani

Abstract:

Introduction: As a health-related behavior, physical activity (PA) is unequally distributed in relation to individuals' socioeconomic status. This study aimed to assess socioeconomic inequality in PA among Iranian students and their parents at the national level and according to the socioeconomic status (SES) of their regions of residence. Method: This study was conducted as part of a national surveillance program among 14400 Iranian students and their parents. Non-linear principal component analysis was used to construct the households' socioeconomic status, and the concentration index approach was applied to measure inequality in the fathers', mothers', and students' PA. Results: The data of 13313 students and their parents were complete for the current study. At the national level and across SES regions, students had more PA than their parents (except in the lowest SES region), and fathers had more PA than mothers. The lowest mean PA of mothers and students was found in the highest SES region. At the national level, the concentration indices of fathers' and mothers' PA were -0.050 (95% CI: -0.067 to -0.030) and -0.028 (95% CI: -0.044 to -0.012), respectively, indicating pro-poor inequality, while the CI value of the students' PA was nearly equal to zero (P > 0.05). Within the SES regions, fathers' and mothers' PA was more concentrated among the poor, except in the lower-middle region. The regional concentration indices for students reveal that inequality was not statistically significant in any region. Conclusion: This study provides reliable evidence, comparing different aspects of inequality in PA based on the socioeconomic status and residence areas of students and their parents, that could be used for better planning of health promotion programs. Moreover, given that the average PA of mothers and students in the richer regions was low, PA planning focused on the richer regions may further increase the level of PA across higher SES groups and, consequently, reduce inequality in PA. These findings can be applied in health system services.
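
A minimal sketch of the concentration index calculation used above, based on the common covariance formulation CI = 2·cov(y, r)/μ with r the fractional SES rank; the SES scores and PA values below are synthetic.

```python
# Sketch: concentration index for physical activity (PA) against socioeconomic
# status (SES), using CI = 2 * cov(PA, fractional SES rank) / mean(PA).
# Negative values indicate a pro-poor concentration of PA.  Data are synthetic.
import numpy as np

def concentration_index(outcome, ses):
    order = np.argsort(ses)                       # rank individuals from poorest to richest
    y = np.asarray(outcome, dtype=float)[order]
    n = y.size
    frac_rank = (np.arange(1, n + 1) - 0.5) / n   # fractional SES rank in (0, 1)
    return 2.0 * np.cov(y, frac_rank, bias=True)[0, 1] / y.mean()

rng = np.random.default_rng(2)
ses = rng.normal(size=14400)                      # household SES score (e.g., from PCA)
pa = np.clip(5 - 0.4 * ses + rng.normal(0, 2, ses.size), 0, None)  # PA slightly higher among the poor

print(f"concentration index = {concentration_index(pa, ses):.3f}")  # expected to be negative
```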

Keywords: concentration index, health system services, physical activity, socioeconomic inequality

Procedia PDF Downloads 146
1054 Sustainability Assessment of a Deconstructed Residential House

Authors: Atiq U. Zaman, Juliet Arnott

Abstract:

This paper analyses the various benefits and barriers of residential deconstruction in the context of environmental performance and the circular economy, based on a case study project in Christchurch, New Zealand. The case study project, “Whole House Deconstruction”, aimed, firstly, to harvest materials from a residential house; secondly, to produce new products using the recovered materials; and thirdly, to organize an exhibition for the local public to promote awareness of resource conservation and sustainable deconstruction practices. Through a systematic deconstruction process, the project recovered around 12 tonnes of various construction materials, most of which would otherwise have been disposed of to landfill under the traditional demolition approach. It is estimated that the deconstruction of a similar residential house could potentially prevent around 27,029 kg of carbon emissions by recovering and reusing the building materials. In addition, the project involved local designers who produced 400 artefacts using the recovered materials and exhibited them to accelerate public awareness. The findings from this study suggest that the deconstruction project has significant environmental benefits, as well as social benefits from involving the local community and unemployed youth as part of their professional skills development opportunities. However, the project faced a number of economic and institutional challenges. The study concludes that, with proper economic models and appropriate institutional support, a significant amount of construction and demolition waste can be reduced through a systematic deconstruction process. Traditionally, the greatest benefits from such projects are often ignored and remain unreported to wider audiences, as most of the external and environmental costs are not considered in the traditional linear economy.

Keywords: circular economy, construction and demolition waste, resource recovery, systematic deconstruction, sustainable waste management

Procedia PDF Downloads 177
1053 In vitro Study of Inflammatory Gene Expression Suppression by Strawberry and Blackberry Extracts

Authors: Franco Van De Velde, Debora Esposito, Maria E. Pirovani, Mary A. Lila

Abstract:

The physiology of various inflammatory diseases is a complex process mediated by inflammatory and immune cells such as macrophages and monocytes. Chronic inflammation, as observed in many cardiovascular and autoimmune disorders, occurs when the low-grade inflammatory response fails to resolve with time. Because of the complexity of chronic inflammatory disease, major efforts have focused on identifying novel anti-inflammatory agents and dietary regimes that prevent the pro-inflammatory process at the early stage of gene expression of key pro-inflammatory mediators and cytokines. The ability of the extracts of three blackberry cultivars (‘Jumbo’, ‘Black Satin’ and ‘Dirksen’) and one strawberry cultivar (‘Camarosa’) to inhibit four well-known genetic biomarkers of inflammation, inducible nitric oxide synthase (iNOS), cyclooxygenase-2 (Cox-2), interleukin-1β (IL-1β), and interleukin-6 (IL-6), was investigated in an in vitro lipopolysaccharide-stimulated murine RAW 264.7 macrophage model. Moreover, the effect of the latter extracts on intracellular reactive oxygen species (ROS) and nitric oxide (NO) production was assessed. The assay was conducted at a crude extract concentration of 50 µg/mL, an amount that is easily achievable in the gastrointestinal tract after berry consumption. The mRNA expression levels of Cox-2 and IL-6 were reduced consistently (by more than 30%) by the extracts of ‘Jumbo’ and ‘Black Satin’ blackberries. Strawberry extracts showed a large reduction in the mRNA expression levels of IL-6 (more than 65%) and a moderate reduction in the mRNA expression of Cox-2 (more than 35%). The latter behavior mirrors the intracellular ROS production of the LPS-stimulated RAW 264.7 macrophages after treatment with the blackberry ‘Black Satin’ and ‘Jumbo’ and strawberry ‘Camarosa’ extracts, suggesting that phytochemicals from these fruits may play a role in health maintenance by reducing oxidative stress. On the other hand, effective inhibition of the gene expression of IL-1β and iNOS was not observed for any of the blackberry and strawberry extracts. However, suppression of NO production in the activated macrophages of 5–25% was observed for the ‘Jumbo’ and ‘Black Satin’ blackberry extracts and the ‘Camarosa’ strawberry extracts, suggesting an NO-suppressing property of the phytochemicals of these fruits. All these results suggest the potential beneficial effects of the studied berries as functional foods with antioxidant and anti-inflammatory roles. Moreover, the underlying role of phytochemicals from these fruits in protection against the inflammatory process deserves to be further explored.

Keywords: cyclooxygenase-2, functional foods, interleukin-6, reactive oxygen species

Procedia PDF Downloads 227
1052 Low Voltage and High Field-Effect Mobility Thin Film Transistor Using Crystalline Polymer Nanocomposite as Gate Dielectric

Authors: Debabrata Bhadra, B. K. Chaudhuri

Abstract:

The operation of organic thin film transistors (OFETs) at low voltage is currently a prevailing issue. We have fabricated an anthracene thin-film transistor (TFT) with an ultrathin layer (~450 nm) of poly(vinylidene fluoride) (PVDF)/CuO nanocomposite as the gate insulator. We obtained a device with excellent electrical characteristics at low operating voltages (<1 V). Films with different numbers of layers were also prepared to achieve the best optimization of the ideal gate insulator with various static dielectric constants (εr). The capacitance density, the leakage current at a 1 V gate voltage, and the electrical characteristics of OFETs with single- and multi-layer films were investigated. The device was found to have the highest field-effect mobility of 2.27 cm²/Vs, a threshold voltage of 0.34 V, an exceptionally low subthreshold slope of 380 mV/decade, and an on/off ratio of 10⁶. Such a favorable combination of properties means that these OFETs can be operated successfully at voltages below 1 V. A very simple fabrication process has been used, along with a stepwise poling process for enhancing the pyroelectric effects on the device performance. The output characteristics of the OFET changed after poling and exhibited a linear current-voltage relationship, showing evidence of large polarization. The temperature-dependent response of the device was also investigated. The stable performance of the OFET after the poling operation makes it reliable for temperature sensor applications. Such high-ε CuO/PVDF gate dielectrics appear to be highly promising candidates for organic non-volatile memory and sensor field-effect transistors (FETs).

Keywords: organic field effect transistors, thin film transistor, gate dielectric, organic semiconductor

Procedia PDF Downloads 233
1051 Normalizing Flow to Augmented Posterior: Conditional Density Estimation with Interpretable Dimension Reduction for High Dimensional Data

Authors: Cheng Zeng, George Michailidis, Hitoshi Iyatomi, Leo L. Duan

Abstract:

The conditional density characterizes the distribution of a response variable y given a predictor x and plays a key role in many statistical tasks, including classification and outlier detection. Although there has been abundant work on the problem of Conditional Density Estimation (CDE) for a low-dimensional response in the presence of a high-dimensional predictor, little work has been done for a high-dimensional response such as images. The promising performance of normalizing flow (NF) neural networks in unconditional density estimation acts as a motivating starting point. In this work, the authors extend NF neural networks to the case where an external x is present. Specifically, they use the NF to parameterize a one-to-one transform between a high-dimensional y and a latent z that comprises two components [zₚ, zₙ]. The zₚ component is a low-dimensional subvector obtained from the posterior distribution of an elementary predictive model for x, such as logistic/linear regression. The zₙ component is a high-dimensional independent Gaussian vector, which explains the variations in y not or less related to x. Unlike existing CDE methods, the proposed approach, coined Augmented Posterior CDE (AP-CDE), only requires a simple modification of the common normalizing flow framework while significantly improving the interpretation of the latent component, since zₚ represents a supervised dimension reduction. In image analytics applications, AP-CDE shows good separation of x-related variations, due to factors such as lighting condition and subject id, from the other random variations. Further, the experiments show that an unconditional NF neural network, based on an unsupervised model of z such as a Gaussian mixture, fails to generate interpretable results.
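
A toy sketch of the AP-CDE idea under heavy simplification: a single affine-coupling layer stands in for the full normalizing flow, and the zₚ prior is tied to an x-dependent mean in the spirit of the logistic/linear-regression posterior described above. All names and data are illustrative, not the authors' implementation.

```python
# Toy sketch of the AP-CDE construction: an invertible map sends y to a latent
# z = [z_p, z_n]; the prior over z_p depends on x (supervised, low-dimensional),
# while z_n is standard Gaussian noise explaining x-unrelated variation.
# A single affine-coupling layer replaces the full normalizing flow.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic data: binary x shifts the first coordinate of a 2-D response y.
n = 2000
x = torch.randint(0, 2, (n, 1)).float()
y = torch.randn(n, 2)
y[:, :1] += 3.0 * x                              # x-related variation lives in y[:, 0]

# x-dependent centre for the z_p prior (stand-in for the regression posterior).
w = nn.Parameter(torch.zeros(1))
b = nn.Parameter(torch.zeros(1))

# One affine-coupling layer as a minimal invertible transform y -> z.
s_net = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))
t_net = nn.Sequential(nn.Linear(1, 16), nn.Tanh(), nn.Linear(16, 1))

def flow_forward(y_batch):
    z_p = y_batch[:, :1]                          # first half passes through unchanged
    s, t = s_net(z_p), t_net(z_p)
    z_n = (y_batch[:, 1:] - t) * torch.exp(-s)    # invertible affine on the second half
    log_det = -s.squeeze(-1)                      # log|det dz/dy| of the coupling layer
    return z_p, z_n, log_det

params = list(s_net.parameters()) + list(t_net.parameters()) + [w, b]
opt = torch.optim.Adam(params, lr=0.05)

for step in range(400):
    z_p, z_n, log_det = flow_forward(y)
    mu = w * x + b                                # supervised prior mean for z_p
    log_prior = (-0.5 * (z_p - mu) ** 2 - 0.5 * z_n ** 2).sum(dim=1)
    loss = -(log_prior + log_det).mean()          # negative conditional log-likelihood
    opt.zero_grad(); loss.backward(); opt.step()

print("learned x-effect on z_p (should approach ~3):", w.item())
```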

Keywords: conditional density estimation, image generation, normalizing flow, supervised dimension reduction

Procedia PDF Downloads 84
1050 A Comparative Approach to the Concept of Incarnation of God in Hinduism and Christianity

Authors: Cemil Kutluturk

Abstract:

This is a comparative study of the incarnation of God according to Hinduism and Christianity. After dealing with their basic ideas on the concept of the incarnation of God, the main similarities and differences between the two will be examined by quoting references from their sacred texts. In Hinduism, the term avatara is used to indicate the concept of the incarnation of God. The word avatara is derived from ava (down) and tri (to cross, to save, to attain). Thus avatara means to come down or to descend. Although an avatara is commonly considered an appearance of any deity on earth, the term refers particularly to descents of Vishnu. According to Hinduism, God becomes an avatara in every age, entering into diverse wombs for the sake of establishing righteousness. On the Christian side, the word incarnation means enfleshment. In Christianity, it is believed that the Logos or Word, the Second Person of the Trinity, assumed human reality. Incarnation refers both to the act of God becoming a human being and to the result of this action, namely the permanent union of the divine and human natures in the one Person of the Word. When the doctrines of incarnation and avatara are compared, some similarities and differences can be found. The basic similarity is that, in both doctrines, the incarnate God is not bound by the laws of nature as human beings are. Both reveal God’s personal love and concern and emphasize loving devotion. Their entry into the world is generally accompanied by extraordinary signs. In both cases, the descent of God allows human beings to ascend to God. On the other hand, there are some distinctions between the two religious traditions. For instance, according to Hinduism there are many and repeated avataras, while Christ comes only once; indeed, this is related to the respective cyclic and linear worldviews of the two religions. Another difference is that in Hinduism avataras are real and perfect, while in Christianity Christ is also real, yet imperfect; that is, he has human imperfections, except sin. While Christ has never been thought of as a partial incarnation, in Hinduism there are some partial and full avataras. The other difference is that, while the purpose of Christ is primarily ultimate salvation, not every avatara grants ultimate liberation; some of them come only to save a devotee from a specific predicament.

Keywords: Avatara, Christianity, Hinduism, incarnation

Procedia PDF Downloads 244
1049 Computational Characterization of Electronic Charge Transfer in Interfacial Phospholipid-Water Layers

Authors: Samira Baghbanbari, A. B. P. Lever, Payam S. Shabestari, Donald Weaver

Abstract:

Existing signal transmission models, although undoubtedly useful, have proven insufficient to explain the full complexity of information transfer within the central nervous system. The development of transformative models will necessitate a more comprehensive understanding of neuronal lipid membrane electrophysiology. Pursuant to this goal, the role of highly organized interfacial phospholipid-water layers emerges as a promising case study. A series of phospholipids in neural-glial gap junction interfaces, as well as cholesterol molecules, have been computationally modelled using high-performance density functional theory (DFT) calculations. Subsequent 'charge decomposition analysis' calculations have revealed a net transfer of charge from phospholipid orbitals through the organized interfacial water layer before ultimately finding its way to cholesterol acceptor molecules. The specific pathway of charge transfer from phospholipid via water layers towards cholesterol has been mapped in detail. Cholesterol is an essential membrane component that is overrepresented in neuronal membranes as compared to other mammalian cells; given this relative abundance, its apparent role as an electronic acceptor may prove to be a relevant factor in further signal transmission studies of the central nervous system. The timescales over which this electronic charge transfer occurs have also been evaluated by utilizing a system design that systematically increases the number of water molecules separating lipids and cholesterol. Memory loss through hydrogen-bonded networks in water can occur at femtosecond timescales, whereas existing action potential-based models are limited to micro or nanosecond scales. As such, the development of future models that attempt to explain faster timescale signal transmission in the central nervous system may benefit from our work, which provides additional information regarding fast timescale energy transfer mechanisms occurring through interfacial water. The study's dataset includes six distinct phospholipids and a collection of cholesterol molecules. Ten optimized geometric characteristics (features) were employed to conduct binary classification through an artificial neural network (ANN), differentiating cholesterol from the various phospholipids. This stems from our understanding that all lipids within the first group function as electronic charge donors, while cholesterol serves as an electronic charge acceptor.
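
A minimal sketch of the classification step described at the end of the abstract: an ANN separating cholesterol from phospholipids using ten geometric descriptors per molecule. The feature values are synthetic placeholders for the DFT-optimized geometries.

```python
# Sketch: ANN binary classifier separating cholesterol (charge acceptor) from
# phospholipids (charge donors) using ten geometric descriptors per structure.
# Synthetic features stand in for the DFT-optimized geometries.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n_per_class, n_features = 60, 10
phospholipids = rng.normal(0.0, 1.0, (n_per_class, n_features))
cholesterol = rng.normal(0.8, 1.0, (n_per_class, n_features))   # shifted descriptors

X = np.vstack([phospholipids, cholesterol])
y = np.array([0] * n_per_class + [1] * n_per_class)   # 0 = donor lipid, 1 = cholesterol

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0))
acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"cross-validated accuracy: {acc.mean():.2f} +/- {acc.std():.2f}")
```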

Keywords: charge transfer, signal transmission, phospholipids, water layers, ANN

Procedia PDF Downloads 57
1048 Effect of Thermal Treatment on Phenolic Content, Antioxidant, and Alpha-Amylase Inhibition Activities of Moringa stenopetala Leaves

Authors: Daniel Assefa, Engeda Dessalegn, Chetan Chauhan

Abstract:

Moringa stenopetala is a tree of socioeconomic value that is widely available and cultivated in the southern part of Ethiopia. The leaves have traditionally been used as a food source with high nutritional and medicinal value. The present work was carried out to evaluate the effect of thermal treatment on the total phenolic content, antioxidant activity, and alpha-amylase inhibition activity of aqueous leaf extracts obtained by maceration and by decoction at different time intervals (5, 10 and 15 min). The total phenolic content was determined by the Folin-Ciocalteu method, the antioxidant activities were determined by 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical scavenging, reducing power, and ferrous ion chelating assays, and the alpha-amylase inhibition activity was determined using the 3,5-dinitrosalicylic acid method. The total phenolic content ranged from 34.35 to 39.47 mg GAE/g. The 10 min decoction extract showed ferrous ion chelating (92.52), DPPH radical scavenging (91.52%), alpha-amylase inhibition (69.06%), and ferric reducing power (0.765) activities. The DPPH, reducing power, and alpha-amylase inhibition activities showed positive linear correlations with the total phenolic content (R²=0.853, R²=0.857 and R²=0.930, respectively), but the ferrous ion chelating activity was found to be weakly correlated (R²=0.481). Based on the present investigation, it could be concluded that the major loss of total phenolic content, antioxidant activity, and alpha-amylase inhibition activity of the crude leaf extracts of Moringa stenopetala was observed at a decoction time of 15 min. Therefore, to maintain the total phenolic content, antioxidant, and alpha-amylase inhibition activities of the leaves, cooking should be kept within the optimum decoction time (5-10 min).

Keywords: alpha-amylase inhibition, antioxidant, Moringa stenopetala, total phenolic content

Procedia PDF Downloads 345
1047 Historical Development of Negative Emotive Intensifiers in Hungarian

Authors: Martina Katalin Szabó, Bernadett Lipóczi, Csenge Guba, István Uveges

Abstract:

In this study, an exhaustive analysis of the historical development of negative emotive intensifiers in the Hungarian language was carried out via NLP methods. Intensifiers are linguistic elements which modify or reinforce a variable character in the lexical unit they apply to. Intensifiers therefore appear with other lexical items, such as adverbs, adjectives, verbs, and, infrequently, nouns. Due to the complexity of this phenomenon (a set of sociolinguistic, semantic, and historical aspects), there are many lexical items which can operate as intensifiers. The group of intensifiers is admittedly one of the most rapidly changing elements in the language. From a linguistic point of view, a particularly interesting special group of intensifiers are the so-called negative emotive intensifiers, which, on their own and without context, have semantic content that can be associated with negative emotion, but which in particular cases may function as intensifiers (e.g. borzasztóan jó ’awfully good’, which means ’excellent’). Despite their special semantic features, negative emotive intensifiers have scarcely been examined in the literature on the basis of large historical corpora via NLP methods. In order to become better acquainted with trends over time concerning these intensifiers, the authors exhaustively analysed a specific historical corpus, namely the Magyar Történeti Szövegtár (Hungarian Historical Corpus). This corpus (containing 3 million text words) is a collection of texts of various genres and styles produced between 1772 and 2010. Since the corpus consists of raw texts and does not contain any additional information about the language features of the data (such as stemming or morphological analysis), a large amount of manual work was required to process the data. Thus, based on a lexicon of negative emotive intensifiers compiled in a previous phase of the research, every occurrence of each intensifier was queried, and the results were stored in a separate data frame. Then, basic linguistic processing (POS-tagging, lemmatization, etc.) was carried out automatically with the ‘magyarlanc’ NLP toolkit. Finally, the frequency and collocation features of all the negative emotive words were automatically analyzed in the corpus. The outcomes of the research revealed in detail how these words have proceeded through grammaticalization over time, i.e., they change from lexical elements to grammatical ones, and they slowly go through a delexicalization process (their negative content diminishes over time). What is more, it was also pointed out which negative emotive intensifiers are at the same stage of this process in the same time period. Taking a closer look at the different domains of the analysed corpus, it also became clear that during this process the importance of the pragmatic role increases: the newer use expresses the speaker's subjective, evaluative opinion to a certain degree.
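
A minimal sketch of the frequency and collocation step, assuming the corpus has already been lemmatized and the intensifier lexicon is available; the tiny document list and 50-year bucketing are illustrative choices, and no magyarlanc calls are shown.

```python
# Sketch: counting occurrences and right-hand collocates of a negative emotive
# intensifier in an already-lemmatized corpus slice, bucketed by document year.
# The token data and the one-entry lexicon are illustrative placeholders.
from collections import Counter, defaultdict

intensifier_lexicon = {"borzasztóan"}       # negative emotive intensifier ('awfully')

# (year, [lemmas]) pairs standing in for processed corpus documents.
documents = [
    (1880, ["a", "előadás", "borzasztóan", "rossz", "volt"]),
    (1950, ["a", "film", "borzasztóan", "jó", "volt"]),
    (2005, ["ez", "borzasztóan", "jó", "ötlet"]),
]

freq_by_period = Counter()
collocates = defaultdict(Counter)

for year, lemmas in documents:
    period = (year // 50) * 50                          # 50-year buckets
    for i, lemma in enumerate(lemmas):
        if lemma in intensifier_lexicon:
            freq_by_period[period] += 1
            if i + 1 < len(lemmas):
                collocates[lemma][lemmas[i + 1]] += 1   # right-hand collocate

print("frequency by period:", dict(freq_by_period))
print("top collocates of 'borzasztóan':", collocates["borzasztóan"].most_common(3))
```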

Keywords: historical corpus analysis, historical linguistics, negative emotive intensifiers, semantic changes over time

Procedia PDF Downloads 223
1046 Nuclear Fuel Safety Threshold Determined by Logistic Regression Plus Uncertainty

Authors: D. S. Gomes, A. T. Silva

Abstract:

Analysis of the uncertainty quantification related to the nuclear safety margins applied to a nuclear reactor is an important concept for preventing future radioactive accidents. Nuclear fuel performance codes may involve tolerance levels determined by traditional deterministic models that produce acceptable results for burnup cycles under 62 GWd/MTU. The behavior of nuclear fuel can be simulated by applying a series of material properties under irradiation together with physics models to calculate the safety limits. In this study, theoretical predictions of nuclear fuel failure under transient conditions investigate extended irradiation cycles at 75 GWd/MTU, considering the behavior of fuel rods in light-water reactors under reactivity accident conditions. The fuel pellet can melt due to the rapid increase of reactivity during a transient. Large power excursions in the reactor are the subject of interest, leading to a treatment that is known as the Fuchs-Hansen model. The point kinetics neutron equations show the characteristics of non-linear differential equations. In this investigation, multivariate logistic regression is employed for a probabilistic forecast of fuel failure. A comparison of the computational simulations and experimental results showed acceptable agreement. The experiments carried out used pre-irradiated fuel rods subjected to a rapid energy pulse, which reproduces the behavior during a nuclear accident. The propagation of uncertainty utilizes the Wilks formulation. The variables chosen as essential to failure prediction were the fuel burnup, the applied peak power, the pulse width, the oxidation layer thickness, and the cladding type.
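
A minimal sketch of a multivariate logistic regression for fuel-failure probability using the five predictors named above; the training records and the failure rule that generates them are synthetic, and the closing comment on Wilks-style sampling is only indicative.

```python
# Sketch: multivariate logistic regression for the probability of fuel failure
# under a reactivity-initiated transient, using the five predictors named in the
# abstract.  The records below are synthetic placeholders, not RIA test data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 400
data = pd.DataFrame({
    "burnup_GWd_MTU":     rng.uniform(10, 75, n),
    "peak_power_kW_m":    rng.uniform(20, 120, n),
    "pulse_width_ms":     rng.uniform(5, 80, n),
    "oxide_thickness_um": rng.uniform(5, 100, n),
    "cladding_type":      rng.integers(0, 2, n),    # toy binary encoding of cladding alloy
})
# Toy failure rule: high burnup + thick oxide + narrow pulse raises failure odds.
logit = (0.06 * data["burnup_GWd_MTU"] + 0.03 * data["oxide_thickness_um"]
         - 0.04 * data["pulse_width_ms"] - 4.0)
failed = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

model = LogisticRegression(max_iter=5000).fit(data, failed)
new_rod = pd.DataFrame([{"burnup_GWd_MTU": 75, "peak_power_kW_m": 90,
                         "pulse_width_ms": 30, "oxide_thickness_um": 60,
                         "cladding_type": 1}])
print("predicted failure probability:", model.predict_proba(new_rod)[0, 1])
# For uncertainty propagation, Wilks-style nonparametric tolerance limits are often
# obtained by rerunning the model over sampled inputs (e.g., 59 runs for a one-sided
# 95%/95% statement in the first-order formulation).
```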

Keywords: logistic regression, reactivity-initiated accident, safety margins, uncertainty propagation

Procedia PDF Downloads 285
1045 Ethanol Chlorobenzene Dosimeter Usage for Measuring Dose of the Intraoperative Linear Electron Accelerator System

Authors: Mojtaba Barzegar, Alireza Shirazi, Saied Rabi Mahdavi

Abstract:

Intraoperative radiation therapy (IORT) is an innovative treatment modality involving the delivery of a large single dose of radiation to the tumor bed during surgery. The success of the radiotherapy depends on the absorbed dose delivered to the tumor. Achieving better accuracy in patient treatment depends upon the dose measured by a standard dosimeter such as an ionization chamber; however, because of the high density of electric charge per pulse produced by the accelerator in the ionization chamber volume, the standard correction factor for ion recombination, Ksat, calculated with the classic two-voltage method is overestimated, so the use of dose-per-pulse-independent dosimeters such as the chemical Fricke and ethanol chlorobenzene (ECB) dosimeters has been suggested. The dose is usually calculated and calibrated at Zmax. Ksat was calculated by comparing the ionization chamber and ECB dosimeter responses at each applicator angle, size, and dose. The relative output factors (OFs) for the IORT applicators have been calculated and compared with the experimentally determined values and with the results simulated by Monte Carlo software. The absorbed doses have been calculated and measured with statistical uncertainties of less than 0.7% and 2.5%, respectively. The relative differences between the calculated and measured OFs were up to 2.5%; for the major OFs the agreement was better. Under these conditions, together with the relative absorbed dose calculations, the OFs could be considered an indication that the IORT electron beams have been well simulated. These investigations demonstrate that the full Monte Carlo simulation of the accelerator head, together with the ECB dosimeter, allows us to obtain detailed information on clinical IORT beams.

Keywords: intra operative radiotherapy, ethanol chlorobenzene, ksat, output factor, monte carlo simulation

Procedia PDF Downloads 469
1044 Trends in Incisional and Ventral Hernia Repair: A Population Analysis from 2001 to 2021

Authors: Lakmali Anthony, Madeline Gillies

Abstract:

Background: Incisional and ventral hernias are highly prevalent, with primary ventral hernias occurring in approximately 20% of adults and incisional hernias developing in up to 30% of midline abdominal incisions. Recent data from the United States have shown an increasing incidence of elective incisional and ventral hernia repair (IVHR) and emergency repair of complicated hernias. This study examines Australian population trends in IVHR over a two-decade study period. Methods: This retrospective study was performed using procedure data from the Australian Institute of Health and Welfare and population data from the Australian Bureau of Statistics captured between 2000 and 2021 to calculate incidence rates per 100,000 population by age and sex for selected subcategories of IVHR operations. Trends over time were evaluated using simple linear regression. Results: There were 809,308 IVHR operations performed in Australia during the study period. The cumulative incidence adjusted for the population was 182 per 100,000; this increased by 9.578 per year during the study period (95% CI = 8.431 to 10.726, p < .001). IVHR for primary umbilical hernias experienced the most significant increase in population-adjusted incidence, 1.177 per year (95% CI = 0.654 to 1.701, p < .001). Emergency IVHR for incarcerated, obstructed, and strangulated hernias increased by 0.576 per year (95% CI = 0.510 to 0.642, p < .001). Only 20.2% of IVHR procedures were performed as day surgery. Conclusions: Australia has seen a significant increase in IVHR operations performed in the last 20 years, particularly those for primary ventral hernias. IVHR for hernias complicated by incarceration, obstruction, and strangulation also increased significantly. The proportion of IVHR operations performed as day surgery is well below the target set by the Royal Australasian College of Surgeons. With the increasing incidence of IVHR operations and an increasing proportion of these being emergent, elective IVHR should be performed as day surgery when it is safe.
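As a hedged illustration of the trend analysis described in the Methods, the following sketch runs a simple linear regression of population-adjusted incidence against year; the data are invented placeholders, not the study's figures.

```python
# Minimal sketch of the trend analysis: simple linear regression of
# population-adjusted IVHR incidence against year. Data are illustrative only.
from scipy import stats

years = list(range(2001, 2011))
incidence_per_100k = [120, 131, 138, 150, 158, 170, 177, 189, 199, 208]  # hypothetical

res = stats.linregress(years, incidence_per_100k)
ci_half_width = 1.96 * res.stderr  # approximate 95% CI on the slope
print(f"slope = {res.slope:.3f} per year "
      f"(95% CI {res.slope - ci_half_width:.3f} to {res.slope + ci_half_width:.3f}), "
      f"p = {res.pvalue:.3g}")
```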

Keywords: ventral, incisional, hernia, trends

Procedia PDF Downloads 67
1043 Optimization of Process Parameters for Copper Extraction from Wastewater Treatment Sludge by Sulfuric Acid

Authors: Usarat Thawornchaisit, Kamalasiri Juthaisong, Kasama Parsongjeen, Phonsiri Phoengchan

Abstract:

In this study, sludge samples collected from the wastewater treatment plant of a printed circuit board manufacturing industry in Thailand were subjected to acid extraction using sulfuric acid as the chemical extracting agent. The effects of sulfuric acid concentration (A), the ratio of the volume of acid to the quantity of sludge (B), and extraction time (C) on the efficiency of copper extraction were investigated with the aim of finding the optimal conditions for maximum removal of copper from the wastewater treatment sludge. A factorial experimental design was employed to model the copper extraction process. The results were analyzed statistically using analysis of variance to identify the process variables that significantly affected the copper extraction efficiency. Results showed that all linear terms, as well as the interaction between the acid-volume-to-sludge-quantity ratio and extraction time (BC), had a statistically significant influence on the efficiency of copper extraction under the tested conditions; the most significant effect was ascribed to the acid-volume-to-sludge-quantity ratio (B), followed by the sulfuric acid concentration (A), the extraction time (C), and the BC interaction, respectively. The remaining two-way interaction terms (AB, AC) and the three-way interaction term (ABC) were not statistically significant at the 0.05 significance level. A model equation was derived for the copper extraction process, and the process was optimized using a multiple-response method, the desirability (D) function, targeting maximum removal. The optimum conditions, extracting 99% of the copper, were found to be a sulfuric acid concentration of 0.9 M and a ratio of acid volume (mL) to sludge quantity (g) of 100:1, with an extraction time of 80 min. Experiments under the optimized conditions were carried out to validate the accuracy of the model.
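A minimal sketch of the factorial modelling step, assuming a single-replicate 2³ design with coded factor levels and hypothetical efficiencies (not the study's data), might look as follows.

```python
# Sketch of a 2^3 factorial analysis with statsmodels (illustrative data,
# coded factor levels -1/+1): A = acid concentration, B = acid/sludge ratio,
# C = extraction time, y = copper extraction efficiency (%).
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "A": [-1,  1, -1,  1, -1,  1, -1,  1],
    "B": [-1, -1,  1,  1, -1, -1,  1,  1],
    "C": [-1, -1, -1, -1,  1,  1,  1,  1],
    "y": [61, 70, 84, 93, 66, 74, 90, 99],  # hypothetical efficiencies
})

model = smf.ols("y ~ A * B * C", data=df).fit()
print(model.params)  # main-effect and interaction coefficients
# With replicated runs, sm.stats.anova_lm(model, typ=2) would give the ANOVA
# table used to test which terms are significant at the 0.05 level.
```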

Keywords: acid treatment, chemical extraction, sludge, waste management

Procedia PDF Downloads 187
1042 Downtime Estimation of Building Structures Using Fuzzy Logic

Authors: M. De Iuliis, O. Kammouh, G. P. Cimellaro, S. Tesfamariam

Abstract:

Community resilience has gained significant attention due to recent unexpected natural and man-made disasters. Resilience is the process of maintaining livable conditions in the event of interruptions in normally available services. Estimating the resilience of systems, ranging from individuals to communities, is a formidable task due to the complexity involved in the process. The most challenging parameter involved in the resilience assessment is the 'downtime'. Downtime is the time needed for a system to recover its services following a disaster event. Estimating the exact downtime of a system requires many inputs and resources that are not always obtainable. The uncertainties in downtime estimation are usually handled using probabilistic methods, which necessitate acquiring large amounts of historical data. The estimation process also involves ignorance, imprecision, vagueness, and subjective judgment. In this paper, a fuzzy-based approach to estimate the downtime of building structures following earthquake events is proposed. Fuzzy logic can integrate descriptive (linguistic) knowledge and numerical data into the fuzzy system. This ability allows the use of walk-down surveys, which collect data in a linguistic or numerical form. The use of fuzzy logic permits a fast and economical estimation of parameters that involve uncertainties. The first step of the method is to determine the building’s vulnerability. A rapid visual screening is designed to acquire information about the analyzed building (e.g., year of construction, structural system, site seismicity, etc.). Then, fuzzy logic is implemented using a hierarchical scheme to determine the building damageability, which is the main ingredient for estimating the downtime. Generally, the downtime can be divided into three main components: downtime due to the actual damage (DT1); downtime caused by rational and irrational delays (DT2); and downtime due to utilities disruption (DT3). In this work, DT1 is computed by relating the building damageability results obtained from the visual screening to already-defined component repair times available in the literature. DT2 and DT3 are estimated using the REDi™ Guidelines. The downtime of the building is finally obtained by combining the three components. The proposed method also allows identifying the downtime corresponding to each of the three recovery states: re-occupancy, functional recovery, and full recovery. Future work is aimed at improving the current methodology to move from the downtime to the resilience of buildings. This will provide a simple tool that can be used by the authorities for decision making.
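The paper's fuzzy system is not reproduced here; the sketch below is a minimal pure-Python illustration of the general idea, with assumed membership functions, repair-time centroids, and DT2/DT3 values.

```python
# Minimal pure-Python sketch (not the authors' implementation): a triangular
# membership function, an illustrative mapping from fuzzy damageability to
# repair time, and the combination DT = DT1 + DT2 + DT3.
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def dt1_from_damageability(d):
    """Weighted (defuzzified) repair time from fuzzy damageability classes.
    Class centroids of 30/120/365 days are illustrative assumptions."""
    mu_low = tri(d, 0.0, 0.0, 0.5)
    mu_mod = tri(d, 0.2, 0.5, 0.8)
    mu_high = tri(d, 0.5, 1.0, 1.0)
    num = 30 * mu_low + 120 * mu_mod + 365 * mu_high
    den = mu_low + mu_mod + mu_high
    return num / den if den else 0.0

dt1 = dt1_from_damageability(0.65)  # damageability score from visual screening
dt2, dt3 = 90.0, 14.0               # delays and utility disruption (assumed)
print(f"DT1 = {dt1:.0f} d, total downtime = {dt1 + dt2 + dt3:.0f} d")
```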

Keywords: resilience, restoration, downtime, community resilience, fuzzy logic, recovery, damage, built environment

Procedia PDF Downloads 155
1041 Designing a Model to Increase the Flow of Circular Economy Startups Using a Systemic and Multi-Generational Approach

Authors: Luís Marques, João Rocha, Andreia Fernandes, Maria Moura, Cláudia Caseiro, Filipa Figueiredo, João Nunes

Abstract:

The implementation of circularity strategies other than recycling, such as reducing the amount of raw material and reusing or sharing existing products, remains marginal. The European Commission announced that the transition towards a more circular economy could lead to the net creation of about 700,000 jobs in Europe by 2030, through additional labour demand from recycling plants, repair services and other circular activities. Efforts to create new circular business models built on completely circular processes, as opposed to linear ones, have increased considerably in recent years. In order to create a societal Circular Economy transition model, it is necessary to include innovative solutions, in which startups play a key role. Early-stage startups based on new business models built around circular processes often face difficulties in creating enough impact. The StartUp Zero Program designs a model and approach to increase the flow of startups in the Circular Economy field, focusing on systemic decision analysis and a multi-generational approach. It uses Multi-Criteria Decision Analysis to support a decision-making tool, combining the Analytic Hierarchy Process and Multi-Attribute Value Theory methods. We define principles, criteria and indicators for evaluating startup prerogatives, quantifying the evaluation process into a single result. Additionally, this entrepreneurship program, spanning 16 months, involved more than 2,400 young people, aged 14 to 23, in more than 200 interaction activities.
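As an illustration of the AHP/MAVT combination mentioned above, the sketch below derives criteria weights from a hypothetical pairwise comparison matrix and aggregates a startup's scores into a single value; the criteria, judgments, and scores are assumptions.

```python
# Sketch of an AHP/MAVT combination (illustrative numbers): AHP pairwise
# comparisons give criteria weights; an additive MAVT aggregation turns
# normalized startup scores into a single result. Criteria names are assumed.
import numpy as np

# Pairwise comparison matrix for 3 criteria, e.g. circularity, team, market fit
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()                      # AHP weights from the principal eigenvector

scores = np.array([0.8, 0.6, 0.4])   # startup's normalized value per criterion
overall = float(w @ scores)          # additive MAVT aggregation
print(f"weights = {np.round(w, 3)}, overall score = {overall:.3f}")
```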

Keywords: circular economy, entrepreneurship, startups, multi-criteria decision analysis

Procedia PDF Downloads 92
1040 Social Networks in Business: The Complex Concept of Wasta and the Impact of Islam on the Perception of This Practice

Authors: Sa'ad Ali

Abstract:

This study explores wasta as an example of a social network and how it impacts business practice in the Arab Middle East, drawing links with the impact of social networks in other regions of the world. In doing so, particular attention is paid to the socio-economic and cultural influences on business practice. In exploring relationships in business, concepts such as social network analysis, social capital and group identity are used to explore the different forms of social networks and how they influence business decisions and practices in the regions and countries where they prevail. The use of social networks to achieve objectives is known as guanxi in China, wasta in the Arab Middle East and blat in ex-Soviet countries. Wasta can be defined as favouritism based on tribal and family affiliation and is a widespread practice that has a substantial impact on political, social and business interactions in the Arab Middle East. Within the business context, it is used in several ways, such as to secure a job or promotion or to cut through bureaucracy in government interactions. The limited research available is fragmented, and most studies reveal a negative attitude towards its usage in business. Paradoxically, while wasta is widely practised, people from the Arab Middle East often deny its influence. Moreover, although negative opinions of the practice of wasta are regularly expressed, it can also be a source of great pride. This paper addresses this paradox by conducting a positional literature review, exploring the current literature on wasta and identifying how this paradox can be explained. The findings highlight how wasta has, to a large extent, been treated as an umbrella concept, whilst it is a highly complex practice which has evolved from intermediary wasta to intercessory wasta and therefore from bonding social capital relationships to more bridging social capital relationships. In addition, the research found that Islam, as the predominant religion in the region and the main source of ethical guidance for the majority of people from the region, plays a substantial role in this paradox. Specifically, it is submitted that wasta can be viewed positively in Islam when it is practised to aid others without breaking Islamic ethical guidelines, whilst it can be viewed negatively when it is used in contradiction with the teachings of Islam. As such, the unique contribution to knowledge of this study is that it ties together the fragmented literature on wasta, highlighting and helping us understand its complexity. In addition, it sheds light on the role of Islam in wasta practices, aiding our understanding of the paradoxical nature of the practice.

Keywords: Islamic ethics, social capital, social networks, Wasta

Procedia PDF Downloads 139
1039 Auto Calibration and Optimization of Large-Scale Water Resources Systems

Authors: Arash Parehkar, S. Jamshid Mousavi, Shoubo Bayazidi, Vahid Karami, Laleh Shahidi, Arash Azaranfar, Ali Moridi, M. Shabakhti, Tayebeh Ariyan, Mitra Tofigh, Kaveh Masoumi, Alireza Motahari

Abstract:

Water resource systems modelling has constantly been a challenge throughout human history. As innovative methodological developments evolve alongside computer science, researchers are likely to confront larger and more complex water resources systems due to new challenges regarding increased water demands, climate change and human interventions, socio-economic concerns, and environmental protection and sustainability. In this research, an automatic calibration scheme has been applied to Gilan’s large-scale water resource model using mathematical programming. The water resource model’s calibration is developed in order to tune unknown water return flows from demand sites in the complex Sefidroud irrigation network and other related areas. The calibration procedure is validated by comparing several gauged river outflows from the system in the past with the model results. The calibration results are reasonable, presenting a rational insight into the system. Subsequently, the unknown optimized parameters were used in a basin-scale linear optimization model with the ability to evaluate the system’s performance against a reduced-inflow scenario in the future. Results showed an acceptable match between predicted and observed outflows from the system at selected hydrometric stations. Moreover, an efficient operating policy was determined for the Sefidroud dam, leading to minimum water shortage under the reduced-inflow scenario.
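A minimal sketch of the auto-calibration idea, assuming a toy water balance with a single unknown return-flow fraction (not the Gilan model), is shown below.

```python
# Sketch of the auto-calibration idea (illustrative, not the Gilan model):
# tune an unknown return-flow fraction so that simulated outflows match the
# gauged outflows in a least-squares sense.
import numpy as np
from scipy.optimize import minimize

inflow   = np.array([120.0, 95.0, 140.0, 80.0])   # MCM per period (hypothetical)
demand   = np.array([ 60.0, 55.0,  70.0, 50.0])
observed = np.array([ 85.0, 63.0,  98.0, 52.0])   # gauged outflows

def simulated_outflow(return_fraction):
    return inflow - demand + return_fraction * demand

def objective(x):
    return float(np.sum((simulated_outflow(x[0]) - observed) ** 2))

res = minimize(objective, x0=[0.3], bounds=[(0.0, 1.0)])
print(f"calibrated return-flow fraction = {res.x[0]:.2f}, SSE = {res.fun:.1f}")
```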

Keywords: auto-calibration, Gilan, large-scale water resources, simulation

Procedia PDF Downloads 328