Search results for: hierarchical modeling
1845 Effects and Mechanisms of an Online Short-Term Audio-Based Mindfulness Intervention on Wellbeing in Community Settings and How Stress and Negative Affect Influence the Therapy Effects: Parallel Process Latent Growth Curve Modeling of a Randomized Control
Authors: Man Ying Kang, Joshua Kin Man Nan
Abstract:
The prolonged pandemic has posed alarming public health challenges to various parts of the world, and because face-to-face mental health treatment is largely curtailed to control virus transmission, online psychological services and self-help mental health kits have become essential. Online self-help mindfulness-based interventions have proven effective in fostering mental health for different populations across the globe. This paper tested the effectiveness of an online short-term audio-based mindfulness (SAM) program in enhancing wellbeing and dispositional mindfulness and reducing stress and negative affect in community settings in China, and explored possible mechanisms of how dispositional mindfulness, stress, and negative affect influenced the intervention effects on wellbeing. Community-dwelling adults were recruited via online social networking sites (e.g., QQ, WeChat, and Weibo). Participants (n=100) were randomized into a mindfulness group (n=50) and a waitlist control group (n=50). In the mindfulness group, participants were advised to spend 10–20 minutes listening to the audio content, including mindful-form practices (e.g., eating, sitting, walking, or breathing), and then to practice daily mindfulness exercises for 3 weeks (a total of 21 sessions), whereas those in the control group received the same intervention after data collection in the mindfulness group. Participants in the mindfulness group filled in the World Health Organization Five Well-Being Index (WHO), Positive and Negative Affect Schedule (PANAS), Perceived Stress Scale (PSS), and Freiburg Mindfulness Inventory (FMI) four times: at baseline (T0) and at 1 (T1), 2 (T2), and 3 (T3) weeks, while those in the waitlist control group filled in the same scales only at pre- and post-intervention. Repeated-measures analysis of variance, paired-sample t-tests, and independent-sample t-tests were used to analyze the variable outcomes of the two groups.
Parallel process latent growth curve modeling was used to explore the longitudinal moderated mediation effects. The dependent variable was the WHO slope from T0 to T3, the independent variable was group (1=SAM, 2=control), the mediator was the FMI slope from T0 to T3, and the moderators were T0 NA and T0 PSS, examined separately. The effects of different moderator levels on the WHO slope were explored: low T0 NA or T0 PSS (Mean-SD), medium T0 NA or T0 PSS (Mean), and high T0 NA or T0 PSS (Mean+SD). The results showed that SAM significantly improved and predicted higher WHO and FMI slopes, and significantly reduced NA and PSS. The FMI slope positively predicted the WHO slope and partially mediated the relationship between SAM and the WHO slope. Baseline NA and PSS were significant moderators between SAM and the WHO slope and between SAM and the FMI slope, respectively. The conclusion was that SAM was effective in promoting mental wellbeing, positive affect, and dispositional mindfulness, as well as reducing negative affect and stress, in community settings in China. SAM improved wellbeing faster through the faster enhancement of dispositional mindfulness. Medium-to-high baseline negative affect and stress buffered the therapy effects of SAM on the speed of wellbeing improvement.
Keywords: mindfulness, negative affect, stress, wellbeing, randomized control trial
Procedia PDF Downloads 111

1844 Optimal Protection Coordination in Distribution Systems with Distributed Generations
Authors: Abdorreza Rabiee, Shahla Mohammad Hoseini Mirzaei
Abstract:
The advantages of distributed generations (DGs) based on renewable energy sources (RESs) lead to a high penetration level of DGs in distribution networks. With the incorporation of DGs in distribution systems, system reliability and security, as well as the voltage profile, are improved. However, the protection of such systems remains challenging. In this paper, the related literature is first reviewed, and then a practical scheme is proposed for the coordination of overcurrent relays (OCRs) in a distribution system with DGs. The coordination problem is formulated as a nonlinear programming (NLP) optimization problem with the objective function of minimizing the total operating time of the OCRs. The proposed method is studied on a simple test system. The optimization problem is solved with the General Algebraic Modeling System (GAMS) to calculate the optimal time dial setting (TDS) and pickup current setting of the OCRs. The results show the effectiveness of the proposed method and its applicability.
Keywords: distributed generation, DG, distribution network, over current relay, OCR, protection coordination, pickup current, time dial setting, TDS
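As a rough illustration of the coordination problem described above, the sketch below sets up a two-relay toy instance with the IEC standard-inverse curve t = TDS · 0.14 / (M^0.02 − 1). The paper solves a nonlinear program in GAMS with both TDS and pickup currents as variables; here the pickups (and hence the fault-current multiples M) are fixed by assumption, which makes operating time linear in TDS, so the coordination reduces to a small linear program. All relay data are invented.

```python
import numpy as np
from scipy.optimize import linprog

# IEC standard-inverse relay: t = TDS * 0.14 / (M**0.02 - 1),
# M = I_fault / I_pickup. With pickups fixed, t is linear in TDS.
# Backup relay R2 must trip at least CTI seconds after primary R1.
M1, M2 = 10.0, 5.0            # assumed fault-current multiples at each relay
a1 = 0.14 / (M1**0.02 - 1.0)  # operating time per unit TDS, relay 1
a2 = 0.14 / (M2**0.02 - 1.0)  # operating time per unit TDS, relay 2
CTI = 0.3                     # coordination time interval [s]

# minimize a1*TDS1 + a2*TDS2  subject to  a2*TDS2 - a1*TDS1 >= CTI
res = linprog(c=[a1, a2],
              A_ub=[[a1, -a2]],     # a1*TDS1 - a2*TDS2 <= -CTI
              b_ub=[-CTI],
              bounds=[(0.05, 1.0), (0.05, 1.0)])
TDS1, TDS2 = res.x
print(f"TDS1={TDS1:.3f}, TDS2={TDS2:.3f}, total operating time={res.fun:.3f} s")
```

The optimum pins the primary relay at its minimum TDS and lifts the backup just enough to respect the coordination margin, which is the qualitative behavior one expects from the full NLP as well.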
Procedia PDF Downloads 140

1843 Interfacing Photovoltaic Systems to the Utility Grid: A Comparative Simulation Study to Mitigate the Impact of Unbalanced Voltage Dips
Authors: Badr M. Alshammari, A. Rabeh, A. K. Mohamed
Abstract:
This paper presents the modeling and control of a grid-connected photovoltaic system (PVS). First, the MPPT control of the PVS and its associated DC/DC converter is analyzed in order to extract the maximum available power. Second, the control system of the grid-side converter (GSC), a three-phase voltage source inverter (VSI), is presented. Special attention is paid to the control algorithms of the GSC during grid voltage imbalances. In particular, three control objectives are pursued: mitigating the adverse effects of the grid imbalance on the currents injected at the point of common coupling (PCC), eliminating double-frequency oscillations in the active power flow, and eliminating double-frequency oscillations in the reactive power flow. Simulations of two control strategies have been performed in MATLAB in order to demonstrate the particularities of each control strategy with respect to power quality standards.
Keywords: renewable energies, photovoltaic systems, dc link, voltage source inverter, space vector SVPWM, unbalanced voltage dips, symmetrical components
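The "symmetrical components" keyword refers to the Fortescue decomposition that underlies control during unbalanced dips: any unbalanced three-phase phasor set splits into zero-, positive-, and negative-sequence components. A minimal numeric sketch, with an invented dip on phase A (not the paper's test case):

```python
import numpy as np

# Fortescue decomposition of an unbalanced three-phase voltage set.
a = np.exp(2j * np.pi / 3)              # 120-degree rotation operator
A = np.array([[1, 1,    1],             # synthesis matrix: V_abc = A @ V_012
              [1, a**2, a],
              [1, a,    a**2]])

V_abc = np.array([0.5 + 0j, a**2, a])   # phase A dipped to 0.5 pu
V0, V1, V2 = np.linalg.solve(A, V_abc)  # zero, positive, negative sequence
print(f"|V0|={abs(V0):.3f}  |V1|={abs(V1):.3f}  |V2|={abs(V2):.3f}")
```

The nonzero negative-sequence magnitude is exactly what produces the double-frequency power oscillations the controller is designed to cancel.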
Procedia PDF Downloads 378

1842 Water Monitoring Sentinel Cloud Platform: Water Monitoring Platform Based on Satellite Imagery and Modeling Data
Authors: Alberto Azevedo, Ricardo Martins, André B. Fortunato, Anabela Oliveira
Abstract:
Water is under severe threat today because of the rising population, increased agricultural and industrial needs, and the intensifying effects of climate change. Due to sea-level rise, erosion, and demographic pressure, coastal regions are of significant concern to the scientific community. The Water Monitoring Sentinel Cloud platform (WORSICA) service is focused on providing new tools for monitoring water in coastal and inland areas, taking advantage of remote sensing, in situ, and tidal modeling data. WORSICA is a service that can be used to determine the coastline, coastal inundation areas, and the limits of inland water bodies using remote sensing (satellite and Unmanned Aerial Vehicles - UAVs) and in situ data (from field surveys). It applies to various purposes, from determining flooded areas (from rainfall, storms, hurricanes, or tsunamis) to detecting large water leaks in major water distribution networks. The service was built on components developed in national and European projects, integrated to provide a one-stop-shop service for remote sensing information, combining data from Copernicus satellites and drones/unmanned aerial vehicles, validated against existing online in situ data. Since WORSICA operates on European Open Science Cloud (EOSC) computational infrastructures, the service can be accessed via a web browser and is freely available to all European public research groups without additional costs. In addition, the private sector will be able to use the service, although some usage costs may apply, depending on the type of computational resources needed by each application/user.
Although the service has three main sub-services, i) coastline detection, ii) inland water detection, and iii) water leak detection in irrigation networks, the present study shows an application of the service to the Óbidos lagoon in Portugal, where the user can monitor the evolution of the lagoon inlet and estimate the topography of the intertidal areas without any additional costs. The service has several distinct methodologies implemented based on the computation of water indexes (e.g., NDWI, MNDWI, AWEI, and AWEIsh) retrieved from satellite image processing. In conjunction with tidal data obtained from the FES model, the system can estimate a coastline with the corresponding water level, or even the topography of the intertidal areas, based on the Flood2Topo methodology. The outcomes of the WORSICA service can be helpful in several intervention areas, such as i) emergencies, by providing fast access to inundated areas to support rescue operations; ii) support of management decisions on hydraulic infrastructure operation to minimize damage downstream; iii) climate change mitigation, by minimizing water losses and reducing water mains operation costs; and iv) early detection of water leakages in difficult-to-access water irrigation networks, promoting their fast repair.
Keywords: remote sensing, coastline detection, water detection, satellite data, sentinel, Copernicus, EOSC
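The water indexes named above are simple band ratios. A minimal sketch of two of them, NDWI (McFeeters) and MNDWI (Xu), on made-up reflectance values for one water pixel and one land pixel; the band roles follow common multispectral conventions, and the values are not WORSICA data:

```python
import numpy as np

def ndwi(green, nir):
    """McFeeters NDWI = (G - NIR) / (G + NIR); water pixels come out positive."""
    return (green - nir) / (green + nir)

def mndwi(green, swir):
    """Modified NDWI = (G - SWIR) / (G + SWIR); sharper over built-up areas."""
    return (green - swir) / (green + swir)

green = np.array([0.10, 0.08])   # [water pixel, land pixel] reflectances
nir   = np.array([0.02, 0.30])
swir  = np.array([0.01, 0.25])

mask_ndwi = ndwi(green, nir) > 0
mask_mndwi = mndwi(green, swir) > 0
print(mask_ndwi, mask_mndwi)     # water pixel flagged True in both
```

Thresholding such an index at zero (or at a scene-tuned value) is what yields the water mask from which a coastline can be traced.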
Procedia PDF Downloads 129

1841 Experimental and Numerical Analyses of Tehran Research Reactor
Authors: A. Lashkari, H. Khalafi, H. Khazeminejad, S. Khakshourniya
Abstract:
In this paper, a numerical model is presented. The model is used to analyze steady-state thermal-hydraulics and a reactivity-insertion transient in TRR reference cores. The model predictions are compared with experiments and with PARET code results. The model uses the piecewise constant method for the coupled point kinetics module and the lumped parameter method for the thermal-hydraulics module. The advantages of the piecewise constant method are simplicity, efficiency, and accuracy. The main criterion for the applicability range of this model is that the exit coolant temperature remains below the saturation temperature, i.e., no bulk boiling occurs in the core. The calculated power and coolant temperature, in the steady-state and positive reactivity insertion scenarios, are in good agreement with the experimental values. Thus, the model is a useful tool for the transient analysis of most research reactors encountered in practice. The main objective of this work is to use simple calculation methods and benchmark them against experimental data. The model can also be used for training purposes.
Keywords: thermal-hydraulic, research reactor, reactivity insertion, numerical modeling
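As a much-simplified stand-in for the point-kinetics module described above, the sketch below integrates the one-delayed-group point-kinetics equations for a step (piecewise constant) reactivity insertion. All parameter values are illustrative, not TRR data, and a general-purpose ODE solver replaces the paper's piecewise constant scheme:

```python
from scipy.integrate import solve_ivp

# One-delayed-group point kinetics with a +0.1 $ step reactivity insertion.
beta, Lambda, lam = 0.0065, 1e-4, 0.08   # delayed fraction, generation time [s],
rho = 0.1 * beta                         # precursor decay constant [1/s]; step rho

def rhs(t, y):
    n, c = y                             # relative power, precursor concentration
    return [(rho - beta) / Lambda * n + lam * c,
            beta / Lambda * n - lam * c]

y0 = [1.0, beta / (Lambda * lam)]        # start from delayed-critical equilibrium
sol = solve_ivp(rhs, (0.0, 50.0), y0, rtol=1e-8, atol=1e-10)
print(f"relative power after 50 s: {sol.y[0, -1]:.3f}")
```

The solution shows the expected prompt jump followed by a slow rise on the stable reactor period, which is the qualitative behavior a transient analysis tool must reproduce.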
Procedia PDF Downloads 402

1840 Contribution of Word Decoding and Reading Fluency on Reading Comprehension in Young Typical Readers of Kannada Language
Authors: Vangmayee V. Subban, Suzan Deelan. Pinto, Somashekara Haralakatta Shivananjappa, Shwetha Prabhu, Jayashree S. Bhat
Abstract:
Introduction and Need: During the early years of schooling, instruction mainly focuses on children's word decoding abilities. However, skilled readers must master all the components of reading: word decoding, reading fluency, and comprehension. Nevertheless, the relationship between these components during the process of learning to read is less clear. Studies conducted in alphabetic languages offer mixed findings on the relative contributions of word decoding and reading fluency to reading comprehension, and alphasyllabic languages remain unexplored. Aim and Objectives: The aim of the study was to explore the roles of word decoding and reading fluency in the reading comprehension abilities of children learning to read Kannada between the ages of 5.6 and 8.6 years. Method: In this cross-sectional study, a total of 60 typically developing children, 20 each from Grade I, Grade II, and Grade III, maintaining an equal gender ratio, in the age ranges 5.6 to 6.6 years, 6.7 to 7.6 years, and 7.7 to 8.6 years respectively, were selected from Kannada-medium schools. The reading fluency and reading comprehension abilities of the children were assessed using grade-level passages selected from the Kannada textbooks of the children's core curriculum. Each passage includes five questions to assess reading comprehension. Pseudoword decoding skills were assessed using 40 pseudowords of varying syllable length and Akshara composition. The pseudowords were formed by interchanging syllables within meaningful words while maintaining the phonotactic constraints of the Kannada language. The assessment material was subjected to content validation and reliability measures before data collection on the study samples. The data were collected individually; reading fluency was assessed as words correctly read per minute, and pseudoword decoding was scored for reading accuracy.
Results: Descriptive statistics indicated that mean pseudoword reading, reading comprehension, and words accurately read per minute increased with grade. The performance of Grade III children was found to be highest, Grade I lowest, and Grade II intermediate between Grades III and I. The trend indicates that reading skills gradually improve across grades. Pearson's correlation coefficients showed moderate and highly significant (p=0.00) positive correlations between the variables, indicating the interdependency of the three components required for reading. The hierarchical regression analysis revealed that pseudoword decoding explained 37% of the variance in reading comprehension, which was highly significant. On subsequent entry of the reading fluency measure, there was no significant change in R-squared, which increased by only 3%. Therefore, pseudoword decoding emerged as the single most significant predictor of reading comprehension during the early grades of reading acquisition. Conclusion: The present study concludes that pseudoword decoding skills contribute more significantly than reading fluency to reading comprehension during the initial years of schooling in children learning to read the Kannada language.
Keywords: alphasyllabary, pseudo-word decoding, reading comprehension, reading fluency
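The hierarchical regression logic used above, entering decoding first and then checking how much R-squared changes when fluency is added, can be sketched in a few lines. The data below are synthetic, constructed only to mirror the reported pattern (correlated predictors, decoding carrying most of the signal); none of the numbers come from the Kannada sample:

```python
import numpy as np

def r_squared(X, y):
    """OLS R-squared, with an intercept column added."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

rng = np.random.default_rng(0)
n = 60
decoding = rng.normal(size=n)
fluency = 0.8 * decoding + 0.6 * rng.normal(size=n)   # correlated predictors
comprehension = 0.6 * decoding + 0.1 * fluency + rng.normal(size=n)

r2_step1 = r_squared(decoding[:, None], comprehension)            # decoding only
r2_step2 = r_squared(np.column_stack([decoding, fluency]),        # + fluency
                     comprehension)
print(f"step 1 R2 = {r2_step1:.3f}, delta R2 = {r2_step2 - r2_step1:.3f}")
```

A small delta R-squared at step 2, despite a sizable zero-order correlation for fluency, is exactly the signature reported in the abstract.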
Procedia PDF Downloads 265

1839 Controlling the Fluid Flow in Hydrogen Fuel Cells through Material Porosity Designs
Authors: Jamal Hussain Al-Smail
Abstract:
Hydrogen fuel cells (HFCs) are environmentally friendly energy-converter devices that convert the chemical energy of the reactants (oxygen and hydrogen) into electricity through electrochemical reactions. The electricity production of an HFC depends mainly on the oxygen distribution in the cathode gas diffusion layer (GDL). With a constant GDL porosity, the electrochemical reaction can vary greatly, reducing the cell's productivity and stability. We present a methodology for finding porosity designs of the diffusion layer that improve the oxygen distribution so as to yield a stable oxygen-hydrogen reaction. We first introduce a mathematical model involving the mass and momentum transport equations, in which a porosity function of the GDL is incorporated as a control for the fluid flow. We then derive numerical methods for solving the mathematical model. In conclusion, we present numerical results showing how to design the GDL porosity to obtain a uniform oxygen distribution.
Keywords: fuel cells, material porosity design, mathematical modeling, porous media
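One way to see why porosity acts as a flow control is through a standard porosity-permeability closure. The sketch below uses the Kozeny-Carman relation to show how Darcy velocity responds to porosity at a fixed pressure gradient; the relation and all numbers are illustrative and are not the porosity function optimized in the paper:

```python
# Kozeny-Carman: permeability k(eps) = eps^3 d^2 / (180 (1-eps)^2),
# then Darcy velocity u = k / mu * |dp/dx| for single-phase gas flow.
def permeability(eps, d=1e-5):
    """Kozeny-Carman permeability [m^2] for characteristic pore size d [m]."""
    return eps**3 * d**2 / (180.0 * (1.0 - eps)**2)

mu = 2e-5        # gas dynamic viscosity [Pa s]
dp_dx = 1e4      # pressure gradient magnitude [Pa/m]

for eps in (0.4, 0.6, 0.8):
    u = permeability(eps) / mu * dp_dx     # Darcy velocity [m/s]
    print(f"porosity {eps:.1f}: u = {u:.3e} m/s")
```

The steep, monotone dependence of velocity on porosity is what gives a spatially varying porosity function enough leverage to reshape the oxygen distribution.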
Procedia PDF Downloads 155

1838 2D CFD-PBM Coupled Model of Particle Growth in an Industrial Gas Phase Fluidized Bed Polymerization Reactor
Authors: H. Kazemi Esfeh, V. Akbari, M. Ehdaei, T. N. G. Borhani, A. Shamiri, M. Najafi
Abstract:
In an industrial fluidized bed polymerization reactor, the particle size distribution (PSD) plays a significant role in evaluating reactor efficiency. Computational fluid dynamics (CFD) models coupled with the population balance equation (CFD-PBM) have been extensively employed to investigate the flow behavior in poly-disperse multiphase fluidized bed reactors (FBRs) using the ANSYS Fluent code. In this study, an existing CFD-PBM/DQMOM coupled modeling framework has been used to highlight its potential for analyzing an industrial-scale gas phase polymerization reactor. The predicted results reveal acceptable agreement with the observed industrial data in terms of pressure drop and bed height. The simulation results also indicate that higher particle growth rates are achieved for bigger particles. Hence, the 2D CFD-PBM/DQMOM coupled model can be used as a reliable tool for analyzing and improving the design and operation of gas phase polymerization FBRs.
Keywords: computational fluid dynamics, population balance equation, fluidized bed polymerization reactor, direct quadrature method of moments
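As a toy stand-in for the population-balance side of such a model, the sketch below applies the standard method of moments to a pure-growth population balance with a size-independent growth rate, dn/dt + G ∂n/∂L = 0, which closes exactly as dm_k/dt = k · G · m_{k−1}. This is far simpler than DQMOM (no quadrature nodes, no coupling to CFD), and the growth rate and initial moments are invented, not reactor data:

```python
from scipy.integrate import solve_ivp

# Moment equations for constant-rate growth: dm_k/dt = k * G * m_{k-1}.
G = 1e-9                        # linear growth rate [m/s]
N0, L0 = 1e6, 50e-6             # number density [1/m^3], initial mean size [m]
m0 = [N0, N0 * L0, 1.1 * N0 * L0**2]   # moments 0, 1, 2 of the initial PSD

def rhs(t, m):
    return [0.0, G * m[0], 2.0 * G * m[1]]

sol = solve_ivp(rhs, (0.0, 3600.0), m0, rtol=1e-10, atol=1e-12)
L_mean = sol.y[1, -1] / sol.y[0, -1]    # number-mean size after 1 h
print(f"number-mean size after 1 h: {L_mean * 1e6:.1f} um")
```

Tracking a handful of moments instead of the full PSD is the same economy that DQMOM exploits inside each CFD cell.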
Procedia PDF Downloads 369

1837 Modeling the Effect of Scale Deposition on Heat Transfer in Desalination Multi-Effect Distillation Evaporators
Authors: K. Bourouni, M. Chacha, T. Jaber, A. Tchantchane
Abstract:
In Multi-Effect Distillation (MED) desalination evaporators, the scale deposited outside the tubes presents a barrier to heat transfer, reducing the global heat transfer coefficient and causing a decrease in water production; hence a loss of efficiency and an increase in operating and maintenance costs. Scale removal (by acid cleaning) is the main maintenance operation and constitutes the major reason for periodic plant shutdowns. A better understanding of scale deposition mechanisms will lead to an accurate determination of the variation of scale thickness around the tubes and improved accuracy of the overall heat transfer coefficient calculation. In this paper, a coupled heat transfer and calcium carbonate scale deposition model on a horizontal tube bundle is presented. The developed tool is used to determine precisely the heat transfer area, leading to a significant cost reduction for a given water production capacity. Simulations are carried out to investigate the influence of different parameters, such as water salinity and temperature, on the heat transfer.
Keywords: multi-effect-evaporator, scale deposition, water desalination, heat transfer coefficient
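The basic mechanism, scale acting as an added conduction resistance in series with the film resistances, can be shown in a few lines. This is a flat-wall resistance sketch with illustrative coefficients, not the paper's coupled deposition model (which resolves scale thickness variation around the tube bundle):

```python
# Overall heat transfer coefficient with a scale layer in series:
# 1/U = 1/h_i + delta_scale/k_scale + 1/h_o  (tube-wall resistance neglected).
h_i = 5000.0      # inside film coefficient [W/m^2 K]
h_o = 8000.0      # outside (condensing) film coefficient [W/m^2 K]
k_scale = 2.2     # assumed CaCO3 scale conductivity [W/m K]

def U(scale_thickness):
    """Overall coefficient [W/m^2 K] for a scale layer of given thickness [m]."""
    return 1.0 / (1.0 / h_i + scale_thickness / k_scale + 1.0 / h_o)

U_clean = U(0.0)
U_fouled = U(0.5e-3)          # 0.5 mm of scale
print(f"clean U = {U_clean:.0f} W/m2K, fouled U = {U_fouled:.0f} W/m2K")
```

Even half a millimetre of scale roughly matches the film resistances themselves, which is why predicting the scale thickness accurately matters for sizing the heat transfer area.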
Procedia PDF Downloads 154

1836 Life Cycle Assessment: Drinking Glass Systems
Authors: Devina Jain
Abstract:
The choice between single-use drinking cups and reusable glasses matters to our lifestyles and, hence, to the environment. This study compares three systems, a disposable paper cup, a disposable plastic cup, and a reusable stainless steel cup or glass, with respect to their effect on the environment, to find out which one is more advantageous for reducing environmental impact. Life Cycle Assessment was conducted using the modeling software Umberto NXT Universal (Version 7.1). For the purpose of this study, the cradle-to-grave approach was considered. Results showed that cleaning has a very strong influence on the environmental burden of these drinking systems, with a contribution of up to 90 to 100%. Thus, the burden is determined by the way in which the utensils are washed and how much water is consumed. Washing may seem like a small, insignificant daily practice. In the short term, paper and plastic cups would seem the better idea, since they are easy to acquire and do not need to be stored, but in the long run, steel cups will have less environmental impact. However, if the frequency of use and the number of glasses employed per use are significant in deciding the appropriateness of usage, it is better to use disposable cups and glasses.
Keywords: disposable glass, life cycle assessment, paper, plastic, reusable glass, stainless steel
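The short-term versus long-run trade-off above is essentially a break-even calculation: a reusable cup carries a large one-off production impact plus a per-wash impact, while a disposable carries a fixed impact per use. The sketch below finds the break-even number of uses; all impact values are invented placeholders, not the Umberto results:

```python
# Break-even uses for a reusable cup vs a disposable, in arbitrary
# impact units. All three numbers are illustrative assumptions.
impact_steel_production = 1.0     # one-off burden of making the steel cup
impact_per_wash = 0.02            # washing burden per use (water, energy)
impact_per_disposable = 0.05      # burden of one single-use cup

def breakeven_uses():
    """Smallest n at which the reusable cup's cumulative impact wins."""
    n = 1
    while impact_steel_production + n * impact_per_wash > n * impact_per_disposable:
        n += 1
    return n

print(breakeven_uses())   # -> 34
```

With these placeholder numbers the steel cup pays off after a few dozen uses; note that if the per-wash impact approached the per-disposable impact, the break-even point would recede toward infinity, which is the study's point about how dominant the washing stage is.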
Procedia PDF Downloads 341

1835 Contextual Toxicity Detection with Data Augmentation
Authors: Julia Ive, Lucia Specia
Abstract:
Understanding and detecting toxicity is an important problem in supporting safer human interactions online. Our work focuses on contextual toxicity detection, where automated classifiers are tasked with determining whether a short textual segment (usually a sentence) is toxic within its conversational context. We use “toxicity” as an umbrella term for a number of variants commonly named in the literature, including hate, abuse, and offence, among others. Detecting toxicity in context is a non-trivial problem and has been addressed by very few previous studies. These studies analysed the influence of conversational context on human perception of toxicity in controlled experiments and concluded that humans rarely change their judgements in the presence of context. They also evaluated contextual detection models based on state-of-the-art Deep Learning and Natural Language Processing (NLP) techniques. Counterintuitively, they reached the general conclusion that computational models tend to suffer performance degradation in the presence of context. We challenge these empirical observations by devising better contextual predictive models that also rely on NLP data augmentation techniques to create larger and better data. In our study, we start by further analysing the human perception of toxicity in conversational data (i.e., tweets), in the absence versus presence of context, in this case, previous tweets in the same conversational thread. We observed that the conclusions of previous work on human perception are mainly due to data issues: the contextual data available do not provide sufficient evidence that context is indeed important (even for humans). The data problem is common in current toxicity datasets: cases labelled as toxic are either obviously toxic (i.e., overt toxicity with swear, racist, etc.
words), and thus context is not needed for a decision, or are ambiguous, vague, or unclear even in the presence of context; in addition, the data contain labelling inconsistencies. To address this problem, we propose to automatically generate contextual samples where toxicity is not obvious (i.e., covert cases) without context, or where different contexts can lead to different toxicity judgements for the same tweet. We generate toxic and non-toxic utterances conditioned on the context or on target tweets using a range of techniques for controlled text generation (e.g., Generative Adversarial Networks and steering techniques). Regarding the contextual detection models, we posit that their poor performance is due to limitations both in the data they are trained on (the same problems stated above) and in the architectures they use, which are not able to leverage context in effective ways. To improve on that, we propose text classification architectures that take the hierarchy of conversational utterances into account. In experiments benchmarking our models against previous ones on existing and automatically generated data, we show that both data and architectural choices are very important. Our model achieves substantial performance improvements compared to baselines that are non-contextual, or contextual but agnostic of the conversation structure.
Keywords: contextual toxicity detection, data augmentation, hierarchical text classification models, natural language processing
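The core point, that identical target texts can carry different labels depending on the parent utterance, so a target-only classifier cannot separate them while a context-aware one can, fits in a tiny sketch. This is a crude bag-of-words stand-in (not the hierarchical architecture proposed in the paper), and the four examples are invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Pairs share the same target text but differ in label depending on context.
data = [
    ("you are a genius", "great job on the release", 0),
    ("you are a genius", "you broke prod again", 1),      # sarcastic, toxic in context
    ("thanks a lot", "here is the fix you asked for", 0),
    ("thanks a lot", "i deleted your branch", 1),
]
# A target-only model sees two identical inputs with opposite labels and is
# stuck at chance; concatenating the context restores separability.
texts = [f"{target} [SEP] {context}" for target, context, _ in data]
labels = [y for _, _, y in data]

X = TfidfVectorizer().fit_transform(texts)
clf = LogisticRegression().fit(X, labels)
print("training accuracy with context features:", clf.score(X, labels))
```

Real conversation-structure-aware models encode the target and its thread separately rather than concatenating strings, but the separability argument is the same.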
Procedia PDF Downloads 172

1834 Analyzing Data Protection in the Era of Big Data under the Framework of Virtual Property Layer Theory
Authors: Xiaochen Mu
Abstract:
Data rights confirmation, as a key legal issue in the development of the digital economy, is undergoing a transition from a traditional rights paradigm to a more complex private-economic paradigm. In this process, data rights confirmation has evolved from a simple claim of rights to a complex structure encompassing multiple dimensions of personality rights and property rights. Current data rights confirmation practices are primarily reflected in two models: holistic rights confirmation and process rights confirmation. The holistic rights confirmation model continues the traditional "one object, one right" theory, while the process rights confirmation model, through contractual relationships in the data processing process, recognizes rights that are more adaptable to the needs of data circulation and value release. In the design of the data property rights system, there is a hierarchical characteristic aimed at decoupling from raw data to data applications through horizontal stratification and vertical staging. This design not only respects the ownership rights of data originators but also, based on the usufructuary rights of enterprises, constructs a corresponding rights system for different stages of data processing activities. The subjects of data property rights include both data originators, such as users, and data producers, such as enterprises, who enjoy different rights at different stages of data processing. The intellectual property rights system, with the mission of incentivizing innovation and promoting the advancement of science, culture, and the arts, provides a complete set of mechanisms for protecting innovative results. 
However, unlike traditional private property rights, the granting of intellectual property rights is not an end in itself; the purpose of the intellectual property system is to balance the exclusive rights of rights holders with the prosperity and long-term development of public learning and of the entire field of science, culture, and the arts. Therefore, the intellectual property granting mechanism provides both protection and limitations for the rights holder. This aligns well with the dual attributes of data. In terms of achieving the protection of data property rights, the granting of intellectual property rights is an important institutional choice that can enhance the effectiveness of the data property exchange mechanism. Although this is not the only path, granting data property rights within the framework of the intellectual property rights system helps to establish fundamental legal relationships and rights confirmation mechanisms and is more compatible with the classification and grading system of data. The modernity of the intellectual property rights system allows it to adapt to the needs of big data technology development through special clauses or industry guidelines, thus promoting the comprehensive advancement of data intellectual property legislation. This paper analyzes data protection under the virtual property layer theory and a two-fold virtual property rights system. Based on the “bundle of rights” theory, it establishes a specific three-level structure of data rights. The paper analyzes the cases Google v. Vidal-Hall, Halliday v Creation Consumer Finance, Douglas v Hello Limited, Campbell v MGN, and Imerman v Tchenquiz, and concludes that recognizing property rights over personal data and protecting data under the framework of intellectual property will be beneficial for establishing the tort of misuse of personal information.
Keywords: data protection, property rights, intellectual property, Big data
Procedia PDF Downloads 42

1833 The Moderating Effect of Organizational Commitment in the Relationship between Emotional Intelligence and Work Outcomes
Authors: Ali Muhammad
Abstract:
The purpose of this study is to determine the moderating effect of organizational commitment on the relationship between emotional intelligence and work outcomes. The study presents a new model to explain the mechanism through which emotional intelligence influences work outcomes. The model includes emotional intelligence as an independent variable, organizational commitment as a moderating variable, and work performance, job involvement, job satisfaction, organizational citizenship behavior, and intention to leave as dependent variables. A sample of 208 employees working in eight Kuwaiti business organizations (from the industrial, banking, service, and financial sectors) was surveyed, and the data were analyzed using structural equation modeling. Results indicate that emotional intelligence is positively associated with organizational commitment and that the positive effects of emotional intelligence on job involvement and organizational citizenship behavior are moderated by organizational commitment. The results of the current study are discussed and compared to the results of previous studies in this area. Finally, directions for future research are suggested.
Keywords: emotional intelligence, organizational commitment, job involvement, job satisfaction, organizational citizenship behavior, intention to leave
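Moderation of the kind tested above is, in regression terms, an interaction effect. The sketch below probes it with ordinary least squares, a simple analogue of the moderated paths in the SEM; the data are simulated with a built-in interaction of 0.4, and none of the numbers come from the Kuwaiti sample:

```python
import numpy as np

# OLS with an EI x commitment interaction term on simulated data.
rng = np.random.default_rng(1)
n = 208
ei = rng.normal(size=n)                 # emotional intelligence (standardized)
commit = rng.normal(size=n)             # organizational commitment
ocb = 0.3 * ei + 0.2 * commit + 0.4 * ei * commit + rng.normal(size=n)

X = np.column_stack([np.ones(n), ei, commit, ei * commit])
beta, *_ = np.linalg.lstsq(X, ocb, rcond=None)
print(f"estimated interaction coefficient = {beta[3]:.2f}")   # near 0.4
```

A significant interaction coefficient means the EI-to-outcome slope changes with the level of commitment, which is exactly what "organizational commitment moderates the effect" asserts.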
Procedia PDF Downloads 320

1832 The Modeling and Effectiveness Evaluation for Vessel Evasion to Acoustic Homing Torpedo
Authors: Li Minghui, Min Shaorong, Zhang Jun
Abstract:
This paper studies the operational efficiency of a surface warship's motorized evasion of an acoustic homing torpedo. It develops, in order, a trajectory model, a self-guidance detection model, a vessel evasion model, and an anti-torpedo error model, all in three-dimensional space, to make up for the deficiency of previous research that analyzed the confrontation in two dimensions. Then, using the Monte Carlo method, it simulates the evasion confrontation process in MATLAB. Finally, it quantitatively analyzes the main factors that determine the vessel's survival probability. The results show that the evasion relative bearing and speed significantly affect the vessel's survival probability. Thus, choosing an appropriate evasion relative bearing and speed according to the torpedo's alarm range and alarm relative bearing, improving the alarm range and positioning accuracy, and reducing the response time against the torpedo will significantly improve the vessel's survival probability.
Keywords: acoustic homing torpedo, vessel evasion, Monte Carlo method, torpedo defense, vessel's survival probability
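A drastically simplified Monte Carlo sketch of such a duel, reduced to the horizontal plane: the vessel holds a fixed course relative to the torpedo bearing, and it survives a trial if the torpedo exhausts its range before closing the alarm distance. All speeds, ranges, and the alarm-range distribution are invented for illustration (bearing 0 here means fleeing directly away), not the paper's 3D models:

```python
import numpy as np

rng = np.random.default_rng(42)

def survives(evasion_bearing_deg, n_trials=10_000,
             v_ship=15.0, v_torp=45.0, torp_range=6000.0):
    """Estimate survival probability for a given evasion bearing [deg]."""
    bearing = np.radians(evasion_bearing_deg)
    r0 = rng.normal(4000.0, 500.0, n_trials)      # random alarm range [m]
    closing = v_torp - v_ship * np.cos(bearing)   # line-of-sight closing speed
    time_to_intercept = r0 / closing
    torp_endurance = torp_range / v_torp          # run time before fuel-out
    return float(np.mean(time_to_intercept > torp_endurance))

for b in (0, 90, 180):
    print(f"evasion bearing {b:3d} deg: P(survive) = {survives(b):.3f}")
```

Even this caricature reproduces the abstract's qualitative conclusion: survival probability depends strongly on the evasion relative bearing, with the tail-chase geometry doing best.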
Procedia PDF Downloads 458

1831 A Dynamic Approach for Evaluating the Climate Change Risks on Building Performance
Authors: X. Lu, T. Lu, S. Javadi
Abstract:
A simple dynamic approach is presented for analyzing the thermal and moisture dynamics of buildings, which is of particular relevance to understanding climate change impacts on buildings, including the assessment of risks and the application of resilience strategies. To demonstrate the proposed modeling methodology, to verify the model, and to show that wooden materials provide a mechanism that can facilitate the reduction of moisture risks and be more resilient to global warming, a wooden church equipped with high-precision measurement systems was taken as a test building for full-scale time-series measurements. Sensitivity analyses indicate a high degree of accuracy in the model predictions of the indoor environment. The model is then applied to a projection of the future indoor climate, aiming to identify significant environmental factors, the changing temperature and humidity, and effective responses to climate change impacts. The paper suggests that wooden building materials offer an effective and resilient response to anticipated future climate changes.
Keywords: dynamic model, forecast, climate change impact, wooden structure, buildings
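The simplest dynamic building model of this family is a single lumped thermal mass coupled to the outdoor air, dT/dt = (T_out − T)/τ. The sketch below steps it with explicit Euler under a daily outdoor cycle; the time constant and temperatures are illustrative only, and moisture dynamics (central to the paper) are omitted:

```python
import numpy as np

tau = 24 * 3600.0                 # assumed building time constant [s]
dt = 600.0                        # time step [s]
hours = np.arange(0.0, 72.0, dt / 3600.0)
T_out = 10.0 + 5.0 * np.sin(2 * np.pi * hours / 24.0)   # daily cycle [deg C]

T = np.empty_like(T_out)
T[0] = 12.0                       # initial indoor temperature [deg C]
for k in range(len(T) - 1):       # explicit Euler: dT/dt = (T_out - T) / tau
    T[k + 1] = T[k] + dt / tau * (T_out[k] - T[k])

swing_out = np.ptp(T_out)
swing_in = np.ptp(T[144:])        # after a one-day spin-up
print(f"outdoor swing {swing_out:.1f} C, indoor swing {swing_in:.1f} C")
```

The heavily damped indoor swing illustrates why slow thermal mass matters for resilience: short outdoor extremes barely register indoors, while long-term shifts in the mean pass through, which is exactly what a future-climate projection probes.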
Procedia PDF Downloads 155

1830 Analysis of Overall Thermo-Elastic Properties of Random Particulate Nanocomposites with Various Interphase Models
Authors: Lidiia Nazarenko, Henryk Stolarski, Holm Altenbach
Abstract:
In the paper, a (hierarchical) approach to the analysis of the thermo-elastic properties of random composites with interphases is outlined and illustrated. It is based on a statistical homogenization technique, the method of conditional moments, combined with the recently introduced notion of the energy-equivalent inhomogeneity, which is extended here to include thermal effects. After an exposition of the general principles, the approach is applied to the investigation of the effective thermo-elastic properties of a material with randomly distributed nanoparticles. The basic idea of the equivalent inhomogeneity is to replace the inhomogeneity and its surrounding interphase by a single equivalent inhomogeneity with a constant stiffness tensor and coefficient of thermal expansion, combining the thermal and elastic properties of both. The equivalent inhomogeneity is then perfectly bonded to the matrix, which allows the analysis of composites with interphases using techniques devised for problems without interphases. From the mechanical viewpoint, the definition of the equivalent inhomogeneity is based on Hill's energy equivalence principle, applied to the problem consisting only of the original inhomogeneity and its interphase. It is more general than definitions proposed in the past in that, conceptually and practically, it allows consideration of inhomogeneities of various shapes and various models of interphases. This is illustrated for spherical particles with two models of interphases: the Gurtin-Murdoch material surface model and the spring layer model. The resulting equivalent inhomogeneities are subsequently used to determine the effective thermo-elastic properties of randomly distributed particulate composites. The effective stiffness tensor and coefficient of thermal expansion of the material with the so-defined equivalent inhomogeneities are determined by the method of conditional moments.
Closed-form expressions for the effective thermo-elastic parameters of a composite consisting of a matrix and randomly distributed spherical inhomogeneities are derived for the bulk and shear moduli as well as for the coefficient of thermal expansion. The dependence of the effective parameters on the interphase properties is included in the resulting expressions, analytically exhibiting the nature of the size effects in nanomaterials. As a numerical example, an epoxy matrix with randomly distributed spherical glass particles is investigated. The dependence of the effective bulk and shear moduli, as well as of the effective thermal expansion coefficient, on the particle volume fraction (for different radii of nanoparticles) and on the nanoparticle radius (for a fixed volume fraction of nanoparticles) for different interphase models is compared to and discussed in the context of other theoretical predictions. Possible applications of the proposed approach to short-fiber composites with various types of interphases are discussed.
Keywords: effective properties, energy equivalence, Gurtin-Murdoch surface model, interphase, random composites, spherical equivalent inhomogeneity, spring layer model
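The closed-form expressions themselves are not reproduced in the abstract, but the flavor of such effective-property estimates can be sketched with classical results for spherical particles: a Mori-Tanaka (Hashin-Shtrikman lower-bound) estimate of the effective bulk modulus, and Levin's exact relation linking it to the effective thermal expansion coefficient. This is an illustrative stand-in, not the method of conditional moments, and the epoxy/glass property values are assumed handbook-style numbers:

```python
def mori_tanaka_bulk(K_m, G_m, K_i, f):
    """Effective bulk modulus for spherical particles (Mori-Tanaka /
    Hashin-Shtrikman lower-bound form); perfect bonding, no interphase."""
    return K_m + f * (K_i - K_m) / (
        1.0 + (1.0 - f) * (K_i - K_m) / (K_m + 4.0 * G_m / 3.0))

def levin_cte(K_m, K_i, K_eff, a_m, a_i):
    """Levin's relation: effective CTE of a two-phase composite given K_eff."""
    return a_m + (a_i - a_m) * (1.0 / K_eff - 1.0 / K_m) / (1.0 / K_i - 1.0 / K_m)

# Epoxy matrix with glass particles (illustrative, assumed property values).
K_m, G_m, a_m = 4.0e9, 1.5e9, 60e-6   # matrix: Pa, Pa, 1/K
K_i, a_i = 43.0e9, 8e-6               # particles: Pa, 1/K
f = 0.3                               # particle volume fraction
K_eff = mori_tanaka_bulk(K_m, G_m, K_i, f)
a_eff = levin_cte(K_m, K_i, K_eff, a_m, a_i)
print(K_eff, a_eff)
```

As expected for a stiff, low-expansion filler, the estimate stiffens the matrix and lowers the composite CTE, always staying between the two phase values.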
Procedia PDF Downloads 187
1829 An Evolutionary Multi-Objective Optimization for Airport Gate Assignment Problem
Authors: Seyedmirsajad Mokhtarimousavi, Danial Talebi, Hamidreza Asgari
Abstract:
The Gate Assignment Problem (GAP) is one of the most substantial issues in airport operation. In principle, GAP intends to maintain the maximum capacity of the airport through the best possible allocation of resources (gates) in order to reach the optimum outcome. The problem involves a wide range of dependent and independent resources and their limitations, which add to the complexity of GAP from both theoretical and practical perspectives. In this study, GAP was mathematically formulated as a three-objective problem. The preliminary goal of the multi-objective formulation was to address a higher number of objectives that can be simultaneously optimized and therefore increase the practical efficiency of the final solution. The problem is solved by applying the second version of the Non-dominated Sorting Genetic Algorithm (NSGA-II). Results showed that the proposed mathematical model could address most of the major criteria in the decision-making process in airport management, in terms of minimizing both airport/airline cost and passenger walking distance and time. Moreover, the proposed approach could reliably find acceptable feasible solutions.
Keywords: airport management, gate assignment problem, mathematical modeling, genetic algorithm, NSGA-II
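The core of NSGA-II is non-dominated sorting of objective vectors into Pareto fronts. A minimal sketch (a simple quadratic-time version, not the authors' implementation) on toy three-objective gate-assignment cost vectors:

```python
def dominates(a, b):
    """a dominates b (minimization): no worse in every objective, better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(pop):
    """Return fronts as lists of indices; front 0 is the Pareto-optimal set."""
    fronts, remaining = [], set(range(len(pop)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(pop[j], pop[i]) for j in remaining if j != i)]
        fronts.append(sorted(front))
        remaining -= set(front)
    return fronts

# Hypothetical objective vectors: (airport cost, airline cost, walking distance).
pop = [(3, 2, 5), (1, 4, 4), (2, 3, 6), (4, 4, 7)]
print(non_dominated_sort(pop))  # → [[0, 1, 2], [3]]
```

NSGA-II then selects by front rank, breaking ties within a front by crowding distance to preserve spread along the Pareto surface.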
Procedia PDF Downloads 302
1828 Techno-Economic Assessment of Distributed Heat Pumps Integration within a Swedish Neighborhood: A Cosimulation Approach
Authors: Monica Arnaudo, Monika Topel, Bjorn Laumert
Abstract:
Within the Swedish context, the current trend of relatively low electricity prices promotes the electrification of the energy infrastructure. The residential heating sector takes part in this transition through a proposed switch from a centralized district heating system towards a distributed heat pumps-based setting. When it comes to urban environments, two issues arise. The first, seen from an electricity-sector perspective, is that existing networks are limited with regard to their installed capacities. Additional electric loads, such as heat pumps, can cause severe overloads on crucial network elements. The second, seen from a heating-sector perspective, is that indoor comfort conditions can become difficult to maintain when the operation of the heat pumps is limited by a risk of overloading the distribution grid. Furthermore, the uncertainty of future electricity market prices introduces an additional variable. This study aims at assessing the extent to which distributed heat pumps can penetrate an existing heat energy network while respecting the technical limitations of the electricity grid and the thermal comfort levels in the buildings. In order to account for the multi-disciplinary nature of this research question, a cosimulation modeling approach was adopted. In this way, each energy technology is modeled in its customized simulation environment. As part of the cosimulation methodology, a steady-state power flow analysis in pandapower was used for modeling the electrical distribution grid, a thermal balance model of a reference building was implemented in EnergyPlus to account for space heating, and a fluid-cycle model of a heat pump was implemented in JModelica to account for the actual heating technology.
With the models set in place, different scenarios based on forecasted electricity market prices were developed for both present and future conditions of Hammarby Sjöstad, a neighborhood located in the south-east of Stockholm (Sweden). For each scenario, the technical and comfort conditions were assessed. Additionally, the average cost of heat generation was estimated in terms of the levelized cost of heat. This indicator enables a techno-economic comparison among the different scenarios. In order to evaluate the levelized cost of heat, a yearly performance simulation of the energy infrastructure was implemented. The scenarios based on current electricity prices show that distributed heat pumps can replace the district heating system by covering up to 30% of the heating demand. By lowering the minimum accepted indoor temperature of the apartments by 2°C, this level of penetration can increase up to 40%. Within the future scenarios, if electricity prices increase, as is most likely expected within the next decade, the penetration of distributed heat pumps may be limited to 15%. In terms of the levelized cost of heat, a residential heat pump technology becomes competitive only within a scenario of decreasing electricity prices. In this case, a district heating system is characterized by an average cost of heat generation 7% higher compared to a distributed heat pumps option.
Keywords: cosimulation, distributed heat pumps, district heating, electrical distribution grid, integrated energy systems
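The levelized cost of heat used for the comparison can be sketched as discounted lifetime cost divided by discounted lifetime heat output. The capex, opex, and discount-rate numbers below are illustrative assumptions, not the study's inputs:

```python
def levelized_cost_of_heat(capex, opex_per_year, heat_per_year_mwh, rate, years):
    """LCOH = (capex + discounted opex) / discounted heat output, per MWh."""
    disc = [(1.0 + rate) ** -t for t in range(1, years + 1)]
    cost = capex + sum(opex_per_year * d for d in disc)
    heat = sum(heat_per_year_mwh * d for d in disc)
    return cost / heat

# Hypothetical heat-pump vs. district-heating comparison (assumed numbers).
lcoh_hp = levelized_cost_of_heat(capex=120_000, opex_per_year=9_000,
                                 heat_per_year_mwh=300, rate=0.05, years=20)
lcoh_dh = levelized_cost_of_heat(capex=40_000, opex_per_year=16_000,
                                 heat_per_year_mwh=300, rate=0.05, years=20)
print(round(lcoh_hp, 1), round(lcoh_dh, 1))
```

The structure makes the trade-off visible: the capital-heavy heat pump option is penalized up front, while the opex-heavy district heating option is penalized through the discounted running costs.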
Procedia PDF Downloads 154
1827 Proportionally Damped Finite Element State-Space Model of Composite Laminated Plate with Localized Interface Degeneration
Authors: Shi Qi Koo, Ahmad Beng Hong Kueh
Abstract:
In the present work, a finite element formulation for investigating the effects of localized interfacial degeneration on the dynamic behavior of a [90˚/0˚] laminated composite plate is developed, employing the state-space technique. The stiffness of the laminate is determined by assembling the stiffnesses of sub-elements. This includes the introduction of an interface layer adopting a virtually zero-thickness formulation to model the interfacial degeneration. Also, a kinematically consistent mass matrix and proportional damping have been formulated to complete the free vibration governing expression. To simulate the interfacial degeneration of the laminate, the degenerated areas are defined from the center propagating outwards in a localized manner. It is found that the natural frequency, damped frequency, and damping ratio of the plate decrease as the degenerated area of the interface increases. On the contrary, the loss factor increases correspondingly.
Keywords: dynamic finite element, localized interface degeneration, proportional damping, state-space modeling
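The state-space treatment of proportional damping can be illustrated on a small system: with C = aM + bK, the first-order state matrix yields complex eigenvalues from which the natural frequency, damped frequency, and damping ratio follow directly. The two-DOF matrices below are assumed for illustration and are not the laminate model:

```python
import numpy as np

# Two-DOF mass-spring sketch with proportional (Rayleigh) damping C = a*M + b*K.
M = np.diag([2.0, 1.0])
K = np.array([[600.0, -200.0], [-200.0, 200.0]])
a, b = 0.5, 1e-3
C = a * M + b * K

# First-order state-space form x' = A x with state x = [u, u_dot].
n = M.shape[0]
Minv = np.linalg.inv(M)
A = np.block([[np.zeros((n, n)), np.eye(n)],
              [-Minv @ K, -Minv @ C]])

lam = np.linalg.eigvals(A)
lam = lam[np.imag(lam) > 0]     # keep one eigenvalue of each conjugate pair
wn = np.abs(lam)                # undamped natural frequencies (rad/s)
zeta = -np.real(lam) / wn       # modal damping ratios
wd = np.imag(lam)               # damped frequencies (rad/s)
print(np.sort(wn), np.sort(zeta), np.sort(wd))
```

Because the damping is proportional, the modes decouple exactly and each eigenvalue has the classical form −ζωₙ ± iω_d with ω_d = ωₙ√(1 − ζ²), which matches the trend reported in the abstract (ω_d below ωₙ for every underdamped mode).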
Procedia PDF Downloads 300
1826 Data Mining Meets Educational Analysis: Opportunities and Challenges for Research
Authors: Carla Silva
Abstract:
Recent developments in information and communication technology enable us to acquire, collect, and analyse data in various fields of socioeconomic-technological systems. Along with the increase of economic globalization and the evolution of information technology, data mining has become an important approach for economic data analysis. As a result, there has been a critical need for automated approaches to effective and efficient usage of massive amounts of educational data, in order to support institutions in strategic planning and investment decision-making. In this article, we address data from several different perspectives and define the application of data to the sciences. Many believe that 'big data' will transform business, government, and other aspects of the economy. We discuss how new data may impact educational policy and educational research. Large-scale administrative data sets and proprietary private sector data can greatly improve the way we measure, track, and describe educational activity and educational impact. We also consider whether the big data predictive modeling tools that have emerged in statistics and computer science may prove useful in educational research and, furthermore, in economics. Finally, we highlight a number of challenges and opportunities for future research.
Keywords: data mining, research analysis, investment decision-making, educational research
Procedia PDF Downloads 361
1825 An Experimental Investigation on the Droplet Behavior Impacting a Hot Surface above the Leidenfrost Temperature
Authors: Khaleel Sami Hamdan, Dong-Eok Kim, Sang-Ki Moon
Abstract:
An appropriate model to predict the size of the droplets resulting from break-up on structures will help in a better understanding and modeling of the two-phase flow calculations in the simulation of a reactor core loss-of-coolant accident (LOCA). The behavior of a droplet impacting a hot surface above the Leidenfrost temperature was investigated. Droplets of known size and velocity impacted an inclined plate at high temperature, and the behavior of the droplets was observed by a high-speed camera. It was found that, for droplets with a Weber number higher than a certain value, the higher the Weber number of the droplet, the smaller the secondary droplets. The COBRA-TF model over-predicted the measured secondary droplet sizes obtained in the present experiment. A simple model for the secondary droplet size was proposed using the mass conservation equation. The maximum spreading diameter of the droplets was also compared to previous correlations, and fairly good agreement was found. A better prediction of the heat transfer in the case of a LOCA can be obtained with the presented model.
Keywords: break-up, droplet, impact, inclined hot plate, Leidenfrost temperature, LOCA
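The Weber number that governs the break-up regime, and the mass-conservation argument behind a secondary-droplet-size model, can be sketched as follows; the fluid properties and fragment count are illustrative assumptions, not the experimental values:

```python
def weber_number(rho, velocity, diameter, sigma):
    """We = rho * v^2 * D / sigma — ratio of impact inertia to surface tension."""
    return rho * velocity ** 2 * diameter / sigma

def secondary_diameter(parent_diameter, n_fragments):
    """Mass conservation for break-up into n equal spherical fragments:
    n * d^3 = D^3  =>  d = D / n^(1/3)."""
    return parent_diameter / n_fragments ** (1.0 / 3.0)

# Water droplet at room temperature (assumed illustrative values).
We = weber_number(rho=998.0, velocity=2.0, diameter=2e-3, sigma=0.072)
d = secondary_diameter(parent_diameter=2e-3, n_fragments=8)
print(round(We, 1), d)  # break-up into 8 fragments halves the diameter
```

The cube-root scaling makes the abstract's trend plausible: a higher Weber number drives break-up into more fragments, and the secondary diameter shrinks only as n^(−1/3).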
Procedia PDF Downloads 400
1824 Optimizing Operation of Photovoltaic System Using Neural Network and Fuzzy Logic
Authors: N. Drir, L. Barazane, M. Loudini
Abstract:
It is well known that photovoltaic (PV) cells are an attractive source of energy. Abundant and ubiquitous, this source is one of the important renewable energy sources that have been increasing worldwide year by year. However, in the V-P characteristic curve of a PV generator, there is a maximum point called the maximum power point (MPP), which depends closely on the variation of atmospheric conditions and the rotation of the earth. In fact, such output characteristics are nonlinear and change with variations of temperature and irradiation, so a controller called a maximum power point tracker (MPPT) is needed to extract the maximum power at the terminals of the photovoltaic generator. In this context, the authors propose to study the modeling of a photovoltaic system and to find an appropriate method for optimizing the operation of the PV generator using two intelligent controllers to track this point. The first is based on artificial neural networks and the second on fuzzy logic. After the conception and integration of each controller in the global process, their performances are examined and compared through a series of simulations. Both controllers demonstrate good tracking of the MPP compared with the other methods proposed to date.
Keywords: maximum power point tracking, neural networks, photovoltaic, P&O
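As a baseline for the two intelligent controllers, the conventional perturb-and-observe (P&O) scheme named in the keywords can be sketched on a toy P-V curve; the PV model below is an assumed illustrative shape, not the paper's generator model:

```python
import math

def pv_power(v, v_oc=40.0, i_sc=8.0):
    """Toy PV model: current collapses exponentially near open-circuit voltage."""
    if v >= v_oc:
        return 0.0
    i = i_sc * (1.0 - math.exp((v - v_oc) / 3.0))
    return max(v * i, 0.0)

def perturb_and_observe(v0=20.0, step=0.5, iters=200):
    """Classic P&O: keep perturbing in the direction that increased power."""
    v, p, direction = v0, pv_power(v0), +1.0
    for _ in range(iters):
        v_new = v + direction * step
        p_new = pv_power(v_new)
        if p_new < p:
            direction = -direction   # power dropped: reverse the perturbation
        v, p = v_new, p_new
    return v, p

v_mpp, p_mpp = perturb_and_observe()
print(round(v_mpp, 1), round(p_mpp, 1))
```

This also exposes P&O's known weakness that motivates the intelligent controllers: once near the MPP it oscillates around the optimum with amplitude set by the fixed step size.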
Procedia PDF Downloads 340
1823 A Tool for Assessing Performance and Structural Quality of Business Process
Authors: Mariem Kchaou, Wiem Khlif, Faiez Gargouri
Abstract:
Modeling business processes is an essential task when evaluating, improving, or documenting existing business processes. To be efficient in such tasks, a business process model (BPM) must have high structural quality and high performance. Evidently, evaluating the performance of a business process model is a necessary step to reduce time and cost, while assessing the structural quality aims to improve the understandability and modifiability of the BPMN model. To achieve these objectives, a set of structural and performance measures has been proposed. Given the diversity of measures, we propose a framework that integrates both structural and performance aspects for classifying them. Our measure classification is based on business process model perspectives (e.g., informational, functional, organizational, behavioral, and temporal) and the elements (activity, event, actor, etc.) involved in computing the measures. We then implement this framework in a tool for assessing the structural quality and performance of a business process. The tool helps designers select an appropriate subset of measures associated with the corresponding perspective and calculate and interpret their values in order to improve the structural quality and performance of the model.
Keywords: performance, structural quality, perspectives, tool, classification framework, measures
Procedia PDF Downloads 159
1822 Investigations of Inclusion Complexes of Imazapyr with 2-Hydroxypropyl(β/γ) Cyclodextrin Experimental and Molecular Modeling Approach
Authors: Abdalla A. Elbashir, Maali Saad Mokhtar, FakhrEldin O. Suliman
Abstract:
The inclusion complexes of imazapyr (IMA) with 2-hydroxypropyl-(β/γ)-cyclodextrins (HP-β/γ-CD) have been studied in aqueous media and in the solid state. In this work, fluorescence spectroscopy, electrospray-ionization mass spectrometry (ESI-MS), and ¹H NMR were used to investigate and characterize the inclusion complexes of IMA with the cyclodextrins in solution. The solid-state complexes were obtained by freeze-drying and were characterized by Fourier transform infrared spectroscopy (FTIR) and powder X-ray diffraction (PXRD). The most predominant complexes of IMA with both hosts are the 1:1 guest:host complexes. The association constants of IMA-HP-β-CD and IMA-HP-γ-CD were 115 and 215 L mol⁻¹, respectively. Molecular dynamics (MD) simulations were used to monitor the mode of inclusion and to investigate the stability of these complexes in aqueous media at the atomistic level. The results obtained indicate that these inclusion complexes are highly stable in aqueous media, thereby corroborating the experimental results. Additionally, it has been demonstrated that, in addition to hydrophobic and van der Waals interactions, the presence of hydrogen bonding interactions of the type H---O and CH---O between the guest and the host has enhanced the stability of these complexes remarkably.
Keywords: imazapyr, inclusion complex, herbicides, 2-hydroxypropyl-β/γ-cyclodextrin
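For a 1:1 guest:host complex, an association constant can be extracted from fluorescence titration data via the Benesi-Hildebrand double-reciprocal treatment, where 1/(F − F₀) is linear in 1/[CD] and K equals intercept/slope. The sketch below fits synthetic, noise-free data generated with the reported K = 115 L mol⁻¹; the fluorescence parameters and concentrations are assumptions, not the paper's data:

```python
import numpy as np

# Benesi-Hildebrand for a 1:1 complex:
#   1/(F - F0) = 1/dF + 1/(K * dF * [CD])   =>   K = intercept / slope.
K_true, F0, dF = 115.0, 10.0, 50.0                   # K in L/mol; assumed params
cd = np.array([1e-3, 2e-3, 4e-3, 8e-3, 1.6e-2])     # host concentrations, mol/L
F = F0 + dF * K_true * cd / (1.0 + K_true * cd)      # ideal 1:1 binding isotherm

x = 1.0 / cd
y = 1.0 / (F - F0)
slope, intercept = np.polyfit(x, y, 1)               # linear double-reciprocal fit
K_fit = intercept / slope
print(round(K_fit, 1))  # → 115.0
```

With real titration data the same fit (or a direct nonlinear fit of the isotherm, which weights the points more evenly) recovers K from the measured intensities.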
Procedia PDF Downloads 173
1821 A Four Free Element Radiofrequency Coil with High B₁ Homogeneity for Magnetic Resonance Imaging
Authors: Khalid Al-Snaie
Abstract:
In this paper, the design and testing of a symmetrical radiofrequency prototype coil with high B₁ magnetic field homogeneity are presented. The developed coil comprises four tuned coaxial circular loops that can produce a relatively homogeneous radiofrequency field. In comparison with a standard Helmholtz pair, which provides 2nd-order homogeneity, it aims to provide fourth-order homogeneity of the B₁ field while preserving simplicity of implementation. Electrical modeling of the probe, including all couplings, is used to ensure these requirements. Results of comparison tests, in free space and in a spectro-imager, between a standard Helmholtz pair and the presented prototype coil are introduced. In terms of field homogeneity, an improvement of 30% is observed. Moreover, the proposed prototype coil possesses a better quality factor (+25% on average) and a noticeable improvement in sensitivity (+20%). Overall, this work, which includes both theoretical and experimental aspects, aims to contribute to the study and understanding of four-element radiofrequency (RF) systems derived from Helmholtz coils for Magnetic Resonance Imaging.
Keywords: B₁ homogeneity, MRI, NMR, radiofrequency, RF coil, free element systems
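The on-axis field of coaxial circular loops, which underlies both the Helmholtz pair and the four-loop design, follows from the Biot-Savart law and superposes loop by loop. The sketch below checks a plain Helmholtz pair; the 0.1 m radius and 1 A current are arbitrary illustrative values, not the prototype's dimensions:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (T*m/A)

def loop_axial_field(z, z0, radius, current=1.0):
    """On-axis field of a circular loop centered at z0:
    B = mu0 * I * R^2 / (2 * (R^2 + (z - z0)^2)^1.5)."""
    return MU0 * current * radius ** 2 / (
        2.0 * (radius ** 2 + (z - z0) ** 2) ** 1.5)

def coil_field(z, loops):
    """Superpose axial fields of coaxial loops given as (z0, radius, current)."""
    return sum(loop_axial_field(z, z0, r, i) for z0, r, i in loops)

# Helmholtz pair: two equal loops one radius apart -> 2nd-order homogeneity,
# i.e. the leading field deviation on axis is only fourth order in z/R.
R = 0.1
helmholtz = [(-R / 2, R, 1.0), (R / 2, R, 1.0)]
B0 = coil_field(0.0, helmholtz)
dev = abs(coil_field(0.01, helmholtz) - B0) / B0   # 1 cm off center
print(B0, dev)
```

Adding two more loops with suitably chosen spacings and currents is what lets a four-element system cancel the fourth-order term as well, which is the homogeneity gain the coil targets.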
Procedia PDF Downloads 93
1820 An Analytical Survey of Construction Changes: Gaps and Opportunities
Authors: Ehsan Eshtehardian, Saeed Khodaverdi
Abstract:
This paper surveys the studies on construction change and reveals some potential future works. A full-scale investigation of the change literature, including change definitions, types, causes and effects, and change management systems, is accomplished to explore some of the coming change trends. We attempt to pick out the critical works in each section to deduce a faithful timeline of construction changes. The findings show that the leap from best practice guides in the late 1990s and generic process models in the early 2000s to very advanced modeling environments in the mid-2000s and early 2010s has created gaps, along with opportunities, for change researchers to develop more accessible and applicable models. Another finding is that there is a compelling similarity between change and risk prediction models. Therefore, integrating these two concepts, specifically from a proactive management point of view, may lead to a synergy and help project teams avoid rework. Also, the findings show that the exploitation of cause-effect relationship models, in order to facilitate dispute resolution, seems to be an interesting field for future works.
Keywords: construction change, change management systems, dispute resolutions, change literature
Procedia PDF Downloads 297
1819 Advances and Challenges in Assessing Students’ Learning Competencies in 21st Century Higher Education
Authors: O. Zlatkin-Troitschanskaia, J. Fischer, C. Lautenbach, H. A. Pant
Abstract:
In 21st century higher education (HE), the diversity among students has increased in recent years due to internationalization and higher mobility. Offering and providing equal and fair opportunities based on students’ individual skills and abilities, instead of their social or cultural background, is one of the major aims of HE. In this context, valid, objective, and transparent assessments of students’ preconditions and academic competencies in HE are required. However, as analyses of the current state of research and practice show, a substantial research gap on assessment practices in HE still exists, calling for the development of effective solutions. These demands lead to significant conceptual and methodological challenges. Funded by the German Federal Ministry of Education and Research, the research program 'Modeling and Measuring Competencies in Higher Education – Validation and Methodological Challenges' (KoKoHs) focuses on addressing these challenges in HE assessment practice by modeling and validating objective test instruments. Comprising 16 cross-university collaborative projects, the Germany-wide research program contributes to bridging the research gap in current assessment research and practice by concentrating on practical and policy-related challenges of assessment in HE. In this paper, we present a differentiated overview of existing assessments in HE at the national and international level. Based on the state of research, we describe the theoretical and conceptual framework of the KoKoHs Program as well as the results of the validation studies, including their key outcomes. More precisely, this includes an insight into more than 40 developed assessments covering a broad range of transparent and objective methods for validly measuring domain-specific and generic knowledge and skills for five major study areas (Economics, Social Science, Teacher Education, Medicine, and Psychology).
Computer-, video-, and simulation-based instruments have been applied and validated to measure over 20,000 students at the beginning, middle, and end of their (bachelor and master) studies at more than 300 HE institutions throughout Germany or during their practical training phase, traineeship, or occupation. Focusing on the validity of the assessments, all test instruments have been analyzed comprehensively, using a broad range of methods and observing the validity criteria of the Standards for Educational and Psychological Testing developed by the American Educational Research Association, the American Psychological Association, and the National Council on Measurement in Education. The results of the developed assessments presented in this paper provide valuable outcomes for predicting students’ skills and abilities at the beginning and end of their studies as well as their learning development and performance. This allows for a differentiated view of the diversity among students. Based on the given research results, practical implications and recommendations are formulated. In particular, appropriate and effective learning opportunities for students can be created to support the learning development of students, promote their individual potential, and reduce knowledge and skill gaps. Overall, the presented research on competency assessment is highly relevant to national and international HE practice.
Keywords: 21st century skills, academic competencies, innovative assessments, KoKoHs
Procedia PDF Downloads 143
1818 Multiband Fractal Patch Antenna for Small Spacecraft of Earth Remote Sensing
Authors: Beibit Karibayev, Akmaral Imanbayeva, Timur Namazbayev
Abstract:
Currently, the small spacecraft (SSC) industry is experiencing a big boom in popularity. This is primarily due to ease of use, low cost, and mobility. In addition, these programs can be implemented not only at the state level but also at the level of companies, universities, and other organizations. For remote sensing of the Earth (ERS), small spacecraft with an orientation system are used. It is important to take into account here that a remote sensing device, for example, a camera for photographing the Earth's surface, must be directed at the Earth's surface. But this apparent limitation can be turned into an advantage using a patch antenna. This work proposes to use a patch antenna based on a unidirectional fractal in the SSC. The CST Microwave Studio software package was used for simulation and research. Copper (ε = 1.0) was chosen as the emitting element and reflector. The height of the substrate was 1.6 mm, and the substrate material was FR-4 (ε = 4.3). The simulation was performed in the frequency range of 0–6 GHz. As a result of the research, a patch antenna based on fractal geometry was developed for ERS nanosatellites. The capabilities of these antennas are modeled and investigated. A method for calculating and modeling fractal geometry for patch antennas has been developed.
Keywords: antenna, earth remote sensing, fractal, small spacecraft
Procedia PDF Downloads 262
1817 Modeling and Design of Rectenna for Low Power Medical Implants
Authors: Madhav Pant, Khem N. Poudel
Abstract:
Wireless power transfer is continuously becoming more powerful and compact in implantable medical devices and a wide range of other applications. A rectenna is designed for a wireless power transfer technique that can be applied to medical implant devices. The experiment is performed using ANSYS HFSS, a full-wave electromagnetic simulator. Dipole antenna combinations operating at 2.4 GHz are used for wireless power transfer, and the maximum DC voltage received by the implant is determined considering the International Commission on Non-Ionizing Radiation Protection (ICNIRP) regulations. The power-receiving dipole antenna is placed inside a cylindrical geometry having properties similar to the human body at a frequency of 2.4 GHz. Our design can provide power to the implant at a depth of 5 mm of skin and 5 mm of bone. The voltage doubler/quadrupler rectifier in ANSYS Simplorer is used to calculate the exact DC current utilized by the implant inside the human body. The qualitative design and analysis of this wireless power transfer method could also be used for other biomedical implant systems such as cardiac pacemakers, insulin pumps, and retinal implants.
Keywords: dipole antenna, medical implants, wireless power transfer, rectifier
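A first-order check on how much power reaches such an implant can be made with a Friis free-space link budget plus a lumped tissue-loss term; the gains, transmit power, and 15 dB tissue loss below are placeholder assumptions for illustration, not results from the HFSS model:

```python
import math

def friis_received_power_dbm(pt_dbm, gt_dbi, gr_dbi, freq_hz, dist_m,
                             extra_loss_db=0.0):
    """Friis link budget in dB form:
    Pr = Pt + Gt + Gr - 20*log10(4*pi*d/lambda) - extra losses."""
    lam = 3e8 / freq_hz
    path_loss_db = 20.0 * math.log10(4.0 * math.pi * dist_m / lam)
    return pt_dbm + gt_dbi + gr_dbi - path_loss_db - extra_loss_db

# 2.4 GHz link to an implant 10 cm away; the 15 dB tissue attenuation is
# an assumed placeholder, not a measured value.
pr = friis_received_power_dbm(pt_dbm=10.0, gt_dbi=2.0, gr_dbi=2.0,
                              freq_hz=2.4e9, dist_m=0.10, extra_loss_db=15.0)
print(round(pr, 1))  # received power in dBm
```

Even this crude budget shows why a voltage doubler/quadrupler rectifier matters: milliwatt-scale or weaker received power yields only a small RF voltage at the rectenna terminals, which the multiplier stages step up to a usable DC level.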
Procedia PDF Downloads 175
1816 Analysing Causal Effect of London Cycle Superhighways on Traffic Congestion
Authors: Prajamitra Bhuyan
Abstract:
Transport operators have a range of intervention options available to improve or enhance their networks. But often such interventions are made in the absence of sound evidence on what outcomes may result. Cycle superhighways were promoted as a sustainable and healthy travel mode which aims to cut traffic congestion. The estimation of the impacts of the cycle superhighways on congestion is complicated by the non-random assignment of such an intervention over the transport network. In this paper, we analyse the causal effect of cycle superhighways utilising pre-intervention and post-intervention information on traffic and road characteristics along with socio-economic factors. We propose a modeling framework based on the propensity score and an outcome regression model. The method is also extended to a doubly robust set-up. Simulation results show the superior performance of the proposed method over existing competitors. The method is applied to analyse a real dataset on the London transport network, and the results would help effective decision-making to improve network performance.
Keywords: average treatment effect, confounder, difference-in-difference, intelligent transportation system, potential outcome
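The doubly robust combination of a propensity score model and an outcome regression can be sketched with the augmented inverse-probability-weighted (AIPW) estimator on synthetic data: the estimate of the average treatment effect stays consistent if either nuisance model is correctly specified. All data-generating values below are illustrative assumptions, not the London data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)                            # confounder
e_true = 1.0 / (1.0 + np.exp(-0.5 * x))           # true propensity score
t = rng.binomial(1, e_true)                       # non-random "intervention"
tau = 1.5                                         # true average treatment effect
y = 2.0 + x + tau * t + rng.normal(scale=0.5, size=n)

# 1) Propensity model: logistic regression via plain gradient descent.
X1 = np.column_stack([np.ones(n), x])
w = np.zeros(2)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X1 @ w))
    w -= 0.1 * X1.T @ (p - t) / n
e_hat = np.clip(1.0 / (1.0 + np.exp(-X1 @ w)), 1e-3, 1 - 1e-3)

# 2) Outcome regressions: linear models fit separately in each arm.
b1, *_ = np.linalg.lstsq(X1[t == 1], y[t == 1], rcond=None)
b0, *_ = np.linalg.lstsq(X1[t == 0], y[t == 0], rcond=None)
m1, m0 = X1 @ b1, X1 @ b0

# 3) AIPW (doubly robust) estimate of the average treatment effect.
tau_hat = np.mean(m1 - m0
                  + t * (y - m1) / e_hat
                  - (1 - t) * (y - m0) / (1 - e_hat))
print(round(tau_hat, 2))
```

The outcome-regression term supplies the main estimate while the inverse-probability residual term corrects it, which is exactly the protection against misspecifying one of the two nuisance models that motivates the doubly robust extension.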
Procedia PDF Downloads 244