Search results for: measurement models
711 Factors Impacting Training and Adult Education Providers’ Business Performance: The Singapore Context
Abstract:
The SkillsFuture Singapore’s mission to develop a responsive and forward-looking Training and Adult Education (TAE) and workforce development system is undergirded by how successful TAE providers are in their business performance and in the strategies that strengthen their operational efficiency and processes. Therefore, understanding the factors that drive the business performance of TAE providers is critical to the success of SkillsFuture Singapore’s initiatives. This study aims to investigate how business strategy, work autonomy, work intensity and professional development support impact the business performance of private TAE providers. Specifically, the three research questions are: (1) Are there significant relationships between the above-mentioned four factors and TAE providers’ business performance?; (2) Are there significant differences in the four factors between the low and high business performance groups of TAE providers?; and (3) To what extent and in what manner do the four factors predict TAE providers’ business performance? This was part of the first national study on organizations and professionals working in the Training and Adult Education (TAE) sector. Data from 265 private TAE providers, where respondents were Chief Executive Officers or representatives from senior management, were analyzed. The results showed that business strategy (the extent to which the organization leads the way in developing new products and services, uses up-to-date learning technologies, and customizes its products and services to the client’s needs), work autonomy (the extent to which staff personally have an influence on how hard they work, deciding what tasks they are to do, how they are to do them, and the quality standards to which they work) and professional development support (both monetary and non-monetary support and incentives) had positive and significant relationships with business performance.
However, no significant relationship was found between work intensity and business performance. Business strategy, work autonomy and professional development support were significantly higher in the high business performance group than in the low-performance group among the TAE providers. Results of hierarchical regression analyses controlling for the size of the TAE providers showed significant impacts of business strategy, work autonomy and professional development support on TAE providers’ business performance. Overall, the model accounted for 27% of the variance in TAE providers’ business performance. This study provides policymakers with insights into improving existing policies, designing new initiatives and implementing targeted interventions to support TAE providers. The findings also have implications for how TAE providers could better formulate their organizational strategies and business models. Finally, limitations of the study, along with directions for future research, are discussed in the paper. Keywords: adult education, business performance, business strategy, training, work autonomy
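The hierarchical regression logic described in this abstract (a control block entered first, then the predictors of interest, with the gain in R² attributed to the added factors) can be sketched as follows. This is an illustration on synthetic data only; the variable names and coefficients are hypothetical and do not come from the study.

```python
import numpy as np

def r_squared(X, y):
    """In-sample R^2 of an OLS fit of y on X (intercept column added)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    return 1.0 - ss_res / ss_tot

# Synthetic illustration: provider size as the control variable,
# plus three hypothetical predictors of business performance.
rng = np.random.default_rng(0)
n = 265
size = rng.normal(size=n)
strategy = rng.normal(size=n)
autonomy = rng.normal(size=n)
pd_support = rng.normal(size=n)
performance = (0.1 * size + 0.4 * strategy + 0.3 * autonomy
               + 0.3 * pd_support + rng.normal(size=n))

# Step 1: control-only model; Step 2: full model with the three factors.
r2_control = r_squared(size.reshape(-1, 1), performance)
r2_full = r_squared(np.column_stack([size, strategy, autonomy, pd_support]),
                    performance)
delta_r2 = r2_full - r2_control  # variance uniquely added by the factors
```

Because the models are nested, the full model's R² can never fall below the control model's; the increment `delta_r2` is the quantity a hierarchical regression reports for the added block.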
Procedia PDF Downloads 208
710 A Homogenized Mechanical Model of Carbon Nanotubes/Polymer Composite with Interface Debonding
Authors: Wenya Shu, Ilinca Stanciulescu
Abstract:
Carbon nanotubes (CNTs) possess attractive properties, such as high stiffness and strength and high thermal and electrical conductivities, making them a promising filler in multifunctional nanocomposites. Although CNTs can be efficient reinforcements, the expected level of mechanical performance of CNT-polymers is often not reached in practice due to the poor mechanical behavior of the CNT-polymer interfaces. The interactions of CNT and polymer are believed to result mainly from van der Waals forces. Interface debonding is a fracture and delamination phenomenon, so cohesive zone modeling (CZM) is deemed to capture the interface behavior well. Detailed cohesive zone modeling provides an option to consider the CNT-matrix interactions, but it complicates mesh generation and leads to high computational costs. Homogenized models that smear the fibers into the ground matrix and treat the material as homogeneous have been studied in many works to simplify simulations. However, based on the perfect-interface assumption, the traditional homogenized model obtained from mixing rules severely overestimates the stiffness of the composite, even compared with the result of a CZM with an artificially very strong interface. A mechanical model that can take into account interface debonding and achieve accuracy comparable to the CZM is thus essential. The present study first investigates the CNT-matrix interactions by employing cohesive zone modeling. Three different coupled CZM laws, i.e., bilinear, exponential and polynomial, are considered. These studies indicate that the shapes of the chosen CZM constitutive laws do not significantly influence the simulations of interface debonding. Assuming a bilinear traction-separation relationship, the debonding process of a single CNT in the matrix is divided into three phases and described by differential equations. The analytical solutions corresponding to these phases are derived.
A homogenized model is then developed by introducing a parameter characterizing interface sliding into the mixing theory. The proposed mechanical model is implemented in FEAP 8.5 as a user material. The accuracy and limitations of the model are discussed through several numerical examples. The CZM simulations in this study reveal important factors in the modeling of CNT-matrix interactions. The analytical solutions and the proposed homogenized model provide alternative methods to efficiently investigate the mechanical behavior of CNT/polymer composites. Keywords: carbon nanotube, cohesive zone modeling, homogenized model, interface debonding
Procedia PDF Downloads 129
709 Computationally Efficient Electrochemical-Thermal Li-Ion Cell Model for Battery Management System
Authors: Sangwoo Han, Saeed Khaleghi Rahimian, Ying Liu
Abstract:
Vehicle electrification is gaining momentum, and many car manufacturers promise to deliver more electric vehicle (EV) models to consumers in the coming years. In controlling the battery pack, the battery management system (BMS) must maintain optimal battery performance while ensuring the safety of the pack. Tasks related to battery performance include determining state-of-charge (SOC), state-of-power (SOP), state-of-health (SOH), cell balancing, and battery charging. Safety-related functions include making sure cells operate within the specified static and dynamic voltage windows and temperature range, derating power, detecting faulty cells, and warning the user if necessary. The BMS often utilizes an RC circuit model to model a Li-ion cell because of its robustness and low computation cost, among other benefits. Because an equivalent circuit model such as the RC model is not a physics-based model, it can never serve as a prognostic model that predicts battery state-of-health and averts safety risks before they occur. A physics-based Li-ion cell model, on the other hand, is more capable, at the expense of computation cost. To avoid the high computation cost associated with a full-order model, many researchers have demonstrated the use of a single particle model (SPM) for BMS applications. One drawback of the single particle modeling approach is that it forces the use of the average current density in the calculation. The SPM is appropriate for simulating drive cycles where there is insufficient time to develop a significant current distribution within an electrode. However, under a continuous or high-pulse electrical load, the model may fail to predict cell voltage or Li⁺ plating potential. To overcome this issue, a multi-particle reduced-order model is proposed here.
The use of multiple particles combined with either linear or nonlinear charge-transfer reaction kinetics makes it possible to capture the current density distribution within an electrode under any type of electrical load. To keep the computational complexity comparable to that of an SPM, the governing equations are solved sequentially to minimize iterative solving processes. Furthermore, the model is validated against a full-order model implemented in COMSOL Multiphysics. Keywords: battery management system, physics-based Li-ion cell model, reduced-order model, single-particle and multi-particle model
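The first-order RC equivalent-circuit model that the abstract contrasts with physics-based models can be sketched in a few lines. This is a generic single-RC-branch cell model with hypothetical parameter values, not the authors' model; for simplicity the open-circuit voltage is held constant (no SOC dependence), and the polarization voltage across the RC branch is updated with the exact exponential solution for constant current over each step.

```python
import math

def simulate_1rc(ocv, r0, r1, c1, current, dt, steps):
    """Terminal voltage of a 1-RC equivalent-circuit cell under
    constant current. Discharge current is positive.

    ocv -- open-circuit voltage (held constant here for simplicity)
    r0  -- ohmic series resistance; r1, c1 -- RC polarization branch
    """
    v1 = 0.0              # polarization voltage across the RC branch
    tau = r1 * c1         # branch time constant
    voltages = []
    for _ in range(steps):
        # exact update for constant current over one step of length dt
        decay = math.exp(-dt / tau)
        v1 = v1 * decay + current * r1 * (1.0 - decay)
        voltages.append(ocv - current * r0 - v1)
    return voltages
```

Under sustained discharge the polarization voltage relaxes toward `current * r1`, so the terminal voltage settles at `ocv - current * (r0 + r1)`; it is exactly this fixed structure (and the missing SOC/diffusion physics) that limits the RC model's prognostic value.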
Procedia PDF Downloads 111
708 Developing a Framework for Assessing and Fostering the Sustainability of Manufacturing Companies
Authors: Ilaria Barletta, Mahesh Mani, Björn Johansson
Abstract:
The concept of sustainability encompasses economic, environmental, social and institutional considerations. Sustainable manufacturing (SM) is, therefore, a multi-faceted concept. It broadly implies the development and implementation of technologies, projects and initiatives that are concerned with the life cycle of products and services and are able to bring positive impacts to the environment, company stakeholders and profitability. Because of this, achieving SM-related goals requires a holistic, life-cycle-thinking approach from manufacturing companies. Further, such an approach must rely on a logic of continuous improvement and ease of implementation in order to be effective. Currently, no comprehensively structured framework exists in the academic literature to support manufacturing companies in identifying the issues and the capabilities that can either hinder or foster sustainability. This scarcity of support extends to difficulties in obtaining quantifiable measurements with which to objectively evaluate solutions and programs and to identify improvement areas within SM for standards conformance. To bridge this gap, this paper proposes the concept of a framework for assessing and continuously improving the sustainability of manufacturing companies. The framework addresses strategies and projects for SM and operates in three sequential phases: analysis of the issues, design of solutions and continuous improvement. Interviews, observations and questionnaires are the research methods to be used for the implementation of the framework. Different decision-support methods - either already-existing or novel ones - can be 'plugged into' each of the phases. These methods can assess anything from business capabilities to process maturity. In particular, the authors are working on the development of a sustainable manufacturing maturity model (SMMM) as decision support within the phase of 'continuous improvement'.
The SMMM, inspired by previous maturity models, is made up of four maturity levels ranging from 'non-existing' to 'thriving'. Aggregate findings from the use of the framework should ultimately reveal to managers and CEOs the roadmap for achieving SM goals and identify the maturity of their companies’ processes and capabilities. Two cases from two manufacturing companies in Australia are currently being used to develop and test the framework. The use of this framework will bring two main benefits: enabling visual, intuitive internal sustainability benchmarking and raising awareness of improvement areas that lead companies towards an increasingly developed SM. Keywords: life cycle management, continuous improvement, maturity model, sustainable manufacturing
Procedia PDF Downloads 266
707 Instructors' Willingness, Self-Efficacy Beliefs, Attitudes and Knowledge about Provisions of Instructional Accommodations for Students with Disabilities: The Case of Selected Universities in Ethiopia
Authors: Abdreheman Seid Abdella
Abstract:
This study examined instructors' willingness, self-efficacy beliefs, attitudes and knowledge about provisions of instructional accommodations for students with disabilities in universities. Major concepts used in this study were operationally defined, and some models of disability were reviewed. Questionnaires were distributed to a total of 181 instructors from four universities, and quantitative data were generated. Appropriate methods of data analysis were then employed. The results indicated that, on average, instructors had positive willingness, strong self-efficacy beliefs and positive attitudes towards providing instructional accommodations. In addition, the results showed that the majority of participants had a moderate level of knowledge about the provision of instructional accommodations. Concerning the relationship between instructors' background variables and the dependent variables, the results revealed that the location of the university and awareness-raising training about Inclusive Education showed statistically significant relationships with all dependent variables (willingness, self-efficacy beliefs, attitudes and knowledge). On the other hand, gender and college/faculty did not show statistically significant relationships. In addition, among the inter-correlations of the dependent variables, the correlation between attitudes and willingness to provide accommodations was the strongest. Furthermore, using multiple linear regression analysis, this study indicated that predictor variables such as self-efficacy beliefs, attitudes, knowledge and teaching methodology training made statistically significant contributions to predicting the criterion willingness. Willingness and attitudes made statistically significant contributions to predicting self-efficacy beliefs. Willingness, Special Needs Education coursework and self-efficacy beliefs made statistically significant contributions to predicting attitudes.
Special Needs Education courses, the location of the university and willingness made statistically significant contributions to predicting knowledge. Finally, using exploratory factor analysis, this study showed that four components each represent the underlying constructs of the willingness and self-efficacy items for providing instructional accommodations, five components represent the attitudes items, and three components represent the underlying constructs of the knowledge items. Based on the findings, recommendations were made for improving the situation of instructional accommodations in Ethiopian universities. Keywords: willingness, self-efficacy belief, attitude, knowledge
Procedia PDF Downloads 270
706 Factors in a Sustainability Assessment of New Types of Closed Cavity Facades
Authors: Zoran Veršić, Josip Galić, Marin Binički, Lucija Stepinac
Abstract:
With the current increase in CO₂ emissions and global warming, the sustainability of both existing and new solutions must be assessed on a wide scale. As the implementation of closed cavity facades (CCF) is on the rise, a variety of factors must be included in the analysis of new types of CCF. This paper aims to cover the relevant factors included in the sustainability assessment of new types of CCF. Several mathematical models are used to describe the physical behavior of CCF. Depending on the type of CCF, they cover the main factors that affect the durability of the façade: the thermal behavior of the various elements in the façade, the stress and deflection of the glass panels, the pressure inside the cavity, the air exchange rate, and the moisture buildup in the cavity. CCF itself represents a complex system in which all the mentioned factors must be considered together. Still, the façade is only the envelope of a more complex system, the building. The choice of façade dictates the heat loss and heat gain, the thermal comfort of the inner space, natural lighting, and ventilation. Annual energy consumption for heating, cooling and lighting, together with maintenance costs, will present the operational advantages or disadvantages of the chosen façade system in both economic and environmental terms. Still, the operational viewpoint alone is not all-inclusive. As building codes constantly demand higher energy efficiency as well as the transfer to renewable energy sources, the ratio of embodied to lifetime operational energy footprint of buildings is changing. With the drop in operational CO₂ emissions, embodied emissions represent a larger and larger share of the lifecycle emissions of a building. Taking all this into account, the sustainability assessment of a façade, as well as of other major building elements, should include all the mentioned factors over the lifecycle of the element. The challenge of such an approach is the timescale.
Depending on the climatic conditions at the building site, the expected lifetime of a CCF can exceed 25 years. Over such a time span, some factors can be estimated more precisely than others. Those depending on socio-economic conditions are likely to be harder to predict than natural ones such as the climatic load. This work recognizes and summarizes the relevant factors needed for the assessment of new types of CCF, considering the entire lifetime of a façade element along with economic and environmental aspects. Keywords: assessment, closed cavity façade, life cycle, sustainability
Procedia PDF Downloads 192
705 Multimodal Integration of EEG, fMRI and Positron Emission Tomography Data Using Principal Component Analysis for Prognosis in Coma Patients
Authors: Denis Jordan, Daniel Golkowski, Mathias Lukas, Katharina Merz, Caroline Mlynarcik, Max Maurer, Valentin Riedl, Stefan Foerster, Eberhard F. Kochs, Andreas Bender, Ruediger Ilg
Abstract:
Introduction: So far, clinical assessments that rely on behavioral responses to differentiate coma states or even predict outcome in coma patients are unreliable, e.g., because of some patients’ motor disabilities. The present study aimed to provide prognosis in coma patients using markers from the electroencephalogram (EEG), blood oxygen level dependent (BOLD) functional magnetic resonance imaging (fMRI) and [18F]-fluorodeoxyglucose (FDG) positron emission tomography (PET). Unsupervised principal component analysis (PCA) was used for the multimodal integration of markers. Methods: With the approval of the local ethics committee of the Technical University of Munich (Germany), 20 patients (aged 18-89) with severe brain damage were recruited through intensive care units at the Klinikum rechts der Isar in Munich and at the Therapiezentrum Burgau (Germany). On the day of the EEG/fMRI/PET measurement (date I), patients (<3.5 months in coma) were grouped into the minimally conscious state (MCS) or vegetative state (VS) on the basis of their clinical presentation (coma recovery scale-revised, CRS-R). The follow-up assessment (date II) was also based on the CRS-R, in a period of 8 to 24 months after date I. At date I, 63-channel EEG (Brain Products, Gilching, Germany) was recorded outside the scanner, and subsequently simultaneous FDG-PET/fMRI was acquired on an integrated Siemens Biograph mMR 3T scanner (Siemens Healthineers, Erlangen, Germany). Power spectral densities, permutation entropy (PE) and symbolic transfer entropy (STE) were calculated in/between frontal, temporal, parietal and occipital EEG channels. PE and STE are based on symbolic time series analysis and have already been introduced as robust markers separating wakefulness from unconsciousness in EEG during general anesthesia.
While PE quantifies the regularity structure of the neighboring order of signal values (a surrogate of cortical information processing), STE reflects information transfer between two signals (a surrogate of directed connectivity in cortical networks). fMRI analysis was carried out using SPM12 (Wellcome Trust Center for Neuroimaging, University of London, UK). Functional images were realigned, segmented, normalized and smoothed. PET was acquired for 45 minutes in list mode. For absolute quantification of the brain’s glucose consumption rate in FDG-PET, kinetic modelling was performed with Patlak’s plot method. BOLD signal intensity in fMRI and glucose uptake in PET were calculated in 8 distinct cortical areas. PCA was performed over all markers from EEG/fMRI/PET. Prognosis (persistent VS and deceased patients vs. recovery to MCS/awake from date I to date II) was evaluated using the area under the curve (AUC), including bootstrap confidence intervals (CI, *: p<0.05). Results: Prognosis was reliably indicated by the first component of the PCA (AUC=0.99*, CI=0.92-1.00), showing a higher AUC than the best single markers (EEG: AUC<0.96*, fMRI: AUC<0.86*, PET: AUC<0.60). The CRS-R did not show prediction (AUC=0.51, CI=0.29-0.78). Conclusion: In a multimodal analysis of EEG/fMRI/PET in coma patients, PCA led to a reliable prognosis. The impact of this result is evident, as clinical estimates of prognosis are at times inapt and could be supported by quantitative biomarkers from EEG, fMRI and PET. Due to the small sample size, further investigations are required, in particular ones allowing supervised learning instead of the basic approach of unsupervised PCA. Keywords: coma states and prognosis, electroencephalogram, entropy, functional magnetic resonance imaging, machine learning, positron emission tomography, principal component analysis
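The two core computations in this pipeline (projecting multimodal markers onto the first principal component, then scoring that projection with an AUC) are standard and can be sketched in plain NumPy. This is a generic illustration, not the authors' code; the function names are hypothetical, and real use would add the bootstrap confidence intervals the abstract reports.

```python
import numpy as np

def first_pc_scores(X):
    """Scores of each subject on the first principal component of the
    standardized marker matrix X (rows = subjects, columns = markers)."""
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)  # z-score each marker
    # SVD of the standardized data: rows of Vt are the principal axes
    _, _, Vt = np.linalg.svd(Xs, full_matrices=False)
    return Xs @ Vt[0]

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney pairwise identity:
    the fraction of (positive, negative) pairs ranked correctly."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Note that the sign of a principal component is arbitrary, so in practice the AUC is read against whichever orientation separates the outcome groups.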
Procedia PDF Downloads 339
704 Assessing the Feasibility of Italian Hydrogen Targets with the Open-Source Energy System Optimization Model TEMOA - Italy
Authors: Alessandro Balbo, Gianvito Colucci, Matteo Nicoli, Laura Savoldi
Abstract:
Hydrogen is expected to become a game changer in the energy transition, especially by enabling sector coupling and the decarbonization of hard-to-abate end-uses. The Italian National Recovery and Resilience Plan identifies hydrogen as one of the key elements of the ecological transition to meet international decarbonization objectives, also including it in several pilot projects for its early development in Italy. This matches the European energy strategy, which aims to make hydrogen a leading energy carrier of the future, setting ambitious goals to be accomplished by 2030. The huge efforts needed to achieve the announced targets require carefully investigating their feasibility in terms of economic expenditures and technical aspects. In order to quantitatively assess the hydrogen potential within the Italian context and the feasibility of the planned investments and projects, this work uses the TEMOA-Italy energy system model to study pathways to meet the strict objectives cited above. The possible hydrogen development has been studied on both the supply side and the demand side of the energy system, also including storage options and distribution chains. The assessment comprises alternative hydrogen production technologies competing in a market, reflecting the several possible investments outlined by the Italian National Recovery and Resilience Plan to boost the development and spread of this infrastructure, including the sector coupling potential with natural gas through the currently existing infrastructure and CO2 capture for the production of synfuels. On the other hand, the hydrogen end-use phase covers a wide range of consumption alternatives, from fuel-cell vehicles, for which both road and non-road transport categories are considered, to steel and chemical industry uses and cogeneration for residential and commercial buildings.
The model includes both high- and low-TRL technologies in order to provide an outcome as consistent for the future decades as for the present day, and since it is developed as an open-source code instance and database, transparency and accessibility are fully guaranteed. Keywords: decarbonization, energy system optimization models, hydrogen, open-source modeling, TEMOA
Procedia PDF Downloads 101
703 Assessment of Marine Diversity on Rocky Shores of Triporti, Vlore, Albania
Authors: Ina Nasto, Denada Sota, Kerol Sacaj, Brunilda Veshaj, Hajdar Kicaj
Abstract:
Rocky shores are often used as models to describe the dynamics of biodiversity around the world, making them and their communities among the most studied marine habitats. The variability in the number of species and the abundance of hard-bottom benthic animal communities on the coast of Triporti, north of the Bay of Vlora, Albania, is described in relation to environmental variables using multivariate analysis. The purpose of this study is to monitor the species composition, quantitative characteristics and seasonal variations of the benthic macroinvertebrate populations of the shallow rocky shores of the Triporti-Vlora area, as well as to assess the ecological condition of these populations. The rocky coast of Triporti, with a length of 7 km, was divided into three sampling stations, each with three 50 m transects. The monitoring of benthic macroinvertebrates in these areas was carried out in two seasons, spring and summer (June and August 2021). For each station and sampling season, the total and average density of each species and the presence constant were estimated, and biodiversity was assessed using the Shannon–Wiener and Simpson indices. The species composition, the quantitative characteristics of the populations and the indicators mentioned above were analyzed comparatively, both between the seasons within one station and among the three stations. Statistical processing of the data was carried out to analyze the changes between the seasons and among the sampling stations in species composition and population density, as well as the correlations between them. A total of 105 benthic macroinvertebrate taxa were found, dominated by molluscs, annelids and arthropods. The low density of species and the low degree of stability of the macrozoobenthic community are indicators of the poor ecological condition and the environmental impact in the studied areas.
Algal cover, the diversity of coastal microhabitats and the degree of coastal exposure to waves play an important role in the characteristics of the macrozoobenthos populations in the studied areas. The rocky shores are also of special interest because the infralittoral of these areas hosts dense kelp forests with Gongolaria barbata and Ericaria crinita, as well as fragmented areas with Posidonia oceanica that reach the coast, all priority habitats of special conservation importance in the Mediterranean. Keywords: macrozoobenthic communities, Shannon–Wiener, Triporti, Vlore, rocky shore
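The Shannon–Wiener and Simpson indices used in this study are standard diversity computations on species abundance counts; a minimal sketch of the textbook formulas (generic code, not the authors') is:

```python
import math

def shannon_wiener(counts):
    """H' = -sum(p_i * ln(p_i)) over species with nonzero abundance,
    where p_i is the relative abundance of species i."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def simpson_diversity(counts):
    """Simpson's index of diversity, 1 - sum(p_i^2): the probability
    that two randomly drawn individuals belong to different species."""
    total = sum(counts)
    return 1.0 - sum((c / total) ** 2 for c in counts)
```

For a sample with S equally abundant species, H' reaches its maximum ln(S) and the Simpson index equals 1 - 1/S, which is why both indices drop in the disturbed, low-evenness stations the abstract describes.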
Procedia PDF Downloads 98
702 Optimizing Residential Housing Renovation Strategies at Territorial Scale: A Data Driven Approach and Insights from the French Context
Authors: Rit M., Girard R., Villot J., Thorel M.
Abstract:
In a scenario of extensive residential housing renovation, stakeholders need models that support decision-making through a deep understanding of the existing building stock and accurate energy demand simulations. To address this need, we have modified an optimization model using open data that enables the study of renovation strategies at both territorial and national scales. This approach provides (1) a strategy definition that simplifies decision trees from theoretical combinations, (2) input to decision-makers on real-world renovation constraints, (3) more reliable identification of energy-saving measures (changes in technology or behaviour), and (4) the discrepancies between currently planned and actually achieved strategies. The main contribution of the studies described in this document is the geographic scale: all residential buildings in the areas of interest were modeled and simulated using national data (geometries and attributes). These buildings were then renovated, when necessary, in accordance with the environmental objectives, taking into account the constraints applicable to each territory (number of renovations per year) or at the national level (renovation of thermally deficient dwellings, i.e., Energy Performance Certificates F and G). This differs from traditional approaches that focus only on a few buildings or archetypes. The model can also be used to analyze the evolution of a building stock as a whole, as it can take into account both the construction of new buildings and their demolition or sale. Using specific case studies of French territories, this paper highlights a significant discrepancy between the strategies currently advocated by decision-makers and those proposed by our optimization model. This discrepancy is particularly evident in critical metrics such as the relationship between the number of renovations per year and the achievable climate targets, or between the financial support currently available to households and the remaining costs.
In addition, users are free to seek optimizations for their building stock across a range of different metrics (e.g., financial, energy, environmental, or life cycle analysis). These results are a clear call to re-evaluate existing renovation strategies and to take a more nuanced and customized approach. As the climate crisis moves inexorably forward, harnessing the potential of advanced technologies and data-driven methodologies is imperative. Keywords: residential housing renovation, MILP, energy demand simulations, data-driven methodology
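The kind of constrained selection the MILP performs can be illustrated with a toy brute-force version (the numbers and function name are hypothetical; the real model optimizes an entire territorial building stock with many more constraints and a proper solver): choose which buildings to renovate so as to maximize energy savings under a yearly renovation quota and a subsidy budget.

```python
from itertools import combinations

def best_renovation_plan(savings, costs, budget, max_renovations):
    """Exhaustively search subsets of buildings: maximize total energy
    savings subject to a renovation quota and a budget cap. A toy
    stand-in for a MILP; fine for a handful of buildings, hopeless at
    territorial scale (hence the MILP formulation)."""
    best, best_total = (), 0.0
    n = len(savings)
    for k in range(min(max_renovations, n) + 1):
        for subset in combinations(range(n), k):
            cost = sum(costs[i] for i in subset)
            if cost > budget:
                continue  # violates the budget constraint
            total = sum(savings[i] for i in subset)
            if total > best_total:
                best, best_total = subset, total
    return best, best_total
```

Even in this toy form, the interaction the paper highlights is visible: tightening the quota (`max_renovations`) or the budget can force the plan away from the buildings with the largest individual savings.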
Procedia PDF Downloads 68
701 The Effects of Human Activities on Plant Diversity in Tropical Wetlands of Lake Tana (Ethiopia)
Authors: Abrehet Kahsay Mehari
Abstract:
Aquatic plants provide the physical structure of wetlands and increase their habitat complexity and heterogeneity, and as such, they have a profound influence on other biota. In this study, we investigated how human disturbance activities influenced the species richness and community composition of aquatic plants in the wetlands of Lake Tana, Ethiopia. Twelve wetlands were selected: four lacustrine, four river mouths, and four riverine papyrus swamps. Data on aquatic plants, environmental variables, and human activities were collected during the dry and wet seasons of 2018. A linear mixed effects model and a distance-based redundancy analysis (db-RDA) were used to relate aquatic plant species richness and community composition, respectively, to human activities and environmental variables. A total of 113 aquatic plant species, belonging to 38 families, were identified across all wetlands during the dry and wet seasons. Emergent species had the maximum area covered, at 73.45%, and attained the highest relative abundance, followed by amphibious and other forms. The mean taxonomic richness of aquatic plants was significantly lower in wetlands with high overall human disturbance scores than in wetlands with low overall human disturbance scores. Moreover, taxonomic richness showed a negative correlation with livestock grazing, tree plantation, and sand mining. The community composition also varied across wetlands with varying levels of human disturbance and was primarily driven by turnover (i.e., replacement of species) rather than nestedness (i.e., loss of species). Distance-based redundancy analysis revealed that livestock grazing, tree plantation, sand mining, waste dumping, and crop cultivation were significant predictors of the variation in aquatic plant community composition in the wetlands.
Linear mixed effects models and distance-based redundancy analysis also revealed that water depth, turbidity, conductivity, pH, sediment depth, and temperature were important drivers of variation in aquatic plant species richness and community composition. Papyrus swamps had the highest species richness and supported distinct plant communities. Conservation efforts should therefore focus on these habitats, and measures should be taken to restore the highly disturbed and species-poor wetlands near the river mouths. Keywords: species richness, community composition, aquatic plants, wetlands, Lake Tana, human disturbance activities
Procedia PDF Downloads 123
700 The Influence of Cognitive Load in the Acquisition of Words through Sentence or Essay Writing
Authors: Breno Barreto Silva, Agnieszka Otwinowska, Katarzyna Kutylowska
Abstract:
Research comparing lexical learning following the writing of sentences and of longer texts with keywords is limited and contradictory. One possibility is that the recursivity of writing may enhance processing and increase lexical learning; another is that the higher cognitive load of complex-text writing (e.g., essays), at least when timed, may hinder the learning of words. In our study, we selected two sets of 10 academic keywords matched for part of speech, length (number of characters), frequency (SUBTLEXus), and concreteness, and we asked 90 L1-Polish advanced-level English majors to use the keywords when writing sentences, timed essays, or untimed essays. First, all participants wrote a timed Control essay (60 minutes) without keywords. Then different groups produced Timed essays (60 minutes; n=33), Untimed essays (n=24), or Sentences (n=33) using the two sets of glossed keywords (counterbalanced). The comparability of the participants in the three groups was ensured by matching them for proficiency in English (LexTALE) and for several measures derived from the Control essay: VocD (assessing productive lexical diversity), normed errors (assessing productive accuracy), words per minute (assessing productive written fluency), and holistic scores (assessing overall quality of production). We measured lexical learning (depth and breadth) via an adapted Vocabulary Knowledge Scale (VKS) and a free association test. Cognitive load was measured in the three essays (Control, Timed, Untimed) using the normed number of errors and holistic scores (TOEFL criteria). The number of errors and essay scores were obtained from two raters (interrater reliability Pearson's r = .78 to .91). Generalized linear mixed models showed no difference in the breadth and depth of keyword knowledge after writing Sentences, Timed essays, and Untimed essays.
The task-based measurements showed that Control and Timed essays had similar holistic scores, but that Untimed essays were of better quality than Timed essays. Untimed essays were also the most accurate, and Timed essays the most error-prone. In conclusion, using keywords in Timed, but not Untimed, essays increased cognitive load, leading to more errors and lower quality. Still, writing sentences and essays yielded similar lexical learning, and differences in cognitive load between Timed and Untimed essays did not affect lexical acquisition.
Keywords: learning academic words, writing essays, cognitive load, English as an L2
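The interrater reliability figure reported above (Pearson's r between the two raters' scores) can be reproduced with a small stdlib-only sketch; the essay scores below are invented for illustration:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two raters' scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical holistic essay scores (0-5 scale) from two raters
rater1 = [3.0, 4.0, 2.5, 5.0, 3.5, 4.5]
rater2 = [3.5, 4.0, 2.0, 4.5, 3.5, 5.0]
r = pearson_r(rater1, rater2)
```

Values in the study's reported range (.78 to .91) indicate strong but imperfect agreement between raters.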
Procedia PDF Downloads 73
699 Biochemical and Cellular Correlates of Essential Oil of Pistacia Integerrima against in vitro and Murine Models of Bronchial Asthma
Authors: R. L. Shirole, N. L. Shirole, R. B. Patil, M. N. Saraf
Abstract:
The present investigation aimed to elucidate the probable mechanism of the antiasthmatic action of the essential oil of Pistacia integerrima J.L. Stewart ex Brandis galls (EOPI). EOPI was investigated for its potential antiasthmatic action using in vitro antiallergic assays (mast cell degranulation and soybean lipoxygenase enzyme activity) and for spasmolytic action using an isolated guinea pig ileum preparation. In vivo studies included lipopolysaccharide-induced bronchial inflammation in rats and airway hyperresponsiveness in ovalbumin-sensitized guinea pigs using spirometry. Data were analyzed with GraphPad Prism 5.01, and results were expressed as means ± SEM; P < 0.05 was considered significant. EOPI inhibited 5-lipoxygenase enzyme activity and erythropoietin-induced angiogenesis and showed DPPH scavenging activity. It showed dose-dependent anti-allergic activity by inhibiting compound 48/80-induced mast cell degranulation. The findings that the essential oil inhibited the transient contraction induced by acetylcholine in calcium-free medium and relaxed S-(-)-Bay 8644-precontracted isolated guinea pig ileum jointly suggest that the L-subtype Cav channel is involved in the spasmolytic action of EOPI. Treatment with EOPI dose-dependently (7.5, 15 and 30 mg/kg i.p.) inhibited the lipopolysaccharide-induced increases in total cell count, neutrophil count, nitrate-nitrite, total protein, and albumin levels in bronchoalveolar fluid and myeloperoxidase levels in lung homogenates. Mild diffuse lesions involving focal interalveolar septal and intraluminal infiltration of neutrophils were observed in EOPI (7.5 and 15 mg/kg) pretreated rats, while no abnormality was detected in EOPI (30 mg/kg) and roflumilast (1 mg/kg) pretreated rats. Roflumilast was used as the standard. EOPI reduced the respiratory flow due to gasping in ovalbumin-sensitized guinea pigs.
The study demonstrates the effectiveness of EOPI in bronchial asthma, possibly related to its inhibition of the L-subtype Cav channel, mast cell stabilization, antioxidant and angiostatic activities, and inhibition of the 5-lipoxygenase enzyme.
Keywords: asthma, lipopolysaccharide, spirometry, Pistacia integerrima J.L. Stewart ex Brandis, essential oil
Procedia PDF Downloads 284
698 The Need for Innovation Management in the Context of Integrated Management Systems
Authors: Adela Mariana Vadastreanu, Adrian Bot, Andreea Maier, Dorin Maier
Abstract:
This paper approaches the need for innovation management in the context of an existing integrated management system implemented in an organization. The road to success for companies in today’s economic environment is more demanding than ever, and the capacity to adapt to rapid change is essential in order to survive in the market. Managers struggle daily with increasingly complex problems, caused by fierce competition in the market but also by the rising demands of customers. Innovation seems to be the solution to these problems. During the last decade almost all companies have been certified according to various management systems, such as quality management systems, environmental management systems, health and safety management systems and others; furthermore, many companies have implemented an integrated management system by integrating two or more management systems. The problem arising today is how to integrate innovation into these integrated management systems. The challenge is that the development of innovation management systems is still at an early stage. In this paper we have studied the possibility of integrating some of the innovation requirements into an existing management system, we have identified the innovation performance requirements, and we have proposed some recommendations regarding innovation management and its implementation as part of an integrated management system. This paper lays down the basis for developing a model of integrated management systems that includes innovation as a central part. Organizations are becoming more aware of the importance of Integrated Management Systems (IMS).
Integrating two or more management systems into an integrated management system can have many advantages. This paper examines various models of management systems integration in accordance with the professional references ISO 9001, ISO 14001 and OHSAS 18001, highlighting strengths and weaknesses, creating a basis for the future development of integrated management systems and their involvement in various other processes within the organization, such as innovation management. The increasingly demanding economic context raises awareness of the importance of innovation for organizations. This paper highlights the importance of innovation for an organization and also gives some practical solutions for improving the overall success of the business through a better approach to innovation. Various standards have been developed in order to certify that organizations respect their requirements. Applying an integrated standards model is shown to be more effective than applying the standards independently. The problem that arises is that in order to adopt the integrated version of the standards, some changes have to be made at the organizational level. Every change that needs to be made has an effect on the organization's activity, and in this sense the paper deals with the changes needed for adopting an integrated management system and whether those changes have an influence on performance. After the analysis of the results, we can conclude that in order to improve performance, a necessary step is the implementation of innovation in the existing integrated management system.
Keywords: innovation, integrated management systems, innovation management, quality
Procedia PDF Downloads 315
697 Image Making: The Spectacle of Photography and Text in Obituary Programs as Contemporary Practice of Social Visibility in Southern Nigeria
Authors: Soiduate Ogoye-Atanga
Abstract:
During funeral ceremonies, it has become common for attendees to jostle for burial programs in some southern Nigerian towns. Beginning as ordinary typewritten, text-only sheets of paper in the 1980s and evolving to their current digitally formatted, multicolor magazine style, burial programs continue to be collected and kept in homes, where they remain as archival documents of family photo histories and as a veritable means of leveraging family status and visibility in a social economy through the inclusion of numerous choreographically arranged photographs and text. The biographical texts speak of the idealized and often lofty, aestheticized accomplishments of the deceased, which are often corroborated by an accompanying section of tributes, first from immediate family members and then from the affiliations and organizations to which the deceased belonged, in the form of scanned letterheaded corporate tributes. Others carry modest biographical texts when the deceased accomplished little. In the majority of cases, the display of photographs and text in these programs follows a trajectory of historical compartmentalization of the deceased, from parentage through youth, occupation, retirement, and old age as the case may be, which usually moves from black-and-white historical photographs to the color photography of today. This compartmentalization follows varied models but is designed to show the deceased in varying activities during his or her lifetime. The production of these programs ranges from extremely expensive, lavish full-color editions of nearly fifty to eighty pages to bland, very simplified, low-quality few-page editions in a single color with no photographs except on the cover. Cost and quality therefore become determinants of varying family status and social visibility.
Through a critical selection of photographs and text, family members construct an idealized image of the deceased and of themselves, concentrating on mutuality based on appropriate sartorial selections, socioeconomic grade, and social temperaments framed to corroborate the public’s perception of them. Burial magazines therefore serve purposes beyond their primary use; they constitute an orchestrated social site for image-making and the validation of the social status of families, shaped by prior family histories.
Keywords: biographical texts, burial programs, compartmentalization, magazine, multicolor, photo-histories, social status
Procedia PDF Downloads 188
696 Public-Private Partnership Projects in Canada: A Case Study Approach
Authors: Samuel Carpintero
Abstract:
Public-private partnership (PPP) arrangements have emerged all around the world as a response to infrastructure deficits and the need to refurbish existing infrastructure. The motivations of governments for embarking on PPPs for the delivery of public infrastructure are manifold, and include on-time and on-budget delivery as well as access to private project management expertise. The PPP formula has been used by some state governments in the United States and Canada, where the participation of private companies in financing and managing infrastructure projects has increased significantly in the last decade, particularly in the transport sector. On the one hand, this paper examines the various ways these two countries implement PPP arrangements, with a particular focus on risk transfer. The examination of risk transfer is carried out with reference to the following key PPP risk categories: construction risk, revenue risk, operating risk and availability risk. The main difference between the two countries is that in Canada the demand risk usually remains with the public sector, whereas in the United States this risk is usually transferred to the private concessionaire. The aim is to explore which lessons can be learnt from both models that might be useful for other countries. On the other hand, the paper also analyzes why Spanish companies have been so successful in winning PPP contracts in North America during the past decade. In contrast to the Latin American PPP market, Spanish companies do not have any cultural advantage in the United States and Canada. Arguably, some relevant reasons for the success of the Spanish groups are their extensive experience in PPP projects (dating back to the late 1960s in some cases), their high technical level (which allows them to be aggressive in their bids), and their good position and track record in the financial markets.
The article’s empirical base consists of data provided by official sources in both countries as well as information collected through face-to-face interviews with public and private representatives of the stakeholders participating in some of the PPP schemes. Interviewees include private project managers of the concessionaires, representatives of banks involved as financiers in the projects, and experts in the PPP industry with close knowledge of the North American market. Unstructured in-depth interviews were adopted as the means of investigation for this study because of their power to elicit honest and robust responses and to ensure realism in collecting an overall impression of stakeholders’ perspectives.
Keywords: PPP, concession, infrastructure, construction
Procedia PDF Downloads 300
695 Modeling the Effects of Leachate-Impacted Groundwater on the Water Quality of a Large Tidal River
Authors: Emery Coppola Jr., Marwan Sadat, Il Kim, Diane Trube, Richard Kurisko
Abstract:
Contamination sites like landfills often pose significant risks to receptors like surface water bodies. Surface water bodies are often a source of recreation, including fishing and swimming, which not only enhances their value but also serves as a direct exposure pathway to humans, increasing their need for protection from water quality degradation. In this paper, a case study presents the potential effects of leachate-impacted groundwater from a large closed sanitary landfill on the surface water quality of the nearby Raritan River in New Jersey. The study, performed over a two-year period, included an in-depth field evaluation of both the groundwater and surface water systems and was supplemented by computer modeling. The analysis required delineation of a representative average daily groundwater discharge from the Landfill shoreline into the large, highly tidal Raritan River, with a corresponding estimate of the daily mass loading of potential contaminants of concern. The average daily groundwater discharge into the river was estimated from a high-resolution water level study and a 24-hour constant-rate aquifer pumping test. The significant tidal effects induced on groundwater levels during the aquifer pumping test were filtered out using an advanced algorithm, from which aquifer parameter values were estimated using conventional curve-matching techniques. The estimated hydraulic conductivity values obtained from individual observation wells closely agree with tidally derived values for the same wells. Numerous models were developed and used to simulate groundwater contaminant transport and surface water quality impacts. MODFLOW with MT3DMS was used to simulate the transport of potential contaminants of concern from the down-gradient edge of the Landfill to the Raritan River shoreline. A surface water dispersion model based upon a bathymetric and flow study of the river was used to simulate contaminant concentrations over space within the river.
The modeling results helped demonstrate that, because of natural attenuation, the Landfill does not have a measurable impact on the river, which was confirmed by an extensive surface water quality study.
Keywords: groundwater flow and contaminant transport modeling, groundwater/surface water interaction, landfill leachate, surface water quality modeling
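As a much-simplified one-dimensional analog of the MODFLOW/MT3DMS transport simulation described above, advection-dispersion from a continuous constant-concentration source has the classical Ogata-Banks analytical solution. The velocity, dispersion, and distance values below are purely illustrative and are not the study's calibrated parameters:

```python
from math import erfc, exp, sqrt

def ogata_banks(x, t, v, d, c0=1.0):
    """Relative concentration C(x, t) for 1-D advection-dispersion
    from a continuous source held at c0 at x = 0 (Ogata-Banks).
    x: distance (m), t: time (d), v: seepage velocity (m/d),
    d: longitudinal dispersion coefficient (m^2/d)."""
    denom = 2.0 * sqrt(d * t)
    term1 = erfc((x - v * t) / denom)
    term2 = exp(v * x / d) * erfc((x + v * t) / denom)
    return 0.5 * c0 * (term1 + term2)

# Concentration profile 50 days after the source is switched on
profile = [ogata_banks(x, t=50.0, v=0.1, d=1.0) for x in (0.0, 5.0, 10.0, 20.0)]
```

The profile decays monotonically with distance from the source, which is the natural-attenuation behavior the study's full 3-D models quantify.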
Procedia PDF Downloads 261
694 Measuring Human Perception and Negative Elements of Public Space Quality Using Deep Learning: A Case Study of Area within the Inner Road of Tianjin City
Authors: Jiaxin Shi, Kaifeng Hao, Qingfan An, Zeng Peng
Abstract:
Due to a lack of data sources and data processing techniques, it has always been difficult to quantify public space quality, which includes urban construction quality and how it is perceived by people, especially in large urban areas. This study proposes a quantitative research method based on consideration of the emotional and physical health effects of the built environment. It highlights the low quality of public areas in Tianjin, China, where there are many negative elements. Deep learning technology is then used to measure how people perceive urban areas. First, this work proposes a deep learning model that simulates how people perceive the quality of urban construction. Second, we perform semantic segmentation on street images to identify the visual elements influencing scene perception. Finally, this study correlates the scene perception score with the proportion of visual elements to determine the environmental elements that influence scene perception. Using a small-scale labeled Tianjin street view dataset and transfer learning, this study trains five negative-space discriminant models in order to explore the distribution of negative space along urban streets and possibilities for quality improvement. It then uses all Tianjin street-level imagery to make predictions and calculate the proportion of negative space. Visualizing the spatial distribution of negative space along the Tianjin Inner Ring Road reveals that the negative elements are mainly found close to the five key districts. The map of Tianjin was combined with the experimental data for the visual analysis. Based on the emotional assessment, the distribution of negative materials, and the direction of street guidelines, we suggest guidance content and design strategy points for addressing the negative phenomena in Tianjin street space along the two dimensions of perception and substance.
This work demonstrates the use of deep learning techniques to understand how people appreciate high-quality urban construction, and it contributes to both theory and practice in urban planning. It illustrates the connection between human perception and the actual physical public space environment, allowing researchers to propose urban interventions.
Keywords: human perception, public space quality, deep learning, negative elements, street images
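The "proportion of visual elements" step above can be illustrated with a toy sketch: given a semantic-segmentation label map for one street image, count the fraction of pixels per class, which is then correlated with the perception score across images. The class names and the tiny label grid are invented for illustration, not the study's label set:

```python
from collections import Counter

def element_proportions(label_map):
    """Fraction of pixels assigned to each semantic class in one image."""
    counts = Counter(px for row in label_map for px in row)
    total = sum(counts.values())
    return {cls: n / total for cls, n in counts.items()}

# Toy 2x3 label map standing in for a full segmentation output
labels = [
    ["sky",  "sky",  "wall"],
    ["road", "wall", "wall"],
]
props = element_proportions(labels)
```

In the full pipeline, each image yields one such proportion vector, and the per-class fractions become the predictors correlated with the scene perception score.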
Procedia PDF Downloads 115
693 Estimates of Freshwater Content from ICESat-2 Derived Dynamic Ocean Topography
Authors: Adan Valdez, Shawn Gallaher, James Morison, Jordan Aragon
Abstract:
Global climate change has raised atmospheric temperatures, contributing to rising sea levels, decreasing sea ice, and increased freshening of high-latitude oceans. This freshening has increased stratification, inhibiting local mixing and nutrient transport and modifying regional circulations in polar oceans. In recent years, the Western Arctic has seen an increase in freshwater volume at an average rate of 397 ± 116 km³/year. The majority of the freshwater volume resides in the Beaufort Gyre surface lens, driven by anticyclonic wind forcing, sea ice melt, and Arctic river runoff. The total climatological freshwater content is typically defined as water fresher than a salinity of 34.8. The near-isothermal nature of Arctic seawater and non-linearities in the equation of state for near-freezing waters result in a salinity-driven pycnocline, as opposed to the temperature-driven density structure seen at lower latitudes. In this study, we investigate the relationship between freshwater content and remotely sensed dynamic ocean topography (DOT). In-situ measurements of freshwater content are useful in providing information on the freshening rate of the Beaufort Gyre; however, their collection is costly and time consuming. NASA’s Advanced Topographic Laser Altimeter System (ATLAS)-derived dynamic ocean topography and Airborne Expendable CTD (AXCTD)-derived freshwater content are used to develop a linear regression model. In-situ data for the regression model were collected along the 150° West meridian, which typically defines the centerline of the Beaufort Gyre. Two freshwater content models are determined by integrating the freshwater volume between the surface and the isopycnals corresponding to reference salinities of 28.7 and 34.8. These salinities correspond to the winter pycnocline and the total climatological freshwater content, respectively.
Using each model, we determine the strength of the linear relationship between freshwater content and satellite-derived DOT. The results of this modeling study could provide a future predictive capability for freshwater volume changes in the Beaufort-Chukchi Sea using non-in-situ methods. Successful employment of ICESat-2’s DOT approximation of freshwater content could potentially reduce reliance on field deployment platforms to characterize physical ocean properties.
Keywords: ICESat-2, dynamic ocean topography, freshwater content, Beaufort Gyre
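The integration step described above (freshwater content as the vertical integral of (S_ref - S(z)) / S_ref from the surface down to the depth of the reference salinity) can be sketched with simple trapezoidal integration; the salinity profile below is invented, not an AXCTD cast from the study:

```python
def freshwater_content(depths_m, salinities, s_ref=34.8):
    """Freshwater content (meters of pure water) by trapezoidal
    integration of (s_ref - S(z)) / s_ref from the surface down to
    the first depth where salinity reaches s_ref."""
    fwc = 0.0
    for i in range(len(depths_m) - 1):
        if salinities[i] >= s_ref:
            break  # below the reference isohaline: no fresh contribution
        f_top = (s_ref - salinities[i]) / s_ref
        f_bot = (s_ref - min(salinities[i + 1], s_ref)) / s_ref
        fwc += 0.5 * (f_top + f_bot) * (depths_m[i + 1] - depths_m[i])
    return fwc

# Invented Beaufort Gyre style profile: a fresh surface lens over
# saltier water that reaches the 34.8 reference salinity at 100 m
fwc = freshwater_content([0.0, 50.0, 100.0], [28.0, 31.4, 34.8])
```

Regressing such per-cast freshwater content values against collocated DOT is the linear model the study builds.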
Procedia PDF Downloads 87
692 Referencing Anna: Findings From Eye-tracking During Dutch Pronoun Resolution
Authors: Robin Devillers, Chantal van Dijk
Abstract:
Children face ambiguities in everyday language use. Ambiguity in pronoun resolution can be particularly challenging, whereas adults can rapidly identify the antecedent of a pronoun. Two main factors underlie this process, namely the accessibility of the referent and the syntactic cues of the pronoun. After 200 ms, adults have integrated accessibility and syntactic constraints, while reducing cognitive effort by considering contextual cues. As children are still developing their cognitive capacity, they are not yet able to simultaneously assess and integrate accessibility, contextual cues, and syntactic information. As such, they may fail to identify the correct referent and fixate more on the competitor in comparison to adults. In this study, Dutch while-clauses were used to investigate the interpretation of pronouns by children. The aims are (a) to examine the extent to which 7- to 10-year-old children are able to utilize discourse and syntactic information during online and offline sentence processing and (b) to analyze the contribution of individual factors, including age, working memory, condition, and vocabulary. Adult and child participants are presented with filler items and while-clauses, the latter following a particular structure: ‘Anna and Sophie are sitting in the library. While Anna is reading a book, she is taking a sip of water.’ This sentence illustrates the ambiguous situation, as it is unclear whether ‘she’ refers to Anna or Sophie. In the unambiguous situation, either Anna or Sophie is replaced by a boy, such as ‘Peter’; the pronoun in the second clause then unambiguously refers to one of the characters due to its syntactic constraints. Children’s and adults’ responses were measured by means of a visual world paradigm, which presented two characters, of which one was the referent (the target) and the other the competitor.
A sentence was presented and followed by a question, which required the participant to choose which character was the referent. This paradigm thus yields an online (fixations) and an offline (accuracy) score. The findings will be analyzed using Generalized Additive Mixed Models, which allow a thorough estimation of the individual variables. The findings will contribute to the scientific literature in several ways: firstly, while-clauses have not been studied much, and their processing has not yet been characterized. Moreover, online pronoun resolution has not been investigated much in either children or adults, and this study will therefore contribute to the literature on pronoun resolution in both groups. Lastly, pronoun resolution has not yet been studied in Dutch, and as such, this study adds to the range of languages studied.
Keywords: pronouns, online language processing, Dutch, eye-tracking, first language acquisition, language development
Procedia PDF Downloads 100
691 Private Technology Parks–The New Engine for Innovation Development in Russia
Authors: K. Volkonitskaya, S. Lyapina
Abstract:
According to the National Monitoring Centre of innovation infrastructure, scientific and technical activities and regional innovation systems, 166 technology parks had been established in Russia by December 2014. A comparative analysis of technology park performance in Russia, the USA, Israel, and the European Union countries revealed significantly lower key performance indicators for Russian innovation infrastructure institutes. The largest deviations were found in the following indicators: new products and services launched, number of companies and jobs, and amount of venture capital invested. The lower performance indicators of Russian technology parks can be partly explained by slack demand for national high-tech products and services, a lack of qualified specialists in innovation management, and insufficient cooperation between different innovation infrastructure institutes. In spite of all the constraints in the innovation segment of the Russian economy, in 2010-2012 private investors for the first time proceeded to finance the building of technology parks. The general purpose of the research is to answer two questions: why, despite the significant investment risks, do private investors continue to implement such comprehensive infrastructure projects in Russia, and is the business model of a private technology park more efficient than the strategies of state innovation infrastructure institutes? The goal of the research was achieved by analyzing the business models of private technology parks in Moscow, Kaliningrad, Astrakhan, and Kazan. The research was conducted in two stages: an online survey of key performance indicators of private and state Russian technology parks, and in-depth interviews with top managers and investors who had already built private technology parks by 2014 or were going to complete the investment stage in 2014-2016. The anticipated results are intended to identify the reasons for efficient and inefficient technology park performance.
Furthermore, recommendations for improving the efficiency of state technology and industrial parks were formulated. In particular, the recommendations address the following issues: networking with other infrastructural institutes, services and infrastructure provided, mechanisms of public-private partnership, and investment attraction. In general, intensive study of private technology park performance and the development of effective mechanisms of state support can have a positive impact on the growth of the number of Russian technology, industrial, and science parks.
Keywords: innovation development, innovation infrastructure, private technology park, public-private partnership
Procedia PDF Downloads 436
690 A Study for Area-level Mosquito Abundance Prediction by Using Supervised Machine Learning Point-level Predictor
Authors: Theoktisti Makridou, Konstantinos Tsaprailis, George Arvanitakis, Charalampos Kontoes
Abstract:
In the literature, data-driven approaches for mosquito abundance prediction rely on supervised machine learning models trained with historical in-situ measurements. The drawback of this approach is that once a model is trained on point-level (specific x, y coordinates) measurements, its predictions again refer to point level. These point-level predictions reduce the applicability of such solutions, since many early-warning and mitigation applications need predictions for an area, such as a municipality or village. In this study, we apply a data-driven predictive model that relies on public open satellite Earth Observation and geospatial data and is trained with historical point-level in-situ measurements of mosquito abundance. We then propose a methodology to extend a point-level predictive model to a broader area-level prediction. Our methodology relies on randomly sampling the area of interest spatially (similar to a Poisson hard-core process), obtaining the EO and geomorphological information for each sample, making a point-wise prediction for each sample, and aggregating the predictions to represent the average mosquito abundance of the area. We quantify the performance of the transformation from point-level to area-level predictions and analyze it in order to understand which parameters have a positive or negative impact on it. The goal of this study is to propose a methodology that predicts the mosquito abundance of a given area by relying on point-level predictions and to provide qualitative insights regarding the expected performance of the area-level prediction. We applied our methodology to historical data (of Culex pipiens) for two areas of interest (the Veneto region of Italy and Central Macedonia in Greece). In both cases, the results were consistent.
The mean mosquito abundance of a given area can be estimated with accuracy similar to that of the point-level predictor, sometimes even better. The density of the samples used to represent an area has a positive effect on performance; in contrast, the raw number of sampling points is not informative without the size of the area. Additionally, we saw that the distance between the sampling points and the real in-situ measurements used for training did not strongly affect the performance.
Keywords: mosquito abundance, supervised machine learning, Culex pipiens, spatial sampling, West Nile virus, Earth Observation data
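The sampling-and-aggregation step described above can be sketched as follows; `predict_point` is a hypothetical stand-in for the trained point-level model, and the rejection loop is a simplified version of the hard-core spatial sampling, not the study's exact procedure:

```python
import random

def predict_point(x, y):
    """Hypothetical stand-in for the trained point-level abundance model."""
    return 10.0 + 0.5 * x - 0.2 * y

def area_abundance(bounds, n_samples, predictor, min_dist=0.0, seed=7):
    """Estimate mean mosquito abundance over a rectangular area by
    averaging point-level predictions at randomly sampled locations,
    rejecting candidates closer than min_dist to an accepted sample."""
    xmin, ymin, xmax, ymax = bounds
    rng = random.Random(seed)
    points = []
    while len(points) < n_samples:
        x, y = rng.uniform(xmin, xmax), rng.uniform(ymin, ymax)
        if all((x - px) ** 2 + (y - py) ** 2 >= min_dist ** 2
               for px, py in points):
            points.append((x, y))
    return sum(predictor(x, y) for x, y in points) / len(points)

estimate = area_abundance((0.0, 0.0, 1.0, 1.0), 2000, predict_point)
```

With enough samples the estimate converges to the true areal mean of the predictor, which mirrors the study's finding that sample density, not raw sample count, drives area-level accuracy.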
Procedia PDF Downloads 148
689 Evaluation of Coal Quality and Geomechanical Moduli Using Core and Geophysical Logs: Study from Middle Permian Barakar Formation of Gondwana Coalfield
Authors: Joyjit Dey, Souvik Sen
Abstract:
The Middle Permian Barakar Formation is the major economic coal-bearing unit of the vast east-west trending Damodar Valley basin of the Gondwana coalfield. Primary sedimentary structures were studied from the core holes, which represent four major facies groups: sandstone-dominated facies, sandstone-shale heterolith facies, shale facies, and coal facies. Eight major coal seams have been identified, with the bottommost seam being the thickest. The laterally continuous coal seams were deposited in the calm and quiet environment of extensive floodplain swamps. Channel sinuosity and lateral channel migration/avulsion resulted in lateral facies heterogeneity and coal splitting. Geophysical well logs (gamma-resistivity-density logs) have been used to establish the vertical and lateral correlation of the various litho-units field-wide, which reveals the predominance of repetitive fining-upward cycles. Well log data, being a permanent record, offer a strong foundation for log-based property evaluation and help in characterizing depositional units in terms of lateral and vertical heterogeneity. Low gamma, high resistivity, and low density are the typical coal seam signatures in geophysical logs. Here, we have used a density cutoff of 1.6 g/cc as the primary discriminator of coal, and the same has been employed to compute various coal assay parameters (ash, fixed carbon, moisture, volatile content, cleat porosity, and vitrinite reflectance (VRo%)), which were calibrated against laboratory-based measurements. The study shows that ash content and VRo% increase from west to east (towards the basin margin), while fixed carbon, moisture, and volatile content increase towards the west, indicating increased coal quality westwards. Seam-wise cleat porosity decreases from east to west; this would be an effect of overburden, as overburden pressure increases westward with the deepening of the basin, causing a thicker sediment package to be deposited on the western side of the study area.
Coal is a porous, viscoelastic material in which velocity and strain both change nonlinearly with stress, especially for stress applied perpendicular to the bedding plane. Usually, a coal seam has a high velocity contrast relative to its neighboring layers. Despite extensive discussion of the maceral and chemical properties of coal, its elastic characteristics have received comparatively little attention. The measurement of the elastic constants of coal presents many difficulties: sample-to-sample inhomogeneity, fragility, and the dependence of velocity on stress, orientation, humidity, and chemical content. In this study, the empirical equation VS = 0.80VP - 0.86 has been used to model shear velocity from compressional velocity, and the same has been used to compute various geomechanical moduli. Geomechanical analyses yield a Poisson's ratio of 0.348 against the coals. The average bulk modulus is 3.97 GPa, while the average shear modulus and Young's modulus are 1.34 and 3.59 GPa, respectively. These Middle Permian Barakar coals show an average uniaxial compressive strength (UCS) of 23.84 MPa, with a cohesive strength of 4.97 MPa and a friction coefficient of 0.46. The log-based proximate parameters and geomechanical moduli suggest a medium-volatile bituminous grade for the studied coal seams, which is confirmed by the laboratory-based core study as well.
Keywords: core analysis, coal characterization, geophysical log, geomechanical moduli
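The abstract's empirical shear-velocity relation, combined with standard isotropic-elasticity identities, can be sketched as follows. The input values below are hypothetical, and consistent units (velocities in km/s, density in g/cc, giving moduli in GPa) are assumed, since the abstract does not state them.

```python
def coal_elastic_moduli(vp_kms, rho_gcc):
    """Shear velocity from the study's empirical relation, then isotropic moduli.
    Assumed units: velocities in km/s, density in g/cc -> moduli in GPa."""
    vs = 0.80 * vp_kms - 0.86                                   # empirical Vs-Vp relation
    nu = (vp_kms**2 - 2 * vs**2) / (2 * (vp_kms**2 - vs**2))    # Poisson's ratio
    g = rho_gcc * vs**2                                         # shear modulus (GPa)
    k = rho_gcc * (vp_kms**2 - (4.0 / 3.0) * vs**2)             # bulk modulus (GPa)
    e = 2 * g * (1 + nu)                                        # Young's modulus (GPa)
    return {"vs": vs, "poisson": nu, "shear": g, "bulk": k, "young": e}

# Hypothetical coal-seam reading: Vp = 2.4 km/s, density = 1.5 g/cc
moduli = coal_elastic_moduli(2.4, 1.5)
```

The high Poisson's ratio reported for the coals (0.348) is consistent with this kind of low-Vs, low-density lithology.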
Procedia PDF Downloads 226
688 Sustainability Impact Assessment of Construction Ecology to Engineering Systems and Climate Change
Authors: Moustafa Osman Mohammed
Abstract:
The construction industry, as one of the main contributors to the depletion of natural resources, influences climate change. This paper discusses the incremental and evolutionary development of the proposed models for optimizing life-cycle analysis into an explicit strategy for evaluation systems. The main categories inevitably introduce uncertainties; the composite structure model (CSM) is taken up as an environmental management system (EMS) in the practical evaluation of small and medium-sized enterprises (SMEs). The model simplifies complex systems to reflect how natural systems' inputs, outputs, and outcomes influence the "framework measures," and gives a maximum-likelihood estimate of how elements are simulated over the composite structure. Traditional modeling knowledge is based on the physical dynamic and static patterns of the parameters that influence the environment. The approach unifies methods to demonstrate how construction-systems ecology is interrelated from a management perspective, reflecting the effects of engineering systems on ecology as unified technologies whose impact extends well beyond construction, e.g., energy systems. Sustainability broadens socioeconomic parameters into a practical science that meets recovery performance, while engineering reflects the generic control of protective systems. When the environmental model is employed properly, the management decision process in governments or corporations can precisely address policy for accomplishing strategic plans. The management and engineering limitation focuses on autocatalytic control as a closed cellular system that naturally balances anthropogenic insertions, or aggregates structural systems toward equilibrium as a steady, stable condition. Thereby, construction-systems ecology incorporates an engineering and management scheme as a midpoint between biotic and abiotic components to predict construction impacts.
The latter outcomes theory of environmental obligation suggests a procedure, method, or technique to be achieved in the sustainability impact of construction system ecology (SICSE), ultimately serving as a relative mitigation measure for deviation control.
Keywords: sustainability, environmental impact assessment, environmental management, construction ecology
Procedia PDF Downloads 393
687 Modeling the Acquisition of Expertise in a Sequential Decision-Making Task
Authors: Cristóbal Moënne-Loccoz, Rodrigo C. Vergara, Vladimir López, Domingo Mery, Diego Cosmelli
Abstract:
Our daily interaction with computational interfaces is replete with situations in which we go from inexperienced users to experts through self-motivated exploration of the same task. In many of these interactions, we must learn to find our way through a sequence of decisions and actions before obtaining the desired result. For instance, when drawing cash from an ATM, choices are presented in a step-by-step fashion so that a specific sequence of actions must be performed in order to produce the expected outcome. But, as they become experts in the use of such interfaces, do users adopt specific search and learning strategies? Moreover, if so, can we use this information to follow the process of expertise development and, eventually, predict future actions? This would be a critical step towards building truly adaptive interfaces that can facilitate interaction at different moments of the learning curve. Furthermore, it could provide a window into potential mechanisms underlying decision-making behavior in real-world scenarios. Here we tackle this question using a simple game interface that instantiates a 4-level binary decision tree (BDT) sequential decision-making task. Participants have to explore the interface and discover an underlying concept-icon mapping in order to complete the game. We develop a Hidden Markov Model (HMM)-based approach whereby a set of stereotyped, hierarchically related search behaviors act as hidden states. Using this model, we are able to track the decision-making process as participants explore, learn, and develop expertise in the use of the interface. Our results show that partitioning the problem space into such stereotyped strategies is sufficient to capture a host of exploratory and learning behaviors. Moreover, using the modular architecture of stereotyped strategies as a Mixture of Experts, we are able to simultaneously ask the experts about the user's most probable future actions.
We show that for those participants who learn the task, it becomes possible to predict their next decision, above chance, approximately halfway through the game. Our long-term goal is, on the basis of a better understanding of real-world decision-making processes, to inform the construction of interfaces that can establish dynamic conversations with their users in order to facilitate the development of expertise.
Keywords: behavioral modeling, expertise acquisition, hidden Markov models, sequential decision-making
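The core of the described tracking step, inferring which hidden search strategy a user is currently in from their observable actions, can be sketched with a plain HMM forward filter. The state names ("explore"/"exploit"), observations ("hit"/"miss"), and all probabilities below are hypothetical stand-ins, not the study's actual stereotyped strategies or fitted parameters.

```python
def filter_states(obs, states, start_p, trans_p, emit_p):
    """HMM forward filtering: normalized posterior over hidden strategies
    after each observation, plus the most probable strategy at each step."""
    belief = dict(start_p)                      # prior over hidden states
    history = []
    for o in obs:
        # predict (apply transition model) then update (weight by emission likelihood)
        belief = {s: emit_p[s][o] * sum(belief[p] * trans_p[p][s] for p in states)
                  for s in states}
        z = sum(belief.values())
        belief = {s: v / z for s, v in belief.items()}
        history.append(max(belief, key=belief.get))
    return history, belief

# Toy parameters: an "explore" strategy that rarely hits, an "exploit" one that mostly hits
states = ["explore", "exploit"]
start_p = {"explore": 0.9, "exploit": 0.1}
trans_p = {"explore": {"explore": 0.7, "exploit": 0.3},
           "exploit": {"explore": 0.1, "exploit": 0.9}}
emit_p = {"explore": {"hit": 0.3, "miss": 0.7},
          "exploit": {"hit": 0.9, "miss": 0.1}}
history, belief = filter_states(["miss", "miss", "hit", "hit", "hit"],
                                states, start_p, trans_p, emit_p)
```

In the paper's Mixture-of-Experts reading, the final `belief` would weight each expert's prediction of the user's next action.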
Procedia PDF Downloads 252
686 Environment-Related Mortality Rates through Artificial Intelligence Tools
Authors: Stamatis Zoras, Vasilis Evagelopoulos, Theodoros Staurakas
Abstract:
The association between elevated air pollution levels, extreme climate conditions (temperature, particulate matter, ozone levels, etc.), and health consequences has recently been the focus of a significant number of studies. It varies depending on the time of year, whether during hot or cold periods, and particularly when extreme air pollution and weather events are observed, e.g., air pollution episodes and persistent heatwaves. It also varies spatially, due to the different effects of air quality and climate extremes on human health in metropolitan versus rural areas. An air pollutant concentration and a climate extreme take a different form of impact depending on whether the focus area is the countryside or the urban environment. In the built environment, the effects of climate extremes are driven through the formed microclimate, which must be studied more thoroughly. Variables such as biology and age group may be implicated by different environmental factors, such as increased air pollution/noise levels and the overheating of buildings, in comparison to rural areas. Gridded air quality and climate variables derived from the land surface observation network of West Macedonia in Greece will be analysed against mortality data in a spatial format for the region of West Macedonia. Artificial intelligence (AI) tools will be used for data correction and for predicting health deterioration with climatic conditions and air pollution at the local scale. This will reveal the implications of the built environment relative to the countryside. The air pollution and climatic data have been collected from meteorological stations and span the period from 2000 to 2009. These will be projected against the mortality rate data in daily, monthly, seasonal, and annual grids.
The grids will be operated as AI-based warning models for decision makers in order to map health conditions in rural and urban areas, ensuring improved awareness in the healthcare system by taking into account the predicted changing climate conditions. Gridded data of climate conditions and air quality levels against mortality rates will be presented through AI-analysed gridded indicators of the implicated variables. An AI-based gridded warning platform at local scales is then developed as a future awareness platform at the regional level.
Keywords: air quality, artificial intelligence, climatic conditions, mortality
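The gridding-and-warning step described above can be illustrated with a minimal sketch: daily station readings are aggregated into monthly means per grid cell, and cells exceeding a threshold are flagged. The record schema, the pollutant (PM), and the threshold value are all illustrative assumptions, not the study's actual data model.

```python
from collections import defaultdict

def monthly_grid(readings, pm_threshold=50.0):
    """Aggregate daily (cell, 'YYYY-MM-DD', pm) readings into monthly means per
    grid cell, then flag cells whose monthly mean exceeds a warning threshold."""
    acc = defaultdict(lambda: [0.0, 0])         # key -> [running sum, count]
    for cell, date, pm in readings:
        key = (cell, date[:7])                  # (grid cell, 'YYYY-MM')
        acc[key][0] += pm
        acc[key][1] += 1
    means = {k: total / n for k, (total, n) in acc.items()}
    warnings = {k for k, m in means.items() if m > pm_threshold}
    return means, warnings

# Illustrative readings: one urban and one rural cell in January 2005
readings = [("urban", "2005-01-01", 70.0),
            ("urban", "2005-01-02", 50.0),
            ("rural", "2005-01-01", 20.0)]
means, warnings = monthly_grid(readings)
```

The same aggregation extends to seasonal and annual grids by changing the key, and the resulting indicators are what an AI model would be trained against alongside mortality rates.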
Procedia PDF Downloads 113
685 Personality Moderates the Relation Between Mother's Emotional Intelligence and Young Children's Emotion Situation Knowledge
Authors: Natalia Alonso-Alberca, Ana I. Vergara
Abstract:
From the very first years of their lives, children are confronted with situations in which they need to deal with emotions. The family provides the first emotional experiences, and it is in the family context that children usually take their first steps towards acquiring emotion knowledge. Parents play a key role in this important task, helping their children develop the emotional skills that they will need in challenging situations throughout their lives. Specifically, mothers are models imitated by their children. They create specific spatial and temporal contexts in which children learn about emotions, their causes, consequences, and complexity. This occurs not only through what mothers say or do directly to the child; rather, it occurs, to a large extent, through the example that they set using their own emotional skills. The aim of the current study was to analyze how maternal abilities to perceive and manage emotions influence children's emotion knowledge, specifically their emotion situation knowledge, taking into account the role played by the mother's personality and the time spent together, and controlling for the effects of age, sex, and the child's verbal abilities. Participants were 153 children from 4 schools in Spain, and their mothers. The children's (41.8% girls) age range was 35 to 72 months. The mothers' (N = 140) mean age was 38.7 years (range 27-49). Twelve mothers had more than one child participating in the study. The main variables were the child's emotion situation knowledge (ESK), measured by the Emotion Matching Task (EMT), and receptive language, measured using the Picture Vocabulary Test. The mothers' emotional intelligence (EI), assessed with the Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT), and personality, assessed with the Big Five Inventory, were also analyzed.
The results showed that the predictive power of maternal emotional skills on ESK was moderated by the mother's personality, affecting both the direction and size of the relationships detected: low neuroticism and low openness to experience led to a positive influence of maternal EI on children's ESK, while high levels of these personality dimensions resulted in a negative influence on the child's ESK. The time that the mother and the child spend together was revealed as a positive predictor of this EK, although it did not moderate the influence of the mother's EI on the child's ESK. In light of the results, we can infer that maternal EI is linked to children's emotional skills, though a high level of maternal EI does not necessarily predict a greater degree of emotion knowledge in children, which seems rather to depend on specific personality profiles. The results of the current study indicate that a good level of maternal EI does not guarantee that children will learn the emotional skills that foster prosocial adaptation; rather, EI must be accompanied by certain psychological characteristics (personality traits in this case).
Keywords: emotional intelligence, emotion situation knowledge, mothers, personality, young children
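The reported moderation, where the direction of the EI-to-ESK link flips with the mother's neuroticism, corresponds to a standard interaction model, ESK = b0 + b1*EI + b2*P + b3*(EI*P), whose conditional ("simple") slope of EI is b1 + b3*P. The coefficients and moderator values below are purely hypothetical, chosen only to illustrate how a negative interaction reverses the sign of the effect.

```python
def ei_simple_slope(b1, b3, moderator):
    """Conditional effect of maternal EI on child ESK for the interaction model
    ESK = b0 + b1*EI + b2*P + b3*EI*P, i.e. d(ESK)/d(EI) = b1 + b3*P."""
    return b1 + b3 * moderator

# Hypothetical coefficients: positive main effect of EI, negative EI x neuroticism interaction
b1, b3 = 0.40, -0.15
slope_low_n = ei_simple_slope(b1, b3, 1.0)    # positive: EI helps at low neuroticism
slope_high_n = ei_simple_slope(b1, b3, 4.0)   # negative: the effect reverses at high neuroticism
```

This is why a main-effects-only model would miss the study's key finding: the average effect of EI can be near zero while the conditional effects at low and high neuroticism are substantial and opposite in sign.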
Procedia PDF Downloads 134
684 Market Solvency Capital Requirement Minimization: How Non-Linear Solvers Provide Portfolios Complying with Solvency II Regulation
Authors: Abraham Castellanos, Christophe Durville, Sophie Echenim
Abstract:
In this article, a portfolio optimization problem is solved in a Solvency II context: it illustrates how advanced optimization techniques can help to tackle complex operational pain points around the monitoring, control, and stability of the Solvency Capital Requirement (SCR). The market SCR of a portfolio is calculated as a combination of SCR sub-modules. These sub-modules are the results of stress tests on interest rate, equity, property, credit, and FX factors, as well as concentration on counterparties. The market SCR is non-convex and non-differentiable, which does not make it a natural candidate as an optimization criterion. In the SCR formulation, correlations between sub-modules are fixed, whereas risk-driven portfolio allocation is usually driven by the dynamics of the actual correlations. Implementing a portfolio construction approach that is efficient from both a regulatory and an economic standpoint is not straightforward. Moreover, the challenge for insurance portfolio managers is not only to achieve a minimal SCR to reduce non-invested capital but also to ensure the stability of the SCR. Some optimizations have already been performed in the literature, simplifying the standard formula into a quadratic function, but to our knowledge, this is the first time that the standard formula of the market SCR has been used in an optimization problem. Two solvers are combined: a bundle algorithm for convex non-differentiable problems, and a BFGS (Broyden-Fletcher-Goldfarb-Shanno)-SQP (Sequential Quadratic Programming) algorithm to cope with non-convex cases. A market SCR minimization is then performed with historical data. This approach results in a significant reduction of the capital requirement compared to a classical Markowitz approach based on historical volatility.
A comparative analysis of different optimization models (equi-risk-contribution portfolio, minimum-volatility portfolio, and minimum-value-at-risk portfolio) is performed, and the impact of these strategies on risk measures, including the market SCR and its sub-modules, is evaluated. A lack of diversification of the market SCR is observed, especially for equities. This was expected, since the market SCR strongly penalizes this type of financial instrument. It is shown that this direct effect of the regulation can be attenuated by implementing constraints in the optimization process or by minimizing the market SCR together with the historical volatility, proving the interest of a portfolio construction approach that can incorporate such features. The results are further explained by the market SCR modelling.
Keywords: financial risk, numerical optimization, portfolio management, solvency capital requirement
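The combination of sub-modules into the market SCR follows the standard formula's square-root aggregation rule, SCR_mkt = sqrt(sum_ij rho_ij * SCR_i * SCR_j), with a fixed correlation matrix, which is also the source of the non-smoothness once each SCR_i is itself a max over stress scenarios. A minimal sketch of the aggregation step; the sub-module values and the simplified correlation matrix below are illustrative, not the full regulatory parameters.

```python
import math

def aggregate_scr(scr, corr):
    """Standard-formula aggregation of sub-module SCRs:
    SCR_mkt = sqrt(sum_ij rho_ij * SCR_i * SCR_j)."""
    names = list(scr)
    return math.sqrt(sum(corr[a][b] * scr[a] * scr[b]
                         for a in names for b in names))

# Illustrative sub-module SCRs (e.g., in millions) and a simplified correlation matrix
sub = {"interest": 30.0, "equity": 50.0, "spread": 20.0}
corr = {
    "interest": {"interest": 1.0,  "equity": 0.5,  "spread": 0.5},
    "equity":   {"interest": 0.5,  "equity": 1.0,  "spread": 0.75},
    "spread":   {"interest": 0.5,  "equity": 0.75, "spread": 1.0},
}
market_scr = aggregate_scr(sub, corr)   # below the undiversified sum of 100
```

Because the correlations are below 1, the aggregate is smaller than the plain sum of the sub-modules; the diversification benefit (and its concentration in equities) is exactly what the comparative analysis above examines.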
Procedia PDF Downloads 117
683 CFD Simulation of Spacer Effect on Turbulent Mixing Phenomena in Subchannels of Boiling Nuclear Assemblies
Authors: Shashi Kant Verma, S. L. Sinha, D. K. Chandraker
Abstract:
Numerical simulations of selected subchannel tracer (potassium nitrate) experiments have been performed to study the capabilities of state-of-the-art Computational Fluid Dynamics (CFD) codes. The CFD methodology can be useful for investigating the spacer effect on turbulent mixing, predicting turbulent flow behavior such as dimensionless mixing scalar distributions, radial velocity, and vortices in the nuclear fuel assembly. The Gibson and Launder (GL) Reynolds stress model (RSM) has been selected as the primary turbulence model for the simulation case, as it has previously been found reasonably accurate in predicting flows inside rod bundles. As a comparison, the case is also simulated using the standard k-ε turbulence model that is widely used in industry. Despite being an isotropic turbulence model, it has also been used in the modeling of flow in rod bundles and reproduces lateral velocities after thorough mixing of the coolant fairly well. Both models have been solved numerically to obtain the fully developed isothermal turbulent flow in a 30° segment of a 54-rod bundle. The numerical simulation has been carried out to study the natural mixing of a tracer (passive scalar), characterizing the growth of turbulent diffusion in the injected subchannel and, afterwards, the cross-mixing between adjacent subchannels. The mixing with water has been numerically studied by means of steady-state CFD simulations with the commercial code STAR-CCM+. Flow enters the computational domain through mass inflows at the three subchannel faces. A turbulence intensity of 1% and a hydraulic diameter of 5.9 mm were used at the inlet. The passive scalar (potassium nitrate) is injected at a mass fraction of 5.536 ppm at subchannel 2 (upstream of the mixing section). Flow exits the domain through the pressure outlet boundary (0 Pa), and the reference pressure is 1 atm.
Simulation results have been extracted at different locations of the mixing zone and the downstream zone. The local mass fraction shows uniform mixing. The effect of the applied turbulence model is nearly negligible just before the outlet plane, because the distributions look almost identical and the flow is fully developed. On the other hand, the dimensionless mixing scalar distributions change noticeably in quantitative terms, which is visible in the different scales of the colour bars.
Keywords: single-phase flow, turbulent mixing, tracer, subchannel analysis
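The dimensionless mixing scalar referred to above is typically a normalized tracer concentration. One common normalization (the abstract does not state the exact definition used, so this is an assumption) maps the injected subchannel to 1 and the unmixed subchannels to 0, with the scalar approaching the mass-flow-weighted fully mixed value far downstream:

```python
def mixing_scalar(c_local, c_background, c_injected):
    """One common normalization of the dimensionless mixing scalar:
    1 at the injection inlet, 0 in the unmixed subchannels."""
    return (c_local - c_background) / (c_injected - c_background)

def fully_mixed(concs, mass_flows):
    """Mass-flow-weighted mean concentration the scalar approaches downstream."""
    return sum(c * w for c, w in zip(concs, mass_flows)) / sum(mass_flows)

# Illustrative inlet state: tracer (5.536 ppm) injected into 1 of 3 equal-flow subchannels
c_mixed = fully_mixed([5.536, 0.0, 0.0], [1.0, 1.0, 1.0])
theta_inlet = mixing_scalar(5.536, 0.0, 5.536)   # 1.0 in the injected subchannel
```

Plotting this scalar at successive axial stations is what reveals the spacer-enhanced cross-mixing between adjacent subchannels.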
Procedia PDF Downloads 208
682 Structural Optimization, Design, and Fabrication of Dissolvable Microneedle Arrays
Authors: Choupani Andisheh, Temucin Elif Sevval, Bediz Bekir
Abstract:
Due to their various advantages over many other drug delivery systems, such as hypodermic injections and oral medications, microneedle arrays (MNAs) are a promising drug delivery system. To achieve enhanced performance of the MNs, it is crucial to develop numerical models, optimization methods, and simulations. Accordingly, in this work, the optimized design of dissolvable MNAs, as well as their manufacturing, is investigated. For this purpose, a mechanical model of a single MN, having the geometry of an obelisk, is developed using commercial finite element software. The model considers the condition in which the MN is under pressure at the tip caused by the reaction force when penetrating the skin. Then, a multi-objective optimization based on the non-dominated sorting genetic algorithm II (NSGA-II) is performed to obtain geometric properties such as needle width, tip (apex) angle, and base fillet radius. The objective of the optimization study is to achieve painless and effortless penetration into the skin while minimizing mechanical failure caused by the maximum stress occurring throughout the structure. Based on the obtained optimal design parameters, master (male) molds are then fabricated from PMMA using a mechanical micromachining process. This fabrication method is selected mainly for its geometric capability, production speed, production cost, and the variety of materials that can be used. To remove any chip residues, the master molds are cleaned ultrasonically. These fabricated master molds can then be used repeatedly to fabricate polydimethylsiloxane (PDMS) production (female) molds through a micro-molding approach. Finally, polyvinylpyrrolidone (PVP), a dissolvable polymer, is cast into the production molds under vacuum to produce the dissolvable MNAs. This fabrication methodology can also be used to fabricate MNAs that include bioactive cargo.
To characterize and demonstrate the performance of the fabricated needles, (i) scanning electron microscope images are taken to show the accuracy of the fabricated geometries, and (ii) in-vitro piercing tests are performed on artificial skin. It is shown that optimized MN geometries can be precisely fabricated using the presented fabrication methodology and that the fabricated MNAs effectively pierce the skin without failure.
Keywords: microneedle, microneedle array fabrication, micro-manufacturing, structural optimization, finite element analysis
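The ranking step at the heart of NSGA-II, sorting candidate needle designs into successive Pareto fronts over competing objectives, can be sketched in a few lines. The two objectives used here (insertion force and peak stress, both to be minimized) are illustrative placeholders for the paper's actual objective functions, and this is only the sorting step, not the full genetic algorithm.

```python
def dominates(a, b):
    """a dominates b (minimization): no worse in every objective, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_fronts(points):
    """Rank points into successive Pareto fronts (NSGA-II's core sorting step)."""
    fronts, remaining = [], list(range(len(points)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# Hypothetical candidate designs as (insertion force, peak stress) pairs
designs = [(1.0, 5.0), (2.0, 2.0), (5.0, 1.0), (3.0, 3.0), (6.0, 6.0)]
fronts = non_dominated_fronts(designs)
```

Designs in the first front are the trade-off set from which the final geometry (needle width, apex angle, fillet radius) is chosen; NSGA-II additionally uses crowding distance to spread solutions along that front.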
Procedia PDF Downloads 113