Search results for: stochastic modeling
157 Rheological Characterization of Polysaccharide Extracted from Camelina Meal as a New Source of Thickening Agent
Authors: Mohammad Anvari, Helen S. Joyner (Melito)
Abstract:
Camelina sativa (L.) Crantz is an oilseed crop currently used for the production of biofuels. However, the low price of diesel and gasoline has made camelina an unprofitable crop for farmers, leading to declining camelina production in the US. Hence, the ability to utilize the camelina byproduct (defatted meal) after oil extraction would be a pivotal factor in promoting the economic value of the plant. Camelina defatted meal is rich in proteins and polysaccharides. The great diversity in polysaccharide structural features provides a unique opportunity for use in food formulations as thickeners, gelling agents, emulsifiers, and stabilizers. There is currently a great degree of interest in the study of novel plant polysaccharides, as they can be derived from readily accessible sources and have potential applications in a wide range of food formulations. However, there are no published studies on the polysaccharide extracted from camelina meal, and its potential industrial applications remain largely underexploited. Rheological properties are a key functional feature of polysaccharides and are highly dependent on material composition and molecular structure. Therefore, the objective of this study was to evaluate the rheological properties of the polysaccharide extracted from camelina meal under different conditions to obtain insight into the molecular characteristics of the polysaccharide. Flow and dynamic mechanical behaviors were determined at different temperatures (5-50°C) and concentrations (1-6% w/v). Additionally, the zeta potential of the polysaccharide dispersion was measured at different pHs (2-11) and a biopolymer concentration of 0.05% (w/v). Shear rate sweep data revealed that the camelina polysaccharide displayed shear-thinning (pseudoplastic) behavior, which is typical of polymer systems. The polysaccharide dispersion (1% w/v) showed no significant changes in viscosity with temperature, which makes it a promising ingredient in products requiring texture stability over a range of temperatures. However, the viscosity increased significantly with increased concentration, indicating that camelina polysaccharide can be used in food products at different concentrations to produce a range of textures. Dynamic mechanical spectra showed similar trends. Temperature had little effect on the viscoelastic moduli. However, the moduli were strongly affected by concentration: samples exhibited concentrated solution behavior at low concentrations (1-2% w/v) and weak gel behavior at higher concentrations (4-6% w/v). These rheological properties can be used for designing and modeling liquid and semisolid products. Zeta potential affects the intensity of molecular interactions and molecular conformation and can alter the solubility, stability, and, eventually, the functionality of materials as their environment changes. In this study, the zeta potential value decreased significantly from 0.0 to -62.5 as pH increased from 2 to 11, indicating that pH may affect the functional properties of the polysaccharide. The results obtained in the current study showed that camelina polysaccharide has significant potential for application in various food systems and can be introduced as a novel anionic thickening agent with unique properties.
Keywords: Camelina meal, polysaccharide, rheology, zeta potential
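The shear-thinning behavior reported above is commonly summarized by fitting a power-law (Ostwald-de Waele) model to the shear rate sweep. The following is a minimal illustrative sketch, not the authors' analysis; the shear-rate and viscosity arrays are hypothetical placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

# Ostwald-de Waele (power-law) model: eta = K * gamma_dot**(n - 1)
def power_law_viscosity(gamma_dot, K, n):
    return K * gamma_dot ** (n - 1.0)

# Hypothetical shear-rate sweep (1/s) and apparent viscosities (Pa.s)
gamma_dot = np.logspace(-1, 3, 20)
eta_measured = 2.5 * gamma_dot ** (0.45 - 1.0) * (1 + 0.05 * np.random.randn(20))

# Fit the consistency index K and flow behavior index n
(K, n), _ = curve_fit(power_law_viscosity, gamma_dot, eta_measured, p0=(1.0, 0.5))
print(f"K = {K:.2f} Pa.s^n, n = {n:.2f}")  # n < 1 indicates shear-thinning behavior
```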
156 The Effect of Teachers' Personal Values on the Perceptions of the Effective Principal and Student in School
Authors: Alexander Zibenberg, Rima’a Da’As
Abstract:
According to the authors' knowledge, individuals are naturally inclined to classify people as leaders and followers. Individuals utilize cognitive structures or prototypes specifying the traits and abilities that characterize the effective leader (implicit leadership theories) and the effective follower in an organization (implicit followership theories). Thus, the present study offers insights into understanding how teachers' personal values (self-enhancement and self-transcendence) explain the preference for styles of the effective leader (i.e., principal) and assumptions about the traits and behaviors that characterize effective followers (i.e., students). Beyond the direct effect on perceptions of effective types of leader and follower, the present study argues that values may also interact with organizational and personal contexts in influencing perceptions. Thus, the authors suggest that teachers' managerial position may moderate the relationships between personal values and perceptions of the effective leader and follower. Specifically, two key questions are addressed in the present research: (1) Is there a relationship between personal values and perceptions of the effective leader and effective follower? and (2) Are these relationships stable, or could they change across different contexts? Two hundred fifty-five Israeli teachers participated in this study, completing questionnaires about the effective student and the effective principal. Results of structural equation modeling (SEM) with maximum likelihood estimation showed the following. First, the model fit the data well. Second, the researchers found a positive relationship between self-enhancement and the anti-prototype of the effective principal and the anti-prototype of the effective student. The relationships between the self-transcendence value and both perceptions were significant as well: self-transcendence was positively related to the way the teacher perceives the prototype of the effective principal and the effective student. In addition, the authors found that teachers' managerial position moderates these relationships. The article contributes to the literature both on perceptions and on personal values. Although several earlier studies explored issues of implicit leadership theories and implicit followership theories, personality characteristics (values) have garnered less attention in this matter. This study shows that personal values, which are deeply rooted, abstract motivations that guide, justify, or explain attitudes, norms, opinions, and actions, explain differences in perceptions of the effective leader and follower. The results advance the theoretical understanding of the relationship between personal values and individuals' perceptions in organizations. An additional contribution of this study is the use of the teacher's managerial position to explain a potential boundary condition of the translation of personal values into outcomes. The findings suggest that through the management process in the organization, teachers acquire knowledge and skills which augment their ability (beyond their personal values) to predict perceptions of ideal types of principal and student. The study elucidates the unique role of personal values in understanding organizational thinking. It seems that personal values might explain differences in individual preferences for the organizational paradigm (mechanistic vs. organic).
Keywords: implicit leadership theories, implicit followership theories, organizational paradigms, personal values
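The moderation effect described above (managerial position moderating the value-perception link) is often probed, outside of a full SEM, with an interaction term in a regression model. The sketch below is a simplified stand-in using ordinary least squares rather than the authors' SEM; the variable names (self_enhancement, managerial_position, anti_prototype_score) and the data file are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dataset: one row per teacher
df = pd.read_csv("teacher_values_survey.csv")  # assumed file name

# The interaction term tests whether managerial position moderates the
# relationship between self-enhancement and the anti-prototype rating.
model = smf.ols(
    "anti_prototype_score ~ self_enhancement * managerial_position",
    data=df,
).fit()
print(model.summary())  # a significant interaction coefficient indicates moderation
```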
155 3D CFD Model of Hydrodynamics in Lowland Dam Reservoir in Poland
Authors: Aleksandra Zieminska-Stolarska, Ireneusz Zbicinski
Abstract:
Introduction: The objective of the present work was to develop and validate a 3D CFD numerical model for simulating flow through a 17-kilometer-long dam reservoir of complex bathymetry. In contrast to flowing waters, dam reservoirs were not emphasized in the early years of water quality modeling, as this issue had never been the major focus of urban development. Starting in the 1970s, however, it was recognized that natural and man-made lakes are equally, if not more, important than estuaries and rivers from a recreational standpoint. The Sulejow Reservoir (Central Poland) was selected as the study area as representative of many lowland dam reservoirs and due to the availability of a large database of the ecological, hydrological, and morphological parameters of the lake. Method: 3D, 2-phase and 1-phase CFD models were analysed to determine hydrodynamics in the Sulejow Reservoir. Development of a 3D, 2-phase CFD model of flow requires the construction of a mesh with millions of elements and overcoming serious convergence problems. The 1-phase CFD model, in contrast to the 2-phase model, excludes only the dynamics of waves from the simulations, which should not significantly change the water flow pattern in the case of lowland dam reservoirs. In the 1-phase CFD model, the phases (water-air) are separated by a plate, which allows calculation of the flow of one phase (water) only. As the wind affects the velocity of flow, to take into account the effect of the wind on hydrodynamics in the 1-phase CFD model, the plate must move with speed and direction equal to the speed and direction of the upper water layer. To determine the velocity at which the plate moves on the water surface and interacts with the underlying layers of water, and to apply this value in the 1-phase CFD model, a 2D, 2-phase model was elaborated. Result: The model was verified on the basis of extensive flow measurements (StreamPro ADCP, USA). Excellent agreement (an average error of less than 10%) between computed and measured velocity profiles was found. As a result of this work, the following main conclusions can be presented: • The results indicate that the flow field in the Sulejow Reservoir is transient in nature, with swirl flows in the lower part of the lake. Recirculating zones, with sizes of up to half a kilometer, may increase water retention time in this region. • The results of the simulations confirm the pronounced effect of the wind on the development of the water circulation zones in the reservoir, which might affect the accumulation of nutrients in the epilimnion layer and result, e.g., in algae blooms. Conclusion: The resulting model is accurate, and the methodology developed in the framework of this work can be applied to all types of storage reservoir configurations, characteristics, and hydrodynamic conditions. Large recirculating zones in the lake, which increase water retention time and might affect the accumulation of nutrients, were detected. An accurate CFD model of hydrodynamics in a large water body could help in the development of water quality forecasts, especially in terms of eutrophication and water management of big water bodies.
Keywords: CFD, mathematical modelling, dam reservoirs, hydrodynamics
154 Monitoring Future Climate Changes Pattern over Major Cities in Ghana Using Coupled Modeled Intercomparison Project Phase 5, Support Vector Machine, and Random Forest Modeling
Authors: Stephen Dankwa, Zheng Wenfeng, Xiaolu Li
Abstract:
Climate change has recently been gaining the attention of many countries across the world. Climate change, also known as global warming and referring to the increase in average surface temperature, has been a concern to the Environmental Protection Agency of Ghana. Recently, Ghana has become vulnerable to the effects of climate change as a result of the dependence of the majority of the population on agriculture. The clearing of trees to grow crops and the burning of charcoal in the country have been contributing factors to the present rise in temperature, owing to the release of carbon dioxide and greenhouse gases into the air. Recently, petroleum stations across the cities have caught fire due to these climate changes, which have left Ghana poorly positioned to withstand such climate events. As a result, the significance of this research paper is to project what the rise in average surface temperature will be like at the end of the mid-21st century if agriculture and deforestation are allowed to continue for some time in the country. This study uses the Coupled Model Intercomparison Project Phase 5 (CMIP5) experiment RCP 8.5 model output data to monitor future climate changes from 2041-2050, at the end of the mid-21st century, over the ten (10) major cities (Accra, Bolgatanga, Cape Coast, Koforidua, Kumasi, Sekondi-Takoradi, Sunyani, Ho, Tamale, Wa) in Ghana. In the Support Vector Machine and Random Forest models, in which each city's temperature is expressed as a function of heat wave metrics (minimum temperature, maximum temperature, mean temperature, heat wave duration, and number of heat waves), the models provided more than 50% accuracy in predicting and monitoring the pattern of surface air temperature. The findings were that the near-surface air temperature will rise between 1°C-2°C (degrees Celsius) over the coastal cities (Accra, Cape Coast, Sekondi-Takoradi). The temperature over Kumasi, Ho, and Sunyani will rise by 1°C by the end of 2050. In Koforidua, it will rise between 1°C-2°C. The temperature will rise in Bolgatanga, Tamale, and Wa by 0.5°C by 2050. This indicates how the coastal and southern parts of the country are becoming hotter compared with the north, even though the northern part is the hottest. During heat waves from 2041-2050, Bolgatanga, Tamale, and Wa will experience the highest mean daily air temperature, between 34°C-36°C. Kumasi, Koforidua, and Sunyani will experience about 34°C. The coastal cities (Accra, Cape Coast, Sekondi-Takoradi) will experience below 32°C. Even though the coastal cities will experience the lowest mean temperature, they will have the highest number of heat waves, about 62. The majority of the heat waves will last between 2 to 10 days, with a maximum of 30 days. The surface temperature will continue to rise by the end of the mid-21st century (2041-2050) over the major cities in Ghana, and this needs to be brought to the attention of the Environmental Protection Agency of Ghana in order to mitigate the problem.
Keywords: climate changes, CMIP5, Ghana, heat waves, random forest, SVM
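To illustrate the kind of supervised setup described above (heat wave metrics as predictors of near-surface temperature), a minimal scikit-learn sketch follows. It is not the authors' pipeline; the feature names, file name, and train/test split are assumptions for illustration.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

# Hypothetical table: one row per city-year with heat wave metrics from CMIP5 RCP 8.5 output
df = pd.read_csv("ghana_cmip5_rcp85_metrics.csv")  # assumed file name
features = ["t_min", "t_max", "t_mean", "heatwave_duration", "heatwave_count"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["near_surface_temp"], test_size=0.3, random_state=0
)

for name, model in [("Random Forest", RandomForestRegressor(n_estimators=200, random_state=0)),
                    ("SVR", SVR(kernel="rbf", C=10.0))]:
    model.fit(X_train, y_train)
    print(name, "R^2:", round(model.score(X_test, y_test), 3))
```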
153 Multilevel Regression Model - Evaluate Relationship Between Early Years’ Activities of Daily Living and Alzheimer’s Disease Onset Accounting for Influence of Key Sociodemographic Factors Using a Longitudinal Household Survey Data
Authors: Linyi Fan, C.J. Schumaker
Abstract:
Background: Biomedical efforts to treat Alzheimer’s disease (AD) have typically produced mixed to poor results, while more lifestyle-focused treatments such as exercise may fare better than existing biomedical treatments. A few promising studies have indicated that activities of daily living (ADL) may be a useful way of predicting AD. However, the existing cross-sectional studies fail to show how functional issues such as ADL in early years predict AD and how social factors influence health, either in addition to or in interaction with individual risk factors. This study would help better screening and early treatment for the elderly population and healthcare practice. The findings have significance academically and practically in terms of creating positive social change. Methodology: The purpose of this quantitative historical, correlational study was to examine the relationship between early years’ ADL and the development of AD in later years. The study included 4,526 participants derived from the RAND HRS dataset. The Health and Retirement Study (HRS) is a longitudinal household survey data set that is available for research on retirement and health among the elderly in the United States. The sample was selected by completion of the survey questionnaire about AD and dementia. The variable that indicates whether the participant has been diagnosed with AD was the dependent variable. The ADL indices and changes in ADL were the independent variables. A four-step multilevel regression model approach was utilized to address the research questions. Results: Among the 4,526 patients who completed the AD and dementia questionnaire, 144 (3.1%) were diagnosed with AD. Of the 4,526 participants, 3,465 (76.6%) had a high school or higher education degree, and 4,074 (90.0%) were above the poverty threshold. The model evaluated the effect of ADL and change in ADL on the onset of AD in late years while allowing the intercept of the model to vary by level of education. The results suggested that the only significant predictor of the onset of AD was change in early years’ ADL (b = 20.253, z = 2.761, p < .05). However, the results of the sensitivity analysis (b = 7.562, z = 1.900, p = .058), which included more control variables and increased the observation period of ADL, did not support this finding. The model also estimated whether the variances of the random effect vary by Level-2 variables. The results suggested that the variances associated with random slopes were approximately zero, suggesting that the relationships between early years’ ADL and AD onset were not influenced by sociodemographic factors. Conclusion: The findings indicated that an increase in change in ADL leads to an increase in the probability of AD onset in the future. However, this finding was not supported in a model with a broader observation period. The study also failed to reject the hypothesis that the sociodemographic factors explained significant amounts of variance in the random effect. Recommendations were then made for future research and practice based on these limitations and the significance of the findings.
Keywords: Alzheimer’s disease, epidemiology, moderation, multilevel modeling
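A multilevel specification of the kind described (random intercept varying by education level) can be sketched with statsmodels' mixed linear model. Note that for a binary AD diagnosis the study would in practice use a multilevel logistic link, so the snippet below is a structural illustration only; the column names and data file are assumptions.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical extract of the RAND HRS data: one row per respondent
df = pd.read_csv("hrs_adl_extract.csv")  # assumed file name

# Random intercept grouped by education level (Level-2 unit),
# with early-years ADL and change in ADL as Level-1 predictors.
model = smf.mixedlm("ad_onset ~ adl_index + adl_change", df, groups=df["education_level"])
result = model.fit()
print(result.summary())  # fixed effects for the ADL terms; variance of the random intercept
```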
152 Spatiotemporal Changes in Drought Sensitivity Captured by Multiple Tree-Ring Parameters of Central European Conifers
Authors: Krešimir Begović, Miloš Rydval, Jan Tumajer, Kristyna Svobodová, Thomas Langbehn, Yumei Jiang, Vojtech Čada, Vaclav Treml, Ryszard Kaczka, Miroslav Svoboda
Abstract:
Environmental changes have increased the frequency and intensity of climatic extremes, particularly hotter droughts, leading to altered tree growth patterns and multi-year lags in tree recovery. The effects of shifting climatic conditions on tree growth are inhomogeneous across species’ natural distribution ranges, with large spatial heterogeneity and inter-population variability, but generally have significant consequences for contemporary forest dynamics and future ecosystem functioning. Despite numerous studies on the impacts of regional droughts, large uncertainties remain regarding the mechanistic basis of drought legacy effects on wood formation and the ability of individual species to cope with increasingly drier growing conditions and rising year-to-year climatic variability. To unravel the complexity of climate-growth interactions and assess species-specific responses to severe droughts, we combined forward modeling of tree growth (VS-Lite model) with correlation analyses against climate (temperature, precipitation, and the SPEI-3 moisture index) and growth responses to extreme drought events from multiple tree-ring parameters (ring-width and blue intensity parameters). We used an extensive dataset with over 1,000 tree-ring samples from 23 nature forest reserves across an altitudinal range in Czechia and Slovakia. Our results revealed substantial spatiotemporal variability in growth responses to summer season temperature and moisture availability across species and tree-ring parameters. However, a general trend of increasing spring moisture-growth sensitivity in recent decades was observed in the Scots pine mountain forests and in the lowland forests of both species. The VS-Lite model effectively captured nonstationary climate-growth relationships and accurately estimated high-frequency growth variability, indicating a significant incidence of regional drought events and growth reductions. Notably, growth reductions during extreme drought years and discrete legacy effects identified in individual wood components were most pronounced in the lowland forests. Together with the observed growth declines in recent decades, these findings suggest an increasing vulnerability of Norway spruce and Scots pine in dry lowlands under intensifying climatic constraints.
Keywords: dendroclimatology, Vaganov-Shashkin lite, conifers, central Europe, drought, blue intensity
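The correlation part of the analysis (relating ring-width or blue intensity chronologies to temperature, precipitation, and SPEI-3) can be illustrated with a short snippet. It is a generic sketch rather than the authors' workflow, and the chronology and climate series names are hypothetical.

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical annual series aligned by year for one site
data = pd.read_csv("site_chronology_and_climate.csv", index_col="year")  # assumed file

# Correlate the detrended ring-width index with summer climate variables
for climate_var in ["temp_jja", "precip_jja", "spei3_jja"]:
    r, p = pearsonr(data["ring_width_index"], data[climate_var])
    print(f"{climate_var}: r = {r:.2f}, p = {p:.3f}")

# Moving 31-year window to expose nonstationary (time-varying) relationships
moving_r = data["ring_width_index"].rolling(31, center=True).corr(data["spei3_jja"])
print(moving_r.dropna().tail())
```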
151 Patterns and Predictors of Intended Service Use among Frail Older Adults in Urban China
Authors: Yuanyuan Fu
Abstract:
Background and Purpose: Along with the changes in society and the economy, the traditional home function for old people has gradually weakened in contemporary China. Acknowledging this situation, to better meet old people’s needs for formal services and improve the quality of later life, this study sought to identify patterns of intended service use among frail old people living in the community and examined determinants that explain heterogeneous variations in old people’s intended service use patterns. Additionally, this study also tested the relationship between cultural values and intended service use patterns and the mediating role of enabling factors between cultural values and intended service use patterns. Methods: Participants were recruited from Haidian District, Beijing, China in 2015. A multi-stage sampling method was adopted to select sub-districts, communities, and old people aged 70 years or older. After screening, 577 old people with limitations in daily life were successfully interviewed. After data cleaning, 550 samples were included for data analysis. This study establishes a conceptual framework based on the Anderson Model (including predisposing factors, enabling factors, and need factors) and further develops it by adding cultural value factors (including attitudes towards filial piety and attitudes towards social face). Using latent class analysis (LCA), this study classifies the overall patterns of old people’s formal service utilization. Fourteen types of formal services were taken into account, including housework, voluntary support, transportation, home-delivered meals, home-delivered medical care, elderly canteens, day-care centers/respite care, and so on. Structural equation modeling (SEM) was used to examine the direct effect of cultural values on service use patterns and the mediating effect of the enabling factors. Results: The LCA identified a hierarchical structure of service use patterns: multiple intended service use (N=69, 23%), selective intended service use (N=129, 23%), and light intended service use (N=352, 64%). Through SEM, after controlling for predisposing factors and need factors, the results showed a significant direct effect of cultural values on older people’s intended service use patterns. Enabling factors had a partial mediation effect on the relationship between cultural values and the patterns. Conclusions and Implications: Differentiation of formal services may be important for meeting frail old people’s service needs and distributing program resources by identifying target populations for intervention, which may inform specific interventions to better support frail old people. Additionally, cultural values had a unique direct effect on the intended service use patterns of frail old people in China, enriching our theoretical understanding of the sources of cultural values and their impacts. The findings also highlighted the mediating effect of enabling factors on the relationship between cultural value factors and intended service use patterns. This study suggests that researchers and service providers should pay more attention to the important role of cultural value factors in contributing to intended service use patterns and also be more sensitive to the mediating effect of enabling factors when discussing the relationship between cultural values and the patterns.
Keywords: frail old people, intended service use pattern, culture value, enabling factors, contemporary China, latent class analysis
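Latent class analysis over binary service-use indicators, as used above, is essentially a mixture of independent Bernoulli variables fitted by EM. The snippet below is a bare-bones illustration of that idea, not the software the authors used; the data matrix is a random placeholder, and the choice of three classes simply mirrors the description.

```python
import numpy as np

def fit_lca(X, n_classes=3, n_iter=200, seed=0):
    """Bernoulli-mixture latent class analysis fitted with EM.
    X: (n_respondents, n_items) binary matrix of intended service use."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    class_probs = np.full(n_classes, 1.0 / n_classes)          # P(class)
    item_probs = rng.uniform(0.25, 0.75, size=(n_classes, m))  # P(item = 1 | class)
    for _ in range(n_iter):
        # E-step: responsibilities, computed in log space for stability
        log_lik = (X @ np.log(item_probs).T
                   + (1 - X) @ np.log(1 - item_probs).T
                   + np.log(class_probs))
        log_lik -= log_lik.max(axis=1, keepdims=True)
        resp = np.exp(log_lik)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update class sizes and item endorsement probabilities
        class_probs = resp.mean(axis=0)
        item_probs = np.clip((resp.T @ X) / resp.sum(axis=0)[:, None], 1e-6, 1 - 1e-6)
    return class_probs, item_probs, resp

# Hypothetical 550 x 14 binary matrix of intended use of 14 formal services
X = (np.random.default_rng(1).random((550, 14)) < 0.3).astype(float)
class_probs, item_probs, resp = fit_lca(X)
print("Estimated class sizes:", np.round(class_probs, 2))
```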
150 Influence of Temperature and Immersion on the Behavior of a Polymer Composite
Authors: Quentin C.P. Bourgogne, Vanessa Bouchart, Pierre Chevrier, Emmanuel Dattoli
Abstract:
This study presents experimental and theoretical work conducted on a PolyPhenylene Sulfide reinforced with 40%wt of short glass fibers (PPS GF40) and its matrix. Thermoplastics are widely used in the automotive industry to lightweight automotive parts. The replacement of metallic parts by thermoplastics is reaching under-the-hood parts, near the engine. In this area, the parts are subjected to high temperatures and are immersed in cooling liquid. This liquid is composed of water and glycol and can affect the mechanical properties of the composite. The aim of this work was thus to quantify the evolution of the mechanical properties of the thermoplastic composite as a function of temperature and liquid aging effects, in order to develop a reliable design of parts. An experimental campaign in the tensile mode was carried out at different temperatures and for various glycol proportions in the cooling liquid, for monotonic and cyclic loadings, on a neat and a reinforced PPS. The results of these tests allowed some of the main physical phenomena occurring during these solicitations under severe hydrothermal conditions to be highlighted. Indeed, the tests performed showed that temperature and cooling liquid aging can affect the mechanical behavior of the material in several ways. The more water the cooling liquid contains, the more the mechanical behavior is affected. It was observed that PPS showed a higher sensitivity to absorption than to the chemical aggressiveness of the cooling liquid, explaining this dominant sensitivity. Two kinds of behaviors were noted: an elasto-plastic type below the glass transition temperature and a visco-pseudo-plastic one above it. It was also shown that viscosity is the leading phenomenon above the glass transition temperature for the PPS and could also be important below this temperature, mostly under cyclic conditions and when the stress rate is low. Finally, it was observed that loading this composite at high temperatures decreases the benefit of the presence of fibers. A new phenomenological model was then built to take into account these experimental observations. This new model allowed the prediction of the evolution of mechanical properties as a function of the loading environment, with a reduced number of parameters compared to previous studies. It was also shown that the presented approach enables the description and the prediction of the mechanical response with very good accuracy (an average error of 2% at worst) over a wide range of hydrothermal conditions. A temperature-humidity equivalence principle was identified for the PPS, allowing aging effects to be considered within the proposed model. Then, a limit on the accuracy improvement reachable by any model using this set of data was determined by applying an artificial intelligence-based model, allowing a comparison between artificial intelligence-based models and phenomenological ones.
Keywords: aging, analytical modeling, mechanical testing, polymer matrix composites, sequential model, thermomechanical
149 Design and Biomechanical Analysis of a Transtibial Prosthesis for Cyclists of the Colombian Team Paralympic
Authors: Jhonnatan Eduardo Zamudio Palacios, Oscar Leonardo Mosquera Dussan, Daniel Guzman Perez, Daniel Alfonso Botero Rosas, Oscar Fabian Rubiano Espinosa, Jose Antonio Garcia Torres, Ivan Dario Chavarro, Ivan Ramiro Rodriguez Camacho, Jaime Orlando Rodriguez
Abstract:
The training of cyclists with some type of disability finds an indispensable ally in technological development, generating advances every day that contribute to quality of life and allow athletes to maximize their capacities. The performance of a cyclist depends on physiological and biomechanical factors, such as the aerodynamic profile, bicycle measurements, crank length, pedaling systems, and type of competition, among others. This study particularly focuses on the description of the dynamic model of a transtibial prosthesis for Paralympic cyclists. To build the model, two points are chosen: the centers of rotation of the chainring and pinion of the track bicycle. The parametric scheme of the track bike represents a model of 6 degrees of freedom due to the displacement in X-Y of each of the reference points as functions of the curve profile angle β, the velodrome cant α, and the angle of rotation of the crank φ. The force exerted on the crank of the bicycle varies according to the curve profile angle β, the velodrome cant α, and the angle of rotation of the crank φ. The behavior is analyzed through the Matlab R2015a software. The average force that a cyclist exerts on the cranks of a bicycle is 1,607.1 N, so the Paralympic cyclist must exert a force of about 803.6 N on each crank. Once the maximum force associated with the movement has been determined, the dynamic modeling of the transtibial prosthesis follows; it represents a model of 6 degrees of freedom with displacement in X-Y in relation to the angles of rotation of the hip π, knee γ, and ankle λ. Subsequently, an analysis of the kinematic behavior of the prosthesis was carried out by means of SolidWorks 2017 and Matlab R2015a, which were used to model and analyze the variation of the hip angle π, knee angle γ, and ankle angle λ of the prosthesis. The reaction forces generated in the prosthesis were computed at the ankle of the prosthesis by summing the forces on the X and Y axes. The same analysis was then applied to the tibia of the prosthesis and the socket. The reaction forces in the parts of the prosthesis vary according to the hip angle π, knee angle γ, and ankle angle λ of the prosthesis. Therefore, it can be deduced that the maximum forces experienced by the ankle of the prosthesis are 933.6 N on the X axis and 2,160.5 N on the Y axis. Finally, it is calculated that the maximum forces experienced by the tibia and the socket of the transtibial prosthesis in high-performance competitions are 3,266 N on the X axis and 1,357 N on the Y axis. In conclusion, it can be said that the performance of the cyclist depends on several physiological factors linked to the biomechanics of training, as well as on the influence of biomechanical factors such as aerodynamics, bicycle measurements, crank length, and non-circular pedaling systems.
Keywords: biomechanics, dynamic model, paralympic cyclist, transtibial prosthesis
148 Modeling Visual Memorability Assessment with Autoencoders Reveals Characteristics of Memorable Images
Authors: Elham Bagheri, Yalda Mohsenzadeh
Abstract:
Image memorability refers to the phenomenon where certain images are more likely to be remembered by humans than others. It is a quantifiable and intrinsic attribute of an image. Understanding how visual perception and memory interact is important in both cognitive science and artificial intelligence. It reveals the complex processes that support human cognition and helps to improve machine learning algorithms by mimicking the brain's efficient data processing and storage mechanisms. To explore the computational underpinnings of image memorability, this study examines the relationship between an image's reconstruction error, its distinctiveness in latent space, and its memorability score. A trained autoencoder is used to replicate human-like memorability assessment, inspired by the visual memory game employed in memorability estimations. This study leverages a VGG-based autoencoder that is pre-trained on the vast ImageNet dataset, enabling it to recognize patterns and features that are common to a wide and diverse range of images. An empirical analysis is conducted using the MemCat dataset, which includes 10,000 images from five broad categories: animals, sports, food, landscapes, and vehicles, along with their corresponding memorability scores. The memorability score assigned to each image represents the probability of that image being remembered by participants after a single exposure. The autoencoder is fine-tuned for one epoch with a batch size of one, attempting to create a scenario similar to human memorability experiments, where memorability is quantified by the likelihood of an image being remembered after being seen only once. The reconstruction error, which is quantified as the difference between the original and reconstructed images, serves as a measure of how well the autoencoder has learned to represent the data. The reconstruction error of each image, the error reduction, and its distinctiveness in latent space are calculated and correlated with the memorability score. Distinctiveness is measured as the Euclidean distance between each image's latent representation and its nearest neighbor within the autoencoder's latent space. Different structural and perceptual loss functions are considered to quantify the reconstruction error. The results indicate that there is a strong correlation between the reconstruction error and distinctiveness of images and their memorability scores. This suggests that images with more unique, distinct features that challenge the autoencoder's compressive capacities are inherently more memorable. There is also a negative correlation between memorability and the reduction in reconstruction error relative to the autoencoder pre-trained on ImageNet, which suggests that highly memorable images are harder to reconstruct, probably due to having features that are more difficult for the autoencoder to learn. These insights suggest a new pathway for evaluating image memorability, which could potentially impact industries reliant on visual content and mark a step forward in merging the fields of artificial intelligence and cognitive science. The current research opens avenues for utilizing neural representations as instruments for understanding and predicting visual memory.
Keywords: autoencoder, computational vision, image memorability, image reconstruction, memory retention, reconstruction error, visual perception
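The two quantities at the core of this analysis, per-image reconstruction error and nearest-neighbor distance in latent space, can be computed in a few lines once an autoencoder is available. The sketch below uses a toy encoder/decoder as a stand-in for the VGG-based autoencoder and assumes image tensors and memorability scores are already loaded; all names and shapes are illustrative.

```python
import torch
import torch.nn as nn
from scipy.stats import spearmanr

# Toy stand-in for the VGG-based autoencoder (illustrative only)
encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 256), nn.ReLU(), nn.Linear(256, 64))
decoder = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 3 * 64 * 64))

# Assumed inputs: images (N, 3, 64, 64) and memorability scores (N,)
images = torch.rand(500, 3, 64, 64)
memorability = torch.rand(500)

with torch.no_grad():
    latents = encoder(images)                                   # (N, 64) latent codes
    recon = decoder(latents).view_as(images)                    # reconstructions
    recon_error = ((recon - images) ** 2).mean(dim=(1, 2, 3))   # per-image MSE

    # Distinctiveness: Euclidean distance to the nearest other latent code
    dists = torch.cdist(latents, latents)
    dists.fill_diagonal_(float("inf"))
    distinctiveness = dists.min(dim=1).values

print("error vs memorability:", spearmanr(recon_error.numpy(), memorability.numpy())[0])
print("distinctiveness vs memorability:", spearmanr(distinctiveness.numpy(), memorability.numpy())[0])
```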
147 Integrated Manufacture of Polymer and Conductive Tracks for Functional Objects Fabrication
Authors: Barbara Urasinska-Wojcik, Neil Chilton, Peter Todd, Christopher Elsworthy, Gregory J. Gibbons
Abstract:
The recent increase in the application of Additive Manufacturing (AM) to products has resulted in new demands on capability. The ability to integrate both form and function within printed objects is the next frontier in the 3D printing area. To move beyond prototyping into low-volume production, we demonstrate a UK-designed and built hybrid AM system that combines polymer-based structural deposition with digital deposition of electrically conductive elements. This hybrid manufacturing system is based on a multi-planar build approach to improve on many of the limitations associated with AM, such as poor surface finish, low geometric tolerance, and poor robustness. Specifically, the approach involves a multi-planar Material Extrusion (ME) process in which separated build stations with up to 5 axes of motion replace traditional horizontally-sliced layer modeling. The construction of multi-material architectures also involved using multiple print systems in order to combine both ME and digital deposition of conductive material. To demonstrate multi-material 3D printing, three thermoplastics, acrylonitrile butadiene styrene (ABS), polyamide 6,6/6 copolymers (CoPA), and polyamide 12 (PA), were used to print specimens, on top of which our high-viscosity Ag-particulate ink was printed in a non-contact process, during which drop characteristics such as shape, velocity, and volume were assessed using a drop-watching system. Spectroscopic analysis of these 3D printed materials in the IR region helped to determine the optimum in-situ curing system for implementation into the AM system to achieve improved adhesion and surface refinement. Thermal analyses were performed to determine the printed materials' glass transition temperature (Tg), stability, and degradation behavior in order to find the optimum annealing conditions post printing. Electrical analysis of printed conductive tracks on polymer surfaces during mechanical testing (static tensile, 3-point bending, and dynamic fatigue) was performed to assess the robustness of the electrical circuits. The tracks on CoPA, ABS, and PA exhibited low electrical resistance, and in the case of PA, the resistance values of the tracks remained unchanged across hundreds of repeated tensile cycles up to 0.5% strain amplitude. Our developed AM printer has the ability to fabricate fully functional objects in one build, including complex electronics. It enables product designers and manufacturers to produce functional, saleable electronic products from a small-format modular platform. It will make 3D printing better, faster, and stronger.
Keywords: additive manufacturing, conductive tracks, hybrid 3D printer, integrated manufacture
146 Implementation of Deep Neural Networks for Pavement Condition Index Prediction
Authors: M. Sirhan, S. Bekhor, A. Sidess
Abstract:
In-service pavements deteriorate with time due to traffic wheel loads, environment, and climate conditions. Pavement deterioration leads to a reduction in their serviceability and structural behavior. Consequently, proper maintenance and rehabilitation (M&R) are necessary actions to keep the in-service pavement network at the desired level of serviceability. Due to resource and financial constraints, the pavement management system (PMS) prioritizes the roads most in need of maintenance and rehabilitation action. It recommends a suitable action for each pavement based on the performance and surface condition of each road in the network. Pavement performance and condition are usually quantified and evaluated by different types of roughness-based and stress-based indices. Examples of such indices are the Pavement Serviceability Index (PSI), Pavement Serviceability Ratio (PSR), Mean Panel Rating (MPR), Pavement Condition Rating (PCR), Ride Number (RN), Profile Index (PI), International Roughness Index (IRI), and Pavement Condition Index (PCI). PCI is commonly used in PMS as an indicator of the extent of the distresses on the pavement surface. PCI values range between 0 and 100, where 0 and 100 represent a highly deteriorated pavement and a newly constructed pavement, respectively. The PCI value is a function of distress type, severity, and density (measured as a percentage of the total pavement area). PCI is usually calculated iteratively using the 'Paver' program developed by the US Army Corps of Engineers. The use of soft computing techniques, especially Artificial Neural Networks (ANN), has become increasingly popular in the modeling of engineering problems. ANN techniques have successfully modeled the performance of in-service pavements, due to their efficiency in predicting and solving non-linear relationships and dealing with uncertain, large amounts of data. Typical regression models, which require a pre-defined relationship, can be replaced by ANN, which has been found to be an appropriate tool for predicting the different pavement performance indices as functions of different factors as well. Consequently, the objective of the presented study is to develop and train an ANN model that predicts PCI values. The model's input consists of the percentage areas of 11 different damage types: alligator cracking, swelling, rutting, block cracking, longitudinal/transverse cracking, edge cracking, shoving, raveling, potholes, patching, and lane drop-off, at three severity levels (low, medium, high) for each. The developed model was trained using 536,000 samples and tested on 134,000 samples. The samples were collected and prepared by The National Transport Infrastructure Company. The predicted results yielded satisfactory compliance with field measurements. The proposed model predicted PCI values with relatively low standard deviations, suggesting that it could be incorporated into the PMS for PCI determination. It is worth mentioning that the most influential variables for PCI prediction are damages related to alligator cracking, swelling, rutting, and potholes.
Keywords: artificial neural networks, computer programming, pavement condition index, pavement management, performance prediction
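A network with the input layout described above (11 damage types at 3 severity levels, i.e., 33 percentage-area features) can be sketched with scikit-learn's MLPRegressor. This is an illustrative configuration, not the authors' architecture, and the data arrays are synthetic placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Placeholder data: 33 features = percentage area of 11 distress types at 3 severities
rng = np.random.default_rng(0)
X = rng.uniform(0, 30, size=(5000, 33))                              # percent of total pavement area
y = np.clip(100 - X.sum(axis=1) + rng.normal(0, 5, 5000), 0, 100)    # synthetic PCI-like target

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0))
model.fit(X_train, y_train)
print("Test R^2:", round(model.score(X_test, y_test), 3))
```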
145 A Supply Chain Risk Management Model Based on Both Qualitative and Quantitative Approaches
Authors: Henry Lau, Dilupa Nakandala, Li Zhao
Abstract:
In today’s business world, it is well recognized that risk is an important factor that needs to be taken into consideration before a decision is made. Studies indicate that both the number of risks faced by organizations and their potential consequences are growing. Supply chain risk management has become one of the major concerns for practitioners and researchers. Supply chain leaders and scholars are now focusing on the importance of managing supply chain risk. In order to meet the challenge of managing and mitigating supply chain risk (SCR), we must first identify the different dimensions of SCR and assess its relevant probability and severity. SCR has been classified in many different ways; there are no consistently accepted dimensions of SCR, and several different classifications are reported in the literature. Basically, supply chain risks can be classified into two dimensions, namely disruption risk and operational risk. Disruption risks are those caused by events such as bankruptcy, natural disasters, and terrorist attacks. Operational risks are related to supply and demand coordination and uncertainty, such as uncertain demand and uncertain supply. Disruption risks are rare but severe and hard to manage, while operational risk can be reduced through effective SCM activities. Other SCRs include supply risk, process risk, demand risk, and technology risk. In fact, the disorganized classification of SCR has created confusion for SCR scholars. Moreover, practitioners need to identify and assess SCR. As such, it is important to have an overarching framework tying all these SCR dimensions together, for two reasons. First, it helps researchers use these terms for communication of ideas based on the same concept. Second, a shared understanding of the SCR dimensions will support researchers in focusing on the more important research objective: operationalization of SCR, which is very important for assessing SCR. In general, the fresh food supply chain is subject to a certain level of risk, such as supply risk (low quality, delivery failure, hot weather, etc.) and demand risk (seasonal food imbalance, new competitors). Effective strategies to mitigate fresh food supply chain risk are required to enhance operations. Before implementing effective mitigation strategies, we need to identify the risk sources and evaluate the risk level. However, assessing supply chain risk is not an easy matter, and existing research mainly uses qualitative methods, such as the risk assessment matrix. To address the relevant issues, this paper aims to analyze the risk factors of the fresh food supply chain using an approach comprising both fuzzy logic and hierarchical holographic modeling techniques. This novel approach is able to take advantage of the benefits of both of these well-known techniques and at the same time offset their drawbacks in certain aspects. In order to develop this integrated approach, substantial research work is needed to effectively combine the two techniques in a seamless way. To validate the proposed integrated approach, a case study in a fresh food supply chain company was conducted to verify the feasibility of its functionality in a real environment.
Keywords: fresh food supply chain, fuzzy logic, hierarchical holographic modelling, operationalization, supply chain risk
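The fuzzy-logic half of the proposed approach typically maps linguistic ratings of probability and severity onto membership functions and aggregates them into a risk score. The snippet below is a minimal hand-rolled illustration of that idea under assumed triangular membership functions and a simple min-based combination rule; it is not the authors' formulation.

```python
def triangular(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Assumed fuzzy sets on a 0-10 scale for both probability and severity
levels = {"low": (0, 0, 4), "medium": (2, 5, 8), "high": (6, 10, 10)}
weights = {"low": 2.0, "medium": 5.0, "high": 9.0}  # representative crisp values

def fuzzy_risk(probability, severity):
    """Combine probability and severity ratings into a defuzzified risk score."""
    score, total = 0.0, 0.0
    for name, params in levels.items():
        # Fuzzy AND (min): risk is at a level only if both inputs support it
        mu = min(triangular(probability, *params), triangular(severity, *params))
        score += mu * weights[name]
        total += mu
    return score / total if total else 0.0

# Example: a hypothetical 'hot weather' supply risk rated 7/10 likely and 6/10 severe
print(round(fuzzy_risk(7, 6), 2))
```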
144 Introducing an Innovative Structural Fuse for Creation of Repairable Buildings with See-Saw Motion during Earthquake and Investigating It by Nonlinear Finite Element Modeling
Authors: M. Hosseini, N. Ghorbani Amirabad, M. Zhian
Abstract:
Seismic design codes accept structural and nonstructural damage after severe earthquakes (provided that the building is prevented from collapse), so that in many cases demolition and reconstruction of the building is inevitable, and this is usually very difficult, costly, and time consuming. Therefore, designing and constructing buildings in such a way that they can be easily repaired after earthquakes, even major ones, is quite desirable. For this purpose, giving the possibility of rocking or see-saw motion to the building structure, partially or as a whole, has been used by some researchers in the recent decade, based on a central support which has the main role in creating the possibility of see-saw motion in the building’s structural system. In this paper, paying more attention to the key role of the central fuse and support, an innovative energy dissipater which can act as the central fuse and support of a building with see-saw motion is introduced, and the process of reaching an optimal geometry for it by using finite element analysis is presented. Several geometric shapes were considered for the proposed central fuse and support. In each case, the hysteretic moment-rotation behavior of the considered fuse was obtained under the simultaneous effect of vertical and horizontal loads, by nonlinear finite element analyses. To find the optimal geometric shape, the maximum plastic strain value in the fuse body was considered as the main parameter. The rotational stiffness of the fuse under the effect of acting moments is another important parameter for finding the optimum shape. The proposed fuse and support can be called the Yielding Curved Bars and Clipped Hemisphere Core (YCB&CHC, or more briefly YCB) energy dissipater. Based on extensive nonlinear finite element analyses, it was found that using a rectangular section for the curved bars gives more reliable results. Then, the YCB energy dissipater with the optimal shape was used in a structural model of a 12-story regular building as its central fuse and support to give it the possibility of see-saw motion, and its seismic responses were compared to those of the building in fixed-base conditions, subjected to the three-component accelerations of several selected earthquakes, including Loma Prieta, Northridge, and Parkfield. In the building with see-saw motion, some simple yielding-plate energy dissipaters were also used under the circumferential columns. The results indicated that equipping the building with central and circumferential fuses results in a remarkable reduction of the seismic responses of the building, including the base shear, inter-story drift, and roof acceleration. In fact, by using the proposed technique, the plastic deformations are concentrated in the fuses in the lowest story of the building, so that the main body of the building structure remains basically elastic, and therefore the building can be easily repaired after an earthquake.
Keywords: rocking mechanism, see-saw motion, finite element analysis, hysteretic behavior
143 The Transformation of Hot Spring Destinations in Taiwan in a Post-pandemic Future: Exploring the COVID-19 Impacts on Hot Spring Experiences, Individual, and Community Resilience of Residents From a Posttraumatic Growth Perspective
Authors: Hsin-Hung Lin, Janet Chang, Te-Yi Chang, You-Sheng Huang
Abstract:
Natural and man-made disasters have become huge challenges for tourism destinations and have emphasized the fragility of the industry. Hot springs, among all destinations, are prone to disasters due to their dependence on natural resources and locations. After the COVID-19 outbreak, hot spring destinations experienced not only business losses but also psychological trauma. However, evidence has also shown that such impacts may not necessarily reduce people's resilience but may be converted into posttraumatic growth. In Taiwan, a large proportion of hot springs are located in rural or indigenous areas. As a result, hot spring resources are associated with community cohesion for local residents. Yet prior research on hot spring destinations has mainly focused on visitors, whereas residents have been overlooked. More specifically, the relationship between hot spring resources and resident resilience in the face of the COVID-19 impacts remains unclear. To fill this knowledge gap, this paper aims to explore the COVID-19 impacts on residents' hot spring experiences as well as individual and community resilience from the perspective of posttraumatic growth. A total of 315 residents of the 13 hot spring destinations that are most popular in Taiwan were recruited. Online questionnaires were distributed over travel forums and social networks after the COVID-19 outbreak. This paper subsequently used Partial Least Squares Structural Equation Modeling for data analysis, as the technique offers significant advantages in addressing nonnormal data and small sample sizes. A preliminary test was conducted, and the results showed acceptable internal consistency and no serious common method variance. The path analysis demonstrated that the COVID-19 impacts strengthened residents' perceptions of hot spring resources and experiences, implying that the pandemic had propelled the residents to visit hot springs for their healing benefits. In addition, the COVID-19 impacts significantly enhanced residents' individual and community resilience, which indicates that the residents at hot springs are more resilient thanks to their awareness of external risks. Thirdly, residents' individual resilience was positively associated with hot spring experiences, while community resilience was not affected by hot spring experiences. Such findings may suggest that hot spring experiences are more related to individual-level experiences and, consequently, have an insignificant influence on community resilience. Finally, individual resilience proved to be the most relevant factor that helps foster community resilience. To conclude, the authorities may consider exploiting hot spring resources so as to increase individual resilience for local residents. Such implications can be used as a reference for other post-disaster tourist destinations as well. As for future research, longitudinal studies with qualitative methods are suggested to better understand how hot spring experiences have changed individuals and communities over the long term. It should be noted that the main subjects of this paper were the hot spring communities in Taiwan. Therefore, the results cannot be generalized to all types of tourism destinations. That is, more diverse tourism destinations may be investigated to provide a broader perspective on post-disaster recovery.
Keywords: community resilience, hot spring destinations, individual resilience, posttraumatic growth
142 The Transformation of Hot Spring Destinations in Taiwan in a Post-pandemic Future: Exploring the COVID-19 Impacts on Hot Spring Experiences and Resilience of Local Residents from a Posttraumatic Growth Perspective
Authors: Hsin-Hung Lin, Janet Chang, Te-Yi Chang, You-Sheng Huang
Abstract:
Natural and man-made disasters have become huge challenges for tourism destinations and have emphasized the fragility of the industry. Hot springs, among all destinations, are prone to disasters due to their dependence on natural resources and locations. After the COVID-19 outbreak, hot spring destinations experienced not only business losses but also psychological trauma. However, evidence has also shown that such impacts may not necessarily reduce people's resilience but may be converted into posttraumatic growth. In Taiwan, a large proportion of hot springs are located in rural or indigenous areas. As a result, hot spring resources are associated with community cohesion for local residents. Yet prior research on hot spring destinations has mainly focused on visitors, whereas residents have been overlooked. More specifically, the relationship between hot spring resources and resident resilience in the face of the COVID-19 impacts remains unclear. To fill this knowledge gap, this paper aims to explore the COVID-19 impacts on residents' hot spring experiences as well as individual and community resilience from the perspective of posttraumatic growth. A total of 315 residents of the 13 hot spring destinations that are most popular in Taiwan were recruited. Online questionnaires were distributed over travel forums and social networks after the COVID-19 outbreak. This paper subsequently used Partial Least Squares Structural Equation Modeling for data analysis, as the technique offers significant advantages in addressing nonnormal data and small sample sizes. A preliminary test was conducted, and the results showed acceptable internal consistency and no serious common method variance. The path analysis demonstrated that the COVID-19 impacts strengthened residents' perceptions of hot spring resources and experiences, implying that the pandemic had propelled the residents to visit hot springs for their healing benefits. In addition, the COVID-19 impacts significantly enhanced residents' individual and community resilience, which indicates that the residents at hot springs are more resilient thanks to their awareness of external risks. Thirdly, residents' individual resilience was positively associated with hot spring experiences, while community resilience was not affected by hot spring experiences. Such findings may suggest that hot spring experiences are more related to individual-level experiences and, consequently, have an insignificant influence on community resilience. Finally, individual resilience proved to be the most relevant factor that helps foster community resilience. To conclude, the authorities may consider exploiting hot spring resources so as to increase individual resilience for local residents. Such implications can be used as a reference for other post-disaster tourist destinations as well. As for future research, longitudinal studies with qualitative methods are suggested to better understand how hot spring experiences have changed individuals and communities over the long term. It should be noted that the main subjects of this paper were the hot spring communities in Taiwan. Therefore, the results cannot be generalized to all types of tourism destinations. That is, more diverse tourism destinations may be investigated to provide a broader perspective on post-disaster recovery.
Keywords: community resilience, hot spring destinations, individual resilience, posttraumatic growth (PTG)
141 Modeling and Design of a Solar Thermal Open Volumetric Air Receiver
Authors: Piyush Sharma, Laltu Chandra, P. S. Ghoshdastidar, Rajiv Shekhar
Abstract:
Metals processing operations such as melting and heat treatment are energy-intensive, requiring temperatures greater than 500°C. The desired temperature in these industrial furnaces is attained by circulating electrically heated air. In most of these furnaces, electricity produced from captive coal-based thermal power plants is used. Solar thermal energy could be a viable heat source in these furnaces. A retrofitted solar convective furnace (SCF) concept, which uses solar thermal generated hot air, has been proposed. Critical to the success of an SCF is the design of an open volumetric air receiver (OVAR), which can heat air in excess of 800°C. The OVAR is placed on top of a tower and receives concentrated solar radiation from a heliostat field. Absorbers, the mixer assembly, and the return air flow chamber (RAFC) are the major components of an OVAR. The absorber is a porous structure that transfers heat from concentrated solar radiation to ambient air, referred to as primary air. The mixer ensures a uniform air temperature at the receiver exit. Flow of the relatively cooler return air in the RAFC ensures that the absorbers do not fail by overheating. In an earlier publication, the detailed design basis, fabrication, and characterization of a 2 kWth open volumetric air receiver (OVAR) based laboratory solar air tower simulator were presented. Development of an experimentally validated, CFD-based mathematical model which can ultimately be used for the design and scale-up of an OVAR has been the major objective of this investigation. In contrast to the published literature, where flow and heat transfer have been modeled primarily in a single absorber module, the present study has modeled the entire receiver assembly, including the RAFC. Flow and heat transfer calculations have been carried out in ANSYS using the LTNE model. The complex return air flow pattern in the RAFC requires complicated meshes and is computationally and time intensive. Hence a simple, realistic 1-D mathematical model, which circumvents the need for carrying out detailed flow and heat transfer calculations, has also been proposed. Several important results have emerged from this investigation. Circumferential electrical heating of absorbers can mimic frontal heating by concentrated solar radiation reasonably well in testing and characterizing the performance of an OVAR. Circumferential heating, therefore, obviates the need for expensive high solar concentration simulators. Predictions suggest that the ratio of the power on aperture (POA) to the mass flow rate of air (MFR) is a normalizing parameter for characterizing the thermal performance of an OVAR. Increasing POA/MFR increases the maximum temperature of the air but decreases the thermal efficiency of an OVAR. Predictions of the 1-D mathematical model are within 5% of the ANSYS predictions, and the computation time is reduced from ~5 hours to a few seconds.
Keywords: absorbers, mixer assembly, open volumetric air receiver, return air flow chamber, solar thermal energy
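The role of POA/MFR as a normalizing parameter can be illustrated with a lumped 1-D energy balance on the air stream. The sketch below is a generic back-of-the-envelope calculation, not the authors' 1-D model; the thermal efficiency value and inlet temperature are assumptions.

```python
# Lumped 1-D energy balance on the air heated by the receiver:
# the air outlet temperature rises with POA/MFR for a given thermal efficiency.
CP_AIR = 1005.0  # J/(kg*K), specific heat of air (approximate, assumed constant)

def outlet_temperature(poa_w, mfr_kg_s, t_in_c=25.0, efficiency=0.7):
    """Air outlet temperature (deg C) from power on aperture and air mass flow rate."""
    return t_in_c + efficiency * poa_w / (mfr_kg_s * CP_AIR)

# The same POA/MFR ratio gives the same temperature rise (normalizing behavior)
print(outlet_temperature(poa_w=2000.0, mfr_kg_s=0.002))   # 2 kWth receiver, 2 g/s of air
print(outlet_temperature(poa_w=4000.0, mfr_kg_s=0.004))   # doubled power and flow rate
```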
Procedia PDF Downloads 197
140 'You’re Not Alone': Peer Feedback Practices for Cross-Cultural Writing Classrooms and Centers
Authors: Cassandra Branham, Danielle Farrar
Abstract:
As writing instructors and writing center administrators at a large research university with a significant population of English language learners (ELLs), we are interested in how peer feedback pedagogy can be effectively translated for writing center purposes, as well as how various modes of peer feedback can enrich the learning experiences of L1 and L2 writers in these spaces. Although peer feedback is widely used in classrooms and centers, instructor, student, and researcher opinions vary with respect to its effectiveness. We argue that peer feedback - traditional and digital, synchronous and asynchronous - is an indispensable element for both classrooms and centers and emphasize that it should occur with both L1 and L2 students to further develop an array of reading and writing skills. We also believe that further understanding of the best practices of peer feedback in such cross-cultural spaces, like the classroom and center, can optimize the benefits of peer feedback. After a critical review of the literature, we implemented an embedded tutoring program in our university’s writing center in collaboration with its First-Year Composition (FYC) program and Language Institute. The embedded tutoring program matches a graduate writing consultant with L1 and L2 writers enrolled in controlled-matriculation composition courses where ELLs make up at least 50% of each class. Furthermore, this program is informed by what we argue to be some best practices of peer feedback for both classroom and center purposes, including expectation-based training through rubrics, modeling effective feedback, hybridizing traditional and digital modes of feedback, recognizing the significance of the body in composition (what we call writer embodiment), and maximizing digital technologies to exploit extended cognition. After conducting surveys and follow-up interviews with students, instructors, and writing consultants in the embedded tutoring program, we found that not only did students see increased value in peer feedback, but instructors also saw an improvement in both writing style and critical thinking skills. Our L2 participants noted improvements in language acquisition, while our L1 students recognized a broadening of their worldviews. We believe that both L1 and L2 students developed self-efficacy and agency in their identities as writers because they gained confidence in their abilities to offer feedback, as well as in the legitimacy of the feedback they received from peers. We also argue that these best practices situate novice writers as experts, as writers become a valued and integral part of the revision process with their own and their peers’ papers. Finally, the use of iPads in embedded tutoring recovered the importance of the body and its senses in writing; the highly sensory feedback from these multi-modal sessions that offer audio and visual input underscores the significant role both the body and mind play in compositional practices. After beginning with a brief review of the literature that sparked this research, this paper will discuss the embedded tutoring program in detail, report on the results of the pilot program, and conclude with a discussion of the pedagogical implications that arise from this research for both classroom and center. Keywords: English language learners, peer feedback, writing center, writing classroom
Procedia PDF Downloads 402
139 Creative Resolutions to Intercultural Conflicts: The Joint Effects of International Experience and Cultural Intelligence
Authors: Thomas Rockstuhl, Soon Ang, Kok Yee Ng, Linn Van Dyne
Abstract:
Intercultural interactions are often challenging and fraught with conflicts. To shed light on how to interact effectively across cultures, academics and practitioners alike have advanced a plethora of intercultural competence models. However, the majority of this work has emphasized distal outcomes, such as job performance and cultural adjustment, rather than proximal outcomes, such as how individuals resolve inevitable intercultural conflicts. As a consequence, the processes by which individuals negotiate challenging intercultural conflicts are not well understood. The current study advances theorizing on intercultural conflict resolution by exploring antecedents of how people resolve intercultural conflicts. To this end, we examine creativity – the generation of novel and useful ideas – in the context of resolving cultural conflicts in intercultural interactions. Based on the dual-identity theory of creativity, we propose that individuals with greater international experience will display greater creativity and that this relationship is accentuated by an individual’s cultural intelligence. Two studies test these hypotheses. The first study comprises 84 senior university students drawn from an international organizational behavior course. The second study replicates the findings of the first study in a sample of 89 executives from eleven countries. Participants in both studies provided protocols of their strategies for resolving two intercultural conflicts, as depicted in two multimedia vignettes of challenging intercultural work-related interactions. Two research assistants, trained in intercultural management but blind to the study hypotheses, coded all strategies for their novelty and usefulness following scoring procedures for creativity tasks. Participants also completed online surveys of demographic background information, including their international experience and cultural intelligence. Hierarchical linear modeling showed that, surprisingly, while international experience is positively associated with usefulness, it is unrelated to novelty. Further, a person’s cultural intelligence strengthens the positive effect of international experience on usefulness and mitigates the effect of international experience on novelty. Theoretically, our findings offer an important extension to the dual-identity theory of creativity by identifying cultural intelligence as an individual difference moderator that qualifies the relationship between international experience and creative conflict resolution. In terms of novelty, individuals higher in cultural intelligence seem less susceptible to the rigidity effects of international experience. Perhaps they are more capable of assessing which aspects of culture are relevant and of applying relevant experiences when they brainstorm novel ideas. For usefulness, individuals high in cultural intelligence are better able to leverage their international experience to assess the viability of their ideas because their richer and more organized cultural knowledge structure allows them to assess possible options more efficiently and accurately. In sum, our findings suggest that cultural intelligence is an important and promising intercultural competence that fosters creative resolutions to intercultural conflicts. We hope that our findings stimulate future research on creativity and conflict resolution in intercultural contexts. Keywords: cultural intelligence, intercultural conflict, intercultural creativity, international experience
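As an illustration of how such a moderation (interaction) effect can be specified in a hierarchical linear model, the sketch below fits a mixed model with a random intercept per participant using statsmodels; the variable names, synthetic data, and nesting structure are hypothetical and do not reproduce the authors' actual analysis.

```python
# Illustrative sketch (not the authors' analysis): testing whether cultural
# intelligence (cq) moderates the effect of international experience (intl_exp)
# on the rated usefulness of conflict-resolution strategies, with strategies
# nested within participants. All variable names and data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_participants, n_strategies = 84, 2
pid = np.repeat(np.arange(n_participants), n_strategies)
intl_exp = np.repeat(rng.normal(size=n_participants), n_strategies)
cq = np.repeat(rng.normal(size=n_participants), n_strategies)
usefulness = 0.3 * intl_exp + 0.2 * cq + 0.25 * intl_exp * cq + rng.normal(size=pid.size)
df = pd.DataFrame({"participant": pid, "intl_exp": intl_exp, "cq": cq,
                   "usefulness": usefulness})

# The intl_exp:cq term carries the moderation hypothesis; the random intercept
# per participant accounts for strategies being nested within participants.
model = smf.mixedlm("usefulness ~ intl_exp * cq", data=df, groups=df["participant"])
print(model.fit().summary())
```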
Procedia PDF Downloads 148
138 Test Rig Development for Up-to-Date Experimental Study of Multi-Stage Flash Distillation Process
Authors: Marek Vondra, Petr Bobák
Abstract:
Vacuum evaporation is a reliable and well-proven technology with a wide application range, frequently used in the food, chemical, and pharmaceutical industries. Recently, numerous remarkable studies have been carried out to investigate the utilization of this technology in the area of wastewater treatment. One of the most successful applications of the vacuum evaporation principle is seawater desalination. Since the 1950s, multi-stage flash distillation (MSF) has been the leading technology in this field, and it is still irreplaceable in many respects, despite a rapid increase in cheaper reverse-osmosis-based installations in recent decades. MSF plants are conveniently operated in countries with a fluctuating seawater quality and at locations where a sufficient amount of waste heat is available. Nowadays, most MSF research is connected with the utilization of alternative heat sources and with hybridization, i.e., the merging of different types of desalination technologies. Some of the studies are concerned with basic principles of the static flash phenomenon, but only a few scientists have lately focused on the fundamentals of continuous multi-stage evaporation. Limited measurement possibilities at operating plants and insufficiently equipped experimental facilities may be the reasons. The aim of the presented study was to design, construct and test an up-to-date test rig with an advanced measurement system which provides real-time monitoring of all the important operational parameters under various conditions. The whole system consists of a conventionally designed MSF unit with 8 evaporation chambers, a versatile heating circuit for different kinds of feed water (e.g., seawater, wastewater), a sophisticated system for acquisition and real-time visualization of all the related quantities (temperature, pressure, flow rate, weight, conductivity, pH, water level, power input), access to a wide spectrum of operational media (salt, fresh and softened water, steam, natural gas, compressed air, electrical energy), and integrated transparent features which enable direct visual control of selected physical mechanisms (water evaporation in the chambers, water level right before the brine and distillate pumps). Thanks to the adjustable process parameters, it is possible to operate the test unit at desired operational conditions. This allows researchers to carry out statistical design and analysis of experiments. Valuable results obtained in this manner could be further employed in simulations and process modeling. First experimental tests confirm the correctness of the presented approach and promise interesting outputs in the future. The presented experimental apparatus enables flexible and efficient research of the whole MSF process. Keywords: design of experiment, multi-stage flash distillation, test rig, vacuum evaporation
Procedia PDF Downloads 387
137 A Variational Reformulation for the Thermomechanically Coupled Behavior of Shape Memory Alloys
Authors: Elisa Boatti, Ulisse Stefanelli, Alessandro Reali, Ferdinando Auricchio
Abstract:
Thanks to their unusual properties, shape memory alloys (SMAs) are good candidates for advanced applications in a wide range of engineering fields, such as automotive, robotics, civil, biomedical, and aerospace engineering. In recent decades, the ever-growing interest in such materials has boosted several research studies aimed at modeling their complex nonlinear behavior in an effective and robust way. Since the constitutive response of SMAs is strongly thermomechanically coupled, the non-isothermal evolution of the material must be taken into consideration. The present study considers an existing three-dimensional phenomenological model for SMAs, able to reproduce the main SMA properties while maintaining a simple user-friendly structure, and proposes a variational reformulation of the full non-isothermal version of the model. While the considered model has been thoroughly assessed in an isothermal setting, the proposed formulation takes into account the full non-isothermal problem. In particular, the reformulation is inspired by the GENERIC (General Equations for Non-Equilibrium Reversible-Irreversible Coupling) formalism and is based on a generalized gradient flow of the total entropy, related to thermal and mechanical variables. Such a phrasing of the model is new and allows for a discussion of the model from both a theoretical and a numerical point of view. Moreover, it directly implies the dissipativity of the flow. A semi-implicit time-discrete scheme is also presented for the fully coupled thermomechanical system, and it is proven to be unconditionally stable and convergent. The corresponding algorithm is then implemented, under a space-homogeneous temperature field assumption, and tested under different conditions. The core of the algorithm is composed of a mechanical subproblem and a thermal subproblem. The iterative scheme is solved by a generalized Newton method. Numerous uniaxial and biaxial tests are reported to assess the performance of the model and algorithm, including variable imposed strain, strain rate, heat exchange properties, and external temperature. In particular, the heat exchange with the environment is the only source of rate-dependency in the model. The reported curves clearly display the interdependence between phase transformation strain and material temperature. The full thermomechanical coupling makes it possible to reproduce the exothermic and endothermic effects during forward and backward phase transformation, respectively. The numerical tests have thus demonstrated that the model can appropriately reproduce the coupled SMA behavior under different loading conditions and rates. Moreover, the algorithm has proved effective and robust. Further developments are being considered, such as the extension of the formulation to the finite-strain setting and the study of the boundary value problem. Keywords: generalized gradient flow, GENERIC formalism, shape memory alloys, thermomechanical coupling
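To convey the flavor of a semi-implicit step for a generalized gradient flow solved by Newton iteration, the toy sketch below performs backward-Euler steps on a scalar energy; the energy, step size, and initial value are stand-ins and bear no relation to the actual SMA thermomechanical potential.

```python
# Toy sketch of a semi-implicit (backward Euler) step for a generalized gradient
# flow dz/dt = -dE/dz, solved with a Newton iteration as in the abstract's
# time-discrete scheme. The quartic-plus-quadratic energy below is a stand-in,
# not the SMA thermomechanical potential.

def dE(z):            # gradient of the toy energy E(z) = z**4 / 4 + z**2 / 2
    return z**3 + z

def d2E(z):           # second derivative of the toy energy
    return 3.0 * z**2 + 1.0

def implicit_step(z_old, dt, tol=1e-10, max_iter=50):
    """Solve z_new = z_old - dt * dE(z_new) by Newton's method."""
    z = z_old
    for _ in range(max_iter):
        residual = z - z_old + dt * dE(z)
        if abs(residual) < tol:
            break
        z -= residual / (1.0 + dt * d2E(z))   # Newton update on the residual
    return z

z, dt = 2.0, 0.1
for _ in range(20):
    z = implicit_step(z, dt)   # the energy decreases monotonically along the discrete flow
    print(round(z, 6))
```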
Procedia PDF Downloads 221
136 Stakeholder-Driven Development of a One Health Platform to Prevent Non-Alimentary Zoonoses
Authors: A. F. G. Van Woezik, L. M. A. Braakman-Jansen, O. A. Kulyk, J. E. W. C. Van Gemert-Pijnen
Abstract:
Background: Zoonoses pose a serious threat to public health and economies worldwide, especially as antimicrobial resistance grows and newly emerging zoonoses can cause unpredictable outbreaks. In order to prevent and control emerging and re-emerging zoonoses, collaboration between the veterinary, human health and public health domains is essential. In reality, however, there is a lack of cooperation between these three disciplines, and uncertainties exist about their tasks and responsibilities. The objective of this ongoing research project (ZonMw funded, 2014-2018) is to develop an online education and communication One Health platform, “eZoon”, for the general public and professionals working in the veterinary, human health and public health domains to support the risk communication of non-alimentary zoonoses in the Netherlands. The main focus is on education and communication in times of outbreak as well as in daily non-outbreak situations. Methods: A participatory development approach was used in which stakeholders from the veterinary, human health and public health domains participated. Key stakeholders were identified using business modeling techniques previously used for the design and implementation of antibiotic stewardship interventions; identification consisted of a literature scan, expert recommendations, and snowball sampling. We used a stakeholder salience approach to rank stakeholders according to their power, legitimacy, and urgency. Semi-structured interviews were conducted with stakeholders (N=20) from all three disciplines to identify current problems in risk communication and stakeholder values for the One Health platform. Interviews were transcribed verbatim and coded inductively by two researchers. Results: The following key values were identified (among others): (a) a need for improved awareness in the veterinary and human health domains of each other’s fields; (b) information exchange between veterinary and human health, particularly at a regional level; (c) legal regulations need to match daily practice; (d) professionals and the general public need to be addressed separately, using tailored language and information; (e) information needs to be of value to professionals (relevant, important, accurate, and carrying financial or other important consequences if ignored) in order to be picked up; and (f) a need for accurate information from trustworthy, centrally organised sources to inform the general public. Conclusion: By applying a participatory development approach, we gained insights from multiple perspectives into the main problems of current risk communication strategies in the Netherlands and into stakeholder values. Next, we will continue the iterative development of the One Health platform by presenting key values to stakeholders for validation and ranking, which will guide further development. We will develop a communication platform with a serious game in which professionals at the regional level will be trained in shared decision making in time-critical outbreak situations, a smart Question & Answer (Q&A) system for the general public tailored towards different user profiles, and social media to inform the general public adequately during outbreaks. Keywords: ehealth, one health, risk communication, stakeholder, zoonosis
Procedia PDF Downloads 286
135 In-Flight Aircraft Performance Model Enhancement Using Adaptive Lookup Tables
Authors: Georges Ghazi, Magali Gelhaye, Ruxandra Botez
Abstract:
Over the years, the Flight Management System (FMS) has experienced a continuous improvement of its many features, to the point of becoming the pilot’s primary interface for flight planning operations on the airplane. With the assistance of the FMS, the concept of distance and time has been completely revolutionized, providing the crew members with the determination of the optimized route (or flight plan) from the departure airport to the arrival airport. To accomplish this function, the FMS needs an accurate Aircraft Performance Model (APM) of the aircraft. In general, the APMs that equip most modern FMSs are established before the entry into service of an individual aircraft and result from the combination of a set of ordinary differential equations and a set of performance databases. Unfortunately, an aircraft in service is constantly exposed to dynamic loads that degrade its flight characteristics. These degradations have two main origins: airframe deterioration (control surface rigging, seals missing or damaged, etc.) and engine performance degradation (fuel consumption increase for a given thrust). Thus, after several years of service, the performance databases and the APM associated with a specific aircraft are no longer representative enough of the actual aircraft performance. It is important to monitor the trend of the performance deterioration and correct the uncertainties of the aircraft model in order to improve the accuracy of the flight management system predictions. The basis of this research lies in the new ability to continuously update an Aircraft Performance Model (APM) during flight using an adaptive lookup table technique. This methodology was developed and applied to the well-known Cessna Citation X business aircraft. For the purpose of this study, a level D Research Aircraft Flight Simulator (RAFS) was used as a test aircraft. According to the Federal Aviation Administration, level D is the highest certification level for flight dynamics modeling. Basically, using data available in the Flight Crew Operating Manual (FCOM), a first APM describing the variation of the engine fan speed and aircraft fuel flow with respect to flight conditions was derived. This model was next improved using the proposed methodology. To do that, several cruise flights were performed using the RAFS. An algorithm was developed to frequently sample the aircraft sensor measurements during the flight and compare the model predictions with the actual measurements. Based on these comparisons, a correction was applied to the APM in order to minimize the error between the predicted data and the measured data. In this way, as the aircraft flies, the APM is continuously enhanced, making the FMS more and more precise and the prediction of trajectories more realistic and more reliable. The results obtained are very encouraging. Indeed, using the tables initialized with the FCOM data, only a few iterations were needed to reduce the fuel flow prediction error from an average relative error of 12% to 0.3%. Similarly, the FCOM prediction error on the engine fan speed was reduced from a maximum deviation of 5.0% to 0.2% after only ten flights. Keywords: aircraft performance, cruise, trajectory optimization, adaptive lookup tables, Cessna Citation X
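A minimal sketch of the adaptive lookup-table idea is given below: each in-flight sample nudges the stored fuel-flow value of the nearest (altitude, Mach) cell toward the measurement. The grid resolution, learning rate, units, and sample values are illustrative assumptions, not the authors' implementation or the Citation X data.

```python
# Sketch of an error-driven lookup-table update: after each flight sample, move
# the nearest (altitude, Mach) cell a fraction of the way toward the measured
# fuel flow, so repeated samples progressively correct the prior table.
import numpy as np

altitudes = np.linspace(30000.0, 45000.0, 16)    # ft (hypothetical grid)
machs = np.linspace(0.70, 0.90, 11)
fuel_flow_table = np.full((altitudes.size, machs.size), 1500.0)  # lb/h, crude prior

def update_table(table, alt, mach, measured_ff, learning_rate=0.2):
    """Nudge the nearest table cell toward the measurement; return the new value."""
    i = np.abs(altitudes - alt).argmin()
    j = np.abs(machs - mach).argmin()
    table[i, j] += learning_rate * (measured_ff - table[i, j])
    return table[i, j]

# Simulated in-flight samples: (altitude, Mach, measured fuel flow)
for alt, mach, ff in [(41000, 0.80, 1612.0), (41000, 0.80, 1598.0), (41000, 0.80, 1605.0)]:
    print(round(update_table(fuel_flow_table, alt, mach, ff), 1))
```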
Procedia PDF Downloads 264
134 Policy Evaluation of Republic Act 9502 “Universally Accessible Cheaper and Quality Medicines Act of 2008”
Authors: Trina Isabel D. Santiago, Juan Raphael M. Perez, Maria Angelica O. Soriano, Teresita B. Suing, Jumee F. Tayaban
Abstract:
To achieve universal healthcare for everyone, the World Health Organization has emphasized the importance of National Medicines Policies for increased accessibility and utilization of high-quality and affordable medications. In the Philippines, significant challenges have been identified surrounding the sustainability of essential medicines, resulting in limited access due to high costs and the market dominance and monopoly of multinational companies (MNCs) in the Philippine pharmaceutical industry. These identified challenges have been addressed by several initiatives, such as the Philippine National Drug Policy and the Generics Act of 1988 (Republic Act 6675), which attempted to reduce drug prices. Despite these efforts, concerns about drug accessibility and affordability persist; hence, Republic Act 9502 was enacted. This paper attempts to review RA 9502 in its pursuit of making medicines more affordable for Filipinos, analyze and critique the problems and challenges associated with the law, and provide recommendations to address the identified problems and challenges. A literature search and review, as well as an analysis of the law, were conducted to evaluate the policy. RA 9502 recognizes the importance of market competition in drug price reduction and quality medicine accessibility. Contentious issues prior to the enactment of the law include 1) parallel importation, with critics pointing out that the drug price will depend on the global market price, 2) contrasting approaches in the drafting of the law, as the House version focused on medicine price control while the Senate version prioritized market competition, and 3) MNCs opposing the amendments with concerns about discrimination, constitutional violations, and noncompliance with international treaty obligations. There are also criticisms of and challenges to the implementation of the law in terms of content or modeling, interpretation and implementation, and other external factors or hindrances. The law has been criticized for its narrow scope, as it only covers specific essential medicines and provides no cooperation with the national health insurance program. Moreover, the law has sections taking advantage of the TRIPS flexibilities, which disallow smaller countries from reaping the benefits of those flexibilities. The sanctions and penalties play an insignificant role in implementation, as they only claim a small portion of the income of MNCs. Proposed recommendations for policy improvement include aligning existing legislation through strengthened price regulation and expanded law coverage, strengthening penalties to promote adherence to the law, and promoting research and development to encourage and support local initiatives. Through these comprehensive recommendations, the issues surrounding the policy can be addressed, and the goal of enhancing the affordability and accessibility of medicines in the country can be achieved. Keywords: drug accessibility, drug affordability, price regulation, Republic Act 9502
Procedia PDF Downloads 46
133 Selection and Preparation of High Performance, Natural and Cost-Effective Hydrogel as a Bio-Ink for 3D Bio-Printing and Organ on Chip Applications
Authors: Rawan Ashraf, Ahmed E. Gomaa, Gehan Safwat, Ayman Diab
Abstract:
Background: Three-dimensional (3D) bio-printing has become a versatile and powerful method for generating a variety of biological constructs, including bone or extracellular matrix scaffolds, endo- or epithelial and muscle tissue, as well as organoids. Aim of the study: To fabricate a low-cost DIY 3D bio-printer to produce 3D bio-printed products such as antimicrobial packaging or multi-organs on chips. We demonstrate the alignment between two types of 3D printer technology (3D bio-printer and DLP) in the fabrication of multi-organ-on-a-chip (multi-OoC) devices. Methods: First, design and fabrication of the syringe unit for the modification of an off-the-shelf 3D printer; then preparation of a hydrogel based on the natural polymers sodium alginate and gelatin, followed by acquisition of the cell suspension and modeling of the desired 3D structure; then preparation for 3D printing, after which cell-free and cell-laden hydrogels went through the printing process at room temperature under sterile conditions; and finally, the post-printing curing process and the study of the printed structures’ physical and chemical characteristics. The hard scaffold of the organ-on-chip devices was designed and fabricated using the DLP 3D printer, following approaches similar to those used for microfluidics system fabrication. Results: The fabricated bio-ink was based on a hydrogel polymer mix of sodium alginate and gelatin, 15% to 0.5%, respectively. The 3D printing process was later conducted using a higher percentage of alginate-based hydrogels because of their viscosity and controllable crosslinking, unlike the thermal crosslinking of gelatin. The hydrogels were colored to simulate the representation of two types of cells. The adaptation of the hard scaffold, whether for the microfluidics system or for hard tissues, was achieved with the DLP 3D printer together with formulated natural bioactive essential oils that have antimicrobial activity, followed by in situ printing of three complex layers of soft hydrogel as a cell-free bio-ink to simulate a real-life tissue engineering process. The final product was a proof of concept for a rapid 3D cell culturing approach that uses an engineered hard scaffold along with soft tissues; thus, several applications were offered as products of the current prototype, including the organ-on-chip as a successful integration between the DLP and 3D bio-printers. Conclusion: Multiple designs for multi-organ-on-a-chip (multi-OoC) devices were produced in our study, with a main focus on the low-cost fabrication of such technology and its potential to revolutionize human health research and development. We describe circumstances in which multi-organ models are useful after briefly examining the requirement for full multi-organ models with a systemic component. Following that, we review current multi-OoC platforms, such as integrated body-on-a-chip devices and modular techniques that use linked organ-specific modules. Keywords: 3d bio-printer, hydrogel, multi-organ on chip, bio-inks
Procedia PDF Downloads 174
132 Vision and Challenges of Developing VR-Based Digital Anatomy Learning Platforms and a Solution Set for 3D Model Marking
Authors: Gizem Kayar, Ramazan Bakir, M. Ilkay Koşar, Ceren U. Gencer, Alperen Ayyildiz
Abstract:
Anatomy classes are crucial for the general education of medical students, yet learning anatomy is quite challenging and requires the memorization of thousands of structures. In traditional teaching methods, learning materials are still based on books, anatomy mannequins, or videos. This results in many important structures being forgotten after several years. However, more interactive teaching methods like virtual reality, augmented reality, gamification, and motion sensors are becoming more popular, since such methods ease the way we learn and keep the information in mind for longer terms. During our study, we designed a virtual reality based digital head anatomy platform to investigate whether a fully interactive anatomy platform is effective for learning anatomy and to understand the level of teaching and learning optimization. The head is one of the most complicated human anatomy structures, with thousands of tiny, unique structures. This makes head anatomy one of the most difficult parts to understand during class sessions. Therefore, we developed a fully interactive digital tool with 3D model marking, quiz structures, 2D/3D puzzle structures, and VR support so as to integrate the power of VR and gamification. The project has been developed in the Unity game engine with an HTC Vive Cosmos VR headset. The head anatomy 3D model has been selected with full skeletal, muscular, integumentary, head, teeth, lymph, and vein systems. The biggest issue during development was the complexity of our model and the marking of it in the 3D world system. 3D model marking requires access to each unique structure in the listed subsystems, which means hundreds of markings need to be made. Some parts of our 3D head model were monolithic. This is why we worked on dividing such parts into subparts, which is very time-consuming. In order to subdivide monolithic parts, one must use an external modeling tool. However, such tools generally come with high learning curves, and seamless division is not ensured. The second option was to attach tiny colliders to all unique items for mouse interaction. However, outside colliders which cover inner trigger colliders cause overlapping, and these colliders repel each other. The third option was raycasting. However, due to its view-based nature, raycasting has some inherent problems. As the model rotates, the view direction changes very frequently, and directional computations become even harder. This is why, finally, we settled on the local coordinate system. By taking the pivot point of the model into consideration (the back of the nose), each sub-structure is marked with its own local coordinate with respect to the pivot. After converting the mouse position to the world position and checking its relation to the corresponding structure’s local coordinate, we were able to mark all points correctly. The advantage of this method is its applicability and accuracy for all types of monolithic anatomical structures. Keywords: anatomy, e-learning, virtual reality, 3D model marking
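The local-coordinate marking test can be illustrated language-agnostically as follows: a picked world-space point is mapped into the model's pivot-relative frame and matched against stored structure coordinates (in Unity this role would typically be played by Transform.InverseTransformPoint). The structure positions, transform, and tolerance below are hypothetical placeholders, not the platform's actual data.

```python
# Sketch of the pivot-relative (local coordinate) hit test described above: invert
# the model's rigid transform, then find the nearest labeled anatomical structure.
import numpy as np

# Labeled structures stored in local (pivot-relative) coordinates; values are hypothetical.
structures = {"nasal_bone": np.array([0.0, 0.02, 0.04]),
              "zygomatic_arch": np.array([0.07, 0.00, 0.01])}

def world_to_local(point_world, model_rotation, model_position, model_scale=1.0):
    """Invert the model transform: local = R^T @ (world - t) / s."""
    return model_rotation.T @ (point_world - model_position) / model_scale

def pick_structure(point_world, model_rotation, model_position, tolerance=0.01):
    """Return the nearest structure name if the picked point lies within tolerance."""
    local = world_to_local(point_world, model_rotation, model_position)
    name, pos = min(structures.items(), key=lambda kv: np.linalg.norm(kv[1] - local))
    return name if np.linalg.norm(pos - local) < tolerance else None

identity = np.eye(3)
print(pick_structure(np.array([0.0, 1.52, 0.54]), identity, np.array([0.0, 1.5, 0.5])))
```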
Procedia PDF Downloads 100
131 State, Public Policies, and Rights: Public Expenditure and Social and Welfare Policies in America, as Opposed to Argentina
Authors: Mauro Cristeche
Abstract:
This paper approaches the intervention of the American State in the social arena and the modeling of the rights system from the Argentinian experience, by observing the characteristics of its federal budgetary system, the evolution of social public spending and welfare programs in recent years, labor and poverty statistics, and the changes in the labor market structure. The analysis seeks to combine different methodologies and sources: in-depth interviews with specialists, analysis of theoretical and mass-media material, and statistical sources. Among the results, it could be mentioned that the tendency toward state interventionism (what has been called ‘nationalization of social life’) is quite evident in the United States and manifests itself in multiple forms. The bibliography consulted and the experts interviewed pointed out this increase of the state presence in historical terms (beyond short-term setbacks): growth in public spending, fiscal pressure, public employment, protective and control mechanisms, the extension of welfare policies to the poor sectors, etc. In fact, despite the significant differences between both countries, the United States and Argentina have common patterns of behavior in terms of the aforementioned phenomena. On the other hand, dissimilarities are also important. Some of them are determined by each country's own political history. The influence of political parties on the economic model seems more decisive in the United States than in Argentina, where the tendency toward state interventionism is more stable. The centrality of health spending is evident in America, while in Argentina the discussion is more concentrated on the social security system and public education. The biggest problem of the labor market in the United States is disqualification as a consequence of technological development, while in Argentina it is a result of the labor market's weakness. Another big difference is the huge American public spending on defense. In addition, the more federal character of the American State is also a factor of differential analysis against a centralized Argentine state. American public employment (around 10%) is considerably lower than the Argentinian (around 18%). The social statistics show differences, but inequality and poverty have been growing as a trend in the last decades in both countries. According to public figures, poverty represents 14% in the United States and 33% in Argentina. American public spending is significant (welfare spending and total public spending represent around 12% and 34% of GDP, respectively), but a bit lower than the Latin American or European average. In both cases, the tendency toward underemployment and disqualification unemployment has not yet assumed serious gravity. Probably one of the most important aspects of the analysis is that private initiative and public intervention are much more intertwined in the United States, which makes state intervention more ‘fuzzy’, while in Argentina the difference is clearer. Finally, the power of capital accumulation in the United States and, more specifically, of its industrial and services sectors, which continue to be the engine of the economy, marks a great difference with Argentina, which relies on its agro-industrial power and its public sector. Keywords: state intervention, welfare policies, labor market, system of rights, United States of America
Procedia PDF Downloads 131
130 An Adaptive Decomposition for the Variability Analysis of Observation Time Series in Geophysics
Authors: Olivier Delage, Thierry Portafaix, Hassan Bencherif, Guillaume Guimbretiere
Abstract:
Most observation data sequences in geophysics can be interpreted as resulting from the interaction of several physical processes at several time and space scales. As a consequence, measurement time series in geophysics often have characteristics of non-linearity and non-stationarity, thereby exhibiting strong fluctuations at all time scales and requiring a time-frequency representation to analyze their variability. Empirical Mode Decomposition (EMD) is a relatively new technique that is part of a more general signal processing method called the Hilbert-Huang transform. This analysis method turns out to be particularly suitable for non-linear and non-stationary signals and consists in decomposing a signal, in an auto-adaptive way, into a sum of oscillating components named IMFs (Intrinsic Mode Functions), thereby acting as a bank of bandpass filters. The advantages of the EMD technique are that it is entirely data-driven and that it provides the principal variability modes of the dynamics represented by the original time series. However, the main limiting factor is the frequency resolution, which may give rise to the mode-mixing phenomenon, where the spectral contents of some IMFs overlap each other. To overcome this problem, J. Gilles proposed an alternative entitled “Empirical Wavelet Transform” (EWT), which consists in building a bank of filters from the segmentation of the original signal’s Fourier spectrum. The method is based on the idea utilized in the construction of both Littlewood-Paley and Meyer’s wavelets. The heart of the method lies in the segmentation of the Fourier spectrum based on local maxima detection in order to obtain a set of non-overlapping segments. Because it is linked to the Fourier spectrum, the frequency resolution provided by EWT is higher than that provided by EMD and therefore makes it possible to overcome the mode-mixing problem. On the other hand, while the EWT technique is able to detect the frequencies involved in the fluctuations of the original time series, EWT does not allow the detected frequencies to be associated with a specific mode of variability as EMD does. Because EMD is closer to the observation of physical phenomena than EWT, we propose here a new technique called EAWD (Empirical Adaptive Wavelet Decomposition), based on the coupling of the EMD and EWT techniques, which uses the spectral density content of the IMFs to optimize the segmentation of the Fourier spectrum required by EWT. In this study, the EMD and EWT techniques are described, and then the EAWD technique is presented. A comparison of the results obtained respectively by the EMD, EWT and EAWD techniques on time series of ozone total columns recorded at Reunion Island over the 1978-2019 period is discussed. This study was carried out as part of the SOLSTYCE project, dedicated to the characterization and modeling of the underlying dynamics of time series issued from complex systems in atmospheric sciences. Keywords: adaptive filtering, empirical mode decomposition, empirical wavelet transform, filter banks, mode-mixing, non-linear and non-stationary time series, wavelet
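The spectrum-segmentation step at the heart of EWT can be sketched as follows on a synthetic signal: detect the dominant local maxima of the Fourier magnitude spectrum and place band boundaries midway between neighbouring maxima. In the proposed EAWD, the segmentation would instead be guided by the spectral content of the EMD IMFs; that coupling is not reproduced here, and all signal parameters are illustrative.

```python
# Sketch of the maxima-based Fourier-spectrum segmentation underlying EWT,
# applied to a synthetic three-tone signal. Each resulting segment would define
# one empirical wavelet band; the EMD-guided refinement of EAWD is not shown.
import numpy as np
from scipy.signal import find_peaks

fs = 200.0                                   # sampling frequency (Hz), synthetic example
t = np.arange(0, 20, 1.0 / fs)
signal = (np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.sin(2 * np.pi * 7.0 * t)
          + 0.2 * np.sin(2 * np.pi * 23.0 * t))

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)

peaks, _ = find_peaks(spectrum)
main_peaks = peaks[np.argsort(spectrum[peaks])[-3:]]   # keep the 3 dominant maxima
main_peaks.sort()

# Boundaries: zero, midpoints between consecutive dominant maxima, Nyquist.
boundaries = np.concatenate(([0.0],
                             (freqs[main_peaks[:-1]] + freqs[main_peaks[1:]]) / 2.0,
                             [freqs[-1]]))
print("segment boundaries (Hz):", np.round(boundaries, 2))
```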
Procedia PDF Downloads 137
129 Tip60 Histone Acetyltransferase Activators as Neuroepigenetic Therapeutic Modulators for Alzheimer’s Disease
Authors: Akanksha Bhatnagar, Sandhya Kortegare, Felice Elefant
Abstract:
Context: Alzheimer's disease (AD) is a neurodegenerative disorder characterized by progressive cognitive decline and memory loss. The cause of AD is not fully understood, but it is thought to arise from a combination of genetic, environmental, and lifestyle factors. One of the hallmarks of AD is the loss of neurons in the hippocampus, a brain region that is important for memory and learning. This loss of neurons is thought to be caused by a decrease in histone acetylation, a process that regulates gene expression. Research Aim: The aim of the study was to develop small molecule compounds that can enhance the activity of Tip60, a histone acetyltransferase that is important for memory and learning. Methodology/Analysis: The researchers used in silico structural modeling and a pharmacophore-based virtual screening approach to design and synthesize small molecule compounds strongly predicted to target and enhance Tip60's HAT activity. The compounds were then tested in vitro and in vivo to assess their ability to enhance Tip60 activity and rescue cognitive deficits in AD models. Findings: The researchers found that several of the compounds were able to enhance Tip60 activity and rescue cognitive deficits in AD models. The compounds were also designed to cross the blood-brain barrier, which is an important factor for the development of potential AD therapeutics. Theoretical Importance: The findings of this study suggest that Tip60 HAT activators have the potential to be developed as therapeutic agents for AD. The compounds are specific to Tip60, which suggests that they may have fewer side effects than other HDAC inhibitors. Additionally, the compounds are able to cross the blood-brain barrier, which is a major hurdle for the development of AD therapeutics. Data Collection: The study collected data from a variety of sources, including in vitro assays and animal models. The in vitro assays assessed the ability of the compounds to enhance Tip60 activity using histone acetyltransferase (HAT) enzyme assays and chromatin immunoprecipitation assays. Animal models were used to assess the ability of the compounds to rescue cognitive deficits in AD models using a variety of behavioral tests, including locomotor ability, sensory learning, and recognition tasks. Human clinical trials will be used to assess the safety and efficacy of the compounds in humans. Questions: The question addressed by this study was whether Tip60 HAT activators could be developed as therapeutic agents for AD. Conclusions: The findings of this study suggest that Tip60 HAT activators have the potential to be developed as therapeutic agents for AD. The compounds are specific to Tip60, which suggests that they may have fewer side effects than other HDAC inhibitors. Additionally, the compounds are able to cross the blood-brain barrier, which is a major hurdle for the development of AD therapeutics. Further research is needed to confirm the safety and efficacy of these compounds in humans. Keywords: Alzheimer's disease, cognition, neuroepigenetics, drug discovery
Procedia PDF Downloads 75
128 Perception of Corporate Social Responsibility and Enhancing Compassion at Work through Sense of Meaningfulness
Authors: Nikeshala Weerasekara, Roshan Ajward
Abstract:
In the contemporary business environment, given the stringent scrutiny of corporate behavior, organizations are under pressure to develop and implement solid overarching Corporate Social Responsibility (CSR) strategies. In that milieu, in order to differentiate themselves from competitors and maintain stakeholder confidence, banks spend millions of dollars on CSR programmes. However, knowledge of how non-western bank employees perceive such activities is inconclusive. At the same time, only recently have researchers shifted their focus to the positive effects of compassion at work and the organizational conditions under which it arises. Nevertheless, mediation mechanisms between CSR and compassion at work have not been adequately examined, leaving a vacuum to be explored. Although finding a purpose in work that is greater than the extrinsic outcomes of the work is important to employees, meaningful work has not been examined adequately. Thus, in addition to examining the direct relationship between CSR and compassion at work, this study examined the mediating capability of meaningful work between these variables. Specifically, the researcher explored how CSR enables employees to sense work as meaningful, which in turn would enhance their level of compassion at work. Hypotheses were developed to examine the direct relationship between CSR and compassion at work and the mediating effect of meaningful work on the relationship between CSR and compassion at work. Both Social Identity Theory (SIT) and Social Exchange Theory (SET) were used to theoretically support the relationships. The sample comprised 450 respondents covering different levels of the bank. A convenience sampling strategy was used to secure responses from 13 local licensed commercial banks in Sri Lanka. Data were collected using a structured questionnaire which was developed based on a comprehensive review of the literature and refined using both expert opinions and a pilot survey. Structural equation modeling using Smart Partial Least Squares (PLS) was utilized for data analysis. Findings indicate a positive and significant (p < .05) relationship between CSR and compassion at work. It was also found that meaningful work partially mediates the relationship between CSR and compassion at work. Based on the findings, it is concluded that bank employees’ perceptions of CSR engagement not only directly influence compassion at work but also influence it through meaningful work. This implies that employees regard working for a socially responsible bank as creating greater meaningfulness of work, which encourages them to remain with the organization and, in turn, triggers a higher level of compassion at work. Utilizing both SIT and SET to explain the relationships between CSR and compassion at work adds to the theoretical significance of the study, which enhances the existing literature on CSR and compassion at work and adds insights into the mediating capability of psychologically related variables such as meaningful work. This study is expected to have significant policy implications in terms of increasing compassion at work, where managers must understand the importance of including CSR activities in their strategy in order to thrive. Finally, it provides evidence of the suitability of using Smart PLS to test models with mediating relationships involving non-normal data. Keywords: compassion at work, corporate social responsibility, employee commitment, meaningful work, positive affect
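To illustrate the mediation logic tested above (CSR to meaningful work to compassion at work), the sketch below computes a bootstrapped product-of-coefficients indirect effect using ordinary least squares on synthetic data; it is a simplified stand-in for the Smart PLS analysis reported in the abstract, and all variable names and values are hypothetical.

```python
# Bootstrapped indirect-effect sketch (OLS-based, not Smart PLS): a-path is
# CSR -> meaningful work, b-path is meaningful work -> compassion controlling
# for CSR; the indirect effect is a * b, with a percentile confidence interval.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 450
csr = rng.normal(size=n)
meaningful_work = 0.5 * csr + rng.normal(size=n)
compassion = 0.4 * meaningful_work + 0.2 * csr + rng.normal(size=n)
df = pd.DataFrame({"csr": csr, "meaningful_work": meaningful_work, "compassion": compassion})

def indirect_effect(d):
    a = np.polyfit(d["csr"], d["meaningful_work"], 1)[0]          # a-path slope
    X = np.column_stack([np.ones(len(d)), d["csr"], d["meaningful_work"]])
    b = np.linalg.lstsq(X, d["compassion"], rcond=None)[0][2]     # b-path, CSR controlled
    return a * b

boot = [indirect_effect(df.sample(frac=1.0, replace=True, random_state=i)) for i in range(2000)]
low, high = np.percentile(boot, [2.5, 97.5])
print(f"bootstrapped indirect effect 95% CI: [{low:.3f}, {high:.3f}]")  # CI excluding zero suggests mediation
```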
Procedia PDF Downloads 126