Search results for: demand models
703 Modeling the Acquisition of Expertise in a Sequential Decision-Making Task
Authors: Cristóbal Moënne-Loccoz, Rodrigo C. Vergara, Vladimir López, Domingo Mery, Diego Cosmelli
Abstract:
Our daily interaction with computational interfaces is plagued with situations in which we go from inexperienced users to experts through self-motivated exploration of the same task. In many of these interactions, we must learn to find our way through a sequence of decisions and actions before obtaining the desired result. For instance, when drawing cash from an ATM, choices are presented in a step-by-step fashion so that a specific sequence of actions must be performed in order to produce the expected outcome. But, as they become experts in the use of such interfaces, do users adopt specific search and learning strategies? Moreover, if so, can we use this information to follow the process of expertise development and, eventually, predict future actions? This would be a critical step towards building truly adaptive interfaces that can facilitate interaction at different moments of the learning curve. Furthermore, it could provide a window into potential mechanisms underlying decision-making behavior in real-world scenarios. Here we tackle this question using a simple game interface that instantiates a 4-level binary decision tree (BDT) sequential decision-making task. Participants have to explore the interface and discover an underlying concept-icon mapping in order to complete the game. We develop a Hidden Markov Model (HMM)-based approach whereby a set of stereotyped, hierarchically related search behaviors act as hidden states. Using this model, we are able to track the decision-making process as participants explore, learn and develop expertise in the use of the interface. Our results show that partitioning the problem space into such stereotyped strategies is sufficient to capture a host of exploratory and learning behaviors. Moreover, using the modular architecture of stereotyped strategies as a Mixture of Experts, we are able to simultaneously ask the experts about the user's most probable future actions. We show that for those participants who learn the task, it becomes possible to predict their next decision, above chance, approximately halfway through the game. Our long-term goal is, on the basis of a better understanding of real-world decision-making processes, to inform the construction of interfaces that can establish dynamic conversations with their users in order to facilitate the development of expertise.
Keywords: behavioral modeling, expertise acquisition, hidden Markov models, sequential decision-making
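As a rough illustration of the approach this abstract describes, the sketch below runs HMM forward filtering over a sequence of binary decisions and makes a Mixture-of-Experts style prediction of the next action. The two-state transition and emission matrices are invented placeholders, not the authors' fitted model.

```python
import numpy as np

A = np.array([[0.90, 0.10],      # transition probabilities between two
              [0.20, 0.80]])     # hypothetical search strategies (hidden states)
B = np.array([[0.70, 0.30],      # emission probabilities: P(action | strategy)
              [0.25, 0.75]])     # for two possible actions (left/right branch)
pi = np.array([0.5, 0.5])        # uniform initial state distribution

def forward_filter(actions):
    """Return the filtered hidden-state distribution after each observed action."""
    alpha = pi * B[:, actions[0]]
    alpha /= alpha.sum()
    beliefs = [alpha]
    for a in actions[1:]:
        alpha = (alpha @ A) * B[:, a]
        alpha /= alpha.sum()
        beliefs.append(alpha)
    return np.array(beliefs)

def predict_next_action(actions):
    """Mixture-of-Experts style prediction: each hidden strategy 'expert'
    votes with its emission distribution, weighted by the current belief."""
    belief = forward_filter(actions)[-1]
    return (belief @ A) @ B      # probability of each next action

observed = [0, 0, 1, 0]          # a toy sequence of binary decisions
print(predict_next_action(observed))
```

With a real fitted model, prediction quality would be tracked over the course of the game, mirroring the above-chance prediction the authors report from midway onward.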
Procedia PDF Downloads 252
702 Environmental Related Mortality Rates through Artificial Intelligence Tools
Authors: Stamatis Zoras, Vasilis Evagelopoulos, Theodoros Staurakas
Abstract:
The association between elevated air pollution levels and extreme climate conditions (temperature, particulate matter, ozone levels, etc.) and mental consequences has recently been the focus of a significant number of studies. It varies depending on the time of the year it occurs, either during hot or cold periods, but specifically when extreme air pollution and weather events are observed, e.g., air pollution episodes and persistent heatwaves. It also varies spatially due to the different effects of air quality and climate extremes on human health in metropolitan versus rural areas. An air pollutant concentration and a climate extreme take a different form of impact if the focus area is the countryside or the urban environment. In the built environment, the climate extreme effects are driven through the formed microclimate, which must be studied more efficiently. Variables such as biology and age group may be implicated by different environmental factors, such as increased air pollution/noise levels and overheating of buildings, in comparison to rural areas. Gridded air quality and climate variables derived from the land surface observation network of West Macedonia in Greece will be analysed against mortality data in a spatial format in the region of West Macedonia. Artificial intelligence (AI) tools will be used for data correction and for prediction of health deterioration with climatic conditions and air pollution at the local scale. This would reveal the built environment implications against the countryside. The air pollution and climatic data have been collected from meteorological stations and span the period from 2000 to 2009. These will be projected against the mortality rate data in daily, monthly, seasonal and annual grids. The grids will be operated as AI-based warning models for decision makers in order to map the health conditions in rural and urban areas and to ensure improved awareness of the healthcare system by taking into account the predicted changing climate conditions. Gridded data of climate conditions and air quality levels against mortality rates will be presented as AI-analysed gridded indicators of the implicated variables. An AI-based gridded warning platform at local scales is then developed as a future system awareness platform at the regional level.
Keywords: air quality, artificial intelligence, climatic conditions, mortality
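To make the prediction step concrete, here is a minimal sketch of the kind of gridded model the abstract describes: a regressor trained on per-cell climate and air quality variables to predict a mortality rate. The feature names, units and synthetic data are illustrative assumptions, not the West Macedonia dataset.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_cells = 500
X = np.column_stack([
    rng.normal(12, 8, n_cells),    # daily mean temperature (°C), assumed
    rng.gamma(2, 10, n_cells),     # PM10 concentration (µg/m³), assumed
    rng.gamma(2, 15, n_cells),     # ozone concentration (µg/m³), assumed
])
# Synthetic mortality rate with a weak pollution signal, for illustration only.
y = 5 + 0.1 * X[:, 1] + 0.05 * X[:, 2] + rng.normal(0, 1, n_cells)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("R² on held-out grid cells:", model.score(X_te, y_te))
```

In the platform described, such a model would be refit per grid (daily, monthly, seasonal, annual) and its predictions surfaced as warning indicators.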
Procedia PDF Downloads 113
701 Neoliberal Settler City: Socio-Spatial Segregation, Livelihood of Artists/Craftsmen in Delhi
Authors: Sophy Joseph
Abstract:
The study uses the concept of the ‘settler city’ to understand the nature of peripheralization that a neoliberal city initiates. The settler city designs powerless communities without inherent rights, title and sovereignty. Kathputli Colony, home to generations of artists/craftsmen who have kept the heritage of arts/crafts alive, has undergone eviction of its population from urban space. The proposed study, ‘Neoliberal Settler City: Socio-spatial segregation and livelihood of artists/craftsmen in Delhi’, would problematize the settler city as a colonial technology. The colonial regime has ‘erased’ the ‘unwanted’ as primitive and swept them to peripheries in the city. This study would also highlight how structural change in the political economy has undermined their crafts/arts by depriving them of practicing/performing them with dignity in urban space. The interconnections between citizenship and the In-Situ Public-Private Partnership in the Kathputli rehabilitation have become part of academic exercise. However, a comprehensive study connecting the inherent characteristics of the neoliberal settler city, the trajectory of the political economy of unorganized workers (artists/craftsmen), and the legal containment and exclusion leading to dispossession and marginalization of communities from the city site is relevant to contextualize the trauma of spatial segregation. This study would deal with the politically, culturally, socially and economically dominant behavior of the structure in state formation, accumulation of property and design of urban space, fueled by the segregation of marginalized/unorganized communities and the disowning of the ‘footloose proletariat’, the migrant workforce. The methodology of the study involves qualitative research amongst the communities, and the fieldwork (oral testimonies and personal accounts) becomes the primary material to theorize the realities. Secondary materials, in the form of archival materials from various archives about the historical evolution of Delhi as a planned city, would be used. As the study also adopts a ‘narrative approach’ in qualitative study, the life experiences of craftsmen/artists as performers and the emotional trauma of losing their livelihood and space form an important record for understanding the instability and insecurity that marginalization and development inflict on the urban poor. The study attempts to prove that though there was a change in political tradition from colonialism to constitutional democracy, the new state still follows the policy of segregation and dispossession of the communities. It is this dispossession from the space, deprivation of livelihood and non-consultative process in rehabilitation that reflect the neoliberal approach of the state and constitute critical findings in the study. This study would entail a critical spatial lens analyzing ethnographic and sociological data, representational practices and development debates to understand ‘urban otherization’ against craftsmen/artists. It seeks to develop a conceptual framework for understanding the resistance of communities against the primitivity attached to them and for decolonizing the city. This would help to contextualize the demand for declaring Kathputli Colony a ‘heritage artists village’. The conceptualization and contextualization would help to argue for the communities' right to the city, and for collective rights to property, services and self-determination. The aspirations of the communities also help to draw a normative orientation towards decolonization.
It is important to study this site as part of the ‘inclusive cities’ framework because cities are rarely noted as important sites of ‘community struggles’.
Keywords: neoliberal settler city, socio-spatial segregation, the livelihood of artists/craftsmen, dispossession of indigenous communities, urban planning and cultural uprooting
Procedia PDF Downloads 130
700 Personality Moderates the Relation Between Mother's Emotional Intelligence and Young Children's Emotion Situation Knowledge
Authors: Natalia Alonso-Alberca, Ana I. Vergara
Abstract:
From the very first years of their life, children are confronted with situations in which they need to deal with emotions. The family provides the first emotional experiences, and it is in the family context that children usually take their first steps towards acquiring emotion knowledge. Parents play a key role in this important task, helping their children develop the emotional skills they will need in challenging situations throughout their lives. Specifically, mothers are models imitated by their children. They create specific spatial and temporal contexts in which children learn about emotions, their causes, consequences, and complexity. This occurs not only through what mothers say or do directly to the child. Rather, it occurs, to a large extent, through the example that they set using their own emotional skills. The aim of the current study was to analyze how maternal abilities to perceive and to manage emotions influence children's emotion knowledge, specifically their emotion situation knowledge, taking into account the role played by the mother's personality and the time spent together, and controlling for the effect of age, sex and the child's verbal abilities. Participants were 153 children from 4 schools in Spain, and their mothers. The children's (41.8% girls) age range was 35-72 months. Mothers (N = 140) were aged 27-49 years (M = 38.7). Twelve mothers had more than one child participating in the study. The main variables were the child's emotion situation knowledge (ESK), measured by the Emotion Matching Task (EMT), and receptive language, measured with the Picture Vocabulary Test. The mothers' emotional intelligence (EI), assessed through the Mayer, Salovey, Caruso Emotional Intelligence Test (MSCEIT), and personality, assessed with the Big Five Inventory, were also analyzed. The results showed that the predictive power of maternal emotional skills on ESK was moderated by the mother's personality, affecting both the direction and size of the relationships detected: low neuroticism and low openness to experience lead to a positive influence of maternal EI on children's ESK, while high levels in these personality dimensions resulted in a negative influence on the child's ESK. The time that the mother and the child spend together was revealed as a positive predictor of this emotion knowledge, while it did not moderate the influence of the mother's EI on the child's ESK. In light of the results, we can infer that maternal EI is linked to children's emotional skills, though a high level of maternal EI does not necessarily predict a greater degree of emotion knowledge in children, which seems rather to depend on specific personality profiles. The results of the current study indicate that a good level of maternal EI does not guarantee that children will learn the emotional skills that foster prosocial adaptation. Rather, EI must be accompanied by certain psychological characteristics (personality traits in this case).
Keywords: emotional intelligence, emotion situation knowledge, mothers, personality, young children
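To make the moderation analysis concrete, here is a sketch of an interaction-term regression of the kind the abstract implies, using simulated data. Variable names, effect sizes and the OLS specification are assumptions, not the study's actual analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 153  # mother-child pairs, as in the study
df = pd.DataFrame({
    "mother_ei": rng.normal(100, 15, n),
    "neuroticism": rng.normal(0, 1, n),
    "openness": rng.normal(0, 1, n),
    "time_together": rng.normal(4, 1, n),   # assumed hours/day
    "child_age": rng.uniform(35, 72, n),    # months
    "child_sex": rng.integers(0, 2, n),
    "verbal": rng.normal(0, 1, n),
})
# Simulated outcome with the reported sign pattern: maternal EI helps at low
# neuroticism, but the EI x neuroticism interaction is negative.
df["esk"] = (0.03 * df.mother_ei - 0.04 * df.mother_ei * df.neuroticism
             + 0.5 * df.time_together + 0.05 * df.child_age
             + rng.normal(0, 1, n))

fit = smf.ols("esk ~ mother_ei * neuroticism + mother_ei * openness"
              " + time_together + child_age + child_sex + verbal", df).fit()
print(fit.params[["mother_ei", "mother_ei:neuroticism"]])
```

A negative coefficient on the interaction term mirrors the reported reversal of the EI effect under high neuroticism.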
Procedia PDF Downloads 134
699 Market Solvency Capital Requirement Minimization: How Non-linear Solvers Provide Portfolios Complying with Solvency II Regulation
Authors: Abraham Castellanos, Christophe Durville, Sophie Echenim
Abstract:
In this article, a portfolio optimization problem is performed in a Solvency II context: it illustrates how advanced optimization techniques can help to tackle complex operational pain points around the monitoring, control, and stability of the Solvency Capital Requirement (SCR). The market SCR of a portfolio is calculated as a combination of SCR sub-modules. These sub-modules are the results of stress tests on interest rate, equity, property, credit and FX factors, as well as concentration on counterparties. The market SCR is non-convex and non-differentiable, which does not make it a natural optimization criterion candidate. In the SCR formulation, correlations between sub-modules are fixed, whereas risk-driven portfolio allocation is usually driven by the dynamics of the actual correlations. Implementing a portfolio construction approach that is efficient from both a regulatory and an economic standpoint is not straightforward. Moreover, the challenge for insurance portfolio managers is not only to achieve a minimal SCR to reduce non-invested capital but also to ensure the stability of the SCR. Some optimizations have already been performed in the literature, simplifying the standard formula into a quadratic function. But to our knowledge, it is the first time that the standard formula of the market SCR is used in an optimization problem. Two solvers are combined: a bundle algorithm for convex non-differentiable problems, and a BFGS (Broyden-Fletcher-Goldfarb-Shanno)-SQP (Sequential Quadratic Programming) algorithm, to cope with non-convex cases. A market SCR minimization is then performed with historical data. This approach results in a significant reduction of the capital requirement, compared to a classical Markowitz approach based on the historical volatility. A comparative analysis of different optimization models (equi-risk-contribution portfolio, minimum-volatility portfolio and minimum value-at-risk portfolio) is performed, and the impact of these strategies on risk measures, including the market SCR and its sub-modules, is evaluated. A lack of diversification of the market SCR is observed, especially for equities. This was expected since the market SCR strongly penalizes this type of financial instrument. It was shown that this direct effect of the regulation can be attenuated by implementing constraints in the optimization process or by minimizing the market SCR together with the historical volatility, proving the interest of having a portfolio construction approach that can incorporate such features. The present results are further explained by the market SCR modelling.
Keywords: financial risk, numerical optimization, portfolio management, solvency capital requirement
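As context for the optimization criterion, the sketch below evaluates the aggregation used by the Solvency II standard formula, SCR_mkt = sqrt(s'Rs), where s collects the sub-module SCRs and R is a fixed correlation matrix. The sub-module values and correlation entries are illustrative placeholders, not the regulatory calibration.

```python
import numpy as np

submodules = ["interest", "equity", "property", "spread", "fx", "concentration"]
s = np.array([40.0, 120.0, 25.0, 30.0, 15.0, 10.0])  # sub-module SCRs (M€), assumed

R = np.array([  # illustrative correlation matrix, not the official one
    [1.00, 0.50, 0.50, 0.50, 0.25, 0.00],
    [0.50, 1.00, 0.75, 0.75, 0.25, 0.00],
    [0.50, 0.75, 1.00, 0.50, 0.25, 0.00],
    [0.50, 0.75, 0.50, 1.00, 0.25, 0.00],
    [0.25, 0.25, 0.25, 0.25, 1.00, 0.00],
    [0.00, 0.00, 0.00, 0.00, 0.00, 1.00],
])

scr_market = np.sqrt(s @ R @ s)
print(f"Market SCR: {scr_market:.1f} M€")
# As a function of portfolio weights, this criterion is (per the abstract)
# non-convex and non-differentiable, which motivates combining the bundle
# and BFGS-SQP solvers described in the paper.
```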
Procedia PDF Downloads 117
698 CFD Simulation of Spacer Effect on Turbulent Mixing Phenomena in Sub Channels of Boiling Nuclear Assemblies
Authors: Shashi Kant Verma, S. L. Sinha, D. K. Chandraker
Abstract:
Numerical simulations of selected subchannel tracer (potassium nitrate) based experiments have been performed to study the capabilities of state-of-the-art Computational Fluid Dynamics (CFD) codes. The CFD methodology can be useful for investigating the spacer effect on turbulent mixing and for predicting turbulent flow behavior such as dimensionless mixing scalar distributions, radial velocity and vortices in the nuclear fuel assembly. A Gibson and Launder (GL) Reynolds stress model (RSM) has been selected as the primary turbulence model to be applied for the simulation case, as it has previously been found reasonably accurate in predicting flows inside rod bundles. As a comparison, the case is also simulated using a standard k-ε turbulence model that is widely used in industry. Despite being an isotropic turbulence model, it has also been used in the modeling of flow in rod bundles and reproduces the lateral velocities after thorough mixing of the coolant fairly well. Both models have been solved numerically to obtain the fully developed isothermal turbulent flow in a 30° segment of a 54-rod bundle. The numerical simulation has been carried out to study the natural mixing of a tracer (passive scalar) in order to characterize the growth of turbulent diffusion in an injected sub-channel and, afterwards, the cross-mixing between adjacent sub-channels. The mixing with water has been numerically studied by means of steady-state CFD simulations with the commercial code STAR-CCM+. Flow enters the computational domain through the mass inflow at the three subchannel faces. A turbulence intensity of 1% and a hydraulic diameter of 5.9 mm were used for the inlet. The passive scalar (potassium nitrate) is injected at a mass fraction of 5.536 ppm at subchannel 2 (upstream of the mixing section). Flow exits the domain through the pressure outlet boundary (0 Pa), and the reference pressure was 1 atm. Simulation results have been extracted at different locations of the mixing zone and the downstream zone. The local mass fraction shows uniform mixing. The effect of the applied turbulence model is nearly negligible just before the outlet plane because the distributions look almost identical and the flow is fully developed. On the other hand, quantitatively, the dimensionless mixing scalar distributions change noticeably, which is visible in the different scales of the colour bars.
Keywords: single-phase flow, turbulent mixing, tracer, sub channel analysis
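The abstract does not spell out its normalization for the dimensionless mixing scalar; a common convention in subchannel tracer studies, sketched below as an assumption, normalizes the local tracer mass fraction by the injected value, so the scalar tends to 1/n for n subchannels under perfect mixing.

```python
import numpy as np

c_inj = 5.536e-6            # injected mass fraction (5.536 ppm)
c_local = np.array([2.1e-6, 1.9e-6, 1.6e-6])  # illustrative sampled values downstream

theta = c_local / c_inj     # dimensionless mixing scalar per subchannel
print("theta:", np.round(theta, 3))
print("perfectly mixed limit for 3 subchannels:", round(1 / 3, 3))
```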
Procedia PDF Downloads 207
697 Structural Optimization, Design, and Fabrication of Dissolvable Microneedle Arrays
Authors: Choupani Andisheh, Temucin Elif Sevval, Bediz Bekir
Abstract:
Due to their various advantages compared to many other drug delivery systems, such as hypodermic injections and oral medications, microneedle arrays (MNAs) are a promising drug delivery system. To achieve enhanced performance of the MNs, it is crucial to develop numerical models, optimization methods, and simulations. Accordingly, in this work, the optimized design of dissolvable MNAs, as well as their manufacturing, is investigated. For this purpose, a mechanical model of a single MN, having the geometry of an obelisk, is developed using commercial finite element software. The model considers the condition in which the MN is under pressure at the tip, caused by the reaction force when penetrating the skin. Then, a multi-objective optimization based on the non-dominated sorting genetic algorithm II (NSGA-II) is performed to obtain geometrical properties such as needle width, tip (apex) angle, and base fillet radius. The objective of the optimization study is to reach painless and effortless penetration into the skin while minimizing mechanical failures caused by the maximum stress occurring throughout the structure. Based on the obtained optimal design parameters, master (male) molds are then fabricated from PMMA using a mechanical micromachining process. This fabrication method is selected mainly due to its geometric capability, production speed, production cost, and the variety of materials that can be used. Then, to remove any chip residues, the master molds are cleaned using ultrasonic cleaning. These fabricated master molds can then be used repeatedly to fabricate polydimethylsiloxane (PDMS) production (female) molds through a micro-molding approach. Finally, polyvinylpyrrolidone (PVP), a dissolvable polymer, is cast into the production molds under vacuum to produce the dissolvable MNAs. This fabrication methodology can also be used to fabricate MNAs that include bioactive cargo. To characterize and demonstrate the performance of the fabricated needles, (i) scanning electron microscope images are taken to show the accuracy of the fabricated geometries, and (ii) in-vitro piercing tests are performed on artificial skin. It is shown that optimized MN geometries can be precisely fabricated using the presented fabrication methodology and that the fabricated MNAs effectively pierce the skin without failure.
Keywords: microneedle, microneedle array fabrication, micro-manufacturing, structural optimization, finite element analysis
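For readers unfamiliar with NSGA-II, a minimal sketch of such a two-objective setup (insertion force versus maximum stress) over the three design variables named in the abstract follows, using the pymoo library. Both objective functions are invented surrogates, not the authors' finite-element model, and the variable bounds are assumptions.

```python
import numpy as np
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize

class MicroneedleProblem(ElementwiseProblem):
    def __init__(self):
        super().__init__(
            n_var=3, n_obj=2,
            xl=np.array([100.0, 10.0, 5.0]),   # width (µm), apex angle (°), fillet (µm)
            xu=np.array([400.0, 45.0, 50.0]),
        )

    def _evaluate(self, x, out, *args, **kwargs):
        width, apex, fillet = x
        # Surrogate objectives: sharper tips insert more easily but raise stress.
        insertion_force = 1e-3 * width * np.tan(np.radians(apex / 2))
        max_stress = 50.0 / (width * (1 + 0.02 * fillet))
        out["F"] = [insertion_force, max_stress]

res = minimize(MicroneedleProblem(), NSGA2(pop_size=50), ("n_gen", 100), seed=1)
print(res.F[:5])  # a few points on the Pareto front
```

In the actual study, the two surrogate lines would be replaced by calls into the finite-element model of the obelisk-shaped needle.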
Procedia PDF Downloads 113
696 Developing a Maturity Model of Digital Twin Application for Infrastructure Asset Management
Authors: Qingqing Feng, S. Thomas Ng, Frank J. Xu, Jiduo Xing
Abstract:
Faced with unprecedented challenges, including aging assets, lack of maintenance budget, overtaxed and inefficient usage, and an outcry for better service quality from society, today's infrastructure systems have become the main focus of many metropolises pursuing sustainable urban development and improved resilience. The digital twin, being one of the most innovative enabling technologies nowadays, may open up new ways of tackling various infrastructure asset management (IAM) problems. A digital twin application for IAM, as its name indicates, represents an evolving digital model of the intended infrastructure that possesses functions including real-time monitoring; what-if event simulation; and scheduling, maintenance, and management optimization based on technologies like IoT, big data and AI. Up to now, there are already numerous global initiatives of digital twin applications, like 'Virtual Singapore' and 'Digital Built Britain'. With digital twin technology permeating the IAM field progressively, it is necessary to consider the maturity of the application and how those institutional or industrial digital twin application processes will evolve in the future. In order to address the lack of such a benchmark, a draft maturity model is developed for digital twin application in the IAM field. Firstly, an overview of current smart city maturity models is given, based on which the draft Maturity Model of Digital Twin Application for Infrastructure Asset Management (MM-DTIAM) is developed for multiple stakeholders to evaluate and derive informed decisions. The process of development follows a systematic approach with four major procedures, namely scoping, designing, populating and testing. Through in-depth literature review, interviews and focus group meetings, the key domain areas are populated, defined and iteratively tuned. Finally, a case study of several digital twin projects is conducted for self-verification. The findings of the research reveal that: (i) the developed maturity model outlines five maturing levels leading to an optimised digital twin application, covering the aspects of strategic intent, data, technology, governance, and stakeholder engagement; (ii) based on the case study, levels 1 to 3 are already partially implemented in some initiatives, while level 4 is on the way; and (iii) more practice is still needed to refine the draft to be mutually exclusive and collectively exhaustive in the key domain areas.
Keywords: digital twin, infrastructure asset management, maturity model, smart city
Procedia PDF Downloads 157
695 Molecular Dynamics Simulations on Richtmyer-Meshkov Instability of Li-H2 Interface at Ultra High-Speed Shock Loads
Authors: Weirong Wang, Shenghong Huang, Xisheng Luo, Zhenyu Li
Abstract:
Material mixing processes and related dynamic issues under extreme compression conditions have gained more and more attention in the last ten years because of the engineering appeal in inertial confinement fusion (ICF) and hypervelocity aircraft development. However, up to now there is a lack of models and methods that can handle fully coupled turbulent material mixing and complex fluid evolution under conditions of the high energy density regime. In terms of macro hydrodynamics, three numerical methods, direct numerical simulation (DNS), large eddy simulation (LES) and the Reynolds-averaged Navier-Stokes (RANS) equations, have obtained relatively acceptable consensus under conditions of the low energy density regime. However, under conditions of the high energy density regime, they cannot be applied directly due to the occurrence of dissociation, ionization, dramatic changes of the equation of state, thermodynamic properties, etc., which may make the governing equations invalid in some coupled situations. At the micro/meso scale, however, methods based on Molecular Dynamics (MD) as well as Monte Carlo (MC) models have proved to be promising and effective ways to investigate such issues. In this study, both classical MD and first-principle-based electron force field MD (eFF-MD) methods are applied to investigate the Richtmyer-Meshkov instability of a metal lithium and gas hydrogen (Li-H2) interface mixing at different shock loading speeds ranging from 3 km/s to 30 km/s. It is found that: 1) the classical MD method based on predefined potential functions has some limits in application to extreme conditions, since it cannot simulate the ionization process and its potential functions are not suitable for all conditions, while the eFF-MD method can correctly simulate the ionization process due to its ‘ab initio’ feature; 2) due to computational cost, the eFF-MD results are also influenced by simulation domain dimensions, boundary conditions, relaxation time choices, etc.; a series of tests has been conducted to determine the optimized parameters; 3) ionization induced by strong shock compression has important effects on the Li-H2 interface evolution of the RMI, indicating a new micromechanism of the RMI under conditions of the high energy density regime.
Keywords: first-principle, ionization, molecular dynamics, material mixture, Richtmyer-Meshkov instability
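As background to the distinction the abstract draws, the sketch below is a toy classical-MD integrator (velocity Verlet with a Lennard-Jones pair potential, reduced units). It illustrates the 'predefined potential function' approach, which by construction cannot capture ionization, the limitation that motivates eFF-MD. All parameters are illustrative.

```python
import numpy as np

def lj_forces(pos):
    """Pairwise Lennard-Jones forces for epsilon = sigma = 1."""
    f = np.zeros_like(pos)
    n = len(pos)
    for i in range(n):
        for j in range(i + 1, n):
            r = pos[i] - pos[j]
            r2 = r @ r
            fij = 24 * (2 * r2**-7 - r2**-4) * r  # F = 24(2 r^-14 - r^-8) r_vec
            f[i] += fij
            f[j] -= fij
    return f

def velocity_verlet(pos, vel, dt=1e-3, steps=1000):
    f = lj_forces(pos)
    for _ in range(steps):
        vel += 0.5 * dt * f
        pos += dt * vel
        f = lj_forces(pos)
        vel += 0.5 * dt * f
    return pos, vel

# 16 particles on a small lattice, spaced to avoid overlap at t = 0.
pos = 1.2 * np.array([[i, j, k] for i in range(4)
                      for j in range(2) for k in range(2)], dtype=float)
vel = np.random.default_rng(1).normal(0, 0.5, size=(16, 3))
pos, vel = velocity_verlet(pos, vel)
print("mean kinetic energy:", 0.5 * (vel**2).sum() / len(vel))
```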
Procedia PDF Downloads 225
694 Comparisons between Student Learning Achievements and Their Problem Solving Skills on Stoichiometry Issue with the Think-Pair-Share Model and STEM Education Method
Authors: P. Thachitasing, N. Jansawang, W. Rakrai, T. Santiboon
Abstract:
The aim of this study is to compare instructional design models, the Think-Pair-Share and the conventional learning (5E Inquiry Model) processes, for enhancing students' learning achievements and their problem solving skills on the stoichiometry issue under the two instructional methods, with a sample consisting of 80 students in 2 classes at the 11th grade level in Chaturaphak Phiman Ratchadaphisek School. Students' different learning outcomes in chemistry classes were obtained with the cluster random sampling technique. The instructional methods were designed with a 40-student experimental group taught through the Think-Pair-Share process and a 40-student control group taught with the conventional learning (5E Inquiry Model) method. These different learning groups were assessed using 5 instruments: the 5-lesson instructional plans of the Think-Pair-Share and STEM Education methods, and pretest and posttest assessments of students' learning achievements and their problem solving skills; students' outcomes under the Think-Pair-Share Model (TPSM) and the STEM Education methods were then compared. Statistically significant differences between posttest and pretest scores were found with the paired t-test and F-test for the whole group of students in the chemistry classes. Associations between students' learning outcomes in chemistry under the two methods and their learning achievements and problem solving skills were also found. The use of the two methods in this study reveals that students perceive their learning achievements and problem solving skills differently in the different groups, guiding practical improvements in chemistry classrooms to assist teachers in implementing effective approaches for improving instructional methods. Students' mean learning achievement scores in the control group with the Think-Pair-Share Model (TPSM) were significantly lower than in the experimental student group under the STEM education method. The E1/E2 process revealed evidence of 82.56/80.44 and 83.02/81.65, which results, based on the criteria, are higher than the 80/80 standard level with the IOC. The predictive efficiency (R²) values indicate that 61% and 67%, and 63% and 67%, of the variances in students' posttest learning achievements and in students' problem solving skills on the stoichiometry issue in their chemistry classrooms were attributable to their different learning outcomes under the TPSM and STEM education instructional methods.
Keywords: comparisons, students' learning achievements, think-pair-share model (TPSM), STEM education, problem solving skills, chemistry classes, stoichiometry issue
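The abstract reports paired t-tests and the E1/E2 efficiency criterion; the sketch below shows both computations on simulated scores. All numbers and maximum-score choices are stand-ins, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pretest = rng.normal(10, 2, 40).clip(0, 20)                  # scores out of 20
posttest = (pretest + rng.normal(6, 1.5, 40)).clip(0, 20)

t_stat, p_value = stats.ttest_rel(posttest, pretest)
print(f"paired t = {t_stat:.2f}, p = {p_value:.4g}")

# E1/E2 criterion (80/80): E1 is the mean percentage score on in-lesson
# exercises, E2 the mean percentage score on the final achievement test.
exercises = rng.normal(25, 2, 40).clip(0, 30)                # scores out of 30
E1 = 100 * exercises.mean() / 30
E2 = 100 * posttest.mean() / 20
print(f"E1/E2 = {E1:.2f}/{E2:.2f} (criterion: 80/80)")
```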
Procedia PDF Downloads 249
693 Morphological and Molecular Evaluation of Dengue Virus Serotype 3 Infection in BALB/c Mice Lungs
Authors: Gabriela C. Caldas, Fernanda C. Jacome, Arthur da C. Rasinhas, Ortrud M. Barth, Flavia B. dos Santos, Priscila C. G. Nunes, Yuli R. M. de Souza, Pedro Paulo de A. Manso, Marcelo P. Machado, Debora F. Barreto-Vieira
Abstract:
The establishment of animal models for studies of DENV infections has been challenging, since circulating epidemic viruses do not naturally infect nonhuman species. Such studies are of great relevance to various areas of dengue research, including immunopathogenesis, drug development and vaccines. In this scenario, the main objective of this study is to verify possible morphological changes, as well as the presence of antigens and viral RNA, in lung samples from BALB/c mice experimentally infected with an epidemic, non-neuroadapted DENV-3 strain. Male BALB/c mice, 2 months old, were inoculated with DENV-3 by the intravenous route. After 72 hours of infection, the animals were euthanized and the lungs were collected. Part of the samples was processed by standard techniques for analysis by light and transmission electron microscopy, and another part was processed for real-time PCR analysis. Morphological analyses of lungs from uninfected mice showed preserved tissue areas. In mice infected with DENV-3, the analyses revealed interalveolar septum thickening with the presence of inflammatory infiltrate, foci of alveolar atelectasis and hyperventilation, bleeding foci in the interalveolar septum and bronchioles, peripheral capillary congestion, accumulation of fluid in the blood capillaries, signs of interstitial cell necrosis, and the presence of platelets and mononuclear inflammatory cells circulating in the capillaries and/or adhered to the endothelium. In addition, activation of endothelial cells, platelets, mononuclear inflammatory cells and neutrophil-type polymorphonuclear inflammatory cells, evidenced by the emission of cytoplasmic membrane prolongations, was observed. DENV-like particles were seen in the cytoplasm of endothelial cells. The viral genome was recovered from 3 of 12 lung samples. These results demonstrate that the BALB/c mouse represents a suitable model for the study of the histopathological changes induced by DENV infection in the lung, with tissue alterations similar to those observed in human cases of dengue.
Keywords: BALB/c mice, dengue, histopathology, lung, ultrastructure
Procedia PDF Downloads 253
692 Assessment of Sediment Control Characteristics of Notches in Different Sediment Transport Regimes
Authors: Chih Ming Tseng
Abstract:
Landslides during typhoons generate substantial amounts of sediment, and subsequent rainfall can trigger various types of sediment transport regimes, such as debris flows, high-concentration sediment-laden flows, and typical river sediment transport. This study aims to investigate the sediment control characteristics of natural notches within different sediment transport regimes. High-resolution digital terrain models were used to establish the relationship between slope gradients and catchment areas, which was then used to delineate distinct sediment transport regimes and analyze the sediment control characteristics of notches within these regimes. The research results indicate that the catchment areas of Aiyuzi Creek, Hossa Creek, and Chushui Creek in the study region can be clearly categorized into three sediment transport regimes based on the slope-area relationship curves: frequent-collapse headwater areas, debris flow zones, and high-concentration sediment-laden flow zones. The threshold for transitioning from the collapse zone to the debris flow zone in the Aiyuzi Creek catchment is lower than in Hossa Creek and Chushui Creek, suggesting that the active collapse processes in the upper reaches of Aiyuzi Creek continuously supply a significant sediment source, making it more susceptible to subsequent debris flow events. Moreover, the analysis of sediment trapping efficiency at notches within different sediment transport regimes reveals that as the notch constriction ratio increases, the sediment accumulation per unit area also increases. The accumulation thickness per unit area in high-concentration sediment-laden flow zones is greater than in debris flow zones, indicating differences in sediment deposition characteristics among the various sediment transport regimes. Regarding sediment control rates at notches, there is a generally positive correlation with the notch constriction ratio. During Typhoon Morakot in 2009, the substantial sediment supply from slope failures in the upstream catchment led to an oversupplied sediment transport condition in the river channel. Consequently, sediment control rates were more pronounced during medium and small sediment transport events between 2010 and 2015. However, there were no significant differences in sediment control rates among the different sediment transport regimes at notches. Overall, this research provides valuable insights into the sediment control characteristics of notches under various sediment transport conditions, which can aid in the development of improved sediment management strategies in watersheds.
Keywords: landslide, debris flow, notch, sediment control, DTM, slope-area relation
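The slope-area delineation described above is commonly done by fitting a power law S = k·A^(-θ) in log-log space and looking for breaks between process regimes; a sketch with synthetic data follows. The fitted form is a standard geomorphological convention, not taken from the paper, and the data are not the Taiwanese DTM data.

```python
import numpy as np

rng = np.random.default_rng(0)
area = np.logspace(3, 8, 200)                        # drainage area (m²)
slope = 2.0 * area**-0.45 * rng.lognormal(0, 0.15, 200)  # synthetic channel slope

# Fit log10(S) = log10(k) - theta * log10(A) by least squares.
coeffs = np.polyfit(np.log10(area), np.log10(slope), 1)
theta, log_k = -coeffs[0], coeffs[1]
print(f"concavity theta = {theta:.2f}, k = {10**log_k:.2f}")
# In practice the curve is fitted piecewise; breaks in theta mark transitions
# between collapse, debris-flow and sediment-laden-flow domains.
```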
Procedia PDF Downloads 28
691 Terrestrial Laser Scans to Assess Aerial LiDAR Data
Authors: J. F. Reinoso-Gordo, F. J. Ariza-López, A. Mozas-Calvache, J. L. García-Balboa, S. Eddargani
Abstract:
DEM quality may depend on several factors, such as the data source, the capture method, the type of processing used to derive the DEM, or its cell size. The two most important capture methods for producing regional-sized DEMs are photogrammetry and LiDAR; DEMs covering entire countries have been obtained with these methods. The quality of these DEMs has traditionally been evaluated by the national cartographic agencies through punctual sampling focused on the vertical component. For this type of evaluation there are standards such as the NMAS and the ASPRS Positional Accuracy Standards for Digital Geospatial Data. However, it seems more appropriate to carry out this evaluation with a method that takes into account the superficial nature of the DEM, so that its sampling is superficial and not punctual. This work is part of the research project "Functional Quality of Digital Elevation Models in Engineering", in which it is necessary to control the quality of a DEM whose data source is an experimental LiDAR flight with a density of 14 points per square meter, which we call the Point Cloud Product (PCpro). In the present work, we describe the data capture on the ground and the post-processing tasks needed to obtain the point cloud that will be used as reference (PCref) to evaluate the quality of the PCpro. Each PCref consists of a 50 x 50 m patch obtained by registering 4 different scan stations. The area studied was the Spanish region of Navarra, which covers an area of 10,391 km²; 30 homogeneously distributed patches were necessary to sample the entire surface. The patches were captured using a Leica BLK360 terrestrial laser scanner mounted on a pole that reached heights of up to 7 meters; the position of the scanner was inverted so that the characteristic shadow circle does not appear when the scanner is in the direct position. To ensure that the accuracy of the PCref is greater than that of the PCpro, the georeferencing of the PCref was carried out with real-time GNSS, with a positioning accuracy better than 4 cm; this accuracy is much better than the altimetric mean square error estimated for the PCpro (<15 cm). The kind of DEM of interest is the one corresponding to the bare earth, so it was necessary to apply a filter to eliminate vegetation and auxiliary elements such as poles, tripods, etc. After the post-processing tasks, the PCref is ready to be compared with the PCpro using different techniques: cloud to cloud, or DEM to DEM after a resampling process.
Keywords: data quality, DEM, LiDAR, terrestrial laser scanner, accuracy
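As an illustration of the cloud-to-cloud comparison mentioned in the closing sentence, the sketch below uses the Open3D library; the file names are hypothetical placeholders for a registered PCref patch and the corresponding PCpro subset.

```python
import numpy as np
import open3d as o3d

pc_ref = o3d.io.read_point_cloud("patch_ref.ply")   # terrestrial scan patch (PCref)
pc_pro = o3d.io.read_point_cloud("patch_pro.ply")   # aerial LiDAR subset (PCpro)

# Nearest-neighbour distance from each PCpro point to the PCref cloud.
dists = np.asarray(pc_pro.compute_point_cloud_distance(pc_ref))

print(f"mean error : {dists.mean():.3f} m")
print(f"RMSE       : {np.sqrt((dists**2).mean()):.3f} m")
print(f"95th perc. : {np.quantile(dists, 0.95):.3f} m")
```

A DEM-to-DEM comparison would instead rasterize both clouds to a common grid and difference the elevation surfaces.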
Procedia PDF Downloads 100
690 Anti-Neuroinflammatory and Anti-Apoptotic Efficacy of Equol, against Lipopolysaccharide Activated Microglia and Its Neurotoxicity
Authors: Lalita Subedi, Jae Kyoung Chae, Yong Un Park, Cho Kyo Hee, Lee Jae Hyuk, Kang Min Cheol, Sun Yeou Kim
Abstract:
Neuroinflammation may mediate the relationship between low levels of estrogens and neurodegenerative disease. Estrogens are neuroprotective and anti-inflammatory in neurodegenerative disease models. Due to the long-term side effects of estrogens, research has focused on finding effective phytoestrogens for biological activities. Daidzein, present in soybeans, and its active metabolite equol (7-hydroxy-3-(4'-hydroxyphenyl)-chroman) bear strong antioxidant and anticancer properties; equol showed a more potent anti-inflammatory and neuroprotective role in a neuroinflammatory model, confirming its in vitro activity with a molecular mechanism through the NF-κB pathway. Three major CNS cell types, microglia (BV-2), astrocytes (C6) and neurons (N2a), were used to find the effect of equol on inducible nitric oxide synthase (iNOS), cyclooxygenase (COX-2), MAPK signaling proteins and apoptosis-related proteins by western blot analysis. Nitric oxide (NO) and prostaglandin E2 (PGE2) were measured by the Griess method and ELISA, respectively. Cytokines like tumor necrosis factor-α (TNF-α) and IL-6 were also measured in the conditioned medium of LPS-activated cells with or without equol. Equol inhibited NO production, PGE2 production and the expression of COX-2 and iNOS in LPS-stimulated microglial cells in a dose-dependent manner without any cellular toxicity. At the same time, equol also showed a promising effect in the modulation of MAPKs and nuclear factor kappa B (NF-κB) expression, with significant inhibition of the production of proinflammatory cytokines like interleukin-6 (IL-6) and tumor necrosis factor-α (TNF-α). Additionally, it inhibited LPS-activated microglia-induced neuronal cell death by downregulating the apoptotic phenomenon in neuronal cells. Furthermore, equol increases the production of neurotrophins like NGF and increases neurite outgrowth as well. In conclusion, the natural daidzein metabolite equol is more active than daidzein and showed promising effectiveness as an anti-neuroinflammatory and neuroprotective agent via downregulating LPS-stimulated microglial activation and neuronal apoptosis. This work was supported by the Brain Korea 21 Plus project and High Value-added Food Technology Development Program 114006-4, Ministry of Agriculture, Food and Rural Affairs.
Keywords: apoptosis, equol, neuroinflammation, phytoestrogen
Procedia PDF Downloads 361
689 Modeling of Turbulent Flow for Two-Dimensional Backward-Facing Step Flow
Authors: Alex Fedoseyev
Abstract:
This study investigates a generalized hydrodynamic equation (GHE) simplified model for the simulation of turbulent flow over a two-dimensional backward-facing step (BFS) at Reynolds number Re = 132000. The GHE was derived from the generalized Boltzmann equation (GBE). The GBE was obtained from first principles from the chain of Bogolubov kinetic equations and considers particles of finite dimensions. The GHE has additional terms, temporal and spatial fluctuations, compared to the Navier-Stokes equations (NSE). These terms have a timescale multiplier τ, and the GHE becomes the NSE when τ is zero. The nondimensional τ is a product of the Reynolds number and the squared length scale ratio, τ = Re(l/L)², where l is the apparent Kolmogorov length scale and L is a hydrodynamic length scale. The BFS flow modeling results obtained by 2D calculations cannot match the experimental data for Re > 450. One or two additional equations are required for a turbulence model to be added to the NSE, which typically has two to five parameters to be tuned for specific problems. It is shown that the GHE does not require an additional turbulence model, whereas the turbulent velocity results are in good agreement with the experimental results. A review of several studies on the simulation of flow over the BFS from 1980 to 2023 is provided. Most of these studies used different turbulence models when Re > 1000. In this study, the 2D turbulent flow over a BFS with height H = L/3 (where L is the channel height) at Reynolds number Re = 132000 was investigated using numerical solutions of the GHE (by a finite-element method) and compared to the solutions from the Navier-Stokes equations, the k-ε turbulence model, and experimental results. The comparison included the velocity profiles at X/L = 5.33 (near the end of the recirculation zone, available from the experiment), the recirculation zone length, and the velocity flow field. The mean velocity for the NSE was obtained by averaging the solution over the number of time steps. The solution with a standard k-ε model shows a velocity profile at X/L = 5.33 which has no backward flow. The standard k-ε model underpredicts the experimental recirculation zone length X/L = 7.0 ± 0.5 by a substantial amount of 20-25%, and a more sophisticated turbulence model is needed for this problem. The obtained data confirm that the GHE results are in good agreement with the experimental results for turbulent flow over a two-dimensional BFS. A turbulence model was not required in this case. The computations were stable. The solution time for the GHE is the same as or less than that for the NSE, and significantly less than that for the NSE with a turbulence model. The proposed approach was limited to 2D and only one Reynolds number. Further work will extend this approach to 3D flow and higher Re.
Keywords: backward-facing step, comparison with experimental data, generalized hydrodynamic equations, separation, reattachment, turbulent flow
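For a quick sense of scale, the snippet below evaluates the timescale multiplier defined in the abstract, τ = Re·(l/L)², at the study's Reynolds number; the length-scale ratio is an assumed illustrative value, since the abstract does not report it.

```python
# tau = Re * (l/L)**2, from the abstract's definition.
Re = 132000
ratio = 1e-3                 # assumed l/L (apparent Kolmogorov / hydrodynamic scale)
tau = Re * ratio**2
print(f"tau = {tau:.4g}")    # tau -> 0 recovers the Navier-Stokes equations
```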
Procedia PDF Downloads 61
688 Culturally Relevant Pedagogy: A Cross-Cultural Comparison
Authors: Medha Talpade, Salil Talpade
Abstract:
The intent of this quantitative project was to compare the values and perceptions of students from a predominantly white college (PWI) to those from a historically black college (HBCU) about culturally relevant teaching and learning practices in the academic realm. The reason for interrelating student culture with teaching practices is to enable a pedagogical response to the low retention rates of African American students and first-generation Caucasian students in high schools and colleges, and to their low rates of social mobility and educational achievement. Culturally relevant pedagogy, according to related research, is deemed rewarding to students, teachers, and the local and national community. Critical race theory (CRT) is the main framework used in this project to explain the ubiquity of a culturally relevant pedagogy. The purpose of this quantitative study was to test the critical race theory that relates the presence of the factors associated with culturally relevant teaching strategies to perceived relevance. The culturally relevant teaching strategies were identified based on the recommendations and findings of past research. Participants in this study included approximately 145 students from an HBCU and 55 students from the PWI. A survey consisting of 37 items related to culturally relevant pedagogy was administered. The themes used to construct the items were: use of culturally specific examples in class whenever possible; use of culturally specific presentational models; use of relational reinforcers; and active engagement. All the items had a Likert-type response scale. Participants reported their degree of agreement (5-point scale ranging from strongly disagree to strongly agree) with, and the importance (3-point scale ranging from not at all important to very important) of, each survey item. A new variable, Relevance, was formed as the multiplicative function of the importance and presence of a teaching and learning strategy. A set of six demographic questions was included in the survey. A consent form based on NIH and APA ethical standards was distributed to the volunteers prior to survey administration. Results of factor analyses on the data from the PWI and the HBCU, and an ANOVA, indicated significant differences in Relevance related to specific themes. The results of this study are expected to inform educational practices and improve teaching and learning outcomes.
Keywords: culturally relevant pedagogy, college students, cross-cultural, applied psychology
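A minimal sketch of the Relevance variable and the group comparison described above follows; the simulated responses are stand-ins for the actual survey data.

```python
import numpy as np
from scipy import stats

# Relevance = agreement (5-point) x importance (3-point), per the abstract,
# compared between institutions with a one-way ANOVA (145 HBCU, 55 PWI).
rng = np.random.default_rng(0)
agree_hbcu, import_hbcu = rng.integers(1, 6, 145), rng.integers(1, 4, 145)
agree_pwi, import_pwi = rng.integers(1, 6, 55), rng.integers(1, 4, 55)

relevance_hbcu = agree_hbcu * import_hbcu   # ranges 1..15
relevance_pwi = agree_pwi * import_pwi

f_stat, p_value = stats.f_oneway(relevance_hbcu, relevance_pwi)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```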
Procedia PDF Downloads 432
687 The Path of Cotton-To-Clothing Value Chains to Development: A Mixed Methods Exploration of the Resuscitation of the Cotton-To-Clothing Value Chain in Post
Authors: Emma Van Schie
Abstract:
The purpose of this study is to use mixed methods research to create typologies of the performance of firms in the cotton-to-clothing value chain in Zimbabwe, and to use these typologies to add to the small pool of studies on Sub-Saharan African value chains performing in the context of economic liberalisation and achieving development. The uptake of economic liberalisation measures across Sub-Saharan Africa has led to the restructuring of many value chains. While this has resulted in some African economies positively reintegrating into global commodity chains, it has also been deeply problematic for the development impacts of the majority of others. Over and above this, these nations have been placed at a disadvantage by the fact that there is little scholarly and policy research on approaches for managing economic liberalisation and value chain development in the unique African context. As such, the central question facing these less successful cases is how they can integrate into the world economy whilst still fostering their development. This paper draws on quantitative questionnaires and qualitative interviews with 28 stakeholders in the cotton-to-clothing value chain in Zimbabwe. It examines the performance of firms in the value chain, and the subsequent local socio-economic development impacts that are affected by the revival of the cotton-to-clothing value chain following its collapse in the wake of Zimbabwe's uptake of economic liberalisation measures. Firstly, the paper establishes the hitherto undocumented characteristics and structures of firms in the value chain in the post-economic-liberalisation era. It also establishes typologies of the status of firms, as either in operation, closed down, or placed under judicial management, and the common characteristics that these typologies hold. The key findings show how a mixture of macro- and local-level aspects, such as value chain governance and the management structure of a business, leads to the most successful typology, the one able to add value to the chain in the context of economic liberalisation and thus unlock its socio-economic development potential. These typologies inform industry and policy recommendations on achieving this balance between the macro and the local level, as well as recommendations for further academic research into more typologies and models on the case of cotton value chains in Sub-Saharan Africa. In doing so, this study adds to the small collection of academic evidence and policy recommendations on the challenges that African nations face when trying to incorporate into global commodity chains in attempts to benefit from their associated socio-economic development opportunities.
Keywords: cotton-to-clothing value chain, economic liberalisation, restructuring value chain, typologies of firms, value chain governance, Zimbabwe
Procedia PDF Downloads 167
686 Experimental Analysis of Supersonic Combustion Induced by Shock Wave at the Combustion Chamber of the 14-X Scramjet Model
Authors: Ronaldo de Lima Cardoso, Thiago V. C. Marcos, Felipe J. da Costa, Antonio C. da Oliveira, Paulo G. P. Toro
Abstract:
The 14-X is a strategic project of the Brazilian Air Force Command to develop a technological demonstrator of a hypersonic air-breathing propulsion system based on supersonic combustion, designed to fly in the Earth's atmosphere at 30 km of altitude and Mach number 10. The 14-X is under development at the Prof. Henry T. Nagamatsu Laboratory of Aerothermodynamics and Hypersonics of the Institute of Advanced Studies. The program began in 2007 and was planned to have three stages: development of the waverider configuration, development of the scramjet configuration and, finally, the ground tests in the hypersonic shock tunnel T3. The installation configuration of the model, based on the scramjet of the 14-X, in the test section of the hypersonic shock tunnel was designed to reproduce and test the flight conditions at the inlet of the combustion chamber. Experimental studies with a hypersonic shock tunnel require special data acquisition techniques. To measure the pressure along the tested model geometry, we used 30 pressure transducers, model 122A22 from PCB®. The piezoelectric crystals of a pressure transducer produce an electric current when subjected to pressure variations (PCB® PIEZOTRONIC, 2016). The signals of the pressure transducers were read by oscilloscope. After the studies began, we observed that the pressure inside the combustion chamber was lower than expected. One solution to improve the pressure inside the combustion chamber was to install an obstacle to provide high temperature and pressure. Emission spectroscopy was selected to confirm whether combustion occurs. The region analyzed by the emission spectroscopy system is the edge of the obstacle installed inside the combustion chamber. The emission spectroscopy technique was used to observe the emission of OH*, confirming or not the combustion of the mixture between atmospheric air at supersonic speed and the hydrogen fuel inside the combustion chamber of the model. This paper shows the results of experimental studies of the supersonic combustion induced by shock wave performed at the Hypersonic Shock Tunnel T3 using the scramjet 14-X model. This paper also provides important data about the combustion studies using the model based on the engine of the 14-X (second stage of the 14-X program), informing possible corrections to be made in the next stages of the program or in other models for experimental study.
Keywords: 14-X, experimental study, ground tests, scramjet, supersonic combustion
Procedia PDF Downloads 387
685 Gender and Total Compensation, in an ‘Age’ of Disruption
Authors: Daniel J. Patricio Jiménez
Abstract:
The term ‘total compensation’ refers to salary, training, innovation and development, and, of course, motivation; total compensation is an open and flexible system which must facilitate personal and family conciliation and therefore cannot be isolated from social reality. Today, the challenge for any company that wants to have a future is to be sustainable, and women play a ‘special’ role in this. Spain, in its statutory and conventional development, has not given a sufficient response to new phenomena such as ‘bonuses’, ‘stock options’ or ‘fringe benefits’ (constructed dogmatically and by court decisions), or to the new digital reality, where cryptocurrency, new collaborative models and forms of service provision, such as remote work, are always ahead of the law. To talk about compensation is to talk about the gender gap, and with the entry into force of RD 902/2020 on 14 April 2021, certain measures are necessary under the principle of salary transparency; the valuation of jobs, the pay register (RD 6/2019) and the pay audit are examples of this. Analyzing the methodologies, and in particular the determination and weight of the factors, so that the system itself is not discriminatory, is essential. The wage gap in Spain is smaller than in Europe, but the sources do not reflect reality, and since the beginning of the pandemic there has been a clear stagnation. A living wage is not the minimum wage; it is identified with rights and needs; it is that which, based on internal equity, reflects the competitiveness of the company in terms of human capital. Spain has lost, and has not recovered, the relative weight of its wages; this is having a direct impact on our competitiveness, and consequently on the precariousness of employment and undoubtedly on the levels of extreme poverty. Training is becoming more than ever a strategic factor; the new digital reality requires that each component of the system be connected. Transversality is imposed on us, and this forces us to redefine content and to respond to the new demands that the new normality requires, because technology and robotization are changing the concept of employability. The presence of women in this context is necessary, and there is a long way to go. The so-called emotional compensation becomes particularly relevant at a time when pandemics, silence and disruption are leaving after-effects; technostress (in all its manifestations) is just one of them. Talking about motivation today makes no sense without first being aware that mental health is a priority, and that it must be treated and communicated in an inclusive way, because it increases satisfaction, productivity and engagement. There is a clear conclusion to all this: compensation systems do not respond to the ‘new normality’; diversity, and in particular women, cannot be invisible in human resources policies if the company wants to be sustainable.
Keywords: diversity, gender gap, human resources, sustainability
Procedia PDF Downloads 168
684 Evaluation of the Influence of Graphene Oxide on Spheroid and Monolayer Culture under Flow Conditions
Authors: A. Zuchowska, A. Buta, M. Mazurkiewicz-Pawlicka, A. Malolepszy, L. Stobinski, Z. Brzozka
Abstract:
In recent years, graphene-based materials have been finding more and more applications in biological science. As thin, tough, transparent and chemically resistant materials, they appear to be very good materials for the production of implants and biosensors. Interest in graphene derivatives has also prompted research into the possibility of their application in cancer therapy. Currently, the analysis of their potential use in photothermal therapy and as drug carriers is mostly performed. Moreover, the direct anticancer properties of graphene-based materials are also tested. Nowadays, cytotoxicity studies are conducted on in vitro cell cultures in standard culture vessels (macroscale). However, in this type of cell culture, the cells grow on a synthetic surface under static conditions. For this reason, cell culture in the macroscale does not reflect the in vivo environment. Microfluidic systems, called Lab-on-a-chip, are proposed as a solution for improving the cytotoxicity analysis of new compounds. Here, we present the evaluation of the cytotoxic properties of graphene oxide (GO) on breast, liver and colon cancer cell lines in a microfluidic system in two spatial models (2D and 3D). Before cell introduction, the microchamber surfaces were modified by fibronectin (2D, monolayer) and poly(vinyl alcohol) (3D, spheroid) coating. After spheroid creation (3D) and cell attachment (2D, monolayer), the selected concentration of GO was introduced into the microsystems. Then, monolayer and spheroid viability/proliferation was checked for three days using the alamarBlue® assay and a standard microplate reader. Moreover, on every day of the culture, the morphological changes of the cells were determined using microscopic analysis. Additionally, on the last day of the culture, differential staining using calcein AM and propidium iodide was performed. We were able to note that GO has an influence on the viability of all tested cell lines in both monolayer and spheroid arrangements. We showed that GO caused a greater viability/proliferation decrease for spheroids than for monolayers (this was observed for all tested cell lines). The higher cytotoxicity of GO in spheroid culture can be caused by the different geometry of the microchambers for 2D and 3D cell cultures. Probably, GO was removed from the flat microchambers for 2D culture. These results were also confirmed by differential staining. Comparing our results with studies conducted in the macroscale, we also proved that the cytotoxic properties of GO change depending on the cell culture conditions (static/flow).
Keywords: cytotoxicity, graphene oxide, monolayer, spheroid
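For concreteness, the snippet below shows the standard normalization behind an alamarBlue®-type viability readout (treated fluorescence relative to untreated controls after blank subtraction); the readings are fabricated placeholders, and the exact reduction used in the study is not specified in the abstract.

```python
import numpy as np

blank = 1200.0                                   # medium-only fluorescence
control = np.array([9800.0, 10100.0, 9650.0])    # untreated wells
go_treated = np.array([7200.0, 6900.0, 7400.0])  # GO-exposed wells

viability = 100 * (go_treated.mean() - blank) / (control.mean() - blank)
print(f"viability vs. control: {viability:.1f}%")
```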
Procedia PDF Downloads 125683 Nanoliposomes in Photothermal Therapy: Advancements and Applications
Authors: Mehrnaz Mostafavi
Abstract:
Nanoliposomes, minute lipid-based vesicles at the nano-scale, show promise in the realm of photothermal therapy (PTT). This study presents an extensive overview of nanoliposomes in PTT, exploring their distinct attributes and the significant progress in this therapeutic methodology. The research delves into the fundamental traits of nanoliposomes, emphasizing their adaptability, compatibility with biological systems, and their capacity to encapsulate diverse therapeutic substances. Specifically, it examines the integration of light-absorbing materials, such as gold nanoparticles or organic dyes, into nanoliposomal formulations, enabling their use as proficient agents for photothermal treatment. Additionally, this paper elucidates the mechanisms involved in nanoliposome-mediated PTT, highlighting their capability to convert light energy into localized heat, facilitating the precise targeting of diseased cells or tissues. This precise regulation of light absorption and heat generation by nanoliposomes presents a non-invasive and precisely focused therapeutic approach, particularly in conditions like cancer. The study explores advancements in nanoliposomal formulations aimed at optimizing PTT outcomes, including strategies for improved stability, enhanced drug loading, and the targeted delivery of therapeutic agents to specific cells or tissues. Furthermore, the paper discusses multifunctional nanoliposomal systems that integrate imaging components or targeting elements for real-time monitoring and improved accuracy in PTT. Moreover, the review highlights recent preclinical and clinical trials showcasing the effectiveness and safety of nanoliposome-based PTT across various disease models. It also addresses challenges in clinical implementation, such as scalability, regulatory considerations, and long-term safety assessments. In conclusion, this paper underscores the substantial potential of nanoliposomes in advancing PTT as a promising therapeutic approach. Their distinctive characteristics, combined with their precise ability to convert light into heat, offer a tailored and efficient method for treating targeted diseases. The encouraging outcomes from preclinical studies pave the way for further exploration and potential clinical applications of nanoliposome-based PTT.Keywords: nanoliposomes, photothermal therapy, light absorption, heat conversion, therapeutic agents, targeted delivery, cancer therapy
Procedia PDF Downloads 112682 Separation of Lanthanides Ions from Mineral Waste with Functionalized Pillar[5]Arenes: Synthesis, Physicochemical Characterization and Molecular Dynamics Studies
Authors: Ariesny Vera, Rodrigo Montecinos
Abstract:
The rare-earth elements (REEs), or rare-earth metals (REMs), comprise seventeen chemical elements: the fifteen lanthanoids, plus scandium and yttrium. The lanthanoids comprise lanthanum and the f-block elements from cerium to lutetium. Scandium and yttrium are considered rare-earth elements because they have ionic radii similar to the lighter f-block elements. These elements were called rare earths because they are simply more difficult to extract and separate individually than most metals; generally, they do not accumulate in minerals, are rarely found in easily mined ores, and are often unfavorably distributed in common ores/minerals. REEs show unique chemical and physical properties in comparison to the other metals in the periodic table. Nowadays, these physicochemical properties are utilized in a wide range of synthetic, catalytic, electronic, medicinal, and military applications. Because of these applications, the global demand for rare-earth metals is becoming progressively more important in the transition to a self-sustaining society and a greener economy. However, due to the difficult separation of the lanthanoid ions and the high cost and pollution of existing processes, scientists are searching for a method that combines selective and quantitative separation of lanthanoids from the leaching liquor with a more economical and environmentally friendly process. This motivation has favored the design and development of more efficient and environmentally friendly cation extractors incorporating compounds such as ionic liquids, polymer inclusion membranes (PIMs), and supramolecular systems. Supramolecular chemistry focuses on the development of host-guest systems, in which a host molecule can recognize and bind a certain guest molecule or ion. Normally, the formation of a host-guest complex involves non-covalent interactions. Additionally, host-guest interactions can be influenced, among other effects, by the structural nature of host and guest. The different macrocyclic hosts that have been studied for lanthanoid species are crown ethers, cyclodextrins, cucurbiturils, calixarenes, and pillararenes. Among all the factors that can influence and affect lanthanoid(III) coordination, perhaps the most basic is systematic control using macrocyclic substituents that promote selective coordination. In this sense, the macrocyclic pillar[n]arenes (P[n]As) are relatively easy to functionalize and have a more π-rich cavity than other host molecules. This gives P[n]As a negative electrostatic potential in the cavity, which would be responsible for the selectivity of these compounds towards cations. Furthermore, the cavity size, the linker, and the functional groups of the polar head groups can be modified in order to control the association of lanthanoid cations. In this sense, different P[n]A systems, specifically derivatives of the pentamer P[5]A functionalized with amide, amine, phosphate, and sulfate groups, have been designed both by experimental synthesis and by molecular dynamics, and the interaction between these P[5]As and the lanthanoid ions La³⁺, Eu³⁺, and Lu³⁺ has been studied by physicochemical characterization: ¹H-NMR, ITC, and, for the Eu³⁺ systems, fluorescence.
The molecular dynamics study of these systems was carried out in hexane as solvent, again considering the lanthanoid ions mentioned above, with the corresponding comparative studies between the different ions.Keywords: lanthanoids, macrocycles, pillar[n]arenes, rare-earth metal extraction, supramolecular chemistry, supramolecular complexes.
Procedia PDF Downloads 77681 Screening and Improved Production of an Extracellular β-Fructofuranosidase from Bacillus Sp
Authors: Lynette Lincoln, Sunil S. More
Abstract:
With rising demand, world sugar consumption is expected to reach 203 million tonnes by 2021. Hydrolysis of sucrose (table sugar) into an equimolar mixture of glucose and fructose is catalyzed by β-D-fructofuranoside fructohydrolase (EC 3.2.1.26), commonly called invertase. Invertase is widely applied for fluid-filled centers in chocolates, for the preparation of artificial honey, as a sweetener, and especially to ensure that foodstuffs remain fresh, moist, and soft for longer spans. From an industrial perspective, properties such as increased solubility, increased osmotic pressure, and prevention of sugar crystallization in food products are highly desired. Screening for invertase does not involve a plate assay/qualitative test to determine enzyme production. In this study, we use a three-step screening strategy for the identification of a novel bacterial isolate from soil that is positive for invertase production. In the first step, soil collected from sugarcane fields (black soil, Maddur region of Mandya district, Karnataka, India) was serially diluted and grown on a Czapek-Dox medium (pH 5.0) containing sucrose as the sole C-source. Only colonies capable of utilizing/breaking down sucrose exhibited growth; these isolates released invertase in order to take up sucrose, splitting the disaccharide into simple sugars. Second, invertase activity was determined from the cell-free extract by measuring the glucose released into the medium at 540 nm. Third, morphological observation of the most potent isolate was carried out by several identification tests using Bergey's manual, which indicated the genus of the isolate to be Bacillus. Furthermore, this potent bacterial colony was subjected to 16S rDNA PCR amplification, and a single discrete PCR amplicon band of 1500 bp was observed. The 16S rDNA sequence was aligned against the NCBI GenBank database using BLAST to obtain the maximum identity score. Molecular sequencing and identification were performed by Xcelris Labs Ltd. (Ahmedabad, India). The colony was identified as Bacillus sp. BAB-3434, the first novel strain reported for extracellular invertase production. Molasses, a by-product of the sugarcane industry, is the dark viscous liquid obtained upon crystallization of sugar. Enhanced invertase production and optimization studies were carried out by a one-factor-at-a-time approach. Crucial parameters such as time course (24 h), pH (6.0), temperature (45 °C), inoculum size (2% v/v), N-source (yeast extract, 0.2% w/v), and C-source (molasses, 4% v/v) were found to be optimum, demonstrating an increased yield. The findings of this study reveal a simple screening method for an extracellular invertase from a rapidly growing Bacillus sp. Selection of the best factors that elevate enzyme activity, especially the utilization of molasses, which served both as an ideal substrate and as C-source, results in cost-effective production under submerged conditions. The invert mixture could be a replacement for table sugar, which is an economic advantage and reduces the tedious work of sugar growers. On-going studies involve purification of the extracellular invertase and determination of its transfructosylating activity, as at high concentrations of sucrose, invertase produces fructooligosaccharides (FOS), which possess prebiotic properties.Keywords: Bacillus sp., invertase, molasses, screening, submerged fermentation
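As a worked illustration of the activity measurement described above (glucose released, read at 540 nm), the following minimal sketch converts an absorbance reading into activity units, taking one unit (U) as 1 µmol of glucose released per minute. The standard-curve slope and intercept, reaction time, and volumes are hypothetical placeholders, not values from the study.

```python
# One unit (U) = 1 umol glucose released per minute per mL of enzyme extract.
def invertase_activity(a540, slope=0.75, intercept=0.02,
                       t_min=10.0, v_enzyme_ml=0.1, v_rxn_ml=1.0):
    # Glucose concentration from a hypothetical standard curve: A540 = slope*c + intercept
    glucose_mg_per_ml = (a540 - intercept) / slope
    # Convert total glucose in the reaction volume from mg to umol (MW of glucose = 180 g/mol)
    glucose_umol = glucose_mg_per_ml * v_rxn_ml / 180.0 * 1000.0
    # Normalize by reaction time and enzyme volume
    return glucose_umol / (t_min * v_enzyme_ml)

print(f"{invertase_activity(0.62):.2f} U/mL")  # example reading
```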
Procedia PDF Downloads 231680 Discovering Causal Structure from Observations: The Relationships between Technophile Attitude, Users Value and Use Intention of Mobility Management Travel App
Authors: Aliasghar Mehdizadeh Dastjerdi, Francisco Camara Pereira
Abstract:
The increasing complexity of, and demand for, transport services strain transportation systems, especially in urban areas with limited possibilities for building new infrastructure. The solution to this challenge requires changes in travel behavior. One proposed means to induce such change is the multimodal travel app. This paper describes a study of the intention to use a real-time multimodal travel app aimed at motivating travel behavior change in the Greater Copenhagen Region (Denmark) toward promoting sustainable transport options. The proposed app is a multi-faceted smartphone app including both travel information and persuasive strategies such as health and environmental feedback, tailored travel options, self-monitoring, tunneling users toward green behavior, social networking, nudging, and gamification elements. The prospect of mobility-management travel apps stimulating sustainable mobility rests not only on the original and proper employment of behavior change strategies, but also on explicitly anchoring them in established constructs from behavioral theories. The theoretical foundation is important because it positively and significantly influences the effectiveness of the system. However, there is a gap in current knowledge regarding the study of mobility-management travel apps grounded in behavioral theories, which should be explored further. This study addresses this gap with a social cognitive theory-based examination. However, compared to conventional methods in technology adoption research, this study adopts a reverse approach in which the associations between theoretical constructs are explored by the Max-Min Hill-Climbing (MMHC) algorithm, a hybrid causal discovery method. A technology-use preference survey was designed to collect data. The survey elicited different groups of variables, including (1) three groups of user motives for using the app, namely gain motives (e.g., saving travel time and cost), hedonic motives (e.g., enjoyment), and normative motives (e.g., less travel-related CO2 production), (2) technology-related self-concept (i.e., technophile attitude), and (3) use intention of the travel app. The questionnaire items provided the observational data from which the causal structure was learned. Discovering causal relationships from observational data is a critical challenge with applications in different research fields. The estimated causal structure shows that the two constructs of gain motives and technophilia have a causal effect on adoption intention. Likewise, there is a causal relationship from technophilia to both gain and hedonic motives. In line with the findings of prior studies, this highlights the importance of the functional value of the travel app, as well as technology self-concept, as two important variables for adoption intention. Furthermore, the results indicate the effect of technophile attitude on developing gain and hedonic motives. The causal structure shows hierarchical associations between the three groups of user motives. These can be explained by the 'frustration-regression' principle of Alderfer's ERG (Existence, Relatedness and Growth) theory of needs, meaning that when a higher-level need remains unfulfilled, a person may regress to lower-level needs that appear easier to satisfy.
To conclude, this study shows the capability of causal discovery methods to learn the causal structure of a theoretical model and, accordingly, to interpret the established associations.Keywords: travel app, behavior change, persuasive technology, travel information, causality
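To illustrate the kind of causal discovery step described above, here is a minimal sketch using the MMHC estimator from the pgmpy library. The data file and column names are hypothetical stand-ins for the survey constructs (assumed to be coded as discrete responses); the study's actual implementation is not specified in the abstract.

```python
import pandas as pd
from pgmpy.estimators import MmhcEstimator

# Hypothetical survey data: one row per respondent, one column per construct
# (e.g., gain_motive, hedonic_motive, normative_motive, technophilia, use_intention).
df = pd.read_csv("survey_constructs.csv")

# MMHC is a hybrid method: a constraint-based skeleton search (Max-Min
# Parents and Children) followed by score-based hill-climbing to orient edges.
estimator = MmhcEstimator(df)
model = estimator.estimate()

print(sorted(model.edges()))  # directed edges of the learned causal structure
```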
Procedia PDF Downloads 141679 A Reduced Ablation Model for Laser Cutting and Laser Drilling
Authors: Torsten Hermanns, Thoufik Al Khawli, Wolfgang Schulz
Abstract:
In laser cutting, as well as in long-pulsed laser drilling of metals, it can be demonstrated that the ablation shape that is formed (the shape of the cut faces or of the hole, respectively) approaches a so-called asymptotic shape, such that it changes only slightly or not at all with further irradiation. These findings are already known from the ultrashort-pulse (USP) ablation of dielectric and semiconducting materials. The explanation for the occurrence of an asymptotic shape in laser cutting and long-pulse drilling of metals is identified, and its underlying mechanism numerically implemented, tested, and clearly confirmed by comparison with experimental data. In detail, there is now a model that allows the simulation of the temporal (pulse-resolved) evolution of the hole shape in laser drilling as well as the final (asymptotic) shape of the cut faces in laser cutting. This simulation requires far fewer resources, such that it can even run on common desktop PCs or laptops. Individual parameters can be adjusted using sliders, and the simulation result appears in an adjacent window and changes in real time. This is made possible by an application-specific reduction of the underlying ablation model. Because this reduction dramatically decreases the complexity of calculation, it produces a result much more quickly. This means that the simulation can be carried out directly at the laser machine. Time-intensive experiments can be reduced and set-up processes completed much faster. The high speed of simulation also opens up a range of entirely different options, such as metamodeling. Suitable for complex applications with many parameters, metamodeling involves generating high-dimensional data sets with the parameters and several evaluation criteria for process and product quality. These sets can then be used to create individual process maps that show the dependency of individual parameter pairs. This advanced simulation makes it possible to find global and local extreme values through mathematical manipulation. Such simultaneous optimization of multiple parameters is scarcely possible by experimental means. This means that new methods in manufacturing, such as self-optimization, can be executed much faster. However, the software's potential does not stop there; time-intensive calculations exist in many areas of industry. In laser welding or laser additive manufacturing, for example, the simulation of thermally induced residual stresses still uses up considerable computing capacity or is not even possible. Transferring the principle of reduced models promises substantial savings there, too.Keywords: asymptotic ablation shape, interactive process simulation, laser drilling, laser cutting, metamodeling, reduced modeling
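The metamodeling step described above can be illustrated with a minimal sketch: sample the fast reduced model over a parameter range, fit a surrogate, and scan the surrogate for extreme values. The analytic stand-in for the reduced ablation model, the parameter names, and their ranges below are hypothetical; only the workflow is being illustrated.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

# Stand-in for the reduced ablation model: maps (laser power [W], feed rate [m/min])
# to a scalar quality criterion. In practice each row would be one fast simulation run.
def reduced_model(x):
    return -((x[:, 0] - 300.0) / 100.0) ** 2 - (x[:, 1] - 2.5) ** 2

X = rng.uniform([100.0, 0.5], [500.0, 5.0], size=(200, 2))  # sampled parameter sets
y = reduced_model(X)

surrogate = GaussianProcessRegressor().fit(X, y)  # metamodel of the data set

# Dense grid over the two parameters -> a "process map" for this parameter pair;
# the argmax locates the (predicted) global optimum mathematically.
power = np.linspace(100.0, 500.0, 50)
feed = np.linspace(0.5, 5.0, 50)
grid = np.array([[p, f] for p in power for f in feed])
pred = surrogate.predict(grid)
print("predicted optimum (power, feed):", grid[np.argmax(pred)])
```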
Procedia PDF Downloads 214678 Neurocognitive and Executive Function in Cocaine Addicted Females
Authors: Gwendolyn Royal-Smith
Abstract:
Cocaine ranks as one of the world’s most addictive and commonly abused stimulant drugs. Recent evidence indicates that the abuse of cocaine has risen so quickly among females that this group now accounts for about 40 percent of all users in the United States. Neuropsychological studies have demonstrated that specific neural activation patterns carry higher risks for neurocognitive and executive function in cocaine-addicted females, thereby increasing their vulnerability to poorer treatment outcomes and more frequent post-treatment relapse when compared to males. This study examined secondary data from a convenience sample of 164 cocaine-addicted males and females to assess neurocognitive and executive function. The principal objective of this study was to assess whether individual performance on the Stroop Word-Color Task is predictive of treatment success by gender. A second objective evaluated whether individual performance on neurocognitive measures, including the Stroop Word-Color Task, the Rey Auditory Verbal Learning Test (RAVLT), the Iowa Gambling Task, the Wisconsin Card Sorting Test (WCST), the total score from the Barratt Impulsiveness Scale (Version 11) (BIS-11), and the total score from the Frontal Systems Behavior Scale (FrSBe), demonstrated differences in neurocognitive and executive function performance by gender. Logistic regression models with covariate adjustment were employed. Initial analyses of the Stroop Word-Color Task indicated significant differences in the performance of males and females, with females experiencing more difficulty in derived interference reaction time and associate recall ability. In early testing, including the RAVLT, the number of advantageous vs. disadvantageous cards from the Iowa Gambling Task, the number of perseverative errors from the WCST, the BIS-11 total score, and the FrSBe total score, results were mixed, with women scoring lower on multiple indicators of both neurocognitive and executive function.Keywords: cocaine addiction, gender, neuropsychology, neurocognitive, executive function
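As an illustration of the covariate-adjusted logistic regression approach mentioned above, the sketch below regresses treatment success on a gender-by-Stroop interaction with adjustment covariates. The file name, variable names, and choice of covariates are hypothetical, since the abstract does not specify them.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per participant with a binary treatment outcome,
# Stroop interference score, gender, and adjustment covariates.
df = pd.read_csv("cocaine_treatment_outcomes.csv")

# Covariate-adjusted logistic regression: does Stroop performance predict
# treatment success, and does that relationship differ by gender?
model = smf.logit(
    "treatment_success ~ stroop_interference * gender + age + years_of_use",
    data=df,
).fit()
print(model.summary())
```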
Procedia PDF Downloads 402677 State Forest Management Practices by Indigenous Peoples in Dharmasraya District, West Sumatra Province, Indonesia
Authors: Abdul Mutolib, Yonariza Mahdi, Hanung Ismono
Abstract:
The existence of forests is essential to human life on earth, but it is threatened by deforestation and degradation. Deforestation and degradation in Indonesia are caused not only by illegal activity by companies and the like; even today, much forest damage is caused by human activities such as cutting down forests for agriculture and plantations. In West Sumatra, community forest management is supported by the enactment of customary land tenure, including ownership of land within the forest. Indigenous forest management has a positive benefit, giving the community an opportunity to earn a livelihood and income; but if forest management practices by indigenous peoples are not carried out wisely, forests are destroyed, with adverse effects on the environment. This paper is based on intensive field work in Dharmasraya District employing several data collection techniques, such as key informant interviews, household surveys, secondary data analysis, and satellite image interpretation. It answers the following questions: what is the impact of forest management by local communities on forest conditions (focusing on Production Forest and Limited Production Forest), and what do local communities know about the benefits of forests? The site is Nagari Bonjol, Dharmasraya District, because most of the forest in Dharmasraya is located there and owned by the Nagari Bonjol community. The results show that there is damage to forests in Dharmasraya because of forest management activities by local communities. An area of 33,500 ha of forest in Dharmasraya has been damaged because forests were converted into monoculture oil palm and rubber plantations. As a result of the destruction of forests, water resources are also diminishing, and the community has experienced drought in the dry season because forests were cut down and replaced by oil palm plantations. Local knowledge of the benefits of forests is low: people consider that standing forest offers no better benefits, and so cut it down and convert it into oil palm or rubber plantations. Local people do not understand the benefits of the ecological and environmental services that forests provide. Given these phenomena of land ownership in Dharmasraya, there is a need to educate the local community about the importance of protecting the forest, and a strategy is needed to integrate forest management so as to preserve ecological functions resembling those of natural woods while counting the economic benefits for the welfare of local communities. One alternative is to use smallholder agroforestry management models in accordance with the characteristics of the local community, which still consider the economic, social, and environmental aspects.Keywords: community, customary land, farmer plantations, and forests
Procedia PDF Downloads 335676 Study into the Interactions of Primary Limbal Epithelial Stem Cells and HTCEPI Using Tissue Engineered Cornea
Authors: Masoud Sakhinia, Sajjad Ahmad
Abstract:
Introduction: Though knowledge of the compositional makeup and structure of the limbal niche has progressed exponentially during the past decade, much is yet to be understood. Identifying the precise profile and role of the stromal makeup that spans the ocular surface may inform researchers of the optimal conditions needed to effectively expand LESCs in vitro, whilst preserving their differentiation status and phenotype. Limbal fibroblasts, as opposed to corneal fibroblasts, are thought to form an important component of the microenvironment where LESCs reside. Methods: The corneal stroma was tissue engineered in vitro using both limbal and corneal fibroblasts embedded within a 3D collagen matrix. The effects of these two fibroblast types on LESCs and the hTCEpi corneal epithelial cell line were then determined using phase contrast microscopy, histological analysis, and PCR for specific stem cell markers. The study aimed to develop an in vitro model which could be used to determine whether limbal, as opposed to corneal, fibroblasts maintain the stem cell phenotype of LESCs and the hTCEpi cell line. Results: Tissue culture analysis was inconclusive and requires further quantitative analysis before remarks can be made on cell proliferation within the varying stroma. Histological analysis of the tissue-engineered cornea showed a structure comparable to that of the human cornea, though with limited epithelial stratification. PCR results for epithelial cell markers of cells cultured on limbal fibroblasts showed reduced expression of CK3, a negative marker for LESCs, whilst also exhibiting a relatively low expression level of P63, a marker for undifferentiated LESCs. Conclusion: We have shown the potential for the construction of a tissue-engineered human cornea using a 3D collagen matrix and described some preliminary results in the analysis of the effects of varying stroma, consisting of limbal and corneal fibroblasts respectively, on the proliferation and stem cell phenotype of primary LESCs and hTCEpi corneal epithelial cells. Although no definitive marker exists to conclusively illustrate the presence of LESCs, the combination of positive and negative stem cell markers in our study was inconclusive. Though it is less translational to the human corneal model, the use of conditioned medium from limbal and corneal fibroblast cultures may provide a simpler avenue. Moreover, combinations of extracellular matrices could be used as a surrogate in these culture models.Keywords: cornea, Limbal Stem Cells, tissue engineering, PCR
Procedia PDF Downloads 278675 A Discussion on Urban Planning Methods after Globalization within the Context of Anticipatory Systems
Authors: Ceylan Sozer, Ece Ceylan Baba
Abstract:
The reforms and changes that began with industrialization in cities, and continued with globalization in the 1980s, created many changes in urban environments. City centers that had been desolated by industrialization became crowded with globalization and turned into the heart of technology, commerce, and social activities. While these immediate and intense alterations were planned around rigorous visions in developed countries, several urban areas where the processes were underestimated and no precautions were taken faced irrevocable situations. When the effects of globalization on cities are examined, it is seen that some cities made anticipatory system plans for future problems. Cities such as New York, London, and Tokyo planned to resolve probable future problems through systematic schemes that decrease possible side effects of globalization. The decisions in urban planning and their applications are the main points in terms of sustainability and livability in such mega-cities. This article examines the effects of globalization on urban planning through three mega-cities and their planning applications. When the urban planning applications of the three mega-cities are investigated, it is seen that the city plans are generated in light of past experiences and predictions of a certain future. In urban planning, the past and present experiences of a city should be examined, and future projections made together with current world dynamics, in a systematic way. In this study, methods used in urban planning will be discussed, the 'Anticipatory System' model will be explained, and its relations with global urban planning will be discussed. The concept of 'anticipation' means creating foresights and predictions about the future by combining past, present, and future within an action plan. The main distinctive feature that separates anticipatory systems from other systems is this combination of past, present, and future that concludes in an act. Urban plans that consist of various parameters and interactions are identified as 'live', and they have systematic integrity. Urban planning with an anticipatory system can be alive and can foresee some 'side effects' in design processes. After globalization, cities became more complex and should be designed within an anticipatory system model; such cities can be more livable and can sustain urban conditions for today and the future. In this study, the urban planning of Istanbul is analyzed through comparisons with the city plans of New York, Tokyo, and London in terms of anticipatory system models. The lack of such a system in Istanbul and its side effects will be discussed. When past and present actions in urban planning are approached through an anticipatory system, more accurate and sustainable results can be obtained in the future.Keywords: globalization, urban planning, anticipatory system, New York, London, Tokyo, Istanbul
Procedia PDF Downloads 143674 Practical Challenges of Tunable Parameters in Matlab/Simulink Code Generation
Authors: Ebrahim Shayesteh, Nikolaos Styliaras, Alin George Raducu, Ozan Sahin, Daniel Pombo VáZquez, Jonas Funkquist, Sotirios Thanopoulos
Abstract:
One of the important requirements in many code generation projects is defining some of the model parameters as tunable. This makes it possible to update the model parameters without performing the code generation again. This paper studies the concept of embedded code generation by the MATLAB/Simulink coder targeting the TwinCAT Simulink system. The generated runtime modules are then tested and deployed to the TwinCAT 3 engineering environment. However, defining parameters as tunable in MATLAB/Simulink code generation targeting TwinCAT is not very straightforward. This paper focuses on this subject and reviews some of the techniques tested here to make the parameters tunable in generated runtime modules. Three techniques are proposed for this purpose: normal tunable parameters, callback functions, and mask subsystems. Moreover, some test Simulink models are developed and used to evaluate the results of the proposed approaches. A brief summary of the study results is presented in the following. First of all, parameters defined as tunable and used in defining the values of other Simulink elements (e.g., the gain value of a gain block) can be changed after the code generation, and this value update will affect the values of all elements defined based on the tunable parameter. For instance, if parameter K=1 is defined as a tunable parameter in the code generation process and this parameter is used as the gain of a gain block in Simulink, the gain value of the gain block is equal to 1 in the TwinCAT environment after the code generation. But the value of K can be changed to a new value (e.g., K=2) in TwinCAT (without doing any new code generation in MATLAB); then the gain value of the gain block will change to 2. Secondly, adding a callback function in the form of a 'pre-load function,' 'post-load function,' or 'start function' will not help to make the parameters tunable without performing a new code generation. The reason is that such MATLAB files are run before the code generation is performed, so the parameters defined/calculated in these files are used as fixed values in the generated code. Thus, adding these files as callback functions to the Simulink model will not make these parameters flexible, since the MATLAB files are not attached to the generated code; to change the parameters defined/calculated in these files, the code generation has to be done again. However, adding these files as callback functions does force MATLAB to run them before the code generation, so there is no need to define the parameters mentioned in these files separately. Finally, using a tunable parameter in defining/calculating the values of other parameters through a mask is an efficient method to change the value of the latter parameters after the code generation. For instance, if tunable parameter K is used in calculating the value of two other parameters K1 and K2, then after the code generation, when the value of K is updated in the TwinCAT environment, the values of parameters K1 and K2 are also updated (without any new code generation).Keywords: code generation, MATLAB, tunable parameters, TwinCAT
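As a minimal sketch of the first technique (a normal tunable parameter), the following MATLAB snippet shows one common way to declare a workspace parameter with a non-inlined storage class so that it stays tunable in generated code. The model and block names are hypothetical, and the exact storage-class settings required for a given TwinCAT setup may differ from this sketch.

```matlab
% Minimal sketch (hypothetical model/block names): declare K as a tunable
% parameter so it is not inlined as a literal constant in the generated code.
K = Simulink.Parameter;
K.Value = 1;                                   % initial value used by the model
K.CoderInfo.StorageClass = 'ExportedGlobal';   % keep K as a global variable in the code

% Reference K symbolically in the model, e.g. as the Gain of a gain block:
% set_param('myModel/Gain1', 'Gain', 'K');
% After code generation, K can then be re-tuned in the runtime environment
% (e.g., set K = 2) without regenerating the code.
```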
Procedia PDF Downloads 228