Search results for: macroeconomics models
712 Comparisons between Student Learning Achievements and Their Problem Solving Skills on the Stoichiometry Issue with the Think-Pair-Share Model and STEM Education Method
Authors: P. Thachitasing, N. Jansawang, W. Rakrai, T. Santiboon
Abstract:
The aim of this study was to compare instructional design models, the Think-Pair-Share process and conventional learning with the 5E Inquiry Model, in enhancing students' learning achievements and their problem solving skills on the stoichiometry issue. The sample consisted of 80 students in two classes at the 11th grade level of Chaturaphak Phiman Ratchadaphisek School, selected with the cluster random sampling technique to capture students' different learning outcomes in chemistry classes. The 40-student experimental group was taught with the Think-Pair-Share process and the 40-student control group with the conventional (5E Inquiry Model) method. Five instruments were used: five lesson plans for the Think-Pair-Share and STEM education methods, and pretest and posttest assessments of students' learning achievements and their problem solving skills; students' outcomes under the Think-Pair-Share Model (TPSM) and the STEM education method were then compared. Statistically significant differences between posttest and pretest scores of the whole sample in the chemistry classes were found with the paired t-test and F-test. Associations between students' learning outcomes in chemistry and the two instructional methods were also found for both learning achievements and problem solving skills. The results reveal that students' learning achievements and problem solving skills differ between the groups, guiding practical improvements in chemistry classrooms and assisting teachers in implementing effective instructional approaches.
The mean learning achievement scores of the control group taught with the Think-Pair-Share Model (TPSM) were significantly lower than those of the experimental group taught with the STEM education method. The E1/E2 process efficiencies were 82.56/80.44 and 83.02/81.65, both above the 80/80 standard criterion validated with the IOC. The predictive efficiency (R²) values indicate that 61% and 67% of the variance in posttest learning achievements, and 63% and 67% of the variance in students' problem solving skills, were attributable to the different learning outcomes under the TPSM and STEM education instructional methods.
Keywords: comparisons, students' learning achievements, think-pair-share model (TPSM), STEM education, problem solving skills, chemistry classes, stoichiometry issue
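The E1/E2 process-efficiency criterion and the 80/80 standard cited above can be sketched numerically. The scoring convention (mean in-process score and mean posttest score, each expressed as a percentage of the maximum) follows common practice in this line of instructional research, and all score values below are invented for illustration only:

```python
# Hedged sketch of the E1/E2 instructional-efficiency index reported above.
# The abstract does not define the computation; this follows the common
# convention (E1 = mean in-process score as a percentage of the maximum,
# E2 = mean posttest score as a percentage of the maximum). All scores
# below are made-up illustrative values, not the study's data.

def efficiency(scores, max_score):
    """Mean score expressed as a percentage of the maximum attainable."""
    return 100.0 * sum(scores) / (len(scores) * max_score)

# Hypothetical per-student totals (maximum 50 points each part).
formative = [42, 44, 40, 39, 41]   # in-process (E1) scores
posttest  = [41, 40, 38, 42, 39]   # end-of-unit (E2) scores

e1 = efficiency(formative, 50)
e2 = efficiency(posttest, 50)
meets_criterion = e1 >= 80 and e2 >= 80   # the 80/80 standard cited above

print(f"E1/E2 = {e1:.2f}/{e2:.2f}, meets 80/80: {meets_criterion}")
```

A reported pair such as 82.56/80.44 would thus mean the class averaged 82.56% during instruction and 80.44% on the posttest, both clearing the 80/80 benchmark.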
Procedia PDF Downloads 249
711 Morphological and Molecular Evaluation of Dengue Virus Serotype 3 Infection in BALB/c Mice Lungs
Authors: Gabriela C. Caldas, Fernanda C. Jacome, Arthur da C. Rasinhas, Ortrud M. Barth, Flavia B. dos Santos, Priscila C. G. Nunes, Yuli R. M. de Souza, Pedro Paulo de A. Manso, Marcelo P. Machado, Debora F. Barreto-Vieira
Abstract:
The establishment of animal models for studies of DENV infection has been challenging, since circulating epidemic viruses do not naturally infect nonhuman species. Such studies are of great relevance to several areas of dengue research, including immunopathogenesis, drug development, and vaccines. In this scenario, the main objective of this study was to verify possible morphological changes, as well as the presence of antigens and viral RNA, in lung samples from BALB/c mice experimentally infected with an epidemic, non-neuroadapted DENV-3 strain. Male BALB/c mice, two months old, were inoculated with DENV-3 by the intravenous route. After 72 hours of infection, the animals were euthanized and the lungs were collected. Part of the samples was processed by standard techniques for analysis by light and transmission electron microscopy, and another part was processed for real-time PCR analysis. Morphological analyses of lungs from uninfected mice showed preserved tissue areas. In mice infected with DENV-3, the analyses revealed interalveolar septum thickening with inflammatory infiltrate, foci of alveolar atelectasis and hyperventilation, bleeding foci in the interalveolar septum and bronchioles, peripheral capillary congestion, accumulation of fluid in the blood capillaries, signs of interstitial cell necrosis, and platelets and mononuclear inflammatory cells circulating in the capillaries and/or adhered to the endothelium. In addition, activation of endothelial cells, platelets, mononuclear inflammatory cells, and neutrophil-type polymorphonuclear inflammatory cells, evidenced by the emission of cytoplasmic membrane prolongations, was observed. DENV-like particles were seen in the cytoplasm of endothelial cells. The viral genome was recovered from 3 of 12 lung samples.
These results demonstrate that the BALB/c mouse represents a suitable model for the study of the histopathological changes induced by DENV infection in the lung, with tissue alterations similar to those observed in human cases of dengue.
Keywords: BALB/c mice, dengue, histopathology, lung, ultrastructure
Procedia PDF Downloads 254
710 Assessment of Sediment Control Characteristics of Notches in Different Sediment Transport Regimes
Authors: Chih Ming Tseng
Abstract:
Landslides during typhoons generate substantial amounts of sediment, and subsequent rainfall can trigger various types of sediment transport regimes, such as debris flows, high-concentration sediment-laden flows, and typical river sediment transport. This study aims to investigate the sediment control characteristics of natural notches within different sediment transport regimes. High-resolution digital terrain models were used to establish the relationship between slope gradient and catchment area, which was then used to delineate distinct sediment transport regimes and analyze the sediment control characteristics of notches within them. The results indicate that the catchment areas of Aiyuzi Creek, Hossa Creek, and Chushui Creek in the study region can be clearly categorized into three sediment transport regimes based on the slope-area relationship curves: frequently collapsing headwater areas, debris flow zones, and high-concentration sediment-laden flow zones. The threshold for transitioning from the collapse zone to the debris flow zone in the Aiyuzi Creek catchment is lower than in Hossa Creek and Chushui Creek, suggesting that the active collapse processes in the upper reaches of Aiyuzi Creek continuously supply a significant sediment source, making it more susceptible to subsequent debris flow events. Moreover, the analysis of sediment trapping efficiency at notches within different sediment transport regimes reveals that as the notch constriction ratio increases, the sediment accumulation per unit area also increases. The accumulation thickness per unit area in high-concentration sediment-laden flow zones is greater than in debris flow zones, indicating differences in sediment deposition characteristics among the regimes. Sediment control rates at notches are generally positively correlated with the notch constriction ratio.
During Typhoon Morakot in 2009, the substantial sediment supply from slope failures in the upstream catchment led to an oversupplied sediment transport condition in the river channel. Consequently, sediment control rates were more pronounced during the medium and small sediment transport events between 2010 and 2015. However, there were no significant differences in sediment control rates among the different sediment transport regimes at notches. Overall, this research provides valuable insights into the sediment control characteristics of notches under various sediment transport conditions, which can aid the development of improved sediment management strategies in watersheds.
Keywords: landslide, debris flow, notch, sediment control, DTM, slope–area relation
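The delineation of regimes from the slope-area relation described above can be illustrated with a minimal classifier over slope-area space. The threshold values below are hypothetical placeholders, not the ones derived from the study's DTM data:

```python
# Illustrative sketch of delineating sediment transport regimes from a
# slope-area relation, as described in the abstract above. The threshold
# values are invented placeholders; the study derives the actual
# boundaries from high-resolution DTM data for each catchment.

def classify_regime(slope, area_km2,
                    collapse_area=0.05, debris_area=1.0, min_debris_slope=0.2):
    """Assign a grid cell to a sediment transport regime by its position
    in slope-area space (all thresholds are illustrative only)."""
    if area_km2 < collapse_area:
        return "frequent-collapse headwater"
    if area_km2 < debris_area and slope >= min_debris_slope:
        return "debris flow zone"
    return "high-concentration sediment-laden flow zone"

print(classify_regime(slope=0.6, area_km2=0.01))  # steep headwater cell
print(classify_regime(slope=0.3, area_km2=0.5))   # mid-catchment cell
print(classify_regime(slope=0.05, area_km2=5.0))  # downstream channel cell
```

Lowering `collapse_area` mimics the Aiyuzi Creek case, where the transition from collapse zone to debris flow zone occurs at a smaller threshold than in the other catchments.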
Procedia PDF Downloads 35
709 Terrestrial Laser Scans to Assess Aerial LiDAR Data
Authors: J. F. Reinoso-Gordo, F. J. Ariza-López, A. Mozas-Calvache, J. L. García-Balboa, S. Eddargani
Abstract:
DEM quality may depend on several factors, such as the data source, the capture method, the processing used to derive the model, or the cell size of the DEM. The two most important capture methods for producing regional-sized DEMs are photogrammetry and LiDAR; DEMs covering entire countries have been obtained with these methods. The quality of these DEMs has traditionally been evaluated by national cartographic agencies through point-based sampling focused on the vertical component. For this type of evaluation there are standards such as the NMAS and the ASPRS Positional Accuracy Standards for Digital Geospatial Data. However, it seems more appropriate to carry out this evaluation with a method that takes into account the surface nature of the DEM, so that the sampling is areal rather than point-based. This work is part of the research project "Functional Quality of Digital Elevation Models in Engineering", in which it is necessary to control the quality of a DEM whose data source is an experimental LiDAR flight with a density of 14 points per square meter, which we call the Point Cloud Product (PCpro). The present work describes the data capture on the ground and the post-processing tasks leading to the point cloud used as reference (PCref) to evaluate the quality of the PCpro. Each PCref consists of a 50x50 m patch obtained by registering scans from four different stations. The study area was the Spanish region of Navarra, which covers 10,391 km2; 30 homogeneously distributed patches were necessary to sample the entire surface. The patches were captured with a Leica BLK360 terrestrial laser scanner mounted on a pole reaching heights of up to 7 meters; the scanner was mounted inverted so that the characteristic shadow circle of the direct position does not occur.
To ensure that the accuracy of the PCref is greater than that of the PCpro, the georeferencing of the PCref was carried out with real-time GNSS, with a positioning accuracy better than 4 cm; this is much better than the altimetric mean square error estimated for the PCpro (<15 cm). The DEM of interest is the one corresponding to the bare earth, so a filter had to be applied to eliminate vegetation and auxiliary elements such as poles, tripods, etc. After the post-processing tasks, the PCref is ready to be compared with the PCpro using different techniques: cloud to cloud, or DEM to DEM after a resampling process.
Keywords: data quality, DEM, LiDAR, terrestrial laser scanner, accuracy
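The cloud-to-cloud comparison mentioned at the end can be sketched as a nearest-neighbour distance computation between PCref and PCpro. A brute-force search over tiny invented clouds is shown for clarity; a real 50x50 m patch at 14 points per square meter would require a spatial index such as a k-d tree:

```python
# Minimal sketch of a cloud-to-cloud comparison between a reference point
# cloud (PCref) and the evaluated product (PCpro): for every reference
# point, find the distance to its nearest product point, then summarise.
# The coordinates below are invented for illustration, not survey data.

import math

def nearest_distances(pc_ref, pc_pro):
    """For each point in pc_ref, the Euclidean distance to the closest
    point in pc_pro (both are lists of (x, y, z) tuples, in metres)."""
    return [min(math.dist(p, q) for q in pc_pro) for p in pc_ref]

# Tiny illustrative clouds (coordinates in metres, made up).
pc_ref = [(0.0, 0.0, 10.00), (1.0, 0.0, 10.05), (0.0, 1.0, 9.95)]
pc_pro = [(0.0, 0.0, 10.10), (1.0, 0.1, 10.00), (0.1, 1.0, 10.00)]

d = nearest_distances(pc_ref, pc_pro)
rmse = math.sqrt(sum(x * x for x in d) / len(d))
print(f"cloud-to-cloud RMSE: {rmse:.3f} m")
```

Such a per-point distance summary is areal rather than point-based, which is the motivation given above for preferring it over punctual vertical sampling.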
Procedia PDF Downloads 101
708 Anti-Neuroinflammatory and Anti-Apoptotic Efficacy of Equol against Lipopolysaccharide-Activated Microglia and Its Neurotoxicity
Authors: Lalita Subedi, Jae Kyoung Chae, Yong Un Park, Cho Kyo Hee, Lee Jae Hyuk, Kang Min Cheol, Sun Yeou Kim
Abstract:
Neuroinflammation may mediate the relationship between low estrogen levels and neurodegenerative disease. Estrogens are neuroprotective and anti-inflammatory in neurodegenerative disease models. Because of the long-term side effects of estrogens, research has focused on finding effective phytoestrogens for these biological activities. Daidzein, present in soybeans, and its active metabolite equol (7-hydroxy-3-(4'-hydroxyphenyl)-chroman) have strong antioxidant and anticancer properties; in this study, equol showed a potent anti-inflammatory and neuroprotective role in a neuroinflammatory model, and its in vitro activity was confirmed with a molecular mechanism acting through the NF-κB pathway. Three major CNS cell types, microglia (BV-2), astrocytes (C6), and neurons (N2a), were used to examine the effect of equol on inducible nitric oxide synthase (iNOS), cyclooxygenase-2 (COX-2), MAPK signaling proteins, and apoptosis-related proteins by western blot analysis. Nitric oxide (NO) and prostaglandin E2 (PGE2) were measured by the Griess method and ELISA, respectively. Cytokines such as tumor necrosis factor-α (TNF-α) and IL-6 were also measured in the conditioned medium of LPS-activated cells with or without equol. Equol inhibited NO production, PGE2 production, and the expression of COX-2 and iNOS in LPS-stimulated microglial cells in a dose-dependent manner without any cellular toxicity. Equol also showed a promising effect in modulating MAPK and nuclear factor kappa B (NF-κB) expression, with significant inhibition of the production of the proinflammatory cytokines interleukin-6 (IL-6) and tumor necrosis factor-α (TNF-α). Additionally, it inhibited LPS-activated microglia-induced neuronal cell death by downregulating apoptosis in neuronal cells. Furthermore, equol increased the production of neurotrophins such as NGF and increased neurite outgrowth as well.
In conclusion, the natural daidzein metabolite equol is more active than daidzein and shows promising effectiveness as an anti-neuroinflammatory and neuroprotective agent via downregulation of LPS-stimulated microglial activation and neuronal apoptosis. This work was supported by the Brain Korea 21 Plus project and High Value-Added Food Technology Development Program 114006-4, Ministry of Agriculture, Food and Rural Affairs.
Keywords: apoptosis, equol, neuroinflammation, phytoestrogen
Procedia PDF Downloads 362
707 Modeling of Turbulent Flow for Two-Dimensional Backward-Facing Step Flow
Authors: Alex Fedoseyev
Abstract:
This study investigates a simplified generalized hydrodynamic equation (GHE) model for the simulation of turbulent flow over a two-dimensional backward-facing step (BFS) at Reynolds number Re=132000. The GHE was derived from the generalized Boltzmann equation (GBE), which was obtained from first principles from the chain of Bogolubov kinetic equations and considers particles of finite dimensions. The GHE has additional temporal and spatial fluctuation terms compared to the Navier-Stokes equations (NSE). These terms carry a timescale multiplier τ, and the GHE reduces to the NSE when τ is zero. The nondimensional τ is the product of the Reynolds number and the squared length-scale ratio, τ=Re*(l/L)², where l is the apparent Kolmogorov length scale and L is a hydrodynamic length scale. BFS flow modeling results obtained by 2D calculations with the NSE alone cannot match the experimental data for Re>450; one or two additional equations are required for a turbulence model added to the NSE, which typically has two to five parameters to be tuned for specific problems. It is shown that the GHE does not require an additional turbulence model, while its turbulent velocity results are in good agreement with the experimental results. A review of several studies on the simulation of flow over the BFS from 1980 to 2023 is provided; most of these studies used turbulence models when Re>1000. In this study, the 2D turbulent flow over a BFS with step height H=L/3 (where L is the channel height) at Reynolds number Re=132000 was investigated using numerical solutions of the GHE (by a finite-element method) and compared to solutions of the Navier-Stokes equations, the k–ε turbulence model, and experimental results. The comparison included the velocity profiles at X/L=5.33 (near the end of the recirculation zone, available from the experiment), the recirculation zone length, and the velocity flow field.
The mean velocity of the NSE solution was obtained by averaging over the number of time steps. The solution with a standard k–ε model shows a velocity profile at X/L=5.33 that has no backward flow. The standard k–ε model underpredicts the experimental recirculation zone length X/L=7.0±0.5 by a substantial 20-25%, so a more sophisticated turbulence model would be needed for this problem. The obtained data confirm that the GHE results are in good agreement with the experimental results for turbulent flow over a two-dimensional BFS; a turbulence model was not required in this case. The computations were stable. The solution time for the GHE is the same as or less than that for the NSE, and significantly less than that for the NSE with a turbulence model. The present approach was limited to 2D and a single Reynolds number; further work will extend it to 3D flow and higher Re.
Keywords: backward-facing step, comparison with experimental data, generalized hydrodynamic equations, separation, reattachment, turbulent flow
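The timescale multiplier defined above, τ = Re*(l/L)², is simple to evaluate; a short sketch follows, with a purely illustrative length-scale ratio (the study's actual l/L value is not given in the abstract):

```python
# Sketch of the timescale multiplier used in the GHE described above:
# tau = Re * (l / L)^2, where l is the apparent Kolmogorov length scale
# and L a hydrodynamic length scale. When tau -> 0 the GHE reduces to
# the Navier-Stokes equations. The length-scale ratio below is an
# illustrative assumption, not a value from the study.

def tau_ghe(reynolds, l_small, l_large):
    """Nondimensional fluctuation timescale multiplier of the GHE."""
    return reynolds * (l_small / l_large) ** 2

re = 132_000                                    # Reynolds number of the study
tau = tau_ghe(re, l_small=1e-3, l_large=1.0)    # hypothetical l/L = 0.001
print(f"tau = {tau:.3f}")

# Limiting behaviour: tau vanishes as l/L -> 0, recovering the NSE.
assert tau_ghe(re, 0.0, 1.0) == 0.0
```

The quadratic dependence on l/L means the extra GHE terms only become significant when the resolved length scale approaches the Kolmogorov scale at high Re.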
Procedia PDF Downloads 61
706 Culturally Relevant Pedagogy: A Cross-Cultural Comparison
Authors: Medha Talpade, Salil Talpade
Abstract:
The intent of this quantitative project was to compare the values and perceptions of students from a predominantly white institution (PWI) with those from a historically black college or university (HBCU) regarding culturally relevant teaching and learning practices in the academic realm. The reason for interrelating student culture with teaching practices is to enable a pedagogical response to the low retention rates of African American students and first-generation Caucasian students in high schools and colleges, and to their low rates of social mobility and educational achievement. Culturally relevant pedagogy, according to related research, is rewarding to students, teachers, and the local and national community. Critical race theory (CRT) is the main framework used in this project to explain the ubiquity of a culturally relevant pedagogy. The purpose of this quantitative study was to test the critical race theory proposition relating the presence of factors associated with culturally relevant teaching strategies to perceived relevance. The culturally relevant teaching strategies were identified based on the recommendations and findings of past research. Participants included approximately 145 students from an HBCU and 55 students from a PWI. A survey consisting of 37 items related to culturally relevant pedagogy was administered. The themes used to construct the items were: use of culturally specific examples in class whenever possible; use of culturally specific presentational models; use of relational reinforcers; and active engagement. All items had a Likert-type response scale: participants reported their degree of agreement (5-point scale, from strongly disagree to strongly agree) and importance (3-point scale, from not at all important to very important) for each survey item. A new variable, Relevance, was formed as the multiplicative function of the importance and presence of a teaching and learning strategy.
A set of six demographic questions was included in the survey. A consent form based on NIH and APA ethical standards was distributed to the volunteers prior to survey administration. Results of factor analyses on the data from the PWI and the HBCU, and an ANOVA, indicated significant differences in Relevance related to specific themes. The results of this study are expected to inform educational practices and improve teaching and learning outcomes.
Keywords: culturally relevant pedagogy, college students, cross-cultural, applied psychology
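The multiplicative Relevance variable described above can be sketched as follows; the exact scoring scheme and the item responses are assumptions made for illustration only:

```python
# Sketch of the composite 'Relevance' variable described above, formed as
# the product of each item's presence (agreement, 1-5 Likert) and
# importance (1-3) ratings. The scoring scheme is an assumption, and the
# responses below are invented for illustration.

def relevance(agreement, importance):
    """Per-item relevance score: presence x importance (range 1 to 15)."""
    if not (1 <= agreement <= 5 and 1 <= importance <= 3):
        raise ValueError("rating out of range")
    return agreement * importance

# One hypothetical respondent's ratings for three of the survey themes.
items = [
    ("culturally specific examples", 5, 3),
    ("relational reinforcers",       4, 2),
    ("active engagement",            3, 3),
]

scores = {name: relevance(a, i) for name, a, i in items}
print(scores)  # e.g. {'culturally specific examples': 15, ...}
```

A multiplicative composite like this weights strong agreement more heavily when the respondent also rates the strategy as important, which is presumably the design intent behind the variable.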
Procedia PDF Downloads 432
705 The Path of Cotton-To-Clothing Value Chains to Development: A Mixed Methods Exploration of the Resuscitation of the Cotton-To-Clothing Value Chain in Post
Authors: Emma Van Schie
Abstract:
The purpose of this study is to use mixed methods research to create typologies of the performance of firms in the cotton-to-clothing value chain in Zimbabwe, and thereby to add to the small pool of studies on Sub-Saharan African value chains operating under economic liberalisation and pursuing development. The uptake of economic liberalisation measures across Sub-Saharan Africa has led to the restructuring of many value chains. While this has resulted in some African economies positively reintegrating into global commodity chains, it has been deeply problematic for the development impacts of the majority of others. Over and above this, these nations have been placed at a disadvantage by the fact that there is little scholarly and policy research on approaches for managing economic liberalisation and value chain development in the unique African context. As such, the central question facing these less successful cases is how they can integrate into the world economy while still fostering their development. This paper draws on quantitative questionnaires and qualitative interviews with 28 stakeholders in the cotton-to-clothing value chain in Zimbabwe. It examines the performance of firms in the value chain, and the subsequent local socio-economic development impacts affected by the revival of the cotton-to-clothing value chain following its collapse in the wake of Zimbabwe's uptake of economic liberalisation measures. The paper first documents the largely unrecorded characteristics and structures of firms in the value chain in the post-liberalisation era. It then derives typologies of firms as being in operation, closed down, or placed under judicial management, together with the common characteristics that each typology holds.
The key findings show how a mixture of macro-level and local-level aspects, such as value chain governance and the management structure of a business, leads to the most successful typology, the one able to add value to the chain in the context of economic liberalisation and thus unlock its socio-economic development potential. These typologies are used to make industry and policy recommendations on achieving this balance between the macro and the local level, as well as recommendations for further academic research into typologies and models for cotton value chains in Sub-Saharan Africa. In doing so, the study adds to the small collection of academic evidence and policy recommendations on the challenges African nations face when trying to incorporate into global commodity chains and benefit from their associated socio-economic development opportunities.
Keywords: cotton-to-clothing value chain, economic liberalisation, restructuring value chain, typologies of firms, value chain governance, Zimbabwe
Procedia PDF Downloads 170
704 Experimental Analysis of Supersonic Combustion Induced by Shock Wave at the Combustion Chamber of the 14-X Scramjet Model
Authors: Ronaldo de Lima Cardoso, Thiago V. C. Marcos, Felipe J. da Costa, Antonio C. da Oliveira, Paulo G. P. Toro
Abstract:
The 14-X is a strategic project of the Brazilian Air Force Command to develop a technological demonstrator of a hypersonic air-breathing propulsion system based on supersonic combustion, designed to fly in the Earth's atmosphere at 30 km of altitude and Mach number 10. The 14-X is under development at the Prof. Henry T. Nagamatsu Laboratory of Aerothermodynamics and Hypersonics of the Institute of Advanced Studies. The program began in 2007 and was planned in three stages: development of the waverider configuration, development of the scramjet configuration, and finally ground tests in the T3 hypersonic shock tunnel. The installation of the model, based on the scramjet of the 14-X, in the test section of the hypersonic shock tunnel was designed to reproduce and test the flight conditions at the inlet of the combustion chamber. Experimental studies with a hypersonic shock tunnel require special data acquisition techniques. To measure the pressure along the tested model geometry, 30 PCB® model 122A22 pressure transducers were used; the piezoelectric crystals of such a transducer produce an electric current when subjected to pressure variations (PCB® Piezotronics, 2016). The transducer signals were read with an oscilloscope. After the studies began, it was observed that the pressure inside the combustion chamber was lower than expected. One solution to increase the pressure inside the combustion chamber was to install an obstacle, providing higher temperature and pressure. To confirm whether combustion occurred, the emission spectroscopy technique was selected. The region analyzed by the emission spectroscopy system is the edge of the obstacle installed inside the combustion chamber.
The emission spectroscopy technique was used to observe the emission of OH*, confirming or ruling out combustion of the mixture of atmospheric air at supersonic speed and the hydrogen fuel inside the combustion chamber of the model. This paper shows the results of experimental studies of supersonic combustion induced by shock waves, performed at the T3 Hypersonic Shock Tunnel using the 14-X scramjet model. It also provides important data for combustion studies using the model based on the 14-X engine (second stage of the 14-X program), informing corrections that may be necessary in the next stages of the program or in other models for experimental study.
Keywords: 14-X, experimental study, ground tests, scramjet, supersonic combustion
Procedia PDF Downloads 390
703 Inner Quality Parameters of Rapeseed (Brassica napus) Populations in Different Sowing Technology Models
Authors: É. Vincze
Abstract:
Demand for plant oils has increased enormously, owing on the one hand to changes in human nutrition habits and on the other to the increased raw material demand of some industrial sectors, as well as to the growth of biofuel production. Besides the determining importance of sunflower in Hungary, the production area and, in part, the average yield of rapeseed have increased among the oil crops produced. The variety/hybrid palette has changed significantly during the past decade and has been extended to a significant extent. It is agreed that rapeseed production demands professionalism and local experience; technological elements are successive, and high yields cannot be produced without a system-based approach. The aim of the present work was a complex study of one of the most critical production technology elements of rapeseed production: sowing technology. Several sowing technology elements are studied in this research project: the biological basis (the hybrid Arkaso is studied in this regard); sowing time (treatments were set to represent the wide period used in practice: early, optimal, and late sowing); and plant density (the reactions of sparse, optimal, and overly dense populations were modelled). The multifactorial experimental system enables both the individual and the combined evaluation of rapeseed sowing technology elements, as well as their modelling using the experimental result data. Yield quality and quantity were determined in the present experiment, along with the interactions between these factors. The experiment was set up in four replications at the Látókép Plant Production Research Site of the University of Debrecen. Two different sowing times were sown in the first experimental year (2014), and three in the second (2015).
Three different plant densities were set in both years: 200, 350, and 500 thousand plants ha-1. Uniform nutrient supply and a row spacing of 45 cm were applied, with winter wheat as the pre-crop. Plant physiological measurements were executed in the populations of the Arkaso rapeseed hybrid: relative chlorophyll content (SPAD) analysis and leaf area index (LAI) measurement, each monitored at 7 different measurement times.
Keywords: inner quality, plant density, rapeseed, sowing time
Procedia PDF Downloads 202
702 Gender and Total Compensation, in an ‘Age’ of Disruption
Authors: Daniel J. Patricio Jiménez
Abstract:
The term 'total compensation' refers to salary, training, innovation and development, and of course motivation; total compensation is an open and flexible system which must facilitate personal and family conciliation and therefore cannot be isolated from social reality. Today, the challenge for any company that wants to have a future is to be sustainable, and women play a special role in this. Spain, in its statutory and collective-bargaining development, has not given a sufficient response to new phenomena such as bonuses, stock options, or fringe benefits (constructed dogmatically and by court decisions), nor to the new digital reality, where cryptocurrency, new collaborative models, and new forms of service provision, such as remote work, are always ahead of the law. To talk about compensation is to talk about the gender gap, and with the entry into force of RD.902/2020 on 14 April 2021, certain measures are necessary under the principle of salary transparency; the valuation of jobs, the pay register (RD. 6/2019), and the pay audit are examples of this. Analyzing the methodologies, and in particular the determination and weight of the factors, so that the system itself is not discriminatory, is essential. The wage gap in Spain is smaller than in Europe, but the sources do not reflect the reality, and since the beginning of the pandemic there has been a clear stagnation. A living wage is not the minimum wage; it is identified with rights and needs, and, based on internal equity, it reflects the competitiveness of the company in terms of human capital. Spain has lost and has not recovered the relative weight of its wages; this is having a direct impact on competitiveness and, consequently, on the precariousness of employment and, undoubtedly, on levels of extreme poverty.
Training is becoming more than ever a strategic factor; the new digital reality requires that each component of the system be connected. Transversality is imposed on us, forcing us to redefine content and respond to the demands of the new normality, because technology and robotization are changing the concept of employability. The presence of women in this context is necessary, and there is a long way to go. So-called emotional compensation becomes particularly relevant at a time when the pandemic, silence, and disruption are leaving after-effects; technostress (in all its manifestations) is just one of them. Talking about motivation today makes no sense without first being aware that mental health is a priority, and that it must be treated and communicated in an inclusive way, because doing so increases satisfaction, productivity, and engagement. There is a clear conclusion to all this: compensation systems do not respond to the 'new normality'; diversity, and in particular women, cannot be invisible in human resources policies if the company wants to be sustainable.
Keywords: diversity, gender gap, human resources, sustainability
Procedia PDF Downloads 169
701 Evaluation of the Influence of Graphene Oxide on Spheroid and Monolayer Culture under Flow Conditions
Authors: A. Zuchowska, A. Buta, M. Mazurkiewicz-Pawlicka, A. Malolepszy, L. Stobinski, Z. Brzozka
Abstract:
In recent years, graphene-based materials are finding more and more applications in biological science. As a thin, tough, transparent and chemically resistant materials, they appear to be a very good material for the production of implants and biosensors. Interest in graphene derivatives also resulted at the beginning of research about the possibility of their application in cancer therapy. Currently, the analysis of their potential use in photothermal therapy and as a drug carrier is mostly performed. Moreover, the direct anticancer properties of graphene-based materials are also tested. Nowadays, cytotoxic studies are conducted on in vitro cell culture in standard culture vessels (macroscale). However, in this type of cell culture, the cells grow on the synthetic surface in static conditions. For this reason, cell culture in macroscale does not reflect in vivo environment. The microfluidic systems, called Lab-on-a-chip, are proposed as a solution for improvement of cytotoxicity analysis of new compounds. Here, we present the evaluation of cytotoxic properties of graphene oxide (GO) on breast, liver and colon cancer cell line in a microfluidic system in two spatial models (2D and 3D). Before cell introduction, the microchambers surface was modified by the fibronectin (2D, monolayer) and poly(vinyl alcohol) (3D, spheroids) covering. After spheroid creation (3D) and cell attachment (2D, monolayer) the selected concentration of GO was introduced into microsystems. Then monolayer and spheroids viability/proliferation using alamarBlue® assay and standard microplate reader was checked for three days. Moreover, in every day of the culture, the morphological changes of cells were determined using microscopic analysis. Additionally, on the last day of the culture differential staining using Calcein AM and Propidium iodide were performed. We were able to note that the GO has an influence on all tested cell line viability in both monolayer and spheroid arrangement. 
We showed that GO caused a greater decrease in viability/proliferation for spheroids than for monolayers (this was observed for all tested cell lines). The higher cytotoxicity of GO in spheroid culture may be caused by the different geometry of the microchambers for 2D and 3D cell cultures; GO was probably washed out of the flat microchambers used for 2D culture. These results were also confirmed by differential staining. Comparing our results with studies conducted in the macroscale, we also showed that the cytotoxic properties of GO change depending on the cell culture conditions (static/flow).
Keywords: cytotoxicity, graphene oxide, monolayer, spheroid
Procedia PDF Downloads 126
700 Nanoliposomes in Photothermal Therapy: Advancements and Applications
Authors: Mehrnaz Mostafavi
Abstract:
Nanoliposomes, minute lipid-based vesicles at the nano-scale, show promise in the realm of photothermal therapy (PTT). This study presents an extensive overview of nanoliposomes in PTT, exploring their distinct attributes and the significant progress in this therapeutic methodology. The research delves into the fundamental traits of nanoliposomes, emphasizing their adaptability, compatibility with biological systems, and their capacity to encapsulate diverse therapeutic substances. Specifically, it examines the integration of light-absorbing materials, like gold nanoparticles or organic dyes, into nanoliposomal formulations, enabling their efficacy as proficient agents for photothermal treatment. Additionally, this paper elucidates the mechanisms involved in nanoliposome-mediated PTT, highlighting their capability to convert light energy into localized heat, facilitating the precise targeting of diseased cells or tissues. This precise regulation of light absorption and heat generation by nanoliposomes presents a non-invasive and precisely focused therapeutic approach, particularly in conditions like cancer. The study explores advancements in nanoliposomal formulations aimed at optimizing PTT outcomes. These advancements include strategies for improved stability, enhanced drug loading, and the targeted delivery of therapeutic agents to specific cells or tissues. Furthermore, the paper discusses multifunctional nanoliposomal systems, integrating imaging components or targeting elements for real-time monitoring and improved accuracy in PTT. Moreover, the review highlights recent preclinical and clinical trials showcasing the effectiveness and safety of nanoliposome-based PTT across various disease models. It also addresses challenges in clinical implementation, such as scalability, regulatory considerations, and long-term safety assessments.
In conclusion, this paper underscores the substantial potential of nanoliposomes in advancing PTT as a promising therapeutic approach. Their distinctive characteristics, combined with their precise ability to convert light into heat, offer a tailored and efficient method for treating targeted diseases. The encouraging outcomes from preclinical studies pave the way for further exploration and potential clinical applications of nanoliposome-based PTT.
Keywords: nanoliposomes, photothermal therapy, light absorption, heat conversion, therapeutic agents, targeted delivery, cancer therapy
Procedia PDF Downloads 114
699 A Reduced Ablation Model for Laser Cutting and Laser Drilling
Authors: Torsten Hermanns, Thoufik Al Khawli, Wolfgang Schulz
Abstract:
In laser cutting as well as in long-pulsed laser drilling of metals, it can be demonstrated that the ablation shape (the shape of the cut faces and the hole shape, respectively) approaches a so-called asymptotic shape, such that it changes only slightly or not at all with further irradiation. These findings are already known from the ultrashort pulse (USP) ablation of dielectric and semiconducting materials. The explanation for the occurrence of an asymptotic shape in laser cutting and long-pulse drilling of metals is identified, its underlying mechanism numerically implemented, tested and clearly confirmed by comparison with experimental data. In detail, there now is a model that allows the simulation of the temporal (pulse-resolved) evolution of the hole shape in laser drilling as well as the final (asymptotic) shape of the cut faces in laser cutting. This simulation requires far fewer resources, such that it can even run on common desktop PCs or laptops. Individual parameters can be adjusted using sliders – the simulation result appears in an adjacent window and changes in real time. This is made possible by an application-specific reduction of the underlying ablation model. Because this reduction dramatically decreases the complexity of the calculation, it produces a result much more quickly. This means that the simulation can be carried out directly at the laser machine. Time-intensive experiments can be reduced, and set-up processes can be completed much faster. The high speed of simulation also opens up a range of entirely different options, such as metamodeling. Suitable for complex applications with many parameters, metamodeling involves generating high-dimensional data sets with the parameters and several evaluation criteria for process and product quality. These sets can then be used to create individual process maps that show the dependency of individual parameter pairs.
This advanced simulation makes it possible to find global and local extreme values through mathematical manipulation. Such simultaneous optimization of multiple parameters is scarcely possible by experimental means. This means that new manufacturing methods such as self-optimization can be executed much faster. However, the software's potential does not stop there; time-intensive calculations exist in many areas of industry. In laser welding or laser additive manufacturing, for example, the simulation of thermally induced residual stresses still uses up considerable computing capacity or is not even possible. Transferring the principle of reduced models promises substantial savings there, too.
Keywords: asymptotic ablation shape, interactive process simulation, laser drilling, laser cutting, metamodeling, reduced modeling
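The metamodeling workflow described in this abstract can be sketched in a few lines: sample an expensive process simulation at scattered parameter points, fit a cheap surrogate, then scan the surrogate for extrema. The quality function below is a hypothetical stand-in for the reduced ablation simulation, and the quadratic response surface is one common surrogate choice, not the authors' actual model.

```python
import numpy as np

# Hypothetical stand-in for an expensive ablation simulation:
# process quality as a function of two parameters (e.g., power and speed).
def expensive_simulation(power, speed):
    return -(power - 2.0) ** 2 - 0.5 * (speed - 1.0) ** 2

# Sample the "simulation" at scattered parameter points.
rng = np.random.default_rng(0)
power = rng.uniform(0.0, 4.0, 200)
speed = rng.uniform(0.0, 2.0, 200)
quality = expensive_simulation(power, speed)

# Fit a quadratic response-surface metamodel by least squares.
def design(p, s):
    return np.column_stack([np.ones_like(p), p, s, p * s, p ** 2, s ** 2])

coef, *_ = np.linalg.lstsq(design(power, speed), quality, rcond=None)

# The surrogate is cheap to evaluate, so a dense grid search for the
# optimum takes milliseconds instead of re-running the simulation.
pp, ss = np.meshgrid(np.linspace(0, 4, 101), np.linspace(0, 2, 101))
surrogate = design(pp.ravel(), ss.ravel()) @ coef
best = np.argmax(surrogate)
best_power, best_speed = pp.ravel()[best], ss.ravel()[best]
```

Because the stand-in function is itself quadratic, the surrogate recovers the optimum (power ≈ 2, speed ≈ 1) exactly; for a real ablation model, one would validate the metamodel against held-out simulation runs before trusting its extrema.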
Procedia PDF Downloads 216
698 Neurocognitive and Executive Function in Cocaine Addicted Females
Authors: Gwendolyn Royal-Smith
Abstract:
Cocaine ranks as one of the world’s most addictive and commonly abused stimulant drugs. Recent evidence indicates that cocaine abuse has risen so quickly among females that this group now accounts for about 40 percent of all users in the United States. Neuropsychological studies have demonstrated that specific neural activation patterns carry higher risks for neurocognitive and executive dysfunction in cocaine-addicted females, thereby increasing their vulnerability to poorer treatment outcomes and more frequent post-treatment relapse compared to males. This study examined secondary data from a convenience sample of 164 cocaine-addicted males and females to assess neurocognitive and executive function. The principal objective was to assess whether individual performance on the Stroop Word-Color Task is predictive of treatment success by gender. A second objective evaluated whether individual performance on neurocognitive measures, including the Stroop Word-Color Task, the Rey Auditory Verbal Learning Test (RAVLT), the Iowa Gambling Task, the Wisconsin Card Sorting Task (WCST), the total score on the Barratt Impulsiveness Scale, Version 11 (BIS-11), and the total score on the Frontal Systems Behavior Scale (FrSBe), demonstrated gender differences in neurocognitive and executive function. Logistic regression models with covariate adjustment were employed. Initial analyses of the Stroop Word-Color Task indicated significant differences between the performance of males and females, with females experiencing more challenges in derived interference reaction time and associate recall ability.
In early testing, including the Rey Auditory Verbal Learning Test (RAVLT), the number of advantageous vs. disadvantageous cards on the Iowa Gambling Task, the number of perseverative errors on the Wisconsin Card Sorting Task (WCST), the total score on the Barratt Impulsiveness Scale, Version 11 (BIS-11), and the total score on the Frontal Systems Behavior Scale (FrSBe), results were mixed, with women scoring lower on multiple indicators of both neurocognitive and executive function.
Keywords: cocaine addiction, gender, neuropsychology, neurocognitive, executive function
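A covariate-adjusted logistic regression of the kind employed in this study can be sketched as follows. The data here are synthetic and the variable names (Stroop interference time, gender, treatment success) are illustrative assumptions, not the study's actual data; the model is fitted by Newton-Raphson iteration.

```python
import numpy as np

# Synthetic, illustrative data: treatment success modelled as a function
# of Stroop interference reaction time, adjusted for gender.
rng = np.random.default_rng(1)
n = 500
gender = rng.integers(0, 2, n).astype(float)   # 0 = male, 1 = female
stroop = rng.normal(50.0, 10.0, n)             # interference reaction time
true_logit = 3.0 - 0.05 * stroop - 0.4 * gender
success = rng.binomial(1, 1.0 / (1.0 + np.exp(-true_logit)))

# Covariate-adjusted logistic regression via Newton-Raphson.
X = np.column_stack([np.ones(n), stroop, gender])
beta = np.zeros(3)
for _ in range(25):
    mu = 1.0 / (1.0 + np.exp(-X @ beta))          # fitted probabilities
    grad = X.T @ (success - mu)                   # score vector
    hess = X.T @ (X * (mu * (1 - mu))[:, None])   # information matrix
    beta += np.linalg.solve(hess, grad)

# beta[1] estimates the effect of interference time on the log-odds of
# treatment success; beta[2] is the gender effect adjusted for it.
```

In practice, one would report odds ratios with standard errors (the inverse of the information matrix at convergence gives the covariance of the estimates) rather than raw coefficients.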
Procedia PDF Downloads 402
697 State Forest Management Practices by Indigenous Peoples in Dharmasraya District, West Sumatra Province, Indonesia
Authors: Abdul Mutolib, Yonariza Mahdi, Hanung Ismono
Abstract:
The existence of forests is essential to human life on earth, but it is threatened by deforestation and degradation. Deforestation and degradation in Indonesia are caused not only by illegal activity by companies and the like; today, much forest damage is caused by human activities such as clearing forests for agriculture and plantations. In West Sumatra, community forest management is supported by the recognition of customary land tenure, including ownership of land within forests. Indigenous forest management has positive benefits, giving the community an opportunity to earn a livelihood and income, but if forest management is not practiced wisely, forests are destroyed, with adverse effects on the environment. This paper is based on intensive fieldwork in Dharmasraya District employing several data collection techniques, such as key informant interviews, household surveys, secondary data analysis, and satellite image interpretation. It answers the following questions: how does forest management by local communities affect forest conditions (focusing on production forest and limited production forest), and what do local communities know about the benefits of forests? The site is Nagari Bonjol, Dharmasraya District, because most of the forest in Dharmasraya is located there and owned by the Nagari Bonjol community. The results show that there is damage to forests in Dharmasraya because of forest management activities by local communities. An area of 33,500 ha of forest in Dharmasraya has been damaged because forests have been converted into oil palm and rubber monoculture plantations. As a result of the destruction of forests, water resources are also diminishing, and the community has experienced drought in the dry season because forests have been cut down and replaced by oil palm plantations.
Local knowledge of the benefits of forests is low: people consider that the forest does not offer better benefits, so it is cut down and converted into oil palm or rubber plantations. Local people do not understand the ecological benefits and environmental services that forests provide. Given these phenomena of land ownership in Dharmasraya, there is a need to educate the local community about the importance of protecting the forest, and a strategy is needed to integrate forest management so as to keep ecological functions resembling those of natural forest while counting the economic benefits for the welfare of local communities. One alternative is to use smallholder agroforestry models in accordance with the characteristics of the local community, which still consider economic, social and environmental aspects.
Keywords: community, customary land, farmer plantations, forests
Procedia PDF Downloads 337
696 Study into the Interactions of Primary Limbal Epithelial Stem Cells and hTCEpi Using Tissue Engineered Cornea
Authors: Masoud Sakhinia, Sajjad Ahmad
Abstract:
Introduction: Though knowledge of the compositional makeup and structure of the limbal niche has progressed exponentially during the past decade, much is yet to be understood. Identifying the precise profile and role of the stromal makeup that spans the ocular surface may inform researchers of the optimum conditions needed to effectively expand LESCs in vitro whilst preserving their differentiation status and phenotype. Limbal fibroblasts, as opposed to corneal fibroblasts, are thought to form an important component of the microenvironment where LESCs reside. Methods: The corneal stroma was tissue engineered in vitro using both limbal and corneal fibroblasts embedded within a 3D collagen matrix. The effect of these two different fibroblast types on LESCs and the hTCEpi corneal epithelial cell line was then determined using phase contrast microscopy, histological analysis and PCR for specific stem cell markers. The study aimed to develop an in vitro model that could be used to determine whether limbal, as opposed to corneal, fibroblasts maintain the stem cell phenotype of LESCs and the hTCEpi cell line. Results: Tissue culture analysis was inconclusive and requires further quantitative analysis before remarks can be made on cell proliferation within the varying stroma. Histological analysis of the tissue-engineered cornea showed a structure comparable to that of the human cornea, though with limited epithelial stratification. PCR results for epithelial cell markers of cells cultured on limbal fibroblasts showed reduced expression of CK3, a negative marker for LESCs, whilst also exhibiting a relatively low expression level of p63, a marker for undifferentiated LESCs.
Conclusion: We have shown the potential for the construction of a tissue-engineered human cornea using a 3D collagen matrix and described some preliminary results from the analysis of the effects of varying stroma, consisting of limbal and corneal fibroblasts respectively, on the proliferation and stem cell phenotype of primary LESCs and hTCEpi corneal epithelial cells. Although no definitive marker exists to conclusively demonstrate the presence of LESCs, the combination of positive and negative stem cell markers in our study was inconclusive. Though it is less translational to the human corneal model, the use of conditioned medium from limbal and corneal fibroblasts may provide a simpler avenue. Moreover, combinations of extracellular matrices could be used as a surrogate in these culture models.
Keywords: cornea, limbal stem cells, tissue engineering, PCR
Procedia PDF Downloads 279
695 A Discussion on Urban Planning Methods after Globalization within the Context of Anticipatory Systems
Authors: Ceylan Sozer, Ece Ceylan Baba
Abstract:
The reforms and changes that began with industrialization and continued with globalization in the 1980s created many changes in urban environments. City centers that had been deserted due to industrialization became crowded with globalization and turned into the heart of technology, commerce and social activity. While these immediate and intense alterations were planned around rigorous visions in developed countries, urban areas where the processes were underestimated and no precautions were taken faced irrevocable situations. When the effects of globalization on cities are examined, it is seen that some cities have anticipatory system plans for future problems. Cities such as New York, London and Tokyo have planned to resolve probable future problems in a systematic scheme to decrease the possible side effects of globalization. Decisions in urban planning and their applications are the main points in terms of sustainability and livability in such mega-cities. This article examines the effects of globalization on urban planning through three mega-cities and their applications. When the urban plans of the three mega-cities are investigated, it is seen that they are generated in light of past experiences and predictions of a certain future. In urban planning, the past and present experiences of a city should be examined, and future projections can then be made together with current world dynamics in a systematic way. In this study, methods used in urban planning will be discussed, and the 'Anticipatory System' model will be explained and its relation to global urban planning discussed. The concept of 'anticipation' means creating foresights and predictions about the future by combining past, present and future within an action plan.
The main distinctive feature that separates anticipatory systems from other systems is that they combine past, present and future and conclude with an act. Urban plans that consist of various parameters and interactions are identified as 'live' and have systematic integrity. Urban planning with an anticipatory system can be alive and can foresee some 'side effects' during design processes. After globalization, cities became more complex and should be designed within an anticipatory system model. Such cities can be more livable and can sustain urban conditions for today and the future. In this study, the urban planning of Istanbul is analyzed in comparison with the city plans of New York, Tokyo and London in terms of anticipatory system models. The lack of such a system in Istanbul and its side effects will be discussed. When past and present actions in urban planning are approached through an anticipatory system, more accurate and sustainable results can be obtained in the future.
Keywords: globalization, urban planning, anticipatory system, New York, London, Tokyo, Istanbul
Procedia PDF Downloads 144
694 Practical Challenges of Tunable Parameters in Matlab/Simulink Code Generation
Authors: Ebrahim Shayesteh, Nikolaos Styliaras, Alin George Raducu, Ozan Sahin, Daniel Pombo VáZquez, Jonas Funkquist, Sotirios Thanopoulos
Abstract:
One important requirement in many code generation projects is defining some of the model parameters as tunable. This makes it possible to update the model parameters without performing the code generation again. This paper studies the concept of embedded code generation by the MATLAB/Simulink coder targeting the TwinCAT Simulink system. The generated runtime modules are then tested and deployed to the TwinCAT 3 engineering environment. However, defining parameters as tunable in MATLAB/Simulink code generation targeting TwinCAT is not very straightforward. This paper focuses on this subject and reviews some of the techniques tested here to make the parameters tunable in generated runtime modules. Three techniques are proposed for this purpose: normal tunable parameters, callback functions, and mask subsystems. Moreover, some test Simulink models are developed and used to evaluate the results of the proposed approaches. A brief summary of the study results is presented in the following. First of all, parameters defined as tunable and used in defining the values of other Simulink elements (e.g., the gain value of a gain block) can be changed after the code generation, and this value update will affect the values of all elements defined based on the tunable parameter. For instance, if parameter K=1 is defined as a tunable parameter in the code generation process and this parameter is used as the gain of a gain block in Simulink, the gain value of that block equals 1 in the TwinCAT environment after the code generation. But the value of K can be changed to a new value (e.g., K=2) in TwinCAT (without doing any new code generation in MATLAB); then the gain value of the gain block will change to 2. Secondly, adding a callback function in the form of a 'pre-load function,' 'post-load function,' or 'start function' will not help to make the parameters tunable without performing a new code generation.
This means that any MATLAB files should be run before performing the code generation. The parameters defined/calculated in these files will be used as fixed values in the generated code. Thus, adding these files as callback functions to the Simulink model will not make the parameters flexible, since the MATLAB files are not attached to the generated code. Therefore, to change the parameters defined/calculated in these files, the code generation has to be done again. However, adding these files as callback functions forces MATLAB to run them before the code generation, so there is no need to define the parameters mentioned in these files separately. Finally, using a tunable parameter in defining/calculating the values of other parameters through a mask is an efficient method to change the values of the latter parameters after the code generation. For instance, if tunable parameter K is used in calculating the values of two other parameters K1 and K2, and the value of K is updated in the TwinCAT environment after the code generation, the values of parameters K1 and K2 will also be updated (without any new code generation).
Keywords: code generation, MATLAB, tunable parameters, TwinCAT
Procedia PDF Downloads 228
693 An Evaluation of the Use of Telematics for Improving the Driving Behaviours of Young People
Authors: James Boylan, Denny Meyer, Won Sun Chen
Abstract:
Background: Globally, there is an increasing trend in road traffic deaths, which reached 1.35 million in 2016 in comparison to 1.3 million a decade earlier, and overall, road traffic injuries are ranked as the eighth leading cause of death for all age groups. The reported death rate for younger drivers aged 16-19 years is almost twice the rate reported for older drivers aged 25 and above, with 3.5 road traffic fatalities per annum for every 10,000 licenses held. Telematics refers to a system with the ability to capture real-time data about vehicle usage. The data collected from telematics can be used to better assess a driver's risk. It is typically used to measure acceleration, turning, braking, and speed, as well as to provide locational information. With the Australian government creating the National Telematics Framework, there has been an increase in the government's focus on using telematics data to improve road safety outcomes. The purpose of this study is to test the hypothesis that improvements in telematics-measured driving behaviour are related to improvements in road safety attitudes as measured by the Driving Behaviour Questionnaire (DBQ). Methodology: 28 participants were recruited and given a telematics device to install in their vehicles for the duration of the study. The participants' driving behaviour over the first month will be compared to their driving behaviour in the second month to determine whether feedback from telematics devices improves driving behaviour. Participants completed the DBQ, evaluated on a 6-point Likert scale (0 = never, 5 = nearly all the time), at the beginning of the study, after the first month, and after the second month. This is a well-established instrument used worldwide. Trends in the telematics data will be captured and correlated with the changes in the DBQ using regression models in SAS.
Results: The DBQ has provided a reliable measure (alpha = .823) of driving behaviour based on a sample of 23 participants, with an average of 50.5, a standard deviation of 11.36, and a range of 29 to 76, with higher scores indicating worse driving behaviours. This initial sample is well stratified in terms of gender and age (range 19-27). It is expected that in the next six weeks a larger sample of around 40 will have completed the DBQ after experiencing in-vehicle telematics for 30 days, allowing a comparison with baseline levels. The trends in the telematics data over the first 30 days will be compared with the changes observed in the DBQ. Conclusions: It is expected that there will be a significant relationship between the improvements in the DBQ and the trends of reduced telematics-measured aggressive driving behaviours, supporting the hypothesis.
Keywords: telematics, driving behavior, young drivers, driving behaviour questionnaire
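The reliability figure reported above (Cronbach's alpha = .823) is straightforward to compute from an item-score matrix. The sketch below uses synthetic Likert responses on a 0-5 scale; the item count and data are illustrative assumptions, not the actual DBQ responses.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - sum_item_var / total_var)

# Synthetic 0-5 Likert responses: a shared latent trait plus item noise,
# so the items are internally consistent by construction.
rng = np.random.default_rng(42)
latent = rng.normal(2.5, 1.0, size=(200, 1))
scores = np.clip(np.round(latent + rng.normal(0.0, 0.7, (200, 8))), 0, 5)
alpha = cronbach_alpha(scores)
```

Values above roughly 0.8, like the study's .823, are conventionally read as good internal consistency; alpha rises with both the inter-item correlation and the number of items.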
Procedia PDF Downloads 107
692 The Usage of Bridge Estimator for HEGY Seasonal Unit Root Tests
Authors: Huseyin Guler, Cigdem Kosar
Abstract:
The aim of this study is to propose the Bridge estimator for seasonal unit root tests. Seasonality is an important feature of many economic time series: some variables contain seasonal patterns, and forecasts that ignore important seasonal patterns have a high variance. Therefore, it is very important to account for seasonality in seasonal macroeconomic data. There are several methods to eliminate the impact of seasonality in time series. One of them is filtering the data. However, this method leads to undesired consequences in unit root tests, especially if the data are generated by a stochastic seasonal process. Another method to eliminate seasonality is using seasonal dummy variables. Some seasonal patterns result from stationary seasonal processes, which can be modelled using seasonal dummies, but if the seasonal pattern varies and changes over time, so that the seasonal process is non-stationary, deterministic seasonal dummies are inadequate to capture it. It is not suitable to use seasonal dummies for modelling such seasonally non-stationary series. Instead, it is necessary to take seasonal differences if there are seasonal unit roots in the series. Different methods have been proposed in the literature to test for seasonal unit roots, such as the Dickey, Hasza and Fuller (DHF) and Hylleberg, Engle, Granger and Yoo (HEGY) tests. The HEGY test can also be used to test for seasonal unit roots at different frequencies (monthly, quarterly, and semiannual). Another issue in unit root tests is lag selection. Lagged dependent variables are added to the model in seasonal unit root tests, as in ordinary unit root tests, to overcome the autocorrelation problem. In this case, it is necessary to choose the lag length and determine any deterministic components (i.e., a constant and trend) first, and then use the proper model to test for seasonal unit roots. However, this two-step procedure might lead to size distortions and a lack of power in seasonal unit root tests.
Recent studies show that Bridge estimators perform well in selecting the optimal lag length while differentiating nonstationary from stationary models for nonseasonal data. The advantage of this estimator is the elimination of the two-step nature of conventional unit root tests, which leads to a gain in size and power. In this paper, the Bridge estimator is proposed for testing seasonal unit roots in a HEGY model. A Monte Carlo experiment is conducted to determine the efficiency of this approach and to compare the size and power of this method with the HEGY test. Since the Bridge estimator performs well in model selection, our approach may lead to some gain in terms of size and power over the HEGY test.
Keywords: Bridge estimators, HEGY test, model selection, seasonal unit root
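For the quarterly case, the HEGY auxiliary regression is built from linear transforms of the series that isolate the zero-frequency, biannual, and annual unit roots. The sketch below constructs those regressors and fits the test regression by OLS on a simulated seasonal random walk; it illustrates only the standard HEGY setup, and the Bridge-penalized lag selection proposed in the paper is not implemented here.

```python
import numpy as np

def hegy_regressors(y: np.ndarray):
    """Quarterly HEGY transforms of a series y.

    y1 isolates the zero-frequency root, y2 the biannual root, y3 the
    annual root pair; d4 = (1 - L^4) y is the dependent variable.
    """
    y1 = y[3:] + y[2:-1] + y[1:-2] + y[:-3]      # (1 + L + L^2 + L^3) y
    y2 = -(y[3:] - y[2:-1] + y[1:-2] - y[:-3])   # -(1 - L + L^2 - L^3) y
    y3 = -(y[2:] - y[:-2])                       # -(1 - L^2) y
    d4 = y[4:] - y[:-4]
    return y1, y2, y3, d4

# Simulated quarterly seasonal random walk: y_t = y_{t-4} + e_t.
rng = np.random.default_rng(7)
e = rng.normal(0.0, 1.0, 204)
y = np.zeros(204)
for t in range(4, 204):
    y[t] = y[t - 4] + e[t]

y1, y2, y3, d4 = hegy_regressors(y)

# Test regression: d4_t on y1_{t-1}, y2_{t-1}, y3_{t-1}, y3_{t-2}
# (deterministic terms and lagged d4 terms omitted for brevity).
# The transforms are indexed so that element i of y1/y2 is time i+3,
# of y3 is time i+2, and of d4 is time i+4.
t = np.arange(4, 204)
X = np.column_stack([y1[t - 4], y2[t - 4], y3[t - 3], y3[t - 4]])
pi_hat, *_ = np.linalg.lstsq(X, d4[t - 4], rcond=None)
```

Under the null of seasonal unit roots at all frequencies, all four π coefficients are zero; the HEGY test evaluates t- and F-statistics on these coefficients against non-standard critical values.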
Procedia PDF Downloads 342
691 Modelling High Strain Rate Tear Open Behavior of a Bilaminate Consisting of Foam and Plastic Skin Considering Tensile Failure and Compression
Authors: Laura Pytel, Georg Baumann, Gregor Gstrein, Corina Klug
Abstract:
Premium cars often coat the instrument panel with a bilaminate consisting of a soft foam and a plastic skin. The coating is torn open at high strain rates during passenger airbag deployment. Characterizing and simulating the top coat layer is crucial for predicting the attenuation that delays airbag deployment, affecting the design of the restraint system, and for reducing the need to adjust simulations through expensive physical component testing. Up to now, bilaminates used in cars have been modelled either by using a two-dimensional shell formulation for the whole coating system as one layer, which misses the interaction of the two layers, or by combining a three-dimensional foam layer with a two-dimensional skin layer but omitting the foam in the significant parts, such as the expected tear line area and the hinge, where high compression is expected. In both cases, the properties of the coating causing the attenuation are not considered. Furthermore, the available material information is insufficient: the failure dependencies of the two layers are not characterized, and existing data cover strain rates only up to about 200 1/s. The velocity of the passenger airbag flap during an airbag shot was measured at about 11.5 m/s during first ripping; digital image correlation evaluation showed resulting strain rates above 1500 1/s. This paper provides a high strain rate material characterization of a bilaminate consisting of a thin polypropylene foam and a thermoplastic olefin (TPO) skin and the creation of validated material models. With the help of a split Hopkinson tension bar, strain rates of 1500 1/s were within reach. The experimental data were used to calibrate and validate a more physical modelling approach for the forced ripping of the bilaminate. In the presented model, the three-dimensional foam layer is continuously tied to the two-dimensional skin layer, allowing failure in both layers at any possible position.
The simulation results show closer agreement in terms of the trajectory of the flaps and their velocity during ripping. The resulting attenuation of the airbag deployment, measured by the contact force between airbag and flaps, increases and provides usable data for dimensioning the modules of an airbag system.
Keywords: bilaminate ripping behavior, high strain rate material characterization and modelling, induced material failure, TPO and foam
Procedia PDF Downloads 71
690 Commodifying Things Past: Comparative Study of Heritage Tourism Practices in Montenegro and Serbia
Authors: Jovana Vukcevic, Sanja Pekovic, Djurdjica Perovic, Tatjana Stanovcic
Abstract:
This paper presents a critical inquiry into the role of uncomfortable heritage in nation branding, with a particular focus on the specificities of the politics of memory, forgetting and revisionism in post-communist former Yugoslavia. It addresses legacies of unwanted, ambivalent or unacknowledged pasts and the different strategies employed by the former Yugoslav states and private actors in 'rebranding' their heritage, ensuring its preservation while re-contextualizing the narrative of the past through contemporary tourism practices. It questions the interplay between nostalgia, heritage and the market, and the role of heritage in polishing the history of totalitarian and authoritarian regimes in the Balkans. It argues that in post-socialist Yugoslavia, the necessity of limiting associations with the former ideology and the use of a commercial brush in shaping a marketable version of the past instigated the emergence of profit-oriented heritage practices. Building on that argument, the paper addresses these issues as the 'commodification' and 'disneyfication' of the Balkans' ambivalent heritage, contributing to the analysis of changing forms of memorialisation and heritagization practices in Europe. It questions the process of 'coming to terms with the past' through marketable forms of heritage tourism, blurring the boundary between market-driven nostalgia and state-imposed heritage policies. In order to analyse the plurality of ways of dealing with the controversial, ambivalent and unwanted heritage of dictatorships in the Balkans, the paper considers two prominent examples of heritage commodification in Serbia and Montenegro and the re-appropriation of those narratives for nation-branding purposes.
The first is the story of Tito's Blue Train, a landmark of the socialist past and a symbol of Yugoslavia that is nowadays used for birthday parties and wedding celebrations, while the second emphasises the unusual business arrangement turning the fortress of Mamula, a concentration camp during the Second World War, into a luxurious Mediterranean resort. Questioning how this 'uneasy' past was acknowledged and embedded into official heritage institutions and tourism practices, the study examines the changing relation towards the legacies of dictatorships, inviting us to rethink the economic models of things past. Analysis of these processes should contribute to a better understanding of new mnemonic strategies and (converging?) ways of 'doing' the past in Europe.
Keywords: commodification, heritage tourism, totalitarianism, Serbia, Montenegro
Procedia PDF Downloads 252
689 Challenges of School Leadership
Authors: Stefan Ninković
Abstract:
The main purpose of this paper is to examine different theoretical approaches and relevant empirical evidence and thus recognize some of the most pressing challenges faced by school leaders. The paper starts from the fact that the new mission of the school is characterized by the need for stronger coordination among students' academic, social and emotional learning. In this sense, school leaders need to focus their commitment, vision and leadership on issues of students' attitudes, language, cultural and social background, and sexual orientation. More specifically, they should know what good teaching is for at-risk students, students whose first language is not dominant in school, those whose learning styles do not match usual teaching styles, or those who are stigmatized. There is a rather wide consensus that the traditionally popular concept of the school principal's instructional leadership is no longer sufficient. However, in a number of "pro-leadership" circles, including certain groups of academic researchers, consultants and practitioners, there is an established tendency to attribute to the school principal an extraordinary influence on school achievement. On the other hand, a situation in which all employees in the school are leaders is a utopia par excellence. Although leadership can obviously be efficiently distributed across the school, there are few findings that speak to the sources of this distribution and the factors making it sustainable. Another idea that is not particularly new, but has only recently gained in importance, is that the collective capacity of the school is an important resource that often remains under-cultivated. To understand the nature and power of collaborative school cultures, it is necessary to know that they operate by making all of their members' tacit knowledge explicit.
In this sense, the question is how leaders in schools can shape a collaborative culture and create social capital in the school. The pressure exerted on schools to systematically collect and use data has been accompanied by the need for school leaders to develop new competencies. The role of school leaders is critical in assessing what data are needed and for what purpose. Different types of data are important: test results, data on student absenteeism, satisfaction with school, teacher motivation, etc. One of the most important tasks of school leaders is data-driven decision making, as well as ensuring the transparency of the decision-making process. Finally, the question arises whether existing models of school leadership are compatible with current social and economic trends. It is necessary to examine whether, and under what conditions, schools need forms of leadership different from those that currently prevail. Closely related to this issue is the analysis of the adequacy of different approaches to leadership development in schools.
Keywords: educational changes, leaders, leadership, school
Procedia PDF Downloads 336
688 Epidemiological and Clinical Characteristics of Five Rare Pathological Subtypes of Hepatocellular Carcinoma
Authors: Xiaoyuan Chen
Abstract:
Background: This study aimed to characterize the epidemiological and clinical features of five rare subtypes of hepatocellular carcinoma (HCC) and to create a competing-risk nomogram for predicting cancer-specific survival. Methods: This study used the Surveillance, Epidemiology, and End Results database to analyze the clinicopathological data of 50,218 patients with classic HCC and five rare subtypes (ICD-O-3 histology codes 8170/3-8175/3) between 2004 and 2018. The annual percent change (APC) was calculated using Joinpoint regression, and a nomogram was developed based on multivariable competing-risk survival analyses. The prognostic performance of the nomogram was evaluated using the Akaike information criterion, Bayesian information criterion, C-index, calibration curve, and area under the receiver operating characteristic curve. Decision curve analysis was used to assess the clinical value of the models. Results: The incidence of scirrhous carcinoma showed a decreasing trend (APC = -6.8%, P = 0.025), while the morbidity of the other rare subtypes remained stable from 2004 to 2018. Incidence-based mortality plateaued in all subtypes during this period. Clear cell carcinoma was the most common rare subtype (n = 551, 1.1%), followed by fibrolamellar (n = 241, 0.5%), scirrhous (n = 82, 0.2%), spindle cell (n = 61, 0.1%), and pleomorphic (n = 17, ~0%) carcinomas. Patients with fibrolamellar carcinoma were younger and more likely to have non-cirrhotic liver and better prognoses. Scirrhous carcinoma shared almost the same macro clinical characteristics and outcomes as classic HCC. Clear cell carcinoma tended to occur in the Asia-Pacific elderly male population, and more than half of these tumors were large HCC (size > 5 cm). Sarcomatoid (including spindle cell and pleomorphic) carcinoma was associated with larger tumor size, poorer differentiation, and more dismal prognoses.
The pathological subtype, T stage, M stage, surgery, alpha-fetoprotein, and cancer history were identified as independent predictors in patients with rare subtypes. The nomogram showed good calibration, discrimination, and net benefits in clinical practice. Conclusion: The rare subtypes of HCC had distinct clinicopathological features and biological behaviors compared with classic HCC. Our findings could provide a valuable reference for clinicians. The constructed nomogram could accurately predict prognoses, which is beneficial for individualized management.
Keywords: hepatocellular carcinoma, pathological subtype, fibrolamellar carcinoma, scirrhous carcinoma, clear cell carcinoma, spindle cell carcinoma, pleomorphic carcinoma
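The APC statistic used above has a simple closed form within any one Joinpoint segment: regress the log incidence rate on calendar year and transform the slope. As a minimal sketch (the function name and the synthetic incidence series are illustrative assumptions, not the SEER data used in the study):

```python
import numpy as np

def annual_percent_change(years, rates):
    """Annual percent change (APC) within one Joinpoint segment:
    fit ln(rate) = a + b * year by ordinary least squares, then
    APC = 100 * (exp(b) - 1)."""
    slope, _intercept = np.polyfit(years, np.log(rates), 1)
    return 100.0 * (np.exp(slope) - 1.0)

# Hypothetical incidence series declining 6.8% per year,
# mimicking the reported trend for scirrhous carcinoma
# (not the actual SEER data).
years = np.arange(2004, 2019)
rates = 0.30 * 0.932 ** (years - 2004)
apc = annual_percent_change(years, rates)  # close to -6.8
```

Full Joinpoint software additionally searches for the change points between segments; this sketch covers only the within-segment estimate.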
Procedia PDF Downloads 80
687 The Positive Impact of COVID-19 on the Level of Investments of U.S. Retail Investors: Evidence from a Quantitative Online Survey and Ordered Probit Analysis
Authors: Corina E. Niculaescu, Ivan Sangiorgi, Adrian R. Bell
Abstract:
The COVID-19 pandemic has been life-changing in many aspects of people’s daily and social lives, but has it also changed attitudes towards investments? This paper explores the effect of the COVID-19 pandemic on U.S. retail investors’ levels of investments during the first COVID-19 wave in summer 2020. This unprecedented health crisis could lead to changes in investment behavior, including irrational behavior among retail investors. As such, this study aims to inform policymakers of what happened to investment decisions during the COVID-19 pandemic so that they can protect retail investors during extreme events like a global health crisis. The study aims to answer two research questions. First, was the level of investments affected by the COVID-19 pandemic, and if so, why? Second, how were investments affected by retail investors’ personal experience with COVID-19? The analysis is based on primary survey data collected on the Amazon Mechanical Turk platform from a representative sample of U.S. respondents. Responses were collected between the 15th of July and the 28th of August 2020 from 1,148 U.S. retail investors who hold mutual fund investments and a savings account. The research explores whether being affected by COVID-19, a change in the level of savings, and risk capacity can explain changes in the level of investments, using regression analysis. The dependent variable is the change in investments, measured as a decrease, no change, or an increase; ordered probit regression models are therefore used. The results show that retail investors in the U.S. increased their investments during the first wave of COVID-19, which is unexpected, as investors are usually more cautious in times of crisis. Moreover, the study finds that those who were personally affected by COVID-19 (e.g., tested positive) were more likely to increase their investments, which is irrational behavior and contradicts expectations.
An increase in the level of savings and in risk capacity was also associated with increased investments. Overall, the findings show that personal experience with a health crisis can affect one’s investment decisions as well. These findings are important for both retail investors and policymakers, especially now that online trading platforms have made trading easily accessible to everyone. There are risks and potential irrational behaviors associated with investment decisions during times of crisis, and it is important that retail investors are aware of them before making financial decisions.
Keywords: COVID-19, financial decision-making, health crisis, retail investors, survey
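The ordered probit model used above maps a latent propensity to invest onto the three ordered outcomes via estimated cutpoints. A minimal sketch of the probability mechanics, with purely illustrative coefficients (not the paper’s estimates):

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def ordered_probit_probs(xb, cutpoints):
    """Outcome probabilities in an ordered probit model: a latent
    propensity y* = x'beta + e, e ~ N(0, 1), is cut into ordered
    categories (here: decrease / no change / increase) by the
    thresholds in `cutpoints`."""
    edges = [-math.inf] + list(cutpoints) + [math.inf]
    return [norm_cdf(edges[j + 1] - xb) - norm_cdf(edges[j] - xb)
            for j in range(len(edges) - 1)]

# Purely illustrative numbers: being personally affected by
# COVID-19 is assumed to shift the latent propensity to invest
# upward by 0.6 standard deviations.
cuts = (-0.5, 0.5)
probs_unaffected = ordered_probit_probs(0.0, cuts)
probs_affected = ordered_probit_probs(0.6, cuts)
```

With these hypothetical numbers, the probability of reporting an increase is higher for the affected group, matching the direction of the finding reported above.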
Procedia PDF Downloads 192
686 The Extent of Virgin Olive-Oil Prices' Distribution Revealing the Behavior of Market Speculators
Authors: Fathi Abid, Bilel Kaffel
Abstract:
The olive tree, the winter olive harvest, and the production of olive oil (better known to professionals as the crushing operation) have long interested institutional traders such as olive-oil offices, private food-industry companies refining and extracting pomace olive oil, and public and private export-import companies specializing in olive oil. Contrary to what might be expected, the major problem facing olive-oil producers in each winter campaign is not whether the harvest will be good, but whether the sale price will allow them to cover production costs and achieve a reasonable profit margin. These questions are entirely legitimate given the importance of the issue and the heavy complexity of the uncertainty, and of competition made tougher by high levels of indebtedness and by the experience and expertise of speculators and producers whose objectives are sometimes conflicting. The aim of this paper is to study the formation mechanism of olive oil prices in order to learn about speculators’ behavior and expectations in the market: how they contribute through their industry knowledge and financial alliances, and the size of the financial challenge involved in building private information sources globally to gain an advantage. The methodology proceeds in two stages. In the first stage, we study econometrically the formation mechanisms of the olive oil price in order to understand market participants’ behavior, implementing ARMA, SARMA and GARCH models as well as stochastic diffusion processes; the second stage is devoted to prediction, using a combined wavelet-ANN approach. Our main findings indicate that olive oil market participants interact with each other in a way that promotes the formation of stylized facts. Unstable participant behavior creates volatility clustering, nonlinear dependence and cyclicity.
By imitating each other in some periods of the campaign, different participants contribute to the fat tails observed in the olive oil price distribution. The best prediction model for the olive oil price is based on a back-propagation artificial neural network with inputs based on wavelet decomposition and recent past history.
Keywords: olive oil price, stylized facts, ARMA model, SARMA model, GARCH model, combined wavelet-artificial neural network, continuous-time stochastic volatility model
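The volatility clustering and fat tails identified above are exactly the stylized facts a GARCH(1,1) process reproduces. A small simulation sketch with hypothetical parameters (not fitted to olive-oil data):

```python
import numpy as np

def simulate_garch11(n, omega, alpha, beta, seed=0):
    """Simulate returns from a GARCH(1,1) process:
    sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2,
    r_t = sigma_t * z_t, z_t ~ N(0, 1). Requires alpha + beta < 1
    for a stationary variance."""
    rng = np.random.default_rng(seed)
    var = omega / (1.0 - alpha - beta)  # start at unconditional variance
    returns = np.empty(n)
    for t in range(n):
        returns[t] = np.sqrt(var) * rng.standard_normal()
        var = omega + alpha * returns[t] ** 2 + beta * var
    return returns

# Hypothetical parameters, chosen only to reproduce the stylized
# facts discussed above (not estimates from olive-oil prices).
r = simulate_garch11(5000, omega=0.05, alpha=0.10, beta=0.85)
# Volatility clustering: squared returns are positively autocorrelated.
clustering = float(np.corrcoef(r[:-1] ** 2, r[1:] ** 2)[0, 1])
```

A persistence close to one (alpha + beta = 0.95 here) produces long runs of calm and turbulent periods, the clustering pattern the abstract attributes to unstable participant behavior.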
Procedia PDF Downloads 340
685 Postfeminism, Femvertising and Inclusion: An Analysis of Changing Women's Representation in Contemporary Media
Authors: Saveria Capecchi
Abstract:
In this paper, the results of qualitative content research on postfeminist female representation in contemporary Western media (advertising, television series, films, social media) are presented. Female role models spectacularized in media culture are an important part of the development of social identities and could inspire new generations. Postfeminist cultural texts have given rise to heated debate between gender and media studies scholars. There are those who claim they are commercial products seeking to sell feminism to women: a feminism whose political and subversive role is completely distorted and tied to the commercial interests of the cosmetics, fashion, fitness and cosmetic surgery industries, in which women’s ‘power’ lies mainly in their power to seduce. There are those who consider them feminist manifestos because they represent independent ‘modern women’, free from male control, who aspire to achieve professionally and to overcome gender stereotypes like that of the ‘housewife-mother’. The major findings of the research show that feminist principles have been gradually absorbed by the cultural industry and adapted to its commercial needs, resulting in the dissemination of contradictory values. On the one hand, in line with feminist arguments, patriarchal ideology is condemned and the concepts of equality and equal opportunity between men and women are promoted. On the other hand, feminist principles and demands are ascribed to individualism, which translates into the slogan: women are free to decide for themselves, even to objectify their own bodies. In particular, it is observed that the femvertising trend in the media industry is changing female representation, moving it away from classic stereotypes: the feminine beauty ideal of slenderness, emphasized in the media since the seventies, is ultimately challenged by the ‘curvy’ body model, which is considered more inclusive and based on the concept of ‘natural beauty’.
Another aspect of change is the ‘anti-romantic’ revolution performed by some heroines of television drama and the film industry who are not in search of Prince Charming. In conclusion, although femvertising tends to simplify and trivialize the concepts characterizing fourth-wave feminism (‘intersectionality’ and ‘inclusion’), it is also a tendency that enables the challenging of media imagery largely based on male viewpoints, interests and desires.
Keywords: feminine beauty ideal, femvertising, gender and media, postfeminism
Procedia PDF Downloads 153
684 Experimental Study Analyzing the Similarity Theory Formulations for the Effect of Aerodynamic Roughness Length on Turbulence Length Scales in the Atmospheric Surface Layer
Authors: Matthew J. Emes, Azadeh Jafari, Maziar Arjomandi
Abstract:
Velocity fluctuations of shear-generated turbulence are largest in the atmospheric surface layer (ASL), of nominally 100 m depth, and can lead to dynamic effects such as galloping and flutter on small physical structures on the ground when the turbulence length scales and the characteristic length of the structure are of the same order of magnitude. Turbulence length scales are a measure of the average size of the energy-containing eddies; they are widely estimated using two-point cross-correlation analysis, converting the temporal lag to a separation distance via Taylor’s hypothesis that the convection velocity is equal to the mean velocity at the corresponding height. Profiles of turbulence length scales in the neutrally stratified ASL, as predicted by Monin-Obukhov similarity theory in Engineering Sciences Data Unit (ESDU) 85020 for single-point data and ESDU 86010 for two-point correlations, are largely dependent on the aerodynamic roughness length. Field measurements have shown that longitudinal turbulence length scales exhibit significant regional variation, whereas length scales of the vertical component show consistent Obukhov scaling from site to site because of the absence of low-frequency components. Hence, the objective of this experimental study is to compare the similarity-theory relationships between turbulence length scales and aerodynamic roughness length with those calculated from the autocorrelations and cross-correlations of field-measured velocity data at two sites: the Surface Layer Turbulence and Environmental Science Test (SLTEST) facility in a desert ASL in Dugway, Utah, USA, and the Commonwealth Scientific and Industrial Research Organisation (CSIRO) wind tower in a rural ASL in Jemalong, NSW, Australia. The results indicate that the longitudinal turbulence length scales increase with increasing aerodynamic roughness length, in contrast to the relationships derived from the similarity-theory correlations in the ESDU models.
However, the ratio of the turbulence length scales in the lateral and vertical directions to the longitudinal length scales is relatively independent of surface roughness, showing consistent inner scaling between the two sites and the ESDU correlations. Further, the diurnal variation of wind velocity due to changes in atmospheric stability conditions has a significant effect on the turbulence structure of the energy-containing eddies in the lower ASL.
Keywords: aerodynamic roughness length, atmospheric surface layer, similarity theory, turbulence length scales
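The single-point estimation procedure described above (autocorrelation of the velocity record, then Taylor’s frozen-turbulence hypothesis) can be sketched as follows. The synthetic exponentially correlated velocity record and all parameter values are illustrative assumptions, not the SLTEST or Jemalong measurements:

```python
import numpy as np

def integral_length_scale(u, mean_speed, dt, max_lag=1500):
    """Estimate the longitudinal integral length scale from a
    single-point velocity record: integrate the autocorrelation
    of the fluctuations up to its first zero crossing to get the
    integral time scale, then apply Taylor's hypothesis
    L = U * T."""
    fluct = u - np.mean(u)
    n = len(fluct)
    var = np.mean(fluct ** 2)
    acf = np.array([np.mean(fluct[:n - k] * fluct[k:])
                    for k in range(max_lag)]) / var
    # integrate the autocorrelation up to its first zero crossing
    zero = int(np.argmax(acf <= 0.0)) if np.any(acf <= 0.0) else max_lag
    integral_time = float(np.sum(acf[:zero])) * dt  # rectangle rule
    return mean_speed * integral_time

# Synthetic record: an AR(1) (exponentially correlated) process
# with a true integral time scale tau = 1 s, carried past the
# sensor at U = 10 m/s, so the true length scale is ~10 m.
rng = np.random.default_rng(1)
dt, tau, n = 0.01, 1.0, 100_000
phi = np.exp(-dt / tau)
eps = rng.standard_normal(n) * np.sqrt(1.0 - phi ** 2)
fluct = np.empty(n)
fluct[0] = eps[0]
for t in range(1, n):
    fluct[t] = phi * fluct[t - 1] + eps[t]
L = integral_length_scale(10.0 + fluct, 10.0, dt)
```

Truncating the integral at the first zero crossing is one common convention; fitting an exponential to the autocorrelation is a frequently used alternative when low-frequency noise contaminates the tail.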
Procedia PDF Downloads 124
683 Efficient Estimation of Maximum Theoretical Productivity from Batch Cultures via Dynamic Optimization of Flux Balance Models
Authors: Peter C. St. John, Michael F. Crowley, Yannick J. Bomble
Abstract:
Production of chemicals from engineered organisms in a batch culture typically involves a trade-off between productivity, yield, and titer. However, strategies for strain design typically involve designing mutations to achieve the highest yield possible while maintaining growth viability. Such approaches tend to follow the principle of designing static networks with minimum metabolic functionality to achieve desired yields. While these methods are computationally tractable, optimum productivity is likely achieved by a dynamic strategy, in which intracellular fluxes change their distribution over time. One can use multi-stage fermentations to increase either productivity or yield. Such strategies range from simple manipulations (an aerobic growth phase followed by an anaerobic production phase) to more complex genetic toggle switches. Computational methods can also be developed to aid in optimizing two-stage fermentation systems. One can assume an initial control strategy (i.e., a single reaction target) in maximizing productivity, but it is unclear how close this productivity would come to a global optimum. The calculation of maximum theoretical yield in metabolic engineering can help guide strain and pathway selection for static strain-design efforts. Here, we present a method for calculating the maximum theoretical productivity of a batch culture system. This method follows the traditional assumptions of dynamic flux balance analysis: internal metabolite fluxes are governed by a pseudo-steady state, and external metabolite fluxes are represented by a dynamic system including Michaelis-Menten or Hill-type regulation. The productivity optimization is achieved via dynamic programming and accounts explicitly for an arbitrary number of fermentation stages and flux-variable changes. We have applied our method to succinate production in two common microbial hosts: E. coli and A. succinogenes.
The method can be further extended to calculate the complete productivity-versus-yield Pareto surface. Our results demonstrate that nearly optimal yields and productivities can indeed be achieved with only two discrete flux stages.
Keywords: A. succinogenes, E. coli, metabolic engineering, metabolite fluxes, multi-stage fermentations, succinate
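Why a two-stage flux schedule can beat a static one is easy to see in a deliberately simplified toy model (all kinetics and parameters below are assumptions for illustration, not the paper’s dynamic FBA formulation): an all-growth phase followed by an all-production phase, with the switch time chosen to maximize titer per batch time:

```python
import math

def batch_productivity(switch_time, total_time, mu, qp, x0):
    """Toy two-stage batch: an all-growth phase (dX/dt = mu * X)
    until `switch_time`, then an all-production phase
    (dP/dt = qp * X). Returns final titer divided by batch time."""
    biomass = x0 * math.exp(mu * switch_time)
    titer = qp * biomass * (total_time - switch_time)
    return titer / total_time

# Illustrative parameters (growth rate mu in 1/h, specific
# production rate qp, inoculum x0, batch length T in h).
mu, qp, x0, T = 0.5, 1.0, 0.05, 20.0
# Scan candidate switch times on a 0.1 h grid; for this toy model
# the analytic optimum is t_s = T - 1/mu.
best_ts = max((k / 10.0 for k in range(0, 201)),
              key=lambda t: batch_productivity(t, T, mu, qp, x0))
```

With these parameters the scan recovers the analytic optimum t_s = T - 1/mu = 18 h, and the two-stage schedule far outperforms producing from the start (switch_time = 0), the qualitative point the abstract makes about dynamic strategies.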
Procedia PDF Downloads 217