Search results for: circular business models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10070

830 The Superior Performance of Investment Bank-Affiliated Mutual Funds

Authors: Michelo Obrey

Abstract:

Traditionally, mutual funds in the U.S. have long been regarded as stand-alone entities. However, the growing affiliation of fund families with financial conglomerates is eroding this striking feature. A mutual fund family's affiliation with a financial conglomerate can be an important source of either superior performance or cost to the affiliated fund's investors. On the one hand, conglomerate affiliation offers mutual funds access to abundant resources, better research quality, private material information, and business connections within the financial group. On the other hand, conflicts of interest are bound to arise between the conglomerate relationship and fund management. Using a sample of U.S. domestic equity mutual funds from 1994 to 2017, this paper examines whether fund family affiliation with an investment bank helps the affiliated mutual funds deliver superior performance through the private material information possessed by the investment bank, or whether it costs affiliated mutual fund shareholders due to conflicts of interest. Robust to alternative risk adjustments and cross-sectional regression methodologies, this paper finds that investment bank-affiliated mutual funds significantly outperform mutual funds that are not affiliated with an investment bank. Interestingly, the outperformance is confined to the holding return, a return measure that captures investment talent uninfluenced by transaction costs, fees, and other expenses. Further analysis shows that investment bank-affiliated mutual funds specialize in hard-to-value stocks, which are no more likely to be held by unaffiliated funds. Consistent with the information advantage hypothesis, the paper finds that affiliated funds holding covered stocks outperform affiliated funds without covered stocks, lending no support to the hypothesis that affiliated mutual funds merely attract superior stock-picking talent.
Overall, the paper's findings are consistent with the idea that investment banks maximize fee income by monopolistically exploiting their private information, strategically transferring performance to their affiliated mutual funds. This paper contributes to the extant literature on the agency problem in mutual fund families. It adds to this stream of research by showing that the agency problem is prevalent not only in fund families but also in financial organizations, such as investment banks, that have affiliated mutual fund families. The results show evidence of the exploitation of synergies, such as private material information sharing, that benefit mutual fund investors due to affiliation with a financial conglomerate. The research also has a normative dimension: allowing such incestuous behavior, namely insider trading and the exploitation of superior information, not only harms unaffiliated fund investors but also leads to an unfair and unlevel playing field in the financial market.
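The "alternative risk adjustments" used to establish outperformance are typically factor-model alphas estimated by regression. As a minimal sketch only, not the paper's actual specification or data, a Carhart-style four-factor alpha can be recovered by ordinary least squares; the factor returns and loadings below are synthetic:

```python
def ols(y, X):
    """Ordinary least squares via the normal equations (X'X) b = X'y."""
    n, k = len(y), len(X[0])
    XtX = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(k)] for a in range(k)]
    Xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(k)]
    # Gaussian elimination with partial pivoting
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(XtX[r][col]))
        XtX[col], XtX[piv] = XtX[piv], XtX[col]
        Xty[col], Xty[piv] = Xty[piv], Xty[col]
        for r in range(col + 1, k):
            f = XtX[r][col] / XtX[col][col]
            for c in range(col, k):
                XtX[r][c] -= f * XtX[col][c]
            Xty[r] -= f * Xty[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (Xty[r] - sum(XtX[r][c] * beta[c] for c in range(r + 1, k))) / XtX[r][r]
    return beta

# Synthetic monthly fund excess returns regressed on a constant and
# four factors (MKT, SMB, HML, MOM); beta[0] is the alpha.
X = [
    [1.0,  0.010,  0.002, -0.001,  0.003],
    [1.0, -0.020,  0.004,  0.002, -0.001],
    [1.0,  0.015, -0.003,  0.001,  0.002],
    [1.0,  0.005,  0.001,  0.004, -0.004],
    [1.0, -0.010, -0.002, -0.003,  0.001],
    [1.0,  0.020,  0.005,  0.002,  0.000],
]
true_beta = [0.002, 1.0, 0.2, -0.1, 0.05]  # alpha of 20 bp/month, invented
y = [sum(b * x for b, x in zip(true_beta, row)) for row in X]
alpha = ols(y, X)[0]
```

On this noise-free synthetic sample the regression recovers the planted alpha; with real fund returns the estimate would of course carry a standard error.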

Keywords: mutual fund performance, conflicts of interest, informational advantage, investment bank

Procedia PDF Downloads 189
829 Developing a Maturity Model of Digital Twin Application for Infrastructure Asset Management

Authors: Qingqing Feng, S. Thomas Ng, Frank J. Xu, Jiduo Xing

Abstract:

Faced with unprecedented challenges, including aging assets, lack of maintenance budget, overtaxed and inefficient usage, and public outcry for better service quality, today's infrastructure systems have become the main focus of many metropolises pursuing sustainable urban development and improved resilience. Digital twin, one of the most innovative enabling technologies today, may open up new ways of tackling various infrastructure asset management (IAM) problems. A digital twin application for IAM, as its name indicates, is an evolving digital model of the intended infrastructure whose functions include real-time monitoring; what-if event simulation; and scheduling, maintenance, and management optimization, based on technologies such as IoT, big data, and AI. Numerous global digital twin initiatives already exist, such as 'Virtual Singapore' and 'Digital Built Britain'. With digital twin technology progressively permeating the IAM field, it is necessary to consider the maturity of the application and how institutional or industrial digital twin application processes will evolve in the future. To address the lack of such a benchmark, a draft maturity model is developed for digital twin application in the IAM field. First, an overview of current smart city maturity models is given, based on which the draft Maturity Model of Digital Twin Application for Infrastructure Asset Management (MM-DTIAM) is developed for multiple stakeholders to evaluate and make informed decisions. The development process follows a systematic approach with four major procedures, namely scoping, designing, populating, and testing. Through in-depth literature review, interviews, and focus group meetings, the key domain areas are populated, defined, and iteratively tuned. Finally, case studies of several digital twin projects are conducted for self-verification.
The findings of the research reveal that: (i) the developed maturity model outlines five maturing levels leading to an optimised digital twin application from the aspects of strategic intent, data, technology, governance, and stakeholders’ engagement; (ii) based on the case study, levels 1 to 3 are already partially implemented in some initiatives while level 4 is on the way; and (iii) more practices are still needed to refine the draft to be mutually exclusive and collectively exhaustive in key domain areas.

Keywords: digital twin, infrastructure asset management, maturity model, smart city

Procedia PDF Downloads 157
828 A Qualitative Study Identifying the Complexities of Early Childhood Professionals' Use and Production of Data

Authors: Sara Bonetti

Abstract:

The use of quantitative data to support policies and justify investments has become imperative in many fields including the field of education. However, the topic of data literacy has only marginally touched the early care and education (ECE) field. In California, within the ECE workforce, there is a group of professionals working in policy and advocacy that use quantitative data regularly and whose educational and professional experiences have been neglected by existing research. This study aimed at analyzing these experiences in accessing, using, and producing quantitative data. This study utilized semi-structured interviews to capture the differences in educational and professional backgrounds, policy contexts, and power relations. The participants were three key professionals from county-level organizations and one working at a State Department to allow for a broader perspective at systems level. The study followed Núñez’s multilevel model of intersectionality. The key in Núñez’s model is the intersection of multiple levels of analysis and influence, from the individual to the system level, and the identification of institutional power dynamics that perpetuate the marginalization of certain groups within society. In a similar manner, this study looked at the dynamic interaction of different influences at individual, organizational, and system levels that might intersect and affect ECE professionals’ experiences with quantitative data. At the individual level, an important element identified was the participants’ educational background, as it was possible to observe a relationship between that and their positionality, both with respect to working with data and also with respect to their power within an organization and at the policy table. 
For example, those with a background in child development were aware of how their formal education had failed to train them in the skills necessary to work in policy and advocacy, and especially to work with quantitative data, compared to those with a background in administration and/or business. At the organizational level, the interviews showed a connection between the participants' position within the organization, their organization's position with respect to others, and their degree of access to quantitative data. This in turn affected their sense of empowerment and agency in dealing with data, such as shaping what data is collected and made available. These differences were reflected in the interviewees' perceptions of and expectations for the ECE workforce. For example, one interviewee pointed out that many ECE professionals happen to use data only out of the necessity of the moment. This lack of intentionality is a cause of, and at the same time translates into, missed training opportunities. Another interviewee pointed out issues related to the professionalism of the ECE workforce by remarking on the inadequacy of ECE students' training in working with data. In conclusion, Núñez's model helped in understanding the different elements that affect ECE professionals' experiences with quantitative data. In particular, it became clear that these professionals are not being provided with the necessary support and that the field is not being intentional in building data literacy skills for them, despite what is asked of them in their work.

Keywords: data literacy, early childhood professionals, intersectionality, quantitative data

Procedia PDF Downloads 253
827 Advancing Women's Participation in SIDS' Renewable Energy Sector: A Multicriteria Evaluation Framework

Authors: Carolina Mayen Huerta, Clara Ivanescu, Paloma Marcos

Abstract:

Due to their unique geographic challenges and the imperative to combat climate change, Small Island Developing States (SIDS) are experiencing rapid growth in the renewable energy (RE) sector. However, women's representation in formal employment within this burgeoning field remains significantly lower than that of their male counterparts. Conventional methodologies often overlook critical geographic data that influence women's job prospects. To address this gap, this paper introduces a Multicriteria Evaluation (MCE) framework designed to identify spatially enabling environments and restrictions affecting women's access to formal employment and business opportunities in the SIDS' RE sector. The proposed MCE framework comprises 24 key factors categorized into four dimensions: Individual, Contextual, Accessibility, and Place Characterization. "Individual factors" encompass personal attributes influencing women's career development, including caregiving responsibilities, exposure to domestic violence, and disparities in education. "Contextual factors" pertain to the legal and policy environment, influencing workplace gender discrimination, financial autonomy, and overall gender empowerment. "Accessibility factors" evaluate women's day-to-day mobility, considering travel patterns, access to public transport, educational facilities, RE job opportunities, healthcare facilities, and financial services. Finally, "Place Characterization factors" encompass attributes of geographical locations or environments, including walkability, public transport availability, safety, electricity access, digital inclusion, fragility, conflict, violence, water and sanitation, and climatic factors in specific regions. The analytical framework proposed in this paper incorporates a spatial methodology to visualize regions within countries where conducive environments for women to access RE jobs exist.
In areas where these environments are absent, the methodology serves as a decision-making tool to reinforce critical factors, such as transportation, education, and internet access, which currently hinder access to employment opportunities. This approach is designed to equip policymakers and institutions with data-driven insights, enabling them to make evidence-based decisions that consider the geographic dimensions of disparity. These insights, in turn, can help ensure the efficient allocation of resources to achieve gender equity objectives.
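A multicriteria evaluation of this kind is, at its core, a weighted overlay: each grid cell receives a suitability score as a weighted combination of normalized factor layers. The sketch below is illustrative only; the four layers echo the framework's dimensions, but the cell values and equal weights are invented, not the paper's calibration:

```python
def normalize(values):
    """Min-max rescale a factor layer to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def mce_score(layers, weights):
    """Weighted linear combination of normalized factor layers, per cell."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    norm = [normalize(layer) for layer in layers]
    n_cells = len(layers[0])
    return [sum(w * norm[i][c] for i, w in enumerate(weights))
            for c in range(n_cells)]

# Four illustrative dimension layers over five grid cells; higher = more
# enabling. The accessibility layer is assumed pre-oriented the same way.
individual    = [0.2, 0.5, 0.9, 0.4, 0.7]
contextual    = [0.8, 0.6, 0.3, 0.9, 0.5]
accessibility = [10, 40, 80, 20, 60]
place         = [0.3, 0.7, 0.6, 0.2, 0.9]

scores = mce_score([individual, contextual, accessibility, place],
                   [0.25, 0.25, 0.25, 0.25])
```

In a GIS setting each "layer" would be a raster band and the scores a suitability surface; the structure of the computation is the same.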

Keywords: gender, women, spatial analysis, renewable energy, access

Procedia PDF Downloads 69
825 Molecular Dynamics Simulations on Richtmyer-Meshkov Instability of Li-H2 Interface at Ultra High-Speed Shock Loads

Authors: Weirong Wang, Shenghong Huang, Xisheng Luo, Zhenyu Li

Abstract:

Material mixing processes and related dynamic issues under extreme compression conditions have attracted growing attention over the last ten years because of their engineering appeal in inertial confinement fusion (ICF) and hypervelocity aircraft development. However, models and methods that can handle fully coupled turbulent material mixing and complex fluid evolution in the high-energy-density regime are still lacking. In terms of macroscopic hydrodynamics, three numerical approaches, direct numerical simulation (DNS), large eddy simulation (LES), and the Reynolds-averaged Navier–Stokes equations (RANS), have achieved relatively broad consensus in the low-energy-density regime. In the high-energy-density regime, however, they cannot be applied directly owing to dissociation, ionization, dramatic changes in the equation of state and thermodynamic properties, etc., which may invalidate the governing equations in some coupled situations. At the micro/meso scale, on the other hand, methods based on Molecular Dynamics (MD) as well as Monte Carlo (MC) models have proved to be promising and effective ways to investigate such issues. In this study, both classical MD and first-principles-based electron force field MD (eFF-MD) methods are applied to investigate the Richtmyer-Meshkov instability (RMI) of a metal lithium and gaseous hydrogen (Li-H2) interface at shock loading speeds ranging from 3 km/s to 30 km/s. It is found that: 1) the classical MD method, based on predefined potential functions, has limits in application to extreme conditions, since it cannot simulate the ionization process and its potential functions are not suitable for all conditions, while the eFF-MD method correctly simulates the ionization process owing to its 'ab initio' character; 2) due to computational cost, the eFF-MD results are also influenced by the choices of simulation domain dimensions, boundary conditions, relaxation times, etc.
A series of tests has been conducted to determine the optimized parameters; and 3) ionization induced by strong shock compression has important effects on the evolution of the Li-H2 interface during RMI, indicating a new micromechanism of RMI in the high-energy-density regime.
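For readers unfamiliar with the classical MD machinery whose limits the study probes, the core of a predefined-potential simulation is a pairwise force law integrated with a symplectic scheme such as velocity Verlet. The sketch below uses a Lennard-Jones pair in reduced units; it is purely illustrative and is not the eFF-MD method or the Li-H2 potentials used in the study:

```python
def lj_force(r, eps=1.0, sigma=1.0):
    """Lennard-Jones force along the separation axis (positive = repulsive)."""
    sr6 = (sigma / r) ** 6
    return 24.0 * eps * (2.0 * sr6 * sr6 - sr6) / r

def lj_energy(r, eps=1.0, sigma=1.0):
    """Lennard-Jones pair potential energy."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6 * sr6 - sr6)

def velocity_verlet(x1, x2, v1, v2, dt, steps, m=1.0):
    """Two particles on a line integrated with the velocity Verlet scheme."""
    f = lj_force(x2 - x1)           # force on particle 2; particle 1 feels -f
    for _ in range(steps):
        v1 -= f * dt / (2.0 * m)    # first half-kick
        v2 += f * dt / (2.0 * m)
        x1 += v1 * dt               # drift
        x2 += v2 * dt
        f = lj_force(x2 - x1)       # force at the new positions
        v1 -= f * dt / (2.0 * m)    # second half-kick
        v2 += f * dt / (2.0 * m)
    return x1, x2, v1, v2

# Two particles released at rest beyond the potential minimum oscillate
# in the well; total energy and momentum should be conserved.
x1, x2, v1, v2 = velocity_verlet(0.0, 1.5, 0.0, 0.0, dt=0.001, steps=2000)
energy = lj_energy(x2 - x1) + 0.5 * (v1 * v1 + v2 * v2)
```

The point of the scheme, and the reason it underlies production MD codes, is that energy stays bounded over long runs rather than drifting; everything physical then hinges on the potential, which is exactly where predefined-potential MD breaks down once ionization matters.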

Keywords: first-principle, ionization, molecular dynamics, material mixture, Richtmyer-Meshkov instability

Procedia PDF Downloads 225
824 Comparisons between Students' Learning Achievements and Their Problem Solving Skills on the Stoichiometry Issue with the Think-Pair-Share Model and STEM Education Method

Authors: P. Thachitasing, N. Jansawang, W. Rakrai, T. Santiboon

Abstract:

The aim of this study is to compare instructional design models, the Think-Pair-Share process and the conventional learning (5E Inquiry Model) process, in enhancing students' learning achievements and problem-solving skills on the stoichiometry issue. The sample consisted of 80 students in two classes at the 11th grade level at Chaturaphak Phiman Ratchadaphisek School, drawn with the cluster random sampling technique to capture students' different learning outcomes in chemistry classes. The instruction was designed with a 40-student experimental group taught through the Think-Pair-Share process and a 40-student control group taught through the conventional learning (5E Inquiry Model) method. Five instruments were used, including the five lesson plans for the Think-Pair-Share and STEM Education methods. Students' learning achievements and problem-solving skills were assessed with pretest and posttest techniques, and students' outcomes under the Think-Pair-Share Model (TPSM) and the STEM Education method were compared. Statistically significant differences between the posttest and pretest scores of the whole group of students in the chemistry classes were found with the paired t-test and F-test. Associations between students' learning outcomes in chemistry under the two methods and their learning achievements and problem-solving skills were also found. The use of the two methods reveals that students' learning achievements and problem-solving skills differ between the groups, guiding practical improvements in chemistry classrooms to assist teachers in implementing effective instructional approaches.
The mean learning achievement scores of the control group taught with the Think-Pair-Share Model (TPSM) are significantly lower than those of the experimental group taught with the STEM Education method. The E1/E2 process efficiencies were 82.56/80.44 and 83.02/81.65, which are higher than the 80/80 standard criterion, based on the IOC. The predictive efficiency (R²) values indicate that 61% and 67% of the variance in posttest learning achievements in the chemistry classes, and 63% and 67% of the variance in students' problem-solving skills in relation to their learning achievements on the stoichiometry issue, were attributable to the different learning outcomes under the TPSM and STEM education instructional methods.
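The E1/E2 criterion cited above (e.g., 82.56/80.44 against the 80/80 standard) is simply the mean percentage score on in-process exercises (E1) and on the posttest (E2). A sketch with invented scores:

```python
def efficiency(scores, full_marks):
    """Mean score as a percentage of full marks."""
    return 100.0 * sum(scores) / (len(scores) * full_marks)

# Invented data for five students: in-process exercise totals out of 50,
# posttest totals out of 40.
process_scores  = [41, 44, 38, 45, 40]
posttest_scores = [33, 31, 35, 30, 34]

e1 = efficiency(process_scores, 50)    # 83.2 on this toy data
e2 = efficiency(posttest_scores, 40)   # 81.5 on this toy data
meets_standard = e1 >= 80 and e2 >= 80
```

A lesson sequence "meets the 80/80 standard" when both percentages clear 80, as the reported 82.56/80.44 and 83.02/81.65 values do.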

Keywords: comparisons, students’ learning achievements, think-pair-share model (TPSM), STEM education, problem solving skills, chemistry classes, stoichiometry issue

Procedia PDF Downloads 249
823 Morphological and Molecular Evaluation of Dengue Virus Serotype 3 Infection in BALB/c Mice Lungs

Authors: Gabriela C. Caldas, Fernanda C. Jacome, Arthur da C. Rasinhas, Ortrud M. Barth, Flavia B. dos Santos, Priscila C. G. Nunes, Yuli R. M. de Souza, Pedro Paulo de A. Manso, Marcelo P. Machado, Debora F. Barreto-Vieira

Abstract:

The establishment of animal models for studies of DENV infection has been challenging, since circulating epidemic viruses do not naturally infect nonhuman species. Such studies are of great relevance to various areas of dengue research, including immunopathogenesis, drug development, and vaccines. In this scenario, the main objective of this study is to verify possible morphological changes, as well as the presence of antigens and viral RNA, in lung samples from BALB/c mice experimentally infected with an epidemic, non-neuroadapted DENV-3 strain. Male BALB/c mice, 2 months old, were inoculated with DENV-3 by the intravenous route. After 72 hours of infection, the animals were euthanized and the lungs were collected. Part of the samples was processed by standard techniques for analysis by light and transmission electron microscopy, and another part was processed for real-time PCR analysis. Morphological analyses of lungs from uninfected mice showed preserved tissue. In mice infected with DENV-3, the analyses revealed thickening of the interalveolar septa with inflammatory infiltrate, foci of alveolar atelectasis and hyperventilation, bleeding foci in the interalveolar septa and bronchioles, peripheral capillary congestion, accumulation of fluid in the blood capillaries, signs of interstitial cell necrosis, and the presence of platelets and mononuclear inflammatory cells circulating in the capillaries and/or adhered to the endothelium. In addition, activation of endothelial cells, platelets, mononuclear inflammatory cells, and neutrophil-type polymorphonuclear inflammatory cells, evidenced by the emission of cytoplasmic membrane prolongations, was observed. DENV-like particles were seen in the cytoplasm of endothelial cells. The viral genome was recovered from 3 of 12 lung samples.
These results demonstrate that the BALB/c mouse represents a suitable model for the study of the histopathological changes induced by DENV infection in the lung, with tissue alterations similar to those observed in human cases of dengue.

Keywords: BALB/c mice, dengue, histopathology, lung, ultrastructure

Procedia PDF Downloads 253
822 Profitability and Productivity Performance of the Selected Public Sector Banks in India

Authors: Sudipto Jana

Abstract:

Background and significance of the study: The banking industry acts as a catalyst for industrial and agricultural growth and also bears on the livelihood and welfare of citizens. The banking system in India was characterized by unmatched growth and extensive branch expansion in the pre-liberalization era. At the time of the financial sector reforms, the Reserve Bank of India issued regulatory norms concerning capital adequacy, income recognition, asset classification, and provisioning that have progressively converged with international best practices. Bank management continually monitors the success, effectiveness, productivity, and performance of the bank, since good performance, high productivity, and efficiency underpin the attainment of management targets and the bank's aims. Similarly, the performance of any economy depends on the efficiency and effectiveness of its financial system, which shapes the nation's economic growth indicators. Profitability and productivity are among the most relevant parameters for any banking group. In view of this, this study examines the profitability and productivity performance of selected public sector banks in India. Methodology: This study is based on secondary data obtained from the Reserve Bank of India database for the period between 2006 and 2015. It purposively selects four commercial banks, namely, State Bank of India, United Bank of India, Punjab National Bank, and Allahabad Bank. In order to analyze performance in relation to profitability and productivity, productivity performance indicators in terms of capital adequacy ratio, burden ratio, business per employee, spread per employee, and advances per employee, and profitability performance indicators in terms of return on assets, return on equity, return on advances, and return on branch, have been considered.
In the course of the analysis, descriptive statistics, correlation statistics, and multiple regression have been used. Major findings: Descriptive statistics indicate that the productivity performance of State Bank of India is more satisfactory than that of the other public sector banks in India, but the management of productivity is unsatisfactory for all the public sector banks under study. Correlation statistics point out that profitability is strongly positively related to productivity performance for all the public sector banks under study. Multiple regression results show that as profitability increases, profit per employee increases and net non-performing assets decrease. Concluding statements: The productivity and profitability performance of United Bank of India, Allahabad Bank, and Punjab National Bank is unsatisfactory due to poor management of asset quality as well as poor management efficiency. Government intervention is needed so that their profitability and productivity performance improve in the near future.
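The correlation step of the analysis amounts to Pearson's r between a profitability indicator and a productivity indicator. A sketch with invented yearly values (not RBI data) for, say, return on assets against business per employee:

```python
def pearson_r(x, y):
    """Pearson product-moment correlation coefficient of two series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Ten invented yearly observations for one bank:
roa = [0.9, 1.0, 0.95, 1.1, 1.05, 0.88, 0.8, 0.7, 0.65, 0.6]   # return on assets, %
bpe = [4.1, 4.5, 4.4, 5.0, 4.9, 4.2, 3.9, 3.6, 3.4, 3.2]       # business per employee

r = pearson_r(roa, bpe)   # strongly positive on this toy series
```

A value of r near +1, as in the toy series above, is the kind of "strongly positively related" result the findings describe; the multiple-regression step extends the same idea to several productivity indicators at once.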

Keywords: India, productivity, profitability, public sector banks

Procedia PDF Downloads 430
821 Assessment of Sediment Control Characteristics of Notches in Different Sediment Transport Regimes

Authors: Chih Ming Tseng

Abstract:

Landslides during typhoons generate substantial amounts of sediment, and subsequent rainfall can trigger various types of sediment transport regimes, such as debris flows, high-concentration sediment-laden flows, and typical river sediment transport. This study aims to investigate the sediment control characteristics of natural notches within different sediment transport regimes. High-resolution digital terrain models were used to establish the relationship between slope gradients and catchment areas, which was then used to delineate distinct sediment transport regimes and analyze the sediment control characteristics of notches within these regimes. The results indicate that the catchment areas of Aiyuzi Creek, Hossa Creek, and Chushui Creek in the study region can be clearly categorized into three sediment transport regimes based on the slope-area relationship curves: frequent-collapse headwater areas, debris flow zones, and high-concentration sediment-laden flow zones. The threshold for transitioning from the collapse zone to the debris flow zone in the Aiyuzi Creek catchment is lower than in Hossa Creek and Chushui Creek, suggesting that the active collapse processes in the upper reaches of Aiyuzi Creek continuously supply a significant sediment source, making it more susceptible to subsequent debris flow events. Moreover, the analysis of sediment trapping efficiency at notches within different sediment transport regimes reveals that as the notch constriction ratio increases, the sediment accumulation per unit area also increases. The accumulation thickness per unit area in high-concentration sediment-laden flow zones is greater than in debris flow zones, indicating differences in sediment deposition characteristics among the various sediment transport regimes. Regarding sediment control rates at notches, there is a generally positive correlation with the notch constriction ratio.
During the 2009 Morakot Typhoon, the substantial sediment supply from slope failures in the upstream catchment led to an oversupplied sediment transport condition in the river channel. Consequently, sediment control rates were more pronounced during medium and small sediment transport events between 2010 and 2015. However, there were no significant differences in sediment control rates among the different sediment transport regimes at notches. Overall, this research provides valuable insights into the sediment control characteristics of notches under various sediment transport conditions, which can aid in the development of improved sediment management strategies in watersheds.
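Delineating regimes from a slope-area relation is commonly done with power-law thresholds of the form S = k * A^(-theta). The sketch below classifies a channel point this way; the coefficients are placeholders, not the values fitted for Aiyuzi, Hossa, or Chushui Creek:

```python
def classify_regime(slope, area_km2, k_collapse=0.6, k_debris=0.2, theta=0.4):
    """Assign a channel point to a transport regime by its slope-area position.

    Steeper-than-threshold points at small drainage areas fall in the
    frequent-collapse headwaters; intermediate slopes in the debris flow
    zone; gentler slopes in the high-concentration sediment-laden zone.
    The k and theta values here are illustrative placeholders.
    """
    if slope >= k_collapse * area_km2 ** -theta:
        return "collapse headwater"
    if slope >= k_debris * area_km2 ** -theta:
        return "debris flow"
    return "high-concentration sediment-laden flow"
```

For example, a steep low-area point (slope 0.9 at 0.5 km2) classifies as collapse headwater, while a gentle downstream point (slope 0.05 at 10 km2) falls in the sediment-laden flow zone.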

Keywords: landslide, debris flow, notch, sediment control, DTM, slope–area relation

Procedia PDF Downloads 28
820 Terrestrial Laser Scans to Assess Aerial LiDAR Data

Authors: J. F. Reinoso-Gordo, F. J. Ariza-López, A. Mozas-Calvache, J. L. García-Balboa, S. Eddargani

Abstract:

The quality of a DEM may depend on several factors, such as the data source, the capture method, the type of processing used to derive it, or the cell size of the DEM. The two most important capture methods for producing regional-sized DEMs are photogrammetry and LiDAR; DEMs covering entire countries have been obtained with these methods. The quality of these DEMs has traditionally been evaluated by the national cartographic agencies through punctual sampling focused on the vertical component. For this type of evaluation there are standards such as NMAS and the ASPRS Positional Accuracy Standards for Digital Geospatial Data. However, it seems more appropriate to carry out this evaluation by means of a method that takes into account the superficial nature of the DEM and whose sampling is therefore superficial rather than punctual. This work is part of the research project 'Functional Quality of Digital Elevation Models in Engineering', in which it is necessary to control the quality of a DEM whose data source is an experimental LiDAR flight with a density of 14 points per square meter, which we call the Point Cloud Product (PCpro). The present work describes the data capture on the ground and the post-processing tasks performed to obtain the point cloud used as the reference (PCref) against which the quality of the PCpro is evaluated. Each PCref consists of a 50 x 50 m patch obtained by registering scans from four different stations. The area studied was the Spanish region of Navarra, which covers 10,391 km2; 30 homogeneously distributed patches were necessary to sample the entire surface. The patches were captured using a Leica BLK360 terrestrial laser scanner mounted on a pole reaching heights of up to 7 meters; the scanner was inverted so that the characteristic shadow circle that appears when the scanner is in the direct position is avoided.
To ensure that the accuracy of the PCref is greater than that of the PCpro, the georeferencing of the PCref was carried out with real-time GNSS, and its positioning accuracy was better than 4 cm; this is much better than the altimetric mean square error estimated for the PCpro (<15 cm). The DEM of interest corresponds to the bare earth, so a filter had to be applied to eliminate vegetation and auxiliary elements such as poles, tripods, etc. After the post-processing tasks, the PCref is ready to be compared with the PCpro using different techniques: cloud to cloud, or DEM to DEM after a resampling process.
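The cloud-to-cloud comparison mentioned at the end reduces to nearest-neighbour distances from each PCpro point to the PCref. A brute-force sketch on toy coordinates (a real comparison over millions of points would use a k-d tree or dedicated software); the 3 cm vertical offset is invented:

```python
import math

def cloud_to_cloud(pc_pro, pc_ref):
    """Mean nearest-neighbour distance from each PCpro point to the PCref."""
    dists = []
    for p in pc_pro:
        dists.append(min(math.dist(p, q) for q in pc_ref))
    return sum(dists) / len(dists)

# Toy patch: reference cloud on a 0.5 m grid at z = 0, product cloud
# identical in plan but shifted 3 cm upward, mimicking a vertical bias.
pc_ref = [(x * 0.5, y * 0.5, 0.00) for x in range(5) for y in range(5)]
pc_pro = [(x * 0.5, y * 0.5, 0.03) for x in range(5) for y in range(5)]

mean_c2c = cloud_to_cloud(pc_pro, pc_ref)   # recovers the 3 cm offset here
```

On this toy pair the statistic recovers the 3 cm bias exactly; on real clouds the distribution of nearest-neighbour distances (not just the mean) is what reveals whether the PCpro meets the <15 cm altimetric error budget.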

Keywords: data quality, DEM, LiDAR, terrestrial laser scanner, accuracy

Procedia PDF Downloads 101
819 Anti-Neuroinflammatory and Anti-Apoptotic Efficacy of Equol against Lipopolysaccharide-Activated Microglia and Its Neurotoxicity

Authors: Lalita Subedi, Jae Kyoung Chae, Yong Un Park, Cho Kyo Hee, Lee Jae Hyuk, Kang Min Cheol, Sun Yeou Kim

Abstract:

Neuroinflammation may mediate the relationship between low estrogen levels and neurodegenerative disease. Estrogens are neuroprotective and anti-inflammatory in neurodegenerative disease models, but because of their long-term side effects, research has focused on finding effective phytoestrogens. Daidzein, present in soybeans, and its active metabolite equol (7-hydroxy-3-(4'-hydroxyphenyl)-chroman) have strong antioxidant and anticancer properties; equol showed a more potent anti-inflammatory and neuroprotective role in a neuroinflammatory model, and its in vitro activity was confirmed to act through the NF-κB pathway. Three major CNS cell types, microglia (BV-2), astrocytes (C6), and neurons (N2a), were used to examine the effect of equol on inducible nitric oxide synthase (iNOS), cyclooxygenase-2 (COX-2), MAPK signaling proteins, and apoptosis-related proteins by western blot analysis. Nitric oxide (NO) and prostaglandin E2 (PGE2) were measured by the Griess method and ELISA, respectively. Cytokines such as tumor necrosis factor-α (TNF-α) and interleukin-6 (IL-6) were also measured in the conditioned medium of LPS-activated cells with or without equol. Equol inhibited NO and PGE2 production and the expression of COX-2 and iNOS in LPS-stimulated microglial cells in a dose-dependent manner without any cellular toxicity. Equol also showed a promising effect in the modulation of MAPK and nuclear factor kappa B (NF-κB) expression, with significant inhibition of the production of the proinflammatory cytokines IL-6 and TNF-α. Additionally, it inhibited neuronal cell death induced by LPS-activated microglia by downregulating apoptosis in neuronal cells. Furthermore, equol increased the production of neurotrophins such as NGF and enhanced neurite outgrowth.
In conclusion, the natural daidzein metabolite equol is more active than daidzein and shows promise as an anti-neuroinflammatory and neuroprotective agent by downregulating LPS-stimulated microglial activation and neuronal apoptosis. This work was supported by the Brain Korea 21 Plus project and the High Value-Added Food Technology Development Program 114006-4, Ministry of Agriculture, Food and Rural Affairs.

Keywords: apoptosis, equol, neuroinflammation, phytoestrogen

Procedia PDF Downloads 361
818 Modeling of Turbulent Flow for Two-Dimensional Backward-Facing Step Flow

Authors: Alex Fedoseyev

Abstract:

This study investigates a simplified generalized hydrodynamic equation (GHE) model for the simulation of turbulent flow over a two-dimensional backward-facing step (BFS) at Reynolds number Re = 132000. The GHE was derived from the generalized Boltzmann equation (GBE), which was obtained from first principles from the chain of Bogolubov kinetic equations and considers particles of finite dimensions. The GHE contains additional terms, temporal and spatial fluctuations, compared to the Navier-Stokes equations (NSE). These terms carry a timescale multiplier τ, and the GHE becomes the NSE when τ is zero. The nondimensional τ is the product of the Reynolds number and the squared length-scale ratio, τ = Re(l/L)², where l is the apparent Kolmogorov length scale and L is a hydrodynamic length scale. BFS flow modeling results obtained by 2D calculations cannot match the experimental data for Re > 450; one or two additional equations are then required to add a turbulence model to the NSE, typically with two to five parameters to be tuned for specific problems. It is shown that the GHE does not require an additional turbulence model, and the turbulent velocity results are in good agreement with the experimental results. A review of several studies on the simulation of flow over the BFS from 1980 to 2023 is provided; most of these studies used different turbulence models when Re > 1000. In this study, the 2D turbulent flow over a BFS with height H = L/3 (where L is the channel height) at Reynolds number Re = 132000 was investigated using numerical solutions of the GHE (by a finite-element method) and compared to the solutions from the Navier-Stokes equations, the k–ε turbulence model, and experimental results. The comparison included the velocity profiles at X/L = 5.33 (near the end of the recirculation zone, available from the experiment), the recirculation zone length, and the velocity flow field.
The mean velocity of the NSE solution was obtained by averaging over the number of time steps. The solution with a standard k–ε model shows a velocity profile at X/L = 5.33 that has no backward flow. A standard k–ε model underpredicts the experimental recirculation zone length X/L = 7.0 ± 0.5 by a substantial 20-25%, so a more sophisticated turbulence model would be needed for this problem. The obtained data confirm that the GHE results are in good agreement with the experimental results for turbulent flow over a two-dimensional BFS; a turbulence model was not required in this case. The computations were stable. The solution time for the GHE is the same as or less than that for the NSE, and significantly less than that for the NSE with a turbulence model. The proposed approach was limited to 2D and a single Reynolds number; further work will extend it to 3D flow and higher Re.
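The nondimensional timescale multiplier defined above, τ = Re·(l/L)², is simple to evaluate; a minimal sketch follows (the numerical values of l and L are illustrative only, not taken from the paper's computations):

```python
def tau(re, l, L):
    """Nondimensional timescale multiplier of the GHE: tau = Re * (l/L)^2.
    When tau -> 0, the GHE reduce to the Navier-Stokes equations."""
    return re * (l / L) ** 2

# Illustrative values: l is the apparent Kolmogorov length scale,
# L a hydrodynamic length scale (both hypothetical here).
t = tau(132000, l=1e-3, L=1.0)
nse_limit = tau(132000, l=0.0, L=1.0)   # tau = 0: the NSE limit
```

The single parameter τ replaces the two to five tuned constants of a conventional turbulence model.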

Keywords: backward-facing step, comparison with experimental data, generalized hydrodynamic equations, separation, reattachment, turbulent flow

Procedia PDF Downloads 61
817 Culturally Relevant Pedagogy: A Cross-Cultural Comparison

Authors: Medha Talpade, Salil Talpade

Abstract:

The intent of this quantitative project was to compare the values and perceptions of students from a predominantly white institution (PWI) with those from a historically black college or university (HBCU) regarding culturally relevant teaching and learning practices. The reason for interrelating student culture with teaching practices is to enable a pedagogical response to the low retention rates of African American students and first-generation Caucasian students in high schools and colleges, and to their low rates of social mobility and educational achievement. Culturally relevant pedagogy, according to related research, is rewarding to students, teachers, and the local and national community. Critical race theory (CRT) is the main framework used in this project to explain the ubiquity of a culturally relevant pedagogy. The purpose of this quantitative study was to test the critical race theory proposition that relates the presence of the factors associated with culturally relevant teaching strategies to perceived relevance. The culturally relevant teaching strategies were identified based on the recommendations and findings of past research. Participants included approximately 145 students from an HBCU and 55 students from the PWI. A survey consisting of 37 items related to culturally relevant pedagogy was administered. The themes used to construct the items were: use of culturally specific examples in class whenever possible, use of culturally specific presentational models, use of relational reinforcers, and active engagement. All items had a Likert-type response scale. Participants reported their degree of agreement (5-point scale ranging from strongly disagree to strongly agree) and importance (3-point scale ranging from not at all important to very important) for each survey item. A new variable, Relevance, was formed as the multiplicative function of the importance and presence of a teaching and learning strategy.
A set of six demographic questions was included in the survey. A consent form based on NIH and APA ethical standards was distributed to the volunteers prior to survey administration. Results of factor analyses on the data from the PWI and the HBCU, together with an ANOVA, indicated significant differences on Relevance related to specific themes. Results of this study are expected to inform educational practices and improve teaching and learning outcomes.
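The Relevance variable described above, formed as the product of perceived presence (agreement) and importance, can be sketched as follows (the item scores are hypothetical, for illustration only):

```python
def relevance(agreement, importance):
    """Relevance of a teaching strategy: product of perceived presence
    (5-point agreement scale) and importance (3-point scale)."""
    assert 1 <= agreement <= 5 and 1 <= importance <= 3
    return agreement * importance

# One respondent's (agreement, importance) scores for three hypothetical items
items = [(5, 3), (4, 2), (2, 1)]
scores = [relevance(a, i) for a, i in items]
```

The multiplicative form means a strategy scores high only when it is both present and valued; a strategy rated present but unimportant contributes little.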

Keywords: culturally relevant pedagogy, college students, cross-cultural, applied psychology

Procedia PDF Downloads 432
816 Experimental Analysis of Supersonic Combustion Induced by Shock Wave at the Combustion Chamber of the 14-X Scramjet Model

Authors: Ronaldo de Lima Cardoso, Thiago V. C. Marcos, Felipe J. da Costa, Antonio C. da Oliveira, Paulo G. P. Toro

Abstract:

The 14-X is a strategic project of the Brazilian Air Force Command to develop a technological demonstrator of a hypersonic air-breathing propulsion system based on supersonic combustion, designed to fly in the Earth's atmosphere at 30 km of altitude and Mach number 10. The 14-X is under development at the Prof. Henry T. Nagamatsu Laboratory of Aerothermodynamics and Hypersonics of the Institute for Advanced Studies. The program began in 2007 and was planned in three stages: development of the waverider configuration, development of the scramjet configuration, and finally ground tests in the hypersonic shock tunnel T3. The model based on the 14-X scramjet was installed in the test section of the hypersonic shock tunnel so as to reproduce and test the flight conditions at the inlet of the combustion chamber. Experimental studies with a hypersonic shock tunnel require special data acquisition techniques. Pressure along the tested model geometry was measured with 30 PCB® model 122A22 pressure transducers; the piezoelectric crystals of these transducers produce an electric current when subjected to pressure variations (PCB® Piezotronics, 2016). The transducer signals were read with an oscilloscope. After the studies began, we observed that the pressure inside the combustion chamber was lower than expected. One solution to raise the pressure inside the combustion chamber was to install an obstacle that provides high temperature and pressure. Emission spectroscopy was selected to confirm whether combustion occurs; the region analyzed by the emission spectroscopy system is the edge of the obstacle installed inside the combustion chamber.
The emission spectroscopy technique was used to observe the emission of OH*, confirming (or not) combustion of the mixture of atmospheric air at supersonic speed and the hydrogen fuel inside the combustion chamber of the model. This paper presents the results of experimental studies of supersonic combustion induced by shock waves, performed at the Hypersonic Shock Tunnel T3 using the 14-X scramjet model. It also provides important data for the combustion studies using the model based on the 14-X engine (second stage of the 14-X program), informing possible corrections to be made in the next stages of the program or in other experimental models.

Keywords: 14-X, experimental study, ground tests, scramjet, supersonic combustion

Procedia PDF Downloads 387
815 Inner Quality Parameters of Rapeseed (Brassica napus) Populations in Different Sowing Technology Models

Authors: É. Vincze

Abstract:

Demand for plant oils has increased enormously, due on the one hand to changing human nutrition habits and on the other to the growing raw material demand of some industrial sectors, as well as the increase in biofuel production. Besides the determining importance of sunflower in Hungary, the production area and, in part, the average yield of rapeseed have increased among the oil crops produced. The available variety/hybrid palette has changed and been extended significantly during the past decade. It is agreed that rapeseed production demands professionalism and local experience; technological elements are successive, and high yields cannot be produced without a system-based approach. The aim of the present work was a complex study of one of the most critical production technology elements of rapeseed production: sowing technology. Several sowing technology elements are studied in this research project: the biological basis (the hybrid Arkaso is studied in this regard), sowing time (treatments were set so that they represent the wide period used in industrial practice: early, optimal, and late sowing), and plant density (the reaction of sparse, optimal, and overly dense populations was modelled). The multifactorial experimental system enables both single and complex evaluation of rapeseed sowing technology elements, as well as their modelling using the experimental data. Yield quality and quantity were also determined in the present experiment, together with the interactions between these factors. The experiment was set up in four replications at the Látókép Plant Production Research Site of the University of Debrecen. Two different sowing times were sown in the first experimental year (2014) and three in the second (2015).
Three different plant densities were set in both years: 200, 350, and 500 thousand plants ha⁻¹. Uniform nutrient supply and a row spacing of 45 cm were applied, with winter wheat as the pre-crop. Plant physiological measurements, relative chlorophyll content (SPAD) and leaf area index (LAI), were carried out on the populations of the Arkaso rapeseed hybrid at seven different measurement times.

Keywords: inner quality, plant density, rapeseed, sowing time

Procedia PDF Downloads 201
814 Gender and Total Compensation, in an ‘Age’ of Disruption

Authors: Daniel J. Patricio Jiménez

Abstract:

The term 'total compensation' refers to salary, training, innovation, and development, and of course, motivation; total compensation is an open and flexible system which must facilitate work and family conciliation and therefore cannot be isolated from social reality. Today, the challenge for any company that wants to have a future is to be sustainable, and women play a special role in this. Spain, in its statutory and collectively bargained development, has not responded sufficiently to new phenomena such as bonuses, stock options, or fringe benefits (constructed dogmatically and by court decisions), nor to the new digital reality, where cryptocurrency, new collaborative models, and new forms of service provision, such as remote work, are always ahead of the law. To talk about compensation is to talk about the gender gap, and with the entry into force of RD 902/2020 on 14 April 2021, certain measures are required under the principle of pay transparency; job valuation, the pay register (RD 6/2019), and the pay audit are examples. Analyzing the methodologies, and in particular the determination and weighting of the factors, so that the system itself is not discriminatory, is essential. The wage gap in Spain is smaller than in Europe, but the sources do not reflect reality, and since the beginning of the pandemic there has been clear stagnation. A living wage is not the minimum wage; it is identified with rights and needs; it is the wage that, based on internal equity, reflects the competitiveness of the company in terms of human capital. Spain has lost, and has not recovered, the relative weight of its wages; this has a direct impact on competitiveness and consequently on the precariousness of employment and, undoubtedly, on levels of extreme poverty.
Training is becoming a strategic factor more than ever. The new digital reality requires every component of the system to be connected; transversality is imposed on us, forcing a redefinition of content and answers to the demands of the new normality, because technology and robotization are changing the concept of employability. The presence of women in this context is necessary, and there is a long way to go. So-called emotional compensation becomes particularly relevant at a time when the pandemic, silence, and disruption are leaving after-effects; technostress, in all its manifestations, is just one of them. Talking about motivation today makes no sense without first recognizing that mental health is a priority and must be treated and communicated in an inclusive way, because doing so increases satisfaction, productivity, and engagement. There is a clear conclusion to all this: compensation systems do not respond to the 'new normality'; diversity, and in particular women, cannot be invisible in human resources policies if the company wants to be sustainable.

Keywords: diversity, gender gap, human resources, sustainability

Procedia PDF Downloads 168
813 Evaluation of the Influence of Graphene Oxide on Spheroid and Monolayer Culture under Flow Conditions

Authors: A. Zuchowska, A. Buta, M. Mazurkiewicz-Pawlicka, A. Malolepszy, L. Stobinski, Z. Brzozka

Abstract:

In recent years, graphene-based materials have found more and more applications in biological science. As thin, tough, transparent, and chemically resistant materials, they appear very well suited for the production of implants and biosensors. Interest in graphene derivatives has also prompted research into their possible application in cancer therapy; currently, their potential use in photothermal therapy and as drug carriers is mostly analyzed, and the direct anticancer properties of graphene-based materials are also tested. Nowadays, cytotoxicity studies are conducted on in vitro cell cultures in standard culture vessels (macroscale). In this type of cell culture, however, the cells grow on a synthetic surface under static conditions; for this reason, macroscale cell culture does not reflect the in vivo environment. Microfluidic systems, called Lab-on-a-chip, are proposed as a solution for improving the cytotoxicity analysis of new compounds. Here, we present the evaluation of the cytotoxic properties of graphene oxide (GO) on breast, liver, and colon cancer cell lines in a microfluidic system in two spatial models (2D and 3D). Before cell introduction, the microchamber surfaces were coated with fibronectin (2D, monolayer) or poly(vinyl alcohol) (3D, spheroids). After spheroid creation (3D) and cell attachment (2D, monolayer), the selected concentrations of GO were introduced into the microsystems. Monolayer and spheroid viability/proliferation were then checked for three days using the alamarBlue® assay and a standard microplate reader. Moreover, on every day of the culture, morphological changes of the cells were determined by microscopic analysis. Additionally, on the last day of the culture, differential staining with calcein AM and propidium iodide was performed. We noted that GO influenced the viability of all tested cell lines in both monolayer and spheroid arrangements.
We showed that GO caused a larger decrease in viability/proliferation for spheroids than for monolayers (this was observed for all tested cell lines). The higher cytotoxicity of GO in spheroid culture may be caused by the different geometry of the microchambers for 2D and 3D cell cultures; probably, GO was removed from the flat microchambers in the 2D culture. These results were also confirmed by differential staining. Comparing our results with studies conducted at the macroscale, we also showed that the cytotoxic properties of GO change depending on the cell culture conditions (static/flow).
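The alamarBlue® viability readout mentioned above is commonly normalized to an untreated control after background subtraction; the abstract does not state the exact formula used, so the following is only a sketch of that common normalization, with hypothetical fluorescence readings:

```python
def percent_viability(sample_rfu, control_rfu, blank_rfu):
    """Viability relative to an untreated control after background subtraction.
    A common normalization for alamarBlue readouts; not necessarily the
    exact calculation used in the study."""
    return 100.0 * (sample_rfu - blank_rfu) / (control_rfu - blank_rfu)

# Hypothetical microplate-reader fluorescence readings (RFU)
blank, control = 500.0, 12500.0
monolayer_go = 10100.0   # GO-treated monolayer
spheroid_go = 7700.0     # GO-treated spheroid

v_2d = percent_viability(monolayer_go, control, blank)
v_3d = percent_viability(spheroid_go, control, blank)
```

With these illustrative numbers the spheroid culture shows the larger viability drop, mirroring the pattern reported for GO.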

Keywords: cytotoxicity, graphene oxide, monolayer, spheroid

Procedia PDF Downloads 125
812 Nanoliposomes in Photothermal Therapy: Advancements and Applications

Authors: Mehrnaz Mostafavi

Abstract:

Nanoliposomes, minute lipid-based vesicles at the nano-scale, show promise in the realm of photothermal therapy (PTT). This study presents an extensive overview of nanoliposomes in PTT, exploring their distinct attributes and the significant progress in this therapeutic methodology. The research delves into the fundamental traits of nanoliposomes, emphasizing their adaptability, compatibility with biological systems, and their capacity to encapsulate diverse therapeutic substances. Specifically, it examines the integration of light-absorbing materials, like gold nanoparticles or organic dyes, into nanoliposomal formulations, enabling their use as proficient agents for photothermal treatment. Additionally, this paper elucidates the mechanisms involved in nanoliposome-mediated PTT, highlighting their capability to convert light energy into localized heat, facilitating the precise targeting of diseased cells or tissues. This precise regulation of light absorption and heat generation by nanoliposomes presents a non-invasive and precisely focused therapeutic approach, particularly in conditions like cancer. The study explores advancements in nanoliposomal formulations aimed at optimizing PTT outcomes, including strategies for improved stability, enhanced drug loading, and the targeted delivery of therapeutic agents to specific cells or tissues. Furthermore, the paper discusses multifunctional nanoliposomal systems, integrating imaging components or targeting elements for real-time monitoring and improved accuracy in PTT. Moreover, the review highlights recent preclinical and clinical trials showcasing the effectiveness and safety of nanoliposome-based PTT across various disease models. It also addresses challenges in clinical implementation, such as scalability, regulatory considerations, and long-term safety assessments.
In conclusion, this paper underscores the substantial potential of nanoliposomes in advancing PTT as a promising therapeutic approach. Their distinctive characteristics, combined with their precise ability to convert light into heat, offer a tailored and efficient method for treating targeted diseases. The encouraging outcomes from preclinical studies pave the way for further exploration and potential clinical applications of nanoliposome-based PTT.

Keywords: nanoliposomes, photothermal therapy, light absorption, heat conversion, therapeutic agents, targeted delivery, cancer therapy

Procedia PDF Downloads 113
811 Branding in FMCG Sector in India: A Comparison of Indian and Multinational Companies

Authors: Pragati Sirohi, Vivek Singh Rana

Abstract:

A brand is a name, term, sign, symbol, design, or a combination of these, intended to identify the goods or services of one seller or a group of sellers and to differentiate them from those of competitors. Perception influences purchase decisions, so building that perception is critical. The FMCG industry is a low-margin business; volumes hold the key to success, and the industry therefore has a strong emphasis on marketing. Creating strong brands is important for FMCG companies, and they devote considerable money and effort to developing brands. Brand loyalty is fickle; companies know this, and that is why they work relentlessly towards brand building. The purpose of the study is a comparison between Indian and multinational companies in the FMCG sector in India. It has been hypothesized that after liberalization, Indian companies have taken up the challenge of globalization and some are giving stiff competition to MNCs; that MNCs have a stronger brand image than Indian companies; and that advertisement expenditures of MNCs are proportionately higher than those of their Indian counterparts. The operational area of the study is the country as a whole. Continuous time-series data are available from 1996-2014 for the eight selected companies, chosen on the basis of their large market share, brand equity, and prominence in the market. The research methodology focuses on finding trend growth rates of market capitalization, net worth, and brand values through regression analysis, using secondary data from the Prowess database developed by CMIE (Centre for Monitoring Indian Economy). Brand value is estimated as the excess of market capitalization over the net worth of a company, and brand value indices are calculated.
The correlation between brand values and advertising expenditure is also measured to assess the effect of advertising on branding. Major results indicate that although MNCs enjoy a stronger brand image, some Indian companies, such as ITC, are outstanding leaders in terms of market capitalization and brand value; Dabur and Tata Global Beverages Ltd are competing equally well on these measures. Advertisement expenditures are highest for HUL, followed by ITC, Colgate, and Dabur, which shows that Indian companies are not behind in the race. Although advertisement expenditures play a role in the brand building process, many other factors also affect it. Brand values are also decreasing over the years for FMCG companies in India, which shows that competition is intense, with aggressive price wars and brand clutter. The implication for Indian companies is that they have to put consistent, proactive, and relentless effort into their brand building process. Brands need focus and consistency; brand longevity without innovation leads to brand respect but does not create brand value.
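The brand-value estimate described above (market capitalization minus net worth) and its correlation with advertising expenditure can be sketched as follows; all figures are hypothetical, not drawn from the Prowess data:

```python
def brand_value(market_cap, net_worth):
    """Brand-value proxy: excess of market capitalization over net worth."""
    return market_cap - net_worth

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical figures (billion INR) for three years of one company
market_caps = [120.0, 150.0, 180.0]
net_worths  = [40.0, 45.0, 50.0]
ad_spend    = [3.0, 4.0, 5.5]

bvs = [brand_value(mc, nw) for mc, nw in zip(market_caps, net_worths)]
r = pearson(bvs, ad_spend)
```

A high r would suggest advertising moves with brand value, though, as the abstract notes, correlation alone cannot separate advertising from the many other factors in brand building.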

Keywords: brand value, FMCG, market capitalization, net worth

Procedia PDF Downloads 357
810 Mega Sporting Events and Branding: Marketing Implications for the Host Country’s Image

Authors: Scott Wysong

Abstract:

Qatar will spend billions of dollars to host the 2022 World Cup. While football fans around the globe get excited to cheer on their favorite team every four years, critics debate the merits of a country hosting such an expensive and large-scale event. That is, host countries spend billions of dollars on stadiums and infrastructure to attract these mega sporting events in the hope of equitable returns in economic impact and job creation; yet, in many cases, the host countries are left in debt with decaying venues. There are benefits beyond the economic impact of hosting mega-events; for example, citizens are often proud that their city or country hosts these famous events. Often overlooked in the literature, however, is the proposition that serving as the host for a mega-event may enhance the country's brand image, not only as a tourist destination but also for the products made in that country of origin. This research explores this phenomenon by taking an exploratory look at consumer perceptions of three host countries of mega sporting events. In 2014, U.S., Chinese, and Finnish consumer attitudes toward Brazil and its products were measured before and after the World Cup via surveys (n=89). An analysis of variance (ANOVA) revealed no statistically significant differences in the pre- and post-World Cup perceptions of Brazil's brand personality or country-of-origin image. After the World Cup in 2018, qualitative interviews were held with U.S. sports fans (n=17) to further explore consumer perceptions of products made in the host country, Russia; a consistent theme of distrust and corruption around Russian products emerged despite Russia's hosting of this prestigious global event. In late 2021, U.S. football (soccer) fans (n=42) and non-fans (n=37) were surveyed about the upcoming 2022 World Cup.
A regression analysis revealed that the degree to which an individual identified as a soccer fan did not significantly influence their desire to visit Qatar or try products from Qatar in the future, even though the country was hosting the World Cup. In the end, hosting a mega-event as grand as the World Cup showcases the country to the world, but it seems to have little impact on consumer perceptions of the country as a whole or of its brands. Rather, the World Cup appeared to reinforce pre-existing stereotypes about Brazil (e.g., beaches, partying, and fun, yet with crime and poverty), Russia (e.g., cold weather, vodka, and business corruption), and Qatar (desert and oil). Moreover, across all three countries, respondents could rarely name a brand from the host country. Because mega-events cost considerable time and money, countries need to do more to market themselves and their brands when hosting, and they would be wise to measure the impact of the event from different perspectives. Hence, we put forth a comprehensive future research agenda to further the understanding of how countries, and their brands, can benefit from hosting a mega sporting event.
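The pre/post ANOVA comparison used in the 2014 Brazil study can be sketched with a minimal one-way F statistic; the ratings below are hypothetical and merely illustrate the "no significant difference" pattern reported:

```python
def one_way_anova_f(*groups):
    """F statistic of a one-way ANOVA: between-group over within-group mean square."""
    all_vals = [v for g in groups for v in g]
    grand = sum(all_vals) / len(all_vals)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    ms_between = ss_between / (len(groups) - 1)
    ms_within = ss_within / (len(all_vals) - len(groups))
    return ms_between / ms_within

# Hypothetical 7-point brand-image ratings before and after a World Cup
pre  = [4.1, 3.8, 4.4, 4.0, 3.9]
post = [4.2, 3.9, 4.3, 4.1, 4.0]
f_stat = one_way_anova_f(pre, post)   # a small F is consistent with no shift
```

An F statistic near zero, as here, falls far below any conventional critical value, matching the non-significant pre/post result described above.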

Keywords: branding, country-of-origin effects, mega sporting events, return on investment

Procedia PDF Downloads 282
809 Simultaneous Optimization of Design and Maintenance through a Hybrid Process Using Genetic Algorithms

Authors: O. Adjoul, A. Feugier, K. Benfriha, A. Aoussat

Abstract:

In general, issues related to design and maintenance are considered independently, yet the decisions made in these two areas influence each other. Design for maintenance is considered an opportunity to optimize the life-cycle cost of a product, particularly in the nuclear or aeronautical fields, where maintenance expenses represent more than 60% of life-cycle costs. The design of large-scale systems starts with the product architecture, a choice of components in terms of cost, reliability, weight, and other attributes corresponding to the specifications. On the other hand, the design must take maintenance into account by improving, in particular, real-time monitoring of equipment through the integration of new technologies such as connected sensors and intelligent actuators. We noticed that the different approaches used in Design for Maintenance (DFM) methods are limited to the simultaneous characterization of the reliability and maintainability of a multi-component system. This article proposes a DFM method that assists designers in defining dynamic maintenance for multi-component industrial systems; "dynamic" refers to the ability to integrate available monitoring data to adapt the maintenance decision in real time. The goal is to maximize the availability of the system at a given life-cycle cost. This paper presents an approach for the simultaneous optimization of the design and maintenance of multi-component systems. Here, the design is characterized by four decision variables for each component (reliability level, maintainability level, redundancy level, and level of monitoring data), and the maintenance by two decision variables (the dates of the maintenance stops and the maintenance operations to be performed on the system during these stops). The DFM model helps designers choose technical solutions for large-scale industrial products.
Large-scale refers to complex multi-component industrial systems with long life cycles, such as trains and aircraft. The method is based on a two-level hybrid algorithm for the simultaneous optimization of design and maintenance, using genetic algorithms. The first level selects a design solution for a given system, considering life-cycle cost and reliability. The second level determines a dynamic and optimal maintenance plan to be deployed for that design solution; it is based on the Maintenance Free Operating Period (MFOP) concept, which takes into account decision criteria such as total reliability, maintenance cost, and maintenance time. Depending on the life-cycle duration, the desired availability, and the desired business model (sales or rental), this tool provides visibility of overall costs and the optimal product architecture.
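The first level of the hybrid algorithm, selecting a design by genetic search under cost and reliability criteria, can be sketched as a minimal single-level genetic algorithm; the component options, budget, and fitness below are hypothetical stand-ins for the paper's decision variables, not the authors' actual model:

```python
import random

random.seed(42)

# Hypothetical component options: (reliability, cost). A design picks one
# option per component; fitness is series-system reliability under a
# life-cycle-cost budget.
OPTIONS = [(0.90, 1.0), (0.95, 2.0), (0.99, 4.0)]
N_COMPONENTS, BUDGET = 5, 14.0

def fitness(design):
    rel, cost = 1.0, 0.0
    for idx in design:
        r, c = OPTIONS[idx]
        rel *= r
        cost += c
    return rel if cost <= BUDGET else 0.0   # infeasible designs score zero

def evolve(pop_size=30, generations=40, p_mut=0.1):
    pop = [[random.randrange(len(OPTIONS)) for _ in range(N_COMPONENTS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_COMPONENTS)   # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < p_mut:               # point mutation
                child[random.randrange(N_COMPONENTS)] = random.randrange(len(OPTIONS))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

In the paper's two-level scheme, each candidate design evaluated here would additionally trigger an inner genetic search for its optimal maintenance plan, with that plan's cost and availability folded back into the design's fitness.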

Keywords: availability, design for maintenance (DFM), dynamic maintenance, life cycle cost (LCC), maintenance free operating period (MFOP), simultaneous optimization

Procedia PDF Downloads 119
808 A Reduced Ablation Model for Laser Cutting and Laser Drilling

Authors: Torsten Hermanns, Thoufik Al Khawli, Wolfgang Schulz

Abstract:

In laser cutting, as well as in long-pulsed laser drilling of metals, it can be demonstrated that the ablation shape that forms (the shape of the cut faces or of the hole, respectively) approaches a so-called asymptotic shape, such that it changes only slightly or not at all with further irradiation. These findings are already known from the ultrashort pulse (USP) ablation of dielectric and semiconducting materials. The explanation for the occurrence of an asymptotic shape in laser cutting and long-pulse drilling of metals is identified, its underlying mechanism numerically implemented, tested, and clearly confirmed by comparison with experimental data. The result is a model that allows the simulation of the temporal (pulse-resolved) evolution of the hole shape in laser drilling as well as the final (asymptotic) shape of the cut faces in laser cutting. This simulation requires far fewer computational resources, so that it can run even on common desktop PCs or laptops. Individual parameters can be adjusted using sliders; the simulation result appears in an adjacent window and changes in real time. This is made possible by an application-specific reduction of the underlying ablation model. Because this reduction dramatically decreases the complexity of the calculation, it produces a result much more quickly. This means that the simulation can be carried out directly at the laser machine. Time-intensive experiments can be reduced, and set-up processes can be completed much faster. The high speed of simulation also opens up a range of entirely different options, such as metamodeling. Suitable for complex applications with many parameters, metamodeling involves generating high-dimensional data sets from the parameters and several evaluation criteria for process and product quality. These sets can then be used to create individual process maps that show the dependency of individual parameter pairs.
This advanced simulation makes it possible to find global and local extreme values through mathematical manipulation. Such simultaneous optimization of multiple parameters is scarcely possible by experimental means. This means that new manufacturing methods such as self-optimization can be executed much faster. However, the software's potential does not stop there; time-intensive calculations exist in many areas of industry. In laser welding or laser additive manufacturing, for example, the simulation of thermally induced residual stresses still uses up considerable computing capacity or is not even possible. Transferring the principle of reduced models promises substantial savings there, too.
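The metamodeling idea above can be sketched in a few lines: evaluate a cheap reduced model once over a parameter grid, store the results, and answer later queries by interpolation instead of re-running the model. The "reduced model" below is an invented proxy (depth growing with power, shrinking with cutting speed), not the paper's physics; parameter names and ranges are assumptions.

```python
import itertools

def reduced_model(power, speed):
    # Toy ablation-depth proxy standing in for the actual reduced model.
    return power / (1.0 + speed)

POWERS = [1.0, 2.0, 3.0, 4.0]
SPEEDS = [0.0, 1.0, 2.0, 3.0]
# One-off sampling pass over the full parameter grid.
GRID = {(p, s): reduced_model(p, s)
        for p, s in itertools.product(POWERS, SPEEDS)}

def metamodel(power, speed):
    # Bilinear interpolation within the grid cell containing the query.
    p0 = max(p for p in POWERS if p <= power)
    p1 = min(p for p in POWERS if p >= power)
    s0 = max(s for s in SPEEDS if s <= speed)
    s1 = min(s for s in SPEEDS if s >= speed)
    tp = 0.0 if p1 == p0 else (power - p0) / (p1 - p0)
    ts = 0.0 if s1 == s0 else (speed - s0) / (s1 - s0)
    low = GRID[(p0, s0)] * (1 - tp) + GRID[(p1, s0)] * tp
    high = GRID[(p0, s1)] * (1 - tp) + GRID[(p1, s1)] * tp
    return low * (1 - ts) + high * ts

print(round(metamodel(2.5, 1.5), 3))
```

A process map for a parameter pair is then just this interpolant evaluated over a fine grid, which is cheap enough to recompute interactively as slider values change.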

Keywords: asymptotic ablation shape, interactive process simulation, laser drilling, laser cutting, metamodeling, reduced modeling

Procedia PDF Downloads 215
807 Neurocognitive and Executive Function in Cocaine Addicted Females

Authors: Gwendolyn Royal-Smith

Abstract:

Cocaine ranks as one of the world's most addictive and commonly abused stimulant drugs. Recent evidence indicates that cocaine abuse has risen so quickly among females that this group now accounts for about 40 percent of all users in the United States. Neuropsychological studies have demonstrated that specific neural activation patterns carry higher risks for neurocognitive and executive function in cocaine-addicted females, thereby increasing their vulnerability to poorer treatment outcomes and more frequent post-treatment relapse compared to males. This study examined secondary data from a convenience sample of 164 cocaine-addicted males and females to assess neurocognitive and executive function. The principal objective of this study was to assess whether individual performance on the Stroop Word-Color Task is predictive of treatment success by gender. A second objective evaluated whether individual performance on neurocognitive measures, including the Stroop Word-Color Task, the Rey Auditory Verbal Learning Test (RAVLT), the Iowa Gambling Task, the Wisconsin Card Sorting Task (WCST), the total score from the Barratt Impulsiveness Scale (Version 11) (BIS-11), and the total score from the Frontal Systems Behavior Scale (FrSBe), demonstrated differences in neurocognitive and executive function performance by gender. Logistic regression models were employed using a covariate-adjusted model application. Initial analyses of the Stroop Word-Color Task indicated significant differences in the performance of males and females, with females experiencing more challenges in derived interference reaction time and associate recall ability.
In early testing, including the Rey Auditory Verbal Learning Test (RAVLT), the number of advantageous vs. disadvantageous cards from the Iowa Gambling Task, the number of perseverative errors from the Wisconsin Card Sorting Task (WCST), the total score from the Barratt Impulsiveness Scale (Version 11) (BIS-11), and the total score from the Frontal Systems Behavior Scale (FrSBe), results were mixed, with women scoring lower on multiple indicators of both neurocognitive and executive function.
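A covariate-adjusted logistic regression of the kind described can be sketched as follows. This is an illustration only: the data are synthetic (a Stroop-style interference score plus a gender indicator driving a simulated treatment-success outcome), and the fitting routine is plain gradient ascent rather than the study's statistical software.

```python
import math
import random

random.seed(1)

def make_subject():
    # Synthetic subject: females drawn with higher interference times here
    # purely for illustration; success probability falls with interference.
    female = random.random() < 0.5
    stroop = random.gauss(1.0 if female else 0.5, 0.3)
    logit = 2.0 - 2.0 * stroop
    success = 1 if random.random() < 1 / (1 + math.exp(-logit)) else 0
    return [1.0, stroop, float(female)], success  # intercept, covariate, gender

DATA = [make_subject() for _ in range(400)]

def fit(data, lr=0.1, epochs=300):
    # Maximize the log-likelihood by batch gradient ascent.
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        grad = [0.0, 0.0, 0.0]
        for x, y in data:
            p = 1 / (1 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
            for j in range(3):
                grad[j] += (y - p) * x[j]
        w = [wi + lr * g / len(data) for wi, g in zip(w, grad)]
    return w

weights = fit(DATA)
print([round(w, 2) for w in weights])
```

The fitted coefficient on the interference score comes out negative, mirroring the direction of the reported effect: worse interference performance predicts lower odds of treatment success.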

Keywords: cocaine addiction, gender, neuropsychology, neurocognitive, executive function

Procedia PDF Downloads 402
806 State Forest Management Practices by Indigenous Peoples in Dharmasraya District, West Sumatra Province, Indonesia

Authors: Abdul Mutolib, Yonariza Mahdi, Hanung Ismono

Abstract:

The existence of forests is essential to human life on earth, but it is threatened by deforestation and degradation. Deforestation and degradation in Indonesia are caused not only by illegal activity by companies and the like; today, much forest damage is caused by human activities, one of which is clearing forests for agriculture and plantations. In West Sumatra, community forest management is supported by the recognition of customary land tenure, including ownership of land within forests. Indigenous forest management has a positive benefit in that it gives communities an opportunity to secure livelihoods and income; but if forest management by indigenous peoples is not done wisely, forests are destroyed, with adverse effects on the environment. This paper is based on intensive fieldwork in Dharmasraya District employing several data collection techniques: key informant interviews, household surveys, secondary data analysis, and satellite image interpretation. It answers the following questions: what is the impact of forest management by local communities on forest conditions (focusing on Production Forest and Limited Production Forest), and what do local communities know about the benefits of forests? The study site is Nagari Bonjol, Dharmasraya District, because most of the forest in Dharmasraya is located there and owned by the Nagari Bonjol community. The results show that there is damage to forests in Dharmasraya because of forest management activities by local communities. A forest area of 33,500 ha in Dharmasraya has been damaged because forests were converted into oil palm and rubber monoculture plantations. As a result of this destruction, water resources are also diminishing, and the community has experienced drought in the dry season because forests were cut down and replaced by oil palm plantations.
Local communities' knowledge of the benefits of forests is low; people consider that forests do not offer better benefits, so they cut them down and convert them into oil palm or rubber plantations. Local people do not understand the ecological benefits and environmental services that forests provide. Given this situation in Dharmasraya regarding land ownership, there is a need to educate the local community about the importance of protecting forests, and a strategy is needed to integrate forest management so as to maintain forest-like ecological functions while accounting for the economic benefits and welfare of local communities. One alternative is to adopt smallholder agroforestry management models in accordance with the characteristics of local communities that still consider economic, social, and environmental factors.

Keywords: community, customary land, farmer plantations, forests

Procedia PDF Downloads 336
805 The Elimination of Fossil Fuel Subsidies from the Road Transportation Sector and the Promotion of Electro Mobility: The Ecuadorian Case

Authors: Henry Acurio, Alvaro Corral, Juan Fonseca

Abstract:

In Ecuador, subsidies on fossil fuels for the road transportation sector have long been part of the economy, maintained mainly through demagogy and populism by political leaders. It is now clear that the government cannot sustain the subsidies any longer, given the trade balance and the general state budget; subsidies are also a key barrier to implementing cleaner technologies. Over the last few months, however, the subsidies have been eliminated gradually with the purpose of reaching international prices. It is expected that with this measure the population will opt for other means of transportation, which will in turn promote the use of electric vehicles, both private and public, e.g., taxis and buses (urban transport). Considering the three main elements of sustainable development, an analysis of the social, economic, and environmental impacts of eliminating subsidies will be generated at the country level. To achieve this, four scenarios will be developed in order to determine how eliminating the subsidies will contribute to the promotion of electro-mobility: 1) a Business as Usual (BAU) scenario; 2) the introduction of 10,000 electric vehicles by 2025; 3) the introduction of 100,000 electric vehicles by 2030; and 4) the introduction of 750,000 electric vehicles by 2040 (in all scenarios, buses, taxis, light-duty vehicles, and private vehicles will be introduced, as established in the National Electro Mobility Strategy for Ecuador). The Low Emissions Analysis Platform (LEAP) will be used, and it is suitable for determining the government's cost of importing fossil fuel derivatives and the cost of the electricity needed to power the electric fleet. The elimination of subsidies generates fiscal resources for the state that can be used to develop other kinds of projects that will benefit Ecuadorian society.
It will definitely change the energy matrix and provide energy security for the country; it will be an opportunity for the government to incentivize a greater introduction of renewable energies, e.g., solar, wind, and geothermal. At the same time, it will reduce greenhouse gas (GHG) emissions from the transportation sector, considering its mitigation potential, which will improve inhabitants' quality of life by improving air quality, thereby reducing the respiratory diseases associated with exhaust emissions and advancing sustainability, the Sustainable Development Goals (SDGs), and the commitments established under the Paris Agreement (COP 21, 2015). Electro-mobility in Latin America and the Caribbean can only be achieved through the implementation of the right policies by central governments, accompanied by a National Urban Mobility Policy (NUMP), within a greater vision of developing holistic, sustainable transport systems at the local government level.
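The scenario comparison LEAP would formalise can be reduced, for illustration, to a back-of-envelope calculation: avoided fuel-import cost minus added electricity cost, scaled by fleet size. Every cost figure below is an invented placeholder, not Ecuadorian data; only the fleet sizes come from the scenarios stated above.

```python
# Assumed annual per-vehicle costs (USD) -- illustrative placeholders only.
FUEL_COST_PER_VEHICLE = 1200.0   # imported-fuel cost of a combustion vehicle
ELEC_COST_PER_VEHICLE = 400.0    # charging cost of an electric vehicle

# Fleet sizes taken from the four scenarios in the abstract.
SCENARIOS = {
    "BAU": 0,
    "2025": 10_000,
    "2030": 100_000,
    "2040": 750_000,
}

def annual_net_saving(n_evs):
    # Each EV avoids one vehicle's fuel imports but adds its electricity cost.
    return n_evs * (FUEL_COST_PER_VEHICLE - ELEC_COST_PER_VEHICLE)

for name, fleet in SCENARIOS.items():
    print(f"{name}: {annual_net_saving(fleet):,.0f} USD/yr")
```

The actual LEAP analysis layers demand growth, vehicle categories, and emission factors on top of this kind of accounting; the sketch only shows the direction of the fiscal effect.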

Keywords: electro mobility, energy, policy, sustainable transportation

Procedia PDF Downloads 82
804 Study into the Interactions of Primary Limbal Epithelial Stem Cells and hTCEpi Using a Tissue Engineered Cornea

Authors: Masoud Sakhinia, Sajjad Ahmad

Abstract:

Introduction: Though knowledge of the composition and structure of the limbal niche has progressed exponentially during the past decade, much is yet to be understood. Identifying the precise profile and role of the stromal makeup that spans the ocular surface may inform researchers of the optimal conditions needed to effectively expand LESCs in vitro while preserving their differentiation status and phenotype. Limbal fibroblasts, as opposed to corneal fibroblasts, are thought to form an important component of the microenvironment where LESCs reside. Methods: The corneal stroma was tissue engineered in vitro using both limbal and corneal fibroblasts embedded within a tissue-engineered 3D collagen matrix. The effect of these two fibroblast types on LESCs and the hTCEpi corneal epithelial cell line was then determined using phase contrast microscopy, histological analysis, and PCR for specific stem cell markers. The study aimed to develop an in vitro model that could be used to determine whether limbal, as opposed to corneal, fibroblasts maintain the stem cell phenotype of LESCs and the hTCEpi cell line. Results: Tissue culture analysis was inconclusive and requires further quantitative analysis before conclusions can be drawn on cell proliferation within the varying stroma. Histological analysis of the tissue-engineered cornea showed a structure comparable to that of the human cornea, though with limited epithelial stratification. PCR of epithelial cells cultured on limbal fibroblasts showed reduced expression of CK3, a negative marker for LESCs, whilst also exhibiting a relatively low expression level of p63, a marker for undifferentiated LESCs.
Conclusion: We have shown the potential for the construction of a tissue-engineered human cornea using a 3D collagen matrix and described some preliminary results from the analysis of the effects of varying stroma, consisting of limbal and corneal fibroblasts respectively, on the proliferation and stem cell phenotype of primary LESCs and hTCEpi corneal epithelial cells. Although no definitive marker exists to conclusively demonstrate the presence of LESCs, the combination of positive and negative stem cell markers in our study was inconclusive. Though less translatable to the human corneal model, the use of conditioned medium from limbal and corneal fibroblasts may provide a simpler avenue. Moreover, combinations of extracellular matrices could be used as a surrogate in these culture models.

Keywords: cornea, limbal stem cells, tissue engineering, PCR

Procedia PDF Downloads 278
803 A Discussion on Urban Planning Methods after Globalization within the Context of Anticipatory Systems

Authors: Ceylan Sozer, Ece Ceylan Baba

Abstract:

The reforms and changes that began with industrialization in cities, and continued with globalization in the 1980s, created many changes in urban environments. City centers that had been deserted due to industrialization grew crowded again with globalization and became the heart of technology, commerce, and social activity. While these immediate and intense alterations were planned around rigorous visions in developed countries, several urban areas where the processes were underestimated, and where no precautions were taken, faced irrevocable situations. When the effects of globalization on cities are examined, it can be seen that some cities have anticipatory system plans for future problems. Cities such as New York, London, and Tokyo have planned to resolve probable future problems in a systematic scheme in order to decrease possible side effects during globalization. Urban planning decisions and their applications are the main points in terms of sustainability and livability in such mega-cities. This article examines the effects of globalization on urban planning through three mega-cities and their planning applications. When the urban plans of these three mega-cities are investigated, it is seen that they are generated in light of past experiences and predictions of a certain future. In urban planning, the past and present experiences of a city should be examined, and future projections should then be made together with current world dynamics in a systematic way. In this study, methods used in urban planning are discussed, and an 'Anticipatory System' model is explained and related to global urban planning. The concept of 'anticipation' refers to creating foresights and predictions about the future by combining past, present, and future within an action plan.
The main distinctive feature that separates anticipatory systems from other systems is this combination of past, present, and future, concluding with an act. Urban plans that consist of various interacting parameters are identified as 'live' and have systemic integrity. Urban planning with an anticipatory system can be alive and can foresee 'side effects' in design processes. After globalization, cities became more complex and should be designed within an anticipatory system model. Such cities can be more livable and can sustain urban conditions for today and the future. In this study, the urban planning of Istanbul is analyzed through comparisons with the city plans of New York, Tokyo, and London in terms of anticipatory system models. The lack of such a system in Istanbul, and its side effects, are discussed. When past and present actions in urban planning are approached through an anticipatory system, more accurate and sustainable results can be achieved in the future.

Keywords: globalization, urban planning, anticipatory system, New York, London, Tokyo, Istanbul

Procedia PDF Downloads 143
802 Practical Challenges of Tunable Parameters in Matlab/Simulink Code Generation

Authors: Ebrahim Shayesteh, Nikolaos Styliaras, Alin George Raducu, Ozan Sahin, Daniel Pombo VáZquez, Jonas Funkquist, Sotirios Thanopoulos

Abstract:

One important requirement in many code generation projects is defining some of the model parameters as tunable. This makes it possible to update the model parameters without performing the code generation again. This paper studies the concept of embedded code generation by the MATLAB/Simulink coder targeting the TwinCAT Simulink system. The generated runtime modules are then tested and deployed to the TwinCAT 3 engineering environment. However, defining parameters as tunable in MATLAB/Simulink code generation targeting TwinCAT is not very straightforward. This paper focuses on this subject and reviews some of the techniques tested here to make parameters tunable in the generated runtime modules. Three techniques are proposed for this purpose: normal tunable parameters, callback functions, and mask subsystems. Moreover, some test Simulink models are developed and used to evaluate the results of the proposed approaches. A brief summary of the study results follows. First of all, parameters defined as tunable and used to define the values of other Simulink elements (e.g., the gain value of a gain block) can be changed after code generation, and this update will affect the values of all elements defined in terms of the tunable parameter. For instance, if the parameter K=1 is defined as a tunable parameter in the code generation process and is used as the gain of a gain block in Simulink, the gain value of that block is equal to 1 in the TwinCAT environment after code generation. But the value of K can be changed to a new value (e.g., K=2) in TwinCAT (without doing any new code generation in MATLAB), and the gain value of the block will then change to 2. Secondly, adding a callback function in the form of a "pre-load function," "post-load function," or "start function" will not make the parameters tunable without performing a new code generation.
This is because any such MATLAB files are run before code generation is performed, and the parameters defined or calculated in these files are used as fixed values in the generated code. Thus, adding these files as callback functions to the Simulink model will not make their parameters flexible, since the MATLAB files are not attached to the generated code. Therefore, to change the parameters defined or calculated in these files, the code generation must be done again. Adding these files as callback functions does, however, force MATLAB to run them before code generation, so there is no need to define the parameters mentioned in these files separately. Finally, using a tunable parameter to define or calculate the values of other parameters through a mask is an efficient method for changing the values of the latter parameters after code generation. For instance, if a tunable parameter K is used in calculating the values of two other parameters, K1 and K2, and, after code generation, the value of K is updated in the TwinCAT environment, the values of K1 and K2 will also be updated (without any new code generation).
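The difference between the mask approach and the callback approach can be illustrated in a language-agnostic way (here in Python): derived parameters K1 and K2 are recomputed from the tunable base parameter K at run time, whereas a callback-computed value is frozen at "generation" time. The names K, K1, and K2 mirror the example in the text; the class itself is a hypothetical analogy, not TwinCAT code.

```python
class MaskedSubsystem:
    """Analogy for a mask subsystem: K1 and K2 derive from tunable K."""

    def __init__(self, k):
        self.k = k  # tunable base parameter

    @property
    def k1(self):
        # Recomputed from K on every access, like a mask expression.
        return 2 * self.k

    @property
    def k2(self):
        return self.k + 10

# Analogy for a callback-computed value: evaluated once, then baked in.
frozen_k1 = 2 * 1

block = MaskedSubsystem(k=1)
block.k = 2  # "update K in TwinCAT" without regenerating
print(block.k1, block.k2, frozen_k1)
```

After the update, the mask-derived values follow K (k1 becomes 4, k2 becomes 12), while the frozen value keeps its generation-time result until the whole thing is rebuilt.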

Keywords: code generation, MATLAB, tunable parameters, TwinCAT

Procedia PDF Downloads 228
801 An Evaluation of the Use of Telematics for Improving the Driving Behaviours of Young People

Authors: James Boylan, Denny Meyer, Won Sun Chen

Abstract:

Background: Globally, there is an increasing trend in road traffic deaths, which reached 1.35 million in 2016 compared to 1.3 million a decade earlier; overall, road traffic injuries rank as the eighth leading cause of death across all age groups. The reported death rate for younger drivers aged 16-19 years is almost twice that reported for drivers aged 25 and above, at 3.5 road traffic fatalities per annum for every 10,000 licenses held. Telematics refers to a system with the ability to capture real-time data about vehicle usage. The data collected from telematics can be used to better assess a driver's risk; it is typically used to measure acceleration, turning, braking, and speed, as well as to provide locational information. With the creation of the National Telematics Framework, the Australian government has increased its focus on using telematics data to improve road safety outcomes. The purpose of this study is to test the hypothesis that improvements in telematics-measured driving behaviour relate to improvements in road safety attitudes as measured by the Driver Behaviour Questionnaire (DBQ). Methodology: 28 participants were recruited and given a telematics device to install in their vehicles for the duration of the study. Each participant's driving behaviour over the first month will be compared to their driving behaviour in the second month to determine whether feedback from telematics devices improves driving behaviour. Participants completed the DBQ, evaluated on a 6-point Likert scale (0 = never, 5 = nearly all the time), at the beginning of the study, after the first month, and after the second month. This is a well-established instrument used worldwide. Trends in the telematics data will be captured and correlated with changes in the DBQ using regression models in SAS.
Results: The DBQ has provided a reliable measure (alpha = .823) of driving behaviour based on a sample of 23 participants, with a mean of 50.5, a standard deviation of 11.36, and a range of 29 to 76, higher scores indicating worse driving behaviours. This initial sample is well stratified in terms of gender and age (range 19-27). It is expected that in the next six weeks a larger sample of around 40 will have completed the DBQ after experiencing in-vehicle telematics for 30 days, allowing a comparison with baseline levels. The trends in the telematics data over the first 30 days will be compared with the changes observed in the DBQ. Conclusions: A significant relationship is expected between improvements in the DBQ and trends toward reduced telematics-measured aggressive driving behaviours, supporting the hypothesis.
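The reliability figure reported above (alpha = .823) is a Cronbach's alpha over per-item questionnaire responses, and the computation is simple enough to sketch. The response matrix below is invented (four 0-5 Likert items, five respondents); only the formula, alpha = k/(k-1) * (1 - sum of item variances / variance of totals), is the standard one.

```python
def variance(xs):
    # Sample variance (n - 1 denominator).
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(rows):
    k = len(rows[0])                 # number of items
    items = list(zip(*rows))         # one column per item
    item_var = sum(variance(col) for col in items)
    total_var = variance([sum(r) for r in rows])
    return k / (k - 1) * (1 - item_var / total_var)

# Each row: one respondent's scores on four Likert items (invented data).
responses = [
    [0, 1, 0, 1],
    [2, 2, 3, 2],
    [4, 5, 4, 4],
    [1, 2, 1, 1],
    [3, 3, 4, 3],
]
print(round(cronbach_alpha(responses), 3))
```

Values around .8 or above, like the study's .823, are conventionally read as good internal consistency for a summed scale.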

Keywords: telematics, driving behaviour, young drivers, driver behaviour questionnaire

Procedia PDF Downloads 106