Search results for: circuit models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7430

1100 Epistemological and Ethical Dimensions of Current Concepts of Human Resilience in the Neurosciences

Authors: Norbert W. Paul

Abstract:

For a number of years, scientific interest in human resilience has been increasing rapidly, especially in psychology and, more recently and highly visibly, in neurobiological research. Concepts of resilience are regularly discussed in the light of liminal experiences and existential challenges in human life. Resilience research provides both explanatory models and strategies to promote or foster human resilience. Surprisingly, these approaches have so far attracted little attention in philosophy in general and in ethics in particular. This is even more astonishing given the fact that the neurosciences as such have been and still are of major interest to philosophy and ethics and even brought about the specialized field of neuroethics, which, however, has so far not been concerned with concepts of resilience. As a result of the little attention given to the topic, the concept has to date remained philosophically under-theorized. This absence of ethics and philosophy from resilience research is regrettable because resilience as a concept, as well as resilience interventions based on neurobiological findings, undoubtedly poses philosophical, social and ethical questions. In this paper, we will argue that particular notions of resilience cross the sometimes fine line between maintaining a person’s mental health despite the impact of severe psychological or physical adverse events and ethically more debatable discourses of enhancement. While we neither argue for or against enhancement nor re-interpret resilience research and interventions by subsuming them under strategies of psychological and/or neuro-enhancement, we encourage those who see social or ethical problems with enhancement technologies to also take a closer look at resilience and the related neurobiological concepts. We will proceed in three steps. In the first step, we will describe the concept of resilience in general and its neurobiological study in particular. Here, we will point out some important differences in the way ‘resilience’ is conceptualized and how neurobiological research understands resilience. In what follows, we will try to show that a one-sided concept of resilience – as it is often presented in neurobiological research on resilience – does pose social and ethical problems. Secondly, we will identify and explore the social and ethical challenges of (neurobiological) enhancement. In the last and final step of this paper, we will argue that a one-sided reading of resilience can be understood as a latent form of enhancement in transition and poses ethical questions similar to those discussed in relation to other approaches to the biomedical enhancement of humans.

Keywords: resilience, neurosciences, epistemology, bioethics

Procedia PDF Downloads 160
1099 Climate Change Adaptation: Methodologies and Tools to Define Resilience Scenarios for Existing Buildings in Mediterranean Urban Areas

Authors: Francesca Nicolosi, Teresa Cosola

Abstract:

Climate changes in Mediterranean areas, such as the increase of average seasonal temperatures, the urban heat island phenomenon, the intensification of solar radiation and extreme weather threats, cause disruptive events, so climate adaptation has become a pressing issue. Due to the strategic role that the built heritage holds in terms of environmental impact and energy waste, and due to its potential, it is necessary to assess the vulnerability and adaptive capacity of existing buildings to climate change in order to define different mitigation scenarios. The aim of this research work is to define an optimized and integrated methodology for the assessment of resilience levels and adaptation scenarios for existing buildings in Mediterranean urban areas. Moreover, the study of resilience indicators allows us to define building environmental and energy performance in order to identify design and technological solutions for improving the building and its urban area potential. The methodology identifies step-by-step phases, starting from the detailed study of the characteristic elements of the urban system: climatic, natural, human, typological and functional components are analyzed in terms of their critical factors and their potential. Through the identification of the main perturbing factors and the degree of vulnerability of the system to the risks linked to climate change, it is possible to define mitigation and adaptation scenarios. These scenarios differ according to the typological, functional and constructive features of the analyzed system; they are divided into categories of intervention and characterized by different analysis levels (from the single building to the urban area). The use of software simulations makes it possible to obtain information on the overall behavior of the building and the urban system, to generate medium- and long-term predictive models for environmental and energy retrofit, and to carry out a comparative study of the identified mitigation scenarios. The proposed methodology is validated on a case study.

Keywords: climate impact mitigation, energy efficiency, existing building heritage, resilience

Procedia PDF Downloads 240
1098 An Investigation into the Crystallization Tendency/Kinetics of Amorphous Active Pharmaceutical Ingredients: A Case Study with Dipyridamole and Cinnarizine

Authors: Shrawan Baghel, Helen Cathcart, Niall J. O'Reilly

Abstract:

Amorphous drug formulations have great potential to enhance the solubility and thus the bioavailability of BCS class II drugs. However, the higher free energy and molecular mobility of the amorphous form lower the activation energy barrier for crystallization and thermodynamically drive it towards the crystalline state, which makes such formulations unstable. Accurate determination of the crystallization tendency/kinetics is the key to the successful design and development of such systems. In this study, dipyridamole (DPM) and cinnarizine (CNZ) have been selected as model compounds. Thermodynamic fragility (m_T) is measured from the heat capacity change at the glass transition temperature (Tg), whereas dynamic fragility (m_D) is evaluated using methods based on the extrapolation of configurational entropy to zero (m_D,CE) and on the heating-rate dependence of Tg (m_D,Tg). The mean relaxation time of the amorphous drugs was calculated from the Vogel-Tammann-Fulcher (VTF) equation. Furthermore, the correlation between fragility and glass forming ability (GFA) of the model drugs has been established, and the relevance of these parameters to the crystallization of amorphous drugs is also assessed. Moreover, the crystallization kinetics of the model drugs under isothermal conditions has been studied using the Johnson-Mehl-Avrami (JMA) approach to determine the Avrami constant ‘n’, which provides an insight into the mechanism of crystallization. To probe further into the crystallization mechanism, the non-isothermal crystallization kinetics of the model systems was also analysed by statistically fitting the crystallization data to 15 different kinetic models, and the relevance of the model-free kinetic approach has been established. In addition, the crystallization mechanism for DPM and CNZ at each extent of transformation has been predicted. The calculated fragility, glass forming ability (GFA) and crystallization kinetics are found to be in good correlation with the stability prediction of amorphous solid dispersions. Thus, this research work involves a multidisciplinary approach to establish fragility, GFA and crystallization kinetics as stability predictors for amorphous drug formulations.
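
For reference, the two relations invoked above are commonly written as follows (standard textbook forms; the exact parameterization used in the study may differ). The VTF equation describes the temperature dependence of the mean relaxation time, and the JMA (Avrami) equation describes the isothermal crystallized fraction:

```latex
% VTF equation for the mean relaxation time \tau(T)
\tau(T) = \tau_0 \exp\!\left(\frac{D\,T_0}{T - T_0}\right)

% JMA (Avrami) equation for the crystallized fraction x(t) under isothermal conditions
x(t) = 1 - \exp\!\left[-(k\,t)^{\,n}\right]
```

Here $\tau_0$, $D$ and $T_0$ are the VTF fitting parameters, $k$ is the crystallization rate constant, and $n$ is the Avrami constant whose value is used to infer the nucleation and growth mechanism.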

Keywords: amorphous, fragility, glass forming ability, molecular mobility, mean relaxation time, crystallization kinetics, stability

Procedia PDF Downloads 353
1097 Research on the Ecological Impact Evaluation Index System of Transportation Construction Projects

Authors: Yu Chen, Xiaoguang Yang, Lin Lin

Abstract:

Traffic engineering construction is an important part of the infrastructure for economic and social development. In the process of construction and operation, the ability to evaluate a project's environmental impact correctly is crucial to the rational operation of existing transportation projects, the sound development of transportation engineering construction, and the adoption of corresponding measures to carry out environmental protection work scientifically. Most existing research on ecological and environmental impact assessment is limited to individual aspects of the environment rather than an overall evaluation of the environmental system; in terms of conclusions, qualitative analyses at the technical and policy levels dominate, and quantitative research results and operable quantitative evaluation models are lacking. In this paper, a comprehensive analysis of the ecological and environmental impacts of transportation construction projects is conducted, and factors such as the accessibility of data and the reliability of calculation results are considered in order to extract indicators that reflect the essence and characteristics of these impacts. The qualitative evaluation indicators were screened using the expert review method and measured using the fuzzy statistics method; the quantitative indicators were screened using principal component analysis and measured through both literature search and calculation. An environmental impact evaluation index system with a general objective layer, a sub-objective layer and an indicator layer was established, dividing the environmental impact of a transportation construction project into two periods: the construction period and the operation period. On the basis of the evaluation index system, the index weights are determined using the hierarchical analysis method (analytic hierarchy process, AHP), and the individual indicators to be evaluated are made dimensionless, eliminating the influence of the indicators' original background and meaning. Finally, the above results are applied to actual engineering practice to verify the correctness and operability of the evaluation method.
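
As a concrete illustration of the weighting step, below is a minimal sketch of how the analytic hierarchy process derives indicator weights from a pairwise comparison matrix, assuming the standard principal-eigenvector formulation; the matrix values are purely illustrative and not taken from the paper.

```python
import numpy as np

# Illustrative pairwise comparison matrix for three indicators
# (hypothetical values on Saaty's 1-9 scale).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

# The principal eigenvector of A gives the indicator weights.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
weights = w / w.sum()

# Consistency ratio (CR < 0.1 is usually considered acceptable).
n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)      # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]       # random index for n = 3
cr = ci / ri

print("weights:", np.round(weights, 3), "consistency ratio:", round(cr, 3))
```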

Keywords: transportation construction projects, ecological and environmental impact, analysis and evaluation, indicator evaluation system

Procedia PDF Downloads 105
1096 Mapping the Pain Trajectory of Breast Cancer Survivors: Results from a Retrospective Chart Review

Authors: Wilfred Elliam

Abstract:

Background: Pain is a prevalent and debilitating symptom among breast cancer patients, impacting their quality of life and overall well-being. The experience of pain in this population is multifaceted, influenced by a combination of disease-related factors, treatment side effects, and individual characteristics. Despite advancements in cancer treatment and pain management, many breast cancer patients continue to suffer from chronic pain, which can persist long after the completion of treatment. Understanding the progression of pain in breast cancer patients over time and identifying its correlates is crucial for effective pain management and supportive care strategies. The purpose of this research is to understand the patterns and progression of pain experienced by breast cancer survivors over time. Methods: Data were collected from breast cancer patients at Hartford Hospital at four time points: baseline, 3, 6 and 12 weeks. Key variables measured include pain, body mass index (BMI), fatigue, musculoskeletal pain, sleep disturbance, and demographic variables (age, employment status, cancer stage, and ethnicity). Binomial generalized linear mixed models were used to examine changes in pain and symptoms over time. Results: A total of 100 breast cancer patients aged 18 years or older were included in the analysis. The effects of time on pain (p = 0.024), musculoskeletal pain (p < 0.001), fatigue (p < 0.001), and sleep disturbance (p = 0.013) were statistically significant, indicating pain progression in breast cancer patients. Patients using aromatase inhibitors had worse fatigue (p < 0.05) and musculoskeletal pain (p < 0.001) than patients on tamoxifen. Patients who were obese (p < 0.001) or overweight (p < 0.001) were more likely to report pain than patients of normal weight. Conclusion: This study revealed the complex interplay between factors such as time, pain, and sleep disturbance in breast cancer patients. Specifically, pain, musculoskeletal pain, sleep disturbance, and fatigue exhibited significant changes across the measured time points, indicating a dynamic pain progression in these patients. The findings provide a foundation for future research and targeted interventions aimed at improving pain outcomes in breast cancer patients.
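
A binomial generalized linear mixed model of the kind described above is typically written in the following generic form (the exact covariate set and random-effect structure used in the study may differ):

```latex
\operatorname{logit}\bigl(\Pr(\text{Pain}_{it} = 1)\bigr)
  = \beta_0 + \beta_1\,\text{Time}_{t} + \beta_2\,\text{BMI}_{i}
  + \beta_3\,\text{Fatigue}_{it} + \beta_4\,\text{SleepDist}_{it} + u_i,
\qquad u_i \sim \mathcal{N}(0,\sigma_u^2)
```

where $i$ indexes patients, $t$ the four time points, and the random intercept $u_i$ accounts for repeated measures within the same patient.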

Keywords: breast cancer, chronic pain, pain management, quality of life

Procedia PDF Downloads 29
1095 Exploring Regularity Results in the Context of Extremely Degenerate Elliptic Equations

Authors: Zahid Ullah, Atlas Khan

Abstract:

This research endeavors to explore the regularity properties associated with a specific class of equations, namely extremely degenerate elliptic equations. These equations hold significance in understanding complex physical systems like porous media flow, with applications spanning various branches of mathematics. The focus is on unraveling and analyzing regularity results to gain insights into the smoothness of solutions for these highly degenerate equations. Elliptic equations, fundamental in expressing and understanding diverse physical phenomena through partial differential equations (PDEs), are particularly adept at modeling steady-state and equilibrium behaviors. However, within the realm of elliptic equations, the subset of extremely degenerate cases presents a level of complexity that challenges traditional analytical methods, necessitating a deeper exploration of mathematical theory. While elliptic equations are celebrated for their versatility in capturing smooth and continuous behaviors across different disciplines, the introduction of degeneracy adds a layer of intricacy. Extremely degenerate elliptic equations are characterized by coefficients approaching singular behavior, posing non-trivial challenges in establishing classical solutions. Still, the exploration of extremely degenerate cases remains uncharted territory, requiring a profound understanding of mathematical structures and their implications. The motivation behind this research lies in addressing gaps in the current understanding of regularity properties within solutions to extremely degenerate elliptic equations. The study of extreme degeneracy is prompted by its prevalence in real-world applications, where physical phenomena often exhibit characteristics defying conventional mathematical modeling. Whether examining porous media flow or highly anisotropic materials, comprehending the regularity of solutions becomes crucial. Through this research, the aim is to contribute not only to the theoretical foundations of mathematics but also to the practical applicability of mathematical models in diverse scientific fields.
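
As a concrete illustration of the class of equations in question, a standard divergence-form model problem (not necessarily the exact equation studied here) reads

```latex
-\operatorname{div}\bigl(a(x)\,\nabla u\bigr) = f \quad \text{in } \Omega,
\qquad a(x) \ge 0,
```

where the coefficient $a(x)$ is allowed to vanish or blow up (for instance like a weight $|x|^{\alpha}$), so that the uniform ellipticity condition $\lambda\,|\xi|^{2} \le a(x)\,|\xi|^{2} \le \Lambda\,|\xi|^{2}$ fails. Regularity of solutions is then typically studied in weighted Sobolev spaces adapted to the degeneracy of $a$.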

Keywords: elliptic equations, extremely degenerate, regularity results, partial differential equations, mathematical modeling, porous media flow

Procedia PDF Downloads 72
1094 Computational Assistance of the Research, Using Dynamic Vector Logistics of Processes for Critical Infrastructure Subjects Continuity

Authors: Urbánek Jiří J., Krahulec Josef, Urbánek Jiří F., Johanidesová Jitka

Abstract:

This paper deals with computational assistance for the research and modelling of the continuity of critical infrastructure subjects. It enables the use of the prevailing MS Office environment (SmartArt, etc.) for mathematical models built with the DYVELOP (Dynamic Vector Logistics of Processes) method. The method serves for the investigation and modelling of crisis situations within critical infrastructure organizations. The first part of the paper introduces the entities, operators and actors of the DYVELOP method. The method uses just three operators of Boolean algebra and four types of entities: the Environments, the Process Systems, the Cases and the Controlling. The Process Systems (PrS) have five “brothers”: Management PrS, Transformation PrS, Logistic PrS, Event PrS and Operation PrS. The Cases have three “sisters”: Process Cell Case, Use Case and Activity Case. All of them need special Ctrl actors to control their functions, except the Environment (ENV), which can do without Ctrl. The model's maps are called Blazons, and they express mathematically and graphically the relationships among entities, actors and processes. In the second part of this paper, the rich Blazons of the DYVELOP method are used to discover and model cycling cases and their phases. The Blazons are best comprehended with a live PowerPoint presentation, which supports the mission of this paper. The crisis management of an energy critical infrastructure organization must use these cycles to cope successfully with crisis situations. Repeated cycling of these cases is a necessary condition for encompassing both the emergency event and the mitigation of the organization's damage. An uninterrupted and continuous cycling process makes crisis management fruitful and is a good indicator and controlling actor of organizational continuity and of its advanced possibilities for sustainable development. Reliable rules are derived for the safe and reliable continuity of an energy critical infrastructure organization in a crisis situation.

Keywords: blazons, computational assistance, DYVELOP method, critical infrastructure

Procedia PDF Downloads 382
1093 Ill-Posed Inverse Problems in Molecular Imaging

Authors: Ranadhir Roy

Abstract:

Inverse problems arise in medical (molecular) imaging. These problems are characterized by their large scale in three dimensions and by the diffusion equation, which models the physical phenomena within the media. The inverse problems are posed as a nonlinear optimization in which the unknown parameters are found by minimizing the difference between the predicted data and the measured data. To obtain a unique and stable solution to an ill-posed inverse problem, a priori information must be used. Mathematical conditions for obtaining stable solutions are established in Tikhonov's regularization method, where the a priori information is introduced via a stabilizing functional, which may be designed to incorporate relevant information about the inverse problem. Effective determination of the Tikhonov regularization parameter requires knowledge of the true solution, or, in the case of optical imaging, the true image. Yet in clinically based imaging, the true image is not known. To alleviate these difficulties, we have applied the penalty/modified barrier function (PMBF) method instead of the Tikhonov regularization technique to make the inverse problems well-posed. Unlike the Tikhonov regularization method, the PMBF method easily accommodates a constrained optimization based on simple bounds on the optical properties of the tissue. Imposing these constraints on the optical properties explicitly restricts the solution set and can restore uniqueness. Like the Tikhonov regularization method, the PMBF method limits the size of the condition number of the Hessian matrix of the objective function. The accuracy and rapid convergence of the PMBF method require a good initial guess of the Lagrange multipliers. To obtain this initial guess, we use a least squares unconstrained minimization problem. Three-dimensional images of fluorescence absorption coefficients and lifetimes were reconstructed from contact and noncontact experimentally measured data.
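
In generic notation (illustrative symbols, not necessarily those of the paper), the two formulations contrasted above can be written as

```latex
% Tikhonov regularization: penalized unconstrained problem
\min_{x}\; \bigl\| F(x) - d \bigr\|^2 \;+\; \alpha\,\bigl\| L\,(x - x_0) \bigr\|^2

% Bound-constrained problem addressed by the PMBF method
\min_{x}\; \bigl\| F(x) - d \bigr\|^2
\quad \text{subject to} \quad l \le x \le u
```

where $F$ is the diffusion-equation forward model, $d$ the measured data, $\alpha$ the regularization parameter, and $l, u$ the simple bounds on the tissue optical properties. The PMBF approach handles the bound constraints by augmenting the objective with penalty/barrier terms weighted by Lagrange multipliers that are updated iteratively.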

Keywords: constrained minimization, ill-conditioned inverse problems, Tikhonov regularization method, penalty modified barrier function method

Procedia PDF Downloads 270
1092 Pathologies in the Left Atrium Reproduced Using a Low-Order Synergistic Numerical Model of the Cardiovascular System

Authors: Nicholas Pearce, Eun-jin Kim

Abstract:

Pathologies of the cardiovascular (CV) system remain a serious and deadly health problem for human society. Computational modelling provides a relatively accessible tool for diagnosis, treatment, and research into CV disorders. However, numerical models of the CV system have largely focused on the function of the ventricles, frequently overlooking the behaviour of the atria. Furthermore, in the study of the pressure-volume relationship of the heart, which is a key diagnostic of cardiac vascular pathologies, previous works often invoke the popular yet questionable time-varying elastance (TVE) method, which imposes the pressure-volume relationship instead of calculating it consistently. Despite the convenience of the TVE method, there have been various indications of its limitations and of the need to check its validity in different scenarios. A model of the combined left ventricle (LV) and left atrium (LA) is presented, which consistently considers various feedback mechanisms in the heart without having to use the TVE method. Specifically, a synergistic model of the left ventricle is extended and modified to include the function of the LA. The synergy of the original model is preserved by modelling the electro-mechanical and chemical functions of the micro-scale myofiber for the LA and integrating it with the micro-scale and macro-organ-scale heart dynamics of the left ventricle and CV circulation. The atrioventricular node function is included and forms the conduction pathway for electrical signals between the atria and ventricle. The model reproduces the essential features of LA behaviour, such as the two-phase pressure-volume relationship and the classic figure-of-eight pressure-volume loops. Using this model, disorders in the internal cardiac electrical signalling are investigated by recreating the mechano-electric feedback (MEF), which is impossible where the time-varying elastance method is used. The effects of AV node block and slow conduction are then investigated in the presence of an atrial arrhythmia. It is found that electrical disorders and arrhythmia in the LA degrade the CV system by reducing the cardiac output, power, and heart rate.
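
For context, the time-varying elastance approach that the authors avoid prescribes the chamber pressure-volume relationship directly through a given elastance waveform (standard form):

```latex
P_{\mathrm{LV}}(t) \;=\; E(t)\,\bigl[\,V_{\mathrm{LV}}(t) - V_{0}\,\bigr]
```

where $E(t)$ is a prescribed periodic elastance function and $V_{0}$ the unloaded chamber volume. In the synergistic model described above, this relationship is not imposed; it emerges from the coupled myofiber electro-mechanics, which is what makes mechano-electric feedback possible to study.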

Keywords: cardiovascular system, left atrium, numerical model, MEF

Procedia PDF Downloads 114
1091 Targeting Mre11 Nuclease Overcomes Platinum Resistance and Induces Synthetic Lethality in Platinum Sensitive XRCC1 Deficient Epithelial Ovarian Cancers

Authors: Adel Alblihy, Reem Ali, Mashael Algethami, Ahmed Shoqafi, Michael S. Toss, Juliette Brownlie, Natalie J. Tatum, Ian Hickson, Paloma Ordonez Moran, Anna Grabowska, Jennie N. Jeyapalan, Nigel P. Mongan, Emad A. Rakha, Srinivasan Madhusudan

Abstract:

Platinum resistance is a clinical challenge in ovarian cancer. Platinating agents induce DNA damage, which activates Mre11 nuclease-directed DNA damage signalling and response (DDR). Upregulation of DDR may promote chemotherapy resistance. Here we have comprehensively evaluated Mre11 in epithelial ovarian cancers. In a clinical cohort that received platinum-based chemotherapy (n=331), Mre11 protein overexpression was associated with an aggressive phenotype and poor progression-free survival (PFS) (p=0.002). In The Cancer Genome Atlas (TCGA) ovarian cancer cohort (n=498), Mre11 gene amplification was observed in a subset of serous tumours (5%), which correlated highly with Mre11 mRNA levels (p<0.0001). Altered Mre11 levels were linked with genome-wide alterations that can influence platinum sensitivity. At the transcriptomic level (n=1259), Mre11 overexpression was associated with poor PFS (p=0.003). ROC analysis showed an area under the curve (AUC) of 0.642 for response to platinum-based chemotherapy. Pre-clinically, Mre11 depletion by gene knockdown or blockade by a small-molecule inhibitor (Mirin) reversed platinum resistance in ovarian cancer cells and in 3D spheroid models. Importantly, Mre11 inhibition was synthetically lethal in platinum-sensitive, XRCC1-deficient ovarian cancer cells and 3D spheroids. Selective cytotoxicity was associated with DNA double-strand break (DSB) accumulation, S-phase cell cycle arrest and increased apoptosis. We conclude that pharmaceutical development of Mre11 inhibitors is a viable clinical strategy for platinum sensitization and synthetic lethality in ovarian cancer.

Keywords: MRE11, XRCC1, ovarian cancer, platinum sensitization, synthetic lethality

Procedia PDF Downloads 129
1090 A Review of Benefit-Risk Assessment over the Product Lifecycle

Authors: M. Miljkovic, A. Urakpo, M. Simic-Koumoutsaris

Abstract:

Benefit-risk assessment (BRA) is a valuable tool that takes place at multiple stages during a medicine's lifecycle, and this assessment can be conducted in a variety of ways. The aim was to summarize current BRA methods used during approval decisions and in post-approval settings and to identify possible future directions. Relevant reviews, recommendations, and guidelines published in the medical literature and by regulatory agencies over the past five years have been examined. BRA implies the review of two dimensions: the dimension of benefits (determined mainly by therapeutic efficacy) and the dimension of risks (which comprises the safety profile of a drug). Regulators, industry, and academia have developed various approaches, ranging from descriptive textual (qualitative) to decision-analytic (quantitative) models, to facilitate the BRA of medicines during the product lifecycle (from Phase I trials to the authorization procedure, post-marketing surveillance, and health technology assessment for inclusion in public formularies). These approaches can be classified into the following categories: stepwise structured approaches (frameworks); measures for benefits and risks that are usually endpoint specific (metrics); simulation techniques and meta-analysis (estimation techniques); and utility survey techniques to elicit stakeholders’ preferences (utilities). All these approaches share two common goals: to assist the analysis and to improve the communication of decisions, but each is subject to its own specific strengths and limitations. Before using any method, its utility, complexity, the extent to which it is established, and the ease of interpretation of results should be considered. Despite widespread and long-standing use, BRA is subject to debate, suffers from a number of limitations, and is still under development. The use of formal, systematic structured approaches to BRA for regulatory decision-making, and of quantitative methods to support BRA during the product lifecycle, is a standard practice in medicine that is subject to continuous improvement and modernization, not only in methodology but also in cooperation between organizations.

Keywords: benefit-risk assessment, benefit-risk profile, product lifecycle, quantitative methods, structured approaches

Procedia PDF Downloads 154
1089 Application of Response Surface Methodology in Optimizing Chitosan-Argan Nutshell Beads for Radioactive Wastewater Treatment

Authors: F. F. Zahra, E. G. Touria, Y. Samia, M. Ahmed, H. Hasna, B. M. Latifa

Abstract:

The presence of radioactive contaminants in wastewater poses a significant environmental and health risk, necessitating effective treatment solutions. This study investigates the optimization of chitosan-Argan nutshell beads for the removal of radioactive elements from wastewater, utilizing Response Surface Methodology (RSM) to enhance the treatment efficiency. Chitosan, known for its biocompatibility and adsorption properties, was combined with Argan nutshell powder to form composite beads. These beads were then evaluated for their capacity to remove radioactive contaminants from synthetic wastewater. The Box-Behnken design (BBD) under RSM was employed to analyze the influence of key operational parameters, including initial contaminant concentration, pH, bead dosage, and contact time, on the removal efficiency. Experimental results indicated that all tested parameters significantly affected the removal efficiency, with initial contaminant concentration and pH showing the most substantial impact. The optimized conditions, as determined by RSM, were found to be an initial contaminant concentration of 50 mg/L, a pH of 6, a bead dosage of 0.5 g/L, and a contact time of 120 minutes. Under these conditions, the removal efficiency reached up to 95%, demonstrating the potential of chitosan-Argan nutshell beads as a viable solution for radioactive wastewater treatment. Furthermore, the adsorption process was characterized by fitting the experimental data to various isotherm and kinetic models. The adsorption isotherms conformed well to the Langmuir model, indicating monolayer adsorption, while the kinetic data were best described by the pseudo-second-order model, suggesting chemisorption as the primary mechanism. This study highlights the efficacy of chitosan-Argan nutshell beads in removing radioactive contaminants from wastewater and underscores the importance of optimizing treatment parameters using RSM. The findings provide a foundation for developing cost-effective and environmentally friendly treatment technologies for radioactive wastewater.
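
For reference, the two models named above are conventionally written as follows (standard forms; the fitted parameters obtained in the study are not reproduced here). The Langmuir isotherm describes monolayer adsorption, and the pseudo-second-order model describes chemisorption-controlled kinetics:

```latex
% Langmuir isotherm (monolayer adsorption)
q_e = \frac{q_{\max}\,K_L\,C_e}{1 + K_L\,C_e}

% Pseudo-second-order kinetic model (linearized form)
\frac{t}{q_t} = \frac{1}{k_2\,q_e^{2}} + \frac{t}{q_e}
```

Here $q_e$ and $q_t$ are the amounts adsorbed at equilibrium and at time $t$, $C_e$ the equilibrium contaminant concentration, $q_{\max}$ the monolayer capacity, $K_L$ the Langmuir constant, and $k_2$ the pseudo-second-order rate constant.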

Keywords: adsorption, argan nutshell, beads, chitosan, mechanism, optimization, radioactive wastewater, response surface methodology

Procedia PDF Downloads 32
1088 Energy Efficiency Approach to Reduce Costs of Ownership of Air Jet Weaving

Authors: Corrado Grassi, Achim Schröter, Yves Gloy, Thomas Gries

Abstract:

Air jet weaving is the most productive, but also the most energy-consuming, weaving method. Increasing energy costs and environmental impact are a constant challenge for the manufacturers of weaving machines. Current technological developments are concerned with low energy costs, low environmental impact, high productivity, and constant product quality. The high energy consumption of the method can be ascribed to the large demand for compressed air. An energy efficiency method is applied to air jet weaving technology. This method identifies and classifies the main relevant energy consumers and processes from the exergy point of view, and it leads to the identification of energy efficiency potentials during the weft insertion process. Starting from the design phase, energy efficiency is considered the central requirement to be satisfied. The initial phase of the method consists of an analysis of the state of the art of the main weft insertion components in order to prioritize the most energy-demanding components and processes. The identified major components are investigated in order to reduce the high energy demand of the weft insertion process. During the interaction of the flow field coming from the relay nozzles with the profiled reed, only a minor part of the stream actually accelerates the weft yarn, resulting in large energy inefficiency. Different tools such as FEM analysis, CFD simulation models and experimental analysis are used to design a more energy-efficient version of the components involved in the filling insertion. A new concept for the metal strip of the profiled reed is developed, which allows a reduction of the machine's energy consumption. Based on a parametric and aerodynamic study, the designed reed transmits higher values of the flow power to the filling yarn. The innovative reed fulfills both the requirement of raising energy efficiency and compliance with the weaving constraints.

Keywords: air jet weaving, aerodynamic simulation, energy efficiency, experimental validation, weft insertion

Procedia PDF Downloads 197
1087 Conductivity-Depth Inversion of Large Loop Transient Electromagnetic Sounding Data over Layered Earth Models

Authors: Ravi Ande, Mousumi Hazari

Abstract:

One of the common geophysical techniques for mapping subsurface geo-electrical structures, for extensive hydro-geological research, and for engineering and environmental geophysics applications is the use of time-domain electromagnetic (TDEM)/transient electromagnetic (TEM) soundings. A large-loop TEM system consists of a large transmitter loop for energising the ground and a small receiver loop or magnetometer for recording the transient voltage or magnetic field in the air or on the surface of the earth, with the receiver at the center of the loop or at any point inside or outside the source loop. In general, one can acquire data using one of the following large-loop source configurations: with the receiver at the center point of the loop (central-loop method), at an arbitrary in-loop point (in-loop method), coincident with the transmitter loop (coincident-loop method), or at an arbitrary offset point (offset-loop method). Because of the mathematical simplicity of the EM field expressions, as compared to the in-loop and offset-loop systems, the central-loop system (for ground surveys) and the coincident-loop system (for ground as well as airborne surveys) have been developed and used extensively for the exploration of mineral and geothermal resources, for mapping groundwater contaminated by hazardous waste, and for estimating the thickness of the permafrost layer. Because a proper analytical expression for the TEM response of the large-loop system over a layered earth model does not exist, the forward problem used in this inversion scheme is first formulated in the frequency domain and then transformed into the time domain using Fourier cosine or sine transforms. The forward computation is initially carried out in the frequency domain using the EMLCLLER algorithm; accordingly, EMLCLLER modifies the forward calculation scheme in NLSTCI to compute frequency-domain responses before converting them to the time domain using Fourier cosine and/or sine transforms.
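
The frequency-to-time conversion referred to above is commonly implemented with sine/cosine transforms of the following standard form (generic notation; the signs depend on the chosen time convention):

```latex
h(t) \;=\; \frac{2}{\pi}\int_{0}^{\infty}
      \frac{\operatorname{Im}\bigl[H(\omega)\bigr]}{\omega}\,
      \cos(\omega t)\, d\omega
\;=\; \frac{2}{\pi}\int_{0}^{\infty}
      \frac{\operatorname{Re}\bigl[H(\omega)\bigr]}{\omega}\,
      \sin(\omega t)\, d\omega
```

where $H(\omega)$ is the frequency-domain response computed over the layered earth and $h(t)$ the corresponding transient (step) response; in practice the integrals are evaluated with digital filters.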

Keywords: time domain electromagnetic (TDEM), TEM system, geoelectrical sounding structure, Fourier cosine

Procedia PDF Downloads 92
1086 Application of Building Information Modeling in Energy Management of Individual Departments Occupying University Facilities

Authors: Kung-Jen Tu, Danny Vernatha

Abstract:

To assist individual departments within universities in their energy management tasks, this study explores the application of Building Information Modeling in establishing a 'BIM-based Energy Management Support System' (BIM-EMSS). The BIM-EMSS consists of six components: (1) sensors installed for each occupant and each piece of equipment, (2) electricity sub-meters (constantly logging the lighting, HVAC, and socket electricity consumption of each room), (3) BIM models of all rooms within individual departments' facilities, (4) a data warehouse (for storing occupancy status and logged electricity consumption data), (5) a building energy management system that provides energy managers with various energy management functions, and (6) an energy simulation tool (such as eQuest) that generates real-time 'standard energy consumption' data against which 'actual energy consumption' data are compared and energy efficiency is evaluated. Through the building energy management system, the energy manager is able to (a) have a 3D visualization (BIM model) of each room, in which the occupancy and equipment status detected by the sensors and the logged electricity consumption data are displayed constantly; (b) perform real-time energy consumption analysis to compare the actual and standard energy consumption profiles of a space; (c) obtain energy consumption anomaly detection warnings for certain rooms so that corrective energy management actions can be taken (a data mining technique is employed to analyze the relation between the space occupancy pattern and the current equipment settings to indicate an anomaly, such as when appliances are turned on without occupancy); and (d) perform historical energy consumption analysis to review monthly and annual energy consumption profiles and compare them against historical energy profiles. The BIM-EMSS was further implemented in a research lab in the Department of Architecture of NTUST in Taiwan, and the implementation results are presented to illustrate how it can be used to assist individual departments within universities in their energy management tasks.
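
A minimal sketch of the kind of anomaly check described in point (c), assuming hourly records of room occupancy, sub-metered consumption, and the simulated 'standard' consumption are available; the record fields, threshold values, and room names are illustrative, not part of the system described above.

```python
from dataclasses import dataclass

@dataclass
class HourlyRecord:
    room_id: str
    occupied: bool        # from the occupancy sensors
    actual_kwh: float     # from the electricity sub-meters
    standard_kwh: float   # from the energy simulation tool (e.g., eQuest)

def detect_anomalies(records, tolerance=1.25, idle_kwh=0.1):
    """Flag hours where a room consumes energy without occupancy,
    or consumes well above the simulated standard profile."""
    warnings = []
    for r in records:
        if not r.occupied and r.actual_kwh > idle_kwh:
            warnings.append((r.room_id, "consumption while unoccupied"))
        elif r.actual_kwh > tolerance * r.standard_kwh:
            warnings.append((r.room_id, "consumption above standard profile"))
    return warnings

# Example usage with two illustrative records
records = [
    HourlyRecord("Lab-301", occupied=False, actual_kwh=0.8, standard_kwh=0.1),
    HourlyRecord("Lab-302", occupied=True, actual_kwh=1.2, standard_kwh=1.1),
]
print(detect_anomalies(records))
```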

Keywords: database, electricity sub-meters, energy anomaly detection, sensor

Procedia PDF Downloads 307
1085 PLO-AIM: Potential-Based Lane Organization in Autonomous Intersection Management

Authors: Berk Ecer, Ebru Akcapinar Sezer

Abstract:

Traditional intersection management models, such as unsignalized or signalized intersections, are not the most effective way of passing an intersection if the vehicles are intelligent. To this end, Dresner and Stone proposed a new intersection control model called Autonomous Intersection Management (AIM). In their AIM simulation, they examined the problem from a multi-agent perspective, demonstrating that intelligent intersection control can be made more efficient than existing control mechanisms. In this study, autonomous intersection management is investigated. We extend their work by adding a potential-based lane organization layer. In order to distribute vehicles evenly across lanes, this layer triggers vehicles to analyze nearby lanes and change lane if another lane offers an advantage. We can observe this behavior in real life, where drivers change lanes based on intuition. The basic intuition for selecting the correct lane is to choose a less crowded lane in order to reduce delay. We model this behavior without any change in the AIM workflow. Experimental results show that intersection performance is directly connected to the distribution of vehicles across the lanes of the roads entering the intersection. We see the advantage of handling lane management with a potential-based approach in performance metrics such as average intersection delay and average travel time. Therefore, lane management and intersection management are problems that need to be handled together. This study shows that the lane through which vehicles enter the intersection is an effective parameter for intersection management; our study draws attention to this parameter and suggests a solution for it. We observed that regulating the inputs to AIM, namely the vehicles in each lane, contributes effectively to intersection management. The PLO-AIM model outperforms AIM in evaluation metrics such as average intersection delay and average travel time for reasonable traffic rates, between 600 and 1,300 vehicles/hour per lane. The proposed model reduced the average travel time by 0.2%-17.3% and the average intersection delay by 1.6%-17.1% in the 4-lane and 6-lane scenarios.
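
A minimal sketch of a potential-based lane choice of the kind described above, assuming each approaching vehicle can observe the queue length in each lane of its road; the potential function, weights, and switching margin are illustrative and not taken from the paper.

```python
def lane_potential(queue_length, distance_to_intersection, w_queue=1.0, w_dist=0.1):
    # Lower potential = more attractive lane; crowded lanes are penalized.
    return w_queue * queue_length + w_dist * distance_to_intersection

def choose_lane(current_lane, queues, distance, switch_margin=1.0):
    """Return the lane index the vehicle should use.

    queues: queue length per lane of the approach road.
    The vehicle only switches when an adjacent lane's potential is lower
    than its current lane's by at least switch_margin, which keeps the
    AIM workflow unchanged and avoids oscillating lane changes.
    """
    best_lane = current_lane
    best_p = lane_potential(queues[current_lane], distance)
    for lane in (current_lane - 1, current_lane + 1):   # adjacent lanes only
        if 0 <= lane < len(queues):
            p = lane_potential(queues[lane], distance)
            if p < best_p - switch_margin:
                best_lane, best_p = lane, p
    return best_lane

# Example: vehicle in lane 0 of a 3-lane approach, 50 m from the intersection
print(choose_lane(current_lane=0, queues=[8, 3, 5], distance=50.0))  # -> 1
```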

Keywords: AIM project, autonomous intersection management, lane organization, potential-based approach

Procedia PDF Downloads 139
1084 Modelling Volatility Spillovers and Cross Hedging among Major Agricultural Commodity Futures

Authors: Roengchai Tansuchat, Woraphon Yamaka, Paravee Maneejuk

Abstract:

In recent years, the global financial crisis, economic instability, and large fluctuations in agricultural commodity prices have led to increased concern about volatility transmission among commodities. The problem is further exacerbated when a commodity's volatility is driven by fluctuations in other commodity prices, so that decisions on hedging strategy become both costly and ineffective. This paper therefore analyzes the volatility spillover effect among major agricultural commodities, including corn, soybeans, wheat and rice, to help commodity suppliers hedge their portfolios and manage their risk and co-volatility. We provide a switching-regime approach to analyzing volatility spillovers under different economic conditions, namely economic upturns and downturns. In particular, we investigate the relationships and volatility transmission between these commodities under different economic conditions. We propose a copula-based multivariate Markov-switching GARCH model with two regimes that depend on economic conditions and perform a simulation study to check the accuracy of the proposed model. In this study, the correlation term in the cross-hedge ratio is obtained from six copula families – two elliptical copulas (Gaussian and Student-t) and four Archimedean copulas (Clayton, Gumbel, Frank, and Joe). We use one-step maximum likelihood estimation techniques to estimate our models and compare the performance of these copulas using the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). In the application to agricultural commodities, the weekly data cover 4 January 2005 to 1 September 2016, yielding 612 observations. The empirical results indicate that the volatility spillover effects among cereal futures differ in response to different economic conditions. In addition, the hedge effectiveness results suggest optimal cross-hedge strategies under different economic conditions, especially economic upturns and downturns.
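
For reference, the minimum-variance cross-hedge ratio whose correlation term the copulas supply is conventionally written as (a standard formulation, not the paper's exact notation):

```latex
h_t^{*} \;=\; \rho_{sf,t}\,\frac{\sigma_{s,t}}{\sigma_{f,t}},
\qquad
\mathrm{HE} \;=\; 1 - \frac{\operatorname{Var}(r_{s,t} - h_t^{*}\, r_{f,t})}{\operatorname{Var}(r_{s,t})}
```

where $\sigma_{s,t}$ and $\sigma_{f,t}$ are the conditional (regime-dependent GARCH) volatilities of the spot and futures returns, $\rho_{sf,t}$ is the copula-implied correlation between them, and HE is the hedge effectiveness used to compare strategies across regimes.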

Keywords: agricultural commodity futures, cereal, cross-hedge, spillover effect, switching regime approach

Procedia PDF Downloads 202
1083 Using a Train-the-Trainer Model to Deliver Post-Partum Haemorrhage Simulation in Rural Uganda

Authors: Michael Campbell, Malaz Elsaddig, Kevin Jones

Abstract:

Background: Despite encouraging progress, global maternal mortality has remained stubbornly high since the declaration of the Millennium development goals. Sub-Saharan Africa accounts for well over half of maternal deaths with Post-Partum Haemorrhage (PPH) being the lead cause. ‘In house’ simulation training delivered by local doctors may be a sustainable approach for improving emergency obstetric care. The aim of this study was to evaluate the use of a Train-the-Trainer (TtT) model in a rural Ugandan hospital to ascertain whether it can feasibly improve practitioners’ management of PPH. Methods: Three Ugandan doctors underwent a training course to enable them to design and deliver simulation training. These doctors used MamaNatalie® models to simulate PPH scenarios for midwives, nurses and medical students. The main outcome was improvement in participants’ knowledge and confidence, assessed using self-reported scores on a 10-point scale. Results: The TtT model produced significant improvements in the confidence and knowledge scores of the ten participants. The mean confidence score rose significantly (p=0.0005) from 6.4 to 8.6 following the simulation training. There was also a significant increase in the mean knowledge score from 7.2 to 9.0 (p=0.04). Medical students demonstrated the greatest overall increase in confidence scores whilst increases in knowledge scores were largest amongst nurses. Conclusions: This study demonstrates that a TtT model can be used in a low resource setting to improve healthcare professionals’ confidence and knowledge in managing obstetric emergencies. This Train-the-Trainer model represents a sustainable approach to addressing skill deficits in low resource settings. We believe that its expansion across healthcare institutions in Sub-Saharan Africa will help to reduce the region’s high maternal mortality rate and step closer to achieving the ambitions of the Millennium development goals.

Keywords: low resource setting, post-partum haemorrhage, simulation training, train the trainer

Procedia PDF Downloads 177
1082 Sparse Representation Based Spatiotemporal Fusion Employing Additional Image Pairs to Improve Dictionary Training

Authors: Dacheng Li, Bo Huang, Qinjin Han, Ming Li

Abstract:

Remotely sensed imagery with both high spatial and high temporal resolution, which is hard to acquire with the current land observation satellites, has been considered a key requirement for monitoring environmental changes at both global and local scales. On the basis of the limited high spatial-resolution observations, studies on so-called spatiotemporal fusion have been developed to generate images with high spatiotemporal resolution by employing auxiliary low spatial-resolution data with high-frequency observations. However, the majority of spatiotemporal fusion approaches suffer from unsatisfactory assumptions, empirical but unstable parameters, low accuracy, or inefficient performance. Although spatiotemporal fusion via sparse representation theory has advantages in capturing reflectance changes, stability and execution efficiency (and is even more efficient when overcomplete dictionaries have been pre-trained), the retrieval of a high-accuracy dictionary and its effect on fusion results are still open issues. In this paper, we introduce additional image pairs (here each image pair consists of a Landsat Operational Land Imager and a Moderate Resolution Imaging Spectroradiometer acquisition covering part of Baotou, China) only into the coupled dictionary training process based on the K-SVD (K-means Singular Value Decomposition) algorithm, and attempt to improve the fusion results of two existing sparse-representation-based fusion models (utilizing one and two available image pairs, respectively). The results show that more eligible image pairs are likely to yield a more accurate overcomplete dictionary, which generally indicates a better image representation and in turn contributes to effective fusion performance, provided that the added image pair has seasonal aspects and spatial structure features similar to those of the original image pair. It is therefore reasonable to construct a multi-dictionary training pattern for generating a series of high spatial-resolution images based on limited acquisitions.
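
For reference, the dictionary training step addressed by K-SVD minimizes an objective of the following standard form (generic notation, not the exact coupled formulation of the fusion models above):

```latex
\min_{D,\,A}\; \bigl\| X - D A \bigr\|_F^{2}
\quad \text{subject to} \quad
\| \alpha_i \|_0 \le T_0 \;\; \text{for every column } \alpha_i \text{ of } A
```

where $X$ collects the training patches extracted from the image pairs, $D$ is the overcomplete dictionary, $A$ the sparse coefficient matrix, and $T_0$ the sparsity level. K-SVD alternates a sparse coding stage (e.g., orthogonal matching pursuit) with rank-one updates of the dictionary atoms, which is why adding eligible image pairs enlarges and potentially improves the training set.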

Keywords: spatiotemporal fusion, sparse representation, K-SVD algorithm, dictionary learning

Procedia PDF Downloads 260
1081 Dao Din Student Activists: From Hope to Victims under the Thai Society of Darkness

Authors: Siwach Sripokangkul, Autthapon Muangming

Abstract:

The Dao Din group is a gathering of students from the Faculty of Law, Khon Kaen University, a leading university in the northeast of Thailand. The Dao Din group has been one of the most prominent student movements of the past four decades, since the bloody massacre of the 6th of October 1976. The group is a student movement that has gathered since 2009 to oppose and protest against different capitalist-run projects that have impacted the environment. The students became heroes in Thai society and received support from various groups, especially the middle class, who regarded the students as role models for the youth. Subsequently, the Dao Din group received numerous awards between 2011-2013. However, the Dao Din group opposed the military coup d’état of 2014 and the subsequent military junta. Under the military dictatorship regime (2014-present), security officials have hunted, insulted, arrested, and jailed members of the group many times, amidst silence from most of the middle class. Therefore, this article asks why the Dao Din group, once the hero and hope of Thai society, has become a political victim in only a few years. The study methods used are the analysis of documentaries, news articles, and interviews with representatives of the Dao Din group. The author argues that Thailand’s middle class previously demonstrated a positive perception of the Dao Din group precisely because the group had earlier opposed policies of the elected Yingluck Shinawatra government, which most of the middle class already despised. However, once the Dao Din group began to protest against the anti-Yingluck military government, the middle class turned to harshly criticizing the Dao Din group. It can thus be concluded that the Thai middle class tends to put its partisan interests ahead of a civil society group that has been critical of elected as well as military administrations. This has led the middle class to support the demolition of Thai democracy. Such a characteristic of the Thai middle class not only provides a strong bulwark for the perpetuation of military rule but also destroys a civil society group (composed of young people) who should be the future hope of the nation, rather than victims of the Thai society of darkness.

Keywords: Dao Din student activists, the military coup d’état of 2014, Thai politics, human rights violations

Procedia PDF Downloads 224
1080 Risk Management in Islamic Micro Finance Credit System for Poverty Alleviation from Qualitative Perspective

Authors: Liyu Adhi Kasari Sulung

Abstract:

Poverty has been a major problem in Indonesia. Islamic microfinance (IMF) institutions named Baitul Maal Wat Tamwil (BMT) play a prominent role in eradicating it. Indonesia, as the largest Muslim country, has many successfully applied products, such as the widely adopted group-based lending approach, flexible financing for farmers, and gold pawning. The problems related to these models concern operational risk management and the internal control system (ICS). A proper ICS will help an organization prevent the occurrence of bad financing by detecting errors and irregularities in its operations. This study aims to identify a proper risk management scheme for the credit system in BMT and to rank the internal control system for every stage. Risk management variables were obtained in the first in-depth interviews (IDI) and focus group discussions (FGD) with Shariah supervisory boards, boards of directors, and operational managers. A survey was conducted covering nationwide data from West Java, South Sulawesi, and West Nusa Tenggara. Moreover, content analysis is employed to build the relationships among these variables. The findings show that risk management in Indonesia involves ex-ante, credit-process, and ex-post strategies to deal with risk in the credit system. Ex-ante control consists of Shariah compliance, surveys, group leader references, and Islamic forming orientation. The credit process involves savings, collateral, joint liability, loan repayment, and credit installment controls. Finally, ex-post control includes Shariah evaluation, credit evaluation, grace periods and low installment provisions. In addition, the internal control ranking sorts the three stages by priority: the credit process first, ex-post control second, and ex-ante control last.

Keywords: internal control system, islamic micro finance, poverty, risk management

Procedia PDF Downloads 407
1079 Healthcare-SignNet: Advanced Video Classification for Medical Sign Language Recognition Using CNN and RNN Models

Authors: Chithra A. V., Somoshree Datta, Sandeep Nithyanandan

Abstract:

Sign Language Recognition (SLR) is the process of interpreting and translating sign language into spoken or written language using technological systems. It involves recognizing the hand gestures, facial expressions, and body movements that make up sign language communication. The primary goal of SLR is to facilitate communication between hearing- and speech-impaired communities and those who do not understand sign language. Due to increased awareness and greater recognition of the rights and needs of the hearing- and speech-impaired community, sign language recognition has gained significant importance over the past 10 years. Technological advancements in the fields of Artificial Intelligence and Machine Learning have made it more practical and feasible to create accurate SLR systems. This paper presents a distinct approach to SLR by framing it as a video classification problem using Deep Learning (DL), whereby a combination of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) has been used. This research targets the integration of sign language recognition into healthcare settings, aiming to improve communication between medical professionals and patients with hearing impairments. The spatial features of each video frame are extracted using a CNN, which captures essential elements such as hand shapes, movements, and facial expressions. These features are then fed into an RNN that learns the temporal dependencies and patterns inherent in sign language sequences. The INCLUDE dataset has been enhanced with more videos from the healthcare domain, and the model is evaluated on the same. Our model achieves 91% accuracy, representing state-of-the-art performance in this domain. The results highlight the effectiveness of treating SLR as a video classification task with the CNN-RNN architecture. This approach not only improves recognition accuracy but also offers a scalable solution for real-time SLR applications, significantly advancing the field of accessible communication technologies.
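
A minimal sketch of the CNN-RNN video classification architecture described above, written in PyTorch; the layer sizes, frame count, and input resolution are illustrative and do not correspond to the paper's actual network.

```python
import torch
import torch.nn as nn

class CNNRNNSignClassifier(nn.Module):
    """Per-frame CNN features fed to an LSTM for sequence-level sign classification."""
    def __init__(self, num_classes, feat_dim=128, hidden_dim=256):
        super().__init__()
        # Small per-frame CNN (a stand-in for any image backbone).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, feat_dim),
        )
        # LSTM models the temporal dependencies across frames.
        self.rnn = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, video):               # video: (batch, frames, 3, H, W)
        b, t, c, h, w = video.shape
        feats = self.cnn(video.view(b * t, c, h, w)).view(b, t, -1)
        _, (h_n, _) = self.rnn(feats)       # h_n: (num_layers, batch, hidden_dim)
        return self.head(h_n[-1])           # logits over the sign classes

# Example: batch of 2 clips, 16 frames of 112x112 RGB, 50 sign classes
model = CNNRNNSignClassifier(num_classes=50)
logits = model(torch.randn(2, 16, 3, 112, 112))
print(logits.shape)  # torch.Size([2, 50])
```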

Keywords: sign language recognition, deep learning, convolutional neural network, recurrent neural network

Procedia PDF Downloads 26
1078 Building Education Leader Capacity through an Integrated Information and Communication Technology Leadership Model and Tool

Authors: Sousan Arafeh

Abstract:

Educational systems and schools worldwide are increasingly reliant on information and communication technology (ICT). Unfortunately, most educational leadership development programs do not offer formal curricular and/or field experiences that prepare students for managing ICT resources, personnel, and processes. The result is a steep learning curve for the leader and his/her staff and dissipated organizational energy that compromises desired outcomes. To address this gap in education leaders’ development, Arafeh’s Integrated Technology Leadership Model (AITLM) was created. It is a conceptual model and tool that educational leadership students can use to better understand the ICT ecology that exists within their schools. The AITL Model consists of six 'infrastructure types' where ICT activity takes place: technical infrastructure, communications infrastructure, core business infrastructure, context infrastructure, resources infrastructure, and human infrastructure. These six infrastructures are further divided into 16 key areas that need management attention. The AITL Model was created by critically analyzing existing technology/ICT leadership models and working to make something more authentic and comprehensive regarding school leaders’ purview and experience. The AITL Model then served as a tool when it was distributed to over 150 educational leadership students who were asked to review it and qualitatively share their reactions. Students said the model presented crucial areas of consideration that they had not been exposed to before and that the exercise of reviewing and discussing the AITL Model as a group was useful for identifying areas of growth that they could pursue in the leadership development program and in their professional settings. While development in all infrastructures and key areas was important for students’ understanding of ICT, they noted that they were least aware of the importance of the intangible area of the resources infrastructure. The AITL Model will be presented and session participants will have an opportunity to review and reflect on its impact and utility. Ultimately, the AITL Model is one that could have significant policy and practice implications. At the very least, it might help shape ICT content in educational leadership development programs through curricular and pedagogical updates.

Keywords: education leadership, information and communications technology, ICT, leadership capacity building, leadership development

Procedia PDF Downloads 116
1077 Modelling Dengue Disease With Climate Variables Using Geospatial Data For Mekong River Delta Region of Vietnam

Authors: Thi Thanh Nga Pham, Damien Philippon, Alexis Drogoul, Thi Thu Thuy Nguyen, Tien Cong Nguyen

Abstract:

The Mekong River Delta region of Vietnam is recognized as one of the regions most vulnerable to climate change, due to flooding and sea-level rise, and therefore faces an increased burden of climate-change-related diseases. Changes in temperature and precipitation are likely to alter the incidence and distribution of vector-borne diseases such as dengue fever. In this region, the peak of the dengue epidemic period is around July to September, during the rainy season. Climate is believed to be an important factor in dengue transmission. This study aims to enhance the capacity for dengue prediction by relating dengue incidence to climate and environmental variables for the Mekong River Delta of Vietnam during 2005-2015. Mathematical models of vector-host infectious disease, including larva, mosquito, and human compartments, were used to calculate the impacts of climate on dengue transmission, incorporating geospatial data as model input. Monthly dengue incidence data were collected at the provincial level. Precipitation data were extracted from satellite observations of GSMaP (Global Satellite Mapping of Precipitation), while land surface temperature and land cover data were taken from MODIS. The seasonal reproduction number was estimated to evaluate the potential, severity and persistence of dengue infection, while the final infected number was derived to check for dengue outbreaks. The results show that dengue infection depends on the seasonal variation of climate variables, with a peak during the rainy season, and the predicted dengue incidence follows this dynamic well for the whole studied region. However, the largest outbreak, in 2007, was not captured by the model, reflecting nonlinear dependences of transmission on climate. Other possible effects are discussed to address the limitations of the model. This suggests the need to consider both climate variables and other sources of variability across temporal and spatial scales.
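
As a point of reference, the seasonal reproduction number for a vector-host model of this kind is often computed from a Ross-Macdonald-type expression (a standard form; the study's exact parameterization may differ):

```latex
R_0(t) \;=\; \frac{m(t)\, a^{2}\, b\, c\, e^{-\mu_v \tau}}{r\,\mu_v}
```

where $m$ is the mosquito-to-human ratio, $a$ the biting rate, $b$ and $c$ the transmission probabilities per bite, $\mu_v$ the mosquito mortality rate, $\tau$ the extrinsic incubation period, and $r$ the human recovery rate. Climate enters through the temperature and rainfall dependence of parameters such as $m$, $a$, $\mu_v$ and $\tau$, which is what produces the rainy-season peak described above.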

Keywords: infectious disease, dengue, geospatial data, climate

Procedia PDF Downloads 383
1076 Heterogeneous-Resolution and Multi-Source Terrain Builder for CesiumJS WebGL Virtual Globe

Authors: Umberto Di Staso, Marco Soave, Alessio Giori, Federico Prandi, Raffaele De Amicis

Abstract:

The increasing availability of information about Earth surface elevation (Digital Elevation Models, DEM) generated from different sources (remote sensing, aerial images, LiDAR) raises the question of how to integrate this huge amount of data and make it available to the widest possible audience. The quality of data management plays a fundamental role in exploiting the potential of 3D elevation representation. Due to high acquisition costs and the huge amount of generated data, high-resolution terrain surveys tend to be small or medium sized and available only for limited portions of the Earth. Hence the need to merge the large-scale height maps that are typically made available for free at a worldwide level with very specific high-resolution datasets. On the other hand, the third dimension improves the user experience and the quality of data representation, unlocking new possibilities in data analysis for civil protection, real estate, urban planning, environment monitoring, etc. Open-source 3D virtual globes, which are a trending topic in geovisual analytics, aim at improving the visualization of geographical data provided by standard web services or in proprietary formats. Typically, however, 3D virtual globes do not offer an open-source tool that allows the generation of a terrain elevation data structure starting from heterogeneous-resolution terrain datasets. This paper describes a technological solution aimed at setting up a so-called “Terrain Builder”. This tool is able to merge heterogeneous-resolution datasets and to provide multi-resolution worldwide terrain services fully compatible with CesiumJS and therefore accessible via the web using a traditional browser without any additional plug-in.
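As a rough illustration of the merging step, the sketch below combines a coarse worldwide height-map with a local high-resolution DEM on a single tile grid. The function names, tile size, and nodata handling are assumptions for illustration, not the Terrain Builder implementation.

```python
# Minimal sketch of merging a coarse worldwide height-map with a local
# high-resolution DEM into one heightmap tile. Names, tile size and nodata
# handling are illustrative assumptions, not the paper's implementation.
import numpy as np
from scipy.ndimage import zoom

def build_tile(coarse_dem, fine_dem, fine_nodata=-9999.0, tile_size=65):
    """Return a tile_size x tile_size elevation grid for one terrain tile.

    coarse_dem : low-resolution global elevations covering the tile extent
    fine_dem   : high-resolution elevations for the same extent; cells the
                 survey does not cover carry the nodata value
    """
    # Resample both sources onto the target tile grid (bilinear interpolation).
    # A real pipeline would mask nodata cells before resampling.
    coarse = zoom(coarse_dem, tile_size / coarse_dem.shape[0], order=1)[:tile_size, :tile_size]
    fine = zoom(fine_dem, tile_size / fine_dem.shape[0], order=1)[:tile_size, :tile_size]

    # Start from the coarse data and overwrite wherever the fine survey is valid,
    # so the highest available resolution wins cell by cell.
    tile = coarse.copy()
    valid = fine > fine_nodata + 1.0
    tile[valid] = fine[valid]
    return tile

# Toy example: a 10x10 global grid and a 40x40 survey covering half the tile.
coarse = np.random.uniform(0, 500, (10, 10))
fine = np.full((40, 40), -9999.0)
fine[:, :20] = np.random.uniform(100, 120, (40, 20))
print(build_tile(coarse, fine).shape)  # (65, 65)
```

Tiles produced this way can then be organised into a quadtree of zoom levels and served as a tiled terrain service that a WebGL globe such as CesiumJS can consume.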

Keywords: Terrain Builder, WebGL, Virtual Globe, CesiumJS, Tiled Map Service, TMS, Height-Map, Regular Grid, Geovisual Analytics, DTM

Procedia PDF Downloads 425
1075 Associations between Sharing Bike Usage and Characteristics of Urban Street Built Environment in Wuhan, China

Authors: Miao Li, Mengyuan Xu

Abstract:

As a low-carbon travel mode, bicycling has drawn increasing political interest in the contemporary Chinese urban context, and public sharing bikes have become the most popular form of bike usage in China. This research aims to explore the spatial-temporal relationship between sharing bike usage and different characteristics of the urban street built environment. Street segments, delimited by street intersections, were used as the analytic unit of the street built environment. The sharing bike usage data comprise 2.64 million samples, the entire sharing bike distribution data recorded over two days in 2018 within a 185.4-hectare neighborhood in the city of Wuhan, China. These data were assigned to the 97 urban street segments in this area based on their geographic location. The built environment variables used in this research fall into three categories: 1) street design characteristics, such as street width, street greenery, and types of bicycle lanes; 2) conditions of other public transportation, such as the availability of metro stations; 3) street function characteristics, described by the categories and density of points of interest (POI) along the segments. Spatial Lag Models (SLM) were used to reveal the relationships between specific street built environment characteristics and the level of sharing bike usage, both for the whole day and for different periods of the day. The results show: 1) there is spatial autocorrelation in sharing bike usage across urban streets in the case area, overall, on non-working days, on working days, and in each period of the day, presenting a clustering pattern in street space; 2) there is a statistically strong association between bike sharing usage and several built environment characteristics, such as POI density, types of bicycle lanes, and street width; 3) the way bike sharing usage is influenced by built environment characteristics depends on the period of the day. These findings could help policymakers and urban designers better understand the factors affecting bike sharing systems and thus propose guidance and strategies for urban street planning and design that promote the use of sharing bikes.
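The sketch below shows how a spatial lag model of segment-level bike usage on built-environment variables can be fitted with PySAL's spreg. The variable names and synthetic data are placeholders, not the study's dataset.

```python
# Minimal sketch of a spatial lag model (SLM) fit for segment-level bike
# usage against built-environment covariates using PySAL. The covariate
# names and the synthetic data are placeholders, not the study's data.
import numpy as np
from libpysal.weights import KNN
from spreg import ML_Lag

rng = np.random.default_rng(0)
n = 97                                   # number of street segments in the case area
coords = rng.uniform(0, 1000, (n, 2))    # segment centroid coordinates (metres)

# Explanatory variables: street width, POI density, dedicated bike lane (0/1).
X = np.column_stack([
    rng.uniform(10, 40, n),
    rng.poisson(15, n),
    rng.integers(0, 2, n),
])
# Synthetic usage counts per segment, just to make the example runnable.
y = (0.5 * X[:, 1] + 5 * X[:, 2] + rng.normal(0, 3, n)).reshape(-1, 1)

# Row-standardised k-nearest-neighbour spatial weights between segments.
w = KNN.from_array(coords, k=4)
w.transform = "r"

model = ML_Lag(y, X, w,
               name_y="bike_usage",
               name_x=["street_width", "poi_density", "bike_lane"])
print(model.rho)     # spatial autoregressive coefficient (clustering strength)
print(model.betas)   # intercept, covariate coefficients, and rho
print(model.summary)
```

A significant rho would correspond to the clustering pattern reported in finding 1), while the covariate coefficients correspond to the built-environment associations in finding 2).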

Keywords: big data, sharing bike usage, spatial statistics, urban street built environment

Procedia PDF Downloads 145
1074 Adaptor Protein APPL2 Could Be a Therapeutic Target for Improving Hippocampal Neurogenesis and Attenuating Depressant Behaviors and Olfactory Dysfunctions in Chronic Corticosterone-induced Depression

Authors: Jiangang Shen

Abstract:

Olfactory dysfunction is a common symptom accompanied by anxiety- and depressive-like behaviors in depressive patients. Chronic stress triggers hormone responses and inhibits the proliferation and differentiation of neural stem cells (NSCs) in the hippocampus and the subventricular zone (SVZ)-olfactory bulb (OB), contributing to depressive behaviors and olfactory dysfunction. However, the cellular signaling molecules that regulate chronic stress-mediated olfactory dysfunction are largely unclear. Adaptor proteins containing the pleckstrin homology domain, phosphotyrosine binding domain, and leucine zipper motif (APPLs) are multifunctional adaptor proteins. Herein, we tested the hypothesis that APPL2 could inhibit hippocampal neurogenesis by affecting glucocorticoid receptor (GR) signaling, subsequently contributing to depressive and anxiety behaviors as well as olfactory dysfunctions. The major findings are: (1) APPL2 Tg mice had enhanced GR phosphorylation under basal conditions but showed no difference in plasma corticosterone (CORT) levels or GR phosphorylation under stress stimulation. (2) APPL2 Tg mice had impaired hippocampal neurogenesis and displayed depressive and anxiety behaviors. (3) The GR antagonist RU486 reversed the impaired hippocampal neurogenesis in the APPL2 Tg mice. (4) APPL2 Tg mice displayed higher GR activity and less capacity for neurogenesis in the olfactory system, with lower olfactory sensitivity than WT mice. (5) APPL2 negatively regulates olfactory functions by switching the fate commitments of NSCs in adult olfactory bulbs via interaction with Notch1 signaling. Furthermore, baicalin, a natural medicinal compound, was found to be a promising agent targeting APPL2/GR signaling and promoting adult neurogenesis in APPL2 Tg mice and chronic corticosterone-induced depression mouse models. Behavioral tests revealed that baicalin had antidepressant and olfactory-improving effects. Taken together, APPL2 is a critical therapeutic target for antidepressant treatment.

Keywords: APPL2, hippocampal neurogenesis, depressive behaviors and olfactory dysfunction, stress

Procedia PDF Downloads 76
1073 Computer Modeling and Plant-Wide Dynamic Simulation for Industrial Flare Minimization

Authors: Sujing Wang, Song Wang, Jian Zhang, Qiang Xu

Abstract:

Flaring emissions during abnormal operating conditions such as plant start-ups, shut-downs, and upsets in the chemical process industries (CPI) are usually significant. Flare minimization can help CPI plants save raw material and energy and improve local environmental sustainability. In this paper, a systematic methodology based on plant-wide dynamic simulation is presented for CPI plant flare minimization under abnormal operating conditions. Since off-specification emission sources are inevitable during abnormal operating conditions, to significantly reduce flaring emissions in a CPI plant they must be either recycled to the upstream process for online reuse or stored temporarily for future reprocessing once the plant returns to stable operation. Thus, the off-spec products can be reused instead of being flared. This can be achieved through the identification of viable design and operational strategies during normal and abnormal operations via plant-wide dynamic scheduling, simulation, and optimization. The proposed study includes three stages of simulation work: (i) developing and validating a steady-state model of a CPI plant; (ii) transitioning the obtained steady-state plant model to the dynamic modeling environment, then refining and validating the plant dynamic model; and (iii) developing flare minimization strategies for abnormal operating conditions of a CPI plant via the validated plant-wide dynamic model. This cost-effective methodology has two main merits: (i) it employs large-scale dynamic modeling and simulation for industrial flare minimization, involving various unit models to represent hundreds of CPI plant facilities; (ii) it deals with critical abnormal operating conditions of CPI plants such as plant start-up and shut-down. Two virtual case studies on flare minimization for the start-up operation (over 50% emission savings) and the shut-down operation (over 70% emission savings) of an ethylene plant have been employed to demonstrate the efficacy of the proposed methodology.
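The toy mass balance below illustrates the recycle-versus-flare idea during a start-up transient. The off-spec production profile, recovery fraction, and time horizon are illustrative assumptions, not values from the plant-wide dynamic model.

```python
# Toy dynamic mass balance for the recycle-versus-flare idea during start-up.
# The decay profile, recovery fraction and horizon are illustrative assumptions,
# not results from the paper's plant-wide dynamic simulation.
import numpy as np

def startup_offspec_rate(t_hours, ramp_hours=10.0, peak_rate=20.0):
    """Off-spec production (t/h) decaying as the plant lines out after start-up."""
    return peak_rate * np.exp(-t_hours / ramp_hours)

def total_flared(recycle_fraction, horizon=48.0, dt=0.5):
    """Integrate flared mass over the start-up; the recycled share of the
    off-spec stream goes back to the upstream process instead of the flare."""
    t = np.arange(0.0, horizon, dt)
    offspec = startup_offspec_rate(t)
    flared = (1.0 - recycle_fraction) * offspec
    return np.trapz(flared, t)

base = total_flared(recycle_fraction=0.0)       # everything sent to the flare
improved = total_flared(recycle_fraction=0.6)   # 60% recycled for reprocessing
print(f"flaring reduced by {100 * (1 - improved / base):.0f}%")
```

In the actual methodology, the recycle and temporary-storage strategies are evaluated on a validated plant-wide dynamic model rather than an assumed decay profile, which is what yields the reported 50-70% emission savings.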

Keywords: flare minimization, large-scale modeling and simulation, plant shut-down, plant start-up

Procedia PDF Downloads 320
1072 Trip Reduction in Turbo Machinery

Authors: Pranay Mathur, Carlo Michelassi, Simi Karatha, Gilda Pedoto

Abstract:

Industrial plant uptime is of utmost importance for reliable, profitable, and sustainable operation. Trips and failed starts have a major impact on plant reliability, and all plant operators focus on the efforts required to minimise them. The performance of these CTQs is measured with two metrics, MTBT (mean time between trips) and SR (starting reliability). These metrics help to identify the top failure modes and the units that need more effort to improve plant reliability. The Baker Hughes trip reduction program is structured to reduce these unwanted trips: 1. real-time machine operational parameters available remotely, capturing the signature of malfunctions together with the related boundary conditions; 2. a real-time, analytics-based alerting system available remotely; 3. remote access to trip logs and alarms from the control system to identify the cause of events; 4. continuous support to field engineers by remotely connecting them with subject matter experts; 5. live tracking of key CTQs; 6. benchmarking against the fleet; 7. breakdown of the cause of failure to component level; 8. investigation of top contributors to identify design and operational root causes; 9. implementation of corrective and preventive actions; 10. assessment of the effectiveness of implemented solutions using reliability growth models; 11. development of analytics for predictive maintenance. With this approach, the Baker Hughes team is able to support customers in achieving their reliability key performance indicators for monitored units, with huge cost savings for plant operators. This presentation explains the approach and provides successful case studies, in particular ones in which 12 LNG and pipeline operators with about 140 gas compression line-ups have adopted these techniques, significantly reducing the number of trips and improving MTBT.
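As a concrete reading of the metrics named above, the sketch below computes MTBT and starting reliability and fits a Crow-AMSAA shape parameter as one common reliability growth check. The event data are synthetic placeholders, not fleet data.

```python
# Minimal sketch of the fleet metrics named in the abstract (MTBT and SR)
# plus a Crow-AMSAA shape-parameter estimate as one common reliability
# growth check. The trip data below are synthetic placeholders.
import numpy as np

def mtbt(operating_hours, n_trips):
    """Mean time between trips over a reporting period."""
    return operating_hours / max(n_trips, 1)

def starting_reliability(successful_starts, attempted_starts):
    """Fraction of start attempts that succeeded."""
    return successful_starts / attempted_starts

def crow_amsaa_beta(cumulative_trip_times, total_time):
    """MLE of the Crow-AMSAA shape parameter for a time-terminated record;
    beta < 1 indicates a decreasing trip rate, i.e. reliability growth
    after corrective and preventive actions."""
    times = np.asarray(cumulative_trip_times, dtype=float)
    return len(times) / np.sum(np.log(total_time / times))

trip_times = [120, 400, 950, 1800, 3100, 5200]   # cumulative hours at each trip
print("MTBT:", mtbt(operating_hours=6000, n_trips=len(trip_times)))
print("SR:  ", starting_reliability(successful_starts=47, attempted_starts=50))
print("beta:", crow_amsaa_beta(trip_times, total_time=6000.0))
```

Tracking these quantities per unit and benchmarking them against the fleet is what allows the top contributors to be prioritised and the effectiveness of corrective actions to be verified.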

Keywords: reliability, availability, sustainability, digital infrastructure, weibull, effectiveness, automation, trips, fail start

Procedia PDF Downloads 76
1071 To Identify the Importance of Telemedicine in Diabetes and Its Impact on Hba1c

Authors: Sania Bashir

Abstract:

A promising approach to healthcare delivery, telemedicine makes use of communication technology to reach remote regions of the world, allowing for beneficial interactions between diabetic patients and healthcare professionals as well as the provision of affordable and easily accessible medical care. The emergence of contemporary care models, fueled by the pervasiveness of mobile devices, provides better information and offers low cost with the best possible outcomes; this is known as digital health. It involves the integration of collected data using software and apps to deliver low-cost, high-quality outcomes. The goal of this study is to assess how well telemedicine works for diabetic patients and how it impacts their HbA1c levels. A questionnaire-based survey of 300 diabetic patients included 150 patients in each of two groups, one receiving usual care and one receiving care via telemedicine. A descriptive, observational study was conducted from September 2021 to May 2022. HbA1c was collected for both groups every three months. A remote monitoring tool was used to assess the efficacy of telemedicine and continuing therapy in place of the customary three-monthly in-person consultations. The patients were 42.3 (±18.3) years old on average. The 128 men were outnumbered by 172 women (57.3% of the total). 200 patients (66.6%) had type 2 diabetes, compared with 100 (33.3%) with type 1. Although the average baseline BMI was within the normal range at 23.4 kg/m², the mean baseline HbA1c (9.45 ± 1.20) indicates that glycemic control was poor at the time of registration. Patients who used telemedicine experienced a mean percentage change in HbA1c of 10.5, while those who visited the clinic experienced a mean percentage change of 3.9. Changes in HbA1c depended on several factors, including improvements in BMI (61%) after 9 months of the study and compliance with healthy lifestyle recommendations for diet and activity; greater compliance was achieved by the telemedicine group. It is an undeniable fact that patient-physician communication is crucial for enhancing health outcomes and avoiding long-term complications. Telemedicine has shown its value in the management of diabetes and holds promise as a novel technique for improved clinician-patient communication in the twenty-first century.
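For clarity, the small sketch below computes the outcome measure reported above, the mean percentage change in HbA1c from baseline, for each arm. The values are made-up placeholders, not the trial data.

```python
# Tiny sketch of the outcome measure used in the study: the mean percentage
# change in HbA1c from baseline for each arm. The values below are made-up
# placeholders, not the trial data.
import numpy as np

def mean_pct_change(baseline, followup):
    """Mean of the per-patient percentage reduction in HbA1c from baseline."""
    baseline = np.asarray(baseline, dtype=float)
    followup = np.asarray(followup, dtype=float)
    return np.mean(100.0 * (baseline - followup) / baseline)

telemed_base, telemed_end = [9.6, 9.2, 10.1], [8.5, 8.3, 9.0]
usual_base, usual_end = [9.4, 9.5, 9.3], [9.1, 9.2, 8.9]
print("telemedicine arm:", round(mean_pct_change(telemed_base, telemed_end), 1), "%")
print("usual care arm:  ", round(mean_pct_change(usual_base, usual_end), 1), "%")
```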

Keywords: diabetes, digital health, mobile app, telemedicine

Procedia PDF Downloads 90