Search results for: students response
160 Using Business Interactive Games to Improve Management Skills
Authors: Nuno Biga
Abstract:
Continuous process improvement is a permanent challenge for managers of any organization. Lean management means that efficiency gains can be obtained through a systematic framework able to exploit synergies between processes and eliminate waste of time and other resources. Leadership in organizations determines the efficiency of teams through its influence on collaborators, their motivation, and the consolidation of a feeling of (group) ownership. The “organization health” depends on the leadership style, which is directly influenced by the intrinsic characteristics of each personality and by leadership ability (leadership competencies). It is therefore important that managers can correct in advance any deviation from the expected exercise of leadership. Top management teams must assume the role of regulatory agents of leadership within the organization, ensuring the monitoring of actions and the alignment of managers with the humanist standards anchored in a visible Code of Ethics and Conduct. This article is built around an innovative model of “Business Interactive Games” (BI GAMES) that simulates a real-life management environment. It shows that the strategic management of operations depends on a complex set of variables, endogenous and exogenous to the intervening agents, that require specific skills and a set of critical processes to monitor. BI GAMES are designed for each management reality and have already been applied successfully in several contexts over the last five years, including educational and enterprise settings. Results from these experiences are used to demonstrate how serious games in working living labs contributed to improving the organizational environment by focusing on the evaluation of players’ (agents’) skills, empowering their capabilities, and the critical factors that create value in each context.
The implementation of the BI GAMES simulator highlights that leadership skills are decisive for the performance of teams, regardless of the sector of activity and the specificities of each organization whose operation is being simulated. The players in BI GAMES can be managers or employees in different roles in the organization, or students in a learning context. They interact with each other and are asked to decide among several options for the follow-up operation, for example, when the costs and benefits are not fully known but depend on the actions of external parties (e.g., subcontracted enterprises and actions of regulatory bodies). Each team must evaluate the resources used/needed in each operation, identify bottlenecks in the system of operations, assess the performance of the system through a set of key performance indicators, and set a coherent strategy to improve efficiency. Through gamification and the serious-games approach, organizational managers are able to compare the scientific approach to strategic decision-making with their real-life, experience-based approach. Considering that each BI GAMES team has a leader (chosen by draw), the performance of this player has a direct impact on the results obtained. Leadership skills are thus put to the test during the simulation of the functioning of each organization, allowing conclusions to be drawn at the end of the simulation, including its discussion amongst participants.
Keywords: business interactive games, gamification, management empowerment skills, simulation living labs
Procedia PDF Downloads 110
159 Artificial Intelligence Based Method in Identifying Tumour Infiltrating Lymphocytes of Triple Negative Breast Cancer
Authors: Nurkhairul Bariyah Baharun, Afzan Adam, Reena Rahayu Md Zin
Abstract:
Tumor microenvironment (TME) in breast cancer is mainly composed of cancer cells, immune cells, and stromal cells. The interaction between cancer cells and their microenvironment plays an important role in tumor development, progression, and treatment response. The TME in breast cancer includes tumor-infiltrating lymphocytes (TILs) that are implicated in killing tumor cells. TILs can be found in the tumor stroma (sTILs) and within the tumor (iTILs). TILs in triple negative breast cancer (TNBC) have been demonstrated to have prognostic and potentially predictive value. The International Immuno-Oncology Biomarker Working Group (TIL-WG) has developed a guideline focused on the assessment of sTILs using hematoxylin and eosin (H&E)-stained slides. According to the guideline, pathologists use an “eyeballing” method on the H&E-stained slide for sTILs assessment. This method has low precision and poor interobserver reproducibility, is time-consuming for a comprehensive evaluation, and counts only sTILs. The TIL-WG has therefore recommended that any algorithm for computational assessment of TILs utilize the guidelines provided, to overcome the limitations of manual assessment and thus provide highly accurate and reliable TILs detection and classification for reproducible and quantitative measurement. This study was carried out to develop a TNBC digital whole slide image (WSI) dataset from H&E-stained slides and IHC (CD4+ and CD8+)-stained slides. TNBC cases were retrieved from the database of the Department of Pathology, Hospital Canselor Tuanku Muhriz (HCTM). TNBC cases diagnosed between 2010 and 2021 with no history of other cancer and with tissue blocks available were included in the study (n=58). Tissue blocks were sectioned at approximately 4 µm for H&E and IHC staining. The H&E staining was performed according to a well-established protocol.
Indirect IHC staining was also performed on the tissue sections using the protocol from the Diagnostic BioSystems PolyVue™ Plus Kit, USA. The slides were stained with a rabbit monoclonal CD8 antibody (SP16) and a rabbit monoclonal CD4 antibody (EP204). The selected and quality-checked slides were then scanned using a high-resolution whole slide scanner (Pannoramic DESK II DW slide scanner) to digitalize the tissue image with a pixel resolution of 20x magnification. A manual TILs (sTILs and iTILs) assessment was then carried out by two appointed pathologists, who scored TILs from the digital WSIs following the guideline developed by the TIL-WG in 2014, with the result expressed as the percentage of sTILs and iTILs per mm² of stromal and tumour area on the tissue. Following this, we aimed to develop an automated digital image scoring framework that incorporates key elements of the manual guidelines (including both sTILs and iTILs), using manually annotated data for robust and objective quantification of TILs in TNBC. From the study, we have developed a digital dataset of TNBC H&E and IHC (CD4+ and CD8+) stained slides. We hope that an automated scoring method can provide quantitative and interpretable TILs scoring that correlates with the manual pathologist-derived sTILs and iTILs scoring and thus has potential prognostic implications.
Keywords: automated quantification, digital pathology, triple negative breast cancer, tumour infiltrating lymphocytes
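The density-per-area scoring described above can be sketched schematically; the binary masks below are hypothetical stand-ins for the pathologists' annotations, not the authors' framework:

```python
def stils_percentage(til_mask, stroma_mask):
    """Percentage of stromal area occupied by TILs.

    Masks are same-shaped 2-D lists of 0/1 pixels from hypothetical
    annotations: stroma_mask marks tumour-associated stroma, til_mask
    marks lymphocytes. This mirrors the density-per-area idea of the
    TIL-WG score, not the authors' actual pipeline.
    """
    stroma_px = sum(map(sum, stroma_mask))
    if stroma_px == 0:
        return 0.0
    # Count only TIL pixels that fall inside the stromal compartment.
    til_in_stroma = sum(
        t and s
        for t_row, s_row in zip(til_mask, stroma_mask)
        for t, s in zip(t_row, s_row)
    )
    return 100.0 * til_in_stroma / stroma_px

# toy 4x4 region: half of the stromal pixels contain TILs
stroma = [[1, 1, 0, 0]] * 4
tils = [[1, 0, 0, 0]] * 4
print(stils_percentage(tils, stroma))  # → 50.0
```

An automated framework would derive such masks from a segmentation model rather than manual annotation, but the scoring arithmetic is the same.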
Procedia PDF Downloads 114
158 Health Equity in Hard-to-Reach Rural Communities in Abia State, Nigeria: An Asset-Based Community Development Intervention to Influence Community Norms and Address the Social Determinants of Health in Hard-to-Reach Rural Communities
Authors: Chinasa U. Imo, Queen Chikwendu, Jonathan Ajuma, Mario Banuelos
Abstract:
Background: Sociocultural norms primarily influence the health-seeking behavior of populations in rural communities. In the Nkporo community, Abia State, Nigeria, the sociocultural perception of diseases runs counter to biomedical definitions, and residents rely heavily on traditional medicine and practices. In a state where birth asphyxia and sepsis account for the major causes of neonatal death, malaria leads the causes of other mortality, followed by common preventable diseases such as diarrhea, pneumonia, acute respiratory tract infection, malnutrition, and HIV/AIDS. Most local mothers attribute their health conditions and those of their children to witchcraft attacks, the hand of God, and ancestral underpinnings. This influences how they see antenatal and postnatal care, the choice of place for accessing care and birth delivery, the response to children's illnesses, immunization, and nutrition. Method: To implement a community health improvement program, we adopted an asset-based community development model to address the normative and social determinants of health. The first step was a qualitative community health needs baseline assessment, involving focus group discussions with twenty-five (25) youths aged 18-25, semi-structured interviews with ten (10) officers-in-charge of primary health centers, eight (8) ward health committee members, and nine (9) community leaders. Secondly, we designed an intervention program. Going forward, we will proceed with implementing and evaluating this program. Result: The priority needs identified by the communities were malaria, lack of clean drinking water, and the need for behavioral change information. The study also highlighted the significant influence of youths on their peers, family, and community as caregivers and information interpreters.
Based on the findings, the NGO SieDi-Hub collaborated with the Abia State Ministry of Health, the State Primary Healthcare Agency, and Empower Next Generations to design a one-year "Community Health Youth Champions Pilot Program." Twenty (20) youths in the community were trained and equipped to champion a participatory approach to bridging the gap between access to and delivery of primary healthcare, and to adjust sociocultural norms to improve health equity for people in the Nkporo community – who have limited education and lack access to health information and quality healthcare facilities – using an innovative community-led improvement approach. Conclusion: Youths play a vital role in achieving health equity, being a vulnerable population with significant influence. To ensure effective primary healthcare, strategies must include cultural humility. The asset-based community development model offers valuable tools, and this article will share ongoing lessons from the intervention's behavioral change strategies with young people.
Keywords: asset-based community development, community health, primary health systems strengthening, youth empowerment
Procedia PDF Downloads 92
157 A Computational Framework for Load Mediated Patellar Ligaments Damage at the Tropocollagen Level
Authors: Fadi Al Khatib, Raouf Mbarki, Malek Adouni
Abstract:
In various sport and recreational activities, the patellofemoral joint undergoes large forces and moments while accommodating significant knee joint movement. In doing so, this joint is commonly the source of anterior knee pain related to instability in normal patellar tracking and excessive pressure syndrome. One well-observed explanation of this instability is damage to the patellofemoral ligaments and patellar tendon. Improved knowledge of the damage mechanism mediating ligament and tendon injuries can be of great help not only in rehabilitation and prevention procedures but also in the design of better reconstruction systems in the management of knee joint disorders. This damage mechanism, specifically under excessive mechanical loading, has been linked to the micro level of the fibred structure, precisely to the tropocollagen molecules and their connection density. We argue that defining a clear framework from the bottom (micro level) to the top (macro level) of the soft-tissue hierarchy may elucidate the essential underpinnings of the state of ligament damage. To do so, in this study a multiscale fibril-reinforced hyper-elastoplastic finite element model that accounts for the synergy between molecular and continuum syntheses was developed to determine the short-term stress/strain response of the patellofemoral ligaments and tendon. The plasticity of the proposed model is associated only with the uniaxial deformation of the collagen fibril. The yield strength of the fibril is a function of the cross-link density between tropocollagen molecules, defined here by a density function. This function was obtained through a coarse-graining procedure linking nanoscale collagen features and tissue-level material properties using molecular dynamics simulations. The hierarchies of the soft tissues were implemented using the rule of mixtures. Thereafter, the model was calibrated using a statistical calibration procedure.
The model was then implemented into a real structure of the patellofemoral ligaments and patellar tendon (OpenKnee) and simulated under realistic loading conditions. With the calibrated material parameters, the calculated axial stress agrees well with the experimental measurements, with a coefficient of determination (R²) equal to 0.91 and 0.92 for the patellofemoral ligaments and the patellar tendon, respectively. The best prediction of the yield strength and strain compared with the reported experimental data was obtained when the cross-link density between the tropocollagen molecules of the fibril was equal to 5.5 ± 0.5 (patellofemoral ligaments) and 12 (patellar tendon). Damage initiation of the patellofemoral ligaments was located at the femoral insertions, while damage to the patellar tendon happened in the middle of the structure. These predicted findings show a meaningful correlation between the cross-link density of the tropocollagen molecules and the stiffness of the connective tissues of the extensor mechanism. Damage initiation and propagation were also documented with this model, in satisfactory agreement with earlier observations. To the best of our knowledge, this is the first attempt to model ligaments from the bottom up, with predictions depending on the tropocollagen cross-link density. This approach appears more meaningful towards a realistic simulation of a damage or repair process compared with certain published studies.
Keywords: tropocollagen, multiscale model, fibrils, knee ligaments
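The rule of mixtures used to assemble the tissue hierarchy can be illustrated with a generic Voigt (parallel) sketch; the moduli and volume fractions below are made-up values, and the paper's actual cross-link density function comes from coarse-grained molecular dynamics, not this formula:

```python
def rule_of_mixtures(moduli, fractions):
    """Voigt rule of mixtures: effective modulus of a composite whose
    constituents (e.g., collagen fibrils and ground matrix) deform in
    parallel. Inputs are illustrative, not the paper's calibrated values."""
    assert abs(sum(fractions) - 1.0) < 1e-9, "volume fractions must sum to 1"
    return sum(E * f for E, f in zip(moduli, fractions))

# hypothetical numbers: stiff fibrils (350 MPa) in a soft matrix (2 MPa)
E_eff = rule_of_mixtures([350.0, 2.0], [0.7, 0.3])
print(round(E_eff, 1))  # effective modulus in MPa → 245.6
```

The same weighted-sum idea extends level by level up the tissue hierarchy, which is how a molecular-scale property such as cross-link density can propagate into the continuum response.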
Procedia PDF Downloads 127
156 Diffusion MRI: Clinical Application in Radiotherapy Planning of Intracranial Pathology
Authors: Pomozova Kseniia, Gorlachev Gennadiy, Chernyaev Aleksandr, Golanov Andrey
Abstract:
In clinical practice, and especially in stereotactic radiosurgery planning, the significance of diffusion-weighted imaging (DWI) is growing. This makes software capable of quickly processing and reliably visualizing diffusion data, and equipped with tools for their analysis across different tasks, essential. We are developing the «MRDiffusionImaging» software in standard C++. The domain logic has been moved to separate class libraries and can be used on various platforms. The user interface is built on Windows WPF (Windows Presentation Foundation), a technology for building Windows applications with access to all components of the .NET 5 or .NET Framework platform ecosystem. One of its important features is the declarative markup language XAML (eXtensible Application Markup Language), with which one can conveniently create, initialize, and set properties of objects with hierarchical relationships. Graphics are generated using the DirectX environment. The MRDiffusionImaging software package has been implemented for processing diffusion magnetic resonance imaging (dMRI) data and allows loading and viewing images sorted by series. An algorithm for "masking" dMRI series based on T2-weighted images was developed using a deformable surface model to exclude tissues that are not related to the area of interest from the analysis. An algorithm for distortion correction using deformable image registration based on autocorrelation of local structure has also been developed. The maximum voxel dimension was 1.03 ± 0.12 mm. In an elementary volume of the brain, the diffusion tensor is geometrically interpreted using an ellipsoid, which is an isosurface of the probability density of a molecule's diffusion. For the first time, non-parametric intensity distributions, neighborhood correlations, and inhomogeneities are combined in one algorithm for segmentation of white matter (WM), grey matter (GM), and cerebrospinal fluid (CSF).
A tool for calculating the mean diffusivity and fractional anisotropy has been created, on the basis of which quantitative maps can be built for solving various clinical problems. Functionality has been created that allows clustering and segmenting images to individualize the clinical volume of radiation treatment and to further assess the response (median Dice score = 0.963 ± 0.137). White matter tracts of the brain were visualized using two algorithms: a deterministic one (fiber assignment by continuous tracking) and a probabilistic one using the Hough transform. The proposed algorithms test candidate curves in each voxel, assigning to each a score computed from the diffusion data, and then select the curves with the highest scores as the potential anatomical connections. In the context of functional radiosurgery, it is possible to reduce the volume of the internal capsule receiving 12 Gy from 0.402 cc to 0.254 cc. «MRDiffusionImaging» will improve the efficiency and accuracy of diagnostics and stereotactic radiotherapy of intracranial pathology. We are developing software with integrated, intuitive support for processing, analysis, and inclusion in the process of radiotherapy planning and evaluation of its results.
Keywords: diffusion-weighted imaging, medical imaging, stereotactic radiosurgery, tractography
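The mean-diffusivity and fractional-anisotropy maps mentioned above follow from the standard definitions in terms of the three eigenvalues of the diffusion tensor; a minimal per-voxel sketch (the eigenvalues below are typical illustrative magnitudes in mm²/s, not data from the software):

```python
import math

def md_fa(l1, l2, l3):
    """Mean diffusivity and fractional anisotropy from the three
    eigenvalues of the diffusion tensor (standard definitions:
    MD = mean eigenvalue, FA = sqrt(3/2) * |lambda - MD| / |lambda|)."""
    md = (l1 + l2 + l3) / 3.0
    num = (l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2
    den = l1 ** 2 + l2 ** 2 + l3 ** 2
    fa = math.sqrt(1.5 * num / den) if den > 0 else 0.0
    return md, fa

# isotropic voxel (CSF-like): all eigenvalues equal, FA vanishes
print(round(md_fa(3e-3, 3e-3, 3e-3)[1], 2))  # → 0.0
# strongly anisotropic voxel (white-matter-like): FA approaches 1
print(round(md_fa(1.7e-3, 0.2e-3, 0.2e-3)[1], 2))  # → 0.87
```

Applying this voxel-wise over the eigen-decomposed tensor field yields exactly the kind of quantitative maps the abstract describes.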
Procedia PDF Downloads 84
155 Coil-Over Shock Absorbers Compared to Inherent Material Damping
Authors: Carina Emminger, Umut D. Cakmak, Evrim Burkut, Rene Preuer, Ingrid Graz, Zoltan Major
Abstract:
Damping accompanies us daily in everyday life and is used to protect (e.g., in shoes) and to make our lives more comfortable (damping of unwanted motion) and calm (noise reduction). In general, damping is the absorption of energy, which is either stored in the material (vibration isolation systems) or converted into heat (vibration absorbers). In the latter case, the damping mechanism can be split into active, passive, and semi-active (a combination of active and passive). Active damping is required to enable almost perfect damping over the whole application range and is used, for instance, in sports cars. In contrast, passive damping is a response of the material to external loading. Consequently, the material composition has a huge influence on the damping behavior. For elastomers, the material behavior is inherently viscoelastic and both temperature- and frequency-dependent. However, passive damping is not adjustable during application. Therefore, it is important to understand the fundamental viscoelastic behavior and the dissipation capability under external loading. The objective of this work is to assess the limitations and applicability of viscoelastic material damping for applications in which coil-over shock absorbers are currently utilized. Coil-over shock absorbers are usually made of various mechanical parts and incorporate fluids within the damper. These shock absorbers are well known and studied in industry, and, when needed, they can easily be adjusted during their product lifetime. In contrast, dampers made of – ideally – a single material are more resource-efficient, easier to service, and easier to manufacture. However, they lack adaptability and adjustability in service. Therefore, a case study with a remote-controlled sports car was conducted. The original shock absorbers were redesigned, and the spring-dashpot system was replaced by an elastomer and a thermoplastic elastomer, respectively.
Five different elastomer formulations were used, including a pure and an iron-particle-filled thermoplastic poly(urethane) (TPU) and blends of two different poly(dimethyl siloxane)s (PDMS). In addition, the TPUs were investigated as full and hollow dampers to examine the difference between solid and structured material. To obtain comparative results, each material formulation was comprehensively characterized by monotonic uniaxial compression tests, dynamic thermomechanical analysis (DTMA), and rebound resilience. Moreover, the new material-based shock absorbers were compared with spring-dashpot shock absorbers. The shock absorbers were analyzed under monotonic and cyclic loading. In addition, an impact load was applied to the remote-controlled car to measure the damping properties in operation. A servo-hydraulic high-speed linear actuator was utilized to apply the loads. The acceleration of the car and the displacement of specific measurement points were recorded during testing by a sensor and a high-speed camera, respectively. The results prove that elastomers are suitable for damping applications, but they are temperature- and frequency-dependent, which limits the applicability of viscous material dampers. Feasible fields of application may be in micromobility, e.g., bicycles, e-scooters, and e-skateboards. Furthermore, viscous material damping could be used to increase the inherent damping of a whole structure, e.g., in bicycle frames.
Keywords: damper structures, material damping, PDMS, TPU
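Both the coil-over absorber and a lumped viscoelastic material damper are commonly idealized as a spring (k) and dashpot (c) in parallel; the following is a minimal Kelvin-Voigt sketch with illustrative parameters, not the study's measured values:

```python
def kelvin_voigt_response(m, k, c, x0, dt, n):
    """Semi-implicit Euler simulation of a mass on a Kelvin-Voigt element
    (spring k and dashpot c in parallel) released from displacement x0.
    This is the shared idealisation behind a coil-over shock absorber and
    a lumped material damper; all parameter values here are illustrative."""
    x, v = x0, 0.0
    out = []
    for _ in range(n):
        a = (-k * x - c * v) / m          # spring + dashpot forces
        v += a * dt                        # update velocity first (stable)
        x += v * dt                        # then position
        out.append(x)
    return out

xs = kelvin_voigt_response(m=1.0, k=100.0, c=2.0, x0=0.01, dt=1e-3, n=5000)
# the dashpot dissipates the stored elastic energy, so the amplitude decays
print(abs(xs[-1]) < abs(xs[0]))  # → True
```

In a real material damper, k and c would themselves vary with temperature and excitation frequency, which is precisely the limitation the abstract identifies for purely passive viscoelastic damping.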
Procedia PDF Downloads 113
154 The 5-HT1A Receptor Biased Agonists, NLX-101 and NLX-204, Elicit Rapid-Acting Antidepressant Activity in Rat Similar to Ketamine and via GABAergic Mechanisms
Authors: A. Newman-Tancredi, R. Depoortère, P. Gruca, E. Litwa, M. Lason, M. Papp
Abstract:
The N-methyl-D-aspartic acid (NMDA) receptor antagonist, ketamine, can elicit rapid-acting antidepressant (RAAD) effects in treatment-resistant patients, but it requires parenteral co-administration with a classical antidepressant under medical supervision. In addition, ketamine can also produce serious side effects that limit its long-term use, and there is much interest in identifying RAADs based on ketamine’s mechanism of action but with safer profiles. Ketamine elicits GABAergic interneuron inhibition, glutamatergic neuron stimulation, and, notably, activation of serotonin 5-HT1A receptors in the prefrontal cortex (PFC). Direct activation of the latter receptor subpopulation with selective ‘biased agonists’ may therefore be a promising strategy to identify novel RAADs and, consistent with this hypothesis, the prototypical cortical biased agonist, NLX-101, exhibited robust RAAD-like activity in the chronic mild stress model of depression (CMS). The present study compared the effects of a novel, selective 5-HT1A receptor-biased agonist, NLX-204, with those of ketamine and NLX-101. Materials and methods: CMS procedure was conducted on Wistar rats; drugs were administered either intraperitoneally (i.p.) or by bilateral intracortical microinjection. Ketamine: 10 mg/kg i.p. or 10 µg/side in PFC; NLX-204 and NLX-101: 0.08 and 0.16 mg/kg i.p. or 16 µg/side in PFC. In addition, interaction studies were carried out with systemic NLX-204 or NLX-101 (each at 0.16 mg/kg i.p.) in combination with intracortical WAY-100635 (selective 5-HT1A receptor antagonist; 2 µg/side) or muscimol (GABA-A receptor agonist, 12.5 ng/side). Anhedonia was assessed by CMS-induced decrease in sucrose solution consumption; anxiety-like behavior was assessed using the Elevated Plus Maze (EPM), and cognitive impairment was assessed by the Novel Object Recognition (NOR) test. 
Results: A single administration of NLX-204 was sufficient to reverse the CMS-induced deficit in sucrose consumption, similarly to ketamine and NLX-101. NLX-204 also reduced CMS-induced anxiety in the EPM and abolished CMS-induced NOR deficits. These effects were maintained (EPM and NOR) or enhanced (sucrose consumption) over a subsequent 2-week period of treatment. The anti-anhedonic response to the drugs was also maintained for several weeks following treatment discontinuation, suggesting that they had sustained effects on neuronal networks. A single PFC administration of NLX-204 reversed deficient sucrose consumption, similarly to ketamine and NLX-101. Moreover, the anti-anhedonic activities of systemic NLX-204 and NLX-101 were abolished by coadministration with intracortical WAY-100635 or muscimol. Conclusions: (i) The antidepressant-like activity of NLX-204 in the rat CMS model was as rapid as that of ketamine or NLX-101, supporting the targeting of cortical 5-HT1A receptors with selective biased agonists to achieve RAAD effects. (ii) The anti-anhedonic activity of systemic NLX-204 was mimicked by local administration of the compound in the PFC, confirming the involvement of cortical circuits in its RAAD-like effects. (iii) Notably, the effects of systemic NLX-204 and NLX-101 were abolished by PFC administration of muscimol, indicating that they act by (indirectly) eliciting a reduction in cortical GABAergic neurotransmission. This is consistent with ketamine's mechanism of action and suggests that converging NMDA and 5-HT1A receptor signaling cascades in the PFC underlie the RAAD-like activities of ketamine and NLX-204. Acknowledgements: The study was financially supported by NCN grant no. 2019/35/B/NZ7/00787.
Keywords: depression, ketamine, serotonin, 5-HT1A receptor, chronic mild stress
Procedia PDF Downloads 110
153 Optimized Processing of Neural Sensory Information with Unwanted Artifacts
Authors: John Lachapelle
Abstract:
Introduction: Neural stimulation is increasingly targeted toward the treatment of back pain, PTSD, and Parkinson's disease, and toward sensory perception. Sensory recording during stimulation is important in order to examine the neural response to stimulation. Most neural amplifiers (headstages) focus on noise efficiency factor (NEF). However, neural headstages also need to handle artifacts from several sources, including power lines, movement (EMG), and the neural stimulation itself. In this work, a layered approach to artifact rejection is used to reduce corruption of the neural ENG signal by 60 dBv, resulting in recovery of sensory signals in rats and primates that would previously not have been possible. Methods: The approach combines analog techniques to reduce and handle unwanted signal amplitudes. The methods include optimized (1) sensory electrode placement, (2) amplifier configuration, and (3) artifact blanking when necessary. Together, the techniques are like concentric moats protecting a castle; only the wanted neural signal can penetrate. The headstage operates in two conditions: unwanted artifact < 50 mV, linear operation; and artifact > 50 mV, fast-settle gain-reduction signal limiting (covered in more detail in a separate paper). Unwanted signals at the headstage input: Consider: (a) EMG signals are by nature < 10 mV. (b) 60 Hz power-line signals may be > 50 mV with poor electrode cable conditions; with careful routing, much of the signal is common to both reference and active electrodes and is rejected in the differential amplifier, with < 50 mV remaining. (c) An unwanted (to the neural recorder) stimulation signal is attenuated from the stimulation electrode to the sensory electrode. The voltage seen at the sensory electrode can be modeled as Φ_m = I_o/(4πσr). For a 1 mA stimulation signal, with 1 cm spacing between electrodes, the signal is < 20 mV at the headstage.
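The monopolar point-source model quoted above can be checked numerically. In this sketch, the tissue conductivity of 0.5 S/m is an assumed value (the abstract does not state σ), chosen only to show that the < 20 mV figure is plausible for the stated current and spacing:

```python
import math

def point_source_potential(i_amp, sigma, r_m):
    """Potential of a monopolar current source in a homogeneous volume
    conductor: phi = I / (4 * pi * sigma * r).
    i_amp in amperes, sigma in S/m, r_m in meters; returns volts."""
    return i_amp / (4.0 * math.pi * sigma * r_m)

# 1 mA stimulus, 1 cm electrode spacing, assumed conductivity 0.5 S/m
phi = point_source_potential(1e-3, 0.5, 0.01)
print(round(phi * 1e3, 1))  # millivolts at the sensory electrode → 15.9
```

A result around 16 mV is consistent with the abstract's statement that the stimulation artifact reaching the headstage stays below 20 mV, i.e., within the 50 mV linear-operation window.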
Headstage ASIC design: The front-end ASIC is designed to produce < 1% THD at 50 mV input (50 times higher than typical headstage ASICs) with no increase in noise floor. This requires careful balance of the amplifier stages in the headstage ASIC, as well as consideration of the electrodes' effect on noise. The ASIC is designed to allow extremely small signal extraction on low-impedance (< 10 kohm) electrodes, with the headstage ASIC noise floor configurable to < 700 nV/rt-Hz. Smaller high-impedance electrodes (> 100 kohm) are typically located closer to neural sources and transduce higher-amplitude signals (> 10 uV); the ASIC low-power mode conserves power with 2 uV/rt-Hz noise. Findings: The enhanced neural processing ASIC has been compared with a commercial neural recording amplifier IC. Chronically implanted primates at MGH demonstrated saturation of the commercial neural amplifier as a result of large environmental artifacts. The enhanced artifact-suppression headstage ASIC, in the same setup, was able to recover and process the wanted neural signal separately from the suppressed unwanted artifacts. Separately, the enhanced artifact-suppression headstage ASIC was able to separate sensory neural signals from unwanted artifacts in mouse-implanted peripheral intrafascicular electrodes. Conclusion: Optimized headstage ASICs allow observation of neural signals in the presence of the large artifacts that will be present in real-life implanted applications, and are targeted toward human implantation in the DARPA HAPTIX program.
Keywords: ASIC, biosensors, biomedical signal processing, biomedical sensors
Procedia PDF Downloads 327
152 An Elasto-Viscoplastic Constitutive Model for Unsaturated Soils: Numerical Implementation and Validation
Authors: Maria Lazari, Lorenzo Sanavia
Abstract:
Mechanics of unsaturated soils has been an active field of research in recent decades. Efficient constitutive models that take into account the partial saturation of soil are necessary to solve a number of engineering problems, e.g., instability of slopes and cuts due to heavy rainfall. A large number of constitutive models can now be found in the literature that consider fundamental issues associated with unsaturated soil behaviour, such as the volume change and shear strength behaviour with suction or saturation changes. Partially saturated soils may either expand or collapse upon wetting depending on the stress level, and it is also possible that a soil experiences a reversal in volumetric behaviour during wetting. The shear strength of soils also changes dramatically with changes in the degree of saturation; a related engineering problem is slope failure caused by rainfall. Several state-of-the-art reviews of the topic have appeared in recent years, usually providing a thorough discussion of the stress state, the advantages and disadvantages of specific constitutive models, and the latest developments in the area of unsaturated soil modelling. However, only a few studies have focused on the coupling between partial saturation states and time effects on the behaviour of geomaterials. Rate dependency is experimentally observed in the mechanical response of granular materials, and a viscoplastic constitutive model is capable of reproducing creep and relaxation processes. Therefore, in this work an elasto-viscoplastic constitutive model for unsaturated soils is proposed and validated on the basis of experimental data. The model constitutes an extension of an existing elastoplastic strain-hardening constitutive model capable of capturing the behaviour of variably saturated soils, based on energy-conjugate stress variables in the framework of superposed continua.
The purpose was to develop a model able to deal with possible mechanical instabilities within a consistent energy framework. The model shares the same conceptual structure as the elastoplastic laws proposed for bonded geomaterials subject to weathering or diagenesis and is capable of modelling several kinds of instabilities induced by the loss of hydraulic bonding contributions. The novelty of the proposed formulation is enhanced by the incorporation of density-dependent stiffness and hardening coefficients in order to allow the modelling of the pycnotropic behaviour of granular materials with a single set of material constants. The model has been implemented in the commercial FE platform PLAXIS, widely used in Europe for advanced geotechnical design. The algorithmic strategies adopted for the stress-point algorithm had to be revised to take into account the different approach adopted by the PLAXIS developers in the solution of the discrete non-linear equilibrium equations. An extensive comparison of the model with a series of experimental data reported by different authors is presented to validate it and illustrate its capability. After the validation, the effectiveness of the viscoplastic model is demonstrated by numerical simulations of a laboratory-scale partially saturated slope failure, and the effect of viscosity and degree of saturation on the slope's stability is discussed.
Keywords: PLAXIS software, slope, unsaturated soils, viscoplasticity
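Rate dependency of the kind the model targets is often introduced through a Perzyna-type overstress law; the following is a generic 1-D sketch of viscoplastic stress relaxation (illustrative parameters and a textbook overstress function, not the paper's constitutive law or its PLAXIS implementation):

```python
def perzyna_relaxation(E, sigma_y, eta, n_exp, eps_total, dt, steps):
    """1-D Perzyna-type viscoplastic stress relaxation at fixed total
    strain: sigma = E * (eps_total - eps_vp), and the viscoplastic
    strain grows only while the overstress (sigma - sigma_y) is
    positive. A minimal sketch of rate-dependent behaviour."""
    eps_vp = 0.0
    history = []
    for _ in range(steps):
        sigma = E * (eps_total - eps_vp)
        over = max(sigma - sigma_y, 0.0) / sigma_y   # normalized overstress
        eps_vp += dt * (over ** n_exp) / eta          # viscoplastic flow
        history.append(sigma)
    return history

s = perzyna_relaxation(E=200.0, sigma_y=1.0, eta=50.0, n_exp=1.0,
                       eps_total=0.02, dt=0.01, steps=2000)
# stress relaxes from the initial elastic value toward the yield stress
print(round(s[0], 1), s[-1] >= 1.0, s[-1] < s[0])  # → 4.0 True True
```

Setting the viscosity η very large recovers rate-independent elastoplastic behaviour, which is the sense in which such a model "extends" an existing elastoplastic law.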
Procedia PDF Downloads 222
151 LncRNA-miRNA-mRNA Networks Associated with BCR-ABL T315I Mutation in Chronic Myeloid Leukemia
Authors: Adenike Adesanya, Nonthaphat Wong, Xiang-Yun Lan, Shea Ping Yip, Chien-Ling Huang
Abstract:
Background: The most challenging mutation of the oncokinase BCR-ABL is T315I, commonly known as the “gatekeeper” mutation and notorious for its strong resistance to almost all tyrosine kinase inhibitors (TKIs), especially imatinib. Therefore, this study aims to identify T315I-dependent downstream microRNA (miRNA) pathways associated with drug resistance in chronic myeloid leukemia (CML) for prognostic and therapeutic purposes. Methods: T315I-carrying K562 cell clones (K562-T315I) were generated by the CRISPR-Cas9 system. Imatinib-treated K562-T315I cells were subjected to small RNA library preparation and next-generation sequencing. Putative lncRNA-miRNA-mRNA networks were analyzed with (i) DESeq2 to extract differentially expressed miRNAs, using a Padj value of 0.05 as cut-off, (ii) STarMir to obtain potential miRNA response element (MRE) binding sites of selected miRNAs on lncRNA H19, (iii) miRDB, miRTarbase, and TargetScan to predict mRNA targets of selected miRNAs, (iv) IntaRNA to obtain putative interactions between H19 and the predicted mRNAs, (v) Cytoscape to visualize putative networks, and (vi) several pathway analysis platforms (Enrichr, PANTHER, and ShinyGO) for pathway enrichment analysis. Moreover, mitochondria isolation and transcript quantification were adopted to determine the new mechanism involved in T315I-mediated resistance to CML treatment. Results: Verification of the CRISPR-mediated mutagenesis with digital droplet PCR detected a mutation abundance of ≥80%. Further validation showed viability of ≥90% by cell viability assay, and an intense phosphorylated CRKL protein band was detected, with no observable change in BCR-ABL and c-ABL protein expression by Western blot. Consistent with several investigations into hematological malignancies, we determined a 7-fold increase of H19 expression in K562-T315I cells. After imatinib treatment, a 9-fold increment was observed.
DESeq2 revealed that 171 miRNAs were differentially expressed in K562-T315I; 112 of these miRNAs were identified to have MRE binding regions on H19, and 26 of the 112 miRNAs were significantly downregulated. Adopting seed-sequence analysis of these identified miRNAs, we obtained 167 mRNAs. Six hub miRNAs (hsa-let-7b-5p, hsa-let-7e-5p, hsa-miR-125a-5p, hsa-miR-129-5p, and hsa-miR-372-3p) and 25 predicted genes were identified after constructing the hub miRNA-target gene network. These targets demonstrated putative interactions with H19 lncRNA and were mostly enriched in pathways related to cell proliferation, senescence, gene silencing, and pluripotency of stem cells. Further experimental findings have also shown up-regulation of mitochondrial transcripts and lncRNA MALAT1 contributing to the lncRNA-miRNA-mRNA networks induced by the BCR-ABL T315I mutation. Conclusions: Our results indicate that lncRNA-miRNA regulators play a crucial role not only in leukemogenesis but also in drug resistance, considering the significant dysregulation and interactions in the K562-T315I cell model generated by CRISPR-Cas9. In silico analysis has further shown that lncRNAs H19 and MALAT1 bear several complementary miRNA sites, implying that they could serve as sponges sequestering the activity of the target miRNAs.
Keywords: chronic myeloid leukemia, imatinib resistance, lncRNA-miRNA-mRNA, T315I mutation
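The filtering funnel described above (differentially expressed miRNAs, then the subset with H19 MRE sites, then the significantly downregulated subset) can be sketched with ordinary set operations; the miRNA names, Padj values, and fold changes below are invented for illustration, not taken from the study:

```python
# Hypothetical sketch of the miRNA filtering funnel described above:
# DE miRNAs (Padj < 0.05) -> subset with H19 MRE sites -> downregulated subset.
# All names, Padj values, and fold changes here are invented for illustration.

def filter_mirnas(de_results, h19_mre_mirnas, padj_cutoff=0.05):
    """de_results maps miRNA name -> (padj, log2_fold_change)."""
    differential = {m for m, (padj, _) in de_results.items() if padj < padj_cutoff}
    with_mre = differential & h19_mre_mirnas
    downregulated = {m for m in with_mre if de_results[m][1] < 0}
    return differential, with_mre, downregulated

de = {
    "hsa-let-7b-5p":   (0.001, -1.8),
    "hsa-miR-125a-5p": (0.010, -0.9),
    "hsa-miR-372-3p":  (0.040,  0.7),   # significant but upregulated
    "hsa-miR-000-0p":  (0.300, -2.0),   # invented; not significant
}
mre = {"hsa-let-7b-5p", "hsa-miR-125a-5p", "hsa-miR-372-3p"}
diff, hits, down = filter_mirnas(de, mre)
```

In the actual study this funnel narrowed 171 differentially expressed miRNAs to 112 with MRE sites and finally 26 downregulated candidates.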
150 Cost-Conscious Treatment of Basal Cell Carcinoma
Authors: Palak V. Patel, Jessica Pixley, Steven R. Feldman
Abstract:
Introduction: Basal cell carcinoma (BCC) is the most common skin cancer worldwide and requires substantial resources to treat. When choosing between indicated therapies, providers consider their associated adverse effects, efficacy, cosmesis, and function preservation. The patient's tumor burden, infiltrative risk, and risk of tumor recurrence are also considered. Treatment cost is often left out of these discussions. This can lead to financial toxicity, which describes the harm and quality-of-life reductions inflicted by high care costs. Methods: We studied the guidelines set forth by the American Academy of Dermatology for the treatment of BCC. A PubMed literature search was conducted to identify the costs of each recommended therapy. We discuss costs alongside treatment efficacy and side-effect profiles. Results: Surgical treatment for BCC can be cost-effective if the appropriate treatment is selected for the presenting tumor. Curettage and electrodesiccation can be used in low-grade, low-recurrence tumors in aesthetically unimportant areas. The benefits of cost-conscious care are not likely to be outweighed by the risks of poor cosmesis or tumor return ($471 for BCC of the cheek). When tumor burden is limited, Mohs micrographic surgery (MMS) offers better cure rates and lower recurrence rates than surgical excision, at comparable cost (MMS $1263; SE $949). Surgical excision with permanent sections may be indicated when tumor burden is more extensive or if molecular testing is necessary. The utility of surgical excision with frozen sections, which costs substantially more than MMS without comparable outcomes, is less clear (SE with frozen sections $2334-$3085). Less data exist on non-surgical treatments for BCC. These techniques cost less, but recurrence risk is high. Side effects of nonsurgical treatment are limited to local skin reactions, and cosmesis is good.
Cryotherapy, 5-FU, and MAL-PDT are all more affordable than surgery, but high recurrence rates increase the risk of secondary financial and psychosocial burden (recurrence rates 21-39%; cost $100-270). Radiation therapy offers better clearance rates than other nonsurgical treatments but is associated with similar recurrence rates and a significantly larger financial burden ($2591-$3460 for BCC of the cheek). Treatments for advanced or metastatic BCC are extremely costly, but few patients require their use, and the societal cost burden remains low. Vismodegib and sonidegib have good response rates but substantial side effects, and therapy should be combined with multidisciplinary care and palliative measures. Expert review has found sonidegib to be the less expensive and more efficacious option (vismodegib $128,358; sonidegib $122,579). Platinum therapy, while not FDA-approved, is also effective but expensive (~$91,435). Immunotherapy offers a new line of treatment in patients intolerant of hedgehog inhibitors ($683,061). Conclusion: Dermatologists working within resource-compressed practices and with resource-limited patients must prudently manage the healthcare dollar. Surgical therapies for BCC offer the lowest risk of recurrence at the most reasonable cost. Non-surgical therapies are more affordable, but high recurrence rates increase the risk of secondary financial and psychosocial burdens. Treatments for advanced BCC are incredibly costly, but the low incidence means the overall cost to the system is low.
Keywords: nonmelanoma skin cancer, basal cell skin cancer, squamous cell skin cancer, cost of care
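As a simple illustration, the per-treatment figures quoted above can be ordered programmatically; where the abstract gives a range, the midpoint is used. This is an illustrative sketch of the cost comparison, not a cost-effectiveness model:

```python
# Illustrative ordering of the treatment costs quoted above; where the abstract
# gives a range, the midpoint is used. A sketch, not a cost-effectiveness model.

treatments = {
    "curettage and electrodesiccation": 471,      # BCC of the cheek
    "surgical excision (permanent sections)": 949,
    "Mohs micrographic surgery": 1263,
    "surgical excision (frozen sections)": 2710,  # midpoint of $2334-$3085
    "cryotherapy / 5-FU / MAL-PDT": 185,          # midpoint of $100-$270
    "radiation therapy": 3026,                    # midpoint of $2591-$3460
}

by_cost = sorted(treatments, key=treatments.get)  # cheapest first
```

Note that ordering by cost alone omits the recurrence rates that drive the secondary financial burden discussed in the abstract.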
149 The in Vitro and in Vivo Antifungal Activity of Terminalia Mantaly on Aspergillus Species Using Drosophila melanogaster (UAS-Diptericin) As a Model
Authors: Ponchang Apollos Wuyep, Alice Njolke Mafe, Longchi Satkat Zacheaus, Dogun Ojochogu, Dabot Ayuba Yakubu
Abstract:
Fungi cause huge losses when infections occur in both plants and animals. Synthetic antifungal drugs are mostly very expensive and highly cytotoxic. This study aimed to determine the in vitro and in vivo antifungal activities of the leaf and stem extracts of Terminalia mantaly (umbrella tree) H. Perrier on Aspergillus species, in a bid to identify potential sources of cheap starting materials for the synthesis of new drugs to address growing antimicrobial resistance. Powdered T. mantaly leaf and stem material was extracted by fractionation using the solvent partition coefficient method, in graded form in the order n-hexane, ethyl acetate, methanol, and distilled water. Phytochemical screening of each fraction revealed the presence of alkaloids, saponins, tannins, flavonoids, carbohydrates, steroids, anthraquinones, cardiac glycosides, and terpenoids in varying degrees. The agar well diffusion technique was used to screen the fractions for antifungal activity against clinical isolates of Aspergillus species (Aspergillus flavus and Aspergillus fumigatus). The minimum inhibitory concentration (MIC50) of the most active extracts was determined by the broth dilution method. The fraction tests indicated high antifungal activity, with zones of inhibition ranging from 6 to 26 mm and 8 to 30 mm (leaf fractions) and 10 to 34 mm and 14 to 36 mm (stem fractions) on A. flavus and A. fumigatus, respectively. All the fractions showed antifungal activity in a dose-response relationship at concentrations of 62.5 mg/ml, 125 mg/ml, 250 mg/ml, and 500 mg/ml. The ethyl acetate, hexane, and methanol fractions showed the best antifungal efficacy in vitro, as the most potent fractions, with MICs ranging from 62.5 to 125 mg/ml.
There was no statistically significant difference (P > 0.05) in potency among the eight fractions from leaf and stem (hexane, ethyl acetate, methanol, and distilled water), the antifungal fluconazole, which served as positive control, and 10% DMSO (dimethyl sulfoxide), which served as negative control. In the in vivo investigations, the ingestion technique was used for the infection studies. Female Drosophila melanogaster (UAS-Diptericin) flies were divided into normal flies (positive control), flies infected with A. fumigatus and not treated (negative control), and infected flies placed on diets containing the fractions (MSM and HSM, each at concentrations of 10 mg/ml, 20 mg/ml, 30 mg/ml, 40 mg/ml, 50 mg/ml, 60 mg/ml, 70 mg/ml, 80 mg/ml, 90 mg/ml, and 100 mg/ml) or the control drug (fluconazole). The flies were observed for fifteen (15) days, and the total mortality of flies was recorded each day. The results of the study reveal that the flies were susceptible to infection with A. fumigatus and responded to treatment, with greatest effectiveness at 50 mg/ml, 60 mg/ml, and 70 mg/ml for both the methanol and hexane stem fractions. Therefore, the methanol and hexane stem fractions of T. mantaly contain therapeutically useful compounds, justifying the traditional use of this plant for the treatment of fungal infections.
Keywords: Terminalia mantaly, Aspergillus fumigatus, cytotoxic, Drosophila melanogaster, antifungal
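The broth-dilution MIC readout described above amounts to finding the lowest concentration in the doubling series that fully inhibits growth; a minimal sketch, with invented growth observations:

```python
# Minimal sketch of reading off an MIC from a broth-dilution series such as the
# 62.5-500 mg/ml doubling concentrations used above. The growth observations
# below are invented for illustration.

def mic(results):
    """results maps concentration (mg/ml) -> growth observed (True/False).
    Returns the lowest concentration that fully inhibits growth, or None."""
    inhibitory = [conc for conc, growth in results.items() if not growth]
    return min(inhibitory) if inhibitory else None

series = {62.5: True, 125: False, 250: False, 500: False}
```

With this series, growth is still visible at 62.5 mg/ml but absent from 125 mg/ml upward, so the MIC would be read as 125 mg/ml, within the 62.5-125 mg/ml range reported for the most potent fractions.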
148 Severe Post Operative Gas Gangrene of the Liver: Off-Label Treatment by Percutaneous Radiofrequency Ablation
Authors: Luciano Tarantino
Abstract:
Gas gangrene is a rare, severe infection caused by Clostridium species, with a very high mortality rate. The infection causes a localized, non-suppurative, gas-producing lesion that releases harmful toxins which impair the inflammatory response, causing vessel damage and multiple organ failure. Gas gangrene of the liver is very rare and develops suddenly, often as a complication of abdominal surgery and liver transplantation. The present paper deals with a case of gas gangrene of the liver that occurred after percutaneous MW ablation of hepatocellular carcinoma, resulting in progressive liver necrosis and multi-organ failure in spite of specific antibiotic administration. The patient was successfully treated with percutaneous radiofrequency ablation. Case report: Female, 76 years old, Child A class cirrhosis, treated with synchronous insertion of 3 MW antennae for a large HCC (5.5 cm) in the VIII segment. 24 hours after treatment, the patient was asymptomatic and left the hospital. 2 days later, she complained of fever, weakness, abdominal swelling, and pain. Abdominal US detected a 2.3 cm gas-containing area, eccentric within the large (7 cm) ablated area. The patient was promptly hospitalized with a diagnosis of anaerobic liver abscess and started antibiotic therapy with imipenem/cilastatin + metronidazole + teicoplanin. On the fourth day, the patient was moved to the ICU because of dyspnea, congestive heart failure, atrial fibrillation, right pleural effusion, ascites, and renal failure. Blood tests demonstrated severe leukopenia and neutropenia, anemia, increased creatinine and blood nitrogen, high-level FDP, and high INR. Blood cultures were negative. On US, unenhanced CT, and CEUS, progressive enlargement of the infected liver lesion was observed. Percutaneous drainage was attempted, but only drops of non-suppurative brownish material could be obtained. Pleural and peritoneal drainages gave serosanguineous muddy fluid.
The surgeon and the anesthesiologist excluded any indication for surgical resection because of the high perioperative mortality risk. Therefore, we asked for the informed consent of the patient and her relatives to treat the gangrenous liver lesion by percutaneous ablation. Under conscious sedation, percutaneous RFA of the GG was performed by double insertion of 3 cool-tip needles (Covidien LDT, USA) into the infected area. The procedure was well tolerated by the patient. A dramatic improvement in the patient's condition was observed over the subsequent 24 hours and thereafter. Fever and dyspnea disappeared. Normalization of blood tests, including creatinine, was observed within 4 days. Heart performance improved, and 10 days after the RFA the patient left the hospital; she was followed up weekly as an outpatient for 2 months and every two months thereafter. At 18 months of follow-up, the patient is well compensated (Child-Pugh class B7), without any peritoneal or pleural effusion and without any HCC recurrence at imaging (US every 3 months, CT every 6 months). Percutaneous RFA could be a valuable therapy for focal GG of the liver in patients who do not respond to antibiotics and when surgery and liver transplantation are not feasible. A fast and early indication is needed in case of rapid worsening of the patient's condition.
Keywords: liver tumor ablation, interventional ultrasound, liver infection, gas gangrene, radiofrequency ablation
147 Understanding the Perceived Barriers and Facilitators to Exercise Participation in the Workplace
Authors: Jayden R. Hunter, Brett A. Gordon, Stephen R. Bird, Amanda C. Benson
Abstract:
The World Health Organisation recognises the workplace as an important setting for exercise promotion, with potential benefits including improved employee health and fitness, and reduced worker absenteeism and presenteeism. Despite these potential benefits to both employee and employer, there is a lack of evidence supporting the long-term effectiveness of workplace exercise programs. There is, therefore, a need for better-informed programs that cater to employee exercise preferences. Specifically, workplace exercise programs should address any time, motivation, internal, and external barriers to participation reported by sub-groups of employees. This study sought to compare exercise participation with perceived barriers and facilitators to workplace exercise engagement among university employees. This information is needed to design and implement wider-reaching programs aiming to maximise long-term employee exercise adherence and subsequent health, fitness, and productivity benefits. An online survey was advertised at an Australian university with the potential to reach 3,104 full-time employees. Along with exercise participation (International Physical Activity Questionnaire) and behaviour (stage of behaviour change in relation to physical activity questionnaire), perceived barriers (Corporate Exercise Barriers Scale) and facilitators to workplace exercise participation were identified. The survey response rate was 8.1% (252 full-time employees; 95% white-collar; 60% female; 79.4% aged 30–59 years; 57% professional and 38% academic). Most employees reported meeting (43.7%) or exceeding (42.9%) exercise guidelines over the previous week (i.e., ⩾30 min of moderate-intensity exercise on most days or ⩾25 min of vigorous-intensity exercise on at least three days per week).
Reported exercise behaviour over the previous six months showed that 64.7% of employees were in maintenance, 8.3% in action, 10.9% in preparation, 12.4% in contemplation, and 3.8% in the pre-contemplation stage of change. Perceived barriers to workplace exercise participation were significantly higher in employees not attaining weekly exercise guidelines than in those meeting or exceeding guidelines, including lack of time or reduced motivation (p < 0.001; partial eta squared = 0.24, a large effect), exercise attitude (p < 0.05; partial eta squared = 0.04, a small effect), and internal (p < 0.01; partial eta squared = 0.10, a moderate effect) and external (p < 0.01; partial eta squared = 0.06, a moderate effect) barriers. The most frequently reported exercise facilitators were personal training (particularly for insufficiently active employees; 33%) and group exercise classes (20%). The most frequently cited preferred modes of exercise were walking (70%), swimming (50%), gym (48%), and cycling (45%). In conclusion, providing additional means of support, such as individualised gym, swimming, and cycling programs with personal supervision and guidance, may be particularly useful for employees not meeting recommended moderate-vigorous volumes of exercise, to help overcome reported exercise barriers and improve participation, health, and fitness. While individual biopsychosocial factors should be considered when making recommendations for interventions, the specific barriers and facilitators to workplace exercise participation identified by this study can inform the development of workplace exercise programs aiming to broaden employee engagement and promote greater ongoing exercise adherence.
This is especially important for the uptake of less active employees, who perceive greater barriers to workplace exercise participation than their more active colleagues.
Keywords: exercise barriers, exercise facilitators, physical activity, workplace health
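The effect sizes reported in the abstract above are partial eta squared values, which are computed from ANOVA sums of squares as SS_effect / (SS_effect + SS_error); a minimal sketch, with sums of squares invented to reproduce the reported 0.24 "large" effect:

```python
# Minimal sketch of the partial eta squared effect size used in the abstract
# above. The sums of squares here are invented so the example reproduces the
# reported 0.24 value; they are not the study's actual ANOVA output.

def partial_eta_squared(ss_effect, ss_error):
    # Proportion of variance attributable to the effect, with other
    # effects partialled out of the denominator.
    return ss_effect / (ss_effect + ss_error)

# e.g. 24 units of effect variance against 76 units of error variance gives
# the "large" effect reported for time/motivation barriers
large_effect = partial_eta_squared(24.0, 76.0)
```

Conventional benchmarks (roughly 0.01 small, 0.06 moderate, 0.14 large) match the labels attached to the abstract's values.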
146 Big Data Applications for the Transport Sector
Authors: Antonella Falanga, Armando Cartenì
Abstract:
Today, an unprecedented amount of data coming from several sources, including mobile devices, sensors, tracking systems, and online platforms, characterizes our lives. The term “big data” refers not only to the quantity of data but also to the variety and speed of data generation. These data hold valuable insights that, when extracted and analyzed, facilitate informed decision-making. The 4Vs of big data (velocity, volume, variety, and value) highlight essential aspects, showcasing the rapid generation, vast quantities, diverse sources, and potential value addition of these kinds of data. This surge of information has revolutionized many sectors, such as business for improving decision-making processes, healthcare for clinical record analysis and medical research, education for enhancing teaching methodologies, agriculture for optimizing crop management, finance for risk assessment and fraud detection, media and entertainment for personalized content recommendations, emergency management for real-time response during crises and events, and mobility for urban planning and the design/management of public and private transport services. Big data's pervasive impact enhances societal aspects, elevating quality of life, service efficiency, and problem-solving capacities. However, during this transformative era, new challenges arise, including data quality, privacy, data security, cybersecurity, interoperability, the need for advanced infrastructures, and staff training. Within the transportation sector (the one investigated in this research), applications span planning, designing, and managing systems and mobility services. Among the most common big data applications within the transport sector are, for example, real-time traffic monitoring, bus/freight vehicle route optimization, vehicle maintenance, road safety, and autonomous and connected vehicle applications. Benefits include reductions in travel times, road accidents, and pollutant emissions.
Within these issues, proper transport demand estimation is crucial for sustainable transportation planning. Evaluating the impact of sustainable mobility policies starts with a quantitative analysis of travel demand. Achieving transportation decarbonization goals hinges on precise estimations of demand for individual transport modes. Emerging technologies, offering substantial big data at lower costs than traditional methods, play a pivotal role in this context. Starting from these considerations, this study explores the usefulness of big data for transport demand estimation. This research focuses on leveraging (big) data collected during the COVID-19 pandemic to estimate the evolution of mobility demand in Italy. Estimation results reveal, in the post-COVID-19 era, more than 96 million national daily trips, about 2.6 trips per capita, with a mobile population of more than 37.6 million Italian travelers per day. Overall, this research allows us to conclude that big data enhance rational decision-making for mobility demand estimation, which is imperative for adeptly planning and allocating investments in transportation infrastructures and services.
Keywords: big data, cloud computing, decision-making, mobility demand, transportation
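The headline demand figures quoted above are internally consistent, which a back-of-envelope check makes explicit (numbers taken from the abstract; the per-capita rate is computed over the mobile population):

```python
# Back-of-envelope check of the demand figures quoted in the abstract above.
# daily_trips and mobile_population are the abstract's own values.

daily_trips = 96e6          # > 96 million national daily trips
mobile_population = 37.6e6  # > 37.6 million Italian travelers per day

# trips per mobile traveler: ~2.55, matching the quoted "about 2.6 trips per capita"
trips_per_traveler = daily_trips / mobile_population
```

This kind of consistency check is a cheap first validation step before feeding big-data-derived demand matrices into a planning model.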
145 Delicate Balance between Cardiac Stress and Protection: Role of Mitochondrial Proteins
Authors: Zuzana Tatarkova, Ivana Pilchova, Michal Cibulka, Martin Kolisek, Peter Racay, Peter Kaplan
Abstract:
Introduction: Normal functioning of mitochondria is crucial for cardiac performance. Mitochondria undergo mitophagy and biogenesis, and mitochondrial proteins are subject to extensive post-translational modifications. The state of mitochondrial homeostasis reflects overall cellular fitness and longevity. Perturbed mitochondria produce less ATP, release greater amounts of reactive molecules, and are more prone to apoptosis. Therefore, mitochondrial turnover is an integral aspect of quality control in which dysfunctional mitochondria are selectively eliminated through mitophagy. Currently, the progressive deterioration of physiological functions is seen as an accumulation of modified/damaged proteins with limited regenerative ability, and a disturbance of the affected protein-protein communication in myocardial cells throughout aging. Methodologies: Our study used immunohistochemistry and biochemical methods (spectrophotometry, western blotting, and immunodetection), as well as more sophisticated 2D electrophoresis and mass spectrometry, for the evaluation of protein-protein interactions and specific post-translational modifications. Results and Discussion: The mitochondrial stress response to reactive species was evaluated in terms of electron transport chain (ETC) complexes, redox-active molecules, and their possible communication. Protein-protein interactions revealed a strong linkage between age and ETC protein subunits. The redox state was strongly affected in senescent mitochondria, with a shift in favor of more pro-oxidizing conditions within cardiomyocytes. Acute myocardial ischemia and ischemia-reperfusion (IR) injury affected ETC complexes I, II, and IV, with no change in complex III. Ischemia induced a decrease in total antioxidant capacity, MnSOD, GSH, and catalase activity, with recovery to some extent during reperfusion. While MnSOD protein content was higher in the IR group, activity returned to 95% of control.
Nitric oxide is one of the biological molecules that can outcompete MnSOD for superoxide and produce peroxynitrite. This process is faster than dismutation and led to a 10-fold higher production of nitrotyrosine after IR injury in adult hearts, with higher protection in senescent ones. 2D protein profiling revealed 140 mitochondrial proteins, 12 of them with significant changes after IR injury, and 36 individual nitrotyrosine-modified proteins further identified by mass spectrometry. Linking these two groups, 5 proteins were both altered after IR and nitrated, but only one showed massive nitration relative to its lowered protein content after IR injury in adult hearts. Conclusions: Senescent cells have a greater proportion of protein content that might be modulated by several post-translational modifications. If these protein modifications are connected to functional consequences, and protein-protein interactions are revealed, the link may lead to a solution. Taken all together, dysfunctional proteostasis can play a causative role, and restoration of the protein homeostasis machinery is protective against aging and possibly age-related disorders. This work was supported by the project VEGA 1/0018/18 and by the project 'Competence Center for Research and Development in the Field of Diagnostics and Therapy of Oncological Diseases', ITMS: 26220220153, co-financed from EU sources.
Keywords: aging heart, mitochondria, proteomics, redox state
144 Celebrity Culture and Social Role of Celebrities in Türkiye during the 1990s: The Case of Türkiye Newspaper, Radio, Television (TGRT) Channel
Authors: Yelda Yenel, Orkut Acele
Abstract:
In a media-saturated world, celebrities have become ubiquitous figures, encountered both in public spaces and within the privacy of our homes, seamlessly integrating into daily life. From Alexander the Great to contemporary media personalities, the image of the celebrity has persisted throughout history, manifesting in various forms and contexts. Over time, as the relationship between society and the market evolved, so too did the roles and behaviors of celebrities. These transformations offer insights into the cultural climate, revealing shifts in habits and worldviews. In Türkiye, the emergence of private television channels brought an influx of celebrities into everyday life, making them a pervasive part of daily routines. To understand modern celebrity culture, it is essential to examine the ideological functions of media within political, economic, and social contexts. Within this framework, celebrities serve as both reflections and creators of cultural values and, at times, act as intermediaries, offering insights into the society of their era. Starting its broadcasting life in 1992 with religious films and religious conversation programs, the Türkiye Newspaper, Radio, Television channel (TGRT) later changed its appearance, its slogan, and the celebrities it featured in response to the political atmosphere. Celebrities played a critical role in the transformation from the existing slogan 'Peace has come to the screen' to 'Watch and see what will happen'. Celebrities hold significant roles in society, and their images are produced and circulated by various actors, including media organizations and public relations teams. Understanding these dynamics is crucial for analyzing their influence and impact. This study aims to explore Turkish society in the 1990s, focusing on TGRT and its visual and discursive characteristics regarding celebrity figures such as Seda Sayan.
The first section examines the historical development of celebrity culture and its transformations, guided by the conceptual framework of celebrity studies. The complex and interconnected image of celebrity, as introduced by post-structuralist approaches, plays a fundamental role in making sense of existing relationships. This section traces the existence and functions of celebrities from antiquity to the present day. The second section explores the economic, social, and cultural contexts of 1990s Türkiye, focusing on the media landscape and visibility that became prominent in the neoliberal era following the 1980s. This section also discusses the political factors underlying TGRT's transformation, such as the 1997 military memorandum. The third section analyzes TGRT as a case study, focusing on its significance as an Islamic television channel and the shifts in its public image, categorized into two distinct periods. The channel's programming, which aligned with Islamic teachings, and the celebrities who featured prominently during these periods became the public face of both TGRT and the broader society. In particular, the transition to a more 'secular' format during TGRT's second phase is analyzed, focusing on changes in celebrity attire and program formats. This study reveals that celebrities are used as indicators of ideology, benefiting from this instrumentalization by enhancing their own fame and reflecting the prevailing cultural hegemony in society.
Keywords: celebrity culture, media, neoliberalism, TGRT
143 Enhancing Strategic Counter-Terrorism: Understanding How Familial Leadership Influences the Resilience of Terrorist and Insurgent Organizations in Asia
Authors: Andrew D. Henshaw
Abstract:
The research examines the influence of familial and kinship-based leadership on the resilience of politically violent organizations. Organizations of this type frequently fight in the same conflicts, though they are called 'terrorist' or 'insurgent' depending on the political foci of the time, and thus different approaches are used to combat them. The research considers them correlated phenomena with significant overlap and identifies strengths and vulnerabilities in resilience processes. The research employs paired case studies to examine resilience in organizations under significant external pressure, and achieves this by measuring three variables: (1) organizational robustness in terms of leadership and governance; (2) bounce-back response efficiency to external pressures and adaptation to endogenous and exogenous shock; (3) perpetuity of operational and attack capability, and political legitimacy. The research makes three hypotheses. First, familial/kinship leadership groups have a significant effect on organizational resilience in terms of informal operations. Second, non-familial/kinship organizations suffer in terms of heightened security transaction costs and the social economics surrounding recruitment, retention, and replacement. Third, resilience in non-familial organizations likely stems from critical external supports, like state sponsorship or powerful patrons, rather than organic resilience dynamics. The case studies pair familial organizations with non-familial organizations. Set 1: the Haqqani Network (HQN), paired with Lashkar-e-Toiba (LeT). Set 2: Jemaah Islamiyah (JI), paired with the Abu Sayyaf Group (ASG). Case studies were selected based on three requirements: contrasting governance types, exposure to significant external pressures, and geographical similarity. The case study sets were examined over 24 months following periods of significantly heightened operational activity.
This enabled empirical measurement of the variables as substantial external pressures came into force. The rationale for the research is clear: nearly all organizations have some nexus of familial interconnectedness. Examining familial leadership networks does not further our understanding of how terrorism and insurgency originate; however, the central focus of the research does address how they persist. The sparse attention to this in the existing literature presents an unexplored yet important area of security studies. Furthermore, social capital in familial systems is largely automatic and organic, given at birth or through kinship. It reduces security vetting costs for recruits, fighters, and supporters, which lowers liabilities and entry costs while raising organizational efficiency and exit costs. Better understanding of these processes is needed to exploit strengths into weaknesses. The outcomes and implications of the research have critical relevance to future operational policy development. Increased clarity of internal trust dynamics, social capital, and power flows is essential to fracturing and manipulating the kinship nexus. This is highly valuable to external pressure mechanisms such as counter-terrorism, counterinsurgency, and strategic intelligence methods seeking to penetrate, manipulate, degrade, or destroy the resilience of politically violent organizations.
Keywords: counterinsurgency (COIN), counter-terrorism, familial influence, insurgency, intelligence, kinship, resilience, terrorism
Procedia PDF Downloads 312
142 Runoff Estimates of Rapidly Urbanizing Indian Cities: An Integrated Modeling Approach
Authors: Rupesh S. Gundewar, Kanchan C. Khare
Abstract:
Runoff contribution from urban areas comes mainly from manmade structures and a few natural contributors. The manmade structures are buildings, roads, and other paved areas, whereas the natural contributors are groundwater, overland flows, etc. Runoff is attenuated by manmade as well as natural storage. Manmade storages are storage tanks or other storage structures such as soakaways or soak pits, which are more common in western and European countries. Natural storages include catchment slope, infiltration, catchment length, channel rerouting, drainage density, depression storage, etc. A literature survey on the manmade and natural storages/inflows has presented the percentage contribution of each individually. Sanders et al. report that a vegetation canopy reduces runoff by 7% to 12%. Nassif et al. report that catchment slope affects rainfall runoff by 16% on bare standard soil and 24% on grassed soil. Infiltration, being dependent on the pervious/impervious ratio, is catchment specific, but the literature survey presents a range of 15% to 30% loss of rainfall runoff across various catchment study areas. Catchment length and channel rerouting also play a considerable role in reducing rainfall runoff. Groundwater infiltration inflow adds to runoff where the water table is very shallow and the soil saturates even in a low-intensity storm; this inflow, together with surface inflow, contributes approximately 2% of the total runoff volume. Considering the various contributing factors, the literature survey indicates that an integrated modelling approach needs to be considered. Traditional storm water network models can predict to a fair/acceptable degree of accuracy provided no interaction with receiving waters (river, sea, canal, etc.), ground infiltration, treatment works, etc. is assumed.
When such interactions are significant, it becomes difficult to reproduce the actual flood extent using the traditional discrete modelling approach, so the true flooding situation is rarely captured accurately. Since the development of spatially distributed hydrologic models, predictions have become more accurate at the cost of requiring more detailed spatial information. The integrated approach provides a greater understanding of the performance of the entire catchment. It makes it possible to identify the source of flow in the system, understand how it is conveyed, and assess its impact on the receiving body. It also confirms important pain points, hydraulic controls, and the sources of flooding, which could not easily be understood with a discrete modelling approach. This also enables decision makers to identify solutions that can be spread throughout the catchment rather than concentrated at the single point where the problem exists. It can thus be concluded from the literature survey that the representation of urban detail can be a key differentiator in successfully understanding flooding issues. The intent of this study is to accurately predict runoff from impermeable urban areas in India. A representative area for which data were available has been selected, and predictions have been made that are corroborated with the actual measured data. Keywords: runoff, urbanization, impermeable response, flooding
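The percentage contributions surveyed above can be combined into a rough bracketing calculation. The sketch below is illustrative only, not the authors' model: it assumes the reported reductions act as independent multiplicative factors, and the catchment area and rainfall depth are hypothetical inputs.

```python
# Bracketing storm runoff from the literature-reported loss ranges quoted above:
# canopy 7-12%, slope 16-24%, infiltration 15-30%, shallow groundwater/surface
# inflow ~2%. Treating the losses as multiplicative factors is an assumption.

def runoff_volume(rainfall_mm, area_m2, canopy_loss, slope_loss,
                  infiltration_loss, groundwater_gain=0.02):
    """Runoff volume (m^3) after applying fractional losses and inflow gains."""
    gross = rainfall_mm / 1000.0 * area_m2  # gross rainfall volume, m^3
    net = gross * (1 - canopy_loss) * (1 - slope_loss) * (1 - infiltration_loss)
    return net * (1 + groundwater_gain)     # add ~2% groundwater/surface inflow

area = 10_000.0  # hypothetical 1-ha urban catchment, m^2
rain = 50.0      # hypothetical 50 mm storm

# Lower bound uses the maximum reported losses; upper bound the minimum losses.
low = runoff_volume(rain, area, canopy_loss=0.12, slope_loss=0.24, infiltration_loss=0.30)
high = runoff_volume(rain, area, canopy_loss=0.07, slope_loss=0.16, infiltration_loss=0.15)
print(f"runoff bracket: {low:.0f}-{high:.0f} m^3 of {rain / 1000.0 * area:.0f} m^3 gross")
```

Even this crude bracket shows why the individual contributions matter: the surveyed ranges span a wide interval of the gross storm volume, which is the motivation given above for an integrated model.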
Procedia PDF Downloads 248
141 Study of the Diaphragm Flexibility Effect on the Inelastic Seismic Response of Thin Wall Reinforced Concrete Buildings (TWRCB): A Purpose to Reduce the Uncertainty in the Vulnerability Estimation
Authors: A. Zapata, Orlando Arroyo, R. Bonett
Abstract:
Over the last two decades, the growing demand for housing in Latin American countries has led to the development of construction projects based on low- and medium-rise buildings with thin reinforced concrete walls. This system, known as Thin Wall Reinforced Concrete Buildings (TWRCB), uses walls with thicknesses from 100 to 150 millimetres, with flexural reinforcement formed by welded wire mesh (WWM) with diameters between 5 and 7 millimetres, arranged in one or two layers. These walls often have irregular structural configurations, including combinations of rectangular shapes. Experimental and numerical research conducted in regions where this structural system is commonplace indicates inherent weaknesses, such as limited ductility due to the WWM reinforcement and the thin element dimensions. Because of its complexity, numerical analyses have relied on two-dimensional models that do not explicitly account for the floor system, even though it plays a crucial role in distributing seismic forces among the resisting elements; these analyses instead assume a rigid-diaphragm hypothesis. To examine this assumption, two case-study buildings were selected, one low-rise and one mid-rise, representative of TWRCB in Colombia. The buildings were analyzed in OpenSees using the MVLEM-3D element for the walls and shell elements for the slabs, so that the coupling effect of the diaphragm is included in the nonlinear behaviour. Three cases are considered: a) models without a slab, b) models with rigid slabs, and c) models with flexible slabs. Incremental static (pushover) and nonlinear dynamic analyses were carried out using the set of 44 far-field ground motions of FEMA P-695, scaled by factors of 1.0 and 1.5 to estimate the probability of collapse for the design basis earthquake (DBE) and the maximum considered earthquake (MCE), according to the site locations and hazard zone of the archetypes in the Colombian NSR-10.
Base shear capacity, maximum roof displacement, individual wall base shear demands, and probabilities of collapse were calculated to evaluate the effect of absent, rigid, and flexible slabs on the nonlinear behaviour of the archetype buildings. The pushover results show that the buildings exhibit an overstrength between 1.1 and 2 when the slab is modelled explicitly, depending on the structural wall plan configuration; additionally, the nonlinear behaviour without the slab is more conservative than when the slab is represented. Including the flexible slab in the analysis underlines the importance of the slab's contribution to the distribution of shear forces between structural elements according to their design resistance and rigidity. The dynamic analyses revealed that including the slab reduces the collapse probability of this system, owing to lower displacements and deformations, enhancing residents' safety and seismic performance. Modelling the slab is therefore important to capture its real effect on the distribution of shear forces in the walls due to coupling, to estimate the correct nonlinear behaviour of this system, and to proportion the correct resistance and rigidity of the elements in design, reducing the possibility of damage during an earthquake. Keywords: thin wall reinforced concrete buildings, coupling slab, rigid diaphragm, flexible diaphragm
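As a small illustration of one quantity reported above, the overstrength factor from a pushover analysis is simply the peak base shear divided by the design base shear. The curve and design value below are entirely hypothetical; the study's OpenSees results are not reproduced here.

```python
# Hypothetical pushover curve and design base shear, used only to show how
# the overstrength factor quoted above (1.1 to 2) is obtained.
design_base_shear = 1000.0  # kN, hypothetical design value
pushover_curve = [          # (roof displacement m, base shear kN), hypothetical
    (0.00, 0.0), (0.05, 800.0), (0.10, 1500.0), (0.15, 1750.0), (0.20, 1600.0),
]
peak_shear = max(shear for _, shear in pushover_curve)
overstrength = peak_shear / design_base_shear
print(f"overstrength factor = {overstrength:.2f}")
```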
Procedia PDF Downloads 72
140 Integration of Building Information Modeling Framework for 4D Constructability Review and Clash Detection Management of a Sewage Treatment Plant
Authors: Malla Vijayeta, Y. Vijaya Kumar, N. Ramakrishna Raju, K. Satyanarayana
Abstract:
The global AEC (architecture, engineering, and construction) industry has been described as one of the domains most resistant to embracing technology. Although this digital era is inundated with software tools like CAD, STAAD, CANDY, Microsoft Project, Primavera, etc., key stakeholders have been working in silos and processes remain fragmented. Unlike the simpler project delivery methods of earlier years, current projects are fast-track, complex, risky, multidisciplinary, influenced by stakeholders, and statutorily regulated, posing extensive bottlenecks that prevent the timely completion of projects. At this juncture, a paradigm shift has surfaced in the construction industry: Building Information Modeling, aka BIM, has been a panacea for bolstering the cooperative and collaborative work of multidisciplinary teams, leading to productive, sustainable, and leaner project outcomes. Building information modeling is an integrative, stakeholder-engaging, and centralized approach providing a common platform of communication. A common misconception in the Indian construction industry is that BIM applies only to building/high-rise projects; this paper instead discusses the implementation of BIM processes/methodologies in the water and wastewater industry. It elucidates BIM 4D planning and constructability reviews of a sewage treatment plant in India. Conventional construction planning and logistics management involve a blend of experience and imagination. Even though the judgments and lessons learnt of veterans may be predictive and helpful, the uncertainty factor persists. This paper presents a case study of the real-time implementation of BIM 4D planning protocols for one of the sewage treatment plants of the Dravyavati River Rejuvenation Project in India and develops a TimeLiner to identify logistics planning and clash detection.
With these BIM processes, we find a significant reduction in duplicated tasks and rework. Another benefit is better visualization and workarounds during the conception stage, enabling the early involvement of stakeholders in the project life cycle of sewage treatment plant construction. Moreover, we also conducted an opinion poll on the benefits accrued using BIM processes versus traditional paper-based communication with 2D and 3D CAD tools. This paper thus concludes with a BIM framework for sewage treatment plant construction that achieves optimal construction coordination advantages such as 4D construction sequencing, interference checking, and clash detection and resolution through the primary engagement of all key stakeholders, thereby identifying potential risks and creating risk response strategies. However, hiccups such as hesitancy in adopting BIM technology among naive users and the limited availability of proficient BIM trainers in India pose a considerable impediment. Hence, nurturing BIM processes from conception through construction, commissioning, operation and maintenance, and deconstruction across a project's life cycle is essential for the Indian construction industry in this digital era. Keywords: integrated BIM workflow, 4D planning with BIM, building information modeling, clash detection and visualization, constructability reviews, project life cycle
Procedia PDF Downloads 121
139 Assessment and Forecasting of the Impact of Negative Environmental Factors on Public Health
Authors: Nurlan Smagulov, Aiman Konkabayeva, Akerke Sadykova, Arailym Serik
Abstract:
Introduction. Adverse environmental factors do not immediately lead to pathological changes in the body; they can instead drive the growth of pre-pathology, characterized by shifts in physiological, biochemical, immunological, and other indicators of the body's state. These disorders are unstable, reversible, and indicative of the body's reactions, offering an opportunity to judge objectively the internal structure of adaptive reactions at the level of individual organs and systems. For the body to show a stable response to chronic exposure to unfavourable environmental factors of low intensity (compared with factors of the production environment), a period called the «lag time» is needed. Results obtained without considering this factor distort reality and, for the most part, cannot reliably support the main conclusions of any work. A technique is needed that reduces methodological errors and combines mathematical logic, statistical methods, and the medical point of view, which ultimately affects the results obtained and avoids false correlations. Objective. To develop a methodology for assessing and predicting the impact of environmental factors on population health, considering the «lag time». Methods. Research objects: environmental indicators and population morbidity indicators. The database on the environmental state was compiled from the monthly newsletters of Kazhydromet; data on population morbidity were obtained from regional statistical yearbooks. When processing the statistical data, a time interval (lag) was determined for each «argument-function» pair, i.e., the interval after which the effect of the harmful factor (argument) fully manifests itself in the indicators of the organism's state (function). The lag value was determined from the cross-correlation functions of the arguments (environmental indicators) with the functions (morbidity).
Correlation coefficients (r) and their reliability (t), Fisher's criterion (F), and the share of influence (R²) of the main factor (argument) on each indicator (function) were calculated as percentages. Results. The ecological situation of an industrially developed region affects health indicators, but with nuances. Fundamentally different results were obtained when the data were processed considering the «lag time»: a pronounced correlation was revealed only after the two databases (ecology-morbidity) were shifted. For example, the lag period for dust concentration was 4 years for general morbidity and 3 years for childhood morbidity; these periods accounted for the maximum correlation coefficients and the largest share of the influencing factor. Similar results were observed for the concentrations of soot, dioxide, etc. Comprehensive statistical processing using multiple correlation-regression variance analysis confirms this statement. The method provided an integrated approach to predicting the degree of pollution of the main environmental components and to identifying the most dangerous combinations of concentrations of the leading negative environmental factors. Conclusion. The method of assessing the «environment-public health» system considering the «lag time» differs qualitatively from the traditional method (without the «lag time»). The results differ significantly and are more amenable to a logical explanation of the dependencies obtained. The method allows the quantitative and qualitative dependences within the «environment-public health» system to be presented in a different way. Keywords: ecology, morbidity, population, lag time
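The lag-selection idea described above can be sketched as a cross-correlation scan: shift the morbidity series (function) against the exposure series (argument) and keep the lag with the largest correlation coefficient. The data below are synthetic (the study used Kazhydromet bulletins and regional morbidity yearbooks), and `best_lag` is a hypothetical helper, not the authors' code.

```python
# Cross-correlation lag scan: for each candidate lag, correlate exposure[t]
# with morbidity[t + lag] and keep the lag with the largest Pearson r.
import numpy as np

def best_lag(exposure, morbidity, max_lag):
    """Return (lag, r) maximizing Pearson r between exposure[t] and morbidity[t+lag]."""
    best = (0, -2.0)
    for lag in range(max_lag + 1):
        x = np.asarray(exposure[: len(exposure) - lag], dtype=float)
        y = np.asarray(morbidity[lag:], dtype=float)
        r = np.corrcoef(x, y)[0, 1]
        if r > best[1]:
            best = (lag, r)
    return best

# Synthetic example: morbidity echoes dust concentration 4 years later, plus noise,
# mimicking the 4-year dust/general-morbidity lag reported above.
rng = np.random.default_rng(0)
dust = rng.normal(size=30)
morbidity = np.concatenate([rng.normal(size=4), 2.0 * dust[:-4]]) \
            + rng.normal(scale=0.1, size=30)
lag, r = best_lag(dust, morbidity, max_lag=6)
print(lag, round(r, 3))
```

On this synthetic series the scan recovers the built-in 4-year lag with a near-perfect coefficient, which is the behaviour the «argument-function» pairing above relies on.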
Procedia PDF Downloads 80
138 Role of Toll Like Receptor-2 in Female Genital Tuberculosis Disease Infection and Its Severity
Authors: Swati Gautam, Salman Akhtar, S. P. Jaiswar, Amita Jain
Abstract:
Background: FGTB is now a major global health problem, mostly in developing countries including India. In humans, Mycobacterium tuberculosis (M.tb) is the causative agent of infection. A high index of suspicion is required for early diagnosis because FGTB often presents asymptomatically. In macrophages, Toll-Like Receptor-2 (TLR-2) mediates the host's immune response to M.tb, and its expression on macrophages is important in determining the fate of the innate immune response. TLR-2 plays a dual role: high expression on macrophages can worsen the outcome of infection, while on the other hand it helps keep M.tb in its dormant stage, avoiding activation from the latent phase. Single nucleotide polymorphisms (SNPs) of the TLR-2 gene play an important role in susceptibility to TB among different populations and, subsequently, in the development of infertility. Methodology: This case-control study was done in the Department of Obs and Gynae and the Department of Microbiology at King George's Medical University, U.P., Lucknow, India. A total of 300 subjects (150 cases and 150 controls) were enrolled, all fulfilling the given inclusion and exclusion criteria. Inclusion criteria: age 20-35 years, menstrual irregularities, positive on acid-fast bacilli (AFB), TB-PCR, or LJ/MGIT culture in endometrial aspiration (EA). Exclusion criteria: active Koch's disease, on ATT, PCOS, endometriosis or fibroid, positive for Gonococcus or Chlamydia. Blood samples were collected in EDTA tubes from cases and healthy control women (HCW), and genomic DNA extraction was carried out by the salting-out method. Genotyping of the TLR-2 genetic variants (Arg753Gln and Arg677Trp) was performed using the amplification refractory mutation system (ARMS) PCR technique. PCR products were analyzed by electrophoresis on a 1.2% agarose gel and visualized by gel-doc.
Statistical analysis of the data was performed using the SPSS 16.3 software, computing odds ratios (OR) with 95% CI. Linkage disequilibrium (LD) analysis was done with the SNPStats online software. Results: For the TLR-2 (Arg753Gln) polymorphism, a significant risk of FGTB was observed with the GG homozygous mutant genotype (OR=13, CI=0.71-237.7, p=0.05) and the AG heterozygous mutant genotype (OR=13.7, CI=0.76-248.06, p=0.03); however, the G allele (OR=1.09, CI=0.78-1.52, p=0.67) individually was not associated with FGTB. For the TLR-2 (Arg677Trp) polymorphism, a significant risk of FGTB was observed with the TT homozygous mutant genotype (OR=0.020, CI=0.001-0.341, p < 0.001), the CT heterozygous mutant genotype (OR=0.53, CI=0.33-0.86, p=0.014), and the T allele (OR=0.463, CI=0.32-0.66, p < 0.001). The TT mutant genotype was found only in FGTB cases, and the frequency of CT heterozygotes was higher in the control group than in the FGTB group; thus, the CT genotype acted as a protective variant against FGTB susceptibility. In the haplotype analysis of the TLR-2 genetic variants, four possible combinations (G-T, A-C, G-C, and A-T) were obtained. The frequency of haplotype A-C was significantly higher in FGTB cases (0.32); this haplotype was absent from the control group and found only in FGTB cases. Conclusion: The study showed a significant association of both TLR-2 genetic variants with FGTB disease. Moreover, the presence of specific associated genotypes/alleles suggests the possibility of assessing disease severity, and a clinical approach aimed at preventing extensive damage by the disease may also help its early detection. Keywords: ARMS, EDTA, FGTB, TLR
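Odds ratios with 95% confidence intervals, as reported above, follow the standard 2x2-table computation (Woolf's log-odds method). A minimal sketch with hypothetical genotype counts, not the study's data:

```python
# Odds ratio and Woolf 95% CI from a 2x2 genotype table. The counts passed
# below are hypothetical, chosen only to illustrate the calculation.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a = exposed cases, b = exposed controls, c = unexposed cases, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(30, 15, 120, 135)  # hypothetical genotype counts
print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```

A CI that excludes 1 (as for several genotypes above) indicates a statistically significant association; a CI that straddles 1 (as for the G allele above) does not.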
Procedia PDF Downloads 303
137 Cross-Cultural Conflict Management in Transnational Business Relationships: A Qualitative Study with Top Executives in Chinese, German and Middle Eastern Cases
Authors: Sandra Hartl, Meena Chavan
Abstract:
This paper presents the outcome of a four-year Ph.D. research project on cross-cultural conflict management in transnational business relationships. It investigates the important and complex problem of managing conflicts that arise across cultures in business relationships and identifies conflict resolution strategies. The paper focuses in particular on transnational relationships within a Chinese, German, and Middle Eastern framework. Unlike many papers on this issue built on experiments with international MBA students, this research provides real-life cases of cross-cultural conflicts, which are not easy to capture. Its uniqueness lies in the real case data, gathered by interviewing top executives in management positions at large multinational corporations through a qualitative case-study approach. The paper makes a valuable contribution to the theory of cross-cultural conflicts and, despite the sensitivity of the topic, presents real business data about breaches of contract between counterparties engaged in transnational operating organizations. The overarching aim of this research is to identify the significance of the cultural and communication factors embedded in cross-cultural business conflicts. It asks, from a cultural perspective, what factors lead to the conflicts in each of the cases, what the causes are, and what role culture plays in identifying effective strategies for resolving international disputes in an increasingly globalized business world. The results of 20 face-to-face interviews, which were conducted, recorded, transcribed, and then analyzed using the NVivo qualitative data analysis system, are outlined. The outcomes make evident that the factors leading to conflicts fall broadly under seven themes: communication, cultural difference, environmental issues, work structures, knowledge and skills, cultural anxiety, and personal characteristics.
When evaluating the causes of conflict, it becomes apparent that they are multidimensional. Irrespective of the conflict type (relationship-based, task-based, or due to individual personal differences), relationships are almost always an element of the conflict. Cultural differences, a critical factor in conflicts, result from different cultures placing different levels of importance on relationships. Communication issues, another cause of conflict, also reflect the different relationship styles favoured by different cultures. In identifying effective strategies for solving cross-cultural business conflicts, this research finds that solutions need to consider the national culture (country-specific characteristics), organizational culture, and individual culture of the persons engaged in the conflict, and how these are interlinked. The outcomes identify practical dispute resolution strategies for resolving cross-cultural business conflicts with reference to communication, empathy, and training to improve cultural understanding and cultural competence, through the use of mediation. To conclude, the findings of this research not only add to academic knowledge of cross-cultural conflict management in transnational businesses but also add value to numerous cross-border business relationships worldwide. Above all, the research identifies the influence of culture, communication, and cross-cultural competence in reducing cross-cultural business conflicts in transnational business. Keywords: business conflict, conflict management, cross-cultural communication, dispute resolution
Procedia PDF Downloads 162
136 Teachers’ Language Insecurity in English as a Second Language Instruction: Developing Effective In-Service Training
Authors: Mamiko Orii
Abstract:
This study reports on the sources of language insecurity among primary school second-language teachers. Furthermore, it aims to develop an in-service training course to reduce anxiety and build sufficient English communication skills. Language (linguistic) insecurity refers to a lack of confidence experienced by language speakers. Second-language/non-native learners in particular often experience insecurity, which influences their learning efficacy. While language-learner insecurity has been well documented, research on the insecurity of language teaching professionals is limited. Teachers' insecurity or anxiety in using the target language may adversely affect language instruction; for example, they may avoid classroom activities requiring intensive language use. Therefore, understanding teachers' language insecurity and providing continuing education that helps teachers improve their proficiency is vital to improving teaching quality. This study investigated Japanese primary school teachers' language insecurity. In Japan, teachers are responsible for teaching most subjects, including English, which was recently made compulsory. Most teachers were never professionally trained in second-language instruction during college teacher-certificate preparation, leading to low confidence in English teaching. The primary source of language insecurity is a lack of confidence in English communication skills, yet teachers' actual use of English in classrooms remains unclear. Teachers' classroom speech remains a neglected area requiring improvement; a more refined programme for second-language teachers could be constructed if the areas of need can be identified.
Two questionnaires were administered to primary school teachers in Tokyo: (1) Questionnaire A: 396 teachers answered questions (on a 5-point scale) concerning classroom teaching anxiety, general English use, and needs for in-service training (summer 2021); (2) Questionnaire B: 20 teachers answered detailed questions concerning their English use (autumn 2022). Responses to Questionnaire A showed that over 80% of teachers have significant language insecurity and anxiety, mainly when speaking English in class or teaching independently. Most teachers relied on a team-teaching partner (e.g., an ALT) and avoided speaking English. Over 70% of the teachers said they would like to participate in training courses in classroom English. The results of Questionnaire B showed that teachers could use simple classroom English, such as greetings and basic instructions (e.g., 'stand up', 'repeat after me'), and initiate conversation (e.g., by asking questions). In contrast, teachers reported that conversations were mainly carried on in a simple question-answer style; they had difficulty continuing conversations, and responding to learners' 'on-the-spot' utterances was particularly difficult. Instruction in turn-taking patterns suited to the classroom communication context is needed. Most teachers received grammar-based instruction throughout their own English education and were predominantly exposed to display questions and form-focused corrective feedback. Therefore, strategies such as encouraging teachers to ask genuine questions (i.e., referential questions) and to respond to students with content feedback are crucial. When learners' utterances are incorrect or unsatisfactory, teachers should rephrase or extend (recast) them instead of offering explicit corrections; these strategies support a continuous conversational flow. These results offer benefits beyond Japan's English-as-a-second-language context.
They will be valuable in any context where primary school teachers are underprepared but must provide English-language instruction. Keywords: english as a second/non-native language, in-service training, primary school, teachers’ language insecurity
Procedia PDF Downloads 67
135 Converting Urban Organic Waste into Aquaculture Feeds: A Two-Step Bioconversion Approach
Authors: Aditi Chitharanjan Parmar, Marco Gottardo, Giulia Adele Tuci, Francesco Valentino
Abstract:
The generation of urban organic waste is a significant environmental problem due to the potential release of leachate and/or methane into the environment; this contributes to climate change while discarding a valuable resource that could be used in various ways. This research addresses the issue with a two-step approach linking biowaste management to the aquaculture industry via single-cell protein (SCP) production. A mixture of food waste and municipal sewage sludge (FW-MSS) was first subjected to mesophilic (37°C) anaerobic fermentation to produce a liquid stream rich in short-chain fatty acids (SCFAs), which are important building blocks for the subsequent microbial biomass growth. Under stable fermentation activity (after 1 week of operation), the average SCFA concentration was 21.3 ± 0.4 g COD/L, with a CODSCFA/CODSOL ratio of 0.77 COD/COD. This indicated a successful strategy for accumulating SCFAs from the biowaste mixture by applying a short hydraulic retention time (HRT; 4 days) and a medium organic loading rate (OLR; 7-12 g VS/(L·d)) in the lab-scale (V = 4 L) continuous stirred tank reactor (CSTR). The SCFA-rich effluent was then utilized as feedstock for the growth of a mixed microbial consortium able to store polyhydroxyalkanoates (PHA), a class of biopolymers completely biodegradable in nature and produced as an intracellular carbon/energy reserve. Given the demonstrated antimicrobial and immunomodulatory effects of intracellular PHA on various fish species, the PHA-producing culture was intended to be utilized as SCP in aquaculture.
The growth of PHA-storing biomass was obtained in a 2-L sequencing batch reactor (SBR), fully aerobic and set at 25°C. To stimulate a storage response (PHA production) in the cells, feast-famine conditions were adopted, consisting of alternating cycles in which the biomass was exposed to an initial abundance of substrate (feast phase) followed by a starvation period (famine phase). To avoid the proliferation of bacteria unable to store PHA, the SBR was maintained at a low HRT (2 days). Alongside the stable growth of the mixed microbial consortium (the growth yield was estimated at 0.47 COD/COD), the feast-famine strategy enhanced the PHA production capacity, leading to a final PHA content in the biomass of 16.5 wt%, which is suitable for use as SCP. In fact, by incorporating the waste-derived PHA-rich biomass into fish feed at 20 wt%, the final feed would contain a PHA content of around 3.0 wt%, within the recommended range (0.2-5.0 wt%) for promoting fish health. Proximate analysis of the PHA-rich biomass revealed a good crude protein level (around 51 wt%) and the presence of all the essential amino acids (EAA), together accounting for 31% of the SCP's total amino acid composition. This suggests that the waste-derived SCP is a source of good-quality protein with good nutritional value. The approach offers a sustainable solution for urban waste management, potentially establishing a waste-to-value conversion route connecting waste management to the growing aquaculture and fish feed production sectors. Keywords: feed supplement, nutritional value, polyhydroxyalkanoates (PHA), single cell protein (SCP), urban organic waste
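The feed figures above follow from simple arithmetic: biomass containing 16.5 wt% PHA, included in the feed at 20 wt%, gives 0.165 × 0.20 ≈ 3.3 wt% PHA in the final feed, i.e. around 3 wt%, inside the cited 0.2-5.0 wt% range. A one-line check:

```python
# Back-of-envelope check of the feed formulation quoted above.
pha_in_biomass = 0.165  # wt fraction of PHA in the SCP biomass (16.5 wt%)
inclusion_rate = 0.20   # wt fraction of biomass in the final feed (20 wt%)

pha_in_feed = pha_in_biomass * inclusion_rate
print(f"PHA in feed: {pha_in_feed * 100:.1f} wt%")  # → PHA in feed: 3.3 wt%
assert 0.002 <= pha_in_feed <= 0.050  # recommended 0.2-5.0 wt% range
```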
Procedia PDF Downloads 39
134 Developing a Methodology to Examine Psychophysiological Responses during Stress Exposure and Relaxation: An Experimental Paradigm
Authors: M. Velana, G. Rinkenauer
Abstract:
Nowadays, nurses face unprecedented pressure due to ongoing global health demands. Work-related stress can cause a high physical and psychological workload, which can in turn lead to burnout. On the physiological level, stress triggers an initial activation of the sympathetic nervous and adrenomedullary systems, resulting in increased cardiac activity. Furthermore, activation of the hypothalamus-pituitary-adrenal axis provokes endocrine and immune changes, leading to the release of cortisol and cytokines in an effort to re-establish bodily balance. The current literature indicates that resilience and mindfulness exercises among nurses can effectively decrease stress and improve mood. However, it is still unknown which relaxation techniques are suitable for, and to what extent they are effective at, decreasing psychophysiological arousal deriving from either a physiological or a psychological stressor. Moreover, although cardiac activity and cortisol are promising candidates for examining the effectiveness of relaxation in reducing stress, the role of cytokines in this process remains to be clarified in order to thoroughly understand the body's response to stress and relaxation. Therefore, the main aim of the present study is to develop a comprehensive experimental paradigm and to assess different relaxation techniques, namely progressive muscle relaxation and a mindfulness exercise originating from cognitive therapy, by means of biofeedback under highly controlled laboratory conditions. An experimental between-subjects design will be employed, in which 120 participants will be randomized to either a physiological or a psychological stress experiment. In particular, the cold pressor test is a procedure in which participants immerse their non-dominant hands in ice water (2-3 °C) for 3 min and are requested to keep their hands in the water throughout.
However, they can terminate the test immediately if it becomes intolerable. A 3-min pre-test anticipation phase and a 3-min post-stress period are planned. The Trier Social Stress Test will be employed to induce psychological stress. During this laboratory stressor, participants are instructed to give a 5-min speech in front of a committee of communication specialists; before the main task, there is a 10-min anticipation period, and subsequently participants are requested to perform an unexpected arithmetic task. After stress exposure, the participants will perform one of the relaxation exercises (treatment condition) or watch a neutral video (control condition). Electrocardiography, salivary samples, and self-reports will be collected at different time points. Preliminary results from the pilot study showed that the paradigm can effectively induce stress reactions and that relaxation might decrease the impact of stress exposure. It is of the utmost importance to assess how the human body responds under different stressors and relaxation exercises so that an evidence-based intervention can be transferred to a clinical setting to improve nurses' general health. Based on future laboratory findings, the research group plans to conduct a pilot-level randomized study to decrease stress and promote well-being among nurses who work in the stress-riddled environment of a hospital located in Northern Germany. Keywords: nurses, psychophysiology, relaxation, stress
Procedia PDF Downloads 109
133 Physiological Effects during Aerobatic Flights on Science Astronaut Candidates
Authors: Pedro Llanos, Diego García
Abstract:
Spaceflight is considered the last frontier in terms of science, technology, and engineering. But it is also the next frontier in terms of human physiology and performance. Having evolved for more than 200,000 years under Earth’s gravity and atmospheric conditions, humans face environmental stresses in spaceflight for which their physiology is not adapted. Hypoxia, accelerations, and radiation are among such stressors; our research involves suborbital flights aiming to develop effective countermeasures in order to assure a sustainable human presence in space. The physiologic baseline of spaceflight participants is subject to great variability driven by age, gender, fitness, and metabolic reserve. The objective of the present study is to characterize different physiologic variables in a population of STEM practitioners during an aerobatic flight. Cardiovascular and pulmonary responses were determined in Science Astronaut Candidates (SACs) during unusual-attitude aerobatic flight indoctrination. Physiologic data recordings from 20 subjects participating in high-G flight training were analyzed. These recordings were registered by a wearable sensor vest that monitored electrocardiographic tracings (ECGs) and signs of dysrhythmias or other electrical disturbances throughout the flight. The same cardiovascular parameters were also collected approximately 10 min pre-flight, during each high-G/unusual-attitude maneuver, and 10 min after the flights. The ratio (pre-flight/in-flight/post-flight) of the cardiovascular responses was calculated for comparison of inter-individual differences. The resulting tracings depicting the cardiovascular responses of the subjects were compared against the G-loads (Gs) during the aerobatic flights to analyze cardiovascular variability and fluid/pressure shifts due to the high Gs.
In-flight ECG revealed cardiac variability patterns associated with rapid G onset, in terms of reduced heart rate (HR) and some scattered dysrhythmic patterns (15% of the premature ventricular contraction type); some were considered physiological responses triggered by the high-G/unusual-attitude training, and some were considered instrument artifacts. Variation events were observed in subjects during the +Gz and –Gz maneuvers, and these may be due to sudden shifts in preload and afterload. Our data reveal that aerobatic flight influenced the subjects’ breathing rate, due in part to the increased energy expenditure from greater muscle work during these aerobatic maneuvers. Noteworthy was the high heterogeneity of physiological responses among a relatively small group of SACs exposed to similar aerobatic flights with similar G exposures. The cardiovascular responses clearly demonstrated that SACs were subjected to significant flight stress. Routine ECG monitoring during high-G/unusual-attitude flight training is recommended to capture pathology underlying dangerous dysrhythmias relevant to suborbital flight safety. More research is currently being conducted to further facilitate the development of robust medical screening, medical risk assessment approaches, and suborbital flight training in the context of the evolving commercial human suborbital spaceflight industry. A more mature and integrative medical assessment method is required to understand the physiological state and response variability among highly diverse populations of prospective suborbital flight participants.
Keywords: g force, aerobatic maneuvers, suborbital flight, hypoxia, commercial astronauts
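The pre-flight/in-flight/post-flight ratio mentioned above can be illustrated with a minimal sketch. The abstract does not state the exact formula, so normalising the mean in-flight and post-flight heart rate to the pre-flight baseline is an assumption here, and the sample values are hypothetical:

```python
import statistics

def hr_ratios(pre_hr, inflight_hr, post_hr):
    """Heart-rate ratios for one subject, relative to the pre-flight
    baseline. Baseline normalisation is an assumed interpretation of
    the 'pre-flight/in-flight/post-flight ratio' in the abstract."""
    baseline = statistics.mean(pre_hr)
    return {
        "inflight_ratio": statistics.mean(inflight_hr) / baseline,
        "post_ratio": statistics.mean(post_hr) / baseline,
    }

# Hypothetical beats-per-minute samples for one subject:
r = hr_ratios(pre_hr=[70, 72, 74],
              inflight_hr=[95, 100, 105],
              post_hr=[75, 77, 76])
```

Ratios above 1 indicate elevated cardiac activity relative to baseline, allowing inter-individual comparison as described in the study.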
Procedia PDF Downloads 128
132 Impact of Transgenic Adipose Derived Stem Cells in the Healing of Spinal Cord Injury of Dogs
Authors: Imdad Ullah Khan, Yongseok Yoon, Kyeung Uk Choi, Kwang Rae Jo, Namyul Kim, Eunbee Lee, Wan Hee Kim, Oh-Kyeong Kweon
Abstract:
The primary spinal cord injury (SCI) causes mechanical damage to the neurons and blood vessels. It leads to secondary SCI, which activates multiple pathological pathways that expand neuronal damage at the injury site. It is characterized by vascular disruption, ischemia, excitotoxicity, oxidation, inflammation, and apoptotic cell death. It causes nerve demyelination and disruption of axons, which perpetuate a loss of impulse conduction through the injured spinal cord. It also leads to the production of myelin-inhibitory molecules which, with the concomitant formation of an astroglial scar, impede axonal regeneration. Oxidation and inflammation play the pivotal role in neuronal necrosis. During the early stage of spinal cord injury, reactive oxygen species (ROS) are abundantly expressed due to defective mitochondrial metabolism and the abundant migration of phagocytes (macrophages, neutrophils). ROS cause lipid peroxidation of the cell membrane and cell death. The abundant migration of neutrophils, macrophages, and lymphocytes collectively produces pro-inflammatory cytokines such as tumor necrosis factor-alpha (TNF-α), interleukin-6 (IL-6), and interleukin-1beta (IL-1β), as well as matrix metalloproteinases, superoxide dismutase, and myeloperoxidases, which synergize neuronal apoptosis. Therefore, it is crucial to control inflammation and oxidative injury to minimize nerve cell death during secondary spinal cord injury. In response to oxidation and inflammation, heme oxygenase-1 (HO-1) is induced by the resident cells to ameliorate the milieu, while neurotrophic factors are induced to promote neuroregeneration. However, the endogenous anti-stress enzyme (HO-1) and neurotrophic factor (BDNF) do not significantly counteract the pathological events of secondary spinal cord injury. Optimum healing can therefore be induced if anti-inflammatory and neurotrophic factors are administered in greater amounts from an exogenous source.
In the first experiment, inflammation and neuroregeneration were selectively targeted. HO-1-expressing MSCs (HO-1 MSCs) and BDNF-expressing MSCs (BDNF MSCs) were co-transplanted in one group (combination group) of dogs with subacute spinal cord injury to selectively control the expression of inflammatory cytokines through HO-1 and to induce neuroregeneration through BDNF. We compared the combination group with the HO-1 MSCs, BDNF MSCs, and GFP MSCs groups. We found that the combination group showed significant improvement in functional recovery. It showed higher expression of neural markers and growth-associated protein (GAP-43) than the other groups, indicating enhanced neuroregeneration/neural sparing due to reduced expression of pro-inflammatory cytokines such as TNF-α, IL-6, and COX-2, and increased expression of anti-inflammatory markers such as IL-10 and HO-1. Histopathological study revealed less intra-parenchymal fibrosis in the injured spinal cord segment in the combination group than in the other groups. It was thus concluded that selectively targeting inflammation and neuronal growth with the combined use of HO-1 MSCs and BDNF MSCs more favorably promotes healing of the SCI: HO-1 MSCs control the inflammation, which favors BDNF-induced neuroregeneration at the injured spinal cord segment of dogs.
Keywords: HO-1 MSCs, BDNF MSCs, neuroregeneration, inflammation, anti-inflammation, spinal cord injury, dogs
Procedia PDF Downloads 117
131 High Purity Germanium Detector Characterization by Means of Monte Carlo Simulation through Application of Geant4 Toolkit
Authors: Milos Travar, Jovana Nikolov, Andrej Vranicar, Natasa Todorovic
Abstract:
Over the years, High Purity Germanium (HPGe) detectors have proved to be an excellent practical tool and, as such, have established their wide use in today’s low-background γ-spectrometry. One advantage of gamma-ray spectrometry is its easy sample preparation, as chemical processing and separation of the studied subject are not required. Thus, with a single measurement, one can simultaneously perform both qualitative and quantitative analysis. One of the most prominent features of HPGe detectors, besides their excellent efficiency, is their superior resolution. This feature allows a researcher to perform a thorough analysis by discriminating photons of similar energies in the studied spectra where they would otherwise superimpose within a single-energy peak and could thus compromise the analysis and produce erroneous results. Naturally, this feature is of great importance when radionuclides and their activity concentrations are being identified, where high precision is a necessity. In measurements of this nature, in order to reproduce good and trustworthy results, one has to first perform an adequate full-energy peak (FEP) efficiency calibration of the equipment used. However, experimental determination of the response, i.e., the efficiency curves for a given detector-sample configuration and geometry, is not always easy and requires a certain set of reference calibration sources in order to cover the broader energy ranges of interest. To overcome these difficulties, many researchers have turned to software toolkits that implement the Monte Carlo method (e.g., MCNP, FLUKA, PENELOPE, Geant4, etc.), as it has proven time and time again to be a very powerful tool. In the process of creating a reliable model, one has to have well-established and well-described specifications of the detector.
Unfortunately, the documentation that manufacturers provide alongside the equipment is rarely sufficient for this purpose. Furthermore, certain parameters tend to evolve and change over time, especially with older equipment. Deterioration of these parameters consequently decreases the active volume of the crystal and can thus affect the efficiencies by a large margin if it is not properly taken into account. In this study, the optimisation of models of two HPGe detectors through the Geant4 toolkit developed at CERN is described, with the goal of further improving simulation accuracy in calculations of FEP efficiencies by investigating the influence of certain detector variables (e.g., crystal-to-window distance, dead-layer thicknesses, the inner crystal void’s dimensions, etc.). The detectors on which the optimisation procedures were carried out were a standard traditional co-axial extended-range detector (XtRa HPGe, CANBERRA) and a broad-energy-range planar detector (BEGe, CANBERRA). The optimised models were verified through comparison with experimental data obtained from measurements of a set of point-like radioactive sources. The results for both detectors displayed good agreement with the experimental data, within an average statistical uncertainty of ∼4.6% for the XtRa and ∼1.8% for the BEGe detector over the energy ranges of 59.4−1836.1 keV and 59.4−1212.9 keV, respectively.
Keywords: HPGe detector, γ spectrometry, efficiency, Geant4 simulation, Monte Carlo method
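The verification step described above, comparing simulated FEP efficiencies against measurements of point-like sources, amounts to a relative-deviation calculation per energy. A minimal sketch follows; the efficiency values are hypothetical placeholders, not the study’s data:

```python
def relative_deviation(simulated, measured):
    """Percentage deviation of simulated full-energy-peak efficiencies
    from measured ones, per energy line. Illustrative only; the study's
    actual comparison procedure may differ."""
    return {energy: 100.0 * abs(simulated[energy] - measured[energy]) / measured[energy]
            for energy in measured}

# energy (keV) -> FEP efficiency (hypothetical numbers)
measured  = {59.4: 0.052, 661.7: 0.0121, 1332.5: 0.0068}
simulated = {59.4: 0.054, 661.7: 0.0118, 1332.5: 0.0069}

devs = relative_deviation(simulated, measured)
avg_dev = sum(devs.values()) / len(devs)
```

An average deviation of this kind is one way to summarise model agreement across the calibrated energy range, analogous to the ∼4.6% and ∼1.8% figures reported for the two detectors.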
Procedia PDF Downloads 117