Search results for: efficient score function
104 Mapping Iron Content in the Brain with Magnetic Resonance Imaging and Machine Learning
Authors: Gabrielle Robertson, Matthew Downs, Joseph Dagher
Abstract:
Iron deposition in the brain has been linked with a host of neurological disorders such as Alzheimer’s, Parkinson’s, and Multiple Sclerosis. While some treatment options exist, there are no objective measurement tools that allow for the monitoring of iron levels in the brain in vivo. An emerging Magnetic Resonance Imaging (MRI) method has been recently proposed to deduce iron concentration through quantitative measurement of magnetic susceptibility. This is a multi-step process that involves repeated modeling of physical processes via approximate numerical solutions. For example, the last two steps of this Quantitative Susceptibility Mapping (QSM) method involve I) mapping magnetic field into magnetic susceptibility and II) mapping magnetic susceptibility into iron concentration. Process I involves solving an ill-posed inverse problem by using regularization via injection of prior belief. The end result of Process II depends strongly on the model used to describe the molecular content of each voxel (type of iron, water fraction, etc.). Due to these factors, the accuracy and repeatability of QSM have been an active area of research in the MRI and medical imaging community. This work aims to estimate iron concentration in the brain via a single step. A synthetic numerical model of the human head was created by automatically and manually segmenting the human head on a high-resolution grid (640x640x640, 0.4 mm³), yielding detailed structures such as microvasculature and subcortical regions as well as bone, soft tissue, cerebrospinal fluid, sinuses, arteries, and eyes. Each segmented region was then assigned tissue properties such as relaxation rates, proton density, electromagnetic tissue properties and iron concentration. These tissue property values were randomly selected from probability distribution functions derived from a thorough literature review. In addition to having unique tissue property values, different synthetic head realizations also possess unique structural geometry, created by morphing the boundary regions of different areas within normal physical constraints. This model of the human head is then used to create synthetic MRI measurements. This is repeated thousands of times, for different head shapes, volumes, tissue properties and noise realizations. Collectively, this constitutes a training set that is similar to in vivo data, but larger than datasets available from clinical measurements. A 3D convolutional U-Net neural network architecture was used to train data-driven Deep Learning models to solve for iron concentrations from raw MRI measurements. The performance was then tested both on synthetic data not used in training and on real in vivo data. Results showed that the model trained on synthetic MRI measurements is able to directly learn iron concentrations in areas of interest more effectively than other existing QSM reconstruction methods. For comparison, models trained on random geometric shapes (as proposed in the Deep QSM method) are less effective than models trained on realistic synthetic head models. Such an accurate method for the quantitative measurement of iron deposits in the brain would be of considerable value in clinical studies aiming to understand the role of iron in neurological disease.
Keywords: magnetic resonance imaging, MRI, iron deposition, machine learning, quantitative susceptibility mapping
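As a rough, hedged illustration of the single-step mapping described above (not the authors' actual network), the sketch below shows a small 3D U-Net-style model in PyTorch trained to map synthetic multi-channel MRI volumes to voxel-wise iron concentration; the channel count, depth, patch size and loss are illustrative assumptions.

```python
# Minimal sketch (not the authors' model): a small 3D U-Net-style network mapping
# synthetic MRI volumes directly to voxel-wise iron concentration.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch), nn.ReLU(inplace=True),
    )

class TinyUNet3D(nn.Module):
    def __init__(self, in_ch=4, base=16):       # 4 input channels is an assumption
        super().__init__()
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.pool = nn.MaxPool3d(2)
        self.bott = conv_block(base * 2, base * 4)
        self.up2 = nn.ConvTranspose3d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose3d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.head = nn.Conv3d(base, 1, kernel_size=1)   # voxel-wise iron concentration

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bott(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

# One illustrative training step on a synthetic batch (64^3 patches, random stand-in data).
model = TinyUNet3D()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
mri = torch.randn(2, 4, 64, 64, 64)       # stand-in for synthetic MRI measurements
iron = torch.rand(2, 1, 64, 64, 64)       # stand-in for ground-truth iron maps
loss = nn.functional.mse_loss(model(mri), iron)
loss.backward()
opt.step()
```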
Procedia PDF Downloads 138
103 Black-Box-Optimization Approach for High Precision Multi-Axes Forward-Feed Design
Authors: Sebastian Kehne, Alexander Epple, Werner Herfs
Abstract:
A new method for optimal selection of components for multi-axes forward-feed drive systems is proposed, in which the choice of motors, gear boxes and ball screw drives is optimized. Essential here is the synchronization of the electrical and mechanical frequency behavior of all axes, because even advanced controls (like H∞-controls) can only control a small part of the mechanical modes – namely only those of observable and controllable states whose value can be derived from the positions of external linear length measurement systems and/or rotary encoders on the motor or gear box shafts. Further problems are the unknown processing forces, like cutting forces in machine tools during normal operation, which make the estimation and control via an observer even more difficult. To start with, the open-source Modelica Feed Drive Library, which was developed at the Laboratory for Machine Tools and Production Engineering (WZL), is extended from a one-axis design to a multi-axes design. It is capable of simulating the mechanical, electrical and thermal behavior of permanent magnet synchronous machines with inverters, different gear boxes and ball screw drives in a mechanical system. To keep the calculation time down, analytical equations are used for the field- and torque-producing equivalent circuit, heat dissipation and mechanical torque at the shaft. As a first step, a small machine tool with a working area of 635 x 315 x 420 mm is taken apart, and the mechanical transfer behavior is measured with an impulse hammer and acceleration sensors. With the frequency transfer functions, a mechanical finite element model is built up, which is reduced with substructure coupling to a mass-damper system that models the most important modes of the axes. This system is modelled with the Modelica Feed Drive Library and validated by further relative measurements between machine table and spindle holder with a piezo actuator and acceleration sensors. In a next step, the choice of possible components in motor catalogues is limited by derived analytical formulas, which are based on well-known metrics for the effective power and torque of the components. The simulation in Modelica is run with different permanent magnet synchronous motors, gear boxes and ball screw drives from different suppliers. To speed up the optimization, different black-box optimization methods (surrogate-based, gradient-based and evolutionary) are tested on the case. The objective chosen is to minimize the integral of the deviations when a step is applied to the position controls of the different axes; small values indicate highly dynamic axes. In each iteration (evaluation of one set of components), the control variables are adjusted automatically to keep the overshoot below 1%. It is found that the order of the components in the optimization problem has a strong impact on the speed of the black-box optimization. An approach for efficient black-box optimization of multi-axes designs is presented in the last part. The authors would like to thank the German Research Foundation DFG for financial support of the project “Optimierung des mechatronischen Entwurfs von mehrachsigen Antriebssystemen (HE 5386/14-1 | 6954/4-1)” (English: Optimization of the Mechatronic Design of Multi-Axes Drive Systems).
Keywords: ball screw drive design, discrete optimization, forward feed drives, gear box design, linear drives, machine tools, motor design, multi-axes design
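As a loose, self-contained sketch of the discrete black-box component search described above, the loop below enumerates a hypothetical component catalogue and scores each combination; the catalogue entries and the simulate() function are placeholders standing in for the Modelica feed-drive model, and the penalty encodes the overshoot < 1% constraint.

```python
# Illustrative sketch only: discrete black-box search over hypothetical component catalogues.
import itertools, random

MOTORS     = ["M1", "M2", "M3"]          # hypothetical catalogue entries
GEARBOXES  = ["G1", "G2"]
BALLSCREWS = ["S1", "S2", "S3"]

def simulate(motor, gearbox, screw):
    """Stand-in for the Modelica simulation: returns (integrated deviation, overshoot)."""
    random.seed(hash((motor, gearbox, screw)) % 2**32)   # deterministic dummy response
    return random.uniform(0.5, 2.0), random.uniform(0.0, 0.03)

def objective(combo):
    deviation, overshoot = simulate(*combo)
    penalty = 1e3 * max(0.0, overshoot - 0.01)           # soft constraint: overshoot < 1%
    return deviation + penalty

best = min(itertools.product(MOTORS, GEARBOXES, BALLSCREWS), key=objective)
print("best component set:", best, "score:", round(objective(best), 3))
```

In practice a surrogate-based or evolutionary search would replace the exhaustive enumeration once the catalogues grow large, which is the situation the abstract addresses.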
Procedia PDF Downloads 287
102 Complete Genome Sequence Analysis of Pasteurella multocida Subspecies multocida Serotype A Strain PMTB2.1
Authors: Shagufta Jabeen, Faez J. Firdaus Abdullah, Zunita Zakaria, Nurulfiza M. Isa, Yung C. Tan, Wai Y. Yee, Abdul R. Omar
Abstract:
Pasteurella multocida (PM) is an important veterinary opportunistic pathogen, particularly associated with septicemic pasteurellosis, pneumonic pasteurellosis and hemorrhagic septicemia in cattle and buffaloes. P. multocida serotype A has been reported to cause fatal pneumonia and septicemia. Pasteurella multocida subspecies multocida serotype A Malaysian isolate PMTB2.1 was first isolated from buffaloes that died of septicemia. In this study, the genome of P. multocida strain PMTB2.1 was sequenced using a third-generation sequencing technology, the PacBio RS2 system, and analyzed bioinformatically via de novo analysis followed by in-depth analysis based on comparative genomics. De novo assembly of the PacBio raw reads generated 3 contigs; gap filling of the aligned contigs with PCR sequencing then generated a single contiguous circular chromosome with a genomic size of 2,315,138 bp and a GC content of approximately 40.32% (Accession number CP007205). The PMTB2.1 genome comprises 2,176 protein-coding sequences, 6 rRNA operons, 56 tRNAs and 4 ncRNAs. Comparative genome sequence analysis of PMTB2.1 with nine complete genomes, which include Actinobacillus pleuropneumoniae, Haemophilus parasuis, Escherichia coli and the P. multocida complete genome sequences PM70, PM36950, PMHN06, PM3480, PMHB01 and PMTB2.1, was carried out based on OrthoMCL analysis and a Venn diagram. The analysis showed that 282 CDSs (13%) are unique to PMTB2.1 and 1,125 CDSs have orthologs in all of the genomes analyzed. This reflects the overall close relationship of these bacteria and supports their classification in the Gamma subdivision of the Proteobacteria. In addition, genomic distance analysis among all nine genomes indicated that PMTB2.1 is closely related to the other five P. multocida strains, with genomic distances of less than 0.13. Synteny analysis shows subtle differences in genetic structure among the different P. multocida strains, indicating the dynamics of frequent gene transfer events among them. However, PM3480 and PM70 exhibited exceptionally large structural variation, since they were swine and chicken isolates. Furthermore, the genomic structure of PMTB2.1 most resembles that of PM36950, with a genomic size difference of approximately 34,380 bp (smaller than PM36950); the strain-specific Integrative and Conjugative Element (ICE) found only in PM36950 is absent in PMTB2.1. Meanwhile, two intact prophage sequences of approximately 62 kb were found to be present only in PMTB2.1; one of the phages is similar to the transposable phage SfMu. The phylogenomic tree was constructed and rooted with E. coli, A. pleuropneumoniae and H. parasuis based on the OrthoMCL analysis. The genome of P. multocida strain PMTB2.1 clustered with the bovine isolates of P. multocida, strains PM36950 and PMHB01, was separated from the avian isolate PM70 and the swine isolates PM3480 and PMHN06, and is distant from Actinobacillus and Haemophilus. Previous studies based on Single Nucleotide Polymorphisms (SNPs) and Multilocus Sequence Typing (MLST) were unable to show a clear phylogenetic relatedness between Pasteurella multocida and the different hosts. In conclusion, this study has provided insight into the genomic structure of PMTB2.1 in terms of potential genes that can function as virulence factors, for future study in elucidating the mechanisms behind the ability of the bacteria to cause disease in susceptible animals.
Keywords: comparative genomics, DNA sequencing, phage, phylogenomics
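For illustration only (the cluster data below are invented, not the study's OrthoMCL output), the core and strain-specific CDS counts behind the Venn-diagram comparison can be obtained with simple set operations over ortholog clusters:

```python
# Hedged sketch: counting core ortholog clusters and PMTB2.1-specific clusters from an
# OrthoMCL-style result, represented here as toy sets of genome labels per cluster.
clusters = [
    {"PMTB2.1", "PM70", "PM36950", "PMHN06", "PM3480", "PMHB01"},   # example core cluster
    {"PMTB2.1"},                                                    # PMTB2.1-specific
    {"PM70", "PM3480"},
]
genomes = {"PMTB2.1", "PM70", "PM36950", "PMHN06", "PM3480", "PMHB01"}

core   = sum(1 for c in clusters if genomes <= c)        # orthologs present in all genomes
unique = sum(1 for c in clusters if c == {"PMTB2.1"})    # clusters found only in PMTB2.1
print(f"core clusters: {core}, PMTB2.1-specific clusters: {unique}")
```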
Procedia PDF Downloads 188
101 Numerical Prediction of Width Crack of Concrete Dapped-End Beams
Authors: Jatziri Y. Moreno-Martinez, Arturo Galvan, Xavier Chavez Cardenas, Hiram Arroyo
Abstract:
Several methods have been utilized to study the prediction of cracking of concrete structures under loading. Finite element analysis is an alternative that shows good results. The aim of this work was the numerical study of crack width in reinforced concrete beams with dapped ends, which are frequently found in bridge girders and precast concrete construction. Properly restricting cracking is an important aspect of the design of dapped ends; it has been observed that cracks that exceed the allowable widths are unacceptable in an environment that is aggressive for the reinforcing steel. For simulating the crack width, the discrete crack approach was considered by means of a Cohesive Zone Model (CZM), using a function to represent the crack opening. Two cases of dapped ends were constructed and tested in the Laboratory of Structures and Materials of the Engineering Institute of UNAM. The first case considers reinforcement based on hangers as well as on vertical and horizontal rings; in the second case, 50% of the vertical stirrups in the dapped end to the main part of the beam were replaced by an equivalent area (vertically projected) of diagonal bars. The loading protocol consisted of applying symmetrical loading to reach the service load. The models were built using the software package ANSYS v. 16.2. The concrete structure was modeled using three-dimensional solid elements SOLID65, capable of cracking in tension and crushing in compression. A Drucker-Prager yield surface was used to include the plastic deformations. The reinforcement was introduced with a smeared approach. Interface delamination was modeled by traditional fracture mechanics methods, such as the nodal release technique, adopting softening relationships between tractions and separations, which in turn introduce a critical fracture energy that is also the energy required to break apart the interface surfaces. This technique is called CZM. The interface surfaces of the materials are represented by surface-to-surface contact elements (CONTA173) with bonded initial contact. The Mode I dominated bilinear CZM model assumes that the separation of the material interface is dominated by the displacement jump normal to the interface. Furthermore, the crack opening was taken into consideration according to the maximum normal contact stress, the contact gap at the completion of debonding, and the maximum equivalent tangential contact stress. The contact elements were placed in the re-entrant corner of the crack. To validate the proposed approach, the results obtained with the previous procedure were compared with the experimental tests. A good correlation between the experimental and numerical load-displacement curves was obtained; the numerical models also allowed the load-crack width curves to be obtained. In these two cases, the proposed model confirms the capability of predicting the maximum crack width, with an error of ± 30%. Finally, the orientation of the crack is fundamental for the prediction of the crack width. The results regarding the crack width can be considered good from the practical point of view, and favorable results were also obtained for the load-displacement curve of the test and the location of the crack.
Keywords: cohesive zone model, dapped-end beams, discrete crack approach, finite element analysis
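A minimal sketch of the Mode I bilinear traction-separation law that underlies the cohesive zone model described above: traction grows linearly to the maximum normal contact stress at an opening delta_0, then softens linearly to zero at the debonding gap delta_c. The parameter values below are illustrative placeholders, not the values used for the tested beams.

```python
# Bilinear cohesive law (Mode I): normal traction as a function of crack opening.
def bilinear_czm(delta, sigma_max=3.0e6, delta_0=0.01e-3, delta_c=0.12e-3):
    """Normal traction [Pa] for a normal separation delta [m]."""
    if delta <= 0.0:
        return 0.0
    if delta <= delta_0:                       # linear elastic branch
        return sigma_max * delta / delta_0
    if delta <= delta_c:                       # linear softening branch
        return sigma_max * (delta_c - delta) / (delta_c - delta_0)
    return 0.0                                 # complete debonding: traction-free crack

G_c = 0.5 * 3.0e6 * 0.12e-3                    # fracture energy = area under the curve [J/m^2]
widths = [0.0, 0.005e-3, 0.01e-3, 0.06e-3, 0.12e-3]
print([round(bilinear_czm(w) / 1e6, 2) for w in widths], "MPa;  G_c =", G_c, "J/m^2")
```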
Procedia PDF Downloads 169
100 Chronic Impact of Silver Nanoparticle on Aerobic Wastewater Biofilm
Authors: Sanaz Alizadeh, Yves Comeau, Arshath Abdul Rahim, Sunhasis Ghoshal
Abstract:
The application of silver nanoparticles (AgNPs) in personal care products, various household and industrial products has resulted in an inevitable environmental exposure of such engineered nanoparticles (ENPs). Ag ENPs, released via household and industrial wastes, reach water resource recovery facilities (WRRFs), yet the fate and transport of ENPs in WRRFs and their potential risk in the biological wastewater processes are poorly understood. Accordingly, our main objective was to elucidate the impact of long-term continuous exposure to AgNPs on biological activity of aerobic wastewater biofilm. The fate, transport and toxicity of 10 μg.L-1and 100 μg.L-1 PVP-stabilized AgNPs (50 nm) were evaluated in an attached growth biological treatment process, using lab-scale moving bed bioreactors (MBBRs). Two MBBR systems for organic matter removal were fed with a synthetic influent and operated at a hydraulic retention time (HRT) of 180 min and 60% volumetric filling ratio of Anox-K5 carriers with specific surface area of 800 m2/m3. Both reactors were operated for 85 days after reaching steady state conditions to develop a mature biofilm. The impact of AgNPs on the biological performance of the MBBRs was characterized over a period of 64 days in terms of the filtered biodegradable COD (SCOD) removal efficiency, the biofilm viability and key enzymatic activities (α-glucosidase and protease). The AgNPs were quantitatively characterized using single-particle inductively coupled plasma mass spectroscopy (spICP-MS), determining simultaneously the particle size distribution, particle concentration and dissolved silver content in influent, bioreactor and effluent samples. The generation of reactive oxygen species and the oxidative stress were assessed as the proposed toxicity mechanism of AgNPs. Results indicated that a low concentration of AgNPs (10 μg.L-1) did not significantly affect the SCOD removal efficiency whereas a significant reduction in treatment efficiency (37%) was observed at 100 μg.L-1AgNPs. Neither the viability nor the enzymatic activities of biofilm were affected at 10 μg.L-1AgNPs but a higher concentration of AgNPs induced cell membrane integrity damage resulting in 31% loss of viability and reduced α-glucosidase and protease enzymatic activities by 31% and 29%, respectively, over the 64-day exposure period. The elevated intercellular ROS in biofilm at a higher AgNPs concentration over time was consistent with a reduced biological biofilm performance, confirming the occurrence of a nanoparticle-induced oxidative stress in the heterotrophic biofilm. The spICP-MS analysis demonstrated a decrease in the nanoparticles concentration over the first 25 days, indicating a significant partitioning of AgNPs into the biofilm matrix in both reactors. The concentration of nanoparticles increased in effluent of both reactors after 25 days, however, indicating a decreased retention capacity of AgNPs in biofilm. The observed significant detachment of biofilm also contributed to a higher release of nanoparticles due to cell-wall destabilizing properties of AgNPs as an antimicrobial agent. The removal efficiency of PVP-AgNPs and the biofilm biological responses were a function of nanoparticle concentration and exposure time. 
This study contributes to a better understanding of the fate and behavior of AgNPs in biological wastewater processes, providing key information that can be used to predict the environmental risks of ENPs in aquatic ecosystems.
Keywords: biofilm, silver nanoparticle, single particle ICP-MS, toxicity, wastewater
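As a hedged illustration of the single-particle ICP-MS sizing step mentioned above (assuming spherical particles and a direct counts-to-mass calibration; the calibration factor and pulse values are placeholders, not study data), per-particle pulse intensities can be converted to diameters as follows:

```python
# Sketch under assumptions: spICP-MS pulse intensity -> particle mass -> AgNP diameter.
import math

K = 6.0e17                   # counts per gram of Ag (assumed calibration factor)
RHO_AG = 10.49               # silver density, g/cm^3

def pulse_to_diameter_nm(counts):
    mass_g = counts / K                                   # particle mass from calibration
    volume_cm3 = mass_g / RHO_AG                          # spherical particle volume
    d_cm = (6.0 * volume_cm3 / math.pi) ** (1.0 / 3.0)
    return d_cm * 1.0e7                                   # cm -> nm

pulses = [180, 420, 960]                                  # illustrative pulse intensities
print([round(pulse_to_diameter_nm(c), 1) for c in pulses], "nm")
```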
Procedia PDF Downloads 269
99 Effect of Cerebellar High Frequency rTMS on the Balance of Multiple Sclerosis Patients with Ataxia
Authors: Shereen Ismail Fawaz, Shin-Ichi Izumi, Nouran Mohamed Salah, Heba G. Saber, Ibrahim Mohamed Roushdi
Abstract:
Background: Multiple sclerosis (MS) is a chronic, inflammatory, mainly demyelinating disease of the central nervous system, more common in young adults. Cerebellar involvement is one of the most disabling lesions in MS and is usually a sign of disease progression. The cerebellum plays a major role in the planning, initiation, and organization of movement via its influence on the motor cortex and corticospinal outputs. Therefore, it contributes to controlling movement, motor adaptation, and motor learning, in addition to its vast connections with other major pathways controlling balance, such as the cerebellopropriospinal and cerebellovestibular pathways. Hence, trying to stimulate the cerebellum by facilitatory protocols may add to motor control and balance function. Non-invasive brain stimulation techniques, both repetitive transcranial magnetic stimulation (rTMS) and transcranial direct current stimulation (tDCS), have recently emerged as effective neuromodulators that influence motor and nonmotor functions of the brain. Anodal tDCS has been shown to improve motor skill learning and motor performance beyond the training period. Similarly, rTMS, when used at high frequency (>5 Hz), has a facilitatory effect on the motor cortex. Objective: Our aim was to determine the effect of high-frequency rTMS over the cerebellum in improving balance and functional ambulation of multiple sclerosis patients with ataxia. Patients and methods: This was a randomized, single-blinded, placebo-controlled prospective trial on 40 patients. The active group (N=20) received real rTMS sessions, and the control group (N=20) received sham rTMS using a placebo program designed for this treatment. Both groups received 12 sessions of high-frequency rTMS over the cerebellum, followed by an intensive exercise training program; sessions were given three times per week for four weeks. The active group protocol had a frequency of 10 Hz rTMS over the cerebellar vermis, a work period of 5 s, 25 trains, and an intertrain interval of 25 s, for a total of 1,250 pulses per session. Both groups of patients received an intensive exercise program, which included generalized strengthening exercises, endurance and aerobic training, trunk abdominal exercises, generalized balance training exercises, and task-oriented training such as boxing. The Modified ICARS was used as the primary outcome measure. Static posturography was also performed, with patients tested both with open and closed eyes. Secondary outcome measures included the Expanded Disability Status Scale (EDSS) and the 8-Meter Walk Test (8MWT). Results: The active group showed significant improvements in all the functional scales, modified ICARS, EDSS, and 8-meter walk test, in addition to significant differences in static posturography with open eyes, while the control group did not show such differences. Conclusion: Cerebellar high-frequency rTMS could be effective in the functional improvement of balance in MS patients with ataxia.
Keywords: brain neuromodulation, high frequency rTMS, cerebellar stimulation, multiple sclerosis, balance rehabilitation
Procedia PDF Downloads 92
98 Pulmonary Disease Identification Using Machine Learning and Deep Learning Techniques
Authors: Chandu Rathnayake, Isuri Anuradha
Abstract:
Early detection and accurate diagnosis of lung diseases play a crucial role in improving patient prognosis. However, conventional diagnostic methods heavily rely on subjective symptom assessments and medical imaging, often causing delays in diagnosis and treatment. To overcome this challenge, we propose a novel lung disease prediction system that integrates patient symptoms and X-ray images to provide a comprehensive and reliable diagnosis. In this project, we develop a mobile application specifically designed for detecting lung diseases. Our application leverages both patient symptoms and X-ray images to facilitate diagnosis. By combining these two sources of information, our application delivers a more accurate and comprehensive assessment of the patient's condition, minimizing the risk of misdiagnosis. Our primary aim is to create a user-friendly and accessible tool, particularly important given the current circumstances where many patients face limitations in visiting healthcare facilities. To achieve this, we employ several state-of-the-art algorithms. Firstly, the Decision Tree algorithm is utilized for efficient symptom-based classification. It analyzes patient symptoms and creates a tree-like model to predict the presence of specific lung diseases. Secondly, we employ the Random Forest algorithm, which enhances predictive power by aggregating multiple decision trees. This ensemble technique improves the accuracy and robustness of the diagnosis. Furthermore, we incorporate a deep learning model using a Convolutional Neural Network (CNN) with the ResNet50 pre-trained model. CNNs are well-suited for image analysis and feature extraction. By training the CNN on a large dataset of X-ray images, it learns to identify patterns and features indicative of lung diseases. The ResNet50 architecture, known for its excellent performance in image recognition tasks, enhances the efficiency and accuracy of our deep learning model. By combining the outputs of the decision tree-based algorithms and the deep learning model, our mobile application generates a comprehensive lung disease prediction. The application provides users with an intuitive interface to input their symptoms and upload X-ray images for analysis. The prediction generated by the system offers valuable insights into the likelihood of various lung diseases, enabling individuals to take appropriate actions and seek timely medical attention. Our proposed mobile application has significant potential to address the rising prevalence of lung diseases, particularly among young individuals with smoking addictions. By providing a quick and user-friendly approach to assessing lung health, our application empowers individuals to monitor their well-being conveniently. This solution also offers immense value in the context of limited access to healthcare facilities, enabling timely detection and intervention. In conclusion, our research presents a comprehensive lung disease prediction system that combines patient symptoms and X-ray images using advanced algorithms. By developing a mobile application, we provide an accessible tool for individuals to assess their lung health conveniently. This solution has the potential to make a significant impact on the early detection and management of lung diseases, benefiting both patients and healthcare providers.
Keywords: CNN, random forest, decision tree, machine learning, deep learning
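A minimal sketch, under stated assumptions, of how a symptom-based random forest and a ResNet50-based X-ray CNN could be fused into one prediction; the symptom features, the untrained weights and the simple averaging rule are illustrative, not the authors' exact pipeline.

```python
# Hedged sketch: late fusion of a symptom classifier and an image classifier.
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier
from torchvision.models import resnet50

# 1) Symptom branch: random forest on binary symptom vectors (toy training data).
X_sym = np.random.randint(0, 2, size=(200, 12))       # 12 hypothetical symptom flags
y     = np.random.randint(0, 2, size=200)             # 0 = healthy, 1 = disease
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_sym, y)

# 2) Imaging branch: ResNet50 backbone with a binary head (weights untrained here).
cnn = resnet50(weights=None)
cnn.fc = nn.Linear(cnn.fc.in_features, 2)
cnn.eval()

def predict(symptoms, xray):
    p_rf = rf.predict_proba(symptoms.reshape(1, -1))[0, 1]
    with torch.no_grad():
        p_cnn = torch.softmax(cnn(xray), dim=1)[0, 1].item()
    return 0.5 * p_rf + 0.5 * p_cnn                    # simple late fusion (assumed rule)

score = predict(np.random.randint(0, 2, 12), torch.randn(1, 3, 224, 224))
print("combined disease probability:", round(score, 3))
```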
Procedia PDF Downloads 74
97 Potential Benefits and Adaptation of Climate Smart Practices by Small Farmers Under Three-Crop Rice Production System in Vietnam
Authors: Azeem Tariq, Stephane De Tourdonnet, Lars Stoumann Jensen, Reiner Wassmann, Bjoern Ole Sander, Quynh Duong Vu, Trinh Van Mai, Andreas De Neergaard
Abstract:
The rice-growing area is increasing to meet the food demand of a growing population. Mostly, rice is grown on lowland, small landholder fields in most parts of the world, and it is one of the major sources of greenhouse gas (GHG) emissions from agricultural fields. Strategies such as altering water and residue (carbon) management practices are assumed to be essential to mitigate GHG emissions from flooded rice systems. The actual implementation and potential of these measures on small farmers' fields are still challenging. A field study was conducted in the Red River Delta in northern Vietnam to identify the potential challenges and barriers for small rice farmers in implementing climate-smart rice practices. The objective of this study was to develop and assess the feasibility of climate-smart rice prototypes under actual farmer conditions. A field- and science-oriented framework was used to meet our objective. The methodological framework comprised six steps: i) identification of stakeholders and possible options, ii) assessment of barriers, drawbacks/advantages of new technologies, iii) prototype design, iv) assessment of the mitigation potential of each prototype, v) scenario building and vi) scenario assessment. A farm survey was conducted to identify the existing farm practices and major constraints of small rice farmers. We proposed two water management options (pre-transplant + midseason drainage and early + midseason drainage) and one straw management option (full residue incorporation), keeping in view the farmers' constraints and barriers to implementation. To test the new typologies against existing prototypes (midseason drainage, partial residue incorporation) under local farmer conditions, a participatory field experiment was conducted for two consecutive rice seasons on farmer fields. Following the results of each season, a workshop was conducted with stakeholders (farmers, village leaders, cooperatives, irrigation staff, extensionists, agricultural officers) at local and district level to get feedback on the newly tested prototypes and to develop possible scenarios for climate-smart rice production practices. The farm analysis survey showed that the non-availability of cheap labor and the lack of alternatives for straw management lead small farmers to burn the residues in the fields, except for what is used for composting or other purposes. Our field results revealed that the application of early season drainage significantly mitigates (40-60%) the methane emissions from residue incorporation. Early season drainage was more efficient and easier to control under a cooperatively managed water system than under an individually managed water system, and it leads to both economic (9-11% higher rice yield, lower cost of production, reduced nutrient losses) and environmental (mitigated methane emissions) benefits. The participatory field study allows the assessment of the adaptation potential and possible benefits of climate-smart practices on small farmer fields. If farmers have no other residue management option, full residue incorporation with early plus midseason drainage is an adaptable and beneficial (both environmentally and economically) management option for small rice farmers.
Keywords: adaptation, climate smart agriculture, constraints, smallholders
Procedia PDF Downloads 267
96 Structured Cross System Planning and Control in Modular Production Systems by Using Agent-Based Control Loops
Authors: Simon Komesker, Achim Wagner, Martin Ruskowski
Abstract:
In times of volatile markets with fluctuating demand and the uncertainty of global supply chains, flexible production systems are the key to an efficient implementation of a desired production program. In this publication, the authors present a holistic information concept taking into account various influencing factors for operating towards the global optimum. Therefore, a strategy for the implementation of multi-level planning for a flexible, reconfigurable production system with an alternative production concept in the automotive industry is developed. The main contribution of this work is a system structure mixing central and decentral planning and control evaluated in a simulation framework. The information system structure in current production systems in the automotive industry is rigidly hierarchically organized in monolithic systems. The production program is created rule-based with the premise of achieving uniform cycle time. This program then provides the information basis for execution in subsystems at the station and process execution level. In today's era of mixed-(car-)model factories, complex conditions and conflicts arise in achieving logistics, quality, and production goals. There is no provision for feedback loops of results from the process execution level (resources) and process supporting (quality and logistics) systems and reconsideration in the planning systems. To enable a robust production flow, the complexity of production system control is artificially reduced by the line structure and results, for example in material-intensive processes (buffers and safety stocks - two container principle also for different variants). The limited degrees of freedom of line production have produced the principle of progress figure control, which results in one-time sequencing, sequential order release, and relatively inflexible capacity control. As a result, modularly structured production systems such as modular production according to known approaches with more degrees of freedom are currently difficult to represent in terms of information technology. The remedy is an information concept that supports cross-system and cross-level information processing for centralized and decentralized decision-making. Through an architecture of hierarchically organized but decoupled subsystems, the paradigm of hybrid control is used, and a holonic manufacturing system is offered, which enables flexible information provisioning and processing support. In this way, the influences from quality, logistics, and production processes can be linked holistically with the advantages of mixed centralized and decentralized planning and control. Modular production systems also require modularly networked information systems with semi-autonomous optimization for a robust production flow. Dynamic prioritization of different key figures between subsystems should lead the production system to an overall optimum. The tasks and goals of quality, logistics, process, resource, and product areas in a cyber-physical production system are designed as an interconnected multi-agent-system. The result is an alternative system structure that executes centralized process planning and decentralized processing. An agent-based manufacturing control is used to enable different flexibility and reconfigurability states and manufacturing strategies in order to find optimal partial solutions of subsystems, that lead to a near global optimum for hybrid planning. 
This allows robust, near-to-plan execution with integrated quality control and intralogistics.
Keywords: holonic manufacturing system, modular production system, planning and control, system structure
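The following toy sketch illustrates the hybrid idea of central order release with decentralized, agent-style allocation and feedback; the agent roles, bidding rule and numbers are assumptions made for illustration, not the authors' multi-agent design.

```python
# Simplified sketch: central planning (order release) with decentralized execution agents.
import random

class ResourceAgent:
    def __init__(self, name, speed):
        self.name, self.speed, self.queue = name, speed, []

    def bid(self, order):
        # lower bid = earlier estimated completion; a local (decentralized) decision
        return len(self.queue) / self.speed + order["work"] / self.speed

    def execute(self, order):
        self.queue.append(order)
        return {"order": order["id"], "station": self.name,
                "cycle_time": round(order["work"] / self.speed, 2)}

class CentralPlanner:
    def __init__(self, agents):
        self.agents = agents

    def dispatch(self, orders):
        feedback = []
        for order in orders:                     # central release, decentral allocation
            winner = min(self.agents, key=lambda a: a.bid(order))
            feedback.append(winner.execute(order))
        return feedback                          # feedback loop back into planning

agents = [ResourceAgent("cell_A", 1.0), ResourceAgent("cell_B", 1.4)]
orders = [{"id": i, "work": random.uniform(3, 6)} for i in range(5)]
for result in CentralPlanner(agents).dispatch(orders):
    print(result)
```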
Procedia PDF Downloads 169
95 Thermally Conductive Polymer Nanocomposites Based on Graphene-Related Materials
Authors: Alberto Fina, Samuele Colonna, Maria del Mar Bernal, Orietta Monticelli, Mauro Tortello, Renato Gonnelli, Julio Gomez, Chiara Novara, Guido Saracco
Abstract:
Thermally conductive polymer nanocomposites are of high interest for several applications, including low-temperature heat recovery, heat exchangers in corrosive environments, and heat management in electronics and flexible electronics. In this paper, the preparation of thermally conductive nanocomposites exploiting graphene-related materials is addressed, along with their thermal characterization. In particular, correlations between 1- the chemical and physical features of the nanoflakes and 2- the processing conditions, and the heat conduction properties of the nanocomposites are studied. Polymers are heat insulators; therefore, the inclusion of conductive particles is the typical solution to obtain a sufficient thermal conductivity. In addition to traditional microparticles such as graphite and ceramics, several nanoparticles have been proposed, including carbon nanotubes and graphene, for use in polymer nanocomposites. Indeed, thermal conductivities for both carbon nanotubes and graphenes were reported in the wide range of about 1500 to 6000 W/mK, although this property may decrease dramatically as a function of the size, number of layers, density of topological defects and re-hybridization defects, as well as of the presence of impurities. Different synthetic techniques have been developed, including mechanical cleavage of graphite, epitaxial growth on SiC, chemical vapor deposition, and liquid phase exfoliation. However, the industrial scale-up of graphene, defined as an individual, single-atom-thick sheet of hexagonally arranged sp2-bonded carbons, still remains very challenging. For large-scale bulk applications in polymer nanocomposites, some graphene-related materials such as multilayer graphenes (MLG), reduced graphene oxide (rGO) or graphite nanoplatelets (GNP) are currently the most interesting graphene-based materials. In this paper, different types of graphene-related materials were characterized for their chemical/physical as well as thermal properties as individual flakes. Two selected rGOs were annealed at 1700°C in vacuum for 1 h to reduce the defectiveness of the carbon structure. The thermal conductivity increase of individual GNPs with annealing was assessed via scanning thermal microscopy. Graphene nanopapers were prepared from both conventional rGO and annealed rGO flakes. Characterization of the nanopapers evidenced a five-fold increase in the thermal diffusivity in the nanopaper plane for annealed nanoflakes, compared to pristine ones, demonstrating the importance of reducing structural defectiveness to maximize the heat dissipation performance. Both pristine and annealed rGO were used to prepare polymer nanocomposites by melt reactive extrusion. A two- to three-fold increase in the thermal conductivity of the nanocomposite was observed for high-temperature-treated rGO compared to untreated rGO, evidencing the importance of using low-defectivity nanoflakes. Furthermore, the study of different processing parameters (time, temperature, shear rate) during the preparation of poly(butylene terephthalate) nanocomposites evidenced a clear correlation with the dispersion and fragmentation of the GNP nanoflakes, which in turn affected the thermal conductivity performance. Thermal conductivity of about 1.7 W/mK, i.e.
one order of magnitude higher than for the pristine polymer, was obtained with 10 wt% of annealed GNPs, which is in line with state-of-the-art nanocomposites prepared by more complex and less upscalable in situ polymerization processes.
Keywords: graphene, graphene-related materials, scanning thermal microscopy, thermally conductive polymer nanocomposites
Procedia PDF Downloads 268
94 Adapting to College: Exploration of Psychological Well-Being, Coping, and Identity as Markers of Readiness
Authors: Marit D. Murry, Amy K. Marks
Abstract:
The transition to college is a critical period that affords abundant opportunities for growth in conjunction with novel challenges for emerging adults. During this time, emerging adults are garnering experiences and acquiring hosts of new information that they are required to synthesize and use to inform life-shaping decisions. This stage is characterized by instability and exploration, which necessitates a diverse set of coping skills to successfully navigate and positively adapt to their evolving environment. However, important sociocultural factors result in differences that occur developmentally for minority emerging adults (i.e., emerging adults with an identity that has been or is marginalized). While the transition to college holds vast potential, not all are afforded the same chances, and many individuals enter into this stage at varying degrees of readiness. Understanding the nuance and diversity of student preparedness for college and contextualizing these factors will better equip systems to support incoming students. Emerging adulthood for ethnic, racial minority students presents itself as an opportunity for growth and resiliency in the face of systemic adversity. Ethnic, racial identity (ERI) is defined as an identity that develops as a function of one’s ethnic-racial group membership. Research continues to demonstrate ERI as a resilience factor that promotes positive adjustment in young adulthood. Adaptive coping responses (e.g., engaging in help-seeking behavior, drawing on personal and community resources) have been identified as possible mechanisms through which ERI buffers youth against stressful life events, including discrimination. Additionally, trait mindfulness has been identified as a significant predictor of general psychological health, and mindfulness practice has been shown to be a self-regulatory strategy that promotes healthy stress responses and adaptive coping strategy selection. The current study employed a person-centered approach to explore emerging patterns across ethnic identity development and psychological well-being criterion variables among college freshmen. Data from 283 incoming college freshmen at Northeastern University were analyzed. The Brief COPE Acceptance and Emotional Support scales, the Five Factor Mindfulness Questionnaire, and MIEM Exploration and Affirmation measures were used to inform the cluster profiles. The TwoStep auto-clustering algorithm revealed an optimal three-cluster solution (BIC = 848.49), which classified 92.6% (n = 262) of participants in the sample into one of the three clusters. The clusters were characterized as ‘Mixed Adjustment’, ‘Lowest Adjustment’, and ‘Moderate Adjustment.’ Cluster composition varied significantly by ethnicity X² (2, N = 262) = 7.74 (p = .021) and gender X² (2, N = 259) = 10.40 (p = .034). The ‘Lowest Adjustment’ cluster contained the highest proportion of students of color, 41% (n = 32), and male-identifying students, 44.2% (n = 34). Follow-up analyses showed higher ERI exploration in ‘Moderate Adjustment’ cluster members, also reported higher levels of psychological distress, with significantly elevated depression scores (p = .011), psychological diagnoses of depression (p = .013), anxiety (p = .005) and psychiatric disorders (p = .025). Supporting prior research, students engaging with identity exploration processes often endure more psychological distress. 
These results indicate that students undergoing identity development may require more socialization and different services beyond normal strategies.
Keywords: adjustment, coping, college, emerging adulthood, ethnic-racial identity, psychological well-being, resilience
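As an open-source analogue of the cluster-profile step described above (the study used the SPSS TwoStep procedure; the Gaussian-mixture model, the BIC-based selection and the simulated scores below are substitutions for illustration only):

```python
# Hedged sketch: person-centered clustering with BIC-based model selection on simulated
# coping / mindfulness / ERI scores; not the study's SPSS TwoStep algorithm or data.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
scores = rng.normal(size=(283, 5))          # 283 students x 5 standardized criterion scores

bics = {}
for k in range(1, 7):
    gm = GaussianMixture(n_components=k, random_state=0).fit(scores)
    bics[k] = gm.bic(scores)

best_k = min(bics, key=bics.get)            # lowest BIC picks the cluster count
labels = GaussianMixture(n_components=best_k, random_state=0).fit_predict(scores)
print("BIC-selected clusters:", best_k, "| cluster sizes:", np.bincount(labels))
```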
Procedia PDF Downloads 111
93 An Explorative Analysis of Effective Project Management of Research and Research-Related Projects within a recently Formed Multi-Campus Technology University
Authors: Àidan Higgins
Abstract:
Higher education will be crucial in the coming decades in helping to make Ireland a nation known for innovation, competitive enterprise, and ongoing academic success, as well as a desirable location to live and work, with a high quality of life, vibrant culture, and inclusive social structures. Higher education institutions will actively connect with each student community, society, and business; they will help students develop a sense of place and identity in Ireland and provide the tools they need to contribute significantly to the global community. Higher education will also serve as a catalyst for novel ideas through research, many of which will become the foundation for long-lasting inventive businesses in the future. As part of this, the 2030 National Strategy on Education focuses on change and on developing our education system, with particular attention to how we carry out research. The emphasis is on knowledge transfer and a consistent research framework, on exploiting opportunities and on having the necessary expertise. The newly formed Technological Universities (TU) in Ireland are based on a government initiative to create a new type of higher education institution that focuses on applied and industry-focused research and education. The basis of the TU is to bring together two or more existing institutes of technology to create a larger and more comprehensive institution that offers a wider range of programs and services to students and industry partners. The TU model aims to promote collaboration between academia, industry, and community organizations to foster innovation, research, and economic development. The TU model also aims to enhance the student experience by providing a more seamless pathway from undergraduate to postgraduate studies, as well as greater opportunities for work placements and engagement with industry partners. Additionally, the TUs are designed to provide a greater emphasis on applied research, technology transfer, and entrepreneurship, with the goal of fostering innovation and contributing to economic growth. A project is a collection of organised tasks carried out precisely to produce a singular output (product or service) within a given time frame. Project management is a set of activities that facilitates the successful implementation of a project. The significant differences between research and development projects are the (lack of) precise requirements and (the inability to) plan an outcome from the beginning of the project. The evaluation criteria for a research project must consider these and other "particularities" of such work; for instance, proving something cannot be done may be a successful outcome. This study intends to explore how a newly established multi-campus technological university manages research projects effectively. The study will identify the potential and difficulties of managing research projects, the tools, resources and processes available in a multi-campus Technological University context, and the methods and approaches employed to deal with these difficulties. Key stakeholders like project managers, academics, and administrators will be surveyed as part of the study, which will also involve an explorative investigation of current literature and data.
The findings of this study will contribute significantly to creating best practices for project management in this setting and offer insightful information about the efficient management of research projects within a multi-campus technological university.
Keywords: project management, research and research-related projects, multi-campus technology university, processes
Procedia PDF Downloads 60
92 Evaluating Viability of Using South African Forestry Process Biomass Waste Mixtures as an Alternative Pyrolysis Feedstock in the Production of Bio Oil
Authors: Thembelihle Portia Lubisi, Malusi Ntandoyenkosi Mkhize, Jonas Kalebe Johakimu
Abstract:
Fertilizers play an important role in maintaining the productivity and quality of plants. Inorganic fertilizers (containing nitrogen, phosphorus, and potassium) are largely used in South Africa as they are considered inexpensive and highly productive. When applied, a portion of the excess fertilizer is retained in the soil, and a portion enters water streams due to surface runoff or the irrigation system adopted. Excess nutrients from the fertilizers entering the water stream eventually result in harmful algal blooms (HABs) in freshwater systems, which not only disrupt wildlife but can also produce toxins harmful to humans. The use of agro-chemicals such as pesticides and herbicides has been associated with increased antimicrobial resistance (AMR) in humans, as the plants are consumed by humans. This bacterial resistance poses a threat as it prevents the health sector from being able to treat infectious diseases. Archaeological studies have found that pyrolysis liquids were already used in the time of the Neanderthal as a biocide and plant protection product. Pyrolysis is the thermal degradation of plant biomass or organic material under anaerobic conditions, leading to the production of char, bio-oils and syngas. Bio-oil constituents can be categorized as a water-soluble fraction (wood vinegar) and a water-insoluble fraction (tar and light oils). Wood vinegar (pyroligneous acid) is said to contain highly oxygenated compounds including acids, alcohols, aldehydes, ketones, phenols, esters, furans, and other multifunctional compounds with various molecular weights and compositions, depending on the biomass material it is derived from and the pyrolysis operating conditions. Various researchers have found wood vinegar to be efficient in the eradication of termites, effective in plant protection and plant growth, to have antibacterial characteristics, and to be effective in inhibiting micro-organisms such as Candida yeasts and E. coli. This study investigated the characterisation of South African forestry product processing waste with the intention of evaluating the potential of using the respective biomass wastes as feedstock for bio-oil production via the pyrolysis process. The ability to use biomass waste materials in the production of wood vinegar has the advantage that it not only allows for a reduction in environmental pollution and landfill requirements, but also does not negatively affect food security. The biomass wastes investigated were from the popular tree types in KZN, which are pine saw dust (PSD), pine bark (PB), eucalyptus saw dust (ESD) and eucalyptus bark (EB). Furthermore, the research investigates the possibility of mixing the different wastes, with the aim of lessening the cost of raw material separation prior to feeding into the pyrolysis process; mixing also increases the amount of biomass material available for beneficiation. Two mixtures were considered: a 50/50 mixture of PSD and ESD (EPSD) and a mixture containing pine saw dust, eucalyptus saw dust, pine bark and eucalyptus bark (EPSDB). Characterisation of the biomass waste will cover proximate analysis (volatiles, ash, fixed carbon), ultimate analysis (carbon, hydrogen, nitrogen, oxygen, sulphur), higher heating value, structural analysis (cellulose, hemicellulose and lignin) and thermogravimetric analysis.
Keywords: characterisation, biomass waste, saw dust, wood waste
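For illustration, a hedged sketch of two calculations that typically sit behind the planned characterisation: fixed carbon by difference from the proximate analysis, and a higher-heating-value estimate from the ultimate analysis via the Channiwala-Parikh correlation. The composition values used here are placeholders, not measured data from this study.

```python
# Sketch under assumptions: proximate "by difference" calculation and an HHV correlation.
def fixed_carbon(moisture, volatiles, ash):
    return 100.0 - moisture - volatiles - ash            # wt%, by difference

def hhv_channiwala_parikh(C, H, S, O, N, ash):
    """HHV in MJ/kg from ultimate analysis in wt% (dry basis)."""
    return (0.3491 * C + 1.1783 * H + 0.1005 * S
            - 0.1034 * O - 0.0151 * N - 0.0211 * ash)

fc  = fixed_carbon(moisture=8.0, volatiles=75.0, ash=0.5)       # placeholder sawdust values
hhv = hhv_channiwala_parikh(C=49.0, H=6.0, S=0.02, O=44.0, N=0.3, ash=0.5)
print(f"fixed carbon: {fc:.1f} wt%, estimated HHV: {hhv:.1f} MJ/kg")
```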
Procedia PDF Downloads 71
91 Xen45 Gel Implant in Open Angle Glaucoma: Efficacy, Safety and Predictors of Outcome
Authors: Fossarello Maurizio, Mattana Giorgio, Tatti Filippo.
Abstract:
The most widely performed surgical procedure in Open-Angle Glaucoma (OAG) is trabeculectomy. Although this filtering procedure is extremely effective, surgical failure and postoperative complications are reported. Due to its invasive nature and possible complications, trabeculectomy is usually reserved, in practice, for patients who are refractory to medical and laser therapy. Recently, a number of micro-invasive surgical techniques (MIGS: Micro-Invasive Glaucoma Surgery) have been introduced in clinical practice. They meet the criteria of a micro-incisional approach, minimal tissue damage, short surgical time, reliable IOP reduction, extremely high safety profile and rapid post-operative recovery. The Xen45 Gel Implant (Allergan, Dublin, Ireland) is one of the MIGS alternatives, and consists of a porcine gelatin tube designed to create an aqueous flow from the anterior chamber to the subconjunctival space, bypassing the resistance of the trabecular meshwork. In this study we report the results of this technique as a favorable option in the treatment of OAG for its benefits in terms of efficacy and safety, either alone or in combination with cataract surgery. This is a retrospective, single-center study conducted in consecutive OAG patients who underwent Xen45 Gel Stent implantation, alone or in combination with phacoemulsification, from October 2018 to June 2019. The primary endpoint of the study was to evaluate the reduction of both IOP and the number of antiglaucoma medications at 12 months. The secondary endpoint was to correlate filtering bleb morphology, evaluated by means of anterior segment OCT, with efficacy in IOP lowering and the eventual need for further procedures. Data were recorded in Microsoft Excel, and the analysis was performed using Microsoft Excel and SPSS (IBM). Mean values with standard deviations were calculated for IOP and the number of antiglaucoma medications at all time points. The Kolmogorov-Smirnov test showed that IOP followed a normal distribution at all time points; therefore, the paired Student's t-test was used to compare baseline and postoperative mean IOP. The correlation between postoperative Day 1 IOP and Month 12 IOP was evaluated using the Pearson coefficient. Thirty-six eyes of 36 patients were evaluated. As compared to baseline, mean IOP and the mean number of antiglaucoma medications significantly decreased from 27.33 ± 7.67 mmHg to 16.3 ± 2.89 mmHg (38.8% reduction) and from 2.64 ± 1.39 to 0.42 ± 0.8 (84% reduction), respectively, at 12 months after surgery (both p < 0.001). According to bleb morphology, eyes were divided into a uniform group (n=8, 22.2%), a subconjunctival separation group (n=5, 13.9%), a microcystic multiform group (n=9, 25%) and a multiple internal layer group (n=14, 38.9%). Compared to baseline, there was no significant difference in IOP between the 4 groups at the month-12 follow-up visit. Adverse events included bleb function decrease (n=14, 38.9%), hypotony (n=8, 22.2%) and choroidal detachment (n=2, 5.6%). All eyes presenting bleb flattening underwent needling and MMC injection. The highest percentage of patients requiring secondary needling was in the uniform group (75%), with a significant difference between the groups (p=0.03). The Xen45 gel stent, either alone or in combination with phacoemulsification, provided significant lowering of both IOP and antiglaucoma medication use, with a high safety profile.
Keywords: anterior segment OCT, bleb morphology, micro-invasive glaucoma surgery, open angle glaucoma, Xen45 gel implant
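A minimal sketch of the statistical workflow described above, run on simulated values rather than the study data; the Kolmogorov-Smirnov normality check, paired t-test and Pearson correlation mirror the reported analysis steps.

```python
# Hedged sketch: normality check, paired comparison and correlation on simulated IOP values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
iop_baseline = rng.normal(27.3, 7.7, size=36)            # mmHg, simulated stand-in values
iop_month12  = rng.normal(16.3, 2.9, size=36)
iop_day1     = iop_month12 + rng.normal(0.0, 2.0, size=36)

ks = stats.kstest(stats.zscore(iop_baseline), "norm")    # normality of baseline IOP
t  = stats.ttest_rel(iop_baseline, iop_month12)          # paired baseline vs month 12
r, p = stats.pearsonr(iop_day1, iop_month12)             # day-1 vs month-12 correlation
print(f"KS p={ks.pvalue:.3f}, paired t p={t.pvalue:.4f}, Pearson r={r:.2f} (p={p:.3f})")
```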
Procedia PDF Downloads 142
90 Transitioning Towards a Circular Economy in the Textile Industry: Approaches to Address Environmental Challenges
Authors: Atefeh Salehipoor
Abstract:
Textiles play a vital role in human life, particularly in the form of clothing. However, the alarming rate at which textiles end up in landfills presents a significant environmental risk. With approximately one garbage truck per second being filled with discarded textiles, urgent measures are required to mitigate this trend. Governments and responsible organizations are calling upon various stakeholders to shift from a linear economy to a circular economy model in the textile industry. This article highlights several key approaches that can be undertaken to address this pressing issue. These approaches include the creation of renewable raw material sources, rethinking production processes, maximizing the use and reuse of textile products, implementing reproduction and recycling strategies, exploring redistribution to new markets, and finding innovative means to extend the lifespan of textiles. However, the rapid accumulation of textiles in landfills poses a significant threat to the environment. This article explores the urgent need for the textile industry to transition from a linear economy model to a circular economy model. The linear model, characterized by the creation, use, and disposal of textiles, is unsustainable in the long term. By adopting a circular economy approach, the industry can minimize waste, reduce environmental impact, and promote sustainable practices. This article outlines key approaches that can be undertaken to drive this transition. Approaches to Address Environmental Challenges: 1. Creation of Renewable Raw Materials Sources: Exploring and promoting the use of renewable and sustainable raw materials, such as organic cotton, hemp, and recycled fibers, can significantly reduce the environmental footprint of textile production. 2. Rethinking Production Processes: Implementing cleaner production techniques, optimizing resource utilization, and minimizing waste generation are crucial steps in reducing the environmental impact of textile manufacturing. 3. Maximizing Use and Reuse of Textile Products: Encouraging consumers to prolong the lifespan of textile products through proper care, maintenance, and repair services can reduce the frequency of disposal and promote a culture of sustainability. 4. Reproduction and Recycling Strategies: Investing in innovative technologies and infrastructure to enable efficient reproduction and recycling of textiles can close the loop and minimize waste generation. 5. Redistribution of Textiles to New Markets: Exploring opportunities to redistribute textiles to new and parallel markets, such as resale platforms, can extend their lifecycle and prevent premature disposal. 6. Improvising Means to Extend Textile Lifespan: Encouraging design practices that prioritize durability, versatility, and timeless aesthetics can contribute to prolonging the lifespan of textiles. Conclusion The textile industry must urgently transition from a linear economy to a circular economy model to mitigate the adverse environmental impact caused by textile waste. By implementing the outlined approaches, such as sourcing renewable raw materials, rethinking production processes, promoting reuse and recycling, exploring new markets, and extending the lifespan of textiles, stakeholders can work together to create a more sustainable and environmentally friendly textile industry. 
These measures require collective action and collaboration between governments, organizations, manufacturers, and consumers to drive positive change and safeguard the planet for future generations.
Keywords: textiles, circular economy, environmental challenges, renewable raw materials, production processes, reuse, recycling, redistribution, textile lifespan extension
Procedia PDF Downloads 87
89 Effective Emergency Response and Disaster Prevention: A Decision Support System for Urban Critical Infrastructure Management
Authors: M. Shahab Uddin, Pennung Warnitchai
Abstract:
Currently more than half of the world’s populations are living in cities, and the number and sizes of cities are growing faster than ever. Cities rely on the effective functioning of complex and interdependent critical infrastructures networks to provide public services, enhance the quality of life, and save the community from hazards and disasters. In contrast, complex connectivity and interdependency among the urban critical infrastructures bring management challenges and make the urban system prone to the domino effect. Unplanned rapid growth, increased connectivity, and interdependency among the infrastructures, resource scarcity, and many other socio-political factors are affecting the typical state of an urban system and making it susceptible to numerous sorts of diversion. In addition to internal vulnerabilities, urban systems are consistently facing external threats from natural and manmade hazards. Cities are not just complex, interdependent system, but also makeup hubs of the economy, politics, culture, education, etc. For survival and sustainability, complex urban systems in the current world need to manage their vulnerabilities and hazardous incidents more wisely and more interactively. Coordinated management in such systems makes for huge potential when it comes to absorbing negative effects in case some of its components were to function improperly. On the other hand, ineffective management during a similar situation of overall disorder from hazards devastation may make the system more fragile and push the system to an ultimate collapse. Following the quantum, the current research hypothesizes that a hazardous event starts its journey as an emergency, and the system’s internal vulnerability and response capacity determine its destination. Connectivity and interdependency among the urban critical infrastructures during this stage may transform its vulnerabilities into dynamic damaging force. An emergency may turn into a disaster in the absence of effective management; similarly, mismanagement or lack of management may lead the situation towards a catastrophe. Situation awareness and factual decision-making is the key to win a battle. The current research proposed a contextual decision support system for an urban critical infrastructure system while integrating three different models: 1) Damage cascade model which demonstrates damage propagation among the infrastructures through their connectivity and interdependency, 2) Restoration model, a dynamic restoration process of individual infrastructure, which is based on facility damage state and overall disruptions in surrounding support environment, and 3) Optimization model that ensures optimized utilization and distribution of available resources in and among the facilities. All three models are tightly connected, mutually interdependent, and together can assess the situation and forecast the dynamic outputs of every input. Moreover, this integrated model will hold disaster managers and decision makers responsible when it comes to checking all the alternative decision before any implementation, and support to produce maximum possible outputs from the available limited inputs. This proposed model will not only support to reduce the extent of damage cascade but will ensure priority restoration and optimize resource utilization through adaptive and collaborative management. Complex systems predictably fail but in unpredictable ways. 
System understanding, situation awareness, and fact-based decisions may significantly help an urban system survive and remain sustainable.Keywords: disaster prevention, decision support system, emergency response, urban critical infrastructure system
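The damage cascade idea described above can be illustrated with a minimal sketch: infrastructures are nodes in a dependency graph, and damage propagates along dependency edges whenever an upstream facility's damage exceeds a threshold. The graph, thresholds, and propagation factor below are hypothetical placeholders, not the model actually used in the study.

```python
# Minimal, hypothetical sketch of damage propagation over an
# infrastructure dependency graph (not the study's actual model).

def cascade(dependencies, initial_damage, threshold=0.5, transfer=0.6, max_rounds=10):
    """dependencies: dict mapping a facility to the facilities it depends on.
    initial_damage: dict mapping facility -> damage fraction in [0, 1]."""
    damage = dict(initial_damage)
    for _ in range(max_rounds):
        updated = False
        for facility, suppliers in dependencies.items():
            for supplier in suppliers:
                # If a supplier is badly damaged, part of that damage propagates.
                if damage.get(supplier, 0.0) >= threshold:
                    induced = transfer * damage[supplier]
                    if induced > damage.get(facility, 0.0):
                        damage[facility] = min(1.0, induced)
                        updated = True
        if not updated:          # stop when the cascade has settled
            break
    return damage

# Example: water pumping depends on power; a hospital depends on water and power.
deps = {"water": ["power"], "hospital": ["water", "power"]}
print(cascade(deps, {"power": 0.8, "water": 0.0, "hospital": 0.0}))
```

In a full decision support system, the restoration and optimization models would then act on the damage state returned by such a propagation step.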
Procedia PDF Downloads 228
88 Unknown Groundwater Pollution Source Characterization in Contaminated Mine Sites Using Optimal Monitoring Network Design
Authors: H. K. Esfahani, B. Datta
Abstract:
Groundwater is one of the most important natural resources in many parts of the world; however, it is widely polluted due to human activities. Currently, effective and reliable groundwater management and remediation strategies are obtained using characterization of groundwater pollution sources, where the measured data at monitoring locations are utilized to estimate the unknown pollutant source location and magnitude. However, accurately identifying the characteristics of contaminant sources is a challenging task due to uncertainties in terms of predicting source flux injection, hydro-geological and geo-chemical parameters, and the concentration field measurement. Reactive transport of chemical species in contaminated groundwater systems, especially with multiple species, is a complex and highly non-linear geochemical process. Although sufficient concentration measurement data are essential to accurately identify source characteristics, available data are often sparse and limited in quantity. Therefore, this inverse problem of characterizing unknown groundwater pollution sources is often considered ill-posed, complex and non-unique. Different methods have been utilized to identify pollution sources; however, the linked simulation-optimization approach is one effective method to obtain acceptable results under uncertainties in complex real-life scenarios. With this approach, the numerical flow and contaminant transport simulation models are externally linked to an optimization algorithm, with the objective of minimizing the difference between measured concentration and estimated pollutant concentration at observation locations. Concentration measurement data are very important to accurately estimate pollution source properties; therefore, optimal design of the monitoring network is essential to gather adequate measured data at desired times and locations. Due to budget and physical restrictions, an efficient and effective approach for groundwater pollutant source characterization is to design an optimal monitoring network, especially when only inadequate and arbitrary concentration measurement data are initially available. In this approach, preliminary concentration observation data are utilized for preliminary identification of source location, magnitude and duration of source activity, and these results are utilized for monitoring network design. Further, feedback information from the monitoring network is used as input for sequential monitoring network design, to improve the identification of unknown source characteristics. To design an effective monitoring network of observation wells, optimization and interpolation techniques are used. A simulation model should be utilized to accurately describe the aquifer properties in terms of hydro-geochemical parameters and boundary conditions. However, the simulation of the transport processes becomes complex when the pollutants are chemically reactive. A three-dimensional transient flow and reactive contaminant transport process is considered. The proposed methodology uses HYDROGEOCHEM 5.0 (HGCH) as the simulation model for flow and transport processes with multiple chemically reactive species. Adaptive Simulated Annealing (ASA) is used as the optimization algorithm in the linked simulation-optimization methodology to identify the unknown source characteristics.
Therefore, the aim of the present study is to develop a methodology to optimally design an effective monitoring network for pollution source characterization with reactive species in polluted aquifers. The performance of the developed methodology will be evaluated for an illustrative polluted aquifer site, for example, an abandoned mine site in Queensland, Australia.Keywords: monitoring network design, source characterization, chemical reactive transport process, contaminated mine site
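The linked simulation-optimization idea can be sketched in a few lines: a candidate source (location and flux) is passed to a forward transport simulator, and an annealing-style search minimizes the misfit between simulated and measured concentrations at the observation wells. The toy forward model, neighbourhood move and cooling schedule below are illustrative assumptions; the study itself couples HYDROGEOCHEM 5.0 with Adaptive Simulated Annealing.

```python
import math, random

def misfit(candidate, forward_model, observed):
    """Sum of squared differences between simulated and measured concentrations."""
    simulated = forward_model(candidate)
    return sum((s - o) ** 2 for s, o in zip(simulated, observed))

def annealing_search(forward_model, observed, initial, neighbour,
                     t0=1.0, cooling=0.95, iters=2000):
    current, current_cost = initial, misfit(initial, forward_model, observed)
    best, best_cost, t = current, current_cost, t0
    for _ in range(iters):
        cand = neighbour(current)
        cost = misfit(cand, forward_model, observed)
        # Accept improvements always; accept worse moves with a temperature-dependent probability.
        if cost < current_cost or random.random() < math.exp((current_cost - cost) / t):
            current, current_cost = cand, cost
            if cost < best_cost:
                best, best_cost = cand, cost
        t *= cooling
    return best, best_cost

# Toy forward model: concentration at three wells decays with distance from the source.
wells = [1.0, 2.0, 3.0]
def toy_model(c):                 # c = (source_position, flux) -- illustrative only
    pos, flux = c
    return [flux / (1.0 + abs(w - pos)) for w in wells]

obs = toy_model((1.5, 10.0))      # synthetic "measurements"
best, cost = annealing_search(toy_model, obs, (0.0, 1.0),
                              lambda c: (c[0] + random.uniform(-0.2, 0.2),
                                         max(0.0, c[1] + random.uniform(-0.5, 0.5))))
print(best, cost)
```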
Procedia PDF Downloads 231
87 Heterotopic Ossification: DISH and Myositis Ossificans in Human Remains Identification
Authors: Patricia Shirley Almeida Prado, Liz Brito, Selma Paixão Argollo, Gracie Moreira, Leticia Matos Sobrinho
Abstract:
Diffuse idiopathic skeletal hyperostosis (DISH) is a degenerative bone disease, also known as Forestier's disease and ankylosing hyperostosis of the spine, characterized by a tendency toward ossification of half the anterior longitudinal spinal ligament without intervertebral disc disease. DISH is not considered to be osteoarthritis, although the two conditions commonly occur together. Diagnostic criteria include fusion of at least four vertebrae by bony bridges arising from the anterolateral aspect of the vertebral bodies. These vertebral bodies have a 'dripping candle wax' appearance; periosteal new bone formation can also be seen on the anterior surface of the vertebral bodies, and there is no ankylosis of the zygapophyseal facet joints. Clinically, patients with DISH tend to be asymptomatic; some patients mention moderate pain and stiffness in the upper back. The disease is more common in men, uncommon in patients younger than 50 years, and rare in patients under 40 years of age. In modern populations, DISH is found in association with obesity, type II diabetes, abnormal vitamin A metabolism, and higher levels of serum uric acid. There is also some association with an increased risk of stroke or other cerebrovascular disease. DISH can be confused with heterotopic ossification, which is bone formation in the soft tissues as a result of trauma, wounding, surgery, burns, prolonged immobility and some central nervous system disorders. These conditions have been described extensively as myositis ossificans, which in turn can be confused with fibrodysplasia (myositis) ossificans progressiva. As with DISH, it can be asymptomatic or extensive enough to impair joint function. A third group of conditions that can cause confusion are the enthesopathies, which occur throughout the skeleton and are common on the ischial tuberosities, iliac crests, patellae, and calcaneus. Ankylosis of the sacroiliac joint by bony bridges may also be found. CASE 1: skeletal remains comprising the skull, some vertebrae and the scapulae. This case remains unidentified; due to the scarcity of bone remains, the sex, age and ancestry profile was compromised. However, the pathognomonic findings of DISH helped to estimate sex and age characteristics. In addition to presenting DISH, these skeletal remains also showed bone alterations and non-metric traits such as fusion of the first vertebra with the occipital bone, maxillary and palatine tori, and a scapular foramen on the right scapula. CASE 2: these skeletal remains show extensive heterotopic bone formation in the greater trochanter area of the left femur; the right fibula shows a healed fracture of the shaft, with extensive bone growth on its interosseous crest; pronounced bone growth can also be observed on the ilium in the region of the inferior gluteal line; and the skull presents pronounced mandibular, maxillary and palatine tori. In addition to these pronounced heterotopic ossifications, the whole skeleton presents moderate bone overgrowth that is not linked to aging, since the skeleton belongs to a young unidentified individual. An appropriate osteopathological diagnosis supports the human identification process through medical records and also provides epidemiological data that can strengthen uncertain anthropological estimates.Keywords: bone disease, DISH, human identification, human remains
Procedia PDF Downloads 333
86 A Digital Clone of an Irrigation Network Based on Hardware/Software Simulation
Authors: Pierre-Andre Mudry, Jean Decaix, Jeremy Schmid, Cesar Papilloud, Cecile Munch-Alligne
Abstract:
In most of the Swiss Alpine regions, the availability of water resources is usually adequate even in times of drought, as evidenced by the 2003 and 2018 summers. Indeed, important natural stocks are for the moment available in the form of snow and ice, but the situation is likely to change in the future due to global and regional climate change. In addition, alpine mountain regions are areas where climate change will be felt very rapidly and with high intensity. For instance, the ice regime of these regions has already been affected in recent years, with a modification of the monthly availability and of extreme precipitation events. The current research, focusing on the municipality of Val de Bagnes, located in the canton of Valais, Switzerland, is part of a project led by the Altis company and achieved in collaboration with WSL, BlueArk Entremont, and HES-SO Valais-Wallis. In this region, water occupies a key position, notably for winter and summer tourism. Thus, multiple actors want to anticipate the future needs and availability of water, on both the 2050 and 2100 horizons, in order to plan the modifications to the water supply and distribution networks. For those changes to be salient and efficient, a good knowledge of the current water distribution networks is of utmost importance. In the present case, the drinking water network is well documented, but this is not the case for the irrigation one. Since the water consumption for irrigation is ten times higher than for drinking water, data acquisition on the irrigation network is a major point in determining future scenarios. This paper first presents the instrumentation and simulation of the irrigation network using custom-designed IoT devices, which are coupled with a simulated digital clone to reduce the number of measuring locations. The developed ad-hoc IoT devices are energy-autonomous and can measure flows and pressures using industrial sensors such as calorimetric water flow meters. Measurements are periodically transmitted using the LoRaWAN protocol over a dedicated infrastructure deployed in the municipality. The gathered values can then be visualized in real time on a dashboard, which also provides historical data for analysis. In a second phase, a digital clone of the irrigation network was modelled using EPANET, a software package for water distribution systems that performs extended-period simulations of flows and pressures in pressurized networks composed of reservoirs, pipes, junctions, and sinks. As a preliminary work, only a part of the irrigation network was modelled and validated by comparison with the measurements. The simulations are carried out by imposing the consumption of water at several locations. The validation is performed by comparing the simulated pressures at different nodes with the measured ones. An accuracy of +/- 15% is observed on most of the nodes, which is acceptable for the operator of the network and demonstrates the validity of the approach. Future steps will focus on the deployment of the measurement devices on the whole network and the complete modelling of the network. Then, scenarios of future consumption will be investigated.
Acknowledgment: The authors would like to thank the Swiss Federal Office for the Environment (FOEN) and the Swiss Federal Office for Agriculture (OFAG) for their financial support, and ALTIS for technical support; this project is part of the Swiss pilot program 'Adaptation aux changements climatiques'.Keywords: hydraulic digital clone, IoT water monitoring, LoRaWAN water measurements, EPANET, irrigation network
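The node-by-node validation step described above, comparing simulated with measured pressures and accepting deviations within roughly +/- 15%, can be sketched as follows. The node names and pressure values are invented for illustration; the actual comparison in the study is made against the EPANET model of the Val de Bagnes irrigation network.

```python
# Hypothetical example of the node-by-node pressure validation (values are illustrative).
TOLERANCE = 0.15  # +/- 15% acceptance band used by the network operator

def validate(measured, simulated, tolerance=TOLERANCE):
    """measured/simulated: dicts of node -> pressure (bar). Returns per-node verdicts."""
    report = {}
    for node, p_meas in measured.items():
        p_sim = simulated[node]
        rel_error = (p_sim - p_meas) / p_meas
        report[node] = (rel_error, abs(rel_error) <= tolerance)
    return report

measured = {"N1": 6.2, "N2": 4.8, "N3": 7.5}
simulated = {"N1": 6.6, "N2": 4.3, "N3": 8.9}
for node, (err, ok) in validate(measured, simulated).items():
    print(f"{node}: {err:+.1%} {'OK' if ok else 'outside tolerance'}")
```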
Procedia PDF Downloads 147
85 Photosynthesis Metabolism Affects Yield Potentials in Jatropha curcas L.: A Transcriptomic and Physiological Data Analysis
Authors: Nisha Govender, Siju Senan, Zeti-Azura Hussein, Wickneswari Ratnam
Abstract:
Jatropha curcas, a well-described bioenergy crop, has been widely accepted as a future fuel source, especially in tropical regions. Ideal planting material required for large-scale plantation is still lacking. Breeding programmes for improved J. curcas varieties are rendered difficult due to limitations in genetic diversity. Using combined transcriptome and physiological data, we investigated the molecular and physiological differences between high- and low-yielding Jatropha curcas to address plausible heritable variations underpinning these differences with regard to photosynthesis, a key metabolism affecting yield potential. A total of 6 individual Jatropha plants from 4 accessions described as high- and low-yielding planting materials were selected from Experimental Plot A, Universiti Kebangsaan Malaysia (UKM), Bangi. The inflorescences and shoots were collected for the transcriptome study. For the physiological study, each individual plant (n=10) from the high- and low-yielding populations was screened for agronomic traits, chlorophyll content and stomatal patterning. The J. curcas transcriptomes are available under BioProject PRJNA338924 and BioSamples SAMN05827448-65, respectively. Each transcriptome was subjected to functional annotation analysis of the sequence datasets using the BLAST2GO suite: BLASTing, mapping, annotation, statistical analysis and visualization. Large-scale phenotyping of the number of fruits per plant (NFPP) and fruits per inflorescence (FPI) classified the high-yielding Jatropha accessions with an average NFPP = 60 and FPI > 10, whereas the low-yielding accessions yielded an average NFPP = 10 and FPI < 5. Next-generation sequencing revealed genes with differential expression in the high-yielding Jatropha relative to the low-yielding plants. Distinct differences were observed in transcript levels associated with photosynthesis metabolism. The DEG collection in the low-yielding population showed comparable CAM photosynthetic metabolism and photorespiration, evident as follows: phosphoenolpyruvate phosphate translocator, chloroplastic-like isoform, with a 2.5 fold change (FC), and malate dehydrogenase (2.03 FC). Green leaves have the most pronounced photosynthetic activity in a plant body due to the significant accumulation of chloroplasts. In most plants, the leaf is the dominant photosynthesizing heart of the plant body. A large number of the DEGs in the high-yielding population were found attributable to chloroplast and chloroplast-associated events: STAY-GREEN chloroplastic, chlorophyllase-1-like (5.08 FC), beta-amylase (3.66 FC), chlorophyllase-chloroplastic-like (3.1 FC), thiamine thiazole chloroplastic-like (2.8 FC), 1,4-alpha-glucan branching enzyme, chloroplastic/amyloplastic (2.6 FC), photosynthetic NDH subunit (2.1 FC) and protochlorophyllide chloroplastic (2 FC). The results were parallel to a significant increase in chlorophyll a content in the high-yielding population. In addition to the chloroplast-associated transcript abundance, TOO MANY MOUTHS (TMM) at 2.9 FC, which codes for distant stomatal distribution and patterning in the high-yielding population, may explain a high concentration of CO2. The results are in agreement with the role of TMM: clustered stomata cause back diffusion in the presence of gaps localized closely to one another. We conclude that the high-yielding Jatropha population corresponds to a collective function of C3 metabolism with a low degree of CAM photosynthetic fixation.
From the physiological descriptions, high chlorophyll a content and even distribution of stomata in the leaf contribute to better photosynthetic efficiency in the high yielding Jatropha compared to the low yielding population.Keywords: chlorophyll, gene expression, genetic variation, stomata
Procedia PDF Downloads 240
84 Railway Composite Flooring Design: Numerical Simulation and Experimental Studies
Authors: O. Lopez, F. Pedro, A. Tadeu, J. Antonio, A. Coelho
Abstract:
The future of the railway industry lies in the innovation of lighter, more efficient and more sustainable trains. Weight optimization in railway vehicles allows reducing power consumption and CO₂ emissions, increasing the efficiency of the engines and the maximum speed reached. Additionally, it reduces the wear of wheels and rails, increases the space available for passengers, etc. Among the various systems that integrate railway interiors, the flooring system is one which has the greatest impact both on passenger safety and comfort and on the weight of the interior systems. Due to their high weight-saving potential, relatively high mechanical resistance, good acoustic and thermal performance, ease of modular design, cost-effectiveness and long life, new sustainable composite materials and panels provide the latest innovations for competitive solutions in the development of flooring systems. However, one of the main drawbacks of flooring systems is their relatively poor resistance to point loads. Point loads in railway interiors can be caused by passengers or by components fixed to the flooring system, such as seats and restraint systems, handrails, etc. In this way, they can originate higher fatigue solicitations under service loads or zones with high stress concentrations under exceptional loads (higher longitudinal, transverse and vertical accelerations), thus reducing the useful life of the floor. Therefore, to verify all the mechanical and functional requirements of the flooring systems, many physical prototypes would be created during the design phase, with all of the high costs associated with this. Nowadays, the use of virtual prototyping methods through computer-aided design (CAD) and computer-aided engineering (CAE) software allows validating a product before committing to making physical test prototypes. The scope of this work was to use current computer tools and integrate the processes of innovation, development, and manufacturing to reduce the time from design to finished product and to optimise the development of the product for higher levels of performance and reliability. In this case, the mechanical response of several sandwich panels with different cores, polystyrene foams and composite corks, was assessed to optimise the weight and the mechanical performance of a flooring solution for railways. Sandwich panels with aluminium face sheets were tested to characterise their mechanical performance and determine the polystyrene foam and cork properties when used as inner cores. Then, a railway flooring solution was fully modelled (including the elastomer pads to provide the required vibration isolation from the car body) and structural simulations were performed using FEM analysis to comply with all the technical product specifications for the supply of a flooring system. Zones with high stress concentrations are studied and tested. The influence of vibration modes on the comfort level and stability is discussed. The information obtained with the computer tools was then complemented with several mechanical tests performed on some solutions and on specific components. The results of the numerical simulations and the experimental campaign carried out are presented in this paper. This research work was performed as part of the POCI-01-0247-FEDER-003474 (coMMUTe) Project funded by Portugal 2020 through COMPETE 2020.Keywords: cork agglomerate core, mechanical performance, numerical simulation, railway flooring system
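As a rough way to see why sandwich construction saves weight, the classical thin-face approximation for the flexural rigidity of a sandwich beam can be sketched as below. The face and core properties used are illustrative assumptions, not the panels tested in the study.

```python
# Classical sandwich-beam approximation (thin faces, unit width):
# D ~= E_f * t_f * d**2 / 2 + E_c * c**3 / 12, with d = c + t_f.
# Material values below are illustrative, not the tested panels.

def sandwich_rigidity(E_face, t_face, E_core, t_core):
    d = t_core + t_face                     # distance between face-sheet centroids
    faces = E_face * t_face * d ** 2 / 2.0  # bending stiffness carried by the faces
    core = E_core * t_core ** 3 / 12.0      # usually a small contribution
    return faces + core

# Aluminium faces (70 GPa, 1 mm) on a 20 mm polystyrene-foam core (~15 MPa).
D = sandwich_rigidity(70e9, 1e-3, 15e6, 20e-3)
print(f"Flexural rigidity per unit width: {D:.1f} N*m")
```

The stiff faces placed far from the neutral axis dominate the rigidity, which is why a light core can replace most of the solid material with little loss of bending stiffness.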
Procedia PDF Downloads 180
83 Understanding the Impact of Spatial Light Distribution on Object Identification in Low Vision: A Pilot Psychophysical Study
Authors: Alexandre Faure, Yoko Mizokami, Éric Dinet
Abstract:
In recent years, the potential of light to assist visually impaired people in their indoor mobility has been demonstrated by different studies. Implementing smart lighting systems for selective visual enhancement, especially designed for low-vision people, is an approach that breaks with existing visual aids. The appearance of the surface of an object is significantly influenced by the lighting conditions and the constituent materials of the object, and may therefore differ from expectation. Lighting conditions thus play an important role in accurate material recognition. The main objective of this work was to investigate the effect of the spatial distribution of light on object identification in the context of low vision. The purpose was to determine whether, and which, specific lighting approaches should be preferred for visually impaired people. A psychophysical experiment was designed to study the ability of individuals to identify the smaller cube of a pair under different lighting diffusion conditions. Participants were divided into two distinct groups: a reference group of observers with normal or corrected-to-normal visual acuity, and a test group, in which observers were required to wear visual-impairment simulation glasses. All participants were presented with pairs of cubes in a 'miniature room' and were instructed to estimate the relative size of the two cubes. The miniature room replicates real-life settings, adorned with decorations and separated from external light sources by black curtains. The correlated color temperature was set to 6000 K, and the horizontal illuminance at the object level to approximately 240 lux. The objects presented for comparison consisted of 11 white cubes and 11 black cubes of different sizes manufactured with a 3D printer. Participants were seated 60 cm away from the objects. Two different levels of light diffuseness were implemented. After receiving instructions, participants were asked to judge whether the two presented cubes were the same size or whether one was smaller. They provided one of five possible answers: 'Left one is smaller', 'Left one is smaller but unsure', 'Same size', 'Right one is smaller', or 'Right one is smaller but unsure'. The method of constant stimuli was used, presenting stimulus pairs in a random order to prevent learning and expectation biases. Each pair consisted of a comparison stimulus and a reference cube. A psychometric function was constructed to link the stimulus value with the frequency of correct detection, aiming to determine the 50% correct detection threshold. Collected data were analyzed through graphs illustrating participants' responses to stimuli, with accuracy increasing as the size difference between cubes grew. Statistical analyses, including two-way ANOVA tests, showed that light diffuseness had no significant impact on the difference threshold, whereas object color had a significant influence in low-vision scenarios. The first results and trends derived from this pilot experiment suggest that future investigations could explore extreme diffusion conditions to comprehensively assess the impact of diffusion on object identification. For example, the first findings related to light diffuseness may be attributed to the range of manipulation, emphasizing the need to explore how other lighting-related factors interact with diffuseness.Keywords: lighting, low vision, visual aid, object identification, psychophysical experiment
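The psychometric-function step described above, linking the physical size difference to the proportion of correct detections and reading off the 50% threshold, can be sketched with a logistic fit. The stimulus levels and response proportions below are invented for illustration; the study used the method of constant stimuli with its own stimulus set.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical data: size difference between cubes (mm) vs. proportion of correct detections.
size_diff = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
p_correct = np.array([0.10, 0.25, 0.55, 0.80, 0.95])

def logistic(x, threshold, slope):
    """Psychometric function: detection probability reaches 0.5 at x = threshold."""
    return 1.0 / (1.0 + np.exp(-slope * (x - threshold)))

params, _ = curve_fit(logistic, size_diff, p_correct, p0=[2.0, 1.0])
threshold, slope = params
print(f"Estimated 50% detection threshold: {threshold:.2f} mm (slope {slope:.2f})")
```

A fuller treatment would also include a guess rate for the forced-choice design, but the fixed-period logistic fit captures the idea of the 50% threshold estimation.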
Procedia PDF Downloads 64
82 Framework to Organize Community-Led Project-Based Learning at a Massive Scale of 900 Indian Villages
Authors: Ayesha Selwyn, Annapoorni Chandrashekar, Kumar Ashwarya, Nishant Baghel
Abstract:
Project-based learning (PBL) activities are typically implemented in technology-enabled schools by highly trained teachers. In rural India, students have limited access to technology and quality education. Implementing typical PBL activities is challenging. This study details how Pratham Education Foundation’s Hybrid Learning model was used to implement two PBL activities related to music in 900 remote Indian villages with 46,000 students aged 10-14. The activities were completed by 69% of groups that submitted a total of 15,000 videos (completed projects). Pratham’s H-Learning model reaches 100,000 students aged 3-14 in 900 Indian villages. The community-driven model engages students in 20,000 self-organized groups outside of school. The students are guided by 6,000 youth volunteers and 100 facilitators. The students partake in learning activities across subjects with the support of community stakeholders and offline digital content on shared Android tablets. A training and implementation toolkit for PBL activities is designed by subject experts. This toolkit is essential in ensuring efficient implementation of activities as facilitators aren’t highly skilled and have limited access to training resources. The toolkit details the activity at three levels of student engagement - enrollment, participation, and completion. The subject experts train project leaders and facilitators who train youth volunteers. Volunteers need to be trained on how to execute the activity and guide students. The training is focused on building the volunteers’ capacity to enable students to solve problems, rather than developing the volunteers’ subject-related knowledge. This structure ensures that continuous intervention of subject matter experts isn’t required, and the onus of judging creativity skills is put on community members. 46,000 students in the H-Learning program were engaged in two PBL activities related to Music from April-June 2019. For one activity, students had to conduct a “musical survey” in their village by designing a survey and shooting and editing a video. This activity aimed to develop students’ information retrieval, data gathering, teamwork, communication, project management, and creativity skills. It also aimed to identify talent and document local folk music. The second activity, “Pratham Idol”, was a singing competition. Students participated in performing, producing, and editing videos. This activity aimed to develop students’ teamwork and creative skills and give students a creative outlet. Students showcased their completed projects at village fairs wherein a panel of community members evaluated the videos. The shortlisted videos from all villages were further evaluated by experts who identified students and adults to participate in advanced music workshops. The H-Learning framework enables students in low resource settings to engage in PBL and develop relevant skills by leveraging community support and using video creation as a tool. In rural India, students do not have access to high-quality education or infrastructure. Therefore designing activities that can be implemented by community members after limited training is essential. The subject experts have minimal intervention once the activity is initiated, which significantly reduces the cost of implementation and allows the activity to be implemented at a massive scale.Keywords: community supported learning, project-based learning, self-organized learning, education technology
Procedia PDF Downloads 186
81 Separation of Lanthanide Ions from Mineral Waste with Functionalized Pillar[5]Arenes: Synthesis, Physicochemical Characterization and Molecular Dynamics Studies
Authors: Ariesny Vera, Rodrigo Montecinos
Abstract:
The rare-earth elements (REEs) or rare-earth metals (REMs) correspond to seventeen chemical elements composed of the fifteen lanthanoids, as well as scandium and yttrium. The lanthanoids correspond to lanthanum and the f-block elements from cerium to lutetium. Scandium and yttrium are considered rare-earth elements because they have ionic radii similar to the lighter f-block elements. These elements were called rare earths because they are simply more difficult to extract and separate individually than most metals and, generally, they do not accumulate in minerals; they are rarely found in easily mined ores and are often unfavorably distributed in common ores/minerals. REEs show unique chemical and physical properties in comparison to the other metals in the periodic table. Nowadays, these physicochemical properties are utilized in a wide range of synthetic, catalytic, electronic, medicinal, and military applications. Because of these applications, the global demand for rare-earth metals is becoming progressively more important in the transition to a self-sustaining society and a greener economy. However, due to the difficult separation between lanthanoid ions and the high cost and pollution of these processes, scientists are searching for a method that combines selectivity and quantitative separation of lanthanoids from the leaching liquor while relying on more economical and environmentally friendly processes. This motivation has favored the design and development of more efficient and environmentally friendly cation extractors incorporating compounds such as ionic liquids, polymer inclusion membranes (PIMs) and supramolecular systems. Supramolecular chemistry focuses on the development of host-guest systems, in which a host molecule can recognize and bind a certain guest molecule or ion. Normally, the formation of a host-guest complex involves non-covalent interactions. Additionally, host-guest interactions can be influenced, among other effects, by the structural nature of the host and guest. The different macrocyclic hosts for lanthanoid species that have been studied are crown ethers, cyclodextrins, cucurbiturils, calixarenes and pillararenes. Among all the factors that can influence and affect lanthanoid(III) coordination, perhaps the most basic of them is systematic control using macrocyclic substituents that promote selective coordination. In this sense, the macrocyclic pillar[n]arenes (P[n]As) allow relatively easy functionalization and have a more π-rich cavity than other host molecules. This gives P[n]As a negative electrostatic potential in the cavity, which would be responsible for the selectivity of these compounds towards cations. Furthermore, the cavity size, the linker, and the functional groups of the polar headgroups can be modified in order to control the association of lanthanoid cations. In this sense, different P[n]A systems, specifically derivatives of the pentamer P[5]A functionalized with amide, amine, phosphate and sulfate derivatives, have been designed in terms of experimental synthesis and molecular dynamics, and the interaction between these P[5]As and some lanthanoid ions such as La³+, Eu³+ and Lu³+ has been studied by physicochemical characterization with 1H-NMR, ITC and, in the case of the Eu³+ systems, fluorescence.
The molecular dynamics study of these systems was carried out in hexane as the solvent, again considering the lanthanoid ions mentioned above, together with the corresponding comparative studies between the different ions.Keywords: lanthanoids, macrocycles, pillar[n]arenes, rare-earth metal extraction, supramolecular chemistry, supramolecular complexes
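The host-guest association probed by ¹H-NMR titration can be illustrated with the standard 1:1 binding model, in which the bound fraction at each titration point follows from the exact solution of the equilibrium H + G ⇌ HG. The association constant and chemical-shift values below are placeholders for illustration, not measured values for these pillar[5]arene-lanthanoid systems.

```python
import math

def bound_fraction(h0, g0, ka):
    """Fraction of host bound for a 1:1 H + G <-> HG equilibrium (exact quadratic solution)."""
    b = h0 + g0 + 1.0 / ka
    hg = 0.5 * (b - math.sqrt(b * b - 4.0 * h0 * g0))
    return hg / h0

def observed_shift(h0, g0, ka, delta_free, delta_bound):
    """Fast-exchange observed chemical shift as a population-weighted average."""
    x = bound_fraction(h0, g0, ka)
    return (1.0 - x) * delta_free + x * delta_bound

# Illustrative titration of a 1 mM host with increasing guest, assuming Ka = 5000 M^-1.
for g0 in (0.0, 0.5e-3, 1e-3, 2e-3, 5e-3):
    print(g0, round(observed_shift(1e-3, g0, 5e3, 3.80, 3.55), 3))
```

Fitting such a curve to the measured shifts (or to ITC heats with the analogous 1:1 model) is what yields the association constants compared between La³+, Eu³+ and Lu³+.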
Procedia PDF Downloads 77
80 Assessing Organizational Resilience Capacity to Flooding: Index Development and Application to Greek Small & Medium-Sized Enterprises
Authors: Antonis Skouloudis, Konstantinos Evangelinos, Walter Leal-Filho, Panagiotis Vouros, Ioannis Nikolaou
Abstract:
Organizational resilience capacity to extreme weather events (EWEs) has attracted growing scholarly attention over the past decade as an essential aspect of business continuity management, with supporting evidence suggesting that it plays a key role in successful responses to adverse situations, crises and shocks. Small and medium-sized enterprises (SMEs) are more vulnerable to floods than their larger counterparts, so they are disproportionately affected by such extreme weather events. The limited resources at their disposal and the lack of time and skills all lead to inadequate preparedness for the challenges posed by floods. SMEs tend to plan in the short term, reacting to circumstances as they arise and focusing on their very survival. Likewise, they have less formalised structures and codified policies, and they are most usually owner-managed, resulting in a command-and-control management culture. Such characteristics leave them with limited opportunities to recover from flooding and to quickly turn their operation around from a loss-making to a profit-making one. Scholars frame the capacity of business entities to be resilient to an EWE disturbance (such as flash floods) as the rate of recovery and restoration of organizational performance to pre-disturbance conditions, the amount of disturbance (i.e. the threshold level) a business can absorb before losing structural and/or functional components that will alter or cease operation, and the extent to which the organization maintains its function (i.e. impact resistance) before performance levels are driven to zero. Nevertheless, while resilience capacity seems to be accepted as an essential trait of firms effectively transcending uncertain conditions, research deconstructing the enabling conditions and/or inhibitory factors of SMEs' resilience capacity to natural hazards is still sparse, fragmentary and mostly fuelled by anecdotal evidence or normative assumptions. Focusing on the individual level of analysis, i.e. the individual enterprise and its endeavours to succeed, the emergent picture from this relatively new research strand delineates the specification of variables, conceptual relationships and dynamic boundaries of resilience capacity components in an attempt to provide prescriptions for policy-making as well as business management. This study presents the development of a flood resilience capacity index (FRCI) and its application to Greek SMEs. The proposed composite indicator pertains to cognitive, behavioral/managerial and contextual factors that influence an enterprise's ability to shape effective responses to meet flood challenges. Through the proposed indicator-based approach, an analytical framework is set forth that will help standardize such assessments, with the overarching aim of reducing the vulnerability of SMEs to flooding. This will be achieved by identifying major internal and external attributes explaining resilience capacity, which is particularly important given the limited resources these enterprises have and the fact that they tend to be primary sources of vulnerabilities in supply chain networks, generating Single Points of Failure (SPOF).Keywords: floods, small & medium-sized enterprises, organizational resilience capacity, index development
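The composite-indicator logic behind the FRCI, normalizing heterogeneous indicators and aggregating them with weights across the cognitive, behavioural/managerial and contextual dimensions, can be sketched as follows. The indicator names, ranges, weights and min-max normalization are illustrative assumptions, not the FRCI specification developed in the study.

```python
# Hypothetical sketch of a weighted composite index (min-max normalization + weighted mean).
INDICATORS = {
    # name: (min, max, weight) -- all values are illustrative, not the FRCI specification
    "hazard_awareness":    (1, 5, 0.25),    # cognitive dimension
    "continuity_planning": (1, 5, 0.35),    # behavioural/managerial dimension
    "external_support":    (1, 5, 0.20),    # contextual dimension
    "financial_slack":     (0, 100, 0.20),  # contextual dimension (% of turnover)
}

def frci_score(responses):
    score, total_weight = 0.0, 0.0
    for name, (lo, hi, w) in INDICATORS.items():
        normalized = (responses[name] - lo) / (hi - lo)   # rescale to [0, 1]
        score += w * normalized
        total_weight += w
    return score / total_weight

sme = {"hazard_awareness": 4, "continuity_planning": 2,
       "external_support": 3, "financial_slack": 10}
print(f"FRCI = {frci_score(sme):.2f}")   # 0 = least resilient, 1 = most resilient
```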
Procedia PDF Downloads 192
79 Spectroscopic Study of the Anti-Inflammatory Action of Propofol and Its Oxidant Derivatives: Inhibition of the Myeloperoxidase Activity and of the Superoxide Anions Production by Neutrophils
Authors: Pauline Nyssen, Ange Mouithys-Mickalad, Maryse Hoebeke
Abstract:
Inflammation is a complex physiological phenomenon involving chemical and enzymatic mechanisms. Polymorphonuclear neutrophil leukocytes (PMNs) play an important role by producing reactive oxygen species (ROS) and releasing myeloperoxidase (MPO), a pro-oxidant enzyme. Released both in the phagolysosome and in the extracellular medium, MPO produces oxidant species during its peroxidase and halogenation cycles, including hypochlorous acid, which are involved in the destruction of pathogens such as bacteria or viruses. Inflammatory pathologies, such as rheumatoid arthritis and atherosclerosis, induce excessive stimulation of the PMNs and, therefore, an uncontrolled release of ROS and MPO into the extracellular medium, causing severe damage to the surrounding tissues and biomolecules such as proteins, lipids, and DNA. The treatment of chronic inflammatory pathologies remains a challenge. For many years, MPO has been used as a target for the development of effective treatments. Numerous studies have focused on the design of new drugs presenting more efficient MPO inhibitory properties. However, some designed inhibitors can be toxic. An alternative consists of assessing the potential inhibitory action of clinically known molecules having antioxidant activity. Propofol, 2,6-diisopropylphenol, which is used as an intravenous anesthetic agent, meets these requirements. Besides its anesthetic action, employed to induce a sedative state during surgery or in intensive care units, propofol and its injectable form Diprivan indeed present antioxidant properties and act as ROS and free radical scavengers. A study has also evidenced the ability of propofol to inhibit the formation of neutrophil extracellular trap fibers, which are important for trapping pathogenic microorganisms during the inflammation process. The aim of this study was to investigate the potential inhibitory action mechanism of propofol and Diprivan on MPO activity. To examine the anti-inflammatory action of propofol in depth, two of its oxidative derivatives, 2,6-diisopropyl-1,4-p-benzoquinone (PPFQ) and 3,5,3',5'-tetraisopropyl-(4,4')-diphenoquinone (PPFDQ), were also studied regarding their inhibitory action. Specific immunological extraction followed by enzyme detection (SIEFED) and molecular modeling evidenced the low anti-catalytic action of propofol. Stopped-flow absorption spectroscopy and direct MPO activity analysis proved that propofol acts as a reversible MPO inhibitor by acting as a reductive substrate in the peroxidase cycle and promoting the accumulation of redox compound II. Overall, Diprivan exhibited a weaker inhibitory action than the active molecule propofol. In contrast, PPFQ seemed to bind to and obstruct the enzyme active site, preventing the triggering of the MPO oxidant cycles. PPFQ induced better chlorination cycle inhibition at basic and neutral pH in comparison to propofol. PPFDQ did not show any MPO inhibition activity. The three molecules of interest also demonstrated their ability to inhibit an important step of the inflammation pathway, the production of superoxide anions by PMNs, as shown by EPR spectroscopy and chemiluminescence. In conclusion, propofol presents an interesting immunomodulatory activity by acting as a reductive substrate in the peroxidase cycle of MPO, slowing down its activity, whereas PPFQ acts more as an anti-catalytic substrate.
Although PPFDQ has no impact on MPO, it can act on the inflammation process by inhibiting the production of superoxide anions by PMNs.Keywords: Diprivan, inhibitor, myeloperoxidase, propofol, spectroscopy
Procedia PDF Downloads 149
78 Relevance of Dosing Time for Everolimus Toxicity in Respect to the Circadian P-Glycoprotein Expression in Mdr1a::Luc Mice
Authors: Narin Ozturk, Xiao-Mei Li, Sylvie Giachetti, Francis Levi, Alper Okyar
Abstract:
P-glycoprotein (P-gp, MDR1, ABCB1) is a transmembrane protein acting as an ATP-dependent efflux pump; it functions as a biological barrier by extruding drugs and xenobiotics out of cells in healthy tissues, especially the intestines, liver and brain, as well as in tumor cells. The circadian timing system controls a variety of biological functions in mammals, including xenobiotic metabolism and detoxification and proliferation and cell cycle events, and may affect the pharmacokinetics, toxicity and efficacy of drugs. The selective mTOR (mammalian target of rapamycin) inhibitor everolimus is an immunosuppressant and anticancer drug that is active against many cancers, and its pharmacokinetics depend on P-gp. The aim of this study was to investigate the dosing time-dependent toxicity of everolimus with respect to the intestinal P-gp expression rhythms in mdr1a::Luc mice using the Real Time-Biolumicorder (RT-BIO) System. Mdr1a::Luc male mice were synchronized with 12 h of light and 12 h of dark (LD12:12, with Zeitgeber Time 0 – ZT0 – corresponding to light onset). After 1 week of baseline recordings, everolimus (5 mg/kg/day x 14 days) was administered orally at ZT1 (resting period) and ZT13 (activity period) to mdr1a::Luc mice singly housed in an innovative monitoring device, the Real Time-Biolumicorder unit, which allows real-time and long-term monitoring of gene expression in freely moving mice. D-luciferin (1.5 mg/mL) was dissolved in the drinking water. The mouse intestinal mdr1a::Luc oscillation profile reflecting P-gp gene expression and the locomotor activity pattern were recorded every minute with the photomultiplier tube and an infrared sensor, respectively. General behavior and clinical signs were monitored, and body weight was measured every day as an index of toxicity. Drug-induced body weight change was expressed relative to body weight on the initial treatment day. The statistical significance of differences between groups was validated with ANOVA. Circadian rhythms were validated with Cosinor analysis. Everolimus toxicity changed as a function of drug timing and was least following dosing at ZT13, near the onset of the activity span in male mice. Mean body weight loss was nearly twice as large in mice treated with 5 mg/kg everolimus at ZT1 as compared to ZT13 (8.9% vs. 5.4%; ANOVA, p < 0.001). Based on the body weight loss and clinical signs upon everolimus treatment, tolerability for the drug was best following dosing at ZT13. Both rest-activity and mdr1a::Luc expression displayed stable 24-h periodic rhythms before everolimus and in both vehicle-treated controls. The real-time bioluminescence pattern of mdr1a revealed a circadian rhythm with a 24-h period and an acrophase at ZT16 (Cosinor, p < 0.001). Mdr1a expression remained rhythmic in everolimus-treated mice, whereas down-regulation of P-gp expression was observed in 2 of 4 mice. The study identified the circadian pattern of intestinal P-gp expression with unprecedented precision. Circadian timing, depending on the P-gp expression rhythms, may play a crucial role in the tolerability/toxicity of everolimus. The circadian changes in mdr1a genes deserve further studies regarding their relevance for the in vitro and in vivo chronotolerance of mdr1a-transported anticancer drugs. Chronotherapy with P-gp-effluxed anticancer drugs could then be applied according to their rhythmic patterns in host and tumor to jointly maximize treatment efficacy and minimize toxicity.Keywords: circadian rhythm, chronotoxicity, everolimus, mdr1a::Luc mice, p-glycoprotein
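The Cosinor analysis used to validate the 24-h rhythm fits a cosine of fixed period to the time series, y(t) = M + A·cos(2π(t − φ)/τ), and reads off the mesor M, amplitude A and acrophase φ. A minimal least-squares sketch is given below with simulated bioluminescence counts; the real recordings are one-minute photomultiplier data, and the values here are invented for illustration.

```python
import numpy as np

def cosinor_fit(t, y, period=24.0):
    """Least-squares cosinor: y = mesor + a*cos(w*t) + b*sin(w*t), w = 2*pi/period."""
    w = 2.0 * np.pi * t / period
    X = np.column_stack([np.ones_like(t), np.cos(w), np.sin(w)])
    mesor, a, b = np.linalg.lstsq(X, y, rcond=None)[0]
    amplitude = np.hypot(a, b)
    # Time of the fitted peak, expressed in hours within one period.
    acrophase = (np.arctan2(b, a) % (2 * np.pi)) * period / (2 * np.pi)
    return mesor, amplitude, acrophase

# Simulated counts with a peak near ZT16 plus noise (illustrative only).
rng = np.random.default_rng(0)
t = np.arange(0, 72, 0.5)                       # 3 days, 30-min bins
y = 100 + 30 * np.cos(2 * np.pi * (t - 16) / 24) + rng.normal(0, 5, t.size)
mesor, amp, acro = cosinor_fit(t, y)
print(f"mesor={mesor:.1f}, amplitude={amp:.1f}, acrophase=ZT{acro:.1f}")
```

Significance of the rhythm is then usually tested on the fitted amplitude, which is how an acrophase such as ZT16 is reported with a p-value.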
Procedia PDF Downloads 342
77 Optimizing the Residential Design Process Using Automated Technologies and AI
Authors: Milena Nanova, Martin Georgiev, Radul Shishkov, Damyan Damov
Abstract:
Modern residential architecture is increasingly influenced by rapid urbanization, technological advancements, and growing investor expectations. The integration of AI and digital tools such as CAD and BIM (Building Information Modelling) is transforming the design process by improving efficiency, accuracy, and speed. However, urban development faces challenges, including high competition for viable sites and the time-consuming nature of traditional investment feasibility studies and architectural planning. Finding and analysing suitable sites for residential development is complicated by intense competition and rising investor demands. Investors require quick assessments of property potential to avoid missing opportunities, while traditional architectural design processes rely on the experience of the team and can be time-consuming, adding pressure to make fast, effective decisions. The widespread use of CAD tools has sped up the drafting process, enhancing both accuracy and efficiency. Digital tools allow designers to manipulate drawings quickly, reducing the time spent on revisions. BIM advances this further by enabling native 3D modelling, where changes to a design in one view are automatically reflected in all others, minimizing errors and saving time. AI is becoming an integral part of architectural design software. While AI is currently being incorporated into existing programs like AutoCAD, Revit, and ArchiCAD, its full potential is reached in parametric modelling. In this process, designers define parameters (e.g., building size, layout, and materials), and the software generates multiple design variations based on those inputs. This method accelerates the design process by automating decisions and enabling the quick generation of alternative solutions. The study utilizes generative design, a specific application of parametric modelling which uses AI to explore a wide range of design possibilities based on predefined criteria. It optimizes designs through iterations, testing many variations to find the best solutions. This process is particularly beneficial in the early stages of design, where multiple options are explored before refining the best ones. AI's ability to handle complex mathematical tasks allows it to generate unconventional yet effective designs that a human designer might overlook. Residential architecture, with its predictable, typical layouts and modular nature, is especially suitable for generative design. The relationships between rooms and the overall organization of apartment units follow logical patterns, making them ideal candidates for parametric modelling. Using these tools, architects can quickly explore various apartment configurations, considering factors like apartment sizes, types, and circulation patterns, and identify the most efficient layout for a given site. Parametric modelling and generative design offer significant benefits to residential architecture by streamlining the design process, enabling faster decision-making, and optimizing building layouts. These technologies allow architects and developers to analyse numerous design possibilities, improving outcomes while responding to the challenges of urban development. By integrating AI-driven generative design, the architecture industry can enhance creativity, efficiency, and adaptability in residential projects.Keywords: architectural design, residential buildings, generative design, parametric models, workflow optimization
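A toy illustration of the generative-design loop described above: apartment-mix parameters are sampled, each variant is scored against simple constraints (floor-plate area, target unit mix), and the best candidates are kept for refinement. The unit areas, floor plate and scoring weights are invented placeholders, not actual design rules from the study.

```python
import random

# Illustrative unit catalogue (m^2) and floor-plate constraint -- placeholder values.
UNIT_AREA = {"studio": 35, "1-bed": 55, "2-bed": 80, "3-bed": 105}
FLOOR_AREA = 900          # net saleable area available per floor
TARGET_MIX = {"studio": 0.2, "1-bed": 0.35, "2-bed": 0.3, "3-bed": 0.15}

def random_variant():
    """Sample a candidate apartment mix for one floor."""
    return {u: random.randint(0, 10) for u in UNIT_AREA}

def score(counts):
    used = sum(UNIT_AREA[u] * n for u, n in counts.items())
    if used > FLOOR_AREA or used == 0:
        return -1.0                                  # infeasible variant
    total_units = sum(counts.values())
    mix_error = sum(abs(counts[u] / total_units - TARGET_MIX[u]) for u in counts)
    utilization = used / FLOOR_AREA
    return utilization - 0.5 * mix_error             # reward area use, penalize mix deviation

# Generate many variants and keep the best few, as a generative-design pass would.
variants = [random_variant() for _ in range(5000)]
best = sorted(variants, key=score, reverse=True)[:3]
for v in best:
    print(v, round(score(v), 3))
```

Production generative-design tools replace the random sampling with guided search and the simple score with multi-criteria objectives (daylight, circulation, cost), but the generate-evaluate-select structure is the same.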
Procedia PDF Downloads 2
76 A Compact Standing-Wave Thermoacoustic Refrigerator Driven by a Rotary Drive Mechanism
Authors: Kareem Abdelwahed, Ahmed Salama, Ahmed Rabie, Ahmed Hamdy, Waleed Abdelfattah, Ahmed Abd El-Rahman
Abstract:
Conventional vapor-compression refrigeration systems rely on typical refrigerants, such as CFCs, HCFCs and ammonia. Despite their suitable thermodynamic properties and their stability in the atmosphere, their corresponding global warming potential and ozone depletion potential raise concerns about their usage. Thus, the need for new refrigeration systems, which are environment-friendly, inexpensive and simple in construction, has strongly motivated the development of thermoacoustic energy conversion systems. A thermoacoustic refrigerator (TAR) is a device mainly consisting of a resonator, a stack and two heat exchangers. Typically, the resonator is a long circular tube, made of copper or steel and filled with helium as the working gas, while the stack consists of short, relatively low-thermal-conductivity ceramic parallel plates aligned with the direction of the prevailing resonant wave. Typically, the resonator of a standing-wave refrigerator has one end closed and is bounded by the acoustic driver at the other end, enabling the propagation of a half-wavelength acoustic excitation. The hot and cold heat exchangers are made of copper to allow for efficient heat transfer between the working gas and the external heat source and sink, respectively. TARs are interesting because they have no moving parts, unlike conventional refrigerators, and have almost no environmental impact, as they rely on the conversion of acoustic and heat energies. Their fabrication process is rather simple, and sizes span a wide variety of length scales. The viscous and thermal interactions between the stack plates, the heat exchangers' plates and the working gas significantly affect the flow field within the plates' channels and the energy flux density at the plates' surfaces, respectively. Here, the design, manufacture and testing of a compact refrigeration system based on thermoacoustic energy-conversion technology is reported. A 1-D linear acoustic model is carefully and specifically developed, followed by the hardware construction and testing procedures. The system consists of two harmonically oscillating pistons driven by a simple 1-HP rotary drive mechanism operating at a frequency of 42 Hz, thereby replacing typically expensive linear motors and loudspeakers, and a thermoacoustic stack within which the energy conversion of sound into heat takes place. Air at ambient conditions is used as the working gas, while the amplitude of the driver's displacement reaches 19 mm. The 30-cm-long stack is a simple porous ceramic material having 100 square channels per square inch. During operation, both the oscillating gas pressure and the solid stack temperature are recorded for further analysis. Measurements show a maximum temperature difference of about 27 degrees Celsius between the stack hot and cold ends, with a Carnot coefficient of performance of 11 and an estimated cooling capacity of five watts when operating at ambient conditions. A dynamic pressure amplitude of 7 kPa is recorded, yielding a drive ratio of approximately 7%, found to be in good agreement with theoretical prediction. The system behavior is clearly non-linear, and significant non-linear loss mechanisms are evident. This work helps in understanding the operating principles of thermoacoustic refrigerators and presents a keystone towards developing commercial thermoacoustic refrigerator units.Keywords: refrigeration system, rotary drive mechanism, standing-wave, thermoacoustic refrigerator
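The two headline figures quoted above can be reproduced from their definitions: the drive ratio is the dynamic pressure amplitude over the mean (ambient) pressure, and the Carnot coefficient of performance follows from the cold-side temperature and the measured temperature span. The ambient values assumed below are typical numbers, not measurements from the rig.

```python
# Back-of-the-envelope check of the reported figures (assumed ambient conditions).
p_amplitude = 7_000.0        # Pa, measured dynamic pressure amplitude
p_mean = 101_325.0           # Pa, assumed ambient mean pressure
delta_T = 27.0               # K, measured span between stack hot and cold ends
T_cold = 293.0               # K, assumed cold-end temperature near ambient

drive_ratio = p_amplitude / p_mean
cop_carnot = T_cold / delta_T        # ideal refrigerator COP, Tc / (Th - Tc)

print(f"drive ratio ~ {drive_ratio:.1%}")   # about 6.9%, consistent with the reported ~7%
print(f"Carnot COP  ~ {cop_carnot:.0f}")    # about 11, consistent with the reported value
```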
Procedia PDF Downloads 369
75 Policies for Circular Bioeconomy in Portugal: Barriers and Constraints
Authors: Ana Fonseca, Ana Gouveia, Edgar Ramalho, Rita Henriques, Filipa Figueiredo, João Nunes
Abstract:
Due to persistent climate pressures, there is a need to find a resilient economic system that is regenerative in nature. The bioeconomy offers the possibility of replacing non-renewable and non-biodegradable materials derived from fossil fuels with ones that are renewable and biodegradable, while a circular economy aims at sustainable and resource-efficient operations. The term 'Circular Bioeconomy', which can be summarized as all activities that transform biomass for its use in various product streams, expresses the interaction between these two ideas. Portugal has a very favourable context for promoting a Circular Bioeconomy due to its variety of climates and ecosystems, availability of biologically based resources, location, and geomorphology. Recently, there have been political and legislative efforts to develop the Portuguese Circular Bioeconomy. The Action Plan for a Sustainable Bioeconomy, approved in 2021, is composed of five axes of intervention, ranging from sustainable production and the use of regionally based biological resources to the development of a circular and sustainable bioindustry through research and innovation. However, as some statistics show, Portugal is still far from achieving circularity. According to Eurostat, Portugal has a circularity rate of 2.8%, the second lowest among the member states of the European Union. Several challenges contribute to this scenario, including sectorial heterogeneity and fragmentation, the prevalence of small producers, a lack of attractiveness for younger generations, and the absence of collaborative solutions among producers and along value chains. Regarding the Portuguese industrial sector, there is a tendency towards complex bureaucratic processes, which leads to economic and financial obstacles and an unclear national strategy. Together with the limited number of incentives the country offers to those who intend to abandon the linear economic model, many entrepreneurs are hesitant to invest the capital needed to make their companies more circular. The absence of disaggregated, georeferenced, and reliable information regarding the actual availability of biological resources is also a major issue. Low bioeconomy literacy among many of the sectoral agents and in society in general directly impacts production and final consumption decisions. The WinBio project seeks to outline a strategic approach for the management of weaknesses/opportunities in the technology transfer process, given the reality of the territory, through road mapping and national and international benchmarking. The work included the identification and analysis of agents in the interior region of Portugal, natural endogenous resources, products, and processes associated with potential development. Specific flows of biological wastes, possible value chains, and the potential for replacing critical raw materials with bio-based products were assessed, taking into consideration other countries with a mature bioeconomy. The study found that the food industry, agriculture, forestry, and fisheries generate huge waste streams, which in turn provide an opportunity for the establishment of local bio-industries powered by this biomass. The project identified biological resources with potential for replication and applicability in the Portuguese context.
The richness of natural resources and known potential in the interior region of Portugal is a major key to developing the circular economy and sustainability of the country.Keywords: circular bioeconomy, interior region of Portugal, regional development, public policy
Procedia PDF Downloads 94