Hyperspectral Imagery for Tree Speciation and Carbon Mass Estimates
Authors: Jennifer Buz, Alvin Spivey
Abstract:
The most common greenhouse gas emitted through human activities, carbon dioxide (CO2), is naturally consumed by plants during photosynthesis. This process is actively being monetized by companies wishing to offset their carbon dioxide emissions. For example, companies are now able to purchase protections for vegetated land due to be clear-cut, or purchase barren land for reforestation. Therefore, by actively preventing the destruction or decay of plant matter, or by introducing more plant matter (reforestation), a company can theoretically offset some of its emissions. One of the biggest issues in the carbon credit market is validating and verifying carbon offsets. There is a need for a system that can accurately and frequently verify that the areas sold for carbon credits have the vegetation mass (and therefore the carbon offset capability) they claim. Traditional techniques for measuring vegetation mass and determining health are costly and require many person-hours. Orbital Sidekick offers an alternative approach that accurately quantifies carbon mass and assesses vegetation health through satellite hyperspectral imagery, a technique which enables us to remotely identify material composition (including plant species) and condition (e.g., health and growth stage). How much carbon a plant is capable of storing is ultimately tied to many factors, including material density (primarily species-dependent), plant size, and health (trees that are actively decaying are not effectively storing carbon). All of these factors can be observed through satellite hyperspectral imagery. This abstract focuses on speciation. To build a species classification model, we matched pixels in our remote sensing imagery to plants on the ground for which the species is known. To accomplish this, we collaborated with researchers at the Teakettle Experimental Forest.
Our remote sensing data come from our airborne “Kato” sensor, which flew over the study area and acquired hyperspectral imagery (400-2500 nm, 472 bands) at ~0.5 m/pixel resolution. Coverage of the entire Teakettle Experimental Forest required capturing dozens of individual hyperspectral images. In order to combine these images into a mosaic, we accounted for potential variations in atmospheric conditions throughout the data collection. To do this, we ran an open-source atmospheric correction routine called ISOFIT (Imaging Spectrometer Optimal FITting), which converted all of our remote sensing data from radiance to reflectance. A database of reflectance spectra for each of the tree species within the study area was acquired using the Teakettle stem map and the geo-referenced hyperspectral images. We found that a wide variety of machine learning classifiers were able to identify the species within our images with high (>95%) accuracy. For the most robust quantification of carbon mass and the best assessment of the health of a vegetated area, speciation is critical. Through the use of high-resolution hyperspectral data, ground-truth databases, and complex analytical techniques, we are able to determine the species present within a pixel to a high degree of accuracy. These species identifications will feed directly into our carbon mass model.
Keywords: hyperspectral, satellite, carbon, imagery, python, machine learning, speciation
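To illustrate the classification step, here is a minimal sketch that matches labeled pixels (reflectance spectra) to species with a simple nearest-centroid rule. The 472-band dimensionality follows the Kato sensor description above, but the species names, the synthetic spectra, and the classifier itself are illustrative stand-ins for the study's ground-truth database and machine learning models.

```python
import numpy as np

rng = np.random.default_rng(0)
N_BANDS = 472          # 400-2500 nm, as acquired by the "Kato" sensor
SPECIES = ["abies_concolor", "pinus_jeffreyi", "calocedrus_decurrens"]  # illustrative names

# Synthetic "reflectance library": each species gets a characteristic mean spectrum.
centroids = {s: rng.random(N_BANDS) for s in SPECIES}

def make_pixels(species, n):
    """Simulate n labeled pixels: centroid spectrum plus per-band noise."""
    return centroids[species] + 0.05 * rng.standard_normal((n, N_BANDS))

X_train = np.vstack([make_pixels(s, 20) for s in SPECIES])
y_train = np.repeat(SPECIES, 20)

def classify(pixel_spectrum):
    """Nearest-centroid speciation: assign the species whose mean
    training spectrum is closest in Euclidean distance."""
    means = {s: X_train[y_train == s].mean(axis=0) for s in SPECIES}
    return min(means, key=lambda s: np.linalg.norm(pixel_spectrum - means[s]))

print(classify(make_pixels("pinus_jeffreyi", 1)[0]))
```

In practice, any of the classifiers the authors evaluated would replace the nearest-centroid rule, trained on the same pixel-to-species matches from the stem map.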
Analysis of Potential Associations of Single Nucleotide Polymorphisms in Patients with Schizophrenia Spectrum Disorders
Authors: Tatiana Butkova, Nikolai Kibrik, Kristina Malsagova, Alexander Izotov, Alexander Stepanov, Anna Kaysheva
Abstract:
Relevance. The genetic risk of developing schizophrenia is determined by two factors: single nucleotide polymorphisms and gene copy number variations. The search for serological markers for early diagnosis of schizophrenia is driven by the fact that the first five years of the disease are accompanied by significant biological, psychological, and social changes. It is during this period that pathological processes are most amenable to correction. The aim of this study was to analyze single nucleotide polymorphisms (SNPs) that are hypothesized to potentially influence the onset and development of the endogenous process. Materials and Methods. We analyzed 73 single nucleotide polymorphism variants. The study included 48 patients undergoing inpatient treatment at "Psychiatric Clinical Hospital No. 1" in Moscow, comprising 23 females and 25 males. Inclusion criteria: - Patients aged 18 and above. - Diagnosis according to ICD-10: F20.0, F20.2, F20.8, F21.8, F25.1, F25.2. - Voluntary informed consent from patients. Exclusion criteria: - The presence of concurrent somatic or neurological pathology, neuroinfections, epilepsy, organic central nervous system damage of any etiology, or regular use of medication. - Substance abuse and alcohol dependence. - Women who were pregnant or breastfeeding. Clinical and psychopathological assessment was complemented by psychometric evaluation using the PANSS scale at the beginning and end of treatment. The duration of observation during therapy was 4-6 weeks. Total DNA extraction was performed using QIAamp DNA. Blood samples were processed on an Illumina HiScan and genotyped for 652,297 markers on the Infinium Global Screening Array-24 v2.0, with imputation performed using the IMPUTE2 program with parameters Ne=20,000 and k=90. Additional filtering was performed based on INFO>0.5 and genotype probability>0.5. Quality control of the obtained DNA was conducted using agarose gel electrophoresis, with each tested sample having a volume of 100 µL.
Results. It was observed that several SNPs exhibited gender dependence. We identified groups of single nucleotide polymorphisms that occurred in 80% or more of either female or male patients. These SNPs included rs2661319, rs2842030, rs4606, rs11868035, rs518147, rs5993883, and rs6269. Another noteworthy finding was the limited combination of SNPs sufficient to manifest clinical symptoms leading to hospitalization. Among all 48 patients, each of whom was analyzed for deviations in 73 SNPs, the number of SNPs involved in the manifestation of pronounced clinical symptoms of schizophrenia was 19±3 out of 73 possible. In this study, the frequency of occurrence of single nucleotide polymorphisms also varied. The most frequently observed SNPs were rs4849127 (in 90% of cases), rs1150226 (86%), rs1414334 (75%), rs10170310 (73%), rs2857657, and rs4436578 (71%). Conclusion. Thus, the results of this study provide additional evidence that these genes may be associated with the development of schizophrenia spectrum disorders. However, we cannot rule out the hypothesis that these polymorphisms may be in linkage disequilibrium with other functionally significant polymorphisms that may actually be involved in schizophrenia spectrum disorders. It has been shown that missense SNPs by themselves are likely not causative of the disease but are in strong linkage disequilibrium with non-functional SNPs that may indeed contribute to disease predisposition.
Keywords: gene polymorphisms, genotyping, single nucleotide polymorphisms, schizophrenia
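The cohort-level statistics reported above (per-SNP occurrence frequencies and the 19±3 involved SNPs per patient) can be derived from a simple patient-by-SNP involvement matrix. The sketch below uses simulated data with the study's dimensions (48 patients, 73 SNPs); all values are illustrative, not the study's genotypes.

```python
import numpy as np

rng = np.random.default_rng(1)
N_PATIENTS, N_SNPS = 48, 73   # cohort size and SNP panel size from the study

# Simulated involvement matrix: True if a given SNP deviates in a given patient.
# Real data would come from the array genotypes after imputation and filtering.
involved = rng.random((N_PATIENTS, N_SNPS)) < 0.26  # ~19 of 73 per patient on average

# Per-SNP frequency of occurrence across the cohort (cf. rs4849127 in 90% of cases).
snp_freq = involved.mean(axis=0)

# Per-patient count of involved SNPs (the study reports 19±3 out of 73).
per_patient = involved.sum(axis=1)

print(per_patient.mean().round(1), snp_freq.max().round(2))
```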
Developing a Machine Learning-based Cost Prediction Model for Construction Projects using Particle Swarm Optimization
Authors: Soheila Sadeghi
Abstract:
Accurate cost prediction is essential for effective project management and decision-making in the construction industry. This study aims to develop a cost prediction model for construction projects using Machine Learning techniques and Particle Swarm Optimization (PSO). The research utilizes a comprehensive dataset containing project cost estimates, actual costs, resource details, and project performance metrics from a road reconstruction project. The methodology involves data preprocessing, feature selection, and the development of an Artificial Neural Network (ANN) model optimized using PSO. The study investigates the impact of various input features, including cost estimates, resource allocation, and project progress, on the accuracy of cost predictions. The performance of the optimized ANN model is evaluated using metrics such as Mean Squared Error (MSE), Root Mean Squared Error (RMSE), Mean Absolute Error (MAE), and R-squared. The results demonstrate the effectiveness of the proposed approach in predicting project costs, outperforming traditional benchmark models. The feature selection process identifies the most influential variables contributing to cost variations, providing valuable insights for project managers. However, this study has several limitations. Firstly, the model's performance may be influenced by the quality and quantity of the dataset used. A larger and more diverse dataset covering different types of construction projects would enhance the model's generalizability. Secondly, the study focuses on a specific optimization technique (PSO) and a single Machine Learning algorithm (ANN). Exploring other optimization methods and comparing the performance of various ML algorithms could provide a more comprehensive understanding of the cost prediction problem. Future research should focus on several key areas. 
Firstly, expanding the dataset to include a wider range of construction projects, such as residential buildings, commercial complexes, and infrastructure projects, would improve the model's applicability. Secondly, investigating the integration of additional data sources, such as economic indicators, weather data, and supplier information, could enhance the predictive power of the model. Thirdly, exploring the potential of ensemble learning techniques, which combine multiple ML algorithms, may further improve cost prediction accuracy. Additionally, developing user-friendly interfaces and tools to facilitate the adoption of the proposed cost prediction model in real-world construction projects would be a valuable contribution to the industry. The findings of this study have significant implications for construction project management, enabling proactive cost estimation, resource allocation, budget planning, and risk assessment, ultimately leading to improved project performance and cost control. This research contributes to the advancement of cost prediction techniques in the construction industry and highlights the potential of Machine Learning and PSO in addressing this critical challenge. However, further research is needed to address the limitations and explore the identified future research directions to fully realize the potential of ML-based cost prediction models in the construction domain.
Keywords: cost prediction, construction projects, machine learning, artificial neural networks, particle swarm optimization, project management, feature selection, road reconstruction
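A minimal sketch of the core method described above, assuming a plain global-best PSO that searches the flattened weight vector of a one-hidden-layer ANN. The synthetic dataset, network size, and PSO coefficients are illustrative choices, not those of the study.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for the road-reconstruction dataset:
# features = [cost estimate, resource allocation, progress], target = actual cost.
X = rng.random((80, 3))
y = 1.2 * X[:, 0] + 0.5 * X[:, 1] + 0.3 * X[:, 2] + 0.05 * rng.standard_normal(80)

H = 4                                  # hidden units (illustrative)
DIM = 3 * H + H + H + 1                # W1, b1, W2, b2 flattened

def ann_predict(params, X):
    """One-hidden-layer ANN with tanh activation."""
    W1 = params[:3 * H].reshape(3, H)
    b1 = params[3 * H:4 * H]
    W2 = params[4 * H:5 * H]
    b2 = params[5 * H]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def mse(params):
    return np.mean((ann_predict(params, X) - y) ** 2)

# Plain global-best PSO over the flattened weight vector.
n_particles, iters = 30, 200
pos = rng.standard_normal((n_particles, DIM))
vel = np.zeros_like(pos)
pbest, pbest_cost = pos.copy(), np.array([mse(p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, DIM))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    cost = np.array([mse(p) for p in pos])
    better = cost < pbest_cost
    pbest[better], pbest_cost[better] = pos[better], cost[better]
    gbest = pbest[pbest_cost.argmin()].copy()

print(round(mse(gbest), 4))  # RMSE, MAE, and R² would be reported alongside MSE
```

In the study itself this training loop would run over the preprocessed project features selected in the feature-selection step.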
Emotional State and Cognitive Workload during a Flight Simulation: Heart Rate Study
Authors: Damien Mouratille, Antonio R. Hidalgo-Muñoz, Nadine Matton, Yves Rouillard, Mickael Causse, Radouane El Yagoubi
Abstract:
Background: The monitoring of physiological activity related to mental workload (MW) in pilots will be useful to improve aviation safety by anticipating human performance degradation. The electrocardiogram (ECG) can reveal MW fluctuations due to cognitive workload and/or emotional state, since this measure exhibits autonomic nervous system modulations. Arguably, heart rate (HR) is one of its most intuitive and reliable parameters. It would be particularly interesting to analyze the interaction between cognitive requirements and emotion in ecological settings such as a flight simulator. This study aims to explore, by means of HR, the relation between cognitive demands and emotional activation. Presumably, the effects of cognitive and emotional overloads are not necessarily cumulative. Methodology: Eight healthy volunteers holding the Private Pilot License were recruited (male; 20.8±3.2 years). The ECG signal was recorded throughout the experiment by placing two electrodes on the clavicle and left pectoral of the participants. The HR was computed within 4-minute segments. The NASA-TLX and Big Five inventories were used to assess subjective workload and to consider the influence of individual personality differences. The experiment consisted of completing two dual-tasks, each of approximately 30 minutes' duration, in an AL50 flight simulator. Each dual-task required the simultaneous accomplishment of a pre-established flight plan and an additional task based on target stimulus discrimination inserted between Air Traffic Control instructions. This secondary task allowed us to vary the cognitive workload from low (LC) to high (HC) levels by combining auditory and visual numerical stimuli that required responses according to specific criteria. Regarding the emotional condition, the two dual-tasks were designed to ensure analogous difficulty in terms of the cognitive demands solicited. The former was performed by the pilot alone, i.e. the Low Arousal (LA) condition.
In contrast, the latter generated high arousal (HA), since the pilot was supervised by two evaluators, filmed, and involved in a mock competition with the rest of the participants. Results: Performance on the secondary task showed significantly faster reaction times (RT) for the HA compared to the LA condition (p=.003). Moreover, faster RT was found for the LC compared to the HC condition (p < .001). No interaction was found. Concerning the HR measure, despite the lack of main effects, an interaction between emotion and cognition was evidenced (p=.028). Post hoc analysis showed a smaller HR for HA compared to LA only under LC (p=.049). Conclusion: The control of an aircraft is a very complex task involving strong cognitive demands, and it depends on the emotional state of pilots. According to the behavioral data, the experimental setup successfully generated distinct emotional and cognitive levels. As suggested by the interaction found in the HR measure, these two factors do not seem to have a cumulative impact on the sympathetic nervous system. Apparently, low cognitive workload makes pilots more sensitive to emotional variations. These results hint at the independence of data processing and emotional regulation. Further physiological data are necessary to confirm and disentangle this relation. This procedure may be useful for objectively monitoring pilots' mental workload.
Keywords: cognitive demands, emotion, flight simulator, heart rate, mental workload
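The HR computation described above (mean heart rate within 4-minute segments of the ECG) can be sketched from R-peak times. The recording below is synthetic, and the segmentation logic is a plausible reading of the method rather than the authors' exact pipeline.

```python
import numpy as np

# Illustrative R-peak times (seconds): a synthetic ~9-minute recording
# with a mean R-R interval of 0.8 s (75 bpm) and mild variability.
rng = np.random.default_rng(3)
rr = 0.8 + 0.05 * rng.standard_normal(700)
r_times = np.cumsum(rr)

def hr_per_segment(r_times, segment_s=240.0):
    """Mean heart rate (bpm) within consecutive fixed-length segments,
    here 4-minute windows as in the study."""
    hrs = []
    t0 = 0.0
    while t0 + segment_s <= r_times[-1]:
        in_seg = r_times[(r_times >= t0) & (r_times < t0 + segment_s)]
        rr_seg = np.diff(in_seg)              # R-R intervals inside the segment
        hrs.append(60.0 / rr_seg.mean())      # bpm = 60 / mean R-R (s)
        t0 += segment_s
    return hrs

print([round(h) for h in hr_per_segment(r_times)])
```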
Anesthesia for Spinal Stabilization Using Neuromuscular Blocking Agents in Dog: Case Report
Authors: Agata Migdalska, Joanna Berczynska, Ewa Bieniek, Jacek Sterna
Abstract:
Muscle relaxation is considered important during general anesthesia for spine stabilization. In the presented case, a peripherally acting muscle relaxant was applied during general anesthesia for spine stabilization surgery. The patient was an 11-year-old, 26 kg, male, mixed-breed dog. The spinal fracture was located at Th13-L1-L2, probably due to a car accident. Preanesthetic physical examination revealed no signs of underlying health issues. The dog was premedicated with midazolam 0.2 mg IM and butorphanol 2.4 mg IM. General anesthesia was induced with propofol IV. After induction, the dog was intubated with an endotracheal tube, connected to an open-ended rebreathing system, and maintained under inhalation anesthesia with isoflurane in oxygen. Rocuronium at 0.5 mg/kg was given IV. The use of the muscle relaxant was accompanied by an assessment of the degree of neuromuscular blockade with a peripheral nerve stimulator. Electrodes were attached to the skin overlying the peroneal nerve at the craniolateral tibia. Four electrical pulses were applied to the nerve over a 2-second period. When a satisfactory nerve block was detected, the dog was prepared for surgery. No further monitoring of the effectiveness of the blockade was performed during surgery. Mechanical ventilation was maintained throughout anesthesia. During surgery the dog remained stable, and no anesthetic complications occurred. Intraoperatively, the surgeon noted that the neuromuscular blockade resulted in a better approach to the spine and easier muscle manipulation, which was helpful in order to see the fracture and replace bone fragments. Finally, euthanasia was performed intraoperatively as a result of an extensive myelomalacia process of the spinal cord. This prevented examination of the recovery process. Neuromuscular blocking agents act at the neuromuscular junction to provide profound muscle relaxation throughout the body.
Muscle blocking agents are neither anesthetic nor analgesic; therefore, used inappropriately, they may cause paralysis in a fully conscious patient who can feel pain. They cause paralysis of all skeletal muscles, including the diaphragm and intercostal muscles when given in higher doses. Intraoperative management includes maintaining stable physiological conditions, which involves adjusting hemodynamic parameters, ensuring proper ventilation, avoiding variations in temperature, and maintaining normal blood flow to promote proper oxygen exchange. Neuromuscular blocking agents can cause many side effects, such as residual paralysis, anaphylactic or anaphylactoid reactions, delayed recovery from anesthesia, histamine release, and recurarization. Therefore, a reversal drug such as neostigmine (with glycopyrrolate) or edrophonium (with atropine) should be used in case of a life-threatening situation. Another useful drug is sugammadex, although its cost strongly limits its use. Muscle relaxants improve surgical conditions during spinal surgery, especially in heavily muscled individuals. They are also used to facilitate the replacement of dislocated joints, as they improve conditions during fracture reduction. It is important to emphasize that in a patient with muscle weakness, neuromuscular blocking agents may result in intraoperative and early postoperative cardiovascular and respiratory complications, as well as prolonged recovery from anesthesia. This should not appear in patients with a recent spine fracture or luxation. Therefore, it is believed that neuromuscular blockers can be useful during spine stabilization procedures.
Keywords: anesthesia, dog, neuromuscular block, spine surgery
Advancements in Arthroscopic Surgery Techniques for Anterior Cruciate Ligament (ACL) Reconstruction
Authors: Islam Sherif, Ahmed Ashour, Ahmed Hassan, Hatem Osman
Abstract:
Anterior Cruciate Ligament (ACL) injuries are common among athletes and individuals participating in sports with sudden stops, pivots, and changes in direction. Arthroscopic surgery is the gold standard for ACL reconstruction, aiming to restore knee stability and function. Recent years have witnessed significant advancements in arthroscopic surgery techniques, graft materials, and technological innovations, revolutionizing the field of ACL reconstruction. This presentation delves into the latest advancements in arthroscopic surgery techniques for ACL reconstruction and their potential impact on patient outcomes. Traditionally, autografts from the patellar tendon, hamstring tendon, or quadriceps tendon have been commonly used for ACL reconstruction. However, recent studies have explored the use of allografts, synthetic scaffolds, and tissue-engineered grafts as viable alternatives. This abstract evaluates the benefits and potential drawbacks of each graft type, considering factors such as graft incorporation, strength, and risk of graft failure. Moreover, the application of augmented reality (AR) and virtual reality (VR) technologies in surgical planning and intraoperative navigation has gained traction. AR and VR platforms provide surgeons with detailed 3D anatomical reconstructions of the knee joint, enhancing preoperative visualization and aiding in graft tunnel placement during surgery. We discuss the integration of AR and VR in arthroscopic ACL reconstruction procedures, evaluating their accuracy, cost-effectiveness, and overall impact on surgical outcomes. Beyond graft selection and surgical navigation, patient-specific planning has gained attention in recent research. Advanced imaging techniques, such as MRI-based personalized planning, enable surgeons to tailor ACL reconstruction procedures to each patient's unique anatomy. 
By accounting for individual variations in the femoral and tibial insertion sites, this personalized approach aims to optimize graft placement and potentially improve postoperative knee kinematics and stability. Furthermore, rehabilitation and postoperative care play a crucial role in the success of ACL reconstruction. This abstract explores novel rehabilitation protocols, emphasizing early mobilization, neuromuscular training, and accelerated recovery strategies. Integrating technology, such as wearable sensors and mobile applications, into postoperative care can facilitate remote monitoring and timely intervention, contributing to enhanced rehabilitation outcomes. In conclusion, this presentation provides an overview of the cutting-edge advancements in arthroscopic surgery techniques for ACL reconstruction. By embracing innovative graft materials, augmented reality, patient-specific planning, and technology-driven rehabilitation, orthopedic surgeons and sports medicine specialists can achieve superior outcomes in ACL injury management. These developments hold great promise for improving the functional outcomes and long-term success rates of ACL reconstruction, benefitting athletes and patients alike.
Keywords: arthroscopic surgery, ACL, autograft, allograft, graft materials, ACL reconstruction, synthetic scaffolds, tissue-engineered graft, virtual reality, augmented reality, surgical planning, intra-operative navigation
Modelling Spatial Dynamics of Terrorism
Authors: André Python
Abstract:
To this day, terrorism persists as a worldwide threat, exemplified by the recent deadly attacks in January 2015 in Paris and the ongoing massacres perpetrated by ISIS in Iraq and Syria. In response to this threat, states deploy various counterterrorism measures, the cost of which could be reduced through effective preventive measures. In order to increase the efficiency of preventive measures, policy-makers may benefit from accurate predictive models that are able to capture the complex spatial dynamics of terrorism occurring at a local scale. Despite empirical research carried out at the country level that has confirmed theories explaining the diffusion processes of terrorism across space and time, scholars have failed to assess diffusion theories at a local scale. Moreover, since scholars have not made the most of recent statistical modelling approaches, they have been unable to build predictive models accurate in both space and time. In an effort to address these shortcomings, this research suggests a novel approach to systematically assess the theories of terrorism's diffusion at a local scale and provide a predictive model of the local spatial dynamics of terrorism worldwide. With a focus on the lethal terrorist events that occurred after 9/11, this paper addresses the following question: why and how does lethal terrorism diffuse in space and time? Based on geolocated data on worldwide terrorist attacks and covariates gathered from 2002 to 2013, a binomial spatio-temporal point process is used to model the probability of terrorist attacks on a sphere (the world), the surface of which is discretised in the form of Delaunay triangles and refined in areas of specific interest. Within a Bayesian framework, the model is fitted through an integrated nested Laplace approximation, a recent fitting approach that computes fast and accurate estimates of posterior marginals.
Hence, for each location in the world, the model provides a probability of encountering a lethal terrorist attack and measures of volatility, which inform on the model's predictability. Diffusion processes are visualised through interactive maps that highlight space-time variations in the probability and volatility of encountering a lethal attack from 2002 to 2013. Based on the previous twelve years of observation, the location and lethality of terrorist events in 2014 are accurately predicted. Throughout the global scope of this research, local diffusion processes such as escalation and relocation are systematically examined: the former process describes an expansion from high-concentration areas of lethal terrorist events (hotspots) to neighbouring areas, while the latter is characterised by changes in the location of hotspots. By controlling for the effect of geographical, economic and demographic variables, the results of the model suggest that the diffusion processes of lethal terrorism are jointly driven by contagious and non-contagious factors that operate on a local scale, as predicted by theories of diffusion. Moreover, by providing a quantitative measure of predictability, the model prevents policy-makers from making decisions based on highly uncertain predictions. Ultimately, this research may provide important complementary tools to enhance the efficiency of policies that aim to prevent and combat terrorism.
Keywords: diffusion process, terrorism, spatial dynamics, spatio-temporal modeling
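As a greatly simplified stand-in for the Bayesian spatio-temporal point process fitted with INLA, the sketch below implements only the binomial core: a logistic model of the per-cell probability of a lethal attack, on a flat grid of cells with invented covariates instead of Delaunay triangles on the sphere.

```python
import numpy as np

rng = np.random.default_rng(4)

# Each cell gets covariates and a binary outcome "at least one lethal attack".
# The covariate names and effect sizes are invented for illustration.
n_cells = 500
X = np.column_stack([
    np.ones(n_cells),                 # intercept
    rng.random(n_cells),              # e.g. population density (scaled)
    rng.random(n_cells),              # e.g. distance to the nearest past hotspot
])
true_beta = np.array([-2.0, 3.0, -1.5])
p_true = 1 / (1 + np.exp(-X @ true_beta))
y = (rng.random(n_cells) < p_true).astype(float)

# Logistic regression by gradient ascent (the paper fits a Bayesian
# spatio-temporal model with INLA; this is only the binomial likelihood core).
beta = np.zeros(3)
for _ in range(3000):
    p = 1 / (1 + np.exp(-X @ beta))
    beta += 0.05 * X.T @ (y - p) / n_cells

prob = 1 / (1 + np.exp(-X @ beta))   # per-cell probability of a lethal attack
print(beta.round(1))
```

The full model additionally carries spatial and temporal random effects over the triangulated sphere, which is what produces the volatility measures described above.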
Early Predictive Signs for Kasai Procedure Success
Authors: Medan Isaeva, Anna Degtyareva
Abstract:
Context: Biliary atresia is a common reason for liver transplants in children, and the Kasai procedure can potentially be successful in avoiding the need for transplantation. However, it is important to identify factors that influence surgical outcomes in order to optimize treatment and improve patient outcomes. Research aim: The aim of this study was to develop prognostic models to assess the outcomes of the Kasai procedure in children with biliary atresia. Methodology: This retrospective study analyzed data from 166 children with biliary atresia who underwent the Kasai procedure between 2002 and 2021. The effectiveness of the operation was assessed based on specific criteria, including post-operative stool color, jaundice reduction, and bilirubin levels. The study involved a comparative analysis of various parameters, such as gestational age, birth weight, age at operation, physical development, liver and spleen sizes, and laboratory values including bilirubin, ALT, AST, and others, measured pre- and post-operation. Ultrasonographic evaluations were also conducted pre-operation, assessing the hepatobiliary system and related quantitative parameters. The study was carried out by two experienced specialists in pediatric hepatology. Comparative analysis and multifactorial logistic regression were used as the primary statistical methods. Findings: The study identified several statistically significant predictors of a successful Kasai procedure, including the presence of the gallbladder and levels of cholesterol and direct bilirubin post-operation. A detectable gallbladder was associated with a higher probability of surgical success, while elevated post-operative cholesterol and direct bilirubin levels were indicative of a reduced chance of positive outcomes. Theoretical importance: The findings of this study contribute to the optimization of treatment strategies for children with biliary atresia undergoing the Kasai procedure. 
By identifying early predictive signs of success, clinicians can modify treatment plans and manage patient care more effectively and proactively. Data collection and analysis procedures: Data for this analysis were obtained from the health records of patients who received the Kasai procedure. Comparative analysis and multifactorial logistic regression were employed to analyze the data and identify significant predictors. Question addressed: The study addressed the question of identifying predictive factors for the success of the Kasai procedure in children with biliary atresia. Conclusion: The developed prognostic models serve as valuable tools for early detection of patients who are less likely to benefit from the Kasai procedure. This enables clinicians to modify treatment plans and manage patient care more effectively and proactively. Potential limitations of the study: The study has several limitations. Its retrospective nature may introduce biases and inconsistencies in data collection. As a single-center study, its results might not be generalizable to wider populations due to variations in surgical and postoperative practices. Also, other potential influencing factors beyond the clinical, laboratory, and ultrasonographic parameters considered in this study were not explored, which could affect the outcomes of the Kasai operation. Future studies could benefit from including a broader range of factors.
Keywords: biliary atresia, Kasai operation, prognostic model, native liver survival
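A prognostic model of the kind described above can be applied as a logistic risk score. The coefficients below are invented placeholders; they only mirror the direction of the reported effects (a detectable gallbladder raises, and elevated post-operative cholesterol and direct bilirubin lower, the predicted chance of success).

```python
import math

# Hypothetical coefficients; the predictors mirror those found significant in
# the study, but every number here is an illustrative assumption.
COEF = {"intercept": 1.0, "gallbladder_present": 1.2,
        "cholesterol_mmol_l": -0.3, "direct_bilirubin_umol_l": -0.02}

def success_probability(gallbladder_present, cholesterol, direct_bilirubin):
    """Logistic prognostic score: P(success) = 1 / (1 + exp(-linear predictor))."""
    z = (COEF["intercept"]
         + COEF["gallbladder_present"] * gallbladder_present
         + COEF["cholesterol_mmol_l"] * cholesterol
         + COEF["direct_bilirubin_umol_l"] * direct_bilirubin)
    return 1 / (1 + math.exp(-z))

# A detectable gallbladder raises the predicted probability of success;
# higher post-operative direct bilirubin lowers it.
print(round(success_probability(1, 4.0, 20.0), 2))
```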
Solar and Galactic Cosmic Ray Impacts on Ambient Dose Equivalent Considering a Flight Path Statistic Representative to World-Traffic
Abstract:
The Earth is constantly bombarded by cosmic rays that can be of either galactic or solar origin. Thus, humans are exposed to elevated levels of galactic radiation at aircraft altitudes. The typical total ambient dose equivalent for a transatlantic flight is about 50 μSv during quiet solar activity. By contrast, estimates of the contribution induced by certain solar particle events differ by an order of magnitude. Indeed, during a Ground Level Enhancement (GLE) event, the Sun can emit particles of sufficient energy and intensity to raise radiation levels on Earth's surface. Analyses of the characteristics of GLEs occurring since 1942 showed that for the worst of them, the dose level is on the order of 1 mSv or more. The largest of these events was observed in February 1956, for which the ambient dose equivalent rate was on the order of 10 mSv/hr. The extra dose at aircraft altitudes for a flight during this event might have been about 20 mSv, i.e. comparable with the annual limit for aircrew. The most recent GLE occurred in September 2017, resulting from an X-class solar flare, and was measured on the surface of both the Earth and Mars using the Radiation Assessment Detector on the Mars Science Laboratory's Curiosity rover. Recently, Hubert et al. proposed a GLE model included in a particle transport platform (named ATMORAD) describing extensive air shower characteristics and allowing assessment of the ambient dose equivalent. In this approach, the galactic cosmic ray (GCR) description is based on the force-field approximation model. The physical description of the solar cosmic rays (SCR) considers the primary differential rigidity spectrum and the distribution of primary particles at the top of the atmosphere. ATMORAD makes it possible to determine the spectral fluence rate of secondary particles induced by extensive showers, considering altitudes from ground level to 45 km. The ambient dose equivalent can then be determined using fluence-to-ambient dose equivalent conversion coefficients.
The objective of this paper is to analyze the GCR and SCR impacts on ambient dose equivalent considering a large statistical sample of world flight paths. Flight trajectories are based on the Eurocontrol Demand Data Repository (DDR) and consider realistic flight plans with and without regulations, or updated with radar data from the CFMU (Central Flow Management Unit). The final paper will present exhaustive analyses of solar impacts on ambient dose equivalent levels and will propose detailed analyses considering route and airplane characteristics (departure, arrival, continent, airplane type, etc.) and the phasing of the solar event. Preliminary results show an important impact of the flight path, particularly the latitude, which drives the cutoff rigidity variations. Moreover, dose values vary drastically during GLE events, on the one hand with the route path (latitude, longitude, altitude), and on the other hand with the phasing of the solar event. Considering the GLE that occurred on 23 February 1956, the average ambient dose equivalent evaluated for a Paris - New York flight is around 1.6 mSv, which is consistent with previous works. This point highlights the importance of monitoring these solar events and of developing semi-empirical and particle transport methods to obtain reliable calculations of dose levels.
Keywords: cosmic ray, human dose, solar flare, aviation
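The final computational step described above, converting a secondary-particle spectral fluence rate into an ambient dose equivalent rate via fluence-to-dose conversion coefficients, can be sketched as a simple energy integral. All numerical values below are illustrative placeholders, not ATMORAD output or tabulated ICRP/ICRU coefficients.

```python
import numpy as np

energy_mev = np.array([1.0, 10.0, 100.0, 1000.0])        # secondary-particle energy grid (MeV)
fluence_rate = np.array([0.12, 0.04, 0.008, 0.0005])     # particles / cm^2 / s / MeV (illustrative)
h10_coeff = np.array([4e-5, 4e-4, 8e-5, 3e-4])           # fluence-to-H*(10) coefficients, uSv * cm^2 (illustrative)

def ambient_dose_rate(energy, fluence, coeff):
    """Trapezoidal integration of fluence(E) * h*(10)(E) over energy,
    giving the ambient dose equivalent rate in uSv/s."""
    integrand = fluence * coeff
    return float(np.sum((integrand[1:] + integrand[:-1]) / 2 * np.diff(energy)))

rate = ambient_dose_rate(energy_mev, fluence_rate, h10_coeff)
dose_8h_flight = rate * 8 * 3600   # dose accumulated over an 8-hour route, in uSv
print(round(dose_8h_flight, 1))
```

In the actual analysis this integral would be evaluated per particle species and per point along the flight trajectory, then summed along the route.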
Procedia PDF Downloads 206
29 Upper Jurassic to Lower Cretaceous Oysters (Bivalvia, Ostreoidea) from Siberia: Taxonomy and Variations of Carbon and Oxygen Isotopes
Authors: Igor N. Kosenko
Abstract:
The present contribution is an analysis of more than 300 specimens of Upper Jurassic to Lower Cretaceous oysters collected by V.A. Zakharov during the 1960s and currently stored in the Trofimuk Institute of Geology and Geophysics SB RAS (Novosibirsk, Russia). They were sampled on the northwestern border of Western Siberia (Yatriya, Maurynia, Tol’ya and Lopsiya rivers) and in the north of Eastern Siberia (Boyarka, Bolshaya Romanikha and Dyabaka-Tari rivers). During the last five years, they have been examined for taxonomical and palaeoecological purposes. Isotopic analyses were performed on the carbonate material of the oyster shells, and the associated palaeotemperatures were derived. The taxonomical study consists of classical morphofunctional and biometrical analyses. It is complemented by a further large collection of Cretaceous oysters from Crimea, as well as the modern Pacific oyster, Crassostrea gigas; these were studied to understand the range of modification variability within different species. Oysters previously identified as Liostrea are now attributed to four genera: Praeexogyra and Helvetostrea (Flemingostreidae), Pernostrea (Gryphaeidae) and one new genus (Gryphaeidae), including one species, “Liostrea” roemeri (Quenstedt). The latter is characterized by a peculiar ethology, being attached to floating ammonites, and by a morphology outlined by a beak-shaped umbo on the right (!) valve. Endemic Siberian species of the genus Pernostrea have been included in the subgenus Boreiodeltoideum subgen. nov. The genera Pernostrea and Deltoideum have been included in the tribe Pernostreini n. trib. of the subfamily Gryphaeinae. A model of the phylogenetic relationships between species of this tribe has been proposed. The Siberian oyster complexes were compared with complexes from Western Europe, Poland and the East European Platform. In the western Boreal and Subboreal Realm (England, northern France and Poland), two stages of oyster development are recognized: a Jurassic type and a Cretaceous type.
In Siberia, the Jurassic and Lower Cretaceous oysters formed a single continuous complex, possibly due to the isolation of the Siberian Basin from the West during the Early Cretaceous. Seven oyster shells of Pernostrea (Pernostrea) uralensis (Zakharov) from the Jurassic/Cretaceous boundary interval (Upper Volgian - Lower Ryazanian) of the Maurynia river were used for δ13C and δ18O isotopic analyses. The preservation of the carbonate material was controlled by cathodoluminescence analyses, by the contents of Fe, Mn and Sr, and by the absence of correlation between δ13C and δ18O and the contents of Fe and Mn. The δ13C and δ18O data obtained were compared with isotopic data from belemnites of the same stratigraphical interval of the same section and were used to trace palaeotemperatures. A general trend towards negative δ18O values is recorded in the Maurynia section, from the lower part of the Upper Volgian to the middle part of the Ryazanian Chetaites sibiricus ammonite zone. This trend was previously recorded in the Nordvik section. The higher palaeotemperatures (2°C on average) determined from the oyster shells indicate that the belemnites likely migrated laterally and lived part of their lives in cooler waters. This work was financially supported by the Russian Foundation for Basic Research (grant no. 16-35-00003).
Keywords: isotopes, oysters, Siberia, taxonomy
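The palaeotemperatures mentioned above are conventionally derived from the oxygen-isotope composition of the shell carbonate. The abstract does not state which calibration was applied; a commonly used form (Anderson and Arthur, 1983) is:

```latex
T(^{\circ}\mathrm{C}) = 16.0 - 4.14\,(\delta_{c} - \delta_{w}) + 0.13\,(\delta_{c} - \delta_{w})^{2}
```

where $\delta_{c}$ is the δ18O of the shell carbonate (vs. PDB) and $\delta_{w}$ that of the ambient seawater (vs. SMOW). More negative δ18O values thus correspond to higher temperatures, consistent with the warming trend recorded in the Maurynia section.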
Procedia PDF Downloads 194
28 Strategy to Evaluate Health Risks of Short-Term Exposure of Air Pollution in Vulnerable Individuals
Authors: Sarah Nauwelaerts, Koen De Cremer, Alfred Bernard, Meredith Verlooy, Kristel Heremans, Natalia Bustos Sierra, Katrien Tersago, Tim Nawrot, Jordy Vercauteren, Christophe Stroobants, Sigrid C. J. De Keersmaecker, Nancy Roosens
Abstract:
Projected climate changes could exacerbate respiratory disorders associated with reduced air quality. Air pollution and climate change influence each other through complex interactions. Poor air quality in urban and rural areas includes high levels of particulate matter (PM), ozone (O3) and nitrogen oxides (NOx), representing a major threat to public health, especially for the most vulnerable population strata and in particular young children. In this study, we aim to develop generic standardized policy-supporting tools and methods that will allow the risks of the combined short-term effects of O3 and PM on the cardiorespiratory system of children to be evaluated in future larger-scale epidemiological follow-up studies. We will use non-invasive indicators of airway damage/inflammation and of genetic or epigenetic variations, using urine or saliva as alternatives to blood samples. Therefore, a multi-phase field study will be organized in order to assess the sensitivity and applicability of these tests in large cohorts of children during episodes of air pollution. A first test phase was planned for March 2018, not yet taking into account ‘critical’ pollution periods. Working with non-invasive samples, choosing the right set-up for the field work and the selection of volunteers were parameters to consider, as they significantly influence the feasibility of this type of study. During this test phase, the selection of the volunteers was done in collaboration with medical doctors from the Centre for Student Assistance (CLB), by choosing a class of pre-pubertal children of 9-11 years old in a primary school in Flemish Brabant, Belgium. A questionnaire collecting information on the health and background of the children and an informed consent document were drawn up for the parents, as well as a simplified cartoon version of this document for the children.
A detailed study protocol was established, giving clear information on the study objectives, the recruitment, the sample types, the medical examinations to be performed, the strategy to ensure anonymity, and finally the sample processing. Furthermore, the protocol describes how this field study will be conducted in relation to the forecasting and monitoring of air pollutants in the future phases. Potential protein, genetic and epigenetic biomarkers reflecting respiratory function and the levels of air pollution will be measured in the collected samples using unconventional technologies. The test-phase results will be used to address the most important bottlenecks before proceeding to the following phases of the study, in which the combined effect of O3 and PM during pollution peaks will be examined. This feasibility study will allow possible bottlenecks to be identified and will provide missing scientific knowledge necessary for the preparation, implementation and evaluation of federal policies/strategies based on the most appropriate epidemiological studies on the health effects of air pollution. The research leading to these results has been funded by the Belgian Science Policy Office through contract No. BR/165/PI/PMOLLUGENIX-V2.
Keywords: air pollution, biomarkers, children, field study, feasibility study, non-invasive
Procedia PDF Downloads 179
27 Facies, Diagenetic Analysis and Sequence Stratigraphy of Habib Rahi Formation Dwelling in the Vicinity of Jacobabad Khairpur High, Southern Indus Basin, Pakistan
Authors: Muhammad Haris, Syed Kamran Ali, Mubeen Islam, Tariq Mehmood, Faisal Shah
Abstract:
Jacobabad Khairpur High, part of the Sukkur rift zone, is the boundary separating the Central and Southern Indus Basins; it formed as a result of post-Jurassic uplift after the deposition of the Middle Jurassic Chiltan Formation. The Habib Rahi Formation, of Middle to Late Eocene age, outcrops in the vicinity of the Jacobabad Khairpur High; a section at Rohri near Sukkur was measured in detail for lithofacies, microfacies, diagenetic analysis and sequence stratigraphy. The Habib Rahi Formation is richly fossiliferous and consists mostly of limestone with subordinate clays and marl. The total thickness of the formation in this section is 28.8 m. The bottom of the formation is not exposed, while the upper contact with the Sirki Shale of Middle Eocene age is unconformable in places. The section was measured using the Jacob’s staff method, and traverses were made perpendicular to the strike. Four lithofacies were identified on the basis of outcrop geology: coarse-grained limestone facies (HR-1 to HR-5), massive-bedded limestone facies (HR-6 to HR-7), micritic limestone facies (HR-8 to HR-13) and algal dolomitic limestone facies (HR-14). A total of 14 rock samples were collected from the outcrop for detailed petrographic studies, and thin sections of the respective samples were prepared and analyzed under the microscope. Four microfacies were identified on the basis of Dunham’s (1962) classification system, after studying textures, grain size and fossil content, and of Folk’s (1959) classification system, after reviewing the allochem types. These microfacies are HR-MF 1: benthic foraminiferal wackestone/biomicrite; HR-MF 2: foraminiferal Nummulites wackestone-packstone/biomicrite; HR-MF 3: benthic foraminiferal packstone/biomicrite; and HR-MF 4: bioclastic carbonate mudstone/micrite. The abundance of larger benthic foraminifera (LBF), including Assilina sp., A. spiral abrade, A. granulosa, A. dandotica, A. laminosa, Nummulites sp., N. fabiani, N. striatus, N. globulus, Textularia, bioclasts and red algae, indicates a shallow marine (tidal flat) environment of deposition. Based on variations in rock types, grain size and marine fauna, the Habib Rahi Formation shows progradational stacking patterns, which indicate coarsening-upward cycles. A second-order sea-level rise is identified (spanning the Ypresian to Bartonian ages), representing a Transgressive System Tract (TST), together with a third-order Regressive System Tract (RST) spanning the Bartonian to Priabonian ages. Diagenetic processes include the replacement of fossils by mud, dolomitization, pressure-dissolution-associated stylolite features and their filling with dark organic matter. The microfossils present, including Nummulites striatus, N. fabiani and Assilina dandotica, signify a Bartonian to Priabonian age for the Habib Rahi Formation.
Keywords: Jacobabad Khairpur High, Habib Rahi Formation, lithofacies, microfacies, sequence stratigraphy, diagenetic history
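The Dunham scheme referred to above assigns carbonate textures from the depositional fabric. A minimal sketch of the decision logic follows (the thresholds and names are the conventional scheme; the helper function itself is hypothetical, not part of the study, and crystalline carbonates are omitted for brevity):

```python
def dunham_class(contains_mud: bool, mud_supported: bool, grain_pct: float,
                 bound_during_deposition: bool = False) -> str:
    """Simplified Dunham (1962) carbonate texture classification,
    using the conventional 10% grain cut-off between mudstone and
    wackestone."""
    if bound_during_deposition:
        return "boundstone"   # components bound organically during deposition
    if not contains_mud:
        return "grainstone"   # grain-supported, no carbonate mud
    if mud_supported:
        return "mudstone" if grain_pct < 10.0 else "wackestone"
    return "packstone"        # grain-supported but with interstitial mud

# Example: a mud-supported biomicrite with abundant (>10%) benthic
# foraminifera, like microfacies HR-MF 1 above, classifies as wackestone.
print(dunham_class(contains_mud=True, mud_supported=True, grain_pct=30.0))  # wackestone
```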
Procedia PDF Downloads 473
26 Workflow Based Inspection of Geometrical Adaptability from 3D CAD Models Considering Production Requirements
Authors: Tobias Huwer, Thomas Bobek, Gunter Spöcker
Abstract:
Driving forces for enhancements in production are trends such as digitalization and individualized production. Currently, such developments are restricted to assembly parts; complex freeform surfaces are not addressed in this context. The need for efficient use of resources and near-net-shape production will require the individualized production of complex shaped workpieces. Due to variations between the nominal model and the actual geometry, this can lead to changes of operations in computer-aided process planning (CAPP) to make CAPP manageable for adaptive serial production. In this context, 3D CAD data can be a key to realizing that objective. Along with developments in geometrical adaptation, a preceding inspection method based on CAD data is required to support the process planner by providing objective criteria for decisions about the adaptive manufacturability of workpieces. Nowadays, such decisions depend on the experience-based knowledge of humans (e.g. process planners) and are therefore subjective, leading to variability in workpiece quality and potential failures in production. In this paper, we present an automatic part-inspection method, based on design and measurement data, which evaluates the actual geometries of individual workpiece preforms. The aim is to automatically determine the suitability of the current shape for further machining, and to provide the basis for an objective decision about subsequent adaptive manufacturability. The proposed method is realized by a workflow-based approach, keeping in mind the requirements of industrial applications. Workflows are a well-known method for designing standardized processes, and standardization and certification of processes are especially important in applications such as the aerospace industry. Function blocks, which provide a standardized, event-driven abstraction of algorithms and data exchange, are used for modeling and executing the inspection workflows.
Each analysis step of the inspection, such as the positioning of measurement data or the checking of geometrical criteria, is carried out by a function block. One advantage of this approach is the flexibility it offers to design workflows and to adapt algorithms to the specific application domain. In general, it is checked whether a geometrical adaptation is possible within the specified tolerance range. The development of particular function blocks is predicated on workpiece-specific information, e.g. design data. Furthermore, appropriate logics and decision criteria have to be considered for the different product lifecycle phases. For example, tolerances for geometric deviations differ in type and size between new-part production and repair processes. In addition to function blocks, appropriate referencing systems are important. They need to support the exact determination of the position and orientation of the actual geometries to provide a basis for precise analysis. The presented approach provides an inspection methodology for adaptive and part-individual process chains. The analysis of each workpiece results in an inspection protocol and an objective decision about further manufacturability. A representative application domain is the product lifecycle of turbine blades, comprising new-part production and a maintenance process. In both cases, a geometrical adaptation is required to calculate individual production data. In contrast to existing approaches, the proposed initial inspection method provides the information needed to decide between different potential adaptive machining processes.
Keywords: adaptive, CAx, function blocks, turbomachinery
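The function-block execution model described above can be illustrated with a small sketch. The block names, the shared-context dictionary and the abort flag are illustrative assumptions, not the standardized interface an industrial implementation would use:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class FunctionBlock:
    """Minimal function block: a named analysis step that takes and
    returns a shared context dict (measurement data, flags, results)."""
    name: str
    run: Callable[[dict], dict]

def execute_workflow(blocks: list[FunctionBlock], context: dict) -> dict:
    """Execute blocks in order; an 'abort' flag in the context acts as
    an objective stop criterion."""
    for block in blocks:
        context = block.run(context)
        if context.get("abort"):
            break
    return context

# Hypothetical two-step inspection: position the measurement against the
# nominal model, then check whether the deviation fits the tolerance
# allowed for a geometrical adaptation.
align = FunctionBlock("align", lambda c: {**c, "deviation": abs(c["measured"] - c["nominal"])})
check = FunctionBlock("check", lambda c: {**c, "adaptable": c["deviation"] <= c["tolerance"]})
result = execute_workflow([align, check], {"measured": 10.12, "nominal": 10.0, "tolerance": 0.2})
print(result["adaptable"])  # True
```

Because each step is a self-contained block, swapping in repair-specific tolerances or a different positioning algorithm only replaces one block, which mirrors the flexibility argued for above.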
Procedia PDF Downloads 298
25 Methodology for Temporary Analysis of Production and Logistic Systems on the Basis of Distance Data
Authors: M. Mueller, M. Kuehn, M. Voelker
Abstract:
In small and medium-sized enterprises (SMEs), the challenge is to create a well-grounded and reliable basis for process analysis, optimization and planning despite a lack of data. SMEs have limited access to methods with which they can effectively and efficiently analyse processes and identify cause-and-effect relationships in order to generate the necessary database and derive optimization potential from it. The implementation of digitalization within the framework of Industry 4.0 thus becomes a particular necessity for SMEs. For these reasons, this abstract presents an analysis methodology whose objective is an SME-appropriate approach to efficient, temporarily deployable data collection and evaluation in flexible production and logistics systems, as a basis for process analysis and optimization. The overall methodology focuses on the retrospective, event-based tracing and analysis of material flow objects. The technological basis consists of Bluetooth Low Energy (BLE) transmitters, so-called beacons, and smart mobile devices (SMDs), e.g. smartphones, as receivers, between which distance data can be measured and motion profiles derived. The distance is determined using the Received Signal Strength Indicator (RSSI), a measure of the signal field strength between transmitter and receiver. The focus is the development of a software-based methodology for interpreting the relative movements of transmitters and receivers on the basis of distance data. The main research concerns the selection and implementation of pattern recognition methods for automatic process recognition, as well as methods for visualizing relative distance data. Since the database is already categorized by process type, classification methods from the field of supervised learning (e.g. support vector machines) are used.
The necessary data quality requires the selection of suitable methods as well as filters for smoothing the signal variations of the RSSI, the integration of methods for determining correction factors depending on possible sources of signal interference (columns, pallets), and the configuration of the technology used. The parameter settings on which the respective algorithms are based have a further significant influence on the result quality of the classification methods, correction models and methods used for visualizing the position profiles. Studies have already shown that the accuracy of classification algorithms can be improved by up to 30% through selected parameter variation, and similar potential can be observed for the parameters of the signal-smoothing methods and filters. Thus, there is increased interest in obtaining detailed results on the influence of parameter and factor combinations on data quality in this area. The overall methodology is realized with a modular software architecture consisting of independent modules for data acquisition, data preparation and data storage. The demonstrator for initialization and data acquisition is available as a mobile Java-based application. The data preparation, including the methods for signal smoothing, is Python-based, with the possibility of varying parameter settings and storing them in the database (SQLite). The evaluation is divided into two separate software modules with database connections: one achieves an automated assignment of defined process classes to distance data using selected classification algorithms, and the other provides visualization and reporting in the form of a graphical user interface (GUI).
Keywords: event-based tracing, machine learning, process classification, parameter settings, RSSI, signal smoothing
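The RSSI-to-distance step can be sketched with the widely used log-distance path-loss model. The calibration constants (the RSSI at 1 m and the path-loss exponent) and the moving-average filter below are illustrative assumptions, since the abstract leaves both the propagation model and the choice of smoothing filter open:

```python
def rssi_to_distance(rssi_dbm: float, tx_power_dbm: float = -59.0, n: float = 2.0) -> float:
    """Invert the log-distance path-loss model RSSI = A - 10*n*log10(d).
    tx_power_dbm (A, the RSSI measured at 1 m) and the path-loss
    exponent n are deployment-specific calibration values assumed here."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * n))

def smooth(samples: list[float], window: int = 5) -> list[float]:
    """Simple moving-average filter to damp RSSI fluctuations; it stands
    in for heavier alternatives such as Kalman filtering."""
    out = []
    for i in range(len(samples)):
        lo = max(0, i - window + 1)
        out.append(sum(samples[lo:i + 1]) / (i + 1 - lo))
    return out

raw = [-59.0, -65.0, -62.0, -70.0, -66.0]       # raw RSSI samples in dBm
filtered = smooth(raw, window=3)
print(round(rssi_to_distance(filtered[-1]), 2))  # 2.24 (metres, under the assumed calibration)
```

In practice both n and the correction factors for interference sources mentioned above would be fitted per environment, which is exactly the parameter sensitivity the methodology sets out to quantify.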
Procedia PDF Downloads 134
24 Unveiling the Dynamics of Preservice Teachers’ Engagement with Mathematical Modeling through Model Eliciting Activities: A Comprehensive Exploration of Acceptance and Resistance Towards Modeling and Its Pedagogy
Authors: Ozgul Kartal, Wade Tillett, Lyn D. English
Abstract:
Despite its global significance in curricula, mathematical modeling encounters persistent disparities in recognition and emphasis within regular mathematics classrooms and teacher education across countries with diverse educational and cultural traditions, including variations in the perceived role of mathematical modeling. Over the past two decades, increased attention has been given to the integration of mathematical modeling into national curriculum standards in the U.S. and other countries. Therefore, the mathematics education research community has dedicated significant efforts to investigate various aspects associated with the teaching and learning of mathematical modeling, primarily focusing on exploring the applicability of modeling in schools and assessing students', teachers', and preservice teachers' (PTs) competencies and engagement in modeling cycles and processes. However, limited attention has been directed toward examining potential resistance hindering teachers and PTs from effectively implementing mathematical modeling. This study focuses on how PTs, without prior modeling experience, resist and/or embrace mathematical modeling and its pedagogy as they learn about models and modeling perspectives, navigate the modeling process, design and implement their modeling activities and lesson plans, and experience the pedagogy enabling modeling. Model eliciting activities (MEAs) were employed due to their high potential to support the development of mathematical modeling pedagogy. The mathematical modeling module was integrated into a mathematics methods course to explore how PTs embraced or resisted mathematical modeling and its pedagogy. The module design included reading, reflecting, engaging in modeling, assessing models, creating a modeling task (MEA), and designing a modeling lesson employing an MEA. Twelve senior undergraduate students participated, and data collection involved video recordings, written prompts, lesson plans, and reflections. 
An open coding analysis revealed acceptance and resistance toward teaching mathematical modeling. The study identified four overarching themes, including both acceptance and resistance: pedagogy, affordance of modeling (tasks), modeling actions, and adjusting modeling. In the category of pedagogy, PTs displayed acceptance based on potential pedagogical benefits and resistance due to various concerns. The affordance of modeling (tasks) category emerged from instances when PTs showed acceptance or resistance while discussing the nature and quality of modeling tasks, often debating whether modeling is considered mathematics. PTs demonstrated both acceptance and resistance in their modeling actions, engaging in modeling cycles as students and designing/implementing MEAs as teachers. The adjusting modeling category captured instances where PTs accepted or resisted maintaining the qualities and nature of the modeling experience or converted modeling into a typical structured mathematics experience for students. While PTs displayed a mix of acceptance and resistance in their modeling actions, limitations were observed in embracing complexity and adhering to model principles. The study provides valuable insights into the challenges and opportunities of integrating mathematical modeling into teacher education, emphasizing the importance of addressing pedagogical concerns and providing support for effective implementation. In conclusion, this research offers a comprehensive understanding of PTs' engagement with modeling, advocating for a more focused discussion on the distinct nature and significance of mathematical modeling in the broader curriculum to establish a foundation for effective teacher education programs.
Keywords: mathematical modeling, model eliciting activities, modeling pedagogy, secondary teacher education
Procedia PDF Downloads 66
23 Bio-Nanotechnology Approach of Nano-Size Iron Particles as Promising Iron Supplements: An Exploratory Study to Combat the Problems of Iron Fortification in Children and Pregnant Women of Rural India
Authors: Roshni Raha, Kavya P., Gayathri M.
Abstract:
India, with its enormous population, remains among the world's poorest developing nations in terms of nutritional status, with iron deficiency anaemia (IDA) widespread in the population. Despite efforts over the past decades, the prevalence of anaemia in India has not been reduced. Researchers are interested in developing therapies that minimize the typical side effects of oral iron and optimize iron-salt-based treatment through delivery methods based on the physiology of hepcidin regulation, while avoiding therapies that worsen infection. This article explores bio-nanotechnology as an alternative, promising route for providing iron supplements and for the treatment of diarrhoea and gut inflammation in children and pregnant women. The article is an exploratory study based on a literature survey and secondary research from review papers. In the realm of biotechnology, nanoparticles have become extremely popular due to the unexpected variations in surface characteristics caused by particle size: particle size distribution and shape exhibit unusual, enhanced characteristics when reduced to the nanoscale. The article attempts to develop a model for a nanotechnology-based solution to iron fortification that addresses the problems of diarrhoea and gut inflammation. Dimensions considered in the model include the size, shape, source and biosynthesis of the iron nanoparticles. Another area of investigation addressed in the article is the cost-effective, biocompatible production of these iron nanoparticles. Studies have demonstrated that a substantial reduction of metal ions to form nanoparticles from the bulk metal occurs in plants because of the presence of a wide diversity of biomolecules. Using this concept, the paper investigates the effectiveness and impact of similar sources for the biological synthesis of iron nanoparticles.
Results showed that iron particles offer potential advantages when prepared at nanometre size. As the particle size of an iron compound decreases to the nano range, its surface area increases, which improves its solubility in gastric acid, leading to higher absorption, higher bioavailability and minimal organoleptic changes in food. No negative effects have been reported, and the particles possess a safe, effective profile for reducing IDA. Considering all these parameters, it is concluded that iron particles in nano configuration can serve as alternative iron supplements for the treatment of IDA; nanoparticles of ferric phosphate, ferric pyrophosphate and iron oxide are the supplements of choice. From a sourcing perspective, the paper concludes that green sources are the primary candidates for the biological synthesis of iron nanoparticles. This would also be a cost-effective strategy, since the goal is to treat the target population in rural India. Bio-nanotechnology thus serves as an alternative and promising route for iron supplementation due to its low cost, excellent bioavailability and good organoleptic properties. One area for future research is to explore which sizes and shapes of iron nanoparticles would be suitable for different age groups of pregnant women and children, and whether this choice would be influenced by the topography of certain areas.
Keywords: anemia, bio-nanotechnology, iron-fortification, nanoparticle
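The surface-area argument above is simple geometry: for spherical particles the specific surface area scales with the inverse of the diameter, so shrinking particles from the micron to the nano range multiplies the area available for dissolution in gastric acid. A short worked sketch (the density below is an assumed, approximate figure for ferric phosphate, used only for illustration):

```python
def specific_surface_area(diameter_m: float, density_kg_m3: float) -> float:
    """Specific surface area (m^2/kg) of a sphere:
    (pi d^2) / (rho * pi d^3 / 6) = 6 / (rho * d)."""
    return 6.0 / (density_kg_m3 * diameter_m)

rho_fepo4 = 3056.0  # assumed approximate density of FePO4 in kg/m^3
micron = specific_surface_area(10e-6, rho_fepo4)   # 10 um particle
nano = specific_surface_area(50e-9, rho_fepo4)     # 50 nm particle
print(round(nano / micron))  # 200: the 50 nm particle has 200x the specific surface
```

The ratio depends only on the two diameters (the density cancels), which is why the size reduction, rather than the particular iron salt, drives the solubility gain described above.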
Procedia PDF Downloads 76
22 Developing Primal Teachers beyond the Classroom: The Quadrant Intelligence (Q-I) Model
Authors: Alexander K. Edwards
Abstract:
Introduction: The moral dimension of teacher education globally has assumed a new paradigm of thinking based on public gain (return on investment), value creation (quality), professionalism (practice) and business strategies (innovation). Abundant literature reveals an interesting revolutionary trend in complementing the raising of teachers and academic performance. Because of global competition in knowledge creation and services, the C21st teacher at all levels is expected to be resourceful, a strategic thinker, socially intelligent, adept at relationships, and entrepreneurially astute. This study is a significant contribution to practice and innovation in raising exemplary, or primal, teachers. The qualities needed were framed as a 'Quadrant Intelligence (Q-i)' model for primal teacher leadership beyond the classroom. The researcher started by examining the finding that the majority of teachers in the Ghana Education Service (GES) need this Q-i to be effective and efficient; the conceptual framing became the determinants of such Q-i. This is significant for global employability and versatility in teacher education to create premium and primal teacher leadership, which is again gaining attention in scholarship due to failing schools. The moral aspect of teachers failing learners is a highly important discussion: in the GES, some schools score zero percent at the Basic Education Certificate Examination (BECE). The questions are: what will make a professional teacher highly productive, marketable and entrepreneurial? What will give teachers the moral consciousness to do their best to succeed? Method: This study set out to develop a model for primal teachers in the GES as an innovative way to highlight a premium development for C21st business-education acumen through desk reviews.
The study is conceptually framed by examining certain skill sets, such as strategic thinking, social intelligence, relational and emotional intelligence, and entrepreneurship, to answer three main burning questions and other hypotheses. The study then applied a causal-comparative methodology with a purposive sampling technique (N=500) drawn from the CoE, GES, NTVI and other teacher associations. Participants responded to a 30-item, researcher-developed questionnaire. Data were analyzed on the quadrant constructs and reported as ex post facto analyses of variance and regression. Multiple associations were established for statistical significance (p=0.05), and causes and effects are postulated for scientific discussion. Findings: The four quadrants were found to be very significant in teacher development, with significant variations across demographic groups. However, most teachers lack considerable skills in entrepreneurship, leadership in teaching and learning, and business thinking strategies, and this has a significant effect on practices and outcomes. Conclusion and Recommendations: It is therefore quite conclusive that GES teachers may need further instruction in innovation and creativity to transform knowledge creation into business ventures. In-service training (INSET) has to be comprehensive, and teacher education curricula at the Colleges may have to be revisited. Teachers have the potential to raise their social capital, to be entrepreneurs, and to exhibit professionalism beyond their community service. Their primal leadership focus will benefit many clienteles, including students and social circles. The recommendations examine the policy implications for curriculum design, practice, innovation and educational leadership.
Keywords: emotional intelligence, entrepreneurship, leadership, quadrant intelligence (q-i), primal teacher leadership, strategic thinking, social intelligence
Procedia PDF Downloads 315
21 Dynamic Facades: A Literature Review on Double-Skin Façade with Lightweight Materials
Authors: Victor Mantilla, Romeu Vicente, António Figueiredo, Victor Ferreira, Sandra Sorte
Abstract:
Integrating dynamic facades into contemporary building design is shaping a new era of energy efficiency and user comfort. These innovative facades, often built with lightweight construction systems and materials, offer the opportunity for a responsive, adaptive envelope that follows the dynamic behavior of the outdoor climate. In regions characterized by large daily temperature fluctuations, the ability to adapt to environmental changes is therefore both paramount and challenging. This paper presents a thorough review of the state of the art on double-skin facades (DSF), focusing on lightweight solutions for the external envelope. Dynamic facades featuring elements such as movable shading devices, phase change materials and advanced control systems have revolutionized the built environment and offer a promising path to reducing energy consumption while enhancing occupant well-being. Lightweight construction systems are increasingly becoming the choice for these facade solutions, offering benefits such as reduced structural loads and reduced construction waste, thereby improving overall sustainability. However, the performance of dynamic facades based on low-thermal-inertia solutions in climatic contexts with high thermal amplitude still needs research, since their ability to adapt translates into variability/manipulation of the thermal transmittance coefficient (U-value). Emerging technologies can enable such dynamic thermal behavior through innovative materials and through changes in geometry and control that optimize facade performance. These innovations will allow a facade system to respond to shifting outdoor temperature, relative humidity, wind and solar radiation conditions, ensuring that energy efficiency and occupant comfort are jointly met.
This review addresses the potential configurations of double-skin facades, particularly concerning their responsiveness to seasonal variations in temperature, with a specific focus on the challenges posed by winter and summer conditions. Notably, the design of a dynamic facade is significantly shaped by several pivotal factors, including the choice of materials, geometric considerations and the implementation of effective monitoring systems. Within the realm of double-skin facades, various configurations are explored, encompassing exhaust-air, supply-air and thermal-buffer mechanisms. The review places a specific emphasis on the thermal dynamics at play, closely examining the impact of factors such as the color of the facade, the dimensions and slat angle of the shading devices, and the positioning and type of shading devices employed in these innovative structures. This paper synthesizes the current research trends in the field, presenting case studies and technological innovations to give a comprehensive understanding of the cutting-edge solutions propelling the evolution of building envelopes in the face of climate change, namely double-skin lightweight solutions for creating sustainable, adaptable and responsive building envelopes. As indicated in the review, flexible and lightweight systems have broad applicability across all building sectors, and there is growing recognition that retrofitting existing buildings may emerge as the predominant approach.
Keywords: adaptive, control systems, dynamic facades, energy efficiency, responsive, thermal comfort, thermal transmittance
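The U-value variability discussed above can be made concrete with the steady-state series-resistance formula, U = 1 / (R_si + Σ dᵢ/λᵢ + R_cavity + R_se). The layer make-up, conductivities and surface/cavity resistances below are illustrative assumptions in the style of EN ISO 6946 default values, not data from the reviewed studies:

```python
def u_value(layers: list[tuple[float, float]], r_si: float = 0.13,
            r_se: float = 0.04, r_cavity: float = 0.18) -> float:
    """Steady-state thermal transmittance (W/m^2K) of a layered facade.

    layers: (thickness in m, conductivity in W/mK) per layer.
    r_si, r_se: internal/external surface resistances (m^2K/W).
    r_cavity: resistance of the unventilated air cavity between skins;
    ventilating the cavity would effectively reduce this term, which is
    one way a DSF manipulates its overall U-value.
    """
    r_total = r_si + r_cavity + r_se + sum(d / lam for d, lam in layers)
    return 1.0 / r_total

# Hypothetical lightweight outer build-up: 12 mm fibre-cement sheet plus
# 100 mm mineral wool (assumed conductivities 0.25 and 0.035 W/mK).
u = u_value([(0.012, 0.25), (0.100, 0.035)])
print(round(u, 2))  # effective U-value in W/m^2K
```

Re-running the same calculation with a reduced (ventilated) cavity resistance shows how a control system that opens and closes the cavity shifts the envelope between winter-retention and summer-rejection modes.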
Procedia PDF Downloads 82
20 Microstructural Characterization of Bitumen/Montmorillonite/Isocyanate Composites by Atomic Force Microscopy
Authors: Francisco J. Ortega, Claudia Roman, Moisés García-Morales, Francisco J. Navarro
Abstract:
Asphaltic bitumen has been widely used in both industrial and civil engineering, mostly in pavement construction and roofing membrane manufacture. However, bitumen as such is greatly susceptible to temperature variations and dramatically changes its in-service behavior from a viscoelastic liquid, at medium-high temperatures, to a brittle solid at low temperatures. Bitumen modification prevents these problems and imparts improved performance. Isocyanates like polymeric MDI (a mixture of 4,4′-diphenylmethane di-isocyanate, its 2,4′ and 2,2′ isomers, and higher homologues) have been shown to remarkably enhance bitumen properties at the highest in-service temperatures expected. This comes from the reaction between the –NCO pendant groups of the oligomer and the most polar groups of asphaltenes and resins in bitumen. In addition, oxygen diffusion and/or UV radiation may provoke bitumen hardening and ageing. With the purpose of minimizing these effects, nano-layered silicates (nanoclays) are increasingly being added to bitumen formulations. Montmorillonites, a type of naturally occurring clay mineral, may produce a nanometer-scale dispersion which improves bitumen's thermal, mechanical, and barrier properties. In order to increase their lipophilicity, these nanoclays are normally treated so that organic cations substitute the inorganic cations located in their intergallery spacing. In the present work, the combined effect of polymeric MDI and the commercial montmorillonite Cloisite® 20A was evaluated. A selected bitumen with penetration within the range 160/220 was modified with 10 wt.% Cloisite® 20A and 2 wt.% polymeric MDI, and the resulting ternary composites were characterized by linear rheology, X-ray diffraction (XRD) and Atomic Force Microscopy (AFM). The rheological tests evidenced a notable solid-like behavior at the highest temperatures studied when bitumen was loaded with just 10 wt.% Cloisite® 20A and high-shear blended for 20 minutes.
However, if polymeric MDI was involved, the sequence of addition exerted a decisive control on the linear rheology of the final ternary composites. Hence, in bitumen/Cloisite® 20A/polymeric MDI formulations, the previous solid-like behavior disappeared. By contrast, an inversion of the order of addition (bitumen/polymeric MDI/Cloisite® 20A) further enhanced the solid-like behavior imparted by the nanoclay. In order to gain a better understanding of the factors that govern the linear rheology of these ternary composites, a morphological and microstructural characterization based on XRD and AFM was conducted. XRD demonstrated the existence of clay stacks intercalated by bitumen molecules to some degree. However, the XRD technique cannot provide detailed information on the extent of nanoclay delamination, unless the entire fraction has effectively been fully delaminated (a situation in which no peak is observed). Furthermore, XRD was unable to provide precise knowledge either about the spatial distribution of the intercalated/exfoliated platelets or about the presence of other structures at larger length scales. In contrast, AFM proved its power at providing conclusive information on the morphology of the composites at the nanometer scale and at revealing the structural modification that yielded the rheological properties observed. It was concluded that high-shear blending brought about a nanoclay-reinforced network. As for the bitumen/Cloisite® 20A/polymeric MDI formulations, the solid-like behavior was destroyed as a result of the agglomeration of the nanoclay platelets promoted by chemical reactions. Keywords: Atomic Force Microscopy, bitumen, composite, isocyanate, montmorillonite
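As a hedged aside, the intercalation that XRD detects is inferred from the position of the nanoclay's basal (d001) peak via Bragg's law; the sketch below uses hypothetical diffraction angles, not measurements from this study:

```python
import math

# Illustrative only: Bragg's law, the relation XRD uses to infer the
# nanoclay basal (d001) spacing. The two-theta angles below are
# hypothetical examples, not data from this work.
def d_spacing_nm(two_theta_deg, wavelength_nm=0.15406):  # Cu K-alpha
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength_nm / (2.0 * math.sin(theta))       # n = 1

pristine = d_spacing_nm(3.5)      # hypothetical organoclay peak
intercalated = d_spacing_nm(2.6)  # peak shifted to lower angle
```

A shift of the basal peak to lower angle corresponds to a larger gallery spacing, the signature of bitumen molecules entering the clay interlayers.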
Procedia PDF Downloads 261
19 Traditional Wisdom of Indigenous Vernacular Architecture as Tool for Climate Resilience Among PVTG Indigenous Communities in Jharkhand, India
Authors: Ankush, Harshit Sosan Lakra, Rachita Kuthial
Abstract:
Climate change poses significant challenges to vulnerable communities, particularly indigenous populations in ecologically sensitive regions. Jharkhand, located in the heart of India, is home to several indigenous communities, including the Particularly Vulnerable Tribal Groups (PVTGs). The indigenous architecture of the region functions as a significant reservoir of climate adaptation wisdom. This study analyzes that architecture, encompassing construction materials, construction techniques, design principles, climate responsiveness, cultural relevance, adaptation, and integration with the environment, together with the traditional wisdom that has evolved through generations. Rooted in cultural and socioeconomic traditions, this wisdom has allowed these communities to thrive, and to withstand the test of time, in a variety of climatic zones, including hot and dry, humid, and hilly terrains. Despite their historical resilience to adverse climatic conditions, PVTG communities face new and amplified challenges due to the accelerating pace of climate change. A significant research void exists in assimilating their traditional practices and local wisdom into contemporary climate resilience initiatives. Most studies emphasize technologically advanced solutions, often ignoring the invaluable indigenous local knowledge that can complement and enhance these efforts. This research gap highlights the need to bridge the disconnect between indigenous knowledge and contemporary climate adaptation strategies. The study aims to explore and leverage indigenous knowledge of vernacular architecture as a strategic tool for enhancing climatic resilience among the PVTGs of the region. The first objective is to understand the traditional wisdom of vernacular architecture by analyzing and documenting the distinct architectural practices and cultural significance of PVTG communities, emphasizing construction techniques, materials, and spatial planning.
The second objective is to develop culturally sensitive climatic resilience strategies based on the findings on vernacular architecture by employing a multidisciplinary research approach that encompasses ethnographic fieldwork and climate data assessment, considering multiple variables such as temperature variations, precipitation patterns, extreme weather events, and climate change reports. This will be a tailor-made solution integrating indigenous knowledge with modern technology and sustainable practices. By involving indigenous communities in the process, the research aims to ensure that the developed strategies are practical, culturally appropriate, and accepted. To foster long-term resilience against the global issue of climate change, we can bridge the gap between present needs and future aspirations with traditional wisdom, offering sustainable solutions that will empower PVTG communities. Moreover, the study emphasizes the significance of preserving and reviving traditional architectural wisdom for enhancing climatic resilience. It also highlights the need for cooperative endeavors among communities, stakeholders, policymakers, and researchers to encourage integrating traditional knowledge into modern sustainable design methods. Through these efforts, this research will contribute not only to the well-being of PVTG communities but also to the broader global effort to build a more resilient and sustainable future. In this way, indigenous communities like the PVTGs of Jharkhand can achieve climatic resilience while respecting and safeguarding the cultural heritage and distinctive characteristics of the native population. Keywords: vernacular architecture, climate change, resilience, PVTGs, Jharkhand, indigenous people, India
Procedia PDF Downloads 74
18 Neologisms and Word-Formation Processes in Board Game Rulebook Corpus: Preliminary Results
Authors: Athanasios Karasimos, Vasiliki Makri
Abstract:
This research focuses on the design and development of the first text corpus based on board game rulebooks (BGRC), with direct application to the morphological analysis of neologisms and tendencies in word-formation processes. Corpus linguistics is a dynamic field that examines language through the lens of vast collections of texts. These corpora consist of diverse written and spoken materials, ranging from literature and newspapers to transcripts of everyday conversations. By morphologically analyzing these extensive datasets, morphologists can gain valuable insights into how language functions and evolves, as such datasets reflect the byproducts of inflection, derivation, blending, clipping, compounding, and neology. This entails scrutinizing how words are created, modified, and combined to convey meaning in a corpus of challenging, creative, and straightforward texts that include rules, examples, tutorials, and tips. Board games teach players how to strategize, consider alternatives, and think flexibly, which are critical elements in language learning. Their rulebooks reflect not only their weight (complexity) but also the language properties of each genre and subgenre of these games. Board games are a captivating realm where strategy, competition, and creativity converge. Beyond the excitement of gameplay, board games also spark the art of word creation. Word games like Scrabble, Codenames, Bananagrams, Wordcraft, Alice in the Wordland, and Once Upon a Time challenge players to construct words from a pool of letters, thus encouraging linguistic ingenuity and vocabulary expansion. These games foster a love for language, motivating players to unearth obscure words and devise clever combinations.
On the other hand, the designers and creators produce rulebooks in which they convey their joy of discovering the hidden potential of language, igniting the imagination, and playing with the beauty of words, making these games a delightful fusion of linguistic exploration and leisurely amusement. In this research, more than 150 rulebooks in English from all types of modern board games, either language-independent or language-dependent, were used to create the BGRC. A representative sample of each genre (family, party, worker placement, deckbuilding, dice and chance games, strategy, eurogames, thematic, and role-playing, among others) was selected based on the score from BoardGameGeek, the size of the texts, and the level of complexity (weight) of the game. A morphological model with morphological networks, multi-word expressions, and word-creation mechanics based on the complexity of the textual structure, difficulty, and board game category will be presented. By enabling the identification of patterns, trends, and variations in word formation and other morphological processes, this research aspires to avail itself of this creative yet strict text genre so as to (a) give invaluable insight into the morphological creativity and innovation that (re)shape the lexicon of the English language and (b) test morphological theories. Overall, it is shown that corpus linguistics empowers us to explore the intricate tapestry of language, and of morphology in particular, revealing its richness, flexibility, and adaptability in the ever-evolving landscape of human expression. Keywords: board game rulebooks, corpus design, morphological innovations, neologisms, word-formation processes
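A minimal sketch of one task such a corpus enables, flagging neologism candidates in rulebook text against a reference lexicon (our illustration with a toy lexicon and invented sentence, not the BGRC pipeline):

```python
import re

# Minimal sketch (our illustration, not the BGRC pipeline): flag tokens
# in a rulebook snippet that are absent from a reference lexicon as
# neologism candidates, e.g. game-specific coinages.
LEXICON = {"each", "player", "draws", "two", "cards", "then", "may",
           "build", "a", "worker", "and", "place", "it", "on", "the", "board"}

def neologism_candidates(text, lexicon=LEXICON):
    tokens = re.findall(r"[a-z]+", text.lower())
    return sorted({t for t in tokens if t not in lexicon})

rule = "Each player draws two cards, then may terraform and place a meeple on the board."
# "terraform" (derivation/compounding) and "meeple" (a blend) are the
# kinds of forms a real pipeline would route to morphological analysis.
candidates = neologism_candidates(rule)
```

A production pipeline would of course use a full lexicon, lemmatization, and manual filtering; this only shows the shape of the candidate-extraction step.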
Procedia PDF Downloads 103
17 Biophilic Design Strategies: Four Case-Studies from Northern Europe
Authors: Carmen García Sánchez
Abstract:
The UN's 17 Sustainable Development Goals – specifically nº 3 and nº 11 – urgently call for new architectural design solutions at different design scales to increase human contact with nature and so promote the health and wellbeing of primarily urban communities. The discipline of interior design offers an important alternative to large-scale nature-inclusive actions, which are not always possible due to space limitations. These circumstances provide an immense opportunity to integrate biophilic design, a complex, emerging, and under-developed approach that pursues sustainable design strategies for increasing the human-nature connection through the experience of the built environment. Biophilic design explores the diverse ways humans are inherently inclined to affiliate with nature, attach meaning to it, and derive benefit from the natural world. It represents a biological understanding of architecture whose categorization is still in progress. The internationally renowned Danish domestic architecture built in the 1950s and early 1960s – a golden age of Danish modern architecture – left a leading legacy that has greatly influenced the domestic sphere and has led the world in terms of good design and welfare. This study examines how four existing post-war domestic buildings establish a dialogue with nature and its variations over time. The case-studies unveil both memorable and unique biophilic resources through sophisticated and original design expressions, where transformative processes connect the users to the natural setting and reflect fundamental ways in which they attach meaning to the place. In addition, fascinating analogies between this nature interaction and that found in traditional Japanese architecture inform the research. They embody prevailing lessons for our time today.
The research methodology is based on a thorough literature review combined with a phenomenological analysis of how these case-studies contribute to the connection between humans and nature, after fieldwork conducted across the seasons to document nature's transformations through multi-sensory perception (via sight, touch, sound, smell, time, and movement) as a core research strategy. The cases' most outstanding features have been studied according to the following key parameters: 1. Space: 1.1. Relationships (itineraries); 1.2. Measures/scale; 2. Context: Landscape reading in different weather/seasonal conditions; 3. Tectonics: 3.1. Constructive joints, assembly of elements; 3.2. Structural order; 4. Materiality: 4.1. Finishes; 4.2. Colors; 4.3. Tactile qualities; 5. Daylight interplay. Departing from an artistic-scientific exploration, this groundbreaking study provides sustainable practical design strategies, perspectives, and inspiration to boost humans' contact with nature through the experience of the interior built environment. Some strategies are associated with access to outdoor space or require ample space, while others can thrive in a dense urban context without direct access to the natural environment. The objective is not only to produce knowledge but to phase biophilic design into the built environment, expanding its theory and practice into a new dimension. Its long-term vision is to efficiently enhance the health and well-being of urban communities through daily interaction with nature. Keywords: sustainability, biophilic design, architectural design, interior design, nature, Danish architecture, Japanese architecture
Procedia PDF Downloads 102
16 Photosynthesis Metabolism Affects Yield Potentials in Jatropha curcas L.: A Transcriptomic and Physiological Data Analysis
Authors: Nisha Govender, Siju Senan, Zeti-Azura Hussein, Wickneswari Ratnam
Abstract:
Jatropha curcas, a well-described bioenergy crop, has been widely accepted as a future fuel source, especially in tropical regions. Ideal planting material required for large-scale plantation is still lacking. Breeding programmes for improved J. curcas varieties are rendered difficult by limitations in genetic diversity. Using combined transcriptomic and physiological data, we investigated the molecular and physiological differences between high- and low-yielding Jatropha curcas to address plausible heritable variations underpinning these differences with regard to photosynthesis, a key metabolism affecting yield potential. A total of 6 individual Jatropha plants from 4 accessions described as high- and low-yielding planting materials were selected from Experimental Plot A, Universiti Kebangsaan Malaysia (UKM), Bangi. The inflorescences and shoots were collected for the transcriptome study. For the physiological study, individual plants (n=10) from the high- and low-yielding populations were screened for agronomic traits, chlorophyll content, and stomatal patterning. The J. curcas transcriptomes are available under BioProject PRJNA338924 and BioSample SAMN05827448-65, respectively. Each transcriptome was subjected to functional annotation analysis using the BLAST2GO suite: BLASTing, mapping, annotation, statistical analysis, and visualization. Large-scale phenotyping of the number of fruits per plant (NFPP) and fruits per inflorescence (FPI) classified the high-yielding Jatropha accessions with an average NFPP = 60 and FPI > 10, whereas the low-yielding accessions yielded an average NFPP = 10 and FPI < 5. Next-generation sequencing revealed genes differentially expressed in the high-yielding Jatropha relative to the low-yielding plants. Distinct differences were observed at the transcript level associated with photosynthesis metabolism.
The collection of differentially expressed genes (DEGs) in the low-yielding population indicated comparable CAM photosynthetic metabolism and photorespiration, evident as follows: a phosphoenolpyruvate/phosphate translocator, chloroplastic-like isoform, with 2.5 fold change (FC), and malate dehydrogenase (2.03 FC). Green leaves have the most pronounced photosynthetic activity in a plant body due to their significant accumulation of chloroplasts. In most plants, the leaf is the dominant photosynthesizing organ of the plant body. A large number of the DEGs in the high-yielding population were found attributable to chloroplasts and chloroplast-associated events: STAY-GREEN chloroplastic, chlorophyllase-1-like (5.08 FC), beta-amylase (3.66 FC), chlorophyllase-chloroplastic-like (3.1 FC), thiamine thiazole chloroplastic-like (2.8 FC), 1,4-alpha-glucan branching enzyme, chloroplastic/amyloplastic (2.6 FC), photosynthetic NDH subunit (2.1 FC), and protochlorophyllide chloroplastic (2 FC). These results paralleled a significant increase in chlorophyll a content in the high-yielding population. In addition to the chloroplast-associated transcript abundance, TOO MANY MOUTHS (TMM) at 2.9 FC, which codes for distant stomatal distribution and patterning, may explain a high concentration of CO2 in the high-yielding population. The results were in agreement with the role of TMM: clustered stomata cause back diffusion in the presence of gaps localized closely to one another. We conclude that the high-yielding Jatropha population corresponds to a collective function of C3 metabolism with a low degree of CAM photosynthetic fixation. From the physiological descriptions, high chlorophyll a content and even distribution of stomata in the leaf contribute to better photosynthetic efficiency in the high-yielding Jatropha compared to the low-yielding population. Keywords: chlorophyll, gene expression, genetic variation, stomata
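For readers unfamiliar with the "FC" figures quoted above, a fold change simply relates expression levels between two populations; the sketch below uses invented counts, not the study's data:

```python
import math

# Sketch of how a fold change (FC) like the "2.5 FC" reported above
# relates to expression levels. The count values here are invented
# for illustration only.
def fold_change(high_expr, low_expr):
    return high_expr / low_expr

def log2_fc(high_expr, low_expr):
    # log2 scale is symmetric: +1 means doubled, -1 means halved.
    return math.log2(high_expr / low_expr)

# Hypothetical normalized expression values in the two populations.
fc = fold_change(500.0, 200.0)   # a 2.5 FC up-regulation
```

Differential-expression tools additionally test whether such a ratio is statistically significant given replicate variability, which this sketch omits.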
Procedia PDF Downloads 240
15 Resilience in the Face of Environmental Extremes through Networking and Resource Mobilization
Authors: Abdullah Al Mohiuddin
Abstract:
Bangladesh is one of the poorest countries in the world and ranks low on almost all measures of economic development, leaving its population extremely vulnerable to natural disasters and climate events. 20% of GDP comes from agriculture, but more than 60% of the population relies on agriculture as their main source of income, making the entire economy vulnerable to climate change and natural disasters. High population density exacerbates the exposure to and effect of climate events and increases the levels of vulnerability, as does the poor institutional development of the country. The sectors most vulnerable to climate change impacts in Bangladesh are agriculture, coastal zones, water resources, forestry, fishery, health, biomass, and energy. High temperatures, heavy rainfall, high humidity, and fairly marked seasonal variations characterize the climate in Bangladesh: a mild winter, a hot, humid summer, and a warm, humid rainy monsoon. Much of the country is flooded during the summer monsoon. The Department of Environment (DOE) under the Ministry of Environment and Forestry (MoEF) is the focal point for the United Nations Framework Convention on Climate Change (UNFCCC) and coordinates climate-related activities in the country. Recently, a Climate Change Cell (CCC) has been established to address several issues, including adaptation to climate change. The climate change focus started with the National Environmental Management Action Plan (NEMAP), prepared in 1995 to initiate the process of addressing environmental and climate change issues as long-term environmental problems for Bangladesh. Bangladesh was one of the first countries to finalise a National Adaptation Programme of Action (NAPA), which addresses climate change issues. The NAPA was completed in 2005 and is the first official initiative for mainstreaming adaptation into national policies and actions to cope with climate change and vulnerability.
The NAPA suggests a number of adaptation strategies, for example: - providing drinking water to coastal communities to fight the enhanced salinity caused by sea level rise, - integrating climate change into the planning and design of infrastructure, - including climate change issues in education, - supporting adaptation of agricultural systems to new weather extremes, - mainstreaming CCA into policies and programmes in different sectors, e.g. disaster management, water, and health, - disseminating CCA information and raising awareness of enhanced climate disasters, especially in vulnerable communities. Bangladesh, among the world's poorest countries, has geared up its environmental conservation steps against the adverse effects of global warming and is now turning towards green economy policies to save its degrading ecosystem. As a developing country, Bangladesh constantly fights against natural disasters. At the same time, it works to establish an ecological environment by promoting a green economy and green energy through youth networking. ANTAR coordinates a large youth network in the southern part of Bangladesh in which 30 youth groups are involved. A green economy can be explained as economic development based on sustainable development, which generates growth and improvement in human lives while significantly reducing environmental risks and ecological scarcities. The green economy in Bangladesh promotes three bottom lines – sustaining economic, environmental, and social well-being. Keywords: resilience, networking, mobilizing, resource
Procedia PDF Downloads 311
14 Linguistic Insights Improve Semantic Technology in Medical Research and Patient Self-Management Contexts
Authors: William Michael Short
Abstract:
‘Semantic Web’ technologies such as the Unified Medical Language System Metathesaurus, SNOMED-CT, and MeSH have been touted as transformational for the way users access online medical and health information, enabling both the automated analysis of natural-language data and the integration of heterogeneous health-related resources distributed across the Internet through the use of standardized terminologies that capture concepts and relationships between concepts that are expressed differently across datasets. However, the approaches that have so far characterized ‘semantic bioinformatics’ have not yet fulfilled the promise of the Semantic Web for medical and health information retrieval applications. This paper argues, from the perspective of cognitive linguistics and cognitive anthropology, that four features of human meaning-making must be taken into account before the potential of semantic technologies can be realized for this domain. First, many semantic technologies operate exclusively at the level of the word. However, texts convey meanings in ways beyond lexical semantics. For example, transitivity patterns (distributions of active or passive voice) and modality patterns (configurations of modal constituents like may, might, could, would, should) convey experiential and epistemic meanings that are not captured by single words. Language users also naturally associate stretches of text with discrete meanings, so that whole sentences can be ascribed senses similar to the senses of words (so-called ‘discourse topics’). Second, natural language processing systems tend to operate according to the principle of ‘one token, one tag’. For instance, occurrences of the word sound must be disambiguated for part of speech: in context, is sound a noun or a verb or an adjective? In syntactic analysis, deterministic annotation methods may be acceptable.
But because natural language utterances are typically characterized by polyvalency and ambiguities of all kinds (including intentional ambiguities), such methods leave the meanings of texts highly impoverished. Third, ontologies tend to be disconnected from everyday language use and so struggle in cases where single concepts are captured through complex lexicalizations that involve profile shifts or other embodied representations. More problematically, concept graphs tend to capture ‘expert’ technical models rather than ‘folk’ models of knowledge and so may not match users’ common-sense intuitions about the organization of concepts in prototypical structures rather than Aristotelian categories. Fourth, and finally, most ontologies do not recognize the pervasively figurative character of human language. However, since the time of Galen, the widespread use of metaphor in the linguistic usage of both medical professionals and lay persons has been recognized. In particular, metaphor is a well-documented linguistic tool for communicating experiences of pain. Because semantic medical knowledge-bases are designed to capture variations within technical vocabularies – rather than the kinds of conventionalized figurative semantics that practitioners as well as patients actually utilize in clinical description and diagnosis – they fail to capture this dimension of linguistic usage. The failure of semantic technologies in these respects degrades the efficiency and efficacy not only of medical research, where information retrieval inefficiencies can lead to direct financial costs to organizations, but also of care provision, especially in contexts of patients’ self-management of complex medical conditions. Keywords: ambiguity, bioinformatics, language, meaning, metaphor, ontology, semantic web, semantics
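The ‘one token, one tag’ limitation discussed above can be made concrete with a toy tagger (our illustration; the tag inventory and lookup table are hypothetical, not any real NLP system's API):

```python
# Toy illustration of the 'one token, one tag' principle criticized
# above: a deterministic tagger collapses 'sound' to a single tag,
# while an ambiguity-preserving representation keeps all readings.
# The lookup table below is hypothetical.
AMBIGUITY = {"sound": ["NOUN", "VERB", "ADJ"], "the": ["DET"], "is": ["VERB"]}

def deterministic_tag(token):
    # picks the first (say, most frequent) reading, discarding the rest
    return AMBIGUITY.get(token, ["NOUN"])[0]

def preserve_readings(token):
    # keeps every reading for downstream disambiguation
    return AMBIGUITY.get(token, ["NOUN"])

one_tag = deterministic_tag("sound")     # the verb and adjective senses are lost
all_tags = preserve_readings("sound")
```

Real taggers choose among readings using context; the point here is only that emitting a single tag erases the polyvalency the paper argues semantic systems must model.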
Procedia PDF Downloads 133
13 The Systematic Impact of Climatic Disasters on the Maternal Health in Pakistan
Authors: Yiqi Zhu, Jean Francois Trani, Rameez Ulhassan
Abstract:
Extreme weather phenomena increased by 46% between 2007 and 2017 and have become more intense with the rise in global average temperatures. This increased intensity of climate variations often induces humanitarian crises and particularly affects vulnerable populations in low- and middle-income countries (LMICs). Expectant and lactating mothers are among the most vulnerable groups. Pakistan ranks 10th among the most affected countries by climate disasters. In 2022, monsoon floods submerged a third of the country, causing the loss of 1,500 lives. Approximately 650,000 expectant and lactating mothers faced systematic stress from climatic disasters. Our study used participatory methods to investigate the systematic impact of climatic disasters on maternal health. In March 2023, we conducted six Group Model Building (GMB) workshops with healthcare workers, fathers, and mothers separately in two of the most affected areas in Pakistan. This study was approved by the Islamic Relief Research Review Board. GMB workshops consist of three sessions. In the first session, participants discussed the factors that impact maternal health. After identifying the factors, they discussed the connections among them and explored the system structures that collectively impact maternal health. Based on the discussion, a causal loop diagram (CLD) was created. Finally, participants discussed action ideas that could improve the system to enhance maternal health. Based on our discussions and the causal loop diagram, we identified interconnected factors at the family, community, and policy levels. Mothers and children are directly impacted by three interrelated factors: food insecurity, unstable housing, and lack of income. These factors create a reinforcing cycle that negatively affects both mothers and newborns. After the flood, many mothers were unable to produce sufficient breastmilk due to their health status. 
Without breastmilk and sufficient food for complementary feeding, babies tend to get sick in the damp and unhygienic environments resulting from temporary or unstable housing. When parents take care of sick children, they miss out on income-generating opportunities. At the community level, the lack of access to clean water and sanitation (WASH) and to maternal healthcare further worsens the situation. Structural failures, such as a lack of safety nets and of programs associated with flood preparedness, make families increasingly vulnerable with each disaster. Several families reported that they had not fully recovered from a flood that occurred ten years ago, and this latest disaster destroyed their lives again. Although over twenty non-profit organizations are working in these villages, few of them provide sustainable support. Therefore, participants called for systemic changes in response to the increasing frequency of climate disasters. The study reveals the systematic vulnerabilities of mothers and children after climatic disasters. The most vulnerable populations are often affected the most by climate change. Collaborative efforts are required to improve water and forest management, strengthen public infrastructure, increase access to WASH, and gradually build climate-resilient communities. Governments, non-governmental organizations, and the community should work together to develop and implement effective strategies to prevent, mitigate, and adapt to climate change and its impacts. Keywords: climatic disasters, maternal health, Pakistan, systematic impact, flood, disaster relief
Procedia PDF Downloads 78
12 Predicting Open Chromatin Regions in Cell-Free DNA Whole Genome Sequencing Data by Correlation Clustering
Authors: Fahimeh Palizban, Farshad Noravesh, Amir Hossein Saeidian, Mahya Mehrmohamadi
Abstract:
In the past decade, the emergence of liquid biopsy has significantly improved cancer monitoring and detection. Dying cells, including those originating from tumors, shed their DNA into the blood and contribute to a pool of circulating fragments called cell-free DNA (cfDNA). Accordingly, identifying the tissue of origin of these DNA fragments from the plasma can result in faster, more accurate disease diagnosis and precise treatment protocols. Open chromatin regions are important epigenetic features of DNA that reflect the cell types of origin. Profiling these features by DNase-seq, ATAC-seq, and histone ChIP-seq provides insights into tissue-specific and disease-specific regulatory mechanisms. There have been several studies in the area of cancer liquid biopsy that integrate distinct genomic and epigenomic features for early cancer detection along with tissue-of-origin detection. However, multimodal analysis requires several types of experiments to cover the genomic and epigenomic aspects of a single sample, which entails substantial cost and time. To overcome these limitations, the idea of predicting open chromatin regions (OCRs) from whole genome sequencing (WGS) data is of particular importance. In this regard, we propose a computational approach to predict open chromatin regions, an important epigenetic feature, from cell-free DNA whole genome sequencing data. To fulfill this objective, local sequencing depth is fed to our proposed algorithm, and the most probable open chromatin regions are predicted from the whole genome sequencing data. Our method integrates a signal processing approach with sequencing depth data and includes count normalization, Discrete Fourier Transform conversion, graph construction, graph cut optimization by linear programming, and clustering.
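The count-normalization and Fourier steps just listed can be sketched as follows (our illustration with a naive O(n²) DFT on toy depth bins, not the authors' implementation):

```python
import cmath

# Sketch of two early steps named above -- count normalization and a
# Discrete Fourier Transform based smoothing of binned sequencing
# depth. Naive O(n^2) DFT and toy numbers for illustration only;
# real pipelines use an FFT over genome-scale bins.
def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

def smooth_depth(depth, keep=2):
    """Normalize binned depth by its mean, then zero out high frequencies."""
    mean = sum(depth) / len(depth)
    normalized = [d / mean for d in depth]          # count normalization
    spectrum = dft(normalized)
    low_pass = [x if min(k, len(spectrum) - k) <= keep else 0j  # drop noise
                for k, x in enumerate(spectrum)]
    return idft(low_pass)

# Toy per-bin read depth with an elevated plateau (a candidate region).
depth = [10, 12, 11, 40, 42, 41, 12, 10]
smoothed = smooth_depth(depth)
```

The smoothed, normalized track is the kind of signal on which graph construction and clustering can then operate.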
To validate the proposed method, we compared the output of the clustering (open chromatin region+, open chromatin region-) with previously validated open chromatin regions for human blood samples from the ATAC-DB database. The overlap between the predicted open chromatin regions and the experimentally validated regions obtained by ATAC-seq in ATAC-DB is greater than 67%, which indicates meaningful prediction. OCRs are mostly located at the transcription start sites (TSS) of genes. In this regard, we compared the concordance between the predicted OCRs and human TSS regions obtained from refTSS, finding agreement of around 52.04% with all genes and ~78% with housekeeping genes. Accurately detecting open chromatin regions from plasma cell-free DNA sequencing data is a very challenging computational problem due to several confounding factors, such as technical and biological variation. Although this approach is in its infancy, there has already been an attempt to apply it, leading to a tool named OCRDetector, which has restrictions such as the need for high-depth cfDNA WGS data, prior information about the OCR distribution, and reliance on multiple features. In contrast, we implemented graph signal clustering based on a single depth feature in an unsupervised manner, which resulted in faster performance and decent accuracy. Overall, we investigated the epigenomic pattern of a cell-free DNA sample from a new computational perspective that can be used alongside other tools to study the genetic and epigenetic aspects of a single whole genome sequencing dataset for efficient liquid biopsy analysis.
Keywords: open chromatin regions, cancer, cell-free DNA, epigenomics, graph signal processing, correlation clustering
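The depth-based pipeline described above (count normalization, Discrete Fourier Transform, correlation, clustering) can be sketched in miniature. This is a toy illustration only: the window inputs, the zero-mean/unit-variance normalization, and the use of correlation against a mean reference spectrum in place of the paper's linear-programming graph cut are all assumptions, not the authors' implementation.

```python
import math

def normalize(counts):
    """Scale a window of read-depth counts to zero mean, unit variance."""
    mean = sum(counts) / len(counts)
    sd = math.sqrt(sum((c - mean) ** 2 for c in counts) / len(counts)) or 1.0
    return [(c - mean) / sd for c in counts]

def dft_magnitudes(signal):
    """Naive Discrete Fourier Transform; returns the magnitude spectrum."""
    n = len(signal)
    mags = []
    for k in range(n):
        re = sum(s * math.cos(2 * math.pi * k * t / n) for t, s in enumerate(signal))
        im = -sum(s * math.sin(2 * math.pi * k * t / n) for t, s in enumerate(signal))
        mags.append(math.hypot(re, im))
    return mags

def pearson(a, b):
    """Pearson correlation; returns 0.0 when either vector is constant."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db) if da and db else 0.0

def label_windows(depth_windows, threshold=0.0):
    """Label windows OCR+ / OCR- by correlating each window's spectrum
    with the mean spectrum over all windows (a simplified stand-in for
    the graph construction and LP graph-cut steps)."""
    spectra = [dft_magnitudes(normalize(w)) for w in depth_windows]
    n = len(spectra[0])
    ref = [sum(s[k] for s in spectra) / len(spectra) for k in range(n)]
    return ["OCR+" if pearson(s, ref) > threshold else "OCR-" for s in spectra]
```

Windows whose depth oscillates with a shared periodicity correlate with the reference spectrum and cluster together, while flat-coverage windows fall into the other cluster.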
11 Parallel Opportunity for Water Conservation and Habitat Formation on Regulated Streams through Formation of Thermal Stratification in River Pools
Authors: Todd H. Buxton, Yong G. Lai
Abstract:
Temperature management in regulated rivers can involve significant expenditures of water to meet the cold-water requirements of species in summer. For this purpose, flows released from Lewiston Dam on the Trinity River in Northern California are 12.7 cms, with temperatures around 11 °C, from July through September to give adult spring Chinook cold water in which to hold in deep pools and mature until spawning in fall. The releases are more than double the flow, and about 10 °C colder, than the natural conditions before the dam was built. The high, cold releases provide springers the habitat they require but may suppress the stream food base and limit future salmon populations by reducing juvenile fish size and, through the positive relationship between the two, survival to adulthood. Field and modeling research was undertaken to explore whether lowering summer releases from Lewiston Dam may promote thermal stratification in river pools so that both the cold-water needs of adult salmon and the warmer-water requirements of other organisms in the stream biome may be met. For this investigation, a three-dimensional (3D) computational fluid dynamics (CFD) model was developed and validated with field measurements in two deep pools on the Trinity River. Modeling and field observations were then used to identify the flows and temperatures that may form and maintain thermal stratification under different meteorologic conditions. Under low flows, a pool was found to be well mixed and thermally homogeneous until temperatures began to stratify shortly after sunrise. Stratification then strengthened through the day until shading from trees and mountains cooled the inlet flow and weakened the thermal gradient, which collapsed shortly before sunset and returned the pool to a well-mixed state. This diurnal process of stratification formation and destruction was closely predicted by the 3D CFD model.
Both the model and field observations indicate that thermal stratification maintained the coldest temperatures of the day at ≥2 m depth in a pool and provided water that was around 8 °C warmer in the upper 2 m of the pool. Results further indicate that the stratified pool under low flows provided almost the same daily average temperatures as when flows were an order of magnitude higher and stratification was prevented, indicating that significant water savings may be realized in regulated streams while also providing the diversity in water temperatures the ecosystem requires. With confidence in the 3D CFD model established, it is now being applied to a dozen pools in the Trinity River to understand how pool bathymetry influences thermal stratification under variable flows and diurnal temperature variations. This knowledge will be used to expand the results to 52 pools in a 64 km reach below Lewiston Dam that meet the depth criterion (≥2 m) for spring Chinook holding. From this, rating curves will be developed to relate discharge to the volume of pool habitat that provides springers the temperature (<15.6 °C daily average), velocity (0.15 to 0.4 m/s), and depths that accommodate the escapement target for spring Chinook (6,000 adults) under maximum fish densities measured in other streams (3.1 m³/fish) during the holding time of year (May through August). Flow releases that meet these goals will be evaluated for water savings relative to the current flow regime and for their influence on indicator species, including the foothill yellow-legged frog, and on aspects of the stream biome that support salmon populations, including macroinvertebrate production and juvenile Chinook growth rates.
Keywords: 3D CFD modeling, flow regulation, thermal stratification, Chinook salmon, foothill yellow-legged frogs, water management
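As a back-of-the-envelope illustration of why low releases favour stratification, a densimetric Froude number compares inflow momentum with the stabilizing density contrast between warm surface water and cold deep water; Fr < 1 suggests the inflow is too weak to mix the pool. This is a hedged sketch using a simple quadratic density fit, not the authors' 3D CFD model; the velocities and temperatures in the example are illustrative assumptions.

```python
import math

def water_density(temp_c):
    """Rough quadratic fit for fresh-water density in kg/m^3
    (illustrative; reasonable near 0-30 degrees C)."""
    return 1000.0 * (1.0 - 6.63e-6 * (temp_c - 4.0) ** 2)

def densimetric_froude(velocity_ms, depth_m, t_surface_c, t_bottom_c, g=9.81):
    """Fr = U / sqrt(g' h), where g' is the reduced gravity from the
    surface-to-bottom density contrast. Fr < 1 suggests the inflow
    cannot mix out a stratified pool; Fr > 1 suggests mixing."""
    rho_top = water_density(t_surface_c)
    rho_bot = water_density(t_bottom_c)
    g_prime = g * (rho_bot - rho_top) / rho_bot
    if g_prime <= 0:
        return float("inf")  # no stable stratification to overcome
    return velocity_ms / math.sqrt(g_prime * depth_m)
```

For a 2 m deep pool with an 8 °C surface-to-bottom contrast (19 °C over 11 °C), a low-flow velocity of ~0.1 m/s gives Fr < 1 (stratification persists), while ~0.5 m/s gives Fr > 1 (mixed), consistent with the observation that high releases prevent stratification.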
10 Comparative Analysis of Pet-Parent Reported Pruritic Symptoms in Cats: Data from Social Media Listening and Surveys
Authors: Georgina Cherry, Taranpreet Rai, Luke Boyden, Sitira Williams, Andrea Wright, Richard Brown, Viva Chu, Alasdair Cook, Kevin Wells
Abstract:
Estimating population-level burden, the ability of pet-parents to identify disease, and the demand for veterinary services worldwide is challenging. The purpose of this study is to compare a feline pruritus survey with social media listening (SML) data discussing this condition. Surveys are expensive and labour-intensive to analyse, while SML data is freeform and requires careful filtering for relevancy. This study considers data from a survey of owner-observed symptoms of 156 pruritic cats conducted using Pet Parade® and SML posts collected through web-scraping to gain insights into the characterisation and management of feline pruritus. SML posts mentioning a feline body area, behaviour, and symptom were captured and reviewed for relevance, representing 1299 public posts collected from 2021 to 2023. The survey involved 1067 pet-parents who reported on pruritic symptoms in their cats. Among the observed cats, approximately 18.37% (n=196) exhibited at least one symptom. The most frequently reported symptoms were hair loss (9.2%), bald spots (7.3%), and infection, crusting, scaling, redness, scabbing, or bumpy skin (8.2%). Notably, bald spots were the primary symptom reported for short-haired cats, while other symptoms were more prevalent in medium- and long-haired cats. Affected body areas, according to pet-parents, were primarily the head, face, chin, and neck (27%) and the top of the body along the spine (22%). 35% of all cats displayed excessive behaviours consistent with pruritic skin disease. Interestingly, 27% of these cats were perceived as non-symptomatic by their owners, suggesting an under-identification of itch-related signs. Furthermore, a significant proportion of symptomatic cats did not receive any skin disease medication, whether prescribed or over the counter (n=41).
These findings indicate a higher incidence of pruritic skin disease in cats than is recognized by pet owners, potentially leading to a lack of medical intervention for clinically symptomatic cases. The comparison between the survey and social media listening data revealed that bald spots were reported in similar proportions in both datasets (25% in the survey and 28% in SML). Infection, crusting, scaling, redness, scabbing, or bumpy skin accounted for 31% of symptoms in the survey, whereas it represented 53% of relevant SML posts (excluding bumpy skin). Abnormal licking or chewing behaviours were mentioned by pet-parents in 40% of SML posts compared to 38% in the survey. The consistency in the findings of these two disparate data sources, including a complete overlap in affected body areas for the top 80% of social media listening posts, indicates minimal bias in each method, as significant biases would likely yield divergent results. The strong agreement across pruritic symptoms, affected body areas, and reported behaviours therefore enhances our confidence in the reliability of the findings. Moreover, the small differences identified between the datasets underscore the valuable insights that arise from utilising multiple data sources. These variations provide additional depth in characterising and managing feline pruritus, allowing a more comprehensive understanding of the condition. By combining survey data and social media listening, researchers can obtain a nuanced perspective and capture a wider range of experiences and perspectives, supporting informed decision-making in veterinary practice.
Keywords: social media listening, feline pruritus, surveys, felines, cats, pet owners
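The dataset comparison described above amounts to contrasting per-symptom proportions across the two sources. A minimal sketch, using the percentages quoted in the abstract; the short category labels are hypothetical shorthand, not the study's coding scheme:

```python
def proportion_differences(survey, sml):
    """Per-symptom difference in reported proportion (SML minus survey),
    rounded to two decimal places."""
    keys = set(survey) | set(sml)
    return {k: round(sml.get(k, 0.0) - survey.get(k, 0.0), 2) for k in keys}

# Proportions quoted in the abstract; labels abbreviate the symptom groups.
survey = {"bald spots": 0.25, "skin lesions": 0.31, "licking/chewing": 0.38}
sml = {"bald spots": 0.28, "skin lesions": 0.53, "licking/chewing": 0.40}
diffs = proportion_differences(survey, sml)
```

The large gap for the skin-lesion group (0.22) against near-zero gaps elsewhere mirrors the abstract's point: broad agreement supports reliability, while the few divergences mark where the two sources add complementary depth.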
9 A Case Study on Utility of 18FDG-PET/CT Scan in Identifying Active Extra Lymph Nodes and Staging of Breast Cancer
Authors: Farid Risheq, M. Zaid Alrisheq, Shuaa Al-Sadoon, Karim Al-Faqih, Mays Abdulazeez
Abstract:
Breast cancer is the most frequently diagnosed cancer worldwide and a common cause of death among women. Various conventional anatomical imaging tools are utilized for diagnosis, histological assessment, and TNM (Tumor, Node, Metastasis) staging of breast cancer. Biopsy of the sentinel lymph node is becoming an alternative to axillary lymph node dissection. Advances in 18F-Fluoro-Deoxy-Glucose Positron Emission Tomography/Computed Tomography (18FDG-PET/CT) imaging have facilitated breast cancer diagnosis by exploiting the biological trapping of 18FDG inside lesion cells, expressed as the maximum Standardized Uptake Value (SUVmax). Objective: To present the utility of 18FDG-uptake PET/CT scans in detecting active extra lymph nodes and distant occult metastases for breast cancer staging. Subjects and Methods: Four female patients presented with TNM stages of breast cancer initially classified using conventional anatomical diagnostic techniques. 18FDG-PET/CT scans were performed one hour after intravenous injection of 300-370 MBq of 18FDG, with 7-8 bed positions at 130 s each. Transverse, sagittal, and coronal views, fused PET/CT, and maximum intensity projection (MIP) images were reconstructed for each patient. Results: A total of twenty-four breast lesions, lesions extending to lung, liver, and bone, and active extra lymph nodes were detected among the patients. The initial TNM stage changed significantly after the 18FDG-PET/CT scan for each patient, as follows: Patient 1: Initial TNM stage: T1N1M0 (stage I). Finding: Two lesions in the right breast (3.2 cm², SUVmax=10.2; 1.8 cm², SUVmax=6.7), associated with metastases to two right axillary lymph nodes. Final TNM stage: T1N2M0 (stage II). Patient 2: Initial TNM stage: T2N2M0 (stage III). Finding: Right breast lesion (6.1 cm², SUVmax=15.2), associated with metastases to the right internal mammary lymph node, two right axillary lymph nodes, and sclerotic lesions in the right scapula. Final TNM stage: T2N3M1 (stage IV). Patient 3: Initial TNM stage: T2N0M1 (stage III).
Finding: Left breast lesion (11.1 cm², SUVmax=18.8), associated with metastases to two lymph nodes in the left hilum and three lesions across both lungs. Final TNM stage: T2N2M1 (stage IV). Patient 4: Initial TNM stage: T4N1M1 (stage III). Finding: Four lesions in the upper outer quadrant of the right breast (largest: 12.7 cm², SUVmax=18.6), in addition to one lesion in the left breast (4.8 cm², SUVmax=7.1), associated with metastases to multiple lesions in the liver (largest: 11.4 cm², SUV=8.0) and two osteolytic lesions in the left scapula and the first cervical vertebra. No evidence of regional or distant lymph node involvement. Final TNM stage: T4N0M2 (stage IV). Conclusions: Our results demonstrated that 18FDG-PET/CT scans significantly changed the TNM stages of breast cancer patients. While the T factor was unchanged, the N and M factors showed significant variations. A single PET/CT session was effective in detecting active extra lymph nodes and distant occult metastases that were not identified by conventional diagnostic techniques, and might advantageously replace bone scans and contrast-enhanced CT of the chest, abdomen, and pelvis. Applying 18FDG-PET/CT scanning early in the investigation might shorten diagnosis time, help in deciding an adequate treatment protocol, and improve patients' quality of life and survival. Trapping of 18FDG in malignant lesion cells keeps the retention index (RI%) elevated for a considerable time after a PET/CT scan, which might help localize the sentinel lymph node for biopsy using a handheld gamma probe detector. Future work is required to demonstrate this utility.
Keywords: axillary lymph nodes, breast cancer staging, fluorodeoxyglucose positron emission tomography/computed tomography, lymph nodes
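The SUVmax values quoted above are body-weight-normalized standardized uptake values. A minimal sketch of the standard SUV formula and of the retention index (RI%) mentioned in the conclusions, assuming tissue density of ~1 g/mL; the numeric inputs in the example are illustrative, not patient data from this study:

```python
def suv(tissue_kbq_per_ml, injected_dose_mbq, body_weight_kg):
    """Body-weight-normalized standardized uptake value:
    SUV = tissue activity concentration / (injected dose / body weight).
    Assuming tissue density ~1 g/mL, the ratio is dimensionless."""
    injected_kbq = injected_dose_mbq * 1000.0   # MBq -> kBq
    weight_g = body_weight_kg * 1000.0          # kg -> g
    return tissue_kbq_per_ml / (injected_kbq / weight_g)

def retention_index(suv_early, suv_delayed):
    """RI% between early and delayed acquisitions; a rising index is
    typical of sustained 18FDG trapping in malignant lesions."""
    return 100.0 * (suv_delayed - suv_early) / suv_early
```

For example, a lesion measuring 50 kBq/mL after a 350 MBq injection in a 70 kg patient gives SUV = 10.0, on the order of the SUVmax values reported for these patients.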