Search results for: ethical sensitivity
251 Modern Information Security Management and Digital Technologies: A Comprehensive Approach to Data Protection
Authors: Mahshid Arabi
Abstract:
With the rapid expansion of digital technologies and the internet, information security has become a critical priority for organizations and individuals. The widespread use of digital tools such as smartphones and internet networks facilitates the storage of vast amounts of data, but simultaneously, vulnerabilities and security threats have significantly increased. The aim of this study is to examine and analyze modern methods of information security management and to develop a comprehensive model to counteract threats and information misuse. This study employs a mixed-methods approach, including both qualitative and quantitative analyses. Initially, a systematic review of previous articles and research in the field of information security was conducted. Then, using the Delphi method, interviews with 30 information security experts were conducted to gather their insights on security challenges and solutions. Based on the results of these interviews, a comprehensive model for information security management was developed. The proposed model includes advanced encryption techniques, machine learning-based intrusion detection systems, and network security protocols. AES and RSA encryption algorithms were used for data protection, and machine learning models such as Random Forest and Neural Networks were utilized for intrusion detection. Statistical analyses were performed using SPSS software. To evaluate the effectiveness of the proposed model, T-Test and ANOVA statistical tests were employed, and results were measured using accuracy, sensitivity, and specificity indicators of the models. Additionally, multiple regression analysis was conducted to examine the impact of various variables on information security. The findings of this study indicate that the comprehensive proposed model reduced cyber-attacks by an average of 85%. 
Statistical analysis showed that the combined use of encryption techniques and intrusion detection systems significantly improves information security. Based on the obtained results, it is recommended that organizations continuously update their information security systems and use a combination of multiple security methods to protect their data. Additionally, educating employees and raising public awareness about information security can serve as an effective tool in reducing security risks. This research demonstrates that effective and up-to-date information security management requires a comprehensive and coordinated approach, including the development and implementation of advanced techniques and continuous training of human resources.
Keywords: data protection, digital technologies, information security, modern management
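The abstract evaluates its intrusion-detection models using accuracy, sensitivity, and specificity. A minimal sketch of how those three indicators are computed from binary predictions; the labels below are hypothetical (1 = attack, 0 = benign), not the study's data:

```python
def confusion_counts(y_true, y_pred):
    """Return (TP, TN, FP, FN) for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, tn, fp, fn

def evaluate(y_true, y_pred):
    tp, tn, fp, fn = confusion_counts(y_true, y_pred)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
    }

# Hypothetical example labels, not the study's data
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]
print(evaluate(y_true, y_pred))
```

The same three numbers would be reported per model (Random Forest, neural network) to compare detectors, regardless of which classifier produced the predictions.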
Procedia PDF Downloads 332
250 Study of COVID-19 Intensity Correlated with Specific Biomarkers and Environmental Factors
Authors: Satendra Pal Singh, Dalip Kr. Kakru, Jyoti Mishra, Rajesh Thakur, Tarana Sarwat
Abstract:
COVID-19 is still an intrigue as far as morbidity and mortality are concerned. The rate of recovery varies from person to person and depends upon the accessibility of the healthcare system and the roles played by physicians and caregivers. It is envisaged that, with the passage of time, people will become immune to this virus, and those who are vulnerable will sustain themselves with the help of vaccines. The proposed study examines how the severity of COVID-19 is associated with specific biomarkers, correlated with age and gender. We will assess the overall homeostasis of persons affected by coronavirus infection, as well as of those who have recovered from it. Some people show severe effects, while others show very mild symptoms despite low CT values. Thus far, it is unclear why the new strain of the virus affects people differently in terms of age, gender, and ABO blood type. According to available data, the fatality rate was 10.5 percent for patients with heart disease, 7.3 percent for diabetics, and 6 percent for those with other comorbidities. However, why some COVID-19 cases are worse than others is not fully explainable to date. Overall, data show that ABO blood group influences susceptibility to SARS-CoV-2 infection, and another study also reports phenotypic effects of blood group related to COVID-19. It is an accepted fact that females have stronger immune systems than males, which may be related to the fact that females have two 'X' chromosomes, possibly carrying a more effective immunity-related gene capable of protecting them. Specific sex hormones also induce a better immune response in a given gender. This calls for in-depth analysis to gain insight into this dilemma.
COVID-19 is still not fully characterized, and thus we are not very familiar with its biology, mode of infection, susceptibility, and overall viral load in the human body. How many virus particles are needed to infect a person? How, then, do comorbidities contribute to coronavirus infection? Since the emergence of this virus in 2020, a large number of papers have been published, and vaccines have been developed, but a large number of questions remain unanswered. The proneness of humans to COVID-19 infection needs to be established in order to develop a better strategy to fight this virus. Our study will address the impact of demography on the severity of COVID-19 infection and, at the same time, will look into the gender-specific sensitivity of COVID-19 and the operational variation of different biochemical markers in COVID-19-positive patients. Besides, we will study the correlation, if any, between COVID-19 severity and ABO blood group type, and the occurrence of the most common blood group type among positive patients.
Keywords: coronavirus, ABO blood group, age, gender
Procedia PDF Downloads 99
249 Biosensor for Determination of Immunoglobulin A, E, G and M
Authors: Umut Kokbas, Mustafa Nisari
Abstract:
Immunoglobulins, also known as antibodies, are glycoprotein molecules produced by plasma cells, which derive from activated B cells. Antibodies are critical molecules of the immune response, helping the immune system specifically recognize and destroy antigens such as bacteria, viruses, and toxins. Immunoglobulin classes differ in their biological properties, structures, targets, functions, and distributions. Five major classes of antibodies have been identified in mammals: IgA, IgD, IgE, IgG, and IgM. Evaluation of the immunoglobulin isotype can provide useful insight into the complex humoral immune response. Evaluation and knowledge of immunoglobulin structure and classes are also important for the selection and preparation of antibodies for immunoassays and other detection applications. The immunoglobulin test measures the level of certain immunoglobulins in the blood. IgA, IgG, and IgM are usually measured together, and in this way they can provide doctors with important information, especially regarding immune deficiency diseases. Hypogammaglobulinemia (HGG) is one of the main groups of primary immunodeficiency disorders. HGG is caused by various defects in B cell lineage or function that result in low levels of immunoglobulins in the bloodstream. This affects the body's immune response, causing a wide range of clinical features, from asymptomatic disease to severe and recurrent infections, chronic inflammation, and autoimmunity. Transient hypogammaglobulinemia of infancy (THGI), IgM deficiency (IgMD), Bruton agammaglobulinemia, and selective IgA deficiency (SIgAD) are a few examples of HGG. Most patients can continue their normal lives by taking prophylactic antibiotics; however, patients with severe infections require intravenous immune serum globulin (IVIG) therapy. The IgE level may rise to fight off parasitic infections, and may also be a sign that the body is overreacting to allergens.
Also, since the immune response can vary with different antigens, measuring specific antibody levels aids in the interpretation of the immune response after immunization or vaccination. Immune deficiencies usually occur in childhood. In immunology and allergy clinics, a method that is fast and reliable, and that allows more convenient and uncomplicated sampling from children, would be more useful than the classical methods for the diagnosis and follow-up of diseases, especially childhood hypogammaglobulinemia. In this work, the antibodies were attached to the electrode surface via a poly(hydroxyethyl methacrylamide)-cysteine nanopolymer, and the anodic peak currents obtained in the electrochemical study were used for evaluation. According to the data obtained, immunoglobulin determination can be made with a biosensor. In further studies, it will be useful to develop a medical diagnostic kit through biomedical engineering and to increase its sensitivity.
Keywords: biosensor, immunosensor, immunoglobulin, infection
Procedia PDF Downloads 110
248 Regulatory and Economic Challenges of AI Integration in Cyber Insurance
Authors: Shreyas Kumar, Mili Shangari
Abstract:
Integrating artificial intelligence (AI) in the cyber insurance sector represents a significant advancement, offering the potential to revolutionize risk assessment, fraud detection, and claims processing. However, this integration introduces a range of regulatory and economic challenges that must be addressed to ensure responsible and effective deployment of AI technologies. This paper examines the multifaceted regulatory landscape governing AI in cyber insurance and explores the economic implications of compliance, innovation, and market dynamics. AI's capabilities in processing vast amounts of data and identifying patterns make it an invaluable tool for insurers in managing cyber risks. Yet, the application of AI in this domain is subject to stringent regulatory scrutiny aimed at safeguarding data privacy, ensuring algorithmic transparency, and preventing biases. Regulatory bodies, such as the European Union with its General Data Protection Regulation (GDPR), mandate strict compliance requirements that can significantly impact the deployment of AI systems. These regulations necessitate robust data protection measures, ethical AI practices, and clear accountability frameworks, all of which entail substantial compliance costs for insurers. The economic implications of these regulatory requirements are profound. Insurers must invest heavily in upgrading their IT infrastructure, implementing robust data governance frameworks, and training personnel to handle AI systems ethically and effectively. These investments, while essential for regulatory compliance, can strain financial resources, particularly for smaller insurers, potentially leading to market consolidation. Furthermore, the cost of regulatory compliance can translate into higher premiums for policyholders, affecting the overall affordability and accessibility of cyber insurance. Despite these challenges, the potential economic benefits of AI integration in cyber insurance are significant. 
AI-enhanced risk assessment models can provide more accurate pricing, reduce the incidence of fraudulent claims, and expedite claims processing, leading to overall cost savings and increased efficiency. These efficiencies can improve the competitiveness of insurers and drive innovation in product offerings. However, balancing these benefits with regulatory compliance is crucial to avoid legal penalties and reputational damage. The paper also explores the potential risks associated with AI integration, such as algorithmic biases that could lead to unfair discrimination in policy underwriting and claims adjudication. Regulatory frameworks need to evolve to address these issues, promoting fairness and transparency in AI applications. Policymakers play a critical role in creating a balanced regulatory environment that fosters innovation while protecting consumer rights and ensuring market stability. In conclusion, the integration of AI in cyber insurance presents both regulatory and economic challenges that require a coordinated approach involving regulators, insurers, and other stakeholders. By navigating these challenges effectively, the industry can harness the transformative potential of AI, driving advancements in risk management and enhancing the resilience of the cyber insurance market. This paper provides insights and recommendations for policymakers and industry leaders to achieve a balanced and sustainable integration of AI technologies in cyber insurance.
Keywords: artificial intelligence (AI), cyber insurance, regulatory compliance, economic impact, risk assessment, fraud detection, cyber liability insurance, risk management, ransomware
Procedia PDF Downloads 34
247 The Influence of the Variety and Harvesting Date on Haskap Composition and Anti-Diabetic Properties
Authors: Aruma Baduge Kithma Hansanee De Silva
Abstract:
Haskap (Lonicera caerulea L.), also known as blue honeysuckle, is a recently commercialized berry crop in Canada. Haskap berries are rich in polyphenols, including anthocyanins, which are known for potential health-promoting effects. Cyanidin-3-O-glucoside (C3G) is the most prominent anthocyanin of haskap berries. Recent literature reveals the efficacy of C3G in reducing the risk of type 2 diabetes (T2D), which has become an increasingly common health issue around the world. T2D is characterized as a metabolic disorder of hyperglycemia and insulin resistance. It has been demonstrated that C3G has anti-diabetic effects in various ways, including improvement in insulin sensitivity and inhibition of the activities of carbohydrate-hydrolyzing enzymes, including alpha-amylase and alpha-glucosidase. The goal of this study was to investigate the influence of variety and harvesting date on haskap composition, biological properties, and anti-diabetic properties. The polyphenolic compounds present in four commercially grown haskap cultivars, Aurora, Rebecca, Larissa, and Evie, across five harvesting stages (H1-H5), were extracted separately in 80% ethanol and analyzed to characterize their phenolic profiles. Haskap berries contain different types of polyphenols, including flavonoids and phenolic acids. Anthocyanin is the major type of flavonoid, and C3G is the most prominent anthocyanin, accounting for 79% of total anthocyanin in all extracts. The ethanol extract of Larissa at H5 contained the highest C3G content (1212.3±63.9 mg/100 g FW), while Evie at H1 contained the lowest (96.9±40.4 mg/100 g FW). The average C3G content of Larissa from H1 to H5 varied from 208 to 1212 mg/100 g FW. Quercetin-3-rutinoside (Q3Rut) is the major type of flavonol, and the highest level was observed in Rebecca at H4 (47.81 mg/100 g FW).
Haskap berries also contained phenolic acids, but approximately 95% of the phenolic acids consisted of chlorogenic acid. The cultivar Larissa had a higher level of anthocyanin than the other three cultivars. The highest total phenolic content was observed in Evie at H5 (2.97±1.03 mg/g DW) and the lowest in Rebecca at H1 (1.47±0.96 mg/g DW). The antioxidant capacity of Evie at H5 was the highest (14.40±2.21 µmol TE/g DW) among the cultivars, and the lowest was observed in Aurora at H3 (5.69±0.34 µmol TE/g DW). Furthermore, Larissa at H5 showed the greatest inhibition of the carbohydrate-hydrolyzing enzymes alpha-glucosidase and alpha-amylase. In conclusion, Larissa at H5 demonstrated the highest polyphenol composition and anti-diabetic properties.
Keywords: anthocyanin, cyanidin-3-O-glucoside, haskap, type 2 diabetes
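Since the abstract reports that C3G accounts for about 79% of total anthocyanin in all extracts, the reported C3G values can be back-calculated into approximate total anthocyanin contents. A small sketch of that arithmetic; the two C3G values are the ones reported in the abstract (mg/100 g FW), and the 79% relationship is treated as an approximation:

```python
# C3G makes up ~79% of total anthocyanin in the haskap extracts (per the abstract)
C3G_FRACTION = 0.79

# Reported C3G contents, mg/100 g fresh weight
c3g_reported = {"Larissa H5": 1212.3, "Evie H1": 96.9}

# Approximate total anthocyanin = C3G / 0.79
total_anthocyanin = {k: round(v / C3G_FRACTION, 1) for k, v in c3g_reported.items()}
print(total_anthocyanin)  # Larissa H5 ≈ 1534.6, Evie H1 ≈ 122.7
```

This illustrates why the cultivar/stage ranking by C3G also approximates the ranking by total anthocyanin when the 79% share is roughly constant across extracts.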
Procedia PDF Downloads 459
246 Adaptation of the Scenario Test for Greek-Speaking People with Aphasia: A Reliability and Validity Study
Authors: Marina Charalambous, Phivos Phylactou, Thekla Elriz, Loukia Psychogios, Jean-Marie Annoni
Abstract:
Background: Evidence-based practices for the evaluation and treatment of people with aphasia (PWA) in Greek are mainly impairment-based. Functional and multimodal communication is usually under-assessed and neglected by clinicians. This study explores the adaptation and psychometric testing of the Greek (GR) version of The Scenario Test. The Scenario Test assesses the everyday functional communication of PWA in an interactive multimodal communication setting with the support of an active communication facilitator. Aims: To define the reliability and validity of The Scenario Test-GR and discuss its clinical value. Methods & Procedures: The Scenario Test-GR was administered to 54 people with chronic stroke (6+ months post-stroke): 32 PWA and 22 people with stroke without aphasia. Participants were recruited from Greece and Cyprus. All measures were performed in an interview format. Standard psychometric criteria were applied to evaluate the reliability (internal consistency, test-retest, and interrater reliability) and validity (construct and known-groups validity) of The Scenario Test-GR. Video analysis was performed for the qualitative examination of the communication modes used. Outcomes & Results: The Scenario Test-GR shows high levels of reliability and validity. High scores of internal consistency (Cronbach's α = .95), test-retest reliability (ICC = .99), and interrater reliability (ICC = .99) were found. Interrater agreement in scores on individual items fell between good and excellent levels of agreement. Correlations with a tool measuring language function in aphasia (the Aphasia Severity Rating Scale of the Boston Diagnostic Aphasia Examination), a measure of functional communication (the Communicative Effectiveness Index), and two instruments examining the psychosocial impact of aphasia (the Stroke and Aphasia Quality of Life questionnaire and the Aphasia Impact Questionnaire) revealed good convergent validity (all ps < .05).
Results showed good known-groups validity (Mann-Whitney U = 96.5, p < .001), with significantly higher scores for participants without aphasia compared to those with aphasia. Conclusions: The psychometric qualities of The Scenario Test-GR support the reliability and validity of the tool for the assessment of functional communication in Greek-speaking PWA. The Scenario Test-GR can be used to assess multimodal functional communication, to orient aphasia rehabilitation goal-setting towards the activity and participation level, and as an outcome measure of everyday communication. Future studies will focus on the measurement of sensitivity to change in PWA with severe non-fluent aphasia.
Keywords: The Scenario Test-GR, functional communication assessment, people with aphasia (PWA), tool validation
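The internal-consistency figure reported above (Cronbach's α = .95) comes from the standard formula α = k/(k-1) · (1 - Σ item variances / total-score variance). A minimal sketch of that computation; the item scores below are hypothetical (rows = participants, columns = test items), not the study's data:

```python
from statistics import variance

def cronbach_alpha(rows):
    """Cronbach's alpha for a participants-by-items score matrix."""
    k = len(rows[0])                          # number of items
    items = list(zip(*rows))                  # column-wise item scores
    item_vars = sum(variance(col) for col in items)
    total_var = variance([sum(r) for r in rows])
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical scores: 5 participants x 4 items, each item scored 0-3
scores = [
    [3, 3, 2, 3],
    [1, 1, 1, 2],
    [2, 3, 2, 2],
    [0, 1, 1, 0],
    [3, 2, 3, 3],
]
print(round(cronbach_alpha(scores), 2))  # → 0.92
```

Higher alpha means the items covary strongly relative to the spread of total scores, which is what a value like .95 indicates for the Scenario Test-GR.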
Procedia PDF Downloads 131
245 Bridging Minds and Nature: Revolutionizing Elementary Environmental Education Through Artificial Intelligence
Authors: Hoora Beheshti Haradasht, Abooali Golzary
Abstract:
Environmental education plays a pivotal role in shaping the future stewards of our planet. Leveraging the power of artificial intelligence (AI) in this endeavor presents an innovative approach to captivate and educate elementary school children about environmental sustainability. This paper explores the application of AI technologies in designing interactive and personalized learning experiences that foster curiosity, critical thinking, and a deep connection to nature. By harnessing AI-driven tools, virtual simulations, and personalized content delivery, educators can create engaging platforms that empower children to comprehend complex environmental concepts while nurturing a lifelong commitment to protecting the Earth. With the pressing challenges of climate change and biodiversity loss, cultivating an environmentally conscious generation is imperative. Integrating AI in environmental education revolutionizes traditional teaching methods by tailoring content, adapting to individual learning styles, and immersing students in interactive scenarios. This paper delves into the potential of AI technologies to enhance engagement, comprehension, and pro-environmental behaviors among elementary school children. Modern AI technologies, including natural language processing, machine learning, and virtual reality, offer unique tools to craft immersive learning experiences. Adaptive platforms can analyze individual learning patterns and preferences, enabling real-time adjustments in content delivery. Virtual simulations, powered by AI, transport students into dynamic ecosystems, fostering experiential learning that goes beyond textbooks. AI-driven educational platforms provide tailored content, ensuring that environmental lessons resonate with each child's interests and cognitive level. By recognizing patterns in students' interactions, AI algorithms curate customized learning pathways, enhancing comprehension and knowledge retention. 
Utilizing AI, educators can develop virtual field trips and interactive nature explorations. Children can navigate virtual ecosystems, analyze real-time data, and make informed decisions, cultivating an understanding of the delicate balance between human actions and the environment. While AI offers promising educational opportunities, ethical concerns must be addressed. Safeguarding children's data privacy, ensuring content accuracy, and avoiding biases in AI algorithms are paramount to building a trustworthy learning environment. By merging AI with environmental education, educators can empower children not only with knowledge but also with the tools to become advocates for sustainable practices. As children engage in AI-enhanced learning, they develop a sense of agency and responsibility to address environmental challenges. The application of artificial intelligence in elementary environmental education presents a groundbreaking avenue to cultivate environmentally conscious citizens. By embracing AI-driven tools, educators can create transformative learning experiences that empower children to grasp intricate ecological concepts, forge an intimate connection with nature, and develop a strong commitment to safeguarding our planet for generations to come.
Keywords: artificial intelligence, environmental education, elementary children, personalized learning, sustainability
Procedia PDF Downloads 84
244 A Digital Health Approach: Using Electronic Health Records to Evaluate the Cost Benefit of Early Diagnosis of Alpha-1 Antitrypsin Deficiency in the UK
Authors: Sneha Shankar, Orlando Buendia, Will Evans
Abstract:
Alpha-1 antitrypsin deficiency (AATD) is a rare, genetic, and multisystemic condition. Underdiagnosis is common, leading to chronic pulmonary and hepatic complications, increased resource utilization, and additional costs to the healthcare system. Currently, there is limited evidence on the direct medical costs of AATD diagnosis in the UK. This study explores the economic impact of AATD patients during the 3 years before diagnosis and identifies the major cost drivers, using primary and secondary care electronic health record (EHR) data. The 3-year pre-diagnosis period was chosen based on the ability of our tool to identify patients earlier. The AATD algorithm was created using published disease criteria and applied to the EHRs of 148 known AATD patients found in a primary care database of 936,148 patients (413,674 from Biobank and 501,188 from a single primary care locality). Among the 148 patients, 9 were flagged earlier by the tool, which could save, on average, 3 (range 1-6) years per patient. We analysed the primary care journey of 101 of the 148 AATD patients and the Hospital Episode Statistics (HES) data of 20 patients, all of whom had at least 3 years of clinical history in their records before diagnosis. The codes related to laboratory tests, clinical visits, referrals, hospitalization days, day cases, and inpatient admissions attributable to AATD were examined in this 3-year period before diagnosis. The average cost per patient was calculated, and the direct medical costs were modelled based on a mean prevalence of 100 AATD patients in a population of 500,000. A deterministic sensitivity analysis (DSA) of 20% was performed to determine the major cost drivers. Cost data were obtained from the NHS National Tariff 2020/21, the National Schedule of NHS Costs 2018/19, PSSRU 2018/19, and private care tariffs.
The total direct medical cost of one hundred AATD patients three years before diagnosis in primary and secondary care in the UK was £3,556,489, with an average direct cost per patient of £35,565. A vast majority of this total direct cost (95%) was associated with inpatient admissions (£3,378,229). The DSA determined that the costs associated with tier-2 laboratory tests and inpatient admissions were the greatest contributors to direct costs in primary and secondary care, respectively. This retrospective study shows the role of EHRs in calculating direct medical costs and the potential benefit of new technologies for the early identification of patients with AATD to reduce the economic burden in primary and secondary care in the UK.
Keywords: alpha-1 antitrypsin deficiency, costs, digital health, early diagnosis
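A one-way deterministic sensitivity analysis of the kind described (±20% swings per cost component) can be sketched as follows. The inpatient figure (£3,378,229) and total (£3,556,489) are taken from the abstract; the split of the remaining £178,260 across the other components is hypothetical, purely for illustration:

```python
# Cost components; inpatient value is from the abstract, the other two
# are hypothetical placeholders summing to the reported total of £3,556,489.
costs = {
    "inpatient_admissions": 3_378_229,
    "laboratory_tests": 120_000,   # hypothetical
    "clinical_visits": 58_260,     # hypothetical
}

def dsa(costs, swing=0.20):
    """One-way DSA: total cost when each component alone varies by ±swing."""
    base_total = sum(costs.values())
    return {
        name: (base_total - value * swing, base_total + value * swing)
        for name, value in costs.items()
    }

# Widest low-high range identifies the dominant cost driver (a tornado diagram)
for name, (low, high) in sorted(dsa(costs).items(),
                                key=lambda kv: kv[1][1] - kv[1][0],
                                reverse=True):
    print(f"{name}: £{low:,.0f} to £{high:,.0f}")
```

Ranking components by the width of the resulting range reproduces the tornado-diagram logic used to single out inpatient admissions as the dominant secondary-care cost driver.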
Procedia PDF Downloads 168
243 Monitoring Key Biomarkers Related to the Risk of Low Breastmilk Production in Women, Leading to a Positive Impact on Infants' Health
Authors: R. Sanchez-Salcedo, N. H. Voelcker
Abstract:
Currently, low breast milk production in women is one of the leading causes of health complications in infants. It has been demonstrated that exclusive breastfeeding, especially up to a minimum of 6 months, significantly reduces respiratory and gastrointestinal infections, which are the main causes of death in infants. However, current data show that a high percentage of women stop breastfeeding their children because they perceive an inadequate supply of milk, and only 45% of children under 6 months are breastfed. There is therefore a clear need to design and develop a biosensor sensitive and selective enough to identify and validate a panel of milk biomarkers that allow the early diagnosis of this condition. In this context, electrochemical biosensors could be a powerful tool, meeting all the requirements in terms of reliability, selectivity, sensitivity, cost efficiency, and potential for multiplex detection. Moreover, they are suitable for the development of point-of-care (POC) devices and wearable sensors. In this work, we report the development of two types of sensing platforms targeting several biomarkers, including miRNAs and hormones, that are present in breast milk and dysregulated in this pathological condition. The first type of sensing platform consists of an enzymatic sensor for the detection of lactose, one of the main components of milk. In this design, we used a gold surface as the electrochemical transducer because of its several advantages, such as the variety of strategies available for its rapid and efficient functionalization with bioreceptors or capture molecules. For the second type of sensing platform, a nanoporous silicon film (pSi) was chosen as the electrode material for the design of DNA sensors and aptasensors targeting miRNAs and hormones, respectively.
The pSi matrix offers a large surface area with an abundance of active sites for the immobilization of bioreceptors, as well as tunable characteristics that increase selectivity and specificity, making it an ideal alternative material. The analytical performance of the designed biosensors was not only characterized in buffer but also validated in minimally treated breast milk samples. We have demonstrated the potential of electrochemical transducers on pSi and gold surfaces for monitoring clinically relevant biomarkers associated with a heightened risk of low milk production in women. This approach, in which the nanofabrication techniques and the functionalization methods were optimized to increase the efficacy of the biosensor, provides a foundation for further research and the development of targeted diagnostic strategies.
Keywords: biosensors, electrochemistry, early diagnosis, clinical markers, miRNAs
Procedia PDF Downloads 20
242 Winter, Not Spring, Climate Drives Annual Adult Survival in Common Passerines: A Country-Wide, Multi-Species Modeling Exercise
Authors: Manon Ghislain, Timothée Bonnet, Olivier Gimenez, Olivier Dehorter, Pierre-Yves Henry
Abstract:
Climatic fluctuations affect the demography of animal populations, generating changes in population size, phenology, distribution, and community assemblages. However, very few studies have identified the underlying demographic processes. For short-lived species like common passerine birds, are these changes generated by changes in adult survival or in fecundity and recruitment? This study tests for an effect of annual climatic conditions (spring and winter) on annual, local adult survival at very large spatial (a country, 252 sites), temporal (25 years), and biological (25 species) scales. Constant Effort Site ringing has allowed the collection of capture-mark-recapture data for 100,000 adult individuals since 1989 over metropolitan France, thus documenting the annual, local survival rates of the most common passerine birds. We specifically developed a set of multi-year, multi-species, multi-site Bayesian models describing variations in local survival and recapture probabilities. This method allows for a statistically powerful hierarchical assessment (global versus species-specific) of the effects of climate variables on survival. A major part of the between-year variation in survival rate was common to all species (74% of the between-year variance), whereas only 26% of the temporal variation was species-specific. Although changing spring climate is commonly invoked as a cause of population size fluctuations, spring climatic anomalies (mean precipitation or temperature for March-August) do not impact adult survival: only 1% of the between-year variation in species survival is explained by spring climatic anomalies. However, for sedentary birds, winter climatic anomalies (North Atlantic Oscillation) had a significant, quadratic effect on adult survival, with birds surviving less during intermediate years than during more extreme years. For migratory birds, we did not detect an effect of winter climatic anomalies (Sahel rainfall).
We will analyze the life-history traits (migration, habitat, thermal range) that could explain the different sensitivity of species to winter climate anomalies. Overall, we conclude that changes in population sizes of passerine birds are unlikely to be the consequence of climate-driven mortality (or emigration) in spring but could be induced by other demographic parameters, like fecundity.
Keywords: Bayesian approach, capture-recapture, climate anomaly, constant effort sites scheme, passerine, seasons, survival
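The 74%-shared versus 26%-specific split above comes from partitioning between-year variance in survival into a component common to all species and a species-specific remainder. A simplified, non-Bayesian sketch of that decomposition; the survival deviations below are hypothetical (rows = species, columns = years), not the CES data, and the full study uses hierarchical capture-recapture models rather than this direct calculation:

```python
from statistics import pvariance, mean

# Hypothetical yearly deviations in survival rate (rows = species, cols = years)
dev = [
    [0.05, -0.04, 0.02, -0.03],
    [0.06, -0.05, 0.01, -0.02],
    [0.04, -0.03, 0.03, -0.04],
]

# Common signal per year = mean deviation across species for that year
year_effect = [mean(col) for col in zip(*dev)]
# Species-specific remainder after removing the common year effect
residual = [[d - y for d, y in zip(row, year_effect)] for row in dev]

shared_var = pvariance(year_effect)                       # shared between-year variance
specific_var = mean(pvariance(row) for row in residual)   # species-specific variance
shared_fraction = shared_var / (shared_var + specific_var)
print(round(shared_fraction, 2))
```

In this toy table the species track each other closely, so nearly all between-year variance is shared; the study's 74% figure reflects the same kind of partition estimated within the hierarchical model.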
Procedia PDF Downloads 303
241 Transgenerational Impact of Intrauterine Hyperglycaemia on F2 Offspring without Pre-Diabetic Exposure of F1 Male Offspring
Authors: Jun Ren, Zhen-Hua Ming, He-Feng Huang, Jian-Zhong Sheng
Abstract:
Adverse intrauterine stimuli during critical or sensitive periods in early life may lead to health risks not only in the later life span but also in further generations. Intrauterine hyperglycaemia, a major feature of gestational diabetes mellitus (GDM), is a typical adverse environment for the development of both the F1 fetus and F1 gamete cells. However, there is scarce information on phenotypic differences in metabolic memory between somatic cells and germ cells exposed to intrauterine hyperglycaemia, and the direct transmission effect of intrauterine hyperglycaemia per se has not been assessed either. In this study, we built a GDM mouse model and selected male GDM offspring without a pre-diabetic phenotype as our founders, to exclude postnatal diabetic influence on gametes, thereby investigating the direct transmission effect of intrauterine hyperglycaemia exposure on F2 offspring; we further compared the metabolic differences between affected F1-GDM male offspring and F2 offspring. A GDM mouse model of intrauterine hyperglycaemia was established by intraperitoneal injection of streptozotocin after pregnancy. Pups of GDM mothers were fostered by normal control mothers. All mice were fed standard food. Male GDM offspring without a metabolic dysfunction phenotype were crossed with normal female mice to obtain F2 offspring. Body weight, glucose tolerance test, insulin tolerance test, and the homeostasis model of insulin resistance (HOMA-IR) index were measured in both generations at 8 weeks of age. Some F1-GDM male mice showed impaired glucose tolerance (p < 0.001), but none showed impaired insulin sensitivity, and their body weight did not differ significantly from that of control mice. Some F2-GDM offspring exhibited impaired glucose tolerance (p < 0.001), and all F2-GDM offspring exhibited a higher HOMA-IR index (p < 0.01 for individuals with normal glucose tolerance vs. control; p < 0.05 for glucose-intolerant individuals vs. control).
All F2-GDM offspring exhibited higher insulin tolerance test (ITT) curves than controls (p < 0.001 for normal glucose tolerance individuals, p < 0.05 for glucose intolerance individuals, vs. control). F2-GDM offspring also had higher body weight than control mice (p < 0.001 for normal glucose tolerance individuals, p < 0.001 for glucose intolerance individuals, vs. control). While glucose intolerance is the only phenotype that F1-GDM male mice may exhibit, the F2 male generation of healthy F1-GDM fathers showed insulin resistance, increased body weight and/or impaired glucose tolerance. These findings imply that intrauterine hyperglycaemia exposure affects germ cells and somatic cells differently, so that F1 and F2 offspring demonstrate distinct metabolic dysfunction phenotypes, and that intrauterine hyperglycaemia exposure per se has a strong influence on the F2 generation, independent of postnatal metabolic dysfunction exposure.
Keywords: inheritance, insulin resistance, intrauterine hyperglycaemia, offspring
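As a point of reference for the HOMA-IR values compared above, the index is conventionally computed from fasting glucose and fasting insulin; a minimal sketch (the function name and example values are illustrative, not taken from the study):

```python
def homa_ir(fasting_glucose_mmol_l: float, fasting_insulin_mu_ml: float) -> float:
    """Homeostasis Model Assessment of insulin resistance (HOMA-IR).

    Conventional formula: glucose (mmol/L) x insulin (microU/mL) / 22.5.
    Higher values indicate greater insulin resistance.
    """
    return fasting_glucose_mmol_l * fasting_insulin_mu_ml / 22.5

# Example: glucose 5.0 mmol/L and insulin 10 microU/mL give about 2.22
```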
Procedia PDF Downloads 238
240 Model-Driven and Data-Driven Approaches for Crop Yield Prediction: Analysis and Comparison
Authors: Xiangtuo Chen, Paul-Henry Cournéde
Abstract:
Crop yield prediction is a paramount issue in agriculture. The main idea of this paper is to find an efficient way to predict corn yield based on meteorological records. The prediction models used in this paper can be classified into model-driven approaches and data-driven approaches, according to their different modeling methodologies. The model-driven approaches are based on crop mechanistic modeling: they describe crop growth in interaction with the environment as dynamical systems. But the calibration of such a dynamical system is difficult, because it turns out to be a multidimensional non-convex optimization problem. An original contribution of this paper is to propose a statistical methodology, Multi-Scenarios Parameters Estimation (MSPE), for the parametrization of potentially complex mechanistic models from a new type of dataset (climatic data and final yield in many situations). It is tested with CORNFLO, a crop model for maize growth. On the other hand, the data-driven approach to yield prediction is free of the complex biophysical process but has strict requirements on the dataset. A second contribution of the paper is the comparison of these model-driven methods with classical data-driven methods. For this purpose, we consider two classes of regression methods: methods derived from linear regression (Ridge and Lasso Regression, Principal Components Regression or Partial Least Squares Regression) and machine learning methods (Random Forest, k-Nearest Neighbor, Artificial Neural Network and SVM regression). The dataset consists of 720 records of corn yield at county scale provided by the United States Department of Agriculture (USDA) and the associated climatic data. A 5-fold cross-validation process and two accuracy metrics, root mean square error of prediction (RMSEP) and mean absolute error of prediction (MAEP), were used to evaluate the crop prediction capacity.
The results show that among the data-driven approaches, Random Forest is the most robust and generally achieves the best prediction error (MAEP 4.27%). It also outperforms our model-driven approach (MAEP 6.11%). However, the ability to calibrate the mechanistic model from easily accessible datasets offers several side perspectives: the mechanistic model can potentially help to underline the stresses suffered by the crop or to identify the biological parameters of interest for breeding purposes. For this reason, an interesting perspective is to combine these two types of approaches.
Keywords: crop yield prediction, crop model, sensitivity analysis, parameter estimation, particle swarm optimization, random forest
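The two error metrics and the 5-fold scheme described above can be sketched in a few lines of plain Python; this is an illustrative sketch, not the paper's pipeline (which used the USDA dataset and models such as Random Forest):

```python
import math

def rmsep(actual, predicted):
    """Root mean square error of prediction."""
    n = len(actual)
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)

def maep(actual, predicted):
    """Mean absolute error of prediction, expressed as a percentage of the
    actual value (consistent with errors such as 4.27% quoted above)."""
    n = len(actual)
    return 100.0 * sum(abs(a - p) / a for a, p in zip(actual, predicted)) / n

def kfold_indices(n, k=5):
    """Yield (train, test) index lists for k-fold cross-validation."""
    indices = list(range(n))
    fold_size = n // k
    for i in range(k):
        start = i * fold_size
        stop = start + fold_size if i < k - 1 else n
        yield indices[:start] + indices[stop:], indices[start:stop]
```

Each fold's model is fit on the train indices and scored on the test indices; the reported metric is the average over the k folds.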
Procedia PDF Downloads 232
239 Predicting Polyethylene Processing Properties Based on Reaction Conditions via a Coupled Kinetic, Stochastic and Rheological Modelling Approach
Authors: Kristina Pflug, Markus Busch
Abstract:
Being able to predict polymer properties and processing behavior based on the applied operating reaction conditions is one of the key challenges in modern polymer reaction engineering. Especially for cost-intensive processes with high safety requirements, such as the high-pressure polymerization of low-density polyethylene (LDPE), the need for simulation-based process optimization and product design is high. A multi-scale modelling approach was set up and validated via a series of high-pressure mini-plant autoclave reactor experiments. The approach starts with the numerical modelling of the complex reaction network of the LDPE polymerization, taking into consideration the actual reaction conditions. While this gives average product properties, the complex polymeric microstructure, including random short- and long-chain branching, is calculated via a hybrid Monte Carlo approach. Finally, the processing behavior of LDPE, i.e. its melt flow behavior, is determined as a function of the previously determined polymeric microstructure using the branch-on-branch algorithm for randomly branched polymer systems. All three steps of the multi-scale modelling approach can be independently validated against analytical data. A triple-detector GPC containing an IR, a viscosimetry and a multi-angle light scattering detector is applied. It serves to determine molecular weight distributions as well as chain-length-dependent short- and long-chain branching frequencies. 13C-NMR measurements give average branching frequencies, and rheological measurements in shear and extension serve to characterize the polymeric flow behavior. The agreement between experimental and modelled results was found to be extraordinary, especially considering that the applied multi-scale modelling approach does not involve fitting parameters to the data. This validates the suggested approach and proves its universality at the same time.
In the next step, the modelling approach can be applied to other reactor types, such as tubular reactors, or to industrial scale. Moreover, sensitivity analysis for systematically varied process conditions is easily feasible. The developed multi-scale modelling approach finally gives the opportunity to predict and design LDPE processing behavior simply based on process conditions such as feed streams and inlet temperatures and pressures.
Keywords: low-density polyethylene, multi-scale modelling, polymer properties, reaction engineering, rheology
Procedia PDF Downloads 125
238 European Hinterland and Foreland: Impact of Accessibility, Connectivity, Inter-Port Competition on Containerization
Authors: Dial Tassadit Rania, Figueiredo De Oliveira Gabriel
Abstract:
In this paper, we investigate the relationship between ports and their hinterland and foreland environments, and the competitive relationships between the ports themselves. These two environments are changing, evolving and introducing new challenges for commercial and economic development at the regional, national and international levels. Because of the rise of containerization, shipping costs and port handling costs have decreased considerably due to economies of scale, the volume of maritime trade has increased substantially, and the markets served by ports have expanded. On this basis, overlapping hinterlands can give rise to competition between ports. Our main contribution compared to the existing literature on this issue is to build a set of hinterland, foreland and competition indicators. Using these indicators, we investigate the effect of hinterland accessibility, foreland connectivity and inter-port competition on the containerized traffic of European ports. For this, we have a 10-year panel database covering 2004 to 2014. Our hinterland indicators are given by two measures of accessibility; they describe the market potential of a port and are calculated using information on population and wealth (GDP). We calculate population and wealth for different neighborhoods within a distance from a port ranging from 100 to 1000 km. For the foreland, we produce two indicators: port connectivity and the number of partners of each port. Finally, we compute two indicators of inter-port competition and a market concentration indicator (Hirschman-Herfindahl) for different neighborhood distances around the port. We then apply a fixed-effects model to test the relationships above, and with the same fixed-effects model we conduct a sensitivity analysis for each of these indicators to support the results obtained.
The econometric results of the general model, given by the regression of the accessibility indicators, the LSCI for port i, and the inter-port competition indicator on the containerized traffic of European ports, show a positive and significant effect for accessibility to wealth but not to population. The results are also positive and significant for the two indicators of connectivity and competition. One of the main results of this research is that port development, measured here by the increase in containerized traffic, is strongly related to the development of the port's hinterland and foreland environments. In addition, it is the market potential, given by the wealth of the hinterland, that has an impact on the containerized traffic of a port; accessibility to a large population pool is not important for understanding the dynamics of containerized port traffic. Furthermore, in order to continue to develop, a port must penetrate its hinterland to a depth exceeding 100 km around the port and seek markets beyond this perimeter. Port authorities could focus their marketing efforts on the immediate hinterland, which, as the results show, may not be captive, and thus engage new approaches to port governance to make it more attractive.
Keywords: accessibility, connectivity, European containerization, European hinterland and foreland, inter-port competition
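The Hirschman-Herfindahl concentration indicator mentioned above has a standard closed form, the sum of squared market shares; a minimal sketch with illustrative names (the study's exact share definition over neighborhood distances is not detailed in the abstract):

```python
def herfindahl_hirschman(port_traffics):
    """Hirschman-Herfindahl index of market concentration.

    Each port's share is its containerized traffic divided by the total
    for the neighbourhood; the index ranges from 1/n (traffic spread
    evenly over n ports) up to 1 (a single port handles everything).
    """
    total = sum(port_traffics)
    return sum((t / total) ** 2 for t in port_traffics)

# Four equally sized ports: 4 * (1/4)^2 = 0.25
```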
Procedia PDF Downloads 197
237 Management of Caverno-Venous Leakage: A Series of 133 Patients with Symptoms, Hemodynamic Workup, and Results of Surgery
Authors: Allaire Eric, Hauet Pascal, Floresco Jean, Beley Sebastien, Sussman Helene, Virag Ronald
Abstract:
Background: Caverno-venous leakage (CVL) is a devastating, though little-known, disease: it is the first cause of major physical impairment in men under 25 and is responsible for 50% of resistance to phosphodiesterase-5 inhibitors (PDE5-I), which affects 30 to 40% of users of this medication class. In this condition, too-early blood drainage from the corpora cavernosa prevents penile rigidity and penetration during sexual intercourse. The role of conservative surgery in this disease remains controversial. Aim: To assess the complications and results of combined open surgery and embolization for CVL. Method: Between June 2016 and September 2021, 133 consecutive patients underwent surgery in our institution for CVL causing severe erectile dysfunction (ED) resistant to oral medical treatment. Procedures combined vein embolization and ligation with microsurgical techniques. We performed pre- and post-operative clinical (Erection Hardness Score: EHS) and hemodynamic evaluation by duplex sonography in all patients. Before surgery, the CVL network was visualized by computed tomography cavernography. Penile EMG was performed in cases of diabetes or other suspected neurological conditions. All patients were optimized for hormonal status. Data were prospectively recorded. Results: Clinical signs suggesting CVL were ED since before age 25, loss of erection when changing position, and penile rigidity varying according to position. The main complications were minor pulmonary embolism in 2 patients (one after airline travel, one with a heterozygous Factor V Leiden mutation), one infection, three hematomas requiring reoperation, and one decrease in glans sensitivity lasting more than one year. Mean pre-operative pharmacologic EHS was 2.37+/-0.64, and mean post-operative pharmacologic EHS was 3.21+/-0.60, p<0.0001 (paired t-test). The mean EHS variation was 0.87+/-0.74. After surgery, 81.5% of patients had a pharmacologic EHS equal to or over 3, allowing for intercourse with penetration.
Three patients (2.2%) experienced a lower post-operative EHS. The main cause of failure was leakage from the deep dorsal aspect of the corpora cavernosa. At 14 months of follow-up, 83.2% of patients had a clinical EHS equal to or over 3, allowing for sexual intercourse with penetration, one-third of them without any medication. Five patients had a penile implant after unsuccessful conservative surgery. Conclusion: Open surgery combined with embolization is an efficient approach to CVL causing severe erectile dysfunction.
Keywords: erectile dysfunction, caverno-venous leakage, surgery, embolization, treatment, result, complications, penile duplex sonography
Procedia PDF Downloads 153
236 Occult Haemolacria Paradigm in the Study of Tears
Authors: Yuliya Huseva
Abstract:
Objective: To investigate the contents of tears for latent blood. Methods: Tear samples from 72 women were studied by microscopy of tears aspirated with a capillary and stained by Nocht, and by a chemical method using test strips with chromogen. Statistical processing was carried out using the Statistica 10.0 for Windows package, with calculation of Pearson's chi-square test and the Yule association coefficient, and determination of sensitivity and specificity. Results: Erythrocytes were revealed microscopically in 30.6% (22) of tear samples. A correlation between the presence of erythrocytes in tears and the phase of the menstrual cycle was discovered: in the follicular phase of the cycle, erythrocytes were found in 59.1% (13) of these women, significantly more (x2=4.2, p=0.041) than in the luteal phase, 40.9% (9). The predominance of erythrocytes in tears during the first seven days of the follicular phase of the menstrual cycle testifies in favour of vicarious bleeding from the mucous membranes of extragenital organs in sync with menstruation. Of the other cellular elements in tear samples with latent haemolacria, neutrophils prevailed (45.5%, 10) while lymphocytes were less common (27.3%, 6), because neutrophil exudation is accompanied by vasodilatation of the conjunctiva and the release of erythrocytes into the conjunctival cavity. The prognostic significance of the chemical method was found to be 0.53 that of the microscopic method: in contrast to microscopy, which detected blood in tear samples from 30.6% (22) of women, blood was detected chemically in the tears of 16.7% (12). An association between latent haemolacria and endometriosis was found (k=0.75, p≤0.05). Microscopically, erythrocytes were detected in the tears of 70% of patients with endometriosis, versus 25% of healthy women without endometriosis.
The proportion of women with erythrocytes in tears determined by the chemical method was 41.7% among patients with endometriosis, significantly more (x2=6.5, p=0.011) than the 11.7% among women without endometriosis. These data can be explained by the etiopathogenesis of extragenital endometriosis, which is caused by hematogenous spread of endometrial tissue into the orbit. In endometriosis, erythrocytes are found against a background of accumulations of epithelial cells. In the tear samples of 4 women with endometriosis, glandular cuboidal epithelial cells, morphologically similar to endometrial cells, were found, which may indicate a generalization of the disease. Conclusions: Single erythrocytes can normally be found in tears; their number depends on the phase of the menstrual cycle, increasing in the follicular phase. Erythrocytes found in tears against a background of accumulations of epitheliocytes with glandular atypia may indicate a manifestation of extragenital endometriosis. Both methods used (microscopic and chemical) are informative in revealing latent haemolacria. The microscopic method is more sensitive, reveals intact erythrocytes, and also provides information about other cells. At the same time, the chemical method is faster and technically simpler, determines the presence of haemoglobin and its metabolic products, and can be used as a screening test.
Keywords: tear, blood, microscopy, epitheliocytes
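For reference, the sensitivity and specificity measures used to compare the two detection methods reduce to simple ratios over a 2x2 confusion table; a sketch with hypothetical counts (not the study's data):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN): how often a truly positive sample
    (blood present) is detected. Specificity = TN / (TN + FP): how often
    a truly negative sample is correctly reported negative."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity
```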
Procedia PDF Downloads 121
235 Elevated Systemic Oxidative-Nitrosative Stress and Cerebrovascular Function in Professional Rugby Union Players: The Link to Impaired Cognition
Authors: Tom S. Owens, Tom A. Calverley, Benjamin S. Stacey, Christopher J. Marley, George Rose, Lewis Fall, Gareth L. Jones, Priscilla Williams, John P. R. Williams, Martin Steggall, Damian M. Bailey
Abstract:
Introduction and aims: Sports-related concussion (SRC) represents a significant and growing public health concern in rugby union, yet it remains one of the least understood injuries facing the health community today. Alongside increasing SRC incidence rates, there is concern that prior recurrent concussion may contribute to long-term neurologic sequelae in later life. This may be due to an accelerated decline in cerebral perfusion, a major risk factor for neurocognitive decline and neurodegeneration, though the underlying mechanisms remain to be established. The present study hypothesised that recurrent concussion in current professional rugby union players would result in elevated systemic oxidative-nitrosative stress, reflected by a free radical-mediated reduction in nitric oxide (NO) bioavailability, and in impaired cerebrovascular and cognitive function. Methodology: A longitudinal study design was adopted across the 2017-2018 rugby union season. Ethical approval was obtained from the University of South Wales Ethics Committee. Data collection is ongoing, and the current report therefore documents results from the pre-season and the first half of the in-season data collection. Participants were divided into two subgroups: 23 professional rugby union players (aged 26 ± 5 years) and 22 non-concussed controls (27 ± 8 years). Pre-season measurements were performed for cerebrovascular function (Doppler ultrasound of middle cerebral artery velocity (MCAv) in response to hypocapnia/normocapnia/hypercapnia), cephalic venous concentrations of the ascorbate radical (A•-, electron paramagnetic resonance spectroscopy) and NO (ozone-based chemiluminescence), and cognition (neuropsychometric tests). Notational analysis was performed to assess contact in the rugby group throughout each competitive game. Results: 1001 tackles and 62 injuries, including three concussions, were observed across the first half of the season.
However, no associations were apparent between the number of tackles and any injury type (P > 0.05). The rugby group showed greater oxidative stress, as indicated by increased A•- (P < 0.05 vs. control), and a corresponding decrease in NO bioavailability (P < 0.05 vs. control). The rugby group performed worse in the Rey Auditory Verbal Learning Test B (RAVLT-B; learning and memory) and the Grooved Pegboard test using both the dominant and non-dominant hands (visuomotor coordination; P < 0.05 vs. control). There were no between-group differences in cerebral perfusion at baseline (MCAv: 54 ± 13 vs. 59 ± 12, P > 0.05). Likewise, no between-group differences in CVRCO2Hypo (2.58 ± 1.01 vs. 2.58 ± 0.75, P > 0.05) or CVRCO2Hyper (2.69 ± 1.07 vs. 3.35 ± 1.28, P > 0.05) were observed. Conclusion: The present study identified that rugby union players are characterized by impaired cognitive function alongside elevated systemic oxidative-nitrosative stress, although this appears to be independent of any functional impairment in cerebrovascular function. Given the potential long-term trajectory towards accelerated cognitive decline in populations exposed to SRC, prophylaxis to increase NO bioavailability warrants consideration.
Keywords: cognition, concussion, mild traumatic brain injury, rugby
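Cerebrovascular reactivity (CVR) values such as those quoted above are typically computed as the percent change in MCAv per mmHg change in end-tidal CO2; a sketch of that common definition (the study's exact formula is not stated in the abstract, so this is an assumption):

```python
def cvr(mcav_baseline, mcav_stimulus, petco2_baseline, petco2_stimulus):
    """Cerebrovascular reactivity: percent change in middle cerebral artery
    velocity (MCAv, cm/s) per mmHg change in end-tidal CO2 (PetCO2).
    A commonly used definition; the study's precise formula may differ."""
    delta_mcav_pct = 100.0 * (mcav_stimulus - mcav_baseline) / mcav_baseline
    delta_petco2 = petco2_stimulus - petco2_baseline
    return delta_mcav_pct / delta_petco2

# Example: MCAv rising from 50 to 60 cm/s while PetCO2 rises by 8 mmHg
# gives a hypercapnic reactivity of 2.5 %/mmHg, comparable in magnitude
# to the group values reported above.
```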
Procedia PDF Downloads 178
234 Primary School Classroom Teachers' Methods of Coping with Stress, Psychological Health, and Locus of Control
Authors: Caglayan Pinar Demirtas, Mustafa Koc
Abstract:
Objective: This study was carried out in order to determine the methods used by primary school teachers to cope with stress, their psychological health, and their locus of control. The study was carried out using the 'school survey' and 'society survey' methods. Method: The study group was made up of 1066 primary school teachers, 511 women and 555 men, who volunteered to complete the data-collection instruments: 'the Scale for Attitude of Overcoming Stress' (SBTE / SAOS), 'Rotter's Scale for the Focus of Inner-Outer Control' (RİDKOÖ / RSFIOC), 'the Symptom Checking List' (SCL-90), and a personal information form developed by the researcher. The SPSS for Windows packet programme was used. Results: The age variable is a factor in the interpersonal sensitivity, depression, anxiety and hostility symptoms but not in the other symptoms. Gender is a factor in the emotional-practical escaping method of overcoming stress but not in the other methods: women use the emotional-practical escaping method more than men. Marital status is a factor in the methods of trusting in religion, emotional-practical escaping and biochemical escaping but not in the other methods: married teachers use the trusting-in-religion and emotional-practical escaping methods more than single ones, while single teachers generally use the biochemical escaping method. In the teachers' locus of control, gender is a factor: women are more inner controlled while men are more outer controlled.
Length of service is also a factor in locus of control: teachers with 1-5 years of service are more inner controlled than teachers with 16-20 years of service. Age is likewise a factor, with teachers in the 26-30 age group differing in control orientation from the other age groups. Locus of control is a factor in the primary school teachers' psychological health: being outer controlled is a factor, but being inner controlled is not. Among the methods used to cope with stress, trusting in religion, active planning and biochemical escaping act as factors in locus of control, but the others do not: outer controlled teachers prefer the methods of trusting in religion and active planning, while inner controlled ones prefer biochemical escaping.
Keywords: coping with stress, locus of control, psychological health, stress
Procedia PDF Downloads 352
233 Investigating Early Markers of Alzheimer’s Disease Using a Combination of Cognitive Tests and MRI to Probe Changes in Hippocampal Anatomy and Functionality
Authors: Netasha Shaikh, Bryony Wood, Demitra Tsivos, Michael Knight, Risto Kauppinen, Elizabeth Coulthard
Abstract:
Background: Effective treatment of dementia will require early diagnosis, before significant brain damage has accumulated. Memory loss is an early symptom of Alzheimer’s disease (AD). The hippocampus, a brain area critical for memory, degenerates early in the course of AD and comprises several subfields. In contrast to healthy aging, where CA3 and the dentate gyrus are the hippocampal subfields with the most prominent atrophy, in AD the CA1 and subiculum are thought to be affected early. Conventional clinical structural neuroimaging is not sufficiently sensitive to identify preferential atrophy in individual subfields. Here, we will explore the sensitivity of new magnetic resonance imaging (MRI) sequences designed to interrogate medial temporal regions as an early marker of Alzheimer’s. As a combination of tests is likely to predict early Alzheimer’s disease better than any single test, we examine the potential efficacy of such imaging alone and in combination with standard and novel cognitive tasks of hippocampal-dependent memory. Methods: 20 patients with mild cognitive impairment (MCI), 20 with mild-moderate AD and 20 age-matched healthy elderly controls (HC) are being recruited to undergo 3T MRI (with sequences designed to allow volumetric analysis of hippocampal subfields) and a battery of cognitive tasks (including Paired Associative Learning from CANTAB, the Hopkins Verbal Learning Test and a novel hippocampal-dependent abstract word memory task). AD participants and healthy controls are being tested just once, whereas patients with MCI will be tested twice, a year apart. We will compare subfield size between groups and correlate subfield size with cognitive performance on our tasks. In the MCI group, we will explore the relationship between subfield volume, cognitive test performance and deterioration in clinical condition over a year.
Results: Preliminary data (currently on 16 participants: 2 AD; 4 MCI; 9 HC) have revealed subfield size differences between subject groups. Patients with AD perform with less accuracy on tasks of hippocampal-dependent memory, and MCI patient performance and reaction times also differ from those of healthy controls. With further testing, we hope to delineate how subfield-specific atrophy corresponds with changes in cognitive function, and to characterise how this progresses over the time course of the disease. Conclusion: Novel sequences on an MRI scanner, such as those en route to clinical use, can be used to delineate hippocampal subfields in patients with and without dementia. Preliminary data suggest that such subfield analysis, perhaps in combination with cognitive tasks, may provide an early marker of AD.
Keywords: Alzheimer's disease, dementia, memory, cognition, hippocampus
Procedia PDF Downloads 573
232 Role of Indigenous Peoples in Climate Change
Authors: Neelam Kadyan, Pratima Ranga, Yogender
Abstract:
Indigenous peoples are among those most affected by climate change, although they have contributed little to its causes. This is largely a result of their historic dependence on local biological diversity, ecosystem services and cultural landscapes as a source of sustenance and well-being. Comprising only four percent of the world’s population, they utilize 22 percent of the world’s land surface. Despite their high exposure-sensitivity, indigenous peoples and local communities are actively responding to changing climatic conditions and have demonstrated their resourcefulness and resilience in the face of climate change. Traditional indigenous territories encompass up to 22 percent of the world’s land surface and coincide with areas that hold 80 percent of the planet’s biodiversity. The greatest diversity of indigenous groups coincides with the world’s largest tropical forest wilderness areas in the Americas (including the Amazon), Africa and Asia, and 11 percent of world forest lands are legally owned by indigenous peoples and communities. This convergence of biodiversity-significant areas and indigenous territories presents an enormous opportunity to expand efforts to conserve biodiversity beyond parks, which tend to receive most of the funding for biodiversity conservation. Tapping ancestral knowledge: Indigenous peoples are carriers of ancestral knowledge and wisdom about this biodiversity. Their effective participation in biodiversity conservation programs, as experts in protecting and managing biodiversity and natural resources, would result in more comprehensive and cost-effective conservation and management of biodiversity worldwide. Addressing the climate change agenda: Indigenous peoples have played a key role in climate change mitigation and adaptation. The territories of indigenous groups who have been given rights to their lands have been better conserved than adjacent lands (e.g., in Brazil, Colombia and Nicaragua).
Preserving large extensions of forest would not only support climate change objectives but would also respect the rights of indigenous peoples and conserve biodiversity. A climate change agenda fully involving indigenous peoples has many more benefits than one involving only government and/or the private sector. Indigenous peoples are among the groups most vulnerable to the negative effects of climate change, yet they are also a source of knowledge for many of the solutions that will be needed to avoid or ameliorate those effects. For example, ancestral territories often provide excellent examples of landscape designs that can resist the negative effects of climate change. Over the millennia, indigenous peoples have developed adaptation models to climate change, as well as genetic varieties of medicinal and useful plants and animal breeds with a wider natural range of resistance to climatic and ecological variability.
Keywords: ancestral knowledge, cost effective conservation, management, indigenous peoples, climate change
Procedia PDF Downloads 678
231 Forecasting Residential Water Consumption in Hamilton, New Zealand
Authors: Farnaz Farhangi
Abstract:
Many people in New Zealand believe that access to water is inexhaustible, a belief that comes from a history of virtually unrestricted access to it. For a region like Hamilton, one of New Zealand’s fastest-growing cities, it is crucial for policy makers to know about future water consumption and the implementation of rules and regulations such as universal water metering. Hamilton residents use water freely and have little idea of how much water they use. Hence, one objective of this research is forecasting water consumption using different methods. Residential water consumption time series exhibit seasonal and trend variations. Seasonality is the pattern caused by repeating events such as weather conditions in summer and winter, public holidays, etc. The problem with this seasonal fluctuation is that it dominates the other time series components and makes it difficult to determine other variations (such as the effect of educational campaigns, regulation, etc.) in the time series. Apart from seasonality, a stochastic trend is also combined with seasonality and affects the forecasting results. According to the forecasting literature, preprocessing (de-trending and de-seasonalization) is essential for better-performing forecasts, while other researchers argue that seasonally non-adjusted data should be used. Hence, I address the question: is pre-processing essential? A wide range of forecasting methods exists, each with different pros and cons. In this research, I apply double seasonal ARIMA and an Artificial Neural Network (ANN), considering diverse elements such as seasonality and calendar effects (public and school holidays), and combine their results to find the best predicted values. My hypothesis concerns the examination of the results of the combined method (hybrid model) against the individual methods, comparing their accuracy and robustness. In order to use ARIMA, the data should be stationary.
ANN also has successful forecasting applications for seasonal and trend time series. Using a hybrid model is a way to improve the accuracy of the individual methods. Because water demand is dominated by different seasonalities, I combine different methods in order to assess their sensitivity to weather conditions, calendar effects and other seasonal patterns. The advantage of this combination is a reduction of errors through averaging across the individual models. It is also useful when we are not sure about the accuracy of each forecasting model, and it can ease the problem of model selection. Using daily residential water consumption data from January 2000 to July 2015 in Hamilton, I show how predictions by the different methods vary. ANN gives more accurate forecasts than the other methods, and preprocessing is essential when we use seasonal time series. Using the hybrid model reduces average forecasting errors and increases performance.
Keywords: artificial neural network (ANN), double seasonal ARIMA, forecasting, hybrid model
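The combining step of such a hybrid model, averaging the forecasts of the individual models, can be sketched as follows (model names and the equal-weight choice are illustrative; the study's exact combination scheme is not detailed in the abstract):

```python
def hybrid_forecast(forecasts, weights=None):
    """Combine per-model forecast series by (weighted) averaging.

    `forecasts` is a list of equal-length sequences, one per model
    (e.g. double-seasonal ARIMA and ANN predictions); equal weights
    are used by default.
    """
    n_models = len(forecasts)
    if weights is None:
        weights = [1.0 / n_models] * n_models
    horizon = len(forecasts[0])
    return [sum(w * series[t] for w, series in zip(weights, forecasts))
            for t in range(horizon)]

# Averaging an ARIMA series [100, 110] with an ANN series [120, 130]
# yields [110.0, 120.0].
```

Averaging cancels part of each model's idiosyncratic error, which is why the combination can outperform either model alone.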
Procedia PDF Downloads 339
230 The Role of Personality Traits and Self-Efficacy in Shaping Teaching Styles: Insights from Indian Higher Education Faculty
Authors: Pritha Niraj Arya
Abstract:
Education plays a crucial role in societal evolution by promoting economic expansion and creativity. The varied demands of students in India’s higher education setting call for inclusive and efficient teaching methods. The present study examined how teaching styles, self-efficacy, and personality traits interact among Indian higher education faculty members and how these factors collectively affect pedagogical practices. Specifically, the research explored differences in personality traits (agreeableness, conscientiousness, neuroticism, openness, and extraversion) between teachers with high and low self-efficacy and examined how these traits shape teaching strategies, whether student-focused or teacher-focused. Data collection took place over three months, ensuring confidentiality and ethical compliance. A total of 268 faculty members from Indian higher education institutions participated in this comparative study. An online questionnaire was used to gather data, in which participants completed three well-established tools: the Approaches to Teaching Inventory, which measures teaching styles; the Teacher Self-Efficacy Questionnaire, which measures self-efficacy levels; and the Big Five Inventory, which measures personality traits. The results showed that while teachers with low self-efficacy had higher levels of neuroticism, those with high self-efficacy scored much higher on traits such as agreeableness, conscientiousness, openness, and extraversion. Despite the traditional belief that high self-efficacy is associated only with student-focused teaching, the findings suggest that teachers with high self-efficacy have cognitive flexibility, which enables them to skillfully use both teacher-focused and student-focused approaches to cater to a wide range of classroom needs. Teachers with low self-efficacy, on the other hand, are less flexible and adopt fewer strategies in their teaching practice.
The findings challenge simplistic associations between self-efficacy and teaching strategies, emphasising that high self-efficacy promotes adaptability rather than a fixed preference for specific teaching methods. This adaptability is crucial in India’s diverse educational settings, where teachers must balance standardised curricula with the varied learning needs of students. This study highlights the importance of integrating personality traits and self-efficacy into teacher training programs. By promoting self-efficacy and tailoring professional development to individual personality traits, institutions can enhance teachers’ flexibility, thereby improving student engagement and learning outcomes. These findings have practical implications for teacher education, suggesting that fostering cognitive flexibility among teachers can improve instructional quality and classroom dynamics. To gain a deeper understanding of how personality traits and self-efficacy impact teaching practices over time, future research should investigate causal relationships using longitudinal studies. Examining external factors such as institutional policies, availability of resources, and cultural settings will also help to clarify the dynamics at play. Furthermore, this study emphasises the need to strike a balance between teacher-focused and student-focused approaches to provide a comprehensive education that covers both conceptual understanding and the delivery of key information. It offers insights into how the Indian educational system is changing and how effective teaching techniques are becoming increasingly important for achieving global standards. The study thereby promotes the larger objective of educational excellence by exploring the interaction of internal and external factors affecting teaching styles and by providing practical policy and practice recommendations.
Keywords: higher education, personality traits, self-efficacy, teaching styles
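The high- versus low-self-efficacy group comparisons reported above are the kind of analysis a two-sample test performs. The sketch below computes a Welch’s t statistic on invented trait scores, purely to illustrate the comparison; it is not the study’s data, and the study does not specify this exact statistical procedure:

```python
# Hedged sketch: Welch's t statistic (unequal-variance two-sample test)
# for comparing a personality-trait score between high- and
# low-self-efficacy groups. All scores below are invented.

from math import sqrt

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    """Unbiased sample variance (n - 1 denominator)."""
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def welch_t(group_a, group_b):
    """Welch's t statistic for two independent samples."""
    na, nb = len(group_a), len(group_b)
    se = sqrt(variance(group_a) / na + variance(group_b) / nb)
    return (mean(group_a) - mean(group_b)) / se

# Invented agreeableness scores (1-5 scale) for two teacher groups.
high_se = [4.2, 4.5, 4.1, 4.4, 4.3, 4.6]
low_se  = [3.6, 3.9, 3.5, 3.8, 3.7, 3.4]
print(round(welch_t(high_se, low_se), 2))
```

A large positive t value here would indicate that the high-self-efficacy group scores reliably higher on the trait, which is the pattern the abstract reports for agreeableness, conscientiousness, openness, and extraversion.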
Procedia PDF Downloads 14
229 Gender Policies and Political Culture: An Examination of the Canadian Context
Authors: Chantal Maille
Abstract:
This paper is about gender-based analysis plus (GBA+), an intersectional gender policy used in Canada to assess the impact of policies and programs on men and women of different origins. It looks at Canada’s political culture to explain the nature of its gender policies. GBA+ is defined as an analysis method that makes it possible to assess the eventual effects of policies, programs, services, and other initiatives on women and men of different backgrounds, because it takes account of gender and other identity factors. The ‘plus’ in the name serves to emphasize that GBA+ goes beyond gender to include an examination of a wide range of other identity factors, such as age, education, language, geography, culture, and income. The point of departure for GBA+ is that women and men are not homogeneous populations and gender is never the only factor in defining a person’s identity; rather, it interacts with factors such as ethnic origin, age, disabilities, place of residence, and other aspects of individual and social identity. GBA+ takes account of these factors and thus challenges notions of similarity or homogeneity within populations of women and men. Comparative analysis based on sex and gender may serve as a gateway to studying a given question, but women, men, girls, and boys do not form homogeneous populations. In the 1990s, intersectionality emerged as a new feminist framework. The popularity of the notion of intersectionality corresponds to a time when, in hindsight, the damage done to minoritized groups by state disengagement policies, in concert with the global intensification of neoliberalism, could be measured. Although GBA+ constitutes a form of intersectionalization of GBA, it must be understood that the two frameworks do not spring from a similar logic.
Intersectionality first emerged as a dynamic analysis of differences between women, oriented toward change and social justice, whereas GBA is a technique developed by state feminists in the context of analyzing governmental policies with the aim of promoting equality between men and women. It can nevertheless be assumed that there might be interest in a policy and program analysis grid that is decentred from gender and flexible enough to take account of a set of inequalities. In terms of methodology, the research is supported by a qualitative analysis of governmental documents about GBA+ in Canada. Research findings identify links between Canadian gender policies and Canada’s political culture. In Canada, diversity has been taken into account as a basis for gendered analysis of public policies since 1995. The GBA+ adopted by the government of Canada conveys an openness to intersectionality and a sensitivity to multiculturalism. The Canadian Multiculturalism Act, adopted in 1988, recognizes that multiculturalism is a fundamental characteristic of Canadian identity and heritage and constitutes an invaluable resource for the future of the country. In conclusion, Canada’s distinct political culture can be associated with the specific nature of its gender policies.
Keywords: Canada, gender-based analysis, gender policies, political culture
Procedia PDF Downloads 224
228 Bi-objective Network Optimization in Disaster Relief Logistics
Authors: Katharina Eberhardt, Florian Klaus Kaiser, Frank Schultmann
Abstract:
Last-mile distribution is one of the most critical parts of a disaster relief operation. Various uncertainties, such as infrastructure conditions, resource availability, and fluctuating beneficiary demand, render last-mile distribution challenging in disaster relief operations. The need to balance critical performance criteria such as response time, demand fulfillment, and cost-effectiveness further complicates the task. The occurrence of disasters cannot be controlled, and their magnitude is often challenging to assess. In summary, these uncertainties create a need for additional flexibility, agility, and preparedness in logistics operations. As a result, strategic planning and efficient network design are critical for an effective and efficient response. Furthermore, the increasing frequency of disasters and the rising cost of logistical operations amplify the need for robust and resilient solutions in this area. Therefore, we formulate a scenario-based bi-objective optimization model that integrates pre-positioning, allocation, and distribution of relief supplies, extending the general form of a covering location problem. The proposed model aims to minimize the underlying logistics costs while maximizing demand coverage. Using a set of disruption scenarios, the model allows decision-makers to identify optimal network solutions to address the risk of disruptions. We provide an empirical case study of the public authorities’ emergency food storage strategy in Germany to illustrate the potential applicability of the model and provide implications for decision-makers in a real-world setting. We also conduct a sensitivity analysis focusing on the impact of varying stockpile capacities, single-site outages, and limited transportation capacities on the objective value. The results show that the stockpiling strategy needs to be consistent with the optimal number of depots and inventory based on minimizing costs and maximizing demand satisfaction.
The strategy has potential for optimization, as network coverage is insufficient and relies on very high transportation and personnel capacity levels. As such, the model provides decision support for public authorities to determine an efficient stockpiling strategy and distribution network, and it yields recommendations for increased resilience. However, certain factors have yet to be considered in this study and should be addressed in future work, such as additional network constraints and heuristic solution algorithms.
Keywords: humanitarian logistics, bi-objective optimization, pre-positioning, last mile distribution, decision support, disaster relief networks
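The bi-objective trade-off between logistics cost and demand coverage can be illustrated with a toy weighted-sum scalarization, brute-forced over a tiny invented instance. This is only a sketch of the covering-location structure, not the study’s scenario-based formulation or its data:

```python
# A toy weighted-sum scalarization of the bi-objective trade-off
# (minimize cost, maximize covered demand), brute-forced over every
# subset of three invented candidate depots.

from itertools import combinations

open_cost = {"A": 4, "B": 3, "C": 5}                  # invented opening costs
covers = {"A": {1, 2}, "B": {2, 3}, "C": {1, 3, 4}}   # demand points served
demand = {1: 10, 2: 20, 3: 15, 4: 5}                  # invented demand weights

def evaluate(depots):
    """Return (total opening cost, covered demand) for opened depots."""
    cost = sum(open_cost[d] for d in depots)
    served = set().union(*(covers[d] for d in depots)) if depots else set()
    return cost, sum(demand[p] for p in served)

def best_solution(weight_cost=1.0, weight_cov=1.0):
    """Pick the subset maximizing weighted coverage minus weighted cost."""
    best, best_score = (), float("-inf")
    for r in range(len(open_cost) + 1):
        for depots in combinations(open_cost, r):
            cost, cov = evaluate(depots)
            score = weight_cov * cov - weight_cost * cost
            if score > best_score:
                best, best_score = depots, score
    return best, evaluate(best)

print(best_solution())
```

Sweeping the two weights traces out the cost-coverage trade-off; in a realistic model the enumeration is replaced by a mixed-integer solver over disruption scenarios.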
Procedia PDF Downloads 80
227 Evolutionary Analysis of Influenza A (H1N1) Pdm 09 in Post Pandemic Period in Pakistan
Authors: Nazish Badar
Abstract:
In early 2009, the pandemic influenza A (H1N1) virus emerged globally. Since then, it has continued to circulate, causing considerable morbidity and mortality. The purpose of this study was to evaluate the evolutionary changes in influenza A(H1N1)pdm09 viruses from 2009 to 2015 and their relevance to the current vaccine viruses. Methods: Respiratory specimens were collected from patients with influenza-like illness and severe acute respiratory illness. Samples were processed according to the CDC protocol. Sequencing and phylogenetic analysis of the haemagglutinin (HA) and neuraminidase (NA) genes were carried out on representative isolates of Pakistan viruses. Results: Between January 2009 and February 2016, 1,870 of 14,086 samples (13.2%) were positive for influenza A. During the pandemic period (2009–10), influenza A(H1N1)pdm09 was the dominant strain, with 366 (45%) of total influenza positives. In the post-pandemic period (2011–2016), a total of 1,066 (59.6%) cases were positive for influenza A(H1N1)pdm09, with co-circulation of different influenza A subtypes. Overall, the Pakistan A(H1N1)pdm09 viruses grouped into two genetic clades. Influenza A(H1N1)pdm09 viruses were ascribed only to clade 7 during the pandemic period, whereas viruses belonged to clade 7 (2011) and clade 6B (2015) during the post-pandemic years. Amino acid analysis of the HA gene revealed mutations at positions S220T, I338V, and P100S, especially associated with outbreaks, in all the analyzed strains. Sequence analyses of post-pandemic A(H1N1)pdm09 viruses showed additional substitutions at antigenic sites, S179N, K180Q (SA), D185N, D239G (CA), S202A (SB), and at receptor binding sites, A13T, S200P, when compared with the pandemic period. Substitutions at the genetic markers A273T (69%), S200P/T (15%), and D239G (7.6%), associated with severity, and E391K (69%), associated with virulence, were identified in viruses isolated during 2015.
Analysis of the NA gene revealed the outbreak markers V106I (23%) among pandemic and N248D (100%) among post-pandemic Pakistan viruses. Additional N-glycosylation sites, HA S179N (23%), NA I23T (7.6%), and N44S (77%) in place of N386K (77%), were found only in post-pandemic viruses. All isolates showed histidine (H) at position 275 in NA, indicating sensitivity to neuraminidase inhibitors. Conclusion: This study shows that the influenza A(H1N1)pdm09 viruses from Pakistan clustered into two genetic clades, with co-circulation of some variants. Certain key substitutions in the receptor binding site and a few changes indicative of virulence were also detected in post-pandemic strains. Therefore, it is imperative to continue monitoring these viruses for early identification of potential variants of high virulence or the emergence of drug-resistant variants.
Keywords: influenza A(H1N1)pdm09, evolutionary analysis, post-pandemic period, Pakistan
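Substitution notation such as S220T (reference residue, position, observed residue) can be derived mechanically from aligned sequences. The sketch below uses short invented fragments, not the study’s actual HA/NA sequences, which are full-length and would first need alignment:

```python
# Hedged sketch: reporting amino acid substitutions between two aligned
# protein sequences in the "<ref><position><query>" notation used above
# (e.g. S220T). The sequence fragments below are invented.

def substitutions(reference, query, start=1):
    """List substitutions between two aligned sequences, 1-based by default."""
    assert len(reference) == len(query), "sequences must be aligned"
    return [
        f"{r}{i}{q}"
        for i, (r, q) in enumerate(zip(reference, query), start=start)
        if r != q
    ]

ref   = "MKAILSVLLF"   # invented reference fragment
query = "MKTILSVLMF"   # invented query with two changes

print(substitutions(ref, query))
```

Real analyses additionally handle gaps from the alignment and use a numbering convention tied to the mature protein, which is why positions like 220 do not simply equal the raw index in the coding sequence.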
Procedia PDF Downloads 208
226 Coping with Incompatible Identities in Russia: Case of Orthodox Gays
Authors: Siuzan Uorner
Abstract:
The era of late modernity is characterized, on the one hand, by social disintegration and by values of personal freedom, tolerance, and self-expression. Boundaries between the accessible and the elitist, the normal and the abnormal, are blurring. On the other hand, traditional social institutions such as religion (especially the Russian Orthodox Church) persist, criticizing lifestyles and worldviews other than conventionally structured canons. Despite the declared values and opportunities of late modern society, people's freedom is ambivalent. Personal identity and its aspects are becoming a subject of choice. Hence, combinations of identity aspects can be incompatible. Our theoretical framework is based on P. Ricoeur's concept of narrative identity and hermeneutics, E. Goffman’s theory of social stigma, self-presentation, and discrepant roles, and W. James's lectures on the varieties of religious experience. This paper aims to reconstruct the ways in which Orthodox gays cope with incompatible identities (an extreme sampling of a combination of sexual orientation and religious identity in a heteronormative society). The study focuses on the discourse of Orthodox gay parishioners and ROC gay priests in Russia (sampling ‘hard to reach’ populations because of the secrecy of the gay community in the ROC and the sensitivity of the topic itself). We employed a qualitative research design, using in-depth, semi-structured personal online interviews. Recruiting of informants took place on the 'Nuntiare et Recreare' (a Russian movement of religious LGBT people) page on VKontakte, through a post inviting participation in the research. In this work, we analyzed interview transcripts using axial coding. We chose the grounded theory methodology to construct a theory from empirical data and contribute to the growing body of knowledge on ways of harmonizing incompatible identities in late modern societies.
The research found that there are two types of conflict Orthodox gays encounter: canonical contradictions (postulates of Scripture and its interpretations) and problems in social interaction, mainly with ROC priests and Orthodox parishioners. We revealed the semantic meanings of the most commonly used words that appear in the narratives (words such as ‘love’, ‘sin’, ‘religion’, etc.). Finally, we reconstructed biographical patterns of involvement in LGBT social movements. This paper argues that all incompatibilities are harmonized in the narrative itself. As Ricoeur suggested, the narrative configuration allows the speaker to gather facts and events together and to compose causal relationships between them. Sexual orientation and religious identity come together and are harmonized in the narrative.
Keywords: gay priests, incompatible identities, narrative identity, Orthodox gays, religious identity, ROC, sexual orientation
Procedia PDF Downloads 138
225 Strategic Interventions to Combat Socio-economic Impacts of Drought in Thar - A Case Study of Nagarparkar
Authors: Anila Hayat
Abstract:
Pakistan is one of those developing countries that contribute least to emissions but face some of the most vulnerable environmental conditions. Pakistan is ranked 8th among the countries most affected by climate change on the Climate Risk Index 1992–2011. The country is facing severe water shortages and flooding as a result of changes in rainfall patterns, specifically in the least developed areas such as Tharparkar. Nagarparkar, once an attractive tourist spot in Tharparkar because of its tropical desert climate, has now been facing severe drought conditions for the last few decades. This study investigates the present socio-economic situation of local communities, the major impacts of droughts and their underlying causes, and the current mitigation strategies adopted by local communities. The study uses both secondary (quantitative) and primary (qualitative) methods to understand the impacts and explore the causes affecting the socio-economic life of local communities in the study area. The relevant data were collected through household surveys using structured questionnaires, focus groups, and in-depth interviews with key personnel from local and international NGOs to explore the sensitivity of impacts and adaptation to droughts in the study area. The investigation is limited to four rural communities of union council Pilu of Nagarparkar district, including the Bheel, BhojaBhoon, Mohd Rahan Ji Dhani, and Yaqub Ji Dhani villages. The results indicate that drought has caused significant economic and social hardship for the local communities, as more than 60% of the overall population depends on rainfall, which has been disturbed by irregular rainfall patterns. The decline in crop yields has forced the local community to migrate to nearby areas in search of livelihood opportunities.
Communities have not undertaken any appropriate adaptive actions to counteract the adverse effects of drought; they are completely dependent on support from the government and external aid for survival. Respondents also reported that poverty is a major cause of their vulnerability to drought. An increase in population, limited livelihood opportunities, the caste system, lack of interest from the government sector, and lack of awareness have shaped their vulnerability to drought and other social issues. Based on the findings of this study, it is recommended that the local authorities create awareness about drought hazards and improve the resilience of communities against drought. It is further suggested to develop, introduce, and implement water harvesting practices at the community level and to promote drought-resistant crops.
Keywords: migration, vulnerability, awareness, drought
Procedia PDF Downloads 135
224 Investigation of Mass Transfer for RPB Distillation at High Pressure
Authors: Amiza Surmi, Azmi Shariff, Sow Mun Serene Lock
Abstract:
In recent decades, there has been a significant emphasis on the pivotal role of Rotating Packed Beds (RPBs) in absorption processes, encompassing the removal of Volatile Organic Compounds (VOCs) from groundwater, deaeration, CO2 absorption, desulfurization, and similar critical applications. The primary focus is on elevating mass transfer rates, enhancing separation efficiency, curbing power consumption, and mitigating pressure drops. Additionally, substantial effort has been invested in exploring the adaptation of RPB technology for offshore deployment. This study delves into the intricacies of nitrogen removal under low-temperature and high-pressure conditions, employing the high-gravity principle via an innovative RPB distillation concept, with a specific emphasis on optimizing mass transfer. To the authors' knowledge, no cryogenic experimental testing of nitrogen removal via RPB has previously been conducted. The research identifies pivotal process control factors through meticulous experimental testing, with pressure, reflux ratio, and reboil ratio emerging as critical determinants in achieving the desired separation performance. The results are remarkable, with nitrogen reduced to less than one mole% in the Liquefied Natural Gas (LNG) product and less than three mole% methane in the nitrogen-rich gas stream. The study further unveils the mass transfer coefficient, revealing a noteworthy trend of decreasing Number of Transfer Units (NTU) and Area of Transfer Units (ATU) as the rotational speed escalates. Notably, the condenser and reboiler impose varying demands depending on the operating pressure, with the lower pressure of 12 bar requiring a more substantial duty than 15-bar operation of the RPB. In pursuit of optimal energy efficiency, a sensitivity analysis is conducted, pinpointing the combination of pressure and rotating speed that minimizes overall energy consumption.
These findings underscore the effectiveness of the RPB distillation approach in achieving efficient separation, even when operating under the challenging conditions of low temperature and high pressure. This achievement is attributed to a rigorous process control framework that diligently manages the operational pressure and temperature profile of the RPB. Nonetheless, the study's conclusions point towards the need for further research to address potential scaling challenges and associated risks, paving the way for the industrial implementation of this transformative technology.
Keywords: mass transfer coefficient, nitrogen removal, liquefaction, rotating packed bed
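NTU values like those discussed above are commonly obtained by integrating the inverse driving force over the composition change across the bed. The sketch below approximates that integral numerically under an assumed driving-force profile; the compositions and the constant-gap assumption are illustrative, not the study's measured RPB data:

```python
# Numerical sketch of an NTU estimate: NTU = integral of dy / (y* - y)
# from the outlet to the inlet gas composition, approximated with the
# trapezoidal rule. The driving-force profiles below are assumptions
# for illustration, not measured RPB data.

def ntu(y_in, y_out, driving_force, steps=1000):
    """Trapezoidal approximation of NTU = integral dy / (y* - y).

    driving_force(y) must return the local gap (y* - y) > 0.
    """
    h = (y_in - y_out) / steps
    total = 0.0
    for i in range(steps):
        y_a = y_out + i * h
        y_b = y_a + h
        total += h * (1 / driving_force(y_a) + 1 / driving_force(y_b)) / 2
    return total

# With an assumed constant gap of 0.02 mole fraction, the integral
# reduces to (y_in - y_out) / gap = (0.05 - 0.01) / 0.02 = 2 transfer units.
print(round(ntu(0.05, 0.01, lambda y: 0.02), 3))
```

Dividing a characteristic bed dimension by the NTU then gives the per-transfer-unit quantity (HTU in conventional columns, ATU in RPBs), which is how a decreasing NTU at higher rotational speed translates into a mass transfer trend.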
Procedia PDF Downloads 54
223 Oxalate Method for Assessing the Electrochemical Surface Area for Ni-Based Nanoelectrodes Used in Formaldehyde Sensing Applications
Authors: S. Trafela, X. Xu, K. Zuzek Rozman
Abstract:
In this study, we used an accurate and precise method to measure the electrochemically active surface areas (Aecsa) of nickel electrodes. The calculated Aecsa is important for evaluating an electrocatalyst's activity in the electrochemical reactions of different organic compounds. The method involves the electrochemical formation of Ni(OH)₂ and NiOOH in the presence of adsorbed oxalate in alkaline media. The studies were carried out using cyclic voltammetry with polycrystalline nickel as a reference material and with electrodeposited nickel nanowires and homogeneous and heterogeneous nickel films. From the cyclic voltammograms, the charge (Q) values for the formation of Ni(OH)₂ and NiOOH surface oxides were calculated under various conditions. At sufficiently fast potential scan rates (200 mV s⁻¹), the adsorbed oxalate limits the growth of the surface hydroxides to a monolayer. Although the Ni(OH)₂/NiOOH oxidation peak overlaps with the oxygen evolution reaction, in the reverse scan the NiOOH/Ni(OH)₂ reduction peak is well separated from other electrochemical processes and can be easily integrated. The values of these integrals were used to correlate the experimentally measured charge density with the electrochemically active surface layer. The Aecsa values of the nickel nanowires and the homogeneous and heterogeneous nickel films were calculated to be Aecsa-NiNWs = 4.2066 ± 0.0472 cm², Aecsa-homNi = 1.7175 ± 0.0503 cm², and Aecsa-hetNi = 2.1862 ± 0.0154 cm². These results were then used in electrochemical studies of formaldehyde oxidation. The nickel nanowires and the heterogeneous and homogeneous nickel films were used as simple and efficient sensors for formaldehyde detection. For this purpose, the electrodeposited nickel electrodes were modified in a 0.1 mol L⁻¹ solution of KOH to promote electrochemical activity towards formaldehyde.
The electrochemical behavior of formaldehyde oxidation in 0.1 mol L⁻¹ NaOH solution at the surface of the modified nickel nanowires and homogeneous and heterogeneous nickel films was investigated by means of electrochemical techniques such as cyclic voltammetry and chronoamperometry. From investigations of the effect of different formaldehyde concentrations (from 0.001 to 0.1 mol L⁻¹) on the current signal, we derived the catalytic mechanism of formaldehyde oxidation and determined the detection limit and sensitivity of the nickel electrodes. The results indicated that nickel electrodes participate directly in the electrocatalytic oxidation of formaldehyde. In the overall reaction, formaldehyde in alkaline aqueous solution exists predominantly in the form of CH₂(OH)O⁻, which is oxidized to CH₂(O)O⁻. Taking into account the determined Aecsa values, we were able to calculate the sensitivities: 7 mA mol L⁻¹ cm⁻² for the nickel nanowires, 3.5 mA mol L⁻¹ cm⁻² for the heterogeneous nickel film, and 2 mA mol L⁻¹ cm⁻² for the homogeneous nickel film. The detection limit was 0.2 mM for the nickel nanowires, 0.5 mM for the porous Ni film, and 0.8 mM for the homogeneous Ni film. All of these results make nickel electrodes suitable for further applications.
Keywords: electrochemically active surface areas, nickel electrodes, formaldehyde, electrocatalytic oxidation
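The two normalizations described above, converting an integrated reduction-peak charge into an Aecsa and converting a current-concentration slope into an area-normalized sensitivity, can be sketched as follows. All numbers, including the 257 µC cm⁻² monolayer reference charge density, are assumptions for illustration, not values from the study:

```python
# Sketch of the two normalizations described above: converting an
# integrated reduction-peak charge into an Aecsa via an assumed monolayer
# reference charge density, then normalizing a current-vs-concentration
# slope by that area to obtain a sensitivity. All numbers, including the
# 257 uC/cm^2 reference density, are illustrative assumptions.

def aecsa_from_charge(q_coulomb, q_ref_uc_per_cm2):
    """Aecsa in cm^2 from peak charge (C) and reference density (uC/cm^2)."""
    return q_coulomb / (q_ref_uc_per_cm2 * 1e-6)

def sensitivity(concentrations, currents_ma, aecsa_cm2):
    """Least-squares slope of current (mA) vs concentration, per unit area."""
    n = len(concentrations)
    mx = sum(concentrations) / n
    my = sum(currents_ma) / n
    num = sum((x - mx) * (y - my) for x, y in zip(concentrations, currents_ma))
    den = sum((x - mx) ** 2 for x in concentrations)
    return (num / den) / aecsa_cm2

area = aecsa_from_charge(1.08e-3, 257.0)   # invented 1.08 mC peak charge
conc = [0.001, 0.01, 0.05, 0.1]            # mol/L
curr = [0.03, 0.30, 1.50, 3.00]            # mA, a perfectly linear toy response
print(round(area, 2), round(sensitivity(conc, curr, area), 1))
```

Normalizing by Aecsa rather than the geometric area is what makes sensitivities comparable between electrodes of very different morphology, such as nanowires versus films.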
Procedia PDF Downloads 162
222 Calibration of Residential Buildings Energy Simulations Using Real Data from an Extensive in situ Sensor Network – A Study of Energy Performance Gap
Authors: Mathieu Bourdeau, Philippe Basset, Julien Waeytens, Elyes Nefzaoui
Abstract:
As residential buildings account for a third of the overall energy consumption and greenhouse gas emissions in Europe, building energy modeling is an essential tool for reaching energy efficiency goals. In the energy modeling process, calibration is a mandatory step to obtain accurate and reliable energy simulations. Nevertheless, the comparison between simulation results and the actual building energy behavior often highlights a significant performance gap. The literature discusses different origins of energy performance gaps, from building design to building operation. The description of building operation in energy models, especially energy usages and users' behavior, plays an important role in the reliability of simulations but is also the most accessible target for post-occupancy energy management and optimization. Therefore, the present study discusses results on the calibration of residential building energy models using real operation data. Data are collected through a network of more than 180 sensors and advanced energy meters deployed in three collective residential buildings undergoing major retrofit actions. The sensor network is implemented at the building scale and in an eight-apartment sample. Data are collected for over a year and a half and cover building energy behavior (thermal and electricity), indoor environment, inhabitants' comfort, occupancy, occupants' behavior and energy uses, and local weather. Building energy simulations are performed using a physics-based building energy modeling software (Pleiades software), where the buildings' features are implemented according to the buildings' thermal regulation code compliance study and the retrofit project technical files. Sensitivity analyses are performed to highlight the most energy-driving building features for each end-use. These features are then compared with the collected post-occupancy data.
Energy-driving features are progressively replaced with field data for a step-by-step calibration of the energy model. The results of this study provide an analysis of the energy performance gap in an existing residential case study under deep retrofit actions. They highlight the impact of the different building features on the energy behavior and on the performance gap in this context, such as temperature setpoints, indoor occupancy, the building envelope properties, but also domestic hot water usage and heat gains from electric appliances. The benefits of inputting field data from an extensive instrumentation campaign instead of standardized scenarios are also described. Finally, the exhaustive instrumentation solution provides useful insights into the needs, advantages, and shortcomings of the implemented sensor network for its replicability on a larger scale and for different use cases.
Keywords: calibration, building energy modeling, performance gap, sensor network
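A common way to quantify the performance gap during such step-by-step calibration is with the NMBE and CV(RMSE) indicators (used, for example, in ASHRAE Guideline 14 acceptance criteria). The sketch below computes both on invented measured and simulated values; the abstract does not specify that these exact metrics were used:

```python
# Hedged sketch: NMBE and CV(RMSE), two indicators commonly used to judge
# whether a building energy model is calibrated against measured data.
# The daily energy values below are invented.

from math import sqrt

def nmbe(measured, simulated):
    """Normalized Mean Bias Error, in percent (sign shows over/under-prediction)."""
    n = len(measured)
    mean_m = sum(measured) / n
    return 100 * sum(m - s for m, s in zip(measured, simulated)) / (n * mean_m)

def cv_rmse(measured, simulated):
    """Coefficient of Variation of the RMSE, in percent (overall scatter)."""
    n = len(measured)
    mean_m = sum(measured) / n
    rmse = sqrt(sum((m - s) ** 2 for m, s in zip(measured, simulated)) / n)
    return 100 * rmse / mean_m

measured  = [10.0, 12.0, 11.0, 13.0]   # e.g. daily heating energy, kWh
simulated = [9.0, 12.5, 11.5, 12.0]
print(round(nmbe(measured, simulated), 2), round(cv_rmse(measured, simulated), 2))
```

Recomputing both indicators after each field-data substitution shows which building feature closes the most of the gap, which mirrors the step-by-step calibration described above.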
Procedia PDF Downloads 161