Search results for: safety validation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4569

849 The Role of Women in Climate Change Impact in Kupang-Indonesia

Authors: Rolland Epafras Fanggidae

Abstract:

The impacts of climate change, such as natural disasters, crop failures, increasing crop pests, poor child nutrition and other effects, indirectly affect education, health, food security and the economy. Climate change puts people in situations of vulnerability in which they are powerless to meet minimum needs, a condition closely connected to poverty, and when poverty is discussed, those most affected are women. The role of women in Indonesia, particularly in East Nusa Tenggara, is very central and dominant in domestic activity. Indonesian women can therefore be described as outstanding actors in climate change mitigation and adaptation who apply local knowledge, yet they are still overlooked when the gender-based division of labour confines them to domestic roles. Their public activity is likewise an extension of the domestic sphere, for example the trading activity of the "mama lele" women traders in the market. Although men are also affected by climate change, it is women who feel the impacts most. It can therefore be said that Indonesia's commitments have not been matched by optimal empowerment of women's roles in addressing climate change, so it is necessary to learn how women face climate change impacts in their roles as women, housewives or heads of families, and to use this as input for determining how women can find solutions to the problems climate change creates. This study focuses on the efforts women make to cope with the impacts of climate change, the efforts made by the government, and the empowerment model used in responding to those impacts, framed under the title "The Role of Women in Climate Change Impact in Kupang District". The study uses a mixed-methods design combining quantitative and qualitative research. It was conducted in Kupang Regency, East Nusa Tenggara, at two sites: East Kupang Subdistrict, the granary of Kupang Regency, and West Kupang Subdistrict, in particular Tablolong Village, the centre of seaweed cultivation in Kupang Regency.

Keywords: climate change, women, women's roles, gender, family

Procedia PDF Downloads 286
848 Stakeholder-Driven Development of a One Health Platform to Prevent Non-Alimentary Zoonoses

Authors: A. F. G. Van Woezik, L. M. A. Braakman-Jansen, O. A. Kulyk, J. E. W. C. Van Gemert-Pijnen

Abstract:

Background: Zoonoses pose a serious threat to public health and economies worldwide, especially as antimicrobial resistance grows and newly emerging zoonoses can cause unpredictable outbreaks. In order to prevent and control emerging and re-emerging zoonoses, collaboration between the veterinary, human health and public health domains is essential. In reality, however, there is a lack of cooperation between these three disciplines, and uncertainties exist about their tasks and responsibilities. The objective of this ongoing research project (ZonMw funded, 2014-2018) is to develop an online education and communication One Health platform, “eZoon”, for the general public and professionals working in the veterinary, human health and public health domains to support the risk communication of non-alimentary zoonoses in the Netherlands. The main focus is on education and communication in times of outbreak as well as in daily non-outbreak situations. Methods: A participatory development approach was used in which stakeholders from the veterinary, human health and public health domains participated. Key stakeholders were identified using business modeling techniques previously used for the design and implementation of antibiotic stewardship interventions; identification consisted of a literature scan, expert recommendations, and snowball sampling. We used a stakeholder salience approach to rank stakeholders according to their power, legitimacy, and urgency. Semi-structured interviews were conducted with stakeholders (N=20) from all three disciplines to identify current problems in risk communication and stakeholder values for the One Health platform. Interviews were transcribed verbatim and coded inductively by two researchers. Results: The following key values (among others) were identified: (a) the need for improved mutual awareness between the veterinary and human health fields; (b) information exchange between veterinary and human health, in particular at the regional level; (c) legal regulations need to match daily practice; (d) professionals and the general public need to be addressed separately using tailored language and information; (e) information needs to be of value to professionals (relevant, important, accurate, and carrying financial or other important consequences if ignored) in order to be picked up; and (f) the need for accurate information from trustworthy, centrally organised sources to inform the general public. Conclusion: By applying a participatory development approach, we gained insights from multiple perspectives into the main problems of current risk communication strategies in the Netherlands and into stakeholder values. Next, we will continue the iterative development of the One Health platform by presenting key values to stakeholders for validation and ranking, which will guide further development. We will develop a communication platform with a serious game in which professionals at the regional level will be trained in shared decision making in time-critical outbreak situations, a smart Question & Answer (Q&A) system for the general public tailored towards different user profiles, and social media to inform the general public adequately during outbreaks.

Keywords: ehealth, one health, risk communication, stakeholder, zoonosis

Procedia PDF Downloads 280
847 Optimal Tamping for Railway Tracks, Reducing Railway Maintenance Expenditures by the Use of Integer Programming

Authors: Rui Li, Min Wen, Kim Bang Salling

Abstract:

For modern railways, maintenance is critical for ensuring safety, train punctuality and overall capacity utilization. The cost of railway maintenance in Europe is high, on average between 30,000 and 100,000 Euros per kilometre per year. In order to reduce such maintenance expenditures, this paper presents a mixed 0-1 linear mathematical model designed to optimize predictive railway tamping activities for ballasted track over a planning horizon of three to four years. The objective function minimizes the actual costs of the tamping machine. The approach uses a simple dynamic model of the condition-based tamping process together with a solution method for finding the optimal condition-based tamping schedule. Seven technical and practical aspects are taken into account when scheduling tamping: (1) track degradation, expressed as the standard deviation of the longitudinal level over time; (2) track geometrical alignment; (3) track quality thresholds based on the train speed limits; (4) the dependency of the track quality recovery on the track quality after the tamping operation; (5) tamping machine operation practices; (6) tamping budgets; and (7) differentiating the open track from the station sections. A Danish railway track between Odense and Fredericia, 42.6 km in length, is used as a case study over planning periods of three and four years in the proposed maintenance model. The generated tamping schedule is reasonable and robust. Based on the results from the Danish railway corridor, total costs can be reduced significantly (by about 50%) compared with a previous model based on optimizing the number of tamping operations. Different maintenance strategies are discussed in the paper. The analysis of the model results also shows that a longer predictive tamping planning period yields a more optimal schedule of maintenance actions than continuous short-term preventive maintenance, namely yearly condition-based planning.
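
The abstract does not reproduce the full formulation; the following is a minimal sketch, assuming a linear degradation rate, a fixed recovery per tamping and a single quality threshold per segment (all values and simplifications are illustrative, not the authors' model), of how a condition-based tamping schedule can be posed as a 0-1 program in PuLP.

```python
# Minimal illustrative 0-1 program for condition-based tamping scheduling (PuLP).
# Assumptions (not the authors' model): linear degradation of the longitudinal-level
# standard deviation, fixed recovery per tamping, one quality threshold per segment.
import pulp

segments = range(3)          # track segments
periods = range(8)           # planning periods
sigma0 = [1.0, 1.2, 0.9]     # initial std. dev. of longitudinal level [mm]
rate = [0.05, 0.08, 0.04]    # degradation per period [mm]
recovery = 0.6               # reduction in std. dev. per tamping [mm]
threshold = 1.5              # quality limit tied to line speed [mm]
cost = 1.0                   # cost of one tamping intervention

prob = pulp.LpProblem("tamping_schedule", pulp.LpMinimize)
# x[s][t] = 1 if segment s is tamped in period t
x = pulp.LpVariable.dicts("tamp", (segments, periods), cat="Binary")

# Objective: minimise total tamping cost
prob += pulp.lpSum(cost * x[s][t] for s in segments for t in periods)

# Quality constraint: degraded condition minus accumulated recovery stays below threshold
for s in segments:
    for t in periods:
        prob += (sigma0[s] + rate[s] * (t + 1)
                 - recovery * pulp.lpSum(x[s][k] for k in range(t + 1))
                 <= threshold)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for s in segments:
    plan = [t for t in periods if pulp.value(x[s][t]) > 0.5]
    print(f"segment {s}: tamp in periods {plan}")
```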

Keywords: integer programming, railway tamping, predictive maintenance model, preventive condition-based maintenance

Procedia PDF Downloads 436
846 A Crossover Study of Therapeutic Equivalence of Generic Product Versus Reference Product of Ivabradine in Patients with Chronic Heart Failure

Authors: Hadeer E. Eliwa, Naglaa S. Bazan, Ebtissam A. Darweesh, Nagwa A. Sabri

Abstract:

Background: Generic substitution of brand ivabradine prescriptions can reduce drug expenditures and improve adherence. However, practitioners' and patients' distrust of generic medicines, due to doubts about their quality and fear of counterfeiting, compromises the acceptance of this practice. Aim: The goal of this study is to compare the therapeutic equivalence of the brand product versus the generic product of ivabradine in adult patients with chronic heart failure with reduced ejection fraction (≤ 40%) (HFrEF). Methodology: Thirty-two Egyptian patients with chronic HFrEF were treated with branded ivabradine (Procrolan©) and generic ivabradine (Bradipect©) during 24 (2x12) weeks. Primary outcomes were resting heart rate (HR), NYHA functional class (FC), quality of life (QoL) assessed with the Minnesota Living with Heart Failure (MLWHF) questionnaire, and ejection fraction (EF). Secondary outcomes were the number of hospitalizations for worsening HFrEF and adverse effects. No washout period was applied. Findings: At the 12th week, the reduction in HR was comparable in the two groups (90.13±7.11 to 69±11.41 vs 96.13±17.58 to 67.31±8.68 bpm in the brand and generic groups, respectively). The increase in EF was also comparable in the two groups (27.44±4.59 to 33.38±5.62 vs 32±5.96 to 39.31±8.95 in the brand and generic groups, respectively). The improvement in NYHA FC was comparable in both groups (87.5% in the brand group vs 93.8% in the generic group). The mean QoL score improved from 31.63±15.8 to 19.6±14.7 vs 35.68±17.63 to 22.9±15.1 for the brand and generic groups, respectively. Similarly, at the end of 24 weeks, no significant changes were observed relative to the data at the 12th week regarding HR, EF, QoL and NYHA FC. Only minor side effects, mainly phosphenes, and a comparable number of hospitalizations were observed in both groups. Conclusion: The study revealed no statistically significant differences in therapeutic effect or safety between generic and branded ivabradine. We conclude that practitioners can safely interchange between them for economic reasons.

Keywords: Bradipect©, heart failure, ivabradine, Procrolan©, therapeutic equivalence

Procedia PDF Downloads 455
845 The Clash between Environmental and Heritage Laws: An Australian Case Study

Authors: Andrew R. Beatty

Abstract:

The exploitation of Australia’s vast mineral wealth is regulated by a matrix of planning, environment and heritage legislation, and despite the desire for a ‘balance’ between economic, environmental and heritage values, Aboriginal objects and places are often detrimentally impacted by mining approvals. The Australian experience is not novel. There are other cases of clashes between the rights of traditional landowners and businesses seeking to exploit mineral or other resources on or beneath those lands, including in the United States, Canada, and Brazil. How one reconciles the rights of traditional owners with those of resource companies is an ongoing legal problem of general interest. In Australia, planning and environmental approvals for resource projects are ordinarily issued by State or Territory governments. Federal legislation such as the Aboriginal and Torres Strait Islander Heritage Protection Act 1984 (Cth) is intended to act as a safety net when State or Territory legislation is incapable of protecting Indigenous objects or places in the context of approvals for resource projects. This paper will analyse the context and effectiveness of legislation enacted to protect Indigenous heritage in the planning process. In particular, the paper will analyse how the statutory objects of such legislation need to be weighed against the statutory objects of competing legislation designed to facilitate and control resource exploitation. Using a current claim in the Federal Court of Australia for the protection of a culturally significant landscape as a case study, this paper will examine the challenges faced in ascribing value to cultural heritage within the wider context of environmental and planning laws. Our findings will reveal that there is an inherent difficulty in defining and weighing competing economic, environmental and heritage considerations. An alternative framework will be proposed to guide regulators towards making decisions that result in better protection of Indigenous heritage in the context of resource management.

Keywords: environmental law, heritage law, indigenous rights, mining

Procedia PDF Downloads 93
844 Whey Protein in Type 2 Diabetes Mellitus: A Systematic Review and Meta-Analysis

Authors: Zyrah Lou R. Samar, Genecarlo Liwanag

Abstract:

Type 2 diabetes mellitus is the more prevalent type of diabetes, caused by a combination of insulin resistance and an inadequate insulin response to hyperglycemia. Aside from pharmacologic interventions, medical nutrition therapy is an integral part of the management of patients with type 2 diabetes mellitus. Whey protein, one of the best protein sources, has been investigated for its applicability in improving glycemic control in patients with type 2 diabetes mellitus. This systematic review and meta-analysis was conducted to measure the magnitude of the effect of whey protein on glycemic control in type 2 diabetes mellitus. The aim of this review is to evaluate the efficacy and safety of whey protein in patients with type 2 diabetes mellitus. Methods: A systematic electronic search for studies in the PubMed and Cochrane Collaboration databases was done. Included in this review were randomized controlled trials of whey protein enrolling patients with type 2 diabetes mellitus. Three reviewers independently searched, assessed, and extracted data from the individual studies. Results: A systematic literature search of online databases such as the Cochrane Central Registry, PubMed, and Herdin Plus was conducted from April to September 2021 to identify eligible studies. The search yielded 21 randomized controlled trials after removing duplicates. Only 5 articles met the selection criteria after full-text review and were included. Conclusion: Whey protein supplementation significantly reduced fasting blood glucose. However, it did not reduce post-prandial blood glucose, HbA1c level, or weight when compared with placebo. There was considerable heterogeneity across the studies, which may have confounded the effects. Future reviews may include larger and more specific studies with better-defined inclusion criteria.

Keywords: whey protein, diabetes, nutrition, fasting blood sugar, postprandial glucose, HbA1c, weight reduction

Procedia PDF Downloads 100
843 Determination of Mechanical Properties of Adhesives via Digital Image Correlation (DIC) Method

Authors: Murat Demir Aydin, Elanur Celebi

Abstract:

Adhesively bonded joints are used as an alternative to traditional joining methods because of the important advantages they provide. The most important consideration in the use of adhesively bonded joints is that they meet appropriate safety requirements. To ensure this, damage analysis of adhesively bonded joints should be performed, which requires determining the mechanical properties of the adhesives. The literature shows that the mechanical properties of adhesives are generally determined by traditional measurement methods. In this study, the Digital Image Correlation (DIC) method, which can be an alternative to traditional measurement methods, has been used to determine the mechanical properties of adhesives. DIC is a relatively new optical measurement method used to determine displacement and strain fields appropriately and accurately. In this study, tensile tests were performed on Thick Adherend Shear Test (TAST) samples formed from DP410 liquid structural adhesive and steel adherends, and on bulk tensile specimens formed from DP410 liquid structural adhesive. The displacement and strain values of the samples were determined by the DIC method, and the shear stress-strain curves of the adhesive for the TAST specimens and the tensile stress-strain curves of the bulk adhesive specimens were obtained. Conventional measurement methods (strain gauges, mechanical extensometers, etc.) are not sufficient for determining the strain and displacement values of very thin adhesive layers such as those in TAST samples, so additional approaches such as numerical methods are otherwise required. The DIC method removes these requirements and easily achieves displacement measurements with sufficient accuracy.
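
DIC recovers displacements by tracking a speckle subset from the reference image into the deformed image. The sketch below, using synthetic images and skimage's normalized cross-correlation template matching rather than the DIC software used in the study, illustrates only this core matching idea.

```python
# Minimal illustration of the DIC principle: track a reference subset into a
# "deformed" image via normalized cross-correlation (not the study's DIC software).
import numpy as np
from skimage.feature import match_template

rng = np.random.default_rng(0)
reference = rng.random((200, 200))                         # synthetic speckle pattern
deformed = np.roll(reference, shift=(3, 5), axis=(0, 1))   # rigid shift: 3 px down, 5 px right

# Take a subset (template) from the reference image
y0, x0, half = 100, 100, 15
subset = reference[y0 - half:y0 + half, x0 - half:x0 + half]

# Correlate the subset against the deformed image and locate the correlation peak
corr = match_template(deformed, subset)
peak_y, peak_x = np.unravel_index(np.argmax(corr), corr.shape)

# Displacement = matched subset corner position minus original corner position
u = peak_x - (x0 - half)   # horizontal displacement [px]
v = peak_y - (y0 - half)   # vertical displacement [px]
print(f"estimated displacement: u = {u} px, v = {v} px")   # expect u = 5, v = 3
```

In practice, a grid of such subsets gives a full displacement field, and strains follow from its spatial gradients.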

Keywords: structural adhesive, adhesively bonded joints, digital image correlation, thick adherend shear test (TAST)

Procedia PDF Downloads 309
842 Climate Change and Urban Flooding: The Need to Rethinking Urban Flood Management through Resilience

Authors: Suresh Hettiarachchi, Conrad Wasko, Ashish Sharma

Abstract:

The ever changing and expanding urban landscape increases the stress on urban systems to support and maintain safe and functional living spaces. Flooding presents one of the more serious threats to this safety, putting a larger number of people in harm's way in congested urban settings. Climate change is adding to this stress by creating a dichotomy in the urban flood response. On the one hand, climate change is causing storms to intensify, resulting in more destructive, rarer floods, while on the other hand, longer dry periods are decreasing the severity of more frequent, less intense floods. This variability is creating a need to be more agile and innovative in how we design for and manage urban flooding. Here, we argue that to cope with the challenge climate change brings, we need to move towards urban flood management through resilience rather than flood prevention. We also argue that dealing with the larger variation in flood response under climate change means that we need to look at flooding from all aspects rather than through the single-dimensional focus of flood depths and extents. In essence, we need to rethink how we manage flooding in the urban space. This change in our thought process and approach to flood management requires a practical way to assess and quantify the resilience that is built into the urban landscape, so that informed decision-making can support the required changes in planning and infrastructure design. Towards that end, we propose a Simple Urban Flood Resilience Index (SUFRI) based on a robust definition of resilience as a tool to assess flood resilience. The application of a simple resilience index such as the SUFRI can provide a practical tool that considers urban flood management in a multi-dimensional way and can present solutions that were not previously considered. When such an index is grounded on a clear and relevant definition of resilience, it can be a reliable and defensible way to assess and assist the process of adapting to the increasing challenges of urban flood management under climate change.
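
The abstract does not give the SUFRI formulation. Purely as an illustration of how a simple, multi-dimensional resilience index of this general kind can be assembled, the sketch below aggregates normalized indicators with weights; every indicator name, bound and weight is hypothetical and not the authors' definition.

```python
# Hypothetical sketch of a composite flood-resilience index (NOT the authors' SUFRI
# definition, which the abstract does not give): weighted aggregation of normalized
# indicators covering more than flood depth and extent alone.
import numpy as np

def composite_resilience_index(indicators, bounds, weights):
    """Normalize each indicator to [0, 1] against its plausible bounds and
    return the weighted average (1 = most resilient)."""
    total_w = sum(weights.values())
    score = 0.0
    for name, value in indicators.items():
        lo, hi = bounds[name]
        norm = float(np.clip((value - lo) / (hi - lo), 0.0, 1.0))
        score += weights[name] * norm
    return score / total_w

# Illustrative indicators for one catchment (all names and values hypothetical)
indicators = {"drainage_capacity_ratio": 0.7,    # design capacity / peak inflow
              "time_to_recover_hours": 36.0,     # inverted via its bounds below
              "warning_lead_time_hours": 6.0}
bounds = {"drainage_capacity_ratio": (0.0, 1.0),
          "time_to_recover_hours": (72.0, 0.0),  # shorter recovery -> higher score
          "warning_lead_time_hours": (0.0, 12.0)}
weights = {"drainage_capacity_ratio": 0.4,
           "time_to_recover_hours": 0.4,
           "warning_lead_time_hours": 0.2}

print(round(composite_resilience_index(indicators, bounds, weights), 2))  # 0.58
```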

Keywords: urban flood resilience, climate change, flood management, flood modelling

Procedia PDF Downloads 41
841 Cooperative Agents to Prevent and Mitigate Distributed Denial of Service Attacks of Internet of Things Devices in Transportation Systems

Authors: Borhan Marzougui

Abstract:

The Road and Transport Authority (RTA) is moving ahead with the implementation of the leader's vision by exploring all avenues that may bring better security and safety services to the community. Smart transport means using smart technologies such as the IoT (Internet of Things). This technology continues to affirm its important role in the context of information and transportation systems. In fact, the IoT is a network of Internet-connected objects able to collect and exchange different data using embedded sensors. With the growth of the IoT, Distributed Denial of Service (DDoS) attacks are also growing exponentially. DDoS attacks are a major and real threat to various transportation services. Currently, the defense mechanisms are mainly passive in nature, and there is a need to develop a smart technique to handle them. In fact, new IoT devices are being recruited into botnets that DDoS attackers accumulate for their own purposes. The aim of this paper is to provide a relevant understanding of the dangerous types of DDoS attack related to the IoT and to provide valuable guidance for future IoT security methods. Our methodology is based on the development of a distributed algorithm. This algorithm uses dedicated intelligent and cooperative agents to prevent and mitigate DDoS attacks. The proposed technique ensures preventive action when malicious packets start to be distributed through the connected nodes (the network of IoT devices). In addition, devices such as cameras and radio frequency identification (RFID) readers are connected within the secured network, and the data they generate are analyzed in real time by the intelligent and cooperative agents. The proposed security system is based on a multi-agent system. The results obtained show a significant reduction in the number of infected devices and enhanced capabilities of the different security devices.
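
The paper's distributed algorithm is not specified in the abstract; a minimal single-step sketch of one cooperative agent, assuming a simple per-source packet-rate threshold as the local detection rule, is given below (all class and variable names are illustrative).

```python
# Minimal sketch of one cooperative agent's local detection and sharing steps
# (illustrative only; the paper's distributed algorithm is not given in the abstract).
from collections import Counter

class EdgeAgent:
    def __init__(self, rate_threshold):
        self.rate_threshold = rate_threshold   # max packets per source per time window
        self.blocked = set()

    def inspect_window(self, packets):
        """packets: source IDs seen in one time window. Returns newly blocked sources."""
        counts = Counter(p for p in packets if p not in self.blocked)
        newly_blocked = {src for src, n in counts.items() if n > self.rate_threshold}
        self.blocked |= newly_blocked
        return newly_blocked

    def share_blocklist(self, peers):
        """Cooperative step: propagate the local blocklist to peer agents."""
        for peer in peers:
            peer.blocked |= self.blocked

# Usage: one IoT device floods, the agent blocks it and informs a peer agent.
a1, a2 = EdgeAgent(rate_threshold=100), EdgeAgent(rate_threshold=100)
window = ["device-7"] * 500 + ["camera-1"] * 20 + ["rfid-3"] * 5
print(a1.inspect_window(window))   # {'device-7'}
a1.share_blocklist([a2])
print("device-7" in a2.blocked)    # True
```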

Keywords: IoT, DDoS, attacks, botnet, security, agents

Procedia PDF Downloads 136
840 External Validation of Established Pre-Operative Scoring Systems in Predicting Response to Microvascular Decompression for Trigeminal Neuralgia

Authors: Kantha Siddhanth Gujjari, Shaani Singhal, Robert Andrew Danks, Adrian Praeger

Abstract:

Background: Trigeminal neuralgia (TN) is a heterogeneous pain syndrome characterised by short paroxysms of lancinating facial pain in the distribution of the trigeminal nerve, often triggered by usually innocuous stimuli. TN has a low prevalence of less than 0.1%, of which 80% to 90% is caused by compression of the trigeminal nerve by an adjacent artery or vein. The root entry zone of the trigeminal nerve is most sensitive to neurovascular conflict (NVC), causing dysmyelination. Whilst microvascular decompression (MVD) is an effective treatment for TN with NVC, not all patients achieve long-term pain relief. Pre-operative scoring systems by Panczykowski and Hardaway have been proposed but have not been externally validated. These pre-operative scoring systems are composite scores calculated according to the subtype of TN, the presence and degree of neurovascular conflict, and the response to medical treatments. There is discordance between neurosurgeons and radiologists in the assessment of NVC identified on pre-operative magnetic resonance imaging (MRI). To the best of our knowledge, the prognostic impact for MVD of this difference of interpretation has not previously been investigated in the form of a composite scoring system such as those suggested by Panczykowski and Hardaway. Aims: This study aims to identify prognostic factors and externally validate the proposed scoring systems by Panczykowski and Hardaway for TN. A secondary aim is to investigate the prognostic difference between a neurosurgeon's interpretation of NVC on MRI and a radiologist's. Methods: This retrospective cohort study included 95 patients who underwent de novo MVD in a single neurosurgical unit in Melbourne. Data were recorded from patients' hospital records and the neurosurgeon's correspondence from perioperative clinic reviews. Patient demographics, type of TN, distribution of TN, response to carbamazepine, and the neurosurgeon's and radiologist's interpretations of NVC on MRI were clearly described prospectively and preoperatively in the correspondence. Scoring systems published by Panczykowski et al. and Hardaway et al. were used to determine composite scores, which were compared with the recurrence of TN recorded during follow-up over 1 year. Categorical data were analysed using Pearson chi-square testing; independent numerical and nominal data were analysed with logistic regression. Results: Logistic regression showed that a Panczykowski composite score of greater than 3 points was associated with a higher likelihood of pain-free outcome 1 year post-MVD, with an OR of 1.81 (95% CI 1.41-2.61, p=0.032). The composite score using the neurosurgeon's impression of NVC had an OR of 2.96 (95% CI 2.28-3.31, p=0.048). A Hardaway composite score of greater than 2 points was associated with a higher likelihood of pain-free outcome 1 year post-MVD, with an OR of 3.41 (95% CI 2.58-4.37, p=0.028). The composite score using the neurosurgeon's impression of NVC had an OR of 3.96 (95% CI 3.01-4.65, p=0.042). Conclusion: The composite scores developed by Panczykowski and Hardaway were validated for the prediction of response to MVD in TN. A composite score based on the neurosurgeon's interpretation of NVC on MRI, when compared with the radiologist's, had a greater correlation with pain-free outcomes 1 year post-MVD.
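
The reported odds ratios come from logistic regression of the 1-year pain-free outcome on dichotomised composite scores. A minimal sketch of that type of analysis in statsmodels, run on simulated data rather than the study cohort, is shown below.

```python
# Sketch of the logistic-regression step: odds ratio and 95% CI for a dichotomised
# composite score predicting 1-year pain-free outcome (simulated data, not the cohort).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 95
score_high = rng.integers(0, 2, n)                 # 1 if composite score above cut-off
logit_p = -0.5 + 1.0 * score_high                  # assumed true effect, for illustration
pain_free = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(score_high)
model = sm.Logit(pain_free, X).fit(disp=False)

or_estimates = np.exp(model.params)                # odds ratios
ci = np.exp(model.conf_int())                      # 95% confidence intervals
print(f"OR for high score: {or_estimates[1]:.2f} "
      f"(95% CI {ci[1, 0]:.2f}-{ci[1, 1]:.2f}), p = {model.pvalues[1]:.3f}")
```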

Keywords: de novo microvascular decompression, neurovascular conflict, prognosis, trigeminal neuralgia

Procedia PDF Downloads 70
839 Psychopathic Disorders and Judges Sentencing: Can Neurosciences Change this Aggravating Factor in a Mitigating Factor?

Authors: Kevin Moustapha

Abstract:

Psychopathy is perceived today as being «the most important concept in the criminal justice system» and «the most important legal notion of the early 21st century». The explosion of research related to psychopathy seems to illustrate this trend perfectly. Traditionally, many studies tend to focus on links between the insanity defense and psychopathy. That is why our purpose in this article is to analyze psychopathic disorders within the scope of judges' sentencing in Canada. Indeed, in every Canadian case related to dangerous offenders, judges must strike a balance between fairness and the protection of the individual rights of the accused, and the protection of society from dangerous predators who may commit future acts of physical or sexual violence. Increasingly, psychopathic disorders are playing an important part in judges' sentencing, especially in Canada. This phenomenon can be illustrated by the high proportion of psychopathic offenders incarcerated in North American prisons. Many decisions in Canadian courtrooms seem to indicate that psychopathy is often used by judges as a strong argument for preserving public safety. The fact that psychopathy is often associated with violence, recklessness and recidivism could explain why many judges consider psychopathic disorders an aggravating factor. Generally, the judges' reasoning is based on article 753 of the Canadian Criminal Code related to dangerous offenders, which applies to individuals who show a pattern of repetitive and persistent aggressive behaviour. However, with cognitive neurosciences, the psychopath's situation in courtrooms may change. Cerebral imaging and new data provided by the neurosciences show that emotional and volitional functions in psychopaths' brains are impaired. Understanding these new issues could enable some judges to recognize psychopathic disorders as a mitigating factor. Two important questions ought to be raised in this article: can exploring psychopaths' brains really change judges' sentencing in Canadian courtrooms? If so, can judges consider psychopathy more as a mitigating factor than as an aggravating factor?

Keywords: criminal law, judges sentencing, neurosciences, psychopathy

Procedia PDF Downloads 920
838 Healthy Feeding and Drinking Troughs for Profitable Intensive Deep-Litter Poultry Farming

Authors: Godwin Ojochogu Adejo, Evelyn UnekwuOjo Adejo, Sunday UnenwOjo Adejo

Abstract:

The mainstream contemporary approach to controlling the impact of diseases among poultry birds relies largely on curative measures through the administration of drugs to infected birds. In the deep-litter poultry farming system, entire flocks, including uninfected birds, most often receive treatment they do not need. As such, unguarded use of chemical drugs and antibiotics has led to wastage and to the accumulation of chemical residues in poultry products, with associated health hazards to humans. However, wanton and frequent drug usage in poultry is avoidable if feeding and drinking equipment are designed to curb infection transmission among birds. Using toxicological assays as a guide, and with efficiency and simplicity in view, two newly field-tested and recently patented pieces of equipment, called the 'healthy liquid drinking trough (HDT)' and the 'healthy feeding trough (HFT)', were designed; they systematically eliminate contamination of the feeding and drinking channels, thereby curbing the widespread infection and transmission of diseases in the (intensive) deep-litter poultry farming system. Upon combined usage, they automatically and drastically reduced both the amount and the frequency of antibiotic use in poultry by more than 50%. Additionally, they improved feed and water utilization and eliminated wastage by more than 80%, reduced labour by more than 70%, reduced production cost by about 15%, and reduced chemical residues in poultry meat and eggs by more than 85%. These new and inexpensive technologies, which require no energy input, are likely to improve the safety of poultry products for consumers' health, increase marketability locally and for export, and increase output and profit, especially among poultry farmers and poor people who keep poultry or inevitably use poultry products in developing countries.

Keywords: healthy, trough, toxicological, assay-guided, poultry

Procedia PDF Downloads 149
837 Effect of Loop Diameter, Height and Insulation on a High Temperature CO2 Based Natural Circulation Loop

Authors: S. Sadhu, M. Ramgopal, S. Bhattacharyya

Abstract:

Natural circulation loops (NCLs) are buoyancy driven flow systems without any moving components. NCLs have vast applications in geothermal, solar and nuclear power industry where reliability and safety are of foremost concern. Due to certain favorable thermophysical properties, especially near supercritical regions, carbon dioxide can be considered as an ideal loop fluid in many applications. In the present work, a high temperature NCL that uses supercritical carbon dioxide as loop fluid is analysed. The effects of relevant design and operating variables on loop performance are studied. The system operating under steady state is modelled taking into account the axial conduction through loop fluid and loop wall, and heat transfer with surroundings. The heat source is considered to be a heater with controlled heat flux and heat sink is modelled as an end heat exchanger with water as the external cold fluid. The governing equations for mass, momentum and energy conservation are normalized and are solved numerically using finite volume method. Results are obtained for a loop pressure of 90 bar with the power input varying from 0.5 kW to 6.0 kW. The numerical results are validated against the experimental results reported in the literature in terms of the modified Grashof number (Grm) and Reynolds number (Re). Based on the results, buoyancy and friction dominated regions are identified for a given loop. Parametric analysis has been done to show the effect of loop diameter, loop height, ambient temperature and insulation. The results show that for the high temperature loop, heat loss to surroundings affects the loop performance significantly. Hence this conjugate heat transfer between the loop and surroundings has to be considered in the analysis of high temperature NCLs.

Keywords: conjugate heat transfer, heat loss, natural circulation loop, supercritical carbon dioxide

Procedia PDF Downloads 235
836 Statistical Approach to Identify Stress and Biases Impairing Decision-Making in High-Risk Industry

Authors: Ph. Fauquet-Alekhine

Abstract:

Decision-making occurs several times an hour when working in high-risk industry, and an erroneous choice might have undesirable outcomes for people and the environment surrounding the industrial plant. Industrial decisions are very often made in a context of acute stress. Time pressure is a crucial stressor that sometimes leads decision makers to speed up the decision-making process and, when that is not possible, to shift to the simplest strategy. We therefore found it worthwhile to update the characterization of the stress factors impairing decision-making at Chinon Nuclear Power Plant (France) in order to optimize decision-making contexts and/or the associated processes. The investigation was based on the analysis of reports addressing safety events over the last 3 years. Among 93 reports, those explicitly addressing decision-making issues were identified. Each event was characterized in terms of three criteria: stressors, biases impairing decision making, and weaknesses of the decision-making process. The statistical analysis showed that the biases were distributed over 10 possibilities, among which the hypothesis confirmation bias was clearly salient. No significant correlation was found between the criteria. The analysis indicated that the main stressor was time pressure and highlighted an unexpected form of stressor: the trust asymmetry principle of the expert. The analysis led to the conclusion that this stressor impaired decision-making from a psychological angle rather than from a physiological angle: it induces a defensive bias of self-esteem and self-protection associated with a confirmation bias. This leads to the hypothesis that this stressor can intervene in some cases without being detected, and that other stressors of the same kind might also occur without being detected. Further investigations addressing these hypotheses are being considered. The analysis also led to the conclusion that dealing with these issues requires i) decision-making methods that are well known to the workers and automated, and ii) decision-making tools that are well known and strictly applied. Training was adjusted accordingly.

Keywords: bias, expert, high risk industry, stress

Procedia PDF Downloads 104
835 Need of Trained Clinical Research Professionals Globally to Conduct Clinical Trials

Authors: Tambe Daniel Atem

Abstract:

Background: Clinical research is organized research on human beings intended to provide adequate information on the safety and efficacy of a drug used as a therapeutic agent. The significance of the study lies in educating health and life science graduates worldwide in clinical research in depth so that they perform better, since the work involves testing drugs on human beings. Objectives: to provide an overall understanding of the scientific approach to the evaluation of new and existing medical interventions, and to apply ethical and regulatory principles appropriate to any individual research. Methodology: It is based on primary data analysis and secondary data analysis. Primary data analysis means the collection of data from journals, the internet, and other online sources. For the secondary data analysis, a survey was conducted with a questionnaire to interview clinical research professionals in order to understand the training needed to perform clinical trials globally. The questionnaire consisted of details of the professionals working with the expertise. It also included the areas of clinical research that need intense training before entering the hardcore clinical research domain. Results: The clinical trials market worldwide is worth over USD 26 billion, and the industry employs an estimated 210,000 people in the US and over 70,000 in the UK, forming one-third of the total research and development staff. There are more than 250,000 vacant positions globally, with regional salary variations for a clinical research coordinator. R&D spending on new drug development is estimated at USD 70-85 billion, and the cost of conducting clinical trials for a new drug is USD 200-250 million. Due to an increase in trained clinical research professionals, India has emerged as a global hub for clinical research. The global clinical trial outsourcing opportunity for the Indian pharmaceutical industry increased to more than USD 2 billion in 2014 due to increased outsourcing from the US and Europe to India. Conclusion: Assessment of training needs is recommended for newer clinical research professionals and trial sites, especially prior to the conduct of larger confirmatory clinical trials.

Keywords: clinical research, clinical trials, clinical research professionals

Procedia PDF Downloads 446
834 Improving Grade Control Turnaround Times with In-Pit Hyperspectral Assaying

Authors: Gary Pattemore, Michael Edgar, Andrew Job, Marina Auad, Kathryn Job

Abstract:

As critical commodities become scarcer, significant time and resources have been spent on better understanding complicated ore bodies and extracting their full potential. These challenging ore bodies present several pain points for geologists and engineers to overcome; poor handling of these issues flows downstream to the processing plant, affecting throughput rates and recovery. Many open-cut mines utilise blast hole drilling to extract additional information to feed back into the modelling process. This method requires samples to be collected during or after blast hole drilling. Samples are then sent for assay, with turnaround times varying from 1 to 12 days. This method is time consuming and costly, requires human exposure on the bench, and collects elemental data only. To address this challenge, research has been undertaken to utilise hyperspectral imaging across a broad spectrum to scan samples and collars, or to take down-hole measurements, for mineral and moisture content and grade abundances. Automation of this process using unmanned vehicles and on-board processing reduces human in-pit exposure, ensuring ongoing safety. On-board processing allows data to be integrated into modelling workflows immediately. The preliminary results demonstrate numerous direct and indirect benefits of this new technology, including rapid and accurate estimates of grade, moisture content and mineralogy. These benefits allow faster geological model updates, better informed mine scheduling, and improved downstream blending and processing practices. The paper presents recommendations for implementing the technology in open-cut mining environments.

Keywords: grade control, hyperspectral scanning, artificial intelligence, autonomous mining, machine learning

Procedia PDF Downloads 104
833 Shape Management Method of Large Structure Based on Octree Space Partitioning

Authors: Gichun Cha, Changgil Lee, Seunghee Park

Abstract:

The objective of this study is to construct a shape management method contributing to the safety of large structures. In Korea, research on shape management is lacking because the technology has only recently been attempted. Terrestrial Laser Scanning (TLS) is used for measurements of large structures. TLS provides an efficient way to actively acquire accurate point clouds of object surfaces or environments. The point clouds provide a basis for rapid modeling in industrial automation and in the architecture, construction or maintenance of civil infrastructure. TLS produces a huge amount of point cloud data, and registration, extraction and visualization of the data require the processing of a massive amount of scan data. The octree can be applied to the shape management of large structures because it reduces the size of the scan data while maintaining the data attributes. Octree space partitioning generates voxels of 3D space, and each voxel is recursively subdivided into eight sub-voxels. The point cloud of the scan data was converted to voxels and sampled. The experimental site is located at Sungkyunkwan University, and the scanned structure is a steel-frame bridge. The TLS used was a Leica ScanStation C10/C5. The scan data were condensed by 92%, and the octree model was constructed at a resolution of 2 millimetres. This study presents octree space partitioning for handling point clouds, creating a basis for the shape management of large structures such as double-deck tunnels, buildings and bridges. The research is expected to improve the efficiency of structural health monitoring and maintenance. "This work is financially supported by 'U-City Master and Doctor Course Grant Program' and the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (NRF-2015R1D1A1A01059291)."
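
A minimal recursive octree, written from scratch rather than with the software used in the study, shows the partitioning idea: each voxel that still holds more points than a capacity limit is split into eight children until a target edge length (the resolution) is reached.

```python
# Minimal recursive octree partitioning of a point cloud (illustrative, not the
# study's processing pipeline): voxels are split into eight children until they
# hold few enough points or reach the minimum edge length (resolution).
import numpy as np

def build_octree(points, origin, size, min_size=0.002, capacity=100):
    """Return a nested dict {'origin', 'size', 'points' or 'children'}."""
    node = {"origin": origin, "size": size}
    if len(points) <= capacity or size <= min_size:
        node["points"] = points          # leaf voxel keeps its points
        return node
    half = size / 2.0
    children = []
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                child_origin = origin + half * np.array([dx, dy, dz])
                mask = np.all((points >= child_origin) &
                              (points < child_origin + half), axis=1)
                if mask.any():
                    children.append(build_octree(points[mask], child_origin,
                                                 half, min_size, capacity))
    node["children"] = children
    return node

# Usage: 100k synthetic points in a 1 m cube, 2 mm target resolution
pts = np.random.default_rng(0).random((100_000, 3))
tree = build_octree(pts, origin=np.zeros(3), size=1.0, min_size=0.002)

def count_leaves(node):
    return 1 if "points" in node else sum(count_leaves(c) for c in node["children"])
print("leaf voxels:", count_leaves(tree))
```

Sampling one representative point per leaf voxel is what condenses the scan data while preserving the spatial attributes, as described in the abstract.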

Keywords: 3D scan data, octree space partitioning, shape management, structural health monitoring, terrestrial laser scanning

Procedia PDF Downloads 294
832 Probiotic Potential and Antimicrobial Activity of Enterococcus faecium Isolated from Chicken Caecal and Fecal Samples

Authors: Salma H. Abu Hafsa, A. Mendonca, B. Brehm-Stecher, A. A. Hassan, S. A. Ibrahim

Abstract:

Enterococci are important inhabitants of the animal intestine and are widely used in probiotic products. A probiotic strain is expected to possess several desirable properties in order to exert beneficial effects. Therefore, the objective of this study was to isolate and characterize strains of Enterococcus sp. from chicken cecal and fecal samples and to determine their potential probiotic properties. Enterococci were isolated from thirty-one chicken cecal and fecal samples collected from a local farm. In vitro studies were performed to assess antibacterial activity (using agar well diffusion and a cell-free supernatant broth technique against Salmonella enterica serotype Enteritidis), susceptibility to antibiotics (amoxycillin, cotrimoxazole, chloramphenicol, cefuroxime, ceftriaxone, ciprofloxacin, and nalidixic acid), survival in acidic conditions, resistance to bile salts, and survival in simulated gastric juice conditions at pH 2.5. Isolates were identified using biochemical and molecular assays (API 50 CHL and API ZYM kits followed by 16S rDNA gene sequence analysis). Two strains were identified, of which Enterococcus faecium was capable of inhibiting the growth of S. Enteritidis and was susceptible to a wide range of antibiotics. In addition, the isolated strain exhibited significant resistance under highly acidic conditions (pH 2.5) for 8 hours, survived well in 0.2% bile salt for 24 hours, and showed the ability to survive in the presence of simulated gastric juice at pH 2.5. Based on these results, the E. faecium isolate fulfills some of the criteria to be considered a probiotic strain and could therefore be used as a feed additive with good potential for controlling S. Enteritidis in chickens. However, in vivo studies are needed to determine the safety of the strain.

Keywords: acid tolerance, antimicrobial activity, Enterococcus faecium, probiotic

Procedia PDF Downloads 394
831 FEM for Stress Reduction by Optimal Auxiliary Holes in a Loaded Plate with Elliptical Hole

Authors: Basavaraj R. Endigeri, S. G. Sarganachari

Abstract:

Steel is widely used in machine parts, structural equipment and many other applications. In many steel structural elements, holes of different shapes and orientations are made to satisfy design requirements. The presence of holes in steel elements creates stress concentrations, which eventually reduce the mechanical strength of the structure. Therefore, it is of great importance to investigate the state of stress around the holes for the safe and proper design of such elements. A literature survey shows that, to date, there is no analytical solution for reducing the stress concentration by providing auxiliary holes at definite locations and radii in a steel plate. Numerical methods can instead be used to determine the optimum locations and radii of the auxiliary holes. In the present work, a steel plate with an elliptical hole subjected to uniaxial load is analysed, and the effect of stress concentration is represented graphically. The introduction of auxiliary holes at optimum locations and radii, and its effect on the stress concentration, is also represented graphically. The finite element analysis package ANSYS 11.0 is used to analyse the steel plate, with the analysis carried out using PLANE42 elements. Further, the ANSYS optimization module is used to determine the optimum locations and radii of the auxiliary holes that reduce the stress concentration. All results for different hole-diameter-to-plate-width ratios are presented graphically, in the form of graphs for determining the locations and diameters of optimal auxiliary holes and of the stress concentration factor versus the ratio of central hole diameter to plate width. The finite element results indicate that the stress concentration effect of a central elliptical hole in a uniaxially loaded plate can be reduced by introducing auxiliary holes on either side of the central hole.
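
For reference, the classical Inglis result for an elliptical hole in an infinite plate under remote uniaxial tension gives the theoretical stress concentration factor Kt = 1 + 2a/b, where a is the semi-axis perpendicular to the load and b the semi-axis parallel to it. A two-line sanity check of the finite-element trend, under that infinite-plate assumption (a finite-width plate will differ), is:

```python
# Theoretical stress concentration factor of an elliptical hole in an infinite plate
# under remote uniaxial tension (Inglis): Kt = 1 + 2a/b. Used here only as a sanity
# check for FE results; a finite-width plate will give different values.
def kt_elliptical(a, b):
    """a: semi-axis normal to the load, b: semi-axis along the load."""
    return 1.0 + 2.0 * a / b

print(kt_elliptical(a=2.0, b=1.0))   # 5.0  (sharper ellipse -> higher Kt)
print(kt_elliptical(a=1.0, b=1.0))   # 3.0  (circular hole recovers the classic Kt = 3)
```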

Keywords: finite element method, optimization, stress concentration factor, auxiliary holes

Procedia PDF Downloads 446
830 Reconstruction of Age-Related Generations of Siberian Larch to Quantify the Climatogenic Dynamics of Woody Vegetation Close the Upper Limit of Its Growth

Authors: A. P. Mikhailovich, V. V. Fomin, E. M. Agapitov, V. E. Rogachev, E. A. Kostousova, E. S. Perekhodova

Abstract:

Woody vegetation at the upper limit of its habitat is a sensitive indicator of the reaction of biota to regional climate change. Quantitative assessment of temporal and spatial changes in the distribution of trees and plant biocoenoses calls for the development of new modeling approaches based on selected data from ground-level measurements and ultra-high-resolution aerial photography. Statistical models were developed for the study area located in the Polar Urals. These models provide probabilistic estimates for placing Siberian larch trees into one of three age intervals, namely 1-10, 11-40 and over 40 years, based on the Weibull distribution of the maximum horizontal crown projection. The authors developed a distribution map for larch trees with crown diameters exceeding twenty centimeters by deciphering aerial photographs taken by a UAV from an altitude of fifty meters. The total number of larches was 88608, distributed across the abovementioned intervals as 16980, 51740, and 19889 trees. The results demonstrate that two processes can be observed over recent decades: first, intensive forestation of previously barren or lightly wooded fragments of the study area located within the patches of wood, woodland and sparse stands, and second, expansion into mountain tundra. The current expansion of Siberian larch in the region has replaced the depopulation process that occurred in the course of the Little Ice Age, from the late 13th century to the end of the 20th century. Using field measurements of the biometric parameters of Siberian larch specimens (including height, diameter at the root collar and at 1.3 meters, and maximum projection of the crown in two orthogonal directions) and tree-age data obtained at nine circular test sites, the authors developed an artificial neural network model including two layers with three and two neurons, respectively. The model allows quantitative assessment of a specimen's age based on its height and maximum crown projection. Tree height and crown diameters can in turn be quantitatively assessed using data from aerial photographs and lidar scans, so the resulting model can be used to assess the age of all Siberian larch trees. The proposed approach, after validation, can be applied to assessing the age of other tree species growing near the upper tree boundary in other mountainous regions. This research was collaboratively funded by the Russian Ministry for Science and Education (project No. FEUG-2023-0002) and the Russian Science Foundation (project No. 24-24-00235) in the field of data modeling on the basis of artificial neural networks.
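
The abstract specifies a network with two layers of three and two neurons estimating age from tree height and maximum crown projection. A minimal sketch of such a model with scikit-learn, trained on synthetic data standing in for the nine test-site measurements (the assumed age-height-crown relation below is purely illustrative), would be:

```python
# Sketch of the described two-layer (3 and 2 neurons) network estimating larch age
# from tree height and maximum crown projection. Trained on synthetic data here;
# the study used field measurements from nine circular test sites.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(42)
n = 300
height = rng.uniform(0.2, 12.0, n)            # tree height [m]
crown = rng.uniform(0.1, 4.0, n)              # max horizontal crown projection [m]
age = 3.0 * height + 8.0 * crown + rng.normal(0, 3, n)   # assumed relation, illustration only

X = np.column_stack([height, crown])
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(3, 2), activation="tanh",
                                   max_iter=5000, random_state=0))
model.fit(X, age)

# Predict the age of a tree 6 m tall with a 1.5 m crown projection
print(float(model.predict([[6.0, 1.5]])[0]))
```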

Keywords: treeline, dynamic, climate, modeling

Procedia PDF Downloads 68
829 Deep Learning Prediction of Residential Radon Health Risk in Canada and Sweden to Prevent Lung Cancer Among Non-Smokers

Authors: Selim M. Khan, Aaron A. Goodarzi, Joshua M. Taron, Tryggve Rönnqvist

Abstract:

Indoor air quality, a prime determinant of health, is strongly influenced by the presence of hazardous radon gas within the built environment. As a health issue, dangerously high indoor radon arose within the 20th century to become the 2nd leading cause of lung cancer. As 21st-century building metrics and human behaviors have captured, contained, and concentrated radon to yet higher and more hazardous levels, the issue is rapidly worsening in Canada. It is established that Canadians in the Prairies are the 2nd highest radon-exposed population in the world, with 1 in 6 residences experiencing 0.2-6.5 millisieverts (mSv) of radiation per week, whereas the Canadian Nuclear Safety Commission sets the maximum 5-year occupational limit for atomic workplace exposure at only 20 mSv. The situation is also deteriorating over time, with newer housing stock containing higher levels of radon. Deep machine learning (LSTM) algorithms were applied to analyze multiple quantitative and qualitative features, determine the most important contributory factors, and predict radon levels in the known past (1990-2020) and the projected future (2021-2050). The findings showed a gradual downward pattern in Sweden, whereas levels in Canada would continue to rise from high to higher over time. The main contributory factors were found to be basement porosity, roof insulation depth, R-factor, and the air dynamics of the indoor environment related to human window-opening behaviour. Building codes must consider including these factors to ensure adequate indoor ventilation and healthy living that can prevent lung cancer in non-smokers.
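
The abstract does not describe the network architecture or features; a minimal Keras LSTM sketch for this kind of time-series prediction, using a synthetic yearly radon series and an arbitrary layer size (all of which are assumptions, not the study's model), would be:

```python
# Minimal LSTM sketch for predicting a yearly indoor-radon series (synthetic data;
# architecture and features are illustrative, not the study's model).
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
years = np.arange(1990, 2021)
radon = 100 + 1.5 * (years - 1990) + rng.normal(0, 5, len(years))  # assumed trend [Bq/m3]

# Build sliding windows: predict next year's level from the previous 5 years
window = 5
X = np.array([radon[i:i + window] for i in range(len(radon) - window)])[..., None]
y = radon[window:]

model = tf.keras.Sequential([
    tf.keras.Input(shape=(window, 1)),
    tf.keras.layers.LSTM(16),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=200, verbose=0)

# Roll the model forward to project beyond the last observed year
last = radon[-window:].copy()
for _ in range(3):                              # project 3 years ahead
    nxt = float(model.predict(last[None, :, None], verbose=0)[0, 0])
    print(round(nxt, 1))
    last = np.append(last[1:], nxt)
```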

Keywords: radon, building metrics, deep learning, LSTM prediction model, lung cancer, Canada, Sweden

Procedia PDF Downloads 107
828 A Quantitative Model for Replacement of Medical Equipment Based on Technical and Environmental Factors

Authors: Ghadeer Mohammad Said El-Sheikh, Samer Mohamad Shalhoob

Abstract:

The operational state of medical equipment is a valid reflection of a health care organization's performance, as such equipment contributes greatly, on several levels, to the quality of healthcare services, in which quality improvement has become an intrinsic part of the discourse and activities of health care services. In healthcare organizations, clinical and biomedical engineering departments play an essential role in maintaining the safety and efficiency of such equipment. One of the most challenging topics concerning such sophisticated equipment is its lifespan, which many factors affect throughout its life cycle. Many attempts have been made to address this issue, but most of the approaches are somewhat arbitrary, and one criticism of existing approaches to estimating and understanding the lifetime of medical equipment is that they leave open the question of which environmental factors influence this critical characteristic. Our study addresses this shortcoming: in addition to the standard technical factors that a clinical engineer takes into consideration in the decision-making process in case of medical equipment failure, the dimension of environmental factors is added. The investigations and studies carried out to support the decision-making process of clinical engineers and to assess the lifespan of healthcare equipment in Lebanese society have depended heavily on identifying the technical criteria that impact the lifespan of medical equipment, while the relevant environmental factors have not received proper attention. The objective of our study is therefore to introduce a new, well-designed plan for evaluating medical equipment along two dimensions. According to this approach, the equipment that should be replaced or repaired is classified using a systematic method that takes into account two essential sets of criteria: the standard identified technical criteria and the added environmental criteria.

Keywords: technical, environmental, healthcare, characteristic of medical equipment

Procedia PDF Downloads 151
827 The Impact of CYP2C9 Gene Polymorphisms on Warfarin Dosing

Authors: Weaam Aldeeban, Majd Aljamali, Lama A. Youssef

Abstract:

Background & Objective: Warfarin is considered a problematic drug due to its narrow therapeutic window and wide inter-individual variation in response, which is attributed to demographic, environmental, and genetic factors, particularly single nucleotide polymorphisms (SNPs) in the genes encoding VKORC1 and CYP2C9, involved in warfarin's mechanism of action and metabolism, respectively. The CYP2C9*2 (rs1799853) and CYP2C9*3 (rs1057910) alleles are linked to reduced enzyme activity; carriers of either or both alleles are classified as intermediate or slow metabolizers and therefore exhibit higher sensitivity to warfarin compared with the wild type (CYP2C9*1*1). Our study aimed to assess the frequency of the *1, *2, and *3 alleles of the CYP2C9 gene in a cohort of Syrian patients receiving a maintenance dose of warfarin for different indications, the impact of genotype on warfarin dosing, and the frequency of adverse effects (i.e., bleeding). Subjects & Methods: This retrospective cohort study encompassed 94 patients treated with warfarin. Patients' genotypes were identified by sequencing the specific polymerase chain reaction (PCR) products of the gene encoding CYP2C9, and the effects on warfarin therapeutic outcomes were investigated. Results: Sequencing revealed that 43.6% of the study population carries the *2 and/or *3 SNPs. The mean weekly maintenance dose of warfarin was 37.42 ± 15.5 mg for patients with the wild-type allele (CYP2C9*1*1), whereas patients with one or both variants (*2 and/or *3) required a significantly lower dose (28.59 ± 11.58 mg) of warfarin (P = 0.015). A higher percentage (40.7%) of patients with the *2 and/or *3 alleles experienced hemorrhagic accidents, compared with only 17.9% of patients with the wild type *1*1 (P = 0.04). Conclusions: Our study demonstrates an association between the *2 and *3 genotypes and higher sensitivity to warfarin and a tendency to bleed, which necessitates lowering the dose. These findings emphasize the significance of CYP2C9 genotyping prior to commencing warfarin therapy in order to achieve optimal and faster dose control and to ensure effectiveness and safety.

Keywords: warfarin, CYP2C9, polymorphisms, Syrian, hemorrhage

Procedia PDF Downloads 141
826 An Adiabatic Quantum Optimization Approach for the Mixed Integer Nonlinear Programming Problem

Authors: Maxwell Henderson, Tristan Cook, Justin Chan Jin Le, Mark Hodson, YoungJung Chang, John Novak, Daniel Padilha, Nishan Kulatilaka, Ansu Bagchi, Sanjoy Ray, John Kelly

Abstract:

We present a method of using adiabatic quantum optimization (AQO) to solve a mixed integer nonlinear programming (MINLP) problem instance. The MINLP problem is a general form of a set of NP-hard optimization problems that are critical to many business applications. It requires optimizing a set of discrete and continuous variables with nonlinear and potentially nonconvex constraints. Obtaining an exact, optimal solution for MINLP problem instances of non-trivial size using classical computation methods is currently intractable. Current leading algorithms leverage heuristic and divide-and-conquer methods to determine approximate solutions. Creating more accurate and efficient algorithms is an active area of research. Quantum computing (QC) has several theoretical benefits compared to classical computing, through which QC algorithms could obtain MINLP solutions that are superior to current algorithms. AQO is a particular form of QC that could offer more near-term benefits compared to other forms of QC, as hardware development is in a more mature state and devices are currently commercially available from D-Wave Systems Inc. It is also designed for optimization problems: it uses an effect called quantum tunneling to explore all lowest points of an energy landscape where classical approaches could become stuck in local minima. Our work used a novel algorithm formulated for AQO to solve a special type of MINLP problem. The research focused on determining: 1) if the problem is possible to solve using AQO, 2) if it can be solved by current hardware, 3) what the currently achievable performance is, 4) what the performance will be on projected future hardware, and 5) when AQO is likely to provide a benefit over classical computing methods. Two different methods, integer range and 1-hot encoding, were investigated for transforming the MINLP problem instance constraints into a mathematical structure that can be embedded directly onto the current D-Wave architecture. For testing and validation a D-Wave 2X device was used, as well as QxBranch’s QxLib software library, which includes a QC simulator based on simulated annealing. Our results indicate that it is mathematically possible to formulate the MINLP problem for AQO, but that currently available hardware is unable to solve problems of useful size. Classical general-purpose simulated annealing is currently able to solve larger problem sizes, but does not scale well and such methods would likely be outperformed in the future by improved AQO hardware with higher qubit connectivity and lower temperatures. If larger AQO devices are able to show improvements that trend in this direction, commercially viable solutions to the MINLP for particular applications could be implemented on hardware projected to be available in 5-10 years. Continued investigation into optimal AQO hardware architectures and novel methods for embedding MINLP problem constraints on to those architectures is needed to realize those commercial benefits.
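
One of the two investigated transformations, 1-hot encoding, maps a discrete variable with k allowed values onto k binary variables plus a penalty forcing exactly one of them to be active. A small sketch of building such a QUBO penalty in plain Python (not the D-Wave or QxLib toolchain used in the study) is shown below.

```python
# Sketch of 1-hot encoding a discrete variable for a QUBO/AQO formulation (plain
# Python; not the D-Wave/QxLib code used in the study). A variable with allowed
# values {v1..vk} becomes k binaries x_i with penalty P * (sum_i x_i - 1)^2.
from itertools import combinations

def one_hot_qubo(values, linear_cost, penalty):
    """Return QUBO dict {(i, j): coeff} for min sum_i c(v_i) x_i + P (sum x_i - 1)^2."""
    Q = {}
    for i, v in enumerate(values):
        # expanding P*(sum x - 1)^2 gives -P on the diagonal (since x_i^2 = x_i) ...
        Q[(i, i)] = linear_cost(v) - penalty
    for i, j in combinations(range(len(values)), 2):
        # ... and +2P on every off-diagonal pair
        Q[(i, j)] = 2.0 * penalty
    return Q

def qubo_energy(Q, x):
    return sum(coeff * x[i] * x[j] for (i, j), coeff in Q.items())

# Discrete variable with allowed values 10, 20, 30 and cost c(v) = (v - 22)^2
values = [10, 20, 30]
Q = one_hot_qubo(values, lambda v: (v - 22) ** 2, penalty=1000.0)

# Brute-force check: the lowest-energy bitstring is a valid 1-hot state picking v = 20
states = [(b0, b1, b2) for b0 in (0, 1) for b1 in (0, 1) for b2 in (0, 1)]
best = min(states, key=lambda s: qubo_energy(Q, s))
print(best)   # (0, 1, 0) -> selects 20, the value closest to 22
```

The integer-range alternative uses fewer qubits (a binary expansion of the value) at the cost of a less direct mapping onto the hardware's connectivity.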

Keywords: adiabatic quantum optimization, mixed integer nonlinear programming, quantum computing, NP-hard

Procedia PDF Downloads 520
825 The Impact of Legislation on Waste and Losses in the Food Processing Sector in the UK/EU

Authors: David Lloyd, David Owen, Martin Jardine

Abstract:

Introduction: European weight regulations for food products require a full understanding of the regulatory guidelines to assure compliance. It is suggested that the complexity of the regulation leads to practices which result in the overfilling of food packages by food processors. Purpose: To establish current practices among food processors and the financial, sustainability and societal impacts of ineffective food production practices on the food supply chain. Methods: An analysis of food packing controls at 10 companies across varying food categories, and quantitative research with a further 15 food processors on confidence in the weight-control analysis of finished food packs within their organisations. Results: A process floor analysis of manufacturing operations focussing on 10 products found overfill of packages ranging from 4.8% to 20.2%. Standard deviation figures for all products showed a potential for reducing the average pack weight whilst still retaining the legal status of the product. In 20% of cases, an automatic weight analysis machine was in situ; however, packs were still significantly overweight. Collateral impacts included the effect of overfill on raw material purchasing and added food miles, often on a global basis, with one raw material alone accounting for 10,000 extra food miles due to poor weight control at the processing unit. Case studies of a meat product and a bakery product will be discussed, illustrating the impact of poor controls resulting from complex legislation; these highlight extra energy costs in production and the impact of the extra weight on fuel usage. A risk assessment model used primarily for food safety, adapted to identify waste and sustainability risks, will also be discussed within the presentation.
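As a rough illustration of how standard deviation figures translate into a potential reduction in average pack weight, the back-of-envelope sketch below uses entirely illustrative numbers; the "nominal weight plus a fixed number of standard deviations" target rule is a simplification assumed for the example, not a statement of the EU average-weight rules.

```python
# Back-of-envelope sketch of the overfill argument; all figures are illustrative,
# and the "nominal + k * sigma" target rule is a simplification, not the legal test.
nominal_g = 500.0        # declared pack weight (g)
sigma_g = 6.0            # standard deviation of the filling line (g)
current_mean_g = 524.0   # current average fill, i.e. 4.8% overfill
k = 2.0                  # assumed safety margin above nominal, in standard deviations

target_mean_g = nominal_g + k * sigma_g
saving_per_pack_g = current_mean_g - target_mean_g
overfill_now = 100.0 * (current_mean_g - nominal_g) / nominal_g
overfill_target = 100.0 * (target_mean_g - nominal_g) / nominal_g

print(f"overfill now: {overfill_now:.1f}%  ->  at target: {overfill_target:.1f}%")
print(f"raw material saved per pack: {saving_per_pack_g:.1f} g")

# Scaled over a year of production, small per-pack savings become significant.
packs_per_year = 5_000_000
print(f"annual raw material saving: {saving_per_pack_g * packs_per_year / 1e6:.1f} tonnes")
```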

Keywords: legislation, overfill, profile, waste

Procedia PDF Downloads 400
824 Assessing Moisture Adequacy over Semi-arid and Arid Indian Agricultural Farms using High-Resolution Thermography

Authors: Devansh Desai, Rahul Nigam

Abstract:

Crop water stress (W) at a given growth stage starts to set in as moisture availability (M) to the roots falls below 75% of its maximum. It has been found that the ratio of crop evapotranspiration (ET) to reference evapotranspiration (ET0) is an indicator of moisture adequacy and is strongly correlated with M and W. The spatial variability of ET0 over an agricultural farm of 1-5 ha is generally lower than that of ET, since ET depends on both surface and atmospheric conditions while ET0 depends only on atmospheric conditions. Solutions from surface energy balance (SEB) modelling and thermal infrared (TIR) remote sensing are now known to estimate the latent heat flux of ET. In the present study, ET and the moisture adequacy index (MAI) (= ET/ET0) have been estimated over two contrasting western Indian agricultural sites: a rice-wheat system in a semi-arid climate and an arid grassland system limited by moisture availability. High-resolution multi-band TIR observations at 65 m from the ECOSTRESS (ECOsystem Spaceborne Thermal Radiometer Experiment on Space Station) instrument on-board the International Space Station (ISS) were used in an analytical SEB model, STIC (Surface Temperature Initiated Closure), to estimate ET and MAI. The ancillary variables used in the ET modelling and MAI estimation were land surface albedo and NDVI from nearby LANDSAT data at 30 m spatial resolution, an ET0 product at 4 km spatial resolution from INSAT 3D, and meteorological forcing variables (air temperature and relative humidity) from short-range NWP weather forecasts. Farm-scale ET estimates at 65 m spatial resolution showed a low RMSE of 16.6% to 17.5% with R2 > 0.8 across 18 datasets when compared to in situ measurements from eddy covariance systems, against reported errors of 25-30% for coarser-scale ET at 1 to 8 km spatial resolution. The MAI showed lower (< 0.25) and higher (> 0.5) magnitudes in the two contrasting agricultural systems. The study showed the need for high-resolution, high-repeat spaceborne multi-band TIR payloads, along with optical payloads, to estimate farm-scale ET and MAI for quantifying consumptive water use and water stress. A set of future high-resolution multi-band TIR sensors is planned on-board the Indo-French TRISHNA, ESA’s LSTM, and NASA’s SBG space-borne missions to address sustainable irrigation water management at the farm scale and improve crop water productivity. These will provide precise and fundamental surface energy balance variables such as LST (Land Surface Temperature), surface emissivity, albedo and NDVI. Synchronization among these missions is needed in terms of observations, algorithms, product definitions, calibration-validation experiments and downstream applications to maximize the potential benefits.
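Since MAI is defined above as the simple ratio ET/ET0, the quoted thresholds can be applied per pixel once both fields are available on a common grid. The short sketch below illustrates this with placeholder arrays rather than actual ECOSTRESS or INSAT 3D retrievals.

```python
# Minimal sketch: compute MAI = ET / ET0 per pixel and apply the quoted thresholds.
# Array contents are placeholders, not actual ECOSTRESS / INSAT 3D retrievals.
import numpy as np

et = np.array([[1.2, 2.8, 3.5],
               [0.9, 2.1, 4.0]])    # farm-scale ET (mm/day), e.g. from STIC at 65 m
et0 = np.array([[5.0, 5.1, 5.0],
                [5.2, 5.0, 5.1]])   # reference ET (mm/day), smoother in space

mai = et / et0                      # moisture adequacy index

moisture_limited = mai < 0.25       # e.g. the arid grassland case
well_watered = mai > 0.5            # e.g. the irrigated rice-wheat case

print(np.round(mai, 2))
print("moisture-limited pixel fraction:", moisture_limited.mean())
print("well-watered pixel fraction:", well_watered.mean())
```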

Keywords: thermal remote sensing, land surface temperature, crop water stress, evapotranspiration

Procedia PDF Downloads 65
823 Anti-Diabetic Effect of High Purity Epigallocatechin Gallate from Green Tea

Authors: Hye Jin Choi, Mirim Jin, Jeong June Choi

Abstract:

Green tea, one of the most popular teas, contains various ingredients that benefit health. Epigallocatechin gallate (EGCG), one of the main active polyphenolic compounds found in green tea, possesses diverse biologically beneficial effects such as anti-oxidant and anti-cancer activity. This study was performed to investigate the anti-diabetic effect of high-purity EGCG (> 98%) in a spontaneous diabetes mellitus animal model, the db/db mouse. Four-week-old male db/db mice, in which diabetes mellitus was induced by a high-fat diet, were orally administered high-purity EGCG (10, 50 and 100 mg/kg) for 4 weeks. Daily weight and diet efficiency were examined, and blood glucose level was assessed once a week. After 4 weeks of EGCG administration, fasting blood glucose level was measured. The mice were then sacrificed, and total abdominal fat was sampled to examine the change in fat weight. Plasma was separated from the blood, and the levels of alanine aminotransferase (ALT) and aspartate aminotransferase (AST) were determined. As a result, blood glucose and body weight were significantly decreased by EGCG treatment compared to the control group. The amount of abdominal fat was also reduced by EGCG. However, ALT and AST levels, which are indicators of liver function, were similar to those of the control group. Taken together, our study suggests that high-purity EGCG safely ameliorates diabetes mellitus in db/db mice and has potential for development as a therapeutic for metabolic disorders. This work was supported by the Korea Institute of Planning and Evaluation for Technology in Food, Agriculture and Forestry (IPET) through the High Value-added Food Technology Development Program, funded by the Ministry of Agriculture, Food and Rural Affairs (MAFRA) (317034-03-2-HD030).

Keywords: anti-diabetic effect, db/db mouse, diabetes mellitus, green tea, epigallocatechin gallate

Procedia PDF Downloads 179
822 Housing First, Not Housing Only: The Life Skills Project

Authors: Sara Cumming, Julianne DiSanto, Leah Burton

Abstract:

Homelessness in Canada is a persistent problem. It has been widely argued that the best tactic for eradicating homelessness is to approach social issues from a Housing First perspective: an approach that centers on quickly moving people into permanent, independent housing and then providing additional support and services as needed. It is recognized that life skills training is both necessary and an effective way to reduce cyclical homelessness; however, there is a scarcity of research on effective ways to teach life skills, a problem that was exacerbated in a pandemic context, where in-person delivery was severely restricted or no longer possible. Very little attention has been paid to the diverse cultural needs of clients in a multicultural context and to the need to foster cultural knowledge and awareness in individuals so that they can contribute to the cultural safety of communities. This research attempts to fill these gaps in the literature and in practice by employing a community-engaged research (CER) approach. Academics, government, funders, front-line staff, and clients at 15 not-for-profits from across the Greater Toronto Area in Ontario, Canada, collaborated to co-create a virtual, client-centric, equity, diversity, and inclusion (EDI) informed life skills learning management system. We employed a triangulation methodology for this research. An environmental scan of best practices was conducted. Two separate Creative Problem Solving Sessions were held with over 100 front-line workers, managers, and executive directors who work with homeless populations. Quantitative and open-ended surveys were completed by over 200 individuals with experience of homelessness. All strands of this research aimed to discover the skill areas that individuals need to maintain housing and to ascertain what a more client-driven, EDI-informed approach to life skills training should include. This research will showcase which life skills are deemed essential for homeless and precariously housed individuals.

Keywords: homelessness, Housing First, life skills, community engaged research

Procedia PDF Downloads 62
821 Grid and Market Integration of Large Scale Wind Farms using Advanced Predictive Data Mining Techniques

Authors: Umit Cali

Abstract:

The integration of intermittent energy sources like wind farms into the electricity grid has become an important challenge for the utilization and control of electric power systems because of the fluctuating behaviour of wind power generation. Wind power predictions improve the economic and technical integration of large amounts of wind energy into the existing electricity grid. Trading, balancing, grid operation, controllability and safety issues increase the importance of predicting power output for wind power operators. Therefore, wind power forecasting systems have to be integrated into the monitoring and control systems of the transmission system operator (TSO) and of wind farm operators/traders. Wind forecasts are relatively precise only for a horizon of a few hours and are therefore most relevant to spot and intraday markets. In this work, predictive data mining techniques are applied to identify a statistical or neural network model, or set of models, that can be used to predict the power output of large onshore and offshore wind farms. These advanced data analytic methods help us amalgamate the information in very large meteorological, oceanographic and SCADA data sets into useful information and manageable systems. Accurate wind power forecasts are beneficial for wind plant operators, utility operators, and utility customers. An accurate forecast allows grid operators to schedule economically efficient generation to meet the demand of electrical customers. This study is also dedicated to an in-depth consideration of issues such as the comparison of day-ahead and short-term wind power forecasting results, the determination of wind power prediction accuracy, and the evaluation of the energy-economic and technical benefits of wind power forecasting.
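The abstract does not specify the model architecture, so the following sketch is only a hedged illustration of the neural-network branch of such a forecasting system: a small multilayer perceptron (scikit-learn's MLPRegressor) trained to map a few weather-forecast features to normalized wind farm power output, with synthetic data standing in for the meteorological and SCADA sets described above.

```python
# Hedged illustration only: a small neural-network wind power forecaster trained on
# synthetic data; a real system would use NWP, oceanographic and SCADA inputs.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)
n = 2000
wind_speed = rng.uniform(0, 24, n)           # forecast hub-height wind speed (m/s)
wind_dir = rng.uniform(0, 360, n)            # forecast wind direction (degrees)
air_density = rng.normal(1.225, 0.02, n)     # air density (kg/m^3)

# Toy "true" normalized power curve: cubic ramp between cut-in (3 m/s) and rated (12 m/s).
power = np.clip((wind_speed - 3.0) ** 3 / (12.0 - 3.0) ** 3, 0.0, 1.0)
power = power * air_density / 1.225 + rng.normal(0, 0.02, n)   # density scaling + noise

X = np.column_stack([wind_speed,
                     np.sin(np.radians(wind_dir)),
                     np.cos(np.radians(wind_dir)),
                     air_density])
X_tr, X_te, y_tr, y_te = train_test_split(X, power, test_size=0.25, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print("test MAE (fraction of rated power):",
      round(mean_absolute_error(y_te, model.predict(X_te)), 4))
```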

Keywords: renewable energy sources, wind power, forecasting, data mining, big data, artificial intelligence, energy economics, power trading, power grids

Procedia PDF Downloads 510
820 Unveiling Drought Dynamics in the Cuneo District, Italy: A Machine Learning-Enhanced Hydrological Modelling Approach

Authors: Mohammadamin Hashemi, Mohammadreza Kashizadeh

Abstract:

Droughts pose a significant threat to sustainable water resource management, agriculture, and socioeconomic sectors, particularly in the context of climate change. This study investigates drought simulation using rainfall-runoff modelling in the Cuneo district, Italy, over the past 60 years. The study leverages the TUW model, a lumped conceptual rainfall-runoff model with semi-distributed operation capability. Similar in structure to the widely used Hydrologiska Byråns Vattenbalansavdelning (HBV) model, the TUW model operates on daily timesteps for input and output data specific to each catchment. It incorporates essential routines for snow accumulation and melting, soil moisture storage, and streamflow generation. Discharge data from multiple catchments within the Cuneo district form the basis for thorough model calibration employing the Kling-Gupta Efficiency (KGE) metric. A crucial metric for reliable drought analysis is one that can accurately represent low-flow events during drought periods; this ensures that the model provides a realistic picture of water availability during these critical times. Subsequent validation of monthly discharge simulations thoroughly evaluates overall model performance. Beyond model development, the investigation delves into drought analysis using the robust Standardized Runoff Index (SRI). This index allows for precise characterization of drought occurrences within the study area. A meticulous comparison of observed and simulated discharge data is conducted, with a particular focus on the low-flow events that characterize droughts. Additionally, the study explores the complex interplay between land characteristics (e.g., soil type, vegetation cover) and climate variables (e.g., precipitation, temperature) that influence the severity and duration of hydrological droughts. The study's findings demonstrate successful calibration of the TUW model across most catchments, achieving commendable model efficiency. Comparative analysis between simulated and observed discharge data reveals significant agreement, especially during critical low-flow periods. This agreement is further supported by the Pareto coefficient, a statistical measure of goodness-of-fit. The drought analysis provides critical insights into the duration, intensity, and severity of drought events within the Cuneo district. This understanding of spatial and temporal drought dynamics offers valuable information for water resource management strategies and drought mitigation efforts. This research deepens our understanding of drought dynamics in the Cuneo region. Future research directions include refining hydrological modelling techniques and exploring future drought projections under various climate change scenarios.
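The Kling-Gupta Efficiency used for calibration decomposes model error into correlation, variability, and bias components. The minimal sketch below implements its standard form, KGE = 1 - sqrt((r - 1)^2 + (alpha - 1)^2 + (beta - 1)^2), and applies it to placeholder discharge series rather than the Cuneo catchment records.

```python
# Minimal sketch of the Kling-Gupta Efficiency (Gupta et al., 2009 form) on placeholder data.
import numpy as np

def kling_gupta_efficiency(obs, sim):
    """KGE = 1 - sqrt((r - 1)^2 + (alpha - 1)^2 + (beta - 1)^2)."""
    obs, sim = np.asarray(obs, dtype=float), np.asarray(sim, dtype=float)
    r = np.corrcoef(obs, sim)[0, 1]       # linear correlation
    alpha = sim.std() / obs.std()         # variability ratio
    beta = sim.mean() / obs.mean()        # bias ratio
    return 1.0 - np.sqrt((r - 1.0) ** 2 + (alpha - 1.0) ** 2 + (beta - 1.0) ** 2)

# Placeholder monthly discharge series (m^3/s), not the Cuneo catchment records.
obs = np.array([12.0, 9.5, 20.1, 35.2, 28.4, 15.0, 8.2, 5.1, 6.4, 10.3, 18.7, 22.5])
sim = np.array([11.2, 10.1, 18.9, 33.0, 30.1, 16.2, 7.5, 4.8, 7.0, 9.8, 17.5, 24.0])

print(f"KGE = {kling_gupta_efficiency(obs, sim):.3f}")
```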

Keywords: hydrologic extremes, hydrological drought, hydrological modelling, machine learning, rainfall-runoff modelling

Procedia PDF Downloads 34