Search results for: prosthetic control
2749 Nanofluid-Based Emulsion Liquid Membrane for Selective Extraction and Separation of Dysprosium
Authors: Maliheh Raji, Hossein Abolghasemi, Jaber Safdari, Ali Kargari
Abstract:
Dysprosium is a rare earth element essential for many growing high-technology applications. Together with neodymium, it plays a significant role in applications such as metal halide lamps, permanent magnets, and nuclear reactor control rods. The purification and separation of rare earth elements are challenging because of their similar chemical and physical properties. Among the various methods, membrane processes provide many advantages over conventional separation processes such as ion exchange and solvent extraction. In this work, selective extraction and separation of dysprosium from aqueous solutions containing an equimolar mixture of dysprosium and neodymium by emulsion liquid membrane (ELM) was investigated. The organic membrane phase of the ELM was a nanofluid consisting of multiwalled carbon nanotubes (MWCNT), Span 80 as surfactant, Cyanex 272 as carrier, and kerosene as base fluid, with nitric acid solution as the internal aqueous phase. Factors affecting the separation of dysprosium, such as carrier concentration, MWCNT concentration, feed phase pH, and stripping phase concentration, were analyzed using the Taguchi method. The optimal experimental condition was obtained using analysis of variance (ANOVA) after 10 min of extraction. Based on the results, using MWCNT nanofluid in the ELM process increases extraction, owing to higher membrane stability and enhanced mass transfer, and a separation factor of 6 for dysprosium over neodymium can be achieved under the optimum conditions. Additionally, the demulsification process was performed successfully, and the membrane phase was reused effectively under the optimum conditions.
Keywords: emulsion liquid membrane, MWCNT nanofluid, separation, Taguchi method
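The separation factor quoted above is the ratio of the two metals' distribution ratios between the membrane and feed phases. A minimal sketch of the calculation, using hypothetical concentrations rather than the paper's measurements:

```python
# Separation factor of Dy over Nd in a single extraction step.
def distribution_ratio(c_initial, c_final_aqueous):
    """Distribution ratio D = (amount extracted) / (amount left in the feed)."""
    return (c_initial - c_final_aqueous) / c_final_aqueous

def separation_factor(d_dy, d_nd):
    """beta(Dy/Nd) = D_Dy / D_Nd; beta > 1 means the membrane favours Dy."""
    return d_dy / d_nd

# Hypothetical equimolar feed concentrations (mmol/L), for illustration only.
d_dy = distribution_ratio(10.0, 2.5)   # D_Dy = 3.0
d_nd = distribution_ratio(10.0, 6.67)  # D_Nd ≈ 0.5
print(round(separation_factor(d_dy, d_nd)))  # → 6
```

With these illustrative numbers the sketch reproduces a separation factor of about 6, matching the order of selectivity reported in the abstract.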
Procedia PDF Downloads 287
2748 Postgraduate Supervision Relationship: Practices, Challenges, and Strategies of Stakeholders in the Côte d’Ivoire University System
Authors: Akuélé Radha Kondo, Kathrin Heitz-Tokpa, Bassirou Bonfoh, Francis Akindes
Abstract:
Postgraduate supervision contributes significantly to a student’s academic career, a supervisor’s promotion, and a university’s reputation. Despite this, time to graduation in the Côte d’Ivoire university system exceeds the normal duration of two years for a master's and three years for a PhD. The paper analyses supervision practices with regard to the challenges and strategies mobilised by students, supervisors, and administrative staff to manage various relationships. Using a qualitative research design, this study was conducted at three public universities in Côte d’Ivoire. Data were generated from thirty-two postgraduate students, seventeen supervisors, and four administrative staff through semi-structured interviews. Data were analysed using content analysis and presented thematically. Findings revealed two types of supervision relationship practices: delegated supervision and co-supervision. Students pointed out that feedback from their supervisors is often delayed in delegated supervision. However, they acknowledged receiving input and scientific guidance. All students believed that their role is to be proactive rather than wait to receive everything from the supervisor, and that they need to be more autonomous and hardworking; they developed strategies around these qualities. Supervisors were expected to guide, advise, control, motivate, provide critical feedback, and validate the work. The administration was largely absent in monitoring supervision delays. Major challenges related to the supervision relationships and access to research funds. The study showed that greater engagement of the main supervisor, administrative monitoring, and secured funding would reduce the time to graduation and increase the completion rate.
Keywords: Côte d’Ivoire, postgraduate supervision, practices, strategies
Procedia PDF Downloads 94
2747 A Review of Deep Learning Methods in Computer-Aided Detection and Diagnosis Systems based on Whole Mammogram and Ultrasound Scan Classification
Authors: Ian Omung'a
Abstract:
Breast cancer remains one of the deadliest cancers for women worldwide, with the risk of developing tumors as high as 50 percent in Sub-Saharan African countries like Kenya. With as many as 42 percent of these cases diagnosed late, after the cancer has metastasized and/or the prognosis has become terminal, Full Field Digital [FFD] Mammography remains an effective screening technique that leads to early detection, where in most cases successful interventions can be made to control or eliminate the tumors altogether. FFD mammograms have proven markedly more effective when used together with Computer-Aided Detection and Diagnosis [CADe] systems, which rely on algorithmic implementations of Deep Learning techniques in Computer Vision to carry out deep pattern recognition comparable to the level of a human radiologist, decipher whether specific areas of interest in the mammogram scan image show abnormalities, and determine whether those abnormalities are indicative of a benign or malignant tumor. Within this paper, we review emergent Deep Learning techniques relevant to the development of state-of-the-art FFD mammogram CADe systems. These techniques span self-supervised learning for context-encoded occlusion, self-supervised learning for pre-processing and labeling automation, and the creation of a standardized large-scale mammography dataset as a benchmark for CADe systems' evaluation. Finally, comparisons are drawn between existing practices that pre-date these techniques and how the development of CADe systems that incorporate them will differ.
Keywords: breast cancer diagnosis, computer aided detection and diagnosis, deep learning, whole mammogram classification, ultrasound classification, computer vision
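Self-supervised context encoding of the kind mentioned above trains a network to reconstruct a masked region of the scan. A toy sketch of the masking step only, with the image size, values, and patch size chosen purely for illustration:

```python
import random

def occlude_patch(image, patch, seed=0):
    """Zero out a random patch x patch square of a 2-D image (list of lists).
    The zeroed region is the reconstruction target a context encoder learns."""
    rng = random.Random(seed)
    h, w = len(image), len(image[0])
    top = rng.randrange(h - patch + 1)
    left = rng.randrange(w - patch + 1)
    out = [row[:] for row in image]  # leave the original image untouched
    for r in range(top, top + patch):
        for c in range(left, left + patch):
            out[r][c] = 0
    return out, (top, left)

# Hypothetical 8x8 "image" of ones; a real pipeline would use scan pixels.
img = [[1] * 8 for _ in range(8)]
masked, origin = occlude_patch(img, 3)
print(sum(v == 0 for row in masked for v in row))  # → 9 masked pixels
```

In a full pipeline the pair (masked image, original patch) becomes a free training example, which is what makes the approach self-supervised.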
Procedia PDF Downloads 92
2746 Hepatic Regenerative Capacity after Acetaminophen-Induced Liver Injury in Mouse Model
Authors: N. F. Hamid, A. Kipar, J. Stewart, D. J. Antoine, B. K. Park, D. P. Williams
Abstract:
Acetaminophen (APAP) is a widely used analgesic that is safe at therapeutic doses. The mouse model of APAP has been used extensively for studies on the pathogenesis of, and intervention in, drug-induced liver injury based on the CytP450-mediated formation of N-acetyl-p-benzoquinone imine and, more recently, as a model for mechanism-based biomarkers. This study reports a delay in fasted CD1 mice rebounding to basal hepatic GSH levels compared with fed mice. Histologically, mice fasted for 15 hours prior to APAP treatment showed overall more intense cell loss with no evidence of apoptosis, whereas in non-fasted mice apoptotic cells were clearly seen on cleaved caspase-3 immunostaining. By 15 hours after APAP administration, hepatocytes in fed mice had entered a stage of recovery, with evidence of mitotic figures, and showed no histological difference from controls at 24 hours. In contrast, evidence of ongoing cell damage and inflammatory cell infiltration was still present in fasted mice until the end of the study. To further measure the regenerative capacity of the hepatocytes, inflammatory cytokines involved in the progression or regression of the toxicity, such as TNF-α and IL-6, were also measured in liver and spleen using RT-qPCR. Quantification of proliferating cell nuclear antigen (PCNA) demonstrated that hepatic regeneration takes longer in fasted than in fed mice. Together, these data suggest that fasting prior to APAP treatment not only modulates liver injury but may also delay the subsequent regeneration of hepatocytes.
Keywords: acetaminophen, liver, proliferating cell nuclear antigen, regeneration, apoptosis
Procedia PDF Downloads 428
2745 Power System Cyber Security Risk in the Era of Digital Transformation
Authors: Rafat Rob, Khaled Alotaibi, Dana Nour, Abdullah Albadrani, Abdulmohsen Mulhim
Abstract:
Power system digitization solutions provide a comprehensive, smart, cohesive, interconnected network, with extensive connectivity between digital assets, physical power plants, and resources to form digital economies. However, digitization has exposed classical air-gapped power plants to the rapid spread of cyber threats and attacks, in the process delaying and forcing many organizations to rethink their cyber security policies and standards before they can augment their operations with new advanced digital devices. Cyber security requirements for power systems (and the industrial control systems therein) demand a new approach, a unique methodology, and a design process completely different from cyber security measures designed for IT systems. In practice, cyber security strategy as applied to power systems tends to be closely aligned with measures applied for IT system purposes. The differentiator for cyber security in power systems is the physical assets and applications used, alongside the ever-growing rate of expansion within the industrial controls sector (in comparison to the relatively saturated growth observed for corporate IT systems). These factors increase the magnitude of the cyber security risk within such systems. The introduction of smart devices and sensors along the grid creates vulnerable entry points to the systems. Every installed smart meter is a target; the way these devices communicate with each other may enable a Denial of Service (DoS) or Distributed Denial of Service (DDoS) attack. Attacking one sensor or meter has the potential to propagate throughout the power grid to the IT network, where it may manifest as a malware infiltration.
Keywords: supply chain, cybersecurity, maturity model, risk, smart grid
Procedia PDF Downloads 111
2744 Evaluating the Hepato-Protective Activities of Combination of Aqueous Extract of Roots of Tinospora cordifolia and Rhizomes of Curcuma longa against Paracetamol Induced Hepatic Damage in Rats
Authors: Amberkar Mohanbabu Vittalrao, Avin, Meena Kumari Kamalkishore, Padmanabha Udupa, Vinaykumar Bavimane, Honnegouda
Abstract:
Objective: To evaluate the hepato-protective activity of Tinospora cordifolia (Tc) against paracetamol-induced hepatic damage in rats. Methods: The plant stem (test drug) was procured locally, shade dried, powdered, and extracted with water. Silymarin was used as the standard hepatoprotective drug and 2% gum acacia as a control (vehicle) against paracetamol (PCT) induced hepatotoxicity. Results and Discussion: The hepato-protective activity of the aqueous stem extract was assessed in a paracetamol-induced hepatotoxicity preventive model in rats. Alterations in the levels of biochemical markers of hepatic damage such as AST, ALT, ALP, and lipid peroxides were tested in both paracetamol-treated and untreated groups. Paracetamol (3 g/kg) elevated AST, ALT, ALP, and lipid peroxides in the serum. Treatment with silymarin and the aqueous stem extract of Tc (200 and 400 mg/kg) showed significant hepatoprotective activity by restoring biochemical marker levels to near normal. Preliminary phytochemical tests were performed; the aqueous Tc extract showed the presence of phenolic compounds and flavonoids. Our findings suggested that the Tc extract possessed hepatoprotective activity in a dose-dependent manner. Conclusions: Tc was found to possess significant hepatoprotective properties when co-administered with PCT, as evidenced by a significant decrease in liver enzymes compared with the PCT-only treated group (P < 0.05). Hence, Tinospora cordifolia could be a promising preventive agent against PCT-induced hepatotoxicity.
Keywords: Tinospora cordifolia, hepatoprotection, paracetamol, silymarin
Procedia PDF Downloads 201
2743 Mending Broken Fences Policing: Developing the Intelligence-Led/Community-Based Policing Model (IP-CP) and Quality/Quantity/Crime (QQC) Model
Authors: Anil Anand
Abstract:
Despite enormous strides made during the past decade, particularly with the adoption and expansion of community policing, there remains much that police leaders can do to improve police-public relations. The urgency is particularly evident in cities across the United States and Europe, where an increasing number of police interactions over the past few years have ignited large, sometimes even national, protests against police policy and strategy, highlighting a gap between what police leaders feel they have achieved in terms of public satisfaction, support, and legitimacy and the perception of bias among many marginalized communities. The decision on which policing strategy is chosen over another, how many resources are allocated, and how strenuously the policy is applied resides primarily with the police and the units and subunits tasked with its enforcement. The scope and opportunity for police officers to impact social attitudes and social policy are important elements that cannot be overstated. How do police leaders, for instance, decide when to apply one strategy, say community-based policing, over another, like intelligence-led policing? How do police leaders measure performance and success? Should these measures be based on quantitative preferences over qualitative, or should the preference be based on some other criteria? And how do police leaders define, allow, and control discretionary decision-making? Mending Broken Fences Policing provides police and security services leaders with a model based on social cohesion that incorporates intelligence-led and community policing (IP-CP), supplemented by a quality/quantity/crime (QQC) framework, to provide a four-step process for the articulable application of police intervention, performance measurement, and application of discretion.
Keywords: social cohesion, quantitative performance measurement, qualitative performance measurement, sustainable leadership
Procedia PDF Downloads 294
2742 Directly Observed Treatment Short-Course (DOTS) for TB Control Program: A Ten Years Experience
Authors: Solomon Sisay, Belete Mengistu, Woldargay Erku, Desalegne Woldeyohannes
Abstract:
Background: Tuberculosis is still a leading cause of illness in the world, accounting for 2.5% of the global burden of disease and 25% of all avoidable deaths in developing countries. Objectives: The aim of this study was to assess the impact of the DOTS strategy on tuberculosis case finding and treatment outcome in Gambella Regional State, Ethiopia, from 2003 to 2012 and from 2002 to 2011, respectively. Methods: A health facility-based retrospective study was conducted. Data on TB case finding and treatment outcome were collected and reported on a quarterly basis, using the WHO reporting format, from all DOTS-implementing health facilities in all zones of the region to the Federal Ministry of Health. Results: A total of 10024 cases of all forms of TB were registered between 2003 and 2012. Of these, 4100 (40.9%) were smear-positive pulmonary TB, 3164 (31.6%) were smear-negative pulmonary TB, and 2760 (27.5%) were extra-pulmonary TB. The case detection rate (CDR) of smear-positive pulmonary TB increased from 31.7% to 46.5% of total TB cases, and the treatment success rate (TSR) increased from 13% to 92%, with mean values of 40.9% (SD = 0.1) and 55.7% (SD = 0.28), respectively, for the specified periods. Moreover, the mean treatment defaulter and treatment failure rates were 4.2% and 0.3%, respectively. Conclusion: The recommended WHO targets of a 70% CDR for smear-positive pulmonary TB and an 85% TSR are achievable; the treatment target was already met, with success rates above 85%, from 2009 to 2011 in the region. However, strong efforts are required to raise the 40.9% case detection rate for smear-positive pulmonary TB by implementing alternative case-finding strategies.
Keywords: Gambella Region, case detection rate, directly observed treatment short-course, treatment success rate, tuberculosis
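The case-mix percentages in the abstract follow directly from the reported counts; a quick arithmetic check (the counts are from the abstract, the helper function is ours):

```python
def pct(part, whole):
    """Percentage of `part` in `whole`, rounded to one decimal place."""
    return round(100 * part / whole, 1)

total_tb = 10024
smear_pos, smear_neg, extra_pulm = 4100, 3164, 2760

# The three categories account for every registered case.
assert smear_pos + smear_neg + extra_pulm == total_tb

print(pct(smear_pos, total_tb))   # → 40.9 (smear-positive share)
print(pct(smear_neg, total_tb))   # → 31.6 (smear-negative share)
print(pct(extra_pulm, total_tb))  # → 27.5 (extra-pulmonary share)
```

All three shares match the figures reported above, confirming the internal consistency of the case registration data.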
Procedia PDF Downloads 342
2741 Comparative Ethnography and Urban Health: A Multisite Study on Obesogenic Cities
Authors: Carlos Rios Llamas
Abstract:
Urban health challenges, like the obesity epidemic, need to be studied through a dialogue between different disciplines and geographical conditions. Public health relies on quantitative analysis and local samples, but qualitative data and multisite analysis would help to better understand how obesity has become a health problem. In recent decades, obesity rates have increased in most countries, especially in the Western world. Concerned about the problem, the American Medical Association recently voted to classify obesity as a disease. Suddenly, a 'war on obesity' attracted scientists from different disciplines to explore various ways to control and even reverse the trends. Medical sciences have taken the lead with quantitative methodologies focused on individual behaviors. Only a few scientists have extended their studies to the environment where obesity is produced as a social risk, and fewer still have taken political and cultural aspects into consideration. This paper presents a multisite ethnography in the South Bronx, USA; La Courneuve, France; and Lomas del Sur, Mexico, where obesity rates are as salient as urban degradation. The comparative ethnography offers a way to unveil the mechanisms that produce health risks within the urban fabric. The analysis considers three main categories: 1) built environment and access to food and physical activity, 2) biocultural construction of the healthy body, and 3) urban inequalities related to health and body size. Major findings from this comparative ethnography of obesogenic environments concern the anthropological values attached to food and body image, as well as the multidimensional oppression experienced by fat people who live in stigmatized urban zones. In the end, obesity, like many other diseases, is the result of political and cultural constructions structured by urbanization processes.
Keywords: comparative ethnography, urban health, obesogenic cities, biopolitics
Procedia PDF Downloads 246
2740 Marketing of Non Timber Forest Products and Forest Management in Kaffa Biosphere Reserve, Ethiopia
Authors: Amleset Haile
Abstract:
Non-timber forest products (NTFPs) are harvested for both subsistence and commercial use and play a key role in the livelihoods of millions of rural people. NTFPs are an important source of household income in Kaffa, rural southwest Ethiopia. Market players at various levels in the marketing chains were interviewed to gather information on the elements of the marketing system: products, product differentiation, value addition, pricing, promotion, distribution, and marketing chains. The study, therefore, was conducted in the Kaffa Biosphere Reserve of southwest Ethiopia with the main objectives of assessing and analyzing the contribution of NTFPs to rural livelihoods and to the conservation of the biosphere reserve, and of identifying factors influencing the marketing of NTFPs. Five villages were selected based on their proximity gradient from Bonga town and the availability of NTFPs. A formal survey was carried out on rural households selected using stratified random sampling. The results indicate that local people practice diverse livelihood activities, mainly crop cultivation (cereals and cash crops), livestock husbandry, gathering of forest products, and off-farm/off-forest activities, for survival. NTFP trade is not a common phenomenon in southwest Ethiopia. The greatest opportunity exists for local-level marketing of spices and other non-timber forest products. Very little local value addition takes place within the region, and as a result local market players have little control. Policy interventions are required to enhance the returns to local collectors, which will also contribute to the sustainable management of forest resources in the Kaffa Biosphere Reserve.
Keywords: forest management, biosphere reserve, marketing, local people
Procedia PDF Downloads 538
2739 Enhancing Project Performance Forecasting using Machine Learning Techniques
Authors: Soheila Sadeghi
Abstract:
Accurate forecasting of project performance metrics is crucial for successfully managing and delivering urban road reconstruction projects. Traditional methods often rely on static baseline plans and fail to consider the dynamic nature of project progress and external factors. This research proposes a machine learning-based approach to forecast project performance metrics, such as cost variance and earned value, for each Work Breakdown Structure (WBS) category in an urban road reconstruction project. The proposed model utilizes time series forecasting techniques, including Autoregressive Integrated Moving Average (ARIMA) and Long Short-Term Memory (LSTM) networks, to predict future performance based on historical data and project progress. The model also incorporates external factors, such as weather patterns and resource availability, as features to enhance the accuracy of forecasts. By applying the predictive power of machine learning, the performance forecasting model enables proactive identification of potential deviations from the baseline plan, which allows project managers to take timely corrective actions. The research aims to validate the effectiveness of the proposed approach using a case study of an urban road reconstruction project, comparing the model's forecasts with actual project performance data. The findings of this research contribute to the advancement of project management practices in the construction industry, offering a data-driven solution for improving project performance monitoring and control.
Keywords: project performance forecasting, machine learning, time series forecasting, cost variance, earned value management
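As one illustration of the time-series component, an AR(1) model, the simplest member of the ARIMA family, can be fitted to a cost-variance series and rolled forward. This is a sketch under simplifying assumptions (no intercept, no differencing or moving-average terms), with hypothetical figures rather than project data:

```python
def ar1_fit(series):
    """Least-squares AR(1) coefficient (no intercept): x_t ≈ phi * x_{t-1}."""
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(x * x for x in series[:-1])
    return num / den

def forecast(series, steps, phi):
    """Roll the fitted model forward from the last observation."""
    out, last = [], series[-1]
    for _ in range(steps):
        last = phi * last
        out.append(last)
    return out

# Hypothetical weekly cost-variance figures for one WBS category.
cv = [8.0, 4.0, 2.0, 1.0]
phi = ar1_fit(cv)            # exactly 0.5 for this geometric series
print(forecast(cv, 2, phi))  # → [0.5, 0.25]
```

A production model would add an intercept, differencing, and MA terms (or use an LSTM as the abstract proposes), but the fit-then-roll-forward structure is the same.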
Procedia PDF Downloads 46
2738 An Ultrasonic Approach to Investigate the Effect of Aeration on Rheological Properties of Soft Biological Materials with Bubbles Embedded
Authors: Hussein M. Elmehdi
Abstract:
In this paper, we present the results of recent experiments examining the effect of air bubbles, introduced into bio-samples during preparation, on the rheological properties of soft biological materials. To achieve this, we prepared three samples, each in a different way. Our soft biological systems comprised three types of flour dough made from different flour varieties with varying protein concentrations. The samples were investigated using ultrasonic waves operated at low frequency in transmission mode. The samples investigated included dough made from bread flour, wheat flour, and all-purpose flour. During mixing, the main ingredient of the samples (the flour) was transformed into a cohesive dough comprising the continuous dough matrix and air bubbles. The rheological properties of such materials determine the quality of the end cereal product. Two ultrasonic parameters, the longitudinal velocity and the attenuation coefficient, were found to be very sensitive to properties such as the size of the occluded bubbles, and hence have great potential for providing quantitative evaluation of the properties of such materials. The results showed that the magnitudes of the ultrasonic velocity and attenuation coefficient peaked at optimum mixing times, the latter of which is taken as an indication of the end of the mixing process. There was agreement between the results obtained by conventional rheology and ultrasound measurements, thus showing the potential of ultrasound as an on-line quality control technique for dough-based products. The results of this work are explained with respect to the molecular changes occurring in the dough system as the mixing process proceeds; particular emphasis is placed on the presence of free water and bound water.
Keywords: ultrasound, soft biological materials, velocity, attenuation
Procedia PDF Downloads 276
2737 Learning-by-Heart vs. Learning by Thinking: Fostering Thinking in Foreign Language Learning: A Comparison of Two Approaches
Authors: Danijela Vranješ, Nataša Vukajlović
Abstract:
Turning to learner-centered teaching instead of the teacher-centered approach brought a whole new perspective to the process of teaching and learning and set a new goal for improving the educational process itself. Recently, however, a tremendous decline in students’ performance on various standardized tests can be observed, above all on the PISA test. Learner-centeredness on its own is no longer enough: students’ ability to think is deteriorating. Especially in foreign language learning, one encounters a great deal of learning by heart: whether it is grammar or vocabulary, teachers often seem to judge students’ success merely on how well they can recall a specific word, phrase, or grammar rule, and rarely aim to foster their ability to think. Convinced that foreign language teaching can do both, this research aims to discover how two different approaches to teaching a foreign language foster students’ ability to think, as well as to what degree they help students reach the state-determined foreign language level at the end of the semester, as defined in the Common European Framework of Reference (CEFR). For this purpose, two different curricula were developed: one is a traditional, learner-centered foreign language curriculum that aims at teaching the four competences as defined in the CEFR and serves as a control variable, whereas the second has been enriched with various thinking routines and aims at teaching the foreign language as a means to communicate ideas and thoughts rather than reducing it to the four competences. Moreover, two types of tests were created for each approach, each based on the content taught during the semester. One aims to test the students’ competences as defined in the CEFR, and the other aims to test students’ ability to draw on the knowledge gained and reach their own conclusions based on the content taught during the semester. As this is an ongoing study, the results are yet to be interpreted.
Keywords: Common European Framework of Reference, foreign language learning, foreign language teaching, testing and assessment
Procedia PDF Downloads 104
2736 Fall Prevention: Evidence-Based Intervention in Exercise Program Implementation for Keeping Older Adults Safe and Active
Authors: Jennifer Holbein, Maritza Wiedel
Abstract:
Background: Aging is associated with an increased risk of falls in older adults, and as a result, falls have become a public health crisis. However, the incidence of falls can be reduced through healthy aging and the implementation of a regular exercise and strengthening program. Public health and healthcare professionals endorse the use of evidence-based, exercise-focused fall interventions, but there are major obstacles to translating and disseminating research findings into healthcare practices. The purpose of this study was to assess the feasibility of an intervention, A Matter of Balance, in terms of demand, acceptability, and implementation within current exercise programs. Subjects: Seventy-five participants from rural communities, above the age of sixty, were randomized to an intervention or attention-control condition assessed with the standardized senior fitness test. Methods: Subjects complete the intervention, which combines two components: (1) motivation and (2) fall-reducing physical activities, with protocols derived from baseline strength and balance assessments. Participants (n=75) took part in the program after completing baseline functional assessments as well as evaluations of their personal knowledge, health outcomes, demand, and implementation interventions. After 8 weeks of the program, participants were invited to complete follow-up assessments, with results compared to their baseline functional analyses. Of all participants who complete the initial assessment, approximately 80% are expected to maintain enrollment in the implemented prescription. Furthermore, those who commit to the program should show mitigation of fall risk upon completion of their final assessment.
Keywords: aging population, exercise, falls, functional assessment, healthy aging
Procedia PDF Downloads 100
2735 The Birth Connection: An Examination of the Relationship between Her Birth Event and Infant Feeding among African American Mothers
Authors: Nicole Banton
Abstract:
The maternal and infant mortality rate of Blacks is three times that of Whites in the US. Research indicates that breastfeeding lowers both. In this paper, the researcher examines how the ideas that Black/African American mothers had about breastfeeding before, during, and after pregnancy (postpartum) affected whether or not they initiated breastfeeding. The researcher used snowball sampling to recruit thirty African-American mothers from the Orlando area. At the time of her interview, each mother had at least one child who was at least three years old. Through in-depth face-to-face interviews, the researcher investigated how mothers’ healthcare providers affected their decision-making about infant feeding, as well as how the type of birth that she had (e.g., preterm, vaginal, c-section, full term) affected her actual versus idealized infant feeding practice. Through our discussions, we explored how pre-pregnancy perceptions, birth and postpartum experiences, social support, and the discourses surrounding motherhood within an African-American context affected the perceptions and experiences that the mothers in the study had with their infant feeding practice(s). Findings suggest that the pregnancy and birth experiences of the mothers in the study influenced whether or not they breastfed exclusively, combined breastfeeding and infant formula use, or used infant formula exclusively. Specifically, the interplay of invocation of agency (the ability to control their bodies before, during, and after birth), birth outcomes, and the interaction that the mothers in this study had with resources, human and material, had the highest impact on the initiation, duration, and attitude toward breastfeeding.
Keywords: African American mothers, maternal health, breastfeeding, birth, midwives, obstetricians, hospital birth, breast pumps, formula use, infant feeding, lactation consultant, postpartum, vaginal birth, c-section, familial support, social support, work, pregnancy
Procedia PDF Downloads 81
2734 Spontaneous Generation of Wrinkled Patterns on pH-Sensitive Smart-Hydrogel Films
Authors: Carmen M. Gonzalez-Henriquez, Mauricio A. Sarabia-Vallejos, Juan Rodriguez-Hernandez
Abstract:
DMAEMA, as a monomer, has been widely studied and used in several application fields owing to its pH-sensitivity (tertiary amine protonation), and is relevant in the biomedical area as a potential carrier for drugs aimed at the treatment of genetic or acquired diseases (efficient gene transfection), among other uses. Additionally, its inhibition of bacterial growth, and therefore its antimicrobial activity, can be exploited in dual-functional antifogging/antimicrobial polymer coatings. Given these interesting physicochemical characteristics and biocompatible properties, DMAEMA was used as a monomer to synthesize a smart pH-sensitive hydrogel, namely poly(HEMA-co-PEGDA575-co-DMAEMA). Different mole ratios (ranging from 5:1:0 to 0:1:5, expressed as the mole ratio between HEMA, PEGDA, and DMAEMA, respectively) were used in this research. The surface patterns formed via a two-step polymerization (redox- and photo-polymerization) were first studied chemically via 1H-NMR and elemental analysis. Secondly, the samples were analyzed morphologically using Field-Emission Scanning Electron Microscopy (FE-SEM) and Atomic Force Microscopy (AFM) techniques. A particular ratio of HEMA, PEGDA, and DMAEMA (0:1:5) was then characterized at three different pH values (5.4, 7.4, and 8.3). The hydrodynamic radius and zeta potential of the micro-hydrogel particles (emulsion) were measured as a possible control for morphology, exploring the effect that hydrogel micelle dimensions produce on the wavelength, height, and roughness of the wrinkled patterns. Finally, contact angle and cross-hatch adhesion tests were carried out for the hydrogels supported on glass using TSM-silanized surfaces in order to assess their mechanical properties.
Keywords: wrinkled patterns, smart pH-sensitive hydrogels, hydrogel micelle diameter, adhesion tests
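The hydrodynamic radius reported by a dynamic light scattering instrument is conventionally obtained from the measured diffusion coefficient via the Stokes-Einstein relation, R_h = k_B·T / (6π·η·D). A sketch with a hypothetical diffusion coefficient (the viscosity is that of water at 25 °C; the numbers are illustrative, not the study's data):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def hydrodynamic_radius(diff_coeff_m2_s, temp_k, viscosity_pa_s):
    """Stokes-Einstein: R_h = k_B * T / (6 * pi * eta * D), in metres."""
    return K_B * temp_k / (6 * math.pi * viscosity_pa_s * diff_coeff_m2_s)

# Hypothetical DLS reading for a hydrogel micelle in water at 25 °C.
r_h = hydrodynamic_radius(2.0e-12, 298.15, 8.9e-4)
print(round(r_h * 1e9, 1), "nm")  # about 122.7 nm
```

Smaller measured diffusion coefficients translate directly into larger micelle radii, which is how the micelle-dimension control mentioned above is quantified.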
Procedia PDF Downloads 205
2733 Integrating Molecular Approaches to Understand Diatom Assemblages in Marine Environment
Authors: Shruti Malviya, Chris Bowler
Abstract:
Environmental processes acting at multiple spatial scales control marine diatom community structure. However, the contribution of local factors (e.g., temperature, salinity, etc.) in these highly complex systems is poorly understood. We therefore investigated diatom community organization as a function of environmental predictors and determined the relative contribution of various environmental factors to the structure of marine diatom assemblages in the world’s ocean. The dataset for this study was derived from the Tara Oceans expedition, comprising 46 sampling stations from diverse oceanic provinces. Ribotypes from the V9 hypervariable region of 18S rDNA were organized into assemblages based on their distributional co-occurrence. Using Ward’s hierarchical clustering, nine clusters were defined. The number of ribotypes and reads varied within each cluster: three clusters (II, VIII and IX) contained only a few reads, whereas two (I and IV) were highly abundant. Of the nine clusters, seven can be divided into two categories, one defined by a positive correlation with phosphate and nitrate and a negative correlation with longitude, and the other by a negative correlation with salinity, temperature, and latitude and a positive correlation with the Lyapunov exponent. All the clusters were remarkably dominant in the South Pacific Ocean and can be placed into three classes, namely Southern Ocean-South Pacific Ocean clusters (I, II, V, VIII, IX), South Pacific Ocean clusters (IV and VII), and cosmopolitan clusters (III and VI). Our findings showed that co-occurring ribotypes can be significantly grouped into recognizable clusters which exhibit distinct responses to environmental variables. This study thus demonstrated the distinct behavior of each recognized assemblage, each displaying a taxonomic and environmental signature.
Keywords: assemblage, diatoms, hierarchical clustering, Tara Oceans
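The co-occurrence clustering step described above can be sketched in miniature. The snippet below is a hypothetical, self-contained illustration of Ward's criterion (merging the pair of clusters that least increases within-cluster variance) applied to four toy ribotype abundance profiles; the study itself used the full Tara Oceans V9 rDNA dataset and standard statistical tooling, so the data and function names here are illustrative only.

```python
from itertools import combinations

def ward_cluster(points, k):
    # Agglomerative clustering with Ward's criterion: repeatedly merge
    # the pair of clusters whose union least increases the total
    # within-cluster sum of squares, until k clusters remain.
    clusters = [[i] for i in range(len(points))]

    def centroid(c):
        dims = len(points[0])
        return [sum(points[i][d] for i in c) / len(c) for d in range(dims)]

    def ward_dist(a, b):
        # Increase in SSE when merging a and b:
        # (|a||b| / (|a|+|b|)) * ||centroid(a) - centroid(b)||^2
        ca, cb = centroid(a), centroid(b)
        sq = sum((x - y) ** 2 for x, y in zip(ca, cb))
        return (len(a) * len(b)) / (len(a) + len(b)) * sq

    while len(clusters) > k:
        i, j = min(combinations(range(len(clusters)), 2),
                   key=lambda ij: ward_dist(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

# Toy "ribotype abundance profiles" across 4 stations (invented data):
# the first two profiles co-occur, and so do the last two.
profiles = [(9, 8, 1, 0), (8, 9, 0, 1), (1, 0, 9, 8), (0, 1, 8, 9)]
print(ward_cluster(profiles, 2))  # → [[0, 1], [2, 3]]
```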
Procedia PDF Downloads 200
2732 Evaluation of Chemopreventive Activity of Medicinal Plant, Gromwell Seed against Tumor Promoting Stage
Authors: Harukuni Tokuda, Takanari Arai, Xu FengHao, Nobutaka Suzuki
Abstract:
In our continuous search for anti-tumor-promoting (chemopreventive) potency in natural source materials, a kind of healthy tea, Gromwell seed (Coix lachryma-jobi) extract, together with its constituent compounds monoolein and trilinolein, was screened using an in vitro synergistic assay based on inhibitory effects on the induction of Epstein-Barr virus early antigen (EBV-EA) by TPA. In this assay, the Gromwell seed aqueous extract and hot aqueous extract exhibited potent inhibitory effects on EBV-EA activation without strong cytotoxicity toward Raji cells. In our experimental system, the inhibitory effects of both Gromwell extracts and the compounds were greater than that of beta-carotene, a known anti-tumor-promoting and/or chemopreventive agent. These compounds were evaluated for their in vitro inhibitory effect on EBV-EA activation induced by TPA; the percentages of inhibition of TPA-induced EBV-EA activation for these materials were 60% and 30% at a concentration of 100 μg. Based on the results obtained in vitro, we studied the inhibitory effect of the compounds in an in vivo two-stage mouse skin papilloma carcinogenesis test using DMBA as an initiator and TPA as a promoter. The control animals showed a 100% incidence of papilloma at 20 weeks after DMBA-TPA tumor promotion, while treatment with the compounds reduced papilloma incidence to 60% after 20 weeks. The in vitro and in vivo results show chemopreventive activity against the TPA promoting stage, and these data support the potential value of these materials at the carcinogenic stage in the clinical evaluation of integrative oncology.
Keywords: gromwell seed, medicinal plant, chemoprevention, pharmaceutical medicine
Procedia PDF Downloads 403
2731 Variable Refrigerant Flow (VRF) Zonal Load Prediction Using a Transfer Learning-Based Framework
Authors: Junyu Chen, Peng Xu
Abstract:
In the context of global efforts to enhance building energy efficiency, accurate thermal load forecasting is crucial for both device sizing and predictive control. Variable refrigerant flow (VRF) systems are widely used in buildings around the world, yet VRF zonal load prediction has received limited attention. Because building-level prediction methods cannot capture differences between VRF zones, zone-level load forecasting could significantly enhance accuracy. Given that modern VRF systems generate high-quality data, this paper introduces transfer learning to leverage these data and further improve prediction performance. The framework also addresses the challenge of predicting loads for building zones with no historical data, offering greater accuracy and usability compared to pure white-box models. The study first establishes an initial variable set for VRF zonal building loads and generates a foundational white-box database using EnergyPlus. Key variables for VRF zonal loads are identified using methods including SRRC, PRCC, and random forest. XGBoost and LSTM are employed to generate pre-trained black-box models based on the white-box database. Finally, real-world data are incorporated into the pre-trained model using transfer learning to enhance its performance in operational buildings. In this paper, zone-level load prediction was integrated with transfer learning, and a framework was proposed to improve the accuracy and applicability of VRF zonal load prediction.
Keywords: zonal load prediction, variable refrigerant flow (VRF) system, transfer learning, EnergyPlus
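The two-stage workflow (pre-train on an abundant simulated database, then fine-tune on scarce measured data from the target zone) can be sketched as follows. This is a deliberately minimal stand-in: a linear model trained by SGD replaces the paper's XGBoost/LSTM models, and both the "EnergyPlus-style" and "measured" datasets are synthetic; only the warm-start idea of transfer learning is illustrated.

```python
import random

def train_linear(data, w=0.0, b=0.0, epochs=300, lr=0.05):
    # Plain SGD fit of y ≈ w*x + b; a hypothetical stand-in for the
    # paper's XGBoost/LSTM black-box models.  Passing in (w, b) from a
    # previous fit warm-starts the training (the transfer step).
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return w, b

random.seed(0)
# Stage 1: pre-train on a large simulated (EnergyPlus-style) dataset
# where the zone load follows y = 2x + 1 plus measurement noise.
sim = [(i / 100, 2.0 * (i / 100) + 1.0 + random.gauss(0, 0.1))
       for i in range(100)]
w, b = train_linear(sim)

# Stage 2: transfer. Warm-start from the pre-trained weights and
# fine-tune on a handful of measured samples from the target zone,
# whose true relationship is slightly shifted (y = 2.2x + 0.8).
real = [(x, 2.2 * x + 0.8) for x in (0.2, 0.5, 0.8)]
w_t, b_t = train_linear(real, w=w, b=b, epochs=200, lr=0.05)
```

The warm start matters when the real dataset is too small to train from scratch: the pre-trained weights encode the simulated physics, and fine-tuning only corrects the zone-specific offset.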
Procedia PDF Downloads 25
2730 Transcranial and Sacral Magnetic Stimulation as a Therapeutic Resource for Urinary Incontinence – A Brief Bibliographic Review
Authors: Ana Lucia Molina
Abstract:
Transcranial magnetic stimulation (TMS) is a non-invasive neuromodulation technique for the investigation and modulation of cortical excitability in humans. Modulating the processing of different cortical areas can open several avenues for rehabilitation, showing great potential in the treatment of motor disorders. In the human brain, the supplementary motor area (SMA) is involved in the control of the pelvic floor muscles, whose dysfunction can lead to urinary incontinence. Peripheral magnetic stimulation, specifically sacral magnetic stimulation, has been used as a safe and effective treatment option for patients with lower urinary tract dysfunction. A systematic literature review was carried out (PubMed, Medline and Google Scholar databases) without a time limit using the keywords "transcranial magnetic stimulation", "sacral neuromodulation", and "urinary incontinence", in which 11 articles met the inclusion criteria. Results: Thirteen articles were selected. Magnetic stimulation is a non-invasive neuromodulation technique widely used in the evaluation of cortical areas and their respective peripheral areas, as well as in the treatment of lesions of brain origin. With regard to pelvic-perineal disorders, repetitive transcranial stimulation showed significant effects in controlling urinary incontinence, as did sacral peripheral magnetic stimulation, which also has the potential to restore bladder sphincter function. Conclusion: Data from the literature suggest that both transcranial and peripheral stimulation are non-invasive approaches that can be promising and effective means of treatment for pelvic and perineal disorders. More prospective and randomized studies on a larger scale are needed, adopting the most appropriate and conclusive parameters.
Keywords: urinary incontinence, non-invasive neuromodulation, sacral neuromodulation, transcranial magnetic stimulation
Procedia PDF Downloads 96
2729 The Spread of Drugs in Higher Education
Authors: Wantana Amatariyakul, Chumnong Amatariyakul
Abstract:
The research aims to examine the spread of drugs in higher education, especially amphetamine, which is rapidly increasing in Thai society, together with its causes and effects, from a sociological perspective, in order to explain, prevent, control, and solve the problem. The students who participated in this research were regular students of Rajamangala University of Technology Isan, Khon Kaen Campus. The data were collected using questionnaires, group discussions, and in-depth interviews. The quantitative data were analyzed using frequency, percentage, mean, and standard deviation, while the qualitative data were analyzed using content analysis. The results showed that the majority of the sample group had a good level of knowledge and understanding of drug abuse. Despite their uncertainty, the majority of the sample presumed that amphetamine, marijuana, and krathom (Mitragyna speciosa Korth) would most likely be abused. The reasons given for first drug use were wanting to experiment, persuasion by friends, and wanting to relax or escape problems in life, respectively. The adverse effects on drug addicts were deteriorating or worsening health and the loss of money, as well as worsening mental states. The reasons that respondents tried to avoid using drugs or refused drugs offered by friends were: not wanting to disappoint or upset their family members, fear of rejection by family members, fear of being arrested by the police, and fear of losing their educational opportunity and ruining their future, respectively. Students therefore defended themselves against drug addiction by refusing to try any drugs. Besides this, knowledge about the danger and harm of drugs persuaded them to stay away from drugs.
Keywords: drugs, higher education, drug addiction, spread of drugs
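The descriptive analysis named above (frequency, percentage, mean, and standard deviation) can be illustrated with a short sketch. The responses below are invented 5-point Likert scores, not the study's data; only the computation itself is shown.

```python
from collections import Counter
from statistics import mean, stdev

# Hypothetical 5-point Likert responses to a drug-abuse knowledge item.
responses = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4, 1, 4, 5, 3, 4]

# Frequency and percentage per response level.
freq = Counter(responses)
for level in sorted(freq):
    pct = 100.0 * freq[level] / len(responses)
    print(f"level {level}: n={freq[level]} ({pct:.1f}%)")

# Mean and (sample) standard deviation.
print(f"mean={mean(responses):.2f}, sd={stdev(responses):.2f}")
```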
Procedia PDF Downloads 318
2728 Adhesion of Biofilm to Surfaces Employed in Pipelines for Transporting Crude Oil
Authors: Hadjer Didouh, Izzaddine Sameut Bouhaik, Mohammed Hadj Meliani
Abstract:
This research delves into the intricate dynamics of biofilm adhesion on surfaces, particularly focusing on the widely employed X52 surface in oil and gas industry pipelines. Biofilms, characterized by microorganisms within a self-produced matrix, pose significant challenges due to their detrimental impact on surfaces. Our study integrates advanced molecular techniques and cutting-edge microscopy, such as scanning electron microscopy (SEM), to identify microbial communities and visually assess biofilm adhesion. Simultaneously, we concentrate on the X52 surface, utilizing impedance spectroscopy and potentiodynamic polarization to gather electrochemical responses under various conditions. In conjunction with the broader investigation, we propose a novel approach to mitigate biofilm-induced corrosion challenges. This involves environmentally friendly inhibitors derived from plants, offering a sustainable alternative to conventional chemical treatments. Our inquiry screens and selects inhibitors based on their efficacy in hindering biofilm formation and reducing corrosion rates on the X52 surface. This study contributes valuable insights into the interplay between electrochemical processes and biofilm attachment on the X52 surface. Furthermore, the outcomes of this research have broader implications for the oil and gas industry, where biofilm-related corrosion is a persistent concern. The exploration of eco-friendly inhibitors not only holds promise for corrosion control but also aligns with environmental considerations and sustainability goals. The comprehensive nature of this research aims to enhance our understanding of biofilm dynamics, provide effective strategies for corrosion mitigation, and contribute to sustainable practices in pipeline management within the oil and gas sector.
Keywords: bio-corrosion, biofilm, attachment, X52, metal/bacteria interface
Procedia PDF Downloads 45
2727 Optimum Dewatering Network Design Using Firefly Optimization Algorithm
Authors: S. M. Javad Davoodi, Mojtaba Shourian
Abstract:
A groundwater table close to the ground surface causes major problems in construction and mining operations. One method to control groundwater in such cases is pumping wells, which remove excess water from the project site and lower the water table to a desirable level. Although the efficiency of this method is acceptable, it is expensive to apply, meaning that even a small improvement in the design of the pumping wells can lead to substantial cost savings. In order to minimize the total cost of the pumping-well method, a simulation-optimization approach is applied. The proposed model integrates MODFLOW as the simulation model with the firefly algorithm as the optimizer: MODFLOW computes the drawdown due to pumping in an aquifer, and the firefly algorithm finds the optimum values of the design parameters, namely the number, pumping rates, and layout of the wells. The developed Firefly-MODFLOW model is applied to minimize the cost of the dewatering project for the ancient mosque of Kerman city in Iran. Repeated runs of the Firefly-MODFLOW model indicate that drilling two wells with a total pumping rate of 5503 m3/day solves the minimization problem. Results show that implementing the proposed solution leads to at least 1.5 m of drawdown in the aquifer beneath the mosque region, while the subsidence due to groundwater depletion is less than 80 mm. Sensitivity analyses indicate that the target groundwater depletion has an enormous impact on the total cost of the project. In addition, in a hypothetical aquifer, decreasing the hydraulic conductivity contributes to a decrease in the total water extraction required for dewatering.
Keywords: groundwater dewatering, pumping wells, simulation-optimization, MODFLOW, firefly algorithm
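The optimization side of the Firefly-MODFLOW coupling can be sketched as follows. Here a toy analytic cost function with a drawdown-shortfall penalty stands in for the MODFLOW groundwater simulation; the objective, bounds, and coefficients are hypothetical, and only the firefly update rule (dimmer fireflies moving toward brighter ones, with attractiveness decaying with distance) matches the method named in the abstract.

```python
import math
import random

def firefly_minimize(cost, bounds, n=15, iters=150,
                     beta0=1.0, gamma=1.0, alpha=0.1, seed=1):
    # Firefly algorithm: each firefly moves toward every brighter
    # (lower-cost) firefly; attractiveness beta decays with squared
    # distance, and alpha adds a small random walk.
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    light = [cost(x) for x in pop]
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if light[j] < light[i]:  # j is brighter than i
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    for d in range(dim):
                        lo, hi = bounds[d]
                        step = (beta * (pop[j][d] - pop[i][d])
                                + alpha * (rng.random() - 0.5))
                        pop[i][d] = min(hi, max(lo, pop[i][d] + step))
                    light[i] = cost(pop[i])
    best = min(range(n), key=lambda k: light[k])
    return pop[best], light[best]

# Hypothetical stand-in for the MODFLOW-evaluated objective: pumping cost
# for two wells plus a penalty when total drawdown misses a 1.5 m target.
def dewatering_cost(q):  # q = pumping rates of the two wells (1000 m3/day)
    drawdown = 0.3 * q[0] + 0.25 * q[1]           # toy linear drawdown response
    penalty = 1000.0 * max(0.0, 1.5 - drawdown) ** 2
    return 10.0 * (q[0] + q[1]) + penalty

best_q, best_cost = firefly_minimize(dewatering_cost,
                                     [(0.0, 6.0), (0.0, 6.0)])
```

In the real model, each call to `cost` would be one MODFLOW run, which is why keeping the firefly population and iteration counts small matters.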
Procedia PDF Downloads 293
2726 Synergistic Effects of Hydrogen Sulfide and Melatonin in Alleviating Vanadium Toxicity in Solanum lycopersicum L. Plants
Authors: Abazar Ghorbani, W. M. Wishwajith W. Kandegama, Seyed Mehdi Razavi, Moxian Chen
Abstract:
The roles of hydrogen sulfide (H₂S) and melatonin (MT) as signaling molecules in plants are widely recognised. Nevertheless, the precise nature of their involvement in defensive reactions remains uncertain. This study investigates the impact of the MT-H₂S interaction on tomato plants exposed to vanadium (V) toxicity, focusing on the synthesis of secondary metabolites and V sequestration. The treatments applied in this study comprised a control (T1), V stress (T2), MT+V (T3), MT+H₂S+V (T4), MT+hypotaurine (HT)+V (T5), and MT+H₂S+HT+V (T6). The treatments were administered as follows: MT (150 µM) as a foliar spray pre-treatment (3X); HT (0.1 mM, an H₂S scavenger) as a root-immersion pre-treatment for 12 hours; and H₂S (NaHS, 0.2 mM) and V (40 mg/L) added to the Hoagland solution for 2 weeks. Results demonstrate that the MT and H₂S+MT treatments alleviate V toxicity by promoting the transcription of key genes (ANS, F3H, CHS, DFR, PAL, and CHI) involved in phenolic and anthocyanin biosynthesis. Moreover, they decreased V uptake and accumulation and enhanced the transcription of genes involved in glutathione and phytochelatin synthesis (GSH1, PCS, and ABC1), leading to V sequestration in roots and protection against V-induced damage. Additionally, the MT and H₂S+MT treatments optimize chlorophyll metabolism and increase internal H₂S levels, thereby promoting tomato growth under V stress. The combined H₂S+MT treatment shows superior effects compared to MT alone, suggesting synergistic/interactive effects between the two substances. Furthermore, the suppression of the beneficial impact of the MT and H₂S+MT treatments by HT, an H₂S scavenger, underscores the significant involvement of H₂S in the signaling pathway activated by MT during V toxicity. Overall, these findings suggest that MT requires the presence of endogenous H₂S to mitigate V-induced adverse effects on tomato seedlings.
Keywords: vanadium toxicity, secondary metabolites, vanadium sequestration, H₂S-melatonin crosstalk
Procedia PDF Downloads 44
2725 Role of Cellulose Fibers in Tuning the Microstructure and Crystallographic Phase of α-Fe₂O₃ and α-FeOOH Nanoparticles
Authors: Indu Chauhan, Bhupendra S. Butola, Paritosh Mohanty
Abstract:
It is well known that the properties of materials change as their size approaches the nanoscale, due to the high surface-area-to-volume ratio. Moreover, in the last few decades, the tenet ‘structure dictates function’ has quickly been adopted by researchers working with nanomaterials. The design and exploitation of nanoparticles with tailored shape and size has become one of the primary goals of materials science researchers seeking to expose the properties of nanostructures. To date, various methods, including soft/hard template- or surfactant-assisted hydrothermal reactions, seed-mediated growth, capping-molecule-assisted synthesis, the polyol process, etc., have been adopted to synthesize nanostructures with controlled size, shape, and monodispersity. However, controlling the shape and size of nanoparticles remains an ultimate challenge of modern materials research. In particular, many efforts have been devoted to the rational and skillful control of hierarchical and complex nanostructures. In this work, the role of cellulose in manipulating nanostructures is discussed. Nanoparticles of α-Fe₂O₃ (diameter ca. 15 to 130 nm) were immobilized on the cellulose fiber surface by a single-step in situ hydrothermal method, whereas nanoflakes of α-FeOOH with thickness ca. 25 nm and length ca. 250 nm were obtained by the same method in the absence of cellulose fibers. A possible nucleation and growth mechanism for the formation of nanostructures on cellulose fibers is proposed. The covalent bond formation between the cellulose fibers and the nanostructures is discussed with supporting evidence from spectroscopic and other analytical studies such as Fourier transform infrared spectroscopy and X-ray photoelectron spectroscopy.
Keywords: cellulose fibers, α-Fe₂O₃, α-FeOOH, hydrothermal, nanoflakes, nanoparticles
Procedia PDF Downloads 148
2724 Cost-Benefit Analysis for the Optimization of Noise Abatement Treatments at the Workplace
Authors: Paolo Lenzuni
Abstract:
The cost-effectiveness of noise abatement treatments at the workplace has not yet received adequate consideration. Furthermore, most of the published work is focused on productivity, despite the poor correlation of this quantity with noise levels. There is currently no tool to estimate the social benefit associated with a specific noise abatement treatment, and no comparison among different options is accordingly possible. In this paper, we present an algorithm developed to predict the cost-effectiveness of any planned noise control treatment in a workplace. The algorithm is based on the estimates of hearing threshold shifts included in ISO 1999 and on the compensations that workers are entitled to once their work-related hearing impairments have been certified. The benefits of a noise abatement treatment are estimated through the lower compensation costs paid to the impaired workers. Although such benefits have no real meaning in strictly monetary terms, they allow a reliable comparison between different treatments, since actual social costs can be assumed to be proportional to compensation costs. The existing European legislation on occupational exposure to noise mandates that the noise exposure level be reduced below the upper action limit (85 dBA). There is accordingly little or no motivation for employers to sustain the extra costs required to lower the noise exposure below the lower action limit (80 dBA). In order to make this goal more appealing to employers, the algorithm proposed in this work also includes an ad hoc element that rewards actions which bring the noise exposure below 80 dBA.
The algorithm has a twofold potential: 1) it can be used as a quality index to promote cost-effective practices; 2) it can be added to the existing criteria used by workers’ compensation authorities to evaluate the cost-effectiveness of technical actions and to support dedicated employers.
Keywords: cost-effectiveness, noise, occupational exposure, treatment
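The logic of the algorithm (convert predicted exposure into an expected compensation cost, take the reduction as the benefit, and reward treatments that reach the lower action limit) can be sketched as follows. The dose-response relation, claim cost, and bonus factor below are placeholders chosen for illustration, not the values derived from the ISO 1999 tables or from any compensation scheme.

```python
def expected_compensation(leq, workers, years=30):
    # Crude illustrative dose-response: the fraction of workers whose
    # noise-induced hearing loss becomes compensable grows with the
    # exposure level above 75 dBA.  Placeholder numbers, not ISO 1999.
    excess = max(0.0, leq - 75.0)
    fraction = min(1.0, 0.004 * excess ** 1.5 * (years / 30.0))
    return fraction * workers * 50_000.0  # hypothetical cost per claim (EUR)

def treatment_score(leq_before, leq_after, workers, cost):
    # Benefit = avoided compensation cost; an ad hoc bonus rewards
    # treatments that bring exposure below the lower action limit.
    benefit = (expected_compensation(leq_before, workers)
               - expected_compensation(leq_after, workers))
    if leq_after < 80.0:
        benefit *= 1.25
    return benefit / cost  # benefit per unit of treatment cost

# Compare two hypothetical treatments for a 50-worker shop at 88 dBA.
s1 = treatment_score(88.0, 83.0, 50, cost=40_000.0)  # cheaper, stops at 83 dBA
s2 = treatment_score(88.0, 79.0, 50, cost=90_000.0)  # pricier, reaches 79 dBA
```

Because both scores are expressed in avoided compensation per euro spent, the two options become directly comparable even though the monetary figures themselves are only proxies for social cost.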
Procedia PDF Downloads 317
2723 REFLEX: A Randomized Controlled Trial to Test the Efficacy of an Emotion Regulation Flexibility Program with Daily Measures
Authors: Carla Nardelli, Jérome Holtzmann, Céline Baeyens, Catherine Bortolon
Abstract:
Background. Emotion regulation (ER) is a process whose disruption is associated with difficulties in mental health. Given its transdiagnostic features, its improvement could facilitate recovery from various psychological issues. A limit of current studies is the lack of knowledge regarding whether available interventions improve ER flexibility (i.e., the ability to implement ER strategies in line with contextual demands), even though this capacity has been associated with better mental health and well-being. The aim of the study is therefore to test the efficacy of a 9-week ER group program (the Affect Regulation Training, ART) in a student population, using the most appropriate measures (i.e., the experience sampling method). A further goal of the study is to explore the potential mediating role of ER flexibility in mental health improvement. Method. This randomized controlled trial will compare the ER program group to an active control group (a relaxation program) in 100 participants. To test the mediating role of ER flexibility in mental health, daily measures will be taken before, during, and after the interventions to evaluate the extent to which participants are flexible in their ER. Expected outcomes. Using multilevel analyses, we expect an improvement in anxious-depressive symptomatology in both groups. However, we expect the ART group to improve specifically in ER flexibility and expect the latter to be a mediating variable for mental health. Conclusion. This study will enhance knowledge of interventions for students and of the impact of interventions on ER flexibility. This research will also improve knowledge of ecological measures for assessing the effect of interventions. Overall, this project represents a new opportunity to improve ER skills, and thereby mental health, in undergraduate students.
Keywords: emotion regulation flexibility, experience sampling method, psychological intervention, emotion regulation skills
Procedia PDF Downloads 133
2722 Evaluation of Visco-Elastic Properties and Microbial Quality of Oat-Based Dietetic Food
Authors: Uchegbu Nneka Nkechi, Okoye Ogochukwu Peace
Abstract:
The visco-elastic properties and microbial quality of a formulated oat-based dietetic food were investigated. Oat flour, pumpkin seed flour, carrot flour, and skimmed milk powder were blended in varying proportions to formulate products coded OCF (70% oat flour, 10% carrot flour, 10% pumpkin seed flour, and 10% skimmed milk powder), OCF (65% oat flour, 10% carrot flour, 10% pumpkin seed flour, and 15% skimmed milk powder), OCF (60% oat flour, 10% carrot flour, 10% pumpkin seed flour, and 20% skimmed milk powder), and OCF (55% oat flour, 10% carrot flour, 10% pumpkin seed flour, and 25% skimmed milk powder), with OF (95% oat) as the commercial control. All the samples were assessed for their proximate composition, microbial quality, and visco-elastic properties. The moisture content was highest in sample OF (10.73%) and lowest in OCF (7.10%) (P<0.05). Crude protein ranged from 13.38% to 22.86%, with OCF having the highest (P<0.05) protein content and OF the lowest. Crude fat was 3.74% for OCF and 6.31% for OF. Crude fiber ranged from 3.58% to 17.39%, with OF having the lowest (P<0.05) fiber content and OCF the highest. Ash content ranged from 1.30% to 2.75% among the OCF samples. There was no mold growth in the samples. The total viable count ranged from 1.5×10³ cfu/g to 2.6×10³ cfu/g, with OCF having the lowest and OF the highest (P<0.05). The peak viscosity of the samples ranged from 75.00 cP to 2895.00 cP, with OCF having the lowest and OF the highest value. The final viscosity was 130.50 cP in OCF and 3572.50 cP in OF. The setback viscosity was 58.00 cP in OCF and 1680.50 cP in OF. The peak time ranged from 6.93 min in OCF to 5.57 min in OF. No pasting temperature was recorded for any sample except OF, which had 86.43 °C. Sample OF was the highest in terms of overall acceptability. This study showed that the oat-based composite flour produced had a nutritional profile that would be acceptable for the aged population.
Keywords: dietetic, pumpkin, visco-elastic, microbial
Procedia PDF Downloads 196
2721 Compact LWIR Borescope Sensor for Thermal Imaging of 2D Surface Temperature in Gas-Turbine Engines
Authors: Andy Zhang, Awnik Roy, Trevor B. Chen, Bibik Oleksandar, Subodh Adhikari, Paul S. Hsu
Abstract:
The durability of a combustor in gas-turbine engines is a strong function of its component temperatures and requires good control of these temperatures. Since the temperature of the combustion gases frequently exceeds the melting point of the combustion liner walls, an efficient air-cooling system with optimized cooling-air flow rates is critically important for extending the lifetime of the liner walls. To determine the effectiveness of the air-cooling system, accurate two-dimensional (2D) surface temperature measurement of the combustor liner walls is crucial for advanced engine development. Traditional diagnostic techniques for temperature measurement in this application include thermocouples, thermal wall paints, pyrometry, and phosphors. They have several disadvantages: they can be intrusive, affecting local flame/flow dynamics and potentially quenching the flame; the instrumentation can suffer physical damage in the harsh environment inside the combustor; and strong combustion emission in the UV-to-mid-IR wavelengths causes severe optical interference. To overcome these drawbacks, a compact, small borescope long-wave-infrared (LWIR) sensor is developed to achieve high-spatial-resolution, high-fidelity 2D thermal imaging of surface temperature in gas-turbine engines, providing the desired engine component temperature distribution. The compact LWIR borescope sensor makes it feasible to improve the durability of combustors in gas-turbine engines and, furthermore, to develop more advanced gas-turbine engines.
Keywords: borescope, engine, long-wave-infrared, sensor
Procedia PDF Downloads 131
2720 Risk Issues for Controlling Floods through Unsafe, Dual Purpose, Gated Dams
Authors: Gregory Michael McMahon
Abstract:
Risk management aimed at minimizing the damage from dam operations has met with opposition from organisations, authorities, and their practitioners. The cause appears to be a misunderstanding of risk management arising from exchanges that mix deterministic thinking with risk-centric thinking and that do not separate uncertainty from reliability or accuracy from probability. This paper sets out the misunderstandings that arose from dam operations at Wivenhoe in 2011, comparing outcomes based on the methodology and its rules with outcomes produced by misapplying those rules. The paper addresses the performance of one risk-centric flood manual for Wivenhoe Dam in achieving a risk management outcome. A mixture of engineering, administrative, and legal factors appears to have combined to reduce the outcomes of the risk approach; these are described. The findings are that a risk-centric manual may need to assist administrations in the conduct of scenario training regimes, in responding to healthy audit reporting, and in the development of decision-support systems. The principal assistance needed from the manual, however, is to help engineering and the law reach a good understanding of how risks are managed: do not assume that risk management is understood. The wider findings are that the critical profession for decision-making downstream of the meteorologist is not dam engineering, hydrology, or hydraulics; it is risk management. Risk management will provide the minimum flood damage outcome where actual rainfalls match or exceed forecast rainfalls, and it will therefore provide the best approach over the likely history of flooding in the life of a dam, while provisions made for worst cases may be state of the art in risk management. The principal conclusion is the need for training both in risk management as a discipline and in the application of risk management rules to particular dam operational scenarios.
Keywords: risk management, flood control, dam operations, deterministic thinking
Procedia PDF Downloads 85