Search results for: minimum root mean square (RMS) error matching algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9332

602 Ethical Decision-Making by Healthcare Professionals during Disasters: Izmir Province Case

Authors: Gulhan Sen

Abstract:

Disasters can result in many deaths and injuries. In these difficult times, accessible resources are limited, the demand-supply balance is distorted, and urgent interventions are needed. The disproportion between accessible resources and intervention capacity makes triage a necessity at every stage of disaster response. Healthcare professionals, who are in charge of triage, have to evaluate swiftly and make ethical decisions about which patients need priority and urgent intervention given the limited available resources. For such critical times in disaster triage, 'doing the greatest good for the greatest number of casualties' is adopted as a code of practice. But there is no guide for healthcare professionals on ethical decision-making during disasters, and this study is expected to serve as a source for the preparation of such a guide. This study aimed to examine whether the disaster-triage qualifications of healthcare professionals in Izmir were adequate and whether these qualifications influence their capacity to make ethical decisions. The researcher used a survey developed for data collection. The survey included two parts. In part one, 14 questions solicited information about socio-demographic characteristics and the respondents' knowledge of the ethical principles of disaster triage and the allocation of scarce resources. Part two included four disaster scenarios adapted from the existing literature, and respondents were asked to make ethical triage decisions based on the provided scenarios. The survey was completed by 215 healthcare professionals working in Emergency-Medical Stations, National Medical Rescue Teams and Search-Rescue-Health Teams in Izmir. The data were analyzed with SPSS software. The Chi-Square test, Mann-Whitney U test, Kruskal-Wallis test and linear regression analysis were utilized. According to the results, 51.2% of the participants had inadequate knowledge of the ethical principles of disaster triage and the allocation of scarce resources. It was also found that participants did not tend to make ethical decisions in the four disaster scenarios that posed ethical dilemmas: they remained caught in dilemmas over performing cardio-pulmonary resuscitation, managing limited resources, and making end-of-life decisions. Results also showed that participants with more experience in disaster triage teams were more likely to make ethical decisions on disaster triage than those with little or no experience (p < 0.01). Moreover, as their knowledge of the ethical principles of disaster triage and the allocation of scarce resources increased, their tendency to make ethical decisions also increased (p < 0.001). In conclusion, inadequate knowledge of ethical principles and lack of experience impair healthcare professionals' ethical decision-making during disasters. The results of this study therefore suggest that more training on disaster triage should be provided during the pre-impact phase of disasters. In addition, the ethical dimension of disaster triage should be included in the syllabi of ethics classes in the vocational training of healthcare professionals. Drills, simulations, and tabletop exercises can be used to improve the ethical decision-making abilities of healthcare professionals. Disaster scenarios that pose ethical dilemmas should be prepared for such applied training programs.
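
As an aside on the reported analysis, the three named tests have direct counterparts in common statistics libraries. The following is a minimal sketch in Python with entirely hypothetical data (the study's own data are not reproduced here), purely to illustrate the kind of comparison the abstract describes:

```python
# Illustrative sketch of the reported tests (hypothetical data, not the study's).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical ethical-decision scores for three experience groups.
novice = rng.normal(10, 3, 40)
some_exp = rng.normal(12, 3, 40)
experienced = rng.normal(14, 3, 40)

# Mann-Whitney U: compare two independent groups without assuming normality.
u_stat, p_u = stats.mannwhitneyu(novice, experienced)

# Kruskal-Wallis: compare all three groups at once.
h_stat, p_kw = stats.kruskal(novice, some_exp, experienced)

# Linear regression: does knowledge level predict decision score?
knowledge = rng.uniform(0, 20, 120)
score = 0.4 * knowledge + rng.normal(0, 2, 120)
slope, intercept, r, p_lr, se = stats.linregress(knowledge, score)
print(p_u, p_kw, p_lr)
```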

Keywords: disaster triage, medical ethics, ethical principles of disaster triage, ethical decision-making

Procedia PDF Downloads 245
601 Silk Fibroin-PVP-Nanoparticles-Based Barrier Membranes for Tissue Regeneration

Authors: Ivone R. Oliveira, Isabela S. Gonçalves, Tiago M. B. Campos, Leandro J. Raniero, Luana M. R. Vasconcellos, João H. Lopes

Abstract:

Originally, the principles of guided tissue/bone regeneration (GTR/GBR) were followed to restore the architecture and functionality of the periodontal system. In essence, a biocompatible polymer-based occlusive membrane is used as a barrier to prevent migration of epithelial and connective tissue to the regenerating site. In this way, progenitor cells located in the remaining periodontal ligament can recolonize the root area and differentiate into new periodontal tissues, alveolar bone, and new connective attachment. The use of synthetic or collagen-derived membranes, with or without calcium phosphate-based bone graft materials, has been the standard treatment. Ideally, these membranes need to exhibit sufficient initial mechanical strength to allow handling and implantation, withstand the various mechanical stresses suffered during surgery while maintaining their integrity, and support the process of bone tissue regeneration and repair by resisting cellular traction forces and wound contraction forces during tissue healing in vivo. Although different GTR/GBR products are available on the market, they have serious deficiencies in terms of mechanical strength. Aiming to improve the mechanical strength and osteogenic properties of the membrane, this work evaluated the production of membranes that integrate the biocompatibility of the natural polymer silk fibroin (SF) and the synthetic polymer poly(vinyl pyrrolidone) (PVP) with graphene nanoplates (GNP) and gold nanoparticles (AuNPs), using electrospinning equipment (AeroSpinner L1.0, Areka) that performs high-voltage spinning and/or solution blowing at a high production rate, enabling development on an industrial scale. Silk fibroin overcomes many of the problems presented by collagen and was used in this work because it combines merits such as programmable biodegradability, biocompatibility, and sustainable large-scale production. Graphene has attracted considerable attention in recent years as a potential biomaterial for mechanical reinforcement because of its unique physicochemical properties; it was added, with or without AuNPs, to improve the mechanical properties of the membranes, AuNPs themselves having shown great potential in regulating osteoblast activity. The preparation of SF from silkworm cocoons involved cleaning, degumming, dissolution in lithium bromide, dialysis, lyophilization and dissolution in hexafluoroisopropanol (HFIP) to prepare the solution for electrospinning; crosslinking tests were performed in methanol. The GNPs were characterized and treated in nitric acid for functionalization, to improve the adhesion of the nanoplates to the PVP fibers. PVP-GNP membranes were produced with 0.5, 1.0 and 1.5 wt% GNP, functionalized or not, and evaluated by SEM/FEG, FTIR, mechanical strength tests and cell culture assays. Functionalized GNP particles showed stronger binding, remaining adhered to the fibers. Increasing the graphene content resulted in higher mechanical strength of the membrane and greater biocompatibility. SF-PVP-GNP-AuNP hybrid membranes were produced by simultaneously electrospinning, from separate syringes, the SF solution and the solution containing PVP with 1.5 wt% GNP, in the presence or absence of AuNPs. After cross-linking, the membranes were characterized by SEM/FEG, FTIR and behavior in cell culture. The presence of GNP-AuNPs increased cell viability and the formation of mineralization nodules.

Keywords: barrier membranes, silk fibroin, nanoparticles, tissue regeneration

Procedia PDF Downloads 12
600 Improved Distance Estimation in Dynamic Environments through Multi-Sensor Fusion with Extended Kalman Filter

Authors: Iffat Ara Ebu, Fahmida Islam, Mohammad Abdus Shahid Rafi, Mahfuzur Rahman, Umar Iqbal, John Ball

Abstract:

The application of multi-sensor fusion for enhanced distance estimation accuracy in dynamic environments is crucial for advanced driver assistance systems (ADAS) and autonomous vehicles. The limitations of single sensors such as cameras or radar in adverse conditions motivate the use of combined camera and radar data to improve reliability, adaptability, and object recognition. A multi-sensor fusion approach using an extended Kalman filter (EKF) is proposed to combine sensor measurements with a dynamic system model, achieving robust and accurate distance estimation. The research utilizes the Mississippi State University Autonomous Vehicular Simulator (MAVS) to create a controlled environment for data collection; data analysis is performed using MATLAB. Qualitative assessment (visualization of fused data versus ground truth) and quantitative metrics (root mean square error, RMSE; mean absolute error, MAE) are employed for performance evaluation. Initial results with simulated data demonstrate more accurate distance estimation than either sensor alone. The optimal sensor measurement noise variance and plant noise variance parameters within the EKF are identified, and the algorithm is validated with real-world data from a Chevrolet Blazer. In summary, this research demonstrates that multi-sensor fusion with an EKF significantly improves distance estimation accuracy in dynamic environments. This is supported by comprehensive evaluation metrics, with validation transitioning from simulated to real-world data, paving the way for safer and more reliable autonomous vehicle control.
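
The abstract does not give the filter equations, so the following is a minimal one-dimensional sketch under stated assumptions: a constant-velocity target model, with camera and radar each providing a noisy range measurement. With these linear models the EKF update reduces to the standard Kalman filter; the paper's nonlinear models would replace F and H with per-step Jacobians. All parameter values are illustrative.

```python
import numpy as np

# Minimal 1-D Kalman-filter fusion sketch (illustrative, not the paper's model):
# state x = [distance, relative velocity]; camera and radar each measure distance.
dt = 0.05
F = np.array([[1.0, dt], [0.0, 1.0]])      # constant-velocity motion model
Q = np.diag([0.01, 0.1])                   # plant (process) noise covariance
H = np.array([[1.0, 0.0], [1.0, 0.0]])     # both sensors observe distance
R = np.diag([0.5**2, 0.2**2])              # camera / radar measurement noise

def ekf_step(x, P, z):
    # Predict with the motion model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the stacked camera+radar measurement z = [z_cam, z_radar].
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.array([20.0, 0.0]), np.eye(2)    # initial estimate and covariance
z = np.array([19.7, 19.9])                 # one pair of range readings
x, P = ekf_step(x, P, z)
print(x)                                   # fused distance and velocity estimate
```

Tuning Q and R in such a loop mirrors the paper's search for optimal plant and measurement noise variance parameters.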

Keywords: sensor fusion, EKF, MATLAB, MAVS, autonomous vehicle, ADAS

Procedia PDF Downloads 43
599 Airport Pavement Crack Measurement Systems and Crack Density for Pavement Evaluation

Authors: Ali Ashtiani, Hamid Shirazi

Abstract:

This paper reviews the status of existing practice and research related to measuring pavement cracking and using crack density as a pavement surface evaluation protocol. Crack density for pavement evaluation is currently not widely used within the airport community, and its use by the highway community is limited. However, surface cracking is a distress that is closely monitored by airport staff and significantly influences the development of maintenance, rehabilitation and reconstruction plans for airport pavements. Therefore, crack density has the potential to become an important indicator of pavement condition if the type, severity and extent of surface cracking can be accurately measured. A pavement distress survey is an essential component of any pavement assessment. Manual crack surveying has been widely used for decades to measure pavement performance. However, the accuracy and precision of manual surveys can vary depending upon the surveyor, and performing surveys may disrupt normal operations. Given this variability, manual surveying has shown inconsistencies in distress classification and measurement, which can affect the planning of pavement maintenance, rehabilitation and reconstruction and the associated funding strategies. A substantial effort has been devoted over the past 20 years to reducing human intervention, and the error associated with it, by moving toward automated distress collection methods. Automated methods are systems that identify, classify and quantify pavement distresses through processes that require no or minimal human intervention, principally by using digital recognition software to analyze and characterize pavement distresses. The lack of established protocols for the measurement and classification of pavement cracks captured in digital images is a challenge to developing a reliable automated system for distress assessment. Variations in the types and severity of distresses, different pavement surface textures and colors, and the presence of pavement joints and edges all complicate automated image processing and crack measurement and classification. This paper summarizes the commercially available systems and technologies for automated pavement distress evaluation. A comprehensive automated pavement distress survey involves the collection, interpretation, and processing of surface images to identify the type, quantity and severity of surface distresses; the outputs can be used to calculate crack density quantitatively. The systems for automated distress survey using digital images reviewed in this paper can assist the airport industry in the development of a pavement evaluation protocol based on crack density. Analysis of automated distress survey data can lead to a crack density index. This index can be used as a means of assessing pavement condition and predicting pavement performance, allowing airport owners to determine the type of pavement maintenance and rehabilitation in a more consistent way.
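
The paper does not fix a single formula for crack density; definitions in practice include crack length per unit area and cracked-area fraction. A minimal sketch, assuming a binary crack mask produced by an automated distress-detection system, computing two such illustrative measures:

```python
import numpy as np

def crack_density(crack_mask: np.ndarray, mm_per_pixel: float) -> dict:
    """Two illustrative crack-density measures from a binary crack mask.

    `crack_mask` (1 = crack pixel) is assumed to come from an automated
    distress-detection system; neither definition is mandated by the paper.
    """
    crack_pixels = int(crack_mask.sum())
    area_fraction = crack_pixels / crack_mask.size        # dimensionless ratio
    cracked_area_mm2 = crack_pixels * mm_per_pixel ** 2   # absolute cracked area
    return {"area_fraction": area_fraction, "cracked_area_mm2": cracked_area_mm2}

mask = np.zeros((1000, 1000), dtype=np.uint8)
mask[500, 100:900] = 1                                    # one thin 800-px crack
print(crack_density(mask, mm_per_pixel=2.0))
```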

Keywords: airport pavement management, crack density, pavement evaluation, pavement management

Procedia PDF Downloads 185
598 Safeguarding the Construction Industry: Interrogating and Mitigating Emerging Risks from AI in Construction

Authors: Abdelrhman Elagez, Rolla Monib

Abstract:

This empirical study investigates the observed risks associated with adopting Artificial Intelligence (AI) technologies in the construction industry and proposes potential mitigation strategies. While AI has transformed several industries, the construction industry is slowly adopting advanced technologies like AI, introducing new risks that lack critical analysis in the current literature. A comprehensive literature review identified a research gap, highlighting the lack of critical analysis of risks and the need for a framework to measure and mitigate the risks of AI implementation in the construction industry. Consequently, an online survey was conducted with 24 project managers and construction professionals, possessing experience ranging from 1 to 30 years (with an average of 6.38 years), to gather industry perspectives and concerns relating to AI integration. The survey results yielded several significant findings. Firstly, respondents exhibited a moderate level of familiarity (66.67%) with AI technologies, while the industry's readiness for AI deployment and current usage rates remained low at 2.72 out of 5. Secondly, the top-ranked barriers to AI adoption were identified as lack of awareness, insufficient knowledge and skills, data quality concerns, high implementation costs, absence of prior case studies, and the uncertainty of outcomes. Thirdly, the most significant risks associated with AI use in construction were perceived to be a lack of human control (decision-making), accountability, algorithm bias, data security/privacy, and lack of legislation and regulations. Additionally, the participants acknowledged the value of factors such as education, training, organizational support, and communication in facilitating AI integration within the industry. These findings emphasize the necessity for tailored risk assessment frameworks, guidelines, and governance principles to address the identified risks and promote the responsible adoption of AI technologies in the construction sector.

Keywords: risk management, construction, artificial intelligence, technology

Procedia PDF Downloads 99
597 Tool for Maxillary Sinus Quantification in Computed Tomography Exams

Authors: Guilherme Giacomini, Ana Luiza Menegatti Pavan, Allan Felipe Fattori Alves, Marcela de Oliveira, Fernando Antonio Bacchim Neto, José Ricardo de Arruda Miranda, Seizo Yamashita, Diana Rodrigues de Pina

Abstract:

The maxillary sinus (MS), part of the paranasal sinus complex, is one of the most enigmatic structures in modern humans. The literature has suggested that MSs function as olfaction accessories, heat or humidify inspired air, aid thermoregulation, and impart resonance to the voice, among other roles; thus, the real function of the MS is still uncertain. Furthermore, MS anatomy is complex and varies from person to person, and many diseases may affect the development of the sinuses. The incidence of rhinosinusitis and other pathoses in the MS is comparatively high, so volume analysis has clinical value. Providing volume values for the MS could be helpful in evaluating the presence of any abnormality and could be used for treatment planning and evaluation of the outcome. Computed tomography (CT) has allowed a more exact assessment of this structure, enabling quantitative analysis. However, this is not always possible in the clinical routine, and when possible, it involves much effort and/or time. Therefore, it is necessary to have a convenient, robust, and practical tool correlated with the MS volume, allowing clinical applicability. Nowadays, the available methods for MS segmentation are manual or semi-automatic, and manual methods present inter- and intra-individual variability. Thus, the aim of this study was to develop an automatic tool to quantify the MS volume in CT scans of the paranasal sinuses. This study was developed with ethical approval from the authors' institutions and national review panels. The research involved 30 retrospective exams from the University Hospital, Botucatu Medical School, São Paulo State University, Brazil. The tool for automatic MS quantification, developed in Matlab®, uses a hybrid method combining different image processing techniques. For MS detection, the algorithm uses a Support Vector Machine (SVM) with features such as pixel value, spatial distribution, and shape. The detected pixels are used as seed points for region growing (RG) segmentation. Then, morphological operators are applied to reduce false-positive pixels, improving the segmentation accuracy. These steps are applied to all slices of the CT exam, yielding the MS volume. To evaluate the accuracy of the developed tool, the automatic method was compared with manual segmentation performed by an experienced radiologist. For comparison, we used Bland-Altman statistics, linear regression, and the Jaccard similarity coefficient. In the statistical comparison between the two methods, linear regression showed a strong association and low dispersion between variables, the Bland-Altman analyses showed no significant differences between the methods, and the Jaccard similarity coefficient was > 0.90 in all exams. In conclusion, the developed tool to automatically quantify MS volume proved to be robust, fast, and efficient when compared with manual segmentation. Furthermore, it avoids the intra- and inter-observer variations caused by manual and semi-automatic methods. As future work, the tool will be applied in clinical practice. Thus, it may be useful in the diagnosis and treatment determination of MS diseases.
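
As a schematic of the described hybrid pipeline (SVM detection of seed pixels, region growing, morphological clean-up), the sketch below uses scikit-learn and scikit-image stand-ins for the Matlab® implementation. The features, sampling grid, growing tolerance, and toy training data are illustrative assumptions, since the abstract does not specify them:

```python
import numpy as np
from sklearn.svm import SVC
from skimage.segmentation import flood
from skimage.morphology import binary_opening

def segment_slice(ct_slice, clf):
    # 1) SVM detection: classify coarsely sampled pixels by simple features.
    ys, xs = np.mgrid[0:ct_slice.shape[0]:8, 0:ct_slice.shape[1]:8]
    feats = np.column_stack([ct_slice[ys.ravel(), xs.ravel()],
                             ys.ravel(), xs.ravel()])      # intensity + position
    labels = clf.predict(feats)

    # 2) Region growing from each pixel the SVM flags as maxillary sinus.
    mask = np.zeros(ct_slice.shape, dtype=bool)
    for y, x in zip(ys.ravel()[labels == 1], xs.ravel()[labels == 1]):
        mask |= flood(ct_slice, (int(y), int(x)), tolerance=80)

    # 3) Morphological opening to suppress false-positive specks.
    return binary_opening(mask)

# Toy demonstration: air-filled sinus (low HU) inside brighter tissue.
ct = np.full((128, 128), 40.0)
ct[40:90, 30:70] = -900.0                                  # sinus-like cavity
train_X = np.array([[-900.0, 64, 48], [40.0, 10, 10]])     # two labeled pixels
clf = SVC().fit(train_X, [1, 0])
area_voxels = segment_slice(ct, clf).sum()                 # per-slice area;
print(area_voxels)                                         # summed over slices -> volume
```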

Keywords: maxillary sinus, support vector machine, region growing, volume quantification

Procedia PDF Downloads 504
596 Comparison of the Effect of Heart Rate Variability Biofeedback and Slow Breathing Training on Promoting Autonomic Nervous Function Related Performance

Authors: Yi Jen Wang, Yu Ju Chen

Abstract:

Background: Heart rate variability (HRV) biofeedback can promote autonomic nervous function and sleep quality and reduce psychological stress. In HRV biofeedback training, the patient is guided by machine video or audio to breathe slowly in pace with his or her own heart rate changes so that the heart and lungs achieve resonance, thereby promoting autonomic nervous function. It has also been pointed out that slow breathing at 6 breaths per minute can likewise guide the patient to cardiopulmonary resonance. However, no study has compared the effectiveness of video- or audio-based HRV biofeedback training and metronome-guided slow breathing in achieving cardiopulmonary resonance. Purpose: To compare the promotion of autonomic nervous function between HRV biofeedback and metronome-guided slow breathing. Method: This is an experimental design with convenience sampling; the cases were randomly divided into an HRV biofeedback training group and a slow breathing training group. The HRV biofeedback group underwent four weeks of laboratory HRV biofeedback training and used a home training device for autonomous practice, while the slow breathing group underwent four weeks of laboratory slow breathing training guided by a mobile phone breathing-metronome app and used the same app for autonomous practice at home. Autonomic nervous function-related performance was measured at enrollment and again four weeks after the intervention. The chi-square test, Student's t-test, and other statistical methods were used to analyze the results, with p < 0.05 as the threshold for statistical significance. Results: A total of 27 subjects were included in the analysis. After four weeks of training, the HRV biofeedback group showed significant improvement in the HRV indices (SDNN, RMSSD, HF, TP) and sleep quality; although the stress index also decreased, it did not reach statistical significance. In the slow breathing group, only sleep quality improved significantly after four weeks; the HRV indices (SDNN, RMSSD, TP) all increased, and HF and the stress index decreased, but none of these changes was statistically significant. Comparing the two groups after training, the HF index improved significantly in the HRV biofeedback group. Although sleep quality improved in both groups, the between-group difference was not statistically significant. Conclusion: HRV biofeedback training is more effective than slow breathing training in promoting autonomic nervous function, but its effects on reducing stress and promoting sleep quality should be explored with larger samples. The results of this study can serve as a reference for clinical or community health promotion. In the future, HRV biofeedback training could also be integrated into AI-based wearable devices, making it more convenient for people to train independently and receive timely, effective feedback.
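
Of the reported indices, the two time-domain ones (SDNN and RMSSD) have standard definitions computable directly from an RR-interval series; HF and TP are frequency-domain quantities obtained from the power spectrum of the same series. A minimal sketch of the time-domain pair, with an example RR series:

```python
import numpy as np

def sdnn(rr_ms: np.ndarray) -> float:
    """SDNN: standard deviation of normal-to-normal (RR) intervals."""
    return float(np.std(rr_ms, ddof=1))

def rmssd(rr_ms: np.ndarray) -> float:
    """RMSSD: root mean square of successive RR-interval differences."""
    diffs = np.diff(rr_ms)
    return float(np.sqrt(np.mean(diffs**2)))

rr = np.array([812, 790, 845, 830, 805, 860, 798], dtype=float)  # example RR (ms)
print(f"SDNN = {sdnn(rr):.1f} ms, RMSSD = {rmssd(rr):.1f} ms")
```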

Keywords: autonomic nervous function, HRV biofeedback, heart rate variability, slow breathing

Procedia PDF Downloads 176
595 The Administration of Infectious Diseases during the COVID-19 Pandemic and the Role of Differential Diagnosis with the Biomarker VB10

Authors: Sofia Papadimitriou

Abstract:

INTRODUCTION: The differential diagnosis between acute viral and bacterial infections is an important cost-effectiveness parameter at the treatment stage: it maximizes the benefit of therapeutic intervention at minimum cost by ensuring the proper use of antibiotics. The discovery of sensitive and robust molecular diagnostic tests based on the host response to infection has enhanced the accurate diagnosis and differentiation of infections. METHOD: The study used six independent sets of blood samples (total n = 756) profiled for human gene expression, in which transcription reflects the different host-network responses to viral and bacterial infections. The individual blood samples were subjected to a sequence of computational filters that identify a gene panel corresponding to a standalone diagnostic score. The data set and the corresponding gene panel underpin a new Bangalore-Viral Bacterial (BL-VB) cohort. FINDING: We use a blood-based biomarker of 10 genes (Panel-VB) with significant prognostic value for distinguishing viral from bacterial infections, with a weighted mean AUROC of 0.97 (95% CI: 0.96-0.99) in eleven independent data sets (n = 898). We derived a patient-level score (VB10) with significant diagnostic value, with a weighted mean AUROC of 0.94 (95% CI: 0.91-0.98) in 2,996 patient samples from 56 public data sets from 19 different countries. We also studied VB10 in a new South Indian cohort (BL-VB, n = 56) and found 97% accuracy in confirmed cases of viral and bacterial infections. We found that VB10 (a) accurately identifies the type of infection even in culture-negative and unspecified cases, (b) tracks the patient's clinical recovery, and (c) applies to all age groups, covering a wide range of acute bacterial and viral infections, including those caused by non-specific pathogens. We applied our VB10 score to publicly available COVID-19 data and found that it diagnosed viral infection in patient samples. RESULTS: The results of the study showed the diagnostic power of the VB10 biomarker as a test for the accurate diagnosis of acute infections and for monitoring recovery. We anticipate that it will aid clinical decisions about prescribing antibiotics and support antibiotic stewardship policies. CONCLUSIONS: Overall, we developed a new RNA-based biomarker and a new blood test to differentiate between viral and bacterial infections, assisting physicians in designing optimal treatment regimens, contributing to the proper use of antibiotics, and reducing the burden of antimicrobial resistance (AMR).
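
AUROC itself is a standard metric; purely to illustrate how such a classifier score is evaluated (hypothetical labels and scores, not the authors' pipeline):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical VB10-style scores: label 1 = viral, 0 = bacterial.
y_true = np.array([1, 1, 0, 1, 0, 0, 1, 0])
score = np.array([0.91, 0.84, 0.30, 0.77, 0.45, 0.12, 0.66, 0.52])
print(f"AUROC = {roc_auc_score(y_true, score):.2f}")  # 1.0 = perfect separation
```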

Keywords: acute infections, antimicrobial resistance, biomarker, blood transcriptome, systems biology, classifier diagnostic score

Procedia PDF Downloads 155
594 Balanced Scorecard as a Tool to Improve NAAC Accreditation – A Case Study in Indian Higher Education

Authors: CA Kishore S. Peshori

Abstract:

Introduction: India, a country with vast diversity and a huge population, is going to have the largest young population by 2020. Higher education has always been, and will remain, the basic requirement for turning a developing nation into a developed one. To improve any system, it needs to be benchmarked, and various tools exist for benchmarking systems. Education in India is delivered by universities, which are mainly funded by the government. To deliver education, these universities set up colleges, which are again funded mainly by the government. Recently, however, autonomy has also been given to universities and colleges, and foreign universities are waiting to enter India. With a large number of universities and colleges, it has become more and more necessary to measure these institutes for benchmarking. In India, college assessment has been made compulsory by the UGC, and NAAC accreditation has been officially recognised as the assessment criterion. NAAC assessment is based on seven criteria, namely: 1. curricular assessments, 2. teaching, learning and evaluation, 3. research, consultancy and extension, 4. infrastructure and learning resources, 5. student support and progression, 6. governance, leadership and management, 7. innovation and best practices. NAAC tries to benchmark institutions for the identification, sustainability, dissemination and adaptation of best practices. It grades each institution according to these seven criteria, and the funding of the institution is based on these grades. Many colleges are struggling to achieve the best grades, but they have not come across a systematic tool for doing so. The Balanced Scorecard (BSC), developed by Kaplan, has been a successful tool for corporates to develop best practices so as to increase their financial performance and also to retain and increase their customers, growing the organization to the next level. It is time to test this tool for an educational institute. Methodology: The paper develops a prototype for a college based on secondary data. Once the prototype is developed, the researcher will test the tool for successful implementation using a questionnaire. The success of this research will depend on the implementation of the BSC in an institute and on improved grading resulting from that implementation. Time is a major constraint in this research: since the NAAC cycle takes a minimum of 4 years for accreditation and reaccreditation, the methodology limits itself to secondary data and a questionnaire circulated to colleges along with the prototype BSC model. Conclusion: The BSC is a successful tool for enhancing the growth of an organization, and educational institutes are no exception; the BSC only has to be realigned to suit the NAAC criteria. Once this prototype is developed, its success can be tested only in implementation, but this research paper is the first step towards developing the tool, and it also initiates that work by developing a questionnaire and evaluating the responses for moving to the next level of actual implementation.

Keywords: balanced scorecard, benchmarking, NAAC, UGC

Procedia PDF Downloads 272
593 Application of a Model-Free Artificial Neural Networks Approach for Structural Health Monitoring of the Old Lidingö Bridge

Authors: Ana Neves, John Leander, Ignacio Gonzalez, Raid Karoumi

Abstract:

Systematic monitoring and inspection are needed to assess the present state of a structure and predict its future condition. If an irregularity is noticed, repair actions may take place, and an adequate intervention will most probably reduce future maintenance costs, minimize downtime and increase safety by avoiding the failure of the structure as a whole or of one of its structural parts. For this to be possible, decisions must be made at the right time, which implies using systems that can detect abnormalities at an early stage. In this sense, Structural Health Monitoring (SHM) is seen as an effective tool for improving the safety and reliability of infrastructures. This paper explores the decision-making problem in SHM regarding the maintenance of civil engineering structures. The aim is to assess the present condition of a bridge based exclusively on measurements, using the method suggested in this paper, such that action is taken coherently with the information made available by the monitoring system. Artificial neural networks are trained, and their ability to predict structural behavior is evaluated in the light of a case study where acceleration measurements are acquired from a bridge located in Stockholm, Sweden. This relatively old bridge is still in operation despite experiencing obvious problems already reported in previous inspections. The prediction errors provide a measure of the accuracy of the algorithm and are subjected to further investigation, comprising clustering analysis and statistical hypothesis testing. These make it possible to interpret the obtained prediction errors, draw conclusions about the state of the structure and thus support decision-making regarding its maintenance.
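
A minimal sketch of the model-free idea: train a network to predict one accelerometer channel from the others on data from the healthy state, then monitor the prediction error, whose growth suggests a change of state. The channel relationship, layer sizes, and data below are illustrative, not the paper's:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
# Hypothetical acceleration channels: predict sensor y from sensors X.
X_healthy = rng.normal(size=(5000, 3))
y_healthy = X_healthy @ np.array([0.6, -0.3, 0.8]) + 0.05 * rng.normal(size=5000)

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
model.fit(X_healthy[:4000], y_healthy[:4000])

# Prediction error on new data is the damage-sensitive feature.
err = y_healthy[4000:] - model.predict(X_healthy[4000:])
print(f"baseline RMS error: {np.sqrt(np.mean(err**2)):.3f}")
# In monitoring, a persistent increase in this error (assessed, e.g., with a
# statistical hypothesis test against the baseline, as in the paper) flags a
# possible anomaly; clustering of error patterns can help localize it.
```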

Keywords: artificial neural networks, clustering analysis, model-free damage detection, statistical hypothesis testing, structural health monitoring

Procedia PDF Downloads 209
592 Human Beta Defensin 1 as Potential Antimycobacterial Agent against Active and Dormant Tubercle Bacilli

Authors: Richa Sharma, Uma Nahar, Sadhna Sharma, Indu Verma

Abstract:

Counteracting the deadly pathogen Mycobacterium tuberculosis (M. tb) effectively is still a global challenge. Scrutinizing alternative weapons like antimicrobial peptides to strengthen the existing tuberculosis artillery is urgently required. Considering the antimycobacterial potential of Human Beta Defensin 1 (HBD-1) along with isoniazid, the present study was designed to explore the ability of HBD-1 to act against active and dormant M. tb. HBD-1 was screened in silico using antimicrobial peptide prediction servers to identify its short antimicrobial motif. The activity of both HBD-1 and its selected motif (Pep B) was determined at different concentrations against actively growing M. tb in vitro and ex vivo in monocyte-derived macrophages (MDMs). Log-phase M. tb was grown along with HBD-1 and Pep B for 7 days. M. tb-infected MDMs were treated with HBD-1 and Pep B for 72 hours. Thereafter, colony forming unit (CFU) enumeration was performed to determine the activity of both peptides against actively growing in vitro and intracellular M. tb. Dormant M. tb models were prepared by following two approaches and treated with different concentrations of HBD-1 and Pep B. Firstly, 20-22 day old M. tb H37Rv was grown in potassium-deficient Sauton medium for 35 days. The presence of dormant bacilli was confirmed by Nile red staining. Dormant bacilli were further treated with rifampicin, isoniazid, HBD-1 and its motif for 7 days. The effect of both peptides on latent bacilli was assessed by CFU and most probable number (MPN) enumeration. Secondly, a human PBMC granuloma model was prepared by infecting PBMCs seeded on a collagen matrix with M. tb (MOI 0.1) for 10 days. Histopathology was done to confirm granuloma formation. The granuloma thus formed was incubated for 72 hours with rifampicin, HBD-1 and Pep B individually. The difference in bacillary load was determined by CFU enumeration. The minimum inhibitory concentrations of HBD-1 and Pep B restricting the growth of mycobacteria in vitro were 2 μg/ml and 20 μg/ml, respectively. The intracellular mycobacterial load was reduced significantly by HBD-1 and Pep B at 1 μg/ml and 5 μg/ml, respectively. A Nile red-positive bacterial population, a high MPN/low CFU count and tolerance to isoniazid confirmed the establishment of the potassium deficiency-based dormancy model. HBD-1 (8 μg/ml) showed 96% and 99% killing, and Pep B (40 μg/ml) lowered the dormant bacillary load by 68.89% and 92.49%, based on CFU and MPN enumeration, respectively. Further, H&E-stained aggregates of macrophages and lymphocytes, acid-fast bacilli surrounded by cellular aggregates, and rifampicin resistance indicated the formation of the human granuloma dormancy model. HBD-1 (8 μg/ml) led to an 81.3% reduction in CFU, whereas its motif Pep B (40 μg/ml) showed only a 54.66% decrease in bacterial load inside the granuloma. Thus, the present study indicates that HBD-1 and its motif are effective antimicrobial players against both actively growing and dormant M. tb and should be further explored to tap their potential in designing a powerful weapon for combating tuberculosis.

Keywords: antimicrobial peptides, dormant, human beta defensin 1, tuberculosis

Procedia PDF Downloads 263
591 Short Text Classification Using Part of Speech Feature to Analyze Students' Feedback of Assessment Components

Authors: Zainab Mutlaq Ibrahim, Mohamed Bader-El-Den, Mihaela Cocea

Abstract:

Students' textual feedback can hold unique patterns and useful information about the learning process: it can hold information about the advantages and disadvantages of teaching methods, assessment components, facilities, and other aspects of teaching. The results of analysing such feedback can form a key input for institutions' decision makers to advance and update their systems accordingly. This paper proposes a data mining framework for analysing end-of-unit general textual feedback using a part-of-speech (PoS) feature with four machine learning algorithms: support vector machines, decision tree, random forest, and naive Bayes. The proposed framework has two tasks: first, to use the above algorithms to build an optimal model that automatically classifies the whole data set into two subsets, one tailored to assessment practices (assessment related) and the other containing the non-assessment-related data; second, to use the same algorithms to build optimal models for the whole data set and the new data subsets to automatically detect their sentiment. The significance of this paper is to compare the performance of the above four algorithms using the part-of-speech feature with the performance of the same algorithms using an n-gram feature. The paper follows the Knowledge Discovery and Data Mining (KDDM) framework to construct the classification and sentiment analysis models: understanding the assessment domain, cleaning and pre-processing the data set, selecting and running the data mining algorithms, interpreting mined patterns, and consolidating the discovered knowledge. The experiments show that models using either feature performed very well on the first task; on the second task, however, models that used the part-of-speech feature underperformed in comparison with models that used unigram and bigram features.
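
A compact sketch of the PoS-feature idea: replace each token with its tag and feed the resulting tag sequence to a standard text classifier. NLTK's default tagger and a linear SVM stand in for the framework's unspecified components, and the tiny labeled examples are hypothetical:

```python
import nltk
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# One-time setup: nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")

def pos_string(text: str) -> str:
    """Replace each token by its PoS tag, e.g. 'great lectures' -> 'JJ NNS'."""
    return " ".join(tag for _, tag in nltk.pos_tag(nltk.word_tokenize(text)))

feedback = ["The exam was too long and stressful",
            "Great coursework, clear marking criteria",
            "Lecture rooms are cold"]
labels = [1, 1, 0]          # 1 = assessment related, 0 = not (hypothetical)

# Bag-of-PoS-tags representation feeding a linear SVM.
clf = make_pipeline(CountVectorizer(), LinearSVC())
clf.fit([pos_string(t) for t in feedback], labels)
print(clf.predict([pos_string("The test questions were confusing")]))
```

Swapping `pos_string` for the raw text (so `CountVectorizer` builds unigrams/bigrams) reproduces the n-gram baseline the paper compares against.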

Keywords: assessment, part of speech, sentiment analysis, student feedback

Procedia PDF Downloads 142
590 Mechanical Properties and Antibiotic Release Characteristics of Poly(methyl methacrylate)-based Bone Cement Formulated with Mesoporous Silica Nanoparticles

Authors: Kumaran Letchmanan, Shou-Cang Shen, Wai Kiong Ng

Abstract:

Postoperative implant-associated infections in soft tissues and bones remain a serious complication in orthopaedic surgery, leading to impaired healing, re-implantation, prolonged hospital stays and increased cost. Drug-loaded implants with sustained release of antibiotics at the local site are a current research interest for reducing the risk of post-operative infections and osteomyelitis, thus minimizing the need for follow-up care and increasing patient comfort. However, the improved drug release of drug-loaded bone cements is usually accompanied by a loss in mechanical strength, which is critical for weight-bearing bone cement. Recently, more attempts have been undertaken to develop techniques that enhance antibiotic elution while preserving the mechanical properties of the bone cements. The present study investigates the influence of adding mesoporous silica nanoparticles (MSN) on the in vitro drug release kinetics of gentamicin (GTMC), along with the mechanical properties of bone cements. Simplex P was formulated with MSN and loaded with GTMC by direct impregnation. Simplex P with a water-soluble porogen (xylitol), Simplex P with a high loading of GTMC, and the commercial bone cement CMW Smartset GHV were used as controls. MSN-formulated bone cements increased the release of GTMC by 3-fold, with a cumulative release of more than 46% compared with the control groups; furthermore, a sustained release could be achieved for two months. The loaded nano-sized MSN, with uniform pore channels, build up an effective nano-network in the bone cement that facilitates the diffusion and extended release of GTMC. Compared with the formulations using xylitol and high GTMC loading, incorporation of MSN shows no detrimental effect on the biomechanical properties of the bone cements, with no significant changes in mechanical properties relative to the original bone cement. After drug release for two months, the bending modulus of MSN-formulated bone cements was 4.49 ± 0.75 GPa and the compression strength was 92.7 ± 2.1 MPa (similar to the compression strength of Simplex P: 93.0 ± 1.2 MPa). The unaffected mechanical properties of MSN-formulated bone cements were due to the unchanged microstructure of the bone cement, whereby more than 98% of the MSN remains in the matrix and supports the bone cement structure. In contrast, large extra voids can be observed in the formulations using xylitol and high drug loading after the drug release study, which caused compressive strengths below the ASTM F541 and ISO 5833 minimum of 70 MPa. These results demonstrate the potential applicability of MSN-functionalized poly(methyl methacrylate)-based bone cement as a highly efficient, sustained and local drug delivery system with good mechanical properties.

Keywords: antibiotics, biomechanical properties, bone cement, sustained release

Procedia PDF Downloads 257
589 Barriers to Tuberculosis Detection in Portuguese Prisons

Authors: M. F. Abreu, A. I. Aguiar, R. Gaio, R. Duarte

Abstract:

Background: Prison establishments constitute high-risk environments for the transmission and spread of tuberculosis (TB), given their epidemiological context and the difficulty of implementing preventive and control measures. Guidelines for the control and prevention of tuberculosis in prisons have been described internationally as incomplete and heterogeneous, owing to several identified obstacles, for example the scarcity of human resources and of funding for prisoner health services. In Portugal, a protocol was created in 2014 with the aim of defining and standardizing procedures for the detection and prevention of tuberculosis within prisons. Objective: The main objective of this study was to identify and describe barriers to tuberculosis detection in prisons of the Porto and Lisbon districts of Portugal. Methods: A cross-sectional study was conducted from 2 January 2018 to 30 June 2018. Semi-structured questionnaires were administered to healthcare professionals working in the prisons of the districts of Porto (n=6) and Lisbon (n=8). As an inclusion criterion, we considered work experience in the area of tuberculosis (in diagnosis, treatment, or follow-up). The questionnaires were self-administered, in paper format. Descriptive analyses of the questionnaire variables were made using frequencies and medians. Afterwards, a hierarchical agglomerative cluster analysis was performed. After obtaining the clusters, the chi-square test was applied to study the association between the collected variables and the clusters, with a significance level of 0.05. Results: Of the 186 health professionals, 139 met the inclusion criteria and 82 were interviewed (62.2% participation). Most were female nurses, with a median age of 34 years, on term employment contracts. The cluster analysis identified two groups with different characteristics and behaviors regarding the procedures of the protocol. Statistically significant results were found: elements of cluster 1 (78% of the total participants) had worked in prisons for a longer time (p=0.003), 45.3% for more than 4 years, while 50% of the elements of cluster 2 had worked there for less than a year; cluster 1 also more frequently answered that they know and apply the procedures of the protocol (p=0.000). Both clusters frequently expressed the need for theoretical-practical training on TB (p=0.000), especially in the areas of diagnosis, treatment and prevention, and reported that funding for prisoner health services is scarce (p=0.000). Regarding the procedures for TB screening (periodic and contact screening) and for transferring a prisoner with the disease, cluster 1 also more frequently reported performing them (p=0.000) and stated that the material/equipment for TB screening is accessible and available (p=0.000). From these clusters, the barriers identified were the scarcity of human resources, the need for theoretical-practical training on tuberculosis, inexperience in working in prison health services, and limited knowledge of the protocol procedures. Conclusions: The barriers found in this study are the same as those described internationally. The protocol is mostly being applied in Portuguese prisons. The study also showed the need to invest in human and material resources. This investigation bridged knowledge gaps that could help prison health services optimize the care provided for early detection of tuberculosis and for prisoners' adherence to treatment.

Keywords: barriers, health care professionals, prisons, protocol, tuberculosis

Procedia PDF Downloads 147
588 Well Inventory Data Entry: Utilization of Developed Technologies to Progress the Integrated Asset Plan

Authors: Danah Al-Selahi, Sulaiman Al-Ghunaim, Bashayer Sadiq, Fatma Al-Otaibi, Ali Ameen

Abstract:

In light of recent changes affecting the oil & gas industry, optimization measures have become imperative for all companies globally, including Kuwait Oil Company (KOC). To keep abreast of the dynamic market, a detailed Integrated Asset Plan (IAP) was developed to drive optimization across the organization, facilitated through the in-house developed software "Well Inventory Data Entry" (WIDE). This comprehensive and integrated approach enabled the centralization of all planned asset components for better well planning, enhanced performance, and continuous improvement through performance tracking and mid-term forecasting. Traditionally, this was hard to achieve, as various legacy methods were used in the past. This paper briefly describes the methods successfully adopted to meet the company's objective. IAPs were initially designed using computerized spreadsheets. However, as the captured data became more complex and the number of stakeholders requiring and updating this information grew, the need to automate the conventional spreadsheets became apparent. WIDE, already in use in other parts of the company (namely, the Workover Optimization project), was adapted to meet the dynamic requirements of the IAP cycle. With the growth of extensive features to enhance the planning process, the tool evolved into a centralized data hub for all asset groups and technical support functions to analyze and draw inferences from, leading WIDE to become the reference two-year operational plan for the entire company. To achieve WIDE's goal of operational efficiency, asset groups continuously add their parameters in a series of predefined workflows that create a structured process in which risk factors can be flagged and mitigated. The tool assigns responsibilities to all stakeholders in a way that enables continuous updates of daily performance measures and operational use. The reliable availability of WIDE, combined with its user-friendliness and easy accessibility, created a platform of cross-functionality among all asset groups and technical support groups for updating their respective planning parameters. The home-grown system was implemented across the entire company and tailored to feed the internal processes of several stakeholders. Furthermore, the implementation of change management and root cause analysis techniques captured the dysfunctions of previous plans, which in turn improved the existing planning mechanisms within the IAP. The detailed elucidation of the two-year plan flagged upcoming risks and shortfalls foreseen in the plan. All results were translated into a series of developments that propelled the tool's capabilities beyond planning and into operations (such as asset production forecasts, setting KPIs, and estimating operational needs). This process exemplifies the ability and reach of applying advanced development techniques to seamlessly integrate the planning parameters of various asset and technical support groups. These techniques enhance the integration of planning data workflows, laying the foundation for greater accuracy and reliability. Benchmarks with a set of standard goals have been established to ensure constant improvement in the efficiency of the entire planning and operational structure.

Keywords: automation, integration, value, communication

Procedia PDF Downloads 146
587 Physicochemical Properties and Toxicity Studies on a Lectin from the Bulb of Dioscorea bulbifera

Authors: Uchenna Nkiruka Umeononihu, Adenike Kuku, Oludele Odekanyin, Olubunmi Babalola, Femi Agboola, Rapheal Okonji

Abstract:

In this study, a lectin from the bulb of Dioscorea bulbifera was purified and characterised, and its acute and sub-acute toxicity was investigated with a view to evaluating its toxic effects in mice. The protein was extracted by homogenising 50 g of the bulb in 500 ml of phosphate-buffered saline (0.025 M, pH 7.2), stirring for 3 hr, and centrifuging at 3000 rpm. Blood group and sugar specificity assays of the crude extract were performed. The lectin was purified in a two-step procedure: gel filtration on Sephadex G-75 and affinity chromatography on Sepharose 4B-arabinose. The purity of the lectin was ascertained by SDS-polyacrylamide gel electrophoresis. Detection of covalently bound carbohydrate was carried out with the Periodic Acid-Schiff (PAS) staining technique. The effects of temperature, pH, and EDTA on the lectin were determined using standard methods. This was followed by acute toxicity studies via oral and subcutaneous routes in mice; the animals were monitored for mortality and signs of toxicity. Sub-acute toxicity studies were carried out in rats: different concentrations of the lectin were administered twice daily for 5 days via the subcutaneous route. The animals were sacrificed on the sixth day, and blood samples and liver tissues were collected. Biochemical assays (total protein, direct bilirubin, alanine aminotransferase (ALT), aspartate aminotransferase (AST), catalase (CAT), and superoxide dismutase (SOD)) were carried out on the serum and liver homogenates. The collected organs (heart, liver, kidney, and spleen) were subjected to histopathological analysis. The results showed that the lectin from the bulbs of Dioscorea bulbifera non-specifically agglutinated the erythrocytes of the human ABO system as well as rabbit erythrocytes. The haemagglutinating activity was strongly inhibited by arabinose and dulcitol, with minimum inhibitory concentrations of 0.781 and 6.25, respectively. The lectin was purified to homogeneity with native and subunit molecular weights of 56,273 and 29,373 Daltons, respectively. The lectin was thermostable up to 30°C and lost 25%, 33.3%, and 100% of its haemagglutinating activity at 40°C, 50°C, and 60°C, respectively. It was maximally active at pH 4 and 5 but lost all activity at pH 8, while EDTA (10 mM) had no effect on its haemagglutinating activity. PAS staining showed that the lectin was not a glycoprotein. The sub-acute studies in rats showed elevated levels of ALT, AST, serum bilirubin, and total protein in serum and liver homogenates, suggesting damage to the liver and spleen. The study concluded that the aerial bulb lectin of D. bulbifera is non-specific in its haemagglutinating activity and dimeric in structure. The lectin shares some physicochemical characteristics with lectins from other Dioscorea species and was moderately toxic to the liver and spleen of treated animals.

Keywords: Dioscorea bulbifera, haemagglutinin, lectin, toxicity

Procedia PDF Downloads 128
586 Fish Catch Composition from Gobind Sagar Reservoir during 2006-2012

Authors: Krishan Lal, Anish Dua

Abstract:

Gobind Sagar Reservoir was created in Himachal Pradesh, India (31°25′ N, 76°25′ E) by damming the River Sutlej at the village of Bhakra in 1963. The average water spread area of this reservoir is 10,000 hectares. Fishermen have organized themselves into co-operative societies; 26 fisheries co-operative societies were working in Gobind Sagar Reservoir up to 2012. June and July were observed as a closed season, during which no fishing was done. Proper records of the fish catch were maintained at different levels by the state fisheries department. Measures such as a minimum harvestable size, mesh size regulation and the prohibition of illegal fishing were taken for fish conservation, and fishermen were actively involved in management. Gill nets were used for catching fish in this reservoir, and the state fisheries department realizes a 15% royalty on the sold fish. The data used in this paper concern the fish catch during 2006-2012 and were obtained from the state fisheries department, Himachal Pradesh. Catla catla, Labeo rohita, Cirrhinus mrigala, Sperata seenghala, Cyprinus carpio, Tor putitora, Hypophthalmichthys molitrix, Labeo calbasu, Labeo dero and Ctenopharyngodon idella were the fish species exploited for commercial purposes. The total number of individuals of all species caught during 2006-2012 was 3141236, weighing 5637108.9 kg. H. molitrix was introduced accidentally into this reservoir and made up a good share of the fish catch; its annual catch varied between 161279.6 kg (2011) and 788030.8 kg (2009). A total of 8966 individuals of C. idella were caught, weighing 64320.2 kg. The catch of Cyprinus carpio varied between 144826.1 kg in 2006 and 214480.1 kg in 2010. The total catch of Tor putitora was 180263.2 kg during 2006-2012, and the total catches of L. dero, S. seenghala and Catla catla over the same period were 100637.4 kg, 75297.8 kg and 561802.9 kg, respectively. The maximum fish catch was observed during August (after the closed season). The maximum catch of exotic carps came from the Bhakra area of the reservoir, which has smaller fluctuations in water level. The reservoir has been divided into eight beats for administrative purposes, to avoid conflicts between the operating fisheries co-operative societies over areas of operation. Fish catch was higher for co-operative societies operating in areas of the reservoir with smaller fluctuations in water level and lower for societies operating in areas with larger fluctuations. The species-wise fish catch by the different co-operative societies in their allotted areas was studied. This reservoir is one of the most scientifically managed reservoirs.

Keywords: co-operative societies, fish catch, fish species, reservoir

Procedia PDF Downloads 210
585 Unsupervised Classification of DNA Barcode Species Using Multi-Library Wavelet Networks

Authors: Abdesselem Dakhli, Wajdi Bellil, Chokri Ben Amar

Abstract:

A DNA barcode is a short mitochondrial DNA fragment whose nucleotides are each made up of a phosphate group, a sugar and one of the nucleic bases (A, T, C, and G). Barcodes provide a good source of the information needed to classify living species, an intuition confirmed by many experimental results. Species classification with DNA barcode sequences has been studied by several researchers. The classification problem assigns unknown species to known ones by analyzing their barcodes; this task has to be supported by reliable methods and algorithms. To analyze species regions or entire genomes, it becomes necessary to use sequence similarity methods. A large set of sequences can be compared simultaneously using multiple sequence alignment, which is known to be NP-complete; to make this type of analysis feasible, heuristics like progressive alignment have been developed. Another tool for similarity search against a database of sequences is BLAST, which outputs shorter regions of high similarity between a query sequence and matched sequences in the database. However, all these methods are still computationally very expensive and require significant computational infrastructure. Our goal is to build predictive models that are highly accurate and interpretable; this approach avoids the complex problem of form and structure in different classes of organisms. The method is evaluated on empirical data, and its classification performance is compared with other methods. Our system consists of three phases. The first, transformation, is composed of three steps: Electron-Ion Interaction Pseudopotential (EIIP) codification of the DNA barcodes, Fourier transform, and power spectrum signal processing. The second, approximation, is empowered by the use of Multi-Library Wavelet Neural Networks (MLWNN). The third, classification of the DNA barcodes, is realized by applying a hierarchical classification algorithm.
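
The transformation phase can be sketched directly: map each base to its published EIIP value and take the Fourier power spectrum of the resulting signal. The EIIP values below are the standard ones from the signal-processing literature; everything downstream of the spectrum (wavelet approximation, hierarchical classification) is only outlined in comments, since the abstract gives no further detail:

```python
import numpy as np

# Standard EIIP (electron-ion interaction pseudopotential) values per nucleotide.
EIIP = {"A": 0.1260, "C": 0.1340, "G": 0.0806, "T": 0.1335}

def power_spectrum(barcode: str) -> np.ndarray:
    """Map a DNA barcode to its EIIP numerical signal and return |FFT|^2."""
    signal = np.array([EIIP[b] for b in barcode.upper() if b in EIIP])
    signal -= signal.mean()                    # remove the DC component
    return np.abs(np.fft.rfft(signal)) ** 2

ps = power_spectrum("ATGCGTACGTTAGCCGATCA")    # toy barcode fragment
print(ps[:5])
# The paper then approximates such spectra with multi-library wavelet neural
# networks and clusters the resulting representations hierarchically.
```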

Keywords: DNA barcode, electron-ion interaction pseudopotential, Multi Library Wavelet Neural Networks (MLWNN)

Procedia PDF Downloads 318
584 Control Algorithm Design of Single-Phase Inverter For ZnO Breakdown Characteristics Tests

Authors: Kashif Habib, Zeeshan Ayyub

Abstract:

ZnO voltage-dependent resistors are widely used as components of electrical systems for over-voltage protection. They have wide application prospects in superconducting energy removal, generator de-excitation, and over-voltage protection of electrical and electronic equipment. At present, research on the application of ZnO voltage-dependent resistors has stopped at their nonlinear voltage-current characteristic and over-voltage protection. There is no further study of the over-voltage breakdown characteristics, such as the combustion phenomena, the voltage and current measured at breakdown, and the effect on surrounding equipment; this is a blind spot in their application. So, when we test the characteristics of a ZnO voltage-dependent resistor, we need to design a reasonable test power supply that keeps the terminal voltage sinusoidal, simulating the power-frequency (PF) voltage of real supply conditions. We put forward the solution of using an inverter to generate controllable power. The paper mainly focuses on the breakdown-characteristic test power supply for nonlinear ZnO voltage-dependent resistors. Based on mature switching power supply technology, we propose a power control system using the inverter as the core. The supply realizes a sinusoidal voltage output from a three-phase PF-AC input, with three control modes (RMS, Peak, Average) for the current output. We choose the TMS320F2812M as the control part of the hardware platform. It converts the power from three-phase to a controlled single-phase sinusoidal voltage through a rectifier, filter, and inverter. The designed controller produces SPWM to obtain the controlled voltage source via an appropriate multi-loop control strategy, while executing data acquisition and display, system protection, start-up logic control, etc. The TMS320F2812M is able to complete the multi-loop control quickly and handles the inverter output control well.
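
Since SPWM generation is central to the design, a minimal sine-triangle SPWM sketch follows; the 50 Hz reference, 5 kHz carrier, and modulation index are illustrative assumptions, and the multi-loop control and DSP-specific firmware are not shown.

```python
import numpy as np

def spwm_gate_signal(t: np.ndarray, f_ref: float = 50.0,
                     f_carrier: float = 5000.0, m: float = 0.8) -> np.ndarray:
    """Sine-triangle SPWM: the gate is high while the sinusoidal
    reference exceeds the triangular carrier."""
    reference = m * np.sin(2 * np.pi * f_ref * t)
    carrier = 2 * np.abs(2 * ((t * f_carrier) % 1.0) - 1.0) - 1.0  # triangle in [-1, 1]
    return (reference > carrier).astype(int)

t = np.linspace(0.0, 0.02, 20000)    # one 50 Hz fundamental period
gates = spwm_gate_signal(t)          # switching duty cycle tracks the sine reference
```

After low-pass filtering by the output stage, the switched waveform approximates the desired sinusoidal terminal voltage.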

Keywords: ZnO, multi-loop control, SPWM, non-linear load

Procedia PDF Downloads 325
583 Object Detection in Digital Images under Non-Standardized Conditions Using Illumination and Shadow Filtering

Authors: Waqqas-ur-Rehman Butt, Martin Servin, Marion Pause

Abstract:

In recent years, object detection has gained much attention and is a very encouraging research area in the field of computer vision. Robust detection of object boundaries in an image is demanded in numerous applications of human-computer interaction and automated surveillance systems. Many methods and approaches have been developed for automatic object detection in various fields, such as automotive, quality control management and environmental services. Unfortunately, to the best of our knowledge, object detection under uneven illumination with shadow consideration has not been well solved yet; this problem is also one of the major hurdles keeping object detection methods from practical application. This paper presents an approach to automatic object detection in images under non-standardized environmental conditions. A key challenge is how to detect the object, particularly under uneven illumination. Whatever the image-capturing conditions, the algorithms need to consider a variety of possible environmental factors, as colour information, lighting and shadows vary from image to image. Existing methods mostly fail to produce appropriate results due to variation in colour information, lighting effects, threshold specifications, histogram dependencies and colour ranges. To overcome these limitations, we propose an object detection algorithm with pre-processing methods that reduce the interference caused by shadow and illumination effects without fixed parameters. We use the YCrCb colour model without any specific colour ranges or predefined threshold values. The segmented object regions are further classified using morphological operations (erosion and dilation) and contours. The proposed approach was applied to a large image data set acquired under various environmental conditions for wood stack detection. Experiments show promising results for the proposed approach in comparison with existing methods.
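
A minimal sketch of the segmentation-and-cleanup stage described above, assuming OpenCV; the adaptive-threshold block size and the morphological kernel are illustrative stand-ins for the paper's parameter-free pre-processing, not its exact settings.

```python
import cv2
import numpy as np

def detect_object_regions(image_bgr: np.ndarray) -> list:
    """Segment candidate regions in YCrCb space, then clean them up with
    erosion/dilation and extract contours."""
    ycrcb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YCrCb)
    y, cr, cb = cv2.split(ycrcb)
    y_eq = cv2.equalizeHist(y)           # equalize luminance against uneven illumination
    mask = cv2.adaptiveThreshold(y_eq, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                 cv2.THRESH_BINARY, 31, 5)  # no global fixed threshold
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.erode(mask, kernel)       # remove small shadow speckles
    mask = cv2.dilate(mask, kernel)      # restore the main object regions
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return contours

demo = np.random.default_rng(0).integers(0, 255, (240, 320, 3)).astype(np.uint8)
regions = detect_object_regions(demo)    # synthetic image, for illustration only
```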

Keywords: image processing, illumination equalization, shadow filtering, object detection

Procedia PDF Downloads 216
582 Effect of 12 Weeks Pedometer-Based Workplace Program on Inflammation and Arterial Stiffness in Young Men with Cardiovascular Risks

Authors: Norsuhana Omar, Amilia Aminuddina Zaiton Zakaria, Raifana Rosa Mohamad Sattar, Kalaivani Chellappan, Mohd Alauddin Mohd Ali, Norizam Salamt, Zanariyah Asmawi, Norliza Saari, Aini Farzana Zulkefli, Nor Anita Megat Mohd. Nordin

Abstract:

Inflammation plays an important role in the pathogenesis of vascular dysfunction leading to arterial stiffness. Pulse wave velocity (PWV) and augmentation index (AI), as tools for the assessment of vascular damage, are widely used and have been shown to predict cardiovascular disease (CVD). C-reactive protein (CRP) is a marker of inflammation. Several studies have noted that regular exercise is associated with reduced arterial stiffness. The lack of exercise among Malaysians and the increasing CVD morbidity and mortality among young men are of concern, and in Malaysia data on workplace exercise interventions are scarce. A programme was designed to enable subjects to increase their level of walking as part of their daily work routine, self-monitored using pedometers. The aim of this study was to evaluate the reduction of inflammation, measured by CRP, and the improvement in arterial stiffness, measured by carotid-femoral PWV (PWVCF) and AI. A total of 70 young men (20-40 years) who were sedentary, achieving less than 5,000 steps/day in casual walking, and had 2 or more cardiovascular risk factors were recruited at the Institute of Vocational Skills for Youth (IKBN Hulu Langat). Subjects were randomly assigned to a control group (CG) (n=34; no change in walking) and a pedometer group (PG) (n=36; minimum target: 8,000 steps/day). CRP was measured by an immunological method, while PWVCF and AI were measured using a Vicorder. All parameters were measured at baseline and after 12 weeks. Data were analyzed using the Statistical Package for the Social Sciences Version 22 (SPSS Inc., Chicago, IL, USA). At post-intervention, the CG step counts were similar (4983 ± 366 vs 5697 ± 407 steps/day), while the PG increased step count from 4996 ± 805 to 10,128 ± 511 steps/day (P<0.001). The PG showed significant improvement in anthropometric variables and lipids (time and group effect p<0.001). For the vascular assessment, the PG showed significant decreases (time and group effect p<0.001) in PWV (7.21 ± 0.83 to 6.42 ± 0.89 m/s), AI (11.88 ± 6.25 to 8.83 ± 3.7%) and CRP (pre = 2.28 ± 3.09, post = 1.08 ± 1.37 mg/L). However, no changes were seen in the CG. In conclusion, a pedometer-based walking programme may be an effective strategy for promoting increased daily physical activity, which reduces cardiovascular risk markers and thus improves cardiovascular health in terms of inflammation and arterial stiffness. A community intervention for health maintenance has the potential to adopt walking as an exercise and vascular fitness indices as the performance measuring tools.

Keywords: arterial stiffness, exercise, inflammation, pedometer

Procedia PDF Downloads 354
581 The Direct Deconvolution Model for the Large Eddy Simulation of Turbulence

Authors: Ning Chang, Zelong Yuan, Yunpeng Wang, Jianchun Wang

Abstract:

Large eddy simulation (LES) has been extensively used in the investigation of turbulence. LES calculates the grid-resolved large-scale motions and leaves the small scales to be modeled by subfilter-scale (SFS) models. Among the existing SFS models, the deconvolution model has been used successfully in the LES of engineering flows and geophysical flows. Despite the wide application of deconvolution models, the effects of subfilter-scale dynamics and filter anisotropy on the accuracy of SFS modeling have not been investigated in depth. The results of LES are highly sensitive to the selection of filters and the anisotropy of the grid, which has been overlooked in previous research. In the current study, two critical aspects of LES are investigated. Firstly, we analyze the influence of subfilter-scale (SFS) dynamics on the accuracy of direct deconvolution models (DDM) at varying filter-to-grid ratios (FGR) in isotropic turbulence. An array of invertible filters is employed, encompassing Gaussian, Helmholtz I and II, Butterworth, Chebyshev I and II, Cauchy, Pao, and rapidly decaying filters. The significance of the FGR becomes evident, as it acts as a pivotal factor in error control for precise SFS stress prediction. When the FGR is set to 1, the DDM models cannot accurately reconstruct the SFS stress due to insufficient resolution of the SFS dynamics. Notably, prediction capabilities are enhanced at an FGR of 2, resulting in accurate SFS stress reconstruction, except for cases involving the Helmholtz I and II filters. A remarkable precision close to 100% is achieved at an FGR of 4 for all DDM models. Additionally, the exploration extends to filter anisotropy to address its impact on the SFS dynamics and LES accuracy. By employing the dynamic Smagorinsky model (DSM), the dynamic mixed model (DMM), and the direct deconvolution model (DDM) with anisotropic filters, aspect ratios (AR) ranging from 1 to 16 in the LES filters are evaluated. The findings highlight the DDM's proficiency in accurately predicting SFS stresses under highly anisotropic filtering conditions. High correlation coefficients exceeding 90% are observed in the a priori study for the DDM's reconstructed SFS stresses, surpassing those of the DSM and DMM models; however, these correlations tend to decrease as filter anisotropy increases. In the a posteriori studies, the DDM model consistently outperforms the DSM and DMM models across various turbulence statistics, encompassing velocity spectra, probability density functions related to vorticity, SFS energy flux, velocity increments, strain-rate tensors, and SFS stress. It is observed that as filter anisotropy intensifies, the results of the DSM and DMM deteriorate, while the DDM continues to deliver satisfactory results across all filter-anisotropy scenarios. These findings emphasize the DDM framework's potential as a valuable tool for advancing the development of sophisticated SFS models for the LES of turbulence.
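
For reference, the closure problem the DDM addresses can be written in standard LES notation; the following is a sketch under those standard conventions, not the paper's exact formulation.

```latex
% Filtered velocity and the unclosed SFS stress
\bar{u}_i = G * u_i, \qquad
\tau_{ij} = \overline{u_i u_j} - \bar{u}_i \, \bar{u}_j .
% With an invertible filter G, the unfiltered field is recovered directly,
u_i^{*} = G^{-1} * \bar{u}_i ,
% and the direct deconvolution estimate of the SFS stress becomes
\tau_{ij}^{\mathrm{DDM}} = \overline{u_i^{*} u_j^{*}} - \bar{u}_i^{*} \, \bar{u}_j^{*} .
```

The FGR controls how well the resolved field retains the dynamics that the inverse filter must recover, which is consistent with the failure at FGR = 1 and the accurate reconstruction at FGR ≥ 2 reported above.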

Keywords: deconvolution model, large eddy simulation, subfilter scale modeling, turbulence

Procedia PDF Downloads 75
580 Perceptions of Teachers toward Inclusive Education Focus on Hearing Impairment

Authors: Chalise Kiran

Abstract:

The prime idea of inclusive education is to mainstream every child in education. However, implementation becomes challenging when there are gaps between policy and practice, and even more challenging when children have disabilities. Generally, the focus falls on the policy gap, but the problem does not always lie with policy; proper practice can be a challenge in countries like Nepal. In determining practice, teachers' perceptions toward inclusion play a vital role. Nepal has categorized disability into 7 types (physical, visual, hearing, vision/hearing, speech, mental, and multiple); of these, hearing impairment is the realm of this study. In the context of limited research on children with disabilities, and rare research on children with hearing impairment (CWHI) and their education in Nepal, this study is a pioneering effort to identify the problems and challenges of CWHI-focused inclusive education in schools, including gaps and barriers to its proper implementation. Philosophically, the paradigm of the study is post-positivism. In the post-positivist worldview, a quantitative approach is taken, with description of the situation and inferential relationships revealed in the study; this is related to the natural model of objective reality. The data were collected through an individual survey of the teachers and head teachers of 35 schools in Nepal. The survey questionnaire was prepared and filled in by respondents from schools where CWHI study, across 20 districts in the 7 provinces of Nepal. Through these considerations, perceptions of CWHI-focused inclusive education were explored in the study. The data were analyzed using both descriptive and inferential tools: a Likert scale-based analysis for the descriptive part, and the chi-square test to identify significant relationships between dependent and independent variables. The descriptive analysis showed that the majority of teachers have positive perceptions toward implementing CWHI-focused inclusive education, though there are some problems and challenges. The study has identified the major challenges and problems categorically. Some of them are: a large number of students in a single class; availability of only generic textbooks for CWHI, and textbooks not available to all students; little opportunity for teachers to acquire knowledge on CWHI; not enough teachers in the schools; no flexibility in the curriculum; weak information systems in schools; no educational counselor available; disaster-prone students; no child abuse control strategy; no disabled-friendly schools; no free health check-up facility; no participation of the students in school activities and in child clubs, and so on. By and large, it is found that teachers' age, gender, years of experience, position, employment status, and own disability show no statistically significant relation to the successful implementation of, or perceptions toward, CWHI-focused inclusive education in schools. However, in some cases the set null hypothesis was rejected, while in others it was fully retained. The study suggests policy implications, implications for educational authorities, and implications for teachers and parents categorically.

Keywords: children with hearing impairment, disability, inclusive education, perception

Procedia PDF Downloads 112
579 Distribution and Ecological Risk Assessment of Trace Elements in Sediments along the Ganges River Estuary, India

Authors: Priyanka Mondal, Santosh K. Sarkar

Abstract:

The present study investigated the spatiotemporal distribution and ecological risk of trace elements in surface sediments (top 0-5 cm; grain size ≤ 0.63 µm) in relation to sediment quality characteristics along the Ganges River Estuary, India. Sediment samples were collected during ebb tide from intertidal regions covering seven sampling sites under diverse environmental stresses, and the elements were analyzed by ICP-AES. This positive, mixohaline, macro-tidal estuary has global significance, contributing ecological and economic services. The presence of fine clayey particles (47.03%) enhances the adsorption as well as the transportation of trace elements. There is a remarkable inter-element variation (mg kg-1 dry weight) in the distribution pattern, in the following order: Al (31801 ± 15943) > Fe (23337 ± 7584) > Mn (461 ± 147) > S (381 ± 235) > Zn (54 ± 18) > V (43 ± 14) > Cr (39 ± 15) > As (34 ± 15) > Cu (27 ± 11) > Ni (24 ± 9) > Se (17 ± 8) > Co (11 ± 3) > Mo (10 ± 2) > Hg (0.02 ± 0.01). An overall enrichment of the majority of trace elements was pronounced at the site Lot 8, ~35 km upstream of the estuarine mouth. In contrast, the minimum concentrations were recorded at the site Gangasagar, at the mouth of the estuary, which has a high energy profile. The prevalent variations in trace element distribution are attributable to a set of cumulative factors: hydrodynamic conditions, sediment dispersion patterns and textural variations, as well as non-homogeneous input of contaminants from point and non-point sources. In order to gain insight into trace element distribution, accumulation, and pollution status, the geoaccumulation index (Igeo) and the enrichment factor (EF) were used. The Igeo indicated that surface sediments were moderately polluted with As (0.60) and Mo (1.30) and strongly contaminated with Se (4.0). The EF indicated severe pollution by Se (53.82) and significant pollution by As (4.05) and Mo (6.0), pointing to an influx of As, Mo and Se into the sediments from anthropogenic sources (such as industrial and municipal sewage, atmospheric deposition, agricultural run-off, etc.). The significant role of the megacity Calcutta, in terms of untreated sewage discharge, atmospheric inputs and other anthropogenic activities, is worth mentioning. The ecological risk for the different trace elements was evaluated using the sediment quality guidelines effects range-low (ERL) and effects range-median (ERM). The concentrations of As, Cu and Ni at 100%, 43% and 86% of the sampling sites, respectively, exceeded the ERL value, while no element concentration exceeded the ERM. The potential ecological risk index values revealed that As at 14.3% of the sampling sites would pose a relatively moderate risk to benthic organisms. The effective role of finer clay particles in trace element distribution was revealed by multivariate analysis. The authors strongly recommend regular monitoring, with emphasis on accurate appraisal of the potential risk of trace elements, for effective and sustainable management of this estuarine environment.
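
The two indices follow their standard definitions, assumed here; the background values and the choice of reference element are study-specific and not stated in the abstract.

```latex
I_{geo} = \log_2\!\left(\frac{C_n}{1.5\,B_n}\right), \qquad
EF = \frac{\left(C_x / C_{ref}\right)_{\mathrm{sample}}}
          {\left(C_x / C_{ref}\right)_{\mathrm{background}}}
% C_n: measured concentration of element n; B_n: geochemical background value;
% the factor 1.5 absorbs natural lithogenic variability. C_ref is a conservative
% reference element (commonly Al or Fe; the choice here is an assumption).
```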

Keywords: pollution assessment, sediment contamination, sediment quality, trace elements

Procedia PDF Downloads 257
578 An Intelligent Search and Retrieval System for Mining Clinical Data Repositories Based on Computational Imaging Markers and Genomic Expression Signatures for Investigative Research and Decision Support

Authors: David J. Foran, Nhan Do, Samuel Ajjarapu, Wenjin Chen, Tahsin Kurc, Joel H. Saltz

Abstract:

The large-scale data and computational requirements of investigators throughout the clinical and research communities demand an informatics infrastructure that supports both existing and new investigative and translational projects in a robust, secure environment. In some subspecialties of medicine and research, the capacity to generate data has outpaced the methods and technology used to aggregate, organize, access, and reliably retrieve this information. Leading health care centers now recognize the utility of establishing an enterprise-wide, clinical data warehouse. The primary benefits that can be realized through such efforts include cost savings, efficient tracking of outcomes, advanced clinical decision support, improved prognostic accuracy, and more reliable clinical trials matching. The overarching objective of the work presented here is the development and implementation of a flexible Intelligent Retrieval and Interrogation System (IRIS) that exploits the combined use of computational imaging, genomics, and data-mining capabilities to facilitate clinical assessments and translational research in oncology. The proposed System includes a multi-modal Clinical & Research Data Warehouse (CRDW) that is tightly integrated with a suite of computational and machine-learning tools to provide insight into underlying tumor characteristics that are not apparent by human inspection alone. A key distinguishing feature of the System is a configurable Extract, Transform and Load (ETL) interface that enables it to adapt to different clinical and research data environments. This project is motivated by the growing emphasis on establishing Learning Health Systems, in which cyclical hypothesis generation and evidence evaluation become integral to improving the quality of patient care. To facilitate iterative prototyping and optimization of the algorithms and workflows for the System, the team has already implemented a fully functional Warehouse that can reliably aggregate information originating from multiple data sources including EHRs, Clinical Trial Management Systems, Tumor Registries, Biospecimen Repositories, Radiology PACS, Digital Pathology archives, Unstructured Clinical Documents, and Next Generation Sequencing services. The System enables physicians to systematically mine and review the molecular, genomic, image-based, and correlated clinical information about patient tumors, individually or as part of large cohorts, to identify patterns that may influence treatment decisions and outcomes. The CRDW core system has facilitated peer-reviewed publications and funded projects, including an NIH-sponsored collaboration to enhance the cancer registries in Georgia, Kentucky, New Jersey, and New York with machine-learning based classifications and quantitative pathomics feature sets. The CRDW has also resulted in a collaboration with the Massachusetts Veterans Epidemiology Research and Information Center (MAVERIC) at the U.S. Department of Veterans Affairs to develop algorithms and workflows to automate the analysis of lung adenocarcinoma. Those studies showed that combining computational nuclear signatures with traditional WHO criteria through the use of deep convolutional neural networks (CNNs) led to improved discrimination among tumor growth patterns. The team has also leveraged the Warehouse to support studies investigating the potential of utilizing a combination of genomic and computational imaging signatures to characterize prostate cancer.
The results of those studies show that integrating image biomarkers with genomic pathway scores is more strongly correlated with disease recurrence than using standard clinical markers.
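
To make the configurable ETL idea concrete, here is a hypothetical sketch of the kind of declarative field mapping such an interface applies; every source, field, and rule name is an illustrative assumption, not the IRIS schema.

```python
# A raw record as it might arrive from one configured source (illustrative)
RAW_RECORD = {"PatientID": "0004521", "Dx": "c34.1", "Source": "tumor_registry"}

# Declarative transform rules: source field -> (warehouse field, normalizer)
TRANSFORM_RULES = {
    "PatientID": ("mrn",   lambda v: v.lstrip("0")),  # normalize identifiers
    "Dx":        ("icd10", lambda v: v.upper()),      # standardize the vocabulary
}

def apply_mappings(record: dict, rules: dict) -> dict:
    """Rename and normalize fields per the rule table; pass unmapped
    fields through with lower-cased keys."""
    out = {}
    for field, value in record.items():
        if field in rules:
            target, normalize = rules[field]
            out[target] = normalize(value)
        else:
            out[field.lower()] = value
    return out

warehouse_row = apply_mappings(RAW_RECORD, TRANSFORM_RULES)
# -> {'mrn': '4521', 'icd10': 'C34.1', 'source': 'tumor_registry'}
```

Because the rules are data rather than code, adapting to a new clinical or research environment amounts to editing the configuration, which is the point of the ETL interface described above.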

Keywords: clinical data warehouse, decision support, data-mining, intelligent databases, machine-learning

Procedia PDF Downloads 127
577 Finite Element Analysis of the Anaconda Device: Efficiently Predicting the Location and Shape of a Deployed Stent

Authors: Faidon Kyriakou, William Dempster, David Nash

Abstract:

Abdominal Aortic Aneurysm (AAA) is a major life-threatening pathology for which modern approaches reduce the need for open surgery through the use of stenting. The success of stenting, though, is sometimes jeopardized by the final position of the stent graft inside the human artery, which may result in migration, endoleaks or blood flow occlusion. Herein, a finite element (FE) model of the commercial medical device Anaconda™ (Vascutek, Terumo) has been developed and validated in order to create a numerical tool able to provide useful clinical insight before the surgical procedure takes place. The Anaconda™ device consists of a series of NiTi rings sewn onto woven polyester fabric, a structure that, despite its column stiffness, is flexible enough to be used in very tortuous geometries. For the purposes of this study, an FE model of the device was built in Abaqus® (version 6.13-2) with a combination of beam, shell and surface elements; this choice of building blocks was made to keep the computational cost to a minimum. The validation of the numerical model was performed by comparing the deployed position of a full stent graft device inside a constructed AAA with a duplicate set-up in Abaqus®. Specifically, an AAA geometry was built in CAD software and included regions of both high and low tortuosity. Subsequently, the CAD model was 3D printed into a transparent aneurysm, and a stent was deployed in the lab following the steps of the clinical procedure. Images on the frontal and sagittal planes of the experiment allowed the comparison with the results of the numerical model. By overlapping the experimental and computational images, the mean and maximum distances between the rings of the two models were measured in the longitudinal and transverse directions, and a 5 mm upper bound was set, a limit commonly used by clinicians when working with simulations. The two models showed very good agreement in their spatial positioning, especially in the less tortuous regions. As a result, and despite the inherent uncertainties of a surgical procedure, the FE model allows confidence that the final position of the stent graft, when deployed in vivo, can be predicted with significant accuracy. Moreover, the numerical model runs in just a few hours, an encouraging result for applications in the clinical routine. In conclusion, the efficient modelling of a complicated structure that combines thin scaffolding and fabric has been demonstrated to be feasible, and the capability of predicting the location of each stent ring, as well as the global shape of the graft, has been shown. This can allow surgeons to better plan their procedures and medical device manufacturers to optimize their designs. The current model can further be used as a starting point for patient-specific CFD analysis.
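
A minimal sketch of the validation metric described above, assuming matched ring-centre coordinates from the overlapped images; the coordinates below are illustrative, not the study's measurements.

```python
import numpy as np

def ring_deviation(exp_rings: np.ndarray, fe_rings: np.ndarray) -> tuple:
    """Mean and maximum distances (mm) between matched ring centres,
    split into longitudinal (z) and transverse (x, y) components."""
    longitudinal = np.abs(exp_rings[:, 2] - fe_rings[:, 2])
    transverse = np.linalg.norm(exp_rings[:, :2] - fe_rings[:, :2], axis=1)
    return (longitudinal.mean(), longitudinal.max(),
            transverse.mean(), transverse.max())

# Illustrative coordinates (mm) for three matched rings: experiment vs FE model
exp = np.array([[0.0, 0.0, 10.0], [1.2, 0.5, 25.0], [2.0, 1.0, 40.0]])
fe  = np.array([[0.4, 0.2, 11.0], [1.0, 0.9, 26.5], [2.6, 1.3, 38.8]])
l_mean, l_max, t_mean, t_max = ring_deviation(exp, fe)
assert max(l_max, t_max) < 5.0   # the 5 mm clinical acceptance bound from the study
```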

Keywords: AAA, efficiency, finite element analysis, stent deployment

Procedia PDF Downloads 191
576 The Role of Two Macrophyte Species in Mineral Nutrient Cycling in Human-Impacted Water Reservoirs

Authors: Ludmila Polechonska, Agnieszka Klink

Abstract:

Biogeochemical studies of macrophytes shed light on element bioavailability, transfer through food webs and possible effects on the biota, and provide a basis for their practical application in aquatic monitoring and remediation. Measuring the accumulation of elements in plants can provide time-integrated information about the presence of chemicals in aquatic ecosystems. The aim of the study was to determine and compare the contents of micro- and macroelements in two cosmopolitan macrophytes, the submerged Ceratophyllum demersum (hornwort) and the free-floating Hydrocharis morsus-ranae (European frog-bit), in order to assess their bioaccumulation potential, the stock of elements accumulated in each plant, and their role in nutrient cycling in small water reservoirs. Sampling sites were designated in 25 oxbow lakes in urban areas of Lower Silesia (SW Poland). At each sampling site, fresh whole plants of C. demersum and H. morsus-ranae were collected from squares of 1x1 m where the species coexisted; European frog-bit was separated into leaves, stems and roots. For biomass measurement, all plants growing on 1 square meter were collected, dried and weighed. At the same time, water samples were collected from each reservoir and their pH and EC were determined. Water samples were filtered and acidified, and plant samples were digested in concentrated nitric acid. Next, the contents of Ca, Cu, Fe, K, Mg, Mn, Ni and Zn were determined using the atomic absorption method (AAS). Statistical analysis showed that C. demersum and the organs of H. morsus-ranae differed significantly in metal content (Kruskal-Wallis ANOVA, p<0.05). Contents of Cu, Mn, Ni and Zn were higher in hornwort, while European frog-bit contained more Ca, Fe, K and Mg. Bioaccumulation factors (BCF = content in plant/concentration in water) showed a similar pattern of metal bioaccumulation: microelements were more intensively accumulated by hornwort and macroelements by frog-bit. Based on BCF values, both species may be positively evaluated as good accumulators of Cu, Fe, Mn, Ni and Zn. However, the distribution of metals in H. morsus-ranae was uneven: the majority of the studied elements were retained in the roots, which may indicate the existence of physiological barriers developed for dealing with toxicity. Some share of Ca and K was actively transported to the stems, but only Mg to the leaves. Although the biomass of C. demersum was two times greater than that of H. morsus-ranae, the element off-take was greater only for Cu, Mn, Ni and Zn. Nevertheless, it can be stated that despite a relatively small biomass compared to other macrophytes, both species may influence the removal of trace elements from aquatic ecosystems and, as they serve as food for some animals, the incorporation of toxic elements into food chains. There was a significant positive correlation between the contents of Mn and Fe in water and in the roots of H. morsus-ranae (R=0.51 and R=0.60, respectively), as well as between the Cu concentration in water and in C. demersum (R=0.41) (Spearman rank correlation, p<0.05). High bioaccumulation rates and the correlations between element concentrations in plants and water point to their possible use as passive biomonitors of aquatic pollution.
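
The BCF definition given above and the Spearman test are straightforward to compute; a minimal sketch follows, with illustrative per-site values standing in for the measured data.

```python
import numpy as np
from scipy.stats import spearmanr

def bcf(plant_content: np.ndarray, water_concentration: np.ndarray) -> np.ndarray:
    """Bioaccumulation factor as defined in the study:
    content in plant / concentration in water."""
    return plant_content / water_concentration

# Illustrative values for one element across five reservoirs (not the study's data)
plant = np.array([120.0, 95.0, 150.0, 80.0, 200.0])   # mg/kg dry weight in plant
water = np.array([0.05, 0.04, 0.07, 0.03, 0.09])      # mg/L in reservoir water

print(bcf(plant, water))              # per-site bioaccumulation factors
rho, p = spearmanr(water, plant)      # rank correlation, as used in the study
print(f"Spearman R = {rho:.2f}, p = {p:.3f}")
```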

Keywords: aquatic plants, bioaccumulation, biomonitoring, macroelements, phytoremediation, trace metals

Procedia PDF Downloads 189
575 Enhancing the Resilience of Combat System-Of-Systems Under Certainty and Uncertainty: Two-Phase Resilience Optimization Model and Deep Reinforcement Learning-Based Recovery Optimization Method

Authors: Xueming Xu, Jiahao Liu, Jichao Li, Kewei Yang, Minghao Li, Bingfeng Ge

Abstract:

A combat system-of-systems (CSoS) comprises various types of functional combat entities that interact to meet corresponding task requirements in the present and future. Enhancing the resilience of CSoS holds significant military value in optimizing the operational planning process, improving military survivability, and ensuring the successful completion of operational tasks. Accordingly, this research proposes an integrated framework called CSoS resilience enhancement (CSoSRE) to enhance the resilience of CSoS from a recovery perspective. Specifically, this research presents a two-phase resilience optimization model to define a resilience optimization objective for CSoS. This model considers not only task baseline, recovery cost, and recovery time limit but also the characteristics of emergency recovery and comprehensive recovery. Moreover, the research extends it from the deterministic case to the stochastic case to describe the uncertainty in the recovery process. Based on this, a resilience-oriented recovery optimization method based on deep reinforcement learning (RRODRL) is proposed to determine a set of entities requiring restoration and their recovery sequence, thereby enhancing the resilience of CSoS. This method improves the deep Q-learning algorithm by designing a discount factor that adapts to changes in CSoS state at different phases, simultaneously considering the network’s structural and functional characteristics within CSoS. Finally, extensive experiments are conducted to test the feasibility, effectiveness and superiority of the proposed framework. The obtained results offer useful insights for guiding operational recovery activity and designing a more resilient CSoS.
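
As a heavily simplified stand-in for the RRODRL method, the sketch below uses tabular Q-learning (not a deep network) to show only the phase-adaptive discount idea; the bitmask state, the resilience-gain reward proxy, the phase boundary, and all hyperparameters are illustrative assumptions, not the paper's formulation.

```python
import random
import numpy as np

N_ENTITIES = 6
q_table = np.zeros((2 ** N_ENTITIES, N_ENTITIES))  # state: bitmask of restored entities

def discount(phase: str) -> float:
    """Phase-adaptive discount: favour quick gains during emergency recovery,
    longer-horizon value during comprehensive recovery."""
    return 0.5 if phase == "emergency" else 0.95

def step(state: int, action: int) -> tuple:
    next_state = state | (1 << action)             # restore one entity
    reward = bin(next_state).count("1") - bin(state).count("1")  # resilience-gain proxy
    return next_state, float(reward)

alpha, epsilon = 0.1, 0.2
for episode in range(500):
    state = 0
    for t in range(N_ENTITIES):
        phase = "emergency" if t < N_ENTITIES // 2 else "comprehensive"
        if random.random() < epsilon:
            action = random.randrange(N_ENTITIES)  # explore
        else:
            action = int(np.argmax(q_table[state]))  # exploit
        next_state, reward = step(state, action)
        target = reward + discount(phase) * q_table[next_state].max()
        q_table[state, action] += alpha * (target - q_table[state, action])
        state = next_state
```

The greedy policy read off the learned table yields a recovery sequence over the entities, which is the output the paper's method produces at scale with a deep Q-network.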

Keywords: combat system-of-systems, resilience optimization model, recovery optimization method, deep reinforcement learning, certainty and uncertainty

Procedia PDF Downloads 16
574 A Pilot Study of Influences of Scan Speed on Image Quality for Digital Tomosynthesis

Authors: Li-Ting Huang, Yu-Hsiang Shen, Cing-Ciao Ke, Sheng-Pin Tseng, Fan-Pin Tseng, Yu-Ching Ni, Chia-Yu Lin

Abstract:

Chest radiography is the most common technique for the diagnosis and follow-up of pulmonary diseases. However, lesions superimposed on normal structures are difficult to detect in chest radiography. Chest tomosynthesis is a relatively new technique to obtain 3D section images from a set of low-dose projections acquired over a limited angular range. However, there are some limitations: patients undergoing tomosynthesis have to be able to hold their breath steadily for 10 seconds. A digital tomosynthesis system with an advanced reconstruction algorithm and a high-stability motion mechanism was developed by our research group, and the system is expected to perform a bidirectional chest scan within 10 seconds. The purpose of this study is to determine the influence of scan speed on image quality for our digital tomosynthesis system. The major factors that lead to image blurring are the motion of the X-ray source and the motion of the patient. For the former, an experiment imaging a chest phantom at three different scan speeds (6 cm/s, 8 cm/s, and 15 cm/s) was conducted to understand the influence of scan speed on image quality. For the latter, a normal SD (Sprague-Dawley) rat was imaged both alive and sacrificed to assess the impact of breathing motion on image quality. In both experiments, the profiles of the ROIs (regions of interest) and the CNRs (contrast-to-noise ratios) of the ROIs relative to normal tissue in the reconstructed images were examined to quantify image-quality degradation. The preliminary results show no obvious degradation of image quality with increasing scan speed, possibly due to the advanced designs of the system's hardware and software. This implies that a higher speed (15 cm/s) than that of the commercialized tomosynthesis system (12 cm/s) is achieved by the proposed system, and therefore a complete chest scan within 10 seconds is expected.
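
A minimal sketch of one common CNR definition (lesion ROI against normal tissue); the study's exact definition may differ, and the patches below are synthetic stand-ins for reconstructed-image data.

```python
import numpy as np

def cnr(roi: np.ndarray, background: np.ndarray) -> float:
    """Contrast-to-noise ratio of a lesion ROI against normal tissue:
    |mean difference| divided by the background noise."""
    return abs(roi.mean() - background.mean()) / background.std()

# Illustrative 2D patches cut from a reconstructed section image
rng = np.random.default_rng(0)
lesion = rng.normal(loc=120.0, scale=8.0, size=(32, 32))
normal = rng.normal(loc=100.0, scale=8.0, size=(32, 32))
print(f"CNR = {cnr(lesion, normal):.2f}")
```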

Keywords: chest radiography, digital tomosynthesis, image quality, scan speed

Procedia PDF Downloads 332
573 A Case Study on the Estimation of Design Discharge for Flood Management in Lower Damodar Region, India

Authors: Susmita Ghosh

Abstract:

The catchment area of the Damodar River, India, experiences seasonal rains due to the south-west monsoon every year and, depending upon the intensity of the storms, floods occur. During the monsoon season, the rainfall in the area is mainly due to active monsoon conditions. The upstream reach of the Damodar river system has five dams that store water for various purposes, viz. irrigation, hydro-power generation, municipal supplies and, last but not least, flood moderation. But the downstream reach of the Damodar River, known as the Lower Damodar region, suffers severely and frequently from floods due to heavy monsoon rainfall and also releases from upstream reservoirs. Therefore, an effective flood management study is required to know in depth the nature and extent of flood, water logging, and erosion related problems, the affected area, and damages in the Lower Damodar region, by conducting mathematical model studies. The design flood or discharge is needed to set up the respective model for generating several scenarios from the simulation runs; the ultimate aim is to achieve a sustainable flood management scheme from the several alternatives. There are various methods for estimating the flood discharges to be carried through the rivers and their tributaries for quick drainage from inundated areas due to drainage congestion and excess rainfall. In the present study, flood frequency analysis is performed to decide the design flood discharge of the study area. This, on the other hand, has limitations with respect to the availability of a long peak-flood record for correctly determining the type of probability density function. If sufficient past records are available, the maximum flood on a river with a given frequency can safely be determined. The floods of different frequencies for the Damodar have been calculated with five candidate distributions, i.e., generalized extreme value, extreme value type I, Pearson type III, log-Pearson and normal. An annual peak discharge series is available at Durgapur barrage for the period 1979 to 2013 (35 years), and the available series was subjected to frequency analysis. The primary objective of the flood frequency analysis is to relate the magnitude of extreme events to their frequencies of occurrence through the use of probability distributions. The design floods for return periods of 10, 15 and 25 years at Durgapur barrage are estimated by the flood frequency method. It is necessary to develop flood hydrographs for the above floods to facilitate the mathematical model studies to find the depth and extent of inundation, etc. The null hypothesis that the distributions fit the data is checked at 95% confidence with a goodness-of-fit test, i.e., the chi-square test. The goodness-of-fit test reveals that all five distributions show a good fit to the sample population and are therefore accepted. However, there is considerable variation in the estimated frequency floods, so it is considered prudent to average out the results of these five distributions for the required frequencies. The inundated area from past data is well matched using this design flood.
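
A hedged sketch of the frequency analysis described above: fit the candidate distributions to an annual peak-discharge series and average the return-period quantiles. The discharge values below are illustrative, not the Durgapur barrage record, and SciPy's maximum-likelihood fits stand in for whatever fitting method the study used.

```python
import numpy as np
from scipy import stats

# Illustrative annual peak discharges (m³/s), not the 1979-2013 Durgapur series
peaks = np.array([4200., 6800., 3900., 8100., 5600., 7300., 4900.,
                  9200., 6100., 5200., 7800., 4400., 6600., 5900.])

candidates = {
    "GEV": stats.genextreme, "Gumbel (EV-I)": stats.gumbel_r,
    "Pearson III": stats.pearson3, "Log-Pearson III": stats.pearson3,
    "Normal": stats.norm,
}

for T in (10, 15, 25):                    # return periods used in the study
    prob = 1.0 - 1.0 / T                  # non-exceedance probability
    estimates = []
    for name, dist in candidates.items():
        data = np.log(peaks) if name.startswith("Log") else peaks
        params = dist.fit(data)           # maximum-likelihood fit
        q = dist.ppf(prob, *params)
        estimates.append(np.exp(q) if name.startswith("Log") else q)
    # Average the five estimates, as the study recommends
    print(f"T = {T} yr: design flood ≈ {np.mean(estimates):.0f} m³/s")
```

A chi-square goodness-of-fit check, as in the study, would bin the observed series and compare the bin counts against each fitted distribution's expected counts before accepting it.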

Keywords: design discharge, flood frequency, goodness of fit, sustainable flood management

Procedia PDF Downloads 201