Search results for: biocontrol methods
13097 Sudden Death and Chronic Disseminated Intravascular Coagulation (DIC): Two Case Reports
Authors: Saker Lilia, Youcef Mellouki, Lakhdar Sellami, Yacine Zerairia, Abdelhaid Zetili, Fatma Guahria, Fateh Kaious, Nesrine Belkhodja, Abdelhamid Mira
Abstract:
Background: Sudden death is regarded as a suspicious demise necessitating autopsy, as stipulated by legal authorities. Chronic disseminated intravascular coagulation (DIC) is an acquired clinical and biological syndrome characterized by a severe and often fatal prognosis, stemming from systemic, uncontrolled, diffuse coagulation activation. Irrespective of its origin, DIC is associated with a diverse spectrum of manifestations, ranging from minor biological coagulation alterations to profoundly severe conditions in which hemorrhagic complications may take precedence. Simultaneously, microthrombi contribute to the development of multi-organ failure. Objective: This study seeks to evaluate the role of autopsy in determining the causes of death. Materials and Methods: We present two instances of sudden death involving females who underwent autopsy at the Forensic Medicine Department of the University Hospital of Annaba, Algeria. These autopsies were performed at the request of the prosecutor, aiming to determine the causes of death and illuminate the exact circumstances surrounding it. Methods utilized: analysis of the initial information report; findings from postmortem examinations; histological assessments and toxicological analyses. Results: The presence of DIC was noted, affecting nearly all veins, with distinct etiologies. Conclusion: For the establishment of a meaningful diagnosis: • thorough understanding of the subject matter is imperative; • precise alignment with medicolegal data is essential.
Keywords: chronic disseminated intravascular coagulation, sudden death, autopsy, causes of death
Procedia PDF Downloads 85
13096 A Robust and Efficient Segmentation Method Applied for Cardiac Left Ventricle with Abnormal Shapes
Authors: Peifei Zhu, Zisheng Li, Yasuki Kakishita, Mayumi Suzuki, Tomoaki Chono
Abstract:
Segmentation of the left ventricle (LV) from cardiac ultrasound images provides a quantitative functional analysis of the heart to diagnose disease. The Active Shape Model (ASM) is a widely used approach for LV segmentation but suffers from the drawback that initialization of the shape model is not sufficiently close to the target, especially when dealing with abnormal shapes in disease. In this work, a two-step framework is proposed to improve the accuracy and speed of model-based segmentation. Firstly, a robust and efficient detector based on a Hough forest is proposed to localize cardiac feature points, and such points are used to predict the initial fitting of the LV shape model. Secondly, to achieve more accurate and detailed segmentation, ASM is applied to further fit the LV shape model to the cardiac ultrasound image. The performance of the proposed method is evaluated on a dataset of 800 cardiac ultrasound images that are mostly of abnormal shapes. The proposed method is compared to several combinations of ASM and existing initialization methods. The experiment results demonstrate that the accuracy of feature point detection for initialization was improved by 40% compared to the existing methods. Moreover, the proposed method significantly reduces the number of necessary ASM fitting loops, thus speeding up the whole segmentation process. Therefore, the proposed method is able to achieve more accurate and efficient segmentation results and is applicable to unusual heart shapes in cardiac diseases, such as left atrial enlargement.
Keywords: Hough forest, active shape model, segmentation, cardiac left ventricle
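The feature-point localization step described above lets image patches cast displacement votes into an accumulator, generalized-Hough style, with the vote peak giving the feature-point location. A minimal sketch of that voting idea, with hypothetical patch positions and hard-coded offsets standing in for a trained regressor (this is not the authors' Hough forest implementation):

```python
from collections import Counter

def hough_vote(patches, step=4):
    """Accumulate displacement votes from patches and return the
    highest-scoring feature-point location, quantized to a grid.

    patches: list of ((px, py), (dx, dy)) pairs, where the offset
    (dx, dy) is what a trained regressor (e.g. a Hough forest leaf)
    would predict from the patch appearance.
    """
    acc = Counter()
    for (px, py), (dx, dy) in patches:
        # Each patch votes for the location it believes the feature
        # point lies at, snapped to a coarse grid cell.
        vx = round((px + dx) / step)
        vy = round((py + dy) / step)
        acc[(vx, vy)] += 1
    (vx, vy), _ = acc.most_common(1)[0]
    return vx * step, vy * step

# Three hypothetical patches whose offsets all point near (40, 60),
# plus one noisy outlier; the vote peak ignores the outlier.
patches = [((10, 20), (30, 40)),
           ((50, 50), (-10, 10)),
           ((42, 70), (-2, -10)),
           ((0, 0), (100, 100))]
print(hough_vote(patches))  # → (40, 60)
```

In a real Hough forest, the offsets come from leaf nodes of trees trained on annotated ultrasound patches; here they are hard-coded purely for illustration.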
Procedia PDF Downloads 339
13095 Heuristics for Optimizing Power Consumption in the Smart Grid
Authors: Zaid Jamal Saeed Almahmoud
Abstract:
Our increasing reliance on electricity, with inefficient consumption trends, has resulted in several economic and environmental threats. These threats include wasting billions of dollars, draining limited resources, and elevating the impact of climate change. As a solution, the smart grid is emerging as the future power grid, with smart techniques to optimize power consumption and electricity generation. Minimizing the peak power consumption under a fixed delay requirement is a significant problem in the smart grid. In addition, matching demand to supply is a key requirement for the success of the future electricity grid. In this work, we consider the problem of minimizing the peak demand under appliance constraints by scheduling power jobs with uniform release dates and deadlines. As the problem is known to be NP-hard, we propose two versions of a heuristic algorithm for solving it. Our theoretical analysis and experimental results show that our proposed heuristics outperform existing methods by providing a better approximation to the optimal solution. In addition, we consider dynamic pricing methods to minimize the peak load and match demand to supply in the smart grid. Our contribution is the proposal of generic, as well as customized, pricing heuristics to minimize the peak demand and match demand with supply. In addition, we propose optimal pricing algorithms that can be used when the maximum deadline period of the power jobs is relatively small. Finally, we provide theoretical analysis and conduct several experiments to evaluate the performance of the proposed algorithms.
Keywords: heuristics, optimization, smart grid, peak demand, power supply
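The scheduling problem described in the abstract, power jobs sharing a release time and a common deadline, scheduled to minimize the peak aggregate load, admits simple greedy heuristics. The following is an illustrative baseline of that kind (with made-up jobs), not the authors' proposed algorithm:

```python
def schedule_min_peak(jobs, deadline):
    """Greedy heuristic: place each job (power, duration) within
    [0, deadline) so as to keep the peak aggregate load low.
    All jobs share release time 0 and a common deadline, as in the
    uniform release-date/deadline setting described above.
    """
    load = [0.0] * deadline
    schedule = {}
    # Place the most power-hungry jobs first; they constrain the peak most.
    for idx, (power, duration) in sorted(
            enumerate(jobs), key=lambda t: -t[1][0]):
        best_start, best_peak = None, float("inf")
        for start in range(deadline - duration + 1):
            # Peak that would result from starting this job here.
            peak = max(load[t] + power for t in range(start, start + duration))
            if peak < best_peak:
                best_start, best_peak = start, peak
        for t in range(best_start, best_start + duration):
            load[t] += power
        schedule[idx] = best_start
    return schedule, max(load)

jobs = [(3.0, 2), (2.0, 2), (2.0, 2)]  # (power in kW, duration in slots)
sched, peak = schedule_min_peak(jobs, deadline=4)
print(peak)  # → 4.0
```

Placing high-power jobs first and choosing each start time to minimize the resulting peak is a common greedy strategy for this NP-hard problem; the heuristics proposed in the paper aim to improve on such baselines.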
Procedia PDF Downloads 88
13094 The Reliability of Wireless Sensor Network
Authors: Bohuslava Juhasova, Igor Halenar, Martin Juhas
Abstract:
Wireless communication is one of the most widely used methods of data transfer today. The benefits of this communication method are partial independence from fixed infrastructure and the possibility of mobility. In some special applications, it is the only way to connect. This paper presents some problems in the implementation of a sensor network connection for measuring environmental parameters in the area of manufacturing plants.
Keywords: network, communication, reliability, sensors
Procedia PDF Downloads 652
13093 Climate Change and Sustainable Development among Agricultural Communities in Tanzania; An Analysis of Southern Highland Rural Communities
Authors: Paschal Arsein Mugabe
Abstract:
This paper examines sustainable development planning in the context of environmental concerns in rural areas of Tanzania. It challenges mainstream approaches to development, focusing instead upon transformative action for environmental justice. The goal is to help shape future sustainable development agendas in local government, international agencies, and civil society organisations. Research methods: The approach of the study is geographical but also involves various transdisciplinary elements, particularly from development studies, sociology and anthropology, management, geography, agriculture, and environmental science. The research methods included thematic and questionnaire interviews and participatory tools such as focus group discussions, participatory rural appraisal, and expert interviews for primary data. Secondary data were gathered through the analysis of land use/cover data and official documents on climate, agriculture, marketing, and health. Several earlier studies made in the area also provided an important reference base. Findings: The findings show that agricultural sustainability in Tanzania appears likely to deteriorate as a consequence of climate change. Noteworthy differences in impacts across households are also present, both by district and by income category. Furthermore, food security cannot be explained by climate as the only influencing factor; a combination of the economic, political, and socio-cultural contexts of the community is crucial. In conclusion, it is worth noting that people understand the relationship between climate change and their livelihoods.
Keywords: agriculture, climate change, environment, sustainable development
Procedia PDF Downloads 325
13092 Building Information Modelling: A Solution to the Limitations of Prefabricated Construction
Authors: Lucas Peries, Rolla Monib
Abstract:
The construction industry plays a vital role in the global economy, contributing billions of dollars annually. However, the industry has been struggling with persistently low productivity levels for years, unlike other sectors that have shown significant improvements. Modular and prefabricated construction methods have been identified as potential solutions to boost productivity in the construction industry. These methods offer time advantages over traditional construction methods. Despite their potential benefits, modular and prefabricated construction face hindrances and limitations that are not present in traditional building systems. Building information modelling (BIM) has the potential to address some of these hindrances, but barriers are preventing its widespread adoption in the construction industry. This research aims to enhance understanding of the shortcomings of modular and prefabricated building systems and develop BIM-based solutions to alleviate or eliminate these hindrances. The research objectives include identifying and analysing key issues hindering the use of modular and prefabricated building systems, investigating the current state of BIM adoption in the construction industry and factors affecting its successful implementation, proposing BIM-based solutions to address the issues associated with modular and prefabricated building systems, and assessing the effectiveness of the developed solutions in removing barriers to their use. The research methodology involves conducting a critical literature review to identify the key issues and challenges in modular and prefabricated construction and BIM adoption. Additionally, an online questionnaire will be used to collect primary data from construction industry professionals, allowing for feedback and evaluation of the proposed BIM-based solutions. 
The data collected will be analysed to evaluate the effectiveness of the solutions and their potential impact on the adoption of modular and prefabricated building systems. The main findings of the research indicate that the identified issues from the literature review align with the opinions of industry professionals, and the proposed BIM-based solutions are considered effective in addressing the challenges associated with modular and prefabricated construction. However, the research has limitations, such as a small sample size and the need to assess the feasibility of implementing the proposed solutions. In conclusion, this research contributes to enhancing the understanding of modular and prefabricated building systems' limitations and proposes BIM-based solutions to overcome these limitations. The findings are valuable to construction industry professionals and BIM software developers, providing insights into the challenges and potential solutions for implementing modular and prefabricated construction systems in future projects. Further research should focus on addressing the limitations and assessing the feasibility of implementing the proposed solutions from technical and legal perspectives.
Keywords: building information modelling, modularisation, prefabrication, technology
Procedia PDF Downloads 98
13091 Management of ASD with Co-Morbid OCD: A Literature Review to Compare the Pharmacological and Psychological Treatment Options in Individuals Under the Age of 18
Authors: Melissa Nelson, Simran Jandu, Hana Jalal, Mia Ingram, Chrysi Stefanidou
Abstract:
There is a significant overlap between autism spectrum disorder (ASD) and obsessive compulsive disorder (OCD), with up to 90% of young people diagnosed with ASD having this co-morbidity. Distinguishing between the symptoms of the two leads to issues with accurate treatment, yet this is paramount in benefitting the young person. There are two distinct methods of treatment, psychological or pharmacological, with clinicians tending to choose one or the other, potentially due to the lack of research available. This report reviews the efficacy of psychological and pharmacological treatments for young people diagnosed with ASD and co-morbid OCD. A literature review was performed on papers from the last fifteen years, including “ASD,” “OCD,” and individuals under the age of 18. Eleven papers were selected as relevant. The report looks at the comparison between more traditional methods, such as selective serotonin reuptake inhibitors (SSRIs) and cognitive behavior therapy (CBT), and newer therapies, such as modified or intensive ASD-focused psychotherapies and the use of other medication classes. On reviewing the data, a distinct lack of information on this important topic was identified. The most widely used treatment was medication such as fluoxetine, an SSRI, which rarely showed an improvement in symptoms or outcomes. This is in contrast to modified forms of CBT, which often reduce symptoms or even result in OCD remission. With increased research into the non-traditional management of these co-morbid conditions, there is clear scope for modified CBT to become the future treatment of choice for OCD in young people with ASD.
Keywords: autism spectrum disorder, intensive or adapted cognitive behavioral therapy, obsessive compulsive disorder, pharmacological management
Procedia PDF Downloads 9
13090 Characterization of 2,4,6-Trinitrotoluene (TNT)-Metabolizing Bacillus cereus sp. TUHP2 Isolated from TNT-Polluted Soils in the Vellore District, Tamilnadu, India
Authors: S. Hannah Elizabeth, A. Panneerselvam
Abstract:
Objective: The main objective was to evaluate the degradative properties of Bacillus cereus sp. TUHP2 isolated from TNT-polluted soils in the Vellore District, Tamil Nadu, India. Methods: Among the 3 bacterial genera isolated from different soil samples, one potent TNT-degrading strain, Bacillus cereus sp. TUHP2, was identified. The morphological, physiological, and biochemical properties of the strain were confirmed by conventional methods, and genotypic characterization was carried out using 16S rDNA partial gene amplification and sequencing. The breakdown products of TNT in the extract were determined by gas chromatography-mass spectrometry (GC-MS). Supernatant samples from the broth studied at 24 h intervals were analyzed by HPLC, and the effects of various nutritional and environmental factors were analysed and optimized for the isolate. Results: Out of three isolates, one strain, TUHP2, was found to have potent efficiency to degrade TNT and was revealed to belong to the genus Bacillus. 16S rDNA gene sequence analysis showed the highest homology (98%) with Bacillus cereus, and the strain was assigned as Bacillus cereus sp. TUHP2. Based on the energy of the predicted models, the secondary structure predicted by MFE showed the most stable structure with a minimum energy. Products of TNT transformation showed a colour change in the medium during cultivation. TNT derivatives such as 2HADNT and 4HADNT were detected by HPLC chromatogram, and 2ADNT and 4ADNT by GC-MS analysis. Conclusion: Hence, this study presents clear evidence for the biodegradation of TNT by strain Bacillus cereus sp. TUHP2.
Keywords: bioremediation, biodegradation, biotransformation, sequencing
Procedia PDF Downloads 462
13089 Real Estate Trend Prediction with Artificial Intelligence Techniques
Authors: Sophia Liang Zhou
Abstract:
For investors, businesses, consumers, and governments, an accurate assessment of future housing prices is crucial to critical decisions in resource allocation, policy formation, and investment strategies. Previous studies are contradictory about macroeconomic determinants of housing price and largely focused on one or two areas using point prediction. This study aims to develop data-driven models to accurately predict future housing market trends in different markets. This work studied five different metropolitan areas representing different market trends and compared three time-lag settings: no lag, 6-month lag, and 12-month lag. Linear regression (LR), random forest (RF), and artificial neural networks (ANN) were employed to model the real estate price using datasets with the S&P/Case-Shiller home price index and 12 demographic and macroeconomic features, such as gross domestic product (GDP), resident population, personal income, etc., in five metropolitan areas: Boston, Dallas, New York, Chicago, and San Francisco. The data from March 2005 to December 2018 were collected from the Federal Reserve Bank, FBI, and Freddie Mac. In the original data, some factors are monthly, some quarterly, and some yearly. Thus, two methods of imputing missing values, backfill and interpolation, were compared. The models were evaluated by accuracy, mean absolute error, and root mean square error. The LR and ANN models outperformed the RF model due to RF’s inherent limitations. Both ANN and LR methods generated predictive models with high accuracy (>95%). It was found that personal income, GDP, population, and measures of debt consistently appeared as the most important factors. It also showed that the technique used to impute missing values in the dataset and the implementation of time lag can have a significant influence on model performance and require further investigation.
The best performing models varied for each area, but the backfilled 12-month lag LR models and the interpolated no-lag ANN models showed the most stable performance overall, with accuracies >95% for each city. This study reveals the influence of input variables in different markets. It also provides evidence to support future studies to identify the optimal time lag and data imputing methods for establishing accurate predictive models.
Keywords: linear regression, random forest, artificial neural network, real estate price prediction
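The two missing-value strategies compared in the abstract, backfill and interpolation, can be illustrated on a toy quarterly series expanded to a monthly-style grid. This is a hedged stdlib sketch with made-up numbers, not the study's dataset or code:

```python
def backfill(series):
    """Fill each None with the next observed value (backward fill)."""
    out = list(series)
    nxt = None
    for i in range(len(out) - 1, -1, -1):
        if out[i] is None:
            out[i] = nxt
        else:
            nxt = out[i]
    return out

def interpolate(series):
    """Linearly interpolate interior runs of None between observed values."""
    out = list(series)
    i = 0
    while i < len(out):
        if out[i] is None:
            j = i
            while j < len(out) and out[j] is None:
                j += 1
            if i > 0 and j < len(out):  # interior gap only
                lo, hi = out[i - 1], out[j]
                for k in range(i, j):
                    out[k] = lo + (hi - lo) * (k - i + 1) / (j - i + 1)
            i = j
        else:
            i += 1
    return out

# A hypothetical quarterly GDP-like series placed on a monthly grid.
quarterly = [100.0, None, None, 109.0, None, None, 118.0]
print(backfill(quarterly))     # → [100.0, 109.0, 109.0, 109.0, 118.0, 118.0, 118.0]
print(interpolate(quarterly))  # → [100.0, 103.0, 106.0, 109.0, 112.0, 115.0, 118.0]
```

Backfill leaks the next quarter's value into earlier months, while interpolation spreads the change smoothly; as the abstract notes, which choice works better varied by market and warrants further investigation.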
Procedia PDF Downloads 103
13088 Frequent Pattern Mining for Digenic Human Traits
Authors: Atsuko Okazaki, Jurg Ott
Abstract:
Some genetic diseases (‘digenic traits’) are due to the interaction between two DNA variants. For example, certain forms of Retinitis Pigmentosa (a genetic form of blindness) occur in the presence of two mutant variants, one in the ROM1 gene and one in the RDS gene, while the occurrence of only one of these mutant variants leads to a completely normal phenotype. Detecting such digenic traits by genetic methods is difficult. A common approach to finding disease-causing variants is to compare 100,000s of variants between individuals with a trait (cases) and those without the trait (controls). Such genome-wide association studies (GWASs) have been very successful but hinge on genetic effects of single variants; that is, there should be a difference in allele or genotype frequencies between cases and controls at a disease-causing variant. Frequent pattern mining (FPM) methods offer an avenue for detecting digenic traits even in the absence of single-variant effects. The idea is to enumerate pairs of genotypes (genotype patterns), with each of the two genotypes originating from different variants that may be located at very different genomic positions. What is needed is for genotype patterns to be significantly more common in cases than in controls. Let Y = 2 refer to cases and Y = 1 to controls, with X denoting a specific genotype pattern. We are seeking association rules, ‘X → Y’, with high confidence, P(Y = 2|X), significantly higher than the proportion of cases, P(Y = 2), in the study. Clearly, generally available FPM methods are very suitable for detecting disease-associated genotype patterns. We use fpgrowth as the basic FPM algorithm and build a framework around it to enumerate high-frequency digenic genotype patterns and to evaluate their statistical significance by permutation analysis. Application to a published dataset on opioid dependence furnished results that could not be found with classical GWAS methodology.
There were 143 cases and 153 healthy controls, each genotyped for 82 variants in eight genes of the opioid system. The aim was to find out whether any of these variants were disease-associated. The single-variant analysis did not lead to significant results. Application of our FPM implementation resulted in one significant (p < 0.01) genotype pattern, with both genotypes in the pattern being heterozygous and originating from two variants on different chromosomes. This pattern occurred in 14 cases and none of the controls. Thus, the pattern seems quite specific to this form of substance abuse and is also rather predictive of disease. An algorithm called Multifactor Dimensionality Reduction (MDR) was developed some 20 years ago and has been in use in human genetics ever since. The two algorithms share some properties but are also very different in other respects. The main difference seems to be that our algorithm focuses on patterns of genotypes, while the main object of inference in MDR is the 3 × 3 table of genotypes at two variants.
Keywords: digenic traits, DNA variants, epistasis, statistical genetics
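The association rules described above ('X → Y' with confidence P(Y = 2|X) exceeding the case proportion P(Y = 2)) can be sketched as a brute-force enumeration over variant pairs. This toy illustration uses made-up genotypes and deliberately omits the fpgrowth algorithm and the permutation testing that the actual framework relies on:

```python
from itertools import combinations

def digenic_patterns(genotypes, labels, min_support=3):
    """Enumerate two-variant genotype patterns and report rules whose
    confidence P(case | pattern) exceeds the baseline case proportion.

    genotypes: list of per-subject genotype vectors (0/1/2 per variant)
    labels:    1 = control, 2 = case
    """
    n_variants = len(genotypes[0])
    baseline = labels.count(2) / len(labels)  # P(Y = 2)
    rules = []
    for i, j in combinations(range(n_variants), 2):
        counts = {}  # pattern -> (total subjects, number of cases)
        for subj, y in zip(genotypes, labels):
            pattern = (subj[i], subj[j])
            total, cases = counts.get(pattern, (0, 0))
            counts[pattern] = (total + 1, cases + (y == 2))
        for pattern, (total, cases) in counts.items():
            conf = cases / total  # P(Y = 2 | X)
            if total >= min_support and conf > baseline:
                rules.append(((i, pattern[0]), (j, pattern[1]), conf))
    return baseline, rules

# Toy data: the jointly heterozygous pattern (1, 1) at variants 0 and 1
# occurs in all three cases and in none of the controls.
genotypes = [(1, 1, 0), (1, 1, 2), (1, 1, 0),
             (1, 0, 0), (0, 1, 2), (0, 0, 0)]
labels = [2, 2, 2, 1, 1, 1]
base, rules = digenic_patterns(genotypes, labels)
```

Each rule is reported as ((variant i, genotype), (variant j, genotype), confidence); with this toy data, the double-heterozygous pattern comes out with confidence 1.0 against a baseline of 0.5, mirroring the case-specific pattern the study reports.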
Procedia PDF Downloads 122
13087 Study on Adding Story and Seismic Strengthening of Old Masonry Buildings
Authors: Youlu Huang, Huanjun Jiang
Abstract:
A large number of old masonry buildings built in the last century still remain in the city. They pose problems of poor safety, obsolescence, and non-habitability. In recent years, many old buildings have been reconstructed through renovating façades, strengthening, and adding floors. However, most projects only provide a solution for a single problem. It is difficult to comprehensively solve the problems of poor safety and lack of building functions. Therefore, a comprehensive functional renovation program was put forward: adding a reinforced concrete frame story at the bottom by integrally lifting the building and then strengthening the building. Based on field measurement and the YJK calculation software, the seismic performance of an actual three-story masonry structure in Shanghai was identified. The results show that the material strength of the masonry is low, and the bearing capacity of some masonry walls could not meet the code requirements. The elastoplastic time history analysis of the structure was carried out using SAP2000 software. The results show that under the 7-degree rare earthquake, the structure reaches the 'serious damage' performance level. Based on the code requirements for the stiffness ratio of the bottom frame (the lateral stiffness ratio of the transition masonry story to the frame story), the bottom frame story was designed. The integral lifting process of the masonry building is introduced based on many engineering examples. Two retrofit methods for the bottom frame structure, strengthening with a steel-reinforced mesh mortar surface layer (SRMM) and base isolation, were proposed. The time history analysis of the two kinds of structures, under the frequent earthquake, the fortification earthquake, and the rare earthquake, was conducted with SAP2000 software.
For the bottom frame structure, the results show that the seismic response of the masonry floors is significantly reduced after strengthening by the two methods, compared to the original masonry structure. Previous earthquake disasters indicated that the bottom frame is vulnerable to serious damage under a strong earthquake. The analysis results showed that under the rare earthquake, the inter-story displacement angle of the bottom frame floor meets the 1/100 limit value of the seismic code. The inter-story drift of the masonry floors for the base-isolated structure under different levels of earthquakes is similar to that of the structure with SRMM, while the base-isolation scheme better protects the bottom frame. Both retrofit methods could significantly improve the seismic performance of the bottom frame structure.
Keywords: old buildings, adding story, seismic strengthening, seismic performance
Procedia PDF Downloads 121
13086 Design and Implementation of Low-code Model-building Methods
Authors: Zhilin Wang, Zhihao Zheng, Linxin Liu
Abstract:
This study proposes a low-code model-building approach that aims to simplify the development and deployment of artificial intelligence (AI) models. With an intuitive drag-and-drop interface for connecting components, users can easily build complex models and integrate multiple algorithms for training. After training is completed, the system automatically generates a callable model service API. This method not only lowers the technical threshold of AI development and improves development efficiency but also enhances the flexibility of algorithm integration and simplifies the deployment process of models. The core strength of this method lies in its ease of use and efficiency. Users do not need a deep programming background and can complete the design and implementation of complex models with simple drag-and-drop operations. This feature greatly expands the reach of AI technology, allowing more non-technical people to participate in the development of AI models. At the same time, the method performs well in algorithm integration, supporting many different types of algorithms working together, which further improves the performance and applicability of the model. In the experimental part, we performed several performance tests on the method. The results show that, compared with traditional model construction methods, this method makes more efficient use of computing resources and greatly shortens model training time. In addition, the system-generated model service interface has been optimized for high availability and scalability and can adapt to the needs of different application scenarios.
Keywords: low-code, model building, artificial intelligence, algorithm integration, model deployment
Procedia PDF Downloads 30
13085 Prevalence of Knee Pain and Risk Factors and Its Impact on Functional Impairment among Saudi Adolescents
Authors: Ali H.Alyami, Hussam Darraj, Faisal Hakami, Mohammed Awaf, Sulaiman Hamdi, Nawaf Bakri, Abdulaziz Saber, Khalid Hakami, Almuhanad Alyami, Mohammed khashab
Abstract:
Introduction: Adolescents frequently self-report pain, according to epidemiological research. The knee is one of the sites where pain is most common. Musculoskeletal disorders are among the main contributors to the number of years people spend disabled and carry substantial personal, societal, and economic burdens globally. Adolescents may have knee pain due to an abrupt, traumatic injury or an insidious, slowly building onset that neither the adolescent nor the parent is aware of. Objectives: The present study’s authors aimed to estimate the prevalence of knee pain in Saudi adolescents. Methods: This cross-sectional survey, carried out from June to November 2022, included 676 adolescents aged 10 to 18. Data are presented as frequencies and percentages for categorical variables. Analysis of variance (ANOVA) was used to compare means between groups, while the chi-square test was used for the comparison of categorical variables. Statistical significance was set at P < 0.05. Results: Of the 676 adolescents invited to take part in the study, 57.5% were girls and 42.5% were boys, and 68.8% were aged between 15 and 18. The prevalence of knee pain was considerably higher among females (26%) than among males (19.2%). Moreover, age was a significant predictor of knee pain, as was BMI. Conclusion: Our study noted a high rate of knee pain among adolescents, so awareness about risk factors needs to be raised. Adolescent knee pain can be prevented with conservative methods and some minor lifestyle/activity modifications.
Keywords: knee pain, prevalence of knee pain, exercise training, physical activity
Procedia PDF Downloads 111
13084 Improvement of Visual Acuity in Patient Undergoing Occlusion Therapy
Authors: Rajib Husain, Mezbah Uddin, Mohammad Shamsal Islam, Rabeya Siddiquee
Abstract:
Purpose: To determine the improvement of visual acuity in patients undergoing occlusion therapy. Methods: This was a prospective hospital-based study of newly diagnosed amblyopia cases seen at the pediatric clinic of Chittagong Eye Infirmary & Training Complex. Thirty-two subjects with refractive amblyopia were examined, and a questionnaire was piloted. Included were all patients diagnosed with refractive amblyopia between 5 and 8 years of age, without previous amblyopia treatment, and whose parents were interested in participating in the study. Patients diagnosed with strabismic amblyopia were excluded. Patients were first given the best correction for a month. When the VA in the amblyopic eye did not improve over that month, occlusion treatment was started. Occlusion was done daily for 6-8 h together with vision therapy and was carried out for three months. Results: Of the 32 children in the study, 31 had good compliance with amblyopia treatment, whereas one child had poor compliance. About 6% of children had amblyopia from myopia, 7% from hyperopia, 32% from myopic astigmatism, 42% from hyperopic astigmatism, and 13% from mixed astigmatism. The mean ± standard deviation of the presenting VA was 0.452 ± 0.275 logMAR; after the intervention of amblyopia therapy with vision therapy, it was 0.155 ± 0.157 logMAR. Of the total respondents, 21.85% had BCVA in the range 0-0.2 logMAR, 37.5% in the range 0.22-0.5 logMAR, 35.95% in the range 0.52-0.8 logMAR, and 4.7% in the range 0.82-1 logMAR; after the intervention of occlusion therapy with vision therapy, 76.6% had VA in the range 0-0.2 logMAR, 21.85% in the range 0.22-0.5 logMAR, and 1.5% in the range 0.52-0.8 logMAR. Conclusion: Amblyopia is an important concern in the pediatric age group because it can lead to visual impairment.
Thus, this study concludes that occlusion therapy with vision therapy is probably one of the best treatment methods for amblyopic patients (aged 5-8 years), and that compliance and age were the most critical factors predicting a successful outcome.
Keywords: amblyopia, occlusion therapy, vision therapy, eccentric fixation, visuoscopy
Procedia PDF Downloads 503
13083 Code Embedding for Software Vulnerability Discovery Based on Semantic Information
Authors: Joseph Gear, Yue Xu, Ernest Foo, Praveen Gauravaran, Zahra Jadidi, Leonie Simpson
Abstract:
Deep learning methods have been seeing increasing application to the long-standing security research goal of automatic vulnerability detection in source code. Attention, however, must still be paid to the task of producing vector representations of source code (code embeddings) as input for these deep learning models. Graphical representations of code, most predominantly Abstract Syntax Trees and Code Property Graphs, have received some use in this task of late; however, for very large graphs representing very large code snippets, learning becomes prohibitively computationally expensive. This expense may be reduced by intelligently pruning the input to only vulnerability-relevant information; however, little research in this area has been performed. Additionally, most existing work comprehends code based solely on the structure of the graph, at the expense of the information contained in its nodes. This paper proposes Semantic-enhanced Code Embedding for Vulnerability Discovery (SCEVD), a deep learning model which uses semantic-based feature selection for its vulnerability classification model. It uses information from the nodes as well as the structure of the code graph in order to select features which are most indicative of the presence or absence of vulnerabilities. The model is implemented and experimentally tested using the SARD Juliet vulnerability test suite to determine its efficacy. It is able to improve on existing code graph feature selection methods, as demonstrated by its improved ability to discover vulnerabilities.
Keywords: code representation, deep learning, source code semantics, vulnerability discovery
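One simple form of semantic-based feature selection is to rank node labels by how differently they occur in vulnerable versus safe code graphs and prune the rest. The sketch below illustrates that general idea with hypothetical node labels; it is not the SCEVD model itself:

```python
from collections import Counter

def select_semantic_features(samples, k=2):
    """Rank node labels by the gap in their document frequency between
    vulnerable and safe code graphs; keep the top-k as the pruned set.

    samples: list of (node_labels, is_vulnerable) pairs, where
    node_labels is the set of semantic labels present in one code graph.
    """
    vuln = [s for s, v in samples if v]
    safe = [s for s, v in samples if not v]
    df_v = Counter(lbl for s in vuln for lbl in set(s))
    df_s = Counter(lbl for s in safe for lbl in set(s))

    def score(lbl):
        # Absolute difference in per-class document frequency.
        return abs(df_v[lbl] / max(len(vuln), 1) -
                   df_s[lbl] / max(len(safe), 1))

    labels = set(df_v) | set(df_s)
    return sorted(labels, key=score, reverse=True)[:k]

# Hypothetical node labels: unsafe string copies co-occur with the
# vulnerable graphs, while 'if' and 'return' appear in both classes.
samples = [
    ({"call:strcpy", "decl:char[]", "if"}, True),
    ({"call:strcpy", "decl:char[]"}, True),
    ({"call:strncpy", "if", "return"}, False),
    ({"call:printf", "if"}, False),
]
print(sorted(select_semantic_features(samples)))  # → ['call:strcpy', 'decl:char[]']
```

A label appearing in every vulnerable graph and no safe graph scores highest, so the pruned feature set keeps the vulnerability-indicative labels and drops the class-neutral ones, which is the spirit of pruning the graph input to vulnerability-relevant information.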
Procedia PDF Downloads 158
13082 Concept of a Pseudo-Lower Bound Solution for Reinforced Concrete Slabs
Authors: M. De Filippo, J. S. Kuang
Abstract:
In the construction industry, reinforced concrete (RC) slabs are fundamental elements of buildings and bridges. Different methods are available for analysing the structural behaviour of slabs. In the early decades of the last century, the yield-line method was proposed to solve such problems. Simple geometries could easily be solved by traditional hand analyses incorporating plasticity theory. Nowadays, advanced finite element (FE) analyses have found their way into applications in many engineering fields due to the wide range of geometries to which they can be applied. In such cases, the choice between an elastic and a plastic constitutive model completely changes the approach of the analysis itself. Elastic methods are popular due to their easy applicability to automated computations. However, elastic analyses are limited, since they do not consider any aspect of material behaviour beyond the yield limit, which turns out to be an essential aspect of RC structural performance. Non-linear analyses modelling plastic behaviour, by contrast, give very reliable results; per contra, this type of analysis is computationally quite expensive, i.e. not well suited to solving daily engineering problems. In past years, many researchers have worked on filling this gap between easy-to-implement elastic methods and computationally complex plastic analyses. This paper proposes a numerical procedure through which a pseudo-lower bound solution, not violating the yield criterion, is achieved. The advantages of moment redistribution are taken into account, hence the increase in strength provided by plastic behaviour is considered. The lower bound solution is improved by detecting over-yielded moments, which are used to artificially govern the moment redistribution among the remaining non-yielded elements. The proposed technique obeys Nielsen's yield criterion.
The outcome of this analysis provides a simple, accurate, and non-time-consuming tool for predicting the lower-bound solution of the collapse load of RC slabs. By using this method, structural engineers can find the fracture patterns and the ultimate load-bearing capacity. The collapse-triggering mechanism is found by detecting yield-lines. An application to the simple case of a square clamped slab is shown, and a good match was found with the exact values of the collapse load. Keywords: computational mechanics, lower bound method, reinforced concrete slabs, yield-line
Procedia PDF Downloads 178
13081 Evaluation of Virtual Reality for the Rehabilitation of Athlete Lower Limb Musculoskeletal Injury: A Method for Obtaining Practitioners' Viewpoints through Observation and Interview
Authors: Hannah K. M. Tang, Muhammad Ateeq, Mark J. Lake, Badr Abdullah, Frederic A. Bezombes
Abstract:
Based on a theoretical assessment of the current literature, virtual reality (VR) could help to treat sporting injuries in a number of ways. However, it is important to obtain rehabilitation specialists' perspectives in order to design, develop and validate suitable content for a VR application focused on treatment. Subsequently, a one-day observation and interview study focused on the use of VR for the treatment of lower limb musculoskeletal conditions in athletes was conducted with rehabilitation specialists at St George's Park, the England National Football Centre. The current paper established the methods suitable for obtaining practitioners' viewpoints through observation and interview in this context. Particular detail was provided regarding the method of qualitatively processing interview results using the qualitative data analysis software NVivo, in order to produce a narrative of overarching themes. The observations and overarching themes identified could be used as a framework and success criteria for a VR application developed in future research. In conclusion, this work explained the methods deemed suitable for obtaining practitioners' viewpoints through observation and interview. This was required in order to highlight characteristics and features of a VR application designed to treat lower limb musculoskeletal injury in athletes, and could be built upon to direct future work. Keywords: athletes, lower-limb musculoskeletal injury, rehabilitation, return-to-sport, virtual reality
Procedia PDF Downloads 257
13080 Mayan Culture and Attitudes towards Sustainability
Authors: Sarah Ryu
Abstract:
Agricultural methods and ecological approaches employed by the pre-colonial Mayans may provide valuable insights into forest management and viable alternatives for resource sustainability in the face of major deforestation across Central and South America. Using a combination of historical data and observation data collected from the modern indigenous inhabitants near Mixco in Guatemala, this study was able to create a holistic picture of how the Maya maintained their ecosystems. Surveys and observations were conducted in the field over a period of twelve weeks across two years. Geographic and archaeological data for this area were provided by Guatemalan organizations such as the Universidad de San Carlos de Guatemala. Observations of current indigenous populations around Mixco showed that they adhere to traditional Mayan methods of agriculture, such as terrace construction and arboriculture. Rather than planting one cash crop as was done by the Spanish, indigenous peoples practice agroforestry, cultivating forests that provide trees for construction material, wild plant foods, habitat for game, and medicinal herbs. This emphasis on biodiversity prevented deforestation and created a sustainable balance between human consumption and forest regrowth. Historical data provided by MayaSim showed that the Mayans successfully maintained their ecosystems from about 800 BCE to 700 CE. When the Mayans practiced natural resource conservation and cultivated a harmonious relationship with the surrounding forest, they were able to thrive and prosper alongside nature. Having lasted over a thousand years, the Mayan empire provides a valuable lesson in sustainability and human attitudes towards the environment. Keywords: biodiversity, forestry, mayan, sustainability
Procedia PDF Downloads 177
13079 CD133 and CD44 - Stem Cell Markers for Prediction of Clinically Aggressive Form of Colorectal Cancer
Authors: Ognen Kostovski, Svetozar Antovic, Rubens Jovanovic, Irena Kostovska, Nikola Jankulovski
Abstract:
Introduction: Colorectal carcinoma (CRC) is one of the most common malignancies in the world. Cancer stem cell (CSC) markers are associated with aggressive cancer types and poor prognosis. The aim of this study was to determine whether the expression of the colorectal cancer stem cell markers CD133 and CD44 could be significant in predicting a clinically aggressive form of CRC. Materials and methods: Our study included ninety patients (n=90) with CRC. Patients were divided into two subgroups: metastatic CRC and non-metastatic CRC. Tumor samples were analyzed with standard histopathological methods, then immunohistochemical analysis was performed with monoclonal antibodies against the CD133 and CD44 stem cell markers. Results: High coexpression of CD133 and CD44 was observed in 71.4% of patients with metastatic disease, compared to 37.9% in patients without metastases. Discordant expression of both markers was found in 8% of the subgroup with metastatic CRC, and in 13.4% of the subgroup without metastatic CRC. Statistical analyses showed a significant association of increased expression of CD133 and CD44 with disease stage, T category and N nodal status. With multiple regression analysis, the stage of disease was designated as the factor with the greatest statistically significant influence on the expression of CD133 (p < 0.0001) and CD44 (p < 0.0001). Conclusion: Our results suggest that the coexpression of CD133 and CD44 has an important role in predicting a clinically aggressive form of CRC. Both stem cell markers can be routinely implemented in standard histopathological diagnostics and can be useful markers for pre-therapeutic oncology screening. Keywords: colorectal carcinoma, stem cells, CD133+, CD44+
Procedia PDF Downloads 150
13078 Drive Sharing with Multimodal Interaction: Enhancing Safety and Efficiency
Authors: Sagar Jitendra Mahendrakar
Abstract:
Exploratory testing is a dynamic and adaptable method of software quality assurance that is frequently praised for its ability to find hidden flaws and improve the overall quality of the product. Instead of using preset test cases, exploratory testing allows testers to explore the software application dynamically. This is in contrast to scripted testing methodologies, as it relies primarily on tester intuition, creativity, and adaptability. There are several tools and techniques that can aid testers in the exploratory testing process, which we will be discussing in this talk. Tests of this kind are able to find bugs that are harder to find during structured testing or that other testing methods may have overlooked. The purpose of this abstract is to examine the nature and importance of exploratory testing in modern software development methods. It explores the fundamental ideas of exploratory testing, highlighting the value of domain knowledge and tester experience in spotting possible problems that may escape the notice of traditional testing methodologies. Throughout the software development lifecycle, exploratory testing promotes quick feedback loops and continuous improvement by giving testers the ability to make decisions in real time based on their observations. This abstract also clarifies the unique features of exploratory testing, like its non-linearity and capacity to replicate user behavior in real-world settings. Through impromptu exploration, testers can find intricate bugs, usability problems, and edge cases in software that might otherwise go undetected. Exploratory testing's flexible and iterative structure fits in well with agile and DevOps processes, allowing for a quicker time to market without sacrificing the quality of the final product. Keywords: exploratory, testing, automation, quality
Procedia PDF Downloads 51
13077 The Effect of Education on Nurses' Knowledge Level for Ventrogluteal Site Injection: Pilot Study
Authors: Emel Bayraktar, Gulengun Turk
Abstract:
Introduction and Objective: Safe administration of medicines is one of the main responsibilities of nurses. Intramuscular drug administration is among the most common methods used by nurses among all drug applications. This study was carried out in order to determine the effect of education on injection into the ventrogluteal area on nurses' level of knowledge on this subject. Methods: The sample of the study consisted of 20 nurses who agreed to participate in the study between 01 October and 31 December 2019. The research is a pretest-posttest comparative, quasi-experimental pilot study. The nurses were given a 4-hour training on injection into the ventrogluteal area, consisting of two hours of theory and two hours of laboratory practice. Before the training and 4 weeks after it, a questionnaire containing questions about their knowledge and practices regarding ventrogluteal area injection was administered to the nurses. Results: The average age of the nurses was 26.55 ± 7.60; 35% (n = 7) held an undergraduate degree and 30% (n = 6) worked in intensive care units. Before the training, 35% (n = 7) of the nurses stated that the most frequently used intramuscular injection site was the ventrogluteal area, and 75% (n = 15) stated that the safest site was the rectus femoris muscle. After the training, 55% (n = 11) of the nurses stated that they most frequently used the ventrogluteal area, and 100% (n = 20) stated that the ventrogluteal area was the safest site. The average pretest score of the nurses was 14.15 ± 6.63 (min = 0, max = 20), with a total score of 184. The average posttest score was 18.69 ± 2.35 (min = 12, max = 20), with a total score of 243. Conclusion: As a result of the research, it was determined that the training on ventrogluteal area injection increased the knowledge level of the nurses.
It is recommended to organize in-service training for all nurses on injection into the ventrogluteal area. Keywords: safe injection, knowledge level, nurse, intramuscular injection, ventrogluteal area
Procedia PDF Downloads 212
13076 Methods Employed to Mitigate Wind Damage on Ancient Egyptian Architecture
Authors: Hossam Mohamed Abdelfattah Helal Hegazi
Abstract:
Winds and storms are crucial weathering factors, representing primary causes of destruction and erosion for all materials on the Earth's surface. This naturally includes historical structures, with the impact of winds and storms intensifying their deterioration, particularly when they carry high-hardness sand particles during their passage across the ground. Ancient Egyptians utilized various methods to prevent wind damage to their architecture throughout the ancient Egyptian periods. One of the techniques employed was the use of clay or compacted earth as a filling material between opposing walls made of stone, bricks, or mud bricks. Walls made of reeds or woven tree branches were covered with clay to prevent the infiltration of wind and rain, enhancing structural integrity; this method was commonly used in hollow layers. Additionally, Egyptian engineers innovated a type of adobe brick with uniformly leveled sides, manufactured from dried clay. They utilized stone barriers, constructed wind traps, and planted trees in rows parallel to the prevailing wind direction. Moreover, they employed receptacles to drain rainwater resulting from wind-loaded rain and used mortar to fill gaps in roofs and structures. Furthermore, proactive measures such as the removal of sand from around historical and archaeological buildings were taken to prevent adverse effects. Keywords: winds, storms, weathering, destruction, erosion, materials, Earth's surface, historical structures, impact
Procedia PDF Downloads 62
13075 Caregivers Roles, Care Home Management, Funding and Administration in Challenged Communities: Focus in North Eastern Nigeria
Authors: Chukwuka Justus Iwegbu
Abstract:
Background: A major concern facing the world is providing senior citizens, individuals with disabilities, and other vulnerable groups with high-quality care. This issue is more serious in Nigeria's North Eastern region, where the burden of disease and disability is heavy and access to care is constrained. This study aims to fill this gap by exploring the roles, challenges and support needs of caregivers, as well as care home management, funding and administration, in challenged communities in North Eastern Nigeria. The study will also provide a comprehensive understanding of the current situation and identify opportunities for improving the quality of care and support for caregivers and care recipients in these communities. Methods: A mixed-methods design, including both quantitative and qualitative data collection, will be used, guided by the stress process model of caregiving. The qualitative approach will comprise a survey, in-depth interviews, observations, and focus group discussions, while quantitative analysis will be used to understand the variations between caregivers' roles and care home management. A review of relevant documents, such as care home policies and funding reports, will be used to gather quantitative data on the administrative and financial aspects of care. The data collected will be analyzed using both descriptive statistics and thematic analysis. A sample of around 200-300 participants, including caregivers, care recipients, care home managers and administrators, policymakers and health care providers, will be recruited. Findings: The study revealed that caregivers in challenged communities in North Eastern Nigeria face significant challenges, including lack of training and support, limited access to funding and resources, and high levels of burnout. Care home management and administration were also found to be inadequate, with a lack of clear policies and procedures and limited oversight and accountability.
Conclusion: There is a need for increased investment in training and support for caregivers, as well as improved care home management and administration in challenged communities in North Eastern Nigeria. The study also highlights the importance of involving community members in decision-making and planning processes related to care homes and services, and it contributes to the existing body of knowledge by providing a detailed understanding of the challenges faced by caregivers, care home managers and administrators. Keywords: caregivers, care home management, funding, administration, challenged communities, North Eastern Nigeria
Procedia PDF Downloads 107
13074 A Study on the Effect of COD to Sulphate Ratio on Performance of Lab Scale Upflow Anaerobic Sludge Blanket Reactor
Authors: Neeraj Sahu, Ahmad Saadiq
Abstract:
Anaerobic sulphate reduction has the potential to be effective and economically viable over conventional treatment methods for the treatment of sulphate-rich wastewater. However, a major challenge in anaerobic sulphate reduction is the diversion of a fraction of the organic carbon towards methane production, along with minor problems such as odour, corrosion, and an increase in effluent chemical oxygen demand. High-rate anaerobic technology has encouraged researchers to extend its application to the treatment of complex wastewaters with relatively low cost and energy consumption compared to physicochemical methods. Therefore, the aim of this study was to investigate the effects of the COD/SO₄²⁻ ratio on the performance of a lab-scale UASB reactor. A lab-scale upflow anaerobic sludge blanket (UASB) reactor was operated for 170 days: the first 60 days for start-up and acclimation under methanogenesis and sulphidogenesis at a COD/SO₄²⁻ ratio of 18, after which it was operated at COD/SO₄²⁻ ratios of 12, 8, 4 and 1 to evaluate the effects of the presence of sulphate on reactor performance. The reactor achieved maximum COD removal efficiency and biogas evolution at the end of acclimation (control). This phase lasted 53 days, with 89.5% efficiency and biogas production of 0.6 L/d at an organic loading rate (OLR) of 1.0 kg COD/m³d, while treating synthetic wastewater in a reactor with an effective volume of 2.8 L. When the COD/SO₄²⁻ ratio changed from 12 to 1, a slight decrease in COD removal efficiency (from 87.4% to 76.8%) was observed and biogas production decreased from 0.58 to 0.32 L/d, while the sulphate removal efficiency increased from 42.5% to 72.7%. Keywords: anaerobic, chemical oxygen demand, organic loading rate, sulphate, up-flow anaerobic sludge blanket reactor
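As a rough sketch of the arithmetic behind the figures reported above, removal efficiency and organic loading rate can be computed as follows (the function names and the example concentrations are illustrative, not taken from the study):

```python
def removal_efficiency(c_in, c_out):
    """Percent removal of a constituent (e.g. COD or sulphate)
    from influent concentration c_in to effluent concentration c_out."""
    return 100.0 * (c_in - c_out) / c_in

def organic_loading_rate(q_m3_per_d, cod_kg_per_m3, v_m3):
    """Organic loading rate in kg COD per m^3 of reactor volume per day."""
    return q_m3_per_d * cod_kg_per_m3 / v_m3

# Hypothetical influent/effluent COD giving the 89.5% removal reported above
print(removal_efficiency(1000.0, 105.0))
# Flow and influent COD chosen so the 2.8 L reactor runs at 1.0 kg COD/m3.d
print(organic_loading_rate(0.0028, 1.0, 0.0028))
```

The same `removal_efficiency` helper applies unchanged to the sulphate removal percentages quoted in the abstract.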
Procedia PDF Downloads 218
13073 Effects of Cooking and Drying on the Phenolic Compounds and Antioxidant Activity of Cleome gynandra (Spider Plant)
Authors: E. Kayitesi, S. Moyo, V. Mavumengwana
Abstract:
Cleome gynandra (spider plant) is an African green leafy vegetable categorized as indigenous and underutilized, and it has been reported to contain essential phenolic compounds. Phenolic compounds play a significant role in human diets due to their proposed health benefits. These compounds, however, may be affected by processing methods such as cooking and drying. Cleome gynandra was subjected to boiling, steam blanching, and drying processes and analysed for Total Phenolic Content (TPC), Total Flavonoid Content (TFC), antioxidant activity and flavonoid composition. Cooking and drying significantly (p < 0.05) increased the levels of phenolic compounds and the antioxidant activity of the vegetable. The boiled sample filtrate exhibited the lowest TPC, followed by the raw sample, while the steamed sample showed the highest TPC levels. Antioxidant activity results showed that the steamed sample had the highest DPPH, FRAP and ABTS values, with means of 499.38 ± 2.44, 578.68 ± 5.19, and 214.39 ± 12.33 μM Trolox Equivalent/g, respectively. An increase in quercetin-3-rutinoside, quercetin-rhamnoside and kaempferol-3-rutinoside occurred after all the cooking and drying methods employed. Cooking and drying exerted positive effects on the vegetable's phenolic content and antioxidant activity as a whole, but with varied effects on the individual flavonoid molecules. The results obtained help in defining the importance of this African green leafy vegetable and its processed products as functional foods, as well as their potential to exert health-promoting properties. Keywords: Cleome gynandra, phenolic compounds, cooking, drying, health promoting properties
Procedia PDF Downloads 170
13072 Characterising Aquifer Layers of Karstic Springs in Nahavand Plain Using Geoelectrical and Electromagnetic Methods
Authors: A. Taheri Tizro, Rojin Fasihi
Abstract:
The geoelectrical method is one of the most effective tools for determining subsurface lithological layers. The electromagnetic method is a newer method that can also play an important role in determining and separating subsurface layers with acceptable accuracy. In the present research, 10 electromagnetic soundings were collected upstream of 5 karstic springs of the Nahavand plain in Hamadan province: Famaseb, Faresban, Ghale Baroodab, Gian and Gonbad kabood. From the resulting data, electromagnetic logs were prepared at different depths and compared with 5 logs from the geoelectric method. The comparison showed that the NRMSE values of the geoelectric method for the springs of Famaseb, Faresban, Ghale Baroodab, Gian and Gonbad kabood were 7.11, 7.50, 44.93, 3.99, and 2.99, respectively, while for the electromagnetic method the values were about 1.4, 1.1, 1.2, 1.5, and 1.3, respectively. In addition to the similarity of the results of the two methods, it was found that the accuracy of the electromagnetic method, judged by the NRMSE value, is higher than that of the geoelectric method. The advantages of the electromagnetic method compared to the geoelectric method are that it is less time-consuming and less costly. The depth to the water table is the final result of this research work: at the springs of Famaseb, Faresban, Ghale Baroodab, Gian and Gonbad kabood it is about 6, 20, 10, 2 and 36 meters, respectively. The maximum thickness of the aquifer layer was estimated at Gonbad kabood spring (36 meters) and the lowest at Gian spring (2 meters). These results can be used to identify the water potential of the region in order to better manage water resources. Keywords: karst spring, geoelectric, aquifer layers, nahavand
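The abstract does not state which NRMSE convention was used; a common range-normalised variant, expressed in percent, can be sketched as follows (the normalisation by the observed range is an assumption, since some authors normalise by the mean instead):

```python
import math

def nrmse_percent(observed, modelled):
    """Root-mean-square error between modelled and observed values,
    normalised by the range of the observed data, in percent."""
    n = len(observed)
    rmse = math.sqrt(sum((o - m) ** 2 for o, m in zip(observed, modelled)) / n)
    return 100.0 * rmse / (max(observed) - min(observed))

# Hypothetical resistivity log comparison: two depth samples
print(nrmse_percent([0.0, 10.0], [1.0, 9.0]))  # rmse 1.0 over a range of 10
```

A lower NRMSE means the modelled log reproduces the field data more closely, which is the sense in which the electromagnetic method is judged more accurate above.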
Procedia PDF Downloads 71
13071 Research on Strategies of Building a Child Friendly City in Wuhan
Authors: Tianyue Wan
Abstract:
Building a child-friendly city (CFC) contributes to improving the quality of urbanization and forms a local system committed to fulfilling children's rights and development. Yet the work related to CFCs is still at an initial stage in China. Therefore, taking Wuhan, the most populous city in central China, as the pilot city offers some reference for other cities. Based on an analysis of theories and practice examples, this study puts forward the challenges of building a child-friendly city under the particularity of China's national conditions. To handle these challenges, this study uses four methods to collect status data: literature research, site observation, research inquiry, and semantic differential (SD). It adopts three data analysis methods: case analysis, geographic information system (GIS) analysis, and the analytic hierarchy process (AHP) method. Through data analysis, this study identifies the evaluation system and appraises the current situation of Wuhan. According to the status of Wuhan's child-friendly city, this study proposes three strategies: 1) construct the evaluation system; 2) establish a child-friendly space system integrating 'point-line-surface'; 3) build a digitalized service platform. At the same time, this study suggests building a long-term mechanism for children's participation and multi-subject supervision covering laws, medical treatment, education, safety protection, social welfare, and other aspects. Finally, some conclusions on CFC strategies are drawn to promote the highest quality of life for all citizens in Wuhan. Keywords: action plan, child friendly city, construction strategy, urban space
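As an illustration of the AHP step mentioned above, priority weights for evaluation criteria can be derived from a pairwise-comparison matrix. This minimal sketch uses the row geometric-mean approximation and Saaty's consistency ratio; the example matrix is hypothetical, not taken from the study:

```python
import math

# Saaty's random-index values for matrices of size 1..9
RI = [0.0, 0.0, 0.58, 0.90, 1.12, 1.24, 1.32, 1.41, 1.45]

def ahp_weights(matrix):
    """Priority weights from a pairwise-comparison matrix using the
    row geometric-mean approximation, plus the consistency ratio CR."""
    n = len(matrix)
    gm = [math.prod(row) ** (1.0 / n) for row in matrix]  # row geometric means
    total = sum(gm)
    w = [g / total for g in gm]
    # Estimate lambda_max for the consistency check: CR < 0.1 is acceptable
    lam = sum(sum(matrix[i][j] * w[j] for j in range(n)) / w[i]
              for i in range(n)) / n
    ci = (lam - n) / (n - 1)
    return w, ci / RI[n - 1]

# A perfectly consistent 3x3 example: criterion A is twice B, four times C
w, cr = ahp_weights([[1.0, 2.0, 4.0], [0.5, 1.0, 2.0], [0.25, 0.5, 1.0]])
print(w, cr)
```

For a consistent matrix like this one, the weights come out as 4/7, 2/7 and 1/7 with CR near zero.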
Procedia PDF Downloads 90
13070 Slope Stability Study at Jalan Tun Sardon and Sungai Batu, Pulau Pinang, Malaysia by Using 2-D Resistivity Method
Authors: Muhamad Iqbal Mubarak Faharul Azman, Azim Hilmy Mohd Yusof, Nur Azwin Ismail, Noer El Hidayah Ismail
Abstract:
Landslides and rock falls are examples of environmental and engineering problems in Malaysia. Various methods can be applied to such problems, but geophysical methods are seldom applied as the main investigation technique. This paper aims to study slope stability using the 2-D resistivity method at Jalan Tun Sardon and Sungai Batu, Pulau Pinang. These areas are considered highly prone to unstable slopes in Penang Island, based on recent cases of rockfall and landslide reported especially during the rainy season. At both study areas, resistivity values greater than 5000 ohm-m were detected and interpreted as fresh granite. Weathered granite, indicated by resistivity values of 750-1500 ohm-m, was found at depths of < 14 meters in the Sungai Batu area, while at Jalan Tun Sardon, weathered granite with resistivity values of 750-2000 ohm-m was found at depths of < 14 meters at distances of 0-90 meters, but at distances of 95-150 meters it was found at depths of < 26 meters. A saturated zone was detected only at Sungai Batu, with resistivity values < 250 ohm-m at distances of 100-120 meters. A fracture was detected at a distance of about 70 meters in the Jalan Tun Sardon area. Slope instability is expected to be governed by the weathered granite that dominates the subsurface of the study areas, along with triggering factors such as heavy rainfall. Keywords: 2-D resistivity, environmental issue, landslide, slope stability
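The interpretation rules described above amount to thresholding resistivity values into lithological classes. A minimal sketch might look like the following (the thresholds are taken from the ranges reported for this site; values falling in the gaps between ranges are left unclassified):

```python
def classify_granite_layer(resistivity_ohm_m):
    """Rough lithological classification for this granite terrain,
    using the illustrative resistivity ranges reported in the survey."""
    if resistivity_ohm_m < 250:
        return "saturated zone"
    if 750 <= resistivity_ohm_m <= 2000:
        return "weathered granite"
    if resistivity_ohm_m > 5000:
        return "fresh granite"
    return "transitional / unclassified"

for value in (100, 1000, 6000):
    print(value, classify_granite_layer(value))
```

In practice such thresholds are calibrated per site against borehole or outcrop data rather than applied universally.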
Procedia PDF Downloads 228
13069 Simplified Stress Gradient Method for Stress-Intensity Factor Determination
Authors: Jeries J. Abou-Hanna
Abstract:
Several techniques exist for determining stress-intensity factors in linear elastic fracture mechanics analysis. These techniques are based on analytical, numerical, and empirical approaches that have been well documented in the literature and in engineering handbooks. However, not all techniques share the same merit. Overly conservative results, numerical methods that require extensive computational effort, and methods requiring copious user parameters all hinder practicing engineers from efficiently evaluating stress-intensity factors. This paper investigates the prospects of reducing the complexity and the number of required variables for determining stress-intensity factors through the utilization of the stress gradient and a weighting function. The heart of this work resides in the understanding that fracture emanating from stress concentration locations cannot be explained by a single maximum-stress approach, but requires the use of a critical volume in which the crack exists. In order to assess the effectiveness of this technique, this study investigated components of different notch geometries and varying levels of stress gradient. Two forms of weighting function were employed to determine stress-intensity factors, and the results were compared to exact analytical methods. The results indicated that the 'exponential' weighting function was superior to the 'absolute' weighting function. An error band of +/- 10% was met for cases ranging from a steep stress gradient in a sharp v-notch to the less severe stress transitions of a large circular notch. The incorporation of the proposed method has shown to be a worthwhile consideration. Keywords: fracture mechanics, finite element method, stress intensity factor, stress gradient
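The specific 'exponential' and 'absolute' weighting functions evaluated in the paper are not given here, but the underlying weight-function idea — integrating the stress distribution against a crack-geometry kernel rather than using a single peak stress — can be sketched for the classical textbook case of a center crack in an infinite plate (this is the standard closed-form kernel, not the authors' method):

```python
import math

def sif_center_crack(sigma, a, n=2000):
    """Mode-I stress-intensity factor for a through crack of half-length a
    in an infinite plate under symmetric crack-face stress sigma(x), from
    K = 2*sqrt(a/pi) * integral_0^a sigma(x) / sqrt(a^2 - x^2) dx.
    The integrand is singular at x = a, so substitute x = a*sin(t):
    the integral becomes integral_0^{pi/2} sigma(a*sin(t)) dt."""
    h = (math.pi / 2.0) / n
    s = 0.0
    for i in range(n):
        t = (i + 0.5) * h            # midpoint rule over t in [0, pi/2]
        s += sigma(a * math.sin(t)) * h
    return 2.0 * math.sqrt(a / math.pi) * s

# Uniform remote stress of 100 MPa, half-length 10 mm:
# recovers the handbook result K = sigma * sqrt(pi * a)
print(sif_center_crack(lambda x: 100.0, 0.01))
```

For a non-uniform, decaying stress field near a notch, the same integral weights the near-crack stresses most heavily, which is the effect the paper's weighting functions approximate.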
Procedia PDF Downloads 135
13068 Self-Healing Phenomenon Evaluation in Cementitious Matrix with Different Water/Cement Ratios and Crack Opening Age
Authors: V. G. Cappellesso, D. M. G. da Silva, J. A. Arndt, N. dos Santos Petry, A. B. Masuero, D. C. C. Dal Molin
Abstract:
Concrete elements are subject to cracking, which can be an access point for deleterious agents that can trigger pathological manifestations, reducing the service life of these structures. Finding ways to minimize or eliminate the effects of the penetration of these aggressive agents, such as the sealing of cracks, is one way of contributing to the durability of these structures. The cementitious self-healing phenomenon can be classified into two different processes. Autogenous self-healing can be defined as a natural process in which the sealing of cracks occurs without the stimulation of external agents, meaning without different materials being added to the mixture, while the autonomous self-healing phenomenon depends on the insertion of a specific engineered material added to the cement matrix in order to promote its recovery. This work aims to evaluate the autogenous self-healing of concretes produced with different water/cement ratios and exposed to wet/dry cycles, considering two crack-opening ages: 3 days and 28 days. The self-healing phenomenon was evaluated using two techniques: crack-healing measurement using ultrasonic waves and image analysis performed with an optical microscope. By both methods, it is possible to observe the self-healing of the cracks. For young crack-opening ages and lower water/cement ratios, the self-healing capacity is higher when compared to advanced crack-opening ages and higher water/cement ratios. Regardless of the crack-opening age, these concretes were found to stabilize the self-healing process after 80 to 90 days. Keywords: self-healing, autogenous, water/cement ratio, curing cycles, test methods
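A simple way to quantify crack healing from the ultrasonic measurements mentioned above is a pulse-velocity recovery index. The definition below is one common convention from the self-healing literature, not necessarily the one used in this study, and the velocities in the example are hypothetical:

```python
def healing_index(v_uncracked, v_cracked, v_healed):
    """Fractional recovery of ultrasonic pulse velocity:
    0 = no healing (velocity unchanged from the cracked state),
    1 = full recovery of the uncracked velocity."""
    return (v_healed - v_cracked) / (v_uncracked - v_cracked)

# Hypothetical pulse velocities in m/s: sound, freshly cracked, after healing
print(healing_index(4500.0, 3000.0, 3750.0))
```

Tracking this index over the wet/dry cycles would show the stabilisation after 80 to 90 days that the abstract reports.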
Procedia PDF Downloads 161