Search results for: classification algorithms
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3894

474 Rehabilitation of Dilapidated Buildings in Morocco: Turning Urban Challenges into Opportunities

Authors: Derradji A., Ben El Mamoun M., Zakaria E., Charadi I. Anrur

Abstract:

The issue of dilapidated buildings represents a significant opportunity for constructive and beneficial interventions in Morocco. Faced with challenges associated with aging constructions and rapid urbanization, the country is committed to developing innovative strategies aimed at revitalizing urban areas and enhancing the sustainability of infrastructure, thereby ensuring citizens' safety. Through targeted investments in the renovation and modernization of existing buildings, Morocco aims to stimulate job creation, boost the local economy, and improve the quality of life for residents. Additionally, the integration of sustainable construction standards and the strengthening of regulations will promote resilient and environmentally friendly urban development. In this proactive perspective, LABOTEST has been commissioned by the National Agency for Urban Renewal (ANRUR) to conduct an in-depth study. This study focuses on the technical expertise of 1800 buildings identified as dilapidated in the prefectures of Rabat and Skhirat-Témara following an initial clearance operation. The primary objective of this initiative is to conduct a comprehensive diagnosis of these buildings and define the necessary interventions to eliminate potential risks while ensuring appropriate treatment. The article presents the adopted intervention methodology, taking into account the social dimensions involved, as well as the results of the technical expertise. These results include the classification of buildings according to their degree of urgency and recommendations for appropriate precautionary measures. Additionally, different pathologies are identified and accompanied by specific treatment proposals for each type of building. Since this study, the adopted approach has been generalized to the entire territory of Morocco. LABOTEST has been engaged by other cities such as Casablanca, Chefchaouen, Ouazzane, Azilal, Bejaad, and Demnate. 
This extension of the initiative demonstrates Morocco's commitment to addressing urban challenges in a proactive and inclusive manner. These efforts also illustrate the endeavors undertaken to transform urban challenges into opportunities for sustainable development and socio-economic progress for the entire population.

Keywords: building, dilapidated, rehabilitation, Morocco

Procedia PDF Downloads 63
473 Uncertainty Quantification of Corrosion Anomaly Length of Oil and Gas Steel Pipelines Based on Inline Inspection and Field Data

Authors: Tammeen Siraj, Wenxing Zhou, Terry Huang, Mohammad Al-Amin

Abstract:

The high-resolution inline inspection (ILI) tool is used extensively in the pipeline industry to identify, locate, and measure metal-loss corrosion anomalies on buried oil and gas steel pipelines. Corrosion anomalies may occur singly (i.e., individual anomalies) or as clusters (i.e., a colony of corrosion anomalies). Although ILI technology has advanced immensely, there are measurement errors associated with the sizes of corrosion anomalies reported by ILI tools due to limitations of the tools and associated sizing algorithms, and the detection threshold of the tools (i.e., the minimum detectable feature dimension). Quantifying the measurement error in the ILI data is crucial for corrosion management and for developing maintenance strategies that satisfy safety and economic constraints. Studies on the measurement error associated with the length of corrosion anomalies (in the longitudinal direction of the pipeline) have been scarcely reported in the literature; this error is investigated in the present study. Limitations in the ILI tool and the clustering process can sometimes cause clustering error, which is defined as the error introduced during the clustering process by including or excluding a single anomaly or group of anomalies in or from a cluster. Clustering error has been found to be one of the biggest contributory factors to the relatively high uncertainties associated with ILI-reported anomaly length. As such, this study focuses on developing a consistent and comprehensive framework to quantify the measurement errors in the ILI-reported anomaly length by comparing the ILI data and corresponding field measurements for individual and clustered corrosion anomalies. The analysis carried out in this study is based on the ILI and field measurement data for a set of anomalies collected from two segments of a buried natural gas pipeline currently in service in Alberta, Canada. 
Data analyses showed that the measurement error associated with the ILI-reported length of anomalies without clustering error, denoted as Type I anomalies, is markedly less than that for anomalies with clustering error, denoted as Type II anomalies. A methodology employing data mining techniques is further proposed to classify Type I and Type II anomalies based on the ILI-reported corrosion anomaly information.
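The error-quantification step described above can be sketched in a few lines; the helper names and the toy numbers below are illustrative assumptions, not the study's actual code:

```python
def length_errors(ili_lengths, field_lengths):
    """Per-anomaly length measurement error: field measurement minus ILI report."""
    return [f - i for i, f in zip(ili_lengths, field_lengths)]

def error_stats(errors):
    """Bias (mean error) and scatter (population standard deviation)."""
    n = len(errors)
    mean = sum(errors) / n
    var = sum((e - mean) ** 2 for e in errors) / n
    return mean, var ** 0.5
```

Computing these statistics separately for Type I and Type II anomalies reproduces the comparison made in the abstract: a larger bias and scatter would be expected for the Type II group.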

Keywords: clustered corrosion anomaly, corrosion anomaly assessment, corrosion anomaly length, individual corrosion anomaly, metal-loss corrosion, oil and gas steel pipeline

Procedia PDF Downloads 307
472 Enhanced Tensor Tomographic Reconstruction: Integrating Absorption, Refraction and Temporal Effects

Authors: Lukas Vierus, Thomas Schuster

Abstract:

A general framework is examined for dynamic tensor field tomography within an inhomogeneous medium characterized by refraction and absorption, treated as an inverse source problem concerning the associated transport equation. Guided by Fermat’s principle, the Riemannian metric within the specified domain is determined by the medium's refractive index. While considerable literature exists on the inverse problem of reconstructing a tensor field from its longitudinal ray transform within a static Euclidean environment, limited inversion formulas and algorithms are available for general Riemannian metrics and time-varying tensor fields. It is established that tensor field tomography, akin to an inverse source problem for a transport equation, persists in dynamic scenarios. Framing dynamic tensor tomography as an inverse source problem embodies a comprehensive perspective within this domain. Ensuring well-defined forward mappings necessitates establishing existence and uniqueness for the underlying transport equations. However, the bilinear forms of the associated weak formulations fail to meet the coercivity condition. Consequently, recourse to viscosity solutions is taken, demonstrating their unique existence within suitable Sobolev spaces (in the static case) and Sobolev-Bochner spaces (in the dynamic case), under a specific assumption restricting variations in the refractive index. Notably, the adjoint problem can also be reformulated as a transport equation, with analogous results regarding uniqueness. Analytical solutions are expressed as integrals over geodesics, facilitating more efficient evaluation of forward and adjoint operators compared to solving partial differential equations. 
Numerical experiments are conducted using a Nesterov-accelerated Landweber method, encompassing various fields, absorption coefficients, and refractive indices, thereby illustrating the enhanced reconstruction achieved through this holistic modeling approach.
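For readers unfamiliar with the iteration named above, here is a minimal sketch for a generic discretized linear forward operator A; the operator, step size, and momentum schedule are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def nesterov_landweber(A, y, omega, n_iter):
    """Nesterov-accelerated Landweber iteration for the linear problem A x ≈ y."""
    x = np.zeros(A.shape[1])
    x_prev = x.copy()
    for k in range(n_iter):
        # Nesterov momentum extrapolation, then a Landweber gradient step.
        z = x + (k / (k + 3)) * (x - x_prev)
        x_prev = x
        x = z + omega * A.T @ (y - A @ z)
    return x
```

The step size omega must satisfy 0 < omega < 2 / ||A||² for the plain Landweber step to converge; the momentum term typically reduces the number of iterations needed.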

Keywords: attenuated refractive dynamic ray transform of tensor fields, geodesics, transport equation, viscosity solutions

Procedia PDF Downloads 50
471 Next Generation Radiation Risk Assessment and Prediction Tools Generation Applying AI-Machine (Deep) Learning Algorithms

Authors: Selim M. Khan

Abstract:

Indoor air quality is strongly influenced by the presence of radioactive radon (222Rn) gas. Indeed, exposure to high 222Rn concentrations is unequivocally linked to DNA damage and lung cancer and is a worsening issue in North American and European built environments, having increased over time within newer housing stocks as a function of as yet unclear variables. Indoor air radon concentration can be influenced by a wide range of environmental, structural, and behavioral factors. As some of these factors are quantitative while others are qualitative, no single statistical model can determine indoor radon level precisely while simultaneously considering all these variables across a complex and highly diverse dataset. The ability of AI machine (deep) learning to simultaneously analyze multiple quantitative and qualitative features makes it suitable for predicting radon with a high degree of precision. Using Canadian and Swedish long-term indoor air radon exposure data, we are using artificial deep neural network models with random weights and polynomial statistical models in MATLAB to assess and predict radon health risk to humans as a function of geospatial, human behavioral, and built environmental metrics. Our initial artificial neural network with random weights, run with sigmoid activation, tested different combinations of variables and showed the highest prediction accuracy (>96%) within a reasonable number of iterations. Here, we present details of these emerging methods and discuss their strengths and weaknesses compared to the traditional artificial neural network and statistical methods commonly used to predict indoor air quality in different countries. We propose an artificial deep neural network with random weights as a highly effective method for assessing and predicting indoor radon.
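A minimal sketch of a network with random (untrained) hidden weights and a sigmoid activation, in the spirit of the model described above; the architecture, layer sizes, and least-squares fitting step are illustrative assumptions, not the authors' MATLAB implementation:

```python
import numpy as np

def fit_random_weight_net(X, y, n_hidden=50, seed=0):
    """Random-hidden-weight network: fixed random weights, sigmoid activation,
    output weights fitted by least squares (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # sigmoid hidden features
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def predict(X, W, b, beta):
    """Map inputs through the fixed random hidden layer, then the fitted weights."""
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```

Because only the output weights are fitted, training reduces to a single linear solve, which is why such models converge within few iterations.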

Keywords: radon, radiation protection, lung cancer, AI-machine deep learning, risk assessment, risk prediction, Europe, North America

Procedia PDF Downloads 95
470 The Thinking of Dynamic Formulation of Rock Aging Agent Driven by Data

Authors: Longlong Zhang, Xiaohua Zhu, Ping Zhao, Yu Wang

Abstract:

The construction of mines, railways, highways, water conservancy projects, etc., has formed a large number of high steep slope wounds in China. Under the premise of slope stability and safety, repairing these wound spaces at minimum cost, in a green manner close to the natural state, has become a new problem. Nowadays, in situ element testing and analysis, monitoring, and field quantitative factor classification and assignment evaluation produce vast amounts of data. Data processing and analysis will inevitably differentiate the morphology, mineral composition, and physicochemical properties between rock wounds, by which to dynamically match the appropriate restoration techniques and materials. In the present research, based on a grid partition of the slope surface, the oxide content of the rock minerals (SiO₂, CaO, MgO, Al₂O₃, Fe₃O₄, etc.) was tested, and the hardness and breakage of the rock texture were classified and assigned values. The data on essential factors were interpolated and normalized in GIS, which formed a differential zoning map of the slope space. According to the physical and chemical properties and spatial morphology of rocks in different zones, organic acids (plant waste fruit, fruit residue, etc.), natural mineral powders (zeolite, apatite, kaolin, etc.), water-retaining agent, and plant gum (melon powder) were mixed in different proportions to form rock aging agents. Spraying aging agents with different formulas on different sections of the slopes can effectively age the fresh rock wound, providing convenience for seed implantation and reducing the transformation of heavy metals in the rocks. Through repeated engineering practice, a dynamic data platform of the rock aging agent formula system has been formed, which provides materials for the restoration of different slopes. It will also provide a guideline for the mixed use of various natural materials to solve the complex, non-uniform ecological restoration problem.
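The normalization step mentioned above is not specified in detail; a common choice before spatial interpolation is min-max scaling of each factor, sketched here as an assumption rather than the authors' GIS workflow:

```python
def min_max_normalize(values):
    """Scale a list of factor values to [0, 1] so that different factors
    (oxide content, hardness, breakage) become comparable before gridding."""
    lo, hi = min(values), max(values)
    if hi == lo:
        # A constant factor carries no spatial information; map it to zero.
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]
```

Each normalized factor layer can then be interpolated onto the slope grid and combined into the differential zoning map.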

Keywords: data-driven, dynamic state, high steep slope, rock aging agent, wounds

Procedia PDF Downloads 111
469 A Literature Review of Precision Agriculture: Applications of Diagnostic Diseases in Corn, Potato, and Rice Based on Artificial Intelligence

Authors: Carolina Zambrana, Grover Zurita

Abstract:

Food loss caused by deficient agricultural production is one of the major problems worldwide. It puts the population's food security and the efficiency of farming investments at risk. This food security is expected to be achieved through each country's own efficient production, which will have an impact on the well-being of its population and, thus, also on food sovereignty. Production losses in quantity and quality occur due to the lack of efficient detection of diseases at an early stage. It is very difficult to improve agricultural efficiency using traditional methods, since detection of the main diseases is slow and imprecise, especially when the production areas are extensive. Therefore, the main objective of this research is to perform a systematic literature review, covering the last five years, of Precision Agriculture (PA) in order to understand the state of the art of the set of new technologies, procedures, and optimization processes with Artificial Intelligence (AI). This study focuses on the diagnosis of diseases in corn, potato, and rice. The extensive literature review will be performed on the Elsevier, Scopus, and IEEE databases. In addition, this research will focus on advanced digital image processing and the development of software and hardware for PA. Convolutional neural networks will receive special attention due to their outstanding diagnostic results. Moreover, the studied data will be combined with artificial intelligence algorithms for the automatic diagnosis of crop quality. Finally, precision agriculture, with technology applied to the agricultural sector, allows the land to be exploited efficiently. Such a system requires sensors, drones, data acquisition cards, and global positioning systems. 
This research seeks to merge different areas of science, namely control engineering, electronics, digital image processing, and artificial intelligence, for the development, in the near future, of a low-cost image measurement system that allows the optimization of crops with AI.

Keywords: precision agriculture, convolutional neural network, deep learning, artificial intelligence

Procedia PDF Downloads 77
468 Computational Study of Composite Films

Authors: Rudolf Hrach, Stanislav Novak, Vera Hrachova

Abstract:

Composite and nanocomposite films represent a class of promising materials and are often objects of study due to their mechanical, electrical, and other properties. The most interesting are probably the composite metal/dielectric structures consisting of a metal component embedded in an oxide or polymer matrix. The behaviour of composite films varies with the amount of the metal component inside, known as the filling factor. For small filling factors, the structures contain individual metal particles or nanoparticles completely insulated by the dielectric matrix, and the films have more or less dielectric properties. The conductivity of the films increases with increasing filling factor, and finally a transition into the metallic state occurs. The behaviour of composite films near the percolation threshold, where the charge transport mechanism changes from thermally-activated tunnelling between individual metal objects to ohmic conductivity, is especially important. The physical properties of composite films are determined not only by the concentration of the metal component but also by the spatial and size distributions of the metal objects, which are influenced by the technology used. In our contribution, a study of composite structures was performed with the help of methods of computational physics. The study consists of two parts: -Generation of simulated composite and nanocomposite films. Techniques based on hard-sphere or soft-sphere models as well as on atomic modelling are used here. Characterization of the prepared composite structures by image analysis of their sections or projections then follows. However, various morphological methods must be analysed, as the standard algorithms based on the theory of mathematical morphology lose their sensitivity when applied to composite films. 
-The charge transport in the composites was studied by the kinetic Monte Carlo method, as there is a close connection between the structural and electric properties of composite and nanocomposite films. It was found that near the percolation threshold the paths of tunnel current form so-called fuzzy clusters. The main aim of the present study was to establish the correlation between the morphological properties of composites/nanocomposites and the structures of conducting paths in them, as a function of the technology of composite film preparation.
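The hard-sphere generation mentioned above can be illustrated by random sequential adsorption of non-overlapping disks in a 2D section; the parameters and rejection scheme are illustrative assumptions, not the authors' generator:

```python
import random

def hard_sphere_film(n_target, radius, size, max_tries=10000, seed=1):
    """Place up to n_target non-overlapping disks of given radius uniformly
    at random in a size x size square (random sequential adsorption)."""
    random.seed(seed)
    centers = []
    tries = 0
    while len(centers) < n_target and tries < max_tries:
        tries += 1
        x, y = random.uniform(0, size), random.uniform(0, size)
        # Accept only if the new disk overlaps no previously placed disk.
        if all((x - cx) ** 2 + (y - cy) ** 2 >= (2 * radius) ** 2
               for cx, cy in centers):
            centers.append((x, y))
    return centers
```

The filling factor of the simulated film follows from the number of accepted disks, which in turn determines whether the structure sits below or above the percolation threshold.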

Keywords: composite films, computer modelling, image analysis, nanocomposite films

Procedia PDF Downloads 392
467 The Role of Twitter Bots in Political Discussion on 2019 European Elections

Authors: Thomai Voulgari, Vasilis Vasilopoulos, Antonis Skamnakis

Abstract:

The aim of this study is to investigate the influence on the European election campaigns (May 23-26, 2019) of artificial intelligence tools on Twitter, such as troll factories and automated inauthentic accounts. Our research focuses on the last European Parliamentary elections, specifically in Italy, Greece, Germany, and France. It is difficult to estimate how many Twitter users are actually bots (Echeverría, 2017). Detecting fake accounts is becoming even more complicated as AI bots grow more advanced. A political bot can be programmed to post comments on a Twitter account for a political candidate, target journalists with manipulated content, or engage with politicians and artificially increase their impact and popularity. We analyze variables related to 1) the scope of activity of automated bot accounts, 2) the degree of coherence, and 3) the degree of interaction, taking into account different factors, such as the type of content of Twitter messages and their intentions, as well as their spread to the general public. For this purpose, we collected large volumes of tweets from the accounts of party leaders and MEP candidates between 10 May and 26 July, carrying out hashtag-based content analysis with an innovative network analysis tool known as MediaWatch.io (https://mediawatch.io/). According to our findings, one of the highest percentages (64.6%) of automated "bot" accounts during the 2019 European election campaigns was found in Greece. In general terms, political bots aim at the proliferation of misinformation on social media. Targeting voters is one way this is achieved, contributing to social media manipulation. We found that political parties and individual politicians create and promote purposeful content on Twitter using algorithmic tools. Based on this analysis, online political advertising plays an important role in the process of spreading misinformation during election campaigns. 
Overall, inauthentic accounts and social media algorithms are being used to manipulate political behavior and public opinion.

Keywords: artificial intelligence tools, human-bot interactions, political manipulation, social networking, troll factories

Procedia PDF Downloads 137
466 Place and Importance of Goats in the Milk Sector in Algeria

Authors: Tennah Safia, Azzag Naouelle, Derdour Salima, Hafsi Fella, Laouadi Mourad, Laamari Abdalouahab, Ghalmi Farida, Kafidi Nacerredine

Abstract:

Currently, goat farming is widely practiced among the rural population of Algeria. Although the milk yield of goats is low (110 liters per goat per year on average), this milk partly ensures the feeding of small children and provides raw milk, curd, and fermented milk to the whole family. In addition, given its investment cost, which is ten times lower than that of a cow, this level of production is still of interest. This interest is reinforced by the qualities of goat's milk, highly sought after for its nutritional value, which is superior to that of cow's milk. Likewise, its aptitude for processing, particularly into quality cheeses, is highly sought after. The objective of this study is to describe the situation of goat milk production in rural areas of Algeria and to establish a classification of goat breeds according to their production potential. For this, a survey was carried out with goat farmers in the Algerian steppe. Three indigenous breeds were encountered in this study: the Arabia, Mozabite, and Mekatia breeds, with Arabia being the most dominant. The Mekatia and Mozabite breeds appear to have higher production and milking abilities than other local breeds. They are therefore indicated to play the role of local dairy breeds par excellence. The other breed whose milk performance could be improved is the Arabia; its current milk performance, however, is low. In order to increase milk production, uncontrolled crosses with imported breeds (mainly Saanen and Alpine) have been carried out. The third population that can be included in the category for dairy production is the dairy breed group of imported origin. There are farms in Algeria composed of Alpine and Saanen goats born locally. The milk performance of local goats, the crossbred population, and dairy breeds of imported origin could be improved by selection. For this, it is necessary to set up a milk control to detect the best animals. 
This control could be carried out among interested farmers in each large goat breeding area. In conclusion, sustained efforts must be made to enable the sustainable development of the goat sector in Algeria. It will, therefore, be necessary to deepen the reflection on a national strategy to valorize goat's milk, taking into account the specificities of the environment, the genetic biodiversity, and the eating habits of the Algerian consumer.

Keywords: goat, milk, Algeria, biodiversity

Procedia PDF Downloads 179
465 Automatic Detection of Sugarcane Diseases: A Computer Vision-Based Approach

Authors: Himanshu Sharma, Karthik Kumar, Harish Kumar

Abstract:

The major problem in crop cultivation is the occurrence of multiple crop diseases. During the growth stage, timely identification of crop diseases is paramount to ensure high crop yield, lower production costs, and minimal pesticide usage. In most cases, crop diseases produce observable characteristics and symptoms. Surveyors usually diagnose crop diseases as they walk through the fields. However, surveyor inspections tend to be biased and error-prone due to the monotonous nature of the task and the subjectivity of individuals. In addition, visual inspection of each leaf or plant is costly, time-consuming, and labour-intensive. Furthermore, the plant pathologists and experts who can often identify a disease from its symptoms at an early stage are not readily available in remote regions. Therefore, this study specifically addresses early detection of leaf scald, red rot, and eyespot diseases in sugarcane plants. The study proposes a computer vision-based approach using a convolutional neural network (CNN) for automatic identification of crop diseases. To facilitate this, images of sugarcane diseases were first taken from Google, without modifying the scene or background or controlling the illumination, to build the training dataset. The testing dataset was then developed from images collected in real time from sugarcane fields in India. The image dataset was pre-processed for feature extraction and selection. Finally, the CNN-based Visual Geometry Group (VGG) model was deployed on the training and testing datasets to classify the images into diseased and healthy sugarcane plants, and the model's performance was measured using various parameters, i.e., accuracy, sensitivity, specificity, and F1-score. The promising results of the proposed model lay the groundwork for automatic early detection of sugarcane disease. The proposed research directly supports an increase in crop yield.
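The performance parameters listed above follow directly from the binary confusion matrix; a small helper (illustrative, not the study's code) makes the definitions explicit:

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity, specificity, and F1 for binary labels
    (1 = diseased, 0 = healthy)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0
    specificity = tn / (tn + fp) if tn + fp else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    f1 = (2 * precision * sensitivity / (precision + sensitivity)
          if precision + sensitivity else 0.0)
    return accuracy, sensitivity, specificity, f1
```

Sensitivity here is the fraction of diseased plants correctly flagged, while specificity is the fraction of healthy plants correctly passed, which is why both are needed alongside accuracy.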

Keywords: automatic classification, computer vision, convolutional neural network, image processing, sugarcane disease, visual geometry group

Procedia PDF Downloads 114
464 A New Obesity Index Derived from Waist Circumference and Hip Circumference Well-Matched with Other Indices in Children with Obesity

Authors: Mustafa M. Donma, Orkide Donma

Abstract:

Anthropometric obesity indices such as waist circumference (WC), indices derived from anthropometric measurements such as waist-to-hip ratio (WHR), and indices created from body fat mass composition such as trunk-to-leg fat ratio (TLFR) are commonly used for the evaluation of mild or severe forms of obesity. Their clinical utilities are being compared using body mass index (BMI) percentiles to classify obesity groups. The best of them is still being investigated to make a clear-cut discrimination between healthy normal individuals (N-BMI) and overweight, obese (OB), or morbidly obese patients. The aim of this study is to derive a new index which best suits the purpose of discriminating children with N-BMI from OB children. A total of eighty-three children participated in the study. Two groups were constituted. The first group comprised 42 children with N-BMI, and the second group was composed of 41 OB children, whose age- and sex-adjusted BMI percentile values varied between 95 and 99. The corresponding values for the first group were between 15 and 85. This classification was based upon the tables created by the World Health Organization. The institutional ethics committee approved the study protocol. Informed consent forms were signed by the parents of the participants. Anthropometric measurements were taken and recorded following a detailed physical examination. Within this context, weight, height (Ht), WC, hip circumference (HC), and neck circumference (NC) values were taken. Body mass index, WHR, (WC+HC)/2, WC/Ht, (WC/HC)/Ht, and WC*NC were calculated. Bioelectrical impedance analysis was performed to obtain the body's fat compartments in terms of total fat, trunk fat, leg fat, and arm fat masses. Trunk-to-leg fat ratio, trunk-to-appendicular fat ratio (TAFR), and (trunk fat+leg fat)/2 ((TF+LF)/2) were calculated. Fat mass index (FMI) and diagnostic obesity notation model assessment-II (D2I) index values were calculated. Statistical analysis of the data was performed. 
Significantly increased values of (WC+HC)/2, (TF+LF)/2, D2I, and FMI were observed in the OB group in comparison with those of the N-BMI group. Significant correlations were found between BMI and WC, (WC+HC)/2, (TF+LF)/2, TLFR, TAFR, and D2I as well as FMI in both the N-BMI and OB groups. The same correlations were obtained for WC. (WC+HC)/2 was correlated with TLFR, TAFR, (TF+LF)/2, D2I, and FMI in the N-BMI group. In the OB group, the correlations were the same except those with TLFR and TAFR. These correlations were not present with WHR. Correlations were observed between TLFR and BMI, WC, (WC+HC)/2, (TF+LF)/2, and D2I as well as FMI in the N-BMI group. The same correlations were also observed with TAFR. In the OB group, correlations between TLFR or TAFR and BMI, WC, as well as (WC+HC)/2 were missing. None was noted with WHR. From these findings, it was concluded that (WC+HC)/2, but not WHR, was much more suitable as an anthropometric obesity index. The only correlation valid in both groups was that between (WC+HC)/2 and (TF+LF)/2. This index was suggested as a link between anthropometric and fat-based indices.
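The proposed index and the correlation analysis behind the findings above are simple to compute; a sketch with hypothetical helper names (the study's own statistics software is not specified):

```python
def wc_hc_index(wc, hc):
    """The proposed anthropometric index (WC + HC) / 2, in the same units."""
    return (wc + hc) / 2

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)
```

Applying `pearson` to the (WC+HC)/2 and (TF+LF)/2 columns of each group reproduces the kind of correlation test the abstract reports.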

Keywords: children, hip circumference, obesity, waist circumference

Procedia PDF Downloads 167
463 Transformation of Antitrust Policy against Collusion in Russia and Transition Economies

Authors: Andrey Makarov

Abstract:

This article will focus on the development of antitrust policy in transition economies in the context of preventing explicit and tacit collusion. The experience of BRICS, CIS (Ukraine, Kazakhstan), and CEE countries (Bulgaria, Hungary, Latvia, Lithuania, Poland, Romania, Slovakia, Slovenia, the Czech Republic, Estonia) in the creation of antitrust institutions was analyzed, including both legislation and enforcement practice. Most of these countries were forced, in the early 1990s, to develop completely new legislation in the field of competition protection, and it is important to compare different ways of building antitrust institutions and their policy results. The article proposes a special approach to the evaluation of collusion-prevention mechanisms. This approach takes into account such enforcement problems as: classification problems (tacit vs. explicit collusion, vertical vs. horizontal agreements), flexibility of prohibitions (the balance between "per se" and "rule of reason" approaches de jure and in practice), design of sanctions, the private enforcement challenge, leniency program mechanisms, the role of antitrust authorities, etc. The analysis is conducted using both official data published by competition authorities and expert assessments. The paper will show how the integration process within the EU predetermined some aspects of the development of antitrust policy in CEE countries, including the trend toward use of the "rule of reason" approach. The experience of CEE countries with special mechanisms of government intervention was also analyzed. In developing antitrust policy, CIS countries followed more or less original paths, without such strong influence from the European Union; particular attention is given to the Russian experience in this field, including the analysis of judicial decisions in antitrust cases. 
The main problems and challenges for transition economies in this field will be shown, including: the legal uncertainty problem; the rigidity of prohibitions; the enforcement priorities of the regulator; the interaction of administrative and criminal law and the limited effectiveness of criminal sanctions in the antitrust field; the effectiveness of leniency program design; and the private enforcement challenge.

Keywords: collusion, antitrust policy, leniency program, transition economies, Russia, CEE

Procedia PDF Downloads 444
462 Algorithm for Predicting Cognitive Exertion and Cognitive Fatigue Using a Portable EEG Headset for Concussion Rehabilitation

Authors: Lou J. Pino, Mark Campbell, Matthew J. Kennedy, Ashleigh C. Kennedy

Abstract:

A concussion is complex and nuanced, with cognitive rest being a key component of recovery. Cognitive overexertion during rehabilitation from a concussion is associated with delayed recovery. However, daily living imposes cognitive demands that may be unavoidable and difficult to quantify. Therefore, a portable tool capable of alerting patients before cognitive overexertion occurs could allow patients to maintain their quality of life while preventing symptoms and recovery setbacks. EEG allows for a sensitive measure of cognitive exertion. Clinical 32-lead EEG headsets are not practical for day-to-day concussion rehabilitation management. However, there are now commercially available and affordable portable EEG headsets. Thus, these headsets can potentially be used to continuously monitor cognitive exertion during mental tasks to alert the wearer of overexertion, with the aim of preventing the occurrence of symptoms and speeding recovery. The objective of this study was to test an algorithm for predicting cognitive exertion from EEG data collected with a portable headset. EEG data were acquired from 10 participants (5 males, 5 females). Each participant wore a portable 4-channel EEG headband while completing 10 tasks: rest (eyes closed), rest (eyes open), logic puzzles at three increasing difficulty levels, multiplication questions at three increasing difficulty levels, rest (eyes open), and rest (eyes closed). After each task, the participant was asked to report their perceived level of cognitive exertion using the NASA Task Load Index (TLX). Each participant then completed a second session on a different day. A customized machine learning model was created using data from the first session. The performance of each model was then tested using data from the second session. The mean correlation coefficient between TLX scores and predicted cognitive exertion was 0.75 ± 0.16. 
The results support the efficacy of the algorithm for predicting cognitive exertion. This demonstrates that the algorithms developed in this study used with portable EEG devices have the potential to aid in the concussion recovery process by monitoring and warning patients of cognitive overexertion. Preventing cognitive overexertion during recovery may reduce the number of symptoms a patient experiences and may help speed the recovery process.
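The per-participant calibration described above can be sketched in miniature: fit a simple least-squares model mapping an EEG band-power feature from the first session to reported TLX scores, then check how well its predictions track TLX in a second session. The feature name (theta/alpha ratio) and all numbers below are illustrative assumptions, not data from the study.

```python
# Hypothetical sketch of per-participant calibration: session-1 data fit a
# linear model from an EEG band-power feature to NASA-TLX; session-2 data
# test how well the predictions track reported TLX.

def fit_line(x, y):
    """Ordinary least squares for a single feature: returns (slope, intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    slope = sxy / sxx
    return slope, my - slope * mx

def pearson(a, b):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

# Session 1: illustrative theta/alpha band-power ratios per task vs. TLX scores.
ratio_s1 = [0.8, 1.1, 1.5, 1.9, 2.4, 2.9]
tlx_s1 = [10, 22, 35, 48, 61, 75]
slope, intercept = fit_line(ratio_s1, tlx_s1)

# Session 2 (different day): predict TLX from new ratios and check agreement.
ratio_s2 = [0.9, 1.3, 1.6, 2.1, 2.6]
tlx_s2 = [14, 28, 37, 55, 70]
pred = [slope * r + intercept for r in ratio_s2]
print(round(pearson(pred, tlx_s2), 2))
```

In practice the study's model would use multiple channels and richer features, but the cross-session validation logic is the same.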

Keywords: cognitive activity, EEG, machine learning, personalized recovery

Procedia PDF Downloads 218
461 Q Slope Rock Mass Classification and Slope Stability Assessment Methodology Application in Steep Interbedded Sedimentary Rock Slopes for a Motorway Constructed North of Auckland, New Zealand

Authors: Azariah Sosa, Carlos Renedo Sanchez

Abstract:

The development of a new motorway north of Auckland (New Zealand) includes steep rock cuts, from 63 up to 85 degrees, in an interbedded sandstone and siltstone rock mass of the geological unit Waitemata Group (Pakiri Formation), which shows sub-horizontal bedding planes, various sub-vertical joint sets, and a diverse weathering profile. In this kind of rock mass, which can be classified as a weak rock, the definition of the maximum stable geometry is not governed only by the discontinuities and defects evident in the rock; it is also important to consider the global stability of the rock slope, including in the analysis the rock mass characterisation, the influence of groundwater, the geological evolution, and the weathering processes. Depending on the weakness of the rock and the processes it has undergone, global stability can, in fact, be a more restrictive element than the potential instability of individual blocks along discontinuities. This paper discusses the elements that govern the stability of rock slopes constructed in a rock formation with a favourable bedding and distribution of discontinuities (horizontal and vertical) but with weak behaviour in terms of global rock mass characterisation. In this context, classification systems such as Q-slope and the slope stability assessment methodology (SSAM) have proven to be important tools that complement the assessment of global stability, together with analytical tools addressing wedge-type failures and limit equilibrium methods. The paper focuses on the applicability of these two recent empirical classifications for evaluating slope stability in 18 already excavated rock slopes in the Pakiri Formation, through comparison between predicted and observed stability issues and by reviewing the outcome of analytical methods (Rocscience slope stability software suite) against the stability expected from these rock classifications.
This exercise will help validate such findings and correlations arising from the two empirical methods in order to adjust the methods to the nature of this specific kind of rock mass and provide a better understanding of the long-term stability of the slopes studied.
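For readers unfamiliar with the Q-slope method, a minimal sketch is given below, assuming the Barton and Bar (2015) formulation in which the Q-slope value combines ratings for block size, least-favourable discontinuity shear strength, and environmental/geological conditions, and the steepest long-term stable slope angle is estimated as β = 20·log₁₀(Q-slope) + 65°. The input ratings are invented for illustration and are not the Pakiri slope data.

```python
import math

# Sketch of the Q-slope calculation (Barton & Bar, 2015 formulation assumed);
# the ratings below are illustrative, not taken from the paper's slopes.

def q_slope(rqd, jn, jr, ja, jwice, srf_slope):
    """Q-slope = (RQD/Jn) * (Jr/Ja) * (Jwice/SRFslope): block size times
    least-favourable shear strength times environmental/geological factor."""
    return (rqd / jn) * (jr / ja) * (jwice / srf_slope)

def max_stable_angle(q):
    """Steepest long-term stable slope angle in degrees: 20*log10(Q) + 65."""
    return 20 * math.log10(q) + 65

# Illustrative ratings for a weak interbedded rock mass.
q = q_slope(rqd=50, jn=9, jr=1.5, ja=2.0, jwice=0.7, srf_slope=2.5)
beta = max_stable_angle(q)
print(round(q, 2), round(beta, 1))
```

With these assumed ratings, the predicted stable angle is far below the 63-85 degree cuts described above, which is exactly the situation where the paper's comparison against observed behaviour becomes informative.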

Keywords: Pakiri formation, Q-slope, rock slope stability, SSAM, weak rock

Procedia PDF Downloads 207
460 Crime Prevention with Artificial Intelligence

Authors: Mehrnoosh Abouzari, Shahrokh Sahraei

Abstract:

Today, with the increase in the quantity, severity, and variety of crimes, crime prevention faces a serious challenge: human resources alone, using traditional methods, will not be effective. One of the developments of the modern world is the presence of artificial intelligence in various fields, including criminal law. In fact, the use of artificial intelligence in criminal investigations and in fighting crime is a necessity in today's world, and its application in criminal science goes beyond other technologies, extending from prevention to the prediction of crime. Crime prevention addresses three factors: the offender, the offence, and the victim. It seeks to change the conditions of these factors on the assumption that the offender acts rationally, increasing the cost and risk of crime so that offenders desist from delinquency, and making potential victims aware of self-protection and of the circumstances that expose them to danger, thereby making crimes more difficult to commit. Artificial intelligence in the field of combating crime and social harm acts like an all-seeing eye that, regardless of time and place, anticipates and predicts the occurrence of a possible crime and thus helps prevent it. The purpose of this article is to collect and analyze the studies conducted on the use of artificial intelligence in predicting and preventing crime: how capable is this technology of predicting and preventing crime? The results show that the artificial intelligence technologies in use are capable of predicting and preventing crime and can find patterns in large data sets in a much more efficient way than humans.
In crime prediction and prevention, the term artificial intelligence refers to the increasing use of technologies that apply algorithms to large sets of data to assist or replace police work. In our discussion, artificial intelligence is used for predicting and preventing crime, including predicting the time and place of future criminal activities, effectively identifying patterns, and accurately predicting future behavior through data mining, machine learning, deep learning, data analysis, and neural networks. Because criminologists' knowledge can provide insight into risk factors for criminal behavior, among other issues, computer scientists can match this knowledge with the datasets on which artificial intelligence models are trained.

Keywords: artificial intelligence, criminology, crime, prevention, prediction

Procedia PDF Downloads 75
459 Gas Chromatography and Mass Spectrometry in Honey Fingerprinting: The Occurrence of 3,4-dihydro-3-oxoedulan and (E)-4-(r-1',t-2',c-4'-trihydroxy-3',6',6'-trimethylcyclohexyl)-but-3-en-2-one

Authors: Igor Jerkovic

Abstract:

Owing to their attractive sensory properties and low odour thresholds, norisoprenoids (degraded carotenoid-like structures with a 3,5,5-trimethylcyclohex-2-enoic unit) have been identified as aroma contributors in a number of different matrices. C₁₃-Norisoprenoids have been found among the volatile organic compounds of various honey types, as have C₉/C₁₀-norisoprenoids and C₁₄/C₁₅-norisoprenoids. Besides degradation of abscisic acid (which produces, e.g., dehydrovomifoliol, vomifoliol, and others), the cleavage of the C(9)=C(10) bond of other carotenoid precursors directly generates nonspecific C₁₃-norisoprenoids such as trans-β-damascenone, 3-hydroxy-trans-β-damascone, 3-oxo-α-ionol, 3-oxo-α-ionone, and β-ionone, found in various honey types. β-Damascenone and β-ionone, which smell like honey, exhibit the lowest odour threshold values of all C₁₃-norisoprenoids. This presentation focuses on two uncommon C₁₃-norisoprenoids in honey flavour that could be used as specific or nonspecific chemical markers of botanical origin. Namely, after screening of different honey types, the focus was directed to Centaurea cyanus L. and Allium ursinum L. honey. The samples were extracted by headspace solid-phase microextraction (HS-SPME) and ultrasonic solvent extraction (USE), and the extracts were analysed by gas chromatography and mass spectrometry (GC-MS). An SPME fiber with divinylbenzene/carboxen/polydimethylsiloxane (DVB/CAR/PDMS) coating was applied for the analysis of the C. cyanus honey headspace, and the predominant compound identified was 3,4-dihydro-3-oxoedulan (2,5,5,8a-tetramethyl-2,3,5,6,8,8a-hexahydro-7H-chromen-7-one, also known as 2,3,5,6,8,8a-hexahydro-2,5,5,8a-tetramethyl-7H-1-benzopyran-7-one). The oxoedulan structure contains an epoxide and is more volatile in comparison with its hydroxylated precursors. This compound has not been found in other honey types and can be considered specific for C. cyanus honey. The dichloromethane extract of A.
ursinum honey contained abundant (E)-4-(r-1',t-2',c-4'-trihydroxy-3',6',6'-trimethylcyclohexyl)-but-3-en-2-one, previously isolated as the dominant substance from the ether extracts of New Zealand thyme honey. Although a wide variety of degraded carotenoid-like substances have been identified in different honey types, this appears to be a rare case in which 3,4-dihydro-3-oxoedulan and (E)-4-(r-1',t-2',c-4'-trihydroxy-3',6',6'-trimethylcyclohexyl)-but-3-en-2-one have been found, which is of great importance for chemical fingerprinting and for identifying chemical biomarkers that can complement pollen analysis, the major method for honey classification.

Keywords: 3,4-dihydro-3-oxoedulan, (E)-4-(r-1',t-2',c-4'-trihydroxy-3',6',6'-trimethylcyclohexyl)-but-3-en-2-one, honey flavour, C₁₃-norisoprenoids

Procedia PDF Downloads 330
458 Red-Tide Detection and Prediction Using MODIS Data in the Arabian Gulf of Qatar

Authors: Yasir E. Mohieldeen

Abstract:

Qatar is one of the most water-scarce countries in the world. In 2014, the average per capita rainfall was less than 29 m³/year, while the global average is 6,000 m³/year. However, per capita water consumption in Qatar is among the highest in the world: more than 500 liters per person per day, whereas the global average is 160 liters per person per day. Since the early 2000s, Qatar has relied heavily on desalinated water from the Arabian Gulf as its main source of fresh water. In 2009, about 99.9% of the total potable water produced was desalinated. Reliance on desalinated water makes Qatar very vulnerable to water-related natural disasters such as the red-tide phenomenon. Qatar's strategic water reserve lasts for only 7 days. In case of a red-tide outbreak, the country would not be able to desalinate water for days, let alone the months that this disaster could last, as it clogs the desalination equipment. The 2008-09 red-tide outbreak, for instance, lasted for more than eight months and forced the closure of desalination plants in the region for weeks. This study aims to identify favourable conditions for red-tide outbreaks, using satellite data along with in-situ measurements. This identification would allow the prediction of these outbreaks and their hotspots. Prediction and monitoring of outbreaks are crucial to water security in the country, as different measures could be put in place in advance to prevent an outbreak and to mitigate its impact if it happened. Red-tide outbreaks are detected using different algorithms for chlorophyll concentration in the Gulf waters. Vegetation indices, such as the Normalized Difference Vegetation Index (NDVI) and the Enhanced Vegetation Index (EVI), were used along with the Surface Algal Bloom Index (SABI) to detect known outbreaks. MODIS (Moderate Resolution Imaging Spectroradiometer) bands are used to calculate these indices. A red-tide outbreak atlas for the Arabian Gulf is being produced.
Prediction of red-tide outbreaks ahead of their occurrence would give critical information on possible water shortages in the country. Detecting known outbreaks of the past few decades and the related parameters (e.g., water salinity, sea surface temperature, nutrients, sandstorms, etc.) enables the identification of the favourable conditions for red-tide outbreaks that are key to predicting them.
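The indices named above are simple band ratios of surface reflectance. A minimal sketch, assuming MODIS band 1 (red), band 2 (NIR), band 3 (blue), and band 4 (green), with illustrative reflectance values that are not from the study:

```python
# Spectral indices used for bloom detection, computed from surface reflectance.

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red)

def evi(nir, red, blue, g=2.5, c1=6.0, c2=7.5, l=1.0):
    """Enhanced Vegetation Index with the standard MODIS coefficients."""
    return g * (nir - red) / (nir + c1 * red - c2 * blue + l)

def sabi(nir, red, blue, green):
    """Surface Algal Bloom Index."""
    return (nir - red) / (blue + green)

# Illustrative reflectances for a bloom-affected water pixel (invented values).
nir, red, blue, green = 0.08, 0.05, 0.04, 0.06
print(round(ndvi(nir, red), 3), round(sabi(nir, red, blue, green), 3))
```

Applied per pixel across a MODIS scene, thresholds on such indices flag candidate bloom pixels for the outbreak atlas.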

Keywords: Arabian Gulf, MODIS, red-tide detection, strategic water reserve, water desalination

Procedia PDF Downloads 105
457 Impact of Clinical Pharmacist Intervention in Improving Drug Related Problems in Patients with Chronic Kidney Disease

Authors: Aneena Suresh, C. S. Sidharth

Abstract:

Drug-related problems (DRPs) are common in chronic kidney disease (CKD) patients and in end-stage patients undergoing hemodialysis. Treating these patients' comorbid conditions requires complex therapeutic regimens, which lead to the development of DRPs and call for frequent monitoring. Due to busy work schedules, physicians are often unable to deliver optimal care to these patients. Adding a clinical pharmacist to the team can improve the standard of care offered to CKD patients by minimizing DRPs. In India, the role of clinical pharmacists in improving health outcomes in CKD patients is poorly recognized. Therefore, this study was conducted to provide insight into the role of the clinical pharmacist in improving drug-related problems in patients with chronic kidney disease, thereby helping patients achieve the desired therapeutic outcomes. A prospective interventional study was conducted for one year in a 620-bed tertiary care hospital in India. Data were collected using an unstructured questionnaire, medication charts, etc. DRPs were categorized using the Hepler and Strand classification. Relationships of age, weight, GFR, average number of medications taken, average number of comorbidities, and average length of hospital stay with the DRPs were identified using the Mann-Whitney U test. The study population primarily comprised patients above the age of 50 years, with a mean age of 59.91 ± 13.59. Our study showed that 25% of the population presented with DRPs. On average, the CKD patients in our study were prescribed at least 8 medications, which explains the high incidence of drug interactions in patients suffering from CKD (45.65%). The least common DRPs in our study were subtherapeutic dose (2%) and adverse drug reactions (2%). Of these DRPs, 60% were addressed successfully.
In our study, there is an association between the DRPs and the average number of medications prescribed, the average number of comorbidities, and the length of hospital stay, with p values of 0.022, 0.004, and <0.001, respectively. In the current study, 86% of the proposed interventions were accepted, 41% were implemented by the physician, and only 14% were rejected. Hence, it is evident that clinical pharmacist interventions can contribute significantly to diminishing DRPs in CKD patients, thereby decreasing the economic burden of healthcare costs and improving patients' quality of life.
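The Mann-Whitney U test used above compares ranks between two independent groups. A minimal pure-Python sketch with hypothetical medication counts (the data are invented; midranks handle tied values):

```python
# Mann-Whitney U statistic: rank the pooled samples, sum the ranks of group a,
# then subtract the minimum possible rank sum for a group of that size.

def mann_whitney_u(a, b):
    """U statistic for sample a versus sample b."""
    combined = sorted([(v, 0) for v in a] + [(v, 1) for v in b])
    n = len(combined)
    rank_of = [0.0] * n
    pos = 0
    while pos < n:                       # assign midranks to runs of ties
        j = pos
        while j + 1 < n and combined[j + 1][0] == combined[pos][0]:
            j += 1
        mid = (pos + j) / 2 + 1          # ranks are 1-based
        for k in range(pos, j + 1):
            rank_of[k] = mid
        pos = j + 1
    rank_sum_a = sum(rank_of[k] for k in range(n) if combined[k][1] == 0)
    return rank_sum_a - len(a) * (len(a) + 1) / 2

# Hypothetical numbers of prescribed medications, patients with vs. without DRPs.
with_drp = [9, 11, 8, 12, 10]
without_drp = [5, 7, 6, 8, 4]
print(mann_whitney_u(with_drp, without_drp))
```

A U close to the maximum (here n₁·n₂ = 25) indicates nearly complete separation between the groups; statistical packages then convert U to a p value.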

Keywords: chronic kidney disease, clinical pharmacist, drug related problem, intervention

Procedia PDF Downloads 127
456 Associations between Surrogate Insulin Resistance Indices and the Risk of Metabolic Syndrome in Children

Authors: Mustafa M. Donma, Orkide Donma

Abstract:

A well-defined insulin resistance (IR) is one of the requirements for a good understanding and evaluation of metabolic syndrome (MetS). However, the underlying causes of the development of IR are not clear, and endothelial dysfunction also participates in the pathogenesis of this disease. IR indices are determined in various obesity groups and also in diagnosing MetS. The components of MetS have been well established and are used in adult studies; however, there are some ambiguities, particularly in the field of pediatrics. The aims of this study were to compare the performance of fasting blood glucose (FBG), one of the MetS components, with some other IR indices and to check whether FBG may be replaced by some other parameter or ratio for a better evaluation of pediatric MetS. Five hundred and forty-nine children were involved in the study. Six groups were constituted: 109, 40, 100, 166, 110, and 24 children were included in the normal-body mass index (N-BMI), overweight (OW), obese (OB), morbidly obese (MO), MetS with two components (MetS2), and MetS with three components (MetS3) groups, respectively. Age- and sex-adjusted BMI percentiles tabulated by the World Health Organization were used for the classification of the obesity groups. MetS components were determined. Aside from one of the MetS components, FBG, eight measures of IR [homeostatic model assessment of IR (HOMA-IR), homeostatic model assessment of beta cell function (HOMA-%β), alanine transaminase-to-aspartate transaminase ratio (ALT/AST), alanine transaminase (ALT), insulin (INS), insulin-to-FBG ratio (INS/FBG), the product of fasting triglyceride and glucose (TyG) index, and the McAuley index] were evaluated. Statistical analyses were performed, and a p value less than 0.05 was accepted as statistically significant. Mean BMI values of the groups were 15.7 kg/m², 21.0 kg/m², 24.7 kg/m², 27.1 kg/m², 28.7 kg/m², and 30.4 kg/m² for N-BMI, OW, OB, MO, MetS2, and MetS3, respectively.
Differences between the groups were significant (p < 0.001); the only exception was the MetS2-MetS3 pair, in spite of an increase detected in the MetS3 group. Waist-to-hip circumference ratios differed significantly only for the N-BMI vs. OB, MO, and MetS2; OW vs. MO; and OB vs. MO and MetS2 pairs. ALT and ALT/AST did not differ significantly among MO, MetS2, and MetS3. HOMA-%β differed only between MO and MetS2. INS/FBG, the McAuley index, and TyG did not differ significantly between MetS2 and MetS3, and HOMA-IR and FBG did not differ significantly between MO and MetS2. INS was the only parameter that showed statistically significant differences between MO-MetS2, MO-MetS3, and MetS2-MetS3. In conclusion, these findings suggest that FBG, presently considered one of the five MetS components, may be replaced by INS during the evaluation of pediatric morbid obesity and MetS.
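Two of the IR indices named above have simple closed forms: HOMA-IR is commonly computed as fasting glucose (mg/dL) × fasting insulin (µU/mL) / 405, and the TyG index as ln(fasting triglycerides × fasting glucose / 2), both in mg/dL. A sketch with illustrative fasting values, not from the study cohort:

```python
import math

# Closed-form IR indices. Units assumed: glucose and triglycerides in mg/dL,
# insulin in microU/mL; the example values are invented.

def homa_ir(glucose, insulin):
    """HOMA-IR = fasting glucose (mg/dL) x fasting insulin (microU/mL) / 405."""
    return glucose * insulin / 405

def tyg(triglycerides, glucose):
    """TyG index = ln(fasting TG (mg/dL) x fasting glucose (mg/dL) / 2)."""
    return math.log(triglycerides * glucose / 2)

glu, ins, tg = 90, 18, 150   # illustrative fasting values
print(round(homa_ir(glu, ins), 2), round(tyg(tg, glu), 2))
```

Note how both indices combine FBG with another analyte, which is why the study can ask whether INS alone carries the discriminating information.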

Keywords: children, insulin resistance indices, metabolic syndrome, obesity

Procedia PDF Downloads 119
455 Characterization of Articular Cartilage Based on the Response of Cartilage Surface to Loading/Unloading

Authors: Z. Arabshahi, I. Afara, A. Oloyede, H. Moody, J. Kashani, T. Klein

Abstract:

Articular cartilage is a fluid-swollen tissue of the synovial joints that functions by providing a lubricated surface for articulation and by facilitating load transmission. The biomechanical function of this tissue is highly dependent on the integrity of its ultrastructural matrix. Any alteration of the articular cartilage matrix, whether by injury or by degenerative conditions such as osteoarthritis (OA), compromises its functional behaviour. The assessment of articular cartilage is therefore important in the early stages of the degenerative process to prevent or reduce further joint damage, with its associated socio-economic impact, and there has been increasing research interest in the functional assessment of articular cartilage. This study developed a characterization parameter for articular cartilage assessment based on the response of the cartilage surface to loading/unloading. This is because the response of articular cartilage to compressive loading is significantly depth-dependent: the superficial zone and the underlying matrix respond differently to deformation. In addition, the alteration of the cartilage matrix in the early stages of degeneration is often characterized by proteoglycan (PG) loss in the superficial layer. In this study, it is hypothesized that the response of the superficial layer is different in normal and PG-depleted tissue. To establish the viability of this hypothesis, samples of visually intact and artificially PG-depleted bovine cartilage were subjected to compression at a constant rate to 30% strain using a ring-shaped indenter with an integrated ultrasound probe and then unloaded. The response of the articular surface, which was indirectly loaded, was monitored using ultrasound during loading/unloading (deformation/recovery). It was observed that the rate of the cartilage surface response to loading/unloading differed between normal and PG-depleted cartilage samples.
Principal Component Analysis was performed to assess the capability of the cartilage surface response to loading/unloading to distinguish between normal and artificially degenerated cartilage samples. The classification analysis of this parameter showed an overlap between normal and degenerated samples during loading, while there was a clear distinction between them during unloading. This study showed that the cartilage surface response to loading/unloading has the potential to be used as a parameter for cartilage assessment.
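The PCA step can be illustrated with a toy two-feature version: compute the leading eigenvector of the 2×2 covariance matrix and project samples onto it. The (loading-rate, unloading-rate) pairs below are invented to mimic the reported pattern, in which the unloading response separates the groups.

```python
import math

# Toy 2-D PCA: leading eigenvector of the sample covariance matrix, then
# projection scores. Data are hypothetical, not measurements from the study.

def pca_2d(points):
    """Return (first principal axis as a unit vector, data centre)."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / (n - 1)
    syy = sum((p[1] - my) ** 2 for p in points) / (n - 1)
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / (n - 1)
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    lam = tr / 2 + math.sqrt(tr * tr / 4 - det)   # largest eigenvalue
    vx, vy = sxy, lam - sxx                       # eigenvector (non-degenerate case)
    norm = math.hypot(vx, vy)
    return (vx / norm, vy / norm), (mx, my)

def score(p, axis, centre):
    """Projection of a sample onto the first principal component."""
    return (p[0] - centre[0]) * axis[0] + (p[1] - centre[1]) * axis[1]

# Hypothetical (loading rate, unloading rate) responses per sample.
normal = [(1.0, 2.0), (1.1, 2.1), (0.9, 1.9)]
depleted = [(1.0, 1.0), (1.1, 1.1), (0.9, 0.9)]
axis, centre = pca_2d(normal + depleted)
s_norm = [score(p, axis, centre) for p in normal]
s_depl = [score(p, axis, centre) for p in depleted]
print(min(s_norm) > max(s_depl) or max(s_norm) < min(s_depl))
```

Because the unloading coordinate carries all the between-group variance in this toy data, the first principal component aligns with it and the projected scores separate the groups cleanly.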

Keywords: cartilage integrity parameter, cartilage deformation/recovery, cartilage functional assessment, ultrasound

Procedia PDF Downloads 191
454 Blue Finance: A Systematical Review of the Academic Literature on Investment Streams for Marine Conservation

Authors: David Broussard

Abstract:

This review article delves into the realm of marine conservation finance, addressing the inadequacies in current financial streams from the private sector and the underutilization of existing financing mechanisms. The study emphasizes the emerging field of “blue finance”, which contributes to economic growth, improved livelihoods, and marine ecosystem health. The financial burden of marine conservation projects typically falls on philanthropists and governments, contrary to the polluter-pays principle. However, the private sector’s increasing commitment to NetZero and growing environmental and social responsibility goals prompts the need for alternative funding sources for marine conservation initiatives like marine protected areas. The article explores the potential of utilizing several financing mechanisms like carbon credits and other forms of payment for ecosystem services in the marine context, providing a solution to the lack of private funding for marine conservation. The methodology employed involves a systematic and quantitative approach, combining traditional review methods and elements of meta-analysis. A comprehensive search of the years 2000 - 2023, using relevant keywords on the Scopus platform, resulted in a review of 252 articles. The temporal evolution of blue finance studies reveals a significant increase in annual articles from 2010 to 2022, with notable peaks in 2011 and 2022. Marine Policy, Ecosystem Services, and Frontiers in Marine Science are prominent journals in this field. While the majority of articles focus on payment for ecosystem services, there is a growing awareness of the need for holistic approaches in conservation finance. Utilizing bibliometric techniques, the article showcases the dominant share of payment for ecosystem services in the literature with a focus on blue carbon. 
The classification of articles based on various criteria, including financing mechanisms and conservation types, aids in categorizing and understanding the diversity of research objectives and perspectives in this complex field of marine conservation finance.

Keywords: biodiversity offsets, carbon credits, ecosystem services, impact investment, payment for ecosystem services

Procedia PDF Downloads 82
453 Efficacy and Safety of Eucalyptus for Relief Cough Symptom: A Systematic Review and Meta-Analysis

Authors: Ladda Her, Juntip Kanjanasilp, Ratree Sawangjit, Nathorn Chaiyakunapruk

Abstract:

Cough is a common symptom of respiratory tract infections and non-infectious respiratory conditions; the duration of cough indicates the classification and severity of the disease. Herbal medicines can be used as alternatives to drugs for the relief of cough symptoms in acute and chronic disease. Eucalyptus has been used for reducing cough, with evidence suggesting that it has an active role in the reduction of airway inflammation. The present study aims to evaluate the efficacy and safety of eucalyptus for relief of cough symptoms in respiratory disease. Method: The Cochrane Library, MEDLINE (PubMed), Scopus, CINAHL, Springer, ScienceDirect, ProQuest, and THAILIS databases were searched from inception until 01/02/2019 for randomized controlled trials on the efficacy and safety of eucalyptus for reducing cough. Methodological quality was evaluated using the Cochrane risk of bias tool; two reviewers in our team screened eligibility and extracted data. Result: Six studies were included in the review and five in the meta-analysis, covering 1,911 participants in one study in children and five studies in adults; participants were between 1 and 80 years old. Eucalyptus was used as a single herb (n = 2) and in combination with other herbs (n = 4). All of the eucalyptus studies compared efficacy and safety with placebo or standard treatment; the dosage forms studied included capsules, spray, and syrup. A random-effects model was used, and heterogeneity was low (I² = 1.2%, χ² = 1.01; p = 0.314). Eucalyptus showed a statistically significant reduction in cough symptoms (n = 402, RR: 1.40, 95% CI [1.19, 1.65], p < 0.0001) when compared with placebo. The reported adverse events (AEs) were of mild to moderate intensity, mostly gastrointestinal symptoms. The methodological quality of the included trials was overall poor.
Conclusion: Eucalyptus appears to be beneficial and safe for relieving cough frequency in respiratory diseases, whether as monotherapy or in combination with other herbs. However, the evidence remains inconclusive due to the limited quality of the trials, and well-designed trials evaluating the effectiveness of eucalyptus for reducing cough symptoms in humans are needed.
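The pooling behind such a meta-analytic risk ratio can be sketched with inverse-variance weighting of log risk ratios, together with Cochran's Q and I² for heterogeneity. The study counts below are made up for illustration and do not reproduce the review's data.

```python
import math

# Fixed-effect inverse-variance pooling of log risk ratios, with Cochran's Q
# and I² for heterogeneity. Each study is (events_t, n_t, events_c, n_c).

def pooled_rr(studies):
    logs, weights = [], []
    for et, nt, ec, nc in studies:
        rr = (et / nt) / (ec / nc)
        var = 1 / et - 1 / nt + 1 / ec - 1 / nc   # variance of log RR
        logs.append(math.log(rr))
        weights.append(1 / var)
    w_sum = sum(weights)
    pooled = sum(w * y for w, y in zip(weights, logs)) / w_sum
    se = math.sqrt(1 / w_sum)
    q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, logs))
    df = len(studies) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    ci = (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se))
    return math.exp(pooled), ci, i2

# Made-up counts (improved / total, treatment vs. placebo) for three trials.
studies = [(30, 50, 20, 50), (45, 70, 32, 70), (25, 40, 18, 40)]
rr, ci, i2 = pooled_rr(studies)
print(round(rr, 2), round(ci[0], 2), round(ci[1], 2))
```

When heterogeneity is negligible (I² near zero, as in this toy example and as reported in the review), the random-effects estimate collapses to this fixed-effect result.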

Keywords: cough, eucalyptus, cineole, herbal medicine, systematic review, meta-analysis

Procedia PDF Downloads 151
452 Recommendations for Data Quality Filtering of Opportunistic Species Occurrence Data

Authors: Camille Van Eupen, Dirk Maes, Marc Herremans, Kristijn R. R. Swinnen, Ben Somers, Stijn Luca

Abstract:

In ecology, species distribution models are commonly implemented to study species-environment relationships. These models increasingly rely on opportunistic citizen science data when high-quality species records collected through standardized recording protocols are unavailable. While these opportunistic data are abundant, uncertainty is usually high, e.g., due to observer effects or a lack of metadata. Data quality filtering is often used to reduce these types of uncertainty in an attempt to increase the value of studies relying on opportunistic data. However, filtering should not be performed blindly. In this study, recommendations are built for data quality filtering of opportunistic species occurrence data that are used as input for species distribution models. Using an extensive database of 5.7 million citizen science records from 255 species in Flanders, the impact on model performance was quantified by applying three data quality filters, and these results were linked to species traits. More specifically, presence records were filtered based on record attributes that provide information on the observation process or post-entry data validation, and changes in the area under the receiver operating characteristic (AUC), sensitivity, and specificity were analyzed using the Maxent algorithm with and without filtering. Controlling for sample size enabled us to study the combined impact of data quality filtering, i.e., the simultaneous impact of an increase in data quality and a decrease in sample size. Further, the variation among species in their response to data quality filtering was explored by clustering species based on four traits often related to data quality: commonness, popularity, difficulty, and body size. 
Findings show that model performance is affected by i) the quality of the filtered data, ii) the proportional reduction in sample size caused by filtering and the remaining absolute sample size, and iii) a species ‘quality profile’, resulting from a species classification based on the four traits related to data quality. The findings resulted in recommendations on when and how to filter volunteer-generated and opportunistically collected data. This study confirms that correctly processed citizen science data can make a valuable contribution to ecological research and species conservation.
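The performance metrics used above (AUC, sensitivity, specificity) can be computed directly from presence and background suitability scores; a minimal sketch with invented scores, where AUC is taken as the rank-based probability that a presence outscores a background point:

```python
# Discrimination metrics for a presence/background species distribution model.

def auc(scores_pos, scores_neg):
    """Probability a presence record scores above a background record
    (ties count half) -- the area under the ROC curve."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

def sens_spec(scores_pos, scores_neg, threshold):
    """Sensitivity and specificity at a given suitability threshold."""
    sens = sum(s >= threshold for s in scores_pos) / len(scores_pos)
    spec = sum(s < threshold for s in scores_neg) / len(scores_neg)
    return sens, spec

# Illustrative habitat-suitability scores for presences vs. background points.
presence = [0.9, 0.8, 0.7, 0.6, 0.55]
background = [0.5, 0.4, 0.6, 0.3, 0.2]
print(round(auc(presence, background), 2), sens_spec(presence, background, 0.5))
```

Comparing these metrics before and after a filter, at matched sample sizes, is exactly the kind of contrast the study performs with Maxent outputs.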

Keywords: citizen science, data quality filtering, species distribution models, trait profiles

Procedia PDF Downloads 200
451 Measuring the Resilience of e-Governments Using an Ontology

Authors: Onyekachi Onwudike, Russell Lock, Iain Phillips

Abstract:

The variability that exists across governments, their departments, and the provisioning of services has been an area of concern in the E-Government domain. There is a need for reuse and integration across government departments, which is accompanied by varying degrees of risks and threats, as well as a need for assessment, prevention, preparation, response, and recovery when dealing with these risks or threats. The ability of a government to cope with the emerging changes that occur within it is known as resilience. In order to forge ahead with concerted efforts to manage the risks and threats induced by reuse and integration, the ambiguities contained within resilience must be addressed. Enhancing resilience in the E-Government domain is synonymous with reducing the risks governments face in provisioning services as well as in reusing components across departments; it can therefore be said that resilience is responsible for reducing a government's vulnerability to change. In this paper, we present the use of an ontology to measure the resilience of governments. This ontology is made up of a well-defined construct for the taxonomy of resilience. A specific class known as 'Resilience Requirements' is added to the ontology; this class embeds the concept of resilience into the E-Government domain ontology. Considering that the E-Government domain is a highly complex one, made up of different departments offering different services, the reliability and resilience of the domain have become more complex and critical to understand. We present questions, which can be asked as queries, that can help a government assess how prepared it is in the face of risks and what steps can be taken to recover from them. The ontology includes a case study section that is used to explore ways in which government departments can become resilient to the different kinds of risks and threats they may face.
A collection of resilience tools and resources have been developed in our ontology to encourage governments to take steps to prepare for emergencies and risks that a government may face with the integration of departments and reuse of components across government departments. To achieve this, the ontology has been extended by rules. We present two tools for understanding resilience in the E-Government domain as a risk analysis target and the output of these tools when applied to resilience in the E-Government domain. We introduce the classification of resilience using the defined taxonomy and modelling of existent relationships based on the defined taxonomy. The ontology is constructed on formal theory and it provides a semantic reference framework for the concept of resilience. Key terms which fall under the purview of resilience with respect to E-Governments are defined. Terms are made explicit and the relationships that exist between risks and resilience are made explicit. The overall aim of the ontology is to use it within standards that would be followed by all governments for government-based resilience measures.

Keywords: E-Government, Ontology, Relationships, Resilience, Risks, Threats

Procedia PDF Downloads 337
450 Content-Aware Image Augmentation for Medical Imaging Applications

Authors: Filip Rusak, Yulia Arzhaeva, Dadong Wang

Abstract:

Machine learning based Computer-Aided Diagnosis (CAD) is gaining much popularity in medical imaging and diagnostic radiology. However, it requires a large amount of high-quality, labeled training image datasets. The training images may come from different sources and be acquired on different radiography machines produced by different manufacturers, or be digital or digitized copies of film radiographs, with various sizes as well as different pixel intensity distributions. In this paper, a content-aware image augmentation method is presented to deal with these variations. The results of the proposed method have been validated graphically by plotting the removed and added seams of pixels on original images. Two different chest X-ray (CXR) datasets are used in the experiments. The CXRs in the datasets differ in size; some are digital CXRs while the others are digitized from analog CXR films. With the proposed content-aware augmentation method, the Seam Carving algorithm is employed to resize CXRs and the corresponding labels in the form of image masks, followed by histogram matching used to normalize the pixel intensities of the digital radiographs based on the pixel intensity values of the digitized radiographs. We implemented the algorithms, resized the well-known Montgomery dataset to the size of the most frequently used Japanese Society of Radiological Technology (JSRT) dataset, and normalized our digital CXRs for testing. This work resulted in a unified off-the-shelf CXR dataset composed of radiographs included in both the Montgomery and JSRT datasets. The experimental results show that even though the amount of augmentation is large, our algorithm can preserve the important information in lung fields, local structures, and the global visual effect adequately.
The proposed method can be used to augment training and testing image datasets so that the trained machine learning model can process CXRs from various sources, and it can potentially be used broadly in medical imaging applications.
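The histogram matching step described here is a standard rank-based intensity remapping, which can be sketched in a few lines of NumPy. The toy arrays below are illustrative stand-ins, not the Montgomery or JSRT data, and the implementation is a generic sketch rather than the authors' exact pipeline:

```python
import numpy as np

def match_histograms(source, reference):
    """Remap source intensities so their distribution follows the reference image."""
    src_values, src_idx, src_counts = np.unique(
        source.ravel(), return_inverse=True, return_counts=True)
    ref_values, ref_counts = np.unique(reference.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_counts) / source.size      # empirical CDF of source
    ref_cdf = np.cumsum(ref_counts) / reference.size   # empirical CDF of reference
    # map each source quantile onto the reference intensity at the same quantile
    matched = np.interp(src_cdf, ref_cdf, ref_values)
    return matched[src_idx].reshape(source.shape)

# toy 2x2 "digital" image normalized against a toy "digitized" reference
digital = np.array([[0.0, 50.0], [100.0, 150.0]])
digitized = np.array([[10.0, 20.0], [30.0, 40.0]])
print(match_histograms(digital, digitized))
```

Because both toy images have uniform histograms, the matched output simply takes on the reference intensities in rank order; on real radiographs the interpolation spreads source quantiles across the reference distribution.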

Keywords: computer-aided diagnosis, image augmentation, lung segmentation, medical imaging, seam carving

Procedia PDF Downloads 219
449 Long-Term Resilience Performance Assessment of Dual and Singular Water Distribution Infrastructures Using a Complex Systems Approach

Authors: Kambiz Rasoulkhani, Jeanne Cole, Sybil Sharvelle, Ali Mostafavi

Abstract:

Dual water distribution systems have been proposed as solutions to enhance the sustainability and resilience of urban water systems by improving performance and decreasing energy consumption. The objective of this study was to evaluate the long-term resilience and robustness of dual versus singular water distribution systems under various stressors such as demand fluctuation, aging infrastructure, and funding constraints. To this end, the long-term dynamics of these infrastructure systems were captured using a simulation model that integrates institutional agency decision-making processes with physical infrastructure degradation to evaluate the long-term transformation of water infrastructure. A set of model parameters that vary between dual and singular distribution infrastructure based on system attributes, such as pipe length and material, energy intensity, water demand, water price, average pressure and flow rate, as well as operational expenditures, were considered and input into the simulation model. Accordingly, the model was used to simulate various scenarios of demand changes, funding levels, water price growth, and renewal strategies. The long-term resilience and robustness of each distribution infrastructure were evaluated based on various performance measures, including network average condition, break frequency, network leakage, and energy use. An ecologically-based resilience approach was used to examine regime shifts and tipping points in the long-term performance of the systems under different stressors. Also, Classification and Regression Tree analysis was adopted to assess the robustness of each system under various scenarios. Using data from the City of Fort Collins, the long-term resilience and robustness of the dual and singular water distribution systems were evaluated over a 100-year analysis horizon for various scenarios.
The results of the analysis enabled: (i) a comparison between dual and singular water distribution systems in terms of long-term performance, resilience, and robustness; and (ii) the identification of renewal strategies and decision factors that enhance the long-term resilience and robustness of dual and singular water distribution systems under different stressors.
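The Classification and Regression Tree step can be illustrated with a minimal sketch. The scenario variables, thresholds, and robustness rule below are hypothetical stand-ins, not the Fort Collins simulation outputs; the point is only how a tree partitions scenario space into robust and non-robust regions:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 500
# hypothetical scenario inputs (names and ranges are illustrative)
demand_growth = rng.uniform(0.0, 0.03, n)   # annual demand change
funding_level = rng.uniform(0.5, 1.5, n)    # renewal funding multiplier
price_growth = rng.uniform(0.0, 0.05, n)    # annual water price growth
X = np.column_stack([demand_growth, funding_level, price_growth])

# toy robustness rule: system stays robust when funding is high and demand growth low
robust = ((funding_level > 1.0) & (demand_growth < 0.02)).astype(int)

# CART recovers the scenario thresholds that separate robust from non-robust runs
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, robust)
print(tree.score(X, robust))
```

Inspecting the fitted tree's split thresholds (e.g. via `sklearn.tree.export_text`) then identifies the decision factors, analogous to the study's robustness assessment.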

Keywords: complex systems, dual water distribution systems, long-term resilience performance, multi-agent modeling, sustainable and resilient water systems

Procedia PDF Downloads 291
448 Determination of Genotypic Relationship among 12 Sugarcane (Saccharum officinarum) Varieties

Authors: Faith Eweluegim Enahoro-Ofagbe, Alika Eke Joseph

Abstract:

Information on genetic variation within a population is crucial for exploiting heterozygosity in breeding programs that aim to improve crop species. The study was conducted to ascertain the genotypic similarities among twelve sugarcane (Saccharum officinarum) varieties in order to group them for hybridization aimed at cane yield improvement. The experiment was conducted at the University of Benin, Faculty of Agriculture Teaching and Research Farm, Benin City. Twelve sugarcane varieties obtained from the National Cereals Research Institute, Badeggi, Niger State, Nigeria, were planted in three replications in a randomized complete block design. Each variety was planted on a five-row plot of 5.0 m in length. Data were collected on 12 agronomic traits, including the number of millable canes, cane girth, internode length, number of male and female flowers (fuss), days to flag leaf, days to flowering, brix%, cane yield, and others. According to the findings, there were significant differences among the twelve genotypes for the number of days to flag leaf, the number of male and female flowers (fuss), and cane yield. The relationship between the twelve sugarcane varieties was expressed using hierarchical cluster analysis. The twelve genotypes were grouped into three major clusters based on hierarchical classification. Cluster I had five genotypes, cluster II had four, and cluster III had three. Cluster III was dominated by varieties characterized by higher cane yield, number of leaves, internode length, brix%, number of millable stalks, stalks per stool, cane girth, and cane length. Cluster II contained genotypes with early-maturity characteristics, such as early flowering, early flag leaf development, growth rate, and the number of female and male flowers (fuss). The maximum inter-cluster distance, between clusters III and I, indicated higher genetic diversity between those two groups.
Hybridization between the two groups could result in transgressive recombinants for agronomically important traits.
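The hierarchical classification used here can be sketched with SciPy's agglomerative clustering. The trait matrix below is synthetic, built only to mimic three well-separated groups of 5, 4, and 3 varieties; it is not the measured sugarcane data, and Ward linkage is one common choice of method rather than necessarily the one used in the study:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
# hypothetical standardized trait scores for 12 varieties (4 traits each);
# the three blocks stand in for the three reported clusters
traits = np.vstack([
    rng.normal(0.0, 0.2, (5, 4)),
    rng.normal(2.0, 0.2, (4, 4)),
    rng.normal(4.0, 0.2, (3, 4)),
])

Z = linkage(traits, method="ward")                 # agglomerative clustering
clusters = fcluster(Z, t=3, criterion="maxclust")  # cut dendrogram into 3 groups
print(clusters)
```

Cutting the dendrogram at three clusters recovers the three blocks; inter-cluster distances in `Z` then quantify the genetic divergence between groups, as in the abstract's cluster I vs. III comparison.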

Keywords: sugarcane, Saccharum officinarum, genotype, cluster analysis, principal components analysis

Procedia PDF Downloads 80
447 Smart Defect Detection in XLPE Cables Using Convolutional Neural Networks

Authors: Tesfaye Mengistu

Abstract:

Power cables play a crucial role in the transmission and distribution of electrical energy. As electricity generation, transmission, distribution, and storage systems become smarter, there is a growing emphasis on incorporating intelligent approaches to ensure the reliability of power cables. Various types of electrical cables are employed for transmitting and distributing electrical energy, with cross-linked polyethylene (XLPE) cables being widely utilized due to their exceptional electrical and mechanical properties. However, insulation defects can occur in XLPE cables due to subpar manufacturing techniques during production and cable joint installation. To address this issue, experts have proposed different methods for monitoring XLPE cables. Some suggest the use of interdigital capacitive (IDC) technology for online monitoring, while others propose employing continuous wave (CW) terahertz (THz) imaging systems to detect internal defects in XLPE plates used for power cable insulation. In this study, we have developed models that employ a custom, locally collected dataset to classify the physical safety status of individual power cables. Our models aim to replace physical inspections with computer vision and image processing techniques that distinguish defective power cables from non-defective ones. The implementation of our project utilized the Python programming language along with the TensorFlow package and a convolutional neural network (CNN). The CNN-based algorithm was specifically chosen for power cable defect classification. The results of our project demonstrate the effectiveness of CNNs in accurately classifying power cable defects. We recommend using similar or additional datasets to further enhance and refine our models. Additionally, we believe that our models could be used to develop methodologies for detecting power cable defects from live video feeds.
We firmly believe that our work makes a significant contribution to the field of power cable inspection and maintenance. Our models offer a more efficient and cost-effective approach to detecting power cable defects, thereby improving the reliability and safety of power grids.
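The core operation a CNN applies to cable images, a learned convolution followed by a nonlinearity, can be made concrete with a minimal NumPy sketch. The toy "cable surface" image, the hand-set edge kernel, and the defect notch below are all illustrative; the authors' actual TensorFlow network learns its kernels from data:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D cross-correlation, the core operation of a CNN layer."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# toy "cable surface": uniform insulation with a small dark defect-like notch
img = np.ones((8, 8))
img[3:5, 4] = 0.0

edge_kernel = np.array([[1.0, -1.0]])          # horizontal intensity-change detector
fmap = np.maximum(conv2d(img, edge_kernel), 0.0)  # ReLU activation
print(fmap.max())
```

The feature map responds only where the intensity changes, i.e. at the defect boundary; a trained CNN stacks many such learned filters and feeds the resulting maps into a classifier head.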

Keywords: artificial intelligence, computer vision, defect detection, convolutional neural net

Procedia PDF Downloads 111
446 Analysis on the Feasibility of Landsat 8 Imagery for Water Quality Parameters Assessment in an Oligotrophic Mediterranean Lake

Authors: V. Markogianni, D. Kalivas, G. Petropoulos, E. Dimitriou

Abstract:

Lake water quality monitoring, in combination with the use of earth observation products, constitutes a major component of many water quality monitoring programs. Landsat 8 images of Trichonis Lake (Greece) acquired on 30/10/2013 and 30/08/2014 were used in order to explore the ability of Landsat 8 to estimate water quality parameters, particularly CDOM absorption at specific wavelengths and chlorophyll-a and nutrient concentrations, in this oligotrophic freshwater body, characterized by negligible quantitative, temporal and spatial variability. Water samples were collected at 22 different stations in late August 2014, and the satellite image of the same date was used to statistically correlate the in-situ measurements with various combinations of Landsat 8 bands, in order to develop algorithms that best describe those relationships and calculate the aforementioned water quality components accurately. The optimal models were applied to the image of late October 2013, and the results were validated through comparison with the respective available in-situ data of 2013. Initial results indicated the limited ability of the Landsat 8 sensor to accurately estimate water quality components in an oligotrophic waterbody. As shown by the validation process, ammonium concentration proved to be the most accurately estimated component (R = 0.7), followed by chl-a concentration (R = 0.5) and CDOM absorption at 420 nm (R = 0.3). In-situ nitrate, nitrite, phosphate and total nitrogen concentrations of 2014 were below the detection limit of the instrument used; hence, no statistical elaboration was conducted. On the other hand, multiple linear regression between reflectance measures and total phosphorus concentrations resulted in low and statistically insignificant correlations.
Our results were consistent with other studies in the international literature, indicating that estimations for eutrophic and mesotrophic lakes are more accurate than for oligotrophic ones, owing to the lack of suspended particles detectable by satellite sensors. Nevertheless, although the predictive models developed and applied to the oligotrophic Trichonis Lake are less accurate, they may still be useful indicators of its water quality deterioration.
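The band-combination regression approach described above amounts to an ordinary least-squares fit of in-situ concentrations against band reflectances or ratios. The sketch below uses synthetic reflectances and chlorophyll-a values, not the Trichonis Lake measurements, and the band ratio and coefficients are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 22  # same number of stations as the field campaign, values synthetic

b3 = rng.uniform(0.02, 0.10, n)   # hypothetical green-band reflectance
b4 = rng.uniform(0.01, 0.08, n)   # hypothetical red-band reflectance
# synthetic chl-a generated from an assumed linear relation plus noise
chla = 1.5 + 20.0 * (b4 / b3) + rng.normal(0.0, 0.05, n)

X = np.column_stack([np.ones(n), b4 / b3])   # intercept + band-ratio predictor
coef, *_ = np.linalg.lstsq(X, chla, rcond=None)
pred = X @ coef
r = np.corrcoef(pred, chla)[0, 1]            # validation statistic, as in the study
print(round(float(r), 2))
```

In practice the fitted model would then be applied pixel-wise to a second image date and validated against that date's in-situ data, mirroring the 2014-fit, 2013-validation design of the study.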

Keywords: Landsat 8, oligotrophic lake, remote sensing, water quality

Procedia PDF Downloads 395
445 Estimation of Forces Applied to Forearm Using EMG Signal Features to Control of Powered Human Arm Prostheses

Authors: Faruk Ortes, Derya Karabulut, Yunus Ziya Arslan

Abstract:

According to recent experimental research, myoelectric features gathered from the musculature are preferentially considered for perceiving muscle activation and controlling human arm prostheses. EMG (electromyography) signal based human arm prostheses have shown promising performance in recent years in terms of providing the basic functional requirements of motion for amputated people. However, these assistive devices for neurorehabilitation still have important limitations in enabling amputated people to perform sophisticated or functional movements. The surface electromyogram (EMG) is used as the control signal to command such devices. This kind of control consists of activating a motion in the prosthetic arm using the muscle activation for that same particular motion. The extraction of clear and certain neural information from EMG signals plays a major role, especially in the fine control of hand prosthesis movements. Many signal processing methods have been utilized for feature extraction from EMG signals. The specific objective of this study was to compare widely used time-domain features of the EMG signal, including integrated EMG (IEMG), root mean square (RMS) and waveform length (WL), for the prediction of forces externally applied to human hands. The obtained features were classified using artificial neural networks (ANN) to predict the forces. The EMG signals supplied to the process were recorded during isometric and isotonic muscle contractions. Experiments were performed by three healthy right-handed subjects aged 25-35 years. EMG signals were collected from muscles of the proximal part of the upper body: biceps brachii, triceps brachii, pectoralis major and trapezius. The force prediction results obtained from the ANN were statistically analyzed, and the merits and pitfalls of the extracted features were discussed in detail.
The obtained results are anticipated to contribute to the classification of EMG signals and to the motion control of powered human arm prostheses.
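The three time-domain features compared in the study have standard definitions, which a short sketch can make concrete. The sample signal below is illustrative; in practice these features are computed over sliding windows of the recorded EMG:

```python
import numpy as np

def iemg(x):
    """Integrated EMG: sum of absolute signal amplitudes over the window."""
    return np.sum(np.abs(x))

def rms(x):
    """Root mean square: overall amplitude of the window."""
    return np.sqrt(np.mean(x ** 2))

def waveform_length(x):
    """Waveform length: cumulative variation of the signal over the window."""
    return np.sum(np.abs(np.diff(x)))

# toy 4-sample EMG window
x = np.array([0.0, 1.0, -1.0, 0.5])
print(iemg(x), rms(x), waveform_length(x))  # → 2.5 0.75 4.5
```

Feature vectors built this way, one value per feature per channel and window, would then be fed to the ANN for force prediction as described above.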

Keywords: assistive devices for neurorehabilitation, electromyography, feature extraction, force estimation, human arm prosthesis

Procedia PDF Downloads 366