Search results for: adaptive algorithms

297 Barriers for Appropriate Palliative Symptom Management: A Qualitative Research in Kazakhstan, a Medium-Income Transitional-Economy Country

Authors: Ibragim Issabekov, Byron Crape, Lyazzat Toleubekova

Abstract:

Background: Palliative care substantially improves the quality of life of terminally-ill patients. Symptom control is one of the keystones in the management of patients in palliative care settings, lowering distress as well as improving the quality of life of patients with end-stage diseases. The most common symptoms causing significant distress for patients are pain, nausea and vomiting, increased respiratory secretions, and mental health issues like depression. The aims are: 1. to identify best practices in symptom management for palliative patients in accordance with internationally approved guidelines, compare these with actual practices in Kazakhstan, and evaluate the criteria for assessing symptoms in terminally-ill patients; 2. to review the availability and utilization of pharmaceutical agents for pain control and for the management of excessive respiratory secretions, nausea and vomiting, and delirium; and 3. to develop recommendations for a systematic approach to end-of-life symptom management in Kazakhstan. Methods: Qualitative research methods were combined with a systematic literature review to provide a rigorous research process for evaluating current approaches to symptom management of palliative patients in Kazakhstan. Qualitative methods included in-depth semi-structured interviews with the healthcare professionals involved in palliative care provision. Results: Obstacles were found in the appropriate provision of palliative care. Inadequate education and training to manage severe symptoms, poorly defined laws and regulations for palliative care provision, and a lack of algorithms and guidelines for care were major barriers to the effective provision of palliative care. Conclusion: Assessment of palliative care in this medium-income transitional-economy country is one of the first steps toward integrating palliative care into the existing health system. Achieving this requires identifying obstacles and resolving these issues.

Keywords: end-of-life care, middle income country, palliative care, symptom control

Procedia PDF Downloads 174
296 Constructing a Semi-Supervised Model for Network Intrusion Detection

Authors: Tigabu Dagne Akal

Abstract:

While advances in computer and communications technology have made the network ubiquitous, they have also rendered networked systems vulnerable to malicious attacks devised from a distance. These attacks or intrusions start with attackers infiltrating a network through a vulnerable host and then launching further attacks on the local network or Intranet. Nowadays, system administrators and network professionals can attempt to prevent such attacks by developing intrusion detection tools and systems using data mining technology. In this study, the experiments were conducted following the Knowledge Discovery in Databases (KDD) process model, which starts from the selection of the datasets. The dataset used in this study was taken from the Massachusetts Institute of Technology Lincoln Laboratory. After collection, the data were pre-processed. The major pre-processing activities included filling in missing values, removing outliers, resolving inconsistencies, integrating data containing both labelled and unlabelled records, dimensionality reduction, size reduction, and data transformation such as discretization. A total of 21,533 intrusion records were used for training the models. For validating the performance of the selected model, a separate set of 3,397 records was used for testing. For building a predictive model for intrusion detection, the J48 decision tree and Naïve Bayes algorithms were tested as classification approaches, both with and without feature selection. The model created using 10-fold cross-validation with the J48 decision tree algorithm and default parameter values showed the best classification accuracy. The model has a prediction accuracy of 96.11% on the training dataset and 93.2% on the test dataset in classifying new instances as normal, DOS, U2R, R2L, and probe classes. The findings of this study show that data mining methods generate interesting rules that are crucial for intrusion detection and prevention in the networking industry. Future research directions are suggested toward an applicable system in the area of the study.
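
To make the classification setup above concrete, the following minimal Python sketch runs 10-fold cross-validation of a decision tree and a Naive Bayes classifier over a labelled intrusion dataset. It is an illustration only: the original work used Weka's J48 (a C4.5 implementation), whereas scikit-learn's DecisionTreeClassifier is a CART-style tree, and the file name and column layout here are hypothetical placeholders.

import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

# Hypothetical pre-processed training set with numeric features and a class label
df = pd.read_csv("intrusion_training.csv")
X = df.drop(columns=["label"])           # features after discretization/reduction
y = df["label"]                          # normal, DOS, U2R, R2L, probe

for name, clf in [("decision tree", DecisionTreeClassifier(random_state=0)),
                  ("naive Bayes", GaussianNB())]:
    scores = cross_val_score(clf, X, y, cv=10)   # 10-fold cross-validation
    print(f"{name}: mean accuracy = {scores.mean():.3f}")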

Keywords: intrusion detection, data mining, computer science

Procedia PDF Downloads 270
295 Alternative Approach to the Machine Vision System Operating for Solving Industrial Control Issue

Authors: M. S. Nikitenko, S. A. Kizilov, D. Y. Khudonogov

Abstract:

The paper considers an approach to a machine vision operating system combined with the use of a grid of light markers. This approach is used to solve several scientific and technical problems, such as measuring the capability of an apron feeder delivering coal from a lining return port to a conveyor in the technology of mining high coal with release to a conveyor, and prototyping an autonomous vehicle obstacle detection system. Primary verification of a method for calculating bulk material volume using three-dimensional modelling, together with validation in laboratory conditions and calculation of relative errors, was carried out. A method for calculating the capability of an apron feeder based on a machine vision system, with a simplified three-dimensional model of the examined measuring area, was offered. The proposed method allows measuring the volume of rock mass moved by an apron feeder using machine vision. This approach solves the issue of controlling the volume of coal produced by a feeder while working off high coal by lava complexes with release to a conveyor, with accuracy sufficient for practical application. The developed mathematical apparatus for measuring feeder productivity in kg/s uses only basic mathematical functions such as addition, subtraction, multiplication, and division. This simplifies software development and expands the variety of microcontrollers and microcomputers suitable for calculating feeder capability. A feature of the obstacle detection task is that obstacles distort the projected laser grid, which simplifies their detection. The paper presents algorithms for video camera image processing and for control of an autonomous vehicle model based on an obstacle detection machine vision system. A sample fragment of obstacle detection at the moment of laser grid distortion is demonstrated.
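
As a rough illustration of the basic-arithmetic claim above, the sketch below computes feeder capacity in kg/s from per-cell material heights recovered from the light-marker grid. The cell size, bulk density, and frame period are hypothetical values, and the formulation (sum of column volumes times density divided by the frame interval) is an assumed simplification, not the authors' exact algorithm.

def feeder_capacity_kg_per_s(cell_heights_m, cell_area_m2, bulk_density_kg_m3, frame_period_s):
    # Sum of column volumes over the measuring grid, using only +, -, *, /
    volume_m3 = 0.0
    for h in cell_heights_m:
        volume_m3 = volume_m3 + h * cell_area_m2
    return volume_m3 * bulk_density_kg_m3 / frame_period_s

# Example: 0.1 m x 0.1 m grid cells, coal bulk density ~900 kg/m3, 0.5 s between frames
print(feeder_capacity_kg_per_s([0.12, 0.08, 0.15, 0.10], 0.01, 900.0, 0.5))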

Keywords: machine vision, machine vision operating system, light markers, measuring capability, obstacle detection system, autonomous transport

Procedia PDF Downloads 84
294 Kinematic Analysis of the Calf Raise Test Using a Mobile iOS Application: Validation of the Calf Raise Application

Authors: Ma. Roxanne Fernandez, Josie Athens, Balsalobre-Fernandez, Masayoshi Kubo, Kim Hébert-Losier

Abstract:

Objectives: The calf raise test (CRT) is used in rehabilitation and sports medicine to evaluate calf muscle function. For testing, individuals stand on one leg and go up on their toes and back down to volitional fatigue. The newly developed Calf Raise application (CRapp) for iOS uses computer-vision algorithms enabling objective measurement of CRT outcomes. We aimed to validate the CRapp by examining its concurrent validity and agreement levels against laboratory-based equipment and establishing its intra- and inter-rater reliability. Methods: CRT outcomes (i.e., repetitions, positive work, total height, peak height, fatigue index, and peak power) were assessed in thirteen healthy individuals (6 males, 7 females) on three occasions and both legs using the CRapp, 3D motion capture, and force plate technologies simultaneously. Data were extracted from two markers: one placed immediately below the lateral malleolus and another on the heel. Concurrent validity and agreement measures were determined using intraclass correlation coefficients (ICC₃,ₖ), typical errors expressed as coefficients of variation (CV), and Bland-Altman methods to assess biases and precision. Reliability was assessed using ICC₃,₁ and CV values. Results: Validity of CRapp outcomes was good to excellent across measures for both markers (mean ICC ≥0.878), with precision plots showing good agreement and precision. CV ranged from 0% (repetitions) to 33.3% (fatigue index) and was, on average, better for the lateral malleolus marker. Additionally, inter- and intra-rater reliability were excellent (mean ICC ≥0.949, CV ≤5.6%). Conclusion: These results confirm the CRapp is valid and reliable within and between users for measuring CRT outcomes in healthy adults. The CRapp provides a tool to objectivise CRT outcomes in research and practice, aligning with recent advances in mobile technologies and their increased use in healthcare.
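
The agreement statistics reported above (bias, limits of agreement, typical error as a CV) follow standard formulations; the short sketch below shows one common way to compute them for a pair of methods. The paired measurements are hypothetical, and the typical-error-as-CV formula shown (standard deviation of the differences divided by the square root of 2, expressed relative to the grand mean) is one conventional choice rather than necessarily the authors' exact computation.

import numpy as np

# Hypothetical paired total-height measurements (cm): app vs. 3D motion capture
app   = np.array([112.0, 98.5, 120.3, 105.7, 99.1])
mocap = np.array([110.2, 99.0, 118.9, 106.5, 98.0])

diff = app - mocap
bias = diff.mean()                                  # systematic difference (Bland-Altman bias)
loa  = 1.96 * diff.std(ddof=1)                      # half-width of the 95% limits of agreement
typical_error = diff.std(ddof=1) / np.sqrt(2)       # within-pair typical error
cv = 100 * typical_error / np.mean((app + mocap) / 2)
print(f"bias = {bias:.2f} cm, LoA = [{bias - loa:.2f}, {bias + loa:.2f}] cm, CV = {cv:.1f}%")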

Keywords: calf raise test, mobile application, validity, reliability

Procedia PDF Downloads 146
293 Separating Landform from Noise in High-Resolution Digital Elevation Models through Scale-Adaptive Window-Based Regression

Authors: Anne M. Denton, Rahul Gomes, David W. Franzen

Abstract:

High-resolution elevation data are becoming increasingly available, but typical approaches for computing topographic features, like slope and curvature, still assume small sliding windows, for example, of size 3x3. That means that the digital elevation model (DEM) has to be resampled to the scale of the landform features that are of interest. Any higher resolution is lost in this resampling. When the topographic features are computed through regression that is performed at the resolution of the original data, the accuracy can be much higher, and the reported result can be adjusted to the length scale that is relevant locally. Slope and variance are calculated for overlapping windows, meaning that one regression result is computed per raster point. The number of window centers per area is the same for the output as for the original DEM. Slope and variance are computed by performing regression on the points in the surrounding window. Such an approach is computationally feasible because of the additive nature of regression parameters and variance. Any doubling of window size in each direction only takes a single pass over the data, corresponding to a logarithmic scaling of the resulting algorithm as a function of the window size. Slope and variance are stored for each aggregation step, allowing the reported slope to be selected to minimize variance. The approach thereby adjusts the effective window size to the landform features that are characteristic of the area within the DEM. Starting with a window size of 2x2, each iteration aggregates 2x2 non-overlapping windows from the previous iteration. Regression results are stored for each iteration, and the slope at minimal variance is reported in the final result. As such, the reported slope is adjusted to the length scale that is characteristic of the landform locally. The length scale itself and the variance at that length scale are also visualized to aid in interpreting the results for slope. The relevant length scale is taken to be half of the window size of the window over which the minimum variance was achieved. The resulting process was evaluated for 1-meter DEM data and for artificial data that was constructed to have defined length scales and added noise. A comparison with ESRI ArcMap was performed and showed the potential of the proposed algorithm. The resolution of the resulting output is much higher, and the slope and aspect are much less affected by noise. Additionally, the algorithm adjusts to the scale of interest within the region of the image. These benefits are gained without additional computational cost in comparison with resampling the DEM and computing the slope over 3x3 windows in ESRI ArcMap for each resolution. In summary, the proposed approach extracts slope and aspect of DEMs at the length scales that are characteristic locally. The result is of higher resolution and less affected by noise than existing techniques.
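
A simplified Python sketch of the scale-adaptive idea is given below: a plane is fitted over windows of increasing size around one raster point, and the slope from the window with minimal residual variance is reported. For clarity the additive aggregation of regression sums described in the abstract is replaced by direct least-squares fits, so this is an assumed reading of the method rather than the authors' implementation.

import numpy as np

def adaptive_slope(dem, row, col, max_half=8, cell=1.0):
    """Return (slope, residual variance, window size) at the minimum-variance scale."""
    best = None
    half = 1
    while half <= max_half:                      # window half-sizes 1, 2, 4, 8 cells
        win = dem[row - half:row + half + 1, col - half:col + half + 1]
        ys, xs = np.mgrid[-half:half + 1, -half:half + 1] * cell
        A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(win.size)])
        coef, res, *_ = np.linalg.lstsq(A, win.ravel(), rcond=None)
        var = res[0] / win.size if res.size else 0.0   # residual variance of the plane fit
        slope = np.hypot(coef[0], coef[1])             # slope magnitude from the fitted gradient
        if best is None or var < best[1]:
            best = (slope, var, 2 * half + 1)
        half *= 2
    return best

# Synthetic 1 m DEM: a constant gradient plus noise
dem = np.linspace(0.0, 6.3, 64) + np.random.default_rng(0).normal(0, 0.05, (64, 64))
print(adaptive_slope(dem, 32, 32))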

Keywords: high resolution digital elevation models, multi-scale analysis, slope calculation, window-based regression

Procedia PDF Downloads 104
292 Examining the Independent Effects of Early Exposure to Game Consoles and Parent-Child Activities on Psychosocial Development

Authors: Rosa S. Wong, Keith T. S. Tung, Frederick K. Ho, Winnie W. Y. Tso, King-wa Fu, Nirmala Rao, Patrick Ip

Abstract:

As technology advances, exposures in early childhood are no longer confined to stimulation from the surrounding physical environment. Children nowadays are also subject to influences from the digital world. In particular, early access to game consoles can pose risks to child development, especially when the game is not developmentally appropriate for young children. Overstimulation is possible and could impair brain development. On the other hand, recreational parent-child activities, including outdoor activities and visits to museums, require the child to interact with parents, which is beneficial for developing adaptive emotion regulation and social skills. Given the differences between these two types of exposures, this study investigated and compared the independent effects of early exposure to a game console and early play-based parent-child activities on children's long-term psychosocial outcomes. This study used data from a subset of children (n=304, 142 male and 162 female) in a longitudinal cohort study examining the long-term impact of family socioeconomic status on child development. In 2012/13, we recruited a group of children at Kindergarten 3 (K3) randomly from Hong Kong local kindergartens and collected data on their duration of exposure to game consoles and recreational parent-child activities at that time. In 2018/19, we re-surveyed the parents of these children, who were matriculated as Form 1 (F1) students (ages ranging from 11 to 13 years) in secondary schools, and asked the parents to rate their children's psychosocial problems in F1. Linear regressions were conducted to examine the associations between early exposures and adolescent psychosocial problems, with and without adjustment for child gender and K3 family socioeconomic status. On average, K3 children spent about 42 minutes on a game console every day and had 2-3 recreational activities with their parents every week. Univariate analyses showed that more time spent on game consoles at K3 was associated with more psychosocial difficulties in F1, particularly more externalizing problems. The effect of early exposure to game consoles on externalizing behavior remained significant (B=0.59, 95%CI: 0.15 to 1.03, p=0.009) after adjusting for recreational parent-child activities and child gender. For recreational parent-child activities at K3, the effect on overall psychosocial difficulties became non-significant after adjusting for early exposure to game consoles and child gender. However, a significant protective effect on externalizing problems (B=-0.65, 95%CI: -1.23 to -0.07, p=0.028) remained even after adjusting for the confounders. Early exposure to game consoles has a negative impact on children's psychosocial health, whereas play-based parent-child activities can foster positive psychosocial outcomes. More efforts should be directed to communicating the risks and benefits of these activities and urging parents and caregivers to replace child-alone screen time with parent-child play time in daily routines.
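
For readers unfamiliar with the adjusted-regression step described above, a minimal Python sketch is shown below using statsmodels. The file, column names, and variable coding are hypothetical placeholders; the actual study analysed its own cohort data.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical K3 exposure and F1 outcome data
df = pd.read_csv("cohort_k3_f1.csv")

# Externalizing score regressed on console minutes, adjusted for parent-child
# activities and child gender (family SES could be added in the same way)
model = smf.ols(
    "externalizing_f1 ~ console_minutes_k3 + parent_child_activities_k3 + C(gender)",
    data=df,
).fit()
print(model.params["console_minutes_k3"])
print(model.conf_int().loc["console_minutes_k3"])   # 95% confidence interval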

Keywords: early childhood, electronic device, parenting, psychosocial wellbeing

Procedia PDF Downloads 139
291 Urban Open Source: Synthesis of a Citizen-Centric Framework to Design Densifying Cities

Authors: Shaurya Chauhan, Sagar Gupta

Abstract:

Prominent urbanizing centres across the globe like Delhi, Dhaka, or Manila have exhibited that development often faces a challenge in bridging the gap between the top-down collective requirements of the city and the bottom-up individual aspirations of the ever-diversifying population. When this exclusion is intertwined with rapid urbanization and diversifying urban demography, unplanned sprawl, poor planning, and low-density development emerge as automated responses. In parallel, new ideas and methods of densification and public participation are being widely adopted as sustainable alternatives for the future of urban development. This research advocates a collaborative design method for future development: one that allows rapid application with its prototypical nature and an inclusive approach with mediation between the 'user' and the 'urban', purely with the use of empirical tools. Building upon the concepts and principles of 'open-sourcing' in design, the research establishes a design framework that serves current user requirements while allowing for future citizen-driven modifications. This is synthesized as a 3-tiered model: user needs – design ideology – adaptive details. The research culminates in a context-responsive 'open source project development framework' (hereinafter referred to as OSPDF) that can be used for on-ground field applications. To bring forward specifics, the research looks at a 300-acre redevelopment in the core of a rapidly urbanizing city as a case encompassing extreme physical, demographic, and economic diversity. The suggestive measures also integrate the region's cultural identity and social character with the diverse citizen aspirations, using architecture and urban design tools and references from recognized literature. This framework, based on a vision – feedback – execution loop, is used for hypothetical development at the five prevalent scales in design: master planning, urban design, architecture, tectonics, and modularity, in a chronological manner. At each of these scales, the possible approaches and avenues for open-sourcing are identified and validated through trial and error, and subsequently recorded. The research attempts to re-calibrate the architectural design process and make it more responsive and people-centric. Analytical tools such as Space, Event, and Movement by Bernard Tschumi and the Five-Point Mental Map by Kevin Lynch, among others, are deep-rooted in the research process. Over the five-part OSPDF, a two-part subsidiary process is also suggested after each cycle of application, for continued appraisal and refinement of the framework and urban fabric over time. The research is an exploration of the possibilities for an architect to adopt the new role of a 'mediator' in the development of contemporary urbanity.

Keywords: open source, public participation, urbanization, urban development

Procedia PDF Downloads 121
290 Real-Time Data Stream Partitioning over a Sliding Window in Real-Time Spatial Big Data

Authors: Sana Hamdi, Emna Bouazizi, Sami Faiz

Abstract:

In recent years, real-time spatial applications, like location-aware services and traffic monitoring, have become more and more important. Such applications result in dynamic environments where data as well as queries are continuously moving. As a result, there is a tremendous amount of real-time spatial data generated every day. The growth of the data volume seems to outspeed the advance of our computing infrastructure. For instance, in real-time spatial Big Data, users expect to receive the results of each query within a short time period regardless of the load on the system. But with a huge amount of real-time spatial data generated, system performance degrades rapidly, especially in overload situations. To solve this problem, we propose the use of data partitioning as an optimization technique. Traditional horizontal and vertical partitioning can increase the performance of the system and simplify data management, but they remain insufficient for real-time spatial Big Data; they cannot deal with real-time and stream queries efficiently. Thus, in this paper, we propose a novel data partitioning approach for real-time spatial Big Data named VPA-RTSBD (Vertical Partitioning Approach for Real-Time Spatial Big Data). This contribution is an implementation of the Matching algorithm for traditional vertical partitioning. We first find the optimal attribute sequence by using the Matching algorithm. Then, we propose a new cost model for database partitioning that keeps the data amount of each partition within a balanced limit and provides parallel execution guarantees for the most frequent queries. VPA-RTSBD aims to obtain a real-time partitioning scheme and deals with stream data. It improves the performance of query execution by maximizing the degree of parallel execution. This improves QoS (Quality of Service) in real-time spatial Big Data, especially with a huge volume of stream data. The performance of our contribution is evaluated via simulation experiments. The results show that the proposed algorithm is both efficient and scalable, and that it outperforms comparable algorithms.
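
To illustrate the role of the Hamming distance in forming vertical fragments, the toy sketch below groups attributes whose query-usage vectors are close in Hamming distance into the same fragment. This greedy grouping is an assumed simplification for illustration, not the paper's Matching algorithm or cost model, and the usage matrix (rows = queries, columns = attributes) is hypothetical.

# Hypothetical attribute usage across four frequent queries (1 = attribute accessed)
attribute_usage = {
    "obj_id":    [1, 1, 1, 1],
    "x":         [1, 1, 0, 1],
    "y":         [1, 1, 0, 1],
    "speed":     [0, 0, 1, 0],
    "timestamp": [0, 0, 1, 0],
}

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

# Greedy grouping: join a fragment whose representative vector is within distance 1,
# otherwise start a new fragment
fragments = []
for name, usage in attribute_usage.items():
    for frag in fragments:
        if hamming(usage, frag["usage"]) <= 1:
            frag["attrs"].append(name)
            break
    else:
        fragments.append({"usage": usage, "attrs": [name]})

print([f["attrs"] for f in fragments])   # e.g. [['obj_id', 'x', 'y'], ['speed', 'timestamp']]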

Keywords: real-time spatial big data, quality of service, vertical partitioning, horizontal partitioning, matching algorithm, hamming distance, stream query

Procedia PDF Downloads 137
289 Factors Impacting Geostatistical Modeling Accuracy and Modeling Strategy of Fluvial Facies Models

Authors: Benbiao Song, Yan Gao, Zhuo Liu

Abstract:

Geostatistical modeling is the key technique for reservoir characterization. The quality of geological models will greatly influence the prediction of reservoir performance, but few studies have been done to quantify the factors impacting geostatistical reservoir modeling accuracy. In this study, 16 fluvial prototype models were established to represent different geological complexity, and 6 cases ranging from 16 to 361 wells were defined to reproduce all those 16 prototype models by different methodologies, including SIS, object-based, and MPFS algorithms, accompanied by different constraint parameters. A modeling accuracy ratio was defined to quantify the influence of each factor, and ten realizations were averaged to represent each accuracy ratio under the same modeling condition and parameter association. In total, 5,760 simulations were done to quantify the relative contribution of each factor to the simulation accuracy, and the results can be used as a strategy guide for facies modeling in similar conditions. It was found that data density, geological trend, and geological complexity have a great impact on modeling accuracy. Modeling accuracy may reach 90% when channel sand width reaches 1.5 times the well spacing, under whatever condition, by the SIS and MPFS methods. When well density is low, the contribution of a geological trend may increase the modeling accuracy from 40% to 70%, while the use of a proper variogram may have a very limited contribution for the SIS method. It can be implied that when well data are dense enough to cover simple geobodies, little effort is needed to construct an acceptable model; when geobodies are complex with an insufficient data set, it is better to construct a robust geological trend than to rely on a variogram function. For the object-based method, modeling accuracy does not increase as obviously as for the SIS method with the increase of data density, but it keeps a rational appearance when data density is low. MPFS methods show a similar trend to the SIS method, but the use of a proper geological trend accompanied by a rational variogram may yield better modeling accuracy than the MPFS method alone. This implies that the geological modeling strategy for a real reservoir case needs to be optimized by evaluation of the dataset, geological complexity, geological constraint information, and the modeling objective.
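
One plausible reading of the modeling accuracy ratio used above is a cell-by-cell match fraction between the prototype facies model and each realization, averaged over realizations; the sketch below computes it on synthetic grids. This definition and the synthetic data are assumptions for illustration only, not necessarily the authors' exact metric.

import numpy as np

def accuracy_ratio(prototype, realizations):
    # Fraction of cells whose facies code matches the prototype, averaged over realizations
    return float(np.mean([np.mean(r == prototype) for r in realizations]))

rng = np.random.default_rng(1)
prototype = (rng.random((50, 50)) < 0.3).astype(int)          # 0 = floodplain, 1 = channel sand
realizations = [np.where(rng.random((50, 50)) < 0.1,          # flip ~10% of cells per realization
                         1 - prototype, prototype)
                for _ in range(10)]
print(accuracy_ratio(prototype, realizations))                # about 0.9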

Keywords: fluvial facies, geostatistics, geological trend, modeling strategy, modeling accuracy, variogram

Procedia PDF Downloads 240
288 A Galectin from Rock Bream Oplegnathus fasciatus: Molecular Characterization and Immunological Properties

Authors: W. S. Thulasitha, N. Umasuthan, G. I. Godahewa, Jehee Lee

Abstract:

In fish, innate immune defense is the first immune response against microbial pathogens and consists of several antimicrobial components. Galectins are carbohydrate-binding lectins that have the ability to identify pathogens by recognizing pathogen-associated molecular patterns. Galectins play a vital role in the regulation of innate and adaptive immune responses. Rock bream Oplegnathus fasciatus is one of the most important cultured species in Korea and Japan. Considering the losses due to microbial pathogens, the present study was carried out to understand the molecular and functional characteristics of a galectin in normal and pathogenic conditions, which could help to establish an understanding of the immunological components of rock bream. The complete cDNA of rock bream galectin-like protein B (rbGal like B) was identified from the cDNA library, and in silico analysis was carried out using bioinformatic tools. The genomic structure was derived from the BAC library by sequencing a specific clone and using Spidey. The full-length rbGal like B (contig14775) cDNA containing 517 nucleotides was identified from the cDNA library, comprising a 435 bp open reading frame encoding a deduced protein of 145 amino acids. The molecular mass of the putative protein was predicted as 16.14 kDa with an isoelectric point of 8.55. A characteristic conserved galactose-binding domain was located from amino acids 12 to 145. The genomic structure of rbGal like B consists of 4 exons and 3 introns. Moreover, pairwise alignment showed that rock bream rbGal like B shares the highest similarity (95.9%) and identity (91%) with Takifugu rubripes galectin-related protein B like and the lowest similarity (55.5%) and identity (32.4%) with Homo sapiens. Multiple sequence alignment demonstrated that galectin-related protein B is conserved among vertebrates. A phylogenetic analysis revealed that the rbGal like B protein clustered together with other fish homologs in the fish clade. It showed a closer evolutionary link with Takifugu rubripes. Tissue distribution and expression patterns of rbGal like B upon immune challenges were determined using qRT-PCR assays. Among all tested tissues, the level of rbGal like B expression was significantly high in gill tissue, followed by kidney, intestine, heart, and spleen. Upon immune challenge, it showed an up-regulated pattern of expression with Edwardsiella tarda, rock bream iridovirus, and poly I:C up to 6 h post injection, and up to 24 h with LPS. However, in the presence of Streptococcus iniae, rbGal like B showed an up-and-down pattern of expression with a peak at 6-12 h. Results from the present study reveal the phylogenetic position and role of rbGal like B in response to microbial infection in rock bream.

Keywords: galectin like protein B, immune response, Oplegnathus fasciatus, molecular characterization

Procedia PDF Downloads 328
287 A Risk Assessment Tool for the Contamination of Aflatoxins on Dried Figs Based on Machine Learning Algorithms

Authors: Kottaridi Klimentia, Demopoulos Vasilis, Sidiropoulos Anastasios, Ihara Diego, Nikolaidis Vasileios, Antonopoulos Dimitrios

Abstract:

Aflatoxins are highly poisonous and carcinogenic compounds produced by species of the genus Aspergillus that can infect a variety of agricultural foods, including dried figs. Biological and environmental factors, such as the population, pathogenicity, and aflatoxigenic capacity of the strains, and the topography, soil, and climate parameters of the fig orchards, are believed to have a strong effect on aflatoxin levels. Existing methods for aflatoxin detection and measurement, such as high-performance liquid chromatography (HPLC) and enzyme-linked immunosorbent assay (ELISA), can provide accurate results, but the procedures are usually time-consuming, sample-destructive, and expensive. Predicting aflatoxin levels prior to crop harvest is useful for minimizing the health and financial impact of a contaminated crop. Consequently, there is interest in developing a tool that predicts aflatoxin levels based on topography and soil analysis data of fig orchards. This paper describes the development of a risk assessment tool for the contamination of aflatoxins on dried figs, based on the location and altitude of the fig orchards, the population of the fungus Aspergillus spp. in the soil, and soil parameters such as pH, saturation percentage (SP), electrical conductivity (EC), organic matter, particle size analysis (sand, silt, clay), the concentration of the exchangeable cations (Ca, Mg, K, Na), extractable P, and trace elements (B, Fe, Mn, Zn, and Cu), by employing machine learning methods. In particular, our proposed method integrates three machine learning techniques, i.e., dimensionality reduction on the original dataset (principal component analysis), metric learning (Mahalanobis metric for clustering), and the k-nearest neighbors learning algorithm (KNN), into an enhanced model, with mean performance equal to 85% in terms of the Pearson correlation coefficient (PCC) between observed and predicted values.
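
A hedged sketch of the described pipeline is given below: PCA for dimensionality reduction followed by a k-nearest-neighbors regressor, evaluated by the Pearson correlation coefficient between observed and predicted aflatoxin levels. The Mahalanobis metric-learning step is omitted, the numbers of components and neighbors are arbitrary, and the data file and column names are hypothetical.

import pandas as pd
from scipy.stats import pearsonr
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical orchard dataset: soil/topography features plus measured aflatoxin level
df = pd.read_csv("fig_orchards.csv")
X, y = df.drop(columns=["aflatoxin"]), df["aflatoxin"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = make_pipeline(StandardScaler(), PCA(n_components=5), KNeighborsRegressor(n_neighbors=5))
model.fit(X_tr, y_tr)
pcc, _ = pearsonr(y_te, model.predict(X_te))   # Pearson correlation, as in the paper
print(f"PCC = {pcc:.2f}")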

Keywords: aflatoxins, Aspergillus spp., dried figs, k-nearest neighbors, machine learning, prediction

Procedia PDF Downloads 153
286 Uncertainty Quantification of Corrosion Anomaly Length of Oil and Gas Steel Pipelines Based on Inline Inspection and Field Data

Authors: Tammeen Siraj, Wenxing Zhou, Terry Huang, Mohammad Al-Amin

Abstract:

The high-resolution inline inspection (ILI) tool is used extensively in the pipeline industry to identify, locate, and measure metal-loss corrosion anomalies on buried oil and gas steel pipelines. Corrosion anomalies may occur singly (i.e., individual anomalies) or as clusters (i.e., a colony of corrosion anomalies). Although the ILI technology has advanced immensely, there are measurement errors associated with the sizes of corrosion anomalies reported by ILI tools due to limitations of the tools and associated sizing algorithms, and to the detection threshold of the tools (i.e., the minimum detectable feature dimension). Quantifying the measurement error in the ILI data is crucial for corrosion management and for developing maintenance strategies that satisfy the safety and economic constraints. Studies on the measurement error associated with the length of corrosion anomalies (in the longitudinal direction of the pipeline) have been scarcely reported in the literature, and this error is investigated in the present study. Limitations in the ILI tool and clustering process can sometimes cause clustering error, which is defined as the error introduced during the clustering process by including or excluding a single anomaly or group of anomalies in or from a cluster. Clustering error has been found to be one of the biggest contributory factors to the relatively high uncertainties associated with ILI-reported anomaly length. As such, this study focuses on developing a consistent and comprehensive framework to quantify the measurement errors in the ILI-reported anomaly length by comparing the ILI data and corresponding field measurements for individual and clustered corrosion anomalies. The analysis carried out in this study is based on the ILI and field measurement data for a set of anomalies collected from two segments of a buried natural gas pipeline currently in service in Alberta, Canada. Data analyses showed that the measurement error associated with the ILI-reported length of anomalies without clustering error, denoted as Type I anomalies, is markedly less than that for anomalies with clustering error, denoted as Type II anomalies. A methodology employing data mining techniques is further proposed to classify Type I and Type II anomalies based on the ILI-reported corrosion anomaly information.
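
The core comparison described above, ILI-reported versus field-measured anomaly length split by whether clustering error is present, can be summarized with very simple statistics; the sketch below shows an assumed formulation on hypothetical records, not the study's actual dataset or framework.

import statistics

# (ILI length mm, field length mm, clustering error present?); values are hypothetical
records = [(45, 42, False), (60, 63, False), (38, 35, False),
           (120, 78, True), (95, 140, True), (70, 52, True)]

for label, flag in [("Type I (no clustering error)", False),
                    ("Type II (clustering error)", True)]:
    errors = [ili - field for ili, field, clustered in records if clustered is flag]
    print(label, "| mean error:", round(statistics.mean(errors), 1),
          "mm, stdev:", round(statistics.stdev(errors), 1), "mm")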

Keywords: clustered corrosion anomaly, corrosion anomaly assessment, corrosion anomaly length, individual corrosion anomaly, metal-loss corrosion, oil and gas steel pipeline

Procedia PDF Downloads 286
285 Enhanced Tensor Tomographic Reconstruction: Integrating Absorption, Refraction and Temporal Effects

Authors: Lukas Vierus, Thomas Schuster

Abstract:

A general framework is examined for dynamic tensor field tomography within an inhomogeneous medium characterized by refraction and absorption, treated as an inverse source problem for the associated transport equation. Guided by Fermat's principle, the Riemannian metric within the specified domain is determined by the medium's refractive index. While considerable literature exists on the inverse problem of reconstructing a tensor field from its longitudinal ray transform within a static Euclidean environment, limited inversion formulas and algorithms are available for general Riemannian metrics and time-varying tensor fields. It is established that tensor field tomography, akin to an inverse source problem for a transport equation, persists in dynamic scenarios. Framing dynamic tensor tomography as an inverse source problem embodies a comprehensive perspective within this domain. Ensuring well-defined forward mappings necessitates establishing existence and uniqueness for the underlying transport equations. However, the bilinear forms of the associated weak formulations fail to meet the coercivity condition. Consequently, recourse is taken to viscosity solutions, demonstrating their unique existence within suitable Sobolev spaces (in the static case) and Sobolev-Bochner spaces (in the dynamic case), under a specific assumption restricting variations in the refractive index. Notably, the adjoint problem can also be reformulated as a transport equation, with analogous results regarding uniqueness. Analytical solutions are expressed as integrals over geodesics, facilitating more efficient evaluation of the forward and adjoint operators compared to solving partial differential equations. Numerical experiments are conducted using a Nesterov-accelerated Landweber method, encompassing various fields, absorption coefficients, and refractive indices, thereby illustrating the enhanced reconstruction achieved through this holistic modeling approach.
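
For orientation, one standard form of the Nesterov-accelerated Landweber iteration for the operator equation A f = g is sketched below in LaTeX notation; the specific operator, step size, and momentum schedule used by the authors are not stated in the abstract, so this is a generic template rather than their scheme.

\[
z_k = f_k + \tfrac{k-1}{k+2}\,(f_k - f_{k-1}), \qquad
f_{k+1} = z_k + \omega\, A^{*}\!\left(g - A z_k\right), \qquad 0 < \omega \le \tfrac{1}{\|A\|^{2}},
\]

where A denotes the forward (ray transform) operator, A^{*} its adjoint evaluated as integrals over geodesics, g the measured data, and f_0 = f_{-1} the initial guess.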

Keywords: attenuated refractive dynamic ray transform of tensor fields, geodesics, transport equation, viscosity solutions

Procedia PDF Downloads 22
284 Adolescent-Parent Relationship as the Most Important Factor in Preventing Mood Disorders in Adolescents: An Application of Artificial Intelligence to Social Studies

Authors: Elżbieta Turska

Abstract:

Introduction: One of the most difficult times in a person's life is adolescence. The experiences in this period may shape the future life of this person to a large extent. This is the reason why many young people experience sadness, dejection, hopelessness, a sense of worthlessness, as well as losing interest in various activities and social relationships, all of which are often classified as mood disorders. As many as 15-40% of adolescents experience depressed moods, and for most of them these resolve and are not carried into adulthood. However, 5-6% of those affected by mood disorders develop the depressive syndrome, and as many as 1-3% develop full-blown clinical depression. Materials: A large questionnaire was given to 2508 students aged 13–16 years, and one of its parts was the Burns checklist, i.e., the standard test for identifying depressed mood. The questionnaire asked about many aspects of the student's life; it included a total of 53 questions, most of which had subquestions. It is important to note that the data suffered from many problems, the most important of which were missing data and collinearity. Aim: In order to identify the correlates of mood disorders, we built predictive models which were then trained and validated. Our aim was not to predict which students suffer from mood disorders but rather to explore the factors influencing mood disorders. Methods: The problems with the data described above practically excluded using all classical statistical methods. For this reason, we attempted to use the following Artificial Intelligence (AI) methods: classification trees with surrogate variables, random forests, and xgboost. All analyses were carried out with the use of the mlr package for the R programming language. Results: The predictive model built by the classification trees algorithm outperformed the other algorithms by a large margin. As a result, we were able to rank the variables (questions and subquestions from the questionnaire) from the most to the least influential as far as protection against mood disorder is concerned. Thirteen out of the twenty most important variables reflect the relationships with parents. This seems to be a really significant result both from the cognitive point of view and from the practical point of view, i.e., as far as interventions to correct mood disorders are concerned.
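
By way of illustration only, the sketch below shows the general shape of such a variable-ranking analysis in Python with a random forest. The study itself used classification trees with surrogate splits, random forests, and xgboost via the mlr package in R and handled missing data through surrogate variables, whereas this sketch assumes complete data and uses hypothetical file and column names.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Hypothetical questionnaire data: predictor columns plus a depressed-mood indicator
df = pd.read_csv("adolescent_survey.csv")
X = df.drop(columns=["depressed_mood"])      # questions and subquestions
y = df["depressed_mood"]                     # Burns checklist classification

forest = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
ranking = pd.Series(forest.feature_importances_, index=X.columns).sort_values(ascending=False)
print(ranking.head(20))                      # twenty most influential variables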

Keywords: mood disorders, adolescents, family, artificial intelligence

Procedia PDF Downloads 83
283 Next Generation Radiation Risk Assessment and Prediction Tools Generation Applying AI-Machine (Deep) Learning Algorithms

Authors: Selim M. Khan

Abstract:

Indoor air quality is strongly influenced by the presence of radioactive radon (222Rn) gas. Indeed, exposure to high 222Rn concentrations is unequivocally linked to DNA damage and lung cancer and is a worsening issue in North American and European built environments, having increased over time within newer housing stocks as a function of as yet unclear variables. Indoor air radon concentration can be influenced by a wide range of environmental, structural, and behavioral factors. As some of these factors are quantitative while others are qualitative, no single statistical model can determine indoor radon level precisely while simultaneously considering all these variables across a complex and highly diverse dataset. The ability of AI machine (deep) learning to simultaneously analyze multiple quantitative and qualitative features makes it suitable for predicting radon with a high degree of precision. Using Canadian and Swedish long-term indoor air radon exposure data, we are using artificial deep neural network models with random weights and polynomial statistical models in MATLAB to assess and predict radon health risk to humans as a function of geospatial, human behavioral, and built environmental metrics. Our initial artificial neural network with random weights, run with sigmoid activation, tested different combinations of variables and showed the highest prediction accuracy (>96%) within a reasonable number of iterations. Here, we present details of these emerging methods and discuss strengths and weaknesses compared to the traditional artificial neural network and statistical methods commonly used to predict indoor air quality in different countries. We propose an artificial deep neural network with random weights as a highly effective method for assessing and predicting indoor radon.
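
The phrase "neural network with random weights" admits an extreme-learning-machine-style reading: hidden-layer weights are drawn at random and kept fixed, a sigmoid activation is applied, and only the output weights are fitted by least squares. The sketch below illustrates that reading in Python on synthetic data; the authors' models were built in MATLAB, so this is an assumed, simplified analogue rather than their implementation.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                             # 8 hypothetical predictors
y = X @ rng.normal(size=8) + 0.1 * rng.normal(size=500)   # synthetic radon proxy

n_hidden = 64
W = rng.normal(size=(8, n_hidden))             # random, untrained hidden-layer weights
b = rng.normal(size=n_hidden)
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))         # sigmoid activations
beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # fit the output layer only

pred = H @ beta
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"training R^2 = {r2:.3f}")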

Keywords: radon, radiation protection, lung cancer, AI-machine deep learning, risk assessment, risk prediction, Europe, North America

Procedia PDF Downloads 77
282 A Literature Review of Precision Agriculture: Applications of Diagnostic Diseases in Corn, Potato, and Rice Based on Artificial Intelligence

Authors: Carolina Zambrana, Grover Zurita

Abstract:

Food losses caused by deficient agricultural production are one of the major problems worldwide. They put the population's food security and the efficiency of farming investments at risk. Food security is expected to be achieved through each country's own efficient production, which will have an impact on the well-being of its population and, thus, also on food sovereignty. Production losses in quantity and quality occur due to the lack of efficient detection of diseases at an early stage. It is very difficult to improve agricultural efficiency using traditional methods, since detection of the main diseases is imprecise and takes a long time to carry out, especially when the production areas are extensive. Therefore, the main objective of this research study is to perform a systematic literature review, covering the latest five years, of Precision Agriculture (PA), in order to understand the state of the art of the set of new technologies, procedures, and optimization processes with Artificial Intelligence (AI). This study focuses on diagnostic diseases of corn, potatoes, and rice. The extensive literature review is performed on the Elsevier, Scopus, and IEEE databases. In addition, this research focuses on advanced digital image processing and the development of software and hardware for PA. The convolutional neural network receives special attention due to its outstanding diagnostic results. Moreover, the studied data will be incorporated with artificial intelligence algorithms for the automatic diagnosis of crop quality. Finally, precision agriculture with technology applied to the agricultural sector allows the land to be exploited efficiently. This system requires sensors, drones, data acquisition cards, and global positioning systems. This research seeks to merge different areas of science, including control engineering, electronics, digital image processing, and artificial intelligence, for the development, in the near future, of a low-cost image measurement system that allows the optimization of crops with AI.
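
As a concrete reference for the kind of convolutional network commonly used in the reviewed leaf-disease studies, a minimal Keras sketch is shown below. The architecture, image size, and class count are illustrative assumptions, not taken from any particular paper in the review.

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 3)),           # RGB leaf images resized to 128x128
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),        # e.g. healthy plus three diseases
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
# Training would use a labelled image folder, e.g.
# tf.keras.utils.image_dataset_from_directory("leaf_images/", image_size=(128, 128))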

Keywords: precision agriculture, convolutional neural network, deep learning, artificial intelligence

Procedia PDF Downloads 57
281 Cities Under Pressure: Unraveling Urban Resilience Challenges

Authors: Sherine S. Aly, Fahd A. Hemeida, Mohamed A. Elshamy

Abstract:

In the face of rapid urbanization and the myriad challenges posed by climate change, population growth, and socio-economic disparities, fostering urban resilience has become paramount. This abstract offers a comprehensive overview of the study on urban resilience challenges, exploring the background, methodologies, major findings, and concluding insights. The paper unveils a spectrum of challenges encompassing environmental stressors and deep-seated socio-economic issues, such as unequal access to resources and opportunities. Emphasizing their interconnected nature, the study underscores the imperative for holistic and integrated approaches to urban resilience, recognizing the intricate web of factors shaping the urban landscape. Urbanization has witnessed an unprecedented surge, transforming cities into dynamic and complex entities. With this growth, however, comes an array of challenges that threaten the sustainability and resilience of urban environments. This study seeks to unravel the multifaceted urban resilience challenges, exploring their origins and implications for contemporary cities. Cities serve as hubs of economic, social, and cultural activities, attracting diverse populations seeking opportunities and a higher quality of life. However, the urban fabric is increasingly strained by climate-related events, infrastructure vulnerabilities, and social inequalities. Understanding the nuances of these challenges is crucial for developing strategies that enhance urban resilience and ensure the longevity of cities as vibrant and adaptive entities. This paper endeavors to discern strategic guidelines for enhancing urban resilience amidst the dynamic challenges posed by rapid urbanization, and aims to distill actionable insights that can guide the formulation of effective strategies to fortify cities against multifaceted pressures. The study employs a multifaceted approach to dissect urban resilience challenges. A qualitative method was employed, including comprehensive literature reviews and data analysis of urban vulnerabilities, which provided valuable insights into the lived experiences of resilience challenges in diverse urban settings. In conclusion, this study underscores the urgency of addressing urban resilience challenges to ensure the sustained vitality of cities worldwide. The interconnected nature of these challenges necessitates a paradigm shift in urban planning and governance. By adopting holistic strategies that integrate environmental, social, and economic considerations, cities can navigate the complexities of the 21st century. The findings provide a roadmap for policymakers, planners, and communities to collaboratively forge resilient urban futures that withstand the challenges of an ever-evolving urban landscape.

Keywords: resilient principles, risk management, sustainable cities, urban resilience

Procedia PDF Downloads 33
280 Computational Study of Composite Films

Authors: Rudolf Hrach, Stanislav Novak, Vera Hrachova

Abstract:

Composite and nanocomposite films represent a class of promising materials and are often objects of study due to their mechanical, electrical, and other properties. The most interesting ones are probably the composite metal/dielectric structures consisting of a metal component embedded in an oxide or polymer matrix. The behaviour of composite films varies with the amount of the metal component inside, known as the filling factor. For small filling factors, the structures contain individual metal particles or nanoparticles completely insulated by the dielectric matrix, and the films have more or less dielectric properties. The conductivity of the films increases with increasing filling factor, and finally a transition into a metallic state occurs. The behaviour of composite films near the percolation threshold, where a change of the charge transport mechanism from thermally-activated tunnelling between individual metal objects to ohmic conductivity is observed, is especially important. Physical properties of composite films are determined not only by the concentration of the metal component but also by the spatial and size distributions of metal objects, which are influenced by the technology used. In our contribution, a study of composite structures with the help of methods of computational physics was performed. The study consists of two parts: -Generation of simulated composite and nanocomposite films. Techniques based on hard-sphere or soft-sphere models as well as on atomic modelling are used here. Characterizations of the prepared composite structures by image analysis of their sections or projections then follow. However, the analysis of various morphological methods must be performed, as the standard algorithms based on the theory of mathematical morphology lose their sensitivity when applied to composite films. -The charge transport in the composites was studied by the kinetic Monte Carlo method, as there is a close connection between the structural and electric properties of composite and nanocomposite films. It was found that near the percolation threshold the paths of tunnel current form so-called fuzzy clusters. The main aim of the present study was to establish the correlation between the morphological properties of composites/nanocomposites and the structures of conducting paths in them, in dependence on the technology of composite film preparation.
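
A toy illustration of the filling-factor and percolation ideas discussed above is given below: metal sites are placed at random on a 2D grid and a spanning cluster is detected with connected-component labelling. This simple site-percolation picture is only a conceptual stand-in for the authors' hard-sphere, soft-sphere, and atomistic generators.

import numpy as np
from scipy import ndimage

def percolates(filling_factor, size=128, seed=0):
    rng = np.random.default_rng(seed)
    metal = rng.random((size, size)) < filling_factor    # random metal occupancy map
    labels, _ = ndimage.label(metal)                     # 4-connected metal clusters
    top, bottom = set(labels[0]) - {0}, set(labels[-1]) - {0}
    return bool(top & bottom)                            # does one cluster span the film?

for f in (0.3, 0.5, 0.7):
    print(f"filling factor {f:.1f}: percolating cluster = {percolates(f)}")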

Keywords: composite films, computer modelling, image analysis, nanocomposite films

Procedia PDF Downloads 367
279 The Role of Twitter Bots in Political Discussion on 2019 European Elections

Authors: Thomai Voulgari, Vasilis Vasilopoulos, Antonis Skamnakis

Abstract:

The aim of this study is to investigate the effect on the European election campaigns (May 23-26, 2019) achieved on Twitter with artificial intelligence tools such as troll factories and automated inauthentic accounts. Our research focuses on the last European Parliamentary elections, which took place between 23 and 26 May 2019, specifically in Italy, Greece, Germany, and France. It is difficult to estimate how many Twitter users are actually bots (Echeverría, 2017). Detection of fake accounts is becoming even more complicated as AI bots are made more advanced. A political bot can be programmed to post comments on a Twitter account for a political candidate, target journalists with manipulated content, or engage with politicians and artificially increase their impact and popularity. We analyze variables related to 1) the scope of activity of automated bot accounts, 2) the degree of coherence, and 3) the degree of interaction, taking into account different factors, such as the type of content of Twitter messages and their intentions, as well as their spread to the general public. For this purpose, we collected large volumes of tweets from the accounts of party leaders and MEP candidates between the 10th of May and the 26th of July, based on content analysis of tweets by hashtags, using an innovative network analysis tool known as MediaWatch.io (https://mediawatch.io/). According to our findings, one of the highest percentages (64.6%) of automated "bot" accounts during the 2019 European election campaigns was in Greece. In general terms, political bots aim at the proliferation of misinformation on social media. Targeting voters is one way this is achieved and contributes to social media manipulation. We found that political parties and individual politicians create and promote purposeful content on Twitter using algorithmic tools. Based on this analysis, online political advertising plays an important role in the process of spreading misinformation during election campaigns. Overall, inauthentic accounts and social media algorithms are being used to manipulate political behavior and public opinion.

Keywords: artificial intelligence tools, human-bot interactions, political manipulation, social networking, troll factories

Procedia PDF Downloads 116
278 Potential of Water Purification of Turbid Surface Water Sources in Remote Arid and Semi-Arid Rural Areas of Rajasthan by Moringa Oleifera (Drumstick) Tree Seeds

Authors: Pomila Sharma

Abstract:

Rajasthan is among the regions with the greatest climate sensitivity and lowest adaptive capabilities. In many parts of Rajasthan, surface water that can be highly turbid and contaminated with fecal coliform bacteria is used for drinking purposes. The majority rely almost exclusively upon traditional sources of highly turbid and untreated pathogenic surface water for their domestic water needs. In many rural areas of Rajasthan it is still difficult to obtain clean water, especially in remote habitations with no groundwater due to quality issues or depletion, and with limited feasibility of connecting to surface water schemes because the low population density in these areas does not justify large infrastructure investment. The most viable sources are rainwater harvesting, community-managed open wells, private wells, ponds, and small-scale irrigation reservoirs, which have often been the main traditional sources of rural drinking water. Turbidity is conventionally removed by treating the water with expensive chemicals. This study investigates the use of crushed seeds from the tree Moringa oleifera (drumstick) as a natural alternative to conventional coagulant chemicals. The use of Moringa oleifera seed powder can produce potable water of higher quality than the original source. Moringa oleifera, a native species of northern India, is now grown extensively throughout the tropics and found in many countries of Africa, Asia, and South America. The seeds of the tree contain significant quantities of low molecular weight, water-soluble proteins which carry a positive charge when the crushed seeds are added to water. In raw water, these proteins bind with negatively charged turbid particles such as bacteria, clay, and algae. Under proper mixing, these particles form flocs, which may be left to settle by gravity or be removed by filtration. Using Moringa oleifera as a replacement coagulant in such surface sources of arid and semi-arid areas can meet the need for water purification in remote places of Rajasthan state of India. The present study is a laboratory-based investigation of the coagulation effectiveness (purification) of Moringa tree seeds, using turbid water samples from surface sources of Rajasthan state. In this study, filtering with Moringa seed powder was shown to diminish water pollution and bacterial counts. Results showed that Moringa oleifera seeds coagulate 90-95% of turbidity and color efficiently, leading to an aesthetically clear supernatant, and reduced the bacterial load by about 85-90% in the samples.

Keywords: bacterial load, coagulant, turbidity, water purification

Procedia PDF Downloads 118
277 Algorithm for Predicting Cognitive Exertion and Cognitive Fatigue Using a Portable EEG Headset for Concussion Rehabilitation

Authors: Lou J. Pino, Mark Campbell, Matthew J. Kennedy, Ashleigh C. Kennedy

Abstract:

A concussion is complex and nuanced, with cognitive rest being a key component of recovery. Cognitive overexertion during rehabilitation from a concussion is associated with delayed recovery. However, daily living imposes cognitive demands that may be unavoidable and difficult to quantify. Therefore, a portable tool capable of alerting patients before cognitive overexertion occurs could allow patients to maintain their quality of life while preventing symptoms and recovery setbacks. EEG allows for a sensitive measure of cognitive exertion. Clinical 32-lead EEG headsets are not practical for day-to-day concussion rehabilitation management. However, there are now commercially available and affordable portable EEG headsets. Thus, these headsets can potentially be used to continuously monitor cognitive exertion during mental tasks to alert the wearer of overexertion, with the aim of preventing the occurrence of symptoms to speed recovery times. The objective of this study was to test an algorithm for predicting cognitive exertion from EEG data collected from a portable headset. EEG data were acquired from 10 participants (5 males, 5 females). Each participant wore a portable 4-channel EEG headband while completing 10 tasks: rest (eyes closed), rest (eyes open), three logic puzzles of increasing difficulty, three multiplication tasks of increasing difficulty, rest (eyes open), and rest (eyes closed). After each task, the participant was asked to report their perceived level of cognitive exertion using the NASA Task Load Index (TLX). Each participant then completed a second session on a different day. A customized machine learning model was created using data from the first session. The performance of each model was then tested using data from the second session. The mean correlation coefficient between TLX scores and predicted cognitive exertion was 0.75 ± 0.16. The results support the efficacy of the algorithm for predicting cognitive exertion. This demonstrates that the algorithms developed in this study, used with portable EEG devices, have the potential to aid in the concussion recovery process by monitoring and warning patients of cognitive overexertion. Preventing cognitive overexertion during recovery may reduce the number of symptoms a patient experiences and may help speed the recovery process.
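
A hedged sketch of the modelling step is shown below: a regression model is trained on day-1 EEG band-power features against reported TLX scores and then correlated with day-2 reports. The study trained customized per-participant models; here a single generic model, hypothetical files, and hypothetical column names are used purely for illustration.

import pandas as pd
from scipy.stats import pearsonr
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical per-task feature tables: EEG band powers per channel plus the reported TLX score
day1 = pd.read_csv("eeg_features_day1.csv")
day2 = pd.read_csv("eeg_features_day2.csv")

features = [c for c in day1.columns if c != "tlx"]
model = GradientBoostingRegressor(random_state=0).fit(day1[features], day1["tlx"])

r, _ = pearsonr(day2["tlx"], model.predict(day2[features]))
print(f"correlation between reported and predicted exertion: r = {r:.2f}")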

Keywords: cognitive activity, EEG, machine learning, personalized recovery

Procedia PDF Downloads 203
276 Crime Prevention with Artificial Intelligence

Authors: Mehrnoosh Abouzari, Shahrokh Sahraei

Abstract:

Today, with the increase in the quantity, quality, and variety of crimes, crime prevention faces a serious challenge: human resources alone, using traditional methods, will not be effective. One of the developments in the modern world is the presence of artificial intelligence in various fields, including criminal law. In fact, the use of artificial intelligence in criminal investigations and fighting crime is a necessity in today's world. The use of artificial intelligence goes far beyond, and is even separate from, other technologies in the struggle against crime. Moreover, its application in criminal science goes beyond the discussion of prevention and extends to the prediction of crime. Crime prevention, in terms of the three factors of the crime, the offender, and the victim, follows a change in the conditions of these three factors: based on the assumption that the offender acts rationally, it increases the cost and risk of crime so that the offender desists from delinquency, or it makes the victim aware of self-care and of the possibility of being exposed to danger, or it makes it more difficult to commit crimes. The presence of artificial intelligence in the field of combating crime and social harms and dangers acts like an all-seeing eye that, regardless of time and place, looks into the future and predicts the occurrence of a possible crime, thus preventing crimes from occurring. The purpose of this article is to collect and analyze the studies conducted on the use of artificial intelligence in predicting and preventing crime, and to ask how capable this technology is of predicting and preventing crime. The results have shown that the artificial intelligence technologies in use are capable of predicting and preventing crime and can find patterns in large data sets in a much more efficient way than humans. In crime prediction and prevention, the term artificial intelligence can be used to refer to the increasing use of technologies that apply algorithms to large sets of data to assist or replace police. The use of artificial intelligence in our discussion concerns predicting and preventing crime, including predicting the time and place of future criminal activities, effective identification of patterns and accurate prediction of future behavior through data mining, machine learning and deep learning, data analysis, and the use of neural networks. Because the knowledge of criminologists can provide insight into risk factors for criminal behavior, among other issues, computer scientists can match this knowledge with the datasets that artificial intelligence uses to inform predictions.

Keywords: artificial intelligence, criminology, crime, prevention, prediction

Procedia PDF Downloads 59
275 Red-Tide Detection and Prediction Using MODIS Data in the Arabian Gulf of Qatar

Authors: Yasir E. Mohieldeen

Abstract:

Qatar is one of the most water-scarce countries in the world. In 2014, the average per capita rainfall was less than 29 m3/y/ca, while the global average is 6,000 m3/y/ca. However, per capita water consumption in Qatar is among the highest in the world: more than 500 liters per person per day, whereas the global average is 160 liters per person per day. Since the early 2000s, Qatar has relied heavily on desalinated water from the Arabian Gulf as its main source of fresh water. In 2009, about 99.9% of the total potable water produced was desalinated. Reliance on desalinated water makes Qatar very vulnerable to water-related natural disasters such as the red-tide phenomenon. Qatar's strategic water reserve lasts for only 7 days. In case of a red-tide outbreak, the country would not be able to desalinate water for days, let alone for the months that such a disaster could last, as algal blooms clog desalination equipment. The 2008-09 red-tide outbreak, for instance, lasted for more than eight months and forced the closure of desalination plants in the region for weeks. This study aims at identifying favorable conditions for red-tide outbreaks, using satellite data along with in-situ measurements. This identification would allow the prediction of outbreaks and their hotspots. Prediction and monitoring of outbreaks are crucial to water security in the country, as different measures could be put in place in advance to prevent an outbreak and to mitigate its impact if it happened. Red-tide outbreaks are detected using different algorithms for chlorophyll concentration in Gulf waters. Vegetation indices, such as the Normalized Difference Vegetation Index (NDVI) and the Enhanced Vegetation Index (EVI), were used along with the Surface Algae Bloom Index (SABI) to detect known outbreaks; MODIS (Moderate Resolution Imaging Spectroradiometer) bands are used to calculate these indices. An atlas of red-tide outbreaks in the Arabian Gulf is being produced. Prediction of red-tide outbreaks ahead of their occurrence would give critical information on possible water shortages in the country. Detecting known outbreaks over the past few decades, together with related parameters (e.g. water salinity, water surface temperature, nutrients, sandstorms, etc.), enables the identification of the favorable conditions for red-tide outbreaks that are key to predicting them.
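
For readers unfamiliar with the indices named above, the sketch below shows how they could be computed from MODIS surface-reflectance bands already loaded as arrays. The NDVI, EVI, and SABI formulas follow their common definitions in the literature; the bloom threshold is a placeholder, not a value from this study.

```python
# Sketch of the index arithmetic used for bloom detection, assuming MODIS
# surface-reflectance bands are NumPy arrays of the same shape.
import numpy as np

def ndvi(nir, red):
    return (nir - red) / (nir + red)

def evi(nir, red, blue, G=2.5, C1=6.0, C2=7.5, L=1.0):
    return G * (nir - red) / (nir + C1 * red - C2 * blue + L)

def sabi(nir, red, blue, green):
    # Surface Algae Bloom Index as commonly defined: (NIR - R) / (B + G)
    return (nir - red) / (blue + green)

def flag_bloom(nir, red, blue, green, sabi_threshold=-0.10):
    """Boolean mask of pixels whose SABI exceeds an assumed threshold."""
    return sabi(nir, red, blue, green) > sabi_threshold
```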

Keywords: Arabian Gulf, MODIS, red-tide detection, strategic water reserve, water desalination

Procedia PDF Downloads 82
274 Leveraging Remote Assessments and Central Raters to Optimize Data Quality in Rare Neurodevelopmental Disorders Clinical Trials

Authors: Pamela Ventola, Laurel Bales, Sara Florczyk

Abstract:

Background: Fully remote or hybrid administration of clinical outcome measures in rare neurodevelopmental disorders trials is increasing due to the ongoing pandemic and the recognition that remote assessments reduce the burden on families. Many assessments in rare neurodevelopmental disorders trials are complex; however, remote/hybrid trials readily allow for the use of centralized raters to administer and score the scales. The use of centralized raters has many benefits, including reducing site burden; however, a specific impact on data quality has not yet been determined. Purpose: The current study has two aims: a) to evaluate differences in data quality between administrations of a standardized clinical interview completed by centralized raters and those completed by site raters, and b) to evaluate the improvement in scoring accuracy of standardized developmental assessments when scored centrally rather than by site raters. Methods: For aim 1, the Vineland-3, a widely used measure of adaptive functioning, was administered by site raters (n = 52) participating in one of four rare disease trials. The measure was also administered as part of two additional trials that utilized central raters (n = 7). Each rater completed a comprehensive training program on the assessment. Following completion of the training, each clinician completed a Vineland-3 with a mock caregiver. Administrations were recorded and reviewed by a neuropsychologist for administration and scoring accuracy. Raters were able to certify for the trials after demonstrating an accurate administration of the scale. For site raters, 25% of each rater's in-study administrations were reviewed by a neuropsychologist for accuracy of administration and scoring. For central raters, the first two administrations and every 10th administration were reviewed. Aim 2 evaluated the added benefit of centralized scoring on the accuracy of scoring of the Bayley-3, a comprehensive developmental assessment widely used in rare neurodevelopmental disorders trials. Bayley-3 administrations across four rare disease trials were centrally scored: for all administrations, the site rater who administered the Bayley-3 scored the scale, and a centralized rater reviewed the video recording of the administration and also scored the scale to confirm accuracy. Results: For aim 1, site raters completed 138 Vineland-3 administrations. Of the 138 administrations, 53 were reviewed by a neuropsychologist, and four had errors that compromised the validity of the assessment. The central raters completed 180 Vineland-3 administrations; 38 administrations were reviewed, and none had significant errors. For aim 2, 68 administrations of the Bayley-3 were reviewed and scored by both a site rater and a centralized rater. Of these administrations, 25 had errors in scoring that were corrected by the central rater. Conclusion: In rare neurodevelopmental disorders trials, sample sizes are often small, so data quality is critical. The use of central raters inherently decreases site burden, but it also decreases rater variance, as illustrated by the small team of central raters (n = 7) needed to conduct all of the assessments (n = 180) in these trials compared to the number of site raters (n = 53) required for even fewer assessments (n = 138). In addition, the use of central raters dramatically improves the quality of scoring of the assessments.

Keywords: neurodevelopmental disorders, clinical trials, rare disease, central raters, remote trials, decentralized trials

Procedia PDF Downloads 142
273 Content-Aware Image Augmentation for Medical Imaging Applications

Authors: Filip Rusak, Yulia Arzhaeva, Dadong Wang

Abstract:

Machine learning based Computer-Aided Diagnosis (CAD) is gaining popularity in medical imaging and diagnostic radiology. However, it requires large amounts of high-quality, labeled training image data. The training images may come from different sources and be acquired from radiography machines produced by different manufacturers, or be digital or digitized copies of film radiographs, with various sizes and different pixel intensity distributions. In this paper, a content-aware image augmentation method is presented to deal with these variations. The results of the proposed method have been validated graphically by plotting the removed and added seams of pixels on the original images. Two different chest X-ray (CXR) datasets are used in the experiments. The CXRs in the datasets differ in size; some are digital CXRs while the others are digitized from analog CXR films. With the proposed content-aware augmentation method, the Seam Carving algorithm is employed to resize CXRs and the corresponding labels in the form of image masks, followed by histogram matching used to normalize the pixel intensities of the digital radiographs based on the pixel intensity values of the digitized radiographs. We implemented the algorithms, resized the well-known Montgomery dataset to the size of the most frequently used Japanese Society of Radiological Technology (JSRT) dataset, and normalized our digital CXRs for testing. This work resulted in a unified, off-the-shelf CXR dataset composed of radiographs from both the Montgomery and JSRT datasets. The experimental results show that even though the amount of augmentation is large, our algorithm preserves the important information in lung fields, local structures, and the global visual effect adequately. The proposed method can be used to augment training and testing image datasets so that the trained machine learning model can process CXRs from various sources, and it can potentially be used broadly in medical imaging applications.
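
A minimal sketch of the two steps described above, assuming 2-D grayscale CXR arrays: removal of one lowest-energy vertical seam by dynamic programming, and intensity normalization with scikit-image's histogram matching. This is an illustrative re-implementation, not the authors' code, and the function names are hypothetical.

```python
# Illustrative seam removal (dynamic programming on a gradient-based energy map)
# and histogram matching between a digital CXR and a digitized-film reference.
import numpy as np
from skimage.exposure import match_histograms

def remove_one_vertical_seam(img):
    """img: 2-D grayscale array; returns img with one lowest-energy seam removed."""
    energy = np.abs(np.gradient(img, axis=0)) + np.abs(np.gradient(img, axis=1))
    cost = energy.copy()
    for r in range(1, img.shape[0]):                 # cumulative minimum energy
        left = np.roll(cost[r - 1], 1)
        left[0] = np.inf
        right = np.roll(cost[r - 1], -1)
        right[-1] = np.inf
        cost[r] += np.minimum(np.minimum(left, cost[r - 1]), right)
    out = np.empty((img.shape[0], img.shape[1] - 1), dtype=img.dtype)
    col = int(np.argmin(cost[-1]))
    for r in range(img.shape[0] - 1, -1, -1):        # backtrack the seam upwards
        out[r] = np.delete(img[r], col)
        if r:
            lo, hi = max(col - 1, 0), min(col + 1, img.shape[1] - 1)
            col = lo + int(np.argmin(cost[r - 1, lo:hi + 1]))
    return out

def normalize_intensities(digital_cxr, digitized_reference):
    """Match the digital CXR's histogram to a digitized-film CXR reference."""
    return match_histograms(digital_cxr, digitized_reference)
```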

Keywords: computer-aided diagnosis, image augmentation, lung segmentation, medical imaging, seam carving

Procedia PDF Downloads 187
272 Analysis on the Feasibility of Landsat 8 Imagery for Water Quality Parameters Assessment in an Oligotrophic Mediterranean Lake

Authors: V. Markogianni, D. Kalivas, G. Petropoulos, E. Dimitriou

Abstract:

Lake water quality monitoring in combination with the use of earth observation products constitutes a major component of many water quality monitoring programs. Landsat 8 images of Trichonis Lake (Greece) acquired on 30/10/2013 and 30/08/2014 were used to explore the potential of Landsat 8 to estimate water quality parameters, particularly CDOM absorption at specific wavelengths and chlorophyll-a and nutrient concentrations, in this oligotrophic freshwater body, which is characterized by minimal quantitative, temporal, and spatial variability. Water samples were collected at 22 different stations in late August 2014, and the satellite image of the same date was used to statistically correlate the in-situ measurements with various combinations of Landsat 8 bands, in order to develop algorithms that best describe those relationships and accurately calculate the aforementioned water quality components. The optimal models were applied to the image of late October 2013, and the results were validated through comparison with the respective available in-situ data of 2013. Initial results indicated the limited ability of the Landsat 8 sensor to accurately estimate water quality components in an oligotrophic waterbody. In the validation process, ammonium concentration proved to be the most accurately estimated component (R = 0.7), followed by chl-a concentration (R = 0.5) and CDOM absorption at 420 nm (R = 0.3). In-situ nitrate, nitrite, phosphate, and total nitrogen concentrations of 2014 were below the detection limit of the instrument used, so no statistical elaboration was conducted. On the other hand, multiple linear regression between reflectance measures and total phosphorus concentrations resulted in low and statistically insignificant correlations. Our results are in line with other studies in the international literature, indicating that estimates for eutrophic and mesotrophic lakes are more accurate than for oligotrophic ones, owing to the lack of suspended particles detectable by satellite sensors. Nevertheless, although the predictive models developed and applied to the oligotrophic Trichonis Lake are less accurate, they may still be useful indicators of water quality deterioration.
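
The specific band combinations the study found optimal are not listed in the abstract, so the sketch below only illustrates the general step: fit a multiple linear regression from Landsat 8 band features to an in-situ parameter using the 2014 stations, then validate against the 2013 data via the Pearson correlation. The band ratios chosen are assumptions.

```python
# Sketch of the empirical-algorithm step: regress an in-situ parameter (e.g.
# chl-a) on Landsat 8 band combinations from the 2014 image, then validate on
# the 2013 image. Band choices below are illustrative, not the study's optimum.
import numpy as np
from sklearn.linear_model import LinearRegression

def band_features(blue, green, red, nir):
    """Stack per-station reflectances and simple ratios into a feature matrix."""
    return np.column_stack([blue, green, red, nir,
                            blue / green, green / red, nir / red])

def fit_and_validate(bands_2014, chla_2014, bands_2013, chla_2013):
    model = LinearRegression().fit(band_features(*bands_2014), chla_2014)
    preds = model.predict(band_features(*bands_2013))
    r = np.corrcoef(preds, chla_2013)[0, 1]          # Pearson R, as reported
    return model, r
```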

Keywords: landsat 8, oligotrophic lake, remote sensing, water quality

Procedia PDF Downloads 373
271 Using Q Methodology to Capture Attitudes about Academic Resilience in an Online Postgraduate Psychology Course

Authors: Eleanor F. Willard

Abstract:

The attrition rate on distance learning courses can be high. This research examines how online students react when faced with poor results. Using Q methodology, it was found that the level of emotional response and the type of social support sought by students were key influences on their attitude to failure. As educational and psychological researchers, we are adept at measuring learning and achievement, but attitudes towards barriers to learning are not so well researched. The distance learning student has different needs from on-site learners, and, as the attrition rate is notoriously high in the online student population, examining learners' attitudes towards adversity and barriers is important. Self-report measures such as questionnaires are useful for ascertaining levels of constructs such as resilience and academic confidence. Interviewing, too, can capture in-depth detail of the opinions of such a population, but only at the level of individuals. The aim of this research was to ascertain the feelings and attitudes of online students when faced with a setback. This was achieved using Q methodology because of its combination of quantitative and qualitative methods and its suitability for exploratory research; the emphasis of this methodology is on attitudes, not individuals. The work focused on a population of distance learning students who attended a school on site for one week as part of their studies. They were engaged in a psychology master's conversion course and, as such, were graduate students. The Q sort had 30 items taken from the Academic Resilience Scale (ARS-30). The scale items represent three constructs, perseverance, reflecting (including adaptive help-seeking) and negative affect, which are widely acknowledged as relevant concepts underpinning psychological resilience. The Q sort was conducted with 19 students in total: after reading a vignette describing an experience of academic failure, participants arranged the statement cards according to how similar to themselves they believed each statement to be. Commonalities and differences between the sorts from all participants were then analyzed in terms of correlations and response patterns. Following data collection, the participants' responses were analyzed, and the key perspectives (factors) to emerge were labelled 'persevering individuals' and 'emotional networkers'. The differences between the two perspectives centre on the level of emotion felt when faced with barriers and the extent to which students enlist the help of others inside and outside of the university. The dominant factor to emerge, 'persevering individuals', demonstrated that many distance learners are tenacious. However, for other students, the level of emotional and social support is pivotal in helping them complete their studies when facing adversity, as demonstrated by the 'emotional networkers' perspective. This research forms a starting point for further work on engaging and retaining online students at university and can potentially provide insight into how universities can lower attrition rates on distance learning courses.
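
As a rough illustration of the by-person factor analysis at the core of Q methodology, the sketch below correlates participants' sorts and extracts principal factors from the person-by-person correlation matrix. The study's own extraction and rotation choices are not specified in the abstract, so this is not the authors' analysis.

```python
# Illustrative by-person factor extraction for Q methodology: correlate the 19
# participants' 30-item sorts, then take the leading eigenvectors as factors.
import numpy as np

def q_factors(sorts, n_factors=2):
    """sorts: (n_participants, n_statements) array of Q-sort rankings."""
    corr = np.corrcoef(sorts)                 # person-by-person correlations
    eigvals, eigvecs = np.linalg.eigh(corr)   # symmetric matrix -> real spectrum
    order = np.argsort(eigvals)[::-1][:n_factors]
    loadings = eigvecs[:, order] * np.sqrt(eigvals[order])
    return corr, loadings                     # loadings: participants x factors

# Example usage (file name hypothetical):
# sorts = np.loadtxt("qsorts.csv", delimiter=",")   # 19 x 30 in this study
# corr, loadings = q_factors(sorts, n_factors=2)
```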

Keywords: academic resilience, distance learning, online learning, q methodology

Procedia PDF Downloads 106
270 Crack Growth Life Prediction of a Fighter Aircraft Wing Splice Joint Under Spectrum Loading Using Random Forest Regression and Artificial Neural Networks with Hyperparameter Optimization

Authors: Zafer Yüce, Paşa Yayla, Alev Taşkın

Abstract:

There are numerous analytical methods for estimating the crack growth life of a component, and soft computing methods are increasingly used to predict fatigue life. Their ability to model complex relationships and to handle huge amounts of data motivates researchers and industry professionals to employ them for challenging problems. This study focuses on soft computing methods, especially random forest regressors and artificial neural networks with hyperparameter optimization algorithms such as grid search and random grid search, to estimate the crack growth life of an aircraft wing splice joint under variable amplitude loading. The TensorFlow and Scikit-learn libraries of Python are used to build the machine learning models for this study. The material considered in this work is 7050-T7451 aluminum, which is commonly preferred as a structural element in the aerospace industry; regarding the crack type, a corner crack is considered. A finite element model is built for the joint to calculate fastener loads and stresses on the structure. After the finite element model results are validated with analytical calculations, the findings of the finite element model are fed to the AFGROW software to calculate analytical crack growth lives. Based on the Fighter Aircraft Loading Standard for Fatigue (FALSTAFF), 90 unique fatigue loading spectra are developed for various load levels, and these spectra are then used as inputs to the artificial neural network and random forest regression models for predicting crack growth life. Finally, the crack growth life predictions of the machine learning models are compared with the analytical calculations. According to the findings, a good correlation is observed between the analytical and predicted crack growth lives.
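
The abstract names scikit-learn and grid search but not the exact hyperparameter grid or the encoding of the FALSTAFF spectra, so the following is a sketch of the random-forest branch with cross-validated grid search; the grid values, feature matrix, and log-life target transform are assumptions.

```python
# Sketch of the random-forest branch with grid-search hyperparameter tuning.
# Feature encoding of the spectra and the grid values are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

def train_crack_life_model(X_spectra, y_life_cycles):
    """X_spectra: (n_spectra, n_features) spectrum descriptors; y: AFGROW lives."""
    grid = {
        "n_estimators": [100, 300, 500],
        "max_depth": [None, 10, 20],
        "min_samples_leaf": [1, 2, 4],
    }
    search = GridSearchCV(RandomForestRegressor(random_state=0), grid,
                          cv=5, scoring="neg_mean_absolute_error")
    search.fit(X_spectra, np.log(y_life_cycles))   # log-life often stabilizes fits
    return search.best_estimator_, search.best_params_
```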

Keywords: aircraft, fatigue, joint, life, optimization, prediction

Procedia PDF Downloads 144
269 Performance Demonstration of Extendable NSPO Space-Borne GPS Receiver

Authors: Hung-Yuan Chang, Wen-Lung Chiang, Kuo-Liang Wu, Chen-Tsung Lin

Abstract:

The National Space Organization (NSPO) completed in 2014 the development of a space-borne GPS receiver, including design, manufacture, comprehensive functional testing, environmental qualification testing, and so on. The main performance figures of this receiver include 8-meter positioning accuracy, 0.05 m/s velocity accuracy, a cold-start time of at most 90 seconds, and operation in high-dynamic scenarios of up to 15 g. The receiver will be integrated into the autonomous FORMOSAT-7 NSPO-built satellite scheduled to be launched in 2019 to execute pre-defined scientific missions. The flight model of this receiver, manufactured in early 2015, will undergo comprehensive functional tests and environmental acceptance tests, which are expected to be completed by the end of 2015. The space-borne GPS receiver is a pure software design in which all GPS baseband signal processing is executed by a digital signal processor (DSP), of which currently only 50% of the throughput is used. In response to the booming global navigation satellite systems, NSPO will gradually expand this receiver into a multi-mode, multi-band, high-precision navigation receiver, and even a science payload, such as a reflectometry receiver for a global navigation satellite system. The fundamental purpose of this extension study is to port software algorithms that carry a large computational load and reusable code, such as signal acquisition and correlation, to the FPGA, while its processor remains responsible for operational control, the navigation solution, orbit propagation, and so on. Because FPGA technology is developing and evolving rapidly, the new system architecture upgraded with an FPGA should be able to achieve the goal of a multi-mode, multi-band, high-precision navigation receiver or scientific receiver. Finally, the test results show that the new system architecture not only retains the original overall performance but also sets aside more resources for future expansion. This paper explains the detailed DSP/FPGA architecture, development, test results, and the goals of the next development stage of this receiver.
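
To make the ported algorithms concrete, the sketch below illustrates a standard parallel code-phase acquisition by circular (FFT-based) correlation, which is the kind of computation-heavy stage the abstract proposes to move to the FPGA; the replica-code generator, sampling parameters, and Doppler grid are assumptions, and the flight DSP/FPGA implementation will differ.

```python
# Sketch of GPS signal acquisition: parallel code-phase search by circular
# correlation in the frequency domain over a grid of Doppler hypotheses.
import numpy as np

def acquire(samples, ca_replica, fs, doppler_bins=np.arange(-5000, 5001, 500)):
    """Return (best Doppler in Hz, code phase in samples, peak metric)."""
    n = len(ca_replica)
    t = np.arange(n) / fs
    replica_fft = np.conj(np.fft.fft(ca_replica))
    best = (None, None, 0.0)
    for fd in doppler_bins:
        wiped = samples[:n] * np.exp(-2j * np.pi * fd * t)   # carrier wipe-off
        corr = np.abs(np.fft.ifft(np.fft.fft(wiped) * replica_fft))
        k = int(np.argmax(corr))
        if corr[k] > best[2]:
            best = (fd, k, float(corr[k]))
    return best
```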

Keywords: space-borne, GPS receiver, DSP, FPGA, multi-mode multi-band

Procedia PDF Downloads 347
268 Embedded Hybrid Intuition: A Deep Learning and Fuzzy Logic Approach to Collective Creation and Computational Assisted Narratives

Authors: Roberto Cabezas H

Abstract:

The current work presents the methodology developed to create narrative lighting spaces for the multimedia performance piece 'cluster: the vanished paradise.' This empirical research is focused on exploring unconventional roles for machines in subjective creative processes, by delving into the semantics of data and machine intelligence algorithms in hybrid technological and creative contexts to expand epistemic domains through human-machine cooperation. The creative process in scenic and performing arts is guided mostly by intuition; starting from that idea, we developed an approach to embed collective intuition in computational creative systems by joining the properties of Generative Adversarial Networks (GANs) and fuzzy clustering in a semi-supervised data creation and analysis pipeline. The model uses GANs to learn from phenomenological data (data generated from experience with lighting scenography) and algorithmic design data (data augmented by procedural design methods); fuzzy clustering is then applied to the data artificially created by the GANs to define narrative transitions built on a membership index. This process allowed for the creation of simple and complex spaces with expressive capabilities, based on position and light intensity as the parameters that guide the narrative. Hybridization comes not only from the human-machine symbiosis but also from the integration of different techniques in the implementation of the aided design system. Machine intelligence tools as proposed in this work are well suited to redefine collaborative creation by learning to express and expand a conglomerate of ideas and a wide range of opinions for the creation of sensory experiences. We found in GANs and fuzzy logic ideal tools to develop new computational models based on interaction, learning, emotion, and imagination that expand the traditional algorithmic model of computation.
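
A minimal fuzzy c-means sketch of the clustering step described above: GAN-generated lighting states (e.g. position and light intensity) are clustered, and the membership degrees can serve as narrative-transition weights. The implementation and parameters are illustrative, not the authors' pipeline.

```python
# Minimal fuzzy c-means: cluster generated lighting states and return the
# membership matrix whose columns give graded transitions between clusters.
import numpy as np

def fuzzy_cmeans(X, c=3, m=2.0, iters=100, seed=0):
    """X: (n_samples, n_features); returns (centers, membership matrix U of shape c x n)."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, X.shape[0]))
    U /= U.sum(axis=0)                              # each column sums to 1
    for _ in range(iters):
        Um = U ** m
        centers = (Um @ X) / Um.sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2 / (m - 1)))              # standard membership update
        U /= U.sum(axis=0)
    return centers, U
```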

Keywords: fuzzy clustering, generative adversarial networks, human-machine cooperation, hybrid collective data, multimedia performance

Procedia PDF Downloads 118