Search results for: normalized architectures
61 Comparison of GIS-Based Soil Erosion Susceptibility Models Using Support Vector Machine, Binary Logistic Regression and Artificial Neural Network in the Southwest Amazon Region
Authors: Elaine Lima Da Fonseca, Eliomar Pereira Da Silva Filho
Abstract:
The modeling of areas susceptible to soil loss by hydro-erosive processes provides a simplified representation of reality, with the purpose of predicting future behavior from the observation and interaction of a set of geoenvironmental factors. The models of potential areas for soil loss will be obtained through binary logistic regression, artificial neural networks, and support vector machines. The municipality of Colorado do Oeste, in the south of the western Amazon, was chosen because of soil degradation caused by anthropogenic activities, such as agriculture, road construction, overgrazing, and deforestation, and because of its environmental and socioeconomic configuration. Initially, a soil erosion inventory map will be constructed through various field investigations, including the use of remotely piloted aircraft, orbital imagery, and the PLANAFLORO/RO database. One hundred sampling units with the presence of erosion will be selected based on the assumptions indicated in the literature, and, to complement the dichotomous analysis, 100 units with no erosion will be randomly designated. The next step will be the selection of the predictive parameters that exert, jointly, directly or indirectly, some influence on the occurrence of soil erosion events. The chosen predictors are altitude, slope, aspect (orientation of the slope), curvature of the slope, composite topographic index, flow power index, lineament density, normalized difference vegetation index, drainage density, lithology, soil type, erosivity, and ground surface temperature. After evaluating the relative contribution of each predictor variable, the erosion susceptibility model will be applied to the municipality of Colorado do Oeste - Rondônia through the SPSS Statistics 26 software. Evaluation of the model will be based on the Cox & Snell R², the Nagelkerke R², the Hosmer-Lemeshow test, the log-likelihood value, and the Wald test, in addition to analysis of the confusion matrix, ROC curve, and cumulative gains according to the model specification. The validation of the synthesis map of potential soil erosion risk resulting from both models will occur by means of Kappa indices, accuracy, and sensitivity, as well as by field verification of the erosion susceptibility classes using drone photogrammetry. Thus, the mapping is expected to yield the following erosion susceptibility classes: very low, low, moderate, high, and very high, which may constitute a screening tool to identify areas where more detailed investigations need to be carried out, allowing public resources to be applied more efficiently.
Keywords: modeling, susceptibility to erosion, artificial intelligence, Amazon
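As an illustration of the planned evaluation, the following sketch reproduces the logistic-regression metrics listed above (Cox & Snell R², Nagelkerke R², Wald statistics, confusion matrix, ROC AUC) in Python rather than SPSS Statistics 26; the file name and predictor columns are hypothetical placeholders.

```python
# Illustrative sketch only: the study uses SPSS Statistics 26; this reproduces the
# same logistic-regression evaluation metrics in Python. File and column names
# are hypothetical placeholders for the sampling-unit table described above.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score, confusion_matrix

df = pd.read_csv("erosion_samples.csv")          # 100 erosion + 100 non-erosion units (assumed layout)
predictors = ["altitude", "slope", "aspect", "curvature", "cti", "spi",
              "lineament_density", "ndvi", "drainage_density", "erosivity", "lst"]
X = sm.add_constant(df[predictors])
y = df["erosion"]                                 # 1 = erosion present, 0 = absent

model = sm.Logit(y, X).fit(disp=False)
n = len(y)

# Pseudo-R² values as reported by SPSS
cox_snell = 1 - np.exp((2.0 / n) * (model.llnull - model.llf))
nagelkerke = cox_snell / (1 - np.exp((2.0 / n) * model.llnull))

prob = model.predict(X)
print(model.summary())                            # coefficients with Wald z-statistics
print("Cox & Snell R²:", round(cox_snell, 3))
print("Nagelkerke R²:", round(nagelkerke, 3))
print("ROC AUC:", round(roc_auc_score(y, prob), 3))
print(confusion_matrix(y, (prob >= 0.5).astype(int)))
```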
60 Existential Affordances and Psychopathology: A Gibsonian Analysis of Dissociative Identity Disorder
Authors: S. Alina Wang
Abstract:
A Gibsonian approach is used to understand the existential dimensions of the human ecological niche. Then, this existential-Gibsonian framework is applied to rethinking Hacking’s historical analysis of multiple personality disorder. This research culminates in a generalized account of psychiatric illness from an enactivist lens. In conclusion, reflections on the implications of this account on approaches to psychiatric treatment are mentioned. J.J. Gibson’s theory of affordances centered on affordances of sensorimotor varieties, which guide basic behaviors relative to organisms’ vital needs and physiological capacities (1979). Later theorists, notably Neisser (1988) and Rietveld (2014), expanded on the theory of affordances to account for uniquely human activities relative to the emotional, intersubjective, cultural, and narrative aspects of the human ecological niche. This research shows that these affordances are structured by what Haugeland (1998) calls existential commitments, which draws on Heidegger’s notion of dasein (1927) and Merleau-Ponty’s account of existential freedom (1945). These commitments organize the existential affordances that fill an individual’s environment and guide their thoughts, emotions, and behaviors. This system of a priori existential commitments and a posteriori affordances is called existential enactivism. For humans, affordances do not only elicit motor responses and appear as objects with instrumental significance. Affordances also, and possibly primarily, determine so-called affective and cognitive activities and structure the wide range of kinds (e.g., instrumental, aesthetic, ethical) of significances of objects found in the world. Then existential enactivism is applied to understanding the psychiatric phenomenon of multiple personality disorder (precursor of the current diagnosis of dissociative identity disorder). A reinterpretation of Hacking’s (1998) insights into the history of this particular disorder and his generalizations on the constructed nature of most psychiatric illness is taken on. Enactivist approaches sensitive to existential phenomenology can provide a deeper understanding of these matters. Conceptualizing psychiatric illness as strictly a disorder in the head (whether parsed as a disorder of brain chemicals or meaning-making capacities encoded in psychological modules) is incomplete. Rather, psychiatric illness must also be understood as a disorder in the world, or in the interconnected networks of existential affordances that regulate one’s emotional, intersubjective, and narrative capacities. All of this suggests that an adequate account of psychiatric illness must involve (1) the affordances that are the sources of existential hindrance, (2) the existential commitments structuring these affordances, and (3) the conditions of these existential commitments. Approaches to treatment of psychiatric illness would be more effective by centering on the interruption of normalized behaviors corresponding to affordances targeted as sources of hindrance, the development of new existential commitments, and the practice of new behaviors that erect affordances relative to these reformed commitments.Keywords: affordance, enaction, phenomenology, psychiatry, psychopathology
59 Quantitative, Preservative Methodology for Review of Interview Transcripts Using Natural Language Processing
Authors: Rowan P. Martnishn
Abstract:
During the execution of a National Endowment for the Arts grant, approximately 55 interviews were collected from professionals across various fields. These interviews were used to create deliverables – historical connections for creations that began as art and evolved entirely into computing technology. With dozens of hours’ worth of transcripts to be analyzed by qualitative coders, a quantitative methodology was created to sift through the documents. The initial step was to both clean and format all the data. First, a basic spelling and grammar check was applied, as well as a Python script for normalized formatting, which used an open-source grammatical formatter to make the data as coherent as possible. Ten documents were randomly selected for manual review, in which words frequently mistranscribed during transcription were recorded and replaced throughout all other documents. Then, to remove all banter and side comments, the transcripts were spliced into paragraphs (separated by change in speaker) and all paragraphs with fewer than 300 characters were removed. Secondly, a keyword extractor, a form of natural language processing in which significant words in a document are selected, was run on each paragraph for all interviews. Every proper noun was put into a data structure corresponding to its respective interview. From there, a Bidirectional and Auto-Regressive Transformer (B.A.R.T.) summary model was applied to each paragraph that included any of the proper nouns selected from the interview. At this stage, the information to review had been reduced from about 60 hours’ worth of data to 20. The data was further processed through light, manual observation – any summaries which proved to fit the criteria of the proposed deliverable were selected, as well as their locations within the document. This narrowed the data down to about 5 hours’ worth of processing. The qualitative researchers were then able to find 8 more connections in addition to our previous 4, exceeding our minimum quota of 3 to satisfy the grant. Major findings of the study and subsequent curation of this methodology raised a conceptual point crucial to working with qualitative data of this magnitude. In the use of artificial intelligence, there is a general trade-off in a model between breadth of knowledge and specificity. If the model has too much knowledge, the user risks leaving out important data (too general). If the tool is too specific, it has not seen enough data to be useful. Thus, this methodology proposes a solution to this trade-off. The data is never altered outside of grammatical and spelling checks. Instead, the important information is marked, creating an indicator of where the significant data is without compromising its purity. Secondly, the data is chunked into smaller paragraphs, giving specificity, and then cross-referenced with the keywords (allowing generalization over the whole document). This way, no data is harmed, and qualitative experts can go over the raw data instead of using highly manipulated results. Given the success in deliverable creation as well as the circumvention of this trade-off, this methodology should stand as a model for synthesizing qualitative data while maintaining its original form.
Keywords: B.A.R.T. model, keyword extractor, natural language processing, qualitative coding
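A minimal sketch of the pipeline described above (speaker-separated paragraphs, a 300-character filter, proper-noun extraction, and BART summarization of the matching paragraphs) is shown below. The directory layout, the capitalization heuristic, and the facebook/bart-large-cnn checkpoint are assumptions for illustration, not the exact tools used in the study.

```python
# Sketch of the transcript-review pipeline, assuming plain-text transcripts with
# speaker turns separated by blank lines; heuristics and model checkpoint are assumed.
from pathlib import Path
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def proper_nouns(text: str) -> set[str]:
    # crude stand-in for a keyword extractor: capitalized tokens that are not sentence-initial
    tokens = text.split()
    return {t.strip(".,;:()\"'") for prev, t in zip(tokens, tokens[1:])
            if t[:1].isupper() and not prev.endswith((".", "!", "?"))}

for path in Path("transcripts").glob("*.txt"):
    paragraphs = [p.strip() for p in path.read_text().split("\n\n")]
    paragraphs = [p for p in paragraphs if len(p) >= 300]     # drop banter and side comments
    doc_nouns = set()
    for p in paragraphs:
        doc_nouns |= proper_nouns(p)
    for i, para in enumerate(paragraphs):
        if doc_nouns & proper_nouns(para):                    # paragraph mentions a key proper noun
            summary = summarizer(para, max_length=60, min_length=15, truncation=True)
            print(path.name, i, summary[0]["summary_text"])   # marked location + condensed text
```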
58 Deep Learning Approach for Colorectal Cancer’s Automatic Tumor Grading on Whole Slide Images
Authors: Shenlun Chen, Leonard Wee
Abstract:
Tumor grading is an essential reference for colorectal cancer (CRC) staging and survival prognostication. The widely used World Health Organization (WHO) grading system defines the histological grade of CRC adenocarcinoma based on the density of glandular formation on whole slide images (WSI). Tumors are classified as well-, moderately-, poorly- or un-differentiated depending on the percentage of the tumor that is gland-forming: >95%, 50-95%, 5-50% and <5%, respectively. However, manually grading WSIs is a time-consuming process and can cause observer error due to subjective judgment and unnoticed regions. Furthermore, pathologists’ grading is usually coarse, while a finer and continuous differentiation grade may help to stratify CRC patients better. In this study, a deep learning based automatic differentiation grading algorithm was developed and evaluated by survival analysis. Firstly, a gland segmentation model was developed for segmenting gland structures. Gland regions of WSIs were delineated and used for differentiation annotation. Tumor regions were annotated by experienced pathologists as high-, medium-, or low-differentiation and normal tissue, corresponding to tumor with clear, unclear, or no gland structure and non-tumor, respectively. Then a differentiation prediction model was developed on these human annotations. Finally, all enrolled WSIs were processed by the gland segmentation model and the differentiation prediction model. The differentiation grade can be calculated from the deep learning models’ prediction of tumor regions and tumor differentiation status according to the WHO definitions. If a patient had multiple WSIs, the highest differentiation grade was chosen. Additionally, the differentiation grade was normalized to a scale between 0 and 1. The Cancer Genome Atlas COAD (TCGA-COAD) project was enrolled in this study. For the gland segmentation model, the area under the receiver operating characteristic (ROC) curve reached 0.981 and accuracy reached 0.932 in the validation set. For the differentiation prediction model, the ROC AUC reached 0.983, 0.963, 0.963, 0.981 and accuracy reached 0.880, 0.923, 0.668, 0.881 for the groups of low-, medium-, high-differentiation and normal tissue in the validation set. Four hundred and one patients were selected after removing WSIs without gland regions and patients without follow-up data. The concordance index reached 0.609. An optimized cut-off point of 51% was found by the “maxstat” method, which was almost the same as the WHO system’s cut-off point of 50%. Both the WHO system’s cut-off point and the optimized cut-off point performed impressively in Kaplan-Meier curves, and both log-rank test p-values were below 0.005. In this study, the gland structure of WSIs and the differentiation status of tumor regions were proven to be predictable through deep learning methods. A finer and continuous differentiation grade can also be automatically calculated through the above models. The differentiation grade was proven to stratify CRC patients well in survival analysis, with an optimized cut-off point almost the same as that of the WHO tumor grading system. The tool of automatically calculating differentiation grade may show potential in the field of therapy decision-making and personalized treatment.
Keywords: colorectal cancer, differentiation, survival analysis, tumor grading
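The survival-analysis step (Kaplan-Meier curves split at a grade cut-off, a log-rank test, and the concordance index) can be illustrated with the short sketch below; the column names and input file are assumed placeholders, not the TCGA-COAD export used in the study.

```python
# Hedged illustration of the survival evaluation: split patients at a differentiation-grade
# cut-off, compare Kaplan-Meier curves with a log-rank test, and report the concordance index.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test
from lifelines.utils import concordance_index

df = pd.read_csv("patients.csv")   # assumed columns: grade (0-1), time (days), event (1 = death)

c_index = concordance_index(df["time"], -df["grade"], df["event"])  # higher grade treated as higher risk
print("Concordance index:", round(c_index, 3))

cutoff = 0.50                      # WHO-style cut-off; the study also reports 0.51 via maxstat
high, low = df[df["grade"] > cutoff], df[df["grade"] <= cutoff]

km = KaplanMeierFitter()
for name, group in [("high grade", high), ("low grade", low)]:
    km.fit(group["time"], group["event"], label=name)
    print(name, "median survival:", km.median_survival_time_)

result = logrank_test(high["time"], low["time"], high["event"], low["event"])
print("log-rank p-value:", result.p_value)
```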
57 Methods for Early Detection of Invasive Plant Species: A Case Study of Hueston Woods State Nature Preserve
Authors: Suzanne Zazycki, Bamidele Osamika, Heather Craska, Kaelyn Conaway, Reena Murphy, Stephanie Spence
Abstract:
Invasive Plant Species (IPS) management is an important component of the effective preservation and conservation of natural lands. IPS are non-native plants which can aggressively encroach upon native species and pose a significant threat to the ecology, public health, and social welfare of a community. The presence of IPS in U.S. nature preserves has caused economic costs estimated to exceed $26 billion a year. While different methods have been identified to control IPS, few methods have been recognized for the early detection of IPS. This study examined methods for the early detection of IPS in Hueston Woods State Nature Preserve. A mixed-methods research design was adopted in this four-phase study. The first phase entailed data gathering and described the characteristics and qualities of IPS and the importance of early detection (ED). The second phase explored ED methods; Geographic Information Systems (GIS) and citizen science were identified as ED methods for IPS. The third phase of the study involved the creation of hotspot maps to identify likely areas for IPS growth, while the fourth phase involved testing and evaluating mobile applications that can support the efforts of citizen scientists in IPS detection. Literature reviews were conducted on IPS and ED methods, and four regional experts from ODNR and Miami University were interviewed. A questionnaire was used to gather information about ED methods used across the state. The findings revealed that geospatial methods, including Unmanned Aerial Vehicles (UAVs), Multispectral Satellites (MSS), and the Normalized Difference Vegetation Index (NDVI), are not feasible for the early detection of IPS, as they require GIS expertise, are still an emerging technology, and are not suitable for every habitat. Therefore, other ED options were explored, including predicting areas where IPS will grow, which can be done by monitoring areas similar to the species’ native habitat. The literature review and interviews indicated that IPS tend to grow in frequently disturbed areas such as along trails, shorelines, and streambanks. The research team called these areas “hotspots” and created maps of these hotspots specifically for HW NP to support and narrow the efforts of citizen scientists and staff in the ED of IPS. The results further showed that utilizing citizen scientists in the ED of IPS is feasible, especially through single-day events or passive monitoring challenges. The study concluded that the creation of hotspot maps to direct the efforts of citizen scientists is effective for the early detection of IPS. Several recommendations were made, among them the creation of hotspot maps to narrow ED efforts as citizen scientists continue to work in the preserves, and the use of citizen science volunteers to identify and record emerging IPS.
Keywords: early detection, hueston woods state nature preserve, invasive plant species, hotspots
56 Positron Emission Tomography Parameters as Predictors of Pathologic Response and Nodal Clearance in Patients with Stage IIIA NSCLC Receiving Trimodality Therapy
Authors: Andrea L. Arnett, Ann T. Packard, Yolanda I. Garces, Kenneth W. Merrell
Abstract:
Objective: Pathologic response following neoadjuvant chemoradiation (CRT) has been associated with improved overall survival (OS). Conflicting results have been reported regarding the pathologic predictive value of positron emission tomography (PET) response in patients with stage III lung cancer. The aim of this study was to evaluate the correlation between post-treatment PET response and pathologic response utilizing novel FDG-PET parameters. Methods: This retrospective study included patients with non-metastatic, stage IIIA (N2) NSCLC treated with CRT followed by resection. All patients underwent PET prior to and after neoadjuvant CRT. Univariate analysis was utilized to assess correlations between PET response, nodal clearance, pCR, and near-complete pathologic response (defined as microscopic residual disease or less). Maximal standard uptake value (SUV), standard uptake ratio (SUR) [normalized independently to the liver (SUR-L) and blood pool (SUR-BP)], metabolic tumor volume (MTV), and total lesion glycolysis (TLG) were measured pre- and post-chemoradiation. Results: A total of 44 patients were included for review. Median age was 61.9 years, and median follow-up was 2.6 years. Histologic subtypes included adenocarcinoma (72.2%) and squamous cell carcinoma (22.7%), and the majority of patients had T2 disease (59.1%). The rate of pCR and near-complete pathologic response within the primary lesion was 28.9% and 44.4%, respectively. The average reduction in SUVmₐₓ was 9.2 units (range -1.9 to 32.8), and the majority of patients demonstrated some degree of favorable treatment response. SUR-BP and SUR-L showed a mean reduction of 4.7 units (range -0.1 to 17.3) and 3.5 units (range -1.7 to 12.6), respectively. Variation in PET response was not significantly associated with histologic subtype, concurrent chemotherapy type, stage, or radiation dose. No significant correlation was found between pathologic response and absolute change in MTV or TLG. Reductions in SUVmₐₓ and SUR were associated with an increased rate of pathologic response (p ≤ 0.02). This correlation was not impacted by normalization of SUR to liver versus mediastinal blood pool. A threshold of >75% decrease in SUR-L correlated with near-complete response, with a sensitivity of 57.9% and specificity of 85.7%, as well as positive and negative predictive values of 78.6% and 69.2%, respectively (diagnostic odds ratio [DOR]: 5.6, p=0.02). A threshold of >50% decrease in SUR was also significantly associated with pathologic response (DOR 12.9, p=0.2), but specificity was substantially lower when utilizing this threshold value. No significant association was found between nodal PET parameters and pathologic nodal clearance. Conclusions: Our results suggest that treatment response to neoadjuvant therapy as assessed on PET imaging can be a predictor of pathologic response when evaluated via SUV and SUR. SUR parameters were associated with higher diagnostic odds ratios, suggesting improved predictive utility compared to SUVmₐₓ. MTV and TLG did not prove to be significant predictors of pathologic response but may warrant further investigation in a larger cohort of patients.
Keywords: lung cancer, positron emission tomography (PET), standard uptake ratio (SUR), standard uptake value (SUV)
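The threshold analysis reported above (percent decrease in liver-normalized SUR and the resulting sensitivity, specificity, and diagnostic odds ratio) can be sketched as follows; the data frame values are toy placeholders rather than the study cohort.

```python
# Illustrative sketch of the ">75% decrease in SUR-L" rule and its diagnostic statistics.
# The data frame layout and values are assumed, not taken from the study.
import pandas as pd

df = pd.DataFrame({
    "suv_pre": [14.2, 9.8, 11.5, 7.4],       # toy pre-CRT tumor SUVmax values
    "suv_post": [2.1, 5.6, 1.9, 4.0],
    "liver_suv_pre": [2.4, 2.2, 2.5, 2.3],
    "liver_suv_post": [2.3, 2.1, 2.4, 2.2],
    "near_complete_response": [1, 0, 1, 0],
})

sur_pre = df["suv_pre"] / df["liver_suv_pre"]         # SUR-L before CRT
sur_post = df["suv_post"] / df["liver_suv_post"]      # SUR-L after CRT
pct_decrease = 100 * (sur_pre - sur_post) / sur_pre

predicted = (pct_decrease > 75).astype(int)
truth = df["near_complete_response"]

tp = int(((predicted == 1) & (truth == 1)).sum())
tn = int(((predicted == 0) & (truth == 0)).sum())
fp = int(((predicted == 1) & (truth == 0)).sum())
fn = int(((predicted == 0) & (truth == 1)).sum())

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
dor = (tp * tn) / (fp * fn) if fp and fn else float("inf")   # diagnostic odds ratio
print(sensitivity, specificity, dor)
```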
55 Seismic Response of Reinforced Concrete Buildings: Field Challenges and Simplified Code Formulas
Authors: Michel Soto Chalhoub
Abstract:
Building code-related literature provides recommendations on normalizing approaches to the calculation of the dynamic properties of structures. Most building codes make a distinction among types of structural systems, construction material, and configuration through a numerical coefficient in the expression for the fundamental period. The period is then used in normalized response spectra to compute base shear. The typical parameter used in simplified code formulas for the fundamental period is overall building height raised to a power determined from analytical and experimental results. However, reinforced concrete buildings, which constitute the majority of built space in less developed countries, pose additional challenges compared to those built with a homogeneous material such as steel, or with concrete under stricter quality control. In the present paper, the particularities of reinforced concrete buildings are explored and related to current methods of equivalent static analysis. A comparative study is presented between the Uniform Building Code, commonly used for buildings within and outside the USA, and data from the Middle East used to model 151 reinforced concrete buildings of varying number of bays, number of floors, overall building height, and individual story height. The fundamental period was calculated using eigenvalue matrix computation. The results were also used in a separate regression analysis where the computed period serves as the dependent variable, while five building properties serve as independent variables. The statistical analysis shed light on important parameters that simplified code formulas need to account for, including individual story height, overall building height, floor plan, number of bays, and concrete properties. Such inclusions are important for reinforced concrete buildings under special conditions due to the level of concrete damage, aging, or materials quality control during construction. Overall results of the present analysis show that simplified code formulas for fundamental period and base shear may be applied, but they require revisions to account for multiple parameters. The conclusion above is confirmed by the analytical model, where fundamental periods were computed using numerical techniques and eigenvalue solutions. This recommendation is particularly relevant to code upgrades in less developed countries, where it is customary to adopt, and mildly adapt, international codes. We also note the necessity of further research using empirical data from buildings in Lebanon that were subjected to severe damage due to impulse loading or accelerated aging. However, we excluded this study from the present paper and left it for future research, as it has its own peculiarities and requires a different type of analysis.
Keywords: seismic behaviour, reinforced concrete, simplified code formulas, equivalent static analysis, base shear, response spectra
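The simplified code procedure referred to above can be sketched in a few lines: a height-based period formula of the form T = Ct·hn^(3/4) feeding an equivalent static base shear with the usual upper and lower bounds. The coefficient values below are illustrative placeholders; the governing values depend on the seismic zone, soil profile, and structural system of the applicable code.

```python
# Minimal sketch of a UBC-style equivalent static procedure; all coefficients are
# illustrative placeholders, not values prescribed for any particular site or system.
def fundamental_period(hn_m: float, ct: float = 0.0731) -> float:
    """Approximate fundamental period (s) from overall height hn (m): T = Ct * hn^(3/4)."""
    return ct * hn_m ** 0.75

def base_shear(weight_kn: float, t_s: float, cv: float = 0.40, ca: float = 0.28,
               importance: float = 1.0, r: float = 5.5) -> float:
    """Equivalent static base shear (kN) with typical upper and lower bounds."""
    v = cv * importance * weight_kn / (r * t_s)
    v = min(v, 2.5 * ca * importance * weight_kn / r)   # short-period cap
    v = max(v, 0.11 * ca * importance * weight_kn)      # minimum design shear
    return v

t = fundamental_period(hn_m=30.0)          # e.g., a 10-storey RC frame about 30 m tall
print(round(t, 2), "s,", round(base_shear(weight_kn=25000.0, t_s=t)), "kN")
```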
54 Computer Aided Discrimination of Benign and Malignant Thyroid Nodules by Ultrasound Imaging
Authors: Akbar Gharbali, Ali Abbasian Ardekani, Afshin Mohammadi
Abstract:
Introduction: Thyroid nodules have an incidence of 33-68% in the general population. Approximately 5-15% of these nodules are malignant. Early detection and treatment of thyroid nodules increase the cure rate and enable optimal treatment. Among medical imaging methods, ultrasound is the imaging technique of choice for the assessment of thyroid nodules. Confirming the diagnosis usually demands repeated fine-needle aspiration biopsy (FNAB), so current management carries morbidity and non-zero mortality. Objective: To explore the diagnostic potential of automatic texture analysis (TA) methods in differentiating benign and malignant thyroid nodules on ultrasound imaging, in order to support reliable diagnosis and monitoring of thyroid nodules in their early stages without the need for biopsy. Material and Methods: The thyroid US image database consisted of 70 patients (26 benign and 44 malignant), reported by a radiologist and proven by biopsy. Two slices per patient were loaded in MaZda software version 4.6 for automatic texture analysis. Regions of interest (ROIs) were defined within the abnormal part of the thyroid nodule ultrasound images. Gray levels within each ROI were normalized according to three normalization schemes: N1, default or original gray levels; N2, ±3 sigma, i.e., dynamic intensity limited to µ ± 3σ; and N3, intensity limited to the 1%-99% range. Up to 270 multiscale texture feature parameters per ROI per normalization scheme were computed from the well-known statistical methods employed in the MaZda software. From a statistical point of view, not all calculated texture feature parameters are useful for texture analysis. Therefore, the features were reduced to the 10 best and most effective features per normalization scheme, based on the maximum Fisher coefficient and the minimum probability of classification error and average correlation coefficient (POE+ACC). These features were analyzed under two standardization states (standard (S) and non-standard (NS)) with Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Non-Linear Discriminant Analysis (NDA). A 1-NN classifier was used to distinguish between benign and malignant tumors. Confusion matrix and receiver operating characteristic (ROC) curve analysis were used to formulate more reliable criteria for the performance of the employed texture analysis methods. Results: The results demonstrated the influence of the normalization schemes and reduction methods on the effectiveness of the obtained features as descriptors, and hence on discrimination power and classification results. The feature subset selected under 1%-99% normalization, POE+ACC reduction, and NDA yielded a high discrimination performance in distinguishing benign from malignant thyroid nodules, with an area under the ROC curve (Az) of 0.9722, corresponding to a sensitivity of 94.45%, a specificity of 100%, and an accuracy of 97.14%. Conclusions: Our results indicate that computer-aided diagnosis is a reliable method and can provide useful information to help radiologists in the detection and classification of benign and malignant thyroid nodules.
Keywords: ultrasound imaging, thyroid nodules, computer aided diagnosis, texture analysis, PCA, LDA, NDA
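The three gray-level normalization schemes named above are simple to state explicitly; the sketch below is a NumPy illustration of N1 (full dynamic range), N2 (µ ± 3σ clipping), and N3 (1%-99% clipping), not the MaZda implementation itself.

```python
# Illustration of the ROI gray-level normalization schemes N1, N2, and N3 described above.
import numpy as np

def normalize_roi(roi: np.ndarray, scheme: str = "N3", levels: int = 256) -> np.ndarray:
    """Clip and rescale an ROI to integer gray levels in [0, levels-1]."""
    roi = roi.astype(float)
    if scheme == "N1":                       # default: use the original dynamic range
        lo, hi = roi.min(), roi.max()
    elif scheme == "N2":                     # mean ± 3 standard deviations
        mu, sigma = roi.mean(), roi.std()
        lo, hi = mu - 3 * sigma, mu + 3 * sigma
    elif scheme == "N3":                     # 1st to 99th percentile
        lo, hi = np.percentile(roi, [1, 99])
    else:
        raise ValueError(scheme)
    clipped = np.clip(roi, lo, hi)
    return np.round((clipped - lo) / (hi - lo + 1e-12) * (levels - 1)).astype(np.uint8)

roi = np.random.default_rng(0).normal(120, 25, size=(64, 64))   # stand-in ultrasound ROI
print(normalize_roi(roi, "N2").min(), normalize_roi(roi, "N2").max())
```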
53 The Brain’s Attenuation Coefficient as a Potential Estimator of Temperature Elevation during Intracranial High Intensity Focused Ultrasound Procedures
Authors: Daniel Dahis, Haim Azhari
Abstract:
Noninvasive image-guided intracranial treatments using high intensity focused ultrasound (HIFU) are on the course of translation into clinical applications. They include, among others, tumor ablation, hyperthermia, and blood-brain-barrier (BBB) penetration. Since many of these procedures are associated with local temperature elevation, thermal monitoring is essential. MRI constitutes an imaging method with high spatial resolution and thermal mapping capacity. It is the currently leading modality for temperature guidance, commonly under the name MRgHIFU (magnetic-resonance guided HIFU). Nevertheless, MRI is a very expensive non-portable modality which jeopardizes its accessibility. Ultrasonic thermal monitoring, on the other hand, could provide a modular, cost-effective alternative with higher temporal resolution and accessibility. In order to assess the feasibility of ultrasonic brain thermal monitoring, this study investigated the usage of brain tissue attenuation coefficient (AC) temporal changes as potential estimators of thermal changes. Newton's law of cooling describes a temporal exponential decay behavior for the temperature of a heated object immersed in a relatively cold surrounding. Similarly, in the case of cerebral HIFU treatments, the temperature in the region of interest, i.e., focal zone, is suggested to follow the same law. Thus, it was hypothesized that the AC of the irradiated tissue may follow a temporal exponential behavior during cool down regime. Three ex-vivo bovine brain tissue specimens were inserted into plastic containers along with four thermocouple probes in each sample. The containers were placed inside a specially built ultrasonic tomograph and scanned at room temperature. The corresponding pixel-averaged AC was acquired for each specimen and used as a reference. Subsequently, the containers were placed in a beaker containing hot water and gradually heated to about 45ᵒC. They were then repeatedly rescanned during cool down using ultrasonic through-transmission raster trajectory until reaching about 30ᵒC. From the obtained images, the normalized AC and its temporal derivative as a function of temperature and time were registered. The results have demonstrated high correlation (R² > 0.92) between both the brain AC and its temporal derivative to temperature. This indicates the validity of the hypothesis and the possibility of obtaining brain tissue temperature estimation from the temporal AC thermal changes. It is important to note that each brain yielded different AC values and slopes. This implies that a calibration step is required for each specimen. Thus, for a practical acoustic monitoring of the brain, two steps are suggested. The first step consists of simply measuring the AC at normal body temperature. The second step entails measuring the AC after small temperature elevation. In face of the urging need for a more accessible thermal monitoring technique for brain treatments, the proposed methodology enables a cost-effective high temporal resolution acoustical temperature estimation during HIFU treatments.Keywords: attenuation coefficient, brain, HIFU, image-guidance, temperature
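The hypothesized exponential cool-down behavior and its correlation with temperature can be illustrated with a short curve-fitting sketch; the arrays below are placeholder values, not the study's measurements.

```python
# Hedged sketch: fit a Newton's-law-of-cooling exponential to the normalized attenuation
# coefficient during cool-down and relate it to the thermocouple record. Data are toy values.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr

t = np.array([0, 5, 10, 15, 20, 25, 30], dtype=float)               # minutes after heating
ac_norm = np.array([1.10, 1.07, 1.05, 1.035, 1.025, 1.018, 1.012])  # AC / baseline AC
temp = np.array([45.0, 41.5, 38.7, 36.4, 34.6, 33.1, 31.9])         # thermocouple readings, °C

def newton_cooling(t, amp, k, offset):
    return offset + amp * np.exp(-k * t)

(amp, k, offset), _ = curve_fit(newton_cooling, t, ac_norm, p0=(0.1, 0.1, 1.0))
print(f"decay constant k = {k:.3f} 1/min")

r, _ = pearsonr(ac_norm, temp)            # correlation of AC with temperature
print(f"R^2 between AC and temperature: {r**2:.3f}")
```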
52 Use of Artificial Neural Networks to Estimate Evapotranspiration for Efficient Irrigation Management
Authors: Adriana Postal, Silvio C. Sampaio, Marcio A. Villas Boas, Josué P. Castro
Abstract:
This study deals with the estimation of reference evapotranspiration (ET₀) in an agricultural context, focusing on efficient irrigation management to meet the growing interest in the sustainable management of water resources. Given the importance of water in agriculture and its scarcity in many regions, efficient use of this resource is essential to ensure food security and environmental sustainability. The methodology used involved the application of artificial intelligence techniques, specifically Multilayer Perceptron (MLP) Artificial Neural Networks (ANNs), to predict ET₀ in the state of Paraná, Brazil. The models were trained and validated with meteorological data from the Brazilian National Institute of Meteorology (INMET), together with data obtained from a producer's weather station in the western region of Paraná. Two optimizers (SGD and Adam) and different meteorological variables, such as temperature, humidity, solar radiation, and wind speed, were explored as inputs to the models. Nineteen configurations with different input variables were tested; among them, configuration 9, with 8 input variables, was identified as the most efficient of all. Configuration 10, with 4 input variables, was considered the most effective given its smaller number of variables. The main conclusions of this study show that MLP ANNs are capable of accurately estimating ET₀, providing a valuable tool for irrigation management in agriculture. Both configurations (9 and 10) showed promising performance in predicting ET₀. The validation of the models with producer data underlined the practical relevance of these tools and confirmed their generalization ability for different field conditions. The results of the statistical metrics, including Mean Absolute Error (MAE), Mean Squared Error (MSE), Root Mean Squared Error (RMSE), and Coefficient of Determination (R²), showed excellent agreement between the model predictions and the observed data, with MAE as low as 0.01 mm/day and 0.03 mm/day for configurations 9 and 10, respectively. In addition, the models achieved an R² between 0.99 and 1, indicating a satisfactory fit to the real data. This agreement was also confirmed by the Kolmogorov-Smirnov test, which evaluates the agreement of the predictions with the statistical behavior of the real data and yields values between 0.02 and 0.04 for the producer data. In addition, the results of this study suggest that the developed technique can be applied to other locations by using specific data from these sites to further improve ET₀ predictions and thus contribute to sustainable irrigation management in different agricultural regions. The study has some limitations, such as the use of a single ANN architecture and two optimizers, the validation with data from only one producer, and the possible underestimation of the influence of seasonality and local climate variability. An irrigation management application using the most efficient models from this study is already under development. Future research can explore different ANN architectures and optimization techniques, validate models with data from multiple producers and regions, and investigate the model's response to different seasonal and climatic conditions.
Keywords: agricultural technology, neural networks in agriculture, water efficiency, water use optimization
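A hedged sketch of the modelling step is given below: an MLP regressor mapping a few weather variables to ET₀, evaluated with MAE, RMSE, R², and a Kolmogorov-Smirnov statistic. The file name, feature columns, and network size are assumed placeholders rather than the configurations reported above.

```python
# Sketch of an MLP ET0 model with the evaluation metrics mentioned in the abstract.
# File and column names are hypothetical placeholders for a station-data export.
import numpy as np
import pandas as pd
from scipy.stats import ks_2samp
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

df = pd.read_csv("inmet_weather.csv")        # hypothetical export of INMET station data
features = ["t_max", "t_min", "humidity", "solar_radiation", "wind_speed"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["et0"], test_size=0.2, random_state=42)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 16), solver="adam",
                                   max_iter=2000, random_state=42))
model.fit(X_train, y_train)
pred = model.predict(X_test)

print("MAE :", mean_absolute_error(y_test, pred))
print("RMSE:", np.sqrt(mean_squared_error(y_test, pred)))
print("R²  :", r2_score(y_test, pred))
print("KS  :", ks_2samp(y_test, pred).statistic)   # agreement of the two distributions
```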
51 An Interoperability Concept for Detect and Avoid and Collision Avoidance Systems: Results from a Human-In-The-Loop Simulation
Authors: Robert Rorie, Lisa Fern
Abstract:
The integration of Unmanned Aircraft Systems (UAS) into the National Airspace System (NAS) poses a variety of technical challenges to UAS developers and aviation regulators. In response to growing demand for access to civil airspace in the United States, the Federal Aviation Administration (FAA) has produced a roadmap identifying key areas requiring further research and development. One such technical challenge is the development of a ‘detect and avoid’ system (DAA; previously referred to as ‘sense and avoid’) to replace the ‘see and avoid’ requirement in manned aviation. The purpose of the DAA system is to support the pilot, situated at a ground control station (GCS) rather than in the cockpit of the aircraft, in maintaining ‘well clear’ of nearby aircraft through the use of GCS displays and alerts. In addition to its primary function of aiding the pilot in maintaining well clear, the DAA system must also safely interoperate with existing NAS systems and operations, such as the airspace management procedures of air traffic controllers (ATC) and collision avoidance (CA) systems currently in use by manned aircraft, namely the Traffic alert and Collision Avoidance System (TCAS) II. It is anticipated that many UAS architectures will integrate both a DAA system and a TCAS II. It is therefore necessary to explicitly study the integration of DAA and TCAS II alerting structures and maneuver guidance formats to ensure that pilots understand the appropriate type and urgency of their response to the various alerts. This paper presents a concept of interoperability for the two systems. The concept was developed with the goal of avoiding any negative impact on the performance level of TCAS II (understanding that TCAS II must largely be left as-is) while retaining a DAA system that still effectively enables pilots to maintain well clear, and, as a result, successfully reduces the frequency of collision hazards. The interoperability concept described in the paper focuses primarily on facilitating the transition from a late-stage DAA encounter (where a loss of well clear is imminent) to a TCAS II corrective Resolution Advisory (RA), which requires pilot compliance with the directive RA guidance (e.g., climb, descend) within five seconds of its issuance. The interoperability concept was presented to 10 participants (6 active UAS pilots and 4 active commercial pilots) in a medium-fidelity, human-in-the-loop simulation designed to stress different aspects of the DAA and TCAS II systems. Pilot response times, compliance rates and subjective assessments were recorded. Results indicated that pilots exhibited comprehension of, and appropriate prioritization within, the DAA-TCAS II combined alert structure. Pilots demonstrated a high rate of compliance with TCAS II RAs and were also seen to respond to corrective RAs within the five second requirement established for manned aircraft. The DAA system presented under test was also shown to be effective in supporting pilots’ ability to maintain well clear in the overwhelming majority of cases in which pilots had sufficient time to respond. The paper ends with a discussion of next steps for research on integrating UAS into civil airspace.Keywords: detect and avoid, interoperability, traffic alert and collision avoidance system (TCAS II), unmanned aircraft systems
50 Unleashing the Power of Cerebrospinal System for a Better Computer Architecture
Authors: Lakshmi N. Reddi, Akanksha Varma Sagi
Abstract:
Studies on biomimetics are largely developed, deriving inspiration from natural processes in our objective world to develop novel technologies. Recent studies are diverse in nature, making their categorization quite challenging. Based on an exhaustive survey, we developed categorizations based on either the essential elements of nature - air, water, land, fire, and space, or on form/shape, functionality, and process. Such diverse studies as aircraft wings inspired by bird wings, a self-cleaning coating inspired by a lotus petal, wetsuits inspired by beaver fur, and search algorithms inspired by arboreal ant path networks lend themselves to these categorizations. Our categorizations of biomimetic studies allowed us to define a different dimension of biomimetics. This new dimension is not restricted to inspiration from the objective world. It is based on the premise that the biological processes observed in the objective world find their reflections in our human bodies in a variety of ways. For example, the lungs provide the most efficient example for liquid-gas phase exchange, the heart exemplifies a very efficient pumping and circulatory system, and the kidneys epitomize the most effective cleaning system. The main focus of this paper is to bring out the magnificence of the cerebro-spinal system (CSS) insofar as it relates to our current computer architecture. In particular, the paper uses four key measures to analyze the differences between CSS and human- engineered computational systems. These are adaptability, sustainability, energy efficiency, and resilience. We found that the cerebrospinal system reveals some important challenges in the development and evolution of our current computer architectures. In particular, the myriad ways in which the CSS is integrated with other systems/processes (circulatory, respiration, etc) offer useful insights on how the human-engineered computational systems could be made more sustainable, energy-efficient, resilient, and adaptable. In our paper, we highlight the energy consumption differences between CSS and our current computational designs. Apart from the obvious differences in materials used between the two, the systemic nature of how CSS functions provides clues to enhance life-cycles of our current computational systems. The rapid formation and changes in the physiology of dendritic spines and their synaptic plasticity causing memory changes (ex., long-term potentiation and long-term depression) allowed us to formulate differences in the adaptability and resilience of CSS. In addition, the CSS is sustained by integrative functions of various organs, and its robustness comes from its interdependence with the circulatory system. The paper documents and analyzes quantifiable differences between the two in terms of the four measures. Our analyses point out the possibilities in the development of computational systems that are more adaptable, sustainable, energy efficient, and resilient. It concludes with the potential approaches for technological advancement through creation of more interconnected and interdependent systems to replicate the effective operation of cerebro-spinal system.Keywords: cerebrospinal system, computer architecture, adaptability, sustainability, resilience, energy efficiency
49 FracXpert: Ensemble Machine Learning Approach for Localization and Classification of Bone Fractures in Cricket Athletes
Authors: Madushani Rodrigo, Banuka Athuraliya
Abstract:
In today's world of medical diagnosis and prediction, machine learning stands out as a strong tool, transforming traditional approaches to healthcare. This study analyzes the use of machine learning in the specialized domain of sports medicine, with a focus on the timely and accurate detection of bone fractures in cricket athletes. Failure to identify bone fractures in real time can result in malunion or non-union conditions. To ensure proper treatment and enhance the bone healing process, accurately identifying fracture locations and types is necessary. Interpreting X-ray images relies on the expertise and experience of medical professionals. Sometimes, radiographic images are of low quality, leading to potential issues. Therefore, it is necessary to have a proper approach to accurately localize and classify fractures in real time. The research revealed that an optimal approach needs to address the stated problem by employing appropriate radiographic image processing techniques and object detection algorithms. These algorithms should effectively localize and accurately classify all types of fractures with high precision and in a timely manner. In order to overcome the challenges of misidentifying fractures, a distinct model for fracture localization and classification has been implemented. The research also incorporates radiographic image enhancement and preprocessing techniques to overcome the limitations posed by low-quality images. A classification ensemble model has been implemented using ResNet18 and VGG16. In parallel, a fracture segmentation model has been implemented using an enhanced U-Net architecture. Combining the results of these two models, the FracXpert system can accurately localize exact fracture locations along with fracture types from the available 12 different fracture patterns, which include avulsion, comminuted, compressed, dislocation, greenstick, hairline, impacted, intraarticular, longitudinal, oblique, pathological, and spiral. The system generates a confidence score indicating the degree of confidence in the predicted result. The implemented fracture segmentation model, based on the U-Net architecture, achieved a high accuracy of 99.94%, demonstrating its precision in identifying fracture locations. Simultaneously, the classification ensemble model, built on the ResNet18 and VGG16 architectures, achieved an accuracy of 81.0%, showcasing its ability to categorize various fracture patterns, which is instrumental in the fracture treatment process. In conclusion, FracXpert has become a promising ML application in sports medicine, demonstrating its potential to revolutionize fracture detection processes. By leveraging the power of ML algorithms, this study contributes to the advancement of diagnostic capabilities in cricket athlete healthcare, ensuring timely and accurate identification of bone fractures for the best treatment outcomes.
Keywords: multiclass classification, object detection, ResNet18, U-Net, VGG16
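One plausible form of the classification ensemble described above is averaging the softmax outputs of ResNet18 and VGG16 over the 12 fracture patterns and reporting the top class with its probability as the confidence score. The sketch below follows that reading; the checkpoints, preprocessing, and image path are assumptions, not the study's code.

```python
# Hedged sketch of a two-backbone softmax-averaging ensemble over 12 fracture classes.
# Checkpoint files, preprocessing, and the input image are hypothetical placeholders.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

CLASSES = ["avulsion", "comminuted", "compressed", "dislocation", "greenstick",
           "hairline", "impacted", "intraarticular", "longitudinal", "oblique",
           "pathological", "spiral"]

def build(model_fn, checkpoint):
    net = model_fn(weights=None)
    # replace the final layer with a 12-way head (attribute name differs per backbone)
    if hasattr(net, "fc"):
        net.fc = torch.nn.Linear(net.fc.in_features, len(CLASSES))
    else:
        net.classifier[-1] = torch.nn.Linear(net.classifier[-1].in_features, len(CLASSES))
    net.load_state_dict(torch.load(checkpoint, map_location="cpu"))
    return net.eval()

resnet = build(models.resnet18, "resnet18_fractures.pt")     # hypothetical trained weights
vgg = build(models.vgg16, "vgg16_fractures.pt")

preprocess = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
x = preprocess(Image.open("radiograph.png").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    probs = (F.softmax(resnet(x), dim=1) + F.softmax(vgg(x), dim=1)) / 2
conf, idx = probs.max(dim=1)
print(CLASSES[idx.item()], f"confidence={conf.item():.2f}")
```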
48 Variation of Warp and Binder Yarn Tension across the 3D Weaving Process and its Impact on Tow Tensile Strength
Authors: Reuben Newell, Edward Archer, Alistair McIlhagger, Calvin Ralph
Abstract:
Modern industry has developed a need for innovative 3D composite materials due to their attractive material properties. Composite materials are composed of a fibre reinforcement encased in a polymer matrix. The fibre reinforcement consists of warp, weft and binder yarns or tows woven together into a preform. The mechanical performance of a composite material is largely controlled by the properties of the preform. As a result, the bulk of recent textile research has been focused on the design of high-strength preform architectures. Studies looking at optimisation of the weaving process have largely been neglected. It has been reported that yarns experience varying levels of damage during weaving, resulting in filament breakage and ultimately compromised composite mechanical performance. The weaving parameters involved in causing this yarn damage are not fully understood. Recent studies indicate that poor yarn tension control may be an influencing factor. As tension is increased, the yarn-to-yarn and yarn-to-weaving-equipment interactions are heightened, maximising damage. The correlation between yarn tension variation and weaving damage severity has never been adequately researched or quantified. A novel study is needed which assesses the influence of tension variation on the mechanical properties of woven yarns. This study sought to quantify the variation of yarn tension throughout weaving and to link the impact of tension to weaving damage. Multiple yarns were randomly selected, and their tension was measured across the creel and shedding stages of weaving, using a hand-held tension meter. Sections of the same yarn were subsequently cut from the loom and tensile tested. A comparison study was made between the tensile strength of pristine and tensioned yarns to determine the induced weaving damage. Yarns from bobbins at the rear of the creel were under the least amount of tension (0.5-2.0 N) compared to yarns positioned at the front of the creel (1.5-3.5 N). This increase in tension has been linked to the sharp turn in the yarn path between the bobbins at the front of the creel and the creel I-board. Creel yarns under the lower tension suffered a 3% loss of tensile strength, compared to 7% for the more highly tensioned yarns. During shedding, the tension on the yarns was higher than in the creel. The upper shed yarns were exposed to a lower tension (3.0-4.5 N) compared to the lower shed yarns (4.0-5.5 N). Shed yarns under the lower tension suffered a 10% loss of tensile strength, compared to 14% for the more highly tensioned yarns. Interestingly, the most severely damaged yarn was exposed to both the largest creel and shedding tensions. This study confirms for the first time that yarns under a greater level of tension suffer an increased amount of weaving damage. Significant variation of yarn tension has been identified across the creel and shedding stages of weaving. This leads to a variance of mechanical properties across the woven preform and ultimately the final composite part. The outcome of this study highlights the need for optimised yarn tension control during preform manufacture to minimise yarn-induced weaving damage.
Keywords: optimisation of preform manufacture, tensile testing of damaged tows, variation of yarn weaving tension, weaving damage
47 Sensor Network Structural Integration for Shape Reconstruction of Morphing Trailing Edge
Authors: M. Ciminello, I. Dimino, S. Ameduri, A. Concilio
Abstract:
Improving aircraft efficiency is one of the key goals of aeronautics. Modern aircraft possess many advanced functions, such as good transportation capability, high Mach number, high flight altitude, and increasing rate of climb. However, no aircraft can reach all of this optimized performance in a single airframe configuration. Aircraft aerodynamic efficiency varies considerably depending on the specific mission and on the environmental conditions within which the aircraft must operate. Structures that morph their shape in response to their surroundings may at first seem like the stuff of science fiction, but nature offers many examples of plants and animals that adapt to their environment. In order to ensure both the controllability and the static robustness of such complex structural systems, a monitoring network is aimed at verifying the effectiveness of the given control commands together with the elastic response. To obtain this kind of information, the use of an FBG sensor network is proposed in this project. The sensor network is able to measure the shape of morphing structures, which may show large, global displacements due to the non-standard architectures and materials adopted. Chord-wise variations may allow setting and tracking the best layout as a function of the particular, transforming reference state, always targeting the best aerodynamic performance. The reason why an optical sensor solution has been selected is that, while keeping a few of the contraindications of classical systems (like cabling, continuous deployment, and so on), fibre optic sensors may lead to a dramatic reduction of wiring mass and weight thanks to an extreme multiplexing capability. Furthermore, the use of light as the information carrier permits dealing with nimbler, non-shielded wires and avoids any kind of interference with the on-board instrumentation. The FBG-based transducers presented herein aim at monitoring the actual shape of the adaptive trailing edge (ATE). Compared to conventional systems, these transducers allow more fail-safe measurements by taking advantage of a supporting structure hosting the FBGs, whose properties may be tailored depending on the architectural requirements and structural constraints, acting as a strain modulator. Direct strain measurement may, in fact, be difficult because of the large deformations occurring in morphing elements. A modulation transducer is then necessary to keep the measured strain inside the allowed range. In this application, the chord-wise transducer device is a cantilevered beam sliding through the spars and copying the camber line of the ATE ribs. The FBG sensor array positions are dimensioned and integrated along the path. A theoretical model describing the system behavior is implemented. To validate the design, experiments are then carried out with the purpose of estimating the functions between rib rotation and measured strain.
Keywords: fiber optic sensor, morphing structures, strain sensor, shape reconstruction
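As a generic illustration of how a cantilevered strain transducer of this kind can be turned into a shape estimate, the sketch below converts discrete FBG strain readings into bending curvature and integrates twice for slope and deflection. This is a standard textbook reconstruction under pure-bending assumptions, not the theoretical model developed in the study; all numbers are placeholders.

```python
# Generic curvature-based shape reconstruction for a cantilevered transducer beam:
# kappa(x) = strain(x) / c (distance from the neutral axis), integrated twice with
# clamped-root boundary conditions. Positions and strain values are toy placeholders.
import numpy as np

def reconstruct_deflection(x: np.ndarray, strain: np.ndarray, half_thickness: float):
    """x: FBG positions along the beam (m); strain: measured surface strain (m/m)."""
    curvature = strain / half_thickness
    slope = np.concatenate(([0.0], np.cumsum(0.5 * (curvature[1:] + curvature[:-1]) * np.diff(x))))
    w = np.concatenate(([0.0], np.cumsum(0.5 * (slope[1:] + slope[:-1]) * np.diff(x))))
    return w                                     # deflection, clamped end at x[0]

x = np.linspace(0.0, 0.30, 7)                    # e.g., 7 FBGs over a 300 mm beam
strain = 800e-6 * (1 - x / 0.30)                 # toy linearly decreasing strain field
print(reconstruct_deflection(x, strain, half_thickness=0.001))
```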
46 Correlation between Defect Suppression and Biosensing Capability of Hydrothermally Grown ZnO Nanorods
Authors: Mayoorika Shukla, Pramila Jakhar, Tejendra Dixit, I. A. Palani, Vipul Singh
Abstract:
Biosensors are analytical devices with wide range of applications in biological, chemical, environmental and clinical analysis. It comprises of bio-recognition layer which has biomolecules (enzymes, antibodies, DNA, etc.) immobilized over it for detection of analyte and transducer which converts the biological signal into the electrical signal. The performance of biosensor primarily the depends on the bio-recognition layer and therefore it has to be chosen wisely. In this regard, nanostructures of metal oxides such as ZnO, SnO2, V2O5, and TiO2, etc. have been explored extensively as bio-recognition layer. Recently, ZnO has the attracted attention of researchers due to its unique properties like high iso-electric point, biocompatibility, stability, high electron mobility and high electron binding energy, etc. Although there have been many reports on usage of ZnO as bio-recognition layer but to the authors’ knowledge, none has ever observed correlation between optical properties like defect suppression and biosensing capability of the sensor. Here, ZnO nanorods (ZNR) have been synthesized by a low cost, simple and low-temperature hydrothermal growth process, over Platinum (Pt) coated glass substrate. The ZNR have been synthesized in two steps viz. initially a seed layer was coated over substrate (Pt coated glass) followed by immersion of it into nutrient solution of Zinc nitrate and Hexamethylenetetramine (HMTA) with in situ addition of KMnO4. The addition of KMnO4 was observed to have a profound effect over the growth rate anisotropy of ZnO nanostructures. Clustered and powdery growth of ZnO was observed without addition of KMnO4, although by addition of it during the growth, uniform and crystalline ZNR were found to be grown over the substrate. Moreover, the same has resulted in suppression of defects as observed by Normalized Photoluminescence (PL) spectra since KMnO4 is a strong oxidizing agent which provides an oxygen rich growth environment. Further, to explore the correlation between defect suppression and biosensing capability of the ZNR Glucose oxidase (Gox) was immobilized over it, using physical adsorption technique followed by drop casting of nafion. Here the main objective of the work was to analyze effect of defect suppression over biosensing capability, and therefore Gox has been chosen as model enzyme, and electrochemical amperometric glucose detection was performed. The incorporation of KMnO4 during growth has resulted in variation of optical and charge transfer properties of ZNR which in turn were observed to have deep impact on biosensor figure of merits. The sensitivity of biosensor was found to increase by 12-18 times, due to variations introduced by addition of KMnO4 during growth. The amperometric detection of glucose in continuously stirred buffer solution was performed. Interestingly, defect suppression has been observed to contribute towards the improvement of biosensor performance. The detailed mechanism of growth of ZNR along with the overall influence of defect suppression on the sensing capabilities of the resulting enzymatic electrochemical biosensor and different figure of merits of the biosensor (Glass/Pt/ZNR/Gox/Nafion) will be discussed during the conference.Keywords: biosensors, defects, KMnO4, ZnO nanorods
45 Learning the History of a Tuscan Village: A Serious Game Using Geolocation Augmented Reality
Authors: Irene Capecchi, Tommaso Borghini, Iacopo Bernetti
Abstract:
An important tool for the enhancement of cultural sites is serious games (SGs), i.e., games designed for educational purposes; SGs are applied in cultural sites through trivia, puzzles, and mini-games for participation in interactive exhibitions, mobile applications, and simulations of past events. The combination of Augmented Reality (AR) and digital cultural content has also produced examples of cultural heritage recovery and revitalization around the world. Through AR, the user perceives the information of the visited place in a more real and interactive way. Another interesting technological development for the revitalization of cultural sites is the combination of AR and the Global Positioning System (GPS), which, when integrated, can enhance the user's perception of reality by providing historical and architectural information linked to specific locations organized along a route. To the authors' best knowledge, there are currently no applications that combine GPS, AR, and SGs for cultural heritage revitalization. The present research focused on the development of an SG based on GPS and AR. The study area is the village of Caldana in Tuscany, Italy. Caldana is a fortified Renaissance village; the most important architectural elements are the walls, the church of San Biagio, the rectory, and the marquis' palace. The historical information is derived from extensive research by the Department of Architecture at the University of Florence. The storyboard of the SG is based on the history of the three characters who built the village: Marquis Marcello Agostini, who was commissioned by Cosimo I de Medici, Grand Duke of Tuscany, to build the village; his son Ippolito; and his architect Lorenzo Pomarelli. The three historical characters were modeled in 3D using the freeware MakeHuman and imported into Blender and Mixamo to associate a skeleton and blend shapes for gestural animations and lip movement during speech. The Unity Rhubarb Lip Syncer plugin was used for the lip sync animation. The historical costumes were created with Marvelous Designer. The application was developed using the Unity 3D graphics and game engine. The AR+GPS Location plugin was used to position the 3D historical characters based on GPS coordinates. The ARFoundation library was used to display AR content. The SG is available in two versions: one for children and one for adults. The children's version consists of finding a digital treasure made up of valuable items and historical rarities. Players must find 9 village locations where 3D AR models of historical figures explaining the history of the village provide clues. To stimulate players, there are 3 levels of rewards, one for every 3 clues discovered. The rewards consist of AR masks of an archaeologist, a professor, and an explorer. In the adult version, the SG consists of finding the 16 historical landmarks in the village and learning historical and architectural information in an interactive and engaging way. The application is being tested on a sample of adults and children. Test subjects will be surveyed on a Likert scale to find out their perceptions of using the app and of the learning experience with the guided tour compared to interaction with the app.
Keywords: augmented reality, cultural heritage, GPS, serious game
Procedia PDF Downloads 9544 Optimizing Data Transfer and Processing in Multi-Cloud Environments for Big Data Workloads
Authors: Gaurav Kumar Sinha
Abstract:
In an era defined by the proliferation of data and the utilization of cloud computing environments, the efficient transfer and processing of big data workloads across multi-cloud platforms have emerged as critical challenges. This research paper embarks on a comprehensive exploration of the complexities associated with managing and optimizing big data in a multi-cloud ecosystem. The foundation of this study is rooted in the recognition that modern enterprises increasingly rely on multiple cloud providers to meet diverse business needs, enhance redundancy, and reduce vendor lock-in. As a consequence, managing data across these heterogeneous cloud environments has become intricate, necessitating innovative approaches to ensure data integrity, security, and performance. The primary objective of this research is to investigate strategies and techniques for enhancing the efficiency of data transfer and processing in multi-cloud scenarios. It recognizes that big data workloads are characterized by their sheer volume, variety, velocity, and complexity, making traditional data management solutions insufficient for harnessing the full potential of multi-cloud architectures. The study commences by elucidating the challenges posed by multi-cloud environments in the context of big data. These challenges encompass data fragmentation, latency, security concerns, and cost optimization. To address these challenges, the research explores a range of methodologies and solutions. One of the key areas of focus is data transfer optimization. The paper delves into techniques for minimizing data movement latency, optimizing bandwidth utilization, and ensuring secure data transmission between different cloud providers. It evaluates the applicability of dedicated data transfer protocols, intelligent data routing algorithms, and edge computing approaches in reducing transfer times. Furthermore, the study examines strategies for efficient data processing across multi-cloud environments. It acknowledges that big data processing requires distributed and parallel computing capabilities that span across cloud boundaries. The research investigates containerization and orchestration technologies, serverless computing models, and interoperability standards that facilitate seamless data processing workflows. Security and data governance are paramount concerns in multi-cloud environments. The paper explores methods for ensuring data security, access control, and compliance with regulatory frameworks. It considers encryption techniques, identity and access management, and auditing mechanisms as essential components of a robust multi-cloud data security strategy. The research also evaluates cost optimization strategies, recognizing that the dynamic nature of multi-cloud pricing models can impact the overall cost of data transfer and processing. It examines approaches for workload placement, resource allocation, and predictive cost modeling to minimize operational expenses while maximizing performance. Moreover, this study provides insights into real-world case studies and best practices adopted by organizations that have successfully navigated the challenges of multi-cloud big data management. It presents a comparative analysis of various multi-cloud management platforms and tools available in the market. Keywords: multi-cloud environments, big data workloads, data transfer optimization, data processing strategies
Procedia PDF Downloads 6843 Medical and Dietary Potentials of Mare's Milk in Liver Diseases
Authors: Bakytzhan Bimbetov, Abay Zhangabilov, Saule Aitbaeva, Galymzhan Meirambekov
Abstract:
Mare’s milk (saumal) contains in total about 40 biological components necessary for the human body. The most significant among them are amino acids, fats, carbohydrates, enzymes (lysozyme, amylase), and numerous minerals and vitamins, which are well balanced with each other. In Kazakhstan, the company "Eurasia Invest Ltd." produces freeze-dried saumal in powder form using modern German innovative technology, by evaporation at low temperature (-35°C) with appropriate pasteurization. Analysis of the qualitative content of the freeze-dried biomilk showed that the main ingredients of freshly drawn milk are preserved. We are currently studying the medical and dietary properties of freeze-dried mare's milk in diseases of the digestive system, including nonalcoholic steatohepatitis (NASH) and liver cirrhosis (LC) of viral etiology. The studied group consisted of 14 patients with NASH and 7 patients with LC of viral etiology, Child-Pugh class A severity. Patients took freeze-dried saumal, previously dissolved in warm boiled water (24 g powder per 200 ml water), 3-4 times a day for a month in conjunction with basic therapy. The results were compared to a control group (11 patients with NASH and LC) who received only basic therapy without mare's milk. The results of this preliminary research showed an improvement in the subjective and objective condition of all patients, but a more significant improvement of clinical symptoms and syndromes was observed in the treatment group compared to the control one. In patients with NASH, asthenic and dyspeptic syndromes decreased significantly over time compared with the beginning of therapy (p<0.01). Hepatomegaly, identified by ultrasound prior to treatment, was observed in 92.8±2.4% of patients; after combination therapy the rate decreased by 14.3%, amounting to 78.5±2.8%. Patients with LC also noted improvement of asthenic (p<0.01), dyspeptic (p<0.05), and hemorrhagic syndromes (nosebleeds and bleeding gums when brushing the teeth, p<0.05), as well as jaundice. Laboratory studies also showed improvement, but more significant changes were observed in the experimental group. The group of patients with NASH showed a significant improvement in the cytolysis index with combination therapy (p<0.05). In the control group, these indicators also improved, but the changes were not statistically significant (p>0.05). Markers of liver failure, in particular bilirubin, albumin, and the prothrombin index (PTI), were additionally examined among the laboratory parameters in patients with liver cirrhosis. Combined therapy using basic treatment and mare's milk showed a significant improvement in cytolysis and bilirubin (p<0.05). In our opinion, a very important and interesting finding is that, in conjunction with basic therapy, the use of mare's milk revealed an improvement of liver function in the form of normalized PTI and albumin in patients with liver cirrhosis of viral etiology. The results of this work have shown the therapeutic efficiency of mare's milk in the complex treatment of patients with liver disease and require further in-depth study. Keywords: liver cirrhosis, non-alcohol steatohepatitis, saumal, mare’s milk
Procedia PDF Downloads 22742 Convolutional Neural Network Based on Random Kernels for Analyzing Visual Imagery
Authors: Ja-Keoung Koo, Kensuke Nakamura, Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Byung-Woo Hong
Abstract:
The machine learning techniques based on the convolutional neural network (CNN) have been actively developed and successfully applied to a variety of image analysis tasks, including reconstruction, noise reduction, resolution enhancement, segmentation, motion estimation, and object recognition. Classical visual information processing, ranging from low-level tasks to high-level ones, has been widely developed in the deep learning framework. It is generally considered a challenging problem to derive visual interpretation from high-dimensional imagery data. A CNN is a class of feed-forward artificial neural network that usually consists of deep layers whose connections are established by a series of non-linear operations. The CNN architecture is known to be shift invariant due to its shared weights and translation-invariance characteristics. However, it is often computationally intractable to optimize the network, in particular with a large number of convolution layers, because of the large number of unknowns to be optimized with respect to a training set that generally has to be large enough to effectively generalize the model under consideration. It is also necessary to limit the size of the convolution kernels due to the computational expense, despite the recent development of effective parallel processing machinery, which leads to the use of uniformly small convolution kernels throughout the deep CNN architecture. However, it is often desired to consider different scales in the analysis of visual features at different layers of the network. Thus, we propose a CNN model where convolution kernels of different sizes are applied at each layer based on random projection. We apply random filters with varying sizes and associate the filter responses with scalar weights that correspond to the standard deviation of the random filters. This allows a large number of random filters to be used at the cost of one scalar unknown for each filter. The computational cost in the back-propagation procedure does not increase with larger filters, even though additional computational cost is incurred in the computation of the convolution in the feed-forward procedure. The use of random kernels with varying sizes allows image features to be analyzed effectively at multiple scales, leading to better generalization. The robustness and effectiveness of the proposed CNN based on random kernels are demonstrated by numerical experiments in which a quantitative comparison is performed between well-known CNN architectures and our models, which simply replace the convolution kernels with random filters. The experimental results indicate that our model achieves better performance with fewer unknown weights. The proposed algorithm has a high potential for application to a variety of visual tasks based on the CNN framework. Acknowledgement: This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by IITP, and NRF-2014R1A2A1A11051941, NRF2017R1A2B4006023. Keywords: deep learning, convolutional neural network, random kernel, random projection, dimensionality reduction, object recognition
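As a minimal illustrative sketch (assuming PyTorch, and simplifying the weighting to one learnable scalar per filter bank rather than per individual filter as described above), the idea of frozen random kernels of several sizes combined through a handful of trainable scalars can be expressed as follows:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RandomKernelConv(nn.Module):
    """Convolution with frozen random kernels of several sizes.

    Only a few scalar weights are trained, so the number of trainable
    parameters stays small while multiple receptive-field scales are covered.
    """
    def __init__(self, in_ch, out_ch, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.banks = nn.ModuleList()
        for k in kernel_sizes:
            conv = nn.Conv2d(in_ch, out_ch, k, padding=k // 2, bias=False)
            nn.init.normal_(conv.weight, std=1.0 / (k * k * in_ch) ** 0.5)
            conv.weight.requires_grad = False          # kernels stay random
            self.banks.append(conv)
        # one learnable scalar per filter bank
        self.scales = nn.Parameter(torch.ones(len(kernel_sizes)))

    def forward(self, x):
        out = 0
        for s, conv in zip(self.scales, self.banks):
            out = out + s * conv(x)                    # weighted sum of scales
        return F.relu(out)

layer = RandomKernelConv(3, 16)
y = layer(torch.randn(1, 3, 32, 32))                   # -> (1, 16, 32, 32)
```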
Procedia PDF Downloads 29041 Importance of Remote Sensing and Information Communication Technology to Improve Climate Resilience in Low Land of Ethiopia
Authors: Hasen Keder Edris, Ryuji Matsunaga, Toshi Yamanaka
Abstract:
The issue of climate change and its impact is a major contemporary global concern. Ethiopia is one of the countries experiencing adverse climate change impacts, including frequent extreme weather events that are exacerbating drought and water scarcity. For this reason, the government of Ethiopia has developed a strategic document focused on the climate-resilient green economy. One of the major components of the strategic framework is designed to improve community adaptation capacity and drought mitigation. For effective implementation of the strategy, identification of the regions' relative vulnerability to drought is vital. There is a growing tendency to apply Geographic Information System (GIS) and remote sensing technologies for collecting information on the duration and severity of drought, through direct measurement of the topography as well as indirect measurement of land cover. This study aims to show an application of remote sensing technology and GIS for developing a drought vulnerability index, taking the lowland of Ethiopia as a case study. In addition, it assesses the integrated Information and Communication Technology (ICT) potential of the Ethiopian lowland and proposes an integrated solution. Satellite data are used to detect the beginning of the drought. The severity of drought in risk-prone areas of livestock-keeping pastoralists is analyzed through the normalized difference vegetation index (NDVI) and ten years of rainfall data. The change between the current and average SPOT NDVI and the vegetation condition index is used to identify the onset of drought and potential risks. Secondary data are used to analyze the geographical coverage of mobile and internet usage in the region. For decades, the government of Ethiopia has introduced technologies and approaches to overcome climate change related problems. However, lack of access to information and inadequate technical support for the pastoral areas remain major challenges. Under the conventional business-as-usual approach, the lowland pastoralists continue to face a number of challenges. The results indicated that 80% of the region faces frequent drought occurrence, and of this, 60% of the pastoral area faces high drought risk. On the other hand, mobile phone and internet coverage in the target area is growing rapidly. One of the identified ICT solution enablers is the telecom center, which covers 98% of the region. It was possible to identify the frequently affected areas and the potential drought risk using the NDVI remote sensing data analyses. We also found that ICT can play an important role in mitigating the climate change challenge. Hence, there is a need to strengthen the implementation of climate change adaptation through integrated remote sensing, web-based information dissemination, and mobile alerts for extreme events. Keywords: climate changes, ICT, pastoral, remote sensing
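As an illustrative aside (not taken from the study), the NDVI and the vegetation condition index (VCI) that underpin this kind of drought monitoring can be computed per pixel as sketched below, assuming Python/NumPy and hypothetical reflectance values:

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Normalized difference vegetation index from NIR and red reflectance."""
    return (nir - red) / (nir + red + eps)

def vci(ndvi_now, ndvi_stack):
    """Vegetation condition index (%) against a multi-year NDVI record.

    ndvi_stack: array of shape (years, rows, cols) for the same month/dekad.
    """
    ndvi_min = ndvi_stack.min(axis=0)
    ndvi_max = ndvi_stack.max(axis=0)
    return 100.0 * (ndvi_now - ndvi_min) / (ndvi_max - ndvi_min + 1e-6)

# toy example: one 2x2 scene and a ten-year record
rng = np.random.default_rng(0)
nir, red = rng.random((2, 2)), rng.random((2, 2))
record = rng.random((10, 2, 2)) * 0.8
current = ndvi(nir, red)
print(vci(current, record))   # low values are commonly read as drought stress
```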
Procedia PDF Downloads 31540 Automatic Adult Age Estimation Using Deep Learning of the ResNeXt Model Based on CT Reconstruction Images of the Costal Cartilage
Authors: Ting Lu, Ya-Ru Diao, Fei Fan, Ye Xue, Lei Shi, Xian-e Tang, Meng-jun Zhan, Zhen-hua Deng
Abstract:
Accurate adult age estimation (AAE) is a significant and challenging task in the forensic and archaeological fields. Attempts have been made to explore optimal adult age metrics, and the rib is considered a potential age marker. The traditional way is to extract age-related features designed by experts from macroscopic or radiological images, followed by classification or regression analysis. Those results have still not met the high-level requirements for practice, and the limitation of using feature design and manual extraction methods is the loss of information, since the features are likely not designed explicitly for extracting information relevant to age. Deep learning (DL) has recently garnered much interest in image analysis and computer vision. It enables learning features that are important without a prior bias or hypothesis and could be supportive of AAE. This study aimed to develop DL models for AAE based on CT images and compare their performance to the manual visual scoring method. Chest CT data were reconstructed using volume rendering (VR). Retrospective data of 2500 patients aged 20.00-69.99 years were obtained between December 2019 and September 2021. Five-fold cross-validation was performed, and datasets were randomly split into training and validation sets in a 4:1 ratio for each fold. Before feeding the inputs into the networks, all images were augmented with random rotation and vertical flip, normalized, and resized to 224×224 pixels. ResNeXt was chosen as the DL baseline due to its advantages of higher efficiency and accuracy in image classification. Mean absolute error (MAE) was the primary parameter. Independent data from 100 patients acquired between March and April 2022 were used as a test set. The manual method completely followed the prior study, which reported the lowest MAEs (5.31 in males and 6.72 in females) among similar studies. CT data and VR images were used. The radiation density of the first costal cartilage was recorded from the CT data on the workstation. The osseous and calcified projections of the first to seventh costal cartilages were scored based on VR images using an eight-stage staging technique. According to the results of the prior study, the optimal models were the decision tree regression model in males and the stepwise multiple linear regression equation in females. Predicted ages of the test set were calculated separately using different models by sex. A total of 2600 patients (training and validation sets, mean age = 45.19 years ± 14.20 [SD]; test set, mean age = 46.57 ± 9.66) were evaluated in this study. In ResNeXt model training, MAEs of 3.95 in males and 3.65 in females were obtained. On the test set, DL achieved MAEs of 4.05 in males and 4.54 in females, which were far better than the MAEs of 8.90 and 6.42, respectively, for the manual method. These results showed that the DL ResNeXt model outperformed the manual method in AAE based on CT reconstruction of the costal cartilage, and the developed system may be a supportive tool for AAE. Keywords: forensic anthropology, age determination by the skeleton, costal cartilage, CT, deep learning
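A minimal sketch of the kind of training setup described (assuming PyTorch/torchvision; the exact backbone variant, optimizer, and hyperparameters are not given in the abstract and are placeholders here) might look like:

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# Backbone: ResNeXt-50 with a single regression output for age (years).
net = models.resnext50_32x4d(weights=None)
net.fc = nn.Linear(net.fc.in_features, 1)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomRotation(10),          # random rotation augmentation
    transforms.RandomVerticalFlip(),        # vertical flip augmentation
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5] * 3, std=[0.5] * 3),
])

criterion = nn.L1Loss()                      # optimizes MAE directly
optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)

def train_step(images, ages):
    optimizer.zero_grad()
    pred = net(images).squeeze(1)
    loss = criterion(pred, ages)
    loss.backward()
    optimizer.step()
    return loss.item()

# dummy batch: 4 rendered CT images, ages in years
loss = train_step(torch.randn(4, 3, 224, 224), torch.tensor([25., 40., 55., 63.]))
```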
Procedia PDF Downloads 7339 Enhancing Financial Security: Real-Time Anomaly Detection in Financial Transactions Using Machine Learning
Authors: Ali Kazemi
Abstract:
The digital evolution of financial services, while offering unprecedented convenience and accessibility, has also escalated the vulnerability to fraudulent activities. In this study, we introduce a distinct approach to real-time anomaly detection in financial transactions, aiming to fortify the defenses of banking and financial institutions against such threats. Utilizing unsupervised machine learning algorithms, specifically autoencoders and isolation forests, our research focuses on identifying irregular patterns indicative of fraud within transactional data, thus enabling immediate action to prevent financial loss. The data used in this study included the monetary value of each transaction, a crucial feature since fraudulent transactions may follow different amount distributions than legitimate ones; timestamps indicating when transactions occurred, since analyzing temporal patterns can reveal anomalies (e.g., unusual activity in the middle of the night); the sector or category of the merchant where the transaction occurred (e.g., retail, groceries, online services), since specific categories may be more prone to fraud; and the type of payment used (e.g., credit, debit, online payment systems), since different payment methods carry varying levels of fraud risk. This dataset, anonymized to ensure privacy, reflects a wide array of transactions typical of a global banking institution, ranging from small-scale retail purchases to large wire transfers, embodying the diverse nature of potentially fraudulent activities. By engineering features that capture the essence of transactions, including normalized amounts and encoded categorical variables, we tailor our data to enhance model sensitivity to anomalies. The autoencoder model leverages its reconstruction error mechanism to flag transactions that deviate significantly from the learned normal pattern, while the isolation forest identifies anomalies based on their susceptibility to isolation from the dataset's majority. Our experimental results, validated through techniques such as k-fold cross-validation, are evaluated using precision, recall, and the F1 score alongside the area under the receiver operating characteristic (ROC) curve. Our models achieved an F1 score of 0.85 and a ROC AUC of 0.93, indicating high accuracy in detecting fraudulent transactions without excessive false positives. This study contributes to the academic discourse on financial fraud detection and provides a practical framework for banking institutions seeking to implement real-time anomaly detection systems. By demonstrating the effectiveness of unsupervised learning techniques in a real-world context, our research offers a pathway to significantly reduce the incidence of financial fraud, thereby enhancing the security and trustworthiness of digital financial services. Keywords: anomaly detection, financial fraud, machine learning, autoencoders, isolation forest, transactional data analysis
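The two detectors named above can be sketched in a few lines (an illustrative toy example in Python with scikit-learn and PyTorch, using synthetic features rather than the study's data):

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import IsolationForest
from sklearn.preprocessing import StandardScaler

# toy feature matrix: [amount, hour_of_day, merchant_category_id, payment_type_id]
rng = np.random.default_rng(0)
X = StandardScaler().fit_transform(rng.normal(size=(5000, 4)))

# 1) Isolation forest: anomalies are points that are easy to isolate.
iso = IsolationForest(contamination=0.01, random_state=0).fit(X)
iso_flags = iso.predict(X) == -1              # True -> flagged as anomalous

# 2) Autoencoder: anomalies are points with large reconstruction error.
ae = nn.Sequential(nn.Linear(4, 2), nn.ReLU(), nn.Linear(2, 4))
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
Xt = torch.tensor(X, dtype=torch.float32)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(ae(Xt), Xt)
    loss.backward()
    opt.step()

errors = ((ae(Xt) - Xt) ** 2).mean(dim=1).detach().numpy()
ae_flags = errors > np.quantile(errors, 0.99)  # top 1% reconstruction errors flagged
print(iso_flags.sum(), ae_flags.sum())
```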
Procedia PDF Downloads 5738 Flood Risk Assessment, Mapping Finding the Vulnerability to Flood Level of the Study Area and Prioritizing the Study Area of Khinch District Using and Multi-Criteria Decision-Making Model
Authors: Muhammad Karim Ahmadzai
Abstract:
Floods are natural phenomena and an integral part of the water cycle. The majority of them are the result of climatic conditions, but they are also affected by the geology and geomorphology of the area, topography and hydrology, the water permeability of the soil and the vegetation cover, as well as by all kinds of human activities and structures. However, from the moment that human lives are at risk and significant economic impact is recorded, this natural phenomenon becomes a natural disaster. Flood management is now a key issue at regional and local levels around the world, affecting human lives and activities. The majority of floods are unlikely to be fully predicted, but it is feasible to reduce their risks through appropriate management plans and constructions. The aim of this case study is to identify and map areas of flood risk in the Khinch District of Panjshir Province, Afghanistan, specifically in the area of Peshghore, which has suffered numerous damages. The main purpose of this study is to evaluate the contribution of remote sensing technology and Geographic Information Systems (GIS) in assessing the susceptibility of this region to flood events. Panjshir is facing seasonal floods, and human interventions on the streams have aggravated them: stream beds that have been encroached upon to build houses and hotels, or converted into roads, cause flooding after every heavy rainfall. The streams crossing settlements and areas with high touristic development have been intensively modified by humans, as the pressure for real estate development land is growing. In particular, several areas in Khinch face a high risk of extensive flood occurrence. This study concentrates on the construction of a flood susceptibility map of the study area by combining vulnerability elements using the Analytic Hierarchy Process (AHP). AHP is a powerful yet simple method for making decisions, commonly used for prioritization and selection; it captures the decision goal as a set of weighted criteria that are then used to score alternatives. Here, the method is used to derive weights for each criterion that contributes to flood events. After processing a digital elevation model (DEM), important secondary data were extracted, such as the slope map, the flow direction, and the flow accumulation. Together with additional thematic information (land use and land cover, topographic wetness index, precipitation, normalized difference vegetation index, elevation, river density, distance from river, distance to road, and slope), these led to the final flood risk map. Finally, based on this map, priority protection areas and villages were identified, and structural and non-structural measures were proposed to minimize the impacts of floods on residential and agricultural areas. Keywords: flood hazard, flood risk map, flood mitigation measures, AHP analysis
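As an illustrative aside (not from the study), the AHP weights used in such an analysis are typically obtained from a pairwise comparison matrix via its principal eigenvector, together with Saaty's consistency ratio; a small NumPy sketch with hypothetical judgments:

```python
import numpy as np

# Illustrative Saaty pairwise comparison matrix for three criteria
# (e.g., slope, distance from river, land use); values are hypothetical.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                 # normalized criterion weights

# Consistency check (Saaty): CR below ~0.1 is usually considered acceptable.
n = A.shape[0]
lambda_max = eigvals.real[k]
CI = (lambda_max - n) / (n - 1)
RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}[n]
CR = CI / RI
print(weights, CR)
```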
Procedia PDF Downloads 11837 Ectopic Osteoinduction of Porous Composite Scaffolds Reinforced with Graphene Oxide and Hydroxyapatite Gradient Density
Authors: G. M. Vlasceanu, H. Iovu, E. Vasile, M. Ionita
Abstract:
Herein, the synthesis and characterization of a highly porous chitosan-gelatin scaffold reinforced with graphene oxide and hydroxyapatite (HAp), and crosslinked with genipin, were targeted. In tissue engineering, chitosan and gelatin are two of the most robust biopolymers, with wide applicability due to their intrinsic biocompatibility, biodegradability, low antigenicity, affordability, and ease of processing. HAp, owing to its exceptional activity in tuning cell-matrix interactions, is acknowledged for its capability to sustain cellular proliferation by promoting a bone-like native micro-environment for cell adjustment. Genipin is regarded as a top-class cross-linker, while graphene oxide (GO) is viewed as one of the most performant and versatile fillers. The composites, with the HAp/biopolymer ratio of natural bone, were obtained by cascaded sonochemical treatments, followed by straightforward casting methods and freeze-drying. Their structure was characterized by Fourier transform infrared spectroscopy and X-ray diffraction, while the overall morphology was investigated by scanning electron microscopy (SEM) and micro-computed tomography (µ-CT). Subsequently, in vitro enzymatic degradation was performed to identify the most promising compositions for the development of in vivo assays. Suitable GO dispersion within the biopolymer mix was ascertained, as nanolayer-specific signals were absent in both FTIR and XRD spectra, and the characteristic spectral features of the polymers persisted as the GO load increased. Overall, the GO-induced material structuration, crystallinity variations, and chemical interactions of the compounds can be correlated with the physical features and bioactivity of each composite formulation. Moreover, the HAp distribution follows a promising density gradient tuned for hybrid osseous/cartilage architectures, which was mirrored in the mouse model tests. Hence, the synthesis route of the natural polymer blend/hydroxyapatite-graphene oxide composite material is anticipated to emerge as an influential formulation in bone tissue engineering. Acknowledgement: This work was supported by the project 'Work-based learning systems using entrepreneurship grants for doctoral and post-doctoral students' (Sisteme de invatare bazate pe munca prin burse antreprenor pentru doctoranzi si postdoctoranzi) - SIMBA, SMIS code 124705, and by a grant of the National Authority for Scientific Research and Innovation, Operational Program Competitiveness Axis 1 - Section E, Program co-financed from the European Regional Development Fund 'Investments for your future' under project number 154/25.11.2016, P_37_221/2015. The nano-CT experiments were possible due to the European Regional Development Fund through the Competitiveness Operational Program 2014-2020, Priority axis 1, ID P_36_611, MySMIS code 107066, INOVABIOMED. Keywords: biopolymer blend, ectopic osteoinduction, graphene oxide composite, hydroxyapatite
Procedia PDF Downloads 10436 GIS and Remote Sensing Approach in Earthquake Hazard Assessment and Monitoring: A Case Study in the Momase Region of Papua New Guinea
Authors: Tingneyuc Sekac, Sujoy Kumar Jana, Indrajit Pal, Dilip Kumar Pal
Abstract:
Tectonism-induced tsunamis, landslides, ground shaking leading to liquefaction, infrastructure collapse, and conflagration are common earthquake hazards experienced worldwide. Apart from human casualties, damage to built infrastructure such as roads, bridges, buildings, and other property is a collateral consequence. Appropriate planning, based on proper evaluation and assessment of the potential level of earthquake hazard at a site, must precede development with a view to safeguarding people's welfare, infrastructure, and other property. The resulting information can be used as a tool to assist in minimizing earthquake risk and to foster appropriate construction design and the formulation of building codes at a particular site. Different disciplines adopt different approaches in assessing and monitoring earthquake hazard throughout the world. For the present study, the potential of GIS and remote sensing was utilized to evaluate and assess the earthquake hazards of the study region. Subsurface geology and geomorphology were the common factors assessed and integrated within the GIS environment, coupled with seismicity data layers such as peak ground acceleration (PGA), historical earthquake magnitude, and earthquake depth, to evaluate and prepare liquefaction potential zones (LPZ), culminating in earthquake hazard zonation of the study sites. Liquefaction can occur in the aftermath of severe ground shaking where the site soil conditions, geology, and geomorphology are conducive. These site conditions, or wave propagation media, were assessed to identify the potential zones. The precept is that during any earthquake event a seismic wave is generated and propagates from the earthquake focus to the surface. As it propagates, it passes through particular geological or geomorphological and soil features, which, according to their strength, stiffness, and moisture content, aggravate or attenuate the strength of wave propagation to the surface. Accordingly, the resulting intensity of shaking may or may not culminate in the collapse of built infrastructure. For earthquake hazard zonation, the overall assessment was carried out by integrating the seismicity data layers with the LPZ. Multi-criteria evaluation (MCE) with Saaty's Analytic Hierarchy Process (AHP) was adopted for this study. It is a GIS-based technique that integrates several factors (thematic layers) that can potentially contribute to liquefaction triggered by earthquake hazard. The factors are weighted and ranked in the order of their contribution to earthquake-induced liquefaction, and the weights and ranks assigned to each factor are normalized with the AHP technique. The spatial analysis tools of ArcGIS 10, i.e., the raster calculator, reclassify, and overlay analysis, were mainly employed in the study. The final LPZ and earthquake hazard zone outputs were reclassified into 'Very high', 'High', 'Moderate', 'Low', and 'Very low' to indicate levels of hazard within the study region. Keywords: hazard micro-zonation, liquefaction, multi criteria evaluation, tectonism
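An illustrative sketch (assuming Python/NumPy, with hypothetical layers and weights rather than the study's actual values) of the weighted-overlay step that follows the AHP weighting:

```python
import numpy as np

# Hypothetical reclassified thematic layers (1 = very low ... 5 = very high),
# all co-registered on the same grid.
rng = np.random.default_rng(1)
layers = {name: rng.integers(1, 6, size=(100, 100))
          for name in ("geology", "geomorphology", "pga", "magnitude", "depth")}

# AHP-derived weights (illustrative values), normalized to sum to 1.
weights = {"geology": 0.30, "geomorphology": 0.25, "pga": 0.20,
           "magnitude": 0.15, "depth": 0.10}

hazard = sum(weights[n] * layers[n].astype(float) for n in layers)

# Reclassify the continuous score into five hazard classes.
bins = np.quantile(hazard, [0.2, 0.4, 0.6, 0.8])
hazard_class = np.digitize(hazard, bins) + 1     # 1 = very low ... 5 = very high
print(np.bincount(hazard_class.ravel())[1:])
```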
Procedia PDF Downloads 26635 A Geographic Information System Mapping Method for Creating Improved Satellite Solar Radiation Dataset Over Qatar
Authors: Sachin Jain, Daniel Perez-Astudillo, Dunia A. Bachour, Antonio P. Sanfilippo
Abstract:
The future of solar energy in Qatar is evolving steadily. Hence, high-quality spatial solar radiation data are of utmost importance for any planning and commissioning of solar technology. Generally, two types of solar radiation data are available: satellite data and ground observations. Satellite solar radiation data are developed through physical and statistical models, while ground data are collected by solar radiation measurement stations. The ground data are of high quality; however, they are limited to distributed point locations, with high installation and maintenance costs for the ground stations. On the other hand, satellite solar radiation data are continuous and available throughout geographical locations, but they are relatively less accurate than ground data. To utilize the advantages of both, a product has been developed here which provides spatial continuity and higher accuracy than either dataset alone. The popular satellite database, the National Solar Radiation Database (NSRDB; PSM V3 model, spatial resolution: 4 km), is chosen here for merging with ground-based solar radiation measurements in Qatar. The spatial distribution of ground solar radiation measurement stations is comprehensive in Qatar, with a network of 13 ground stations. The monthly average of the daily total Global Horizontal Irradiation (GHI) component from ground and satellite data is used for error analysis. Normalized root mean square error (NRMSE) values of 3.31%, 6.53%, and 6.63% were observed for October, November, and December 2019, respectively, when comparing in-situ and NSRDB data. The method is based on the Empirical Bayesian Kriging Regression Prediction model available in ArcGIS, ESRI. The workflow of the algorithm is based on a combination of regression and kriging methods. A regression model (OLS, ordinary least squares) is fitted between the ground and NSRDB data points. A semi-variogram model is fitted to the experimental semi-variogram obtained from the residuals. The kriged residuals obtained after fitting the semi-variogram model were added to the values predicted by the regression model from the NSRDB data to obtain the final predicted values. The NRMSE values obtained after merging are 1.84%, 1.28%, and 1.81% for October, November, and December 2019, respectively. One more explanatory variable, ground elevation, has been incorporated in the regression and kriging methods to reduce the error and to provide higher spatial resolution (30 m). The final GHI maps were created after merging, and NRMSE values of 1.24%, 1.28%, and 1.28% were observed for October, November, and December 2019, respectively. The proposed merging method has proven to be highly accurate. An additional method is also proposed here: to generate calibrated maps using the regression and kriging model, and then to use the calibrated model to generate solar radiation maps from the explanatory variables alone when insufficient historical ground data are available for long-term analysis. The NRMSE values obtained from the comparison of the calibrated maps with ground data are 5.60% and 5.31% for November and December 2019, respectively. Keywords: global horizontal irradiation, GIS, empirical bayesian kriging regression prediction, NSRDB
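Outside the ArcGIS implementation used in the study, the regression-plus-residual-kriging workflow can be sketched generically in Python (an illustrative toy example with synthetic station values, using scikit-learn and PyKrige; the coordinates, variogram model, and units are placeholders):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from pykrige.ok import OrdinaryKriging

# Hypothetical inputs: 13 station locations with ground GHI, plus the
# collocated satellite GHI and elevation values.
rng = np.random.default_rng(2)
lon, lat = rng.uniform(50.8, 51.6, 13), rng.uniform(24.5, 26.2, 13)
sat_ghi = rng.uniform(4.0, 6.5, 13)
elev = rng.uniform(0, 100, 13)
ground_ghi = 0.95 * sat_ghi + 0.001 * elev + rng.normal(0, 0.1, 13)

# 1) Regression between ground observations and the explanatory variables.
X = np.column_stack([sat_ghi, elev])
ols = LinearRegression().fit(X, ground_ghi)
residuals = ground_ghi - ols.predict(X)

# 2) Ordinary kriging of the regression residuals.
ok = OrdinaryKriging(lon, lat, residuals, variogram_model="spherical")

# 3) Prediction at a new point = regression trend + kriged residual.
lon_q, lat_q = np.array([51.2]), np.array([25.3])
sat_q, elev_q = np.array([5.2]), np.array([30.0])
res_q, _ = ok.execute("points", lon_q, lat_q)
ghi_q = ols.predict(np.column_stack([sat_q, elev_q])) + res_q
print(ghi_q)   # NRMSE against held-out stations would then assess the merge
```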
Procedia PDF Downloads 8934 Oblique Radiative Solar Nano-Polymer Gel Coating Heat Transfer and Slip Flow: Manufacturing Simulation
Authors: Anwar Beg, Sireetorn Kuharat, Rashid Mehmood, Rabil Tabassum, Meisam Babaie
Abstract:
Nano-polymeric solar paints and sol-gels have emerged as a major new development in solar cell/collector coatings, offering significant improvements in durability, anti-corrosion performance, and thermal efficiency. They also exhibit substantial viscosity variation with temperature, which can be exploited in solar collector designs. Modern manufacturing processes for such nano-rheological materials frequently employ stagnation flow dynamics at high temperature, which invokes radiative heat transfer. Motivated by the need to elaborate in further detail the nanoscale heat, mass, and momentum characteristics of such sol-gels, the present article presents a mathematical and computational study of the steady, two-dimensional, non-aligned thermo-fluid boundary layer transport of copper metal-doped, water-based nano-polymeric sol-gels under radiative heat flux. To simulate real nano-polymer boundary interface dynamics, thermal slip is analysed at the wall, and a temperature-dependent viscosity is also considered. The Tiwari-Das nanofluid model is deployed, which features a volume fraction for the nanoparticle concentration; this approach also features a Maxwell-Garnett model for the nanofluid thermal conductivity. The conservation equations for mass, normal and tangential momentum, and energy (heat) are normalized via appropriate transformations to generate a coupled, non-linear, multi-degree ordinary differential boundary value problem. Numerical solutions are obtained via the stable, efficient Runge-Kutta-Fehlberg scheme with shooting quadrature in MATLAB symbolic software. Validation of the solutions is achieved with a Variational Iterative Method (VIM) utilizing Lagrangian multipliers. The impact of the key emerging dimensionless parameters, i.e., the obliqueness parameter, radiation-conduction Rosseland number (Rd), thermal slip parameter (α), viscosity parameter (m), and nanoparticle volume fraction (ϕ), on the non-dimensional normal and tangential velocity components, temperature, wall shear stress, local heat flux, and streamline distributions is visualized graphically. Shear stress and temperature are boosted with increasing radiative effect, whereas local heat flux is reduced. Increasing the wall thermal slip parameter depletes temperatures. With a greater volume fraction of copper nanoparticles, the temperature and thermal boundary layer thickness are elevated. Streamlines are found to be skewed markedly towards the left with a positive obliqueness parameter. Keywords: non-orthogonal stagnation-point heat transfer, solar nano-polymer coating, MATLAB numerical quadrature, Variational Iterative Method (VIM)
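As an illustrative aside (not the authors' MATLAB code), the shooting approach named above can be demonstrated in Python on a simplified Blasius-type boundary layer equation, using an adaptive Runge-Kutta integrator in place of the Runge-Kutta-Fehlberg routine:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Simplified Blasius-type boundary-layer problem solved by shooting:
#   f''' + 0.5 * f * f'' = 0,  f(0) = f'(0) = 0,  f'(inf) = 1
def rhs(eta, y):
    f, fp, fpp = y
    return [fp, fpp, -0.5 * f * fpp]

ETA_MAX = 10.0   # proxy for "infinity"

def fp_at_infinity(s):
    """Integrate with guessed f''(0) = s and return f'(ETA_MAX) - 1."""
    sol = solve_ivp(rhs, [0.0, ETA_MAX], [0.0, 0.0, s],
                    method="RK45", rtol=1e-8, atol=1e-10)
    return sol.y[1, -1] - 1.0

# Shooting: find the wall curvature f''(0) satisfying the far-field condition.
s_wall = brentq(fp_at_infinity, 0.1, 1.0)
print(f"f''(0) = {s_wall:.5f}")   # classical value is about 0.332
```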
Procedia PDF Downloads 13533 Efficacy of Deep Learning for Below-Canopy Reconstruction of Satellite and Aerial Sensing Point Clouds through Fractal Tree Symmetry
Authors: Dhanuj M. Gandikota
Abstract:
Sensor-derived three-dimensional (3D) point clouds of trees are invaluable in remote sensing analysis for the accurate measurement of key structural metrics, bio-inventory values, spatial planning/visualization, and ecological modeling. Machine learning (ML) holds potential for addressing the restrictive tradeoffs in cost, spatial coverage, resolution, and information gain that exist among current point cloud sensing methods. Terrestrial laser scanning (TLS) remains the highest-fidelity source of both canopy and below-canopy structural features, but its usage is limited in both coverage and cost, requiring manual deployment to map out large forested areas. While aerial laser scanning (ALS) remains a reliable avenue of active LIDAR remote sensing, ALS is also cost-restrictive in its deployment methods. Space-borne photogrammetry from high-resolution satellite constellations is an avenue of passive remote sensing with promising viability for the accurate construction of vegetation 3D point clouds. It provides both the lowest comparative cost and the largest spatial coverage across remote sensing methods. However, both space-borne photogrammetry and ALS demonstrate technical limitations in the capture of valuable below-canopy point cloud data. Looking to minimize these tradeoffs, we explored a class of powerful ML algorithms called deep learning (DL) that shows promise in recent research on 3D point cloud reconstruction and interpolation. Our research details the efficacy of applying these DL techniques to reconstruct accurate below-canopy point clouds from space-borne and aerial remote sensing through learned patterns of tree species fractal symmetry properties and the supplementation of locally sourced bio-inventory metrics. From our dataset, consisting of tree point clouds obtained from TLS, we deconstructed the point clouds of each tree into those that would be obtained through ALS and satellite photogrammetry of varying resolutions. We fed this ALS/satellite point cloud dataset, along with the simulated local bio-inventory metrics, into the DL point cloud reconstruction architectures to generate the full 3D tree point clouds (the truth values being the full TLS tree point clouds containing the below-canopy information). Point cloud reconstruction accuracy was validated both through the measurement of error against the original TLS point clouds and through the error in extraction of key structural metrics, such as crown base height, diameter above root crown, and leaf/wood volume. The results of this research additionally demonstrate the supplemental performance gain of using minimal locally sourced bio-inventory metric information as an input to ML systems to reach specified accuracy thresholds of tree point cloud reconstruction. This research provides insight into methods for the rapid, cost-effective, and accurate construction of below-canopy tree 3D point clouds, as well as the supported potential of ML and DL to learn complex, unmodeled patterns of fractal tree growth symmetry. Keywords: deep learning, machine learning, satellite, photogrammetry, aerial laser scanning, terrestrial laser scanning, point cloud, fractal symmetry
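The abstract does not name the point-to-point error metric used; a common choice for comparing a reconstructed cloud with the TLS reference is the symmetric Chamfer distance, sketched below as an illustrative Python example:

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(reconstructed, reference):
    """Symmetric Chamfer distance between two (N, 3) point clouds."""
    tree_ref = cKDTree(reference)
    tree_rec = cKDTree(reconstructed)
    d_rec_to_ref, _ = tree_ref.query(reconstructed)   # nearest reference point
    d_ref_to_rec, _ = tree_rec.query(reference)       # nearest reconstructed point
    return d_rec_to_ref.mean() + d_ref_to_rec.mean()

# toy example: compare a degraded/reconstructed cloud against the full TLS cloud
rng = np.random.default_rng(3)
tls_cloud = rng.normal(size=(2000, 3))
reconstruction = tls_cloud + rng.normal(scale=0.05, size=(2000, 3))
print(chamfer_distance(reconstruction, tls_cloud))
```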
Procedia PDF Downloads 10332 Development of PCL/Chitosan Core-Shell Electrospun Structures
Authors: Hilal T. Sasmazel, Seda Surucu
Abstract:
Skin tissue engineering is a promising field for the treatment of skin defects using scaffolds. This approach involves the use of living cells and biomaterials to restore, maintain, or regenerate tissues and organs in the body by providing: (i) a larger surface area for cell attachment, (ii) proper porosity for cell colonization and cell-to-cell interaction, and (iii) three-dimensionality at the macroscopic scale. Recent studies in this area mainly focus on the fabrication of scaffolds that can closely mimic the natural extracellular matrix (ECM) to create a tissue-specific, niche-like environment at the subcellular scale. Scaffolds designed as ECM-like architectures incorporate into the host with minimal scarring/pain and facilitate angiogenesis. This study concerns the combination of the synthetic polymer PCL and the natural polymer chitosan to form 3D PCL/chitosan core-shell structures for skin tissue engineering applications. Among the polymers used in tissue engineering, the natural polymer chitosan and the synthetic polymer poly(ε-caprolactone) (PCL) are widely preferred in the literature. Chitosan has long been of interest to researchers because of its superior biocompatibility and structural resemblance to the glycosaminoglycans of bone tissue. However, its low mechanical flexibility and limited biodegradability reveal the necessity of using this polymer in a composite structure. On the other hand, PCL is a versatile polymer due to its low melting point (60°C), ease of processability, degradability by non-enzymatic processes (hydrolysis), and good mechanical properties. Nevertheless, PCL also has several disadvantages, such as its hydrophobic structure, limited bio-interaction, and susceptibility to bacterial biodegradation. Therefore, it becomes crucial to use these polymers together as a hybrid material in order to overcome the disadvantages of each and combine their advantages. The scaffolds here were fabricated using the electrospinning technique, and the samples were characterized by contact angle (CA) measurements, scanning electron microscopy (SEM), transmission electron microscopy (TEM), and X-ray photoelectron spectroscopy (XPS). Additionally, gas permeability tests, mechanical tests, thickness measurements, and PBS absorption and shrinkage tests were performed for all types of scaffolds (PCL, chitosan, and PCL/chitosan core-shell). Using the ImageJ software (USA) on SEM photographs, the average inter-fiber diameter values were calculated as 0.717±0.198 µm for PCL, 0.660±0.070 µm for chitosan, and 0.412±0.339 µm for the PCL/chitosan core-shell structures. Additionally, the average inter-fiber pore size values exhibited decreases of 66.91% and 61.90% for the PCL and chitosan structures, respectively, compared to the PCL/chitosan core-shell structures. TEM images proved that homogeneous, continuous, bead-free core-shell fibers were obtained. XPS analysis of the PCL/chitosan core-shell structures exhibited the characteristic peaks of the PCL and chitosan polymers. The measured average gas permeability of the produced PCL/chitosan core-shell structure was 2315±3.4 g.m-2.day-1. In the future, cell-material interactions of the developed PCL/chitosan core-shell structures will be investigated with the L929 (ATCC CCL-1) mouse fibroblast cell line. The standard MTT assay and microscopic imaging methods will be used to investigate the cell attachment, proliferation, and growth capacities of the developed materials. Keywords: chitosan, coaxial electrospinning, core-shell, PCL, tissue scaffold
Procedia PDF Downloads 481