Search results for: similarity metrics

146 Cyclocoelids (Trematoda: Echinostomata) from Gadwall Mareca strepera in the South of the Russian Far East

Authors: Konstantin S. Vainutis, Mark E. Andreev, Anastasia N. Voronova, Mikhail Yu. Shchelkanov

Abstract:

Introduction: The trematodes of the family Cyclocoelidae (cyclocoelids) belong to the superfamily Echinostomatoidea and infect the air sacs and trachea of wild birds. At present, the family Cyclocoelidae comprises nine valid genera in three subfamilies: Cyclocoelinae (type taxon), Haematotrephinae, and Typhlocoelinae. To the best of our knowledge, this study is the first to apply molecular genetic methods to cyclocoelids from the Russian Far East. Here we provide data on the morphology and phylogeny of cyclocoelids recovered from a gadwall in the Russian Far East. The morphological and genetic data obtained for these cyclocoelids indicate the need to revise the previously proposed classification within the family Cyclocoelidae. Objectives: The first objective was to perform a morphological study of the cyclocoelids found in M. strepera from the Russian Far East; the second was to reconstruct the phylogenetic relationships of the studied trematodes with other cyclocoelids using the 28S gene. Material and methods: During field studies in the Khasansky district of the Primorsky region, 21 cyclocoelids were recovered from the air sacs of a single gadwall Mareca strepera. Seven samples of cyclocoelids were stained in alum carmine, dehydrated in a graded ethanol series, cleared in clove oil, and mounted in Canada balsam. Genomic DNA was extracted from four cyclocoelids using the alkaline lysis method HotShot. The 28S rDNA fragment was amplified using the forward primer Digl2 and the reverse primer 1500R. Results: According to morphological features (ovary intratesticular, forming a triangle with the testes), the studied worms belong to the subfamily Cyclocoelinae Stossich, 1902. The highest morphological similarity was to trematodes of the genus Cyclocoelum Brandes, 1892 (genital pores pharyngeal). However, the genetic analysis showed significant divergence between the studied trematodes and the genus Cyclocoelum. On the phylogenetic tree, these trematodes occupied a sister position relative to the genus Morishitium (previously placed in the subfamily Szidatitrematinae). Conclusion: Based on the results of the morphological and genetic studies, the cyclocoelids isolated from Mareca strepera should be described in a previously unknown genus, differentiated from the type genus Cyclocoelum of the type subfamily Cyclocoelinae. Considering the available molecular data, including those for the cyclocoelids described here, the family Cyclocoelidae comprises ten valid genera in the three subfamilies mentioned above.
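
As a pointer to how such 28S-based relationships can be explored computationally, here is a minimal Biopython sketch that builds a neighbor-joining tree from an alignment. The file name is hypothetical, and the study's actual tree inference may well have used maximum-likelihood or Bayesian methods rather than this simple distance approach.

```python
# Minimal sketch: neighbor-joining tree from a 28S rDNA alignment (Biopython).
# "cyclocoelid_28s.fasta" is a hypothetical pre-aligned FASTA file.
from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("cyclocoelid_28s.fasta", "fasta")  # aligned 28S sequences
distances = DistanceCalculator("identity").get_distance(alignment)
tree = DistanceTreeConstructor().nj(distances)  # neighbor-joining topology
Phylo.draw_ascii(tree)  # inspect sister-group relationships in the terminal
```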

Keywords: new species, trematoda, phylogeny, cyclocoelidae

Procedia PDF Downloads 841
145 Enabling Self-Care and Shared Decision Making for People Living with Dementia

Authors: Jonathan Turner, Julie Doyle, Laura O’Philbin, Dympna O’Sullivan

Abstract:

People living with dementia should be at the centre of decision-making regarding goals for daily living. These goals include basic activities (dressing, hygiene, and mobility), advanced activities (finances, transportation, and shopping), and meaningful activities that promote well-being (pastimes and intellectual pursuits). However, there is limited involvement of people living with dementia in the design of technology to support their goals. A project is described that is co-designing intelligent computer-based support for, and with, people affected by dementia and their carers. The technology will support self-management, empower participation in shared decision-making with carers, and help people living with dementia remain healthy and independent in their homes for longer. It includes information from the patient's care plan, which documents medications, contacts, and the patient's wishes on end-of-life care. Importantly for this work, the plan can outline activities that should be maintained or worked towards, such as exercise or social contact. The authors discuss how to integrate care goal information from such a care plan with data collected from passive sensors in the patient's home in order to deliver individualized planning and interventions for persons with dementia. A number of scientific challenges are addressed. First, to co-design, with people living with dementia and their carers, computerized support for shared decision-making about their care, while allowing the patient to share the care plan. Second, to develop a new and open monitoring framework with which to configure sensor technologies to collect data about whether the goals and actions specified in a person's care plan are being achieved. This is developed top-down by associating care quality types and metrics elicited from the co-design activities with types of data that can be collected within the home, from passive and active sensors, and from the patient's feedback collected through a simple co-designed interface. These activities and data will be mapped to appropriate sensors and technological infrastructure with which to collect the data. Third, the application of machine learning models to analyze data collected via the sensing devices in order to investigate whether, and to what extent, activities outlined in the care plan are being achieved. The models will capture longitudinal data to track disease progression over time; as the disease progresses and captured data show that activities outlined in the care plan are not being achieved, the care plan may recommend alternative activities. Disease progression may also require care changes, and a data-driven approach can capture changes in a condition more quickly and allow care plans to evolve and be updated.
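
As an illustration of the goal-to-sensor mapping idea described above, the following is a minimal sketch with hypothetical names; it is not the project's actual framework.

```python
# Illustrative sketch (hypothetical names): linking a care-plan goal to sensor
# evidence so that goal attainment can be checked from home-monitoring data.
from dataclasses import dataclass

@dataclass
class CareGoal:
    name: str            # e.g., "daily walk"
    sensor: str          # sensor stream expected to evidence the goal
    weekly_target: int   # minimum number of detected occurrences per week

def goal_met(goal: CareGoal, weekly_events: list[int]) -> bool:
    """True if the sensor stream recorded enough events this week."""
    return sum(weekly_events) >= goal.weekly_target

walk = CareGoal(name="daily walk", sensor="door+wearable", weekly_target=5)
print(goal_met(walk, weekly_events=[1, 0, 1, 1, 0, 1, 1]))  # True: 5 >= 5
```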

Keywords: care goals, decision-making, dementia, self-care, sensors

Procedia PDF Downloads 169
144 Basics for Corruption Reduction and Fraud Prevention in Industrial/Humanitarian Organizations through Supplier Management in Supply Chain Systems

Authors: Ibrahim Burki

Abstract:

Unfortunately, all organizations (industrial and humanitarian/non-governmental organizations) are prone to fraud and corruption in their supply chain management routines. The reputational and financial fallout can be disastrous. The growing number of companies using suppliers based in the local market has certainly increased the threat of fraud as well as corruption. There are various potential threats, such as poor or non-existent record keeping, purchasing of lower-quality goods at higher prices, excessive entertainment of staff by suppliers, deviations in communications between procurement staff and suppliers (such as calls or text messages to mobile phones), staff demanding extended periods of notice before they allow an audit to take place, inexperienced buyers, and more. Despite all the above-mentioned threats, this research paper emphasizes the effectiveness of well-maintained vendor records and the sorting/filtration of vendors in cutting down the possible threats of corruption and fraud. The exercise was applied in a humanitarian organization in Pakistan, but it is applicable to the whole South Asia region due to the similarity of cultures and contexts. In that firm, there were more than 550 (five hundred and fifty) registered vendors. During disasters or emergency phases, requirements are met on an urgent basis, providing golden opportunities for fake companies, or for sister companies of already registered companies, to join the tendering process without declaration, or even under a different (new) company name. Therefore, a list of required documents (along with a checklist) was developed and sent to all of the vendors in the current database, and based upon receipt of the requested documents, the vendors were sorted. These vendors were then divided into active (meeting the entire set of criteria) and non-active groups. This initial filtration stage allowed the firm to continue its work without a complete shutdown: only vendors falling in the active group were allowed to participate in tenders until the whole process was completed. Likewise, only those companies or firms meeting the set criteria (active category) will be allowed to register in the future, a dedicated filing system (soft and hard copies) will be maintained, and all of the companies/firms in the active group will be physically verified (visited) by a committee comprising senior members of at least the Finance, Supply Chain (other than procurement), and Security departments.
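
A minimal sketch of the checklist-based filtration described above; all field and document names are hypothetical.

```python
# Hedged sketch: vendors that returned every required document are marked
# "active"; the rest fall into the non-active group pending verification.
REQUIRED_DOCS = {"registration_certificate", "tax_certificate",
                 "bank_details", "physical_address_proof"}

vendors = [
    {"name": "Vendor A", "docs": {"registration_certificate", "tax_certificate",
                                  "bank_details", "physical_address_proof"}},
    {"name": "Vendor B", "docs": {"registration_certificate"}},
]

active = [v for v in vendors if REQUIRED_DOCS <= v["docs"]]        # full checklist
non_active = [v for v in vendors if not REQUIRED_DOCS <= v["docs"]]
print([v["name"] for v in active])      # ['Vendor A']
print([v["name"] for v in non_active])  # ['Vendor B']
```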

Keywords: corruption reduction, fraud prevention, supplier management, industrial/humanitarian organizations

Procedia PDF Downloads 536
143 DTI Connectome Changes in the Acute Phase of Aneurysmal Subarachnoid Hemorrhage Improve Outcome Classification

Authors: Sarah E. Nelson, Casey Weiner, Alexander Sigmon, Jun Hua, Haris I. Sair, Jose I. Suarez, Robert D. Stevens

Abstract:

Aneurysmal subarachnoid hemorrhage (aSAH) can lead to significant morbidity and mortality and has traditionally been hampered by poor methods to predict outcome; graph-theoretical information from structural connectomes indicated significant connectivity changes and improved acute prognostication in a Random Forest (RF) model. This study's hypothesis was that structural connectivity changes occur in canonical brain networks of acute aSAH patients and that these changes are associated with functional outcome at six months. In a prospective cohort of patients admitted to a single institution for management of acute aSAH, patients underwent diffusion tensor imaging (DTI) as part of a multimodal MRI scan. A weighted undirected structural connectome was created from each patient's images using Constant Solid Angle (CSA) tractography, with 176 regions of interest (ROIs) defined by the Johns Hopkins Eve atlas. ROIs were sorted into four networks: Default Mode Network, Executive Control Network, Salience Network, and Whole Brain. The resulting nodes and edges were characterized using graph-theoretic features, including Node Strength (NS), Betweenness Centrality (BC), Network Degree (ND), and Connectedness (C). Clinical features (including demographics and the World Federation of Neurosurgical Societies scale) and graph features were used separately and in combination to train RF and Logistic Regression classifiers to predict two outcomes: dichotomized modified Rankin Score (mRS) at discharge and at six months after discharge (favorable outcome mRS 0-2, unfavorable outcome mRS 3-6). A total of 56 aSAH patients underwent DTI a median of 7 (IQR 8.5) days after admission. The best performing model (RF), combining clinical and DTI graph features, had a mean Area Under the Receiver Operating Characteristic Curve (AUROC) of 0.88 ± 0.00 and Area Under the Precision-Recall Curve (AUPRC) of 0.95 ± 0.00 over 500 trials. The combined model performed better than the clinical model alone (AUROC 0.81 ± 0.01, AUPRC 0.91 ± 0.00). The highest-ranked graph features for prediction were NS, BC, and ND. These results indicate reorganization of the connectome early after aSAH. The performance of clinical prognostic models was increased significantly by the inclusion of DTI-derived graph connectivity metrics. This methodology could significantly improve prognostication of aSAH.
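
The following is an illustrative sketch of this kind of pipeline on synthetic stand-in data, not the authors' code: graph features are extracted with networkx and fed to a scikit-learn Random Forest.

```python
# Sketch: node strength (NS), betweenness centrality (BC) and degree (ND) from
# a weighted connectome, then a Random Forest on a dichotomized outcome.
import numpy as np
import networkx as nx
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def graph_features(conn):
    """Concatenate NS, BC and ND for every node of a weighted adjacency matrix."""
    G = nx.from_numpy_array((conn + conn.T) / 2)      # symmetrize: undirected
    ns = [s for _, s in G.degree(weight="weight")]    # node strength (NS)
    # NB: betweenness treats weights as distances; real pipelines often invert
    # connection strengths first. Kept simple here.
    bc = list(nx.betweenness_centrality(G, weight="weight").values())
    nd = [d for _, d in G.degree()]                   # network degree (ND)
    return np.concatenate([ns, bc, nd])

# 56 simulated "patients" with 20-ROI connectomes and alternating outcomes
X = np.array([graph_features(rng.random((20, 20))) for _ in range(56)])
y = np.tile([0, 1], 28)                               # dichotomized mRS stand-in

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:40], y[:40])
print("AUROC:", roc_auc_score(y[40:], clf.predict_proba(X[40:])[:, 1]))
```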

Keywords: connectomics, diffusion tensor imaging, graph theory, machine learning, subarachnoid hemorrhage

Procedia PDF Downloads 188
142 Relations between the Internal Employment Conditions of International Organizations and the Characteristics of the National Civil Service

Authors: Renata Hrecska

Abstract:

This research seeks to examine fully the internal employment law of international organizations by comparing it with the characteristics of the national civil service. The aim of the research is to compare a legal system that has developed over many centuries with the relatively new internal staffing regulations, to find out which solution schemes can help each other through mutual legal development in order to respond effectively to the social challenges of everyday life. Generally, the civil service rules of any country or international entity have in common an inherent characteristic: they serve the public interest. Behind this common base, however, there are many differences: there is the clear fragmentation of state regulation and the unity of organizational regulation. On the other hand, this difference disappears to some extent: the public service regulation of international organizations can be considered uniform so long as we examine it within, but not outside, an organization. As soon as we compare different organizations, we find many different solutions for staffing regulations. It is clear that the national civil service is a strong model for international organizations, but the question may be whether the staffing policy of international organizations can serve the national civil service as an example, too. In this respect, the easiest way to imagine a legislative environment would be a single comprehensive code, the general part of which is the Civil Service Act itself, with a specific part containing the specific, necessarily differentiating rules for each layer of the civil service. Would it be advantageous to follow in the footsteps of the leading international organizations, or are there specialities at the national level of the civil service that cannot be avoided in regulatory processes? In addition to the above, the personal competencies of officials working in international organizations and public administrations also show a high degree of similarity, regardless of the type of employment. Thus, the whole public service system is characterized by the fundamental and special values that a person capable of holding a public office must be able to demonstrate, in some cases even without special qualifications. It is also interesting to compare the two spheres of employment in light of the theory of Louis Brandeis, a justice of the US Supreme Court, who formulated a complex theory of the professions as distinguished from other occupations. From this point of view, we can examine the continuous development of research and specialized knowledge at work; community recognition and social status; the extent to which we can see a close-knit professional organization with an altruistic philosophy; how stability in working conditions grows out of the stability of the profession; and how the autonomy of the profession can prevail.

Keywords: civil service, comparative law, international organizations, regulatory systems

Procedia PDF Downloads 130
141 Predicting the Impact of Scope Changes on Project Cost and Schedule Using Machine Learning Techniques

Authors: Soheila Sadeghi

Abstract:

In the dynamic landscape of project management, scope changes are an inevitable reality that can significantly impact project performance. These changes, whether initiated by stakeholders, external factors, or internal project dynamics, can lead to cost overruns and schedule delays. Accurately predicting the consequences of these changes is crucial for effective project control and informed decision-making. This study aims to develop predictive models to estimate the impact of scope changes on project cost and schedule using machine learning techniques. The research utilizes a comprehensive dataset containing detailed information on project tasks, including the Work Breakdown Structure (WBS), task type, productivity rate, estimated cost, actual cost, duration, task dependencies, scope change magnitude, and scope change timing. Multiple machine learning models are developed and evaluated to predict the impact of scope changes on project cost and schedule. These models include Linear Regression, Decision Tree, Ridge Regression, Random Forest, Gradient Boosting, and XGBoost. The dataset is split into training and testing sets, and the models are trained using the preprocessed data. Cross-validation techniques are employed to assess the robustness and generalization ability of the models. The performance of the models is evaluated using metrics such as Mean Squared Error (MSE) and R-squared. Residual plots are generated to assess the goodness of fit and identify any patterns or outliers. Hyperparameter tuning is performed to optimize the XGBoost model and improve its predictive accuracy. The feature importance analysis reveals the relative significance of different project attributes in predicting the impact on cost and schedule. Key factors such as productivity rate, scope change magnitude, task dependencies, estimated cost, actual cost, duration, and specific WBS elements are identified as influential predictors. The study highlights the importance of considering both cost and schedule implications when managing scope changes. The developed predictive models provide project managers with a data-driven tool to proactively assess the potential impact of scope changes on project cost and schedule. By leveraging these insights, project managers can make informed decisions, optimize resource allocation, and develop effective mitigation strategies. The findings of this research contribute to improved project planning, risk management, and overall project success.
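
A minimal sketch of this model-comparison setup, using synthetic stand-in data since the project dataset is not public; XGBoost is omitted here to keep the dependencies to scikit-learn alone.

```python
# Sketch: compare several regressors with 5-fold cross-validation on MSE and R2.
import numpy as np
from sklearn.model_selection import cross_validate
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor

rng = np.random.default_rng(1)
X = rng.random((300, 6))          # stand-ins: productivity rate, change magnitude...
y = X @ rng.random(6) + 0.1 * rng.standard_normal(300)   # synthetic cost impact

models = {
    "linear": LinearRegression(),
    "ridge": Ridge(alpha=1.0),
    "tree": DecisionTreeRegressor(max_depth=5),
    "rf": RandomForestRegressor(n_estimators=100, random_state=0),
    "gbm": GradientBoostingRegressor(random_state=0),
}
for name, model in models.items():
    cv = cross_validate(model, X, y, cv=5,
                        scoring=("neg_mean_squared_error", "r2"))
    print(name,
          "MSE:", -cv["test_neg_mean_squared_error"].mean(),
          "R2:", cv["test_r2"].mean())
```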

Keywords: cost impact, machine learning, predictive modeling, schedule impact, scope changes

Procedia PDF Downloads 38
140 DNA Hypomethylating Agents Induced Histone Acetylation Changes in Leukemia

Authors: Sridhar A. Malkaram, Tamer E. Fandy

Abstract:

Purpose: 5-Azacytidine (5AC) and decitabine (DC) are DNA hypomethylating agents. We recently demonstrated that both drugs increase the enzymatic activity of the histone deacetylase enzyme SIRT6. Accordingly, we compared the genome-wide H3K9 acetylation changes induced by both drugs in leukemia cells. Description of Methods & Materials: Mononuclear cells from the bone marrow of six de-identified naive acute myeloid leukemia (AML) patients were cultured with either 500 nM of DC or 5AC for 72 h, followed by ChIP-Seq analysis using a ChIP-validated acetylated-H3K9 (H3K9ac) antibody. ChIP-Seq libraries were prepared from treated and untreated cells using the SMARTer ThruPLEX DNA-seq kit (Takara Bio, USA) according to the manufacturer's instructions. Libraries were purified and size-selected with AMPure XP beads at a 1:1 (v/v) ratio. All libraries were pooled prior to sequencing on an Illumina HiSeq 1500. The dual-indexed single-read Rapid Run was performed with 1x120 cycles at a 5 pM final concentration of the library pool. Sequence reads with average Phred quality < 20 or length < 35 bp, PCR duplicates, and reads aligning to blacklisted regions of the genome were filtered out using Trim Galore v0.4.4 and cutadapt v1.18. Reads were aligned to the reference human genome (hg38) using Bowtie v2.3.4.1 in end-to-end alignment mode. H3K9ac-enriched (peak) regions were identified with diffReps v1.55.4, using input samples for background correction. The statistical significance of differential peak counts was assessed with a negative binomial test, using all individuals as replicates. Data & Results: The data from the six patients showed significant (Padj < 0.05) acetylation changes at 925 loci after 5AC treatment versus 182 loci after DC treatment. Both drugs induced H3K9 acetylation changes at different chromosomal regions, including promoters, coding exons, introns, and distal intergenic regions. Ten common genes showed H3K9 acetylation changes with both drugs. Approximately 84% of the genes showed an H3K9 acetylation decrease with 5AC versus only 54% with DC. Figures 1 and 2 show the heatmaps for the top 100 genes and the 99 genes showing an H3K9 acetylation decrease after 5AC treatment and DC treatment, respectively. Conclusion: Despite the similarity in hypomethylating activity and chemical structure, the effects of the two drugs on H3K9 acetylation were significantly different. More changes in H3K9 acetylation were observed after 5AC treatment than after DC treatment. The impact of these changes on gene expression and the clinical efficacy of these drugs requires further investigation.
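
As a simplified stand-in for the diffReps-style test named above, the sketch below fits a negative binomial GLM with statsmodels to read counts for a single peak region; the counts are invented for illustration and the real analysis runs per-region across the whole genome.

```python
# Hedged sketch: negative binomial GLM for one peak region, six patients,
# untreated vs 5AC-treated. Counts are illustrative, not study data.
import numpy as np
import statsmodels.api as sm

counts = np.array([120, 95, 143, 88, 110, 101,   # untreated, 6 patients
                   60, 55, 80, 49, 72, 58])      # 5AC-treated, same patients
treated = np.array([0] * 6 + [1] * 6)
X = sm.add_constant(treated)

model = sm.GLM(counts, X, family=sm.families.NegativeBinomial()).fit()
print(model.params)    # treatment coefficient: log fold-change in counts
print(model.pvalues)   # significance of the acetylation change at this locus
```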

Keywords: DNA methylation, leukemia, decitabine, 5-Azacytidine, epigenetics

Procedia PDF Downloads 144
139 Model-Driven and Data-Driven Approaches for Crop Yield Prediction: Analysis and Comparison

Authors: Xiangtuo Chen, Paul-Henry Cournéde

Abstract:

Crop yield prediction is a paramount issue in agriculture. The main idea of this paper is to find an efficient way to predict corn yield based on meteorological records. The prediction models used in this paper can be classified into model-driven approaches and data-driven approaches, according to their different modeling methodologies. The model-driven approaches are based on crop mechanistic modeling. They describe crop growth in interaction with the environment as dynamical systems. However, the calibration of such dynamical systems is difficult, because it turns out to be a multidimensional non-convex optimization problem. An original contribution of this paper is to propose a statistical methodology, Multi-Scenarios Parameters Estimation (MSPE), for the parametrization of potentially complex mechanistic models from a new type of dataset (climatic data and final yield in many situations). It is tested with CORNFLO, a crop model for maize growth. On the other hand, the data-driven approach to yield prediction is free of the complex biophysical process, but it imposes strict requirements on the dataset. A second contribution of the paper is the comparison of these model-driven methods with classical data-driven methods. For this purpose, we consider two classes of regression methods: methods derived from linear regression (Ridge and Lasso Regression, Principal Components Regression, and Partial Least Squares Regression) and machine learning methods (Random Forest, k-Nearest Neighbor, Artificial Neural Network, and SVM regression). The dataset consists of 720 records of corn yield at county scale, provided by the United States Department of Agriculture (USDA), and the associated climatic data. A 5-fold cross-validation process and two accuracy metrics, root mean square error of prediction (RMSEP) and mean absolute error of prediction (MAEP), were used to evaluate crop prediction capacity. The results show that among the data-driven approaches, Random Forest is the most robust and generally achieves the best prediction error (MAEP 4.27%). It also outperforms our model-driven approach (MAEP 6.11%). However, the method of calibrating the mechanistic model from easily accessible datasets offers several side perspectives. The mechanistic model can potentially help to underline the stresses suffered by the crop or to identify biological parameters of interest for breeding purposes. For this reason, an interesting perspective is to combine these two types of approaches.
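
A minimal sketch of the data-driven evaluation protocol (5-fold cross-validation with RMSEP and MAEP), on synthetic stand-in data since the USDA records are not reproduced here.

```python
# Sketch: cross-validated RMSEP/MAEP comparison of two data-driven regressors.
import numpy as np
from sklearn.model_selection import KFold, cross_validate
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(2)
X = rng.random((720, 10))                      # stand-in climatic predictors
y = 5 + 3 * X[:, 0] + 0.5 * rng.standard_normal(720)   # synthetic corn yield

for name, model in {"random_forest": RandomForestRegressor(random_state=0),
                    "knn": KNeighborsRegressor(n_neighbors=5)}.items():
    cv = cross_validate(model, X, y, cv=KFold(5, shuffle=True, random_state=0),
                        scoring=("neg_root_mean_squared_error",
                                 "neg_mean_absolute_error"))
    print(name,
          "RMSEP:", -cv["test_neg_root_mean_squared_error"].mean(),
          "MAEP:", -cv["test_neg_mean_absolute_error"].mean())
```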

Keywords: crop yield prediction, crop model, sensitivity analysis, parameter estimation, particle swarm optimization, random forest

Procedia PDF Downloads 229
138 Use of Satellite Altimetry and Moderate Resolution Imaging Technology of Flood Extent to Support Seasonal Outlooks of Nuisance Flood Risk along United States Coastlines and Managed Areas

Authors: Varis Ransibrahmanakul, Doug Pirhalla, Scott Sheridan, Cameron Lee

Abstract:

U.S. coastal areas and ecosystems are facing multiple sea level rise threats and effects: heavy rain events, cyclones, and changing wind and weather patterns all influence coastal flooding, sedimentation, and erosion along critical barrier islands and can strongly impact habitat resiliency and water quality in protected habitats. These impacts are increasing over time and have accelerated the need for new tracking techniques, models, and tools of flood risk to support enhanced preparedness for coastal management and mitigation. To address this issue, the NOAA National Ocean Service (NOS) evaluated new metrics from AVISO/Copernicus satellite altimetry and MODIS IR flood extents to isolate nodes of atmospheric variability indicative of elevated sea level and nuisance flood events. Using de-trended time series of cross-shelf sea surface heights (SSH), we identified specific Self-Organizing Map (SOM) nodes and transitions having the strongest regional association with oceanic spatial patterns (e.g., heightened downwelling-favorable wind stress and enhanced southward coastal transport) indicative of elevated coastal sea levels. Results show the impacts of the inverted barometer effect as well as the effects of surface wind forcing: Ekman-induced transport along broad expanses of the U.S. eastern coastline. Higher sea levels and corresponding localized flooding are associated with either pattern indicative of enhanced on-shore flow, deepening cyclones, or local-scale winds, generally coupled with increased local to regional precipitation. These findings will support an integration of satellite products and will inform seasonal outlook model development supported through NOAA's Climate Program Office and the NOS Center for Operational Oceanographic Products and Services (CO-OPS). Overall results will prioritize ecological areas and coastal lab facilities at risk based on the number of nuisance floods projected and inform coastal management of flood risk around low-lying areas subject to bank erosion.
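
As an illustration of the SOM step, here is a hedged sketch using the open-source minisom package on an invented SSH-anomaly array; it mirrors the node-assignment idea, not the study's actual configuration.

```python
# Sketch: cluster de-trended sea-surface-height anomaly fields into SOM nodes.
# Array shapes and grid size are assumptions for illustration.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(3)
ssha = rng.standard_normal((1000, 50))   # 1000 days x 50 cross-shelf SSH points

som = MiniSom(x=4, y=3, input_len=50, sigma=1.0, learning_rate=0.5,
              random_seed=0)             # 4x3 grid of candidate patterns
som.train_random(ssha, num_iteration=5000)

# Assign each day to its best-matching node; node frequencies approximate the
# occurrence of each atmospheric/oceanic pattern.
nodes = [som.winner(day) for day in ssha]
print(nodes[:5])
```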

Keywords: AVISO satellite altimetry SSHA, MODIS IR flood map, nuisance flood, remote sensing of flood

Procedia PDF Downloads 139
137 Machine Learning Prediction of Diabetes Prevalence in the U.S. Using Demographic, Physical, and Lifestyle Indicators: A Study Based on NHANES 2009-2018

Authors: Oluwafunmibi Omotayo Fasanya, Augustine Kena Adjei

Abstract:

To develop a machine learning model to predict diabetes (DM) prevalence in the U.S. population using demographic characteristics, physical indicators, and lifestyle habits, and to analyze how these factors contribute to the likelihood of diabetes. We analyzed data from 23,546 participants aged 20 and older, who were non-pregnant, from the 2009-2018 National Health and Nutrition Examination Survey (NHANES). The dataset included key demographic (age, sex, ethnicity), physical (BMI, leg length, total cholesterol [TCHOL], fasting plasma glucose), and lifestyle indicators (smoking habits). A weighted sample was used to account for NHANES survey design features such as stratification and clustering. A classification machine learning model was trained to predict diabetes status. The target variable was binary (diabetes or non-diabetes) based on fasting plasma glucose measurements. The following models were evaluated: Logistic Regression (baseline), Random Forest Classifier, Gradient Boosting Machine (GBM), Support Vector Machine (SVM). Model performance was assessed using accuracy, F1-score, AUC-ROC, and precision-recall metrics. Feature importance was analyzed using SHAP values to interpret the contributions of variables such as age, BMI, ethnicity, and smoking status. The Gradient Boosting Machine (GBM) model outperformed other classifiers with an AUC-ROC score of 0.85. Feature importance analysis revealed the following key predictors: Age: The most significant predictor, with diabetes prevalence increasing with age, peaking around the 60s for males and 70s for females. BMI: Higher BMI was strongly associated with a higher risk of diabetes. Ethnicity: Black participants had the highest predicted prevalence of diabetes (14.6%), followed by Mexican-Americans (13.5%) and Whites (10.6%). TCHOL: Diabetics had lower total cholesterol levels, particularly among White participants (mean decline of 23.6 mg/dL). Smoking: Smoking showed a slight increase in diabetes risk among Whites (0.2%) but had a limited effect in other ethnic groups. Using machine learning models, we identified key demographic, physical, and lifestyle predictors of diabetes in the U.S. population. The results confirm that diabetes prevalence varies significantly across age, BMI, and ethnic groups, with lifestyle factors such as smoking contributing differently by ethnicity. These findings provide a basis for more targeted public health interventions and resource allocation for diabetes management.
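
A minimal sketch of the GBM-plus-SHAP workflow on synthetic stand-in data; the NHANES files and the survey-weighting step are omitted for brevity.

```python
# Sketch: Gradient Boosting classifier + SHAP feature attribution.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
X = rng.random((2000, 5))                # stand-ins: age, BMI, TCHOL, sex, smoking
y = (X[:, 0] + X[:, 1] + 0.3 * rng.standard_normal(2000) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
gbm = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print("AUC-ROC:", roc_auc_score(y_te, gbm.predict_proba(X_te)[:, 1]))

explainer = shap.TreeExplainer(gbm)      # per-person feature contributions
shap_values = explainer.shap_values(X_te)
print(np.abs(shap_values).mean(axis=0))  # global importance ranking
```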

Keywords: diabetes, NHANES, random forest, gradient boosting machine, support vector machine

Procedia PDF Downloads 4
136 Extraction of Urban Building Damage Using Spectral, Height and Corner Information

Authors: X. Wang

Abstract:

Timely and accurate information on urban building damage caused by earthquakes is an important basis for disaster assessment and emergency relief. Very high resolution (VHR) remotely sensed imagery containing abundant fine-scale information offers a large quantity of data for detecting and assessing urban building damage in the aftermath of earthquake disasters. However, the accuracy obtained using spectral features alone is comparatively low, since building damage, intact buildings, and pavements are spectrally similar. Therefore, it is of great significance to detect urban building damage effectively using multi-source data. Considering that the height or geometric structure of buildings generally changes dramatically in devastated areas, a novel multi-stage urban building damage detection method, using bi-temporal spectral, height, and corner information, was proposed in this study. The pre-event height information was generated using stereo VHR images acquired from two different satellites, while the post-event height information was produced from airborne LiDAR data. The corner information was extracted from pre- and post-event panchromatic images. The proposed method can be summarized as follows. To reduce the classification errors caused by spectral similarity and errors in extracting height information, ground surface, shadows, and vegetation were first extracted using the post-event VHR image and height data and were masked out. Two different types of building damage were then extracted from the remaining areas: the height difference between pre- and post-event data was used for detecting building damage showing significant height change, while the difference in the density of corners between pre- and post-event images was used for extracting building damage showing drastic change in geometric structure. The initial building damage result was generated by combining the above two results. Finally, a post-processing procedure was adopted to refine the obtained initial result. The proposed method was quantitatively evaluated and compared to two existing methods in Port-au-Prince, Haiti, which was heavily hit by an earthquake in January 2010, using a pre-event GeoEye-1 image, a pre-event WorldView-2 image, a post-event QuickBird image, and post-event LiDAR data. The results showed that the method proposed in this study significantly outperformed the two comparative methods in terms of urban building damage extraction accuracy. The proposed method provides a fast and reliable way to detect urban building collapse, which is also applicable to relevant applications.
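
The two change cues can be illustrated with a simplified numpy/OpenCV sketch on synthetic rasters; the thresholds and window sizes below are illustrative assumptions, not the paper's values.

```python
# Sketch: per-pixel height difference plus a change in Harris-corner density,
# combined into an initial damage mask (synthetic data throughout).
import numpy as np
import cv2

rng = np.random.default_rng(5)
h_pre = rng.random((200, 200)) * 20           # pre-event height (stereo VHR)
h_post = h_pre.copy()
h_post[50:80, 50:80] -= 10                    # a "collapsed" block
height_damage = (h_pre - h_post) > 5          # significant height loss

def corner_density(img8):
    """Fraction of Harris-corner pixels within a 25x25 window."""
    corners = cv2.cornerHarris(np.float32(img8), blockSize=2, ksize=3, k=0.04)
    mask = (corners > 0.01 * corners.max()).astype(np.float32)
    return cv2.boxFilter(mask, -1, (25, 25))  # normalized local mean

pre = (rng.random((200, 200)) * 255).astype(np.uint8)   # stand-in pan images
post = (rng.random((200, 200)) * 255).astype(np.uint8)
corner_damage = np.abs(corner_density(pre) - corner_density(post)) > 0.2

initial_damage = height_damage | corner_damage  # combine the two cues
print(initial_damage.sum(), "candidate damage pixels")
```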

Keywords: building damage, corner, earthquake, height, very high resolution (VHR)

Procedia PDF Downloads 212
135 Radar on Bike: Coarse Classification based on Multi-Level Clustering for Cyclist Safety Enhancement

Authors: Asma Omri, Noureddine Benothman, Sofiane Sayahi, Fethi Tlili, Hichem Besbes

Abstract:

Cycling, a popular mode of transportation, can also be perilous due to cyclists' vulnerability to collisions with vehicles and obstacles. This paper presents an innovative cyclist safety system based on radar technology designed to offer real-time collision risk warnings to cyclists. The system incorporates a low-power radar sensor affixed to the bicycle and connected to a microcontroller. It leverages radar point cloud detections, a clustering algorithm, and a supervised classifier. These algorithms are optimized for efficiency to run on TI's AWR1843BOOST radar, utilizing a coarse classification approach distinguishing between cars, trucks, two-wheeled vehicles, and other objects. To enhance the performance of clustering techniques, we propose a 2-level clustering approach. This approach builds on the state-of-the-art Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm. The objective is to first cluster objects based on their velocity, then refine the analysis by clustering based on position. The initial level identifies groups of objects with similar velocities and movement patterns. The subsequent level refines the analysis by considering the spatial distribution of these objects. The clusters obtained from the first level serve as input for the second level of clustering. Our proposed technique surpasses the classical DBSCAN algorithm in terms of clustering quality metrics, including homogeneity, completeness, and V-score. Relevant cluster features are extracted and used to classify objects with an SVM classifier. Potential obstacles are identified based on their velocity and proximity to the cyclist. To optimize the system, we used the View of Delft dataset for hyperparameter selection and SVM classifier training. The system's performance was assessed using our collected dataset of radar point clouds synchronized with a camera on an Nvidia Jetson Nano board. The radar-based cyclist safety system is a practical solution that can be easily installed on any bicycle and connected to smartphones or other devices, offering real-time feedback and navigation assistance to cyclists. We conducted experiments to validate the system's feasibility, achieving an impressive 85% accuracy in the classification task. This system has the potential to significantly reduce the number of accidents involving cyclists and enhance their safety on the road.
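
A minimal sketch of the 2-level idea with scikit-learn's DBSCAN (illustrative parameters, synthetic points): velocity clustering first, then positional refinement within each velocity cluster.

```python
# Sketch: level 1 clusters on radial velocity, level 2 refines on x-y position
# inside each velocity cluster; final clusters would feed the SVM stage.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(6)
# radar point cloud: columns = x, y, radial velocity (two synthetic objects)
points = np.vstack([rng.normal([5, 2, 10.0], 0.3, (40, 3)),   # fast object
                    rng.normal([8, 1, 0.5], 0.3, (40, 3))])   # slow object

level1 = DBSCAN(eps=1.0, min_samples=5).fit_predict(points[:, 2:3])  # velocity
clusters = []
for vel_label in set(level1) - {-1}:                 # skip DBSCAN noise (-1)
    member_idx = np.where(level1 == vel_label)[0]
    level2 = DBSCAN(eps=0.5, min_samples=5).fit_predict(points[member_idx, :2])
    for pos_label in set(level2) - {-1}:
        clusters.append(member_idx[level2 == pos_label])

print(len(clusters), "final clusters")
```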

Keywords: 2-level clustering, coarse classification, cyclist safety, warning system based on radar technology

Procedia PDF Downloads 78
134 The Acquisition of Spanish L4 by Learners with Croatian L1, English L2 and Italian L3

Authors: Barbara Peric

Abstract:

The study of third and additional language acquisition has garnered significant focus within second language acquisition (SLA) research. Initially, it was commonly viewed as merely an extension of SLA. However, in the last two decades, numerous researchers have emphasized the need to recognize the unique characteristics of third language acquisition (TLA). This recognition is crucial for understanding the intricate cognitive processes that arise from the interaction of more than two linguistic systems in the learner's mind. This study investigates cross-linguistic influences in the acquisition of Spanish as a fourth language by students who have Croatian as a first language (L1), English as a second language (L2), and Italian as a third language (L3). Observational data suggest that influence or transfer of linguistic elements can arise not only from one's native language (L1) but also from non-native languages. This implies that, for individuals proficient in multiple languages, the native language does not consistently hold a superior position; instead, it should be examined alongside other potential sources of linguistic transfer. Earlier studies have demonstrated that high proficiency in a second language can significantly impact cross-linguistic influences when acquiring a third and additional language. Among the extensively examined factors, the typological relationship stands out as one of the most scrutinized variables. The goal of the present study was to explore whether language typology and formal similarity or proficiency in the second language had a more significant impact on L4 acquisition. Participants in this study were third-year undergraduate students at Rochester Institute of Technology's subsidiary in Croatia (RIT Croatia). All the participants had exclusively Croatian as L1, English as L2, and Italian as L3, and were learning Spanish as L4 at the time of the study. All the participants had a high level of proficiency in English and a low level of proficiency in Italian. Based on the error analysis, the findings indicate that for some types of lexical errors, such as coinage, language typology had the more significant impact, and Italian was the preferred source of transfer despite the low proficiency in that language. For other types of lexical errors, such as calques, second language proficiency had the more significant impact, and English was the preferred source of transfer. On the other hand, Croatian, Italian, and Spanish are more similar in the area of morphology due to a higher degree of inflection compared to English, and the strongest influence of Croatian was precisely in the area of morphology. The results emphasize the need to consider linguistic resemblances between the native language (L1) and the third and additional language, as well as the learners' proficiency in the second language, when developing successful teaching strategies for acquiring the third and additional language. These conclusions add to the expanding knowledge in the realm of Second Language Acquisition (SLA) and offer practical insights for language educators aiming to enhance the effectiveness of learning experiences in acquiring a third and additional language.

Keywords: third and additional language acquisition, cross-linguistic influences, language proficiency, language typology

Procedia PDF Downloads 54
133 Exploring the Impact of Input Sequence Lengths on Long Short-Term Memory-Based Streamflow Prediction in Flashy Catchments

Authors: Farzad Hosseini Hossein Abadi, Cristina Prieto Sierra, Cesar Álvarez Díaz

Abstract:

Predicting streamflow accurately in flashy catchments prone to floods is a major research and operational challenge in hydrological modeling. Recent advancements in deep learning, particularly Long Short-Term Memory (LSTM) networks, have proven promising for achieving accurate hydrological predictions at daily and hourly time scales. In this work, a multi-timescale LSTM (MTS-LSTM) network was applied in the context of regional hydrological predictions at an hourly time scale in flashy catchments. The case study includes 40 catchments located in the Basque Country, north of Spain. We explore the impact of hyperparameters on the performance of streamflow predictions given by regional deep learning models through systematic hyperparameter tuning, where optimal regional values for different catchments are identified. The results show that predictions are highly accurate, with Nash-Sutcliffe (NSE) and Kling-Gupta (KGE) metric values as high as 0.98 and 0.97, respectively. A principal component analysis reveals that a hyperparameter related to the length of the input sequence contributes most significantly to prediction performance. The findings suggest that input sequence lengths have a crucial impact on model prediction performance. Moreover, catchment-scale analysis reveals distinct sequence lengths for individual basins, highlighting the necessity of customizing this hyperparameter based on each catchment's characteristics; this aligns with the well-known "uniqueness of the place" paradigm. In prior research, tuning the length of the input sequence of LSTMs has received limited attention in the field of streamflow prediction. Initially, it was set to 365 days to capture a full annual water cycle; later, limited systematic hyperparameter tuning using grid search suggested a modification to 270 days. However, despite the significance of this hyperparameter in hydrological predictions, studies have usually overlooked its tuning and fixed it at 365 days. This study, employing a simultaneous systematic hyperparameter tuning approach, emphasizes the critical role of input sequence length as an influential hyperparameter in configuring LSTMs for regional streamflow prediction. Proper tuning of this hyperparameter is essential for achieving accurate hourly predictions using deep learning models.
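
The role of the input sequence length can be illustrated with a minimal Keras sketch on synthetic data: each candidate length reshapes the training windows and sets the LSTM input shape, so it is tuned like any other hyperparameter. The candidate lengths below are assumptions for illustration, not the study's values.

```python
# Sketch: sweep the input sequence length of a small LSTM on a synthetic series.
import numpy as np
import tensorflow as tf

def make_windows(series, length):
    """Slice a 1-D series into (samples, timesteps, 1) windows and targets."""
    X = np.stack([series[i:i + length] for i in range(len(series) - length)])
    return X[..., None], series[length:]

signal = np.sin(np.linspace(0, 60, 3000)) + 0.1 * np.random.randn(3000)

for seq_len in (72, 168, 336):          # candidate lengths in hours (assumed)
    X, y = make_windows(signal, seq_len)
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(seq_len, 1)),
        tf.keras.layers.LSTM(32),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    hist = model.fit(X, y, epochs=2, batch_size=64, verbose=0)
    print(seq_len, "final MSE:", hist.history["loss"][-1])
```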

Keywords: LSTMs, streamflow, hyperparameters, hydrology

Procedia PDF Downloads 69
132 Assessment of the Effects of Urban Development on Urban Heat Islands and Community Perception in Semi-Arid Climates: Integrating Remote Sensing, GIS Tools, and Social Analysis - A Case Study of the Aures Region (Khanchela), Algeria

Authors: Amina Naidja, Zedira Khammar, Ines Soltani

Abstract:

This study investigates the impact of urban development on the urban heat island (UHI) effect in the semi-arid Aures region of Algeria, integrating remote sensing data with statistical analysis and community surveys to examine the interconnected environmental and social dynamics. Using Landsat 8 satellite imagery, temporal variations in the Normalized Difference Vegetation Index (NDVI), Normalized Difference Built-up Index (NDBI), and land use/land cover (LULC) changes are analyzed to understand patterns of urbanization and environmental transformation. These environmental metrics are correlated with land surface temperature (LST) data derived from remote sensing to quantify the UHI effect. To incorporate the social dimension, a structured questionnaire survey is conducted among residents in selected urban areas. The survey assesses community perceptions of urban heat, its impacts on daily life, health concerns, and coping strategies. Statistical analysis is employed to analyze survey responses, identifying correlations between demographic factors, socioeconomic status, and perceived heat stress. Preliminary findings reveal significant correlations between built-up areas (NDBI) and higher LST, indicating the contribution of urbanization to local warming. Conversely, areas with higher vegetation cover (NDVI) exhibit lower LST, highlighting the cooling effect of green spaces. Social survey results provide insights into how UHI affects different demographic groups, with vulnerable populations experiencing greater heat-related challenges. By integrating remote sensing analysis with statistical modeling and community surveys, this study offers a comprehensive understanding of the environmental and social implications of urban development in semi-arid climates. The findings contribute to evidence-based urban planning strategies that prioritize environmental sustainability and social well-being. Future research should focus on policy recommendations and community engagement initiatives to mitigate UHI impacts and promote climate-resilient urban development.
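
The index computation follows the standard Landsat 8 band formulas, NDVI = (NIR - Red)/(NIR + Red) and NDBI = (SWIR1 - NIR)/(SWIR1 + NIR); a sketch with rasterio is given below, with hypothetical file names.

```python
# Sketch: NDVI/NDBI from Landsat 8 bands and their correlation with LST.
import numpy as np
import rasterio

def read_band(path):
    with rasterio.open(path) as src:
        return src.read(1).astype("float32")

red = read_band("LC08_B4.tif")      # Landsat 8 band 4 (Red)
nir = read_band("LC08_B5.tif")      # band 5 (NIR)
swir1 = read_band("LC08_B6.tif")    # band 6 (SWIR1)

eps = 1e-6                          # avoid division by zero
ndvi = (nir - red) / (nir + red + eps)
ndbi = (swir1 - nir) / (swir1 + nir + eps)

# Correlate built-up intensity with land surface temperature (LST raster assumed)
lst = read_band("LST.tif")
print(np.corrcoef(ndbi.ravel(), lst.ravel())[0, 1])   # expected: positive
```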

Keywords: urban heat island, remote sensing, social analysis, NDVI, NDBI, LST, community perception

Procedia PDF Downloads 41
131 Predicting Football Player Performance: Integrating Data Visualization and Machine Learning

Authors: Saahith M. S., Sivakami R.

Abstract:

In the realm of football analytics, particularly focusing on predicting football player performance, the ability to forecast player success accurately is of paramount importance for teams, managers, and fans. This study introduces an elaborate examination of predicting football player performance through the integration of data visualization methods and machine learning algorithms. The research entails the compilation of an extensive dataset comprising player attributes, conducting data preprocessing, feature selection, model selection, and model training to construct predictive models. The analysis within this study will involve delving into feature significance using methodologies like Select Best and Recursive Feature Elimination (RFE) to pinpoint pertinent attributes for predicting player performance. Various machine learning algorithms, including Random Forest, Decision Tree, Linear Regression, Support Vector Regression (SVR), and Artificial Neural Networks (ANN), will be explored to develop predictive models. The evaluation of each model's performance utilizing metrics such as Mean Squared Error (MSE) and R-squared will be executed to gauge their efficacy in predicting player performance. Furthermore, this investigation will encompass a top player analysis to recognize the top-performing players based on the anticipated overall performance scores. Nationality analysis will entail scrutinizing the player distribution based on nationality and investigating potential correlations between nationality and player performance. Positional analysis will concentrate on examining the player distribution across various positions and assessing the average performance of players in each position. Age analysis will evaluate the influence of age on player performance and identify any discernible trends or patterns associated with player age groups. The primary objective is to predict a football player's overall performance accurately based on their individual attributes, leveraging data-driven insights to enrich the comprehension of player success on the field. By amalgamating data visualization and machine learning methodologies, the aim is to furnish valuable tools for teams, managers, and fans to effectively analyze and forecast player performance. This research contributes to the progression of sports analytics by showcasing the potential of machine learning in predicting football player performance and offering actionable insights for diverse stakeholders in the football industry.
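
A minimal sketch of the feature-significance step named above (SelectKBest and RFE from scikit-learn), on synthetic stand-in attributes; the real attribute table is not reproduced here.

```python
# Sketch: two feature-selection passes over synthetic player attributes.
import numpy as np
from sklearn.feature_selection import SelectKBest, RFE, f_regression
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
X = rng.random((500, 12))                       # stand-ins: pace, passing, age...
y = 2 * X[:, 0] + X[:, 3] + 0.1 * rng.standard_normal(500)  # overall rating

kbest = SelectKBest(score_func=f_regression, k=4).fit(X, y)
print("SelectKBest picks:", np.flatnonzero(kbest.get_support()))

rfe = RFE(RandomForestRegressor(n_estimators=50, random_state=0),
          n_features_to_select=4).fit(X, y)
print("RFE picks:", np.flatnonzero(rfe.get_support()))
```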

Keywords: football analytics, player performance prediction, data visualization, machine learning algorithms, random forest, decision tree, linear regression, support vector regression, artificial neural networks, model evaluation, top player analysis, nationality analysis, positional analysis

Procedia PDF Downloads 36
130 Facial Recognition of University Entrance Exam Candidates using FaceMatch Software in Iran

Authors: Mahshid Arabi

Abstract:

In recent years, remarkable advancements in the fields of artificial intelligence and machine learning have led to the development of facial recognition technologies. These technologies are now employed in a wide range of applications, including security, surveillance, healthcare, and education. In the field of education, the identification of university entrance exam candidates has been one of the fundamental challenges. Traditional methods such as using ID cards and handwritten signatures are not only inefficient and prone to fraud but also susceptible to errors. In this context, utilizing advanced technologies like facial recognition can be an effective and efficient solution to increase the accuracy and reliability of identity verification in entrance exams. This article examines the use of FaceMatch software for recognizing the faces of university entrance exam candidates in Iran. The main objective of this research is to evaluate the efficiency and accuracy of FaceMatch software in identifying university entrance exam candidates to prevent fraud and ensure the authenticity of individuals' identities. Additionally, this research investigates the advantages and challenges of using this technology in Iran's educational systems. This research was conducted using an experimental method and random sampling. In this study, 1000 university entrance exam candidates in Iran were selected as samples. The facial images of these candidates were processed and analyzed using FaceMatch software. The software's accuracy and efficiency were evaluated using various metrics, including accuracy rate, error rate, and processing time. The research results indicated that FaceMatch software could accurately identify candidates with a precision of 98.5%. The software's error rate was less than 1.5%, demonstrating its high efficiency in facial recognition. Additionally, the average processing time for each candidate's image was less than 2 seconds, indicating the software's high efficiency. Statistical evaluation of the results using precise statistical tests, including analysis of variance (ANOVA) and t-test, showed that the observed differences were significant, and the software's accuracy in identity verification is high. The findings of this research suggest that FaceMatch software can be effectively used as a tool for identifying university entrance exam candidates in Iran. This technology not only enhances security and prevents fraud but also simplifies and streamlines the exam administration process. However, challenges such as preserving candidates' privacy and the costs of implementation must also be considered. The use of facial recognition technology with FaceMatch software in Iran's educational systems can be an effective solution for preventing fraud and ensuring the authenticity of university entrance exam candidates' identities. Given the promising results of this research, it is recommended that this technology be more widely implemented and utilized in the country's educational systems.
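
FaceMatch itself is not publicly documented here, so the sketch below illustrates the same one-to-one verification idea with the open-source face_recognition package; the file names and tolerance value are assumptions.

```python
# Sketch: verify that the person at the exam matches the enrolled photo.
import face_recognition

enrolled = face_recognition.load_image_file("candidate_enrolled.jpg")
at_exam = face_recognition.load_image_file("candidate_at_exam.jpg")

# [0] assumes exactly one face was detected in each image.
enrolled_enc = face_recognition.face_encodings(enrolled)[0]   # 128-d embedding
exam_enc = face_recognition.face_encodings(at_exam)[0]

match = face_recognition.compare_faces([enrolled_enc], exam_enc, tolerance=0.6)
distance = face_recognition.face_distance([enrolled_enc], exam_enc)
print("verified:", bool(match[0]), "distance:", float(distance[0]))
```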

Keywords: facial recognition, FaceMatch software, Iran, university entrance exam

Procedia PDF Downloads 43
129 Landscape Pattern Evolution and Optimization Strategy in Wuhan Urban Development Zone, China

Authors: Feng Yue, Fei Dai

Abstract:

With the rapid development of the urbanization process in China, environmental protection is under severe pressure, so analyzing and optimizing the landscape pattern is an important measure to ease the pressure on the ecological environment. This paper takes the Wuhan Urban Development Zone as the research object and studies its landscape pattern evolution and quantitative optimization strategy. First, remote sensing image data from 1990 to 2015 were interpreted using Erdas software. Next, landscape pattern indices at the landscape, class, and patch levels were studied based on Fragstats. Then, five ecological environment indicators based on the National Environmental Protection Standard of China were selected to evaluate the impact of landscape pattern evolution on the ecological environment. In addition, the cost distance analysis of ArcGIS was applied to simulate wildlife migration, thus indirectly measuring the improvement of ecological environment quality. The results show that the area of land for construction increased by 491%, while bare land, sparse grassland, forest, farmland, and water decreased by 82%, 47%, 36%, 25%, and 11%, respectively; they were mainly converted into construction land. At the landscape level, all landscape indices showed a downward trend: the Number of Patches (NP), Landscape Shape Index (LSI), Connection Index (CONNECT), Shannon's Diversity Index (SHDI), and Aggregation Index (AI) decreased by 2778, 25.7, 0.042, 0.6, and 29.2%, respectively, indicating that the NP, the degree of aggregation, and the landscape connectivity declined. At the class level, for construction land and forest, CPLAND, TCA, AI, and LSI increased, but the core area distribution statistic (CORE_AM) decreased; for farmland, water, sparse grassland, and bare land, CPLAND, TCA, DIVISION, Patch Density (PD), and LSI decreased, yet patch fragmentation and CORE_AM increased. At the patch level, the patch area, patch perimeter, and shape index of water, farmland, and bare land continued to decline; the three indices of forest patches increased overall, those of sparse grassland decreased as a whole, and those of construction land increased. It is obvious that urbanization greatly influenced the landscape evolution: the ecological diversity and landscape heterogeneity of ecological patches clearly dropped, and the Habitat Quality Index continuously declined by 14%. Therefore, an optimization strategy based on greenway network planning is raised for discussion. This paper contributes to the study of landscape pattern evolution in planning and design and to research on the spatial layout of urbanization.
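
Two of the simpler patch metrics (number of patches and class area) can be computed directly with scipy, as the sketch below shows on a synthetic land-cover raster; the full Fragstats metric set (LSI, SHDI, AI, CORE_AM, ...) requires dedicated tools.

```python
# Sketch: per-class Number of Patches (NP) and class area from a LULC raster.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(8)
lulc = rng.integers(0, 3, size=(100, 100))   # 0=water, 1=forest, 2=construction

for cls, name in enumerate(["water", "forest", "construction"]):
    mask = lulc == cls
    labeled, num_patches = ndimage.label(mask)   # connected components = patches
    class_area = int(mask.sum())                 # class area in pixels
    print(name, "NP:", num_patches, "CA:", class_area)
```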

Keywords: landscape pattern, optimization strategy, ArcGIS, Erdas, landscape metrics, landscape architecture

Procedia PDF Downloads 163
128 The Effect of Chlorine Dioxide and High Concentration of CO2 Gas Injection on the Quality and Shelf-Life for Exporting Strawberry 'Maehyang' in Modified Atmosphere Condition

Authors: Hyuk Sung Yoon, In-Lee Choi, Mohammad Zahirul Islam, Jun Pill Baek, Ho-Min Kang

Abstract:

Exports of the strawberry 'Maehyang', cultivated in South Korea, to Southeast Asia have been increasing. Degradation of quality often occurs in strawberries during the short export period. Botrytis cinerea is known to cause major damage to export strawberries, and the disease develops during shipping and distribution. This study was conducted to find out the sterilizing effect of chlorine dioxide (ClO2) gas and high-concentration CO2 gas injection for 'Maehyang' strawberries packaged with oxygen transmission rate (OTR) films. The strawberries were harvested at the 80% color-change stage and packaged with OTR film or perforated film (control). The treatments were MAP with a 20,000 cc·m-2·day·atm OTR film and gas injection into the packages. ClO2 and CO2 gases were injected into the OTR film packages; the treatments were 6 mg/L ClO2, 15% CO2, and their combination. The treated strawberries were stored at 3℃ for 30 days. The fresh weight loss rate was less than 1% in all OTR film packages, but it was more than 15% in the perforated film treatment, which showed severe deterioration of visual quality during storage. The carbon dioxide concentration within the packages reached a maximum of approximately 15% in all treatments except the control until the 21st day, within the tolerated range of maximum CO2 concentration for strawberries under recommended CA or MA conditions, but it increased to almost 50% by the 30th day. The oxygen concentration decreased to approximately 0% in all treatments except the control over 25 days. The ethylene concentration was steady until the 17th day, increased quickly thereafter, and dropped on the final storage day (30th day); the gas treatments did not show any significant differences. Firmness increased in the CO2 (15%) and ClO2 (6 mg/L) + CO2 (15%) treatments during storage, which might be an effect of high-concentration CO2, known to reduce decay and cell wall degradation. The soluble solids content decreased in all treatments during storage, likely because sugars were consumed by increased respiration. Titratable acidity was similar in all treatments. The incidence of fungi was 0% in the CO2 (15%) and ClO2 (6 mg/L) + CO2 (15%) treatments but more than 20% in the perforated film treatment. Consequently, the results indicate that chlorine dioxide (ClO2) and high-concentration CO2 inhibited fungal growth. Because the fresh weight loss rate and incidence of fungi were lowest, ClO2 (6 mg/L) + CO2 (15%) proved to be the most efficient sterilization treatment. These results suggest that chlorine dioxide (ClO2) and high-concentration CO2 gas injection treatments are an effective decontamination technique for improving the safety of strawberries.

Keywords: chlorine dioxide, high concentration of CO2, modified atmosphere condition, oxygen transmission rate films

Procedia PDF Downloads 338
127 Structural and Biochemical Characterization of Red and Green Emitting Luciferase Enzymes

Authors: Wael M. Rabeh, Cesar Carrasco-Lopez, Juliana C. Ferreira, Pance Naumov

Abstract:

Bioluminescence, the emission of light from a biological process, is found in various living organisms, including bacteria, fireflies, beetles, fungi, and different marine organisms. Luciferase is an enzyme that catalyzes a two-step oxidation of luciferin in the presence of Mg2+ and ATP to produce oxyluciferin and release energy in the form of light. The luciferase assay is used in biological research and clinical applications for in vivo imaging, cell proliferation, and protein folding and secretion analysis. The luciferase enzyme consists of two domains: a large N-terminal domain (residues 1-436) connected to a small C-terminal domain (residues 440-544) by a flexible loop that functions as a hinge for opening and closing the active site. The two domains are separated by a large cleft housing the active site, which closes after binding the substrates, luciferin and ATP. Even though all insect luciferases catalyze the same chemical reaction and share 50% to 90% sequence homology and high structural similarity, they emit light of different colors, from green at 560 nm to red at 640 nm. Currently, the majority of structural and biochemical studies have been conducted on green-emitting firefly luciferases. To address the color emission mechanism, we expressed and purified two luciferase enzymes with blue-shifted green and red emission from the indigenous Brazilian species Amydetes fanestratus and Phrixothrix, respectively. The two enzymes naturally emit light of different colors, making them an excellent system for studying the color-emission mechanism of luciferases, as the currently proposed mechanisms are based on mutagenesis studies. Using a vapor-diffusion method and a high-throughput approach, we crystallized both enzymes and solved their crystal structures, at 1.7 Å and 3.1 Å resolution respectively, using X-ray crystallography. The free enzyme adopted two open conformations in the crystallographic unit cell that differ from the previously characterized firefly luciferase. The blue-shifted green luciferase crystallized as a monomer, similar to other luciferases reported in the literature, while the red luciferase crystallized as an octamer and was also purified as an octamer in solution. The octamer conformation is the first of its kind for any insect luciferase and might be related to the red color emission. Structurally designed mutations confirmed the importance of the transition between the open and closed conformations in the fine-tuning of the color, and the characterization of other interesting mutants is underway.

Keywords: bioluminescence, enzymology, structural biology, x-ray crystallography

Procedia PDF Downloads 325
126 Multi-Labeled Aromatic Medicinal Plant Image Classification Using Deep Learning

Authors: Tsega Asresa, Getahun Tigistu, Melaku Bayih

Abstract:

Computer vision is a subfield of artificial intelligence that allows computers and systems to extract meaning from digital images and video. It is used in a wide range of fields, including self-driving cars, video surveillance, medical diagnosis, manufacturing, law, agriculture, quality control, health care, facial recognition, and military applications. Aromatic medicinal plants are botanical raw materials used in cosmetics, medicines, health foods, essential oils, decoration, cleaning, and other natural health products for therapeutic and aromatic culinary purposes. These plants and their products not only serve as a valuable source of income for farmers and entrepreneurs but are also exported in exchange for valuable foreign currency. In Ethiopia, there is a lack of technologies for the classification and identification of aromatic medicinal plant parts and the disease types treated by these plants. Farmers, industry personnel, academicians, and pharmacists find it difficult to identify plant parts and the disease types they treat before ingredient extraction in the laboratory. Manual plant identification is a time-consuming, labor-intensive, and lengthy process, and only a few studies have been conducted in the area to address these issues. One way to overcome these problems is to develop a deep learning model for efficient identification of aromatic medicinal plant parts together with their corresponding disease types. The objective of the proposed study is to identify aromatic medicinal plant parts and classify their disease types using computer vision technology. Therefore, this research initiated a model for this classification by exploring computer vision technology. Morphological characteristics are still the most important tools for the identification of plants, and leaves are the most widely used plant part besides roots, flowers, fruits, and latex. For this study, the researchers used RGB leaf images with a size of 128x128x3. The researchers trained five cutting-edge models: a plain convolutional neural network, Inception V3, Residual Neural Network (ResNet), MobileNet, and Visual Geometry Group (VGG). These models were chosen after a comprehensive review of the best-performing models. An 80/20 percentage split is used to evaluate the models, and classification metrics are used to compare them. The pre-trained Inception V3 model performed best, with training and validation accuracy of 99.8% and 98.7%, respectively.
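As a rough illustration of the transfer-learning setup described above, the following is a minimal sketch in Keras/TensorFlow. The directory name, number of classes, head layers, and optimizer are illustrative assumptions, not details reported in the study.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 10  # hypothetical number of plant-part/disease classes

# Pre-trained Inception V3 backbone for 128x128x3 RGB leaf images
base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", input_shape=(128, 128, 3))
base.trainable = False  # freeze the pre-trained feature extractor

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# 80/20 train/validation split from a directory of labeled leaf images
train_ds, val_ds = tf.keras.utils.image_dataset_from_directory(
    "leaf_images/",  # hypothetical dataset path
    validation_split=0.2, subset="both", seed=42,
    image_size=(128, 128), label_mode="categorical", batch_size=32)
model.fit(train_ds, validation_data=val_ds, epochs=10)
```

Freezing the backbone and training only a small classification head is the usual first step in transfer learning; the backbone can later be unfrozen for fine-tuning at a lower learning rate.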

Keywords: aromatic medicinal plant, computer vision, convolutional neural network, deep learning, plant classification, residual neural network

Procedia PDF Downloads 185
125 Towards Accurate Velocity Profile Models in Turbulent Open-Channel Flows: Improved Eddy Viscosity Formulation

Authors: W. Meron Mebrahtu, R. Absi

Abstract:

Velocity distribution in turbulent open-channel flows is organized in a complex manner due to the large spatial and temporal variability of fluid motion under the free-surface turbulent flow condition. The phenomenon is complicated further by the complex geometry of channels and the presence of transported solids. Thus, several efforts have been made to understand the phenomenon and obtain accurate mathematical models suitable for engineering applications. However, predictions are often inaccurate because oversimplified assumptions are involved in modeling this complex phenomenon. Therefore, the aim of this work is to study velocity distribution profiles and obtain simple, more accurate, and predictive mathematical models. Particular focus is placed on an acceptable simplification of the general transport equations and an accurate representation of eddy viscosity. A wide rectangular open channel is a suitable starting point; further assumptions are a smooth wall and sediment-free flow under steady, uniform conditions. These assumptions allow examining the effects of the bottom wall and the free surface only, which is a necessary step before dealing with more complex flow scenarios. For this flow condition, two ordinary differential equations are obtained for the velocity profile: one from the Reynolds-averaged Navier-Stokes (RANS) equations and one from the equilibrium between turbulent kinetic energy (TKE) production and dissipation. Different analytic models for eddy viscosity, TKE, and mixing length were then assessed. Computed velocity profiles were compared to experimental data for different flow conditions and to the well-known linear, log, and log-wake laws. Results show that the model based on the RANS equation provides more accurate velocity profiles. In the viscous sublayer and buffer layer, the method based on Prandtl’s eddy viscosity model and the Van Driest mixing length gives a more precise result. For the log layer and outer region, a mixing-length equation derived from Von Karman’s similarity hypothesis provides the best agreement with measured data, except near the free surface, where an additional correction based on a damping function for eddy viscosity is used. This method yields more accurate velocity profiles with a single value of the damping coefficient that remains valid under different flow conditions. This work will continue with investigations of narrow channels, complex geometries, and the effect of solids transported in sewers.
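For reference, the classical relations named above can be sketched in their textbook forms with the commonly quoted constants (the study may use refined variants); here nu_t is the eddy viscosity, l_m the mixing length, y+ the wall-normal distance in wall units, and u+ the velocity in wall units.

```latex
% Prandtl mixing-length eddy viscosity with Van Driest damping
\nu_t = l_m^2 \left|\frac{du}{dy}\right|, \qquad
l_m = \kappa\, y \left(1 - e^{-y^+/A^+}\right), \quad A^+ \approx 26
% Logarithmic law of the wall (log layer)
u^+ = \frac{1}{\kappa}\,\ln y^+ + B, \qquad \kappa \approx 0.41, \quad B \approx 5.0
```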

Keywords: accuracy, eddy viscosity, sewers, velocity profile

Procedia PDF Downloads 110
124 A Methodology Based on Image Processing and Deep Learning for Automatic Characterization of Graphene Oxide

Authors: Rafael do Amaral Teodoro, Leandro Augusto da Silva

Abstract:

Originating from graphite, graphene is a two-dimensional (2D) material that promises to revolutionize technology in many different areas, such as energy, telecommunications, civil construction, aviation, textiles, and medicine. This is possible because its structure, formed by carbon bonds, provides desirable optical, thermal, and mechanical characteristics that are of interest to multiple areas of the market. Thus, several research and development centers are studying different manufacturing methods and material applications of graphene, efforts that are often compromised by the scarcity of agile and accurate methodologies to characterize the material, that is, to determine its composition, shape, size, and number of layers and crystals. To contribute to this search, this study proposes a computational methodology that applies deep learning to identify graphene oxide crystals in order to characterize samples by crystal size. To achieve this, a fully convolutional neural network called U-net has been trained to segment SEM images of graphene oxide. The segmentation generated by the U-net is fine-tuned with a per-class standard deviation technique, which allows crystals to be distinguished with different labels through an object delimitation algorithm. As a next step, the position, area, perimeter, and lateral measures of each detected crystal are extracted from the images. This information generates a database with the dimensions of the crystals that compose the samples. Finally, graphs are automatically created showing the frequency distributions of crystal area and perimeter. This methodological process resulted in a high capacity for segmentation of graphene oxide crystals, with accuracy and F-score equal to 95% and 94%, respectively, over the test set. Such performance demonstrates a high generalization capacity of the method in crystal segmentation, since it holds under significant changes in image extraction quality. The measurement of non-overlapping crystals presented an average error of 6% across the different measurement metrics, suggesting that the model provides high-quality measurements for non-overlapping segmentations. For overlapping crystals, however, a limitation of the model was identified. To overcome this limitation, it is important to ensure that the samples to be analyzed are properly prepared; this will minimize crystal overlap during SEM image acquisition and guarantee a lower measurement error without greater data-handling effort. All in all, the developed method is a substantial time saver with high measurement value, as it is capable of measuring hundreds of graphene oxide crystals in seconds, saving weeks of manual work.
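To illustrate the measurement stage that follows segmentation, here is a minimal sketch assuming a binary mask produced by the U-net; the use of scikit-image and the helper name below are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np
from skimage import measure

def crystal_stats(mask: np.ndarray) -> list:
    """Label connected crystals in a binary mask and extract size metrics."""
    labeled = measure.label(mask)  # assigns one integer label per crystal
    stats = []
    for region in measure.regionprops(labeled):
        stats.append({
            "centroid": region.centroid,    # (row, col) position
            "area": region.area,            # size in pixels
            "perimeter": region.perimeter,  # boundary length in pixels
        })
    return stats
```

Accumulated over every segmented SEM image, such records form the database of crystal dimensions from which the frequency distributions of area and perimeter are plotted.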

Keywords: characterization, graphene oxide, nanomaterials, U-net, deep learning

Procedia PDF Downloads 158
123 Enhancing Large Language Models' Data Analysis Capability with Planning-and-Execution and Code Generation Agents: A Use Case for Southeast Asia Real Estate Market Analytics

Authors: Kien Vu, Jien Min Soh, Mohamed Jahangir Abubacker, Piyawut Pattamanon, Soojin Lee, Suvro Banerjee

Abstract:

Recent advances in Generative Artificial Intelligence (GenAI), in particular Large Language Models (LLMs), have shown promise to disrupt multiple industries at scale. However, LLMs also present unique challenges, notably so-called "hallucinations", the generation of outputs that are not grounded in the input data, which hinders their adoption in production. A common practice to mitigate the hallucination problem is to use a Retrieval-Augmented Generation (RAG) system to ground LLM responses in ground truth. RAG converts the grounding documents into embeddings, retrieves the relevant parts by vector similarity between the user's query and the documents, and then generates a response based not only on the model's pre-trained knowledge but also on the specific information from the retrieved documents. However, RAG systems are not suitable for tabular data and subsequent data analysis tasks for several reasons, including information loss, data format, and the retrieval mechanism. In this study, we have explored a novel methodology that combines planning-and-execution and code generation agents to enhance LLMs' data analysis capabilities. The approach enables LLMs to autonomously dissect a complex analytical task into simpler sub-tasks and requirements, and then convert them into executable segments of code. In the final step, it generates the complete response from the output of the executed code. When a beta version was deployed on DataSense, the property insight tool of PropertyGuru, the approach yielded promising results: it served market insight and data visualization needs with high accuracy and extensive coverage, abstracting the complexities for real-estate agents and developers from non-programming backgrounds. In essence, the methodology not only refines the analytical process but also serves as a strategic tool for real estate professionals, aiding market understanding and enhancement without the need for programming skills. The implications extend beyond immediate analytics, paving the way for a new era in the real estate industry characterized by efficiency and advanced data utilization.
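Below is a minimal, hypothetical sketch of the planning-and-execution pattern described above; llm() stands in for any chat-completion call, and the prompts, function names, and use of exec are illustrative assumptions rather than the deployed DataSense implementation.

```python
def llm(prompt: str) -> str:
    """Placeholder for a chat-completion API call (assumption)."""
    raise NotImplementedError

def analyze(question: str, table_path: str) -> str:
    # 1. Planning agent: decompose the analytical task into simpler sub-tasks.
    plan = llm(f"Break this data-analysis question into numbered steps: {question}")
    # 2. Code-generation agent: turn the plan into executable pandas code
    #    that stores its final answer in a variable named `result`.
    code = llm(f"Write Python/pandas code for these steps against {table_path}, "
               f"ending with the answer in a variable `result`:\n{plan}")
    # 3. Execution: run the generated code (sandboxed in production).
    scope = {}
    exec(code, scope)
    # 4. Response generation: phrase the computed output in plain language.
    return llm(f"Answer '{question}' given this computed result: {scope.get('result')}")
```

In a production system the execution step would be sandboxed and wrapped with validation and retry logic, but the loop above shows why tabular questions can bypass the embedding-retrieval route of RAG.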

Keywords: large language model, reasoning, planning and execution, code generation, natural language processing, prompt engineering, data analysis, real estate, data sense, PropertyGuru

Procedia PDF Downloads 86
122 Inertial Spreading of Drop on Porous Surfaces

Authors: Shilpa Sahoo, Michel Louge, Anthony Reeves, Olivier Desjardins, Susan Daniel, Sadik Omowunmi

Abstract:

The microgravity on the International Space Station (ISS) was exploited to study the imbibition of water into a network of hydrophilic cylindrical capillaries on time and length scales long enough to observe details hitherto inaccessible under Earth gravity. When a drop touches a porous medium, it spreads as if laid on a composite surface. The surface first behaves as a hydrophobic material, as the liquid must penetrate pores filled with air. When contact is established, some of the liquid is drawn into the pores by a capillarity that is resisted by viscous forces growing with the length of the imbibed region. This process always begins with an inertial regime that is complicated by possible contact pinning. To study imbibition on Earth, time and distance must be shrunk to mitigate gravity-induced distortion, and these small scales make it impossible to observe the inertial and pinning processes in detail. Instead, on the International Space Station (ISS), astronaut Luca Parmitano slowly extruded water spheres until they touched any of nine capillary plates. The 12 mm diameter droplets were large enough for high-speed GX1050C video cameras on top and side to visualize details near individual capillaries, and the recordings were long enough to capture the dynamics of the entire imbibition process. To investigate the role of contact pinning, a test matrix of nine kinds of porous capillary plates was produced, made of gold-coated brass treated with Self-Assembled Monolayers (SAM) that fixed advancing and receding contact angles to known values. On the ISS, long-term microgravity allowed unambiguous observations of the role of contact-line pinning during the inertial phase of imbibition. The high-speed videos of spreading and imbibition on the porous plates were analyzed using computer vision software to calculate the radius of the droplet contact patch with the plate and the height of the droplet versus time. These observations were compared with numerical simulations and with data that we obtained at the ESA ZARM free-fall tower in Bremen using a unique mechanism producing relatively large water spheres, and similar results were observed. The data obtained from the ISS can be used as a benchmark for further numerical simulations in the field.
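For context, the two regimes described above are commonly summarized by the following textbook scalings (standard results, not findings of this study), where gamma is the surface tension, r the capillary radius, theta the contact angle, mu the viscosity, and rho the density:

```latex
% Inertial regime: imbibed length grows linearly in time
l(t) \sim t\,\sqrt{\frac{2\gamma\cos\theta}{\rho\, r}}
% Viscous (Lucas--Washburn) regime: growth slows to a square root in time
l(t) \sim \sqrt{\frac{\gamma\, r\,\cos\theta}{2\mu}\, t}
```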

Keywords: droplet imbibition, hydrophilic surface, inertial phase, porous medium

Procedia PDF Downloads 137
121 Structure Clustering for Milestoning Applications of Complex Conformational Transitions

Authors: Amani Tahat, Serdal Kirmizialtin

Abstract:

Trajectory fragment methods such as Markov State Models (MSM), Milestoning (MS), and Transition Path Sampling are the prime choices for extending the timescale of all-atom Molecular Dynamics (MD) simulations. In these approaches, a set of structures that covers the accessible phase space has to be chosen a priori using cluster analysis. Structural clustering serves to partition the conformational state space into natural subgroups based on similarity, an essential statistical methodology for analyzing the numerous sets of empirical data produced by MD simulations. A local transition kernel among these clusters is later used to connect the metastable states, using a Markovian kinetic model in MSM and a non-Markovian model in MS. The choice of clustering approach in constructing such a kernel is crucial, since the high dimensionality of biomolecular structures can easily confuse the identification of clusters when using traditional hierarchical clustering. Of particular interest, in the case of MS, where the milestones are very close to each other, accurate determination of the milestone identity of the trajectory becomes a challenging issue. Throughout this work we present two cluster analysis methods applied to the cis-trans isomerism of the dinucleotide AA. The choice of nucleic acids over the commonly used proteins for this cluster analysis is twofold: i) the energy landscape is rugged, hence transitions are more complex, enabling a more realistic model for studying conformational transitions; ii) the conformational space of nucleic acids is high dimensional, and a diverse set of internal coordinates is necessary to describe their metastable states, posing a challenge in studying the conformational transitions. Hence, we need improved clustering methods that accurately and robustly identify the AA structure in its metastable states for a wide range of confusing data conditions. The single-linkage approach of the hierarchical clustering available in the GROMACS MD package is the first clustering methodology applied to our data; the Self-Organizing Map (SOM) neural network, also known as a Kohonen network, is the second. The performance of the neural network and of the hierarchical clustering method is compared by computing the mean first passage times for the cis-trans conformational rates. Our hope is that this study provides insight into the complexities of, and the need for, determining the appropriate clustering algorithm for kinetic analysis. Our results can improve the effectiveness of decisions based on clustering confusing empirical data when studying conformational transitions in biomolecules.
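As a concrete illustration of the two clustering routes compared above, the following minimal sketch runs both on a hypothetical array of internal coordinates; the MiniSom package is an assumption standing in for a Kohonen-network implementation, and the grid size, cluster count, and iteration budget are illustrative.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from minisom import MiniSom  # assumed SOM implementation

# Hypothetical (n_frames, n_features) array of internal coordinates
X = np.random.rand(500, 12)

# Route 1: single-linkage hierarchical clustering (as in the GROMACS analysis)
Z = linkage(X, method="single")
labels_hier = fcluster(Z, t=8, criterion="maxclust")  # at most 8 clusters

# Route 2: self-organizing map; each frame maps to its best-matching unit
som = MiniSom(3, 3, X.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(X, 1000)
labels_som = [som.winner(x) for x in X]  # (row, col) of the winning neuron
```

Each SOM unit (or hierarchical cluster) then serves as a candidate metastable state whose transition kernel feeds the kinetic model.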

Keywords: milestoning, self organizing map, single linkage, structure clustering

Procedia PDF Downloads 222
120 The GRIT Study: Getting Global Rare Disease Insights Through Technology Study

Authors: Aneal Khan, Elleine Allapitan, Desmond Koo, Katherine-Ann Piedalue, Shaneel Pathak, Utkarsh Subnis

Abstract:

Background: Disease management of metabolic, genetic disorders is long-term and can be cumbersome for patients and caregivers. Patient-Reported Outcome Measures (PROMs) have been a useful tool for capturing patient perspectives to help enhance treatment compliance and engagement with health care providers, reduce utilization of emergency services, and increase satisfaction with treatment choices. Currently, however, PROMs are collected during infrequent and decontextualized clinic visits, which makes the translation of patient experiences over time challenging. The GRIT study aims to evaluate a digital health journal application called Zamplo that provides a personalized health diary to record self-reported health outcomes accurately and efficiently in patients with metabolic, genetic disorders. Methods: This is a randomized controlled trial (RCT) (1:1) that assesses the efficacy of Zamplo to increase patient activation (primary outcome), improve healthcare satisfaction and confidence to manage medications (secondary outcomes), and reduce costs to the healthcare system (exploratory). Using standardized online surveys, assessments will be collected at baseline, 1 month, 3 months, 6 months, and 12 months. Outcomes will be compared between patients who were given access to the application and those without access. Results: Seventy-seven patients had been recruited as of November 30, 2021. Recruitment commenced in November 2020 with a target of n=150 patients. The accrual rate was 50% among those eligible and invited, with the majority of patients having Fabry disease (n=48) and the remainder having Pompe disease or mitochondrial disease. Real-time clinical responses, such as pain, are being measured and correlated with disease-modifying therapies, supportive treatments like pain medications, and lifestyle interventions. Engagement with the application, along with compliance metrics for surveys and journal entries, is being analyzed. An interim analysis of the engagement data, preliminary findings from this pilot RCT, and qualitative patient feedback will be presented. Conclusions: The digital self-care journal provides a unique approach to disease management, allowing patients direct access to their progress and active participation in their care. Findings from the study can help serve the virtual care needs of patients with metabolic, genetic disorders in North America and worldwide.

Keywords: eHealth, mobile health, rare disease, patient outcomes, quality of life (QoL), pain, Fabry disease, Pompe disease

Procedia PDF Downloads 150
119 The Emerging Role of Cannabis as an Anti-Nociceptive Agent in the Treatment of Chronic Back Pain

Authors: Josiah Damisa, Michelle Louise Richardson, Morenike Adewuyi

Abstract:

Lower back pain is a significant cause of disability worldwide, with serious implications for the well-being of affected individuals and for society as a whole owing to its undeniable socio-economic impact. With its prevalence increasing as a result of an aging global population, the need for novel forms of pain management is ever more pressing. This review aims to provide further insight into current research on the endocannabinoid signaling pathway as a target in the treatment of chronic pain, with particular emphasis on its potential use in the treatment of lower back pain. Potential advantages and limitations of cannabis-based medicines over other forms of analgesia currently licensed for medical use are discussed, in addition to areas that require ongoing consideration and research. To evaluate the efficacy of cannabis-based medicines in chronic pain, studies pertaining to the role of medical cannabis in chronic disease were reviewed. Standard searches of the PubMed, Google Scholar, and Web of Science databases were undertaken, with peer-reviewed journal articles reviewed on the basis of the indication for pain management, the cannabis treatment modality used, and study outcomes. Multiple studies suggest an emerging role for cannabis-based medicines as therapeutic agents in the treatment of chronic back pain. A potential synergistic effect has also been proposed when these medicines are co-administered with opiate analgesia, owing to the similarity of the opiate and endocannabinoid signaling pathways. However, whilst recent changes to legislation in the United Kingdom mean that cannabis is now licensed for medicinal use on NHS prescription for a number of chronic health conditions, concerns remain as to the efficacy and safety of cannabis-based medicines. Research is lacking into both their side-effect profiles and the long-term effects of cannabis use. Legal and ethical concerns about the use of these products in standard medical practice also persist because of the notoriety of cannabis as a drug of abuse. Despite this, cannabis is beginning to gain traction as an alternative or even complementary drug to opiates, with some preclinical studies showing opiate-sparing effects. Whilst there is a paucity of clinical trials in this field, there is scope for cannabinoids to be successful anti-nociceptive agents in managing chronic back pain. The ultimate aim would be to utilize cannabis-based medicines as alternative or complementary therapies, thereby reducing opiate over-reliance and providing hope to individuals who have exhausted all other forms of standard treatment.

Keywords: endocannabinoids, cannabis-based medicines, chronic pain, lower back pain

Procedia PDF Downloads 199
118 Homeless Population Modeling and Trend Prediction Through Identifying Key Factors and Machine Learning

Authors: Shayla He

Abstract:

Background and Purpose: According to Chamie (2017), it is estimated that no fewer than 150 million people, or about 2 percent of the world's population, are homeless. The homeless population in the United States has grown rapidly in the past four decades; in New York City, the sheltered homeless population increased from 12,830 in 1983 to 62,679 in 2020. Knowing the trend of the homeless population is crucial for helping states and cities make affordable housing plans and other community service plans ahead of time, to better prepare for the situation. This study utilized data from New York City, examined the key factors associated with homelessness, and developed systematic models to predict future homeless populations. Using the best model developed, named HP-RNN, an analysis of the homeless population change during the months of 2020 and 2021, which were impacted by the COVID-19 pandemic, was conducted; moreover, HP-RNN was tested on data from Seattle. Methods: The methodology involves four phases in developing robust prediction methods. Phase 1 gathered and analyzed raw data on homeless populations and demographic conditions from five urban centers. Phase 2 identified the key factors that contribute to the rate of homelessness. In Phase 3, three models were built using Linear Regression, Random Forest, and a Recurrent Neural Network (RNN), respectively, to predict the future trend of the homeless population. Each model was trained and tuned on the dataset from New York City, with accuracy measured by Mean Squared Error (MSE). In Phase 4, the final phase, the best model from Phase 3 was evaluated using data from Seattle that was not part of the training and tuning process in Phase 3. Results: Compared to the Linear Regression based model used by HUD et al (2019), HP-RNN significantly improved the Coefficient of Determination (R2) from -11.73 to 0.88 and reduced MSE by 99%. HP-RNN was then validated on the data from Seattle, WA, which showed a peak percentage error of 14.5% between the actual and predicted counts. Finally, the model was used to predict the trend during the COVID-19 pandemic; it shows a good correlation between the actual and predicted homeless populations, with a peak percentage error of less than 8.6%. Conclusions and Implications: This is the first work to apply an RNN to model time series of homelessness-related data. The model shows a close correlation between the actual and the predicted homeless population. There are two major implications of this result. First, the model can be used to predict the homeless population for the next several years, and the prediction can help states and cities plan ahead on affordable housing allocation and other community services. Moreover, the prediction can serve as a reference to policymakers and legislators as they seek to make changes that may impact the factors closely associated with the future homeless population trend.
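As a rough sketch of the recurrent forecasting idea behind HP-RNN, the following minimal example trains a simple RNN on a sliding window of monthly counts; the file name, window size, and layer width are illustrative assumptions, not the study's configuration.

```python
import numpy as np
import tensorflow as tf

def make_windows(series, window=12):
    """Turn a 1-D series into (samples, window, 1) inputs and next-step targets."""
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    return X[..., None], series[window:]

counts = np.loadtxt("monthly_homeless_counts.csv")  # hypothetical input file
X, y = make_windows(counts)

model = tf.keras.Sequential([
    tf.keras.layers.SimpleRNN(32, input_shape=(X.shape[1], 1)),
    tf.keras.layers.Dense(1),  # next month's count
])
model.compile(optimizer="adam", loss="mse")  # MSE, as in the study
model.fit(X, y, epochs=200, verbose=0)

next_month = model.predict(X[-1:])  # one-step-ahead forecast
```

A production model would normalize the series and hold out an evaluation span, in the spirit of the study's New York training set and Seattle test set.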

Keywords: homeless, prediction, model, RNN

Procedia PDF Downloads 119
117 Impact of Transitioning to Renewable Energy Sources on Key Performance Indicators and Artificial Intelligence Modules of Data Center

Authors: Ahmed Hossam ElMolla, Mohamed Hatem Saleh, Hamza Mostafa, Lara Mamdouh, Yassin Wael

Abstract:

Artificial intelligence (AI) is reshaping industries, and its potential to revolutionize renewable energy and data center operations is immense. By harnessing AI's capabilities, we can optimize energy consumption, predict fluctuations in renewable energy generation, and improve the efficiency of data center infrastructure. This convergence of technologies promises a future where energy is managed more intelligently, sustainably, and cost-effectively. The integration of AI into renewable energy systems unlocks a wealth of opportunities. Machine learning algorithms can analyze vast amounts of data to forecast weather patterns, solar irradiance, and wind speeds, enabling more accurate energy production planning. AI-powered systems can optimize energy storage and grid management, ensuring a stable power supply even during intermittent renewable generation. Moreover, AI can identify maintenance needs for renewable energy infrastructure, preventing costly breakdowns and maximizing system lifespan. Data centers, which consume substantial amounts of energy, are prime candidates for AI-driven optimization. AI can analyze energy consumption patterns, identify inefficiencies, and recommend adjustments to cooling systems, server utilization, and power distribution. Predictive maintenance using AI can prevent equipment failures, reducing energy waste and downtime. Additionally, AI can optimize data placement and retrieval, minimizing energy consumption associated with data transfer. As AI transforms renewable energy and data center operations, modified Key Performance Indicators (KPIs) will emerge. Traditional metrics like energy efficiency and cost-per-megawatt-hour will continue to be relevant, but additional KPIs focused on AI's impact will be essential. These might include AI-driven cost savings, predictive accuracy of energy generation and consumption, and the reduction of carbon emissions attributed to AI-optimized operations. By tracking these KPIs, organizations can measure the success of their AI initiatives and identify areas for improvement. Ultimately, the synergy between AI, renewable energy, and data centers holds the potential to create a more sustainable and resilient future. By embracing these technologies, we can build smarter, greener, and more efficient systems that benefit both the environment and the economy.

Keywords: data center, artificial intelligence, renewable energy, energy efficiency, sustainability, optimization, predictive analytics, energy consumption, energy storage, grid management, data center optimization, key performance indicators, carbon emissions, resiliency

Procedia PDF Downloads 30