Search results for: data databases
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 25511


24731 Investigation of Medicinal Applications of Maclura Pomifera Extract

Authors: Mahdi Asghari Ozma

Abstract:

Background and Objective: Maclura pomifera (Rafin.) Schneider, known as osage orange, is a North American native plant with multiple applications in herbal medicine. The extract of this plant has many therapeutic effects, including antimicrobial, anti-tumor, and anti-inflammatory activity, which are discussed in this study. Materials and Methods: For this study, the keywords "Maclura pomifera", "osage orange", "herbal medicine", and "plant extract" were searched in the PubMed and Google Scholar databases for the period 2002 to 2021, and 20 articles were chosen, studied, and analyzed. Results: Due to the increasing resistance of microbes to antibiotics, the need for antimicrobial plants is growing. Maclura pomifera is one of the plants with antimicrobial properties, acting against a broad range of microbes, especially Gram-negative bacteria and fungi. This plant also has anti-tumor, anti-inflammatory, anti-oxidant, anti-aging, antiviral, anti-fungal, anti-ulcerogenic, anti-diabetic, and anti-nociceptive effects, giving it many potential therapeutic applications. Conclusion: These results suggest that the extract of Maclura pomifera can be used in clinical medicine as a remedial agent, either substituting for chemical drugs or complementing them in the treatment of diseases.

Keywords: maclura pomifera, osage orange, herbal medicine, plant extract

Procedia PDF Downloads 242
24730 Experimental Evaluation of Succinct Ternary Tree

Authors: Dmitriy Kuptsov

Abstract:

Tree data structures, such as binary or, in general, k-ary trees, are essential in computer science. Applications of these data structures range from data search and retrieval to sorting and ranking algorithms. Naive implementations can consume prohibitively large volumes of random access memory, limiting their applicability in certain solutions; in these cases, a more advanced representation of these data structures is essential. In this paper, we present the design of a compact ternary tree data structure and report the results of an experimental evaluation using the static dictionary problem. We compare these results with those for binary and regular ternary trees. The evaluation shows that our design, in the best case, consumes up to 12 times less memory (for the dictionary used in our experimental evaluation) than a regular ternary tree, and in certain configurations shows performance comparable to regular ternary trees. We evaluated the performance of the algorithms on both 32- and 64-bit operating systems.
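To illustrate the idea behind such compact representations, here is a minimal sketch of a pointer-free ternary search tree for the static dictionary problem. Instead of one heap-allocated object per node, nodes live in parallel flat arrays indexed by integers; the paper's actual succinct encoding is more aggressive (bit-level), and this class is an assumption-laden illustration, not the authors' implementation.

```python
# Compact ternary search tree: nodes stored in parallel flat arrays
# (character, end-of-word flag, and lo/eq/hi child indices) instead of
# per-node objects, reducing per-node memory overhead.
class CompactTST:
    def __init__(self):
        self.ch, self.end = [], []
        self.lo, self.eq, self.hi = [], [], []

    def _new_node(self, c):
        self.ch.append(c)
        self.end.append(False)
        for arr in (self.lo, self.eq, self.hi):
            arr.append(-1)          # -1 means "no child"
        return len(self.ch) - 1

    def insert(self, word):
        if not word:
            return
        if not self.ch:
            self._new_node(word[0])
        node, i = 0, 0
        while True:
            c = word[i]
            if c < self.ch[node]:
                if self.lo[node] < 0:
                    self.lo[node] = self._new_node(c)
                node = self.lo[node]
            elif c > self.ch[node]:
                if self.hi[node] < 0:
                    self.hi[node] = self._new_node(c)
                node = self.hi[node]
            elif i == len(word) - 1:
                self.end[node] = True
                return
            else:
                i += 1
                if self.eq[node] < 0:
                    self.eq[node] = self._new_node(word[i])
                node = self.eq[node]

    def contains(self, word):
        node, i = 0, 0
        while node >= 0 and self.ch and word:
            c = word[i]
            if c < self.ch[node]:
                node = self.lo[node]
            elif c > self.ch[node]:
                node = self.hi[node]
            elif i == len(word) - 1:
                return self.end[node]
            else:
                i += 1
                node = self.eq[node]
        return False
```

Because the five arrays hold only small integers and flags, they can later be packed into typed arrays or bit vectors, which is where the succinct-representation savings come from.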

Keywords: algorithms, data structures, succinct ternary tree, performance evaluation

Procedia PDF Downloads 160
24729 Predicting Data Center Resource Usage Using Quantile Regression to Conserve Energy While Fulfilling the Service Level Agreement

Authors: Ahmed I. Alutabi, Naghmeh Dezhabad, Sudhakar Ganti

Abstract:

Data centers have grown in size and demand continuously over the last two decades. Planning for the deployment of resources has been shallow and has always resorted to over-provisioning: data center operators try to maximize the availability of their services by allocating multiples of the needed resources. One resource that has been wasted with little thought is energy. In recent years, programmable resource allocation has paved the way for more efficient and robust data centers. In this work, we examine the predictability of resource usage in a data center environment. We use a number of models that cover a wide spectrum of machine learning categories, then establish a framework to guarantee the client service level agreement (SLA). Our results show that using prediction can cut energy loss by up to 55%.
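A minimal sketch of the loss function underlying quantile regression, the pinball loss, shows why it suits this trade-off: predicting a high quantile (e.g. tau = 0.95) of resource usage penalises under-provisioning (SLA violations) more than over-provisioning (wasted energy). The function names and the usage numbers here are illustrative, not taken from the paper.

```python
# Pinball (quantile) loss: asymmetric penalty controlled by tau.
def pinball_loss(y_true, y_pred, tau):
    """Average pinball loss for quantile level tau in (0, 1)."""
    total = 0.0
    for y, q in zip(y_true, y_pred):
        diff = y - q
        total += tau * diff if diff >= 0 else (tau - 1) * diff
    return total / len(y_true)

def empirical_quantile(samples, tau):
    """Empirical tau-quantile: the constant prediction minimising pinball loss."""
    s = sorted(samples)
    idx = min(int(tau * len(s)), len(s) - 1)
    return s[idx]

usage = [40, 42, 45, 47, 50, 52, 55, 60, 75, 90]   # e.g. CPU utilisation (%)
q95 = empirical_quantile(usage, 0.95)               # conservative provisioning level
```

With tau = 0.9, under-predicting true usage by 2 units costs 1.8, while over-predicting by 2 units costs only 0.2, which is exactly the asymmetry a provisioning policy wants.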

Keywords: machine learning, artificial intelligence, prediction, data center, resource allocation, green computing

Procedia PDF Downloads 108
24728 Prosperous Digital Image Watermarking Approach by Using DCT-DWT

Authors: Prabhakar C. Dhavale, Meenakshi M. Pawar

Abstract:

Every day, tons of data are embedded in digital media or distributed over the internet. The data are distributed in a way that allows error-free replication, putting the rights of their owners at risk. Even when encrypted for distribution, data can easily be decrypted and copied. One way to discourage illegal duplication is to insert information, known as a watermark, into potentially valuable data in such a way that it is impossible to separate the watermark from the data. These challenges have motivated intense research in the field of watermarking. A watermark is a form, image, or text impressed onto paper that provides evidence of its authenticity; digital watermarking is an extension of the same concept. There are two types of watermarks: visible and invisible. In this project, we have concentrated on embedding a watermark in images. The main consideration for any watermarking scheme is its robustness to various attacks.

Keywords: watermarking, digital, DCT-DWT, security

Procedia PDF Downloads 422
24727 Machine Learning Data Architecture

Authors: Neerav Kumar, Naumaan Nayyar, Sharath Kashyap

Abstract:

Most companies see increasing adoption of machine learning (ML) applications across internal and external-facing use cases. ML applications vend output in either batch or real-time patterns. A complete batch ML pipeline architecture comprises data sourcing, feature engineering, model training, model deployment, and model output vending into a data store for downstream applications. Due to unclear role expectations, we have observed scientists who specialize in building and optimizing models investing significant effort into building the other components of the architecture, which we do not believe is the best use of their bandwidth. We propose a system architecture, created using AWS services, that brings industry best practices to managing the workflow and simplifies model deployment and end-to-end data integration for an ML application. This narrows the scope of scientists' work to model building and refinement, while specialized data engineers take over deployment, pipeline orchestration, data quality, the data permission system, etc. The pipeline infrastructure is built and deployed as code (using Terraform, CDK, CloudFormation, etc.), which makes it easy to replicate and/or extend the architecture to other models used in an organization.

Keywords: data pipeline, machine learning, AWS, architecture, batch machine learning

Procedia PDF Downloads 64
24726 Analysis of Composite Health Risk Indicators Built at a Regional Scale and Fine Resolution to Detect Hotspot Areas

Authors: Julien Caudeville, Muriel Ismert

Abstract:

Analyzing the relationship between environment and health has become a major preoccupation for public health as evidenced by the emergence of the French national plans for health and environment. These plans have identified the following two priorities: (1) to identify and manage geographic areas, where hotspot exposures are suspected to generate a potential hazard to human health; (2) to reduce exposure inequalities. At a regional scale and fine resolution of exposure outcome prerequisite, environmental monitoring networks are not sufficient to characterize the multidimensionality of the exposure concept. In an attempt to increase representativeness of spatial exposure assessment approaches, risk composite indicators could be built using additional available databases and theoretical framework approaches to combine factor risks. To achieve those objectives, combining data process and transfer modeling with a spatial approach is a fundamental prerequisite that implies the need to first overcome different scientific limitations: to define interest variables and indicators that could be built to associate and describe the global source-effect chain; to link and process data from different sources and different spatial supports; to develop adapted methods in order to improve spatial data representativeness and resolution. A GIS-based modeling platform for quantifying human exposure to chemical substances (PLAINE: environmental inequalities analysis platform) was used to build health risk indicators within the Lorraine region (France). Those indicators combined chemical substances (in soil, air and water) and noise risk factors. Tools have been developed using modeling, spatial analysis and geostatistic methods to build and discretize interest variables from different supports and resolutions on a 1 km2 regular grid within the Lorraine region. 
For example, surface soil concentrations were estimated by developing a kriging method able to integrate surface and point spatial supports. Then, an exposure model developed by INERIS was used to assess the transfer from soil to individual exposure through ingestion pathways. We used the distance from a polluted soil site as a proxy for a contaminated site. The air indicator combined modeled concentrations and estimated emissions to take 30 pollutants into account in the analysis. For water, drinking water concentrations were compared to drinking water standards to build a score spatialized using a distribution-unit service map. The Lden (day-evening-night) indicator was used to map noise around road infrastructures. Aggregation of the different risk factors was performed using different methodologies to discuss the impact of weighting and aggregation procedures on the effectiveness of risk maps for decisions safeguarding citizens' health. The results make it possible to identify pollutant sources, determinants of exposure, and potential hotspot areas. A diagnostic tool was developed for stakeholders to visualize and analyze the composite indicators in an operational and accurate manner. The designed support system will be used in many applications and contexts: (1) mapping environmental disparities throughout the Lorraine region; (2) identifying vulnerable populations and determinants of exposure to set priorities and targets for pollution prevention, regulation, and remediation; (3) providing an exposure database to quantify relationships between environmental indicators and cancer mortality data provided by French Regional Health Observatories.

Keywords: health risk, environment, composite indicator, hotspot areas

Procedia PDF Downloads 247
24725 A Comparison of Image Data Representations for Local Stereo Matching

Authors: André Smith, Amr Abdel-Dayem

Abstract:

The stereo matching problem, while having been present for several decades, continues to be an active area of research. The goal of this research is to find correspondences between elements found in a set of stereoscopic images. With these pairings, it is possible to infer the distance of objects within a scene relative to the observer. Advancements in this field have led to experimentation with various techniques, from graph-cut energy minimization to artificial neural networks. At the basis of these techniques is a cost function, which is used to evaluate the likelihood of a particular match between points in each image. While, at its core, the cost is based on comparing image pixel data, there is a general lack of consistency as to which image data representation to use. This paper presents an experimental analysis comparing the effectiveness of the more common image data representations. The goal is to determine how well these data representations reduce the cost for the correct correspondence relative to other possible matches.
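Two of the simplest such cost functions are the sum of absolute differences (SAD) and the sum of squared differences (SSD). The sketch below evaluates them between a reference patch and candidate patches at different disparities; the flat-list patches are an illustrative simplification, since real implementations slide 2-D windows over rectified images.

```python
# SAD and SSD matching costs over flattened pixel patches.
def sad(patch_a, patch_b):
    return sum(abs(a - b) for a, b in zip(patch_a, patch_b))

def ssd(patch_a, patch_b):
    return sum((a - b) ** 2 for a, b in zip(patch_a, patch_b))

def best_disparity(reference, candidates, cost=sad):
    """Return the index (disparity) of the candidate with the lowest cost."""
    costs = [cost(reference, c) for c in candidates]
    return costs.index(min(costs))

ref = [10, 20, 30, 40]
cands = [[50, 60, 70, 80], [11, 19, 31, 41], [0, 0, 0, 0]]
```

The paper's question of which image data representation to use amounts to asking what values go into `patch_a` and `patch_b` (grayscale, individual colour channels, gradients, etc.) before such a cost is evaluated.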

Keywords: colour data, local stereo matching, stereo correspondence, disparity map

Procedia PDF Downloads 370
24724 Business-Intelligence Mining of Large Decentralized Multimedia Datasets with a Distributed Multi-Agent System

Authors: Karima Qayumi, Alex Norta

Abstract:

The rapid generation of high-volume and widely varied data from the application of new technologies poses challenges for the generation of business intelligence. Most organizations and business owners need to extract data from multiple sources and apply analytical methods to develop their business. The resulting decentralized data-management environment therefore relies on a distributed computing paradigm. While data are stored in highly distributed systems, implementing distributed data-mining techniques is a challenge: the aim is to gather knowledge from every domain and all the datasets stemming from distributed resources. As agent technologies offer significant contributions for managing the complexity of distributed systems, we consider them for next-generation data-mining processes. To demonstrate agent-based business intelligence operations, we use agent-oriented modeling techniques to develop a new artifact for mining massive datasets.

Keywords: agent-oriented modeling (AOM), business intelligence model (BIM), distributed data mining (DDM), multi-agent system (MAS)

Procedia PDF Downloads 432
24723 Timing and Noise Data Mining Algorithm and Software Tool in Very Large Scale Integration (VLSI) Design

Authors: Qing K. Zhu

Abstract:

Very Large Scale Integration (VLSI) design has become very complex due to the continuous integration of millions of gates in one chip, following Moore's law. Designers encounter numerous report files during design iterations using timing and noise analysis tools. This paper presents our work using data mining techniques, combined with HTML tables, to extract and represent critical timing and noise data. When we apply this data-mining tool in real applications, running speed is important; the software employs table look-up techniques to achieve reasonable running speed, as confirmed by performance testing results. We added several advanced features for application to one industrial chip design.
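The approach described above can be sketched as follows: parse a plain-text timing report, keep only the violating paths in a dictionary (the look-up table that gives fast access), and render them as an HTML table. The report format, field names, and slack threshold here are invented for illustration; real static-timing-analysis report formats vary by EDA tool.

```python
# Extract critical (negative-slack) paths from a text report into a dict,
# then render the dict as an HTML table.
report = """\
path1 clk_a 0.35
path2 clk_b -0.12
path3 clk_a -0.05
"""

def parse_report(text, slack_limit=0.0):
    """Keep only paths whose slack violates the limit; index them by name."""
    critical = {}
    for line in text.strip().splitlines():
        name, clock, slack = line.split()
        slack = float(slack)
        if slack < slack_limit:
            critical[name] = {"clock": clock, "slack": slack}
    return critical

def to_html_table(paths):
    rows = "".join(
        f"<tr><td>{n}</td><td>{p['clock']}</td><td>{p['slack']:.2f}</td></tr>"
        for n, p in sorted(paths.items())
    )
    return f"<table><tr><th>path</th><th>clock</th><th>slack</th></tr>{rows}</table>"

critical = parse_report(report)
```

Indexing by path name means repeated queries during design iterations cost a dictionary look-up rather than a rescan of the report file, which is the speed gain the abstract alludes to.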

Keywords: VLSI design, data mining, big data, HTML forms, web, VLSI, EDA, timing, noise

Procedia PDF Downloads 254
24722 Food Security Indicators in Deltaic and Coastal Research: A Scoping Review

Authors: Sylvia Szabo, Thilini Navaratne, Indrajit Pal, Seree Park

Abstract:

Deltaic and coastal regions are often strategically important both from local and regional perspectives. While deltas are known to be bread baskets of the world, delta inhabitants often face the risk of food and nutritional insecurity. These risks are highly exacerbated by the impacts of climate and environmental change. While numerous regional studies examined the prevalence and the determinants of food security in specific delta and coastal regions, there is still a lack of a systematic analysis of the most widely used scientific food security indicators. In order to fill this gap, a systematic review was carried out using Covidence, a Cochrane-adopted systematic review processing software. Papers included in the review were selected from the SCOPUS, Thomson Reuters Web of Science, Science Direct, ProQuest, and Google Scholar databases. Both scientific papers and grey literature (e.g., reports by international organizations) were considered. The results were analyzed by food security components (access, availability, quality, and strategy) and by world regions. Suggestions for further food security, nutrition, and health research, as well as policy-related implications, are also discussed.

Keywords: delta regions, coastal, food security, indicators, systematic review

Procedia PDF Downloads 239
24721 Data Presentation of Lane-Changing Events Trajectories Using HighD Dataset

Authors: Basma Khelfa, Antoine Tordeux, Ibrahima Ba

Abstract:

We present a descriptive analysis of lane-changing event data on multi-lane roads. The data come from the Highway Drone Dataset (HighD), which contains microscopic vehicle trajectories recorded on highways. This paper describes and analyses the role of the different parameters and their significance. Using the HighD data, we aim to find the most frequent reasons that motivate drivers to change lanes. We used the programming language R to process these data. We analyze the involvement and relationship of the variables characterising the ego vehicle and the four vehicles surrounding it, i.e., distance, speed difference, time gap, and acceleration. This was studied according to the class of the vehicle (car or truck) and according to the maneuver it undertook (overtaking or falling back).
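The interest variables named above (distance, speed difference, time gap) can be derived from trajectory rows as in the sketch below. The field names loosely mimic HighD's CSV columns but should be treated as assumptions, and the example values are invented; the paper itself does this processing in R.

```python
# Derive pairwise features between an ego vehicle and its lead vehicle
# from single-frame trajectory records.
def surrounding_features(ego, lead):
    """Compute longitudinal gap, relative speed, and time gap."""
    distance = lead["x"] - ego["x"]                # longitudinal gap (m)
    dv = lead["xVelocity"] - ego["xVelocity"]      # speed difference (m/s)
    time_gap = distance / ego["xVelocity"] if ego["xVelocity"] > 0 else float("inf")
    return {"distance": distance, "speed_diff": dv, "time_gap": time_gap}

ego = {"x": 100.0, "xVelocity": 25.0}
lead = {"x": 150.0, "xVelocity": 22.0}
feats = surrounding_features(ego, lead)
```

A closing lead vehicle (negative speed difference) combined with a shrinking time gap is the kind of configuration the descriptive analysis associates with overtaking maneuvers.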

Keywords: autonomous driving, physical traffic model, prediction model, statistical learning process

Procedia PDF Downloads 261
24720 An Image Stitching Approach for Scoliosis Analysis

Authors: Siti Salbiah Samsudin, Hamzah Arof, Ainuddin Wahid Abdul Wahab, Mohd Yamani Idna Idris

Abstract:

Standard X-ray spine images produced by the conventional screen-film technique have a limited field of view. This limitation may obstruct a complete inspection of the spine unless images of different parts of the spine are placed contiguously next to each other to form a complete structure. Another way to produce a whole-spine image is to assemble the digitized X-ray images of its parts automatically using image stitching. This paper presents a new Medical Image Stitching (MIS) method that utilizes Minimum Average Correlation Energy (MACE) filters to identify and merge pairs of X-ray medical images. The effectiveness of the proposed method is demonstrated in two sets of experiments involving two databases containing a total of 40 pairs of overlapping and non-overlapping spine images. The experimental results are compared to those produced by the Normalized Cross Correlation (NCC) and Phase Only Correlation (POC) methods. It is found that the proposed method outperforms the NCC and POC methods in identifying both the overlapping and non-overlapping medical images. The efficacy of the proposed method is further vindicated by its average execution time, which is about two to five times shorter than those of the POC and NCC methods.
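For reference, the NCC baseline against which the MACE-filter method is compared computes a mean-subtracted, normalized correlation between two image regions. The sketch below works on flat pixel lists for brevity; real implementations operate on 2-D arrays and search over relative offsets.

```python
# Normalized cross correlation between two equally sized pixel strips.
import math

def ncc(a, b):
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

strip_a = [10, 30, 50, 70]
identical = [10, 30, 50, 70]
shifted = [110, 130, 150, 170]   # same pattern, uniformly brighter
inverted = [70, 50, 30, 10]
```

Because the means are subtracted and the result is normalized, NCC scores 1.0 for a brightness-shifted copy of the same pattern, which is what makes it usable across X-ray exposures with different intensities.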

Keywords: image stitching, MACE filter, panorama image, scoliosis

Procedia PDF Downloads 458
24719 Evaluation of Golden Beam Data for the Commissioning of 6 and 18 MV Photons Beams in Varian Linear Accelerator

Authors: Shoukat Ali, Abdul Qadir Jandga, Amjad Hussain

Abstract:

Objective: The main purpose of this study is to compare the percent depth dose (PDD) and in-plane and cross-plane profiles of Varian golden beam data to measured data for 6 and 18 MV photons for the commissioning of the Eclipse treatment planning system. Introduction: Commissioning of a treatment planning system requires extensive acquisition of beam data for the clinical use of linear accelerators. Accurate dose delivery requires entering the PDDs, profiles, and dose rate tables for open and wedged fields into the treatment planning system, enabling it to calculate MUs and dose distributions. Varian offers a generic set of beam data as reference data; however, it is not recommended for clinical use. In this study, we compared the generic beam data with measured beam data to evaluate the reliability of the generic beam data for clinical purposes. Methods and Material: PDDs and profiles of open and wedged fields for different field sizes and at different depths were measured as per Varian's algorithm commissioning guideline. The measurements were performed with a PTW 3D scanning water phantom with a semiflex ion chamber and MEPHYSTO software. The online available Varian golden beam data were compared with the measured data to evaluate whether the golden beam data are accurate enough to be used for the commissioning of the Eclipse treatment planning system. Results: The deviation between measured and golden beam data was at most 2%. In the PDDs, the deviation increases more at deeper depths than at shallower depths. Similarly, the profiles show the same trend of increasing deviation at large field sizes and increasing depths. Conclusion: The study shows that the percentage deviation between measured and golden beam data is within the acceptable tolerance, and the golden beam data can therefore be used for the commissioning process; however, verification of a small subset of acquired data against the golden beam data should be mandatory before clinical use.
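The comparison described above reduces to computing the percentage deviation between measured and golden-beam PDD values at matching depths and checking it against the 2% tolerance. The PDD values below are illustrative numbers, not the study's measurements.

```python
# Percentage deviation between measured and reference (golden) PDD curves.
def percent_deviation(measured, golden):
    return [abs(m - g) / g * 100.0 for m, g in zip(measured, golden)]

def within_tolerance(measured, golden, tol=2.0):
    return all(d <= tol for d in percent_deviation(measured, golden))

golden_pdd = [100.0, 86.0, 66.0, 50.0, 38.0]    # PDD (%) at increasing depth
measured_pdd = [100.0, 86.5, 66.8, 50.6, 38.6]
```

Note that because the denominator (the PDD value) shrinks with depth, an identical absolute dose difference produces a larger percentage deviation at depth, consistent with the trend reported in the results.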

Keywords: percent depth dose, flatness, symmetry, golden beam data

Procedia PDF Downloads 489
24718 Variable-Fidelity Surrogate Modelling with Kriging

Authors: Selvakumar Ulaganathan, Ivo Couckuyt, Francesco Ferranti, Tom Dhaene, Eric Laermans

Abstract:

Variable-fidelity surrogate modelling offers an efficient way to approximate function data available at multiple degrees of accuracy, each with a different computational cost. In this paper, a Kriging-based variable-fidelity surrogate modelling approach is introduced to approximate such deterministic data. Initially, individual Kriging surrogate models, enhanced with gradient data of different degrees of accuracy, are constructed. These gradient-enhanced Kriging surrogate models are then strategically coupled using a recursive CoKriging formulation to provide an accurate surrogate model for the highest-fidelity data. While, intuitively, gradient data are useful for enhancing the accuracy of surrogate models, the primary motivation behind this work is to investigate whether it is also worthwhile to incorporate gradient data of varying degrees of accuracy.
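The recursive CoKriging coupling referred to above is commonly written, in the autoregressive form going back to Kennedy and O'Hagan, as a scaled lower-fidelity model plus an independent discrepancy term; this is the standard textbook formulation, stated here for orientation rather than as the paper's exact notation.

```latex
% Autoregressive link between fidelity levels t-1 and t:
% the level-t response is a scaled level-(t-1) surrogate plus a
% Gaussian-process discrepancy term independent of the lower level.
Z_t(x) = \rho_{t-1}\, Z_{t-1}(x) + \delta_t(x),
\qquad \delta_t(x) \;\perp\; Z_{t-1}(x),
```

Here $\rho_{t-1}$ is a scaling factor estimated from the data and $\delta_t$ captures what the lower-fidelity model misses; applying the relation recursively chains all fidelity levels up to the most accurate one.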

Keywords: Kriging, CoKriging, surrogate modelling, variable-fidelity modelling, gradients

Procedia PDF Downloads 558
24717 Robust Barcode Detection with Synthetic-to-Real Data Augmentation

Authors: Xiaoyan Dai, Hsieh Yisan

Abstract:

Barcode processing of captured images is a huge challenge, as different shooting conditions can result in different barcode appearances. This paper proposes a deep learning-based barcode detection using synthetic-to-real data augmentation. We first augment barcodes themselves; we then augment images containing the barcodes to generate a large variety of data that is close to the actual shooting environments. Comparisons with previous works and evaluations with our original data show that this approach achieves state-of-the-art performance in various real images. In addition, the system uses hybrid resolution for barcode “scan” and is applicable to real-time applications.

Keywords: barcode detection, data augmentation, deep learning, image-based processing

Procedia PDF Downloads 169
24716 Does Mirror Therapy Improve Motor Recovery After Stroke? A Meta-Analysis of Randomized Controlled Trials

Authors: Hassan Abo Salem, Guo Feng, Xiaolin Huang

Abstract:

The objective of this study is to determine the effectiveness of mirror therapy on motor recovery and functional abilities after stroke. The following databases were searched from inception to May 2014: Cochrane Stroke, Cochrane Central Register of Controlled Trials, MEDLINE, EMBASE, CINAHL, AMED, PsycINFO, and PEDro. Two reviewers independently screened and selected all randomized controlled trials evaluating the effect of mirror therapy in stroke rehabilitation. Twelve randomized controlled trials met the inclusion criteria; 10 studied the effect of mirror therapy on the upper limb and 2 on the lower limb. Mirror therapy had a positive effect on motor recovery and function; however, we found no consistent influence on activities of daily living, spasticity, or balance. This meta-analysis suggests that mirror therapy has a beneficial effect on motor recovery but only a small positive effect on functional abilities after stroke. Further high-quality studies with greater statistical power are required in order to accurately determine the effectiveness of mirror therapy following stroke.

Keywords: mirror therapy, motor recovery, stroke, balance

Procedia PDF Downloads 552
24715 Copy Number Variants in Children with Non-Syndromic Congenital Heart Diseases from Mexico

Authors: Maria Lopez-Ibarra, Ana Velazquez-Wong, Lucelli Yañez-Gutierrez, Maria Araujo-Solis, Fabio Salamanca-Gomez, Alfonso Mendez-Tenorio, Haydeé Rosas-Vargas

Abstract:

Congenital heart diseases (CHD) are the most common congenital abnormalities. These conditions can occur either as an element of distinct chromosomal malformation syndromes or as non-syndromic forms, and their etiology is not fully understood. Genetic variants such as copy number variants (CNVs) have been associated with CHD. The aim of our study was to analyze these genomic variants in peripheral blood from Mexican children diagnosed with non-syndromic CHD. We included 16 children with atrial and ventricular septal defects and 5 healthy subjects without heart malformations as controls. To exclude the most common syndromic alteration associated with heart disease, we performed a fluorescence in situ hybridization test for the 22q11.2 deletion, responsible for the congenital heart abnormalities associated with DiGeorge syndrome. Then, microarray-based comparative genomic hybridization was used to identify genome-wide copy number variants, by comparing our results against data from the main genetic variation databases. We identified copy number gains in three chromosomal regions in the pediatric patients, 4q13.2 (31.25%), 9q34.3 (25%), and 20q13.33 (50%), where several genes associated with cellular, biosynthetic, and metabolic processes are located: UGT2B15, UGT2B17, SNAPC4, SDCCAG3, PMPCA, INPP6E, C9orf163, NOTCH1, C20orf166, and SLCO4A1. In addition, after a hierarchical cluster analysis based on the fluorescence intensity ratios from the comparative genomic hybridization, two congenital heart disease groups were generated, corresponding to children with atrial or ventricular septal defects. Further analysis with a larger sample size is needed to corroborate these copy number variants as possible biomarkers to differentiate between heart abnormalities.
Interestingly, the 20q13.33 gain was present in 50% of children with these CHD which could suggest that alterations in both coding and non-coding elements within this chromosomal region may play an important role in distinct heart conditions.

Keywords: aCGH, bioinformatics, congenital heart diseases, copy number variants, fluorescence in situ hybridization

Procedia PDF Downloads 292
24714 Using Machine-Learning Methods for Allergen Amino Acid Sequence's Permutations

Authors: Kuei-Ling Sun, Emily Chia-Yu Su

Abstract:

Allergy is a hypersensitive overreaction of the immune system to environmental stimuli and a major health problem. These overreactions include rashes, sneezing, fever, food allergies, anaphylaxis, asthma, shock, and other abnormal conditions. Allergies can be caused by food, insect stings, pollen, animal wool, and other allergens, and their development is due to both genetic and environmental factors. Allergies involve immunoglobulin E (IgE) antibodies, part of the body's immune system. IgE antibodies bind to an allergen and then to receptors on mast cells or basophils, triggering the release of inflammatory chemicals such as histamine. Motivated by the increasingly serious problems of environmental change, changing lifestyles, and air pollution, in this study we collect both allergens and non-allergens from several databases and use several machine learning methods for classification, including logistic regression (LR), stepwise regression, decision trees (DT), and neural networks (NN), to compare models and determine the permutations of allergen amino acid sequences.

Keywords: allergy, classification, decision tree, logistic regression, machine learning

Procedia PDF Downloads 303
24713 Effectiveness of Computer-Based Cognitive Training in Improving Attention-Deficit/Hyperactivity Disorder Rehabilitation

Authors: Marjan Ghazisaeedi, Azadeh Bashiri

Abstract:

Background: Attention-Deficit/Hyperactivity Disorder (ADHD) is one of the most common psychiatric disorders of early childhood; in addition to its main symptoms, it produces significant deficits in educational, social, and interpersonal functioning. Considering the importance of rehabilitation for controlling these problems in ADHD patients, this study investigated the advantages of computer-based cognitive training. Methods: This review was conducted by searching scientific databases and e-journals for articles published since 2005, using keywords including computerized cognitive rehabilitation, computer-based training, and ADHD. Results: Since drugs have short-term effects and many side effects in the rehabilitation of ADHD patients, supplementary methods such as computer-based cognitive training are among the best solutions. This approach gives quick feedback and has no side effects, providing promising results in the cognitive rehabilitation of ADHD, especially for working memory and attention. Conclusion: Considering the varied cognitive dysfunctions in ADHD patients, computerized cognitive training has the potential to improve cognitive functions and, consequently, social, academic, and behavioral performance in patients with this disorder.

Keywords: ADHD, computer-based cognitive training, cognitive functions, rehabilitation

Procedia PDF Downloads 278
24712 A Case-Control Study on Dietary Heme/Nonheme Iron and Colorectal Cancer Risk

Authors: Alvaro L. Ronco

Abstract:

Background and purpose: Although our country is a developing one, it has a typical Western meat-rich dietary style. Based on estimates of the heme and nonheme iron contents of representative foods, we carried out the present epidemiologic study with the aim of accurately analyzing dietary iron and its role in CRC risk. Subjects/methods: Patients (611 CRC incident cases and 2394 controls, all from public hospitals of our capital city) were interviewed using a questionnaire covering socio-demographic, reproductive, and lifestyle variables, and a 64-item food frequency questionnaire asking about food intake 5 years before the interview. The sample included 1937 men and 1068 women. Controls were matched to cases by sex and age (±5 years). Food-derived nutrients were calculated from available databases. Total dietary iron was calculated and classified by heme or nonheme source, following data from specific Dutch and Canadian studies, and additionally adjusted for energy. Odds ratios (OR) and 95% confidence intervals were calculated through unconditional logistic regression, adjusting for relevant potential confounders (education, body mass index, family history of cancer, energy, infusions, and others). A heme/nonheme (H/NH) ratio was created, and the variables of interest were categorized into tertiles for analysis. Results: The following risk estimates correspond to the highest tertiles. Total iron intake showed no association with CRC risk among either men (OR=0.83, ptrend=.18) or women (OR=1.48, ptrend=.09). Heme iron was positively associated among men (OR=1.88, ptrend<.001) and in the overall sample (OR=1.44, ptrend=.002); however, it was not associated among women (OR=0.91, ptrend=.83). Nonheme iron showed an inverse association among men (OR=0.53, ptrend<.001) and in the overall sample (OR=0.78, ptrend=.04), but was not associated among women (OR=1.46, ptrend=.14). 
Regarding the H/NH ratio, risk increased only among men (OR=2.12, ptrend<.001), with no association among women (OR=0.81, ptrend=.29). Conclusions: We have observed different types of associations between CRC risk and high dietary heme iron, nonheme iron, and the H/NH ratio. Therefore, the source of the available iron might be of importance as a link to colorectal carcinogenesis, perhaps pointing to a need to reconsider the animal/plant proportions of this vital mineral within the diet. Nevertheless, the different associations observed for each sex demand further studies to clarify these points.
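For orientation, the quantity being estimated can be illustrated with an unadjusted odds ratio and a 95% confidence interval from a 2x2 exposure-by-outcome table (Woolf's log-OR method). The study's ORs are adjusted via unconditional logistic regression; this unadjusted sketch with invented counts only shows the underlying calculation.

```python
# Unadjusted odds ratio with a Woolf-method 95% confidence interval.
import math

def odds_ratio_ci(a, b, c, d):
    """a, b = exposed/unexposed cases; c, d = exposed/unexposed controls."""
    or_value = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)       # SE of log(OR)
    lo = math.exp(math.log(or_value) - 1.96 * se)
    hi = math.exp(math.log(or_value) + 1.96 * se)
    return or_value, lo, hi

or_value, lo, hi = odds_ratio_ci(200, 100, 800, 900)
```

A confidence interval lying entirely above 1, as with these illustrative counts, is what corresponds to the positive associations reported for heme iron among men.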

Keywords: chelation, colorectal cancer, heme, iron, nonheme

Procedia PDF Downloads 170
24711 Analysis of Delivery of Quad Play Services

Authors: Rahul Malhotra, Anurag Sharma

Abstract:

Fiber-based access networks can deliver performance that supports the increasing demand for high-speed connections. One of the new technologies that has emerged in recent years is the Passive Optical Network (PON). This paper demonstrates the simultaneous delivery of triple play services (data, voice, and video). A comparative investigation of the suitability of various data rates is presented. It is demonstrated that as the data rate increases, the number of users that can be accommodated decreases due to the increase in bit error rate.
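The link between data rate and the number of accommodated users runs through the bit error rate: as the rate rises or more users share the PON's optical power budget, the received Q-factor falls and the BER climbs steeply. A minimal sketch of the standard Gaussian-noise BER formula (an assumption for illustration; the paper's simulation setup is not detailed here):

```python
import math

def ber_from_q(q):
    """Bit error rate for a given Q-factor under the Gaussian-noise assumption."""
    return 0.5 * math.erfc(q / math.sqrt(2))

# As the received Q-factor degrades (e.g., at higher data rates or with
# more users splitting the optical power), the BER rises sharply.
for q in (7, 6, 5, 4):
    print(f"Q = {q}: BER = {ber_from_q(q):.2e}")
```

The steep rise of BER with falling Q is what caps the user count at a given data rate: once the BER exceeds the service threshold (commonly around 1e-9), no further splits can be supported.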

Keywords: FTTH, quad play, play service, access networks, data rate

Procedia PDF Downloads 415
24710 Classification of Manufacturing Data for Efficient Processing on an Edge-Cloud Network

Authors: Onyedikachi Ulelu, Andrew P. Longstaff, Simon Fletcher, Simon Parkinson

Abstract:

The widespread interest in 'Industry 4.0' or 'digital manufacturing' has led to significant research requiring the acquisition of data from sensors, instruments, and machine signals. In-depth research then identifies methods of analysing the massive amounts of data generated before and during manufacture to solve a particular problem. The ultimate goal is for industrial Internet of Things (IIoT) data to be processed automatically to assist with either visualisation or autonomous system decision-making. However, the collection and processing of data in an industrial environment come at a cost. Little research has been undertaken on how to optimally specify what data to capture, transmit, process, and store at the various levels of an edge-cloud network. The first step in this specification is to categorise IIoT data for efficient and effective use. This paper proposes the attributes and classification required to take manufacturing digital data from various sources and determine the most suitable location for data processing on the edge-cloud network. The proposed classification framework will minimise overhead in terms of network bandwidth/cost and the processing time of machine tool data via efficient decisions on which datasets should be processed at the ‘edge’ and which should be sent to a remote server (cloud). A fast-and-frugal heuristic method is implemented for this decision-making. The framework is tested using case studies from industrial machine tools for machine productivity and maintenance.
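A fast-and-frugal heuristic examines one cue at a time and exits on the first cue that yields a decision, rather than weighing all attributes at once. The cue names and thresholds below are illustrative assumptions, not the attributes proposed in the paper:

```python
def route_dataset(meta):
    """Fast-and-frugal heuristic sketch: check cues in a fixed order and
    exit on the first decisive one.  Cue names and thresholds are
    hypothetical, chosen only to illustrate the decision style."""
    if meta["latency_critical"]:      # real-time control loops stay local
        return "edge"
    if meta["size_mb"] > 100:         # large payloads are costly to transmit
        return "edge"
    if meta["needs_history"]:         # long-horizon analytics need the archive
        return "cloud"
    return "cloud"                    # default: offload to the cloud

sample = {"latency_critical": False, "size_mb": 250, "needs_history": True}
print(route_dataset(sample))
```

Because each cue is checked at most once, the routing decision itself adds negligible overhead, which is the appeal of fast-and-frugal trees for per-dataset decisions at the edge.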

Keywords: data classification, decision making, edge computing, industrial IoT, industry 4.0

Procedia PDF Downloads 182
24709 Denoising Transient Electromagnetic Data

Authors: Lingerew Nebere Kassie, Ping-Yu Chang, Hsin-Hua Huang, Chaw-Son Chen

Abstract:

Transient electromagnetic (TEM) data plays a crucial role in hydrogeological and environmental applications, providing valuable insights into geological structures and resistivity variations. However, the presence of noise often hinders the interpretation and reliability of these data. Our study addresses this issue by utilizing a FASTSNAP system for the TEM survey, which operates at different modes (low, medium, and high) with continuous adjustments to discretization, gain, and current. We employ a denoising approach that processes the raw data obtained from each acquisition mode to improve signal quality and enhance data reliability. We use a signal-averaging technique for each mode, increasing the signal-to-noise ratio. Additionally, we utilize wavelet transform to suppress noise further while preserving the integrity of the underlying signals. This approach significantly improves the data quality, notably suppressing severe noise at late times. The resulting denoised data exhibits a substantially improved signal-to-noise ratio, leading to increased accuracy in parameter estimation. By effectively denoising TEM data, our study contributes to a more reliable interpretation and analysis of underground structures. Moreover, the proposed denoising approach can be seamlessly integrated into existing ground-based TEM data processing workflows, facilitating the extraction of meaningful information from noisy measurements and enhancing the overall quality and reliability of the acquired data.
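Signal averaging (stacking) improves the signal-to-noise ratio because uncorrelated noise shrinks as the square root of the number of repeats averaged: stacking 64 transients cuts the noise standard deviation roughly eightfold. A minimal numerical illustration with idealised single-gate readings (synthetic values, not FASTSNAP data):

```python
import random
import statistics

random.seed(42)
TRUE_SIGNAL = 1.0   # idealised TEM decay value at one time gate
NOISE_STD = 0.5
N_STACKS = 64       # number of repeated transients averaged per stack

def noisy_reading():
    """One simulated acquisition: true signal plus Gaussian noise."""
    return TRUE_SIGNAL + random.gauss(0, NOISE_STD)

# Compare the spread of single acquisitions against stack averages.
singles = [noisy_reading() for _ in range(1000)]
stacked = [statistics.fmean(noisy_reading() for _ in range(N_STACKS))
           for _ in range(1000)]

# Averaging N repeats shrinks the noise std by ~sqrt(N) (~8x here).
print(statistics.stdev(singles), statistics.stdev(stacked))
```

In practice this is why late-time gates, where the TEM signal has decayed towards the noise floor, benefit most from stacking before the wavelet-based suppression step is applied.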

Keywords: data quality, signal averaging, transient electromagnetic, wavelet transform

Procedia PDF Downloads 85
24708 Attribute Analysis of Quick Response Code Payment Users Using Discriminant Non-negative Matrix Factorization

Authors: Hironori Karachi, Haruka Yamashita

Abstract:

Recently, the quick response (QR) code payment system has been getting popular. Many companies have introduced new QR code payment services, and these services compete with each other to increase their number of users. To increase the number of users, it is necessary to understand how the demographic information, usage information, and value of users differ between services. In this study, we analyze real-world data provided by Nomura Research Institute, including demographic data of users and usage information for two services: LINE Pay and PayPay. Non-negative Matrix Factorization (NMF) is widely used to analyze and interpret such data; however, the target data suffer from missing values. We use EM-algorithm NMF (EMNMF) to complete unknown values in order to understand the features of the given data represented in matrix form. Moreover, for comparing the results of NMF analyses of two matrices, Discriminant NMF (DNMF) shows the differences in user features between the two matrices. In this study, we combine EMNMF and DNMF to analyze the target data. As the interpretation, we show the differences in user features between LINE Pay and PayPay.
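A common way to handle missing entries in NMF is to mask them out of the multiplicative updates, which is one possible reading of the completion step described above. The sketch below runs masked multiplicative updates on toy data; it is an illustrative assumption, not the authors' exact EMNMF/DNMF formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy user-by-attribute matrix with ~20% of entries missing (NaN).
V = rng.random((8, 5))
V[rng.random(V.shape) < 0.2] = np.nan
M = ~np.isnan(V)            # mask: True where an entry is observed
Vf = np.nan_to_num(V)       # zero-fill so masked arithmetic is defined

k = 2                       # number of latent factors
W = rng.random((8, k)) + 0.1
H = rng.random((k, 5)) + 0.1
eps = 1e-9                  # guard against division by zero

def masked_error(W, H):
    """Squared reconstruction error over observed entries only."""
    return float(np.sum(((Vf - W @ H) ** 2)[M]))

err0 = masked_error(W, H)
for _ in range(200):
    # Multiplicative updates restricted to observed entries via the mask.
    R = W @ H
    H *= (W.T @ (M * Vf)) / (W.T @ (M * R) + eps)
    R = W @ H
    W *= ((M * Vf) @ H.T) / ((M * R) @ H.T + eps)
err1 = masked_error(W, H)
print(err0, err1)
```

The factors W and H learned this way also fill in the missing entries via the product W @ H, which is the matrix-completion side effect the abstract relies on before the discriminant comparison.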

Keywords: data science, non-negative matrix factorization, missing data, quality of services

Procedia PDF Downloads 131
24707 Developing Guidelines for Public Health Nurse Data Management and Use in Public Health Emergencies

Authors: Margaret S. Wright

Abstract:

Background/Significance: During many recent public health emergencies/disasters, public health nursing data have been missing or delayed, potentially impacting decision-making and response. Data used as evidence for decision-making in response, planning, and mitigation have been erratic and slow to arrive, decreasing the ability to respond. Methodology: Applying best practices in data management and data use in public health settings, guided by the concepts outlined in ‘Disaster Standards of Care’ models, leads to recommendations for a model of best practices in data management and use by public health nurses in public health disasters/emergencies. As the ‘patient’ in public health disasters/emergencies is the community (local, regional, or national), guidelines for patient documentation are incorporated in the recommendations. Findings: Using this model, public health nurses could better plan how to prepare for, respond to, and mitigate disasters in their communities and better participate in decision-making in all three phases, bringing public health nursing data to the discussion as part of the evidence base for decision-making.

Keywords: data management, decision making, disaster planning documentation, public health nursing

Procedia PDF Downloads 221
24706 Dental Implants in Breast Cancer Patients Receiving Bisphosphonate Therapy

Authors: Mai Ashraf Talaat

Abstract:

Objectives: The aim of this review article is to assess the success of dental implants in breast cancer patients receiving bisphosphonate therapy and to evaluate the risk of developing bisphosphonate-related osteonecrosis of the jaw following dental implant surgery. Materials and Methods: A thorough search was conducted, with no time or language restriction, using the PubMed, PubMed Central, Web of Science, and ResearchGate electronic databases. Medical Subject Headings (MeSH) terms such as “bisphosphonate”, “dental implant”, “bisphosphonate-related osteonecrosis of the jaw (BRONJ)”, “osteonecrosis”, “breast cancer”, “MRONJ”, and their related entry terms were used. Eligibility criteria included studies and clinical trials that evaluated the impact of bisphosphonates on dental implants. Conclusion: Breast cancer patients undergoing bisphosphonate therapy may receive dental implants. However, the risk of developing BRONJ and implant failure is high. Risk factors such as the type of bisphosphonate received, the route of administration, and the length of treatment prior to surgery should be considered. More randomized controlled trials with long-term follow-ups are needed to draw more evidence-based conclusions.

Keywords: dental implants, breast cancer, bisphosphonates, osteonecrosis, bisphosphonate-related osteonecrosis of the jaw

Procedia PDF Downloads 112
24705 An Embarrassingly Simple Semi-supervised Approach to Increase Recall in Online Shopping Domain to Match Structured Data with Unstructured Data

Authors: Sachin Nagargoje

Abstract:

Complete labeled data is often difficult to obtain in a practical scenario. Even if one manages to obtain the data, its quality is always in question. In the shopping vertical, offers are the input data, provided by advertisers with varying quality of information. In this paper, the author investigated the possibility of using a very simple semi-supervised learning approach to increase the recall of unhealthy offers (those with badly written offer titles or partial product details) in the shopping vertical domain. The author found that the semi-supervised learning method improved recall in the Smart Phone category by 30% in A/B testing on 10% of traffic and increased the year-over-year (YoY) number of impressions per month by 33% in production. This also produced a significant increase in revenue, but that cannot be publicly disclosed.
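A self-training loop of the kind described, pseudo-labeling the unlabeled points the current model is confident about and then refitting, can be sketched in a few lines. The toy one-dimensional "offer quality" feature, the nearest-centroid classifier, and the confidence margin below are all invented for illustration and are not the paper's actual pipeline:

```python
import statistics

# Toy 1-D feature: an "offer quality" score.  A few labeled examples
# (1 = unhealthy offer, 0 = healthy) plus many unlabeled ones.
labeled = [(0.1, 1), (0.2, 1), (0.9, 0), (0.8, 0)]
unlabeled = [0.15, 0.25, 0.3, 0.7, 0.85, 0.95, 0.5]

def centroids(pairs):
    """Per-class mean of the feature, i.e. a nearest-centroid model."""
    by = {0: [], 1: []}
    for x, y in pairs:
        by[y].append(x)
    return {y: statistics.fmean(xs) for y, xs in by.items()}

# Self-training: repeatedly pseudo-label the points that sit clearly
# closer to one class centroid, then refit the centroids.
for _ in range(5):
    c = centroids(labeled)
    rest = []
    for x in unlabeled:
        d0, d1 = abs(x - c[0]), abs(x - c[1])
        if abs(d0 - d1) > 0.2:          # confidence margin for pseudo-labeling
            labeled.append((x, 0 if d0 < d1 else 1))
        else:
            rest.append(x)              # ambiguous points stay unlabeled
    unlabeled = rest

print(len(labeled), unlabeled)
```

The recall gain comes from the pseudo-labeled points enlarging the effective training set for the minority "unhealthy" class; genuinely ambiguous examples (the 0.5 point here) are deliberately left out rather than guessed.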

Keywords: semi-supervised learning, clustering, recall, coverage

Procedia PDF Downloads 122
24704 Genodata: The Human Genome Variation Using BigData

Authors: Surabhi Maiti, Prajakta Tamhankar, Prachi Uttam Mehta

Abstract:

Since the completion of the Human Genome Project, there has been an unparalleled escalation in the sequencing of genomic data. This project was the first major leap in the field of medical research, especially in genomics. The project made use of a concept called Big Data, which was earlier used extensively to derive value for business. Big Data involves data sets that are generally files of terabytes, petabytes, or exabytes in size; such data sets were traditionally managed using spreadsheets and RDBMSs. The voluminous data made the process tedious and time-consuming, and hence a stronger framework called Hadoop was introduced in the field of genetic sciences to make data processing faster and more efficient. This paper focuses on using SPARK, which is gaining momentum with the advancement of Big Data technologies. Cloud storage is an effective medium for storing the large data sets generated by genetic research and the result sets produced by SPARK analysis.

Keywords: human genome project, Bigdata, genomic data, SPARK, cloud storage, Hadoop

Procedia PDF Downloads 259
24703 Skills for Family Support Workforce: A Systematic Review

Authors: Anita Burgund Isakov, Cristina Nunes, Nevenka Zegarac, Ana Antunes

Abstract:

Contemporary societies are facing a noticeable shift in family realities, creating a need for the development of new policies, services, and practice orientations applicable across the different sectors that serve families with children across the world. A challenge for the field of family support is the diversity in conceptual assumptions and epistemological frameworks. Since many disciplines and professionals work in the family support field, there is a need to map and gain deeper insight into the skills required of the workforce in this field. Under the umbrella of the COST action 'The Pan-European Family Support Research Network: A bottom-up, evidence-based and multidisciplinary approach', a review of the current state of knowledge published in European studies on family support workforce skill standards was performed. Contributing to the aim of mapping and cataloguing skill standards, key stages of the literature review were identified in order to extract and systematize the data. Inclusion and exclusion criteria were defined for this review. Inclusion criteria were: a) studies of families living with their children and families using family support services; different methodological approaches were included (qualitative, quantitative, mixed-method, literature review, and theoretical reflection), and various topics appeared in the journals, such as working with families facing difficulties, culturally sensitive practice, and relationship-based approaches; b) publication dates ranging from 1995 to February 2020 (articles published prior to 1995 were excluded due to the modernization of family support services across the world); c) sources limited to peer-reviewed articles published in scientific journals in English. Six databases were searched; once all the relevant papers had been extracted (n=29), the reference list of each was searched, yielding 11 additional papers. In total, 40 papers were extracted from the six databases. 
Findings can be summarized as follows: 1) only five countries showed research output on this specific topic of workforce skills for family support (UK, USA, Canada, Australia, and Spain); 2) the studies investigated diverse family support skill topics, namely the professional skills needed to support families of neglected/abused children or children in care, families of children with behavioral problems, families of children with disabilities, and minority ethnic parents; 3) social workers were the main professionals targeted by the studies, although other child protection workers were studied too; 4) the workforce skills for family support were grouped into three topics: qualities of the professionals (attitudes and attributes), technical skills, and specific knowledge. The framework of analysis, literature search strategy, and findings, together with the study's limitations, will be discussed further. As an implication, this study contributes to and advocates for the structuring of a common base for cross-sectoral and interdisciplinary qualification standards for the family support workforce.

Keywords: family support, skill standards, systematic review, workforce

Procedia PDF Downloads 112
24702 Design and Development of a Platform for Analyzing Spatio-Temporal Data from Wireless Sensor Networks

Authors: Walid Fantazi

Abstract:

The development of sensor technology (such as microelectromechanical systems (MEMS), wireless communications, embedded systems, distributed processing, and wireless sensor applications) has contributed to a broad range of WSN applications capable of collecting large amounts of spatiotemporal data in real time. These systems require real-time processing to manage storage and to query the data as they arrive. To cover these needs, we propose in this paper a snapshot spatiotemporal data model based on object-oriented concepts. This model enables efficient storage and reduces data redundancy, which makes it easier to execute spatiotemporal queries and saves analysis time. Further, to ensure the robustness of the system as well as to avoid congestion of main memory, we propose an in-RAM spatiotemporal indexing technique called Captree*. As a result, we offer an RIA (Rich Internet Application)-based SOA application architecture that allows remote monitoring and control.
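Since the Captree* structure itself is not detailed in the abstract, the general idea of an in-memory spatiotemporal index can at least be sketched with a simpler hash-grid over (space cell, time bucket) keys, purely as an illustrative stand-in:

```python
from collections import defaultdict

class GridTimeIndex:
    """Minimal in-RAM spatiotemporal index: readings are hashed into
    (space cell, space cell, time bucket) buckets.  This is only an
    illustrative stand-in for the paper's Captree* structure, whose
    details are not given in the abstract."""

    def __init__(self, cell=10.0, bucket=60.0):
        self.cell, self.bucket = cell, bucket
        self.store = defaultdict(list)

    def _key(self, x, y, t):
        return (int(x // self.cell), int(y // self.cell), int(t // self.bucket))

    def insert(self, x, y, t, value):
        """Add one sensor reading at position (x, y) and time t."""
        self.store[self._key(x, y, t)].append((x, y, t, value))

    def query(self, x, y, t):
        """All readings in the same space cell and time bucket."""
        return self.store.get(self._key(x, y, t), [])

idx = GridTimeIndex()
idx.insert(12.5, 3.0, 30.0, 21.7)   # e.g. a temperature reading
idx.insert(12.9, 4.0, 45.0, 21.9)
print(idx.query(13.0, 3.5, 50.0))
```

Bucketing keeps both insert and lookup at dictionary-access cost, which is the property any RAM-resident spatiotemporal index needs to keep up with real-time WSN ingestion; a tree-based structure such as Captree* would additionally support range queries across cells.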

Keywords: WSN, indexing data, SOA, RIA, geographic information system

Procedia PDF Downloads 254