Search results for: data maturity
24858 Wavelets Contribution on Textual Data Analysis
Authors: Habiba Ben Abdessalem
Abstract:
The emergence of giant sets of textual data has encouraged researchers to invest in this field. The purpose of textual data analysis methods is to facilitate access to such data by providing various graphic visualizations. Applying these methods requires a corpus pretreatment step, whose standards are set according to the objective of the problem studied. This step determines the list of forms contained in the contingency table, keeping only those that carry information. The step may, however, produce noisy contingency tables, hence the use of a wavelet denoising function. The validity of the proposed approach is tested on a text database covering economic and political events in Tunisia over a well-defined period.
Keywords: textual data, wavelet, denoising, contingency table
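A minimal sketch of the kind of denoising step described above, applied to a two-dimensional contingency table with the PyWavelets library; the choice of wavelet (db4), the decomposition level, and the universal soft threshold are illustrative assumptions, not the authors' exact settings.

```python
import numpy as np
import pywt  # PyWavelets

def denoise_contingency_table(table, wavelet="db4", level=2):
    """Soft-threshold the 2-D wavelet coefficients of a noisy contingency table."""
    coeffs = pywt.wavedec2(table, wavelet, level=level)
    # Noise scale estimated from the finest diagonal detail coefficients.
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(table.size))  # universal threshold
    denoised = [coeffs[0]] + [
        tuple(pywt.threshold(c, thresh, mode="soft") for c in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(denoised, wavelet)

# Toy term-by-document table with Poisson counts standing in for real text data.
rng = np.random.default_rng(0)
table = rng.poisson(5, size=(32, 32)).astype(float)
print(denoise_contingency_table(table).round(2))
```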
Procedia PDF Downloads 277
24857 Customer Churn Analysis in Telecommunication Industry Using Data Mining Approach
Authors: Burcu Oralhan, Zeki Oralhan, Nilsun Sariyer, Kumru Uyar
Abstract:
Data mining has become more and more important and has found a wide range of applications in recent years. Data mining is the process of finding hidden and unknown patterns in big data. One of its applied fields is Customer Relationship Management. Understanding the relationships between products and customers is crucial for every business. Customer Relationship Management is an approach that focuses on developing customer relationships, retaining customers, and increasing customer satisfaction. In this study, we apply data mining methods to the customer relationship management side of the telecommunication industry. The study aims to determine the profiles of customers who are likely to leave the system and to develop marketing strategies and customized campaigns for them. Data are clustered, and classification techniques are applied to identify the churners. As a result of this study, we obtain knowledge from the international telecommunication industry and contribute to the understanding and development of this subject in Customer Relationship Management.
Keywords: customer churn analysis, customer relationship management, data mining, telecommunication industry
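As a concrete illustration of the churn-detection step, the following is a minimal sketch using a scikit-learn decision tree; the file name and feature columns are hypothetical stand-ins for an operator's real subscriber data.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

# Hypothetical subscriber table; column names are illustrative only.
df = pd.read_csv("telecom_customers.csv")
X = df[["monthly_charges", "tenure_months", "support_calls", "data_usage_gb"]]
y = df["churned"]  # 1 = customer left the operator

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y
)
model = DecisionTreeClassifier(max_depth=5, random_state=42).fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```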
Procedia PDF Downloads 316
24856 On Pooling Different Levels of Data in Estimating Parameters of Continuous Meta-Analysis
Authors: N. R. N. Idris, S. Baharom
Abstract:
A meta-analysis may be performed using aggregate data (AD) or individual patient data (IPD). In practice, studies may be available at both the IPD and AD levels. In this situation, both the IPD and AD should be utilised in order to maximize the available information. The statistical advantages of combining studies from different levels have not been fully explored. This study aims to quantify the statistical benefits of including available IPD when conducting a conventional summary-level meta-analysis. Simulated meta-analyses were used to assess the influence of the level of data on overall meta-analysis estimates based on IPD only, AD only, and the combination of IPD and AD (mixed data, MD), under different study scenarios. The percentage relative bias (PRB), root mean-square error (RMSE), and coverage probability were used to assess the efficiency of the overall estimates. The results demonstrate that available IPD should always be included in a conventional meta-analysis using summary-level data, as it significantly increases the accuracy of the estimates. On the other hand, if more than 80% of the available data are at the IPD level, including the AD does not produce significant differences in the accuracy of the estimates. Additionally, combining the IPD and AD moderates the bias of the treatment-effect estimates, as the IPD tends to overestimate the treatment effects while the AD tends to produce underestimated effect estimates. These results may provide some guidance in deciding whether significant benefit is gained by pooling the two levels of data when conducting a meta-analysis.
Keywords: aggregate data, combined-level data, individual patient data, meta-analysis
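To make the pooling idea concrete, here is a minimal sketch of fixed-effect inverse-variance pooling in which an IPD study is first reduced to the same summary form as the AD studies; the fixed-effect model and the numbers are illustrative assumptions, not the simulation design of the study.

```python
import numpy as np

def pool_fixed_effect(effects, variances):
    """Fixed-effect inverse-variance pooled estimate and its variance."""
    w = 1.0 / np.asarray(variances)
    return np.sum(w * np.asarray(effects)) / w.sum(), 1.0 / w.sum()

# AD studies report the effect estimate and its variance directly.
ad_effects, ad_vars = [0.42, 0.35], [0.040, 0.060]

# An IPD study: reduce patient-level outcomes to the same summaries first.
treat = np.array([1.1, 0.9, 1.3, 0.7])
ctrl = np.array([0.6, 0.5, 0.8, 0.4])
ipd_effect = treat.mean() - ctrl.mean()
ipd_var = treat.var(ddof=1) / treat.size + ctrl.var(ddof=1) / ctrl.size

est, var = pool_fixed_effect(ad_effects + [ipd_effect], ad_vars + [ipd_var])
print(f"pooled effect = {est:.3f}, variance = {var:.3f}")
```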
Procedia PDF Downloads 375
24855 Analyzing On-Line Process Data for Industrial Production Quality Control
Authors: Hyun-Woo Cho
Abstract:
The monitoring of industrial production quality has to be implemented to give early warning of unusual operating conditions. Furthermore, identification of their assignable causes is necessary for quality control purposes. For such tasks, many multivariate statistical techniques have been applied and shown to be quite effective tools. This work presents a process data-based monitoring scheme for production processes. For more reliable results, additional noise filtering and preprocessing steps are considered; they can lead to enhanced performance by eliminating unwanted variation in the data. The performance evaluation is executed using data sets from test processes. The proposed method is shown to provide reliable quality control results, and thus is more effective in quality monitoring in the example. For practical implementation of the method, an on-line data system must be available to gather historical and on-line data. Recently, large amounts of data have been collected on-line in most processes, so implementation of the current scheme is feasible and does not place additional burdens on users.
Keywords: detection, filtering, monitoring, process data
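The abstract does not name the multivariate technique used; a common choice for this kind of scheme is PCA-based monitoring with a Hotelling's T² alarm, so the sketch below should be read as one plausible instance rather than the paper's method.

```python
import numpy as np
from sklearn.decomposition import PCA

# Normal operating data: rows = samples, columns = process variables.
rng = np.random.default_rng(1)
X_normal = rng.normal(size=(500, 10))
mean, std = X_normal.mean(axis=0), X_normal.std(axis=0)
pca = PCA(n_components=3).fit((X_normal - mean) / std)

def t2(x):
    """Hotelling's T^2 of one observation in the PCA score space."""
    scores = pca.transform(((x - mean) / std).reshape(1, -1))[0]
    return float(np.sum(scores ** 2 / pca.explained_variance_))

# Alarm when T^2 exceeds a control limit estimated from training data.
limit = np.percentile([t2(x) for x in X_normal], 99)
x_new = rng.normal(size=10) + 3.0  # a shifted, suspicious observation
print("alarm" if t2(x_new) > limit else "normal")
```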
Procedia PDF Downloads 559
24854 A Review of Travel Data Collection Methods
Authors: Muhammad Awais Shafique, Eiji Hato
Abstract:
Household trip data is of crucial importance for managing present transportation infrastructure as well as for planning and designing future facilities. It also provides the basis for new policies implemented under Transportation Demand Management. The methods used for household trip data collection have changed with the passage of time, starting with conventional face-to-face or paper-and-pencil interviews and arriving at the recent approach of employing smartphones. This study summarizes the step-wise evolution of travel data collection methods. It provides a comprehensive review of the topic for readers interested in the changing trends in the data collection field.
Keywords: computer, smartphone, telephone, travel survey
Procedia PDF Downloads 313
24853 A Business-to-Business Collaboration System That Promotes Data Utilization While Encrypting Information on the Blockchain
Authors: Hiroaki Nasu, Ryota Miyamoto, Yuta Kodera, Yasuyuki Nogami
Abstract:
To promote Industry 4.0 and Society 5.0, it is important to connect and share data so that every member can trust it. Blockchain (BC) technology is currently attracting attention as the most advanced tool and has been used in the financial field, among others. However, data collaboration using BC has not progressed sufficiently among companies on the manufacturing supply chain, which handle sensitive data such as product quality and manufacturing conditions. There are two main reasons why data utilization is not sufficiently advanced in the industrial supply chain. The first is that manufacturing information is top secret and a source of profit for companies; it is difficult to disclose data even between companies that transact with each other on the supply chain. In a blockchain mechanism such as Bitcoin using PKI (Public Key Infrastructure), the plaintext must be shared between companies in order to confirm the identity of the company that sent the data. The second is that the merits (scenarios) of data collaboration between companies are not clearly specified in the industrial supply chain. To address these problems, this paper proposes a Business-to-Business (B2B) collaboration system using homomorphic encryption and BC techniques. Using the proposed system, each company on the supply chain can exchange confidential information as encrypted data and utilize the data for its own business. In addition, this paper considers a scenario focusing on quality data, which has been difficult to share because it is top secret. In this scenario, we show an implementation scheme and the benefit of concrete data collaboration by proposing a comparison protocol that can capture changes in quality while hiding the numerical value of the quality data.
Keywords: business to business data collaboration, industrial supply chain, blockchain, homomorphic encryption
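The paper's own comparison protocol is not given in the abstract; the sketch below uses the python-paillier (phe) library, which is additively homomorphic rather than fully homomorphic, to show the underlying idea of revealing only the sign of a quality change while hiding the values. The key setup and masking scheme are simplified assumptions.

```python
import random
from phe import paillier  # python-paillier: additively homomorphic

# One party holds the key pair; the other compares quality scores blindly.
public_key, private_key = paillier.generate_paillier_keypair()

q_last_batch = public_key.encrypt(92)  # encrypted quality scores
q_this_batch = public_key.encrypt(88)

# The encrypted difference is masked with a random positive factor so that
# decryption reveals only whether quality rose or fell, not by how much.
r = random.randint(1, 1_000)
masked_diff = (q_this_batch - q_last_batch) * r

print("quality dropped" if private_key.decrypt(masked_diff) < 0 else "quality held")
```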
Procedia PDF Downloads 136
24852 Multivariate Assessment of Mathematics Test Scores of Students in Qatar
Authors: Ali Rashash Alzahrani, Elizabeth Stojanovski
Abstract:
Data on various aspects of education are collected regularly at the institutional and government level. In Australia, for example, students at various levels of schooling undertake examinations in numeracy and literacy as part of NAPLAN testing, enabling longitudinal assessment of such data as well as comparisons between schools and states within Australia. Another source of educational data collected internationally is the PISA study, which collects data from several countries when students are approximately 15 years of age. It enables comparisons of performance in science, mathematics and English between countries, as well as ranking of countries based on performance in these standardised tests. Beyond the student and school outcomes based on the tests taken as part of the PISA study, there is a wealth of other data collected, including parental demographics and data related to the teaching strategies used by educators. Overall, an abundance of educational data is available that has the potential to help improve educational attainment and the teaching of content in order to improve learning outcomes. A multivariate assessment of such data enables multiple variables to be considered simultaneously and will be used in the present study to help develop profiles of students based on performance in mathematics, using data obtained from the PISA study.
Keywords: cluster analysis, education, mathematics, profiles
Procedia PDF Downloads 126
24851 Humic Acid and Azadirachtin Derivatives for the Management of Crop Pests
Authors: R. S. Giraddi, C. M. Poleshi
Abstract:
Organic cultivation of crops is gaining importance as consumer awareness of pesticide-residue-free foodstuffs increases globally. This is also because the high costs of synthetic fertilizers and pesticides are making conventional farming non-remunerative. In India, organic manures (such as vermicompost) are an important input in organic agriculture. Vermicompost obtained through earthworm- and microbe-mediated processes is known to contain most of the crop nutrients, but in small amounts, necessitating enrichment so that crop nourishment is complete. Another characteristic of organic manures is that pest infestations are kept in check by the induced resistance put up by the crop plants. In the present investigation, deoiled neem cake containing azadirachtin, copper ore tailings (COT) as a source of micro-nutrients, and microbial consortia were added to enrich the vermicompost. Neem cake is a by-product of oil extraction from neem plant seeds. Three enriched vermicompost blends were prepared using vermicompost (at 70, 65 and 60%), deoiled neem cake (25, 30 and 35%), and microbial consortia with COT wastes (5%). The enriched vermicompost was thoroughly mixed, moistened (25±5%), packed and incubated for 15 days at room temperature. In the crop response studies, field trials on chili (Capsicum annum var. longum) and soybean (Glycine max cv JS 335) were conducted during Kharif 2015 at the Main Agricultural Research Station, UAS, Dharwad-Karnataka, India. The neem-cake-enriched vermicompost blend (known to possess higher amounts of nutrients) and plain vermicompost were applied to the crops at two dosages and at two intervals of the crop cycle (at sowing and 30 days after sowing) as per the treatment plan, along with 50% of the recommended dose of fertilizer (RDF). Ten plants selected randomly in each plot were studied for pest density and plant damage. At maturity, the crops were harvested, the yields were recorded as per the treatments, and the data were analyzed using appropriate statistical tools and procedures. In both chili and soybean, crop nourishment with neem-enriched vermicompost reduced insect density and plant damage significantly compared to the other treatments. These treatments registered as much yield (16.7 to 19.9 q/ha) as that realized in conventional chemical control (18.2 q/ha) in soybean, while 72 to 77 q/ha of green chili was harvested in the same treatments, comparable to the chemical control (74 q/ha). The yield superiority of the treatments was of the order neem-enriched vermicompost > conventional chemical control > neem cake > vermicompost > untreated control. The significant features of the result are that it reduces the use of inorganic manures by 50% and of synthetic chemical insecticides by 100%.
Keywords: humic acid, azadirachtin, vermicompost, insect-pest
Procedia PDF Downloads 277
24850 Dataset Quality Index: Development of Composite Indicator Based on Standard Data Quality Indicators
Authors: Sakda Loetpiparwanich, Preecha Vichitthamaros
Abstract:
Nowadays, poor data quality is considered one of the major costs of a data project. A data project with data quality awareness spends almost as much time on data quality processes, while a data project without such awareness suffers negative impacts on financial resources, efficiency, productivity, and credibility. One of the processes that takes a long time is defining data quality expectations and measurements, because expectations differ with the purpose of each data project. This is especially true for big data projects, which may involve many datasets and stakeholders and therefore require a long time to discuss and define quality expectations and measurements. This study therefore aimed at developing meaningful indicators that describe the overall data quality of each dataset for quick comparison and prioritization. The objectives of this study were to: (1) develop practical data quality indicators and measurements, (2) develop data quality dimensions based on statistical characteristics, and (3) develop a composite indicator that can describe the overall data quality of each dataset. The sample consisted of more than 500 datasets from public sources obtained by random sampling. After the datasets were collected, five steps were followed to develop the Dataset Quality Index (SDQI). First, we defined standard data quality expectations. Second, we found indicators that can directly measure the data within datasets. Third, the indicators were aggregated into dimensions using factor analysis. Next, the indicators and dimensions were weighted by the effort required for the data preparation process and by usability. Finally, the dimensions were aggregated into the composite indicator. The results of these analyses showed that: (1) the developed indicators and measurements comprised ten indicators, (2) in developing data quality dimensions based on statistical characteristics, the ten indicators could be reduced to four dimensions, and (3) the developed composite indicator, the SDQI, can describe the overall quality of each dataset and separate datasets into three levels: Good Quality, Acceptable Quality, and Poor Quality. In conclusion, the SDQI provides an overall description of data quality within datasets and a meaningful composition. The SDQI can be used to assess all data in a data project, for effort estimation, and for prioritization. It also works well with the Agile method: the SDQI is used for assessment in the first sprint, and after passing the initial evaluation, more specific data quality indicators can be added in the next sprint.
Keywords: data quality, dataset quality, data quality management, composite indicator, factor analysis, principal component analysis
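A minimal sketch of the final aggregation step: indicator scores are weighted and summed into the composite index, then mapped to a quality level. The indicator names, weights, and level thresholds are illustrative assumptions, not the values derived in the study.

```python
# Illustrative indicator scores in [0, 1] for one dataset (names assumed).
indicators = {"completeness": 0.95, "uniqueness": 0.88,
              "validity": 0.72, "consistency": 0.60}

# Weights reflecting preparation effort and usability (assumed values).
weights = {"completeness": 0.3, "uniqueness": 0.2,
           "validity": 0.3, "consistency": 0.2}

sdqi = sum(indicators[k] * weights[k] for k in indicators)
level = ("Good Quality" if sdqi >= 0.8
         else "Acceptable Quality" if sdqi >= 0.6
         else "Poor Quality")
print(f"SDQI = {sdqi:.2f} -> {level}")
```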
Procedia PDF Downloads 139
24849 Predictive Analysis for Big Data: Extension of Classification and Regression Trees Algorithm
Authors: Ameur Abdelkader, Abed Bouarfa Hafida
Abstract:
Since its inception, predictive analysis has revolutionized the IT industry through its robustness and decision-making facilities. It involves the application of a set of data processing techniques and algorithms in order to create predictive models. Its principle is based on finding relationships between explanatory variables and predicted variables: past occurrences are exploited to predict the unknown outcome. With the advent of big data, many studies have suggested using predictive analytics to process and analyze big data. Nevertheless, they have been curbed by the limits of classical methods of predictive analysis when faced with a large amount of data. In fact, because of its volume, its nature (semi-structured or unstructured), and its variety, it is impossible to analyze big data efficiently via classical methods of predictive analysis. The authors attribute this weakness to the fact that predictive analysis algorithms do not allow the parallelization and distribution of computation. In this paper, we propose to extend the predictive analysis algorithm Classification And Regression Trees (CART) in order to adapt it for big data analysis. The major changes to this algorithm are presented, and a version of the extended algorithm is then defined in order to make it applicable to huge quantities of data.
Keywords: predictive analysis, big data, predictive analysis algorithms, CART algorithm
Procedia PDF Downloads 142
24848 Canopy Temperature Acquired from Daytime and Nighttime Aerial Data as an Indicator of Trees’ Health Status
Authors: Agata Zakrzewska, Dominik Kopeć, Adrian Ochtyra
Abstract:
The growing number of new cameras, sensors, and research methods allows for a broader application of thermal data in remote sensing vegetation studies. The aim of this research was to check whether thermal infrared data with a spectral range of 3.6-4.9 μm, obtained during the day and the night, can be used to assess the health condition of selected species of deciduous trees in an urban environment. For this purpose, research was carried out in the city center of Warsaw (Poland) in 2020. During the airborne data acquisition, thermal data, laser scanning, and orthophoto map images were collected. Synchronously with the airborne data, ground reference data were obtained for 617 studied trees of five species (Acer platanoides, Acer pseudoplatanus, Aesculus hippocastanum, Tilia cordata, and Tilia × euchlora) in different states of health. The results were as follows: (i) healthy trees are cooler than trees in poor condition and dying trees in both the daytime and nighttime data; (ii) the difference in mean canopy temperature between healthy and dying trees was 1.06°C in the nighttime data and 3.28°C in the daytime data; (iii) the condition classes differ significantly in both daytime and nighttime thermal data, but only in the daytime data did all condition classes differ statistically significantly from each other. In conclusion, aerial thermal data can be considered an alternative to hyperspectral data as a method of assessing the health condition of trees in an urban environment, especially data obtained during the day, which differentiate condition classes better than data obtained at night. A method based on the fusion of thermal infrared and laser scanning data could be a quick and efficient solution for identifying trees in poor health that should be visually checked in the field.
Keywords: middle wave infrared, thermal imagery, tree discoloration, urban trees
Procedia PDF Downloads 115
24847 Hierarchical Clustering Algorithms in Data Mining
Authors: Z. Abdullah, A. R. Hamdan
Abstract:
Clustering is a process of grouping objects and data into clusters so that data objects within the same cluster are similar to each other. Clustering is one of the areas of data mining, and its algorithms can be classified into partitioning, hierarchical, density-based, and grid-based methods. In this paper, we survey and review four major hierarchical clustering algorithms: CURE, ROCK, CHAMELEON, and BIRCH. The obtained state of the art of these algorithms will help in eliminating current problems, as well as in deriving more robust and scalable clustering algorithms.
Keywords: clustering, unsupervised learning, algorithms, hierarchical
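CURE, ROCK, CHAMELEON, and BIRCH are not shipped in the common Python scientific stack, but the bottom-up agglomeration they all build on can be shown with SciPy; a minimal sketch, assuming Ward linkage and a cut at two clusters.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Two well-separated 2-D blobs of points.
X = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(5, 0.5, (20, 2))])

# Agglomerative clustering: repeatedly merge the closest pair of clusters.
Z = linkage(X, method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")  # cut the dendrogram at k=2
print(labels)
```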
Procedia PDF Downloads 885
24846 End to End Monitoring in Oracle Fusion Middleware for Data Verification
Authors: Syed Kashif Ali, Usman Javaid, Abdullah Chohan
Abstract:
In large enterprises, multiple departments use different kinds of information systems and databases according to their needs. These systems are independent and heterogeneous in nature, and sharing information and data between them is not an easy task. The use of middleware technologies has made data sharing between systems very easy. However, monitoring the exchange of data between target and source systems for verification purposes is often complex or impossible for the maintenance department, due to security and access privileges on the target and source systems. In this paper, we present our experience with an end-to-end data monitoring approach at the middleware level, implemented in Oracle BPEL for data verification without the help of any monitoring tool.
Keywords: service level agreement, SOA, BPEL, oracle fusion middleware, web service monitoring
Procedia PDF Downloads 480
24845 Dissimilarity Measure for General Histogram Data and Its Application to Hierarchical Clustering
Authors: K. Umbleja, M. Ichino
Abstract:
Symbolic data mining has been developed to analyze very large datasets. It is also useful when entry-specific details should remain hidden. Symbolic data mining is quickly gaining popularity as the datasets in need of analysis become ever larger. One type of symbolic data is the histogram, which makes it possible to store huge amounts of information in a single variable with a high level of granularity. Other types of symbolic data can also be described as histograms, making the histogram a very important and general symbolic data type: a method developed for histograms can also be applied to other types of symbolic data. Due to its complex structure, however, analyzing histograms is complicated. This paper proposes a method that allows two histogram-valued variables to be compared, and therefore a dissimilarity between two histograms to be found. The proposed method uses the Ichino-Yaguchi dissimilarity measure for mixed feature-type data analysis as a base and develops a dissimilarity measure specifically for histogram data, which allows histograms with different numbers of bins and bin widths (so-called general histograms) to be compared. The proposed dissimilarity measure is then used as a measure for clustering. Furthermore, a linkage method based on weighted averages is proposed, with the concept of cluster compactness used to measure the quality of the clustering. The method is then validated through application to real datasets. As a result, the proposed dissimilarity measure is found to produce adequate and comparable results on general histograms without loss of detail or the need to transform the data.
Keywords: dissimilarity measure, hierarchical clustering, histograms, symbolic data analysis
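The Ichino-Yaguchi-based measure itself is not reproduced here; as a minimal stand-in, the sketch below shows one way to compare two general histograms with different bin counts and widths, by rebinning both to piecewise-constant densities on a common grid and taking an L1 distance.

```python
import numpy as np

def to_density(edges, counts, grid):
    """Piecewise-constant density of a histogram evaluated on a common grid."""
    edges, counts = np.asarray(edges, float), np.asarray(counts, float)
    dens = counts / (counts.sum() * np.diff(edges))
    idx = np.clip(np.searchsorted(edges, grid, side="right") - 1, 0, len(counts) - 1)
    out = dens[idx]
    out[(grid < edges[0]) | (grid >= edges[-1])] = 0.0  # outside the support
    return out

def histogram_dissimilarity(h1, h2, n_grid=1000):
    """L1 distance between two general histograms (different bins allowed)."""
    lo = min(h1[0][0], h2[0][0])
    hi = max(h1[0][-1], h2[0][-1])
    grid = np.linspace(lo, hi, n_grid)
    step = grid[1] - grid[0]
    return float(np.sum(np.abs(to_density(*h1, grid) - to_density(*h2, grid))) * step)

h_a = ([0, 2, 4, 6], [5, 10, 5])              # three bins of width 2
h_b = ([0, 1, 2, 3, 4, 5], [2, 3, 4, 3, 2])   # five bins of width 1
print(histogram_dissimilarity(h_a, h_b))
```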
Procedia PDF Downloads 162
24844 Digital Twin Smart Hospital: A Guide for Implementation and Improvements
Authors: Enido Fabiano de Ramos, Ieda Kanashiro Makiya, Francisco I. Giocondo Cesar
Abstract:
This study investigates the application of Digital Twins (DT) in Smart Hospital Environments (SHE) through a bibliometric study and literature review, including a comparison with the principles of Industry 4.0. It aims to analyze the current state of digital twin implementation in clinical and non-clinical operations in healthcare settings, identifying trends and challenges and comparing these practices with Industry 4.0 concepts and technologies, in order to present a basic framework with stages and maturity levels. The bibliometric methodology makes it possible to map the existing scientific production on the theme, while the literature review synthesizes and critically analyzes the relevant studies, highlighting pertinent methodologies and results. Additionally, the comparison with Industry 4.0 provides insights into how the principles of automation, interconnectivity and digitalization can be applied in healthcare environments and operations, aiming at improvements in operational efficiency and quality of care. The results of this study will contribute to a deeper understanding of the potential of Digital Twins in Smart Hospitals, and of the future potential of effectively integrating Industry 4.0 concepts into this specific environment, presented through the practical framework. After all, the urgent need for the changes addressed in this article is undeniable, as is their contribution to human sustainability as envisioned in SDG3 (Health and well-being): ensuring that all citizens have a healthy life and well-being, at all ages and in all situations. We know that the validity of these relationships will be constantly discussed, and technology can always change the rules of the game.
Keywords: digital twin, smart hospital, healthcare operations, industry 4.0, SDG3, technology
Procedia PDF Downloads 53
24843 WiFi Data Offloading: Bundling Method in a Canvas Business Model
Authors: Majid Mokhtarnia, Alireza Amini
Abstract:
Mobile operators regard the increase in data traffic as a critical issue. As a result, a vital responsibility of the operators is to deal with this trend in a way that creates added value. This paper addresses a bundling method in a Canvas business model within a WiFi Data Offloading (WDO) strategy, by which some elements of the model may be affected. In the proposed method, a number of data packages are sold to subscribers, some of which include a complimentary volume of WiFi-offloaded data. The paper analyses this method from the viewpoints of attractiveness and profitability. The results demonstrate that the quality of implementation of the WDO strongly affects the final result and helps the decision maker to make the best choice.
Keywords: bundling, canvas business model, telecommunication, WiFi data offloading
Procedia PDF Downloads 200
24842 Distributed Perceptually Important Point Identification for Time Series Data Mining
Authors: Tak-Chung Fu, Ying-Kit Hung, Fu-Lai Chung
Abstract:
In the field of time series data mining, the concept of the Perceptually Important Point (PIP) identification process was first introduced in 2001. The process was originally developed for financial time series pattern matching and was then found suitable for time series dimensionality reduction and representation. Its strength lies in preserving the overall shape of the time series by identifying its salient points. With the rise of Big Data, time series data contribute a major proportion, especially data generated by sensors in Internet of Things (IoT) environments. Given the nature of PIP identification and its successful applications, it is worth further exploring the opportunity to apply PIP to time series 'Big Data'. However, the performance of PIP identification has always been considered the limitation when dealing with 'big' time series data. In this paper, two distributed versions of PIP identification based on the Specialized Binary (SB) Tree are proposed. The proposed approaches solve the bottleneck that arises when running the PIP identification process on a standalone computer, and improvement in terms of speed is obtained by the distributed versions.
Keywords: distributed computing, performance analysis, Perceptually Important Point identification, time series data mining
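The distributed SB-tree versions are the paper's contribution and are not reproduced here; for orientation, this is a minimal sketch of the serial PIP identification process, selecting points by maximum vertical distance to the chord between the adjacent already-chosen PIPs.

```python
import numpy as np

def pip_identify(series, n_pips):
    """Indices of n_pips perceptually important points of a time series."""
    y = np.asarray(series, float)
    pips = [0, len(y) - 1]  # start from the two endpoints
    while len(pips) < n_pips:
        best_idx, best_dist = None, -1.0
        ordered = sorted(pips)
        for left, right in zip(ordered[:-1], ordered[1:]):
            for i in range(left + 1, right):
                # Vertical distance from point i to the chord left -> right.
                chord = y[left] + (y[right] - y[left]) * (i - left) / (right - left)
                if abs(y[i] - chord) > best_dist:
                    best_idx, best_dist = i, abs(y[i] - chord)
        pips.append(best_idx)
    return sorted(pips)

prices = [1.0, 1.2, 0.9, 1.5, 1.4, 1.8, 1.1, 1.3]
print(pip_identify(prices, 5))  # indices of the 5 most salient points
```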
Procedia PDF Downloads 433
24841 Analysing Techniques for Fusing Multimodal Data in Predictive Scenarios Using Convolutional Neural Networks
Authors: Philipp Ruf, Massiwa Chabbi, Christoph Reich, Djaffar Ould-Abdeslam
Abstract:
In recent years, convolutional neural networks (CNNs) have demonstrated high performance in image analysis, but oftentimes only structured data are available for a specific problem. By interpreting structured data as images, CNNs can effectively learn and extract valuable insights from tabular data, leading to improved predictive accuracy and uncovering hidden patterns that may not be apparent in traditional structured data analysis. Applying a single neural network to multimodal data, i.e., both structured and unstructured information, brings significant advantages in terms of time complexity and energy efficiency. Converting structured data into images and merging them with existing visual material offers a promising solution for applying CNNs to multimodal datasets, as they often occur in a medical context. Using suitable preprocessing techniques, the structured data are transformed into image representations, where the respective features are expressed as different formations of colors and shapes. In an additional step, these representations are fused with existing images to incorporate both types of information. The resulting image is then analyzed using a CNN.
Keywords: CNN, image processing, tabular data, mixed dataset, data transformation, multimodal fusion
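A minimal sketch of the tabular-to-image idea: each normalized feature becomes a pixel intensity, and the resulting grid is stacked with an existing image as a second channel for a CNN. The image size, padding scheme, and feature values are illustrative assumptions.

```python
import numpy as np

def tabular_to_image(row, img_size=8):
    """Map a feature vector onto a square grayscale image, zero-padded."""
    row = np.asarray(row, float)
    scaled = (row - row.min()) / (np.ptp(row) + 1e-9)  # normalize to [0, 1]
    img = np.zeros(img_size * img_size)
    img[: len(scaled)] = scaled  # one pixel per feature
    return img.reshape(img_size, img_size)

tab_channel = tabular_to_image([37.2, 80.0, 120.0, 5.4, 1.0, 0.0])
visual_channel = np.random.rand(8, 8)            # stand-in for a real image
fused = np.stack([visual_channel, tab_channel])  # shape (2, 8, 8) for a CNN
print(fused.shape)
```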
Procedia PDF Downloads 123
24840 Knowledge Discovery and Data Mining Techniques in Textile Industry
Authors: Filiz Ersoz, Taner Ersoz, Erkin Guler
Abstract:
This paper addresses issues and techniques in the textile industry using data mining. Data mining was applied to stitching data for garment products obtained from a textile company. The techniques applied included the CHAID algorithm, the CART algorithm, regression analysis, and artificial neural networks. Classification-based analyses were used to build a decision model for production per person and to identify the variables affecting production. The results show that as daily working time increases, production per person decreases. In addition, total daily working time and production per person are negatively related, and this pair shows the strongest (negative) relationship.
Keywords: data mining, textile production, decision trees, classification
Procedia PDF Downloads 349
24839 Effect of a Single Injection of hCG on Testosterone Concentration in Male Alpacas
Authors: A. ElZawam, D. McLean, A. Tibary
Abstract:
In alpacas, age at puberty is variable, and the factors regulating the pattern of puberty and sexual maturation are a subject of controversy. Plasma testosterone level is often used as an indicator of sexual maturity. Our hypothesis is that hCG treatment causes an increase in testosterone level that is correlated with animal age. The specific aim was to investigate the testicular tissue response to a single hCG injection by monitoring the serum testosterone concentration. Eighty-four (n=84) males ranging in age from 6 to 60 months were used. The alpacas were grouped by age into 15 groups of three to five males each. Blood samples were collected from the jugular vein before treatment with hCG and 2 hours after intravenous administration of 3000 IU of hCG (Chorulon®). The serum was harvested and stored at -20ºC until analysis. The effect of age on basal testosterone level and on the response to hCG treatment was evaluated by analysis of variance. Basal serum testosterone concentrations were very low (<0.1 ng/ml) until 9 months of age. Although basal serum testosterone concentrations increased steadily with age, there was significant variation among males within the same age group. Administration of 3000 IU of hCG resulted in an average increase of 50% (P<0.05) in serum testosterone concentration after 2 hours. The percentage increase in serum testosterone in response to hCG stimulation varied from 51 to 81%. There was no correlation between the degree of response and age; however, the response to hCG injection presented two modes of increase depending on the age of the animals. The first mode occurred at ages 9 to 14 months, and the second mode was observed between 22 and 36 months. In conclusion, our results suggest that testicular growth and sensitivity to LH stimulation may be bimodal in the male alpaca, with a rapid increase in growth and sensitivity between 9 and 14 months of age and a second phase of increased responsiveness after 21 months of age.
Keywords: alpaca, testosterone, hCG, animal science
Procedia PDF Downloads 570
24838 Investigation of Delivery of Triple Play Data in GE-PON Fiber to the Home Network
Authors: Ashima Anurag Sharma
Abstract:
Optical fiber based networks can deliver performance that supports the increasing demand for high speed connections. One of the new technologies that has emerged in recent years is the Passive Optical Network. This research paper demonstrates the simultaneous delivery of triple play services (data, voice, and video). A comparison between various data rates is presented. It is demonstrated that as the data rate increases, the number of supportable users decreases due to an increase in bit error rate.
Keywords: BER, PON, TDMPON, GPON, CWDM, OLT, ONT
Procedia PDF Downloads 527
24837 Microarray Gene Expression Data Dimensionality Reduction Using PCA
Authors: Fuad M. Alkoot
Abstract:
Different experimental technologies, such as microarray sequencing, have been proposed to generate high-resolution genetic data in order to understand the complex dynamic interactions between complex diseases and the biological system components of genes and gene products. However, the generated samples have a very large dimension, reaching thousands of features, hindering attempts to design a classifier system that can identify diseases based on such data. Additionally, the high overlap in the class distributions makes the task more difficult. The data we experiment with were generated for the identification of autism. The set includes 142 samples, which is small compared to the large dimension of the data. Classifier systems trained on this data yield very low classification rates, almost equivalent to a guess. We aim at reducing the data dimension and improving it for classification. Here, we experiment with applying a multistage PCA to the genetic data to reduce its dimensionality. Results show a significant improvement in the classification rates, which increases the possibility of building an automated system for autism detection.
Keywords: PCA, gene expression, dimensionality reduction, classification, autism
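"Multistage PCA" is not specified further in the abstract; one plausible reading, sketched below, reduces blocks of genes independently and then applies a second PCA to the concatenated block scores. The block count and component numbers are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(142, 5000))  # 142 samples, thousands of gene features

# Stage 1: reduce each block of genes independently.
blocks = np.array_split(np.arange(X.shape[1]), 10)
stage1 = np.hstack([PCA(n_components=10).fit_transform(X[:, b]) for b in blocks])

# Stage 2: reduce the concatenated block scores to the final dimension.
X_reduced = PCA(n_components=20).fit_transform(stage1)
print(X_reduced.shape)  # (142, 20)
```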
Procedia PDF Downloads 560
24836 Data Science-Based Key Factor Analysis and Risk Prediction of Diabetic
Authors: Fei Gao, Rodolfo C. Raga Jr.
Abstract:
This research proposal aims to ascertain the major risk factors for diabetes and to design a predictive model for risk assessment. The project seeks to improve early detection and management of diabetes by utilizing data science techniques, which may improve patient outcomes and healthcare efficiency. The phase relation values of each attribute were used to analyze and select the attributes that might influence the risk outcome, using the Diabetes Health Indicators Dataset from Kaggle as the research data. We compare and evaluate eight machine learning algorithms. Our investigation begins with comprehensive data preprocessing, including feature engineering and dimensionality reduction, aimed at enhancing data quality. The dataset, comprising health indicators and medical data, serves as a foundation for training and testing these algorithms. A rigorous cross-validation process is applied, and performance is assessed using five key metrics: accuracy, precision, recall, F1-score, and area under the receiver operating characteristic curve (AUC-ROC). After analyzing the data characteristics, we investigate their impact on the likelihood of diabetes and develop corresponding risk indicators.
Keywords: diabetes, risk factors, predictive model, risk assessment, data science techniques, early detection, data analysis, Kaggle
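A minimal sketch of the cross-validation step with the five metrics named above, using scikit-learn; the random forest and the synthetic stand-in data are assumptions, since the abstract does not fix a single model.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

# Synthetic stand-in for the Kaggle health-indicators data.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

scoring = ["accuracy", "precision", "recall", "f1", "roc_auc"]
cv = cross_validate(RandomForestClassifier(random_state=0), X, y,
                    cv=5, scoring=scoring)
for metric in scoring:
    print(metric, cv[f"test_{metric}"].mean().round(3))
```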
Procedia PDF Downloads 75
24835 A Methodology to Integrate Data in the Company Based on the Semantic Standard in the Context of Industry 4.0
Authors: Chang Qin, Daham Mustafa, Abderrahmane Khiat, Pierre Bienert, Paulo Zanini
Abstract:
Nowadays, companies face many challenges in the process of digital transformation, which can be a complex and costly undertaking. Digital transformation involves the collection and analysis of large amounts of data, which creates challenges around data management and governance, and integrating data from multiple systems and technologies is a further challenge. Despite these pains, companies still pursue digitalization, because by embracing advanced technologies they can improve efficiency, quality, decision-making, and customer experience, while also creating new business models and revenue streams. This paper focuses on the issue of data being stored in data silos with different schemas and structures. Conventional approaches to this issue involve data warehousing, data integration tools, data standardization, and business intelligence tools. However, these approaches primarily address the grammar and structure of the data and neglect the importance of semantic modeling and semantic standardization, which are essential for achieving data interoperability. In this work, the challenge of data silos in Industry 4.0 is addressed by developing a semantic modeling approach compliant with Asset Administration Shell (AAS) models, an efficient standard for communication in Industry 4.0. The paper highlights how our approach facilitates the data mapping process and semantic lifting according to existing industry standards such as ECLASS and other industrial dictionaries. It also incorporates Asset Administration Shell technology to model and map the company's data, and utilizes a knowledge graph for data storage and exploration.
Keywords: data interoperability in industry 4.0, digital integration, industrial dictionary, semantic modeling
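A minimal sketch of the semantic-lifting idea using rdflib: a raw company attribute is attached to an asset as a standardized property in a knowledge graph. The namespaces and the ECLASS-style property code are placeholders, not real identifiers from the standards.

```python
from rdflib import Graph, Literal, Namespace, RDF

# Placeholder namespaces; real AAS/ECLASS IRIs come from the standards.
AAS = Namespace("https://example.org/aas/")
ECLASS = Namespace("https://example.org/eclass/")

g = Graph()
motor = AAS["asset/motor-17"]
g.add((motor, RDF.type, AAS.AssetAdministrationShell))
# Lift the raw column "max_torque" to a standardized semantic property
# (the IRDI-style code below is a made-up example).
g.add((motor, ECLASS["0173-1#02-EXAMPLE#001"], Literal(42.5)))

print(g.serialize(format="turtle"))
```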
Procedia PDF Downloads 94
24834 Effect of Spontaneous Ripening and Drying Techniques on the Bioactive Activities Peel of Plantain (Musa paradisiaca) Fruit
Authors: Famuwagun A. A., Abiona O. O., Gbadamosi S.O., Adeboye O. A., Adebooye O. C.
Abstract:
The need to provide more information on the perceived bioactive status of plantain fruit peel informed the design of this research. Mature plantain fruits were harvested and allowed to ripen spontaneously. Samples of plantain fruit were taken every fortnight and their peels removed. The peels were dried using two different techniques (oven drying and sun drying) and milled into powders. Further samples were picked and processed in a similar manner on the first, third, seventh and tenth day, until the peels of the fruits were fully ripe, resulting in eight different samples. The antioxidative properties of the samples were evaluated using different assays (DPPH, FRAP, MCA, HRSA, SRSA, ABTS, ORAC), along with inhibitory activities against enzymes related to diabetes (alpha-amylase and alpha-glucosidase) and inhibition of angiotensin-converting enzyme (ACE). The results showed that peels of plantain fruits on the 7th day of ripening, sun dried, exhibited greater inhibition of free radicals, which enhanced their antioxidant activities and resulted in greater inhibition of the alpha-amylase and alpha-glucosidase enzymes. Also, the oven-dried sample of the peel on the 7th day of ripening had a greater phenolic content than the other samples, which resulted in higher inhibition of angiotensin-converting enzyme compared with the other samples. The results suggest that, even though the unripe peel of plantain fruit is assumed to have excellent bioactive activity, the fruit should be allowed to ripen for seven days after maturity and harvesting before the peel is consumed, so as to derive maximum benefit from it.
Keywords: functional ingredient, diabetics, hypertension, functional foods
Procedia PDF Downloads 51
24833 Experimental Study for Examination of Nature of Diffusion Process during Wine Microoxygenation
Authors: Ilirjan Malollari, Redi Buzo, Lorina Lici
Abstract:
This study characterizes the polyphenol changes of anthocyanins and flavonoids, the color intensity, the total polyphenol index, and the maturity and oxidation indices during the micro-oxygenation of wine from a specific geographic area in the southeastern region of the country. Also, through mathematical modeling of the oxygen distribution within the fermenting wort, the strong impact of the carbon dioxide present in the liquor was shown. Analytical results show periodic increases in color intensity and tonality, reduced levels of free anthocyanins and free flavonoids because of polycondensation reactions between tannins and anthocyanins, an increased total polyphenol index, and a decreased ratio between flavonoids and anthocyanins, yielding a stabilized red wine, as confirmed by sensory tasting for color intensity, tonality, body, tannic perception, taste and aftertaste, the last of which reflects the specific geographic area and its environmental indications. Micro-oxygenation is a wine-making technique that consists of adding small and controlled amounts of oxygen at different stages of wine production, most effectively after the end of alcoholic fermentation. The objectives of the process include improved mouthfeel (body and texture), enhanced color stability, increased oxidative stability, and decreased vegetative aroma during the polyphenol transformation process. A very important factor is the polyphenolic composition of the grape, which is strongly associated with the specific geographic environment in which the grape is grown.
Keywords: micro oxygenation, polyphenols, environment, wine stability, diffusion modeling
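As a rough illustration of the kind of oxygen-distribution modeling mentioned above, here is a one-dimensional Fickian diffusion sketch with an explicit finite-difference scheme; the diffusivity, column depth, and boundary condition are illustrative assumptions, and the model ignores the carbon dioxide counter-effect that the study highlights.

```python
import numpy as np

D = 2.1e-9              # m^2/s, assumed O2 diffusivity in a water-like liquid
depth, n, dt = 0.10, 100, 0.5
dx = depth / n
c = np.zeros(n)         # dissolved O2 profile, mg/L
c[0] = 8.0              # surface layer held at saturation by the O2 dose

for _ in range(200_000):  # explicit finite-difference time stepping
    c[1:-1] += D * dt / dx ** 2 * (c[2:] - 2 * c[1:-1] + c[:-2])
    c[0] = 8.0
print(c[:5].round(3))   # O2 has penetrated only the top layers
```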
Procedia PDF Downloads 210
24832 Big Data Analytics and Data Security in the Cloud via Fully Homomorphic Encryption
Authors: Waziri Victor Onomza, John K. Alhassan, Idris Ismaila, Noel Dogonyaro Moses
Abstract:
This paper describes the problem of building secure computational services for encrypted information in cloud computing without decrypting the encrypted data; it thereby addresses the aspiration for a computational encryption model that could enhance the security of big data with respect to the privacy, confidentiality, and availability of users. The cryptographic model applied for the computational processing of the encrypted data is the Fully Homomorphic Encryption Scheme. We contribute theoretical presentations of high-level computational processes based on number theory and algebra that can easily be integrated and leveraged in cloud computing, with detailed mathematical concepts underpinning the fully homomorphic encryption models. This contribution supports the full implementation of a big data analytics based cryptographic security algorithm.
Keywords: big data analytics, security, privacy, bootstrapping, homomorphic, homomorphic encryption scheme
Procedia PDF Downloads 379
24831 Protecting Privacy and Data Security in Online Business
Authors: Bilquis Ferdousi
Abstract:
With the exponential growth of online business, the threat to consumers' privacy and data security has become a serious challenge. This literature-review-based study focuses on developing a better understanding of those threats and of the legislative measures that have been taken to address them. Research shows that people are increasingly involved in online business using different digital devices and platforms, although this practice varies across age groups. The threat to consumers' privacy and data security is a serious hindrance to developing trust in online businesses. Some legislative measures have been taken at the federal and state level to protect consumers' privacy and data security. The study is based on an extensive review of the current literature on protecting consumers' privacy and data security and on the legislative measures that have been taken.
Keywords: privacy, data security, legislation, online business
Procedia PDF Downloads 106
24830 Flowing Online Vehicle GPS Data Clustering Using a New Parallel K-Means Algorithm
Authors: Orhun Vural, Oguz Bayat, Rustu Akay, Osman N. Ucan
Abstract:
This study presents a new parallel approach to clustering GPS data. The evaluation was made by comparing the execution times of various clustering algorithms on GPS data. This paper proposes a parallel, neighborhood-based K-means algorithm to make clustering faster. The proposed parallelization approach assumes that each GPS data point represents a vehicle, and that vehicles close to each other communicate after they are clustered. The approach was examined on continuously changing GPS data of different sizes and compared with the serial K-means algorithm and other serial clustering algorithms. The results demonstrate that the proposed parallel K-means algorithm works much faster than the other clustering algorithms.
Keywords: parallel k-means algorithm, parallel clustering, clustering algorithms, clustering on flowing data
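The paper's neighborhood-communication scheme is not reproduced here; the sketch below parallelizes only the assignment step of K-means over chunks of GPS points with joblib, which is the part that dominates execution time.

```python
import numpy as np
from joblib import Parallel, delayed

def assign_chunk(chunk, centroids):
    """Nearest-centroid label for each GPS point in a chunk."""
    d = np.linalg.norm(chunk[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

def parallel_kmeans(points, k=5, n_iter=10, n_jobs=4, seed=0):
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(n_iter):
        chunks = np.array_split(points, n_jobs)
        labels = np.concatenate(Parallel(n_jobs=n_jobs)(
            delayed(assign_chunk)(c, centroids) for c in chunks))
        # Update step; keep the old centroid if a cluster goes empty.
        centroids = np.array([points[labels == j].mean(axis=0)
                              if np.any(labels == j) else centroids[j]
                              for j in range(k)])
    return labels, centroids

# Synthetic latitude/longitude points standing in for vehicle GPS data.
gps = np.random.default_rng(1).uniform([40.9, 28.8], [41.2, 29.3], (10_000, 2))
labels, centers = parallel_kmeans(gps)
print(centers)
```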
Procedia PDF Downloads 221
24829 An Analysis of Privacy and Security for Internet of Things Applications
Authors: Dhananjay Singh, M. Abdullah-Al-Wadud
Abstract:
The Internet of Things is a concept of a large-scale ecosystem of wireless actuators. The actuators are defined as the things in the IoT: those that contribute or produce data for the ecosystem. However, ubiquitous data collection, data security, privacy preservation, large-volume data processing, and intelligent analytics are some of the key challenges of IoT technologies. To address the security requirements, challenges, and threats in the IoT, we discuss a message authentication mechanism for IoT applications. Finally, we discuss a data encryption mechanism for authenticating messages before they propagate through IoT networks.
Keywords: Internet of Things (IoT), message authentication, privacy, security
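A minimal sketch of the kind of message authentication the abstract discusses, using an HMAC-SHA256 tag from the Python standard library; the shared-key provisioning and the message fields are assumptions.

```python
import hashlib
import hmac
import json

SECRET = b"device-shared-key"  # assumed to be provisioned to the device

def sign_message(payload: dict) -> dict:
    """Attach an HMAC-SHA256 tag so the receiver can verify authenticity."""
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def verify_message(msg: dict) -> bool:
    body = json.dumps(msg["payload"], sort_keys=True).encode()
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, msg["tag"])

msg = sign_message({"sensor": "temp-07", "value": 21.4})
print(verify_message(msg))  # True; any tampering flips this to False
```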
Procedia PDF Downloads 382