Search results for: linked data
25139 Big Data in Telecom Industry: Effective Predictive Techniques on Call Detail Records
Authors: Sara ElElimy, Samir Moustafa
Abstract:
Mobile network operators have started to face many challenges in the digital era, especially with high demands from customers. Since mobile network operators are considered a source of big data, traditional techniques are not effective in the new era of big data, the Internet of things (IoT), and 5G; as a result, handling different big datasets effectively becomes a vital task for operators with the continuous growth of data and the move from long term evolution (LTE) to 5G. There is therefore an urgent need for effective big data analytics to predict future demands, traffic, and network performance and so fulfill the requirements of the fifth generation of mobile network technology. In this paper, we introduce data science techniques using machine learning and deep learning algorithms: the autoregressive integrated moving average (ARIMA), Bayesian-based curve fitting, and a recurrent neural network (RNN) are employed for a data-driven application for mobile network operators. The main framework of the models comprises identification of each model's parameters, estimation, prediction, and a final data-driven application of this prediction to business and network performance use cases. These models are applied to the Telecom Italia Big Data challenge call detail records (CDRs) datasets. The performance of these models, assessed with well-known evaluation criteria, shows that ARIMA (the machine learning-based model) is more accurate as a predictive model on this dataset than the RNN (the deep learning model).
Keywords: big data analytics, machine learning, CDRs, 5G
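As a rough illustration of the time-series side of this abstract, the sketch below fits an AR(1) predictor, the simplest special case of ARIMA, to invented hourly call counts. It is not the paper's full ARIMA/RNN pipeline, and the data are not the Telecom Italia CDRs.

```python
# Minimal AR(1) sketch (ARIMA(1,0,0) special case); all data are invented.
def ar1_fit(series):
    """Estimate the mean and AR(1) coefficient via the lag-1 autocovariance."""
    mean = sum(series) / len(series)
    num = sum((series[t] - mean) * (series[t - 1] - mean)
              for t in range(1, len(series)))
    den = sum((x - mean) ** 2 for x in series)
    return mean, num / den

def ar1_forecast(series, steps=1):
    """Iterate the one-step-ahead AR(1) prediction `steps` times."""
    mean, phi = ar1_fit(series)
    last, out = series[-1], []
    for _ in range(steps):
        last = mean + phi * (last - mean)   # regression toward the series mean
        out.append(last)
    return out

calls = [120, 130, 125, 140, 150, 145, 160, 170, 165, 180]  # toy call counts
print(ar1_forecast(calls, steps=2))
```

A production version would also difference the series and select orders (p, d, q) by an information criterion, which is what the full ARIMA procedure adds over this sketch.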
Procedia PDF Downloads 138
25138 A Data Mining Approach for Analysing and Predicting the Bank's Asset Liability Management Based on Basel III Norms
Authors: Nidhin Dani Abraham, T. K. Sri Shilpa
Abstract:
Asset liability management is an important aspect of the banking business. Moreover, today's banking is governed by Basel III, which strictly regulates counterparty default. This paper focuses on the prediction and analysis of counterparty default risk, a type of risk that occurs when customers fail to repay the amount owed to the lender (a bank or other financial institution). This paper proposes an approach to reduce the counterparty risk occurring in financial institutions using an appropriate data mining technique, and thus predicts the occurrence of non-performing assets (NPAs). It also helps in asset building and improving restructuring quality. Liability management is very important to carrying out banking business. To know and analyze the depth of a bank's liability, a suitable technique is required, so a data mining technique is used to predict the dormant behaviour of various deposit customers. Various models are implemented and the results for savings bank deposit customers are analyzed. All these data are cleaned using a data cleansing approach from the bank data warehouse.
Keywords: data mining, asset liability management, BASEL III, banking
Procedia PDF Downloads 550
25137 A Dynamic Ensemble Learning Approach for Online Anomaly Detection in Alibaba Datacenters
Authors: Wanyi Zhu, Xia Ming, Huafeng Wang, Junda Chen, Lu Liu, Jiangwei Jiang, Guohua Liu
Abstract:
Anomaly detection is the first and an imperative step needed to respond to unexpected problems and to assure high performance and security in large data center management. This paper presents an online anomaly detection system built on an innovative combination of ensemble machine learning and adaptive differentiation algorithms, applied to performance data collected from a continuous monitoring system for multi-tier web applications running in Alibaba data centers. We evaluate the effectiveness and efficiency of this algorithm with production traffic data and compare it with traditional anomaly detection approaches such as static thresholds and other deviation-based detection techniques. The experimental results show that our algorithm correctly identifies unexpected performance variances of any running application, with an acceptable false positive rate. The proposed approach has already been deployed in real-time production environments to enhance the efficiency and stability of daily data center operations.
Keywords: Alibaba data centers, anomaly detection, big data computation, dynamic ensemble learning
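To make the contrast with the baselines concrete, here is a toy ensemble that combines the two traditional detectors the abstract mentions: a static threshold and a rolling z-score (deviation-based) test. The adaptive weighting of the Alibaba system is not reproduced, and every threshold below is an assumption.

```python
# Toy ensemble of a static-threshold and a deviation-based detector.
# All limits and window sizes are invented for illustration.
from statistics import mean, stdev

def static_detector(x, limit=100.0):
    """Classic fixed-threshold rule."""
    return x > limit

def zscore_detector(window, x, k=3.0):
    """Deviation-based rule: flag points more than k sigmas from the window mean."""
    if len(window) < 2:
        return False
    m, s = mean(window), stdev(window)
    return s > 0 and abs(x - m) > k * s

def ensemble_detect(stream, limit=100.0, k=3.0, window_size=20):
    window, flags = [], []
    for x in stream:
        votes = static_detector(x, limit) + zscore_detector(window, x, k)
        flags.append(votes >= 1)              # flag if any detector fires
        window = (window + [x])[-window_size:]
    return flags

metrics = [50.0] * 30 + [500.0]               # flat latency, then a spike
print(ensemble_detect(metrics)[-1])
```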
Procedia PDF Downloads 198
25136 Unsupervised Text Mining Approach to Early Warning System
Authors: Ichihan Tai, Bill Olson, Paul Blessner
Abstract:
Traditional early warning systems that alarm against crisis are generally based on structured or numerical data; therefore, a system that can make predictions based on unstructured textual data, an uncorrelated data source, is a great complement to them. The Chicago Board Options Exchange (CBOE) Volatility Index (VIX), commonly referred to as the fear index, measures the cost of insurance against a market crash and spikes in the event of crisis. In this study, news data is used to predict whether there will be a market-wide crisis by predicting the movement of the fear index, and historical references to similar events are presented in an unsupervised manner. Topic modeling-based prediction and representation are built from daily news data between 1990 and 2015 from The Wall Street Journal against VIX index data from the CBOE.
Keywords: early warning system, knowledge management, market prediction, topic modeling
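The retrieval of "historical references to similar events" can be illustrated with a much simpler stand-in than topic modeling: bag-of-words cosine similarity over headlines. The headlines below are invented, and real topic models (e.g. LDA) compare documents in a latent topic space rather than raw word counts.

```python
# Simplified stand-in for topic-based retrieval: nearest historical headline
# by bag-of-words cosine similarity. All headlines are invented examples.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def most_similar(query, history):
    """Return the historical text most similar to the query."""
    q = Counter(query.lower().split())
    scored = [(cosine(q, Counter(h.lower().split())), h) for h in history]
    return max(scored)[1]

history = [
    "bank failures spread fear across credit markets",
    "strong earnings lift technology shares",
    "central bank cuts rates to calm markets",
]
print(most_similar("credit fear hits bank shares", history))
```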
Procedia PDF Downloads 335
25135 The Role of Synthetic Data in Aerial Object Detection
Authors: Ava Dodd, Jonathan Adams
Abstract:
The purpose of this study is to explore the characteristics of developing a machine learning application using synthetic data. The study is structured around developing the application so that the computer vision model can be deployed. The findings discuss the realities of attempting to develop a computer vision model for a practical purpose and detail the processes, tools, and techniques that were used to meet accuracy requirements. The research reveals that synthetic data represents another variable that can be adjusted to improve the performance of a computer vision model. Further, a suite of tools and tuning recommendations are provided.
Keywords: computer vision, machine learning, synthetic data, YOLOv4
Procedia PDF Downloads 221
25134 Perception-Oriented Model Driven Development for Designing Data Acquisition Process in Wireless Sensor Networks
Authors: K. Indra Gandhi
Abstract:
Wireless Sensor Networks (WSNs) have always been characterized by application-specific sensing, relaying, and collection of information for further analysis. However, software development has not been considered a separate entity in this process of data collection, which has posed severe limitations on software development for WSNs. Software development for WSNs is a complex process since the components involved are data-driven, network-driven, and application-driven in nature. This implies a tremendous need for separation of concerns from the software development perspective. A layered approach for developing the data acquisition design based on Model Driven Development (MDD) is proposed, as the sensed data collection process itself varies depending upon the application under consideration. This work focuses on a layered view of the data acquisition process so as to ease software development. A metamodel is proposed that enables reusability and realization of the software as an adaptable component for WSN systems. Further, observed user perception indicates that the proposed model helps improve the programmer's productivity by realizing the collaborative system involved.
Keywords: data acquisition, model-driven development, separation of concern, wireless sensor networks
Procedia PDF Downloads 434
25133 Comparative Analysis of Data Gathering Protocols with Multiple Mobile Elements for Wireless Sensor Network
Authors: Bhat Geetalaxmi Jairam, D. V. Ashoka
Abstract:
Wireless sensor networks are used in many applications to collect sensed data from different sources. Sensed data have to be delivered through the sensors' wireless interface using multi-hop communication towards the sink. Data collection in wireless sensor networks consumes energy, and energy consumption is the major constraint in WSNs; reducing energy consumption while increasing the amount of generated data is a great challenge. In this paper, we have implemented two data gathering protocols with multiple mobile sinks/elements to collect data from sensor nodes. The first is Energy-Efficient Data Gathering with Tour Length-Constrained Mobile Elements in Wireless Sensor Networks (EEDG), in which mobile sinks use a vehicle routing protocol to collect data. The second is An Intelligent Agent-based Routing Structure for Mobile Sinks in WSNs (IAR), in which mobile sinks use Prim's algorithm to collect data. We have implemented the concepts common to both protocols, such as deployment of mobile sinks, generation of the visiting schedule, and collection of data from the cluster members, and compared the performance of both protocols using statistics on parameters such as delay, packet drop, packet delivery ratio, energy available, and control overhead. We conclude that EEDG is more efficient than the IAR protocol, but with a few limitations, including unaddressed issues like redundancy removal, idle listening, and the mobile sink's pause/wait state at a node. In future work, we plan to concentrate on these limitations to arrive at a new energy-efficient protocol that will help improve the lifetime of the WSN.
Keywords: aggregation, consumption, data gathering, efficiency
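Since the IAR protocol is described as building its routing structure with Prim's algorithm, a generic Prim's minimum spanning tree over sensor coordinates illustrates that building block. The node positions are invented, and this is only the MST step, not the protocol itself.

```python
# Generic Prim's MST over a fully connected Euclidean graph of sensor nodes.
# Node coordinates are invented; this is not the IAR protocol itself.
import math

def prim_mst(points):
    """Return MST edges (i, j) connecting all points, grown from node 0."""
    n = len(points)
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        best = None
        for i in in_tree:                      # cheapest edge leaving the tree
            for j in range(n):
                if j in in_tree:
                    continue
                d = math.dist(points[i], points[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        edges.append((best[1], best[2]))
        in_tree.add(best[2])
    return edges

nodes = [(0, 0), (1, 0), (0, 1), (5, 5)]       # toy sensor positions
print(prim_mst(nodes))
```

The O(n^2) scan per step is fine for small clusters; a heap-based Prim would be used for large networks.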
Procedia PDF Downloads 497
25132 A Study of Topical and Similarity of Sebum Layer Using Interactive Technology in Image Narratives
Authors: Chao Wang
Abstract:
Under the rapid innovation of information technology, the media play a very important role in the dissemination of information, and each generation faces a totally different analogy. The involvement of narrative images, however, provides more possibilities for narrative text. "Images," manufactured through the processes of aperture, camera shutter, and photosensitive development, are recorded and stamped on paper or displayed on a computer screen, concretely saved. They exist in different forms as files, data, or evidence of the ultimate look of events. Through the interface of media and network platforms and the special visual field of the viewer, a body space exists and extends out as thin as a sebum layer, extremely soft and delicate yet with real, full tension. The physical space of the sebum layer confuses the fact that physical objects exist and needs to be established under a perceived consensus; as at the scene, the existing concepts and boundaries of physical perception are blurred. The physical simulation of the sebum layer shapes an immersive "topical similarity," leading contemporary social practice communities, groups, and network users into a kind of illusion without presence, i.e., a non-real illusion. From an investigation and discussion of the literature, the variability characteristics of time produced by digital movie editing (for example, slices, rupture, set, and reset) are analyzed. Interactive eBooks have a unique interaction of "waiting-greeting" and "expectation-response" that gives the operation of image narrative structure more interpretive functions. Works of digital editing and interactive technology are combined, and their concepts and results are further analyzed. After the digitization of interventional imaging and interactive technology, real events remain linked, and the media handling cannot cut this relationship; this is discussed through movies, interactive art, and practical case analysis. Audiences need more rational thinking about the authenticity of the text carried by images.
Keywords: sebum layer, topical and similarity, interactive technology, image narrative
Procedia PDF Downloads 388
25131 Status and Results from EXO-200
Authors: Ryan Maclellan
Abstract:
EXO-200 has provided one of the most sensitive searches for neutrinoless double-beta decay, utilizing 175 kg of enriched liquid xenon in an ultra-low background time projection chamber. This detector has demonstrated excellent energy resolution and background rejection capabilities. Using the first two years of data, EXO-200 has set a limit of 1.1x10^25 years at 90% C.L. on the neutrinoless double-beta decay half-life of Xe-136. The experiment has experienced a brief hiatus in data taking during a temporary shutdown of its host facility, the Waste Isolation Pilot Plant. EXO-200 expects to resume data taking in earnest this fall with upgraded detector electronics. Results from the analysis of EXO-200 data and an update on the current status of EXO-200 will be presented.
Keywords: double-beta, Majorana, neutrino, neutrinoless
Procedia PDF Downloads 413
25130 Remaining Useful Life (RUL) Assessment Using Progressive Bearing Degradation Data and ANN Model
Authors: Amit R. Bhende, G. K. Awari
Abstract:
Remaining useful life (RUL) prediction is one of the key technologies for realizing prognostics and health management, which is being widely applied in many industrial systems to ensure high system availability over their life cycles. The present work proposes a data-driven method of RUL prediction based on multiple health state assessment for rolling element bearings. Bearing degradation data under three different conditions, from run to failure, are used. An RUL prediction model is built separately for each condition. Feed-forward back-propagation neural network models are developed for prediction modeling.
Keywords: bearing degradation data, remaining useful life (RUL), back propagation, prognosis
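The paper trains feed-forward back-propagation networks; as a lightweight illustration of the underlying idea of data-driven RUL estimation, the sketch below instead extrapolates a linear least-squares fit of a bearing health index to a failure threshold. The data and threshold are invented, and a linear trend is a deliberate simplification of the ANN approach.

```python
# RUL by trend extrapolation (simplified stand-in for the paper's ANN models).
# Hours, vibration values, and the failure level are all invented.
def linear_fit(xs, ys):
    """Ordinary least-squares line; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

def rul_estimate(hours, health_index, failure_level):
    """Extrapolate the fitted trend to the failure threshold."""
    slope, intercept = linear_fit(hours, health_index)
    t_fail = (failure_level - intercept) / slope   # time when index hits threshold
    return t_fail - hours[-1]                      # remaining hours from now

hours = [0, 10, 20, 30, 40]
vibration = [1.0, 1.2, 1.4, 1.6, 1.8]   # rising RMS vibration amplitude (toy)
print(rul_estimate(hours, vibration, failure_level=3.0))
```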
Procedia PDF Downloads 434
25129 Spatio-Temporal Data Mining with Association Rules for Lake Van
Authors: Tolga Aydin, M. Fatih Alaeddinoğlu
Abstract:
People, throughout history, have made estimates and inferences about the future by using their past experiences. Developing information technologies and improvements in database management systems make it possible to extract useful information from the knowledge in hand for strategic decisions, and different methods have been developed for this purpose. Data mining by association rule learning is one such method. The Apriori algorithm, one of the well-known association rule learning algorithms, is not commonly used on spatio-temporal data sets. However, it is possible to embed time and space features into the data sets and make the Apriori algorithm a suitable data mining technique for learning spatio-temporal association rules. Lake Van, the largest lake of Turkey, is a closed basin. This feature causes the volume of the lake to increase or decrease as a result of changes in the amount of water it holds. In this study, evaporation, humidity, lake altitude, amount of rainfall, and temperature parameters recorded in the Lake Van region throughout the years are used by the Apriori algorithm, and a spatio-temporal data mining application is developed to identify overflows and newly formed soil regions (underflows) occurring in the coastal parts of Lake Van. Identifying possible reasons for overflows and underflows may alert the experts to take precautions and make the necessary investments.
Keywords: apriori algorithm, association rules, data mining, spatio-temporal data
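The core Apriori pass over discretized records can be sketched briefly. The transactions below are invented stand-ins for the embedded spatio-temporal features (discretized rainfall, evaporation, lake level for one coastal cell), not the actual Lake Van data.

```python
# Minimal Apriori frequent-itemset miner; transactions are invented,
# discretized spatio-temporal records, not the Lake Van dataset.
from itertools import combinations

def apriori(transactions, min_support):
    """Return frequent itemsets as a dict {frozenset: support}."""
    n = len(transactions)
    candidates = [frozenset([i])
                  for i in sorted({i for t in transactions for i in t})]
    freq, k = {}, 1
    while candidates:
        counts = {c: sum(1 for t in transactions if c <= t) for c in candidates}
        level = {c: cnt / n for c, cnt in counts.items() if cnt / n >= min_support}
        freq.update(level)
        k += 1
        # join frequent (k-1)-itemsets whose union has exactly k items
        candidates = list({a | b for a, b in combinations(level, 2)
                           if len(a | b) == k})
    return freq

transactions = [
    frozenset({"rain_high", "evap_low", "level_rise"}),
    frozenset({"rain_high", "level_rise"}),
    frozenset({"rain_low", "evap_high", "level_fall"}),
    frozenset({"rain_high", "evap_low", "level_rise"}),
]
frequent = apriori(transactions, min_support=0.5)
for itemset, support in sorted(frequent.items(), key=lambda p: -p[1]):
    print(sorted(itemset), support)
```

Rules such as {rain_high} => {level_rise} would then be read off the frequent itemsets by computing confidence ratios.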
Procedia PDF Downloads 372
25128 Building Data Infrastructure for Public Use and Informed Decision Making in Developing Countries-Nigeria
Authors: Busayo Fashoto, Abdulhakeem Shaibu, Justice Agbadu, Samuel Aiyeoribe
Abstract:
Data has gone from just rows and columns to being an infrastructure itself. Traditional data infrastructure has been managed by individuals in different industries and saved on personal work tools such as laptops. This hinders data sharing and Sustainable Development Goal (SDG) 9 on infrastructure sustainability across all countries and regions. However, there has been a constant demand for data across different agencies and ministries by investors and decision-makers. The rapid development and adoption of open-source technologies that promote the collection and processing of data in new ways and in ever-increasing volumes are creating new data infrastructure in sectors such as lands and health, among others. This paper examines the process of developing data infrastructure and, by extension, a data portal to provide baseline data for sustainable development and decision making in Nigeria. It employs the FAIR principles (Findable, Accessible, Interoperable, and Reusable) of data management, using open-source technology tools to develop data portals for public use. eHealth Africa, an organization that uses technology to drive public health interventions in Nigeria, developed a data portal, a typical data infrastructure that serves as a repository for various datasets on administrative boundaries, points of interest, settlements, social infrastructure, amenities, and others. This portal makes it possible for users to access datasets of interest at any point in time at no cost. The skeletal infrastructure of this data portal is built on open-source technologies such as a Postgres database, GeoServer, GeoNetwork, and CKAN. These tools make the infrastructure sustainable, thus promoting the achievement of SDG 9 (Industries, Innovation, and Infrastructure). As of 6th August 2021, 8192 users had been registered, 2262 datasets had been downloaded, and 817 maps had been created on the platform. This paper shows how the rapid development and adoption of technologies facilitates data collection, processing, and publishing in new ways and in ever-increasing volumes. In addition, it details new data infrastructure in sectors such as health, social amenities, and agriculture, and reveals the importance of cross-sectional data infrastructure for planning and decision making, which in turn can form a central data repository for sustainable development across developing countries.
Keywords: data portal, data infrastructure, open source, sustainability
Procedia PDF Downloads 97
25127 Process Data-Driven Representation of Abnormalities for Efficient Process Control
Authors: Hyun-Woo Cho
Abstract:
Unexpected operational events or abnormalities in industrial processes have a serious impact on the quality of the final product of interest. In terms of statistical process control, fault detection and diagnosis is one of the essential tasks needed to run a process safely. In this work, a nonlinear representation of process measurement data is presented and evaluated using a simulated process. The effect of using different representation methods on diagnosis performance is tested in terms of computational efficiency and data handling. The results show that the nonlinear representation technique produced more reliable diagnosis results and outperforms linear methods. The use of a data filtering step improved computational speed and diagnosis performance for the test data sets. The presented scheme differs from existing ones in that it attempts to extract the fault pattern in the reduced space, not in the original process variable space, which helps reduce the sensitivity of empirical models to noise.
Keywords: fault diagnosis, nonlinear technique, process data, reduced spaces
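Why a nonlinear representation can beat a linear one is easy to show on a toy case: when faults lie at a large radius around normal operation, no single linear projection separates them, but a simple nonlinear feature (the squared radius) does. This toy is only an illustration of the principle; the paper's actual representation method and simulation process are not reproduced.

```python
# Toy illustration: a nonlinear feature (squared radius) separates faults
# that no linear projection of these 2-D measurements would separate.
# Samples and the threshold are invented.
def radius_feature(sample):
    """Nonlinear representation of a 2-D process measurement."""
    x, y = sample
    return x * x + y * y

def diagnose(samples, limit=1.0):
    """Flag samples whose nonlinear feature exceeds the fault threshold."""
    return [radius_feature(s) > limit for s in samples]

normal = [(0.1, 0.2), (-0.3, 0.1), (0.2, -0.2)]   # near the operating point
faulty = [(1.5, 0.1), (-0.2, 1.4), (1.0, -1.0)]   # large deviations, any direction
print(diagnose(normal + faulty))
```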
Procedia PDF Downloads 245
25126 Text-to-Speech in Azerbaijani Language via Transfer Learning in a Low Resource Environment
Authors: Dzhavidan Zeinalov, Bugra Sen, Firangiz Aslanova
Abstract:
Most text-to-speech models cannot operate well in low-resource languages and require a great amount of high-quality training data to be considered good enough. Yet, with the improvements made in ASR systems, it is now much easier than ever to collect data for the design of custom text-to-speech models. In this work, we outline our use of an ASR model to collect data and build a viable text-to-speech system for one of the leading financial institutions of Azerbaijan. NVIDIA's implementation of the Tacotron 2 model was utilized along with the HiFiGAN vocoder. As for training, the model was first trained with high-quality audio data collected from the Internet, then fine-tuned on the bank's single-speaker call center data. The results were evaluated by 50 different listeners and received a mean opinion score of 4.17, showing that our method is indeed viable. With this, we have successfully designed the first text-to-speech model in Azerbaijani and publicly shared 12 hours of audiobook data for everyone to use.
Keywords: Azerbaijani language, HiFiGAN, Tacotron 2, text-to-speech, transfer learning, whisper
Procedia PDF Downloads 43
25125 An Empirical Evaluation of Performance of Machine Learning Techniques on Imbalanced Software Quality Data
Authors: Ruchika Malhotra, Megha Khanna
Abstract:
The development of change prediction models can help software practitioners in planning testing and inspection resources at early phases of software development. However, a major challenge faced during the training process of any classification model is the imbalanced nature of software quality data. A dataset with very few instances of the minority outcome categories leads to an inefficient learning process, and a classification model developed from imbalanced data generally does not predict these minority categories correctly. Thus, for a given dataset, a minority of classes may be change prone whereas a majority of classes may be non-change prone. This study explores various alternatives for adeptly handling imbalanced software quality data using different sampling methods and effective MetaCost learners. The study also analyzes and justifies the use of different performance metrics when dealing with imbalanced data. In order to empirically validate the different alternatives, the study uses change data from three application packages of an open-source Android data set and evaluates the performance of six different machine learning techniques. The results of the study indicate extensive improvement in the performance of the classification models when using resampling methods and robust performance measures.
Keywords: change proneness, empirical validation, imbalanced learning, machine learning techniques, object-oriented metrics
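One of the simplest resampling remedies for such imbalance is random oversampling of the minority class, sketched below on invented rows. The paper evaluates several sampling methods and MetaCost cost-sensitive learners, none of which are reproduced here.

```python
# Random oversampling: duplicate minority-class rows until classes balance.
# The feature rows and labels below are invented.
import random

def oversample(features, labels, seed=0):
    """Return a class-balanced copy of (features, labels)."""
    rng = random.Random(seed)
    by_class = {}
    for f, y in zip(features, labels):
        by_class.setdefault(y, []).append(f)
    target = max(len(rows) for rows in by_class.values())
    out_f, out_y = [], []
    for y, rows in by_class.items():
        picks = rows + [rng.choice(rows) for _ in range(target - len(rows))]
        out_f.extend(picks)
        out_y.extend([y] * target)
    return out_f, out_y

X = [[1], [2], [3], [4], [5], [6], [7], [8], [9]]   # toy metric vectors
y = [0, 0, 0, 0, 0, 0, 0, 1, 1]                     # 1 = change prone (minority)
Xb, yb = oversample(X, y)
print(yb.count(0), yb.count(1))
```

Oversampling is applied only to the training split; evaluating on resampled data would inflate the robust performance measures the study argues for.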
Procedia PDF Downloads 418
25124 Hydrogel Based on Cellulose Acetate Used as Scaffold for Cell Growth
Authors: A. Maria G. Melero, A. M. Senna, J. A. Domingues, M. A. Hausen, E. Aparecida R. Duek, V. R. Botaro
Abstract:
A hydrogel of cellulose acetate cross-linked with ethylenediaminetetraacetic dianhydride (HAC-EDTA) was synthesized by our research group and submitted to characterization and biological tests. Cytocompatibility analysis was performed by confocal microscopy using human adipose-derived stem cells (ASCs). The FTIR analysis showed characteristic bands of cellulose acetate and hydroxyl groups, and the tensile tests show that HAC-EDTA presents a Young's modulus of 643.7 MPa. The confocal analysis revealed cell growth at the surface of HAC-EDTA. After one day of culture the cells presented spherical morphology, which may be caused by stress from the sequestration of Ca2+ and Mg2+ ions in the cell medium by HAC-EDTA, as demonstrated by ICP-MS. However, after seven days and 14 days of culture, the cells presented fibroblastoid morphology, the phenotype expected for this cell type. The results indicate this new material as a potential biomaterial for tissue engineering, with a future in vivo approach.
Keywords: cellulose acetate, hydrogel, biomaterial, cellular growth
Procedia PDF Downloads 193
25123 Correlation between the Ratios of House Dust Mite-Specific IgE/Total IgE and Asthma Control Test Score as a Biomarker of Immunotherapy Response Effectiveness in Pediatric Allergic Asthma Patients
Authors: Bela Siska Afrida, Wisnu Barlianto, Desy Wulandari, Ery Olivianto
Abstract:
Background: Allergic asthma, caused by IgE-mediated allergic reactions, remains a global health issue with high morbidity and mortality rates. Immunotherapy is the only etiology-based approach to treating asthma, but no standard biomarkers have been established to evaluate the therapy's effectiveness. This study aims to determine the correlation between the ratio of serum HDM-specific IgE to total IgE and the Asthma Control Test (ACT) score as a biomarker of the response to immunotherapy in pediatric allergic asthma patients. Patients and Methods: This retrospective cohort study involved 26 pediatric allergic asthma patients who underwent HDM-specific subcutaneous immunotherapy for 14 weeks at the Pediatric Allergy Immunology Outpatient Clinic at Saiful Anwar General Hospital, Malang. Serum levels of HDM-specific IgE and total IgE were measured before and after immunotherapy using chemiluminescence immunoassay and the enzyme-linked immunosorbent assay (ELISA) method. Changes in asthma control were assessed using the ACT score. The Wilcoxon signed rank test and Spearman correlation test were used for data analysis. Results: There were 14 boys and 12 girls with a mean age of 6.48 ± 2.54 years. Serum HDM-specific IgE levels decreased significantly from 9.88 ± 5.74 kuA/L before immunotherapy to 4.51 ± 3.98 kuA/L 14 weeks after immunotherapy (p = 0.000). Serum total IgE levels decreased significantly from 207.6 ± 120.8 IU/mL to 109.83 ± 189.39 IU/mL (p = 0.000), and the ratio of serum HDM-specific IgE/total IgE decreased significantly from 0.063 ± 0.05 to 0.041 ± 0.039 (p = 0.012). There was also a significant increase in ACT scores before and after immunotherapy (15.5 ± 1.79 and 20.96 ± 2.049, respectively; p = 0.000). The correlation test showed a weak negative correlation between the HDM-specific IgE/total IgE ratio and the ACT score (p = 0.034, r = -0.29). Conclusion: This study showed that a decrease in HDM-specific IgE levels, total IgE levels, and the HDM-specific IgE/total IgE ratio, and an increase in ACT score, were observed after 14 weeks of HDM-specific subcutaneous immunotherapy. The weak negative correlation between the HDM-specific IgE/total IgE ratio and the ACT score suggests that this ratio can serve as a potential biomarker of the effectiveness of immunotherapy in pediatric allergic asthma patients.
Keywords: HDM-specific IgE/total IgE ratio, ACT score, immunotherapy, allergic asthma
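The Spearman coefficient reported here (r = -0.29) is a rank correlation; the helper below computes it from scratch with tie-averaged ranks. The values are invented, perfectly monotone toy data, deliberately untied to the study's measurements.

```python
# Generic Spearman rank correlation with averaged ranks for ties.
# The ratio/ACT values below are invented toy data, not the study's data.
def ranks(values):
    """Ranks starting at 1, averaging over tied values."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Pearson correlation of the two rank vectors."""
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

ratio = [0.09, 0.07, 0.05, 0.04, 0.02]   # toy IgE ratios
act   = [14, 17, 18, 21, 23]             # toy ACT scores
print(spearman(ratio, act))              # perfectly monotone toy data gives -1.0
```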
Procedia PDF Downloads 65
25122 Variance-Aware Routing and Authentication Scheme for Harvesting Data in Cloud-Centric Wireless Sensor Networks
Authors: Olakanmi Oladayo Olufemi, Bamifewe Olusegun James, Badmus Yaya Opeyemi, Adegoke Kayode
Abstract:
The wireless sensor network (WSN) has made a significant contribution to the emergence of various intelligent services and cloud-based applications. Most of the time, these data are stored on a cloud platform for efficient management and sharing among different services or users. However, the sensitivity of the data makes them prone to various confidentiality- and performance-related attacks during and after harvesting. Various security schemes have been developed to ensure the integrity and confidentiality of WSN data, but their specificity towards particular attacks and the resource constraints and heterogeneity of WSNs make most of these schemes imperfect. In this paper, we propose a secure variance-aware routing and authentication scheme with two-tier verification to collect, share, and manage WSN data. The scheme is capable of classifying a WSN into different subnets, detecting any attempt at a wormhole or black hole attack during harvesting, and enforcing access control on the harvested data stored in the cloud. The results of the analysis showed that the proposed scheme has more security functionalities than other related schemes, solves most WSN and cloud security issues, prevents wormhole and black hole attacks, identifies the attackers during data harvesting, and enforces access control on the harvested data stored in the cloud at low computational, storage, and communication overheads.
Keywords: data block, heterogeneous IoT network, data harvesting, wormhole attack, black hole attack, access control
Procedia PDF Downloads 83
25121 Quality of Age Reporting from Tanzania 2012 Census Results: An Assessment Using Whipple’s Index, Myer’s Blended Index, and Age-Sex Accuracy Index
Authors: A. Sathiya Susuman, Hamisi F. Hamisi
Abstract:
Background: Many socio-economic and demographic data are age-sex attributed. However, a variety of irregularities and misstatements are noted with respect to age-related data, and fewer with respect to sex data, because of the biological differences between the genders. Noting the misstatement/misreporting of age data despite its importance in demographic and epidemiological studies, this study aims at assessing the quality of the 2012 Tanzania Population and Housing Census results. Methods: Data for the analysis were downloaded from the Tanzania National Bureau of Statistics. Age heaping and digit preference were measured using summary indices, viz., Whipple's index, Myers' blended index, and the age-sex accuracy index. Results: The recorded Whipple's index for both sexes was 154.43; males had the lower index, about 152.65, while females had the higher, about 156.07. For Myers' blended index, the preferences were at digits '0' and '5' while avoidances were at digits '1' and '3' for both sexes. Finally, the age-sex index stood at 59.8, where the sex ratio score was 5.82 and the age ratio scores were 20.89 and 21.4 for males and females respectively. Conclusion: The evaluation of the 2012 PHC data using these demographic techniques shows the data to be inaccurate as a result of systematic heaping and digit preferences/avoidances. Thus, innovative methods in data collection, along with measuring and minimizing errors using statistical techniques, should be used to ensure the accuracy of age data.
Keywords: age heaping, digit preference/avoidance, summary indices, Whipple's index, Myers' index, age-sex accuracy index
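Whipple's index is computed over ages 23-62 and measures the preference for ages ending in 0 or 5: a value of 100 indicates no heaping and 500 total heaping. The sketch below applies the standard formula to invented age counts, not the Tanzania census tabulations.

```python
# Standard Whipple's index over ages 23-62; the age counts are invented,
# not the Tanzania 2012 census data.
def whipple_index(counts):
    """counts: dict mapping single-year age -> population count."""
    total = sum(c for age, c in counts.items() if 23 <= age <= 62)
    preferred = sum(c for age, c in counts.items()
                    if 23 <= age <= 62 and age % 5 == 0)
    # ages 25, 30, ..., 60 are 1/5 of the range, hence the factor 5 * 100
    return 500.0 * preferred / total

uniform = {age: 100 for age in range(23, 63)}
print(whipple_index(uniform))   # -> 100.0 (no digit preference)
```

The reported value of 154.43 for both sexes would thus mean roughly 54% more people reported ages ending in 0 or 5 than a smooth age distribution implies.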
Procedia PDF Downloads 474
25120 Short-Term Energy Efficiency Decay and Risk Analysis of Ground Source Heat Pump System
Authors: Tu Shuyang, Zhang Xu, Zhou Xiang
Abstract:
The objective of this paper is to investigate the effect of short-term heat exchange decay of the ground heat exchanger (GHE) on ground source heat pump (GSHP) energy efficiency and capacity. A resistance-capacitance (RC) model was developed and adopted to simulate the transient characteristics of the ground thermal condition and heat exchange. The capacity change of the GSHP was linked to the inlet and outlet water temperatures by polynomial fitting, according to measured parameters given by heat pump manufacturers. Thus the model, which combines the heat exchange decay with the capacity change, reflects the energy efficiency decay of the whole system. A GSHP system was analyzed as a case with the model, and the result showed a risk that the GSHP might not meet the load demand because of the efficiency decay in short-term operation. The conclusion provides some guidance for GSHP system design to overcome this risk.
Keywords: capacity, energy efficiency, GSHP, heat exchange
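The capacity-vs-water-temperature fit can be illustrated with invented catalogue points. The paper uses polynomial fitting of manufacturer data; for brevity this sketch fits only a first-order (linear) curve and then evaluates the capacity at a degraded ground-loop temperature, so both the data and the model order are assumptions.

```python
# Linear capacity curve fitted to invented manufacturer catalogue points
# (the paper's polynomial fit may use a higher order).
def least_squares_line(xs, ys):
    """Ordinary least-squares line; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# invented catalogue data: cooling capacity (kW) vs entering water temp (C)
ewt = [20, 25, 30, 35]
capacity = [60, 57, 54, 51]
slope, intercept = least_squares_line(ewt, capacity)
# capacity after short-term decay raises the loop temperature to 32 C
print(slope * 32 + intercept)
```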
Procedia PDF Downloads 348
25119 Determination of Some Etiologic Agents in Calves with Diarrhea
Authors: Nermin Isik, Ozlem Derinbay Ekici, Oguzhan Avci
Abstract:
The aim of this study was to determine the role of infection in diarrhea of neonatal calves in Central Anatolia, Turkey. A total of 300 fecal samples were collected from diarrheic neonatal calves aged between 0-90 days in Konya, Karaman, and Aksaray from January to April 2014. Fecal specimens from calves with clinical symptoms of diarrhea were examined for the presence of bovine coronavirus, bovine rotavirus, Cryptosporidium sp., and E. coli by a commercially available capture direct enzyme-linked immunosorbent assay (ELISA) kit and the modified Ziehl-Neelsen (MZN) method. Calves were grouped according to their age as follows: 1-14, 15-29, and 30-90 days. Cryptosporidium sp. infection was detected in 52.8%, 58.8%, and 39.2% of samples by ELISA and in 33.9%, 47%, and 26.7% by MZN in the respective age groups. The seroprevalence of rotavirus (12.5%, 40%, 12.5%), coronavirus (2.5%, 0%, 3.5%), and E. coli (5%, 4.7%, 8.9%) infections was determined for the age groups respectively. Cryptosporidium sp. was the most frequently detected enteropathogen in calves (52%) and coronavirus the least (2%). The detection rate of mixed infection was 12.3%. In conclusion, mixed infections must be considered when evaluating calves with diarrhea. These results will provide an important contribution against the factors that cause diarrhea.
Keywords: Cryptosporidium sp., bovine coronavirus, bovine rotavirus, E. coli, calves, ELISA
Procedia PDF Downloads 551
25118 Model for Introducing Products to New Customers through Decision Tree Using Algorithm C4.5 (J-48)
Authors: Komol Phaisarn, Anuphan Suttimarn, Vitchanan Keawtong, Kittisak Thongyoun, Chaiyos Jamsawang
Abstract:
This article analyzes insurance information which contains data on customer decisions when purchasing a life insurance pay package. The data were analyzed in order to present new customers with the Life Insurance Perfect Pay package that meets new customers' needs as closely as possible. The basic data of the insurance pay package were collected for data mining, thus reducing the scattering of information. The data were then classified in order to obtain a decision model, or decision tree, using Algorithm C4.5 (J-48). In the classification, WEKA tools were used to form the model, and testing datasets were used to test the decision tree for decision accuracy. The validation of this model showed that the prediction accuracy was 68.43%, while 31.25% were errors. The same set of data was then tested with other models, i.e., Naive Bayes and ZeroR. The results showed that the J-48 method could predict more accurately. The researchers therefore applied the decision tree in writing the program used to introduce the product to new customers, to support customers' decision making in purchasing the insurance package that meets their needs as closely as possible. Keywords: decision tree, data mining, customers, life insurance pay package
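The C4.5 (J-48) approach used above selects splits by information gain ratio. A minimal pure-Python sketch of that idea is shown below; the attribute names and toy records are invented for illustration and do not reproduce the paper's actual WEKA setup or insurance dataset.

```python
# Minimal C4.5-style decision tree (gain-ratio splits) on a toy dataset.
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gain_ratio(rows, labels, attr):
    base, n = entropy(labels), len(rows)
    split_info, remainder = 0.0, 0.0
    for v in set(r[attr] for r in rows):
        idx = [i for i, r in enumerate(rows) if r[attr] == v]
        p = len(idx) / n
        remainder += p * entropy([labels[i] for i in idx])
        split_info -= p * math.log2(p)
    return (base - remainder) / split_info if split_info else 0.0

def build(rows, labels, attrs):
    if len(set(labels)) == 1:          # pure node -> leaf
        return labels[0]
    if not attrs:                      # no attributes left -> majority leaf
        return Counter(labels).most_common(1)[0][0]
    best = max(attrs, key=lambda a: gain_ratio(rows, labels, a))
    node = {"attr": best, "branches": {}}
    for v in set(r[best] for r in rows):
        idx = [i for i, r in enumerate(rows) if r[best] == v]
        node["branches"][v] = build([rows[i] for i in idx],
                                    [labels[i] for i in idx],
                                    [a for a in attrs if a != best])
    return node

def classify(node, row, default="no"):
    while isinstance(node, dict):
        node = node["branches"].get(row[node["attr"]], default)
    return node

# Invented toy data: does a prospect buy the pay package?
rows = [
    {"age": "young", "income": "low"},  {"age": "young", "income": "high"},
    {"age": "mid",   "income": "low"},  {"age": "mid",   "income": "high"},
    {"age": "old",   "income": "low"},  {"age": "old",   "income": "high"},
]
labels = ["no", "yes", "no", "yes", "no", "yes"]
tree = build(rows, labels, ["age", "income"])
print(classify(tree, {"age": "mid", "income": "high"}))
```

In practice WEKA's J-48 adds pruning and handling of numeric and missing attributes, which this sketch omits.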
Procedia PDF Downloads 425
25117 A Risk Assessment Tool for the Contamination of Aflatoxins on Dried Figs Based on Machine Learning Algorithms
Authors: Kottaridi Klimentia, Demopoulos Vasilis, Sidiropoulos Anastasios, Ihara Diego, Nikolaidis Vasileios, Antonopoulos Dimitrios
Abstract:
Aflatoxins are highly poisonous and carcinogenic compounds produced by species of the genus Aspergillus that can infect a variety of agricultural foods, including dried figs. Biological and environmental factors, such as the population, pathogenicity, and aflatoxinogenic capacity of the strains, and the topography, soil, and climate parameters of the fig orchards, are believed to have a strong effect on aflatoxin levels. Existing methods for aflatoxin detection and measurement, such as high-performance liquid chromatography (HPLC) and enzyme-linked immunosorbent assay (ELISA), can provide accurate results, but the procedures are usually time-consuming, sample-destructive, and expensive. Predicting aflatoxin levels prior to crop harvest is useful for minimizing the health and financial impact of a contaminated crop. Consequently, there is interest in developing a tool that predicts aflatoxin levels based on topography and soil analysis data of fig orchards. This paper describes the development of a risk assessment tool for aflatoxin contamination of dried figs, based on the location and altitude of the fig orchards, the population of the fungus Aspergillus spp. in the soil, and soil parameters such as pH, saturation percentage (SP), electrical conductivity (EC), organic matter, particle size analysis (sand, silt, clay), the concentration of the exchangeable cations (Ca, Mg, K, Na), extractable P, and trace elements (B, Fe, Mn, Zn, and Cu), by employing machine learning methods. 
In particular, our proposed method integrates three machine learning techniques, i.e., dimensionality reduction on the original dataset (principal component analysis), metric learning (Mahalanobis metric for clustering), and the k-nearest neighbors learning algorithm (KNN), into an enhanced model, with mean performance equal to 85% in terms of the Pearson correlation coefficient (PCC) between observed and predicted values. Keywords: aflatoxins, Aspergillus spp., dried figs, k-nearest neighbors, machine learning, prediction
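The nearest-neighbour stage of such a pipeline can be sketched in a few lines. The sketch below substitutes simple feature standardization for the paper's PCA and learned Mahalanobis metric, and all soil/terrain values and risk labels are invented for illustration.

```python
# Simplified stand-in for the PCA + metric-learning + KNN pipeline:
# standardize features, then predict a risk class by majority vote of the
# k nearest orchards. Invented toy data; not the paper's dataset.
import math

def standardize(X):
    cols = list(zip(*X))
    means = [sum(c) / len(c) for c in cols]
    stds = [max(math.sqrt(sum((v - m) ** 2 for v in c) / len(c)), 1e-9)
            for c, m in zip(cols, means)]
    scale = lambda row: [(v - m) / s for v, m, s in zip(row, means, stds)]
    return [scale(r) for r in X], scale

def knn_predict(Xz, y, q, k=3):
    order = sorted(range(len(Xz)), key=lambda i: math.dist(Xz[i], q))
    votes = [y[i] for i in order[:k]]
    return max(set(votes), key=votes.count)

# Columns: altitude (m), soil pH, organic matter (%), Aspergillus (log CFU/g)
X = [[40, 7.8, 1.2, 4.1], [55, 7.5, 1.0, 3.9], [300, 6.9, 2.5, 2.0],
     [320, 6.8, 2.8, 1.8], [60, 7.7, 1.1, 4.3], [280, 7.0, 2.4, 2.2]]
y = ["high", "high", "low", "low", "high", "low"]

Xz, scale = standardize(X)
print(knn_predict(Xz, y, scale([50, 7.6, 1.1, 4.0])))
```

Standardizing (or, as in the paper, learning a Mahalanobis metric) matters here because raw features such as altitude in metres would otherwise dominate the Euclidean distance.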
Procedia PDF Downloads 182
25116 Study of Sustainability Practices Ingrained in Indian Culture
Authors: Shraddha Mahore Manjrekar
Abstract:
Culture has been an integral part of the civilizations of the world. Architectural works, in the material form of buildings, are often perceived as cultural symbols and as works of art. Historical civilizations are often identified with their surviving architectural achievements. The author has observed and reflected on the relation between traditional Indian cultural beliefs and the sustainable environment. There are some unwritten norms regarding the use of resources and the environment on the Indian subcontinent that have been commonly accepted by people for building houses and settlements since the Vedic period. The research has been done on the chants and prayers performed in a number of houses and temples in Madhya Pradesh and Maharashtra. The research also found that resource assessment had been done for the entire country, and an idea of conservation of these resources was instilled in the common people by means of traditions, customs, and beliefs. An awareness of, and gratitude for, natural resources can be observed in the major beliefs and customs. This paper describes a few such beliefs and customs that are directly linked with the built environment and landscape. Keywords: Indian culture, sacred groves, sustainability in built environment, sustainability practices
Procedia PDF Downloads 295
25115 Effect of Oral Administration of “Gadagi” Tea on Superoxide Dismutase Activity in Humans
Authors: A. M. Gadanya, B. A. Ahmad, U. Maigatari
Abstract:
The effect of oral administration of Gadagi tea on superoxide dismutase activity was assessed in twenty (20) male subjects (aged 21-40 years). Ten (10) male non-consumers of Gadagi tea (aged 20-26 years) were used as controls. Blood samples were collected from the subjects and analysed for serum superoxide dismutase activity using the R&D enzyme-linked immunosorbent assay (ELISA) method. The subjects were grouped into four based on age, i.e., group I (21-25 years) and group II (26-30 years), and also based on duration of tea consumption, i.e., group A (5-9 years), group B (10-14 years), group C (15-19 years) and group D (20-24 years). The subjects in group I (0.12 U mg-1 ±0.05), group II (0.11 U mg-1 ±0.01), group III (0.14 U mg-1 ±0.08) and group IV (0.17 U mg-1 ±0.11) showed increased activity of serum superoxide dismutase when compared with the control subjects (0.88 U mg-1 ±0.02) (P<0.05). There was no statistically significant difference in superoxide dismutase activity within the case groups (P>0.05), based on age and duration of consumption of the tea. Thus, Gadagi tea consumption could increase serum superoxide dismutase activity in humans. Keywords: “Gadagi” tea, serum, superoxide dismutase, humans
Procedia PDF Downloads 379
25114 Exploring the Role of Data Mining in Crime Classification: A Systematic Literature Review
Authors: Faisal Muhibuddin, Ani Dijah Rahajoe
Abstract:
This in-depth exploration, through a systematic literature review, scrutinizes the nuanced role of data mining in the classification of criminal activities. The research focuses on investigating various methodological aspects and recent developments in leveraging data mining techniques to enhance the effectiveness and precision of crime categorization. Commencing with an exposition of the foundational concepts of crime classification and its evolutionary dynamics, this study details the paradigm shift from conventional methods towards approaches supported by data mining, addressing the challenges and complexities inherent in the modern crime landscape. Specifically, the research delves into various data mining techniques, including k-means clustering, naïve Bayes, k-nearest neighbour, and other clustering methods. A comprehensive review of the strengths and limitations of each technique provides insights into their respective contributions to improving crime classification models. The integration of diverse data sources takes centre stage in this research. A detailed analysis explores how the amalgamation of structured data (such as criminal records) and unstructured data (such as social media) can offer a holistic understanding of crime, enriching classification models with more profound insights. Furthermore, the study explores the temporal implications of crime classification, emphasizing the significance of considering temporal factors to comprehend long-term trends and seasonality. The availability of real-time data is also elucidated as a crucial element in enhancing responsiveness and accuracy in crime classification. Keywords: data mining, classification algorithm, naïve Bayes, k-means clustering, k-nearest neighbour, crime, data analysis, systematic literature review
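One of the techniques surveyed above, naïve Bayes over unstructured incident text, can be sketched compactly. The toy reports and categories below are invented for illustration; real studies in this literature combine such text with structured criminal records.

```python
# Hedged sketch: multinomial naive Bayes over toy incident descriptions,
# with add-one (Laplace) smoothing. Invented data, illustration only.
import math
from collections import Counter

def train_nb(docs, labels):
    vocab = set(w for d in docs for w in d.split())
    classes = set(labels)
    prior = {c: math.log(labels.count(c) / len(labels)) for c in classes}
    word_counts = {c: Counter() for c in classes}
    for d, c in zip(docs, labels):
        word_counts[c].update(d.split())
    loglik = {}
    for c in classes:
        total = sum(word_counts[c].values())
        loglik[c] = {w: math.log((word_counts[c][w] + 1) / (total + len(vocab)))
                     for w in vocab}
    return prior, loglik, vocab

def predict_nb(model, doc):
    prior, loglik, vocab = model
    scores = {c: prior[c] + sum(loglik[c][w] for w in doc.split() if w in vocab)
              for c in prior}
    return max(scores, key=scores.get)

docs = ["stolen wallet on bus", "wallet stolen from car",
        "online account phishing scam", "phishing email scam reported"]
labels = ["theft", "theft", "fraud", "fraud"]
model = train_nb(docs, labels)
print(predict_nb(model, "stolen phone on bus"))
```

Unseen words ("phone" above) are simply skipped at prediction time, which is one common way to handle out-of-vocabulary tokens.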
Procedia PDF Downloads 62
25113 Linking Market Performance to Exploration and Exploitation in The Pharmaceutical Industry
Authors: Johann Valentowitsch, Wolfgang Burr
Abstract:
In organizational research, strategies of exploration and exploitation are often considered to be contradictory. Building on the tradeoff argument, many authors have assumed that a company's market performance should depend positively on its strategic balance between exploration and exploitation over time. In this study, we apply this reasoning to the pharmaceutical industry. Using exploratory regression analysis, we show that the long-term market performance of a pharmaceutical company is linked to both its ability to carry out exploratory projects and its ability to develop exploitative competencies. In particular, our findings demonstrate that, on average, a company's annual sales performance is higher the better balanced its strategic alignment between exploration and exploitation is. The contribution of our research is twofold. On the one hand, we provide empirical evidence for the initial tradeoff hypothesis and thus support the theoretical position of those who understand exploration and exploitation as strategic substitutes. On the other hand, our findings show that a balanced relationship between exploration and exploitation is also important in research-intensive industries, which naturally tend to place more emphasis on exploration. Keywords: exploitation, exploration, market performance, pharmaceutical industry, strategy
Procedia PDF Downloads 216
25112 Hidden Stones When Implementing Artificial Intelligence Solutions in the Engineering, Procurement, and Construction Industry
Authors: Rimma Dzhusupova, Jan Bosch, Helena Holmström Olsson
Abstract:
Artificial Intelligence (AI) does not yet have a proven track record in large-scale projects in the Engineering, Procurement, and Construction (EPC) industry. Since AI solutions for industrial applications became available only recently, deployment experience and lessons learned are still to be built up. Nevertheless, AI has become an attractive technology for organizations looking to automate repetitive tasks to reduce manual work. Meanwhile, the current AI market has started offering various solutions and services. The contribution of this research is that we explore in detail the challenges and obstacles faced in developing and deploying AI in a large-scale project in the EPC industry, based on real-life use cases performed in an EPC company. The identified challenges are not linked to a specific technology or a company's know-how and are therefore universal. The findings in this paper aim to provide feedback to academia to reduce the gap between research and practical experience. They also help reveal the hidden stones encountered when implementing AI solutions in the industry. Keywords: artificial intelligence, machine learning, deep learning, innovation, engineering, procurement and construction industry, AI in the EPC industry
Procedia PDF Downloads 118
25111 Laccase Catalysed Conjugation of Tea Polyphenols for Enhanced Antioxidant Properties
Authors: Parikshit Gogo, N. N. Dutta
Abstract:
Oxidative enzymes, especially laccase (benzenediol: oxygen oxidoreductase, E.C. 1.10.3.2) from bacteria, fungi, and plants, have been playing an important role in green technologies due to their specific advantageous properties. Laccase from different sources and in different forms has been used as a biocatalyst in many oxidation and conjugation reactions, on substrates ranging from phenols to hydrocarbons. Tea polyphenols and their derivatives attract the scientific community because of their potential use as antioxidants in the food, pharmaceutical, and cosmetic industries. Conjugates of polyphenols have emerged as novel materials which show better stability and antioxidant properties in applied fields. The conjugation reaction of catechin with poly(allylamine) has been studied using free laccase, immobilized laccase, and cross-linked enzyme crystals (CLEC) of laccase from Trametes versicolor, with particular emphasis on the effect of pertinent variables and the kinetic aspects of the reaction. The stability and antioxidant properties of the conjugated product were improved compared to the unconjugated tea polyphenols. The reaction was studied in 11 different solvents in order to deduce the solvent effect, through an attempt to correlate the initial reaction rate with solvent properties such as hydrophobicity (logP), water solubility (logSw), electron-pair acceptance (ETN) and donation abilities (DNN), polarisability, and dielectric constant, which exhibited reasonable correlations. The study revealed, in general, that polar solvents favour the initial reaction rate. The kinetics of the conjugation reaction conformed to the so-called Ping-Pong Bi-Bi mechanism with catechin inhibition. The stability as well as the activity of the CLEC were better than those of the free and immobilized enzymes for practical application. In the case of the immobilized laccase system, marginal diffusional limitation could be inferred from the experimental data. 
The kinetic parameters estimated by non-linear regression analysis were found to be Km,PAA (mM) = 0.75 and 1.8967, and Km,cat (mM) = 11.769 and 15.1816, for free and immobilized laccase, respectively. An attempt has been made to assess the activity of laccase in the conjugation reaction in relation to other reactions, such as the dimerisation of ferulic acids, and to develop a protocol to enhance polyphenol antioxidant activity. Keywords: laccase, catechin, conjugation reaction, antioxidant properties
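The parameter estimation above can be illustrated with a small least-squares fit. This is a hedged sketch only: the rate-law form below is one common textbook expression for a ping-pong bi-bi mechanism with substrate inhibition, and all numbers are invented, not the paper's data; a coarse grid search stands in for proper non-linear regression.

```python
# Illustrative fit of an assumed ping-pong bi-bi rate law with substrate
# (catechin) inhibition to synthetic data, by least-squares grid search.
import itertools

def rate(A, B, Vmax, Ka, Kb, Ki):
    # A = poly(allylamine) conc. (mM), B = catechin conc. (mM); B inhibits.
    return Vmax * A * B / (Ka * B + Kb * A * (1 + B / Ki) + A * B)

# Synthetic "measurements" generated from known parameters.
TRUE = dict(Vmax=2.0, Ka=0.75, Kb=11.8, Ki=5.0)
grid_AB = [(a, b) for a in (0.5, 1, 2, 5) for b in (2, 5, 10, 20)]
data = [(a, b, rate(a, b, **TRUE)) for a, b in grid_AB]

def sse(params):
    # Sum of squared errors between model and "measured" rates.
    return sum((v - rate(a, b, *params)) ** 2 for a, b, v in data)

# Coarse grid search as a stand-in for proper non-linear regression.
best = min(itertools.product((1.5, 2.0, 2.5), (0.5, 0.75, 1.0),
                             (10, 11.8, 14), (4, 5, 6)), key=sse)
print(best)  # recovers (2.0, 0.75, 11.8, 5.0) since the truth lies on the grid
```

A real analysis would use a gradient-based non-linear least-squares routine and report confidence intervals on the fitted Km values.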
Procedia PDF Downloads 269
25110 Assessing Supply Chain Performance through Data Mining Techniques: A Case of Automotive Industry
Authors: Emin Gundogar, Burak Erkayman, Nusret Sazak
Abstract:
Providing effective management performance across the whole supply chain is a critical issue and hard to implement. Proper evaluation of integrated data can yield accurate information. Analysing the supply chain data through OLAP (On-Line Analytical Processing) technologies may provide a multi-angle view of the work and its consolidation. In this study, association rules and classification techniques are applied to measure the supply chain performance metrics of an automotive manufacturer in Turkey. The main criteria and important rules are determined. A comparison of the results of the algorithms is presented. Keywords: supply chain performance, performance measurement, data mining, automotive
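The association-rule mining applied above can be sketched with a minimal Apriori-style pass. The supply-chain event "transactions" below are invented for illustration; the real study mined an automotive manufacturer's integrated data, which is not reproduced here.

```python
# Minimal Apriori-style sketch: enumerate frequent itemsets up to size 3,
# then derive high-confidence rules. Invented toy transactions.
from itertools import combinations

transactions = [
    {"late_supplier", "stockout", "expedited_freight"},
    {"late_supplier", "stockout"},
    {"late_supplier", "expedited_freight"},
    {"on_time", "normal_freight"},
    {"late_supplier", "stockout", "expedited_freight"},
]

def frequent_itemsets(transactions, min_support=0.4):
    n = len(transactions)
    items = {i for t in transactions for i in t}
    freq = {}
    for size in (1, 2, 3):
        for cand in combinations(sorted(items), size):
            sup = sum(set(cand) <= t for t in transactions) / n
            if sup >= min_support:
                freq[cand] = sup
    return freq

def rules(freq, min_conf=0.8):
    out = []
    for itemset, sup in freq.items():
        if len(itemset) < 2:
            continue
        for i in range(1, len(itemset)):
            for lhs in combinations(itemset, i):
                conf = sup / freq[lhs]  # subsets of frequent sets are frequent
                if conf >= min_conf:
                    rhs = tuple(x for x in itemset if x not in lhs)
                    out.append((lhs, rhs, conf))
    return out

freq = frequent_itemsets(transactions)
for lhs, rhs, conf in rules(freq):
    print(lhs, "->", rhs, f"conf={conf:.2f}")
```

In an OLAP setting such rules would typically be mined per dimension slice (supplier, plant, period) so that performance patterns can be compared across views.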
Procedia PDF Downloads 512