Search results for: cluster validation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2139

2109 Quantitative Structure-Activity Relationship Analysis of Binding Affinity of a Series of Anti-Prion Compounds to Human Prion Protein

Authors: Strahinja Kovačević, Sanja Podunavac-Kuzmanović, Lidija Jevrić, Milica Karadžić

Abstract:

The present study is based on the quantitative structure-activity relationship (QSAR) analysis of eighteen compounds with anti-prion activity. The structures and anti-prion activities (expressed in response units, RU%) of the analyzed compounds were taken from the ChEMBL database. In the first step of the analysis, 85 molecular descriptors were calculated, and hierarchical cluster analysis (HCA) and principal component analysis (PCA) were carried out on them in order to detect potentially significant similarities or dissimilarities among the studied compounds. The calculated molecular descriptors were physicochemical, lipophilicity and ADMET (absorption, distribution, metabolism, excretion and toxicity) descriptors. The first stage of the QSAR analysis was simple linear regression modeling. It resulted in one acceptable model that correlates Henry's law constant with RU% units. The obtained 2D-QSAR model was validated by cross-validation as an internal validation method. The validation procedure confirmed the model's quality, so the model can be used for prediction of anti-prion activity. The next stage of the analysis of anti-prion activity will include 3D-QSAR and molecular docking approaches in order to select the most promising compounds for the treatment of prion diseases. These results are part of project No. 114-451-268/2016-02, financially supported by the Provincial Secretariat for Science and Technological Development of AP Vojvodina.
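
As a rough illustration of the internal cross-validation step described above, the sketch below fits a one-descriptor linear model and computes a leave-one-out Q²; the descriptor values, activity values and layout are placeholders, not data from the study.

```python
# Illustrative sketch only: leave-one-out cross-validation of a simple
# linear 2D-QSAR model (one descriptor -> activity). The data below are
# hypothetical placeholders, not values taken from the paper.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import r2_score

# X: one molecular descriptor (e.g. Henry's law constant), y: activity in RU%
X = np.array([[0.12], [0.45], [0.33], [0.80], [0.27], [0.61],
              [0.95], [0.05], [0.70], [0.50], [0.38], [0.22],
              [0.66], [0.90], [0.15], [0.58], [0.41], [0.74]])
y = 20.0 + 55.0 * X.ravel() + np.random.default_rng(0).normal(0, 3, X.shape[0])

model = LinearRegression().fit(X, y)
r2 = r2_score(y, model.predict(X))          # conventional (fitted) R^2

# Leave-one-out cross-validation: each compound is predicted by a model
# fitted on the remaining ones; Q^2 measures predictive ability, not fit.
y_loo = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
q2 = 1 - np.sum((y - y_loo) ** 2) / np.sum((y - y.mean()) ** 2)

print(f"R^2 = {r2:.3f}, Q^2(LOO) = {q2:.3f}")
```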

Keywords: anti-prion activity, chemometrics, molecular modeling, QSAR

Procedia PDF Downloads 272
2108 Scientific Linux Cluster for BIG-DATA Analysis (SLBD): A Case of Fayoum University

Authors: Hassan S. Hussein, Rania A. Abul Seoud, Amr M. Refaat

Abstract:

Scientific researchers face the analysis of very large data sets, which are growing at a noticeable rate with today's and tomorrow's technologies. Hadoop and Spark are frameworks developed for this kind of processing; the Hadoop framework in particular is suitable for many different hardware platforms. In this research, a scientific Linux cluster for Big Data analysis (SLBD) is presented. SLBD runs open source software on a high-performance cluster infrastructure with large computational capacity. SLBD is composed of a single cluster of identical, commodity-grade computers interconnected via a small LAN: four nodes connected through a fast switch and Gigabit Ethernet cards. Cloudera Manager is used to configure and manage an Apache Hadoop stack. Hadoop is a framework that allows storing and processing big data across the cluster using the MapReduce algorithm. MapReduce divides a job into smaller tasks that are assigned to the network nodes; the algorithm then collects the results and forms the final result dataset. The SLBD clustering system allows fast and efficient processing of the large amounts of data produced by different applications. SLBD also provides high performance, high throughput, high availability, expandability and cluster scalability.
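
The following single-machine Python sketch illustrates the MapReduce split/assign/collect idea mentioned above; it is a toy word count, not Hadoop or Cloudera code, and the input lines are invented.

```python
# Minimal, single-machine sketch of the MapReduce idea: split the input,
# apply a map function per split (conceptually on different cluster nodes),
# shuffle the intermediate pairs, and reduce them into the final dataset.
from collections import defaultdict
from functools import reduce

def map_phase(chunk):
    # emit (word, 1) pairs for one input split
    return [(word, 1) for line in chunk for word in line.split()]

def shuffle(mapped):
    groups = defaultdict(list)
    for key, value in mapped:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    return {key: reduce(lambda a, b: a + b, values) for key, values in groups.items()}

lines = ["big data on a linux cluster", "hadoop splits big data into tasks"]
splits = [lines[:1], lines[1:]]                  # pretend each split goes to a node
mapped = [pair for chunk in splits for pair in map_phase(chunk)]
print(reduce_phase(shuffle(mapped)))             # {'big': 2, 'data': 2, ...}
```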

Keywords: big data platforms, cloudera manager, Hadoop, MapReduce

Procedia PDF Downloads 332
2107 Using the Cluster Computing to Improve the Computational Speed of the Modular Exponentiation in RSA Cryptography System

Authors: Te-Jen Chang, Ping-Sheng Huang, Shan-Ten Cheng, Chih-Lin Lin, I-Hui Pan, Tsung- Hsien Lin

Abstract:

The RSA system is a major contribution to encryption and decryption. It is based on modular exponentiation, which involves calculations on very large numbers, and these operations place a heavy burden on the CPU. To increase the computational speed, in addition to improving the algorithms themselves (the binary method, the sliding window method, the addition chain method, and so on), cluster computing can be used. The cluster system is composed of laboratory computers on which MPICH2 is installed. The parallel procedures of modular exponentiation can be processed by combining the sliding window method with the addition chain method. This significantly reduces the computational time of modular exponentiation for operands longer than 512 bits, and even longer than 1024 bits.
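
Below is a minimal Python sketch of the sliding window method on its own (without the addition chain combination or the MPICH2 parallelization used in the paper); it is checked against Python's built-in modular pow on 1024-bit operands.

```python
def sliding_window_pow(base, exponent, modulus, w=4):
    """Left-to-right sliding-window modular exponentiation (sketch).

    Precomputes the odd powers base^1, base^3, ..., base^(2^w - 1) and
    consumes the exponent bits in windows of up to w bits, which reduces
    the number of multiplications compared with the plain binary method.
    """
    if modulus == 1:
        return 0
    base %= modulus
    base_sq = (base * base) % modulus
    odd_powers = {1: base}
    for k in range(3, 1 << w, 2):
        odd_powers[k] = (odd_powers[k - 2] * base_sq) % modulus

    bits = bin(exponent)[2:] if exponent else "0"
    result = 1
    i = 0
    while i < len(bits):
        if bits[i] == "0":
            result = (result * result) % modulus
            i += 1
        else:
            # Take the longest window (<= w bits) that ends in a 1.
            j = min(i + w, len(bits))
            while bits[j - 1] == "0":
                j -= 1
            window_value = int(bits[i:j], 2)
            for _ in range(j - i):
                result = (result * result) % modulus
            result = (result * odd_powers[window_value]) % modulus
            i = j
    return result

# Quick check against Python's built-in pow on 1024-bit operands.
import random
random.seed(1)
b, e, m = (random.getrandbits(1024) for _ in range(3))
assert sliding_window_pow(b, e, m | 1) == pow(b, e, m | 1)
```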

Keywords: cluster system, modular exponentiation, sliding window, addition chain

Procedia PDF Downloads 497
2106 Genomic Diversity of Clostridium perfringens Strains in Food and Human Sources

Authors: Asma Afshari, Abdollah Jamshidi, Jamshid Razmyar, Mehrnaz Rad

Abstract:

Clostridium perfringens is a serious pathogen which causes enteric diseases in domestic animals and food poisoning in humans. Spores can survive cooking processes and play an important role in the possible onset of disease. In this study, RAPD-PCR and REP-PCR were used to examine the genetic diversity of 49 isolates of C. perfringens type A from 3 different sources. The results of RAPD-PCR revealed the most genetic diversity among poultry isolates, while human isolates showed the least genetic diversity. Cluster analysis obtained from RAPD-PCR and based on the genetic distances split the 49 strains into five distinct major clusters (A, B, C, D, and E). Clusters A and C were composed of isolates from poultry meat, cluster B was composed of isolates from human feces, cluster D was composed of isolates from minced meat, poultry meat and human feces, and cluster E was composed of isolates from minced meat. Further characterization of these strains using (GTG)5 fingerprint repetitive sequence-based PCR analysis did not show further differentiation between the various types of strains. To our knowledge, this is the first study in which the genetic diversity of C. perfringens isolates from different types of meats and human feces has been investigated.

Keywords: C. perfringens, genetic diversity, RAPD-PCR, REP-PCR

Procedia PDF Downloads 459
2105 Event Driven Dynamic Clustering and Data Aggregation in Wireless Sensor Network

Authors: Ashok V. Sutagundar, Sunilkumar S. Manvi

Abstract:

Energy, delay and bandwidth are the prime issues of wireless sensor networks (WSN). Energy usage optimization and efficient bandwidth utilization are important issues in WSN. Event-triggered data aggregation facilitates such optimal tasks for the event-affected area in a WSN. Reliable delivery of critical information to the sink node is also a major challenge in WSN. To tackle these issues, we propose an event-driven dynamic clustering and data aggregation scheme for WSN that enhances the lifetime of the network by minimizing redundant data transmission. The proposed scheme operates as follows: (1) Whenever an event is triggered, the event-triggered node selects the cluster head. (2) The cluster head gathers data from the sensor nodes within the cluster. (3) The cluster head identifies and classifies the events in the collected data using a Bayesian classifier. (4) Aggregation of the data is done using a statistical method. (5) The cluster head discovers the paths to the sink node using residual energy, path distance and bandwidth. (6) If the aggregated data is critical, the cluster head sends it over multiple paths for reliable data communication. (7) Otherwise, the aggregated data is transmitted towards the sink node over the single path with the highest bandwidth and residual energy. The performance of the scheme is validated for various WSN scenarios to evaluate the effectiveness of the proposed approach in terms of aggregation time, cluster formation time and energy consumed for aggregation.
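
A hedged sketch of steps (5) to (7) is given below; the score formula and weights are illustrative assumptions, not the scheme's actual path metric.

```python
# Illustrative sketch: score candidate paths to the sink by residual energy,
# path distance and bandwidth, then send critical aggregates over multiple
# paths and ordinary aggregates over the single best path. Weights and the
# score formula are assumptions made for this example.
from dataclasses import dataclass

@dataclass
class Path:
    node_ids: list
    residual_energy: float   # minimum remaining energy along the path
    distance: float          # hops (or metres) to the sink
    bandwidth: float         # available bandwidth in kbps

def path_score(p: Path, w_e=0.4, w_d=0.3, w_b=0.3) -> float:
    # Higher energy and bandwidth are better; shorter distance is better.
    return w_e * p.residual_energy + w_b * p.bandwidth - w_d * p.distance

def select_paths(paths, aggregate_is_critical, k=2):
    ranked = sorted(paths, key=path_score, reverse=True)
    return ranked[:k] if aggregate_is_critical else ranked[:1]

paths = [Path([1, 4, 9], 5.2, 3, 250), Path([1, 6, 7, 9], 7.8, 4, 180),
         Path([1, 2, 9], 3.1, 3, 300)]
print([p.node_ids for p in select_paths(paths, aggregate_is_critical=True)])
```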

Keywords: wireless sensor network, dynamic clustering, data aggregation, wireless communication

Procedia PDF Downloads 415
2104 Digital Forensics Compute Cluster: A High Speed Distributed Computing Capability for Digital Forensics

Authors: Daniel Gonzales, Zev Winkelman, Trung Tran, Ricardo Sanchez, Dulani Woods, John Hollywood

Abstract:

We have developed a distributed computing capability, Digital Forensics Compute Cluster (DFORC2) to speed up the ingestion and processing of digital evidence that is resident on computer hard drives. DFORC2 parallelizes evidence ingestion and file processing steps. It can be run on a standalone computer cluster or in the Amazon Web Services (AWS) cloud. When running in a virtualized computing environment, its cluster resources can be dynamically scaled up or down using Kubernetes. DFORC2 is an open source project that uses Autopsy, Apache Spark and Kafka, and other open source software packages. It extends the proven open source digital forensics capabilities of Autopsy to compute clusters and cloud architectures, so digital forensics tasks can be accomplished efficiently by a scalable array of cluster compute nodes. In this paper, we describe DFORC2 and compare it with a standalone version of Autopsy when both are used to process evidence from hard drives of different sizes.

Keywords: digital forensics, cloud computing, cyber security, spark, Kubernetes, Kafka

Procedia PDF Downloads 367
2103 Feature Selection of Personal Authentication Based on EEG Signal for K-Means Cluster Analysis Using Silhouettes Score

Authors: Jianfeng Hu

Abstract:

Personal authentication based on electroencephalography (EEG) signals is one of the important fields in biometric technology, and more and more researchers have used EEG signals as a data source for biometrics. However, biometrics based on EEG signals also has some disadvantages. The proposed method employs entropy measures for feature extraction from EEG signals. Four types of entropy measures, sample entropy (SE), fuzzy entropy (FE), approximate entropy (AE) and spectral entropy (PE), were deployed as the feature set. In a silhouette calculation, the distance from each data point in a cluster to every other point within the same cluster, and to all data points in the closest cluster, is determined. Silhouettes thus measure how well a data point was classified when it was assigned to a cluster and how well the clusters are separated. This renders silhouettes well suited for assessing cluster quality in personal authentication methods. In this study, the silhouette score was used to assess the cluster quality of the k-means clustering algorithm and to compare the performance on each EEG dataset. The main goals of this study are: (1) to represent each target as a tuple of multiple feature sets, (2) to assign a suitable measure to each feature set, (3) to combine different feature sets, and (4) to determine the optimal feature weighting. Using precision/recall evaluations, the effectiveness of feature weighting in clustering was analyzed. EEG data from 22 subjects were collected. Results showed that: (1) it is possible to use fewer electrodes (3-4) for personal authentication; (2) there were differences between electrodes for personal authentication (p<0.01); (3) there was no significant difference in authentication performance among feature sets (except feature PE). Conclusion: the combination of the k-means clustering algorithm and the silhouette approach proved to be an accurate method for personal authentication based on EEG signals.
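
The sketch below illustrates the k-means plus silhouette-score evaluation described above using scikit-learn; the entropy feature matrix is synthetic and the subject/feature layout is a placeholder.

```python
# Compact sketch: k-means on an entropy feature matrix, assessed with the
# silhouette score at several k. In the study the matrix would hold SE, FE,
# AE and PE values per subject and electrode; here it is synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# 22 "subjects" x 4 entropy features (sample, fuzzy, approximate, spectral)
features = np.vstack([rng.normal(loc, 0.3, size=(11, 4)) for loc in (0.0, 2.0)])
X = StandardScaler().fit_transform(features)

for k in range(2, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    print(f"k={k}: silhouette={silhouette_score(X, labels):.3f}")
```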

Keywords: personal authentication, K-mean clustering, electroencephalogram, EEG, silhouettes

Procedia PDF Downloads 256
2102 Critical Psychosocial Risk Treatment for Engineers and Technicians

Authors: R. Berglund, T. Backström, M. Bellgran

Abstract:

This study explores how management addresses psychosocial risks in seven teams of engineers and technicians in the midst of the fourth industrial revolution. The sample is from an ongoing quasi-experiment on psychosocial risk management in a manufacturing company in Sweden. Each of the seven teams belongs to one of two clusters: a positive cluster or a negative cluster. The positive cluster reports a significantly positive change in psychosocial risk levels between two time points, and the negative cluster reports a significantly negative change. The data were collected using semi-structured interviews. The results of the computer-aided thematic analysis show that there are more differences than similarities when comparing the risk treatment actions taken between the two clusters. Findings show that the managers in the positive cluster use more enabling actions that foster and support formal and informal relationship building. In contrast, managers who use fewer enabling actions hinder the development of positive group processes and contribute to negative changes in psychosocial risk levels. This exploratory study sheds some light on how management can influence significant positive and negative changes in psychosocial risk levels during a risk management process.

Keywords: group process model, risk treatment, risk management, psychosocial

Procedia PDF Downloads 126
2101 An Enhanced Distributed Weighted Clustering Algorithm for Intra and Inter Cluster Routing in MANET

Authors: K. Gomathi

Abstract:

A Mobile Ad hoc Network (MANET) is defined as a collection of routable wireless mobile nodes with no centralized administration, communicating with each other using radio signals. MANETs are often deployed in hostile environments where hackers try to disturb secure data transfer and drain valuable network resources. Since a MANET is a battery-operated network, preserving network resources is essential. For resource-constrained computation, efficient routing and increased network stability, the network is divided into smaller groups called clusters. The clustering architecture consists of the Cluster Head (CH), ordinary nodes and gateways. The CH is responsible for inter- and intra-cluster routing. CH election is a prominent research area, and many algorithms have been developed using different metrics. A CH with a longer life sustains the network lifetime; for this purpose a Secondary Cluster Head (SCH) is also elected, which is more economical. To nominate an efficient CH, an Enhanced Distributed Weighted Clustering Algorithm (EDWCA) is proposed. This approach considers metrics such as battery power, degree difference and node speed for CH election. The proficiency of the proposed approach is evaluated and compared with an existing algorithm using the Network Simulator (NS-2).
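
A small sketch of the weighted CH/SCH election idea follows; the weighting factors and the ideal-degree value are assumptions for illustration, not the EDWCA parameters.

```python
# Hedged sketch of weighted cluster-head election: each node gets a combined
# weight from residual battery power, degree difference (distance of its
# neighbour count from an ideal degree) and speed; the lowest-weight node
# becomes CH and the runner-up becomes the SCH. Factors are illustrative.
def node_weight(battery, degree, speed, ideal_degree=6,
                w1=0.5, w2=0.3, w3=0.2):
    degree_diff = abs(degree - ideal_degree)
    # Lower weight is better: high battery lowers it, mobility and an
    # unbalanced degree raise it.
    return -w1 * battery + w2 * degree_diff + w3 * speed

nodes = {
    "n1": dict(battery=0.9, degree=5, speed=1.2),
    "n2": dict(battery=0.4, degree=7, speed=4.0),
    "n3": dict(battery=0.8, degree=6, speed=0.5),
}
ranked = sorted(nodes, key=lambda n: node_weight(**nodes[n]))
cluster_head, secondary_head = ranked[0], ranked[1]
print("CH:", cluster_head, "SCH:", secondary_head)
```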

Keywords: MANET, EDWCA, clustering, cluster head

Procedia PDF Downloads 365
2100 Creation of Greater Mekong Subregion Regional Competitiveness through Cluster Mapping

Authors: Danuvasin Charoen

Abstract:

This research investigates cluster development in the area called the Greater Mekong Subregion (GMS), which consists of Thailand, the People's Republic of China (PRC, specifically Yunnan Province and the Guangxi Zhuang Autonomous Region), Myanmar, the Lao People's Democratic Republic (Lao PDR), Cambodia, and Vietnam. The study utilized Porter's competitiveness theory and the cluster mapping approach to analyze the competitiveness of the region. The data collection consists of interviews, focus groups, and the analysis of secondary data. The findings identify some evidence of cluster development in the GMS; however, there is no clear indication of collaboration among the components in the clusters, and GMS clusters tend to be stand-alone. The clusters in Vietnam, Lao PDR, Myanmar, and Cambodia tend to be labor intensive, whereas the clusters in Thailand and the PRC (Yunnan) have the potential to successfully develop into innovative clusters. Collaboration and integration among the clusters in the GMS area are promising, though they could take a long time. The most likely relationship between the GMS countries could be, for example, that suppliers of low-end, labor-intensive products will be located in low-income countries such as Myanmar, Lao PDR, and Cambodia, and these countries will provide input materials for innovative clusters in middle-income countries such as Thailand and the PRC.

Keywords: cluster, GMS, competitiveness, development

Procedia PDF Downloads 231
2099 Preliminary Study of Standardization and Validation of Micronuclei Technique to Assess the DNA Damages Cause for the X-Rays

Authors: L. J. Díaz, M. A. Hernández, A. K. Molina, A. Bermúdez, C. Crane, V. M. Pabón

Abstract:

One of the most important biological indicators of exposure to radiation is the micronucleus (MN). This technique is used to determine radiation effects in blood cultures as a biological control and as a complement to physical dosimetry. In Colombia, the need to apply this analysis has emerged because the biological indicator currently most used is chromosomal aberrations (CA); standardization and validation of the MN technique are therefore essential in order to have enough tools to improve radiation protection in the country. In addition, this technique will be applied to the construction of a dose-response curve, which allows estimating the approximate dose received by irradiated people according to the MN frequency found. The steps carried out to accomplish the standardization and validation include the statistical analysis of readings of in vitro peripheral blood cultures by different analysts; the best culture medium and conditions for easy MN detection were also determined.

Keywords: micronuclei, radioprotection, standardization, validation

Procedia PDF Downloads 464
2098 Proposal to Increase the Efficiency, Reliability and Safety of the Centre of Data Collection Management and Their Evaluation Using Cluster Solutions

Authors: Martin Juhas, Bohuslava Juhasova, Igor Halenar, Andrej Elias

Abstract:

This article deals with the possibility of increasing the efficiency, reliability and safety of the system for teledosimetric data collection management and evaluation, as part of a complex study for the activity “Research of data collection, their measurement and evaluation with mobile and autonomous units” within the project “Research of monitoring and evaluation of non-standard conditions in the area of nuclear power plants”. Possible weaknesses in the existing system are identified. A study of available cluster solutions, with the possibility of deploying them in the analysed system, is presented.

Keywords: teledosimetric data, efficiency, reliability, safety, cluster solution

Procedia PDF Downloads 486
2097 Specific Frequency of Globular Clusters in Different Galaxy Types

Authors: Ahmed H. Abdullah, Pavel Kroupa

Abstract:

Globular clusters (GC) are important objects for tracing the early evolution of a galaxy. We study the correlation between the cluster population and the global properties of the host galaxy. We found that the correlation between the cluster population (NGC) and the baryonic mass (Mb) of the host galaxy is best described as NGC ≈ 10^(-5.6038) Mb. In order to understand the origin of the U-shaped relation between the GC specific frequency (SN) and Mb (caused by the high value of SN for dwarf galaxies and giant ellipticals and a minimum SN for intermediate-mass galaxies of ≈ 10^10 M⊙), we derive a theoretical model for the specific frequency (SNth). The theoretical model for SNth is based on the slope of the power-law embedded cluster mass function (β) and different formation time scales (Δt) of the forming galaxy. Our results show good agreement between the observations and the model for certain values of β and Δt. The model is able to reproduce the higher values of SNth with β = 1.5 at intermediate formation time scales.
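
For reference, the conventional definition of the globular cluster specific frequency (Harris and van den Bergh) is given below in LaTeX; the abstract's theoretical SNth(β, Δt) builds on a quantity of this kind, but its exact form is not stated here.

```latex
% Conventional specific-frequency definition (Harris & van den Bergh 1981):
% the GC count normalised to a host luminosity of M_V = -15. The paper's
% theoretical S_{N,\mathrm{th}}(\beta,\Delta t) is not reproduced here.
S_N = N_{GC}\, 10^{\,0.4\,(M_V + 15)}
```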

Keywords: galaxies: dwarf, globular cluster: specific frequency, number of globular clusters, formation time scale

Procedia PDF Downloads 292
2096 Clustering Performance Analysis using New Correlation-Based Cluster Validity Indices

Authors: Nathakhun Wiroonsri

Abstract:

There are various cluster validity measures used for evaluating clustering results. One of the main objectives of using these measures is to seek the optimal unknown number of clusters. Some measures work well for clusters with different densities, sizes and shapes. Yet, one weakness that those validity measures share is that they sometimes provide only one clear optimal number of clusters. That number is actually unknown, and there might be more than one potential sub-optimal option that a user may wish to choose based on different applications. We develop two new cluster validity indices based on the correlation between the actual distance between a pair of data points and the distance between the centroids of the clusters the two points are located in. Our proposed indices consistently yield several peaks at different numbers of clusters, which overcomes the weakness stated above. Furthermore, the introduced correlation can also be used for evaluating the quality of a selected clustering result. Several experiments in different scenarios, including the well-known iris data set and a real-world marketing application, have been conducted to compare the proposed validity indices with several well-known ones.
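
The sketch below shows the underlying construction only: correlating actual pairwise distances with the centroid distances of the clusters containing each pair, scanned over k on the iris data; the paper's indices may normalise or combine this correlation differently.

```python
# Sketch of the core idea: for each partition, correlate actual pairwise
# distances with the distances between the centroids of the clusters the
# two points belong to, and scan this value over k.
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.stats import pearsonr
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

X = load_iris().data
D = squareform(pdist(X))                     # actual pairwise distances
iu = np.triu_indices(len(X), k=1)

for k in range(2, 8):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    centroid_dist = squareform(pdist(km.cluster_centers_))
    # centroid distance of the clusters containing each pair of points
    # (zero when both points share a cluster)
    C = centroid_dist[np.ix_(km.labels_, km.labels_)]
    r, _ = pearsonr(D[iu], C[iu])
    print(f"k={k}: correlation={r:.3f}")
```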

Keywords: clustering algorithm, cluster validity measure, correlation, data partitions, iris data set, marketing, pattern recognition

Procedia PDF Downloads 81
2095 The Use of Ward Linkage in Cluster Integration with a Path Analysis Approach

Authors: Adji Achmad Rinaldo Fernandes

Abstract:

Path analysis is an analytical technique for studying causal relationships between independent and dependent variables. In this study, cluster integration using the Ward linkage method was combined with path analysis across varying numbers of clusters. The variables used are character (x₁), capacity (x₂), capital (x₃), collateral (x₄), and condition of economy (x₅), affecting on-time payment (y₂) through the variable willingness to pay (y₁). The purpose of this study was to compare Ward linkage cluster integration at various numbers of clusters with path analysis to classify willingness to pay (y₁). The data used are primary data from questionnaires filled out by customers of Bank X, collected using purposive sampling. The measurement method used is the average score method. Comparison of the coefficients of determination showed that Ward linkage cluster integration with path analysis on 2 clusters is the best method. Character (x₁), capacity (x₂), capital (x₃), collateral (x₄), and condition of economy (x₅) explain 58.3% of on-time payment (y₂) through willingness to pay (y₁), while the remaining 41.7% is explained by variables outside the model.
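
A rough sketch of the integration step follows: cut a Ward-linkage tree into c clusters and fit the y₁ to y₂ path regression within each cluster, comparing coefficients of determination; the data and coefficients below are synthetic placeholders, not Bank X data.

```python
# Sketch of Ward-linkage cluster integration with a simple path regression
# (x's -> willingness to pay y1 -> on-time pay y2) fitted per cluster.
# All data and coefficients here are synthetic placeholders.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
X = rng.normal(size=(120, 5))                 # x1..x5 (the five C's of credit)
y1 = X @ np.array([0.5, 0.4, 0.3, 0.2, 0.3]) + rng.normal(0, 0.5, 120)
y2 = 0.8 * y1 + rng.normal(0, 0.5, 120)

Z = linkage(X, method="ward")
for c in (2, 3, 4):
    labels = fcluster(Z, t=c, criterion="maxclust")
    r2 = []
    for g in range(1, c + 1):
        idx = labels == g
        y1_g = y1[idx].reshape(-1, 1)
        r2.append(LinearRegression().fit(y1_g, y2[idx]).score(y1_g, y2[idx]))
    print(f"{c} clusters: mean R^2 (y1 -> y2) = {np.mean(r2):.3f}")
```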

Keywords: cluster integration, linkage, path analysis, compliant paying behavior

Procedia PDF Downloads 147
2094 A Near-Optimal Domain Independent Approach for Detecting Approximate Duplicates

Authors: Abdelaziz Fellah, Allaoua Maamir

Abstract:

We propose a domain-independent merging-cluster filter approach, complemented with a set of algorithms, for identifying approximate duplicate entities efficiently and accurately within a single data source and across multiple data sources. The near-optimal merging-cluster filter (MCF) approach is based on the well-tuned Monge-Elkan algorithm and extended with an affine variant of the Smith-Waterman similarity measure. We then present constant, variable, and function threshold algorithms that work conceptually in a divide-merge filtering fashion for detecting near duplicates as hierarchical clusters along with their corresponding representatives. The algorithms take recursive refinement approaches in the spirit of filtering, merging, and updating cluster representatives to detect approximate duplicates at each level of the cluster tree. Experiments show the high effectiveness and accuracy of the MCF approach in detecting approximate duplicates, outperforming the seminal Monge-Elkan algorithm on several real-world benchmarks and generated datasets.
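
The following hedged sketch shows the Monge-Elkan scheme that MCF builds on; the inner token similarity here is a simple normalised ratio from difflib, standing in for the affine-gap Smith-Waterman variant used in the paper.

```python
# Monge-Elkan sketch: each token of one record is matched to its
# best-scoring token in the other record and the maxima are averaged.
from difflib import SequenceMatcher

def token_sim(a: str, b: str) -> float:
    # Stand-in inner similarity; the paper uses an affine Smith-Waterman variant.
    return SequenceMatcher(None, a, b).ratio()

def monge_elkan(record_a: str, record_b: str) -> float:
    tokens_a, tokens_b = record_a.lower().split(), record_b.lower().split()
    if not tokens_a or not tokens_b:
        return 0.0
    return sum(max(token_sim(a, b) for b in tokens_b) for a in tokens_a) / len(tokens_a)

print(monge_elkan("Jon A. Smith", "John Smith"))       # high score: near duplicate
print(monge_elkan("Jon A. Smith", "Maria Gonzalez"))   # low score
```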

Keywords: data mining, data cleaning, approximate duplicates, near-duplicates detection, data mining applications and discovery

Procedia PDF Downloads 358
2093 Lambda-Levelwise Statistical Convergence of a Sequence of Fuzzy Numbers

Authors: F. Berna Benli, Özgür Keskin

Abstract:

Lately, many mathematicians have studied the statistical convergence of sequences of fuzzy numbers. λ-statistical convergence is a kind of convergence lying between ordinary convergence and statistical convergence. In this paper, we introduce a new kind of convergence, λ-levelwise statistical convergence. We then define the concepts of λ-levelwise statistical cluster points and limit points of a sequence of fuzzy numbers, and discuss the relations between the sets of λ-levelwise statistical cluster points and λ-levelwise statistical limit points of sequences of fuzzy numbers. This work is extended by considering when the λ-statistical limit inferior and the λ-statistical limit superior of λ-statistically convergent sequences of fuzzy numbers are equal. Furthermore, a λ-statistical boundedness condition for different sequences of fuzzy numbers is studied.
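
For background, the standard definition of λ-statistical convergence assumed by this line of work is sketched below in LaTeX; it is not the paper's new levelwise notion itself.

```latex
% Standard background definition (not the paper's levelwise notion): let
% \lambda=(\lambda_n) be non-decreasing with \lambda_1=1,
% \lambda_{n+1}\le\lambda_n+1, \lambda_n\to\infty, and I_n=[n-\lambda_n+1,\,n].
% A sequence x=(x_k) is \lambda-statistically convergent to L if, for every
% \varepsilon>0,
\lim_{n\to\infty}\frac{1}{\lambda_n}\,
  \bigl|\{k\in I_n : |x_k-L|\ge\varepsilon\}\bigr| = 0 .
% The levelwise version applies a condition of this kind at the
% \alpha-level sets of a sequence of fuzzy numbers.
```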

Keywords: fuzzy number, λ-levelwise statistical cluster points, λ-levelwise statistical convergence, λ-levelwise statistical limit points, λ-statistical cluster points, λ-statistical convergence, λ-statistical limit points

Procedia PDF Downloads 437
2092 Comparing the Apparent Error Rate of Gender Specifying from Human Skeletal Remains by Using Classification and Cluster Methods

Authors: Jularat Chumnaul

Abstract:

In forensic science, corpses from homicides vary; they may be complete or incomplete, depending on the cause of death or the form of homicide. For example, some corpses are cut into pieces, some are camouflaged by dumping into a river, some are buried, and some are burned to destroy the evidence. If a corpse is incomplete, personal identification becomes difficult because some tissues and bones are destroyed. The most precise method for specifying the gender of a corpse from skeletal remains is DNA identification. However, this method is costly and takes longer, so other identification techniques are often used instead. The first widely used technique is examining the features of the bones. In general, evidence from the corpse, such as pieces of bone, especially the skull and pelvis, can be used to identify gender. To use this technique, forensic scientists require observation skills in order to classify the differences between male and female bones. Although this technique is uncomplicated, saves time and cost, and allows forensic scientists to determine gender fairly accurately (an apparent accuracy rate of 90% or more), its crucial disadvantage is that only certain parts of the skeleton can be used to specify gender, such as the supraorbital ridge, nuchal crest, temporal lobe, mandible, and chin. Therefore, the skeletal remains used have to be complete. The other technique widely used for gender specification in forensic science and archaeology is skeletal measurements. The advantage of this method is that it can use several positions on one bone and can be applied even if the bones are not complete. In this study, classification and cluster analysis are applied to this technique, including the Kth Nearest Neighbor classification, Classification Tree, Ward Linkage cluster, K-means cluster, and Two-Step cluster. The data contain 507 individuals and 9 skeletal (diameter) measurements, and the performance of the five methods is investigated by considering the apparent error rate (APER). The results indicate that the Two-Step Cluster and Kth Nearest Neighbor methods seem suitable for specifying gender from human skeletal remains because they yield small apparent error rates of 0.20% and 4.14%, respectively. On the other hand, the Classification Tree, Ward Linkage Cluster, and K-means Cluster methods are not appropriate since they yield large apparent error rates of 10.65%, 10.65%, and 16.37%, respectively. However, there are other ways to evaluate classification performance, such as estimating the error rate using the holdout procedure or using misclassification costs, and different methods can lead to different conclusions.
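
The sketch below shows how an apparent error rate (APER) of the kind reported above can be computed by resubstitution with a k-nearest-neighbour classifier; the data are a synthetic stand-in for the 507 × 9 skeletal measurements.

```python
# Apparent error rate (APER) sketch: fit a classifier on all records and
# count how many of those same records it misclassifies (resubstitution
# error), which is exactly why holdout estimates can disagree with it.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for 507 individuals x 9 diameter measurements.
X, sex = make_classification(n_samples=507, n_features=9, n_informative=6,
                             n_classes=2, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5).fit(X, sex)
predicted = knn.predict(X)
aper = np.mean(predicted != sex)
print(f"APER = {aper:.2%}")
```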

Keywords: skeletal measurements, classification, cluster, apparent error rate

Procedia PDF Downloads 227
2091 Innovation Management Strategy towards the Detroit of Asia

Authors: Jarunee Wonglimpiyarat

Abstract:

This paper explores the innovation management strategy of Thailand in moving towards becoming the Detroit of Asia. The study analyses Thailand's automotive cluster based on Porter's Diamond Model and the national innovation system (NIS) framework. A qualitative methodology was carried out, using semi-structured interviews with players in the Thai automotive industry. Thailand took a different NIS approach by pursuing an Original Equipment Manufacturer (OEM) strategy to attract foreign investment in building its automotive cluster, unlike other Asian countries that competed with Own Brand Manufacture (OBM) strategies. The findings provide useful lessons for other newly industrialized countries (NICs) in adopting cluster policies to move up the technological ladder.

Keywords: innovation management strategy, national innovation system (NIS), Detroit of Asia, original equipment manufacturer (OEM)

Procedia PDF Downloads 318
2090 Impacts of Teachers’ Cluster Model Meeting Intervention on Pupils’ Learning, Academic Achievement and Attitudinal Development in Oyo State, Nigeria

Authors: Olusola Joseph Adesina, Abiodun Ezekiel Adesina

Abstract:

Efforts at improving the falling standard of education in the country call for a need-based assessment of the primary tier of education in Nigeria. The teachers' cluster meeting intervention is a step towards enhancing teachers' professional competency and efficient and effective pupils' academic achievement and attitudinal development. The study thus determined the impact of the intervention on pupils' achievement in Oyo State, Nigeria. Three research questions and four hypotheses guided the study. A pretest-posttest control group quasi-experimental design was adopted for the study. Eight intact classes from eight different schools were randomly assigned to treatment and control groups. Two response instruments, the pupils' academic achievement test (PAAT; r = 0.87) and the pupils' attitude to lesson scale (PALS; r = 0.80), were used for data collection. Means, standard deviations and analysis of covariance (ANCOVA) were used to analyse the collected data. The results showed that the teachers' cluster meetings had a significant impact on pupils' academic achievement (F (1,327) = 41.79; p < 0.05) and attitudinal development (F (1,327) = 26.01; p < 0.05) in the core subjects of primary schools in Oyo State, Nigeria. The study therefore recommended, among others, that teachers' cluster meetings should be sustained for teachers' professional development and pupils' improvement in the State.

Keywords: teachers’ cluster meeting, pupils’ academic achievement, pupils’ attitudinal development, academic achievement

Procedia PDF Downloads 435
2089 A Clustering Algorithm for Massive Texts

Authors: Ming Liu, Chong Wu, Bingquan Liu, Lei Chen

Abstract:

Internet users face massive amounts of textual data every day. Organizing texts into categories can help users dig useful information out of large-scale text collections. Clustering is, in fact, one of the most promising tools for categorizing texts due to its unsupervised character. Unfortunately, most traditional clustering algorithms lose their effectiveness on large-scale text collections, mainly because of the high-dimensional vectors generated from texts. To cluster large-scale text collections effectively and efficiently, this paper proposes a vector-reconstruction-based clustering algorithm. Only the features that can represent a cluster are preserved in the cluster's representative vector. The algorithm alternately repeats two sub-processes until it converges. One is the partial tuning sub-process, where feature weights are fine-tuned iteratively; to accelerate clustering, an intersection-based similarity measurement and its corresponding neuron adjustment function are proposed and implemented in this sub-process. The other is the overall tuning sub-process, where features are reallocated among different clusters and features useless for representing a cluster are removed from its representative vector. Experimental results on three text collections (two small-scale and one large-scale) demonstrate that our algorithm obtains high quality on both small-scale and large-scale text collections.

Keywords: vector reconstruction, large-scale text clustering, partial tuning sub-process, overall tuning sub-process

Procedia PDF Downloads 405
2088 An Enhanced Approach in Validating Analytical Methods Using Tolerance-Based Design of Experiments (DoE)

Authors: Gule Teri

Abstract:

The effective validation of analytical methods forms a crucial component of pharmaceutical manufacturing. However, traditional validation techniques can occasionally fail to fully account for inherent variations within datasets, which may result in inconsistent outcomes. This deficiency in validation accuracy is particularly noticeable when quantifying low concentrations of active pharmaceutical ingredients (APIs), excipients, or impurities, introducing a risk to the reliability of the results and, subsequently, the safety and effectiveness of the pharmaceutical products. In response to this challenge, we introduce an enhanced, tolerance-based Design of Experiments (DoE) approach for the validation of analytical methods. This approach distinctly measures variability with reference to tolerance or design margins, enhancing the precision and trustworthiness of the results. This method provides a systematic, statistically grounded validation technique that improves the truthfulness of results. It offers an essential tool for industry professionals aiming to guarantee the accuracy of their measurements, particularly for low-concentration components. By incorporating this innovative method, pharmaceutical manufacturers can substantially advance their validation processes, subsequently improving the overall quality and safety of their products. This paper delves deeper into the development, application, and advantages of this tolerance-based DoE approach and demonstrates its effectiveness using High-Performance Liquid Chromatography (HPLC) data for verification. This paper also discusses the potential implications and future applications of this method in enhancing pharmaceutical manufacturing practices and outcomes.

Keywords: tolerance-based design, design of experiments, analytical method validation, quality control, biopharmaceutical manufacturing

Procedia PDF Downloads 40
2087 An Energy-Balanced Clustering Method on Wireless Sensor Networks

Authors: Yu-Ting Tsai, Chiun-Chieh Hsu, Yu-Chun Chu

Abstract:

In recent years, due to the development of wireless network technology, many researchers have devoted themselves to the study of wireless sensor networks. Applications of wireless sensor networks mainly use the sensor nodes to collect the required information and send it back to the users. Since the sensed area is often difficult to reach, there are many restrictions on the design of the sensor nodes, the most important of which is their limited energy. Because of this limited energy, researchers have proposed a number of ways to reduce energy consumption and balance the load of sensor nodes in order to increase the network lifetime. In this paper, we propose the Energy-Balanced Clustering method with Auxiliary Members on Wireless Sensor Networks (EBCAM), based on cluster routing. The main purpose is to balance the energy consumption over the sensed area and spread the distribution of dead nodes in order to avoid the excessive energy consumption caused by increasing transmission distance. In addition, we use the residual energy and average energy consumption of the nodes within a cluster to choose the cluster heads, use multi-hop transmission to deliver the data, and dynamically adjust the transmission radius according to the load conditions. We also use auxiliary cluster members to change the delivery path according to the residual energy of the cluster head in order to balance its load. Finally, we compare the proposed method with related algorithms via simulated experiments and analyze the results. The proposed method outperforms the other algorithms in the number of rounds sustained and the average energy consumption.

Keywords: auxiliary nodes, cluster, load balance, routing algorithm, wireless sensor network

Procedia PDF Downloads 253
2086 Industry 4.0 Platforms as 'Cluster' Ecosystems for Small and Medium Enterprises (SMEs)

Authors: Vivek Anand, Rainer Naegele

Abstract:

Industry 4.0 is a global mega-trend revolutionizing the world of advanced manufacturing, but also bringing up challenges for SMEs. In response, many regional, as well as digital Industry 4.0 Platforms, have been set up to boost the competencies of established enterprises as well as SMEs. The concept of 'Clusters' is a policy tool that aims to be a starting point to establish sustainable and self-supporting structures in industries of a region by identifying competencies and supporting cluster actors with services that match their growth needs. This paper is motivated by the idea that Clusters have the potential to enable firms, particularly SMEs, to accelerate the innovation process and transition to digital technologies. In this research, the efficacy of Industry 4.0 platforms as Cluster ecosystems is evaluated, especially for SMEs. Focusing on the Baden Wurttemberg region in Germany, an action research method is employed to study how SMEs leverage other actors on Industry 4.0 Platforms to further their Industry 4.0 journeys. The aim is to evaluate how such Industry 4.0 platforms stimulate innovation, cooperation and competitiveness. Additionally, the barriers to these platforms fulfilling their promise to serve as capacity building cluster ecosystems for SMEs in a region will also be identified. The findings will be helpful for academicians and policymakers alike, who can leverage a ‘cluster policy’ to enable Industry 4.0 ecosystems in their regions. Furthermore, relevant management and policy implications stem from the analysis. This will also be of interest to the various players in a cluster ecosystem - like SMEs and service providers - who benefit from the cooperation and competition. The paper will improve the understanding of how a dialogue orientation, a bottom-up approach and active integration of all involved cluster actors enhance the potential of Industry 4.0 Platforms. A strong collaborative culture is a key driver of digital transformation and technology adoption across sectors, value chains and supply chains; and will position Industry 4.0 Platforms at the forefront of the industrial renaissance. Motivated by this argument and based on the results of the qualitative research, a roadmap will be proposed to position Industry 4.0 Platforms as effective clusters ecosystems to support Industry 4.0 adoption in a region.

Keywords: cluster policy, digital transformation, industry 4.0, innovation clusters, innovation policy, SMEs and startups

Procedia PDF Downloads 185
2085 The Detection of Implanted Radioactive Seeds on Ultrasound Images Using Convolution Neural Networks

Authors: Edward Holupka, John Rossman, Tye Morancy, Joseph Aronovitz, Irving Kaplan

Abstract:

A common modality for the treatment of early-stage prostate cancer is the implantation of radioactive seeds directly into the prostate. The radioactive seeds are positioned inside the prostate to achieve optimal radiation dose coverage of the prostate. These radioactive seeds are positioned inside the prostate using transrectal ultrasound imaging. Once all of the planned seeds have been implanted, two-dimensional transaxial transrectal ultrasound images separated by 2 mm are obtained throughout the prostate, beginning at the base of the prostate up to and including the apex. A common deep neural network, called DetectNet, was trained to automatically determine the position of the implanted radioactive seeds within the prostate under ultrasound imaging. The network was trained using 950 training ultrasound images and 90 validation ultrasound images. The commonly used metrics for successful training were used to evaluate the efficacy and accuracy of the trained deep neural network and resulted in a loss_bbox (train) = 0.00, loss_coverage (train) = 1.89e-8, loss_bbox (validation) = 11.84, loss_coverage (validation) = 9.70, mAP (validation) = 66.87%, precision (validation) = 81.07%, and recall (validation) = 82.29%, where train and validation refer to the training and validation image sets, respectively. On the hardware platform used, the training expended 12.8 seconds per epoch, and the network was trained for over 10,000 epochs. In addition, the seed locations determined by the deep neural network were compared to the seed locations determined by commercial software based on a CT obtained one to three months after the implant. The deep learning approach was within 2.29 mm of the seed locations determined by the commercial software. The deep learning approach to the determination of radioactive seed locations is robust, accurate, and fast, and is well within spatial agreement with the gold standard of CT-determined seed coordinates.

Keywords: prostate, deep neural network, seed implant, ultrasound

Procedia PDF Downloads 166
2084 The Effects of Yield and Yield Components of Some Quality Increase Applications on Razakı Grape Variety

Authors: Şehri Çınar, Aydın Akın

Abstract:

This study was conducted on the Razakı grape variety (Vitis vinifera L.); the 19-year-old vines, grown on 5 BB rootstock, were studied during the 2014 growing season in Afyon province, Turkey. The effects of Control (C), 1/3 Cluster Tip Reduction (1/3 CTR), Shoot Tip Reduction (STR), 1/3 CTR + STR, Boric Acid (BA), 1/3 CTR + BA, STR + BA, and 1/3 CTR + STR + BA applications on the yield and yield components of the Razakı grape variety were investigated. The highest fresh grape yield (7.74 kg/vine) was obtained with the C application, the highest cluster weight (244.62 g) with the STR application, the highest 100-berry weight (504.08 g) with the C application, the highest maturity index (36.89) with the BA application, the highest must yield (695.00 ml) with the BA and 1/3 CTR + STR + BA applications, the highest intensity of L* color (46.93 and 46.10) with the STR and 1/3 CTR + STR + BA applications, the highest intensity of a* color (-5.37 and -5.01) with the 1/3 CTR + STR and STR applications, and the highest intensity of b* color (12.59) with the STR application. Shoot tip reduction to increase cluster weight and boric acid application to increase the maturity index of the Razakı grape variety can be recommended.

Keywords: razakı, 1/3 cluster tip reduction, shoot tip reduction, boric acid, yield and yield components

Procedia PDF Downloads 439
2083 Proposing an Algorithm to Cluster Ad Hoc Networks, Modulating Two Levels of Learning Automaton and Nodes Additive Weighting

Authors: Mohammad Rostami, Mohammad Reza Forghani, Elahe Neshat, Fatemeh Yaghoobi

Abstract:

An Ad Hoc network consists of wireless mobile equipment which connects to each other without any infrastructure. The best way to form a hierarchical structure is clustering, and various clustering methods can form more or less stable clusters depending on the nodes' mobility. In this research we propose an algorithm which allocates a weight to each node based on factors such as link stability and power reduction rate. According to the weights allocated in the first phase, the cellular learning automaton picks out, in the second phase, the nodes which are candidates for being cluster head. In the third phase, the learning automaton selects the cluster head nodes and member nodes and forms the cluster. Thus, this automaton learns from its environment and can form clusters that are optimized in terms of power consumption and link stability. To simulate the proposed algorithm we used OMNeT++ 4.2.2. Simulation results indicate that the newly formed clusters have a longer lifetime than those of previous algorithms and strongly decrease the network load by reducing the update rate.

Keywords: mobile Ad Hoc networks, clustering, learning automaton, cellular automaton, battery power

Procedia PDF Downloads 377
2082 Verification and Validation of Simulated Process Models of KALBR-SIM Training Simulator

Authors: T. Jayanthi, K. Velusamy, H. Seetha, S. A. V. Satya Murty

Abstract:

Verification and Validation of simulated process models is the most important phase of the simulator life cycle. Evaluation of simulated process models based on Verification and Validation techniques checks the closeness of each component model (in a simulated network) to the real system/process with respect to dynamic behaviour under steady state and transient conditions. The process of Verification and Validation helps in qualifying the process simulator for its intended purpose, whether that is providing comprehensive training or design verification. In general, model verification is carried out by comparing simulated component characteristics with the original requirements to ensure that each step in the model development process completely incorporates all the design requirements. Validation testing is performed by comparing the simulated process parameters to the actual plant process parameters, either in standalone mode or in integrated mode. A Full Scope Replica Operator Training Simulator for the Prototype Fast Breeder Reactor (PFBR), named KALBR-SIM (Kalpakkam Breeder Reactor Simulator), has been developed at IGCAR, Kalpakkam, India, with the main participants being engineers/experts from the modeling team and the process design and instrumentation and control design teams. This paper discusses the Verification and Validation process in general, the evaluation procedure adopted for the PFBR operator training simulator, the methodology followed for verifying the models, and the reference documents and standards used. It details the importance of internal validation by design experts, subsequent validation by an external agency consisting of experts from various fields, model improvement by tuning based on experts' comments, final qualification of the simulator for the intended purpose, and the difficulties faced while coordinating the various activities.

Keywords: Verification and Validation (V&V), Prototype Fast Breeder Reactor (PFBR), Kalpakkam Breeder Reactor Simulator (KALBR-SIM), steady state, transient state

Procedia PDF Downloads 228
2081 Design and Optimization of Open Loop Supply Chain Distribution Network Using Hybrid K-Means Cluster Based Heuristic Algorithm

Authors: P. Suresh, K. Gunasekaran, R. Thanigaivelan

Abstract:

Radio frequency identification (RFID) technology has been attracting considerable attention with the expectation of improved supply chain visibility for consumer goods, apparel, and pharmaceutical manufacturers, as well as retailers and government procurement agencies. It is also expected to improve the consumer shopping experience by making it more likely that the products they want to purchase are available. Recent announcements from some key retailers have brought interest in RFID to the forefront. A modified K-Means Cluster based Heuristic approach, a Hybrid Genetic Algorithm (GA) - Simulated Annealing (SA) approach, a Hybrid K-Means Cluster based Heuristic-GA, and a Hybrid K-Means Cluster based Heuristic-GA-SA for the Open Loop Supply Chain Network problem are proposed. The study incorporated a uniform crossover operator and a combined crossover operator in the GAs for solving the open loop supply chain distribution network problem. The algorithms are tested on 50 randomly generated data sets and compared with each other. The results of the numerical experiments show that the Hybrid K-Means Cluster based Heuristic-GA-SA shows superior performance to the other methods for solving the open loop supply chain distribution network problem.

Keywords: RFID, supply chain distribution network, open loop supply chain, genetic algorithm, simulated annealing

Procedia PDF Downloads 126
2080 Wind Velocity Climate Zonation Based on Observation Data in Indonesia Using Cluster and Principal Component Analysis

Authors: I Dewa Gede Arya Putra

Abstract:

Principal Component Analysis (PCA) is a mathematical procedure that uses an orthogonal transformation to convert a set of possibly correlated variables into a set of uncorrelated components. This can support the clustering of wind speed characteristics in Indonesia. This study uses 30 years of daily wind speed observations from the Site Meteorological Station network. Multicollinearity tests were also performed on all of these data before clustering with PCA. The results show that the first four principal components explain more than 80% of the total variance and were used for clustering. Division into clusters using Ward's method yielded 3 types of clusters. Cluster 1 covers the central part of Sumatra Island, northern Kalimantan, northern Sulawesi, and northern Maluku, with a climatological wind speed pattern that has no annual cycle and weak speeds throughout the year, ranging from 0 to 1.5 m/s. Cluster 2 covers the northern part of Sumatra Island, South Sulawesi, Bali, and northern Papua, with a climatological wind speed pattern that has annual cycle variations with low speeds ranging from 1 to 3 m/s. Cluster 3 covers the eastern part of Java Island, the Southeast Nusa Islands, and the southern Maluku Islands, with a climatological wind speed pattern that has annual cycle variations with high speeds ranging from 1 to 4.5 m/s.
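
The following sketch mirrors the pipeline described above (standardise, keep components explaining about 80% of the variance, cut a Ward dendrogram into three zones); the station matrix is a random placeholder for the 30-year observational record.

```python
# Zonation pipeline sketch: standardise station-wise wind-speed series, keep
# the principal components explaining ~80% of the variance, and cut a
# Ward-linkage dendrogram into three clusters. Data are placeholders.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
stations = rng.normal(size=(60, 365))        # 60 stations x one year of daily means

X = StandardScaler().fit_transform(stations)
pca = PCA(n_components=0.80)                 # keep components up to 80% variance
scores = pca.fit_transform(X)
print("components kept:", pca.n_components_)

Z = linkage(scores, method="ward")
zones = fcluster(Z, t=3, criterion="maxclust")
print("stations per zone:", np.bincount(zones)[1:])
```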

Keywords: PCA, cluster, Ward's method, wind speed

Procedia PDF Downloads 166