Search results for: consensus clustering
791 Embedded Hybrid Intuition: A Deep Learning and Fuzzy Logic Approach to Collective Creation and Computational Assisted Narratives
Authors: Roberto Cabezas H
Abstract:
The current work presents the methodology developed to create narrative lighting spaces for the multimedia performance piece 'cluster: the vanished paradise.' This empirical research focuses on exploring unconventional roles for machines in subjective creative processes by delving into the semantics of data and machine intelligence algorithms in hybrid technological, creative contexts to expand epistemic domains through human-machine cooperation. The creative process in scenic and performing arts is guided mostly by intuition; from that idea, we developed an approach to embed collective intuition in computational creative systems by joining the properties of Generative Adversarial Networks (GANs) and fuzzy clustering in a semi-supervised data creation and analysis pipeline. The model uses GANs to learn from phenomenological data (data generated from experience with lighting scenography) and algorithmic design data (data augmented by procedural design methods); fuzzy logic clustering is then applied to the artificially created GAN data to define narrative transitions built on the membership index. This process allowed for the creation of simple and complex spaces with expressive capabilities, based on position and light intensity as the parameters guiding the narrative. Hybridization comes not only from the human-machine symbiosis but also from the integration of different techniques in the implementation of the aided design system. Machine intelligence tools as proposed in this work are well suited to redefine collaborative creation by learning to express and expand a conglomerate of ideas and a wide range of opinions for the creation of sensory experiences. We found in GANs and fuzzy logic an ideal toolset for developing new computational models based on interaction, learning, emotion and imagination that expand the traditional algorithmic model of computation. Keywords: fuzzy clustering, generative adversarial networks, human-machine cooperation, hybrid collective data, multimedia performance
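Illustrative sketch (not the authors' code): a plain fuzzy c-means pass over hypothetical two-dimensional lighting states (position, intensity), showing the membership index the abstract uses to drive narrative transitions. The data, cluster count, and fuzzifier m are assumptions.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=3, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means. Returns centers and the membership matrix U,
    where U[i, k] is the degree to which sample i belongs to cluster k
    (each row sums to 1) -- the 'membership index' of the abstract."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(n_clusters), size=len(X))
    for _ in range(n_iter):
        W = U ** m                                        # fuzzified weights
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        ratio = d[:, :, None] / d[:, None, :]             # d_ik / d_ij
        U = 1.0 / (ratio ** (2.0 / (m - 1.0))).sum(axis=2)
    return centers, U

# Hypothetical lighting states: columns are (position, intensity).
states = np.random.default_rng(1).random((60, 2))
centers, U = fuzzy_c_means(states)
# States whose top membership is weak sit 'between' clusters -- candidate
# narrative transition points.
transitions = np.where(U.max(axis=1) < 0.5)[0]
```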
Procedia PDF Downloads 142
790 Remote Assessment and Change Detection of Green LAI of Cotton Crop Using Different Vegetation Indices
Authors: Ganesh B. Shinde, Vijaya B. Musande
Abstract:
Timely identification of cotton crop has significant advantages for food, economic and environmental applications. Because of these advantages, the accurate detection of cotton crop regions using a supervised learning procedure is a challenging problem in remote sensing. Classifiers applied directly to the image play a major role here, but the results are not satisfactory. To further improve effectiveness, a variety of vegetation indices has been proposed in the literature, and the remaining challenge is to find the vegetation indices best suited to cotton crop identification within the proposed methodology. Accordingly, fuzzy c-means clustering is combined with a neural network trained by the Levenberg-Marquardt algorithm for cotton crop classification. To evaluate the proposed method, five LISS-III satellite images were used, and the experimentation was done with six vegetation indices: Simple Ratio, Normalized Difference Vegetation Index, Enhanced Vegetation Index, Green Atmospherically Resistant Vegetation Index, Wide-Dynamic Range Vegetation Index and Green Chlorophyll Index. Along with these indices, Green Leaf Area Index was also considered for investigation. The Green Atmospherically Resistant Vegetation Index outperformed all other indices, reaching an average accuracy of 95.21%. Keywords: fuzzy c-means clustering (FCM), neural network, Levenberg-Marquardt (LM) algorithm, vegetation indices
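Illustrative sketch (not the authors' code): per-pixel formulas for the six indices named above, in NumPy. The GARI gamma and WDRVI alpha coefficients are conventional literature defaults, not values from the paper.

```python
import numpy as np

def vegetation_indices(nir, red, green, blue, gamma=1.7, alpha=0.2):
    """Common band-ratio vegetation indices (bands as float arrays)."""
    sr    = nir / red                                         # Simple Ratio
    ndvi  = (nir - red) / (nir + red)                         # NDVI
    evi   = 2.5 * (nir - red) / (nir + 6*red - 7.5*blue + 1)  # Enhanced VI
    gcorr = green - gamma * (blue - red)                      # atmosphere-corrected green
    gari  = (nir - gcorr) / (nir + gcorr)                     # GARI
    wdrvi = (alpha*nir - red) / (alpha*nir + red)             # Wide-Dynamic Range VI
    ci_g  = nir / green - 1.0                                 # Green Chlorophyll Index
    return dict(SR=sr, NDVI=ndvi, EVI=evi, GARI=gari, WDRVI=wdrvi, CIgreen=ci_g)

# Stand-in reflectance bands for a small scene.
bands = {k: np.random.rand(64, 64) + 0.1 for k in ("nir", "red", "green", "blue")}
vi = vegetation_indices(**bands)
```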
Procedia PDF Downloads 318
789 The Effect of Classroom Atmospherics on Second Language Learning
Authors: Sresha Yadav, Ishwar Kumar
Abstract:
Second language learning is an important area of research in the language and linguistics domains. The literature suggests that several factors impact second language learning, including age, motivation, objectives, teacher, instructional material, classroom interaction, intelligence, previous background, previous linguistic experience, and other student characteristics. Previous researchers have also highlighted that classroom atmospherics has a significant impact on learning as well as on the performance of students. However, the impact of classroom atmospherics on second language learning is still unknown in the existing literature. Therefore, the purpose of the present study is to explore whether classroom atmospherics has an impact on second language learning and, if it does, to explore the nature of such a relationship. The present study examines the impact of classroom atmospherics on second language learning by delving into the existing literature on the factors that impact second language learning, the classroom atmospherics that impact language learning, and the metrics through which such learning impacts can be measured. Based on the findings of the literature review, the researchers adopted a clustering approach for the categorization and positioning of various measures of second language learning. Building on this clustering approach, the researchers measured the impact of classroom atmospherics on second language learning by drawing a student sample of 80 respondents. The results of the study uncover various basic premises of second language learning, especially with regard to classroom atmospherics. The present study is important not only from the point of view of language learning; implications can also be drawn for the design of classroom atmospherics, environmental psychology, anthropometrics, etc. Keywords: classroom atmospherics, cluster analysis, linguistics, second language learning
Procedia PDF Downloads 456
788 Physiotherapy Assessment of People with Neurological Conditions in Australia: A National Survey of Clinical Practice
Authors: Jill Garner, Belinda Lange, Sheila Lennon, Maayken van den Berg
Abstract:
Currently, approximately one billion people worldwide are affected by a neurological condition, many of whom are assessed and treated by a physiotherapist in a variety of settings. There is a lack of consensus in the literature on what physiotherapists clinically assess in people with neurological conditions. This study aimed to explore assessment in people with neurological conditions, including how health care setting, experience, and therapeutic approach may influence neurological assessment. A national survey targeted Australian physiotherapists who assess adults with neurological conditions as part of their clinical practice. The survey consisted of 39 questions and was distributed to physiotherapists through the Australian Physiotherapy Association and Chief Allied Health Officers across Australia, and advertised on the National Neurological Physiotherapy Facebook page. In total, 395 respondents from all states within Australia consented to the survey. Most respondents were female (85.4%), with a mean (SD) age of 35.7 years. Respondents reported working clinically in acute, outpatient, and community settings. Stroke was the most assessed condition (58.0%). There is variability in the domains assessed by Australian physiotherapists, with common inclusions of balance, muscle strength, gait, falls and safety, function, goal setting, range of movement, pain, coordination, activity tolerance, postural alignment and symmetry, and upper limb. There is little evidence to support what physiotherapists assess in practice, in different settings, and in different states within Australia, and not enough information to develop a decision tree regarding what is important for assessment in different settings. Further research is needed to explore this area and develop a consensus around best practices. Keywords: physiotherapy, neurological, assessment, domains
Procedia PDF Downloads 92
787 A Comparative Study on the Effects of Different Clustering Layouts and Geometry of Urban Street Canyons on Urban Heat Island in Residential Neighborhoods of Kolkata
Authors: Shreya Banerjee, Roshmi Sen, Subrata Chattopadhyay
Abstract:
Urbanization during the second half of the last century has created many serious environment-related issues leading to global warming and climate change. India is no exception, as the country is also facing the problems of global warming and urban heat islands (UHI) in all its major metropolises. This paper discusses the effect of different housing cluster layouts, site geometry, and the geometry of urban street canyons on the urban heat island profile. The study is carried out using the three-dimensional microclimatic computational fluid dynamics model ENVI-met version 3.1. Simulations are run for a typical summer day, 21st June 2015, in four different residential neighborhoods in the city of Kolkata, which predominantly belongs to a warm-humid monsoon climate. The results show the changing pattern of the urban heat island profile with respect to different clustering layouts, geometry, and morphology of urban street canyons. The comparison between the four neighborhoods shows that the different microclimatic variables are strongly dependent on the neighborhood layout pattern and geometry. The inferences obtained from this study can inform the formulation of neighborhood design by-laws that will attenuate the urban heat island effect. Keywords: urban heat island, neighborhood morphology, site microclimate, ENVI-met, numerical analysis
Procedia PDF Downloads 368
786 Genetic Diversity in Capsicum Germplasm Based on Inter Simple Sequence Repeat Markers
Authors: Siwapech Silapaprayoon, Januluk Khanobdee, Sompid Samipak
Abstract:
Chili peppers are the fruits of Capsicum pepper plants, well known for their fiery burning sensation on the tongue after consumption. They are members of the Solanaceae, or common nightshade, family, along with potato, tomato and eggplant. Thai cuisine has gained popularity for its distinct flavors due to the use of various spices and the heat from the addition of chili pepper. Though used in small quantities in each dish, chili pepper holds a special place in Thai cuisine. There are many varieties of chili pepper in Thailand, and thirty accessions were collected at Rajamangala University of Technology Lanna, Lampang, Thailand. To effectively manage any germplasm, it is essential to know the diversity and relationships among its members. Thirty-six Inter Simple Sequence Repeat (ISSR) DNA markers were used to analyze the germplasm. A total of 335 polymorphic bands was obtained, giving an average of 9.3 alleles per marker. Clustering of the data by the unweighted pair-group method with arithmetic mean (UPGMA) using NTSYS-pc software indicated that the accessions showed varied levels of genetic similarity, with the similarity coefficient index ranging from 0.57 to 1.00, indicating significant levels of variation. At a simple matching (SM) coefficient of 0.81, the germplasm separated into four groups. Phenotypic variation was discussed in the context of the phylogenetic tree clustering. Keywords: diversity, germplasm, chili pepper, ISSR
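Illustrative sketch (not the authors' NTSYS-pc workflow): UPGMA clustering of a made-up similarity matrix with SciPy, cutting the dendrogram at the 0.81 similarity level mentioned above.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Hypothetical pairwise similarity matrix for a handful of accessions
# (values stand in for ISSR band-sharing coefficients, not the study's data).
S = np.array([[1.00, 0.85, 0.60, 0.58],
              [0.85, 1.00, 0.62, 0.57],
              [0.60, 0.62, 1.00, 0.83],
              [0.58, 0.57, 0.83, 1.00]])

D = 1.0 - S                                                   # similarity -> distance
Z = linkage(squareform(D, checks=False), method="average")    # 'average' = UPGMA
# Cut the tree at the distance equivalent of a 0.81 similarity threshold.
groups = fcluster(Z, t=1.0 - 0.81, criterion="distance")
print(groups)                                                 # group label per accession
```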
Procedia PDF Downloads 152
785 Quantitative Trait Loci Analysis in Multiple Sorghum Mapping Populations Facilitates the Dissection of Genetic Control of Drought Tolerance Related Traits in Sorghum [Sorghum bicolor (Moench)]
Authors: Techale B., Hongxu Dong, Mihrete Getinet, Aregash Gabizew, Andrew H. Paterson, Kassahun Bantte
Abstract:
The genetic architecture of drought tolerance is expected to involve multiple loci that are unlikely to all segregate for alternative alleles in a single bi-parental population. Therefore, the identification of quantitative trait loci (QTL) that are expressed in the diverse genetic backgrounds of multiple bi-parental populations provides evidence about both background-specific and common genetic variants. The purpose of this study was to map QTL related to drought tolerance using three connected mapping populations of different genetic backgrounds to gain insight into the genomic landscape of this important trait in elite Ethiopian germplasm. The three bi-parental populations, each with 207 F₂:₃ lines, were evaluated using an alpha lattice design with two replications under two moisture stress environments. Drought tolerance related traits were analyzed separately for each population using composite interval mapping, finding a total of 105 QTLs. All the QTLs identified from the individual populations were projected onto a combined consensus map, comprising a total of 25 meta-QTLs for seven traits. The consensus map allowed us to deduce the locations of a larger number of markers than possible in any individual map, providing a reference for genetic studies in different genetic backgrounds. The meta-QTLs identified in this study could be used for marker-assisted breeding programs in sorghum after validation. Only one trait, reduced leaf senescence, showed a striking bias of allele distribution, indicating substantial standing variation among present varieties that might be employed in improving the drought tolerance of Ethiopian and other sorghums. Keywords: drought tolerance, mapping populations, meta-QTL, QTL mapping, sorghum
Procedia PDF Downloads 180
784 Comparison of Blockchain Ecosystem for Identity Management
Authors: K. S. Suganya, R. Nedunchezhian
Abstract:
In recent years, blockchain technology has been regarded as the most significant discovery of this digital era after the Internet and cloud computing. A blockchain is a simple, distributed public ledger that records users' transaction details in blocks. The global copy of each block is shared among all peer-to-peer network users after validation by the blockchain miners. Once a block is validated and accepted, it cannot be altered by any user, making transactions trust-free. Blockchain also resolves the problem of double-spending using traditional cryptographic methods. Since the advent of Bitcoin, blockchain has been the backbone for all its transactions, but in recent years it has found roots and uses in many fields such as smart contracts, smart city management, and healthcare. Identity management against digital identity theft has become a major concern among financial and other organizations. To counter digital identity theft, blockchain technology can be combined with existing identity management systems: a distributed public ledger holds an individual's identity information, such as digital birth certificates, citizenship numbers, bank details, voter details, and driving licenses, in blocks verified on the blockchain, making it time-stamped, unforgeable, and publicly visible to any legitimate user. The main challenge in using blockchain technology to prevent digital identity theft is ensuring the pseudo-anonymity and privacy of users. This survey paper studies blockchain concepts, consensus protocols, and various blockchain-based digital identity management systems along with their research scope. It also discusses the role of blockchain in COVID-19 pandemic management through self-sovereign identity and supply chain management. Keywords: blockchain, consensus protocols, bitcoin, identity theft, digital identity management, pandemic, COVID-19, self-sovereign identity
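Illustrative sketch (a generic simplification, not any surveyed platform): a minimal hash-chained ledger in Python showing why an accepted block cannot be silently altered.

```python
import hashlib, json, time

def block_hash(block):
    """Deterministic SHA-256 over the block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def new_block(chain, transactions):
    """Append a block whose hash commits to the previous block, so altering
    any earlier block breaks every later link."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "time": time.time(),
             "transactions": transactions, "prev_hash": prev}
    block["hash"] = block_hash({k: v for k, v in block.items() if k != "hash"})
    chain.append(block)
    return chain

chain = []
new_block(chain, [{"from": "alice", "to": "bob", "amount": 5}])
new_block(chain, [{"from": "bob", "to": "carol", "amount": 2}])

# Integrity check: recompute every hash and link; tampering with any past
# block makes this return False.
ok = all(b["prev_hash"] == (chain[i - 1]["hash"] if i else "0" * 64)
         and b["hash"] == block_hash({k: v for k, v in b.items() if k != "hash"})
         for i, b in enumerate(chain))
print(ok)
```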
Procedia PDF Downloads 130
783 Towards Consensus: Mapping Humanitarian-Development Integration Concepts and Their Interrelationship over Time
Authors: Matthew J. B. Wilson
Abstract:
Disaster risk reduction relies heavily on the effective cooperation of humanitarian and development actors, particularly in the wake of a disaster, in implementing lasting recovery measures that better protect communities from disasters to come. This fits within a broader discussion around integrating humanitarian and development work stretching back to the 1980s. Over time, a number of key concepts have been put forward to define these goals and this relationship, including Linking Relief, Rehabilitation, and Development (LRRD), Early Recovery (ER), 'Build Back Better' (BBB), and the most recent 'Humanitarian-Development-Peace Nexus' or 'Triple Nexus' (HDPN). While this discussion has evolved greatly over time, from a continuum to a more integrative, synergistic relationship, there remains a lack of consensus around how to describe it, and as such, the reality of effectively closing this gap has yet to be seen. The objective of this research was twofold: first, to map the four identified concepts (LRRD, ER, BBB and HDPN) as used in the literature since 1995 to understand the overall trends in how this relationship is discussed; second, to map articles referencing a combination of these concepts to understand their interrelationship. A scoping review was conducted for each concept. Results were gathered from Google Scholar by first inputting specific Boolean search phrases for each concept as it related to disasters, for each year since 1995, to identify the total number of articles discussing each concept over time. A second search was then done by pairing concepts together within a Boolean search phrase and entering the results into a matrix to understand how many articles referenced more than one of the concepts; this latter search was limited to articles published after 2017 to account for the more recent emergence of HDPN. It was found that ER and particularly BBB are referred to much more widely than LRRD and HDPN. ER increased particularly in the mid-2000s, coinciding with the formation of the ER cluster, while BBB, after emerging gradually in the mid-2000s through its usage in the wake of the Boxing Day Tsunami, increased significantly from about 2015 after its prominent inclusion in the Sendai Framework. HDPN has only started to increase in the last 4-5 years. Regarding the relationship between concepts, it was found that the vast majority of all concepts were referred to in isolation from each other. The strongest relationship was between LRRD and HDPN (8% of articles referring to both), while ER-BBB and ER-HDPN were both at about 3%, LRRD-ER at 2%, and BBB-HDPN and BBB-LRRD each at 1%. This research identified a fundamental issue around the lack of consensus on, and even awareness of, the different approaches referred to within the academic literature on integrating humanitarian and development work. More research into synthesizing and learning from a range of approaches could help close this gap. Keywords: build back better, disaster risk reduction, early recovery, linking relief rehabilitation and development, humanitarian development integration, humanitarian-development (peace) nexus, recovery, triple nexus
Procedia PDF Downloads 80
782 Implementing a Hospitalist Co-Management Service in Orthopaedic Surgery
Authors: Diane Ghanem, Whitney Kagabo, Rebecca Engels, Uma Srikumaran, Babar Shafiq
Abstract:
Hospitalist co-management of orthopaedic surgery patients is a growing trend across the country. It was created as a collaborative effort to provide overarching care to patients, with the goal of improving their postoperative care and decreasing in-hospital medical complications. The aim of this project is to provide a guide for implementing and optimizing a hospitalist co-management service in orthopaedic surgery. Key leaders from the hospitalist team, orthopaedic team, and quality, safety and service team were identified. Multiple meetings were convened to discuss the co-management service and determine the necessary building blocks behind an efficient and well-designed co-management framework. After meticulous deliberation, a consensus was reached on the final service agreement, and a written guide was drafted. Fundamental features of the service include the identification of service stakeholders and leaders, frequent consensus meetings, a well-defined framework with goals, program metrics and unified commands, and a regular satisfaction assessment to update and improve the program. Identified pearls for co-managing orthopaedic surgery patients are standardization, timing, adequate patient selection, and two-way feedback between hospitalists and orthopaedic surgeons to optimize the protocols. Developing a service agreement is a constant work in progress, with meetings, discussions, revisions, and multiple piloting attempts before implementation. It is a partnership created to provide hospitals with a streamlined admission process where at-risk patients are identified early and patient care is optimized regardless of the number or nature of medical comorbidities. A well-established hospitalist co-management service can increase patient care quality and safety, as well as health care value. Keywords: co-management, hospitalist co-management, implementation, orthopaedic surgery, quality improvement
Procedia PDF Downloads 88
781 The Analyzer: Clustering Based System for Improving Business Productivity by Analyzing User Profiles to Enhance Human Computer Interaction
Authors: Dona Shaini Abhilasha Nanayakkara, Kurugamage Jude Pravinda Gregory Perera
Abstract:
E-commerce platforms have revolutionized the shopping experience, offering convenient ways for consumers to make purchases. To improve interactions with customers and optimize marketing strategies, it is essential for businesses to understand user behavior, preferences, and needs on these platforms. This paper focuses on enabling businesses to customize interactions with users based on their behavioral patterns, leveraging data-driven analysis and machine learning techniques. Businesses can improve engagement and boost the adoption of e-commerce platforms by aligning behavioral patterns with user goals of usability and satisfaction. We propose TheAnalyzer, a clustering-based system designed to enhance business productivity by analyzing user profiles and improving human-computer interaction. TheAnalyzer seamlessly integrates with business applications, collecting relevant data points based on users' natural interactions without additional burdens such as questionnaires or surveys. It defines five key user analytics as features for its dataset, which are easily captured through users' interactions with e-commerce platforms. This research presents a study demonstrating the successful distinction of users into specific groups based on the five key analytics considered by TheAnalyzer. With the assistance of domain experts, customized business rules can be attached to each group, enabling TheAnalyzer to influence business applications and provide an enhanced, personalized user experience. The outcomes are evaluated quantitatively and qualitatively, demonstrating that utilizing TheAnalyzer's capabilities can optimize business outcomes, enhance customer satisfaction, and drive sustainable growth. The findings of this research contribute to the advancement of personalized interactions in e-commerce platforms. By leveraging user behavioral patterns and analyzing both new and existing users, businesses can effectively tailor their interactions to improve customer satisfaction and loyalty and ultimately drive sales. Keywords: data clustering, data standardization, dimensionality reduction, human computer interaction, user profiling
Procedia PDF Downloads 72
780 Machine Learning Approaches Based on Recency, Frequency, Monetary (RFM) and K-Means for Predicting Electrical Failures and Voltage Reliability in Smart Cities
Authors: Panaya Sudta, Wanchalerm Patanacharoenwong, Prachya Bumrungkun
Abstract:
With the evolution of smart grids, ensuring the reliability and efficiency of electrical systems in smart cities has become crucial. This paper proposes a distinct approach that combines advanced machine learning techniques to accurately predict electrical failures and address voltage reliability issues, aiming to improve the accuracy and efficiency of reliability evaluations in smart cities. The aim of this research is to develop a comprehensive predictive model that accurately predicts electrical failures and voltage reliability in smart cities by integrating RFM analysis, K-means clustering, and LSTM networks. RFM analysis, traditionally used in customer value assessment, categorizes and analyzes electrical components based on their failure recency, frequency, and monetary impact. K-means clustering segments electrical components into distinct groups with similar characteristics and failure patterns. LSTM networks capture the temporal dependencies and patterns in the data. This integration of RFM, K-means, and LSTM results in a robust predictive tool for electrical failures and voltage reliability. The proposed model has been tested and validated on diverse electrical utility datasets. The results show a significant improvement in prediction accuracy and reliability compared to traditional methods, achieving an accuracy of 92.78% and an F1-score of 0.83. The research addresses the question of how accurately electrical failures and voltage reliability can be predicted in smart cities, and investigates the effectiveness of integrating RFM analysis, K-means clustering, and LSTM networks in achieving this goal. The proposed approach presents a distinct, efficient, and effective solution for predicting and mitigating electrical failures and voltage issues in smart cities, significantly improving prediction accuracy and reliability compared to traditional methods. This advancement contributes to the proactive maintenance and optimization of electrical infrastructures, overall energy management, and sustainability in smart cities, and demonstrates the potential of advanced machine learning techniques to transform the landscape of electrical system management. Keywords: electrical state prediction, smart grids, data-driven method, long short-term memory, RFM, k-means, machine learning
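Illustrative sketch (not the paper's model): the RFM-plus-K-means segmentation step on made-up per-component failure records; the LSTM stage is omitted.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Made-up per-component records; columns are the RFM analogues named in the
# abstract: recency (days since last failure), frequency (failure count),
# monetary impact (repair cost).
rfm = np.array([
    [ 12., 9., 4200.],   # recent, frequent, costly -> likely high-risk
    [410., 1.,  150.],
    [ 35., 6., 2900.],
    [720., 0.,    0.],
])

X = StandardScaler().fit_transform(rfm)   # put R, F, M on a common scale
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)                             # failure-behavior segment per component
```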
Procedia PDF Downloads 56
779 The Application of Video Segmentation Methods for the Purpose of Action Detection in Videos
Authors: Nassima Noufail, Sara Bouhali
Abstract:
In this work, we develop a semi-supervised solution for action detection in videos and propose an efficient algorithm for video segmentation. The approach is divided into video segmentation, feature extraction, and classification. In the first part, a video is segmented into clips using the K-means algorithm; our goal is to find groups of frames based on similarity within the video. Applying K-means clustering to all the frames is time-consuming; therefore, we started with the identification of transition frames, where the scene in the video changes significantly, and then applied K-means clustering to these transition frames. We used two image filters, the Gaussian filter and the Laplacian of Gaussian; each filter extracts a set of features from the frames. The Gaussian filter blurs the image and suppresses the higher frequencies, while the Laplacian of Gaussian detects regions of rapid intensity change; we then used this vector of filter responses as the input to our K-means algorithm. The output is a set of cluster centers. Each video frame pixel is then mapped to the nearest cluster center and painted with a corresponding color to form a visual map in which similar pixels are grouped. We then computed a cluster score indicating how near the clusters are to each other and plotted a signal of frame number vs. clustering score. Our hypothesis was that the evolution of the signal would not change while semantically related events were happening in the scene. We marked breakpoints where the root mean square level of the signal changes significantly; each breakpoint indicates the beginning of a new video segment. In the second part, for each segment from part one, we randomly selected a 16-frame clip and extracted spatiotemporal features for every 16 frames using the convolutional 3D network C3D with a pre-trained model. The final C3D output is a 512-dimensional feature vector; hence we used principal component analysis (PCA) for dimensionality reduction. The final part is classification: the C3D feature vectors are used as input to a multi-class linear support vector machine (SVM) for training, and a multi-classifier detects the action. We evaluated our experiment on the UCF101 dataset, which consists of 101 human action categories, and achieved an accuracy that outperforms the state of the art by 1.2%. Keywords: video segmentation, action detection, classification, K-means, C3D
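Illustrative sketch (not the authors' implementation): Gaussian and Laplacian-of-Gaussian filter responses per pixel, clustered with K-means to paint the visual map described above; the frame and sigma are stand-ins.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_laplace
from sklearn.cluster import KMeans

def frame_features(frame, sigma=2.0):
    """Per-pixel filter-response features: a Gaussian (low-pass) response
    and a Laplacian-of-Gaussian (rapid-intensity-change) response."""
    g = gaussian_filter(frame.astype(float), sigma)
    log = gaussian_laplace(frame.astype(float), sigma)
    return np.stack([g.ravel(), log.ravel()], axis=1)   # (n_pixels, 2)

frame = np.random.rand(120, 160)                 # stand-in grayscale transition frame
X = frame_features(frame)
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)
visual_map = km.labels_.reshape(frame.shape)     # pixels painted by cluster id
```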
Procedia PDF Downloads 77
778 Spatiotemporal Propagation and Pattern of Epileptic Spike Predict Seizure Onset Zone
Authors: Mostafa Mohammadpour, Christoph Kapeller, Christy Li, Josef Scharinger, Christoph Guger
Abstract:
Interictal spikes provide valuable information on electrocorticography (ECoG), which aids surgical planning for patients who suffer from refractory epilepsy. However, the shape and temporal dynamics of these spikes remain unclear. The purpose of this work was to analyze the shape of interictal spikes and measure their distance to the seizure onset zone (SOZ) for use in epilepsy surgery. Data from thirteen patients from the iEEG portal were studied retrospectively. For the analysis, half an hour of ECoG data was used from each patient, with the data truncated before the onset of a seizure. Spikes were first detected and grouped in sequences, then clustered into interictal epileptiform discharge (IED) and non-IED groups using two-step clustering. The distances of the spikes in the IED and non-IED groups to the SOZ were quantified and compared using the Wilcoxon rank-sum test. Spikes in the IED group tended to lie in or close to the SOZ, while spikes in the non-IED group lay distant from the SOZ or in non-SOZ areas. At the group level, the distributions of sharp wave, positive baseline shift, slow wave, and slow wave to sharp wave ratio were significantly different between the IED and non-IED groups. The IED cluster was 10.00 mm from the SOZ, significantly closer than the 17.65 mm for non-IEDs. These findings provide insights into the shape and spatiotemporal dynamics of spikes that could inform the network mechanisms underlying refractory epilepsy. Keywords: spike propagation, spike pattern, clustering, SOZ
Procedia PDF Downloads 63
777 Finding the Longest Common Subsequence in Normal DNA and Disease Affected Human DNA Using Self Organizing Map
Authors: G. Tamilpavai, C. Vishnuppriya
Abstract:
Bioinformatics is an active research area that combines biological questions with computer science methods. The longest common subsequence (LCSS) is one of the major challenges in various bioinformatics applications. The computation of the LCSS plays a vital role in biomedicine, and it is an essential task in DNA sequence analysis in genetics, underpinning a wide range of disease-diagnosis steps. The objective of this proposed system is to find the longest common subsequence present in normal and disease-affected human DNA sequences using a Self Organizing Map (SOM) and LCSS. The human DNA sequences are collected from the National Center for Biotechnology Information (NCBI) database. Initially, each human DNA sequence is separated into k-mers using a k-mer separation rule. Mean and median values are calculated from each separated k-mer. These calculated values are fed as input to the Self Organizing Map for clustering. The obtained clusters are then given to the longest common subsequence (LCSS) algorithm, which finds the common subsequences present in every cluster. It returns n×(n−1)/2 subsequences for each cluster, where n is the number of k-mers in a specific cluster. Experimental outcomes of this proposed system produce the possible number of longest common subsequences of normal and disease-affected DNA data. Thus, the proposed system will be a good initial aid for finding disease-causing sequences. Finally, performance analysis is carried out for different DNA sequences; the obtained values show that the retrieval of the LCSS is done in a shorter time than in the existing system. Keywords: clustering, k-mers, longest common subsequence, SOM
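Illustrative sketch (not the proposed system): the textbook dynamic-programming LCS, i.e., the pairwise step applied inside each SOM cluster.

```python
def longest_common_subsequence(a: str, b: str) -> str:
    """Classic O(len(a)*len(b)) dynamic-programming LCS with backtracking."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if a[i] == b[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    # Backtrack to recover one optimal subsequence.
    out, i, j = [], m, n
    while i and j:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

print(longest_common_subsequence("ACGGTAC", "AGGTTAC"))  # one optimal LCS, e.g. "AGGTAC"
```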
Procedia PDF Downloads 267
776 Wind Velocity Climate Zonation Based on Observation Data in Indonesia Using Cluster and Principal Component Analysis
Authors: I Dewa Gede Arya Putra
Abstract:
Principal Component Analysis (PCA) is a mathematical procedure that uses an orthogonal transformation to convert a set of possibly correlated variables into a set of uncorrelated components, which can aid the clustering of wind speed characteristics in Indonesia. This study uses 30 years of daily wind speed observations from the Site Meteorological Station network. Multicollinearity tests were also performed on all of these data before clustering with PCA. The results show that the four main components explain over 80% of the total variance and were therefore used for clustering. Division into clusters using Ward's method yielded three cluster types. Cluster 1 covers the central part of Sumatra Island, northern Kalimantan, northern Sulawesi, and northern Maluku, with a climatological wind speed pattern that has no annual cycle and weak speeds throughout the year, ranging from 0 to 1.5 m/s. Cluster 2 covers the northern part of Sumatra Island, South Sulawesi, Bali, and northern Papua, with a climatological wind speed pattern that has annual cycle variations and low speeds ranging from 1 to 3 m/s. Cluster 3 covers the eastern part of Java Island, the Southeast Nusa Islands, and the southern Maluku Islands, with a climatological wind speed pattern that has annual cycle variations and high speeds ranging from 1 to 4.5 m/s. Keywords: PCA, cluster, Ward's method, wind speed
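Illustrative sketch (not the study's data): PCA scores fed into Ward clustering, mirroring the keep-four-components / three-cluster result; the station-by-month matrix is synthetic.

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

# Stand-in matrix: rows = stations, columns = monthly mean wind speeds (m/s).
rng = np.random.default_rng(0)
X = rng.gamma(shape=2.0, scale=1.0, size=(40, 12))

pca = PCA(n_components=4)                        # keep the four leading components
scores = pca.fit_transform(X)
print(pca.explained_variance_ratio_.sum())       # check against the >80% criterion

Z = linkage(scores, method="ward")               # Ward clustering on PC scores
zones = fcluster(Z, t=3, criterion="maxclust")   # cut into three climate zones
```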
Procedia PDF Downloads 195
775 Designing Floor Planning in 2D and 3D with an Efficient Topological Structure
Authors: V. Nagammai
Abstract:
Very-large-scale integration (VLSI) is the process of creating an integrated circuit (IC) by combining thousands of transistors into a single chip. Advancing technology increases the complexity of IC manufacturing, which may vary the power consumption and increase the size and latency period. A topology defines the set of connections within a network. In this project, the NoC topology is generated using the Atlas tool, which increases performance and in turn makes the determination of constraints effective. The routing is performed by the XY routing algorithm with wormhole flow control. In NoC topology generation, the values of power, area, and latency are predetermined. In previous work, placement, routing, and shortest path evaluation were performed using an algorithm called floor planning with cluster reconstruction and path allocation (FCRPA), using four 3x3 switches, six 4x4 switches, and two 5x5 switches. The use of the 4x4 and 5x5 switches increases the power consumption and area of the block. To avoid this problem, this paper uses one 8x8 switch and four 3x3 switches. This paper uses IPRCA, which consists of three steps: placement, clustering, and shortest path evaluation. Placement is performed using min-cut placement, clustering is performed using an algorithm called cluster generation, and the shortest path is evaluated using Dijkstra's algorithm. The power consumption of each block is determined. The experimental results show that the area, power, and wire length are improved simultaneously. Keywords: application-specific NoC, B*-tree representation, floor planning, T-tree representation
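Illustrative sketch (not the IPRCA code): the shortest-path evaluation step with Dijkstra's algorithm on a made-up switch network.

```python
import heapq

def dijkstra(adj, src):
    """Single-source shortest paths; adj maps node -> [(neighbor, cost), ...]."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Hypothetical switch network: a central 8x8 switch 'S0' fanning out
# to smaller 3x3 switches, with illustrative wire costs.
net = {"S0": [("A", 2), ("B", 4)], "A": [("B", 1), ("C", 5)], "B": [("C", 2)]}
print(dijkstra(net, "S0"))   # {'S0': 0, 'A': 2, 'B': 3, 'C': 5}
```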
Procedia PDF Downloads 393
774 Feature Based Unsupervised Intrusion Detection
Authors: Deeman Yousif Mahmood, Mohammed Abdullah Hussein
Abstract:
The goal of a network-based intrusion detection system is to classify network traffic activities into two major categories: normal and attack (intrusive) activities. Nowadays, data mining and machine learning play an important role in many sciences, including intrusion detection systems (IDS), using both supervised and unsupervised techniques. One of the essential steps of data mining is feature selection, which helps improve the efficiency, performance, and prediction rate of the proposed approach. This paper applies the unsupervised K-means clustering algorithm with information gain (IG) for feature selection and reduction to build a network intrusion detection system. For our experimental analysis, we have used the new NSL-KDD dataset, a modified version of the KDDCup 1999 intrusion detection benchmark dataset. With a split of 60.0% for the training set and the remainder for the testing set, a two-class classification (normal, attack) was implemented. The Weka framework, a Java-based open-source collection of machine learning algorithms for data mining tasks, was used in the testing process. The experimental results show that the proposed approach is very accurate, with a low false positive rate and a high true positive rate, and it takes less learning time compared with using the full feature set of the dataset with the same algorithm. Keywords: information gain (IG), intrusion detection system (IDS), k-means clustering, Weka
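Illustrative sketch (not the paper's Weka workflow): information gain approximated here by mutual information for feature ranking, followed by K-means on the retained features; the data and the top-5 cutoff are assumptions.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.cluster import KMeans

# Stand-in for NSL-KDD-style records: X (n_samples, n_features), y in {0, 1}.
rng = np.random.default_rng(0)
X = rng.random((500, 20))
y = (X[:, 3] + X[:, 7] > 1.0).astype(int)   # two informative features by construction

ig = mutual_info_classif(X, y, random_state=0)   # information-gain-style score per feature
keep = np.argsort(ig)[-5:]                       # retain the top-5 features
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X[:, keep])
```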
Procedia PDF Downloads 296
773 Web Proxy Detection via Bipartite Graphs and One-Mode Projections
Authors: Zhipeng Chen, Peng Zhang, Qingyun Liu, Li Guo
Abstract:
With the Internet becoming the dominant channel for business and life, many IPs are increasingly masked by web proxies for illegal purposes such as propagating malware, impersonating phishing pages to steal sensitive data, or redirecting victims to other malicious targets. Moreover, as Internet traffic continues to grow in size and complexity, detecting proxy services has become an increasingly challenging task due to their dynamic updates and high anonymity. In this paper, we present an approach based on behavioral graph analysis to study the behavioral similarity of web proxy users. Specifically, we use bipartite graphs to model host communications from network traffic and build one-mode projections of the bipartite graphs for discovering the social-behavior similarity of web proxy users. Based on the similarity matrices of end-users from the derived one-mode projection graphs, we apply a simple yet effective spectral clustering algorithm to discover the inherent behavior clusters of web proxy users. A web proxy URL may vary from time to time, but the users' inherent interests do not. Based on this intuition, and using our private tools implemented with WebDriver, we examine whether the top URLs visited by the web proxy users are web proxies. Our experimental results on real datasets show that the behavior clusters not only reduce the number of URLs to analyze but also provide an effective way to detect web proxies, especially unknown ones. Keywords: bipartite graph, one-mode projection, clustering, web proxy detection
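Illustrative sketch (not the authors' pipeline): a toy host-URL bipartite matrix, its one-mode projection onto hosts, and spectral clustering on the resulting similarity matrix.

```python
import numpy as np
from sklearn.cluster import SpectralClustering

# Hypothetical host-to-URL bipartite incidence matrix B (rows: end-hosts,
# columns: visited URLs); 1 means the host contacted that URL.
B = np.array([[1, 1, 0, 0, 1],
              [1, 1, 0, 0, 0],
              [0, 0, 1, 1, 0],
              [0, 0, 1, 1, 1]])

W = B @ B.T                       # one-mode projection: shared-URL counts
np.fill_diagonal(W, 0)            # drop self-similarity
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(W)
print(labels)                     # behavior clusters of end-hosts
```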
Procedia PDF Downloads 245
772 Short Association Bundle Atlas for Lateralization Studies from dMRI Data
Authors: C. Román, M. Guevara, P. Salas, D. Duclap, J. Houenou, C. Poupon, J. F. Mangin, P. Guevara
Abstract:
Diffusion Magnetic Resonance Imaging (dMRI) allows the non-invasive study of human brain white matter. From diffusion data, it is possible to reconstruct fiber trajectories using tractography algorithms. Our previous work consists of an automatic method for the identification of short association bundles of the superficial white matter (SWM), based on a whole-brain inter-subject hierarchical clustering applied to a HARDI database. The method finds representative clusters of similar fibers belonging to a group of subjects, according to a distance measure between fibers, using a non-linear registration (DTI-TK). The algorithm performs an automatic labeling based on the anatomy, defined by a cortex mesh parcellated with the FreeSurfer software. The clustering was applied to two independent groups of 37 subjects. The clusters resulting from both groups were compared using a restrictive threshold on the mean distance between each pair of bundles from different groups, in order to keep reproducible connections. In the left hemisphere, 48 reproducible bundles were found, while 43 bundles were found in the right hemisphere. An inter-hemispheric bundle correspondence was then applied: the symmetric horizontal reflection of the right bundles was calculated to obtain their position in the left hemisphere, and the intersection between similar bundles was computed. Pairs of bundles with a fiber intersection percentage higher than 50% were considered similar. The similar bundles between the hemispheres were fused and symmetrized, yielding 30 bundles common to both hemispheres. An atlas was created with the resulting bundles and used to segment 78 new subjects from another HARDI database, using a distance threshold of 6-8 mm according to bundle length. Finally, a laterality index was calculated based on bundle volume. Seven bundles of the atlas presented right laterality (IP_SP_1i, LO_LO_1i, Op_Tr_0i, PoC_PoC_0i, PoC_PreC_2i, PreC_SM_0i, and RoMF_RoMF_0i) and one presented left laterality (IP_SP_2i); there is no tendency of lateralization according to brain region. Many factors can affect the results, such as tractography artifacts, subject registration, and bundle segmentation. Further studies are necessary to establish the influence of these factors and evaluate SWM laterality. Keywords: dMRI, hierarchical clustering, lateralization index, tractography
Procedia PDF Downloads 331
771 Graph-Based Semantical Extractive Text Analysis
Authors: Mina Samizadeh
Abstract:
In the past few decades, there has been an explosion in the amount of available data produced from various sources on different topics. The availability of this enormous data necessitates effective computational tools to explore it, which has led to intense growing interest in the research community in developing computational methods for processing text data. One line of study focuses on condensing text so that we are able to reach a higher level of understanding in a shorter time. The two important tasks for this are keyword extraction and text summarization. In keyword extraction, we are interested in finding the key important words in a text; this familiarizes us with its general topic. In text summarization, we are interested in producing a short text which includes the important information in the document. The TextRank algorithm, an unsupervised learning method that is an extension of PageRank (the algorithm at the base of Google's search engine for ranking pages), has shown its efficacy in large-scale text mining, especially for text summarization and keyword extraction. This algorithm can automatically extract the important parts of a text (keywords or sentences) and return them as a result. However, it neglects the semantic similarity between the different parts. In this work, we improve the results of the TextRank algorithm by incorporating the semantic similarity between parts of the text. Aside from keyword extraction and text summarization, we develop a topic clustering algorithm based on our framework, which can be used individually or as part of generating the summary to overcome coverage problems. Keywords: keyword extraction, n-gram extraction, text summarization, topic clustering, semantic analysis
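Illustrative sketch (not the authors' implementation): a similarity-weighted TextRank iteration, where S would hold semantic similarities between sentences; the matrix here is made up.

```python
import numpy as np

def textrank(S, d=0.85, n_iter=50):
    """Weighted PageRank over a sentence-similarity graph: S[i, j] is a
    similarity between sentences i and j; returns a score per sentence."""
    n = len(S)
    W = S / np.maximum(S.sum(axis=1, keepdims=True), 1e-12)  # row-normalize weights
    r = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        r = (1 - d) / n + d * (W.T @ r)    # damped, weighted rank update
    return r

S = np.array([[0.0, 0.7, 0.1],
              [0.7, 0.0, 0.3],
              [0.1, 0.3, 0.0]])
print(np.argsort(textrank(S))[::-1])       # sentences ranked for the summary
```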
Procedia PDF Downloads 70
770 Promises versus Realities: A Critical Assessment of the Integrated Design Process
Authors: Firdous Nizar, Carmela Cucuzzella
Abstract:
This paper explores how the integrated design process (IDP) was adopted for an architectural project. The IDP is a relatively new approach to collaborative design in architectural design projects in Canada. It has gained much traction recently as the closest possible approach to the successful management of low-energy building projects and has been advocated as a productive method for multi-disciplinary collaboration within complex projects. This study is based on the premise that there are explicit and implicit dimensions of power within the integrated design process (IDP) in the green building industry that may or may not lead to irreconcilable differences in a process that demands consensus. To gain insight into the potential gap between the theoretical promises and practical realities of the IDP, a review of the existing IDP literature is compared with a case study analysis of a competition-based architectural project in Canada, the first to incorporate the IDP in its overall design format. This paper aims to address the undertheorized power relations of the IDP in a real project. It presents a critical assessment through the lens of the combined theories of deliberative democracy by Jürgen Habermas and agonistic pluralism by political theorist Chantal Mouffe. These two theories are intended to more appropriately embrace the conflictual situations in collaborative environments and shed light on the relationships of power between engineers, city officials, architects, and designers in this conventional consensus-based model. In addition, propositions for a shift in approach that embraces conflictual differences among participants are put forth, based on the concepts of critical spatial practice by Markus Miessen. As the IDP is a relatively new design process, it requires much deliberation on its structure, through the theoretical framework built in this paper, in order to unlock its true potential. Keywords: agonistic pluralism, critical spatial practice, deliberative democracy, integrated design process
Procedia PDF Downloads 173
769 Applying Participatory Design for the Reuse of Deserted Community Spaces
Authors: Wei-Chieh Yeh, Yung-Tang Shen
Abstract:
The concept of community building started in 1994 in Taiwan. After years of development, it fostered the notion of active local resident participation in community issues as co-operators instead of minions. Participatory design gives participants more control in the decision-making process, helps to reduce the friction caused by arguments, and assists in bringing different parties to consensus, resulting in an increase in the efficiency of projects run in the community. Therefore, the participation of local residents is key to the success of community building. This study applied participatory design to develop plans for the reuse of deserted spaces in the community, from the first stage of brainstorming for design ideas, through making creative models to be employed later, to the final stage of construction. Through a series of participatory design activities, it aimed to integrate the different opinions of residents, develop a sense of belonging, and reach a consensus. Besides this, it also aimed at building the residents' awareness of their responsibilities for the environment and related issues of sustainable development. By reviewing the relevant literature and the history of related studies, the study formulated a theory, and took the "2012-2014 Changhua County Community Planner Counseling Program" as a case study to investigate the implementation process of participatory design. Research data were collected by document analysis, participant observation, and in-depth interviews. After examining the three elements of "Design Participation", "Construction Participation", and "Follow-up Maintenance Participation" in the case, the study reached a promising conclusion: maintenance work was carried out better compared to common public works; maintenance costs were lower; the works that residents were involved in were more creative; and, most importantly, the community characteristics could be easily recognized. Keywords: participatory design, deserted space, community building, reuse
Procedia PDF Downloads 371
768 The Survey Research and Evaluation of Green Residential Building Based on the Improved Group Analytical Hierarchy Process Method in Yinchuan
Abstract:
Due to the economic downturn and the deterioration of the living environment, the development of residential buildings, as high-energy-consuming buildings, is gradually changing from "extensive" development to green building in China. The evaluation system for green building is continuously being improved, but the current evaluation work has the following problems: (1) There is a gap between the actual investment cost and the purchasing power of residents, and the construction targets of green residential buildings are single-objective and lack multi-objective performance development. (2) Green building evaluation lacks regional characteristics and cannot reflect the demands of residents in different regions. (3) In determining the criteria weights, the experts' judgment matrices often fail to meet the consistency requirement. To solve these problems, questionnaires about green residential building in the Ningxia area were distributed; their results reflect the purchasing power of residents and their acceptance of green building costs. Secondly, combining the geographical features of the Ningxia minority areas, an evaluation criteria system for green residential building is constructed. Finally, using the improved group AHP method and the grey clustering method, the criteria weights are determined, and a real case located in Xing Qing district, Ningxia, is evaluated. The conclusion is that the professional evaluation of this project and its good social recognition are basically consistent. Keywords: evaluation, green residential building, grey clustering method, group AHP
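Illustrative sketch (the standard single-matrix AHP consistency check, not the paper's improved group method): priority weights from the principal eigenvector and the consistency ratio test; the judgment matrix is hypothetical.

```python
import numpy as np

# Hypothetical 3x3 pairwise-comparison (judgment) matrix from one expert.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                         # criteria priority weights

n = A.shape[0]
lam_max = eigvals.real[k]
CI = (lam_max - n) / (n - 1)         # consistency index
RI = {3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's random index
CR = CI / RI                         # judgment matrix acceptable if CR < 0.1
print(w, CR)
```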
Procedia PDF Downloads 397
767 Creation of Computerized Benchmarks to Facilitate Preparedness for Biological Events
Abstract:
Introduction: Communicable diseases and pandemics pose a growing threat to the well-being of the global population. A vital component of protecting public health is the creation and sustenance of continuous preparedness for such hazards. A joint Israeli-German task force was deployed to develop an advanced tool for the self-evaluation of emergency preparedness for various types of biological threats. Methods: Based on a comprehensive literature review and interviews with leading content experts, an evaluation tool was developed based on quantitative and qualitative parameters and indicators. A modified Delphi process was used to achieve consensus among over 225 experts from Germany and Israel concerning the items to be included in the evaluation tool. The validity and applicability of the tool for medical institutions were examined in a series of simulation and field exercises. Results: Over 115 German and Israeli experts reviewed and examined the proposed parameters as part of the modified Delphi cycles. A consensus of over 75% of the experts was attained for 183 out of 188 items. The relative importance of each parameter was rated as part of the Delphi process, in order to define its impact on overall emergency preparedness. The parameters were integrated into computerized web-based software that enables the calculation of emergency preparedness scores for biological events. Conclusions: The parameters developed in the joint German-Israeli project serve as benchmarks that delineate the actions to be implemented in order to create and maintain ongoing preparedness for biological events. The computerized evaluation tool enables continuous monitoring of the level of readiness, so that strengths and gaps can be identified and corrected appropriately. Adoption of such a tool is recommended as an integral component of quality assurance of public health and safety. Keywords: biological events, emergency preparedness, bioterrorism, natural biological events
Procedia PDF Downloads 423
766 Global Low Carbon Transitions in the Power Sector: A Machine Learning Archetypical Clustering Approach
Authors: Abdullah Alotaiq, David Wallom, Malcolm McCulloch
Abstract:
This study presents an archetype-based approach to designing effective strategies for low-carbon transitions in the power sector. To achieve global energy transition goals, a renewable energy transition is critical, and understanding diverse energy landscapes across different countries is essential to design effective renewable energy policies and strategies. Using a clustering approach, this study identifies 12 energy archetypes based on the electricity mix, socio-economic indicators, and renewable energy contribution potential of 187 UN countries. Each archetype is characterized by distinct challenges and opportunities, ranging from high dependence on fossil fuels to low electricity access, low economic growth, and insufficient contribution potential of renewables. Archetype A, for instance, consists of countries with low electricity access, high poverty rates, and limited power infrastructure, while Archetype J comprises developed countries with high electricity demand and installed renewables. The study findings have significant implications for renewable energy policymaking and investment decisions: policymakers and investors can use the archetype approach to identify suitable renewable energy policies and measures and to assess renewable energy potential and risks. Overall, the archetype approach provides a comprehensive framework for understanding diverse energy landscapes and accelerating the decarbonisation of the power sector. Keywords: fossil fuels, power plants, energy transition, renewable energy, archetypes
Procedia PDF Downloads 51
765 Recognition and Counting Algorithm for Sub-Regional Objects in a Handwritten Image through Image Sets
Authors: Kothuri Sriraman, Mattupalli Komal Teja
Abstract:
In this paper, a novel algorithm is proposed for the recognition of hulls in handwritten images, which may be irregular, digit, or character shapes. Objects and internal objects are quite difficult to extract when the structure of the image contains a bulk of clusters. Estimation results are easily obtained by identifying the sub-regional objects using the SASK algorithm. The main focus is to recognize the number of internal objects present in a given image, in a shadow-free and error-free manner. Hard clustering and density clustering of the obtained image rough set are used to recognize the differentiated internal objects, if any. Finding the internal hull regions involves three steps: pre-processing, boundary extraction, and finally, application of the hull detection system. By detecting the sub-regional hulls, the machine learning capability in the detection of characters can be increased, and the approach can also be extended to hull recognition even in irregularly shaped objects, such as black holes in space exploration, with their intensities. Layered hulls are those having structured layers inside; they are useful in military services and traffic monitoring to identify the number of vehicles or persons. The proposed SASK algorithm is helpful in identifying such regions and can be useful in the decision process (to clear the traffic, or to identify the number of opposing persons in a war). Keywords: chain code, hull regions, Hough transform, hull recognition, layered outline extraction, SASK algorithm
Procedia PDF Downloads 348
764 Genetic Trait Analysis of RIL Barley Genotypes to Sort-out the Top Ranked Elites for Advanced Yield Breeding Across Multi Environments of Tigray, Ethiopia
Authors: Hailekiros Tadesse Tekle, Yemane Tsehaye, Fetien Abay
Abstract:
Barley (Hordeum vulgare L.) is one of the most important cereal crops in the world, grown by poor farmers in Tigray with low yield production. The purpose of this research was to estimate the performance of 166 barley genotypes on quantitative traits, with a detailed analysis of the variance components, heritability, genetic advance, and genetic usefulness parameters. The ANOVA found highly significant variation (p ≤ 0.01) across the genotypes. We found significant differences in the coefficient of variation (CV of 15%) for 5 of the 12 quantitative traits. The highest broad-sense heritability (H²) was recorded for seeds per spike (98.8%), followed by thousand seed weight (96.5%), with genetic advance as percent of mean (GAM) of 79.16% and 56.25%, respectively. Traits with H² ≥ 60% and GAM ≥ 20% are suggested to be least influenced by the environment and governed by additive genes, favoring direct selection for the improvement of such beneficial traits in the studied genotypes. Hence, the 20 outstanding recombinant inbred line (RIL) barley genotypes simultaneously showing early maturity, high yield, and high 1000 seed weight were the top-ranked group among the 166 genotypes. These are: G5, G25, G33, G118, G36, G123, G28, G34, G14, G10, G3, G13, G11, G32, G8, G39, G23, G30, G37, and G26. They were early in maturity with high TSW and GYP (TSW ≥ 55 g, GYP ≥ 15.22 g/plant, and DTM below 106 days). In general, the 166 genotypes were classified as high (group 1), medium (group 2), and low (group 3) yield-production genotypes through yield and yield component trait analysis by clustering, and through genotype parameter analysis such as heritability, genetic advance, and genetic usefulness of traits in this investigation. Keywords: barley, clustering, genetic advance, heritability, usefulness, variability, yield
Procedia PDF Downloads 86
763 A Mixing Matrix Estimation Algorithm for Speech Signals under the Under-Determined Blind Source Separation Model
Authors: Jing Wu, Wei Lv, Yibing Li, Yuanfan You
Abstract:
The separation of speech signals has become a research hotspot in the field of signal processing in recent years, with many applications in teleconferencing, hearing aids, machine speech recognition, and so on. Received sounds are usually noisy. Identifying the sounds of interest and obtaining clear sounds in such an environment is a problem worth exploring, namely the problem of blind source separation. This paper focuses on under-determined blind source separation (UBSS). Sparse component analysis is generally used for the under-determined blind source separation problem. The method is mainly divided into two parts: first, a clustering algorithm is used to estimate the mixing matrix from the observed signals; then the signals are separated based on the known mixing matrix. This paper studies the problem of mixing matrix estimation and proposes an improved algorithm to estimate the mixing matrix for speech signals in the UBSS model. The traditional potential-function algorithm is not accurate for mixing matrix estimation, especially at low signal-to-noise ratio (SNR). In response to this problem, this paper develops an improved potential function method to estimate the mixing matrix. The algorithm not only avoids the influence of insufficient prior information in the traditional clustering algorithm but also improves the estimation accuracy of the mixing matrix. This paper takes the mixing of four speech signals into two channels as an example. The simulation results show that the approach in this paper not only improves the accuracy of estimation but also applies to any mixing matrix. Keywords: DBSCAN, potential function, speech signal, the UBSS model
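Illustrative sketch (the generic sparse-component clustering step, not the paper's improved potential-function method): recovering the mixing directions of a 4-source, 2-channel toy mixture by clustering high-energy unit-norm samples.

```python
import numpy as np
from sklearn.cluster import KMeans

# Toy UBSS setup: 4 sparse sources mixed into 2 channels. The columns of
# A_true are the mixing directions the clustering step tries to recover.
rng = np.random.default_rng(0)
A_true = np.array([[np.cos(t), np.sin(t)] for t in (0.3, 0.9, 1.5, 2.1)]).T
S = rng.laplace(size=(4, 5000)) * (rng.random((4, 5000)) > 0.9)  # sparse sources
X = A_true @ S                                                   # 2-channel mixture

# Keep only high-energy (likely single-source) samples, normalize to unit
# directions, fold the sign ambiguity onto one half-plane, then cluster.
x = X[:, np.linalg.norm(X, axis=0) > 0.5]
u = x / np.linalg.norm(x, axis=0)
u[:, u[1] < 0] *= -1
A_est = KMeans(n_clusters=4, n_init=10, random_state=0).fit(u.T).cluster_centers_.T
print(A_est / np.linalg.norm(A_est, axis=0))   # estimated mixing directions
```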
Procedia PDF Downloads 135
762 Blockchain-Based Decentralized Architecture for Secure Medical Records Management
Authors: Saeed M. Alshahrani
Abstract:
This research integrated blockchain technology to reform medical records management in healthcare informatics, aiming to resolve the limitations of centralized systems by establishing a secure, decentralized, and user-centric platform. The system was architected with a sophisticated three-tiered structure, integrating advanced cryptographic methodologies, consensus algorithms, and the Fast Healthcare Interoperability Resources (HL7 FHIR) standard to ensure data security, transaction validity, and semantic interoperability. The research has profound implications for healthcare delivery, patient care, legal compliance, operational efficiency, and academic advancements in the blockchain technology and healthcare IT sectors. The methodology adopted in this research comprises a preliminary feasibility study, a literature review, design and development, cryptographic algorithm integration, data modeling, and system testing. The research employed a permissioned blockchain with a Practical Byzantine Fault Tolerance (PBFT) consensus algorithm and Ethereum-based smart contracts. It integrated advanced cryptographic algorithms, role-based access control, multi-factor authentication, and RESTful APIs to ensure security, regulate access, authenticate user identities, and facilitate seamless data exchange between the blockchain and legacy healthcare systems. The research contributed to the development of a secure, interoperable, and decentralized system for managing medical records, addressing the limitations of the centralized systems previously in place. Future work will delve into optimizing the system further, exploring additional blockchain use cases in healthcare, and expanding the adoption of the system globally, contributing to the evolution of global healthcare practices and policies. Keywords: healthcare informatics, blockchain, medical records management, decentralized architecture, data security, cryptographic algorithms
Procedia PDF Downloads 55