Search results for: statistical features
6883 Local Texture and Global Color Descriptors for Content Based Image Retrieval
Authors: Tajinder Kaur, Anu Bala
Abstract:
An image retrieval system is a computer system for browsing, searching, and retrieving images from a large database of digital images. A new algorithm for content-based image retrieval (CBIR) is presented in this paper. The proposed method combines color and texture features extracted from the global and local information of the image. The local texture feature is extracted using local binary patterns (LBP), which are evaluated by considering the local differences between the center pixel and its neighbors. For the global color feature, the color histogram (CH) is used, calculated over the RGB (red, green, and blue) channels separately. The combination of these color and texture features is proposed for content-based image retrieval. The performance of the proposed method is tested on the Corel 1000 database, a database of natural images. The results show a significant improvement in the evaluation measures as compared to LBP and CH alone.
Keywords: color, texture, feature extraction, local binary patterns, image retrieval
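As a rough illustration of the descriptor pipeline described above, the sketch below computes a uniform LBP histogram (local texture) and per-channel RGB histograms (global color) and concatenates them into a single feature vector. It is a minimal sketch using scikit-image and NumPy; the neighborhood size, bin counts, and file names are illustrative assumptions, not values from the paper.

```python
import numpy as np
from skimage import io, color
from skimage.feature import local_binary_pattern

def cbir_features(path, color_bins=32):
    rgb = io.imread(path)
    gray = color.rgb2gray(rgb)
    # Local texture: LBP compares each center pixel with its 8 neighbors.
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")  # values 0..9
    tex_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    # Global color: one histogram per RGB channel, concatenated.
    col_hist = np.concatenate([
        np.histogram(rgb[..., c], bins=color_bins, range=(0, 256), density=True)[0]
        for c in range(3)
    ])
    return np.concatenate([tex_hist, col_hist])

# Retrieval then ranks database images by distance to the query vector,
# e.g. np.linalg.norm(cbir_features("query.jpg") - db_vector).
```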
Procedia PDF Downloads 368
6882 A Multivariate Statistical Approach for Water Quality Assessment of River Hindon, India
Authors: Nida Rizvi, Deeksha Katyal, Varun Joshi
Abstract:
River Hindon is an important river catering to the demands of the highly populated rural and industrial clusters of western Uttar Pradesh, India. The water quality of river Hindon is deteriorating at an alarming rate due to various industrial, municipal, and agricultural activities. The present study aimed at identifying the pollution sources and quantifying the degree to which these sources are responsible for the deteriorating water quality of the river. Various water quality parameters, like pH, temperature, electrical conductivity, total dissolved solids, total hardness, calcium, chloride, nitrate, sulphate, biological oxygen demand, chemical oxygen demand, and total alkalinity, were assessed. Water quality data obtained from eight study sites over one year were subjected to two multivariate techniques, namely principal component analysis and cluster analysis. Principal component analysis was applied with the aim of examining spatial variability and identifying the sources responsible for the water quality of the river. Three varifactors were obtained after varimax rotation of the initial principal components. Cluster analysis was carried out to classify sampling stations by similarity, grouping the eight sites into two clusters. The study reveals that anthropogenic influence (municipal, industrial, wastewater, and agricultural runoff) was the major source of river water pollution. Thus, this study illustrates the utility of multivariate statistical techniques for the analysis and elucidation of multifaceted data sets, the recognition of pollution sources/factors, and the understanding of temporal/spatial variations in water quality for effective river water quality management.
Keywords: cluster analysis, multivariate statistical techniques, river Hindon, water quality
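A minimal sketch of the analysis chain described above: standardize site-by-parameter data, extract principal components, and group sites by hierarchical clustering. The data here are synthetic stand-ins for the measured parameters, and the varimax rotation step used in the study is omitted for brevity (it is not built into scikit-learn).

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, fcluster

# Rows = 8 monitoring sites, columns = 12 water quality parameters
# (pH, EC, TDS, ...); values are synthetic stand-ins.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 12))

Xs = StandardScaler().fit_transform(X)           # parameters on a common scale
pca = PCA(n_components=3).fit(Xs)                # three components ~ "varifactors"
print("explained variance ratio:", pca.explained_variance_ratio_)

# Cluster analysis: group sites with similar pollution profiles.
Z = linkage(Xs, method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")  # two clusters, as in the study
print("site clusters:", labels)
```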
Procedia PDF Downloads 467
6881 The Influence of Celebrity Endorsement on Consumers’ Attitude and Purchase Intention Towards Skincare Products in Malaysia
Authors: Tew Leh Ghee
Abstract:
The study's goal is to determine how celebrity endorsement affects Malaysian consumers' attitudes and intentions to buy skincare products. Celebrity endorsement is not, in reality, a new phenomenon, since customers now largely rely on it when making purchasing decisions in almost every business. Even though the market for skincare products has vast potential to be exploited, corporations have yet to seize this niche via celebrity endorsement, and little research has been done to recognize the significance of celebrity endorsement in this industry. This research combined descriptive and quantitative methods, with a self-administered survey as the primary data-gathering tool. All of the characteristics under study were measured using a 5-point Likert scale, and the questionnaire was written in English. A convenience sampling method was used to choose respondents, and 360 sets of valid questionnaires were gathered for the study's statistical analysis. Preliminary statistical analyses were performed using SPSS version 20.0 (Statistical Package for the Social Sciences). The respondents' demographic backgrounds were examined using descriptive analysis. The validity and reliability of all concept assessments were examined using exploratory factor analysis, item-total statistics, and reliability statistics. Pearson correlation and regression analysis were used, respectively, to assess relationships and impacts between the variables under study. The research showed that, apart from competence, celebrity endorsements of skincare products in Malaysia had a favorable impact on attitudes and purchase intentions as evaluated by attractiveness and dependability. The research indicated that the most significant element influencing attitude and purchase intention was the credibility of a celebrity endorsement. The study offers implications for potential improvements of celebrity endorsement for skincare goods in Malaysia. The study's last portion covers its limitations and ideas for future research.
Keywords: trustworthiness, influential, phenomenon, celebrity endorsement
Procedia PDF Downloads 81
6880 Clustering Based Level Set Evaluation for Low Contrast Images
Authors: Bikshalu Kalagadda, Srikanth Rangu
Abstract:
The main objective of image segmentation is to extract objects with respect to some input features. One of the important methods for image segmentation is the level set method. Medical images and synthetic images generally have low-contrast pixel profiles, which makes it difficult to locate features of interest in them. A conventional level set function develops irregularities while evaluating the contours of objects, which destroys the stability of the evolution process. As a remedy for this problem, a new hybrid algorithm, Clustering Level Set Evolution, is proposed. Kernel fuzzy particle swarm optimization clustering is used together with the Distance Regularized Level Set (DRLS) and the Selective Binary and Gaussian Filtering Regularized Level Set (SBGFRLS) methods. The ability to identify different regions becomes easier, with improved speed. The efficiency of the modified method can be evaluated by comparison with the previous methods under similar specifications, considering both medical and synthetic images.
Keywords: segmentation, clustering, level set function, re-initialization, kernel fuzzy, swarm optimization
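For readers unfamiliar with level set evolution, the sketch below shows one explicit update step of a level set function under an image-derived force plus a curvature smoothing term that keeps the evolution stable, loosely in the spirit of the regularized methods named above. It is a simplified illustration, not the DRLS or SBGFRLS formulation itself.

```python
import numpy as np

def level_set_step(phi, image_force, dt=0.1, mu=0.2):
    """One explicit evolution step of a level set function phi."""
    gy, gx = np.gradient(phi)
    grad_norm = np.sqrt(gx**2 + gy**2) + 1e-8
    # Curvature (divergence of the unit normal) smooths the contour and
    # stabilizes the evolution, in place of re-initialization.
    curvature = np.gradient(gx / grad_norm, axis=1) + np.gradient(gy / grad_norm, axis=0)
    return phi + dt * (image_force * grad_norm + mu * curvature)

# Usage: phi initialized as a signed distance to a circle; image_force
# would be derived from edge strength (small on strong edges).
yy, xx = np.mgrid[0:64, 0:64]
phi = np.sqrt((xx - 32.0) ** 2 + (yy - 32.0) ** 2) - 10.0
phi = level_set_step(phi, image_force=np.ones_like(phi))
```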
Procedia PDF Downloads 352
6879 Status of India towards Achieving the Millennium Development Goals
Authors: Rupali Satsangi
Abstract:
Fourteen years ago, leaders from every country agreed on a vision for the future – a world with less poverty, hunger, and disease; greater survival prospects for mothers and their infants; better educated children; equal opportunities for women; and a healthier environment; a world in which developed and developing countries work in partnership for the betterment of all. This vision took the shape of eight Millennium Development Goals, which provide countries around the world a framework for development and time-bound targets by which progress can be measured. India has found 35 of the indicators relevant to the country. India's MDG framework has been contextualized through a concordance with the existing official indicators of the corresponding dimensions in the national statistical system. The present study, based on secondary data, analyzed India's status towards achieving the MDGs. After reviewing the data, the study finds that India may miss the MDG bus in women's health, sanitation, and global partnership. These goals have been less addressed in India's policies and initiatives.
Keywords: millennium development goals, national statistical system, global partnership, healthier environment
Procedia PDF Downloads 404
6878 Job in Modern Arabic Poetry: A Semantic and Comparative Approach to Two Poems Referring to the Poet Al-Sayyab
Authors: Jeries Khoury
Abstract:
The use of legendary, folkloric, and religious symbols is one of the most important phenomena in modern Arabic poetry. Interestingly enough, most of the pioneers of modern Arabic poetry were fascinated by biblical symbols, and they managed to use many modern techniques to make these symbols adequate for their personal lives on the one hand and fit to their Islamic beliefs on the other. One of the most famous poets to do so was al-Sayya:b. The way he employed one of these symbols, Job, the new features he added to this character, and the link between this character and his personal life will be discussed in this study. Besides, the study will examine the influence of al-Sayya:b on another modern poet, Saadi Yusuf, who, following al-Sayya:b, used the character of Job in a special way, mixing its features with al-Sayya:b's personal features and in this way creating a new, mixed character. A semantic, cultural, and comparative analysis of the poems written by al-Sayya:b himself and by the other poets who evoked the mixed image of al-Sayya:b-Job can reveal the changes Arab poets made to the original biblical figure of Job to bring it closer to Islamic culture. The paper makes intensive use of notions of intertextuality in order to shed light on the network of relations between three kinds of texts (indeed three 'palimpsests': 1- biblical, the primary text; 2- poetic, al-Sayya:b's secondary version; 3- re-poetic, Sa'di Yusuf's tertiary version). The bottom line of this paper is that al-Sayya:b was directly influenced by the dramatic biblical story of Job more than by the brief Quranic version of the story. In fact, the 'new' character of Job designed by al-Sayya:b differs from the original one in so many aspects that we can safely say it is a Sayyabian Job that cannot be found in the poems of any other poets, unless they are evoking the tragedy of al-Sayya:b himself, as Saadi Yusuf did.
Keywords: Arabic poetry, intertextuality, job, meter, modernism, symbolism
Procedia PDF Downloads 200
6877 Intelligent Production Machine
Authors: A. Şahinoğlu, R. Gürbüz, A. Güllü, M. Karhan
Abstract:
This study aims at production machines that automatically perceive cutting data and alter the cutting parameters. The two most important parameters to be checked in the machine control unit are the feed rate and the spindle speed. These parameters are to be controlled based on the sound of the machine. The features of the optimum sound are introduced to the computer. During the process, real-time data are received and converted into numerical values by Matlab software. Based on these values, the feed rate and speed are decreased or increased at a certain rate until the optimum sound is acquired, and the cutting process is then carried out with the optimum cutting parameters. During chip removal, the features of the cutting tools, the material being cut, the cutting parameters, and the machine used all affect various process parameters. Instead of measuring the parameters that emerge during cutting, such as temperature, vibration, and tool wear, a detailed analysis of the sound emitted during the cutting process provides detection of the various data involved in the process in a much easier and more economical way. The relation between the cutting parameters and the sound is thus identified.
Keywords: cutting process, sound processing, intelligent lathe, sound analysis
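As a rough picture of the control loop described above, the sketch below computes a simple sound feature (RMS amplitude) per audio frame and nudges the feed rate toward a target level. The target value, step size, and feature choice are hypothetical placeholders, not the study's actual parameters (which were derived in Matlab).

```python
import numpy as np

TARGET_RMS = 0.12   # "optimum sound" level (hypothetical value)
STEP = 0.02         # fractional feed-rate adjustment per control cycle

def sound_feature(frame):
    """Root-mean-square amplitude of one audio frame."""
    return float(np.sqrt(np.mean(frame ** 2)))

def adjust_feed(feed_rate, frame):
    """Nudge the feed rate so the machining sound approaches the target."""
    rms = sound_feature(frame)
    if rms > TARGET_RMS:      # too loud: cutting too aggressively
        feed_rate *= 1 - STEP
    elif rms < TARGET_RMS:    # too quiet: cutting can be sped up
        feed_rate *= 1 + STEP
    return feed_rate

# In a real loop, frames would come from a microphone stream sampled in
# real time; here a random frame stands in for one control cycle.
feed = adjust_feed(1.0, np.random.default_rng(1).normal(0, 0.1, 4096))
print(feed)
```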
Procedia PDF Downloads 334
6876 Morphological Variation of the Mesenteric Lymph Node in Dromedary Camels: The Impact of Rearing Systems
Authors: Khenenou Tarek, Mohamed Amine Fares, Djallal Eddine Rahmoun
Abstract:
This study evaluates the morphological changes in the mesenteric lymph nodes of dromedaries under different rearing systems. We aimed to evaluate the adaptive behavior of the animal's immune system under environmental variations and to conduct a comparative analysis of the morphological features of the mesenteric lymph node of the one-humped camel (Camelus dromedarius) in the region of El Oued, under two different rearing systems with different practices and different purposes. The study was conducted using histo-morphometric techniques to analyze the morphological features of the mesenteric lymph node. Two groups of dromedaries were used: one group raised in a free-roaming housing system and another raised in a restricted-roaming housing system. The results revealed significant differences between the two groups in terms of the ratio and size of active follicles and also the cellular population of the functional zones. Animals living and roaming outside the farm barriers were more exposed to pathogens, which leads to the establishment of an adaptive process, whereas the animals living under the restricted-roaming housing system were not exposed to pathogens. This study indicates that the adaptive behavior of the animal's immune system under environmental variations is the functional translation of morphological changes. The obtained findings reveal that the morphological features of the mesenteric lymph node of the one-humped camel in the region of El Oued are directly linked to the rearing system practices.
Keywords: adaptive behavior, dromedary, lymph node, morphology, rearing systems
Procedia PDF Downloads 26
6875 Meta-Review of Scholarly Publications on Biosensors: A Bibliometric Study
Authors: Nasrine Olson
Abstract:
With over 70,000 scholarly publications on the topic of biosensors, an overview of the field has become a challenge. To facilitate such an overview, there are currently over 700 expert reviews of publications on biosensors and related topics. This study focuses on these review papers in order to provide a meta-review of the area. The paper provides a statistical analysis and overview of biosensor-related review papers. Comprehensive searches were conducted in the Web of Science and PubMed databases, and the resulting empirical material was analyzed using bibliometric methods and tools. The study finds that the biosensor-related review papers can be categorized into five related subgroups, broadly denoted by (i) properties of materials and particles, (ii) analysis and indicators, (iii) diagnostics, (iv) pollutants and analytical devices, and (v) treatment/application. For easy and clear access to the findings, visualizations of clusters and networks of connections are presented. The study includes a temporal dimension and identifies the trends over the years, with an emphasis on the most recent developments. This paper provides useful insights for those who wish to form a better understanding of the research trends in the area of biosensors.
Keywords: bibliometrics, biosensors, meta-review, statistical analysis, trends visualization
Procedia PDF Downloads 219
6874 Statistical Optimization of Vanillin Production by Pycnoporus Cinnabarinus 1181
Authors: Swarali Hingse, Shraddha Digole, Uday Annapure
Abstract:
The present study investigates the biotransformation of ferulic acid to vanillin by Pycnoporus cinnabarinus and its optimization using the one-factor-at-a-time method as well as a statistical approach. The effect of various physicochemical parameters and medium components was studied using the one-factor-at-a-time method. Screening of the significant factors was carried out using an L25 Taguchi orthogonal array, and the selected significant factors were then further optimized using response surface methodology (RSM). The significant media components obtained using the Taguchi L25 orthogonal array were glucose, KH2PO4, and yeast extract. Further, a Box-Behnken design was used to investigate the interactive effects of the three most significant media components. The final medium obtained after optimization using RSM, containing glucose (34.89 g/L), diammonium tartrate (1 g/L), yeast extract (1.47 g/L), MgSO4•7H2O (0.5 g/L), KH2PO4 (0.15 g/L), and CaCl2•2H2O (20 mg/L), resulted in an amplification of vanillin production from 30.88 mg/L to 187.63 mg/L.
Keywords: ferulic acid, pycnoporus cinnabarinus, response surface methodology, vanillin
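To make the RSM step concrete, the sketch below fits a full quadratic response surface to a Box-Behnken-style design for three coded factors and locates the predicted optimum on a grid. The design matrix and responses are synthetic stand-ins, not the paper's measurements.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Coded levels (-1, 0, +1) for glucose, KH2PO4, yeast extract in a
# Box-Behnken-style design (12 edge midpoints + 3 center replicates).
rng = np.random.default_rng(7)
X = np.array([[a, b, 0] for a in (-1, 1) for b in (-1, 1)] +
             [[a, 0, c] for a in (-1, 1) for c in (-1, 1)] +
             [[0, b, c] for b in (-1, 1) for c in (-1, 1)] +
             [[0, 0, 0]] * 3, dtype=float)
y = 150 - 20 * X[:, 0]**2 - 10 * X[:, 1]**2 + 15 * X[:, 2] + rng.normal(0, 3, len(X))

# Full quadratic response surface: linear, interaction, and squared terms.
quad = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(quad.fit_transform(X), y)

# Locate the predicted optimum on a coarse grid of coded factor levels.
grid = np.array(np.meshgrid(*[np.linspace(-1, 1, 21)] * 3)).reshape(3, -1).T
best = grid[np.argmax(model.predict(quad.transform(grid)))]
print("predicted optimum (coded levels):", best)
```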
Procedia PDF Downloads 384
6873 The Results of the Archaeological Excavations at the Site of Qurh in Al Ula Region
Authors: Ahmad Al Aboudi
Abstract:
The Department of Archaeology at King Saud University has conducted long-term excavations since 2004 at the archaeological site of Qurh in the Al-Ula area. The history of the site goes back to the eighth century AD. The main aim of the excavations is to train students in archaeological fieldwork and the associated scientific skills of exploration, surveying, classification, documentation, and others necessary in field archaeology. During the 12th season of excavations, an area of 20 × 40 m of the site was excavated, to a depth of 2-3 m. Many of the architectural features of a residential area in the northern part of the site were excavated this season, revealing circular mud-brick walls, brick column drums, and clay tiles. Additionally, many finds such as gemstones, jars, ceramic plates, metal, glass, and fabric, as well as some jewelry and coins, were discovered. This paper deals with the main results of this field project, including the architectural features and phenomena and their interpretations, the classification of the excavated material culture remains, and the stratigraphy.
Keywords: Islamic architecture, Islamic art, excavations, early Islamic city
Procedia PDF Downloads 276
6872 Integrating Cyber-Physical System toward Advance Intelligent Industry: Features, Requirements and Challenges
Authors: V. Reyes, P. Ferreira
Abstract:
In response to high levels of competitiveness, industrial systems have evolved to improve productivity. As a consequence, a rapid increase in production volume and, simultaneously, a customization process require lower costs, more variety, and consistent product quality. Reducing production cycle times, enabling customizability, and ensuring continuous quality improvement are key features of the advanced intelligent industry. In this scenario, customers and producers will be able to participate in the ongoing production life cycle through real-time interaction. To achieve this vision, transparency, predictability, and adaptability are key features that give industrial systems the capability to adapt to customer demands by modifying the manufacturing process through autonomous responses and acting preventively to avoid errors. The industrial system incorporates a diverse set of components that in the advanced industry are expected to be decentralized, communicating end to end, and capable of making their own decisions through feedback. The evolution towards the advanced intelligent industry defines a set of stages that endow components with intelligence and enhance efficiency until the decision-making stage is reached. The integrated system follows an industrial cyber-physical system (CPS) architecture whose real-time integration, based on a set of enabling technologies, links the physical and virtual worlds, generating the digital twin (DT). This instance allows sensor data to be incorporated from the real world into the virtual one and provides the transparency required for real-time monitoring and control, contributing to important features of the advanced intelligent industry and simultaneously improving sustainability. Taking the industrial CPS as the core technology of the latest stage of the advanced intelligent industry, this paper reviews and highlights the correlation and contributions of the enabling technologies for the operationalization of each stage on the path toward the advanced intelligent industry. From this research, a real-time integration architecture for a cyber-physical system with applications to collaborative robotics is proposed, and the functionalities and issues required to endow the industrial system with adaptability are identified.
Keywords: cyber-physical systems, digital twin, sensor data, system integration, virtual model
Procedia PDF Downloads 118
6871 A Machine Learning-Based Model to Screen Antituberculosis Compound Targeted against LprG Lipoprotein of Mycobacterium tuberculosis
Authors: Syed Asif Hassan, Syed Atif Hassan
Abstract:
Multidrug-resistant tuberculosis (MDR-TB) is an infection caused by resistant strains of Mycobacterium tuberculosis that do not respond to either isoniazid or rifampicin, the most important anti-TB drugs. The increasing occurrence of drug-resistant strains of MTB calls for an intensive search for novel target-based therapeutics. In this context, LprG (Rv1411c), a lipoprotein from MTB, plays a pivotal role in the immune evasion of Mtb, leading to the survival and propagation of the bacterium within the host cell. Therefore, a machine learning method will be developed for generating a computational model that can predict the potential anti-LprG activity of novel antituberculosis compounds. The present study will utilize a dataset from the PubChem database maintained by the National Center for Biotechnology Information (NCBI). The dataset involves compounds screened against MTB, categorized as active or inactive based on the PubChem activity score. PowerMV, a molecular descriptor generator and visualization tool, will be used to generate the 2D molecular descriptors for the active and inactive compounds present in the dataset. The 2D molecular descriptors generated by PowerMV will be used as features. We feed these features into three different classifiers, namely a random forest, a deep neural network, and a recurrent neural network, to build separate predictive models, choosing the best-performing model based on its accuracy in predicting novel antituberculosis compounds with anti-LprG activity. Additionally, the predicted active compounds will be screened using a SMARTS filter to choose molecules with drug-like features.
Keywords: antituberculosis drug, classifier, machine learning, molecular descriptors, prediction
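A minimal sketch of the descriptor-to-classifier step: a random forest trained on 2D molecular descriptors to separate active from inactive compounds. The descriptor matrix here is a synthetic stand-in (real descriptors would come from a tool such as PowerMV), and the deep and recurrent network variants mentioned above are omitted.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for 2D molecular descriptors (rows = compounds);
# labels mimic PubChem active/inactive flags.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 30))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```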
Procedia PDF Downloads 392
6870 The Impact of Recurring Events in Fake News Detection
Authors: Ali Raza, Shafiq Ur Rehman Khan, Raja Sher Afgun Usmani, Asif Raza, Basit Umair
Abstract:
Detection of fake news and missing information is gaining attention, especially after the advancement of social media and online news platforms. Social media platforms are the main and speediest source of fake news propagation, whereas online news websites contribute to fake news dissemination. In this study, we propose a framework to detect fake news using the temporal features of text and consider user feedback to identify whether the news is fake or not. Recent studies give valuable consideration to the temporal features in text documents from a Natural Language Processing perspective and to user feedback, but only attempt to classify the textual data as fake or true. This research article examines the impact of recurring and non-recurring events on fake and true news. We use two models, BERT and Bi-LSTM, to investigate this; we conclude that BERT gives better results and that 70% of true news items are recurring while the remaining 30% are non-recurring.
Keywords: natural language processing, fake news detection, machine learning, Bi-LSTM
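A minimal sketch of the Bi-LSTM side of the comparison: a tiny bidirectional LSTM classifier that maps token-id sequences to fake/true logits. The vocabulary size, dimensions, and pooling choice are illustrative assumptions rather than the paper's configuration, and the BERT baseline would typically be fine-tuned via a library such as Hugging Face Transformers.

```python
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    """Minimal Bi-LSTM text classifier: token ids -> fake/true logits."""
    def __init__(self, vocab_size=10000, emb_dim=100, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, 2)  # two classes: fake / true

    def forward(self, token_ids):
        x = self.emb(token_ids)
        out, _ = self.lstm(x)
        return self.fc(out.mean(dim=1))  # mean-pool over the sequence

# One forward pass on a dummy batch of 4 articles, 50 tokens each.
model = BiLSTMClassifier()
logits = model(torch.randint(0, 10000, (4, 50)))
print(logits.shape)  # torch.Size([4, 2])
```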
Procedia PDF Downloads 25
6869 A Novel Method for Face Detection
Authors: H. Abas Nejad, A. R. Teymoori
Abstract:
Facial expression recognition is one of the open problems in computer vision. Robust neutral face recognition in real time is a major challenge for various supervised-learning-based facial expression recognition methods. This is due to the fact that supervised methods cannot accommodate all the appearance variability across faces with respect to race, pose, lighting, facial biases, etc., in a limited amount of training data. Moreover, processing each and every frame to classify emotions is not required, as the user stays neutral for the majority of the time in usual applications like video chat or photo album/web browsing. Detecting the neutral state at an early stage and thereby bypassing those frames from emotion classification would save computational power. In this work, we propose a lightweight neutral vs. emotion classification engine, which acts as a preprocessor to traditional supervised emotion classification approaches. It dynamically learns the neutral appearance at Key Emotion (KE) points using a textural statistical model constructed from a set of reference neutral frames for each user. The proposed method is made robust to various types of user head motion by accounting for affine distortions based on the textural statistical model. Robustness to dynamic shifts of the KE points is achieved by evaluating the similarities on a subset of neighborhood patches around each KE point, using prior information regarding the directionality of the specific facial action units acting on the respective KE point. The proposed method, as a result, improves ER accuracy and simultaneously reduces the computational complexity of the ER system, as validated on multiple databases.
Keywords: neutral vs. emotion classification, constrained local model, Procrustes analysis, local binary pattern histogram, statistical model
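One way to picture the textural statistical model: compare a patch's LBP histogram at a KE point against the neutral template learned from reference frames, declaring the frame neutral when the distance is small. The histogram values and threshold below are placeholders, not the paper's parameters.

```python
import numpy as np

def chi_square(h1, h2, eps=1e-10):
    """Chi-square distance between two normalized histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

# Hypothetical neutral template: mean LBP histogram over reference
# neutral frames for one KE point; a new frame's patch is declared
# "neutral" when its distance falls below a learned threshold.
neutral_template = np.full(10, 0.1)  # placeholder 10-bin LBP histogram
patch_hist = np.array([0.12, 0.08, 0.1, 0.1, 0.11, 0.09, 0.1, 0.1, 0.1, 0.1])
is_neutral = chi_square(patch_hist, neutral_template) < 0.05
print(is_neutral)
```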
Procedia PDF Downloads 340
6868 A Topological Approach for Motion Track Discrimination
Authors: Tegan H. Emerson, Colin C. Olson, George Stantchev, Jason A. Edelberg, Michael Wilson
Abstract:
Detecting small targets at range is difficult because there is not enough spatial information present in an image sub-region containing the target to use correlation-based methods to differentiate it from dynamic confusers present in the scene. Moreover, this lack of spatial information also disqualifies the use of most state-of-the-art deep learning image-based classifiers. Here, we use characteristics of target tracks extracted from video sequences as data from which to derive distinguishing topological features that help robustly differentiate targets of interest from confusers. In particular, we calculate persistent homology from time-delayed embeddings of dynamic statistics calculated from motion tracks extracted from a wide field-of-view video stream. In short, we use topological methods to extract features related to target motion dynamics that are useful for classification and disambiguation and show that small targets can be detected at range with high probability.
Keywords: motion tracks, persistence images, time-delay embedding, topological data analysis
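The sketch below illustrates the core computation: a 1-D track statistic is time-delay embedded into a point cloud whose persistent homology is then computed. It assumes the ripser package (one common persistent homology library); the signal, delay, and embedding dimension are illustrative choices, not the paper's settings.

```python
import numpy as np
from ripser import ripser  # pip install ripser

def delay_embed(x, dim=3, tau=5):
    """Time-delay embedding of a 1-D signal into R^dim."""
    n = len(x) - (dim - 1) * tau
    return np.stack([x[i * tau : i * tau + n] for i in range(dim)], axis=1)

# Synthetic per-frame track statistic (e.g. speed along a motion track);
# a periodic confuser should produce a prominent H1 (loop) feature.
t = np.linspace(0, 20 * np.pi, 600)
speed = np.sin(t) + 0.1 * np.random.default_rng(3).normal(size=t.size)

cloud = delay_embed(speed)
diagrams = ripser(cloud)["dgms"]   # persistence diagrams for H0, H1
h1 = diagrams[1]
print("most persistent loop lifetime:", (h1[:, 1] - h1[:, 0]).max())
```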
Procedia PDF Downloads 114
6867 A Statistical Approach to Rationalise the Number of Working Load Test for Quality Control of Pile Installation in Singapore Jurong Formation
Authors: Nuo Xu, Kok Hun Goh, Jeyatharan Kumarasamy
Abstract:
Pile load testing is significant during foundation construction due to its traditional role of design validation and routine quality control of the piling works. In order to verify whether piles can take loadings at specified settlements, piles have to undergo a working load test, where the test load should normally be up to 150% of the working load of a pile. The selection or sampling of piles for the working load test is done subject to the number specified in the Singapore National Annex to Eurocode 7, SS EN 1997-1:2010. This paper presents an innovative way to rationalize the number of pile load tests by adopting a statistical analysis approach and looking at the coefficient of variation of the pile elastic modulus, using a case study at the Singapore Tuas depot. The results are very promising and show that it is possible to reduce the number of working load tests without influencing the reliability of and confidence in the pile quality. Moving forward, it is suggested that more load test data from other geological formations be examined for comparison with the findings of this paper.
Keywords: elastic modulus of pile under soil interaction, Jurong formation, kentledge test, pile load test
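To make the statistic concrete, the sketch below computes the coefficient of variation of back-calculated pile elastic moduli and applies a simple sampling rule. The moduli and the tolerance are hypothetical illustrations, not the paper's data or criterion.

```python
import numpy as np

# Hypothetical back-calculated pile elastic moduli (GPa) from a set of
# instrumented working load tests on one site.
E = np.array([31.2, 29.8, 33.1, 30.5, 32.0, 28.9, 31.7, 30.2])

cov = E.std(ddof=1) / E.mean()   # coefficient of variation
print(f"CoV = {cov:.3f}")

# One possible rationalization rule (an assumption, not the paper's
# criterion): if the CoV stays below a tolerance, later piles may be
# sampled at a reduced testing frequency.
TOLERANCE = 0.10
print("reduce test frequency" if cov < TOLERANCE else "keep full testing")
```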
Procedia PDF Downloads 386
6866 Statistical Analysis to Compare between Smart City and Traditional Housing
Authors: Taha Anjamrooz, Sareh Rajabi, Ayman Alzaatreh
Abstract:
Smart cities are playing important roles in real life. Integration and automation between the different features of modern cities and information technologies improve smart city efficiency, energy management, human and equipment resource management, quality of life, and the utilization of resources for customers. One of the difficulties on this path is the use of, interfacing with, and linking between software, hardware, and other IT technologies to develop and optimize processes in various business fields such as construction, supply chain management, and transportation, in parallel with cost-effectiveness and resource reduction. Smart cities are also intended to play a vital role in offering a sustainable and efficient model for smart houses while mitigating environmental and ecological problems. Energy management is one of the most important matters for the smart houses within smart cities and communities, because of the sensitivity of energy systems, the need to reduce energy wastage, and the need to maximize the utilization of the required energy. In particular, the consumption of energy in smart houses weighs considerably in the economic balance and energy management of a smart city, as it enables a significant increase in energy saving and a reduction in energy wastage. This research paper develops the features and concept of the smart city in terms of overall efficiency through various effective variables. The selected variables and observations are analyzed through data analysis processes to demonstrate the efficiency of the smart city and compare the effectiveness of each variable. Ten variables are chosen in this study to improve the overall efficiency of the smart city: first, by increasing the effectiveness of smart houses using an automated solar photovoltaic system, an RFID system, smart meters, and other major elements, interfacing software and hardware devices as well as IT technologies; and second, by enhancing energy management through energy saving within the smart house via these efficient variables. The main objective of the smart city and smart houses is to reproduce energy and increase its efficiency through the selected variables, providing a comfortable and harmless atmosphere for the customers within a smart city, in combination with control over the energy consumption in the smart house using developed IT technologies. Initially, a comparison between traditional housing and smart city samples is conducted to indicate the more efficient system. Moreover, the main variables involved in measuring the overall efficiency of the system are analyzed through various processes to identify and prioritize the variables according to their influence on the model. The resulting analysis of this model can be used for comparison and benchmarking against the traditional lifestyle to demonstrate the privileges of smart cities. Furthermore, due to the expense and expected shortage of natural resources in the near future, the limited research in the region, and the available potential arising from the climate and governmental vision, the results and analysis of this study can be used as a key indicator for selecting the most effective variables or devices during the construction and design phase.
Keywords: smart city, traditional housing, RFID, photovoltaic system, energy efficiency, energy saving
Procedia PDF Downloads 113
6865 Automatic Classification of Periodic Heart Sounds Using Convolutional Neural Network
Authors: Jia Xin Low, Keng Wah Choo
Abstract:
This paper presents an automatic normal and abnormal heart sound classification model developed based on a deep learning algorithm. The MITHSDB heart sound datasets obtained from the 2016 PhysioNet/Computing in Cardiology Challenge database were used in this research, with the assumption that the electrocardiograms (ECG) were recorded simultaneously with the heart sounds (phonocardiogram, PCG). The PCG time series are segmented per heartbeat, and each sub-segment is converted into a square intensity matrix and classified using convolutional neural network (CNN) models. This approach removes the need to provide handcrafted classification features to the supervised machine learning algorithm; instead, the features are determined automatically through training on the time series provided. The results prove that the prediction model is able to provide reasonable and comparable classification accuracy despite its simple implementation. This approach can be used for real-time classification of heart sounds in the Internet of Medical Things (IoMT), e.g., in remote monitoring applications of the PCG signal.
Keywords: convolutional neural network, discrete wavelet transform, deep learning, heart sound classification
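A minimal sketch of the per-beat pipeline: a heartbeat segment is reshaped into a square intensity matrix and passed through a tiny CNN that outputs normal/abnormal logits. The segment length, matrix size, and layer sizes are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class HeartSoundCNN(nn.Module):
    """Tiny CNN over per-beat square intensity matrices (1 x 64 x 64)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 16 * 16, 2)  # normal vs abnormal

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# A heartbeat segment of 4096 samples reshaped into a 64 x 64 matrix
# stands in for one "square intensity matrix".
beat = torch.randn(4096).reshape(1, 1, 64, 64)
print(HeartSoundCNN()(beat).shape)  # torch.Size([1, 2])
```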
Procedia PDF Downloads 349
6864 Developing a Leukemia Diagnostic System Based on Hybrid Deep Learning Architectures in Actual Clinical Environments
Authors: Skyler Kim
Abstract:
An early diagnosis of leukemia has always been a challenge for doctors and hematologists. Worldwide, approximately 350,000 new cases were reported in 2012, and diagnosing leukemia has been time-consuming and inefficient because of an endemic shortage of flow cytometry equipment in current clinical practice. As the number of medical diagnostic tools increased and a large volume of high-quality data was produced, there was an urgent need for more advanced data analysis methods. One of these methods is the AI approach. This approach has become a major trend in recent years, and several research groups have been working on developing such diagnostic models. However, designing and implementing a leukemia diagnostic system in real clinical environments based on a deep learning approach with larger data sets remains complex. Leukemia is a major hematological malignancy that results in mortality and morbidity across different ages. We decided to select acute lymphocytic leukemia to develop our diagnostic system, since it is the most common type of leukemia, accounting for 74% of all children diagnosed with leukemia; the results from this development work can be applied to all other types of leukemia. To develop our model, the Kaggle dataset was used, which consists of 15135 images in total, 8491 of which are images of abnormal cells and 5398 of which are normal. In this paper, we design and implement a leukemia diagnostic system in a real clinical environment based on deep learning approaches with larger data sets. The proposed diagnostic system has the function of detecting and classifying leukemia. Unlike other AI approaches, we explore hybrid architectures to improve on the current performance. First, we developed two independent convolutional neural network models: VGG19 and ResNet50. Then, using both VGG19 and ResNet50, we developed a hybrid deep learning architecture employing transfer learning techniques to extract features from each input image. In our approach, fusing the features from specific abstraction layers can be deemed to provide auxiliary features and leads to further improvement of the classification accuracy. Here, features extracted from the lower levels are combined into higher-dimensional feature maps to help improve the discriminative capability of intermediate features and also to overcome the problem of network gradients vanishing or exploding. By comparing VGG19, ResNet50, and the proposed hybrid model, we concluded that the hybrid model has a significant advantage in accuracy. The detailed results of each model's performance and their pros and cons will be presented at the conference.
Keywords: acute lymphoblastic leukemia, hybrid model, leukemia diagnostic system, machine learning
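A minimal sketch of the hybrid idea: extract features with both VGG19 and ResNet50 backbones and classify on their concatenation. It uses torchvision models with randomly initialized weights for brevity (transfer learning would load pretrained weights instead); the fusion point and classifier head are simplified assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
from torchvision import models

class HybridLeukemiaNet(nn.Module):
    """Concatenate VGG19 and ResNet50 features, then classify."""
    def __init__(self, n_classes=2):
        super().__init__()
        vgg = models.vgg19(weights=None)   # use weights="IMAGENET1K_V1" for transfer learning
        res = models.resnet50(weights=None)
        self.vgg_features = nn.Sequential(vgg.features, nn.AdaptiveAvgPool2d(1))
        self.res_features = nn.Sequential(*list(res.children())[:-1])
        self.classifier = nn.Linear(512 + 2048, n_classes)

    def forward(self, x):
        v = self.vgg_features(x).flatten(1)   # 512-dim VGG19 features
        r = self.res_features(x).flatten(1)   # 2048-dim ResNet50 features
        return self.classifier(torch.cat([v, r], dim=1))

print(HybridLeukemiaNet()(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 2])
```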
Procedia PDF Downloads 187
6863 Evaluation of Gesture-Based Password: User Behavioral Features Using Machine Learning Algorithms
Authors: Lakshmidevi Sreeramareddy, Komalpreet Kaur, Nane Pothier
Abstract:
Graphical passwords have existed for decades. Their major advantage is that they are easier to remember than an alphanumeric password. However, their disadvantage (especially for recognition-based passwords) is the smaller password space, making them more vulnerable to brute-force attacks. Graphical passwords are also highly susceptible to shoulder surfing. The gesture-based password method that we developed is a grid-free, template-free method. In this study, we evaluated gesture-based passwords for usability and vulnerability, and the results are significant. We developed a gesture-based password application for data collection. Two modes of data collection were used: creation mode and replication mode. In creation mode (Session 1), users were asked to create six different passwords and re-enter each password five times. In replication mode, users saw a password image created by some other user for a fixed duration of time. Three different durations, 5 seconds (Session 2), 10 seconds (Session 3), and 15 seconds (Session 4), were used to mimic the shoulder-surfing attack. After the timer expired, the password image was removed, and users were asked to replicate the password. There were 74, 57, 50, and 44 users participating in Sessions 1, 2, 3, and 4, respectively. In this study, machine learning algorithms were applied to determine whether a person is a genuine user or an imposter based on the password entered. Five different machine learning algorithms were deployed to compare their performance in user authentication: namely, decision trees, linear discriminant analysis, the naive Bayes classifier, support vector machines (SVMs) with a Gaussian radial basis kernel function, and k-nearest neighbors. Gesture-based password features vary from one entry to the next, making it difficult to distinguish between a creator and an intruder for authentication. For each password entered by a user, four features were extracted: password score, password length, password speed, and password size. All four features were normalized before being fed to a classifier. Three different classifiers were trained using data from all four sessions. Classifiers A, B, and C were trained and tested using data from the password creation session and the password replication sessions with timers of 5, 10, and 15 seconds, respectively. The classification accuracies for Classifier A using the five ML algorithms are 72.5%, 71.3%, 71.9%, 74.4%, and 72.9%, respectively; for Classifier B, 69.7%, 67.9%, 70.2%, 73.8%, and 71.2%; and for Classifier C, 68.1%, 64.9%, 68.4%, 71.5%, and 69.8%. SVMs with a Gaussian radial basis kernel outperform the other ML algorithms for gesture-based password authentication. The results confirm that the shorter the duration of the shoulder-surfing attack, the higher the authentication accuracy. In conclusion, behavioral features extracted from gesture-based passwords lead to less vulnerable user authentication.
Keywords: authentication, gesture-based passwords, machine learning algorithms, shoulder-surfing attacks, usability
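A minimal sketch of the best-performing configuration: the four normalized behavioral features fed to an SVM with a Gaussian RBF kernel. The feature distributions below are synthetic stand-ins for genuine and imposter entries, not the collected data.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Synthetic stand-ins for the four behavioral features per password entry:
# score, length, speed, size. Label 1 = genuine creator, 0 = imposter.
rng = np.random.default_rng(5)
genuine = rng.normal([0.8, 20, 1.0, 5.0], 0.2, size=(200, 4))
imposter = rng.normal([0.6, 18, 1.4, 5.5], 0.4, size=(200, 4))
X = np.vstack([genuine, imposter])
y = np.array([1] * 200 + [0] * 200)

# Normalization + SVM with Gaussian RBF kernel, as in the study.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale"))
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```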
Procedia PDF Downloads 107
6862 Improving Fake News Detection Using K-means and Support Vector Machine Approaches
Authors: Kasra Majbouri Yazdi, Adel Majbouri Yazdi, Saeid Khodayi, Jingyu Hou, Wanlei Zhou, Saeed Saedy
Abstract:
Fake news and false information are big challenges for all types of media, especially social media. There is a lot of false information, fake likes, views, and duplicated accounts, as big social networks such as Facebook and Twitter have admitted. Most information appearing on social media is doubtful and in some cases misleading. It needs to be detected as soon as possible to avoid a negative impact on society. The dimensions of fake news datasets are growing rapidly, so to obtain better detection of false information with less computation time and complexity, the dimensionality needs to be reduced. One of the best techniques for reducing data size is the feature selection method, whose aim is to choose a feature subset from the original set to improve the classification performance. In this paper, a feature selection method is proposed that integrates K-means clustering and Support Vector Machine (SVM) approaches and works in four steps. First, the similarities between all features are calculated. Then, the features are divided into several clusters. Next, the final feature set is selected from all the clusters, and finally, fake news is classified based on the final feature subset using the SVM method. The proposed method was evaluated by comparing its performance with other state-of-the-art methods on several benchmark datasets, and the outcome showed a better classification of false information for our work. The detection performance was improved in two respects: on the one hand, the detection runtime decreased, and on the other hand, the classification accuracy increased because of the elimination of redundant features and the reduction of the datasets' dimensions.
Keywords: clustering, fake news detection, feature selection, machine learning, social media, support vector machine
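The four steps map naturally onto a short script: cluster the feature columns by similarity, keep one representative per cluster, and classify with an SVM on the reduced subset. The data are synthetic, and choosing the feature closest to each centroid is one plausible reading of the selection step, not necessarily the authors' exact rule.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Synthetic high-dimensional text-like features (rows = news items).
X, y = make_classification(n_samples=400, n_features=60, n_informative=10,
                           random_state=0)

# Steps 1-2: cluster the *features* (columns) by their profile similarity.
k = 10
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X.T)

# Step 3: pick one representative feature per cluster (closest to centroid).
selected = [
    np.where(km.labels_ == c)[0][
        np.argmin(np.linalg.norm(X.T[km.labels_ == c] - km.cluster_centers_[c], axis=1))
    ]
    for c in range(k)
]

# Step 4: classify with SVM on the reduced feature subset.
print("CV accuracy:", cross_val_score(SVC(), X[:, selected], y, cv=5).mean())
```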
Procedia PDF Downloads 178
6861 A Comparative Study of Social Entrepreneurship Centers in Universities of the World
Authors: Farnoosh Alami, Nazgol Azimi
Abstract:
Universities have recently paid much attention to the subject of social entrepreneurship. As a result, many highly ranked universities have established centers in this regard. The present research aims to investigate the vision and mission of the social entrepreneurship centers of the best universities ranked in the top 50 by the Shanghai List 2013. It tries to find the common goals and features of their missions, visions, and activities which have led to their present success. This investigation is based on the web content of the top 10 universities, six of which had social entrepreneurship centers. This is qualitative research, and the findings are based on content analysis of documents. The findings confirm that education, research, talent development, innovative solutions, and support for social innovation are shared in the vision of these centers. With regard to their missions, social participation, networking, and leader education are the most shared features. Their common activities focus on five categories: education, research, support, promotion, and networking.
Keywords: comparative study, qualitative research, social entrepreneurship centers, universities in the world
Procedia PDF Downloads 297
6860 Wind Velocity Mitigation for Conceptual Design: A Spatial Decision (Support Framework)
Authors: Mohamed Khallaf, Hossein M Rizeei
Abstract:
Simulating wind pattern behavior over proposed urban features is critical at the early, conceptual design stage in both the architectural and urban disciplines. However, it is typically not possible for designers to explore the impact of wind flow profiles across new urban developments, due to a lack of real data and inaccurate estimation of building parameters. Modeling the details of existing and proposed urban features and testing them against wind flows is the missing piece of the conceptual design puzzle on which the architectural and urban disciplines can focus. This research aims to develop a spatial decision-support design method utilizing LiDAR, GIS, and performance-based wind simulation technology to mitigate wind-related hazards in a design, by simulating alternative design scenarios at the pedestrian level prior to their implementation in Sydney, Australia. The result of the experiment demonstrates the capability of the proposed framework to improve pedestrian comfort with respect to the wind profile.
Keywords: spatial decision-support design, performance-based wind simulation, LiDAR, GIS
Procedia PDF Downloads 126
6859 Probability Sampling in Matched Case-Control Study in Drug Abuse
Authors: Surya R. Niraula, Devendra B Chhetry, Girish K. Singh, S. Nagesh, Frederick A. Connell
Abstract:
Background: Although random sampling is generally considered to be the gold standard for population-based research, the majority of drug abuse research is based on non-random sampling, despite the well-known limitations of this kind of sampling. Method: We compared the statistical properties of two surveys of drug abuse in the same community: one using snowball sampling of drug users who then identified “friend controls” and the other using a random sample of non-drug users (controls) who then identified “friend cases.” Models to predict drug abuse based on risk factors were developed for each data set using conditional logistic regression. We compared the precision of each model using the bootstrapping method and the predictive properties of each model using receiver operating characteristic (ROC) curves. Results: Analysis of 100 random bootstrap samples drawn from the snowball-sample data set showed a wide variation in the standard errors of the beta coefficients of the predictive model, none of which achieved statistical significance. On the other hand, bootstrap analysis of the random-sample data set showed less variation and did not change the significance of the predictors at the 5% level when compared to the non-bootstrap analysis. The area under the ROC curve for the model derived from the random-sample data set was similar when the model was fitted to either data set (0.93 for the random-sample data vs. 0.91 for the snowball-sample data, p = 0.35); however, when the model derived from the snowball-sample data set was fitted to each of the data sets, the areas under the curve were significantly different (0.98 vs. 0.83, p < .001). Conclusion: The proposed method of random sampling of controls appears to be superior from a statistical perspective to snowball sampling and may represent a viable alternative to snowball sampling.
Keywords: drug abuse, matched case-control study, non-probability sampling, probability sampling
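A minimal sketch of the two evaluation steps: bootstrapping the model coefficients to gauge their precision and computing the area under the ROC curve. The data are synthetic stand-ins, and plain logistic regression is used for brevity where the study fits conditional logistic regression to respect the matching.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for a case-control data set: 3 risk factors,
# outcome = drug abuse (1) vs control (0).
rng = np.random.default_rng(11)
X = rng.normal(size=(300, 3))
y = (X @ np.array([1.0, 0.5, 0.0]) + rng.normal(0, 1, 300) > 0).astype(int)

# Bootstrap the coefficients to gauge their precision.
betas = []
for _ in range(100):
    idx = rng.integers(0, len(y), len(y))
    betas.append(LogisticRegression().fit(X[idx], y[idx]).coef_[0])
print("bootstrap SE of coefficients:", np.array(betas).std(axis=0))

# Predictive performance via the area under the ROC curve.
model = LogisticRegression().fit(X, y)
print("AUC:", roc_auc_score(y, model.predict_proba(X)[:, 1]))
```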
Procedia PDF Downloads 493
6858 Combination between Intrusion Systems and Honeypots
Authors: Majed Sanan, Mohammad Rammal, Wassim Rammal
Abstract:
Today, security is a major concern. Intrusion detection and prevention systems and honeypots can be used to moderate attacks. Many researchers have proposed various IDSs (intrusion detection systems) over time; some of these combine the features of two or more IDSs and are called hybrid intrusion detection systems. Most researchers combine the features of signature-based and anomaly-based detection methodologies. For a signature-based IDS, if an attacker attacks slowly and in an organized way, the attack may pass through the IDS undetected, as signatures include factors based on the duration of events but the attacker's actions do not match them. Sometimes, for an unknown attack, no signature has been added yet, or an attacker strikes exactly while the database is being updated. Thus, signature-based IDSs fail to detect unknown attacks. Anomaly-based IDSs, in turn, suffer from many false-positive readings. So there is a need to hybridize IDSs so that they can overcome each other's shortcomings. In this paper, we propose a new approach to the IDS that is more efficient than traditional IDSs: a modified hybrid intrusion detection system (HIDS) that combines the positive features of two different detection methodologies, honeypot technology and anomaly-based detection. We designed an architecture for the IDS in a packet tracer and then implemented it in real time. We discuss the experimental results performed: both the honeypot and the anomaly-based IDS have some shortcomings, but when these two technologies are hybridized, the proposed HIDS is capable of overcoming these shortcomings with much-enhanced performance. In the experiment, we first ran each intrusion detection system individually and then ran them together, recording the data from time to time. From the data, we can conclude that the resulting IDS is much better at detecting intrusions than the existing IDSs.
Keywords: security, intrusion detection, intrusion prevention, honeypot, anomaly-based detection, signature-based detection, cloud computing, kfsensor
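One way to picture the hybrid rule: a connection raises an alert if it either touches a honeypot (any contact with a decoy host is suspicious by construction) or scores as anomalous against a model of normal traffic. The sketch below is an illustrative assumption using an isolation forest as the anomaly detector, not the paper's implementation.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-connection records: [bytes, duration, packets].
rng = np.random.default_rng(2)
normal_traffic = rng.normal([500, 1.0, 10], [100, 0.3, 3], size=(500, 3))
detector = IsolationForest(random_state=0).fit(normal_traffic)

HONEYPOT_IPS = {"10.0.0.99"}  # decoy host: any contact is suspicious

def classify(conn_features, dst_ip):
    """Hybrid rule: honeypot contact OR anomalous traffic profile -> alert."""
    honeypot_hit = dst_ip in HONEYPOT_IPS
    anomalous = detector.predict([conn_features])[0] == -1
    return "ALERT" if (honeypot_hit or anomalous) else "ok"

print(classify([520, 1.1, 11], "10.0.0.5"))     # ok
print(classify([9000, 0.1, 200], "10.0.0.5"))   # ALERT (anomaly)
print(classify([520, 1.1, 11], "10.0.0.99"))    # ALERT (honeypot)
```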
Procedia PDF Downloads 384
6857 Feature-Based Summarizing and Ranking from Customer Reviews
Authors: Dim En Nyaung, Thin Lai Lai Thein
Abstract:
Due to the rapid growth of the Internet, web opinion sources dynamically emerge, which is useful for both potential customers and product manufacturers for prediction and decision purposes. These are user-generated contents written in natural language, in an unstructured free-text scheme. Therefore, opinion mining techniques have become popular for automatically processing customer reviews to extract product features and the user opinions expressed about them. Since customer reviews may contain both opinionated and factual sentences, a supervised machine learning technique is applied for subjectivity classification to improve the mining performance. In this paper, we dedicate our work to the task of opinion summarization. Product feature and opinion extraction is critical to opinion summarization, because its effectiveness significantly affects the identification of semantic relationships. The polarity and numeric score of all the features are determined using the SentiWordNet lexicon. The problem of opinion summarization concerns how to relate the opinion words to a certain feature. A probabilistic supervised learning model improves the results and is more flexible and effective.
Keywords: opinion mining, opinion summarization, sentiment analysis, text mining
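A minimal sketch of the lexicon-scoring step: for each extracted feature-opinion pair, the opinion word's polarity is computed from SentiWordNet via NLTK. The example pairs are hypothetical, and averaging over all synsets is one simple scoring choice, not necessarily the authors' exact scheme.

```python
# Requires: nltk.download("sentiwordnet"); nltk.download("wordnet")
from nltk.corpus import sentiwordnet as swn

def opinion_polarity(word, pos="a"):
    """Average positive-minus-negative SentiWordNet score for a word."""
    synsets = list(swn.senti_synsets(word, pos))
    if not synsets:
        return 0.0
    return sum(s.pos_score() - s.neg_score() for s in synsets) / len(synsets)

# Feature-opinion pairs extracted from reviews (hypothetical examples).
for feature, opinion in [("battery", "excellent"), ("screen", "dim")]:
    print(feature, opinion, round(opinion_polarity(opinion), 3))
```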
Procedia PDF Downloads 332
6856 Statistical Process Control in Manufacturing, a Case Study on an Iranian Automobile Company
Authors: M. E. Khiav, D. J. Borah, H. T. S. Santos, V. T. Faria
Abstract:
For automobile companies, it has become very important to ensure sound quality in manufacturing and assembly in order to prevent the occurrence of defects and to reduce the amount of part replacement to be done in the service centers during the warranty period. Statistical Process Control (SPC) is widely used as the tool to analyze the quality of such processes and plays a significant role in the improvement of the processes by identifying the patterns and locations of the defects. In this paper, a case study has been conducted on an Iranian automobile company. The paper performs a quality analysis of a particular component, the internal bearing for the back wheel of a particular car model manufactured by the company, based on the 10 million data records received from its service centers located all over the country. By creating control charts, including X-bar–S charts and EWMA charts, it has been observed that after the year 2009 the specific component underwent frequent failures, and there was a sharp dip in the average distance covered by the cars before the component required replacement or maintenance. Correlation analysis was performed to find out the reasons that might have affected the quality of the component in the cars produced by the company after 2009. Apart from manufacturing issues, some political and environmental factors have been identified as having a potential impact on the quality of the component. A maiden attempt has been made to analyze the quality issues within an Iranian automobile manufacturer; such issues often get neglected in developing countries. The paper also discusses the possibility of the political scenario of Iran and the country's environmental conditions affecting the quality of the end products, which not only strengthens the extant literature but also provides a new direction for future research.
Keywords: capability analysis, car manufacturing, statistical process control, quality control, quality tools
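To make the EWMA chart concrete, the sketch below computes the exponentially weighted moving average statistic z_i = λx_i + (1−λ)z_{i−1} with its standard time-varying control limits. The data are synthetic stand-ins (e.g., average kilometers to failure per production month); in practice the in-control mean and sigma would come from a reference period rather than the data being monitored.

```python
import numpy as np

def ewma_chart(x, lam=0.2, L=3):
    """EWMA statistic with control limits for an in-control mean/sigma."""
    mu, sigma = x.mean(), x.std(ddof=1)  # reference estimates (simplified)
    z = np.empty_like(x, dtype=float)
    z[0] = lam * x[0] + (1 - lam) * mu
    for i in range(1, len(x)):
        z[i] = lam * x[i] + (1 - lam) * z[i - 1]
    i = np.arange(1, len(x) + 1)
    half_width = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * i)))
    return z, mu - half_width, mu + half_width

# Points of z outside [lcl, ucl] flag a shift such as the post-2009 dip.
x = np.random.default_rng(9).normal(50000, 4000, 36)
z, lcl, ucl = ewma_chart(x)
print("out-of-control points:", np.where((z < lcl) | (z > ucl))[0])
```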
Procedia PDF Downloads 381
6855 Vertebral Pain Features in Women of Different Age Depending on Body Mass Index
Authors: Vladyslav Povoroznyuk, Tetiana Orlуk, Nataliia Dzerovych
Abstract:
Introduction: Back pain is an extremely common health care problem worldwide. Many studies show a link between obesity and the risk of lower back pain. The aim is to study the correlation and peculiarities of vertebral pain in women of different ages depending on their anthropometric indicators. Materials: 1886 women aged 25-89 years were examined. The patients were divided into groups according to age (25-44, 45-59, 60-74, and 75-89 years old) and body mass index (BMI: up to 18.4 kg/m2 (underweight), 18.5-24.9 kg/m2 (normal), 25-30 kg/m2 (overweight), and more than 30.1 kg/m2 (obese)). Methods: The presence and intensity of pain were evaluated in the thoracic and lumbar spine using a visual analogue scale (VAS). BMI was calculated by the standard formula based on body weight and height measurements. Statistical analysis was performed using parametric and nonparametric methods. Changes were considered significant at p < 0.05. Results: The intensity of pain in the thoracic spine was significantly higher in the underweight women in the age groups of 25-44 years (p = 0.04) and 60-74 years (p = 0.005). The intensity of pain in the lumbar spine was significantly higher in the women of 45-59 years (p = 0.001) and 60-74 years (p = 0.0003) with obesity. In the women of 45-74 years, BMI was significantly positively correlated with the level of pain in the lumbar spine. Obesity significantly increases the relative risk of pain in the lumbar region (RR = 1.07 (95% CI: 1.03-1.12; p = 0.002)), while underweight significantly increases the risk of pain in the thoracic region (RR = 1.21 (95% CI: 1.00-1.46; p = 0.05)). Conclusion: In women, the vertebral pain syndrome may be related to anthropometric characteristics (e.g., BMI). Underweight may indirectly influence the development of pain in the thoracic spine, increasing the risk of pain in this region by 1.21 times. Obesity influences the development of pain in the lumbar spine, increasing the risk by 1.07 times.
Keywords: body mass index, age, pain in thoracic and lumbar spine, women
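For readers unfamiliar with the relative risk statistic reported above, the sketch below computes RR with a 95% confidence interval via the standard log-RR normal approximation. The counts are hypothetical illustrations, not the study's data.

```python
import numpy as np

def relative_risk(exposed_cases, exposed_n, unexposed_cases, unexposed_n):
    """Relative risk with a 95% CI via the log-RR normal approximation."""
    p1, p0 = exposed_cases / exposed_n, unexposed_cases / unexposed_n
    rr = p1 / p0
    se = np.sqrt(1 / exposed_cases - 1 / exposed_n +
                 1 / unexposed_cases - 1 / unexposed_n)
    lo, hi = np.exp(np.log(rr) + np.array([-1.96, 1.96]) * se)
    return rr, lo, hi

# Hypothetical counts: lumbar pain among obese vs normal-BMI women.
rr, lo, hi = relative_risk(180, 400, 150, 400)
print(f"RR = {rr:.2f} (95% CI: {lo:.2f}-{hi:.2f})")
```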
Procedia PDF Downloads 366
6854 Analyzing the Commentator Network Within the French YouTube Environment
Authors: Kurt Maxwell Kusterer, Sylvain Mignot, Annick Vignes
Abstract:
To the best of our knowledge, YouTube is the largest video hosting platform in the world. A high number of creators, viewers, subscribers, and commentators act in this specific ecosystem, which generates huge sums of money. Views, subscribers, and comments help to increase the popularity of content creators. The most popular creators are sponsored by brands and participate in marketing campaigns; for a few of them, this becomes a financially rewarding profession. This is made possible through the YouTube Partner Program, which shares revenue among creators based on their popularity. We believe that the role of comments in increasing popularity deserves particular emphasis. In what follows, YouTube is considered as a bipartite network between videos and commentators. Analyzing a detailed data set focused on French YouTubers, we consider each comment as a link between a commentator and a video. Our research question asks what the predominant features of a video are that give it the highest probability of being commented on. Following from this question, how can we use these features to predict the action of an agent commenting on one video instead of another, considering the characteristics of the commentators, videos, topics, channels, and recommendations? We expect to see that the videos of more popular channels generate higher viewer engagement and are thus more frequently commented on. The interest lies in discovering features which have not classically been considered as markers of popularity on the platform. A quick view of our data set shows that 96% of the commentators comment only once on a given video. Thus, we study a non-weighted bipartite network between commentators and videos built on the subsample of the 96% of unique comments. A link exists between two nodes when a commentator makes a comment on a video. We run an Exponential Random Graph Model (ERGM) approach to evaluate which characteristics influence the probability of commenting on a video. The creation of a link is explained in terms of common video features, such as duration, quality, number of likes, number of views, etc. Our data cover the period 2020-2021 and focus on the French YouTube environment. From this set of 391,588 videos, we extract the channels which can be monetized according to YouTube regulations (channels with at least 1000 subscribers and more than 4000 hours of viewing time during the last twelve months). In the end, we have a data set of 128,462 videos from 4093 channels. Based on these videos, we have a data set of 1,032,771 unique commentators, with a mean of 2 comments per commentator, a minimum of 1 comment each, and a maximum of 584 comments.
Keywords: YouTube, social networks, economics, consumer behaviour
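A toy version of the network construction: a bipartite graph with commentators on one side and videos on the other, one edge per unique comment, and video attributes attached as covariates. ERGM fitting itself is typically done with dedicated packages (e.g., statnet in R); this sketch only shows the data structure.

```python
import networkx as nx

# Toy bipartite commentator-video network; an edge means "commented on".
B = nx.Graph()
videos = ["v1", "v2", "v3"]
commentators = ["c1", "c2", "c3", "c4"]
B.add_nodes_from(videos, bipartite="video")
B.add_nodes_from(commentators, bipartite="commentator")
B.add_edges_from([("c1", "v1"), ("c2", "v1"), ("c2", "v2"),
                  ("c3", "v2"), ("c4", "v3")])

# Node attributes (duration, likes, views, ...) act as ERGM covariates.
nx.set_node_attributes(B, {"v1": 1200, "v2": 300, "v3": 2500}, "views")

# Degree of each video = how often it is commented on.
print({v: B.degree(v) for v in videos})
```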
Procedia PDF Downloads 69