Search results for: deceptive features
3353 Bag of Local Features for Person Re-Identification on Large-Scale Datasets
Authors: Yixiu Liu, Yunzhou Zhang, Jianning Chi, Hao Chu, Rui Zheng, Libo Sun, Guanghao Chen, Fangtong Zhou
Abstract:
In the last few years, large-scale person re-identification has attracted considerable attention in video surveillance because of its potential applications in public safety management. It remains a challenging task, however, given the variation in human pose, changing illumination conditions, and the lack of paired samples. Although accuracy has improved significantly, sample training still depends heavily on large amounts of data. To tackle this problem, a new strategy for designing the feature representation is proposed based on the bag of visual words (BoVW) model, which has been widely used in image retrieval. Local features are extracted, a more discriminative feature representation is obtained by cross-view dictionary learning (CDL), and the assignment map is then obtained through k-means clustering. Finally, BoVW histograms are formed, which encode the images with the statistics of the feature classes in the assignment map. Experiments conducted on the CUHK03, Market1501 and MARS datasets show that the proposed method performs favorably against existing approaches.
Keywords: bag of visual words, cross-view dictionary learning, person re-identification, reranking
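To make the encoding pipeline concrete, a minimal sketch of BoVW histogram construction is given below, assuming plain k-means visual words over generic local descriptors; it does not reproduce the authors' cross-view dictionary learning step, and all data, function names, and parameters are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(descriptor_sets, n_words=32, seed=0):
    """Cluster local descriptors from all training images into a visual vocabulary."""
    all_descriptors = np.vstack(descriptor_sets)        # (total_descriptors, dim)
    return KMeans(n_clusters=n_words, n_init=10, random_state=seed).fit(all_descriptors)

def bovw_histogram(descriptors, vocabulary):
    """Encode one image as a normalized histogram of visual-word assignments."""
    words = vocabulary.predict(descriptors)              # assignment map for this image
    hist, _ = np.histogram(words, bins=np.arange(vocabulary.n_clusters + 1))
    return hist / max(hist.sum(), 1)

# Toy example: three "images", each described by random 64-D local descriptors.
rng = np.random.default_rng(0)
images = [rng.normal(size=(200, 64)) for _ in range(3)]
vocab = build_vocabulary(images)
histograms = np.array([bovw_histogram(d, vocab) for d in images])
print(histograms.shape)                                  # (3, 32): one BoVW descriptor per image
```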
Procedia PDF Downloads 195
3352 Visual Thing Recognition with Binary Scale-Invariant Feature Transform and Support Vector Machine Classifiers Using Color Information
Authors: Wei-Jong Yang, Wei-Hau Du, Pau-Choo Chang, Jar-Ferr Yang, Pi-Hsia Hung
Abstract:
The demand for smart visual thing recognition in various devices has increased rapidly in recent years for daily smart production, living, and learning systems. This paper proposes a visual thing recognition system that combines the binary scale-invariant feature transform (SIFT), the bag of words model (BoW), and support vector machine (SVM) classifiers using color information. Since traditional SIFT features and SVM classifiers use only gray-level information, color remains an important cue for visual thing recognition. With color-based SIFT features and SVM, unreliable matching pairs can be discarded and the robustness of matching tasks increased. The experimental results show that the proposed object recognition system with the color-assisted SIFT-SVM classifier achieves a higher recognition rate than that with traditional gray-level SIFT and SVM classification in various situations.
Keywords: color moments, visual thing recognition system, SIFT, color SIFT
Procedia PDF Downloads 467
3351 Fuzzy-Machine Learning Models for the Prediction of Fire Outbreak: A Comparative Analysis
Authors: Uduak Umoh, Imo Eyoh, Emmauel Nyoho
Abstract:
This paper compares fuzzy-machine learning algorithms, namely the Support Vector Machine (SVM) and K-Nearest Neighbor (KNN), for predicting cases of fire outbreak. The paper uses a fire outbreak dataset with three features (temperature, smoke, and flame). The data are pre-processed using an Interval Type-2 Fuzzy Logic (IT2FL) algorithm, Min-Max Normalization, and Principal Component Analysis (PCA), which are used to predict feature labels in the dataset, normalize the dataset, and select relevant features, respectively. The output of the pre-processing is a dataset with two principal components (PC1 and PC2). The pre-processed dataset is then used to train the aforementioned machine learning models. K-fold cross-validation (with K=10) is used to evaluate the performance of the models using the metrics ROC (Receiver Operating Characteristic), specificity, and sensitivity. The models are also tested with 20% of the dataset. The validation results show that KNN is the better model for fire outbreak detection, with an ROC value of 0.99878, followed by SVM with an ROC value of 0.99753.
Keywords: machine learning algorithms, Interval Type-2 Fuzzy Logic, fire outbreak, Support Vector Machine, K-Nearest Neighbour, Principal Component Analysis
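A minimal sketch of the Min-Max normalization, PCA reduction to two components, and 10-fold cross-validated comparison of KNN and SVM described above might look as follows; the fire-outbreak readings are replaced by synthetic placeholder data, and the hyperparameters are assumptions rather than the authors' settings.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import MinMaxScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Placeholder data standing in for the (Temperature, Smoke, Flame) readings.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=500) > 0).astype(int)

models = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="rbf", probability=True),
}
for name, clf in models.items():
    # Min-Max normalization -> two principal components -> classifier,
    # scored with 10-fold cross-validated ROC AUC as in the abstract.
    pipe = make_pipeline(MinMaxScaler(), PCA(n_components=2), clf)
    auc = cross_val_score(pipe, X, y, cv=10, scoring="roc_auc").mean()
    print(f"{name}: mean ROC AUC = {auc:.4f}")
```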
Procedia PDF Downloads 181
3350 Histopathological Features of Infections Caused by Fusarium equiseti (Mart.) Sacc. in Onion Plants from Kebbi State, Northern Nigeria
Authors: Wadzani Dauda Palnam, Alao S. Emmanuel Laykay, Afiniki Bawa Zarafi, Olufunmilola Alabi, Dora N. Iortsuun
Abstract:
Onion production is affected by several diseases, including fusariosis. A study was conducted to investigate the histopathological features of different onion tissues infected with Fusarium equiseti by inoculation using soil drench, root dip, and mycelial paste methods. This was carried out by fixation, dehydration, clearing, wax embedding, sectioning, staining, and mounting of leaf and root sections for microscopic examination at 400x. Once infection occurred in the roots, the pathogen moved through the vascular system to colonize the whole plant. At first, it grew in the intercellular spaces of the root cortex but soon invaded the cells, followed by colonization of the cells by its hyphae and microconidia. At later stages of infection, the cortex tissue became completely disorganized and decomposed as the pathogen advanced to the shoot system via the vessel elements; this may be responsible for the early wilting symptom of infected plants, arising from the severe water stress due to blockage of the xylem tissues.
Keywords: onion, histopathology, infection, fusaria, inoculation
Procedia PDF Downloads 278
3349 Automated Heart Sound Classification from Unsegmented Phonocardiogram Signals Using Time Frequency Features
Authors: Nadia Masood Khan, Muhammad Salman Khan, Gul Muhammad Khan
Abstract:
Cardiologists perform cardiac auscultation to detect abnormalities in heart sounds. Since accurate auscultation is a crucial first step in screening patients with heart diseases, there is a need to develop computer-aided detection/diagnosis (CAD) systems to assist cardiologists in interpreting heart sounds and provide second opinions. In this paper, different algorithms are implemented for automated heart sound classification using unsegmented phonocardiogram (PCG) signals. A support vector machine (SVM), an artificial neural network (ANN), and a cartesian genetic programming evolved artificial neural network (CGPANN), without the application of any segmentation algorithm, have been explored in this study. The signals are first pre-processed to remove any unwanted frequencies. Both time- and frequency-domain features are then extracted for training the different models. The different algorithms are tested in multiple scenarios, and their strengths and weaknesses are discussed. Results indicate that SVM outperforms the rest with an accuracy of 73.64%.
Keywords: pattern recognition, machine learning, computer aided diagnosis, heart sound classification, feature extraction
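The pre-processing and feature-extraction stage could be sketched roughly as below, assuming a band-pass filter to remove unwanted frequencies and a handful of illustrative time- and frequency-domain descriptors; the feature set, sampling rate, and synthetic data are assumptions, not the authors' exact configuration.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch
from sklearn.svm import SVC

def pcg_features(signal, fs=2000):
    """Illustrative time- and frequency-domain features from one unsegmented PCG record."""
    # Band-pass filter (25-400 Hz) to suppress frequencies outside typical heart sounds.
    b, a = butter(4, [25 / (fs / 2), 400 / (fs / 2)], btype="band")
    x = filtfilt(b, a, signal)
    # Time-domain descriptors.
    zero_crossing_rate = np.mean(np.abs(np.diff(np.sign(x)))) / 2
    time_feats = [x.mean(), x.std(), np.abs(x).max(), zero_crossing_rate]
    # Frequency-domain descriptors from the Welch power spectrum.
    freqs, psd = welch(x, fs=fs, nperseg=1024)
    freq_feats = [freqs[np.argmax(psd)], psd.sum(), (freqs * psd).sum() / psd.sum()]
    return np.array(time_feats + freq_feats)

# Toy usage with synthetic recordings; real PCG data would be loaded instead.
rng = np.random.default_rng(1)
X = np.array([pcg_features(rng.normal(size=8000)) for _ in range(40)])
y = rng.integers(0, 2, size=40)                    # normal / abnormal labels
clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```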
Procedia PDF Downloads 262
3348 Development of a Robust Procedure for Generating Structural Models of Calcium Aluminosilicate Glass Surfaces
Authors: S. Perera, T. R. Walsh, M. Solvang
Abstract:
The structure-property relationships of calcium aluminosilicate (CAS) glass surfaces are of scientific and technological interest regarding dissolution phenomena. Molecular dynamics (MD) simulations can provide atomic-scale insights into the structure and properties of the CAS interfaces in vacuo as the first step to conducting computational dissolution studies on CAS surfaces. However, one limitation to date is that although the bulk properties of CAS glasses have been well studied by MD simulation, corresponding efforts on CAS surface properties are relatively few in number (both theoretical and experimental). Here, a systematic computational protocol to create CAS surfaces in vacuo is developed by evaluating the sensitivity of the resultant surface structure with respect to different factors. Factors such as the relative thickness of the surface layer, the relative thickness of the bulk region, the cooling rate, and the annealing schedule (time and temperature) are explored. Structural features such as ring size distribution, defect concentrations (five-coordinated aluminium (AlV), non-bridging oxygen (NBO), and tri-cluster oxygen (TBO)), and linkage distribution are identified as significant features in dissolution studies.
Keywords: MD simulation, CAS glasses, surface structure, structure-property, CAS interface
Procedia PDF Downloads 98
3347 The Policia Internacional e de Defesa do Estado 1933–1969 and Valtiollinen Poliisi 1939–1948 on Screen: Comparing and Contrasting the Images of the Political Police in Portuguese and Finnish Films between the 1930s and the 1960s
Authors: Riikka Elina Kallio
Abstract:
The phrase “the walls have ears” defines the era of dictatorship in Portugal (1926–1974) and the decades of political unrest in Finland (1917–1948). The phrase refers to the policing carried out by the political, secret police: the PIDE (Policia Internacional e de Defesa do Estado, 1933–1969) in Portugal and the VALPO (Valtiollinen Poliisi, 1939–1948) in Finland. Free speech in any public space, and even at private events, could be fatal: members of the PIDE/VALPO or their informers and collaborators could be listening. Strict censorship under Salazar's regime controlled the media, for example newspapers, music, and the film industry. Similarly, politically driven censorship influenced the media in Finland during those decades of unrest. This article examines the similarities and the differences in the images of the political police in Finland and Portugal by analyzing Finnish and Portuguese films from the 1930s to the 1960s. The text addresses two main research questions: what are the common and different features in the representations of the Finnish and Portuguese political police in films between the 1930s and 1960s, and how did national censorship affect these representations? The approach of this study is interdisciplinary, combining film studies and criminology. Close reading is a practical qualitative method for analyzing films, and in this study close reading emphasizes the features of the police officer. Criminology provides the methodological tools for the analysis of the universal features of the police and common European policies. The characterization of the police in this study is based on Robert Reiner's (1980s) and Timo Korander's (2010s) definitions of the police officer. The research material consisted of Portuguese films from online film archives and of Finnish films from the Movie Making Finland project's metadata, which offered suitable material by data mining keywords such as poliisi, poliisipäällikkö and konstaapeli (police, police chief, police constable). The findings of this study suggest that even though there are common features in the images of the political police in Finland and Portugal, there are still national and cultural differences in the representations of the political police and policing.
Keywords: censorship, film studies, images, PIDE, political police, VALPO
Procedia PDF Downloads 71
3346 Analysing Modern City Heritage through Modernization Transformation: A Case of Wuhan, China
Authors: Ziwei Guo, Liangping Hong, Zhiguo Ye
Abstract:
The exogenous modernization process in China and other late-developing countries did not result from a gradual growth of their own modernity but was a conscious response to external challenges. Under this context, it was equally important for Chinese cities to make themselves 'Chinese' as well as 'modern'. Wuhan was the first opened inland treaty port in the late Qing Dynasty. In the following one hundred years, Wuhan transformed from a feudal town into a modern industrial city. It is a good example with which to illustrate urban construction and cultural heritage through the process and impact of social transformation. An overall perspective on transformation will help develop the city's uniqueness and enhance its inclusive development. The study takes the history of Wuhan from 1861 to 1957 as the study period. The whole transformation process is divided into four typical periods based on key historical events, and the paper analyzes the changes in urban structure and construction activities in each period. Then, numerous examples are used to compare the features of Wuhan's modern city heritage across the four periods. In this way, three characteristics of Wuhan's modern city heritage are summarized. The paper finds that globalization and localization worked together to shape the urban physical space environment. For Wuhan, social transformation had a profound and comprehensive impact on urban construction, which can be analyzed in terms of main construction, architectural style, location, and actors. Moreover, the three towns of Wuhan have disparate cityscapes, reflected by the varied heritage and architectural features of the different transformation periods. Lastly, the protection regulations and conservation planning of heritage in Wuhan are discussed, and suggestions about the conservation of Wuhan's modern heritage are drawn. The implications of the study are to provide a new perspective on modern city heritage for cities like Wuhan, so that future local planning systems and heritage conservation policies can take into consideration the 'Modern Cultural Transformation Route' presented in this paper.
Keywords: modern city heritage, transformation, identity, Wuhan
Procedia PDF Downloads 131
3345 Improving Security Features of Traditional Automated Teller Machines-Based Banking Services via Fingerprint Biometrics Scheme
Authors: Anthony I. Otuonye, Juliet N. Odii, Perpetual N. Ibe
Abstract:
The obvious challenges faced by most commercial bank customers while using the services of ATMs (Automated Teller Machines) across developing countries have triggered the need for an improved system with better security features. Current ATM systems are password-based, and research has proved the vulnerability of these systems to heinous attacks and manipulations. Our research has found that the security of current ATM-assisted banking services in most developing countries is easily broken and maneuvered by fraudsters, largely because it is quite difficult for these systems to distinguish an impostor with privileged access from the authentic bank account owner. Again, PIN (Personal Identification Number) code passwords are easily guessed, to mention just a few of the obvious limitations of traditional ATM operations. In this research work, we have developed a system of fingerprint biometrics with PIN code authentication that seeks to improve the security features of traditional ATM installations as well as other banking services. The aim is to ensure better security at all ATM installations and raise the confidence of bank customers. It is hoped that our system will overcome most of the challenges of the current password-based ATM operation if properly applied. The researchers made use of OOADM (Object-Oriented Analysis and Design Methodology), a software development methodology that assures proper system design using modern design diagrams. Implementation and coding were carried out using Visual Studio 2010 together with other software tools. Results obtained show a working system that provides two levels of security on the client side, using a fingerprint biometric scheme combined with the existing 4-digit PIN code to guarantee the confidence of bank customers across developing countries.
Keywords: fingerprint biometrics, banking operations, verification, ATMs, PIN code
Procedia PDF Downloads 42
3344 Radiomics: Approach to Enable Early Diagnosis of Non-Specific Breast Nodules in Contrast-Enhanced Magnetic Resonance Imaging
Authors: N. D'Amico, E. Grossi, B. Colombo, F. Rigiroli, M. Buscema, D. Fazzini, G. Cornalba, S. Papa
Abstract:
Purpose: To characterize, through a radiomic approach, the nature of nodules considered non-specific by expert radiologists and recognized in magnetic resonance mammography (MRm) with T1-weighted (T1w) sequences with paramagnetic contrast. Material and Methods: 47 cases out of 1200 undergoing MRm, in which the MRm assessment gave an uncertain classification (non-specific nodules), were admitted to the study. The clinical outcome of the non-specific nodules was later established through follow-up or further exams (biopsy), yielding 35 benign and 12 malignant lesions. All MR images were acquired at 1.5T, with a first basal T1w sequence and then four T1w acquisitions after the paramagnetic contrast injection. After manual segmentation of the lesions by a radiologist and the extraction of 150 radiomic features (30 features per each of 5 subsequent time points), a machine learning (ML) approach was used. An evolutionary algorithm (the TWIST system, based on the KNN algorithm) was used to subdivide the dataset into training and validation sets and to select features yielding the maximal amount of information. After this pre-processing, different machine learning systems were applied to develop a predictive model based on a training-testing crossover procedure. 10 cases with a benign nodule (follow-up older than 5 years) and 18 with an evident malignant tumor (clear malignant histological exam) were added to the dataset in order to allow the ML system to better learn from the data. Results: A Naive Bayes algorithm working on 79 features selected by the TWIST system proved to be the best-performing ML system, with a sensitivity of 96%, a specificity of 78%, and a global accuracy of 87% (average values of two training-testing procedures, ab-ba). The results showed that, in the subset of 47 non-specific nodules, the algorithm predicted the outcome of 45 nodules that an expert radiologist could not classify. Conclusion: In this pilot study, we identified a radiomic approach allowing ML systems to perform well in the diagnosis of a non-specific nodule at MR mammography. This algorithm could be a great support for the early diagnosis of malignant breast tumors when the radiologist is not able to identify the kind of lesion, and it reduces the necessity for long follow-up. Clinical Relevance: This machine learning algorithm could be essential to support the radiologist in the early diagnosis of non-specific nodules, in order to avoid strenuous follow-up and painful biopsy for the patient.
Keywords: breast, machine learning, MRI, radiomics
Procedia PDF Downloads 267
3343 Development of a Real-Time Brain-Computer Interface for Interactive Robot Therapy: An Exploration of EEG and EMG Features during Hypnosis
Authors: Maryam Alimardani, Kazuo Hiraki
Abstract:
This study presents a framework for the development of a new generation of therapy robots that can interact with users by monitoring their physiological and mental states. Here, we focused on one of the more controversial methods of therapy, hypnotherapy. Hypnosis has been shown to be useful in the treatment of many clinical conditions. But even for healthy people, it can be used as an effective technique for relaxation or for the enhancement of memory and concentration. Our aim is to develop a robot that collects information about a user's mental and physical states using electroencephalogram (EEG) and electromyography (EMG) signals and performs cost-effective hypnosis in the comfort of the user's house. The presented framework consists of three main steps: (1) find the EEG correlates of mind state before, during, and after hypnosis and establish a cognitive model for state changes; (2) develop a system that can track the changes in EEG and EMG activities in real time and determine if the user is ready for suggestion; and (3) implement our system in a humanoid robot that will talk to and conduct hypnosis on users based on their mental states. This paper presents a pilot study regarding the first stage, the detection of EEG and EMG features during hypnosis.
Keywords: hypnosis, EEG, robotherapy, brain-computer interface (BCI)
Procedia PDF Downloads 256
3342 Application of Improved Semantic Communication Technology in Remote Sensing Data Transmission
Authors: Tingwei Shu, Dong Zhou, Chengjun Guo
Abstract:
Semantic communication is an emerging form of communication that realizes intelligent communication by extracting the semantic information of data at the source, transmitting it, and recovering the data at the receiving end. It can effectively solve the problem of data transmission in situations of large data volume, low SNR, and restricted bandwidth. With the development of deep learning, semantic communication has matured further and is gradually being applied in fields such as the Internet of Things, unmanned aerial vehicle cluster communication, and remote sensing scenarios. We propose an improved semantic communication system for the situation where the data volume is huge and spectrum resources are limited during the transmission of remote sensing images. At the transmitting end, we need to extract the semantic information of remote sensing images, but there are some problems. A traditional semantic communication system based on a Convolutional Neural Network (CNN) cannot take into account both the global and the local semantic information of the image, which results in less-than-ideal image recovery at the receiving end. Therefore, we adopt an improved Vision-Transformer-based structure as the semantic encoder, instead of the mainstream CNN-based one, to extract the image semantic features. In this paper, we first perform pre-processing operations on the remote sensing images to improve their resolution in order to obtain images with more semantic information. We use the wavelet transform to decompose an image into high-frequency and low-frequency components, perform bilinear interpolation on the high-frequency components and bicubic interpolation on the low-frequency components, and finally perform the inverse wavelet transform to obtain the pre-processed image. We adopt the improved Vision-Transformer structure as the semantic encoder to extract and transmit the semantic information of the remote sensing images. The Vision-Transformer structure can better handle training on huge data volumes and extract better image semantic features, and it adopts a multi-layer self-attention mechanism to better capture the correlations between semantic features and reduce redundant features. Secondly, to improve the coding efficiency, we reduce the quadratic complexity of the self-attention mechanism to linear, so as to improve the image data processing speed of the model. We conducted experimental simulations on the RSOD dataset and compared the designed system with a semantic communication system based on CNN and with image coding methods such as BPG and JPEG to verify that the method can effectively alleviate the problem of excessive data volume and improve the performance of image data communication.
Keywords: semantic communication, transformer, wavelet transform, data processing
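The wavelet-based pre-processing step described above (decomposition, bilinear interpolation of the high-frequency sub-bands, bicubic interpolation of the low-frequency sub-band, inverse transform) could be sketched as follows with PyWavelets and OpenCV; the single-level Haar decomposition and the 2x upscaling factor are assumptions, not the authors' exact configuration.

```python
import cv2
import numpy as np
import pywt

def wavelet_upscale(image, factor=2, wavelet="haar"):
    """Sketch of the pre-processing: DWT decomposition, interpolation of the
    sub-bands (bicubic for low-frequency, bilinear for high-frequency), inverse DWT."""
    img = image.astype(np.float32)
    ll, (lh, hl, hh) = pywt.dwt2(img, wavelet)
    size = (ll.shape[1] * factor, ll.shape[0] * factor)       # cv2 expects (width, height)
    ll_up = cv2.resize(ll, size, interpolation=cv2.INTER_CUBIC)
    highs_up = tuple(cv2.resize(band, size, interpolation=cv2.INTER_LINEAR)
                     for band in (lh, hl, hh))
    return pywt.idwt2((ll_up, highs_up), wavelet)

# Toy usage on a random tile; a real remote sensing image would be loaded with cv2.imread.
tile = np.random.rand(128, 128).astype(np.float32)
enhanced = wavelet_upscale(tile)
print(tile.shape, "->", enhanced.shape)   # (128, 128) -> (256, 256)
```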
Procedia PDF Downloads 78
3341 Juxtaposition of the Past and the Present: A Pragmatic Stylistic Analysis of the Short Story “Too Much Happiness” by Alice Munro
Authors: Inas Hussein
Abstract:
Alice Munro is a Canadian short-story writer who has been regarded as one of the greatest writers of fiction. Owing to her great contribution to fiction, she was the first Canadian woman, and the only short-story writer ever, to be awarded the Nobel Prize for Literature, in 2013. Her literary works include collections of short stories and one book published as a novel. Her stories concentrate on the human condition and on human relationships as seen through the lens of daily life. The setting in most of her stories is her native Canada: small towns much like the one where she grew up. Her writing style is not only realistic but is also characterized by autobiographical, historical, and regional features. The aim of this research is to analyze one of the key stylistic devices often adopted by Munro in her fiction, the juxtaposition of the past and the present, with reference to the title story in Munro's short story collection Too Much Happiness. The story under exploration is a brief biography of the Russian mathematician and novelist Sophia Kovalevsky (1850–1891), the first woman to be appointed as a professor of mathematics at a European university, in Stockholm. Thus, the story has a historical protagonist and is set on the European continent. Munro dramatizes the severe historical and cultural constraints that hindered the career of the protagonist. A pragmatic stylistic framework is adopted, and the qualitative analysis is supported by textual reference. The stylistic analysis reveals that the juxtaposition of the past and the present is one of the distinctive features that characterize the author; in a typical Munrovian manner, the protagonist often moves between the units of time: the past, the present and, sometimes, the future. Munro's style is simple and direct but cleverly constructed and densely complicated by the presence of deeper layers and stories within the story. Findings of the research reveal that the story under investigation merits reading and analyzing. It is recommended that this story and other stories by Munro be analyzed to further explore the features of her art and style.
Keywords: Alice Munro, Too Much Happiness, style, stylistic analysis
Procedia PDF Downloads 145
3340 Local Spectrum Feature Extraction for Face Recognition
Authors: Muhammad Imran Ahmad, Ruzelita Ngadiran, Mohd Nazrin Md Isa, Nor Ashidi Mat Isa, Mohd ZaizuIlyas, Raja Abdullah Raja Ahmad, Said Amirul Anwar Ab Hamid, Muzammil Jusoh
Abstract:
This paper presents two techniques, local feature extraction using the image spectrum and low-frequency spectrum modelling using GMMs, to capture the underlying statistical information and thereby improve the performance of a face recognition system. Local spectrum features are extracted using overlapping sub-block windows that are mapped onto the face image. For each of these blocks, the spatial domain is transformed to the frequency domain using the DFT. The low-frequency coefficients are preserved, and the high-frequency coefficients discarded, by applying a rectangular mask to the spectrum of the facial image. The low-frequency information is non-Gaussian in the feature space, and by using a combination of several Gaussian functions with different statistical properties, the best feature representation can be modelled using a probability density function. The recognition process is performed using the maximum likelihood value computed from pre-calculated GMM components. The method is tested using the FERET data sets and is able to achieve 92% recognition rates.
Keywords: local features modelling, face recognition system, Gaussian mixture models, FERET
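A minimal sketch of the local spectrum extraction and GMM modelling is given below, assuming overlapping blocks, a 2-D DFT per block, a rectangular low-frequency mask, and a diagonal-covariance Gaussian mixture; block sizes, the number of mixture components, and the random test images are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def local_spectrum_features(image, block=16, step=8, keep=4):
    """Overlapping blocks -> 2-D DFT -> keep a low-frequency rectangle as the local feature."""
    feats = []
    for r in range(0, image.shape[0] - block + 1, step):
        for c in range(0, image.shape[1] - block + 1, step):
            spectrum = np.fft.fft2(image[r:r + block, c:c + block])
            low_freq = np.abs(spectrum[:keep, :keep])           # rectangular low-pass mask
            feats.append(low_freq.ravel())
    return np.array(feats)

# Toy usage: model one subject's local-spectrum features with a GMM and
# score a probe image by its average log-likelihood under that model.
rng = np.random.default_rng(0)
gallery_face, probe_face = rng.random((64, 64)), rng.random((64, 64))
gmm = GaussianMixture(n_components=4, covariance_type="diag", random_state=0)
gmm.fit(local_spectrum_features(gallery_face))
print("probe log-likelihood:", gmm.score(local_spectrum_features(probe_face)))
```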
Procedia PDF Downloads 667
3339 Examining the Role of Willingness to Communicate in Cross-Cultural Adaptation in East-Asia
Authors: Baohua Yu
Abstract:
Despite widely reported 'Mainland-Hong Kong conflicts', recent years have witnessed progressive growth in the numbers of Mainland Chinese students in Hong Kong's universities. This research investigated Mainland Chinese students' intercultural communication in relation to cross-cultural adaptation at a major university in Hong Kong. The features of intercultural communication examined in this study were competence in second language (L2) communication and L2 Willingness to Communicate (WTC), while the features of cross-cultural adaptation examined were socio-cultural, psychological, and academic adaptation. Based on a questionnaire, structural equation modelling was conducted on a sample of 196 Mainland Chinese students. Results showed that competence in L2 communication played a significant role in L2 WTC, which had an influential effect on academic adaptation, which was itself identified as a mediator between psychological adaptation and socio-cultural adaptation. Implications for course curriculum design and instructional practice for international students are discussed.
Keywords: L2 willingness to communicate, competence in L2 communication, psychological adaptation, socio-cultural adaptation, academic adaptation, structural equation modelling
Procedia PDF Downloads 355
3338 Lexical Semantic Analysis to Support Ontology Modeling of Maintenance Activities – Case Study of Offshore Riser Integrity
Authors: Vahid Ebrahimipour
Abstract:
Word representation and the contextual meaning of text-based documents play an essential role in knowledge modeling. Business procedures written in natural language are meant to store technical and engineering information, management decisions, and operational experience during the production system life cycle. Contextual meaning representation is highly dependent upon word sense, lexical relativity, and the semantic features of the argument. This paper proposes a method for lexical semantic analysis and contextual meaning representation of maintenance activity in a mass production system. Our approach constructs a straightforward lexical semantic framework to analyze the semantic and syntactic features of the context structure of maintenance reports, so as to facilitate the translation, interpretation, and conversion of human-readable interpretation into computer-readable representation that is understandable with less heterogeneity and ambiguity. The methodology enables users to obtain a representation format that maximizes shareability and accessibility for multi-purpose usage. It provides a contextualized structure for obtaining a generic context model that can be utilized during the system life cycle. It first employs a co-occurrence-based clustering framework to recognize a group of highly frequent contextual features that correspond to a maintenance report text. Then the keywords are identified for syntactic and semantic extraction analysis. The analysis exercises causality-driven logic of the keywords' senses to divulge the structural and meaning dependency relationships between the words in a context. The output is a word-contextualized representation of maintenance activity accommodating computer-based representation and inference using OWL/RDF.
Keywords: lexical semantic analysis, metadata modeling, contextual meaning extraction, ontology modeling, knowledge representation
Procedia PDF Downloads 105
3337 The Capacity of Mel Frequency Cepstral Coefficients for Speech Recognition
Authors: Fawaz S. Al-Anzi, Dia AbuZeina
Abstract:
Speech recognition makes an important contribution to promoting new technologies in human-computer interaction. Today, there is a growing need to employ speech technology in daily life and business activities. However, speech recognition is a challenging task that requires different stages before obtaining the desired output. Among the components of automatic speech recognition (ASR) is the feature extraction process, which parameterizes the speech signal to produce the corresponding feature vectors. The feature extraction process aims at approximating the linguistic content that is conveyed by the input speech signal. In the speech processing field, there are several methods to extract speech features; however, Mel Frequency Cepstral Coefficients (MFCC) is the most popular technique. It has long been observed that MFCC is dominantly used in well-known recognizers such as the Carnegie Mellon University (CMU) Sphinx and the Hidden Markov Model Toolkit (HTK). Hence, this paper focuses on the MFCC method as the standard choice for identifying the different speech segments in order to obtain the language phonemes for further training and decoding steps. Owing to MFCC's good performance, previous studies show that MFCC dominates Arabic ASR research. In this paper, we demonstrate MFCC as well as the intermediate steps that are performed to obtain these coefficients using the HTK toolkit.
Keywords: speech recognition, acoustic features, mel frequency, cepstral coefficients
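A short sketch of MFCC extraction is shown below using the librosa library rather than HTK (an assumption made purely for brevity); the file name speech.wav, the 13-coefficient setting, and the 25 ms / 10 ms framing are placeholders.

```python
import numpy as np
import librosa

# "speech.wav" is a placeholder path standing in for a real utterance.
signal, sr = librosa.load("speech.wav", sr=16000)
# 13 MFCCs per frame with 25 ms windows and a 10 ms hop -- the kind of
# feature vectors a recognizer such as Sphinx or HTK would consume.
mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13, n_fft=400, hop_length=160)
# Delta and delta-delta coefficients are commonly appended for ASR training.
features = np.vstack([mfcc,
                      librosa.feature.delta(mfcc),
                      librosa.feature.delta(mfcc, order=2)])
print(features.shape)    # (39, n_frames)
```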
Procedia PDF Downloads 259
3336 Risk Screening in Digital Insurance Distribution: Evidence and Explanations
Authors: Finbarr Murphy, Wei Xu, Xian Xu
Abstract:
The embedding of digital technologies in the global economy has attracted increasing attention from economists. With a large and detailed dataset, this study examines the specific case where consumers have a choice between offline and digital channels in the context of insurance purchases. We find that digital channels screen consumers with lower unobserved risk. For the term life, endowment, and disease insurance products, the average risk of the policies purchased through digital channels was 75%, 21%, and 31% lower, respectively, than that of policies purchased offline. As a consequence, the lower unobserved risk leads to weaker information asymmetry and higher profitability of digital channels. We highlight three mechanisms of the risk screening effect: the heterogeneous marginal influence of channel features on insurance demand, the channel features directly related to risk control, and the link between the digital divide and risk. We also find that the risk screening effect mainly comes from the extensive margin, i.e., from new consumers. This paper contributes to three connected areas in the insurance context: the heterogeneous economic impacts of digital technology adoption, insurer-side risk selection, and insurance marketing.
Keywords: digital economy, information asymmetry, insurance, mobile application, risk screening
Procedia PDF Downloads 73
3335 Using of the Fractal Dimensions for the Analysis of Hyperkinetic Movements in the Parkinson's Disease
Authors: Sadegh Marzban, Mohamad Sobhan Sheikh Andalibi, Farnaz Ghassemi, Farzad Towhidkhah
Abstract:
Parkinson's disease (PD), which is characterized by tremor at rest, rigidity, akinesia or bradykinesia, and postural instability, affects the quality of life of the individuals involved. The concept of a fractal is most often associated with irregular geometric objects that display self-similarity. The fractal dimension (FD) can be used to quantify the complexity and self-similarity of an object such as tremor. In this work, we aim to propose a new method for evaluating hyperkinetic movements such as tremor by using the FD and other correlated parameters in patients who suffer from PD. In this study, we used the tremor data of PhysioNet. The database consists of fourteen participants diagnosed with PD, including six patients with high-amplitude tremor and eight patients with low amplitude. We tried to extract features from the data which can distinguish between patients before and after medication. We selected fractal dimensions, including the correlation dimension, box dimension, and information dimension. The Lilliefors test was used for normality testing. A paired t-test or Wilcoxon signed-rank test was also performed to find differences between patients before and after medication, depending on whether normality was detected or not. In addition, two-way ANOVA was used to investigate the possible association between the therapeutic effects and the features extracted from the tremor. Just one of the extracted features showed significant differences between patients before and after medication. According to the results, the correlation dimension was significantly different before and after the patients' medication (p=0.009). Also, two-way ANOVA demonstrated significant differences only in the medication effect (p=0.033), and no significant differences were found between subjects (p=0.34) or in the interaction (p=0.97). The most striking result to emerge from the data is that the correlation dimension could quantify medication treatment based on tremor. This study has provided a technique to evaluate a non-linear measure for quantifying medication, namely the correlation dimension. Furthermore, this study supports the idea that fractal dimension analysis yields additional information compared with conventional spectral measures in the detection of poor-prognosis patients.
Keywords: correlation dimension, non-linear measure, Parkinson's disease, tremor
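As an illustration of the non-linear measure highlighted above, a rough Grassberger-Procaccia style estimate of the correlation dimension of a one-dimensional tremor series could be sketched as follows; the embedding dimension, lag, radii, and synthetic signal are assumptions, not the parameters used in the study.

```python
import numpy as np

def correlation_dimension(signal, emb_dim=5, lag=4, n_radii=10):
    """Grassberger-Procaccia style estimate of the correlation dimension of a 1-D series."""
    # Time-delay embedding of the scalar tremor series.
    n = len(signal) - (emb_dim - 1) * lag
    embedded = np.column_stack([signal[i * lag:i * lag + n] for i in range(emb_dim)])
    # Pairwise distances between embedded points.
    dists = np.linalg.norm(embedded[:, None, :] - embedded[None, :, :], axis=-1)
    dists = dists[np.triu_indices(n, k=1)]
    radii = np.geomspace(np.percentile(dists, 5), np.percentile(dists, 50), n_radii)
    # Correlation sum C(r); the slope of log C(r) vs log r estimates the dimension.
    corr_sum = np.array([(dists < r).mean() for r in radii])
    slope, _ = np.polyfit(np.log(radii), np.log(corr_sum), 1)
    return slope

rng = np.random.default_rng(0)
t = np.linspace(0, 40, 1000)
tremor = np.sin(2 * np.pi * t) + 0.2 * rng.normal(size=t.size)   # synthetic tremor-like series
print("estimated correlation dimension:", round(correlation_dimension(tremor), 2))
```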
Procedia PDF Downloads 244
3334 Feature Extraction and Impact Analysis for Solid Mechanics Using Supervised Finite Element Analysis
Authors: Edward Schwalb, Matthias Dehmer, Michael Schlenkrich, Farzaneh Taslimi, Ketron Mitchell-Wynne, Horen Kuecuekyan
Abstract:
We present a generalized feature extraction approach for supporting machine learning (ML) algorithms which perform tasks similar to finite element analysis (FEA). We report results for estimating the Head Injury Categorization (HIC) of vehicle engine compartments across various impact scenarios. Our experiments demonstrate that models learned using features derived with a simple discretization approach provide a reasonable approximation of a full simulation. We observe that decision trees can be as effective as neural networks for the HIC task. The simplicity and performance of the learned decision trees could offer a trade-off of a multiple-order-of-magnitude improvement in speed and cost over full simulation for a reasonable approximation. When used as a complement to full simulation, the approach enables rapid approximate feedback to engineering teams before submission for full analysis. The approach produces mesh-independent features and is further agnostic of the assembly structure.
Keywords: mechanical design validation, FEA, supervised decision tree, convolutional neural network
Procedia PDF Downloads 139
3333 Bioinformatics Approach to Identify Physicochemical and Structural Properties Associated with Successful Cell-free Protein Synthesis
Authors: Alexander A. Tokmakov
Abstract:
Cell-free protein synthesis is widely used to synthesize recombinant proteins. It allows genome-scale expression of various polypeptides under strictly controlled, uniform conditions. However, only a minor fraction of all proteins can be successfully expressed in the systems of protein synthesis that are currently used, and the factors determining expression success are poorly understood. At present, a vast volume of data has accumulated in cell-free expression databases, which makes possible comprehensive bioinformatics analysis and identification of multiple features associated with successful cell-free expression. Here, we describe an approach aimed at the identification of multiple physicochemical and structural properties of amino acid sequences associated with protein solubility and aggregation, and we highlight the major correlations obtained using this approach. The developed method includes: categorical assessment of the protein expression data, calculation and prediction of multiple properties of expressed amino acid sequences, correlation of the individual properties with the expression scores, and evaluation of the statistical significance of the observed correlations. Using this approach, we revealed a number of statistically significant correlations between calculated and predicted features of protein sequences and their amenability to cell-free expression. It was found that some of the features, such as protein pI, hydrophobicity, presence of signal sequences, etc., are mostly related to protein solubility, whereas others, such as protein length, number of disulfide bonds, content of secondary structure, etc., affect mainly the expression propensity. We also demonstrated that the amenability of polypeptide sequences to cell-free expression correlates with the presence of multiple sites of post-translational modification. The correlations revealed in this study provide a plethora of important insights into protein folding and the rationalization of protein production. The developed bioinformatics approach can be of practical use for predicting expression success and optimizing cell-free protein synthesis.
Keywords: bioinformatics analysis, cell-free protein synthesis, expression success, optimization, recombinant proteins
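A small sketch of the kind of property-versus-outcome correlation described above is given below, computing protein pI and mean hydrophobicity with Biopython and correlating them with a binary expression outcome; the sequences, outcomes, and choice of a point-biserial correlation are placeholders for the much larger analysis in the study.

```python
from Bio.SeqUtils.ProtParam import ProteinAnalysis
from scipy.stats import pointbiserialr

# Placeholder sequences with binary cell-free expression outcomes (1 = expressed).
records = [
    ("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", 1),
    ("MDDDIAALVVDNGSGMCKAGFAGDDAPRAVFPS", 1),
    ("MCCWWCCPLLLLLLWWCCMMCCWWLLPPCCWWC", 0),
    ("MAHHHHHHVGTGSNDDDDKSPDPMEEFGGHHRR", 0),
]
pi_values, gravy_values, outcomes = [], [], []
for seq, expressed in records:
    analysis = ProteinAnalysis(seq)
    pi_values.append(analysis.isoelectric_point())     # protein pI
    gravy_values.append(analysis.gravy())              # mean hydrophobicity (GRAVY)
    outcomes.append(expressed)

# Point-biserial correlation between each property and the expression outcome.
for name, values in [("pI", pi_values), ("hydrophobicity", gravy_values)]:
    r, p = pointbiserialr(outcomes, values)
    print(f"{name}: r = {r:.2f}, p = {p:.3f}")
```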
Procedia PDF Downloads 419
3332 The Classification of Parkinson Tremor and Essential Tremor Based on Frequency Alteration of Different Activities
Authors: Chusak Thanawattano, Roongroj Bhidayasiri
Abstract:
This paper proposes a novel feature set for classifying Parkinson tremor and essential tremor. Ten ET and ten PD subjects were asked to perform kinetic, postural, and resting tests. Empirical mode decomposition (EMD) is used to decompose the collected tremor signal into a set of intrinsic mode functions (IMFs). The IMFs are used for reconstructing representative signals. The feature set is composed of the peak frequencies of the IMFs and of the reconstructed signals. Hypothesizing that the dominant frequency components of subjects with PD and ET change in different directions for different tests, the differences in peak frequencies of the IMFs and reconstructed signals between pairwise tests (kinetic-resting, kinetic-postural, and postural-resting) are considered as potential features. The sets of features are used to train and test classifiers including the quadratic discriminant classifier (QDC) and the support vector machine (SVM). The best accuracy, sensitivity, and specificity are 90%, 87.5%, and 92.86%, respectively.
Keywords: tremor, Parkinson, essential tremor, empirical mode decomposition, quadratic discriminant, support vector machine, peak frequency, auto-regressive, spectrum estimation
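A rough sketch of the IMF peak-frequency features is shown below using the PyEMD package; the sampling rate, the number of IMFs, the synthetic kinetic and resting signals, and the simple FFT-based peak picking are assumptions rather than the authors' processing chain.

```python
import numpy as np
from PyEMD import EMD

def imf_peak_frequencies(signal, fs=100.0, max_imfs=5):
    """Decompose a signal into IMFs and return each IMF's dominant frequency in Hz."""
    imfs = EMD()(signal, max_imf=max_imfs)
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    return np.array([freqs[np.argmax(np.abs(np.fft.rfft(imf)))] for imf in imfs])

# Toy kinetic vs. resting recordings; pairwise peak-frequency differences act as features.
fs = 100.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
kinetic = np.sin(2 * np.pi * 5 * t) + 0.3 * rng.normal(size=t.size)
resting = np.sin(2 * np.pi * 4 * t) + 0.3 * rng.normal(size=t.size)
peaks_kinetic = imf_peak_frequencies(kinetic, fs)
peaks_resting = imf_peak_frequencies(resting, fs)
n = min(peaks_kinetic.size, peaks_resting.size)          # IMF counts may differ
print("kinetic-resting peak-frequency differences:",
      np.round(peaks_kinetic[:n] - peaks_resting[:n], 2))
```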
Procedia PDF Downloads 443
3331 Reading as Moral Afternoon Tea: An Empirical Study on the Compensation Effect between Literary Novel Reading and Readers' Moral Motivation
Authors: Chong Jiang, Liang Zhao, Hua Jian, Xiaoguang Wang
Abstract:
The belief that there is a strong relationship between reading narrative and morality has generally become a basic assumption of scholars, philosophers, critics, and cultural commentators. The virtuality constructed by literary novels inspires readers to regard the narrative as a thought experiment, creating distance between readers and events so that they can freely and morally experience the positions of different roles. Therefore, virtual narrative combined with literary characteristics has long been considered a "moral laboratory." Well-established findings reveal that people show less lying and deceptive behavior in the morning than in the afternoon, which is called the morning morality effect. As a limited self-regulation resource, morality is constantly depleted with the change of time rhythm under the influence of the morning morality effect. It can also be compensated and restored in various ways, such as eating, sleeping, etc. As a common form of entertainment in modern society, literary novel reading gives people virtual experience and emotional catharsis, much as a relaxing afternoon tea helps people break away from fast-paced work, restore physical strength, and relieve stress in a short period of leisure. In this paper, inspired by compensation control theory, we ask whether reading literary novels in a digital environment could replenish a kind of spiritual energy for self-regulation to compensate for people's moral loss in the afternoon. Based on this assumption, we leverage the social annotation text content generated by readers in digital reading to represent the readers' reading attention. We then recognized the semantics and calculated the readers' moral motivation expressed in the annotations and investigated the fine-grained dynamics of the moral motivation changing in each time slot within the 24 hours of a day. Comprehensively comparing different divisions of time intervals, extensive experiments showed that the moral motivation reflected in the annotations in the afternoon is significantly higher than that in the morning. The results robustly verified the hypothesis that reading compensates for moral motivation, which we call the moral afternoon tea effect. Moreover, we quantitatively identified that such moral compensation can last until 14:00 in the afternoon and 21:00 in the evening. In addition, it is interesting to find that the division of time intervals into different units impacts the identification of moral rhythms: dividing time into four-hour slots brings more insight into moral rhythms than three-hour or six-hour slots.
Keywords: digital reading, social annotation, moral motivation, morning morality effect, control compensation
Procedia PDF Downloads 149
3330 Environmental Interactions in Riparian Vegetation Cover in an Urban Stream Corridor: A Case Study of Duzce Asar Suyu
Authors: Engin Eroğlu, Oktay Yıldız, Necmi Aksoy, Akif Keten, Mehmet Kıvanç Ak, Şeref Keskin, Elif Atmaca, Sertaç Kaya
Abstract:
Nowadays, green spaces in urban areas are under threat, and their share of urban land is decreasing because of increasing population, urbanization, migration, and certain cultural changes. Water, an important element of the natural landscape, and water-related natural ecosystems are exposed to degradation due to these pressures. A landscape owns many different types of elements or units; the structure that is more dominant than in other landscapes, perceptible to a greater or lesser extent and varying in direction, reveals the unique structure and character of a landscape. Whereas landscapes are commonly divided into two main groups, urban and rural, according to their location, the intersection areas of urban and rural, named semi-urban or semi-rural, present a variety of landscape features. The main components of the landscape are defined as patch, matrix, and corridor. Corridors include quite varied vegetation types, such as riparian, wetland, and others. In urban areas, natural water corridors are an important element of the diversity of riparian vegetation cover. In particular, water corridors attract attention with their natural diversity and their lack of fragmentation, degradation, and artificiality. Thanks to these features, water corridors are without a doubt an important component of all cities in the world. These corridors not only divide a city into two separate sides but also ensure the ecological connectivity between the two sides of the city. The main objective of this study is to determine the vegetation and habitat features of an urban stream corridor in terms of environmental interactions. Within this context, the study addresses 'Asar Suyu', an important component of the city of Düzce. Moreover, as the riparian zone touches the contiguous-area borders of the city and overlays its urban development limits, the characteristics of the corridor will be determined through floristic and habitat analyses. Consequently, the vegetation structure and habitat features that play an important role between riparian zone vegetation cover and environmental interactions will be determined. This study includes the first results of The Scientific and Technological Research Council of Turkey project (TUBITAK-116O596; 'Determining of Landscape Character of Urban Water Corridors as Visual and Ecological; A Case Study of Asar Suyu in Duzce').
Keywords: corridor, Duzce, landscape ecology, riparian vegetation
Procedia PDF Downloads 337
3329 3D Reconstruction of Human Body Based on Gender Classification
Authors: Jiahe Liu, Hongyang Yu, Feng Qian, Miao Luo
Abstract:
SMPL-X is a powerful parametric human body model that includes male, neutral, and female models, with significant gender differences between these three models. During the process of 3D human body reconstruction, the correct selection of the standard template is crucial for obtaining accurate results. To address this issue, we developed an efficient gender classification algorithm to automatically select the appropriate template for 3D human body reconstruction. The key to this gender classification algorithm is the precise analysis of human body features. By using the SMPL-X model, the algorithm can detect and identify gender features of the human body, thereby determining which standard template should be used. The accuracy of this algorithm makes the 3D reconstruction process more accurate and reliable, as it can adjust model parameters based on individual gender differences. SMPL-X and the related gender classification algorithm have brought important advances to the field of 3D human body reconstruction. By accurately selecting standard templates, they improve the accuracy of reconstruction and have broad potential in various application fields. These technologies continue to drive the development of the 3D reconstruction field, providing us with more realistic and accurate human body models.
Keywords: gender classification, joint detection, SMPL-X, 3D reconstruction
Procedia PDF Downloads 70
3328 Application of Electrical Resistivity Tomography to Image the Subsurface Structure of a Sinkhole, a Case Study in Southwestern Missouri
Authors: Shishay T. Kidanu
Abstract:
The study area is located in southwestern Missouri and is mainly underlain by Mississippian-age limestone, which is highly susceptible to karst processes. The area is known for the presence of various karst features such as caves, springs and, most importantly, sinkholes. Sinkholes are one of the most common karst features and the primary hazard in karst areas. Investigating the subsurface structure and development mechanism of existing sinkholes makes it possible to understand their long-term impact and chance of reactivation and also helps to provide effective mitigation measures. In this study, ERT (Electrical Resistivity Tomography), MASW (Multichannel Analysis of Surface Waves), and borehole control data have been used to image the subsurface structure and investigate the development mechanism of a sinkhole in southwestern Missouri. The study shows that the main process responsible for the development of the sinkhole is the downward piping of fine-grained soils. Furthermore, the study reveals that the sinkhole developed along a north-south oriented vertical joint set characterized by a vertical zone of water seepage and associated fine-grained soil piping into pre-existing fractures.
Keywords: ERT, karst, MASW, sinkhole
Procedia PDF Downloads 213
3327 Intelligent Rheumatoid Arthritis Identification System Based Image Processing and Neural Classifier
Authors: Abdulkader Helwan
Abstract:
Rheumatoid arthritis is characterized as a chronic inflammatory disorder which affects the joints by damaging body tissues. Therefore, there is an urgent need for an effective intelligent identification system for knee rheumatoid arthritis, especially in its early stages. This paper develops a new intelligent system for the identification of rheumatoid arthritis of the knee utilizing image processing techniques and a neural classifier. The system involves two principal stages. The first one is the image processing stage, in which the images are processed using techniques such as RGB-to-grayscale conversion, rescaling, median filtering, background extraction, image subtraction, segmentation using Canny edge detection, and feature extraction using pattern averaging. The extracted features are then used as inputs for the neural network, which classifies the X-ray knee images as normal or abnormal (arthritic) based on a backpropagation learning algorithm that involves training the network on 400 normal and abnormal X-ray knee images. The system was tested on 400 X-ray images, and the network showed good performance during that phase, resulting in a good identification rate of 97%.
Keywords: rheumatoid arthritis, intelligent identification, neural classifier, segmentation, backpropagation
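The image processing stage described above could be sketched roughly as follows with OpenCV, with a small scikit-learn MLP standing in for the backpropagation network; the block size, network size, and synthetic stand-in images are assumptions, not the system's actual configuration.

```python
import cv2
import numpy as np
from sklearn.neural_network import MLPClassifier

def knee_feature_vector(gray, size=(128, 128), block=8):
    """Rescale -> median filter -> Canny edges -> pattern averaging over blocks."""
    gray = cv2.resize(gray, size)
    gray = cv2.medianBlur(gray, 3)
    edges = cv2.Canny(gray, 50, 150)
    h, w = edges.shape
    # Pattern averaging: the mean edge response per block forms a compact feature vector.
    feats = [edges[r:r + block, c:c + block].mean()
             for r in range(0, h, block) for c in range(0, w, block)]
    return np.array(feats) / 255.0

# Synthetic grayscale stand-ins for knee X-rays (real images would come from cv2.imread).
rng = np.random.default_rng(0)
images = [rng.integers(0, 256, size=(256, 256), dtype=np.uint8) for _ in range(8)]
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])               # 0 = normal, 1 = abnormal (arthritic)
X = np.array([knee_feature_vector(img) for img in images])
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=2000, random_state=0).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```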
Procedia PDF Downloads 532
3326 BROTHERS: World-class Ergonomic Sofa Development
Authors: Aminur Rahman
Abstract:
The unique features of BROTHERS Furniture sofas lie in their ergonomic design, skilled handwork, and artwork. The present world market is passing through a contentious competitive situation that is changing rapidly and dramatically. Competitive strategy concerns how to create competitive advantage in upholstery businesses. In order to gain competitive advantage in the upholstery sofa market, a sofa with ergonomic features has to be designed and developed. Designing an ergonomic upholstery sofa requires knowing and understanding the appropriate seat depth, seat height, angle between seat and back, and back height that reflect current market demand; a world-class sofa has to incorporate ergonomic factors. The study examines the relationships between human, seat, and context variables with respect to comfort and discomfort. A market survey must be conducted among the users who need and use sofas. Health and safety factors should be examined from a variety of angles. An attractive design that meets customer requirements and an ergonomic fit should be considered for sofa development. This paper explains how to design and develop sofas to standard specifications with ergonomic features for users all over the world.
Keywords: ergonomics, angle between seat & back, standard dimension, seat comfort
Procedia PDF Downloads 138
3325 Early Detection of Breast Cancer in Digital Mammograms Based on Image Processing and Artificial Intelligence
Authors: Sehreen Moorat, Mussarat Lakho
Abstract:
A method of artificial intelligence using digital mammogram data is proposed in this paper for the detection of breast cancer. Many researchers have developed techniques for the early detection of breast cancer; early diagnosis helps to save many lives. The detection of breast cancer through mammography is an effective method which detects the cancer before it can be felt and increases the survival rate. In this paper, we have proposed an image processing technique for enhancing the image to detect the graphical table data and markings. Texture features based on the Gray-Level Co-Occurrence Matrix and intensity-based features are extracted from the selected region. For classification purposes, a neural-network-based supervised classifier system has been used which can discriminate between benign and malignant lesions. Sixty-eight digital mammograms have been used to train the classifier. The obtained results prove that automated detection of breast cancer is beneficial for early diagnosis and increases the survival rates of breast cancer patients. The proposed system will help radiologists in the better interpretation of breast cancer.
Keywords: medical imaging, cancer, processing, neural network
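A minimal sketch of GLCM texture plus intensity feature extraction feeding a neural classifier is given below using scikit-image and scikit-learn; the synthetic regions of interest, the chosen GLCM properties, and the network size are illustrative assumptions.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neural_network import MLPClassifier

def roi_features(roi):
    """GLCM texture features plus simple intensity statistics for one mammogram ROI."""
    glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    texture = [graycoprops(glcm, prop).mean()
               for prop in ("contrast", "homogeneity", "energy", "correlation")]
    intensity = [roi.mean(), roi.std()]
    return np.array(texture + intensity)

# Synthetic ROIs standing in for segmented mammogram regions (real ROIs would be cropped from scans).
rng = np.random.default_rng(3)
rois = [rng.integers(0, 256, size=(64, 64), dtype=np.uint8) for _ in range(20)]
labels = rng.integers(0, 2, size=20)             # 0 = benign, 1 = malignant
X = np.array([roi_features(r) for r in rois])
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=3000, random_state=0).fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```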
Procedia PDF Downloads 259
3324 Deep Learning Approaches for Accurate Detection of Epileptic Seizures from Electroencephalogram Data
Authors: Ramzi Rihane, Yassine Benayed
Abstract:
Epilepsy is a chronic neurological disorder characterized by recurrent, unprovoked seizures resulting from abnormal electrical activity in the brain. Timely and accurate detection of these seizures is essential for improving patient care. In this study, we leverage the UK Bonn University open-source EEG dataset and employ advanced deep-learning techniques to automate the detection of epileptic seizures. By extracting key features from both the time and frequency domains, as well as spectrogram features, we enhance the performance of various deep learning models. Our investigation includes architectures such as Long Short-Term Memory (LSTM), Bidirectional LSTM (Bi-LSTM), 1D Convolutional Neural Networks (1D-CNN), and hybrid CNN-LSTM and CNN-BiLSTM models. The models achieved impressive accuracies: LSTM (98.52%), Bi-LSTM (98.61%), CNN-LSTM (98.91%), CNN-BiLSTM (98.83%), and CNN (98.73%). Additionally, we utilized the SMOTE oversampling technique to balance the classes, which yielded the following results: CNN (97.36%), LSTM (97.01%), Bi-LSTM (97.23%), CNN-LSTM (97.45%), and CNN-BiLSTM (97.34%). These findings demonstrate the effectiveness of deep learning in capturing complex patterns in EEG signals, providing a reliable and scalable solution for real-time seizure detection in clinical environments.
Keywords: electroencephalogram, epileptic seizure, deep learning, LSTM, CNN, BI-LSTM, seizure detection
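A compact sketch of SMOTE balancing followed by a CNN-LSTM classifier on windowed EEG is given below; the window length of 178 samples, the layer sizes, and the synthetic data are assumptions, not the configuration used in the study.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from tensorflow.keras import layers, models

# Synthetic windowed EEG: 300 windows of 178 samples with imbalanced seizure labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 178))
y = (rng.random(300) < 0.2).astype(int)          # ~20% seizure windows

# SMOTE oversamples the minority (seizure) class on the flattened windows.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)
X_bal = X_bal[..., np.newaxis]                   # (samples, timesteps, channels) for Conv1D

# Compact CNN-LSTM: 1-D convolutions for local waveform patterns, LSTM for temporal context.
model = models.Sequential([
    layers.Input(shape=(178, 1)),
    layers.Conv1D(16, kernel_size=5, activation="relu"),
    layers.MaxPooling1D(2),
    layers.LSTM(32),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_bal, y_bal, epochs=3, batch_size=32, verbose=0)
print(model.evaluate(X_bal, y_bal, verbose=0))
```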
Procedia PDF Downloads 12