Search results for: discrete feature
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2194

1684 Supervised/Unsupervised Mahalanobis Algorithm for Improving Performance for Cyberattack Detection over Communications Networks

Authors: Radhika Ranjan Roy

Abstract:

Deployment of machine learning (ML)/deep learning (DL) algorithms for cyberattack detection in operational communications networks (wireless and/or wire-line) is being delayed because of low performance metrics (e.g., recall, precision, and f₁-score). When datasets are imbalanced, which is the usual case for communications networks, performance tends to become even worse. Reducing the dimensionality of the feature sets to increase performance adds further complexity. Mahalanobis algorithms have been widely applied in scientific research because Mahalanobis distance metric learning is a successful framework. In this paper, we have investigated the Mahalanobis binary classifier algorithm for increasing cyberattack detection performance over communications networks as a proof of concept. We have also found that high-dimensional information in intermediate features, which is largely unused for classification tasks in ML/DL algorithms, is the main contributor to the improved, state-of-the-art performance of the Mahalanobis method, even for imbalanced and sparse datasets. With no feature reduction, the Mahalanobis distance (MD) classifier offers uniform results for precision, recall, and f₁-score on the unbalanced and sparse NSL-KDD datasets.
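
To illustrate the underlying idea (not the authors' exact pipeline), the following is a minimal sketch of a Mahalanobis-distance binary classifier in Python/NumPy; the feature matrix and the decision threshold are hypothetical placeholders.

    import numpy as np

    def mahalanobis_distance(x, mean, cov_inv):
        # Squared Mahalanobis distance of sample x from a reference distribution
        d = x - mean
        return float(d @ cov_inv @ d)

    # Hypothetical training data: rows are flow-feature vectors of benign traffic
    X_benign = np.random.default_rng(0).normal(size=(500, 10))
    mean = X_benign.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(X_benign, rowvar=False))  # pinv tolerates sparse/degenerate features

    def is_attack(x, threshold=25.0):
        # Flag a flow as an attack if it lies too far from the benign distribution
        return mahalanobis_distance(x, mean, cov_inv) > threshold

    print(is_attack(np.zeros(10)))       # near the benign centre -> False
    print(is_attack(np.full(10, 8.0)))   # far outlier -> True (attack)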

Keywords: Mahalanobis distance, machine learning, deep learning, NSL-KDD, local intrinsic dimensionality, chi-square, positive semi-definite, area under the curve

Procedia PDF Downloads 75
1683 Audio-Visual Recognition Based on Effective Model and Distillation

Authors: Heng Yang, Tao Luo, Yakun Zhang, Kai Wang, Wei Qin, Liang Xie, Ye Yan, Erwei Yin

Abstract:

Recent years have shown that audio-visual recognition has great potential in strong-noise environments. Existing audio-visual recognition methods have explored ResNet backbones and feature fusion. However, on the one hand, ResNet occupies a large amount of memory, restricting its application in engineering. On the other hand, feature merging also introduces interference in high-noise environments. To solve these problems, we propose an effective framework with bidirectional distillation. First, in consideration of its good feature-extraction performance, we chose the lightweight model EfficientNet as our spatial feature extractor. Second, self-distillation was applied to learn more information from the raw data. Finally, we propose bidirectional distillation for decision-level fusion. Our experimental results are based on a multi-modal dataset from 24 volunteers. The lipreading accuracy of our framework increased by 2.3% compared with existing systems, and the framework made progress in audio-visual fusion in high-noise environments compared with an audio-only recognition system.
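
As an illustrative sketch only (the temperature, weighting, and branch outputs below are hypothetical, not the authors' exact formulation), a bidirectional distillation loss at the decision level can be written as two mirrored KL terms in which each branch serves as the teacher of the other:

    import numpy as np

    def softmax(z, T=1.0):
        z = z / T
        e = np.exp(z - z.max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    def kl(p, q, eps=1e-12):
        # KL(p || q), averaged over the batch
        return float(np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1)))

    def bidirectional_distillation_loss(audio_logits, visual_logits, T=2.0):
        # Each branch is softened and used as a teacher for the other branch
        pa, pv = softmax(audio_logits, T), softmax(visual_logits, T)
        return kl(pa, pv) + kl(pv, pa)

    # Hypothetical decision-level outputs for a batch of 4 utterances and 10 classes
    rng = np.random.default_rng(1)
    a, v = rng.normal(size=(4, 10)), rng.normal(size=(4, 10))
    print(bidirectional_distillation_loss(a, v))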

Keywords: lipreading, audio-visual, Efficientnet, distillation

Procedia PDF Downloads 130
1682 Evaluation of Random Forest and Support Vector Machine Classification Performance for the Prediction of Early Multiple Sclerosis from Resting State FMRI Connectivity Data

Authors: V. Saccà, A. Sarica, F. Novellino, S. Barone, T. Tallarico, E. Filippelli, A. Granata, P. Valentino, A. Quattrone

Abstract:

The aim of this work was to evaluate how well Random Forest (RF) and Support Vector Machine (SVM) algorithms could support the early diagnosis of Multiple Sclerosis (MS) from resting-state functional connectivity data. In particular, we wanted to explore the ability of mean signals extracted from ICA components corresponding to 15 well-known networks to distinguish between controls and patients. Eighteen patients with early MS (mean age 37.42±8.11, 9 females) were recruited according to the McDonald and Polman criteria and matched for demographic variables with 19 healthy controls (mean age 37.55±14.76, 10 females). MRI was acquired by a 3T scanner with an 8-channel head coil: (a) whole-brain T1-weighted; (b) conventional T2-weighted; (c) resting-state functional MRI (rsFMRI), 200 volumes. Estimated total lesion load (ml) and number of lesions were calculated using the LST toolbox from the corrected T1 and FLAIR. All rsFMRIs were pre-processed using tools from the FMRIB's Software Library as follows: (1) discarding of the first 5 volumes to remove T1 equilibrium effects, (2) skull-stripping of images, (3) motion and slice-time correction, (4) denoising with a high-pass temporal filter (128 s), (5) spatial smoothing with a Gaussian kernel of FWHM 8 mm. No statistically significant differences (t-test, p < 0.05) were found between the two groups in the mean Euclidean distance and the mean Euler angle. WM and CSF signals, together with 6 motion parameters, were regressed out from the time series. We applied an independent component analysis (ICA) with the GIFT toolbox using the Infomax approach with number of components = 21. Fifteen mean components were visually identified by two experts. The resulting z-score maps were thresholded and binarized to extract the mean signal of the 15 networks for each subject. Statistical and machine learning analyses were then conducted on this dataset composed of 37 rows (subjects) and 15 features (mean signal in the network) with the R language. The dataset was randomly split into training (75%) and test sets, and two different classifiers were trained: RF and RBF-SVM. We used the intrinsic feature selection of RF, based on the Gini index, and recursive feature elimination (rfe) for the SVM, to obtain a ranking of the most predictive variables. We then built two new classifiers using only the most important features and evaluated the accuracies (with and without feature selection) on the test set. The classifiers trained on all the features showed very poor accuracies on the training (RF: 58.62%, SVM: 65.52%) and test sets (RF: 62.5%, SVM: 50%). Interestingly, when feature selection by RF and rfe-SVM was performed, the most important variable was the sensori-motor network I in both cases. Indeed, with only this network, the RF and SVM classifiers reached an accuracy of 87.5% on the test set. More interestingly, the only misclassified patient turned out to have the lowest lesion volume. We showed that, with two different classification algorithms and feature selection approaches, the best discriminant network between controls and early MS was the sensori-motor I. Similar importance values were obtained for the sensori-motor II, cerebellum, and working memory networks. These findings, in accordance with the early manifestation of motor/sensory deficits in MS, could represent an encouraging step toward translation to clinical diagnosis and prognosis.
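
A minimal sketch of the classification and feature-selection step is given below using scikit-learn (the study itself used R); the 37x15 data matrix and labels are random placeholders mirroring only the dataset's shape.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.svm import SVC
    from sklearn.feature_selection import RFE
    from sklearn.model_selection import train_test_split

    # Placeholder for the 37-subject x 15-network dataset (mean ICA network signals)
    rng = np.random.default_rng(42)
    X = rng.normal(size=(37, 15))
    y = rng.integers(0, 2, size=37)          # 0 = control, 1 = early MS (hypothetical labels)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0, stratify=y)

    rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
    print("RF test accuracy:", rf.score(X_te, y_te))
    print("Top RF (Gini) features:", np.argsort(rf.feature_importances_)[::-1][:3])

    # Recursive feature elimination with a linear SVM to rank features for the SVM branch
    rfe = RFE(SVC(kernel="linear"), n_features_to_select=1).fit(X_tr, y_tr)
    print("SVM-RFE feature ranking:", rfe.ranking_)

    # Retrain an RBF-SVM on the single top-ranked feature, as done for the sensori-motor network
    best = int(np.argmin(rfe.ranking_))
    svm_best = SVC(kernel="rbf").fit(X_tr[:, [best]], y_tr)
    print("RBF-SVM accuracy on top feature:", svm_best.score(X_te[:, [best]], y_te))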

Keywords: feature selection, machine learning, multiple sclerosis, random forest, support vector machine

Procedia PDF Downloads 235
1681 Continuous and Discontinuous Modeling of Wellbore Instability in Anisotropic Rocks

Authors: C. Deangeli, P. Obentaku Obenebot, O. Omwanghe

Abstract:

The study focuses on the analysis of wellbore instability in rock masses affected by weakness planes. Failure in this type of rock can occur in the rock matrix and/or along the weakness planes, in relation to the mud weight gradient. In this case, the simple Kirsch solution coupled with a failure criterion cannot supply a suitable scenario for borehole instabilities. Two different numerical approaches have been used in order to investigate the onset of local failure at the wall of a borehole. For each approach, the influence of the inclination of the weakness planes has been investigated by considering joint sets at 0°, 35° and 90° to the horizontal. The first set of models was carried out with FLAC 2D (Fast Lagrangian Analysis of Continua), treating the rock material as a continuous medium with a Mohr-Coulomb criterion for the rock matrix and using the ubiquitous joint model to account for the presence of the weakness planes. In this model, yield may occur in the solid, along the weak plane, or both, depending on the stress state, the orientation of the weak plane, and the material properties of the solid and weak plane. The second set of models was performed with PFC2D (Particle Flow Code). This code is based on the Discrete Element Method and considers the rock material as an assembly of grains bonded by cement-like materials, with pore spaces. The presence of weakness planes is simulated by degrading the bonds between grains along given directions. In general, the results of the two approaches are in agreement. However, the discrete approach seems to capture more complex phenomena related to local failure in the form of grain detachment at the wall of the borehole. In fact, the presence of weakness planes in the discontinuous medium leads to local instability along the weak planes even in conditions not predicted by the continuous solution. In general, slip failure locations and directions do not follow the conventional wellbore breakout direction but depend upon the internal friction angle and the orientation of the bedding planes. When the weakness planes are at 0° and 90°, the behaviour is similar to that of a continuous rock material, but borehole instability is more severe when the weakness planes are inclined at an angle between 0° and 90° to the horizontal. In conclusion, the results of the numerical simulations show that the prediction of local failure at the wall of the wellbore cannot disregard the presence of weakness planes, and consequently the higher mud weight required for stability at any specific inclination of the joints. Although the discrete approach can only simulate smaller areas, because of the large number of particles required to generate the rock material, it seems to capture more correctly the occurrence of failure at the microscale and, eventually, the propagation of the failed zone to a large portion of rock around the wellbore.

Keywords: continuous-discontinuous, numerical modelling, weakness planes, wellbore, FLAC 2D

Procedia PDF Downloads 496
1680 A General Framework for Knowledge Discovery Using High Performance Machine Learning Algorithms

Authors: S. Nandagopalan, N. Pradeep

Abstract:

The aim of this paper is to propose a general framework for storing, analyzing, and extracting knowledge from two-dimensional echocardiographic images, color Doppler images, non-medical images, and general data sets. A number of high performance data mining algorithms have been used to carry out this task. Our framework encompasses four layers, namely physical storage, object identification, knowledge discovery, and user level. Techniques such as an active contour model to identify the cardiac chambers, pixel classification to segment the color Doppler echo image, a universal model for image retrieval, a Bayesian method for classification, and parallel algorithms for image segmentation were employed. Using the feature vector database that has been efficiently constructed, one can perform various data mining tasks like clustering and classification with efficient algorithms, along with image mining given a query image. All these facilities are included in the framework, which is supported by a state-of-the-art user interface (UI). The algorithms were tested with actual patient data and the Corel image database, and the results show that their performance is better than previously reported results.

Keywords: active contour, bayesian, echocardiographic image, feature vector

Procedia PDF Downloads 416
1679 Influence of Flight Design on Discharging Profiles of Granular Material in Rotary Dryer

Authors: I. Benhsine, M. Hellou, F. Lominé, Y. Roques

Abstract:

During the manufacture of fertilizer, it is necessary to add water for granulation purposes. The water content is then removed or reduced using rotary dryers, which are commonly used to dry wet granular materials and are usually fitted with lifting flights. The transport of granular materials occurs when particles cascade from the lifting flights and fall into the air stream. Each cascade consists of a lifting and a falling cycle. Lifting flights are thus of great importance for the transport of granular materials along the dryer. They also enhance the contact between solid particles and the air stream. Optimization of the drying process needs an understanding of the behavior of granular materials inside a rotary dryer. Different approaches exist to study the movement of granular materials inside the dryer; the most common of them are based on empirical formulations or on studying the movement of the bulk material. In the present work, we use the Discrete Element Method (DEM) to understand the behavior of each particle in the cross section of the dryer. In this paper, we focus on studying the hold-up, the cascade patterns, the falling time, and the falling length of the particles leaving the flights. We use two-segment flights with three different profiles: a straight flight (180° between both segments), an angled flight (with an angle of 150°), and a right-angled flight (90°). The profile of the flight significantly affects the movement of the particles in the dryer. Changing the flight angle changes the flight capacity, which leads to a different discharging profile of the flight, thus affecting the hold-up in the flight. When the angle of the flight is reduced, the range of the discharge angle increases, leading to a cascade pattern that is more uniform in time. The falling length and the falling time of the particles also increase up to a maximum value and then start decreasing. Moreover, the results show an increase in the falling length and the falling time of up to 70% and 50%, respectively, when using a right-angled flight instead of a straight one.

Keywords: discrete element method, granular materials, lifting flight, rotary dryer

Procedia PDF Downloads 322
1678 On the Implementation of The Pulse Coupled Neural Network (PCNN) in the Vision of Cognitive Systems

Authors: Hala Zaghloul, Taymoor Nazmy

Abstract:

One of the great challenges of the 21st century is to build a robot that can perceive and act within its environment and communicate with people, while also exhibiting the cognitive capabilities that lead to performance like that of people. The Pulse Coupled Neural Network (PCNN) is a relatively new ANN model derived from a mammalian neural model, with great potential in the area of image processing as well as target recognition, feature extraction, speech recognition, combinatorial optimization, and compressed encoding. The PCNN has unique features among other types of neural networks, which make it a candidate for an important approach to perception in cognitive systems. This work shows and emphasizes the potential of the PCNN to perform different tasks related to image processing. The main drawback, or the obstacle that prevents the direct implementation of this technique, is the need to find a way to control the PCNN parameters so that they perform a specific task. This paper evaluates the performance of the standard PCNN model for processing images with different properties, selects the important parameters that give significant results, and discusses approaches towards adapting the PCNN parameters to perform a specific task.
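
A minimal sketch of a standard simplified PCNN iteration is shown below (feeding, linking, internal activity, dynamic threshold, and pulse output); the parameter values and linking kernel are illustrative only, which is precisely the parameter-control problem the paper discusses.

    import numpy as np
    from scipy.ndimage import convolve

    def pcnn_segment(img, n_iter=10, beta=0.2, aF=0.1, aL=1.0, aE=1.0, VF=0.5, VL=0.2, VE=20.0):
        # Standard simplified PCNN: feeding F, linking L, internal activity U,
        # dynamic threshold E and binary pulse output Y.
        S = img.astype(float) / img.max()
        F = np.zeros_like(S); L = np.zeros_like(S); E = np.ones_like(S); Y = np.zeros_like(S)
        W = np.array([[0.5, 1.0, 0.5], [1.0, 0.0, 1.0], [0.5, 1.0, 0.5]])  # linking kernel
        fired = np.zeros_like(S)
        for _ in range(n_iter):
            K = convolve(Y, W, mode="constant")
            F = np.exp(-aF) * F + VF * K + S
            L = np.exp(-aL) * L + VL * K
            U = F * (1.0 + beta * L)
            Y = (U > E).astype(float)
            E = np.exp(-aE) * E + VE * Y
            fired += Y
        return fired  # firing map: pixels with similar intensity pulse together

    # Toy image: bright square on a dark background (hypothetical input)
    img = np.zeros((64, 64)); img[20:40, 20:40] = 255.0
    print(np.unique(pcnn_segment(img)))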

Keywords: cognitive system, image processing, segmentation, PCNN kernels

Procedia PDF Downloads 273
1677 Multi-Modal Feature Fusion Network for Speaker Recognition Task

Authors: Xiang Shijie, Zhou Dong, Tian Dan

Abstract:

Speaker recognition is a crucial task in the field of speech processing, aimed at identifying individuals based on their vocal characteristics. However, existing speaker recognition methods face numerous challenges. Traditional methods primarily rely on audio signals, which often suffer from limitations in noisy environments, variations in speaking style, and insufficient sample sizes. Additionally, relying solely on audio features can sometimes fail to capture the unique identity of the speaker comprehensively, impacting recognition accuracy. To address these issues, we propose a multi-modal network architecture that simultaneously processes both audio and text signals. By gradually integrating audio and text features, we leverage the strengths of both modalities to enhance the robustness and accuracy of speaker recognition. Our experiments demonstrate significant improvements with this multi-modal approach, particularly in complex environments, where recognition performance has been notably enhanced. Our research not only highlights the limitations of current speaker recognition methods but also showcases the effectiveness of multi-modal fusion techniques in overcoming these limitations, providing valuable insights for future research.

Keywords: feature fusion, memory network, multimodal input, speaker recognition

Procedia PDF Downloads 14
1676 Image Inpainting Model with Small-Sample Size Based on Generative Adversarial Network and Genetic Algorithm

Authors: Jiawen Wang, Qijun Chen

Abstract:

The performance of most machine-learning methods for image inpainting depends on the quantity and quality of the training samples. However, it is very expensive or even impossible to obtain a great number of training samples in many scenarios. In this paper, an image inpainting model based on a generative adversarial network (GAN) is constructed for cases when the number of training samples is small. Firstly, a feature extraction network (F-net) is incorporated into the GAN to utilize the available information of the image to be inpainted. The weighted sum of the extracted feature and random noise acts as the input to the generative network (G-net). The proposed network can be trained well even when the sample size is very small. Secondly, in the completion phase for each damaged image, a genetic algorithm is designed to search for an optimized noise input for the G-net; based on this optimized input, the parameters of the G-net and F-net are further learned (once the completion of a given damaged image ends, the parameters are restored to the original values obtained in the training phase) to generate an image patch that not only fills the missing part of the damaged image smoothly but also has visual semantics.

Keywords: image inpainting, generative adversarial nets, genetic algorithm, small-sample size

Procedia PDF Downloads 125
1675 Offline Signature Verification in Punjabi Based On SURF Features and Critical Point Matching Using HMM

Authors: Rajpal Kaur, Pooja Choudhary

Abstract:

Biometrics, which refers to identifying an individual based on his or her physiological or behavioral characteristics, has the capability to reliably distinguish between an authorized person and an impostor. Signature recognition systems can be categorized as offline (static) and online (dynamic). This paper presents a SURF-feature-based offline signature recognition system that is trained with low-resolution scanned signature images. The signature of a person is an important biometric attribute of a human being which can be used to authenticate human identity. A signature can be handled as an image and recognized using computer vision and HMM techniques. With modern computers, there is a need to develop fast algorithms for signature recognition. Multiple techniques have been defined for signature recognition, with much scope for research. In this paper, (static signature) offline signature recognition and verification using SURF features with an HMM is proposed, where the signature is captured and presented to the user in an image format. Signatures are verified based on parameters extracted from the signature using various image processing techniques. The offline signature verification and recognition is implemented on the MATLAB platform. This work has been analyzed and tested and found suitable for its purpose. The proposed method performs better than other recently proposed methods.

Keywords: offline signature verification, offline signature recognition, signatures, SURF features, HMM

Procedia PDF Downloads 379
1674 Detection and Classification of Myocardial Infarction Using New Extracted Features from Standard 12-Lead ECG Signals

Authors: Naser Safdarian, Nader Jafarnia Dabanloo

Abstract:

In this paper, we used four features, i.e., the Q-wave integral, QRS complex integral, T-wave integral, and total integral, extracted from normal and patient ECG signals, for the detection and localization of myocardial infarction (MI) in the left ventricle of the heart. In our research, we focused on the detection and localization of MI in the standard ECG. We use the Q-wave integral and T-wave integral because these features carry important information for the detection of MI. We used pattern recognition methods such as the Artificial Neural Network (ANN) to detect and localize MI, because these methods have good accuracy for the classification of normal and abnormal signals. We used one type of Radial Basis Function (RBF) network called the Probabilistic Neural Network (PNN) because of its nonlinearity property, and other classifiers such as k-Nearest Neighbors (KNN), the Multilayer Perceptron (MLP), and Naive Bayes classification. We used the PhysioNet database for our training and test data. We reached over 80% accuracy on the test data for localization and over 95% for detection of MI. The main advantages of our method are its simplicity and good accuracy, and the classification accuracy can be improved by adding more features. In summary, a simple method based on only four features extracted from the standard ECG is presented, which has good accuracy in MI localization.
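
A small sketch of how such integral features could be computed with NumPy is given below; the ECG excerpt, sampling rate, and wave boundary indices are hypothetical placeholders (real systems would take them from wave annotations).

    import numpy as np

    def wave_integral(ecg, start, end, fs=360.0):
        # Area under the ECG curve over a wave segment (trapezoidal rule), in mV*s
        return np.trapz(ecg[start:end], dx=1.0 / fs)

    # Hypothetical single-beat ECG excerpt and annotated boundaries (sample indices)
    fs = 360.0
    t = np.arange(0, 0.8, 1.0 / fs)
    ecg = 0.1 * np.sin(2 * np.pi * 1.2 * t)            # placeholder signal
    q_on, q_off, s_off, t_on, t_off = 80, 95, 130, 180, 240

    features = {
        "Q_integral":     wave_integral(ecg, q_on, q_off, fs),
        "QRS_integral":   wave_integral(ecg, q_on, s_off, fs),
        "T_integral":     wave_integral(ecg, t_on, t_off, fs),
        "total_integral": wave_integral(ecg, 0, len(ecg), fs),
    }
    print(features)   # feature vector fed to PNN / KNN / MLP / Naive Bayes classifiers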

Keywords: ECG signal processing, myocardial infarction, features extraction, pattern recognition

Procedia PDF Downloads 452
1673 RGB Color Based Real Time Traffic Sign Detection and Feature Extraction System

Authors: Kay Thinzar Phu, Lwin Lwin Oo

Abstract:

In intelligent transport systems and advanced driver assistance systems, the development of a real-time traffic sign detection and recognition (TSDR) system plays an important part in current research. There are many challenges in developing a real-time TSDR system due to motion artifacts, variable lighting and weather conditions, and the condition of the traffic signs themselves. Researchers have already proposed various methods to minimize these challenges. The aim of the proposed research is to develop an efficient and effective TSDR system in real time. The system uses an adaptive thresholding method based on RGB color for traffic sign detection and new features for traffic sign recognition. The RGB color thresholding is used to detect the blue and yellow traffic sign regions. The system then performs shape identification to decide whether an output candidate region is a traffic sign or not. Lastly, new features such as termination points, bifurcation points, and 90° angles are extracted from the validated image. The system uses the Myanmar Traffic Sign dataset.
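
A minimal sketch of RGB-based thresholding for blue and yellow candidate regions is shown below; the fixed threshold values are illustrative only (the paper adapts them to the scene rather than using constants).

    import numpy as np

    def detect_sign_regions(img_rgb):
        # img_rgb: H x W x 3 uint8 array. Returns boolean masks of candidate regions.
        r = img_rgb[..., 0].astype(int)
        g = img_rgb[..., 1].astype(int)
        b = img_rgb[..., 2].astype(int)
        # Illustrative fixed rules; an adaptive method would tune these to the lighting
        blue_mask = (b > 100) & (b > r + 40) & (b > g + 40)
        yellow_mask = (r > 120) & (g > 120) & (b < 100) & (abs(r - g) < 60)
        return blue_mask, yellow_mask

    # Hypothetical 2x2 test image: one blue pixel, one yellow pixel, two background pixels
    img = np.array([[[ 20,  30, 200], [210, 200,  40]],
                    [[ 90,  90,  90], [  0,   0,   0]]], dtype=np.uint8)
    blue, yellow = detect_sign_regions(img)
    print(blue)    # True only at the blue pixel
    print(yellow)  # True only at the yellow pixel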

Keywords: adaptive thresholding based on RGB color, blue color detection, feature extraction, yellow color detection

Procedia PDF Downloads 307
1672 Curvature Based-Methods for Automatic Coarse and Fine Registration in Dimensional Metrology

Authors: Rindra Rantoson, Hichem Nouira, Nabil Anwer, Charyar Mehdi-Souzani

Abstract:

Multiple measurements by means of various data acquisition systems are generally required to measure the shape of freeform workpieces for accuracy, reliability, and completeness. The obtained data are aligned and fused into a common coordinate system through a registration technique involving coarse and fine registrations. Standardized iterative methods have been established for fine registration, such as Iterative Closest Points (ICP) and its variants. For coarse registration, no conventional method has been adopted yet, despite a significant number of techniques developed in the literature to supply an automatic rough matching between data sets. Two main issues are addressed in this paper: coarse registration and fine registration. For coarse registration, two novel automated methods based on the exploitation of discrete curvatures are presented: an enhanced Hough Transformation (HT) and an improved RANSAC transformation. The use of curvature features in both methods aims to reduce the computational cost. For fine registration, a new variant of the ICP method is proposed in order to reduce the registration error using curvature parameters. A specific distance considering the curvature similarity has been combined with the Euclidean distance to define the distance criterion used for correspondence searching. Additionally, the objective function has been improved by combining the point-to-point (P-P) minimization and the point-to-plane (P-Pl) minimization with automatic weights. These weights are determined from the preliminarily calculated curvature features at each point of the workpiece surface. The algorithms are applied to simulated data and to real data acquired by a computed tomography (CT) system. The obtained results reveal the benefit of the proposed novel curvature-based registration methods.
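
A minimal sketch of a correspondence cost that mixes Euclidean distance with curvature similarity is shown below; the weight lam and the sample points/curvatures are hypothetical (the paper derives the weights automatically from the computed curvature features).

    import numpy as np

    def combined_distance(p, q, kp, kq, lam=0.5):
        # Correspondence cost: Euclidean distance plus a curvature-dissimilarity penalty
        return np.linalg.norm(p - q) + lam * abs(kp - kq)

    def find_correspondence(p, kp, target_pts, target_curv, lam=0.5):
        costs = [combined_distance(p, q, kp, kq, lam) for q, kq in zip(target_pts, target_curv)]
        return int(np.argmin(costs))

    # Hypothetical source point with its discrete curvature, and a small target cloud
    p, kp = np.array([0.0, 0.0, 1.0]), 0.8
    target_pts = np.array([[0.1, 0.0, 1.0], [0.05, 0.0, 1.0], [2.0, 2.0, 2.0]])
    target_curv = np.array([0.1, 0.75, 0.8])
    # Index 1 is both near in space and similar in curvature; index 0, though nearby,
    # is penalised for its dissimilar (flat) curvature.
    print(find_correspondence(p, kp, target_pts, target_curv))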

Keywords: discrete curvature, RANSAC transformation, hough transformation, coarse registration, ICP variant, point-to-point and point-to-plane minimization combination, computer tomography

Procedia PDF Downloads 420
1671 Amplifying Sine Unit-Convolutional Neural Network: An Efficient Deep Architecture for Image Classification and Feature Visualizations

Authors: Jamshaid Ul Rahman, Faiza Makhdoom, Dianchen Lu

Abstract:

Activation functions play a decisive role in determining the capacity of Deep Neural Networks (DNNs), as they enable neural networks to capture the inherent nonlinearities present in the data fed to them. Prior research on activation functions primarily focused on the utility of monotonic or non-oscillatory functions, until the Growing Cosine Unit (GCU) broke the taboo for a number of applications. In this paper, a Convolutional Neural Network (CNN) model named ASU-CNN is proposed, which utilizes the recently designed activation function ASU across its layers. The effect of this non-monotonic and oscillatory function is inspected through feature map visualizations from different convolutional layers. The optimization of the proposed network is performed by Adam with a fine-tuned adjustment of the learning rate. The network achieved promising results on both training and testing data for the classification of CIFAR-10. The experimental results affirm the computational feasibility and efficacy of the proposed model for performing tasks related to the field of computer vision.

Keywords: amplifying sine unit, activation function, convolutional neural networks, oscillatory activation, image classification, CIFAR-10

Procedia PDF Downloads 105
1670 Predicting Open Chromatin Regions in Cell-Free DNA Whole Genome Sequencing Data by Correlation Clustering  

Authors: Fahimeh Palizban, Farshad Noravesh, Amir Hossein Saeidian, Mahya Mehrmohamadi

Abstract:

In the recent decade, the emergence of liquid biopsy has significantly improved cancer monitoring and detection. Dying cells, including those originating from tumors, shed their DNA into the blood and contribute to a pool of circulating fragments called cell-free DNA. Accordingly, identifying the tissue origin of these DNA fragments from the plasma can result in more accurate and faster disease diagnosis and precise treatment protocols. Open chromatin regions are important epigenetic features of DNA that reflect cell types of origin. Profiling these features by DNase-seq, ATAC-seq, and histone ChIP-seq provides insights into tissue-specific and disease-specific regulatory mechanisms. There have been several studies in the area of cancer liquid biopsy that integrate distinct genomic and epigenomic features for early cancer detection along with tissue-of-origin detection. However, multimodal analysis requires several types of experiments to cover the genomic and epigenomic aspects of a single sample, which leads to a huge amount of cost and time. To overcome these limitations, the idea of predicting OCRs from WGS is of particular importance. In this regard, we propose a computational approach to predict open chromatin regions, as an important epigenetic feature, from cell-free DNA whole genome sequence data. To fulfill this objective, local sequencing depth is fed to our proposed algorithm, and the prediction of the most probable open chromatin regions from whole genome sequencing data can be carried out. Our method integrates a signal processing method with sequencing depth data and includes count normalization, Discrete Fourier Transform conversion, graph construction, graph cut optimization by linear programming, and clustering. To validate the proposed method, we compared the output of the clustering (open chromatin region+, open chromatin region-) with previously validated open chromatin regions related to human blood samples of the ATAC-DB database. The percentage of overlap between predicted open chromatin regions and the experimentally validated regions obtained by ATAC-seq in ATAC-DB is greater than 67%, which indicates meaningful prediction. As is evident, OCRs are mostly located at the transcription start sites (TSS) of genes. In this regard, we compared the concordance between the predicted OCRs and the human gene TSS regions obtained from refTSS, which showed accordance of around 52.04% with all genes and ~78% with the housekeeping genes. Accurately detecting open chromatin regions from plasma cell-free DNA-seq data is a very challenging computational problem due to the existence of several confounding factors, such as technical and biological variations. Although this approach is in its infancy, there has already been an attempt to apply it, leading to a tool named OCRDetector with some restrictions, like the need for high-depth cfDNA WGS data, prior information about the OCR distribution, and consideration of multiple features. However, we implemented a graph signal clustering based on a single depth feature in an unsupervised learning manner, which resulted in faster performance and decent accuracy. Overall, we tried to investigate the epigenomic pattern of a cell-free DNA sample from a new computational perspective that can be used along with other tools to investigate the genetic and epigenetic aspects of a single whole genome sequencing dataset for efficient liquid biopsy-related analysis.
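
As a rough illustration of the pipeline shape only: the sketch below normalizes windowed depth counts, derives DFT magnitude features, and clusters windows into OCR+/OCR- candidates. The depth signal is simulated, and KMeans stands in for the paper's graph construction, linear-programming graph cut, and correlation clustering, which are not reproduced here.

    import numpy as np
    from sklearn.cluster import KMeans

    def ocr_candidate_labels(depth, win=200):
        # depth: per-base sequencing depth along a genomic region (hypothetical input)
        n_win = len(depth) // win
        windows = depth[: n_win * win].reshape(n_win, win)
        norm = windows / (depth.mean() + 1e-9)              # count normalization
        spectra = np.abs(np.fft.rfft(norm, axis=1))         # DFT magnitude per window
        feats = np.column_stack([norm.mean(axis=1), spectra[:, 1:6].mean(axis=1)])
        # KMeans used here only as a stand-in for the graph-cut / correlation clustering step
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(feats)
        return labels   # 0/1 per window: candidate OCR+ vs OCR- regions

    rng = np.random.default_rng(0)
    depth = rng.poisson(30, size=10_000).astype(float)
    depth[2_000:2_400] *= 0.4    # simulated nucleosome-depleted (open chromatin-like) dip
    print(ocr_candidate_labels(depth)[:15])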

Keywords: open chromatin regions, cancer, cell-free DNA, epigenomics, graph signal processing, correlation clustering

Procedia PDF Downloads 145
1669 Comparing the Contribution of General Vocabulary Knowledge and Academic Vocabulary Knowledge to Learners' Academic Achievement

Authors: Reem Alsager, James Milton

Abstract:

Coxhead’s (2000) Academic Word List (AWL) is believed to be essential for students pursuing higher education; it helps differentiate English for Academic Purposes (EAP) from General English as a course of study, and it is thought to be important for comprehending English academic texts. The AWL has been described as an infrequent, discrete set of vocabulary items unreachable from general language. On the other hand, it has been known for some time that general vocabulary knowledge is a good predictor of academic achievement. This study, however, is an attempt to measure and compare the contribution of academic vocabulary knowledge and general vocabulary knowledge to learners’ GPA, examine which knowledge is a better predictor of academic achievement, and investigate whether the AWL, as a specialised list of infrequent words, relates to the frequency effect. The participants comprised 44 international postgraduate students at Swansea University, all from the School of Management, following the taught MSc (Master of Science). The study employed the Academic Vocabulary Size Test (AVST) and the XK_Lex vocabulary size test. The findings indicate that the AWL is a list based on word frequency rather than a discrete and unique word list, and that the AWL performs the same function as general vocabulary, with tests of each found to measure largely the same quality of knowledge. The findings also suggest that the contribution that AWL knowledge provides for academic success is not sufficient and that general vocabulary knowledge is better at predicting academic achievement. Furthermore, the contribution that academic vocabulary knowledge adds above the contribution of general vocabulary knowledge, when the two are combined, is very small. This study’s results suggest that the development of general vocabulary size is an essential quality for academic success, and acquiring the words of the AWL will form part of this process. The AWL by itself does not provide sufficient coverage, and is probably not specialised enough, for knowledge of this list to influence this general process. It can be concluded that the AWL, as an academic word list, represents only a fraction of the words that are actually needed for academic success in English, and that knowledge of academic vocabulary combined with general vocabulary knowledge above the most frequent 3,000 words is what matters most to ultimate academic success.

Keywords: academic achievement, academic vocabulary, general vocabulary, vocabulary size

Procedia PDF Downloads 217
1668 Rock-Bed Thermocline Storage: A Numerical Analysis of Granular Bed Behavior and Interaction with Storage Tank

Authors: Nahia H. Sassine, Frédéric-Victor Donzé, Arnaud Bruch, Barthélemy Harthong

Abstract:

Thermal Energy Storage (TES) systems are central elements of various types of power plants operated using renewable energy sources. Packed bed TES can be considered a cost-effective solution in concentrated solar power (CSP) plants. Such a device is made up of a tank filled with a granular bed through which heat-transfer fluid circulates. However, in such devices, the tank might be subjected to catastrophic failure induced by a mechanical phenomenon known as thermal ratcheting. Thermal stresses are accumulated during cycles of loading and unloading until failure happens. For instance, when rocks are used as storage material, the tank wall expands more than the solid medium during the charging process; a gap is created between the rocks and the tank walls, and the filler material settles down to fill it. During discharge, the tank contracts against the bed, resulting in thermal stresses that may exceed the tank wall yield stress and generate plastic deformation. This phenomenon is repeated over the cycles, and the tank will be slowly ratcheted outward until it fails. This paper aims at studying the evolution of tank wall stresses over granular bed thermal cycles, taking into account both thermal and mechanical loads, with a numerical model based on the discrete element method (DEM). Simulations were performed to study two different thermal configurations: (i) the tank is heated homogeneously along its height or (ii) with a vertical gradient of temperature. Then, the resulting loading stresses applied on the tank are compared, as well as the response of the internal granular material. Besides the influence of the different thermal configurations on the storage tank response, other parameters are varied, such as the internal friction angle of the granular material, the dispersion of particle diameters, and the tank’s dimensions. Their influences on the kinematics of the granular bed subjected to thermal cycles are then highlighted.

Keywords: discrete element method (DEM), thermal cycles, thermal energy storage, thermocline

Procedia PDF Downloads 399
1667 Existence of Positive Solutions for Second-Order Difference Equation with Discrete Boundary Value Problem

Authors: Thanin Sitthiwirattham, Jiraporn Reunsumrit

Abstract:

We study the existence of positive solutions to a three-point difference summation boundary value problem. We show the existence of at least one positive solution if f is either superlinear or sublinear by applying the fixed point theorem due to Krasnoselskii in cones.

Keywords: positive solution, boundary value problem, fixed point theorem, cone

Procedia PDF Downloads 436
1666 User-Driven Product Line Engineering for Assembling Large Families of Software

Authors: Zhaopeng Xuan, Yuan Bian, C. Cailleaux, Jing Qin, S. Traore

Abstract:

Traditional software engineering allows engineers to propose to their clients multiple specialized software distributions assembled from a shared set of software assets. The management of these assets, however, requires a trade-off between client satisfaction and the software engineering process. Clients find it more and more difficult to locate a distribution or components that match their needs across all the distributed repositories. This paper proposes a software engineering approach for a user-driven software product line in which engineers define a feature model but users drive the actual software distribution on demand. This approach makes the user the final actor, acting as a release manager in the software engineering process, which increases user satisfaction with the product and simplifies the operations needed to find the required components. In addition, it provides a way for engineers to manage and assemble large software families. As a proof of concept, a user-driven software product line is implemented for Eclipse, an integrated development environment. An Eclipse feature model is defined and exposed to users on a cloud-based build platform from which clients can download individualized Eclipse distributions.

Keywords: software product line, model-driven development, reverse engineering and refactoring, agile method

Procedia PDF Downloads 428
1665 Navigating the Legal Seas: The Freedom to Choose Applicable Law in Tort

Authors: Sara Vora (Hoxha)

Abstract:

An essential feature of any international lawsuit is the ability of the parties to pick the law that would apply in the event of a tort claim. This option to choose the law to be applied in tort cases is based on Articles 14 and 4(3) of the Rome II Regulation. The purpose of this article is to examine the boundaries of this freedom, as well as its relevance in international legal disputes. The article opens with a brief introduction to the basics of tort law. After this short introduction, the article demonstrates why Articles 14 and 4(3) of the Rome II Regulation are so crucial to the right to select the applicable law in tort cases. The notion of the right to select the law to be applied in tort cases is examined, along with its breadth and possible restrictions. The article presents case studies to demonstrate how the right to select the relevant law in tort might be put into practice. Case results and the judges' rationales for their rulings are examined. The possible influence of the right to select the applicable law in tort on the process of harmonisation is also explored in this study. The results are summarised and the primary research question is addressed in the last section of the paper. In conclusion, the parties' ability to pick the law that rules their dispute, via the freedom to choose the relevant law in tort, is a crucial feature of cross-border litigation. Despite certain restrictions, this freedom is nevertheless an important part of the legal structure that governs international conflicts.

Keywords: applicable law, tort, Rome II regulation, freedom to choose, cross-border litigation, harmonization of tort law

Procedia PDF Downloads 62
1664 Consumer Preferences when Buying Second Hand Luxury Items

Authors: K. A. Schuck, J. K. Perret, A. Mehn, K. Rommel

Abstract:

Consumers increasingly consider sustainability aspects in their consumption behavior, yet so far only a few fashion brands are active in the second-hand luxury market with their own online platforms. Distinguishing between base and high-end luxury brands, two online discrete choice experiments determine the drivers behind consumers' willingness to pay for platform characteristics such as the type of ownership, giving brands the opportunity to elicit the financial scope they can operate within.

Keywords: choice experiment, luxury, preferences, second-hand, platform, online

Procedia PDF Downloads 123
1663 Consumer Utility Analysis of Halal Certification on Beef Using Discrete Choice Experiment: A Case Study in the Netherlands

Authors: Rosa Amalia Safitri, Ine van der Fels-Klerx, Henk Hogeveen

Abstract:

Halal is a dietary law observed by people following the Islamic faith. It is considered a type of credence food quality, which cannot be easily assessed by consumers even upon or after consumption. Therefore, Halal certification serves as a practical tool for consumers to make an informed choice, particularly in a non-Muslim-majority country such as the Netherlands. A discrete choice experiment (DCE) was employed in this study for its ability to assess the importance of attributes attached to Halal beef in the Dutch market and to investigate consumer utilities. Furthermore, the willingness to pay (WTP) for the desired Halal certification was estimated. The four most relevant attributes were selected, i.e., the slaughter method, traceability information, place of purchase, and Halal certification. Price was incorporated as an attribute to allow estimation of the willingness to pay for Halal certification. A total of 242 Muslim respondents who regularly consume Halal beef completed the survey: Dutch consumers (53%) and non-Dutch consumers living in the Netherlands (47%). The vast majority of the respondents (95%) were within the age range of 18-45 years old, with the largest group being students (43%), followed by employees (30%) and housewives (12%). The majority of the respondents (76%) had a disposable monthly income of less than €2,500, while the rest earned more. The respondents assessed themselves as having good knowledge of the studied attributes, except for traceability information, for which 62% of the respondents considered themselves not knowledgeable. The findings indicated that the slaughter method was valued as the most important attribute, followed by the Halal certificate, place of purchase, price, and traceability information. This order of importance varied across sociodemographic variables, except for the slaughter method. Both the Dutch and non-Dutch subgroups valued Halal certification as the third most important attribute; however, non-Dutch respondents valued it with higher importance (0.20) than their Dutch counterparts (0.16). For non-Dutch respondents, price was more important than Halal certification. The ideal product preferred by the consumers, i.e., the product providing the highest utility, was characterized by beef obtained without pre-slaughter stunning, with traceability information, available at a Halal store, certified by an official certifier, and sold at €2.75 per 500 g. In general, an official Halal certifier was mostly preferred. However, consumers were not willing to pay a premium for any type of Halal certifier, as indicated by negative WTP values of -€0.73, -€0.93, and -€1.03 for small, official, and international certifiers, respectively. This finding indicates that consumers tend to lose utility when confronted with the price. The WTP estimates differ across socio-demographic variables, with male and non-Dutch respondents having the lowest WTP. Unfamiliarity with traceability information might cause respondents to perceive it as the least important attribute. In the context of Halal-certified meat, adding traceability information to the meat packaging can serve two functions: first, consumers can judge for themselves whether the processes comply with Halal requirements, for example, the use of pre-slaughter stunning; and second, it helps assure food safety. Therefore, integrating traceability information into meat packaging can help consumers make informed decisions on both Halal status and food safety.
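
In a DCE, the WTP for an attribute level is typically obtained as the (negative) ratio of its utility coefficient to the price coefficient. The tiny sketch below shows this calculation; the coefficient values are hypothetical placeholders chosen for illustration, not the study's estimates.

    # Willingness to pay from multinomial-logit utility coefficients: WTP_k = -beta_k / beta_price.
    # The coefficient values below are hypothetical placeholders, not the study's estimates.
    beta_price = -0.9                 # utility per euro (negative: higher price lowers utility)
    beta_certifier = {"small": -0.66, "official": -0.84, "international": -0.93}

    wtp = {k: -b / beta_price for k, b in beta_certifier.items()}
    for k, v in wtp.items():
        print(f"WTP for {k} certifier: {v:+.2f} EUR")   # negative values mean no premium is paid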

Keywords: consumer utilities, discrete choice experiments, Halal certification, willingness to pay

Procedia PDF Downloads 124
1662 Reallocation of Bed Capacity in a Hospital Combining Discrete Event Simulation and Integer Linear Programming

Authors: Muhammed Ordu, Eren Demir, Chris Tofallis

Abstract:

The number of inpatient admissions in the UK has been increasing significantly over the past decade. These increases cause bed occupancy rates to exceed the target level (85%) set by the Department of Health in England. Therefore, hospital service managers are struggling to better manage key resources such as beds. On the other hand, this severe demand pressure might lead to confusion in wards; for example, patients can be admitted to the ward of another inpatient specialty due to a lack of resources (i.e., beds). This study aims to develop a simulation-optimization model to reallocate the available number of beds in a mid-sized hospital in the UK. A hospital simulation model was developed to capture the stochastic behaviours of the hospital by taking into account the accident and emergency department, all outpatient and inpatient services, and the interactions between them. A couple of outputs of the simulation model (e.g., average length of stay and revenue) were generated as inputs for the optimization model. An integer linear program was developed under a number of constraints (financial, demand, target level of bed occupancy rate, and staffing level) with the aim of maximizing the number of admitted patients. In addition, a sensitivity analysis was carried out by taking into account unexpected increases in inpatient demand over the next 12 months. As a result, the approach proposed in this study optimally reallocates the available number of beds for each inpatient specialty and reveals that 74 beds are idle. In addition, the findings indicate that the hospital wards will be able to cope with at most a 14% demand increase in the projected year. In conclusion, this paper sheds new light on how best to reallocate beds in order to cope with current and future demand for healthcare services.
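
A minimal sketch of the optimization side is given below using PuLP: beds are allocated to specialties so as to maximize admissions subject to bed stock, demand, and an occupancy-target constraint. The specialty names, lengths of stay, demand figures, and bed stock are hypothetical; in the study these inputs come from the discrete event simulation model.

    from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpInteger, value

    # Hypothetical per-specialty inputs (in the paper, produced by the simulation model)
    specialties = ["general_surgery", "cardiology", "orthopaedics"]
    los = {"general_surgery": 4.2, "cardiology": 5.1, "orthopaedics": 6.3}      # mean LOS, days
    demand = {"general_surgery": 5200, "cardiology": 3100, "orthopaedics": 2400}  # annual demand
    total_beds, occupancy_target, days = 320, 0.85, 365

    prob = LpProblem("bed_reallocation", LpMaximize)
    beds = {s: LpVariable(f"beds_{s}", lowBound=0, cat=LpInteger) for s in specialties}
    adm = {s: LpVariable(f"admissions_{s}", lowBound=0, cat=LpInteger) for s in specialties}

    prob += lpSum(adm[s] for s in specialties)                 # maximise admitted patients
    prob += lpSum(beds[s] for s in specialties) <= total_beds  # available bed stock
    for s in specialties:
        prob += adm[s] <= demand[s]                                      # demand cap
        prob += adm[s] * los[s] <= beds[s] * days * occupancy_target     # occupancy target

    prob.solve()
    print({s: (int(value(beds[s])), int(value(adm[s]))) for s in specialties})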

Keywords: bed occupancy rate, bed reallocation, discrete event simulation, inpatient admissions, integer linear programming, projected usage

Procedia PDF Downloads 139
1661 Image Segmentation Using Active Contours Based on Anisotropic Diffusion

Authors: Shafiullah Soomro

Abstract:

Active contours are one of the image segmentation techniques, and their goal is to capture the required object boundaries within an image. In this paper, we propose a novel image segmentation method using an active contour method based on an anisotropic diffusion feature enhancement technique. Traditional active contour methods use only pixel information to perform segmentation, which produces inaccurate results when an image has noise or a complex background. We use the Perona-Malik diffusion scheme for feature enhancement, which sharpens the object boundaries and blurs the background variations. Our main contribution is the formulation of a new SPF (signed pressure force) function, which uses global intensity information across the regions. By minimizing an energy function within a partial differential equation framework, the proposed method captures semantically meaningful boundaries instead of catching uninteresting regions. Finally, we use a Gaussian kernel, which eliminates the problem of reinitialization of the level set function. We use several synthetic and real images from different modalities to validate the performance of the proposed method. In the experimental section, we found that the proposed method performs better both qualitatively and quantitatively and yields results with higher accuracy compared to other state-of-the-art methods.
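
A minimal sketch of the Perona-Malik feature-enhancement step (edge-preserving smoothing prior to contour evolution) is shown below; the conduction constant kappa, the time step, and the number of iterations are illustrative values only, and the SPF/level-set part of the paper is not reproduced.

    import numpy as np

    def perona_malik(img, n_iter=20, kappa=30.0, gamma=0.2):
        # Anisotropic diffusion: smooth homogeneous regions while preserving edges
        u = img.astype(float).copy()
        for _ in range(n_iter):
            # Finite-difference gradients towards the four neighbours
            dn = np.roll(u, -1, axis=0) - u
            ds = np.roll(u, 1, axis=0) - u
            de = np.roll(u, -1, axis=1) - u
            dw = np.roll(u, 1, axis=1) - u
            # Perona-Malik conduction coefficient g(|grad|) = exp(-(|grad|/kappa)^2)
            cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
            ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
            u += gamma * (cn * dn + cs * ds + ce * de + cw * dw)
        return u

    # Toy example: noisy step edge; diffusion removes noise but keeps the edge sharp
    img = np.hstack([np.zeros((64, 32)), 200 * np.ones((64, 32))])
    img += np.random.default_rng(0).normal(0, 10, img.shape)
    print(np.std(perona_malik(img)[:, :20]))   # noise in the flat region is strongly reduced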

Keywords: active contours, anisotropic diffusion, level-set, partial differential equations

Procedia PDF Downloads 157
1660 Approximation Property Pass to Free Product

Authors: Kankeyanathan Kannan

Abstract:

The study of approximation properties of group C*-algebras is everywhere; it is powerful, important, and the backbone of countless breakthroughs. For a discrete group G, let A(G) denote its Fourier algebra, and let M₀A(G) denote the space of completely bounded Fourier multipliers on G. An approximate identity on G is a sequence (Φn) of finitely supported functions such that (Φn) converges uniformly to the constant function 1. In this paper, we prove that the approximation property passes to free products.

Keywords: approximation property, weakly amenable, strong invariant approximation property, invariant approximation property

Procedia PDF Downloads 671
1659 Simultaneous Determination of Methotrexate and Aspirin Using Fourier Transform Convolution Emission Data under Non-Parametric Linear Regression Method

Authors: Marwa A. A. Ragab, Hadir M. Maher, Eman I. El-Kimary

Abstract:

Co-administration of methotrexate (MTX) and aspirin (ASP) can cause a pharmacokinetic interaction and a subsequent increase in blood MTX concentrations, which may increase the risk of MTX toxicity. Therefore, it is important to develop a sensitive, selective, accurate, and precise method for their simultaneous determination in urine. A new hybrid chemometric method has been applied to the emission response data of the two drugs. A spectrofluorimetric method for the determination of MTX through measurement of its acid-degradation product, 4-amino-4-deoxy-10-methylpteroic acid (4-AMP), was developed. Moreover, the acid-catalyzed degradation reaction enables the spectrofluorimetric determination of ASP through the formation of its active metabolite, salicylic acid (SA). The proposed chemometric method deals with convolution of the emission data using 8-point sin(xi) polynomials (discrete Fourier functions) after derivative treatment of these emission data. The first and second derivative curves (D1 and D2) were obtained first; then convolution of these curves was carried out to obtain the first and second derivative under Fourier functions curves (D1/FF and D2/FF). This new application was used for the resolution of the overlapped emission bands of the degradation products of both drugs to allow their simultaneous indirect determination in human urine. Not only was this chemometric approach applied to the emission data, but the obtained data were also subjected to non-parametric linear regression analysis (Theil's method). The proposed method was fully validated according to the ICH guidelines, and it yielded linearity ranges of 0.05-0.75 and 0.5-2.5 µg mL⁻¹ for MTX and ASP, respectively. It was found that the non-parametric method was superior to the parametric one in the simultaneous determination of MTX and ASP after the chemometric treatment of the emission spectra of their degradation products. The work combines the advantages of derivative treatment and convolution using discrete Fourier functions with the reliability and efficacy of non-parametric data analysis. The achieved sensitivity, along with the low values of LOD (0.01 and 0.06 µg mL⁻¹) and LOQ (0.04 and 0.2 µg mL⁻¹) for MTX and ASP, respectively, by the second derivative under Fourier functions (D2/FF), is promising and guarantees its application for monitoring the two drugs in patients' urine samples.
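
A small sketch of the non-parametric regression step is shown below, using SciPy's Theil-Sen estimator (median of all pairwise slopes, robust to outlying points compared with ordinary least squares); the calibration concentrations and convolved responses are hypothetical placeholders within the reported MTX linearity range.

    import numpy as np
    from scipy import stats

    # Hypothetical calibration data: MTX concentration (ug/mL) vs. convolved D2/FF amplitude
    conc = np.array([0.05, 0.15, 0.30, 0.45, 0.60, 0.75])
    response = np.array([0.021, 0.063, 0.118, 0.182, 0.239, 0.301])   # placeholder responses

    # Theil's method: non-parametric linear regression
    slope, intercept, lo_slope, up_slope = stats.theilslopes(response, conc, 0.95)
    print(f"slope = {slope:.4f}, intercept = {intercept:.4f}")

    # Predict an unknown sample concentration from its measured response
    unknown_response = 0.150
    print("estimated concentration:", (unknown_response - intercept) / slope)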

Keywords: chemometrics, emission curves, derivative, convolution, Fourier transform, human urine, non-parametric regression, Theil’s method

Procedia PDF Downloads 427
1658 Relevance Feedback within CBIR Systems

Authors: Mawloud Mosbah, Bachir Boucheham

Abstract:

We present here the results of a comparative study of some techniques, available in the literature, related to the relevance feedback mechanism in the case of short-term learning. Only one of the methods considered here, the K-Nearest Neighbours (KNN) algorithm, belongs to the data mining field, while the rest of the methods are related purely to the information retrieval field and fall under the purview of the following three major axes: query shifting, feature weighting, and optimization of the parameters of the similarity metric. As a contribution, and in addition to the comparative purpose, we propose a new version of the KNN algorithm, referred to as incremental KNN, which is distinct from the original version in the sense that, besides the influence of the seeds, the rating of the current target image is also influenced by the images already rated. The results presented here have been obtained from experiments conducted on the Wang database for one iteration and utilizing colour moments in the RGB space. This compact descriptor, colour moments, is adequate for the efficiency needed in interactive systems. The results obtained allow us to claim that the proposed algorithm achieves good results; it even outperforms a wide range of techniques available in the literature.
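
A minimal sketch of the colour-moments descriptor (mean, standard deviation, and skewness per RGB channel, giving a compact 9-dimensional feature vector) is shown below; the query image is a random placeholder standing in for a Wang-database image.

    import numpy as np

    def colour_moments(img_rgb):
        # Returns a 9-dimensional descriptor: mean, std and skewness of each RGB channel
        feats = []
        for c in range(3):
            ch = img_rgb[..., c].astype(float).ravel()
            mean, std = ch.mean(), ch.std()
            skew = np.mean((ch - mean) ** 3) / (std ** 3 + 1e-12)
            feats.extend([mean, std, skew])
        return np.array(feats)

    # Hypothetical query image: random pixels standing in for a Wang-database image
    img = np.random.default_rng(0).integers(0, 256, size=(128, 128, 3), dtype=np.uint8)
    print(colour_moments(img).round(3))
    # Retrieval ranking is then typically done by a (weighted) distance between descriptors.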

Keywords: CBIR, category search, relevance feedback, query point movement, standard Rocchio’s formula, adaptive shifting query, feature weighting, original KNN, incremental KNN

Procedia PDF Downloads 276
1657 Enhanced Multi-Scale Feature Extraction Using a DCNN by Proposing Dynamic Soft Margin SoftMax for Face Emotion Detection

Authors: Armin Nabaei, M. Omair Ahmad, M. N. S. Swamy

Abstract:

Many facial expression and emotion recognition methods based on traditional approaches such as LDA, PCA, and EBGM have been proposed. In recent years, deep learning models have provided a unique platform by automatically extracting the features needed for the detection of facial expressions and emotions. However, deep networks require large training datasets to extract automatic features effectively. In this work, we propose an efficient emotion detection algorithm using face images when only small datasets are available for training. We design a deep network whose feature extraction capability is enhanced by utilizing several parallel modules between the input and output of the network, each focusing on the extraction of different types of coarse features with fine-grained details to break the symmetry of the produced information. In fact, we leverage long-range dependencies, addressing one of the main drawbacks of CNNs. We develop this work by introducing a Dynamic Soft-Margin SoftMax. The conventional SoftMax suffers from reaching the gold labels too soon, which drives the model to over-fitting, because it is not able to determine adequately discriminant feature vectors for some variant class labels. We reduce the risk of over-fitting by using a dynamic rather than a static shape of the input tensor in the SoftMax layer, together with a specified desired soft margin. In effect, this acts as a controller of how hard the model should work to push dissimilar embedding vectors apart. The proposed categorical loss has the objective of compacting the same class labels and separating different class labels in the normalized log domain. We penalize those predictions with high divergence from the ground-truth labels; that is, we shorten correct feature vectors and enlarge false prediction tensors, which means we assign more weight to classes that are in conjunction with each other (namely, "hard labels to learn"). By doing this, we constrain the model to generate more discriminant feature vectors for variant class labels. Finally, for the proposed optimizer, our focus is on solving the weak convergence of the Adam optimizer for non-convex problems. Our optimizer works with an alternative gradient-updating procedure using an exponentially weighted moving average function for faster convergence, and exploits a weight decay method to help drastically reduce the learning rate near optima in order to reach the dominant local minimum. We demonstrate the superiority of our proposed work by surpassing the first rank on three widely used facial expression recognition datasets: 93.30% on FER-2013, a 16% improvement compared to the first rank after 10 years; 90.73% on RAF-DB; and 100% k-fold average accuracy on the CK+ dataset. The model is shown to provide top performance compared to other networks, which require much larger training datasets.
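
For orientation only, the sketch below shows a conventional fixed additive-margin SoftMax loss, the family of losses that dynamic soft-margin variants build on; the margin, scale, logits, and labels are hypothetical, and the paper's dynamic-margin mechanism itself is not reproduced here.

    import numpy as np

    def additive_margin_softmax_loss(logits, labels, margin=0.35, scale=30.0):
        # Fixed additive-margin SoftMax: subtract a margin from the target logit before
        # normalisation so the model must separate classes by at least that margin.
        # (A dynamic soft-margin variant would adapt `margin` during training.)
        z = logits.copy()
        z[np.arange(len(labels)), labels] -= margin
        z *= scale
        z -= z.max(axis=1, keepdims=True)                  # numerical stability
        log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(labels)), labels].mean()

    # Hypothetical batch of 3 samples with logits over 7 emotion classes
    rng = np.random.default_rng(0)
    logits = rng.normal(size=(3, 7))
    labels = np.array([2, 5, 0])
    print(additive_margin_softmax_loss(logits, labels))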

Keywords: computer vision, facial expression recognition, machine learning, algorithms, deep learning, neural networks

Procedia PDF Downloads 73
1656 Sand Production Modelled with Darcy Fluid Flow Using Discrete Element Method

Authors: M. N. Nwodo, Y. P. Cheng, N. H. Minh

Abstract:

In the process of recovering oil from weak sandstone formations, the strength of the sandstone around the wellbore is weakened due to the increase of the effective stress/load from the completion activities around the cavity. The weakened and de-bonded sandstone may be eroded away by the produced fluid, which is termed sand production. It is one of the major trending subjects in the petroleum industry because of its significant negative impacts, as well as some observed positive impacts. For efficient sand management, therefore, there has been a need for a reliable study tool to understand the mechanism of sanding. One method of studying sand production is the use of the widely recognized Discrete Element Method (DEM) code Particle Flow Code (PFC3D), which represents sand as individual granular elements bonded together at contact points. However, there is limited knowledge of the particle-scale behavior of weak sandstone and of the parameters that affect sanding. This paper aims to investigate the reliability of using PFC3D and a simple Darcy flow in understanding the sand production behavior of a weak sandstone. An isotropic tri-axial test on a weak oil sandstone sample was first simulated at a confining stress of 1 MPa to calibrate and validate the parallel bond models of PFC3D, using a solid cylindrical model of 10 m height and 10 m diameter. The effect of the confining stress on the number of bond failures was studied using this cylindrical model. With the calibrated data and sample material properties obtained from the tri-axial test, simulations without and with fluid flow were carried out to check the effect of Darcy flow on bond failure using the same model geometry. The fluid flow network comprised sets of four particles connected by tetrahedral flow pipes with a central pore or flow domain. Parametric studies included the effects of confining stress and fluid pressure, as well as validating the flow rate-permeability relationship to verify Darcy's law of fluid flow. The effect of model size scaling on sanding was also investigated using a model of 4 m height and 2 m diameter. The parallel bond model successfully calibrated the sample's strength of 4.4 MPa, showing a sharp peak strength before strain softening, similar to the behavior of real cemented sandstones. There seems to be an exponentially increasing relationship for the bigger model, but a curvilinear shape for the smaller model. The presence of the Darcy flow induced tensile forces and increased the number of broken bonds. For the parametric studies, the flow rate has a linear relationship with permeability at a constant pressure head. The higher the fluid flow pressure, the higher the number of broken bonds/sanding. The DEM code PFC3D is a promising tool for studying the micromechanical behavior of cemented sandstones.
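
The linear flow rate-permeability relationship mentioned above follows directly from Darcy's law, Q = k·A·ΔP/(µ·L). The tiny check below illustrates it; all numerical values are hypothetical placeholders, not quantities from the simulations.

    # Darcy's law: Q = k * A * dP / (mu * L). At a fixed pressure head, Q is linear in k.
    # All numbers below are hypothetical placeholders, not values from the simulations.
    mu = 1.0e-3          # fluid viscosity, Pa.s (water-like)
    A = 0.5              # cross-sectional area, m^2
    L = 2.0              # flow length, m
    dP = 1.0e5           # pressure difference, Pa

    for k in (1e-13, 2e-13, 4e-13):          # permeability, m^2
        Q = k * A * dP / (mu * L)
        print(f"k = {k:.0e} m^2  ->  Q = {Q:.2e} m^3/s")   # Q doubles as k doubles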

Keywords: discrete element method, fluid flow, parametric study, sand production/bonds failure

Procedia PDF Downloads 318
1655 A CORDIC Based Design Technique for Efficient Computation of DCT

Authors: Deboraj Muchahary, Amlan Deep Borah, Abir J. Mondal, Alak Majumder

Abstract:

A discrete cosine transform (DCT) is described, and a technique to compute it using the fast Fourier transform (FFT) is developed. In this work, the DCT of a finite-length sequence is obtained by incorporating the CORDIC methodology into the radix-2 FFT algorithm. The proposed methodology is simple to comprehend and maintains a regular structure, thereby reducing computational complexity. DCTs are used extensively in the area of digital signal processing for the purpose of pattern recognition, so the efficient computation of the DCT while maintaining a transparent design flow is highly desirable.
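
A minimal sketch of computing the (unnormalized) DCT-II of a finite-length sequence through a single N-point FFT, using Makhoul's even-odd reordering, is given below; NumPy's FFT stands in for the radix-2 butterfly, and the CORDIC evaluation of the twiddle-factor rotations used in the paper is not reproduced.

    import numpy as np

    def dct2_via_fft(x):
        # DCT-II via an N-point FFT (Makhoul's reordering):
        #   v = [x0, x2, x4, ..., x5, x3, x1],  X[k] = 2*Re{ e^{-j*pi*k/(2N)} * FFT(v)[k] }
        x = np.asarray(x, dtype=float)
        N = len(x)
        v = np.concatenate([x[0::2], x[1::2][::-1]])
        V = np.fft.fft(v)
        return 2.0 * np.real(np.exp(-1j * np.pi * np.arange(N) / (2 * N)) * V)

    x = np.array([1.0, 2.0, 3.0, 4.0, 4.0, 3.0, 2.0, 1.0])
    print(dct2_via_fft(x).round(6))

    # Cross-check against a direct evaluation of the DCT-II definition
    n, k = np.meshgrid(np.arange(8), np.arange(8))
    direct = 2.0 * (np.cos(np.pi * k * (2 * n + 1) / 16.0) @ x)
    print(np.allclose(dct2_via_fft(x), direct))   # True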

Keywords: DCT, DFT, CORDIC, FFT

Procedia PDF Downloads 474