Search results for: curse of dimensionality
85 Parallel Coordinates on a Spiral Surface for Visualizing High-Dimensional Data
Authors: Chris Suma, Yingcai Xiao
Abstract:
This paper presents Parallel Coordinates on a Spiral Surface (PCoSS), a parallel-coordinate-based interactive visualization method for high-dimensional data, and a test implementation of the method. Plots generated by the test system are compared with those generated by XDAT, a software package implementing traditional parallel coordinates. Traditional parallel coordinate plots can become cluttered when the number of data points is large or when the dimensionality of the data is high. PCoSS plots display multivariate data on a 3D spiral surface and allow users to see the whole picture of high-dimensional data with less clutter. Taking advantage of the 3D display environment in PCoSS, users can further reduce clutter by zooming into an axis of interest for a closer view, by moving vantage points, or by reorienting the viewing angle to obtain a desired view of the plots.
Keywords: human computer interaction, parallel coordinates, spiral surface, visualization
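A minimal sketch of the axes-on-a-spiral idea, assuming synthetic data and matplotlib (neither the paper's test system nor XDAT is reproduced here); PCoSS renders a continuous spiral surface, while this sketch only places the parallel axes along a spiral path:

```python
import numpy as np
import matplotlib.pyplot as plt

# place one vertical axis per variable along a 3D spiral instead of a straight line,
# then draw each record as a polyline through its normalized values
rng = np.random.default_rng(0)
data = rng.random((20, 12))                  # 20 records, 12 dimensions, already in [0, 1]

n_dims = data.shape[1]
theta = np.linspace(0, 3 * np.pi, n_dims)    # spiral parameter for the axis positions
radius = 1.0 + 0.15 * theta
ax_x, ax_y = radius * np.cos(theta), radius * np.sin(theta)

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
for x, y in zip(ax_x, ax_y):                 # the parallel axes, drawn as vertical segments
    ax.plot([x, x], [y, y], [0, 1], color="gray", linewidth=0.8)
for row in data:                             # one polyline per record
    ax.plot(ax_x, ax_y, row, alpha=0.6)
plt.show()
```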
Procedia PDF Downloads 148
84 Distributed Perceptually Important Point Identification for Time Series Data Mining
Authors: Tak-Chung Fu, Ying-Kit Hung, Fu-Lai Chung
Abstract:
In the field of time series data mining, the Perceptually Important Point (PIP) identification process was first introduced in 2001. The process was originally developed for financial time series pattern matching and was later found suitable for time series dimensionality reduction and representation. Its strength lies in preserving the overall shape of the time series by identifying its salient points. With the rise of Big Data, time series data contributes a major proportion, especially data generated by sensors in Internet of Things (IoT) environments. Given the nature of PIP identification and its successful applications, it is worth exploring the opportunity to apply PIP to time series 'Big Data'. However, the performance of PIP identification is commonly considered the limiting factor when dealing with 'Big' time series data. In this paper, two distributed versions of PIP identification based on the Specialized Binary (SB) Tree are proposed. The proposed approaches remove the bottleneck of running the PIP identification process on a standalone computer, and the distributed versions achieve a clear improvement in speed.
Keywords: distributed computing, performance analysis, Perceptually Important Point identification, time series data mining
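A minimal single-machine sketch of the PIP selection criterion (perpendicular distance to the line joining already-chosen PIPs); the paper's contribution, the SB-Tree-based distributed versions, is not reproduced, and this naive greedy loop only illustrates what is being distributed:

```python
import numpy as np

def point_to_line_distance(ts, i, j, k):
    # perpendicular distance of point (k, ts[k]) from the line through (i, ts[i]) and (j, ts[j])
    num = abs((ts[j] - ts[i]) * k - (j - i) * ts[k] + j * ts[i] - ts[j] * i)
    return num / np.hypot(j - i, ts[j] - ts[i])

def pip_identification(ts, n_pips):
    """Greedily pick the n_pips points that deviate most from lines between chosen PIPs."""
    selected = [0, len(ts) - 1]
    while len(selected) < n_pips:
        best_k, best_d = None, -1.0
        for a, b in zip(selected[:-1], selected[1:]):
            for k in range(a + 1, b):
                d = point_to_line_distance(ts, a, b, k)
                if d > best_d:
                    best_k, best_d = k, d
        if best_k is None:            # every segment already exhausted
            break
        selected.append(best_k)
        selected.sort()
    return selected

ts = np.sin(np.linspace(0, 4 * np.pi, 200)) + np.random.default_rng(0).normal(0, 0.05, 200)
print(pip_identification(ts, 7))      # indices of the 7 perceptually important points
```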
Procedia PDF Downloads 435
83 Dimensionality and Superconducting Parameters of YBa2Cu3O7 Foams
Authors: Michael Koblischka, Anjela Koblischka-Veneva, XianLin Zeng, Essia Hannachi, Yassine Slimani
Abstract:
Superconducting foams of YBa2Cu3O7 (abbreviated Y-123) were produced using the infiltration growth (IG) technique from Y2BaCuO5 (Y-211) foams. The samples were investigated by scanning electron microscopy (SEM) and electrical resistivity measurements. SEM observations revealed the specific microstructure of the foam struts, with numerous tiny Y-211 particles (50-100 nm diameter) embedded in channel-like structures between the Y-123 grains. The excess conductivity of the prepared composites was analyzed using the Aslamazov-Larkin (AL) model. The investigated samples exhibited five distinct fluctuation regimes, namely the short-wave (SWF), one-dimensional (1D), two-dimensional (2D), three-dimensional (3D), and critical (CR) fluctuation regimes. The coherence length along the c-axis at zero temperature (ξc(0)), the lower and upper critical magnetic fields (Bc1 and Bc2), the critical current density (Jc), and numerous other superconducting parameters were estimated from the data. The analysis reveals that the presence of the tiny Y-211 particles alters the excess conductivity and the fluctuation behavior observed in standard YBCO samples.
Keywords: excess conductivity, foam, microstructure, superconductor YBa2Cu3Oy
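For reference, excess-conductivity analyses of this kind conventionally fit the AL power law, with the exponent identifying the fluctuation dimensionality (the regime boundaries and the paper's fitted constants are not reproduced here):

```latex
\Delta\sigma \;=\; \frac{1}{\rho(T)} - \frac{1}{\rho_n(T)} \;=\; C\,\varepsilon^{-\lambda},
\qquad
\varepsilon = \frac{T - T_c^{\,mf}}{T_c^{\,mf}},
```

where ρ_n is the extrapolated normal-state resistivity, T_c^mf the mean-field critical temperature, and the standard AL exponents are λ = 1/2 (3D), 1 (2D), and 3/2 (1D); the SWF and critical regimes carry model-dependent exponents, and ξc(0) follows from the temperature of the 2D-3D crossover.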
Procedia PDF Downloads 170
82 Optimal Feature Extraction Dimension in Finger Vein Recognition Using Kernel Principal Component Analysis
Authors: Amir Hajian, Sepehr Damavandinejadmonfared
Abstract:
In this paper, the issue of dimensionality reduction is investigated in finger vein recognition systems using kernel principal component analysis (KPCA). One aspect of KPCA is finding the most appropriate kernel function for finger vein recognition, as there are several kernel functions that can be used within PCA-based algorithms. In this paper, however, another side of PCA-based algorithms, particularly KPCA, is investigated: the dimension of the feature vector, which is of particular importance in real-world applications of such algorithms. A fixed dimension of the feature vector has to be set to reduce the dimension of the input and output data and extract the features from them; a classifier is then applied to classify the data and make the final decision. We analyze KPCA with polynomial, Gaussian, and Laplacian kernels in detail and investigate the optimal feature extraction dimension in finger vein recognition using KPCA.
Keywords: biometrics, finger vein recognition, principal component analysis (PCA), kernel principal component analysis (KPCA)
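A minimal sketch of the dimension sweep using scikit-learn, assuming synthetic feature vectors and a k-NN stand-in for the paper's classifier; scikit-learn's KernelPCA has no built-in Laplacian kernel, so that case would need kernel='precomputed' with sklearn.metrics.pairwise.laplacian_kernel:

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

# X: flattened finger vein images, y: subject labels (random stand-ins for real data)
rng = np.random.default_rng(0)
X, y = rng.normal(size=(200, 256)), rng.integers(0, 10, 200)

for kernel in ["poly", "rbf"]:                 # polynomial and Gaussian kernels
    for n_dim in [8, 16, 32, 64]:              # candidate feature extraction dimensions
        Z = KernelPCA(n_components=n_dim, kernel=kernel).fit_transform(X)
        acc = cross_val_score(KNeighborsClassifier(), Z, y, cv=5).mean()
        print(kernel, n_dim, round(acc, 3))    # pick the dimension with the best accuracy
```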
Procedia PDF Downloads 365
81 Electromyography Pattern Classification with Laplacian Eigenmaps in Human Running
Authors: Elnaz Lashgari, Emel Demircan
Abstract:
Electromyography (EMG) is one of the most important interfaces between humans and robots for rehabilitation. Decoding this signal helps to recognize muscle activation and convert it into smooth motion for the robots. Detecting each muscle's pattern during walking and running is vital for improving the quality of a patient's life. In this study, EMG data from 10 muscles in 10 subjects at 4 different speeds were analyzed. EMG signals are nonlinear and high-dimensional. To deal with this challenge, we extracted features in the time-frequency domain and used manifold learning with the Laplacian Eigenmaps algorithm to find the intrinsic features that represent the data in a low-dimensional space. We then used a Bayesian classifier to identify various patterns of EMG signals for different muscles across a range of running speeds. The best result, obtained for the vastus medialis muscle with the Bayesian classifier, was 97.87±0.69 sensitivity, 88.37±0.79 specificity, and 97.07±0.29 accuracy. The results of this study provide important insight into human movement and its application to robotics research.
Keywords: electromyography, manifold learning, ISOMAP, Laplacian Eigenmaps, locally linear embedding
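A minimal sketch of the embed-then-classify pipeline, assuming synthetic feature windows; scikit-learn's SpectralEmbedding implements Laplacian Eigenmaps but has no out-of-sample transform, so the split here is transductive rather than a faithful reproduction of the paper's protocol:

```python
import numpy as np
from sklearn.manifold import SpectralEmbedding   # Laplacian Eigenmaps
from sklearn.naive_bayes import GaussianNB       # a simple Bayesian classifier
from sklearn.model_selection import train_test_split

# X: time-frequency features per EMG window, y: muscle/speed labels (synthetic stand-ins)
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 40))
y = rng.integers(0, 4, 500)

emb = SpectralEmbedding(n_components=5, n_neighbors=15).fit_transform(X)
Xtr, Xte, ytr, yte = train_test_split(emb, y, test_size=0.3, random_state=0)
clf = GaussianNB().fit(Xtr, ytr)
print("accuracy:", clf.score(Xte, yte))
```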
Procedia PDF Downloads 364
80 Contractual Complexity and Contract Parties' Opportunistic Behavior in Construction Projects: In a Contractual Function View
Authors: Mengxia Jin, Yongqiang Chen, Wenqian Wang, Yu Wang
Abstract:
The complexity and specificity of construction projects have made opportunism a common phenomenon, and contractual governance of opportunism has been a topic of considerable ongoing research. Based on transaction cost economics (TCE), this research distinguishes control and coordination as different functions of the contract in order to investigate their complexity separately, and, in a nuanced way, examines the dimensionality of contractual control. Through an analysis of the motivation and capability behind strong-form and weak-form opportunism, the framework focuses on the relationship between the complexity of the above contractual dimensions and different types of opportunistic behavior, and attempts to verify the possible explanatory mechanism. The explanatory power of the research model is evaluated in the light of empirical evidence from questionnaires. We collect data from Chinese companies in the construction industry, and the data collection is still in progress. The findings will speak to the debate surrounding the effects of contract complexity on opportunistic behavior. This nuanced research will derive implications for research on the role of contractual mechanisms in dealing with inter-organizational opportunism and offer suggestions for curbing contract parties' opportunistic behavior in construction projects.
Keywords: contractual complexity, contractual control, contractual coordination, opportunistic behavior
Procedia PDF Downloads 384
79 The Curse of Vigilante Justice: Killings of Rape Suspects in India and Its Impact on the Discourse on Sexual Violence
Authors: Hrudaya Kamasani
Abstract:
The cultural prevalence of vigilante justice is sustained through social sanction for foregoing a judicial trial to determine guilt. Precisely because of its roots in social sanction, it has repercussions beyond being symptomatic of cultural values that condone violence. In the long term, the practice of vigilante justice as a response to incidents of sexual violence, while veiled in civic discontent over the standards of women's security in society, can adversely affect the discourse on sexual violence. To illustrate the impact that acts of vigilante justice can have in prematurely ending a budding discourse on sexual violence, the paper reviews three cases of heinous crimes committed against women in India that gained popular attention in discursive spaces. The 2012 Nirbhaya rape and murder case in Delhi demonstrates how the criminal justice system can spur a social movement, resulting in legislative changes and a discourse that challenged a wide range of socio-cultural issues of women's security and treatment. The paper compares it with two incidents of sexual violence in India that ended with the suspects being killed in the name of vigilante justice with wide social sanction: the 2019 extrajudicial killing of the suspects in the Priyanka Reddy rape and murder case in Hyderabad, and the 2015 mob lynching of an accused in a rape case in Dimapur. The paper explains why the absence of judicial trials in sexual violence cases ends any likelihood of such instances inspiring civic engagement with the discourse on sexual violence.
Keywords: sexual violence, vigilante justice, extrajudicial killing, cultural values of violence, Nirbhaya rape case, mob violence
Procedia PDF Downloads 206
78 Isothermal Crystallization Kinetics of Lauric Acid Methyl Ester from DSC Measurements
Authors: Charine Faith H. Lagrimas, Rommel N. Galvan, Rizalinda L. de Leon
Abstract:
An ongoing study of methyl laurate as a refrigerant in an HVAC system requires the crystallization kinetics of the substance. Step-wise and normal forms of the Avrami model parameters were used to describe the isothermal crystallization kinetics of methyl laurate at different temperatures from Differential Scanning Calorimetry (DSC) measurements. At 3 °C, the parameters showed that methyl laurate exhibits a secondary crystallization: the primary crystallization occurred with instantaneous nuclei and spherulitic growth, followed by a secondary instantaneous nucleation with lower-dimensional, rod-like growth. At 4 °C to 6 °C, the exotherms from DSC implied that the system was within the isokinetic range, where the kinetic behavior is the same, namely instantaneous nucleation with one-dimensional growth. The differences across the isokinetic range temperatures are the activation energies (directly proportional to T) and the nucleation rates (inversely proportional to T). From images obtained during the crystallization of methyl laurate using an optical microscope, it is confirmed that the observed nucleation and crystal growth modes are consistent with the parameters from the Avrami model.
Keywords: Avrami model, isothermal crystallization, lipids kinetics, methyl laurate
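A minimal sketch of fitting the Avrami equation to DSC-derived crystallinity data, with illustrative values standing in for the paper's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def avrami(t, k, n):
    # relative crystallinity X(t) = 1 - exp(-k t^n)
    return 1.0 - np.exp(-k * t**n)

# t: time (min), X: relative crystallinity integrated from a DSC exotherm (illustrative values)
t = np.array([0.5, 1, 2, 3, 4, 5, 6, 8])
X = np.array([0.02, 0.08, 0.30, 0.55, 0.75, 0.87, 0.94, 0.99])

(k, n), _ = curve_fit(avrami, t, X, p0=(0.1, 2.0))
print(f"rate constant k={k:.4f}, Avrami exponent n={n:.2f}")
# n near 1 suggests rod-like growth with instantaneous nuclei; n near 3-4, spherulitic growth
```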
Procedia PDF Downloads 342
77 Curating Pluralistic Futures: Leveling up for Whole-Systems Change
Authors: Daniel Schimmelpfennig
Abstract:
This paper attempts to delineate the idea of curating the leveling up for whole-systems change. Curation is the act of selecting, organizing, looking after, or presenting information from a professional point of view, through expert knowledge. The trans-paradigmatic, trans-contextual, trans-disciplinary, trans-perspective stance of trans-media futures studies hopes to enable a move from a monochrome intellectual pursuit towards breathing a higher dimensionality. Progressing to the next level to equip actors for whole-systems change is, in consideration of the commonly known symptoms of our time as well as in anticipation of future challenges, both a necessity and a desirability. Systems of collective intelligence could potentially scale regenerative, adaptive, and anticipatory capacities. How could such a curation then be enacted and implemented to initiate the process of leveling up? The suggestion here is to focus on the metasystem transition, the bio-digital fusion, namely by merging the neurosciences, the ontological design of money as our operating system, and our understanding of the billions of years of time-proven permutations in nature: biomimicry and biological metaphors like symbiogenesis. Evolutionary cybernetics accompanies the process of whole-systems change.
Keywords: bio-digital fusion, evolutionary cybernetics, metasystem transition, symbiogenesis, transmedia futures studies
Procedia PDF Downloads 156
76 Hyperspectral Image Classification Using Tree Search Algorithm
Authors: Shreya Pare, Parvin Akhter
Abstract:
Remote sensing image classification becomes a very challenging task owing to the high dimensionality of hyperspectral images. Pixel-wise classification methods fail to take into account the spatial structure of an image; therefore, to improve classification performance, spatial information can be integrated into the classification process. In this paper, a multilevel thresholding algorithm based on a modified fuzzy entropy (MFE) function is used to perform the segmentation of hyperspectral images. The fuzzy parameters of the MFE function are optimized by a new meta-heuristic based on the Tree-Search algorithm. The segmented image is then classified by a large margin distribution machine (LDM) classifier. Experimental results on a hyperspectral image dataset indicate that the proposed technique (MFE-TSA-LDM) achieves much higher classification accuracy for hyperspectral images than state-of-the-art classification techniques. The proposed algorithm provides accurate segmentation and classification maps, making it well suited to image classification with large spatial structures.
Keywords: classification, hyperspectral images, large margin distribution machine, modified fuzzy entropy function, multilevel thresholding, tree search algorithm
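A minimal sketch of the multilevel-thresholding step on a single band, with scikit-image's multi-Otsu objective standing in for the paper's modified fuzzy entropy function and Tree-Search optimizer:

```python
import numpy as np
from skimage.filters import threshold_multiotsu

# band: one spectral band of a hyperspectral cube (synthetic four-mode stand-in)
rng = np.random.default_rng(2)
band = np.concatenate([rng.normal(m, 8, 2500) for m in (40, 100, 160, 220)]).reshape(100, 100)

# multi-Otsu used here in place of the modified-fuzzy-entropy objective of the paper
thresholds = threshold_multiotsu(band, classes=4)
segments = np.digitize(band, bins=thresholds)     # label each pixel with its segment index
print("thresholds:", thresholds, "segment counts:", np.bincount(segments.ravel()))
```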
Procedia PDF Downloads 180
75 Ensemble of Deep CNN Architecture for Classifying the Source and Quality of Teff Cereal
Authors: Belayneh Matebie, Michael Melese
Abstract:
The study focuses on addressing the challenges of classifying and ensuring the quality of Eragrostis teff, a small, round grain that is the smallest of the cereal grains. Employing a traditional classification method is challenging because of the grain's small size and the similarity of its environmental characteristics. To overcome this, this study employs a machine learning approach to develop a source and quality classification system for Teff cereal. Data were collected from various production areas in the Amhara region, considering two quality levels of cereal (high and low) across eight classes. A total of 5,920 images were collected, with 740 images per class. Image enhancement techniques, including scaling, data augmentation, histogram equalization, and noise removal, were applied to preprocess the data. A convolutional neural network (CNN) was then used to extract relevant features and reduce dimensionality. The dataset was split into 80% for training and 20% for testing. Different classifiers, including FVGG16, FINCV3, QSCTC, EMQSCTC, SVM, and RF, were employed for classification, achieving accuracy rates ranging from 86.91% to 97.72%. The ensemble of FVGG16, FINCV3, and QSCTC using the max-voting approach outperforms the individual algorithms.
Keywords: Teff, ensemble learning, max-voting, CNN, SVM, RF
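A minimal sketch of hard max-voting over three fitted classifiers' label predictions; the arrays here are random stand-ins for the outputs of the paper's FVGG16, FINCV3, and QSCTC models:

```python
import numpy as np
from scipy import stats

# per-model class predictions over the same test samples (random stand-ins)
rng = np.random.default_rng(3)
preds = np.stack([rng.integers(0, 8, 100) for _ in range(3)])   # (n_models, n_samples)

# hard max-voting: the most frequent label across the models wins each sample
majority, _ = stats.mode(preds, axis=0, keepdims=False)
print(majority[:10])
```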
Procedia PDF Downloads 57
74 Supervised/Unsupervised Mahalanobis Algorithm for Improving Performance for Cyberattack Detection over Communications Networks
Authors: Radhika Ranjan Roy
Abstract:
Deployment of machine learning (ML) / deep learning (DL) algorithms for cyberattack detection in operational communications networks (wireless and/or wire-line) is being delayed because of low performance parameters (e.g., recall, precision, and f₁-score). When datasets become imbalanced, which is the usual case for communications networks, performance tends to become worse, and the complexity of reducing the dimensionality of the feature sets to increase performance is also a huge problem. Mahalanobis algorithms have been widely applied in scientific research because Mahalanobis distance metric learning is a successful framework. In this paper, we investigate the Mahalanobis binary classifier algorithm for increasing cyberattack detection performance over communications networks as a proof of concept. We also find that the high-dimensional information in intermediate features, which is under-utilized for classification tasks in ML/DL algorithms, is the main contributor to the improved state-of-the-art performance of the Mahalanobis method, even for imbalanced and sparse datasets. With no feature reduction, MD offers uniform results for precision, recall, and f₁-score on unbalanced and sparse NSL-KDD datasets.
Keywords: Mahalanobis distance, machine learning, deep learning, NSL-KDD, local intrinsic dimensionality, chi-square, positive semi-definite, area under the curve
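A minimal sketch of a Mahalanobis-distance detector, assuming synthetic traffic features and a percentile threshold; the paper's supervised/unsupervised variants and the NSL-KDD preprocessing are not reproduced:

```python
import numpy as np

# fit on the "normal" class only: flag a flow as an attack when its Mahalanobis
# distance from the normal centroid exceeds a threshold
rng = np.random.default_rng(4)
normal = rng.normal(0, 1, size=(1000, 20))           # training features (normal traffic)
test = np.vstack([rng.normal(0, 1, (50, 20)),        # normal test flows
                  rng.normal(3, 1, (50, 20))])       # attack-like test flows

mu = normal.mean(axis=0)
cov_inv = np.linalg.pinv(np.cov(normal, rowvar=False))   # pinv guards against singular covariance

def mahalanobis(x):
    d = x - mu
    return np.sqrt(np.einsum("ij,jk,ik->i", d, cov_inv, d))

threshold = np.percentile(mahalanobis(normal), 99)   # e.g., 99th percentile of training distances
print("fraction flagged as attack:", (mahalanobis(test) > threshold).mean())
```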
Procedia PDF Downloads 79
73 Ordinary Differential Equations (ODE) Reconstruction of High-Dimensional Genetic Networks through Game Theory with Application to Dissecting Tree Salt Tolerance
Authors: Libo Jiang, Huan Li, Rongling Wu
Abstract:
Ordinary differential equations (ODEs) have proven to be powerful for reconstructing precise and informative gene regulatory networks (GRNs) from dynamic gene expression data. However, joint modeling and analysis of all genes, essential for the systematic characterization of genetic interactions, are challenging due to high dimensionality and a complex pattern of genetic regulation including activation, repression, and antitermination. Here, we address these challenges by unifying variable selection and game theory through ODEs. Each gene within a GRN is co-expressed with its partner genes in the manner of a game of multiple players, each of which tends to choose an optimal strategy to maximize its "fitness" across the whole network. Based on this unifying theory, we designed and conducted a real experiment to infer salt-tolerance-related GRNs for the Euphrates poplar, a hero tree that can grow in the saline desert. The pattern and magnitude of interactions between several hub genes within these GRNs were found to determine the capacity of the Euphrates poplar to resist saline stress.
Keywords: gene regulatory network, ordinary differential equation, game theory, LASSO, saline resistance
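A minimal sketch of the variable-selection half of such an approach (per-gene sparse regression of expression derivatives, as suggested by the LASSO keyword), with synthetic trajectories; the paper's game-theoretic fitness criterion is not reproduced:

```python
import numpy as np
from sklearn.linear_model import Lasso

# expr: (n_times, n_genes) expression trajectories; infer dX_i/dt = f(all genes), f sparse linear
rng = np.random.default_rng(5)
t = np.linspace(0, 10, 50)
expr = np.cumsum(rng.normal(size=(50, 30)), axis=0)     # synthetic trajectories

dXdt = np.gradient(expr, t, axis=0)                     # numerical time derivatives

# one sparse regression per gene: nonzero coefficients are putative regulatory edges
adjacency = np.zeros((30, 30))
for i in range(30):
    model = Lasso(alpha=0.1).fit(expr, dXdt[:, i])
    adjacency[i] = model.coef_
print("edges recovered:", int((adjacency != 0).sum()))
```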
Procedia PDF Downloads 640
72 A Portable Cognitive Tool for Engagement Level and Activity Identification
Authors: Terry Teo, Sun Woh Lye, Yufei Li, Zainuddin Zakaria
Abstract:
Wearable devices such as electroencephalography (EEG) headsets hold immense potential for the monitoring and assessment of a person's task engagement, especially at remote or online sites, and research into their use in measuring an individual's cognitive state during task activities is therefore expected to increase. Despite the growing body of EEG research into a person's brain functioning activities, key challenges remain in adopting EEG for real-time operations. These include limited portability, long preparation time, high channel dimensionality, intrusiveness, and the level of accuracy in acquiring neurological data. This paper proposes an approach using 4-6 EEG channels to determine the cognitive states of a subject undertaking a set of passive and active monitoring tasks, with air traffic controller (ATC) dynamic tasks used as a proxy. The work found that, when using the channel reduction and identifier algorithm, good trend adherence of 89.1% can be obtained between a commercially available 14-channel Emotiv EPOC+ BCI EEG headset and a carefully selected reduced set of 4-6 channels. The approach can also identify different levels of engagement activity, ranging from general monitoring to ad hoc and repeated active monitoring activities involving information search, extraction, and memory.
Keywords: assessment, neurophysiology, monitoring, EEG
Procedia PDF Downloads 76
71 Gait Biometric for Person Re-Identification
Authors: Lavanya Srinivasan
Abstract:
Biometric identification identifies unique features of a person, such as fingerprints, iris, ear shape, and voice, and typically needs the subject's permission and physical contact. The gait biometric instead identifies a person's unique gait by extracting motion features; its main advantage is that it can identify a person at a distance, without any physical contact. In this work, the gait biometric is used for person re-identification. A person walking naturally is compared with the same person walking with a bag, a coat, and a case, recorded using long-wave infrared, short-wave infrared, medium-wave infrared, and visible cameras, in both rural and urban environments. The pre-processing pipeline includes person detection using YOLO, background subtraction, silhouette extraction, and synthesis of the Gait Entropy Image by averaging the silhouettes. The motion features are extracted from the Gait Entropy Image, reduced in dimensionality by principal component analysis, and recognised using different classifiers. Comparative results across the classifiers show that linear discriminant analysis outperforms the others, with 95.8% for visible light on the rural dataset and 94.8% for long-wave infrared on the urban dataset.
Keywords: biometric, gait, silhouettes, YOLO
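A minimal sketch of the average-silhouettes, reduce, classify pipeline on synthetic masks; plain frame averaging as done here yields a gait energy image, standing in for the paper's entropy-based variant, with PCA and LDA as in the paper:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# silhouettes: (n_sequences, n_frames, H, W) binary masks; labels: person IDs (stand-ins)
rng = np.random.default_rng(6)
silhouettes = rng.integers(0, 2, size=(120, 30, 64, 44)).astype(float)
labels = np.repeat(np.arange(12), 10)

gei = silhouettes.mean(axis=1)                # average the frames of each walking sequence
X = gei.reshape(len(gei), -1)                 # flatten each averaged image for the classifiers

pipe = make_pipeline(PCA(n_components=50), LinearDiscriminantAnalysis())
print("CV accuracy:", cross_val_score(pipe, X, labels, cv=5).mean())
```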
Procedia PDF Downloads 173
70 Data Science-Based Key Factor Analysis and Risk Prediction of Diabetes
Authors: Fei Gao, Rodolfo C. Raga Jr.
Abstract:
This research will ascertain the major risk factors for diabetes and design a predictive model for risk assessment. The project aims to improve early detection and management of diabetes by utilizing data science techniques, which may improve patient outcomes and healthcare efficiency. The phase relation values of each attribute were used to analyze and choose the attributes that might influence the examinee's survival probability, with the Diabetes Health Indicators dataset from Kaggle serving as the research data. We compare and evaluate eight machine learning algorithms. Our investigation begins with comprehensive data preprocessing, including feature engineering and dimensionality reduction, aimed at enhancing data quality. The dataset, comprising health indicators and medical data, serves as a foundation for training and testing these algorithms. A rigorous cross-validation process is applied, and we assess performance using five key metrics: accuracy, precision, recall, F1-score, and area under the receiver operating characteristic curve (AUC-ROC). After analyzing the data characteristics, we investigate their impact on the likelihood of diabetes and develop corresponding risk indicators.
Keywords: diabetes, risk factors, predictive model, risk assessment, data science techniques, early detection, data analysis, Kaggle
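A minimal sketch of cross-validating one candidate model against the paper's five metrics, with a synthetic imbalanced dataset standing in for the Kaggle data:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate

# synthetic stand-in for the Diabetes Health Indicators dataset (85/15 class imbalance)
X, y = make_classification(n_samples=2000, n_features=21, weights=[0.85], random_state=0)

scoring = ["accuracy", "precision", "recall", "f1", "roc_auc"]   # the paper's five metrics
scores = cross_validate(RandomForestClassifier(random_state=0), X, y, cv=5, scoring=scoring)
for m in scoring:
    print(m, round(scores[f"test_{m}"].mean(), 3))
```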
Procedia PDF Downloads 77
69 Culture and Mental Health in Nigeria: A Qualitative Study of Berom, Hausa, Yoruba and Igbo Cultural Beliefs
Authors: Dung Jidong, Rachel Tribe, Poul Rohlerder, Aneta Tunariu
Abstract:
Cultural understandings of mental health problems are frequently overshadowed by Western conceptualizations, and research on culture and mental health in the Nigerian context seems to be lacking. This study examined the linguistic understandings and cultural beliefs that have implications for mental health among the Berom, Hausa, Yoruba, and Igbo people of Nigeria. A purposive sample of 53 participants underwent semi-structured interviews that lasted approximately 55 minutes each. Of the N=53 participants, n=26 were psychology-aligned practitioners and n=27 'laypersons'. Participants were recruited from four states in Nigeria: Plateau, Kaduna, Ekiti, and Enugu. All participants self-identified as members of their ethnic groups, spoke and understood their native languages and cultural beliefs, and were domiciled within their ethnic communities. Thematic analysis using social constructionism from a critical-realist position was employed to explore the participants' beliefs about mental health and the clash between Western-trained practitioners' views and the cultural beliefs of the 'laypersons'. The analysis found three main themes that re-emerged across the four ethnic samples: (i) beliefs about mental health problems as a spiritual curse, (ii) traditional and religious healing being used more often than Western mental health care, and (iii) low levels of mental health awareness. Nigerian traditional and religious healing was also revealed to be helpful, as the practice gives prominence to native languages and to religious and cultural values; however, participants described the role of 'false' traditional or religious healers in communities as potentially harmful. Finally, given the current lack of knowledge about mental health problems, awareness creation and re-orientation may be beneficial for both rural and urban Nigerian communities.
Keywords: beliefs, cultures, mental health, languages, religions, values
Procedia PDF Downloads 285
68 Using Closed Frequent Itemsets for Hierarchical Document Clustering
Authors: Cheng-Jhe Lee, Chiun-Chieh Hsu
Abstract:
Due to the rapid development of the Internet and the increased availability of digital documents, the excess of information on the Internet has led to an information overload problem. To address this problem and enable effective information retrieval, document clustering in text mining has become a popular research topic. Clustering is the unsupervised classification of data items into groups without the need for training data. Many conventional document clustering methods perform inefficiently on large document collections because they were originally designed for relational databases; they are therefore impractical in real-world document clustering, which requires special handling for high dimensionality and high volume. We build on FIHC (Frequent Itemset-based Hierarchical Clustering), a hierarchical clustering method developed for document clustering whose intuition is that each cluster shares some common words. FIHC uses such words to cluster documents and build a hierarchical topic tree. In this paper, we combine the FIHC algorithm with an ontology to address the semantic problem and mine the meaning behind the words in documents. Furthermore, we use closed frequent itemsets instead of all frequent itemsets, which increases efficiency and scalability. The experimental results show that our method is more accurate than well-known document clustering algorithms.
Keywords: FIHC, document clustering, ontology, closed frequent itemset
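A minimal sketch of mining closed frequent itemsets from word-set "transactions" with mlxtend, assuming a toy corpus; FIHC's cluster construction and topic tree are not reproduced, only the itemset-mining step:

```python
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori

# each "transaction" is the set of terms in one document (toy corpus)
docs = [{"data", "mining", "cluster"}, {"data", "mining", "text"},
        {"data", "cluster"}, {"text", "cluster", "mining"}]

te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(docs).transform(docs), columns=te.columns_)
freq = apriori(onehot, min_support=0.5, use_colnames=True)

# an itemset is closed if no proper superset has the same support
def is_closed(row):
    return not any(row["support"] == other["support"] and row["itemsets"] < other["itemsets"]
                   for _, other in freq.iterrows())

closed = freq[freq.apply(is_closed, axis=1)]
print(closed)
```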
Procedia PDF Downloads 399
67 Dutch Disease and Industrial Development: An Investigation of the Determinants of Manufacturing Sector Performance in Nigeria
Authors: Kayode Ilesanmi Ebenezer Bowale, Dominic Azuh, Busayo Aderounmu, Alfred Ilesanmi
Abstract:
There has been a debate among scholars and policymakers about the effects of oil exploration and production on industrial development. In Nigeria, many reforms resulted in an increase in crude oil production in the recent past, and there is controversy over the importance of oil production to the development of the manufacturing sector: some scholars claim that oil has been a blessing to the development of the manufacturing sector, while others regard it as a curse. The objective of the study is to determine whether empirical analysis supports the presence of Dutch Disease and de-industrialisation in the Nigerian manufacturing sector between 2019 and 2022. The study employed data sourced from the World Development Indicators, the Nigeria Bureau of Statistics, and the Central Bank of Nigeria Statistical Bulletin on manufactured exports and on manufacturing, agricultural, and service employment, in line with the theory of Dutch Disease. Unit root tests were used to establish the level of stationarity, the Engle-Granger cointegration test to check the long-run relationship, and an Autoregressive Distributed Lag (ARDL) bounds test was also applied. A Vector Error Correction Model was estimated to determine the speed of adjustment of manufactured exports and the resource movement effect. The results showed that the Nigerian manufacturing industry suffered from both direct and indirect de-industrialisation over the period. The findings also revealed resource movement, as labour moved away from the manufacturing sector to both the oil sector and the services sector. The study concluded that Dutch Disease was present in the manufacturing industry and that the problem of de-industrialisation led to the crowding out of manufacturing output. The study recommends that efforts be made to diversify the Nigerian economy and that a conducive business environment be provided to encourage greater private sector involvement in the agriculture and manufacturing sectors of the economy.
Keywords: Dutch disease, resource movement, manufacturing sector performance, Nigeria
Procedia PDF Downloads 81
66 The Curse of Oil: Unpacking the Challenges to Food Security in Nigeria's Niger Delta
Authors: Abosede Omowumi Babatunde
Abstract:
While the Niger Delta region satisfies the global thirst for oil, its inhabitants have not been adequately compensated for the use of their ancestral land. Moreover, the ruthless exploitation and destruction by oil multinationals of the natural environment upon which the inhabitants of the Niger Delta depend for their livelihood and sustenance pose major threats to food security in the region and, by implication, in Nigeria in general, Africa, and the world, given the present global emphasis on food security. This paper examines the effect of oil exploitation on household food security, identifies key gaps in the measures put in place to address the changes to livelihoods and food security, and explores what should be done to improve local people's access to sufficient, safe, and culturally acceptable food in the Niger Delta. Data are derived from interviews with key informants and Focus Group Discussions (FGDs) conducted with respondents in local communities in the Niger Delta states of Delta, Bayelsa, and Rivers, as well as from relevant extant studies. The threat to food security is one important aspect of the human security challenges in the Niger Delta that has received limited scholarly attention. In addition, successive Nigerian governments have not meaningfully addressed the negative impacts of oil-induced environmental degradation on traditional livelihoods, despite the significant linkages between environmental sustainability, livelihood security, and food security. The destructive impact of oil pollution on farmlands, crops, economic trees, creeks, lakes, and fishing equipment is so devastating that the people can no longer engage in productive farming and fishing. Also important is the limited access to modern agricultural methods, as fishing and subsistence farming are done mostly with crude implements and traditional methods. It is imperative and urgent to take stock of the negative implications of the activities of oil multinationals for environmental and livelihood sustainability and household food security in the Niger Delta.
Keywords: challenges, food security, Nigeria's Niger Delta, oil
Procedia PDF Downloads 251
65 Efficient Principal Components Estimation of Large Factor Models
Authors: Rachida Ouysse
Abstract:
This paper proposes a constrained principal components (CnPC) estimator for efficient estimation of large-dimensional factor models when errors are cross-sectionally correlated and the number of cross-sections (N) may be larger than the number of observations (T). Although the principal components (PC) method is consistent for any path of the panel dimensions, it is inefficient because the errors are treated as homoskedastic and uncorrelated. The new CnPC exploits the assumption of bounded cross-sectional dependence, which defines Chamberlain and Rothschild's (1983) approximate factor structure, as an explicit constraint, and solves a constrained PC problem. The CnPC method is computationally equivalent to the PC method applied to a regularized form of the data covariance matrix. Unlike maximum likelihood type methods, the CnPC method does not require inverting a large covariance matrix and is thus valid for panels with N ≥ T. The paper derives a convergence rate and an asymptotic normality result for the CnPC estimators of the common factors. We provide feasible estimators and show in a simulation study that they are more accurate than the PC estimator, especially for panels with N larger than T, and than the generalized PC type estimators, especially for panels with N almost as large as T.
Keywords: high dimensionality, unknown factors, principal components, cross-sectional correlation, shrinkage regression, regularization, pseudo-out-of-sample forecasting
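A minimal sketch of PCA on a regularized covariance matrix for a simulated N > T panel; the shrinkage target and weight here are assumptions for illustration, not the paper's CnPC construction:

```python
import numpy as np

# estimate r common factors by PCA on a shrinkage-regularized covariance matrix
rng = np.random.default_rng(7)
T, N, r = 100, 200, 3                        # N > T is allowed
F = rng.normal(size=(T, r))                  # latent factors
L = rng.normal(size=(N, r))                  # loadings
X = F @ L.T + rng.normal(scale=2.0, size=(T, N))

S = np.cov(X, rowvar=False)                  # N x N sample covariance (rank-deficient, N > T)
S_reg = 0.9 * S + 0.1 * np.trace(S) / N * np.eye(N)   # shrink toward a scaled identity

eigval, eigvec = np.linalg.eigh(S_reg)
loadings_hat = eigvec[:, -r:]                # top-r eigenvectors as estimated loadings
factors_hat = X @ loadings_hat               # factor estimates, identified up to rotation
print(factors_hat.shape)
```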
Procedia PDF Downloads 150
64 Enhancement Method of Network Traffic Anomaly Detection Model Based on Adversarial Training With Category Tags
Authors: Zhang Shuqi, Liu Dan
Abstract:
To address problems in intelligent network anomaly traffic detection models, such as low detection accuracy caused by a lack of training samples and poor performance in small-sample attack detection, we propose a classification model enhancement method, F-ACGAN (Flow Auxiliary Classifier Generative Adversarial Network), which introduces a generative adversarial network and adversarial training. Generating adversarial data with category labels can enhance the training effect and improve classification accuracy and model robustness. F-ACGAN consists of three steps. First, feature preprocessing, which includes data type conversion, dimensionality reduction, and normalization. Second, a generative adversarial network model with feature learning ability is designed, and the sample generation quality of the model is improved through adversarial iterations between the generator and the discriminator. Third, an adversarial disturbance factor along the gradient direction of the classification model is added to improve the diversity and adversarial nature of the generated data and to encourage the model to learn adversarial classification features. An experiment constructing a classification model on the UNSW-NB15 dataset shows that, with the F-ACGAN enhancement of the basic model, classification accuracy improved by 8.09% and the F1 score improved by 6.94%.
Keywords: data imbalance, GAN, ACGAN, anomaly detection, adversarial training, data augmentation
Procedia PDF Downloads 106
63 Implementation of a Multimodal Biometrics Recognition System with Combined Palm Print and Iris Features
Authors: Rabab M. Ramadan, Elaraby A. Elgallad
Abstract:
In extensive application, the performance of unimodal biometric systems has to contend with a diversity of problems such as signal and background noise, distortion, and environmental differences; multimodal biometric systems have therefore been proposed to solve these problems. This paper introduces a bimodal biometric recognition system based on features extracted from the human palm print and iris. The palm print biometric is a fairly new, evolving technology used to identify people by their palm features, while the iris is a strong competitor, together with face and fingerprints, for presence in multimodal recognition systems. In this research, we introduce an algorithm for combining the palm- and iris-extracted features using a texture-based descriptor, the Scale Invariant Feature Transform (SIFT). Since the feature sets are non-homogeneous, as features of different biometric modalities are used, these features are concatenated to form a single feature vector. Particle swarm optimization (PSO) is used as a feature selection technique to reduce the dimensionality of the feature vector. The proposed algorithm will be applied to the Indian Institute of Technology Delhi (IITD) database, and its performance will be compared with various iris recognition algorithms found in the literature.
Keywords: iris recognition, particle swarm optimization, feature extraction, feature selection, palm print, Scale Invariant Feature Transform (SIFT)
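A minimal sketch of the extract-and-concatenate step with OpenCV's SIFT, using random arrays as stand-ins for IITD palm and iris images; mean-pooling the descriptors into a fixed-length vector is an assumption here, and the PSO selection stage is not reproduced:

```python
import cv2
import numpy as np

def sift_feature_vector(gray, n_keypoints=64):
    """Detect SIFT keypoints and mean-pool the 128-D descriptors into one vector."""
    sift = cv2.SIFT_create(nfeatures=n_keypoints)
    _, desc = sift.detectAndCompute(gray, None)
    if desc is None:                          # no keypoints found
        return np.zeros(128, dtype=np.float32)
    return desc.mean(axis=0)

# random stand-ins for a palm print image and an iris image
rng = np.random.default_rng(10)
palm = rng.integers(0, 256, (150, 150), dtype=np.uint8)
iris = rng.integers(0, 256, (150, 150), dtype=np.uint8)

fused = np.concatenate([sift_feature_vector(palm), sift_feature_vector(iris)])
print(fused.shape)    # one 256-D fused vector, later pruned by PSO-based feature selection
```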
Procedia PDF Downloads 235
62 Suspended Nickel Oxide Nano-Beam and Its Heterostructure Device for Gas Sensing
Authors: Kusuma Urs M. B., Navakant Bhat, Vinayak B. Kamble
Abstract:
Metal oxide semiconductors (MOS) are known to be excellent candidates for solid-state gas sensor devices. However, in spite of their high sensitivities, their high operating temperatures and lack of selectivity are big concerns limiting their practical applications. A lot of research has so far been devoted, often empirically, to enhancing their sensitivity and selectivity. Some of the promising routes to achieve this are reducing dimensionality and forming heterostructures. These heterostructures offer improved sensitivity and selectivity, even at relatively low operating temperatures compared to bare metal oxides: a combination of n-type and p-type metal oxides leads to the formation of a p-n junction at the interface, resulting in diffusion of the carriers across the barrier along with surface adsorption. In order to achieve this and to study the sensing mechanism, we have designed and lithographically fabricated a suspended nanobeam of NiO, a p-type semiconductor. Its response has been studied for various gases and is found to be selective towards hydrogen gas at room temperature. Further, the nanobeam has been radially coated with TiO₂ shells of varying thicknesses in order to study the effect of the radial p-n junction thus formed. Subsequently, efforts have been made to study the effect of shell thickness on the space charge region and to shed some light on the basic mechanism involved in the gas sensing of MOS sensors.
Keywords: gas sensing, heterostructure, metal oxide semiconductor, space charge region
Procedia PDF Downloads 132
61 From Electroencephalogram to Epileptic Seizures Detection by Using Artificial Neural Networks
Authors: Gaetano Zazzaro, Angelo Martone, Roberto V. Montaquila, Luigi Pavone
Abstract:
Seizures are the main factor affecting the quality of life of epileptic patients. The diagnosis of epilepsy, and hence the identification of the epileptogenic zone, is commonly made through continuous electroencephalogram (EEG) monitoring. Seizure identification on EEG signals is done manually by epileptologists, and this process is usually very long and error-prone. The aim of this paper is to describe an automated method able to detect seizures in EEG signals, using the knowledge discovery in databases process and data mining methods and algorithms, which can support physicians during seizure detection. Our detection method is based on an artificial neural network classifier, trained by applying the multilayer perceptron algorithm and by using a software application, called Training Builder, that has been developed for the massive extraction of features from EEG signals. This tool covers all the data preparation steps, ranging from signal processing to data analysis techniques, including the sliding-window paradigm, dimensionality reduction algorithms, information theory, and feature selection measures. The final model shows excellent performance, reaching an accuracy of over 99% in tests on data of a single patient retrieved from a publicly available EEG dataset.
Keywords: artificial neural network, data mining, electroencephalogram, epilepsy, feature extraction, seizure detection, signal processing
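A minimal sketch of sliding-window feature extraction followed by an MLP classifier, on a synthetic single-channel signal; the window width, the five statistical features, and the amplitude-based "seizure" segment are illustrative assumptions, not the Training Builder feature set:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def window_features(signal, width=256, step=128):
    """Slide a window over one EEG channel and extract simple statistical features."""
    feats = []
    for start in range(0, len(signal) - width + 1, step):
        w = signal[start:start + width]
        feats.append([w.mean(), w.std(), w.min(), w.max(),
                      np.abs(np.diff(w)).mean()])     # line length, a common EEG feature
    return np.array(feats)

# synthetic one-channel EEG: background noise plus a high-amplitude "seizure" segment
rng = np.random.default_rng(8)
eeg = rng.normal(0, 1, 20000)
eeg[8000:12000] *= 6
X = window_features(eeg)
y = np.array([1 if 8000 <= s < 12000 else 0
              for s in range(0, len(eeg) - 256 + 1, 128)])       # window-level labels

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0).fit(Xtr, ytr)
print("accuracy:", clf.score(Xte, yte))
```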
Procedia PDF Downloads 189
60 General Architecture for Automation of Machine Learning Practices
Authors: U. Borasi, Amit Kr. Jain, Rakesh, Piyush Jain
Abstract:
Data collection, data preparation, model training, model evaluation, and deployment are all processes in a typical machine learning workflow. Training data needs to be gathered and organised; this often entails collecting a sizable dataset and cleaning it to remove or correct inaccurate or missing information. Preparing the data for use in the machine learning model requires pre-processing it after it has been acquired, with actions like scaling or normalising the data, handling outliers, selecting appropriate features, and reducing dimensionality. This pre-processed data is then used to train a model with some machine learning algorithm. After the model has been trained, it needs to be assessed on a test dataset with metrics like accuracy, precision, and recall. Every time a new model is built, both data pre-processing and model training, two crucial processes in the machine learning (ML) workflow, must be carried out. There are thus various machine learning algorithms that can be employed with every single approach to data pre-processing, generating a large set of combinations to choose from. For example: for every method of handling missing values (dropping records, replacing with the mean, etc.), for every scaling technique, and for every combination of features selected, a different algorithm can be used. As a result, in order to get the optimum outcome, these tasks are frequently repeated in different combinations. This paper suggests a simple architecture for organizing this large combination set of pre-processing steps and algorithms into an automated workflow, which simplifies the task of exploring all possibilities, as sketched below.
Keywords: machine learning, automation, AutoML, architecture, operator pool, configuration, scheduler
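A minimal sketch of such an automated combination sweep, assuming scikit-learn pipelines and operator pools of imputers, scalers, and models; the paper's scheduler and configuration layers are not reproduced, only the enumeration-and-evaluate core:

```python
from itertools import product

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# operator pools: every imputer x scaler x model combination becomes one pipeline
imputers = [SimpleImputer(strategy="mean"), SimpleImputer(strategy="median")]
scalers = [StandardScaler(), MinMaxScaler()]
models = [LogisticRegression(max_iter=5000), RandomForestClassifier(random_state=0)]

results = []
for imp, sc, mdl in product(imputers, scalers, models):
    pipe = Pipeline([("impute", imp), ("scale", sc), ("model", mdl)])
    score = cross_val_score(pipe, X, y, cv=5).mean()
    results.append((score, imp.strategy, type(sc).__name__, type(mdl).__name__))

for score, *cfg in sorted(results, reverse=True):    # rank every configuration
    print(round(score, 4), cfg)
```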
Procedia PDF Downloads 58
59 Comparative Study and Parallel Implementation of Stochastic Models for Pricing of European Options Portfolios using Monte Carlo Methods
Authors: Vinayak Bassi, Rajpreet Singh
Abstract:
Over the years, with the emergence of sophisticated computers and algorithms, finance has been quantified using computational prowess, and asset valuation has become one of the key components of quantitative finance. In fact, it is one of the embryonic steps in determining the risk related to a portfolio, the main goal of quantitative finance. This study draws a comparison between the valuation outputs generated by two stochastic dynamic models, namely the Black-Scholes model and Dupire's bi-dimensional (local volatility) model. Both models are formulated to compute the valuation function for a portfolio of European options using Monte Carlo simulation methods. Although Monte Carlo algorithms have a slower convergence rate than calculus-based simulation techniques (like FDM), they work quite effectively on high-dimensional dynamic models. A fidelity gap is analyzed between the static (historical) and stochastic inputs for a sample portfolio of underlying assets. In order to enhance the performance efficiency of the model, the study emphasizes the use of variance reduction methods and customized random number generators to implement parallelization. An attempt has been made to further implement Dupire's model on a GPU to achieve higher computational performance. Furthermore, ideas are discussed around performance enhancement and bottleneck identification related to the implementation of options-pricing models on GPUs.
Keywords: Monte Carlo, stochastic models, computational finance, parallel programming, scientific computing
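A minimal sketch of Monte Carlo pricing of one European call under Black-Scholes dynamics, with antithetic variates as the variance reduction method; the parameters are illustrative and Dupire's local-volatility model is not reproduced:

```python
import numpy as np

def mc_european_call(S0, K, T, r, sigma, n_paths=200_000, seed=0):
    """Price a European call under GBM by Monte Carlo with antithetic variates."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths // 2)
    z = np.concatenate([z, -z])                  # antithetic pairs reduce variance
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    payoff = np.maximum(ST - K, 0.0)
    return np.exp(-r * T) * payoff.mean()        # discounted expected payoff

# one instrument from a hypothetical portfolio; the portfolio value is the sum of such
# prices over all options, which parallelizes trivially across instruments or paths
print(mc_european_call(S0=100, K=105, T=1.0, r=0.05, sigma=0.2))
```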
Procedia PDF Downloads 165
58 Dancing in Bullets and in Brokenness: Metaphor in Terracotta on Canvas
Authors: Jonathan Okewu
Abstract:
Socio-economic occurrences and developments in any society are usually a matter of concern, especially when they are jeopardised as a result of insecurity of all sorts. There are numerous channels through which such issues are brought to the fore for attention; among these channels are the art media, and more precisely the ceramic art media. Renowned ceramic artists in Nigeria have taken to this medium of expression (clay) to comment on pressing issues as they affect society. This study takes advantage of a unique form of ceramic press moulding termed the clay palm press. This uncommon production technique has been developed in this study to interrogate socio-economic insecurity in the form of a ceramic artwork titled "Dancing in Bullets and in Brokenness". The work was made possible through a ceramic medium that has been termed "terracotta on canvas". Extensive studio practice was carried out to generate the terracotta forms for this study. Findings from this study indicate that the medium and mode of execution of the artwork show a unique side of working with clay that departs from the tactile three-dimensionality of established ceramic practice. Additionally, the process of the clay palm press indicates that it could be therapeutic for medical conditions of the palm muscles, because the process tightens and develops the muscles of the palm further. Dancing in Bullets and in Brokenness, as portrayed through the medium of terracotta on canvas, shows that social and security challenges are not a limiting factor to a resolute Nigerian; despite it all, the strong will of Nigerians keeps persisting and overcoming challenges.
Keywords: canvas, dancing in bullets and in brokenness, metaphor, terracotta, palm press
Procedia PDF Downloads 130
57 A Review of Effective Gene Selection Methods for Cancer Classification Using Microarray Gene Expression Profile
Authors: Hala Alshamlan, Ghada Badr, Yousef Alohali
Abstract:
Cancer is one of the most dreadful diseases, causing a considerable death rate in humans. DNA microarray-based gene expression profiling has emerged as an efficient technique for cancer classification, as well as for diagnosis, prognosis, and treatment purposes, and in recent years the DNA microarray technique has gained more attention in both scientific and industrial fields. It is important to determine the informative genes that cause cancer in order to improve early cancer diagnosis and to give effective chemotherapy treatment. To gain deep insight into the cancer classification problem, it is necessary to take a closer look at the proposed gene selection methods; we believe they should be an integral preprocessing step for cancer classification. Furthermore, finding an accurate gene selection method is a very significant issue in the cancer classification area, because it reduces the dimensionality of the microarray dataset and selects informative genes. In this paper, we classify and review the state-of-the-art gene selection methods, evaluating the performance of each approach based on its classification accuracy and the number of informative genes it selects. In our evaluation, we use four benchmark microarray datasets for cancer diagnosis (leukemia, colon, lung, and prostate). In addition, we compare the performance of the gene selection methods to identify an effective method that can select a small set of marker genes and ensure high cancer classification accuracy. To the best of our knowledge, this is the first attempt to compare gene selection approaches for cancer classification using microarray gene expression profiles.
Keywords: gene selection, feature selection, cancer classification, microarray, gene expression profile
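A minimal sketch of one family the review covers, a filter-type gene selection method, on a random matrix with leukemia-like dimensions (72 samples, 7,129 genes); placing selection inside the pipeline re-fits it within each cross-validation fold, which avoids selection bias:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# synthetic stand-in for a microarray expression matrix and binary tumor labels
rng = np.random.default_rng(9)
X = rng.normal(size=(72, 7129))
y = rng.integers(0, 2, 72)

# filter method: keep the k genes with the highest ANOVA F-score, then classify
pipe = make_pipeline(SelectKBest(f_classif, k=50), SVC(kernel="linear"))
print("CV accuracy:", cross_val_score(pipe, X, y, cv=5).mean())
```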
Procedia PDF Downloads 455
56 Shear Stress and Effective Structural Stress Fields of an Atherosclerotic Coronary Artery
Authors: Alireza Gholipour, Mergen H. Ghayesh, Anthony Zander, Stephen J. Nicholls, Peter J. Psaltis
Abstract:
A three-dimensional numerical model of an atherosclerotic coronary artery is developed for the determination of high-risk situations and hence for heart attack prediction. Employing the finite element method (FEM) in ANSYS, a fluid-structure interaction (FSI) model of the artery is constructed to determine the shear stress distribution as well as the von Mises stress field. A flexible model of an atherosclerotic coronary artery conveying pulsatile blood is developed, incorporating three-dimensionality; the artery's tapered shape, via a linear function for the artery wall distribution; motion of the artery; blood viscosity, via non-Newtonian flow theory; blood pulsation, via a one-period heartbeat; hyperelasticity, via the Mooney-Rivlin model; viscoelasticity, via the Prony series shear relaxation scheme; and micro-calcification inside the plaque. The material properties used to relate the stress field to the strain field have been extracted from clinical data from previous in-vitro studies. The determined stress fields have the potential to be used as a predictive tool for plaque rupture and dissection. The results show that stress concentration due to micro-calcification increases the von Mises stress significantly, so the chance of developing a crack inside the plaque increases. Moreover, the blood pulsation varies the stress distribution substantially in some cases.
Keywords: atherosclerosis, fluid-structure interaction, coronary arteries, pulsatile flow
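For reference, the two constitutive ingredients named in the abstract take the following standard forms as commonly implemented in FEM packages such as ANSYS; the specific constants come from the clinical data the authors cite, not from anything reproduced here:

```latex
W = C_{10}\,(\bar{I}_1 - 3) + C_{01}\,(\bar{I}_2 - 3) + \frac{1}{d}\,(J - 1)^2,
\qquad
G(t) = G_0\Big[\,1 - \sum_{i} g_i \big(1 - e^{-t/\tau_i}\big)\Big],
```

where W is the two-parameter Mooney-Rivlin strain energy with deviatoric invariants \bar{I}_1, \bar{I}_2 and volume ratio J, G(t) is the Prony-series shear relaxation modulus, and (C_{10}, C_{01}, d, g_i, \tau_i) are the fitted material constants.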
Procedia PDF Downloads 174