Search results for: wind tunnel experiments
125 Computational Identification of Bacterial Communities
Authors: Eleftheria Tzamali, Panayiota Poirazi, Ioannis G. Tollis, Martin Reczko
Abstract:
Stable bacterial polymorphism on a single limiting resource may appear if metabolic interactions that allow the exchange of essential nutrients take place between the evolved strains [8]. In an attempt to predict the possible outcome of long-running evolution experiments, a network based on the metabolic capabilities of homogeneous populations of every single-gene knockout strain (nodes) of the bacterium E. coli is reconstructed. Potential metabolic interactions (edges) are allowed only between strains of different metabolic capabilities. Bacterial communities are determined by finding cliques in this network. Growth of the emerging hypothetical bacterial communities is simulated by extending the metabolic flux balance analysis model of Varma et al. [2] to embody heterogeneous cell population growth in a mutual environment. Results from aerobic growth on 10 different carbon sources are presented. Upper bounds on the diversity that can emerge from single-cloned populations of E. coli are determined, including the number of strains that metabolically differ from most other strains (highly connected nodes), the maximum clique size, and the number of all possible communities. Certain single-gene deletions are identified that consistently participate in the hypothetical bacterial communities under most environmental conditions, implying a pattern of growth-condition-invariant strains with similar metabolic effects. Moreover, evaluation of all the hypothetical bacterial communities under growth on pyruvate reveals heterogeneous populations that can exhibit superior growth performance compared with the homogeneous wild-type population.
Keywords: Bacterial polymorphism, clique identification, dynamic FBA, evolution, metabolic interactions.
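Since the community growth simulation rests on flux balance analysis, a minimal sketch may help to fix ideas: FBA solves a linear program that maximizes a biomass-proxy flux subject to steady-state mass balance S·v = 0 and flux bounds. The toy stoichiometric matrix and bounds below are hypothetical, not the genome-scale E. coli model used by the authors.

```python
import numpy as np
from scipy.optimize import linprog

# Toy flux balance analysis (FBA): maximize the biomass flux v4
# subject to steady-state mass balance S @ v = 0 and flux bounds.
# Hypothetical stoichiometry, not the genome-scale E. coli model:
S = np.array([
    [1, -1,  0, -1],   # metabolite A: uptake -> A, A -> B, A -> biomass
    [0,  1, -1,  0],   # metabolite B: A -> B, B -> secretion
])
c = np.array([0, 0, 0, -1.0])            # linprog minimizes, so negate biomass
bounds = [(0, 10), (0, 10), (0, 10), (0, None)]
res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds)
print("max biomass-proxy flux:", -res.fun)
```

Extending this to a community, as the abstract describes, amounts to coupling several such programs through shared exchange fluxes in a mutual environment and integrating growth over time.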
124 Machine Learning Facing Behavioral Noise Problem in an Imbalanced Data Using One Side Behavioral Noise Reduction: Application to a Fraud Detection
Authors: Salma El Hajjami, Jamal Malki, Alain Bouju, Mohammed Berrada
Abstract:
With the expansion of machine learning and data mining in the context of Big Data analytics, a common problem that affects data is class imbalance: an uneven distribution of instances across classes. This problem is present in many real-world applications such as fraud detection, network intrusion detection, and medical diagnostics. In these cases, data instances labeled negatively are significantly more numerous than those labeled positively. When this difference is too large, the learning system may struggle, since it is typically designed for relatively balanced class distributions. Another important problem that usually accompanies imbalanced data is overlap between instances of the two classes, commonly referred to as noise or overlapping data. In this article, we propose an approach called One Side Behavioral Noise Reduction (OSBNR), a way to deal with class imbalance in the presence of a high noise level. OSBNR is based on two steps. First, cluster analysis is applied to group similar instances from the minority class into several behavior clusters. Second, we select and eliminate the majority-class instances, considered behavioral noise, that overlap with the behavior clusters of the minority class. The results of experiments carried out on a representative public dataset confirm that the proposed approach is effective for treating class imbalance in the presence of noise.
Keywords: Machine learning, Imbalanced data, Data mining, Big data.
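A minimal sketch of the two OSBNR steps, under simplifying assumptions: KMeans stands in for the cluster analysis, and a max-distance cluster radius defines the overlap rule; neither is necessarily the authors' exact choice.

```python
import numpy as np
from sklearn.cluster import KMeans

def osbnr_sketch(X_maj, X_min, n_clusters=5):
    """Step 1: cluster the minority class into behavior clusters.
       Step 2: remove majority instances overlapping those clusters."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X_min)
    # Radius of each cluster = max distance of its members to the centroid.
    d_min = np.linalg.norm(X_min - km.cluster_centers_[km.labels_], axis=1)
    radii = np.array([d_min[km.labels_ == k].max() for k in range(n_clusters)])
    # A majority instance is "behavioral noise" if inside any cluster radius.
    d_maj = np.linalg.norm(
        X_maj[:, None, :] - km.cluster_centers_[None, :, :], axis=2)
    noisy = (d_maj <= radii[None, :]).any(axis=1)
    return X_maj[~noisy]          # cleaned majority class
```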
123 Bone Generation through Mechanical Loading
Authors: R. S. A. Nesbitt, J. Macione, A. Debroy, S. P. Kotha
Abstract:
Bones are dynamic and responsive organs: they regulate their strength and mass according to the loads to which they are subjected. Because the Wnt/β-catenin pathway has profound effects on the regulation of bone mass, we hypothesized that mechanical loading of bone cells stimulates Wnt/β-catenin signaling, which results in the generation of new bone mass. Mechanical loading triggers the secretion of the Wnt molecule, which, after binding to transmembrane proteins, causes GSK-3β (glycogen synthase kinase 3 beta) to cease the phosphorylation of β-catenin. β-catenin then accumulates in the cytoplasm and is transported into the nucleus, where it binds to transcription factors (TCF/LEF) that initiate transcription of genes related to bone formation. To test this hypothesis, we used TOPGAL (Tcf Optimal Promoter β-galactosidase) mice in an experiment in which cyclic loads were applied to the forearm. TOPGAL mice are reporters for cells affected by the Wnt/β-catenin signaling pathway: they are genetically engineered so that transcriptional activation by β-catenin results in the production of the enzyme β-galactosidase. The presence of this enzyme allows us to localize transcriptional activation of β-catenin to individual cells, thereby allowing us to quantify the effects of mechanical loading on the Wnt/β-catenin pathway and new bone formation. The ulnae of loaded TOPGAL mice were excised, and transverse slices along different parts of the ulnar shaft were assayed for the presence of β-galactosidase. Our results indicate that loading increases β-catenin transcriptional activity, in a load-magnitude-dependent manner, in regions where this pathway is already primed (i.e., where basal activity is already higher). Further experiments are needed to determine the temporal and spatial activation of this signaling in relation to bone formation.
Keywords: Bone Resorption and Formation, Mechanical Loading of Bone, Wnt Signaling Pathway & β-catenin.
122 Effect of Soaking Period of Clay on Its California Bearing Ratio Value
Authors: Robert G. Nini
Abstract:
The quality of a road pavement is affected mostly by the type of sub-grade, which acts as the road foundation. Road degradation is related to many factors, especially climatic conditions and the quality and thickness of the base materials. The thickness of this layer depends on its California Bearing Ratio (CBR) test value, which in turn is highly affected by the quantity of water infiltrating under the road after heavy rain. The capacity of the base material to drain out its water is a predominant factor, because any change in moisture content causes a change in sub-grade strength. This paper studies the effect of the soaking period of soil, especially clay, on its CBR value. For this reason, we collected many clayey samples in order to study the effect of the soaking period on their CBR values. On each soil, two groups of experiments were performed: main tests consisting of the Proctor and CBR tests on one side, and identification tests such as the Atterberg limits tests on the other. Each soil sample was first subjected to the Proctor test to find its optimum moisture content, which was then used to perform the CBR test. Four CBR tests were performed on each soil with different soaking periods: the first without soaking, the second with two days of soaking, the third with four days, and the last with eight days. By comparing the results of CBR tests performed with different soaking times, a more detailed understanding was gained of the role of water in reducing the CBR of soil. In fact, as the soaking period was extended, the CBR was found to drop quickly during the first two days and more slowly thereafter. A reduction factor relating the CBR to the soaking period is derived at the end of this paper.
Keywords: California bearing ratio, clay, Proctor test, soaking period, sub-grade.
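The fast-then-slow decay reported above suggests an exponential reduction model; the sketch below fits one with SciPy to extract a reduction factor. Both the model form and the CBR readings are invented for illustration, not the paper's data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical CBR readings at 0, 2, 4, 8 days of soaking (not the paper's data).
t = np.array([0.0, 2.0, 4.0, 8.0])
cbr = np.array([18.0, 9.5, 7.8, 7.0])

def decay(t, cbr_inf, delta, k):
    # CBR(t) decays from cbr_inf + delta toward the long-term value cbr_inf.
    return cbr_inf + delta * np.exp(-k * t)

(cbr_inf, delta, k), _ = curve_fit(decay, t, cbr, p0=(7.0, 11.0, 0.5))
ratio = decay(4, cbr_inf, delta, k) / decay(0, cbr_inf, delta, k)
print(f"reduction factor after 4 days of soaking: {ratio:.2f}")
```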
121 Identification of Spam Keywords Using Hierarchical Category in C2C E-commerce
Authors: Shao Bo Cheng, Yong-Jin Han, Se Young Park, Seong-Bae Park
Abstract:
Consumer-to-Consumer (C2C) E-commerce has been growing at a very high speed in recent years. Since identical or nearly identical kinds of products compete with one another through keyword search in C2C E-commerce, some sellers describe their products with spam keywords that are popular but unrelated to their products. Though such products get more chances to be retrieved and selected by consumers than those without spam keywords, the spam keywords mislead consumers and waste their time. This problem has been reported in many commercial services like eBay and Taobao, but there has been little research on solving it. As a solution, this paper proposes a method to classify whether the keywords of a product are spam or not. The proposed method assumes that a keyword for a given product is more reliable if the keyword is observed commonly in the specifications of products that are the same as, or of the same kind as, the given product. This is because the hierarchical category of a product is, in general, determined precisely by the seller of the product, and so is its specification. Since higher layers of the hierarchical category represent more general kinds of products, a reliability degree is determined separately for each layer. Hence, reliability degrees from the different layers of a hierarchical category become features for keywords, and they are used together with features from specifications alone for classification of the keywords. Support Vector Machines are adopted as the basic classifier using these features, since they are powerful and widely used in many classification tasks. In the experiments, the proposed method is evaluated on a gold-standard dataset from Yi-han-wang, a Chinese C2C E-commerce site, and is compared with a baseline method that does not consider the hierarchical category. The experimental results show that the proposed method outperforms the baseline in F1-measure, which demonstrates that spam keywords are effectively identified using the hierarchical category in C2C E-commerce.
Keywords: Spam Keyword, E-commerce, keyword features, spam filtering.
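A sketch of the feature idea: the reliability of a keyword at a given category layer is the fraction of products sharing that layer's category prefix whose specification mentions the keyword, and the per-layer reliabilities feed an SVM. The toy catalog, labels, and matching rule below are placeholders, not the Yi-han-wang data or the paper's exact feature set.

```python
import numpy as np
from sklearn.svm import SVC

def layer_reliability(keyword, product, catalog, layer):
    """Fraction of catalog products sharing the first `layer` levels of
       `product`'s hierarchical category whose specification mentions
       `keyword` -- one reliability feature per category layer."""
    prefix = product["category"][:layer]
    group = [p for p in catalog if p["category"][:layer] == prefix]
    return sum(keyword in p["spec"] for p in group) / max(len(group), 1)

# Hypothetical toy catalog: category is a tuple, deepest layer last.
catalog = [
    {"category": ("electronics", "phones"), "spec": "smartphone 64GB camera"},
    {"category": ("electronics", "phones"), "spec": "smartphone 128GB battery"},
    {"category": ("fashion", "shoes"),      "spec": "leather boots size 42"},
]
item = catalog[0]
X = np.array([[layer_reliability(k, item, catalog, l) for l in (1, 2)]
              for k in ("smartphone", "boots")])      # one row per keyword
y = np.array([0, 1])                                  # 0 = genuine, 1 = spam
clf = SVC(kernel="rbf").fit(X, y)                     # toy training set
```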
120 Opponent Color and Curvelet Transform Based Image Retrieval System Using Genetic Algorithm
Authors: Yesubai Rubavathi Charles, Ravi Ramraj
Abstract:
In order to retrieve images efficiently from a large database, a method integrating color and texture features using a genetic algorithm is proposed. The opponent color histogram, which is invariant to shadow, shade, and light intensity, is employed in the proposed framework for extracting color features. For texture feature extraction, the fast discrete curvelet transform, which captures orientation information at multiple scales, is incorporated to represent curve-like edges. A current concern in image retrieval is reducing the semantic gap between the user's preference and low-level features. To address this, a genetic algorithm combined with relevance feedback is embedded to reduce the semantic gap and retrieve images matching the user's preference. Extensive and comparative experiments have been conducted to evaluate the proposed framework for content-based image retrieval on two databases, COIL-100 and Corel-1000. The experimental results show that the proposed system surpasses existing systems in terms of precision and recall, achieving an average precision of 88.2% on COIL-100 and 76.3% on Corel-1000, and an average recall of 69.9% on COIL-100 and 76.3% on Corel-1000. Thus, the experimental results confirm that the proposed content-based image retrieval architecture attains a better solution for image retrieval.
Keywords: Content based image retrieval, Curvelet transform, Genetic algorithm, Opponent color histogram, Relevance feedback.
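The opponent color channels have a standard closed form, so the color feature can be sketched directly; the bin count and the stand-in image below are arbitrary choices, not the paper's settings.

```python
import numpy as np

def opponent_histogram(rgb, bins=8):
    """Histogram over the standard opponent color channels:
       O1 = (R-G)/sqrt(2), O2 = (R+G-2B)/sqrt(6), O3 = (R+G+B)/sqrt(3)."""
    r, g, b = [rgb[..., i].astype(float) for i in range(3)]
    o1 = (r - g) / np.sqrt(2)
    o2 = (r + g - 2 * b) / np.sqrt(6)
    o3 = (r + g + b) / np.sqrt(3)
    hist = [np.histogram(o, bins=bins)[0] for o in (o1, o2, o3)]
    feat = np.concatenate(hist).astype(float)
    return feat / feat.sum()                      # normalized color feature

img = np.random.randint(0, 256, (64, 64, 3))      # stand-in image
print(opponent_histogram(img).shape)              # (24,)
```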
119 Elliptical Features Extraction Using Eigen Values of Covariance Matrices, Hough Transform and Raster Scan Algorithms
Authors: J. Prakash, K. Rajesh
Abstract:
In this paper, we introduce a new method for elliptical object identification. The proposed method adopts a hybrid scheme consisting of eigenvalues of covariance matrices, the circular Hough transform, and Bresenham's raster scan algorithm. The approach uses the fact that the large and small eigenvalues of a covariance matrix are associated with the major and minor axial lengths of an ellipse. The centre of the ellipse is identified using the circular Hough transform (CHT), implemented with a sparse matrix technique: since sparse matrices store only their few nonzero elements, they save storage space and computational time. A neighborhood suppression scheme is used to find the valid Hough peaks. The accurate position of the circumference pixels is identified using a raster scan algorithm that exploits the geometrical symmetry property. The method does not require the evaluation of tangents or the curvature of edge contours, which are generally very sensitive to noise. The proposed method has the advantages of small storage, high speed, and accuracy in identifying the feature. It has been tested on both synthetic and real images, and several experiments have been conducted on various images with considerable background noise to reveal its efficacy and robustness. Experimental results on the accuracy of the proposed method, together with comparisons against the Hough transform, its variants, and other tangent-based methods, are reported.
Keywords: Circular Hough transform, covariance matrix, Eigen values, ellipse detection, raster scan algorithm.
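The eigenvalue-to-axis relation is easy to illustrate: for points sampled uniformly in angle along an ellipse contour, the covariance eigenvalues equal half the squared semi-axes, so the axial lengths are recoverable as sqrt(2λ). A numpy sketch under that sampling assumption:

```python
import numpy as np

# Points on a hypothetical ellipse contour (a=5, b=2), rotated 30 degrees.
theta = np.linspace(0, 2 * np.pi, 400, endpoint=False)
pts = np.stack([5 * np.cos(theta), 2 * np.sin(theta)], axis=1)
rot = np.deg2rad(30)
R = np.array([[np.cos(rot), -np.sin(rot)], [np.sin(rot), np.cos(rot)]])
pts = pts @ R.T

# Eigenvalues of the covariance matrix encode the axial lengths:
# for angle-uniform contour samples, semi-axis = sqrt(2 * eigenvalue).
evals, evecs = np.linalg.eigh(np.cov(pts.T))
semi_minor, semi_major = np.sqrt(2 * evals)        # eigh sorts ascending
print(semi_major, semi_minor)                      # ~5.0, ~2.0
```

The eigenvectors additionally give the ellipse orientation, which the rotation above leaves recoverable.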
118 Evaluation of Ensemble Classifiers for Intrusion Detection
Authors: M. Govindarajan
Abstract:
One of the major developments in machine learning in the past decade is the ensemble method, which builds a highly accurate classifier by combining many moderately accurate component classifiers. In this research work, new ensemble classification methods are proposed: a homogeneous ensemble classifier using bagging and a heterogeneous ensemble classifier using arcing; their performance is analyzed in terms of accuracy. A classifier ensemble is designed using a Radial Basis Function (RBF) network and a Support Vector Machine (SVM) as base classifiers. The feasibility and benefits of the proposed approaches are demonstrated by means of standard intrusion detection datasets. The main originality of the proposed approach lies in its three main parts: a preprocessing phase, a classification phase, and a combining phase. A wide range of comparative experiments is conducted on standard intrusion detection datasets, and the performance of the proposed homogeneous and heterogeneous ensemble classifiers is compared with that of other standard ensemble methods: error-correcting output codes (ECOC) and Dagging for the homogeneous case, and majority voting and stacking for the heterogeneous case. The proposed ensemble methods provide a significant improvement in accuracy over individual classifiers; the proposed bagged RBF and SVM perform significantly better than ECOC and Dagging, and the proposed hybrid RBF-SVM performs significantly better than voting and stacking. Heterogeneous models also exhibit better results than homogeneous models on the standard intrusion detection datasets.
Keywords: Data mining, ensemble, radial basis function, support vector machine, accuracy.
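A minimal scikit-learn sketch of the homogeneous case, a bagged ensemble of RBF-kernel SVMs; the synthetic dataset and hyperparameters are placeholders, not the intrusion detection benchmarks used in the paper, and the `estimator` argument assumes scikit-learn 1.2 or later.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Stand-in data; the paper uses standard intrusion detection datasets.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Homogeneous ensemble: bagging over RBF-kernel SVM base classifiers.
bagged_svm = BaggingClassifier(estimator=SVC(kernel="rbf"),
                               n_estimators=10, random_state=0)
print(cross_val_score(bagged_svm, X, y, cv=5).mean())  # ensemble accuracy
```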
117 SMaTTS: Standard Malay Text to Speech System
Authors: Othman O. Khalifa, Zakiah Hanim Ahmad, Teddy Surya Gunawan
Abstract:
This paper presents a rule-based text-to-speech (TTS) synthesis system for Standard Malay (SM), namely SMaTTS. The proposed system uses a sinusoidal method and pre-recorded wave files to generate speech. The use of a phone database significantly decreases the amount of computer memory used, making the system very light and embeddable. The overall system comprises two phases. The first is the Natural Language Processing (NLP) phase, consisting of the high-level processing of text analysis, phonetic analysis, text normalization, and a morphophonemic module; this module was designed specially for SM to overcome several problems in defining rules for the SM orthography system before the output is passed to the DSP module. The second phase is the Digital Signal Processing (DSP) phase, which operates on the low-level process of speech waveform generation. An intelligible and adequately natural-sounding formant-based speech synthesis system with a light and user-friendly Graphical User Interface (GUI) is introduced. An SM phoneme set and an inclusive phone database have been constructed carefully for this phone-based speech synthesizer. By applying generative phonology, comprehensive letter-to-sound (LTS) rules and a pronunciation lexicon have been devised for SMaTTS. For the evaluation tests, a Diagnostic Rhyme Test (DRT) word list was compiled, and several experiments were performed to evaluate the quality of the synthesized speech by analyzing the Mean Opinion Score (MOS) obtained. The overall performance of the system, as well as the room for improvement, is thoroughly discussed.
Keywords: Natural Language Processing, Text-To-Speech (TTS), Diphone, source filter, low-/high-level synthesis.
116 The Micro Ecosystem Restoration Mechanism Applied for Feasible Research of Lakes Eutrophication Enhancement
Authors: Ching-Tsan Tsai, Sih-Rong Chen, Chi-Hung Hsieh
Abstract:
The technique of inducing micro ecosystem restoration is an aquatic ecological engineering method used to restore polluted water. A batch-scale study, a pilot plant study, and a field study were carried out to observe eutrophication using the Inducing Ecology Restorative Symbiosis Agent (IERSA), consisting mainly of products degraded by lactobacilli, saccharomycetes, and phycomycetes. The results of the batch-scale and pilot plant studies allowed us to develop the parameters for the field study. A pond 5 m from the outlet of a lake, with an area of 500 m², a depth of 0.6-1.2 m, and about 500 tons of water, was selected as a model. After treatment with 10 mg IERSA/L of water twice a week for 70 days, the micro restoration mechanism consisted of three stages (restoration, impact maintenance, and an ecology recovery experiment after impact). The COD, TN, TKN, and chlorophyll a were reduced significantly in the first week. Although unexpected heavy rain and contamination from the sewage system slowed the ecological restoration, the self-cleaning function continued, and chlorophyll a was reduced by 50% within one month. In the fourth week, amoebae, paramecia, rotifers, and red wriggler worms reappeared, and fish fry appeared at densities of up to 1,000 fry/m³. These results show that the induced restorative mechanism can be applied to improve eutrophication and control algal growth in lakes by achieving self-cleaning through the induction of, and competition among, microbes. Fish growth also benefited markedly from the improved water quality.
Keywords: Ecosystem restoration, eutrophication, lake.
115 Effect of Laser Power and Powder Flow Rate on Properties of Laser Metal Deposited Ti6Al4V
Authors: Mukul Shukla, Rasheedat M. Mahamood, Esther T. Akinlabi, Sisa Pityana
Abstract:
Laser Metal Deposition (LMD) is an additive manufacturing process whose capabilities include producing a new part directly from a 3-Dimensional Computer-Aided Design (3D CAD) model, building a new part on an existing component, and repairing existing high-value parts that would previously have been discarded. Despite these capabilities and its advantages over other additive manufacturing techniques, the underlying physics of the LMD process is not yet fully understood, probably because the processing parameters interact strongly and studying many parameters at once makes the process complex to analyze. In this study, the effects of laser power and powder flow rate on the physical properties (deposition height and width), metallurgical property (microstructure), and mechanical property (microhardness) of laser-deposited Ti6Al4V, the most widely used aerospace alloy, are studied. Because Ti6Al4V is very expensive and LMD can reduce the buy-to-fly ratio of aerospace parts, material utilization efficiency is also studied. Four sets of experiments were performed and repeated to establish repeatability, using laser powers of 1.8 kW and 3.0 kW and powder flow rates of 2.88 g/min and 5.67 g/min, while keeping the gas flow rate and scanning speed constant at 2 l/min and 0.005 m/s, respectively. The deposition height and width are found to increase with increasing laser power and powder flow rate. Material utilization is favoured by higher power, while a higher powder flow rate reduces it. The results are presented and fully discussed.
Keywords: Laser Metal Deposition, Material Efficiency, Microstructure, Ti6Al4V.
114 Guidelines for Developing, Supervising, Assessing and Evaluating Capstone Design Project of BSc in Electrical and Electronic Engineering Program
Authors: Muhibul Haque Bhuyan
Abstract:
The inclusion of design projects in an undergraduate electrical and electronic engineering curriculum, and the production of creative ideas in final-year capstone design projects, have drawn numerous comments from mentors and visiting program evaluator team members of the Board of Accreditation for Engineering and Technical Education (BAETE) at different public and private universities in Bangladesh. To eradicate this deficiency, which must be addressed to obtain program accreditation, a thorough change was required in the Department of Electrical and Electronic Engineering (EEE) for its BSc in EEE program at Southeast University, Dhaka, Bangladesh. We suggested changes to course titles and contents, an emphasis on capstone design projects, improved question setting, examining students through other standard methods, selecting and retaining Outcome-Based Education (OBE)-oriented engineering faculty members, improving laboratories by purchasing new equipment and software and developing new experiments for each laboratory course, and engaging students in practical designs in various courses and final-year projects. This paper reports on capstone design project course objectives, course outcomes, mapping to program outcomes, the cognitive domain of learning, assessment schemes, and guidelines, suggestions, and recommendations for supervision processes, assessment strategy, rubric setting, etc. This is expected to substantially improve the offering, supervision, and assessment of capstone design projects in the undergraduate EEE program and to fulfill the demanding requirements of OBE-based BAETE accreditation.
Keywords: Course outcome, capstone design project, assessment and evaluation, electrical and electronic engineering.
113 Screen of MicroRNA Targets in Zebrafish Using Heterogeneous Data Sources: A Case Study for Dre-miR-10 and Dre-miR-196
Authors: Yanju Zhang, Joost M. Woltering, Fons J. Verbeek
Abstract:
It has been established that microRNAs (miRNAs) play an important role in gene expression through post-transcriptional regulation of messenger RNAs (mRNAs). However, the precise relationships between microRNAs and their target genes, in terms of numbers, types, and biological relevance, remain largely unclear. Dissecting miRNA-target relationships will yield more insight into miRNA target identification and validation and thereby promote the understanding of miRNA function. In miRBase, miRanda is the key algorithm used for target prediction in zebrafish. This algorithm is high-throughput but produces many false positives (noise). Since laboratory validation of targets at large scale is very time-consuming, computational methods for miRNA target validation should be developed. In this paper, we present an integrative method to investigate several aspects of the relationships between miRNAs and their targets, with the final purpose of extracting high-confidence targets from the pool of miRanda-predicted targets. This is achieved using techniques ranging from statistical tests to clustering and association rules. Our research focuses on zebrafish. It was found that validated targets do not necessarily correspond to the highest sequence matching. Besides, for some miRNA families, the frequency of their predicted targets is significantly higher in the genomic region near their own physical location. Finally, in a case study of dre-miR-10 and dre-miR-196, it was found that the predicted target genes hoxd13a, hoxd11a, hoxd10a, and hoxc4a of dre-miR-10, and hoxa9a, hoxc8a, and hoxa13a of dre-miR-196, have characteristics similar to validated target genes and therefore represent high-confidence target candidates.
Keywords: MicroRNA targets validation, microRNA-target relationships, dre-miR-10, dre-miR-196.
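One statistical test implied by the location finding can be sketched as a hypergeometric enrichment test: are predicted targets over-represented in the genomic neighborhood of the miRNA's own locus? All counts below are invented for illustration, not the paper's data.

```python
from scipy.stats import hypergeom

# Hypothetical counts, not the paper's data:
N = 20000   # genes in the genome
K = 300     # genes lying near the miRNA's own genomic location
n = 150     # predicted targets of the miRNA
k = 12      # predicted targets that fall in the nearby region

# P(X >= k) under random placement: a small p-value suggests enrichment.
p_value = hypergeom.sf(k - 1, N, K, n)
print(f"enrichment p-value: {p_value:.3g}")
```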
112 Evaluation of the Mechanical Behavior of a Retaining Wall Structure on a Weathered Soil through Probabilistic Methods
Authors: P. V. S. Mascarenhas, B. C. P. Albuquerque, D. J. F. Campos, L. L. Almeida, V. R. Domingues, L. C. S. M. Ozelim
Abstract:
Retaining slope structures are increasingly considered in geotechnical engineering projects due to extensive urban growth. These kinds of constructions may develop instabilities over time and may require reinforcement or even rebuilding. In this context, statistical analysis is an important tool for decision making regarding retaining structures. This study addresses the failure probability of constructing a retaining wall over the debris of an old, collapsed one. The new solution will extend approximately 350 m along the margins of Lake Paranoá in Brasília, the capital of Brazil, and the building process must account for the use of the ruins as a caisson. A series of in situ and laboratory experiments defined the local soil strength parameters, and a Standard Penetration Test (SPT) defined the in situ soil stratigraphy. The parameters obtained were also verified against soil data from a collection of master's and doctoral theses from the University of Brasília concerning similar soils. Initial studies show that a concrete wall is the proper solution for this case, taking into account the technical, economic, and deterministic analyses. On the other hand, in order to better assess the statistical significance of the factors of safety obtained, a Monte Carlo analysis was performed for the concrete wall and two other initial solutions. A comparison between the statistical and risk results generated for the different solutions indicated that a gabion solution would better fit the financial and technical feasibility of the project.
Keywords: Economic analysis, probability of failure, retaining walls, statistical analysis.
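The Monte Carlo step can be sketched generically: sample soil strength parameters, evaluate a factor of safety, and estimate the probability of failure as the fraction of samples with FS < 1. The infinite-slope FS formula and the parameter distributions below are placeholders, not the project's stability model or data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical soil parameter distributions (not the project's data).
cohesion = rng.normal(15.0, 3.0, n)            # kPa
phi = np.deg2rad(rng.normal(28.0, 2.5, n))     # friction angle
gamma, depth, slope = 18.0, 4.0, np.deg2rad(25.0)

# Infinite-slope factor of safety as a stand-in stability model.
fs = (cohesion + gamma * depth * np.cos(slope)**2 * np.tan(phi)) / (
      gamma * depth * np.sin(slope) * np.cos(slope))

p_failure = np.mean(fs < 1.0)                  # Monte Carlo estimate
print(f"P(failure) ~ {p_failure:.4f}")
```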
111 Customer Churn Prediction Using Four Machine Learning Algorithms Integrating Feature Selection and Normalization in the Telecom Sector
Authors: Alanoud Moraya Aldalan, Abdulaziz Almaleh
Abstract:
A crucial part of maintaining a customer-oriented business in the telecommunications industry is understanding the reasons and factors that lead to customer churn. Competition between telecom companies has greatly increased in recent years, making it more important to understand customers' needs in this crowded market, especially the needs of customers who are considering switching providers. Churn prediction is now a mandatory requirement for retaining customers in the telecommunications industry, and machine learning can be used to accomplish it; churn prediction has accordingly become a very important machine learning classification topic in this sector. Understanding the factors behind customer churn, and how customers behave, is essential to building an effective churn prediction model. This paper aims to predict churn and identify the factors behind customers' churn based on their past service usage history. To this end, the study makes use of feature selection, normalization, and feature engineering. It then compares the performance of four machine learning algorithms on the Orange dataset: Logistic Regression, Random Forest, Decision Tree, and Gradient Boosting. Performance was evaluated using the F1-score and ROC-AUC. Compared with existing models, the proposed approach produces better results: Gradient Boosting with feature selection performed best, achieving a 99% F1-score and 99% AUC, and all other experiments achieved good results as well.
Keywords: Machine Learning, Gradient Boosting, Logistic Regression, Churn, Random Forest, Decision Tree, ROC, AUC, F1-score.
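A compact scikit-learn sketch of the comparison setup: normalization, simple univariate feature selection, and the four classifiers scored with F1 and ROC-AUC. The synthetic imbalanced data stands in for the Orange dataset, and the hyperparameters are defaults rather than the paper's tuned values.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=30, weights=[0.85],
                           random_state=0)      # imbalanced, churn-like

models = {
    "logreg": LogisticRegression(max_iter=1000),
    "tree": DecisionTreeClassifier(random_state=0),
    "forest": RandomForestClassifier(random_state=0),
    "gboost": GradientBoostingClassifier(random_state=0),
}
for name, clf in models.items():
    # Pipeline: normalization -> feature selection -> classifier.
    pipe = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=15), clf)
    scores = cross_validate(pipe, X, y, cv=5, scoring=("f1", "roc_auc"))
    print(name, scores["test_f1"].mean(), scores["test_roc_auc"].mean())
```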
110 Effect of Pole Weight on Nordic Walking
Authors: Takeshi Sato, Mizuki Nakajima, Macky Kato, Shoji Igawa
Abstract:
The purpose of this study was to investigate the effect of varying pole weights on energy expenditure and on upper- and lower-limb muscle activity, measured by electromyogram (EMG), during Nordic walking (NW). Four healthy men [age = 22.5 (±1.0) years, body mass = 61.4 (±3.6) kg, height = 170.3 (±4.3) cm] and three healthy women [age = 22.7 (±2.9) years, body mass = 53.0 (±1.7) kg, height = 156.7 (±4.5) cm] participated in the experiments after giving informed consent. The seven subjects were tested on a treadmill under three conditions: walking (W), walking with Nordic poles (NW), and walking with 1 kg weighted Nordic poles (NW+1). Walking speed was 6 km/h in all trials. Eight EMG signals were recorded by bipolar surface methods from the biceps brachii, triceps brachii, trapezius, deltoideus, tibialis anterior, medial gastrocnemius, rectus femoris, and biceps femoris muscles, and heart rate (HR), oxygen uptake (VO2), and rating of perceived exertion (RPE) were measured. The level of significance was set at α = 0.05, with p < 0.05 regarded as statistically significant. Our results confirmed that the use of NW poles increased HR at a given upper-arm muscle activity but decreased lower-limb EMG activity in comparison with W. Moreover, NW increased step length through greater hip joint extension compared with W. EMG also revealed higher upper-limb activation in almost all NW and NW+1 trials compared with W (p < 0.05). Both NW and NW+1 therefore appear beneficial as safe, feasible, and readily available training for people across a wide age range, supporting quality of daily life. However, the 1 kg poles had no significant effect on leg muscle activity, affecting only upper-arm muscle activity during Nordic pole walking.
Keywords: Nordic walking, electromyogram, heart rate.
109 Multi-Scale Gabor Feature Based Eye Localization
Authors: Sanghoon Kim, Sun-Tae Chung, Souhwan Jung, Dusik Oh, Jaemin Kim, Seongwon Cho
Abstract:
Eye localization is necessary for face recognition and related application areas. Most eye localization algorithms reported so far still need improvement in precision and computational time for successful application. In this paper, we propose an eye localization method based on multi-scale Gabor feature vectors that is more robust with respect to initial points. Eye localization based on Gabor feature vectors first constructs an Eye Model Bunch for each eye (left or right), consisting of n Gabor jets and the average eye coordinates obtained from n model face images. It then localizes eyes in an incoming face image by exploiting the fact that the true eye coordinates are most likely to be very close to the position where a Gabor jet achieves the best similarity match with a jet in the Eye Model Bunch. Similar ideas have already been proposed, for example in Elastic Bunch Graph Matching (EBGM). However, EBGM is known not to be robust with respect to initial values and may require an extensive search range to achieve the required performance, which imposes a much greater computational burden. In this paper, we propose a multi-scale approach with only a slightly increased computational burden: eyes are first localized, based on Gabor feature vectors, in a coarse face image obtained by downsampling the original face image, and the eye coordinates localized in the coarse-scale image are then used as initial points for localization in the original-resolution face image. Several experiments and comparisons with eye localization methods reported in other papers show the efficiency of the proposed method.
Keywords: Eye Localization, Gabor features, Multi-scale, Gabor wavelets.
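The core operation can be sketched as follows: build a Gabor jet (filter-response magnitudes over several scales and orientations) at a pixel and compare jets by normalized similarity. OpenCV's getGaborKernel is used; the scale/orientation grid and kernel parameters are arbitrary choices, not the paper's settings.

```python
import cv2
import numpy as np

def gabor_jet(gray, x, y, scales=(4, 8, 16), n_orient=8):
    """Magnitudes of Gabor responses at (x, y) over scales and orientations."""
    jet = []
    for lam in scales:
        for k in range(n_orient):
            theta = k * np.pi / n_orient
            kern = cv2.getGaborKernel((31, 31), lam / 2, theta, lam, 0.5)
            resp = cv2.filter2D(gray.astype(np.float32), cv2.CV_32F, kern)
            jet.append(abs(resp[y, x]))
    return np.array(jet)

def jet_similarity(j1, j2):
    # Normalized dot product; values near 1.0 mean similar local structure.
    return float(j1 @ j2 / (np.linalg.norm(j1) * np.linalg.norm(j2) + 1e-9))

img = np.random.randint(0, 256, (64, 64), np.uint8)   # stand-in face image
print(jet_similarity(gabor_jet(img, 32, 32), gabor_jet(img, 33, 32)))
```

The multi-scale scheme of the paper would evaluate such similarities first on a downsampled image and then refine on the full-resolution one.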
108 Shear Modulus Degradation of a Liquefiable Sand Deposit by Shaking Table Tests
Authors: Henry Munoz, Muhammad Mohsan, Takashi Kiyota
Abstract:
The strength and deformability characteristics of a liquefiable sand deposit, including the development of earthquake-induced shear stress and shear strain as well as soil softening through the progressive degradation of shear modulus, were studied in shaking table experiments. To do so, a model of a liquefiable sand deposit was constructed and densely instrumented, so that accelerations, pressures, and displacements at different locations could be continuously monitored. Furthermore, the confinement effect on the strength and deformation characteristics of the deposit of an external surcharge, a heavy concrete slab (modeling an actual rigid structural pavement) placed on the ground surface, was examined. The results indicate that as the number of seismic loading cycles increases, the sand deposit softens progressively as large shear strains develop in different sand elements. The liquefaction state is reached through the combined effects of the progressive degradation of the initial shear modulus, associated with the continuous decrease in mean principal stress, and the buildup of excess pore pressure in the deposit. Finally, the confinement provided by the concrete slab resulted in a favorable increase in the initial shear modulus, an increase in the mean principal stress, and a decrease in the softening rate (i.e., the rate of decrease in shear modulus) of the sand, delaying the onset of liquefaction. That is, the deposit with the concrete slab liquefied only after experiencing a higher number of seismic loading cycles, in contrast to an ordinary deposit without a slab.
Keywords: Liquefaction, shaking table, shear modulus degradation, earthquake.
107 Evaluation of Energy and Environmental Aspects of Reduced Tillage Systems Applied in Maize Cultivation
Authors: E. Sarauskis, L. Masilionyte, Z. Kriauciuniene, K. Romaneckas, S. Buragiene
Abstract:
In maize growing technologies, tillage operations are the most time-consuming and require the greatest fuel input. Substituting reduced tillage methods for conventional tillage, which involves deep ploughing, can lower production costs, diminish soil degradation and greenhouse gas pollution, and improve the economic competitiveness of agricultural produce.
Experiments designed to assess the energy and environmental aspects of different reduced tillage systems applied in maize cultivation were conducted at Aleksandras Stulginskis University, taking into account Lithuania's economic and climatic conditions. The study involved five tillage treatments: deep ploughing (DP, control), shallow ploughing (SP), deep cultivation (DC), shallow cultivation (SC), and no-tillage (NT).
Our experimental evidence suggests that, compared with deep ploughing, reduced tillage systems can cut fuel consumption by 13-58% and working time input by amounts ranging from 8.4% to nearly 3-fold, reduce the cost of maize cultivation operations, and decrease CO2 pollution by 30 to 146 kg ha⁻¹.
Keywords: Reduced tillage, energy and environmental assessment, fuel consumption, CO2 emission, maize.
106 Progressive AAM Based Robust Face Alignment
Authors: Daehwan Kim, Jaemin Kim, Seongwon Cho, Yongsuk Jang, Sun-Tae Chung, Boo-Gyoun Kim
Abstract:
AAM has been successfully applied to face alignment, but its performance is very sensitive to initial values. If the initial values are far from the global optimum, AAM-based face alignment is quite likely to converge to a local minimum. In this paper, we propose a progressive AAM-based face alignment algorithm that first finds the feature parameter vector fitting the inner facial feature points of the face and then localizes the feature points of the whole face using this information. The proposed algorithm exploits the fact that the feature points of the inner part of the face are less variable and less affected by the background surrounding the face than those of the outer part (such as the chin contour). The algorithm consists of two stages: a modeling and relation derivation stage and a fitting stage. The modeling and relation derivation stage constructs two AAM models, the inner-face AAM and the whole-face AAM, and then derives a relation matrix between the inner-face AAM parameter vector and the whole-face AAM parameter vector. In the fitting stage, the algorithm aligns the face progressively in two phases: in the first phase, it finds the feature parameter vector fitting the inner-face AAM to a new input face image; in the second phase, it localizes the whole-face feature points of the input image based on the whole-face AAM, using as initial values the parameter vector estimated from the inner-face parameter vector of the first phase and the relation matrix of the first stage. Experiments verify that the proposed progressive AAM-based face alignment algorithm is more robust with respect to pose, illumination, and face background than the conventional basic AAM-based algorithm.
Keywords: Face Alignment, AAM, facial feature detection, model matching.
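The relation derivation step can be sketched as a least-squares mapping from inner-face parameters to whole-face parameters over training images; the dimensions and data below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical AAM parameter vectors for 200 training faces.
P_inner = rng.normal(size=(200, 10))    # inner-face AAM parameters
P_whole = P_inner @ rng.normal(size=(10, 25)) \
          + 0.01 * rng.normal(size=(200, 25))   # whole-face AAM parameters

# Relation matrix R minimizing ||P_inner @ R - P_whole||^2 (least squares).
R, *_ = np.linalg.lstsq(P_inner, P_whole, rcond=None)

p_inner_new = rng.normal(size=10)       # phase-1 fit on a new image
p_whole_init = p_inner_new @ R          # initial values for the phase-2 fit
```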
105 Greywater Treatment Using Activated Biochar Produced from Agricultural Waste
Authors: Pascal Mwenge, Tumisang Seodigeng
Abstract:
The increase in urbanisation in South Africa has led to an increase in water demand and a decline in freshwater supply. Despite this, poor water usage is still a major challenge; for instance, freshwater is still used for non-drinking applications. The freshwater shortage can be alleviated by using other sources of water for non-potable purposes, such as greywater treated with activated biochar produced from agricultural waste. The successful use of such activated biochar to treat greywater can be both economically and environmentally beneficial, and greywater treated this way is considered a cost-effective wastewater treatment. This work aimed to determine the ability of activated biochar to remove Total Suspended Solids (TSS), ammonium (NH4-N), nitrate (NO3-N), and Chemical Oxygen Demand (COD) from greywater. The experiments were carried out in 800 ml laboratory plastic cylinders used as filter columns. A 2.5 cm layer of gravel was placed at the bottom and top of each column to sandwich the activated biochar, and activated biochar (200 g or 400 g) was loaded into a column and used as the filter medium for greywater. Samples were collected after a week and sent for analysis. Four types of greywater were treated: kitchen, floor cleaning, shower, and laundry water. The findings showed the highest removal efficiencies to be 95% for TSS, 76% for NO3-N, and 63% for COD in kitchen greywater, and 85% for NH4-N in bathroom greywater. The results indicate that activated biochar produced from agricultural waste removes a substantial amount of pollutants from greywater and can treat greywater for on-site non-potable reuse purposes.
Keywords: Activated biochar produced from agriculture waste, ammonium (NH4-N), chemical oxygen demand (COD), greywater, nitrate (NO3-N), total suspended solids (TSS).
104 A Study of Shear Stress Intensity Factor of PP and HDPE by a Modified Experimental Method together with FEM
Authors: Md. Shafiqul Islam, Abdullah Khan, Sharon Kao-Walter, Li Jian
Abstract:
Shear testing is one of the most complex testing areas, with widely differing methods and specimen geometries. Therefore, a modified shear test specimen (MSTS), combining a simple uniaxial test with a zone of interest (ZOI), is tested, which gives almost pure shear. In this study, the material parameters of polypropylene (PP) and high-density polyethylene (HDPE) are first measured by tensile tests on a dogbone-shaped specimen. These parameters are then used as input for the finite element analysis. Secondly, the specially designed specimen (MSTS) is used to perform shear stress tests in a tensile testing machine, obtaining results in terms of force, extension, crack initiation, etc. Scanning Electron Microscopy (SEM) is also performed on the shear fracture surface to examine material behavior. These experiments are then simulated by the finite element method and compared with the experimental results in order to confirm the simulation model. The shear stress state is inspected to assess the usability of the proposed shear specimen. Finally, a geometry correction factor can be established for these two materials, for this specific loading and notched geometry, using Linear Elastic Fracture Mechanics (LEFM). From these results, the strain energy of shear failure and the shear stress intensity factor (SIF) of the two polymers are discussed for the specific application of screw-cap opening of medical or food packages with a tamper-evident safety solution.
Keywords: Shear test specimen, Stress intensity factor, Finite Element simulation, Scanning electron microscopy, Screw cap opening.
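For reference, the standard LEFM form behind such a geometry correction factor: for a mode II (shear) loaded notch of length a under a nominal shear stress τ, the stress intensity factor is commonly written as

```latex
K_{II} = Y \, \tau \sqrt{\pi a}
```

where Y is the dimensionless geometry correction factor that the paper sets out to establish for this specimen and loading.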
103 Analysis of Linked in Series Servers with Blocking, Priority Feedback Service and Threshold Policy
Authors: Walenty Oniszczuk
Abstract:
The use of buffer thresholds, blocking, and adequate service strategies are well-known techniques for traffic congestion control in computer networks. This motivates the study of series queues with blocking, feedback (service under a Head-of-Line (HoL) priority discipline), and finite-capacity buffers with thresholds. In this paper, the external traffic is modelled as a Poisson process and the service times as exponentially distributed. We consider a three-station network with two finite buffers, for which a set of thresholds (tm1 and tm2) is defined. The network behaves as follows. A task that finishes its service at station B is sent back to station A for re-processing with probability o. When the number of tasks in the second buffer exceeds the threshold tm2 and the number of tasks in the first buffer is less than tm1, the fed-back task is served under the HoL priority discipline. Otherwise, a "no two priority services in succession" procedure (preventing a possible overflow of the first buffer) is applied to fed-back tasks. Using an open Markovian queuing scheme with blocking, priority feedback service, and thresholds, a closed-form, cost-effective analytical solution is obtained. The model of servers linked in series is very accurate: it is derived directly from a two-dimensional state graph and a set of steady-state equations, followed by the calculation of the main measures of effectiveness. Consequently, efficient expressions with low computational cost are determined. Based on numerical experiments and the collected results, we conclude that the proposed model with blocking, feedback, and thresholds can provide accurate performance estimates for series-linked networks.
Keywords: Blocking, Congestion control, Feedback, Markov chains, Performance evaluation, Threshold-base networks.
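The heart of such a model is solving the steady-state balance equations πQ = 0 with Σπ = 1 over a finite state graph; a generic numpy solver is sketched below on a toy three-state generator matrix, not the paper's two-dimensional state space.

```python
import numpy as np

def steady_state(Q):
    """Solve pi @ Q = 0 with sum(pi) = 1 for a finite CTMC generator Q."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])          # balance + normalization rows
    b = np.zeros(n + 1); b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

# Toy 3-state generator (rows sum to zero), not the paper's state graph.
Q = np.array([[-1.0,  1.0,  0.0],
              [ 0.5, -1.5,  1.0],
              [ 0.0,  2.0, -2.0]])
print(steady_state(Q))                        # stationary distribution
```

Measures of effectiveness such as mean queue lengths or blocking probabilities then follow as weighted sums over the stationary distribution.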
102 An Experimental Study on the Effect of Premixed and Equivalence Ratios on CO and HC Emissions of Dual Fuel HCCI Engine
Authors: M. Ghazikhani, M. R. Kalateh, Y. K. Toroghi, M. Dehnavi
Abstract:
In this study, the effects of premixed and equivalence ratios on the CO and HC emissions of a dual fuel HCCI engine are investigated. Tests were conducted on a single-cylinder engine with a compression ratio of 17.5. Premixed gasoline is provided by a carburetor connected to the intake manifold and equipped with a screw to adjust the premixed air-fuel ratio, while diesel fuel is injected directly into the cylinder through an injector at a pressure of 250 bar. A heater placed at the inlet manifold is used to control the intake charge temperature. An optimal intake charge temperature results in better HCCI combustion due to the formation of a homogeneous mixture; therefore, all tests were carried out at the optimum intake temperature of 110-115 °C. The timing of diesel fuel injection has a great effect on the stratification of the in-cylinder charge and plays an important role in HCCI combustion phasing; experiments indicated 35 BTDC as the optimum injection timing. Varying the coolant temperature in the range of 40 to 70 °C, better HCCI combustion was achieved at 50 °C, so the coolant temperature was maintained at 50 °C during all tests. A simultaneous investigation of the parameters affecting HCCI combustion was conducted to determine the optimum parameters for a fast transition to HCCI combustion. One advantage of the method studied here is the feasibility of an easy and fast conversion of a typical diesel engine into a dual fuel HCCI engine. The results show that increasing the premixed ratio, while keeping the EGR rate constant, increases unburned hydrocarbon (UHC) emissions, due to quenching and the trapping of premixed fuel in crevices, while CO emissions decrease due to increased CO-to-CO2 oxidation.
Keywords: Dual fuel HCCI engine, premixed ratio, equivalence ratio, CO and UHC emissions.
101 Oscillation Effect of the Multi-stage Learning for the Layered Neural Networks and Its Analysis
Authors: Isao Taguchi, Yasuo Sugai
Abstract:
This paper proposes an efficient learning method for layered neural networks based on the selection of training data and the input characteristics of an output layer unit. Compared with more recent neural networks (pulse neural networks, quantum neuro-computation, etc.), the multilayer network is widely used due to its simple structure; however, when the learning objects are complicated, problems such as unsuccessful learning or significant learning time remain unsolved. Focusing on the input data during the learning stage, we undertook an experiment to identify the data that produce large errors and interfere with the learning process. Our method divides the learning process into several stages. In general, the input characteristics of an output layer unit oscillate during the learning process for complicated problems. The multi-stage learning method proposed by the authors for function approximation problems classifies the learning data in a phased manner, focusing on their learnability prior to learning in the multilayered neural network, and the validity of the method is demonstrated. Specifically, this paper verifies through computer experiments that both the learning accuracy and the learning time of the BP method are improved when it is used as the learning rule within the multi-stage learning method. Oscillatory phenomena of the learning curve play an important role in learning performance, and the authors discuss the mechanisms by which such oscillations occur. Furthermore, the authors discuss, based on behavior observed during learning, the reasons why the errors of some data remain large even after learning.
Keywords: data selection, function approximation problem, multi-stage learning, neural network, voluntary oscillation.
100 Comparative Study on the Effect of Substitution of Li and Mg Instead of Ca on Structural and Biological Behaviors of Silicate Bioactive Glass
Authors: Alireza Arab, Morteza Elsa, Amirhossein Moghanian
Abstract:
In this study, experiments were carried out to achieve a promising multifunctional, modified silicate-based bioactive glass (BG). The main aim was to investigate the effect of lithium (Li) and magnesium (Mg) substitution on the in vitro bioactivity of substituted 58S BG. The modified BGs were synthesized by the sol-gel method in the quaternary systems 60SiO2–(36-x)CaO–4P2O5–(x)Li2O and 60SiO2–(36-x)CaO–4P2O5–(x)MgO (where x = 0, 5, 10 mol.%). Their performance was investigated in terms of biocompatibility and antibacterial activity, as well as their effect on alkaline phosphatase (ALP) activity and the proliferation of MC3T3 cells. The antibacterial efficiency was evaluated against methicillin-resistant Staphylococcus aureus bacteria. To do so, CaO was substituted with Li2O and MgO up to 10 mol % in the 58S BGs, and the samples were then immersed in simulated body fluid for up to 14 days and characterized by X-ray diffraction, Fourier transform infrared spectroscopy, inductively coupled plasma atomic emission spectrometry, and scanning electron microscopy. The results indicated that this modification retarded in vitro hydroxyapatite (HA) formation, owing to the lower supersaturation degree for HA nucleation compared with 58S BG, with magnesium showing the more pronounced effect. The 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) and ALP analyses showed that substitution of both Li2O and MgO up to 5 mol % increased biocompatibility and stimulated the proliferation of pre-osteoblast MC3T3 cells in comparison with the control specimen. Regarding bactericidal efficiency, substituting either Li or Mg for Ca in the 58S BG composition led to statistically significant differences in the antibacterial behavior of the substituted BGs. The sample containing 5 mol % CaO/Li2O substitution (BG-5L) was selected as a multifunctional biomaterial for bone repair/regeneration owing to its improved biocompatibility, enhanced ALP activity, and antibacterial efficiency among all the synthesized L-BGs and M-BGs.
Keywords: Alkaline, alkaline earth, bioactivity, biomedical applications, sol-gel processes.
99 Competitors’ Influence Analysis of a Retailer by Using Customer Value and Huff’s Gravity Model
Authors: Yepeng Cheng, Yasuhiko Morimoto
Abstract:
Customer relationship analysis is vital for retail stores, especially supermarkets. Point-of-sale (POS) systems make it possible to record the daily purchasing behavior of customers in an identification point-of-sale (ID-POS) database, which can be used to analyze the customer behavior of a supermarket. Customer value is an indicator, based on the ID-POS database, of customer loyalty to a store. In general, there are many supermarkets in a city, and nearby competitor supermarkets significantly affect the customer value of a supermarket's customers. However, it is impossible to obtain detailed ID-POS databases of competitor supermarkets. This study therefore first focused on customer value and the distance between a customer's home and the supermarkets in a city, and then constructed logistic regression models to analyze the correlations between distance and purchasing behavior using only the POS database of one supermarket chain. Three primary problems arose during modeling: the incomparability of customer values, multicollinearity between customer value and distance data, and the number of valid partial regression coefficients. The improved customer value, Huff's gravity model, and the inverse attractiveness frequency are used to solve these problems. This paper presents three types of models, based on these three methods, for loyal customer classification and competitor influence analysis. In numerical experiments, all types of models are useful for loyal customer classification, and the model that includes all three methods is the best for evaluating the influence of nearby supermarkets on the purchasing of a chain's customers, from the viewpoint of valid partial regression coefficients and accuracy.
Keywords: Customer value, Huff's Gravity Model, POS, retailer.
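Huff's gravity model itself is compact: the probability that customer i patronizes store j scales with store attractiveness raised to α over distance raised to β, normalized over all stores. A numpy sketch with made-up attractiveness values and exponents:

```python
import numpy as np

def huff_probabilities(attract, dist, alpha=1.0, beta=2.0):
    """Huff's gravity model: P_ij = A_j^alpha / d_ij^beta, normalized
       over all stores j for each customer i."""
    util = attract[None, :] ** alpha / dist ** beta   # (customers, stores)
    return util / util.sum(axis=1, keepdims=True)

attract = np.array([3.0, 1.5, 2.0])          # hypothetical store floor sizes
dist = np.array([[0.5, 1.2, 2.0],            # km from each customer's home
                 [1.5, 0.4, 1.0]])
print(huff_probabilities(attract, dist))     # each row sums to 1
```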
98 Experimental Investigation into Chaotic Features of Flow Gauges in Automobile Fuel Metering System
Authors: S. K. Fasogbon
Abstract:
Chaotic behavior may lead to instability, extreme sensitivity, and performance reduction in control systems. It is therefore important to understand the causes of such undesirable characteristics, especially in automobile fuel gauges, because without accurate fuel gauges it is difficult, if not impossible, to plan a journey, whether during odd hours of the day or where fuel is difficult to obtain. To this end, this work studied the impact of fuel tank rust and of a faulty fuel gauge component (the voltage stabilizer) on the chaotic characteristics of fuel gauges. The results obtained were analyzed using the Graph iSOFT package. Over the range of experiments conducted, the results showed that fuel tank rust alters the fluid density, consequently the fluid pressure, and ultimately the flow velocity of the fuel. The responses of the fuel gauge pointer to the faulty voltage stabilizer were erratic, causing noticeable instability in the indicated measurands. The experiment also showed that the gauge performed optimally, with the highest accuracy, under the combined rust-free tank and non-faulty voltage stabilizer condition (±6.75% measurand error), compared with the rust-free tank alone (±15% measurand error) and the non-faulty voltage stabilizer alone (±40% measurand error). The study concludes that both fuel tank rust and a faulty voltage stabilizer have a significant effect on the sensitivity, and ultimately the accuracy, of the fuel gauge. On the strength of the literature, our findings may also hold for other fluid meters and gauges used in plant machinery and most hydraulic systems.
97 An Integrated CFD and Experimental Analysis on Double-Skin Window
Authors: Sheam-Chyun Lin, Wei-Kai Chen, Hung-Cheng Yen, Yung-Jen Cheng, Yu-Cheng Chen
Abstract:
As natural resources constantly dwindle, alternative ways to reduce everyday costs will urgently need to be found in the near future. Based on the solar chimney principle, an ancient technique dating back to Roman times, the double-skin façade is simply composed of two large glass panels serving daylighting and natural ventilation during the daytime. A double-skin façade is generally installed on the exterior of a building, functioning as a window, so it receives a large amount of passive solar energy to induce airflow on every sunny day. This article therefore proposes a domestic double-skin window for residential use and attempts to improve the volume flow rate in the cavity between the panels through frame geometry design, the installation of an outlet guide plate, and a solar energy collection system. Numerical analyses are applied to investigate the characteristics of the flow field, with the simulation boundary conditions based entirely on experiments with the original prototype. The prototype is then redesigned based on the numerical results and fluid dynamic theory, and experiments on the modified prototype are conducted to verify the simulation results. The inlet velocities of the cases increase by 5%, 45%, and 15% relative to the experimental data, and the numerical simulations report a 20% improvement in volume flow rate for both the frame geometry design and the installation of the outlet guide plate.
Keywords: Solar energy, Double-skin façades, Thermal buoyancy, Fluid machinery.
96 Normalizing Flow to Augmented Posterior: Conditional Density Estimation with Interpretable Dimension Reduction for High Dimensional Data
Authors: Cheng Zeng, George Michailidis, Hitoshi Iyatomi, Leo L Duan
Abstract:
The conditional density characterizes the distribution of a response variable y given a predictor x and plays a key role in many statistical tasks, including classification and outlier detection. Although there has been abundant work on Conditional Density Estimation (CDE) for a low-dimensional response in the presence of a high-dimensional predictor, little has been done for a high-dimensional response such as images. The promising performance of normalizing flow (NF) neural networks in unconditional density estimation is a motivating starting point. In this work, we extend NF neural networks to the case where an external x is present. Specifically, we use the NF to parameterize a one-to-one transform between a high-dimensional y and a latent z comprising two components [zP, zN]. The zP component is a low-dimensional subvector obtained from the posterior distribution of an elementary predictive model for x, such as logistic/linear regression. The zN component is a high-dimensional independent Gaussian vector that explains the variation in y unrelated, or only weakly related, to x. Unlike existing CDE methods, the proposed approach, coined Augmented Posterior CDE (AP-CDE), requires only a simple modification of the common normalizing flow framework while significantly improving the interpretability of the latent component, since zP represents a supervised dimension reduction. In image analytics applications, AP-CDE shows good separation of x-related variation, due to factors such as lighting condition and subject ID, from other random variation. Further, the experiments show that an unconditional NF neural network based on an unsupervised model of z, such as a Gaussian mixture, fails to generate interpretable results.
Keywords: Conditional density estimation, image generation, normalizing flow, supervised dimension reduction.
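A minimal PyTorch sketch of the building block that AP-CDE modifies: an affine coupling layer of a normalizing flow, returning the transformed variable and the log-determinant needed for density evaluation. This is a generic RealNVP-style layer under our own choices of width and nonlinearity, not the authors' implementation; the split of z into [zP, zN] is applied on top of a stack of such transforms.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """Generic RealNVP-style coupling layer: z2 = y2 * exp(s(y1)) + t(y1)."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(nn.Linear(self.half, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2 * (dim - self.half)))

    def forward(self, y):
        y1, y2 = y[:, :self.half], y[:, self.half:]
        s, t = self.net(y1).chunk(2, dim=1)
        s = torch.tanh(s)                    # keep scales numerically stable
        z2 = y2 * torch.exp(s) + t
        log_det = s.sum(dim=1)               # log|det Jacobian| of the map
        return torch.cat([y1, z2], dim=1), log_det

layer = AffineCoupling(dim=8)
y = torch.randn(4, 8)                        # batch of high-dimensional y
z, log_det = layer(y)                        # z could then be split [zP, zN]
```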