Search results for: Dynamic model
5224 Puff Noise Detection and Cancellation for Robust Speech Recognition
Authors: Sangjun Park, Jungpyo Hong, Byung-Ok Kang, Yun-keun Lee, Minsoo Hahn
Abstract:
In this paper, an algorithm for detecting and attenuating puff noises frequently generated in mobile environments is proposed. As a baseline system, a puff detection system is designed based on a Gaussian Mixture Model (GMM), and 39th-order Mel Frequency Cepstral Coefficients (MFCCs) are extracted as feature parameters. To improve the detection performance, effective acoustic features for puff detection are proposed. In addition, detected puff intervals are attenuated by high-pass filtering. The speech recognition rate was measured for evaluation, and a confusion matrix and ROC curve are used to confirm the validity of the proposed system.
Keywords: Gaussian mixture model, puff detection and cancellation, speech enhancement.
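A minimal sketch of the kind of GMM-based frame classification on MFCC features described above. The library choices (librosa, scikit-learn), the 13 + delta + delta-delta feature layout and the file names are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch: GMM-based puff/speech frame classification on 39-dim MFCC features.
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_39(path, sr=16000):
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)           # 13 static coefficients
    feats = np.vstack([mfcc, librosa.feature.delta(mfcc),        # + delta
                       librosa.feature.delta(mfcc, order=2)])    # + delta-delta -> 39 dims
    return feats.T                                               # (frames, 39)

# Train one GMM per class on labelled training audio (hypothetical file names).
gmm_puff = GaussianMixture(n_components=8, covariance_type='diag').fit(mfcc_39('puff_train.wav'))
gmm_speech = GaussianMixture(n_components=8, covariance_type='diag').fit(mfcc_39('speech_train.wav'))

# Frame-wise detection: a frame is flagged as puff when its puff log-likelihood wins.
test = mfcc_39('test_utterance.wav')
is_puff = gmm_puff.score_samples(test) > gmm_speech.score_samples(test)
```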
5223 Metabolic Analysis of Fibroblast Conditioned Media and Comparison with Theoretical Modeling
Authors: Priyanka Gupta, Paul Verma, Kerry Hourigan, Jayesh Bellare, Sameer Jadhav
Abstract:
Understanding the consumption and production of various metabolites in fibroblast conditioned media is needed for its proper and optimized use in the expansion of pluripotent stem cells. For this purpose, we have used the HPLC method to analyse the consumption of glucose and the production of lactate over time by mouse embryonic fibroblasts. The experimental data have also been compared with mathematical model fits. A total of 0.025 moles of lactate was produced after 72 hrs, while the glucose content decreased from 0.017 moles to 0.011 moles. The mathematical model was able to predict the trends of glucose consumption and lactate production.
Keywords: Conditioned media, HPLC, metabolite analysis, mouse embryonic fibroblast.
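The abstract does not state the form of the mathematical model used; purely as an illustration, a simple first-order glucose-depletion fit could be set up as below. The intermediate data points and the model form are assumptions; only the endpoint values come from the abstract.

```python
# Hedged sketch: fitting glucose depletion with G(t) = G0 * exp(-k t) (assumed model).
import numpy as np
from scipy.optimize import curve_fit

t = np.array([0.0, 24.0, 48.0, 72.0])             # hours
glucose = np.array([0.017, 0.0155, 0.013, 0.011])  # moles (intermediate values assumed)

def first_order(t, g0, k):
    return g0 * np.exp(-k * t)

(g0, k), _ = curve_fit(first_order, t, glucose, p0=(0.017, 0.01))
print(f"fitted G0 = {g0:.4f} mol, k = {k:.4f} 1/h")
```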
5222 Competitors’ Influence Analysis of a Retailer by Using Customer Value and Huff’s Gravity Model
Authors: Yepeng Cheng, Yasuhiko Morimoto
Abstract:
Customer relationship analysis is vital for retail stores, especially for supermarkets. Point of sale (POS) systems make it possible to record the daily purchasing behaviors of customers as an identification point of sale (ID-POS) database, which can be used to analyze the customer behaviors of a supermarket. The customer value is an indicator based on the ID-POS database for detecting the customer loyalty of a store. In general, there are many supermarkets in a city, and other nearby competitor supermarkets significantly affect the customer value of a supermarket's customers. However, it is impossible to get detailed ID-POS databases of competitor supermarkets. This study first focused on the customer value and the distance between a customer's home and the supermarkets in a city, and then constructed models based on logistic regression analysis to analyze the correlations between distance and purchasing behaviors using only the POS database of a supermarket chain. During the modeling process, three primary problems arose: the incomparability of customer values, the multicollinearity between customer value and distance data, and the number of valid partial regression coefficients. The improved customer value, Huff's gravity model, and the inverse attractiveness frequency are considered to solve these problems. This paper presents three types of models based on these three methods for loyal customer classification and competitors' influence analysis. In numerical experiments, all types of models are useful for loyal customer classification. The model that includes all three methods is the most suitable one for evaluating the influence of the other nearby supermarkets on customers' purchasing at a supermarket chain, from the viewpoint of valid partial regression coefficients and accuracy.
Keywords: Customer value, Huff's gravity model, POS, retailer.
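For readers unfamiliar with Huff's gravity model, the sketch below shows the standard patronage-probability computation: the probability that a customer at home i chooses store j from store attractiveness S_j and home-store distance d_ij. The attractiveness values, distances and exponents are made-up examples, not the paper's data.

```python
# Illustrative Huff's gravity model computation (standard form, hypothetical numbers).
import numpy as np

S = np.array([1200.0, 800.0, 1500.0])        # store attractiveness (e.g. floor area)
d = np.array([[0.5, 1.2, 2.0],               # distances (km) from 2 homes to 3 stores
              [1.8, 0.7, 1.1]])
alpha, beta = 1.0, 2.0                       # attractiveness and distance-decay exponents

utility = S**alpha / d**beta                 # element-wise, shape (homes, stores)
P = utility / utility.sum(axis=1, keepdims=True)
print(P)   # each row sums to 1: patronage probabilities per home
```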
5221 Low Light Image Enhancement with Multi-Stage Interconnected Autoencoders Integration in Pix-to-Pix GAN
Authors: Muhammad Atif, Cang Yan
Abstract:
The enhancement of low-light images is a significant area of study aimed at improving the quality of captured images in challenging lighting environments. Recently, methods based on Convolutional Neural Networks (CNN) have gained prominence as they offer state-of-the-art performance. However, many approaches based on CNN rely on increasing the size and complexity of the neural network. In this study, we propose an alternative method for improving low-light images using an Autoencoders-based multiscale knowledge transfer model. Our method leverages the power of three autoencoders, where the encoders of the first two autoencoders are directly connected to the decoder of the third autoencoder. Additionally, the decoders of the first two autoencoders are connected to the encoder of the third autoencoder. This architecture enables effective knowledge transfer, allowing the third autoencoder to learn and benefit from the enhanced knowledge extracted by the first two autoencoders. We further integrate the proposed model into the Pix-to-Pix GAN framework. By integrating our proposed model as the generator in the GAN framework, we aim to produce enhanced images that not only exhibit improved visual quality but also possess a more authentic and realistic appearance. The experimental results, both qualitative and quantitative, show that our method outperforms state-of-the-art methodologies.
Keywords: Low light image enhancement, deep learning, convolutional neural network, image processing.
5220 Security Weaknesses of Dynamic ID-based Remote User Authentication Protocol
Authors: Hyoungseob Lee, Donghyun Choi, Yunho Lee, Dongho Won, Seungjoo Kim
Abstract:
Recently, with the appearance of smart cards, many user authentication protocols using smart cards have been proposed to mitigate the vulnerabilities in the user authentication process. In 2004, Das et al. proposed an ID-based user authentication protocol using smart cards that is secure against ID theft and replay attacks. In 2009, Wang et al. showed that Das et al.'s protocol is not secure against the randomly chosen password attack and the impersonation attack, and proposed an improved protocol. Their protocol provides mutual authentication and efficient password management. In this paper, we analyze the security weaknesses and point out the vulnerabilities of Wang et al.'s protocol.
Keywords: Message alteration attack, impersonation attack.
5219 Influence of High Temperature and Humidity on Polymer Composites Used in Relining of Sewage
Authors: Parastou Kharazmi, Folke Björk
Abstract:
Some of the main causes of degradation of polymeric materials are thermal aging, hydrolysis, oxidation or chemical degradation by acids, alkalis or water. The first part of this paper provides a brief summary of advances in the technology, methods and specification of composite materials for relining as a rehabilitation technique for sewage systems. The second part summarizes an investigation of composite materials frequently used for relining in Sweden, the rubber-filled epoxy composite and the reinforced polyester composite, when they were immersed in deionized water or kept in dry conditions at elevated temperatures of up to 80°C in the laboratory. The tests were conducted by visual inspection, microscopy, Dynamic Mechanical Analysis (DMA) and Differential Scanning Calorimetry (DSC), as well as mechanical testing, three-point bending and tensile testing.
Keywords: Composite, epoxy, polyester, relining, sewage.
5218 Parametric Cost Estimating Relationships for Design Effort Estimation
Authors: Adil Salam, Nadia Bhuiyan, Gerard J. Gouw
Abstract:
The Canadian aerospace industry faces many challenges. One of them is the difficulty in estimating costs. In particular, the design effort required in a project impacts resource requirements and lead-time, and consequently the final cost. This paper presents the findings of a case study conducted for a recognized global leader in the design and manufacturing of aircraft engines. The study models parametric cost estimation relationships to estimate the design effort of integrated blade-rotor low-pressure compressor fans. Several effort drivers are selected to model the relationship. Comparative analyses of three types of models are conducted. The model with the best accuracy and significance in design estimation is retained.
Keywords: Effort estimation, design, aerospace.
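As an illustration of a parametric cost-estimating relationship, the sketch below fits a power-law form effort = a * x1^b1 * x2^b2 by ordinary least squares in log space. The driver names and numbers are hypothetical; the paper's actual drivers and model form are not reproduced here.

```python
# Hedged sketch: log-linear fit of a power-law cost-estimating relationship (CER).
import numpy as np

x1 = np.array([10., 14., 20., 25., 32.])            # e.g. blade count (assumed driver)
x2 = np.array([1.2, 1.5, 1.9, 2.3, 3.0])            # e.g. relative complexity (assumed)
effort = np.array([420., 610., 900., 1150., 1700.]) # person-hours (made-up data)

X = np.column_stack([np.ones_like(x1), np.log(x1), np.log(x2)])
coef, *_ = np.linalg.lstsq(X, np.log(effort), rcond=None)
a, b1, b2 = np.exp(coef[0]), coef[1], coef[2]
print(f"effort ~ {a:.1f} * x1^{b1:.2f} * x2^{b2:.2f}")
```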
5217 Controlling the Angle of Attack of an Aircraft Using Genetic Algorithm Based Flight Controller
Authors: S. Swain, P. S Khuntia
Abstract:
In this paper, the unstable angle of attack of a FOXTROT aircraft is controlled using a Genetic Algorithm based flight controller, and the result is compared with conventional techniques for tuning the PID controller, such as Tyreus-Luyben (TL), Ziegler-Nichols (ZN) and the Interpolation Rule (IR). In addition, performance indices such as the Mean Square Error (MSE), Integral Square Error (ISE), and Integral Absolute Time Error (IATE) are improved by using the Genetic Algorithm. It was established that the error obtained using the GA is much lower than with the conventional techniques, thereby improving the performance indices of the dynamic system.
Keywords: Angle of Attack, Genetic Algorithm, Performance Indices, PID Controller.
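A compact sketch of GA-based PID tuning that minimises the integral square error of a unit-step response. The plant below is a generic second-order model, not the FOXTROT angle-of-attack dynamics, and the GA settings are illustrative assumptions.

```python
# Hedged sketch: real-coded GA tuning of PID gains against ISE for an assumed plant.
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 1000)

def ise(gains):
    kp, ki, kd = gains
    # Plant G(s) = 1 / (s^2 + 2s + 1) (assumed); closed loop with PID C(s).
    num = [kd, kp, ki]
    den = np.polyadd([1.0, 2.0, 1.0, 0.0], [0.0, kd, kp, ki])   # closed-loop denominator
    _, y = signal.step(signal.TransferFunction(num, den), T=t)
    err = (1.0 - y) ** 2
    return float(np.nan_to_num(err.sum() * (t[1] - t[0]), nan=1e9, posinf=1e9))

# Simple GA: truncation selection, blend crossover, Gaussian mutation.
pop = rng.uniform(0.1, 20.0, size=(40, 3))
for _ in range(60):
    fitness = np.array([ise(ind) for ind in pop])
    parents = pop[np.argsort(fitness)][:20]
    children = []
    for _ in range(20):
        a, b = parents[rng.integers(20, size=2)]
        child = 0.5 * (a + b) + rng.normal(0, 0.5, 3)           # crossover + mutation
        children.append(np.clip(child, 0.01, 50.0))
    pop = np.vstack([parents, children])

best = pop[np.argmin([ise(ind) for ind in pop])]
print("Kp, Ki, Kd =", best)
```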
5216 Wavelet-Based Spectrum Sensing for Cognitive Radios using Hilbert Transform
Authors: Shiann-Shiun Jeng, Jia-Ming Chen, Hong-Zong Lin, Chen-Wan Tsung
Abstract:
For cognitive radio networks, spectrum sensing is a major problem, i.e., dynamic spectrum management. It is an important issue to sense and identify the spectrum holes in cognitive radio networks. The first-order derivative scheme is usually used to detect the edges of the spectrum. In this paper, a novel spectrum sensing technique for cognitive radio is presented. The proposed algorithm offers efficient edge detection. Simulation results show the performance of the first-order derivative scheme and the proposed scheme, and indicate that the proposed scheme achieves better performance than the first-order derivative scheme.
Keywords: Cognitive radio, spectrum sensing, wavelet, edge detection.
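As a point of reference for the baseline the abstract compares against, the sketch below runs derivative-based edge detection on an estimated power spectral density. The synthetic signal, smoothing window and threshold are illustrative assumptions, not the paper's wavelet/Hilbert-transform method.

```python
# Hedged sketch: first-order-derivative edge detection on a Welch PSD estimate.
import numpy as np
from scipy import signal

fs = 1.0e6
t = np.arange(200_000) / fs
x = (np.cos(2 * np.pi * 150e3 * t) +            # occupied spectrum around 150 kHz
     0.7 * np.cos(2 * np.pi * 320e3 * t) +      # occupied spectrum around 320 kHz
     0.5 * np.random.default_rng(1).standard_normal(t.size))

f, psd = signal.welch(x, fs=fs, nperseg=4096)
log_psd = 10 * np.log10(psd)
smooth = np.convolve(log_psd, np.ones(31) / 31, mode='same')    # simple smoothing
edges = f[np.abs(np.gradient(smooth)) > 0.15]                   # derivative-peak candidates
print("candidate spectral transitions (Hz):", edges[:10])
```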
5215 BIDENS: Iterative Density Based Biclustering Algorithm With Application to Gene Expression Analysis
Authors: Mohamed A. Mahfouz, M. A. Ismail
Abstract:
Biclustering is a very useful data mining technique for identifying patterns where different genes are correlated based on a subset of conditions in gene expression analysis. Association rule mining is an efficient approach to achieve biclustering, as in the BIMODULE algorithm, but it is sensitive to the values given to its input parameters and to the discretization procedure used in the preprocessing step; moreover, when noise is present, classical association rule miners discover multiple small fragments of the true bicluster but miss the true bicluster itself. This paper formally presents a generalized noise-tolerant bicluster model, termed μBicluster. An iterative algorithm termed BIDENS, based on the proposed model, is introduced that can discover a set of k possibly overlapping biclusters simultaneously. Our model uses a more flexible method to partition the dimensions in order to preserve meaningful and significant biclusters. The proposed algorithm allows discovering biclusters that are hard to discover by BIMODULE. An experimental study on yeast and human gene expression data and several artificial datasets shows that our algorithm offers substantial improvements over several previously proposed biclustering algorithms.
Keywords: Machine learning, biclustering, bi-dimensional clustering, gene expression analysis, data mining.
5214 A Comparison of Recent Methods for Solving a Model 1D Convection Diffusion Equation
Authors: Ashvin Gopaul, Jayrani Cheeneebash, Kamleshsing Baurhoo
Abstract:
In this paper we study some numerical methods to solve a model one-dimensional convection–diffusion equation. The semi-discretisation of the space variable results in a system of ordinary differential equations, and the solution of the latter involves the evaluation of a matrix exponential. Since the calculation of this term is computationally expensive, we study methods based on Krylov subspaces and on the restrictive Taylor series approximation, respectively. We also consider the Chebyshev Pseudospectral collocation method for the spatial discretisation, and we present the numerical solutions obtained by these methods.
Keywords: Chebyshev Pseudospectral collocation method, convection-diffusion equation, restrictive Taylor approximation.
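A small sketch of the matrix-exponential approach for the semi-discretised 1D convection-diffusion equation u_t + a u_x = d u_xx with homogeneous Dirichlet boundaries: u(t) = exp(tA) u(0), evaluated with SciPy's Krylov-type routine. The grid size, coefficients and initial profile are illustrative choices, not the paper's test case.

```python
# Hedged sketch: central-difference semi-discretisation advanced via expm_multiply.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import expm_multiply

n, a, d = 200, 1.0, 0.01                      # grid points, convection, diffusion
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)

# Central differences for d*u_xx - a*u_x give a tridiagonal operator A.
lower = d / h**2 + a / (2 * h)
main = -2.0 * d / h**2
upper = d / h**2 - a / (2 * h)
A = diags([lower, main, upper], offsets=[-1, 0, 1], shape=(n, n))

u0 = np.exp(-200 * (x - 0.3) ** 2)            # initial Gaussian pulse
u_half = expm_multiply(0.5 * A, u0)           # solution at t = 0.5 without forming exp(A)
print(u_half.max())
```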
5213 Improvement of Soft Clay Using Floating Cement Dust-Lime Columns
Authors: Adel Belal, Sameh Aboelsoud, Mohy Elmashad, Mohammed Abdelmonem
Abstract:
The two main criteria that control the design and performance of footings are the bearing capacity and the settlement of the soil. In soft soils, the construction of buildings, storage tanks, warehouses, etc. on weak soils usually involves excessive settlement problems. To solve bearing capacity or settlement problems, soil improvement may be considered using different techniques, including encased cement dust–lime columns. The proposed research studies the effect of adding floating encased cement dust and lime mix columns to soft clay on the clay bearing capacity. Four experimental tests were carried out. Column diameters of 3.0 cm, 4.0 cm, and 5.0 cm and a column length of 60% of the clay layer thickness were used. A numerical model was constructed and verified using a commercial finite element package (PLAXIS 2D, V8.5). The verified model was used to study the effect of distributing columns around the footing at different distances. The study showed that the floating cement dust lime columns enhanced the clay bearing capacity by 262%. The numerical model showed that the columns around the footing have a limited effect on the clay improvement.
Keywords: Bearing capacity, cement dust – lime columns, ground improvement, soft clay.
5212 A Novel Web Metric for the Evaluation of Internet Trends
Authors: Radek Malinský, Ivan Jelínek
Abstract:
Web 2.0 (social networking, blogging and online forums) can serve as a data source for social science research because it contains a vast amount of information from many different users. The volume of that information has been growing at a very high rate, and it is becoming a network of heterogeneous data; this makes information difficult to find and therefore of limited use. We have proposed a novel theoretical model for gathering and processing data from Web 2.0, which would reflect the semantic content of web pages in a better way. This article deals with the analysis part of the model and its usage for the content analysis of blogs. The introductory part of the article describes the methodology for gathering and processing data from blogs. The next part of the article focuses on the evaluation and content analysis of blogs that write about a specific trend.
Keywords: Blog, sentiment analysis, Web 2.0, webometrics.
5211 Modeling of the Process Parameters using Soft Computing Techniques
Authors: Miodrag T. Manić, Dejan I. Tanikić, Miloš S. Stojković, Dalibor M. Đenadić
Abstract:
The design of technological procedures for manufacturing certain products demands the definition and optimization of technological process parameters. Their determination depends on the model of the process itself and its complexity. Certain processes do not have an adequate mathematical model, and thus they are modeled using heuristic methods. The first part of this paper presents the state of the art of using soft computing techniques in manufacturing processes from the perspective of applicability in modern CAx systems. Methods of artificial intelligence which can be used for this purpose are analyzed. The second part of this paper shows some of the developed models of certain processes, as well as their applicability in the actual calculation of parameters of some technological processes within the design system from the viewpoint of productivity.
Keywords: Fuzzy logic, manufacturing, neural networks.
5210 Knowledge Discovery from Production Databases for Hierarchical Process Control
Authors: Pavol Tanuska, Pavel Vazan, Michal Kebisek, Dominika Jurovata
Abstract:
The paper gives the results of a project oriented towards the use of knowledge discovered from production systems for the needs of hierarchical process control. One of the main project goals was the proposal of a knowledge discovery model for process control. Specific data mining methods and techniques were used for the defined problems of process control. The gained knowledge was applied to a real production system, and thus the proposed solution has been verified. The paper documents how it is possible to apply the newly discovered knowledge in real hierarchical process control. The opportunities for application of the proposed knowledge discovery model for hierarchical process control are specified.
Keywords: Hierarchical process control, knowledge discovery from databases, neural network.
5209 Iterative Way to Acquire Information Technology for Defense and Aerospace
Authors: Ahmet Denker, Hakan Gürkan
Abstract:
The Defense and Aerospace environment is continuously striving to keep up with increasingly sophisticated Information Technology (IT) in order to remain effective in today's dynamic and unpredictable threat environment. This makes IT one of the largest and fastest growing expenses of Defense. Hundreds of millions of dollars are spent each year on IT projects, but too many of those millions are wasted on costly mistakes: systems that do not work properly, new components that are not compatible with old ones, trendy new applications that do not really satisfy defense needs, or money lost through poorly managed contracts. This paper investigates and compiles effective strategies that aim to end the exasperation with the low returns and high cost of Information Technology acquisition for defense; it tries to show how to maximize value while reducing time and expenditure.
Keywords: Iterative process, acquisition management, project management, software economics, requirement analysis.
5208 Effective Charge Coupling in Low Dimensional Doped Quantum Antiferromagnets
Authors: Suraka Bhattacharjee, Ranjan Chaudhury
Abstract:
The interaction between the charge degrees of freedom of itinerant antiferromagnets is investigated in terms of the generalized charge stiffness constant corresponding to the nearest-neighbour t-J model and the t1-t2-t3-J model. The low dimensional hole doped antiferromagnets are well-known systems that can be described by t-J-like models. Accordingly, we have used these models to investigate the fermionic pairing possibilities and the coupling between the itinerant charge degrees of freedom. A detailed comparison between spin and charge couplings highlights that the charge and spin couplings show very similar behaviour in the over-doped region, whereas they show completely different trends in the lower doping regimes. Moreover, a qualitative equivalence between the generalized charge stiffness and the effective Coulomb interaction is also established based on comparisons with other theoretical and experimental results. Thus, it is evident that the enhanced possibility of fermionic pairing is inherent in the reduction of Coulomb repulsion with increasing doping concentration. However, the increased possibility cannot give rise to pairing without the presence of some other pair-producing mechanism outside the t-J model. Therefore, one can conclude that the t-J-like models on their own are not capable of producing conventional momentum-based superconducting pairing.
Keywords: Generalized charge stiffness constant, charge coupling, effective Coulomb interaction, t-J-like models, momentum-space pairing.
5207 Estimation of Time-Varying Linear Regression with Unknown Time-Volatility via Continuous Generalization of the Akaike Information Criterion
Authors: Elena Ezhova, Vadim Mottl, Olga Krasotkina
Abstract:
The problem of estimating time-varying regression is inevitably concerned with the necessity to choose the appropriate level of model volatility - ranging from the full stationarity of instant regression models to their absolute independence of each other. In the stationary case the number of regression coefficients to be estimated equals that of regressors, whereas the absence of any smoothness assumptions augments the dimension of the unknown vector by the factor of the time-series length. The Akaike Information Criterion is a commonly adopted means of adjusting a model to the given data set within a succession of nested parametric model classes, but its crucial restriction is that the classes are rigidly defined by the growing integer-valued dimension of the unknown vector. To make the Kullback information maximization principle underlying the classical AIC applicable to the problem of time-varying regression estimation, we extend it onto a wider class of data models in which the dimension of the parameter is fixed, but the freedom of its values is softly constrained by a family of continuously nested a priori probability distributions.
Keywords: Time varying regression, time-volatility of regression coefficients, Akaike Information Criterion (AIC), Kullback information maximization principle.
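For reference, the classical Akaike Information Criterion that the paper generalizes is the standard textbook form (not taken from the paper itself):

```latex
\mathrm{AIC} = 2k - 2\ln \hat{L},
```

where k is the number of estimated parameters and \hat{L} is the maximised likelihood; model selection picks the class minimising AIC. In the time-varying setting described above, k would grow with the length of the series unless the freedom of the parameter values is constrained, which is what motivates the continuous generalization.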
5206 Speaker Independent Quranic Recognizer Based on Maximum Likelihood Linear Regression
Authors: Ehab Mourtaga, Ahmad Sharieh, Mousa Abdallah
Abstract:
An automatic speech recognition system for the formal Arabic language is needed. The Quran is the most formal spoken book in Arabic, and it is recited all over the world. In this research, a speaker-independent automatic speech recognizer for Quranic recitation was developed and tested. The system was developed based on triphone Hidden Markov Models and Maximum Likelihood Linear Regression (MLLR). MLLR computes a set of transformations which reduce the mismatch between an initial model set and the adaptation data. It uses a regression class tree and estimates a set of linear transformations for the mean and variance parameters of a Gaussian mixture HMM system. The 30th chapter of the Quran, read by five of the most famous readers of the Quran, was used for training and testing. The chapter includes about 2000 distinct words. The advantages of using the Quranic verses as the database in this recognizer are the uniqueness of the words and the high level of orderliness between verses. The accuracy on the tested data ranged from 68% to 85%.
Keywords: Hidden Markov Model (HMM), Maximum Likelihood Linear Regression (MLLR), Quran, regression class tree, speech recognition, speaker-independent.
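For context, the mean adaptation at the heart of MLLR is usually written in the following standard form (textbook notation, not reproduced from the paper):

```latex
\hat{\mu} = A\mu + b = W\xi, \qquad
\xi = \begin{bmatrix} 1 \\ \mu \end{bmatrix}, \quad
W = \begin{bmatrix} b & A \end{bmatrix},
```

where \mu is a Gaussian mean of the initial HMM set, \xi its extended form, and W the transform shared by all Gaussians in a regression class and estimated by maximising the likelihood of the adaptation data.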
5205 Model Inversion of a Two Degrees of Freedom Linearized PUMA from Bicausal Bond Graphs
Authors: Gilberto Gonzalez-A, Ignacio Rodríguez- A., Dunia Nuñez-P
Abstract:
A bond graph model of a two degrees of freedom PUMA is described. System inversion gives the system input required to generate a given system output. In order to get the system inversion of the PUMA manipulator, a linearization of the nonlinear bond graph is obtained. Hence, the bicausality of the linearized bond graph of the PUMA manipulator is applied. Thus, the bicausal bond graph provides a systematic way of generating the equations of the system inversion. Simulation results to verify the calculated input for a given output are shown.
Keywords: Bond graph, system inversion, bicausality, PUMA manipulator.
5204 Fragility Assessment for Torsionally Asymmetric Buildings in Plan
Authors: S. Feli, S. Tavousi Tafreshi, A. Ghasemi
Abstract:
The present paper aims at evaluating the response of three-dimensional buildings with in-plan stiffness irregularities that have been subjected simultaneously to two-way excitation ground motion records. This study is a broad-based fragility assessment with greater emphasis on the structural response at the in-plan flexible and stiff sides. To this end, three types of three-dimensional 5-story steel building structures with stiffness eccentricities were subjected to extensive nonlinear incremental dynamic analyses (IDA) utilizing Ibarra-Krawinkler deterioration models. Fragility assessment was implemented for different configurations of braces to investigate the losses in buildings with center of resisting (CR) eccentricities.
Keywords: Ibarra Krawinkler, fragility assessment, flexible and stiff side, center of resisting.
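As background, fragility curves derived from IDA results are commonly expressed with the lognormal form below. This is the standard functional form used in the field, not necessarily the exact fitting choice of this study:

```latex
P(\mathrm{DS} \geq ds \mid \mathrm{IM} = x) = \Phi\!\left(\frac{\ln(x/\theta)}{\beta}\right),
```

where \theta is the median intensity measure causing the damage state, \beta the lognormal dispersion, and \Phi the standard normal cumulative distribution function.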
5203 Simultaneous HPAM/SDS Injection in Heterogeneous/Layered Models
Authors: M. H. Sedaghat, A. Zamani, S. Morshedi, R. Janamiri, M. Safdari, I. Mahdavi, A. Hosseini, A. Hatampour
Abstract:
Although many experiments have been done in enhanced oil recovery, few of them consider the effects of local and global heterogeneity on the efficiency of enhanced oil recovery based on polymer-surfactant flooding. In this research, we have performed numerous water flooding and polymer-surfactant flooding experiments on a five-spot glass micromodel under different conditions, such as different positions of the layers. For these experiments, five different micromodels with three different pore structures were designed: three models with different layer orientations, one homogeneous model and one heterogeneous model. In order to capture the effect of heterogeneity of the porous media, the three types of pore structures were distributed randomly and in equal ratio throughout the heterogeneous micromodel network according to a random normal distribution. The results show that the maximum EOR recovery factor occurs when the layers are orthogonal to the path of the mainstream, and the minimum EOR recovery factor occurs when the model is heterogeneous. These experiments show that in polymer-surfactant flooding, the EOR recovery factor increases as the angle of the layers increases, and this recovery factor is strongly affected by local heterogeneity around the injection zone.
Keywords: Layered Reservoir, Micromodel, Local Heterogeneity, Polymer-Surfactant Flooding, Enhanced Oil Recovery.
5202 Complementing Assessment Processes with Standardized Tests: A Work in Progress
Authors: Amparo Camacho
Abstract:
ABET accredited programs must assess the development of student learning outcomes (SOs) in engineering programs. Different institutions implement different strategies for this assessment, and they are usually designed “in house.” This paper presents a proposal for including standardized tests to complement the ABET assessment model in an engineering college made up of six distinct engineering programs. The engineering college formulated a model of quality assurance in education to be implemented throughout the six engineering programs to regularly assess and evaluate the achievement of SOs in each program offered. The model uses diverse techniques and sources of data to assess student performance and to implement actions of improvement based on the results of this assessment. The model is called “Assessment Process Model” and it includes SOs A through K, as defined by ABET. SOs can be divided into two categories: “hard skills” and “professional skills” (soft skills). The first includes abilities such as applying knowledge of mathematics, science, and engineering and designing and conducting experiments, as well as analyzing and interpreting data. The second category, “professional skills”, includes communicating effectively and understanding professional and ethical responsibility. Within the Assessment Process Model, various tools were used to assess SOs, related to both “hard” as well as “soft” skills. The assessment tools designed included rubrics, surveys, questionnaires, and portfolios. In addition to these instruments, the Engineering College decided to use tools that systematically gather consistent quantitative data. For this reason, an in-house exam was designed and implemented, based on the curriculum of each program. Even though this exam was administered during various academic periods, it is not currently considered standardized. In 2017, the Engineering College included three standardized tests: one to assess mathematical and scientific reasoning and two more to assess reading and writing abilities. With these exams, the college hopes to obtain complementary information that can help better measure the development of both hard and soft skills of students in the different engineering programs. In the first semester of 2017, the three exams were given to three sample groups of students from the six different engineering programs. Students in the sample groups were from the first, fifth, and tenth semester cohorts. At the time of submission of this paper, the engineering college has descriptive statistical data and is working with various statisticians to carry out a more in-depth and detailed analysis of the sample groups' achievement on the three exams. The overall objective of including standardized exams in the assessment model is to identify more precisely the least developed SOs in order to define and implement the educational strategies necessary for students to achieve them in each engineering program.
Keywords: Assessment, hard skills, soft skills, standardized tests.
5201 CFD Simulations to Validate Two and Three Phase Up-flow in Bubble Columns
Authors: Shyam Kumar, Nannuri Srinivasulu, Ashok Khanna
Abstract:
Bubble columns have a variety of applications in absorption, bio-reactions, catalytic slurry reactions, and coal liquefaction, because they are simple to operate, provide good heat and mass transfer, and have low operational cost. The use of Computational Fluid Dynamics (CFD) for bubble columns becomes important, since it can describe the fluid hydrodynamics on both local and global scales. The Euler-Euler two-phase fluid model has been used to simulate two-phase (air and water) transient up-flow in a bubble column (15 cm diameter) using FLUENT 6.3. These simulations and experiments were carried out over a range of superficial gas velocities in the bubbly flow and churn turbulent regimes (1 to 16 cm/s) at ambient conditions. The liquid velocity was varied from 0 to 16 cm/s. The turbulence in the liquid phase is described using the standard k-ε model. The interactions between the two phases are described through a drag coefficient formulation (Schiller-Naumann). The objectives are to validate the CFD simulations with experimental data and to obtain grid-independent numerical solutions. Quantitatively good agreement is obtained between the experimental hold-up data and the simulated values. Axial liquid velocity profiles and gas holdup profiles were also obtained from the simulation.
Keywords: Bubble column, computational fluid dynamics, gas holdup profile, k-ε model.
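For reference, the Schiller-Naumann correlation commonly used for the interphase drag coefficient in Euler-Euler bubble-column simulations is shown below as a standard correlation; the exact solver settings of the study are not reproduced here.

```python
# Schiller-Naumann drag coefficient for a sphere vs. particle Reynolds number
# (standard correlation; illustrative, not the study's FLUENT configuration).
def schiller_naumann_cd(re):
    """Drag coefficient as a function of the particle Reynolds number."""
    if re <= 0:
        raise ValueError("Reynolds number must be positive")
    if re <= 1000.0:
        return 24.0 / re * (1.0 + 0.15 * re**0.687)
    return 0.44

print(schiller_naumann_cd(100.0))   # ~1.09 in the intermediate regime
```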
5200 Design of Nonlinear Observer by Using Chebyshev Interpolation based on Formal Linearization
Authors: Kazuo Komatsu, Hitoshi Takata
Abstract:
This paper discusses the design of a nonlinear observer by a formal linearization method using Chebyshev interpolation, in order to facilitate the synthesis of a nonlinear observer and to improve the precision of the linearization. A dynamic nonlinear system is linearized with respect to a linearization function, and a measurement equation is transformed into an augmented linear one by the formal linearization method based on Chebyshev interpolation. To the linearized system, linear estimation theory is applied and a nonlinear observer is derived. To show the effectiveness of the observer design, numerical experiments are illustrated, and they indicate that the design shows remarkable performance for nonlinear systems.
Keywords: Nonlinear system, nonlinear observer, formal linearization, Chebyshev interpolation.
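A small sketch of the approximation step that underlies the formal linearization described above: interpolating a nonlinear function with a Chebyshev series. The example nonlinearity, interval and degree are arbitrary illustrations, not the paper's system.

```python
# Hedged sketch: Chebyshev interpolation of an example nonlinear measurement function.
import numpy as np
from numpy.polynomial import chebyshev as C

f = lambda x: np.sin(x) + 0.3 * x**3            # example nonlinear function (assumed)
a, b = -2.0, 2.0                                # approximation interval
deg = 7

# Sample at Chebyshev points mapped to [a, b] and fit a degree-7 Chebyshev series.
nodes = 0.5 * (a + b) + 0.5 * (b - a) * np.cos(np.pi * (np.arange(deg + 1) + 0.5) / (deg + 1))
coeffs = C.chebfit(nodes, f(nodes), deg)

x = np.linspace(a, b, 401)
err = np.max(np.abs(C.chebval(x, coeffs) - f(x)))
print(f"max interpolation error on [{a}, {b}]: {err:.2e}")
```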
5199 A Visco-elastic Model for High-density Cellulose Insulation Materials
Authors: Jonas Engqvist, Per Hard af Segerstad, Birger Bring, Mathias Wallin
Abstract:
A macroscopic constitutive equation is developed for a high-density cellulose insulation material with emphasis on the out-of-plane stress relaxation behavior. A hypothesis is proposed in which the total stress is additively composed of an out-of-plane visco-elastic isotropic contribution and an in-plane elastic orthotropic response. The theory is validated against out-of-plane stress relaxation and compressive experiments and in-plane tensile hysteresis, respectively. For large scale finite element simulations, the presented model provides a balance between simplicity and capturing the material's constitutive behaviour.
Keywords: Cellulose insulation materials, constitutive modelling, material characterisation, pressboard.
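The abstract does not state the specific form of the out-of-plane visco-elastic contribution; a common representative choice for stress relaxation in such constitutive models is a Prony (generalized Maxwell) series for the relaxation modulus, shown here only as background:

```latex
E(t) = E_{\infty} + \sum_{i=1}^{N} E_i \, e^{-t/\tau_i},
```

where E_\infty is the long-term modulus and the pairs (E_i, \tau_i) are the Prony moduli and relaxation times.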
5198 Orthogonal Array Application and Response Surface Method Approach for Optimal Product Values: An Application for Oil Blending Process
Authors: Christopher C. Ihueze, Constance C. Obiuto, Christian E. Okafor, Charles C. Okpala
Abstract:
This paper presents a methodical approach for designing and optimizing process parameters in oil blending industries. Twenty-seven replicated experiments were conducted for the production of A-Z crown super oil (SAE20W/50) employing an L9 orthogonal array to establish the process response parameters. A power law model was fitted to the experimental data, and the obtained model was optimized by applying the central composite design (CCD) of response surface methodology (RSM). A quadratic model was found to be significant for the production of A-Z crown super oil. In the course of analyzing the batch productions of A-Z crown super oil, the study recognized and specified four new lubricant formulations that conform to the ISO oil standard: L1: KV = 21.8293 cSt, BS200 = 9430.00 Litres, Ad102 = 11024.00 Litres, PVI = 2520 Litres; L2: KV = 22.513 cSt, BS200 = 12430.00 Litres, Ad102 = 11024.00 Litres, PVI = 2520 Litres; L3: KV = 22.1671 cSt, BS200 = 9430.00 Litres, Ad102 = 10481.00 Litres, PVI = 2520 Litres; L4: KV = 22.8605 cSt, BS200 = 12430.00 Litres, Ad102 = 10481.00 Litres, PVI = 2520 Litres. The analysis of variance showed that the quadratic model is significant for kinematic viscosity production, while the R-sq statistic of 0.99936 showed that the variation of kinematic viscosity is due to its relationship with the control factors. This study therefore resulted in appropriate blending proportions of lubricant base oil and additives and recommends the optimal kinematic viscosity of A-Z crown super oil (SAE20W/50) to be 22.86 cSt.
Keywords: Additives, control factors, kinematic viscosity, lubricant, orthogonal array, process parameter.
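For reference, the second-order (quadratic) model that CCD/RSM fits has the standard general form below (textbook form, not the paper's fitted coefficients):

```latex
y = \beta_0 + \sum_{i=1}^{k} \beta_i x_i + \sum_{i=1}^{k} \beta_{ii} x_i^2 + \sum_{i<j} \beta_{ij} x_i x_j + \varepsilon,
```

where the x_i are the coded control factors (here the BS200, Ad102 and PVI volumes), y the response (kinematic viscosity), and \varepsilon the error term.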
5197 Prediction of Writer Using Tamil Handwritten Document Image Based on Pooled Features
Authors: T. Thendral, M. S. Vijaya, S. Karpagavalli
Abstract:
A Tamil handwritten document is taken as a key source of data to identify the writer. Tamil is a classical language which has 247 characters, including compound characters, consonants, vowels and special characters. Most characters of Tamil are multifaceted in nature. Handwriting is a unique feature of an individual. Writers may change their handwriting according to their frame of mind, and this poses a difficult challenge in identifying the writer. A new discriminative model with pooled features of handwriting is proposed and implemented using a support vector machine. A prediction accuracy of 100% has been reported by the RBF and polynomial kernel based classification models.
Keywords: Classification, Feature extraction, Support vector machine, Training, Writer.
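A hedged sketch of the classification stage described above: a support vector machine with an RBF kernel trained on pooled handwriting feature vectors. Feature extraction from the Tamil document images is assumed to have been done already; the arrays below are placeholders, not the paper's features or parameters.

```python
# Hedged sketch: RBF-kernel SVM for writer identification on pooled feature vectors.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))        # pooled feature vectors (placeholder data)
y = rng.integers(0, 10, size=200)     # writer identities (placeholder labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel='rbf', C=10.0, gamma='scale').fit(X_tr, y_tr)
print("writer prediction accuracy:", clf.score(X_te, y_te))
```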
5196 Prediction of Writer Using Tamil Handwritten Document Image Based on Pooled Features
Authors: T. Thendral, M. S. Vijaya, S. Karpagavalli
Abstract:
A Tamil handwritten document is taken as a key source of data to identify the writer. Tamil is a classical language which has 247 characters, including compound characters, consonants, vowels and special characters. Most characters of Tamil are multifaceted in nature. Handwriting is a unique feature of an individual. Writers may change their handwriting according to their frame of mind, and this poses a difficult challenge in identifying the writer. A new discriminative model with pooled features of handwriting is proposed and implemented using a support vector machine. A prediction accuracy of 100% has been reported by the RBF and polynomial kernel based classification models.
Keywords: Classification, Feature extraction, Support vector machine, Training, Writer.
5195 Self-Sensing Concrete Nanocomposites for Smart Structures
Authors: A. D'Alessandro, F. Ubertini, A. L. Materazzi
Abstract:
In the field of civil engineering, Structural Health Monitoring is a topic of growing interest. Effective monitoring instruments permit the control of the working conditions of structures and infrastructures through the identification of behavioral anomalies due to incipient damage, especially in areas of high environmental hazard such as earthquakes. While traditional sensors can be applied only at a limited number of points, providing partial information for a structural diagnosis, novel transducers may allow diffuse sensing. Thanks to the new tools and materials provided by nanotechnology, new types of multifunctional sensors are developing in the scientific panorama. In particular, cement-matrix composite materials capable of diagnosing their own state of strain and stress can be obtained by the addition of specific conductive nanofillers. Because of the nature of the material they are made of, these new cementitious nano-modified transducers can be inserted within concrete elements, transforming the structures themselves into sets of distributed sensors. This paper is aimed at presenting the results of research on a new self-sensing nanocomposite and on the implementation of smart sensors for Structural Health Monitoring. The developed nanocomposite has been obtained by inserting multi-walled carbon nanotubes within a cementitious matrix. The insertion of such conductive carbon nanofillers provides the base material with piezoresistive characteristics and a distinctive sensitivity to mechanical deformation. The self-sensing ability is achieved by correlating the variation of the external stress or strain with the variation of some electrical properties, such as the electrical resistance or conductivity. Through the measurement of such electrical characteristics, the performance and the working conditions of an element or a structure can be monitored. Among conductive carbon nanofillers, carbon nanotubes seem to be particularly promising for the realization of self-sensing cement-matrix materials. Some issues related to the nanofiller dispersion or to the amount of nano-inclusions in the cement matrix need to be carefully investigated, since the strain sensitivity of the resulting sensors is influenced by such factors. This work analyzes the dispersion of the carbon nanofillers, the physical properties of the fresh mixture, the electrical properties of the hardened composites and the sensing properties of the realized sensors. The experimental campaign focuses specifically on their dynamic characterization and their applicability to the monitoring of full-scale elements. The results of the electromechanical tests with both slowly varying and dynamic loads show that the developed nanocomposite sensors can be effectively used for the health monitoring of structures.
Keywords: Carbon nanotubes, self-sensing nanocomposites, smart cement-matrix sensors, structural health monitoring.
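The self-sensing principle rests on the piezoresistive relation between strain and the fractional change of electrical resistance; in its simplest linearised form this is the standard gauge-factor relation (background, not a result of the paper):

```latex
\frac{\Delta R}{R_0} = \lambda \, \varepsilon,
```

where R_0 is the unstrained resistance, \varepsilon the axial strain and \lambda the gauge factor of the nanocomposite; monitoring \Delta R / R_0 during loading therefore provides an estimate of the strain state.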