Search results for: Homogenization method
4735 A Green Design for Assembly Model for Integrated Design Evaluation and Assembly and Disassembly Sequence Planning
Authors: Yuan-Jye Tseng, Fang-Yu Yu, Feng-Yi Huang
Abstract:
A green design for assembly model is presented to integrate design evaluation with assembly and disassembly sequence planning by evaluating the three activities in one integrated model. For an assembled product, an assembly sequence planning model is required for assembling the product at the start of the product life cycle, and a disassembly sequence planning model is needed for disassembling it at the end. In a green product life cycle, it is important to plan how a product can be disassembled, reused, or recycled before the product is actually assembled and produced. Given a product requirement, there may be several alternative design cases for the same product, and the assembly and disassembly sequences for producing the product can differ between cases. In this research, a new model is presented to concurrently evaluate the design and plan the assembly and disassembly sequences. First, the components are represented using graph-based models. Next, a particle swarm optimization (PSO) method with a new encoding scheme is developed, in which a particle is represented by a position matrix defining an assembly sequence and a disassembly sequence. The assembly and disassembly sequences can be simultaneously planned with the objective of minimizing the total of assembly and disassembly costs. The test results show that the presented method is feasible and efficient for solving the integrated design evaluation and assembly and disassembly sequence planning problem. An example product is implemented and illustrated in this paper.
Keywords: green design, assembly and disassembly sequence planning, green design for assembly, particle swarm optimization.
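The abstract does not give the full position-matrix encoding, so the sketch below illustrates sequence planning with PSO using a common random-key encoding instead: each particle's continuous position vector is decoded into a part ordering by argsort. The part count, precedence pairs, and cost function are hypothetical stand-ins, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_parts, n_particles, iters = 6, 20, 100
precedence = [(0, 2), (1, 3), (2, 4)]  # hypothetical: part a must precede part b

def sequence_cost(order):
    """Toy cost: handling cost plus a heavy penalty per violated precedence."""
    rank = np.argsort(order)  # rank[i] = position of part i in the sequence
    penalty = 100.0 * sum(rank[a] > rank[b] for a, b in precedence)
    return float(np.sum(np.abs(np.diff(order)))) + penalty

# Random-key PSO: continuous positions are decoded into permutations by argsort.
x = rng.random((n_particles, n_parts))
v = np.zeros_like(x)
pbest = x.copy()
pbest_cost = np.array([sequence_cost(np.argsort(p)) for p in x])
gbest = pbest[pbest_cost.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, n_parts))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = x + v
    cost = np.array([sequence_cost(np.argsort(p)) for p in x])
    better = cost < pbest_cost
    pbest[better], pbest_cost[better] = x[better], cost[better]
    gbest = pbest[pbest_cost.argmin()].copy()

print("best assembly order:", np.argsort(gbest), "cost:", pbest_cost.min())
```

In the paper's formulation a particle encodes both an assembly and a disassembly sequence; the same random-key idea extends to that case by decoding two key vectors per particle.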
PDF Downloads 1778

4734 A Study on Abnormal Behavior Detection in BYOD Environment
Authors: Dongwan Kang, Joohyung Oh, Chaetae Im
Abstract:
Advances in communication technologies and smart devices in recent years are leading to integrated wired and wireless communication environments. Businesses began introducing mobile devices into their operations early on in order to improve productivity, and the closed corporate environment gradually shifted to an open structure. Recently, individual users' interest in working with mobile devices has increased, and a new corporate working environment under the concept of BYOD is drawing attention. BYOD (bring your own device) is a concept whereby individuals bring in and use their own devices in business activities. Through BYOD, businesses can anticipate improved productivity and a reduction in the cost of purchasing devices. However, because of security threats caused by frequent loss and theft of personal devices, and corporate data leaks due to weak security, companies are reluctant to adopt BYOD. In addition, without consideration of the diversity of devices and connection environments, existing network-based security equipment is limited in its ability to detect abnormal behaviors such as information leaks. This study suggests a method to detect abnormal behaviors according to individual behavioral patterns, rather than the existing signature-based malicious behavior detection, and discusses applications of this method in the BYOD environment.
Keywords: BYOD, Security, Anomaly Behavior Detection.
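As a minimal sketch of per-user behavioral profiling (the abstract does not specify the detection model), the example below learns a baseline of each user's daily upload volume and flags observations that deviate by more than three standard deviations. The feature choice, users, and threshold are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical telemetry: daily outbound data volume (MB) for each user's device.
history = {"alice": rng.normal(50, 5, 60), "bob": rng.normal(120, 15, 60)}

def build_profile(samples):
    """Per-user behavioral baseline: mean and spread of past activity."""
    return samples.mean(), samples.std()

def is_abnormal(profile, observed, k=3.0):
    """Flag behavior deviating more than k standard deviations from baseline."""
    mu, sigma = profile
    return abs(observed - mu) > k * sigma

profiles = {user: build_profile(h) for user, h in history.items()}
print(is_abnormal(profiles["alice"], 49.0))   # typical day   -> False
print(is_abnormal(profiles["alice"], 900.0))  # bulk transfer -> True
```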
PDF Downloads 2067

4733 Hybrid Temporal Correlation Based on Gaussian Mixture Model Framework for View Synthesis
Authors: Deng Zengming, Wang Mingjiang
Abstract:
As 3D video has been explored as a hot research topic over the last few decades, free-viewpoint TV (FTV) is without doubt a promising field for its better visual experience and incomparable interactivity. View synthesis is a crucial technology for FTV; it makes it possible to render images at an unlimited number of virtual viewpoints using information from a limited number of reference views. In this paper, a novel hybrid synthesis framework is proposed and blending priority is explored. In contrast to the commonly used View Synthesis Reference Software (VSRS), the presented synthesis process takes the temporal correlation of the image sequences into account. The temporal correlations are exploited to produce fine synthesis results even near the foreground boundaries. As for the blending priority, the proposed scheme selects one of the two reference views as the main reference view, based on the distance between the reference views and the virtual view; the other view is chosen as the auxiliary viewpoint, which merely assists in filling hole pixels with the help of background information. Significant improvement of the proposed approach over the state-of-the-art pixel-based virtual view synthesis method is presented: the experimental results show that subjective gains can be observed, while objective PSNR gains average from 0.5 to 1.3 dB and SSIM gains average from 0.01 to 0.05.
Keywords: View synthesis, Gaussian mixture model, hybrid framework, fusion method.
PDF Downloads 993

4732 Inferring User Preference Using Distance Dependent Chinese Restaurant Process and Weighted Distribution for a Content Based Recommender System
Authors: Bagher Rahimpour Cami, Hamid Hassanpour, Hoda Mashayekhi
Abstract:
Nowadays, websites provide a vast number of resources for users. Recommender systems have been developed as an essential element of these websites to provide a personalized environment, helping users retrieve resources of interest from large sets of available resources. Due to the dynamic nature of user preference, constructing an appropriate model to estimate user preference is the major task of recommender systems. Profile matching and latent factors are the two main approaches to identifying user preference. In this paper, we employ latent factors and profile matching to cluster the user profile and identify user preference, respectively. The method uses the Distance Dependent Chinese Restaurant Process as a Bayesian nonparametric framework to extract the latent factors from the user profile. These latent factors are mapped to user interests, and a weighted distribution is used to identify user preferences. We evaluate the proposed method using a real-world dataset that contains news tweets of a news agency (BBC). The experimental results and comparisons show the superior recommendation accuracy of the proposed approach relative to existing methods, and its ability to effectively evolve over time.
Keywords: Content-based recommender systems, dynamic user modeling, extracting user interests, predicting user preference.
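For readers unfamiliar with the distance dependent Chinese Restaurant Process, the sketch below draws a partition from a ddCRP prior: each item links to another item with probability proportional to a decaying function of their distance (or to itself with probability proportional to α), and clusters are the connected components of the link graph. The decay function, α, and the toy items are assumptions; the paper's posterior inference over user profiles is not shown.

```python
import numpy as np

rng = np.random.default_rng(2)

def ddcrp_partition(distances, alpha=1.0, decay=lambda d: np.exp(-d)):
    """Sample cluster assignments from a ddCRP prior.

    distances: (n, n) symmetric matrix of pairwise item distances.
    """
    n = distances.shape[0]
    links = np.empty(n, dtype=int)
    for i in range(n):
        weights = decay(distances[i]).astype(float)
        weights[i] = alpha          # self-link starts a new cluster
        weights /= weights.sum()
        links[i] = rng.choice(n, p=weights)

    # Clusters are connected components of the undirected link graph.
    labels = np.arange(n)
    def find(a):
        while labels[a] != a:
            labels[a] = labels[labels[a]]
            a = labels[a]
        return a
    for i, j in enumerate(links):
        labels[find(i)] = find(j)
    return np.array([find(i) for i in range(n)])

# Toy example: items on a line; nearby items tend to share a cluster.
pos = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 9.0])
dist = np.abs(pos[:, None] - pos[None, :])
print(ddcrp_partition(dist))
```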
PDF Downloads 815

4731 A Comparison of Inverse Simulation-Based Fault Detection in a Simple Robotic Rover with a Traditional Model-Based Method
Authors: Murray L. Ireland, Kevin J. Worrall, Rebecca Mackenzie, Thaleia Flessa, Euan McGookin, Douglas Thomson
Abstract:
Robotic rovers which are designed to work in extra-terrestrial environments present a unique challenge in terms of the reliability and availability of systems throughout the mission. Should some fault occur, with the nearest human potentially millions of kilometres away, detection and identification of the fault must be performed solely by the robot and its subsystems. Faults in the system sensors are relatively straightforward to detect, through the residuals produced by comparison of the system output with that of a simple model. However, faults in the input, that is, the actuators of the system, are harder to detect. A step change in the input signal, caused potentially by the loss of an actuator, can propagate through the system, resulting in complex residuals in multiple outputs. These residuals can be difficult to isolate or distinguish from residuals caused by environmental disturbances. While a more complex fault detection method or additional sensors could be used to solve these issues, an alternative is presented here. Using inverse simulation (InvSim), the inputs and outputs of the mathematical model of the rover system are reversed: for a desired trajectory, the corresponding actuator inputs are obtained. A step fault near the input then manifests itself as a step change in the residual between the system inputs and the input trajectory obtained through inverse simulation. This approach avoids the need for additional hardware on a mass- and power-critical system such as a rover. The InvSim fault detection method is applied to a simple four-wheeled rover in simulation. Additive system faults and an external disturbance force are applied to the vehicle in turn, such that the dynamic response and sensor output of the rover are affected. Basic model-based fault detection is first employed to provide output residuals which may be analysed for information on the fault/disturbance. InvSim-based fault detection is then employed, similarly providing input residuals which give further information on the fault/disturbance. The input residuals are shown to provide clearer information on the location and magnitude of an input fault than the output residuals; additionally, they allow faults to be more clearly discriminated from environmental disturbances.
Keywords: Fault detection, inverse simulation, rover, ground robot.
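A minimal sketch of the idea, using a hypothetical first-order actuator model x' = -a·x + b·u rather than the paper's rover dynamics: the forward simulation produces the measured trajectory, the inverse simulation differentiates that trajectory to recover the input that explains it, and a step fault at the actuator appears as a clean step in the input residual.

```python
import numpy as np

a, b, dt, n = 1.0, 2.0, 0.01, 1000
u_cmd = np.ones(n)                      # commanded input
u_act = u_cmd.copy()
u_act[n // 2:] -= 0.4                   # actuator fault: step loss at mid-run

# Forward simulation (explicit Euler) of x' = -a*x + b*u_act.
x = np.zeros(n)
for k in range(n - 1):
    x[k + 1] = x[k] + dt * (-a * x[k] + b * u_act[k])

# Inverse simulation: recover the input that explains the measured output,
# u_inv = (x' + a*x) / b, using a finite-difference derivative.
dx = np.gradient(x, dt)
u_inv = (dx + a * x) / b

residual = u_cmd - u_inv                # input residual
print("residual before fault: %.3f" % residual[n // 4])
print("residual after  fault: %.3f" % residual[3 * n // 4])  # ~0.4 step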
PDF Downloads 946

4730 Combining the Deep Neural Network with the K-Means for Traffic Accident Prediction
Authors: Celso L. Fernando, Toshio Yoshii, Takahiro Tsubota
Abstract:
Understanding the causes of road accidents and predicting their occurrence is key to preventing deaths and serious injuries from road accident events. Traditional statistical methods such as Poisson and logistic regressions have been used to find the association of traffic environmental factors with accident occurrence; more recently, the artificial neural network (ANN), a computational technique that learns from historical data to make more accurate predictions, has emerged. Despite its ability to make accurate predictions, the ANN has difficulty dealing with a highly unbalanced distribution of attribute patterns in the training dataset; in such circumstances, the ANN treats the minority group as noise. However, in real-world data the minority group is often the group of interest; e.g., in road traffic accident data, the accident events are the group of interest. This study proposes a combination of k-means with the ANN to improve the predictive ability of the neural network model by alleviating the effect of the unbalanced distribution of attribute patterns in the training dataset. The results show that the proposed method improves the ability of the neural network to make predictions on a dataset with a highly unbalanced distribution of attribute patterns; on an evenly distributed dataset, however, the proposed method performs almost like a standard neural network.
Keywords: Accident risk estimation, artificial neural network, deep learning, k-means, road safety.
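The abstract does not spell out how k-means is combined with the network; one common arrangement, sketched below under that assumption, is to replace the majority class with its k-means centroids so the downstream classifier trains on a balanced set. Pure numpy; the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

def kmeans(X, k, iters=50):
    """Plain k-means; returns the cluster centroids."""
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):  # keep the old center if a cluster empties
                centers[j] = X[labels == j].mean(axis=0)
    return centers

# Highly unbalanced toy data: 1000 non-accident vs. 50 accident records.
X_major = rng.normal(0.0, 1.0, (1000, 2))
X_minor = rng.normal(2.5, 1.0, (50, 2))

# Compress the majority class to as many centroids as there are minority
# samples, yielding a balanced training set for the neural network.
X_major_red = kmeans(X_major, k=len(X_minor))
X_bal = np.vstack([X_major_red, X_minor])
y_bal = np.r_[np.zeros(len(X_major_red)), np.ones(len(X_minor))]
print(X_bal.shape, "class counts:", np.bincount(y_bal.astype(int)))
```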
PDF Downloads 974

4729 Ranking Genes from DNA Microarray Data of Cervical Cancer by a Local Tree Comparison
Authors: Frank Emmert-Streib, Matthias Dehmer, Jing Liu, Max Muhlhauser
Abstract:
The major objective of this paper is to introduce a new method to select genes from DNA microarray data. As a criterion for selecting genes, we suggest measuring the local changes in the correlation graph of each gene and selecting those genes whose local changes are largest. More precisely, we calculate correlation networks from DNA microarray data of cervical cancer, where each network represents a tissue of a certain tumor stage and each node in a network represents a gene. From these networks we extract one tree for each gene by a local decomposition of the correlation network. The interpretation of a tree is that its n-th level holds the n-nearest neighbor genes, measured by the Dijkstra distance; hence, the tree gives the local embedding of a gene within the correlation network. For the obtained trees we measure the pairwise similarity between trees rooted at the same gene from normal to cancerous tissues. This evaluates the modification of the tree topology due to tumor progression. Finally, we rank the obtained similarity values from all tissue comparisons and select the top-ranked genes; for these genes, the local neighborhood in the correlation networks changes most between normal and cancerous tissues. As a result we find that the top-ranked genes are candidates suspected to be involved in tumor growth. This indicates that our method captures essential information from the underlying DNA microarray data of cervical cancer.
Keywords: Graph similarity, generalized trees, graph alignment, DNA microarray data, cervical cancer.
PDF Downloads 1753

4728 Wastewater Treatment with Ammonia Recovery System
Authors: M. Örvös, T. Balázs, K. F. Both
Abstract:
From an environmental standpoint, purification of ammonia-containing wastewater is required. Ammonia can be desorbed from the water with high efficiency by air at a suitable temperature. After the desorption process, the ammonia can be recovered and used in another technology. The calculation method described below provides ways to find either the minimum column height or the ammonia-rich solution of the effluent.
Keywords: Absorber, desorber, packed column.
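The abstract does not reproduce the calculation itself; a standard sizing relation for a packed stripping column, Z = HTU × NTU with the number of transfer units evaluated from the concentration driving force, is sketched below. All numerical inputs (HTU, inlet/outlet concentrations, equilibrium backpressure) are hypothetical, and a constant equilibrium value is assumed for the dilute case.

```python
import math

# Packed-column sizing by the transfer-unit method: Z = HTU * NTU.
# x: liquid-phase ammonia mole fraction; x_eq: equilibrium value with the air.
HTU = 0.6                      # height of a transfer unit, m (assumed packing datum)
x_in, x_out = 1.0e-3, 5.0e-5   # inlet / target outlet liquid concentration
x_eq = 1.0e-5                  # assumed (near-)constant equilibrium backpressure

# NTU for a dilute system with a constant equilibrium concentration.
NTU = math.log((x_in - x_eq) / (x_out - x_eq))
Z = HTU * NTU
print(f"NTU = {NTU:.2f}, required packed height Z = {Z:.2f} m")
```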
PDF Downloads 2668

4727 Unsteady Laminar Boundary Layer Forced Flow in the Region of the Stagnation Point on a Stretching Flat Sheet
Authors: A. T. Eswara
Abstract:
This paper analyses the unsteady, two-dimensional stagnation point flow of an incompressible viscous fluid over a flat sheet when the flow is started impulsively from rest and, at the same time, the sheet is suddenly stretched in its own plane with a velocity proportional to the distance from the stagnation point. The partial differential equations governing the laminar boundary layer forced convection flow are non-dimensionalised using semi-similar transformations and then solved numerically using an implicit finite-difference scheme known as the Keller-box method. Results pertaining to the flow and heat transfer characteristics are computed for all dimensionless times, uniformly valid in the whole spatial region, without any numerical difficulties. Analytical solutions are also obtained for both small and large times, representing the initial unsteady and the final steady-state flow and heat transfer, respectively. Numerical results indicate that the velocity ratio parameter has a significant effect on the skin friction and the heat transfer rate at the surface. Furthermore, it is shown that there is a smooth transition from the initial unsteady-state flow (small-time solution) to the final steady state (large-time solution).
Keywords: Forced flow, Keller-box method, stagnation point, stretching flat sheet, unsteady laminar boundary layer, velocity ratio parameter.
PDF Downloads 1695

4726 Non-Local Behavior of a Mixed-Mode Crack in a Functionally Graded Piezoelectric Medium
Authors: Nidhal Jamia, Sami El-Borgi
Abstract:
In this paper, the problem of a mixed-mode crack embedded in an infinite medium made of a functionally graded piezoelectric material (FGPM), with crack surfaces subjected to electro-mechanical loadings, is investigated. Eringen's non-local theory of elasticity is adopted to formulate the governing electro-elastic equations. The properties of the piezoelectric material are assumed to vary exponentially along a plane perpendicular to the crack. Using the Fourier transform, three integral equations are obtained in which the unknown variables are the jumps of the mechanical displacements and electric potentials across the crack surfaces. To solve the integral equations, the unknowns are directly expanded as a series of Jacobi polynomials, and the resulting equations are solved using the Schmidt method. In contrast to classical solutions based on the local theory, it is found that no mechanical stress or electric displacement singularities are present at the crack tips when the non-local theory is employed to investigate the problem. A direct benefit is the ability to use the calculated maximum stress as a fracture criterion. The primary objective of this study is to investigate the effects of the crack length, the material gradient parameter describing FGPMs, and the lattice parameter on the mechanical stress and electric displacement fields near the crack tips.
Keywords: Functionally graded piezoelectric material, mixed-mode crack, non-local theory, Schmidt method.
PDF Downloads 998

4725 Security Design of Root of Trust Based on RISC-V
Authors: Kang Huang, Wanting Zhou, Shiwei Yuan, Lei Li
Abstract:
As information technology develops rapidly, security has become increasingly critical for computer systems. In particular, as cloud computing and the Internet of Things (IoT) continue to gain widespread adoption, computer systems face new security threats and attacks. The Root of Trust (RoT) is the foundation for providing basic trusted computing and is used to verify the security and trustworthiness of other components. Designing a reliable RoT and guaranteeing its own security are essential for improving the overall security and credibility of computer systems. In this paper, we discuss the implementation of self-security technology based on a RISC-V RoT at the hardware level. To effectively safeguard the security of the RoT, security safeguard technologies for the RoT are studied. First, a lightweight secure boot framework is proposed as a security mechanism. Second, two kinds of memory protection mechanisms are built to defend against memory attacks. Moreover, the hardware implementation of the proposed method is investigated. A series of experiments and tests have been carried out to verify the effectiveness of the proposed method. The experimental results demonstrate that the proposed approach is effective in verifying the integrity of the RoT's own boot ROM, user instructions, and data, ensuring authenticity and enabling the secure boot of the RoT's own system. Additionally, our approach provides memory protection against certain types of memory attacks, such as cache leaks and tampering, and ensures the security of root-of-trust sensitive information, including keys.
Keywords: Root of Trust, secure boot, memory protection, hardware security.
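To make the secure-boot idea concrete, here is a minimal software sketch (not the paper's hardware design): each boot stage measures the next image and releases control only if its digest matches a reference value held by the root of trust. The image bytes and digest provisioning are hypothetical.

```python
import hashlib
import hmac

# Reference digest provisioned into the (simulated) root of trust.
boot_image = b"\x13\x37" * 512            # hypothetical next-stage firmware
TRUSTED_DIGEST = hashlib.sha256(boot_image).hexdigest()

def secure_boot_stage(image: bytes, trusted_digest: str) -> bool:
    """Measure the image and release control only if the digest matches."""
    measured = hashlib.sha256(image).hexdigest()
    # Constant-time comparison avoids leaking digest bytes via timing.
    return hmac.compare_digest(measured, trusted_digest)

print(secure_boot_stage(boot_image, TRUSTED_DIGEST))   # True: boot proceeds
tampered = boot_image[:-1] + b"\x00"
print(secure_boot_stage(tampered, TRUSTED_DIGEST))     # False: halt
```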
PDF Downloads 80

4724 Optimization of Air Pollution Control Model for Mining
Authors: Zunaira Asif, Zhi Chen
Abstract:
Sustainable air quality management is recognized as one of the most serious environmental concerns in mining regions. Mining operations emit various types of pollutants which have significant impacts on the environment. This study presents a stochastic control strategy, developing an air pollution control model to achieve a cost-effective solution. The optimization method is formulated to predict the cost of treatment using linear programming with an objective function and multiple constraints. The constraints mainly address two factors: production of metal should not exceed the available resources, and air quality should meet the standard criteria for each pollutant. The applicability of this model is explored through a case study of an open pit metal mine in Utah, USA. The method uses meteorological data in a dispersion transfer function to reflect the practical local conditions. The probabilistic analysis and the uncertainties in the meteorological conditions are handled by Monte Carlo simulation. Reasonable results have been obtained for selecting the optimized treatment technology for PM2.5, PM10, NOx, and SO2. An additional comparison analysis shows that the baghouse is the least-cost option for particulate matter, compared with the electrostatic precipitator and wet scrubbers, whereas non-selective catalytic reduction and dry flue-gas desulfurization are suitable for NOx and SO2 reduction, respectively. Thus, this model can aid planners in reducing these pollutants at marginal cost by suggesting pollution control devices, while accounting for dynamic meteorological conditions and mining activities.
Keywords: Air pollution, linear programming, mining, optimization, treatment technologies.
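As an illustration of the linear-programming structure (a cost objective subject to removal-requirement and capacity constraints), the sketch below uses scipy.optimize.linprog with made-up unit costs and limits; it is not the paper's calibrated model, which also embeds dispersion and Monte Carlo terms.

```python
import numpy as np
from scipy.optimize import linprog

# Decision variables: tonnes of PM removed by baghouse, ESP, wet scrubber.
cost = np.array([12.0, 18.0, 25.0])        # $ per tonne removed (hypothetical)

# Constraints (A_ub @ x <= b_ub):
#  - total removal must be at least 900 t  ->  -sum(x) <= -900
A_ub = np.array([[-1.0, -1.0, -1.0]])
b_ub = np.array([-900.0])
bounds = [(0, 500), (0, 400), (0, 300)]    # per-device capacity, t

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("optimal allocation (t):", res.x, "total cost ($):", res.fun)
```

The solver fills the cheapest devices first (here 500 t baghouse + 400 t ESP), mirroring the paper's finding that the baghouse is the least-cost particulate option.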
PDF Downloads 1607

4723 Robust Integrated Design for a Mechatronic Feed Drive System of Machine Tools
Authors: Chin-Yin Chen, Chi-Cheng Cheng
Abstract:
This paper aims to develop a robust optimization methodology for the mechatronic modules of machine tools by considering all important characteristics from the structural and control domains in one single process. The relationship between these two domains is strongly coupled, and in order to reduce the disturbance caused by parameters in either one, the mechanical and controller design domains need to be integrated. Therefore, the concurrent integrated design method Design for Control (DFC) is employed in this paper. In this connection, it is applied not only to achieve minimal power consumption but also to enhance structural performance and system response at the same time. To investigate the method for integrated optimization, a mechatronic feed drive system of a machine tool is used as the design platform. Pro/ENGINEER and ANSYS are first used to build the 3D model and to analyze and design structural parameters such as elastic deformation, natural frequency and component size, based on their effects on and sensitivities to the structure. In addition, a robust controller based on Quantitative Feedback Theory (QFT) is applied to determine proper control parameters for the controller. The overall physical properties of the machine tool are thereby obtained in the initial stage. Finally, design for control is carried out to modify the structural and control parameters to achieve the overall system performance. Hence, the corresponding productivity is expected to be greatly improved.
Keywords: Machine tools, integrated structure and control design, design for control, multilevel decomposition, quantitative feedback theory.
PDF Downloads 1948

4722 An Optimal Control Method for Reconstruction of Topography in Dam-Break Flows
Authors: Alia Alghosoun, Nabil El Moçayd, Mohammed Seaid
Abstract:
Modeling dam-break flows over non-flat beds requires an accurate representation of the topography, which is the main source of uncertainty in the model. Therefore, developing robust and accurate techniques for reconstructing the topography in this class of problems would reduce the uncertainty in the flow system. In many hydraulic applications, experimental techniques have been widely used to measure the bed topography; in practice, however, experimental work in hydraulics may be very demanding in both time and cost. Meanwhile, computational hydraulics has served as an alternative to laboratory and field experiments. Unlike the forward problem, the inverse problem is used to identify the bed parameters from given experimental data. In this case, the shallow water equations used for modeling the hydraulics need to be rearranged in a way that the model parameters can be evaluated from the measured data. However, this approach is not always possible, and it suffers from stability restrictions. In the present work, we propose an adaptive optimal control technique to numerically identify the underlying bed topography from a given set of free-surface observation data. In this approach, a minimization function is defined to iteratively determine the model parameters. The proposed technique can be interpreted as a fractional-stage scheme. In the first stage, the forward problem is solved to determine the measurable parameters from known data. In the second stage, an adaptive-control ensemble Kalman filter is implemented to exploit the optimality of the observation data and obtain an accurate estimate of the topography. The main features of this method are, on the one hand, the ability to handle different complex geometries with no need for any rearrangement of the original model to rewrite it in explicit form, and, on the other hand, its strong stability for simulations of flows in different regimes containing shocks or discontinuities over any geometry. Numerical results are presented for a dam-break flow problem over a non-flat bed using different solvers for the shallow water equations. The robustness of the proposed method is investigated using different numbers of loops, sensitivity parameters, initial samples and observation locations. The obtained results demonstrate the high reliability and accuracy of the proposed technique.
Keywords: Optimal control, ensemble Kalman filter, topography reconstruction, data assimilation, shallow water equations.
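As a reference point for the data-assimilation step, the sketch below implements a plain stochastic ensemble Kalman filter analysis for a static parameter vector (standing in for bed elevations) with a linear observation operator. The paper's adaptive, iterative variant and the shallow-water forward solver are beyond this sketch; all dimensions and noise levels are assumed.

```python
import numpy as np

rng = np.random.default_rng(4)

def enkf_update(ensemble, H, obs, obs_std):
    """One stochastic EnKF analysis step for a static parameter vector.

    ensemble: (n_members, n_params) prior samples (here: bed elevations).
    H:        (n_obs, n_params) linear observation operator.
    """
    n = len(ensemble)
    perturbed = obs + rng.normal(0.0, obs_std, (n, len(obs)))
    X = ensemble - ensemble.mean(axis=0)          # parameter anomalies
    Y = ensemble @ H.T                            # predicted observations
    S = Y - Y.mean(axis=0)                        # observation anomalies
    cov_yy = S.T @ S / (n - 1) + obs_std**2 * np.eye(len(obs))
    cov_xy = X.T @ S / (n - 1)
    K = cov_xy @ np.linalg.inv(cov_yy)            # Kalman gain
    return ensemble + (perturbed - Y) @ K.T

# Twin experiment: recover a 5-node bed profile from noisy direct observations.
true_bed = np.array([0.0, 0.2, 0.5, 0.3, 0.1])
H = np.eye(5)
obs = true_bed + rng.normal(0.0, 0.02, 5)
prior = rng.normal(0.25, 0.3, (100, 5))           # diffuse prior ensemble
posterior = enkf_update(prior, H, obs, obs_std=0.02)
print("posterior mean:", posterior.mean(axis=0).round(3))
```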
PDF Downloads 679

4721 Orbit Determination Modeling with Graphical Demonstration
Authors: Assem M. F. Sallam, Ah. El-S. Makled
Abstract:
In this paper, a software application is implemented, verified, and graphically demonstrated which can be used swiftly across different preliminary orbit determination methods. A passive orbit determination method is used in this study to determine the location of a satellite or a flying body; it is called passive because it depends on observation without the use of any aids (radio or laser) installed on the satellite. In order to understand how these methods work and how accurate their output is when compared with available verification data, the built models help in knowing the different inputs used with each method. The outputs from the different orbit determination methods (Gibbs, Lambert, and Gauss) are compared with each other and verified against data obtained from the Satellite Tool Kit (STK) application. A modified model including all of the orbit determination methods with the same input is introduced to investigate the different models' outputs (orbital parameters) for the same input (azimuth, elevation, and time). The simulation software is implemented using MATLAB. A Graphical User Interface (GUI) application named OrDet is produced using the GUI facilities of MATLAB; it includes all the available inputs and outputs the current Classical Orbital Elements (COE) of the satellite under observation. The produced COE are then propagated for a complete revolution and plotted in a 3-D view. The modified model, which uses an adapter to allow the same input parameters, passes these parameters to the preliminary orbit determination methods under study. Results from all orbit determination methods yield exactly the same COE output, which shows the equality of the concepts in determining a satellite's location, albeit with different numerical methods.
Keywords: Orbit determination, STK, MATLAB-GUI, satellite tracking.
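The abstract names Gibbs' method among the implemented techniques; a compact version of that method in its standard textbook formulation (not the paper's MATLAB OrDet code) is sketched below. It recovers the velocity at the middle of three co-planar position vectors on the same orbit; the sample vectors are illustrative only, and a production implementation would first check coplanarity.

```python
import numpy as np

MU = 398600.4418  # Earth's gravitational parameter, km^3/s^2

def gibbs(r1, r2, r3):
    """Gibbs preliminary orbit determination: velocity at r2 from three
    position vectors (km) of the same orbit, given in temporal order."""
    r1n, r2n, r3n = np.linalg.norm(r1), np.linalg.norm(r2), np.linalg.norm(r3)
    Z12, Z23, Z31 = np.cross(r1, r2), np.cross(r2, r3), np.cross(r3, r1)
    N = r1n * Z23 + r2n * Z31 + r3n * Z12
    D = Z12 + Z23 + Z31
    S = r1 * (r2n - r3n) + r2 * (r3n - r1n) + r3 * (r1n - r2n)
    return np.sqrt(MU / (np.linalg.norm(N) * np.linalg.norm(D))) * (
        np.cross(D, r2) / r2n + S
    )

# Illustrative position vectors on one orbit (km).
r1 = np.array([-294.32, 4265.10, 5986.70])
r2 = np.array([-1365.50, 3637.60, 6346.80])
r3 = np.array([-2940.30, 2473.70, 6555.80])
print("v2 (km/s):", gibbs(r1, r2, r3).round(4))
```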
PDF Downloads 1550

4720 On Methodologies for Analysing Sickness Absence Data: An Insight into a New Method
Authors: Xiaoshu Lu, Päivi Leino-Arjas, Kustaa Piha, Akseli Aittomäki, Peppiina Saastamoinen, Ossi Rahkonen, Eero Lahelma
Abstract:
Sickness absence represents a major economic and social issue. The analysis of sick leave data is a recurrent challenge to analysts because of the complexity of the data structure, which is often time dependent, highly skewed and clumped at zero. Ignoring these features when making statistical inference is likely to be inefficient and misguided, and traditional approaches do not address these problems. In this study, we discuss model methodologies in terms of statistical techniques for addressing the difficulties with sick leave data. We also introduce and demonstrate a new method by performing a longitudinal assessment of long-term absenteeism, using as a working example a large registration dataset from the Helsinki Health Study covering municipal employees in Finland during the period 1990-1999. We present a comparative study on model selection and a critical analysis of the temporal trends and of the occurrence and degree of long-term sickness absences among municipal employees. The strengths of this working example include the large sample size over a long follow-up period, providing strong evidence in support of the new model. Our main goal is to propose a way to select an appropriate model, to introduce a new methodology for analysing sickness absence data, and to demonstrate the model's applicability to complicated longitudinal data.
Keywords: Sickness absence, longitudinal data, methodologies, mix-distribution model.
PDF Downloads 2271

4719 Sand Production Modelled with Darcy Fluid Flow Using Discrete Element Method
Authors: M. N. Nwodo, Y. P. Cheng, N. H. Minh
Abstract:
In the process of recovering oil from weak sandstone formations, the strength of the sandstone around the wellbore is weakened by the increase of effective stress/load from the completion activities around the cavity. The weakened and de-bonded sandstone may be eroded away by the produced fluid, which is termed sand production. It is one of the major trending subjects in the petroleum industry because of its significant negative impacts, as well as some observed positive impacts. For efficient sand management, therefore, there is a need for a reliable study tool to understand the mechanism of sanding. One method of studying sand production is the use of the widely recognized Discrete Element Method (DEM) code Particle Flow Code (PFC3D), which represents sand as granular individual elements bonded together at contact points. However, there is limited knowledge of the particle-scale behavior of weak sandstone and of the parameters that affect sanding. This paper aims to investigate the reliability of using PFC3D and a simple Darcy flow in understanding the sand production behavior of a weak sandstone. An isotropic tri-axial test on a weak oil sandstone sample was first simulated at a confining stress of 1 MPa to calibrate and validate the parallel bond models of PFC3D, using a solid cylindrical model of 10 m height and 10 m diameter. The effect of the confining stress on the number of bond failures was studied using this cylindrical model. With the calibrated data and sample material properties obtained from the tri-axial test, simulations without and with fluid flow were carried out to check the effect of Darcy flow on bond failure using the same model geometry. The fluid flow network comprised every four particles connected with tetrahedral flow pipes around a central pore or flow domain. Parametric studies included the effects of confining stress and fluid pressure, as well as validation of the flow rate-permeability relationship to verify Darcy's law. The effect of model size scaling on sanding was also investigated using a model of 4 m height and 2 m diameter. The parallel bond model successfully calibrated the sample's strength of 4.4 MPa, showing a sharp peak strength before strain softening, similar to the behavior of real cemented sandstones. There seems to be an exponentially increasing relationship for the bigger model, but a curvilinear shape for the smaller model. The presence of the Darcy flow induced tensile forces and increased the number of broken bonds. In the parametric studies, the flow rate showed a linear relationship with permeability at constant pressure head, and the higher the fluid flow pressure, the higher the number of broken bonds/sanding. The DEM code PFC3D is a promising tool for studying the micromechanical behavior of cemented sandstones.
Keywords: Discrete Element Method, fluid flow, parametric study, sand production/bonds failure.
PDF Downloads 1792

4718 Pattern Discovery from Student Feedback: Identifying Factors to Improve Student Emotions in Learning
Authors: Angelina A. Tzacheva, Jaishree Ranganathan
Abstract:
Interest in Science, Technology, Engineering, and Mathematics (STEM) education, especially Computer Science education, has seen a drastic increase across the country. This fuels efforts toward recruiting and admitting a diverse population of students. The changing conditions in terms of student population and diversity and the expected teaching and learning outcomes thus provide a platform for the use of innovative teaching models and technologies. It is necessary that the methods adopted also concentrate on raising the quality of such innovations and have a positive impact on student learning. Light-Weight Team is an active learning pedagogy, considered a low-stakes activity with very little or no direct impact on student grades. Emotion plays a major role in students' motivation to learn. In this work we use student feedback data with emotion classification, collected through surveys at a public research institution in the United States, and apply the Actionable Pattern Discovery method. Actionable patterns are patterns that provide suggestions, in the form of rules, to help the user achieve better outcomes. The proposed method provides meaningful insight in terms of changes that can be incorporated in the Light-Weight Team activities and the resources utilized in the course. The results suggest how to shift student emotions to a more positive state, focusing in particular on the emotions 'Trust' and 'Joy'.
Keywords: Actionable pattern discovery, education, emotion, data mining.
PDF Downloads 526

4717 Retrospective Reconstruction of Time Series Data for Integrated Waste Management
Authors: A. Buruzs, M. F. Hatwágner, A. Torma, L. T. Kóczy
Abstract:
The development, operation and maintenance of Integrated Waste Management Systems (IWMS) essentially affect the sustainability concerns of every region, and the features of such systems have great influence on all components of sustainability. In order to optimize the processes, a comprehensive mapping of the variables affecting the future efficiency of the system is needed, including analysis of the interconnections among the components and modeling of their interactions. The planning of an IWMS is based fundamentally on technical and economic opportunities and on the legal framework. Modeling the sustainability and operational effectiveness of a certain IWMS is not in the scope of the present research. The complexity of the systems and the large number of variables require a complex approach to model the outcomes and future risks, one able to evaluate the logical framework of the factors composing the system and the interconnections between them. The authors of this paper studied the usability of the Fuzzy Cognitive Map (FCM) approach for modeling the future operation of IWMSs. The approach requires two input data sets. One is the connection matrix containing all the factors affecting the system in focus, with all their interconnections. The other input data set is the time series, a retrospective reconstruction of the weights and roles of the factors. This paper introduces a novel method to develop such time series by content analysis.
Keywords: Content analysis, factors, integrated waste management system, time series.
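For context on the modeling side, a minimal Fuzzy Cognitive Map iteration is sketched below: concept activations are repeatedly combined with the connection (weight) matrix and squashed with a sigmoid until they settle. The three concepts and their weights are hypothetical, not the paper's IWMS factors.

```python
import numpy as np

def fcm_simulate(W, a0, steps=50, lam=1.0):
    """Iterate a Fuzzy Cognitive Map: a(t+1) = sigmoid(a(t) + a(t) @ W)."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-lam * z))
    a = np.asarray(a0, dtype=float)
    for _ in range(steps):
        a = sigmoid(a + a @ W)
    return a

# Hypothetical concepts: [collection coverage, recycling rate, landfill load].
W = np.array([[0.0,  0.6, -0.4],
              [0.0,  0.0, -0.7],
              [0.0,  0.0,  0.0]])   # W[i, j]: influence of concept i on j
a0 = [0.8, 0.3, 0.6]                # initial activation levels in [0, 1]
print("steady activations:", fcm_simulate(W, a0).round(3))
```

The reconstructed time series described in the abstract would supply the historical activation levels from which such a connection matrix can be weighted and validated.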
PDF Downloads 2018

4716 An Inverse Approach for Determining Creep Properties from a Miniature Thin Plate Specimen under Bending
Abstract:
This paper describes a new approach which can be used to relate the experimental creep deformation data obtained from a miniaturized thin plate bending specimen test to the corresponding uniaxial data, based on an inverse application of the reference stress method. The geometry of the thin plate is fully defined by the span of the support, l, the width, b, and the thickness, d. First, analytical solutions for the steady-state, load-line creep deformation rate of the thin plates for a Norton power law under plane stress (b→0) and plane strain (b→∞) conditions were obtained, from which it can be seen that the load-line deformation rate of the thin plate under plane-stress conditions is much higher than that under plane-strain conditions. Since an analytical solution is not available for plates with arbitrary b-values, finite element (FE) analyses are used to obtain the solutions. Based on the FE results obtained for various b/l ratios and creep exponents, n, as well as the analytical solutions under plane stress and plane strain conditions, approximate numerical solutions for the deformation rate are obtained by curve fitting. Using these solutions, the reference stress method is utilised to establish the conversion relationships between the applied load and the equivalent uniaxial stress, and between the creep deformations of the thin plate and the equivalent uniaxial creep strains. Finally, the accuracy of the empirical solution was assessed by using a set of "theoretical" experimental data.
Keywords: Bending, creep, miniature specimen, thin plate.
PDF Downloads 1913

4715 Design of QFT-Based Self-Tuning Deadbeat Controller
Authors: H. Mansor, S. B. Mohd Noor
Abstract:
This paper presents a design method for self-tuning Quantitative Feedback Theory (QFT) using an improved deadbeat control algorithm. QFT is a technique to achieve robust control with pre-defined specifications, whereas deadbeat is an algorithm that can bring the output to steady state in a minimum number of steps. Nevertheless, there are usually large peaks in the deadbeat response; by integrating QFT specifications into the deadbeat algorithm, these large peaks can be tolerated. On the other hand, merging QFT with an adaptive element produces a robust controller with wider coverage of uncertainty. By combining the QFT-based deadbeat algorithm with an adaptive element, a superior controller, called the self-tuning QFT-based deadbeat controller, can be achieved, and an output response that is fast, robust and adaptive is expected. Using a grain dryer plant model as a pilot case study, the performance of the proposed method has been evaluated and analyzed. The grain drying process is very complex, with highly nonlinear behaviour and long delays, and it is affected by environmental changes and disturbances. Performance comparisons have been carried out between the proposed self-tuning QFT-based deadbeat, standard QFT and standard deadbeat controllers. The efficiency of the self-tuning QFT-based deadbeat controller is demonstrated by the test results: the controller's parameters are updated online, and the percentage overshoot and the settling time are reduced, especially when there are variations in the plant.
Keywords: Deadbeat control, quantitative feedback theory (QFT), robust control, self-tuning control.
PDF Downloads 2333

4714 Alumina Supported Copper-Manganese Catalysts for Combustion of Exhaust Gases: Effect of Preparation Method
Authors: Krasimir I. Ivanov, Elitsa N. Kolentsova, Dimitar Y. Dimitrov
Abstract:
The development of active and stable catalysts without noble metals for low-temperature oxidation of exhaust gases remains a significant challenge. The purpose of this study is to determine the influence of the preparation method on the catalytic activity of supported copper-manganese mixed oxides in terms of VOC oxidation. The catalysts were prepared by impregnation of γ-Al2O3 with copper and manganese nitrates and acetates, and the possibilities for CO, CH3OH and dimethyl ether (DME) oxidation were evaluated using continuous-flow equipment with a four-channel isothermal stainless steel reactor. The effects of the support, the Cu/Mn mole ratio, the heat treatment of the precursor and the active component loading were investigated. Highly active alumina-supported Cu-Mn catalysts for CO and VOC oxidation were synthesized, and the effect of the preparation conditions on their activity behavior is discussed. The synergetic interaction between copper and manganese species increases the activity for complete oxidation over the mixed catalysts. The type of support, the calcination temperature and the active component loading, along with the catalyst composition, are important factors determining catalytic activity. A Cu/Mn mole ratio of 1:5, heat treatment at 450 °C and 20% active component loading are the best compromise for the production of an active catalyst for the simultaneous combustion of CO, CH3OH and DME.
Keywords: Copper-manganese catalysts, Preparation methods, Exhaust gases oxidation.
PDF Downloads 2335

4713 Full Potential Study of Electronic and Optical Properties of NdF3
Authors: Sapan Mohan Saini
Abstract:
We report the electronic structure and optical properties of the NdF3 compound. Our calculations are based on density functional theory (DFT) using the full potential linearized augmented plane wave (FPLAPW) method with the inclusion of spin-orbit coupling. We employed the local spin density approximation (LSDA) and the Coulomb-corrected local spin density approximation (LSDA+U). We find that the standard LSDA approach is incapable of correctly describing the electronic properties of such materials, since it positions the f-bands incorrectly, resulting in an incorrect metallic ground state. On the other hand, the LSDA+U approximation, known for treating the highly correlated 4f electrons properly, is able to reproduce the correct insulating ground state. Interestingly, however, we do not find any significant differences between the optical properties calculated using LSDA and LSDA+U, suggesting that the 4f electrons do not play a decisive role in the optical properties of these compounds. The reflectivity of the NdF3 compound stays low up to 7 eV, which is consistent with its large energy gap. The calculated energy gaps are in good agreement with experiments. Our calculated reflectivity compares well with the experimental data, and the results are analyzed in the light of band-to-band transitions.
Keywords: FPLAPW method, optical properties, rare earth trifluorides, LSDA+U.
PDF Downloads 1674

4712 Low Resolution Face Recognition Using Mixture of Experts
Authors: Fatemeh Behjati Ardakani, Fatemeh Khademian, Abbas Nowzari Dalini, Reza Ebrahimpour
Abstract:
Human activity is a major concern in a wide variety of applications, such as video surveillance, human-computer interfaces and face image database management. Detecting and recognizing faces is a crucial step in these applications. Furthermore, major advancements and initiatives in security applications in recent years have propelled face recognition technology into the spotlight. The performance of existing face recognition systems declines significantly if the resolution of the face image falls below a certain level. This is especially critical in surveillance imagery where often, for many reasons, only low-resolution video of faces is available; if these low-resolution images are passed to a face recognition system, the performance is usually unacceptable. Hence, resolution plays a key role in face recognition systems. In this paper we introduce a new low resolution face recognition system based on a mixture of expert neural networks. In order to produce the low resolution input images, we down-sampled the 48 × 48 ORL images to 12 × 12 ones using the nearest neighbor interpolation method; subsequently applying the bicubic interpolation method yields enhanced images, which are given to the Principal Component Analysis feature extractor. Comparison with some of the most related methods indicates that the proposed model yields an excellent recognition rate in low resolution face recognition, namely 100% for the training set and 96.5% for the test set.
Keywords: Low resolution face recognition, multilayered neural network, mixture of experts neural network, principal component analysis, bicubic interpolation, nearest neighbor interpolation.
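A sketch of the image pipeline described above (nearest-neighbor down-sampling to 12×12, bicubic enhancement back to 48×48, then PCA feature extraction); the mixture-of-experts classifier itself is omitted, and random arrays stand in for the ORL faces.

```python
import numpy as np
from scipy.ndimage import zoom

rng = np.random.default_rng(5)
faces = rng.random((20, 48, 48))          # stand-ins for 48x48 ORL images

# 1) Simulate low resolution: nearest-neighbor down-sampling to 12x12.
low = np.stack([zoom(f, 12 / 48, order=0) for f in faces])

# 2) Enhance: bicubic interpolation back up to 48x48.
enhanced = np.stack([zoom(f, 48 / 12, order=3) for f in low])

# 3) PCA feature extraction via SVD on mean-centered, flattened images.
X = enhanced.reshape(len(enhanced), -1)
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
features = Xc @ Vt[:10].T                 # project onto the top 10 eigenfaces
print("feature matrix:", features.shape)  # (20, 10) -> inputs to the experts
```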
PDF Downloads 1724

4711 Improving the Shunt Active Power Filter Performance Using Synchronous Reference Frame PI Based Controller with Anti-Windup Scheme
Authors: Consalva J. Msigwa, Beda J. Kundy, Bakari M. M. Mwinyiwiwa
Abstract:
In this paper, the reference current for the Voltage Source Converter (VSC) of a Shunt Active Power Filter (SAPF) is generated using the synchronous reference frame method, incorporating a PI controller with an anti-windup scheme. The proposed method improves the harmonic filtering by compensating for the windup phenomenon caused by the integral term of the PI controller. Using the reference frame transformation, the current is transformed from the stationary a-b-c frame to the rotating 0-d-q frame, where it is controlled by the PI controller to obtain the desired reference signal. A controller with integral action combined with an actuator that becomes saturated can give some undesirable effects: if the control error is so large that the integrator saturates the actuator, the feedback path becomes ineffective, because the actuator will remain saturated even if the process output changes. The integrator, being an unstable system, may then integrate up to a very large value, a phenomenon known as integrator windup. Implementing the integrator anti-windup circuit turns off the integrator action when the actuator saturates, hence improving the performance of the SAPF and dynamically compensating harmonics in the power network. In this paper, the system performance is examined with a Shunt Active Power Filter simulation model.
Keywords: Phase Locked Loop (PLL), Voltage Source Converter (VSC), Shunt Active Power Filter (SAPF), PI, Pulse Width Modulation (PWM).
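The windup mechanism described above is easy to reproduce; the sketch below runs a discrete PI loop on a hypothetical first-order plant and applies the conditional-integration (clamping) form of anti-windup: the integrator is frozen whenever the actuator is saturated and the error would wind it further. The plant, gains, and limits are assumptions, not the paper's SAPF model.

```python
kp, ki, dt = 2.0, 5.0, 1e-3
u_min, u_max = -1.0, 1.0          # actuator saturation limits

def run_pi(setpoint, steps=5000, anti_windup=True):
    y, integ = 0.0, 0.0
    for _ in range(steps):
        e = setpoint - y
        u_unsat = kp * e + ki * integ
        u = min(max(u_unsat, u_min), u_max)        # saturated actuator
        saturated = (u != u_unsat)
        # Conditional integration: freeze the integrator while the actuator
        # is saturated and the error would drive it further into saturation.
        if not (anti_windup and saturated and e * u_unsat > 0):
            integ += e * dt
        y += dt * (-y + u)                         # first-order plant
    return y, integ

print("with anti-windup    (y, integrator):", run_pi(5.0))
print("without anti-windup (y, integrator):", run_pi(5.0, anti_windup=False))
```

With an unreachable setpoint, the unprotected integrator grows without bound while the clamped one stays small, which is exactly the effect the anti-windup circuit removes.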
PDF Downloads 1567

4710 Urban Growth Analysis Using Multi-Temporal Satellite Images, Non-stationary Decomposition Methods and Stochastic Modeling
Authors: Ali Ben Abbes, ImedRiadh Farah, Vincent Barra
Abstract:
Remotely sensed data are a significant source for monitoring and updating land use/cover databases. Nowadays, change detection in urban areas has become the subject of intensive research, and timely, accurate data on the spatio-temporal changes of urban areas are therefore required. The data extracted from multi-temporal satellite images are usually non-stationary: the changes evolve in time and space. This paper proposes a methodology for change detection in urban areas by combining a non-stationary decomposition method with stochastic modeling. We consider as input a sequence of satellite images I1, I2, ..., In at different periods (t = 1, 2, ..., n). First, a preprocessing of the multi-temporal satellite images (e.g., radiometric, atmospheric and geometric) is applied. The systematic study of global urban expansion in our methodology can be approached in two ways: the first considers the urban area as a single object as opposed to non-urban areas (e.g., vegetation, bare soil and water), with the objective of extracting the urban mask; the second aims at a deeper knowledge of the urban area, distinguishing different types of tissue within it. In order to validate our approach, we used a database of Tres Cantos, Madrid, Spain, derived from Landsat over the period from January 2004 to July 2013 by collecting two frames per year at a spatial resolution of 25 meters. The obtained results show the effectiveness of our method.
Keywords: Multi-temporal satellite image, urban growth, Non-stationarity, stochastic modeling.
PDF Downloads 1504

4709 Cross Signal Identification for PSG Applications
Authors: Carmen Grigoraş, Victor Grigoraş, Daniela Boişteanu
Abstract:
The standard investigational method for diagnosing obstructive sleep apnea syndrome (OSAS) is polysomnography (PSG), which consists of a simultaneous, usually overnight recording of multiple electro-physiological signals related to sleep and wakefulness. This is an expensive, encumbering and not readily repeated protocol, and therefore there is a need for simpler and more easily implemented screening and detection techniques. Identification of apnea/hypopnea events in the screening recordings is the key factor for the diagnosis of OSAS. The analysis of a single-lead electrocardiographic (ECG) signal for OSAS diagnosis, which may be done with portable devices at the patient's home, has been the challenge of recent years. A novel artificial neural network (ANN) based approach for feature extraction and automatic identification of respiratory events in ECG signals is presented in this paper. A nonlinear principal component analysis (NLPCA) method is considered for feature extraction, and a support vector machine for classification/recognition. An alternative representation of the respiratory events by means of a Kohonen-type neural network is discussed. Our prospective study was based on OSAS patients, male and female, of the Clinical Hospital of Pneumology in Iaşi, Romania, as well as on non-OSAS human subjects. Our computational analysis includes a learning phase based on cross-signal PSG annotation.
Keywords: Artificial neural networks, feature extraction, obstructive sleep apnea syndrome, pattern recognition, signal processing.
PDF Downloads 1541

4708 A Comprehensive Evaluation of Supervised Machine Learning for the Phase Identification Problem
Authors: Brandon Foggo, Nanpeng Yu
Abstract:
Power distribution circuits undergo frequent network topology changes that are often left undocumented. As a result, the documentation of a circuit's connectivity becomes inaccurate with time. The lack of reliable circuit connectivity information is one of the biggest obstacles to modeling, monitoring, and controlling modern distribution systems. To enhance the reliability and efficiency of electric power distribution systems, the circuit's connectivity information must be updated periodically. This paper focuses on one critical component of a distribution circuit's topology: the secondary-transformer-to-phase association. This topology component describes the set of phase lines that feed power to a given secondary transformer (and therefore to a given group of power consumers). Determining this association is called Phase Identification and is typically performed with physical measurements, which can take on the order of several months; with supervised learning, the required time can be reduced significantly. This paper compares several such methods applied to Phase Identification for a large range of real distribution circuits, describes a method of training data selection, describes preprocessing steps unique to the Phase Identification problem, and ultimately describes a method which obtains high accuracy (>96% in most cases, >92% in the worst case) using only 5% of the measurements typically used for Phase Identification.
Keywords: Distribution network, machine learning, network topology, phase identification, smart grid.
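The abstract does not name the specific learners or features; a common baseline for this task, sketched below under that assumption, is to correlate each transformer's voltage-magnitude time series with the three feeder phase profiles and assign the phase with the highest correlation. All data here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(6)
T = 500                                   # voltage samples per series

# Synthetic feeder phase voltage profiles (per-unit deviations, random walks).
phases = rng.normal(0.0, 1.0, (3, T)).cumsum(axis=1)

# Each transformer's meter follows its true phase plus measurement noise.
true_phase = rng.integers(0, 3, size=40)
meters = phases[true_phase] + rng.normal(0.0, 0.5, (40, T))

def assign_phase(meter, phases):
    """Assign the phase whose profile the meter correlates with most."""
    corr = [np.corrcoef(meter, p)[0, 1] for p in phases]
    return int(np.argmax(corr))

pred = np.array([assign_phase(m, phases) for m in meters])
print("phase identification accuracy:", (pred == true_phase).mean())
```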
PDF Downloads 1074

4707 Analysis of Temperature Change under Global Warming Impact Using Empirical Mode Decomposition
Authors: Md. Khademul Islam Molla, Akimasa Sumi, M. Sayedur Rahman
Abstract:
The empirical mode decomposition (EMD) represents any time series as a finite set of basis functions. These bases are termed intrinsic mode functions (IMFs); they are mutually orthogonal and contain a minimum amount of cross-information. The EMD successively extracts the IMFs, beginning with the highest local frequencies, in a recursive way, which effectively yields a set of low-pass filters based entirely on the properties exhibited by the data. In this paper, EMD is applied to explore the properties of multi-year air temperature records and to observe the effects of climate change under global warming. The method decomposes the original time series into intrinsic time scales and is capable of analyzing nonlinear, non-stationary climatic time series that cause problems for many linear statistical methods and their users. The analysis results show that the EMD modes present seasonal variability. Most of the IMFs have a normal distribution, and the energy density distribution of the IMFs satisfies a Chi-square distribution. The IMFs are more effective in isolating physical processes of various time scales, and they are also statistically significant. The analysis also shows that the EMD method does a good job of identifying many characteristics of interannual climate. The results suggest that climate fluctuations of every single element, such as temperature, are the result of variations in the global atmospheric circulation.
Keywords: Empirical mode decomposition, instantaneous frequency, Hilbert spectrum, Chi-square distribution, anthropogenic impact.
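A bare-bones version of the sifting step that EMD builds on is sketched below: local maxima and minima are enveloped with cubic splines, and the mean envelope is subtracted until the fastest intrinsic mode remains. This toy version omits the stopping criteria and boundary treatment of production EMD implementations, and the two-tone test signal is hypothetical.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift_imf(x, t, n_sifts=10):
    """Extract one IMF by repeatedly removing the mean of the extrema envelopes."""
    h = x.copy()
    for _ in range(n_sifts):
        maxi = argrelextrema(h, np.greater)[0]
        mini = argrelextrema(h, np.less)[0]
        if len(maxi) < 3 or len(mini) < 3:
            break  # too few extrema to envelope; h is (close to) monotone
        upper = CubicSpline(t[maxi], h[maxi])(t)
        lower = CubicSpline(t[mini], h[mini])(t)
        h = h - (upper + lower) / 2.0
    return h

t = np.linspace(0.0, 10.0, 2000)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 0.5 * t)

imf1 = sift_imf(x, t)   # the fastest oscillation is extracted first
residue = x - imf1      # the slower component remains for further sifting
print("max |IMF1|:", float(np.abs(imf1).max()))
```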
PDF Downloads 2149

4706 Test Method Development for Evaluation of Process and Design Effect on Reinforced Tube
Authors: Cathal Merz, Gareth O’Donnell
Abstract:
Coil-reinforced thin-walled (CRTW) tubes are used in medicine to treat problems affecting blood vessels within the body through minimally invasive procedures. The CRTW tube considered in this research makes up part of such a device and is inserted into the patient via the femoral or brachial arteries and manually navigated to the site in need of treatment. This procedure replaces the requirement to perform open surgery but is limited by the reduction of blood vessel lumen diameter and the increase in tortuosity of blood vessels deep in the brain. In order to maximize the capability of these procedures, CRTW tube devices are being manufactured with decreasing wall thicknesses, both to deliver treatment deeper into the body and to allow passage of other devices through their inner diameter. This introduces significant stresses into the device materials, which has resulted in an observed increase in the breaking of the proximal segment of the device into two separate pieces after it has failed by buckling. As there is currently no international standard for measuring the mechanical properties of these CRTW tube devices, it is difficult to analyze this problem accurately. The aim of the current work is to address this discrepancy in the biomedical device industry by developing a measurement system that can be used to quantify the effect of process and design changes on CRTW tube performance, aiding the development of better performing, next generation devices. Using materials testing frames, micro-computed tomography (micro-CT) imaging, experiment planning, analysis of variance (ANOVA), t-tests and regression analysis, test methods have been developed for assessing the impact of process and design changes on the device. The major findings of this study are an insight into the suitability of buckle and three-point bend tests for measuring the effect of varying processing factors on the device's performance, and guidelines for interpreting the output data from the test methods. The findings of this study are of significant interest with respect to verifying and validating key process and design changes associated with the device structure and material condition. Test method integrity evaluation is explored throughout.
Keywords: Buckling, coil reinforced thin-walled tubes, fracture, test method.
PDF Downloads 697