Search results for: Problem Solving
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3920


2330 CFD Modeling of Insect Flight at Low Reynolds Number

Authors: Wu Di, Yeo Khoon Seng, Lim Tee Tai

Abstract:

Typical insects employ a flapping-wing mode of flight. This paper presents numerical simulations of the free flight, including hovering, of a model fruit fly (Re = 143). The unsteady aerodynamics around a flapping insect is studied by solving the three-dimensional Newtonian dynamics of the flyer coupled with the Navier-Stokes equations. A hybrid-grid scheme (Generalized Finite Difference Method) that combines great geometric flexibility with accurate definition of moving boundaries is employed to obtain the flow dynamics. The results agree well with the findings and analyses of other researchers, which validates the computational model and demonstrates the feasibility of this computational approach for analyzing fluid phenomena in insect flight. The present modeling approach also offers a promising route of investigation that could complement, as well as overcome some of the limitations of, physical experiments in the study of the free-flight aerodynamics of insects. The results are potentially useful for the design of biomimetic flapping-wing flyers.

Keywords: Free hovering flight, flapping wings, fruit fly, insect aerodynamics, leading edge vortex (LEV), computational fluid dynamics (CFD), Navier-Stokes equations (N-S), fluid structure interaction (FSI), generalized finite-difference method (GFD).

2329 The Use of the Limit Cycles of Dynamic Systems for Formation of Program Trajectories of Points Feet of the Anthropomorphous Robot

Authors: A. S. Gorobtsov, A. S. Polyanina, A. E. Andreev

Abstract:

The foot points of an anthropomorphic robot move in space along stable trajectories of known form. The large number of modifications to the control methods for biped robots indicates the fundamental complexity of ensuring the stability of the program trajectory and, consequently, the stability of the control of deviations from this trajectory. Existing gait generators use piecewise interpolation of program trajectories, which leads to jumps in acceleration at the boundaries of the segments. Another interpolation can be realized using differential equations with fractional derivatives. In this work, an approach to the synthesis of generators of program trajectories is considered. The resulting system of nonlinear differential equations describes a smooth trajectory of movement containing rectilinear segments. The method is based on the theory of asymptotic stability of invariant sets. The stability of such systems is investigated in the region where oscillatory processes are localized; the boundary of this region is a bounded closed surface. In the corresponding subspaces of the oscillatory circuits, the resulting stable limit cycles are curves containing rectilinear segments. The problem is solved by synthesizing a set of continuous smooth feedback controls. The required geometry of the closed trajectories of movement is obtained by introducing high-order nonlinearities into the control of the stabilization systems. The proposed method was used to generate the trajectories of the foot points of the anthropomorphic robot, and the synthesis of the robot's program movement was carried out by means of the inverse method.

Keywords: Control, limit cycles, robot, stability.

2328 A Fuzzy MCDM Approach for Health-Care Waste Management

Authors: Mehtap Dursun, E. Ertugrul Karsak, Melis Almula Karadayi

Abstract:

The management of health-care waste is one of the most important problems in Istanbul, a city with more than 12 million inhabitants, as it is in most developing countries. Negligence in the appropriate treatment and final disposal of health-care waste can lead to adverse impacts on public health and the environment. This paper employs a fuzzy multi-criteria group decision making approach, based on the principles of fusion of fuzzy information, the 2-tuple linguistic representation model, and the technique for order preference by similarity to ideal solution (TOPSIS), to evaluate health-care waste (HCW) treatment alternatives for Istanbul. The evaluation criteria are determined using the nominal group technique (NGT), a method for systematically developing a consensus of group opinion. The employed method can manage multi-granular linguistic information in a decision making problem with multiple information sources. The decision making framework uses the ordered weighted averaging (OWA) operator, which encompasses several operators, as the aggregation operator, since it can implement different aggregation rules by changing the order weights. The aggregation process is based on the unification of information by means of fuzzy sets on a basic linguistic term set (BLTS). The unified information is then transformed into linguistic 2-tuples in a way that rectifies the loss of information found in other fuzzy linguistic approaches.
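
To make the aggregation step concrete, here is a minimal Python sketch of an OWA operator applied to a vector of criterion scores; the scores and order-weight vectors are illustrative values, not data from the study, and the snippet simply shows how changing the order weights reproduces different aggregation rules.

```python
import numpy as np

def owa(values, weights):
    """Ordered Weighted Averaging: sort the arguments in descending order,
    then take the weighted sum with the order weights."""
    values = np.sort(np.asarray(values, dtype=float))[::-1]
    weights = np.asarray(weights, dtype=float)
    assert np.isclose(weights.sum(), 1.0), "order weights must sum to 1"
    return float(np.dot(weights, values))

# Different order-weight vectors implement different aggregation rules.
scores = [0.6, 0.9, 0.3, 0.7]                    # illustrative criterion scores
print(owa(scores, [1, 0, 0, 0]))                 # maximum (optimistic)
print(owa(scores, [0, 0, 0, 1]))                 # minimum (pessimistic)
print(owa(scores, [0.25, 0.25, 0.25, 0.25]))     # arithmetic mean
```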

Keywords: Group decision making, health care waste management, multi-criteria decision making, OWA, TOPSIS, 2-tuple linguistic representation

2327 A Spatial Hypergraph Based Semi-Supervised Band Selection Method for Hyperspectral Imagery Semantic Interpretation

Authors: Akrem Sellami, Imed Riadh Farah

Abstract:

Hyperspectral imagery (HSI) typically provides a wealth of information captured over a wide range of the electromagnetic spectrum for each pixel in the image. Hence, a pixel in HSI is a high-dimensional vector of intensities with a large spectral range and a high spectral resolution, and semantic interpretation is therefore a challenging task in HSI analysis. In this paper, we focus on object classification as HSI semantic interpretation. However, HSI classification still faces several issues, among them the spatial variability of spectral signatures, the high number of spectral bands, and the high cost of labeling true samples. The high number of spectral bands combined with the low number of training samples poses the problem of the curse of dimensionality. To resolve this problem, we introduce a dimensionality reduction process that aims to improve the classification of HSI. The presented approach is a semi-supervised band selection method based on a spatial hypergraph embedding model that represents higher-order relationships, with different weights for the spatial neighbors of the centroid pixel. This semi-supervised band selection has been developed to select useful bands for object classification. The approach is evaluated on AVIRIS and ROSIS HSIs and compared to other dimensionality reduction methods. The experimental results demonstrate the efficacy of our approach compared to many existing dimensionality reduction methods for HSI classification.

Keywords: Hyperspectral image, spatial hypergraph, dimensionality reduction, semantic interpretation, band selection, feature extraction.

2326 Issues in Spectral Source Separation Techniques for Plant-wide Oscillation Detection and Diagnosis

Authors: A.K. Tangirala, S. Babji

Abstract:

In the last few years, three multivariate spectral analysis techniques, namely Principal Component Analysis (PCA), Independent Component Analysis (ICA) and Non-negative Matrix Factorization (NMF), have emerged as effective tools for oscillation detection and isolation. While the first method is used in determining the number of oscillatory sources, the latter two methods are used to identify source signatures by formulating the detection problem as a source identification problem in the spectral domain. In this paper, we present a critical drawback of the underlying linear (mixing) model which strongly limits the ability of the associated source separation methods to determine the number of sources and/or identify the physical source signatures. It is shown that the assumed mixing model is only valid if each unit of the process gives equal weighting (all-pass filter) to all oscillatory components in its inputs. This is in contrast to the fact that each unit, in general, acts as a filter with non-uniform frequency response. Thus, the model can only facilitate correct identification of a source with a single frequency component, which is again unrealistic. To overcome this deficiency, an iterative post-processing algorithm that correctly identifies the physical source(s) is developed. An additional issue with the existing methods is that they lack a procedure to pre-screen non-oscillatory/noisy measurements which obscure the identification of oscillatory sources. In this regard, a pre-screening procedure based on the notion of a sparseness index is prescribed to eliminate the noisy and non-oscillatory measurements from the data set used for analysis.
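
The abstract does not define the sparseness index explicitly, so the Python sketch below uses Hoyer's sparseness measure, a common choice, as a stand-in to illustrate how the pre-screening step can separate broadband, noisy spectra from spectra dominated by a single oscillatory peak.

```python
import numpy as np

def sparseness(x):
    """Hoyer-style sparseness of a (power) spectrum: close to 0 for a flat,
    noisy spectrum and equal to 1 for a spectrum with a single non-zero bin."""
    x = np.abs(np.asarray(x, dtype=float))
    n = x.size
    l1, l2 = x.sum(), np.sqrt((x ** 2).sum())
    if l2 == 0.0:
        return 0.0
    return (np.sqrt(n) - l1 / l2) / (np.sqrt(n) - 1)

# Pre-screening idea: keep only measurements whose spectra are sparse enough.
rng = np.random.default_rng(0)
noisy_spectrum = np.abs(rng.normal(size=256))     # broadband, non-oscillatory
oscillatory_spectrum = np.zeros(256)
oscillatory_spectrum[10] = 1.0                    # single dominant peak
print(sparseness(noisy_spectrum), sparseness(oscillatory_spectrum))
```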

Keywords: non-negative matrix factorization, PCA, source separation, plant-wide diagnosis

2325 Efficient Design Optimization of Multi-State Flow Network for Multiple Commodities

Authors: Yu-Cheng Chou, Po Ting Lin

Abstract:

Delivering commodities over a network is an important design problem in daily life and in many transportation applications. The delivery performance is evaluated based on the system reliability of delivering commodities from a source node to a sink node in the network, and the system reliability is maximized to find the optimal routing. However, the design problem is not simple because (1) each path segment has randomly distributed attributes; (2) there are multiple commodities that consume various path capacities; (3) the optimal routing must successfully complete the delivery process within the allowable time constraints. In this paper, we focus on the design optimization of the Multi-State Flow Network (MSFN) for multiple commodities. We propose an efficient approach to evaluate the system reliability of the MSFN with respect to randomly distributed path attributes and to find the optimal routing subject to the allowable time constraints. The delivery rates, also known as delivery currents, of the path segments are evaluated, and the minimal-current arcs are eliminated to reduce the complexity of the MSFN. Accordingly, the correct optimal routing is found and the worst-case reliability is evaluated. It is shown that the reliability of the optimal routing is at least as high as the worst-case measure. Two benchmark examples are used to demonstrate the proposed method. The comparisons between the original and the reduced networks show that the proposed method is very efficient.

Keywords: Multiple Commodities, Multi-State Flow Network (MSFN), Time Constraints, Worst-Case Reliability (WCR)

2324 Accurate Control of a Pneumatic System using an Innovative Fuzzy Gain-Scheduling Pattern

Authors: M. G. Papoutsidakis, G. Chamilothoris, F. Dailami, N. Larsen, A Pipe

Abstract:

Due to their high power-to-weight ratio and low cost, pneumatic actuators are attractive for robotics and automation applications; however, achieving fast and accurate control of their position is known to be a complex control problem. A methodology for obtaining high position accuracy with a linear pneumatic actuator is presented. During experimentation with a number of classical PID control approaches over many operations of the pneumatic system, the need for frequent manual re-tuning of the controller could not be eliminated. The reason for this problem is thermal and energy losses inside the cylinder body due to the complex friction forces developed by the piston displacements. Although PD controllers performed very well over short periods, it was necessary in our research project to introduce some form of automatic gain-scheduling to achieve good long-term performance. We chose a fuzzy logic system to do this, which proved to be an easily designed and robust approach. Since the PD approach showed very good behaviour in terms of position accuracy and settling time, it was incorporated into a modified form of the first-order Takagi-Sugeno fuzzy method to build an overall controller. This fuzzy gain-scheduler uses an input variable that automatically changes the PD gain values of the controller according to the frequency of repeated system operations. The performance of the new controller was significantly improved and the need for manual re-tuning was eliminated without a decrease in performance. The performance of the controller operating with the above method is going to be tested through a high-speed web network (GRID) for research purposes.

Keywords: Fuzzy logic, gain scheduling, leaky integrator, pneumatic actuator.

2323 Atmosphere Water Vapour As Main Sweet Water Resource in the Arid Zones of Central Asia

Authors: S.I.Nikolaeva, Yu.V. Petrov, L.Ye.Skipnikova

Abstract:

It is shown that solving the water shortage problem in Central Asia is closely connected with including atmospheric water vapour in the system of water resources assessment and management. Some methods of extracting water from the atmosphere are discussed.

Keywords: potable water, water resources, water problems, water scarcity.

2322 Modeling and Optimization of Abrasive Waterjet Parameters using Regression Analysis

Authors: Farhad Kolahan, A. Hamid Khajavi

Abstract:

Abrasive waterjet is a novel machining process capable of processing a wide range of hard-to-machine materials. This research addresses modeling and optimization of the process parameters for this machining technique. To model the process, a set of experimental data was used to evaluate the effects of various parameter settings in cutting 6063-T6 aluminum alloy. The process variables considered here include nozzle diameter, jet traverse rate, jet pressure and abrasive flow rate. Depth of cut, as one of the most important output characteristics, has been evaluated for different parameter settings. The Taguchi method and regression modeling are used to establish the relationships between input and output parameters. The adequacy of the model is evaluated using the analysis of variance (ANOVA) technique. The pairwise effects of process parameter settings on the process response outputs are also shown graphically. The proposed model is then embedded into a simulated annealing algorithm to optimize the process parameters. The optimization is carried out for any desired value of depth of cut; the objective is to determine proper levels of the process parameters in order to obtain a certain level of depth of cut. Computational results demonstrate that the proposed solution procedure is quite effective in solving such multi-variable problems.
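
As a sketch of the optimization stage, the Python snippet below wraps a placeholder regression model in a simulated annealing loop that searches for parameter levels whose predicted depth of cut matches a desired value; the regression coefficients, parameter ranges and target are invented for illustration and are not the values fitted in the paper.

```python
import math, random

def predicted_depth(x):
    """Placeholder regression model: depth of cut as a function of
    (nozzle diameter, traverse rate, pressure, abrasive flow rate).
    The coefficients are illustrative, not those fitted from the experiments."""
    d, v, p, m = x
    return 0.8 * d + 0.002 * p + 0.05 * m - 0.01 * v

def anneal(target, bounds, iters=5000, t0=1.0, alpha=0.999):
    """Simulated annealing: minimize |predicted depth - target| over the
    process-parameter box defined by `bounds`."""
    x = [random.uniform(lo, hi) for lo, hi in bounds]
    best, best_err = x[:], abs(predicted_depth(x) - target)
    t = t0
    for _ in range(iters):
        cand = [min(hi, max(lo, xi + random.gauss(0, 0.05 * (hi - lo))))
                for xi, (lo, hi) in zip(x, bounds)]
        cur_err = abs(predicted_depth(x) - target)
        err = abs(predicted_depth(cand) - target)
        if err < cur_err or random.random() < math.exp(-(err - cur_err) / t):
            x = cand
            if err < best_err:
                best, best_err = cand[:], err
        t *= alpha
    return best, best_err

# Illustrative parameter ranges: diameter (mm), traverse rate (mm/min),
# pressure (MPa), abrasive flow rate (g/s).
bounds = [(0.76, 1.2), (60, 200), (100, 300), (0.5, 5.0)]
print(anneal(target=1.0, bounds=bounds))
```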

Keywords: AWJ cutting, Mathematical modeling, Simulated Annealing, Optimization

2321 Architecture, Implementation and Application of Tools for Experimental Analysis

Authors: Tom Dowling, Adam Duffy

Abstract:

This paper presents an architecture to assist in the development of tools to perform experimental analysis. Existing implementations of tools based on this architecture are also described in this paper. These tools are applied to the real world problem of fault attack emulation and detection in cryptographic algorithms.

Keywords: Software Architectures and Design, Software Components and Reuse, Engineering Secure Software.

2320 Solving Transient Conduction and Radiation Using Finite Volume Method

Authors: Ashok K. Satapathy, Prerana Nashine

Abstract:

Radiative heat transfer in a participating medium was analyzed using the finite volume method. The radiative transfer equations are formulated for an absorbing, emitting, and anisotropically scattering medium. The solution strategy is discussed and the conditions for computational stability are given. The equations have been solved for a transient radiative medium and for transient radiation coupled with transient conduction, and results have been obtained for the irradiation and the corresponding heat fluxes in both cases. The solutions can be used to determine the incident energy and the surface heat flux. Transient solutions were obtained for a slab exchanging heat by conduction and by thermal radiation. The effect of heat conduction during the transient phase is to partially equalize the internal temperature distribution, and the solution procedure provides accurate temperature distributions in these regions. A finite volume procedure with variable space and time increments is used to solve the transient radiation equation. The medium in the enclosure absorbs, emits, and anisotropically scatters radiative energy. The incident radiation and the radiative heat fluxes are presented in graphical form. The phase function anisotropy plays a significant role in the radiative heat transfer when the boundary condition is non-symmetric.

Keywords: Participating media, finite volume method, radiation coupled with conduction, heat transfer.

2319 A Case Study on Appearance Based Feature Extraction Techniques and Their Susceptibility to Image Degradations for the Task of Face Recognition

Authors: Vitomir Struc, Nikola Pavesic

Abstract:

Over the past decades, automatic face recognition has become a highly active research area, mainly due to the countless application possibilities in both the private and the public sector. Numerous algorithms have been proposed in the literature to cope with the problem of face recognition; nevertheless, a group of methods commonly referred to as appearance based have emerged as the dominant solution to the face recognition problem. Many comparative studies concerned with the performance of appearance based methods have already been presented in the literature, not rarely with inconclusive and often with contradictory results. No consensus has been reached within the scientific community regarding the relative ranking of the efficiency of appearance based methods for the face recognition task, let alone regarding their susceptibility to appearance changes induced by various environmental factors. To tackle these open issues, this paper assesses the performance of the three dominant appearance based methods: principal component analysis, linear discriminant analysis and independent component analysis, and compares them on an equal footing (i.e., with the same preprocessing procedure, with parameters optimized for the best possible performance, etc.) in face verification experiments on the publicly available XM2VTS database. In addition to the comparative analysis on the XM2VTS database, ten degraded versions of the database are also employed in the experiments to evaluate the susceptibility of the appearance based methods to various image degradations which can occur in "real-life" operating conditions. Our experimental results suggest that linear discriminant analysis ensures the most consistent verification rates across the tested databases.

Keywords: Biometrics, face recognition, appearance based methods, image degradations, the XM2VTS database.

2318 Soil Quality Status under Dryland Vegetation of Yabello District, Southern Ethiopia

Authors: Mohammed Abaoli, Omer Kara

Abstract:

This research investigated the soil quality status under dryland vegetation of Yabello district, Southern Ethiopia, in order to identify the nature and extent of the salinity problem of the area as a basis for further research. A total of 48 soil samples were taken from the 0-30, 31-60, 61-90 and 91-120 cm soil depths by opening 12 representative soil profile pits to a depth of 1.5 m. Soil color, texture, bulk density, soil organic carbon (SOC), cation exchange capacity (CEC), Na, K, Mg, Ca, CaCO3, gypsum (CaSO4), pH, sodium adsorption ratio (SAR) and exchangeable sodium percentage (ESP) were analyzed. The dominant soil texture was silty-clay-loam. Bulk density varied from 1.1 to 1.31 g/cm3. High SOC content was observed in the 0-30 cm depth. The soil pH ranged from 7.1 to 8.6. The electrical conductivity shows an inverse relationship with soil depth, while the CaCO3 and CaSO4 concentrations increase with depth. About 41% of the soils are non-saline, 38.31% saline, 15.23% saline-sodic and 5.46% sodic. The Na concentration in saline soils was greater than that of Ca and Mg at all soil depths. Ca and Mg contents were higher above 60 cm soil depth in non-saline soils. The concentrations of SO42- and HCO3- were higher at the lower depths than at the upper ones. The SAR value tends to be higher at lower depths in saline and saline-sodic soils, but decreases at lower depths of the non-saline soils. The distribution of ESP above 60 cm depth was in an increasing order in saline and saline-sodic soils. The results indicate the extent of salinity that should be considered for the Commiphora plant species to be grown in the area.

Keywords: Commiphora species, dryland vegetation, ecological significance, soil quality, salinity problem.

2317 Numerical Simulations on Feasibility of Stochastic Model Predictive Control for Linear Discrete-Time Systems with Random Dither Quantization

Authors: Taiki Baba, Tomoaki Hashimoto

Abstract:

The random dither quantization method enables us to achieve much better performance than the simple uniform quantization method in the design of quantized control systems. Motivated by this fact, a stochastic model predictive control method, in which a performance index is minimized subject to probabilistic constraints imposed on the state variables of the system, has been proposed for linear feedback control systems with random dither quantization. In other words, a method for solving optimal control problems subject to probabilistic state constraints for linear discrete-time control systems with random dither quantization has already been established. To the best of our knowledge, however, the feasibility of such optimal control problems has not yet been studied. Our objective in this paper is to investigate the feasibility of stochastic model predictive control problems for linear discrete-time control systems with random dither quantization. To this end, we provide the results of numerical simulations that verify the feasibility of stochastic model predictive control problems for linear discrete-time control systems with random dither quantization.
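
For readers unfamiliar with the quantizer itself, the short Python sketch below contrasts a plain uniform quantizer with a randomly dithered one; the dithered quantizer is unbiased on average, which is the property the quantized control design builds on. The step size and input signal are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

def random_dither_quantize(x, delta, rng):
    """Uniform quantizer with random dither: a uniform perturbation on
    (-delta/2, delta/2) is added before rounding to the nearest level."""
    d = rng.uniform(-delta / 2, delta / 2, size=np.shape(x))
    return delta * np.round((np.asarray(x, dtype=float) + d) / delta)

rng = np.random.default_rng(1)
u = 0.37 * np.ones(10000)                      # constant input signal
delta = 1.0                                    # quantization step
q_plain = delta * np.round(u / delta)          # deterministic quantizer: always 0
q_dithered = random_dither_quantize(u, delta, rng)
print(q_plain.mean(), q_dithered.mean())       # dithered mean is close to 0.37
```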

Keywords: Model predictive control, stochastic systems, probabilistic constraints, random dither quantization.

2316 Investigation of Bubble Growth during Nucleate Boiling Using CFD

Authors: K. Jagannath, Akhilesh Kotian, S. S. Sharma, Achutha Kini U., P. R. Prabhu

Abstract:

The boiling process is characterized by the rapid formation of vapour bubbles at the solid-liquid interface (nucleate boiling) with pre-existing vapour or gas pockets. Computational fluid dynamics (CFD) is an important tool for studying bubble dynamics. In the present study, a CFD simulation has been carried out to determine the bubble detachment diameter and its terminal velocity. The volume of fluid method is used to model the bubble and its surroundings by solving a single set of momentum equations and tracking the volume fraction of each fluid throughout the domain. In the simulation, the bubble is generated by allowing water vapour to enter a cylinder filled with liquid water through an inlet at the bottom. After the bubble is fully formed, it detaches from the surface and rises, accelerating due to the imbalance between the buoyancy force and viscous drag; when these forces exactly balance each other, it attains a constant terminal velocity. The bubble detachment diameter and the terminal velocity of the bubble are captured by the monitor function provided in FLUENT. The detachment diameter and the terminal velocity obtained are compared with established results based on the shape of the bubble, and good agreement is found between the simulation results and the established correlations.

Keywords: Bubble growth, computational fluid dynamics, detachment diameter, terminal velocity.

2315 Applications of Building Information Modeling (BIM) in Knowledge Sharing and Management in Construction

Authors: Shu-Hui Jan, Shih-Ping Ho, Hui-Ping Tserng

Abstract:

Construction knowledge can be referred to and reused among the project managers and jobsite engineers involved, to alleviate problems on a construction jobsite and reduce the time and cost of solving problems related to constructability. This paper proposes a new methodology for sharing construction knowledge using the Building Information Modeling (BIM) approach. The main characteristics of BIM include 3D CAD-based presentation, storage of information in digital format, and easy updating and transfer of information in the 3D BIM environment. Using the BIM approach, project managers and engineers can gain knowledge related to 3D BIM and obtain feedback provided by jobsite engineers for future reference. This study addresses the application of knowledge sharing management in the construction phase of construction projects and proposes a BIM-based Knowledge Sharing Management (BIMKSM) system for project managers and engineers. The BIMKSM system is then applied in a selected case study of a construction project in Taiwan to verify the proposed methodology and demonstrate the effectiveness of sharing knowledge in the BIM environment. The combined results demonstrate that the BIMKSM system can be used as a visual BIM-based knowledge sharing management platform by utilizing the BIM approach and web technology.

Keywords: Construction knowledge management, building information modeling, project management, web-based information system.

2314 The Practice of Teaching Chemistry by the Application of Online Tests

Authors: Nikolina Ribarić

Abstract:

E-learning is most commonly defined as a set of applications and processes, such as Web-based learning, computer-based learning, virtual classrooms and digital collaboration, that enable access to instructional content through a variety of electronic media. The main goal of an e-learning system is learning, and the way to evaluate the impact of an e-learning system is by examining whether students learn effectively with the help of that system. Testmoz is a program for the online preparation of knowledge evaluation assignments. The program provides teachers with computer support for designing assignments and evaluating them, while students can review and solve assignments and check the correctness of their solutions. Research into whether providing teaching content through online tests prepared in Testmoz increases motivation was carried out with 8th-grade students of Ljubo Babić Primary School in Jastrebarsko. The students took the tests in their free time, from home, an unlimited number of times. SPSS was used to process the data obtained by the research instruments. The results showed that students preferred practising the subject content, and achieved better educational results in chemistry, when they had access to online tests for revision and practice, compared with revising and practising in "the classical way", i.e., by solving assignments in a workbook or on worksheets.

Keywords: Chemistry class, e-learning, online test, Testmoz.

2313 Numerical Investigation of Unsteady MHD Flow of Second Order Fluid in a Tube of Elliptical Cross-Section on the Porous Boundary

Authors: S. B. Kulkarni, Hasim A. Chikte, V. Murali Mohan

Abstract:

An exact solution for the unsteady MHD flow of an elastico-viscous fluid through a porous medium in a tube of elliptic cross section, under the influence of a magnetic field and a constant pressure gradient, is obtained in this paper. Initially, the flow is generated by a constant pressure gradient. After the steady state is attained, the pressure gradient is suddenly withdrawn, and the resulting fluid motion in the tube of elliptical cross section is investigated, taking into account the porosity factor and the magnetic parameter of the bounding surface. The problem is solved in two stages: the first stage is a steady motion in the tube under the influence of a constant pressure gradient; the second stage concerns the unsteady motion. The problem is solved using the separation of variables technique. The results are expressed in terms of a non-dimensional porosity parameter, a magnetic parameter and an elastico-viscosity parameter, which depends on the non-Newtonian coefficient. The flow parameters are found to be identical with those of the Newtonian case as the elastico-viscosity and magnetic parameters tend to zero and the porosity tends to infinity. The numerical results were simulated in MATLAB to analyze the effect of the elastico-viscosity parameter, the porosity parameter, and the magnetic parameter on the velocity profile, and the boundary conditions were satisfied. It is seen that the elastico-viscosity parameter, the porosity parameter and the magnetic parameter of the bounding surface have a significant effect on the velocity profile.

Keywords: Elastico-viscous fluid, Porous media, Elliptic cross-section, Magnetic parameter, Numerical Simulation.

2312 Space Telemetry Anomaly Detection Based on Statistical PCA Algorithm

Authors: B. Nassar, W. Hussein, M. Mokhtar

Abstract:

The critical concern of satellite operations is to ensure the health and safety of satellites. The worst case in this perspective is probably the loss of a mission, but the more common interruptions of satellite functionality can also compromise mission objectives. All the data acquired from the spacecraft are known as telemetry (TM), which contains a wealth of information related to the health of all its subsystems. Each single item of information is contained in a telemetry parameter, which represents a time-variant property (i.e. a status or a measurement) to be checked. As a consequence, TM monitoring systems are continuously being improved to reduce the time required to respond to changes in a satellite's state of health. A fast assessment of the current state of the satellite is thus very important for responding to occurring failures. Statistical multivariate latent techniques are among the vital learning tools used to tackle the problem above coherently. Information extraction from such rich data sources using advanced statistical methodologies is a challenging task due to the massive volume of data. To solve this problem, in this paper we present a proposed unsupervised learning algorithm based on the Principal Component Analysis (PCA) technique. The algorithm is applied to an actual remote sensing spacecraft. Data from the Attitude Determination and Control System (ADCS) were acquired under two operating conditions: normal and faulty states. The models were built and tested under these conditions, and the results show that the algorithm could successfully differentiate between these operating conditions. Furthermore, the algorithm provides competent information for prediction as well as adding more insight and physical interpretation to the ADCS operation.
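
The abstract does not state which monitoring statistics are computed from the PCA model, so the Python sketch below shows one conventional formulation: fit PCA on normal-operation telemetry and flag new samples with the Hotelling T² and SPE (Q) statistics. The data are synthetic stand-ins for ADCS parameters, and the choice of statistics is an assumption, not necessarily the authors' exact algorithm.

```python
import numpy as np

def fit_pca_monitor(X_normal, n_components):
    """Fit a PCA monitoring model on normal-operation telemetry
    (rows = time samples, columns = telemetry parameters)."""
    mu = X_normal.mean(axis=0)
    sd = X_normal.std(axis=0) + 1e-12
    Z = (X_normal - mu) / sd
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    P = Vt[:n_components].T                       # retained loadings
    lam = (S[:n_components] ** 2) / (len(Z) - 1)  # retained component variances
    return mu, sd, P, lam

def monitoring_stats(x, mu, sd, P, lam):
    """Hotelling T^2 (variation inside the PCA subspace) and SPE/Q
    (residual outside the subspace) for a single new telemetry sample."""
    z = (x - mu) / sd
    scores = P.T @ z
    t2 = float(np.sum(scores ** 2 / lam))
    residual = z - P @ scores
    spe = float(residual @ residual)
    return t2, spe

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))                     # stand-in for normal ADCS telemetry
mu, sd, P, lam = fit_pca_monitor(X, n_components=3)
print(monitoring_stats(X[0], mu, sd, P, lam))         # nominal sample: small statistics
print(monitoring_stats(X[0] + 6.0, mu, sd, P, lam))   # simulated fault: large statistics
```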

Keywords: Space telemetry monitoring, multivariate analysis, PCA algorithm, space operations.

2311 Quasi-ballistic Transport in Submicron Hg0.8Cd0.2Te Diodes: Hydrodynamic Modeling

Authors: M. Daoudi, A. Belghachi, L. Varani

Abstract:

In this paper, we analyze the problem of quasi-ballistic electron transport in ultra-small mercury-cadmium-telluride (Hg0.8Cd0.2Te, MCT) n+-n-n+ devices from the hydrodynamic point of view. From our study, we note that when the size of the active layer is less than 0.1 μm and for low applied bias (≥ 9 mV), quasi-ballistic transport has an important effect.

Keywords: Hg0.8Cd0.2Te semiconductor, Hydrodynamic model, Quasi-ballistic transport, Submicron diode

2310 An Eulerian Numerical Method and its Application to Explosion Problems

Authors: Li Hao, Yan Zhang, Jingan Cui

Abstract:

An Eulerian numerical method is proposed to analyze explosions in tunnels. Based on this method, an original software package, M-MMIC2D, is developed in the Cµ programming language. With this software, the explosion problem in a tunnel with three expansion chambers is numerically simulated, and the results are found to be in full agreement with the observed experimental data.

Keywords: Eulerian method, numerical simulation, shock wave, tunnel

2309 Analysis and Evaluation of the Public Responses to Traffic Congestion Pricing Schemes in Urban Streets

Authors: Saeed Sayyad Hagh Shomar

Abstract:

Traffic congestion pricing in urban streets is one of the most suitable options for solving traffic problems and environmental pollution in the cities of the country. Despite its acceptable outcomes, there are problems concerning the requirement that the general public pay. Given that public response is highly influential in the success of this strategy, studying public response and behavior in order to obtain feedback and improve the schemes is of great importance. In this study, a questionnaire was used to examine the public reactions to the traffic congestion pricing schemes at the center of the Tehran metropolis, and the factors involved in people's decisions to accept or reject the congestion pricing schemes were assessed based on the data obtained from the questionnaire as well as on international experience. Then, by analyzing and comparing the schemes, guidelines to reduce public objections to them are discussed. The results of reviewing and evaluating the public reactions show that all the pros and cons must be considered to guarantee the success of these projects. Consequently, with targeted public education and consciousness-raising advertisements prior to initiating a scheme, and by ensuring the mechanism of implementation after the start of the project, the initial opposition is reduced and, with the gradual emergence of the real and tangible benefits of its implementation, users' satisfaction will increase.

Keywords: Demand management, international experiences, traffic congestion pricing, public acceptance, public objection.

2308 Agreement Options on Multi Criteria Group Decision and Negotiation

Authors: Christiono Utomo, Arazi Idrus, Madzlan Napiah, Mohd. Faris Khamidi

Abstract:

This paper presents a conceptual model of agreement options in negotiation support for civil engineering decisions. The negotiation support facilitates the solving of group choice decision making problems in civil engineering decisions aimed at reducing the impact of the mud volcano disaster in Sidoarjo, Indonesia. The approach is based on the application of the analytic hierarchy process (AHP) method for multi-criteria decisions on a three-level decision hierarchy. Decisions for reducing the impact are very complicated since many parties are involved at a critical time. Where a number of stakeholders are involved in choosing a single alternative from a set of solution alternatives, there are different concerns caused by differing stakeholder preferences, experiences, and backgrounds. Therefore, a group choice decision support is required to enable each stakeholder to evaluate and rank the solution alternatives before engaging in negotiation with the other stakeholders. Such civil engineering solution alternatives are referred to as agreement options; they are determined by identifying the possible stakeholder choices and then determining the optimal solution for each group of stakeholders. Determination of the optimal solution is based on a game theory model of an n-person general sum game with complete information that involves forming coalitions among stakeholders.
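
As a concrete illustration of the AHP step on one level of the hierarchy, the Python sketch below derives priority weights from a pairwise-comparison matrix via its principal eigenvector and reports the consistency index; the comparison values are invented for illustration, not the stakeholders' judgments from the study.

```python
import numpy as np

def ahp_priorities(pairwise):
    """AHP priority weights: principal right eigenvector of the
    pairwise-comparison matrix, normalized to sum to one."""
    A = np.asarray(pairwise, dtype=float)
    eigvals, eigvecs = np.linalg.eig(A)
    k = int(np.argmax(eigvals.real))
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    n = A.shape[0]
    ci = (eigvals[k].real - n) / (n - 1)          # consistency index
    return w, ci

# Illustrative pairwise comparisons for three criteria (not from the paper).
A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
weights, ci = ahp_priorities(A)
print(weights, ci)
```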

Keywords: Agreement options, AHP, agent, negotiation, multicriteria, game theory, and coalition.

2307 Application of Neural Network in User Authentication for Smart Home System

Authors: A. Joseph, D.B.L. Bong, D.A.A. Mat

Abstract:

Security has been an important issue and concern in smart home systems. Since smart home networks consist of a wide range of wired or wireless devices, there is a possibility that illegal access to some restricted data or devices may happen. Password-based authentication is widely used to identify authorized users, because this method is cheap, easy and quite accurate. In this paper, a neural network is trained to store the passwords instead of using a verification table. This method is useful in solving security problems that occur in some authentication systems. The conventional way to train the network using backpropagation (BPN) requires a long training time. Hence, a faster training algorithm, resilient backpropagation (RPROP), is embedded in the MLP neural network to accelerate the training process. For the data part, 200 sets of user IDs and passwords were created and encoded into binary as the input. Simulations were carried out to evaluate the performance for different numbers of hidden neurons and combinations of transfer functions. Mean square error (MSE), training time and number of epochs are used to determine the network performance. From the results obtained, using Tansig and Purelin in the hidden and output layers with 250 hidden neurons gave the best performance. As a result, a password-based user authentication system for smart homes using a neural network has been developed successfully.

Keywords: Neural Network, User Authentication, Smart Home, Security

2306 Ordinal Regression with Fenton-Wilkinson Order Statistics: A Case Study of an Orienteering Race

Authors: Joonas Pääkkönen

Abstract:

In sports, individuals and teams are typically interested in final rankings. Final results, such as times or distances, dictate these rankings, also known as places. Places can be further associated with ordered random variables, commonly referred to as order statistics. In this work, we introduce a simple, yet accurate order statistical ordinal regression function that predicts relay race places with changeover-times. We call this function the Fenton-Wilkinson Order Statistics model. This model is built on the following educated assumption: individual leg-times follow log-normal distributions. Moreover, our key idea is to utilize Fenton-Wilkinson approximations of changeover-times alongside an estimator for the total number of teams as in the notorious German tank problem. This original place regression function is sigmoidal and thus correctly predicts the existence of a small number of elite teams that significantly outperform the rest of the teams. Our model also describes how place increases linearly with changeover-time at the inflection point of the log-normal distribution function. With real-world data from Jukola 2019, a massive orienteering relay race, the model is shown to be highly accurate even when the size of the training set is only 5% of the whole data set. Numerical results also show that our model exhibits smaller place prediction root-mean-square-errors than linear regression, mord regression and Gaussian process regression.
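
Two ingredients of the model are simple enough to sketch in Python: the Fenton-Wilkinson moment-matching approximation of a sum of independent log-normal leg-times, and the classical minimum-variance unbiased "German tank" estimator of the total number of teams. The parameter values below are illustrative and are not estimates from the Jukola 2019 data.

```python
import numpy as np

def fenton_wilkinson(mus, sigmas):
    """Approximate the sum of independent log-normal leg-times by a single
    log-normal whose first two moments match those of the exact sum."""
    mus, sigmas = np.asarray(mus, float), np.asarray(sigmas, float)
    means = np.exp(mus + sigmas ** 2 / 2)
    variances = (np.exp(sigmas ** 2) - 1) * np.exp(2 * mus + sigmas ** 2)
    m, v = means.sum(), variances.sum()
    sigma2 = np.log(1 + v / m ** 2)
    mu = np.log(m) - sigma2 / 2
    return mu, np.sqrt(sigma2)        # parameters of the approximating log-normal

def german_tank_estimate(observed_places):
    """Minimum-variance unbiased estimate of the total number of teams from
    a sample of observed place numbers (the serial-number problem)."""
    k, m = len(observed_places), max(observed_places)
    return m + m / k - 1

print(fenton_wilkinson(mus=[6.0, 6.2, 5.9], sigmas=[0.10, 0.15, 0.12]))
print(german_tank_estimate([34, 120, 551, 1002]))
```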

Keywords: Fenton-Wilkinson approximation, German tank problem, log-normal distribution, order statistics, ordinal regression, orienteering, sports analytics, sports modeling.

2305 A Localized Interpolation Method Using Radial Basis Functions

Authors: Mehdi Tatari

Abstract:

Finding the interpolation function of a given set of nodes is an important problem in scientific computing. In this work, a kind of localization is introduced using radial basis functions, which finds a sufficiently smooth solution without consuming a large amount of time or computer memory. Some examples are presented to show the efficiency of the new method.
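
The abstract does not spell out the localization scheme, so the Python sketch below illustrates the general idea with a simple nearest-neighbour variant: each query point is interpolated from a small Gaussian-RBF system built only from its k closest nodes, instead of solving one large dense system over all nodes.

```python
import numpy as np

def rbf_local_interpolate(x_data, y_data, x_query, eps=3.0, k=8):
    """Localized interpolation with Gaussian radial basis functions: for each
    query point, solve an RBF system on its k nearest data points only."""
    x_data, y_data = np.asarray(x_data, float), np.asarray(y_data, float)
    results = []
    for xq in np.atleast_1d(np.asarray(x_query, dtype=float)):
        idx = np.argsort(np.abs(x_data - xq))[:k]           # local stencil
        xs, ys = x_data[idx], y_data[idx]
        A = np.exp(-(eps * (xs[:, None] - xs[None, :])) ** 2)
        w = np.linalg.solve(A, ys)                          # local RBF weights
        phi = np.exp(-(eps * (xq - xs)) ** 2)
        results.append(float(phi @ w))
    return np.array(results)

x = np.linspace(0, 2 * np.pi, 40)
y = np.sin(x)
xq = np.array([1.0, 2.5, 4.0])
print(rbf_local_interpolate(x, y, xq))
print(np.sin(xq))    # interpolated values should be close to the exact ones
```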

Keywords: Radial basis functions, local interpolation method, closed form solution.

2304 Attribution Theory and Perceived Reliability of Cellphones for Teaching and Learning

Authors: Mayowa A. Sofowora, Seraphim D. Eyono Obono

Abstract:

The use of information and communication technologies such as computers, mobile phones and the Internet is becoming prevalent in today's world, facilitating access to a vast amount of data, services and applications for the improvement of people's lives. However, this prevalence of ICTs is hampered by the problem of low income levels in developing countries, to the point where people cannot timeously replace or repair their ICT devices when damaged or lost; this problem serves as the motivation for this study, whose aim is to examine the perceptions of teachers on the reliability of cellphones when used for teaching and learning purposes. The research objectives supporting this aim are of two types: objectives on the selection and design of theories and models, and objectives on the empirical testing of these theories and models. The first type of objectives is achieved using content analysis in an extensive literature survey, and the second type is achieved through a survey of high school teachers from the ILembe and UMgungundlovu districts in the KwaZulu-Natal province of South Africa. Data collected from this questionnaire-based survey are analysed in SPSS using descriptive statistics and Pearson correlations after checking the reliability and validity of the questionnaires. The main hypothesis driving this study is that there is a relationship between the demographics and the attribution identity of teachers on one hand, and their perceptions of the reliability of cellphones on the other hand, as suggested by the existing literature; attribution identities are considered in this study from three angles: intention, knowledge and ability, and action. The results of this study confirm that the perceptions of teachers on the reliability of cellphones for teaching and learning are affected by the school location of these teachers, and by their perceptions of learners' cellphone usage intentions and actual use.

Keywords: Attribution, Cellphones, E-learning, Reliability

2303 Integrated Subset Split for Balancing Network Utilization and Quality of Routing

Authors: S. V. Kasmir Raja, P. Herbert Raj

Abstract:

The overlay approach has been widely used by many service providers for traffic engineering (TE) in large Internet backbones. In the overlay approach, logical connections are set up between edge nodes to form a full-mesh virtual network on top of the physical topology, and IP routing is then run over the virtual network. Traffic engineering objectives are achieved by carefully routing logical connections over the physical links. Although the overlay approach has been implemented in many operational networks, it has a number of well-known scaling issues. This paper proposes a new approach that achieves traffic engineering without full-mesh overlaying, with the help of an integrated approach and an equal subset split method. Traffic engineering needs to determine the optimal routing of traffic over the existing network infrastructure by efficiently allocating resources in order to optimize traffic performance on an IP network. Even though constraint-based routing [1] in Multi-Protocol Label Switching (MPLS) was developed to address this need, it is not widely tested or debugged, and Internet Service Providers (ISPs) therefore resort to TE methods under Open Shortest Path First (OSPF), which is the most commonly used intra-domain routing protocol. Determining OSPF link weights for optimal network performance is an NP-hard problem. As it is impractical to solve this problem exactly, we present a subset split method to improve efficiency and performance by minimizing the maximum link utilization in the network via a small number of link weight modifications. The results of this method are compared against the results of the MPLS architecture [9] and other heuristic methods.
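
The subset split method itself is not reproduced here; as a baseline illustration of the underlying objective, the Python sketch below (using the networkx library) routes each demand on its shortest path and greedily changes one link weight at a time whenever that lowers the maximum link utilization. The topology, capacities and demands are toy values, and the greedy search is a generic stand-in rather than the paper's algorithm.

```python
import itertools
import networkx as nx

def max_utilization(G, demands):
    """Route each demand on its current shortest path (by link weight)
    and return the maximum link utilization."""
    load = {e: 0.0 for e in G.edges()}
    for (s, t), volume in demands.items():
        path = nx.shortest_path(G, s, t, weight="weight")
        for u, v in zip(path, path[1:]):
            key = (u, v) if (u, v) in load else (v, u)
            load[key] += volume
    return max(load[e] / G.edges[e]["capacity"] for e in load)

def greedy_weight_search(G, demands, candidate_weights=(1, 3, 5, 10), passes=3):
    """Keep any single link-weight change that lowers the maximum utilization."""
    best = max_utilization(G, demands)
    for _ in range(passes):
        for e, w in itertools.product(list(G.edges()), candidate_weights):
            old = G.edges[e]["weight"]
            G.edges[e]["weight"] = w
            u = max_utilization(G, demands)
            if u < best:
                best = u
            else:
                G.edges[e]["weight"] = old
    return best

G = nx.Graph()
G.add_edge("A", "B", weight=1, capacity=10.0)
G.add_edge("B", "C", weight=1, capacity=10.0)
G.add_edge("A", "C", weight=1, capacity=5.0)
demands = {("A", "C"): 6.0}
print(max_utilization(G, demands))        # before: direct A-C link is overloaded
print(greedy_weight_search(G, demands))   # after: traffic shifted to A-B-C
```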

Keywords: Constraint based routing, Link Utilization, Subset split method and Traffic Engineering.

2302 Methane and Other Hydrocarbon Gas Emissions Resulting from Flaring in Kuwait Oilfields

Authors: Khaireyah Kh. Al-Hamad, V. Nassehi, A. R. Khan

Abstract:

Air pollution is a major environmental health problem, affecting developed and developing countries around the world. Increasing amounts of potentially harmful gases and particulate matter are being emitted into the atmosphere on a global scale, resulting in damage to human health and the environment. Petroleum-related air pollutants can have a wide variety of adverse environmental impacts. In the crude oil production sector, there is a strong need for a thorough knowledge of the gaseous emissions resulting from the flaring of associated gas of known composition on a daily basis through combustion activities under several operating conditions. This can help in the control of gaseous emissions from flares and thus in the protection of their immediate and distant surroundings against environmental degradation. The impacts of methane and non-methane hydrocarbon emissions from flaring activities at oil production facilities in the Kuwait Oilfields have been assessed through a screening study using records of flaring operations taken at the gas and oil production sites, and by analyzing available meteorological and air quality data measured at stations located near anthropogenic sources. In the present study, the Industrial Source Complex (ISCST3) dispersion model is used to calculate the ground level concentrations of methane and non-methane hydrocarbons emitted due to flaring over all the Kuwait Oilfields. The simulation of real hourly air quality in and around oil production facilities in the State of Kuwait for the year 2006, obtained by inserting the respective source emission data into the ISCST3 software, indicates that the levels of non-methane hydrocarbons from the flaring activities exceed the allowable ambient air standard set by the Kuwait EPA. There is therefore a strong need to address this acute problem to minimize the impact of methane and non-methane hydrocarbons released from flaring activities over the urban area of Kuwait.

Keywords: Kuwait Oilfields, ISCST3 model, flaring, Air pollution, Methane and Non-methane.

2301 Computational Method for Annotation of Protein Sequence According to Gene Ontology Terms

Authors: Razib M. Othman, Safaai Deris, Rosli M. Illias

Abstract:

Annotation of a protein sequence is pivotal for the understanding of its function. The accuracy of manual annotation provided by curators is still questionable due to its weaker evidence strength, and manual annotation remains a hard and time-consuming task. A number of computational methods and tools have been developed to tackle this challenging task. However, they require high-cost hardware, are difficult for bioscientists to set up, or depend on time-intensive, blind sequence similarity searches such as the Basic Local Alignment Search Tool. This paper introduces a new method of assigning highly correlated Gene Ontology terms of annotated protein sequences to partially annotated or newly discovered protein sequences. This method is fully based on Gene Ontology data and annotations. Two problems had to be addressed to achieve this method. The first problem relates to splitting the single monolithic Gene Ontology RDF/XML file into a set of smaller files that are easy to access and process, so that these files can be enriched with protein sequences and Inferred from Electronic Annotation evidence associations. The second problem involves searching for a set of Gene Ontology terms semantically similar to a given query. The details of the macro and micro problems involved and their solutions, including the objective of this study, are described. This paper also describes protein sequence annotation and the Gene Ontology. The methodology of this study and a Gene Ontology-based protein sequence annotation tool, namely extended UTMGO, are presented. Furthermore, its basic version, which is a Gene Ontology browser based on semantic similarity search, is also introduced.

Keywords: automatic clustering, bioinformatics tool, gene ontology, protein sequence annotation, semantic similarity search
