Search results for: technology acceptance model (TAM)
6900 An Innovative Wireless Sensor Network Protocol Implementation using a Hybrid FPGA Technology
Authors: Danielle Reichel, Antoine Druilhe, Tuan Dang
Abstract:
Traditional development of wireless sensor network motes is generally based on SoC platforms. Such a method of development faces three main drawbacks: lack of flexibility in development due to the low resources and rigid architecture of SoCs; low capability of evolution and portability versus performance if specific micro-controller architecture features are used; and the rapid obsolescence of micro-controllers compared to the long lifetime of power plants or other industrial installations. To overcome these drawbacks, we have explored a new approach to the development of wireless sensor network motes using a hybrid FPGA technology. The application of this approach is illustrated through the implementation of an innovative wireless sensor network protocol called OCARI.
Keywords: Hybrid FPGA, embedded system, mote, flexibility, durability, OCARI protocol, SoC, wireless sensor network.
6899 A Study on the Effect of Design Factors of Slim Keyboard’s Tactile Feedback
Authors: Kai-Chieh Lin, Chih-Fu Wu, Hsiang Ling Hsu, Yung-Hsiang Tu, Chia-Chen Wu
Abstract:
With the rapid development of computer technology, the design of computers and keyboards is moving towards slimness. This change in mobile input devices directly influences users’ behavior. Although multi-touch applications allow entering text through a virtual keyboard, the performance, feedback, and comfort of the technology are inferior to those of a traditional keyboard, and while manufacturers have launched mobile touch keyboards and projection keyboards, their performance has not been satisfying. Therefore, this study discussed the design factors of slim pressure-sensitive keyboards. The factors were evaluated with an objective evaluation (accuracy and speed) and a subjective evaluation (operability, recognition, feedback, and difficulty) depending on the shape (circle, rectangle, and L-shaped), thickness (flat, 3 mm, and 6 mm), and actuation force (35±10 g, 60±10 g, and 85±10 g) of the keys. Moreover, MANOVA and Taguchi methods (regarding signal-to-noise ratios) were conducted to find the optimal level of each design factor. The research participants were divided into two groups by their typing speed (30 words/minute). Considering the multitude of variables and levels, the experiments were implemented using a fractional factorial design, and a representative model of the research samples was established for input task testing. The findings showed that participants with low typing speed relied primarily on vision to recognize the keys, whereas those with high typing speed relied on tactile feedback, which was affected by the thickness and force of the keys. In the objective and subjective evaluations, a combination of keyboard design factors that might result in higher performance and satisfaction (L-shaped, 3 mm, and 60±10 g) was identified as the optimal combination. The learning curve was analyzed against a traditional standard keyboard to investigate the influence of user experience on keyboard operation. The research results indicated that the optimal combination provided input performance inferior to that of a standard keyboard. The results could serve as a reference for the development of related products in industry and for application to touch devices and input interfaces with which people interact.
Keywords: Input performance, mobile device, slim keyboard, tactile feedback.
6898 A Materialized Approach to the Integration of XML Documents: the OSIX System
Authors: H. Ahmad, S. Kermanshahani, A. Simonet, M. Simonet
Abstract:
The data exchanged on the Web are of a different nature from those treated by classical database management systems; they are called semi-structured data since they do not have the regular, static structure of data found in a relational database: their schema is dynamic and may contain missing data or types. This raises the need for further techniques and algorithms to exploit and integrate such data and extract information relevant to the user. In this paper we present the OSIX system (Osiris-based System for Integration of XML Sources). This system has a data warehouse model designed for the integration of semi-structured data, and more precisely for the integration of XML documents. The architecture of OSIX relies on the Osiris system, a DL-based model designed for the representation and management of databases and knowledge bases. Osiris is a view-based data model whose indexing system supports semantic query optimization. We show that query processing on an XML source is optimized by the indexing approach proposed by Osiris.
Keywords: Data integration, semi-structured data, views, XML.
6897 Using “Eckel” Model to Measure Income Smoothing Practices: The Case of French Companies
Authors: Feddaoui Amina
Abstract:
Income smoothing represents an attempt on the part of a company's management to reduce variations in earnings through the manipulation of accounting principles. In this study, we aimed to measure income smoothing practices in a sample of 30 French joint stock companies during the period 2007-2009. We used the dummy variables method and the Eckel model to measure income smoothing practices, and a binomial test in SPSS to confirm or refute our hypothesis. The study concluded that there are no statistically significant indicators of income smoothing practices in the studied sample of French companies during the period 2007-2009; the income series of the sample is therefore characterized by stability and non-volatility, without intervention of management through accounting manipulation. However, this type of accounting manipulation should be taken into account, and efforts should be made by control bodies to apply the Eckel model and generalize its use at the global level.
Keywords: Income, smoothing, “Eckel”, French companies.
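Eckel's index is the ratio of the coefficient of variation of the change in income to that of the change in sales; a value below 1 is conventionally read as evidence of smoothing. A minimal sketch in Python (the series below are hypothetical, not the paper's data):

```python
import numpy as np

def eckel_index(income, sales):
    """Eckel (1981): CV of income changes over CV of sales changes.
    An index below 1 flags the firm as an income smoother."""
    d_income, d_sales = np.diff(income), np.diff(sales)
    cv = lambda x: np.std(x, ddof=1) / abs(np.mean(x))
    return cv(d_income) / cv(d_sales)

# Hypothetical three-year series (2007-2009), in millions of euros
income = np.array([12.0, 12.4, 12.6])
sales = np.array([100.0, 118.0, 95.0])
print(eckel_index(income, sales))  # < 1 would suggest smoothing
```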
6896 Developing a Research Culture in the Faculty of Engineering and Information Technology at the Central University of Technology, Free State: Implications for Knowledge Management
Authors: Mpho A. Mbeo, Patient Rambe
Abstract:
The Central University of Technology, Free State (CUT) is in the 13th year of its transition from a vocational and professional training institution (a technikon) into a university with a strong research focus, and the transition has been neither smooth nor easy. At the heart of it has been the need to transform the mindset of an academic and research staff complement accustomed to training graduates for industrial placement. The lack of a research culture that fully embraces the ethos of conducting cutting-edge research needs to be addressed. The induction and socialisation of academic staff into the development and execution of cutting-edge research also required the provision of research support and the creation of a conducive academic environment, both for emerging and non-research-active academics. Drawing on ten cases, consisting of four heads of departments, three seasoned researchers, and three novice researchers, this study explores the challenges faced in establishing a strong research culture at the university. Furthermore, it gives an account of the extent to which the current research interventions have addressed the perceived missing research culture, and of the implications of these interventions for knowledge management. Evidence suggests that an ideal institutional research environment, consisting of mentorship of novice researchers by seasoned researchers and a balanced division of effort between teaching and research responsibilities, should be supported by strong research-oriented leadership. Furthermore, recruitment of research-passionate staff and adoption of a salary structure that encourages the retention of excellent scholars should be matched by a coherent research incentive culture to grow research publication outputs. This is critical for building new knowledge and entrenching knowledge management founded on communities of practice and scholarly networking through the documentation and communication of research findings. The study concludes that the multiple policy documents set for the different domains of research may be creating pressure on researchers to engage in research activities and increase output at the expense of research quality.
Keywords: Central University of Technology, performance, publication, research culture, university.
6895 Green Function and Eshelby Tensor Based on Mindlin’s 2nd Gradient Model: An Explicit Study of Spherical Inclusion Case
Authors: A. Selmi, A. Bisharat
Abstract:
Using the Fourier transform and based on Mindlin's 2nd gradient model, which involves two length scale parameters, the Green's function, the Eshelby tensor, and the Eshelby-like tensor for a spherical inclusion are derived. It is proved that the Eshelby tensor consists of two parts: the classical Eshelby tensor and a gradient part including the length scale parameters, which enables the interpretation of the size effect. When the strain gradient is not taken into account, the obtained Green's function and Eshelby tensor reduce to their analogues in classical elasticity. The Eshelby tensor inside and outside the inclusion, the volume average of the gradient part, and the Eshelby-like tensor are explicitly obtained. Unlike the classical Eshelby tensor, the components of the new Eshelby tensor vary with position and with the inclusion dimensions. It is demonstrated that the contribution of the gradient part should not be neglected.
Keywords: Eshelby tensor, Eshelby-like tensor, Green’s function, Mindlin’s 2nd gradient model, Spherical inclusion.
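When the two length scales vanish, the gradient part drops out and the classical Eshelby tensor for a spherical inclusion is recovered. For reference, its independent non-zero components in an isotropic matrix with Poisson's ratio ν are the textbook values (a general elasticity result, not specific to this paper):

```latex
S_{1111}=S_{2222}=S_{3333}=\frac{7-5\nu}{15(1-\nu)},\qquad
S_{1122}=S_{2233}=S_{3311}=\frac{5\nu-1}{15(1-\nu)},\qquad
S_{1212}=S_{2323}=S_{3131}=\frac{4-5\nu}{15(1-\nu)}.
```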
6894 A New Quantile Based Fuzzy Time Series Forecasting Model
Authors: Tahseen A. Jilani, Aqil S. Burney, C. Ardil
Abstract:
Time series models have been used to make predictions of academic enrollments, weather, road accident casualties, stock prices, etc. Based on the concepts of quantile regression models, we have developed a simple time-variant quantile-based fuzzy time series forecasting method. The proposed method bases the forecast on a prediction of the future trend of the data. In place of the actual quantiles of the data at each point, we have converted the statistical concept into a fuzzy one by using fuzzy quantiles defined through an ensemble of fuzzy membership functions. We have given a fuzzy metric to use the trend forecast and calculate the future value. The proposed model is applied to TAIFEX forecasting. It is shown that the proposed method works best compared to other models with respect to model complexity and forecasting accuracy.
Keywords: Quantile regression, fuzzy time series, fuzzy logical relationship groups, heuristic trend prediction.
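The abstract does not spell out the fuzzification step; the general idea of replacing a crisp quantile with a membership-weighted one can be sketched as follows (a toy illustration with a triangular membership function and spread chosen by us, not taken from the paper):

```python
import numpy as np

def triangular_mf(x, a, b, c):
    """Triangular membership function peaking at b on support [a, c]."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_quantile(window, q, spread=0.1):
    """Crisp quantile softened by membership weights around it (toy choice)."""
    crisp = np.quantile(window, q)
    lo, hi = crisp * (1 - spread), crisp * (1 + spread)
    pts = window[(window >= lo) & (window <= hi)]
    if len(pts) == 0:
        return float(crisp)
    w = triangular_mf(pts, lo, crisp, hi)
    return float(np.average(pts, weights=w)) if w.sum() > 0 else float(crisp)

series = np.array([7.2, 7.4, 7.1, 7.6, 7.9, 8.0, 7.8, 8.2])
window = series[-5:]
trend = fuzzy_quantile(window, 0.75) - fuzzy_quantile(window, 0.25)
print(series[-1] + 0.5 * trend)  # naive trend-based next-step forecast
```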
6893 Wicking and Evaporation of Liquids in Knitted Fabrics: Analytic Solution of Capillary Rise Restrained by Gravity and Evaporation
Authors: N. S. Achour, M. Hamdaoui, S. Ben Nasrallah
Abstract:
Wicking and evaporation of water in porous knitted fabrics are investigated by combining experimental and analytical approaches: the standard wicking model of Lucas and Washburn is enhanced to account for evaporation and gravity effects. The goal is to model the effect of gravity and evaporation on wicking using simple analytical expressions and to investigate the influence of geometrical fabric parameters, such as porosity and thickness, on the impact of evaporation on the maximum reachable height. The results show that fabric properties have a significant influence on the evaporation effect. An experimental study of water kinetics in different knitted fabrics was carried out gravimetrically, permitting measurement of the mass and the height of liquid rising in the fabrics under various atmospheric conditions. From these measurements, characteristic pore parameters (capillary radius and permeability) can be determined.
Keywords: Evaporation, experimental study, geometrical parameters, model, porous knitted fabrics, wicking.
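The Lucas-Washburn balance extended with gravity reads dh/dt = (r²/8μh)(2γcosθ/r − ρgh). A numerical sketch in Python, with evaporation added as a height-proportional loss term (the form of that term and all parameter values are our assumptions, not the paper's analytic solution):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Water in a capillary of radius r (SI units); k_e is a lumped
# evaporation coefficient (an assumption, not the paper's value).
gamma, theta, mu, rho, g = 0.072, 0.0, 1.0e-3, 1000.0, 9.81
r, k_e = 50e-6, 2e-4

def rise_rate(t, h):
    p_cap = 2 * gamma * np.cos(theta) / r            # capillary pressure
    poiseuille = r**2 / (8 * mu * h[0]) * (p_cap - rho * g * h[0])
    return [poiseuille - k_e * h[0]]                 # evaporation loss term

sol = solve_ivp(rise_rate, (0, 600), [1e-4], max_step=0.5)
print(f"height after 10 min: {sol.y[0, -1] * 100:.2f} cm")
```

With k_e = 0 the solution approaches the classical Jurin height 2γcosθ/(ρgr); the evaporation term lowers the steady-state height, which is the qualitative effect the paper quantifies.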
6892 Application of Modified Maxwell-Stefan Equation for Separation of Aqueous Phenol by Pervaporation
Authors: Ujjal K Ghosh, Ling Teen
Abstract:
Pervaporation has the potential to be an alternative to traditional separation processes such as distillation, adsorption, reverse osmosis, and extraction. This study investigates the separation of phenol from water through a polyurethane membrane by pervaporation, applying the modified Maxwell-Stefan model. The model takes into account the non-ideal multi-component solubility effect, the non-ideal diffusivity of all permeating components, the concentration-dependent density of the membrane, and diffusion coupling to predict the various fluxes. Four cases have been developed to investigate the effects of process parameters, namely feed concentration, membrane thickness, operating temperature, and downstream pressure, on the flux and on the weight fraction of phenol in the permeate. The model could describe semi-quantitatively the performance of the pervaporation membrane for the given system, as a very good agreement between the observed and theoretical fluxes was observed.
Keywords: Pervaporation, phenol, polyurethane, modified Maxwell-Stefan equation, solution diffusion.
6891 Flat Miniature Heat Pipes for Electronics Cooling: State of the Art, Experimental and Theoretical Analysis
Authors: M.C. Zaghdoudi, S. Maalej, J. Mansouri, M.B.H. Sassi
Abstract:
An experimental study is carried out in order to verify the Mini Heat Pipe (MHP) concept for cooling high-power-dissipation electronic components and to determine the potential advantages of constructing mini channels as an integrated part of a flat heat pipe. A Flat Mini Heat Pipe (FMHP) prototype including a capillary structure composed of parallel rectangular microchannels is manufactured, and a filling apparatus is developed in order to charge the FMHP. The heat transfer improvement is demonstrated for different heat input flux rates by comparing the heat pipe thermal resistance to the heat conduction thermal resistance of a copper plate having the same dimensions as the tested FMHP. Moreover, the heat transfer in the evaporator and condenser sections is analyzed, and heat transfer laws are proposed. In the theoretical part of this work, a detailed mathematical model of a FMHP with axial microchannels is developed, in which the fluid flow is considered along with the heat and mass transfer processes during evaporation and condensation. The model is based on the equations for mass, momentum and energy conservation, written for the evaporator, adiabatic, and condenser zones. The model, which permits the simulation of several microchannel shapes, can predict the maximum heat transfer capacity of the FMHP, the optimal fluid mass, and the flow and thermal parameters along the FMHP. The comparison between experimental and model results shows the good ability of the numerical model to predict the axial temperature distribution along the FMHP.
Keywords: Electronics cooling, micro heat pipe, mini heat pipe, mini heat spreader, capillary grooves.
6890 Confidence Intervals for Double Exponential Distribution: A Simulation Approach
Authors: M. Alrasheedi
Abstract:
The double exponential model (DEM), or Laplace distribution, is used in various disciplines. However, there are issues related to the construction of confidence intervals (CIs) when using this distribution. In this paper, the properties of the DEM are considered with the intention of constructing CIs based on simulated data. Pivotal equations for the model are analyzed in comparison with the pivotal equations for the normal distribution, and the results obtained from simulated data are presented.
Keywords: Confidence intervals, double exponential model, pivotal equations, simulation.
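A minimal sketch of the simulation idea, assuming the usual studentized pivot whose quantiles are estimated by Monte Carlo under Laplace sampling instead of being taken from the normal table (illustrative, not the paper's exact procedure):

```python
import numpy as np

rng = np.random.default_rng(0)
n, B, mu, b = 20, 100_000, 0.0, 1.0  # sample size, replications, Laplace(mu, b)

# Estimate the distribution of the pivot T = (xbar - mu) / (s / sqrt(n))
# under Laplace sampling by brute-force simulation.
samples = rng.laplace(mu, b, size=(B, n))
pivots = (samples.mean(axis=1) - mu) / (samples.std(axis=1, ddof=1) / np.sqrt(n))
q_lo, q_hi = np.quantile(pivots, [0.025, 0.975])

# CI for the location of a newly observed sample, using simulated quantiles
x = rng.laplace(0.3, 1.0, size=n)
half = x.std(ddof=1) / np.sqrt(n)
print(x.mean() - q_hi * half, x.mean() - q_lo * half)  # ~95% CI for mu
```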
6889 Estimating Shortest Circuit Path Length Complexity
Authors: Azam Beg, P. W. Chandana Prasad, S.M.N.A Senenayake
Abstract:
When binary decision diagrams are formed from uniformly distributed Monte Carlo data for a large number of variables, the complexity of the decision diagrams exhibits a predictable relationship to the number of variables and minterms. In the present work, a neural network model has been used to analyze the pattern of shortest path length for larger numbers of Monte Carlo data points. The neural model shows strong descriptive power for the ISCAS benchmark data, with an RMS error of 0.102 for the shortest path length complexity. The model can therefore be considered a method of predicting path length complexities; this is expected to lead to minimum time complexity of very large-scale integrated circuits and of related computer-aided design tools that use binary decision diagrams.
Keywords: Monte Carlo circuit simulation data, binary decision diagrams, neural network modeling, shortest path length estimation.
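The paper's network architecture is not given; a generic regression sketch of the same idea (predicting a path-length figure from variable and minterm counts) with scikit-learn, on synthetic stand-in data whose functional form is invented purely for illustration:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic stand-in data: (num_variables, num_minterms) -> path length.
n_vars = rng.integers(4, 24, size=500)
n_minterms = rng.integers(8, 4096, size=500)
path_len = np.log2(n_minterms) + 0.5 * n_vars + rng.normal(0, 0.1, 500)

X = np.column_stack([n_vars, np.log2(n_minterms)])
X_tr, X_te, y_tr, y_te = train_test_split(X, path_len, random_state=0)

model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0)
model.fit(X_tr, y_tr)
rmse = np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2))
print(f"RMS error: {rmse:.3f}")
```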
6888 ECG Based Reliable User Identification Using Deep Learning
Authors: R. N. Begum, Ambalika Sharma, G. K. Singh
Abstract:
Identity theft has serious ramifications beyond data and personal information loss. This necessitates the implementation of robust and efficient user identification systems. Automatic biometric recognition systems are therefore the need of the hour, and electrocardiogram (ECG)-based systems are an appealing choice due to their inherent characteristics. Convolutional Neural Networks (CNNs) are the recent state-of-the-art technique for ECG-based user identification systems. However, the results obtained are significantly below standards, and the situation worsens as the number of users and types of heartbeats in the dataset grows. As a result, this study proposes a highly accurate and resilient ECG-based person identification system using the dense learning framework of CNNs. The research explicitly explores the caliber of dense CNNs in the field of ECG-based human recognition. The study tests four different configurations of dense CNNs, trained on a dataset of recordings collected from eight popular ECG databases. With a worst-case False Acceptance Rate (FAR) of 0.04% and a worst-case False Rejection Rate (FRR) of 5%, the best performing network achieved an identification accuracy of 99.94%. The best network is also tested with various train/test split ratios. The findings show that DenseNets are not only extremely reliable but also highly efficient; they might therefore be implemented in real-time ECG-based human recognition systems.
Keywords: Biometrics, dense networks, identification rate, train/test split ratio.
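The paper's exact topology is not given; as an illustration of the dense connectivity pattern such networks build on, a minimal 1-D dense block in PyTorch (layer counts, channel sizes, and segment length are our placeholders):

```python
import torch
import torch.nn as nn

class DenseBlock1d(nn.Module):
    """Minimal 1-D dense block: each layer sees all previous feature maps."""
    def __init__(self, in_channels, growth, num_layers):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm1d(channels), nn.ReLU(),
                nn.Conv1d(channels, growth, kernel_size=3, padding=1)))
            channels += growth
        self.out_channels = channels

    def forward(self, x):
        for layer in self.layers:
            x = torch.cat([x, layer(x)], dim=1)   # dense connectivity
        return x

block = DenseBlock1d(in_channels=8, growth=12, num_layers=4)
beats = torch.randn(2, 8, 360)                    # batch of ECG segments
print(block(beats).shape)                         # -> torch.Size([2, 56, 360])
```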
6887 Modeling Influence on Petty Corruption Attitudes
Authors: Nina Bijedic, Drazena Gaspar, Mirsad Hadzikadic
Abstract:
Corruption is an influential and widespread problem. One part of it is so-called petty corruption: large-scale bribe giving by ordinary citizens trying to influence the workings of public administration or public services. As with all forms of corruption, petty corruption is related to the level of democracy (or administrative efficiency) in a society. The developed model captures some of the factors related to corruptive behavior, as well as people's attitude towards petty corruption. It has four basic elements: the user's perception of corruption in the society of interest, the influence of social interactions, the influence of a penalizing mechanism, and the influence of campaigns against petty corruption. The model is agent-based, developed in NetLogo, with many randomized settings that provide a wider scope of responses. Interactions among different settings of the elements' variables provide insight into the influence of each element on the attitude towards petty corruption, as well as on petty corruptive behavior.
Keywords: Agent based model, attitude, influence, petty corruption, society.
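The abstract names the four elements but not the update rule; a toy agent-based sketch in Python with an invented linear attitude update (illustrative only; the paper's NetLogo rules are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(7)
n, steps = 200, 50
attitude = rng.uniform(0, 1, n)       # 1 = tolerant of petty corruption
perception = rng.uniform(0, 1, n)     # perceived corruption in society
penalty, campaign = 0.05, 0.03        # strengths of the two interventions

for _ in range(steps):
    peers = attitude[rng.integers(0, n, n)]        # random social contacts
    social = 0.1 * (peers - attitude)              # drift toward peers
    attitude += social + 0.05 * (perception - attitude)
    attitude -= penalty * (attitude > 0.8)         # penalizing mechanism
    attitude -= campaign                           # anti-corruption campaign
    attitude = np.clip(attitude, 0, 1)

print(f"mean attitude after {steps} steps: {attitude.mean():.2f}")
```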
6886 Online Signature Verification Using Angular Transformation for e-Commerce Services
Authors: Peerapong Uthansakul, Monthippa Uthansakul
Abstract:
The rapid growth of e-Commerce services has been clearly observed in the past decade. However, the methods used to verify authenticated users still widely depend on numeric approaches, so the search for other verification methods suitable for online e-Commerce is an interesting issue. In this paper, a new online signature-verification method using an angular transformation is presented. Delay shifts existing in online signatures are estimated by an estimation method relying on angle representation. In the proposed signature-verification algorithm, all components of the input signature are extracted by considering the discontinuous break points in the stream of angular values. The estimated delay shift is then captured by comparison with the selected reference signature, and the matching error is computed as the main feature used in the verification process. The threshold offsets are calculated from the two error characteristics of the signature verification problem, the False Rejection Rate (FRR) and the False Acceptance Rate (FAR). The levels of these two error rates depend on the decision threshold, whose value is chosen so as to realize the Equal Error Rate (EER; FAR = FRR). The experimental results show that, through a simple program deployed on the Internet to demonstrate e-Commerce services, the proposed method provides 95.39% correct verifications, 7% better than a DP-matching-based signature-verification method. In addition, verification based on the extracted components provides more reliable results than a decision made on the whole signature.
Keywords: Online signature verification, e-Commerce services, angular transformation.
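The EER criterion mentioned above is straightforward to compute from genuine and forgery score samples; a minimal sketch (synthetic matching errors, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(3)
genuine = rng.normal(0.2, 0.10, 500)   # matching errors for genuine signatures
forgery = rng.normal(0.6, 0.15, 500)   # matching errors for forgeries

thresholds = np.linspace(0, 1, 1001)
frr = np.array([(genuine > t).mean() for t in thresholds])   # rejected genuines
far = np.array([(forgery <= t).mean() for t in thresholds])  # accepted forgeries

i = np.argmin(np.abs(far - frr))       # threshold where FAR ~= FRR
print(f"EER ~ {(far[i] + frr[i]) / 2:.3f} at threshold {thresholds[i]:.3f}")
```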
6885 Radial Basis Surrogate Model Integrated to Evolutionary Algorithm for Solving Computation Intensive Black-Box Problems
Authors: Abdulbaset Saad, Adel Younis, Zuomin Dong
Abstract:
For design optimization with high-dimensional expensive problems, an effective and efficient optimization methodology is desired. This work proposes a series of modifications to the Differential Evolution (DE) algorithm for solving computationally intensive black-box problems. The proposed methodology, called Radial Basis Meta-Model Assisted Differential Evolution (RBF-DE), is a global optimization algorithm based on meta-modeling techniques. A meta-model-assisted DE is proposed to solve computationally expensive optimization problems. The Radial Basis Function (RBF) model is used as a surrogate to approximate the expensive objective function, while DE employs a mechanism to dynamically select the best performing combination of parameters such as the differential rate, crossover probability, and population size. The proposed algorithm is tested on benchmark functions and on real-life practical applications and problems. The test results demonstrate that the proposed algorithm is promising and performs well compared to other optimization algorithms. It is capable of converging to acceptable and good solutions in terms of accuracy, number of evaluations, and time needed to converge.
Keywords: Differential evolution, engineering design, expensive computations, meta-modeling, radial basis function, optimization.
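A minimal sketch of the surrogate-assisted pattern using SciPy's stock building blocks (the paper's own DE modifications, sampling plan, and update strategy are not reproduced; the black-box function here is a stand-in):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.optimize import differential_evolution

def expensive_black_box(x):                 # stand-in for a costly simulation
    return np.sum(x**2) + 10 * np.sin(np.sum(x))

# 1) Sample the expensive function sparsely and fit an RBF surrogate.
rng = np.random.default_rng(0)
X = rng.uniform(-5, 5, size=(60, 3))
y = np.array([expensive_black_box(x) for x in X])
surrogate = RBFInterpolator(X, y, kernel="thin_plate_spline")

# 2) Run DE on the cheap surrogate instead of the expensive function.
res = differential_evolution(lambda x: surrogate(x.reshape(1, -1))[0],
                             bounds=[(-5, 5)] * 3, seed=0)
print(res.x, expensive_black_box(res.x))    # verify with one true evaluation
```

In a full surrogate-assisted loop, the point returned by DE would be evaluated on the true function and added to the sample set before refitting the surrogate.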
6884 Numerical Simulation on Deformation Behaviour of Additively Manufactured AlSi10Mg Alloy
Authors: Racholsan Raj Nirmal, B. S. V. Patnaik, R. Jayaganthan
Abstract:
The deformation behaviour of additively manufactured AlSi10Mg alloy under low strains, high strain rates, and elevated temperatures is essential for analysing and predicting its response to dynamic loading such as impact and thermomechanical fatigue. The Johnson-Cook constitutive relation is used to capture the strain rate sensitivity and thermal softening effect in AlSi10Mg alloy, and the Johnson-Cook failure model is widely used for exploring damage mechanics and predicting fracture in many materials. In the present work, Johnson-Cook material and damage model parameters for additively manufactured AlSi10Mg alloy have been determined numerically from four types of uniaxial tensile test: tensile tests at dynamic strain rates (0.1, 1, 10, 50, and 100 s-1) and elevated-temperature tensile tests at three temperatures (450 K, 500 K, and 550 K) were performed on 3D-printed AlSi10Mg alloy in ABAQUS/Explicit. Hexahedral elements were used to discretize the tensile specimens, and a fracture energy value of 43.6 kN/m was used for damage initiation. The Levenberg-Marquardt optimization method was used to evaluate the Johnson-Cook model parameters. It was observed that additively manufactured AlSi10Mg alloy shows relatively higher strain rate sensitivity and lower thermal stability compared to other Al alloys.
Keywords: ABAQUS, additive manufacturing, AlSi10Mg, Johnson-Cook model.
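For reference, the Johnson-Cook flow stress has the standard form σ = (A + Bεⁿ)(1 + C ln(ε̇/ε̇₀))(1 − T*ᵐ) with homologous temperature T* = (T − T_room)/(T_melt − T_room). A small sketch with placeholder parameter values (not the values identified in the paper):

```python
import numpy as np

def johnson_cook_stress(eps, eps_rate, T, A, B, n, C, m,
                        eps_rate0=1.0, T_room=293.0, T_melt=870.0):
    """Johnson-Cook flow stress: strain hardening x rate sensitivity
    x thermal softening. All parameter values here are placeholders."""
    T_star = np.clip((T - T_room) / (T_melt - T_room), 0.0, 1.0)
    return (A + B * eps**n) * (1 + C * np.log(eps_rate / eps_rate0)) \
           * (1 - T_star**m)

# Hypothetical parameter set, for illustration only (units: Pa)
print(johnson_cook_stress(eps=0.05, eps_rate=10.0, T=450.0,
                          A=270e6, B=430e6, n=0.55, C=0.02, m=1.3))
```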
6883 Modeling of Flood Mitigation Structures for Sarawak River Sub-basin Using InfoWorks River Simulation (RS)
Authors: Rosmina Bustami, Charles Bong, Darrien Mah, Afnie Hamzah, Marina Patrick
Abstract:
The distressing flood scenarios that have occurred in recent years in the areas surrounding the Sarawak River have damaged properties and indirectly disrupted productive activities. This study reconstructs a 100-year flood event that took place in this river basin. The Sarawak River sub-basin was chosen and modeled using a one-dimensional hydrodynamic modeling approach in InfoWorks River Simulation (RS), in combination with a Geographical Information System (GIS). This produces the hydraulic response of the river and its floodplains under extreme flooding conditions. With different parameters introduced to the model, correlations between observed and simulated data are between 79% and 87%. Using the best calibrated model, flood mitigation structures were imposed along the sub-basin, and analysis was done based on the simulation results. The results show that the proposed retention ponds constructed along the sub-basin provide the most efficient flood reduction, of 34.18%.
Keywords: Flood, flood mitigation structure, InfoWorks RS, retention pond, Sarawak River sub-basin.
6882 Dynamic Process Monitoring of an Ammonia Synthesis Fixed-Bed Reactor
Authors: Bothinah Altaf, Gary Montague, Elaine B. Martin
Abstract:
This study involves the modeling and monitoring of an ammonia synthesis fixed-bed reactor using partial least squares (PLS) and its variants. The process exhibits complex dynamic behavior due to the presence of heat recycling and feed quench. One limitation of a static PLS model in this situation is that it does not take account of the process dynamics, and hence dynamic PLS was used. Although dynamic PLS showed superior performance to static PLS in terms of prediction, its monitoring scheme was inappropriate, and hence adaptive PLS was considered. A limitation of adaptive PLS is that non-conforming observations also contribute to the model; therefore, a new adaptive approach, robust adaptive dynamic PLS, was developed. This approach updates a dynamic PLS model and is robust to non-representative data. The developed methodology showed a clear improvement over existing approaches in terms of the modeling of the reactor and the detection of faults.
Keywords: Ammonia synthesis fixed-bed reactor, dynamic partial least squares modeling, recursive partial least squares, robust modeling.
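A common way to make PLS "dynamic" is to augment the regressor matrix with time-lagged measurements before fitting. A minimal sketch with scikit-learn on synthetic data (the paper's robust adaptive update is not reproduced):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def add_lags(X, n_lags):
    """Stack X(t), X(t-1), ..., X(t-n_lags) column-wise (dynamic PLS input)."""
    rows = len(X) - n_lags
    return np.hstack([X[n_lags - k : n_lags - k + rows]
                      for k in range(n_lags + 1)])

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))                      # process measurements
y = 0.6 * X[:, 0] + 0.3 * np.roll(X[:, 1], 1)      # output depends on a lag
y += rng.normal(scale=0.05, size=300)

n_lags = 2
Xd, yd = add_lags(X, n_lags), y[n_lags:]
pls = PLSRegression(n_components=3).fit(Xd, yd)
print("R^2 on training data:", pls.score(Xd, yd))
```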
6881 Robot Technology Impact on Dyslexic Students’ English Learning
Authors: Khaled Hamdan, Abid Amorri, Fatima Hamdan
Abstract:
Involving students in the English language learning process and achieving adequate proficiency in the target language can be a great challenge for both teachers and students. It can prove an even greater challenge to engage students with special needs (dyslexia) if they have a physical impairment and inadequate mastery of basic communicative competence in the target language. From this perspective, technology like robots can be used to enhance the learning process for special needs students who have extensive communication needs and who face a continuous struggle to interact with their peers and teachers and to meet academic requirements. Robots, specifically NAO, can provide them with an opportunity to practice social and communication skills and meet their English academic requirements. This research paper aims to identify to what extent robots can be used to improve students' social interaction and communication skills, and to understand the potential of robotics-based education in motivating and engaging UAEU dyslexic students to meet university requirements. To this end, the paper explores several factors that come into play: the motion level, involving cognitive activities; the interaction level, involving language processing; the behavior level, establishing a close relationship with the robot; and the appraisal level, focusing on dyslexic students' achievement in the target language.
Keywords: Dyslexia, robot technology, motion, interaction, behavior and appraisal levels, social and communication skills.
6880 A Combined Conventional and Differential Evolution Method for Model Order Reduction
Authors: J. S. Yadav, N. P. Patidar, J. Singhai, S. Panda, C. Ardil
Abstract:
In this paper, a mixed method combining an evolutionary and a conventional technique is proposed for the reduction of Single Input Single Output (SISO) continuous systems into a Reduced Order Model (ROM). In the conventional part, the combined advantages of the Mihailov stability criterion and the Continued Fraction Expansions (CFE) technique are employed: the reduced denominator polynomial is derived using the Mihailov stability criterion, and the numerator is obtained by matching the quotients of the Cauer second form of continued fraction expansions. Then, retaining the numerator polynomial, the denominator polynomial is recalculated by an evolutionary technique, the recently proposed Differential Evolution (DE) optimization method. DE here is based on the minimization of the Integral Squared Error (ISE) between the transient responses of the original higher order model and the reduced order model for a unit step input. The proposed method is illustrated through a numerical example and compared with a ROM whose numerator and denominator polynomials are both obtained by the conventional method, to show its superiority.
Keywords: Reduced Order Modeling, Stability, Mihailov Stability Criterion, Continued Fraction Expansions, Differential Evolution, Integral Squared Error.
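The ISE objective that DE minimizes can be evaluated directly from the two step responses. A minimal sketch with SciPy (the transfer functions below are placeholders, not the paper's numerical example):

```python
import numpy as np
from scipy import signal
from scipy.integrate import trapezoid

# Placeholder systems: a 4th-order original and a 2nd-order reduced model
orig = signal.lti([1, 5], [1, 6, 11, 6, 1])   # stable: den = (s^2+3s+1)^2
rom = signal.lti([4.15], [1, 1.9, 0.83])      # matched DC gain of 5

t = np.linspace(0, 30, 3000)
_, y_orig = signal.step(orig, T=t)
_, y_rom = signal.step(rom, T=t)

ise = trapezoid((y_orig - y_rom) ** 2, t)     # DE's fitness for this candidate
print(f"ISE = {ise:.4f}")
```

In the combined method, DE would repeatedly propose new denominator coefficients, evaluate this ISE, and keep the best candidate.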
6879 A Model to Study the Effect of Excess Buffers and Na+ Ions on Ca2+ Diffusion in Neuron Cell
Authors: Vikas Tewari, Shivendra Tewari, K. R. Pardasani
Abstract:
Calcium is a vital second messenger used in signal transduction. Calcium controls secretion, cell movement, muscular contraction, cell differentiation, ciliary beating, and so on. Two theories have been used to simplify the system of reaction-diffusion equations for calcium into a single equation. One is the excess buffer approximation (EBA), which assumes that mobile buffer is present in excess and cannot be saturated. The other is the rapid buffer approximation (RBA), which assumes that calcium binding to buffer is rapid compared to the calcium diffusion rate. In the present work, an attempt has been made to develop a model for calcium diffusion under the excess buffer approximation in neuron cells. This model incorporates the effect of [Na+] influx on [Ca2+] diffusion, variable calcium and sodium sources, the sodium-calcium exchange protein, the sarcolemmal calcium ATPase pump, and sodium and calcium channels. The proposed mathematical model leads to a system of partial differential equations, which has been solved numerically using the Forward Time Centered Space (FTCS) approach. The numerical results have been used to study the relationships among different parameters such as buffer concentration, association rate, and calcium permeability.
Keywords: Excess buffer approximation, Na+ influx, sodium-calcium exchange protein, sarcolemmal calcium ATPase pump, forward time centered space.
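The FTCS update for a 1-D reaction-diffusion equation ∂u/∂t = D ∂²u/∂x² − ku + s(x) is explicit and easy to state; a minimal sketch with an invented buffering sink and source term (all coefficients are placeholders, not the paper's; note the stability condition D·dt/dx² ≤ 1/2):

```python
import numpy as np

D, k = 250.0, 10.0          # diffusion (um^2/s), buffering rate (1/s); placeholders
L, nx, dt, steps = 50.0, 101, 1e-5, 20000
dx = L / (nx - 1)
assert D * dt / dx**2 <= 0.5, "FTCS stability limit violated"

u = np.zeros(nx)            # [Ca2+] above rest, arbitrary units
source = np.zeros(nx)
source[0] = 1e4             # calcium influx at one boundary (placeholder)

for _ in range(steps):
    lap = np.zeros(nx)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2   # centered space
    u += dt * (D * lap - k * u + source)                 # forward time
    u[-1] = 0.0             # far-field boundary held at rest

print(f"peak concentration: {u.max():.3f}")
```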
6878 Synchronization of Chaos in a Food Web in Ecological Systems
Authors: Anuraj Singh, Sunita Gakkhar
Abstract:
The three-species food web model proposed and investigated by Gakkhar and Naji is known to have chaotic behaviour for a choice of parameters. An attempt has been made to synchronize the chaos in the model using bidirectional coupling. Numerical simulations are presented to demonstrate the effectiveness and feasibility of the analytical results. The numerical results show that, for higher values of the coupling strength, chaotic synchronization is achieved. Chaos can thus be controlled to achieve stable synchronization in natural systems.
Keywords: Lyapunov exponent, bidirectional coupling, chaos synchronization, synchronization manifold.
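Bidirectional coupling adds a symmetric term k(x₂ − x₁) to one system and k(x₁ − x₂) to its twin, pulling both trajectories onto the synchronization manifold x₁ = x₂. A sketch with the Lorenz system as a stand-in chaotic model (the Gakkhar-Naji food web equations are not reproduced here):

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(v, s=10.0, r=28.0, b=8/3):
    x, y, z = v
    return np.array([s * (y - x), x * (r - z) - y, x * y - b * z])

def coupled(t, u, k):
    a, b_ = u[:3], u[3:]
    return np.concatenate([lorenz(a) + k * (b_ - a),    # bidirectional
                           lorenz(b_) + k * (a - b_)])  # coupling terms

u0 = np.array([1.0, 1.0, 1.0, -5.0, 2.0, 20.0])   # distinct initial states
sol = solve_ivp(coupled, (0, 50), u0, args=(3.0,), max_step=0.01)
err = np.linalg.norm(sol.y[:3] - sol.y[3:], axis=0)
print(f"final synchronization error: {err[-1]:.2e}")  # ~0 for large enough k
```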
6877 A Calibration Approach towards Reducing ASM2d Parameter Subsets in Phosphorus Removal Processes
Authors: N.Boontian
Abstract:
A novel calibration approach that aims to reduce ASM2d parameter subsets and decrease model complexity is presented. The approach does not require high computational demand and reduces the number of modeling parameters required to achieve ASM calibration by employing a sensitivity and iteration methodology. Parameter sensitivity is a crucial factor, and the iteration methodology enables refinement of the simulated parameter values. In the iteration process, parameter values are determined in descending order of their sensitivities, and the number of iterations required equals the number of model parameters in the parameter significance ranking. This approach was applied to the ASM2d model to evaluate EBPR phosphorus removal, and it was successful. The simulation provided the calibration parameters YPAO, YPO4, YPHA, qPHA, qPP, μPAO, bPAO, bPP, bPHA, KPS, YA, μAUT, bAUT, KO2 AUT, and KNH4 AUT, which corresponded to the available experimental data.
Keywords: ASM2d, calibration approach, iteration methodology, sensitivity, phosphorus removal.
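The sensitivity-ranked, one-parameter-at-a-time iteration can be sketched generically (toy model and finite-difference sensitivities of our own choosing; this is not the ASM2d implementation):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def model(params, t):                      # toy stand-in for a simulator
    a, b, c = params
    return a * np.exp(-b * t) + c

t = np.linspace(0, 5, 50)
observed = model([2.0, 0.8, 0.5], t) + np.random.default_rng(0).normal(0, 0.02, 50)
theta = np.array([1.0, 1.0, 1.0])          # initial guesses

def sse(p):
    return np.sum((model(p, t) - observed) ** 2)

# Rank parameters by finite-difference sensitivity of the fit error
sens = [abs(sse(theta + 0.01 * np.eye(3)[i]) - sse(theta)) for i in range(3)]
order = np.argsort(sens)[::-1]             # most sensitive first

# Calibrate one parameter at a time, in descending sensitivity order
for i in order:
    def obj(v, i=i):
        p = theta.copy()
        p[i] = v
        return sse(p)
    theta[i] = minimize_scalar(obj, bounds=(0.01, 10), method="bounded").x

print(theta)
```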
6876 Closely Parametrical Model for an Electrical Arc Furnace
Authors: Labar Hocine, Dgeghader Yacine, Kelaiaia Mounia Samira, Bounaya Kamel
Abstract:
To maximise furnace production, it is necessary to optimise furnace control, with the objectives of achieving maximum power input into the melting process and minimum network distortion and power-off time, without compromise on quality or safety. This can be achieved on the one hand by appropriate electrode control, and on the other hand by a minimum of AC transformer switching. The electrical arc is a stochastic process; it is the principal cause of power quality problems, including voltage dips, harmonic distortion, unbalanced loads, and flicker. It is therefore difficult to build an appropriate model for an Electrical Arc Furnace (EAF). The factors that affect EAF operation are the melting or refining materials, the melting stage, the electrode position (arc length), the electrode arm control, and the short-circuit power of the feeder. Arc voltage, current, and power are thus defined as nonlinear functions of the arc length. In this article we propose our own empirical function and model of the EAF for the main stages of the melting process, based on measurements in the steel factory.
Keywords: Modelling, electrical arc, melting, power, EAF, steel.
6875 Model-Driven and Data-Driven Approaches for Crop Yield Prediction: Analysis and Comparison
Authors: Xiangtuo Chen, Paul-Henry Cournéde
Abstract:
Crop yield prediction is a paramount issue in agriculture. The main idea of this paper is to find an efficient way to predict the yield of corn based on meteorological records. The prediction models used in this paper can be classified into model-driven approaches and data-driven approaches, according to their modeling methodologies. The model-driven approaches are based on mechanistic crop modeling: they describe crop growth in interaction with the environment as a dynamical system. But the calibration process of the dynamical system presents much difficulty, because it turns out to be a multidimensional non-convex optimization problem. An original contribution of this paper is to propose a statistical methodology, Multi-Scenarios Parameters Estimation (MSPE), for the parametrization of potentially complex mechanistic models from a new type of dataset (climatic data and final yield in many situations). It is tested with CORNFLO, a crop model for maize growth. On the other hand, the data-driven approach to yield prediction is free of the complex biophysical process, but it has strict requirements on the dataset. A second contribution of the paper is the comparison of the model-driven methods with classical data-driven methods. For this purpose, we consider two classes of regression methods: methods derived from linear regression (Ridge and Lasso Regression, Principal Components Regression, and Partial Least Squares Regression) and machine learning methods (Random Forest, k-Nearest Neighbor, Artificial Neural Network, and SVM regression). The dataset consists of 720 records of corn yield at county scale provided by the United States Department of Agriculture (USDA) and the associated climatic data. A 5-fold cross-validation process and two accuracy metrics, the root mean square error of prediction (RMSEP) and the mean absolute error of prediction (MAEP), were used to evaluate the crop prediction capacity. The results show that, among the data-driven approaches, Random Forest is the most robust and generally achieves the best prediction error (MAEP 4.27%). It also outperforms our model-driven approach (MAEP 6.11%). However, the method for calibrating the mechanistic model from easily accessible datasets offers several side perspectives: the mechanistic model can potentially help to underline the stresses suffered by the crop or to identify the biological parameters of interest for breeding purposes. For this reason, an interesting perspective is to combine these two types of approaches.
Keywords: Crop yield prediction, crop model, sensitivity analysis, parameter estimation, particle swarm optimization, random forest.
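A minimal sketch of the data-driven baseline (Random Forest with 5-fold cross-validation and an MAE-style metric) using scikit-learn on synthetic stand-in data, since the USDA records are not reproduced here:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(720, 10))             # stand-in climatic features
yield_t = 8 + X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 0.3, 720)  # synthetic t/ha

rf = RandomForestRegressor(n_estimators=300, random_state=0)
scores = cross_val_score(rf, X, yield_t, cv=5,
                         scoring="neg_mean_absolute_error")
maep = 100 * -scores.mean() / yield_t.mean()   # MAE as a percent of mean yield
print(f"MAEP ~ {maep:.2f}%")
```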
6874 Using Stresses Obtained from a Low Detailed FE Model and Located at a Reference Point to Quickly Calculate the Free-edge Stress Intensity Factors of Bonded Joints
Abstract:
The present study focuses on methods allowing a convenient and quick calculation of the SIFs in order to predict the static adhesive strength of bonded joints. A new SIF calculation method is proposed, based on the stresses obtained from a FE model at a reference point located in the adhesive layer, at equal distance from the free edge and from the two interfaces. It is shown that, even when limited to the two main modes, i.e. the opening and shearing modes, and using stress values from a low detailed FE model, an efficient calculation of the peeling stress at adhesive-substrate corners can be obtained in this way. The proposed method is interesting in that it can be the basis of a prediction tool allowing the designer to quickly evaluate the SIFs characterizing a particular application without developing a detailed analysis.
Keywords: Adhesive layer, bounded joints, free-edge corner, stress intensity factor.
6873 A TIPSO-SVM Expert System for Efficient Classification of TSTO Surrogates
Authors: Ali Sarosh, Dong Yun-Feng, Muhammad Umer
Abstract:
Fully reusable spaceplanes do not yet exist. This implies that design qualification for an optimized, highly integrated forebody-inlet configuration of a booster-stage vehicle cannot be based on archival data from other spaceplanes. This paper therefore proposes a novel TIPSO-SVM expert system methodology, solving a non-trivial problem related to the optimization and classification of a hypersonic forebody-inlet configuration in conjunction with the mass model of a two-stage-to-orbit (TSTO) vehicle. The hybrid-heuristic machine learning methodology is based on the two-step improved particle swarm optimizer (TIPSO) algorithm and a two-step support vector machine (SVM) data classification method. The efficacy of the method is tested by first evolving an optimal configuration for the hypersonic compression system using the TIPSO algorithm, and thereafter classifying the results using the two-step SVM method. In the first step, extensive but unclassified mass-model training data for multiple optimized configurations is segregated and pre-classified for learning of the SVM algorithm. In the second step, the TIPSO-optimized mass-model data is classified using the SVM classification. The results showed remarkable improvement in the configuration and mass model, along with the sizing parameters.
Keywords: TIPSO-SVM expert system, TIPSO algorithm, two-step SVM method, aerothermodynamics, mass-modeling, TSTO vehicle.
6872 Prioritizing Influential Factors on the Promotion of Virtual Training System
Authors: Nader Gharibnavaz, Mostafa Mosadeghi, Naser Gharibnavaz
Abstract:
In today's world, where everything is changing rapidly and information technology is highly developed, many features of culture, society, politics, and the economy have changed. The advent of information technology and electronic data transmission has made communication easy, and fields like e-learning and e-commerce are now easily accessible to everyone. One of these technologies is virtual training, and the quality of such education systems is critical. 131 questionnaires were prepared and distributed among university students at Toba University. The research thus examined the factors that affect the quality of learning from the perspectives of staff, students, professors, and this type of university. It is concluded that the important factors in virtual training are the quality of the professors, the quality of the staff, and the quality of the university. These factors were the highest-priority factors in this education system and are necessary for improving virtual training.
Keywords: Training, virtual training, strategic positioning, positioning mapping, unique selling proposition, strong brands, indoors industry.
6871 Grading Fourteen Zones of Isfahan in Terms of the Impact of Globalization on the Urban Fabric of the City, Using the TOPSIS Model
Authors: A. Zahedi Yeganeh, A. Khademolhosseini, R. Mokhtari Malekabadi
Abstract:
Undoubtedly, one of the most far-reaching and controversial topics of the past few decades has been globalization, which lies at the essence of modern culture. It is a complex and rapidly expanding network of links and mutual interdependence that is an aspect of modern life, though some argue that this linkage has existed since the beginning of human history. If we consider globalization as a dynamic social process in which the geographical constraints governing political, economic, social and cultural relationships have been undermined, it might not be possible to simply describe its impact on the urban fabric. But since this phenomenon involves increased communication between societies (while preserving their main cultural-regional characteristics) and an increased possibility of influencing other societies, the need for further studies is felt. The main objective of this study is to grade the zones based on globalization factors affecting the urban fabric, applying the TOPSIS model. The research method is descriptive-analytical, based on a survey. For data analysis, the TOPSIS model and SPSS software were used, and the results for the fourteen city zones are shown on a map produced with GIS software. The results show that the process of being influenced by globalization has not been similar across the urban fabric of the fourteen zones of Isfahan, with large differences between city zones: the most affected areas are zones 5, 6 and 9 of the municipality, and the impact has been least on zones 4, 3 and 2.
Keywords: Grading, Globalization, Urban fabric, 14 zones of Isfahan, TOPSIS model.
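For reference, the TOPSIS ranking procedure itself is short: normalize the decision matrix, weight it, and score each alternative by its relative closeness to the ideal solution. A minimal sketch (invented indicator scores for three zones, not the study's data):

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) on criteria (columns) by closeness to ideal."""
    norm = matrix / np.linalg.norm(matrix, axis=0)        # vector-normalize
    v = norm * weights                                    # weighted matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)                        # closeness in [0, 1]

# Hypothetical globalization indicators for three zones
scores = np.array([[7.0, 120.0, 3.2],
                   [5.5, 300.0, 1.1],
                   [8.1,  80.0, 4.0]])
closeness = topsis(scores, weights=np.array([0.5, 0.3, 0.2]),
                   benefit=np.array([True, True, False]))
print(np.argsort(closeness)[::-1] + 1)  # zone ranking, most affected first
```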