Search results for: quick training algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4321

421 Inversion of Electrical Resistivity Data: A Review

Authors: Shrey Sharma, Gunjan Kumar Verma

Abstract:

High density electrical prospecting has been widely used in groundwater investigation, civil engineering and environmental surveys. For efficient inversion, the forward modeling routine, sensitivity calculation, and inversion algorithm must all be computationally efficient. This paper attempts to provide a brief summary of the past and ongoing developments of the method. It includes reviews of the procedures used for data acquisition, processing and inversion of electrical resistivity data, based on a compilation of academic literature. In recent times there has been a significant evolution in field survey designs and data inversion techniques for the resistivity method. In general, 2-D inversion of resistivity data is carried out using the linearized least-squares method with a local optimization technique. Multi-electrode and multi-channel systems have made it possible to conduct large 2-D, 3-D and even 4-D surveys efficiently to resolve complex geological structures that were not possible with traditional 1-D surveys. 3-D surveys play an increasingly important role in very complex areas where 2-D models suffer from artifacts due to off-line structures. Continued developments in computation technology, as well as fast data inversion techniques and software, have made it possible to use optimization techniques to obtain model parameters to a higher accuracy. A brief discussion of the limitations of the electrical resistivity method is also presented.
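
As an illustration of the linearized least-squares inversion referred to above, the following is a minimal sketch of one damped (regularized) Gauss-Newton update. The `forward` and `jacobian` callables are placeholders for an actual resistivity forward-modeling routine and its sensitivity matrix; the toy linear model at the end is only for demonstration.

```python
import numpy as np

def damped_least_squares_step(m, d_obs, forward, jacobian, lam=0.1):
    """One linearized least-squares (damped Gauss-Newton) model update.

    m        : current model parameters (e.g., log-resistivities)
    d_obs    : observed data (e.g., log apparent resistivities)
    forward  : callable returning predicted data for a model
    jacobian : callable returning the sensitivity matrix J = dF/dm
    lam      : damping factor trading off data fit against step size
    """
    r = d_obs - forward(m)                  # data residual
    J = jacobian(m)                         # sensitivity matrix
    A = J.T @ J + lam * np.eye(m.size)      # damped normal equations
    dm = np.linalg.solve(A, J.T @ r)        # model update
    return m + dm

# Hypothetical usage with a toy linear "forward model" G @ m:
G = np.array([[1.0, 0.5], [0.2, 1.0], [0.3, 0.7]])
m_true = np.array([2.0, 1.0])
d_obs = G @ m_true
m = np.zeros(2)
for _ in range(10):
    m = damped_least_squares_step(m, d_obs, lambda x: G @ x, lambda x: G, lam=0.01)
print(m)   # converges toward m_true for this toy example
```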

Keywords: Resistivity, inversion, optimization.

420 Quad Tree Decomposition Based Analysis of Compressed Image Data Communication for Lossy and Lossless Using WSN

Authors: N. Muthukumaran, R. Ravi

Abstract:

A Quad Tree Decomposition (QTD) based performance analysis of compressed image data communication, for both lossy and lossless cases, through a wireless sensor network is presented. Images have a considerably higher storage requirement than text. While transmitting multimedia content, there is a chance of packets being dropped due to noise and interference. At the receiver end, the packets that carry valuable information might be damaged or lost due to noise, interference and congestion. In order to prevent valuable information from being dropped, various retransmission schemes have been proposed. In the proposed scheme, QTD is used. QTD is an image segmentation method that divides the image into homogeneous areas. The proposed scheme involves analysis of parameters such as compression ratio, peak signal-to-noise ratio, mean square error, and bits per pixel in the compressed image, as well as analysis of the difficulties encountered during data packet communication in wireless sensor networks. Considering the above, this paper uses QTD to improve the compression ratio as well as the visual quality; the algorithm is implemented in MATLAB 7.1 and the NS2 simulator.
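
For illustration, the following is a minimal sketch of a quad tree decomposition that recursively splits an image into blocks until each block is homogeneous. The homogeneity test (intensity range below a threshold) and the block-size limits are illustrative assumptions, not the authors' exact criteria.

```python
import numpy as np

def quadtree_decompose(img, x=0, y=0, size=None, threshold=10, min_size=2, blocks=None):
    """Recursively split a square grayscale image into homogeneous blocks.

    A block is kept when its intensity range (max - min) is below `threshold`
    or when it reaches `min_size`; otherwise it is split into four quadrants.
    Returns a list of (x, y, size) tuples describing the leaf blocks.
    """
    if blocks is None:
        blocks = []
    if size is None:
        size = img.shape[0]                      # assumes a square, power-of-two image
    block = img[y:y + size, x:x + size]
    if size <= min_size or int(block.max()) - int(block.min()) <= threshold:
        blocks.append((x, y, size))
    else:
        half = size // 2
        for dx, dy in [(0, 0), (half, 0), (0, half), (half, half)]:
            quadtree_decompose(img, x + dx, y + dy, half, threshold, min_size, blocks)
    return blocks

# Hypothetical usage on a synthetic 64x64 image with a bright square region:
img = np.zeros((64, 64), dtype=np.uint8)
img[16:48, 16:48] = 200
leaves = quadtree_decompose(img, threshold=10)
print(len(leaves), "homogeneous blocks")
```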

Keywords: Image compression, Compression Ratio, Quad tree decomposition, Wireless sensor networks, NS2 simulator.

419 Performance Comparison of Situation-Aware Models for Activating Robot Vacuum Cleaner in a Smart Home

Authors: Seongcheol Kwon, Jeongmin Kim, Kwang Ryel Ryu

Abstract:

We assume an IoT-based smart-home environment where the on-off status of each of the electrical appliances, including the room lights, can be recognized in real time by monitoring and analyzing the smart meter data. At any moment in such an environment, we can recognize what the household or the user is doing by referring to the status data of the appliances. In this paper, we focus on a smart-home service that activates a robot vacuum cleaner at the right time by recognizing the user situation, which requires a situation-aware model that can distinguish the situations that allow vacuum cleaning (Yes) from those that do not (No). As candidate models, we learn a few classifiers, such as naïve Bayes, decision tree, and logistic regression, that map the appliance-status data to Yes and No situations. Our training and test data are obtained from simulations of user behaviors, in which a sequence of user situations such as cooking, eating, dish washing, and so on is generated, with the status of the relevant appliances changed in accordance with the situation changes. During the simulation, both the situation transitions and the resulting appliance status are determined stochastically. To compare the performances of the aforementioned classifiers, we obtain their learning curves for different types of users through simulations. The result of our empirical study reveals that naïve Bayes achieves a slightly better classification accuracy than the other compared classifiers.
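
A minimal sketch of the classification setup described above is given below, using scikit-learn's BernoulliNB on binary on/off appliance-status vectors. The feature layout, the labeling rule, and the synthetic data are illustrative assumptions, not the authors' simulated user behaviors.

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB
from sklearn.model_selection import train_test_split

# Each row is a snapshot of appliance on/off states, e.g.
# [stove, dishwasher, TV, room_light, washing_machine]; label 1 = vacuuming allowed.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 5))
# Illustrative rule for synthetic labels: cleaning allowed when stove and TV are off.
y = ((X[:, 0] == 0) & (X[:, 2] == 0)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = BernoulliNB()                 # naive Bayes for binary features
clf.fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))
```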

Keywords: Situation-awareness, Smart home, IoT, Machine learning, Classifier.

418 Improved Blood Glucose-Insulin Monitoring with Dual-Layer Predictive Control Design

Authors: Vahid Nademi

Abstract:

Although wearable medical devices equipped with a continuous glucose monitor (CGM) and an insulin pump are widely used, advanced control methods are still needed to obtain the full benefit of these devices. Unlike costly clinical trials, implementing effective insulin-glucose control strategies can provide significant benefits to patients suffering from chronic diseases such as diabetes. This study investigates a two-layer insulin-glucose regulator based on a model-predictive-control (MPC) scheme, so that the patient’s predicted glucose profile is kept in compliance with the insulin level injected automatically through the insulin pump. This is achieved by an iterative optimization algorithm, the integrated perturbation analysis and sequential quadratic programming (IPA-SQP) solver, which handles uncertainties due to unexpected variations in glucose-insulin values and the body’s characteristics. The feasibility of the discussed control approach is also studied by means of numerical simulations of two case scenarios using measured data. The obtained results verify the reliable and superior performance of the proposed control scheme with no negative impact on patient safety.

Keywords: Blood glucose monitoring, insulin pump, optimization, predictive control, diabetes disease.

417 Face Recognition Using Principal Component Analysis, K-Means Clustering, and Convolutional Neural Network

Authors: Zukisa Nante, Wang Zenghui

Abstract:

Face recognition is the problem of identifying or recognizing individuals in an image. This paper investigates a possible method to bring a solution to this problem. The method proposes an amalgamation of Principal Component Analysis (PCA), K-Means clustering, and a Convolutional Neural Network (CNN) for a face recognition system. It is trained and evaluated using the ORL dataset. This dataset consists of 400 face images of 40 classes, with 10 images per class. Firstly, PCA enables the use of a smaller network, which reduces the training time of the CNN; redundancy is removed and the variance is preserved with a smaller number of coefficients. Secondly, the K-Means clustering model is trained on the PCA-compressed data, which selects the K-Means cluster centers with better characteristics. Lastly, the K-Means features serve as initial values for the CNN and act as its input data. The accuracy and the performance of the proposed method were tested in comparison with other face recognition (FR) techniques, namely PCA, Support Vector Machine (SVM), and K-Nearest Neighbour (kNN). During experimentation, our suggested method achieved the highest performance after 90 epochs: 99% accuracy, 99% F1-score, 99% precision, and 99% recall in 463.934 seconds. It outperformed PCA, which obtained 97%, and kNN, which obtained 84%, in the conducted experiments. Therefore, this method proved to be efficient in identifying faces in images.
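
The following is a minimal sketch of the first two stages of the pipeline described above (PCA compression followed by K-Means on the compressed features), using scikit-learn. The CNN stage and the actual ORL data loading are omitted; the random stand-in data and the component/cluster counts are illustrative only.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Illustrative stand-in for the ORL data: 400 flattened face images (40 classes x 10).
rng = np.random.default_rng(0)
faces = rng.random((400, 92 * 112))          # ORL images are 92x112 pixels

# Stage 1: PCA compression removes redundancy while preserving most of the variance.
pca = PCA(n_components=50)
faces_pca = pca.fit_transform(faces)

# Stage 2: K-Means on the PCA-compressed data; the 40 cluster centers would then
# be used to initialize / feed the CNN stage described in the abstract.
kmeans = KMeans(n_clusters=40, n_init=10, random_state=0)
labels = kmeans.fit_predict(faces_pca)
centers = kmeans.cluster_centers_            # shape (40, 50)
print(centers.shape, labels[:10])
```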

Keywords: Face recognition, Principal Component Analysis, PCA, Convolutional Neural Network, CNN, Rectified Linear Unit, ReLU, feature extraction.

416 Unsupervised Classification of DNA Barcodes Species Using Multi-Library Wavelet Networks

Authors: Abdesselem Dakhli, Wajdi Bellil, Chokri Ben Amar

Abstract:

DNA barcodes provide a good source of the information needed to classify living species. The classification problem has to be supported by reliable methods and algorithms. To analyze species regions or entire genomes, it becomes necessary to use sequence similarity methods. A large set of sequences can be simultaneously compared using Multiple Sequence Alignment, which is known to be NP-complete. However, all the methods in use are still computationally very expensive and require significant computational infrastructure. Our goal is to build predictive models that are highly accurate and interpretable. In fact, our method avoids the complex problem of form and structure in different classes of organisms. The empirical data and their classification performances are compared with other methods. In this study, we present our system, which consists of three phases. The first phase, transformation, is composed of three sub-steps: Electron-Ion Interaction Pseudopotential (EIIP) codification of the DNA barcodes, Fourier transform, and power spectrum signal processing. The second phase is an approximation, carried out using Multi-Library Wavelet Neural Networks (MLWNN). Finally, the third phase, the classification of DNA barcodes, is realized by applying a hierarchical classification algorithm.
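
A minimal sketch of the transformation phase is given below: EIIP codification of a DNA sequence followed by the Fourier power spectrum. The EIIP values used for A, C, G, T are the commonly cited ones and are an assumption here; the sample sequence is arbitrary.

```python
import numpy as np

# Commonly cited Electron-Ion Interaction Pseudopotential values per nucleotide.
EIIP = {"A": 0.1260, "C": 0.1340, "G": 0.0806, "T": 0.1335}

def power_spectrum(dna_sequence):
    """Map a DNA barcode to its EIIP numerical signal and return its power spectrum."""
    signal = np.array([EIIP[b] for b in dna_sequence.upper() if b in EIIP])
    spectrum = np.fft.fft(signal - signal.mean())   # remove the DC level before transforming
    return np.abs(spectrum) ** 2

# Hypothetical usage on a short barcode fragment:
ps = power_spectrum("ATGCGTACGTTAGCCGTAATCGGAT")
print(ps[: ps.size // 2])                            # one-sided spectrum
```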

Keywords: DNA Barcode, Electron-Ion Interaction Pseudopotential, Multi Library Wavelet Neural Networks.

415 An Image Processing Based Approach for Assessing Wheelchair Cushions

Authors: B. Farahani, R. Fadil, A. Aboonabi, B. Hoffmann, J. Loscheider, K. Tavakolian, S. Arzanpour

Abstract:

Wheelchair users spend long hours in a sitting position, and selecting the right cushion is highly critical in preventing pressure ulcers in that demographic. Pressure Mapping Systems (PMS) are typically used in clinical settings by therapists to identify the sitting profile and pressure points in the sitting area and to select the cushion that best fits the user. A PMS is a flexible mat composed of arrays of distributed networks of pressure sensors. The output of a PMS is a color-coded image that shows the intensity of the pressure concentration. Therapists use the PMS images to compare how well different cushions fit each user. This process is highly subjective and requires good visual memory for the best outcome. This paper aims to develop an image processing technique to analyze PMS images and provide an objective measure for assessing cushions based on their pressure distribution mappings. In this paper, we first reviewed the skeletal anatomy of the human sitting area and its relation to the PMS image. This knowledge was then used to identify the important features that must be considered in image processing. We then developed an algorithm based on those features to analyze the images and rank them according to their fit to the user's needs.

Keywords: cushion, image processing, pressure mapping system, wheelchair

414 Optimisation of Structural Design by Integrating Genetic Algorithms in the Building Information Modelling Environment

Authors: Tofigh Hamidavi, Sepehr Abrishami, Pasquale Ponterosso, David Begg

Abstract:

Structural design and analysis is an important and time-consuming process, particularly at the conceptual design stage. Decisions made at this stage can have an enormous effect on the entire project, as it becomes ever costlier and more difficult to alter the choices made early on in the construction process. Hence, optimisation of the early stages of structural design can provide important efficiencies in terms of cost and time. This paper suggests a structural design optimisation (SDO) framework in which Genetic Algorithms (GAs) may be used to semi-automate the production and optimisation of early structural design alternatives. This framework has the potential to leverage conceptual structural design innovation in Architecture, Engineering and Construction (AEC) projects. Moreover, this framework improves the collaboration between the architectural stage and the structural stage. It will be shown that this SDO framework can make this achievable by generating the structural model based on the extracted data from the architectural model. At the moment, the proposed SDO framework is in the process of validation, involving the distribution of an online questionnaire among structural engineers in the UK.

Keywords: Building Information Modelling, BIM, Genetic Algorithm, GA, architecture-engineering-construction, AEC, Optimisation, structure, design, population, generation, selection, mutation, crossover, offspring.

413 The Effectiveness of Cognitive Behavioural Intervention in Alleviating Social Avoidance for Blind Students

Authors: Mohamed M. Elsherbiny

Abstract:

Social avoidance is one of the most important problems faced by a good number of disabled students. It results from the negative attitudes of non-disabled students, teachers and others. Some past research has shown that non-disabled individuals hold negative attitudes toward persons with disabilities. The present study aims to alleviate social avoidance by applying a cognitive behavioral intervention. Twenty-four blind university students aged 19–24 were randomly chosen; we compared an experimental group (12 students) who went through the intervention program with a control group (also 12 students) who did not. We used the Social Avoidance and Distress Scale (SADS) to assess social anxiety and distress behavior. The author used many techniques of cognitive behavioral intervention, such as modeling, cognitive restructuring, extension, contingency contracts, self-monitoring, assertiveness training, role play, encouragement and others. Statistically, a t-test was employed to test the research hypothesis. Results showed that there is a significant difference between the experimental group and the control group on the Social Avoidance and Distress Scale after the intervention and also at the follow-up stage. Also, for the experimental group, there is a significant difference on the scale between the pre-intervention and follow-up stages. The results showed a decrease in social avoidance. Accordingly, the cognitive behavioral intervention program was successful in decreasing social avoidance for blind students.

Keywords: Social avoidance, cognitive behavioral intervention, blind disability, disability.

412 Optic Disc Detection by Earth Mover's Distance Template Matching

Authors: Fernando C. Monteiro, Vasco Cadavez

Abstract:

This paper presents a method for the detection of the optic disc (OD) in the retina which takes advantage of powerful preprocessing techniques such as contrast enhancement, the Gabor wavelet transform for vessel segmentation, mathematical morphology, and the Earth Mover's distance (EMD) as the matching process. The OD detection algorithm is based on matching the expected directional pattern of the retinal blood vessels. The vessel segmentation method produces segmentations by classifying each image pixel as vessel or non-vessel, based on the pixel's feature vector. Feature vectors are composed of the pixel's intensity and 2D Gabor wavelet transform responses taken at multiple scales. A simple matched filter is proposed to roughly match the direction of the vessels in the OD vicinity using the EMD. The minimum distance provides an estimate of the OD center coordinates. The method's performance is evaluated on the publicly available DRIVE and STARE databases. On the DRIVE database the OD center was detected correctly in all of the 40 images (100%), and on the STARE database the OD was detected correctly in 76 out of the 81 images, even in rather difficult pathological situations.
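
For illustration only, the following is a minimal sketch of template matching driven by the Earth Mover's (Wasserstein) distance, here computed between patch intensity histograms with SciPy's 1-D routine. This is a deliberate simplification of the directional vessel-pattern matching described above; the patch scan, histogram features and random data are assumptions.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def emd_between_patches(patch_a, patch_b, bins=32):
    """1-D Earth Mover's distance between the intensity histograms of two patches."""
    ha, edges = np.histogram(patch_a, bins=bins, range=(0.0, 1.0), density=True)
    hb, _ = np.histogram(patch_b, bins=bins, range=(0.0, 1.0), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return wasserstein_distance(centers, centers, u_weights=ha + 1e-12, v_weights=hb + 1e-12)

def best_match(image, template, step=8):
    """Scan the image; return the top-left corner whose patch has minimum EMD to the template."""
    th, tw = template.shape
    best, best_pos = np.inf, (0, 0)
    for y in range(0, image.shape[0] - th + 1, step):
        for x in range(0, image.shape[1] - tw + 1, step):
            d = emd_between_patches(image[y:y + th, x:x + tw], template)
            if d < best:
                best, best_pos = d, (y, x)
    return best_pos, best

# Hypothetical usage with random data standing in for a retinal image and an OD template:
rng = np.random.default_rng(0)
image = rng.random((128, 128))
template = rng.random((32, 32))
print(best_match(image, template))
```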

Keywords: Diabetic retinopathy, Earth Mover's distance, Gabor wavelets, optic disc detection, retinal images

411 Magnetohydrodynamic Free Convection in a Square Cavity Heated from Below and Cooled from Other Walls

Authors: S. Jani, M. Mahmoodi, M. Amini

Abstract:

Magnetohydrodynamic free convection fluid flow and heat transfer in a square cavity filled with an electrically conductive fluid with a Prandtl number of 0.7 has been investigated numerically. The horizontal bottom wall of the cavity was kept at Th while the side and top walls were maintained at a constant temperature Tc, with Th > Tc. The governing equations, written in terms of the primitive variables, were solved numerically using the finite volume method, while the SIMPLER algorithm was used to couple the velocity and pressure fields. Using the developed code, a parametric study was performed, and the effects of the Rayleigh number and the Hartmann number on the fluid flow and heat transfer inside the cavity were investigated. The obtained results showed that the temperature distribution and flow pattern inside the cavity depended on both the strength of the magnetic field and the Rayleigh number. For all cases, two counter-rotating eddies were formed inside the cavity. The magnetic field decreased the intensity of free convection and the flow velocity. It was also found that for higher Rayleigh numbers a relatively stronger magnetic field was needed to decrease the heat transfer through free convection.

Keywords: Free Convection, Magnetic Field, Square Cavity, Numerical Simulation.

410 Hybrid of Hunting Search and Modified Simplex Methods for Grease Position Parameter Design Optimisation

Authors: P. Luangpaiboon, S. Boonhao

Abstract:

This study proposes a multi-response surface optimization problem (MRSOP) for determining the proper choices in a process parameter design (PPD) decision problem in the noisy environment of a grease position process in the electronics industry. The proposed model attempts to maximize dual process responses on the mean of parts between failure on the left and right processes. The conventional modified simplex method and its hybridization with the stochastic operator from the hunting search algorithm are applied to determine the proper levels of the controllable design parameters affecting the quality performances. A numerical example demonstrates the feasibility of applying the proposed model to the PPD problem via the two iterative methods. Its advantages are also discussed. Numerical results demonstrate that the hybridization is superior to the conventional method. In this study, the mean of parts between failure on the left and right lines improved by approximately 39.51%. All experimental data presented in this research have been normalized to disguise actual performance measures, as the raw data are considered confidential.

Keywords: Grease Position Process, Multi-response Surfaces, Modified Simplex Method, Hunting Search Method, Desirability Function Approach.

409 A Real-time Computer Vision System for Vehicle Tracking and Collision Detection

Authors: Mustafa Kisa, Fatih Mehmet Botsali

Abstract:

Recent developments in automotive technology are focused on economy, comfort and safety. Vehicle tracking and collision detection systems are attracting the attention of many investigators concerned with driving safety in the field of automotive mechatronics. In this paper, a vision-based vehicle detection system is presented. The developed system is intended to be used for collision detection and driver alert. The system uses RGB images captured by a camera in a car driven on the highway. Images captured by the moving camera are used to detect the moving vehicles in the image. A vehicle ahead of the camera is detected in daylight conditions. The proposed method detects moving vehicles by subtracting successive images. The plate height of the vehicle is determined by using a plate recognition algorithm, and the distance of the moving object is calculated from the plate height. After the distance of the moving vehicle is determined, its relative speed and time-to-collision are calculated using the distances measured in successive images. Results obtained in road tests are discussed in order to validate the proposed method.
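
A minimal sketch of the two numeric steps described above follows: motion detection by differencing successive frames, and distance estimation from the detected plate height through a pinhole-camera relation. The focal length, plate height and frame rate used here are assumed example values, not the paper's calibration.

```python
import numpy as np

def motion_mask(frame_prev, frame_curr, threshold=25):
    """Binary mask of moving regions obtained by differencing successive grayscale frames."""
    diff = np.abs(frame_curr.astype(np.int16) - frame_prev.astype(np.int16))
    return diff > threshold

def distance_from_plate_height(plate_height_px, focal_length_px=800.0, plate_height_m=0.11):
    """Pinhole-camera estimate: distance = f * real_height / image_height.

    focal_length_px and plate_height_m are illustrative assumptions
    (a typical EU plate is roughly 0.11 m tall).
    """
    return focal_length_px * plate_height_m / plate_height_px

def time_to_collision(distance_prev_m, distance_curr_m, frame_interval_s=1 / 30):
    """TTC from the closing speed between two successive distance estimates."""
    closing_speed = (distance_prev_m - distance_curr_m) / frame_interval_s
    return distance_curr_m / closing_speed if closing_speed > 0 else float("inf")

# Hypothetical usage: a plate that grows from 20 px to 22 px between frames.
d1 = distance_from_plate_height(20)
d2 = distance_from_plate_height(22)
print(round(d1, 2), round(d2, 2), round(time_to_collision(d1, d2), 2))
```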

Keywords: Image processing, vehicle tracking, license plate detection, computer vision.

408 Motion Detection Method for Clutter Rejection in the Bio-Radar Signal Processing

Authors: Carolina Gouveia, José Vieira, Pedro Pinho

Abstract:

Cardiopulmonary signal monitoring, without the use of contact electrodes or any type of in-body sensors, has several applications, such as sleep monitoring and the continuous monitoring of vital signs in bedridden patients. Such a system also has applications in the vehicular environment to monitor the driver, in order to avoid a possible accident in case of cardiac failure. The bio-radar system proposed in this paper can measure vital signs accurately by using the Doppler effect principle, which relates the received signal properties to the change in distance between the radar antennas and the person’s chest wall. Since the aim of the bio-radar is to monitor subjects in real time and over long periods, it is impossible to guarantee patient immobilization; hence, their random motion will interfere with the acquired signals. In this paper, a mathematical model of the bio-radar is presented, as well as its simulation in MATLAB. The algorithm used for breath rate extraction is explained, and a method for DC offset removal based on a motion detection system is proposed. Furthermore, experimental tests were conducted to prove that the unavoidable random motion can be used to estimate the DC offsets accurately and thus remove them successfully.

Keywords: Bio-signals, DC Component, Doppler Effect, ellipse fitting, radar, SDR.

407 Eradication of Mental Illness through Buddhism

Authors: Deshar Bashu Dev

Abstract:

In this modern age, most people in developed and developing countries are affected by mental illness. There are many mental illnesses, and their differing symptoms impact peoples’ lives in different ways. These illnesses affect the way people think and feel, as well as how they behave with others. Mental illness results from compound interactions between the mind, body, and environment. New technologies and sciences make the world a better place. These technologies are becoming smarter and are being developed every day to help make daily life easier. However, people suffer from mental illness in every part of the world. The philosophy propounded by the Buddha, Buddhism, teaches that all life is connected, from the microcosm to the macrocosm. In the 2,500 years that have elapsed since the death of the Buddha, his disciples have spread his teachings and developed sophisticated psycho-therapeutic methodologies. We can find many examples, in Buddhist texts and in the modern age, of problems where Buddhist philosophy has offered answers that modern science could not. The Noble Eightfold Path, one of the main teachings of Buddhism, eradicates hatred and ill will and cultivates good deeds, kindness, and compassion. Buddhism, as a practice of dialectic conversation and mindfulness training, is full of rich therapeutic tools that the mental health community has adopted to help people. Similarly, Buddhist meditation is very necessary; it purifies thoughts and avoids unnecessary thinking. This research aims to study the different causes of mental illness, analyze the different approaches to eradicating mental illness, and provide conclusions and recommendations that present solutions through Buddhism in this modern age.

Keywords: Mental illness, Buddhism, mindfulness, Buddhist practices.

406 3D Locomotion and Fractal Analysis of Goldfish for Acute Toxicity Bioassay

Authors: Kittiwann Nimkerdphol, Masahiro Nakagawa

Abstract:

The biological reactions of individual test animals to a toxic substance are unique and can be used as an indication of the presence of that toxic substance. However, distinguishing such phenomena requires a very complicated system, and analyzing the data in three dimensions is even more complicated. In this paper, a system to evaluate in vitro biological activities in response to acute toxicity from the stochastic, self-affine, non-stationary signal of 3D goldfish swimming, using fractal analysis, is introduced. Regular digital camcorders are utilized by the proposed algorithm (3DCCPC) to effectively capture and reconstruct the 3D movements of the fish. The Critical Exponent Method (CEM) has been adopted as the fractal estimator. The hypothesis was that the swimming of goldfish exposed to an acute toxicant would show fractal properties related to the toxic concentration. The experimental results supported the hypothesis by showing that the swimming of goldfish under different toxic concentrations has fractal properties. They also show that the fractal dimension of the swimming is related to the pH value as FD ≈ 0.26 pH + 0.05. With the proposed system, the fish is allowed to swim freely in all directions to react to the toxicant. In addition, the trajectories are precisely evaluated by fractal analysis with the critical exponent method, and hence the results can be stated with a much higher degree of confidence.

Keywords: 3D locomotion, bioassay, critical exponent method, CEM, fractal analysis, goldfish.

405 Online Control of Knitted Fabric Quality: Loop Length Control

Authors: Dariush Semnani, Mohammad Sheikhzadeh

Abstract:

A circular knitting machine makes fabric with more than two knitting tools. Variation of the yarn tension between different knitting tools causes different loop lengths of the stitches during the knitting process. In this research, a new intelligent method is applied to control the loop length of the stitches produced by the various tools, based on the ideal shape of the stitches and the real angle of the stitch direction, since different loop lengths cause stitch deformation and deviation of that angle. To measure the deviation of the stitch direction against variation of the tensions, an image processing technique was applied to pictures of different fabrics taken with constant front light. The rate of deformation is then translated into the required compensation of the loop-length cam degree to correct the stitch deformation. A fuzzy control algorithm was applied to the loop length modification in the knitting tools. The presented method was tested on different knitted fabrics of various structures and yarns. The results show that the presented method is usable for controlling the loop length variation between different knitting tools based on stitch deformation, for various knitted fabrics with different structures, densities and yarn types.

Keywords: Circular knitting, Radon transformation, Knitted fabric, Regularity, Fuzzy control

404 Identity Management in Virtual Worlds Based on Biometrics Watermarking

Authors: S. Bader, N. Essoukri Ben Amara

Abstract:

With technological development and the rise of virtual worlds, these spaces are becoming more and more attractive to cybercriminals hidden behind avatars and fictitious identities. Since access to these spaces is not restricted or controlled, some impostors take advantage of this to gain unauthorized access and engage in cybercrime. This paper proposes an identity management approach for securing access to virtual worlds. The major purpose of the suggested solution is to install a strong security mechanism to protect virtual identities represented by avatars. Thus, only legitimate users, through their corresponding avatars, are allowed to access the platform resources. Access is controlled by integrating an authentication process based on biometrics. In the registration process, a user fingerprint is enrolled and then encrypted into a watermark using a cancelable and non-invertible algorithm for its protection. After a user personalizes their representative character, the biometric mark is embedded into the avatar through a watermarking procedure. The authenticity of the avatar identity is verified when it requests authorization for access. We have evaluated the proposed approach on a dataset of avatars from various virtual worlds, and we have registered promising performance results in terms of authentication accuracy and acceptance and rejection rates.

Keywords: Identity management, security, biometrics authentication and authorization, avatar, virtual world.

403 An Investigation of Final Tests of Translation as Practiced in Iranian Undergraduate English Translation Program

Authors: Hossein Heidari Tabrizi, Azizeh Chalak

Abstract:

The present study examined how translation teachers develop final tests as measures for checking the quality of students’ academic translation in the Iranian context. To achieve this goal, thirty experienced male and female translation teachers from the four types of universities offering the program were invited to an in-depth, 30-minute, one-session, semi-structured interview. The responses showed how much discrepancy exists among Iranian translation teachers (as developers of final translation tests), who are poorly informed about current translation evaluation methods. It was also revealed that the criteria they use for developing such tests and scoring student translations are not theory-driven but are highly subjective, based mainly on their personal experience and intuition. Hence, the quality and accountability of such tests are under serious question. The results also confirmed that the dominant method commonly and currently practiced is the purely essay-type format. To remedy the situation, some suggestions are in order. As part of the solution, to improve the reliability and validity of such tests, the present summative, product-oriented evaluation should be accompanied by formative, process-oriented methods of evaluation. Training the teachers and helping them become acquainted with modern principles of translation evaluation, as well as with the existing models and rating scales, would improve the quality of academic translation evaluation.

Keywords: Iranian universities, students’ academic translations, translation final tests, undergraduate translation programs.

402 Pricing European Options under Jump Diffusion Models with Fast L-stable Padé Scheme

Authors: Salah Alrabeei, Mohammad Yousuf

Abstract:

The goal of option pricing theory is to help investors manage their money, enhance returns and control their financial future by theoretically valuing their options. Modeling option prices with Black-Scholes models with jumps guarantees that market movements are taken into account. However, only numerical methods can solve this model. Furthermore, not all numerical methods are efficient for solving these models, because the payoffs are non-smooth or have discontinuous derivatives at the exercise price. In this paper, the exponential time differencing (ETD) method is applied to solving the partial integro-differential equations arising in pricing European options under Merton’s and Kou’s jump-diffusion models. The Fast Fourier Transform (FFT) algorithm is used as a matrix-vector multiplication solver, which reduces the complexity from O(M²) to O(M log M). A partial-fraction form of Padé schemes is used to overcome the complexity of inverting polynomials of matrices. These two tools guarantee efficient and accurate numerical solutions. We construct a parallel and easy-to-implement version of the numerical scheme. Numerical experiments are given to show how fast and accurate our scheme is.
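
The following is a minimal sketch of the FFT trick mentioned above: a Toeplitz matrix (the structure that typically arises from discretizing a jump integral on a uniform grid) is embedded in a circulant matrix, whose action is diagonalized by the FFT, so the matrix-vector product costs O(M log M) instead of O(M²). The matrix here is a random example, not the Merton/Kou discretization.

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_matvec_fft(first_col, first_row, x):
    """Multiply a Toeplitz matrix (given by its first column/row) by x in O(M log M).

    The Toeplitz matrix is embedded into a 2M x 2M circulant matrix whose
    eigenvalues are the FFT of its first column.
    """
    M = len(x)
    circ_col = np.concatenate([first_col, [0.0], first_row[:0:-1]])   # length 2M
    x_pad = np.concatenate([x, np.zeros(M)])
    y = np.fft.ifft(np.fft.fft(circ_col) * np.fft.fft(x_pad))
    return y[:M].real

# Verification against the dense product on a random example:
rng = np.random.default_rng(0)
M = 256
c, r = rng.random(M), rng.random(M)
r[0] = c[0]                                   # Toeplitz convention: shared corner entry
x = rng.random(M)
T = toeplitz(c, r)
print(np.allclose(T @ x, toeplitz_matvec_fft(c, r, x)))   # True
```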

Keywords: Integral differential equations, L-stable methods, pricing European options, Jump–diffusion model.

401 CFD Prediction of the Round Elbow Fitting Loss Coefficient

Authors: Ana Paula P. dos Santos, Claudia R. Andrade, Edson L. Zaparoli

Abstract:

Pressure loss in ductwork is an important factor to be considered in the design of engineering systems such as power plants, refineries and HVAC systems, in order to reduce energy costs. Ductwork can be composed of straight ducts and different types of fittings (elbows, transitions, converging and diverging tees and wyes). Duct fittings are significant sources of pressure loss in fluid distribution systems. Fitting losses can be even more significant than equipment components such as coils, filters, and dampers. In the present work, a conventional 90° round elbow under turbulent incompressible airflow is studied. The mass, momentum, and k-ε turbulence model equations are solved employing the finite volume method. The SIMPLE algorithm is used for the pressure-velocity coupling. In order to validate the numerical tool, the elbow pressure loss coefficient is determined under the same conditions and compared with the ASHRAE database. Furthermore, the effect of Reynolds number variation on the elbow pressure loss coefficient is investigated. These results can be useful for better preliminary design of air distribution ductwork in air conditioning systems.
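
For illustration, a minimal sketch of the two non-dimensional quantities discussed above is given below: the fitting loss coefficient C = Δp / (½ ρ V²) and the Reynolds number. The numerical values are assumed air-flow examples, not results from the paper.

```python
def loss_coefficient(delta_p, rho, velocity):
    """Fitting loss coefficient C = dp / (0.5 * rho * V^2), with dp the pressure drop."""
    return delta_p / (0.5 * rho * velocity ** 2)

def reynolds_number(rho, velocity, diameter, mu):
    """Re = rho * V * D / mu for a round duct."""
    return rho * velocity * diameter / mu

# Illustrative values for air in a 0.2 m round duct (assumed, not the paper's data):
rho, mu = 1.2, 1.8e-5          # kg/m^3, Pa.s
V, D = 5.0, 0.2                # m/s, m
dp = 4.5                       # Pa, assumed pressure drop across the elbow
print("Re =", reynolds_number(rho, V, D, mu))
print("C  =", round(loss_coefficient(dp, rho, V), 3))
```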

Keywords: Duct fitting, Pressure loss, Elbow.

400 A Maximum Power Point Tracker for PV Panels Using SEPIC Converter

Authors: S. Ganesh, J. Janani, G. Besliya Angel

Abstract:

Photovoltaic (PV) energy is one of the most important renewable energy sources. Maximum Power Point Tracking (MPPT) techniques should be used in photovoltaic systems to maximize the PV panel output power by continuously tracking the maximum power point, which depends on the panel’s temperature and on the irradiance conditions. The incremental conductance control method has been used as the MPPT algorithm. The methodology is based on connecting a pulse-width-modulated DC/DC SEPIC converter, which is controlled by a microprocessor-based unit. The SEPIC converter is one of the buck-boost converters which maintain the output voltage constant irrespective of the solar insolation level. By adjusting the switching frequency of the converter, the maximum power point has been achieved. The main difference between the method used in the proposed MPPT system and other techniques used in the past is that the PV array output power is used to directly control the DC/DC converter, thus reducing the complexity of the system. The resulting system has high efficiency and low cost and can be easily modified. The tracking capability has been verified experimentally with a 10 W solar panel under a controlled experimental setup. The SEPIC converter and its control strategy have been analyzed and simulated using Simulink/MATLAB software.
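
A minimal sketch of the incremental conductance decision rule named above is given below, written as a duty-cycle update for the converter. The step size, the duty-cycle limits, and the direction of the adjustment (here assuming a topology where raising the duty cycle lowers the panel operating voltage) are assumptions; the paper actuates a SEPIC converter through a microprocessor.

```python
def incremental_conductance_step(v, i, v_prev, i_prev, duty, step=0.005):
    """One incremental-conductance MPPT update.

    At the maximum power point dP/dV = 0, i.e. dI/dV = -I/V.  The duty cycle
    is nudged so the operating point moves toward that condition.
    """
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        if di == 0:
            pass                      # at the MPP, no change
        elif di > 0:
            duty -= step              # irradiance increased: move operating voltage up
        else:
            duty += step
    else:
        incr_cond = di / dv
        if incr_cond == -i / v:
            pass                      # at the MPP
        elif incr_cond > -i / v:
            duty -= step              # left of the MPP: raise the panel voltage
        else:
            duty += step              # right of the MPP: lower the panel voltage
    return min(max(duty, 0.05), 0.95) # keep the duty cycle in a safe range

# Hypothetical usage inside a control loop, with example panel measurements:
duty = 0.5
v_prev, i_prev = 17.0, 0.55
v, i = 17.2, 0.56
duty = incremental_conductance_step(v, i, v_prev, i_prev, duty)
print(duty)
```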

Keywords: Maximum Power Point Tracking, Microprocessor, PV Module, SEPIC Converter.

399 Iterative Estimator-Based Nonlinear Backstepping Control of a Robotic Exoskeleton

Authors: Brahmi Brahim, Mohammad Habibur Rahman, Maarouf Saad, Cristóbal Ochoa Luna

Abstract:

Repetitive training movements are an efficient way to improve the ability and movement performance of stroke survivors and to help them recover their lost motor function and acquire new skills. The ETS-MARSE is a seven-degrees-of-freedom (DOF) exoskeleton robot developed to be worn on the lateral side of the right upper extremity to assist and rehabilitate patients with upper-extremity dysfunction resulting from stroke. In practice, rehabilitation activities are repetitive tasks, which makes assistive/robotic systems suffer from repetitive/periodic uncertainties and external perturbations, induced by the high-order dynamic model (seven DOF) and the interaction with human muscle, that impact the tracking performance and even the stability of the exoskeleton. To ensure the robustness and stability of the robot, a new nonlinear backstepping control was implemented, with designed tests performed by healthy subjects. In order to limit and reject the periodic/repetitive disturbances, an iterative estimator was integrated into the control of the system. The estimator does not need the precise dynamic model of the exoskeleton. Experimental results confirm the robustness and accuracy of the controller in dealing with external perturbations, and the effectiveness of the iterative estimator in rejecting the repetitive/periodic disturbances.

Keywords: Backstepping control, iterative control, rehabilitation, ETS-MARSE.

398 A Review: Comparative Analysis of Different Categorical Data Clustering Ensemble Methods

Authors: S. Sarumathi, N. Shanthi, M. Sharmila

Abstract:

In recent years, a substantial amount of work has been done in data clustering research under the unsupervised learning technique in data mining. Several algorithms and methods have been proposed focusing on clustering different data types, the representation of cluster models, and the accuracy rates of the clusters. However, no single clustering algorithm proves to be the most efficient in providing the best results. Accordingly, in order to address this issue, a new technique, called the cluster ensemble method, has emerged. The cluster ensemble is a good alternative approach for facing the cluster analysis problem. The main hope of the cluster ensemble is to merge different clustering solutions in such a way as to achieve accuracy and to improve the quality of individual data clusterings. The substantial and unremitting development of new methods in the sphere of data mining, together with the incessant interest in inventing new algorithms, makes a critical analysis of the existing techniques and future novelties obligatory. This paper presents a comparative study of different cluster ensemble methods, along with their features, systematic working processes, and the average accuracy and error rates of each ensemble method. Consequently, this comprehensive analysis will be very useful for the community of clustering practitioners and will also help in deciding the most suitable method to rectify the problem at hand.
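
For illustration, the following is a minimal sketch of the co-association-matrix consensus idea listed in the keywords: several base clusterings vote on whether each pair of points belongs together, and a final partition is extracted from the resulting matrix with hierarchical clustering. The base clusterers, the varying k, and the cut level are illustrative choices, not those of any specific method surveyed above.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

X, _ = make_blobs(n_samples=150, centers=3, random_state=0)
n = X.shape[0]

# Build the co-association matrix from several base K-Means partitions.
co_assoc = np.zeros((n, n))
n_runs = 20
for run in range(n_runs):
    k = int(np.random.default_rng(run).integers(2, 6))      # vary k across runs
    labels = KMeans(n_clusters=k, n_init=5, random_state=run).fit_predict(X)
    co_assoc += (labels[:, None] == labels[None, :]).astype(float)
co_assoc /= n_runs

# Consensus function: hierarchical clustering on 1 - co-association as a distance.
dist = 1.0 - co_assoc
np.fill_diagonal(dist, 0.0)
Z = linkage(squareform(dist, checks=False), method="average")
consensus_labels = fcluster(Z, t=3, criterion="maxclust")
print(np.bincount(consensus_labels))
```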

Keywords: Clustering, Cluster Ensemble methods, Co-association matrix, Consensus function, Median partition.

397 Milling Chatter Prevention by Adaptive Spindle Speed Tuning

Authors: Nan-Chyuan Tsai, Din-Chang Chen, Rong-Mao Lee, Bai-Lu Wang

Abstract:

This paper presents how the real-time chatter prevention can be realized by feedback of acoustic cutting signal, and the efficacy of the proposed adaptive spindle speed tuning algorithm is verified by intensive experimental simulations. A pair of microphones, perpendicular to each other, is used to acquire the acoustic cutting signal resulting from milling chatter. A real-time feedback control loop is constructed for spindle speed compensation so that the milling process can be ensured to be within the stability zone of stability lobe diagram. Acoustic Chatter Signal Index (ACSI) and Spindle Speed Compensation Strategy (SSCS) are proposed to quantify the acoustic signal and actively tune the spindle speed respectively. By converting the acoustic feedback signal into ACSI, an appropriate Spindle Speed Compensation Rate (SSCR) can be determined by SSCS based on real-time chatter level or ACSI. Accordingly, the compensation command, referred to as Added-On Voltage (AOV), is applied to increase/decrease the spindle motor speed. By inspection on the precision and quality of the workpiece surface after milling, the efficacy of the real-time chatter prevention strategy via acoustic signal feedback is further assured.

Keywords: Chatter compensation, Stability lobes, Non-invasive measurement.

396 Formal Analysis of a Public-Key Algorithm

Authors: Markus Kaiser, Johannes Buchmann

Abstract:

In this article, a formal specification and verification of the Rabin public-key scheme in a formal proof system is presented. The idea is to use the two views of cryptographic verification: the computational approach, relying on the vocabulary of probability theory and complexity theory, and the formal approach, based on ideas and techniques from logic and programming languages. A major objective of this article is the presentation of the first computer-proved implementation of the Rabin public-key scheme in Isabelle/HOL. Moreover, we explicate a computer-proven formalization of correctness as well as a computer verification of security properties using a straightforward computation model in Isabelle/HOL. The analysis uses a given database to prove formal properties of our implemented functions with computer support. The main task in designing a practical formalization of correctness, as well as efficient computer proofs of security properties, is to cope with the complexity of cryptographic proving. We reduce this complexity by exploring a lightweight formalization that enables both appropriate formal definitions and efficient formal proofs. Consequently, we get reliable proofs with a minimal error rate by augmenting the used database, which provides a formal basis for more computer proof constructions in this area.
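
The following is a minimal executable sketch of the Rabin scheme itself (not of the Isabelle/HOL formalization): encryption squares the message modulo n = p·q, and decryption recovers the four square roots via the Chinese Remainder Theorem when p ≡ q ≡ 3 (mod 4). The tiny primes are for illustration only.

```python
def rabin_encrypt(m, n):
    """Rabin encryption: c = m^2 mod n."""
    return pow(m, 2, n)

def rabin_decrypt(c, p, q):
    """Return the four square roots of c modulo n = p*q (requires p, q = 3 mod 4)."""
    n = p * q
    mp = pow(c, (p + 1) // 4, p)            # square root of c mod p
    mq = pow(c, (q + 1) // 4, q)            # square root of c mod q
    yp = pow(p, -1, q)                      # p^{-1} mod q (Python 3.8+)
    yq = pow(q, -1, p)                      # q^{-1} mod p
    r1 = (mp * q * yq + mq * p * yp) % n    # CRT combination
    r2 = n - r1
    r3 = (mp * q * yq - mq * p * yp) % n
    r4 = n - r3
    return {r1, r2, r3, r4}

# Toy example with small Blum primes (never use such sizes in practice):
p, q = 7, 11
m = 20
c = rabin_encrypt(m, p * q)
roots = rabin_decrypt(c, p, q)
print(c, roots, m in roots)                 # the plaintext is among the four roots
```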

Keywords: public-key encryption, Rabin public-key scheme, formal proof system, higher-order logic, formal verification.

395 Study on Optimization Design of Pressure Hull for Underwater Vehicle

Authors: Qasim Idrees, Gao Liangtian, Liu Bo, Miao Yiran

Abstract:

In order to improve the efficiency and accuracy of pressure hull structure optimization for an underwater vehicle based on response surface methodology, a method for optimizing the design of the pressure hull structure was studied. Five dimensions of the pressure shell were taken as design variables, and thin-shell theory and the Chinese Classification Society (CCS) specification were applied in the preliminary design. In order to optimize the variables within the feasible region, different methods were studied and implemented, such as the optimal Latin hypercube design (Opt LHD) method to determine the design test sample points in the feasible domain space, a parametric ABAQUS solution for the response of each sample point, and a second-order polynomial response surface model of the ultimate load of the structure. Based on the ultimate load of the structure and the mass of the shell, a second-generation genetic algorithm was used to solve the response surface, and the Pareto optimal solution set was obtained. The final optimized result was 41.68% higher than that of the initial design, and the shell mass was reduced by about 27.26%. The parametric method can ensure the accuracy of the test and improve the efficiency of the optimization.
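
A minimal sketch of fitting a second-order polynomial response surface by least squares, as mentioned above, is given below. The sample points and the quadratic "response" are synthetic stand-ins for the ABAQUS limit-load results; the surrogate function and coefficient layout are illustrative.

```python
import numpy as np

def quadratic_design_matrix(X):
    """Columns: 1, x_i, x_i^2, and all pairwise products x_i * x_j (i < j)."""
    n, d = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(d)]
    cols += [X[:, i] ** 2 for i in range(d)]
    cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i + 1, d)]
    return np.column_stack(cols)

# Synthetic stand-in: 30 sample points in 2 design variables with a known quadratic response.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 2))
y = 3.0 + 2.0 * X[:, 0] - 1.0 * X[:, 1] + 0.5 * X[:, 0] ** 2 + 0.2 * X[:, 0] * X[:, 1]
y += 0.01 * rng.standard_normal(30)          # small noise, as from numerical scatter

A = quadratic_design_matrix(X)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(coef, 2))                     # recovers approximately [3, 2, -1, 0.5, 0, 0.2]

def surrogate(x):
    """Evaluate the fitted response surface at a new design point x."""
    return quadratic_design_matrix(np.atleast_2d(x)) @ coef

print(surrogate([0.3, -0.2])[0])
```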

Keywords: Parameterization, response surface, structure optimization, pressure hull.

394 Error Correction of Radial Displacement in Grinding Machine Tool Spindle by Optimizing Shape and Bearing Tuning

Authors: Khairul Jauhari, Achmad Widodo, Ismoyo Haryanto

Abstract:

In this article, the radial displacement error correction capability of a high-precision grinding spindle subject to unbalance force was investigated. The spindle shaft is considered as a flexible rotor mounted on two sets of angular contact ball bearings. The finite element method (FEM) has been adopted to obtain the equation of motion of the spindle. Firstly, the natural frequencies, critical frequencies, and amplitude of the unbalance response caused by residual unbalance are determined in order to investigate the spindle behavior. Furthermore, an optimization design algorithm is employed to minimize the radial displacement of the spindle; it considers the dimensions of the spindle shaft, the dynamic characteristics of the bearings, and the critical frequencies and amplitude of the unbalance response, and it computes the optimum spindle diameters and the stiffness and damping of the bearings. Numerical simulation results show that by optimizing the spindle diameters and the stiffness and damping of the bearings, the radial displacement of the spindle can be reduced. A radial displacement error of about 4 μm can be compensated with 2 μm accuracy. This can certainly improve the accuracy of the machined product.

Keywords: Error correction, High precision grinding, Optimization, Radial displacement, Spindle.

393 Reasons for Doing Job outside Household and Difficulties Faced by the Working Women of Bangladesh

Authors: Md. Sayeed Akhter, Md. Akhtar Hossain Mazumder, Syeda Afreena Mamun

Abstract:

Bangladesh is a patriarchal and male-dominated country. Traditional, cultural, social, and religious values and practices have reinforced the lower status accorded to women in society and have limited their opportunities for education, technical and vocational training, and involvement in earning activities outside their households. Since independence, however, a growing number of women have been working outside their households. This study attempts to find out the reasons for engaging in earning activities outside the household and the difficulties faced by upper- and lower-class working women in Bangladesh. To explore the objectives and research questions of the study, descriptive techniques were used. A survey was conducted among women working in Rajshahi city of Bangladesh, and face-to-face interviews were conducted to collect data. The findings of the study illustrate that most of the upper-class working women took up jobs because they wanted to utilize their education and to bring solvency to the family, and they spend their income on meeting the needs of all the members of the family. On the other hand, most of the lower-class working women became involved in earning activities outside their households because they want to bring solvency to their families, and they spend their income on household expenditure. Women of both classes worried about their children because they had to stay at their workplaces for long hours. Therefore, day-care centers should be established near their workplaces for their children.

Keywords: Working Women, Reasons for Doing Jobs, Working Environment, Difficulties Faced.

392 Tidal Data Analysis using ANN

Authors: Ritu Vijay, Rekha Govil

Abstract:

The design of a complete expansion that allows a compact representation of certain relevant classes of signals is a central problem in signal processing applications. Achieving such a representation means knowing the signal features for the purposes of denoising, classification, interpolation and forecasting. Multilayer neural networks are a relatively new class of techniques that are mathematically proven to approximate any continuous function arbitrarily well. Radial Basis Function (RBF) networks, which make use of Gaussian activation functions, have also been shown to be universal approximators. In this age of ever-increasing digitization in the storage, processing, analysis and communication of information, there are numerous applications where one needs to construct a continuously defined function or numerical algorithm to approximate, represent and reconstruct the given discrete data of a signal. Often one wishes to manipulate the data in a way that requires information not included explicitly in the data, which is done through interpolation and/or extrapolation. Tidal data are a perfect example of a time series, and many statistical techniques have been applied to tidal data analysis and representation; ANNs are a recent addition to such techniques. In the present paper we describe the time series representation capabilities of a special type of ANN, the Radial Basis Function network, and present the results of tidal data representation using RBF networks. Tidal data analysis and representation is one of the important requirements in marine science for forecasting.
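
The following is a minimal sketch of a Gaussian radial basis function network of the kind described above, with centers placed over the time axis and output weights fitted by linear least squares. The synthetic "tidal" signal (a sum of two sinusoids with roughly semi-diurnal and diurnal periods) and the center count are illustrative assumptions.

```python
import numpy as np

class RBFNetwork:
    """Gaussian RBF network: f(t) = sum_k w_k * exp(-(t - c_k)^2 / (2 s^2))."""

    def __init__(self, n_centers=20, width=None):
        self.n_centers = n_centers
        self.width = width

    def fit(self, t, y):
        self.centers = np.linspace(t.min(), t.max(), self.n_centers)
        if self.width is None:
            self.width = (t.max() - t.min()) / self.n_centers
        Phi = self._design(t)
        self.weights, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        return self

    def _design(self, t):
        return np.exp(-((t[:, None] - self.centers[None, :]) ** 2) / (2 * self.width ** 2))

    def predict(self, t):
        return self._design(t) @ self.weights

# Synthetic stand-in for a tidal record, used only to illustrate representation capability:
t = np.linspace(0, 72, 300)                              # hours
y = 1.2 * np.sin(2 * np.pi * t / 12.42) + 0.3 * np.sin(2 * np.pi * t / 24.0)
model = RBFNetwork(n_centers=30).fit(t, y)
print("max reconstruction error:", np.max(np.abs(model.predict(t) - y)))
```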

Keywords: ANN, RBF, Tidal Data.
