Search results for: finite segment method.
6663 The Results of the Fetal Weight Estimation of the Infants Delivered in the Delivery Room at Dan Khunthot Hospital by Johnson's Method
Authors: Nareelux Suwannobol, Jintana Tapin, Khuanchanok Narachan
Abstract:
The objective of this study was to determine the accuracy of fetal weight estimation by Johnson's method and to compare it with actual birth weight. The sample group was 126 infants delivered in Dan Khunthot Hospital from January to March 2012. Fetal weight was estimated by measuring fundal height according to Johnson's method. The information was collected from historical delivery records and analyzed using the statistics of frequency, percentage, mean, and standard deviation. Finally, the difference was analyzed by a paired t-test. The results showed an average actual birth weight of 3,093.57 ± 391.03 g (mean ± SD) and an average estimated fetal weight by Johnson's method of 3,455 ± 454.55 g, the estimate exceeding the average actual birth weight by 384.09 g. When classifying the infants according to birth weight, actual birth weight was less than the estimated fetal weight in the low birth weight (<2,500 g) and appropriate birth weight (2,500-3,999 g) groups, whereas in the high birth weight group (>4,000 g) actual birth weight exceeded the estimated fetal weight. The difference between actual and estimated weight was smallest in the high birth weight group (>4,000 g), followed by the appropriate (2,500-3,999 g) and low (<2,500 g) birth weight groups, respectively. The rate of estimated fetal weight falling within 10% of actual birth weight was 35.7%. When actual birth weights were compared with the estimates, the difference was statistically significant (p < .001). Johnson's method can provide an initial fetal weight estimate before proceeding to special examinations, which may incur excessively high costs. A variety of methods should be employed to estimate fetal weight more precisely, which will help plan care for the mother's and infant's safety.
Keywords: Johnson's method, Fetal weight estimate, Delivery Room, Student nurse.
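For context, Johnson's method computes the estimate from the symphysis-fundal height as EFW (g) = (fundal height in cm − n) × 155, where n depends on the station of the fetal head. A minimal sketch, assuming the commonly cited offsets (n = 12 with the vertex above the ischial spines, n = 11 at or below); the hospital's exact protocol is not given in the abstract:

```python
def johnson_efw(fundal_height_cm: float, vertex_engaged: bool) -> float:
    """Estimated fetal weight (g) by Johnson's formula.

    n = 11 if the vertex is engaged (at or below the ischial spines),
    n = 12 otherwise; these offsets are the commonly cited values.
    """
    n = 11 if vertex_engaged else 12
    return (fundal_height_cm - n) * 155

# Example: fundal height of 32 cm, vertex not engaged -> 3100 g
print(johnson_efw(32, vertex_engaged=False))
```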
6662 Debt Reconstruction, Career Development and Farmers' Household Well-Being in Thailand
Authors: Yothin Sawangdee, Piyawat Katewongsa, Chutima Yousomboon, Kornkanok Pongpradit, Sakapas Saengchai, Phusit Khantikul
Abstract:
Debt reconstruction under moratorium projects is an important method that greatly benefits both banks and farmers, as it can reduce the probability of non-performing loans. This paper discusses debt reconstruction and career development training for farmers in Thailand between 2011 and 2013. The research design is a mixed method combining quantitative and qualitative surveys. The sample size for the quantitative method is 1,003 cases, with data gathered between October and December 2013. The main results affirmed that debt reconstruction is needed and that farmers' career development training brings numerous benefits. Many farmers who attended field school activities were able to apply the knowledge learned to their farm work and reduce production costs. Farmers' quality of life and household well-being also improved. This program could be applied in any country where farmers carry high debts and a high risk of default.
Keywords: Career development, debt reconstruction, farmers' household well-being, Thailand.
6661 A Novel Approach of Route Choice in Stochastic Time-varying Networks
Authors: Siliang Wang, Minghui Wang
Abstract:
Many existing studies use Markov decision processes (MDPs) to model optimal route choice in stochastic, time-varying networks. However, transforming large amounts of variable traffic data into optimal route decisions with MDPs is computationally challenging in real transportation networks. In this paper we model finite-horizon MDPs using directed hypergraphs. It is shown that the problem of route choice in stochastic, time-varying networks can be formulated as a minimum cost hyperpath problem, which can be solved in linear time. We finally demonstrate the significant computational advantages of the introduced methods.
Keywords: Markov decision processes (MDPs), stochastic time-varying networks, hypergraphs, route choice.
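The hyperpath formulation itself is not reproduced in the abstract; as a point of reference, the finite-horizon backward induction that underlies such route-choice models looks as follows in a deterministic special case with time-varying arc costs (the network, horizon and unit travel times are illustrative assumptions):

```python
import math

def backward_induction(nodes, arcs, horizon, dest):
    """Finite-horizon backward induction with time-varying arc costs.

    arcs: dict mapping (u, v) -> function t -> cost of leaving u at time t.
    Returns V, where V[t][u] is the minimum cost-to-go from u at time t
    (unit travel times assumed for simplicity).
    """
    V = [{u: math.inf for u in nodes} for _ in range(horizon + 1)]
    for t in range(horizon + 1):
        V[t][dest] = 0.0
    for t in range(horizon - 1, -1, -1):
        for (u, v), cost in arcs.items():
            V[t][u] = min(V[t][u], cost(t) + V[t + 1][v])
    return V

nodes = ["A", "B", "C"]
arcs = {("A", "B"): lambda t: 1 + t % 2,   # congestion alternates over time
        ("B", "C"): lambda t: 2.0,
        ("A", "C"): lambda t: 5.0}
V = backward_induction(nodes, arcs, horizon=4, dest="C")
print(V[0]["A"])  # cheapest A -> C cost when departing at t = 0
```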
6660 Real-Time Measurement Approach for Tracking the ΔV10 Estimate Value of DC EAF
Authors: Jin-Lung Guan, Jyh-Cherng Gu, Chun-Wei Huang, Hsin-Hung Chang
Abstract:
This investigation develops a revisable method for estimating the equivalent 10 Hz voltage flicker (ΔV10) of a DC Electric Arc Furnace (EAF). The study also examines three 161 kV DC EAFs by field measurement, with the results indicating that the estimated ΔV10 value is significantly smaller than the surveyed value. The key point is that the conventional means of estimating ΔV10 is inappropriate, the main cause being that the assumed Qmax is too small.
Although a DC EAF is regularly operated in a constant-MVA mode, the reactive power variation in the Main Transformer (MT) is more significant than that in the Furnace Transformer (FT). A substantial difference exists between the estimated maximum reactive power fluctuation (ΔQmax) and the value surveyed from actual DC EAF operations. This study therefore proposes a revisable method that obtains a more accurate ΔV10 estimate than the conventional method.
Keywords: Voltage flicker, DC EAF, estimate value, ΔV10.
6659 Eliciting and Confirming Data, Information, Knowledge and Wisdom in a Specialist Health Care Setting: The WICKED Method
Authors: S. Impey, D. Berry, S. Furtado, M. Galvin, L. Grogan, O. Hardiman, L. Hederman, M. Heverin, V. Wade, L. Douris, D. O'Sullivan, G. Stephens
Abstract:
Healthcare is a knowledge-rich environment. This knowledge, while valuable, is not always accessible outside the borders of individual clinics. This research aims to address part of this problem (at a study site) by constructing a maximal data set (knowledge artefact) for motor neurone disease (MND). This data set is proposed as an initial knowledge base for a concurrent project to develop an MND patient data platform. It represents the domain knowledge at the study site for the duration of the research (12 months). A knowledge elicitation method was also developed from the lessons learned during this process - the WICKED method. WICKED is an anagram of the words: eliciting and confirming data, information, knowledge, wisdom. But it is also a reference to the concept of wicked problems, which are complex and challenging, as is eliciting expert knowledge. The method was evaluated at a second site, and benefits and limitations were noted. Benefits include that the method provided a systematic way to manage data, information, knowledge and wisdom (DIKW) from various sources, including healthcare specialists and existing data sets. Limitations surrounded the time required and how the data set produced only represents DIKW known during the research period. Future work is underway to address these limitations.
Keywords: Healthcare, knowledge acquisition, maximal data sets, action design science.
6658 Comparison between Higher-Order SVD and Third-order Orthogonal Tensor Product Expansion
Authors: Chiharu Okuma, Jun Murakami, Naoki Yamamoto
Abstract:
In digital signal processing it is important to approximate multi-dimensional data by rank reduction, in which the rank of the data is reduced from higher to lower. For two-dimensional data, singular value decomposition (SVD) is one of the best-known rank reduction techniques. In addition, an outer product expansion extending SVD to multi-dimensional data was proposed and has been widely applied to image processing and pattern recognition. However, the multi-dimensional outer product expansion is computationally demanding and lacks orthogonality between the expansion terms. We therefore proposed an alternative method, the Third-Order Orthogonal Tensor Product Expansion (3-OTPE), which uses the power method instead of a nonlinear optimization method to reduce computing time. Around the same time, the group of L. De Lathauwer proposed the Higher-Order SVD (HOSVD), which is also developed as an SVD extension for multi-dimensional data. 3-OTPE and HOSVD are thus similar approaches to the rank reduction of multi-dimensional data; the results they produce are sometimes the same and sometimes slightly different. In this paper, we compare 3-OTPE to HOSVD in calculation accuracy and computing time, and clarify the difference between the two methods.
Keywords: Singular value decomposition (SVD), higher-order SVD (HOSVD), higher-order tensor, outer product expansion, power method.
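A minimal numpy sketch of the HOSVD side of the comparison, with illustrative sizes and ranks: mode-wise SVDs of the unfoldings give the factor matrices, and the core follows by contraction. The 3-OTPE power iteration is not reproduced here.

```python
import numpy as np

def hosvd(X, ranks):
    """Truncated higher-order SVD of a 3-way array X.

    For each mode n, the factor U_n holds the leading left singular vectors
    of the mode-n unfolding; the core is X contracted with every U_n^T.
    """
    factors = []
    for n in range(X.ndim):
        unfolding = np.moveaxis(X, n, 0).reshape(X.shape[n], -1)
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(U[:, :ranks[n]])
    core = X
    for n, U in enumerate(factors):
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, n, 0), axes=1), 0, n)
    return core, factors

X = np.random.rand(6, 5, 4)
core, factors = hosvd(X, ranks=(3, 3, 3))
approx = core
for n, U in enumerate(factors):  # rebuild the low-rank approximation
    approx = np.moveaxis(np.tensordot(U, np.moveaxis(approx, n, 0), axes=1), 0, n)
print(np.linalg.norm(X - approx) / np.linalg.norm(X))  # relative error
```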
6657 Reduction of Linear Time-Invariant Systems Using Routh-Approximation and PSO
Authors: S. Panda, S. K. Tomar, R. Prasad, C. Ardil
Abstract:
Order reduction of linear time-invariant systems by two methods, one exploiting the advantages of Routh approximation and the other an evolutionary technique, is presented in this paper. In the Routh approximation method the denominator of the reduced-order model is obtained by Routh approximation, while the numerator is determined by the indirect approach of retaining the time moments and/or Markov parameters of the original system. With this method the reduced-order model is guaranteed to be stable if the original high-order model is stable. In the second method, Particle Swarm Optimization (PSO) is employed to reduce the higher-order model. The PSO method is based on minimizing the Integral Squared Error (ISE) between the transient responses of the original higher-order model and the reduced-order model for a unit step input. Both methods are illustrated through numerical examples.
Keywords: Model Order Reduction, Markov Parameters, Routh Approximation, Particle Swarm Optimization, Integral Squared Error, Steady State Stability.
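A minimal sketch of the PSO step of the second method: a toy second-order plant G(s) = 2/(s^2 + 3s + 2) is reduced to a first-order model K/(s + a) by minimizing the ISE between unit-step responses. The plant, bounds and PSO constants are illustrative, not the paper's examples.

```python
import numpy as np

def step_response(num, den, t):
    """Unit-step response via Euler integration of the controllable
    canonical form; adequate for a sketch, not production use."""
    n = len(den) - 1
    A = np.zeros((n, n))
    if n > 1:
        A[:-1, 1:] = np.eye(n - 1)
    A[-1, :] = -np.array(den[1:][::-1], dtype=float) / den[0]
    B = np.zeros(n); B[-1] = 1.0 / den[0]
    C = np.zeros(n); C[:len(num)] = np.array(num[::-1], dtype=float)
    x = np.zeros(n); dt = t[1] - t[0]; y = np.empty(len(t))
    for k in range(len(t)):
        y[k] = C @ x
        x = x + dt * (A @ x + B)  # unit step input u = 1
    return y

def pso(objective, bounds, particles=20, iters=60, w=0.7, c1=1.5, c2=1.5):
    """Plain global-best PSO over box-constrained parameters."""
    rng = np.random.default_rng(0)
    lo, hi = np.array(bounds).T
    pos = rng.uniform(lo, hi, (particles, len(bounds)))
    vel = np.zeros_like(pos)
    pbest, pval = pos.copy(), np.array([objective(p) for p in pos])
    gbest = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, particles, len(bounds)))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([objective(p) for p in pos])
        better = vals < pval
        pbest[better], pval[better] = pos[better], vals[better]
        gbest = pbest[pval.argmin()].copy()
    return gbest, pval.min()

t = np.linspace(0, 10, 1000)
y_full = step_response([2], [1, 3, 2], t)  # G(s) = 2 / (s^2 + 3s + 2)
ise = lambda p: float(np.sum((y_full - step_response([p[0]], [1.0, p[1]], t)) ** 2) * (t[1] - t[0]))
(K, a), err = pso(ise, bounds=[(0.1, 5.0), (0.1, 5.0)])
print(f"reduced model {K:.3f}/(s + {a:.3f}), ISE = {err:.5f}")
```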
6656 Multimethod Approach to Research in Interlanguage Pragmatics
Authors: Saad Al-Gahtani, Ghassan H Al Shatter
Abstract:
Debate over the use of particular methods in interlanguage pragmatics has increased recently, with researchers arguing the advantages and disadvantages of each method, whether natural or elicited. Findings of different studies indicate that the use of one method may not provide enough data to answer all research questions. The current study investigated the validity of using a multimethod approach in interlanguage pragmatics to understand the development of requests in Arabic as a second language (Arabic L2). To this end, the study adopted two methods belonging to two types of data sources: institutional discourse (natural data) and role play (elicited data). Participants were 117 learners of Arabic L2 at the university level, representing four levels (beginner, low-intermediate, high-intermediate, and advanced). Results showed that using two or more methods in interlanguage pragmatics affects the size and nature of the data.
Keywords: Arabic L2, Development of requests, Interlanguage Pragmatics, Multimethod approach.
6655 Seismic Behavior and Loss Assessment of High-Rise Buildings with Light Gauge Steel-Concrete Hybrid Structure
Authors: Bing Lu, Shuang Li, Hongyuan Zhou
Abstract:
The steel-concrete hybrid structure has been extensively employed in high-rise and super high-rise buildings. The light gauge steel-concrete hybrid structure, combining a light gauge steel structure with a concrete hybrid structure, possesses some advantages of both. The seismic behavior and loss assessment of three high-rise buildings with three different concrete hybrid structures were investigated using finite element software. The three structures are the reinforced concrete column-steel beam (RC-S) hybrid structure, the concrete-filled steel tube column-steel beam (CFST-S) hybrid structure, and the tubed concrete column-steel beam (TC-S) hybrid structure. Nonlinear time-history analyses of the three buildings under 80 earthquakes were carried out. The simulations indicated that the seismic performance of all three buildings was good: under extremely rare earthquakes, their maximum inter-story drifts are significantly lower than 1/50. The inter-story drift and floor acceleration of the building with the CFST-S hybrid structure were larger than those of the building with the RC-S hybrid structure, and smaller than those of the building with the TC-S hybrid structure. Based on the time-history results, the post-earthquake repair cost ratio and repair time of the three buildings were then predicted using the economic performance analysis method of the FEMA P-58 report. Under frequent, basic and rare earthquakes, the repair cost ratio and repair time of the three buildings were less than 5% and 15 days, respectively. Under extremely rare earthquakes, the repair cost ratio and repair time of the building with the TC-S hybrid structure were the highest of the three. Given the advantages of the CFST-S hybrid structure, it can be widely employed in high-rise buildings subjected to earthquake excitations.
Keywords: seismic behavior, loss assessment, light gauge steel, concrete hybrid structure, high-rise building, time-history analysis
6654 Stego Machine – Video Steganography using Modified LSB Algorithm
Authors: Mritha Ramalingam
Abstract:
Computer technology and the Internet have made a breakthrough in data communication, opening a whole new way of implementing steganography to ensure secure data transfer. Steganography is the fine art of hiding information: concealing the message in a carrier file enables deniability of the existence of any message at all. This paper designs a stego machine, a steganographic application that hides text data in a computer video file and retrieves the hidden information. It is designed by embedding the text file in the video file using the Least Significant Bit (LSB) modification method, in such a way that the video does not lose its functionality. The method applies imperceptible modifications, and its security rests on an eavesdropper's inability to detect the hidden information.
Keywords: Data hiding, LSB, Stego machine, Video steganography.
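A minimal sketch of plain LSB embedding on one raw frame (a numpy array); the paper's specific modification of LSB and its video container handling are not described in the abstract, so the payload framing below (a 4-byte length header) is an illustrative choice:

```python
import numpy as np

def embed_lsb(frame: np.ndarray, message: bytes) -> np.ndarray:
    """Hide message bits in the least significant bits of a frame.

    A 4-byte big-endian length header is embedded first so the extractor
    knows how many bits to read back.
    """
    payload = len(message).to_bytes(4, "big") + message
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = frame.flatten()
    if bits.size > flat.size:
        raise ValueError("message too large for this frame")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # overwrite the LSBs
    return flat.reshape(frame.shape)

def extract_lsb(frame: np.ndarray) -> bytes:
    flat = frame.flatten()
    n = int.from_bytes(np.packbits(flat[:32] & 1).tobytes(), "big")
    return np.packbits(flat[32:32 + 8 * n] & 1).tobytes()

frame = np.random.randint(0, 256, size=(120, 160, 3), dtype=np.uint8)
stego = embed_lsb(frame, b"hidden text")
print(extract_lsb(stego))  # b'hidden text'
```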
6653 Parallel Explicit Group Domain Decomposition Methods for the Telegraph Equation
Authors: Kew Lee Ming, Norhashidah Hj. Mohd. Ali
Abstract:
In a previous work, we presented the numerical solution of the two-dimensional second-order telegraph partial differential equation discretized by the centred and rotated five-point finite difference discretizations, namely the explicit group (EG) and explicit decoupled group (EDG) iterative methods, respectively. In this paper, we apply a domain decomposition algorithm to these group schemes to divide the tasks involved in solving the same equation. The objective of this study is to describe the development of the parallel group iterative schemes under an OpenMP programming environment as a way to reduce the computational cost of the solution processes using multicore technologies. A detailed performance analysis of the parallel implementations of the point and group iterative schemes is reported and discussed.
Keywords: Telegraph equation, explicit group iterative scheme, domain decomposition algorithm, parallelization.
6652 Stabilization and Control of UAV Flight Attitude Angles Using the Backstepping Method
Authors: Mihai Lungu
Abstract:
The paper presents the design of a mini-UAV attitude controller using the backstepping method. Starting from the nonlinear dynamic equations of the mini-UAV and using the backstepping method, the author obtains the expressions of the elevator, rudder and aileron deflections that stabilize the UAV, at each moment, to the desired values of the attitude angles. The attitude controller controls the attitude angles, the angular rates, the angular accelerations and other variables that describe the UAV's longitudinal and lateral motions. To design the nonlinear controller, the backstepping technique uses the nonlinear equations and Lyapunov analysis directly. The designed controller has been implemented in the Matlab/Simulink environment and its effectiveness has been tested with a campaign of numerical simulations using data from UAV flight tests. The obtained results are good and better than those found in previous works.
Keywords: Attitude angles, Backstepping, Controller, UAV.
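The UAV equations themselves are not reproduced in the abstract; as a point of reference, here is the backstepping recursion on the simplest possible plant, a double integrator tracking a sine reference (gains and reference are illustrative):

```python
import numpy as np

# Backstepping for x1' = x2, x2' = u, tracking x1d(t).
# With z1 = x1 - x1d, alpha = x1d' - k1*z1 and z2 = x2 - alpha, the control
# u = alpha' - z1 - k2*z2 gives V' = -k1*z1^2 - k2*z2^2 for
# V = (z1^2 + z2^2)/2, so the tracking errors decay.
k1, k2, dt = 2.0, 2.0, 1e-3
x1, x2 = 0.5, 0.0
for step in range(int(10 / dt)):
    t = step * dt
    x1d, x1d_dot, x1d_ddot = np.sin(t), np.cos(t), -np.sin(t)
    z1 = x1 - x1d
    alpha = x1d_dot - k1 * z1          # virtual control for the x1 subsystem
    z2 = x2 - alpha
    alpha_dot = x1d_ddot - k1 * (x2 - x1d_dot)
    u = alpha_dot - z1 - k2 * z2       # final backstepping control law
    x1, x2 = x1 + dt * x2, x2 + dt * u  # Euler integration of the plant
print(abs(x1 - np.sin(10.0)))          # tracking error after 10 s (small)
```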
6651 Objects Extraction by Cooperating Optical Flow, Edge Detection and Region Growing Procedures
Abstract:
The image segmentation method described in this paper has been developed as a pre-processing stage for methodologies and tools for video/image indexing and retrieval by content. The method solves the problem of extracting whole objects from the background, producing images of single complete objects from videos or photos. The extracted images are used for calculating the object visual features necessary for both the indexing and retrieval processes. The segmentation algorithm is based on the cooperation among an optical flow evaluation method, edge detection and region growing procedures. The optical flow estimator belongs to the class of differential methods; it can detect motions ranging from a fraction of a pixel to a few pixels per frame, achieves good results in the presence of noise without a filtering pre-processing stage, and includes a specialised model for moving object detection. The first task of the method exploits cues from motion analysis to detect moving areas. Objects and background are then refined using edge detection and seeded region growing procedures, respectively. All the tasks are performed iteratively until objects and background are completely resolved. The method has been applied to a variety of indoor and outdoor scenes in which objects of different types and shapes appear on variously textured backgrounds.
Keywords: Image Segmentation, Motion Detection, Object Extraction, Optical Flow.
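Of the three cooperating procedures, seeded region growing is the easiest to show compactly. A minimal sketch with an intensity-tolerance criterion; the paper's actual seeding from motion cues and its edge-based refinement are omitted, and the tolerance is illustrative:

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol=10):
    """Seeded region growing: flood outward from `seed`, absorbing
    4-connected pixels whose intensity is within `tol` of the running
    region mean."""
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    total, count = float(image[seed]), 1
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and not mask[rr, cc]:
                if abs(float(image[rr, cc]) - total / count) <= tol:
                    mask[rr, cc] = True
                    total += float(image[rr, cc]); count += 1
                    queue.append((rr, cc))
    return mask

img = np.full((64, 64), 30, dtype=np.uint8)
img[20:40, 20:40] = 200                       # bright object on dark background
print(region_grow(img, seed=(30, 30)).sum())  # 400 object pixels recovered
```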
6650 Construction Procedures Evaluation of Three Adjacent Tunnels and Excavation Step Effects
Authors: M. Mahdi, N. Shariatmadari
Abstract:
Since both the relative position of tunnels and the construction procedure affect soil movement and the internal forces in the lining, the influence of these factors on tunnel design is of major concern. Construction procedures have considerable effects on the magnitude of surface movements and lining stresses. This paper describes a numerical analysis of the construction procedure of three adjacent shallow tunnels at high groundwater levels using the commercial finite difference software FLAC-3D. The aim of this study is to determine the most suitable construction procedure for the three tunnels and the optimum excavation step for the Tehran Metro tunnels, in order to minimize surface settlements and lining stresses.
Keywords: Shallow tunnel, multiple tunnels, construction procedure, surface movement, numerical modeling.
6649 Optimising Data Transmission in Heterogeneous Sensor Networks
Authors: M. Hammerton, J. Trevathan, T. Myers, W. Read
Abstract:
The transfer rate of messages in distributed sensor network applications is a critical factor in a system's performance. The Sensor Abstraction Layer (SAL) is one such system: a middleware integration platform that abstracts sensor-specific technology in order to integrate heterogeneous types of sensors in a network. SAL uses Java Remote Method Invocation (RMI) as its connection method, which has unsatisfactory transfer rates, especially for streaming data. This paper analyses different connection methods to optimize data transmission in SAL by replacing RMI. Our results show that the most promising Java-based connections were frameworks for Java New Input/Output (NIO), including Apache MINA, JBoss Netty, and xSocket. A test environment was implemented to evaluate each framework in terms of transfer rate, resource usage, and scalability. Test results showed that the most suitable connection method for improving data transmission in SAL is JBoss Netty, which provides a performance enhancement of 68%.
Keywords: Wireless sensor networks, remote method invocation, transmission time.
6648 Hybrid Weighted Multiple Attribute Decision Making Handover Method for Heterogeneous Networks
Authors: Mohanad Alhabo, Li Zhang, Naveed Nawaz
Abstract:
Small cell deployment in 5G networks is a promising technology for enhancing capacity and coverage. However, unplanned deployment may cause high interference levels and a high number of unnecessary handovers, which in turn increase the signalling overhead. To guarantee service continuity, minimize unnecessary handovers and reduce signalling overhead in heterogeneous networks, it is essential to properly model the handover decision problem. In this paper, we model the handover decision problem using a Multiple Attribute Decision Making (MADM) method, specifically the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS), and propose a hybrid TOPSIS method to control handover in heterogeneous networks. The proposed method adopts a hybrid weighting policy combining entropy and standard deviation, with a hybrid weighting control parameter introduced to balance the impact of the standard deviation and entropy weights on the network selection process and the overall performance. Our proposed method shows better performance, in terms of the number of frequent handovers and the mean user throughput, than existing methods.
Keywords: Handover, HetNets, interference, MADM, small cells, TOPSIS, weight.
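A minimal sketch of TOPSIS with the hybrid entropy/standard-deviation weighting described above; the attribute set, candidate values and blending parameter alpha are illustrative, not the paper's:

```python
import numpy as np

def hybrid_topsis(X, benefit, alpha=0.5):
    """TOPSIS ranking with hybrid entropy / standard-deviation weights.

    X: decision matrix (rows = candidate cells, columns = attributes > 0).
    benefit: True where larger is better, False where smaller is better.
    alpha: hybrid control parameter blending the two weight vectors.
    """
    P = X / X.sum(axis=0)
    E = -np.sum(P * np.log(P + 1e-12), axis=0) / np.log(X.shape[0])
    w_ent = (1 - E) / (1 - E).sum()              # entropy weights
    w_std = X.std(axis=0) / X.std(axis=0).sum()  # standard-deviation weights
    w = alpha * w_ent + (1 - alpha) * w_std
    R = w * X / np.linalg.norm(X, axis=0)        # weighted normalised matrix
    ideal = np.where(benefit, R.max(axis=0), R.min(axis=0))
    worst = np.where(benefit, R.min(axis=0), R.max(axis=0))
    d_pos = np.linalg.norm(R - ideal, axis=1)
    d_neg = np.linalg.norm(R - worst, axis=1)
    return d_neg / (d_pos + d_neg)               # relative closeness

# Columns: throughput (Mb/s, benefit), load (%, cost), interference (cost)
X = np.array([[50.0, 40.0, 0.2],
              [80.0, 75.0, 0.5],
              [60.0, 30.0, 0.3]])
scores = hybrid_topsis(X, benefit=np.array([True, False, False]))
print(scores, "-> select cell", scores.argmax())
```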
6647 Effect of Inclusions on the Shape and Size of Crack Tip Plastic Zones by Element Free Galerkin Method
Authors: A. Jameel, G. A. Harmain, Y. Anand, J. H. Masoodi, F. A. Najar
Abstract:
The present study investigates the effect of inclusions on the shape and size of crack tip plastic zones in engineering materials subjected to static loads, employing the element free Galerkin method (EFGM). The modeling of the discontinuities produced by cracks and inclusions becomes independent of the grid chosen for analysis. The standard displacement approximation is modified by adding enrichment functions, which introduce the effects of the different discontinuities into the formulation. The level set method is used to represent the discontinuities present in the domain. The effect of inclusions on the extent of crack tip plastic zones is investigated by solving several numerical problems with the EFGM.
Keywords: EFGM, stress intensity factors, crack tip plastic zones, inclusions.
6646 Determining Threshold Levels of Burst by Burst AQAM/CDMA in Slow Rayleigh Fading Environments
Authors: F. Nejadebrahimi, M. ArdebiliPour
Abstract:
In this paper, we determine the threshold levels of adaptive modulation in a burst-by-burst CDMA system by a suboptimum method that attempts to increase the average bits per symbol (BPS) rate of the transceiver by switching between different modulation modes under varying channel conditions. In this method, we choose the minimum values of average bit error rate (BER) and maximum values of average BPS over different values of average channel signal-to-noise ratio (SNR), and then calculate the corresponding threshold levels, so that when the instantaneous SNR increases a higher-order modulation is employed to increase throughput, and vice versa: when the instantaneous SNR decreases, a lower-order modulation is employed to improve the BER. In the transmission step, according to a comparison between the estimates obtained from pilot symbols and the set of suboptimum threshold levels, the system chooses one of the states no transmission, BPSK, 4QAM or square 16QAM for modulating the data. The channel considered in this paper is slow Rayleigh fading.
Keywords: AQAM, burst, BER, BPS, CDMA, threshold.
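Once the threshold levels are fixed, the per-burst switching rule reduces to a threshold comparison on the instantaneous SNR estimate. A minimal sketch; the threshold values in dB are illustrative placeholders, not the levels derived in the paper:

```python
def select_mode(snr_db, thresholds=(3.0, 9.0, 15.0)):
    """Pick a modulation mode from the instantaneous SNR estimate.

    thresholds are illustrative switching levels (dB), not the paper's.
    """
    modes = ("no transmission", "BPSK", "4QAM", "16QAM")
    idx = sum(snr_db >= t for t in thresholds)  # count of thresholds crossed
    return modes[idx]

for snr in (1.0, 5.0, 12.0, 20.0):
    print(snr, "->", select_mode(snr))
```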
6645 Prediction of the Limiting Drawing Ratio in Deep Drawing Process by Back Propagation Artificial Neural Network
Authors: H. Mohammadi Majd, M. Jalali Azizpour, M. Goodarzi
Abstract:
In this paper a back-propagation artificial neural network (BPANN) with the Levenberg–Marquardt algorithm is employed to predict the limiting drawing ratio (LDR) of the deep drawing process. To prepare a training set for the BPANN, a number of finite element simulations were carried out. Die and punch radius, die arc radius, friction coefficient, sheet thickness, yield strength of the sheet and strain hardening exponent were used as the input data, and the LDR as the output used in training the neural network. Given these parameters, the trained network is able to estimate the LDR for any new condition. Comparing FEM and BPANN results, an acceptable correlation was found.
Keywords: BPANN, deep drawing, prediction, limiting drawing ratio (LDR), Levenberg–Marquardt algorithm.
6644 A Method to Calculate Frenet Apparatus of W-Curves in the Euclidean 6-Space
Authors: Süha Yılmaz, Melih Turgut
Abstract:
In this work, a regular unit speed curve in six-dimensional Euclidean space whose Frenet curvatures are all constant is considered. Thereafter, a method to calculate the Frenet apparatus of this curve is presented.
Keywords: Classical Differential Geometry, Euclidean 6-space, Frenet Apparatus of the curves.
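For reference, the Frenet apparatus in E^6 consists of the orthonormal frame {V_1, ..., V_6} and the curvatures k_1, ..., k_5, tied together by the standard Frenet equations; a W-curve is then the special case in which every k_i is constant:

```latex
\begin{aligned}
V_1' &= k_1 V_2,\\
V_i' &= -k_{i-1} V_{i-1} + k_i V_{i+1}, \qquad 2 \le i \le 5,\\
V_6' &= -k_5 V_5,
\end{aligned}
\qquad k_i = \text{const for a W-curve.}
```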
6643 Road Extraction Using Stationary Wavelet Transform
Authors: Somkait Udomhunsakul
Abstract:
In this paper, a novel road extraction method using the Stationary Wavelet Transform is proposed. To detect road features in color aerial satellite imagery, Mexican hat wavelet filters are applied through the Stationary Wavelet Transform in a multiresolution, multi-scale sense, and products of wavelet coefficients at different scales are formed to locate and identify road features at a few scales. In addition, the shifting of road feature locations across scales is considered, for robust extraction of asymmetric road feature profiles. The experimental results show that the proposed method provides a useful basis for road feature extraction. The method is general and can also be applied to other features in imagery.
Keywords: Road extraction, Multiresolution, Stationary Wavelet Transform, Multi-scale analysis
6642 Face Recognition Using Principal Component Analysis, K-Means Clustering, and Convolutional Neural Network
Authors: Zukisa Nante, Wang Zenghui
Abstract:
Face recognition is the problem of identifying or recognizing individuals in an image. This paper investigates a possible method to solve this problem: an amalgamation of Principal Component Analysis (PCA), K-Means clustering, and a Convolutional Neural Network (CNN) for a face recognition system. It is trained and evaluated on the ORL dataset, which consists of 400 face images in 40 classes, with 10 images per class. Firstly, PCA enables the use of a smaller network, reducing the training time of the CNN: redundancy is removed and the variance is preserved with a smaller number of coefficients. Secondly, the K-Means clustering model is trained on the PCA-compressed data, which selects cluster centers with better characteristics. Lastly, the K-Means features serve as initial values for the CNN and act as its input data. The accuracy and performance of the proposed method were tested against other face recognition (FR) techniques, namely PCA, Support Vector Machine (SVM), and K-Nearest Neighbour (kNN). During experimentation, our suggested method, after 90 epochs, achieved the highest performance: 99% accuracy, 99% F1-score, 99% precision, and 99% recall in 463.934 seconds. It outperformed PCA, which obtained 97%, and kNN, with 84%, in the conducted experiments. The method therefore proved efficient in identifying faces in images.
Keywords: Face recognition, Principal Component Analysis, PCA, Convolutional Neural Network, CNN, Rectified Linear Unit, ReLU, feature extraction.
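A minimal scikit-learn sketch of the PCA and K-Means stages that feed the CNN; the CNN itself is omitted, a random array stands in for the ORL images (which are not bundled here), and the component counts only loosely follow the 40-class setup:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Stand-in for ORL: 400 images of 92x112 pixels, flattened
rng = np.random.default_rng(0)
faces = rng.random((400, 92 * 112))

pca = PCA(n_components=50)              # keep most variance, shrink the input
compressed = pca.fit_transform(faces)

kmeans = KMeans(n_clusters=40, n_init=10, random_state=0)
kmeans.fit(compressed)

# Cluster-distance features: one value per centre, usable as CNN input
features = kmeans.transform(compressed)
print(features.shape)                   # (400, 40)
```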
6641 A Perceptual Image Coding Method of High Compression Rate
Authors: Fahmi Kammoun, Mohamed Salim Bouhlel
Abstract:
In the framework of image compression by wavelet transforms, we propose a perceptual method that incorporates Human Visual System (HVS) characteristics in the quantization stage. Indeed, human eyes do not have equal sensitivity across the frequency bandwidth. Therefore, the clarity of the reconstructed images can be improved by weighting the quantization according to the Contrast Sensitivity Function (CSF), and visual artifacts at low bit rates are minimized. To evaluate our method, we use the Peak Signal to Noise Ratio (PSNR) and a new evaluation criterion that takes visual factors into account. The experimental results illustrate that our technique improves image quality at the same compression ratio.
Keywords: Contrast Sensitivity Function, Human Visual System, Image compression, Wavelet transforms.
6640 Elliptical Features Extraction Using Eigen Values of Covariance Matrices, Hough Transform and Raster Scan Algorithms
Authors: J. Prakash, K. Rajesh
Abstract:
In this paper, we introduce a new method for elliptical object identification. The proposed method adopts a hybrid scheme consisting of eigenvalues of covariance matrices, the circular Hough transform and Bresenham's raster scan algorithm. The approach uses the fact that the large and small eigenvalues of a covariance matrix are associated with the major and minor axial lengths of an ellipse. The centre of the ellipse is identified using the circular Hough transform (CHT), with a sparse matrix technique used to perform the CHT; since sparse matrices store only the nonzero elements, they save matrix storage space and computational time. A neighborhood suppression scheme is used to find the valid Hough peaks. The accurate position of the circumference pixels is identified using the raster scan algorithm, which exploits the geometrical symmetry property. The method does not require the evaluation of tangents or the curvature of edge contours, which are generally very sensitive to noisy working conditions. The proposed method has the advantages of small storage, high speed and accuracy in identifying the feature, and has been tested on both synthetic and real images. Several experiments have been conducted on various images with considerable background noise to reveal its efficacy and robustness. Experimental results on the accuracy of the proposed method, with comparisons to the Hough transform, its variants, and other tangent-based methods, are reported.
Keywords: Circular Hough transform, covariance matrix, Eigen values, ellipse detection, raster scan algorithm.
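A minimal sketch of the eigenvalue-to-axis relation used in the first stage: for points spread uniformly in parameter over an ellipse, the covariance eigenvalues are approximately a^2/2 and b^2/2, so the axial lengths can be read off from them. The CHT and raster-scan stages are not reproduced, and the test ellipse is synthetic:

```python
import numpy as np

# Noisy points on an ellipse with semi-axes a = 5, b = 2, rotated 30 degrees
rng = np.random.default_rng(1)
t = rng.uniform(0, 2 * np.pi, 500)
theta = np.pi / 6
pts = np.column_stack((5 * np.cos(t), 2 * np.sin(t)))
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
pts = pts @ R.T + rng.normal(scale=0.05, size=(500, 2))

cov = np.cov(pts, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
a_est = np.sqrt(2 * eigvals[1])          # large eigenvalue -> semi-major axis
b_est = np.sqrt(2 * eigvals[0])          # small eigenvalue -> semi-minor axis
print(a_est, b_est)                      # close to 5 and 2
```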
6639 Enhancing Student Evaluation Through Student Idol
Authors: M. S. Roslina, M. O. Syahrul Hakimah Ong, S. F. Syarifah Fazlin
Abstract:
Since Malaysia's Independence Day in 1957, the government has been trying hard to find the most efficient methods in education. However, it is difficult to assess students and identify which of them deserves to be called excellent, because at present an excellent student is simply one who excels academically. This evaluation is a problem because it is not balanced: it does not cover a student's whole involvement in both curriculum and co-curriculum. To overcome this, we propose a method called Student Idol, which evaluates students in three categories: academics, co-curriculum and leadership, each with its own merit points. Using this method, students can be evaluated more accurately than before, and teachers can evaluate their students without emotional factors, relationship factors and other biases. In conclusion, this method helps make student evaluation more accurate and valid.
Keywords: Evaluation, curriculum, co-curriculum, idol.
6638 Study on Position Polarity Compensation for Permanent Magnet Synchronous Motor Based on High Frequency Signal Injection
Authors: Gu Shan-Mao, He Feng-You, Ye Sheng-Wen, Ma Zhi-Xun
Abstract:
The application of high frequency signal injection as a speed and position observer in PMSM drives has been a research focus. At present, the precision of this method is nearly as good as that of a ten-bit encoder, but questions remain in estimating position polarity. Based on high frequency signal injection, this paper presents a method to compensate position polarity for the permanent magnet synchronous motor (PMSM). Experiments were performed to test the effectiveness of the proposed algorithm, and the results demonstrate good performance.
Keywords: permanent magnet synchronous motor, sensorless, high-frequency signal injection, magnetic pole position.
6637 A Multi-period Profit Maximization Policy for a Stochastic Demand Inventory System with Upward Substitution
Authors: Soma Roychowdhury
Abstract:
This paper deals with a periodic-review substitutable inventory system with two products, over both a finite and an infinite number of periods. An upward substitution structure, the substitution of a more costly item by a less costly one, is assumed. At the beginning of each period, a stochastic demand arrives for the first item only, which is quality-wise better and hence costlier. Whenever an arriving demand finds zero inventory of this product, a fraction of the unsatisfied customers opts for the substitutable second item. An optimal ordering policy is derived for each period, and the results are illustrated with numerical examples. A sensitivity analysis examines how sensitive the optimal solution and the maximum profit are to the value of the discount factor when the number of periods is large.
Keywords: Multi-period model, inventory, random demand, upward substitution.
6636 Estimating Regression Effects in Com Poisson Generalized Linear Model
Authors: Vandna Jowaheer, Naushad A. Mamode Khan
Abstract:
The Com-Poisson distribution is capable of modeling count responses irrespective of their mean-variance relation, and its parameters, when fitted to simple cross-sectional data, can be efficiently estimated by the maximum likelihood (ML) method. In the regression setup, however, ML estimation of the parameters of the Com-Poisson based generalized linear model is computationally intensive. In this paper, we propose the quasi-likelihood (QL) approach to estimate the effect of the covariates on the Com-Poisson counts and investigate the performance of this method with respect to the ML method. QL estimates are consistent and almost as efficient as ML estimates. The simulation studies show that the efficiency loss in estimating all the parameters with the QL approach, as compared to the ML approach, is quite negligible, whereas the QL approach is far less involved than the ML approach.
Keywords: Com Poisson, Cross-sectional, Maximum Likelihood, Quasi-likelihood.
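For reference, the Com-Poisson (Conway-Maxwell-Poisson) probability mass function underlying the model is

```latex
P(Y = y) = \frac{\lambda^{y}}{(y!)^{\nu}\, Z(\lambda, \nu)}, \qquad
Z(\lambda, \nu) = \sum_{j=0}^{\infty} \frac{\lambda^{j}}{(j!)^{\nu}}, \qquad
y = 0, 1, 2, \ldots
```

The dispersion parameter ν recovers the Poisson at ν = 1, with ν > 1 giving under-dispersion and ν < 1 over-dispersion, which is why the mean-variance relation is unrestricted.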
6635 Normalizing Logarithms of Realized Volatility in an ARFIMA Model
Authors: G. L. C. Yap
Abstract:
Modelling realized volatility with high-frequency returns is popular, as realized volatility is an unbiased and efficient estimator of return volatility. A computationally simple model fits the logarithms of the realized volatilities with a fractionally integrated long-memory Gaussian process; the Gaussianity assumption simplifies parameter estimation using the Whittle approximation. Nonetheless, this assumption may not be met in finite samples, and there may be a need to normalize the financial series. Based on the empirical indices S&P500 and DAX, this paper examines the performance of the linear volatility model pre-treated with normalization against its existing counterpart. The empirical results show that including normalization as a pre-treatment procedure improves forecast performance over the existing model in terms of statistical and economic evaluations.
Keywords: Long-memory, Gaussian process, Whittle estimator, normalization, volatility, value-at-risk.
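A minimal sketch of the first steps of such a pipeline: daily realized variance from intraday returns, its logarithm, and one simple normalization (a rank-based normal-scores transform). The paper's exact normalization procedure and the ARFIMA/Whittle fit are not reproduced, and the returns below are simulated stand-ins:

```python
import numpy as np
from scipy.stats import norm, rankdata

rng = np.random.default_rng(7)
days, bars = 250, 78                    # ~78 five-minute bars per trading day
r = rng.normal(scale=0.001, size=(days, bars))  # intraday log returns

rv = (r ** 2).sum(axis=1)               # daily realized variance
log_rv = np.log(rv)                     # series to be fitted by ARFIMA

# Normal-scores transform: map ranks through the inverse Gaussian CDF
z = norm.ppf(rankdata(log_rv) / (len(log_rv) + 1))
print(log_rv.std(), z.std())            # z is closer to standard normal
```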
6634 Constructing a Fuzzy Net Present Value Method for Evaluating BOT Sport Facilities
Authors: Huei-Fu Lu
Abstract:
This paper develops a fuzzy net present value (FNPV) method that takes vague cash flows and an imprecise required rate of return into account when evaluating the value of Build-Operate-Transfer (BOT) sport facilities. To obtain a more realistic capital budgeting model based on the classical net present value (NPV) method, the uncertain financial elements in the NPV formula are fuzzified as triangular fuzzy numbers. Through careful manipulation of fuzzy set theory, the proposed FNPV model emerges as an explicit extension of the classical (crisp) model, and it may be more practicable than a non-fuzzy model for financial managers seeking to capture the essence of capital budgeting for sport facilities.
Keywords: Fuzzy sets, Capital budgeting, Sport facility, Net present value (NPV), Build-Operate-Transfer (BOT) scheme.
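A minimal sketch of an NPV over triangular fuzzy numbers via interval arithmetic on the support bounds; it assumes a crisp-like initial outlay at t = 0 followed by positive fuzzy inflows, and all figures are illustrative, not the paper's model:

```python
def tfn_npv(cash_flows, rate):
    """Fuzzy NPV for triangular fuzzy cash flows and a triangular fuzzy rate.

    cash_flows[t] = (low, mode, high) for period t (t = 0 is the outlay).
    Pessimistic bound: low inflows discounted at the high rate;
    optimistic bound: high inflows discounted at the low rate.
    Valid when all flows after the outlay are positive.
    """
    r_lo, r_m, r_hi = rate
    lo = sum(cf[0] / (1 + r_hi) ** t for t, cf in enumerate(cash_flows))
    mode = sum(cf[1] / (1 + r_m) ** t for t, cf in enumerate(cash_flows))
    hi = sum(cf[2] / (1 + r_lo) ** t for t, cf in enumerate(cash_flows))
    return lo, mode, hi

# Stadium-style example: outlay of 100, then four vague annual inflows
flows = [(-100, -100, -100)] + [(25, 30, 35)] * 4
print(tfn_npv(flows, rate=(0.06, 0.08, 0.10)))  # triangular fuzzy NPV
```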