Search results for: Number of neurons in the hidden layer.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4600

880 Searchable Encryption in Cloud Storage

Authors: Ren-Junn Hwang, Chung-Chien Lu, Jain-Shing Wu

Abstract:

Cloud outsourced storage is one of the important services in cloud computing. Cloud users upload data to cloud servers to reduce the cost of managing data and maintaining hardware and software. To ensure data confidentiality, users can encrypt their files before uploading them to a cloud system. However, exactly retrieving a target file from among the encrypted files is difficult for the cloud server. This study proposes a protocol for performing multi-keyword searches over encrypted cloud data by applying k-nearest neighbor technology. The protocol ranks the relevance scores of encrypted files and keywords, and prevents cloud servers from learning the search keywords submitted by a cloud user. To reduce file transfer communication costs, the cloud server returns encrypted files in order of relevance. Moreover, when a cloud user inputs a misspelled keyword and the number of wrong letters does not exceed a given threshold, the user can still retrieve the target files from the cloud server. In addition, the proposed scheme satisfies the security requirements for outsourced data storage.
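
The ranking step can be illustrated with the inner-product-preserving transform that secure k-nearest-neighbor searchable-encryption schemes commonly build on (the Wong et al. construction). The sketch below is an assumption-laden toy, not the paper's exact protocol: the keyword dictionary size, index vectors, and key material are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 6  # number of keywords in the dictionary (illustrative)

# Secret key: a random bit vector S and two invertible matrices M1, M2.
S = rng.integers(0, 2, size=d)
M1 = rng.normal(size=(d, d))
M2 = rng.normal(size=(d, d))

def split(vec, is_query):
    """Split vec into (v1, v2) according to S (Wong et al. style)."""
    v1, v2 = vec.astype(float).copy(), vec.astype(float).copy()
    for j in range(d):
        # Index vectors are split where S[j] == 1, query vectors where S[j] == 0.
        if (S[j] == 1) != is_query:
            r = rng.normal()
            v1[j], v2[j] = r, vec[j] - r
    return v1, v2

def encrypt_index(p):
    p1, p2 = split(p, is_query=False)
    return M1.T @ p1, M2.T @ p2

def encrypt_query(q):
    q1, q2 = split(q, is_query=True)
    return np.linalg.inv(M1) @ q1, np.linalg.inv(M2) @ q2

# File index vectors (per-keyword relevance scores) and a two-keyword query.
files = {"f1": np.array([3, 0, 1, 0, 2, 0]),
         "f2": np.array([0, 2, 0, 0, 1, 1]),
         "f3": np.array([1, 1, 4, 0, 0, 0])}
query = np.array([1, 0, 1, 0, 0, 0])

enc_files = {name: encrypt_index(p) for name, p in files.items()}
eq1, eq2 = encrypt_query(query)

# The server ranks files by the preserved inner product without seeing keywords.
scores = {name: c1 @ eq1 + c2 @ eq2 for name, (c1, c2) in enc_files.items()}
for name in sorted(scores, key=scores.get, reverse=True):
    print(name, round(scores[name], 6), "(plain score:", files[name] @ query, ")")
```

Because the encrypted score equals the plaintext inner product, the server can return files in order of relevance while learning neither the index contents nor the query keywords.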

Keywords: Fault-tolerant search, multi-keyword search, outsourced storage, ranked search, searchable encryption.

PDF Downloads: 3052
879 Improving Decision Support for Organ Transplant

Authors: I. McCulloh, A. Placona, D. Stewart, D. Gause, K. Kiernan, M. Stuart, C. Zinner, L. Cartwright

Abstract:

We find in our data that an alarming number of viable deceased-donor kidneys are discarded every year in the US, while waitlisted candidates are dying every day. We observe that as many as 85% of transplanted organs were refused at least once for a patient who scored higher on the match list. There are hundreds of clinical variables involved in making a clinical transplant decision, and there is rarely an ideal match. Decision makers exhibit an optimism bias, whereby they may refuse an organ offer on the assumption that a better match is imminent. We propose a semi-parametric Cox proportional hazards model, augmented by an accelerated failure time model based on patient-specific suitable-organ supply and demand, to estimate the time to the next offer. Performance is assessed with Cox-Snell residuals and decision curve analysis, demonstrating improved decision support for up to a 5-year outlook. Providing clinical decision-makers with quantitative evidence of likely patient outcomes (e.g., time to next offer and the mortality associated with waiting) may improve decisions and reduce optimism bias, thus reducing discarded organs and matching more patients on the waitlist.
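
As a rough illustration of the modelling step, the sketch below fits a Cox proportional hazards model for time to next offer with the lifelines library. The data frame, covariates, and candidate profile are invented stand-ins for the registry variables, and the accelerated failure time augmentation described in the abstract is not shown.

```python
# Minimal sketch: semi-parametric Cox PH model for "time to next organ offer".
# Data, column names, and covariates are illustrative assumptions only.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "days_to_next_offer": [12, 45, 30, 7, 90, 60, 25, 14],
    "offer_observed":     [1, 1, 0, 1, 0, 1, 1, 1],   # 0 = censored
    "kdpi":               [35, 80, 60, 20, 95, 70, 50, 40],
    "blood_type_O":       [1, 0, 1, 0, 1, 0, 0, 1],
    "local_supply_rate":  [2.1, 0.8, 1.5, 3.0, 0.5, 1.0, 1.8, 2.4],
})

cph = CoxPHFitter(penalizer=0.1)   # small penalty keeps the toy fit stable
cph.fit(df, duration_col="days_to_next_offer", event_col="offer_observed")
cph.print_summary()

# Predicted median waiting time for a new (hypothetical) candidate profile.
new_candidate = pd.DataFrame({"kdpi": [55], "blood_type_O": [1],
                              "local_supply_rate": [1.2]})
print(cph.predict_median(new_candidate))
```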

Keywords: Decision science, KDPI, optimism bias, organ transplant.

PDF Downloads: 139
878 Tracking Performance Evaluation of Robust Back-Stepping Control Design for a Nonlinear Electrohydraulic Servo System

Authors: M. Ahmadnezhad, M. Soltanpour

Abstract:

Electrohydraulic servo (EHS) systems are used in a wide range of industrial applications. Their dynamics are highly nonlinear and subject to considerable model uncertainty and external disturbances. In this paper, a robust back-stepping control (RBSC) scheme is proposed to overcome the effects of disturbances and system uncertainties and to improve the tracking performance of EHS systems. To implement the proposed control scheme, the system uncertainties in EHS systems are taken to be the total leakage coefficient and the effective oil volume. In addition, to obtain the virtual controls that stabilize the system, the update rule for the system uncertainty term is derived from a Lyapunov control function (LCF). To verify the performance and robustness of the proposed control system, it is simulated in Matlab/Simulink. The simulations show that the RBSC scheme produces the desired tracking performance and is robust to the disturbances and system uncertainties of EHS systems.
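
To make the backstepping idea concrete, here is a minimal sketch for a generic second-order nonlinear plant (x1' = x2, x2' = f(x) + u), where the control cancels the nonlinearity and adds terms chosen so that a quadratic Lyapunov function decreases. The plant nonlinearity, gains, and initial state are illustrative assumptions; this is not the electrohydraulic servo model or the robust/adaptive update law of the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

k1, k2 = 2.0, 3.0
f = lambda x1, x2: -0.5 * x2 * abs(x2)        # assumed plant nonlinearity

def backstepping_u(x1, x2):
    z1 = x1                                    # regulation error
    alpha = -k1 * z1                           # virtual control for x2
    z2 = x2 - alpha
    # u = -f + alpha_dot - z1 - k2*z2, with alpha_dot = -k1*x2, which gives
    # V' = -k1*z1^2 - k2*z2^2 for the Lyapunov function V = (z1^2 + z2^2)/2.
    return -f(x1, x2) - k1 * x2 - z1 - k2 * z2

def closed_loop(t, x):
    x1, x2 = x
    u = backstepping_u(x1, x2)
    return [x2, f(x1, x2) + u]

sol = solve_ivp(closed_loop, (0.0, 10.0), [1.0, -0.5], max_step=0.01)
print("final state:", sol.y[:, -1])            # both states driven near zero
```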

Keywords: Electrohydraulic servo system, back-stepping control, robust back-stepping control, Lyapunov redesign.

PDF Downloads: 2004
876 Closed Loop Control of Bridgeless Cuk Converter Using Fuzzy Logic Controller for PFC Applications

Authors: Nesapriya. P., S. Rajalaxmi

Abstract:

This paper addresses bridgeless single-phase AC-DC Power Factor Correction (PFC) converters with a Fuzzy Logic Controller. High-frequency isolated Cuk converters are used as modular DC-DC converters operating in Discontinuous Conduction Mode (DCM) for power factor correction. The aim of this paper is to reduce the program complexity of the controller by reducing the number of fuzzy sets in the Membership Functions (MFs), to improve efficiency, and to eliminate power quality problems. The output of the fuzzy controller is compared with a high-frequency triangular wave to generate the PWM gating signals of the Cuk converter. The proposed topologies are designed to work in DCM to achieve a unity power factor and low total harmonic distortion of the input current. The Fuzzy Logic Controller offers additional advantages, such as accurate results, tolerance of uncertainty and imprecision, and automatic control circuitry. Performance comparisons between the proposed and conventional controllers and circuits are performed based on circuit simulations.
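
A minimal sketch of the fuzzy control idea is given below: triangular membership functions over the normalized voltage error and its change, a small rule base, and weighted-average defuzzification producing a duty-cycle correction. The membership functions, rule table, and the crude stand-in for the converter dynamics are assumptions for illustration, not the reduced rule set or converter model of the paper.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with corners a, b, c."""
    return max(min((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

# Three fuzzy sets per input (Negative, Zero, Positive) over a normalized range.
SETS = {"N": (-1.0, -1.0, 0.0), "Z": (-1.0, 0.0, 1.0), "P": (0.0, 1.0, 1.0)}

# Rule base: (error set, delta-error set) -> crisp duty-cycle correction.
RULES = {("N", "N"): -0.10, ("N", "Z"): -0.05, ("N", "P"): 0.0,
         ("Z", "N"): -0.05, ("Z", "Z"):  0.0,  ("Z", "P"): 0.05,
         ("P", "N"):  0.0,  ("P", "Z"):  0.05, ("P", "P"): 0.10}

def fuzzy_duty_correction(error, d_error):
    """Weighted-average defuzzification of the rule outputs."""
    num = den = 0.0
    for (e_set, de_set), out in RULES.items():
        w = min(tri(error, *SETS[e_set]), tri(d_error, *SETS[de_set]))
        num += w * out
        den += w
    return num / den if den else 0.0

# Toy closed loop: regulate an output voltage toward its reference.
v_ref, v_out, duty, prev_err = 1.0, 0.2, 0.3, 0.0
for step in range(50):
    err = np.clip(v_ref - v_out, -1, 1)
    d_err = np.clip(err - prev_err, -1, 1)
    duty = np.clip(duty + fuzzy_duty_correction(err, d_err), 0.05, 0.95)
    v_out += 0.5 * (duty - 0.5 * v_out)      # crude stand-in for converter dynamics
    prev_err = err
print("duty:", round(float(duty), 3), "v_out:", round(float(v_out), 3))
```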

Keywords: Fuzzy Logic Controller (FLC), Bridgeless rectifier, Cuk converter, Pulse Width Modulation (PWM), Power Factor Correction, Total Harmonic Distortion (THD).

PDF Downloads: 4025
875 Application of Computer Aided Engineering Tools in Performance Prediction and Fault Detection of Mechanical Equipment of Mining Process Line

Authors: K. Jahani, J. Razavi

Abstract:

Nowadays, to decrease the number of downtimes in industries such as metal mining, petroleum, and chemical processing, predictive maintenance is crucial. Efficient predictive maintenance requires knowing the performance of critical production-line equipment, such as pumps and hydrocyclones, under variable operating parameters; selecting the best indicators of equipment health; choosing the best locations for instrumentation; and measuring these indicators. In this paper, computer aided engineering (CAE) tools are used to study important elements of a copper process line, namely slurry pumps and a hydrocyclone, to predict the performance of these components under different working conditions. The modeling and simulations can be used to predict, for example, the damage tolerance of the main shaft of the slurry pump, or the wear rate and location on the cyclone wall or the pump case and impeller. The simulations can also suggest the best measuring parameters, measuring intervals, and their locations.

Keywords: Computer aided engineering, predictive maintenance, fault detection, mining process line, slurry pump, hydrocyclone.

PDF Downloads: 1809
874 Generational Pipelined Genetic Algorithm (PLGA) Using Stochastic Selection

Authors: Malay K. Pakhira, Rajat K. De

Abstract:

In this paper, a pipelined version of the genetic algorithm, called PLGA, and a corresponding hardware platform are described. The basic operations of the conventional GA (CGA) are pipelined using an appropriate selection scheme. The selection operator used here is stochastic in nature and is called SA-selection. This helps maintain the basic generational nature of the proposed pipelined GA (PLGA). A number of benchmark problems are used to compare the performance of conventional roulette-wheel selection and SA-selection. These include unimodal and multimodal functions with dimensionality varying from very small to very large. The SA-selection scheme gives performance comparable to the classical roulette-wheel selection scheme on all instances, in terms of both solution quality and rate of convergence. The speedups obtained by PLGA on the different benchmarks are found to be significant. It is shown that a complete hardware pipeline can be developed using the proposed scheme if parallel evaluation of the fitness expression is possible. In this connection, a low-cost but very fast hardware evaluation unit is described. Results of simulation experiments show that in a pipelined hardware environment, PLGA will be much faster than CGA. In terms of efficiency, PLGA is also found to outperform parallel GA (PGA).
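
The abstract does not spell out the SA-selection rule, so the sketch below assumes a simulated-annealing-style acceptance (a randomly drawn candidate replaces the current pick if it is fitter, otherwise with probability exp(-delta/T)) inside an otherwise ordinary generational GA. The benchmark (sphere function), operators, and cooling schedule are illustrative choices, and no pipelining or hardware aspects are modeled.

```python
import math, random
random.seed(1)

DIM, POP, GENS = 10, 40, 200
sphere = lambda x: sum(v * v for v in x)          # unimodal benchmark (minimize)

def random_ind():
    return [random.uniform(-5, 5) for _ in range(DIM)]

def crossover(a, b):
    cut = random.randrange(1, DIM)
    return a[:cut] + b[cut:]

def mutate(x, rate=0.1, sigma=0.3):
    return [v + random.gauss(0, sigma) if random.random() < rate else v for v in x]

def sa_select(pop, fits, current, temp):
    """SA-style stochastic selection (assumed rule): a random candidate replaces
    the current pick if fitter, otherwise with probability exp(-delta/T)."""
    cand = random.randrange(POP)
    delta = fits[cand] - fits[current]
    if delta <= 0 or random.random() < math.exp(-delta / max(temp, 1e-9)):
        return cand
    return current

pop, temp = [random_ind() for _ in range(POP)], 1.0
for gen in range(GENS):
    fits = [sphere(ind) for ind in pop]
    new_pop, current = [], 0
    for _ in range(POP):
        current = sa_select(pop, fits, current, temp)
        mate = sa_select(pop, fits, current, temp)
        new_pop.append(mutate(crossover(pop[current], pop[mate])))
    pop = new_pop
    temp *= 0.98                                   # cooling schedule (assumed)
print("best fitness:", min(sphere(ind) for ind in pop))
```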

Keywords: Hardware evaluation, Hardware pipeline, Optimization, Pipelined genetic algorithm, SA-selection.

PDF Downloads: 1420
873 Seed-Based Region Growing (SBRG) vs Adaptive Network-Based Fuzzy Inference System (ANFIS) vs Fuzzy c-Means (FCM): Brain Abnormalities Segmentation

Authors: Shafaf Ibrahim, Noor Elaiza Abdul Khalid, Mazani Manaf

Abstract:

Segmentation of Magnetic Resonance Imaging (MRI) images is one of the most challenging problems in medical imaging. This paper compares the performance of Seed-Based Region Growing (SBRG), the Adaptive Network-Based Fuzzy Inference System (ANFIS), and Fuzzy c-Means (FCM) in brain abnormalities segmentation. Controlled experimental data are used, designed in such a way that the sizes of the abnormalities are known in advance. This is done by cutting abnormalities of various sizes and pasting them onto normal brain tissue. The normal tissues, or background, are divided into three different categories. The segmentation is performed on fifty-seven images in each category. The known abnormality sizes, in numbers of pixels, are then compared with the segmentation results of the three proposed techniques. ANFIS returned the best segmentation performance on light abnormalities, whereas SBRG performed well on dark abnormalities.
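
As a reference point for one of the three methods, the sketch below implements the standard Fuzzy c-Means updates on the intensities of a synthetic image with a pasted bright region. The image, the number of clusters, and the fuzzifier m are illustrative assumptions, not the controlled data or settings of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "slice": background, normal tissue, and a pasted bright abnormality.
img = rng.normal(0.2, 0.02, (64, 64))
img[20:44, 20:44] = rng.normal(0.5, 0.02, (24, 24))
img[28:36, 28:36] = rng.normal(0.9, 0.02, (8, 8))

x = img.reshape(-1, 1)                      # pixels as 1-D feature vectors
c, m, n_iter = 3, 2.0, 100

# Standard FCM alternating updates of memberships U and centers V.
u = rng.random((x.shape[0], c))
u /= u.sum(axis=1, keepdims=True)
for _ in range(n_iter):
    um = u ** m
    centers = (um.T @ x) / um.sum(axis=0)[:, None]
    dist = np.abs(x - centers.T) + 1e-12    # distance of each pixel to each center
    inv = dist ** (-2.0 / (m - 1.0))
    u = inv / inv.sum(axis=1, keepdims=True)

labels = u.argmax(axis=1).reshape(img.shape)
abnormal_cluster = centers.ravel().argmax()  # brightest cluster = abnormality
print("segmented abnormality pixels:", int((labels == abnormal_cluster).sum()),
      "(ground truth: 64)")
```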

Keywords: Seed-Based Region Growing (SBRG), Adaptive Network-Based Fuzzy Inference System (ANFIS), Fuzzy c-Means (FCM), Brain segmentation.

PDF Downloads: 2286
872 Measuring the Performance of the Accident Reductions: Evidence from Izmir City

Authors: Y. Duvarci, S. Mizokami

Abstract:

Traffic enforcement units (the Police) are partly responsible for the severity and frequency of traffic accidents through the effectiveness of their safety measures. The Police claim that reductions in accidents and their severities occur largely because of their timely interventions at black spots, through traffic management or temporary changes in road design (guiding, reducing speeds, eliminating sight obstructions, etc.). Yet external factors other than the Police measures may also intervene, so such claims require statistical confirmation. To test the net impact of the Police contribution to the reduction in the number of crashes, a chi-square test was applied to 25 spots (streets and intersections), and an average evaluation was used to reach a general conclusion for the case study of Izmir city. Separately, the net impact of the economic crisis on the reduction in crashes, through its probable effects on trip generation and modal choice, was assessed by trend analysis. Finally, the Police measures were shown to be effective to some degree, as claimed, whereas the economic crisis may have made only a negligible contribution to the reductions observed in the same period.
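
The per-spot test can be illustrated with SciPy's chi-square test of independence on a before/after contingency table; the counts below are invented for illustration and are not the Izmir data.

```python
# Chi-square test on before/after accident counts at one black spot.
from scipy.stats import chi2_contingency

#                 accidents, accident-free exposure units (invented counts)
before_after = [[48, 317],   # year before the Police measure
                [29, 336]]   # year after the measure

chi2, p_value, dof, expected = chi2_contingency(before_after)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reduction is statistically significant at this spot.")
else:
    print("Reduction could be due to chance; external factors not ruled out.")
```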

Keywords: Road Safety, Police, Enforcement units, Chi-square test, Economic Impact.

PDF Downloads: 1670
871 Encryption Efficiency Analysis and Security Evaluation of RC6 Block Cipher for Digital Images

Authors: Hossam El-din H. Ahmed, Hamdy M. Kalash, Osama S. Farag Allah

Abstract:

This paper investigates the encryption efficiency of the RC6 block cipher applied to digital images, providing a new mathematical measure of encryption efficiency, which we call the encryption quality, in place of visual inspection. The encryption quality of the RC6 block cipher is investigated over several of its design parameters, such as word size, number of rounds, and secret key length, and the optimal choices for these design parameters are given. The security of the RC6 block cipher for digital images is also analyzed from a strict cryptographic viewpoint. Its security against brute-force, statistical, and differential attacks is estimated, and experiments are made to test its security against all the aforementioned types of attack. The experiments and results verify that the RC6 block cipher is highly secure for real-time image encryption from a cryptographic viewpoint. Thorough experimental tests are carried out with detailed analysis, demonstrating the high security of the RC6 block cipher algorithm. RC6 can therefore be considered a secure real-time symmetric encryption scheme for digital images.
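
One of the standard statistical-attack checks mentioned above is the correlation between adjacent pixels before and after encryption. The sketch below computes that metric; it uses a simple XOR keystream purely as a stand-in to produce a cipher image (it is not RC6), the gradient "image" is synthetic, and the paper's own encryption-quality measure is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Plain "image" with strong spatial correlation (smooth gradient plus noise).
x = np.linspace(0, 255, 256)
plain = np.clip(np.add.outer(x, x) / 2 + rng.normal(0, 5, (256, 256)),
                0, 255).astype(np.uint8)

# Stand-in cipher: XOR with a pseudorandom keystream (NOT RC6; illustration only).
keystream = rng.integers(0, 256, plain.shape, dtype=np.uint8)
cipher = plain ^ keystream

def adjacent_correlation(img):
    """Pearson correlation of horizontally adjacent pixel pairs."""
    a = img[:, :-1].ravel().astype(float)
    b = img[:, 1:].ravel().astype(float)
    return np.corrcoef(a, b)[0, 1]

print("plain-image adjacent-pixel correlation :", round(adjacent_correlation(plain), 4))
print("cipher-image adjacent-pixel correlation:", round(adjacent_correlation(cipher), 4))
# A secure image cipher drives the cipher-image correlation close to zero.
```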

Keywords: Block cipher, Image encryption, Encryption quality, and Security analysis.

PDF Downloads: 2383
870 Chewing Behavior and Bolus Properties as Affected by Different Rice Types

Authors: Anuchita Moongngarm, John E. Bronlund, Nigel Grigg, Naruemon Sriwai

Abstract:

The study aimed to investigate the effect of rice type on chewing behaviours (chewing time, number of chews, and portion size) and bolus properties (bolus moisture content, solid loss, and particle size distribution (PSD)) in human subjects. Five cooked rice types, including brown rice (BR), white rice (WR), parboiled white rice (PR), high-amylose white rice (HR), and waxy white rice (WXR), were chewed by six subjects. The chewing behaviours were recorded and the food boluses were collected during mastication. Rice type was found to significantly influence all chewing parameters evaluated. WXR and BR showed the most pronounced differences compared with the other rice types. The initial moisture content of un-chewed WXR was the lowest (43.39%), whereas those of the other rice types ranged from 66.86 to 70.33%. The bolus obtained from chewing WXR had the lowest moisture content (56.43%), whilst its solid loss (22.03%) was not significantly different from those of the other rice types. In the PSD evaluation using a Mastersizer S, the diameters of the measured particles ranged from 4 to 3500 μm. The boluses from BR, HR, and WXR contained much finer particles than those from WR and PR.

Keywords: Chewing behavior, Mastication, Rice, Rice types, Bolus properties

PDF Downloads: 1804
869 Mooring Analysis of Duct-Type Tidal Current Power System in Shallow Water

Authors: Chul H. Jo, Do Y. Kim, Bong K. Cho, Myeong J. Kim

Abstract:

The exhaustion of oil and the environmental pollution caused by the use of fossil fuels are increasing. Tidal current power (TCP) has been proposed as an alternative energy source because of its predictability and reliability. By applying a duct and a single point mooring (SPM) system, a TCP device can amplify the generated power and keep its position properly. Because the generated power is proportional to the cube of the current stream velocity, amplifying the current speed by applying a duct to a TCP system is an effective way to improve the efficiency of the power device. An SPM system can be applied at any water depth and is highly cost effective. Simple installation and maintenance procedures are further merits of an SPM system. In this study, we designed an SPM system for a duct-type TCP device for use in shallow water. Motions of the duct are investigated to obtain the response amplitude operator (RAO) as the magnitude of the transfer function. Parameters affecting the stability of the SPM system, such as the fairlead departure angle, the current velocity, and the number of clamp weights, are analyzed and/or optimized. The commercial software packages Wadam and OrcaFlex are used to design the mooring line.

Keywords: Mooring design, parametric analysis, response amplitude operator, single point mooring.

PDF Downloads: 2152
868 Comparison of Different k-NN Models for Speed Prediction in an Urban Traffic Network

Authors: Seyoung Kim, Jeongmin Kim, Kwang Ryel Ryu

Abstract:

We consider a database that records average traffic speeds measured at five-minute intervals for all the links in the traffic network of a metropolitan city. While models learned from these data that can predict future traffic speed would benefit applications such as car navigation systems, building a predictive model for every link becomes a nontrivial job if the number of links in a given network is huge. An advantage of adopting k-nearest neighbor (k-NN) as the predictive model is that it does not require any explicit model building. On the other hand, k-NN takes a long time to make a prediction because it needs to search for the k nearest neighbors in the database at prediction time. In this paper, we investigate how much we can speed up k-NN in making traffic speed predictions by reducing the amount of data to be searched, without a significant sacrifice of prediction accuracy. The rationale behind this is that it may suffice to look only at recent data, because traffic patterns not only repeat daily or weekly but also change over time. In our experiments, we build several different k-NN models employing different sets of features, namely the current and past traffic speeds of the target link and of the neighboring links up- and downstream of it. The performance of these models is compared by measuring the average prediction accuracy and the average time taken to make a prediction using various amounts of data.
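
The trade-off can be sketched with scikit-learn's k-NN regressor on synthetic five-minute speeds: one model searches the full history, the other only a recent window, and both are compared on accuracy and prediction time. The traffic series, lag features, and window length are invented assumptions, not the metropolitan database or feature sets of the paper.

```python
import time
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)

# Synthetic 5-minute speeds for a target link and one neighboring link:
# a daily pattern (288 intervals/day) plus noise, over 120 days.
t = np.arange(288 * 120)
target = 50 + 15 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 3, t.size)
neighbor = np.roll(target, 3) + rng.normal(0, 2, t.size)

lags = 4  # features: the last 4 intervals of both links
feats = ([np.roll(target, k) for k in range(1, lags + 1)] +
         [np.roll(neighbor, k) for k in range(1, lags + 1)])
X, y = np.column_stack(feats)[lags:], target[lags:]

split = len(y) - 288                       # hold out the last day for testing
X_tr_full, y_tr_full = X[:split], y[:split]
X_te, y_te = X[split:], y[split:]

for name, (X_tr, y_tr) in {
    "full history":        (X_tr_full, y_tr_full),
    "recent 14 days only": (X_tr_full[-288 * 14:], y_tr_full[-288 * 14:]),
}.items():
    knn = KNeighborsRegressor(n_neighbors=10).fit(X_tr, y_tr)
    start = time.perf_counter()
    pred = knn.predict(X_te)
    elapsed = time.perf_counter() - start
    mae = np.abs(pred - y_te).mean()
    print(f"{name:21s} MAE = {mae:.2f} km/h, prediction time = {elapsed*1000:.1f} ms")
```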

Keywords: Big data, k-NN, machine learning, traffic speed prediction.

PDF Downloads: 1353
867 A Grey-Fuzzy Controller for Optimization Technique in Wireless Networks

Authors: Yao-Tien Wang, Hsiang-Fu Yu, Dung Chen Chiou

Abstract:

Progress in wireless and mobile communications provides opportunities for introducing new standards and improving existing services, and supporting multimedia traffic over wireless networks requires quality of service (QoS) guarantees. In this paper, a grey-fuzzy controller for radio resource management (GF-RRM) is presented to maximize the number of served calls and the QoS provision in wireless networks. In a wireless network, the call arrival rate, the call duration, and the communication overhead between the base stations and the control center are vague and uncertain. We develop a method to predict the cell load and to solve the RRM problem based on the GF-RRM, built at the application level of the wireless network. The GF-RRM exhibits better adaptability, fault tolerance, and performance than other algorithms. Through simulations, we evaluate the blocking rate, update overhead, and channel acquisition delay time of the proposed method. The results demonstrate that our algorithm has a lower blocking rate, less update overhead, and a shorter channel acquisition delay.
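
The grey-prediction half of such a controller is typically a GM(1,1) model; the sketch below implements the classic GM(1,1) fitting and a one-step forecast on an invented per-interval cell-load series. The fuzzy-control half and the RRM decision logic of the paper are not shown.

```python
import numpy as np

def gm11_forecast(x0, steps=1):
    """Classic GM(1,1) grey model: fit on series x0, forecast `steps` ahead."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                                  # accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])                       # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]         # developing & grey input coeffs
    n = len(x0)
    k = np.arange(n + steps)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a   # fitted accumulated series
    x0_hat = np.diff(x1_hat, prepend=x1_hat[0])         # restore by differencing
    x0_hat[0] = x0[0]
    return x0_hat[-steps:]

# Invented per-interval cell load (e.g., offered calls); forecast the next interval.
load = [112, 118, 127, 131, 140, 146, 155]
print("predicted next-interval load:", round(float(gm11_forecast(load)[0]), 1))
```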

Keywords: Radio resource management, grey prediction, fuzzy logic control, wireless networks, quality of service.

PDF Downloads: 1697
866 Behavior Analysis Based On Nine Degrees-of-Freedom Sensor for Emergency Rescue Evacuation Support System

Authors: Maeng-Hwan Hyun, Dae-Man Do, Young-Bok Choi

Abstract:

Around the world, there are frequent incidents of natural disasters, such as earthquakes, tsunamis, floods, and snowstorms, as well as man-made disasters such as fires, arson, and acts of terror. These diverse and unpredictable adversities have resulted in a number of fatalities and injuries. If disaster occurrence can be assessed quickly, and information such as the exact location of the disaster and evacuation routes can be provided, victims can promptly move to safe locations, minimizing losses. This paper proposes a behavior analysis method based on a nine degrees-of-freedom (9-DOF) sensor that is effective for the emergency rescue evacuation support system (ERESS), which is being researched with the objective of providing evacuation support during disasters. Based on experiments performed using the acceleration sensor and the gyroscope sensor of the 9-DOF sensor, data are analyzed for human behavior in a stationary position, while walking, while running, and during an emergency situation, to suggest guidelines for the system's judgment. Using the results of the experiments performed to determine disaster occurrence, it was confirmed that the proposed method quickly determines whether a disaster has occurred.

Keywords: Behavior Analysis, Nine degrees-of-freedom sensor, Emergency rescue.

PDF Downloads: 1664
865 Identification of Author and Reviewer from Single and Double Blind Paper

Authors: Jatinderkumar R. Saini, Nikita R. Sonthalia, Khushbu A. Dodiya

Abstract:

Research leads to the development of science and technology, and hence to the betterment of humankind. Journals and conferences provide a platform to receive a large number of research papers for publication and presentation before the expert and peer-level scientific community. To assure the quality of such papers, they are also sent to reviewers for their comments. To maintain good ethical standards, the research papers are sent to reviewers in such a way that authors and reviewers do not know each other's identity. This technique is called the double-blind review process. It is called the single-blind review process if the identity of only one party, generally the authors', is disclosed to the other. This paper presents techniques by which the identity of the author as well as the reviewer can be found even under a double-blind review process. It is proposed that the characteristics and techniques presented here will help journals and conferences guard against intentional or unintentional disclosure of identity-revealing information by either party.

Keywords: Author, Conference, Double Blind Paper, Journal, Reviewer, Single Blind Paper.

PDF Downloads: 2421
864 Exploration of Autistic Children using Case Based Reasoning System with Cognitive Map

Authors: Ebtehal Alawi Alsaggaf, Shehab A. Gamalel-Din

Abstract:

Assessing an autistic child in elementary school is a difficult task that must be fully thought out, and teachers should be aware of the many challenges they face in raising such a child, especially the behavioral problems of autistic children. Hence, there is a need to develop contemporary Artificial Intelligence (AI) techniques to help diagnose and discover autistic children. In this research, we propose an expert system architecture that combines Cognitive Maps (CM) with the Case Based Reasoning technique (CBR) in order to reduce the time and costs of the traditional diagnosis process and to support the early detection of autistic children. The teacher enters the child's information, which is analyzed by the CM module. Then, the reasoning processor translates the output into a case, and the CBR module solves the current problem. We will implement a prototype of the model as a proof of concept using Java and MySQL. This provides a new hybrid approach that will achieve new synergies and improve problem-solving capabilities in AI. We predict that it will reduce time, costs, and the number of human errors, and make expertise available to more people who want to serve autistic children and their families.

Keywords: Autism, Cognitive Maps (CM), Case Based Reasoning technique (CBR).

PDF Downloads: 1935
863 Fingerprint Compression Using Contourlet Transform and Multistage Vector Quantization

Authors: S. Esakkirajan, T. Veerakumar, V. Senthil Murugan, R. Sudhakar

Abstract:

This paper presents a new fingerprint coding technique based on the contourlet transform and multistage vector quantization. Wavelets have shown their ability to represent natural images that contain smooth areas separated by edges. However, wavelets cannot efficiently take advantage of the fact that the edges usually found in fingerprints are smooth curves. This issue is addressed by directional transforms, known as contourlets, which have the property of preserving edges. The contourlet transform is a new extension of the wavelet transform in two dimensions using nonseparable and directional filter banks. The computation and storage requirements are the major difficulty in implementing a vector quantizer. In the full-search algorithm, the computation and storage complexity is an exponential function of the number of bits used in quantizing each frame of spectral information. The storage requirement in multistage vector quantization is smaller than that of full-search vector quantization. The coefficients of the contourlet transform are quantized by multistage vector quantization, and the quantized coefficients are encoded by Huffman coding. The results obtained are tabulated and compared with existing wavelet-based ones.
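
The storage argument can be illustrated with a minimal two-stage (residual) vector quantizer trained with k-means: each stage quantizes the residual of the previous one, so two codebooks of 16 vectors address 256 effective cells while storing only 32 codevectors. The random 16-dimensional vectors below stand in for contourlet coefficients, and the contourlet decomposition and Huffman coding steps are omitted.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-in for contourlet coefficients: 4x4 blocks flattened to 16-D vectors.
vectors = rng.normal(0, 1, (2000, 16))

def train_multistage_vq(train, n_stages=2, codebook_size=16):
    """Train a residual (multistage) VQ: each stage quantizes the previous residual."""
    codebooks, residual = [], train.copy()
    for _ in range(n_stages):
        km = KMeans(n_clusters=codebook_size, n_init=10, random_state=0).fit(residual)
        codebooks.append(km.cluster_centers_)
        residual = residual - km.cluster_centers_[km.labels_]
    return codebooks

def encode(x, codebooks):
    indices, residual = [], x.copy()
    for cb in codebooks:
        idx = np.argmin(((residual[:, None, :] - cb[None]) ** 2).sum(-1), axis=1)
        indices.append(idx)
        residual = residual - cb[idx]
    return indices

def decode(indices, codebooks):
    return sum(cb[idx] for idx, cb in zip(indices, codebooks))

codebooks = train_multistage_vq(vectors)
rec = decode(encode(vectors, codebooks), codebooks)
mse = float(((vectors - rec) ** 2).mean())

# Two 16-word stages index 16*16 = 256 effective cells but store only 32
# codevectors, versus 256 for an equivalent single-stage full-search VQ.
print("reconstruction MSE:", round(mse, 4), "| stored codevectors:", 16 + 16)
```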

Keywords: Contourlet Transform, Directional Filter bank, Laplacian Pyramid, Multistage Vector Quantization

PDF Downloads: 1982
862 Double Diffusive Convection in a Partially Porous Cavity under Suction/Injection Effects

Authors: Y. Outaleb, K. Bouhadef, O. Rahli

Abstract:

Double-diffusive steady convection in a partially porous cavity with partially permeable walls, under the combined buoyancy effects of thermal and mass diffusion, was analysed numerically using the finite volume method. The top wall is well insulated and impermeable, while the bottom surface is partially insulated and impermeable and partially subjected to a constant temperature T1 and concentration C1. A constant temperature T2 and concentration C2 are imposed along the vertical surfaces of the enclosure. Mass suction/injection and injection/suction are considered at the bottom of the centred porous partition and at one of the vertical walls, respectively. Heat and mass transfer characteristics, such as streamlines and average Nusselt and Sherwood numbers, are discussed for different values of the buoyancy ratio, the Rayleigh number, and the injection/suction coefficient. In particular, increasing the injection coefficient reduces the exchanges in the case of injection, while transfer is enhanced in the case of suction. On the other hand, a critical value of the buoyancy ratio is highlighted for which heat and mass transfer are minimized.

Keywords: Double diffusive convection, Injection/Extraction, Partially porous cavity

PDF Downloads: 1538
861 Assertion-Driven Test Repair Based on Priority Criteria

Authors: Ruilian Zhao, Shukai Zhang, Yan Wang, Weiwei Wang

Abstract:

Repairing broken test cases is an expensive and challenging task in evolving software systems. Although an automated repair technique with intent preservation has been proposed, it does not take into account the association between test repairs and assertions, leading to a large number of irrelevant candidates and decreasing the repair capability. This paper proposes an assertion-driven test repair approach. Furthermore, an intent-oriented priority criterion is introduced to guide repair candidate generation, making the repairs closer to the intent of the test. In more detail, repair targets are determined through post-dominance relations between assertions and the methods that directly cause compilation errors. Then, test repairs are generated from the target in a bottom-up way, guided by the intent-oriented priority criteria. Finally, the generated repair candidates are prioritized to match the original test intent. The approach is implemented and evaluated on a benchmark of 4 open-source programs and 91 broken test cases. The results show that the approach can fix 89% (81/91) of the broken test cases, which is more effective than the existing intent-preserving test repair approach, and that our intent-oriented priority criteria work well.

Keywords: Test repair, test intent, software test, test case evolution.

PDF Downloads: 93
860 Perturbed-Chain Statistical Association Fluid Theory (PC-SAFT) Parameters for Propane, Ethylene, and Hydrogen under Supercritical Conditions

Authors: Ilke Senol

Abstract:

The Perturbed-Chain Statistical Association Fluid Theory (PC-SAFT) equation of state (EOS) is a modified SAFT EOS with three pure-component-specific parameters: segment number (m), diameter (σ), and energy (ε). These PC-SAFT parameters need to be determined for each component under the conditions of interest by fitting experimental data, such as vapor pressure, density, or heat capacity. PC-SAFT parameters for propane, ethylene, and hydrogen in the supercritical region were successfully estimated by fitting experimental density data available in the literature. The regressed PC-SAFT parameters were compared with the literature values by estimating the pure-component density and calculating the average absolute deviation between the estimated and experimental density values. The PC-SAFT parameters available in the literature, especially those for ethylene and hydrogen, estimated the density in the supercritical region reasonably well. However, the regressed PC-SAFT parameters performed better in the supercritical region than the PC-SAFT parameters from the literature.

Keywords: Equation of state, perturbed-chain, PC-SAFT, supercritical.

PDF Downloads: 6945
859 Numerical Simulations of Electronic Cooling with In-Line and Staggered Pin Fin Heat Sinks

Authors: Yue-Tzu Yang, Hsiang-Wen Tang, Jian-Zhang Yin, Chao-Han Wu

Abstract:

Three-dimensional incompressible turbulent fluid flow and heat transfer in pin fin heat sinks using air as a cooling fluid are studied numerically. Two different kinds of pin fins are compared in terms of thermal performance, with circular and square cross sections, in both in-line and staggered arrangements. The turbulent governing equations are solved using a control-volume-based finite-difference method. Numerical computations are then performed with the realizable k-ε turbulence model for the parameters studied: the fin height H, fin diameter D, and Reynolds number (Re), in the ranges 7 ≤ H ≤ 10, 0.75 ≤ D ≤ 2, and 2000 ≤ Re ≤ 126000, respectively. The numerical results are validated against available experimental data in the literature and good agreement is found. The results indicate that circular pin fins are more streamlined than square pin fins: their pressure drop is smaller, but their heat transfer is not as good as that of the square pin fins. The thermal performance of the staggered pin fins is better than that of the in-line pin fins because the staggered arrangement produces larger disturbances. Both in-line and staggered arrangements show the same behavior for thermal resistance, pressure drop, and entropy generation.

Keywords: Pin-fin, heat sinks, simulations, turbulent flow.

PDF Downloads: 1234
858 A New Approach to Face Recognition Using Dual Dimension Reduction

Authors: M. Almas Anjum, M. Younus Javed, A. Basit

Abstract:

In this paper, a new approach to face recognition is presented that achieves a double dimension reduction, making the system computationally efficient, giving better recognition results, and outperforming the common DCT technique of face recognition. In pattern recognition techniques, the discriminative information of an image increases with resolution up to a certain extent; consequently, face recognition results change with the face image resolution and are optimal at a certain resolution level. In the proposed model of face recognition, an image decimation algorithm is first applied to the face image for dimension reduction to the resolution level that provides the best recognition results. Owing to its computational speed and feature extraction potential, the Discrete Cosine Transform (DCT) is then applied to the face image. A subset of DCT coefficients from low to mid frequencies that represents the face adequately and provides the best recognition results is retained. A trade-off between the decimation factor, the number of DCT coefficients retained, and the recognition rate with minimum computation is obtained. Preprocessing of the image is carried out to increase its robustness against variations in pose and illumination level. This new model has been tested on different databases, which include the ORL, Yale, and EME color databases.
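
The dual reduction can be sketched as spatial decimation followed by a 2-D DCT whose low-frequency block is kept as the feature vector, with a nearest-neighbor match against a gallery. The synthetic "faces", the decimation factor, and the retained block size below are illustrative assumptions rather than the ORL/Yale/EME setup of the paper.

```python
import numpy as np
from scipy.fft import dctn

rng = np.random.default_rng(0)

def make_face(person_seed, size=64):
    """Synthetic stand-in for a face image: person-specific smooth pattern + noise."""
    r = np.random.default_rng(person_seed)
    base = r.normal(0, 1, (8, 8))
    img = np.kron(base, np.ones((size // 8, size // 8)))   # low-frequency identity
    return img + rng.normal(0, 0.3, (size, size))          # acquisition noise

def features(img, decimate=2, keep=8):
    """Dual dimension reduction: spatial decimation, then a low-frequency DCT block."""
    small = img[::decimate, ::decimate]                     # first reduction
    coeffs = dctn(small, norm="ortho")                      # 2-D DCT
    return coeffs[:keep, :keep].ravel()                     # second reduction

# Gallery: 5 people x 3 images; probe: one new image per person.
gallery, labels = [], []
for person in range(5):
    for _ in range(3):
        gallery.append(features(make_face(person)))
        labels.append(person)
gallery = np.array(gallery)

correct = 0
for person in range(5):
    probe = features(make_face(person))
    nearest = np.argmin(((gallery - probe) ** 2).sum(axis=1))
    correct += labels[nearest] == person
print(f"recognition rate on synthetic probes: {correct}/5")
```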

Keywords: Biometrics, DCT, Face Recognition, Illumination, Computation, Feature extraction.

PDF Downloads: 1660
857 Survey of Access Controls in Cloud Computing

Authors: Monirah Alkathiry, Hanan Aljarwan

Abstract:

Cloud computing is one of the most significant technologies that the world deals with, in different sectors, with different purposes and capabilities. The cloud faces various challenges in securing data from unauthorized access or modification. Consequently, security risks and levels have greatly increased. Therefore, cloud service providers (CSPs) and users need secure mechanisms that ensure that data are kept secret and safe from any disclosure or exploit. For this reason, CSPs need a number of techniques and technologies to manage and secure access to cloud services and to achieve security goals, such as confidentiality, integrity, identity and access management (IAM), etc. This paper therefore reviews and explores various access controls implemented in a cloud environment that achieve different security purposes. The methodology followed in this survey was to assess, evaluate, and compare these access control mechanisms and technologies based on different factors, such as the security goals they achieve, usability, and cost-effectiveness. This assessment showed that the technology used in an access control affects the security goals it achieves, and that there is no single access control method that achieves all security goals. Consequently, such a comparison helps decision-makers choose properly the access controls that meet their requirements.

Keywords: Access controls, cloud computing, confidentiality, identity and access management.

PDF Downloads: 696
856 On the Network Packet Loss Tolerance of SVM Based Activity Recognition

Authors: Gamze Uslu, Sebnem Baydere, Alper K. Demir

Abstract:

In this study, the data loss tolerance of a Support Vector Machine (SVM) based activity recognition model, and its multi-activity classification performance when data are received over a lossy wireless sensor network, are examined. Initially, the classification algorithm we use is evaluated in terms of resilience to random data loss with 3D acceleration sensor data for sitting, lying, walking, and standing actions. The results show that the proposed classification method can recognize these activities successfully despite high data loss. Secondly, the effect of differentiated quality-of-service performance on activity recognition success is measured with activity data acquired from a multi-hop wireless sensor network, which introduces high data loss. The effect of the number of nodes on the reliability and the multi-activity classification success is demonstrated in a simulation environment. To the best of our knowledge, the effect of data loss in a wireless sensor network on the activity detection success rate of an SVM-based classification algorithm has not been studied before.
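
The first experiment can be mimicked with scikit-learn's SVC on synthetic 3-D acceleration windows: the classifier is trained on clean windows and then evaluated while a growing fraction of samples in each test window is dropped at random. The activity profiles, the mean/standard-deviation features, and the loss model are simplified assumptions, not the sensor data or QoS setup of the paper.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
ACTIVITIES = ["sitting", "lying", "walking", "standing"]

def raw_windows(activity, n=150, length=50):
    """Synthetic 3-D acceleration windows for one activity (illustrative profiles)."""
    profiles = {"sitting": (0.02, 0.0), "lying": (0.02, 0.6),
                "walking": (0.60, 0.0), "standing": (0.06, 0.0)}
    noise, tilt = profiles[activity]
    return rng.normal([tilt, 0.0, 1.0], noise, (n, length, 3))   # gravity on z

def features(win):
    """Per-axis mean and standard deviation of whatever samples survived."""
    return np.concatenate([win.mean(axis=0), win.std(axis=0)])

X_tr = np.array([features(w) for a in ACTIVITIES for w in raw_windows(a)])
y_tr = np.repeat(np.arange(len(ACTIVITIES)), 150)
clf = SVC(kernel="rbf", C=10).fit(X_tr, y_tr)

# Evaluate with random sample loss inside each test window (simulated packet loss).
for loss in (0.0, 0.3, 0.6, 0.8):
    correct = total = 0
    for label, a in enumerate(ACTIVITIES):
        for w in raw_windows(a, n=40):
            keep = rng.random(len(w)) >= loss
            if keep.sum() < 2:
                continue
            correct += clf.predict([features(w[keep])])[0] == label
            total += 1
    print(f"sample loss {int(loss*100):2d}% -> accuracy {correct/total:.2f}")
```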

Keywords: Activity recognition, support vector machines, acceleration sensor, wireless sensor networks, packet loss.

PDF Downloads: 2845
855 Types of Epilepsies and Findings EEG-LORETA about Epilepsy

Authors: Leila Maleki, Ahmad Esmali Kooraneh, Hossein Taghi Derakhshi

Abstract:

Neural activity in the human brain starts from the early stages of prenatal development. This activity, or the signals generated by the brain, is electrical in nature and represents not only brain function but also the status of the whole body. At present, three methods can record functional and physiological changes within the brain with high temporal resolution of neuronal interactions at the network level: the electroencephalogram (EEG), the magnetoencephalogram (MEG), and functional magnetic resonance imaging (fMRI); each of these has advantages and shortcomings. EEG recording with a large number of electrodes is now feasible in clinical practice. Multichannel EEG recorded from the scalp surface provides very valuable but indirect information about the source distribution. However, deep electrode measurements yield more reliable information about the source locations. Intracranial recordings and scalp EEG are used with source imaging techniques to determine the locations and strengths of epileptic activity. As a source localization method, Low Resolution Electro-Magnetic Tomography (LORETA) is solved for the realistic geometry based on both forward methods, the Boundary Element Method (BEM) and the Finite Difference Method (FDM). In this paper, we review the EEG-LORETA findings about epilepsy.

Keywords: Epilepsy, EEG, EEG-LORETA, LORETA analysis.

PDF Downloads: 3061
854 Computer Aided Design of Reshaping Process of Circular Pipes into Square Pipes

Authors: Parviz Alinezhad, Ali Sanati, Koorosh Naser Momtahen

Abstract:

Square pipes (pipes with square cross sections) are used for various industrial purposes, such as machine structure components and housing/building elements. Their utilization is expanding rapidly and widely. Hence, the output of these pipes is increasing and new application fields are continually developing. Owing to various recent demands, the products have to satisfy difficult specifications with high dimensional accuracy. The design of the reshaping process for pipes with square cross sections, however, is performed by trial and error, based on experts' experience. In this paper, a computer-aided simulation is developed based on the 2-D elastic-plastic method, with consideration of shear deformation, to analyze the reshaping process. The effects of various parameters, such as the diameter of the circular pipe and the mechanical properties of the metal, on product dimensions and quality can be evaluated by using this simulation. Moreover, aspects of the reshaping process design, including the determination of the cross-section shrinkage, the necessary number of stands, and the roll radius and pipe height at each stand, are investigated. Further, it is shown that there is good agreement between the results of the design method and the experimental results.

Keywords: Circular Pipes, Square Pipes, Shear Deformation, Reshaping Process, Numerical Simulation.

PDF Downloads: 1371
853 A Multigrid Approach for Three-Dimensional Inverse Heat Conduction Problems

Authors: Jianhua Zhou, Yuwen Zhang

Abstract:

A two-step multigrid approach is proposed to solve the inverse heat conduction problem in a 3-D object under laser irradiation. In the first step, the location of the laser center is estimated using a coarse and uniform grid system. In the second step, the front-surface temperature is recovered with good accuracy using a multiple-grid system in which a fine mesh is used at the laser spot center to capture the drastic temperature rise in this region, while a coarse mesh is employed in the peripheral region to reduce the total number of sensors required. The effectiveness of the two-step approach and the multiple-grid system is demonstrated by the illustrative inverse solutions. If the measurement data for the temperature and heat flux on the back surface do not contain random error, the proposed multigrid approach can yield more accurate inverse solutions. When the back-surface measurement data contain random noise, accurate inverse solutions cannot be obtained if both temperature and heat flux are measured on the back surface.

Keywords: Conduction, inverse problems, conjugated gradient method, laser.

PDF Downloads: 812
852 Metaphorical Perceptions of Middle School Students Regarding Computer Games

Authors: Ismail Celik, Ismail Sahin, Fetah Eren

Abstract:

The computer, among the most important inventions of the twentieth century, has become an increasingly important component of our everyday lives. Computer games, too, have become increasingly popular day by day, owing to their realistic virtual environments, their audio and visual features, and the roles they offer players. In the present study, the metaphors students have for computer games are investigated, in an effort to fill a gap in the literature. Students were asked to complete the sentence—'Computer game is like/similar to….because….'—to determine middle school students' metaphorical images of the concept 'computer game'. The metaphors created by the students were grouped into six categories, based on the source of the metaphor. These categories were ordered as 'computer game as a means of entertainment', 'computer game as a beneficial means', 'computer game as a basic need', 'computer game as a source of evil', 'computer game as a means of withdrawal', and 'computer game as a source of addiction', according to the number of metaphors they included.

Keywords: Computer game, metaphor, middle school students.

PDF Downloads: 1538
851 Educational Data Mining: The Case of Department of Mathematics and Computing in the Period 2009-2018

Authors: M. Sitoe, O. Zacarias

Abstract:

University education is influenced by several factors, ranging from the adoption of strategies to strengthen the whole process to the improvement of the students' own academic performance. This work uses data mining techniques to develop a predictive model to identify students with a tendency toward evasion and retention. To this end, a database of real student data from the Department of University Admission (DAU) and the Department of Mathematics and Informatics (DMI) was used. The data comprised 388 undergraduate students admitted in the years 2009 to 2014. The Weka tool was used for model building, using three different techniques, namely k-nearest neighbor, random forest, and logistic regression. To allow for training on multiple train-test splits, a cross-validation approach was employed with a varying number of folds. To reduce bias and variance and improve the performance of the models, the ensemble methods of Bagging and Stacking were used. After comparing the results obtained by the three classifiers, Logistic Regression using Bagging with seven folds obtained the best performance, showing results above 90% in all evaluated metrics: accuracy, rate of true positives, and precision. Retention is the most common tendency.
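
An analogous pipeline can be sketched in scikit-learn rather than Weka: a bagged logistic-regression classifier evaluated with 7-fold cross-validation on accuracy, recall (true positive rate), and precision. The student features and the label rule below are invented stand-ins for the DAU/DMI records, so the numbers printed are not the paper's results.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Invented stand-in for the 388 admission/performance records (not the real data).
n = 388
admission_score = rng.normal(14, 3, n)
first_year_avg = rng.normal(11, 3, n)
failed_courses = rng.poisson(2, n)
X = np.column_stack([admission_score, first_year_avg, failed_courses])
# Label: 1 = evasion/retention risk, 0 = regular progression (synthetic rule).
y = ((first_year_avg < 10) | (failed_courses > 3)).astype(int)

model = BaggingClassifier(
    estimator=make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    n_estimators=30, random_state=0)

scores = cross_validate(model, X, y, cv=7,
                        scoring=("accuracy", "recall", "precision"))
for metric in ("accuracy", "recall", "precision"):
    vals = scores[f"test_{metric}"]
    print(f"{metric:9s}: {vals.mean():.3f} +/- {vals.std():.3f}")
```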

Keywords: Evasion and retention, cross validation, bagging, stacking.

PDF Downloads: 71