Search results for: triangular fuzzy number.
912 Generational Pipelined Genetic Algorithm (PLGA) Using Stochastic Selection
Authors: Malay K. Pakhira, Rajat K. De
Abstract:
In this paper, a pipelined version of the genetic algorithm, called PLGA, and a corresponding hardware platform are described. The basic operations of the conventional GA (CGA) are pipelined using an appropriate selection scheme. The selection operator used here is stochastic in nature and is called SA-selection. This helps maintain the basic generational nature of the proposed pipelined GA (PLGA). A number of benchmark problems are used to compare the performance of conventional roulette-wheel selection and SA-selection. These include unimodal and multimodal functions with dimensionality varying from very small to very large. The SA-selection scheme gives performance comparable to the classical roulette-wheel selection scheme on all instances when solution quality and rate of convergence are considered. The speedups obtained by PLGA on the different benchmarks are found to be significant. It is shown that a complete hardware pipeline can be developed using the proposed scheme if parallel evaluation of the fitness expression is possible. In this connection, a low-cost but very fast hardware evaluation unit is described. Results of simulation experiments show that in a pipelined hardware environment, PLGA will be much faster than CGA. In terms of efficiency, PLGA is also found to outperform the parallel GA (PGA).
Keywords: Hardware evaluation, Hardware pipeline, Optimization, Pipelined genetic algorithm, SA-selection.
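The abstract does not spell out the SA-selection operator, so the following Python sketch only illustrates a generic simulated-annealing-style (Boltzmann) stochastic selection, in which a temperature parameter controls selection pressure; the function name, fitness values, and temperature are assumptions for illustration, not the paper's implementation.

```python
import math
import random

def boltzmann_select(population, fitness, temperature):
    """Pick one individual with probability proportional to exp(fitness / T).

    A generic simulated-annealing-style stochastic selection sketch: the
    selection pressure rises as the temperature is lowered over generations.
    """
    weights = [math.exp(f / temperature) for f in fitness]
    total = sum(weights)
    r = random.uniform(0.0, total)
    acc = 0.0
    for individual, w in zip(population, weights):
        acc += w
        if acc >= r:
            return individual
    return population[-1]

# Usage: select a parent at a given annealing temperature (illustrative values).
pop = ["a", "b", "c", "d"]
fit = [1.0, 2.0, 3.0, 4.0]
parent = boltzmann_select(pop, fit, temperature=0.5)
```

High temperatures approach uniform random selection, while low temperatures behave almost greedily, which is the usual trade-off such a stochastic operator tunes.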
911 Measuring the Performance of the Accident Reductions: Evidence from Izmir City
Authors: Y. Duvarci, S. Mizokami
Abstract:
Traffic enforcement units (the Police) are partly responsible for the severity and frequency of traffic accidents through the effectiveness of their safety measures. The Police claim that reductions in accidents and their severities occur largely because of their timely interventions at black spots, through traffic management or temporary changes in road design (guiding, reducing speeds, eliminating sight obstructions, etc.). Yet external factors other than the Police measures may also intervene, so such claims require statistical confirmation. To test the net impact of the Police contribution to the reduction in the number of crashes, the Chi-square test was applied to 25 spots (streets and intersections), and an average evaluation was carried out to reach a general conclusion for the case study of Izmir city. Separately, the net impact of the economic crisis on the reduction in crashes was assessed by trend analysis, since the crisis may have reduced trip generation or shifted modal choice. Finally, it was shown that the Police measures were effective to some degree, as claimed, while the economic crisis made only a negligible contribution to the reductions observed in the same period.
Keywords: Road safety, Police, Enforcement units, Chi-square test, Economic impact.
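As a hedged illustration of the kind of statistical check described (not the paper's actual data or exact test layout), the sketch below applies SciPy's chi-square test of independence to hypothetical before/after crash counts at two spots.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical before/after accident counts at two example spots; the paper
# applied the test to 25 streets and intersections in Izmir.
counts = np.array([[34, 18],   # spot 1: before, after intervention
                   [27, 25]])  # spot 2: before, after intervention

chi2, p_value, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}, dof = {dof}")
```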
910 Encryption Efficiency Analysis and Security Evaluation of RC6 Block Cipher for Digital Images
Authors: Hossam El-din H. Ahmed, Hamdy M. Kalash, Osama S. Farag Allah
Abstract:
This paper investigates the encryption efficiency of the RC6 block cipher applied to digital images, providing a new mathematical measure of encryption efficiency, which we call the encryption quality, to replace visual inspection. The encryption quality of the RC6 block cipher is investigated over several of its design parameters, such as word size, number of rounds, and secret key length, and the optimal choices for these design parameters are given. The security of the RC6 block cipher for digital images is also analyzed from a strict cryptographic viewpoint, and its resistance to brute-force, statistical, and differential attacks is estimated. Experiments are made to test the security of the RC6 block cipher for digital images against all the aforementioned types of attacks. The experiments and results verify that RC6 is highly secure for real-time image encryption from a cryptographic viewpoint. Thorough experimental tests are carried out with detailed analysis, demonstrating the high security of the RC6 block cipher algorithm. RC6 can therefore be considered a secure real-time symmetric encryption scheme for digital images.
Keywords: Block cipher, Image encryption, Encryption quality, and Security analysis.
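The paper defines its own encryption quality measure; the sketch below shows one common histogram-deviation formulation of such a measure, which may differ in detail from the authors' definition, applied to stand-in 8-bit images.

```python
import numpy as np

def encryption_quality(plain, cipher, levels=256):
    """One common formulation of encryption quality: the average absolute
    deviation between the gray-level histograms of the plain and cipher
    images (a hedged sketch, not necessarily the paper's exact definition)."""
    h_plain, _ = np.histogram(plain, bins=levels, range=(0, levels))
    h_cipher, _ = np.histogram(cipher, bins=levels, range=(0, levels))
    return np.abs(h_cipher - h_plain).sum() / levels

# Usage with random stand-in 8-bit images.
rng = np.random.default_rng(0)
plain = rng.integers(0, 256, size=(128, 128))
cipher = rng.integers(0, 256, size=(128, 128))
print(encryption_quality(plain, cipher))
```

A larger value indicates that the cipher image's gray-level distribution deviates more strongly from the plain image's, i.e. the visual structure is better hidden.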
909 Chewing Behavior and Bolus Properties as Affected by Different Rice Types
Authors: Anuchita Moongngarm, John E. Bronlund, Nigel Grigg, Naruemon Sriwai
Abstract:
The study aimed to investigate the effect of rice type on chewing behaviour (chewing time, number of chews, and portion size) and bolus properties (bolus moisture content, solid loss, and particle size distribution (PSD)) in human subjects. Five cooked rice types, brown rice (BR), white rice (WR), parboiled white rice (PR), high-amylose white rice (HR), and waxy white rice (WXR), were chewed by six subjects. Chewing behaviour was recorded and the food boluses were collected during mastication. Rice type was found to significantly influence all chewing parameters evaluated, with WXR and BR showing the most pronounced differences compared with the other rice types. The initial moisture content of un-chewed WXR was the lowest (43.39%), whereas those of the other rice types ranged from 66.86 to 70.33%. The bolus obtained from chewing WXR had the lowest moisture content (56.43%), whilst its solid loss (22.03%) was not significantly different from those of the other rice types. In the PSD evaluation using a Mastersizer S, the diameter of the measured particles ranged from 4 to 3500 μm. The boluses from BR, HR, and WXR contained much finer particles than those from WR and PR.
Keywords: Chewing behavior, Mastication, Rice, Rice types, Bolus properties
908 Mooring Analysis of Duct-Type Tidal Current Power System in Shallow Water
Authors: Chul H. Jo, Do Y. Kim, Bong K. Cho, Myeong J. Kim
Abstract:
The depletion of oil and the environmental pollution caused by the use of fossil fuels are increasing. Tidal current power (TCP) has been proposed as an alternative energy source because of its predictability and reliability. By applying a duct and a single point mooring (SPM) system, a TCP device can amplify the generated power and keep its position properly. Because the generated power is proportional to the cube of the current velocity, amplifying the current speed by applying a duct to a TCP system is an effective way to improve the efficiency of the power device. An SPM system can be applied at any water depth and is highly cost effective; simple installation and maintenance procedures are further merits. In this study, we designed an SPM system for a duct-type TCP device for use in shallow water. Motions of the duct are investigated to obtain the response amplitude operator (RAO) as the magnitude of the transfer function. Parameters affecting the stability of the SPM system, such as the fairlead departure angle, the current velocity, and the number of clamp weights, are analyzed and/or optimized. The commercial software Wadam and OrcaFlex are used to design the mooring line.
Keywords: Mooring design, parametric analysis, response amplitude operator, single point mooring.
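Since the generated power scales with the cube of the current velocity, even a modest duct-induced speed-up pays off. The following minimal sketch uses the generic kinetic power formula with illustrative numbers only, not the paper's device data.

```python
def tidal_power(rho, area, cp, velocity):
    """Available power P = 0.5 * rho * A * Cp * v**3: the cubic dependence on
    current velocity is why a duct that accelerates the flow is effective."""
    return 0.5 * rho * area * cp * velocity ** 3

# Example: seawater density 1025 kg/m^3, 10 m^2 rotor, power coefficient 0.35.
print(tidal_power(1025.0, 10.0, 0.35, 2.0))   # ~14.3 kW at 2.0 m/s
print(tidal_power(1025.0, 10.0, 0.35, 2.4))   # ~24.8 kW if the duct adds 20% speed
```

A 20% increase in current speed raises the available power by roughly 73%.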
907 Comparison of Different k-NN Models for Speed Prediction in an Urban Traffic Network
Authors: Seyoung Kim, Jeongmin Kim, Kwang Ryel Ryu
Abstract:
A database records the average traffic speeds measured at five-minute intervals for all the links in the traffic network of a metropolitan city. Models learned from these data that can predict future traffic speeds would benefit applications such as car navigation systems, but building a predictive model for every link becomes a nontrivial job if the number of links in a given network is huge. An advantage of adopting k-nearest neighbor (k-NN) as the predictive model is that it does not require any explicit model building. On the other hand, k-NN takes a long time to make a prediction because it needs to search for the k nearest neighbors in the database at prediction time. In this paper, we investigate how much we can speed up k-NN in making traffic speed predictions by reducing the amount of data to be searched, without a significant sacrifice of prediction accuracy. The rationale is that looking only at recent data may suffice, because traffic patterns not only repeat daily or weekly but also change over time. In our experiments, we build several different k-NN models employing different sets of features, namely the current and past traffic speeds of the target link and of the neighboring links up- and down-stream. The performance of these models is compared by measuring the average prediction accuracy and the average time taken to make a prediction using various amounts of data.
Keywords: Big data, k-NN, machine learning, traffic speed prediction.
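A minimal sketch of the idea, using scikit-learn's k-NN regressor on synthetic stand-in data; the feature layout and the "recent records only" window size are assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Hypothetical feature matrix: each row holds the current and past speeds of
# the target link and its up/down-stream neighbours; the label is the speed
# on the target link some interval ahead.
rng = np.random.default_rng(1)
X = rng.uniform(10, 90, size=(5000, 8))      # stand-in historical records
y = X[:, 0] * 0.8 + rng.normal(0, 3, 5000)   # stand-in future speed

# Restricting the search to recent records (here the last 1000) trades a
# little accuracy for a much faster neighbour search, as the abstract argues.
recent = slice(-1000, None)
model = KNeighborsRegressor(n_neighbors=5).fit(X[recent], y[recent])

query = rng.uniform(10, 90, size=(1, 8))     # current conditions on the link
print(model.predict(query))
```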
906 Behavior Analysis Based on Nine Degrees-of-Freedom Sensor for Emergency Rescue Evacuation Support System
Authors: Maeng-Hwan Hyun, Dae-Man Do, Young-Bok Choi
Abstract:
Around the world, there are frequent incidents of natural disasters, such as earthquakes, tsunamis, floods, and snowstorms, as well as man-made disasters such as fires, arson, and acts of terror. These diverse and unpredictable adversities have resulted in a large number of fatalities and injuries. If disaster occurrence can be assessed quickly and information such as the exact location of the disaster and evacuation routes can be provided, victims can promptly move to safe locations, minimizing losses. This paper proposes a behavior analysis method based on a nine degrees-of-freedom (9-DOF) sensor that is effective for the emergency rescue evacuation support system (ERESS), which is being researched with the objective of providing evacuation support during disasters. Based on experiments performed using the acceleration sensor and the gyroscope of the 9-DOF sensor, human behavior data for standing still, walking, running, and emergency situations are analyzed to suggest guidelines for the system's judgment. Using the results of the experiments performed to determine disaster occurrence, it was confirmed that the proposed method quickly determines whether a disaster has occurred.
Keywords: Behavior Analysis, Nine degrees-of-freedom sensor, Emergency rescue.
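As a hedged sketch of how acceleration data from a 9-DOF sensor can be turned into behaviour labels, the snippet below thresholds the standard deviation of the acceleration magnitude over a short window; the thresholds and window length are illustrative assumptions, whereas the paper derives its judgment guidelines experimentally.

```python
import numpy as np

def classify_window(acc_window, walk_thresh=0.5, run_thresh=2.0):
    """Crude rule-based labelling of a window of 3-axis accelerometer samples
    by the standard deviation of its magnitude (illustrative thresholds only)."""
    magnitude = np.linalg.norm(acc_window, axis=1)   # per-sample |a|
    activity = magnitude.std()
    if activity < walk_thresh:
        return "stationary"
    if activity < run_thresh:
        return "walking"
    return "running"

# Usage with a synthetic one-second window sampled at 50 Hz (gravity on z-axis).
rng = np.random.default_rng(2)
window = rng.normal([0.0, 0.0, 9.8], 0.3, size=(50, 3))
print(classify_window(window))
```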
905 Identification of Author and Reviewer from Single and Double Blind Paper
Authors: Jatinderkumar R. Saini, Nikita R. Sonthalia, Khushbu A. Dodiya
Abstract:
Research leads to the development of science and technology, and hence to the betterment of humankind. Journals and conferences provide a platform for receiving a large number of research papers for publication and presentation before the expert, peer-level scientific community. In order to assure the quality of such papers, they are also sent to reviewers for their comments. In order to maintain good ethical standards, the research papers are sent to reviewers in such a way that authors and reviewers do not know each other's identity. This technique is called the double-blind review process; it is called the single-blind review process if the identity of one party, generally the authors', is disclosed to the other. This paper presents techniques by which the identity of the author as well as the reviewer can be found even under a double-blind review process. It is proposed that the characteristics and techniques presented here will help journals and conferences guard against intentional or unintentional disclosure of identity-revealing information by either party.
Keywords: Author, Conference, Double Blind Paper, Journal, Reviewer, Single Blind Paper.
904 Exploration of Autistic Children Using Case-Based Reasoning System with Cognitive Map
Authors: Ebtehal Alawi Alsaggaf, Shehab A. Gamalel-Din
Abstract:
Exploring an autistic child in elementary school is a difficult task that must be fully thought out, and teachers should be aware of the many challenges they face, especially the behavioral problems of autistic children. Hence there arises a need to develop contemporary artificial intelligence (AI) techniques to support the diagnosis and early detection of autistic children. In this research, we propose an expert system architecture that combines cognitive maps (CM) with case-based reasoning (CBR) in order to reduce the time and cost of the traditional diagnosis process for the early detection of autistic children. The teacher enters the child's information, which is analyzed by the CM module; the reasoning processor then translates the output into a case, and the CBR module solves the current problem. We will implement a prototype of the model as a proof of concept using Java and MySQL. This provides a new hybrid approach that achieves new synergies and improves problem-solving capabilities in AI. We predict that it will reduce time, cost, and the number of human errors, and make expertise available to more people who want to serve autistic children and their families.
Keywords: Autism, Cognitive Maps (CM), Case-Based Reasoning (CBR).
903 Stability Analysis of Three-Dimensional Flow and Heat Transfer over a Permeable Shrinking Surface in a Cu-Water Nanofluid
Authors: Roslinda Nazar, Amin Noor, Khamisah Jafar, Ioan Pop
Abstract:
In this paper, the steady laminar three-dimensional boundary layer flow and heat transfer of a copper (Cu)-water nanofluid in the vicinity of a permeable shrinking flat surface in an otherwise quiescent fluid is studied. The nanofluid mathematical model in which the effect of the nanoparticle volume fraction is taken into account is considered. The governing nonlinear partial differential equations are transformed into a system of nonlinear ordinary differential equations using a similarity transformation which is then solved numerically using the function bvp4c from Matlab. Dual solutions (upper and lower branch solutions) are found for the similarity boundary layer equations for a certain range of the suction parameter. A stability analysis has been performed to show which branch solutions are stable and physically realizable. The numerical results for the skin friction coefficient and the local Nusselt number as well as the velocity and temperature profiles are obtained, presented and discussed in detail for a range of various governing parameters.
Keywords: Heat Transfer, Nanofluid, Shrinking Surface, Stability Analysis, Three-Dimensional Flow.
902 Fingerprint Compression Using Contourlet Transform and Multistage Vector Quantization
Authors: S. Esakkirajan, T. Veerakumar, V. Senthil Murugan, R. Sudhakar
Abstract:
This paper presents a new fingerprint coding technique based on the contourlet transform and multistage vector quantization. Wavelets have shown their ability to represent natural images that contain smooth areas separated by edges. However, wavelets cannot efficiently take advantage of the fact that the edges usually found in fingerprints are smooth curves. This issue is addressed by directional transforms, known as contourlets, which have the property of preserving edges. The contourlet transform is a new extension of the wavelet transform in two dimensions using nonseparable and directional filter banks. The computation and storage requirements are the major difficulty in implementing a vector quantizer: in the full-search algorithm, the computation and storage complexity is an exponential function of the number of bits used in quantizing each frame of spectral information. The storage requirement in multistage vector quantization is lower than in full-search vector quantization. The coefficients of the contourlet transform are quantized by multistage vector quantization, and the quantized coefficients are encoded by Huffman coding. The results obtained are tabulated and compared with existing wavelet-based techniques.
Keywords: Contourlet Transform, Directional Filter Bank, Laplacian Pyramid, Multistage Vector Quantization
901 A Cuckoo Search with Differential Evolution for Clustering Microarray Gene Expression Data
Authors: M. Pandi, K. Premalatha
Abstract:
DNA microarray technology is a collection of microscopic DNA spots attached to a solid surface. Scientists use DNA microarrays to measure the expression levels of large numbers of genes simultaneously or to genotype multiple regions of a genome. Elucidating the patterns hidden in gene expression data offers a tremendous opportunity for an enhanced understanding of functional genomics. However, the large number of genes and the complexity of biological networks greatly increase the challenge of comprehending and interpreting the resulting mass of data, which often consists of millions of measurements. This is handled by clustering, which reveals the natural structures and identifies the interesting patterns in the underlying data. In this paper, gene-based clustering of gene expression data using Cuckoo Search with Differential Evolution (CS-DE) is proposed. The experimental results are analyzed on gene expression benchmark datasets and show that CS-DE outperforms CS on these datasets. To validate the clustering results, this work is tested with one internal and one external cluster validation index.
Keywords: DNA, Microarray, genomics, Cuckoo Search, Differential Evolution, Gene expression data, Clustering.
900 A General Mandatory Access Control Framework in Distributed Environments
Authors: Feng Yang, Xuehai Zhou, Dalei Hu
Abstract:
In this paper, we propose a general mandatory access control framework for distributed systems. The framework can be applied to multiple operating systems and can handle multiple stakeholders. Despite considerable advancements in the area of mandatory access control, a given approach to enforcing mandatory access control can usually be applied only in a specific operating system. Beyond the PC market, in which Windows captures the overwhelming share, there are a number of popular operating systems in the emerging smartphone environment, e.g., Android, Windows Mobile, Symbian, and RIM. It should also be noted that more and more stakeholders are involved in smartphone software, such as device owners, service providers, and application providers. Our framework consists of three parts: the local decision layer, the middle layer, and the remote decision layer. The middle layer takes charge of managing security contexts, the OS API, operations, and policy combination. Thanks to the middle layer, the design of the remote decision layer does not depend on a particular operating system. We implement the framework on Windows, Linux, and other popular embedded systems.
Keywords: Mandatory Access Control, Distributed System, General Platform.
899 Double Diffusive Convection in a Partially Porous Cavity under Suction/Injection Effects
Authors: Y. Outaleb, K. Bouhadef, O. Rahli
Abstract:
Steady double-diffusive convection in a partially porous cavity with partially permeable walls, under the combined buoyancy effects of thermal and mass diffusion, was analysed numerically using the finite volume method. The top wall is well insulated and impermeable, while the bottom surface is partially well insulated and impermeable and partially subjected to a constant temperature T1 and concentration C1. A constant, equal temperature T2 and concentration C2 are imposed along the vertical surfaces of the enclosure. Mass suction/injection and injection/suction are considered at the bottom of the centred porous partition and at one of the vertical walls, respectively. Heat and mass transfer characteristics, such as streamlines, average Nusselt numbers, and Sherwood numbers, are discussed for different values of the buoyancy ratio, the Rayleigh number, and the injection/suction coefficient. It is especially noted that increasing the injection factor hinders the exchanges in the case of injection, while the transfer is augmented in the case of suction. On the other hand, a critical value of the buoyancy ratio was highlighted for which heat and mass transfers are minimized.
Keywords: Double diffusive convection, Injection/Extraction, Partially porous cavity
898 Assertion-Driven Test Repair Based on Priority Criteria
Authors: Ruilian Zhao, Shukai Zhang, Yan Wang, Weiwei Wang
Abstract:
Repairing broken test cases is an expensive and challenging task in evolving software systems. Although an automated, intent-preserving repair technique has been proposed, it does not take into account the association between test repairs and assertions, leading to a large number of irrelevant candidates and decreasing the repair capability. This paper proposes an assertion-driven test repair approach. Furthermore, an intent-oriented priority criterion is raised to guide the repair candidate generation, making the repairs closer to the intent of the test. In more detail, repair targets are determined through post-dominance relations between assertions and the methods that directly cause compilation errors. Then, test repairs are generated from the target in a bottom-up way, guided by the intent-oriented priority criteria. Finally, the generated repair candidates are prioritized to match the original test intent. The approach is implemented and evaluated on a benchmark of 4 open-source programs and 91 broken test cases. The results show that the approach can fix 89% (81/91) of the broken test cases, which is more effective than the existing intent-preserving test repair approach, and that our intent-oriented priority criteria work well.
Keywords: Test repair, test intent, software test, test case evolution.
897 Perturbed-Chain Statistical Association Fluid Theory (PC-SAFT) Parameters for Propane, Ethylene, and Hydrogen under Supercritical Conditions
Authors: Ilke Senol
Abstract:
The Perturbed-Chain Statistical Association Fluid Theory (PC-SAFT) equation of state (EOS) is a modified SAFT EOS with three pure-component-specific parameters: segment number (m), segment diameter (σ), and segment energy (ε). These PC-SAFT parameters need to be determined for each component under the conditions of interest by fitting experimental data, such as vapor pressure, density, or heat capacity. PC-SAFT parameters for propane, ethylene, and hydrogen in the supercritical region were successfully estimated by fitting experimental density data available in the literature. The regressed PC-SAFT parameters were compared with the literature values by estimating the pure-component density and calculating the average absolute deviation between the estimated and experimental density values. The PC-SAFT parameters available in the literature, especially for ethylene and hydrogen, estimated the density in the supercritical region reasonably well. However, the regressed PC-SAFT parameters performed better in the supercritical region than the PC-SAFT parameters from the literature.
Keywords: Equation of state, perturbed-chain, PC-SAFT, supercritical.
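A minimal sketch of the regression step, using SciPy's least-squares routine; the `model_density` function here is only a runnable placeholder (the real PC-SAFT density must be computed from the EOS itself), and the data points and initial guess are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

def model_density(params, T, P):
    # Placeholder stand-in, NOT PC-SAFT: kept trivial so the fitting loop runs.
    m, sigma, eps = params
    return P / (8.314 * T) * m * sigma / (1.0 + eps / T)

def residuals(params, T, P, rho_exp):
    # Difference between model and experimental density at each state point.
    return model_density(params, T, P) - rho_exp

# Hypothetical supercritical density data (T in K, P in Pa, rho in mol/m^3).
T = np.array([400.0, 450.0, 500.0])
P = np.array([6e6, 8e6, 1e7])
rho_exp = np.array([2100.0, 2500.0, 2900.0])

fit = least_squares(residuals, x0=[2.0, 3.5, 200.0], args=(T, P, rho_exp))
print(fit.x)   # regressed (m, sigma, eps)
```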
896 Numerical Simulations of Electronic Cooling with In-Line and Staggered Pin Fin Heat Sinks
Authors: Yue-Tzu Yang, Hsiang-Wen Tang, Jian-Zhang Yin, Chao-Han Wu
Abstract:
Three-dimensional incompressible turbulent fluid flow and heat transfer in pin fin heat sinks using air as the cooling fluid are numerically studied. Two kinds of pin fins, with circular and square cross sections, are compared in terms of thermal performance, in both in-line and staggered arrangements. The turbulent governing equations are solved using a control-volume-based finite-difference method. Numerical computations are performed with the realizable k-ε turbulence model for the parameters studied: the fin height H, the fin diameter D, and the Reynolds number (Re), in the ranges 7 ≤ H ≤ 10, 0.75 ≤ D ≤ 2, and 2000 ≤ Re ≤ 126000, respectively. The numerical results are validated with available experimental data in the literature, and good agreement has been found. The results indicate that circular pin fins are streamlined compared with square pin fins: their pressure drop is smaller than that of square pin fins, but their heat transfer is not as good. The thermal performance of the staggered pin fins is better than that of the in-line pin fins because the staggered arrangement produces larger disturbance. Both in-line and staggered arrangements show the same behavior for thermal resistance, pressure drop, and entropy generation.
Keywords: Pin-fin, heat sinks, simulations, turbulent flow.
895 A New Approach to Face Recognition Using Dual Dimension Reduction
Authors: M. Almas Anjum, M. Younus Javed, A. Basit
Abstract:
In this paper, a new approach to face recognition is presented that achieves a double dimension reduction, making the system computationally efficient, giving better recognition results, and outperforming the common DCT technique of face recognition. In pattern recognition techniques, the discriminative information of an image increases with resolution up to a certain extent; consequently, face recognition results change with the face image resolution and become optimal at a certain resolution level. In the proposed model, an image decimation algorithm is first applied to the face image for dimension reduction down to the resolution level that provides the best recognition results. Owing to its computational speed and feature extraction potential, the Discrete Cosine Transform (DCT) is then applied to the face image, and a subset of DCT coefficients from low to mid frequencies that represent the face adequately and provide the best recognition results is retained. A trade-off between the decimation factor, the number of retained DCT coefficients, and the recognition rate with minimum computation is obtained. Preprocessing of the image is carried out to increase its robustness against variations in pose and illumination level. This new model has been tested on different databases, which include the ORL, Yale, and EME color databases.
Keywords: Biometrics, DCT, Face Recognition, Illumination, Computation, Feature extraction.
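A minimal sketch of the decimate-then-DCT feature extraction described above, using SciPy's 2-D DCT on a stand-in image; the decimation factor and the number of retained coefficients are illustrative assumptions rather than the paper's tuned values.

```python
import numpy as np
from scipy.fft import dctn

def dct_features(face, keep=20):
    """Downsample the face image, take its 2-D DCT, and keep a low/mid-frequency
    square of coefficients as the feature vector (zig-zag selection would also
    work); the parameters here are illustrative only."""
    decimated = face[::2, ::2]                    # simple dimension reduction
    coeffs = dctn(decimated, norm="ortho")        # 2-D discrete cosine transform
    return coeffs[:keep, :keep].ravel()           # retain the low-frequency block

rng = np.random.default_rng(3)
face = rng.random((112, 92))                      # stand-in ORL-sized grayscale image
print(dct_features(face).shape)                   # (400,)
```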
894 Survey of Access Controls in Cloud Computing
Authors: Monirah Alkathiry, Hanan Aljarwan
Abstract:
Cloud computing is one of the most significant technologies that the world deals with, in different sectors and with different purposes and capabilities. The cloud faces various challenges in securing data from unauthorized access or modification; consequently, security risks have greatly increased. Therefore, cloud service providers (CSPs) and users need secure mechanisms that ensure that data are kept secret and safe from any disclosure or exploit. For this reason, CSPs need a number of techniques and technologies to manage and secure access to cloud services in order to achieve security goals such as confidentiality, integrity, and identity and access management (IAM). This paper therefore reviews and explores various access controls implemented in a cloud environment that achieve different security purposes. The methodology followed in this survey was to assess, evaluate, and compare those access control mechanisms and technologies based on different factors, such as the security goals they achieve, usability, and cost-effectiveness. The assessment showed that the technology used in an access control affects the security goals it achieves, and that no single access control method achieves all security goals. Consequently, such a comparison would help decision-makers choose the access controls that properly meet their requirements.
Keywords: Access controls, cloud computing, confidentiality, identity and access management.
893 On the Network Packet Loss Tolerance of SVM Based Activity Recognition
Authors: Gamze Uslu, Sebnem Baydere, Alper K. Demir
Abstract:
In this study, the data loss tolerance of a Support Vector Machine (SVM) based activity recognition model and its multi-activity classification performance when data are received over a lossy wireless sensor network are examined. Initially, the classification algorithm we use is evaluated in terms of resilience to random data loss with 3D acceleration sensor data for sitting, lying, walking, and standing actions. The results show that the proposed classification method can recognize these activities successfully despite high data loss. Secondly, the effect of differentiated quality-of-service performance on activity recognition success is measured with activity data acquired from a multi-hop wireless sensor network, which introduces high data loss. The effect of the number of nodes on reliability and multi-activity classification success is demonstrated in a simulation environment. To the best of our knowledge, the effect of data loss in a wireless sensor network on the activity detection success rate of an SVM-based classification algorithm has not been studied before.
Keywords: Activity recognition, support vector machines, acceleration sensor, wireless sensor networks, packet loss.
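A hedged sketch of the kind of experiment described: a scikit-learn SVM is trained on synthetic activity features, and random packet loss is simulated by zero-filling a fraction of the test features (a simplified loss model; the paper injects loss at the network level).

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Synthetic stand-in for 3-D acceleration features of four activities.
rng = np.random.default_rng(4)
X = rng.normal(size=(800, 3)) + np.repeat(np.arange(4), 200)[:, None]
y = np.repeat(np.arange(4), 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)

# Simulate random data loss by zero-filling a fraction of the test features.
for loss in (0.0, 0.3, 0.6):
    X_lossy = X_te.copy()
    mask = rng.random(X_lossy.shape) < loss
    X_lossy[mask] = 0.0
    print(f"loss={loss:.0%}  accuracy={clf.score(X_lossy, y_te):.2f}")
```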
892 Types of Epilepsies and EEG-LORETA Findings about Epilepsy
Authors: Leila Maleki, Ahmad Esmali Kooraneh, Hossein Taghi Derakhshi
Abstract:
Neural activity in the human brain starts in the early stages of prenatal development. This activity, or the signals generated by the brain, is electrical in nature and represents not only the brain function but also the status of the whole body. At present, three methods can record functional and physiological changes within the brain with high temporal resolution of neuronal interactions at the network level: electroencephalography (EEG), magnetoencephalography (MEG), and functional magnetic resonance imaging (fMRI); each of these has advantages and shortcomings. EEG recording with a large number of electrodes is now feasible in clinical practice. Multichannel EEG recorded from the scalp surface provides very valuable but indirect information about the source distribution, whereas deep electrode measurements yield more reliable information about the source locations. Intracranial recordings and scalp EEG are used with source imaging techniques to determine the locations and strengths of the epileptic activity. As a source localization method, Low Resolution Electromagnetic Tomography (LORETA) is solved for the realistic geometry based on both forward methods, the Boundary Element Method (BEM) and the Finite Difference Method (FDM). In this paper, we review the EEG-LORETA findings about epilepsy.
Keywords: Epilepsy, EEG, EEG-LORETA, LORETA analysis.
891 Computer Aided Design of Reshaping Process of Circular Pipes into Square Pipes
Authors: Parviz Alinezhad, Ali Sanati, Koorosh Naser Momtahen
Abstract:
Square pipes (pipes with square cross sections) are used for various industrial purposes, such as machine structure components and housing/building elements, and their utilization is extending rapidly and widely. Hence, the output of these pipes is increasing and new application fields are continually developing. Due to various recent demands, the products have to satisfy difficult specifications with high dimensional accuracy. The design of the reshaping process for pipes with square cross sections, however, is performed by trial and error, based on experts' experience. In this paper, a computer-aided simulation is developed based on the 2-D elastic-plastic method, with consideration of the shear deformation, to analyze the reshaping process. The effects of various parameters, such as the diameter of the circular pipe and the mechanical properties of the metal, on product dimensions and quality can be evaluated using this simulation. Moreover, the design of the reshaping process, including determination of the cross-section shrinkage, the necessary number of stands, and the roll radius and pipe height at each stand, is investigated. Further, it is shown that there is good agreement between the results of the design method and the experimental results.
Keywords: Circular Pipes, Square Pipes, Shear Deformation, Reshaping Process, Numerical Simulation.
890 A Multigrid Approach for Three-Dimensional Inverse Heat Conduction Problems
Authors: Jianhua Zhou, Yuwen Zhang
Abstract:
A two-step multigrid approach is proposed to solve the inverse heat conduction problem in a 3-D object under laser irradiation. In the first step, the location of the laser center is estimated using a coarse and uniform grid system. In the second step, the front-surface temperature is recovered with good accuracy using a multiple grid system in which a fine mesh is used at the laser spot center to capture the drastic temperature rise in this region, while a coarse mesh is employed in the peripheral region to reduce the total number of sensors required. The effectiveness of the two-step approach and of the multiple grid system is demonstrated by the illustrative inverse solutions. If the measurement data for the temperature and heat flux on the back surface do not contain random error, the proposed multigrid approach can yield more accurate inverse solutions. When the back-surface measurement data contain random noise, accurate inverse solutions cannot be obtained if both temperature and heat flux are measured on the back surface.
Keywords: Conduction, inverse problems, conjugate gradient method, laser.
889 Metaphorical Perceptions of Middle School Students Regarding Computer Games
Authors: Ismail Celik, Ismail Sahin, Fetah Eren
Abstract:
The computer, among the most important inventions of the twentieth century, has become an increasingly important component of our everyday lives. Computer games have also become increasingly popular, owing to their realistic virtual environments, audio and visual features, and the roles they offer players. The present study investigates the metaphors students have for computer games, in an effort to fill the gap in the literature. Students were asked to complete the sentence 'A computer game is like/similar to … because …' to determine the middle school students' metaphorical images of the concept 'computer game'. The metaphors created by the students were grouped into six categories based on the source of the metaphor. Ordered by the number of metaphors they included, these categories were 'computer game as a means of entertainment', 'computer game as a beneficial means', 'computer game as a basic need', 'computer game as a source of evil', 'computer game as a means of withdrawal', and 'computer game as a source of addiction'.
Keywords: Computer game, metaphor, middle school students.
888 Educational Data Mining: The Case of Department of Mathematics and Computing in the Period 2009-2018
Authors: M. Sitoe, O. Zacarias
Abstract:
University education is influenced by several factors, ranging from the adoption of strategies that strengthen the whole process to improvements in the academic performance of the students themselves. This work uses data mining techniques to develop a predictive model that identifies students with a tendency toward evasion and retention. To this end, a database of real student data from the Department of University Admission (DAU) and the Department of Mathematics and Informatics (DMI) was used, comprising 388 undergraduate students admitted in the years 2009 to 2014. The Weka tool was used for model building with three different techniques: K-nearest neighbor, random forest, and logistic regression. To allow for training on multiple train-test splits, a cross-validation approach was employed with a varying number of folds. To reduce bias and variance and improve the performance of the models, the ensemble methods Bagging and Stacking were used. After comparing the results obtained by the three classifiers, logistic regression using Bagging with seven folds obtained the best performance, showing results above 90% in all evaluated metrics: accuracy, true positive rate, and precision. Retention is the most common tendency.
Keywords: Evasion and retention, cross validation, bagging, stacking.
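A minimal sketch of the best-performing configuration reported (Bagging around logistic regression with 7-fold cross-validation), using scikit-learn on synthetic stand-in records rather than the Weka tool and the real DAU/DMI data.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the 388 student records (grades, age, etc.);
# label 1 = evasion/retention tendency, 0 = regular progression.
rng = np.random.default_rng(5)
X = rng.normal(size=(388, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 388) > 0).astype(int)

# Bagging around logistic regression ("estimator=" in scikit-learn >= 1.2).
model = BaggingClassifier(estimator=LogisticRegression(max_iter=1000),
                          n_estimators=10, random_state=0)
scores = cross_val_score(model, X, y, cv=7, scoring="accuracy")
print(scores.mean())
```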
887 Green Building Materials: Hemp Oil Based Biocomposites
Authors: Nathan W. Manthey, Francisco Cardona, Gaston M. Francucci, Thiru Aravinthan
Abstract:
Novel acrylated epoxidized hemp oil (AEHO) based bioresins were successfully synthesised, characterised, and applied to biocomposites reinforced with woven jute fibre. Characterisation of the synthesised AEHO consisted of acid number titrations and FTIR spectroscopy to assess the success of the acrylation reaction. Three different matrices were produced (vinyl ester (VE), a 50/50 blend of AEHO/VE, and 100% AEHO) and reinforced with jute fibre to form three types of biocomposite samples. Mechanical properties in the form of flexural strength and interlaminar shear strength (ILSS) were investigated and compared for the different samples. Results from the mechanical tests showed that the AEHO and 50/50 neat bioresins displayed lower flexural properties than the VE samples. However, when applied to biocomposites and compared with VE-based samples, the AEHO biocomposites demonstrated comparable flexural performance and improved ILSS. These results are attributed to improved fibre-matrix interfacial adhesion due to surface-chemical compatibility between the natural fibres and the bioresin.
Keywords: Biocomposite, hemp oil based bioresin, green building materials, mechanical properties.
886 On the Computation of a Common n-finger Robotic Grasp for a Set of Objects
Authors: Avishai Sintov, Roland Menassa, Amir Shapiro
Abstract:
Industrial robotic arms utilize multiple end-effectors, each for a specific part and a specific task. We propose a novel algorithm that defines a single end-effector configuration able to grasp a given set of objects with different geometries. The algorithm will be of great benefit in production lines, allowing a single robot to grasp various parts and hence reducing the number of end-effectors needed. Moreover, the algorithm will reduce end-effector design and manufacturing time and final product cost. The algorithm searches for a common grasp over the set of objects. It maps all possible grasps for each object which satisfy a quality criterion and takes into account possible external wrenches (forces and torques) applied to the object. The mapped grasps are represented by high-dimensional feature vectors which describe the shape of the gripper. We generate a database of all possible grasps for each object in the feature space. Then we use a search and classification algorithm to intersect all possible grasps over all parts and find a single common grasp suitable for all objects. We present simulations on planar and spatial objects to validate the feasibility of the approach.
Keywords: Common Grasping, Search Algorithm, Robotic End-Effector.
885 Using Analytic Hierarchy Process as a Decision-Making Tool in Project Portfolio Management
Authors: D. Danesh, M. J. Ryan, A. Abbasi
Abstract:
Project Portfolio Management (PPM) is an essential component of an organisation's strategic procedures, and it requires attention to several factors in order to envisage a range of long-term outcomes that support strategic project portfolio decisions. To evaluate overall efficiency at the portfolio level, it is essential to identify the functionality of specific projects as well as to aggregate those findings in a mathematically meaningful manner that indicates the strategic significance of the associated projects at a number of levels of abstraction. PPM success is directly associated with the quality of the decisions made, and poor judgment increases portfolio costs. Hence, various Multi-Criteria Decision Making (MCDM) techniques have been designed and employed to support the decision-making functions. This paper reviews possible options for enhancing decision-making outcomes in organisational portfolio management processes using the Analytic Hierarchy Process (AHP), from both academic and practical perspectives, and examines the usability, certainty, and quality of the technique. The results of the study also provide insight into the technical risk associated with the current decision-making model, to underpin initiative tracking and strategic portfolio management.
Keywords: Analytic hierarchy process, decision support systems, multi-criteria decision-making, project portfolio management.
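As a brief illustration of the AHP machinery referred to above, the sketch below derives priority weights from a pairwise comparison matrix via its principal eigenvector and computes Saaty's consistency ratio; the matrix values are hypothetical.

```python
import numpy as np

def ahp_priorities(pairwise):
    """Derive AHP priority weights as the principal eigenvector of a pairwise
    comparison matrix, plus Saaty's consistency ratio (random index for n=3)."""
    values, vectors = np.linalg.eig(pairwise)
    k = np.argmax(values.real)
    weights = np.abs(vectors[:, k].real)
    weights /= weights.sum()
    n = pairwise.shape[0]
    ci = (values.real[k] - n) / (n - 1)          # consistency index
    cr = ci / 0.58                               # RI = 0.58 for n = 3
    return weights, cr

# Example: three candidate projects compared pairwise on a single criterion.
A = np.array([[1.0,   3.0, 5.0],
              [1/3.0, 1.0, 2.0],
              [1/5.0, 1/2.0, 1.0]])
w, cr = ahp_priorities(A)
print(w, cr)   # weights sum to 1; CR < 0.1 indicates acceptable consistency
```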
884 Classifying Biomedical Text Abstracts Based on Hierarchical 'Concept' Structure
Authors: Rozilawati Binti Dollah, Masaki Aono
Abstract:
Classifying biomedical literature is a difficult and challenging task, especially when a large number of biomedical articles have to be organized into a hierarchical structure. In this paper, we present an approach for classifying a collection of biomedical text abstracts downloaded from the Medline database with the help of ontology alignment. To accomplish our goal, we construct two types of hierarchies, the OHSUMED disease hierarchy and the Medline abstract disease hierarchies, from the OHSUMED dataset and the Medline abstracts, respectively. We then enrich the OHSUMED disease hierarchy before applying the ontology alignment process to find probable concepts or categories. Subsequently, we compute the cosine similarity between the vectors of the probable concepts (in the 'enriched' OHSUMED disease hierarchy) and the vectors of the Medline abstract disease hierarchies. Finally, we assign a category to each new Medline abstract based on the similarity score. The results obtained from the experiments show that the performance of our proposed approach for hierarchical classification is slightly better than that of multi-class flat classification.
Keywords: Biomedical literature, hierarchical text classification, ontology alignment, text mining.
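A minimal sketch of the cosine-similarity assignment step on toy term vectors (the vectors, labels, and vocabulary are illustrative assumptions, not the OHSUMED/Medline data).

```python
import numpy as np

def assign_category(abstract_vec, concept_vecs, labels):
    """Assign an abstract to the hierarchy concept whose term vector has the
    highest cosine similarity with the abstract's term vector."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    scores = [cosine(abstract_vec, c) for c in concept_vecs]
    best = int(np.argmax(scores))
    return labels[best], scores[best]

# Toy term-frequency vectors over a shared vocabulary (illustrative only).
abstract = np.array([2.0, 0.0, 1.0, 3.0])
concepts = [np.array([1.0, 0.0, 1.0, 2.0]),    # e.g. "cardiovascular diseases"
            np.array([0.0, 3.0, 0.0, 1.0])]    # e.g. "neoplasms"
print(assign_category(abstract, concepts, ["cardiovascular diseases", "neoplasms"]))
```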
883 Spatial Variability of Brahmaputra River Flow Characteristics
Authors: Hemant Kumar
Abstract:
According to Hindu mythology, the Brahmaputra River is the son of Lord Brahma. True to this name, the Brahmaputra causes mass destruction during the monsoon season in Assam, India, a state situated in the north-eastern part of the country. Assam is one of the essential states among the seven states of eastern India, and almost the entire Brahmaputra flow passes through it, while the other states carry its tributaries. In the present case study, spatial analysis is performed on a number of acquired MODIS datasets. Using a change detection method, high spray (aerosol) content was found during heavy rainfall and in the flooded monsoon season; in particular, the analysis over the Brahmaputra outflow identifies the flooded season. The charged-particle-associated aerosol content indicates high water content below the ground surface, which is validated by trend analysis of rainfall spectrum data and confirmed by in-situ sample data from different positions along the Brahmaputra River. Further, Hyperion hyperspectral data at 30 m resolution were used to scan the sediment deposits, which is also confirmed by in-situ sample data from a different position.
Keywords: Spatial analysis, change detection, aerosol, trend analysis.