Search results for: Data mining techniques
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9245

4775 A Novel Modified Adaptive Fuzzy Inference Engine and Its Application to Pattern Classification

Authors: J. Hossen, A. Rahman, K. Samsudin, F. Rokhani, S. Sayeed, R. Hasan

Abstract:

The Neuro-Fuzzy hybridization scheme has attracted research interest in pattern classification over the past decade. The present paper proposes a novel Modified Adaptive Fuzzy Inference Engine (MAFIE) for pattern classification. A modified Apriori algorithm technique is utilized to derive a minimal set of decision rules from input-output data sets. A TSK-type fuzzy inference system is constructed by the automatic generation of membership functions and rules, using fuzzy c-means clustering and the Apriori algorithm technique, respectively. The generated adaptive fuzzy inference engine is then adjusted by least-squares fitting and a conjugate gradient descent algorithm towards better performance with a minimal set of rules. The proposed MAFIE is able to reduce the number of rules, which otherwise increases exponentially as more input variables are involved. The performance of the proposed MAFIE is compared with other existing pattern classification schemes using Fisher's Iris and Wisconsin breast cancer data sets and is shown to be very competitive.
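
As an illustration of the clustering step mentioned above, here is a minimal NumPy sketch of fuzzy c-means; it is not the authors' implementation, and the cluster count, fuzzifier and toy data are assumptions. The resulting centres and membership degrees are what a TSK-type system would turn into membership functions.

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means: returns cluster centres and membership matrix U."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)            # memberships of each sample sum to 1
    for _ in range(n_iter):
        Um = U ** m                               # fuzzified memberships
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        U = 1.0 / (d ** (2.0 / (m - 1)))          # standard FCM membership update
        U /= U.sum(axis=1, keepdims=True)
    return centres, U

# Toy usage: cluster 4-feature samples (Iris-like shape) into 3 fuzzy groups.
X = np.random.default_rng(1).random((150, 4))
centres, U = fuzzy_c_means(X, c=3)
print(centres.shape, U.shape)   # (3, 4) (150, 3)
```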

Keywords: Apriori algorithm, Fuzzy C-means, MAFIE, TSK

4774 The Prediction of Sound Absorbing Coefficient for Multi-Layer Non-Woven

Authors: Un-Hwan Park, Jun-Hyeok Heo, In-Sung Lee, Tae-Hyeon Oh, Dae-Gyu Park

Abstract:

Automotive interior materials consist of several layers and serve a sound-absorbing function. Because of these multiple layers, the sound absorption coefficient is difficult to predict, so much experimental tuning is required to achieve the sound absorption target, and a great deal of time and money is spent while car interior materials are developed. In this study, we present a method to predict the sound-absorbing performance of a multi-layer material using the physical properties of each layer. The properties are predicted with the Foam-X software, using sound absorption coefficient data measured in an impedance tube. We then compare and analyze the predicted sound absorption coefficient against data measured in a scaled reverberation chamber and impedance tubes for a prototype. If this method is used instead of experimental tuning in the development of car interior materials, time and money can be saved and the development effort reduced, because the design can be optimized by simulation.

Keywords: Multi-layer nonwoven, sound absorption coefficient, scaled reverberation chamber, impedance tubes.

4773 Energy-Aware Scheduling in Real-Time Systems: An Analysis of Fair Share Scheduling and Priority-Driven Preemptive Scheduling

Authors: Su Xiaohan, Jin Chicheng, Liu Yijing, Burra Venkata Durga Kumar

Abstract:

Energy-aware scheduling in real-time systems aims to minimize energy consumption, but issues related to resource reservation and timing constraints remain challenging. This study analyzes two scheduling algorithms, Fair-Share Scheduling (FFS) and Priority-Driven Preemptive Scheduling (PDPS), with respect to these issues and to energy-aware scheduling in real-time systems. The analysis of both algorithms shows that FFS ensures fair allocation of resources but needs improvement under an imbalanced system load, while PDPS prioritizes tasks based on criticality to meet timing constraints through preemption but relies heavily on task prioritization and may not be energy efficient. Therefore, improvements to both algorithms with energy-aware features are proposed. Future work should focus on developing hybrid scheduling techniques that minimize energy consumption through intelligent task prioritization and resource allocation while meeting timing constraints.
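
To make the PDPS idea concrete, the sketch below simulates a toy fixed-priority preemptive scheduler at tick granularity; the task set, periods and priorities are invented for illustration, and this is not the algorithm analysed in the paper.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    period: int          # release period (ticks)
    wcet: int            # worst-case execution time (ticks)
    priority: int        # lower number = higher priority
    remaining: int = 0   # work left in the current job

def simulate(tasks, horizon=20):
    """Run a fixed-priority preemptive schedule for `horizon` ticks."""
    timeline = []
    for t in range(horizon):
        for task in tasks:                       # release a new job at each period start
            if t % task.period == 0:
                task.remaining = task.wcet
        ready = [task for task in tasks if task.remaining > 0]
        if ready:
            running = min(ready, key=lambda task: task.priority)  # preempts lower priority
            running.remaining -= 1
            timeline.append(running.name)
        else:
            timeline.append("idle")
    return timeline

tasks = [Task("T1", period=5, wcet=2, priority=1), Task("T2", period=10, wcet=4, priority=2)]
print(simulate(tasks))
```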

Keywords: Energy-aware scheduling, fair-share scheduling, priority-driven preemptive scheduling, real-time systems, optimization, resource reservation, timing constraints.

4772 Discrete and Stationary Adaptive Sub-Band Threshold Method for Improving Image Resolution

Authors: P. Joyce Beryl Princess, Y. Harold Robinson

Abstract:

Image processing is a branch of signal processing in which the input is an image and the output is either an image or parameters of the image. Image resolution is frequently referred to as an important aspect of an image, and in image resolution enhancement, images are processed to obtain a higher resolution. The goal is to generate a high-resolution image, with a high PSNR value, from a low-resolution input. Downsampling in each of the Discrete Wavelet Transform (DWT) sub-bands causes information loss in the respective sub-bands; the Stationary Wavelet Transform (SWT) is employed for edge detection and to minimize this loss. The Inverse Discrete Wavelet Transform (IDWT) then converts the downsampled sub-bands back into a high-resolution image. Because a noisy input would produce an output with a low PSNR value, an adaptive sub-band thresholding technique is used for noise-robust resolution enhancement. The combined image denoising and resolution enhancement techniques generate an image with a high PSNR value, and the proposed method improves image resolution while reaching the optimized threshold.
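
Since PSNR is the quality measure used above, a minimal NumPy sketch of the PSNR computation for 8-bit images follows; it is independent of the wavelet pipeline itself, and the test data are synthetic.

```python
import numpy as np

def psnr(reference, reconstructed, peak=255.0):
    """Peak signal-to-noise ratio in dB between two 8-bit images."""
    reference = reference.astype(np.float64)
    reconstructed = reconstructed.astype(np.float64)
    mse = np.mean((reference - reconstructed) ** 2)
    if mse == 0:
        return float("inf")            # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy usage with a random "image" and a noisy copy of it.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64))
noisy = np.clip(img + rng.normal(0, 5, size=img.shape), 0, 255)
print(round(psnr(img, noisy), 2), "dB")
```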

Keywords: Image Processing, Inverse Discrete wavelet transform, PSNR.

4771 Estimating Marine Tidal Power Potential in Kenya

Authors: Lucy Patricia Onundo, Wilfred Njoroge Mwema

Abstract:

The rapidly diminishing fossil fuel reserves, their exorbitant cost and the increasingly apparent negative effects of fossil fuels on climate change are a wake-up call to explore renewable energy. Wind, bio-fuel and solar power have already become staples of the Kenyan electricity mix. The potential for electric power generation from marine tidal currents is enormous, with oceans covering more than 70% of the earth. However, despite its promising, cyclic, reliable and predictable nature and the vast energy contained within it, marine tidal energy in Kenya has yet to be studied thoroughly. The high load factors resulting from the fluid properties and the predictable resource characteristics make marine currents particularly attractive for power generation and advantageous compared with other sources. Global-level resource assessments and oceanographic literature and data have been compiled in an analysis of the technology-specific requirements for tidal energy technologies and the physical resources. Temporal variations in resource intensity as well as the differences between small-scale applications are considered.
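
For context, the kinetic power available to a tidal-stream turbine is commonly estimated as P = ½ρAv³Cp; the sketch below applies this textbook relation with purely illustrative numbers, not measured Kenyan resource data or a result of the study.

```python
import math

RHO_SEAWATER = 1025.0        # kg/m^3

def tidal_power_kw(rotor_diameter_m, current_speed_ms, cp=0.35):
    """Power (kW) extracted by one tidal-stream turbine: 0.5 * rho * A * v^3 * Cp."""
    area = math.pi * (rotor_diameter_m / 2.0) ** 2        # swept rotor area
    power_w = 0.5 * RHO_SEAWATER * area * current_speed_ms ** 3 * cp
    return power_w / 1000.0

# Example: a 10 m rotor in a 2.0 m/s current (illustrative values only).
print(round(tidal_power_kw(10.0, 2.0), 1), "kW")   # roughly 113 kW
```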

Keywords: Energy data assessment, environmental legislation, renewable energy, tidal-in-stream turbines.

4770 Identification, Prediction and Detection of the Process Fault in a Cement Rotary Kiln by Locally Linear Neuro-Fuzzy Technique

Authors: Masoud Sadeghian, Alireza Fatehi

Abstract:

In this paper, we use a nonlinear system identification method to predict and detect process faults in a cement rotary kiln. After selecting proper inputs and outputs, an input-output model is identified for the plant. To cover the various operating points of the kiln, a Locally Linear Neuro-Fuzzy (LLNF) model is used. This model is trained by the LOLIMOT algorithm, an incremental tree-structure algorithm. Using this method, we obtained three distinct models: one for the normal condition of the kiln, with a 15-minute prediction horizon, and two for the faulty situations in the kiln, with a 7-minute prediction horizon. Finally, we detect these faults on validation data. The data collected from the White Saveh Cement Company are used in this study.
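
The locally linear neuro-fuzzy structure that LOLIMOT builds incrementally can be sketched as a sum of local linear models weighted by normalised Gaussian validity functions; the partition and parameters below are invented for illustration and are not the identified kiln model.

```python
import numpy as np

def llnf_predict(u, centres, sigmas, weights):
    """Output of a locally linear neuro-fuzzy model:
       y = sum_i phi_i(u) * (w_i0 + w_i . u), with normalised Gaussian validity
       functions phi_i. centres/sigmas: (M, d); weights: (M, d+1)."""
    diff = (u[None, :] - centres) / sigmas
    act = np.exp(-0.5 * np.sum(diff ** 2, axis=1))       # Gaussian activations
    phi = act / act.sum()                                 # normalised validity functions
    local = weights[:, 0] + weights[:, 1:] @ u            # local linear model outputs
    return float(phi @ local)

# Toy two-input model with two local linear models (all parameters invented).
centres = np.array([[0.2, 0.2], [0.8, 0.8]])
sigmas  = np.array([[0.3, 0.3], [0.3, 0.3]])
weights = np.array([[1.0, 0.5, -0.2],     # w0, w1, w2 of local model 1
                    [2.0, -0.1, 0.4]])    # w0, w1, w2 of local model 2
print(llnf_predict(np.array([0.5, 0.5]), centres, sigmas, weights))
```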

Keywords: Cement Rotary Kiln, Fault Detection, Delay Estimation Method, Locally Linear Neuro Fuzzy Model, LOLIMOT.

4769 Low Power and Less Area Architecture for Integer Motion Estimation

Authors: C Hisham, K Komal, Amit K Mishra

Abstract:

The full search block matching algorithm is widely used for hardware implementation of motion estimators in video compression algorithms. In this paper we propose a new architecture, which consists of a 2D parallel processing unit and a 1D unit working in parallel. The proposed architecture reduces both data access power and computational power, which are the main causes of power consumption in integer motion estimation. It also completes the operations in nearly the same number of clock cycles as a 2D systolic array architecture. In this work the sum of absolute differences (SAD), the most repeated operation in block matching, is calculated in two steps. The first step is to calculate the SAD for alternate rows with the 2D parallel unit. If the SAD calculated by the parallel unit is less than the stored minimum SAD, the SAD of the remaining rows is calculated by the 1D unit. Early termination, which stops avoidable computations, is achieved with the help of the alternate-rows method proposed in this paper and by finding a low initial SAD value based on motion vector prediction. Data reuse is applied to reference blocks in the same search area, which significantly reduces memory accesses.
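
A software sketch of the two-step, alternate-row SAD with early termination described above is given below; it mirrors the dataflow only, not the hardware architecture, and the block size and threshold are assumptions.

```python
import numpy as np

def sad_alternate_rows(current, candidate, best_so_far):
    """Two-step SAD: odd-indexed (alternate) rows first; the remaining rows are added
       only if the partial SAD still beats the best SAD found so far (early termination)."""
    partial = np.abs(current[::2].astype(int) - candidate[::2].astype(int)).sum()
    if partial >= best_so_far:
        return None                          # terminate early: this candidate cannot win
    rest = np.abs(current[1::2].astype(int) - candidate[1::2].astype(int)).sum()
    return partial + rest

# Toy 8x8 blocks.
rng = np.random.default_rng(0)
cur = rng.integers(0, 256, (8, 8))
cand = rng.integers(0, 256, (8, 8))
print(sad_alternate_rows(cur, cand, best_so_far=10_000))
```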

Keywords: Sum of absolute difference, high speed DSP.

4768 FPGA Implementation of Generalized Maximal Ratio Combining Receiver Diversity

Authors: Rafic Ayoubi, Jean-Pierre Dubois, Rania Minkara

Abstract:

In this paper, we study FPGA implementation of a novel supra-optimal receiver diversity combining technique, generalized maximal ratio combining (GMRC), for wireless transmission over fading channels in SIMO systems. Prior published results using ML-detected GMRC diversity signal driven by BPSK showed superior bit error rate performance to the widely used MRC combining scheme in an imperfect channel estimation (ICE) environment. Under perfect channel estimation conditions, the performance of GMRC and MRC were identical. The main drawback of the GMRC study was that it was theoretical, thus successful FPGA implementation of it using pipeline techniques is needed as a wireless communication test-bed for practical real-life situations. Simulation results showed that the hardware implementation was efficient both in terms of speed and area. Since diversity combining is especially effective in small femto- and picocells, internet-associated wireless peripheral systems are to benefit most from GMRC. As a result, many spinoff applications can be made to the hardware of IP-based 4th generation networks.
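
For reference, the conventional MRC combiner that GMRC is benchmarked against can be sketched in a few lines of NumPy; this is the standard textbook scheme, not the GMRC combiner itself, and the channel and noise values are illustrative.

```python
import numpy as np

def mrc_combine(received, channel_estimates):
    """Classical maximal-ratio combining for one BPSK symbol received on L diversity
       branches: weight each branch by the conjugate of its channel estimate."""
    combined = np.sum(np.conj(channel_estimates) * received)
    return combined, (1 if combined.real >= 0 else -1)   # hard BPSK decision

# Toy SIMO example: symbol +1 over two Rayleigh branches with additive noise.
rng = np.random.default_rng(3)
h = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)
noise = 0.1 * (rng.normal(size=2) + 1j * rng.normal(size=2))
r = h * 1 + noise
print(mrc_combine(r, h)[1])   # expected decision: 1
```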

Keywords: Femto-internet cells, field-programmable gate array, generalized maximal-ratio combining, Lyapunov fractal dimension, pipelining technique, wireless SIMO channels.

4767 Development of a Software System for Management and Genetic Analysis of Biological Samples for Forensic Laboratories

Authors: Mariana Lima, Rodrigo Silva, Victor Stange, Teodiano Bastos

Abstract:

Due to the high reliability reached by DNA tests, since the 1980s this kind of test has allowed the identification of a growing number of criminal cases, including old, unsolved cases that now have a chance to be solved with this technology. Currently, the use of genetic profiling databases is a typical method to increase the scope of genetic comparison. Forensic laboratories must process, analyze, and generate genetic profiles for a growing number of samples, which requires time and great storage capacity. It is therefore essential to develop methodologies, based on software tools, capable of organizing the work and minimizing the time spent on both biological sample processing and analysis of genetic profiles. Thus, the present work aims at the development of a software system for forensic genetics laboratories that allows sample, criminal case and local database management, minimizes the time spent in the workflow and helps to compare genetic profiles. For the development of this software system, all data related to the storage and processing of samples, the workflows and the requirements incorporated into the system have been considered. The system uses the following web technologies: HTML, CSS, and JavaScript, with the NodeJS platform as server, which is highly efficient for data input and output. In addition, the data are stored in a relational database (MySQL), which is free, favouring acceptance by users. The software system developed here brings more agility to the workflow and analysis of samples, contributing to the rapid insertion of genetic profiles into the national database and to increasing the resolution of crimes. The next step of this research is its validation, so that it can operate in accordance with current Brazilian national legislation.

Keywords: Database, forensic genetics, genetic analysis, sample management, software solution.

4766 An Experimental Study to Mitigate Swelling Pressure of Expansive Tabuk Shale, Saudi Arabia

Authors: A. A. Embaby, A. Abu Halawa, M. Ramadan

Abstract:

In the Kingdom of Saudi Arabia, there are several developed regions where expansive soil exists in the form of layers of variable thickness. The development of heave and swelling pressure in this kind of expansive shale can cause severe distress to infrastructure. Among the various techniques for expansive soil mitigation, the removal and replacement technique is very popular for lightly loaded structures and shallow foundations. This paper presents the results of an experimental study conducted to evaluate the effect of the type and thickness of cushion soils on the mitigation of the swelling characteristics of expansive shale. Seven undisturbed shale samples collected from the Al Qadsiyah district, located in the town of Tabuk in the north of the Kingdom of Saudi Arabia, are treated with two types of cushion coarse-grained sediments (CCS): sand and gravel. Each type is represented by three thicknesses, 22%, 33% and 44% of the depth of the active zone. The test results indicate that replacing expansive shale with CCS reduces the swelling potential and pressure, and that the reduction depends on the type and thickness of the CCS. Removing the original expansive shale and replacing it with a sand cushion of 44% thickness reduced the swelling potential and pressure by about 53.29% and 62.78%, respectively.

Keywords: Cushion coarse-grained sediments, expansive soil, Saudi Arabia, swelling pressure, Tabuk Shale.

4765 Extraction of Forest Plantation Resources in Selected Forest of San Manuel, Pangasinan, Philippines Using LiDAR Data for Forest Status Assessment

Authors: Mark Joseph Quinto, Roan Beronilla, Guiller Damian, Eliza Camaso, Ronaldo Alberto

Abstract:

Forest inventories are essential to assess the composition, structure and distribution of forest vegetation, which can be used as baseline information for management decisions. Classical forest inventory is labor-intensive, time-consuming and sometimes even dangerous. The use of Light Detection and Ranging (LiDAR) in forest inventory can improve on and overcome these restrictions. This study was conducted to determine the possibility of using LiDAR-derived data to extract high-accuracy forest biophysical parameters and as a non-destructive method for forest status analysis of San Manuel, Pangasinan. Forest resource extraction was carried out using LAStools, GIS, ENVI and .bat scripts with the available LiDAR data. The process includes the generation of derivatives such as the Digital Terrain Model (DTM), Canopy Height Model (CHM) and Canopy Cover Model (CCM) in .bat scripts, followed by the generation of 17 composite bands used to extract forest cover classes in ENVI 4.8 and GIS software. The Diameter at Breast Height (DBH), Above-Ground Biomass (AGB) and Carbon Stock (CS) were estimated for each classified forest cover, and tree count extraction was carried out using GIS. Subsequently, field validation was conducted for accuracy assessment. Results showed that the forest of San Manuel has 73% forest cover, which is much higher than the 10% canopy cover requirement. For the extracted canopy height, 80% of tree heights range from 12 m to 17 m. The CS of the three forest covers, based on the AGB, were 20819.59 kg/20x20 m for closed broadleaf, 8609.82 kg/20x20 m for broadleaf plantation and 15545.57 kg/20x20 m for open broadleaf. The average tree count for the forest plantation was 413 trees/ha. As such, the forest of San Manuel has a high percentage of forest cover and a high CS.
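
As a sketch of the derivative-generation step, a canopy height model is typically obtained as the per-pixel difference between a surface model and the terrain model; the snippet below assumes co-registered rasters held as arrays and is only illustrative, since the study produced its derivatives with LAStools-style .bat scripts.

```python
import numpy as np

def canopy_height_model(dsm, dtm, min_height=0.0):
    """Canopy Height Model as the per-pixel difference DSM - DTM, clipped at zero.
       Assumes both rasters are co-registered arrays in metres (illustrative only)."""
    chm = dsm - dtm
    return np.clip(chm, min_height, None)

def percent_canopy_cover(chm, height_threshold=5.0):
    """Share of pixels whose canopy height exceeds a threshold (a simple cover proxy)."""
    return 100.0 * np.mean(chm > height_threshold)

# Tiny toy rasters (metres).
dsm = np.array([[210.0, 215.0], [208.0, 230.0]])
dtm = np.array([[205.0, 205.0], [206.0, 207.0]])
chm = canopy_height_model(dsm, dtm)
print(chm, percent_canopy_cover(chm))
```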

Keywords: Carbon stock, forest inventory, LiDAR, tree count.

4764 Multi-Factor Optimization Method through Machine Learning in Building Envelope Design: Focusing on Perforated Metal Façade

Authors: Jinwooung Kim, Jae-Hwan Jung, Seong-Jun Kim, Sung-Ah Kim

Abstract:

Because the building envelope has a significant impact on the operation and maintenance stage of a building, designing the façade with performance in mind can improve the performance of the building and lower its maintenance cost. In general, however, optimizing two or more performance factors runs into the limits of time and computational tools: the optimization phase typically repeats indefinitely until the series of processes that generate alternatives and analyze them achieves the desired performance. In particular, as geometric complexity or precision increases, the computational resources and time needed to reach the required performance become prohibitive, so an optimization methodology is needed to deal with this. Instead of directly analyzing all the alternatives in the optimization process, applying heuristics learned through experimentation and experience can reduce wasted resources. This study proposes and verifies a method that applies machine learning to the design geometry and quantitative performance in order to optimize the double envelope of a building composed of perforated panels. The proposed method achieves the required performance with fewer resources by supplementing the existing method, which cannot handle the complex shape of the perforated panel.
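
The surrogate idea can be sketched as follows: fit a regressor on façade alternatives that have already been analysed, then use it to screen new perforated-panel candidates before any full simulation; the features, placeholder performance score and model choice below are assumptions, not the authors' setup.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Each row: [perforation ratio, hole diameter (mm), panel offset (mm)] - hypothetical features.
X_evaluated = rng.uniform([0.1, 2.0, 50.0], [0.6, 12.0, 300.0], size=(200, 3))
# Placeholder performance score standing in for a simulated daylight/energy metric.
y_evaluated = (0.7 * X_evaluated[:, 0] - 0.01 * X_evaluated[:, 1]
               + 0.001 * X_evaluated[:, 2] + rng.normal(0, 0.02, 200))

# Train the surrogate on the already-analysed alternatives.
surrogate = RandomForestRegressor(n_estimators=200, random_state=0)
surrogate.fit(X_evaluated, y_evaluated)

# Screen a large candidate pool cheaply; only the top few go to full analysis.
candidates = rng.uniform([0.1, 2.0, 50.0], [0.6, 12.0, 300.0], size=(1000, 3))
scores = surrogate.predict(candidates)
top = candidates[np.argsort(scores)[-5:]]
print(top)
```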

Keywords: Building envelope, machine learning, perforated metal, multi-factor optimization, façade.

4763 A Computational Study of the Effect of Intake Design on Volumetric Efficiency for Best Performance in Motorsport

Authors: Dominic Wentworth-Linton, Shian Gao

Abstract:

This project investigated the effect of velocity stacks on the intakes of internal combustion engines for motorsport applications. The intake systems in motorsport are predominantly fuel-injected, with a plate mounted for the stacks. Using Computational Fluid Dynamics software, the relationship between stack length and power and torque delivery across the engine's rev range was investigated, and the results were used to choose the best option for the intended motorsport discipline. The test results are expected to vary with engine geometry and its natural manufacturer characteristics. The test was also relevant in bridging between computational data and real simulation, as the results show flow, pressure and velocity readings while the behaviour of the engine is inferred from the nature of each test. The results of the data analysis were then tested in a real-life simulation on a dynamometer to confirm the effect of stack length on power and torque delivery, which helps determine the most suitable stack for the Vauxhall engine for rallying in the Caribbean.

Keywords: CFD simulation, internal combustion engine, intake system, dynamometer test.

4762 Static Balance in the Elderly: Comparison between Elderly Performing Physical Activity and Fine Motor Coordination Activity

Authors: Andreia Guimarães Farnese, Mateus Fernandes Réu Urban, Leandro Procópio, Renato Zângaro, Regiane Albertini

Abstract:

Changes with senescence include declines in postural balance, which increase the risk of falls and can lead to fractures, to becoming bedridden, and to the risk of death. Physical activity, e.g., cardiovascular exercise, is known to improve balance through brain cell stimulation, but fine motor coordination exercises also elevate brain cell metabolism. This study aimed to verify whether elderly people who perform fine motor activities have a balance similar to that of those who practice physical activity. The subjects were divided into three groups according to the activity practiced: a control group (CG) of seven sedentary participants, a motor coordination group (MCG) of six participants, and a physical activity group (PAG) of eight participants. Data were compared using the Berg balance scale, the Timed Up and Go test, and stabilometric analysis. Descriptive statistics and ANOVA were used for data analysis. The results reveal that including fine motor activities can improve the balance of the elderly and indirectly decrease the risk of falls.
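
The group comparison can be outlined with a one-way ANOVA as below; the stabilometric values are invented placeholders, not the study's data.

```python
from scipy import stats

# Hypothetical balance scores per group (placeholders, not the study's measurements).
cg  = [4.1, 3.8, 4.5, 4.0, 3.9, 4.2, 4.4]        # sedentary control group (n=7)
mcg = [3.2, 3.0, 3.4, 3.1, 3.3, 2.9]             # fine motor coordination group (n=6)
pag = [3.0, 2.8, 3.1, 2.9, 3.2, 3.0, 2.7, 3.1]   # physical activity group (n=8)

f_stat, p_value = stats.f_oneway(cg, mcg, pag)    # one-way ANOVA across the three groups
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")     # p < 0.05 -> group means differ
```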

Keywords: Balance, barapodometer, coordination, elderly.

4761 Technical Trading Rules in Emerging Stock Markets

Authors: Stefaan Pauwels, Koen Inghelbrecht, Dries Heyman, Pieter Marius

Abstract:

Literature reveals that many investors rely on technical trading rules when making investment decisions. If stock markets are efficient, one cannot achieve superior results by using these trading rules. However, if market inefficiencies are present, profitable opportunities may arise. The aim of this study is to investigate the effectiveness of technical trading rules in 34 emerging stock markets. The performance of the rules is evaluated using White's Reality Check and Hansen's Superior Predictive Ability test, along with an adjustment for transaction costs. These tests evaluate whether the best model performs better than a buy-and-hold benchmark, and they address data snooping problems, which is essential to obtain unbiased outcomes. Based on our results we conclude that technical trading rules are not able to outperform a naïve buy-and-hold benchmark on a consistent basis. However, we do find significant trading rule profits in 4 of the 34 investigated markets. We also present evidence that technical analysis is more profitable in crisis situations, although this result is relatively weak.
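
As one concrete member of the family of rules tested, the sketch below backtests a simple long-only moving-average crossover with a proportional transaction cost; the parameters and price series are invented, and this is not the paper's Reality Check / Superior Predictive Ability test procedure.

```python
import numpy as np

def ma_crossover_returns(prices, short=20, long=50, cost=0.001):
    """Daily strategy returns of a long-only moving-average crossover rule:
       hold the index while the short MA is above the long MA, pay a proportional
       cost on every position change."""
    prices = np.asarray(prices, dtype=float)
    ret = np.diff(prices) / prices[:-1]
    short_ma = np.convolve(prices, np.ones(short) / short, mode="valid")
    long_ma = np.convolve(prices, np.ones(long) / long, mode="valid")
    signal = (short_ma[-len(long_ma):] > long_ma).astype(int)   # align both MA series
    signal = signal[:-1]                                         # trade on next-day return
    strat = signal * ret[-len(signal):]
    trades = np.abs(np.diff(np.concatenate([[0], signal])))
    strat -= cost * trades                                       # transaction costs
    return strat

# Synthetic price path for illustration.
rng = np.random.default_rng(0)
prices = 100 * np.cumprod(1 + rng.normal(0.0005, 0.02, 500))
print(ma_crossover_returns(prices).sum())
```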

Keywords: technical trading rules, Reality Check, Superior Predictive Ability, emerging stock markets, data snooping

4760 Unbalanced Distribution Optimal Power Flow to Minimize Losses with Distributed Photovoltaic Plants

Authors: Malinwo Estone Ayikpa

Abstract:

Electric power systems are expected to operate with minimum losses and with voltages meeting international standards. This is generally made possible by control actions provided by automatic voltage regulators, capacitors and transformers with on-load tap changers (OLTC). With the development of photovoltaic (PV) system technology, the integration of PV on distribution networks has increased over recent years to the extent of replacing the above-mentioned techniques. The conventional analysis and simulation tools used for electrical networks are no longer able to take into account the control actions necessary for studying the impact of distributed PV generation. This paper presents an unbalanced optimal power flow (OPF) model that minimizes losses by combining active power generation with reactive power control of single-phase and three-phase PV systems. Reactive power can be generated or absorbed using the available capacity and the adjustable power factor of the inverter. The unbalanced OPF is formulated with current balance equations and solved by a primal-dual interior point method. Several simulation cases have been carried out varying the size and location of the PV systems, and the results show in detail the impact of distributed PV generation on distribution systems.
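
The inverter reactive-power control mentioned above is bounded by the apparent-power rating, Q_max = sqrt(S² − P²); a minimal sketch of that capability limit follows, with illustrative numbers rather than the OPF formulation itself.

```python
import math

def reactive_capability_kvar(s_rating_kva, p_output_kw):
    """Reactive power a PV inverter can inject or absorb while producing p_output_kw,
       limited by its apparent-power rating: Q_max = sqrt(S^2 - P^2)."""
    if p_output_kw > s_rating_kva:
        return 0.0                    # no headroom left for reactive power
    return math.sqrt(s_rating_kva ** 2 - p_output_kw ** 2)

# Example: a 100 kVA inverter running at 80 kW active output.
print(round(reactive_capability_kvar(100.0, 80.0), 1), "kvar available")   # 60.0
```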

Keywords: Distribution system, losses, photovoltaic generation, primal-dual interior point method, reactive power control.

4759 A Game-Based Product Modelling Environment for Non-Engineer

Authors: Guolong Zhong, Venkatesh Chennam Vijay, Ilias Oraifige

Abstract:

In the last 20 years, Knowledge Based Engineering (KBE) has shown its advantages in product development in different engineering areas such as automation, mechanical, civil and aerospace engineering, in terms of digital design automation and cost reduction, by automating repetitive design tasks through capturing, integrating, utilising and reusing the existing knowledge required in various aspects of the product design. However, in primary design stages, the descriptive information of a product is discrete and unorganized, while knowledge is in various forms instead of pure data. Thus, it is crucial to have an integrated product model which can represent the entire product information and its associated knowledge at the beginning of the product design. One of the shortcomings of existing product models is the lack of required knowledge representation in various aspects of product design and of its mapping to an interoperable schema. To overcome the limitations of the existing product models and methodologies, two key factors are considered. First, the product model must have well-defined classes that can represent the entire product information and its associated knowledge. Second, the product model needs to be represented in an interoperable schema to ensure a steady data exchange between different product modelling platforms and CAD software. This paper introduces a method to provide a general product model, as a generative representation of a product consisting of geometry information and non-geometry information, through a product modelling framework. The proposed method of capturing knowledge from designers through a knowledge file provides a simple and efficient way of collecting and transferring knowledge. Further, the knowledge schema provides a clear view of, and format for, the data that need to be gathered in order to achieve a unified knowledge exchange between different platforms. This study used a game-based platform to make the product modelling environment accessible to non-engineers. The paper then goes on to test a use case based on the proposed game-based product modelling environment to validate its effectiveness among non-engineers.

Keywords: Game-based learning, knowledge based engineering, product modelling, design automation.

4758 Ports and Airports: Gateways to Vector-Borne Diseases in Portugal Mainland

Authors: Maria C. Proença, Maria T. Rebelo, Maria J. Alves, Sofia Cunha

Abstract:

Vector-borne diseases are transmitted to humans by mosquitoes, sandflies, bugs, ticks and other vectors. Some are re-transmitted between vectors if the infected human has a new contact while his levels of infection are high. The vector is infected for its lifetime and can transmit infectious diseases not only between humans but also from animals to humans. Some vector-borne diseases are very disabling and account for more than one million deaths worldwide. Mosquitoes of the Culex pipiens sl. complex are the most abundant in Portugal, and we now have a data set from the surveillance program that has been carried out since 2006 across the country. All mosquito species are included, but the large coverage of Culex pipiens sl. and its importance for public health make this vector an interesting candidate for assessing the risk of disease amplification. This work focuses on ports and airports, identified as key areas with a high density of vectors. Since mosquitoes are ectothermic organisms, the main factor for vector survival and pathogen development is temperature. Minimum and maximum local air temperatures for each area of interest are averaged by month from data gathered daily at the national network of meteorological stations and interpolated in a geographic information system (GIS). The temperature ranges ideal for several pathogens are known, and this work shows how to use them together with the meteorological data at each port and airport facility to focus an efficient implementation of countermeasures and simultaneously reduce transmission risk and mitigation costs. The results show increased alert with decreasing latitude, which corresponds to higher minimum and maximum temperatures and a lower amplitude of the daily temperature range.

Keywords: Human health, risk assessment, risk management, vector-borne diseases.

4757 3D Frictionless Contact Case between the Structure of E-Bike and the Ground

Authors: Lele Zhang, HuiLeng Choo, Alexander Konyukhov, Shuguang Li

Abstract:

China is currently the world's largest producer and distributor of electric bicycles (e-bikes). The increasing number of e-bikes on the road is accompanied by rising injuries and even deaths of e-bike riders, so there is a growing need to improve the safety structure of e-bikes. This 3D frictionless contact analysis is a preliminary but necessary step for further structural design improvement of an e-bike. The contact analysis between the e-bike and the ground was carried out as follows: first, the penalty method was illustrated and derived from the simplest spring-mass system, as it is one of the most common methods for handling the frictionless contact case; second, an ANSYS static analysis was carried out to verify finite element (FE) models with a contact pair (without friction) between the e-bike and the ground; finally, an ANSYS transient analysis was used to obtain the penetration p(u) of the e-bike with respect to the ground. The results obtained from the simulation agree with estimates from the theoretical method. In future work, a protective shell will be designed following stability criteria and added to the frame of the e-bike, and a simulation of the side fall of the improved safety structure of the e-bike will be compared with experimental data.
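
A minimal sketch of the penalty regularisation on the simplest spring-mass case is given below: when the mass penetrates the ground by p(u) > 0, a restoring force proportional to the penetration is applied. The penalty stiffness and time step are assumptions, not values from the ANSYS model.

```python
def drop_mass_with_penalty_contact(m=1.0, k_pen=1e5, g=9.81, h0=0.5, dt=1e-4, t_end=1.0):
    """Explicit (symplectic Euler) time stepping of a point mass dropped onto rigid
       ground at u = 0. Frictionless normal contact is enforced by the penalty method:
       penetration p = -u (when u < 0) produces a normal force k_pen * p."""
    u, v = h0, 0.0                     # height above ground and velocity
    max_penetration = 0.0
    for _ in range(int(t_end / dt)):
        penetration = max(0.0, -u)
        f_contact = k_pen * penetration        # penalty force pushes the mass back out
        a = -g + f_contact / m
        v += a * dt
        u += v * dt
        max_penetration = max(max_penetration, penetration)
    return u, max_penetration

u_final, p_max = drop_mass_with_penalty_contact()
print(f"final height {u_final:.4f} m, max penetration {p_max:.5f} m")
```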

Keywords: Frictionless contact, penalty method, e-bike, finite element.

4756 A Hybrid Classification Method using Artificial Neural Network Based Decision Tree for Automatic Sleep Scoring

Authors: Haoyu Ma, Bin Hu, Mike Jackson, Jingzhi Yan, Wen Zhao

Abstract:

In this paper we propose a new classification method for automatic sleep scoring using an artificial neural network based decision tree. It treats the sleep scoring process as a series of two-class problems and solves them with a decision tree made up of a group of neural network classifiers, each of which uses a special feature set and is aimed at only one specific sleep stage in order to maximize the classification effect. A single electroencephalogram (EEG) signal is used for our analysis rather than multiple biological signals, which greatly simplifies the data acquisition process. Experimental results demonstrate that the average epoch-by-epoch agreement between visual scoring and the proposed method in separating 30 s wakefulness+S1, REM, S2 and SWS epochs was 88.83%. This study shows that the proposed method performs well in all four stages and can effectively limit error propagation at the same time. It could, therefore, be an efficient method for automatic sleep scoring. Additionally, since it requires only a small volume of data, it could be suited to pervasive applications.

Keywords: Sleep, Sleep stage, Automatic sleep scoring, Electroencephalography, Decision tree, Artificial neural network

4755 Split-Pipe Design of Water Distribution Network Using Simulated Annealing

Authors: J. Tospornsampan, I. Kita, M. Ishii, Y. Kitamura

Abstract:

In this paper a procedure for the split-pipe design of a looped water distribution network based on simulated annealing is proposed. Simulated annealing is a heuristic search algorithm, motivated by an analogy with physical annealing in solids, that is capable of solving combinatorial optimization problems. In contrast to split-pipe designs derived from continuous diameter designs, as implemented in conventional optimization techniques, the split-pipe design proposed in this paper is derived from a discrete diameter design, in which pipe diameters are chosen directly from a specified set of commercial pipes. The optimality and feasibility of the solutions are found to be guaranteed by the proposed method. The performance of the proposed procedure is demonstrated by solving three well-known water distribution network problems taken from the literature. Simulated annealing provides very promising solutions, and the lowest-cost solutions are found for all of these test problems. The results obtained from these applications show that simulated annealing is able to handle the combinatorial optimization problem of the least-cost design of water distribution networks. The technique can be considered as an alternative tool for similar areas of research, and further applications and improvements of the technique are expected.
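
A minimal sketch of the simulated annealing loop over discrete commercial diameters follows; the cost table, the toy feasibility penalty and the cooling schedule are placeholders, since a real split-pipe design would evaluate nodal pressure heads with a hydraulic solver.

```python
import math, random

DIAMETERS_MM = [100, 150, 200, 250, 300, 350, 400]          # commercial pipe sizes
COST_PER_M = {100: 20, 150: 30, 200: 45, 250: 65, 300: 90, 350: 120, 400: 155}
PIPE_LENGTHS_M = [500, 400, 600, 300, 450]                   # five pipes in a toy network

def cost(design):
    capital = sum(COST_PER_M[d] * L for d, L in zip(design, PIPE_LENGTHS_M))
    head_deficit = max(0.0, 1200 - sum(design))      # stand-in for a hydraulic constraint
    return capital + 1e3 * head_deficit              # penalise infeasible designs

def anneal(t0=1e4, alpha=0.995, iters=20000, seed=1):
    random.seed(seed)
    current = [random.choice(DIAMETERS_MM) for _ in PIPE_LENGTHS_M]
    best, t = list(current), t0
    for _ in range(iters):
        neighbour = list(current)
        i = random.randrange(len(neighbour))
        neighbour[i] = random.choice(DIAMETERS_MM)   # perturb one pipe's diameter
        delta = cost(neighbour) - cost(current)
        if delta < 0 or random.random() < math.exp(-delta / t):
            current = neighbour                      # Metropolis acceptance rule
        if cost(current) < cost(best):
            best = list(current)
        t *= alpha                                   # geometric cooling schedule
    return best, cost(best)

print(anneal())
```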

Keywords: Combinatorial problem, Heuristics, Least-cost design, Looped network, Pipe network, Optimization

4754 Analytical Model Based Evaluation of Human Machine Interfaces Using Cognitive Modeling

Authors: Belkacem Chikhaoui, Helene Pigot

Abstract:

Cognitive models allow predicting some aspects of the utility and usability of human machine interfaces (HMI), and simulating the interaction with these interfaces. The prediction is based on a task analysis, which investigates what a user is required to do, in terms of actions and cognitive processes, to achieve a task; task analysis facilitates the understanding of the system's functionalities. Cognitive models are part of the analytical approaches, which do not involve users during the development process of the interface. This article presents a study on the evaluation of a human machine interaction with a contextual assistant's interface using the ACT-R and GOMS cognitive models. The present work shows how these techniques may be applied in the evaluation of HMI, design and research, by emphasizing firstly the task analysis and secondly the execution time of the task. In order to validate and support our results, an experimental study of user performance was conducted at the DOMUS laboratory during interaction with the contextual assistant's interface. The results show that the GOMS and ACT-R models give good and excellent predictions, respectively, of user performance at the task level as well as the object level. Therefore, the simulated results are very close to the results obtained in the experimental study.

Keywords: HMI, interface evaluation, analytical evaluation, cognitive modeling, user modeling, user performance.

4753 Delineation of Oil – Polluted Sites in Ibeno LGA, Nigeria, Using Geophysical Techniques

Authors: Ime R. Udotong, Justina I. R. Udotong, Ofonime U. M. John

Abstract:

Ibeno, Nigeria hosts the operational base of Mobil Producing Nigeria Unlimited (MPNU), a subsidiary of ExxonMobil and currently the highest oil and condensate producer in Nigeria. Besides MPNU, other oil companies operate onshore, on the continental shelf and deep offshore of the Atlantic Ocean in Ibeno, Nigeria. This study was designed to delineate oil-polluted sites in Ibeno, Nigeria using the geophysical methods of electrical resistivity (ER) and ground penetrating radar (GPR). The results obtained revealed that this environment has been contaminated with hydrocarbons by past crude oil spills, as observed from high resistivity values and GPR profiles, which clearly show the distribution, thickness and lateral extent of hydrocarbon contamination as represented by the radargram reflector tones. Contamination was of varying degrees, ranging from slight to high, indicating substantial attenuation of the crude oil contamination over time. Moreover, the relatively lower resistivities of locations outside the impacted areas compared with resistivity values within them, together with the 3-D Cartesian images of the oil contaminant plume, depicted in red, light brown and magenta for high, low and very low oil-impacted areas, respectively, confirmed significant recent pollution of the study area with crude oil.

Keywords: Electrical resistivity, geophysical investigations, ground penetrating radar, oil-polluted sites.

4752 Comparison of Different Techniques to Estimate Surface Soil Moisture

Authors: S. Farid F. Mojtahedi, Ali Khosravi, Behnaz Naeimian, S. Adel A. Hosseini

Abstract:

Land subsidence is a gradual settling or sudden sinking of the land surface caused by changes that take place underground. There are different causes of land subsidence, most notably groundwater overdraft and severe weather conditions. Subsidence of the land surface due to groundwater overdraft is caused by an increase in the intergranular pressure in unconsolidated aquifers, which results in a loss of buoyancy of the solid particles in the zone dewatered by the falling water table and, accordingly, compaction of the aquifer. On the other hand, exploitation of underground water may result in significant changes in the degree of saturation of the soil layers above the water table, increasing the effective stress in these layers and causing considerable soil settlement. This study focuses on the estimation of soil moisture at the surface using different methods. Specifically, different methods for estimating the moisture content at the soil surface, an important term in solving Richards' equation and estimating the soil moisture profile, are presented, and their results are discussed through comparison with field measurements obtained from the Yanco1 station in south-eastern Australia. Surface soil moisture is not easy to measure at the spatial scale of a catchment: due to the heterogeneity of soil type, land use and topography, it may change considerably in space and time.

Keywords: Artificial neural network, empirical method, remote sensing, surface soil moisture, unsaturated soil.

4751 Kinetic model and Simulation Analysis for Propane Dehydrogenation in an Industrial Moving Bed Reactor

Authors: Chin S. Y., Radzi, S. N. R., Maharon, I. H., Shafawi, M. A.

Abstract:

A kinetic model for propane dehydrogenation in an industrial moving bed reactor is developed based on the reported reaction scheme. The kinetic parameters and activity constant are fine-tuned with several sets of balanced plant data. Plant data at different operating conditions are used to validate the model, and the results show good agreement between the model predictions and plant observations in terms of the amount of the main product, propylene. A simulation analysis of key variables affecting process performance, such as the inlet temperature of each reactor (Tinrx) and the hydrogen to total hydrocarbon ratio (H2/THC), is performed to identify the operating condition that maximizes the production of propylene. Within the range of operating conditions applied in the present study, the operating condition that maximizes propylene production at the same weighted average inlet temperature (WAIT) is ΔTinrx1 = -2, ΔTinrx2 = +1, ΔTinrx3 = +1, ΔTinrx4 = +2 and ΔH2/THC = -0.02. Under this condition, the surplus propylene produced is 7.07 tons/day compared with the base case.

Keywords: kinetic model, dehydrogenation, simulation, modeling, propane

4750 Shifted Window Based Self-Attention via Swin Transformer for Zero-Shot Learning

Authors: Yasaswi Palagummi, Sareh Rowlands

Abstract:

Generalised Zero-Shot Learning, often known as GZSL, is an advanced variant of zero-shot learning in which test samples may come from either seen or unseen classes. GZSL methods typically have a bias towards the seen classes because they learn a model to perform recognition for both the seen and unseen classes using data samples from the seen classes only. This frequently leads to the misclassification of data from the unseen classes into the seen classes, making the task of GZSL more challenging. In this work, we propose an approach leveraging the Shifted Window based Self-Attention in the Swin Transformer (Swin-GZSL) to work in the inductive GZSL problem setting. We run experiments on three popular benchmark datasets, CUB, SUN, and AWA2, which are specifically used for ZSL and its other variants. The results show that our model based on the Swin Transformer has achieved a state-of-the-art harmonic mean on two datasets, AWA2 and SUN, and is near state-of-the-art on the other dataset, CUB. More importantly, this technique has linear computational complexity, which reduces training time significantly. We have also observed less bias than in most of the existing GZSL models.
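
The harmonic mean referred to above is the standard GZSL metric that balances seen- and unseen-class accuracy; a short sketch with illustrative numbers (not the paper's results) follows.

```python
def gzsl_harmonic_mean(acc_seen, acc_unseen):
    """Standard GZSL metric: harmonic mean of seen- and unseen-class accuracies,
       which penalises models biased towards the seen classes."""
    if acc_seen + acc_unseen == 0:
        return 0.0
    return 2 * acc_seen * acc_unseen / (acc_seen + acc_unseen)

# Illustrative accuracies only (not results from the paper).
print(round(gzsl_harmonic_mean(0.70, 0.55), 3))   # 0.616
```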

Keywords: Generalised Zero-shot Learning, Inductive Learning, Shifted-Window Attention, Swin Transformer, Vision Transformer.

4749 Advanced Neural Network Learning Applied to Pulping Modeling

Authors: Z. Zainuddin, W. D. Wan Rosli, R. Lanouette, S. Sathasivam

Abstract:

This paper reports work done to improve the modeling of complex processes when only small experimental data sets are available. Neural networks are used to capture the nonlinear underlying phenomena contained in the data set and to partly eliminate the burden of having to specify completely the structure of the model. Two different types of neural networks were used for the pulping application. Three-layer feed-forward neural networks, trained using Preconditioned Conjugate Gradient (PCG) methods, were used in this investigation. Preconditioning is a method to improve convergence by lowering the condition number and increasing the clustering of the eigenvalues. The idea is to solve the modified problem M⁻¹Ax = M⁻¹b, where M is a positive-definite preconditioner that is closely related to A. We mainly focused on Preconditioned Conjugate Gradient-based training methods which originated from optimization theory, namely Preconditioned Conjugate Gradient with Fletcher-Reeves Update (PCGF), Preconditioned Conjugate Gradient with Polak-Ribiere Update (PCGP) and Preconditioned Conjugate Gradient with Powell-Beale Restarts (PCGB). The behavior of the PCG methods in the simulations proved to be robust against phenomena such as oscillations due to large step sizes.
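
A compact NumPy sketch of the preconditioned conjugate gradient iteration for M⁻¹Ax = M⁻¹b with a Jacobi (diagonal) preconditioner is given below; the neural-network training variants cited (PCGF, PCGP, PCGB) differ in how the search direction is updated, which this linear-system sketch does not reproduce.

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradient for a symmetric positive-definite A, using a
       diagonal (Jacobi) preconditioner: solves the modified system M^-1 A x = M^-1 b."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r                 # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p      # direction update (FR and PR coincide in the linear case)
        rz = rz_new
    return x

# Small SPD test system.
rng = np.random.default_rng(0)
Q = rng.normal(size=(5, 5))
A = Q @ Q.T + 5 * np.eye(5)
b = rng.normal(size=5)
x = pcg(A, b, 1.0 / np.diag(A))
print(np.allclose(A @ x, b))   # True
```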

Keywords: Convergence, pulping modeling, neural networks, preconditioned conjugate gradient.

4748 Qualitative Profiling in Practice: The Italian Public Employment Services Experience

Authors: L. Agneni, F. Carta, C. Micheletta, V. Tersigni

Abstract:

The development of a qualitative method to profile jobseekers is needed to improve the quality of the Public Employment Services (PES) in Italy. This is why the National Agency for Active Labour Market Policies (ANPAL) decided to introduce a Qualitative Profiling Service into the activities carried out by local employment offices' operators. The qualitative profiling service provides information and data regarding the jobseeker's personal transition status, through a semi-structured questionnaire administered to PES clients during the guidance interview. The questionnaire responses allow PES staff to identify, for each client, proper activities and policy measures to support jobseekers in their reintegration into the labour market. The data and information gathered by the qualitative profiling tool are the following: the frequency, modalities and motivations of clients applying to local employment offices; clients' expectations and skills; the difficulties they faced during previous working experiences; and the strategies, actions and channels used in the job search. These data are used to assess jobseekers' personal and career characteristics and to measure their employability level (qualitative profiling index), in order to develop and deliver tailor-made action programmes for each client. This paper illustrates the use of the above-mentioned qualitative profiling service across the national territory and provides an overview of the main findings of the survey concerning the difficulties that unemployed people face in finding a job and their perception of different aspects related to the transition into the labour market. The survey involved over 10,000 jobseekers registered with the PES, most of them beneficiaries of the "citizens' income", a specific active labour policy and social inclusion measure. Furthermore, the data analysis allows jobseekers to be classified into groups of clients with similar features and behaviours, on the basis of socio-demographic variables, customers' expectations, needs and the skills required for the profession in which they seek employment. Finally, the survey collects PES staff opinions and comments concerning clients' difficulties in finding a new job, as well as their strengths. This is a starting point for PES operators to define adequate strategies to facilitate jobseekers' access or reintegration into the labour market.

Keywords: Labour market transition, Public Employment Services, qualitative profiling, vocational guidance.

4747 Design and Development of 5-DOF Color Sorting Manipulator for Industrial Applications

Authors: Atef. A. Ata, Sohair F. Rezeka, Ahmed El-Shenawy, Mohammed Diab

Abstract:

Image processing attracts massive attention in today's world, as it opens up possibilities for broad application in many fields of high technology. The real challenge is how to improve existing sorting system applications, which consist of two integrated stations, processing and handling, with a new image processing feature. Existing color sorting techniques use a set of inductive, capacitive, and optical sensors to differentiate object color. This research presents a mechatronic color sorting system solution based on image processing. A 5-DOF robot arm is designed and developed with pick-and-place operation to act as the main part of the color sorting system. The image processing procedure senses the circular objects in an image captured in real time by a webcam fixed at the end-effector, then extracts color and position information from it. This information is passed as a sequence of sorting commands to the manipulator, which has a pick-and-place mechanism. Performance analysis proves that this color-based object sorting system works accurately under ideal conditions in terms of adequate illumination and circular object shape and color. The circular objects tested for sorting are red, green and blue. For non-ideal conditions, such as an unspecified color, the accuracy drops to 80%.
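
A sketch of the colour/position extraction step using OpenCV follows; the HSV thresholds are rough illustrative ranges (red in particular wraps around hue 0 and usually needs two ranges), and the synthetic frame stands in for the authors' webcam input.

```python
import cv2
import numpy as np

# Rough illustrative HSV ranges (OpenCV hue scale 0-179), not a calibrated setup.
HSV_RANGES = {
    "red":   (np.array([0, 120, 70]),  np.array([10, 255, 255])),
    "green": (np.array([40, 70, 70]),  np.array([80, 255, 255])),
    "blue":  (np.array([100, 150, 0]), np.array([140, 255, 255])),
}

def detect_colored_objects(frame_bgr, min_area=500):
    """Return a list of (colour, (cx, cy)) centroids for blobs matching each HSV range."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    detections = []
    for colour, (lo, hi) in HSV_RANGES.items():
        mask = cv2.inRange(hsv, lo, hi)
        # OpenCV >= 4 return convention: (contours, hierarchy).
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) < min_area:
                continue                         # ignore small noise blobs
            m = cv2.moments(c)
            cx, cy = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
            detections.append((colour, (cx, cy)))
    return detections

# Usage on a synthetic frame containing one solid blue square.
frame = np.zeros((240, 320, 3), dtype=np.uint8)
frame[80:160, 120:200] = (255, 0, 0)             # BGR blue patch
print(detect_colored_objects(frame))             # [('blue', (159, 119))]
```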

Keywords: Robotic manipulator, 5-DOF manipulator, image processing, color sorting, pick-and-place.

4746 Application of IED to Condition Based Maintenance of Medium Voltage GCB/VCB

Authors: Ming-Ta Yang, Jyh-Cherng Gu, Chun-Wei Huang, Jin-Lung Guan

Abstract:

Time-based maintenance (TBM) is conventionally applied by power utilities to maintain circuit breakers (CBs), transformers, bus bars and cables, which may result in under-maintenance or over-maintenance. As the information and communication technology (ICT) industry develops, the maintenance policies of many power utilities have gradually changed from TBM to condition-based maintenance (CBM) to improve system operating efficiency, operating cost and power supply reliability. This paper discusses the feasibility of using intelligent electronic devices (IEDs) to construct a CB CBM management platform. CBs in power substations can be monitored using IEDs with additional logic configuration and wire connections. The CB monitoring data can be sent through an intranet to a control center and analyzed and integrated by the Elipse Power Studio software. Finally, a human-machine interface (HMI) of a supervisory control and data acquisition (SCADA) system can be designed to construct a CBM management platform that provides maintenance decision information for maintenance personnel, management personnel and CB manufacturers.

Keywords: Circuit breaker, Condition-based maintenance, Intelligent electronic device, Time-based maintenance, SCADA.
