Search results for: time series classification
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7968

4728 A New Design of Permanent Magnet Reluctance Generator

Authors: Andi Pawawoi, Syafii

Abstract:

The instantaneous electromagnetic torque of a simple reluctance generator can be positive at one time and negative at another. This property is exploited to design a specific permanent magnet reluctance generator. The generator is designed by combining two simple reluctance generators: it consists of two rotors mounted on the same shaft, two output windings and a permanent magnet field source. With this design, the electromagnetic torques on the two rotors cancel each other, so the generator input torque can be smaller. The rotor is expected only to regulate the flux flow to both output windings alternately, so that the magnetic energy is converted into electrical energy, as occurs in transformer energy conversion. Prototype trials have been carried out to test this design. The test results show that the new permanent magnet reluctance generator design is able to convert energy from permanent magnets into electrical energy; this is indicated by a power output of 167% compared with the shaft input power.

Keywords: Energy, Permanent magnet, Reluctance generator.

4727 Combining Minimum Energy and Minimum Direct Jerk of Linear Dynamic Systems

Authors: V. Tawiwat, P. Jumnong

Abstract:

Both minimum energy consumption and smoothness, which is quantified as a function of jerk, are generally needed in many dynamic systems, such as automobiles and pick-and-place robot manipulators that handle fragile equipment. Nevertheless, many researchers focus solely on either minimum energy consumption or minimum jerk trajectories. This paper proposes a simple yet interesting approach that combines the minimum energy and minimum direct jerk criteria in designing the time-dependent system, yielding an alternative optimal solution. Extremal solutions for the cost functions of minimum energy, minimum jerk, and their combination are found using dynamic optimization methods together with numerical approximation. This allows us to simulate and compare, visually and statistically, the time histories of the state inputs produced by the combined minimum energy and jerk designs. The numerical solutions of the minimum direct jerk problem and of the combined problem are exactly the same; the solutions of the minimum energy problem alone yield a similar solution, especially in terms of tendency.

Keywords: Optimization, Dynamic, Linear Systems, Jerks.

4726 Simulation and Validation of Spur Gear Heated by Induction Using 3D Model

Authors: A. Chebak, N. Barka, A. Menou, J. Brousseau, D. S. Ramdenee

Abstract:

This paper presents a study of the hardness profile of a spur gear heated by the induction heating process as a function of the machine parameters, such as the power (kW), the heating time (s) and the generator frequency (kHz). The overall work is realized by 3D finite-element simulation applied to the process by coupling and solving the electromagnetic field and heat transfer problems, and it was performed in three distinct steps. First, a Comsol 3D model was built using an adequate formulation and taking into account the material properties and the machine parameters. Second, a convergence study was conducted to optimize the mesh. Then, the surface temperatures and the case depths were analyzed in depth as a function of the initial current density and the heating time in medium-frequency (MF) and high-frequency (HF) heating modes, and the edge effect was studied. Finally, the simulation results were validated using experimental tests.

Keywords: Induction heating, simulation, experimental validation, 3D model, hardness profile.

4725 Statistical Modeling of Accelerated Pavement Failure Using Response Surface Methodology

Authors: Anshu Manik, Kasthurirangan Gopalakrishnan, Siddhartha K. Khaitan

Abstract:

Rutting is one of the major load-related distresses in airport flexible pavements. Rutting in paving materials develops gradually with an increasing number of load applications, usually appearing as longitudinal depressions in the wheel paths, and it may be accompanied by small upheavals to the sides. Significant research has been conducted to determine the factors which affect rutting and how they can be controlled. Using experimental design concepts, a series of tests can be conducted while varying the levels of different parameters that could be the cause of rutting in airport flexible pavements. If a proper experimental design is used, the results obtained from these tests can give better insight into the causes of rutting and into the presence of interactions and synergisms among the system variables which influence rutting. Although laboratory experiments are traditionally conducted in a controlled fashion to understand the statistical interaction of variables in such situations, this study is an attempt to identify the critical system variables influencing airport flexible pavement rut depth from a statistical DoE perspective using real field data from a full-scale test facility. The test results strongly indicate that the response (rut depth) contains too much noise to allow determination of a good model. From a statistical DoE perspective, two major changes are proposed for this experiment: (1) actual replication of the tests is definitely required, and (2) nuisance variables need to be identified and blocked properly. Further investigation is necessary to determine possible sources of noise in the experiment.

Keywords: Airport Pavement, Design of Experiments, Rutting, NAPTF.

4724 Duration Analysis of New Firms in the Banking Industry

Authors: Jesus Orbe, Vicente Nunez-Anton

Abstract:

This paper studies the duration or survival time of commercial banks active in the Moscow three-month rouble deposit market during the 1994-1997 period. The privatization process of the Russian commercial banking industry, after the 1988 banking reform, caused a massive entry of new banks followed by a period of high exit rates. As a consequence, many firms went bankrupt without refunding their deposits. Therefore, both for the banks and for the banks' depositors, it is of interest to analyze which significant characteristics motivate the exit or closure of a bank. We propose a different methodology based on penalized weighted least squares, which represents a very general, flexible and innovative approach for this type of analysis. The most relevant results are that smaller banks exit sooner and that banks entering the market in the later part of the study period have shorter durations. As expected, more experienced banks survive longer in the market. In addition, the mean survival time is lower for banks which offer extreme interest rates.
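As background for the survival-analysis setting (the keywords mention Kaplan-Meier), the sketch below shows a minimal Kaplan-Meier product-limit estimator for right-censored bank durations. The toy durations and censoring flags are hypothetical, and this is not the penalized weighted least squares methodology the paper proposes.

```python
import numpy as np

def kaplan_meier(durations, observed):
    """Kaplan-Meier survival estimate for right-censored durations.

    durations: survival/censoring times (e.g., months a bank stayed active)
    observed:  1 if the exit (bank closure) was observed, 0 if censored
    """
    durations = np.asarray(durations, dtype=float)
    observed = np.asarray(observed, dtype=int)
    event_times = np.unique(durations[observed == 1])   # distinct observed exit times
    surv, curve = 1.0, []
    for t in event_times:
        at_risk = np.sum(durations >= t)                 # banks still in the market just before t
        events = np.sum((durations == t) & (observed == 1))
        surv *= 1.0 - events / at_risk                   # product-limit update
        curve.append((t, surv))
    return curve

# toy data: durations in months, 0 = still active at end of study (censored)
print(kaplan_meier([6, 12, 12, 20, 36, 40], [1, 1, 0, 1, 0, 1]))
```

Each factor in the product is the fraction of banks at risk just before an observed exit time that survive it.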

Keywords: Banking, censored, duration, Kaplan-Meier.

4723 Analysis of Heuristic Based Hybrid Simulated Annealing Algorithm for Multiprocessor Task Scheduling

Authors: Supriya Arya, Sunita Dhingra

Abstract:

The multiprocessor task scheduling problem for dependent and independent tasks is a computationally complex problem. Many methods have been proposed to achieve an optimal running time. As multiprocessor task scheduling is NP-hard in nature, many heuristics have been proposed which improve the makespan of the problem. However, due to the problem-specific nature of these methods, the heuristic which provides the best results for one problem might not provide good results for another. Therefore, Simulated Annealing, a metaheuristic approach, is considered, since it can be applied to all types of problems. However, due to its many runs, the metaheuristic approach takes a large computation time. Hence, a hybrid approach is proposed by combining the Duplication Scheduling Heuristic and Simulated Annealing (SA), and the makespan results of simple Simulated Annealing and the hybrid approach are analyzed.
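A minimal sketch of the Simulated Annealing ingredient is given below for independent tasks only: it searches task-to-processor assignments to reduce the makespan. The toy task times, the single-task move operator and the cooling schedule are illustrative assumptions; the Duplication Scheduling Heuristic used to seed the hybrid approach is not shown.

```python
import math
import random

def makespan(assignment, task_times, n_procs):
    # completion time of the busiest processor
    loads = [0.0] * n_procs
    for task, proc in enumerate(assignment):
        loads[proc] += task_times[task]
    return max(loads)

def simulated_annealing(task_times, n_procs, T0=10.0, cooling=0.995, steps=5000):
    current = [random.randrange(n_procs) for _ in task_times]
    cost = makespan(current, task_times, n_procs)
    best, best_cost, T = current[:], cost, T0
    for _ in range(steps):
        neighbour = current[:]
        neighbour[random.randrange(len(task_times))] = random.randrange(n_procs)  # move one task
        new_cost = makespan(neighbour, task_times, n_procs)
        # always accept improving moves; accept worsening moves with Boltzmann probability
        if new_cost < cost or random.random() < math.exp((cost - new_cost) / T):
            current, cost = neighbour, new_cost
            if cost < best_cost:
                best, best_cost = current[:], cost
        T *= cooling
    return best, best_cost

print(simulated_annealing([4, 2, 7, 3, 5, 1, 6], n_procs=3))
```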

Keywords: Multiprocessor task scheduling Problem, Makespan, Duplication Scheduling Heuristic, Simulated Annealing, Hybrid Approach.

4722 Optimization of Material Removal Rate in Electrical Discharge Machining Using Fuzzy Logic

Authors: Amit Kohli, Aashim Wadhwa, Tapan Virmani, Ujjwal Jain

Abstract:

The objective of the present work is to simulate the machining of material by electrical discharge machining (EDM) and to capture the effect of input parameters such as discharge current (Ip), pulse-on time (Ton) and pulse-off time (Toff) on the output parameter, i.e. the material removal rate (MRR). Experimental data were gathered from a die-sinking EDM process using a copper electrode and medium carbon steel (AISI 1040) as the work-piece. The rules of the membership functions (MF) and the degree of closeness to the optimum value of the MRR lie within the upper and lower ranges of the process parameters. It was found that the proposed fuzzy model is in close agreement with the experimental results. Intelligent, model-based design and control of EDM process parameters, as presented in this study, will help enable dramatically decreased product and process development cycle times.
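The sketch below illustrates the kind of Mamdani-style fuzzy inference this involves: triangular membership functions, max-min rule aggregation and centroid defuzzification to produce a crisp MRR estimate. The universes, the two rules and all numeric ranges are illustrative assumptions, not the membership functions identified in the paper.

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function on [a, c] with peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

# hypothetical universes for discharge current (A) and material removal rate (mm^3/min)
ip = 12.0                                         # crisp input: discharge current
low_ip, high_ip = trimf(ip, 0, 5, 15), trimf(ip, 5, 15, 25)

mrr_axis = np.linspace(0, 60, 601)
mrr_low = trimf(mrr_axis, 0, 10, 30)
mrr_high = trimf(mrr_axis, 20, 40, 60)

# two illustrative Mamdani rules:
#   IF Ip is low  THEN MRR is low
#   IF Ip is high THEN MRR is high
aggregated = np.maximum(np.minimum(low_ip, mrr_low), np.minimum(high_ip, mrr_high))

# defuzzify by centroid to get a crisp MRR prediction
mrr_pred = np.sum(mrr_axis * aggregated) / np.sum(aggregated)
print(f"predicted MRR ~ {mrr_pred:.1f} mm^3/min")
```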

Keywords: Electrical discharge Machining (EDM), Fuzzy Logic, Material removal rate (MRR), Membership functions (MF).

4721 Incorporating Lexical-Semantic Knowledge into Convolutional Neural Network Framework for Pediatric Disease Diagnosis

Authors: Xiaocong Liu, Huazhen Wang, Ting He, Xiaozheng Li, Weihan Zhang, Jian Chen

Abstract:

The utilization of electronic medical record (EMR) data to establish disease diagnosis models has become an important research topic in biomedical informatics. Deep learning can automatically extract features from massive data, which has brought about breakthroughs in the study of EMR data. The challenge is that deep learning lacks semantic knowledge, which limits its practicability in medical science. This research proposes a method of incorporating lexical-semantic knowledge from abundant entities into a convolutional neural network (CNN) framework for pediatric disease diagnosis. First, medical terms are vectorized into Lexical Semantic Vectors (LSV), which are concatenated with the embedded word vectors of word2vec to enrich the feature representation. Second, the semantic distribution of medical terms serves as a Semantic Decision Guide (SDG) for the optimization of deep learning models. The study evaluates the performance of the LSV-SDG-CNN model on four Chinese EMR datasets. Additionally, CNN, LSV-CNN, and SDG-CNN are designed as baseline models for comparison. The experimental results show that the LSV-SDG-CNN model outperforms the baseline models on all four Chinese EMR datasets. The best configuration of the model yielded an F1 score of 86.20%. The results clearly demonstrate that the CNN has been effectively guided and optimized by lexical-semantic knowledge, and the LSV-SDG-CNN model improves disease classification accuracy by a clear margin.
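A minimal sketch of the feature-enrichment step, assuming hypothetical dimensions and random vectors: each token's Lexical Semantic Vector is concatenated with its word2vec embedding before a single convolution-and-max-pooling layer. The Semantic Decision Guide optimization is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical sizes: word2vec dimension, LSV dimension, sentence length, conv layer
W2V_DIM, LSV_DIM, SEQ_LEN, N_FILTERS, KERNEL = 100, 20, 50, 8, 3

word_vecs = rng.normal(size=(SEQ_LEN, W2V_DIM))   # word2vec embeddings of an EMR sentence
lsv_vecs = rng.normal(size=(SEQ_LEN, LSV_DIM))    # lexical semantic vectors of the same tokens

# feature enrichment: concatenate the LSV with the word embedding of each token
x = np.concatenate([word_vecs, lsv_vecs], axis=1)           # shape (SEQ_LEN, W2V_DIM + LSV_DIM)

# one convolutional layer over token windows, followed by ReLU and max-over-time pooling
filters = rng.normal(size=(N_FILTERS, KERNEL, x.shape[1]))
conv = np.array([
    [np.sum(x[t:t + KERNEL] * f) for t in range(SEQ_LEN - KERNEL + 1)]
    for f in filters
])                                                           # shape (N_FILTERS, SEQ_LEN - KERNEL + 1)
features = np.maximum(conv, 0).max(axis=1)                   # one value per filter
print(features.shape)                                        # (8,) feature vector fed to the classifier
```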

Keywords: lexical semantics, feature representation, semantic decision, convolutional neural network, electronic medical record

4720 A Discrete Element Method Centrifuge Model of Monopile under Cyclic Lateral Loads

Authors: Nuo Duan, Yi Pik Cheng

Abstract:

This paper presents the data of a series of two-dimensional Discrete Element Method (DEM) simulations of a large-diameter rigid monopile subjected to cyclic loading under a high gravitational force. At present, monopile foundations are widely used to support tall and heavy wind turbines, which are also subjected to significant loads from wind and wave actions. A safe design must address issues such as rotations and changes in soil stiffness under these loading conditions. Design guidance on the issue is limited, as is the availability of laboratory and field test data. The interpretation of these results in sand, such as the relation between loading and displacement, relies mainly on empirical correlations to pile properties. Regarding numerical models, most available data come from the Finite Element Method (FEM); they are not comprehensive, and most of the FEM results are sensitive to input parameters. The micro-scale behaviour could change the mechanism of the soil-structure interaction. A DEM model was used in this paper to study the behaviour under cyclic lateral loads. A non-dimensional framework is presented and applied to interpret the simulation results. The DEM data compare well with various sets of published experimental centrifuge model test data in terms of lateral deflection. The accumulated permanent pile lateral displacements induced by the cyclic lateral loads were found to depend on the characteristics of the applied cyclic load, such as the extent of the loading magnitudes and directions.

Keywords: Cyclic loading, DEM, numerical modelling, sands.

4719 Experimental and CFD Simulation of the Jet Pump for Air Bubbles Formation

Authors: L. Grinis, N. Lubashevsky, Y. Ostrovski

Abstract:

A jet pump is a type of pump that accelerates the flow of a secondary fluid (driven fluid) by introducing a motive fluid with high velocity into a converging-diverging nozzle. Jet pumps are also known as eductors or ejectors, depending on the motive-fluid phase: the ejector's motive fluid is gaseous, usually steam or air, while the eductor's motive fluid is a liquid, usually water. Jet pumps that produce air bubbles are widely used in wastewater treatment processes. In this work, we discuss the characteristics of the jet pump and the computational simulation of this device. To find the optimal angle and depth for the air pipe, so as to achieve the maximal air volumetric flow rate, an experimental apparatus was constructed to ascertain the best geometrical configuration for this new type of jet pump. Using 3D printing technology, a series of jet pumps was printed and tested with the aim of maximizing the air flow rate as a function of the angle and depth of the air pipe insertion. The experimental results show a major difference of up to 300% in performance (ratio of air flow rate to supplied power) between the different pumps, where the optimal geometric model has an insertion angle of 60° and an air pipe insertion depth ending at the center of the mixing chamber. The differences between the pumps were further explained using CFD for a better understanding of the factors that affect the air flow rate. The validity of the computational simulation and the corresponding assumptions has been proved experimentally, and the present research showed a high degree of congruence with the results of the laboratory tests. This study demonstrates the potential of using the jet pump in many practical applications.

Keywords: Air bubbles, CFD simulation, jet pump, practical applications.

4718 Scheduling Multiple Workflow Using De-De Dodging Algorithm and PBD Algorithm in Cloud: Detailed Study

Authors: B. Arun Kumar, T. Ravichandran

Abstract:

Workflow scheduling is an important part of cloud computing; based on different criteria, it determines cost, execution time and performance. A cloud workflow system is a platform service facilitating the automation of distributed applications based on new cloud infrastructure. An aspect which differentiates a cloud workflow system from others is its market-oriented business model, an innovation which challenges conventional workflow scheduling strategies. The Time and Cost optimization algorithm for scheduling Hybrid Clouds (TCHC), which decides which resources should be chartered from public providers, is combined with a new De-De algorithm so that every instance of single and multiple workflows runs without deadlocks. To this end, two new concepts - the De-De Dodging Algorithm and the Priority Based Decisive Algorithm - address conventional deadlock avoidance issues by proposing one algorithm that maximizes active (not just allocated) resource use and reduces makespan.

Keywords: Workflow Scheduling, cloud workflow, TCHC algorithm, De-De Dodging Algorithm, Priority Based Decisive Algorithm (PBD), Makespan.

4717 Cirrhosis Mortality Prediction as Classification Using Frequent Subgraph Mining

Authors: Abdolghani Ebrahimi, Diego Klabjan, Chenxi Ge, Daniela Ladner, Parker Stride

Abstract:

In this work, we use machine learning and data analysis techniques to predict the one-year mortality of cirrhotic patients. Data from 2,322 patients with liver cirrhosis were collected at a single medical center. Different machine learning models are applied to predict one-year mortality. A comprehensive feature space including demographic information, comorbidities, clinical procedures and laboratory tests is analyzed. A temporal pattern mining technique called Frequent Subgraph Mining (FSM) is used. The Model for End-stage Liver Disease (MELD) score prediction of mortality is used as a comparator. All of our models statistically significantly outperform the MELD-score model and show an average 10% improvement in the area under the curve (AUC). The FSM technique by itself does not improve the model significantly, but FSM together with an ensemble machine learning technique further improves the model performance. With the abundance of data available in healthcare through electronic health records (EHR), existing predictive models can be refined to identify and treat patients at risk for higher mortality. However, due to the sparsity of the temporal information needed by FSM, the FSM model does not yield significant improvements. Our work applies modern machine learning algorithms and data analysis methods to predicting the one-year mortality of cirrhotic patients and builds a model that predicts one-year mortality significantly more accurately than the MELD score. We have also tested the potential of FSM and provided a new perspective on the importance of clinical features.
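A hedged sketch of the ensemble step on synthetic data is shown below, using scikit-learn and AUC as the metric; the features, class balance and base learners are stand-ins, not the EMR feature space or MELD comparator used in the study.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# synthetic stand-in for the EMR feature matrix (demographics, comorbidities, labs, FSM patterns)
X, y = make_classification(n_samples=2000, n_features=40, weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# soft-voting ensemble of several base learners, as in the "FSM + ensemble" configuration
ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                ("gb", GradientBoostingClassifier(random_state=0))],
    voting="soft",
)
ensemble.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, ensemble.predict_proba(X_te)[:, 1])
print(f"ensemble AUC: {auc:.3f}")   # the paper compares such models against a MELD-style baseline
```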

Keywords: machine learning, liver cirrhosis, subgraph mining, supervised learning

4716 A Pole Radius Varying Notch Filter with Transient Suppression for Electrocardiogram

Authors: Ramesh Rajagopalan, Adam Dahlstrom

Abstract:

Noise removal techniques play a vital role in the performance of electrocardiographic (ECG) signal processing systems. ECG signals can be corrupted by various kinds of noise, such as baseline wander, electromyographic interference, and powerline interference. One of the significant challenges in ECG signal processing is the degradation caused by additive 50 or 60 Hz powerline interference. This work investigates the removal of powerline interference and the suppression of the transient response when filtering noise-corrupted ECG signals. We demonstrate the effectiveness of an infinite impulse response (IIR) notch filter with a time-varying pole radius for improving the transient behavior. The temporary change in the pole radius of the filter diminishes the transient. Simulation results show that the proposed IIR filter with time-varying pole radius outperforms traditional IIR notch filters in terms of mean square error and transient suppression.
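A minimal sketch of the idea, assuming a 50 Hz interferer, a 500 Hz sampling rate and an illustrative exponential ramp of the pole radius: the notch starts wide (small radius) to damp the start-up transient and then narrows as the radius approaches one.

```python
import numpy as np

fs, f0 = 500.0, 50.0                              # sampling rate and powerline frequency (Hz)
w0 = 2 * np.pi * f0 / fs

t = np.arange(0, 4, 1 / fs)
ecg_like = np.sin(2 * np.pi * 1.2 * t)            # crude stand-in for the ECG component
x = ecg_like + 0.5 * np.sin(2 * np.pi * f0 * t)   # add 50 Hz powerline interference

# pole radius ramps from r_start to r_end: a small radius right after start-up shortens
# the transient, a radius close to 1 gives a narrow steady-state notch (values illustrative)
r_start, r_end, tau = 0.75, 0.99, 0.05
r = r_end - (r_end - r_start) * np.exp(-t / tau)

y = np.zeros_like(x)
b1 = -2 * np.cos(w0)                              # numerator fixed: zeros on the unit circle at +/- w0
for n in range(len(x)):
    x1 = x[n - 1] if n >= 1 else 0.0
    x2 = x[n - 2] if n >= 2 else 0.0
    y1 = y[n - 1] if n >= 1 else 0.0
    y2 = y[n - 2] if n >= 2 else 0.0
    # second-order notch with time-varying pole radius r[n] (gain not normalized)
    y[n] = x[n] + b1 * x1 + x2 - r[n] * b1 * y1 - r[n] ** 2 * y2

print(np.round(y[:5], 3))
```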

Keywords: Notch filter, ECG, transient, pole radius.

4715 A Novel Neighborhood Defined Feature Selection on Phase Congruency Images for Recognition of Faces with Extreme Variations

Authors: Satyanadh Gundimada, Vijayan K Asari

Abstract:

A novel feature selection strategy to improve recognition accuracy on faces affected by non-uniform illumination, partial occlusions and varying expressions is proposed in this paper. This technique is applicable especially in scenarios where the possibility of obtaining a reliable intra-class probability distribution is minimal due to a small number of training samples. Phase congruency features in an image are defined as the points where the Fourier components of that image are maximally in phase. These features are invariant to the brightness and contrast of the image under consideration. This property makes lighting-invariant face recognition achievable. Phase congruency maps of the training samples are generated, and a novel modular feature selection strategy is implemented. Smaller sub-regions from a predefined neighborhood within the phase congruency images of the training samples are merged to obtain a large set of features. These features are arranged in order of increasing distance between the sub-regions involved in merging. The assumption behind the proposed region merging and arrangement strategy is that local dependencies among the pixels are more important than global dependencies. The obtained feature sets are then arranged in decreasing order of discriminating capability using a criterion function, which is the ratio of the between-class variance to the within-class variance of the sample set, in the PCA domain. The results indicate a large improvement in classification performance compared to baseline algorithms.
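The criterion function can be illustrated with a simplified sketch: rank individual PCA-domain features by the ratio of between-class to within-class variance. The random data and the per-feature (rather than per-feature-set) ranking are simplifying assumptions relative to the paper's region-merging strategy.

```python
import numpy as np

def fisher_ratio(feature, labels):
    """Ratio of between-class to within-class variance for a single 1-D feature."""
    classes = np.unique(labels)
    overall_mean = feature.mean()
    between = sum(np.sum(labels == c) * (feature[labels == c].mean() - overall_mean) ** 2
                  for c in classes)
    within = sum(((feature[labels == c] - feature[labels == c].mean()) ** 2).sum()
                 for c in classes)
    return between / (within + 1e-12)

rng = np.random.default_rng(1)
n_samples, n_features = 60, 20
labels = np.repeat([0, 1, 2], n_samples // 3)
X = rng.normal(size=(n_samples, n_features))
X[:, 3] += labels * 2.0                       # make feature 3 discriminative on purpose

# project to the PCA domain, then rank projected features by decreasing Fisher ratio
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
X_pca = Xc @ Vt.T
scores = np.array([fisher_ratio(X_pca[:, j], labels) for j in range(X_pca.shape[1])])
ranking = np.argsort(scores)[::-1]
print("features ordered by discriminating capability:", ranking[:5])
```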

Keywords: Discriminant analysis, intra-class probability distribution, principal component analysis, phase congruency.

4714 A Comparative Study of Rigid and Modified Simplex Methods for Optimal Parameter Settings of ACO for Noisy Non-Linear Surfaces

Authors: Seksan Chunothaisawat, Pongchanun Luangpaiboon

Abstract:

There are two common types of operational research techniques: optimisation and metaheuristic methods. The latter may be defined as a sequential process that intelligently performs exploration and exploitation, adopted from natural intelligence and strong inspiration, to form several iterative searches. The aim is to effectively determine near-optimal solutions in a solution space. In this work, a type of metaheuristic called Ant Colony Optimisation (ACO), inspired by the foraging behaviour of ants, was adapted to find optimal solutions of eight non-linear continuous mathematical models. Considering a solution space in a specified region of each model, the sub-solutions may contain a global optimum or multiple local optima. Moreover, the algorithm has several common parameters - the number of ants, moves and iterations - which act as the algorithm's drivers. A series of computational experiments for initialising these parameters was conducted using the methods of Rigid Simplex (RS) and Modified Simplex (MSM). The experimental results were analysed in terms of the best-so-far solutions, mean and standard deviation. Finally, recommended level settings of the ACO parameters for all eight functions are stated. These parameter settings can be applied as a guideline for future uses of ACO, in order to promote the ease of use of ACO in real industrial processes. It was found that the results obtained from MSM were quite similar to those gained from RS. However, when the results with noise standard deviations of 1 and 3 are compared, MSM reaches optimal solutions more efficiently than RS in terms of speed of convergence.

Keywords: Ant colony optimisation, metaheuristics, modified simplex, non-linear, rigid simplex.

4713 Structural Characterization and Physical Properties of Antimicrobial (AM) Starch-Based Films

Authors: Eraricar Salleh, Ida Idayu Muhamad, Nozieanna Khairuddin

Abstract:

Antimicrobial (AM) starch-based films were developed by incorporating chitosan and lauric acid as antimicrobial agents into a starch-based film. Chitosan has a wide range of applications as a biomaterial, but barriers to its broader use still exist due to its physical and chemical limitations. In this work, a series of starch/chitosan (SC) blend films containing 8% lauric acid was prepared by the casting method. The structure of the films was characterized by Fourier transform infrared spectroscopy (FTIR), X-ray diffraction (XRD) and scanning electron microscopy (SEM). The results indicated that strong interactions were present between the hydroxyl groups of starch and the amino groups of chitosan, resulting in good miscibility between starch and chitosan in the blend films. The physical and optical properties of the AM starch-based films were evaluated. The AM starch-based films incorporating chitosan and lauric acid showed an improvement in water vapour transmission rate (WVTR); increasing the starch content gave more transparent films, while the yellowness of the films was attributed to higher chitosan content. The improvement in water barrier properties was mainly attributed to the hydrophobicity of lauric acid and to the optimum chitosan or starch content. The AM starch-based films also showed an excellent oxygen barrier. Obtaining films with good oxygen barrier properties indicates the potential use of this antimicrobial packaging as a natural alternative to synthetic polymer packaging to protect food from oxidation reactions.

Keywords: Antimicrobial starch-based films, chitosan, lauric acid, starch.

4712 Prediction of Time to Crack Reinforced Concrete by Chloride Induced Corrosion

Authors: Anuruddha Jayasuriya, Thanakorn Pheeraphan

Abstract:

In this paper, different mathematical models that can be used as prediction tools to assess the time to cracking of reinforced concrete (RC) due to corrosion are reviewed. This investigation leads to an experimental study to validate a selected prediction model. Most of these mathematical models depend upon the mechanical behaviour, chemical behaviour, electrochemical behaviour or geometric aspects of the RC members during the corrosion process. The experimental program is designed to verify the accuracy of a mathematical model selected through a rigorous literature study. The program covers both one-dimensional chloride diffusion, using RC square slab elements of 500 mm by 500 mm, and two-dimensional chloride diffusion, using RC square column elements of 225 mm by 225 mm by 500 mm. Each set consists of three water-to-cement ratios (w/c) of 0.4, 0.5 and 0.6, and two cover depths of 25 mm and 50 mm. Bars of 12 mm are used for the column elements and 16 mm bars for the slab elements. All samples are subjected to accelerated chloride corrosion in a bath of 5% (w/w) sodium chloride (NaCl) solution. Based on a pre-screening of different models, it is clear that the selected mathematical model includes mechanical properties, chemical and electrochemical properties, the nature of the corrosion (accelerated or natural), and the amount of porous area that rust products can fill before exerting expansive pressure on the surrounding concrete. The experimental results show that the selected model has ±20% and ±10% accuracy for one-dimensional and two-dimensional chloride diffusion, respectively, compared with the experimental output. Half-cell potential readings are also used to assess the corrosion probability, and the experimental results show that the mass loss is proportional to the negative half-cell potential readings obtained. Additionally, a statistical analysis is carried out to determine the most influential factor affecting the time to corrosion of the reinforcement in the concrete due to chloride diffusion. The factors considered for this analysis are w/c, bar diameter and cover depth. The analysis, performed using Minitab statistical software, shows that cover depth has a more significant effect on the time to cracking of the concrete from chloride-induced corrosion than the other factors considered. Thus, time predictions can be made through the selected mathematical model, as it covers a wide range of factors affecting the corrosion process, and it can be used to assess the durability of RC structures that are vulnerable to chloride exposure. It is further concluded that cover thickness plays a vital role in durability with respect to chloride diffusion.

Keywords: Accelerated corrosion, chloride diffusion, corrosion cracks, passivation layer, reinforcement corrosion.

4711 Application of Sensory Thermography as Measuring Method to Study Median Nerve Temperatures

Authors: Javier Ordorica Villalvazo, Claudia Camargo Wilson, Jesus Everardo Olguin Tiznado

Abstract:

This paper presents an experimental case using sensory thermography to describe the temperature behavior of the median nerve after an activity of repetitive motion. Thermography is a noninvasive technique that poses no biological hazard or harm and has been applied in many experiments seeking temperature patterns that help to understand diseases such as cancer and cumulative trauma disorders (CTDs). An infrared sensory thermography technology was developed to carry out this study. Three healthy women, two right-handed and one left-handed, were selected for the repetitive-motion tests over 4 days; sensory thermographers were placed over the median nerve at both wrists to take measurements. The evaluation time was 3 hours 30 minutes at a controlled temperature, with 20 minutes of stabilization time at the beginning and end of the operation. The temperature distributions were statistically evaluated and showed similar temperature pattern behavior.

Keywords: Median nerve, temperature, sensory thermography, wrists, CTD’s.

4710 Opportunistic Routing with Secure Coded Wireless Multicast Using MAS Approach

Authors: E. Golden Julie, S. Tamil Selvi, Y. Harold Robinson

Abstract:

Many Wireless Sensor Network (WSN) applications necessitate secure multicast services for broadcasting delay-sensitive data, such as video files and live telecasts, in fixed time slots. This work provides a novel method to deal with the end-to-end delay and packet drop rate. Opportunistic Routing chooses a link based on the maximum packet delivery probability. Null Key Generation helps authenticate packets to the receiver. A Markov Decision Process-based Adaptive Scheduling algorithm determines the time slot for packet transmission. Both theoretical analysis and simulation results show that the proposed protocol ensures better performance in terms of packet delivery ratio, average end-to-end delay and normalized routing overhead.

Keywords: Delay-sensitive data, Markovian Decision Process based Adaptive Scheduling, Opportunistic Routing, Digital Signature authentication.

4709 Replicating Data Objects in Large-scale Distributed Computing Systems using Extended Vickrey Auction

Authors: Samee Ullah Khan, Ishfaq Ahmad

Abstract:

This paper proposes a novel game-theoretical technique to address the problem of data object replication in large-scale distributed computing systems. The proposed technique draws inspiration from computational economic theory and employs the extended Vickrey auction. Specifically, players in a non-cooperative environment compete for server-side scarce memory space to replicate data objects so as to minimize the total network object transfer cost, while maintaining object concurrency. Optimization of this cost in turn leads to load balancing, fault tolerance and reduced user access time. The method is experimentally evaluated against four well-known techniques from the literature: branch and bound, greedy, bin-packing and genetic algorithms. The experimental results reveal that the proposed approach outperforms the four techniques in both execution time and solution quality.
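The basic second-price (Vickrey) mechanism underlying the approach can be sketched as below for a single data object; the bids and server names are hypothetical, and the paper's extended auction additionally handles memory constraints and object concurrency.

```python
def vickrey_auction(bids):
    """Sealed-bid second-price auction: the highest bidder wins and pays the second-highest bid.

    bids: dict mapping server id -> bid (e.g., reported network-cost saving from hosting
          a replica of the data object in its scarce memory)
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0   # second-highest bid
    return winner, price

# hypothetical bids by three servers competing to replicate one data object
bids = {"server_A": 12.0, "server_B": 9.5, "server_C": 7.0}
winner, price = vickrey_auction(bids)
print(f"{winner} hosts the replica and is charged {price}")
# charging the second-highest bid is what makes truthful reporting a dominant strategy
```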

Keywords: Auctions, data replication, pricing, static allocation.

4708 Study on Disaster Prevention Plan for an Electronic Industry in Thailand

Authors: S. Pullteap, M. Pathomsuriyaporn

Abstract:

In this article, a study of employees' opinions on the factors that affect the flood prevention and corrective action plan in an electronics plant, Sharp Manufacturing (Thailand) Co., Ltd., is presented. Survey data from 175 workers and supervisors were selected for analysis. The results show that the employees emphasize the need for a subsidy at the time of disaster at a high level of 77.8%, while the plan focusing on flood protection of the rehabilitation equipment is rated at an intermediate level of 79.8%. Hypothesis testing found that different education levels affect the needs factor at the time of a flood disaster. Moreover, most respondents give priority to the flood disaster risk management factor. Consequently, we found that the flood prevention plan is rated at a high level, especially for information monitoring, which reaches 93.4% for the supervisor item. The respondents largely assume, at up to 80%, that a flood would have impacts on the plant, so the focus on flood management plans is considerable.

Keywords: Flood prevention plan, flood event, electronic industrial plant, disaster, risk management.

4707 Analytical Solution of Time-Harmonic Torsional Vibration of a Cylindrical Cavity in a Half-Space

Authors: M.Eskandari-Ghadi, M.Mahmoodian

Abstract:

In this article, an isotropic linear elastic half-space with a cylindrical cavity of finite length is considered to be under the effect of a ring-shaped time-harmonic torsion force applied at an arbitrary depth on the surface of the cavity. The equation of equilibrium is written in a cylindrical coordinate system. By means of the Fourier cosine integral transform, the non-zero displacement component is obtained in the transformed domain. With the aid of the inversion theorem of the Fourier cosine integral transform, the displacement is obtained in the real domain. Using the boundary conditions, the boundary value problem for the fundamental solution is reduced to a generalized Cauchy singular integral equation. Integral representations of the stress and displacement are obtained, and it is shown that their degenerate form for the static problem coincides with existing solutions in the literature.
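For reference, in the axisymmetric torsion problem only the circumferential displacement u_θ(r, z) is non-zero, and the time-harmonic equation of motion in cylindrical coordinates takes the standard form below (stated here as background; the paper's exact formulation and notation may differ):

```latex
\frac{\partial^2 u_\theta}{\partial r^2}
  + \frac{1}{r}\frac{\partial u_\theta}{\partial r}
  - \frac{u_\theta}{r^2}
  + \frac{\partial^2 u_\theta}{\partial z^2}
  + \frac{\rho\,\omega^2}{\mu}\,u_\theta = 0,
```

where μ is the shear modulus, ρ the mass density and ω the circular frequency of the harmonic excitation; applying the Fourier cosine transform in z reduces this to an ordinary differential equation in r.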

Keywords: Cosine transform, Half space, Isotropic, Singular integral equation, Torsion

4706 High Performance in Parallel Data Integration: An Empirical Evaluation of the Ratio Between Processing Time and Number of Physical Nodes

Authors: Caspar von Seckendorff, Eldar Sultanow

Abstract:

Many studies have shown that parallelization decreases efficiency [1], [2]. There are many reasons for these decrements. This paper investigates those which appear in the context of parallel data integration. Integration processes generally cannot be allocated to packages of identical size (i.e., tasks of identical complexity); the reason for this is unknown, heterogeneous input data, which results in variable task lengths. The process delay is defined by the slowest processing node and has a detrimental effect on the total processing time. With a real-world example, this study shows that while the process delay initially increases with the introduction of more nodes, it ultimately decreases again after a certain point. The example makes use of the cloud computing platform Hadoop and is run inside Amazon's EC2 compute cloud. A stochastic model is set up which can explain this effect.
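The slowest-node effect can be sketched with a toy allocation model, assuming log-normally distributed task lengths and greedy assignment to the least-loaded node; this is only an illustration, not the stochastic model developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(42)

def makespan(task_lengths, n_nodes):
    """Greedy allocation: each task goes to the currently least-loaded node.

    The total processing time is set by the slowest node (the process-delay effect).
    """
    loads = np.zeros(n_nodes)
    for length in task_lengths:
        loads[loads.argmin()] += length
    return loads.max()

# heterogeneous input data -> variable task lengths (log-normal, illustrative choice)
tasks = rng.lognormal(mean=1.0, sigma=1.0, size=500)

ideal = tasks.sum()
for n in (1, 2, 4, 8, 16, 32, 64):
    m = makespan(tasks, n)
    efficiency = ideal / (n * m)        # fraction of node-time spent on useful work
    print(f"{n:3d} nodes: makespan {m:8.1f}, efficiency {efficiency:.2f}")
```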

Keywords: Process delay, speedup, efficiency, parallel computing, data integration, E-Commerce, Amazon Elastic Compute Cloud (EC2), Hadoop, Nutch.

4705 Faster Pedestrian Recognition Using Deformable Part Models

Authors: Alessandro Preziosi, Antonio Prioletti, Luca Castangia

Abstract:

Deformable part models achieve high precision in pedestrian recognition, but all publicly available implementations are too slow for real-time applications. We implemented a deformable part model algorithm fast enough for real-time use by exploiting information about the camera position and orientation. This implementation is both faster and more precise than alternative DPM implementations. These results are obtained by computing convolutions in the frequency domain and using lookup tables to speed up feature computation. This approach is almost an order of magnitude faster than the reference DPM implementation, with no loss in precision. Knowing the position of the camera with respect to the horizon, it is also possible to prune many hypotheses based on their size and location. The range of acceptable sizes and positions is set by looking at the statistical distribution of bounding boxes in labelled images. With this approach it is not necessary to compute the entire feature pyramid: for example, higher-resolution features are only needed near the horizon. This results in an increase in mean average precision of 5% and an increase in speed by a factor of two. Furthermore, to reduce misdetections involving small pedestrians near the horizon, input images are supersampled near the horizon. Supersampling the image at 1.5 times the original scale results in an increase in precision of about 4%. The implementation was tested against the public KITTI dataset, obtaining an 8% improvement in mean average precision over the best performing DPM-based method. By allowing for a small loss in precision, computational time can easily be brought down to our target of 100 ms per image, reaching a solution that is faster and still more precise than all publicly available DPM implementations.
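A minimal sketch of the frequency-domain convolution ingredient, using NumPy FFTs on stand-in feature maps and filters (not the actual HOG pyramid or DPM part filters):

```python
import numpy as np

def conv2d_fft(image, kernel):
    """2-D linear convolution computed in the frequency domain.

    Zero-padding to the full output size makes the circular FFT convolution
    equal to ordinary linear convolution.
    """
    out_shape = (image.shape[0] + kernel.shape[0] - 1,
                 image.shape[1] + kernel.shape[1] - 1)
    F_img = np.fft.rfft2(image, out_shape)
    F_ker = np.fft.rfft2(kernel, out_shape)
    return np.fft.irfft2(F_img * F_ker, out_shape)

rng = np.random.default_rng(0)
feature_map = rng.normal(size=(128, 64))   # stand-in for one feature channel
part_filter = rng.normal(size=(6, 6))      # stand-in for one part filter

fast = conv2d_fft(feature_map, part_filter)

# spot check against direct computation of one output value
i, j = 10, 20
direct = np.sum(feature_map[i:i + 6, j:j + 6] * part_filter[::-1, ::-1])
print(np.allclose(fast[i + 5, j + 5], direct))   # True
```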

Keywords: Autonomous vehicles, deformable part model, dpm, pedestrian recognition.

4704 Volatile Profile of Monofloral Honeys Produced by Stingless Bees from the Brazilian Semiarid Region

Authors: Ana Caroliny Vieira da Costa, Marta Suely Madruga

Abstract:

In Brazil, there is a diverse fauna of social bees, known as Meliponinae or native stingless bees. These bees are important for providing a differentiated product, especially regarding unique sweetness, flavor, and aroma. However, information about the volatile fraction of honey produced by native stingless bees is still lacking. The aim of this work was to characterize the volatile compound profile of monofloral honey produced by jandaíra bees (Melipona subnitida Ducke), which used chanana (Turnera ulmifolia L.), malícia (Mimosa quadrivalvis) and algaroba (Prosopis juliflora (Sw.) DC) as their floral sources, and by uruçu bees (Melipona scutellaris Latrelle), which used chanana (Turnera ulmifolia L.), malícia (Mimosa quadrivalvis) and angico (Anadenanthera colubrina) as their floral sources. The volatiles were extracted using the HS-SPME-GC-MS technique. The extraction conditions were an equilibration time of 15 minutes, an extraction time of 45 minutes and an extraction temperature of 45°C. The results show that the floral source had a strong influence on the aroma profile of the honeys under evaluation, since the chemical profiles were marked primarily by the classes of terpenes, norisoprenoids, and benzene derivatives. Furthermore, the results suggest the existence of differentiating compounds and potential markers for the botanical sources evaluated, such as linalool, D-sylvestrene, rose oxide and benzenethanol. These findings represent a valuable contribution to certifying the authenticity of these honeys and provide, for the first time, information intended for the construction of chemical knowledge of the aroma and flavor that characterize these honeys produced in Brazil.

Keywords: Aroma, honey, semiarid, stingless, volatiles.

4703 Land Suitability Prediction Modelling for Agricultural Crops Using Machine Learning Approach: A Case Study of Khuzestan Province, Iran

Authors: Saba Gachpaz, Hamid Reza Heidari

Abstract:

The sharp increase in population growth puts more pressure on agricultural areas to satisfy the food supply. This necessitates increased resource consumption and underscores the importance of addressing sustainable agricultural development along with other environmental considerations. Land-use management is a crucial factor in obtaining optimum productivity. Machine learning is a widely used technique in the agricultural sector, from yield prediction to customer behavior; it learns patterns and correlations from a data set. In this study, nine physical control factors, namely soil classification, electrical conductivity, normalized difference water index (NDWI), groundwater level, elevation, annual precipitation, pH of water, annual mean temperature, and slope, in the alluvial plain of Khuzestan (an agricultural hotspot in Iran) are used to decide the best agricultural land use, for both rainfed and irrigated agriculture, for 10 different crops. For this purpose, each variable was imported into ArcGIS and a raster layer was obtained. In the next step, using training samples, all layers were imported into the Python environment. A random forest model was applied, and the weight of each variable was determined. In the final step, the results were visualized using a digital elevation model, and the importance of all factors for each of the crops was obtained. Our results show that although 62% of the study area is allocated to agricultural purposes, only 42.9% of these areas can be defined as a suitable class for cultivation.
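A minimal sketch of the modelling step, assuming synthetic stand-in rasters for the nine control factors: a random forest classifier is trained and the per-factor weights are read from the impurity-based feature importances.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# hypothetical training samples: one row per location, columns are the nine control factors
factors = ["soil_class", "EC", "NDWI", "groundwater_level", "elevation",
           "precipitation", "water_pH", "mean_temperature", "slope"]
rng = np.random.default_rng(7)
X = rng.normal(size=(1500, len(factors)))        # stand-in for raster values sampled in ArcGIS
y = rng.integers(0, 10, size=1500)               # suitability class: one of 10 crops

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=300, random_state=0)
model.fit(X_tr, y_tr)

print("test accuracy:", model.score(X_te, y_te))
# "weight of each variable": impurity-based importance of each control factor
for name, weight in sorted(zip(factors, model.feature_importances_), key=lambda kv: -kv[1]):
    print(f"{name:18s} {weight:.3f}")
```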

Keywords: Land suitability, machine learning, random forest, sustainable agriculture.

4702 Hydrodynamic Modeling of a Surface Water Treatment Pilot Plant

Authors: C.-M. Militaru, A. Pǎcalǎ, I. Vlaicu, K. Bodor, G.-A. Dumitrel, T. Todinca

Abstract:

A mathematical model for the hydrodynamics of a surface water treatment pilot plant was developed and validated by determining the residence time distribution (RTD) for the main equipment of the unit. The well-known models of ideal/real mixing, ideal displacement (plug flow) and (one-dimensional axial) dispersion were combined in order to identify the structure that gives the best fit to the experimental data for each piece of equipment in the pilot plant. The RTD experimental results have shown that the pilot plant hydrodynamics can be approximated quite well by a combination of simple mathematical models, a structure which is suitable for engineering applications. The validated hydrodynamic models will be further used in the evaluation and selection of the most suitable coagulation-flocculation reagents and the optimum operating conditions (injection point, reaction times, etc.), in order to improve the quality of the drinking water.
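As an illustration of one of the building blocks, the sketch below evaluates the exit-age distribution of the tanks-in-series (ideal mixers in series) RTD model; the number of tanks and the dimensionless time grid are illustrative, and the actual pilot-plant model combines several such elements.

```python
import numpy as np
from math import factorial

def tanks_in_series_rtd(theta, n):
    """Exit-age distribution E(theta) of n ideal mixers in series.

    theta = t / tau is dimensionless time; n = 1 recovers a single ideal mixer,
    while large n approaches plug flow.
    """
    return (n ** n) * theta ** (n - 1) * np.exp(-n * theta) / factorial(n - 1)

theta = np.linspace(0, 6, 601)
for n in (1, 3, 10):
    E = tanks_in_series_rtd(theta, n)
    mean = np.trapz(theta * E, theta)   # dimensionless mean residence time, approaches 1
    print(f"n = {n:2d}: dimensionless mean residence time ~ {mean:.3f}")
```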

Keywords: drinking water, hydrodynamic modeling, pilot plant, residence time distribution, surface water.

4701 Optimization and Kinetic Study of Gaharu Oil Extraction

Authors: Muhammad Hazwan H., Azlina M.F., Hasfalina C.M., Zurina Z.A., Hishamuddin J

Abstract:

Gaharu, produced by Aquilaria spp., is classified as one of the most valuable forest products traded internationally, as it is a very resinous, fragrant and highly valuable heartwood. Gaharu has been widely used in aromatherapy, medicine, perfume and religious practices. This work aimed to determine the factors affecting the solid-liquid extraction of gaharu oil using hexane as the solvent under experimental conditions. The kinetics of extraction was assumed and verified based on a second-order mechanism. The effects of three main factors, namely temperature, reaction time and solvent-to-solid ratio, were investigated to achieve maximum oil yield. The optimum conditions were found to be a temperature of 65°C, a reaction time of 9 hours and a solvent-to-solid ratio of 12:1, giving a 14.5% oil yield. The experimental kinetic data agree well with the second-order extraction model. The initial extraction rate (h) was 0.0115 g mL⁻¹ min⁻¹, the extraction capacity (Cs) was 1.282 g mL⁻¹, the second-order extraction constant (k) was 0.007 mL g⁻¹ min⁻¹ and the coefficient of determination, R², was 0.945.
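A hedged sketch of fitting the second-order extraction model C(t) = Cs²kt / (1 + Cs·k·t), for which the initial rate is h = k·Cs², is shown below; the time-concentration points are hypothetical stand-ins, not the paper's measurements, and SciPy's curve_fit performs the nonlinear regression.

```python
import numpy as np
from scipy.optimize import curve_fit

def second_order_model(t, k, Cs):
    """Extracted oil concentration at time t for second-order extraction kinetics:
       C(t) = Cs**2 * k * t / (1 + Cs * k * t)"""
    return Cs ** 2 * k * t / (1 + Cs * k * t)

# hypothetical time series (min) and oil concentration (g/mL) from an extraction run
t = np.array([30, 60, 120, 180, 240, 360, 480, 540], dtype=float)
C = np.array([0.28, 0.47, 0.70, 0.83, 0.92, 1.04, 1.11, 1.14])

(k, Cs), _ = curve_fit(second_order_model, t, C, p0=[0.01, 1.2])
h = k * Cs ** 2                                    # initial extraction rate
residuals = C - second_order_model(t, k, Cs)
r2 = 1 - np.sum(residuals ** 2) / np.sum((C - C.mean()) ** 2)
print(f"k = {k:.4f} mL/(g*min), Cs = {Cs:.3f} g/mL, h = {h:.4f} g/(mL*min), R^2 = {r2:.3f}")
```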

Keywords: Gaharu, solid liquid extraction, optimization, kinetics.

4700 The New Semi-Experimental Method for Simulation of Turbine Flow Meters Rotation in the Transitional Flow

Authors: J. Tonkonogij, A. Pedišius, A. Stankevičius

Abstract:

A new semi-experimental method for simulating the rotation of turbine flow meters in transitional flow has been developed. The method is based on the experimentally established exponential law of change of the dimensionless relative turbine gas meter rotation frequency and on the meter inertia time constant. For the experimental evaluation of the meter time constant, a special facility has been developed. The facility ensures instant switching of the turbine meter under test from one channel to another channel with a different flow rate and measures the meter response. The developed method can be used for evaluating and predicting the turbine meter response and dynamic error in transitional flow with any arbitrary law of flow rate change. Examples of the method's application are presented.
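A minimal sketch of the first-order exponential response implied by such a law, with an illustrative time constant and a step flow change (the values are assumptions, not the facility's data):

```python
import numpy as np

def meter_response(t, q_flow, tau):
    """First-order (exponential) response of the indicated flow rate of a turbine meter.

    q_flow: true flow rate sampled at times t
    tau:    meter inertia time constant (s)
    The indicated rate relaxes exponentially towards the true rate, modelling the
    rotor frequency lag during flow transitions.
    """
    q_ind = np.empty_like(q_flow)
    q_ind[0] = q_flow[0]
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        q_ind[i] = q_flow[i] + (q_ind[i - 1] - q_flow[i]) * np.exp(-dt / tau)
    return q_ind

# step change in flow rate from 10 to 25 m^3/h at t = 5 s, illustrative tau = 1.5 s
t = np.linspace(0, 20, 401)
q_true = np.where(t < 5, 10.0, 25.0)
q_indicated = meter_response(t, q_true, tau=1.5)
dynamic_error = q_indicated - q_true               # largest right after the step
print(f"max dynamic error magnitude: {np.abs(dynamic_error).max():.2f} m^3/h")
```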

Keywords: Dynamic error, pulsing flow, numerical simulation, response, turbine gas meters.

4699 Automatically Driven Vector for Guidewire Segmentation in 2D and Biplane Fluoroscopy

Authors: Simon Lessard, Pascal Bigras, Caroline Lau, Daniel Roy, Gilles Soulez, Jacques A. de Guise

Abstract:

The segmentation of endovascular tools in fluoroscopy images can be performed accurately, automatically or with minimal user intervention, using known modern techniques. This has been proven in the literature, but no clinical implementation exists so far because the computational time requirements of such technology have not yet been met. A classical segmentation scheme is composed of edge enhancement filtering, line detection and segmentation. A new method is presented that consists of a vector that propagates in the image to track an edge as it advances. The filtering is performed progressively along the projected path of the vector, whose orientation allows for oriented edge detection, and only a minimal image area is filtered overall. Such an algorithm is rapidly computed and can be implemented in real-time applications. It was tested on medical fluoroscopy images from an endovascular cerebral intervention. Experiments showed that the 2D tracking was limited to guidewires without intersection crosspoints, while the 3D implementation was able to cope with such planar difficulties.

Keywords: Edge detection, Line Enhancement, Segmentation, Fluoroscopy.
