Search results for: Interleaving technique
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3070

700 Fast Adjustable Threshold for Uniform Neural Network Quantization

Authors: Alexander Goncharenko, Andrey Denisov, Sergey Alyamkin, Evgeny Terentev

Abstract:

Neural network quantization is a highly desirable procedure to perform before running neural networks on mobile devices. Quantization without fine-tuning leads to an accuracy drop of the model, whereas the commonly used training with quantization is done on the full set of labeled data and is therefore both time- and resource-consuming. Real-life applications require a simplified and accelerated quantization procedure that maintains the accuracy of the full-precision neural network, especially for modern mobile architectures like MobileNet-v1, MobileNet-v2 and MNAS. Here we present a method to significantly optimize the training-with-quantization procedure by introducing trained scale factors for the discretization thresholds that are separate for each filter. Using the proposed technique, we quantize modern mobile neural network architectures with a training set of only ∼10% of the total ImageNet 2012 sample. Such a reduction of the training dataset size, together with the small number of trainable parameters, allows the network to be fine-tuned within several hours while maintaining the high accuracy of the quantized model (the accuracy drop was less than 0.5%). Ready-for-use models and code are available in the GitHub repository.
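
As a rough illustration of per-filter thresholds with trained scale factors (a minimal NumPy sketch; the function name, clipping scheme and bit-width handling are assumptions, not the authors' released code):

    import numpy as np

    def quantize_per_filter(weights, scales, bits=8):
        """Uniformly quantize a conv weight tensor with one trained
        scale factor per output filter (axis 0).

        weights: array of shape (filters, ...), full precision
        scales:  array of shape (filters,), trainable multipliers
                 applied to the per-filter max-abs thresholds
        """
        qmax = 2 ** (bits - 1) - 1                      # e.g. 127 for int8
        flat = weights.reshape(weights.shape[0], -1)
        base = np.abs(flat).max(axis=1) + 1e-12         # per-filter threshold
        t = base * scales                               # trained adjustment
        step = t / qmax
        q = np.clip(np.round(flat / step[:, None]), -qmax, qmax)
        deq = q * step[:, None]                         # dequantized weights
        return q.astype(np.int8), deq.reshape(weights.shape)

    w = np.random.randn(16, 3, 3, 3).astype(np.float32)
    s = np.ones(16, dtype=np.float32)   # learned during short fine-tuning
    q, w_hat = quantize_per_filter(w, s)

During fine-tuning, only the scale factors s (and a few other parameters) would be trained, which is what keeps the procedure fast on a small data subset.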

Keywords: Distillation, machine learning, neural networks, quantization.

699 Person-Environment Fit (PE Fit): Evidence from Brazil

Authors: Jucelia Appio, Danielle Deimling De Carli, Bruno Henrique Rocha Fernandes, Nelson Natalino Frizon

Abstract:

The purpose of this paper is to investigate whether there are positive and significant correlations between the dimensions of Person-Environment Fit (Person-Job, Person-Organization, Person-Group and Person-Supervisor) at the “Best Companies to Work for” in Brazil in 2017. A quantitative approach with a descriptive method was used, with the research sample defined as the "150 Best Companies to Work for", according to a database collected in 2017 and provided by the Fundação Instituto de Administração (FIA) of the University of São Paulo (USP). For the data analysis procedures, asymmetry and kurtosis, factor analysis, the Kaiser-Meyer-Olkin (KMO) test, Bartlett's sphericity test and Cronbach's alpha were applied to the 69 research variables, and Pearson's correlation analysis was performed as the statistical technique for testing the hypothesis. As the main result, we highlight that there was a positive and significant correlation between the dimensions of Person-Environment Fit, corroborating hypothesis H1 that Person-Job Fit, Person-Organization Fit, Person-Group Fit and Person-Supervisor Fit are positively and significantly correlated.

Keywords: Human resource management, person-environment fit, strategic people management, best companies to work for.

698 Extracting Single Trial Visual Evoked Potentials using Selective Eigen-Rate Principal Components

Authors: Samraj Andrews, Ramaswamy Palaniappan, Nidal Kamel

Abstract:

In single-trial analysis, when using Principal Component Analysis (PCA) to extract Visual Evoked Potential (VEP) signals, the selection of principal components (PCs) is an important issue. We propose a new method here that selects only the appropriate PCs, denoted selective eigen-rate (SER). In this method, the VEP is reconstructed based on the rate of the eigenvalues of the PCs. When the technique is applied to emulated VEP signals with added background electroencephalogram (EEG), focusing on extraction of the evoked P3 parameter, it is found to be feasible. The improvement in signal-to-noise ratio (SNR) is superior to two existing methods of PC selection: Kaiser (KSR) and Residual Power (RP). Though another PC selection method, Spectral Power Ratio (SPR), gives a comparable SNR at high noise levels (i.e., strong EEG), SER gives more impressive results in such cases. Next, we applied the SER method to real VEP signals to analyse the P3 responses for matched and non-matched stimuli. The P3 parameters extracted through the proposed SER method showed a higher P3 response for the matched stimulus, which conforms to existing neuroscience knowledge. Single-trial PCA using the KSR and RP methods failed to indicate any difference between the stimuli.
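
A minimal sketch of eigenvalue-rate-based PC selection is given below (the exact SER selection rule and threshold are assumptions; the paper defines the precise criterion):

    import numpy as np

    def ser_select(trials, rate_threshold=0.05):
        """Select principal components by the rate (relative size) of
        their eigenvalues and reconstruct the single-trial VEP.

        trials: array (n_trials, n_samples) of EEG epochs
        """
        X = trials - trials.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        evals, evecs = np.linalg.eigh(cov)
        order = np.argsort(evals)[::-1]             # descending eigenvalues
        evals, evecs = evals[order], evecs[:, order]
        rates = evals / evals.sum()                 # eigen-rate of each PC
        keep = rates >= rate_threshold              # assumed selection rule
        scores = X @ evecs[:, keep]
        return scores @ evecs[:, keep].T + trials.mean(axis=0)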

Keywords: Electroencephalogram, P3, Single trial VEP.

697 Influence of Pressure from Compression Textile Bands: Their Using in the Treatment of Venous Human Leg Ulcers

Authors: Bachir Chemani, Rachid Halfaoui

Abstract:

The aim of this study was to evaluate the pressure distribution characteristics of elastic textile bandages using two instrumental techniques: a prototype instrument and a load-transference test. The prototype instrument, which simulates the shape of a real leg, has pressure sensors that measure bandage pressure. Using this instrument, the results show that elastic textile bandages present different pressure distribution characteristics and that none produces a uniform distribution around the lower limb.

The load-transference test procedure is used to determine whether a relationship exists between elastic textile bandage structure and pressure distribution characteristics. The test assesses the degree of load transferred directly through a textile when a series of loads is applied to the bandaging surface. A range of woven fabrics was produced using a needle weaving machine and a sewing technique. A textile bandage with optimal characteristics was developed, giving far superior pressure distribution compared to the other bandages. From the results, we find that the theoretical pressure does not agree exactly with the measured pressure. An important follow-up to this study is a practical application by specialized nurses in order to verify the results and draw useful conclusions for predicting the use of this type of elastic band.

Keywords: Textile, cotton, pressure, venous ulcers, elastic.

696 Using Electrical Impedance Tomography to Control a Robot

Authors: Shayan Rezvanigilkolaei, Shayesteh Vefaghnematollahi

Abstract:

Electrical impedance tomography (EIT) is a non-invasive imaging technique suitable for medical applications. This paper describes an EIT device with the ability to navigate a robotic arm to manipulate a target object. The design of the device includes various hardware and software sections to perform medical imaging and control the robotic arm. In the hardware section, an image is formed using 16 electrodes located around a container. This image is used to navigate a 3-DOF robotic arm to the exact location of the target object. The data set for impedance imaging is obtained by repeated current injections and voltage measurements between all electrode pairs. After the calculations necessary to obtain the impedance, the information is transmitted to the computer. These data are then processed in MATLAB, which is interfaced with EIDORS (Electrical Impedance Tomography Reconstruction Software) to reconstruct the image from the acquired data. In the next step, the coordinates of the center of the target object are calculated with the MATLAB Image Processing Toolbox (IPT). Finally, these coordinates are used to calculate the angles of each joint of the robotic arm, and the arm moves to the desired tissue on the user's command.
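
As a toy sketch of the last two steps (centroid extraction and joint-angle computation), assuming a base-yaw plus two-link planar arm, since the paper's arm geometry is not given here:

    import numpy as np

    def centroid(img, threshold=0.5):
        """Center of the target object in a reconstructed EIT image
        (binary thresholding as a stand-in for the IPT step)."""
        ys, xs = np.nonzero(img > threshold)
        return xs.mean(), ys.mean()

    def ik_3dof(x, y, z, l1=1.0, l2=1.0):
        """Joint angles for a base-yaw + two-link arm reaching (x, y, z)."""
        base = np.arctan2(y, x)                     # rotate base toward target
        rho = np.hypot(x, y)                        # reach in the base plane
        r = np.hypot(rho, z)                        # straight-line distance
        cos_elbow = (r**2 - l1**2 - l2**2) / (2 * l1 * l2)
        elbow = np.arccos(np.clip(cos_elbow, -1.0, 1.0))
        shoulder = np.arctan2(z, rho) - np.arctan2(
            l2 * np.sin(elbow), l1 + l2 * np.cos(elbow))
        return base, shoulder, elbow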

Keywords: Electrical impedance tomography, EIT, surgeon robot, image processing of electrical impedance tomography.

695 Cost-Effective Private Grid Using Object-based Grid Architecture

Authors: M. Victor Jose, V. Seenivasagam

Abstract:

This paper proposes a cost-effective private grid using an Object-based Grid Architecture (OGA). In OGA, data-processing privacy and inter-communication are improved through an object-oriented concept. The limitation of existing grids is that a user can enter or leave the grid at any time, without a schedule or a dedicated resource. To overcome these limitations, a cost-effective private grid and appropriate algorithms are proposed. Here, each system contains two platforms: a grid platform and a local platform. The grid manager service running on a local personal computer can act as a grid resource. When a system comes online, the Monitoring and Information System (MIS) is notified and the details are maintained in the Resource Object Table (ROT). The MIS is responsible for selecting the resource where a file or replica should be stored. Resource storage is done within virtual single private grid nodes using random object addressing to prevent theft attacks. If any grid resource goes down, its resource ID is removed from the ROT, and resource recovery is efficiently managed through the replicas. This random addressing technique unifies the grid storage, so the user views the entire grid network as a single system.

Keywords: Object Grid Architecture, Grid Manager Service, Resource Object Table, Random Object Addressing, Object Storage, Dynamic Object Update.

694 Tensile Properties of Aluminum Silicon Nickel Iron Vanadium High Entropy Alloys

Authors: Sefiu A. Bello, Nasirudeen K. Raji, Jeleel A. Adebisi, Sadiq A. Raji

Abstract:

Pure metals are rarely used for structural applications because of their limited properties. Presently, high entropy alloys (HEAs) are emerging, made by mixing comparable proportions of metals with the aim of maximizing the entropy, leading to enhancement of structural and mechanical properties. An Aluminum Silicon Nickel Iron Vanadium (AlSiNiFeV) alloy was developed using the stir casting technique and analysed. The results show that alloy grade G0 contains 44 percent by weight (wt%) Al, 32 wt% Si, 9 wt% Ni, 4 wt% Fe, 3 wt% V and 8 wt% minor elements, with a tensile strength and elongation of 106 N/mm² and 2.68%, respectively. X-ray diffraction confirmed intermetallic compounds having hexagonal close-packed (HCP), orthorhombic and cubic structures in a cubic dendritic matrix. This affirmed the transformation from the cubic structures of the elemental constituents of the HEA to the precipitated structures of the intermetallic compounds. A maximum tensile strength of 188 N/mm² with 4% elongation was observed at 10 wt% silica addition to G0. The increase in tensile strength with increasing silica content can be attributed to the different phases and crystal geometries characterizing each HEA.

Keywords: High entropy alloys, phases, model, tensile strength.

693 Mathematical Approach towards Fault Detection and Isolation of Linear Dynamical Systems

Authors: V. Manikandan, N. Devarajan

Abstract:

The main objective of this work is to provide fault detection and isolation based on Markov parameters for residual generation and a neural network for fault classification. The diagnostic approach is accomplished in two steps. In step 1, the system is identified from a series of input/output variables through an identification algorithm. In step 2, the fault is diagnosed by comparing the Markov parameters of the faulty and non-faulty systems; an artificial neural network trained on predetermined faulty conditions serves to classify the unknown fault. In step 1, identification is done by first formulating a Hankel matrix out of the input/output variables and then decomposing the matrix via the singular value decomposition technique. For online identification, a sliding window approach is adopted, wherein a window slides over a subset of n input/output variables. Faults are introduced at arbitrary instants and the identification is carried out online. Fault residuals are extracted by comparing the first five Markov parameters of the faulty and non-faulty systems. The proposed diagnostic approach is illustrated on benchmark problems with encouraging results.
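
A minimal sketch of the step-1 machinery (Hankel matrix plus SVD) and a simplified residual is shown below; true Markov-parameter extraction uses a realization algorithm, so the singular-value comparison here is only a stand-in:

    import numpy as np

    def hankel_matrix(y, rows):
        """Stack delayed copies of an output sequence into a Hankel matrix."""
        cols = len(y) - rows + 1
        return np.array([y[i:i + cols] for i in range(rows)])

    def fault_residual(y_nominal, y_faulty, rows=10, n_params=5):
        """Compare the dominant structure of two Hankel matrices via SVD;
        the difference of the first few singular values serves as a
        simplified fault residual."""
        H0 = hankel_matrix(y_nominal, rows)
        H1 = hankel_matrix(y_faulty, rows)
        s0 = np.linalg.svd(H0, compute_uv=False)[:n_params]
        s1 = np.linalg.svd(H1, compute_uv=False)[:n_params]
        return s1 - s0

In the paper's scheme, such residuals (from the first five Markov parameters) would feed the trained neural network for fault classification.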

Keywords: Artificial neural network, Fault Diagnosis, Identification, Markov parameters.

692 Development and Characterization of Bio-Tribological, Nano-Multilayer Coatings for Medical Tools Application

Authors: L. Major, J. M. Lackner, M. Dyner, B. Major

Abstract:

The development of a new generation of bio-tribological multilayer coatings opens an avenue for the fabrication of future high-tech functional surfaces. In the presented work, nano-composite Cr/CrN+[Cr/a-C:H implanted with metallic nanocrystals] multilayer coatings have been developed for the surface protection of medical tools. The thin films were fabricated by a hybrid Pulsed Laser Deposition (PLD) technique. Complex microstructure analysis of the nano-multilayer coatings subjected to mechanical and biological tests was performed by transmission electron microscopy (TEM). The microstructure characterization revealed a layered arrangement of Cr23C6 nanoparticles in the multilayer structure. The influence of deposition conditions on the bio-tribological properties of the coatings was studied. The bio-tests were used as a screening tool for the analyzed nano-multilayer coatings before deposition on medical tools. Bio-medical tests were done using fibroblasts. The mechanical properties of the coatings were investigated by means of a ball-on-disc test, microhardness was measured using a Berkovich indenter, and scratch adhesion was tested using a Rockwell indenter. From the bio-tribological point of view, the C106_1 material had the optimal properties.

Keywords: Bio-tribological coatings, cell-material interaction, hybrid PLD, tribology.

691 Analytical Studies on Volume Determination of Leg Ulcer using Structured Light and Laser Triangulation Data Acquisition Techniques

Authors: M. Abdul-Rani, K. K. Chong, A. F. M. Hani, Y. B. Yap, A. Jamil

Abstract:

Imaging is the process of obtaining geometric images, either two-dimensional or three-dimensional, by scanning or digitizing existing objects or products. In this research it is applied to retrieve 3D information of the human skin surface in a medical application: analyzing and determining the volume of leg ulcers using imaging devices. Volume determination is one of the important criteria in the clinical assessment of leg ulcers, as the volume and size of the wound indicate whether it is responding to treatment, healing or worsening. Different imaging techniques are expected to give different results (and accuracies) in generating data and images. A midpoint projection algorithm was used to reconstruct the cavity into a solid model and compute the volume; misinterpretation of the results can affect treatment efficacy. The objective of this paper is to compare the accuracy of two 3D data acquisition methods, laser triangulation and structured light. Using models of known volume, it is shown that the structured-light-based 3D technique produces better accuracy than the laser triangulation method for leg ulcer volume determination.

Keywords: Imaging, Laser Triangulation, Structured Light, Volume Determination.

690 Steel Dust as a Coating Agent for Iron Ore Pellets at Ironmaking

Authors: M. Bahgat, H. Hanafy, H. Al-Tassan

Abstract:

Cluster formation is a common phenomenon during direct reduction processes in shaft furnaces. Decreasing the reducing temperature to avoid this problem can cause a significant drop in throughput. In order to prevent sticking of pellets, a coating material that is essentially inactive under the reducing conditions prevailing in the shaft furnace should be applied to cover the outer layer of the pellets. In the present work, steel dust is used as a coating material for iron ore pellets to explore the effectiveness of dust coating and to determine the best coating conditions. Steel dust coating was applied to iron ore pellets in various concentrations: dust slurry concentrations of 5.0-30% were used to obtain coated steel dust amounts of 1.0-5.0 kg per ton of iron ore. The coated pellets were reduced isothermally using the weight-loss technique with a gas mixture simulating the composition of reducing gases in shaft furnaces. The influence of the various coating conditions on the reduction behavior and the morphology was studied, and the optimally reduced samples were compared by sticking index measurement. It was found that the optimum steel dust coating condition, achieving higher reducibility with a lower sticking index, was a 30% steel dust slurry concentration with 3.0 kg steel dust per ton of ore.

Keywords: Ironmaking, coating, steel dust, reduction.

689 On Determining the Most Effective Technique Available in Software Testing

Authors: Qasim Zafar, Matthew Anderson, Esteban Garcia, Steven Drager

Abstract:

Software failures can be an enormous detriment to people's lives and cost millions of dollars to repair when they are unexpectedly encountered in the wild. Although a significant portion of the software development lifecycle and its resources is dedicated to testing, software failures remain a relatively frequent occurrence. The evaluation of testing effectiveness is therefore at the forefront of ensuring high-quality software, and software metrics play a critical role in providing valuable insights into quantifiable objectives for assessing the level of assurance and confidence in the system. As the selection of appropriate metrics can be an arduous process, the goal of this paper is to shed light on the significance of software metrics by examining a range of testing techniques and metrics and identifying key areas for improvement. In doing so, this paper presents a method to compare the effectiveness of testing techniques with heterogeneous output metrics. Through this investigation, readers will gain a deeper understanding of how metrics can help drive informed decision-making on delivering high-quality software and facilitate continuous improvement in testing practices.

Keywords: Software testing, software metrics, testing effectiveness, black box testing, random testing, adaptive random testing, combinatorial testing, fuzz testing, equivalence partition, boundary value analysis, white box testing.

688 Gait Biometric for Person Re-Identification

Authors: Lavanya Srinivasan

Abstract:

Biometric identification identifies unique features of a person, such as fingerprints, iris, ear and voice, and typically requires the subject's permission and physical contact. Gait biometrics identifies a person by the unique way they walk, by extracting motion features; its main advantage is that it works at a distance, without any physical contact. In this work, gait biometrics is used for person re-identification. A person walking naturally is compared with the same person walking with a bag, a coat and a case, recorded using long-wave infrared, short-wave infrared, medium-wave infrared and visible cameras, in both rural and urban environments. Pre-processing includes human detection using You Only Look Once (YOLO), background subtraction, silhouette extraction and synthesis of the Gait Entropy Image from the averaged silhouettes. The motion features are extracted from the Gait Entropy Image, reduced in dimensionality by Principal Component Analysis and recognized using different classifiers. The comparative results show that Linear Discriminant Analysis outperforms the other classifiers, with 95.8% for the visible camera in the rural dataset and 94.8% for long-wave infrared in the urban dataset.
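
A minimal sketch of the Gait Entropy Image computation, following the common definition (pixel-wise Shannon entropy of the cycle-averaged silhouettes); the paper's exact variant may differ:

    import numpy as np

    def gait_entropy_image(silhouettes):
        """Gait Entropy Image: Shannon entropy of each pixel of the
        silhouettes averaged over one gait cycle.

        silhouettes: array (n_frames, H, W) of binary silhouettes
        """
        p = silhouettes.mean(axis=0)                # average silhouette
        p = np.clip(p, 1e-6, 1 - 1e-6)              # avoid log(0)
        return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

Static body regions (entropy near 0) are suppressed, while the swinging limbs (entropy near 1) dominate the features passed to PCA and the classifiers.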

Keywords: Biometric, gait, silhouettes, You Only Look Once.

687 Electronic Nose Based On Metal Oxide Semiconductor Sensors as an Alternative Technique for the Spoilage Classification of Oat Milk

Authors: A. Deswal, N. S. Deora, H. N. Mishra

Abstract:

The aim of the present study was to develop a rapid electronic-nose method for online quality control of oat milk. Analysis by electronic nose and bacteriological measurements was performed to analyze the spoilage kinetics of oat milk samples stored at room temperature and under refrigerated conditions for up to 15 days. Principal Component Analysis (PCA), Discriminant Factorial Analysis (DFA) and Soft Independent Modelling by Class Analogy (SIMCA) classification techniques were used to differentiate the oat milk samples at different days. The total plate count (bacteriological method) was selected as the reference method to consistently train the electronic-nose system. The e-nose was able to differentiate between oat milk samples of varying microbial load. The results obtained from the total viable bacteria counts showed that the shelf-life of oat milk stored at room temperature and under refrigerated conditions was 20 hours and 13 days, respectively. The models built classified oat milk samples into “unspoiled” and “spoiled” based on the total microbial population.

Keywords: Electronic-nose, bacteriological, shelf-life, classification.

686 Perception of Hygiene Knowledge among Staff Working in Top Five Famous Restaurants of Male’

Authors: Zulaikha Reesha Rashaad

Abstract:

One of the major factors that can contribute greatly to the success of catering businesses is employing food and beverage staff with sound hygiene knowledge. Individuals with sound hygiene knowledge have a higher chance of following safe food practices in food production; conversely, lack of hygiene knowledge among food and beverage staff in catering establishments and restaurants has been identified as one of the leading causes of food poisoning and food-borne illnesses. This research analyzes the hygiene knowledge of food and beverage staff working in the top five restaurants of Male', in relation to their age, educational background, occupation and training. The research uses quantitative and descriptive methods in data collection and analysis. Data were obtained through a random sampling technique with self-administered survey questionnaires completed by 60 respondents (service staff and chefs) working in five restaurants operating at the top level in Male'. The responses were analyzed using SPSS. The results indicate that age, education level, occupation and training correlate with hygiene knowledge perception scores.

Keywords: Food and beverage staff, food poisoning, food production, hygiene knowledge.

685 Resveratrol Incorporated Liposomes Prepared from Pegylated Phospholipids and Cholesterol

Authors: Mont Kumpugdee-Vollrath, Khaled Abdallah

Abstract:

Liposomes and pegylated liposomes have long been used as drug delivery systems in the pharmaceutical field. Previously, however, polyethylene glycol (PEG) was attached to the phospholipid after the liposomes had already been prepared. In this paper, we study the possibility of using phospholipids that are already conjugated with PEG to prepare liposomes. The model drug resveratrol was used because it can be applied against different diseases, and cholesterol was added to stabilize the liposome membrane. The thin-film technique at laboratory scale was used as the preparation method. The liposomes were then characterized by nanoparticle tracking analysis (NTA), photon correlation spectroscopy (PCS) and light microscopy. Stable liposomes were produced, and the particle sizes after filtration were in the nanometer range. The 2- and 3-chain PEG-phospholipid (PL) resulted in smaller particle sizes than the 4-chain PEG-PL. Liposomes from PL 90G and cholesterol were stable during storage at 8 °C for 56 days, as the particle sizes measured by PCS remained almost unchanged. There was almost no leakage of resveratrol from the PL 90G liposomes with cholesterol in a dialysis-tube diffusion test over 28 days. All liposomes showed sustained release over the 270 min measurement period. A maximum release of 16-20% was detected for liposomes from the 2- and 3-chain PEG-PL; the other liposomes released only about 10% of the resveratrol. The release kinetics can be described by the Korsmeyer-Peppas equation.
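
The Korsmeyer-Peppas model mentioned above is commonly written as (standard form; the fitted values of k and n are the study's own):

    \frac{M_t}{M_\infty} = k\,t^{n}

where M_t/M_∞ is the fraction of drug released at time t, k is the release-rate constant and n is the release exponent characterizing the transport mechanism (n ≈ 0.5 indicating Fickian diffusion for thin films).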

Keywords: Liposome, NTA, resveratrol, pegylation, cholesterol.

684 An Improved Method on Static Binary Analysis to Enhance the Context-Sensitive CFI

Authors: Qintao Shen, Lei Luo, Jun Ma, Jie Yu, Qingbo Wu, Yongqi Ma, Zhengji Liu

Abstract:

Control Flow Integrity (CFI) is one of the most promising techniques for defending against Code-Reuse Attacks (CRAs). Traditional CFI systems and recent context-sensitive CFI use coarse control flow graphs (CFGs) to analyze whether a control-flow hijack occurs, leaving vast space for attackers at indirect call sites. Coarse CFGs make it difficult to decide which targets may execute at indirect control-flow transfers, and in practice weaken existing CFI systems. Extracting precise and complete CFGs from binaries remains an unsolved problem. In this paper, we present an algorithm to obtain a more precise CFG from binaries. First, parameters are analyzed at indirect call sites and at functions. By comparing the count of parameters prepared before a call site with the count consumed by each function, the set of legal targets of indirect calls is reduced, so the control flow is more constrained at indirect call sites at runtime. Combined with context-sensitive CFI (CCFI), we implement our policy. Experimental results on several popular programs show that our approach is efficient, and further analysis shows that it can mitigate COOP and other advanced attacks.
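
As a toy illustration of the parameter-count matching policy (the names and data layout are invented for illustration; the real analysis works on binary code):

    def allowed_targets(callsite_args, functions):
        """Restrict an indirect call site to functions whose declared
        parameter count does not exceed the number of argument
        registers/slots prepared at the call site.

        callsite_args: number of arguments set up before the call
        functions: dict mapping function name -> parameter count
        """
        return [f for f, n_params in functions.items()
                if n_params <= callsite_args]

    # a call site preparing 2 arguments cannot legally target a
    # function that consumes 3:
    funcs = {"log_msg": 2, "memcpy_like": 3, "close_fd": 1}
    print(allowed_targets(2, funcs))   # ['log_msg', 'close_fd']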

Keywords: Context-sensitive, CFI, binary analysis, code reuse attack.

683 Matching Pursuit based Removal of Cardiac Pulse-Related Artifacts in EEG/fMRI

Authors: Rainer Schneider, Stephan Lau, Levin Kuhlmann, Simon Vogrin, Maciej Gratkowski, Mark Cook, Jens Haueisen

Abstract:

Cardiac pulse-related artifacts in EEG recorded simultaneously with fMRI are complex and highly variable, and their effective removal is an unsolved problem. Our aim is to develop an adaptive removal algorithm based on the matching pursuit (MP) technique and to compare it to established methods using a visual evoked potential (VEP). We recorded the VEP inside the static magnetic field of an MR scanner (with artifacts) as well as in an electrically shielded room (artifact-free). The MP-based artifact removal outperformed average artifact subtraction (AAS) and optimal basis set removal (OBS) in terms of restoring the EEG field map topography of the VEP. Subsequently, a dipole model was fitted to the VEP under each condition using a realistic boundary element head model. The source location of the VEP recorded inside the MR scanner was closest to that of the artifact-free VEP after cleaning with the MP-based algorithm as well as with AAS. While none of the tested algorithms offered complete removal, MP showed promising results due to its ability to adapt to variations in latency, frequency and amplitude of individual artifact occurrences while still utilizing a common template.
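
A minimal sketch of plain matching pursuit is shown below; the artifact-template dictionary construction and subtraction strategy are the paper's own and are not reproduced here:

    import numpy as np

    def matching_pursuit(x, D, n_iter=50):
        """Greedy matching pursuit: iteratively subtract the dictionary
        atom best correlated with the residual.

        x: signal (n,);  D: dictionary (n, n_atoms), columns unit-norm
        Returns the sparse approximation of x.
        """
        residual = x.astype(float).copy()
        approx = np.zeros_like(residual)
        for _ in range(n_iter):
            corr = D.T @ residual              # correlation with each atom
            k = np.argmax(np.abs(corr))        # best-matching atom
            approx += corr[k] * D[:, k]
            residual -= corr[k] * D[:, k]
        return approx

For artifact removal, the approximation fitted from pulse-artifact atoms would be subtracted from the contaminated EEG channel.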

Keywords: Matching pursuit, ballistocardiogram, artifact removal, EEG/fMRI.

682 Influencing of Rice Residue Management Method on GHG Emission from Rice Cultivation

Authors: Cheewaphongphan P., Garivait S., Pongpullponsak A., Patumsawad S.

Abstract:

Thailand is one of the world's leading rice producers and exporters. Farmers have to increase the frequency of rice cultivation to serve the nation's increasing export demand, which leads to the elimination of rice residues by open burning, the quickest and cheapest management method. Open burning of rice residue is, however, one of the major sources of air pollutants and greenhouse gas (GHG) emissions. Under the ASEAN agreement on transboundary haze, Thailand set up a master plan to mitigate air pollutant emissions from the open burning of agricultural residues. In this master plan, residue incorporation is promoted as an alternative to open burning. However, assessments comparing both options in terms of GHG emissions, to investigate their contribution to long-term global warming, are still scarce or nonexistent. In this study, a method for rice residue assessment was first developed in order to estimate and compare GHG emissions from rice cultivation under open burning of rice residues and under incorporation of the same amount of residues, using the 2006 IPCC guidelines for emission estimation and Life Cycle Analysis techniques. Emissions from rice cultivation under different area preparation practices are also discussed.
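
For orientation, a simplified 2006-IPCC-style seasonal CH4 estimate for a rice field might look like the sketch below (the scaling-factor structure follows the guidelines; the numeric values are placeholders, not the study's data):

    def rice_ch4_emission(area_ha, days, ef_baseline=1.30,
                          sf_water=1.0, roa_t_ha=0.0, cfoa=1.0):
        """Seasonal CH4 (kg) for a rice field, IPCC-2006 style:
        EF_adj = EF_baseline * SF_water * SF_organic, with
        SF_organic = (1 + ROA * CFOA) ** 0.59, where ROA is the
        organic amendment rate (e.g. incorporated straw, t/ha)."""
        sf_organic = (1.0 + roa_t_ha * cfoa) ** 0.59
        ef = ef_baseline * sf_water * sf_organic    # kg CH4 / ha / day
        return ef * days * area_ha

    # straw incorporation raises the seasonal field emission relative
    # to removal/burning of the same residue (burning instead emits
    # CH4, N2O and aerosols directly, which LCA accounts for):
    print(rice_ch4_emission(1.0, 120, roa_t_ha=4.0))
    print(rice_ch4_emission(1.0, 120, roa_t_ha=0.0))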

Keywords: Greenhouse gases, incorporation, rice cultivation, rice field residue, rice residue management.

681 Magnetohydrodynamic Maxwell Nanofluids Flow over a Stretching Surface through a Porous Medium: Effects of Non-Linear Thermal Radiation, Convective Boundary Conditions and Heat Generation/Absorption

Authors: Sameh E. Ahmed, Ramadan A. Mohamed, Abd Elraheem M. Aly, Mahmoud S. Soliman

Abstract:

In this paper, the enhancement of heat transfer using non-Newtonian nanofluids in magnetohydrodynamic (MHD) mixed convection along stretching sheets embedded in an isotropic porous medium is investigated. The case of Maxwell nanofluids is studied using the two-phase mathematical model of nanofluids, and the Darcy model is applied for the porous medium. Important effects are taken into account, namely non-linear thermal radiation, convective boundary conditions, the electromagnetic force and the presence of a heat source/sink. Suitable similarity transformations are used to convert the governing equations to a system of ordinary differential equations, which is then solved numerically using a fourth-order Runge-Kutta method with a shooting technique. The main results reveal that the velocity profiles are decreasing functions of the Darcy number, the Deborah number and the magnetic field parameter, and that an increase in the non-linear radiation parameter enhances the local Nusselt number.
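
A generic sketch of the fourth-order Runge-Kutta shooting technique on a toy boundary value problem (not the paper's governing equations) is given below:

    import numpy as np

    def rk4_step(f, t, y, h):
        k1 = f(t, y)
        k2 = f(t + h/2, y + h/2 * k1)
        k3 = f(t + h/2, y + h/2 * k2)
        k4 = f(t + h, y + h * k3)
        return y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

    def shoot(slope, f, y0, t_end=1.0, n=200):
        """Integrate from t=0 with a guessed initial slope and return
        the terminal value of the first state."""
        y = np.array([y0, slope])
        h = t_end / n
        for i in range(n):
            y = rk4_step(f, i * h, y, h)
        return y[0]

    # toy BVP: y'' = -y, y(0) = 0, y(1) = 1  (exact slope: 1/sin(1))
    f = lambda t, y: np.array([y[1], -y[0]])
    lo, hi = 0.0, 5.0
    for _ in range(60):                 # bisection on the missing slope
        mid = 0.5 * (lo + hi)
        if shoot(mid, f, 0.0) < 1.0:
            lo = mid
        else:
            hi = mid
    print(mid, 1 / np.sin(1.0))         # both ≈ 1.1884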

Keywords: MHD, nanofluids, stretching surface, non-linear thermal radiation, convective condition.

680 Rheological Properties of Polyethylene and Polypropylene Modified Bitumen

Authors: Noor Zainab Habib, Ibrahim Kamaruddin, Madzalan Napiah, Isa Mohd Tan

Abstract:

This paper presents part of a research program on the rheological properties of bitumen modified with thermoplastics, namely linear low-density polyethylene (LLDPE), high-density polyethylene (HDPE) and polypropylene (PP), and their interaction with 80 pen base bitumen. Modification of bitumen with polymers is known to enhance its performance characteristics, but at the same time it significantly alters its rheological properties. The rheological study of polymer-modified bitumen (PMB) was made through penetration, ring-and-ball softening point and viscosity tests, and the results were related to the changes in the rheological properties of the modified binder. It was observed that the thermoplastic copolymers show a more profound effect on penetration than on softening point. The viscoelastic behavior of polymer-modified bitumen depends on the concentration of the polymer, the mixing temperature, the mixing technique, the solvating power of the base bitumen and the molecular structure of the polymer used. PP offers a better blend than HDPE and LLDPE. The viscosity of the base bitumen was also enhanced by the addition of polymer, with pseudoplastic behavior more prominent for HDPE and LLDPE than for PP. The best results were obtained when the polymer concentration was kept below 3%.

Keywords: Polymer modified bitumen, linear low density polyethylene, high density polyethylene, polypropylene.

679 Using the PARIS Method for Multiple Criteria Decision Making in Unmanned Combat Aircraft Evaluation and Selection

Authors: C. Ardil

Abstract:

Unmanned combat aircraft (UCA) are expanding significantly in several defense industries, along with artificial intelligence improvements in highly precise technology. UCA are crucial in military settings for targeting enemy elements and objects, and are also utilized for highly precise reconnaissance and surveillance tasks. To select the best alternative for critical missions, a methodical and effective strategy for UCA selection is required, and multiple criteria decision-making (MCDM) methodologies are ideally equipped to handle the complexity of alternative aircraft selection. To analyze the UCA alternatives, an integrated methodology is proposed, built on objective criteria weights and preference analysis for reference ideal solution (PARIS). First, the weights of the essential criteria are determined using the average weight (AW), standard deviation (SW) and entropy weight (EW) approaches; these weights drive the decision-making process. The aircraft choices in the decision problem are then ranked using the objective criteria weights together with the PARIS technique. The validation and sensitivity analysis of the proposed MCDM approach are also discussed.
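
A minimal sketch of the entropy weight (EW) step, which is a standard construction; the AW, SW and PARIS ranking steps are not reproduced here:

    import numpy as np

    def entropy_weights(X):
        """Objective criteria weights by the entropy method.

        X: decision matrix (m alternatives x n criteria), positive values.
        """
        P = X / X.sum(axis=0)                        # normalize each criterion
        m = X.shape[0]
        E = -(P * np.log(P)).sum(axis=0) / np.log(m) # entropy per criterion
        d = 1.0 - E                                  # degree of divergence
        return d / d.sum()                           # weights sum to 1

    # illustrative decision matrix: 3 UCA alternatives x 3 criteria
    X = np.array([[0.8, 120, 3.5],
                  [0.6, 150, 2.9],
                  [0.9, 100, 3.8]], dtype=float)
    print(entropy_weights(X))

Criteria on which the alternatives differ more receive larger weights, which is why the entropy weights are called objective: they come from the data rather than from decision-maker judgment.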

Keywords: Unmanned combat aircraft (UCA), multiple criteria decision making, MCDM, PARIS.

678 Multiplayer Game System for Therapeutic Exercise in Which Players with Different Athletic Abilities Can Participate on an Even Competitive Footing

Authors: Kazumoto Tanaka, Takayuki Fujino

Abstract:

Sports games conducted in a group are a form of therapeutic exercise for elderly people with decreased strength and for people suffering permanent damage from stroke and other conditions. However, it is difficult for patients with different athletic abilities to play a game on an equal footing. This study examines a computer video game designed for therapeutic exercise, together with a game system that gives support according to athletic ability so that anyone playing can participate equally. Specifically, the video game is a variant of the popular balloon volleyball, in which players hit a balloon by hand before it falls to the floor. In this game system, each player plays while watching a monitor on which the system displays tailor-made video-game images adjusted to the person's athletic ability, providing player-adaptive assist support. We have developed a multiplayer game system with an image generation technique for the tailor-made video game and conducted tests to evaluate it.

Keywords: Therapeutic exercise, computer video game, disability-adaptive assist, tailor-made video-game image.

677 A Novel Approach to Iris Localization for Iris Biometric Processing

Authors: Somnath Dey, Debasis Samanta

Abstract:

Iris-based biometric systems are gaining importance in several applications. However, processing of the iris biometric is a challenging and time-consuming task. Detection of the iris region in an eye image poses a number of challenges, such as inferior image quality and occlusion by eyelids and eyelashes; due to these problems it is not possible to achieve a 100% accuracy rate in any iris-based biometric authentication system. Further, iris detection is a computationally intensive part of the overall iris biometric processing. In this paper, we address these two problems and propose a technique to localize the iris region efficiently and accurately. For pupil boundary detection we propose scaling and a color level transform followed by thresholding and extraction of pupil boundary points; for iris boundary detection we apply dilation, thresholding, vertical edge detection and removal of unnecessary edges in the eye image. Scaling reduces the search space significantly, and the intensity level transform is helpful for image thresholding. Experimental results show that our approach is comparable with existing approaches: following it, the iris region can be detected with 95-99% accuracy, as substantiated by our experiments on the CASIA Ver-3.0, ICE 2005, UBIRIS, Bath and MMU iris image databases.
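
As a rough sketch of the scaling-plus-thresholding idea for pupil localization (the quantile threshold and downscaling factor are assumptions for illustration, not the paper's exact transform):

    import numpy as np

    def locate_pupil(gray, scale=4, dark_fraction=0.05):
        """Rough pupil localization: downscale to shrink the search
        space, threshold the darkest pixels, return the centroid.

        gray: 2-D uint8 eye image.
        """
        small = gray[::scale, ::scale].astype(float)   # cheap downscaling
        t = np.quantile(small, dark_fraction)          # darkest ~5% = pupil
        ys, xs = np.nonzero(small <= t)
        cy, cx = ys.mean() * scale, xs.mean() * scale  # back to full scale
        return int(cx), int(cy)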

Keywords: Iris recognition, iris localization, biometrics, image processing.

676 Interactive Chinese Character Learning System though Pictograph Evolution

Authors: J. H. Low, C. O. Wong, E. J. Han, K. R. Kim, K. C. Jung, H. K. Yang

Abstract:

This paper proposes an Interactive Chinese Character Learning System (ICCLS) based on pictorial evolution as an edutainment concept in computer-based language learning. The origin of the language itself is taken as the learning platform, given the complexity of Chinese compared to other types of languages. Users, especially children, enjoy this learning system because they can memorize Chinese characters easily and better understand the origin of each character in a pleasurable learning environment, compared to the traditional approach in which children must learn characters by rote in an unengaging setting. Skeletonization is used as the representation of Chinese characters and objects, with an animated pictograph evolution to facilitate learning of the language. A shortest-skeleton-path matching technique is employed for fast and accurate matching in our implementation. The user is required either to write a word or to draw a simple 2D object in the input panel, and the matched word or object is displayed together with its pictograph evolution to instill learning. The system targets pre-school children between 4 and 6 years old, helping them learn Chinese characters in a flexible and entertaining manner while employing visual and mind-mapping learning strategies.

Keywords: Computer-based learning, Chinese character, pictograph evolution, skeletonization.

675 PUMA 560 Optimal Trajectory Control using Genetic Algorithm, Simulated Annealing and Generalized Pattern Search Techniques

Authors: Sufian Ashraf Mazhari, Surendra Kumar

Abstract:

Robot manipulators are highly coupled nonlinear systems; therefore the real system and the mathematical model of the dynamics used for control system design are not the same, and fine-tuning of the controller is always needed. For efficient tuning, fast simulation speed is desired. Since MATLAB incorporates LAPACK to increase the speed of matrix computation, the dynamics and the forward and inverse kinematics of the PUMA 560 are modeled in MATLAB/Simulink in such a way that all operations are matrix-based, which gives very short simulation times. This paper compares PID parameter tuning using a Genetic Algorithm, Simulated Annealing, Generalized Pattern Search (GPS) and hybrid search techniques. Controller performance for all these methods is compared in terms of joint-space ITSE and Cartesian-space ISE for tracking circular and butterfly trajectories, and a disturbance signal is added to check the robustness of the controller. The GA-GPS hybrid search technique shows the best results for tuning the PID controller parameters in terms of ITSE and robustness.
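
A minimal sketch of the ITSE cost that such tuners minimize, evaluated on a stand-in first-order plant rather than the PUMA 560 dynamics:

    def itse(gains, tau=1.0, t_end=5.0, dt=1e-3):
        """Integral of time-weighted squared error for a unit step
        response of a first-order plant under PID control."""
        kp, ki, kd = gains
        x = 0.0; integ = 0.0; prev_e = 1.0; cost = 0.0
        for k in range(int(t_end / dt)):
            e = 1.0 - x                     # unit step reference
            integ += e * dt
            deriv = (e - prev_e) / dt
            u = kp * e + ki * integ + kd * deriv
            x += dt * (-x + u) / tau        # first-order plant (Euler)
            cost += (k * dt) * e * e * dt   # ITSE: integral of t*e^2 dt
            prev_e = e
        return cost

    # a GA / simulated annealing / pattern search would minimize itse()
    # over (kp, ki, kd); here we just evaluate two candidates:
    print(itse((2.0, 1.0, 0.1)), itse((8.0, 2.0, 0.2)))

The time weighting penalizes errors that persist late in the response, so minimizing ITSE favors controllers that settle quickly without sustained offset.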

Keywords: Controller Tuning, Genetic Algorithm, Pattern Search, Robotic Controller, Simulated Annealing.

674 Semantically Enriched Web Usage Mining for Personalization

Authors: Suresh Shirgave, Prakash Kulkarni, José Borges

Abstract:

The continuous growth in the size of the World Wide Web has resulted in intricate Web sites, demanding enhanced user skills and more sophisticated tools to help the Web user find the desired information. In order to make the Web more user-friendly, it is necessary to provide personalized services and recommendations to the Web user. For discovering interesting and frequent navigation patterns from Web server logs, many Web usage mining techniques have been applied. The recommendation accuracy of usage-based techniques can be improved by integrating Web site content and site structure into the personalization process.

Herein, we propose a semantically enriched Web Usage Mining method for Personalization (SWUMP), an extension to the solely usage-based technique that combines the fields of Web Usage Mining and the Semantic Web. In the proposed method, we enrich the undirected graph derived from usage data with rich semantic information extracted from the Web pages and the Web site structure. The experimental results show that SWUMP generates accurate recommendations, achieving 10-20% better accuracy than the solely usage-based model, and addresses the new-item problem inherent to solely usage-based techniques.

Keywords: Prediction, Recommendation, Semantic Web Usage Mining, Web Usage Mining.

673 Role of Process Parameters on Pocket Milling with Abrasive Water Jet Machining Technique

Authors: T. V. K. Gupta, J. Ramkumar, Puneet Tandon, N. S. Vyas

Abstract:

Abrasive Water Jet Machining (AWJM) is an unconventional machining process well known for machining hard-to-cut materials. The primary research focus for the process has been through-cutting, and very limited literature is available on pocket milling using AWJM. The present work is an attempt to use the process for milling applications considering a set of process parameters. Four input parameters that researchers have considered for part separation are selected for this application: abrasive size, flow rate, standoff distance and traverse speed. Pockets of definite size are machined to investigate surface roughness, material removal rate and pocket depth. Based on experiments on SS304 material, it is observed that higher traverse speeds give a better finish because of the reduction in particle energy density, and lower depth is also observed. Increases in standoff distance and abrasive flow rate reduce the rate of material removal as the jet loses its focus and collisions occur between the particles. ANOVA for each output parameter has been studied to identify the significant process parameters.

Keywords: Abrasive flow rate, surface finish, abrasive size, standoff distance, traverse speed.

672 An Integrative Bayesian Approach to Supporting the Prediction of Protein-Protein Interactions: A Case Study in Human Heart Failure

Authors: Fiona Browne, Huiru Zheng, Haiying Wang, Francisco Azuaje

Abstract:

Recent years have seen a growing trend towards the integration of multiple information sources to support large-scale prediction of protein-protein interaction (PPI) networks in model organisms. Despite advances in computational approaches, the combination of multiple “omic" datasets representing the same type of data, e.g. different gene expression datasets, has not been rigorously studied. Furthermore, there is a need to further investigate the inference capability of powerful approaches, such as fully-connected Bayesian networks, in the context of PPI network prediction. This paper addresses these limitations by proposing a Bayesian approach to integrate multiple datasets, some of which encode the same type of “omic" data, to support the identification of PPI networks. The case study reported involves the combination of three gene expression datasets relevant to human heart failure (HF). In comparison with two traditional methods, the Naive Bayesian and maximum likelihood ratio approaches, the proposed technique can accurately identify known PPIs and can be applied to infer potentially novel interactions.
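
A minimal sketch of naive-Bayes-style evidence integration, the baseline that such integrative approaches build on (the likelihood ratios and prior odds below are illustrative numbers, not the study's):

    import numpy as np

    def posterior_odds(prior_odds, likelihood_ratios):
        """Naive-Bayes integration of independent evidence sources:
        each dataset contributes a likelihood ratio
        P(evidence | interaction) / P(evidence | no interaction)."""
        return prior_odds * np.prod(likelihood_ratios)

    # three expression datasets, each weakly favoring interaction:
    odds = posterior_odds(1 / 600, [3.2, 2.5, 1.8])
    print(odds, odds / (1 + odds))     # posterior odds and probability

A fully-connected Bayesian network relaxes the independence assumption made here, which matters precisely when several datasets encode the same type of "omic" data.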

Keywords: Bayesian network, Classification, Data integration, Protein interaction networks.

671 Efficient High Fidelity Signal Reconstruction Based on Level Crossing Sampling

Authors: Negar Riazifar, Nigel G. Stocks

Abstract:

This paper proposes strategies for level crossing (LC) sampling and reconstruction that provide high-fidelity signal reconstruction for speech signals; these strategies circumvent the problem of an exponentially increasing number of samples as the bit depth is increased and hence are highly efficient. Specifically, the results indicate that the distribution of the intervals between samples is one of the key factors in the quality of signal reconstruction: including samples with short intervals does not improve the accuracy of the reconstruction, whilst samples with large intervals lead to numerical instability. The proposed sampling method, termed reduced conventional level crossing (RCLC) sampling, exploits redundancy between samples to improve the efficiency of the sampling without compromising performance. A reconstruction technique is also proposed that enhances numerical stability through linear interpolation of samples separated by large intervals; interpolation is demonstrated to improve the accuracy of the signal reconstruction in addition to the numerical stability. We further demonstrate that the RCLC and interpolation methods can give useful levels of signal recovery even if the average sampling rate is less than the Nyquist rate.
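
A minimal sketch of LC sampling with linear-interpolation reconstruction (the RCLC redundancy-removal step is not shown):

    import numpy as np

    def lc_sample(x, t, levels):
        """Record a sample whenever the signal crosses one of the
        quantization levels; the crossing instant is estimated by
        linear interpolation between the two bracketing samples."""
        ts, vs = [], []
        for i in range(1, len(x)):
            for L in levels:
                if (x[i-1] - L) * (x[i] - L) < 0:   # sign change = crossing
                    a = (L - x[i-1]) / (x[i] - x[i-1])
                    ts.append(t[i-1] + a * (t[i] - t[i-1]))
                    vs.append(L)
        order = np.argsort(ts)
        return np.asarray(ts)[order], np.asarray(vs)[order]

    t = np.linspace(0, 1, 8000)
    x = np.sin(2*np.pi*5*t) + 0.5*np.sin(2*np.pi*12*t)
    levels = np.linspace(-1.5, 1.5, 16)             # uniform levels
    ts, vs = lc_sample(x, t, levels)
    x_hat = np.interp(t, ts, vs)                    # linear reconstruction

Note that the sample instants ts are signal-driven and non-uniform: active signal regions generate many crossings, while quiet regions generate few, which is the source of the method's efficiency.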

Keywords: Level crossing sampling, numerical stability, speech processing, trigonometric polynomial.
