Search results for: mobile applications.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3217

787 Mitigation of Electromagnetic Interference Generated by GPIB Control-Network in AC-DC Transfer Measurement System

Authors: M. M. Hlakola, E. Golovins, D. V. Nicolae

Abstract:

The field of instrumentation electronics is undergoing explosive growth due to its wide range of applications. Electrical devices operating in close proximity can negatively influence each other’s performance, and this degradation is caused by electromagnetic interference (EMI). This paper investigates the negative effects of electromagnetic interference originating in the General Purpose Interface Bus (GPIB) control network of an AC-DC transfer measurement system. Remedial measures for reducing measurement errors and the failure of a range of industrial devices due to EMI have been explored. The AC-DC transfer measurement system was analysed for common-mode (CM) EMI effects, the coupling path was investigated further, and the noise propagation mechanism was identified more accurately. To prevent the common-mode ground loops identified between the GPIB system control circuit and the measurement circuit, a microcontroller-driven GPIB switching isolator device was designed, prototyped, programmed and validated. This mitigation technique was shown to reduce EMI effectively.

Keywords: CM, EMI, GPIB, ground loops.

786 Refined Buckling Analysis of Rectangular Plates Under Uniaxial and Biaxial Compression

Authors: V. Piscopo

Abstract:

In the traditional buckling analysis of rectangular plates the classical thin plate theory is generally applied, thereby neglecting the plating shear deformation. It seems quite clear that this method is not entirely appropriate for the analysis of thick plates, so in the following the two-variable refined plate theory proposed by Shimpi (2006), which permits the transverse shear effects to be taken into account, is applied to the buckling analysis of simply supported isotropic rectangular plates compressed in one and two orthogonal directions. The relevant results are compared with the classical ones and, for rectangular plates under uniaxial compression, a new direct expression, similar to the classical Bryan's formula, is proposed for the Euler buckling stress. As buckling analysis is a widely studied topic for a variety of structures, such as ship structures, some applications for plates uniformly compressed in one and two orthogonal directions are presented, and the relevant theoretical results are compared with those obtained by a FEM analysis carried out with ANSYS, to show the feasibility of the presented method.
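For reference, the classical thin-plate result against which the refined expression is compared is Bryan's formula for a simply supported plate under uniaxial compression (this is the standard textbook form, not the new expression derived in the paper):

\[
\sigma_{cr} = k_c\,\frac{\pi^{2} E}{12\,(1-\nu^{2})}\left(\frac{t}{b}\right)^{2},
\qquad
k_c = \left(\frac{m b}{a} + \frac{a}{m b}\right)^{2},
\]

where E is Young's modulus, ν the Poisson ratio, t the plate thickness, a and b the plate length and width, and m the number of buckling half-waves in the loading direction.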

Keywords: Buckling analysis, Thick plates, Biaxial stresses

785 Robust Camera Calibration using Discrete Optimization

Authors: Stephan Rupp, Matthias Elter, Michael Breitung, Walter Zink, Christian Küblbeck

Abstract:

Camera calibration is an indispensable step for augmented reality or image-guided applications where quantitative information should be derived from the images. Usually, a camera calibration is obtained by taking images of a special calibration object and extracting the image coordinates of projected calibration marks, enabling the calculation of the projection from the 3D world coordinates to the 2D image coordinates. Such a procedure exhibits typical steps, including feature point localization in the acquired images, camera model fitting, correction of distortion introduced by the optics and finally an optimization of the model's parameters. In this paper we propose to extend this list by a further step concerning the identification of the optimal subset of images yielding the smallest overall calibration error. For this, we present a Monte Carlo based algorithm along with a deterministic extension that automatically determines the images yielding an optimal calibration. Finally, we present results showing that the calibration can be significantly improved by automated image selection.
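As a rough illustration of the image-selection idea (a sketch only, not the authors' implementation), a Monte Carlo search can repeatedly calibrate on random image subsets and keep the subset with the smallest RMS reprojection error; the arrays obj_pts and img_pts holding the detected calibration marks are assumed inputs here:

```python
# Sketch (not the authors' code): Monte Carlo search for the image subset
# that minimizes the RMS reprojection error reported by OpenCV's calibrator.
# Assumes obj_pts[i] / img_pts[i] hold the 3D/2D calibration marks of image i.
import random
import cv2

def monte_carlo_subset(obj_pts, img_pts, image_size, subset_size=10, trials=500):
    best_err, best_subset = float("inf"), None
    for _ in range(trials):
        idx = random.sample(range(len(img_pts)), subset_size)   # random candidate subset
        rms, _, _, _, _ = cv2.calibrateCamera(
            [obj_pts[i] for i in idx], [img_pts[i] for i in idx],
            image_size, None, None)
        if rms < best_err:                                       # keep the best subset so far
            best_err, best_subset = rms, idx
    return best_subset, best_err
```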

Keywords: Camera Calibration, Discrete Optimization, Monte Carlo Method.

784 A Distance Function for Data with Missing Values and Its Application

Authors: Loai AbdAllah, Ilan Shimshoni

Abstract:

Missing values are common in real-world data. Since the performance of many data mining algorithms depends critically on being given a good metric over the input space, in this paper we define a distance function for unlabeled datasets with missing values. We use the Bhattacharyya distance, which measures the similarity of two probability distributions, to define our new distance function. Under this distance, the distance between two points with no missing attribute values is simply the Mahalanobis distance. When, on the other hand, one of the coordinates has a missing value, the distance is computed according to the distribution of the missing coordinate. Our distance is general and can be used as part of any algorithm that computes the distance between data points. Because the k nearest neighbor classifier's performance depends strongly on the chosen distance measure, we opted for it to evaluate the ability of our distance to accurately reflect object similarity. We experimented on standard numerical datasets from the UCI repository covering different fields. On these datasets we simulated missing values and compared the performance of the kNN classifier using our distance with three other basic methods. Our experiments show that kNN using our distance function outperforms kNN using the other methods. Moreover, the runtime of our method is only slightly higher than that of the other methods.
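The following is a minimal sketch of the idea described above, not the authors' exact formulation: squared per-coordinate differences are summed, and a missing coordinate contributes its expected squared difference under that attribute's empirical distribution (a Euclidean form is used here for brevity, whereas the paper uses the Mahalanobis distance between complete points):

```python
# Minimal sketch of the idea (not the authors' exact distance): a missing
# coordinate contributes E[(X - v)^2] = Var(X) + (E[X] - v)^2 under the
# attribute's empirical distribution.
import numpy as np

def distance_with_missing(x, y, col_mean, col_var):
    """x, y: 1-D arrays that may contain np.nan; col_mean/col_var: per-attribute stats."""
    d2 = 0.0
    for j in range(len(x)):
        if not np.isnan(x[j]) and not np.isnan(y[j]):
            d2 += (x[j] - y[j]) ** 2                      # both values observed
        elif np.isnan(x[j]) and np.isnan(y[j]):
            d2 += 2.0 * col_var[j]                        # E[(X - Y)^2] for independent draws
        else:
            v = y[j] if np.isnan(x[j]) else x[j]          # exactly one value missing
            d2 += col_var[j] + (col_mean[j] - v) ** 2     # expected squared difference
    return np.sqrt(d2)
```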

Keywords: Missing values, Distance metric, Bhattacharyya distance.

783 Gamification of eHealth Business Cases to Enhance Rich Learning Experience

Authors: Kari Björn

Abstract:

The introduction of games has expanded the application area of computer-aided learning tools to a wide variety of age groups of learners. Serious games engage the learners in a real-world type of simulation and potentially enrich the learning experience. The institutional background of a Bachelor’s level engineering program in Information and Communication Technology is introduced, with detailed focus on one of its majors, Health Technology. As part of a Customer Oriented Software Application thematic semester, one particular course, “eHealth Business and Solutions”, is described and reflected in a gamified framework. Acquiring a consistent view of the vast literature on business management, strategy, marketing and finance in a very limited time forces a selection of topics relevant to the industry. Health Technology is a novel and growing industry with a growing sector in consumer wearable devices and homecare applications. The business sector is attracting new entrepreneurs and impatient investor funds. From an engineering education point of view, the sector is driven by miniaturizing electronics, sensors and wireless applications. However, the market is highly consumer-driven and the usability, safety and data integrity requirements are extremely high. When the same technology is used in the analysis or treatment of patients, very strict regulatory measures are enforced. The paper introduces a course structure that uses gamification as a tool to learn the essentials of a new market: customer value proposition design, followed by a market entry game. Students analyze the existing market size and pricing structure of the eHealth web-service market and enter the market as the steering group of their company, competing against the legacy players and against each other. The market is growing but has its own rules of demand and supply balance. New products can be developed with an R&D investment and targeted at the market with unique quality and price combinations. Product cost structure can be improved by investing in enhanced production capacity. Investments can optionally be funded by foreign capital. Students make management decisions and face the dynamics of market competition in the form of an income statement and balance sheet after each decision cycle. The focus of the learning outcome is to understand that customer value creation is the source of cash flow. The benefit of gamification is to enrich the learning experience of the structure and meaning of financial statements. The paper describes the gamification approach and discusses outcomes after two course implementations. Alongside the case description of learning challenges, some unexpected misconceptions are noted. Improvements of the game and of the semi-gamified teaching pedagogy are discussed. The case description serves as additional support for a new game coordinator, as well as helping to improve the method. Overall, the gamified approach has helped to engage engineering students in business studies in an energizing way.

Keywords: Engineering education, integrated curriculum, learning experience, learning outcomes.

782 GSM-Based Approach for Indoor Localization

Authors: M. Stella, M. Russo, D. Begušić

Abstract:

The ability to estimate location accurately and reliably in indoor environments is the key issue in developing a great number of context-aware applications and Location Based Services (LBS). Today, the most viable solution for localization is the Received Signal Strength (RSS) fingerprinting approach using wireless local area networks (WLAN). This paper presents two RSS fingerprinting approaches: first we employ the widely used WLAN-based positioning as a reference system, and then we investigate the possibility of using GSM signals for positioning. To compare them, we developed a positioning system in a real-world environment, where realistic RSS measurements were collected. A Multi-Layer Perceptron (MLP) neural network was used as the approximation function that maps RSS fingerprints to locations. Experimental results indicate the advantage of the WLAN-based approach in terms of lower localization error compared to the GSM-based approach, but GSM signal coverage far outreaches WLAN coverage, and for some LBS applications requiring lower accuracy our results indicate that GSM positioning can also be a viable solution.
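A minimal sketch of the fingerprinting step, assuming hypothetical files rss_fingerprints.npy and reference_positions.npy (an illustration of the approach, not the authors' code):

```python
# Illustrative sketch: an MLP that maps RSS fingerprints to 2-D coordinates,
# as in the fingerprinting approach described above.
# `rss` is an (n_samples, n_access_points) array; `xy` holds reference positions.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rss = np.load("rss_fingerprints.npy")        # hypothetical file names
xy = np.load("reference_positions.npy")

X_tr, X_te, y_tr, y_te = train_test_split(rss, xy, test_size=0.3, random_state=0)
mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
mlp.fit(X_tr, y_tr)

err = np.linalg.norm(mlp.predict(X_te) - y_te, axis=1)   # per-sample localization error
print(f"mean localization error: {err.mean():.2f} m")
```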

Keywords: Indoor positioning, WLAN, GSM, RSS, location fingerprints, neural network.

781 Data Mining Classification Methods Applied in Drug Design

Authors: Mária Stachová, Lukáš Sobíšek

Abstract:

Data mining incorporates a group of statistical methods used to analyze a set of information, or a data set. It operates with models and algorithms, which are powerful tools with great potential. They can help people understand the patterns in a certain chunk of information, so it is clear that data mining tools have a wide area of applications. For example, in theoretical chemistry data mining tools can be used to predict molecule properties or improve computer-assisted drug design. Classification analysis is one of the major data mining methodologies. The aim of the contribution is to create a classification model able to deal with a huge data set with high accuracy. For this purpose, logistic regression, Bayesian logistic regression and random forest models were built using R software. A Bayesian logistic regression model in Latent GOLD software was created as well. These classification methods belong to the supervised learning methods. It was necessary to reduce the data matrix dimension before constructing the models, and thus factor analysis (FA) was used. These models were applied to predict the biological activity of molecules, potential new drug candidates.
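A minimal Python sketch of the same pipeline idea — factor analysis for dimension reduction followed by a supervised classifier — under assumed file names (the study itself used R and Latent GOLD):

```python
# Sketch under assumptions (not the authors' R workflow): factor analysis followed
# by a random-forest classifier, evaluated by cross-validated accuracy.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X = np.load("descriptors.npy")     # hypothetical molecular descriptor matrix
y = np.load("activity.npy")        # hypothetical binary activity labels

model = make_pipeline(FactorAnalysis(n_components=20),
                      RandomForestClassifier(n_estimators=500, random_state=0))
print(cross_val_score(model, X, y, cv=5, scoring="accuracy").mean())
```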

Keywords: data mining, classification, drug design, QSAR

780 Performance Analysis of OQSMS and MDDR Scheduling Algorithms for IQ Switches

Authors: K. Navaz, Kannan Balasubramanian

Abstract:

Due to the increasing growth in internet users, emerging multicast applications are growing day by day, and there is a need for the design of high-speed switches/routers. A great deal of effort has gone into the research area of multicast switch fabric design and algorithms. Different traffic scenarios are influencing factors which affect the throughput and delay of the switch. Pointer-based multicast scheduling algorithms do not perform well under non-uniform traffic conditions. In this work, the performance of the switch has been analyzed by applying the advanced multicast scheduling algorithms OQSMS (Optimal Queue Selection Based Multicast Scheduling Algorithm), MDDR (Multicast Due Date Round-Robin Scheduling Algorithm) and MDRR (Multicast Dual Round-Robin Scheduling Algorithm). The results show that OQSMS achieves better switching performance than the other algorithms under uniform, non-uniform and bursty traffic conditions, and it estimates the optimal queue in each time slot so that it achieves the maximum possible throughput.

Keywords: Multicast, Switch, Delay, Scheduling.

779 Comparative Study of Different Enhancement Techniques for Computed Tomography Images

Authors: C. G. Jinimole, A. Harsha

Abstract:

One of the key problems in the analysis of Computed Tomography (CT) images is their poor contrast. Image enhancement can be used to improve the visual clarity and quality of the images or to provide a better transformed representation for further processing. Contrast enhancement is one of the accepted methods used for image enhancement in various applications in the medical field. It helps to visualize and extract details of brain infarctions, tumors, and cancers from the CT image. This paper presents a comparative study of five contrast enhancement techniques suitable for CT images: Power Law Transformation, Logarithmic Transformation, Histogram Equalization, Contrast Stretching, and Laplacian Transformation. All these techniques are compared with each other to find out which enhancement provides better contrast for the CT image. For the comparison, the parameters Peak Signal to Noise Ratio (PSNR) and Mean Square Error (MSE) are used. Logarithmic Transformation provided the clearest and best-quality image compared with all the other techniques studied and achieved the highest PSNR value. The comparison concludes with the better approach for future research, especially for mapping abnormalities in CT images resulting from brain injuries.
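As an illustration of the kind of processing compared in the study (an assumed implementation, not the authors' code), the logarithmic transformation and the PSNR/MSE metrics can be sketched as:

```python
# Minimal sketch: logarithmic contrast enhancement of a CT slice and the
# PSNR/MSE metrics used for comparing enhancement techniques.
import numpy as np

def log_transform(img):
    """Map intensities with s = c * log(1 + r), scaled back to the 8-bit range."""
    img = img.astype(np.float64)
    c = 255.0 / np.log1p(max(img.max(), 1.0))
    return (c * np.log1p(img)).astype(np.uint8)

def mse(a, b):
    return np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)

def psnr(a, b, peak=255.0):
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)
```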

Keywords: Computed tomography, enhancement techniques, increasing contrast, PSNR and MSE.

778 Human Factors as the Main Reason of the Accident in Scaffold Use Assessment

Authors: Krzysztof J. Czarnocki, E. Czarnocka, K. Szaniawska

Abstract:

The main goal of the research project is the formulation of the Scaffold Use Risk Assessment Model (SURAM), developed for the assessment of risk levels at various construction process stages and for various work trades. In 2016, the project received financing from the National Centre for Research and Development under research grant PBS3/A2/19/2015. The data, calculations and analyses discussed in this paper were created as a result of the completion of the first and second phases of the PBS3/A2/19/2015 project. Method: One arm of the research project is the assessment of workers' visual concentration on the sight zones, as well as of inadequate observation of risky visual points. In this part of the research, a mobile eye-tracker was used to monitor the workers' observation zones. SMI Eye Tracking Glasses is a tool which allows us to analyze, in real time and place, where eyesight is concentrated and consequently to build a map of the worker's eyesight concentration during a shift. While the project is still running, 64 construction sites have been examined so far, and more than 600 workers took part in the experiment, including monitoring of typical parameters of the work regimen, workload, microclimate, sound, vibration, etc. The full equipment can also be useful in more advanced analyses. Thanks to this technology, we have verified not only the main focus of workers' eyes during work on or next to scaffolding, but also which changes in the surrounding environment during their shift influenced their concentration. As a result of this study it has been shown that workers' eye concentration was on one of the three work-related areas for only up to 45.75% of the shift time. Workers seem to be distracted by noisy vehicles or people nearby. Contrary to our initial assumptions and other authors' findings, we observed that the reflective parts of the scaffoldings were not better recognized by workers in their direct workplaces. We noticed that the red curbs were the only well-recognized part on very few scaffoldings; surprisingly, in a number of samples we did not record any significant number of concentrations on those curbs. Conclusion: We found the eye-tracking method useful for the construction of the SURAM model in the risk perception and worker behavior sub-modules. We also found that the worker's initial stress and the visual work conditions seem to be more predictive for assessing a developing risky situation or an accident than other parameters relating to the work environment.

Keywords: Accident assessment model, eye tracking, occupational safety, scaffolding.

777 Extraction, Characterization and Application of Natural Dyes from the Fresh Rind of Index Colour 5 Mangosteen (Garcinia mangostana L.)

Authors: T. Basitah

Abstract:

The aim of this study was to explore and utilize the fresh rind of Index Colour 5 mangosteen as an upcoming raw material for the production of natural dyes. Rind from the fresh Index Colour 5 mangosteen was used to extract the dyes. The resulting extracts were tested on silk fabrics via three types of mordanting and dyeing procedures: pre-mordanting, simultaneous mordanting and post-mordanting. The application of the freeze-drying methodology and mechanizable equipment helped to produce an excellent range of natural colours. Silk fabric treated with simultaneous mordanting and dyeing with the Index Colour 5 extract dye produced a brilliant shade of red, and the colour from this index was also found to be sensitive to light and washing during the fastness tests. The preliminary evaluation and instrumentation analysis allowed us to examine whether the application of different mordanting and dyeing procedures with the same extract samples and concentrations affected the colours and shades of the fabric samples.

Keywords: Natural dye, Freeze-drying, Garcinia mangostana Linn, Mordanting.

776 Design and Analysis of an 8T Read Decoupled Dual Port SRAM Cell for Low Power High Speed Applications

Authors: Ankit Mitra

Abstract:

Speed, power consumption and area are some of the most important factors of concern in modern-day memory design. As we move towards deep sub-micron technologies, the problems of leakage current, noise and cell stability due to physical parameter variation become more pronounced. In this paper we have designed an 8T Read Decoupled Dual Port SRAM Cell with Dual Threshold Voltage and characterized it in terms of read and write delay, read and write noise margins, Data Retention Voltage and Leakage Current. Read decoupling improves the Read Noise Margin, and static power dissipation is reduced by using Dual-Vt transistors. The results obtained are compared with existing 6T, 8T and 9T SRAM cells, which shows the superiority of the proposed design. The cell is designed and simulated in TSPICE using a 90 nm CMOS process.

Keywords: CMOS, Dual-Port, Data Retention Voltage, 8T SRAM, Leakage Current, Noise Margin, Loop-cutting, Single-ended.

775 Analysis and Categorization of e-Learning Activities Based On Meaningful Learning Characteristics

Authors: Arda Yunianta, Norazah Yusof, Mohd Shahizan Othman, Dewi Octaviani

Abstract:

Learning is the acquisition of new mental schemata, knowledge, abilities and skills which can be used to solve problems potentially more successfully. The learning process is optimal when it is assisted and personalized. Learning is not a single activity, but should involve many possible activities to make learning meaningful. Many e-learning applications provide facilities to support teaching and learning activities. One way to identify whether an e-learning system is being used by the learners is through the number of hits that can be obtained from the e-learning system's log data. However, we cannot rely solely on the number of hits to determine whether learning has occurred meaningfully. This is because meaningful learning should engage five characteristics, namely active, constructive, intentional, authentic and cooperative. This paper aims to analyze the e-learning activities that are meaningful to learning. By focusing on the meaningful learning characteristics, we match them to the corresponding Moodle e-learning activities. This analysis identifies the activities that have a high impact on meaningful learning, as well as activities that are less meaningful. The high-impact activities are given high weights since they are important to meaningful learning, while the low-impact activities have lower weights and are said to be supportive e-learning activities. The result of this analysis helps us categorize which e-learning activities are meaningful to learning and guides us in measuring the effectiveness of e-learning usage.

Keywords: e-learning system, e-learning activity, meaningful learning characteristics, Moodle

774 Cytotoxic Effects of Engineered Nanoparticles in Human Mesenchymal Stem Cells

Authors: Ali A. Alshatwi, Vaiyapuri S. Periasamy, Jegan Athinarayanan

Abstract:

The use of engineered nanoparticles has increased rapidly in various applications in the last decade due to their unusual properties. However, there is an ever-increasing concern to understand their toxicological effects on human health. In particular, metal and metal oxide nanoparticles have been used in various sectors including the biomedical, food and agriculture sectors, but their impact on human health is yet to be fully understood. In the present investigation, we assessed the toxic effects of engineered nanoparticles (ENPs), including Ag, MgO and Co3O4 nanoparticles (NPs), on human mesenchymal stem cells (hMSC), adopting cell viability and cellular morphological changes as tools. The results suggested that silver NPs are more toxic than MgO and Co3O4 NPs. The ENPs induced cytotoxicity and nuclear morphological changes in hMSC in a dose-dependent manner: cell viability decreased with increasing ENP concentration. The cellular morphology studies revealed that the ENPs damaged the cells. These preliminary findings have implications for the use of these nanoparticles in the food industry and underline the need for systematic regulation.

Keywords: Cobalt oxide, Human mesenchymal stem cells, MgO, Silver.

773 A Bi-Objective Model for Location-Allocation Problem within Queuing Framework

Authors: Amirhossein Chambari, Seyed Habib Rahmaty, Vahid Hajipour, Aida Karimi

Abstract:

This paper proposes a bi-objective model for the facility location problem under a congestion system. The model is motivated by applications such as locating servers for bank automated teller machines (ATMs), communication networks, and so on. It is specifically suited to situations in which fixed service facilities are congested by stochastic demand within a queuing framework. We formulate the model from two perspectives simultaneously: (i) the customers and (ii) the service provider. The objectives of the model are to minimize (i) the total expected travelling and waiting time and (ii) the average facility idle time. The model is a mixed-integer nonlinear programming problem which belongs to the class of NP-hard problems. To solve it, two metaheuristic algorithms, the non-dominated sorting genetic algorithm (NSGA-II) and the non-dominated ranking genetic algorithm (NRGA), are proposed. In addition, to evaluate the performance of the two algorithms, some numerical examples are produced and analyzed with several metrics to determine which algorithm works better.

Keywords: Queuing, Location, Bi-objective, NSGA-II, NRGA

772 The Development of the Multi-Agent Classification System (MACS) in Compliance with FIPA Specifications

Authors: Mohamed R. Mhereeg

Abstract:

The paper investigates the feasibility of constructing a software multi-agent based monitoring and classification system and utilizing it to provide an automated and accurate classification of end users developing applications in the spreadsheet domain. The agents function autonomously to provide continuous and periodic monitoring of Excel spreadsheet workbooks, resulting in the development of the Multi-Agent Classification System (MACS), which is in compliance with the specifications of the Foundation for Intelligent Physical Agents (FIPA). Different technologies have been brought together to build MACS. The strength of the system is the integration of agent technology and the FIPA specifications with other technologies, namely Windows Communication Foundation (WCF) services, Service Oriented Architecture (SOA), and Oracle Data Mining (ODM). Microsoft .NET Windows-service-based agents were utilized to develop the monitoring agents of MACS; the .NET WCF services together with the SOA approach allowed the distribution of and communication between agents over the WWW, in order to satisfy the multiple-developer aspect of monitoring and classification. ODM was used to automate the classification phase of MACS.

Keywords: Autonomous, Classification, MACS, Multi-Agent, SOA, WCF.

771 Characterization of Extreme Low-Resolution Digital Encoder for Control System with Sinusoidal Reference Signal

Authors: Zhenyu Zhang, Qingbin Gao

Abstract:

A low-resolution digital encoder (LRDE) is commonly adopted as a position sensor in low-cost and resource-constrained applications. Traditionally, a digital encoder is modeled as a quantizer without considering the initial position of the LRDE. However, this cannot be applied to an extreme LRDE, for which the stroke of angular motion is only a few times the resolution of the encoder. Moreover, the actual angular motion is substantially distorted by such an extreme LRDE, so that the encoder reading does not faithfully represent the actual angular motion. This paper presents a modeling method for the extreme LRDE that takes into account its initial position. For a control system with a sinusoidal reference signal and an extreme LRDE, this paper analyzes the characteristics of the angular motion. Specifically, two descriptors of sinusoidal angular motion are studied, which essentially shed light on the actual angular motion behind the extreme LRDE readings.
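A minimal sketch of the modelling idea, in an assumed form rather than the authors' exact model: an encoder of resolution q is read as a quantizer whose levels are shifted by the initial position, applied to a sinusoidal motion whose stroke is only a few times the resolution:

```python
# Assumed-form sketch: counts from a low-resolution encoder of resolution q whose
# quantization levels are offset by the initial position theta0.
import numpy as np

def lrde_reading(theta, q, theta0):
    """Counts reported by a low-resolution encoder with initial offset theta0."""
    return np.floor((theta - theta0) / q)

t = np.linspace(0.0, 1.0, 1000)
theta = 0.05 * np.sin(2 * np.pi * 2 * t)          # stroke only a few times the resolution
counts = lrde_reading(theta, q=0.02, theta0=0.007)
```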

Keywords: Low resolution digital encoder, resource-constraint control system, sinusoidal reference signal, servo motion control.

770 Nanobiocomposites with Enhanced Cell Proliferation and Improved Mechanical Properties Based on Organomodified-Nanoclay and Silicone Rubber

Authors: M. S. Hosseini, M. Tazzoli-Shadpour, I. Amjadi, A. A. Katbab, E. Jaefargholi-Rangraz

Abstract:

Bionanotechnology deals with nanoscopic interactions between nanostructured materials and biological systems. Polymer nanocomposites with optimized biological activity have attracted great attention. Nanoclay is considered a reinforcing nanofiller in the manufacturing of high-performance nanocomposites. In the current study, organomodified nanoclay with negatively charged silicate layers was incorporated into biomedical-grade silicone rubber. The nanoparticle loading was tailored to enhance cell behavior. The addition of nanoparticles led to improved mechanical properties of the substrate, with enhanced strength and stiffness, while no toxic effects were observed. Results indicated improved viability and proliferation of cells with the addition of nanofillers. The improved mechanical properties of the matrix result in a proper cell response through adjustment and arrangement of cytoskeletal fibers. The results can be applied in tissue engineering where enhanced substrates are required to improve cell behavior for in vivo applications.

Keywords: Biocompatibility, Composite, Organomodified- Nanoclay, Proliferation

769 A Sequential Approach to Random-Effects Meta-Analysis

Authors: Samson Henry Dogo, Allan Clark, Elena Kulinskaya

Abstract:

The objective of meta-analysis is to combine results from several independent studies in order to create generalizations and provide an evidence base for decision making. However, recent studies show that the magnitude of effect size estimates reported in many areas of research has changed significantly over time, and this can impair the results and conclusions of a meta-analysis. A number of sequential methods have been proposed for monitoring the effect size estimates in meta-analysis, but they are based on statistical theory applicable only to the fixed-effect model (FEM) of meta-analysis. For the random-effects model (REM), the analysis incorporates the heterogeneity variance τ², whose estimation creates complications. In this paper we study the use of a truncated CUSUM-type test with asymptotically valid critical values for sequential monitoring in the REM. Simulation results show that the test does not control the Type I error well, and it is not recommended. Further work is required to derive an appropriate test in this important area of applications.
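The following is only an illustrative sketch of sequential monitoring with a plain CUSUM of standardized effect sizes; the paper's truncated CUSUM-type test and its asymptotically valid critical values are not reproduced here:

```python
# Illustrative sketch only: a plain CUSUM of standardized deviations of the
# per-study effects from the pooled random-effects estimate, flagging when the
# cumulative sum exceeds a user-supplied threshold.
import numpy as np

def cusum_monitor(effects, variances, tau2, threshold):
    """effects/variances: per-study estimates; tau2: assumed heterogeneity variance."""
    w = 1.0 / (np.asarray(variances) + tau2)                  # random-effects weights
    pooled = np.average(effects, weights=w)                   # pooled REM estimate
    z = (np.asarray(effects) - pooled) * np.sqrt(w)           # standardized deviations
    s = np.abs(np.cumsum(z))                                  # CUSUM statistic
    alarm = int(np.argmax(s > threshold)) if np.any(s > threshold) else None
    return s, alarm
```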

Keywords: Meta-analysis, random-effects model, sequential testing, temporal changes in effect sizes.

768 Automatic Classification of Periodic Heart Sounds Using Convolutional Neural Network

Authors: Jia Xin Low, Keng Wah Choo

Abstract:

This paper presents an automatic normal and abnormal heart sound classification model developed based on a deep learning algorithm. MITHSDB heart sound datasets obtained from the 2016 PhysioNet/Computing in Cardiology Challenge database were used in this research, with the assumption that the electrocardiograms (ECG) were recorded simultaneously with the heart sounds (phonocardiogram, PCG). The PCG time series are segmented per heartbeat, and each sub-segment is converted into a square intensity matrix and classified using convolutional neural network (CNN) models. This approach removes the need to provide classification features for the supervised machine learning algorithm; instead, the features are determined automatically through training, from the time series provided. The results show that the prediction model is able to provide reasonable and comparable classification accuracy despite its simple implementation. This approach can be used for real-time classification of heart sounds in the Internet of Medical Things (IoMT), e.g., remote monitoring of PCG signals.
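A sketch of the pre-processing step described above (an assumed implementation, not the authors' code): each per-heartbeat PCG segment is resampled to a square number of samples and reshaped into the square intensity matrix that the CNN takes as an image:

```python
# Assumed-implementation sketch: resample a 1-D PCG segment to side*side samples
# and reshape it into a (side, side) intensity matrix scaled to [0, 1].
import numpy as np

def segment_to_square(segment, side=64):
    idx = np.linspace(0, len(segment) - 1, side * side)
    resampled = np.interp(idx, np.arange(len(segment)), segment)
    resampled = (resampled - resampled.min()) / (np.ptp(resampled) + 1e-12)
    return resampled.reshape(side, side).astype(np.float32)
```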

Keywords: Convolutional neural network, discrete wavelet transform, deep learning, heart sound classification.

767 Robust Fractional-Order PI Controller with Ziegler-Nichols Rules

Authors: Mazidah Tajjudin, Mohd Hezri Fazalul Rahiman, Norhashim Mohd Arshad, Ramli Adnan

Abstract:

In process control applications, over 90% of the controllers are of PID type. This paper proposes a robust PI controller with a fractional-order integrator. The PI parameters were obtained using classical Ziegler-Nichols rules, enhanced by cascading an error filter with the fractional-order PI. The controller was applied to a steam temperature process described by a FOPDT transfer function. The process can be classified as a lag-dominant process with a very small relative dead time. The proposed control scheme was compared with other PI controllers tuned using the Ziegler-Nichols and AMIGO rules. Another PI controller with a fractional-order integrator, known as F-MIGO, was also considered. All the controllers were subjected to set-point change and load disturbance tests. The performance was measured using the Integral of Squared Error (ISE) and the Integral of Control Signal (ICO). The proposed controller produced the best performance in all the tests, with the lowest ISE index.
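For context, the classical open-loop Ziegler-Nichols PI rules for a FOPDT process with gain K, time constant T and dead time L, together with the ISE index used for comparison, can be sketched as follows (a reference sketch, not the authors' code, and without the fractional-order integrator or error filter):

```python
# Reference sketch: classical Ziegler-Nichols reaction-curve PI settings and the
# Integral of Squared Error metric computed over a simulated response.
import numpy as np

def zn_pi(K, T, L):
    """Classical Ziegler-Nichols PI rules for a FOPDT process."""
    Kp = 0.9 * T / (K * L)
    Ti = L / 0.3             # integral time, approximately 3.33 * L
    return Kp, Ti

def ise(error, dt):
    """Integral of Squared Error over a sampled error signal."""
    return np.sum(np.asarray(error) ** 2) * dt
```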

Keywords: PID controller, fractional-order PID controller, PI control tuning, steam temperature control, Ziegler-Nichols tuning.

766 Square Printed Monopole Antenna for Wireless Applications

Authors: Rekha P. Labade, Shankar B. Deosarkar, Narayan Pisharoty

Abstract:

In this article, the design and optimization of a square printed monopole antenna for wireless applications is proposed. The theory of characteristic modes (TCM) is used for the analysis of current modes on the antenna. TCM analysis shows that a beveled ground plane improves the impedance bandwidth. The antenna operates over the frequency range from 1.860 GHz to 5 GHz for a VSWR ≤ 2, covering GSM (1900-1990 MHz), IMT-2000 (1920-2170 MHz), Bluetooth (2400-2484 MHz) and the lower band of ultra-wideband (UWB). A stable radiation pattern shows minimal pulse distortion. The radiation pattern is omnidirectional along the H-plane and a figure of eight along the E-plane. The size of the proposed antenna is 39 mm × 29 mm × 1.6 mm. The antenna is simulated in the CAD FEKO suite (6.2) using the method of moments. A prototype antenna was fabricated on an FR4 dielectric substrate with a dielectric constant of 4.4 and a loss tangent of 0.02 to validate the simulated results against measurements. The measured results are in good agreement with the simulated results.

Keywords: Defected Ground Structure (DGS), Method of moments, Theory of characteristic modes, UWB, VSWR.

765 Contention Window Adjustment in IEEE 802.11-Based Industrial Wireless Networks

Authors: Mohsen Maadani, Seyed Ahmad Motamedi

Abstract:

The use of wireless technology in industrial networks has attracted considerable attention in recent years. In this paper, we thoroughly analyze the effect of contention window (CW) size on the performance of IEEE 802.11-based industrial wireless networks (IWN) from the delay and reliability perspectives. Results show that the default values of CWmin, CWmax, and retry limit (RL) are far from optimal due to industrial application characteristics, including short packets and a noisy environment. A payload-dependent adaptive CW algorithm is proposed to minimize the average delay. Finally, a simple but effective CW and RL setting is proposed for industrial applications which outperforms the minimum-average-delay solution from the maximum delay and jitter perspective, at the cost of a slightly higher average delay. Simulation results show improvements of up to 20%, 25%, and 30% in average delay, maximum delay and jitter, respectively.

Keywords: Average Delay, Contention Window, Distributed Coordination Function (DCF), Jitter, Industrial Wireless Network (IWN), Maximum Delay, Reliability, Retry Limit.

764 Optimum Signal-to-noise Ratio Performance of Electron Multiplying Charge Coupled Devices

Authors: Wen W. Zhang, Qian Chen

Abstract:

Electron multiplying charge coupled devices (EMCCDs) have revolutionized the world of low-light imaging by introducing on-chip multiplication gain based on the impact ionization effect in silicon. They combine sub-electron readout noise with high frame rates. Signal-to-Noise Ratio (SNR) is an important performance parameter for low-light-level imaging systems. This work investigates the SNR performance of an EMCCD operated in Non-Inverted Mode (NIMO) and Inverted Mode (IMO). The theory of noise characteristics and operation modes is presented. The results show that the SNR is determined by dark current and clock-induced charge at high gain levels. The optimum SNR performance is provided by an EMCCD operated in NIMO in short-exposure and strongly cooled applications; in the opposite conditions, an IMO EMCCD is preferable.
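For reference, the commonly used EMCCD SNR expression, showing how dark current, clock-induced charge and the gain-suppressed read noise trade off, can be sketched as follows (this is the standard textbook form, not necessarily the exact expressions used in the paper):

```python
# Hedged reference sketch: common EMCCD SNR expression with an excess noise factor
# F ~ sqrt(2) applied to the multiplied charge sources and read noise divided by gain.
import math

def emccd_snr(signal_e, dark_e, cic_e, read_noise_e, gain, excess_noise=math.sqrt(2.0)):
    """All charges in electrons per pixel per exposure; gain is the EM gain."""
    shot_like = excess_noise ** 2 * (signal_e + dark_e + cic_e)   # multiplied noise sources
    effective_read = (read_noise_e / gain) ** 2                   # read noise suppressed by gain
    return signal_e / math.sqrt(shot_like + effective_read)
```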

Keywords: Electron multiplying charge coupled devices, noise characteristics, operation modes, signal-to-noise ratio performance.

763 Flexible Cities: A Multisided Spatial Application of Tracking Livability of Urban Environment

Authors: Maria Christofi, George Plastiras, Rafaella Elia, Vaggelis Tsiourtis, Theocharis Theocharides, Miltiadis Katsaros

Abstract:

The rapidly expanding urban areas of the world constitute a challenge of how to make the transition to "the next urbanization", which will be defined by new analytical tools and new sources of data. This paper is about the production of a spatial application, the ‘FUMapp’, where space and its initiative are available literally, in meters, but also abstractly, at a sensed level. While existing spatial applications typically focus on illustrations of the urban infrastructure, the suggested application goes beyond them: it investigates how our perception of the environment adapts to alterations of the built environment, through the construction of a dataset of biophysical measurements (eye tracking, heartbeat) and physical metrics (spatial characteristics, size of stimuli, rhythm of mobility). It explores the intersections between architecture, cognition, and computing where future design can be improved, and identifies the flexibility and livability of the ‘available space’ of specific examined urban paths.

Keywords: Biophysical data, flexibility of urban, livability, next urbanization, spatial application.

762 Frequent Itemset Mining Using Rough-Sets

Authors: Usman Qamar, Younus Javed

Abstract:

Frequent pattern mining is the process of finding a pattern (a set of items, subsequences, substructures, etc.) that occurs frequently in a data set. It was proposed in the context of frequent itemsets and association rule mining. Frequent pattern mining is used to find inherent regularities in data, for example which products were often purchased together. Its applications include basket data analysis, cross-marketing, catalog design, sales campaign analysis, web log (click stream) analysis, and DNA sequence analysis. However, one of the bottlenecks of frequent itemset mining is that, as the data grow, the amount of time and resources required to mine them increases at an exponential rate. In this investigation a new algorithm is proposed which can be used as a pre-processor for frequent itemset mining. FASTER (FeAture SelecTion using Entropy and Rough sets) is a hybrid pre-processor algorithm which utilizes entropy and rough sets to carry out record reduction and feature (attribute) selection, respectively. FASTER can produce a speed-up of 3.1 times for frequent itemset mining when compared to the original algorithm, while maintaining an accuracy of 71%.
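As a rough illustration only (FASTER itself combines rough sets with entropy; this sketch shows just an entropy-based ranking that could drive the attribute-selection step):

```python
# Illustrative sketch: rank discrete attributes by Shannon entropy and keep the
# k highest-entropy columns as a simple pre-processing feature selection.
import numpy as np

def shannon_entropy(column):
    """Entropy of a discrete attribute, in bits."""
    _, counts = np.unique(column, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def select_features(data, k):
    """Keep the k attributes with the highest entropy."""
    scores = [shannon_entropy(data[:, j]) for j in range(data.shape[1])]
    return np.argsort(scores)[::-1][:k]
```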

Keywords: Rough-sets, Classification, Feature Selection, Entropy, Outliers, Frequent itemset mining.

761 SC-LSH: An Efficient Indexing Method for Approximate Similarity Search in High Dimensional Space

Authors: Sanaa Chafik, Imane Daoudi, Mounim A. El Yacoubi, Hamid El Ouardi

Abstract:

Locality Sensitive Hashing (LSH) is one of the most promising techniques for solving the nearest-neighbour search problem in high-dimensional spaces. Euclidean LSH is the most popular variant of LSH and has been successfully applied in many multimedia applications. However, Euclidean LSH presents limitations that affect its structure and query performance; the main one is its large memory consumption, since a large number of hash tables is required to achieve good accuracy. In this paper, we propose a new hashing algorithm that overcomes the storage space problem and improves query time, while keeping an accuracy similar to that achieved by the original Euclidean LSH. Experimental results on a real large-scale dataset show that the proposed approach achieves good performance and consumes less memory than Euclidean LSH.
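For reference, the standard Euclidean (p-stable) LSH hash family that the paper builds on, h(v) = ⌊(a·v + b)/w⌋ with Gaussian a and uniform b, can be sketched as:

```python
# Reference sketch of the standard Euclidean LSH hash family (not the proposed
# SC-LSH scheme): Gaussian projections, random offsets and bucket width w.
import numpy as np

class EuclideanLSH:
    def __init__(self, dim, n_hashes, w=4.0, seed=0):
        rng = np.random.default_rng(seed)
        self.a = rng.normal(size=(n_hashes, dim))     # p-stable (Gaussian) projections
        self.b = rng.uniform(0.0, w, size=n_hashes)   # random offsets in [0, w)
        self.w = w

    def hash(self, v):
        """Return the tuple of bucket indices for one point."""
        return tuple(np.floor((self.a @ v + self.b) / self.w).astype(int))
```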

Keywords: Approximate Nearest Neighbor Search, Content based image retrieval (CBIR), Curse of dimensionality, Locality sensitive hashing, Multidimensional indexing, Scalability.

760 Optimization of Design Parameters for Wire Mesh Fin Arrays as a Heat Sink Using Taguchi Method

Authors: Kavita H. Dhanawade, Hanamant S. Dhanawade

Abstract:

Heat transfer enhancement devices such as extended surfaces and fins are chosen for their thermal performance as well as for other design parameters, depending on the application. The present paper reports an experimental study investigating heat transfer enhancement through wire mesh fin arrays mounted on a horizontal base plate. The data used in the performance analysis were obtained experimentally for mild steel at heat inputs of 40, 60, 80, 100 and 120 W, varying the wire mesh diameter, the fin height and the spacing between two fin arrays. Using the Taguchi experimental design method, the optimum design parameters and their levels were investigated, with the average heat transfer coefficient considered as the performance characteristic. An L9 (3³) orthogonal array was selected as the experimental plan, and the optimum levels were found experimentally. It is observed that the wire mesh diameter and fin height have a higher impact on the heat transfer coefficient than the spacing between the two fin arrays.
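A minimal sketch of the Taguchi level-mean analysis on an L9 (3³) plan, using hypothetical response values rather than the measured data:

```python
# Sketch under assumptions (hypothetical h values, not the authors' records):
# level-mean analysis of an L9(3^3) plan, picking the level of each factor that
# maximizes the mean heat transfer coefficient.
import numpy as np

# rows: 9 runs; columns: levels (0-2) of mesh diameter, fin height, fin spacing
L9 = np.array([[0, 0, 0], [0, 1, 1], [0, 2, 2],
               [1, 0, 1], [1, 1, 2], [1, 2, 0],
               [2, 0, 2], [2, 1, 0], [2, 2, 1]])
h = np.array([11.2, 12.5, 13.1, 12.0, 13.4, 11.8, 13.0, 11.5, 12.7])  # hypothetical responses

for f, name in enumerate(["mesh diameter", "fin height", "spacing"]):
    level_means = [h[L9[:, f] == lvl].mean() for lvl in range(3)]
    print(name, "best level:", int(np.argmax(level_means)), level_means)
```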

Keywords: Heat transfer enhancement, finned surface, wire mesh diameter, natural convection.

759 Field Application of Reduced Crude Conversion Spent Lime

Authors: Brian H. Marsh, John H. Grove

Abstract:

Gypsum is being applied to ameliorate subsoil acidity and to overcome the problem of very slow lime movement from surface lime applications. Reduced Crude Conversion Spent Lime (RCCSL) containing anhydrite was evaluated for use as a liming material, with specific consideration given to the movement of sulfate into the acid subsoil. Agricultural lime and RCCSL were applied at 0, 0.5, 1.0, and 1.5 times the lime requirement of 6.72 Mg ha-1 to an acid Trappist silt loam (Typic Hapludult). Corn [Zea mays (L.)] was grown following lime material application, and soybean [Glycine max (L.) Merr.] was grown in the second year. Soil pH increased rapidly with the addition of the RCCSL material. Over time there was no difference in soil pH between the materials, but there was with increasing rate. None of the observed changes in plant nutrient concentration had an impact on yield. Grain yield was higher for the RCCSL-amended treatments in the first year but not in the second. There was a significant increase in soybean grain yield from the full lime requirement treatments over no lime.

Keywords: Soil acidity, corn, soybean, liming materials.

758 Intelligent Modeling of the Electrical Activity of the Human Heart

Authors: Lambros V. Skarlas, Grigorios N. Beligiannis, Efstratios F. Georgopoulos, Adam V. Adamopoulos

Abstract:

The aim of this contribution is to present a new approach to modeling the electrical activity of the human heart. A recurrent artificial neural network is used in order to capture a subset of the dynamics of the electrical behavior of the human heart. The proposed model can also be used, when integrated, as a diagnostic tool for the human heart system. What makes this approach unique is the fact that every model is developed from physiological measurements of an individual. This kind of approach is very difficult to apply successfully in many modeling problems because of the complexity and entropy of the free variables describing the complex system. Differences between the modeled variables and the variables of an individual, measured at specific moments, can be used for diagnostic purposes. The sensor fusion used to optimize the utilization of biomedical sensors is another point this paper focuses on. Sensor fusion has been known for its advantages in applications such as control and diagnostics of mechanical and chemical processes.

Keywords: Artificial Neural Networks, Diagnostic System, Health Condition Modeling Tool, Heart Diagnostics Model, Heart Electricity Model.
