Search results for: 5G applications
851 Performance of Type-2 Fuzzy Logic Control and Neuro-Fuzzy Control Based on DPC for Grid Connected DFIG with Fixed Switching Frequency
Authors: Fayssal Amrane, Azeddine Chaiba
Abstract:
In this paper, type-2 fuzzy logic control (T2FLC) and neuro-fuzzy control (NFC) for a doubly fed induction generator (DFIG) based on direct power control (DPC) with a fixed switching frequency are proposed for wind generation applications. First, a mathematical model of the doubly fed induction generator in the d-q reference frame is developed. Then, a DPC algorithm for controlling the active and reactive power of the DFIG at a fixed switching frequency is implemented using a PID controller. The performance of T2FLC and NFC, both based on the DPC algorithm, is investigated and compared to that obtained with the PID controller. Finally, simulation results demonstrate that the NFC is more robust and offers superior dynamic performance for wind power generation system applications.
Keywords: Doubly fed induction generator, direct power control, space vector modulation, type-2 fuzzy logic control, neuro-fuzzy control, maximum power point tracking.
850 Using FEM for Prediction of Thermal Post-Buckling Behavior of Thin Plates During Welding Process
Authors: Amin Esmaeilzadeh, Mohammad Sadeghi, Farhad Kolahan
Abstract:
Arc welding is an important joining process widely used in many industrial applications, including the production of automobiles, ship structures and metal tanks. In the welding process, the moving electrode causes a highly non-uniform temperature distribution that leads to residual stresses and various deviations, especially buckling distortions in thin plates. In order to control these deviations and increase the quality of welded plates, a fixture can be used as a practical, low-cost method with high efficiency. In this study, a coupled thermo-mechanical finite element model is implemented in ANSYS to simulate the behavior of thin plates located by a 3-2-1 positioning system during the welding process. Computational results are compared with recent similar works to validate the finite element models. The agreement between the results of the proposed model and other reported data shows that finite element modeling can accurately predict the behavior of welded thin plates.
Keywords: Welding, thin plate, buckling distortion, fixture locators, finite element modelling.
849 Mitigation of Electromagnetic Interference Generated by GPIB Control-Network in AC-DC Transfer Measurement System
Authors: M. M. Hlakola, E. Golovins, D. V. Nicolae
Abstract:
The field of instrumentation electronics is undergoing explosive growth due to its wide range of applications. The proliferation of electrical devices in close working proximity can negatively influence each other's performance. The degradation in performance is due to electromagnetic interference (EMI). This paper investigates the negative effects of electromagnetic interference originating in the General Purpose Interface Bus (GPIB) control-network of the AC-DC transfer measurement system. Remedial measures for reducing EMI-induced measurement errors and failures of a range of industrial devices have been explored. The AC-DC transfer measurement system was analysed for common-mode (CM) EMI effects. The coupling path was investigated further and the noise propagation mechanism was identified more accurately. To eliminate the common-mode ground loops identified between the GPIB system control circuit and the measurement circuit, a microcontroller-driven GPIB switching isolator device was designed, prototyped, programmed and validated. This mitigation technique has been shown to reduce EMI effectively.
Keywords: CM, EMI, GPIB, ground loops.
848 Refined Buckling Analysis of Rectangular Plates Under Uniaxial and Biaxial Compression
Authors: V. Piscopo
Abstract:
In the traditional buckling analysis of rectangular plates, the classical thin plate theory is generally applied, thereby neglecting the plating shear deformation. This method is clearly not fully appropriate for the analysis of thick plates, so in the following the two-variable refined plate theory proposed by Shimpi (2006), which takes transverse shear effects into account, is applied to the buckling analysis of simply supported isotropic rectangular plates compressed in one and two orthogonal directions. The relevant results are compared with the classical ones and, for rectangular plates under uniaxial compression, a new direct expression, similar to the classical Bryan's formula, is proposed for the Euler buckling stress. As buckling analysis is a widely diffused topic for a variety of structures, such as ship structures, some applications for plates uniformly compressed in one and two orthogonal directions are presented, and the relevant theoretical results are compared with those obtained by an FEM analysis carried out in ANSYS to show the feasibility of the presented method.
Keywords: Buckling analysis, Thick plates, Biaxial stresses.
847 Robust Camera Calibration using Discrete Optimization
Authors: Stephan Rupp, Matthias Elter, Michael Breitung, Walter Zink, Christian Küblbeck
Abstract:
Camera calibration is an indispensable step for augmented reality or image-guided applications where quantitative information should be derived from the images. Usually, a camera calibration is obtained by taking images of a special calibration object and extracting the image coordinates of projected calibration marks, enabling the calculation of the projection from the 3D world coordinates to the 2D image coordinates. Such a procedure comprises typical steps, including feature point localization in the acquired images, camera model fitting, correction of distortion introduced by the optics and finally an optimization of the model's parameters. In this paper we propose to extend this list by a further step concerning the identification of the optimal subset of images yielding the smallest overall calibration error. For this, we present a Monte Carlo based algorithm along with a deterministic extension that automatically determines the images yielding an optimal calibration. Finally, we present results showing that the calibration can be significantly improved by automated image selection.
Keywords: Camera Calibration, Discrete Optimization, Monte Carlo Method.
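The sketch below illustrates the Monte Carlo idea of selecting the image subset with the smallest RMS reprojection error, assuming OpenCV-style per-view lists of 3D/2D calibration marks. It is not the authors' exact algorithm (their deterministic extension is not shown); the function name and parameters are illustrative.

```python
# Hypothetical sketch of Monte Carlo calibration-image selection (not the paper's exact algorithm).
import random
import cv2

def monte_carlo_calibration(obj_points, img_points, image_size,
                            subset_size=10, trials=200, seed=0):
    """obj_points/img_points: per-view lists of 3D and 2D calibration marks."""
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        idx = rng.sample(range(len(obj_points)), subset_size)
        objs = [obj_points[i] for i in idx]
        imgs = [img_points[i] for i in idx]
        # cv2.calibrateCamera returns the RMS reprojection error first
        rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(objs, imgs, image_size, None, None)
        if best is None or rms < best[0]:
            best = (rms, K, dist, idx)
    return best  # (smallest RMS error, intrinsics, distortion, chosen image indices)
```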
846 A Distance Function for Data with Missing Values and Its Application
Authors: Loai AbdAllah, Ilan Shimshoni
Abstract:
Missing values are common in real-world data. Since the performance of many data mining algorithms depends critically on being given a good metric over the input space, in this paper we define a distance function for unlabeled datasets with missing values. We use the Bhattacharyya distance, which measures the similarity of two probability distributions, to define our new distance function. According to this distance, the distance between two points without missing attribute values is simply the Mahalanobis distance. When, on the other hand, one of the coordinates has a missing value, the distance is computed according to the distribution of the missing coordinate. Our distance is general and can be used as part of any algorithm that computes the distance between data points. Because the performance of the k nearest neighbor (kNN) classifier depends strongly on the chosen distance measure, we used it to evaluate the ability of our distance to accurately reflect object similarity. We experimented on standard numerical datasets from the UCI repository from different fields. On these datasets we simulated missing values and compared the performance of the kNN classifier using our distance to three other basic methods. Our experiments show that kNN using our distance function outperforms kNN using the other methods. Moreover, the runtime of our method is only slightly higher than that of the other methods.
Keywords: Missing values, Distance metric, Bhattacharyya distance.
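As background to the construction above, the sketch below computes the standard Bhattacharyya distance between two multivariate Gaussians; when the two covariances coincide it reduces to one eighth of the squared Mahalanobis distance, which matches the complete-attribute case mentioned in the abstract. This is only the building block, not the authors' full missing-value procedure.

```python
# Standard Bhattacharyya distance between N(mu1, S1) and N(mu2, S2);
# not the authors' complete missing-value procedure, only the building block it relies on.
import numpy as np

def bhattacharyya_gaussian(mu1, S1, mu2, S2):
    S = 0.5 * (S1 + S2)
    diff = mu1 - mu2
    term1 = 0.125 * diff @ np.linalg.solve(S, diff)
    term2 = 0.5 * np.log(np.linalg.det(S) /
                         np.sqrt(np.linalg.det(S1) * np.linalg.det(S2)))
    return term1 + term2

# With equal covariances the second term vanishes and the distance equals
# one eighth of the squared Mahalanobis distance between the two means.
```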
845 Gamification of eHealth Business Cases to Enhance Rich Learning Experience
Authors: Kari Björn
Abstract:
The introduction of games has expanded the application area of computer-aided learning tools to a wide variety of learner age groups. Serious games engage learners in a real-world type of simulation and potentially enrich the learning experience. The institutional background of a Bachelor's level engineering program in Information and Communication Technology is introduced, with detailed focus on one of its majors, Health Technology. As part of a Customer Oriented Software Application thematic semester, one particular course, "eHealth Business and Solutions", is described and reflected on in a gamified framework. Building a consistent view of the vast literature on business management, strategy, marketing and finance in a very limited time forces a selection of topics relevant to the industry. Health Technology is a novel and growing industry with a growing sector in consumer wearable devices and homecare applications. The business sector is attracting new entrepreneurs and impatient investor funds. From an engineering education point of view, the sector is driven by miniaturized electronics, sensors and wireless applications. However, the market is highly consumer-driven, and usability, safety and data integrity requirements are extremely high. When the same technology is used in the analysis or treatment of patients, very strict regulatory measures are enforced. The paper introduces a course structure that uses gamification as a tool to learn the essentials of a new market: customer value proposition design, followed by a market entry game. Students analyze the existing market size and pricing structure of the eHealth web-service market and enter the market as the steering group of their company, competing against the legacy players and against each other. The market is growing but has its rules of demand and supply balance. New products can be developed with an R&D investment and targeted to the market with unique quality and price combinations. The product cost structure can be improved by investing in enhanced production capacity. Investments can optionally be funded by foreign capital. Students make management decisions and face the dynamics of market competition in the form of an income statement and balance sheet after each decision cycle. The focus of the learning outcome is to understand that customer value creation is the source of cash flow. The benefit of gamification is to enrich the learning experience regarding the structure and meaning of financial statements. The paper describes the gamification approach and discusses outcomes after two course implementations. Alongside the description of learning challenges, some unexpected misconceptions are noted. Improvements to the game and to the semi-gamified teaching pedagogy are discussed. The case description serves as additional support for a new game coordinator, as well as helping to improve the method. Overall, the gamified approach has helped to engage engineering students in business studies in an energizing way.
Keywords: Engineering education, integrated curriculum, learning experience, learning outcomes.
844 GSM-Based Approach for Indoor Localization
Authors: M. Stella, M. Russo, D. Begušić
Abstract:
The ability to estimate location accurately and reliably in indoor environments is the key issue in developing a great number of context-aware applications and Location Based Services (LBS). Today, the most viable solution for localization is the Received Signal Strength (RSS) fingerprinting based approach using wireless local area networks (WLAN). This paper presents two RSS fingerprinting based approaches: first we employ the widely used WLAN based positioning as a reference system, and then we investigate the possibility of using GSM signals for positioning. To compare them, we developed a positioning system in a real-world environment, where realistic RSS measurements were collected. A Multi-Layer Perceptron (MLP) neural network was used as the approximation function that maps RSS fingerprints to locations. Experimental results indicate the advantage of the WLAN based approach in the sense of lower localization error compared to the GSM based approach, but GSM signal coverage by far outreaches WLAN coverage, and for LBS requiring less precise accuracy our results indicate that GSM positioning can also be a viable solution.
Keywords: Indoor positioning, WLAN, GSM, RSS, location fingerprints, neural network.
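A minimal sketch of the fingerprinting idea is given below, using scikit-learn's MLPRegressor to map RSS vectors to (x, y) positions. The array names, placeholder data and network size are assumptions for illustration, not the configuration used in the paper.

```python
# Illustrative RSS-fingerprinting sketch (scikit-learn); not the paper's exact MLP setup.
import numpy as np
from sklearn.neural_network import MLPRegressor

# rss_train: (n_samples, n_access_points) RSS fingerprints; xy_train: (n_samples, 2) positions
rss_train = np.random.uniform(-100, -40, size=(500, 8))   # placeholder data
xy_train = np.random.uniform(0, 30, size=(500, 2))        # placeholder data

mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
mlp.fit(rss_train, xy_train)                 # learn the RSS -> (x, y) mapping
xy_hat = mlp.predict(rss_train[:5])          # estimated positions for new fingerprints
err = np.linalg.norm(xy_hat - xy_train[:5], axis=1)  # localization error in metres
```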
843 Data Mining Classification Methods Applied in Drug Design
Authors: Mária Stachová, Lukáš Sobíšek
Abstract:
Data mining incorporates a group of statistical methods used to analyze a set of information, or a data set. It operates with models and algorithms, which are powerful tools with great potential. They can help people understand the patterns in a certain chunk of information, so data mining tools clearly have a wide area of applications. For example, in theoretical chemistry data mining tools can be used to predict molecule properties or improve computer-assisted drug design. Classification analysis is one of the major data mining methodologies. The aim of the contribution is to create a classification model that can deal with a huge data set with high accuracy. For this purpose, logistic regression, Bayesian logistic regression and random forest models were built using the R software. A Bayesian logistic regression model was also created in the Latent GOLD software. These classification methods belong to supervised learning methods. It was necessary to reduce the data matrix dimension before constructing the models, and thus factor analysis (FA) was used. The models were applied to predict the biological activity of molecules, potential new drug candidates.
Keywords: data mining, classification, drug design, QSAR
842 Performance Analysis of OQSMS and MDDR Scheduling Algorithms for IQ Switches
Authors: K. Navaz, Kannan Balasubramanian
Abstract:
Due to the increasing growth of internet users, the emerging applications of multicast are growing day by day, and there is a need for the design of high-speed switches/routers. A huge amount of effort has gone into the research area of multicast switch fabric design and algorithms. Different traffic scenarios are the influencing factors that affect the throughput and delay of the switch. Pointer-based multicast scheduling algorithms do not perform well under non-uniform traffic conditions. In this work, the performance of the switch has been analyzed by applying the advanced multicast scheduling algorithms OQSMS (Optimal Queue Selection Based Multicast Scheduling Algorithm), MDDR (Multicast Due Date Round-Robin Scheduling Algorithm) and MDRR (Multicast Dual Round-Robin Scheduling Algorithm). The results show that OQSMS achieves better switching performance than the other algorithms under uniform, non-uniform and bursty traffic conditions, and that it estimates the optimal queue in each time slot so that it achieves the maximum possible throughput.
Keywords: Multicast, Switch, Delay, Scheduling.
841 Comparative Study of Different Enhancement Techniques for Computed Tomography Images
Authors: C. G. Jinimole, A. Harsha
Abstract:
One of the key problems in the analysis of Computed Tomography (CT) images is the poor contrast of the images. Image enhancement can be used to improve the visual clarity and quality of the images or to provide a better transformation representation for further processing. Contrast enhancement is one of the accepted methods of image enhancement in various applications in the medical field. It is helpful for visualizing and extracting details of brain infarctions, tumors, and cancers from CT images. This paper presents a comparison study of five contrast enhancement techniques suitable for CT images: Power Law Transformation, Logarithmic Transformation, Histogram Equalization, Contrast Stretching, and Laplacian Transformation. All these techniques are compared with each other to find out which enhancement provides better contrast of the CT image. For the comparison of the techniques, the parameters Peak Signal to Noise Ratio (PSNR) and Mean Square Error (MSE) are used. Logarithmic Transformation provided the clearest and best quality image compared to all other techniques studied and obtained the highest value of PSNR. The comparison concludes with the better approach for future research, especially for mapping abnormalities in CT images resulting from brain injuries.
Keywords: Computed tomography, enhancement techniques, increasing contrast, PSNR and MSE.
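For reference, two of the compared point operations and the two evaluation metrics named above can be sketched as follows; scaling constants and the 8-bit intensity range are illustrative assumptions, not values from the paper.

```python
# Sketch of the logarithmic and power-law transformations plus the MSE/PSNR metrics (illustrative only).
import numpy as np

def log_transform(img, max_val=255.0):
    c = max_val / np.log(1.0 + max_val)            # scale so output stays within [0, max_val]
    return c * np.log(1.0 + img.astype(np.float64))

def power_law(img, gamma, max_val=255.0):
    return max_val * (img.astype(np.float64) / max_val) ** gamma  # gamma correction

def mse(original, enhanced):
    return np.mean((original.astype(np.float64) - enhanced.astype(np.float64)) ** 2)

def psnr(original, enhanced, max_val=255.0):
    return 10.0 * np.log10(max_val ** 2 / mse(original, enhanced))
```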
840 Privacy-Preserving Location Sharing System with Client/Server Architecture in Mobile Online Social Network
Authors: Xi Xiao, Chunhui Chen, Xinyu Liu, Guangwu Hu, Yong Jiang
Abstract:
Location sharing is a fundamental service in mobile Online Social Networks (mOSNs), and it has raised significant privacy concerns in recent years. Currently, most location-based service applications adopt a client/server architecture. In this paper, a location sharing system named CSLocShare is presented to provide flexible privacy-preserving location sharing with client/server architecture in mOSNs. CSLocShare enables location sharing between both trusted social friends and untrusted strangers without a third-party server. In CSLocShare, the Location-Storing Social Network Server (LSSNS) provides location-based services but does not know the users' real locations. A thorough analysis indicates that the users' location privacy is protected. Meanwhile, storage and communication costs are reduced. CSLocShare is more suitable and effective in practice.
Keywords: Client/server architecture, location sharing, mobile online social networks, privacy-preserving.
839 Extraction, Characterization and Application of Natural Dyes from the Fresh Rind of Index Colour 5 Mangosteen (Garcinia mangostana L.)
Authors: T. Basitah
Abstract:
This study explored and utilized the fresh rind of Index Colour 5 mangosteen as an upcoming raw material for the production of natural dyes. Rind from the fresh Index Colour 5 mangosteen was used to extract the dyes. The resulting extracts were tested on silk fabrics via three types of mordanting and dyeing procedures: pre-mordanting, simultaneous mordanting and post-mordanting. The application of the freeze-drying methodology and mechanizable equipment helped to produce an excellent range of natural colours. Silk fabric treated with simultaneous mordanting and dyeing with the Index Colour 5 extract produced a brilliant shade of red, and the colour from this index was also found to be sensitive to light and washing during the fastness tests. The preliminary evaluation and instrumentation analysis allowed us to examine whether the application of different mordanting and dyeing procedures with the same extract samples and concentrations affected the colours and shades of the fabric samples.
Keywords: Natural dye, Freeze-drying, Garcinia mangostana Linn, Mordanting.
838 Design and Analysis of an 8T Read Decoupled Dual Port SRAM Cell for Low Power High Speed Applications
Authors: Ankit Mitra
Abstract:
Speed, power consumption and area are some of the most important factors of concern in modern-day memory design. As we move towards deep sub-micron technologies, the problems of leakage current, noise and cell stability due to physical parameter variation become more pronounced. In this paper we have designed an 8T Read Decoupled Dual Port SRAM Cell with dual threshold voltage and characterized it in terms of read and write delay, read and write noise margins, Data Retention Voltage and leakage current. Read decoupling improves the Read Noise Margin, and static power dissipation is reduced by using dual-Vt transistors. The results obtained are compared with existing 6T, 8T and 9T SRAM cells, which shows the superiority of the proposed design. The cell is designed and simulated in TSPICE using a 90 nm CMOS process.
Keywords: CMOS, Dual-Port, Data Retention Voltage, 8T SRAM, Leakage Current, Noise Margin, Loop-cutting, Single-ended.
837 Analysis and Categorization of e-Learning Activities Based On Meaningful Learning Characteristics
Authors: Arda Yunianta, Norazah Yusof, Mohd Shahizan Othman, Dewi Octaviani
Abstract:
Learning is the acquisition of new mental schemata, knowledge, abilities and skills which can be used to solve problems potentially more successfully. The learning process is optimal when it is assisted and personalized. Learning is not a single activity, but should involve many possible activities to make learning meaningful. Many e-learning applications provide facilities to support teaching and learning activities. One way to identify whether an e-learning system is being used by the learners is through the number of hits that can be obtained from the e-learning system's log data. However, we cannot rely solely on the number of hits to determine whether learning has occurred meaningfully. This is because meaningful learning should engage five characteristics, namely active, constructive, intentional, authentic and cooperative. This paper aims to analyze the e-learning activities that are meaningful to learning. By focusing on the meaningful learning characteristics, we match them to the corresponding Moodle e-learning activities. This analysis discovers the activities that have a high impact on meaningful learning, as well as activities that are less meaningful. The high-impact activities are given high weights since they are important to meaningful learning, while the low-impact activities have less weight and are said to be supportive e-learning activities. The result of this analysis helps us categorize which e-learning activities are meaningful to learning and guides us in measuring the effectiveness of e-learning usage.
Keywords: e-learning system, e-learning activity, meaningful learning characteristics, Moodle
836 Cytotoxic Effects of Engineered Nanoparticles in Human Mesenchymal Stem Cells
Authors: Ali A. Alshatwi, Vaiyapuri S. Periasamy, Jegan Athinarayanan
Abstract:
The usage of engineered nanoparticles has rapidly increased in various applications in the last decade due to their unusual properties. However, there is an ever-increasing concern to understand their toxicological effects on human health. In particular, metal and metal oxide nanoparticles have been used in various sectors including biomedical, food and agriculture, but their impact on human health is yet to be fully understood. In the present investigation, we assessed the toxic effect of engineered nanoparticles (ENPs), including Ag, MgO and Co3O4 nanoparticles (NPs), on human mesenchymal stem cells (hMSC), adopting cell viability and cellular morphological changes as tools. The results suggest that silver NPs are more toxic than MgO and Co3O4 NPs. The ENPs induced cytotoxicity and nuclear morphological changes in hMSC in a dose-dependent manner. Cell viability decreases with increasing concentration of ENPs. The cellular morphology studies revealed that the ENPs damaged the cells. These preliminary findings have implications for the use of these nanoparticles in the food industry, subject to systematic regulation.
Keywords: Cobalt oxide, Human mesenchymal stem cells, MgO, Silver.
835 A Bi-Objective Model for Location-Allocation Problem within Queuing Framework
Authors: Amirhossein Chambari, Seyed Habib Rahmaty, Vahid Hajipour, Aida Karimi
Abstract:
This paper proposes a bi-objective model for the facility location problem under a congestion system. The idea of the model is motivated by applications such as locating servers for bank automated teller machines (ATMs), communication networks, and so on. The model is specifically suited to situations in which fixed service facilities are congested by stochastic demand within a queueing framework. We formulate the model from two perspectives simultaneously: (i) the customers and (ii) the service provider. The objectives of the model are to minimize (i) the total expected travelling and waiting time and (ii) the average facility idle time. The model is a mixed-integer nonlinear programming problem which belongs to the class of NP-hard problems. To solve the model, two metaheuristic algorithms, the non-dominated sorting genetic algorithm (NSGA-II) and the non-dominated ranking genetic algorithm (NRGA), are proposed. In addition, to evaluate the performance of the two algorithms, some numerical examples are produced and analyzed with several metrics to determine which algorithm works better.
Keywords: Queuing, Location, Bi-objective, NSGA-II, NRGA
834 The Development of the Multi-Agent Classification System (MACS) in Compliance with FIPA Specifications
Authors: Mohamed R. Mhereeg
Abstract:
The paper investigates the feasibility of constructing a software multi-agent based monitoring and classification system and utilizing it to provide an automated and accurate classification of end users developing applications in the spreadsheet domain. The agents function autonomously to provide continuous and periodic monitoring of Excel spreadsheet workbooks. This resulted in the development of the Multi-Agent Classification System (MACS), which complies with the specifications of the Foundation for Intelligent Physical Agents (FIPA). Different technologies have been brought together to build MACS. The strength of the system is the integration of agent technology and the FIPA specifications with other technologies, namely Windows Communication Foundation (WCF) services, Service Oriented Architecture (SOA), and Oracle Data Mining (ODM). Microsoft's .NET Windows-service based agents were utilized to develop the monitoring agents of MACS; the .NET WCF services together with the SOA approach allowed distribution of and communication between agents over the WWW, in order to satisfy the monitoring and classification of the multiple-developer aspect. ODM was used to automate the classification phase of MACS.
Keywords: Autonomous, Classification, MACS, Multi-Agent, SOA, WCF.
833 Characterization of Extreme Low-Resolution Digital Encoder for Control System with Sinusoidal Reference Signal
Authors: Zhenyu Zhang, Qingbin Gao
Abstract:
A low-resolution digital encoder (LRDE) is commonly adopted as a position sensor in low-cost and resource-constrained applications. Traditionally, a digital encoder is modeled as a quantizer without considering the initial position of the LRDE. However, this cannot be applied to an extreme LRDE, for which the stroke of angular motion is only a few times the resolution of the encoder. Moreover, the actual angular motion is substantially distorted by such an extreme LRDE, so that the encoder reading does not faithfully represent the actual angular motion. This paper presents a modeling method for the extreme LRDE that takes into account its initial position. For a control system with a sinusoidal reference signal and an extreme LRDE, the paper analyzes the characteristics of the angular motion. Specifically, two descriptors of sinusoidal angular motion are studied, which shed light on the actual angular motion obtained from an extreme LRDE.
Keywords: Low resolution digital encoder, resource-constrained control system, sinusoidal reference signal, servo motion control.
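The sketch below is an illustrative quantizer-with-offset model of such an encoder, not the authors' exact model: theta0, the initial position within one count, is the quantity the abstract argues cannot be ignored when the sinusoidal stroke spans only a few counts.

```python
# Illustrative LRDE model (quantizer with an initial-position offset); not the paper's exact model.
import numpy as np

def lrde_reading(theta, resolution, theta0=0.0):
    """Counts reported by an idealized LRDE for the actual angle theta (rad),
    given the encoder resolution (rad/count) and the initial offset theta0 within a count."""
    return np.floor((theta - theta0) / resolution)

# Sinusoidal reference barely larger than the resolution: the reading becomes a coarse,
# offset-dependent staircase rather than a faithful copy of the motion.
t = np.linspace(0.0, 1.0, 1000)
theta = 3 * 0.01 * np.sin(2 * np.pi * t)            # stroke only a few counts wide
counts = lrde_reading(theta, resolution=0.01, theta0=0.004)
```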
832 Nanobiocomposites with Enhanced Cell Proliferation and Improved Mechanical Properties Based on Organomodified-Nanoclay and Silicone Rubber
Authors: M. S. Hosseini, M. Tazzoli-Shadpour, I. Amjadi, A. A. Katbab, E. Jaefargholi-Rangraz
Abstract:
Bionanotechnology deals with nanoscopic interactions between nanostructured materials and biological systems. Polymer nanocomposites with optimized biological activity have attracted great attention. Nanoclay is considered a reinforcing nanofiller in the manufacturing of high-performance nanocomposites. In the current study, organomodified nanoclay with negatively charged silicate layers was incorporated into biomedical-grade silicone rubber. The nanoparticle loading was tailored to enhance cell behavior. Addition of nanoparticles led to improved mechanical properties of the substrate, with enhanced strength and stiffness, while no toxic effects were observed. Results indicated improved viability and proliferation of cells upon addition of the nanofillers. The improved mechanical properties of the matrix result in a proper cell response through adjustment and arrangement of cytoskeletal fibers. The results can be applied in tissue engineering when enhanced substrates are required to improve cell behavior for in vivo applications.
Keywords: Biocompatibility, Composite, Organomodified-Nanoclay, Proliferation
831 A Sequential Approach to Random-Effects Meta-Analysis
Authors: Samson Henry Dogo, Allan Clark, Elena Kulinskaya
Abstract:
The objective of meta-analysis is to combine results from several independent studies in order to create generalizations and provide an evidence base for decision making. However, recent studies show that the magnitude of effect size estimates reported in many areas of research has changed significantly over time, which can impair the results and conclusions of a meta-analysis. A number of sequential methods have been proposed for monitoring the effect size estimates in meta-analysis, but they are based on statistical theory applicable only to the fixed-effect model (FEM) of meta-analysis. For the random-effects model (REM), the analysis incorporates the heterogeneity variance τ², and its estimation creates complications. In this paper we study the use of a truncated CUSUM-type test with asymptotically valid critical values for sequential monitoring in the REM. Simulation results show that the test does not control the Type I error well, and it is not recommended. Further work is required to derive an appropriate test in this important area of application.
Keywords: Meta-analysis, random-effects model, sequential testing, temporal changes in effect sizes.
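For reference, the random-effects model and the widely used DerSimonian-Laird estimate of the heterogeneity variance τ² referred to above are (standard background, not the authors' sequential test):

```latex
y_i = \theta + u_i + e_i,\qquad u_i \sim N(0,\tau^2),\qquad e_i \sim N(0,v_i),
\qquad w_i = 1/v_i,
\]
\[
\hat{\tau}^2_{\mathrm{DL}} = \max\!\left(0,\;
  \frac{Q-(k-1)}{\sum_i w_i - \sum_i w_i^2\big/\sum_i w_i}\right),
\qquad Q=\sum_i w_i\,(y_i-\bar{y}_w)^2,
```

where ȳ_w is the inverse-variance weighted (fixed-effect) mean of the k study estimates y_i.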
830 Automatic Classification of Periodic Heart Sounds Using Convolutional Neural Network
Authors: Jia Xin Low, Keng Wah Choo
Abstract:
This paper presents an automatic normal and abnormal heart sound classification model developed based on a deep learning algorithm. MITHSDB heart sound datasets obtained from the 2016 PhysioNet/Computing in Cardiology Challenge database were used in this research, with the assumption that the electrocardiograms (ECG) were recorded simultaneously with the heart sounds (phonocardiogram, PCG). The PCG time series are segmented per heart beat, and each sub-segment is converted to form a square intensity matrix and classified using convolutional neural network (CNN) models. This approach removes the need to provide classification features for the supervised machine learning algorithm; instead, the features are determined automatically through training, from the time series provided. The results show that the prediction model is able to provide reasonable and comparable classification accuracy despite its simple implementation. This approach can be used for real-time classification of heart sounds in the Internet of Medical Things (IoMT), e.g. remote monitoring applications of the PCG signal.
Keywords: Convolutional neural network, discrete wavelet transform, deep learning, heart sound classification.
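A minimal sketch of a CNN that classifies per-beat square intensity matrices as normal or abnormal is shown below (PyTorch). The 64x64 input size and the layer widths are assumptions for illustration, not the architecture used in the paper.

```python
# Illustrative per-beat PCG classifier; the 64x64 input and layer sizes are assumptions.
import torch
import torch.nn as nn

class HeartSoundCNN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)  # for 64x64 inputs

    def forward(self, x):                  # x: (batch, 1, 64, 64) intensity matrices
        z = self.features(x)
        return self.classifier(z.flatten(1))

model = HeartSoundCNN()
logits = model(torch.randn(8, 1, 64, 64))  # one normal/abnormal score pair per heart beat
```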
829 Robust Fractional-Order PI Controller with Ziegler-Nichols Rules
Authors: Mazidah Tajjudin, Mohd Hezri Fazalul Rahiman, Norhashim Mohd Arshad, Ramli Adnan
Abstract:
In process control applications, more than 90% of the controllers are of PID type. This paper proposes a robust PI controller with a fractional-order integrator. The PI parameters were obtained using the classical Ziegler-Nichols rules, enhanced by the application of an error filter cascaded to the fractional-order PI. The controller was applied to a steam temperature process described by a FOPDT transfer function. The process can be classified as a lag-dominated process with a very small relative dead time. The proposed control scheme was compared with other PI controllers tuned using the Ziegler-Nichols and AMIGO rules. Another PI controller with a fractional-order integrator, known as F-MIGO, was also considered. All the controllers were subjected to set point change and load disturbance tests. The performance was measured using the Integral of Squared Error (ISE) and the Integral of Control Signal (ICO). The proposed controller produced the best performance in all tests, with the lowest ISE index.
Keywords: PID controller, fractional-order PID controller, PI control tuning, steam temperature control, Ziegler-Nichols tuning.
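For context, the classical Ziegler-Nichols open-loop PI rules for a FOPDT process with gain K, time constant T and dead time L, together with the ISE index used above, can be sketched as follows. These are standard textbook rules; the paper's fractional-order integrator and error filter are not reproduced, and the FOPDT parameters shown are illustrative only.

```python
# Classical Ziegler-Nichols open-loop PI rules for a FOPDT process K*exp(-L*s)/(T*s + 1);
# the paper's fractional-order integrator and error filter are not reproduced here.
import numpy as np

def zn_pi(K, T, L):
    Kp = 0.9 * T / (K * L)      # proportional gain
    Ti = L / 0.3                # integral time (about 3.33 * L)
    return Kp, Ti

def ise(error, dt):
    """Integral of Squared Error used as the performance index."""
    return np.sum(np.asarray(error) ** 2) * dt

Kp, Ti = zn_pi(K=1.2, T=120.0, L=6.0)   # illustrative FOPDT parameters only
```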
828 Square Printed Monopole Antenna for Wireless Applications
Authors: Rekha P. Labade, Shankar B. Deosarkar, Narayan Pisharoty
Abstract:
In this article, the design and optimization of a square printed monopole antenna for wireless applications are presented. The theory of characteristic modes (TCM) is used for the analysis of the current modes on the antenna. The TCM analysis shows that a beveled ground plane improves the impedance bandwidth. The antenna operates over the frequency range from 1.860 GHz to 5 GHz for a VSWR ≤ 2, covering GSM (1900-1990 MHz), IMT-2000 (1920-2170 MHz), Bluetooth (2400-2484 MHz) and the lower band of ultrawideband (UWB). The stable radiation pattern shows minimal pulse distortion. The radiation pattern is omni-directional along the H-plane and figure-of-eight along the E-plane. The size of the proposed antenna is 39 mm x 29 mm x 1.6 mm. The antenna is simulated in the CAD FEKO suite (6.2) using the method of moments. A prototype antenna was fabricated on an FR4 dielectric substrate with a dielectric constant of 4.4 and a loss tangent of 0.02 to validate the simulated results of the proposed antenna. The measured results are in good agreement with the simulated results.
Keywords: Destructive Ground Surface (DGS), Method of moment, Theory of characteristics mode, UWB, VSWR.
827 Contention Window Adjustment in IEEE 802.11-Based Industrial Wireless Networks
Authors: Mohsen Maadani, Seyed Ahmad Motamedi
Abstract:
The use of wireless technology in industrial networks has gained considerable attention in recent years. In this paper, we thoroughly analyze the effect of contention window (CW) size on the performance of IEEE 802.11-based industrial wireless networks (IWN) from a delay and reliability perspective. The results show that the default values of CWmin, CWmax, and the retry limit (RL) are far from optimal due to industrial application characteristics, including short packets and a noisy environment. An adaptive, payload-dependent CW algorithm is proposed to minimize the average delay. Finally, a simple but effective CW and RL setting is proposed for industrial applications, which outperforms the minimum-average-delay solution from a maximum-delay and jitter perspective, at the cost of a slightly higher average delay. Simulation results show an improvement of up to 20%, 25%, and 30% in average delay, maximum delay and jitter, respectively.
Keywords: Average Delay, Contention Window, Distributed Coordination Function (DCF), Jitter, Industrial Wireless Network (IWN), Maximum Delay, Reliability, Retry Limit.
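As background to the CW analysis, the standard DCF-style binary exponential backoff governed by CWmin, CWmax and the retry limit can be sketched as below; the paper's payload-dependent adaptive CW rule is not reproduced, and the parameter defaults shown are only the common 802.11 values used for illustration.

```python
# Standard DCF-style binary exponential backoff (background only; the paper's
# payload-dependent adaptive CW rule is not reproduced here).
import random

def backoff_slots(retry, cw_min=15, cw_max=1023):
    """Backoff counter drawn for the given retransmission attempt number."""
    cw = min((cw_min + 1) * (2 ** retry) - 1, cw_max)  # CW doubles per retry, capped at cw_max
    return random.randint(0, cw)

def transmit(send_ok, retry_limit=7):
    """Retry until success or the retry limit is reached; returns attempts used, or None if dropped."""
    for retry in range(retry_limit + 1):
        slots = backoff_slots(retry)       # wait this many idle slots before (re)sending
        if send_ok():
            return retry + 1
    return None                            # frame dropped after exceeding the retry limit
```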
826 Optimum Signal-to-Noise Ratio Performance of Electron Multiplying Charge Coupled Devices
Authors: Wen W. Zhang, Qian Chen
Abstract:
Electron multiplying charge coupled devices (EMCCDs) have revolutionized the world of low-light imaging by introducing on-chip multiplication gain based on the impact ionization effect in silicon. They combine sub-electron readout noise with high frame rates. The Signal-to-Noise Ratio (SNR) is an important performance parameter for low-light-level imaging systems. This work investigates the SNR performance of an EMCCD operated in Non-Inverted Mode (NIMO) and Inverted Mode (IMO). The theory of the noise characteristics and operation modes is presented. The results show that the SNR is determined by the dark current and the clock-induced charge at high gain levels. The optimum SNR performance is provided by an EMCCD operated in NIMO for short-exposure, strongly cooled applications; otherwise, an IMO EMCCD is preferable.
Keywords: Electron multiplying charge coupled devices, noise characteristics, operation modes, signal-to-noise ratio performance
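A commonly used form of the EMCCD SNR discussed above, with detected signal S (photoelectrons), dark current D over integration time t, clock-induced charge CIC, read noise σr, EM gain G and excess-noise factor F (approximately √2 at high gain), is the textbook expression below; it is not necessarily the exact model used by the authors.

```latex
\mathrm{SNR} \;=\; \frac{S}{\sqrt{F^{2}\,\bigl(S + D\,t + \mathrm{CIC}\bigr) + \left(\sigma_{r}/G\right)^{2}}}
```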
825 Flexible Cities: A Multisided Spatial Application of Tracking Livability of Urban Environment
Authors: Maria Christofi, George Plastiras, Rafaella Elia, Vaggelis Tsiourtis, Theocharis Theocharides, Miltiadis Katsaros
Abstract:
The rapidly expanding urban areas of the world constitute a challenge of how we make the transition to "the next urbanization", which will be defined by new analytical tools and new sources of data. This paper is about the production of a spatial application, 'FUMapp', in which space and its initiative will be available literally, in meters, but also abstractly, at a sensed level. While existing spatial applications typically focus on illustrations of the urban infrastructure, the suggested application goes beyond them: it investigates how our perception of the environment adapts to alterations of the built environment, through the construction of a dataset of biophysical measurements (eye tracking, heart beat) and physical metrics (spatial characteristics, size of stimuli, rhythm of mobility). It explores the intersections between architecture, cognition, and computing where future design can be improved, and identifies the flexibility and livability of the 'available space' of specific examined urban paths.
Keywords: Biophysical data, flexibility of urban, livability, next urbanization, spatial application.
824 Frequent Itemset Mining Using Rough-Sets
Authors: Usman Qamar, Younus Javed
Abstract:
Frequent pattern mining is the process of finding a pattern (a set of items, subsequences, substructures, etc.) that occurs frequently in a data set. It was proposed in the context of frequent itemsets and association rule mining. Frequent pattern mining is used to find inherent regularities in data: what products were often purchased together? Its applications include basket data analysis, cross-marketing, catalog design, sale campaign analysis, Web log (click stream) analysis, and DNA sequence analysis. However, one of the bottlenecks of frequent itemset mining is that, as the data increase, the amount of time and resources required to mine the data increases at an exponential rate. In this investigation, a new algorithm is proposed which can be used as a pre-processor for frequent itemset mining. FASTER (FeAture SelecTion using Entropy and Rough sets) is a hybrid pre-processor algorithm which utilizes entropy and rough sets to carry out record reduction and feature (attribute) selection, respectively. FASTER as a pre-processor for frequent itemset mining can produce a speed-up of 3.1 times compared to the original algorithm while maintaining an accuracy of 71%.
Keywords: Rough-sets, Classification, Feature Selection, Entropy, Outliers, Frequent itemset mining.
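The sketch below shows only the Shannon-entropy building block that FASTER-style pre-processors use to score attributes before the rough-set based selection; it is not the FASTER algorithm itself, and the column handling and ranking direction are illustrative assumptions.

```python
# Shannon entropy of a discrete attribute column -- the building block used for attribute scoring;
# the full rough-set reduction of the FASTER pre-processor is not shown.
import numpy as np

def entropy(column):
    values, counts = np.unique(column, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def rank_attributes(data):
    """data: (n_records, n_attributes) array of discrete attribute values."""
    scores = [entropy(data[:, j]) for j in range(data.shape[1])]
    return np.argsort(scores)              # attributes ordered by increasing entropy
```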
823 SC-LSH: An Efficient Indexing Method for Approximate Similarity Search in High Dimensional Space
Authors: Sanaa Chafik, Imane Daoudi, Mounim A. El Yacoubi, Hamid El Ouardi
Abstract:
Locality Sensitive Hashing (LSH) is one of the most promising techniques for solving the nearest neighbour search problem in high dimensional space. Euclidean LSH is the most popular variation of LSH and has been successfully applied in many multimedia applications. However, Euclidean LSH presents limitations that affect structure and query performance. Its main limitation is the large memory consumption: in order to achieve good accuracy, a large number of hash tables is required. In this paper, we propose a new hashing algorithm to overcome the storage space problem and improve query time, while keeping accuracy similar to that achieved by the original Euclidean LSH. Experimental results on a real large-scale dataset show that the proposed approach achieves good performance and consumes less memory than the Euclidean LSH.
Keywords: Approximate Nearest Neighbor Search, Content based image retrieval (CBIR), Curse of dimensionality, Locality sensitive hashing, Multidimensional indexing, Scalability.
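The Euclidean (p-stable) LSH family the abstract builds on hashes a vector v as h(v) = ⌊(a·v + b)/w⌋ with Gaussian a and uniform b in [0, w); a minimal sketch is given below. The memory-saving SC-LSH scheme proposed in the paper is not reproduced, and the bucket width and table size are illustrative.

```python
# Standard Euclidean (p-stable) LSH family h(v) = floor((a.v + b) / w);
# the paper's SC-LSH modifications are not reproduced here.
import numpy as np

class EuclideanLSH:
    def __init__(self, dim, num_hashes=8, w=4.0, seed=0):
        rng = np.random.default_rng(seed)
        self.a = rng.standard_normal((num_hashes, dim))   # Gaussian projection vectors
        self.b = rng.uniform(0.0, w, size=num_hashes)     # random offsets in [0, w)
        self.w = w

    def hash(self, v):
        return tuple(np.floor((self.a @ v + self.b) / self.w).astype(int))

lsh = EuclideanLSH(dim=128)
bucket = lsh.hash(np.random.rand(128))   # nearby vectors tend to share this bucket key
```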
822 Optimization of Design Parameters for Wire Mesh Fin Arrays as a Heat Sink Using Taguchi Method
Authors: Kavita H. Dhanawade, Hanamant S. Dhanawade
Abstract:
Heat transfer enhancement devices such as extended surfaces (fins) are chosen for their thermal performance as well as for other design parameters, depending on the application. The present paper is an experimental study investigating the heat transfer enhancement through wire mesh fin arrays mounted on a horizontal base plate. The data used in the performance analysis were obtained experimentally for mild steel at heat inputs of 40, 60, 80, 100 and 120 W, by varying the wire mesh diameter, fin height and spacing between two fin arrays. Using the Taguchi experimental design method, the optimum design parameters and their levels were investigated. The average heat transfer coefficient was considered as the performance characteristic. An L9 (3^3) orthogonal array was selected as the experimental plan. Optimum results were found experimentally. It is observed that the wire mesh diameter and fin height have a higher impact on the heat transfer coefficient than the spacing between two fin arrays.
Keywords: Heat transfer enhancement, finned surface, wire mesh diameter, natural convection.
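Since a larger heat transfer coefficient is better, the Taguchi signal-to-noise ratio typically paired with an L9 plan is the larger-the-better form; the sketch below shows that standard formula with purely illustrative replicate values, not data from the paper.

```python
# Larger-the-better Taguchi S/N ratio, computed per experimental run of the L9 plan.
import numpy as np

def sn_larger_is_better(y):
    """y: replicate measurements of the response (here, the heat transfer coefficient)."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

# Example: replicates from one run at a given mesh diameter / fin height / spacing
sn = sn_larger_is_better([12.4, 12.9, 12.1])   # illustrative values only
```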