Search results for: Design evaluation method
9618 The Clinical Use of Ahmed Valve Implant as an Aqueous Shunt for Control of Uveitic Glaucoma in Dogs
Authors: Khaled M. Ali, M. A. Abdel-Hamid, Ayman A. Mostafa
Abstract:
Objective: The safety and efficacy of Ahmed glaucoma valve implantation for the management of uveitis-induced glaucoma were evaluated in five dogs with uncontrollable glaucoma. Materials and Methods: The Ahmed Glaucoma Valve (AGV®; New World Medical, Rancho Cucamonga, CA, USA) is a flow-restrictive, non-obstructive, self-regulating valve system. Preoperative ocular evaluation included direct ophthalmoscopy and measurement of the intraocular pressure (IOP). The implant was examined and primed prior to implantation. The selected site of valve implantation was the superior quadrant between the superior and lateral rectus muscles. A fornix-based incision was made through the conjunctiva and Tenon’s capsule, and a pocket was formed by blunt dissection of Tenon’s capsule from the episclera. The body of the implant was inserted into the pocket with the leading edge of the device around 8-10 mm from the limbus. Results: No post-operative complications were detected in the operated eyes except for a persistent corneal edema occupying the upper half of the cornea in one case. Hyphaema was very mild, was seen in only two cases, and resolved within two days after surgery. Endoscopic evaluation of the operated eyes revealed a normal ocular fundus with clearly visible optic papilla, tapetum and retinal blood vessels. No evidence of hemorrhage, infection, adhesions or retinal abnormalities was detected. Conclusion: The Ahmed glaucoma valve is a safe and effective implant for the treatment of uveitic glaucoma in dogs.
Keywords: Ahmed valve, endoscopy, glaucoma, ocular fundus.
9617 Real-Time Measurement Approach for Tracking the ΔV10 Estimate Value of DC EAF
Authors: Jin-Lung Guan, Jyh-Cherng Gu, Chun-Wei Huang, Hsin-Hung Chang
Abstract:
This investigation develops a revisable method for estimating the equivalent 10 Hz voltage flicker value (ΔV10) of a DC Electric Arc Furnace (EAF). The study also examines three 161 kV DC EAFs by field measurement; the results indicate that the estimated ΔV10 value is significantly smaller than the surveyed value, the key point being that the conventional means of estimating ΔV10 is inappropriate. The main cause is that the assumed Qmax is too small.
Although a DC EAF is normally operated in constant-MVA mode, the reactive power variation in the Main Transformer (MT) is more significant than that in the Furnace Transformer (FT), and a substantial difference exists between the estimated maximum reactive power fluctuation (ΔQmax) and the value surveyed from actual DC EAF operations. This study therefore proposes a revisable method that obtains a more accurate ΔV10 estimate than the conventional method.
Keywords: Voltage Flicker, DC EAF, Estimate Value, ΔV10.
9616 400 kW Six Analytical High Speed Generator Designs for Smart Grid Systems
Authors: A. El Shahat, A. Keyhani, H. El Shewy
Abstract:
High-speed PM generators driven by micro-turbines are widely used in smart grid systems. This paper therefore presents a comparative study of six analytical design cases, classical, optimized and genetic, for 400 kW output power at a tip speed of 200 m/s. The six design trials of High Speed Permanent Magnet Synchronous Generators (HSPMSGs) are: classical sizing; unconstrained optimization minimizing total losses; constrained optimization of total mass with bounded constraints introduced in the problem formulation; a genetic algorithm formulated to obtain maximum efficiency while minimizing machine size; a second genetic formulation seeking minimum mass, with the machine sizing constrained by the non-linear constraint function of machine losses; and, finally, an optimum torque-per-ampere genetic sizing. All results are simulated with MATLAB, the Optimization Toolbox and its Genetic Algorithm. Comparisons of the six analytical design examples are presented, with a study of machine waveforms, THD and rotor losses.
Keywords: High Speed, Micro-Turbines, Optimization, PM Generators, Smart Grid, MATLAB.
9615 Comparison between Higher-Order SVD and Third-order Orthogonal Tensor Product Expansion
Authors: Chiharu Okuma, Jun Murakami, Naoki Yamamoto
Abstract:
In digital signal processing it is important to approximate multi-dimensional data by rank reduction, in which the rank of multi-dimensional data is reduced from higher to lower. For 2-dimensional data, singular value decomposition (SVD) is one of the best-known rank reduction techniques. In addition, the outer product expansion, an extension of SVD, was proposed and implemented for multi-dimensional data, and has been widely applied to image processing and pattern recognition. However, the multi-dimensional outer product expansion has high computational complexity and lacks orthogonality between the expansion terms. We therefore proposed an alternative method, the Third-order Orthogonal Tensor Product Expansion (3-OTPE), which uses the power method instead of a nonlinear optimization method in order to decrease computing time. At the same time, De Lathauwer's group proposed Higher-Order SVD (HOSVD), also developed as an SVD extension for multi-dimensional data. 3-OTPE and HOSVD are similar with respect to the rank reduction of multi-dimensional data: the two methods produce results that are sometimes the same and sometimes slightly different. In this paper, we compare 3-OTPE to HOSVD in terms of calculation accuracy and computing time, and clarify the difference between the two methods.
Keywords: Singular value decomposition (SVD), higher-order SVD (HOSVD), higher-order tensor, outer product expansion, power method.
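As context for the comparison above, 2-D rank reduction by truncated SVD, the baseline the tensor methods extend, can be sketched in a few lines of NumPy; the matrix and target rank below are illustrative, not taken from the paper.

```python
import numpy as np

# Rank reduction of 2-D data by truncated SVD: keep only the r largest
# singular values/vectors (the Eckart-Young best rank-r approximation).
def truncated_svd(A, r):
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :r] * s[:r] @ Vt[:r, :]

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 8))  # exactly rank 2
A2 = truncated_svd(A, 2)
print(np.allclose(A, A2))   # rank-2 data is recovered exactly with r = 2
```

Truncating below the true rank gives the closest approximation of that rank in the Frobenius norm, which is the property the higher-order methods generalize.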
9614 Optimized Facial Features-based Age Classification
Authors: Md. Zahangir Alom, Mei-Lan Piao, Md. Shariful Islam, Nam Kim, Jae-Hyeung Park
Abstract:
The evaluation and measurement of human body dimensions are achieved by physical anthropometry. This research was conducted in view of the importance of anthropometric indices of the face in forensic medicine, surgery, and medical imaging. Its main goal is to optimize facial feature points by establishing a mathematical relationship among facial features, and to use the optimized feature points for age classification. Since the selected facial feature points are located in the areas of the mouth, nose, eyes and eyebrows on facial images, all desired facial feature points can be extracted accurately. In the proposed method, sixteen Euclidean distances are calculated from the eighteen selected facial feature points, both vertically and horizontally, and the mathematical relationships among the horizontal and vertical distances are established. Moreover, it is observed that the facial feature distances follow a constant ratio under age progression: the distances between the specified feature points increase as a human ages from childhood, but the ratio of the distances does not change (d = 1.618). Finally, according to the proposed mathematical relationship, four independent feature distances related to eight feature points are selected from the sixteen distances and eighteen feature points, respectively. These four feature distances are used for age classification with a Support Vector Machine (SVM) trained by the Sequential Minimal Optimization (SMO) algorithm, achieving around 96% accuracy. Experimental results show the proposed system is effective and accurate for age classification.
Keywords: 3D Face Model, Face Anthropometrics, Facial Features Extraction, Feature distances, SVM-SMO.
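The constant-ratio observation is easy to illustrate: under uniform facial growth, Euclidean distances between feature points scale together, so their ratio is age-invariant. The coordinates and growth factor below are hypothetical, for illustration only; they are not the paper's feature points.

```python
import math

# Two hypothetical feature-point sets: a "child" face and the same face
# scaled uniformly by a growth factor (mimicking age progression).
child = {"eye_l": (30.0, 40.0), "eye_r": (70.0, 40.0), "mouth": (50.0, 80.0)}
adult = {k: (1.8 * x, 1.8 * y) for k, (x, y) in child.items()}

def ratio(face):
    d_h = math.dist(face["eye_l"], face["eye_r"])   # horizontal distance
    d_v = math.dist(face["eye_l"], face["mouth"])   # vertical-ish distance
    return d_h / d_v

print(round(ratio(child), 4), round(ratio(adult), 4))  # distances grow, ratio does not
```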
9613 Reduction of Linear Time-Invariant Systems Using Routh-Approximation and PSO
Authors: S. Panda, S. K. Tomar, R. Prasad, C. Ardil
Abstract:
Order reduction of linear time-invariant systems employing two methods, one exploiting the advantages of Routh approximation and the other an evolutionary technique, is presented in this paper. In the Routh approximation method, the denominator of the reduced order model is obtained by Routh approximation, while the numerator is determined using the indirect approach of retaining the time moments and/or Markov parameters of the original system. This method guarantees the stability of the reduced order model whenever the original high order model is stable. In the second method, Particle Swarm Optimization (PSO) is employed to reduce the higher order model. The PSO method is based on minimizing the Integral Squared Error (ISE) between the transient responses of the original higher order model and the reduced order model for a unit step input. Both methods are illustrated through numerical examples.
Keywords: Model Order Reduction, Markov Parameters, Routh Approximation, Particle Swarm Optimization, Integral Squared Error, Steady State Stability.
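The PSO half of the approach can be sketched as follows: a swarm searches for the gain and pole of a first-order reduced model k/(s+a) that minimizes the ISE against the analytic unit-step response of an assumed second-order original, G(s) = 2/((s+1)(s+2)). The plant, search bounds and swarm settings are illustrative, not the paper's.

```python
import numpy as np

# Unit-step response of the original model G(s) = 2/((s+1)(s+2)),
# obtained by partial fractions: y(t) = 1 - 2e^{-t} + e^{-2t}.
t = np.linspace(0.0, 10.0, 400)
dt = t[1] - t[0]
y_full = 1 - 2 * np.exp(-t) + np.exp(-2 * t)

def ise(p):
    """Integral Squared Error of the reduced model k/(s+a) vs. the original."""
    k, a = p
    y_red = (k / a) * (1 - np.exp(-a * t))   # step response of k/(s+a)
    return np.sum((y_full - y_red) ** 2) * dt

# Plain PSO over (k, a) in [0.1, 3]^2 with standard inertia/cognitive/social gains.
rng = np.random.default_rng(1)
pos = rng.uniform(0.1, 3.0, (20, 2))
vel = np.zeros_like(pos)
pbest, pcost = pos.copy(), np.array([ise(p) for p in pos])
gbest = pbest[pcost.argmin()].copy()
for _ in range(60):
    r1, r2 = rng.random((2, 20, 2))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.1, 3.0)
    cost = np.array([ise(p) for p in pos])
    better = cost < pcost
    pbest[better], pcost[better] = pos[better], cost[better]
    gbest = pbest[pcost.argmin()].copy()

k, a = gbest
print(k / a)   # near 1: the reduced model matches the DC gain of the original
```

The same loop applies unchanged to higher-order reduced models; only the parameter vector and the reduced step response change.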
9612 Mimicking Morphogenesis for Robust Behaviour of Cellular Architectures
Authors: David Jones, Richard McWilliam, Alan Purvis
Abstract:
Morphogenesis is the process that underpins the self-organised development and regeneration of biological systems. The ability to mimic morphogenesis in artificial systems has great potential for many engineering applications, including production of biological tissue, design of robust electronic systems and the co-ordination of parallel computing. Previous attempts to mimic these complex dynamics within artificial systems have relied upon evolutionary algorithms, which has limited their size and complexity. This paper presents some insight into the underlying dynamics of morphogenesis, then shows how cellular architectures that converge to complex patterns can be designed without the assistance of evolutionary algorithms.
Keywords: Morphogenesis, regeneration, robustness, convergence, cellular automata.
9611 Multimethod Approach to Research in Interlanguage Pragmatics
Authors: Saad Al-Gahtani, Ghassan H Al Shatter
Abstract:
Debate over the use of particular methods in interlanguage pragmatics has increased recently. Researchers have argued the advantages and disadvantages of each method, whether natural or elicited, and the findings of different studies indicate that the use of a single method may not provide enough data to answer all research questions. The current study investigated the validity of using a multimethod approach in interlanguage pragmatics to understand the development of requests in Arabic as a second language (Arabic L2). To this end, the study adopted two methods belonging to two types of data source: institutional discourse (natural data) and role play (elicited data). Participants were 117 learners of Arabic L2 at the university level, representing four levels (beginner, low-intermediate, high-intermediate, and advanced). Results showed that using two or more methods in interlanguage pragmatics affects the size and nature of the data.
Keywords: Arabic L2, Development of requests, Interlanguage Pragmatics, Multimethod approach.
9610 A Study on the Introduction of Wastewater Reuse Facility in Military Barracks by Cost-Benefit Analysis
Authors: D. G. Jung, J. B. Lim, J. H. Kim, J. J. Kim
Abstract:
The international community focuses on environmental protection and the control of natural energy sources, for global cooperation against climate change and for sustainable growth. This study presents an overview of the water shortage status and the necessity of wastewater reuse facilities in military installations, and compares the economics of their possible introduction by means of cost-benefit analysis. Military characteristics such as the number of users of military barracks and their water use were surveyed following the design principles for each facility type; the application method of the wastewater reuse facility was selected; the feed water, its application and the reuse volume were defined; and the expected benefit was estimated, confirming the feasibility of introducing a wastewater reuse facility by means of cost-benefit analysis.
Keywords: military barracks, wastewater reuse facility, cost-benefit analysis.
9609 Optimum Design of an Absorption Heat Pump Integrated with a Kraft Industry using Genetic Algorithm
Authors: B. Jabbari, N. Tahouni, M. H. Panjeshahi
Abstract:
In this study the integration of an absorption heat pump (AHP) with the concentration section of an industrial pulp and paper process is investigated using pinch technology. The optimum design of the proposed water-lithium bromide AHP is then achieved by minimizing the total annual cost. A comprehensive optimization is carried out by relaxing all stream pressure drops as well as the heat exchanger areas involved in the AHP structure. It is shown that by applying a genetic algorithm optimizer, the total annual cost of the proposed AHP is decreased by 18% compared to that resulting from simulation.
Keywords: Absorption Heat Pump, Genetic Algorithm, Kraft Industry, Pinch Technology.
9608 Biaxial Testing of Fabrics - A Comparison of Various Testing Methodologies
Authors: O.B. Ozipek, E. Bozdag, E. Sunbuloglu, A. Abdullahoglu, E. Belen, E. Celikkanat
Abstract:
In the textile industry, besides conventional textile products, technical textile goods endowed with external functional properties are being developed for the technical textile sector. Such products, especially those produced with weaving technology, are widely preferred in areas such as sports, geology, medicine, automotive, construction and marine applications. These textile products are exposed to various stresses and large deformations under typical conditions of use. Sufficient and reliable data on the mechanical properties of such products cannot be obtained with uniaxial tensile tests, mainly because of the biaxial stress state in service; the preferred approach is therefore biaxial tensile testing and analysis. These tests and analyses are applied to fabrics with different functional features in order to establish the several characteristics and mechanical properties of each product. Planar biaxial tensile tests, cylindrical inflation and bulge tests are generally required for textile products used in the automotive, sailing and sports areas and in the construction industry, to minimize accidents throughout their service life. Airbags, seat belts and car tires in the automotive sector are subject to the same biaxial stress states and can be characterized by the same types of experiments. In this study, the various biaxial test methods reported in the research literature are compared. Results and discussion focus mainly on the design of a biaxial test apparatus able to produce experimental data suitable for developing a finite element model; sample experimental results on a prototype system are presented.
Keywords: Biaxial Stress, Bulge Test, Cylindrical Inflation, Fabric Testing, Planar Tension.
9607 From Type-I to Type-II Fuzzy System Modeling for Diagnosis of Hepatitis
Authors: Shahabeddin Sotudian, M. H. Fazel Zarandi, I. B. Turksen
Abstract:
Hepatitis is one of the most common and dangerous diseases that affects humankind, exposing millions of people to serious health risks every year. Diagnosis of hepatitis has always been a challenge for physicians. This paper presents an effective method for diagnosis of hepatitis based on interval Type-II fuzzy logic. The proposed system comprises three steps: pre-processing (feature selection), Type-I and Type-II fuzzy classification, and system evaluation. KNN-FD feature selection is used as the pre-processing step in order to exclude irrelevant features and to improve classification performance and efficiency in generating the classification model. In the fuzzy classification step, an “indirect approach” is used for fuzzy system modeling, applying the exponential compactness and separation index to determine the number of rules in the fuzzy clustering approach. On this basis, we first propose a Type-I fuzzy system, which achieves an accuracy of approximately 90.9%. Because the diagnosis process faces vagueness and uncertainty in the final decision, the imprecise knowledge is then managed using interval Type-II fuzzy logic. The results obtained show that interval Type-II fuzzy can diagnose hepatitis with an average accuracy of 93.94%, the highest classification accuracy reached thus far. This rate of accuracy demonstrates that the Type-II fuzzy system performs better than Type-I and indicates the higher capability of Type-II fuzzy systems for modeling uncertainty.
Keywords: Hepatitis disease, medical diagnosis, type-I fuzzy logic, type-II fuzzy logic, feature selection.
9606 Stego Machine – Video Steganography using Modified LSB Algorithm
Authors: Mritha Ramalingam
Abstract:
Computer technology and the Internet have made a breakthrough in data communication, opening a whole new way of implementing steganography to ensure secure data transfer. Steganography is the fine art of hiding information: concealing the message in a carrier file enables deniability of the existence of any message at all. This paper designs a stego machine, a steganographic application to hide text data in a computer video file and to retrieve the hidden information. This is done by embedding the text file in the video file, using the Least Significant Bit (LSB) modification method, in such a way that the video does not lose its functionality. The method applies imperceptible modifications and strives for high security through an eavesdropper's inability to detect the hidden information.
Keywords: Data hiding, LSB, Stego machine, Video Steganography.
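The LSB idea itself can be sketched independently of any video container: overwrite only the least significant bit of each carrier byte with one message bit, so each carrier byte changes by at most 1. The byte buffer below stands in for raw frame samples; it is an illustration, not the paper's stego machine.

```python
# Minimal LSB embedding/extraction over a byte buffer (a stand-in for
# raw video frame samples). MSB-first bit order within each message byte.
def embed(carrier: bytearray, message: bytes) -> bytearray:
    bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]
    assert len(bits) <= len(carrier), "carrier too small for message"
    out = bytearray(carrier)
    for i, b in enumerate(bits):
        out[i] = (out[i] & 0xFE) | b          # overwrite only the LSB
    return out

def extract(carrier: bytearray, n_bytes: int) -> bytes:
    bits = [carrier[i] & 1 for i in range(n_bytes * 8)]
    return bytes(sum(bit << (7 - j) for j, bit in enumerate(bits[k * 8:(k + 1) * 8]))
                 for k in range(n_bytes))

frame = bytearray(range(256))                  # fake frame data
stego = embed(frame, b"secret")
print(extract(stego, 6))                       # → b'secret'
```

Because only LSBs change, the numeric distortion per sample is at most 1, which is what makes the modification imperceptible in practice.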
9605 Conceptual Design of the TransAtlantic as a Research Platform for the Development of “Green” Aircraft Technologies
Authors: Victor Maldonado
Abstract:
Recent concern over the growing impact of aviation on climate change has prompted the emergence of a field referred to as Sustainable or “Green” Aviation, dedicated to mitigating the harmful impact of aviation-related CO2 emissions and noise pollution on the environment. In the current paper, a unique “green” business jet aircraft called the TransAtlantic was designed (using analytical formulations common in conceptual design) in order to show the feasibility of transatlantic passenger air travel with an aircraft of less than 10,000 pounds takeoff weight. Such an advance in fuel efficiency will require the development and integration of advanced and emerging aerospace technologies. The TransAtlantic design is intended to serve as a research platform for the development of technologies such as active flow control; recent advances in this field, and how the technology can be integrated on a sub-scale flight demonstrator, are discussed in this paper. Flow control is a technique to modify the behavior of coherent structures in wall-bounded flows (over aerodynamic surfaces such as wings and turbine nozzles), resulting in improved aerodynamic cruise and flight control efficiency. One of the key challenges to its application in manned aircraft is the development of a robust high-momentum actuator that can penetrate the boundary layer flowing over aerodynamic surfaces. These deficiencies may be overcome by the current development and testing of a novel electromagnetic synthetic jet actuator, which replaces piezoelectric materials as the driving diaphragm. The overarching goals of the TransAtlantic research platform include fostering national and international collaboration to demonstrate (in numerical and experimental models) reduced CO2 and noise pollution via the development and integration of technologies and methodologies in design optimization, fluid dynamics, structures/composites, propulsion, and controls.
Keywords: Aircraft Design, Sustainable “Green” Aviation, Active Flow Control, Aerodynamics.
9604 Optimising Data Transmission in Heterogeneous Sensor Networks
Authors: M. Hammerton, J. Trevathan, T. Myers, W. Read
Abstract:
The transfer rate of messages in distributed sensor network applications is a critical factor in a system's performance. The Sensor Abstraction Layer (SAL) is one such system: a middleware integration platform for abstracting sensor-specific technology in order to integrate heterogeneous types of sensors in a network. SAL uses Java Remote Method Invocation (RMI) as its connection method, which has unsatisfactory transfer rates, especially for streaming data. This paper analyses different connection methods to optimize data transmission in SAL by replacing RMI. Our results show that the most promising Java-based connections were frameworks for Java New Input/Output (NIO), including Apache MINA, JBoss Netty, and xSocket. A test environment was implemented to evaluate each framework in terms of transfer rate, resource usage, and scalability. Test results showed JBoss Netty to be the most suitable connection method for improving data transmission in SAL, as it provides a performance enhancement of 68%.
Keywords: Wireless sensor networks, remote method invocation, transmission time.
9603 Data-driven ASIC for Multichannel Sensors
Authors: Eduard Atkin, Alexander Klyuev, Vitaly Shumikhin
Abstract:
An approach to a multichannel ASIC for capacitive (up to 30 pF) sensors, and its implementation in a 0.18 µm CMOS process, are described in the paper. The main design aim was to study an analog data-driven architecture. The design implements an analog derandomizing function with a 128-to-16 structure: the ASIC provides a parallel front-end readout of 128 input analog sensor signals and, after fast commutation with appropriate arbitration logic, processes them through 16 output chains, including analog-to-digital conversion. The principal feature of the ASIC is a low power consumption, within 2 mW/channel (including a 9-bit 20 MS/s ADC), at a maximum average channel hit rate of no less than 150 kHz.
Keywords: Data-driven architecture, derandomizer, multichannel sensor readout
9602 PeliGRIFF: A Parallel DEM-DLM/FD Method for DNS of Particulate Flows with Collisions
Authors: Anthony Wachs, Guillaume Vinay, Gilles Ferrer, Jacques Kouakou, Calin Dan, Laurence Girolami
Abstract:
An original Direct Numerical Simulation (DNS) method to tackle the problem of particulate flows at moderate to high concentration and finite Reynolds number is presented. Our method is built on the framework established by Glowinski and his coworkers [1], in the sense that we use their Distributed Lagrange Multiplier/Fictitious Domain (DLM/FD) formulation and their operator-splitting idea, but differs in the treatment of particle collisions. The novelty of our contribution lies in replacing the simple artificial repulsive-force collision model usually employed in the literature by an efficient Discrete Element Method (DEM) granular solver. The use of our DEM solver enables us to consider particles of arbitrary (at least convex) shape and to account for actual contacts, in the sense that particles actually touch each other, in contrast with the simple repulsive-force collision model. We recently upgraded our serial code, GRIFF [2], to full MPI capability. Our new code, PeliGRIFF, is developed under the framework of the full MPI open source platform PELICANS [3]. The new MPI capabilities of PeliGRIFF open new perspectives in the study of particulate flows and significantly increase the number of particles that can be considered in a full DNS approach: O(100000) in 2D and O(10000) in 3D. Results on the 2D/3D sedimentation/fluidization of isometric polygonal/polyhedral particles with collisions are presented.
Keywords: Particulate flow, distributed lagrange multiplier/fictitious domain method, discrete element method, polygonal shape, sedimentation, distributed computing, MPI
9601 Control of a Cart-Ball System Using State-Feedback Controller
Authors: M. Shakir Saat, M. Noh Ahmad, Amat Amir
Abstract:
A cart-ball system is a challenging system from the control engineering point of view, due to the nonlinear, multivariable, and non-minimum-phase behavior it exhibits. This paper is concerned with the problem of modeling and control of such a system. The objective of the control strategy is to place the cart at a desired position while balancing the ball on top of the arc-shaped track fixed on the cart. A State-Feedback Controller (SFC) designed by a pole-placement method is used to control the system. First, the mathematical model of the cart-ball system is developed in state-space form; the model is then linearized in order to design the SFC. An integral control strategy is added to control the cart position. Simulation work is then performed using MATLAB/SIMULINK software in order to study the performance of the SFC when applied to the system.
Keywords: Cart-Ball System, Integral Control, Pole-Placement Method, State-Feedback Controller.
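A pole-placement design of this kind can be sketched with Ackermann's formula. Since the cart-ball model is not reproduced in the abstract, a generic double integrator (cart position/velocity states) stands in, and the pole locations are illustrative assumptions.

```python
import numpy as np

def acker(A, B, poles):
    """Ackermann's formula: gain K so that eig(A - B K) equals `poles` (single input)."""
    n = A.shape[0]
    # controllability matrix [B, AB, ..., A^{n-1}B]
    ctrb = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])
    coeffs = np.poly(poles)   # desired characteristic polynomial, highest power first
    phiA = sum(c * np.linalg.matrix_power(A, n - i) for i, c in enumerate(coeffs))
    e_last = np.zeros((1, n)); e_last[0, -1] = 1.0
    return e_last @ np.linalg.inv(ctrb) @ phiA

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double-integrator stand-in
B = np.array([[0.0], [1.0]])
K = acker(A, B, [-2.0, -3.0])            # assumed closed-loop pole locations
print(np.sort(np.linalg.eigvals(A - B @ K).real))  # poles placed at -3 and -2
```

The control law u = -Kx then gives the closed-loop dynamics x' = (A - BK)x with the chosen poles; the integral action described in the abstract would augment the state with the tracking-error integral before applying the same formula.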
9600 Flexible Sensor Array with Programmable Measurement System
Authors: Jung-Chuan Chou, Wei-Chuan Chen, Chien-Cheng Chen
Abstract:
This study concerns pH solution detection using a 2 × 4 flexible sensor array based on a plastic polyethylene terephthalate (PET) substrate coated with a conductive layer and a ruthenium dioxide (RuO2) sensitive membrane, fabricated by screen-printing and RF sputtering. For data analysis, we also prepared a dynamic measurement system for acquiring the response voltage and analyzing the characteristics of the working electrodes (WEs), such as sensitivity and linearity. An array measurement system was designed to acquire the original signal from the sensor array based on digital signal processing (DSP); the DSP stage converts the unstable acquired data to a direct current (DC) output using a digital filter. With the measurement and analysis system designed in our laboratory, the sensor array achieved a satisfactory yield of 62.5%.
Keywords: Flexible sensor array, PET, RuO2, dynamic measurement, data analysis.
9599 Hybrid Weighted Multiple Attribute Decision Making Handover Method for Heterogeneous Networks
Authors: Mohanad Alhabo, Li Zhang, Naveed Nawaz
Abstract:
Small cell deployment in 5G networks is a promising technology to enhance capacity and coverage. However, unplanned deployment may cause high interference levels and a high number of unnecessary handovers, which in turn result in increased signalling overhead. To guarantee service continuity, minimize unnecessary handovers and reduce signalling overhead in heterogeneous networks, it is essential to properly model the handover decision problem. In this paper, we model the handover decision problem using a Multiple Attribute Decision Making (MADM) method, specifically the Technique for Order Preference by Similarity to an Ideal Solution (TOPSIS), and propose a hybrid TOPSIS method to control handover in heterogeneous networks. The proposed method adopts a hybrid weighting policy combining entropy and standard-deviation weights, with a hybrid weighting control parameter introduced to balance the impact of the two weightings on the network selection process and the overall performance. Our proposed method shows better performance, in terms of the number of frequent handovers and the mean user throughput, than the existing methods.
Keywords: Handover, HetNets, interference, MADM, small cells, TOPSIS, weight.
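The hybrid-weighted TOPSIS step can be sketched as below. The candidate-cell attribute matrix, the benefit/cost split and the balance parameter alpha are illustrative assumptions; only the mechanics (entropy and standard-deviation weights blended by a control parameter, then TOPSIS closeness ranking) follow the abstract.

```python
import numpy as np

def hybrid_topsis(X, benefit, alpha=0.5):
    """Rank alternatives (rows of X) by TOPSIS with hybrid entropy/std-dev weights."""
    R = X / np.linalg.norm(X, axis=0)                 # vector-normalized matrix
    P = X / X.sum(axis=0)
    ent = -(P * np.log(P)).sum(axis=0) / np.log(len(X))
    w_ent = (1 - ent) / (1 - ent).sum()               # entropy weights
    w_std = X.std(axis=0) / X.std(axis=0).sum()       # standard-deviation weights
    w = alpha * w_ent + (1 - alpha) * w_std           # hybrid weighting
    V = R * w
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)                    # closeness: higher is better

# 3 candidate cells x 3 attributes: SINR (benefit), load (cost), delay (cost)
X = np.array([[20.0, 0.3, 10.0],
              [10.0, 0.1, 30.0],
              [30.0, 0.8, 20.0]])
benefit = np.array([True, False, False])
scores = hybrid_topsis(X, benefit)
print(scores.argmax())   # the first cell offers the best trade-off in this toy example
```

Sweeping alpha between 0 and 1 shifts the ranking from purely dispersion-driven weights toward purely entropy-driven weights, which is the balancing role the control parameter plays.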
9598 Effect of Inclusions on the Shape and Size of Crack Tip Plastic Zones by Element Free Galerkin Method
Authors: A. Jameel, G. A. Harmain, Y. Anand, J. H. Masoodi, F. A. Najar
Abstract:
The present study investigates the effect of inclusions on the shape and size of crack tip plastic zones in engineering materials subjected to static loads by employing the element-free Galerkin method (EFGM). The modeling of the discontinuities produced by cracks and inclusions becomes independent of the grid chosen for analysis. The standard displacement approximation is modified by adding enrichment functions, which introduce the effects of the different discontinuities into the formulation. The level set method has been used to represent the different discontinuities present in the domain. The effect of inclusions on the extent of crack tip plastic zones is investigated by solving several numerical problems with the EFGM.
Keywords: EFGM, stress intensity factors, crack tip plastic zones, inclusions.
9597 Determining Threshold Levels of Burst-by-Burst AQAM/CDMA in Slow Rayleigh Fading Environments
Authors: F. Nejadebrahimi, M. ArdebiliPour
Abstract:
In this paper, we determine the threshold levels of adaptive modulation in a burst-by-burst CDMA system by a suboptimum method, which attempts to increase the average bits-per-symbol (BPS) rate of the transceiver by switching between different modulation modes as channel conditions vary. In this method, we choose the minimum values of average bit error rate (BER) and the maximum values of average BPS over different values of average channel signal-to-noise ratio (SNR), and then calculate the corresponding threshold levels, so that when the instantaneous SNR increases, a higher-order modulation is employed to increase throughput, and when the instantaneous SNR decreases, a lower-order modulation is employed to improve the BER. In the transmission step, by comparing the estimate obtained from pilot symbols against the set of suboptimum threshold levels, the system chooses one of the states no transmission, BPSK, 4QAM or square 16QAM for modulating the data. The channel considered in this paper is slow Rayleigh fading.
Keywords: AQAM, burst, BER, BPS, CDMA, threshold.
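The switching rule described above reduces to a threshold comparison at each burst: the estimated instantaneous SNR is checked against a set of mode boundaries. The threshold values in dB below are illustrative placeholders, not the suboptimum set derived in the paper.

```python
# Burst-by-burst adaptive modulation: pick a mode (name, bits per symbol)
# by comparing the estimated instantaneous SNR against switching thresholds.
MODES = [(None, 0), ("BPSK", 1), ("4QAM", 2), ("16QAM", 4)]  # no-Tx up to 16QAM
THRESHOLDS = [3.0, 8.0, 15.0]    # dB boundaries between successive modes (assumed)

def select_mode(snr_db):
    idx = sum(snr_db >= th for th in THRESHOLDS)   # count thresholds cleared
    return MODES[idx]

print(select_mode(1.0))   # → (None, 0): channel too poor, no transmission
print(select_mode(10.0))  # → ('4QAM', 2)
```

Raising a threshold trades throughput for BER, which is exactly the tension the suboptimum threshold search in the paper resolves.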
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1533
9596 A Method to Calculate Frenet Apparatus of W-Curves in the Euclidean 6-Space
Authors: Süha Yılmaz, Melih Turgut
Abstract:
In this work, a regular unit speed curve in six-dimensional Euclidean space whose Frenet curvatures are all constant (a W-curve) is considered. Thereafter, a method to calculate the Frenet apparatus of this curve is presented.
Keywords: Classical Differential Geometry, Euclidean 6-space, Frenet Apparatus of the curves.
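For context, the Frenet equations in E^6 that any such apparatus must satisfy take the standard form below (this is the general statement, not the authors' specific construction), with frame {T, N_1, ..., N_5} and curvatures kappa_1, ..., kappa_5, all constant for a W-curve:

```latex
T' = \kappa_1 N_1, \qquad
N_1' = -\kappa_1 T + \kappa_2 N_2, \qquad
N_i' = -\kappa_i N_{i-1} + \kappa_{i+1} N_{i+1} \quad (2 \le i \le 4), \qquad
N_5' = -\kappa_5 N_4 .
```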
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1288
9595 Ultimate Load Capacity of the Cable Tower of Liede Bridge
Authors: Weifeng Wang, Xilong Chen, Xianwei Zeng
Abstract:
The cable tower of Liede Bridge is a double-column, curved-lever, arched-beam portal frame structure. Being novel and unique, it is considerably more complex than traditional cable towers. This paper analyzes the ultimate load capacity of the cable tower using finite element calculations and model tests, which indicate that the constitutive relations applied here give a good simulation of the actual failure process of prestressed reinforced concrete. Under vertical load, horizontal load, and overload tests, the stepped loading of the tower model showed a linear load-response relationship, and the test data were highly repeatable. All of this suggests that the cable tower has good bearing capacity, a rational design, and a high reserve capacity.
Keywords: Cable tower of Liede Bridge, ultimate load capacity, model test, nonlinear finite element method.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2118
9594 Blueprinting of a Normalized Supply Chain Processes: Results in Implementing Normalized Software Systems
Authors: Bassam Istanbouli
Abstract:
With technology evolving every day and with the increase in global competition, industries are always under pressure to be the best. They need to provide good-quality products at competitive prices, when and how the customer wants them. To achieve this level of service, products and their respective supply chain processes need to be flexible and evolvable; otherwise, changes will be extremely expensive, slow, and accompanied by many combinatorial effects. These combinatorial effects impact the whole organizational structure from management, financial, documentation, and logistics perspectives, and especially from the perspective of the Enterprise Resource Planning (ERP) information system. By applying the normalized systems concept/theory to segments of the supply chain, we believe these effects can be minimized, especially at the time of launching an organization-wide software project. The purpose of this paper is to point out that if an organization wants to develop software from scratch or implement an existing ERP system for its business needs, and if its business processes are normalized and modular, then the result will most probably be a normalized and modular software system that can be easily modified as the business evolves. Another important goal of this paper is to increase awareness of business process design in software implementation projects: if the blueprints created are normalized, then the software developers and configurators can map those modular blueprints into modular software. This paper only prepares the ground for further studies; the above concept will be supported by going through the steps of developing, configuring, and/or implementing a software system for an organization using two methods: the Software Development Life Cycle (SDLC) method and the Accelerated SAP (ASAP) implementation method.
Both methods start with the customer requirements, then blueprint the business processes, and finally map those processes into a software system. Since these requirements and processes are the starting point of the implementation, normalizing them results in normalized software.
Keywords: Blueprint, ERP, SDLC, Modular.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 397
9593 Dynamic Action Induced By Walking Pedestrian
Authors: J. Kala, V. Salajka, P. Hradil
Abstract:
The main focus of this paper is on human-induced forces. Almost all existing force models for this type of load (defined either in the time or frequency domain) are developed from the assumption of perfect periodicity of the force and are based on force measurements conducted on rigid (i.e., high-frequency) surfaces. To verify the conclusions of different authors, vertical pressure measurements during walking were performed using pressure gauges in various configurations. The obtained forces are analyzed using the Fourier transform. This load is often decisive in the design of footbridges. Design criteria and load models proposed by widely used standards and by other researchers are introduced and compared.
Keywords: Pedestrian action, Experimental analysis, Fourier series, serviceability, cycle loading.
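The periodic force models discussed above are conventionally written as a truncated Fourier series, F(t) = G(1 + Σ α_i sin(2πif t − φ_i)). A minimal Python sketch follows; the pedestrian weight, pacing rate, dynamic load factors α_i, and phase lags φ_i below are illustrative values, not the measurements reported in the paper:

```python
import numpy as np

# Truncated Fourier-series model of the vertical walking force:
# F(t) = G * (1 + sum_i a_i * sin(2*pi*i*f*t - phi_i)),
# with G the static weight, f the pacing rate, a_i the dynamic load
# factors, and phi_i the phase lags (all placeholder values here).
def pedestrian_force(t, weight_n=700.0, pacing_hz=2.0,
                     dlf=(0.4, 0.1, 0.1), phase=(0.0, 0.0, 0.0)):
    harmonics = sum(a * np.sin(2.0 * np.pi * (i + 1) * pacing_hz * t - p)
                    for i, (a, p) in enumerate(zip(dlf, phase)))
    return weight_n * (1.0 + harmonics)

t = np.linspace(0.0, 1.0, 1000)
force = pedestrian_force(t)   # oscillates around the static weight
```

Fitting the α_i and φ_i to measured pressure records, as done in the paper via the Fourier transform, is what turns this template into an actual load model.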
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 2452
9592 Road Extraction Using Stationary Wavelet Transform
Authors: Somkait Udomhunsakul
Abstract:
In this paper, a novel road extraction method using the Stationary Wavelet Transform is proposed. To detect road features in color aerial satellite imagery, Mexican hat wavelet filters are applied through the Stationary Wavelet Transform in a multiresolution, multi-scale sense, and the products of the wavelet coefficients at different scales are formed to locate and identify road features at several scales. In addition, the shifting of road feature locations across multiple scales is considered, for robust road extraction with asymmetric road feature profiles. The experimental results show that the proposed method is a useful technique that can form the basis of road feature extraction. The method is also general and can be applied to other features in imagery.
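The cross-scale product idea can be sketched as follows. Here the Mexican hat filter is approximated by SciPy's negated Laplacian of Gaussian rather than the authors' stationary-wavelet implementation; elongated features such as roads respond at every scale, so multiplying the responses across scales reinforces them relative to scale-specific clutter:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

# Multiply Mexican-hat-like responses (negated LoG) across scales so
# that features persisting at all scales are reinforced.
def multiscale_product(image, sigmas=(1.0, 2.0, 4.0)):
    product = np.ones_like(image, dtype=float)
    for sigma in sigmas:
        product *= -gaussian_laplace(image.astype(float), sigma=sigma)
    return product

# Toy example: a one-pixel-wide bright "road" across a dark image.
img = np.zeros((16, 16))
img[8, :] = 1.0
response = multiscale_product(img)   # strongest along the bright line
```

The paper's use of the undecimated (stationary) transform matters here: unlike the decimated DWT, every scale stays at full resolution, so the per-pixel product is well defined.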
Keywords: Road extraction, Multiresolution, Stationary Wavelet Transform, Multi-scale analysis.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1879
9591 Face Recognition Using Principal Component Analysis, K-Means Clustering, and Convolutional Neural Network
Authors: Zukisa Nante, Wang Zenghui
Abstract:
Face recognition is the problem of identifying or recognizing individuals in an image. This paper investigates a possible method to solve this problem. The method proposes an amalgamation of Principal Component Analysis (PCA), K-Means clustering, and a Convolutional Neural Network (CNN) for a face recognition system. It is trained and evaluated using the ORL dataset, which consists of 400 face images in 40 classes with 10 images per class. Firstly, PCA enables the use of a smaller network, which reduces the training time of the CNN; redundancy is removed while the variance is preserved with a smaller number of coefficients. Secondly, the K-Means clustering model is trained on the PCA-compressed data, which selects K-Means cluster centers with better characteristics. Lastly, the K-Means features serve as initial values for the CNN and act as its input data. The accuracy and performance of the proposed method were tested against other face recognition (FR) techniques, namely PCA, Support Vector Machine (SVM), and K-Nearest Neighbour (kNN). During experimentation, our suggested method achieved the highest performance after 90 epochs: 99% accuracy, 99% F1-score, 99% precision, and 99% recall in 463.934 seconds. It outperformed PCA, which obtained 97%, and kNN, with 84%, in the conducted experiments. Therefore, this method proved efficient in identifying faces in images.
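The front end of the pipeline (PCA compression followed by K-Means) can be sketched with scikit-learn. The data below are random stand-ins for the ORL faces (which are 92x112 pixels), and the component and cluster counts are illustrative, not the paper's settings:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# PCA compresses each flattened face so the downstream CNN can be
# smaller; K-Means then groups the compressed vectors, and the cluster
# features would seed the CNN input.
rng = np.random.default_rng(0)
faces = rng.normal(size=(40, 92 * 112))             # 40 stand-in "images"
compressed = PCA(n_components=20).fit_transform(faces)
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(compressed)
features = kmeans.cluster_centers_[kmeans.labels_]  # per-sample cluster features
```

The CNN stage itself (trained on these features) is omitted here, since the paper does not specify its architecture beyond the use of ReLU activations.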
Keywords: Face recognition, Principal Component Analysis, PCA, Convolutional Neural Network, CNN, Rectified Linear Unit, ReLU, feature extraction.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 506
9590 A Perceptual Image Coding method of High Compression Rate
Authors: Fahmi Kammoun, Mohamed Salim Bouhlel
Abstract:
In the framework of image compression by wavelet transforms, we propose a perceptual method that incorporates Human Visual System (HVS) characteristics in the quantization stage. Indeed, human eyes do not have equal sensitivity across the frequency bandwidth. Therefore, the clarity of the reconstructed images can be improved by weighting the quantization according to the Contrast Sensitivity Function (CSF), and the visual artifacts at low bit rates are minimized. To evaluate our method, we use the Peak Signal-to-Noise Ratio (PSNR) and a new evaluation criterion that takes visual criteria into account. The experimental results illustrate that our technique improves image quality at the same compression ratio.
Keywords: Contrast Sensitivity Function, Human Visual System, Image compression, Wavelet transforms.
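A minimal sketch of CSF-weighted quantization follows. The Mannos-Sakrison curve is one common analytic CSF; the paper does not say which CSF model it uses, so this is a stand-in for illustration:

```python
import numpy as np

# Mannos-Sakrison contrast sensitivity function: peaks at mid spatial
# frequencies (a few cycles/degree) and falls off at high frequencies.
def csf(f_cycles_per_degree):
    a = 0.114 * f_cycles_per_degree
    return 2.6 * (0.0192 + a) * np.exp(-a ** 1.1)

# Perceptual quantization: a wavelet subband the eye is less sensitive
# to gets a coarser step (base_step / CSF), spending fewer bits where
# the resulting error is least visible.
def quantize_subband(coeffs, subband_freq_cpd, base_step=1.0):
    step = base_step / max(float(csf(subband_freq_cpd)), 1e-6)
    return np.round(coeffs / step) * step
```

Each wavelet subband is associated with a nominal spatial frequency (depending on decomposition level and viewing conditions), so one CSF weight per subband suffices.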
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1876
9589 Electrostatic and Dielectric Measurements for Hair Building Fibers from DC to Microwave Frequencies
Authors: K. Y. You, Y. L. Then
Abstract:
In recent years, hair building fibers have become popular: they are an effective aid for people who suffer from hair loss or sparse hair, since hair building fibers can rapidly create the natural look of simulated hair. On the market there are many hair fiber brands designed to form an intense bond with hair strands and make the hair appear more voluminous instantly. However, each of these products has its own set of properties. Thus, in this report, some measurement techniques are proposed to identify these products. Up to five different brands of hair fiber are tested. The electrostatic and dielectric properties of the hair fibers are macroscopically characterized using purpose-designed DC and high-frequency microwave techniques. In addition, the hair fibers are microscopically analyzed by magnifying their structures with a scanning electron microscope (SEM). From the SEM photos, the uniformity of shape and the broken rate of the hair fibers in the different bulk samples can be compared.
Keywords: Hair fiber, electrostatic, dielectric properties, broken rate, microwave techniques.
Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 3872