Search results for: P-Q method
14777 Analysis of Translational Ship Oscillations in a Realistic Environment
Authors: Chen Zhang, Bernhard Schwarz-Röhr, Alexander Härting
Abstract:
To acquire accurate ship motions at the center of gravity, a single low-cost inertial sensor is mounted on board to measure the ship's oscillating motions. The three-axis accelerations and three-axis rotational rates provided by the sensor are used as observations. The mathematical model for processing the observation data includes determination of the distance vector between the sensor and the center of gravity in the x, y, and z directions. After setting up the transfer matrix from the sensor's own coordinate system to the ship's body frame, an extended Kalman filter is applied to handle the nonlinearities between the ship motion in the body frame and the observations in the sensor's frame. As a side effect, the method suppresses sensor noise and other unwanted errors. The results include not only roll and pitch, but also linear motions, in particular heave and surge at the center of gravity. For testing, we resort to measurements recorded on a small vessel in a well-defined sea state. With response amplitude operators computed numerically by commercial software (Seaway), motion characteristics are estimated. These agree well with the measurements after processing with the suggested method.
Keywords: extended Kalman filter, nonlinear estimation, sea trial, ship motion estimation
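The filtering step described above can be sketched as a generic extended Kalman predict/update cycle. This is a minimal textbook illustration, not the authors' implementation: the state, process and measurement functions, and noise covariances below are placeholders.

```python
import numpy as np

def ekf_step(x, P, z, f, F_jac, h, H_jac, Q, R):
    """One predict/update cycle of an extended Kalman filter.

    x, P         : prior state estimate (n,) and covariance (n, n)
    z            : new measurement vector
    f, h         : (possibly nonlinear) process and measurement functions
    F_jac, H_jac : functions returning their Jacobians at a given state
    Q, R         : process and measurement noise covariances
    """
    # Predict: propagate the state and linearize the dynamics
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q

    # Update: correct with the measurement, linearizing h at the prediction
    H = H_jac(x_pred)
    y = z - h(x_pred)                      # innovation
    S = H @ P_pred @ H.T + R               # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Demo: with linear f and h the cycle reduces to an ordinary Kalman
# filter; repeated measurements of 5.0 pull the estimate toward 5.0.
x_est, P_est = np.array([0.0]), np.eye(1)
f = lambda s: s
h = lambda s: s
jac = lambda s: np.eye(1)
for _ in range(200):
    x_est, P_est = ekf_step(x_est, P_est, np.array([5.0]),
                            f, jac, h, jac, 1e-4 * np.eye(1), 0.1 * np.eye(1))
```

In the paper's setting, f would encode the ship motion model in the body frame and h the transfer from the body frame to the sensor's frame.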
Procedia PDF Downloads 523
14776 Robust Inference with a Skew T Distribution
Authors: M. Qamarul Islam, Ergun Dogan, Mehmet Yazici
Abstract:
There is a growing body of evidence that non-normal data are more prevalent in nature than normal data. Examples can be quoted from, but are not restricted to, the areas of economics, finance, and actuarial science. The non-normality considered here is expressed in terms of the fat-tailedness and asymmetry of the relevant distribution. In this study, a skew t distribution that can be used to model data exhibiting inherent non-normal behavior is considered. This distribution has tails fatter than a normal distribution and also exhibits skewness. Although maximum likelihood estimates can be obtained by iteratively solving the likelihood equations, which are non-linear in form, this can be problematic in terms of convergence and in many other respects as well. Therefore, it is preferred to use the method of modified maximum likelihood, in which the estimates are derived by expressing the intractable non-linear likelihood equations in terms of standardized ordered variates and replacing the intractable terms by their linear approximations obtained from the first two terms of a Taylor series expansion about the quantiles of the distribution. These estimates, called modified maximum likelihood estimates, are obtained in closed form. Hence, they are easy to compute and to manipulate analytically. In fact, the modified maximum likelihood estimates are asymptotically equivalent to maximum likelihood estimates. Even in small samples, the modified maximum likelihood estimates are found to be approximately the same as the maximum likelihood estimates obtained iteratively. It is shown in this study that the modified maximum likelihood estimates are not only unbiased but substantially more efficient than the commonly used moment estimates or least squares estimates, which are known to be biased and inefficient in such cases.
Furthermore, in conventional regression analysis it is assumed that the error terms are normally distributed, and hence the well-known least squares method is considered a suitable and preferred method for making the relevant statistical inferences. However, a number of empirical studies have shown that non-normal errors are more prevalent. Even transforming and/or filtering techniques may not produce normally distributed residuals. Here, a study is done for multiple linear regression models with random errors having a non-normal pattern. Through an extensive simulation, it is shown that the modified maximum likelihood estimates of the regression parameters are plausibly robust to the distributional assumptions and to various data anomalies, as compared to the widely used least squares estimates. Relevant tests of hypothesis are developed and explored for desirable properties in terms of their size and power. The tests based upon modified maximum likelihood estimates are found to be substantially more powerful than the tests based upon least squares estimates. Several examples are provided from the areas of economics and finance where such distributions are interpretable in terms of the efficient market hypothesis with respect to asset pricing, portfolio selection, risk measurement, capital allocation, etc.
Keywords: least squares estimates, linear regression, maximum likelihood estimates, modified maximum likelihood method, non-normality, robustness
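The robustness argument can be illustrated numerically. The sketch below is not the authors' closed-form modified-maximum-likelihood estimator; it fits the same kind of heavy-tailed t-likelihood by iteratively reweighted least squares (the standard EM weights w = (nu+1)/(nu + r^2/s^2)) and compares the fitted slope against ordinary least squares on data with one gross outlier. All data values are invented for illustration.

```python
def ols(x, y, w=None):
    """(Weighted) least-squares line fit: returns intercept a and slope b."""
    if w is None:
        w = [1.0] * len(x)
    sw = sum(w)
    sx = sum(wi * xi for wi, xi in zip(w, x))
    sy = sum(wi * yi for wi, yi in zip(w, y))
    sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    b = (sw * sxy - sx * sy) / (sw * sxx - sx * sx)
    a = (sy - b * sx) / sw
    return a, b

def t_regression_irls(x, y, nu=3.0, iters=100):
    """Fit y = a + b*x under t-distributed errors via EM / IRLS."""
    n = len(x)
    a, b = ols(x, y)          # start from the OLS fit
    w = [1.0] * n
    for _ in range(iters):
        r = [yi - a - b * xi for xi, yi in zip(x, y)]
        s2 = sum(wi * ri * ri for wi, ri in zip(w, r)) / n or 1e-12
        # Large residuals get small weights under a heavy-tailed likelihood
        w = [(nu + 1.0) / (nu + ri * ri / s2) for ri in r]
        a, b = ols(x, y, w)
    return a, b

# Clean line y = 1 + 2x with one gross outlier at the last point
xs = list(range(20))
ys = [1.0 + 2.0 * xi for xi in xs]
ys[-1] += 100.0
a_ols, b_ols = ols(xs, ys)
a_rob, b_rob = t_regression_irls(xs, ys)
```

The robust fit recovers the true slope of 2 almost exactly, while the least squares slope is pulled far off by the single outlier, mirroring the inefficiency of least squares reported above.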
Procedia PDF Downloads 397
14775 Designing for Sustainable Public Housing from Property Management and Financial Feasibility Perspectives
Authors: Kung-Jen Tu
Abstract:
Many public housing properties developed by local governments in Taiwan in the 1980s have deteriorated severely as these rental apartment buildings aged. The lack of building maintainability considerations during the project design phase, as well as insufficient maintenance funds, has made it difficult and costly for local governments to keep public housing properties in good shape. In order to assist local governments in delivering sustainable public housing, this paper presents a design evaluation method to be used to evaluate proposed design schemes from property management and financial feasibility perspectives during the design phase of public housing projects. The design evaluation results, i.e., the property management and financial implications of a design scheme that could arise later during the building operation and maintenance phase, are reported to the client (the government) and the design schemes revised accordingly. It is proposed that the design evaluation be performed from two main perspectives: (1) the operation and property management perspective, in which three criteria (spatial appropriateness, people and vehicle circulation and control, and property management working spaces) are used to evaluate the 'operation and PM effectiveness' of a design scheme; (2) the financial feasibility perspective, in which four types of financial analyses are performed to assess the long-term financial feasibility of a design scheme: operational and rental income analysis, management fund analysis, regular operational and property management service expense analysis, and capital expense analysis. The ongoing Chung-Li Public Housing Project developed by the Taoyuan City Government is used as a case to demonstrate how the presented design evaluation method is implemented.
The results of the property management assessment, as well as the annual operational and capital expenses of a proposed design scheme, are presented.
Keywords: design evaluation method, management fund, operational and capital expenses, rental apartment buildings
Procedia PDF Downloads 308
14774 The Trajectory of the Ball in Football Game
Authors: Mahdi Motahari, Mojtaba Farzaneh, Ebrahim Sepidbar
Abstract:
Tracking of moving and flying targets is one of the most important issues in image processing. Estimating the trajectory of a desired object on short-term and long-term scales is even more important than the tracking itself. In this paper, a new way of identifying and estimating the future trajectory of a moving ball on a long-term scale is presented, using a combination of image processing algorithms, including noise removal and image segmentation, a Kalman filter algorithm for estimating the trajectory of the ball in a football game on a short-term scale, and an intelligent adaptive neuro-fuzzy algorithm based on the time series of traversed distance. The proposed system attains more than 96% identification accuracy using the aforesaid methods and algorithms together with a video database. Although the present method has high precision, it is time consuming. A comparison with other methods confirms its accuracy and efficiency.
Keywords: tracking, signal processing, moving and flying targets, artificial intelligent systems, trajectory estimation, Kalman filter
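The short-term prediction step can be illustrated with a constant-velocity Kalman filter over 2D ball detections. This is a generic textbook sketch, not the authors' pipeline: the motion model, noise levels, time step, and the synthetic detections are all assumptions.

```python
import numpy as np

dt = 1.0  # assumed frame interval
# State [x, y, vx, vy] with a constant-velocity motion model
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # only position is measured
Q = 1e-4 * np.eye(4)                        # assumed process noise
R = 1e-2 * np.eye(2)                        # assumed detection noise

def track(measurements):
    """Run the filter over a sequence of (x, y) ball detections."""
    x = np.zeros(4)
    P = np.eye(4)
    for z in measurements:
        # Predict the next state from the motion model
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the new detection
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.asarray(z, dtype=float) - H @ x)
        P = (np.eye(4) - K @ H) @ P
    return x  # final state; x[2:] is the estimated velocity

# Synthetic ball moving 2 px/frame in x and 1 px/frame in y
zs = [(2.0 * t, 1.0 * t) for t in range(30)]
state = track(zs)
```

Propagating the final state through F a few more times gives the short-term trajectory forecast; the long-term forecast in the paper is handled by the neuro-fuzzy model instead.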
Procedia PDF Downloads 461
14773 Effect of Precursor Aging Time on the Photocatalytic Activity of ZnO Thin Films
Authors: N. Kaneva, A. Bojinova, K. Papazova
Abstract:
Thin ZnO films are deposited on glass substrates via the sol-gel method and dip-coating. The films are prepared from zinc acetate dihydrate as a starting reagent. The as-prepared ZnO sol is then aged for different periods (0, 1, 3, 5, 10, 15, and 30 days), and nanocrystalline thin films are deposited from the various sols. The effect of the ZnO sol aging time on the structural and photocatalytic properties of the films is studied. The film surface is examined by scanning electron microscopy. The effect of the aging time of the starting solution is studied with respect to the photocatalytic degradation of Reactive Black 5 (RB5) by UV-vis spectroscopy. The experiments are conducted under UV-light illumination and in complete darkness. The variation of the absorption spectra shows the degradation of RB5 dissolved in water as a result of the reaction occurring on the surface of the films, promoted by UV irradiation. The initial concentration of the dye (5, 10, and 20 ppm) and the aging time are varied during the experiments. The results show that increasing the aging time of the starting solution generally promotes photocatalytic activity. The thin films obtained from the ZnO sol aged for 30 days show the best photocatalytic degradation of the dye (97.22%) in comparison with the freshly prepared ones (65.92%). The samples and photocatalytic experimental results are reproducible. Nevertheless, all films exhibit substantial activity both under UV light and in darkness, which is promising for the development of new ZnO photocatalysts by the sol-gel method.
Keywords: ZnO thin films, sol-gel, photocatalysis, aging time
Procedia PDF Downloads 382
14772 Screening Methodology for Seismic Risk Assessment of Aging Structures in Oil and Gas Plants
Authors: Mohammad Nazri Mustafa, Pedram Hatami Abdullah, M. Fakhrur Razi Ahmad Faizul
Abstract:
With the issuance of the Malaysian National Annex 2017 as part of MS EN 1998-1:2015, the seismic mapping of Peninsular Malaysia as well as Sabah and Sarawak has undergone some changes in terms of Peak Ground Acceleration (PGA) values. The revision of the PGA has raised concern about the safety of onshore oil and gas structures, as these structures were not designed to accommodate the new PGA values, which are much higher than those used in the original design. In view of the high number of structures and buildings to be re-assessed, a risk assessment methodology has been developed to prioritize and rank the assets in terms of their criticality against the new seismic loading. To date, such a risk assessment method for onshore oil and gas structures has been lacking, and the main intention of this technical paper is to share the risk assessment methodology and the risk element scoring finalized via the Delphi method. The finalized methodology and the values used to rank the risk elements have been established based on years of relevant experience on the subject matter and on a series of rigorous discussions with professionals in the industry. The risk scoring is mapped against the risk matrix (i.e., LOF versus COF), and hence the overall risk for the assets can be obtained. The overall risk can be used to prioritize and optimize integrity assessment, repair, and strengthening work against the new seismic mapping of the country.
Keywords: methodology, PGA, risk, seismic
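The LOF-versus-COF mapping described above can be sketched as a simple risk-matrix lookup. The 1-5 scales, band thresholds, and asset names below are illustrative assumptions, not the scoring finalized in the paper via the Delphi method.

```python
def risk_rank(lof, cof):
    """Map likelihood-of-failure (LOF) and consequence-of-failure (COF)
    scores, each on an assumed 1-5 scale, onto a qualitative risk category.

    The band thresholds below are invented for illustration.
    """
    if not (1 <= lof <= 5 and 1 <= cof <= 5):
        raise ValueError("LOF and COF scores must be in 1..5")
    score = lof * cof            # position in the 5x5 risk matrix
    if score >= 15:
        return "High"
    if score >= 6:
        return "Medium"
    return "Low"

# Prioritize a small (hypothetical) asset register, most critical first
assets = {"tank T-101": (4, 5), "pipe rack PR-2": (2, 4), "shed S-9": (1, 2)}
ranked = sorted(assets, key=lambda a: assets[a][0] * assets[a][1], reverse=True)
```

The sorted list is exactly the prioritization used to schedule integrity assessment and strengthening work.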
Procedia PDF Downloads 152
14771 Stress Analysis of Tubular Bonded Joints under Torsion and Hygrothermal Effects Using DQM
Authors: Mansour Mohieddin Ghomshei, Reza Shahi
Abstract:
Laminated composite tubes with adhesively bonded joints are widely used in the aerospace and automotive industries as well as the oil and gas industry. In this research, adhesively bonded tubular single-lap joints subjected to torsional and hygrothermal loadings are studied using the differential quadrature method (DQM). The analysis is based on classical shell theory. First, an approximate closed-form solution is developed by omitting the lateral deflections in the connecting tubes. Using the analytical model, the circumferential displacements in the tubes and the shear stresses in the interfacing adhesive layer are determined. Then, a numerical formulation is presented using DQM in which the lateral deflections are taken into account. Using the DQM formulation, the circumferential and radial displacements in the tubes as well as the shear and peel stresses in the adhesive layer are calculated. Results obtained from the proposed DQM solutions compare well with those of the approximate analytical model and those of some published references. Finally, using the DQM model, parametric studies are carried out to investigate the influence of various parameters such as adhesive layer thickness, torsional loading, overlap length, tube radii, relative humidity, and temperature.
Keywords: adhesively bonded joint, differential quadrature method (DQM), hygrothermal, laminated composite tube
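The core of the DQM is to approximate derivatives at grid points as weighted sums of the function values at all grid points. The sketch below computes first-derivative weighting coefficients on an arbitrary grid using Shu's explicit formula; the grid and test function are arbitrary choices for illustration, not the joint model of the paper.

```python
import numpy as np

def dq_weights(x):
    """First-derivative DQ weighting matrix A on grid points x, via
    Shu's explicit formula: df/dx(x_i) ~ sum_j A[i, j] * f(x_j)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # M[i] = product over k != i of (x_i - x_k)
    M = np.array([np.prod([x[i] - x[k] for k in range(n) if k != i])
                  for i in range(n)])
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                A[i, j] = M[i] / ((x[i] - x[j]) * M[j])
        # Row sums vanish: the derivative of a constant is zero
        A[i, i] = -A[i].sum()
    return A

# The weights differentiate polynomials of degree < n exactly:
x = np.linspace(0.0, 1.0, 6)
A = dq_weights(x)
deriv = A @ x**3    # should equal 3*x**2 at the grid points
```

In the paper's formulation, applying such matrices to the shell displacement unknowns turns the governing differential equations into an algebraic system at the grid points.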
Procedia PDF Downloads 302
14770 Synthesis, Characterization and Rheological Properties of Boronoxide/Polymer Nanocomposites
Authors: Mehmet Doğan, Mahir Alkan, Yasemin Turhan, Zürriye Gündüz, Pinar Beyli, Serap Doğan
Abstract:
Advances and new discoveries in the field of materials science have played an important role in technological development. Today, materials science is branched into subfields such as metals, nonmetals, chemicals, and polymers, and polymeric nanocomposites have found a wide field of application as one of the most important groups among these. For many polymers used in different fields of industry, it is desirable to improve thermal stability, and one way to improve this property is to form nanocomposite products of the polymers using different fillers. Boron compounds have many uses, and their number is increasing day by day. In order to further increase the variety of applications and the industrial importance of boron compounds, it is necessary to synthesize their nano-products and to find new application areas for these products. In this study, PMMA/boronoxide nanocomposites were synthesized using solution intercalation, polymerization, and melting methods, and PAA/boronoxide nanocomposites using the solution intercalation method. Furthermore, the rheological properties of the nanocomposites synthesized by the melting method were also studied. The nanocomposites were characterized by XRD, FTIR-ATR, DTA/TG, BET, SEM, and TEM. The effects of filler amount, solvent type, and mediating reagent on the thermal stability of the polymers were investigated. In addition, the rheological properties of the PMMA/boronoxide nanocomposites synthesized by the melting method were investigated using a high-pressure capillary rheometer. XRD analysis showed that boronoxide was dispersed in the polymer matrix; FTIR-ATR showed that there were interactions with boronoxide in both PAA and PMMA; and TEM showed that the boronoxide particles had a spherical structure and were dispersed at nano-sized dimensions in the polymer matrix. The thermal stability of the polymers increased with the addition of boronoxide to the polymer matrix, and the decomposition mechanism of PAA was changed.
From the rheological measurements, it was found that PMMA and the PMMA/boronoxide nanocomposites exhibited non-Newtonian, pseudo-plastic, shear-thinning behavior under all experimental conditions.
Keywords: boronoxide, polymer, nanocomposite, rheology, characterization
Procedia PDF Downloads 433
14769 Study on an Accurate Calculation Method of Model Attitude in Wind Tunnel Tests
Authors: Jinjun Jiang, Lianzhong Chen, Rui Xu
Abstract:
The accuracy of the model attitude angle plays an important role in the aerodynamic results of a wind tunnel test. The original method applies a spherical coordinate system transformation to calculate the attitude angle: the model attitude angle is obtained by coordinate transformation and spherical surface mapping from the nominal attitude angle (the balance attitude angle in the wind tunnel coordinate system) indicated by the mechanism. First, the coordinate transformation of this method is not only complex, but it is also difficult to establish the transformation relationship between the spatial coordinate systems, especially after many steps of coordinate transformation; moreover, it cannot realize iterative calculation of the interference relationship between attitude angles. Second, during the calculation an arc is approximately replaced by a straight line and an angle by its tangent value, and inverse trigonometric functions are applied. Therefore, the calculation of the attitude angle is complex and inaccurate, and the approximations are acceptable only for small angles of attack.
However, with the development of modern unsteady aerodynamics research, aircraft tend toward high or very high angles of attack and unsteady flight regimes. Based on engineering practice and vector theory, the concept of a vector angle coordinate system is proposed for the first time, and the vector angle coordinate system of attitude angles is established. With iterative correction calculations, and by avoiding the approximations and inverse trigonometric function solutions, the model attitude calculation process is carried out in detail, which validates that the calculation accuracy of the model attitude angles is improved. Based on engineering and theoretical methods, the vector angle coordinate system gives the transformation and angle definition relations between different flight attitude coordinate systems, so that the attitude angle of the corresponding coordinate system can be calculated accurately and its direction determined. In particular, in channel-coupling calculations, the calculation of the attitude angle between coordinate systems is related only to the angle itself and is independent of the order of the coordinate system changes, which simplifies the calculation process.
Keywords: attitude angle, vector angle coordinate system, iterative calculation, spherical coordinate system, wind tunnel test
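The order sensitivity that motivates the vector formulation can be seen directly with elementary rotation matrices. The sketch below is a generic illustration of attitude-rotation composition, not the paper's vector angle coordinate system; the angles and axis conventions are arbitrary assumptions.

```python
import numpy as np

def Rx(a):
    """Rotation by angle a (rad) about the x axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def Ry(a):
    """Rotation by angle a (rad) about the y axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

pitch, roll = np.radians(20.0), np.radians(10.0)
AB = Ry(pitch) @ Rx(roll)   # pitch first, then roll
BA = Rx(roll) @ Ry(pitch)   # roll first, then pitch

# Finite rotations about different axes do not commute, so the final
# attitude depends on the order of the coordinate transformations:
order_gap = np.abs(AB - BA).max()

# Rotations about the same axis do commute and simply add:
same_axis = bool(np.allclose(Rx(roll) @ Rx(pitch), Rx(roll + pitch)))
```

This non-commutativity is precisely why a multi-step spherical transformation chain is hard to maintain, and why an order-independent angle definition is attractive.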
Procedia PDF Downloads 146
14768 Transient Response of Elastic Structures Subjected to a Fluid Medium
Authors: Helnaz Soltani, J. N. Reddy
Abstract:
The presence of a fluid medium interacting with a structure can lead to failure of the structure. Since developing efficient computational models for fluid-structure interaction (FSI) problems has a broad impact on realistic problems encountered in the aerospace, shipbuilding, and oil and gas industries, among others, there is an increasing need for methods to investigate the effect of the fluid domain on the structural response. A coupled finite element formulation of problems involving FSI is an accurate way to predict the response of structures in contact with a fluid medium. This study proposes a finite element approach to study the transient response of structures interacting with a fluid medium. Since beams and plates are considered to be the fundamental elements of almost any structure, the developed method is applied to beam and plate benchmark problems in order to demonstrate its efficiency. The formulation is a combination of various structural theories and the solid-fluid interface boundary condition, which is used to represent the interaction between the solid and fluid regimes. Here, three different beam theories as well as three different plate theories are considered to model the solid medium, and the Navier-Stokes equation is used as the governing equation of the fluid domain. For each theory, a coupled set of equations is derived, where the element matrices of both regimes are calculated by Gaussian quadrature integration. The main feature of the proposed methodology is to model the fluid domain as an added mass, i.e., an external distributed force due to the presence of the fluid. We validate the accuracy of this formulation by means of some numerical examples. Since the formulation presented in this study covers several theories in the literature, the applicability of the proposed approach is independent of the structure geometry.
The effects of varying parameters such as the structure thickness ratio, fluid density, and immersion depth are studied using numerical simulations. The results indicate that the maximum vertical deflection of the structure is affected considerably by the presence of a fluid medium.
Keywords: beam and plate, finite element analysis, fluid-structure interaction, transient response
Procedia PDF Downloads 568
14767 Code Evaluation on Web-Shear Capacity of Prestressed Hollow-Core Slabs
Authors: Min-Kook Park, Deuck Hang Lee, Hyun Mo Yang, Jae Hyun Kim, Kang Su Kim
Abstract:
Prestressed hollow-core slabs (HCS) are structurally optimized precast units with light-weight hollowed sections, and they are very economical due to mass production by a unique production method. They have thus been widely used in precast concrete construction in many countries around the world. It is, however, difficult to provide shear reinforcement in HCS units produced by the extrusion method, and thus all shear forces must be resisted solely by the concrete webs of the HCS units. This means that, for HCS units, it is very important to accurately estimate the contribution of the web concrete to the shear resistance. In design codes, however, the shear strengths of HCS units are estimated by the same equations that are used for typical prestressed concrete members, which were calibrated to experimental results of conventional prestressed concrete members other than HCS units. In this study, therefore, shear test results of HCS members with a wide range of influential variables were collected, and the shear strength equations in design codes were thoroughly examined by comparison with the experimental results in the shear database of HCS members. Acknowledgement: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning (NRF-2016R1A2B2010277).
Keywords: hollow-core, web-shear, precast concrete, prestress, capacity
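A typical code-type web-shear check of the kind being examined can be sketched as follows. The expression follows the familiar ACI 318-style web-shear form Vcw = (0.29*lambda*sqrt(f'c) + 0.3*f_pc)*b_w*d_p + V_p in SI units; treat the coefficients, units, and the numerical inputs below as illustrative assumptions rather than a definitive code implementation.

```python
import math

def web_shear_capacity(fc, fpc, bw, dp, vp=0.0, lam=1.0):
    """ACI-style web-shear strength of a prestressed web, SI units.

    fc  : concrete compressive strength, MPa
    fpc : compressive stress at the centroid due to prestress, MPa
    bw  : total web width, mm
    dp  : depth to the prestressing strands, mm
    vp  : vertical component of the prestress force, N (zero for straight strands)
    lam : lightweight-concrete factor (1.0 for normal weight)
    Returns the nominal web-shear capacity V_cw in newtons.
    """
    return (0.29 * lam * math.sqrt(fc) + 0.3 * fpc) * bw * dp + vp

# Illustrative HCS-like section: assumed geometry and material values
v_no_prestress = web_shear_capacity(fc=40.0, fpc=0.0, bw=220.0, dp=170.0)
v_prestressed = web_shear_capacity(fc=40.0, fpc=3.0, bw=220.0, dp=170.0)
```

The 0.3*f_pc term makes the beneficial effect of prestress on web-shear capacity explicit, which is exactly the contribution the database comparison in the study puts to the test for extruded HCS units.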
Procedia PDF Downloads 506
14766 The Use of Boosted Multivariate Trees in Medical Decision-Making for Repeated Measurements
Authors: Ebru Turgal, Beyza Doganay Erdogan
Abstract:
Machine learning aims to model the relationship between the response and the features. Medical decision-making researchers would like to make decisions about patients' course and treatment by examining repeated measurements over time. The boosting approach is now being used in machine learning for these aims as an influential tool. The aim of this study is to demonstrate the use of multivariate tree boosting in this field. The main reason for utilizing this approach in decision-making is the ease with which it models complex relationships. To show how the multivariate tree boosting method can be used to identify important features and feature-time interactions, we used data collected retrospectively from the Ankara University Chest Diseases Department records. The dataset includes repeated PF ratio measurements, with a planned follow-up time of 120 hours. A set of different models is tested. In conclusion, the main idea of classification with a weighted combination of classifiers is a reliable method, as has been shown repeatedly with simulations. Furthermore, time-varying variables can be taken into consideration within this framework, making it possible to make accurate decisions for regression and survival problems.
Keywords: boosted multivariate trees, longitudinal data, multivariate regression tree, panel data
Procedia PDF Downloads 203
14765 A New Method of Extracting Polyphenols from Honey Using a Biosorbent Compared to the Commercial Resin Amberlite XAD2
Authors: Farid Benkaci-Ali, Abdelhamid Neggad, Sophie Laurent
Abstract:
A new method for extracting polyphenols from honey using a biodegradable resin was developed and compared with the common commercial resin Amberlite XAD2. For this purpose, three honey samples of Algerian origin were selected for the study of different physico-chemical and biochemical parameters. After extraction of the target compounds by both resins, the polyphenol content was determined, the antioxidant activity was tested, and LC-MS analyses were performed for identification and quantification. The results showed that the physico-chemical and biochemical parameters meet the norms of the International Honey Commission, and the H1 sample appeared to be of high quality. The optimal conditions for extraction with the biodegradable resin were a pH of 3, an adsorbent dose of 40 g/L, a contact time of 50 min, an extraction temperature of 60°C, and no stirring. Both resins could be regenerated and reused for three cycles. The polyphenol contents demonstrated a higher extraction efficiency for the biosorbent than for XAD2, especially in H1. LC-MS analyses allowed the identification and quantification of fifteen compounds in the different honey samples extracted using both resins, and the most abundant compound was 3,4,5-trimethoxybenzoic acid. In addition, the biosorbent extracts showed stronger antioxidant activities than the XAD2 extracts.
Keywords: extraction, polyphenols, biosorbent, Amberlite resin, HPLC-MS
Procedia PDF Downloads 105
14764 Micropillar-Assisted Electric Field Enhancement for High-Efficiency Inactivation of Bacteria
Authors: Sanam Pudasaini, A. T. K. Perera, Ahmed Syed Shaheer Uddin, Sum Huan Ng, Chun Yang
Abstract:
The development of high-efficiency and environmentally friendly bacterial inactivation methods is of great importance for preventing waterborne diseases, which are one of the leading causes of death in the world. Traditional bacterial inactivation methods (e.g., ultraviolet radiation and chlorination) have several limitations, such as long treatment times, formation of toxic byproducts, bacterial regrowth, etc. Recently, an electroporation-based inactivation method was introduced as a substitute. Here, an electroporation-based continuous-flow microfluidic device equipped with an array of micropillars is developed, and the device achieves high bacterial inactivation performance (> 99.9%) within a short exposure time (< 1 s). More than 99.9% reduction of Escherichia coli bacteria was obtained at a flow rate of 1 mL/hr, and no regrowth of bacteria was observed. Images from a scanning electron microscope confirmed the formation of electroporation-induced nanopores within the cell membrane. Through numerical simulation, it has been shown that the electric field strength required for bacterial electroporation (3 kV/cm) was generated using the PDMS micropillars at an applied voltage of 300 V. Furthermore, this method of inactivation involves no chemicals, and the formation of harmful byproducts is minimal.
Keywords: electroporation, high-efficiency, inactivation, microfluidics, micropillar
Procedia PDF Downloads 180
14763 Forecast of Polyethylene Properties in the Gas Phase Polymerization Aided by Neural Network
Authors: Nasrin Bakhshizadeh, Ashkan Forootan
Abstract:
A major problem that affects quality control in industrial polymerization is the lack of suitable on-line measurement tools to evaluate polymer properties such as the melt and density indices. In the conventional method, the polymerization is controlled manually by taking samples, measuring the quality of the polymer in the lab, and recording the results. This method is highly time consuming and leads to the production of a large number of off-specification products. The online application for estimating the melt index and density proposed in this study is a neural network based on the input-output data of the polyethylene production plant. The temperature, the level of the reactors' bed, the mass flow rates of ethylene, hydrogen, and butene-1, and the molar concentrations of ethylene, hydrogen, and butene-1 are used to establish the neural model of the process. The neural network is trained on actual operational data using back-propagation and Levenberg-Marquardt techniques. The simulated results indicate that the neural network process model, established with three layers (one hidden layer) for forecasting the density and four layers for the melt index, is able to successfully predict those quality properties.
Keywords: polyethylene, polymerization, density, melt index, neural network
Procedia PDF Downloads 144
14762 Aerodynamic Design of a Light Long Range Blended Wing Body Unmanned Vehicle
Authors: Halison da Silva Pereira, Ciro Sobrinho Campolina Martins, Vitor Mainenti Leal Lopes
Abstract:
Long range performance is a goal of aircraft configuration optimization. The Blended Wing Body (BWB) is presented in many works in the literature as the most aerodynamically efficient design for a fixed-wing aircraft. Because of its high weight-to-thrust ratio, the BWB is the ideal configuration for many Unmanned Aerial Vehicle (UAV) missions in geomatics applications. In this work, a BWB aerodynamic design for a typical light geomatics payload is presented. Aerodynamic non-dimensional coefficients are predicted using low Reynolds number computational techniques (a 3D panel method), and wing parameters such as aspect ratio, taper ratio, wing twist, and sweep are optimized for high cruise performance and flight quality. The methodology of this work summarizes tailless aircraft wing design and applies it, with appropriate computational schemes, to a light UAV subjected to low Reynolds number flows. It leads to conclusions such as the higher performance and flight quality of thicker airfoils in the airframe body, and the benefits of using aerodynamic twist rather than purely geometric twist.
Keywords: blended wing body, low Reynolds number, panel method, UAV
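The role of aspect ratio in cruise efficiency can be illustrated with the classical parabolic drag polar CD = CD0 + CL^2/(pi*e*AR), whose lift-to-drag ratio peaks at CL = sqrt(CD0*pi*e*AR). The numbers below (zero-lift drag, Oswald factor, aspect ratios) are generic placeholder values, not the paper's design data or its panel-method results.

```python
import math

def best_lift_to_drag(cd0, e, ar):
    """Maximum L/D of a parabolic drag polar CD = cd0 + CL^2/(pi*e*ar)."""
    k = 1.0 / (math.pi * e * ar)          # induced-drag factor
    cl_opt = math.sqrt(cd0 / k)           # CL at which L/D peaks
    ld_max = cl_opt / (cd0 + k * cl_opt ** 2)   # equals 1/(2*sqrt(k*cd0))
    return cl_opt, ld_max

# Placeholder values for a small BWB-like airframe
cl_opt, ld_max = best_lift_to_drag(cd0=0.02, e=0.85, ar=6.0)

# Raising the aspect ratio raises the achievable cruise efficiency
_, ld_max_high_ar = best_lift_to_drag(cd0=0.02, e=0.85, ar=9.0)
```

This kind of closed-form trade is what the aspect-ratio and taper-ratio optimization in the paper refines with the 3D panel method at low Reynolds numbers.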
Procedia PDF Downloads 586
14761 Hybrid Structure Learning Approach for Assessing the Phosphate Laundries Impact
Authors: Emna Benmohamed, Hela Ltifi, Mounir Ben Ayed
Abstract:
The Bayesian network (BN) is one of the most efficient classification methods. It is widely used in several fields (e.g., medical diagnostics, risk analysis, and bioinformatics research). A BN is defined as a probabilistic graphical model that represents a formalism for reasoning under uncertainty. This classification method has a high performance rate in the extraction of new knowledge from data. The construction of this model consists of two phases: structure learning and parameter learning. For the structure learning problem, the K2 algorithm is one of the representative data-driven algorithms, based on a score-and-search approach. In addition, the integration of expert knowledge in the structure learning process allows higher accuracy to be obtained. In this paper, we propose a hybrid approach combining an improvement of the K2 algorithm, called the K2 algorithm for Parents and Children search (K2PC), with an expert-driven method for learning the structure of the BN. The evaluation of the experimental results, using well-known benchmarks, proves that our K2PC algorithm has better performance in terms of correct structure detection. The real application of our model shows its efficiency in the analysis of the impact of phosphate laundry effluents on the watershed in the Gafsa area (southwestern Tunisia).
Keywords: Bayesian network, classification, expert knowledge, structure learning, surface water analysis
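The score-and-search idea behind K2 can be illustrated with the Cooper-Herskovits (K2) score for one candidate parent set, computed here in log space. The toy data and variable names are invented for the example, and this is only the scoring step, not the K2PC parent-and-children search itself.

```python
import math
from collections import Counter
from itertools import product

def k2_log_score(data, child, parents, arity):
    """Cooper-Herskovits (K2) log-score of `child` given a parent set.

    data  : list of dicts mapping variable name -> state in 0..arity-1
    arity : dict mapping variable name -> number of states
    Implements log of prod_j [ (r-1)! / (N_j + r - 1)! * prod_k N_jk! ].
    """
    r = arity[child]
    score = 0.0
    parent_states = list(product(*(range(arity[p]) for p in parents)))
    for ps in parent_states:
        rows = [row for row in data
                if all(row[p] == s for p, s in zip(parents, ps))]
        n_j = len(rows)
        counts = Counter(row[child] for row in rows)
        # lgamma(r) = log (r-1)!, lgamma(n_j + r) = log (N_j + r - 1)!
        score += math.lgamma(r) - math.lgamma(n_j + r)
        score += sum(math.lgamma(counts[k] + 1) for k in range(r))
    return score

# Toy data: Y is an exact copy of X, so X should score well as Y's parent
data = [{"X": x, "Y": x} for x in (0, 1, 0, 1, 0, 1)]
arity = {"X": 2, "Y": 2}
with_parent = k2_log_score(data, "Y", ["X"], arity)
no_parent = k2_log_score(data, "Y", [], arity)
```

A K2-style search greedily adds the parent that most increases this score; the hybrid approach in the paper additionally constrains the candidates using expert knowledge.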
Procedia PDF Downloads 128
14760 Alternative Approach to the Machine Vision System Operating for Solving Industrial Control Issue
Authors: M. S. Nikitenko, S. A. Kizilov, D. Y. Khudonogov
Abstract:
The paper considers an approach to a machine vision operating system combined with a grid of light markers. This approach is used to solve several scientific and technical problems, such as measuring the capacity of an apron feeder delivering coal from a lining return port to a conveyor in high-seam mining technology, and prototyping an obstacle detection system for an autonomous vehicle. Primary verification of a method for calculating bulk material volume using three-dimensional modelling, and validation in laboratory conditions with calculation of relative errors, were carried out. A method for calculating the capacity of an apron feeder based on a machine vision system, together with a simplified three-dimensional model of the examined measuring area, is offered. The proposed method allows the volume of rock mass moved by an apron feeder to be measured using machine vision, with accuracy suitable for practical application. The developed mathematical apparatus for measuring feeder productivity in kg/s uses only basic mathematical operations: addition, subtraction, multiplication, and division. This simplifies software development and expands the range of microcontrollers and microcomputers suitable for calculating feeder capacity. A feature of the obstacle detection task is the correction of distortions of the laser grid, which simplifies obstacle detection. The paper presents algorithms for video camera image processing and for controlling an autonomous vehicle model based on a machine vision obstacle detection system.
A sample fragment of obstacle detection at the moment of laser grid distortion is demonstrated.
Keywords: machine vision, machine vision operating system, light markers, measuring capability, obstacle detection system, autonomous transport
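The "basic operations only" premise for feeder productivity can be sketched as follows, assuming (hypothetically) a machine-vision height map sampled over a grid of cells; the cell size, bulk density, and time interval are illustrative values, not the paper's:

```python
def feeder_capacity_kg_per_s(heights_m, cell_area_m2, bulk_density_kg_m3, interval_s):
    """Feeder productivity from a vision-derived height map, using only
    addition, multiplication, and division, as the abstract's premise requires.
    heights_m: per-cell material heights over the measuring area."""
    volume_m3 = sum(heights_m) * cell_area_m2   # bulk volume on the feeder
    mass_kg = volume_m3 * bulk_density_kg_m3    # convert volume to mass
    return mass_kg / interval_s                 # mass moved per second

# 4 cells of 0.25 m^2, average 0.1 m of coal, moved past the sensor in 2 s:
rate = feeder_capacity_kg_per_s([0.1, 0.1, 0.1, 0.1], 0.25, 900.0, 2.0)
print(round(rate, 6))  # 45.0
```

Because nothing beyond the four arithmetic operations is used, the same routine ports directly to small microcontrollers, which is the point the abstract makes.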
Procedia PDF Downloads 114
14759 Suppression Subtractive Hybridization Technique for Identification of the Differentially Expressed Genes
Authors: Tuhina-khatun, Mohamed Hanafi Musa, Mohd Rafii Yosup, Wong Mui Yun, Aktar-uz-Zaman, Mahbod Sahebi
Abstract:
The suppression subtractive hybridization (SSH) method is a valuable tool for identifying differentially regulated genes, in disease-specific or tissue-specific contexts, that are important for cellular growth and differentiation. It is a widely used method for separating DNA molecules that distinguish two closely related DNA samples, and one of the most powerful and popular methods for generating subtracted cDNA or genomic DNA libraries. It is based primarily on a suppression polymerase chain reaction (PCR) technique and combines normalization and subtraction in a single procedure. The normalization step equalizes the abundance of DNA fragments within the target population, and the subtraction step excludes sequences that are common to the populations being compared. This dramatically increases the probability of obtaining low-abundance differentially expressed cDNAs or genomic DNA fragments and simplifies analysis of the subtracted library. The SSH technique is applicable to many comparative and functional genetic studies for the identification of disease-related, developmental, tissue-specific, or other differentially expressed genes, as well as for the recovery of genomic DNA fragments distinguishing the samples under comparison.
Keywords: suppression subtractive hybridization, differentially expressed genes, disease specific genes, tissue specific genes
Procedia PDF Downloads 433
14758 Fuzzy Total Factor Productivity by Credibility Theory
Authors: Shivi Agarwal, Trilok Mathur
Abstract:
This paper proposes a method to measure total factor productivity (TFP) change by credibility theory for fuzzy input and output variables. Total factor productivity change has been widely studied with crisp input and output variables; however, in some cases, the input and output data of decision-making units (DMUs) can only be measured with uncertainty. Such data can be represented as linguistic variables characterized by fuzzy numbers. The Malmquist productivity index (MPI) is widely used to estimate TFP change by calculating the total factor productivity of a DMU for different time periods using data envelopment analysis (DEA). The fuzzy DEA (FDEA) model is solved using credibility theory, and the FDEA results are used to measure the TFP change for fuzzy input and output variables. Finally, numerical examples are presented to illustrate the proposed method. The suggested methodology can be utilized for performance evaluation of DMUs and helps to assess their level of integration. It can also be applied to rank the DMUs, identify those that are lagging behind, and make recommendations as to how they can improve their performance to bring them on par with the other DMUs.
Keywords: chance-constrained programming, credibility theory, data envelopment analysis, fuzzy data, Malmquist productivity index
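Given the four distance-function scores that a (fuzzy) DEA run produces for a DMU, the Malmquist index and its standard decomposition into efficiency change and technical change can be computed directly. A minimal sketch, with argument names assumed for illustration:

```python
import math

def malmquist(d_t_t, d_t_t1, d_t1_t, d_t1_t1):
    """Malmquist productivity index between periods t and t+1.
    d_a_b: distance (efficiency) score of the period-b observation measured
    against the period-a frontier, e.g. from a DEA or FDEA run.
    Returns (MPI, efficiency change, technical change); MPI > 1 means TFP growth."""
    mpi = math.sqrt((d_t_t1 / d_t_t) * (d_t1_t1 / d_t1_t))
    ec = d_t1_t1 / d_t_t                                    # catch-up effect
    tc = math.sqrt((d_t_t1 / d_t1_t1) * (d_t_t / d_t1_t))   # frontier shift
    return mpi, ec, tc

mpi, ec, tc = malmquist(d_t_t=0.8, d_t_t1=1.1, d_t1_t=0.7, d_t1_t1=0.9)
print(round(mpi, 3), round(ec * tc, 3))  # identical: MPI factors as EC * TC
```

In the paper's setting, the four scores would themselves come from the credibility-theory FDEA models rather than crisp DEA; the index algebra is unchanged.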
Procedia PDF Downloads 365
14757 Design Optimisation of a Novel Cross Vane Expander-Compressor Unit for Refrigeration System
Authors: Y. D. Lim, K. S. Yap, K. T. Ooi
Abstract:
In recent years, environmental issues have become a hot topic around the world, especially the global warming effect caused by conventional, non-environmentally friendly refrigerants. Several studies of more energy-efficient and environmentally friendly refrigeration systems have been conducted to tackle this issue, and the CO2 refrigeration system has been proposed as a better option. However, the high throttling loss involved during the expansion process of the refrigeration cycle leads to relatively low efficiency, making such a system impractical. To improve the efficiency of the refrigeration system, it has been suggested to replace the conventional expansion valve with an expander. On this basis, a new type of expander-compressor combined unit, named the Cross Vane Expander-Compressor (CVEC), was introduced to replace the compressor and expansion valve of a conventional refrigeration system. A mathematical model was developed to calculate the performance of the CVEC, and it was found that the machine is capable of reducing the energy consumption of a refrigeration system by as much as 18%. Apart from energy saving, the CVEC is also geometrically simpler and more compact. To further improve its efficiency, an optimization study of the device was carried out. Several design parameters of the CVEC were chosen as the variables of the optimization study, which was performed in a simulation program using the complex optimization method, a direct-search, multi-variable, constrained optimization method. It was found that the main design parameter, the shaft radius, was reduced by around 8%, while the inner cylinder radius remained unchanged at its lower limit after optimization. Furthermore, the port sizes were increased to their upper limits.
The changes in these design parameters resulted in a reduction of around 12% in the total frictional loss and of 4% in power consumption. Eventually, the optimization study yielded an improvement in the mechanical efficiency of the CVEC of 4% and an improvement in COP of 6%.
Keywords: complex optimization method, COP, cross vane expander-compressor, CVEC, design optimization, direct search, energy saving, improvement, mechanical efficiency, multi variables
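The complex optimization method referenced here is, in its classic formulation due to Box, a direct search over a "complex" of points that repeatedly reflects the worst point through the centroid of the rest while respecting variable bounds. A generic sketch follows; the CVEC loss model is replaced by a toy objective, and all parameter names and defaults are assumptions:

```python
import random

def complex_method(f, lower, upper, n_points=8, iters=300, alpha=1.3, seed=0):
    """Box's 'complex' method: a direct-search, bound-constrained minimizer
    of the family the abstract refers to. Sketch only."""
    rng = random.Random(seed)
    clamp = lambda x: [min(max(v, lo), hi) for v, lo, hi in zip(x, lower, upper)]
    pts = [[rng.uniform(lo, hi) for lo, hi in zip(lower, upper)]
           for _ in range(n_points)]
    for _ in range(iters):
        vals = [f(p) for p in pts]
        w = vals.index(max(vals))                            # worst vertex
        others = [p for i, p in enumerate(pts) if i != w]
        cen = [sum(c) / len(others) for c in zip(*others)]   # centroid of the rest
        trial = clamp([c + alpha * (c - x) for c, x in zip(cen, pts[w])])
        while f(trial) >= vals[w]:                           # still worst?
            trial = [(t + c) / 2 for t, c in zip(trial, cen)]  # pull toward centroid
            if max(abs(t - c) for t, c in zip(trial, cen)) < 1e-12:
                break
        pts[w] = trial
    return min(pts, key=f)

# Toy 2-variable objective standing in for the CVEC loss model:
best = complex_method(lambda p: (p[0] - 1) ** 2 + (p[1] - 2) ** 2, [0, 0], [5, 5])
print(round(best[0], 2), round(best[1], 2))  # near the optimum (1, 2)
```

No gradients are needed, which is why this family of methods suits simulation-based objectives like the CVEC friction and power models.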
Procedia PDF Downloads 373
14756 Development of a Systematic Approach to Assess the Applicability of Silver Coated Conductive Yarn
Authors: Y. T. Chui, W. M. Au, L. Li
Abstract:
Recently, wearable electronic textiles have been emerging in the market and have developed rapidly, since, besides clothing for leisure, fashion, and personal protection, there is also high demand for clothing that can function in this electronic age, providing interactive interfaces, sensual and tangible touch, social fabric, material witness, and so on. With the requirement that wearable electronic textiles be more comfortable, adorable, and easy to care for, conductive yarn becomes one of the most important fundamental elements of the wearable electronic textile, used to interconnect different functional units or to create a functional unit. The properties of conductive yarns from different companies can vary to a large extent. There are vitally important criteria for selecting conductive yarns, which may directly affect the optimization, prospects, applicability, and performance of the final garment. However, according to the literature review, few studies of commercially available conductive yarns focus on assessment methods for the systematic, scientific selection of material under different conditions. Therefore, in this study, direction for selecting high-quality conductive yarns is given. The stability and reliability of the conductive yarns are tested according to the problems industrialists would experience with the yarns during each manufacturing process. The assessment system is classified into four stages: 1) yarn stage, 2) fabric stage, 3) apparel stage, and 4) end-user stage. Several tests with clear experimental procedures and parameters are suggested for each stage. This assessment method suggests that the optimal conductive yarns should be stable in their properties and resistant to various corrosions at every production stage and during use.
It is expected that this demonstration of the assessment method can serve as a pilot study that assesses the stability of Ag/nylon yarns systematically at various conditions, i.e., during mass production with textile industry procedures and from the consumer perspective. It aims to assist industrialists in understanding the qualities and properties of conductive yarns and suggests a few important parameters they should bear in mind to achieve a higher level of suitability, precision, and controllability.
Keywords: applicability, assessment method, conductive yarn, wearable electronics
Procedia PDF Downloads 535
14755 A Mixed Integer Programming Model for Optimizing the Layout of an Emergency Department
Authors: Farhood Rismanchian, Seong Hyeon Park, Young Hoon Lee
Abstract:
During recent years, demand for healthcare services has dramatically increased. As demand increases, so does the necessity of constructing new healthcare buildings and redesigning and renovating existing ones. Increasing demand necessitates the use of optimization techniques to improve overall service efficiency in healthcare settings. However, the high complexity of care processes remains the major challenge to accomplishing this goal. This study proposes a method based on process mining results to address the high complexity of care processes and to find the optimal layout of the various medical centers in an emergency department (ED). The ProM framework is used to discover clinical pathway patterns and relationships between activities. The sequence clustering plug-in is used to remove infrequent events and to derive the process model in the form of a Markov chain. The process mining results serve as input for the next phase, the development of the optimization model. Comparison of the current ED design with the one obtained from the proposed method indicated that a carefully designed layout can significantly decrease the distances that patients must travel.
Keywords: mixed integer programming, facility layout problem, process mining, healthcare operation management
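In miniature, the layout problem behind such a MIP is a quadratic assignment: given patient-flow frequencies between medical units (e.g., derived from process mining) and distances between candidate locations, choose the assignment that minimizes total weighted travel. A brute-force sketch for tiny instances, with flow and distance numbers invented for illustration:

```python
from itertools import permutations

def best_layout(flow, dist):
    """Tiny brute-force stand-in for the paper's MIP: assign departments to
    locations minimizing sum of (flow between depts) x (distance between
    their assigned locations). flow and dist are square matrices of equal size."""
    n = len(flow)
    def cost(assign):  # assign[d] = location index of department d
        return sum(flow[a][b] * dist[assign[a]][assign[b]]
                   for a in range(n) for b in range(n))
    return min(permutations(range(n)), key=cost)

# 3 units: triage(0) -> imaging(1) heavy traffic, imaging(1) -> labs(2) light.
flow = [[0, 9, 1], [0, 0, 2], [0, 0, 0]]
dist = [[0, 1, 4], [1, 0, 2], [4, 2, 0]]   # pairwise walking distances
print(best_layout(flow, dist))  # (0, 1, 2): the heavy-traffic pair sits closest
```

Real instances grow factorially, which is exactly why the paper formulates the problem as a mixed integer program rather than enumerating assignments.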
Procedia PDF Downloads 339
14754 Solid Particles Transport and Deposition Prediction in a Turbulent Impinging Jet Using the Lattice Boltzmann Method and a Probabilistic Model on GPU
Authors: Ali Abdul Kadhim, Fue Lien
Abstract:
Solid particle distribution on an impingement surface has been simulated utilizing a graphics processing unit (GPU). An in-house computational fluid dynamics (CFD) code has been developed to investigate a 3D turbulent impinging jet using the lattice Boltzmann method (LBM) in conjunction with large eddy simulation (LES) and multiple relaxation time (MRT) models. This paper proposes an improvement to the LBM-cellular automata (LBM-CA) probabilistic method. In the current model, the fluid flow utilizes the D3Q19 lattice, while the particle model employs the D3Q27 lattice. Particle numbers are defined at the same regular LBM nodes, and the transport of particles from one node to its neighboring nodes is determined in accordance with the particle bulk density and velocity, considering all external forces. Previous models distribute particles at each time step without considering the local velocity and the number of particles at each node. The present model overcomes these deficiencies and can therefore better capture the dynamic interaction between particles and the surrounding turbulent flow field. Despite the increasing popularity of the LBM-MRT-CA model for simulating complex multiphase fluid flows, this approach is still expensive in terms of the memory size and computational time required to perform 3D simulations. To improve the throughput of each simulation, a single GeForce GTX TITAN X GPU is used in the present work. The CUDA parallel programming platform and the cuRAND library are utilized to form an efficient LBM-CA algorithm. The methodology was first validated against a benchmark test case involving particle deposition on a square cylinder confined in a duct. The flow was unsteady and laminar at Re=200 (Re is the Reynolds number), and simulations were conducted for different Stokes numbers. The present LBM solutions agree well with other results available in the open literature.
The GPU code was then used to simulate particle transport and deposition in a turbulent impinging jet at Re=10,000. Simulations were conducted for L/D=2, 4, and 6, where L is the nozzle-to-surface distance and D is the jet diameter. The effect of changing the Stokes number on the particle deposition profile was studied at different L/D ratios. For comparative studies, another in-house serial CPU code was also developed, coupling LBM with the classical Lagrangian particle dispersion model. Agreement between the results obtained with the LBM-CA and LBM-Lagrangian models and the experimental data is generally good. The present GPU approach achieves a speedup ratio of about 350 against the serial code running on a single CPU.
Keywords: CUDA, GPU parallel programming, LES, lattice Boltzmann method, MRT, multi-phase flow, probabilistic model
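The core CA idea, that particles hop between lattice nodes with probabilities tied to the local velocity, can be reduced to a one-dimensional sketch. This is emphatically not the paper's D3Q27 model with external forces; the hop rule and parameter names are assumptions made only to convey the weighting principle:

```python
import random

def redistribute(particles, velocity, dt_over_dx, rng):
    """One step of a probabilistic CA transport rule in 1D: each particle at
    node i hops one node along the local flow direction with probability
    |u_i|*dt/dx (kept <= 1), otherwise it stays. Counts are conserved;
    boundaries simply clamp (particles pile up at the walls)."""
    new = [0] * len(particles)
    for i, n in enumerate(particles):
        p_move = min(abs(velocity[i]) * dt_over_dx, 1.0)
        step = 1 if velocity[i] >= 0 else -1
        for _ in range(n):
            j = i + step if rng.random() < p_move else i
            new[max(0, min(j, len(particles) - 1))] += 1  # clamp at walls
    return new

rng = random.Random(1)
field = [20, 0, 0, 0]   # all particles start at the leftmost node
u = [0.5] * 4           # uniform rightward flow
for _ in range(10):
    field = redistribute(field, u, dt_over_dx=0.8, rng=rng)
print(sum(field))       # particle count is conserved: 20
```

Tying the hop probability to the local velocity (and, in the paper, to the particle count and external forces) is what lets the CA track the resolved turbulent field instead of diffusing particles uniformly.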
Procedia PDF Downloads 207
14753 Mixed Number Algebra and Its Application
Authors: Md. Shah Alam
Abstract:
Mushfiq Ahmad has defined a Mixed Number, which is the sum of a scalar and a Cartesian vector. He has also defined the elementary group operations on Mixed Numbers, i.e., the norm of a Mixed Number, the product of two Mixed Numbers, the identity element, and the inverse. It has been observed that the Mixed Number is consistent with Pauli matrix algebra and is a handy tool for working with the Dirac electron theory. Its use as a mathematical method in physics has been studied. (1) We have applied Mixed Numbers in quantum mechanics: Mixed Number versions of the displacement operator, the vector differential operator, and the angular momentum operator have been developed, and the Mixed Number method has also been applied to the Klein-Gordon equation. (2) We have applied Mixed Numbers in electrodynamics: Mixed Number versions of Maxwell's equations, the electric and magnetic field quantities, and the Lorentz force have been found. (3) An associative transformation of Mixed Numbers fulfilling the Lorentz invariance requirement has been developed. (4) We have applied Mixed Number algebra as an extension of complex numbers. Mixed Numbers and the quaternions have an isomorphic correspondence, but they differ in algebraic details: the multiplication of unit Mixed Numbers and the multiplication of unit quaternions are different. Since Mixed Numbers have properties similar to those of Pauli matrix algebra, Mixed Number algebra is a more convenient tool for dealing with the Dirac equation.
Keywords: mixed number, special relativity, quantum mechanics, electrodynamics, pauli matrix
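One product rule consistent with the stated link to Pauli matrix algebra follows from the identity (σ·A)(σ·B) = A·B + iσ·(A×B), giving (a, A)(b, B) = (ab + A·B, aB + bA + iA×B). The sketch below implements this reading; it is an interpretation of the abstract, not necessarily Ahmad's exact definition:

```python
def mixed_mul(m1, m2):
    """Product of two 'mixed numbers' (scalar, 3-vector), using the rule
    implied by the Pauli identity (sigma.A)(sigma.B) = A.B + i sigma.(AxB):
        (a, A)(b, B) = (ab + A.B, aB + bA + i AxB).
    Components may be complex. Returns (scalar, vector)."""
    (a, A), (b, B) = m1, m2
    dot = sum(x * y for x, y in zip(A, B))
    cross = (A[1] * B[2] - A[2] * B[1],
             A[2] * B[0] - A[0] * B[2],
             A[0] * B[1] - A[1] * B[0])
    vec = tuple(a * y + b * x + 1j * c for x, y, c in zip(A, B, cross))
    return (a * b + dot, vec)

# Two pure unit vectors along x and y: scalar part is their dot product (0),
# vector part is i times their cross product (i along z):
s, v = mixed_mul((0, (1, 0, 0)), (0, (0, 1, 0)))
print(s, v)  # 0 (0j, 0j, 1j)
```

Under this rule the product is associative but not commutative, mirroring the noted difference from unit-quaternion multiplication.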
Procedia PDF Downloads 364
14752 Finite Element Method Analysis of a Modified Rotor 6/4 Switched Reluctance Motor's and Comparison with Brushless Direct Current Motor in Pan-Tilt Applications
Authors: Umit Candan, Kadir Dogan, Ozkan Akin
Abstract:
In this study, the use of a modified rotor 6/4 Switched Reluctance Motor (SRM) and a Brushless Direct Current (BLDC) motor in pan-tilt systems is compared. Pan-tilt systems are critical mechanisms that enable the precise orientation of cameras and sensors, and their performance largely depends on the characteristics of the motors used. The aim of the study is to determine how the performance of the SRM can be improved through rotor modifications and how these improvements can compete with BLDC motors. Using Finite Element Method (FEM) analyses, the design characteristics and magnetic performance of the 6/4 SRM are examined in detail. The modified SRM is found to offer increased torque capacity and efficiency while standing out for its simple construction and robustness. The FEM analysis results indicate that, considering its cost-effectiveness and the performance improvements achieved through modification, the SRM is a strong alternative for certain pan-tilt applications. This study aims to provide engineers and researchers with a performance comparison of the modified rotor 6/4 SRM and BLDC motors in pan-tilt systems, helping them make more informed and effective motor selections.
Keywords: reluctance machines, switched reluctance machines, pan-tilt application, comparison, FEM analysis
Procedia PDF Downloads 59
14751 Information Theoretic Approach for Beamforming in Wireless Communications
Authors: Syed Khurram Mahmud, Athar Naveed, Shoaib Arif
Abstract:
Beamforming is a signal processing technique extensively utilized in wireless communications and radar for intensifying a desired signal and minimizing interfering signals through spatial selectivity. In this paper, we present a method for calculating optimal weight vectors for a smart antenna array, to achieve a directive pattern during transmission and selective reception in an interference-prone environment. In the proposed scheme, Mutual Information (MI) extrema are evaluated through an energy-constrained objective function based on a-priori information about the interference source and the desired array factor. Signal to Interference plus Noise Ratio (SINR) performance is evaluated for both transmission and reception. In our scheme, MI is presented as an index to identify the trade-off between information gain, SINR, illumination time, and spatial selectivity in an energy-constrained optimization problem. The employed method yields lower computational complexity, which is demonstrated through comparative analysis with conventional methods in vogue. MI-based beamforming enhances signal integrity in degraded environments while reducing computational intricacy and correlating key performance indicators.
Keywords: beamforming, interference, mutual information, wireless communications
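As a baseline for any such weight-vector calculation, the conventional (delay-and-sum) beamformer simply conjugate-matches the steering vector of the desired direction. A minimal uniform-linear-array sketch follows; it is not the MI-based method of the paper, and the element count and angles are illustrative:

```python
import cmath
import math

def steering_vector(n, spacing_wl, theta_deg):
    """Phase response of an n-element uniform linear array (element spacing
    in wavelengths) toward angle theta measured from broadside."""
    phi = 2 * math.pi * spacing_wl * math.sin(math.radians(theta_deg))
    return [cmath.exp(1j * k * phi) for k in range(n)]

def array_factor(weights, theta_deg, spacing_wl=0.5):
    """Magnitude of the weighted array response toward theta: |w^H a(theta)|."""
    a = steering_vector(len(weights), spacing_wl, theta_deg)
    return abs(sum(w.conjugate() * x for w, x in zip(weights, a)))

# Delay-and-sum weights: conjugate-match the desired direction, normalized.
n, target = 8, 20.0
w = [x / n for x in steering_vector(n, 0.5, target)]
print(round(array_factor(w, target), 6))  # 1.0: unity gain toward the target
print(array_factor(w, -40.0) < 0.3)       # True: attenuated away from it
```

The paper's contribution replaces this fixed matched-weight choice with weights found by optimizing an MI-based, energy-constrained objective; the steering-vector machinery above is common to both.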
Procedia PDF Downloads 281
14750 Designing an Effective Accountability Model for Islamic Azad University Using the Qualitative Approach of Grounded Theory
Authors: Davoud Maleki, Neda Zamani
Abstract:
The present study aims to explore an effective accountability model for Islamic Azad University using the qualitative approach of grounded theory. The data were obtained from semi-structured interviews with 25 professors and scholars at the Islamic Azad University of Tehran, selected by the theoretical sampling method. In the data analysis, the stepwise method and Strauss and Corbin's (1992) analytical method were used. After identification of the main component (balanced response to stakeholders' needs) and its use to bring the categories together, expressions and ideas representing the relationships between the main components and subcomponents were derived, and the revealed components were categorized into the six dimensions of the paradigm model, with the relationships among them: causal conditions (7 components), the main component (balanced response to stakeholders' needs), strategies (5 components), environmental conditions (5 components), intervention features (4 components), and consequences (3 components). The research findings yield an exploratory model describing the relationships between causal conditions, the main component, accountability strategies, environmental conditions, university environmental features, and consequences.
Keywords: accountability, effectiveness, Islamic Azad University, grounded theory
Procedia PDF Downloads 86
14749 Properties Optimization of Keratin Films Produced by Film Casting and Compression Moulding
Authors: Mahamad Yousif, Eoin Cunningham, Beatrice Smyth
Abstract:
Every year ~6 million tonnes of feathers are produced globally. Due to feathers' low density and possible contamination with pathogens, their disposal causes health and environmental problems. The extraction of keratin, which represents >90% of a feather's dry weight, could offer a solution, given its wide range of applications in the food, medical, cosmetics, and biopolymer industries. One of these applications is the production of biofilms, which can be used for packaging, edible films, drug delivery, wound healing, etc. Several studies in the last two decades have investigated keratin film production and its properties. However, the effects of many parameters on the properties of the films remain to be investigated, including the extraction method, crosslinker type and concentration, and the film production method. These parameters were investigated in this study. Keratin was extracted from chicken feathers using two methods: alkaline extraction with 0.5 M NaOH at 80 °C, or sulphitolysis extraction with 0.5 M sodium sulphite, 8 M urea, and 0.25-1 g sodium dodecyl sulphate (SDS) at 100 °C. The extracted keratin was mixed with different types and concentrations of plasticizers (glycerol and polyethylene glycol) and crosslinkers (formaldehyde (FA), glutaraldehyde, cinnamaldehyde, glyoxal, and 1,4-butanediol diglycidyl ether (BDE)). The mixtures were either cast in a mould or compression moulded to produce films. For casting, keratin powder was first dissolved in water to form a 5% keratin solution, and the mixture was dried in an oven at 60 °C. For compression moulding, 10% water was added, and the compression moulding temperature and pressure were in the ranges of 60-120 °C and 10-30 bar. Finally, the tensile properties, solubility, and transparency of the films were analysed. The films prepared using the sulphitolysis keratin had tensile properties superior to those of the alkaline keratin and formed successfully at lower plasticizer concentrations.
Lowering the SDS concentration from 1 to 0.25 g/g feathers improved all the tensile properties. All the films prepared without crosslinkers were 100% water soluble, but adding crosslinkers reduced solubility to as low as 21%. FA and BDE were found to be the best crosslinkers, increasing the tensile strength and elongation at break of the films. Higher compression moulding temperature and pressure lowered the tensile properties; therefore, 80 °C and 10 bar were considered the optimal compression moulding conditions. Nevertheless, the films prepared by casting had higher tensile properties than those prepared by compression moulding, though they were less transparent. Two optimal films, prepared by film casting, were identified, with the following compositions: (a) sulphitolysis keratin, 20% glycerol, 10% FA, and 10% BDE; (b) sulphitolysis keratin, 20% glycerol, and 10% BDE. Their tensile strength, elongation at break, Young's modulus, solubility, and transparency were: (a) 4.275±0.467 MPa, 86.12±4.24%, 22.227±2.711 MPa, 21.34±1.11%, and 8.57±0.94* respectively; (b) 3.024±0.231 MPa, 113.65±14.61%, 10±1.948 MPa, 25.03±5.3%, and 4.8±0.15 respectively. (*A higher value indicates that the film is less transparent.) The extraction method, film composition, and production method had significant influence on the properties of keratin films and should therefore be tailored to meet the desired properties and applications.
Keywords: compression moulding, crosslinker, film casting, keratin, plasticizer, solubility, tensile properties, transparency
Procedia PDF Downloads 36
14748 Hydraulic Characteristics of Mine Tailings by Metaheuristics Approach
Authors: Akhila Vasudev, Himanshu Kaushik, Tadikonda Venkata Bharat
Abstract:
A large number of mine tailings are produced every year as part of the extraction process of phosphates, gold, copper, and other materials. Mine tailings are high in water content and exhibit very slow dewatering behavior. The efficient design of tailings dams and economical disposal of these slurries require knowledge of tailings consolidation behavior. Large-strain consolidation theory closely predicts the self-weight consolidation of these slurries, as it considers the conservation of mass and momentum and treats hydraulic conductivity as a function of void ratio. Classical laboratory techniques, such as the settling column test and the seepage consolidation test, are expensive and time-consuming for estimating the variation of hydraulic conductivity with void ratio. Inverse estimation of the constitutive relationships from measured settlement versus time curves is therefore explored. In this work, inverse analysis based on metaheuristic techniques is used to predict the hydraulic conductivity parameters of mine tailings from the base excess pore water pressure dissipation curve and the initial conditions of the tailings. The proposed inverse model uses the particle swarm optimization (PSO) algorithm, which is based on the social behavior of animals searching for food sources. The finite-difference numerical solution of the forward analytical model is integrated with the PSO algorithm to solve the inverse problem. The method is tested on synthetic base excess pore pressure dissipation curves generated using the finite difference method. The effectiveness of the method is verified using a base excess pore pressure dissipation curve obtained from a settling column experiment and further ensured through comparison with available predicted hydraulic conductivity parameters.
Keywords: base excess pore pressure, hydraulic conductivity, large strain consolidation, mine tailings
Procedia PDF Downloads 136