Search results for: MATRIX method
19545 Enhancing Fracture Toughness of CF/PAEK Laminates for High-Velocity Impact Applications: An Experimental Investigation
Authors: Johannes Keil, Eric Mischorr, Veit Würfel, Jan Condé-Wolter, Alexander Liebsch, Maik Gude
Abstract:
In the aviation sector, wastewater pipes are subjected to many different mechanical and media-induced loads. Worst-case scenarios include high-velocity impacts resulting from the introduction of foreign objects into the system. The industry is seeking to reduce the weight of these pipes, which are currently manufactured from titanium. A promising alternative is the use of fiber-reinforced polymers (FRP), specifically carbon fiber (CF) reinforced polyaryletherketone (PAEK) laminates. This study employs an experimental methodology to investigate the impact resistance of CF/PAEK laminates, with a particular focus on three configurations: crimp, non-crimp, and interleaved matrix-rich films in cross-ply laminates. High-velocity impacts were performed using a gas gun, resulting in three-dimensional damage patterns. Afterwards, the damage behavior was qualitatively and quantitatively analyzed using ultrasonic scans and computed tomography (CT). Samples with an interleaved matrix-rich film showed a reduction of the damage area by around 40% compared to the non-interleaved, non-crimp samples, while the crimp architecture resulted in a reduction of more than 60%. These findings contribute to understanding the influence of laminate architecture on impact resistance, paving the way for more efficient materials in aviation applications.
Keywords: fracture toughness, high-velocity impact, textile architecture, thermoplastic composites
Procedia PDF Downloads 19
19544 Detection of Mustard Traces in Food by an Official Food Safety Laboratory
Authors: Clara Tramuta, Lucia Decastelli, Elisa Barcucci, Sandra Fragassi, Samantha Lupi, Enrico Arletti, Melissa Bizzarri, Daniela Manila Bianchi
Abstract:
Introduction: Food allergies affect, in the Western world, 2% of adults and up to 8% of children. The protection of allergic consumers is guaranteed, in Europe, by Regulation (EU) No 1169/2011 of the European Parliament, which governs the consumer's right to information and identifies 14 food allergens that must be indicated on the label. Among these, mustard is a popular spice added to enhance the flavour and taste of foods. It is frequently present as an ingredient in spice blends, marinades, salad dressings, sausages, and other products. Hypersensitivity to mustard is a public health problem, since the ingestion of even low amounts can trigger severe allergic reactions. In order to protect the allergic consumer, high-performance methods are required for the detection of allergenic ingredients. Food safety laboratories rely on validated methods that detect hidden allergens in food to ensure the safety and health of allergic consumers. Here we present the test results for the validation and accreditation of a Real-Time PCR assay (RT-PCR: SPECIALfinder MC Mustard, Generon) for the detection of mustard traces in food. Materials and Methods. The method was tested on five classes of food matrices: bakery and pastry products (chocolate cookies), meats (ragù), ready-to-eat (mixed salad), dairy products (yogurt), and grains and milling products (rice and barley flour). Blank samples were spiked starting with the mustard samples (Sinapis alba), lyophilized and stored at -18 °C, at a concentration of 1000 ppm. Serial dilutions were then prepared to a final concentration of 0.5 ppm, using the DNA extracted by ION Force FAST (Generon) from the blank samples. The Real-Time PCR reaction was performed with RT-PCR SPECIALfinder MC Mustard (Generon), using the CFX96 System (BioRad). Results. Real-Time PCR showed a limit of detection (LOD) of 0.5 ppm in grains and milling products, ready-to-eat, meats, bakery and pastry products, and dairy products (Ct range 25-34).
To determine the exclusivity parameter of the method, the ragù matrix was contaminated with Prunus dulcis (almonds), peanut (Arachis hypogaea), Glycine max (soy), Apium graveolens (celery), Allium cepa (onion), Pisum sativum (peas), Daucus carota (carrots), and Theobroma cacao (cocoa), and no cross-reactions were observed. Discussion. In terms of sensitivity, the Real-Time PCR confirmed, even in complex matrices, a LOD of 0.5 ppm in the five classes of food matrices tested; these values are compatible with the current regulatory situation, which does not consider, at the international level, establishing a quantitative criterion for the allergen considered in this study. The Real-Time PCR SPECIALfinder kit for the detection of mustard proved to be easy to use and was particularly appreciated for its rapid response times, considering that the amplification and detection phase takes less than 50 minutes. Method accuracy was rated satisfactory for sensitivity (100%) and specificity (100%), and the method was fully validated and accredited. It was found adequate for the needs of the laboratory, as it met the purpose for which it was applied. This study was funded in part within a project of the Italian Ministry of Health (IZS PLV 02/19 RC).
Keywords: allergens, food, mustard, real time PCR
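As a back-of-the-envelope check on the spiking scheme above (a 1000 ppm stock diluted down to the 0.5 ppm LOD), the overall and stepwise dilution factors can be computed in a few lines. The step factor below is purely illustrative; the abstract does not state the actual dilution protocol.

```python
def required_dilution_factor(stock_ppm, target_ppm):
    """Overall dilution factor needed to go from stock to target concentration."""
    return stock_ppm / target_ppm

def serial_dilution(stock_ppm, step_factor, n_steps):
    """Concentrations produced by n successive dilutions with the same factor."""
    concs = [stock_ppm]
    for _ in range(n_steps):
        concs.append(concs[-1] / step_factor)
    return concs

# 1000 ppm mustard stock down to the 0.5 ppm LOD is a 2000x overall dilution
print(required_dilution_factor(1000.0, 0.5))  # 2000.0
print(serial_dilution(1000.0, 10.0, 3))       # [1000.0, 100.0, 10.0, 1.0]
```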
Procedia PDF Downloads 167
19543 A Review on Higher-Order Spline Techniques for Solving Burgers Equation Using B-Spline Methods and Variation of B-Spline Techniques
Authors: Maryam Khazaei Pool, Lori Lewis
Abstract:
This is a summary of articles based on higher-order B-spline methods and variations of B-spline methods, such as the Quadratic B-spline Finite Element Method, the Exponential Cubic B-Spline Method, the Septic B-spline Technique, the Quintic B-spline Galerkin Method, and B-spline Galerkin Methods based on the Quadratic B-spline Galerkin method (QBGM) and the Cubic B-spline Galerkin method (CBGM). In this paper, we study B-spline methods and variations of B-spline techniques for finding a numerical solution to the Burgers' equation. A set of fundamental definitions, including the Burgers equation, spline functions, and B-spline functions, is provided. For each method, the main technique is discussed, as well as the discretization and stability analysis. A summary of the numerical results is provided, and the efficiency of each method presented is discussed. A general conclusion is provided, where we compare the computational results of all the presented schemes and describe the effectiveness and advantages of these methods.
Keywords: Burgers' equation, Septic B-spline, modified cubic B-spline differential quadrature method, exponential cubic B-spline technique, B-spline Galerkin method, quintic B-spline Galerkin method
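All of the surveyed schemes build on the same B-spline basis functions. As a minimal illustration (not taken from any of the reviewed papers), the Cox-de Boor recursion that defines these basis functions can be evaluated directly:

```python
def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion: value at t of the i-th B-spline basis function
    of order k (degree k-1) over the given knot vector."""
    if k == 1:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = 0.0
    if knots[i + k - 1] != knots[i]:
        left = ((t - knots[i]) / (knots[i + k - 1] - knots[i])
                * bspline_basis(i, k - 1, t, knots))
    right = 0.0
    if knots[i + k] != knots[i + 1]:
        right = ((knots[i + k] - t) / (knots[i + k] - knots[i + 1])
                 * bspline_basis(i + 1, k - 1, t, knots))
    return left + right

# Quadratic (order 3) basis on a uniform knot vector; on the interior of the
# knot span the basis functions sum to 1 (partition of unity)
knots = [0, 1, 2, 3, 4, 5, 6]
values = [bspline_basis(i, 3, 2.5, knots) for i in range(4)]
```

Galerkin-type schemes for the Burgers' equation then expand the unknown in exactly these functions and project the residual onto them.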
Procedia PDF Downloads 126
19542 Mechanical Characterization of Banana by Inverse Analysis Method Combined with Indentation Test
Authors: Juan F. P. Ramírez, Jésica A. L. Isaza, Benjamín A. Rojano
Abstract:
This study proposes a novel use of a method to determine the mechanical properties of fruits by means of indentation tests. The method combines experimental results with a numerical finite element model. The results presented correspond to a simplified numerical model of banana. The banana was assumed to be a one-layer material with isotropic linear elastic mechanical behavior; the Young's modulus found is 0.3 MPa. The method will be extended to multilayer models in further studies.
Keywords: finite element method, fruits, inverse analysis, mechanical properties
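The inverse-analysis idea can be illustrated with a much simpler forward model than the paper's finite element model: assuming Hertzian contact of a rigid spherical indenter on an isotropic linear elastic half-space, the Young's modulus follows from a one-parameter least-squares fit of force-depth data. The contact model, indenter radius, and Poisson's ratio below are illustrative assumptions, not values from the study.

```python
import math

def fit_modulus_hertz(depths_m, forces_N, R_m, nu):
    """One-parameter least-squares fit of Young's modulus E from indentation
    data, assuming Hertzian contact of a rigid sphere (radius R) on an
    isotropic elastic half-space: F = (4/3)*(E/(1-nu^2))*sqrt(R)*d^1.5."""
    xs = [(4.0 / 3.0) * math.sqrt(R_m) * d ** 1.5 for d in depths_m]
    e_reduced = sum(f * x for f, x in zip(forces_N, xs)) / sum(x * x for x in xs)
    return e_reduced * (1.0 - nu ** 2)

# Synthetic force-depth data generated from E = 0.3 MPa (illustrative values)
E_true, nu, R = 0.3e6, 0.49, 2e-3
depths = [1e-4 * i for i in range(1, 6)]
forces = [(4 / 3) * (E_true / (1 - nu ** 2)) * math.sqrt(R) * d ** 1.5
          for d in depths]
E_fit = fit_modulus_hertz(depths, forces, R, nu)
```

The full inverse analysis replaces this closed-form model with repeated finite element simulations, but the fitting principle is the same.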
Procedia PDF Downloads 358
19541 Linear Array Geometry Synthesis with Minimum Sidelobe Level and Null Control Using Taguchi Method
Authors: Amara Prakasa Rao, N. V. S. N. Sarma
Abstract:
This paper describes the synthesis of linear array geometry with minimum sidelobe level and null control using the Taguchi method. Based on the concept of the orthogonal array, the Taguchi method effectively reduces the number of tests required in an optimization process. The Taguchi method has been successfully applied in many fields, such as mechanical engineering, chemical engineering, and power electronics. Compared to other evolutionary methods such as genetic algorithms, simulated annealing, and particle swarm optimization, the Taguchi method is much easier to understand and implement, and it requires less computational/iteration processing to optimize the problem. Different cases are considered to illustrate the performance of this technique. Simulation results show that this method outperforms other evolutionary algorithms (such as GA and PSO) for smart antenna system design.
Keywords: array factor, beamforming, null placement, optimization method, orthogonal array, Taguchi method, smart antenna system
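For context, the quantity being optimized here is the array factor of a linear array, whose sidelobe level is controlled by the element positions. It can be computed directly; the uniformly excited, half-wavelength-spaced 8-element array below is a generic example, not a geometry from the paper.

```python
import numpy as np

def array_factor_db(positions_wl, theta_deg):
    """Normalized array factor (in dB) of a uniformly excited linear array.
    positions_wl: element positions along the array axis, in wavelengths."""
    theta = np.radians(np.asarray(theta_deg, dtype=float))
    phase = 2 * np.pi * np.outer(np.cos(theta), positions_wl)  # k*d*cos(theta)
    af = np.abs(np.exp(1j * phase).sum(axis=1))
    af /= af.max()
    return 20 * np.log10(np.maximum(af, 1e-12))

# Uniform 8-element array with half-wavelength spacing, scanned over 0-180 deg;
# the broadside main beam sits at 90 degrees (0 dB after normalization)
theta = np.linspace(0.0, 180.0, 181)
af_db = array_factor_db(0.5 * np.arange(8), theta)
```

A synthesis method such as the Taguchi approach searches over the `positions_wl` vector to push the sidelobe peaks of this pattern down and to place nulls at specified angles.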
Procedia PDF Downloads 394
19540 Valorization of Plastic and Cork Wastes in Design of Composite Materials
Authors: Svetlana Petlitckaia, Toussaint Barboni, Paul-Antoine Santoni
Abstract:
Plastic is a revolutionary material. However, the pollution caused by plastics damages the environment, human health and the economies of different countries. It is important to find new ways to recycle and reuse plastic material. The use of waste materials as filler and as a matrix for composite materials is receiving increasing attention as an approach to increasing the economic value of waste streams. In this study, a new composite material was developed based on high-density polyethylene (HDPE) and polypropylene (PP) wastes from bottle caps and cork powder from unused cork (virgin cork), which has a high capacity for thermal insulation. The composites were prepared with virgin and modified cork. The composite materials were obtained through twin-screw extrusion and injection molding. The composites were produced with proportions of 0 %, 5 %, 10 %, 15 %, and 20 % of cork powder in a polymer matrix, with and without a coupling agent and flame retardant. These composites were investigated in terms of mechanical, structural and thermal properties. The effects of the cork fraction, particle size and the use of flame retardant on the properties of the composites were investigated. The properties of samples elaborated with the polymer and the cork were compared to those with the coupling agent and a commercial flame retardant. It was observed that the morphology of the HDPE/cork and PP/cork composites revealed good distribution and dispersion of the cork particles without agglomeration. The results showed that the addition of cork powder to the polymer matrix reduced the density of the composites. However, the incorporation of the natural additive does not have a significant effect on water adsorption. Regarding the mechanical properties, the tensile strength decreases with the addition of cork powder, ranging from 30 MPa to 19 MPa for PP composites and from 19 MPa to 17 MPa for HDPE composites.
The thermal conductivity of the HDPE/cork and PP/cork composites is about 0.230 W/mK and 0.170 W/mK, respectively. The flammability of the composites was evaluated using a cone calorimeter. The results of thermal analysis and fire tests show that it is important to add flame retardants to improve fire resistance. The samples elaborated with the coupling agent and flame retardant have better mechanical properties and fire resistance. The feasibility of composites based on cork and PP and HDPE wastes opens new ways of valorizing plastic waste and virgin cork. The formulation of the composite materials must still be optimized.
Keywords: composite materials, cork and polymer wastes, flammability, modified cork
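The reported density reduction with cork content can be sanity-checked with a simple rule-of-mixtures estimate. The handbook densities used below for PP and cork are assumed values for illustration, not measurements from this study.

```python
def composite_density(rho_matrix, rho_filler, vol_frac_filler):
    """Rule-of-mixtures estimate of composite density (same units as inputs)."""
    return rho_filler * vol_frac_filler + rho_matrix * (1.0 - vol_frac_filler)

# Assumed handbook values: PP ~0.91 g/cm^3, cork powder ~0.24 g/cm^3
for vf in (0.0, 0.05, 0.10, 0.15, 0.20):
    print(vf, round(composite_density(0.91, 0.24, vf), 3))
```

Because cork is far less dense than either polymer, the estimate falls monotonically with cork fraction, consistent with the trend reported above.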
Procedia PDF Downloads 88
19539 Residual Power Series Method for System of Volterra Integro-Differential Equations
Authors: Zuhier Altawallbeh
Abstract:
This paper investigates the approximate analytical solutions of the general form of systems of Volterra integro-differential equations by using the residual power series method (RPSM for short). The proposed method produces the solutions in terms of convergent series, requires no linearization or small perturbation, and reproduces the exact solution when the solution is polynomial. Some examples are given to demonstrate the simplicity and efficiency of the proposed method. Comparisons with the Laplace decomposition algorithm verify that the new method is very effective and convenient for solving systems of pantograph equations.
Keywords: integro-differential equation, pantograph equations, system of initial value problems, residual power series method
Procedia PDF Downloads 418
19538 Therapeutic Evaluation of Bacopa Monnieri Extract on Liver Fibrosis in Rats
Authors: Yu Wen Wang, Shyh Ming Kuo, Hsia Ying Cheng, Yu Chiuan Wu
Abstract:
Liver fibrosis is caused by the activation of hepatic stellate cells in the liver, which secrete and deposit excessive extracellular matrix. In recent years, many treatment strategies have been developed to reduce the activation of hepatic stellate cells and thereby increase the decomposition of extracellular matrix. Bacopa monnieri, an herbaceous plant of the Scrophulariaceae, contains saponins and glycosides, which have antioxidant, anti-inflammatory, pain-relieving and free-radical-scavenging characteristics. This study evaluated the inhibition of hepatic stellate cell activity by Bacopa monnieri extract and its therapeutic potential in treating thioacetamide (TAA)-induced liver fibrosis in rats. The results showed that the IC50 of Bacopa monnieri extract was 0.39 mg/mL. Bacopa monnieri extract could effectively reduce H2O2-induced hepatic stellate cell inflammation. In the TAA-induced liver fibrosis animal studies, albumin secretion recovered to normal levels after treatment with Bacopa monnieri extract for 2 weeks, and the levels of the fibrosis-related proteins α-SMA and TGF-β1 decreased, indicating that the extract exerted a therapeutic effect on liver fibrosis. The inflammatory factor TNF-α decreased markedly after 4 weeks of treatment. In summary, we successfully extracted the main component, Bacopaside I, from the plant and demonstrated a potential therapy using this component in treating TAA-induced liver fibrosis in rats.
Keywords: anti-inflammatory, Bacopa monnieri, fibrosis, hepatic stellate cells, water extract
Procedia PDF Downloads 111
19537 A Non-parametric Clustering Approach for Multivariate Geostatistical Data
Authors: Francky Fouedjio
Abstract:
Multivariate geostatistical data have become omnipresent in the geosciences and pose substantial analysis challenges. One of these is the grouping of data locations into spatially contiguous clusters, so that data locations within the same cluster are more similar to each other while clusters differ from one another, in some sense. Spatially contiguous clusters can significantly improve interpretation by turning the resulting clusters into meaningful geographical subregions. In this paper, we develop an agglomerative hierarchical clustering approach that takes into account the spatial dependency between observations. It relies on a dissimilarity matrix built from a non-parametric kernel estimator of the spatial dependence structure of the data. It integrates existing methods to find the optimal number of clusters and to evaluate the contribution of variables to the clustering. The capability of the proposed approach to provide spatially compact, connected and meaningful clusters is assessed using a bivariate synthetic dataset and a multivariate geochemical dataset. The proposed clustering method gives satisfactory results compared to other similar geostatistical clustering methods.
Keywords: clustering, geostatistics, multivariate data, non-parametric
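The core idea, building the dissimilarity matrix so that it reflects spatial dependence before running agglomerative hierarchical clustering, can be sketched as follows. The Gaussian spatial kernel is a simple stand-in for the paper's non-parametric estimator of the spatial dependence structure, and all data are synthetic.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
# Two synthetic spatial groups with distinct bivariate attributes
coords = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(8, 1, (30, 2))])
attrs = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(5, 1, (30, 2))])

# Dissimilarity: attribute distance down-weighted for spatially close pairs
# via a Gaussian kernel (illustrative substitute for the kernel estimator)
d_attr = squareform(pdist(attrs))
d_geo = squareform(pdist(coords))
bandwidth = 5.0
diss = d_attr * (1.0 - np.exp(-((d_geo / bandwidth) ** 2)))

labels = fcluster(linkage(squareform(diss), method="average"),
                  t=2, criterion="maxclust")
```

Down-weighting attribute differences between nearby locations encourages the hierarchy to merge spatial neighbours first, which is what yields spatially contiguous clusters.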
Procedia PDF Downloads 477
19536 A Method for Improving the Embedded Runge Kutta Fehlberg 4(5)
Authors: Sunyoung Bu, Wonkyu Chung, Philsu Kim
Abstract:
In this paper, we introduce a method for improving the embedded Runge-Kutta-Fehlberg 4(5) method. At each integration step, the proposed method is comprised of two equations, for the solution and the error, respectively. The solution and error are obtained by solving an initial value problem whose solution carries the information of the error at each integration step. The constructed algorithm controls both the error and the time step size simultaneously and performs well in computational cost compared to the original method. To assess its effectiveness, the EULR problem is solved numerically.
Keywords: embedded Runge-Kutta-Fehlberg method, initial value problem, EULR problem, integration step
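For reference, the baseline being improved upon, one embedded Runge-Kutta-Fehlberg 4(5) step with its built-in error estimate, looks like this in scalar form. The coefficients are the standard Fehlberg tableau; the step-size controller of the paper is not reproduced here.

```python
def rkf45_step(f, t, y, h):
    """One Runge-Kutta-Fehlberg 4(5) step for y' = f(t, y): returns the
    4th-order solution and the error estimate from the embedded 5th-order
    weights (standard Fehlberg coefficients)."""
    k1 = f(t, y)
    k2 = f(t + h / 4, y + h * k1 / 4)
    k3 = f(t + 3 * h / 8, y + h * (3 * k1 + 9 * k2) / 32)
    k4 = f(t + 12 * h / 13, y + h * (1932 * k1 - 7200 * k2 + 7296 * k3) / 2197)
    k5 = f(t + h, y + h * (439 * k1 / 216 - 8 * k2 + 3680 * k3 / 513
                           - 845 * k4 / 4104))
    k6 = f(t + h / 2, y + h * (-8 * k1 / 27 + 2 * k2 - 3544 * k3 / 2565
                               + 1859 * k4 / 4104 - 11 * k5 / 40))
    y4 = y + h * (25 * k1 / 216 + 1408 * k3 / 2565 + 2197 * k4 / 4104 - k5 / 5)
    y5 = y + h * (16 * k1 / 135 + 6656 * k3 / 12825 + 28561 * k4 / 56430
                  - 9 * k5 / 50 + 2 * k6 / 55)
    return y4, abs(y5 - y4)
```

The difference `|y5 - y4|` is the quantity a step-size controller compares against the tolerance; the paper's contribution is an improved way of obtaining and controlling this error.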
Procedia PDF Downloads 463
19535 Discerning Divergent Nodes in Social Networks
Authors: Mehran Asadi, Afrand Agah
Abstract:
In data mining, partitioning is used as a fundamental tool for classification. With the help of partitioning, we study the structure of data, which allows us to envision decision rules that can be applied to classification trees. In this research, we used an online social network dataset and all of its attributes (e.g., node features, labels, etc.) to determine what constitutes an above-average chance of being a divergent node. We used the R statistical computing language to conduct the analyses in this report. The data were found in the UC Irvine Machine Learning Repository. This research introduces the basic concepts of classification in online social networks. In this work, we address overfitting and describe different approaches for evaluation and performance comparison of different classification methods. In classification, the main objective is to categorize different items and assign them to different groups based on their properties and similarities. In data mining, recursive partitioning is utilized to probe the structure of a data set, which allows us to envision decision rules and apply them to classify data into several groups. Estimating densities is hard, especially in high dimensions with limited data. Of course, we do not know the densities, but we can estimate them using classical techniques. First, we calculated the correlation matrix of the dataset to see if any predictors are highly correlated with one another. By calculating the correlation coefficients for the predictor variables, we see that density is strongly correlated with transitivity. We initialized a data frame to easily compare the quality of the resulting classification methods and utilized decision trees (with k-fold cross-validation to prune the tree).
The decision tree is a non-parametric classification method that uses a set of rules to predict, for each observation, the most commonly occurring class label of the training data in the corresponding partition. Our method aggregates many decision trees to create an optimized model that is not susceptible to overfitting. When using a decision tree, however, it is important to use cross-validation to prune the tree in order to narrow it down to the most important variables.
Keywords: online social networks, data mining, social cloud computing, interaction and collaboration
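The correlation screen described above (density versus transitivity) can be reproduced on synthetic data. Note the original analysis was done in R; this Python sketch only mirrors the idea, and the variable names and threshold are illustrative.

```python
import numpy as np

def correlated_pairs(X, names, threshold=0.8):
    """Flag predictor pairs whose |Pearson r| exceeds a threshold, as a
    screen for redundant predictors before fitting a decision tree."""
    r = np.corrcoef(X, rowvar=False)
    pairs = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if abs(r[i, j]) >= threshold:
                pairs.append((names[i], names[j], round(float(r[i, j]), 3)))
    return pairs

rng = np.random.default_rng(1)
density = rng.normal(size=200)
transitivity = 0.9 * density + 0.1 * rng.normal(size=200)  # strongly related
degree = rng.normal(size=200)                               # independent
X = np.column_stack([density, transitivity, degree])
flagged = correlated_pairs(X, ["density", "transitivity", "degree"])
```

A flagged pair signals that one of the two predictors carries little extra information, which matters when interpreting variable importance in the pruned tree.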
Procedia PDF Downloads 158
19534 Geophysical Methods and Machine Learning Algorithms for Stuck Pipe Prediction and Avoidance
Authors: Ammar Alali, Mahmoud Abughaban
Abstract:
Cost reduction and drilling optimization is the goal of many drilling operators. Historically, stuck pipe incidents have been a major segment of non-productive time (NPT) associated costs. Traditionally, stuck pipe problems are treated as part of operations and solved post-sticking. However, the real key to savings and success is in predicting stuck pipe incidents and avoiding the conditions leading to their occurrence. Previous attempts at stuck-pipe prediction have neglected the local geology of the problem. The proposed predictive tool utilizes geophysical data processing techniques and Machine Learning (ML) algorithms to predict drilling events in real-time using surface drilling data with minimum computational power. The method combines two types of analysis: (1) real-time prediction, and (2) cause analysis. Real-time prediction aggregates the input data, including historical drilling surface data, geological formation tops, and petrophysical data, from wells within the same field. The input data are then flattened per geological formation and stacked per stuck-pipe incident. The algorithm uses two physical methods (stacking and flattening) to filter any noise in the signature and create a robust pre-determined pilot that adheres to the local geology. Once the drilling operation starts, the Wellsite Information Transfer Standard Markup Language (WITSML) live surface data are fed into a matrix and aggregated at a similar frequency as the pre-determined signature. Then, the matrix is correlated with the pre-determined stuck-pipe signature for this field, in real-time. The correlation uses a machine learning Correlation-based Feature Selection (CFS) algorithm, which selects features relevant to the class and identifies redundant features. The correlation output is interpreted as a probability curve for stuck-pipe incident prediction in real-time.
Once this probability passes a fixed threshold defined by the user, the other component, cause analysis, alerts the user of the expected incident based on the set of pre-determined signatures. A set of recommendations is then provided to reduce the associated risk. The validation process involved feeding historical drilling data as a live stream, mimicking actual drilling conditions, for an onshore oil field. Pre-determined signatures were created beforehand for three problematic geological formations in this field. Three wells were processed as case studies, and the stuck-pipe incidents were predicted successfully, with an accuracy of 76%. This accuracy of detection could have resulted in around a 50% reduction in NPT, equivalent to a 9% cost saving in comparison with offset wells. The prediction of the stuck pipe problem requires a method to capture geological, geophysical and drilling data, and to recognize the indicators of this issue at the field and geological-formation level. This paper illustrates the efficiency and robustness of the proposed cross-disciplinary approach in its ability to produce such signatures and predict this NPT event.
Keywords: drilling optimization, hazard prediction, machine learning, stuck pipe
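The CFS criterion mentioned above scores a feature subset by its average feature-class correlation relative to feature-feature redundancy. A minimal version of the standard merit formula is sketched below; this is the generic formulation, not the exact implementation used in the proposed tool.

```python
import math

def cfs_merit(feat_class_corrs, feat_feat_corrs):
    """Correlation-based Feature Selection merit of a feature subset:
    merit = k * r_cf / sqrt(k + k*(k-1)*r_ff), where r_cf is the mean
    feature-class correlation and r_ff the mean feature-feature correlation."""
    k = len(feat_class_corrs)
    r_cf = sum(abs(r) for r in feat_class_corrs) / k
    r_ff = (sum(abs(r) for r in feat_feat_corrs) / len(feat_feat_corrs)
            if feat_feat_corrs else 0.0)
    return k * r_cf / math.sqrt(k + k * (k - 1) * r_ff)

# Two hypothetical subsets of two surface-drilling features each:
# same relevance to the stuck-pipe class, different mutual redundancy
good = cfs_merit([0.8, 0.7], [0.1])       # relevant and non-redundant
redundant = cfs_merit([0.8, 0.7], [0.9])  # relevant but highly redundant
```

The non-redundant subset scores higher, which is exactly the behaviour that lets CFS discard duplicated drilling channels before the real-time correlation.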
Procedia PDF Downloads 229
19533 Seat Assignment Model for Student Admissions Process at Saudi Higher Education Institutions
Authors: Mohammed Salem Alzahrani
Abstract:
In this paper, the student admission process is studied to optimize the assignment of vacant seats with three main objectives: utilizing all vacant seats, satisfying all program-of-study admission requirements, and maintaining fairness among all candidates. The Seat Assignment Method (SAM) is used to build the model and solve the optimization problem with the help of the Northwest Corner Method and the Least Cost Method. A closed formula is derived for applying the priority of assigning a seat to a candidate based on SAM.
Keywords: admission process model, assignment problem, Hungarian Method, Least Cost Method, Northwest Corner Method, SAM
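Of the two construction heuristics named above, the Northwest Corner Method is the simpler: it fills the allocation table from the top-left corner, exhausting each supply row or demand column in turn. The sketch below is the generic balanced transportation version, not the paper's seat-assignment formulation.

```python
def northwest_corner(supply, demand):
    """Northwest Corner Method: initial feasible allocation for a balanced
    transportation problem (total supply equals total demand)."""
    s, d = list(supply), list(demand)
    alloc = [[0] * len(d) for _ in s]
    i = j = 0
    while i < len(s) and j < len(d):
        q = min(s[i], d[j])        # ship as much as the current cell allows
        alloc[i][j] = q
        s[i] -= q
        d[j] -= q
        if s[i] == 0:              # row exhausted: move down
            i += 1
        if d[j] == 0:              # column exhausted: move right
            j += 1
    return alloc

# Illustrative example: 3 programs (supply of seats) and 3 candidate pools
table = northwest_corner([20, 30, 25], [10, 25, 40])
```

The result is only an initial feasible solution; methods such as SAM then improve it toward the stated objectives.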
Procedia PDF Downloads 500
19532 Effect of Carbon Nanotube Reinforcement in Polymer Composite Plates under Static Loading
Authors: S. Madhu, V. V. Subba Rao
Abstract:
In the implementation of carbon nanotube reinforced polymer matrix composites in structural applications, deflection and stress analysis are important considerations. In the present study, a multi-scale analysis of the deflection and stresses of carbon nanotube (CNT) reinforced polymer composite plates is presented. A micromechanics model based on the Mori-Tanaka method is developed by introducing straight CNTs aligned in one direction. The effect of the volume fraction and diameter of the CNTs on plate deflection and stresses is investigated using Classical Laminate Plate Theory (CLPT). The study is primarily conducted with the intention of assessing the suitability of CNT-reinforced polymer composite plates for structural applications under static loading.
Keywords: carbon nanotube, micromechanics, composite plate, multi-scale analysis, classical laminate plate theory
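As a rough point of comparison for Mori-Tanaka estimates, the simple Voigt rule-of-mixtures upper bound for the longitudinal modulus of aligned reinforcement takes one line. The CNT and matrix moduli below are assumed illustrative values, not inputs from the study, and this bound typically overestimates the stiffening that a Mori-Tanaka model would predict.

```python
def longitudinal_modulus_voigt(E_fiber, E_matrix, vf):
    """Voigt (rule-of-mixtures) upper bound on the longitudinal modulus of a
    composite with aligned reinforcement; a simpler stand-in for the paper's
    Mori-Tanaka micromechanics model."""
    return vf * E_fiber + (1.0 - vf) * E_matrix

# Assumed properties: CNT ~1000 GPa, epoxy matrix ~3 GPa, 2 vol% loading
E_eff = longitudinal_modulus_voigt(1000.0, 3.0, 0.02)  # in GPa
```

Even at a 2% volume fraction the bound rises sharply above the neat matrix modulus, which is why volume fraction dominates the plate deflection results.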
Procedia PDF Downloads 372
19531 Improving Temporal Correlations in Empirical Orthogonal Function Expansions for Data Interpolating Empirical Orthogonal Function Algorithm
Authors: Ping Bo, Meng Yunshan
Abstract:
Satellite-derived sea surface temperature (SST) is a key parameter for many operational and scientific applications. However, a disadvantage of SST data is the high percentage of missing values, mainly caused by cloud coverage. The Data Interpolating Empirical Orthogonal Function (DINEOF) algorithm is an EOF-based technique for reconstructing missing data and has been widely used in the oceanographic field. The reconstruction of SST images within a long time series using DINEOF can cause large discontinuities, and one solution to this problem is to filter the temporal covariance matrix to reduce the spurious variability. Building on previous research, an algorithm is presented in this paper to improve the temporal correlations in the EOF expansion. As in previous research, a filter, such as a Laplacian filter, is applied to the temporal covariance matrix; however, the presented algorithm also considers the temporal relationship between the two images used in the filter. For example, two images in the same season are more likely to be correlated than two images in different seasons, so the latter pair is weighted less in the filter. The presented approach is tested on the monthly nighttime 4-km Advanced Very High Resolution Radiometer (AVHRR) Pathfinder SST for the long-term period spanning from 1989 to 2006. The results obtained from the presented algorithm are compared to those from the original DINEOF algorithm without filtering and from the DINEOF algorithm with filtering but without taking the temporal relationship into account.
Keywords: data interpolating empirical orthogonal function, image reconstruction, sea surface temperature, temporal filter
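The season-aware filtering of the temporal covariance matrix can be sketched as a weighted three-point smoother along the time axis. The weighting rule and coefficients below are illustrative simplifications of the algorithm described above, not the paper's actual filter.

```python
import numpy as np

def season(month):
    """Map a calendar month (1-12) to one of four seasons (0-3)."""
    return ((month - 1) // 3) % 4

def filter_temporal_covariance(C, months, alpha=0.5):
    """Laplacian-like three-point smoother applied along the time axis of a
    temporal covariance matrix. A neighbouring time step gets full weight
    only when it falls in the same season; otherwise it is down-weighted
    (illustrative choice), echoing the season-aware idea described above."""
    n = C.shape[0]
    Cf = C.astype(float).copy()
    for i in range(1, n - 1):
        w_prev = 1.0 if season(months[i - 1]) == season(months[i]) else 0.5
        w_next = 1.0 if season(months[i + 1]) == season(months[i]) else 0.5
        lap = w_prev * (C[i - 1] - C[i]) + w_next * (C[i + 1] - C[i])
        Cf[i] = C[i] + alpha * lap
    return 0.5 * (Cf + Cf.T)  # re-symmetrize after row-wise smoothing

# One year of monthly images: build a synthetic covariance and filter it
rng = np.random.default_rng(0)
A = rng.normal(size=(12, 12))
C = A @ A.T                      # symmetric positive semi-definite
Cf = filter_temporal_covariance(C, months=list(range(1, 13)))
```

Smoothing the covariance damps high-frequency temporal variability before the EOF decomposition, which is what reduces the reconstruction discontinuities.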
Procedia PDF Downloads 324
19530 Greatly Improved Dielectric Properties of Poly(vinylidene fluoride) Nanocomposites Using Ag-BaTiO₃ Hybrid Nanoparticles as Filler
Authors: K. Silakaew, P. Thongbai
Abstract:
There is an increasing need for high-permittivity polymer-matrix composites (PMC) owing to the rapid development of the electronics industry. Unfortunately, the dielectric permittivity of PMCs is still too low (ε′ < 80). Moreover, the dielectric loss tangent usually becomes high (tan δ > 0.1) when the dielectric permittivity of a PMC is increased. In this research work, the dielectric properties of poly(vinylidene fluoride) (PVDF)-based nanocomposites were significantly improved by incorporating silver-BaTiO3 (Ag-BT) ceramic hybrid nanoparticles. The Ag-BT/PVDF nanocomposites were fabricated using various volume fractions of Ag-BT hybrid nanoparticles (fAg-BT = 0–0.5) and were characterized using several techniques. The main phases of Ag and BT can be detected by the XRD technique. The microstructure of the Ag-BT/PVDF nanocomposites was investigated to reveal the dispersion of the Ag-BT hybrid nanoparticles, because the dispersion state of a filler can affect the dielectric properties of the nanocomposites. It was found that the hybrid filler nanoparticles were well dispersed in the PVDF matrix. The formation of the PVDF phases was identified using the XRD and FTIR techniques; we found that the fillers can increase the polar phase of the PVDF polymer. The fabricated Ag-BT/PVDF nanocomposites are systematically characterized to explain their dielectric behavior. Interestingly, a largely enhanced dielectric permittivity (>240) and a suppressed loss tangent (tan δ < 0.08) over a wide frequency range (10² – 10⁵ Hz) are obtained. Notably, the dielectric permittivity is only slightly dependent on temperature. The greatly enhanced dielectric permittivity is explained by the interfacial polarization at the Ag-PVDF interfaces and by the high permittivity of the BT particles.
Keywords: BaTiO3, PVDF, polymer composite, dielectric properties
Procedia PDF Downloads 193
19529 Design of Two-Channel Quadrature Mirror Filter Banks Using a Transformation Approach
Authors: Ju-Hong Lee, Yi-Lin Shieh
Abstract:
Two-dimensional (2-D) quadrature mirror filter (QMF) banks have been widely considered for high-quality coding of image and video data at low bit rates. Without implementing subband coding, a 2-D QMF bank is required to have an exactly linear-phase response without magnitude distortion, i.e., the perfect reconstruction (PR) characteristics. The design problem of 2-D QMF banks with the PR characteristics has been considered in the literature for many years. This paper presents a transformation approach for designing 2-D two-channel QMF banks. Under a suitable one-dimensional (1-D) to two-dimensional (2-D) transformation with a specified decimation/interpolation matrix, the analysis and synthesis filters of the QMF bank are composed of 1-D causal and stable digital allpass filters (DAFs) and possess the 2-D doubly complementary half-band (DC-HB) property. This facilitates the design problem of the two-channel QMF banks by finding the real coefficients of the 1-D recursive DAFs. The design problem is formulated based on the minimax phase approximation for the 1-D DAFs. A novel objective function is then derived to obtain an optimization for 1-D minimax phase approximation. As a result, the problem of minimizing the objective function can be simply solved by using the well-known weighted least-squares (WLS) algorithm in the minimax (L∞) optimal sense. The novelty of the proposed design method is that the design procedure is very simple and the designed 2-D QMF bank achieves perfect magnitude response and possesses satisfactory phase response. Simulation results show that the proposed design method provides much better design performance and much less design complexity as compared with the existing techniques.
Keywords: Quincunx QMF bank, doubly complementary filter, digital allpass filter, WLS algorithm
Procedia PDF Downloads 225
19528 Perceptual Image Coding by Exploiting Internal Generative Mechanism
Authors: Kuo-Cheng Liu
Abstract:
In perceptual image coding, the objective is to shape the coding distortion such that the amplitude of distortion does not exceed the error visibility threshold, or to remove perceptually redundant signals from the image. While most research focuses on color image coding, perceptual quantizers developed for luminance signals are often applied directly to chrominance signals, making such color image compression methods inefficient. In this paper, the internal generative mechanism (IGM) is integrated into the design of a color image compression method. The internal generative mechanism working model, based on structure-based spatial masking, is used to assess subjective distortion visibility thresholds that are more visually consistent with human perception. An estimation method for the structure-based distortion visibility thresholds of color components is further presented in a locally adaptive way, in order to design the quantization process in the wavelet color image compression scheme. Since the lowest-subband coefficient matrix of an image in the wavelet domain preserves the local property of the image in the spatial domain, the error visibility threshold inherent in each coefficient of the lowest subband for each color component is estimated by using the proposed spatial error visibility threshold assessment. The threshold inherent in each coefficient of the other subbands for each color component is then estimated in a locally adaptive fashion based on the distortion energy allocation. Because the error visibility thresholds are estimated using predicted and reconstructed signals of the color image, the coding scheme incorporating the locally adaptive perceptual color quantizer does not require side information.
Experimental results show that the entropies of the three color components obtained using the proposed IGM-based color image compression scheme are lower than those obtained using the existing color image compression method at perceptually lossless visual quality.
Keywords: internal generative mechanism, structure-based spatial masking, visibility threshold, wavelet domain
Procedia PDF Downloads 248
19527 A Succinct Method for Allocation of Reactive Power Loss in Deregulated Scenario
Authors: J. S. Savier
Abstract:
Real power is the component of power that is converted into useful energy, whereas reactive power is the component that cannot be converted into useful energy but is required for the magnetization of various electrical machines. If reactive power is compensated at the consumer end, the flow of reactive power from generators to the loads can be avoided and the overall power loss thereby reduced. In this context, this paper presents a succinct method, called the JSS method, for allocating reactive power losses to consumers connected to radial distribution networks in a deregulated environment. The proposed method has the advantage that no assumptions are made while deriving the reactive power loss allocation.Keywords: deregulation, reactive power loss allocation, radial distribution systems, succinct method
Procedia PDF Downloads 376
19526 Modification of Underwood's Equation to Calculate Minimum Reflux Ratio for Column with One Side Stream Upper Than Feed
Authors: S. Mousavian, A. Abedianpour, A. Khanmohammadi, S. Hematian, Gh. Eidi Veisi
Abstract:
Distillation is one of the most important and widely used separation methods in industrial practice. There are different ways to design a distillation column; one of them is the short-cut method, in which material balances and equilibrium relations are employed to calculate the number of trays in the column. Among the short-cut methods is the Fenske-Underwood-Gilliland method, in which the minimum reflux ratio is calculated by the Underwood equation. Underwood proposed an equation that is useful for a simple distillation column with one feed and one top and one bottom product. In this study, the Underwood method is extended to predict the minimum reflux ratio for a column with one side stream above the feed. The results of this model are compared with the McCabe-Thiele method, and the comparison shows that the proposed method is able to calculate the minimum reflux ratio with very small error.Keywords: minimum reflux ratio, side stream, distillation, Underwood’s method
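For reference, the classical Underwood calculation for a simple column (one feed, one top and one bottom product), which the paper extends to a side-stream column, can be sketched as follows. The relative volatilities and compositions are illustrative assumptions, not data from the study.

```python
# Classical Underwood method (simple column, saturated-liquid feed):
# 1) solve sum(alpha_i*z_i/(alpha_i - theta)) = 1 - q for the root
#    theta lying between the key volatilities, then
# 2) Rmin + 1 = sum(alpha_i*x_Di/(alpha_i - theta)).
# All numbers below are invented for illustration.

def underwood_theta(alpha, z, q, lo, hi):
    """Bisection for the Underwood root theta between lo and hi."""
    def f(theta):
        return sum(a * zi / (a - theta) for a, zi in zip(alpha, z)) - (1 - q)
    lo, hi = lo + 1e-6, hi - 1e-6   # stay off the singularities
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def minimum_reflux(alpha, x_dist, theta):
    """Second Underwood equation: Rmin = sum(alpha*xD/(alpha-theta)) - 1."""
    return sum(a * xd / (a - theta) for a, xd in zip(alpha, x_dist)) - 1

alpha = [2.5, 1.0]      # relative volatilities (light key, heavy key)
z = [0.5, 0.5]          # feed composition (mole fractions)
x_dist = [0.95, 0.05]   # distillate composition
theta = underwood_theta(alpha, z, q=1.0, lo=1.0, hi=2.5)
r_min = minimum_reflux(alpha, x_dist, theta)
```

The paper's modification would alter the balance equations to account for the side-stream draw above the feed; the root-finding structure stays the same.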
Procedia PDF Downloads 406
19525 Implication of Multi-Walled Carbon Nanotubes on Polymer/MXene Nanocomposites
Authors: Mathias Aakyiir, Qunhui Zheng, Sherif Araby, Jun Ma
Abstract:
MXene nanosheets stack in polymer matrices, while multi-walled carbon nanotubes (MWCNTs) entangle themselves when used to form composites. These challenges are addressed in this work by forming MXene/MWCNT hybrid nanofillers through electrostatic self-assembly and developing elastomer/MXene/MWCNT nanocomposites using a latex compounding method. In the three-phase nanocomposite, MWCNTs serve as bridges between MXene nanosheets, leading to nanocomposites with well-dispersed nanofillers. The high aspect ratio of the MWCNTs and the interconnecting role of the MXene give the hybrid-filler nanocomposites a lower electrical percolation threshold than two-phase composites containing either MXene or MWCNTs alone. This study focuses on discussing in detail the interfacial interaction between the nanofillers and the elastomer matrix, and the outstanding mechanical and functional properties of the resulting nanocomposites. The developed nanocomposites have potential applications in the automotive and aerospace industries.Keywords: elastomers, multi-walled carbon nanotubes, MXenes, nanocomposites
Procedia PDF Downloads 163
19524 Analysis of Sulphur-Oxidizing Bacteria Attack on Concrete Based on Waste Materials
Authors: A. Eštoková, M. Kovalčíková, A. Luptáková, A. Sičáková, M. Ondová
Abstract:
Concrete durability, an important engineering property that significantly determines the service life of concrete structures, can be threatened and even lost through interactions of the concrete with the external environment. Bio-corrosion caused by the presence and activity of microorganisms producing sulphuric acid is a special type of sulphate deterioration of concrete materials. The effects of the sulphur-oxidizing bacteria Acidithiobacillus thiooxidans on various concrete samples, based on silica fume and zeolite, were investigated in the laboratory over 180 days. A laboratory study was conducted to compare the performance of the concrete samples in terms of deterioration caused by the leaching of calcium and silicon compounds from the cement matrix. The changes in the elemental concentrations of calcium and silicon in both the solid samples and the liquid leachates were measured by X-ray fluorescence. The experiments confirmed that the silica fume-based concrete samples performed best in terms of both silicon and calcium ion leaching.Keywords: biocorrosion, concrete, leaching, bacteria
Procedia PDF Downloads 451
19523 Poster: Incident Signals Estimation Based on a Modified MCA Learning Algorithm
Authors: Rashid Ahmed, John N. Avaritsiotis
Abstract:
Many signal subspace-based approaches have been proposed for determining the fixed Direction of Arrival (DOA) of plane waves impinging on an array of sensors. Two procedures for DOA estimation based on neural networks are presented. First, Principal Component Analysis (PCA) is employed to extract the maximum eigenvalue and its eigenvector from the signal subspace to estimate the DOA. Second, Minor Component Analysis (MCA), a statistical method of extracting the eigenvector associated with the smallest eigenvalue of the covariance matrix, is applied. In this paper, we modify an MCA(R) learning algorithm to enhance its convergence, since convergence is essential for moving MCA algorithms toward practical applications. A learning rate parameter is also presented that ensures fast convergence of the algorithm, because it directly affects the convergence of the weight vector and the resulting error level. MCA is performed to determine the estimated DOA, and preliminary results are furnished to illustrate the convergence achieved.Keywords: Direction of Arrival, neural networks, Principal Component Analysis, Minor Component Analysis
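The core MCA idea, extracting the eigenvector associated with the smallest eigenvalue, can be sketched with a plain gradient-descent learning rule. This is a generic MCA-style iteration, not the paper's modified MCA(R) algorithm, and the covariance matrix and learning rate are invented for illustration.

```python
# MCA-style learning rule: descend the Rayleigh quotient w'Cw of a
# covariance matrix C, renormalizing at each step, so the weight
# vector converges to the minor eigenvector (smallest eigenvalue).
# The learning rate eta directly controls the convergence speed.

def mat_vec(C, w):
    """Multiply matrix C by vector w."""
    return [sum(cij * wj for cij, wj in zip(row, w)) for row in C]

def normalize(w):
    n = sum(x * x for x in w) ** 0.5
    return [x / n for x in w]

C = [[2.0, 0.4],
     [0.4, 0.5]]           # sample covariance matrix (eigenvalues 2.1, 0.4)
w = normalize([1.0, 1.0])  # initial weight vector
eta = 0.1                  # learning rate (illustrative choice)

for _ in range(2000):
    Cw = mat_vec(C, w)
    ray = sum(wi * ci for wi, ci in zip(w, Cw))  # Rayleigh quotient w'Cw
    # step against the Rayleigh-quotient gradient, then renormalize
    w = normalize([wi - eta * (ci - ray * wi) for wi, ci in zip(w, Cw)])

minor_eigenvalue = sum(wi * ci for wi, ci in zip(w, mat_vec(C, w)))
```

In a DOA setting, C would be the sensor-array covariance matrix and the converged weight vector would feed the subsequent angle estimation.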
Procedia PDF Downloads 451
19522 Therapeutic Drug Monitoring by Dried Blood Spot and LC-MS/MS: Novel Application to Carbamazepine and Its Metabolite in Paediatric Population
Authors: Giancarlo La Marca, Engy Shokry, Fabio Villanelli
Abstract:
Epilepsy is one of the most common neurological disorders, with an estimated prevalence of 50 million people worldwide. Children under the age of 15 years represent twenty-five percent of the epilepsy population. For antiepileptic drugs (AEDs), there is a poor correlation between plasma concentration and dose, especially in children, which has been attributed to greater pharmacokinetic variability than in adults. Hence, therapeutic drug monitoring (TDM) is recommended to control toxicity while maintaining drug exposure. Carbamazepine (CBZ) is a first-line AED and the drug of first choice in trigeminal neuralgia. CBZ is metabolised in the liver into carbamazepine-10,11-epoxide (CBZE), its major metabolite, which is equipotent; this creates the need for an assay able to monitor the levels of both CBZ and CBZE. The aim of the present study was to develop and validate an LC-MS/MS method for the simultaneous quantification of CBZ and CBZE in dried blood spots (DBS). The DBS technique overcomes many of the logistical problems, ethical issues and technical challenges faced by classical plasma sampling. LC-MS/MS has been regarded as superior to immunoassays and HPLC/UV methods owing to its better specificity and sensitivity and its lack of interference or matrix effects. Our method combines the advantages of the DBS technique and LC-MS/MS in clinical practice. Extraction was performed using methanol-water-formic acid (80:20:0.1, v/v/v). Chromatographic elution was achieved using a linear gradient with a mobile phase consisting of acetonitrile-water-0.1% formic acid at a flow rate of 0.50 mL/min. The method was linear over the ranges 1-40 mg/L and 0.25-20 mg/L for CBZ and CBZE, respectively, with limits of quantification of 1.00 mg/L and 0.25 mg/L. Intra-day and inter-day assay precisions were less than 6.5% and 11.8%, respectively.
An evaluation of the DBS technique was performed, including the effect of the extraction solvent, spot homogeneity and stability in DBS. Results from a comparison with the plasma assay are also presented. The novelty of the present work lies in being the first to quantify CBZ and its metabolite from a single 3.2 mm DBS disc finger-prick sample (3.3-3.4 µL of blood) by LC-MS/MS in a 10-minute chromatographic run.Keywords: carbamazepine, carbamazepine-10,11-epoxide, dried blood spots, LC-MS/MS, therapeutic drug monitoring
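The quantification step behind such a validated assay can be sketched as an ordinary linear calibration. The instrument responses below are synthetic and only illustrate back-calculating a concentration inside the validated 1-40 mg/L CBZ range; they are not data from the study.

```python
# Sketch of linear calibration for LC-MS/MS quantification: fit
# response vs. concentration by ordinary least squares, then
# back-calculate an unknown sample's concentration. Responses are
# invented (a perfectly linear 0.21 per mg/L) for illustration.

def fit_line(x, y):
    """Ordinary least squares: returns (slope, intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

# CBZ calibration standards across the validated 1-40 mg/L range
conc = [1.0, 5.0, 10.0, 20.0, 40.0]        # mg/L
response = [0.21, 1.05, 2.10, 4.20, 8.40]  # peak-area ratio (synthetic)

slope, intercept = fit_line(conc, response)
unknown = (3.15 - intercept) / slope       # back-calculated, mg/L
```

A real workflow would additionally weight the regression and check back-calculated accuracy of each standard against acceptance criteria.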
Procedia PDF Downloads 417
19521 Electro-Mechanical Response and Engineering Properties of Piezocomposite with Imperfect Interface
Authors: Rattanan Tippayaphalapholgul, Yasothorn Sapsathiarn
Abstract:
Composites of piezoelectric materials are widely used in practical applications such as nondestructive testing devices, smart adaptive structures and medical devices. A thorough understanding of the coupled electro-elastic response and properties of piezocomposites is crucial for the development and design of piezoelectric composite materials used in advanced applications. Micromechanics analysis is employed in this paper to determine the response and engineering properties of the piezocomposite. Mechanically imperfect interface bonding between the piezoelectric inclusions and the polymer matrix is taken into consideration in the analysis. The micromechanics analysis is based on the Boundary Element Method (BEM) together with periodic micro-field micromechanics theory. A selected set of numerical results is presented to investigate the influence of volume ratio and interface bonding condition on the effective piezocomposite material coefficients and to portray basic features of the coupled electroelastic response within the domain of the piezocomposite unit cell.Keywords: effective engineering properties, electroelastic response, imperfect interface, piezocomposite
Procedia PDF Downloads 231
19520 Polymer Modification of Fine Grained Concretes Used in Textile Reinforced Cementitious Composites
Authors: Esma Gizem Daskiran, Mehmet Mustafa Daskiran, Mustafa Gencoglu
Abstract:
Textile reinforced cementitious composite (TRCC) is a composite material in which textile and fine-grained concrete (matrix) materials are used in combination. These matrices offer high-performance properties in many respects. To achieve high performance, polymer-modified fine-grained concretes with high flexural strength were used as the matrix material. In this study, ten latex polymers and ten powder polymers were added to fine-grained concrete mixtures at different rates relative to binder weight. Mechanical properties such as compressive and flexural strength were studied. The results showed that the latex-modified and redispersible-powder-modified fine-grained concretes exhibited different mechanical performance. A wide range of both latex and redispersible powder polymers was studied. As the addition rate increased, compressive strength decreased for all mixtures. Flexural strength increased with the addition rate, although a significant enhancement was not observed in all mixtures.Keywords: textile reinforced composite, cement, fine grained concrete, latex, redispersible powder
Procedia PDF Downloads 256
19519 A Calibration Device for Force-Torque Sensors
Authors: Nicolay Zarutskiy, Roman Bulkin
Abstract:
The paper reviews existing methods of calibrating force-torque sensors with from one to six components, analyzes their advantages and disadvantages, and establishes the necessity of introducing a new calibration method. The calibration method and its constructive realization are also described. The method allows automated force-torque sensor calibration both with selected components of the main vector of forces and moments and under complex loading. Thus, two main advantages of the proposed calibration method are achieved: automation of the calibration process and universality.Keywords: automation, calibration, calibration device, calibration method, force-torque sensors
Procedia PDF Downloads 646
19518 Relevance of Lecture Method in Modern Era: A Study from Nepal
Authors: Hari Prasad Nepal
Abstract:
Research on the lecture method confirms that this teaching method has been practiced from the very beginnings of schooling. Many teachers, lecturers and professors remain convinced that the lecture is still a main tool of the contemporary instructional process. The central purpose of this study is to uncover the extent to which the lecture method is used in higher education. The study was carried out in the Nepalese context, employing a mixed-methods research design. To obtain the primary data, this study employed a questionnaire with closed and open-ended items; 120 teachers, lecturers and professors participated. The findings indicate that 75 percent of the respondents use the lecture method in their classroom teaching. The study reveals advantages of the lecture method such as ease of practice, less preparation time, high pass rates, high student satisfaction, few complaints about instructors, and suitability for large classes and high-level students. In addition, the study reports the instructors' reflections and measures to improve the lecture method. This research concludes that the lecture method is still widely applicable in colleges and universities in the Nepalese context; despite the emergence of new learning approaches and strategies, there have been no significant changes in its application in the higher education classroom.Keywords: instructors, learning approaches, learning strategies, lecture method
Procedia PDF Downloads 238
19517 Cosmetic Recommendation Approach Using Machine Learning
Authors: Shakila N. Senarath, Dinesh Asanka, Janaka Wijayanayake
Abstract:
The need for cosmetic products arises from consumers' desire for personal appearance and hygiene. A cosmetic product consists of various chemical ingredients, which may help keep the skin healthy or may cause damage, and no chemical ingredient performs the same way on every person. The most appropriate way to select a healthy cosmetic product is therefore to identify the texture of one's skin first and then choose the most suitable product with safe ingredients, which makes the selection process complicated; consumer surveys have shown that, most of the time, cosmetic products are selected improperly. This study proposes a content-based system that recommends cosmetic products based on human factors, namely skin type, gender and price range. The proposed system is implemented using machine learning, with the consumer's skin type, gender and price range taken as inputs. The consumer's skin type is derived using the Baumann Skin Type Questionnaire, a value-based approach comprising a number of questions that map the user to one of the 16 skin types of the Baumann Skin Type Indicator (BSTI). Two datasets were collected for the research: a user dataset, gathered through a questionnaire given to the public, and a cosmetic dataset. The cosmetic dataset contains product details belonging to five product categories (moisturizer, cleanser, sun protector, face mask, eye cream). A variant of TF-IDF (Term Frequency-Inverse Document Frequency) is applied to vectorize the cosmetic ingredients in the generic cosmetic products dataset and the user-preferred dataset.
Using the TF-IDF vectors, each product in the user-preferred dataset and the generic cosmetic products dataset can be represented as a sparse vector. The similarity between each user-preferred product and each generic cosmetic product is calculated using the cosine similarity measure, and a similarity matrix is used for the recommendation process: the higher the similarity, the better the match for the consumer. By sorting a user's column of the similarity matrix in descending order, the most similar products are retrieved first. Even though this returns a list of similar products, since user information such as gender and preferred price range has been gathered, further optimization can be done by weighting those parameters once a set of recommended products for a user has been retrieved.Keywords: content-based filtering, cosmetics, machine learning, recommendation system
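The described pipeline, TF-IDF vectorization of ingredient lists followed by cosine-similarity ranking, can be sketched as follows. The tiny catalogue, ingredient names and smoothed IDF variant are illustrative assumptions, not the study's data.

```python
# Content-based matching sketch: ingredient lists -> TF-IDF vectors
# -> cosine similarity against a user's preferred ingredients.
import math

catalogue = {
    "moisturizer_a": ["water", "glycerin", "shea_butter"],
    "cleanser_b": ["water", "sls", "fragrance"],
    "moisturizer_c": ["water", "glycerin", "ceramide"],
}
preferred = ["water", "glycerin", "ceramide"]  # user-preferred ingredients

docs = list(catalogue.values()) + [preferred]
vocab = sorted({ing for doc in docs for ing in doc})

def tf_idf(doc):
    """TF-IDF vector over the ingredient vocabulary (smoothed IDF)."""
    vec = []
    for term in vocab:
        tf = doc.count(term) / len(doc)
        df = sum(term in d for d in docs)
        idf = math.log(len(docs) / df) + 1  # +1 smoothing (assumption)
        vec.append(tf * idf)
    return vec

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

user_vec = tf_idf(preferred)
scores = {name: cosine(tf_idf(ings), user_vec)
          for name, ings in catalogue.items()}
ranked = sorted(scores, key=scores.get, reverse=True)  # best match first
```

In the full system, the ranked list would then be re-weighted by the user's gender and price-range preferences.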
Procedia PDF Downloads 134
19516 A Method of the Semantic on Image Auto-Annotation
Authors: Lin Huo, Xianwei Liu, Jingxiong Zhou
Abstract:
Recently, owing to the semantic gap between low-level image visual features and human concepts, semantic image auto-annotation has become an important topic. In the proposed method, low-level visual features are first extracted from the image and mapped by a corresponding hash method into hash codes, which are then transformed into binary strings and stored. Since auto-annotation by search is a popular approach, it is used here to design and implement a method of semantic image auto-annotation. Finally, tests based on the Corel image set show that the method is effective.Keywords: image auto-annotation, color correlograms, Hash code, image retrieval
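The search-based annotation idea can be sketched as follows. The paper's exact hash method is not specified, so a simple mean-threshold hash stands in for it here, and all feature vectors and labels are invented for illustration.

```python
# Annotation-by-search sketch: each image's feature vector is mapped
# to a binary code (1 bit per feature: above/below the vector's mean,
# an assumed stand-in for the paper's hash method); a query image is
# annotated with the label of the stored code at the smallest Hamming
# distance.

def hash_code(feature):
    """One bit per feature: 1 if the value exceeds the vector's mean."""
    threshold = sum(feature) / len(feature)
    return tuple(int(f > threshold) for f in feature)

def hamming(a, b):
    """Number of differing bits between two binary codes."""
    return sum(x != y for x, y in zip(a, b))

# Stored images: label -> low-level visual feature vector (synthetic)
database = {
    "beach": [0.9, 0.8, 0.1, 0.0, 0.7, 0.2, 0.1, 0.0],
    "forest": [0.1, 0.2, 0.9, 0.8, 0.0, 0.7, 0.9, 0.1],
}
codes = {label: hash_code(vec) for label, vec in database.items()}

# Annotate a query image by searching for the nearest stored code
query = [0.88, 0.79, 0.12, 0.02, 0.71, 0.18, 0.09, 0.01]
qcode = hash_code(query)
best = min(codes, key=lambda label: hamming(codes[label], qcode))
```

In a real system the compact binary codes make the search over a large image set fast, and the annotations of the best-matching images are propagated to the query.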
Procedia PDF Downloads 497