Search results for: minimum root mean square (RMS) error matching algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9329

7769 Attributes That Influence Respondents When Choosing a Mate in Internet Dating Sites: An Innovative Matching Algorithm

Authors: Moti Zwilling, Srečko Natek

Abstract:

This paper presents a predictive analytics approach for finding the best match between two consumers seeking a partner on internet dating sites. The methodology is based on an analysis of consumer preferences and involves data mining and machine learning techniques. The study is composed of two parts. The first part uses descriptive statistics to examine the correlations between a set of attributes of men and women who intend to meet each other through social media, usually the internet; several hypotheses were tested and statistical analyses performed. Results show a strong correlation between the attributes of men and women with respect to how they present themselves on a social medium such as Facebook. One interesting finding is the strong desire of most respondents to develop a serious relationship. In the second part, the authors used common data mining algorithms to identify and classify the attributes that most affect the response rate of the other side. Results show that personal presentation and educational background are the most effective attributes for eliciting a positive attitude toward one's profile from a prospective mate.

Keywords: dating sites, social networks, machine learning, decision trees, data mining

Procedia PDF Downloads 293
7768 Pattern Recognition Search: An Advancement Over Interpolation Search

Authors: Shahpar Yilmaz, Yasir Nadeem, Syed A. Mehdi

Abstract:

Searching for a record in a dataset is a frequent task in any data-structure-related application, so a fast and efficient search algorithm is important for yielding quick results and enhancing overall productivity. Interpolation search is one such technique for searching a sorted set of elements. This paper proposes a new algorithm, an advancement over interpolation search, for searching a sorted array. Pattern Recognition Search, or PR Search (PRS), like interpolation search, is a pattern-based divide-and-conquer algorithm that reduces the sample size to speed up the search: it treats the array as a perfect arithmetic progression and thereby deduces the key element's position. We highlight some key drawbacks of interpolation search, which are accounted for in Pattern Recognition Search.
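
PRS itself is not specified in the abstract; for context, here is a minimal sketch of the baseline it builds on, classical interpolation search, whose position estimate already treats the sorted array as an approximate arithmetic progression (function and variable names are illustrative):

```python
def interpolation_search(arr, key):
    """Locate `key` in a sorted list `arr` by estimating its index from
    the value range, instead of always probing the middle as binary
    search does."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi and arr[lo] <= key <= arr[hi]:
        if arr[lo] == arr[hi]:                    # avoid division by zero
            return lo if arr[lo] == key else -1
        # estimate the index by treating values as an arithmetic progression
        pos = lo + (key - arr[lo]) * (hi - lo) // (arr[hi] - arr[lo])
        if arr[pos] == key:
            return pos
        elif arr[pos] < key:
            lo = pos + 1
        else:
            hi = pos - 1
    return -1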

Keywords: array, complexity, index, sorting, space, time

Procedia PDF Downloads 243
7767 A Communication Signal Recognition Algorithm Based on Holder Coefficient Characteristics

Authors: Hui Zhang, Ye Tian, Fang Ye, Ziming Guo

Abstract:

Communication signal modulation recognition is one of the key technologies in modern information warfare. Automatic modulation recognition methods fall into two major categories: maximum likelihood hypothesis testing based on decision theory, and statistical pattern recognition based on feature extraction. The latter, which comprises feature extraction and classifier design, is now the most commonly used. As the electromagnetic environment of communications grows increasingly complex, effectively extracting the features of various signals at low signal-to-noise ratio (SNR) is a hot topic for scholars worldwide. To solve this problem, this paper proposes a feature extraction algorithm for communication signals based on improved Holder cloud features, and an extreme learning machine (ELM), chosen for the real-time demands of modern warfare, is used to classify the extracted features. The algorithm extracts the digital features of the improved cloud model without deterministic information in a low-SNR environment and uses the improved cloud model to obtain more stable Holder cloud features, improving performance. It overcomes the difficulty that a simple feature extraction algorithm based on the Holder coefficient has in recognizing signals at low SNR, and it achieves better recognition accuracy. Simulation results show that the approach retains a good classification result at low SNR; even at -15 dB, the recognition accuracy still reaches 76%.
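
The improved Holder cloud feature itself is not given in the abstract. As a point of reference only, the sketch below shows one common formulation of a basic Holder-coefficient feature between a signal's spectrum and a reference sequence, derived from Holder's inequality; the conjugate exponents and the rectangular reference are assumptions, not the authors' design:

```python
import numpy as np

def holder_coefficient(f, g, p=2.0):
    """One common formulation of the Holder coefficient between two
    non-negative sequences f and g, with conjugate exponents p and q
    satisfying 1/p + 1/q = 1. This is the basic feature only; the
    paper's improved cloud-model variant is not reproduced here."""
    q = p / (p - 1.0)
    num = np.sum(f * g)
    den = np.sum(f ** p) ** (1.0 / p) * np.sum(g ** q) ** (1.0 / q)
    return num / den

# illustrative use: feature of a signal's spectrum against a rectangular reference
rng = np.random.default_rng(0)
spectrum = np.abs(np.fft.rfft(rng.standard_normal(1024)))
ref = np.ones_like(spectrum)                      # rectangular reference sequence
print(holder_coefficient(spectrum / spectrum.max(), ref))
```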

Keywords: communication signal, feature extraction, Holder coefficient, improved cloud model

Procedia PDF Downloads 156
7766 PointNetLK-OBB: A Point Cloud Registration Algorithm with High Accuracy

Authors: Wenhao Lan, Ning Li, Qiang Tong

Abstract:

To improve the registration accuracy of a source point cloud and a template point cloud when the initial relative deflection angle is too large, a PointNetLK algorithm combined with an oriented bounding box (PointNetLK-OBB) is proposed. In this algorithm, the OBB of a 3D point cloud is used to represent the macro feature of the source and template point clouds. Under the guidance of the iterative closest point algorithm, the OBBs of the source and template point clouds are aligned, producing a mirror symmetry effect between them. According to the fitting degree of the source and template point clouds, the mirror symmetry plane is detected, and the optimal rotation and translation of the source point cloud are obtained to complete the 3D point cloud registration task. To verify the effectiveness of the proposed algorithm, a comparative experiment was performed on the publicly available ModelNet40 dataset. The experimental results demonstrate that, compared with PointNetLK, PointNetLK-OBB improves the registration accuracy of the source and template point clouds when the initial relative deflection angle is too large, and reduces the sensitivity to the initial relative position between the source and template point clouds. The primary contributions of this paper are using PointNetLK to avoid the non-convex problem of traditional point cloud registration and leveraging the regularity of the OBB to avoid the local optimization problem in the PointNetLK context.
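
The abstract does not detail the OBB construction; a minimal sketch of one standard PCA-based way to compute an oriented bounding box for an N × 3 point cloud follows (not necessarily the authors' exact routine):

```python
import numpy as np

def oriented_bounding_box(points):
    """PCA-based OBB of an N x 3 point cloud: the principal axes of the
    centered cloud become the box axes, and the extents are the spans
    along those axes."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    _, _, vt = np.linalg.svd(centered, full_matrices=False)  # rows of vt = box axes
    local = centered @ vt.T                       # points in box coordinates
    mins, maxs = local.min(axis=0), local.max(axis=0)
    extents = maxs - mins
    center = centroid + ((mins + maxs) / 2.0) @ vt  # box center back in world frame
    return center, vt, extents
```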

Keywords: mirror symmetry, oriented bounding box, point cloud registration, PointNetLK-OBB

Procedia PDF Downloads 150
7765 Develop a Software to Hydraulic Redesign a Depropanizer Column to Minimize Energy Consumption

Authors: Mahdi Goharrokhi, Rasool Shiri, Eiraj Naser

Abstract:

A depropanizer column of a particular refinery was redesigned in this work: the minimum reflux ratio, minimum number of trays, feed tray location, and hydraulic characteristics of the tower were calculated and compared with the actual values of the existing tower. For the design review of the tower, fundamental equations were used to develop software whose results were compared with those of two commercial packages; in each case the Peng-Robinson equation of state (PR EOS) was used. The feed tray location was also determined, via a case-study definition for the tower, on the basis of the total energy consumption in the reboiler and condenser.

Keywords: column, hydraulic design, pressure drop, energy consumption

Procedia PDF Downloads 424
7764 Genetic Algorithm and Multi Criteria Decision Making Approach for Compressive Sensing Based Direction of Arrival Estimation

Authors: Ekin Nurbaş

Abstract:

One of the essential challenges in array signal processing, which has drawn enormous research interest over the past several decades, is estimating the direction of arrival (DoA) of plane waves impinging on an array of sensors. In recent years, Compressive Sensing (CS)-based DoA estimation methods have been proposed, and CS-based algorithms have been found to achieve significant performance even in scenarios with multiple coherent sources. On the other hand, the Genetic Algorithm, a solution strategy inspired by natural selection, has been applied to sparse representation problems in recent years with significant performance improvements. With all of this in consideration, this paper proposes a method that combines the Genetic Algorithm (GA) and Multi-Criteria Decision Making (MCDM) approaches for DoA estimation in the CS framework. In this method, a multi-objective optimization problem is generated by splitting the Compressive Sensing objective into its norm minimization and reconstruction loss minimization parts. With the help of the Genetic Algorithm, multiple non-dominated solutions are obtained for the defined multi-objective optimization problem. Among the Pareto-frontier solutions, the final solution is selected with multiple MCDM methods. Moreover, the performance of the proposed method is compared with CS-based methods in the literature.
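
A hedged sketch of the selection step described above: the CS objective is split into a reconstruction term and an l1 sparsity term, and a naive non-domination filter is applied to a GA population. The GA operators and the MCDM ranking are omitted, and all names are illustrative:

```python
import numpy as np

def objectives(A, y, x):
    """The two objectives the abstract splits the CS problem into:
    reconstruction loss and the l1 (sparsity) norm."""
    return np.linalg.norm(A @ x - y) ** 2, np.linalg.norm(x, 1)

def pareto_front(population, A, y):
    """Return the non-dominated candidates of a GA population, where
    each candidate is a sparse coefficient vector."""
    scores = [objectives(A, y, x) for x in population]
    front = []
    for i, si in enumerate(scores):
        dominated = any(
            sj[0] <= si[0] and sj[1] <= si[1] and sj != si
            for j, sj in enumerate(scores) if j != i
        )
        if not dominated:
            front.append(population[i])
    return front
```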

Keywords: genetic algorithm, direction of arrival estimation, multi criteria decision making, compressive sensing

Procedia PDF Downloads 147
7763 Terrestrial Laser Scans to Assess Aerial LiDAR Data

Authors: J. F. Reinoso-Gordo, F. J. Ariza-López, A. Mozas-Calvache, J. L. García-Balboa, S. Eddargani

Abstract:

The quality of a DEM may depend on several factors, such as the data source, the capture method, the processing used to derive it, and its cell size. The two most important capture methods for producing regional-sized DEMs are photogrammetry and LiDAR; DEMs covering entire countries have been obtained with these methods. The quality of these DEMs has traditionally been evaluated by national cartographic agencies through punctual sampling focused on the vertical component, for which standards such as NMAS and the ASPRS Positional Accuracy Standards for Digital Geospatial Data exist. However, it seems more appropriate to carry out this evaluation with a method that takes into account the superficial nature of the DEM, so that the sampling is superficial rather than punctual. This work is part of the research project "Functional Quality of Digital Elevation Models in Engineering", in which it is necessary to control the quality of a DEM whose data source is an experimental LiDAR flight with a density of 14 points per square meter, which we call the Point Cloud Product (PCpro). The present work describes the data capture on the ground and the post-processing tasks needed to obtain the point cloud used as the reference (PCref) for evaluating the quality of the PCpro. Each PCref consists of a 50 x 50 m patch obtained by registering scans from four different stations. The study area was the Spanish region of Navarra, which covers 10,391 km2; 30 homogeneously distributed patches were necessary to sample the entire surface. The patches were captured with a Leica BLK360 terrestrial laser scanner mounted on a pole reaching heights of up to 7 meters; the scanner was mounted inverted so that the characteristic shadow circle of the direct position is avoided. To ensure that the accuracy of the PCref exceeds that of the PCpro, the PCref was georeferenced with real-time GNSS, with a positioning accuracy better than 4 cm; this is much better than the altimetric mean square error estimated for the PCpro (<15 cm). The DEM of interest corresponds to the bare earth, so a filter was applied to eliminate vegetation and auxiliary elements such as poles and tripods. After the post-processing tasks, the PCref is ready to be compared with the PCpro using different techniques: cloud to cloud, or DEM to DEM after a resampling process.
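
For the cloud-to-cloud technique mentioned at the end, here is a minimal sketch of the comparison using a k-d tree to gather nearest-neighbour discrepancies, assuming PCref and PCpro are N × 3 coordinate arrays in the same reference system:

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud(pc_ref, pc_pro):
    """Nearest-neighbour (cloud-to-cloud) distance from each evaluated
    point to the reference patch; returns mean and RMS discrepancy."""
    tree = cKDTree(pc_ref)
    d, _ = tree.query(pc_pro, k=1)                # distance to closest reference point
    return d.mean(), np.sqrt((d ** 2).mean())
```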

Keywords: data quality, DEM, LiDAR, terrestrial laser scanner, accuracy

Procedia PDF Downloads 100
7762 A Unique Multi-Class Support Vector Machine Algorithm Using MapReduce

Authors: Aditi Viswanathan, Shree Ranjani, Aruna Govada

Abstract:

With data sizes constantly expanding, and with classical machine learning algorithms requiring ever larger amounts of computation time and storage space to analyze such data, the need to distribute computation and memory requirements among several computers has become apparent. Although substantial work has been done on distributed binary SVM algorithms and on multi-class SVM algorithms individually, the field of distributed multi-class SVMs remains largely unexplored. This research seeks to develop an algorithm that implements the Support Vector Machine over a multi-class data set and is efficient in a distributed environment. To this end, we recursively choose the best binary split of a set of classes using a greedy technique, much like a divide-and-conquer approach. Our algorithm has shown better computation time during the testing phase than the traditional sequential SVM methods (One vs. One, One vs. Rest) and outperforms them as the size of the data set grows. This approach also classifies the data with higher accuracy than the traditional multi-class algorithms.
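
A sketch of the recursive step described above: greedily choosing the two-way split of the remaining classes whose binary SVM fits the training data best. The MapReduce distribution layer is omitted, and scikit-learn's LinearSVC stands in for whichever binary SVM the authors used:

```python
from itertools import combinations
import numpy as np
from sklearn.svm import LinearSVC

def best_binary_split(X, y, classes):
    """Try every 2-way partition of `classes` and keep the one whose
    binary SVM separates the training data best; recursing on each side
    yields a binary decision tree of SVMs, queried in O(log k)
    classifier evaluations for k classes."""
    best = None
    for r in range(1, len(classes) // 2 + 1):
        for left in combinations(sorted(classes), r):
            labels = np.isin(y, left).astype(int)  # 1 = left side, 0 = right side
            acc = LinearSVC().fit(X, labels).score(X, labels)
            if best is None or acc > best[0]:
                best = (acc, set(left), set(classes) - set(left))
    return best  # (training accuracy, left class set, right class set)
```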

Keywords: distributed algorithm, MapReduce, multi-class, support vector machine

Procedia PDF Downloads 401
7761 The Effect of Withania Somnifera in Alloxan Induced Diabetic Rabbits

Authors: Farah Ali, Tehreem Fayyaz, Musadiq Idris

Abstract:

The present work investigated the anti-diabetic activity of various extracts of Withania somnifera in alloxan-induced diabetic rabbits. Rabbits were acclimatized for a week to standard laboratory temperature and fed on a strict schedule (8 am, 3 pm and 10 pm) with green fodder (Medicago sativa) and tap water ad libitum. Animals were randomly divided into nine groups of six rabbits each, and body weights and physical activity were recorded before the start of the experiments. The animals of groups 1 and 2 were given lactose (250 mg/kg, p.o.) and Withania somnifera root powder (100 mg/kg, p.o.), respectively, daily from day 1-20. Animals of group 3 were given alloxan (100 mg/kg, i.v.) as a single dose on day 1. Powdered root of Withania somnifera in doses of 100, 150, and 200 mg/kg and its aqueous and ethanol extracts (equivalent to 200 mg/kg of crude drug) were given to the treated animals (groups 4-8), respectively, by oral route for three weeks (day 1-20, o.d.), along with alloxan (100 mg/kg, i.v.) as a single dose on day 1. Group 9 was treated with metformin (200 mg/kg, p.o.) daily from day 1-20, along with a single dose of alloxan (100 mg/kg, i.v.) on day 1. Fasting serum glucose concentration in groups 3-9 increased significantly (p<0.05) on day 3, with a maximum increase (215.3 mg/dl) in the toxic control (TC) group (3) on day 21 of the experiment, compared with the normal control (NC) group (1). The different doses (100, 150, 200 mg/kg, p.o.) of W. somnifera root powder decreased the fasting serum glucose concentration compared with the toxic control group, with a maximum decrease (88.3 mg/dl) in group 2 (treated control) on day 21 of the experiment. Metformin (200 mg/kg, p.o.) (reference control) and the aqueous (AWS) and ethanol (EWS) extracts of W. somnifera (equivalent to 100 mg/kg W. somnifera root, p.o.) antagonized the effects of alloxan compared with the toxic control group. These results indicate that W. somnifera possesses significant anti-diabetic activity.

Keywords: diabetes, serum, glucose, blood, sugar, rabbits

Procedia PDF Downloads 522
7760 Nanofluid Flow Heat Transfer Through Ducts with Different Cross-Sections

Authors: Amir Dehshiri, Mohammad Reza Salimpour

Abstract:

In the present article, we investigate the laminar forced convective heat transfer characteristics of TiO2/water nanofluids through conduits with different cross sections. By designing and assembling an experimental apparatus, we examine the effects of parameters such as cross-sectional shape, Reynolds number, and the concentration of nanoparticles in stable suspension on the enhancement of convective heat transfer. The results demonstrate that adding a small amount of nanoparticles to the base fluid improves heat transfer behavior in conduits. Moreover, the conduit with a circular cross section performs better than those with square and triangular cross sections. However, conduits with square and triangular cross sections show greater relative heat transfer enhancement than the conduit with a circular cross section.

Keywords: nanofluid, cross-sectional shape, TiO2, convection

Procedia PDF Downloads 452
7759 Simulation of Utility Accrual Scheduling and Recovery Algorithm in Multiprocessor Environment

Authors: A. Idawaty, O. Mohamed, A. Z. Zuriati

Abstract:

This paper presents the development of an event-based Discrete Event Simulation (DES) for a recovery algorithm known as Backward Recovery Global Preemptive Utility Accrual Scheduling (BR_GPUAS). This algorithm implements the Backward Recovery (BR) mechanism as a fault recovery solution within the existing Time/Utility Function / Utility Accrual (TUF/UA) scheduling domain for multiprocessor environments. The BR mechanism takes faulty tasks back to their initial safe state and then re-executes the affected section of the faulty tasks to enable recovery. Considering that faults may occur in the components of any system, a fault tolerance mechanism that can nullify the erroneous effect is necessary. Current TUF/UA scheduling algorithms use an abortion recovery mechanism, simply aborting the erroneous task as their fault recovery solution. None of the existing algorithms in the TUF/UA multiprocessor scheduling domain has considered transient faults and implemented the BR mechanism to nullify the erroneous effect and solve the recovery problem in this domain. The developed BR_GPUAS simulator derives its set of parameters, events, and performance metrics from a detailed analysis of the base model. Simulation results reveal that the BR_GPUAS algorithm can save almost 20-30% of the accumulated utility, making it reliable and efficient for real-time applications in multiprocessor scheduling environments.
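
The BR_GPUAS simulator itself is not listed in the abstract; as an illustration of the underlying machinery only, here is a minimal event-driven simulation core of the kind such a DES is built on (the event tuple and handler interface are assumptions):

```python
import heapq
from itertools import count

def run_des(initial_events, handlers):
    """Minimal discrete event simulation core: a time-ordered queue of
    (time, kind, payload) events processed until empty. `handlers` maps
    an event kind to a function returning follow-up events; names are
    illustrative, not the BR_GPUAS interface."""
    tie = count()                                 # breaks ordering ties at equal times
    queue = [(t, next(tie), kind, payload) for t, kind, payload in initial_events]
    heapq.heapify(queue)
    clock = 0.0
    while queue:
        clock, _, kind, payload = heapq.heappop(queue)
        for t, k, p in handlers[kind](clock, payload):
            heapq.heappush(queue, (t, next(tie), k, p))
    return clock                                  # time of the last processed event
```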

Keywords: real-time system (RTS), time utility function/ utility accrual (TUF/UA) scheduling, backward recovery mechanism, multiprocessor, discrete event simulation (DES)

Procedia PDF Downloads 306
7758 A Blind Three-Dimensional Meshes Watermarking Using the Interquartile Range

Authors: Emad E. Abdallah, Alaa E. Abdallah, Bajes Y. Alskarnah

Abstract:

We introduce a robust three-dimensional watermarking algorithm for copyright protection and indexing. The basic idea behind our technique is to measure the interquartile range, i.e., the spread, of the 3D model vertices. The algorithm starts by converting all the vertices to spherical coordinates and partitioning them into small groups. The proposed algorithm then slightly alters the interquartile-range distribution of the small groups according to a predefined watermark. Experimental results on several 3D meshes demonstrate the perceptual invisibility and the robustness of the proposed technique against the most common attacks, including compression, noise, smoothing, scaling, and rotation, as well as combinations of these attacks.
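
A sketch of the statistic at the heart of the scheme: the interquartile range of the radial component once the vertices are expressed in spherical coordinates about the centroid (the grouping and watermark-embedding steps are omitted):

```python
import numpy as np

def vertex_iqr(vertices):
    """Interquartile range of the spherical radius of an N x 3 vertex
    array, measured about the mesh centroid; this is the spread the
    watermark slightly perturbs."""
    centered = vertices - vertices.mean(axis=0)
    r = np.linalg.norm(centered, axis=1)          # radial component
    q1, q3 = np.percentile(r, [25, 75])
    return q3 - q1
```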

Keywords: watermarking, three-dimensional models, perceptual invisibility, interquartile range, 3D attacks

Procedia PDF Downloads 474
7757 Analysis of the Effects of Vibrations on Tractor Drivers by Measurements With Wearable Sensors

Authors: Gubiani Rino, Nicola Zucchiatti, Da Broi Ugo, Bietresato Marco

Abstract:

The problem of vibrations in agriculture is very important because of the different types of machinery used on the different soils in which work is carried out. One of the most commonly used machines is the tractor, for which the phenomenon has long been studied through whole-body measurements with the sensor placed on the seat. However, this measurement setup does not take into account the characteristics of the drivers, such as their body mass index (BMI), their gender, or the muscle fatigue they are subjected to, which depends strongly on, for example, their age. The aim of the research was therefore to place sensors not only on the seat but also along the spinal column, to check the transmission of vibration to drivers of different BMI and gender, on different tractors and at different travel speeds. The test also used wearable sensors, such as a dynamometer applied to the muscles, whose data were correlated with the vibrations produced by the tractor. Initial data show that even on new tractors with pneumatic seats the vibrations attenuate little and remain correlated with the roughness of the track travelled and the forward speed. Other important data are the root-mean-square values referred to 8 hours (A(8)x,y,z) and the maximum transient vibration values (MTVVx,y,z); among the latter, the MTVVz values were problematic (the limiting factor in most cases) and were always aggravated by speed. The MTVVx values can be lowered by a tyre-pressure adjustment system able to set the tyre pressure appropriately for the specific situation (ground, speed) in which the tractor is operating.

Keywords: fatigue, effect vibration on health, tractor driver vibrations, vibration, muscle skeleton disorders

Procedia PDF Downloads 71
7756 Upgraded Cuckoo Search Algorithm to Solve Optimisation Problems Using Gaussian Selection Operator and Neighbour Strategy Approach

Authors: Mukesh Kumar Shah, Tushar Gupta

Abstract:

An Upgraded Cuckoo Search Algorithm is proposed here to solve optimization problems, building on the improvements made in earlier versions of the Cuckoo Search Algorithm. Shortcomings of the earlier versions, such as slow convergence and trapping in local optima, are addressed in the proposed version: solutions are initialized with an Improved Lambda Iteration Relaxation method, a Random Gaussian Distribution Walk improves the local search, Greedy Selection accelerates convergence to the optimized solution, and a "Study Nearby Strategy" improves global search performance by avoiding traps in local optima. Better solutions are further generated by a Crossover Operation. The proposed strategy shows superiority in convergence speed over several classical algorithms. Three standard algorithms were tested on a 6-generator standard test system, and the results presented clearly demonstrate the proposed algorithm's superiority over the established ones. The algorithm is also capable of handling larger unit systems.
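
A hedged sketch of two of the named ingredients, the Gaussian random walk for local search combined with greedy selection, applied to a population of candidate solutions ("nests"); the step size and interfaces are illustrative, and the Lambda Iteration initialization, Study Nearby Strategy, and crossover are omitted:

```python
import numpy as np

def gaussian_walk_step(nests, fitness, sigma=0.1, rng=None):
    """One local-search iteration: perturb each nest with a Gaussian
    random walk and keep the move only if it improves fitness (greedy
    selection). `fitness` is minimised; `nests` is a list of 1-D arrays."""
    rng = rng or np.random.default_rng()
    for i, nest in enumerate(nests):
        candidate = nest + sigma * rng.standard_normal(nest.shape)
        if fitness(candidate) < fitness(nest):    # greedy: accept improvements only
            nests[i] = candidate
    return nests
```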

Keywords: economic dispatch, gaussian selection operator, prohibited operating zones, ramp rate limits

Procedia PDF Downloads 130
7755 Optimized Control of Roll Stability of Missile using Genetic Algorithm

Authors: Pham Van Hung, Nguyen Trong Hieu, Le Quoc Dinh, Nguyen Kiem Chien, Le Dinh Hieu

Abstract:

The article focuses on automatic flight control of missiles during operation. The quality standards and characteristics of missile operation are very strict, requiring high stability and accurate response to commands over a relatively wide operating range. The study analyzes a linear transfer-function model of the missile roll channel to facilitate the development of control systems. A two-loop control structure for the roll channel is proposed, with the inner loop controlling the roll rate and the outer loop controlling the roll angle. To determine the optimal control parameters, a genetic algorithm is applied. The study uses MATLAB simulation software to implement the genetic algorithm and evaluate the quality of the closed-loop system. The results show that the system achieves better quality than the original structure and is simple, reliable, and ready for implementation in practical experiments.

Keywords: genetic algorithm, roll channel, two-loop control structure, missile

Procedia PDF Downloads 91
7754 Experimental Analysis of Laminar Nanofluid Flow Convection

Authors: Mohammad R. Salimpour

Abstract:

In this study, we investigate the laminar forced convective heat transfer characteristics of TiO2/water nanofluids through conduits with different cross sections. By designing and assembling an experimental apparatus, we examine the effects of parameters such as cross-sectional shape, Reynolds number, and the concentration of nanoparticles in stable suspension on the enhancement of convective heat transfer. The results demonstrate that adding a small amount of nanoparticles to the base fluid improves heat transfer behavior in conduits. Moreover, the conduit with a circular cross section performs better than those with square and triangular cross sections. However, conduits with square and triangular cross sections show greater relative heat transfer enhancement than the conduit with a circular cross section.

Keywords: nanofluid, cross-sectional shape, TiO2, convection

Procedia PDF Downloads 391
7753 Transition Pay vs. Liquidity Holdings: A Comparative Analysis on Consumption Smoothing using Bank Transaction Data

Authors: Nora Neuteboom

Abstract:

This study investigates household financial behavior during unemployment spells in the Netherlands using high-frequency transaction data in an event study specification integrated with propensity score matching. In our specification, treated individuals, who underwent job loss, are contrasted with non-treated individuals possessing comparable financial characteristics. The onset of unemployment triggers a substantial surge in income, primarily attributable to transition payments, but income swiftly drops post-unemployment, with unemployment benefits covering slightly over half of former salary earnings. Despite a re-employment rate of around one half within six months, the treatment group experiences a persistent average earnings reduction of approximately 600 EUR per month. Spending patterns fluctuate significantly, surging before unemployment due to transition payments and declining below those of non-treated individuals post-unemployment, indicating challenges in fully smoothing consumption after job loss. Furthermore, our study disentangles the effects of transition payments and liquidity holdings on spending, revealing that transition payments exert a more pronounced and prolonged impact on consumption smoothing than liquidity holdings. Transition payments significantly stimulate spending, particularly in the pin and iDEAL categories, in contrast to the much smaller relative spending impact of liquidity holdings.

Keywords: household consumption, transaction data, big data, propensity score matching

Procedia PDF Downloads 19
7752 Chaos Fuzzy Genetic Algorithm

Authors: Mohammad Jalali Varnamkhasti

Abstract:

Genetic algorithms have been very successful in handling difficult optimization problems, but their fundamental problem is premature convergence. This paper presents a new fuzzy genetic algorithm based on chaotic values instead of the random values used in genetic algorithm processes. In this algorithm, the initial population is generated from chaotic sequences, and a new sexual selection is proposed as the selection mechanism. In this technique, the population is divided so that males and females are selected in an alternating way, and the layout of the male and female chromosomes differs in each generation. A female chromosome is selected by tournament selection from the female group. The male chromosome is then selected in order of preference: by the maximum Hamming distance between the male and the female chromosome; by the highest fitness value (if more than one male chromosome attains the maximum Hamming distance); or by random selection. Crossover and mutation are governed by fuzzy logic controllers, with the crossover and mutation probabilities varied on the basis of the phenotype and genotype characteristics of the chromosome population. Computational experiments were conducted on the proposed techniques, and the results are compared with other operators and with heuristic and local search algorithms commonly used for solving p-median problems published in the literature.
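
A sketch of two pieces of the proposal: seeding a binary population from a logistic-map chaotic sequence instead of a uniform random generator, and the first mate-selection rule (maximum Hamming distance). The map parameters are illustrative; the abstract does not specify which chaotic map or fuzzy controllers are used:

```python
import numpy as np

def chaotic_population(n_chrom, n_genes, x0=0.7, mu=4.0):
    """Initialise a binary GA population from a logistic-map chaotic
    sequence; mu = 4 gives fully chaotic behaviour on (0, 1)."""
    x = x0
    pop = np.empty((n_chrom, n_genes), dtype=int)
    for i in range(n_chrom):
        for j in range(n_genes):
            x = mu * x * (1.0 - x)                # logistic map iteration
            pop[i, j] = int(x > 0.5)              # threshold to a bit
    return pop

def pick_male(males, female):
    """First selection rule: the male at maximum Hamming distance from
    the tournament-selected female."""
    dists = (males != female).sum(axis=1)
    return males[np.argmax(dists)]
```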

Keywords: genetic algorithm, fuzzy system, chaos, sexual selection

Procedia PDF Downloads 385
7751 Design of Permanent Sensor Fault Tolerance Algorithms by Sliding Mode Observer for Smart Hybrid Powerpack

Authors: Sungsik Jo, Hyeonwoo Kim, Iksu Choi, Hunmo Kim

Abstract:

In the SHP, an LVDT sensor detects the length changes of the EHA output, and the thrust of the EHA is controlled via a pressure sensor. Sensors can suffer hardware faults caused by internal problems or external disturbances, and the EHA of the SHP can become uncontrollable when it is controlled by feedback from uncertain information. In this paper, a sliding mode observer algorithm estimates the original sensor output in the presence of a permanent sensor fault. The proposed algorithm recovers from disconnection and short-circuit faults and also detects various sensor fault modes.
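
The abstract gives no plant model; the following scalar sketch illustrates the sliding-mode-observer idea, in which a switching term drives the estimation error to zero so the estimate can stand in for a faulted sensor reading. The plant, gains, and NaN fault encoding are assumptions, not the SHP's EHA model:

```python
import numpy as np

def smo_estimate(y_meas, u, dt, a=-5.0, b=2.0, L=1.0, k=3.0):
    """Sliding mode observer for a scalar plant x' = a*x + b*u with
    y = x. A faulty sample is encoded as NaN, in which case no
    correction is applied and the observer runs open loop on the model."""
    x_hat = 0.0
    estimates = []
    for yk, uk in zip(y_meas, u):
        e = yk - x_hat if not np.isnan(yk) else 0.0
        # linear correction plus switching term drives the error to zero
        x_hat += dt * (a * x_hat + b * uk + L * e + k * np.sign(e))
        estimates.append(x_hat)
    return np.array(estimates)
```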

Keywords: smart hybrid powerpack (SHP), electro hydraulic actuator (EHA), permanent sensor fault tolerance, sliding mode observer (SMO), graphic user interface (GUI)

Procedia PDF Downloads 548
7750 Reliability Analysis: A Case Study in Designing Power Distribution System of Tehran Oil Refinery

Authors: A. B. Arani, R. Shojaee

Abstract:

The electrical power distribution system is one of the vital infrastructures of an oil refinery and requires extensive study and planning before construction. In this paper, the power distribution reliability of Tehran Refinery's KHDS/GHDS unit is examined to show the importance of such studies and to evaluate the designed system. The authors chose and evaluated different configurations of electrical power distribution, along with the existing configuration, with the aim of finding the configuration that best satisfies the conditions of minimum electrical system construction cost, minimum cost imposed by loss of load, and maximum power system reliability.

Keywords: power distribution system, oil refinery, reliability, investment cost, interruption cost

Procedia PDF Downloads 876
7749 On Compression Properties of Honeycomb Structures Using Flax/PLA Composite as Core Material

Authors: S. Alsubari, M. Y. M. Zuhri, S. M. Sapuan, M. R. Ishaks

Abstract:

Sandwich structures based on cellular cores are increasingly used as energy-absorbing components in industry; however, determining ideal structural configurations remains challenging. This study compares the compression properties of flax fiber-reinforced polylactic acid (PLA) sandwich structures with an empty honeycomb core, a foam-filled honeycomb core, and a double-cell-wall square interlocking core under quasi-static compression loading. The square interlocking core is fabricated by a slotting technique, whereas the honeycomb core is made with a corrugated mold, initially used to create the corrugated composite profile, which is then cut into corrugated webs and assembled into the honeycomb core. The sandwich structures are tested at a crosshead displacement rate of 2 mm/min. The experimental results showed that the honeycomb outperformed the square interlocking core in strength capability and specific energy absorption (SEA) by around 14% and 34%, respectively. The foam-filled honeycomb collapses in a progressive mode, exhibiting noticeable advantages over the empty honeycomb; this is attributed to the interaction between the honeycomb wall and the foam filler. Interestingly, the average SEAs of the foam-filled and empty honeycomb cores show no significant difference, at around 8.7 kJ/kg and 8.2 kJ/kg, respectively. In contrast, the difference in strength capability is clearly pronounced: the foam-filled core outperforms the empty counterpart by around 33%. Finally, the results for the empty and foam-filled cores were significantly superior to those of the aluminum cores published in the literature.

Keywords: compressive strength, flax, honeycomb core, specific energy absorption

Procedia PDF Downloads 83
7748 Identification of Architectural Design Error Risk Factors in Construction Projects Using IDEF0 Technique

Authors: Sahar Tabarroki, Ahad Nazari

Abstract:

The design process is one of the key project processes in the construction industry. Although architects have the responsibility to produce complete, accurate, and coordinated documents, architectural design is accompanied by many errors. A design error occurs when the constraints and requirements of the design are not satisfied. Errors are potentially costly and time-consuming to correct if not caught early in the design phase, and they become expensive once they reach the construction documents or the construction phase. The aim of this research is to identify the risk factors of architectural design errors, so the identification of risks is necessary. First, a literature review of the design process was conducted, and then a questionnaire was designed to identify the risks and risk factors. The questionnaire items were based on the "similar service description of study and supervision of architectural works" published by the "Vice Presidency of Strategic Planning & Supervision of I.R. Iran" as the basis of architects' tasks. Second, the top 10 risks of architectural activities were identified. To determine the positions of possible causes of risks with respect to architectural activities, these activities were located in a design process modeled with the IDEF0 technique. The research was carried out by choosing a case study, checking the design drawings, interviewing its architect and client, and providing a checklist to identify concrete examples of architectural design errors. The results revealed that activities such as "defining the current and future requirements of the project", "studies and space planning", and "time and cost estimation of the suggested solution" have a higher error risk than others. Moreover, the most important causes include "unclear goals of a client", "time pressure from a client", and "lack of knowledge of architects about the requirements of end-users". For error detection in the case study, the lack of criteria, standards, and design criteria, and the lack of coordination among them, was a barrier; nevertheless, "lack of coordination between architectural design and electrical and mechanical facilities", "violation of standard dimensions and sizes in space design", and "design omissions" were identified as the most important design errors.

Keywords: architectural design, design error, risk management, risk factor

Procedia PDF Downloads 130
7747 Identifying Protein-Coding and Non-Coding Regions in Transcriptomes

Authors: Angela U. Makolo

Abstract:

Protein-coding and Non-coding regions determine the biology of a sequenced transcriptome. Research advances have shown that Non-coding regions are important in disease progression and clinical diagnosis, yet existing bioinformatics tools have targeted Protein-coding regions alone, creating challenges in gaining biological insight from transcriptome sequence data. These tools are also limited to computationally intensive sequence alignment, which is inadequate and less accurate for identifying both Protein-coding and Non-coding regions; alignment-free techniques can overcome this limitation. This study was therefore designed to develop an efficient sequence alignment-free model for identifying both Protein-coding and Non-coding regions in sequenced transcriptomes. Feature grouping and randomization procedures were applied to the input transcriptomes (37,503 data points). Successive iterations were carried out to compute the gradient vector that converged the developed Protein-coding and Non-coding Region Identifier (PNRI) model to the approximate coefficient vector. The logistic regression algorithm was used with a sigmoid activation function, and a parameter vector was estimated for every sample in the 37,503 data points to reduce the generalization error and cost. Maximum Likelihood Estimation (MLE) was used for parameter estimation by taking the log-likelihood of six features and combining them into a summation function. Dynamic thresholding was used to classify the Protein-coding and Non-coding regions, and the Receiver Operating Characteristic (ROC) curve was determined. The generalization performance of PNRI was assessed in terms of F1 score, accuracy, sensitivity, and specificity, and its average generalization performance was determined using a benchmark of multi-species organisms. The generalization error for identifying Protein-coding and Non-coding regions decreased from 0.514 to 0.508 and then to 0.378 over three iterations. The cost (the difference between the predicted and the actual outcome) also decreased, from 1.446 to 0.842 and then to 0.718, over the first, second, and third iterations. The iterations terminated at the 390th epoch with an error of 0.036 and a cost of 0.316. The computed elements of the parameter vector that maximized the objective function were 0.043, 0.519, 0.715, 0.878, 1.157, and 2.575. The PNRI gave an ROC of 0.97, indicating improved predictive ability, and identified both Protein-coding and Non-coding regions with an F1 score of 0.970, accuracy of 0.969, sensitivity of 0.966, and specificity of 0.973. On 13 non-human multi-species model organisms, the average generalization performance of the traditional method was 74.4%, while that of the developed model was 85.2%, making the developed model better at identifying Protein-coding and Non-coding regions in transcriptomes. The developed model efficiently identified the Protein-coding and Non-coding transcriptomic regions and could be used in genome annotation and in the analysis of transcriptomes.
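
A compact sketch of the learning core described above: logistic regression fitted by gradient steps on the log-likelihood with a sigmoid activation, then classification at a tunable threshold. The learning rate and helper names are illustrative, not the PNRI implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=390):
    """Gradient ascent on the log-likelihood of a logistic model over
    the feature matrix X (e.g. six features per sample) and binary
    labels y; the gradient of the log-likelihood is X^T (y - p)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = sigmoid(X @ w)
        w += lr * X.T @ (y - p) / len(y)          # MLE gradient step
    return w

def classify(X, w, threshold):
    """Dynamic thresholding: label a region coding (1) or non-coding (0)
    at a data-driven cutoff rather than a fixed 0.5."""
    return (sigmoid(X @ w) >= threshold).astype(int)
```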

Keywords: sequence alignment-free model, dynamic thresholding classification, input randomization, genome annotation

Procedia PDF Downloads 68
7746 A Nonlocal Means Algorithm for Poisson Denoising Based on Information Geometry

Authors: Dongxu Chen, Yipeng Li

Abstract:

This paper presents an information-geometric Nonlocal Means (NLM) algorithm for Poisson denoising. NLM estimates a noise-free pixel as a weighted average of image pixels, where each pixel is weighted according to the similarity between image patches in Euclidean space. In this work, every pixel is modeled as a Poisson distribution locally estimated by Maximum Likelihood (ML), and all these distributions form a statistical manifold. NLM denoising is conducted on the statistical manifold, where the Fisher information matrix is used to compute geodesic distances between distributions, which serve as the similarity measure between patches. This approach is demonstrated to be competitive with related state-of-the-art methods.
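
For the Poisson family the Fisher-Rao geodesic distance has a closed form (the Fisher information of a Poisson distribution with mean lambda is 1/lambda), which makes the patch similarity cheap to compute. A minimal sketch, with the weight kernel and the local ML estimate simplified:

```python
import numpy as np

def poisson_geodesic(lam1, lam2):
    """Fisher-Rao geodesic distance between Poisson(lam1) and
    Poisson(lam2): integrating sqrt(1/lambda) d(lambda) gives
    d = 2 * |sqrt(lam1) - sqrt(lam2)|."""
    return 2.0 * np.abs(np.sqrt(lam1) - np.sqrt(lam2))

def nlm_weight(patch_a, patch_b, h=1.0):
    """NLM similarity weight with the patchwise geodesic distance in
    place of the usual Euclidean one; here each pixel's intensity is
    taken as the local ML estimate of its Poisson mean."""
    d2 = np.sum(poisson_geodesic(patch_a, patch_b) ** 2)
    return np.exp(-d2 / (h ** 2))
```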

Keywords: image denoising, Poisson noise, information geometry, nonlocal-means

Procedia PDF Downloads 285
7745 Time and Wavelength Division Multiplexing Passive Optical Network Comparative Analysis: Modulation Formats and Channel Spacings

Authors: A. Fayad, Q. Alqhazaly, T. Cinkler

Abstract:

In light of the substantial increase in end-user requirements and the incessant need of network operators to upgrade the capabilities of access networks, this paper examines and compares the performance of different modulation formats in an eight-channel Time and Wavelength Division Multiplexing Passive Optical Network (TWDM-PON) transmission system. The limitations and features of the modulation formats are determined to outline the most suitable design for enhancing the data rate and transmission reach and obtaining the best network performance. The considered modulation formats are Non-Return-to-Zero On-Off Keying (NRZ-OOK), Carrier-Suppressed Return-to-Zero (CSRZ), Duobinary (DB), Modified Duobinary (MODB), Quadrature Phase Shift Keying (QPSK), and Differential Quadrature Phase Shift Keying (DQPSK). The performance is analyzed for varying transmission distances and bit rates under different channel spacings. Furthermore, the system is evaluated in terms of minimum Bit Error Rate (BER) and Quality factor (Qf) without applying any dispersion compensation technique or optical amplifier. OptiSystem software was used for the simulations.
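
The two figures of merit are linked, in the usual Gaussian-noise approximation, by BER = 0.5 * erfc(Q / sqrt(2)), so either can be derived from the other; a small conversion sketch:

```python
from math import sqrt, log10
from scipy.special import erfcinv

def q_factor_db(ber):
    """Convert a measured bit error rate to the Q factor in dB via the
    Gaussian-noise relation BER = 0.5 * erfc(Q / sqrt(2))."""
    q = sqrt(2.0) * erfcinv(2.0 * ber)
    return 20.0 * log10(q)

print(q_factor_db(1e-9))   # ~15.6 dB, a common system target
```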

Keywords: BER, DuoBinary, NRZ-OOK, TWDM-PON

Procedia PDF Downloads 149
7744 Efficacy of Nemafric-BL Phytonematicide on Suppression of Root-Knot Nematodes and Growth of Tomato Plants

Authors: Pontsho E. Tseke, Phatu W. Mashela

Abstract:

Cucurbitacin-containing phytonematicides have been consistent in suppressing root-knot nematodes (Meloidogyne species) when used in dried crude form, with limited evidence on whether their efficacy is affected when fresh fruits are used during fermentation. The objective of this study was to determine the influence of Nemafric-BL phytonematicide, prepared from fermented crude extracts of fresh fruit of wild watermelon (Cucumis africanus), on the growth of tomato (Solanum lycopersicum) plants and the suppression of Meloidogyne species. Seedlings of tomato cultivar 'Floradade' were inoculated with 3,000 eggs and second-stage juveniles (J2) of M. incognita race 2 in pot trials, with treatments comprising 0, 2, 4, 8, 16, 32 and 64% Nemafric-BL phytonematicide. At 56 days after inoculation, the phytonematicide reduced eggs and J2 in roots by 84-97%, J2 in soil by 49-96%, and total nematodes by 70-97%. Plant variables and concentrations of Nemafric-BL phytonematicide exhibited positive quadratic relations, with 74-98% associations. In conclusion, fresh fruit of C. africanus could be used to prepare Nemafric-BL phytonematicide, particularly where drying infrastructure is not available.

Keywords: Cucurbitacin B, density-dependent growth, effective microorganisms, quadratic relations

Procedia PDF Downloads 185
7743 Feature Location Restoration for Under-Sampled Photoplethysmogram Using Spline Interpolation

Authors: Hangsik Shin

Abstract:

The purpose of this research is to restore the feature locations of an under-sampled photoplethysmogram using spline interpolation and to investigate the feasibility of feature shape restoration. We obtained a 10 kHz-sampled photoplethysmogram and decimated it to generate under-sampled datasets with sampling frequencies of 5 kHz, 2.5 kHz, 1 kHz, 500 Hz, 250 Hz, 25 Hz, and 10 Hz. To investigate the restoration performance, we interpolated the under-sampled signals back to 10 kHz and compared the feature locations with those of the original 10 kHz photoplethysmogram. The features were the upper and lower peaks of the photoplethysmography waveform. Results showed that the time differences were dramatically decreased by interpolation, with a location error of less than 1 ms for both feature types. In the 10 Hz-sampled cases the location error also decreased considerably, but it remained over 10 ms.
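
A minimal sketch of the restoration step described above, using a cubic spline to upsample a decimated waveform and re-locate its upper peak; SciPy's CubicSpline stands in here, since the abstract does not name a specific spline variant:

```python
import numpy as np
from scipy.interpolate import CubicSpline

def restore_peak(t_low, x_low, fs_target=10_000):
    """Fit a cubic spline to an under-sampled waveform (times t_low,
    samples x_low), resample it at fs_target Hz, and return the
    interpolated upper-peak time."""
    cs = CubicSpline(t_low, x_low)
    t_fine = np.arange(t_low[0], t_low[-1], 1.0 / fs_target)
    x_fine = cs(t_fine)
    return t_fine[np.argmax(x_fine)]              # restored peak location
```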

Keywords: peak detection, photoplethysmography, sampling, signal reconstruction

Procedia PDF Downloads 368
7742 Reducing Per-and Polyfluoroalkyl Substances (PFAS) Water Contamination with Mycorrhizal Hydroponics Plants

Authors: Neel Ahuja

Abstract:

Per- and polyfluoroalkyl substances (PFAS), known as "forever chemicals", are among the most common and dangerous water pollutants, having carcinogenic effects and causing 382,000 global deaths annually. Current methods to purify PFAS-contaminated water can cost millions of dollars and require existing infrastructure, making them difficult to implement in low-income and rural areas without industrial treatment plants. Hydroponics plants colonized by beneficial mycorrhizal fungi present an affordable and sustainable alternative for purifying PFAS-contaminated water. In this study, mycorrhizal-inoculated basil and lettuce plants were cultivated in hydroponics systems under controlled conditions. Root samples were stained and analyzed under a light microscope to confirm mycorrhizal presence. PFAS was added to the systems, and an LC/QQQ-MS instrument was used to measure the reduction in PFAS concentrations over 72 hours. Results showed that mycorrhizal plants removed 71.1% of PFAS in a water system compared to 59.9% by non-mycorrhizal plants, and a t-test (p-value = 0.00367) established statistical significance. The relative health of the plants was measured through root length, revealing that mycorrhizal plant roots were on average 2.8 inches longer than non-mycorrhizal roots. Further analysis revealed a direct relationship between plant root length and PFAS purification, indicating the suitability of species with naturally longer roots for real-world phytoremediation applications, such as at stormwater detention ponds. This study provides a proof of concept of the effectiveness of mycorrhizal hydroponics plants in reducing PFAS contamination in water systems, with applications as an inexpensive and large-scale purification system.

Keywords: Perfluoroalkyl and polyfluoroalkyl substances, hydroponics, mycorrhizal fungi, water contamination, stormwater detention ponds

Procedia PDF Downloads 16
7741 A Network Economic Analysis of Friendship, Cultural Activity, and Homophily

Authors: Siming Xie

Abstract:

In social networks, the term homophily refers to the tendency of agents with similar characteristics to link with one another, and it is robustly observed across many contexts and dimensions. The starting point of this research is the observation that the "type" of an agent is not a single exogenous variable: agents, despite their differences in race, religion, and other hard-to-alter characteristics, may share interests and engage in activities that cut across those predetermined lines. This research aims to capture the interaction of homophily effects in a model where agents have two-dimensional characteristics (e.g., race and a personal hobby such as basketball, which one either likes or dislikes), with biases in meeting opportunities and in favor of same-type friendships. A novel feature of the model is a matching process with biased meeting probabilities along the different dimensions, which helps in understanding the structuring process in multidimensional networks without missing layer interdependencies. The main contribution of this study is a welfare-based matching process for agents with multi-dimensional characteristics. In particular, the research shows that biases in meeting opportunities along one dimension lead to the emergence of homophily along the other dimension. The objective is to determine the pattern of homophily in network formation, shedding light on our understanding of segregation and its remedies. By constructing a two-dimensional matching process, the study explores a method to describe agents' homophilous behavior in a multidimensional social network and constructs a game in which minorities and majorities play different strategies in a society. It also shows that the optimal strategy is determined by relative group size: society suffers more from social segregation when the two racial groups are of similar size. The research also has policy implications: cultivating shared characteristics among agents helps diminish social segregation, but only if the minority group is small enough. The research includes both theoretical models and empirical analysis. After setting out the friendship formation model, the author first uses MATLAB to perform iterative calculations, then derives the corresponding mathematical proofs, and finally shows that the model is consistent with empirical evidence on high school friendships. The anonymized data come from the National Longitudinal Study of Adolescent Health (Add Health).

Keywords: homophily, multidimension, social networks, friendships

Procedia PDF Downloads 170
7740 Behavior of Square Reinforced-Concrete Columns Strengthened with Carbon Fiber Reinforced Polymers under Eccentric Loading

Authors: Dana J. Abed, Mu'tasim S. Abdel-Jaber, Nasim K. Shatarat

Abstract:

In this paper, an experimental study of twelve square columns was conducted to investigate the influence of cross-sectional size on the axial compressive capacity of carbon fiber reinforced polymer (CFRP)-wrapped square reinforced concrete (RC) short columns subjected to eccentric loading. The columns were divided into three groups with three cross sections (200×200×1200, 250×250×1500 and 300×300×1800 mm). Each group was tested under two different eccentricities, 10% and 20% of the specimen width measured from the center of the column cross section, with four columns in each arrangement: two columns in each category were left unwrapped as control samples, and two were wrapped with one layer of CFRP perpendicular to the specimen surface. In general, the CFRP sheets enhanced the performance of the strengthened columns compared with the control columns. The percentage of compressive capacity enhancement decreased as the cross-sectional size increased, and increasing the loading eccentricity generally reduced the load-bearing capacity of the columns. Within the same group of specimens, the percentage enhancement in load-carrying capacity increased with eccentricity. The study concludes that the optimal use of CFRP sheets for axial strength enhancement is on smaller-cross-section columns under higher eccentricities.

Keywords: CFRP, columns, eccentric loading, cross-sectional

Procedia PDF Downloads 175