Search results for: light weight algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10744

9514 HR MRI CS Based Image Reconstruction

Authors: Krzysztof Malczewski

Abstract:

A Magnetic Resonance Imaging (MRI) reconstruction algorithm using compressed sensing is presented in this paper. The proposed approach is shown to improve the spatial resolution of MR images when highly undersampled k-space trajectories are applied. Compressed Sensing (CS) aims to reconstruct signals and images from significantly fewer measurements than were conventionally assumed necessary. MRI is a fundamental medical imaging method that struggles with an inherently slow data acquisition process. Applying CS to MRI offers the potential for significant scan time reductions, with visible benefits for patients and health care economics. The objective of this study is to combine a super-resolution image enhancement algorithm with the benefits of the CS framework to achieve a high-resolution MR output image. Both methods emphasize maximizing image sparsity in a known sparse transform domain while enforcing fidelity to the measured data. The presented algorithm also accounts for cardiac and respiratory motion.
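The sparsity-plus-fidelity recovery described in the abstract can be sketched with a minimal iterative soft-thresholding loop. This is a 1D illustration under our own simplifying assumption that the image itself is the sparse domain; it is not the authors' super-resolution algorithm, and in practice `mask` would be the undersampled k-space trajectory.

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the l1 norm: the sparsity-promoting step."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def cs_recover(y, mask, lam=0.05, iters=200):
    """Alternate a data-consistency step on the sampled k-space lines
    with soft-thresholding in the domain assumed sparse (here, the
    image domain itself)."""
    x = np.zeros(len(y))
    for _ in range(iters):
        k = np.fft.fft(x)
        k[mask] = y[mask]                 # keep the measured k-space lines
        x = soft_threshold(np.fft.ifft(k).real, lam)
    return x

# Sanity check in the fully sampled limit: a single-spike "image" is
# recovered (shrunk by lam) at the correct location.
truth = np.zeros(32)
truth[5] = 1.0
recon = cs_recover(np.fft.fft(truth), np.ones(32, dtype=bool))
print(int(np.argmax(recon)))  # → 5
```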

Keywords: super-resolution, MRI, compressed sensing, sparse-sense, image enhancement

Procedia PDF Downloads 431
9513 Triangulations via Iterated Largest Angle Bisection

Authors: Yeonjune Kang

Abstract:

A triangulation of a planar region is a partition of that region into triangles. In the finite element method, triangulations are often used as the grid underlying a computation. In order to be suitable as a finite element mesh, a triangulation must have well-shaped triangles, according to criteria that depend on the details of the particular problem. For instance, most methods require that all triangles be small and as close to the equilateral shape as possible. Stated differently, one wants to avoid having either thin or flat triangles in the triangulation. There are many triangulation procedures, a particular one being the longest edge bisection algorithm described below. Starting with a given triangle, locate the midpoint of the longest edge and join it to the opposite vertex of the triangle. Two smaller triangles are formed; apply the same bisection procedure to each of these triangles. Continuing in this manner, after n steps one obtains a triangulation of the initial triangle into 2^n smaller triangles. The longest edge algorithm was first considered in the late 1970s. It was shown by various authors that this triangulation has the desirable properties for the finite element method: independently of the number of iterations, the angles of these triangles cannot get too small; moreover, the size of the triangles decays exponentially. In the present paper we consider a related triangulation algorithm we refer to as the largest angle bisection procedure. As the name suggests, rather than bisecting the longest edge, at each step we bisect the largest angle. We study the properties of the resulting triangulation and prove that, while the general behavior resembles the one in the longest edge bisection algorithm, there are several notable differences as well.
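The longest edge bisection step described above can be sketched directly (a minimal illustration; the function name and representation are ours, not from the paper):

```python
import math

def longest_edge_bisection(tri, depth):
    """Recursively bisect a triangle at the midpoint of its longest edge,
    joining that midpoint to the opposite vertex; after `depth` steps the
    initial triangle is partitioned into 2**depth smaller triangles."""
    if depth == 0:
        return [tri]
    a, b, c = tri
    # Reorder so that (p, q) is the longest edge and r the opposite vertex.
    p, q, r = max(((a, b, c), (b, c, a), (c, a, b)),
                  key=lambda e: math.dist(e[0], e[1]))
    m = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)   # midpoint of longest edge
    return (longest_edge_bisection((p, m, r), depth - 1) +
            longest_edge_bisection((m, q, r), depth - 1))

pieces = longest_edge_bisection(((0, 0), (1, 0), (0, 1)), 3)
print(len(pieces))  # → 8
```

The largest angle bisection variant studied in the paper would instead locate the largest interior angle and split it, which changes where the new vertex lands on the opposite edge.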

Keywords: angle bisectors, geometry, triangulation, applied mathematics

Procedia PDF Downloads 402
9512 Traditional Drawing, BIM and Erudite Design Process

Authors: Maryam Kalkatechi

Abstract:

Nowadays, parametric design, scientific analysis, and digital fabrication are dominant. Many architectural practices increasingly seek to incorporate advanced digital software and fabrication in their projects. The erudite design process proposed here, which combines digital and practical aspects within a strong methodological frame, resulted from the author's dissertation research. The digital aspects are the progressive advancements in algorithmic design and simulation software. These aspects have helped firms develop more holistic concepts at the early stage and maintain collaboration among disciplines during the design process. The erudite design process enhances current design processes by encouraging the designer to embed construction and architectural knowledge within the algorithm. It also involves the ongoing improvements in applying 3D printing to construction. This is achieved through 'data-sketches'. The term 'data-sketch' was coined by the author in the recently completed dissertation; it captures the architect's decisions on the algorithm. This paper introduces the erudite design process and its components and summarizes the application of the process in the development of the '3D printed construction unit'. The paper contributes to bridging academia and practice with advanced technology by presenting a design process that shifts dominance from the tool to the learned architect and encourages innovation in design processes.

Keywords: erudite, data-sketch, algorithm design in architecture, design process

Procedia PDF Downloads 276
9511 Facial Biometric Privacy Using Visual Cryptography: A Fundamental Approach to Enhance the Security of Facial Biometric Data

Authors: Devika Tanna

Abstract:

'Biometrics' means 'life measurement', but the term is usually associated with the use of unique physiological characteristics to identify an individual. It is important to secure the privacy of digital face images stored in a central database. To impart privacy to such biometric face images, the digital face image is first split into two host face images such that neither reveals the existence of the original face image; each cover image is then stored in a different, geographically separate database. The original image can be accessed only when both cover images are available simultaneously. This is achieved using the XM2VTS and IMM face databases and an adaptive algorithm for spatial greyscale images. The algorithm selects the host images that are most likely to be compatible with the secret image stored in the central database, based on its geometry and appearance. The encryption is done using GEVCS, which results in a reconstructed image identical to the original private image.
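The core splitting idea, neither share alone revealing the secret while both together reconstruct it exactly, can be illustrated with a one-time-pad XOR scheme. This is a deliberately simplified stand-in, not the GEVCS visual cryptography used in the paper:

```python
import secrets

def split_into_shares(image_bytes):
    """Split a secret image (raw greyscale bytes) into two host shares by
    one-time-pad XOR: each share alone is uniformly random and reveals
    nothing about the secret."""
    share1 = bytes(secrets.randbits(8) for _ in image_bytes)
    share2 = bytes(a ^ b for a, b in zip(image_bytes, share1))
    return share1, share2

def reconstruct(share1, share2):
    """Recombining both shares restores the secret exactly."""
    return bytes(a ^ b for a, b in zip(share1, share2))

secret = bytes(range(16))          # stand-in for face-image pixel data
s1, s2 = split_into_shares(secret)
print(reconstruct(s1, s2) == secret)  # → True
```

GEVCS additionally hides each share inside a natural-looking host image, which this sketch does not attempt.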

Keywords: adaptive algorithm, database, host images, privacy, visual cryptography

Procedia PDF Downloads 131
9510 A Non-Parametric Based Mapping Algorithm for Use in Audio Fingerprinting

Authors: Analise Borg, Paul Micallef

Abstract:

Over the past few years, online multimedia collections have grown at a fast pace. Several companies have shown interest in studying ways to organize this amount of audio information without the need for human intervention to generate metadata. In the past few years, many applications capable of identifying a piece of music in a short time have emerged on the market. Different audio effects and degradations make it much harder to identify an unknown piece. In this paper, an audio fingerprinting system which makes use of a non-parametric algorithm is presented. Parametric analysis is also performed using Gaussian Mixture Models (GMMs). The feature extraction methods employed are the Mel Spectrum Coefficients and the MPEG-7 basic descriptors. Bin numbers replaced the extracted feature coefficients during the non-parametric modelling. The results show that non-parametric analysis offers results comparable to those reported in the literature.
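The replacement of raw feature coefficients by bin numbers can be sketched as a simple quantization step (the bin edges here are illustrative, not those used by the authors):

```python
import bisect

def to_bin_numbers(coeffs, edges):
    """Map each feature coefficient to the index of the histogram bin it
    falls into, so later modelling works on discrete bin numbers rather
    than raw coefficient values."""
    return [bisect.bisect_right(edges, c) for c in coeffs]

print(to_bin_numbers([0.1, 0.6, 1.4], edges=[0.0, 0.5, 1.0]))  # → [1, 2, 3]
```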

Keywords: audio fingerprinting, mapping algorithm, Gaussian Mixture Models, MFCC, MPEG-7

Procedia PDF Downloads 423
9509 Effect of Different Feed Composition on the Growth Performance in Early Weaned Piglets

Authors: Obuzor Eze Obuzor, Ekpoke Okurube Sliver

Abstract:

The study was carried out at Debee Farms in Ahoada West Local Government Area, Rivers State, Nigeria, to evaluate the impact of two different cost-effective, locally available feed compositions on the growth performance of weaned piglets. Thirty uncontrolled crossbred (Large White x Pietrain) piglets with an average initial weight of 3.04 kg, weaned at 30 days, were assigned to three dietary treatments comprising three replicates of 10 weaned piglets each; piglets were kept at 7 °C in different pens with dimensions of 4.50 × 4.50 m. The experiment used a completely randomized design; data from the study were subjected to one-way analysis of variance (ANOVA), and significant means were separated using Duncan's Multiple Range Test in the Statistical Analysis System (SAS) software for Windows (2003). Statistical significance was assessed at P < 0.05 (95% confidence interval), while survival rate was calculated as a simple percentage. A standard diet was prepared to meet the nutrient requirements of weaned piglets (20.8% crude protein). The three diets were fed to the animals in concrete feeding troughs: the control diet (C) contained soybean meal, while the first treatment contained spent grain (T1) and the second treatment contained wheat offal (T2). The experiment was partitioned into four weekly periods (days 1-7, 8-14, 15-21 and 22-28). Feed and water were given unrestrictedly throughout the experiment. The feed intake and weights of the pigs were recorded on a weekly basis; feed conversion ratio and daily weight gain were calculated, and the study lasted four weeks. There was no significant (P>0.05) effect of diet on survival rate, final body weight, average daily weight gain, daily feed intake or feed conversion ratio.
The overall performance showed that treatment one (T1) had a survival rate of 93%, improved daily weight gain (36.21 g), an average daily feed intake of 120.14 g, and the best feed conversion ratio (0.29), with mean values similar to the control, while treatment two (T2) had the lowest values and responded poorly in all parameters. It can be concluded that feed formulated with spent grain is cheaper than the control (soybean meal) and also improves the growth performance of weaned piglets.

Keywords: piglets, weaning, feed conversion ratio, daily weight gain

Procedia PDF Downloads 66
9508 Distribution System Planning with Distributed Generation and Capacitor Placements

Authors: Nattachote Rugthaicharoencheep

Abstract:

This paper presents a feeder reconfiguration problem in distribution systems. The objective is to minimize the system power loss and to improve the bus voltage profile. The optimization problem is subject to system constraints consisting of load-point voltage limits, radial configuration format, no load-point interruption, and feeder capability limits. A method based on a genetic algorithm, a search algorithm based on the mechanics of natural selection and natural genetics, is proposed to determine the optimal pattern of configuration. The developed methodology is demonstrated on a 33-bus radial distribution system with distributed generations and feeder capacitors. The study results show that the optimal on/off patterns of the switches can be identified to give the minimum power loss while respecting all the constraints.

Keywords: network reconfiguration, distributed generation, capacitor placement, loss reduction, genetic algorithm

Procedia PDF Downloads 178
9507 A Comparison of Sequential Quadratic Programming, Genetic Algorithm, Simulated Annealing, Particle Swarm Optimization for the Design and Optimization of a Beam Column

Authors: Nima Khosravi

Abstract:

This paper describes an integrated optimization study with concurrent use of sequential quadratic programming (SQP), a genetic algorithm, simulated annealing, and particle swarm optimization for the design and optimization of a beam column. In this research, these four optimization methods are compared. The comparison shows that all the methods meet the required constraints; the lowest value of the objective function is achieved by SQP, which was also the fastest optimizer to produce results. SQP is a gradient-based optimizer, so its results are usually the same after every run; the only thing that affects the results is the initial conditions. The initial conditions in the various test runs differed widely, so the solutions converged to different points. The remaining methods are heuristic and provide different values on different runs even when every parameter is kept constant.

Keywords: beam column, genetic algorithm, particle swarm optimization, sequential quadratic programming, simulated annealing

Procedia PDF Downloads 386
9506 Physical and Physiological Characteristics of Young Soccer Players in Republic of Macedonia

Authors: Sanja Manchevska, Vaska Antevska, Lidija Todorovska, Beti Dejanova, Sunchica Petrovska, Ivanka Karagjozova, Elizabeta Sivevska, Jasmina Pluncevic Gligoroska

Abstract:

Introduction: A number of positive effects on the player's physical status, including the body mass components, are attributed to the training process. As young soccer players grow up, qualitative and quantitative changes appear and contribute to better performance. Players' anthropometric and physiological characteristics are recognized as important determinants of performance. Material: A sample of 52 soccer players aged 9 to 14 years was divided into two groups differentiated by age. The younger group consisted of 25 boys under 11 years (mean age 10.2) and the second group consisted of 27 boys with a mean age of 12.64. Method: The set of basic anthropometric parameters was analyzed: height, weight, BMI (Body Mass Index) and body mass components. Maximal oxygen uptake was tested using the Bruce treadmill protocol. Results: The group aged under 11 years showed the following anthropometric and physiological features: average height = 143.39 cm, average weight = 44.27 kg, BMI = 18.77, Err = 5.04, Hb = 13.78 g/l, VO2 = 37.72 mlO2/kg. For the participants aged 12 to 14 years, the average values of the analyzed parameters were as follows: height = 163.7 cm, weight = 56.3 kg, BMI = 19.6, VO2 = 39.52 ml/kg, Err = 5.01, Hb = 14.3 g/l. Conclusion: Physiological parameters (maximal oxygen uptake, erythrocytes and Hb) were insignificantly higher in the older group compared to the younger group. There were no statistically significant differences in the analyzed anthropometric parameters between the two groups except for the basic measurements (height and weight).

Keywords: body composition, young soccer players, BMI, physical status

Procedia PDF Downloads 402
9505 Graph Cuts Segmentation Approach Using a Patch-Based Similarity Measure Applied for Interactive CT Lung Image Segmentation

Authors: Aicha Majda, Abdelhamid El Hassani

Abstract:

Lung CT image segmentation is a prerequisite for lung CT image analysis. Most conventional methods need post-processing to deal with abnormal lung CT scans containing lung nodules or other lesions. The simplest similarity measure in the standard graph cuts algorithm directly compares the pixel values of the two neighboring regions, which is not accurate because this kind of metric is extremely sensitive to minor transformations such as noise or other artifacts. In this work, we propose an improved version of the standard graph cuts algorithm based on a patch-based similarity metric. The boundary penalty term in the graph cut algorithm is defined on patch-based similarity measurements instead of the simple intensity measurement of the standard method. The weights between each pixel and its neighboring pixels are based on the obtained new term, and the graph is then created using these weights between its nodes. Finally, the segmentation is completed with the minimum cut/max-flow algorithm. Experimental results show that the proposed method is accurate and efficient, and can directly provide explicit lung regions without any post-processing operations, in contrast to the standard method.
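A patch-based boundary weight of the kind described can be sketched as follows. This is a minimal illustration using a Gaussian kernel over patch differences; the parameter values and function names are ours, not the paper's:

```python
import numpy as np

def patch_weight(img, p, q, radius=1, sigma=10.0):
    """Boundary weight between neighboring pixels p and q computed from
    the sum of squared differences of the patches centred on them,
    rather than from the two pixel intensities alone."""
    def patch(center):
        r, c = center
        return img[r - radius:r + radius + 1,
                   c - radius:c + radius + 1].astype(float)
    d2 = np.sum((patch(p) - patch(q)) ** 2)
    return float(np.exp(-d2 / (2 * sigma ** 2)))

# Identical neighborhoods give the maximal weight 1.0; patches straddling
# an intensity edge give a much smaller weight, so the min-cut prefers to
# cut there.
flat = np.full((5, 5), 100, dtype=np.uint8)
print(patch_weight(flat, (2, 2), (2, 3)))  # → 1.0
```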

Keywords: graph cuts, lung CT scan, lung parenchyma segmentation, patch-based similarity metric

Procedia PDF Downloads 170
9504 Interpretation and Clustering Framework for Analyzing ECG Survey Data

Authors: Irum Matloob, Shoab Ahmad Khan, Fahim Arif

Abstract:

The Indo-Pak region has been afflicted by heart disease for many decades. Many surveys have shown that the percentage of cardiac patients in Pakistan is increasing day by day, and this issue needs special attention. A framework is proposed for performing detailed analysis of ECG survey data collected to measure the prevalence of heart disease in Pakistan. The ECG survey data is evaluated and filtered using automated Minnesota codes, and only those ECGs that fulfill the standardized conditions of the Minnesota codes are used for further analysis. Feature selection is then performed by applying a proposed algorithm based on the discernibility matrix to select relevant features from the database. Clustering is performed to expose natural clusters in the ECG survey data by applying a spectral clustering algorithm with fuzzy c-means. The hidden patterns and interesting relationships exposed by this analysis are useful for further detailed analysis and for many other purposes.
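The discernibility-matrix idea behind the feature selection step can be sketched in a rough-set style (a toy illustration; the paper's actual selection algorithm is not reproduced here):

```python
def discernibility_entries(samples, labels):
    """For every pair of samples with different labels, record the set of
    attribute indices on which the pair differs; attributes that appear
    in many (especially small) entries are strong candidates to keep."""
    entries = []
    for i in range(len(samples)):
        for j in range(i + 1, len(samples)):
            if labels[i] != labels[j]:
                entries.append({k for k, (a, b)
                                in enumerate(zip(samples[i], samples[j]))
                                if a != b})
    return entries

# Three records with two attributes each and binary labels: attribute 1
# alone discerns the first pair, both attributes discern the second.
print(discernibility_entries([(0, 1), (0, 0), (1, 1)], [0, 1, 0]))
```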

Keywords: arrhythmias, centroids, ECG, clustering, discernibility matrix

Procedia PDF Downloads 472
9503 New Segmentation of Piecewise Linear Regression Models Using Reversible Jump MCMC Algorithm

Authors: Suparman

Abstract:

Piecewise linear regression models are very flexible models for modeling data. When piecewise linear regression models are fitted to data, the parameters are generally not known. This paper studies the problem of parameter estimation for piecewise linear regression models. The method used to estimate the parameters is Bayesian, but the Bayes estimator cannot be found analytically. To overcome this problem, a reversible jump MCMC algorithm is proposed. The reversible jump MCMC algorithm generates a Markov chain that converges to the posterior distribution of the parameters of the piecewise linear regression model. The resulting Markov chain is used to calculate the Bayes estimator for the parameters of the piecewise linear regression model.
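To make the segmentation problem concrete, its simplest version, a single unknown breakpoint estimated by brute-force least squares, can be sketched as follows. The paper instead samples the number and positions of breakpoints with reversible jump MCMC; this is only a frequentist illustration:

```python
import numpy as np

def fit_two_segments(x, y):
    """Brute-force least-squares fit of a piecewise linear model with a
    single unknown breakpoint: try every interior split and keep the one
    with the smallest total sum of squared residuals."""
    best = None
    for k in range(2, len(x) - 2):             # candidate breakpoint index
        sse = 0.0
        for xs, ys in ((x[:k], y[:k]), (x[k:], y[k:])):
            _, res, *_ = np.polyfit(xs, ys, 1, full=True)
            sse += res[0] if len(res) else 0.0
        if best is None or sse < best[0]:
            best = (sse, x[k])
    return best[1]                             # estimated breakpoint

x = np.arange(20, dtype=float)
y = np.where(x < 10, 2 * x, 25.0 - 0.5 * (x - 10))  # true break at x = 10
print(fit_two_segments(x, y))  # → 10.0
```

The reversible jump machinery generalizes this by letting the chain propose adding, removing, or moving breakpoints, so the number of segments need not be fixed in advance.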

Keywords: regression, piecewise, Bayesian, reversible jump MCMC

Procedia PDF Downloads 521
9502 Visible-Light-Driven OVs-BiOCl Nanoplates with Enhanced Photocatalytic Activity toward NO Oxidation

Authors: Jiazhen Liao, Xiaolan Zeng

Abstract:

A series of BiOCl nanoplates with different oxygen vacancy (OV) concentrations were successfully synthesized via a facile solvothermal method. The OV concentration of BiOCl can be tuned by the water/ethylene glycol ratio. Such nanoplates containing oxygen vacancies served as an efficient visible-light-driven photocatalyst for NO oxidation. Compared with pure BiOCl, the enhanced photocatalytic performance was mainly attributed to the introduction of OVs, which greatly enhanced light absorption, promoted electron transfer, and activated oxygen molecules. The present work provides insight into the role of OVs in photocatalysts. Combined with characterization analysis, such as XRD (X-ray diffraction), XPS (X-ray photoelectron spectroscopy), TEM (transmission electron microscopy), PL (fluorescence spectroscopy), and DFT (density functional theory) calculations, the effect of vacancies on the photoelectrochemical properties of BiOCl photocatalysts is shown. Furthermore, the possible reaction mechanisms of photocatalytic NO oxidation were also revealed. According to the results of in situ DRIFTS (diffuse reflectance infrared Fourier transform spectroscopy), various intermediates were produced during different time intervals of NO photodegradation. The possible pathways are summarized below. First, visible light irradiation induces electron-hole pairs on the surface of OV-BOC (BiOCl with oxygen vacancies). Second, photogenerated electrons combine with adsorbed oxygen to form superoxide radicals. Then, the NO molecules adsorbed on the surface of OV-BOC are attacked by superoxide radicals and form nitrate instead of the NO₂ by-product. Oxygen vacancies greatly improve the photocatalytic oxidation activity toward NO and effectively inhibit the production of harmful by-products during NO oxidation.

Keywords: OVs-BiOCl nanoplate, oxygen vacancies, NO oxidation, photocatalysis

Procedia PDF Downloads 133
9501 Genetic Algorithm Optimization of a Small Scale Natural Gas Liquefaction Process

Authors: M. I. Abdelhamid, A. O. Ghallab, R. S. Ettouney, M. A. El-Rifai

Abstract:

An optimization scheme based on a COM server is suggested for communication between the Genetic Algorithm (GA) toolbox of MATLAB and Aspen HYSYS. The structure and details of the proposed framework are discussed. The power of the developed scheme is illustrated by its application to the optimization of a recently developed natural gas liquefaction process in which Aspen HYSYS was used to minimize power consumption by optimizing the values of five operating variables. In this work, coupling the GA in MATLAB with the Aspen HYSYS model of the same process, using the same five decision variables, improved power consumption by 3.3% when 77% of the natural gas feed is liquefied. On inclusion of the flow rates of the nitrogen and carbon dioxide refrigerants as two additional decision variables, power consumption decreased by 6.5% for 78% liquefaction of the natural gas feed.

Keywords: stranded gas liquefaction, genetic algorithm, COM server, single nitrogen expansion, carbon dioxide pre-cooling

Procedia PDF Downloads 452
9500 Preparedness for Microbial Forensics Evidence Collection on Best Practice

Authors: Victor Ananth Paramananth, Rashid Muniginin, Mahaya Abd Rahman, Siti Afifah Ismail

Abstract:

Safety issues, scene protection, and appropriate evidence collection must be handled at any bio-crime scene. In any bio-incident or bio-crime event, one or more scenes must be cordoned off for investigation. Evidence collection is critical in determining the type of microbe or toxin, its lethality, and its source. Consequently, a proper sampling method is required from the start of the investigation. The most significant challenges for the crime scene officer are deciding where to obtain samples, the best sampling method, and the sample sizes needed. Since evidence at a crime scene may be in liquid, viscous, or powder form, crime scene officers have difficulty determining which tools to use for sampling, and the appropriate tools are necessary to maximize sample collection. This study aims to assist the crime scene officer in collecting liquid, viscous, and powder biological samples in sufficient quantity while preserving sample quality. In this research, observational tests on the collection of liquid, viscous, and powder samples, for adequate quantity and sample quality, were performed using UV light. The density of the light emission varies with the collection method and sample type. The tools best suited to collecting sufficient amounts of liquid, viscous, and powdered samples can be identified by observing UV light. Instead of active microorganisms, an invisible powder is used to assess sample collection during a crime scene investigation with various collection tools. The liquid, powdered and viscous samples collected using different tools were analyzed by Fourier transform infrared attenuated total reflectance (FTIR-ATR) spectroscopy. FTIR spectroscopy is commonly used for rapid discrimination, classification, and identification of intact microbial cells. The liquid, viscous and powdered samples collected using various tools were successfully observed using UV light, and FTIR-ATR analysis showed that the collected samples were sufficient in quantity while preserving their quality.

Keywords: biological sample, crime scene, collection tool, UV light, forensic

Procedia PDF Downloads 196
9499 A Memetic Algorithm for an Energy-Costs-Aware Flexible Job-Shop Scheduling Problem

Authors: Christian Böning, Henrik Prinzhorn, Eric C. Hund, Malte Stonis

Abstract:

In this article, the flexible job-shop scheduling problem is extended by considering the energy costs which arise owing to the power peak, and further decision variables such as work in process and throughput time are incorporated into the objective function. This enables a production plan to be optimized simultaneously with respect to the actual energy and logistics costs incurred. The resulting energy-costs-aware flexible job-shop scheduling problem (EFJSP) is described mathematically, and a memetic algorithm (MA) is presented as a solution. In the MA, the evolutionary process is supplemented with a local search. Furthermore, repair procedures are used to rectify any infeasible solutions that arise in the evolutionary process. The potential for lowering the actual costs of a production plan through consideration of energy consumption levels is highlighted.
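The memetic structure, an evolutionary loop whose offspring are refined by a local search before joining the population, can be sketched on a toy bitstring objective. This is our own minimal illustration; the paper's chromosome encoding, energy-cost objective, and repair procedures are not reproduced:

```python
import random

def memetic_minimize(fitness, n_bits=12, pop_size=20, generations=30, seed=1):
    """Minimal memetic algorithm: a genetic algorithm (selection,
    one-point crossover, mutation) whose offspring are improved by a
    bit-flip hill-climbing local search before joining the population."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def local_search(ind):
        best = fitness(ind)
        improved = True
        while improved:
            improved = False
            for i in range(n_bits):
                ind[i] ^= 1                 # tentatively flip bit i
                f = fitness(ind)
                if f < best:
                    best, improved = f, True
                else:
                    ind[i] ^= 1             # undo non-improving flip
        return ind

    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[:pop_size // 2]       # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_bits)
            child = a[:cut] + b[cut:]       # one-point crossover
            if rng.random() < 0.2:
                child[rng.randrange(n_bits)] ^= 1   # mutation
            children.append(local_search(child))    # memetic refinement
        pop = parents + children
    return min(pop, key=fitness)

# Toy objective: count of zero bits; the optimum is the all-ones string.
best = memetic_minimize(lambda ind: ind.count(0))
print(best.count(0))  # → 0
```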

Keywords: energy costs, flexible job-shop scheduling, memetic algorithm, power peak

Procedia PDF Downloads 346
9498 Bidirectional Dynamic Time Warping Algorithm for the Recognition of Isolated Words Impacted by Transient Noise Pulses

Authors: G. Tamulevičius, A. Serackis, T. Sledevič, D. Navakauskas

Abstract:

We consider one of the biggest challenges in speech recognition: noise reduction. Traditionally, detected transient noise pulses are removed from the corrupted speech using pulse models. In this paper we propose to cope with the problem directly in the Dynamic Time Warping domain. A bidirectional Dynamic Time Warping algorithm for the recognition of isolated words impacted by transient noise pulses is proposed. It uses a simple transient noise pulse detector, employs bidirectional computation of dynamic time warping, and directly manipulates the warping results. Experimental investigation with several alternative solutions confirms the effectiveness of the proposed algorithm in reducing the impact of noise on the recognition process: a 3.9% increase in noisy speech recognition accuracy is achieved.
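The distance that the bidirectional variant computes from both ends of an utterance is classical dynamic time warping, which can be sketched as follows (a textbook implementation over scalar features, not the authors' bidirectional algorithm):

```python
def dtw_distance(s, t):
    """Textbook dynamic time warping distance between two sequences of
    scalar features; a bidirectional variant would run warps of this
    kind from both ends of the utterance."""
    inf = float("inf")
    n, m = len(s), len(t)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(s[i - 1] - t[j - 1])        # local distance
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

# A repeated frame costs nothing under warping.
print(dtw_distance([1, 2, 3], [1, 2, 2, 3]))  # → 0.0
```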

Keywords: transient noise pulses, noise reduction, dynamic time warping, speech recognition

Procedia PDF Downloads 559
9497 Laser Induced Transient Current in Quasi-One-Dimensional Nanostructure

Authors: Tokuei Sako

Abstract:

Light-induced ultrafast charge transfer in a low-dimensional nanostructure has been studied using a model of a few electrons confined in a 1D electrostatic potential, coupled to electrodes at both ends and subjected to an ultrashort pulsed laser field. The time propagation of the one- and two-electron wave packets has been calculated by integrating the time-dependent Schrödinger equation with the symplectic integrator method on a uniform Fourier grid. The temporal behavior of the resulting light-induced current in the studied systems is discussed with respect to the central frequency and pulse width of the applied laser fields.
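Grid-based propagation of this kind can be sketched with a split-operator step on a uniform Fourier grid. This is a generic one-electron illustration in dimensionless units; the potential and parameters below are ours, not the paper's electrode-coupled model:

```python
import numpy as np

def split_operator_step(psi, v, dx, dt, m=1.0, hbar=1.0):
    """One symplectic split-operator step for the 1D time-dependent
    Schrödinger equation: half a potential kick, a full kinetic drift
    applied in k-space on the uniform Fourier grid, then another half
    potential kick."""
    k = 2 * np.pi * np.fft.fftfreq(len(psi), d=dx)
    psi = psi * np.exp(-0.5j * v * dt / hbar)
    psi = np.fft.ifft(np.fft.fft(psi) * np.exp(-0.5j * hbar * k**2 * dt / m))
    return psi * np.exp(-0.5j * v * dt / hbar)

x = np.linspace(-10.0, 10.0, 256, endpoint=False)
dx = x[1] - x[0]
psi = np.exp(-x**2 + 2j * x)                  # moving Gaussian packet
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize
for _ in range(100):
    psi = split_operator_step(psi, v=0.5 * x**2, dx=dx, dt=0.01)
print(round(float(np.sum(np.abs(psi)**2) * dx), 6))  # norm stays 1.0
```

Each factor is unitary, so the scheme conserves the norm to machine precision regardless of step count, which is the practical appeal of symplectic splitting.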

Keywords: pulsed laser field, nanowire, wave packet, quantum dots, conductivity

Procedia PDF Downloads 509
9496 Evaluation of Growth Performance and Survival Rate of African Catfish (Clarias gariepinus) Fed with Graded Levels of Egg Shell Substituted Ration

Authors: A. Bello-Olusoji, M. O. Sodamola, Y. A. Adejola, D. D Akinbola

Abstract:

An eight-week study was carried out on four hundred and five (405) African catfish (Clarias gariepinus) juveniles to examine the effect of graded levels of egg shell on their growth performance and survival rates. The fish were acclimatized for two weeks, after which they were weighed and allotted to five dietary treatments of three replicates each, with 27 fish per replicate, making a total of eighty-one (81) fish per treatment. The dietary treatments contained 0, 25, 50, 75 and 100% egg shell inclusion from treatment one to treatment five, respectively. Daily feed intake, weekly weight gain, and daily mortalities were recorded. The results indicated that treatment four, with 75% inclusion of egg shell, was the best in terms of weight gain and survival rate and was significantly different (P<0.05) from the other treatments. For catfish farming to remain viable in the near future, lower feed costs and increased profits are required; it is therefore recommended that diets of African catfish (Clarias gariepinus) be supplemented with well-processed egg shell at a 75% inclusion level.

Keywords: African catfish, egg shell, performance, survival rate, weight gain

Procedia PDF Downloads 388
9495 Problem of Services Selection in Ubiquitous Systems

Authors: Malika Yaici, Assia Arab, Betitra Yakouben, Samia Zermani

Abstract:

Ubiquitous computing is nowadays a reality through the networking of a growing number of computing devices. It makes it possible to provide users with context-aware information and services in a heterogeneous environment, anywhere and anytime. Selecting the best context-aware service among many available services and providers is a difficult problem. In this paper, a service selection method based on the Constraint Satisfaction Problem (CSP) formalism is proposed. The services are modeled as variables and domains, and the user context, preferences and provider characteristics are modeled as constraints. A backtracking algorithm is used to solve the problem and find the service and provider that best match the user's requirements. Although this algorithm has exponential complexity, its use guarantees that the service that best matches the user's requirements will be found. A comparison of the proposed method with existing solutions concludes the paper.
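The variables/domains/constraints formulation can be sketched with a generic backtracking solver. The toy instance below uses made-up service slots, providers, and a budget constraint; the paper's actual constraint model over user context and provider characteristics is richer:

```python
def backtrack(variables, domains, constraints, assignment=None):
    """Depth-first backtracking search for a CSP: assign each variable a
    value from its domain and prune as soon as any constraint fails on
    the partial assignment."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return dict(assignment)
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if all(c(assignment) for c in constraints):
            result = backtrack(variables, domains, constraints, assignment)
            if result is not None:
                return result
        del assignment[var]
    return None

# Toy instance: two service slots, candidate (provider, cost) pairs, and
# a budget constraint evaluated on the partial assignment.
domains = {"print": [("A", 3), ("B", 1)], "display": [("A", 2), ("C", 4)]}
within_budget = lambda a: sum(cost for _, cost in a.values()) <= 4
solution = backtrack(list(domains), domains, [within_budget])
print(solution)  # → {'print': ('B', 1), 'display': ('A', 2)}
```

Pruning on partial assignments is what keeps the exponential worst case tolerable in practice, while completeness of the search gives the guarantee mentioned in the abstract.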

Keywords: ubiquitous computing, services selection, constraint satisfaction problem, backtrack algorithm

Procedia PDF Downloads 245
9494 The Effect of Coconut Oil on Anthropometric Measurements and Irisin Levels in Overweight Individuals

Authors: Bilge Meral Koc, Elvan Yilmaz Akyuz, Tugce Ozlu

Abstract:

This study aimed to discover the effects of coconut oil intake and diet therapy on anthropometric measurements, biochemical findings and irisin levels in overweight individuals. Materials and Methods: Overweight individuals (n=44, 19-30 years) without any chronic disease were included. In this randomized controlled crossover study, the participants were divided into two groups (Group 1: 23 people, Group 2: 21 people). In the first phase, Group 1 received diet therapy to lose 0.5-1 kg of weight per week plus 20 mL of coconut oil/day, while Group 2 received diet therapy only. In the second phase, Group 1 received diet therapy only, while Group 2 received diet therapy plus 20 mL of coconut oil/day. Anthropometric measurements were taken four times. Irisin was measured four times by enzyme-linked immunosorbent assay (ELISA), and the other biochemical findings were measured twice. Statistical analysis was performed with SPSS 20. Results: Irisin levels decreased significantly when the participants took coconut oil alone (p≤0.05). There was a significant decrease in the participants' body weight, body mass index (BMI) and body fat percentage (p≤0.01). Insulin, total cholesterol, low-density lipoprotein (LDL) cholesterol, and triglyceride (TG) levels of all participants decreased significantly (p≤0.05). There was no significant difference in irisin levels attributable to body weight loss; coconut oil alone provided a significant decrease in irisin levels (p≤0.05). Conclusion: Diet therapy and weight loss had no effect on irisin levels, but coconut oil alone was found to reduce irisin levels. Coconut oil had no impact on the anthropometric and biochemical findings.

Keywords: coconut oil, diet therapy, irisin, overweight

Procedia PDF Downloads 108
9493 MCDM Spectrum Handover Models for Cognitive Wireless Networks

Authors: Cesar Hernández, Diego Giral, Fernando Santa

Abstract:

Spectral handoff is important in cognitive wireless networks to ensure adequate quality of service and performance for secondary user communications. This work benchmarks the performance of three spectrum handoff models: VIKOR, SAW and MEW. Four evaluation metrics are used: the cumulative average of failed handoffs, the cumulative average of handoffs performed, the cumulative average of transmission bandwidth, and the cumulative average of transmission delay. In contrast to related work, the performance of the three spectrum handoff models was validated with spectral occupancy data captured in experiments carried out in the GSM frequency band (824 MHz-849 MHz). These data represent the actual behavior of the licensed users in this frequency band. The results of the comparison show that the VIKOR algorithm provides a 15.8% performance improvement over the SAW algorithm and a 12.1% improvement over the MEW algorithm.
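Of the three models compared, SAW (Simple Additive Weighting) is the most direct to sketch: each candidate channel gets a score equal to the weighted sum of its normalized criteria. The channels, criteria values, and weights below are invented for illustration, not the paper's GSM data.

```python
# Illustrative SAW ranking of spectrum-handoff candidates; the channel
# data and criteria weights are made up, not from the paper.

def saw_rank(alternatives, weights, benefit):
    """Score each alternative as the weighted sum of normalized criteria.
    benefit[j] is True when a larger value of criterion j is better."""
    n_crit = len(weights)
    cols = [[alt[j] for alt in alternatives.values()] for j in range(n_crit)]
    scores = {}
    for name, vals in alternatives.items():
        s = 0.0
        for j, v in enumerate(vals):
            if benefit[j]:
                s += weights[j] * v / max(cols[j])   # larger is better
            else:
                s += weights[j] * min(cols[j]) / v   # smaller is better
        scores[name] = s
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# criteria: (bandwidth MHz, delay ms); bandwidth is a benefit, delay a cost
channels = {"ch1": (2.0, 40.0), "ch2": (1.2, 10.0), "ch3": (1.5, 20.0)}
ranking = saw_rank(channels, weights=[0.6, 0.4], benefit=[True, False])
print(ranking[0][0])  # best channel under these weights
```

VIKOR and MEW differ only in the aggregation step (compromise ranking and weighted product, respectively), which is why the same normalized criteria matrix can feed all three models.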

Keywords: cognitive radio, decision making, MEW, SAW, spectrum handoff, VIKOR

Procedia PDF Downloads 439
9492 Hot Forging Process Simulation of Outer Tie Rod to Reduce Forming Load

Authors: Kyo Jin An, Bukyo Seo, Young-Chul Park

Abstract:

The current trend in the automotive market is an increase in the number of parts per vehicle and in overall vehicle weight, driven by demands for improved vehicle performance. The outer tie rod is a component of the steering system and is already lighter than most others, but further weight reduction is still required to improve fuel economy. We have therefore presented a model of an aluminum outer tie rod, whose fabrication process must be verified before the product can be manufactured. In this study, we predicted the forming load, die stress and die wear in the hot forging process of the outer tie rod using forging simulation software. In addition, we designed the experiments using an orthogonal array table in order to reduce the forming load.
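The point of the orthogonal array is to estimate each factor's main effect on forming load from far fewer trials than a full factorial. A minimal sketch, with invented factor names and load values (the paper does not list its factors or responses):

```python
# Hypothetical sketch of an orthogonal-array experiment: an L4(2^3)
# array covers three two-level factors in 4 runs instead of the
# 2^3 = 8 runs of a full factorial. The loads are invented numbers.

# L4 orthogonal array: each row is one trial (factor levels 0/1)
L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
forming_load = [820.0, 780.0, 905.0, 860.0]  # hypothetical loads (kN)

def main_effects(array, response, factor):
    """Average response at each level of one factor."""
    levels = {}
    for row, y in zip(array, response):
        levels.setdefault(row[factor], []).append(y)
    return {lvl: sum(ys) / len(ys) for lvl, ys in sorted(levels.items())}

# assumed factor names, for illustration only
for f, name in enumerate(["temperature", "die speed", "friction"]):
    eff = main_effects(L4, forming_load, f)
    best = min(eff, key=eff.get)  # level giving the lowest mean load
    print(f"{name}: level {best} minimizes mean load ({eff[best]:.1f} kN)")
```

Because the array is balanced, each level average is computed over the same number of runs, so the per-factor comparisons are fair despite the reduced trial count.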

Keywords: forming load, hot forging, orthogonal array, outer tie rod (OTR), multi–step forging

Procedia PDF Downloads 433
9491 An Improved Face Recognition Algorithm Using Histogram-Based Features in Spatial and Frequency Domains

Authors: Qiu Chen, Koji Kotani, Feifei Lee, Tadahiro Ohmi

Abstract:

In this paper, we propose an improved face recognition algorithm using histogram-based features in the spatial and frequency domains. To add spatial information of the face and improve recognition performance, a region-division (RD) method is utilized. The facial area is first divided into several regions; feature vectors of each facial part are then generated by a Binary Vector Quantization (BVQ) histogram of DCT coefficients in the low-frequency domain, as well as by a Local Binary Pattern (LBP) histogram in the spatial domain. Recognition results for the different regions are first obtained separately and then fused by weighted averaging. The publicly available ORL database, which consists of 40 subjects with 10 images per subject containing variations in lighting, pose and expression, is used to evaluate the proposed algorithm. It is demonstrated that face recognition using the RD method achieves a much higher recognition rate.
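The spatial-domain feature named above, the LBP histogram, can be sketched in a few lines: each interior pixel is encoded by thresholding its 8 neighbours against it, and the codes are binned. This is the basic 8-bit LBP (no uniformity mapping or region division), on an arbitrary 5x5 test image.

```python
# Minimal basic-LBP histogram sketch (spatial-domain feature only;
# the paper's full pipeline adds region division, BVQ/DCT features,
# and weighted fusion, which are not reproduced here).

def lbp_histogram(img):
    """Compute 8-bit LBP codes for interior pixels and return a
    256-bin histogram (plain LBP, no uniformity mapping)."""
    h, w = len(img), len(img[0])
    hist = [0] * 256
    # clockwise neighbour offsets starting at the top-left pixel
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            center = img[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offs):
                if img[y + dy][x + dx] >= center:
                    code |= 1 << bit
            hist[code] += 1
    return hist

img = [[10, 10, 10, 10, 10],
       [10, 50, 50, 50, 10],
       [10, 50, 90, 50, 10],
       [10, 50, 50, 50, 10],
       [10, 10, 10, 10, 10]]
hist = lbp_histogram(img)  # 9 interior pixels -> 9 codes binned
```

In the RD scheme, one such histogram would be computed per facial region, and the per-region matching scores fused by weighted averaging.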

Keywords: binary vector quantization (BVQ), DCT coefficients, face recognition, local binary patterns (LBP)

Procedia PDF Downloads 350
9490 Optimal Placement and Sizing of Distributed Generation in Microgrid for Power Loss Reduction and Voltage Profile Improvement

Authors: Ferinar Moaidi, Mahdi Moaidi

Abstract:

Environmental issues and the ever-increasing demand for electrical energy make it necessary to have distributed generation (DG) resources in the power system. In this research, optimal allocation and sizing of DGs are used to reduce losses and improve the voltage profile in a microgrid. A Genetic Algorithm (GA), chosen from the array of artificial intelligence methods, is described for solving the problem. The algorithm is implemented on the IEEE 33-bus network. The study is presented in two scenarios. First, the placement and sizing of DGs are determined to reduce losses and improve the voltage profile under a single load level. However, decisions based on a single load level are not valid for all load levels; therefore, load modelling is performed and results are presented for a multi-level load state.
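The GA loop itself can be sketched generically. The fitness function below is a made-up surrogate for a power-flow loss calculation (the real study would run a load flow on the IEEE 33-bus network for each candidate); chromosome encoding, operators and parameters are illustrative assumptions.

```python
# Toy GA sketch for DG placement/sizing. The 'loss' function is a
# hypothetical stand-in for a power-flow loss calculation, not the
# IEEE 33-bus model used by the authors.
import random

random.seed(0)
N_BUS, SIZES = 33, [0.0, 0.5, 1.0, 1.5]  # candidate DG sizes in MW

def loss(chromosome):
    """Hypothetical surrogate: pretend losses are minimized by placing
    about 2 MW of DG near the middle of the feeder."""
    total = sum(chromosome)
    centroid = sum(i * s for i, s in enumerate(chromosome)) / (total or 1)
    return abs(total - 2.0) + abs(centroid - N_BUS / 2) / N_BUS

def evolve(pop_size=30, generations=60, p_mut=0.05):
    pop = [[random.choice(SIZES) for _ in range(N_BUS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=loss)
        survivors = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_BUS)      # one-point crossover
            child = a[:cut] + b[cut:]
            for i in range(N_BUS):                # random mutation
                if random.random() < p_mut:
                    child[i] = random.choice(SIZES)
            children.append(child)
        pop = survivors + children
    return min(pop, key=loss)

best = evolve()
print(f"best surrogate loss: {loss(best):.3f}")
```

Swapping the surrogate for an actual load-flow routine (and adding a penalty for voltage-limit violations) turns this skeleton into the kind of placement-and-sizing optimizer the abstract describes.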

Keywords: distributed generation, genetic algorithm, microgrid, load modelling, loss reduction, voltage improvement

Procedia PDF Downloads 144
9489 Design and Test a Robust Bearing-Only Target Motion Analysis Algorithm Based on Modified Gain Extended Kalman Filter

Authors: Mohammad Tarek Al Muallim, Ozhan Duzenli, Ceyhun Ilguy

Abstract:

Passive sonar is a method for detecting acoustic signals in the ocean; it detects the acoustic signals emanating from external sources. With passive sonar, only the bearing of the target can be determined, with no information about its range. Target Motion Analysis (TMA) is the process of estimating the position and speed of a target from passive sonar information. Since bearing is the only available information, the technique is called Bearing-only TMA. Many TMA techniques have been developed; however, no existing method can reliably track an arbitrary unknown target and extract its trajectory. In this work, an effective Bearing-only TMA algorithm is designed. The measured bearing angles are very noisy; moreover, for a multi-beam sonar, the measurements are quantized due to the sonar beam width. To deal with this, a modified gain extended Kalman filter algorithm is used. The algorithm is fine-tuned, and many modules are added to improve its performance. A special validation gate module is used to ensure the stability of the algorithm. Several indicators of performance and confidence level are designed and tested. A new method to detect whether the target is maneuvering is proposed. Moreover, a reactive optimal observer maneuver based on bearing measurements is proposed, which ensures convergence to the correct solution in all cases. To test the performance of the proposed TMA algorithm, a simulation was carried out in MATLAB. The simulator models a discrete scenario for an observer and a target, taking into account all practical aspects of the problem, such as smooth speed transitions, circular turns of the ship, noisy measurements, and quantized bearing measurements from a multi-beam sonar. The tests were run for many scenarios. In all of them, full tracking was achieved within 10 minutes with very small error: the range estimation error was less than 5%, the speed error less than 5%, and the heading error less than 2 degrees. The online performance estimator was mostly aligned with the actual performance; the range estimation confidence level reached 90% when the range error was below 10%. The experiments show that the proposed TMA algorithm is very robust and has low estimation error, although its convergence time still needs to be improved.
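The core measurement step can be sketched as a plain bearing-only EKF update. Note this is a standard EKF, not the authors' modified-gain variant; the state layout, north-referenced bearing convention, covariance, and noise value R are all illustrative assumptions.

```python
# Minimal bearing-only EKF measurement update sketch. State is
# (px, py, vx, vy) of the target; numbers and noise levels are
# illustrative, not from the paper's tuned filter.
import math

def bearing_update(x, P, z, own_pos, R):
    """One EKF update with a bearing measurement z (radians from north).
    x: state [px, py, vx, vy]; P: 4x4 covariance (list of lists)."""
    dx, dy = x[0] - own_pos[0], x[1] - own_pos[1]
    r2 = dx * dx + dy * dy
    h = math.atan2(dx, dy)                 # predicted bearing from north
    # Jacobian of h wrt the state: (dy, -dx, 0, 0) / r^2
    H = [dy / r2, -dx / r2, 0.0, 0.0]
    # innovation, wrapped to (-pi, pi]
    nu = (z - h + math.pi) % (2 * math.pi) - math.pi
    # S = H P H' + R (scalar, since the measurement is scalar)
    PHt = [sum(P[i][j] * H[j] for j in range(4)) for i in range(4)]
    S = sum(H[i] * PHt[i] for i in range(4)) + R
    K = [PHt[i] / S for i in range(4)]     # Kalman gain (4x1)
    x_new = [x[i] + K[i] * nu for i in range(4)]
    P_new = [[P[i][j] - K[i] * PHt[j] for j in range(4)] for i in range(4)]
    return x_new, P_new

# one update from a poor prior, given the true bearing of a target
# assumed at (2000, 1000) relative to an observer at the origin
obs = (0.0, 0.0)
x0 = [1000.0, 2000.0, 0.0, 0.0]
P0 = [[1e6 if i == j else 0.0 for j in range(4)] for i in range(4)]
z = math.atan2(2000.0, 1000.0)
x1, P1 = bearing_update(x0, P0, z, obs, R=1e-4)
```

A single update can only pull the estimate onto the measured bearing line; the range observability the abstract discusses comes from accumulating such updates across deliberate observer maneuvers.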

Keywords: target motion analysis, Kalman filter, passive sonar, bearing-only tracking

Procedia PDF Downloads 404
9488 Biomimetic Luminescent Textile Using Biobased Products

Authors: Sweta Iyer, Nemeshwaree Behary, Vincent Nierstrasz

Abstract:

Various organisms use bioluminescence for particular biological functions. The bio-based molecules responsible for bioluminescence vary from one species to another; research has been done to identify the chemistry and the different mechanisms involved in light production in living organisms. Light-emitting chemical systems, such as those of fireflies and luminous bacteria, mostly involve enzyme-catalyzed reactions and are widely used for ATP measurement, bioluminescence imaging, environmental biosensors, etc. Our strategy is to design bioluminescent textiles using such bioluminescent systems. Hence, a detailed literature study was carried out on how to mimic the bioluminescence effect seen in nature. Reaction mechanisms in various bioluminescent living organisms were studied, and the components responsible for luminescence were identified. However, the challenge is to obtain the same effect on textiles by immobilizing the enzymes responsible for light creation. A further challenge is to regenerate the substrates involved in the reaction system so as to create longer-lasting illumination in bioluminescent textiles. Natural film-forming polymers were used to immobilize the reactive components, including the enzymes, on textile materials to design a biomimetic luminescent textile.

Keywords: bioluminescence, biomimetic, immobilize, luminescent textile

Procedia PDF Downloads 266
9487 Lego Mindstorms as a Simulation of Robotic Systems

Authors: Miroslav Popelka, Jakub Nožička

Abstract:

In this paper we deal with the use of Lego Mindstorms to simulate robotic systems with respect to cost reduction. The Lego Mindstorms kit contains a broad variety of hardware components required to simulate, program and test robotic systems in practice. Algorithms were programmed both in the development environment supplied with the Lego kit and in the C# programming language. The line-following algorithm addressed in this paper draws on theoretical findings from the field of control circuits. A PID controller was chosen as the control circuit, and its individual terms were tuned experimentally for optimal motion of the line-tracking robot. The data processed by the algorithm are collected by sensors that scan the boundary between the black and white surfaces followed by the robot. Based on these findings, Lego Mindstorms can be considered a low-cost yet capable kit for simulating real robotic systems.
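The PID loop described above can be sketched as follows. This is a minimal, hypothetical Python version (the paper's implementation runs in the Lego environment and C#); the gains, time step, and sensor scaling are invented, not the experimentally tuned values.

```python
# Illustrative PID step for a line follower: the error is the light
# sensor reading minus the black/white edge setpoint, and the output
# is a steering correction split between the two motors.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, error):
        """Return the steering correction for the current edge error."""
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=0.8, ki=0.1, kd=0.05, dt=0.02)   # invented gains
base_speed = 50
for reading in [12, 8, 3, -2, -5]:            # simulated sensor errors
    turn = pid.step(reading)
    left, right = base_speed + turn, base_speed - turn
```

Tuning proceeds as the abstract describes: the proportional term alone gives coarse tracking, the derivative term damps oscillation at the edge, and a small integral term removes steady-state offset.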

Keywords: LEGO Mindstorms, PID controller, low-cost robotics systems, line follower, sensors, programming language C#, EV3 Home Edition Software

Procedia PDF Downloads 375
9486 Investigations of Thermo Fluid Characteristics of Copper Alloy Porous Heat Sinks by Forced Air Cooling

Authors: Ashish Mahalle, Kishore Borakhade

Abstract:

High porosity metal foams are excellent for heat dissipation. Their use has been widened to include heat removal from high-density microelectronic circuits. Other important applications are found in compact heat exchangers for airborne equipment, regenerative and dissipative air-cooled condenser towers, and compact heat sinks for power electronics. The low relative density, open porosity and high thermal conductivity of the cell edges, the large accessible surface area per unit volume, and the ability to mix the cooling fluid make metal foam heat exchangers efficient, compact and lightweight. This paper reports the thermal performance of metal foam for high heat dissipation. In the experiments, metal foam samples with different pore diameters (35 µm, 20 µm and 12 µm) are analyzed for varying velocities and heat inputs. The study investigates the effect of various dimensionless numbers, such as Re, Nu and Pr, on the heat transfer characteristics of the basic flow configuration.
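The dimensionless groups named above follow directly from the fluid properties and a characteristic length. A back-of-the-envelope sketch, using approximate air properties at about 300 K and an assumed inlet velocity (not the experimental conditions):

```python
# Sketch of the dimensionless groups Re, Pr and Nu for forced-air
# cooling; the property values, pore diameter and velocity are
# illustrative assumptions.

def reynolds(rho, v, d, mu):
    return rho * v * d / mu          # Re = rho*V*D/mu

def prandtl(mu, cp, k):
    return mu * cp / k               # Pr = mu*cp/k

def nusselt(h, d, k):
    return h * d / k                 # Nu = h*D/k

# air at roughly 300 K (approximate property values)
rho, mu, cp, k = 1.16, 1.85e-5, 1007.0, 0.026
d_pore = 20e-6                       # 20 micron pore diameter
v = 5.0                              # assumed inlet air velocity, m/s
Re = reynolds(rho, v, d_pore, mu)
Pr = prandtl(mu, cp, k)
print(f"Re = {Re:.2f}, Pr = {Pr:.2f}")
```

With a measured heat transfer coefficient h, the same characteristic length feeds the Nusselt number, which is what ties the three groups together in a heat transfer correlation.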

Keywords: pores, foam, effective thermal conductivity, permeability

Procedia PDF Downloads 312
9485 Additive Manufacturing’s Impact on Product Design and Development: An Industrial Case Study

Authors: Ahmed Abdelsalam, Daniel Roozbahani, Marjan Alizadeh, Heikki Handroos

Abstract:

The aim of this study was to redesign a pressing air nozzle with lower weight and improved efficiency utilizing Selective Laser Melting (SLM) technology based on Design for Additive Manufacturing (DfAM) methods. The original pressing air nozzle was modified in SolidWorks 3D CAD, and two design concepts were introduced following the DfAM approach. In both proposed designs, the air channels were modified. 3D models of the original pressing air nozzle and the introduced designs were created, and their flow characteristics were obtained using Ansys software. CFD results for the original and the two proposed designs were extracted, compared and analyzed to demonstrate the impact of design on developing a more efficient pressing air nozzle through the AM process. Improved airflow was achieved by optimizing the nozzle's internal channel in both design concepts, yielding 30% and 50.6% lower pressure drops than the original design. Moreover, the proposed designs reduced product weight by 48.3% and 70.3% compared to the original design. Therefore, a pressing air nozzle with enhanced productivity and lower weight was generated using the DfAM-driven designs developed in this study. The main contribution of this study is to investigate the additional design possibilities that open up for modern parts when the advantages of SLM technology are exploited in their production. The approach presented in this study can be applied to almost any similar industrial application.

Keywords: additive manufacturing, design for additive manufacturing, design methods, product design, pressing air nozzle

Procedia PDF Downloads 176