Search results for: rectified linear units ReLU

4664 Convergence Analysis of Training Two-Hidden-Layer Partially Over-Parameterized ReLU Networks via Gradient Descent

Authors: Zhifeng Kong

Abstract:

Over-parameterized neural networks have attracted a great deal of attention in recent deep learning theory research, as they challenge the classic expectation that models with excessive parameters over-fit, and they have gained empirical success in various settings. While a number of theoretical works have been presented to demystify the properties of such models, their convergence behavior is still far from being thoroughly understood. In this work, we study the convergence properties of training two-hidden-layer partially over-parameterized fully connected networks with the Rectified Linear Unit (ReLU) activation via gradient descent. To our knowledge, this is the first theoretical work to address the convergence of deep over-parameterized networks without the equally-wide-hidden-layer assumption and other unrealistic assumptions. We provide a probabilistic lower bound on the widths of the hidden layers and prove a linear convergence rate for gradient descent. We also conduct experiments on synthetic and real-world datasets to validate our theory.
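
As a point of reference, the training setup studied here can be sketched in a few lines of NumPy; the widths, data, and learning rate below are illustrative placeholders, not the bounds derived in the paper.

```python
# Minimal NumPy sketch: a fully connected network with two hidden ReLU layers
# trained by plain gradient descent on square loss (output layer fixed, as is
# common in this line of theory). All sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, d, m1, m2 = 64, 10, 256, 256            # samples, input dim, hidden widths
X = rng.normal(size=(n, d)); y = rng.normal(size=n)
W1 = rng.normal(size=(d, m1)) / np.sqrt(d)
W2 = rng.normal(size=(m1, m2)) / np.sqrt(m1)
a = rng.choice([-1.0, 1.0], size=m2)       # fixed output weights

lr = 1e-2
for step in range(500):
    H1 = np.maximum(X @ W1, 0.0)           # first hidden layer (ReLU)
    H2 = np.maximum(H1 @ W2, 0.0)          # second hidden layer (ReLU)
    err = H2 @ a / np.sqrt(m2) - y         # square-loss residual
    # backpropagate through the two ReLU layers
    g2 = np.outer(err, a / np.sqrt(m2)) * (H2 > 0)
    g1 = (g2 @ W2.T) * (H1 > 0)
    W2 -= lr * H1.T @ g2 / n
    W1 -= lr * X.T @ g1 / n
    if step % 100 == 0:
        print(step, float(np.mean(err ** 2)))
```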

Keywords: over-parameterization, rectified linear units (ReLU), convergence, gradient descent, neural networks

Procedia PDF Downloads 120
4663 Investigating the Influence of Activation Functions on Image Classification Accuracy via Deep Convolutional Neural Network

Authors: Gulfam Haider, Sana Danish

Abstract:

Convolutional Neural Networks (CNNs) have emerged as powerful tools for image classification, and the choice of optimizer profoundly affects their performance. While numerous studies have explored and advocated various optimizers, the efficacy of these optimization techniques is still subject to scrutiny. This work addresses that question by conducting a comprehensive analysis and evaluation. The primary focus of the investigation is the performance of different optimizers when employed in conjunction with the popular Rectified Linear Unit (ReLU) activation function, chosen for its favorable properties in prior research. Specifically, we evaluate these optimizers with networks using both the original Softmax activation function and the modified ReLU activation function, carefully assessing their impact on overall performance. To achieve this, a series of experiments is conducted on a well-established benchmark dataset for image classification, the Canadian Institute for Advanced Research dataset (CIFAR-10). The optimizers investigated encompass a range of prominent algorithms: Adam, Root Mean Squared Propagation (RMSprop), Adaptive Learning Rate Method (Adadelta), Adaptive Gradient Algorithm (Adagrad), and Stochastic Gradient Descent (SGD). The analysis comprises a comprehensive evaluation of the classification accuracy, convergence speed, and robustness of the CNN models trained with each optimizer. Through rigorous experimentation and meticulous assessment, we discern the strengths and weaknesses of the different optimization techniques, providing valuable insight into their suitability for image classification tasks. The findings serve to guide researchers and practitioners in making informed decisions when selecting optimizers and activation functions, advancing the state of the art in image classification with convolutional neural networks.
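
The experimental grid described here is straightforward to reproduce in outline. The sketch below uses modern tf.keras (not necessarily the authors' setup); the architecture and epoch budget are assumptions.

```python
# Sketch: train one small CNN on CIFAR-10 per optimizer and compare accuracy.
import tensorflow as tf

(x_tr, y_tr), (x_te, y_te) = tf.keras.datasets.cifar10.load_data()
x_tr, x_te = x_tr / 255.0, x_te / 255.0

def make_model():
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

for opt in ["adam", "rmsprop", "adadelta", "adagrad", "sgd"]:
    model = make_model()
    model.compile(optimizer=opt, loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    hist = model.fit(x_tr, y_tr, epochs=5, batch_size=128, verbose=0,
                     validation_data=(x_te, y_te))
    print(opt, round(hist.history["val_accuracy"][-1], 4))
```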

Keywords: deep neural network, optimizers, RMSprop, ReLU, stochastic gradient descent

Procedia PDF Downloads 92
4662 Cobb Angle Measurement from Coronal X-Rays Using Artificial Neural Networks

Authors: Andrew N. Saylor, James R. Peters

Abstract:

Scoliosis is a complex 3D deformity of the thoracic and lumbar spines, clinically diagnosed by measurement of a Cobb angle of 10 degrees or more on a coronal X-ray. The Cobb angle is the angle made by the lines drawn along the proximal and distal endplates of the respective proximal and distal vertebrae comprising the curve. Traditionally, Cobb angles are measured manually using either a marker, straight edge, and protractor or image measurement software. The task of measuring the Cobb angle can also be represented by a function taking the spine geometry rendered using X-ray imaging as input and returning the approximate angle. Although the form of such a function may be unknown, it can be approximated using artificial neural networks (ANNs). The performance of ANNs is affected by many factors, including the choice of activation function and network architecture; however, the effects of these parameters on the accuracy of scoliotic deformity measurements are poorly understood. Therefore, the objective of this study was to systematically investigate the effect of ANN architecture and activation function on Cobb angle measurement from the coronal X-rays of scoliotic subjects. The data set for this study consisted of 609 coronal chest X-rays of scoliotic subjects divided into 481 training images and 128 test images. These data, which included labeled Cobb angle measurements, were obtained from the SpineWeb online database. In order to normalize the input data, each image was resized using bilinear interpolation to a size of 500 × 187 pixels, and the pixel intensities were scaled to be between 0 and 1. A fully connected (dense) ANN with a fixed cost function (mean squared error), batch size (10), and learning rate (0.01) was developed using Python Version 3.7.3 and TensorFlow 1.13.1. The activation functions (sigmoid, hyperbolic tangent [tanh], or rectified linear units [ReLU]), number of hidden layers (1, 3, 5, or 10), and number of neurons per layer (10, 100, or 1000) were varied systematically to generate a total of 36 network conditions. Stochastic gradient descent with early stopping was used to train each network. Three trials were run per condition, and the final mean squared errors and mean absolute errors were averaged to quantify the network response for each condition. The network that performed the best used ReLU neurons, three hidden layers, and 100 neurons per layer. The average mean squared error of this network was 222.28 ± 30 degrees², and the average mean absolute error was 11.96 ± 0.64 degrees. It is also notable that while most of the networks performed similarly, the networks using ReLU neurons, 10 hidden layers, and 1000 neurons per layer, and those using tanh neurons, one hidden layer, and 10 neurons per layer, performed markedly worse, with average mean squared errors greater than 400 degrees² and average mean absolute errors greater than 16 degrees. From the results of this study, it can be seen that the choice of ANN architecture and activation function has a clear impact on Cobb angle inference from coronal X-rays of scoliotic subjects.
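
A sketch of the described sweep, written in modern tf.keras rather than the paper's TensorFlow 1.13; the random arrays below are placeholders for the SpineWeb images, which must be obtained separately.

```python
# Sweep activation x depth x width (36 conditions), trained with SGD,
# batch size 10, learning rate 0.01, and early stopping, as described above.
import itertools
import numpy as np
import tensorflow as tf

X_train = np.random.rand(481, 500 * 187).astype("float32")  # stand-in for resized X-rays
y_train = (np.random.rand(481) * 60).astype("float32")      # stand-in Cobb angles (degrees)

for act, depth, width in itertools.product(
        ["sigmoid", "tanh", "relu"], [1, 3, 5, 10], [10, 100, 1000]):
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Dense(width, activation=act, input_shape=(500 * 187,)))
    for _ in range(depth - 1):
        model.add(tf.keras.layers.Dense(width, activation=act))
    model.add(tf.keras.layers.Dense(1))                      # Cobb angle output
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
                  loss="mse", metrics=["mae"])
    stop = tf.keras.callbacks.EarlyStopping(patience=5, restore_best_weights=True)
    hist = model.fit(X_train, y_train, batch_size=10, epochs=100,
                     validation_split=0.2, callbacks=[stop], verbose=0)
    print(act, depth, width, min(hist.history["val_mae"]))
```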

Keywords: scoliosis, artificial neural networks, Cobb angle, medical imaging

Procedia PDF Downloads 103
4661 Nonparametric Sieve Estimation with Dependent Data: Application to Deep Neural Networks

Authors: Chad Brown

Abstract:

This paper establishes general conditions for the convergence rates of nonparametric sieve estimators with dependent data. We present two key results: one for nonstationary data and another for stationary mixing data. Previous theoretical results often lack practical applicability to deep neural networks (DNNs). Using these conditions, we derive convergence rates for DNN sieve estimators in nonparametric regression settings with both nonstationary and stationary mixing data. The DNN architectures considered adhere to current industry standards, featuring fully connected feedforward networks with rectified linear unit activation functions, unbounded weights, and widths and depths that grow with sample size.
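
To make the sieve idea concrete, here is a toy sketch in which the estimator class is indexed by sample size; the growth rates used are placeholders, not the admissible rates derived in the paper.

```python
# The sieve: a sequence of DNN classes whose capacity grows with n.
import numpy as np
import tensorflow as tf

def sieve_network(n_samples: int) -> tf.keras.Model:
    width = max(8, int(n_samples ** 0.5))      # assumed growth rate, for
    depth = max(2, int(np.log(n_samples)))     # illustration only
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Dense(width, activation="relu", input_shape=(1,)))
    for _ in range(depth - 1):
        model.add(tf.keras.layers.Dense(width, activation="relu"))
    model.add(tf.keras.layers.Dense(1))        # unbounded weights: no constraint
    return model

for n in [100, 1000, 10000]:
    print(n, "-> params:", sieve_network(n).count_params())
```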

Keywords: sieve extremum estimates, nonparametric estimation, deep learning, neural networks, rectified linear unit, nonstationary processes

Procedia PDF Downloads 11
4660 A Deep Learning Based Method for Faster 3D Structural Topology Optimization

Authors: Arya Prakash Padhi, Anupam Chakrabarti, Rajib Chowdhury

Abstract:

Topology or layout optimization often yields better-performing, economical structures and is very helpful in the conceptual design phase. Traditionally, it is carried out with finite element-based optimization schemes which, although they give good results, are very time-consuming, especially for 3D structures. Among the alternatives, machine learning, and especially deep learning-based methods, has great potential for resolving this computational issue. Here, a 3D convolutional neural network (3D-CNN) based variational autoencoder (VAE) is trained using a dataset generated from the commercially available topology optimization code ABAQUS Tosca, using the solid isotropic material with penalization (SIMP) method for compliance minimization. The encoded data in the latent space is then fed to a 3D generative adversarial network (3D-GAN) to generate the outcome at a resolution of 64×64×64. The network consists of 3D volumetric CNNs with rectified linear unit (ReLU) activations in the intermediate layers and a sigmoid activation at the end. The proposed network provides almost optimal results with significantly reduced computational time, as no iteration is involved.
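
The generator half of the described pipeline might look as follows in tf.keras; the filter counts and latent dimension are assumptions, and only the volumetric decoder with ReLU activations and a final sigmoid is shown.

```python
# Sketch: latent vector -> 64x64x64 material-density field.
import tensorflow as tf

latent_dim = 128
generator = tf.keras.Sequential([
    tf.keras.layers.Dense(4 * 4 * 4 * 256, activation="relu",
                          input_shape=(latent_dim,)),
    tf.keras.layers.Reshape((4, 4, 4, 256)),
    tf.keras.layers.Conv3DTranspose(128, 4, strides=2, padding="same",
                                    activation="relu"),     # 8^3
    tf.keras.layers.Conv3DTranspose(64, 4, strides=2, padding="same",
                                    activation="relu"),     # 16^3
    tf.keras.layers.Conv3DTranspose(32, 4, strides=2, padding="same",
                                    activation="relu"),     # 32^3
    tf.keras.layers.Conv3DTranspose(1, 4, strides=2, padding="same",
                                    activation="sigmoid"),  # 64^3 densities in [0, 1]
])
generator.summary()  # one forward pass replaces the iterative SIMP loop
```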

Keywords: 3D generative adversarial network, deep learning, structural topology optimization, variational auto encoder

Procedia PDF Downloads 144
4659 An Empirical Study on Switching Activation Functions in Shallow and Deep Neural Networks

Authors: Apoorva Vinod, Archana Mathur, Snehanshu Saha

Abstract:

Though there exists a plethora of Activation Functions (AFs) used in single and multiple hidden layer Neural Networks (NN), their behavior has always raised curiosity, whether they are used singly or in combination. The popular AFs – Sigmoid, ReLU, and Tanh – have performed prominently well for shallow and deep architectures. Most of the time, AFs are used singly in multi-layered NNs, and, to the best of our knowledge, their performance has never been studied and analyzed deeply when they are used in combination. In this manuscript, we experiment with multi-layered NN architectures (both shallow and deep; Convolutional NN and VGG16) and investigate how well the network responds to two different AFs (Sigmoid-Tanh, Tanh-ReLU, ReLU-Sigmoid) used alternately, against a traditional, single combination (Sigmoid-Sigmoid, Tanh-Tanh, ReLU-ReLU). Our results show that using two different AFs, the network achieves better accuracy, substantially lower loss, and faster convergence on 4 computer vision (CV) and 15 non-CV (NCV) datasets. When using different AFs, not only was the accuracy greater by 6-7%, but convergence was also accomplished twice as fast. We present a case study investigating whether networks suffer from vanishing or exploding gradients when using two different AFs. Additionally, we show theoretically that a composition of two or more AFs satisfies the Universal Approximation Theorem (UAT).
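
A minimal sketch of the alternating-activation comparison, assuming MNIST as a stand-in dataset; widths, depth, and the training budget are illustrative.

```python
# Build otherwise-identical MLPs whose hidden layers either repeat one AF
# or alternate between two, then compare test accuracy.
import tensorflow as tf

def mlp(act_pair, n_hidden=4, width=64):
    model = tf.keras.Sequential()
    model.add(tf.keras.layers.Flatten(input_shape=(28, 28)))
    for i in range(n_hidden):
        model.add(tf.keras.layers.Dense(width, activation=act_pair[i % 2]))
    model.add(tf.keras.layers.Dense(10, activation="softmax"))
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

(x_tr, y_tr), (x_te, y_te) = tf.keras.datasets.mnist.load_data()
x_tr, x_te = x_tr / 255.0, x_te / 255.0

for pair in [("relu", "relu"), ("sigmoid", "tanh"),
             ("tanh", "relu"), ("relu", "sigmoid")]:
    m = mlp(pair)
    m.fit(x_tr, y_tr, epochs=3, verbose=0)
    print(pair, round(m.evaluate(x_te, y_te, verbose=0)[1], 4))
```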

Keywords: activation function, universal approximation theorem, neural networks, convergence

Procedia PDF Downloads 134
4658 Efficient Utilization of Negative Half Wave of Regulator Rectifier Output to Drive Class D LED Headlamp

Authors: Lalit Ahuja, Nancy Das, Yashas Shetty

Abstract:

LED lighting has been increasingly adopted for vehicles in both domestic and foreign automotive markets. This miniaturized technology gives the best light output, low energy consumption, and cost-efficient solutions for driving, which are the need of the hour. In this paper, we present a methodology for driving the highest class of two-wheeler headlamp from the regulator and rectifier (RR) output. Unlike usual LED headlamps, which are battery driven, a low-cost and highly efficient LED Driver Module (LDM) driven directly by the RR is proposed. The positive half of the magneto output is regulated and used to charge the battery that supplies various peripherals, while conventionally the negative half was used to operate bulb-based exterior lamps. With the advancement of battery-driven LED-based headlamps, this negative half pulse has remained unused in most vehicles. Our system uses the negative half-wave rectified DC output from the RR to provide constant light output at all RPMs of the vehicle. The negative rectified DC output of the RR gives the advantage of a pulsating DC input that periodically goes to zero, helping us to generate a constant DC output matched to the required LED load; with changes in RPM, an additional active thermal bypass circuit helps to maintain efficiency and limit thermal rise. The methodology pairs the negative half-wave output of the RR with a linear constant-current driver of significantly higher efficiency. Although the RR output has varying frequency and duty cycle at different engine RPMs, the driver is designed to provide constant current to the LEDs with minimal ripple. LED headlamps usually employ a DC-DC switching regulator, which is bulky; with linear regulators, we eliminate bulky components and improve the form factor, making the solution both cost-efficient and compact. At present, ripple-free output drivers with fewer components and less complexity are limited to lower-power LED lamps, while high-efficiency research usually focuses on high-power LED applications. This paper presents a method of driving the LED load at both high beam and low beam using the negative half-wave rectified pulsating DC from the RR with minimum components, maintaining high efficiency within the thermal limitations. Linear regulators are ordinarily quite inefficient, with efficiencies typically about 40% and as low as 14%, leading to poor thermal performance; although they do not require complex and bulky circuitry, powering high-power devices with them is difficult to realize. With a negative half-wave rectified pulsating DC input, however, this efficiency can be improved, since a constant DC output equivalent to the LED load can be generated while minimizing the voltage drop across the linear regulator. Losses are thus significantly reduced, and an efficiency as high as 75% is achieved. As RPM changes, the DC voltage increases, which is managed by the active thermal bypass circuitry, resulting in better thermal performance and avoiding the use of bulky and expensive heat sinks. The methodology thus utilizes the otherwise unused negative pulsating DC output of the RR, optimizing the utilization of RR output power and providing a cost-efficient alternative to costly DC-DC drivers.
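
The efficiency argument can be checked with a little arithmetic. The sketch below uses illustrative voltages and currents (not figures from the paper) to show why a pulsating input raises the effective efficiency of a linear stage toward the reported ~75%.

```python
# Effective efficiency of a linear constant-current LED driver fed by a
# half-wave rectified pulsating supply: eta = P_LED / P_in, where the pass
# element only drops voltage while the input exceeds the LED string voltage.
import numpy as np

V_LED, I_LED = 9.0, 1.0                      # assumed LED string voltage (V), current (A)
t = np.linspace(0.0, 1.0, 10_000)            # one normalized half-wave period
v_in = 14.0 * np.maximum(0.0, np.sin(np.pi * t))  # assumed 14 V peak pulsating input

conducting = v_in > V_LED                    # LEDs conduct only above string voltage
p_out = V_LED * I_LED * conducting.mean()    # average power delivered to the LEDs
p_in = (v_in * I_LED * conducting).mean()    # average power drawn from the supply

print(f"effective efficiency ~ {p_out / p_in:.0%}")  # ~73% with these assumptions
```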

Keywords: class D LED headlamp, regulator and rectifier, pulsating DC, low cost and highly efficient, LED driver module

Procedia PDF Downloads 43
4657 Generalized Additive Model for Estimating Propensity Score

Authors: Tahmidul Islam

Abstract:

The Propensity Score Matching (PSM) technique has been widely used for estimating the causal effect of treatment in observational studies. One major step in implementing PSM is estimating the propensity score (PS). A logistic regression model with additive linear terms for the covariates is the most used technique in many studies. The logistic regression model is also used with cubic splines to retain flexibility. However, choosing the functional form of the logistic regression model has been an open question, since the effectiveness of PSM depends on how accurately the PS has been estimated. In many situations, the linearity assumption of linear logistic regression may not hold, and a non-linear relation between the logit and the covariates may be more appropriate. One can estimate the PS using machine learning techniques such as random forests or neural networks for more accuracy in non-linear situations. In this study, an attempt has been made to compare the efficacy of the Generalized Additive Model (GAM) in various linear and non-linear settings and to compare its performance with usual logistic regression. GAM is a non-parametric technique in which the functional form of the covariates can be left unspecified and a flexible regression model can be fitted. In this study, various simple and complex models have been considered for treatment under several situations (small/large sample, low/high number of treatment units), examining which method leads to more covariate balance in the matched dataset. It is found that the logistic regression model is impressively robust against the inclusion of quadratic and interaction terms and reduces the mean difference between treatment and control sets as efficiently as GAM does. GAM provided no significantly better covariate balance than logistic regression in either simple or complex models. The analysis also suggests that a larger proportion of controls than treatment units leads to better balance for both methods.
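
A minimal sketch of the comparison, assuming scikit-learn and the pygam package are available; the data are synthetic, and inverse-probability weighting stands in for matching as a crude balance check.

```python
# Estimate propensity scores with plain logistic regression and with a GAM,
# then inspect covariate balance (standardized mean differences).
import numpy as np
from sklearn.linear_model import LogisticRegression
from pygam import LogisticGAM, s

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))                      # two covariates
logit = 0.5 * X[:, 0] - np.sin(2 * X[:, 1])         # non-linear true model
treat = rng.binomial(1, 1 / (1 + np.exp(-logit)))

ps_lr = LogisticRegression().fit(X, treat).predict_proba(X)[:, 1]
# GAM: one smooth spline term per covariate, functional form left unspecified
ps_gam = LogisticGAM(s(0) + s(1)).fit(X, treat).predict_proba(X)

def smd(x, t, w):
    m1 = np.average(x[t == 1], weights=w[t == 1])
    m0 = np.average(x[t == 0], weights=w[t == 0])
    return (m1 - m0) / x.std()

for name, ps in [("logistic", ps_lr), ("GAM", ps_gam)]:
    w = np.where(treat == 1, 1 / ps, 1 / (1 - ps))  # IPW as a stand-in for matching
    print(name, [round(smd(X[:, j], treat, w), 3) for j in range(2)])
```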

Keywords: accuracy, covariate balances, generalized additive model, logistic regression, non-linearity, propensity score matching

Procedia PDF Downloads 339
4656 Estimating Housing Prices Using Automatic Linear Modeling in the Metropolis of Mashhad, Iran

Authors: Mohammad Rahim Rahnama

Abstract:

The market-transaction price for housing is the main criterion for determining municipal taxes and is determined and announced on an annual basis. Of course, there is a discrepancy between the value of transactions recorded by the Bureau of Finance (P for short) or the municipality (P´ for short) and the real price on the market (P˝). The present research aims to determine the real price of housing in the metropolis of Mashhad, to pinpoint the price gap with those of the aforementioned apparatuses, and to identify the factors affecting it. In order to reach this practical objective, Automatic Linear Modeling, which calls for explanatory research, was utilized. The population of the research consisted of all the residential units in Mashhad, from which 317 residential units were randomly selected. Through cluster sampling, out of the 170 income blocks defined by the municipality, three blocks from high-income (Kosar), middle-income (Elahieh), and low-income (Seyyedi) strata were surveyed using questionnaires during February and March of 2015, and the information regarding the price and specifications of residential units was gathered. In order to estimate the effect of various factors on the price, the relationship between the independent variables (8 variables) and the dependent variable of housing price was calculated using Automatic Linear Modeling in SPSS. The results revealed that the average housing price is $788 per square meter, compared to the Bureau of Finance's price of $10 and the municipality's price of $378. The coefficient of determination between the dependent and independent variables was R² = 0.81. Out of the eight initial variables, three were omitted. The most influential factor affecting housing prices is the quality of construction (ordinary, full, luxury); the least important factor is the number of sides. The price gap between the low-income (Seyyedi) and middle-income (Elahieh) districts was not confirmed via one-way ANOVA, but their gap with the high-income district (Kosar) was confirmed. It is suggested that the city be divided into two sections, low-income and high-income, as opposed to three, in terms of housing prices.
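
The modeling step can be sketched with ordinary least squares as a stand-in for SPSS Automatic Linear Modeling; the file and feature names below are hypothetical.

```python
# Fit price per square meter on surveyed unit characteristics and report R^2.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

df = pd.read_csv("mashhad_units.csv")               # hypothetical survey file
features = ["area", "age", "construction_quality", "floor", "district_income"]
X, y = pd.get_dummies(df[features]), df["price_per_m2"]

model = LinearRegression().fit(X, y)
print("R^2:", r2_score(y, model.predict(X)))        # paper reports R^2 = 0.81
print(dict(zip(X.columns, model.coef_.round(2))))   # factor influence on price
```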

Keywords: automatic linear modeling, housing prices, Mashhad, Iran

Procedia PDF Downloads 236
4655 Semantic Features of Turkish and Spanish Phraseological Units with a Somatic Component ‘Hand’

Authors: Narmina Mammadova

Abstract:

In modern linguistics, the comparative study of languages is becoming increasingly popular, and the typology and comparison of languages that have different structures are expanding and deepening. Of particular interest is the study of phraseological units, which makes it possible to identify the specific features of the compared languages in all their national identity. This paper gives a brief analysis of the comparative study of somatic phraseological units (SFUs) of the Spanish and Turkish languages with the component "hand" in the semantic aspect: the identification of equivalents, analogs, and non-equivalent units, as well as a description of methods of translating non-equivalent somatic phraseological units. The comparative study of the phraseology of unrelated languages is of particular relevance, since it allows us to identify both general, universal features and the differential, specific features characteristic of a particular language. Based on the results of the study, it can be assumed that phraseological units containing a somatic component have high interlingual phraseological activity, which contributes to an increase in the degree of interlingual equivalence.

Keywords: linguoculturology, Turkish, Spanish, language picture of the world, phraseological units, semantic microfield

Procedia PDF Downloads 174
4654 Improving the Analytical Power of Dynamic DEA Models, by the Consideration of the Shape of the Distribution of Inputs/Outputs Data: A Linear Piecewise Decomposition Approach

Authors: Elias K. Maragos, Petros E. Maravelakis

Abstract:

In Dynamic Data Envelopment Analysis (DDEA), which is a subfield of Data Envelopment Analysis (DEA), the productivity of Decision Making Units (DMUs) is considered in relation to time. In this case, as is accepted by most researchers, there are outputs produced by a DMU to be used as inputs at a future time; those outputs are known as intermediates. The common models in DDEA do not take into account the shape of the distribution of those input, output, or intermediate data, assuming that the distribution of their virtual values does not deviate from linearity. This weakness limits the accuracy and analytical power of the traditional DDEA models. In this paper, the authors, using the concept of piecewise linear inputs and outputs, propose an extended DDEA model. The proposed model increases the flexibility of the traditional DDEA models and improves the measurement of the dynamic performance of DMUs.

Keywords: Dynamic Data Envelopment Analysis, DDEA, piecewise linear inputs, piecewise linear outputs

Procedia PDF Downloads 140
4653 On the Construction of Some Optimal Binary Linear Codes

Authors: Skezeer John B. Paz, Ederlina G. Nocon

Abstract:

Finding an optimal binary linear code is a central problem in coding theory. A binary linear code C = [n, k, d] is called optimal if there is no linear code with higher minimum distance d given the length n and the dimension k. There are bounds giving limits for the minimum distance d of a linear code of fixed length n and dimension k. The lower bound, which can be obtained by a construction process, tells us that there is a known linear code having this minimum distance. The upper bound is given by theoretical results such as the Griesmer bound. One way to find an optimal binary linear code is to make the lower bound of d equal to its upper bound; that is, to construct a binary linear code which achieves the highest possible value of its minimum distance d, given n and k. Some optimal binary linear codes were presented by Andries Brouwer in his published table on bounds of the minimum distance d of binary linear codes for 1 ≤ n ≤ 256 and k ≤ n. This was further improved by Markus Grassl, who gave a detailed construction process for each code exhibiting the lower bound. In this paper, we construct new optimal binary linear codes by applying construction processes to existing binary linear codes. In particular, we developed an algorithm, applied to the codes already constructed, to extend the list of optimal binary linear codes to 257 ≤ n ≤ 300 for k ≤ 7.
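
The upper-bound side mentioned above is easy to compute; a small helper sketching the Griesmer bound, which caps the achievable minimum distance d for given n and k.

```python
# Griesmer bound: n >= sum_{i=0}^{k-1} ceil(d / 2^i) for a binary [n, k, d] code.
from math import ceil

def griesmer_length(k: int, d: int) -> int:
    """Minimum length n permitted by the Griesmer bound for a binary [n, k, d] code."""
    return sum(ceil(d / 2**i) for i in range(k))

def max_d_by_griesmer(n: int, k: int) -> int:
    """Largest d not ruled out by the Griesmer bound for given n and k."""
    d = 1
    while griesmer_length(k, d + 1) <= n:
        d += 1
    return d

# Example: the [7, 4] Hamming code has d = 3, which meets the bound exactly.
assert griesmer_length(4, 3) == 7
print(max_d_by_griesmer(300, 7))  # cap on d at the end of the paper's extended range
```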

Keywords: bounds of linear codes, Griesmer bound, construction of linear codes, optimal binary linear codes

Procedia PDF Downloads 732
4652 A Condition-Based Maintenance Policy for Multi-Unit Systems Subject to Deterioration

Authors: Nooshin Salari, Viliam Makis

Abstract:

In this paper, we propose a condition-based maintenance policy for multi-unit systems, considering the existence of economic dependency among units. We consider a system composed of N identical units, where each unit deteriorates independently. The deterioration process of each unit is modeled as a three-state continuous-time homogeneous Markov chain with two working states and a failure state. The average production rate of the units varies across the working states, and the demand rate of the system is constant. Units are inspected at equidistant time epochs, and the decision to perform maintenance is determined by the number of units in the failure state. If the total number of units in the failure state exceeds a critical level, maintenance is initiated, in which failed units are replaced correctively and units in the deteriorated state are maintained preventively. Our objective is to determine the optimal number of failed units at which to initiate maintenance, minimizing the long-run expected average cost per unit time. The problem is formulated and solved in the semi-Markov decision process (SMDP) framework. A numerical example is developed to demonstrate the proposed policy, and a comparison with the corrective maintenance policy is presented.
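
The threshold policy can be illustrated by simulation; the transition probabilities, costs, and critical level below are assumptions, not the paper's optimized values.

```python
# N identical units, each a three-state chain (good -> deteriorated -> failed),
# inspected at equidistant epochs; maintenance is triggered once the number of
# failed units reaches a threshold.
import numpy as np

rng = np.random.default_rng(1)
N, EPOCHS, THRESHOLD = 10, 10_000, 3
P = np.array([[0.90, 0.08, 0.02],   # per-epoch transitions from "good"
              [0.00, 0.85, 0.15],   # from "deteriorated"
              [0.00, 0.00, 1.00]])  # "failed" is absorbing until maintenance
C_SETUP, C_CORRECTIVE, C_PREVENTIVE = 50.0, 20.0, 5.0

state = np.zeros(N, dtype=int)      # 0 good, 1 deteriorated, 2 failed
total_cost = 0.0
for _ in range(EPOCHS):
    state = np.array([rng.choice(3, p=P[s]) for s in state])  # independent wear
    if (state == 2).sum() >= THRESHOLD:                       # inspection decision
        total_cost += (C_SETUP
                       + C_CORRECTIVE * (state == 2).sum()    # replace failed units
                       + C_PREVENTIVE * (state == 1).sum())   # maintain deteriorated
        state[:] = 0                                          # system restored

print("average cost per epoch:", total_cost / EPOCHS)
# Sweeping THRESHOLD over 1..N approximates the optimization solved via SMDP.
```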

Keywords: reliability, maintenance optimization, semi-Markov decision process, production

Procedia PDF Downloads 135
4651 Determining the Factors Affecting Social Media Addiction (Virtual Tolerance, Virtual Communication), Phubbing, and Perception of Addiction in Nurses

Authors: Fatima Zehra Allahverdi, Nukhet Bayer

Abstract:

Objective: Three questions were formulated to examine stressful working units (intensive care and emergency units) utilizing self-perception theory and social support theory. This study provides a distinctive input by inspecting the combination of variables regarding stressful working environments. Method: The descriptive research was conducted with the participation of 400 nurses working at Ankara City Hospital. The study used Multivariate Analysis of Variance (MANOVA), regression analysis, and a mediation model. Hypothesis one used MANOVA followed by a Scheffe post hoc test. Hypothesis two utilized a hierarchical linear regression model. Hypothesis three used a mediation model. Result: Findings supported the hypotheses that intensive care units have significantly higher scores in virtual communication and virtual tolerance. The number of years on the job, virtual communication, virtual tolerance, and phubbing significantly predicted 51% of the variance in perception of addiction. Interestingly, the number of years on the job, while significant, was negatively related to perception of addiction. Conclusion: The reasoning behind these findings and the lack of significance in the emergency unit is discussed. Around 7% of the variance in phubbing was accounted for by working in intensive care units. The model accounted for 26.80% of the differences in the perception of addiction.

Keywords: phubbing, social media, working units, years on the job, stress

Procedia PDF Downloads 27
4650 Extension of Positive Linear Operator

Authors: Manal Azzidani

Abstract:

This research considers the extension of special functions called positive linear operators. A bounded linear operator defined from a normed space into a Banach space extends to the closure of its domain, and a linear functional defined on a vector subspace extends by the Hahn-Banach theorem, which can be generalized to positive linear operators.
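
For context, minimal statements of the two standard extension results the abstract relies on (textbook functional analysis, not specific to this paper):

```latex
% Continuous (bounded) extension: if $X$ is a normed space, $Y$ a Banach space,
% $D \subseteq X$ a subspace, and $T : D \to Y$ is bounded linear, then there is
% a unique bounded linear extension $\bar{T} : \overline{D} \to Y$ with
% $\|\bar{T}\| = \|T\|$, defined by continuity:
\[
  \bar{T}x \;=\; \lim_{n \to \infty} T x_n,
  \qquad x_n \in D,\ x_n \to x \in \overline{D}.
\]
% Hahn--Banach: a linear functional $f$ on a subspace $M \subseteq X$ dominated
% by a sublinear function $p$ (i.e.\ $f(m) \le p(m)$ on $M$) extends to $F$ on
% all of $X$ with $F|_M = f$ and $F(x) \le p(x)$ for every $x \in X$.
```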

Keywords: extension, positive operator, Riesz space, sublinear function

Procedia PDF Downloads 500
4649 Knowledge and Ontology Engineering in Continuous Monitoring of Production Systems

Authors: Maciej Zaręba, Sławomir Lasota

Abstract:

The monitoring of manufacturing processes is an important issue in today's ERP systems. The identification and analysis of appropriate data for the units that take part in the production process are among the most crucial problems. In this paper, the authors introduce a new approach to modelling the relations between production units, signals, and factors that can be obtained from the production system. The main idea of the system is based on an ontology of production units.

Keywords: manufacturing operation management, OWL, ontology implementation, ontology modeling

Procedia PDF Downloads 92
4648 Reliability Prediction of Tires Using Linear Mixed-Effects Model

Authors: Myung Hwan Na, Ho- Chun Song, EunHee Hong

Abstract:

Normal linear mixed-effects models are widely used to analyze repeated-measurement data. When heteroscedasticity and non-normality of the population distribution are detected at the same time, the normal linear mixed-effects model can give improper analysis results. To achieve more robust estimation, we use a heavy-tailed linear mixed-effects model, which gives more exact and reliable conclusions than the standard normal linear mixed-effects model.
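
A sketch of the baseline model with statsmodels; the column names are hypothetical, and the heavy-tailed variant (e.g., t-distributed errors) is not available in statsmodels, so it would require a custom likelihood or a probabilistic-programming package.

```python
# Normal linear mixed-effects fit for repeated tire measurements:
# fixed effect of mileage on tread wear, random intercept per tire.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("tire_field_data.csv")   # hypothetical repeated-measures file
model = smf.mixedlm("tread_wear ~ mileage", df, groups=df["tire_id"])
result = model.fit()
print(result.summary())
```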

Keywords: reliability, tires, field data, linear mixed-effects model

Procedia PDF Downloads 543
4647 A Study of Environmental Test Sequences for Electrical Units

Authors: Jung Ho Yang, Yong Soo Kim

Abstract:

Electrical units are operated by electrical and electronic components. An environmental test sequence is useful for testing electrical units to reduce reliability issues. This study introduces test-sequence guidelines based on relevant principles and considerations for electronic testing according to the international standard IEC 60068-1 and the United States military standard MIL-STD-810G. Test sequences are then proposed based on the descriptions of each test. Finally, the General Motors (GM) specification GMW3172 is interpreted and compared to IEC 60068-1 and MIL-STD-810G.

Keywords: reliability, environmental test sequence, electrical units, IEC 60068-1, MIL-STD-810G

Procedia PDF Downloads 478
4646 Measuring Multi-Class Linear Classifier for Image Classification

Authors: Fatma Susilawati Mohamad, Azizah Abdul Manaf, Fadhillah Ahmad, Zarina Mohamad, Wan Suryani Wan Awang

Abstract:

A simple and robust multi-class linear classifier is proposed and implemented. For each pair of classes, the linear boundary is a collection of segments of hyperplanes created as perpendicular bisectors of the line segments linking the centroids of the classes or parts of classes. Nearest Neighbor and Linear Discriminant Analysis are compared in the experiments to see the performance of each classifier in discriminating the ripeness of oil palm. This paper proposes a multi-class linear classifier using Linear Discriminant Analysis (LDA) for image identification. Results show that LDA is well capable of separating multi-class features for ripeness identification.
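
The comparison can be sketched with scikit-learn stand-ins; since the oil-palm ripeness data are not public, a synthetic multi-class set is used here.

```python
# LDA versus a nearest-neighbour classifier on generic multi-class features.
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=600, n_features=10, n_informative=6,
                           n_classes=4, random_state=0)  # 4 ripeness classes (stand-in)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("1-NN", KNeighborsClassifier(n_neighbors=1))]:
    print(name, clf.fit(X_tr, y_tr).score(X_te, y_te))
```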

Keywords: multi-class, linear classifier, nearest neighbor, linear discriminant analysis

Procedia PDF Downloads 511
4645 Sensitivity Analysis in Fuzzy Linear Programming Problems

Authors: S. H. Nasseri, A. Ebrahimnejad

Abstract:

Fuzzy set theory has been applied to many fields, such as operations research, control theory, and management sciences. In this paper, we consider two classes of fuzzy linear programming (FLP) problems: fuzzy number linear programming and linear programming with trapezoidal fuzzy variables. We state our recently established results and develop fuzzy primal simplex algorithms for solving these problems. Finally, we give illustrative examples.
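
For reference, a generic fuzzy number linear programming problem of the kind treated here, together with one common linear ranking function (a standard formulation, not necessarily the authors' exact model):

```latex
\[
  \max \ \tilde{c}^{\mathsf{T}} x
  \qquad \text{s.t.} \quad A x \le \tilde{b}, \; x \ge 0,
\]
% where each $\tilde{a} = (a_1, a_2, a_3, a_4)$ is a trapezoidal fuzzy number.
% Fuzzy quantities are compared via a linear ranking function, e.g.
\[
  \mathfrak{R}(\tilde{a}) = \frac{a_1 + a_2 + a_3 + a_4}{4},
  \qquad
  \tilde{a} \preceq \tilde{b} \iff \mathfrak{R}(\tilde{a}) \le \mathfrak{R}(\tilde{b}),
\]
% which lets primal simplex pivot rules operate on fuzzy costs directly.
```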

Keywords: fuzzy linear programming, fuzzy numbers, duality, sensitivity analysis

Procedia PDF Downloads 536
4644 Characterizing the Rectification Process for Designing Scoliosis Braces: Towards Digital Brace Design

Authors: Inigo Sanz-Pena, Shanika Arachchi, Dilani Dhammika, Sanjaya Mallikarachchi, Jeewantha S. Bandula, Alison H. McGregor, Nicolas Newell

Abstract:

The use of orthotic braces for adolescent idiopathic scoliosis (AIS) patients is the most common non-surgical treatment to prevent deformity progression. The traditional method of creating an orthotic brace involves casting the patient's torso to obtain a representative geometry, which is then rectified by an orthotist to the desired geometry of the brace. Recent improvements in 3D scanning technologies, rectification software, CNC, and additive manufacturing processes have made it possible to complement, or in some cases replace, manual methods with digital approaches. However, the rectification process remains dependent on the orthotist's skills. Therefore, the rectification process needs to be carefully characterized to ensure that braces designed through a digital workflow are as efficient as those created using a manual process. The aim of this study is to compare 3D scans of patients with AIS against 3D scans of both pre- and post-rectified casts that have been manually shaped by an orthotist. Six AIS patients were recruited from the Ragama Rehabilitation Clinic, Colombo, Sri Lanka. All patients were between 10 and 15 years old, were skeletally immature (Risser grade 0-3), and had Cobb angles between 20-45°. Seven spherical markers were placed at key anatomical locations on each patient's torso and on the pre- and post-rectified molds so that distances could be reliably measured. 3D scans were obtained of 1) the patient's torso and pelvis, 2) the patient's pre-rectification plaster mold, and 3) the patient's post-rectification plaster mold using a Structure Sensor Mark II 3D scanner (Occipital Inc., USA). 3D stick body models were created for each scan to represent the distances between anatomical landmarks, and were used to analyze the changes in position and orientation of the anatomical landmarks between scans using Blender open-source software. 3D surface deviation maps representing volume differences between the scans were produced using CloudCompare open-source software. The 3D stick body models showed changes in the position and orientation of thorax anatomical landmarks between the patient and the post-rectification scans for all patients. Anatomical landmark position and volume differences were seen between the 3D scans of the patients' torsos and the pre-rectified molds. Between the pre- and post-rectified molds, material removal was consistently seen on the anterior side of the thorax and the lateral areas below the ribcage. Volume differences were seen in areas where the orthotist planned to place pressure pads: usually at the trochanter on the side to which the lumbar curve was tilted (trochanter pad), at the lumbar apical vertebra (lumbar pad), on the rib connected to the apical vertebra at the mid-axillary line (thoracic pad), and on the ribs corresponding to the upper thoracic vertebrae (axillary extension pad). The rectification process requires the skill and experience of an orthotist; however, this study demonstrates that the brace shape and the location and volume of material removed from the pre-rectification mold can be characterized and quantified. Results from this study can be fed into software to accelerate the brace design process, taking steps towards an automated digital rectification process.

Keywords: additive manufacturing, orthotics, scoliosis brace design, sculpting software, spinal deformity

Procedia PDF Downloads 123
4643 A Four-Step Ortho-Rectification Procedure for Geo-Referencing Video Streams from a Low-Cost UAV

Authors: B. O. Olawale, C. R. Chatwin, R. C. D. Young, P. M. Birch, F. O. Faithpraise, A. O. Olukiran

Abstract:

Ortho-rectification is the process of geometrically correcting an aerial image such that the scale is uniform. The ortho-image formed by the process is corrected for lens distortion, topographic relief, and camera tilt. It can be used to measure true distances, because it is an accurate representation of the Earth's surface. Ortho-rectification and geo-referencing are essential to pinpoint the exact location of targets in video imagery acquired from the UAV platform. This can only be achieved by comparing such video imagery with an existing digital map. However, it is only when the image is ortho-rectified into the same co-ordinate system as the existing map that such a comparison is possible. The video image sequences from the UAV platform must be geo-registered; that is, each video frame must carry the necessary camera information before the ortho-rectification process is performed. Each rectified image frame can then be mosaicked together to form a seamless image map covering the selected area, which can then be compared with an existing map for geo-referencing. In this paper, we present a four-step ortho-rectification procedure for real-time geo-referencing of video data from a low-cost UAV equipped with a multi-sensor system. The basic procedures for real-time ortho-rectification are: (1) decompilation of the video stream into individual frames; (2) finding the interior camera orientation parameters; (3) finding the relative exterior orientation parameters of the video frames with respect to each other; (4) finding the absolute exterior orientation parameters, using a self-calibration adjustment with the aid of a mathematical model. Each ortho-rectified video frame is then mosaicked together to produce a 2-D planimetric map, which can be compared with a well-referenced existing digital map for the purpose of geo-referencing and aerial surveillance. A test field located in Abuja, Nigeria was used to test our method. Fifteen minutes of video and telemetry data were collected using the UAV, and the collected data were processed using the four-step ortho-rectification procedure. The results demonstrated that geometric measurements of the control field from the ortho-images are more reliable than those from the original perspective photographs when used to pinpoint the exact location of targets on the video imagery acquired by the UAV. The 2-D planimetric accuracy, when compared with 6 control points measured by a GPS receiver, is between 3 and 5 meters.
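
Parts of this pipeline can be sketched with OpenCV; the homography below is an assumed stand-in for the full orientation and self-calibration steps (2)-(4), which require real camera and telemetry data.

```python
# Step (1) of the procedure, plus the final rectifying warp of one frame.
import cv2
import numpy as np

cap = cv2.VideoCapture("uav_stream.mp4")       # hypothetical video file
frames = []
ok, frame = cap.read()
while ok:                                      # (1) decompile stream into frames
    frames.append(frame)
    ok, frame = cap.read()
cap.release()

# Assumed 3x3 homography mapping a frame onto map coordinates; in the real
# pipeline it is derived from the orientation parameters of steps (2)-(4).
H = np.array([[1.0, 0.02, 5.0],
              [0.01, 1.0, -3.0],
              [0.0, 0.0, 1.0]])
ortho = cv2.warpPerspective(frames[0], H, (1024, 1024))  # rectified frame
cv2.imwrite("ortho_frame_000.png", ortho)
```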

Keywords: geo-referencing, ortho-rectification, video frame, self-calibration

Procedia PDF Downloads 461
4642 Determinants of Success of University Industry Collaboration in the Science Academic Units at Makerere University

Authors: Mukisa Simon Peter Turker, Etomaru Irene

Abstract:

This study examined factors determining the success of University-Industry Collaboration (UIC) in the science academic units (SAUs) at Makerere University. This was prompted by concerns about weak linkages between industry and the academic units at Makerere University. The study examined institutional, relational, output, and framework factors determining the success of UIC in the SAUs. The study adopted a predictive cross-sectional survey design. Data were collected using a questionnaire survey of 172 academic staff from the six SAUs at Makerere University. Stratified, proportionate, and simple random sampling techniques were used to select the samples. The study used descriptive statistics and linear multiple regression analysis to analyze the data. The findings reveal a coefficient of determination (R²) of 0.403 at a significance level of 0.000, indicating that the predictors explained 40.3% of the variance in UIC success, with a standard error of estimate of 0.60188. The strength of association between Institutional factors, Relational factors, Output factors, and Framework factors, taking into consideration all interactions among the study variables, was R = 0.635. Institutional, Relational, Output, and Framework factors accounted for 34% of the variance in the level of UIC success (adjusted R² = 0.338); the remaining 66% of the variance is explained by other factors. The standardized coefficients revealed that Relational factors (β = 0.454, t = 5.247, p = 0.000) and Framework factors (β = 0.311, t = 3.770, p = 0.000) are the only statistically significant determinants of the success of UIC in the SAUs at Makerere University. Output factors (β = 0.082, t = 1.096, p = 0.275) and Institutional factors (β = 0.023, t = 0.292, p = 0.771) turned out to be statistically insignificant. The study concludes that Relational and Framework factors positively and significantly determine the success of UIC, but Output and Institutional factors are not statistically significant determinants of UIC in the SAUs at Makerere University. The study recommends strategies to consolidate Relational and Framework factors to enhance UIC at Makerere University, and further research on the effects of Institutional and Output factors on the success of UIC in universities.

Keywords: university-industry collaboration, output factors, relational factors, framework factors, institutional factors

Procedia PDF Downloads 32
4641 A Spectral Decomposition Method for Ordinary Differential Equation Systems with Constant or Linear Right Hand Sides

Authors: R. B. Ogunrinde, C. C. Jibunoh

Abstract:

In this paper, a spectral decomposition method is developed for the direct integration of stiff and nonstiff linear ordinary differential equation (ODE) systems with linear, constant, or zero right-hand sides (RHSs). The method does not require iteration but obtains solutions at any arbitrary point of t, by direct evaluation, within the interval of integration. All the numerical solutions obtained for this class of systems coincide with the exact theoretical solutions. In particular, solutions of homogeneous linear systems, i.e., those with zero RHS, conform to the exact analytical solutions of the systems in terms of t.
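
For a constant RHS, the spectral-decomposition solution that such a method evaluates directly can be written as follows (standard linear-systems theory, assuming A is diagonalizable and nonsingular):

```latex
\[
  \dot{x} = A x + b, \qquad A = V \Lambda V^{-1},
\]
\[
  x(t) \;=\; V e^{\Lambda t} V^{-1}\!\left(x_0 + A^{-1} b\right) \;-\; A^{-1} b,
\]
% so x(t) at any point t is obtained by direct evaluation of the scalar
% exponentials $e^{\lambda_i t}$, with no step-by-step iteration. The
% homogeneous case $b = 0$ reduces to $x(t) = V e^{\Lambda t} V^{-1} x_0$.
```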

Keywords: spectral decomposition, linear RHS, homogeneous linear systems, eigenvalues of the Jacobian

Procedia PDF Downloads 312
4640 Ranking All of the Efficient DMUs in DEA

Authors: Elahe Sarfi, Esmat Noroozi, Farhad Hosseinzadeh Lotfi

Abstract:

One of the important issues in Data Envelopment Analysis (DEA) is the ranking of Decision Making Units (DMUs). In this paper, a method for ranking DMUs is presented in which the weights related to efficient units are chosen in such a way that the other units preserve a certain percentage of their efficiency under those weights. To this end, a model is presented for ranking DMUs on the basis of their super-efficiency, subject to the mentioned weight restrictions. This percentage can be determined by the decision maker. If a specific percentage is unsuitable, a suitable, feasible one can be found and the DMUs ranked accordingly. Furthermore, the presented model is capable of ranking all of the efficient units, including non-extreme efficient ones. Finally, the presented models are applied to two sets of data and the results are reported.
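
For reference, the classical Andersen-Petersen super-efficiency model that such rankings build on; the paper's percentage-preservation weight restrictions would be added on top of a model of this kind:

```latex
\[
\begin{aligned}
\theta^{\ast}_{o} = \min\ & \theta \\
\text{s.t.}\ & \sum_{j \ne o} \lambda_j x_{ij} \le \theta\, x_{io}, && i = 1,\dots,m,\\
             & \sum_{j \ne o} \lambda_j y_{rj} \ge y_{ro},          && r = 1,\dots,s,\\
             & \lambda_j \ge 0, && j \ne o.
\end{aligned}
\]
% DMU_o is excluded from its own reference set, so efficient units can score
% above 1 and thus be ranked among themselves.
```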

Keywords: data envelopment analysis, efficiency, ranking, weight

Procedia PDF Downloads 431
4639 Apply Commitment Method in Power System to Minimize the Fuel Cost

Authors: Mohamed Shaban, Adel Yahya

Abstract:

The goal of this study is to schedule power generation units so as to minimize fuel consumption cost, based on a model that solves unit commitment problems. This is done by utilizing the forward dynamic programming method to determine the most economic scheduling of the generating units. The model was applied to a power station consisting of four generating units. The obtained results show that the application of forward dynamic programming offers a substantial reduction in fuel consumption cost: the cost was reduced from $116,326 to $102,181 over a 24-hour period, a saving of about 12.16%. The study emphasizes the importance of applying scheduling models to the operation of power generation units, the consequences being less fuel consumption, less power loss, and less pollution.
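
The forward dynamic programming recursion can be sketched compactly; the unit data, demand profile, start-up costs, and crude proportional dispatch rule below are all illustrative assumptions, not the paper's power station data.

```python
# Forward DP for unit commitment: states are unit on/off combinations per hour;
# the recursion keeps the cheapest cumulative cost of reaching each state.
from itertools import product

UNITS = [(50, 200, 18.0), (40, 150, 20.0),   # (min MW, max MW, fuel cost $/MWh)
         (30, 100, 23.0), (20, 80, 26.0)]
DEMAND = [260, 240, 230, 280, 320, 300]      # assumed hourly load (MW)
START_COST = 150.0                           # flat start-up cost per unit (assumed)

def dispatch_cost(state, load):
    on = [UNITS[i] for i in range(len(UNITS)) if state[i]]
    if not on:
        return None
    lo, hi = sum(u[0] for u in on), sum(u[1] for u in on)
    if not (lo <= load <= hi):
        return None                          # infeasible combination for this load
    extra = load - lo                        # load above the summed minimums
    return sum(c * (mn + extra * (mx - mn) / (hi - lo)) for mn, mx, c in on)

best = {(0, 0, 0, 0): 0.0}                   # all units assumed off before hour 1
for load in DEMAND:                          # forward pass over the hours
    nxt = {}
    for state in product([0, 1], repeat=4):
        run = dispatch_cost(state, load)
        if run is None:
            continue
        # cheapest predecessor, adding start-up costs for newly committed units
        nxt[state] = min(cum + run + START_COST * sum(max(0, s - p)
                         for s, p in zip(state, prev))
                         for prev, cum in best.items())
    best = nxt

print("minimum schedule cost over the horizon:", round(min(best.values()), 2))
```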

Keywords: unit commitment, forward dynamic, fuel cost, programming, generation scheduling, operation cost, power system, generating units

Procedia PDF Downloads 580
4638 The Interrelationship between Formal and Informal Institutions and Its Impacts on the Autonomy of Public Service Delivery Units: The Case of Vietnam

Authors: Minh Thi Hai Vo

Abstract:

This article draws on in-depth interviews with state employees at public hospitals and universities in its institutional analysis of the autonomy practices of public service delivery units in Vietnam. Unlike many empirical and theoretical studies that view formal and informal institutions as complements or substitutes, this article finds no evidence of complementary or substitutive relationships. Instead, it finds that formal institutions accommodate informal ones and that informal institutions tend to compete and interfere with the existing, ineffective formal institutions. The result of this conflicting relationship is that the actual autonomy of public service delivery units is, in most cases, perceived to be greater than the formal autonomy they are given. Under conditions of poor regulation, this informal autonomy may result in unethical practices, including rent-seeking and corruption. The implication of the study's findings is that policy-makers need to redesign and reorganize the autonomization of public service delivery units so that informal institutions support and reinforce formal ones in a complementary manner.

Keywords: autonomy, formal institutions, informal institutions, public service delivery units, Vietnam

Procedia PDF Downloads 181
4637 Finding Data Envelopment Analysis Target Using the Multiple Objective Linear Programming Structure in Full Fuzzy Case

Authors: Raziyeh Shamsi

Abstract:

In this paper, we present a multiple objective linear programming (MOLP) problem in the full fuzzy case and find Data Envelopment Analysis (DEA) targets. In the presented model, we seek the least inputs and the greatest outputs in the production possibility set (PPS) under the variable returns to scale (VRS) assumption, so that an efficiency projection is obtained for all decision making units (DMUs). Then, we provide an algorithm for finding DEA targets interactively in the full fuzzy case, which solves the full fuzzy problem without defuzzification. Owing to the use of interactive methods, the targets obtained by our algorithm are more applicable, more realistic, and more in accordance with the wishes of the decision maker. Finally, an application of the algorithm to 21 educational institutions is provided.

Keywords: DEA, MOLP, full fuzzy, target

Procedia PDF Downloads 283
4636 Area-Efficient FPGA Implementation of an FFT Processor by Reusing Butterfly Units

Authors: Atin Mukherjee, Amitabha Sinha, Debesh Choudhury

Abstract:

Fast Fourier transform (FFT) of a large number of samples requires large hardware resources on field programmable gate arrays (FPGAs), demanding more area as well as power. In this paper, an area-efficient architecture of an FFT processor is proposed that reuses the butterfly units more than once. The FFT processor is emulated and the results are validated on a Virtex-6 FPGA. The proposed architecture outperforms the conventional architecture of an N-point FFT processor in terms of area, which is reduced by a factor of log₂(N), with a negligible increase in processing time.
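
The reuse idea has a direct software analogue: an iterative radix-2 Cooley-Tukey FFT in which one butterfly routine serves every stage, mirroring a single hardware butterfly bank time-shared across all log₂(N) stages (a sketch, not the proposed hardware design).

```python
import cmath

def butterfly(a, b, w):
    """One radix-2 butterfly: the unit a hardware design would instantiate once."""
    t = w * b
    return a + t, a - t

def fft(x):
    n = len(x)
    assert n and n & (n - 1) == 0, "length must be a power of two"
    # bit-reversal permutation of the input
    x = [x[int(format(i, f'0{n.bit_length() - 1}b')[::-1], 2)] for i in range(n)]
    size = 2
    while size <= n:                       # log2(N) stages...
        half = size // 2
        w_step = cmath.exp(-2j * cmath.pi / size)
        for start in range(0, n, size):    # ...each reusing the same butterfly
            w = 1.0 + 0j
            for k in range(half):
                i, j = start + k, start + k + half
                x[i], x[j] = butterfly(x[i], x[j], w)
                w *= w_step
        size *= 2
    return x

print(fft([1, 0, 0, 0, 0, 0, 0, 0]))       # impulse -> flat spectrum of ones
```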

Keywords: FFT, FPGA, resource optimization, butterfly units

Procedia PDF Downloads 500
4635 Foreign Literature at the Lessons of Individual Reading: Contemporary Methods of Phraseological Units Teaching

Authors: Diana Davletbaeva, Elena Pankratova

Abstract:

This article considers some current questions on the use of foreign literature in the process of teaching phraseological units in schools. It establishes the advantages of literary reading at lessons of individual reading and gives some core points on arrangement and organizational work. The article also touches upon essential points concerning the successful mastering of phraseological units and the improvement of students' knowledge in the sphere of phraseology.

Keywords: foreign languages teaching, literary reading, individual reading, phraseological unit, complex of exercises

Procedia PDF Downloads 356