Search results for: objective function
3035 Modelling of Electron States in Quantum-Wire Systems - Influence of Stochastic Effects on the Confining Potential
Authors: Mikhail Vladimirovich Deryabin, Morten Willatzen
Abstract:
In this work, we address theoretically the influence of red and white Gaussian noise on the electronic energies and eigenstates of cylindrically shaped quantum dots. The stochastic effect can be imagined as resulting from crystal-growth statistical fluctuations in the quantum-dot material composition. In particular, we obtain analytical expressions for the eigenvalue shifts and electronic envelope functions in the k·p formalism due to stochastic variations in the confining band-edge potential. It is shown that white noise in the band-edge potential leaves electronic properties almost unaffected, while red noise may lead to changes in state energies and envelope-function amplitudes of several percent. In the latter case, the ensemble-averaged envelope function decays as a function of distance. It is also shown that, in a stochastic system, constant ensemble-averaged envelope functions are the only bounded solutions for the infinite quantum-wire problem and the energy spectrum is completely discrete. In other words, the infinite stochastic quantum wire behaves, ensemble-averaged, as an atom.
Keywords: cylindrical quantum dots, electronic eigen energies, red and white Gaussian noise, ensemble averaging effects.
3034 An Improved K-Means Algorithm for Gene Expression Data Clustering
Authors: Billel Kenidra, Mohamed Benmohammed
Abstract:
Data mining techniques used for clustering are a subject of active research and assist in biological pattern recognition and the extraction of new knowledge from raw data. Clustering means the act of partitioning an unlabeled dataset into groups of similar objects. Each group, called a cluster, consists of objects that are similar to one another and dissimilar to objects of other groups. Several clustering methods are based on partitional clustering. This category attempts to directly decompose the dataset into a set of disjoint clusters, leading to an integer number of clusters that optimizes a given criterion function. The criterion function may emphasize a local or a global structure of the data, and its optimization is an iterative relocation procedure. The K-Means algorithm is one of the most widely used partitional clustering techniques. Since K-Means is extremely sensitive to the initial choice of centers, and a poor choice may lead to a local optimum that is quite inferior to the global optimum, we propose a strategy for initializing the K-Means centers. The improved K-Means algorithm is compared with the original K-Means, and the results show that its efficiency is significantly improved.
Keywords: Microarray data mining, biological pattern recognition, partitional clustering, k-means algorithm, centroid initialization.
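Since the abstract does not spell out the proposed seeding strategy, the sketch below illustrates one common way to initialize K-Means centers - a k-means++-style farthest-point heuristic - which addresses the sensitivity to the initial choice of centers described above; the function name and the use of scikit-learn are illustrative assumptions, not the paper's method.

```python
import numpy as np

def kmeans_plus_plus_init(X, k, rng=None):
    """Farthest-point style seeding (k-means++): a common way to avoid the
    poor local optima that random center initialization can cause."""
    rng = np.random.default_rng(rng)
    n = X.shape[0]
    centers = [X[rng.integers(n)]]            # first center: uniform at random
    for _ in range(k - 1):
        # squared distance of every point to its nearest chosen center
        d2 = np.min([np.sum((X - c) ** 2, axis=1) for c in centers], axis=0)
        # next center: sampled proportionally to that distance
        centers.append(X[rng.choice(n, p=d2 / d2.sum())])
    return np.array(centers)

# usage: seed scikit-learn's KMeans with the pre-computed centers
# from sklearn.cluster import KMeans
# centers = kmeans_plus_plus_init(expression_matrix, k=10)
# labels = KMeans(n_clusters=10, init=centers, n_init=1).fit_predict(expression_matrix)
```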
3033 Compact Binary Tree Representation of Logic Function with Enhanced Throughput
Authors: Padmanabhan Balasubramanian, C. Ardil
Abstract:
An effective approach for realizing a binary tree structure that represents a combinational logic functionality with enhanced throughput is discussed in this paper. The optimization in maximum operating frequency was achieved through delay minimization, which in turn was made possible by reducing the depth of the binary network. The proposed synthesis methodology has been validated by experimentation with FPGA as the target technology. Although our proposal is technology independent, the heuristic enables better optimization in throughput, even after technology mapping, for Boolean functionality whose reduced CNF form has a lower literal cost than its reduced DNF form at the Boolean equation level. In other cases, our method converges to results similar to those of [12]. The practical results obtained for a variety of case studies demonstrate an improvement in the maximum throughput rate for the Spartan IIE (XC2S50E-7FT256) and Spartan 3 (XC3S50-4PQ144) FPGA logic families of 10.49% and 13.68% respectively. With respect to the LUTs and IOBUFs required for physical implementation of the requisite non-regenerative logic functionality, the proposed method enabled savings of 44.35% and 44.67% respectively over the existing efficient method available in the literature [12].
Keywords: Binary logic tree, FPGA based design, Boolean function, Throughput rate, CNF, DNF.
3032 Multilevel Activation Functions For True Color Image Segmentation Using a Self Supervised Parallel Self Organizing Neural Network (PSONN) Architecture: A Comparative Study
Authors: Siddhartha Bhattacharyya, Paramartha Dutta, Ujjwal Maulik, Prashanta Kumar Nandi
Abstract:
The paper describes a self supervised parallel self organizing neural network (PSONN) architecture for true color image segmentation. The proposed architecture is a parallel extension of the standard single self organizing neural network architecture (SONN) and comprises an input (source) layer of image information, three single self organizing neural network architectures for segmentation of the different primary color components in a color image scene and one final output (sink) layer for fusion of the segmented color component images. Responses to the different shades of color components are induced in each of the three single network architectures (meant for component level processing) by applying a multilevel version of the characteristic activation function, which maps the input color information into different shades of color components, thereby yielding a processed component color image segmented on the basis of the different shades of component colors. The number of target classes in the segmented image corresponds to the number of levels in the multilevel activation function. Since the multilevel version of the activation function exhibits several subnormal responses to the input color image scene information, the system errors of the three component network architectures are computed from some subnormal linear index of fuzziness of the component color image scenes at the individual level. Several multilevel activation functions are employed for segmentation of the input color image scene using the proposed network architecture. Results of the application of the multilevel activation functions to the PSONN architecture are reported on three real life true color images. The results are substantiated empirically with the correlation coefficients between the segmented images and the original images.
Keywords: Colour image segmentation, fuzzy set theory, multi-level activation functions, parallel self-organizing neural network.
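As an illustration of the multilevel activation idea described above, the following sketch maps a normalized color component into a fixed number of graded responses by summing shifted sigmoids; the exact functional form, the level count and the steepness parameter are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

def multilevel_sigmoid(x, levels=4, steepness=10.0):
    """Map normalized inputs in [0, 1] onto `levels` graded responses by
    summing shifted sigmoids -- one transition per class boundary."""
    x = np.asarray(x, dtype=float)
    step = 1.0 / levels
    out = np.zeros_like(x)
    for i in range(1, levels):
        out += 1.0 / (1.0 + np.exp(-steepness * (x - i * step)))
    return out / (levels - 1)          # rescale back to [0, 1]

# each colour component (R, G, B) of a pixel would be passed through such a
# function by its own SONN, yielding `levels` target classes per component
pixel_reds = np.array([0.05, 0.30, 0.55, 0.90])
print(multilevel_sigmoid(pixel_reds, levels=4))
```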
3031 Fashion Consumption for Fashion Innovators: A Study of Fashion Consumption Behavior of Innovators and Non-Innovators
Authors: Vaishali P. Joshi, Pallav Joshi
Abstract:
The objective of this study is to examine the differences between fashion innovators and non-fashion innovators in their fashion consumption behavior, in terms of their pre-purchase, purchase and post-purchase behavior. A questionnaire was distributed to female college students to collect data for the first part of the study. Questions related to fashion innovativeness and fashion consumption behavior were asked. The sample comprised 81 college females aged 18 through 30 who were pursuing a Business Management degree. A series of attitude questions was used to categorize respondents on the Innovativeness Scale. The 32 respondents with a score of 21 and above were designated as fashion innovators and the remainder (49) as non-fashion innovators. Findings showed that significant differences exist between innovators and non-innovators in their fashion consumption behavior. Data were analyzed through frequency distribution tables. Many differences were found in the behavior of innovators and non-innovators in terms of their pre-purchase, actual purchase, and post-purchase behavior.
Keywords: Consumption behavior, fashion, innovativeness, frequency distribution table.
3030 Designing Mobile Application to Motivate Young People to Visit Cultural Heritage Sites
Authors: Yuko Hiramatsu, Fumihiro Sato, Atsushi Ito, Hiroyuki Hatano, Mie Sato, Yu Watanabe, Akira Sasaki
Abstract:
This paper presents a mobile phone application developed for sightseeing in Nikko, one of the cultural world heritages in Japan, using the BLE (Bluetooth Low Energy) beacon. Based on our pre-research, we decided to design our application for young people who walk around the area actively, but know little about the tradition and culture of Nikko. One solution is to construct many information boards to explain; however, it is difficult to construct new guide plates in cultural world heritage sites. The smartphone is a good solution to send such information to such visitors. This application was designed using a combination of the smartphone and beacons, set in the area, so that when a tourist passes near a beacon, the application displays information about the area including a map, historical or cultural information about the temples and shrines, and local shops nearby as well as a bus timetable. It is useful for foreigners, too. In addition, we developed quizzes relating to the culture and tradition of Nikko to provide information based on the Zeigarnik effect, a psychological effect. According to the results of our trials, tourists positively evaluated the basic information and young people who used the quiz function were able to learn the historical and cultural points. This application helped young visitors at Nikko to understand the cultural elements of the site. In addition, this application has a function to send notifications. This function is designed to provide information about the local community such as shops, local transportation companies and information office. The application hopes to also encourage people living in the area, and such cooperation from the local people will make this application vivid and inspire young visitors to feel that the cultural heritage site is still alive today. This is a gateway for young people to learn about a traditional place and understand the gravity of preserving such areas.
Keywords: BLE beacon, smartphone application, Zeigarnik effect, world heritage site, school trip.
3029 Impulse Response Shortening for Discrete Multitone Transceivers using Convex Optimization Approach
Authors: Ejaz Khan, Conor Heneghan
Abstract:
In this paper we propose a new criterion for solving the problem of channel shortening in multi-carrier systems. In a discrete multitone receiver, a time-domain equalizer (TEQ) reduces intersymbol interference (ISI) by shortening the effective duration of the channel impulse response. The minimum mean square error (MMSE) method for TEQ design does not give satisfactory results. In [1], a new criterion for partially equalizing severe ISI channels to reduce the cyclic prefix overhead of the discrete multitone transceiver (DMT), assuming a fixed transmission bandwidth, is introduced. Due to a specific constraint in their method (a unit norm constraint on the target impulse response (TIR)), the freedom to choose the optimum TIR vector is reduced. Better results can be obtained by avoiding the unit norm constraint on the TIR. In this paper we change the cost function proposed in [1] to that of maximizing a determinant subject to a linear matrix inequality (LMI) and a quadratic constraint, and solve the resulting optimization problem. The usefulness of the proposed method is shown with the help of simulations.
Keywords: Equalizer, target impulse response, convex optimization, matrix inequality.
3028 Spatial and Temporal Variability of Fog Over the Indo-Gangetic Plains, India
Authors: Sanjay Kumar Srivastava, Anu Rani Sharma, Kamna Sachdeva
Abstract:
The aim of the paper is to analyze the characteristics of winter fog in terms of its trend and spatial-temporal variability over the Indo-Gangetic Plains (IGP). The study reveals that during the last four and a half decades (1971-2015), an alarming increasing trend in fog frequency has been observed during the winter months of December and January over the study area. The frequency of fog has increased by 118.4% during these peak winter months. It has also been observed that, on average, the central part of the IGP has 66.29% fog days, followed by the west IGP with 41.94% fog days. Further, Empirical Orthogonal Function (EOF) decomposition and Mann-Kendall variation analysis are used to analyze the spatial and temporal patterns of winter fog. The findings have significant implications for further research on fog over the IGP and for formulating robust strategies to adapt to fog variability and mitigate its effects. The decision by the Delhi Government to implement the odd-even scheme restricting the use of private vehicles, in order to reduce pollution and improve air quality, may further increase the alarming trend of fog over Delhi and its surrounding regions of the IGP.
Keywords: Fog, climatology, spatial variability, temporal variability, empirical orthogonal function, visibility, Mann-Kendall test, variation point.
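A minimal sketch of the Mann-Kendall trend test mentioned above, applied to a hypothetical series of annual fog-day counts; the example data are placeholders and the simple variance formula ignores ties.

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Mann-Kendall trend test (no tie correction): returns the S statistic,
    the normal-approximation Z, and a two-sided p-value."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - norm.cdf(abs(z)))
    return s, z, p

# e.g. annual December-January fog-day counts for one IGP grid cell (placeholders)
fog_days = [18, 21, 19, 25, 27, 26, 31, 30, 34, 36]
print(mann_kendall(fog_days))   # positive Z with small p indicates an increasing trend
```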
3027 Collaborative Business Strategy of PTT Energy Trading Co. Ltd. for LNG form of Coal Bed Methane in B2B Transaction to Japanese Shareholder, Especially to Electricity and Power Supply Companies
Authors: Shabrina Pritta Radyanti, Harimukti Wandebori
Abstract:
A research study was conducted with the objective of proposing a collaborative business strategy for an oil and gas trading company, PTT Energy Trading Co., Ltd., with its shareholders, especially electricity and power supply companies, for the LNG form of coal bed methane in B2B transactions. A collaborative business strategy is a strategy of collaborating with other organizations in order to obtain future benefits for both parties, or to achieve the business objective through the collaboration of the business, its strategy and its partners. A structured interview was conducted to collect the required primary data from the company. In addition to the interview, the company's business plan and annual report were collected and analyzed for the company's current condition. As a result, this research recommends a new collaborative strategy that limits the target market, diversifies the product, adopts a new business model, and considers other stakeholders.
Keywords: Collaborative business strategy, trading company, LNG, coal bed methane.
3026 Secure Block-Based Video Authentication with Localization and Self-Recovery
Authors: Ammar M. Hassan, Ayoub Al-Hamadi, Yassin M. Y. Hasan, Mohamed A. A. Wahab, Bernd Michaelis
Abstract:
Because of the great advances in multimedia technology, digital multimedia is vulnerable to malicious manipulations. In this paper, a public-key self-recovery block-based video authentication technique is proposed which can not only precisely localize alterations but also recover the missing data with high reliability. In the proposed block-based technique, multiple description coding (MDC) is used to generate two codes (two descriptions) for each block. Although one block code (one description) is enough to rebuild the altered block, the altered block is rebuilt with better quality using the two block descriptions, so using MDC increases the reliability of recovering data. A block signature is computed using a cryptographic hash function, and a doubly linked chain is utilized to embed the block signature copies and the block descriptions into the LSBs of distant blocks and the block itself. The doubly linked chain scheme gives the proposed technique the capability to thwart vector quantization attacks. In our proposed technique, anyone can check the authenticity of a given video using the public key. The experimental results show that the proposed technique is reliable for detecting, localizing and recovering the alterations.
Keywords: Authentication, hash function, multiple description coding, public key encryption, watermarking.
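A rough sketch of two steps described above - computing a block signature with a cryptographic hash and embedding its bits into the LSBs of a distant block; the MDC descriptions and the public-key signing step are omitted, and the block sizes are illustrative assumptions.

```python
import hashlib
import numpy as np

def block_signature_bits(block):
    """SHA-256 digest of a luminance block, returned as a bit array ready for
    LSB embedding (the public-key signing step is omitted in this sketch)."""
    digest = hashlib.sha256(block.astype(np.uint8).tobytes()).digest()
    return np.unpackbits(np.frombuffer(digest, dtype=np.uint8))

def embed_lsb(target_block, bits):
    """Overwrite the least significant bits of a distant block with the
    signature bits, following the doubly linked chain idea described above."""
    flat = target_block.astype(np.uint8).flatten()
    n = min(len(bits), len(flat))
    flat[:n] = (flat[:n] & 0xFE) | bits[:n]
    return flat.reshape(target_block.shape)

block  = np.random.randint(0, 256, (8, 8))     # stand-in for a video block
target = np.random.randint(0, 256, (16, 16))   # distant block that stores the signature
watermarked = embed_lsb(target, block_signature_bits(block))
```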
3025 Predicting the Impact of the Defect on the Overall Environment in Function Based Systems
Authors: Parvinder S. Sandhu, Urvashi Malhotra, E. Ardil
Abstract:
A lot of work has been done on predicting the fault proneness of software systems. However, the severity of faults is more important than the number of faults existing in the developed system, as the major faults matter most to a developer and need immediate attention. In this paper, we try to predict the level of impact of the existing faults in software systems. Neuro-fuzzy-based predictor models are applied to NASA's public-domain defect dataset, coded in the C programming language. Because Correlation-based Feature Selection (CFS) evaluates the worth of a subset of attributes by considering the individual predictive ability of each feature along with the degree of redundancy between them, CFS is used for selecting the metrics that are most highly correlated with the level of severity of faults. The results are compared with the prediction results of Logistic Model Trees (LMT), which was earlier quoted as the best technique in [17]. The results are recorded in terms of Accuracy, Mean Absolute Error (MAE) and Root Mean Squared Error (RMSE). They show that the neuro-fuzzy-based model provides relatively better prediction accuracy than the other models and hence can be used for modeling the level of impact of faults in function-based systems.
Keywords: Software Metrics, Fuzzy, Neuro-Fuzzy, Software Faults, Accuracy, MAE, RMSE.
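The sketch below shows how the reported MAE, RMSE and accuracy figures can be computed from predicted and actual severity levels; the tolerance-based accuracy definition and the sample values are assumptions for illustration.

```python
import numpy as np

def evaluation_metrics(actual, predicted, tolerance=0.5):
    """MAE, RMSE and a simple accuracy (a prediction counts as correct when
    it falls within `tolerance` of the actual severity level)."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    mae = np.mean(np.abs(actual - predicted))
    rmse = np.sqrt(np.mean((actual - predicted) ** 2))
    accuracy = np.mean(np.abs(actual - predicted) <= tolerance)
    return mae, rmse, accuracy

# severity levels predicted by a model vs. the recorded levels (placeholders)
print(evaluation_metrics([1, 3, 2, 4, 2], [1, 2, 2, 4, 3]))
```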
3024 OXADM Asymmetrical Optical Device: Extending the Application to FTTH System
Authors: Mohammad Syuhaimi Ab-Rahman, Mohd. Saiful Dzulkefly Zan, Mohd Taufiq Mohd Yusof
Abstract:
With the drastic growth in optical communication technology, a lossless, low-crosstalk and multifunction optical switch is most desirable for large-scale photonic networks. To realize such a switch, we have introduced a new optical switch architecture that embeds many functions in a single device. The asymmetrical architecture of the OXADM consists of three parts: selective ports, add/drop operation, and path routing. A selective port permits only the wavelength of interest to pass through and acts as a filter, while the add and drop functions are implemented in the second part of the OXADM architecture. The signals can then be re-routed to any output port and/or undergo an accumulation function which multiplexes all signals onto a single path before exiting at any desired output port. This is done by the path routing operation. The unique features offered by the OXADM have extended its application to Fiber-to-the-Home (FTTH) technology, where the OXADM is used as a wavelength management element in the Optical Line Terminal (OLT). Each port is assigned specific operating wavelengths, and dynamic routing management ensures that no traffic congestion occurs in the OLT.
Keywords: OXADM, asymmetrical architecture, optical switch, OLT, FTTH.
3023 Prediction of Slump in Concrete using Artificial Neural Networks
Authors: V. Agrawal, A. Sharma
Abstract:
High Strength Concrete (HSC) is defined as concrete that meets a special combination of performance and uniformity requirements that cannot be achieved routinely using conventional constituents and normal mixing, placing, and curing procedures. It is a highly complex material, which makes modeling its behavior a very difficult task. This paper aims to show the possible applicability of Neural Networks (NN) to predicting the slump of HSC. Neural Network models are constructed, trained and tested using the available test data of 349 different HSC mix designs gathered from a particular Ready Mix Concrete (RMC) batching plant. The most versatile Neural Network model is selected to predict the slump in concrete. The data used in the Neural Network models are arranged in a format of eight input parameters covering cement, fly ash, sand, coarse aggregate (10 mm), coarse aggregate (20 mm), water, superplasticizer and water/binder ratio. Furthermore, to test the accuracy of slump prediction, the final selected model is used on the data of 40 different HSC mix designs taken from another batching plant. The results are compared on the basis of the error function (or performance function).
Keywords: Artificial neural networks, concrete, prediction of slump, slump in concrete.
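A minimal sketch of the kind of neural-network slump predictor described above, assuming a scikit-learn multilayer perceptron with the eight input parameters listed; the network size, training settings and the randomly generated placeholder data are assumptions, not the paper's selected model or dataset.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_absolute_error, mean_squared_error

# columns: cement, fly ash, sand, CA 10 mm, CA 20 mm, water, superplasticizer, w/b ratio
X_train = np.random.rand(349, 8)                  # placeholders for the 349 RMC mix designs
y_train = 100 + 100 * np.random.rand(349)         # slump in mm (placeholder)
X_test  = np.random.rand(40, 8)                   # the 40 mixes from the second plant
y_test  = 100 + 100 * np.random.rand(40)

scaler = StandardScaler().fit(X_train)
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
model.fit(scaler.transform(X_train), y_train)

pred = model.predict(scaler.transform(X_test))
print("MAE:", mean_absolute_error(y_test, pred))
print("RMSE:", mean_squared_error(y_test, pred) ** 0.5)
```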
3022 Sound Instance: Art, Perception and Composition through Soundscapes
Authors: Ricardo Mestre
Abstract:
The soundscape is an agglomeration of the sounds available in the world, associated with different contexts and origins, and is a theme studied by various areas of knowledge seeking to understand its benefits and consequences and to contribute to the welfare of society and other ecosystems. With the objective of a greater recognition of sound reality, through the selection and differentiation of sounds, soundscape studies focus on contributing to a better tuning of the world and to the balance and well-being of humanity. The sound environment, produced and created in various ways, can provide many sources of information, contributing to the orientation of the human being, alerting and manipulating him during his daily journey, like the small notifications received on a cell phone or another device with these features. In this way, it becomes possible to give sound its due importance in relation to the processes of individual representation, in social, professional and emotional life. Ensuring an individual representation means providing the human being with new tools for the long process of reflection, by recognizing his environment, the sounds that represent him, and his perspective on his own function within it. In order to provide more information about the importance of the sound environment inherent to individual reality, we introduce the term sound instance to refer to the whole sound field existing in the individual's life, divided into four distinct subfields that are essential to the process of individual representation: sound matrix, sound cycles, sound traces and sound interference. Together with volunteers, we were able to create six representations of sound instances, based on each individual's perception of his or her life, focusing on the present, past and future. With this investigation it was possible to determine that the sound instance works as a tool for self-recognition, considering the volunteers' statements about the experience, reflecting on the three timelines based on memories, thoughts and wishes.
Keywords: Sound instance, soundscape, sound art, self-recognition.
3021 A New Class χ2 (M, A, Δ) of the Double Difference Sequences of Fuzzy Numbers
Authors: N. Subramanian, U. K. Misra
Abstract:
The aim of this paper is to introduce and study a new concept of strong double χ2(M, A, Δ) sequence spaces of fuzzy numbers, and to examine some properties of the resulting sequence spaces.
Keywords: Modulus function, fuzzy number, metric space.
3020 Semi-Automatic Method to Assist Expert for Association Rules Validation
Authors: Amdouni Hamida, Gammoudi Mohamed Mohsen
Abstract:
In order to help the expert validate association rules extracted from data, some quality measures have been proposed in the literature. We distinguish two categories: objective and subjective measures. The first depends on a fixed threshold and on the quality of the data from which the rules are extracted. The second consists of providing the expert with tools to explore and visualize rules during the evaluation step. However, the number of extracted rules to validate remains high, so manually validating the rules is very hard. To solve this problem, we propose in this paper a semi-automatic method to assist the expert during association rule validation. Our method uses rule-based classification as follows: (i) we transform association rules into classification rules (classifiers); (ii) we use the generated classifiers for data classification; (iii) we visualize association rules with their classification quality to give the expert an overview and to assist him during the validation process.
Keywords: Association rules, rule-based classification, classification quality, validation.
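A toy sketch of steps (i)-(ii) above - using association rules, once recast as classification rules, to classify a record; the rule format, the confidence-ordered firing policy and the example rules are assumptions for illustration.

```python
def classify_with_rules(record, rules, default_class="unknown"):
    """Apply association rules transformed into classification rules.
    Each rule is (antecedent_items, predicted_class, confidence); the
    highest-confidence rule whose antecedent is contained in the record
    fires, otherwise the default class is returned."""
    for antecedent, label, _conf in sorted(rules, key=lambda r: -r[2]):
        if antecedent <= record:          # set inclusion: all items present
            return label
    return default_class

rules = [
    ({"bread", "butter"}, "buys_milk", 0.92),
    ({"beer"},            "buys_chips", 0.75),
]
print(classify_with_rules({"bread", "butter", "jam"}, rules))   # -> buys_milk
```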
3019 Two Area Power Systems Economic Dispatch Problem Solving Considering Transmission Capacity Constraints
Authors: M. Zarei, A. Roozegar, R. Kazemzadeh, J.M. Kauffmann
Abstract:
This paper describes an efficient and practical method for the economic dispatch problem in one- and two-area electrical power systems, considering the capacity constraint of the tie transmission line. The direct search method (DSM) is used with equality and inequality constraints on the production units, for any kind of fuel cost function. With this method, it is possible to use several inequality constraints without difficulty for complex cost functions or when the cost function derivative is unavailable. To minimize the total number of iterations in the searching process, multi-level convergence is incorporated in the DSM. An enhanced direct search method (EDSM) for the two-area power system is investigated, and an initial calculation step size that leads to fewer iterations, and hence less calculation time, is presented. The effect of the tie-line transmission capacity between areas on the economic dispatch problem and on total generation cost is studied; line compensation and combined active and reactive power dispatch are proposed to overcome the high generation costs for this multi-area system.
Keywords: Economic dispatch, power system operation, direct search method, transmission capacity constraint.
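The sketch below sets up a small two-area dispatch problem with a power-balance constraint, unit limits and a tie-line capacity limit; it uses a generic SciPy solver rather than the paper's DSM/EDSM, and all cost coefficients, loads and limits are made-up illustrative values.

```python
import numpy as np
from scipy.optimize import minimize

# quadratic fuel-cost coefficients a*P^2 + b*P + c for two units per area (placeholders)
a = np.array([0.004, 0.006, 0.005, 0.007])
b = np.array([5.3, 5.5, 5.8, 5.6])
c = np.array([500.0, 400.0, 450.0, 420.0])
bounds = [(50.0, 300.0)] * 4                  # unit limits in MW
demand = np.array([350.0, 400.0])             # area 1 and area 2 loads in MW
tie_capacity = 100.0                          # MW limit on the tie line

def cost(P):
    return np.sum(a * P**2 + b * P + c)

constraints = [
    # total generation must meet total demand
    {"type": "eq", "fun": lambda P: P.sum() - demand.sum()},
    # tie-line flow = area-1 surplus, limited in both directions
    {"type": "ineq", "fun": lambda P: tie_capacity - (P[:2].sum() - demand[0])},
    {"type": "ineq", "fun": lambda P: tie_capacity + (P[:2].sum() - demand[0])},
]
res = minimize(cost, x0=np.full(4, 200.0), bounds=bounds,
               constraints=constraints, method="SLSQP")
print(res.x, res.fun)
```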
3018 Factors of Effective Business Software Systems Development and Enhancement Projects Work Effort Estimation
Authors: Beata Czarnacka-Chrobot
Abstract:
The majority of Business Software Systems (BSS) Development and Enhancement Projects (D&EP) fail to meet the criteria of their effectiveness, which leads to considerable financial losses. One of the fundamental reasons for such projects' exceptionally low success rate is improperly derived estimates of their costs and time. In the case of BSS D&EP, these attributes are determined by the work effort, while reliable and objective effort estimation still appears to be a great challenge to software engineering. Thus this paper aims to present the most important synthetic conclusions coming from the author's own studies concerning the main factors of effective BSS D&EP work effort estimation. Thanks to rational investment decisions made on the basis of reliable and objective criteria, it is possible to reduce the losses caused not only by abandoned projects but also by large-scale overrunning of the time and costs of BSS D&EP execution.
Keywords: Benchmarking data, business software systems development and enhancement projects, effort estimation, software engineering economics, software functional size measurement.
3017 Application of Stochastic Models to Annual Extreme Streamflow Data
Authors: Karim Hamidi Machekposhti, Hossein Sedghi
Abstract:
This study was designed to find the best stochastic model (using time series analysis) for the annual extreme streamflow (peak and maximum streamflow) of the Karkheh River in Iran. The Auto-Regressive Integrated Moving Average (ARIMA) model was used to simulate these series and forecast future values. For the analysis, annual extreme streamflow data of the Jelogir Majin station (above the Karkheh dam reservoir) for the years 1958-2005 were used. A visual inspection of the time plot shows a slight increasing trend; therefore, the series is not stationary. The non-stationarity observed in the Auto-Correlation Function (ACF) and Partial Auto-Correlation Function (PACF) plots of annual extreme streamflow was removed using first-order differencing (d=1) in order to develop the ARIMA model. Interestingly, the ARIMA(4,1,1) model developed was found to be the most suitable for simulating the annual extreme streamflow of the Karkheh River. The model was found to be appropriate for forecasting ten years of annual extreme streamflow and for assisting decision makers in establishing priorities for water demand. The Statistical Analysis System (SAS) and Statistical Package for the Social Sciences (SPSS) codes were used to determine the best model for this series.
Keywords: Stochastic models, ARIMA, extreme streamflow, Karkheh River.
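A minimal sketch of fitting an ARIMA(4,1,1) model and producing a ten-year forecast with statsmodels, as described above; the streamflow values shown are random placeholders, not the Jelogir Majin record.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# annual peak streamflow, 1958-2005 (placeholder values, one per year)
years = pd.date_range("1958", periods=48, freq="YS")
peaks = pd.Series(800 + 200 * np.random.default_rng(0).standard_normal(48),
                  index=years)

# order=(4, 1, 1): four AR terms, first-order differencing, one MA term
model = ARIMA(peaks, order=(4, 1, 1)).fit()
print(model.summary())

# forecast the next ten years of annual extreme streamflow
print(model.forecast(steps=10))
```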
3016 An Empirical Study on Switching Activation Functions in Shallow and Deep Neural Networks
Authors: Apoorva Vinod, Archana Mathur, Snehanshu Saha
Abstract:
Though there exists a plethora of Activation Functions (AFs) used in single and multiple hidden layer Neural Networks (NN), their behavior always raised curiosity, whether used in combination or singly. The popular AFs – Sigmoid, ReLU, and Tanh – have performed prominently well for shallow and deep architectures. Most of the time, AFs are used singly in multi-layered NN, and, to the best of our knowledge, their performance is never studied and analyzed deeply when used in combination. In this manuscript, we experiment on multi-layered NN architecture (both on shallow and deep architectures; Convolutional NN and VGG16) and investigate how well the network responds to using two different AFs (Sigmoid-Tanh, Tanh-ReLU, ReLU-Sigmoid) used alternately against a traditional, single (Sigmoid-Sigmoid, Tanh-Tanh, ReLU-ReLU) combination. Our results show that on using two different AFs, the network achieves better accuracy, substantially lower loss, and faster convergence on 4 computer vision (CV) and 15 Non-CV (NCV) datasets. When using different AFs, not only was the accuracy greater by 6-7%, but we also accomplished convergence twice as fast. We present a case study to investigate the probability of networks suffering vanishing and exploding gradients when using two different AFs. Additionally, we theoretically showed that a composition of two or more AFs satisfies Universal Approximation Theorem (UAT).
Keywords: Activation Function, Universal Approximation function, Neural Networks, convergence.
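A small sketch of the alternating-activation idea, assuming PyTorch: hidden layers cycle through two activation functions (e.g. Tanh-ReLU) instead of repeating a single one; the layer widths and example dimensions are illustrative assumptions, not the architectures evaluated in the study.

```python
import torch.nn as nn

def make_mlp(in_dim, hidden, out_dim, acts=(nn.Tanh, nn.ReLU)):
    """Small MLP whose hidden layers alternate between the given activation
    functions (Tanh-ReLU-Tanh-...), as opposed to repeating a single AF."""
    layers, prev = [], in_dim
    for i, width in enumerate(hidden):
        layers += [nn.Linear(prev, width), acts[i % len(acts)]()]
        prev = width
    layers.append(nn.Linear(prev, out_dim))
    return nn.Sequential(*layers)

mixed  = make_mlp(784, [256, 128, 64], 10, acts=(nn.Tanh, nn.ReLU))   # alternating AFs
single = make_mlp(784, [256, 128, 64], 10, acts=(nn.ReLU,))           # traditional single AF
print(mixed)
```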
3015 Clarification of the Essential of Life Cycle Cost upon Decision-Making Process: An Empirical Study in Building Projects
Authors: Ayedh Alqahtani, Andrew Whyte
Abstract:
Life Cycle Cost (LCC) is one of the goals and key pillars of construction management science because it comprises many of the necessary functions and processes that assist organisations and agencies in achieving their goals. It has therefore become important to design and control assets during their whole life cycle, from the design and planning phase through to the disposal phase. Life cycle cost analysis (LCCA) aims to improve decision making in the ownership of assets by taking into account all the cost elements attributable to the asset throughout its life. Current application of the LCC approach is impractical because of misunderstanding of its advantages. The main objective of this research is to show the relationship between capital cost and long-term running costs. One hundred and thirty-eight actual building projects in the United Kingdom (UK) were used in order to achieve and measure the above-mentioned objective of the study. The results show that LCC is one of the most significant tools and should be considered in the decision-making process.
Keywords: Building projects, Capital cost, Life cycle cost, Maintenance costs, Operation costs.
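As a simple illustration of the capital-versus-running-cost trade-off discussed above, the sketch below compares two options by their life cycle cost; the present-value discounting convention and all figures are assumptions, since the abstract does not specify a costing model.

```python
def life_cycle_cost(capital, annual_running, discount_rate, years):
    """Capital cost plus the present value of annual running costs
    (maintenance, operation, energy) over the asset's life -- the basic
    comparison used when a cheaper-to-build option costs more to run."""
    pv_running = sum(annual_running / (1 + discount_rate) ** t
                     for t in range(1, years + 1))
    return capital + pv_running

# option A: low capital cost, high running cost; option B: the reverse (placeholder figures)
print(life_cycle_cost(capital=2_000_000, annual_running=180_000,
                      discount_rate=0.04, years=30))
print(life_cycle_cost(capital=2_600_000, annual_running=120_000,
                      discount_rate=0.04, years=30))
```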
3014 Cable Tension Control and Analysis of Reel Transparency for 6-DOF Haptic Foot Platform on a Cable-Driven Locomotion Interface
Authors: Martin J.-D. Otis, Thien-Ly Nguyen-Dang, Thierry Laliberte, Denis Ouellet, Denis Laurendeau, Clement Gosselin
Abstract:
A Cable-Driven Locomotion Interface provides a low-inertia haptic interface and is used as a way of enabling the user to walk and interact with virtual surfaces. These surfaces generate Cartesian wrenches which must be optimized for each motorized reel in order to reproduce a haptic sensation in both feet. However, the use of wrench control requires a measurement of the cable tensions applied to the moving platform. The latter measurement may be inaccurate if it is based on sensors located near the reel. Moreover, friction hysteresis from the reel moving parts needs to be compensated for with an evaluation of the low angular velocity of the motor shaft. Also, the pose of the platform is not known precisely due to cable sagging and mechanical deformation. This paper presents a non-ideal motorized reel design with a corresponding control strategy that aims at overcoming the aforementioned issues. A transfer function of the reel, based on frequency responses as a function of cable tension and cable length, is presented together with an optimal adaptive PIDF controller. Finally, a hybrid position/tension control is discussed with an analysis of its stability for achieving complete functionality of the haptic platform.
Keywords: Haptic, reel, transparency, cable, tension, control.
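A rough sketch of a discrete PIDF loop of the kind mentioned above - a PID controller with a first-order filter on the derivative term - regulating one reel's cable tension; the gains, sample time and variable names are illustrative assumptions, not the paper's tuned controller.

```python
class PIDF:
    """Discrete PID controller with a first-order low-pass filter on the
    derivative term (PIDF), sketched for a single reel's tension loop."""
    def __init__(self, kp, ki, kd, tau, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.tau, self.dt = tau, dt          # derivative filter time constant, sample time
        self.integral = 0.0
        self.d_filt = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        raw_d = (error - self.prev_error) / self.dt
        # low-pass filter the derivative to reject tension-sensor noise
        alpha = self.dt / (self.tau + self.dt)
        self.d_filt += alpha * (raw_d - self.d_filt)
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * self.d_filt

# regulate one reel's cable tension (N) toward the value requested by the wrench optimization
loop = PIDF(kp=40.0, ki=15.0, kd=2.0, tau=0.01, dt=0.001)
motor_torque_cmd = loop.update(setpoint=120.0, measured=112.5)
```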
3013 Sliding Joints and Soil-Structure Interaction
Authors: Radim Cajka, Pavlina Mateckova, Martina Janulikova, Marie Stara
Abstract:
Use of a sliding joint is an effective method to decrease the stress in a foundation structure where there is horizontal deformation of the subsoil (areas affected by underground mining) or horizontal deformation of the foundation structure itself (pre-stressed foundations, creep, shrinkage, temperature deformation). A convenient material for a sliding joint is a bitumen asphalt belt. Experiments on different types of bitumen belts were undertaken at the Faculty of Civil Engineering, VSB Technical University of Ostrava, in 2008. This year an extension of the 2008 experiments is in progress, and the shear resistance of a sliding joint is being tested as a function of temperature in a temperature-controlled room. In this paper, experimental results of temperature-dependent shear resistance are presented. The outcome of the experiments should be the sliding joint shear resistance as a function of deformation velocity and temperature. This relationship is used for numerical analysis of the stress/strain relation between the foundation structure and the subsoil. Using a rheological sliding joint could lead to a decrease in the amount of reinforcement, contribute to higher reliability of the foundation structure and thus enable the design of more durable and sustainable building structures.
Keywords: Pre-stressed foundations, sliding joint, soil-structure interaction, subsoil horizontal deformation.
3012 An Energy Efficient Cluster Formation Protocol with Low Latency In Wireless Sensor Networks
Authors: A. Allirani, M. Suganthi
Abstract:
Data gathering is an essential operation in wireless sensor network applications, so energy-efficient techniques are required to increase the lifetime of the network. Clustering is an effective technique to improve the energy efficiency and network lifetime of wireless sensor networks. In this paper, an energy-efficient cluster formation protocol is proposed with the objective of achieving low energy dissipation and latency without sacrificing application-specific quality. The objective is achieved by applying randomized, adaptive, self-configuring cluster formation and localized control for data transfers. It involves application-specific data processing, such as data aggregation or compression. The cluster formation algorithm allows each node to make independent decisions, so as to generate good clusters in the end. Simulation results show that the proposed protocol uses minimal energy and latency for cluster formation, thereby reducing the overhead of the protocol.
Keywords: Sensor networks, low latency, energy sorting protocol, data processing, cluster formation.
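The abstract does not detail the election rule, so the sketch below shows a LEACH-style randomized, self-configuring cluster-head election as one example of the approach described; the threshold formula and parameters are assumptions borrowed from LEACH, not the proposed protocol.

```python
import random

def elect_cluster_heads(nodes, p=0.05, round_no=0):
    """Randomized, self-configuring cluster-head election (LEACH-style
    threshold): each node decides independently, so no global coordination
    or extra control traffic is needed."""
    period = int(1 / p)
    threshold = p / (1 - p * (round_no % period))
    heads = []
    for node in nodes:
        # nodes that served recently as heads sit out the rest of the period
        if node.get("last_head_round", -period) > round_no - period:
            continue
        if random.random() < threshold:
            node["last_head_round"] = round_no
            heads.append(node["id"])
    return heads

nodes = [{"id": i} for i in range(100)]
print(elect_cluster_heads(nodes, p=0.05, round_no=3))
```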
3011 Computational Method for Annotation of Protein Sequence According to Gene Ontology Terms
Authors: Razib M. Othman, Safaai Deris, Rosli M. Illias
Abstract:
Annotation of a protein sequence is pivotal for the understanding of its function. The accuracy of manual annotation provided by curators is still questionable, as it carries lower evidence strength and is a hard, time-consuming task. A number of computational methods and tools have been developed to tackle this challenging task. However, they require high-cost hardware, are difficult for bioscientists to set up, or depend on time-intensive and blind sequence similarity searches such as the Basic Local Alignment Search Tool. This paper introduces a new method of assigning highly correlated Gene Ontology terms of annotated protein sequences to partially annotated or newly discovered protein sequences. The method is fully based on Gene Ontology data and annotations. Two problems were identified in realizing this method. The first relates to splitting the single monolithic Gene Ontology RDF/XML file into a set of smaller files that are easy to access and process; these files can then be enriched with protein sequences and Inferred from Electronic Annotation evidence associations. The second involves searching for a set of Gene Ontology terms that are semantically similar to a given query. The details of the macro and micro problems involved and their solutions, including the objective of this study, are described. This paper also describes protein sequence annotation and the Gene Ontology. The methodology of this study and the Gene Ontology-based protein sequence annotation tool, namely extended UTMGO, are presented. Furthermore, its basic version, a Gene Ontology browser based on semantic similarity search, is also introduced.
Keywords: Automatic clustering, bioinformatics tool, gene ontology, protein sequence annotation, semantic similarity search.
3010 A High-Frequency Low-Power Low-Pass-Filter-Based All-Current-Mirror Sinusoidal Quadrature Oscillator
Authors: A. Leelasantitham, B. Srisuchinwong
Abstract:
A high-frequency low-power sinusoidal quadrature oscillator is presented through the use of two 2nd-order low-pass current-mirror (CM)-based filters, a 1st-order CM low-pass filter and a CM bilinear transfer function. The technique is relatively simple and based on (i) the inherent time constants of current mirrors, i.e. the internal capacitances and the transconductance of a diode-connected NMOS, and (ii) a simple negative resistance RN formed by a resistor load RL of a current mirror. Neither external capacitances nor inductances are required. As a particular example, a 1.9-GHz, 0.45-mW, 2-V CMOS low-pass-filter-based all-current-mirror sinusoidal quadrature oscillator is demonstrated. The oscillation frequency (f0) is 1.9 GHz and is current-tunable over a range of 370 MHz or 21.6%. The power consumption is approximately 0.45 mW. The amplitude matching and the quadrature phase matching are better than 0.05 dB and 0.15°, respectively. Total harmonic distortion (THD) is less than 0.3%. At 2 MHz offset from the 1.9 GHz carrier, the carrier-to-noise ratio (CNR) is 90.01 dBc/Hz, whilst the figure of merit called the normalized carrier-to-noise ratio (CNRnorm) is 153.03 dBc/Hz. The ratio of the oscillation frequency (f0) to the unity-gain frequency (fT) of a transistor is 0.25. Comparisons to other approaches are also included.
Keywords: Sinusoidal quadrature oscillator, low-pass-filter-based, current-mirror bilinear transfer function, all-current-mirror, negative resistance, low power, high frequency, low distortion.
3009 An Identification Method of Geological Boundary Using Elastic Waves
Authors: Masamitsu Chikaraishi, Mutsuto Kawahara
Abstract:
This paper focuses on a technique for identifying the geological boundary of the ground strata in front of a tunnel excavation site using the first-order adjoint method based on optimal control theory. The geological boundary is defined as the boundary between layers of different elastic modulus. In tunnel excavation, it is important to estimate the ground conditions ahead of the cutting face beforehand, since excavating into weak strata or fault fracture zones may prolong the construction work and cause human suffering. A theory for determining the geological boundary of the ground numerically is investigated, employing excavation blasts and their vibration waves as the observation references. According to optimal control theory, the performance function, described by the square sum of the residuals between computed and observed velocities, is minimized; the boundary layer is determined by minimizing this performance function. The elastic analysis governed by the Navier equation is carried out, treating the ground as an elastic body with linear viscous damping. To identify the boundary, the gradient of the performance function with respect to the geological boundary can be calculated using the adjoint equation. The weighted gradient method is effectively applied to the minimization algorithm. To solve the governing and adjoint equations, the Galerkin finite element method and the average acceleration method are employed for the spatial and temporal discretizations, respectively. Based on the method presented in this paper, the boundaries between three different strata can be identified. For the numerical studies, the Suemune tunnel excavation site is employed. First, the blasting force is identified in order to improve the accuracy of the analysis; the geological boundary is then identified after the blasting force has been estimated. With this identification procedure, numerical analysis results that almost correspond with the observation data were obtained.
Keywords: Parameter identification, finite element method, average acceleration method, first order adjoint equation method, weighted gradient method, geological boundary, navier equation, optimal control theory.
3008 Dynamic Soil-Structure Interaction Analysis of Reinforced Concrete Buildings
Authors: Abdelhacine Gouasmia, Abdelhamid Belkhiri, Allaeddine Athmani
Abstract:
The objective of this paper is to evaluate the effects of soil-structure interaction (SSI) on the modal characteristics and the dynamic response of current structures. The focus is on the overall behaviour of a real five-storey reinforced concrete (R/C) building typical of those encountered in Algeria. Sensitivity studies are undertaken to study the effects of the frequency content of the input motion, the frequency of the soil-structure system, and the rigidity and depth of the soil layer on the dynamic response of such structures. This investigation indicates that the rigidity of the soil layer is the predominant factor in soil-structure interaction, and increasing it definitely reduces the deformation in the R/C structure. On the other hand, increasing the period of the underlying soil will cause an increase in the lateral displacements at storey levels and create irregularity in the distribution of storey shears. Possible resonance between the frequency content of the input motion and that of the soil could also play an important role in increasing the structural response.
3007 The Classification Performance in Parametric and Nonparametric Discriminant Analysis for a Class-Unbalanced Data of Diabetes Risk Groups
Authors: Lily Ingsrisawang, Tasanee Nacharoen
Abstract:
The problems arising from unbalanced data sets generally appear in real-world applications. Due to unequal class distributions, many researchers have found that the performance of existing classifiers tends to be biased towards the majority class. The k-nearest neighbors' nonparametric discriminant analysis is a method that was proposed for classifying unbalanced classes with good performance. In this study, discriminant analysis methods are of interest for investigating misclassification error rates for class-imbalanced data of three diabetes risk groups. The purpose of this study was to compare the classification performance of parametric discriminant analysis and nonparametric discriminant analysis in a three-class classification of class-imbalanced data of diabetes risk groups. Data from a project maintaining healthy conditions for 599 employees of a government hospital in Bangkok were obtained for the classification problem. The employees were divided into three diabetes risk groups: non-risk (90%), risk (5%), and diabetic (5%). The original data, including the variables of diabetes risk group, age, gender, blood glucose, and BMI, were analyzed and bootstrapped for 50 and 100 samples, 599 observations per sample, for additional estimation of the misclassification error rate. Each data set was explored for departure from multivariate normality and for equality of the covariance matrices of the three risk groups. Both the original data and the bootstrap samples showed non-normality and unequal covariance matrices. The parametric linear discriminant function, the quadratic discriminant function, and the nonparametric k-nearest neighbors' discriminant function were performed over 50 and 100 bootstrap samples and applied to the original data. In searching for the optimal classification rule, the prior probabilities were set both to equal proportions (0.33:0.33:0.33) and to unequal proportions of (0.90:0.05:0.05), (0.80:0.10:0.10), and (0.70:0.15:0.15). The results from the 50 and 100 bootstrap samples indicated that the k-nearest neighbors approach with k=3 or k=4 and prior probabilities of non-risk:risk:diabetic of 0.90:0.05:0.05 or 0.80:0.10:0.10 gave the smallest misclassification error rate. The k-nearest neighbors approach is therefore suggested for classifying three-class-imbalanced data of diabetes risk groups.
Keywords: Bootstrap, diabetes risk groups, error rate, k-nearest neighbors.
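A minimal sketch of the comparison described above - LDA and QDA with specified priors versus a 3-nearest-neighbors classifier, with a bootstrap estimate of the error rate; the data are random placeholders, the evaluation scheme is simplified, and the k-NN here does not incorporate the prior weighting used in the study.

```python
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.utils import resample

priors = [0.90, 0.05, 0.05]          # non-risk : risk : diabetic
X = np.random.rand(599, 4)           # age, gender, blood glucose, BMI (placeholders)
y = np.random.choice([0, 1, 2], size=599, p=priors)

classifiers = {
    "LDA":  LinearDiscriminantAnalysis(priors=priors),
    "QDA":  QuadraticDiscriminantAnalysis(priors=priors),
    "3-NN": KNeighborsClassifier(n_neighbors=3),
}

# bootstrap estimate of the misclassification error rate (50 resamples)
for name, clf in classifiers.items():
    errors = []
    for b in range(50):
        Xb, yb = resample(X, y, n_samples=599, random_state=b)
        errors.append(1 - clf.fit(Xb, yb).score(X, y))
    print(name, "mean error rate:", np.mean(errors))
```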
3006 Optimal Design of Selective Excitation Pulses in Magnetic Resonance Imaging using Genetic Algorithms
Authors: Mohammed A. Alolfe, Abou-Bakr M. Youssef, Yasser M. Kadah
Abstract:
The proper design of RF pulses in magnetic resonance imaging (MRI) has a direct impact on the quality of acquired images and is needed for many applications. Several techniques have been proposed to obtain the RF pulse envelope given the desired slice profile. Unfortunately, these techniques do not take into account the limitations of practical implementation, such as limited amplitude resolution. Moreover, implementing constraints for special RF pulses is not possible with most techniques. In this work, we propose an approach for designing optimal RF pulses under, in theory, any constraints. The new technique poses the RF pulse design problem as a combinatorial optimization problem and uses efficient techniques from this area, such as genetic algorithms (GA), to solve it. In particular, an objective function is proposed as the norm of the difference between the desired profile and the one obtained by solving the Bloch equations for the current RF pulse design values. The proposed approach is verified using analytical-solution-based RF simulations and compared to previous methods such as the Shinnar-Le Roux (SLR) method. The options and parameters that control the genetic algorithm, which can significantly affect its performance, were analyzed, selected, and tested to obtain the best results and compared to previous works in this field. The results show a significant improvement over conventional design techniques, identify the best GA options and parameters for improving on previous works, and suggest the practicality of using the new technique for important applications such as slice selection at large flip angles, unconventional spatial encoding, and other clinical uses.
Keywords: Selective excitation, magnetic resonance imaging, combinatorial optimization, pulse design.
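A toy sketch of the GA formulation described above: the objective is the norm of the difference between a desired slice profile and the profile produced by a candidate pulse. A small-tip-angle Fourier approximation stands in for the Bloch-equation solver, and all GA operators and parameters are illustrative assumptions, not the settings selected in the study.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                        # number of RF pulse samples
z = np.linspace(-1, 1, 128)
desired = (np.abs(z) < 0.25).astype(float)    # ideal rectangular slice profile

def profile_from_pulse(pulse):
    """Stand-in for a Bloch-equation simulator: in the small-tip-angle regime
    the slice profile is roughly the Fourier transform of the pulse envelope."""
    spectrum = np.fft.fftshift(np.abs(np.fft.fft(pulse, len(z))))
    return spectrum / (spectrum.max() + 1e-12)

def fitness(pulse):                           # objective: norm of the profile error
    return np.linalg.norm(profile_from_pulse(pulse) - desired)

pop = rng.normal(0, 0.1, (80, N))             # initial population of pulse envelopes
for generation in range(200):
    scores = np.array([fitness(p) for p in pop])
    parents = pop[np.argsort(scores)[:20]]            # selection: keep the best 20
    children = []
    while len(children) < len(pop) - len(parents):
        a, b = parents[rng.integers(20, size=2)]
        cut = rng.integers(1, N)                      # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        child += rng.normal(0, 0.01, N) * (rng.random(N) < 0.1)   # sparse mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmin([fitness(p) for p in pop])]
print("best profile error:", fitness(best))
```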