Search results for: reliability measure
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1752

342 A Stochastic Diffusion Process Based on the Two-Parameters Weibull Density Function

Authors: Meriem Bahij, Ahmed Nafidi, Boujemâa Achchab, Sílvio M. A. Gama, José A. O. Matos

Abstract:

Stochastic modeling concerns the use of probability to model real-world situations in which uncertainty is present. The purpose of stochastic modeling is therefore to estimate the probability of outcomes within a forecast, i.e. to predict what conditions or decisions might occur under different situations. In the present study, we present a model of a stochastic diffusion process based on the bi-Weibull distribution function (its trend is proportional to the bi-Weibull probability density function). In general, the Weibull distribution has the ability to assume the characteristics of many different types of distributions. This has made it very popular among engineers and quality practitioners, who consider it the most commonly used distribution for studying problems such as modeling reliability data, accelerated life testing, and maintainability modeling and analysis. In this work, we start by obtaining the probabilistic characteristics of this model, such as the explicit expression of the process, its trends, and its distribution, by transforming the diffusion process into a Wiener process as shown in Ricciardi's theorem. Then, we develop the statistical inference of this model using the maximum likelihood methodology. Finally, we analyse with simulated data the computational problems associated with the parameters, an issue of great importance for application to real data, using convergence analysis methods. Overall, the use of a stochastic model reflects only a pragmatic decision on the part of the modeler. Given the data that are available and the universe of models known to the modeler, this model represents the best currently available description of the phenomenon under consideration.
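
As a rough illustration of the maximum likelihood step described above, the sketch below fits a two-parameter Weibull density to synthetic lifetime data using SciPy; the parameter values and data are assumptions for demonstration only, not the authors' simulated diffusion data.

```python
# Illustrative sketch (not the authors' code): fitting a two-parameter Weibull
# density to lifetime data by maximum likelihood, using synthetic data.
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(0)
lifetimes = weibull_min.rvs(c=1.8, scale=1000.0, size=500, random_state=rng)  # shape 1.8, scale 1000

# Maximum likelihood estimation with the location fixed at zero
# (two-parameter form: shape c and scale lambda).
shape_hat, loc_hat, scale_hat = weibull_min.fit(lifetimes, floc=0)
print(f"estimated shape = {shape_hat:.3f}, estimated scale = {scale_hat:.1f}")
```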

Keywords: Diffusion process, discrete sampling, likelihood estimation method, simulation, stochastic diffusion equation, trend functions, two-parameter Weibull density function.

341 A Method for Iris Recognition Based on 1D Coiflet Wavelet

Authors: Agus Harjoko, Sri Hartati, Henry Dwiyasa

Abstract:

There have been numerous implementations of security systems using biometrics, especially for identification and verification cases. An example of a pattern used in biometrics is the iris pattern of the human eye, which is considered unique for each person. Using the iris pattern, however, poses the problem of encoding the human iris. In this research, an efficient iris recognition method is proposed. In the proposed method, iris segmentation is based on the observation that the pupil has lower intensity than the iris, and the iris has lower intensity than the sclera. By detecting the boundary between the pupil and the iris and the boundary between the iris and the sclera, the iris area can be separated from the pupil and sclera. A step is taken to reduce the effect of eyelashes and specular reflection of the pupil. Then a four-level Coiflet wavelet transform is applied to the extracted iris image. A modified Hamming distance is employed to measure the similarity between two irises. This research yields an identification success rate of 84.25% for the CASIA version 1.0 database. The method gives an accuracy of 77.78% for the left eyes of the MMU 1 database and 86.67% for the right eyes. The time required for the encoding process, from segmentation until the iris code is generated, is 0.7096 seconds. These results show that the accuracy and speed of the method are better than those of many other methods.
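
The similarity measure mentioned above can be illustrated with a small sketch of a masked, normalized Hamming distance between binary iris codes; the code length, masks and bit values below are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch: a masked, normalized Hamming distance between two binary
# iris codes, ignoring bits flagged as eyelash/reflection noise in either mask.
import numpy as np

def masked_hamming(code_a, code_b, mask_a, mask_b):
    """Fraction of disagreeing bits over the bits valid in both codes."""
    valid = mask_a & mask_b                      # bits usable in both images
    disagreements = (code_a ^ code_b) & valid
    return disagreements.sum() / max(valid.sum(), 1)

rng = np.random.default_rng(1)
code1 = rng.integers(0, 2, 2048, dtype=np.uint8)
code2 = code1.copy(); code2[:200] ^= 1           # perturb 200 bits
mask = np.ones(2048, dtype=np.uint8)
print(masked_hamming(code1, code2, mask, mask))  # ~0.098
```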

Keywords: Biometric, iris recognition, wavelet transform.

340 Psychological Variables of Sport Participation and Involvement among Student-Athletes of Tertiary Institutions in South-West, Nigeria

Authors: Mayowa Adeyeye

Abstract:

This study was conducted to investigate the psychological variables motivating sport participation and involvement among student-athletes of tertiary institutions in southwest Nigeria. One thousand three hundred and fifty (N = 1350) student-athletes were randomly selected across all sports from nine tertiary institutions in south-west Nigeria: University of Lagos, Lagos State University, Obafemi Awolowo University, Osun State University, University of Ibadan, University of Agriculture Abeokuta, Federal University of Technology Akungba, University of Ilorin, and Kwara State University. The descriptive survey research method was adopted, and a self-developed, validated Likert-type questionnaire named the Sport Participation Scale (SPS) was used to elicit opinions from respondents. The test-retest reliability value obtained for the instrument, using the Pearson Product Moment Correlation Coefficient, was 0.96. Of the 1,350 questionnaires administered, only 1,286 were correctly filled, coded and analysed using the inferential statistic of Chi-Square (X2), with all tested hypotheses set at the 0.05 alpha level. The findings revealed that several psychological factors influence student-athletes to continue participating in sport, including love for the game, famous athletes as role models, and family support. However, the analysis further revealed that the stipends the student-athletes receive from their universities have no influence on their participation and involvement in sport.
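
For readers unfamiliar with the test-retest reliability coefficient reported above, the following sketch computes it as a Pearson correlation between two administrations of the same questionnaire; the scores are made up purely for illustration.

```python
# Illustrative sketch with made-up scores: test-retest reliability of a
# questionnaire estimated as the Pearson correlation between two administrations.
import numpy as np
from scipy.stats import pearsonr

test1 = np.array([42, 37, 45, 50, 33, 47, 39, 44, 36, 48])   # first administration
test2 = np.array([41, 38, 46, 49, 34, 45, 40, 43, 35, 47])   # second administration
r, p_value = pearsonr(test1, test2)
print(f"test-retest reliability r = {r:.2f} (p = {p_value:.4f})")
```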

Keywords: Family support, peer, role model, sport participation, student-athletes.

339 Enhancing the Performance of H.264/AVC in Adaptive Group of Pictures Mode Using Octagon and Square Search Pattern

Authors: S. Sowmyayani, P. Arockia Jansi Rani

Abstract:

This paper integrates the Octagon and Square Search pattern (OCTSS) motion estimation algorithm into the H.264/AVC (Advanced Video Coding) video codec in Adaptive Group of Pictures (AGOP) mode. The AGOP structure is computed based on scene changes in the video sequence. The octagon and square search pattern block-based motion estimation method is implemented in the inter-prediction process of H.264/AVC. These two methods reduce bit rate and computational complexity, respectively, while maintaining the quality of the video sequence. Experiments are conducted for different types of video sequences. The results show that the bit rate, computation time and PSNR gain achieved by the proposed method are better than those of the existing H.264/AVC with fixed GOP and AGOP. With a marginal gain in quality of 0.28 dB and an average gain in bit rate of 132.87 kbps, the proposed method reduces the average computation time by 27.31 minutes compared to the existing state-of-the-art H.264/AVC video codec.
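
A minimal sketch of the block-distortion idea underlying such search-pattern motion estimation is given below: it minimizes the sum of absolute differences (SAD) over candidate blocks using an exhaustive loop. OCTSS itself restricts the search to octagon and square patterns; the exhaustive search and synthetic frame here are illustrative assumptions only.

```python
# Illustrative sketch (not the OCTSS implementation): the sum-of-absolute-
# differences (SAD) block-distortion measure that pattern-based motion
# estimation minimizes when matching blocks between frames.
import numpy as np

def sad(block_a, block_b):
    return np.abs(block_a.astype(int) - block_b.astype(int)).sum()

rng = np.random.default_rng(5)
reference = rng.integers(0, 256, (64, 64), dtype=np.uint8)   # previous frame
current_block = reference[10:26, 12:28]                      # 16x16 block displaced by (10, 12)

best = min(((sad(current_block, reference[y:y+16, x:x+16]), (y, x))
            for y in range(0, 48) for x in range(0, 48)), key=lambda t: t[0])
print("best SAD:", best[0], "motion vector:", best[1])       # expect (10, 12)
```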

Keywords: Block Distortion Measure, Block Matching Algorithms, H.264/AVC, Motion estimation, Search patterns, Shot cut detection.

338 A Complexity-Based Approach in Image Compression using Neural Networks

Authors: Hadi Veisi, Mansour Jamzad

Abstract:

In this paper, we present an adaptive method for image compression that is based on the complexity level of the image. The basic compressor/de-compressor structure of this method is a multilayer perceptron artificial neural network. In the adaptive approach, different back-propagation artificial neural networks are used as compressor and de-compressor. This is done by dividing the image into blocks, computing the complexity of each block, and then selecting one network for each block according to its complexity value. Three complexity measures, called Entropy, Activity and Pattern-based, are used to determine the level of complexity of image blocks, and their ability to estimate complexity is evaluated and compared. In training and evaluation, each image block is assigned to a network based on its complexity value. Best-SNR is an alternative way of selecting the compressor network for image blocks in the evaluation phase: it chooses the trained network that yields the best SNR when compressing the input image block. In our evaluations, the best results are obtained when overlapping of blocks is allowed and the compressor networks are chosen based on Best-SNR. In this case, the results demonstrate the superiority of this method compared with previous similar works and standard JPEG coding.
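
The entropy-based complexity measure can be sketched as the Shannon entropy of a block's grey-level histogram, which is low for flat blocks and high for busy ones; the block size and data below are illustrative assumptions.

```python
# Illustrative sketch (assumed details): using an entropy measure to score the
# complexity of an image block, one of the three block-complexity criteria
# described in the abstract.
import numpy as np

def block_entropy(block, levels=256):
    """Shannon entropy (bits) of the grey-level histogram of one block."""
    hist, _ = np.histogram(block, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(2)
flat_block = np.full((8, 8), 128, dtype=np.uint8)          # uniform -> low complexity
busy_block = rng.integers(0, 256, (8, 8), dtype=np.uint8)  # noisy   -> high complexity
print(block_entropy(flat_block), block_entropy(busy_block))
```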

Keywords: Adaptive image compression, Image complexity, Multi-layer perceptron neural network, JPEG Standard, PSNR.

337 Methodology of the Turkey’s National Geographic Information System Integration Project

Authors: Buse A. Ataç, Doğan K. Cenan, Arda Çetinkaya, Naz D. Şahin, Köksal Sanlı, Zeynep Koç, Akın Kısa

Abstract:

With its spatial data reliability, interpretation and querying capabilities, Geographic Information Systems (GIS) make significant contributions to scientists, planners and practitioners. Geographic information systems have received great attention in today's digital world, are growing rapidly, and are being used ever more efficiently. Access to and use of current and accurate geographic data, the most important components of a GIS, has become a necessity rather than a need for sustainable and economic development. This project aims to enable the sharing of data collected by public institutions and organizations on a web-based platform. Within the scope of the project, the INSPIRE (Infrastructure for Spatial Information in the European Community) data specifications are used as a road map. In this context, Turkey's National Geographic Information System (TUCBS) Integration Project supports the sharing of spatial data among 61 pilot public institutions in compliance with defined national standards. In this paper, prepared by members of the TUCBS Integration Project team, the technical process is explained with a detailed methodology. The main technical processes of the project consist of geographic data analysis, geographic data harmonization (standardization), web service creation (WMS, WFS) and metadata creation and publication. The integration process carried out so that the data produced by the 61 institutions can be shared from the National Geographic Data Portal (GEOPORTAL) is conveyed with a detailed methodology.
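
As a hedged illustration of the web services created in such a project, the snippet below issues a standard WMS GetCapabilities request; the endpoint URL is a placeholder, not the actual TUCBS/GEOPORTAL address.

```python
# Illustrative sketch only: requesting the capabilities document of a WMS
# endpoint. The URL below is a hypothetical placeholder.
import requests

WMS_ENDPOINT = "https://example-geoportal.gov.tr/wms"   # placeholder endpoint
params = {
    "service": "WMS",
    "version": "1.3.0",
    "request": "GetCapabilities",
}
response = requests.get(WMS_ENDPOINT, params=params, timeout=30)
response.raise_for_status()
print(response.text[:500])   # first part of the XML capabilities document
```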

Keywords: Data specification, geoportal, GIS, INSPIRE, TUCBS, Turkey’s National Geographic Information System.

336 Numerical Investigation of Nozzle Shape Effect on Shock Wave in Natural Gas Processing

Authors: Esam I. Jassim, Mohamed M. Awad

Abstract:

Natural gas flow contains undesirable solid particles, liquid condensation, and/or oil droplets and requires reliable removal equipment to perform filtration. Recent natural gas processing applications demand compactness and reliability of process equipment. Since conventional means are sophisticated in design, poor in efficiency, and lacking in robustness, the supersonic nozzle has been introduced as an alternative means to meet such demands. A 3-D convergent-divergent nozzle is simulated using a commercial code for nozzle pressure ratios (NPR) varying from 1.2 to 2. Six different nozzle shapes are numerically examined to illustrate the position of the shock wave, as this spot can be considered a benchmark for particle separation. Rectangular, triangular, circular, elliptical, pentagonal, and hexagonal nozzles, all with the same cross-sectional area, are simulated using the Fluent code. Simple one-dimensional inviscid theory does not describe the actual features of the fluid flow precisely, as it ignores the impact of nozzle configuration on the flow properties. CFD simulation results, however, show that nozzle geometry influences the flow structures, including the location of the shock wave. The CFD analysis predicts shock appearance when p01/pa > 1.2 for almost all geometries and locates it at the lower area ratio (Ae/At). Simulation results showed that the shock wave in the elliptical nozzle has the farthest distance from the throat among the shapes at relatively small NPR. As NPR increases, the hexagonal nozzle has the farthest. The numerical results are compared with available experimental data and show good agreement in terms of shock location and flow structure.

Keywords: CFD, Particle Separation, Shock wave, Supersonic Nozzle.

335 Monitoring Blood Pressure Using Regression Techniques

Authors: Qasem Qananwah, Ahmad Dagamseh, Hiam AlQuran, Khalid Shaker Ibrahim

Abstract:

Blood pressure gives physicians deep insight into the cardiovascular system. Determining an individual's blood pressure is a standard clinical procedure for assessing cardiovascular problems. Conventional techniques to measure blood pressure (e.g. the cuff method) allow only a limited number of readings over a certain period (e.g. every 5-10 minutes). Additionally, these systems disturb blood flow, impeding continuous blood pressure monitoring, especially in emergency cases or for critically ill persons. In this paper, the most important statistical features of the photoplethysmogram (PPG) signal were extracted to estimate blood pressure noninvasively. PPG signals from more than 40 subjects were measured and analyzed, and 12 features were extracted. The features were fed to principal component analysis (PCA) to find the most important independent features with the highest correlation with blood pressure. The results show that the mean stiffness index and the standard deviation of the beat-to-beat heart rate were the most important features. A model representing both features for Systolic Blood Pressure (SBP) and Diastolic Blood Pressure (DBP) was obtained using a statistical regression technique. Surface fitting is used to best fit the series of data, and the results show that the error in estimating the SBP is 4.95% and in estimating the DBP is 3.99%.
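
A minimal sketch of the PCA-plus-regression workflow described above is shown below with synthetic data; the feature values, number of components and coefficients are assumptions for illustration, not the study's data or model.

```python
# Illustrative sketch (synthetic data): reducing PPG features with PCA and
# fitting a regression model for systolic blood pressure, mirroring the
# PCA-plus-regression workflow described in the abstract.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(40, 12))                    # 40 subjects x 12 PPG features
sbp = 110 + 8 * X[:, 0] - 5 * X[:, 1] + rng.normal(scale=3, size=40)  # synthetic SBP

pca = PCA(n_components=2).fit(X)                 # keep the two dominant components
scores = pca.transform(X)
model = LinearRegression().fit(scores, sbp)
pred = model.predict(scores)
mape = np.mean(np.abs((sbp - pred) / sbp)) * 100
print(f"in-sample MAPE = {mape:.2f}%")
```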

Keywords: Blood pressure, noninvasive optical system, PCA, continuous monitoring.

334 Modeling of Supply Chains Delocalization Problems Taking into Account the New Financial Policies: Case of Multinational Firms Established in OECD Member Countries

Authors: Mouna Benfssahi, Zoubir El Felsoufi

Abstract:

For many enterprises, the delocalization of a part or the totality of their supply chain to low-cost countries is the best way to reduce costs and remain competitive in the growing globalized market. This new tendency is driven by logistics advantages as well as the financial and tax incentives offered by host countries. The objective of this article is to examine the new financial challenges introduced by the Base Erosion and Profit Shifting (BEPS) project, published in 2015, and their impact on the delocalization decision. In fact, the strategy adopted by multinational firms for determining the transfer price (TP) of goods and services, as well as the shared amount of revenues and expenses, has a major impact upon group profit and may lead to divergent results. In order to obtain more profit, a coherent delocalization decision should be based on an evaluation of all the operational and financial characteristics associated with such a movement. Therefore, it is interesting to model these new constraints and integrate them into a more global decision model. The established model will make it possible to measure how much these financial constraints impact the decision of delocalization and will give new, helpful directives for enterprise managers.

Keywords: Delocalization, intragroup transaction, multinational firms, optimization model, supply chain management, transfer pricing.

333 Corporate Social Responsibility Disclosure, Tax Aggressiveness and Sustainability Report Assurance: Evidence from Thailand

Authors: Eko Budi Santoso, Kazia Laturette, Stanislaus Adnanto Mastan

Abstract:

This study aims to examine the association between corporate social responsibility disclosure and tax aggressiveness in a developing country, namely Thailand. The motivation is the increasing trend of social responsibility disclosure in developing countries, even though such disclosure is still voluntary. On the other hand, developing countries have low levels of taxation and investor protection infrastructure, which allow social responsibility disclosure to be used opportunistically as a tool to obscure the interests actually being pursued. This study also examines the role of assurance in the association between corporate social responsibility disclosure and tax aggressiveness. Assurance aims to provide confidence that the social responsibility disclosures made by the company are valid. This research builds an index to measure the disclosure of social responsibility based on the guidelines issued by the Global Reporting Initiative. The results, based on a sample of publicly traded companies in Thailand, show a positive association between corporate social responsibility disclosure and tax aggressiveness, but this association is mitigated by the existence of assurance over the corporate social responsibility disclosure. The results indicate that corporate social responsibility disclosure can show that a company cares about the issue of social responsibility but does not automatically make it a company that upholds ethical values in its business practices.

Keywords: Corporate Social Responsibility disclosure, tax aggressiveness, sustainability assurance, business ethics.

332 Telecommunications Access, Social Capital and Sustainable Development

Authors: Susan Bandias

Abstract:

This paper examines the role of telecommunications in the sustainable development of urban, rural and remote communities in the Northern Territory of Australia through the theoretical lens of Social Capital. Social Capital is a relatively new construct and is rapidly gaining interest among policy makers, politicians and researchers as a means to both describe and understand social and economic development. Increasingly, the concept of Social Capital, as opposed to traditional economic indicators, is seen as a more accurate measure of well-being. Whilst the essence of Social Capital is quality social relations, the concept intersects with telecommunications and Information and Communications Technology (ICT) in a number of ways. The potential of ICT to disseminate information quickly, to reach vast numbers of people simultaneously and to include the previously excluded is immense. However, the exact nature of the relationship is not clearly defined. This paper examines the nexus between social relations of mutual benefit, telecommunications access and sustainable development. A mixed-methods approach was used to test the hypothesis that no relationship exists between Social Capital and access to telecommunications services and facilities. Four communities, comprising two urban, one rural and one remote Indigenous community in the Northern Territory of Australia, are the focus of this paper.

Keywords: Indigenous disadvantage, Social Capital, sustainable development, telecommunications.

331 Towards the Use of Software Product Metrics as an Indicator for Measuring Mobile Applications Power Consumption

Authors: Ching Kin Keong, Koh Tieng Wei, Abdul Azim Abd. Ghani, Khaironi Yatim Sharif

Abstract:

Maintaining the factory-default battery endurance over time while supporting a huge number of running applications on energy-restricted mobile devices has created a new challenge for mobile application developers. While trying to meet customers' unlimited expectations, developers are barely aware of how efficiently their applications use energy. Thus, developers need a set of valid energy consumption indicators to assist them in developing energy-saving applications. In this paper, we present a few software product metrics that can be used as indicators to measure the energy consumption of Android-based mobile applications early in the design stage. In particular, Trepn Profiler (a power profiling tool for Qualcomm processors) was used to collect data on mobile application power consumption, which was then analyzed against 23 software metrics in this preliminary study. The results show that McCabe cyclomatic complexity, number of parameters, nested block depth, number of methods, weighted methods per class, number of classes, total lines of code and method lines have a direct relationship with the power consumption of a mobile application.

Keywords: Battery endurance, software metrics, mobile application, power consumption.

330 Length Dimension Correlates of Longitudinal Physical Conditioning on Indian Male Youth

Authors: Seema Sharma Kaushik, Dhananjoy Shaw

Abstract:

Various length dimensions of the body have been variables of interest in kinanthropometric research. However, the inclusion of length measurements in most studies remains restricted to reflecting the characteristics of a particular game or sport at a particular time. Hence, the present investigation was conducted to study various length-dimension correlates of a longitudinal physical conditioning program on Indian male youth. The study was conducted on 90 Indian male youth. The sample was equally divided into three groups, namely progressive load training (PLT), constant load training (CLT) and no load training (NL). The variables included sitting height, leg length, arm length and foot length. The study adopted a multi-group repeated-measures design. The three groups were measured four times, after the completion of each of the three meso-cycles of six weeks' duration each. The measurements were taken using standard landmarks and procedures. Mean, standard deviation and analysis of covariance were computed to analyze the data statistically. Post-hoc analysis was conducted for the significant F-ratios at the 0.05 level. The study concluded that the longitudinal physical conditioning program had a significant effect on various length dimensions of Indian male youth.

Keywords: Indian male youth, longitudinal, length dimensions, physical conditioning.

329 The Interplay of Locus of Control, Academic Achievement, and Biological Variables among Iranian Online EFL Learners

Authors: Azizeh Chalak, Niloufar Nasri

Abstract:

Students' academic achievement, along with the effects of different variables on it, has long been a serious concern of educators. This study investigated the interplay of Locus of Control (LOC), academic achievement and biological variables among Iranian online EFL learners. The participants included 100 students of different age groups and genders studying English online at the Iran Language Institute (ILI), Isfahan, Iran. The instrument used was the Trice Academic LOC questionnaire, which identifies orientations of internality or externality. The participants' Grade Point Averages (GPAs) were used as the measure of their academic achievement. A series of independent-samples t-tests were performed on the data. The results showed that (a) there were no significant differences between male and female participants in LOC orientation, (b) there was no relationship between LOC and academic achievement among internal males and females, (c) external females were better achievers than external males, and (d) age had no significant relationship with LOC or academic achievement. It can be concluded that social and cultural gender patterns have changed. This study may help sociologists and psychologists, as well as applied linguists, in that the findings reflect recent social changes, their effects on LOC, and the consequent implications for language teaching.

Keywords: Academic achievement, biological variables, Iranian online EFL learners, locus of control.

328 Actionable Rules: Issues and New Directions

Authors: Harleen Kaur

Abstract:

Knowledge Discovery in Databases (KDD) is the process of extracting previously unknown, hidden and interesting patterns from a huge amount of data stored in databases. Data mining is a stage of the KDD process that aims at selecting and applying a particular data mining algorithm to extract interesting and useful knowledge. Data mining methods are expected to find patterns that are interesting according to some measures. It is therefore of vital importance to define good measures of interestingness that allow the system to discover only the useful patterns. Measures of interestingness are divided into objective and subjective measures. Objective measures are those that depend only on the structure of a pattern and can be quantified using statistical methods. Subjective measures, in contrast, depend on the subjectivity and understanding of the user who examines the patterns. These subjective measures are further divided into actionable, unexpected and novel. A key issue facing the data mining community is how to take action on the basis of discovered knowledge. For a pattern to be actionable, the user's subjectivity is captured by providing his or her background knowledge about the domain. Here, we consider the actionability of the discovered knowledge as a measure of interestingness and raise important issues that need to be addressed to discover actionable knowledge.

Keywords: Data Mining Community, Knowledge Discovery in Databases (KDD), Interestingness, Subjective Measures, Actionability.

327 A Refined Application of QFD in SCM, A New Approach

Authors: Nooshin La'l Mohamadi

Abstract:

In the new century, customers express globally increasing demands; networks of interconnected businesses have therefore been established, and the management of such networks is a major key to gaining competitive advantage. Supply chain management encompasses such managerial activities. Within a supply chain, a critical role is played by quality. QFD is a widely utilized tool that serves the purpose of not only bringing quality to the ultimate provision of products or service packages required by the end customer or the retailer, but also of initiating a satisfactory relationship with the initial customer, that is, the wholesaler. However, the wholesalers' cooperation is largely based on capabilities that are heavily dependent on their locations and existing circumstances. Therefore, for every company, each wholesaler possesses a specific importance ratio that can heavily influence the figures calculated in the House of Quality (HOQ) in QFD. Moreover, due to the competitiveness of today's marketplace, it is widely recognized that consumers' expressed demands are highly volatile across production periods. Such instability and proneness to change are tangible and must be taken into account during the analysis of the HOQ. For a more reliable outcome in such matters, this article demonstrates the viability of applying the Analytic Network Process (ANP) to account for the wholesalers' reputation and simultaneously introduces a mortality coefficient for the reliability and stability of the consumers' expressed demands over time. The paper then elaborates on the relevant contributory factors and approaches for calculating such coefficients. In the end, the article concludes that an empirical application is needed to achieve broader validity.

Keywords: Analytic Network Process, Quality Function Deployment, QFD flaws, Supply Chain Management

326 Cellular Automata Based Robust Watermarking Architecture towards the VLSI Realization

Authors: V. H. Mankar, T. S. Das, S. K. Sarkar

Abstract:

In this paper, we propose a novel blind watermarking architecture oriented towards hardware implementation in VLSI. To facilitate this hardware realization, the cellular automata (CA) concept is introduced. CA have already been accepted as an attractive structure for VLSI implementation because of their modularity, parallelism, high performance and reliability. Hardware-realizable multiresolution spread-spectrum watermarking techniques are few in number despite their excellent resiliency against signal impairments, because of the computational cost and complexity associated with their filter banks and lifting techniques. Cellular automata theory has been incorporated to form a new transform-domain technique, namely the Cellular Automata Transform (CAT). Since CA provide spreading sequences with very low cross-correlation properties, a CA-based pseudorandom sequence generator is considered in the present work. Considering the watermarking technique as a digital communication process, error control coding (ECC) must be incorporated in the data hiding scheme. Besides the hardware implementation of the entire CA-based data hiding technique, the individual CA-based blocks of the algorithm provide better results than some other methods, irrespective of hardware or software technique. The Cellular Automata Transform, the CA-based PN sequence generator and the CA ECC are the requisite blocks, developed not only to meet reliable hardware requirements but also to provide the basic spread-spectrum watermarking features. The proposed algorithm shows statistical invisibility and resiliency against various common signal-processing operations. This algorithmic design utilizes the allocated bandwidth of the data transmission channel more efficiently.
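
A minimal sketch of a CA-based pseudorandom (PN) sequence generator is given below using elementary rule 30; the rule, lattice width and tap position are illustrative assumptions, since the abstract does not specify the CA configuration actually used.

```python
# Illustrative sketch: a one-dimensional cellular automaton (rule 30) used as a
# simple pseudorandom (PN) bit generator, in the spirit of the CA-based PN
# sequence generator mentioned in the abstract.
import numpy as np

def ca_pn_sequence(length, width=64, rule=30, seed=1):
    rule_bits = np.array([(rule >> i) & 1 for i in range(8)], dtype=np.uint8)
    state = np.zeros(width, dtype=np.uint8)
    state[seed % width] = 1                      # single seeded cell
    bits = []
    for _ in range(length):
        bits.append(int(state[width // 2]))      # tap the centre cell
        left, right = np.roll(state, 1), np.roll(state, -1)
        state = rule_bits[(left << 2) | (state << 1) | right]
    return np.array(bits, dtype=np.uint8)

pn = ca_pn_sequence(128)
print(pn[:32], "balance:", pn.mean())
```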

Keywords: Cellular automata, watermarking, error control coding, PN sequence, VLSI.

325 Attention Based Fully Convolutional Neural Network for Simultaneous Detection and Segmentation of Optic Disc in Retinal Fundus Images

Authors: Sandip Sadhukhan, Arpita Sarkar, Debprasad Sinha, Goutam Kumar Ghorai, Gautam Sarkar, Ashis K. Dhara

Abstract:

Accurate segmentation of the optic disc is very important for computer-aided diagnosis of several ocular diseases such as glaucoma, diabetic retinopathy, and hypertensive retinopathy. This paper presents an accurate and fast optic disc detection and segmentation method using an attention-based fully convolutional network. The network is trained from scratch on fundus images from the extended MESSIDOR database, and the trained model is used for segmentation of the optic disc. False positives are removed based on morphological operations and shape features. The result is evaluated using three-fold cross-validation on six public fundus image databases: DIARETDB0, DIARETDB1, DRIVE, AV-INSPIRE, CHASE DB1 and MESSIDOR. The attention-based fully convolutional network is robust and effective for detection and segmentation of the optic disc in images affected by diabetic retinopathy, and it outperforms existing techniques.
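
The overlap measure listed among the keywords can be illustrated with the Dice coefficient between a predicted and a ground-truth optic disc mask; the toy masks below are assumptions, not the study's evaluation code.

```python
# Illustrative sketch: the overlap (Dice) measure commonly used to score optic
# disc segmentations against ground truth, shown with toy binary masks.
import numpy as np

def dice_overlap(pred, truth):
    """Dice coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * intersection / total if total else 1.0

truth = np.zeros((64, 64), dtype=np.uint8); truth[20:40, 20:40] = 1
pred = np.zeros((64, 64), dtype=np.uint8);  pred[22:42, 22:42] = 1
print(f"Dice overlap = {dice_overlap(pred, truth):.3f}")
```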

Keywords: Ocular diseases, retinal fundus image, optic disc detection and segmentation, fully convolutional network, overlap measure.

324 Use of Fuzzy Logic in the Corporate Reputation Assessment: Stock Market Investors’ Perspective

Authors: Tomasz L. Nawrocki, Danuta Szwajca

Abstract:

The growing importance of reputation in building enterprise value and achieving long-term competitive advantage creates the need to measure and evaluate it for management purposes (effective reputation and reputation-risk management). The paper presents a practical application of a self-developed corporate reputation assessment model from the viewpoint of stock market investors. The model is of a pioneering character, and the example analysis performed for a selected industry serves as a specific test of this tool. In the proposed solution, three aspects were considered: informational, financial and development, and social. It was also assumed that the individual sub-criteria would be based on public sources of information and that fuzzy logic would be used as the calculation apparatus capable of producing a synthetic final assessment. The main reason for developing this model was to fill the gap in synthetic measures of corporate reputation that provide a higher degree of objectivity by relying on "hard" (not survey-based) and publicly available data. It should also be noted that the results obtained with the proposed corporate reputation assessment method allow various internal as well as inter-branch comparisons and analysis of the impact of corporate reputation.

Keywords: Corporate reputation, fuzzy logic, fuzzy model, stock market investors.

323 The Martingale Options Price Valuation for European Puts Using Stochastic Differential Equation Models

Authors: H. C. Chinwenyi, H. D. Ibrahim, F. A. Ahmed

Abstract:

In modern financial mathematics, valuing derivatives such as options is often a tedious task, simply because their fair and correct future prices are probabilistic. This paper examines three different Stochastic Differential Equation (SDE) models in finance: the Constant Elasticity of Variance (CEV) model, the Black-Karasinski model, and the Heston model. The martingale option price valuation formulas for these three models were obtained using the replicating portfolio method. The numerical solution of the derived martingale option price valuation equations for the SDE models was carried out using the Monte Carlo method, implemented in MATLAB. Furthermore, results from numerical examples using published data from the Nigeria Stock Exchange (NSE) All-Share Index show the effect of an increase in the underlying asset value (stock price) on the value of the European put option for these models. From the results obtained, we see that an increase in the stock price yields a decrease in the value of the European put option. Hence, this guides the option holder in making a quality decision by not exercising his right on the option.
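
As an illustration of the Monte Carlo valuation step, the sketch below prices a European put under the risk-neutral (equivalent martingale) measure for geometric Brownian motion; the dynamics and parameter values are simplified assumptions, standing in for the paper's CEV, Black-Karasinski and Heston models and its MATLAB implementation. Running it for increasing initial stock prices reproduces the qualitative finding that the put value falls as the stock price rises.

```python
# Illustrative sketch: Monte Carlo valuation of a European put under the
# risk-neutral measure for geometric Brownian motion, with made-up parameters.
import numpy as np

def european_put_mc(s0, strike, rate, sigma, maturity, n_paths=200_000, seed=0):
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    st = s0 * np.exp((rate - 0.5 * sigma**2) * maturity + sigma * np.sqrt(maturity) * z)
    payoff = np.maximum(strike - st, 0.0)
    return np.exp(-rate * maturity) * payoff.mean()

for s0 in (90, 100, 110):   # put value falls as the stock price rises
    print(s0, round(european_put_mc(s0, strike=100, rate=0.05, sigma=0.2, maturity=1.0), 3))
```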

Keywords: Equivalent Martingale Measure, European Put Option, Girsanov Theorem, Martingales, Monte Carlo method, option price valuation, option price valuation formula.

322 A Preliminary Study on the Suitability of Data Driven Approach for Continuous Water Level Modeling

Authors: Muhammad Aqil, Ichiro Kita, Moses Macalinao

Abstract:

Reliable water level forecasts are particularly important for warning against dangerous floods and inundation. The current study investigates the suitability of an adaptive network-based fuzzy inference system for continuous water level modeling. A hybrid learning algorithm, which combines the least-squares method and the back-propagation algorithm, is used to identify the parameters of the network. For this study, water level data are available for the 2002 hydrological year with a sampling interval of one hour. The number of antecedent water levels to include in the input variables is determined by two statistical methods, i.e. the autocorrelation function and the partial autocorrelation function between the variables. Forecasting was done from 1 hour up to 12 hours ahead in order to compare the models' generalization at longer horizons. The results demonstrate that the adaptive network-based fuzzy inference system model can be applied successfully and provides high accuracy and reliability for river water level estimation. In general, the adaptive network-based fuzzy inference system provides accurate and reliable water level prediction 1 hour ahead, where MAPE = 1.15% and a correlation of 0.98 were achieved. Up to 12 hours ahead, the model still shows relatively good performance, with a prediction error of less than 9.65%. The information gathered from these preliminary results provides useful guidance for flood early warning system design, in which the magnitude and the timing of a potential extreme flood are indicated.
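
The accuracy statistics quoted above (MAPE and correlation) can be computed as in the following sketch; the observed and predicted water levels are made-up numbers for illustration.

```python
# Illustrative sketch: the error and correlation statistics quoted in the
# abstract, computed for a toy forecast series.
import numpy as np

def mape(observed, predicted):
    observed, predicted = np.asarray(observed, float), np.asarray(predicted, float)
    return float(np.mean(np.abs((observed - predicted) / observed)) * 100)

observed = np.array([2.10, 2.25, 2.40, 2.60, 2.55, 2.35])   # water levels (m)
predicted = np.array([2.08, 2.28, 2.37, 2.63, 2.51, 2.38])  # 1-hour-ahead forecasts
print(f"MAPE = {mape(observed, predicted):.2f}%")
print(f"correlation = {np.corrcoef(observed, predicted)[0, 1]:.3f}")
```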

Keywords: Neural Network, Fuzzy, River, Forecasting

321 Developing and Validating an Instrument for Measuring Mobile Government Adoption in Saudi Arabia

Authors: Sultan Alotaibi, Dmitri Roussinov

Abstract:

Many governments have recently started to change the way they provide their services, allowing citizens to access services from anywhere without the need to visit the location of the service provider. Mobile government (m-government) is one of the techniques that fulfill that goal, and it has been adopted by many governments. M-government can be defined as the implementation of electronic government (e-government) using mobile technology with the aim of improving service delivery to citizens, businesses and all government agencies. Several research projects have developed models to understand the behavior of individuals towards the adoption of m-government. This paper proposes a model for the adoption of m-government services in Saudi Arabia by extending the Technology Acceptance Model (TAM) with external factors. This paper also reports on the development of a survey instrument designed to measure user perception of mobile government acceptance. The survey instrument was developed using existing scales from prior instruments, and a pilot study was conducted by distributing the survey to 33 participants. As a result, the survey instrument was refined to retain 43 items. The results also showed that the reliabilities of all the scales in the survey instrument are above the levels acceptable in current academic research; thus, the instrument developed is capable of analyzing the factors in m-government adoption.
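
Scale reliabilities of the kind reported above are commonly summarized with Cronbach's alpha; the sketch below computes it for a toy respondents-by-items matrix. The abstract does not state which reliability coefficient was used, so this is an assumed, illustrative choice.

```python
# Illustrative sketch (made-up Likert responses): Cronbach's alpha, a standard
# scale reliability statistic for survey instruments.
import numpy as np

def cronbach_alpha(items):
    """items: respondents x items matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

responses = np.array([[4, 5, 4, 4], [3, 3, 4, 3], [5, 5, 5, 4],
                      [2, 3, 2, 3], [4, 4, 5, 4], [3, 4, 3, 3]])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")   # ~0.91 for this toy data
```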

Keywords: TAM, m-government, e-government, model, acceptance.

320 Reliability of Dissimilar Metal Soldered Joint in Fabrication of Electromagnetic Interference Shielded Door Frame

Authors: Rehan Waheed, Hasan Aftab Saeed, Wasim Tarar, Khalid Mahmood, Sajid Ullah Butt

Abstract:

Electromagnetic Interference (EMI) shielded doors made from extruded brass channels need to be welded to shielded enclosures to attain optimum shielding performance. Controlling welding-induced distortion is a problem when welding dissimilar metals such as steel and brass. In this research, soldering of the steel-brass joint is proposed to avoid weld distortion. The material used for the brass channel is UNS C36000. The thickness of the brass is defined by the manufacturing process, i.e. extrusion. The thickness of the shielded enclosure material (ASTM A36) can be varied to produce the joint between the dissimilar metals. Steel sections of different gauges are soldered to the brass using a 91% tin, 9% zinc solder, and the strength of the joint is measured by standard test procedures. It is observed that thin steel sheets produce a stronger bond with brass. The steel sections are subsequently welded to the shielded enclosure steel sheets through the TIG welding process. Stresses and deformation in the vicinity of the soldered portion are calculated through FE simulation. Crack formation in the soldered area is also studied experimentally. It has been found that in thin sheets the deformation produced by the applied force is localized and has no effect on the soldered joint area, whereas in thick sheets pronounced cracks have been observed in the soldered joint. The shielding effectiveness of the EMI shielded door is compromised by these cracks. The shielding effectiveness of the specimens is tested and the results are compared.

Keywords: Dissimilar metals, soldering, joint strength, EMI shielding.

319 Study on the Impact of Size and Position of the Shear Field in Determining the Shear Modulus of Glulam Beam Using Photogrammetry Approach

Authors: Niaz Gharavi, Hexin Zhang

Abstract:

The shear modulus of a timber beam can be determined using the torsion test or the shear field test method. The shear field test method is based on measuring the shear distortion of the beam in the zone of constant transverse load in the standardized four-point bending test. The current code of practice advises using two metallic arms as an instrument to measure the diagonal displacement of the constructed square. The size and position of this square might influence the determination of the shear modulus. This study aimed to investigate the effects of the size and position of the square in the shear field test method. A binocular stereo vision system was employed to determine the 3D displacement of a grid of target points. Six glue-laminated beams were produced and tested. Analysis of Variance (ANOVA) was performed on the acquired data to evaluate the significance of the size effect and the position effect of the square. The results show that the size of the square has a noticeable influence on the value of the shear modulus, while the position of the square within the area of constant shear force does not affect the measured mean shear modulus.

Keywords: Shear field test method, structural-sized test, shear modulus of Glulam beam, photogrammetry approach.

318 Stochastic Modeling for Parameters of Modified Car-Following Model in Area-Based Traffic Flow

Authors: N. C. Sarkar, A. Bhaskar, Z. Zheng

Abstract:

Driving behavior in area-based (i.e., non-lane-based) traffic is induced by the presence of other individuals in the choice space within the driver's visual perception area. The driving behavior of a subject vehicle is constrained by potential leaders, and leaders change frequently over time. This paper determines stochastic models for the parameters of a modified intelligent driver model (MIDM) in area-based traffic (as found in developing countries). Parametric and non-parametric distributions are presented to fit the parameters of the MIDM. The goodness of fit for each parameter is measured in two ways: graphically and statistically. The quantile-quantile (Q-Q) plot is used as a graphical check of a theoretical distribution for a parameter, and the Kolmogorov-Smirnov (K-S) test is used as a statistical measure of fit between a parameter and a theoretical distribution. The distributions are fitted to a set of MIDM parameters estimated from real vehicle trajectory data from India. Each parameter is well represented by a stochastic model. The results support the applicability of the proposed modeling of MIDM parameters in area-based traffic flow simulation.
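
A minimal sketch of the statistical goodness-of-fit check is given below: a candidate distribution is fitted to a parameter sample and assessed with the Kolmogorov-Smirnov test. The distribution family and the synthetic sample are assumptions, not the estimates from the Indian trajectory data.

```python
# Illustrative sketch: checking how well a fitted theoretical distribution
# describes an estimated driver-model parameter using the K-S test, with
# synthetic values standing in for the real trajectory-based estimates.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
desired_gaps = rng.lognormal(mean=0.5, sigma=0.3, size=300)   # synthetic parameter sample

shape, loc, scale = stats.lognorm.fit(desired_gaps, floc=0)   # candidate distribution
ks_stat, p_value = stats.kstest(desired_gaps, "lognorm", args=(shape, loc, scale))
print(f"K-S statistic = {ks_stat:.3f}, p-value = {p_value:.3f}")
```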

Keywords: Area-based traffic, car-following model, micro-simulation, stochastic modeling.

317 Identification of Disease Causing DNA Motifs in Human DNA Using Clustering Approach

Authors: G. Tamilpavai, C. Vishnuppriya

Abstract:

Studying DNA (deoxyribonucleic acid) sequences is useful for understanding biological processes and is applied in fields such as diagnostic and forensic research. DNA carries the hereditary information in humans and almost all other organisms and is passed on to their offspring. Early detection of defective DNA sequences may lead to many developments in the field of bioinformatics. Nowadays, various tedious techniques are used to identify defective DNA. The proposed work analyzes and identifies cancer-causing DNA motifs in a given sequence. Initially, the human DNA sequence is separated into k-mers using a k-mer separation rule. The separated k-mers are clustered using a Self-Organizing Map (SOM). Using the Levenshtein distance measure, the cancer-associated DNA motif is identified from the k-mer clusters. Experimental results of this work indicate the presence or absence of a cancer-causing DNA motif. If the cancer-associated DNA motif is found, the input is declared a cancer-causing DNA sequence; otherwise, it is declared a normal sequence. Finally, the elapsed time for finding the cancer-causing DNA motif via cluster formation is calculated and compared with the conventional process of finding the motif. Locating the cancer-associated motif is easier in the cluster formation process than in the conventional one. The proposed work will be an initial aid for research into genetic diseases.
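
The k-mer separation and Levenshtein scoring steps can be sketched as follows; the sequence and motif are placeholders, not real cancer-associated data.

```python
# Illustrative sketch: splitting a DNA sequence into k-mers and scoring each
# against a motif with the Levenshtein (edit) distance.
def kmers(sequence, k):
    return [sequence[i:i + k] for i in range(len(sequence) - k + 1)]

def levenshtein(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

sequence = "ATGCGTACGTTAGCATGCGT"   # placeholder sequence
motif = "TACGTT"                    # placeholder motif
for mer in kmers(sequence, len(motif)):
    print(mer, levenshtein(mer, motif))
```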

Keywords: Bioinformatics, cancer motif, DNA, k-mers, Levenshtein distance, SOM.

316 The Evaluation of Costs and Greenhouse Gas Reduction Using Technologies for Energy from Sewage Sludge

Authors: Futoshi Kakuta, Takashi Ishida

Abstract:

Sewage sludge is a biomass resource that can be converted into solid fuel and electricity. Utilizing sewage sludge as a renewable energy source can contribute to the reduction of greenhouse gases. In Japan, the "National Plan for the Promotion of Biomass Utilization" and the "Priority Plan for Social Infrastructure Development" were approved at cabinet meetings in December 2010 and August 2012, respectively, to promote the energy utilization of sewage sludge. This study investigated the costs and greenhouse gas emissions of different sewage sludge treatments using energy-from-sludge technologies. Expenses were estimated based on capital costs and O&M costs, including the energy consumption of solid fuel plants and biogas power generation plants for sewage sludge. Results showed that the cost of sludge digestion treatment with solid fuel technologies was 8% lower than landfill disposal. The greenhouse gas emissions of sludge digestion treatment with solid fuel technologies were also 6,390 t-CO2 lower than those of landfill disposal. Biogas power generation reduced the electricity consumption of a wastewater treatment plant by 30% and the cost by 5%.

Keywords: Global warming countermeasure, energy technology, solid fuel production, biogas.

315 An Image Processing Based Approach for Assessing Wheelchair Cushions

Authors: B. Farahani, R. Fadil, A. Aboonabi, B. Hoffmann, J. Loscheider, K. Tavakolian, S. Arzanpour

Abstract:

Wheelchair users spend long hours in a sitting position, and selecting the right cushion is highly critical in preventing pressure ulcers in that demographic. Pressure Mapping Systems (PMS) are typically used in clinical settings by therapists to identify the sitting profile and pressure points in the sitting area and to select the cushion that fits the user best. A PMS is a flexible mat composed of arrays of distributed networks of pressure sensors. The output of a PMS is a color-coded image that shows the intensity of the pressure concentration. Therapists use the PMS images to compare how well different cushions fit each user. This process is highly subjective and requires good visual memory for the best outcome. This paper aims to develop an image processing technique to analyze PMS images and provide an objective measure for assessing cushions based on their pressure distribution mappings. We first review the skeletal anatomy of the human sitting area and its relation to the PMS image. This knowledge is then used to identify the important features that must be considered in image processing. We then develop an algorithm based on those features to analyze the images and rank them according to their fit to the user's needs.

Keywords: Cushion, image processing, pressure mapping system, wheelchair.

314 Study of Compatibility and Oxidation Stability of Vegetable Insulating Oils

Authors: Helena M. Wilhelm, Paulo O. Fernandes, Laís P. Dill, Kethlyn G. Moscon

Abstract:

The use of vegetable oil (or natural ester) as an insulating fluid in electrical transformers is a trend that aims to contribute to environmental preservation, since it is biodegradable and non-toxic. In addition, vegetable oil has high flash and combustion points and is considered a fire-safe fluid. However, vegetable oil is usually less stable towards oxidation than mineral oil. Both insulating fluids, mineral and vegetable oils, need to be tested periodically according to specific standards. Oxidation stability can be determined from the induction period measured by the conductivity (Rancimat) method, which monitors the effectiveness of the oil's antioxidant additives; this methodology is already established for food applications and biodiesel but is not yet standardized for insulating fluids. Besides adequate oxidation stability, fluids must be compatible with the transformer's construction materials under normal operating conditions to ensure that damage to the oil and to parts of the transformer does not occur. The ASTM standard and the Brazilian norm differ in the parameters evaluated, which reveals the need to regulate tests for each oil type. The aim of this study was to assess the oxidation stability and compatibility of vegetable oils in order to suggest the best way to ensure viable performance of vegetable oil as a transformer insulating fluid. The induction period of several vegetable insulating oils from the local market was determined using Rancimat according to the BS EN 14112 standard at different temperatures (110, 120, and 130 °C). The compatibility of vegetable oil was also assessed according to ASTM and ABNT NBR standards. The main results showed that the best temperature for the Rancimat test is 130 °C, which allows better observation of the conductivity change. The compatibility test results revealed differences between the vegetable and mineral oil standards that should be taken into account in oil testing, since material compatibility and oxidation stability are essential for equipment reliability.

Keywords: Compatibility, Rancimat, natural ester, vegetable oil.

313 Vickers Indentation Simulation of Buffer Layer Thickness Effect for DLC Coated Materials

Authors: Abdul Wasy, Balakrishnan G., Yi Qi Wang, Atta Ur Rehman, Jung Il Song

Abstract:

Vickers indentation is used to measure the hardness of materials. In this study, numerical simulation of the Vickers indentation experiment was performed for diamond-like carbon (DLC) coated materials. DLC coatings were deposited on stainless steel 304 substrates with a chromium buffer layer using an RF magnetron and T-shape filtered cathodic vacuum arc dual system. The objective of this research is to understand the elastic-plastic properties, stress-strain distribution, ring and lateral crack growth and propagation, indenter penetration depth, and delamination of the coating from the substrate as a function of buffer layer thickness. The effect of the Poisson's ratio of the DLC coating was also analyzed. Indenter penetration is greater in coated materials with a thin buffer layer than in those with a thicker one under the same conditions. Similarly, specimens with a thinner buffer layer failed quickly due to high residual stress compared to coated materials with a reasonable buffer layer thickness of 200 nm. The simulation results suggest that, among the prepared specimens, the 200 nm buffer layer thickness is optimal for durable and long service.
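
For context, the hardness number extracted from a Vickers indent follows the standard relation HV = 1.8544 F / d^2 (load F in kgf, mean indent diagonal d in mm); the sketch below applies it to example values that are not taken from this study.

```python
# Illustrative sketch: the standard Vickers hardness relation HV = 1.8544*F/d^2,
# connecting indentation load and indent geometry to a hardness number.
def vickers_hardness(load_kgf, mean_diagonal_mm):
    return 1.8544 * load_kgf / mean_diagonal_mm ** 2

print(f"HV = {vickers_hardness(0.5, 0.021):.0f}")   # 0.5 kgf load, 21 um mean diagonal
```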

Keywords: Thin film, buffer layer, diamond-like carbon, Vickers indentation, Poisson's ratio, finite element.
