Search results for: Binary Sequential switched capacitor bank
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 925

175 Non-Overlapping Hierarchical Index Structure for Similarity Search

Authors: Mounira Taileb, Sid Lamrous, Sami Touati

Abstract:

In order to accelerate similarity search in high-dimensional databases, we propose a new hierarchical indexing method. It is composed of an offline and an online phase, and our contribution concerns both. In the offline phase, after gathering the data into clusters and constructing a hierarchical index, the main originality of our contribution is a method for constructing bounding forms of clusters that avoids overlapping. This idea considerably improves the performance of similarity search in the online phase, for which we have also developed an adapted search algorithm. Our method, named NOHIS (Non-Overlapping Hierarchical Index Structure), uses Principal Direction Divisive Partitioning (PDDP) as its clustering algorithm. The principle of PDDP is to divide the data recursively into two sub-clusters; the division is done using the hyperplane orthogonal to the principal direction derived from the covariance matrix and passing through the centroid of the cluster to be divided. The data of each of the two resulting sub-clusters are enclosed in a minimum bounding rectangle (MBR). The two MBRs are oriented along the principal direction; consequently, non-overlapping between the two forms is assured. Experiments use databases containing image descriptors. The results show that the proposed method outperforms the sequential scan and the SR-tree in processing k-nearest-neighbor queries.
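
As an illustration of the PDDP split described above, here is a minimal Python sketch (the function name and data are ours, not the authors' implementation): it centers the cluster, takes the leading right singular vector of the centered data as the principal direction, and assigns points to the two sides of the orthogonal hyperplane through the centroid.

    import numpy as np

    def pddp_split(X):
        # One PDDP division: cut the cluster with the hyperplane that is
        # orthogonal to the principal direction and passes through the centroid.
        centroid = X.mean(axis=0)
        _, _, vt = np.linalg.svd(X - centroid, full_matrices=False)
        direction = vt[0]                      # principal direction
        side = (X - centroid) @ direction > 0  # which side of the hyperplane
        return X[side], X[~side]

    # Toy usage on random 8-dimensional "descriptors"
    rng = np.random.default_rng(0)
    left, right = pddp_split(rng.normal(size=(100, 8)))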

Keywords: K-nearest neighbour search, multi-dimensional indexing, multimedia databases, similarity search.

PDF Downloads: 1562
174 Mixture Design Experiment on Flow Behaviour of O/W Emulsions as Affected by Polysaccharide Interactions

Authors: Nor Hayati Ibrahim, Yaakob B. Che Man, Chin Ping Tan, Nor Aini Idris

Abstract:

Interaction effects of xanthan gum (XG), carboxymethyl cellulose (CMC), and locust bean gum (LBG) on the flow properties of oil-in-water emulsions were investigated by a mixture design experiment. Blends of XG, CMC and LBG were prepared according to an augmented simplex-centroid mixture design (10 points) and used at 0.5% (wt/wt) in the emulsion formulations. An appropriate mathematical model was fitted to express each response as a function of the proportions of the blend components, making it possible to empirically predict the response to any combination of the components. The synergistic interaction effect of the ternary XG:CMC:LBG blends at approximately 33-67% XG levels was shown to be much stronger than that of the binary XG:LBG blend at the 50% XG level (p < 0.05). Nevertheless, an antagonistic interaction effect became significant when the CMC level in the blends exceeded 33% (p < 0.05). The yield stress and apparent viscosity (at 10 s⁻¹) responses were successfully fitted with a special quartic model, while the flow behaviour index and consistency coefficient were fitted with a full quartic model (adjusted R² ≥ 0.90). This study found that a mixture design approach can serve as a valuable tool for elucidating and predicting interaction effects beyond the conventional two-component blends.

Keywords: O/W emulsions, flow behavior, polysaccharide interaction, mixture design.

PDF Downloads: 2220
173 Optimized Facial Features-based Age Classification

Authors: Md. Zahangir Alom, Mei-Lan Piao, Md. Shariful Islam, Nam Kim, Jae-Hyeung Park

Abstract:

The evaluation and measurement of human body dimensions are achieved by physical anthropometry. This research was conducted in view of the importance of anthropometric indices of the face in forensic medicine, surgery, and medical imaging. The main goal of this research is to optimize facial feature points by establishing a mathematical relationship among facial features, and to use the optimized feature points for age classification. Since the selected facial feature points are located in the areas of the mouth, nose, eyes, and eyebrows on facial images, all desired facial feature points can be extracted accurately. In the proposed method, sixteen Euclidean distances, vertical as well as horizontal, are calculated from the eighteen selected facial feature points, and the mathematical relationships among the horizontal and vertical distances are established. Moreover, it is observed that the distances between facial features follow a constant ratio during age progression: the distances between the specified feature points increase as a person ages from childhood, but the ratio of the distances does not change (d = 1.618). Finally, according to the proposed mathematical relationship, four independent feature distances, related to eight feature points, are selected from the sixteen distances and eighteen feature points, respectively. These four feature distances are used for age classification with a Support Vector Machine (SVM) trained by the Sequential Minimal Optimization (SMO) algorithm, achieving around 96% accuracy. Experimental results show the proposed system is effective and accurate for age classification.
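
A minimal sketch of the distance-then-classify step, assuming synthetic data (the feature-point pairs, values and labels below are illustrative, not the paper's): Euclidean distances between selected feature points are fed to scikit-learn's SVC, whose underlying libsvm solver is SMO-based.

    import numpy as np
    from sklearn.svm import SVC

    def feature_distances(points, pairs):
        # Euclidean distances between selected facial feature points;
        # `points` is (n_points, 2), `pairs` lists the index pairs to measure.
        return np.array([np.linalg.norm(points[i] - points[j]) for i, j in pairs])

    rng = np.random.default_rng(1)
    X = rng.uniform(20.0, 120.0, size=(200, 4))   # hypothetical distance features
    y = rng.integers(0, 3, size=200)              # hypothetical age classes
    clf = SVC(kernel="rbf").fit(X, y)             # libsvm's solver is SMO-based

    # Distances for one synthetic 18-point face; the index pairs are ours
    points = rng.uniform(0.0, 200.0, size=(18, 2))
    d = feature_distances(points, [(0, 1), (2, 3), (4, 5), (6, 7)])
    print(clf.predict(d.reshape(1, -1)))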

Keywords: 3D Face Model, Face Anthropometrics, Facial Features Extraction, Feature distances, SVM-SMO

PDF Downloads: 2047
172 Banking Union: A New Step towards Completing the Economic and Monetary Union

Authors: Marijana Ivanov, Roman Šubić

Abstract:

This study analyzes the critical gaps in the architecture of European stability and the expected role of the banking union as the new important step towards completing the Economic and Monetary Union, one that should enable the creation of a safe and sound financial sector for the euro area market. The single rulebook, together with the Single Supervisory Mechanism and the Single Resolution Mechanism – the two main pillars of the banking union – should provide a consistent application of common rules and administrative standards for the supervision, recovery and resolution of banks, with the final aim of replacing the former bail-out practice with a bail-in system in which possible future bank failures would be resolved with the banks' own funds, i.e. with minimal costs for taxpayers and the real economy. In this way, the vicious circle between banks and sovereigns would be broken. It would also reduce the financial fragmentation recorded in the years of crisis as the result of divergent behaviour in risk premiums, lending activities and interest rates between the core and the periphery. In addition, it should strengthen the effectiveness of the monetary transmission channels, in particular the credit channel and the overflows of liquidity on the money market, which, due to the fragmentation of the common financial market, were significantly impaired during the crisis. However, contrary to all the positive expectations related to the future functioning of the banking union, the major findings of this study indicate that the characteristics of the economic system in which the banking union will operate should not be ignored. The euro area is an integration of strong and weak entities with large differences in economic development, wealth, banking-system assets, growth rates and the accountability of fiscal policy. The analysis indicates that low and unbalanced economic growth remains a challenge for the maintenance of financial stability, and this problem cannot be resolved by single supervision alone. In many countries bank assets exceed GDP several times over, and large banks are still a matter of concern because of their systemic importance for individual countries and the euro area as a whole. The creation of the Single Supervisory Mechanism and the Single Resolution Mechanism is a response to the European crisis, which particularly affected peripheral countries and caused the associated loop between the banking crisis and the sovereign debt crisis, but which also influenced banks' balance sheets in the core countries as the result of cross-border capital flows. The creation of the SSM and the SRM should prevent similar episodes from happening again and should also provide a new opportunity for strengthening the economic and financial systems of the peripheral countries. On the other hand, there is a potential threat that the future focus of the ECB, the resolution mechanism and other relevant institutions will be oriented mainly towards large and significant banks (half of which operate in the core, most important euro area countries), and it therefore remains questionable to what extent the common resolution funds will be used for the rescue of less important institutions. Recent geopolitical developments will be the optimal indicator of whether the previously established mechanisms are sufficient to maintain adequate financial stability in the euro area market.

Keywords: Banking Union, financial integration, single supervisory mechanism (SSM).

PDF Downloads: 1665
171 A Neuron Model of Facial Recognition and Detection of an Authorized Entity Using Machine Learning System

Authors: J. K. Adedeji, M. O. Oyekanmi

Abstract:

This paper critically examines the use of machine learning procedures in curbing unauthorized access to valuable areas of an organization. The use of passwords, PIN codes, and user identification has in recent times been only partially successful in curbing identity-related crimes, hence the need for a system that incorporates biometric characteristics such as DNA and pattern recognition of variations in facial expressions. The facial model is built with the OpenCV library, which is based on certain physiological features; a Raspberry Pi 3 module runs OpenCV, which extracts the detected faces through a camera and stores them in the datasets directory. The model is trained with a 50-epoch run on the database and recognized by the Local Binary Pattern Histogram (LBPH) recognizer contained in OpenCV. The training algorithm used by the neural network is backpropagation, coded in Python, with 200 epoch runs to identify specific resemblance in the exclusive-OR (XOR) output neurons. The research confirmed that physiological parameters are more effective measures for curbing crimes relating to identities.
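
A minimal sketch of the LBPH recognition step described above, assuming opencv-contrib-python (which ships the cv2.face module) is installed; the face crops below are random placeholders for the images captured into the datasets directory.

    import cv2
    import numpy as np

    recognizer = cv2.face.LBPHFaceRecognizer_create()

    # Placeholder grayscale face crops and their integer identities
    faces = [np.random.randint(0, 255, (100, 100), dtype=np.uint8) for _ in range(4)]
    labels = np.array([0, 0, 1, 1])
    recognizer.train(faces, labels)

    # predict() returns (label, confidence); lower confidence means a closer match
    label, confidence = recognizer.predict(faces[0])
    print(label, confidence)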

Keywords: Biometric characteristics, facial recognition, neural network, OpenCV.

PDF Downloads: 695
170 Investigation of Cytotoxic Compounds in Ethyl Acetate and Chloroform Extracts of Nigella sativa by Sulforhodamine-B Assay-Guided Fractionation

Authors: Harshani Uggallage, Kapila D. Dissanayaka

Abstract:

A Sulforhodamine-B assay-guided fractionation of Nigella sativa seeds was conducted to determine the presence of cytotoxic compounds against human hepatoma (HepG2) cells. Initially, a freeze-dried sample of Nigella sativa seeds was sequentially extracted into solvents of increasing polarity. The crude extracts in chloroform and ethyl acetate showed the highest cytotoxicity. The combined mixture of these two extracts was subjected to bioassay-guided fractionation using a modified Kupchan method of partitioning, followed by Sephadex® LH-20 chromatography. This chromatographic separation yielded a column fraction with a convincing IC50 (half-maximal inhibitory concentration) value of 13.07 µg/ml, which is promising for developing therapeutic drug leads against human hepatoma. Reversed-phase High-Performance Liquid Chromatography (HPLC) was finally conducted on the same column fraction, and the result indicates the presence of one or several main cytotoxic compounds against human HepG2 cells.

Keywords: Cytotoxic compounds, half-maximal inhibitory concentration, high-performance liquid chromatography, human HepG2 cells, Nigella sativa seeds, Sulforhodamine-B assay-guided fractionation.

PDF Downloads: 445
169 Fast Painting with Different Colors Using Cross Correlation in the Frequency Domain

Authors: Hazem M. El-Bakry

Abstract:

In this paper, a new technique for fast painting with different colors is presented. The idea of painting relies on applying masks with different colors to the background. Fast painting is achieved by applying these masks in the frequency domain instead of the spatial (time) domain. New colors can be generated automatically as a result of the cross-correlation operation. This idea has previously been applied successfully to faster detection of specific data (faces, objects, patterns, and codes) using neural algorithms. Here, instead of performing cross-correlation between the input data (e.g., an image or a stream of sequential data) and the weights of neural networks, the cross-correlation is performed between the colored masks and the background. Furthermore, this approach is developed to reduce the computation steps required by the painting operation. The divide-and-conquer principle is applied through background decomposition: each background is divided into small sub-backgrounds, and each sub-background is then processed separately using a single fast painting algorithm. Moreover, the fastest painting is achieved by using parallel processing techniques to paint the resulting sub-backgrounds with the same number of fast painting algorithms. Compared with using only the fast painting algorithm, the speed-up ratio increases with the size of the background when fast painting is combined with background decomposition. Simulation results show that painting in the frequency domain is faster than in the spatial domain.
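
The frequency-domain idea can be sketched in a few lines of Python (a toy illustration, not the author's code): cross-correlation of a mask with a background reduces to an element-wise product of FFTs, and background decomposition simply applies the same routine to each sub-background.

    import numpy as np

    def cross_correlate_fft(background, mask):
        # Cross-correlation via FFT: multiply the spectrum of the background
        # by the conjugate spectrum of the zero-padded mask.
        H, W = background.shape
        F_bg = np.fft.fft2(background)
        F_mask = np.fft.fft2(mask, s=(H, W))
        return np.real(np.fft.ifft2(F_bg * np.conj(F_mask)))

    bg = np.random.rand(512, 512)
    mask = np.random.rand(16, 16)

    # Divide and conquer: paint each sub-background separately
    subs = [bg[:256, :], bg[256:, :]]
    results = [cross_correlate_fft(s, mask) for s in subs]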

Keywords: Fast Painting, Cross Correlation, Frequency Domain, Parallel Processing

PDF Downloads: 1795
168 Fault Diagnosis of Nonlinear Systems Using Dynamic Neural Networks

Authors: E. Sobhani-Tehrani, K. Khorasani, N. Meskin

Abstract:

This paper presents a novel integrated hybrid approach for fault diagnosis (FD) of nonlinear systems. Unlike most FD techniques, the proposed solution simultaneously accomplishes fault detection, isolation, and identification (FDII) within a unified diagnostic module. At the core of this solution is a bank of adaptive neural parameter estimators (NPE) associated with a set of single-parameter fault models. The NPEs continuously estimate unknown fault parameters (FP) that are indicators of faults in the system. Two NPE structures, series-parallel and parallel, are developed, each with its own set of desirable attributes. The parallel scheme is extremely robust to measurement noise and possesses a simpler, yet more solid, fault isolation logic. By contrast, the series-parallel scheme displays short FD delays and is robust to closed-loop system transients due to changes in control commands. Finally, a fault tolerant observer (FTO) is designed to extend the capability of the NPEs to systems with partial-state measurement.

Keywords: Hybrid fault diagnosis, Dynamic neural networks, Nonlinear systems.

PDF Downloads: 2221
167 The Determinants of Voluntary Disclosure in Croatia

Authors: Zeljana Aljinovic Barac, Marina Granic, Tina Vuko

Abstract:

This study investigates the level and extent of voluntary disclosure practice in Croatia. The research was conducted on a sample of 130 medium and large companies. The findings indicate that two thirds of the companies analyzed disclose a below-average amount of additional information. The explanatory analysis has shown that firm size, listing status and industrial sector significantly and positively affect the level and extent of voluntary disclosure in the annual reports of Croatian companies. On the other hand, profitability and ownership structure were found to be statistically insignificant. Unlike previous studies, this paper deals with the level of voluntary disclosure of medium and large companies, as well as of companies whose shares are not listed on an organized capital market, which is a contribution of this research. The paper also contributes by providing insights into voluntary disclosure practices in Croatia, as a case of a macro-oriented accounting-system economy, i.e. a bank-oriented economy with an emerging capital market.

Keywords: Annual report, Croatian companies, Disclosure index, Voluntary disclosure.

PDF Downloads: 2525
166 A Kernel Based Rejection Method for Supervised Classification

Authors: Abdenour Bounsiar, Edith Grall, Pierre Beauseroy

Abstract:

In this paper we are interested in classification problems with a performance constraint on the error probability. In such problems, if the constraint cannot be satisfied, a rejection option is introduced. For binary-labelled classification, a number of SVM-based methods with a rejection option have been proposed over the past few years. All of these methods use two thresholds on the SVM output. However, in previous work we have shown on synthetic data that using thresholds on the output of the optimal SVM may lead to poor results for classification tasks with a performance constraint. In this paper, a new method for supervised classification with a rejection option is proposed. It consists of two different classifiers jointly optimized to minimize the rejection probability subject to a given constraint on the error rate. The method uses a new kernel-based linear learning machine that we have recently presented, characterized by its simplicity and high training speed, which makes the simultaneous optimization of the two classifiers computationally reasonable. The proposed classification method with rejection option is compared to an SVM-based rejection method from the recent literature. Experiments show the superiority of the proposed method.
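
For context, the baseline the paper argues against can be sketched as follows (an illustrative toy with arbitrary thresholds, not the paper's optimized values): a single SVM is trained, and two thresholds on its real-valued output carve out a rejection band.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=400, n_features=5, random_state=0)
    scores = SVC().fit(X, y).decision_function(X)

    # Samples whose score falls between the two thresholds are rejected (-1)
    t_low, t_high = -0.5, 0.5
    decision = np.where(scores < t_low, 0, np.where(scores > t_high, 1, -1))
    print("rejection rate:", float(np.mean(decision == -1)))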

Keywords: rejection, Chow's rule, error-reject tradeoff, Support Vector Machine.

PDF Downloads: 1445
165 Logistic Model Tree and Expectation-Maximization for Pollen Recognition and Grouping

Authors: Endrick Barnacin, Jean-Luc Henry, Jack Molinié, Jimmy Nagau, Hélène Delatte, Gérard Lebreton

Abstract:

Palynology is a field of interest for many disciplines. It has multiple applications such as chronological dating, climatology, allergy treatment, and even honey characterization. Unfortunately, the analysis of a pollen slide is a complicated and time-consuming task that requires the intervention of experts in the field, who are becoming increasingly rare due to economic and social conditions; automating this task is therefore a necessity. Pollen slide analysis is mainly a visual process, as it is carried out with the naked eye, which is why digital image processing is a primary route to automating palynology: it has the lowest cost and relatively good accuracy in pollen retrieval. In this work, we propose a system combining recognition and grouping of pollen. It uses a Logistic Model Tree to classify pollen species already known to the system while detecting any unknown species; the unknown pollen grains are then grouped using a cluster-based approach. Encouraging recognition rates for known species were achieved, and the automated clustering appears to be a promising approach.
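
A rough sketch of the recognise-then-group pipeline, under stated substitutions: a probabilistic classifier stands in for the Logistic Model Tree (which has no standard scikit-learn implementation), a confidence threshold flags unknown species, and scikit-learn's GaussianMixture performs the EM-based grouping. The data and threshold are illustrative.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    X_known = rng.normal(0.0, 1.0, (120, 8))      # features of known pollen
    y_known = rng.integers(0, 3, 120)
    X_new = rng.normal(2.0, 1.0, (40, 8))         # incoming pollen to analyse

    clf = LogisticRegression(max_iter=1000).fit(X_known, y_known)
    confidence = clf.predict_proba(X_new).max(axis=1)
    unknown = confidence < 0.6                    # low confidence -> unknown species

    # EM-based grouping of the grains flagged as unknown
    groups = GaussianMixture(n_components=2, random_state=0).fit_predict(X_new[unknown])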

Keywords: Pollen recognition, logistic model tree, expectation-maximization, local binary pattern.

PDF Downloads: 770
164 Minimizing Risk Costs through Optimal Responses in NPD Projects

Authors: Chan-Sik Kim, Jong-Seong Kim, Se Won Lee, Hoo-Gon Choi

Abstract:

In a rapidly changing market environment, firms invest a great deal of time and resources in new product development (NPD) projects in order to make a profit and obtain competitive advantage. However, the failure rate of NPD projects is high, due to various internal and external risks that hinder their success. To reduce the failure rate, it is critical that risks be managed effectively and efficiently through a good strategy and treated with optimal responses that minimize the risk cost. Four strategies are adopted to handle the risks in this study. The optimal responses are characterized by a high reduction of risk costs with high efficiency. This study suggests a framework for deciding the optimal responses, considering the core risks, risk costs, response efficiency and response costs, for successful NPD projects; as sketched below, both binary particle swarm optimization (BPSO) and multi-objective particle swarm optimization (MOPSO) are the main methods used in the framework. Although several limitations remain for use in real industries, the framework demonstrates, through an example, a scientifically grounded way of handling the risks.
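
A minimal binary PSO in the Kennedy-Eberhart style gives the flavour of the selection step (the risk-cost function and all numbers are invented for illustration; the paper's actual objectives and MOPSO extension are richer):

    import numpy as np

    def bpso(cost, n_bits, n_particles=20, iters=50, seed=0):
        # Binary PSO: a sigmoid of the velocity gives the probability
        # that each bit (i.e., each candidate risk response) is selected.
        rng = np.random.default_rng(seed)
        X = rng.integers(0, 2, (n_particles, n_bits))
        V = np.zeros((n_particles, n_bits))
        pbest, pcost = X.copy(), np.array([cost(x) for x in X])
        gbest = pbest[pcost.argmin()].copy()
        for _ in range(iters):
            r1, r2 = rng.random(V.shape), rng.random(V.shape)
            V = 0.7 * V + 1.5 * r1 * (pbest - X) + 1.5 * r2 * (gbest - X)
            X = (rng.random(V.shape) < 1.0 / (1.0 + np.exp(-V))).astype(int)
            c = np.array([cost(x) for x in X])
            better = c < pcost
            pbest[better], pcost[better] = X[better], c[better]
            gbest = pbest[pcost.argmin()].copy()
        return gbest, pcost.min()

    # Illustrative objective: residual risk after mitigation plus response cost
    risk_cut = np.array([5.0, 3.0, 4.0, 2.0])
    resp_cost = np.array([2.0, 1.5, 3.0, 1.0])
    best, value = bpso(lambda x: 20.0 - x @ risk_cut + x @ resp_cost, n_bits=4)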

Keywords: NPD projects, risk cost, strategy, optimal responses, Particle Swarm Optimization.

PDF Downloads: 1957
163 Comparative Study in Dentinal Tubuli Occlusion Using Bioglass and Copper-Bromide Laser

Authors: Sun Woo Lee, Tae Bum Lee, Yoon Hwa Park, Yoo Jeong Kim

Abstract:

Cervical dentinal hypersensitivity (CDH) affects 8-30% of adults and nearly 85% of perio-treated patients. Various treatment schemes have been applied to CDH, among them fluoride application, laser irradiation and, recently, bioglass. The purpose of this study was to investigate the influence of bioglass, copper-bromide (Cu-Br) laser irradiation and their combination on dentinal tubule occlusion as a potential treatment for CDH. Forty-five human dentin surfaces were organized into three equal groups: group A received the Cu-Br laser only; group B received bioglass only; group C received bioglass followed by Cu-Br laser irradiation. Specimens were evaluated with regard to dentinal tubule occlusion under an environmental scanning electron microscope. The treatment modality significantly affected dentinal tubule occlusion (p<0.001). Groups B and C scored higher dentinal tubule occlusion than group A. Binary logistic regression showed that bioglass application contributed significantly (p<0.001) to dentinal tubule occlusion compared with the other variables. Under the conditions used herein, and within the limitations of this study, bioglass application, alone or combined with Cu-Br laser irradiation, is a superior method for producing dentinal tubule occlusion, and may lead to an effective treatment modality for CDH.

Keywords: Bioglass, Cu-Br laser, cervical dentinal hypersensitivity, dentinal tubule occlusion.

PDF Downloads: 1333
162 Analytical Study of Applying the Account Aggregation Approach in E-Banking Services

Authors: A. Al Drees, A. Alahmari, R. Almuwayshir

Abstract:

Advanced information technology is becoming an important factor in the development of the financial services industry, especially banking. It has introduced new ways of delivering banking to the customer, such as Internet banking, and banks have begun to look at electronic banking (e-banking) as a means of replacing some of their traditional branch functions, using the Internet as a new distribution channel. Some consumers have more than one account, across different banks, and access these accounts using e-banking services. To see their current net-worth position, customers have to log in to each of their accounts, get the details, and work on consolidation; this not only takes ample time but is also a repetitive activity at a specified frequency. To address this point, the account aggregation concept is added as a solution. E-banking account aggregation, as one of the e-banking types, appeared in order to build a stronger relationship with customers. An account aggregation service generally refers to a service that allows customers to manage their bank accounts maintained at different institutions through a common Internet banking operating platform, with high concern for security and privacy. This paper presents an overview of the e-banking account aggregation approach as a new service in the e-banking field.

Keywords: E-banking, security, account aggregation, enterprise application development.

PDF Downloads: 1548
161 Developing a Coronavirus Academic Paper Sorting Application

Authors: Christina A. van Hal, Xiaoqian Jiang, Luyao Chen, Yan Chu, Robert D. Jolly, Yaobin Lin, Jitian Zhao, Kang Lin Hsieh

Abstract:

The COVID-19 Literature Summary App, now live on the university website, was created for the primary purpose of enabling academicians and clinicians to quickly sort through the vast array of recent coronavirus publications by topics of interest. Multiple methods of summarizing and sorting the manuscripts were created. A summary page introduces the application's function and capabilities, while an interactive map provides daily updates on infection, death, and recovery rates. A page with a pivot table allows publication sorting by topic, with an interactive data table that allows sorting topics by columns, as well as the capability to view abstracts. Additionally, publications may be sorted by the medical topics they cover. We used the CORD-19 database to compile the lists of publications. The data table can sort binary variables, allowing the user to pick desired publication topics, such as papers that describe COVID-19 symptoms. The application is primarily designed for use by researchers but can be used by anybody who wants a faster and more efficient means of locating papers of interest.

Keywords: COVID-19, literature summary, information retrieval, snorkel

PDF Downloads: 469
160 Prioritizing Service Quality Dimensions: A Neural Network Approach

Authors: A. Golmohammadi, B. Jahandideh

Abstract:

One of the determinants of a firm's prosperity is the customers' perceived service quality and satisfaction. While service quality is wide in scope and consists of various dimensions, there may be differences in the relative importance of these dimensions in affecting customers' overall satisfaction with service quality. Identifying the relative rank of the different dimensions of service quality is very important in that it can help managers find out which service dimensions have a greater effect on customers' overall satisfaction. Such an insight will consequently lead to more effective resource allocation, which will finally end in higher levels of customer satisfaction. This issue – despite its criticality – has not received enough attention so far. Therefore, using a sample of 240 bank customers in Iran, an artificial neural network is developed to address this gap in the literature. As customers' evaluation of service quality is a subjective process, artificial neural networks – as a brain metaphor – may have the potential to model such a complicated process. Proposing a neural network able to predict customers' overall satisfaction with service quality with a promising level of accuracy is the first contribution of this study. In addition, prioritizing the service quality dimensions in affecting customers' overall satisfaction – by using sensitivity analysis of the neural network – is the second important finding of this paper.

Keywords: service quality, customer satisfaction, relative importance, artificial neural network.

PDF Downloads: 2159
159 Exploration of Least Significant Bit Based Watermarking and Its Robustness against Salt and Pepper Noise

Authors: Kamaldeep Joshi, Rajkumar Yadav, Sachin Allwadhi

Abstract:

Image steganography is one of the best-known aspects of information hiding: the information is hidden within an image, and the image travels openly on the Internet. The Least Significant Bit (LSB) method is one of the most popular methods of image steganography; the information bit is hidden at the LSB of the image pixel. In the one-bit LSB steganography method, the total number of pixels and the total number of message bits are equal. In this paper, the LSB method of image steganography is used for watermarking, an application of steganography. The watermark contains 80×88 pixels, and each pixel requires 8 bits for its binary equivalent, so the total number of bits required to hide the watermark is 80×88×8 = 56,320. The experiment was performed on standard 256×256 and 512×512 images. After the watermark insertion, histogram analysis was performed. Salt-and-pepper noise with a density of 0.02 was added to the stego image in order to evaluate the robustness of the method, and the watermark was successfully retrieved after the insertion of noise. An experiment was also performed to assess the imperceptibility of the stego image and the quality of the retrieved watermark. It is clear that the LSB watermarking scheme is robust to salt-and-pepper noise.
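
The embed/extract cycle with the 0.02 salt-and-pepper test can be sketched as follows (synthetic images, our code; the 80×88×8 = 56,320-bit watermark fits in the 65,536 pixels of a 256×256 cover):

    import numpy as np

    def embed_lsb(cover, bits):
        # Replace the least significant bit of the first len(bits) pixels
        stego = cover.flatten().copy()
        stego[:bits.size] = (stego[:bits.size] & 0xFE) | bits
        return stego.reshape(cover.shape)

    def extract_lsb(stego, n_bits):
        return stego.flatten()[:n_bits] & 1

    rng = np.random.default_rng(0)
    cover = rng.integers(0, 256, (256, 256), dtype=np.uint8)
    watermark = rng.integers(0, 2, 80 * 88 * 8, dtype=np.uint8)  # 56,320 bits
    stego = embed_lsb(cover, watermark)

    # Salt-and-pepper noise with total density 0.02 (half salt, half pepper)
    u = rng.random(stego.shape)
    noisy = stego.copy()
    noisy[u < 0.01] = 0
    noisy[u > 0.99] = 255
    ber = np.mean(extract_lsb(noisy, watermark.size) != watermark)
    print("bit error rate:", float(ber))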

Keywords: LSB, watermarking, salt and pepper, PSNR.

PDF Downloads: 1053
158 A Bi-Objective Model for Location-Allocation Problem within Queuing Framework

Authors: Amirhossein Chambari, Seyed Habib Rahmaty, Vahid Hajipour, Aida Karimi

Abstract:

This paper proposes a bi-objective model for the facility location problem under a congestion system. The model is motivated by applications such as locating bank automated teller machines (ATMs) and communication network servers, and it is specifically suited to situations in which fixed service facilities are congested by stochastic demand within a queuing framework. We formulate the model from two perspectives simultaneously: (i) the customers and (ii) the service provider. The objectives are to minimize (i) the total expected travelling and waiting time and (ii) the average facility idle time. The model is a mixed-integer nonlinear programming problem that belongs to the class of NP-hard problems. To solve it, two metaheuristic algorithms are proposed: the non-dominated sorting genetic algorithm (NSGA-II) and the non-dominated ranking genetic algorithm (NRGA). In addition, to evaluate the performance of the two algorithms, some numerical examples are produced and analyzed with several metrics to determine which algorithm works better.

Keywords: Queuing, Location, Bi-objective, NSGA-II, NRGA

PDF Downloads: 2276
157 A Character Detection Method for Ancient Yi Books Based on Connected Components and Regressive Character Segmentation

Authors: Xu Han, Shanxiong Chen, Shiyu Zhu, Xiaoyu Lin, Fujia Zhao, Dingwang Wang

Abstract:

Character detection is an important issue for character recognition of ancient Yi books, and the accuracy of detection directly affects the recognition results. Considering the complex layout, the lack of standard typesetting and the mixed arrangement of images and text, we propose a character detection method for ancient Yi books based on connected components and regressive character segmentation. First, the scanned images of ancient Yi books are preprocessed with non-local means filtering, and a modified local adaptive threshold binarization algorithm is then used to obtain binary images separating the foreground from the background. Second, the non-text areas are removed by a method based on connected components. Finally, the single characters in the ancient Yi books are segmented by our method. The experimental results show that the method can effectively separate the text areas from the non-text areas in ancient Yi books, achieves high accuracy and recall in the character detection experiments, and effectively solves the problem of character detection and segmentation in the character recognition of ancient books.
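
The preprocessing chain maps onto standard OpenCV calls; the sketch below (the file name and the filtering rules are placeholders, not the paper's tuned criteria) denoises, binarizes with a local adaptive threshold, and removes non-text regions via connected components.

    import cv2

    page = cv2.imread("yi_page.png", cv2.IMREAD_GRAYSCALE)
    denoised = cv2.fastNlMeansDenoising(page, h=10)

    # Local adaptive threshold: white text on black background (INV)
    binary = cv2.adaptiveThreshold(denoised, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY_INV, 31, 10)

    # Connected components; stats rows are [x, y, width, height, area]
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    for i in range(1, n):  # label 0 is the background
        x, y, w, h, area = stats[i]
        if area < 20 or w > page.shape[1] // 2:  # crude non-text rejection
            binary[labels == i] = 0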

Keywords: Computing methodologies, interest point, salient region detections, image segmentation.

PDF Downloads: 865
156 Optimization of Technical and Technological Solutions for the Development of Offshore Hydrocarbon Fields in the Kaliningrad Region

Authors: Pavel Shcherban, Viktoria Ivanova, Alexander Neprokin, Vladislav Golovanov

Abstract:

Currently, LLC «Lukoil-Kaliningradmorneft» is implementing a comprehensive program for the development of offshore fields of the Kaliningrad region. This is largely associated with the depletion of the onshore resource base of the region, as well as with the positive results of geological investigations in the surrounding Baltic Sea area and the data on hydrocarbon recovery volumes from the single offshore field currently operating in the Kaliningrad region, D-6 «Kravtsovskoye». The article analyzes the main stages of LLC «Lukoil-Kaliningradmorneft»'s program for developing the hydrocarbon resources of the region's shelf and suggests an optimization algorithm for managing the multi-criteria process of developing shelf deposits. The algorithm is formulated as a sequential decision-making problem, a branch of dynamic programming. Applying the algorithm during the consolidation of the initial data, the elaboration of project documentation, and the further exploration and development of offshore fields will make it possible to optimize the complex of technical and technological solutions and to increase the economic efficiency of the field development project implemented by LLC «Lukoil-Kaliningradmorneft».

Keywords: Offshore fields of hydrocarbons of the Baltic Sea, Development of offshore oil and gas fields, Optimization of the field development scheme, Solution of multi-criteria tasks in the oil and gas complex, Quality management of technical and technological processes.

PDF Downloads: 856
155 Investigation of Crack Formation in Ordinary Reinforced Concrete Beams and in Beams Strengthened with Carbon Fiber Sheet: Theory and Experiment

Authors: Anton A. Bykov, Irina O. Glot, Igor N. Shardakov, Alexey P. Shestakov

Abstract:

This paper presents the results of experimental and theoretical investigations of the mechanisms of crack formation in reinforced concrete beams subjected to quasi-static bending. The boundary-value problem has been formulated in the framework of brittle fracture mechanics and solved using the finite-element method. Numerical simulation of the vibrations of an uncracked beam and of a beam with cracks of different sizes serves to determine the pattern of changes in the spectrum of eigenfrequencies observed during crack evolution. Experiments were performed on the sequential quasi-static four-point bending of beams leading to the formation of cracks in the concrete. At each loading stage, the beam was subjected to an impulse load to induce vibrations. Two stages of cracking were detected. In the first stage, a conservative process of deformation takes place; the second stage is active cracking, marked by a sharp change in eigenfrequencies. The boundary of the transition from one stage to the other is clearly registered. The vibration behavior was also examined for beams strengthened with carbon-fiber sheet before loading and at an intermediate stage of loading after the grouting of the initial cracks. The results show that the vibrodiagnostic approach is an effective tool for monitoring cracking and for assessing the quality of measures aimed at strengthening concrete structures.

Keywords: Crack formation, experiment, mathematical modeling, reinforced concrete, vibrodiagnostics.

PDF Downloads: 1278
154 Financing Decision and Productivity Growth for the Venture Capital Industry Using High-Order Fuzzy Time Series

Authors: Shang-En Yu

Abstract:

In human society there are many uncertainties, such as forecasting economic growth rates during a financial crisis. Since Song and Chissom introduced the concept of fuzzy time series in 1993, many scholars have proposed different models to deal with such problems. Previous studies, however, usually neither consider the selection of the relevant variables nor base the fuzzification on objective criteria: the fuzzy semantic discretization relies solely on subjective opinion and therefore cannot objectively reflect the characteristics of the data set. In addition, when carrying out forecasts, all fuzzy rules are often treated as equally important, failing to account for the importance of each rule. For these reasons, this study performs variable (factor) selection through a self-organizing map (SOM) and proposes a high-order weighted multivariate fuzzy time series model based on a fuzzy neural network (Fuzzy-BPN), using the ordered weighted averaging (OWA) operator for weighted prediction. To verify the proposed method, the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) is used as the forecasting target, with the appropriate variables filtered in the experiment. Finally, in comparison with models from other recent studies, the results show that the predictive ability of the proposed approach is further improved.

Keywords: Heterogeneity, residential mortgage loans, foreclosure.

PDF Downloads: 1388
153 Improved Modulo 2^n + 1 Adder Design

Authors: Somayeh Timarchi, Keivan Navi

Abstract:

Efficient modulo 2^n + 1 adders are important for several applications, including residue number systems, digital signal processors and cryptography algorithms. In this paper we present a novel modulo 2^n + 1 addition algorithm for a recently introduced number system; the proposed approach aims to reduce the power dissipated. In a conventional modulo 2^n + 1 adder, all operands have (n+1)-bit length. To avoid using (n+1)-bit circuits, the diminished-1 and carry-save diminished-1 number systems can be used effectively in applications. We also derive two new architectures for modulo 2^n + 1 adders based on an n-bit ripple-carry adder: the first is faster, whereas the second uses less hardware. In the proposed method, the special treatment required for zero operands in the diminished-1 number system is removed. The fastest modulo 2^n + 1 adders in the normal binary system require 3-operand adders; this problem is also resolved in this paper. The proposed architectures are compared with some efficient adders based on the ripple-carry adder and a high-speed adder, and it is shown that the hardware overhead and power consumption are reduced. In addition to the power reduction, in some cases the power-delay product is also reduced.
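
As a software model of the diminished-1 scheme (our illustration, not the paper's circuits): with operands stored as A-1 and B-1, a modulo 2^n + 1 addition reduces to an n-bit add plus the inverted end-around carry, and the zero cases are exactly the special treatment the paper sets out to remove.

    def dim1_add(a_star, b_star, n):
        # Diminished-1 addition: S* = (A* + B* + NOT carry) mod 2**n,
        # which equals (A + B) mod (2**n + 1) minus one for nonzero results.
        total = a_star + b_star
        carry = total >> n
        return (total + (1 - carry)) & ((1 << n) - 1)

    # Exhaustive check against direct arithmetic for n = 4 (modulo 17)
    n, M = 4, 17
    for A in range(1, M):
        for B in range(1, M):
            S = (A + B) % M
            if S != 0:  # a zero result needs the special-case handling
                assert dim1_add(A - 1, B - 1, n) == S - 1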

Keywords: Modulo 2^n + 1 arithmetic, residue number system, low power, ripple-carry adders.

PDF Downloads: 2903
152 Support Vector Machine based Intelligent Watermark Decoding for Anticipated Attack

Authors: Syed Fahad Tahir, Asifullah Khan, Abdul Majid, Anwar M. Mirza

Abstract:

In this paper, we present an innovative scheme for blindly extracting message bits from an image distorted by an attack. A Support Vector Machine (SVM) is used to nonlinearly classify the bits of the embedded message. Traditionally, a hard decoder is used under the assumption that the underlying model of the Discrete Cosine Transform (DCT) coefficients does not change appreciably. In the case of an attack, however, the distribution of the image coefficients is heavily altered; the distributions of the sufficient statistics at the receiving end corresponding to the antipodal signals overlap, and a simple hard decoder fails to classify them properly. We treat message retrieval of the antipodal signal as a binary classification problem, and machine learning techniques such as the SVM are used to retrieve the message when a certain specific class of attacks is most probable. In order to validate the SVM-based decoding scheme, we take Gaussian noise as a test case. We generate a data set using 125 images and 25 different keys; an SVM with a polynomial kernel achieved 100% accuracy on the test data.

Keywords: Bit Correct Ratio (BCR), Grid Search, Intelligent Decoding, Jackknife Technique, Support Vector Machine (SVM), Watermarking.

PDF Downloads: 1670
151 A Xenon Mass Gauging through Heat Transfer Modeling for Electric Propulsion Thrusters

Authors: A. Soria-Salinas, M.-P. Zorzano, J. Martín-Torres, J. Sánchez-García-Casarrubios, J.-L. Pérez-Díaz, A. Vakkada-Ramachandran

Abstract:

The current state-of-the-art methods for mass gauging of Electric Propulsion (EP) propellants in microgravity conditions rely on external measurements taken at the surface of the tank. The tanks are operated under a constant thermal duty cycle to store the propellant within a pre-defined temperature and pressure range. We demonstrate, using computational fluid dynamics (CFD) simulations, that the heat transfer within the pressurized propellant generates temperature and density anisotropies. This challenges the standard mass gauging methods that rely on time-varying skin temperatures and pressures. We observe that the domes of the tanks are prone to overheating and that, a long time after the heaters of the thermal cycle are switched off, the system reaches a quasi-equilibrium state with a more uniform density. We propose a new gauging method, which we call the improved PVT method, based on universal physics and thermodynamics principles, existing TRL-9 technology and telemetry data. This method uses as inputs only the temperature and pressure readings of sensors externally attached to the tank; these sensors can operate during the nominal thermal duty cycle. The improved PVT method shows little sensitivity to the pressure sensor drifts that become critical towards the end of life of a mission, as well as little sensitivity to systematic temperature errors. The retrieval method has been validated experimentally with CO2 in the gas and fluid states in a chamber that operates up to 82 bar within a nominal thermal cycle of 38 °C to 42 °C. The mass gauging error is shown to be lower than 1% of the mass at the beginning of life, assuming an initial tank load at 100 bar. In particular, for a pressure of about 70 bar, just below the critical pressure of CO2, the error of the mass gauging in the gas phase goes down to 0.1%, and for 77 bar, just above the critical point, the error of the mass gauging of the liquid phase is 0.6% of the initial tank load. This gauging method improves the accuracy of the standard PVT retrievals, which use look-up tables with tabulated data from the National Institute of Standards and Technology, by a factor of 8.
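
For reference, the standard PVT retrieval that the improved method builds on infers the propellant mass from the externally measured pressure and temperature through the real-gas law (our summary, in LaTeX notation; Z(P,T) is the compressibility factor taken from tabulated data such as NIST's):

    m = \frac{P\,V\,M}{Z(P,T)\,R\,T}

with V the internal tank volume, M the propellant molar mass, and R the universal gas constant. The improved method described above still uses only such externally read (P, T) values as inputs.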

Keywords: Electric propulsion, mass gauging, propellant, PVT, xenon.

PDF Downloads: 2189
150 Bubble Point Pressures of CO2+Ethyl Palmitate by a Cubic Equation of State and the Wong-Sandler Mixing Rule

Authors: M. A. Sedghamiz, S. Raeissi

Abstract:

This study presents three different approaches to estimating bubble point pressures for the binary system of CO2 and the fatty acid ethyl ester ethyl palmitate. The first method involves the Peng-Robinson (PR) equation of state (EoS) with the conventional Van der Waals mixing rule. The second approach involves the PR EoS together with the Wong-Sandler (WS) mixing rule, coupled with the UNIQUAC excess Gibbs energy (GE) model; in order to model the bubble point pressures with this approach, the volume and area parameters for ethyl palmitate were estimated by the Hansen group contribution method. The last method combines the PR EoS with the Wong-Sandler mixing rule, but using NRTL as the GE model. Results using the Van der Waals mixing rule clearly indicated that this method has the largest errors among the three, in the range of 3.96-6.22%. The PR-WS-UNIQUAC method exhibited small errors, with average absolute deviations between 0.95% and 1.97%, while the PR-WS-NRTL method led to the smallest errors, with average absolute deviations between 0.65% and 1.7%.
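
For reference, the Peng-Robinson equation of state and the conventional Van der Waals (one-fluid) mixing rule used in the first approach take the standard forms (our notation, in LaTeX):

    P = \frac{RT}{v - b} - \frac{a(T)}{v(v + b) + b(v - b)},
    \qquad a = \sum_i \sum_j x_i x_j \sqrt{a_i a_j}\,(1 - k_{ij}),
    \qquad b = \sum_i x_i b_i

The Wong-Sandler rule instead ties a and b to conditions that match the EoS to the chosen excess Gibbs energy model (UNIQUAC or NRTL here) at infinite pressure.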

Keywords: Bubble pressure, Gibbs excess energy model, mixing rule, CO2 solubility, ethyl palmitate.

PDF Downloads: 1853
148 Academic Achievement Differences in Grandiose and Vulnerable Narcissists and the Mediating Effects of Self-Esteem and Self-Efficacy

Authors: Amber L. Dummett, Efstathia Tzemou

Abstract:

Narcissism is a personality trait characterised by selfishness, entitlement, and superiority, and it is split into two subtypes: grandiose narcissism (GN) and vulnerable narcissism (VN). Grandiose narcissists are extraverted and arrogant, while vulnerable narcissists are introverted and insecure. This study investigates the psychological mechanisms that lead to differences in academic achievement (AA) between grandiose and vulnerable narcissists, specifically the mediating effects of self-esteem and self-efficacy. While narcissism is considered a negative trait, this study considers whether better AA is one of its upsides. Moreover, further research into VN is essential to fully compare and contrast it with GN. We hypothesise that grandiose narcissists achieve higher marks because their high self-esteem boosts their sense of self-efficacy, and that vulnerable narcissists underperform because their low self-esteem limits their self-efficacy. Two online surveys were distributed to undergraduate university students: the first a collection of scales measuring the dimensions mentioned, the second an investigation of end-of-year AA. Sequential mediation analyses were conducted on the gathered data. Our analysis shows that neither self-esteem nor self-efficacy mediates the relationship between GN and AA; GN positively predicts self-esteem but has no relationship with self-efficacy. Self-esteem does not mediate the relationship between VN and AA. VN has a negative indirect effect on AA via self-efficacy, and VN negatively predicts self-esteem, whereas self-efficacy positively predicts AA. GN does not affect AA through the mediation of self-esteem and then self-efficacy, and neither does VN. Overall, having grandiose or vulnerable narcissistic traits does not affect students' AA. However, being highly efficacious does lead to academic success; universities should therefore employ methods to improve the self-efficacy of their students.

Keywords: Academic achievement, grandiose narcissism, self-efficacy, self-esteem, vulnerable narcissism.

PDF Downloads: 463
147 Entrepreneurial Predisposition and Intention of Students from the IFRN – Mossoró, Brazil

Authors: Giovane Gurgel, Cristina S. Rodrigues, Filipa D. Vieira

Abstract:

IFRN – Mossoró is a Brazilian technical education institute that develops several activities to encourage entrepreneurship, such as a curricular discipline on enterprise management and a business incubator. Despite these efforts, the business incubator does not produce the expected effects. So what predisposes students to start their own business? While the literature explores determinant factors such as family and personal characteristics, it can be argued that entrepreneurship skills can be taught from primary level up to university level. This paper presents the results of the research project “Empreende IFRN”, aimed at understanding the entrepreneurial predisposition and intention of students in technical-level courses. Data from 365 students reveal that students' entrepreneurial intention increases with the time horizon considered (from a two-year period to someday in the future). The entrepreneurial behavior of parents affects students' perception of starting their own business. Students also display cautious behavior, preferring bank deposits and investment funds to starting a business.

Keywords: Brazil, Entrepreneurial intention, Entrepreneurship, Secondary technical students.

PDF Downloads: 4038
146 Design and Analysis of Electric Power Production Unit for Low Enthalpy Geothermal Reservoir Applications

Authors: Ildar Akhmadullin, Mayank Tyagi

Abstract:

The subject of this paper is the design analysis of a single-well power production unit for low-enthalpy geothermal resources. The complexity of the project is defined by the low-temperature heat source, which usually makes such projects economically disadvantageous under the conventional binary power plant approach. A proposed new compact design is numerically analyzed, and the paper describes the thermodynamic analysis, the choice of working fluid, and the downhole heat exchanger (DHE) and turbine calculation results. The unit is able to produce 321 kW of electric power from a low-enthalpy underground heat source utilizing n-Pentane as the working fluid. A geo-pressured reservoir located in Vermilion Parish, Louisiana, USA, is selected as a prototype for the field application. With a brine temperature of 126, the optimal length of the DHE is determined to be 304.8 m (1000 ft). All units (pipes, turbine, and pumps) are chosen from commercially available parts to bring the project closer to industry requirements. Numerical calculations are based on petroleum industry standards. The project is sponsored by the US Department of Energy.

Keywords: Downhole Heat Exchangers, Geothermal Power Generation, Organic Rankine Cycle, Refrigerants, Working Fluids.

PDF Downloads: 2670