Search results for: Luminance and Contrast masking
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 362

92 Great Powers’ Proxy Wars in Middle East and Difficulty in Transition from Cold War to Cold Peace

Authors: Arash Sharghi, Irina Dotu

Abstract:

Developments in the Middle East have drawn a large and diverse set of state and non-state actors into the region's affairs. Their divergent, and often conflicting, goals, positions, ideologies, and policy behaviors have fed the spread and persistence of crisis. Non-state actors, ranging from Islamic organizations to takfiri-terrorist movements, on one hand, and regional and trans-regional actors, on the other, each pursue their own interests in the power struggle. This raises a question worth researching: given the actors' contradictory interests and constraints, what are the prospects for regional peace and stability? Here, the actors' defined aims and the actions and behaviors that drive instability serve as independent variables, while the prospects for peace and stability in the Middle East constitute the dependent variable. Although this research, grounded in regional peace and war theory, acknowledges the significant influence of trans-regional actors, it argues that the roots of the violence lie within the region itself. Consequently, preventing hot war and conflict and securing a hot peace in the Middle East cannot be achieved through the demands and approaches of trans-regional actors alone; the capacity of trans-regional actors suffices only to bring about a cold war or a cold peace in the region. Furthermore, within the framework of the current struggle among regional actors, turning the regional cold war into even a cold peace appears difficult, if not impossible.

Keywords: Cold peace, cold war, hot war, Middle East, non-state actors, regional and Great powers, war theory.

Procedia APA BibTeX Chicago EndNote Harvard JSON MLA RIS XML ISO 690 PDF Downloads 1119
91 Hybrid Temporal Correlation Based on Gaussian Mixture Model Framework for View Synthesis

Authors: Deng Zengming, Wang Mingjiang

Abstract:

As 3D video has become a hot research topic over the last few decades, free-viewpoint TV (FTV) is a promising field thanks to its superior visual experience and unmatched interactivity. View synthesis is a crucial technology for FTV: it makes it possible to render images at an unlimited number of virtual viewpoints from the information in a limited number of reference views. In this paper, a novel hybrid synthesis framework is proposed and blending priority is explored. In contrast to the commonly used View Synthesis Reference Software (VSRS), the presented synthesis process takes the temporal correlation of image sequences into account. These temporal correlations are exploited to produce fine synthesis results even near foreground boundaries. As for blending priority, the scheme selects one of the two reference views as the main reference view based on the distance between the reference views and the virtual view; the other view serves as the auxiliary viewpoint, assisting only in filling hole pixels with the help of background information. The proposed approach significantly improves on the state-of-the-art pixel-based virtual view synthesis method: the experiments show subjective gains, objective PSNR average gains ranging from 0.5 to 1.3 dB, and SSIM average gains ranging from 0.01 to 0.05.
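The distance-based blending priority described above can be sketched in a few lines. This is an illustrative outline under simplifying assumptions (camera positions reduced to 1-D coordinates along the baseline, holes marked as None), not the authors' implementation:

```python
def choose_references(left_pos, right_pos, virtual_pos):
    """Pick the main reference view as the one closer to the virtual viewpoint."""
    if abs(virtual_pos - left_pos) <= abs(virtual_pos - right_pos):
        return "left", "right"   # (main, auxiliary)
    return "right", "left"

def blend_pixel(main_px, aux_px):
    """Use the main view's warped pixel; fall back to the auxiliary view
    (carrying background information) when the main view has a hole (None)."""
    return main_px if main_px is not None else aux_px
```

For example, a virtual view at position 3 between references at 0 and 10 takes the left view as main, so disocclusion holes are filled from the right view.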

Keywords: View synthesis, Gaussian mixture model, hybrid framework, fusion method.

90 Mechanical Behavior of Recycled Mortars Manufactured from Moisture Correction Using the Halogen Light Thermogravimetric Balance as an Alternative to the Traditional ASTM C 128 Method

Authors: Diana Gómez-Cano, J. C. Ochoa-Botero, Roberto Bernal Correa, Yhan Paul Arias

Abstract:

To obtain high mechanical performance, the fresh conditions of a mortar are decisive. Measuring the absorption of the aggregates used in mortar mixes is a fundamental requirement for the proper design of the mixes prior to their placement on construction sites. Absorption is a determining factor in mix design because it conditions the amount of water, which in turn affects the water/cement ratio and the final porosity of the mortar. This work therefore focuses on the mechanical behavior of recycled mortars manufactured with moisture correction using the halogen light thermogravimetric balance (TBHL) technique, in comparison with the traditional ASTM C 128 standard method. The TBHL technique is advantageous in terms of reduced consumption of resources such as materials, energy, and time. The results show that, in contrast to the ASTM C 128 method, the alternative TBHL technique yields higher precision in the absorption values of recycled aggregates, which is reflected not only in a more sustainable characterization process for construction materials but also in the mechanical performance of the recycled mortars.

Keywords: Alternative raw materials, halogen light, recycled mortar, resources optimization, water absorption.

89 Measuring the Structural Similarity of Web-based Documents: A Novel Approach

Authors: Matthias Dehmer, Frank Emmert Streib, Alexander Mehler, Jürgen Kilian

Abstract:

Most known methods for measuring the structural similarity of document structures are based on, e.g., tag measures, path metrics, and tree measures applied to their DOM trees. Other methods measure similarity within the framework of the well-known vector space model. In contrast, we present a new approach that measures the structural similarity of web-based documents represented as so-called generalized trees, which are more general than DOM trees, which represent only directed rooted trees. We design a new similarity measure for graphs representing web-based hypertext structures. Our measure is based on a novel representation of a graph as linear strings of integers whose components encode structural properties of the graph; the similarity of two graphs is then defined via the optimal alignment of the underlying property strings. In this paper we apply the well-known technique of sequence alignment to a novel and challenging problem: measuring the structural similarity of generalized trees. More precisely, we first transform the graphs, considered as high-dimensional objects, into linear structures, and then derive similarity values from the alignments of the property strings. Hence, we transform a graph similarity problem into a string similarity problem. We demonstrate that our similarity measure captures important structural information by applying it to two test sets of graphs representing web-based documents.
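The core idea, scoring two integer property strings by their optimal global alignment, can be illustrated with a standard Needleman-Wunsch scheme. The scoring values (match 1, mismatch 0, gap -1) and the normalization are illustrative assumptions, not the measure defined in the paper:

```python
def align_score(a, b, gap=-1):
    """Optimal global alignment score (Needleman-Wunsch) of two integer
    property strings: match scores 1, mismatch 0, gaps are penalized."""
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = i * gap            # aligning a prefix against nothing
    for j in range(1, m + 1):
        dp[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = dp[i - 1][j - 1] + (1 if a[i - 1] == b[j - 1] else 0)
            dp[i][j] = max(match, dp[i - 1][j] + gap, dp[i][j - 1] + gap)
    return dp[n][m]

def similarity(a, b):
    """Normalize the alignment score to [0, 1] by the best possible score."""
    best = max(len(a), len(b))
    return max(align_score(a, b), 0) / best if best else 1.0
```

Two identical property strings then get similarity 1.0, while a single differing component lowers the score in proportion to the string length.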

Keywords: Graph similarity, hierarchical and directed graphs, hypertext, generalized trees, web structure mining.

88 Non-Local Behavior of a Mixed-Mode Crack in a Functionally Graded Piezoelectric Medium

Authors: Nidhal Jamia, Sami El-Borgi

Abstract:

In this paper, the problem of a mixed-mode crack embedded in an infinite medium made of a functionally graded piezoelectric material (FGPM), with crack surfaces subjected to electro-mechanical loadings, is investigated. Eringen's non-local theory of elasticity is adopted to formulate the governing electro-elastic equations. The properties of the piezoelectric material are assumed to vary exponentially along a plane perpendicular to the crack. Using the Fourier transform, three integral equations are obtained in which the unknowns are the jumps of the mechanical displacements and electric potentials across the crack surfaces. To solve the integral equations, the unknowns are expanded directly as series of Jacobi polynomials, and the resulting equations are solved using the Schmidt method. In contrast to classical solutions based on the local theory, no mechanical stress or electric displacement singularities are found at the crack tips when the non-local theory is employed. A direct benefit is the ability to use the calculated maximum stress as a fracture criterion. The primary objective of this study is to investigate the effects of the crack length, the material gradient parameter describing FGPMs, and the lattice parameter on the mechanical stress and electric displacement fields near the crack tips.

Keywords: Functionally graded piezoelectric material, mixed-mode crack, non-local theory, Schmidt method.

87 The Phonology and Phonetics of Second Language Intonation in Case of “Downstep”

Authors: Tayebeh Norouzi

Abstract:

This study aims to investigate the acquisition process of intonation. It examines the intonation structure of Tokyo Japanese and its realization by Iranian learners of Japanese. Seven Iranian learners of Japanese, differing in fluency, and two Japanese speakers participated in the experiment. Two sentences were used to test the phonological and phonetic characteristics of lexical pitch-accent as well as the intonation patterns produced by the speakers. Both sentences consisted of similar words with the same number of syllables and lexical pitch-accents but different syntactic structure. Speakers were asked to read each sentence three times at normal speed, and the data were analyzed by Praat. The results show that lexical pitch-accent, Accentual Phrase (AP) and AP boundary tone realization vary depending on sentence type. For sentences of type XdeYwo, the lexical pitch-accent is realized properly. However, there is a rise in AP boundary tone regardless of speakers’ level of fluency. In contrast, in sentences of type XnoYwo, the lexical pitch-accent and AP boundary tone vary depending on the speakers’ fluency level. Advanced speakers are better at grouping words into phrases and produce more native-like intonation patterns, though they are not able to realize downstep properly. The non-native speakers tried to realize proper intonation patterns by making changes in lexical accent and boundary tone.

Keywords: Intonation, Iranian learners, Japanese prosody, lexical accent, second language acquisition.

86 Academic Achievement Differences in Grandiose and Vulnerable Narcissists and the Mediating Effects of Self-Esteem and Self-Efficacy

Authors: Amber L. Dummett, Efstathia Tzemou

Abstract:

Narcissism is a personality trait characterised by selfishness, entitlement, and superiority. It is split into two subtypes: grandiose narcissism (GN) and vulnerable narcissism (VN). Grandiose narcissists are extraverted and arrogant, while vulnerable narcissists are introverted and insecure. This study investigates the psychological mechanisms that lead to differences in academic achievement (AA) between grandiose and vulnerable narcissists, specifically the mediating effects of self-esteem and self-efficacy. While narcissism is generally considered a negative trait, this study considers whether higher AA might nevertheless accompany it. Moreover, further research into VN is essential to fully compare and contrast it with GN. We hypothesise that grandiose narcissists achieve higher marks because their high self-esteem boosts their sense of self-efficacy, and that vulnerable narcissists underperform because their low self-esteem limits their self-efficacy. Two online surveys were distributed to undergraduate university students: the first collected responses on scales measuring the dimensions above, and the second recorded end-of-year AA. Sequential mediation analyses were conducted on the gathered data. Our analysis shows that neither self-esteem nor self-efficacy mediates the relationship between GN and AA. GN positively predicts self-esteem but has no relationship with self-efficacy. Self-esteem does not mediate the relationship between VN and AA; VN has a negative indirect effect on AA via self-efficacy and negatively predicts self-esteem. Self-efficacy positively predicts AA. Neither GN nor VN affects AA through the sequential mediation of self-esteem and then self-efficacy. Overall, having grandiose or vulnerable narcissistic traits does not affect students' AA. However, being highly efficacious does lead to academic success; universities should therefore employ methods to improve the self-efficacy of their students.

Keywords: Academic achievement, grandiose narcissism, self-efficacy, self-esteem, vulnerable narcissism.

85 A Computational Stochastic Modeling Formalism for Biological Networks

Authors: Werner Sandmann, Verena Wolf

Abstract:

Stochastic models of biological networks are well established in systems biology, where their computational treatment often focuses on solving the so-called chemical master equation via stochastic simulation algorithms. In contrast, the development of storage-efficient model representations directly suitable for computer implementation has received significantly less attention. Instead, a model is usually described in terms of a stochastic process or a "higher-level paradigm" with a graphical representation, such as a stochastic Petri net. A serious problem then arises from the exponential growth of the model's state space, which is in fact a main reason for the popularity of stochastic simulation, since simulation suffers less from state space explosion than non-simulative numerical solution techniques. In this paper we present transition class models for the representation of biological network models: a compact mathematical formalism that circumvents state space explosion. Transition class models can also serve as an interface between different higher-level modeling paradigms, stochastic processes, and the implementation coded in a programming language. Moreover, the compact model representation makes it possible to apply non-simulative solution techniques while preserving the option of stochastic simulation. Illustrative examples of transition class representations are given for an enzyme-catalyzed substrate conversion and a part of the bacteriophage λ lysis/lysogeny pathway.
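To illustrate the idea that a transition class pairs a propensity function with a state-change vector, and that such classes suffice to drive a stochastic simulation, here is a minimal Gillespie-style simulator for the enzyme-catalyzed substrate conversion E + S ⇌ ES → E + P. The rate constants are illustrative assumptions, not values from the paper:

```python
import random

# Each transition class pairs a propensity function with a state-change vector.
# State is (E, S, ES, P); rate constants below are purely illustrative.
classes = [
    (lambda s: 0.01 * s[0] * s[1], (-1, -1, +1, 0)),   # binding:    E + S -> ES
    (lambda s: 0.1  * s[2],        (+1, +1, -1, 0)),   # unbinding:  ES -> E + S
    (lambda s: 0.1  * s[2],        (+1, 0, -1, +1)),   # conversion: ES -> E + P
]

def gillespie(state, t_end, rng=random.Random(0)):
    """Stochastic simulation algorithm driven directly by the transition classes."""
    t, state = 0.0, list(state)
    while t < t_end:
        props = [f(state) for f, _ in classes]
        total = sum(props)
        if total == 0:                    # absorbing state: nothing can fire
            break
        t += rng.expovariate(total)       # waiting time to the next event
        r, acc = rng.uniform(0, total), 0.0
        for p, (_, change) in zip(props, classes):   # pick a class by propensity
            acc += p
            if r <= acc:
                state = [x + d for x, d in zip(state, change)]
                break
    return state
```

Note that the conservation relations (E + ES constant, S + ES + P constant) hold automatically because they are encoded in the state-change vectors, without ever enumerating the state space.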

Keywords: Computational Modeling, Biological Networks, Stochastic Models, Markov Chains, Transition Class Models.

84 A Community Compromised Approach to Combinatorial Coalition Problem

Authors: Laor Boongasame, Veera Boonjing, Ho-fung Leung

Abstract:

A buyer coalition with a combination of items is a group of buyers joining together to purchase a combination of items at a larger discount. The primary aim of existing research on such coalitions is to generate a large total discount. However, this aim is hard to achieve because the research assumes that each buyer completely knows the other buyers' information, or that at least one buyer learns the others' information through exchange. These assumptions contrast with the real-world environment, where buyers join a coalition with incomplete information, i.e., they are concerned only with their own expected discounts. This paper therefore proposes a new buyer community coalition formation scheme for combinations of items, called the Community Compromised Combinatorial Coalition scheme, for such an environment of incomplete information. To generate a larger total discount, after the buyers who want to join a coalition propose their minimum required savings, a coalition structure that maximizes the total retail price is formed. The total discount of the coalition is then divided among the buyers according to their minimum required savings, and the division is Pareto optimal. In a mathematical analysis, we compare the concepts of this scheme with those of an existing buyer coalition scheme; the results show that the total discount of the coalition under this scheme is larger than under the existing scheme.

Keywords: Group decision and negotiations, group buying, game theory, combinatorial coalition formation, Pareto optimality.

83 An Adaptive Memetic Algorithm With Dynamic Population Management for Designing HIV Multidrug Therapies

Authors: Hassan Zarei, Ali Vahidian Kamyad, Sohrab Effati

Abstract:

In this paper, a mathematical model of the human immunodeficiency virus (HIV) is utilized and an optimization problem is proposed, with the final goal of implementing an optimal 900-day structured treatment interruption (STI) protocol. Two types of drugs commonly used in highly active antiretroviral therapy (HAART) are considered: reverse transcriptase inhibitors (RTI) and protease inhibitors (PI). To solve the proposed optimization problem, an adaptive memetic algorithm with population management (AMAPM) is proposed. The AMAPM uses a distance measure to control the diversity of the population in genotype space, thus preventing stagnation and premature convergence, and a diversity parameter in phenotype space to dynamically set the population size and the number of crossovers during the search. Three crossover operators diversify the population simultaneously, and the progress of each operator determines how many times it is applied per generation. To escape local optima and introduce new search directions toward the global optimum, two local searchers assist the evolutionary process. In contrast to traditional memetic algorithms, the activation of these local searchers is not random but depends on the diversity parameters in both genotype space and phenotype space. The capability of the AMAPM to find optimal solutions is demonstrated in comparison with three popular metaheuristics.

Keywords: HIV therapy design, memetic algorithms, adaptive algorithms, nonlinear integer programming.

82 The Effect of Foreign Owned Firms and Licensed Manufacturing Agreements on Innovation: Case of Pharmaceutical Firms in Developing Countries

Authors: Ilham Benali, Nasser Hajji, Nawfal Acha

Abstract:

Since the pharmaceutical industry is a commonly studied sector in the context of innovation, the majority of innovation research is devoted to developed markets, which are characterized by high research and development (R&D) assets and intensive innovation. In contrast, in developing countries, where R&D assets are very low, there is relatively little research on pharmaceutical sector innovation, which is characterized mainly by two elements: the presence of foreign-owned firms and licensed manufacturing agreements between local firms and multinationals. Given the scarcity of research in this field, this paper studies the effect of these two elements on firms' propensity to innovate. Other traditional factors that influence innovation identified in the literature review, namely the age and size of the firm, R&D activities, and market structure, are included to make the study more exhaustive. The study first examines the innovation tendencies of pharmaceutical firms located in developing countries and then analyzes the effect of foreign-owned firms and licensed manufacturing agreements on technological, organizational, and marketing innovation. Based on the related work and the theoretical framework developed, foreign-owned firms and licensed manufacturing agreements are likely to have a negative influence on technological innovation, while the opposite effect is possible for organizational and marketing innovation.

Keywords: Developing countries, foreign owned firms, innovation, licensed manufacturing agreements, pharmaceutical industry.

81 A New High Speed Neural Model for Fast Character Recognition Using Cross Correlation and Matrix Decomposition

Authors: Hazem M. El-Bakry

Abstract:

Neural processors have shown good results for detecting a given character in an input matrix. In this paper, a new idea to speed up the operation of neural processors for character detection is presented. Such processors are designed based on cross correlation in the frequency domain between the input matrix and the weights of the neural networks. This approach is developed to reduce the computation steps required by these fast neural networks for the searching process. The divide-and-conquer principle is applied through image decomposition: each image is divided into small sub-images, and each sub-image is tested separately by a single fast neural processor. Furthermore, faster character detection is obtained by using parallel processing to test the resulting sub-images at the same time with the same number of fast neural networks. In contrast to using fast neural processors alone, the speed-up ratio increases with the size of the input image when fast neural processors are combined with image decomposition. Moreover, the problem of local sub-image normalization in the frequency domain is solved, and the effect of image normalization on the speed-up ratio of character detection is discussed. Simulation results show that local sub-image normalization through weight normalization is faster than sub-image normalization in the spatial domain, and the overall speed-up ratio of the detection process increases further when the weight normalization is done offline.
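The frequency-domain cross correlation at the heart of this approach can be sketched with NumPy. This illustrates the general technique (correlation computed as a product of FFTs, with the peak marking the best-matching position), not the authors' neural processor:

```python
import numpy as np

def xcorr_freq(image, kernel):
    """Cross-correlate image with kernel via the frequency domain:
    corr = IFFT( FFT(image) * conj(FFT(kernel)) ), kernel zero-padded
    to the image size. Equivalent to circular spatial correlation."""
    H, W = image.shape
    F = np.fft.fft2(image)
    K = np.fft.fft2(kernel, s=(H, W))     # zero-pad kernel to image size
    return np.real(np.fft.ifft2(F * np.conj(K)))

def detect(image, template):
    """Return the (row, col) position where the template matches best."""
    corr = xcorr_freq(image, template)
    return np.unravel_index(np.argmax(corr), corr.shape)
```

A single FFT of the full image replaces a sliding-window dot product at every position, which is where the speed-up over spatial-domain search comes from.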

Keywords: Fast Character Detection, Neural Processors, Cross Correlation, Image Normalization, Parallel Processing.

80 Performance Analysis of Reconstruction Algorithms in Diffuse Optical Tomography

Authors: K. Uma Maheswari, S. Sathiyamoorthy, G. Lakshmi

Abstract:

Diffuse Optical Tomography (DOT) is a non-invasive imaging modality used in clinical diagnosis for the early detection of carcinoma cells in brain tissue. It is a form of optical tomography that produces a reconstructed image of human soft tissue using near-infrared light. It comprises two steps, a forward model and an inverse model: the forward model describes light propagation in a biological medium, and the inverse model uses the scattered light to recover the optical parameters of the tissue. DOT suffers from severe ill-posedness due to its incomplete measurement data, so accurate analysis of this modality is very complicated. To overcome this problem, optical properties of the soft tissue such as the absorption coefficient, the scattering coefficient, and the optical flux are processed with the standard Levenberg-Marquardt regularization technique. The Split Bregman and Gradient Projection for Sparse Reconstruction (GPSR) algorithms are used to reconstruct the image of the soft tissue for tumour detection; of the two, the Split Bregman method performs better. The signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), relative error (RE), and CPU time of the reconstructions are analyzed to compare performance.

Keywords: Diffuse optical tomography, ill-posedness, Levenberg-Marquardt method, Split Bregman, gradient projection for sparse reconstruction.

79 Packet Forwarding with Multiprotocol Label Switching

Authors: R. N. Pise, S. A. Kulkarni, R. V. Pawar

Abstract:

Multiprotocol Label Switching (MPLS) is an emerging technology that aims to address many of the existing issues associated with packet forwarding in today's internetworking environment. It provides a method of forwarding packets at high speed by combining the speed and performance of Layer 2 with the scalability and IP intelligence of Layer 3. In a traditional IP (Internet Protocol) routing network, a router analyzes the destination IP address contained in the packet header and independently determines the next hop for the packet using that address and the interior gateway protocol; this process is repeated at each hop to deliver the packet to its final destination. In contrast, in the MPLS forwarding paradigm, routers on the edge of the network (label edge routers) attach labels to packets based on their Forwarding Equivalence Class (FEC). Packets are then forwarded through the MPLS domain, based on their associated FECs, with the labels swapped by routers in the core of the network, called label switch routers. Simply swapping the label, instead of looking up the packet's IP header in the routing table at each hop, is a more efficient way of forwarding packets, which in turn allows traffic to be forwarded at tremendous speed and gives granular control over the path taken by a packet. This paper deals with the MPLS forwarding mechanism, the implementation of the MPLS datapath, and test results comparing the performance of MPLS and IP routing. The discussion focuses primarily on MPLS IP packet networks, by far the most common application of MPLS today.
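The two label operations described above, attaching a label at the ingress edge and swapping it at each core hop, can be sketched with ordinary dictionaries. The FEC prefix, label values, and router names below are invented for illustration and do not come from the paper:

```python
# Ingress label edge router: map a packet's FEC to its initial label.
FEC_TO_LABEL = {"10.1.0.0/16": 17}

# Core label forwarding table (LFIB): incoming label -> (outgoing label, next hop).
# An outgoing label of None means the label is popped at the egress edge.
CORE_LFIB = {
    17: (22, "LSR-B"),
    22: (None, "LER-egress"),
}

def ingress(fec):
    """Label edge router: attach the label chosen by the packet's FEC."""
    return FEC_TO_LABEL[fec]

def forward(label):
    """Label switch router: swap the incoming label, return (new label, next hop)."""
    return CORE_LFIB[label]
```

The point of the sketch is that each core hop is a single exact-match table lookup on a short label, rather than a longest-prefix match on the full destination IP address.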

Keywords: Forwarding equivalence class, incoming label map, label, next hop label forwarding entry.

78 A Study on Algorithm Fusion for Recognition and Tracking of Moving Robot

Authors: Jungho Choi, Youngwan Cho

Abstract:

This paper presents an algorithm for the recognition and tracking of moving objects; a 1/10-scale model car is used to verify its performance. The algorithm merges SURF with the Lucas-Kanade method. SURF is robust to changes in contrast, size, and rotation and can recognize objects, but it is slow due to its computational complexity. The Lucas-Kanade method is fast but cannot recognize objects; its optical flow compares the previous and current frames so that the movement of a pixel can be tracked. A Kalman filter is used to complement the problems that occur when fusing the two algorithms: it estimates the next location of the object and compensates for the accumulated error. The resolution of the camera (vision sensor) is fixed at 640x480. To verify the performance of the fusion algorithm, it is compared to SURF alone in three situations: driving straight, driving along a curve, and recognizing cars behind obstacles. The model vehicle makes it possible to test situations similar to actual driving. The proposed fusion algorithm showed superior performance and accuracy compared to existing object recognition and tracking algorithms. We will further improve the performance of the algorithm so that it can be tested on images of actual road environments.
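The role the Kalman filter plays in the fusion, predicting the object's next position and correcting it with the (noisy) position reported by recognition, can be sketched in one dimension with a constant-velocity model. The noise parameters are illustrative assumptions, not the paper's tuning:

```python
class Kalman1D:
    """Minimal 1-D constant-velocity Kalman filter (position + velocity)."""

    def __init__(self, pos=0.0, vel=0.0, q=0.01, r=1.0):
        self.x = [pos, vel]                     # state estimate
        self.P = [[1.0, 0.0], [0.0, 1.0]]       # state covariance
        self.q, self.r = q, r                   # process / measurement noise

    def step(self, z, dt=1.0):
        # Predict with the constant-velocity model x' = x + dt * v.
        x0 = self.x[0] + dt * self.x[1]
        x1 = self.x[1]
        P = self.P
        P00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + self.q
        P01 = P[0][1] + dt * P[1][1]
        P10 = P[1][0] + dt * P[1][1]
        P11 = P[1][1] + self.q
        # Correct with the measured position z (measurement matrix H = [1, 0]).
        K0 = P00 / (P00 + self.r)               # Kalman gain
        K1 = P10 / (P00 + self.r)
        self.x = [x0 + K0 * (z - x0), x1 + K1 * (z - x0)]
        self.P = [[(1 - K0) * P00, (1 - K0) * P01],
                  [P10 - K1 * P00, P11 - K1 * P01]]
        return self.x[0]
```

Feeding the filter the positions of an object moving at constant speed, the velocity estimate converges toward the true speed, so the predicted position bridges frames where recognition fails.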

Keywords: SURF, Lucas-Kanade optical flow, Kalman filter, object recognition, object tracking.

77 Color Characteristics of Dried Cocoa Using Shallow Box Fermentation Technique

Authors: Khairul Bariah Sulaiman, Tajul Aris Yang

Abstract:

Fermentation is well known as an essential process for developing chocolate flavor in dried cocoa beans. Besides developing the precursors of cocoa flavor, it also induces color changes in the beans. The fermentation process is influenced by various factors, such as the planting material, the preconditioning of the cocoa pods, and the fermentation technique. This study was therefore conducted to evaluate the color of Malaysian cocoa beans and how the duration of pod storage and fermentation in a shallow box affect its characteristics. Two factors were studied: the duration of cocoa pod storage (0, 2, 4, and 6 days) and the duration of fermentation (0, 1, 2, 3, 4, and 5 days). The experiment was arranged as a 4 × 6 factorial with 24 treatments in a Completely Randomised Design (CRD). The produced beans were inspected for color changes under artificial light during the cut test and divided into four color groups: fully brown, purple brown, fully purple, and slaty. The cut tests indicated that cocoa beans dried directly without undergoing fermentation had the highest percentage of slaty beans, while storing the pods before fermentation decreased this percentage. In contrast, fully brown beans began to dominate after two days of fermentation, especially in the four- and six-day pod storage batches. Almost all batches had less than 20% fully purple beans, while purple brown beans were scattered across the batches without any clear trend. Statistical analysis using a General Linear Model showed that pod storage had a significant effect on the color characteristics of the Malaysian dried beans compared to the fermentation duration.

Keywords: Cocoa beans, color, fermentation, shallow box.

76 Conditions of the Anaerobic Digestion of Biomass

Authors: N. Boontian

Abstract:

The biological conversion of biomass to methane has received increasing attention in recent years, and grasses have been explored for their potential for anaerobic digestion to methane. In this review, extensive literature data have been tabulated and classified, and the influences of several parameters on the methane potential of these feedstocks are presented. Lignocellulosic biomass represents a largely unused source for biogas and ethanol production. Many factors, including the lignin content, the crystallinity of cellulose, and the particle size, limit the digestibility of the hemicellulose and cellulose present in lignocellulosic biomass. Pretreatments have been used to improve its digestibility, and each pretreatment has its own effects on cellulose, hemicellulose, and lignin, the three main components of lignocellulosic biomass. Solid-state anaerobic digestion (SS-AD) generally operates at solid concentrations higher than 15%, whereas liquid anaerobic digestion (AD) handles feedstocks with solid concentrations between 0.5% and 15%. Animal manure, sewage sludge, and food waste are generally treated by liquid AD, while the organic fraction of municipal solid waste (OFMSW) and lignocellulosic biomass such as crop residues and energy crops can be processed through SS-AD. An increase in operating temperature can improve both the biogas yield and the production efficiency; other practices, such as using AD digestate or leachate as an inoculant or decreasing the solid content, may increase the biogas yield but have a negative impact on production efficiency. The focus is placed on substrate pretreatment in AD as a means of increasing biogas yields from today's diversified substrate sources.

Keywords: Anaerobic digestion, Lignocellulosic biomass, Methane production, Optimization, Pretreatment.

75 Moderation in Temperature Dependence on Counter Frictional Coefficient and Prevention of Wear of C/C Composites by Synthesizing SiC around Surface and Internal Vacancies

Authors: Noboru Wakamoto, Kiyotaka Obunai, Kazuya Okubo, Toru Fujii

Abstract:

The aim of this study is to moderate the temperature dependence of the frictional coefficient between counter surfaces and to reduce the wear of C/C composites at low temperature. To modify the C/C composites, silica (SiO2) powder was added to the phenolic resin used as the carbon precursor. The preform plate of the C/C composite precursor was prepared by a conventional filament winding method. The C/C composite plates were obtained by carbonizing the preform plate at 2200 °C under an argon atmosphere. During carbonization, silicon carbide (SiC) was synthesized around the surfaces and internal vacancies of the C/C composites. The frictional coefficient on the counter surfaces and the specific wear volumes of the C/C composites were measured with a pin-on-disk type friction test machine developed for this study. XRD indicated that SiC was synthesized in the body of the C/C composite fabricated by the current method. The friction tests showed that the coefficient of friction of the unmodified C/C composites depended on temperature as the test condition was changed. In contrast, the frictional coefficient of the C/C composite modified with SiO2 powder remained almost constant at about 0.27 as the temperature was changed from room temperature (RT) to 300 °C. The specific wear rate decreased from 25×10⁻⁶ mm²/N to 0.1×10⁻⁶ mm²/N. Observation of the surfaces after the friction tests showed that the frictional surface of the modified C/C composites was covered with a film produced by the friction. This study found that synthesizing SiC around the surfaces and internal vacancies of C/C composites is effective in moderating the temperature dependence of the frictional coefficient and reducing the wear of C/C composites.

Keywords: C/C composites, frictional coefficient, SiC, wear.

74 Medical Image Segmentation Based On Vigorous Smoothing and Edge Detection Ideology

Authors: Jagadish H. Pujar, Pallavi S. Gurjal, Shambhavi D. S, Kiran S. Kunnur

Abstract:

Medical image segmentation based on image smoothing followed by edge detection is of great importance in the field of image processing. This paper proposes a novel algorithm for medical image segmentation based on vigorous smoothing, achieved by identifying the type of noise, followed by edge detection, an approach that shows promise for medical image diagnosis. The main objective of the algorithm is to take a particular medical image as input, preprocess it to remove the noise content by employing a suitable filter after identifying the type of noise, and finally carry out edge detection for image segmentation. The algorithm consists of three parts. First, the type of noise present in the medical image is identified as additive, multiplicative or impulsive by analysis of local histograms, and the image is denoised by employing a Median, Gaussian or Frost filter accordingly. Second, edge detection of the filtered medical image is carried out using the Canny edge detection technique. Third, the edge-detected medical image is segmented by the method of Normalized Cut eigenvectors. The method is validated through experiments on real images; the proposed algorithm has been simulated on the MATLAB platform. The simulation results show that the proposed algorithm is very effective and can deal with low-quality or marginally vague images that have high spatial redundancy, low contrast and substantial noise, and that it has potential for practical use in medical image diagnosis.
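A minimal sketch of the filter-then-detect stages is given below, using a 3×3 median filter for impulsive noise and a Sobel gradient magnitude as a simple stand-in for Canny; the noise-type identification, Frost filtering and Normalized Cut steps of the paper are not reproduced here.

```python
import numpy as np

def _convolve3(img, k):
    """Same-size 3x3 convolution with edge padding."""
    p = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(3):
        for j in range(3):
            out += k[i, j] * p[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def median_filter3(img):
    """3x3 median filter, suited to impulsive (salt-and-pepper) noise."""
    p = np.pad(img, 1, mode="edge")
    win = np.stack([p[i:i + img.shape[0], j:j + img.shape[1]]
                    for i in range(3) for j in range(3)])
    return np.median(win, axis=0)

def sobel_edges(img, thresh=0.5):
    """Binary edge map from the Sobel gradient magnitude."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    mag = np.hypot(_convolve3(img, kx), _convolve3(img, kx.T))
    return mag > thresh * mag.max()
```

In a pipeline of this shape, `sobel_edges(median_filter3(img))` would denoise an impulsive-noise image before extracting the edge map that a segmentation stage consumes.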

Keywords: Image Segmentation, Image smoothing, Edge Detection, Impulsive noise, Gaussian noise, Median filter, Canny edge, Eigenvalues, Eigenvectors.

73 MinRoot and CMesh: Interconnection Architectures for Network-on-Chip Systems

Authors: Mohammad Ali Jabraeil Jamali, Ahmad Khademzadeh

Abstract:

The success of an electronic system in a System-on-Chip is highly dependent on the efficiency of its interconnection network, which is constructed from routers and channels (the routers move data across the channels between nodes). Since neither classical bus-based nor point-to-point architectures can provide scalable solutions and satisfy the tight power and performance requirements of future applications, the Network-on-Chip (NoC) approach has recently been proposed as a promising solution. Indeed, in contrast to the traditional solutions, the NoC approach can provide large bandwidth with moderate area overhead. The selected topology of the component interconnects plays a prime role in the performance of a NoC architecture, as do the routing and switching techniques that can be used. In this paper, we present two generic NoC architectures that can be customized to the specific communication needs of an application in order to reduce the area with minimal degradation of the latency of the system. An experimental study is performed to compare these structures with basic NoC topologies represented by the 2D mesh, Butterfly-Fat Tree (BFT) and SPIN. It is shown that the Cluster mesh (CMesh) and MinRoot schemes achieve significant improvements in network latency and energy consumption with only negligible area overhead and complexity over existing architectures. Compared with the basic NoC topologies, CMesh and MinRoot also provide substantial savings in area, because they require fewer routers. The simulation results show that CMesh and MinRoot networks outperform MESH, BFT and SPIN in the main performance metrics.

Keywords: MinRoot, CMesh, NoC, Topology, Performance Evaluation

72 Bridging the Mental Gap between Convolution Approach and Compartmental Modeling in Functional Imaging: Typical Embedding of an Open Two-Compartment Model into the Systems Theory Approach of Indicator Dilution Theory

Authors: Gesine Hellwig

Abstract:

Functional imaging procedures for the non-invasive assessment of tissue microcirculation are in high demand, but require a mathematical approach describing the trans- and intercapillary passage of tracer particles. Up to now, two theoretical and, at first sight, different concepts have been established for tracer kinetic modeling of contrast agent transport in tissues: pharmacokinetic compartment models, which are usually written as coupled differential equations, and the indicator dilution theory, which can be generalized in accordance with the theory of linear time-invariant (LTI) systems by using a convolution approach. Based on mathematical considerations, it can be shown that, also in the case of an open two-compartment model well known from functional imaging, the concentration-time course in tissue is given by a convolution, which allows a separation of the arterial input function from a system function (the impulse response function) summarizing the available information on tissue microcirculation. For this reason, it is possible to integrate the open two-compartment model into the system-theoretic concept of indicator dilution theory (IDT), and thus results known from IDT remain valid for the compartment approach. Given the large number of applications of compartmental analysis, similar solutions of the so-called forward problem, even in a more general context, can already be found in the extensive literature of the seventies and early eighties. Nevertheless, to this day, within the field of biomedical imaging (though not from the mathematical point of view) there seems to be a gulf between the two approaches, which the author would like to bridge through an exemplary analysis of this well-known model.
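The equivalence can be made concrete in a few lines: for an open two-compartment model, the tissue curve is the arterial input function convolved with an exponential impulse response. The parameter names and values below are illustrative, not taken from the paper.

```python
import numpy as np

def tissue_curve(aif, dt, ktrans=0.25, kep=0.5):
    """Tissue concentration C(t) = (AIF * h)(t) for the impulse
    response h(t) = Ktrans * exp(-kep * t) of an open two-compartment
    model (illustrative per-minute rate constants)."""
    t = np.arange(len(aif)) * dt
    h = ktrans * np.exp(-kep * t)           # impulse response function
    return np.convolve(aif, h)[:len(aif)] * dt
```

Feeding a discrete unit impulse as the AIF returns h itself, which is the defining property of an LTI system and the reason the compartmental forward problem fits the IDT convolution framework.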

Keywords: Functional imaging, Tracer kinetic modeling, LTI system, Indicator dilution theory / convolution approach, Two-compartment model.

71 Spatial Integration at the Room-Level of 'Sequina' Slum Area in Alexandria, Egypt

Authors: Ali Essam El Shazly

Abstract:

The social logic of the 'Sequina' slum area in Alexandria is detailed through the integral measure of space syntax at the room level of twenty building samples. The essence of the spatial structure integrates the central 'visitor' domain with the 'living' frontage of the 'children' zone, against the segregated privacy of the opposite 'parent' depth. Meanwhile, the multifunctioning of shallow rooms optimizes the integral 'visitor' structure through the graph and visibility dimensions, in contrast to the 'inhabitant' structure of graph-tails out of sight. A common theme is that layout integrity increases in compensation for the decrease of room visibility. Despite the 'pheno-type' of collective integration, the individual layouts observe a 'geno-type' structure of spatial diversity across room adjoins. In this regard, the layout integrity alternates the cross-correlation of the 'kitchen and living' rooms with the 'inhabitant and visitor' domains of the dynamic 'motherhood' structure. Moreover, an added 'grandparent' room restructures the integral measure to become the deepest space, yet opens to the 'living' room of the 'household' integrity. Some isomorphic layouts change the integral structure merely through the 'balcony' extension of access, with visual or ignored 'ringiness' of the space syntax. However, the most integrated or segregated layouts invert the 'geno-type' into a shallow 'inhabitant' centrality versus a remote 'visitor' structure. Overall, the multivariate social logic of spatial integrity could never be clarified without micro-data analysis.
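The room-level integration measure used in space syntax can be sketched from an access graph: the mean depth of all rooms from a given room yields its relative asymmetry (RA), with low RA meaning high integration. This is a minimal illustration of the graph dimension only, not the paper's full visibility analysis.

```python
from collections import deque

def relative_asymmetry(adj, root):
    """Space-syntax relative asymmetry of one room in an access graph.
    adj maps each room to its directly connected rooms; lower RA
    means a more integrated room."""
    depth = {root: 0}
    queue = deque([root])
    while queue:                              # breadth-first depths
        u = queue.popleft()
        for v in adj[u]:
            if v not in depth:
                depth[v] = depth[u] + 1
                queue.append(v)
    k = len(depth)
    mean_depth = sum(depth.values()) / (k - 1)
    return 2 * (mean_depth - 1) / (k - 2)     # normalised to 0..1
```

In a star-shaped layout, for example, the central hall has RA = 0 (maximally integrated) while each peripheral room has RA = 0.5, mirroring the 'visitor' centrality versus 'parent' depth contrast described above.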

Keywords: Alexandria, Sequina slum, spatial integration, space syntax.

70 Scatterer Density in Edge and Coherence Enhancing Nonlinear Anisotropic Diffusion for Medical Ultrasound Speckle Reduction

Authors: Ahmed Badawi, J. Michael Johnson, Mohamed Mahfouz

Abstract:

This paper proposes new enhancements to nonlinear anisotropic diffusion methods to greatly reduce speckle while preserving image features in medical ultrasound images. By incorporating a local physical characteristic of the image, in this case scatterer density, in addition to the gradient, into existing tensor-based image diffusion methods, we were able to greatly improve the performance of the existing filtering methods, namely edge-enhancing (EE) and coherence-enhancing (CE) diffusion. The new enhancement methods were tested using various ultrasound images, including phantom and some clinical images, to determine the amount of speckle reduction and of edge and coherence enhancement. Scatterer-density-weighted nonlinear anisotropic diffusion (SDWNAD) for ultrasound images consistently outperformed its traditional tensor-based counterparts that use the gradient alone to weight the diffusivity function. SDWNAD is shown to greatly reduce speckle noise while preserving image features such as edges, orientation coherence, and scatterer density. SDWNAD's superior performance over nonlinear coherent diffusion (NCD), speckle reducing anisotropic diffusion (SRAD), adaptive weighted median filtering (AWMF), wavelet shrinkage (WS), and wavelet shrinkage with contrast enhancement (WSCE) makes it an ideal preprocessing step for automatic segmentation in ultrasound imaging.
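For reference, the classical gradient-only diffusion that SDWNAD extends can be sketched as a Perona-Malik iteration; the scatterer-density weighting and tensor formulation of the paper are not reproduced in this sketch.

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.2, lam=0.2):
    """Gradient-weighted nonlinear diffusion: smooths homogeneous
    (speckled) regions while the edge-stopping function g() slows
    diffusion across strong gradients."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)   # diffusivity weight
    for _ in range(n_iter):
        dn = np.roll(u, -1, axis=0) - u       # neighbour differences
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u = u + lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

SDWNAD's contribution is essentially to make the diffusivity depend on a local scatterer-density estimate as well as the gradient differences used here.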

Keywords: Nonlinear anisotropic diffusion, ultrasound imaging, speckle reduction, scatterer density estimation, edge based enhancement, coherence enhancement.

69 Inter-Specific Differences in Leaf Phenology, Growth of Seedlings of Cork OAK (Quercus suber L.), Zeen Oak (Quercus canariensis Willd.) and Their Hybrid Afares Oak (Quercus afares Pomel) in the Nursery

Authors: S. Mhamdi, O. Brendel, P. Montpied, K. Ben Yahia, N. Saouyah, B. Hasnaoui, E. Dreyer

Abstract:

Leaf life span (LLS) is used to classify trees into two main groups: evergreen and deciduous species. It varies between taxonomic groups according to life form. Co-occurrence of deciduous and evergreen oaks is common in some Mediterranean-climate areas. Nevertheless, in Tunisian forests there is not enough information about the functional inter-specific diversity among oak species, especially in mixed stands marked by the simultaneous presence of Q. suber L., Q. canariensis Willd. and their hybrid (Q. afares), the latter being an endemic oak species threatened with extinction. This study was conducted to estimate the LLS, the relative growth rate, and the number of growth flushes of saplings under semi-controlled conditions. The study lasted 17 months, with an observation interval of 4 weeks. The aim was to characterize the hybrid species and compare it to the parental ones. Differences were observed among species, both for phenology and for growth. Indeed, Q. suber saplings reached a greater total height and number of growth flushes than Q. canariensis, while Q. afares showed far fewer growth flushes than the parental species. The LLS of the parental species exceeded the duration of the experiment, but the hybrid lost all leaves in all cohorts. The short LLS of the hybrid species is in accordance with its phenology in the field, but for Q. canariensis there was a contrast with field observations, where phenology is strictly annual. This study allowed us to differentiate the hybrid from both parental species.

Keywords: Leaf life span, growth, hybrid, evergreen, deciduous, seedlings, Q. afares Pomel, Q. suber L., Q. canariensis Willd.

68 Pre and Post IFRS Loss Avoidance in France and the United Kingdom

Authors: T. Miková

Abstract:

This paper analyzes the effect of a single uniform accounting rule on reporting quality by investigating the influence of IFRS on earnings management. This paper examines whether earnings management is reduced after IFRS adoption through the use of “loss avoidance thresholds”, a method that has been verified in earlier studies. This paper concentrates on two European countries: one that represents the continental code law tradition with weak protection of investors (France) and one that represents the Anglo-American common law tradition, which typically implies a strong enforcement system (the United Kingdom).

The research investigates a sample of 526 companies (6,822 firm-year observations) during the years 2000–2013. The results differ between the two jurisdictions. This study demonstrates that a single set of accounting standards contributes to better reporting quality and reduces the pervasiveness of earnings management in France. In contrast, there is no evidence that a reduction in earnings management followed the implementation of IFRS in the United Kingdom. Given that IFRS benefited France but not the United Kingdom, other political and economic factors, such as the legal system or capital market strength, must play a significant role in shaping the comparability and transparency of companies' financial statements across borders. Overall, the results suggest that IFRS contribute moderately to the accounting quality of reported financial statements and benefit stakeholders, though the role played by other economic factors cannot be discounted.
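The loss-avoidance threshold test can be illustrated with a toy version of the metric: compare the count of firm-years just above zero scaled earnings with the count just below. The bin width and the simple ratio form below are illustrative simplifications of the standardized-difference statistics used in this literature.

```python
import numpy as np

def loss_avoidance_ratio(scaled_earnings, width=0.005):
    """Ratio of small-profit to small-loss firm-years around zero.
    Absent earnings management the two bins should be roughly equal;
    a ratio well above 1 is the classic loss-avoidance signature."""
    e = np.asarray(scaled_earnings, dtype=float)
    small_profit = int(np.sum((e >= 0) & (e < width)))
    small_loss = int(np.sum((e < 0) & (e >= -width)))
    return small_profit / max(small_loss, 1)
```

A declining ratio after IFRS adoption, as reported for the French sample, would indicate fewer small losses being managed into small profits.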

Keywords: Accounting Standards, Earnings Management, International Financial Reporting Standards, Loss Avoidance, Reporting Quality.

67 Occurrence of Foreign Matter in Food: Applied Identification Method - Association of Official Agricultural Chemists (AOAC) and Food and Drug Administration (FDA)

Authors: E. C. Mattos, V. S. M. G. Daros, R. Dal Col, A. L. Nascimento

Abstract:

The aim of this study is to present the results of a retrospective survey of the foreign matter found in foods analyzed at the Adolfo Lutz Institute from July 2001 to July 2015. All analyses were conducted according to the official methods described by the Association of Official Agricultural Chemists (AOAC) for the microanalytical procedures and by the Food and Drug Administration (FDA) for the macroanalytical procedures. The results showed that flours, cereals and derivatives such as baking and pasta products were the types of food in which foreign matter was found most frequently, followed by condiments and teas. Fragments of stored-grain insects, their larvae, nets, excrement, dead mites and rodent excrement were the foreign matter most often found in food. Foreign matter that can pose a physical risk to the consumer's health, such as metal, stones, glass and wood, was also found, but rarely. Miscellaneous items (shell, sand, dirt and seeds) were also reported. Many extraneous materials are considered unavoidable, since they are inherent to the product itself, such as insect fragments in grains. In contrast, avoidable extraneous materials are less tolerated because they are preventable with Good Manufacturing Practice. The conclusion of this work is that, although most extraneous materials found in food are considered unavoidable, it is necessary to maintain Good Manufacturing Practice throughout food processing, together with constant surveillance of the production process, in order to avoid accidents that may lead to the occurrence of these extraneous materials in food.

Keywords: Food contamination, extraneous materials, foreign matter, surveillance.

66 Methodology for Quantifying the Meaning of Information in Biological Systems

Authors: Richard L. Summers

Abstract:

The advanced computational analysis of biological systems is becoming increasingly dependent upon an understanding of the information-theoretic structure of the materials, energy and interactive processes that comprise those systems. The stability and survival of these living systems are fundamentally contingent upon their ability to acquire and process the meaning of information concerning the physical state of their biological continuum (biocontinuum). The drive for adaptive system reconciliation of a divergence from steady state within this biocontinuum can be described by an information metric-based formulation of the process for actionable knowledge acquisition that incorporates the axiomatic inference of Kullback-Leibler information minimization driven by survival replicator dynamics. If the mathematical expression of this process is the Lagrangian integrand for any change within the biocontinuum, then it can also be considered an action functional for the living system. In the direct method of Lyapunov, such a summarizing mathematical formulation of global system behavior, based on the driving forces of energy currents and constraints within the system, can serve as a platform for the analysis of stability. As the system evolves in time in response to biocontinuum perturbations, the summarizing function then conveys information about its overall stability. This stability information portends survival and therefore has absolute existential meaning for the living system. The first derivative of the Lyapunov energy information function will have a negative trajectory toward a system steady state if the driving force is dissipating. By contrast, system instability leading to system dissolution will have a positive trajectory. The direction and magnitude of the vector for the trajectory then serve as a quantifiable signature of the meaning associated with the living system's stability information, homeostasis and survival potential.
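The role of the divergence as a Lyapunov-like function can be illustrated with a toy relaxation: mixing a state distribution toward the steady state strictly decreases D(p||q) at every step. This is a schematic stand-in for the replicator-dynamics formulation, not the paper's model.

```python
import numpy as np

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) for discrete distributions."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(p * np.log(p / q)))

def relax_to_steady_state(p, q, eta=0.3, steps=10):
    """Drive state distribution p toward steady state q; the KL
    divergence plays the Lyapunov role and decreases monotonically."""
    trajectory = [kl_divergence(p, q)]
    for _ in range(steps):
        p = (1 - eta) * p + eta * q          # convex mixing step
        trajectory.append(kl_divergence(p, q))
    return p, trajectory
```

The strictly decreasing trajectory of the divergence is the quantifiable "stability signature" the abstract describes: a dissipating driving force pulls the summarizing function toward zero.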

Keywords: Semiotic meaning, Shannon information, Lyapunov, living systems.

65 Fractal Dimension of Breast Cancer Cell Migration in a Wound Healing Assay

Authors: R. Sullivan, T. Holden, G. Tremberger, Jr, E. Cheung, C. Branch, J. Burrero, G. Surpris, S. Quintana, A. Rameau, N. Gadura, H. Yao, R. Subramaniam, P. Schneider, S. A. Rotenberg, P. Marchese, A. Flamhlolz, D. Lieberman, T. Cheung

Abstract:

Migration in a breast cancer cell wound healing assay was studied using image fractal dimension analysis. The migration of MDA-MB-231 cells (highly motile) in a wound healing assay was captured using time-lapse phase contrast video microscopy and compared to that of MDA-MB-468 cells (moderately motile). The Higuchi fractal method was used to compute the fractal dimension of the image intensity fluctuation along a single-pixel-width region parallel to the wound. The near-wound region fractal dimension was found to decrease three times faster initially in the MDA-MB-231 cells as compared to the less cancerous MDA-MB-468 cells. The inner-region fractal dimension was found to be fairly constant over time for both cell types, suggesting a wound influence range of about 15 cell layers. The box-counting fractal dimension method was also used to study regions of interest (ROI). The MDA-MB-468 ROI area fractal dimension was found to decrease continuously for up to 7 hours. The MDA-MB-231 ROI area fractal dimension was found to increase, consistent with the behavior of an HGF-treated MDA-MB-231 wound healing assay posted in the public domain. A fractal-dimension-based capacity index has been formulated to quantify the invasiveness of the MDA-MB-231 cells in the direction perpendicular to the wound. Our results suggest that image intensity fluctuation fractal dimension analysis can be used as a tool to quantify cell migration in terms of cancer severity and treatment response.
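The Higuchi estimator applied to the intensity profiles can be sketched in a few lines; the choice of kmax below is illustrative.

```python
import numpy as np

def higuchi_fd(signal, kmax=8):
    """Higuchi fractal dimension of a 1-D signal, e.g. the image
    intensity fluctuation along a one-pixel-wide row parallel to
    the wound. Returns ~1 for smooth curves, ~2 for white noise."""
    x = np.asarray(signal, dtype=float)
    n = len(x)
    log_len, log_inv_k = [], []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):                     # each starting offset
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            # normalised curve length at scale k
            L = np.sum(np.abs(np.diff(x[idx]))) * (n - 1) / ((len(idx) - 1) * k)
            lengths.append(L / k)
        log_len.append(np.log(np.mean(lengths)))
        log_inv_k.append(np.log(1.0 / k))
    slope, _ = np.polyfit(log_inv_k, log_len, 1)
    return float(slope)
```

A drop in this dimension over time, as reported for the near-wound region, reflects the intensity profile becoming smoother as cells fill the wound.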

Keywords: Higuchi fractal dimension, box-counting fractal dimension, cancer cell migration, wound healing.

64 Issues in Spectral Source Separation Techniques for Plant-wide Oscillation Detection and Diagnosis

Authors: A.K. Tangirala, S. Babji

Abstract:

In the last few years, three multivariate spectral analysis techniques, namely Principal Component Analysis (PCA), Independent Component Analysis (ICA) and Non-negative Matrix Factorization (NMF), have emerged as effective tools for oscillation detection and isolation. While the first method is used to determine the number of oscillatory sources, the latter two are used to identify source signatures by formulating the detection problem as a source identification problem in the spectral domain. In this paper, we present a critical drawback of the underlying linear (mixing) model which strongly limits the ability of the associated source separation methods to determine the number of sources and/or identify the physical source signatures. It is shown that the assumed mixing model is only valid if each unit of the process gives equal weighting (an all-pass filter) to all oscillatory components in its inputs. This is in contrast to the fact that each unit, in general, acts as a filter with a non-uniform frequency response. Thus, the model can only facilitate correct identification of a source with a single frequency component, which is again unrealistic. To overcome this deficiency, an iterative post-processing algorithm that correctly identifies the physical source(s) is developed. An additional issue with the existing methods is that they lack a procedure to pre-screen non-oscillatory or noisy measurements, which obscure the identification of oscillatory sources. In this regard, a pre-screening procedure based on the notion of a sparseness index is prescribed to eliminate the noisy and non-oscillatory measurements from the data set used for analysis.
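The NMF step of the detection problem, before any post-processing, amounts to factorizing a nonnegative measurements-by-frequency power-spectrum matrix. Below is a minimal Lee-Seung multiplicative-update sketch; the iteration count and initialization are illustrative, and the paper's pre-screening and post-processing are not included.

```python
import numpy as np

def nmf(V, rank, n_iter=500, seed=0):
    """Factorize V ~= W @ H with nonnegative factors. Rows of H play
    the role of spectral source signatures; W holds the mixing
    weights of each measurement."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + 1e-3
    H = rng.random((rank, m)) + 1e-3
    for _ in range(n_iter):                   # Lee-Seung updates
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H
```

The drawback the paper identifies arises exactly here: the factorization assumes each measurement mixes the source spectra with frequency-independent weights, which a non-all-pass process unit violates.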

Keywords: non-negative matrix factorization, PCA, source separation, plant-wide diagnosis

63 A Novel Neighborhood Defined Feature Selection on Phase Congruency Images for Recognition of Faces with Extreme Variations

Authors: Satyanadh Gundimada, Vijayan K Asari

Abstract:

A novel feature selection strategy to improve recognition accuracy on faces affected by non-uniform illumination, partial occlusions and varying expressions is proposed in this paper. The technique is applicable especially in scenarios where the possibility of obtaining a reliable intra-class probability distribution is minimal due to the small number of training samples. Phase congruency features in an image are defined as the points where the Fourier components of that image are maximally in phase. These features are invariant to the brightness and contrast of the image under consideration. This property makes it possible to achieve lighting-invariant face recognition. Phase congruency maps of the training samples are generated and a novel modular feature selection strategy is implemented. Smaller sub-regions from a predefined neighborhood within the phase congruency images of the training samples are merged to obtain a large set of features. These features are arranged in order of increasing distance between the sub-regions involved in merging. The assumption behind the proposed region merging and arrangement strategy is that local dependencies among the pixels are more important than global dependencies. The obtained feature sets are then arranged in decreasing order of discriminating capability using a criterion function, namely the ratio of the between-class variance to the within-class variance of the sample set in the PCA domain. The results indicate a high improvement in classification performance compared to baseline algorithms.
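The ranking criterion, the ratio of between-class to within-class variance, can be sketched for a scalar feature as follows (the paper applies it to whole feature sets in the PCA domain; this is a simplified one-dimensional illustration).

```python
import numpy as np

def discriminability(feature, labels):
    """Between-class over within-class variance (Fisher-style
    criterion) used to rank candidate feature sets; higher means
    a more discriminating feature."""
    f = np.asarray(feature, dtype=float)
    y = np.asarray(labels)
    grand_mean = f.mean()
    between = within = 0.0
    for c in np.unique(y):
        fc = f[y == c]
        between += len(fc) * (fc.mean() - grand_mean) ** 2
        within += np.sum((fc - fc.mean()) ** 2)
    return between / within
```

Feature sets whose class means are far apart relative to their spread score higher and are retained first in the ranking.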

Keywords: Discriminant analysis, intra-class probability distribution, principal component analysis, phase congruency.
