Search results for: partition metric
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 408

318 Removal of Polycyclic Aromatic Hydrocarbons Present in Tyre Pyrolytic Oil Using Low Cost Natural Adsorbents

Authors: Neha Budhwani

Abstract:

Polycyclic aromatic hydrocarbons (PAHs) are formed during the pyrolysis of scrap tyres to produce tyre pyrolytic oil (TPO). Due to their carcinogenic, mutagenic, and toxic properties, PAHs are priority pollutants. Hence, it is essential to remove PAHs from TPO before utilising it as a petroleum fuel alternative (to run engines). Agricultural wastes have a promising future as biosorbents due to their cost-effectiveness, abundant availability, high biosorption capacity, and renewability. Various low-cost adsorbents were prepared from natural sources. The uptake of PAHs present in tyre pyrolytic oil was investigated using various low-cost adsorbents of natural origin, including sawdust (shisham), coconut fiber, neem bark, chitin, and activated charcoal. Adsorption experiments on different PAHs, viz. naphthalene, acenaphthene, biphenyl, and anthracene, were carried out at ambient temperature (25°C) and at pH 7. It was observed that, for any given PAH, the adsorption capacity increases with the lignin content. The Freundlich constants Kf and 1/n were evaluated, and the adsorption isotherms of the PAHs were in agreement with the Freundlich model, while the uptake capacity for PAHs followed the order: activated charcoal > sawdust (shisham) > coconut fiber > chitin. The partition coefficients in acetone-water and the adsorption constants at equilibrium could be linearly correlated with octanol–water partition coefficients. Natural adsorbents are thus a good alternative for PAH removal. Sawdust of Dalbergia sissoo, a by-product of sawmills, was found to be a promising adsorbent for the removal of PAHs present in TPO, and the adsorbents studied were comparable to some conventional adsorbents.
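
As a rough illustration of how the reported constants are obtained, the sketch below fits the linearized Freundlich isotherm to equilibrium data; the data points are hypothetical placeholders, not values from the paper.

```python
import numpy as np

# Hypothetical equilibrium data: liquid-phase PAH concentration Ce (mg/L)
# and adsorbed amount qe (mg/g) -- illustrative values only.
Ce = np.array([0.5, 1.0, 2.0, 4.0, 8.0])
qe = np.array([1.2, 1.9, 3.1, 5.0, 8.2])

# Freundlich isotherm: qe = Kf * Ce**(1/n).
# Linearized form: log(qe) = log(Kf) + (1/n) * log(Ce).
slope, intercept = np.polyfit(np.log10(Ce), np.log10(qe), 1)
Kf, n_inv = 10 ** intercept, slope
print(f"Kf = {Kf:.3f}, 1/n = {n_inv:.3f}")
```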

Keywords: natural adsorbent, PAHs, TPO, coconut fiber, wood powder (shisham), naphthalene, acenaphthene, biphenyl and anthracene

Procedia PDF Downloads 204
317 Impact Factor Analysis for Spatially Varying Aerosol Optical Depth in Wuhan Agglomeration

Authors: Wenting Zhang, Shishi Liu, Peihong Fu

Abstract:

As an indicator of air quality directly related to the concentration of ground-level PM2.5, the spatial-temporal variation of Aerosol Optical Depth (AOD) and the analysis of its impact factors have become a hot spot in air pollution research. This paper addresses the non-stationarity and the spatial autocorrelation (Moran’s I index of 0.75) of AOD in the Wuhan agglomeration (WHA) in central China, and uses geographically weighted regression (GWR) to identify the spatial relationship between AOD and its impact factors. The 3 km AOD product of the Moderate Resolution Imaging Spectroradiometer (MODIS) is used in this study. Beyond economic-social factors, land use density factors, vegetation cover, and elevation, a landscape metric is also considered as one factor. The results suggest that the GWR model is capable of dealing with spatially varying relationships, with R², corrected Akaike Information Criterion (AICc), and standardized residuals better than those of the ordinary least squares (OLS) model. The GWR results suggest that urban development, forest cover, the landscape metric, and elevation are the major driving factors of AOD. Generally, higher AOD tends to be located in places with more urban development, less forest, and flat terrain.
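
The Moran’s I value quoted above measures global spatial autocorrelation. A minimal sketch of its computation, with a hypothetical weight matrix and toy AOD values (not from the study):

```python
import numpy as np

def morans_i(x, w):
    """Global Moran's I of values x under spatial weight matrix w."""
    z = np.asarray(x, dtype=float) - np.mean(x)
    return len(z) / w.sum() * (w * np.outer(z, z)).sum() / (z ** 2).sum()

# Toy example: four grid cells in a row, rook-contiguity weights.
aod = [0.60, 0.55, 0.30, 0.25]
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(f"Moran's I = {morans_i(aod, w):.3f}")  # positive -> clustered pattern
```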

Keywords: aerosol optical depth, geographically weighted regression, land use change, Wuhan agglomeration

Procedia PDF Downloads 331
316 The Influence of Audio on Perceived Quality of Segmentation

Authors: Silvio Ricardo Rodrigues Sanches, Bianca Cogo Barbosa, Beatriz Regina Brum, Cléber Gimenez Corrêa

Abstract:

To evaluate the quality of a segmentation algorithm, authors use subjective or objective metrics. Although subjective metrics are more accurate than objective ones, objective metrics do not require user feedback to test an algorithm; they require subjective experiments only during their development. Subjective experiments typically display to users videos (generated from frames with segmentation errors) that simulate the environment of an application domain. This user feedback is crucial information for metric definition. In the subjective experiments applied to develop some state-of-the-art metrics used to test segmentation algorithms, the videos displayed during the experiments did not contain audio. Audio, however, is an essential component in applications such as videoconferencing and augmented reality. If audio influences the user’s perception, using only videos without audio in subjective experiments can compromise the efficiency of an objective metric generated using data from these experiments. This work aims to identify whether audio influences the user’s perception of segmentation quality in background substitution applications with audio. The proposed approach used a subjective method based on formal video quality assessment methods. The results showed that audio influences the quality of segmentation perceived by the user.

Keywords: background substitution, influence of audio, segmentation evaluation, segmentation quality

Procedia PDF Downloads 91
315 The Study of Heat and Mass Transfer for Ferrous Materials' Filtration Drying

Authors: Dmytro Symak

Abstract:

Drying is a complex technological, thermal, and energy-intensive process. In many cases, drying is the most energy-costly stage of production and can account for over 50% of total costs. In Ukraine, over 85% of Portland cement is produced by the wet process, and energy accounts for almost 60% of the cost of the finished product. Wet cement production consumes over 5500 kJ/kg of clinker, while the dry process consumes only 3100 kJ/kg; switching to dry Portland cement production would therefore cut specific energy consumption by roughly 44% (see the estimate below). Studying the drying of raw materials in Portland cement manufacture is therefore a highly relevant task. The drying of fine ferrous materials (small pyrites, red mud, clay Kyoko) is recommended to be carried out by the filtration method, one of the most intensive. The essence of filtration drying lies in filtering the heat agent through a stationary layer of wet material resting on a perforated partition, i.e., through the "dispersed material layer – perforated partition" system. For optimal drying, it is necessary to establish how the pressure loss across the layer of dispersed material and the heat and mass transfer coefficients depend on the velocity of the filtering gas flow. In our research, the experimentally determined pressure loss in the layer of dispersed material, as well as the heat exchange coefficients, were generalized in the form of dimensionless complexes. We also determined the relation between the coefficients of mass and heat transfer. As a result of theoretical and experimental investigations, it was possible to develop a methodology for calculating the optimal parameters of the thermal agent and the main parameters of the filtration drying installation. A comparison of operating expenses, calculated by known methods, for drying small pyrites in a rotating drum versus the filtration method shows savings of up to 618 kWh per 1,000 kg of dry material, and up to 700 kWh for filtration drying of clay.
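
The ~44% figure follows directly from the specific energy consumptions quoted above:

```latex
\frac{5500 - 3100}{5500} \approx 0.44
```

That is, the dry process uses about 56% of the wet-process energy per kilogram of clinker, a cut of roughly 44% rather than a full halving.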

Keywords: drying, cement, heat and mass transfer, filtration method

Procedia PDF Downloads 236
314 Practical Experiences in the Development of a Lab-Scale Process for the Production and Recovery of Fucoxanthin

Authors: Alma Gómez-Loredo, José González-Valdez, Jorge Benavides, Marco Rito-Palomares

Abstract:

Fucoxanthin is a carotenoid that exerts multiple beneficial effects on human health, including antioxidant, anti-cancer, antidiabetic, and anti-obesity activity, making the development of a whole process for its production and recovery an important contribution. In this work, the lab-scale production and purification of fucoxanthin from Isochrysis galbana have been studied. In batch cultures, low light intensity (13.5 μmol/m²s) and bubble agitation were the best conditions for production of the carotenoid, with product yields of up to 0.143 mg/g. After ethanolic extraction of fucoxanthin from the biomass and hexane partition, further recovery and purification of the carotenoid were accomplished by means of alcohol–salt Aqueous Two-Phase System (ATPS) extraction followed by an ultrafiltration (UF) step. Among the studied systems, an ATPS composed of ethanol and potassium phosphate (Volume Ratio (VR) = 3; Tie-Line Length (TLL) 60% w/w) presented a fucoxanthin recovery yield of 76.24 ± 1.60% and was able to remove 64.89 ± 2.64% of the carotenoid and chlorophyll impurities. For UF, the addition of ethanol to the recovered ethanolic ATPS stream to a final proportion of 74.15% (w/w) resulted in a reduction of approximately 16% in protein content, increasing product purity with a recovery yield of about 63% of the compound in the permeate stream. Considering the production, extraction, and primary recovery (ATPS and UF) steps, around 45% global fucoxanthin recovery should be expected. Although other purification technologies, such as Centrifugal Partition Chromatography, are able to reach fucoxanthin recoveries of up to 83%, the process developed in the present work does not require large volumes of solvents or expensive equipment. Moreover, it has the potential to be scaled up to commercial scale and represents a cost-effective strategy compared to traditional separation techniques like chromatography.
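
As a consistency check on the reported yields, chaining the two primary recovery steps gives

```latex
Y_{\text{ATPS}} \times Y_{\text{UF}} \;\approx\; 0.7624 \times 0.63 \;\approx\; 0.48
```

so, with the additional losses in the upstream extraction step, the stated global recovery of about 45% is consistent.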

Keywords: aqueous two-phase systems, fucoxanthin, Isochrysis galbana, microalgae, ultrafiltration

Procedia PDF Downloads 391
313 Conformal Invariance and F(R,T) Gravity

Authors: P. Y. Tsyba, O. V. Razina, E. Güdekli, R. Myrzakulov

Abstract:

In this paper, we consider the equations of motion of F(R,T) gravity with respect to their property of conformal invariance. It is shown that, in the general case, such a theory is not conformally invariant. Special cases of the functions v and u, in which this property of the theory can appear, were studied.

Keywords: conformal invariance, gravity, space-time, metric

Procedia PDF Downloads 628
312 Graph Cuts Segmentation Approach Using a Patch-Based Similarity Measure Applied for Interactive CT Lung Image Segmentation

Authors: Aicha Majda, Abdelhamid El Hassani

Abstract:

Lung CT image segmentation is a prerequisite for lung CT image analysis. Most conventional methods need post-processing to deal with abnormal lung CT scans, such as those containing lung nodules or other lesions. The simplest similarity measure in the standard graph cuts algorithm consists of directly comparing the pixel values of two neighboring regions, which is not accurate because this kind of metric is extremely sensitive to minor perturbations such as noise or other artifacts. In this work, we propose an improved version of the standard graph cuts algorithm based on a patch-based similarity metric. The boundary penalty term in the graph cut algorithm is defined based on a patch-based similarity measurement instead of the simple intensity measurement of the standard method. The weights between each pixel and its neighboring pixels are based on this new term, and the graph is then created using these weights between its nodes. Finally, the segmentation is completed with the min-cut/max-flow algorithm. Experimental results show that the proposed method is very accurate and efficient and can directly provide explicit lung regions without any post-processing operations, in contrast to the standard method.
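
A minimal sketch of the idea behind a patch-based boundary weight is shown below; the Gaussian form and parameter values are assumptions for illustration, not the authors' exact formulation.

```python
import numpy as np

def patch_weight(img, p, q, half=2, sigma=25.0):
    """n-link weight between neighboring pixels p and q from the mean
    squared difference of the patches centered on them (instead of the
    plain intensity difference used in the standard graph cut)."""
    def patch(center):
        r, c = center
        return img[r - half:r + half + 1, c - half:c + half + 1].astype(float)
    msd = ((patch(p) - patch(q)) ** 2).mean()
    return np.exp(-msd / (2.0 * sigma ** 2))  # similar patches -> strong link

# Usage: such weights populate the n-links of the graph, which is then
# partitioned with a min-cut/max-flow solver (e.g., the PyMaxflow package).
img = np.random.randint(0, 256, size=(64, 64))
print(patch_weight(img, (10, 10), (10, 11)))
```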

Keywords: graph cuts, lung CT scan, lung parenchyma segmentation, patch-based similarity metric

Procedia PDF Downloads 141
311 Optimization of Spatial Light Modulator to Generate Aberration Free Optical Traps

Authors: Deepak K. Gupta, T. R. Ravindran

Abstract:

Holographic optical tweezers (HOTs) generally use iterative algorithms such as weighted Gerchberg-Saxton (WGS) to generate multiple traps, which theoretically produce traps with 99% uniformity. In experiments, however, it is the phase response of the spatial light modulator (SLM) that ultimately determines the efficiency, uniformity, and quality of the trap spots. In general, SLMs show nonlinear phase response behavior, and they may even have an asymmetric phase modulation depth before and after π. This affects the resolution with which the gray levels are addressed before and after π, leading to degraded trap performance. We present a method to optimize the SLM for a linear phase response along with a symmetric phase modulation depth around π. Further, we optimize the SLM for its varying phase response over different spatial regions by optimizing the brightness/contrast and gamma of the hologram in different subsections. We show the effect of the optimization on an array of trap spots, resulting in improved efficiency and uniformity. We also calculate the spot sharpness metric and the trap performance metric and show a tightly focused spot with reduced aberration. The trap performance is compared by calculating the trap stiffness of a particle trapped in a given spot before and after aberration correction. The trap stiffness is found to improve by 200% after the optimization.

Keywords: spatial light modulator, optical trapping, aberration, phase modulation

Procedia PDF Downloads 143
310 A Metric to Evaluate Conventional and Electrified Vehicles in Terms of Customer-Oriented Driving Dynamics

Authors: Stephan Schiffer, Andreas Kain, Philipp Wilde, Maximilian Helbing, Bernard Bäker

Abstract:

Automobile manufacturers progressively focus on a downsizing strategy to meet the EU's CO2 requirements concerning type-approval consumption cycles. The reduction in naturally aspirated engine power is compensated by increased levels of turbocharging. By downsizing conventional engines, CO2 emissions are reduced. However, this also implies major challenges regarding longitudinal dynamic characteristics. An example of this circumstance is the delayed turbocharger-induced torque reaction, which leads to a partially poor response behavior of the vehicle during acceleration. That is why it is important to focus conventional drive train design on real customer driving again. The currently considered dynamic maneuvers, such as the 0-100 km/h acceleration time discussed by journals and car manufacturers, describe the longitudinal dynamics experienced by a driver only inadequately. For that reason, we present the realization and evaluation of a comprehensive study with test subjects. Subjects are provided with different vehicle concepts (electrified vehicles, vehicles with naturally aspirated engines, vehicles with different turbocharger concepts, etc.) in order to find out which dynamic criteria are decisive for a subjectively strong acceleration and response behavior of a vehicle. Subsequently, realistic acceleration criteria are derived. By weighting the criteria, an evaluation metric is developed to objectify customer-oriented transient dynamics. Fully-electrified vehicles are the benchmark in terms of customer-oriented longitudinal dynamics: the electric machine provides the desired torque almost without delay, an advantage over combustion engines that is especially noticeable at low engine speeds. In conclusion, we show to what extent the customer-relevant longitudinal dynamics of conventional vehicles can be approximated to those of electrified vehicle concepts. Therefore, various technical measures (turbocharger concepts, 48V electrical chargers, etc.) and drive train designs (e.g., varying the final drive) are presented and evaluated in order to strengthen the vehicle’s customer-relevant transient dynamics. The newly developed evaluation metric is used as the rating measure.

Keywords: 48V, customer-oriented driving dynamics, electric charger, electrified vehicles, vehicle concepts

Procedia PDF Downloads 383
309 Hierarchical Clustering Algorithms in Data Mining

Authors: Z. Abdullah, A. R. Hamdan

Abstract:

Clustering is a process of grouping objects and data into clusters so that data objects from the same cluster are similar to each other. Clustering algorithms are one of the areas of data mining, and they can be classified into partitional, hierarchical, density-based, and grid-based approaches. In this paper, we survey and review four major hierarchical clustering algorithms: CURE, ROCK, CHAMELEON, and BIRCH. The obtained state of the art of these algorithms will help in eliminating their current problems, as well as in deriving more robust and scalable clustering algorithms.
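
For readers who want to experiment, one of the four surveyed algorithms, BIRCH, is available off the shelf; a minimal sketch with toy data (not from the survey) is:

```python
import numpy as np
from sklearn.cluster import Birch

# Toy data: three Gaussian blobs in the plane.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc, 0.3, size=(50, 2)) for loc in (0.0, 3.0, 6.0)])

# BIRCH summarizes the data into a CF (clustering feature) tree in a single
# pass, which is what makes it scalable to large datasets.
labels = Birch(threshold=0.5, n_clusters=3).fit_predict(X)
print(np.bincount(labels))  # roughly 50 points per cluster
```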

Keywords: clustering, unsupervised learning, algorithms, hierarchical

Procedia PDF Downloads 848
308 Genomic Sequence Representation Learning: An Analysis of K-Mer Vector Embedding Dimensionality

Authors: James Jr. Mashiyane, Risuna Nkolele, Stephanie J. Müller, Gciniwe S. Dlamini, Rebone L. Meraba, Darlington S. Mapiye

Abstract:

When performing language tasks in natural language processing (NLP), the dimensionality of word embeddings is chosen either ad hoc or by optimizing the Pairwise Inner Product (PIP) loss. The PIP loss is a metric that measures the dissimilarity between word embeddings; it is obtained through matrix perturbation theory by exploiting the unitary invariance of word embeddings. In genomics, especially in genome sequence processing, unlike in natural language processing, there is no notion of a “word”; rather, there are sequence substrings of length k called k-mers. K-mer sizes matter, and they vary depending on the goal of the task at hand. The dimensionality of word embeddings in NLP has been studied using matrix perturbation theory and the PIP loss. In this paper, the sufficiency and reliability of applying word-embedding algorithms to various genomic sequence datasets are investigated to understand the relationship between the k-mer size and the embedding dimension. This is accomplished by studying the scaling capability of three embedding algorithms, namely Latent Semantic Analysis (LSA), Word2Vec, and Global Vectors (GloVe), with respect to the k-mer size. Using the PIP loss as a metric to train embeddings on different datasets, we also show that Word2Vec outperforms LSA and GloVe in computing accurate embeddings as both the k-mer size and the vocabulary increase. Finally, the shortcomings of natural language processing embedding algorithms in performing genomic tasks are discussed.
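
The PIP loss itself is simple to state: for an embedding matrix E (rows are words or k-mers), the PIP matrix is EEᵀ, and the loss between two embeddings is the Frobenius norm of the difference of their PIP matrices. A minimal sketch with hypothetical k-mer embedding matrices:

```python
import numpy as np

def pip_loss(E1, E2):
    """PIP loss between two embedding matrices (rows = k-mers).
    PIP(E) = E @ E.T holds all pairwise inner products; the loss is
    the Frobenius norm of the difference, which is invariant to
    unitary transformations of either embedding."""
    return np.linalg.norm(E1 @ E1.T - E2 @ E2.T, ord="fro")

# Hypothetical embeddings of the same 100 k-mers at two dimensionalities.
rng = np.random.default_rng(0)
E_64, E_32 = rng.normal(size=(100, 64)), rng.normal(size=(100, 32))
print(pip_loss(E_64, E_32))
```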

Keywords: word embeddings, k-mer embedding, dimensionality reduction

Procedia PDF Downloads 95
307 Complex Network Analysis of Seismicity and Applications to Short-Term Earthquake Forecasting

Authors: Kahlil Fredrick Cui, Marissa Pastor

Abstract:

Earthquakes are complex phenomena, exhibiting complex correlations in space, time, and magnitude. Recently, the concept of complex networks has been used to shed light on the statistical and dynamical characteristics of regional seismicity. In this work, we study the relationships and interactions of seismic regions in Chile, Japan, and the Philippines through weighted and directed complex network analysis. Geographical areas are digitized into cells of fixed dimensions, which in turn become the nodes of the network when an earthquake has occurred therein. Nodes are linked if a correlation exists between them, as determined and measured by a correlation metric. The networks are found to be scale-free, exhibiting power-law behavior in the distributions of their different centrality measures: the in- and out-degree and the in- and out-strength. Evidence is also found of preferential interaction between seismically active regions through their degree-degree correlations, suggesting that seismicity is dictated by the activity of a few active regions. The importance of a seismic region to the overall seismicity is measured using a generalized centrality metric taken to be an indicator of its activity or passivity. The spatial distribution of earthquake activity indicates the areas where strong earthquakes have occurred in the past, while the passivity distribution points toward the likely locations where an earthquake would occur whenever another one happens elsewhere. Finally, we propose a method that projects the location of the next possible earthquake using the generalized centralities coupled with correlations calculated between the latest earthquakes and a geographical point in the future.
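
A minimal sketch of the network construction and the weighted degree (strength) measures, with hypothetical cells and correlation values standing in for the paper's correlation metric:

```python
import networkx as nx

# Grid cells become nodes once an earthquake occurs in them; a directed,
# weighted edge links correlated cells (correlation values are placeholders).
G = nx.DiGraph()
links = [("cellA", "cellB", 0.8), ("cellB", "cellC", 0.6),
         ("cellA", "cellC", 0.4), ("cellC", "cellA", 0.5)]
for src, dst, corr in links:
    G.add_edge(src, dst, weight=corr)

in_strength = dict(G.in_degree(weight="weight"))    # total incoming correlation
out_strength = dict(G.out_degree(weight="weight"))  # total outgoing correlation
print(in_strength)
print(out_strength)
```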

Keywords: complex networks, correlations, earthquake, hazard assessment

Procedia PDF Downloads 181
306 Assessment of Metal Dynamics in Dissolved and Particulate Phase in Human Impacted Hooghly River Estuary, India

Authors: Soumita Mitra, Santosh Kumar Sarkar

Abstract:

The Hooghly river estuary (HRE), situated on the northeastern Bay of Bengal, has global significance due to its holiness. It is of immense importance to the local population, providing a perpetual water supply for activities such as transportation, fishing, boating, and bathing to the people settled on both banks of the estuary. This study assessed dissolved and particulate trace metals in the estuary over a stretch of about 175 km. Water samples were collected from the surface (0-5 cm) along the salinity gradient, and metal concentrations were studied in both the dissolved and particulate phases using a Graphite Furnace Atomic Absorption Spectrophotometer (GF-AAS), along with physical characteristics such as water temperature, salinity, pH, turbidity, and total dissolved solids. Although significant spatial variation was noticed, little enrichment was found downstream in the estuary. The mean concentrations of the metals in the dissolved and particulate phases followed the same trend: Fe > Mn > Cr > Zn > Cu > Ni > Pb. The concentrations of the metals in the particulate phase were much greater than those in the dissolved phase, as also reflected in the values of the partition coefficient Kd (ml mg⁻¹). The Kd values ranged from 1.5×10⁵ (for Pb) to 4.29×10⁶ (for Cr). The high Kd value for Cr indicates that Cr is mostly bound to the suspended particulate matter, while the lowest value, for Pb, signifies its greater presence in the dissolved phase. Moreover, the concentrations of all the studied metals in the dissolved phase were many folds higher than their respective permissible limits set by WHO 2008, 2009 and 2011. On the other hand, according to the Sediment Quality Guidelines (SQGs), Zn, Cu, and Ni in the particulate phase lay between the ERL and ERM values, but Cr exceeded the ERM value at all stations, confirming that the estuary is mostly contaminated with particulate Cr, which might cause frequent adverse effects on aquatic life. Multivariate cluster analysis was also performed, which separated the stations according to the level of contamination from several point and nonpoint sources. Thus, the estuarine system is found to be heavily polluted by toxic metals, and further investigation and toxicological studies should be undertaken for a full risk assessment, better management, and restoration of the water quality of this globally significant aquatic system.
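
For reference, the partition coefficient used above is conventionally defined as the ratio of the particulate-bound metal concentration (normalized per unit mass of suspended particulate matter) to the dissolved concentration (per unit volume of filtered water); the exact normalization is an assumption here, since the abstract does not spell it out:

```latex
K_d \;=\; \frac{C_{\text{particulate}}}{C_{\text{dissolved}}}
```

A large Kd, as found for Cr, means the metal travels mainly on suspended particles; a small Kd, as for Pb, means it remains mostly in solution.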

Keywords: dissolved and particulate phase, Hooghly river estuary, partition coefficient, surface water, toxic metals

Procedia PDF Downloads 253
305 Towards Law Data Labelling Using Topic Modelling

Authors: Daniel Pinheiro Da Silva Junior, Aline Paes, Daniel De Oliveira, Christiano Lacerda Ghuerren, Marcio Duran

Abstract:

The Courts of Accounts are institutions responsible for overseeing and pointing out irregularities in public administration expenses. They have a high demand for processes to be analyzed, whose decisions must be grounded in the applicable laws. Within the existing large number of processes, there are several cases reporting similar subjects, so previous decisions on already analyzed processes can be a precedent for current processes that refer to similar topics. Identifying similar topics is an open, yet essential, task for identifying similarities between processes. Since the actual number of topics is considerably large, it is tedious and error-prone to identify topics using a purely manual approach. This paper presents a tool based on machine learning and natural language processing to assist in building a labeled dataset. The tool relies on topic modeling with Latent Dirichlet Allocation to find the topics underlying a document, followed by the Jensen-Shannon distance metric to generate a similarity score between document pairs. Furthermore, in a case study with a corpus of decisions of the Rio de Janeiro State Court of Accounts, it was noted that data pre-processing plays an essential role in modeling relevant topics. Also, the combination of topic modeling with a distance metric calculated over the documents' representations in the generated topic space proved useful in helping to construct a labeled base of similar and non-similar document pairs.
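
A minimal sketch of the LDA-plus-Jensen-Shannon pipeline, with placeholder documents standing in for the court decisions:

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from scipy.spatial.distance import jensenshannon

docs = ["audit of public works contract",
        "irregularities in a works contract",
        "pension fund accounting review"]  # placeholder decisions

# Topic modeling with LDA over a bag-of-words representation.
X = CountVectorizer().fit_transform(docs)
doc_topic = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(X)
doc_topic = doc_topic / doc_topic.sum(axis=1, keepdims=True)  # proper distributions

# Jensen-Shannon distance between topic distributions: 0 for identical
# topic mixtures, 1 (base 2) for maximally different ones.
print(jensenshannon(doc_topic[0], doc_topic[1], base=2))
print(jensenshannon(doc_topic[0], doc_topic[2], base=2))
```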

Keywords: courts of accounts, data labelling, document similarity, topic modeling

Procedia PDF Downloads 139
304 A Study of Algebraic Structure Involving Banach Space through Q-Analogue

Authors: Abdul Hakim Khan

Abstract:

The aim of the present paper is to study the Banach space and combinatorial algebraic structure of R. It further aims to study the algebraic structure of the set of all q-extensions of classical formulas and functions for 0 < q < 1.

Keywords: integral functions, q-extensions, q numbers of metric space, algebraic structure of r and banach space

Procedia PDF Downloads 546
303 Study on Impact of Existence of an Open Boundary Foreign Enclave and a 24-Hours Open Corridor for Foreigners inside Indian Territory

Authors: Debarshi Bhattacharya

Abstract:

In 2015, the historic Land Boundary Agreement (LBA) executed between India and Bangladesh finally settled the critical enclave problems that had remained outstanding between the two neighbouring countries for almost seven decades. The enclaves within India and Bangladesh were the awful outcome of the partition of India in 1947. As a dire consequence, the populace within these enclaves suffered enormously in obtaining basic rights, opportunities, and governmental support services for 67 long years after India’s independence and partition. As per LBA, 2015, 51 Bangladeshi (BD) enclaves inside Indian territory and 111 Indian enclaves inside Bangladesh territory were actually transferred to each other. However, by virtue of LBA, 1974, executed earlier between the two countries, one BD enclave situated inside India, namely the Dohogram-Angarpota (D-A) twin enclave, was not exchanged under LBA, 2015, and it still remains an integral, though non-contiguous, part of Bangladesh, completely surrounded by Indian territory. A study was undertaken, through an extensive field survey, to assess the impact of the existence of the D-A BD enclave inside Indian territory from India’s perspective. The field survey took the form of interviews, group discussions, questionnaire surveys, personal interactions, etc., gathering information from the Indian people residing adjacent to the D-A enclave and the Tin Bigha Corridor (TBC), the people of the D-A enclave, officials of the border security forces of India and Bangladesh, public representatives, and representatives of political organizations. The existence of the D-A BD enclave inside Indian territory brought serious apprehension of future problems to the people of the Kuchlibari region of Mekhligunj block, India, regarding its contiguity with the Indian mainland, owing to the 24-hour open access for BD people through the TBC. Local Indian people are anxious about threats to the national security of India, as well as to law and order in the locality, due to the open border of the D-A BD enclave in the region. On the other hand, it was observed that the 24-hour opening of the TBC brought significant positive changes to the people of the D-A BD enclave in terms of their socio-economic condition and security status.

Keywords: enclave, exchange of enclaves, land boundary agreement, Dohogram-Angarpota (D-A) Bangladeshi (BD) enclave, Tin Bigha Corridor

Procedia PDF Downloads 55
302 Classifying and Predicting Efficiencies Using Interval DEA Grid Setting

Authors: Yiannis G. Smirlis

Abstract:

The classification and prediction of efficiencies in Data Envelopment Analysis (DEA) is an important issue, especially in large-scale problems or when new units frequently enter the under-assessment set. In this paper, we contribute to the subject by proposing a grid structure based on interval segmentations of the range of values for the inputs and outputs. Such intervals, combined, define hyper-rectangles that partition the space of the problem. This structure, exploited by interval DEA models and a dominance relation, acts as a DEA pre-processor, enabling the classification and prediction of efficiency scores without applying any DEA models.

Keywords: data envelopment analysis, interval DEA, efficiency classification, efficiency prediction

Procedia PDF Downloads 141
301 Iris Recognition Based on the Low Order Norms of Gradient Components

Authors: Iman A. Saad, Loay E. George

Abstract:

The iris pattern is an important biological feature of the human body; it has become a very hot topic in both research and practical applications. In this paper, an algorithm is proposed for iris recognition, and a simple, efficient, and fast method is introduced to extract a set of discriminatory features using a first-order gradient operator applied to grayscale images. The gradient-based features are robust, to a certain extent, against the variations that may occur in the contrast or brightness of iris image samples; such variations mostly occur due to lighting differences and camera changes. First, the iris region is located; after that, it is remapped to a rectangular area of size 360×60 pixels. Also, a new method is proposed for detecting eyelash and eyelid points; it relies on statistical analysis of the image to mark the eyelash and eyelid pixels as noise points. In order to cover the localization (variation) of features, the rectangular iris image is partitioned into N overlapped sub-images (blocks); then, from each block, a set of average directional gradient density values is calculated and used as a texture feature vector. The gradient operators are applied along the horizontal, vertical, and diagonal directions, and the low-order norms of the gradient components are used to establish the feature vector. A Euclidean distance based classifier was used as the matching metric for determining the degree of similarity between the feature vector extracted from the tested iris image and the template feature vectors stored in the database. Experimental tests were performed using 2639 iris images from the CASIA V4-Interval database; the attained recognition accuracy reached up to 99.92%.
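
A minimal sketch of block-wise directional gradient features and Euclidean matching; the block count, overlap, and normalization below are illustrative assumptions, since the abstract does not give the exact values:

```python
import numpy as np

def block_gradient_features(strip, n_blocks=12, overlap=0.5):
    """Average directional gradient magnitudes over overlapping blocks of a
    normalized 360x60 iris strip (horizontal, vertical, two diagonals)."""
    gy, gx = np.gradient(strip.astype(float))
    gd1, gd2 = (gx + gy) / np.sqrt(2), (gx - gy) / np.sqrt(2)  # diagonals
    h, w = strip.shape
    width = w // n_blocks
    step = max(int(width * (1 - overlap)), 1)
    feats = []
    for x0 in range(0, w - width + 1, step):
        cols = slice(x0, x0 + width)
        feats += [np.abs(g[:, cols]).mean() for g in (gx, gy, gd1, gd2)]
    return np.array(feats)

# Matching: Euclidean distance between probe and stored template vectors.
probe = block_gradient_features(np.random.randint(0, 256, (60, 360)))
template = block_gradient_features(np.random.randint(0, 256, (60, 360)))
print(np.linalg.norm(probe - template))
```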

Keywords: iris recognition, contrast stretching, gradient features, texture features, Euclidean metric

Procedia PDF Downloads 300
300 Woodfuels as Alternative Source of Energy in Rural and Urban Areas in the Philippines

Authors: R. T. Aggangan

Abstract:

Woodfuels continue to be a major component of the energy supply mix of the Philippines due to increasing demand for energy that is not adequately met by a decreasing supply, together with the increasing prices of fuel oils such as liquefied petroleum gas (LPG) and kerosene. The Development Academy of the Philippines projects woodfuel demand in 2016 at 28.3 million metric tons in the household sector, against a combined supply potential of about 105.4 million metric tons from forest and non-forest lands. However, the Revised Master Plan for Forestry Development projects a demand of about 50 million cubic meters of fuelwood in 2016, while the capability to supply from local sources is only about 28 million cubic meters, indicating a 44% deficiency. Household demand constitutes 82%, while industrial demand is 18%. Domestic household demand for energy is for cooking needs, while industrial demand is for steam power generation; tobacco curing barns; brick, ceramics, and pot making; bakeries; lime production; and small-scale food processing. Factors that favour increased use of wood-based energy include relatively low prices (against increasing oil-based fuel prices), availability of efficient wood-based energy utilization technology, increasing supply, and an increasing population that cannot afford conventional fuels. Moreover, innovations in combustion technology and the cogeneration of heat and power from biomass for modern applications favour biomass energy development. This paper recommends policies and strategic directions for the development of the woodfuel industry, with the twin goals of sustainably supplying the energy requirements of households and industry.

Keywords: biomass energy development, fuelwood, households and industry, innovations in combustion technology, supply and demand

Procedia PDF Downloads 302
299 Sensor Network Routing Optimization by Simulating Eurygaster Life in Wheat Farms

Authors: Fariborz Ahmadi, Hamid Salehi, Khosrow Karimi

Abstract:

A sensor network is a set of sensor nodes that cooperate to perform predefined tasks. An important problem in such networks is power consumption. In this paper, an algorithm based on eurygaster life is introduced to minimize the power consumption of the nodes of these networks. In this method, the search space of the problem is divided into several partitions, and each partition is investigated separately. The evaluation results show that our approach is more efficient in comparison to other evolutionary algorithms such as the genetic algorithm.

Keywords: evolutionary computation, genetic algorithm, particle swarm optimization, sensor network optimization

Procedia PDF Downloads 393
298 Structural Balance and Creative Tensions in New Product Development Teams

Authors: Shankaran Sitarama

Abstract:

New product development (NPD) involves team members coming together and working in teams to come up with innovative solutions to problems, resulting in new products. Thus, a core attribute of a successful NPD team is its creativity and innovation. Teams need to be creative as a group, generating a breadth of ideas and innovative solutions that solve or address the problem they are targeting and meet the user’s needs. They also need to be very efficient in their teamwork as they work through the various stages of the development of these ideas, resulting in a POC (proof-of-concept) implementation or a prototype of the product. There are two distinctive traits that the teams need to have: one is ideational creativity, and the other is effective and efficient teamwork. Each of these traits causes multiple types of tension in the teams, and these tensions are reflected in the team dynamics. Ideational conflicts arising out of debates and deliberations increase the collective knowledge and affect team creativity positively. However, the same trait of challenging each other’s viewpoints might lead team members to be disruptive, resulting in interpersonal tensions, which in turn lead to less than efficient teamwork. Teams that foster and effectively manage these creative tensions are successful, and teams that are not able to manage these tensions show poor team performance. In this paper, we explore these tensions as they manifest in the team communication social network and propose a Creative Tension Balance index along the lines of the degree of balance in social networks, which has the potential to highlight the successful (and unsuccessful) NPD teams. Team communication reflects the team dynamics among team members and is the data set for analysis. The emails between the members of the NPD teams are processed through a semantic analysis algorithm (LSA) to analyze the content of communication, and through a semantic similarity analysis to arrive at a social network graph that depicts the communication amongst team members based on the content of communication. This social network is subjected to traditional social network analysis methods to arrive at established metrics as well as structural balance analysis metrics. Traditional structural balance is extended to include team interaction pattern metrics to arrive at a Creative Tension Balance (CTB) metric that effectively captures the creative tensions and tension balance in teams. This CTB metric captures the signatures of successful and unsuccessful (dissonant) NPD teams. The dataset for this research study includes 23 NPD teams spread over multiple semesters; we compute the CTB metric and use it to identify the most successful and unsuccessful teams by classifying them into low, medium, and high performing teams. The results are correlated with the team reflections (for team dynamics and interaction patterns), the team self-evaluation feedback surveys (for teamwork metrics), and team performance as measured by a comprehensive team grade (for high and low performing team signatures).

Keywords: team dynamics, social network analysis, new product development teamwork, structural balance, NPD teams

Procedia PDF Downloads 43
297 The Non-Existence of Perfect 2-Error Correcting Lee Codes of Word Length 7 over Z

Authors: Catarina Cruz, Ana Breda

Abstract:

Tiling problems have been capturing the attention of many mathematicians due to their real-life applications. In this study, we deal with tilings of Zⁿ by Lee spheres, where n is a positive integer, these tilings being related to error correcting codes for the transmission of information over a noisy channel. We focus our attention on the question ‘for what values of n and r does the n-dimensional Lee sphere of radius r tile Zⁿ?’. It seems that the n-dimensional Lee sphere of radius r does not tile Zⁿ for n ≥ 3 and r ≥ 2. Here, we prove that it is not possible to tile Z⁷ with Lee spheres of radius 2, presenting a proof based on a combinatorial method and faithful to the geometric idea of the problem. The non-existence of such tilings has been studied by several authors, the most difficult cases being those in which the radius of the Lee spheres is equal to 2. The relation between these tilings and error correcting codes is established by considering the center of a Lee sphere as a codeword and the other elements of the sphere as words which are decoded to the central codeword. When the Lee spheres of radius r centered at the elements of a set M ⊂ Zⁿ tile Zⁿ, M is a perfect r-error correcting Lee code of word length n over Z, denoted by PL(n, r). Our strategy to prove the non-existence of PL(7, 2) codes is based on the assumption that such a code M exists. Without loss of generality, we suppose that O ∈ M, where O = (0, ..., 0). In this sense, and taking into account that we are dealing with Lee spheres of radius 2, O covers all words which are distant two or fewer units from it. By the definition of a PL(7, 2) code, each word which is distant three units from O must be covered by a unique codeword of M. These words have to be covered by codewords which are distant five units from O. We prove the non-existence of PL(7, 2) codes by showing that it is not possible to cover all the referred words without superposition of Lee spheres whose centers are distant five units from O, contradicting the definition of a PL(7, 2) code. We achieve this contradiction by combining the cardinalities of particular subsets of codewords which are distant five units from O. There exists an extensive literature on codes in the Lee metric. Here, we present a new approach to prove the non-existence of PL(7, 2) codes.
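
For context, the number of points of Zⁿ within Lee distance r of a center is given by the well-known formula below; for the case treated here (n = 7, r = 2), each Lee sphere contains 113 words:

```latex
|B_r^n| \;=\; \sum_{i=0}^{\min(n,r)} 2^i \binom{n}{i} \binom{r}{i},
\qquad
|B_2^7| \;=\; 1 \;+\; 2\cdot\binom{7}{1}\cdot\binom{2}{1} \;+\; 2^2\cdot\binom{7}{2} \;=\; 1 + 28 + 84 \;=\; 113.
```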

Keywords: Golomb-Welch conjecture, Lee metric, perfect Lee codes, tilings

Procedia PDF Downloads 130
296 Experimental Design in Extraction of Pseudomonas sp. Protease from Fermented Broth by Polyethylene Glycol/Citrate Aqueous Two-Phase System

Authors: Omar Pillaca-Pullo, Arturo Alejandro-Paredes, Carol Flores-Fernandez, Marijuly Sayuri Kina, Amparo Iris Zavaleta

Abstract:

Aqueous two-phase systems (ATPS) are an interesting alternative for separating industrial enzymes because they are easy to scale up and low in cost. Polyethylene glycol (PEG) mixed with potassium phosphate or magnesium sulfate is one of the most frequently used polymer/salt ATPS, but a consequence of its use is a high concentration of phosphates and sulfates in wastewater, causing environmental issues. Citrate could replace these inorganic salts because it is biodegradable and does not produce toxic compounds. On the other hand, statistical design of experiments is widely used for ATPS optimization; it allows studying the effects of the variables involved in the purification and estimating their significant effects on selected responses and interactions. A 2⁴ factorial design with four central points (20 experiments) was employed to study the partition and purification of proteases produced by Pseudomonas sp. in a PEG/citrate ATPS. The ATPS were prepared with different sodium citrate concentrations [14, 16, and 18% (w/w)], pH values (7, 8, and 9), PEG molecular weights (2,000; 4,000; and 6,000 g/mol), and PEG concentrations [18, 20, and 22% (w/w)]. All system components were mixed with 15% (w/w) of the fermented broth, and deionized water was added to a final weight of 12.5 g. The systems were then mixed and kept at room temperature until two-phase separation was reached. The volumes of the top and bottom phases were measured, and aliquots from both phases were collected for subsequent proteolytic activity and total protein determination. The influence of variables such as PEG molar mass (MPEG), PEG concentration (CPEG), citrate concentration (CSal), and pH was evaluated on the following responses: purification factor (PF), activity yield (Y), partition coefficient (K), and selectivity (S). STATISTICA version 10 was used for the analysis. According to the obtained results, higher levels of CPEG and MPEG had a positive effect on extraction, while pH did not influence the process. On the other hand, CSal could be related to low values of Y because citrate ions have a negative effect on solubility and enzyme structure. The optimum values of Y (66.4%), PF (1.8), K (5.5), and S (4.3) were obtained at CSal (18%), MPEG (6,000 g/mol), CPEG (22%), and pH 9. These results indicate that the PEG/citrate system is suitable for purifying these Pseudomonas sp. proteases from fermented broth as a first purification step.
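
The run count is easy to verify: a two-level design over four factors gives 2⁴ = 16 corner runs, plus the four center points, for 20 experiments. A minimal sketch that enumerates the design with the levels reported above (center points at the mid levels):

```python
from itertools import product

factors = {
    "C_sal (% w/w)": (14, 18),
    "pH": (7, 9),
    "M_PEG (g/mol)": (2000, 6000),
    "C_PEG (% w/w)": (18, 22),
}

# 2^4 corner runs from all low/high combinations.
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]

# Four replicated center points at the mid levels.
center = {"C_sal (% w/w)": 16, "pH": 8, "M_PEG (g/mol)": 4000, "C_PEG (% w/w)": 20}
runs += [center] * 4

print(len(runs))  # 20 experiments, as in the abstract
```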

Keywords: citrate, polyethylene glycol, protease, Pseudomonas sp

Procedia PDF Downloads 168
295 The Effects of Street Network Layout on Walking to School

Authors: Ayse Ozbil, Gorsev Argin, Demet Yesiltepe

Abstract:

Data for this cross-sectional study were drawn from questionnaires conducted in 10 elementary schools (1,000 students, ages 12-14) located in Istanbul, Turkey. School environments (1,600-meter buffers around each school) were evaluated through GIS-based land-use data (parcel-level land use density) and street-level topography. Street networks within the same buffers were evaluated using angular segment analysis (Integration and Choice) implemented in Depthmap, as well as two segment-based connectivity measures, namely Metric Reach and Directional Reach, implemented in GIS. Segment Angular Integration measures how accessible each space is from all the others within the radius, using the least-angle measure of distance. Segment Angular Choice measures how many times a space is selected on journeys between all pairs of origins and destinations. Metric Reach captures the density of streets and street connections accessible from each individual road segment. Directional Reach measures the extent to which the entire street network is accessible with few direction changes. In addition, socio-economic characteristics (annual income, car ownership, education level) of parents, obtained from parental questionnaires, were included in the analysis. It is shown that the surrounding street network configuration is strongly associated with both walk-mode shares and average walking distances to/from school when controlling for parental socio-demographic attributes as well as land-use composition and topographic features in school environments. More specifically, the findings suggest that the scale at which urban form has an impact on pedestrian travel is considerably larger than a few blocks around the school.

Keywords: Istanbul, street network layout, urban form, walking to/from school

Procedia PDF Downloads 379
294 When Conducting an Analysis of Workplace Incidents, It Is Imperative to Meticulously Calculate Both the Frequency and Severity of Injuries Sustained

Authors: Arash Yousefi

Abstract:

Experts suggest that relying exclusively on parameters to convey a situation or establish a condition may not be adequate. Assessing and appraising incidents in a system based on accident parameters, such as accident frequency, lost workdays, or fatalities, may not always be precise and may occasionally be erroneous. The accident frequency rate is a metric that relates the number of accidents causing lost work time due to injuries to the total working hours of personnel over a year. Traditionally, this was calculated on a basis of one million working hours, but the American Occupational Safety and Health Administration (OSHA) has updated its standards, and a new basis of 200,000 working hours is now used to compute the frequency rate of accidents. It is crucial to ensure that the total working hours of employees are consistently represented when calculating individual event and incident numbers. The accident severity rate is a metric used to determine the amount of working time lost during a given period, often a year, in relation to the total number of working hours. It measures the proportion of work hours lost compared to the total number of useful working hours, which provides valuable insight into the number of days lost due to work-related incidents per working hour. Calculating the severity of an incident can be difficult if a worker suffers permanent disability or death; to determine the lost days in such cases, the coefficients specified in the tables of equivalent days in the OSHA or ANSI standards for disabling injuries are used. The accident frequency rate denotes how often accidents occur, while the accident severity rate specifies the extent of the damage and injury caused by these accidents. Together, these coefficients are crucial in accurately assessing the magnitude and impact of accidents.
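
A minimal sketch of the two rates on the 200,000-hour basis described above; the plant figures are hypothetical:

```python
BASE_HOURS = 200_000  # ~100 full-time workers x 2,000 hours/year

def frequency_rate(lost_time_injuries, hours_worked):
    """Lost-time injuries per 200,000 hours worked."""
    return lost_time_injuries * BASE_HOURS / hours_worked

def severity_rate(days_lost, hours_worked):
    """Days lost or charged (per OSHA/ANSI equivalence tables for
    disabling injuries) per 200,000 hours worked."""
    return days_lost * BASE_HOURS / hours_worked

# Hypothetical plant: 350 employees x 2,000 h, 7 lost-time injuries, 120 days lost.
hours = 350 * 2000
print(frequency_rate(7, hours))   # 2.0
print(severity_rate(120, hours))  # ~34.3
```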

Keywords: incidents, safety, analysis, frequency, severity, injuries, determine

Procedia PDF Downloads 55
293 Classification of Red, Green and Blue Values from Face Images Using k-NN Classifier to Predict the Skin or Non-Skin

Authors: Kemal Polat

Abstract:

In this study, it has been estimated whether an image is skin by using the RGB values obtained from a camera and a k-nearest neighbor (k-NN) classifier. The dataset used in this study has an unbalanced distribution and a linearly non-separable structure; this can also be called a big data problem. The Skin dataset was taken from the UCI machine learning repository. As the classifier, we used the k-NN method to handle this big data problem, with the k value set to 1. To train and test the k-NN classifier, a 50-50% training-testing partition was used. As performance metrics, the TP rate, FP rate, precision, recall, F-measure, and AUC were used to evaluate the performance of the k-NN classifier. The obtained results are as follows: 0.999, 0.001, 0.999, 0.999, 0.999, and 1.00. As can be seen from these results, the proposed method can be used to predict whether an image is skin or not.
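
A minimal sketch of the described pipeline with scikit-learn; the feature matrix below is a random placeholder for the UCI Skin Segmentation data, kept only to show the 50-50 split, k = 1, and the metric computation:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import classification_report, roc_auc_score

# Placeholder for the UCI Skin Segmentation data: B, G, R values + label.
rng = np.random.default_rng(0)
X = rng.integers(0, 256, size=(1000, 3)).astype(float)
y = rng.integers(0, 2, size=1000)

# 50-50% training-testing partition, k = 1, as in the abstract.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
clf = KNeighborsClassifier(n_neighbors=1).fit(X_tr, y_tr)
y_pred = clf.predict(X_te)

print(classification_report(y_te, y_pred))   # precision, recall, F-measure
print("AUC:", roc_auc_score(y_te, y_pred))
```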

Keywords: k-NN classifier, skin or non-skin classification, RGB values, classification

Procedia PDF Downloads 219
292 Effects of Macro and Micro Nutrients on Growth and Yield Performances of Tomato (Lycopersicon esculentum MILL.)

Authors: K. M. S. Weerasinghe, A. H. K. Balasooriya, S. L. Ransingha, G. D. Krishantha, R. S. Brhakamanagae, L. C. Wijethilke

Abstract:

Tomato (Lycopersicon esculentum Mill.) is a major horticultural crop, with an estimated global production of over 120 million metric tons, and it ranks first as a processing crop. The average tomato productivity in Sri Lanka (11 metric tons/ha) is much lower than the world average (24 metric tons/ha). To meet the tomato demand of the increasing population, productivity has to be intensified through agronomic techniques. Nutrition is one of the main factors governing the growth and yield of tomato, and soil, the main nutrient source, affects plant growth and the quality of the produce. Continuous cropping, improper fertilizer usage, etc., cause widespread nutrient deficiencies. Therefore, synthetic fertilizers and organic manures were introduced to enhance plant growth and maximize crop yields. In this study, the effects of macro and micronutrient supplementation on the growth and yield of tomato were investigated. The selected tomato variety was Maheshi, and plants were grown at the Regional Agricultural Research Centre, Makadura, under the Department of Agriculture (DOA) recommended macronutrients and various combinations of the Ontario-recommended dosages of secondary and micronutrient fertilizer supplementations. There were six treatments in this experiment; each treatment was replicated three times, and each replicate consisted of six plants. Other than the DOA recommendation, five combinations of the Ontario-recommended dosage of secondary and micronutrients for tomato were used as treatments. The treatments were arranged in a Randomized Complete Block Design, and all cultural practices were carried out according to the DOA recommendations. The mean data were subjected to statistical analysis using the SAS package and mean separation procedures (Duncan’s Multiple Range Test at the 5% probability level). Treatments containing secondary and micronutrients significantly increased most of the growth parameters: plant height, plant girth, number of leaves, leaf area index, etc. Fruits harvested from pots amended with macro, secondary, and micronutrients also performed best in terms of total yield and yield quality, compared to pots amended with the DOA recommended dosage of fertilizer for tomato. This could be due to the application of all essential macro and micronutrients, which raises photosynthetic activity and promotes efficient translocation and utilization of photosynthates, causing rapid cell elongation and cell division in the actively growing regions of the plant and thereby stimulating growth and yield. The experiment revealed and highlighted the requirement for essential macro, secondary, and micronutrient fertilizer supplementation in tomato farming. The study indicated that macro and micronutrient supplementation practices can influence the growth and yield performance of tomato and that this is a promising approach to achieving potential tomato yields.

Keywords: macro and micronutrients, tomato, SAS package, photosynthates

Procedia PDF Downloads 422
291 A Theoretical Study on Pain Assessment through Human Facial Expression

Authors: Mrinal Kanti Bhowmik, Debanjana Debnath Jr., Debotosh Bhattacharjee

Abstract:

Facial expression is undeniably a part of human manner. It is a significant channel for human communication and can be used to extract emotional features accurately. People in pain often show variations in facial expression that are readily observable to others. A core set of facial actions is likely to occur or to increase in intensity when people are in pain. To codify such changes in facial appearance, a system known as the Facial Action Coding System (FACS) was pioneered by Ekman and Friesen for human observers. According to Prkachin and Solomon, a subset of these actions carries the bulk of the information about pain; on this basis, the Prkachin and Solomon Pain Intensity (PSPI) metric is defined. It is therefore important to note that facial expressions, being a behavioral source in communication media, provide an important opening into the issues of non-verbal communication of pain. People express their pain in many ways, and this pain behavior is the basis on which most inferences about pain are drawn in clinical and research settings. Hence, to understand the roles of different pain behaviors, it is essential to study their properties. For the past several years, studies have concentrated on the properties of one specific form of pain behavior, i.e., facial expression. This paper presents a comprehensive study on pain assessment that can model and estimate the intensity of pain a patient is suffering. It also reviews the historical background of different pain assessment techniques in the context of painful expressions. Different approaches incorporate FACS from a psychological point of view and a pain intensity score using the PSPI metric in pain estimation. The paper gives an in-depth analysis of the different approaches used in pain estimation and presents the observations found for each technique. It also offers a brief study of the distinguishing features of real and fake pain. The necessity of the study therefore lies in the emerging field of painful face assessment in clinical settings.
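
For reference, the PSPI score mentioned above combines FACS action unit (AU) intensities as

```latex
\text{PSPI} \;=\; \text{AU4} \;+\; \max(\text{AU6}, \text{AU7}) \;+\; \max(\text{AU9}, \text{AU10}) \;+\; \text{AU43}
```

i.e., brow lowering, the stronger of cheek raising and lid tightening, the stronger of nose wrinkling and upper lip raising, plus eye closure.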

Keywords: facial action coding system (FACS), pain, pain behavior, Prkachin and Solomon pain intensity (PSPI)

Procedia PDF Downloads 304
290 Dynamic Modeling of Orthotropic Cracked Materials by X-FEM

Authors: S. Houcine Habib, B. Elkhalil Hachi, Mohamed Guesmi, Mohamed Haboussi

Abstract:

In this paper, the dynamic fracture behavior of cracked orthotropic structures is modeled using the extended finite element method (X-FEM). In this approach, the finite element model is first created and then enriched with special orthotropic crack-tip enrichment functions and Heaviside functions in the framework of the partition of unity. The mixed-mode stress intensity factors (SIFs) are computed using the interaction integral technique, based on the J-integral, in order to predict the cracking behavior of the structure. These procedures are programmed and implemented in an in-house software platform. To assess the accuracy of the developed code, results obtained by the proposed method are compared with those in the literature.
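
For reference, the partition-of-unity enrichment mentioned above takes the standard X-FEM form

```latex
u^h(x) \;=\; \sum_{i \in I} N_i(x)\, u_i
\;+\; \sum_{j \in J} N_j(x)\, H(x)\, a_j
\;+\; \sum_{k \in K} N_k(x) \sum_{\alpha=1}^{4} F_\alpha(x)\, b_k^{\alpha}
```

where H(x) is the Heaviside function enriching nodes of elements fully cut by the crack, F_α(x) are the (here orthotropic) crack-tip branch functions, and a_j, b_k^α are the added degrees of freedom.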

Keywords: X-FEM, composites, stress intensity factor, crack, dynamic orthotropic behavior

Procedia PDF Downloads 533
289 Isolation and Identification of Compounds from the Leaves of Actinodaphne sesquipedalis Hook. F. Var. Glabra (Lauraceae)

Authors: O. Hanita, S. A. Ainnul Hamidah, A. H. Yang Zalila, M. R. Siti Nadiah, M. H. Najihah, M. A. Hapipah

Abstract:

The crude extract of the leaves of Actinodaphne sesquipedalis Hook. F. Var. Glabra (Kochummen) was subjected to phytochemical investigation. The crude methanolic extract was partitioned with different solvents of increasing polarity (n-hexane, dichloromethane, and methanol). The compounds were fractionated and isolated from the n-hexane partition using column chromatography, with silica gel 60 or Sephadex LH-20 as the stationary phase, and preparative thin-layer chromatography. The isolates were characterized using TLC, FTIR, UV spectrophotometry, and NMR spectroscopy. The n-hexane fractions yielded a total of four compounds, namely N-methyllaurotetanine (1), dicentrine (2), β-sitosterol (3), and stigmasterol (4). The results indicate that the leaves of Actinodaphne sesquipedalis may provide a rich source of alkaloids and triterpenoids.

Keywords: actinodaphne sesquipedalis, alkaloids, phytochemical investigation, triterpenoids

Procedia PDF Downloads 368