Search results for: underlying pitch targets.

101 Contribution of Vitaton (β-Carotene) to the Rearing Factors, Survival Rate and Visual Flesh Color of Rainbow Trout in Comparison with Astaxanthin

Authors: M. Ghotbi, M. Ghotbi, Gh. Azari Takami

Abstract:

In this study, Vitaton (an organic supplement containing fermentative β-carotene) and synthetic astaxanthin (CAROPHYLL® Pink) were evaluated as pro-growth factors in the rainbow trout diet. An 8-week feeding trial was conducted to determine the effects of Vitaton versus astaxanthin on rearing factors, survival rate and visual flesh color of rainbow trout (Oncorhynchus mykiss) with an initial weight of 196±5. Four practical diets were formulated to contain 50 and 80 ppm of β-carotene and astaxanthin, and a control diet was prepared without any pigment. Each diet was fed to triplicate groups of fish reared in fresh water. Fish were fed twice daily. The water temperature fluctuated between 12 and 15 °C and the dissolved oxygen content was between 7 and 7.5 mg/L during the experimental period. At the end of the experiment, growth and food utilization parameters and survival rate were unaffected by the dietary treatments (p>0.05). There was also no significant difference in carcass yield between treatments (p>0.05). No significant difference was recognized between the visual flesh color (SalmoFan score) of fish fed the Vitaton-containing diets and the control. By contrast, feeding on diets containing 50 and 80 ppm of astaxanthin increased the SalmoFan score (flesh astaxanthin concentration) from <20 (<1 mg/kg) to 23.33 (2.03 mg/kg) and 27.67 (5.74 mg/kg), respectively. Ultimately, a significant difference was seen between the flesh carotenoid concentrations of fish fed the astaxanthin-containing treatments and the control treatment (p<0.05). It should be mentioned that only the raw fillet color of fish belonging to the 80 ppm astaxanthin treatment was close to the color targets (SalmoFan scores) adopted for harvest-size fish.

Keywords: Astaxanthin, Flesh color, Rainbow trout, Vitaton, β-carotene.

100 Innovative Teaching in Systems Analysis and Design – An Action Research Project

Authors: Imelda Smit

Abstract:

Systems Analysis and Design is a key subject in Information Technology courses, but students do not find it easy to cope with, since it is not “precise” like programming and not exact like Mathematics. It is a subject that works with many concepts, modeling ideas into visual representations and then translating the pictures into a real-life system. To complicate matters, users who are not necessarily familiar with computers need to give their inputs to ensure that they get the system they need. Systems Analysis and Design also covers two fields, namely Analysis, focusing on the analysis of the existing system, and Design, focusing on the design of the new system. To be able to test the analysis and design of a system, it is necessary to develop a system, or at least a prototype of it, to test the validity of the analysis and design. The skills necessary in each aspect differ vastly: project management skills, database knowledge and object-oriented principles are all necessary. In the context of a developing country, where students enter tertiary education underprepared and the digital divide is alive and well, students need to be motivated to learn the necessary skills and get an opportunity to test them in a “live” but protected environment – within the framework of a university. The purpose of this article is to improve the learning experience in Systems Analysis and Design by reviewing the underlying teaching principles used, the teaching tools implemented, the observations made and the reflections that will influence future developments in Systems Analysis and Design. Action research principles allow the focus to be on a few problematic aspects during a particular semester.

Keywords: Action Research, Project Development, Systems Analysis and Design, Technology in Teaching.

99 Surrogate-Based Evolutionary Algorithm for Design Optimization

Authors: Maumita Bhattacharya

Abstract:

Optimization is often a critical issue in system design problems. Evolutionary Algorithms are population-based, stochastic search techniques widely used as efficient global optimizers. However, finding the optimal solution to complex high-dimensional, multimodal problems often requires highly computationally expensive function evaluations and hence is practically prohibitive. The Dynamic Approximate Fitness based Hybrid EA (DAFHEA) model presented in our earlier work [14] reduced computation time by the controlled use of meta-models to partially replace actual function evaluation with approximate function evaluation. However, the underlying assumption in DAFHEA is that the training samples for the meta-model are generated from a single uniform model. Situations like model formation involving variable input dimensions and noisy data certainly cannot be covered by this assumption. In this paper we present an enhanced version of DAFHEA that incorporates a multiple-model based learning approach for the SVM approximator. DAFHEA-II (the enhanced version of the DAFHEA framework) also overcomes the high computational expense involved with the additional clustering requirements of the original DAFHEA framework. The proposed framework has been tested on several benchmark functions and the empirical results illustrate the advantages of the proposed technique.
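
A minimal sketch of the surrogate-assisted loop the abstract describes, with an SVM regressor standing in for the meta-model. The test function, population size, mutation scale and re-evaluation budget are illustrative assumptions, not the DAFHEA-II settings:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
expensive = lambda X: np.sum(X**2, axis=1)   # stand-in for a costly simulation

pop = rng.uniform(-5, 5, size=(40, 10))      # 40 candidates in 10 dimensions
fit = expensive(pop)                         # exact evaluations for the initial sample
surrogate = SVR(kernel="rbf").fit(pop, fit)  # meta-model trained on exact data

for gen in range(30):
    children = pop + rng.normal(scale=0.3, size=pop.shape)  # Gaussian mutation
    approx = surrogate.predict(children)                    # cheap approximate fitness
    promising = np.argsort(approx)[:8]
    exact = expensive(children[promising])                  # re-check only the best few exactly
    worst = np.argsort(fit)[-8:]                            # replace worst parents if improved
    better = exact < fit[worst]
    pop[worst[better]] = children[promising][better]
    fit[worst[better]] = exact[better]
    surrogate.fit(pop, fit)                                 # refresh the meta-model

print(fit.min())   # best exactly-evaluated fitness found
```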

Keywords: Evolutionary algorithm, Fitness function, Optimization, Meta-model, Stochastic method.

98 A Consumption-Based Hybrid Life Cycle Assessment of Carbon Footprints in California: High Footprints in Small Urban Households

Authors: Jukka Heinonen

Abstract:

Higher density reduces distances and private car dependency and thus reduces greenhouse gas emissions (GHGs). As a result, increased density has been given a central role among urban development targets. However, it is not just travel behavior that changes with density. Rather, consumption patterns, or overall lifestyles, change along with changing urban structure, particularly with changing housing types and consumption opportunities. Furthermore, elevated consumption of services, more frequent flying and less intra-household sharing have been shown to potentially outweigh the gains from reduced driving in denser urban settlements. In this study, the geography of carbon footprints (CFs) in California is analyzed, paying close attention to household size differences and the resulting economies-of-scale advantages and disadvantages. A hybrid life cycle assessment (LCA) framework is employed together with consumer expenditure data to assess the CFs. According to the study, small urban households have the highest CFs in California. Their transport-related emissions are significantly lower than those of residents of less urbanized areas, but higher emissions from other consumption categories, together with a low degree of sharing of goods, outweigh the gains. Two functional units, per capita and per household, are used to analyze the CFs and to demonstrate the importance of household size. The lifestyle impacts visible through the consumption data are also discussed. The study suggests that there are still significant gaps in our understanding of the premises of low-carbon human settlements.

Keywords: Carbon footprint, life cycle assessment, consumption, lifestyle, household size, economies-of-scale.

97 The Application of Real Options to Capital Budgeting

Authors: George Yungchih Wang

Abstract:

Real options theory suggests that the managerial flexibility embedded within irreversible investments can account for a significant value in project valuation. Although the argument has become the dominant focus of capital investment theory over the decades, recent survey literature in capital budgeting indicates that corporate practitioners still do not explicitly apply real options in investment decisions. In this paper, we explore how real options decision criteria can be transformed into equivalent capital budgeting criteria under uncertainty, assuming that the underlying stochastic process follows a geometric Brownian motion (GBM), a mixed diffusion-jump (MX), or a mean-reverting process (MR). These equivalent valuation techniques can be readily decomposed into conventional investment rules and “option impacts”, the latter of which describe the impacts on optimal investment rules once the option value is considered. Based on numerical analysis and Monte Carlo simulation, three major findings are derived. First, it is shown that real options can be successfully integrated into the mindset of conventional capital budgeting. Second, the inclusion of option impacts tends to delay investment; the delay effect is the most significant under a GBM process and the least significant under an MR process. Third, it is optimal to adopt the new capital budgeting criteria in investment decision-making, and adopting a suboptimal investment rule without considering real options could lead to a substantial loss in value.
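
As a hedged illustration of how an option impact modifies a conventional investment rule under a GBM process, the classical perpetual-option threshold (the standard Dixit-Pindyck result, not necessarily the exact criteria derived in this paper) can be computed as follows; the parameter values are arbitrary examples:

```python
import math

def gbm_invest_threshold(r, delta, sigma, cost):
    """Value threshold V* above which immediate investment is optimal under GBM.

    beta1 is the positive root of 0.5*sigma^2*b*(b-1) + (r - delta)*b - r = 0.
    The plain NPV rule invests at V = cost, so beta1/(beta1 - 1) is the option impact.
    """
    a = 0.5 * sigma**2
    b = (r - delta) - a
    beta1 = (-b + math.sqrt(b**2 + 4 * a * r)) / (2 * a)
    return beta1 / (beta1 - 1) * cost

# illustrative numbers: the option impact lifts the trigger well above cost (~217 vs 100)
print(gbm_invest_threshold(r=0.05, delta=0.04, sigma=0.2, cost=100.0))
```

The multiple beta1/(beta1 - 1) exceeds one, which is exactly the delay effect the abstract reports: the option-adjusted rule waits for a higher project value than the conventional NPV rule would.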

Keywords: Real options, capital budgeting, geometric Brownian motion, mixed diffusion-jump, mean-reverting process.

96 Performance Analysis of New Types of Reference Targets Based on Spaceborne and Airborne SAR Data

Authors: Y. S. Zhou, C. R. Li, L. L. Tang, C. X. Gao, D. J. Wang, Y. Y. Guo

Abstract:

The triangular trihedral corner reflector (CR) has been widely used as a point target for synthetic aperture radar (SAR) calibration and image quality assessment. The additional “tip” of the triangular plate does not contribute to the reflector's theoretical RCS, and if it interacts with a perfectly reflecting ground plane, it will yield an increase of RCS at the radar bore-sight and decrease the accuracy of SAR calibration and image quality assessment. To address this problem, two types of CRs were manufactured. One was the hexagonal trihedral CR, a self-illuminating CR with relatively small plate edge length, since large edge length usually introduces unexpected edge diffraction error. The other was the triangular trihedral CR with an extended bottom plate, which takes the effect of the “tip” into the total RCS. In order to assess the performance of the two new types of CRs, a flight campaign over the National Calibration and Validation Site for High Resolution Remote Sensors was carried out. Six hexagonal trihedral CRs and two bottom-extended trihedral CRs, as well as several traditional triangular trihedral CRs, were deployed. A KOMPSAT-5 X-band SAR image was acquired for the performance analysis of the hexagonal trihedral CRs, and C-band airborne SAR images were acquired for the performance analysis of the bottom-extended trihedral CRs. The analysis results showed that the impulse response functions of both the hexagonal trihedral CRs and the bottom-extended trihedral CRs were much closer to the ideal sinc function than those of the traditional triangular trihedral CRs. The flight campaign results validated the advantages of the new types of CRs, which may be useful in future SAR calibration missions.
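
For context, the theoretical peak RCS of the triangular trihedral CR mentioned above follows a standard closed form; a short sketch, where the edge length is an illustrative assumption and the frequency approximates KOMPSAT-5's X-band carrier (~9.66 GHz):

```python
import math

def triangular_trihedral_peak_rcs(a, wavelength):
    # standard relation: sigma = 4*pi*a^4 / (3*lambda^2), with a the inner edge length
    return 4 * math.pi * a**4 / (3 * wavelength**2)

lam = 3e8 / 9.66e9                          # X-band wavelength, roughly 3.1 cm
sigma = triangular_trihedral_peak_rcs(a=0.7, wavelength=lam)
print(10 * math.log10(sigma), "dBsm")       # RCS in decibels relative to 1 m^2
```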

Keywords: Synthetic Aperture Radar, calibration, corner reflector, KOMPSAT-5.

95 Changing Roles and Skills of Urban Planners in the Turkish Planning System

Authors: Fatih Eren

Abstract:

This research aims to answer the question of which knowledge and skills Turkish urban planners need in their professional practice. Understanding change in cities, making predictions, making urban decisions and putting them into practice, working with actors from different organizations and academic disciplines, persuading people to accept something, and developing good personal and professional relationships have become very complex and difficult in today's world. Urban planners work in many institutions, in positions that differ widely by field of activity, and all planners are forced to develop certain knowledge and skills to succeed in their work in Turkey. This study explores what urban planners do in the global information age. It is the product of a comprehensive nationwide research project. In-depth interviews were conducted with 174 experienced urban planners, who work in different public institutions and private companies in varied positions within the Turkish Planning System, to find out the knowledge and skills needed by next-generation urban planners. The main characteristics of next-generation urban planners are defined, and the skills planners need today are explored in this paper. Findings show that the positivist (traditional) planning approach has given way to anti-positivist planning approaches in the Turkish Planning System, so next-generation urban planners who seek success and want to carve out a niche for themselves in business life have to equip themselves with innovative skills. The results section also includes useful and instructive findings for planners on what it means to be an urban planner and on the ideal content and context of planning education at universities in the global age.

Keywords: The global information age, urban planners, innovative job skills, planning education.

94 Evaluation of Easy-to-Use Energy Building Design Tools for Solar Access Analysis in Urban Contexts: Comparison of Friendly Simulation Design Tools for Architectural Practice in the Early Design Stage

Authors: M. Iommi, G. Losco

Abstract:

The current building sector is focused on the reduction of energy requirements, on renewable energy generation and on the regeneration of existing urban areas. These targets need to be addressed with a systemic approach, considering several aspects simultaneously, such as climate conditions, lighting conditions, solar radiation, PV potential, etc. Solar access analysis is a well-known method for analyzing solar potential, but in recent years simulation tools have provided more effective opportunities to perform this type of analysis, particularly in the early design stage. Nowadays, the study of solar access depends on how easily and rapidly simulation tools can be used during the design process. This study presents a comparison of three simulation tools, from the point of view of the user, with the aim of highlighting differences in their ease of use. Using a real urban context as a case study, three tools, Ecotect, Townscope and Heliodon, are tested by building models and running simulations, examining the capabilities and output results of solar access analysis. The evaluation of the ease of use of these tools is based on a set of detected parameters and features, such as the types of simulation, requirements for input data, types of results, etc. As a result, a framework is provided in which the features and capabilities of each tool are shown. This framework highlights the differences among these tools in functions, features and capabilities. The aim of this study is to support users and to improve the integration of solar access simulation tools into the design process.

Keywords: Solar access analysis, energy building design tools, urban planning, solar potential.

93 Applying Biosensors’ Electromyography Signals through an Artificial Neural Network to Control a Small Unmanned Aerial Vehicle

Authors: Mylena McCoggle, Shyra Wilson, Andrea Rivera, Rocio Alba-Flores, Valentin Soloiu

Abstract:

This work describes a system that uses electromyography (EMG) signals obtained from muscle sensors and an Artificial Neural Network (ANN) for signal classification and pattern recognition to control a small unmanned aerial vehicle using specific arm movements. The main objective of this endeavor is the development of an intelligent interface that allows the user to control the flight of a drone beyond direct manual control. The sensors used were MyoWare Muscle sensors, which contain two EMG electrodes, used to collect signals from the posterior (extensor) and anterior (flexor) forearm muscles and from the bicep. The raw signals from each sensor were collected using an Arduino Uno. Data processing algorithms were developed to classify the signals generated by the arm's muscles when performing specific movements, namely flexing, resting, and motion of the arm. With these arm motions, roll control of the drone was achieved. MATLAB software was used to condition the signals and prepare them for classification. To generate the input vector for the ANN and perform the classification, the root mean square and the standard deviation were computed for the signals from each electrode. The neuromuscular information was trained using an ANN with a single hidden layer of 10 neurons to categorize the four targets. The classification results show that an accuracy of 97.5% was obtained. Afterwards, the classification results are used to generate the appropriate control signals from the computer to the drone through a Wi-Fi network connection. These procedures were tested successfully, with the drone responding in real time to the commanded inputs.
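
A minimal sketch of the feature extraction and classifier stage described above: RMS and standard deviation per electrode feeding an ANN with one 10-neuron hidden layer. The synthetic windows and labels are placeholders for the recorded MyoWare data, and scikit-learn's MLP stands in for whatever MATLAB network the authors actually trained:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def emg_features(window):
    """window: (n_samples, n_electrodes) raw EMG -> RMS and std per electrode."""
    rms = np.sqrt(np.mean(window**2, axis=0))
    return np.concatenate([rms, window.std(axis=0)])

# placeholder data: 200 windows of 3-electrode EMG, 4 gesture classes
windows = rng.normal(size=(200, 256, 3)) * rng.uniform(0.5, 2.0, size=(200, 1, 3))
labels = rng.integers(0, 4, size=200)

X = np.array([emg_features(w) for w in windows])
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
clf.fit(X, labels)
print(clf.score(X, labels))   # training accuracy on the synthetic stand-in data
```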

Keywords: Biosensors, electromyography, Artificial Neural Network, Arduino, drone flight control, machine learning.

92 Improvement of Overall Equipment Effectiveness through Total Productive Maintenance

Authors: S. Fore, L. Zuze

Abstract:

Frequent machine breakdowns, low plant availability and increased overtime are a great threat to a manufacturing plant, as they increase its operating costs. The main aim of this study was to improve Overall Equipment Effectiveness (OEE) at a manufacturing company through the implementation of innovative maintenance strategies. A case study approach was used. The paper focuses on improving maintenance in a manufacturing setup using an innovative mix of maintenance regimes to improve overall equipment effectiveness. Interviews, reviews of documentation and historical records, and direct and participatory observation were used as data collection methods during the research. Production is usually measured by the total kilowatts of motors produced per day. The target at 91% availability is 75 kilowatts a day. Reduced demand and a lack of raw materials, particularly imported items, are adversely affecting the manufacturing operations. The company had to reset its target from the usual figure of 250 kilowatts per day to a mere 75 per day due to lower availability of machines as a result of breakdowns as well as the lack of raw materials. Price reductions and uncertainties, as well as general machine breakdowns, further lowered production. Some recommendations were given. For instance, employee empowerment in the company will enhance responsibility and authority to improve and totally eliminate the six big losses. If the maintenance department is to realise its proper function in a progressive, innovative industrial society, then its personnel must be continuously trained to meet current needs as well as future requirements. To make the maintenance planning system effective, it is essential to keep track of all corrective maintenance jobs and preventive maintenance inspections. For large processing plants these cannot be handled manually. It was therefore recommended that the company implement a Computerised Maintenance Management System (CMMS).
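
OEE itself is the product of availability, performance and quality; a small worked sketch (the shift figures below are made up for illustration, not the case company's data):

```python
def oee(planned_min, downtime_min, ideal_cycle_min, total_units, good_units):
    run_time = planned_min - downtime_min
    availability = run_time / planned_min                     # uptime fraction
    performance = (ideal_cycle_min * total_units) / run_time  # speed vs ideal cycle
    quality = good_units / total_units                        # right-first-time fraction
    return availability, performance, quality, availability * performance * quality

# e.g. a 480-minute shift losing 43 minutes to breakdowns gives ~91% availability
print(oee(planned_min=480, downtime_min=43, ideal_cycle_min=1.5,
          total_units=250, good_units=240))
```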

Keywords: Maintenance, Manufacturing, Overall Equipment Effectiveness

91 Fuzzy Logic Approach to Robust Regression Models of Uncertain Medical Categories

Authors: Arkady Bolotin

Abstract:

Dichotomization of the outcome by a single cut-off point is an important part of various medical studies. Usually, the relationship between the resulting dichotomized dependent variable and the explanatory variables is analyzed with linear regression, probit regression or logistic regression. However, in many real-life situations, the cut-off point dividing the outcome into two groups is unknown and can be specified only approximately, i.e. surrounded by some (small) uncertainty. This means that, to have any practical meaning, the regression model must be robust to this uncertainty. In this paper, we show that neither the beta in the linear regression model nor its significance level is robust to small variations in the dichotomization cut-off point. As an alternative, robust approach to the problem of uncertain medical categories, we propose to use the linear regression model with a fuzzy membership function as the dependent variable. This fuzzy membership function denotes the degree to which the value of the underlying (continuous) outcome falls below or above the dichotomization cut-off point. We demonstrate that the linear regression model of the fuzzy dependent variable can be insensitive to the uncertainty in the cut-off point location. We present modeling results from a real study of low hemoglobin levels in infants: we systematically test the robustness of the binomial regression model and of the linear regression model with the fuzzy dependent variable by changing the boundary for the category Anemia, and show that the behavior of the latter model persists over a quite wide interval.
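
A sketch of the robustness check the abstract describes: replace the hard dichotomization with a smooth membership function and compare how the regression slope responds as the cut-off moves. The logistic membership form, the hemoglobin-like numbers and the cut-off grid are all illustrative assumptions, not the study's data:

```python
import numpy as np

def membership_below(y, cutoff, width=0.3):
    # degree to which the outcome falls below the cut-off (hypothetical logistic form)
    return 1.0 / (1.0 + np.exp((y - cutoff) / width))

rng = np.random.default_rng(1)
x = rng.normal(size=500)                               # explanatory variable
y = 11.0 + 0.8 * x + rng.normal(scale=1.0, size=500)   # continuous outcome

for cutoff in (10.8, 11.0, 11.2):                      # shift the 'Anemia' boundary
    hard = (y < cutoff).astype(float)                  # dichotomized dependent variable
    soft = membership_below(y, cutoff)                 # fuzzy dependent variable
    beta_hard = np.polyfit(x, hard, 1)[0]
    beta_soft = np.polyfit(x, soft, 1)[0]
    print(f"cutoff={cutoff}: beta_hard={beta_hard:.3f}, beta_soft={beta_soft:.3f}")
```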

Keywords: Categorization, Uncertain medical categories, Binomial regression model, Fuzzy dependent variable, Robustness.

90 Currency Boards in Crisis: Experience of Baltic Countries

Authors: Gordana Kordić, Petra Palić

Abstract:

The European countries that during the past two decades based their exchange rate regimes on a currency board arrangement (CBA) are usually analysed from the perspective of the stabilisation effects of this corner-solution choice. There is an open discussion on the positive and negative sides of a strict exchange rate regime choice, although it should be seen as part of the transition process towards monetary union membership. The focus of the paper is on the Baltic countries, which, after two decades of a rigid exchange rate arrangement and strongly influenced by the global crisis, are finishing their path towards the euro zone. Besides its stabilising capacity, the CBA is a highly vulnerable regime with limited development potential. The rigidity of the exchange rate (and monetary) system, despite the credibility it ensures, does not leave enough (or any) space for adjustment and/or active crisis management. Still, the Baltics are in a process of recovery, with fiscal consolidation measures combined with (painful and politically unpopular) measures of internal devaluation. Today, two of them (Estonia and Latvia) are members of the euro zone, fulfilling their ultimate transition targets, but de facto exchanging one fixed regime for another. The paper analyses the challenges for the CBA in an unstable environment, since fixed regimes rely on imported stability and are sensitive to external shocks. With limited monetary instruments, these countries were oriented towards fiscal policies and used a combination of internal devaluation and tax policy measures. Despite their rather quick recovery, our second goal is to analyse the long-term influence that these measures had on the national economies.

Keywords: Currency Board Arrangement, internal devaluation, exchange rate regime, Great recession.

89 Measuring the Structural Similarity of Web-based Documents: A Novel Approach

Authors: Matthias Dehmer, Frank Emmert-Streib, Alexander Mehler, Jürgen Kilian

Abstract:

Most known methods for measuring the structural similarity of documents are based on, e.g., tag measures, path metrics and tree measures defined in terms of their DOM trees. Other methods measure similarity within the framework of the well-known vector space model. In contrast to these, we present a new approach to measuring the structural similarity of web-based documents represented by so-called generalized trees, which are more general than DOM trees, since DOM trees represent only directed rooted trees. We design a new similarity measure for graphs representing web-based hypertext structures. Our similarity measure is mainly based on a novel representation of a graph as strings of linear integers, whose components represent structural properties of the graph. The similarity of two graphs is then defined as the optimal alignment of the underlying property strings. In this paper we apply the well-known technique of sequence alignment to solve a novel and challenging problem: measuring the structural similarity of generalized trees. More precisely, we first transform our graphs, considered as high-dimensional objects, into linear structures. Then we derive similarity values from the alignments of the property strings in order to measure the structural similarity of generalized trees. Hence, we transform a graph similarity problem into a string similarity problem. We demonstrate that our similarity measure captures important structural information by applying it to two different test sets consisting of graphs representing web-based documents.
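
The core step, scoring two property strings by optimal alignment, can be sketched with a plain Needleman-Wunsch dynamic program; the integer strings, the match/mismatch/gap scores and the normalization below are illustrative choices, not the authors' exact parameterization:

```python
def align_score(s, t, match=1, mismatch=-1, gap=-1):
    """Global alignment score of two integer property strings."""
    n, m = len(s), len(t)
    D = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = i * gap                      # aligning a prefix against gaps
    for j in range(1, m + 1):
        D[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if s[i - 1] == t[j - 1] else mismatch
            D[i][j] = max(D[i - 1][j - 1] + sub,   # substitution / match
                          D[i - 1][j] + gap,       # gap in t
                          D[i][j - 1] + gap)       # gap in s
    return D[n][m]

# property strings extracted from two generalized trees (hypothetical values)
s, t = [2, 4, 4, 3, 1], [2, 4, 3, 3, 1]
similarity = align_score(s, t) / max(len(s), len(t))   # crude normalization
print(similarity)
```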

Keywords: Graph similarity, hierarchical and directed graphs, hypertext, generalized trees, web structure mining.

88 Ranking Genes from DNA Microarray Data of Cervical Cancer by a Local Tree Comparison

Authors: Frank Emmert-Streib, Matthias Dehmer, Jing Liu, Max Muhlhauser

Abstract:

The major objective of this paper is to introduce a new method to select genes from DNA microarray data. As a selection criterion, we suggest measuring the local changes in the correlation graph of each gene and selecting those genes whose local changes are largest. More precisely, we calculate correlation networks from DNA microarray data of cervical cancer, where each network represents a tissue of a certain tumor stage and each node in the network represents a gene. From these networks we extract one tree for each gene by a local decomposition of the correlation network. The interpretation of a tree is that it represents the n-nearest-neighbor genes on the n-th level of the tree, measured by the Dijkstra distance, and hence gives the local embedding of a gene within the correlation network. For the obtained trees we measure the pairwise similarity between trees rooted at the same gene, from normal to cancerous tissues. This evaluates the modification of the tree topology due to tumor progression. Finally, we rank the obtained similarity values from all tissue comparisons and select the top-ranked genes. For these genes, the local neighborhood in the correlation networks changes most between normal and cancerous tissues. As a result, we find that the top-ranked genes are candidates suspected to be involved in tumor growth. This indicates that our method captures essential information from the underlying DNA microarray data of cervical cancer.
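
A hedged proxy for the per-gene tree comparison: build each gene's local tree as Dijkstra-distance levels and compare the levels across conditions with a Jaccard overlap. Unit edge weights and the Jaccard score are simplifying assumptions here; the paper's actual similarity measure is the graph alignment named in the keywords:

```python
import networkx as nx

def local_levels(g, gene, depth=3):
    # genes grouped by Dijkstra distance from the root gene (unit edge weights assumed)
    dist = nx.single_source_dijkstra_path_length(g, gene, cutoff=depth)
    levels = {}
    for node, d in dist.items():
        if node != gene:
            levels.setdefault(int(d), set()).add(node)
    return levels

def tree_change(g_normal, g_tumor, gene, depth=3):
    a, b = local_levels(g_normal, gene, depth), local_levels(g_tumor, gene, depth)
    overlaps = [len(a.get(l, set()) & b.get(l, set()))
                / max(1, len(a.get(l, set()) | b.get(l, set())))
                for l in range(1, depth + 1)]
    return 1.0 - sum(overlaps) / depth     # larger value = bigger local change

# toy correlation networks for one gene; ranking would sort genes by tree_change
gn = nx.Graph([("g1", "g2"), ("g2", "g3"), ("g1", "g4")])
gt = nx.Graph([("g1", "g5"), ("g5", "g3"), ("g1", "g4")])
print(tree_change(gn, gt, "g1"))
```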

Keywords: Graph similarity, generalized trees, graph alignment, DNA microarray data, cervical cancer.

87 Transformation of Vocal Characteristics: A Review of Literature

Authors: Dong-Yan Huang, Ee Ping Ong, Susanto Rahardja, Minghui Dong, Haizhou Li

Abstract:

The transformation of vocal characteristics aims at modifying a voice such that the intelligibility of an aphonic voice is increased, or such that the voice of a speaker (source speaker) is perceived as if another speaker (target speaker) had uttered it. In this paper, the current state-of-the-art voice characteristics transformation methodology is reviewed. Special emphasis is placed on voice transformation methodology, and issues for improving the intelligibility and naturalness of the transformed speech are discussed. In particular, it is suggested to use the modulation theory of speech as a basis for research on high-quality voice transformation. This approach allows one to separate the linguistic, expressive, organic and perspective information of speech, based on an analysis of how they are fused when speech is produced. The theory therefore provides the fundamentals not only for manipulating non-linguistic, extra-/paralinguistic and intra-linguistic variables for voice transformation, but also for paving the way for easily transposing existing voice transformation methods to emotion-related voice quality transformation and speaking style transformation. From the perspectives of human speech production and perception, the popular voice transformation techniques are described and classified according to their underlying principles, whether drawn from the speech production mechanism, from the perception mechanism, or from both. In addition, the advantages and limitations of voice transformation techniques and the experimental manipulation of vocal cues are discussed through examples from past and present research. Finally, a conclusion and a road map toward more natural voice transformation algorithms are given.

Keywords: Voice transformation, Voice Quality, Emotion, Individuality, Speaking Style, Speech Production, Speech Perception.

86 Multiple Targets Classification and Fuzzy Logic Decision Fusion in Wireless Sensor Networks

Authors: Ahmad Aljaafreh

Abstract:

This paper proposes a hierarchical hidden Markov model (HHMM) to model the detection of M vehicles in a wireless sensor network (WSN). The HHMM contains an extra level of hidden Markov model to model the temporal transitions of each state of the first HMM. By modeling the temporal transitions, only those hypotheses with nonzero transition probabilities need to be tested. Thus, this method efficiently reduces the computation load, which is preferable in WSN applications. This paper integrates several techniques to optimize detection performance. The output of the states of the first HMM is modeled as a Gaussian Mixture Model (GMM), where the number of states and the number of Gaussians are experimentally determined, while the other parameters are estimated using Expectation Maximization (EM). The HHMM is used to model the sequence of local decisions, which are based on multiple hypothesis testing with a maximum likelihood approach. The states in the HHMM represent various combinations of vehicles of different types. Due to the statistical advantages of multisensor data fusion, we propose a heuristic based on fuzzy weighted majority voting to enhance cooperative classification of moving vehicles within a region that is monitored by a wireless sensor network. A fuzzy inference system weighs each local decision based on the signal-to-noise ratio of the acoustic signal for target detection and the signal-to-noise ratio of the radio signal for sensor communication. The spatial correlation among the observations of neighboring sensor nodes is efficiently utilized, as well as the temporal correlation. Simulation results demonstrate the efficiency of this scheme.
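
A compact sketch of fuzzy weighted majority voting at the fusion center: each node's vote is weighted by memberships derived from its acoustic SNR (detection reliability) and radio SNR (link reliability). The ramp membership bounds are illustrative assumptions, not the paper's tuned inference system:

```python
def ramp(x, lo, hi):
    # simple fuzzy membership: 0 below lo, 1 above hi, linear in between
    return min(1.0, max(0.0, (x - lo) / (hi - lo)))

def fuse(local_decisions, acoustic_snr_db, radio_snr_db):
    votes = {}
    for cls, a_snr, r_snr in zip(local_decisions, acoustic_snr_db, radio_snr_db):
        w = ramp(a_snr, 0.0, 20.0) * ramp(r_snr, 0.0, 20.0)   # joint reliability weight
        votes[cls] = votes.get(cls, 0.0) + w
    return max(votes, key=votes.get)

# three nodes disagree; the two reliable ones outvote the noisy one
print(fuse(["car", "truck", "car"], [18.0, 3.0, 15.0], [19.0, 10.0, 16.0]))
```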

Keywords: Classification, decision fusion, fuzzy logic, hidden Markov model

85 A Study on the Characteristics of the Korean Color Based on the Comparative Analysis of Korea, China and Japan's Porcelains

Authors: Sungwon Jo

Abstract:

Ceramics comprise the largest proportion of Korea's currently preserved cultural heritage (cited from “The Beauty of Old Ceramics of Korea” by Yoon Yong-iee). This researcher therefore conducted this investigation in an attempt to gain insight into Korea's past culture, and into the period lost to colonial rule and the Korean War, by looking into ceramics. Korea, China and Japan belong to a similar cultural bloc within the East Asian region, and their porcelains manifest distinctive national characteristics along with similarities. This research thus seeks to find the distinctive characteristics of Korean porcelain through a comparative analysis of those similarities and differences. These distinctive characteristics are manifested most effectively in the colors of the porcelains, which follow from the materials obtainable in Korea, China and Japan and from the production methods. Accordingly, this research seeks to identify the characteristics of Korean porcelain colors based on a comparative analysis of the porcelain colors. Porcelains were selected because they are the best-preserved cultural remains in Korea, and because they show both similarities and distinctive characteristics due to the cultural interchanges among Korea, China and Japan, which facilitates comparative study. The research targets include the porcelains of Korea, China and Japan. By comparing the colors of the porcelains from the three countries, each with its own distinctive characteristics, this research seeks to identify Korea-specific porcelain colors. These colors derive from materials that can be obtained only in Korea, and they are affected by the ideologies that governed at the time. This research is meaningful in that it identifies the colors that embody Korean culture and provides important data from the study of the characteristics of Korea-specific porcelains.

Keywords: The colors of Korean pottery, the colors of Chinese pottery, the colors of Japanese pottery, the unique identity of Korea, pottery history.

84 Entropic Measures of a Probability Sample Space and Exponential Type (α, β) Entropy

Authors: Rajkumar Verma, Bhu Dev Sharma

Abstract:

Entropy is a key measure in studies related to information theory and its many applications. Campbell was the first to recognize that the exponential of Shannon's entropy is just the size of the sample space when the distribution is uniform. This motivates studying the exponentials of Shannon's entropy and of those other entropy generalizations that involve a logarithmic function, for a probability distribution in general. In this paper, we introduce a measure of a sample space, called the 'entropic measure of a sample space', with respect to the underlying distribution. It is shown, in both the discrete and continuous cases, that this new measure depends on the parameters of the distribution on the sample space: the same sample space can have different 'entropic measures' depending on the distributions defined on it. It is noted that Campbell's idea also applies to Rényi's parametric entropy of a given order. Knowing that parameters provide suitable choices and extended applications, the paper studies parametric entropic measures of sample spaces as well. Exponential entropies related to Shannon's entropy and to those generalizations that involve logarithmic functions, i.e. that are additive, have been studied for wider understanding and applications. We propose and study exponential entropies corresponding to non-additive entropies of type (α, β), which include the Havrda-Charvát entropy as a special case.
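
Campbell's observation is easy to verify numerically: exponentiating Shannon's entropy (in nats) yields the sample-space size exactly when the distribution is uniform, and a distribution-dependent 'entropic measure' otherwise; the same works for Rényi's entropy. A small sketch with illustrative distributions:

```python
import numpy as np

def exp_shannon(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return np.exp(-np.sum(p * np.log(p)))    # exp(H): entropic measure of the space

def exp_renyi(p, alpha):
    p = np.asarray(p, dtype=float)
    return np.sum(p**alpha) ** (1.0 / (1.0 - alpha))   # exp of Renyi entropy of order alpha

print(exp_shannon([0.25, 0.25, 0.25, 0.25]))    # 4.0: uniform case recovers the true size
print(exp_shannon([0.7, 0.1, 0.1, 0.1]))        # ~2.56: depends on the distribution
print(exp_renyi([0.7, 0.1, 0.1, 0.1], alpha=2)) # ~1.92: order-2 (collision) measure
```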

Keywords: Sample space, probability distributions, Shannon's entropy, Rényi's entropy, non-additive entropies.

83 Chikungunya Protease Domain–High Throughput Virtual Screening

Authors: Surender Singh Jadav, Venkatesan Jayaprakash, Arijit Basu, Barij Nayan Sinha

Abstract:

Chikungunya virus (CHIKV) is an arbovirus belonging to the family Togaviridae and is transmitted to humans through the bite of mosquitoes (Aedes aegypti and Aedes albopictus). A large outbreak of chikungunya was reported in India between 2006 and 2007, along with several other countries of South-East Asia and, for the first time, in Europe. It was also the first time that CHIKV outbreaks were reported with mortality, from Réunion Island, together with increased mortality from Asian countries. CHIKV affects all age groups, and currently there are no specific drugs or vaccines to cure the disease. The need for antiviral agents for the treatment of CHIKV infection and the success of virtual screening against many therapeutically valuable targets led us to carry out structure-based drug design against the chikungunya nsP2 protease (PDB: 3TRK). High-throughput virtual screening of the publicly available databases ZINC12 and BindingDB was carried out using the OpenEye tools and Schrodinger LLC software packages. The OpenEye Filter program was used to filter the databases, and the filtered outputs were docked using the HTVS protocol implemented in the GLIDE package of Schrodinger LLC. The top hits were further used for enriching similar molecules from the databases through vROCS, a shape-based screening protocol implemented in OpenEye. The approach adopted provided different scaffolds as hits against the CHIKV protease. Three scaffolds, indole, pyrazole and sulphone derivatives, were selected based on docking score and synthetic feasibility. Derivatives of pyrazole were synthesized and submitted for antiviral screening against CHIKV.

Keywords: Chikungunya, nsP2 protease, ADME filter, HTVS, Docking, Active site.

82 Effect of Fill Material Density under Structures on Ground Motion Characteristics Due to Earthquake

Authors: Ahmed T. Farid, Khaled Z. Soliman

Abstract:

Due to limited areas and the excessive cost of land for projects, backfilling has become necessary. Backfilling is also done to overcome uneven depths or to raise the level of a construction site, especially near the sea. Therefore, the backfill soil materials used under the foundations of structures should be investigated regarding their effect on ground motion characteristics, especially in regions subjected to earthquakes. In this research, a 60-meter thickness of sandy fill material over a fixed 240 meters of natural clayey soil underlain by rock formation was used to predict the modified ground motion characteristics at the foundation level. The effects of three different degrees of fill material compaction on the recorded earthquake are compared, i.e. on peak ground acceleration, time history, and spectral acceleration values. The three densities of compacted fill material used in the study were very loose, medium dense and very dense sand deposits, respectively. The SHAKE computer program was used to perform this study. Strong earthquake records, with a Peak Ground Acceleration (PGA) of 0.35 g, were used in the analysis. It was found that higher compaction of the fill material thickness has a significant effect on attenuating the earthquake ground motion at the surface of the fill material, near the foundation level. It is recommended to consider the fill material characteristics in the design of foundations subjected to seismic motions. Future studies should analyze different fill and natural soil deposits under different seismic conditions.

Keywords: Fill, material, density, compaction, earthquake, PGA.

81 Statistical Modeling of Local Area Fading Channels Based on Triply Stochastic Filtered Marked Poisson Point Processes

Authors: Jihad S. Daba, J. P. Dubois

Abstract:

Fading noise degrades the performance of cellular communication, most notably in femto- and pico-cells in 3G and 4G systems. When the wireless channel consists of a small number of scattering paths, the statistics of the fading noise are not analytically tractable, which poses a serious challenge to developing closed canonical forms that can be analysed and used in the design of efficient and optimal receivers. In this context, the noise is multiplicative and is referred to as stochastically local fading. In many analytical investigations of multiplicative noise, exponential or Gamma statistics are invoked. More recent advances by the author of this paper utilized Poisson-modulated weighted generalized Laguerre polynomials with controlling parameters and uncorrelated noise assumptions. In this paper, we investigate the statistics of a multidiversity, stochastically local area fading channel in which the channel consists of randomly distributed Rayleigh and Rician scattering centers with a coherent Nakagami-distributed line-of-sight component and an underlying doubly stochastic Poisson process driven by a lognormal intensity. These combined statistics form a unifying triply stochastic filtered marked Poisson point process model.
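
A hedged Monte Carlo sketch of the channel construction described: a scatterer count drawn from a Poisson law whose intensity is itself lognormal (the doubly stochastic part), diffuse complex scatterer contributions, and a Nakagami-distributed LOS amplitude. All parameter values are illustrative, and the Rician marks are collapsed into a single diffuse term for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

def fading_envelope(mu=1.0, sigma=0.5, nak_m=2.0, los_power=1.0):
    lam = rng.lognormal(mu, sigma)        # lognormal intensity drives the Poisson count
    n = rng.poisson(lam)                  # number of discrete scattering centers
    # each scatterer contributes a zero-mean complex gain of unit mean power
    diffuse = (rng.normal(size=n) + 1j * rng.normal(size=n)) * np.sqrt(0.5)
    # Nakagami-m LOS amplitude: square root of a Gamma(m, omega/m) variate
    los = np.sqrt(rng.gamma(nak_m, los_power / nak_m))
    return abs(los + diffuse.sum())

samples = np.array([fading_envelope() for _ in range(10000)])
print(samples.mean(), samples.var())      # empirical moments of the fading envelope
```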

Keywords: Cellular communication, femto- and pico-cells, stochastically local area fading channel, triply stochastic filtered marked Poisson point process.

80 Navigation and Guidance System Architectures for Small Unmanned Aircraft Applications

Authors: Roberto Sabatini, Celia Bartel, Anish Kaharkar, Tesheen Shaid, Subramanian Ramasamy

Abstract:

Two multisensor system architectures for navigation and guidance of small Unmanned Aircraft (UA) are presented and compared. The main objective of our research is to design a compact, light and relatively inexpensive system capable of providing the required navigation performance in all phases of flight of small UA, with a special focus on precision approach and landing, where Vision Based Navigation (VBN) techniques can be fully exploited in a multisensor integrated architecture. Various existing techniques for VBN are compared and the Appearance-Based Navigation (ABN) approach is selected for implementation. Feature extraction and optical flow techniques are employed to estimate flight parameters such as roll angle, pitch angle, deviation from the runway centreline and body rates. Additionally, we address the possible synergies of VBN, Global Navigation Satellite System (GNSS) and MEMS-IMU (Micro-Electromechanical System Inertial Measurement Unit) sensors, and the use of an Aircraft Dynamics Model (ADM) to provide additional information suitable to compensate for the shortcomings of VBN and MEMS-IMU sensors in high-dynamics attitude determination tasks. An Extended Kalman Filter (EKF) is developed to fuse the information provided by the different sensors and to provide estimates of position, velocity and attitude of the UA platform in real time. The key mathematical models describing the two architectures, i.e., the VBN-IMU-GNSS (VIG) system and the VIG-ADM (VIGA) system, are introduced. The first architecture uses VBN and GNSS to augment the MEMS-IMU. The second also includes the ADM to provide augmentation of the attitude channel. Simulation of these two modes is carried out and the performances of the two schemes are compared in a small UA integration scheme (i.e., the AEROSONDE UA platform), exploring a representative cross-section of this UA's operational flight envelope, including high-dynamics manoeuvres and CAT-I to CAT-III precision approach tasks. Simulation of the first architecture (i.e., the VIG system) shows that the integrated system can reach position, velocity and attitude accuracies compatible with the Required Navigation Performance (RNP) requirements. Simulation of the VIGA system also shows promising results, since the achieved attitude accuracy is higher using VBN-IMU-ADM than using VBN-IMU only. A comparison of the VIG and VIGA systems is also performed, showing that the position and attitude accuracies of the proposed systems are both compatible with the RNP specified for the various UA flight phases, including precision approach down to CAT-II.
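
The fusion core is a standard EKF predict/update cycle; a generic sketch follows, where the state transition, Jacobians and noise covariances are placeholders, not the actual VIG/VIGA models:

```python
import numpy as np

def ekf_step(x, P, f, h, F, H, Q, R, z):
    """One EKF cycle. F and H are Jacobians of f and h at the current estimate."""
    x_pred = f(x)                           # propagate state (e.g. IMU mechanization)
    P_pred = F @ P @ F.T + Q                # propagate covariance
    y = z - h(x_pred)                       # innovation from a VBN/GNSS measurement z
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# toy 1-D example: constant-velocity state [pos, vel], position-only measurement
F = np.array([[1.0, 0.1], [0.0, 1.0]]); H = np.array([[1.0, 0.0]])
x, P = np.zeros(2), np.eye(2)
x, P = ekf_step(x, P, lambda s: F @ s, lambda s: H @ s,
                F, H, Q=0.01 * np.eye(2), R=np.array([[0.5]]), z=np.array([1.0]))
print(x)
```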

Keywords: Global Navigation Satellite System (GNSS), low-cost navigation sensors, MEMS Inertial Measurement Unit (IMU), Unmanned Aerial Vehicle, Vision Based Navigation.

79 Impact of Changes of the Conceptual Framework for Financial Reporting on the Indicators of the Financial Statement

Authors: Nadezhda Kvatashidze

Abstract:

The International Accounting Standards Board has updated the conceptual framework for financial reporting. The main reason behind this is to resolve accounting tasks arising from market development and from business transactions of a new economic content. Investors also call for higher transparency of information and responsibility for results, in order to make more accurate risk assessments and forecasts. All of this makes it necessary to further develop the conceptual framework for financial reporting so that users get useful information. Market development, and certain shortcomings of the conceptual framework revealed in practice, require its reconsideration and the finding of new solutions. Some issues and concepts, such as the disclosure and supply of information, its qualitative characteristics, assessment, and measurement uncertainty, had to be supplemented and perfected. The recognition criteria for certain elements of reporting (assets and liabilities) had to be updated too, and all this is set out in the updated edition of the conceptual framework for financial reporting, a comprehensive collection of concepts underlying the preparation of financial statements. The main objective of the revision is to improve financial reporting through the development of a clear package of concepts. This will support the International Accounting Standards Board (IASB) in setting a common “Approach & Reflection” for similar transactions on the basis of mutually accepted concepts. As a result, companies will be able to develop coherent accounting policies for transactions or events for which no standard applies, or where a standard allows a choice of accounting policy.

Keywords: Conceptual framework, measurement basis, measurement uncertainty, neutrality, prudence, stewardship.

78 Preliminary Results of In-Vitro Skin Tissue Soldering using Gold Nanoshells and ICG Combination

Authors: M. S. Nourbakhsh, M. E. Khosroshahi

Abstract:

Laser soldering is based on applying a soldering material (albumin) onto the approximated edges of a cut and heating the solder (and the underlying tissues) with a laser beam. Endogenous and exogenous materials such as indocyanine green (ICG) are often added to solders to enhance light absorption. Gold nanoshells are new materials whose optical response is dictated by their plasmon resonance. The wavelength at which the resonance occurs depends on the core and shell sizes, allowing nanoshells to be tailored for particular applications. The purposes of this study were to use combinations of ICG and different concentrations of gold nanoshells for skin tissue soldering, and to examine the effect of laser soldering parameters on the properties of the repaired skin. Two mixtures of albumin solder with different combinations of ICG and gold nanoshells were prepared. A full-thickness incision of 2×20 mm² was made on the surface and, after addition of the mixtures, was irradiated by an 810 nm diode laser at different power densities. The changes of tensile strength σt due to temperature rise, number of scans (Ns), and scan velocity (Vs) were investigated. The results showed that at constant laser power density (I), the σt of repaired incisions increases with increasing concentration of gold nanoshells in the solder, increasing Ns, and decreasing Vs. It is therefore important to consider the trade-off between the scan velocity and the surface temperature for achieving an optimum operating condition. In our case this corresponds to σt = 1800 g/cm² at I ≈ 47 W/cm², T ≈ 85 °C, Ns = 10 and Vs = 0.3 mm/s.

Keywords: Tissue soldering, gold nanoshells, indocyanine green, combination, tensile strength.

77 Linguistic Competence Analysis and the Development of Speaking Instructional Material

Authors: Felipa M. Rico

Abstract:

Linguistic oral competence plays a vital role in attaining effective communication. Since English is considered a universally used language and a high-demand skill in the workplace, mastery is the expected output from learners. To achieve this, learners should be given integrated, differentiated tasks which help them develop and strengthen the expected skills. This study aimed to develop supplementary speaking instructional material to enhance the English linguistic competence of Grade 9 students in the areas of pronunciation, intonation and stress, voice projection, diction and fluency. A descriptive analysis was utilized to analyze the students' speaking level of performance in order to employ appropriate strategies. There were two sets of respondents: 178 Grade 9 students, selected through stratified sampling and chosen at random, and the English teachers who evaluated the usefulness of the devised teaching materials. A teacher conducted a speaking test, and activities were employed to analyze the speaking needs of the students. Observation and recordings were also used to evaluate the students' performance. The findings revealed that the English pronunciation of the students was slightly unclear at times, but generally fair. There were lapses, but the students generally rated moderate in intonation and stress, because of interference from their other languages. In terms of voice projection, the students had an erratically high-volume pitch. As to diction, the students' ability to produce comprehensible language was limited, and as to fluency, their choice of vocabulary and use of structure were severely limited. Based on the students' speaking needs analysis, the supplementary material devised was based on Nunan's IM model, incorporating contexts of daily life and global work settings, following the principle that language is best learned in an actual meaningful situation. To widen skill mastery, a rich learning environment, filled with a variety of instructional materials, tends to foster faster acquisition of the requisite skills for sustained learning and development. The role of the IM is to encourage information to stick in the learners' minds, as what is seen is understood more than what is heard. Teachers said they found the IM “very useful”. This implies that English teachers could adopt the materials to improve the speaking skills of students. Further, teachers should provide varied opportunities for students to get involved in real-life situations where they can take turns asking and answering questions and share information related to the activities. This would minimize anxiety among students in the use of the English language.

Keywords: Fluency, intonation, instructional materials, linguistic competence, pronunciation.

76 Application of Single Tuned Passive Filters in Distribution Networks at the Point of Common Coupling

Authors: M. Almutairi, S. Hadjiloucas

Abstract:

The harmonic distortion of voltage is important in relation to power quality, due to the interaction between the large diffusion of non-linear and time-varying single-phase and three-phase loads and power supply systems. However, harmonic distortion levels can be reduced by improving the design of polluting loads or by adding filtering arrangements. The application of passive filters is an effective solution for harmonic mitigation, mainly because filters offer high efficiency and simplicity and are economical. Additionally, their different possible frequency response characteristics can achieve particular required harmonic filtering targets. With these ideas in mind, the objective of this paper is to determine the size of single tuned passive filter that works best in distribution networks, in order to economically limit violations at a given point of common coupling (PCC). This article suggests that a single tuned passive filter can be employed in typical industrial power systems. Furthermore, constrained optimization can be used to find the optimal sizing of the passive filter in order to reduce both harmonic voltages and harmonic currents in the power system to an acceptable level, and thus improve the load power factor. The optimization technique minimizes the voltage total harmonic distortion (VTHD) and the current total harmonic distortion (ITHD) while maintaining the power factor within a specified range. According to IEEE Standard 519, both indices are viewed as constraints in the optimal passive filter design problem. The performance of this technique is discussed using numerical examples taken from previous publications.
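
A hedged sketch of the textbook sizing relations for a single tuned shunt filter: the capacitor is sized from the reactive power needed for power factor correction, the inductor tunes the branch to the harmonic order, and the resistor sets the quality factor. The bus voltage, var requirement and Q-factor below are illustrative, not values from the paper's examples:

```python
import math

def single_tuned_filter(v_line, q_var, h, f=50.0, q_factor=30.0):
    """Return (C, L, R) for a single tuned passive filter branch."""
    w = 2 * math.pi * f
    xc = v_line**2 / q_var           # capacitive reactance giving q_var at fundamental
    c = 1.0 / (w * xc)
    l = 1.0 / ((h * w)**2 * c)       # series L-C branch resonant at harmonic order h
    r = math.sqrt(l / c) / q_factor  # characteristic reactance over quality factor
    return c, l, r

# e.g. an 11 kV bus needing 2 Mvar of compensation, tuned just below the 5th harmonic
c, l, r = single_tuned_filter(v_line=11e3, q_var=2e6, h=4.9)
print(f"C={c*1e6:.1f} uF, L={l*1e3:.2f} mH, R={r:.3f} ohm")
```

Tuning slightly below the target harmonic (h = 4.9 rather than 5) is a common practical choice to allow for component tolerance and detuning over time.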

Keywords: Harmonics, passive filter, power factor, power quality.

75 A Growing Neural Gas Approach for Evaluating Quality of Software Modules

Authors: Parvinder S. Sandhu, Sandeep Khimta, Kiranpreet Kaur

Abstract:

The prediction of software quality during the development life cycle of a software project helps the development organization make efficient use of available resources to produce a product of the highest quality. A “whether a module is faulty or not” approach can be used to predict the quality of a software module. A number of software quality prediction models are described in the literature, based upon genetic algorithms, artificial neural networks and other data mining algorithms. One of the promising approaches to quality prediction is based on clustering techniques. Most quality prediction models based on clustering make use of the K-means, Mixture-of-Gaussians, Self-Organizing Map, Neural Gas or fuzzy K-means algorithms. In all of these techniques, a predefined structure is required, that is, the number of neurons or clusters must be known before the clustering process starts. In the case of Growing Neural Gas, however, there is no need to predetermine the number of neurons or the topology of the structure: it starts with a minimal neuron structure that is incremented during training until it reaches a user-defined maximum number of clusters. Hence, in this work we have used Growing Neural Gas as the underlying clustering algorithm. It produces an initial set of labeled clusters from the training data set, and this set of clusters is then used to predict the quality of the software modules in the test data set. The best testing results show 80% accuracy in evaluating the quality of software modules. Hence, the proposed technique can be used by programmers to evaluate the quality of modules during software development.

Keywords: Growing Neural Gas, data clustering, fault prediction.

74 Roundabout Optimal Entry and Circulating Flow Induced by Road Hump

Authors: Amir Hossein Pakshir, A. Hossein Pour, N. Jahandar, Ali Paydar

Abstract:

Roundabouts work on the principle of circulating and entry flows, where the maximum entry flow rates depend largely on the circulating flow, bearing in mind that entry flows must give way to circulating flows. Where an existing roundabout has a road hump installed at the entry arm, it can be hypothesized that the kinematics of vehicles may prevent the entry arm from achieving optimum performance. Road humps are traffic calming devices placed across the road width solely as a speed reduction mechanism. They are the preferred traffic calming option in Malaysia and are often used on single and dual carriageway local routes. The speed limit on local routes is 30 mph (50 km/h). Road humps in their various forms achieve the biggest mean speed reduction, up to 10 mph (16 km/h) from a mean speed of 30 mph before traffic calming, according to the UK Department of Transport. The underlying aim of reduced speed should be to achieve a 'safe' distribution of speeds which reflects the function of the road and the impacts on the local community. Constraining the safe distribution of speeds may lead to poor driver timing and delayed reflex reactions, which can cause accidents. Previous studies of road hump impacts have focused mainly on speed reduction, traffic volume, noise and vibration, and discomfort and delay from the use of road humps. This paper is aimed at the optimal entry and circulating flow induced by road humps. Results show that roundabout entry and circulating flows perform better where there is no road hump at the entrance.

Keywords: Road hump, Roundabout, Speed Reduction

73 Buildings Founded on Thermal Insulation Layer Subjected to Earthquake Load

Authors: D. Koren, V. Kilar

Abstract:

Modern energy-efficient houses are often founded on a thermal insulation (TI) layer placed under the building's RC foundation slab. The purpose of this paper is to identify the potential problems of buildings founded on a TI layer from the seismic point of view. The two main goals of the study were to assess the seismic behavior of such buildings, and to search for the critical structural parameters affecting the response of the superstructure as well as of the extruded polystyrene (XPS) layer. As a test building, a multi-storey RC frame structure with and without an XPS layer under the foundation slab was investigated using nonlinear dynamic (time-history) and static (pushover) analyses. The structural response was investigated with reference to the following performance parameters: i) the building's lateral roof displacements, ii) edge compressive and shear strains of the XPS, iii) horizontal accelerations of the superstructure, iv) plastic hinge patterns of the superstructure, v) the part of the foundation in compression, and vi) deformations of the underlying soil and vertical displacements of the foundation slab (i.e. identifying potential uplift). The results show that in the case of taller, stiff structures on firm soil, the use of XPS under the foundation slab may induce amplified structural peak responses compared with building models without XPS under the foundation slab. The analysis revealed that the response of the superstructure as well as of the XPS is substantially affected by the stiffness of the foundation slab.

Keywords: Extruded polystyrene (XPS), foundation on thermal insulation, energy-efficient buildings, nonlinear seismic analysis, seismic response, soil–structure interaction.

72 A New Distribution Network Reconfiguration Approach using a Tree Model

Authors: E. Dolatdar, S. Soleymani, B. Mozafari

Abstract:

Power loss reduction is one of the main targets in the power industry, and so in this paper the problem of finding the optimal configuration of a radial distribution system for loss reduction is considered. Optimal reconfiguration involves the selection of the best set of branches to be opened, one from each loop, to reduce resistive line losses and relieve overloads on feeders by shifting load to adjacent feeders. However, since there are many candidate switching combinations in the system, feeder reconfiguration is a complicated problem. In this paper a new approach is proposed, based on a simple optimum loss calculation that determines optimal trees of the given network. From graph theory, a distribution network can be represented by a graph consisting of a set of nodes and branches. In fact, this problem can be viewed as the problem of determining an optimal tree of the graph that simultaneously ensures the radial structure of each candidate topology. In this method a refined genetic algorithm is also set up, with some improvements made to the chromosome coding. An implementation of the algorithm presented in [7], in which the choice of switches to be opened is based on simple heuristic rules, is applied with a modified load flow program and compared with the proposed method; that algorithm reduces the number of load flow runs, reduces the switching combinations to a fewer number, and gives the optimum solution. To demonstrate the validity of these methods, computer simulations with the PSAT and MATLAB programs are carried out on a 33-bus test system. The results show that the performance of the proposed method is better than that of the method in [7] and other methods.
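
The tree-model idea reduces to a graph-theoretic feasibility check: a candidate configuration is radial exactly when the closed branches form a spanning tree of the bus graph. A minimal sketch with networkx, using a hypothetical 5-bus system rather than the 33-bus case:

```python
import networkx as nx

def is_radial(buses, branches, open_branches):
    """A configuration is radial iff the closed branches form a spanning tree."""
    g = nx.Graph()
    g.add_nodes_from(buses)
    g.add_edges_from(b for b in branches if b not in open_branches)
    # spanning tree: connected (no islanded bus) with exactly n-1 edges (no loop)
    return nx.is_connected(g) and g.number_of_edges() == len(buses) - 1

buses = [1, 2, 3, 4, 5]
branches = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 2)]          # one loop: 2-3-4-5-2
print(is_radial(buses, branches, open_branches={(5, 2)}))    # True: radial
print(is_radial(buses, branches, open_branches={(1, 2)}))    # False: bus 1 islanded
```

In a genetic-algorithm setting, each chromosome would encode one opened branch per loop, and candidates failing this check would be discarded or repaired before the loss calculation.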

Keywords: Distribution system, reconfiguration, loss reduction, graph theory, optimization, genetic algorithm.
