Search results for: Data Estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8104

6754 From Modeling of Data Structures towards Automatic Programs Generating

Authors: Valentin P. Velikov

Abstract:

Automatic program generation saves time and human resources and yields syntactically clear, logically correct modules. Fourth-generation programming languages are concerned with modeling the data and the processes of the subject area and with deriving a frame of the corresponding information system. The application can be separated into an interface and business logic. This means that interactive generation of the needed system requires either using an already existing toolkit or creating a new one.

Keywords: Computer science, graphical user interface, user dialog interface, dialog frames, data modeling, subject area modeling.

6753 Visual Analytics in K-12 Education - Emerging Dimensions of Complexity

Authors: Linnea Stenliden

Abstract:

The aim of this paper is to understand the learning conditions that emerge when visual analytics is implemented and used in K-12 education. To date, little attention has been paid to the role visual analytics (digital media and technology that highlight visual data communication in order to support analytical tasks) can play in education, and to the extent to which these tools can process actionable data for young students. This study was conducted in three public K-12 schools, in four social science classes with students aged 10 to 13 years, over a period of two to four weeks at each school. Empirical data were generated using video observations and analyzed with the help of metaphors within Actor-network theory (ANT). The learning conditions are found to be distinguished by broad complexity, characterized by four dimensions that emerge from the actors' deeply intertwined relations in the activities. In relation to these dimensions, the paper argues that novel approaches to teaching and learning could benefit students' knowledge building as they work with visual analytics, analyzing visualized data.

Keywords: Analytical reasoning, complexity, data use, problem space, visual analytics, visual storytelling, translation.

6752 A Post-Processing Method for Quantum Prime Factorization Algorithm Based on Randomized Approach

Authors: Mir Shahriar Emami, Mohammad Reza Meybodi

Abstract:

Prime factorization based on a quantum approach is performed in two phases: the first phase is carried out on a quantum computer, and the second phase (post-processing) on a classical computer. In the second phase, the goal is to estimate the period r of the equation x^r ≡ 1 (mod N) and then to find the prime factors of the composite integer N on the classical computer. In this paper, we present a method based on a randomized approach for estimating the period r with a satisfactory probability, so that the composite integer N can be factorized; with the randomized approach, even when the guess of the period is not exactly the real period, at least one of the prime factors of N can still be found. Finally, we present some important points for designing an emulator for quantum computer simulation.
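
The classical post-processing step can be made concrete. The sketch below is an illustration, not the authors' exact algorithm: it shows the standard way a factor of N is recovered from a candidate period r, with a simple randomized retry loop over nearby candidates to mimic tolerance to an inexact period estimate. All function names and parameters are illustrative.

```python
from math import gcd
import random

def factor_from_period(N: int, x: int, r: int):
    """Try to extract a nontrivial factor of N from a candidate period r
    of x^r = 1 (mod N). Returns a factor or None."""
    if r % 2:                      # need an even period
        return None
    y = pow(x, r // 2, N)
    if y == N - 1:                 # x^(r/2) = -1 (mod N): only trivial factors
        return None
    for cand in (gcd(y - 1, N), gcd(y + 1, N)):
        if 1 < cand < N:
            return cand
    return None

def randomized_post_process(N: int, r_est: int, trials: int = 50, spread: int = 3):
    """Hypothetical post-processing loop: the period estimate r_est from the
    quantum phase may be inexact, so candidates near it are also tried."""
    for _ in range(trials):
        x = random.randrange(2, N - 1)
        g = gcd(x, N)
        if g > 1:                  # lucky draw: x already shares a factor
            return g
        for r in range(max(2, r_est - spread), r_est + spread + 1):
            f = factor_from_period(N, x, r)
            if f:
                return f
    return None

print(randomized_post_process(15, 4))   # e.g. 3 or 5
```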

Keywords: Quantum Prime Factorization, Randomized Algorithms, Quantum Computer Simulation, Quantum Computation.

6751 An Energy-Efficient Cluster Formation Protocol with Low Latency in Wireless Sensor Networks

Authors: A. Allirani, M. Suganthi

Abstract:

Data gathering is an essential operation in wireless sensor network applications, so energy-efficient techniques are required to increase the lifetime of the network. Clustering is likewise an effective technique for improving the energy efficiency and network lifetime of wireless sensor networks. In this paper, an energy-efficient cluster formation protocol is proposed with the objective of achieving low energy dissipation and latency without sacrificing application-specific quality. The objective is achieved by applying randomized, adaptive, self-configuring cluster formation and localized control for data transfers, involving application-specific data processing such as data aggregation or compression. The cluster formation algorithm allows each node to make independent decisions, so as to generate good clusters in the end. Simulation results show that the proposed protocol requires minimum energy and latency for cluster formation, thereby reducing the overhead of the protocol.
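
The abstract does not spell out the election rule, but the randomized, self-configuring cluster formation it describes can be illustrated with the rotating-threshold self-election used in protocols of this family (popularized by LEACH); names and parameter values below are illustrative.

```python
import random

def elect_cluster_heads(node_ids, round_no, been_head, p=0.05):
    """Randomized self-election: each node independently decides whether
    to become a cluster head this round (rotating-threshold rule)."""
    period = int(1 / p)                          # rounds per rotation cycle
    # the threshold grows through the cycle so every node eventually serves
    threshold = p / (1 - p * (round_no % period))
    heads = [n for n in node_ids
             if n not in been_head and random.random() < threshold]
    been_head.update(heads)
    if round_no % period == period - 1:          # cycle over: reset rotation
        been_head.clear()
    return heads

been_head = set()
for rnd in range(3):
    print(rnd, elect_cluster_heads(range(100), rnd, been_head))
```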

Keywords: Sensor networks, Low latency, Energy sorting protocol, data processing, Cluster formation.

6750 The Pixel Value Data Approach for Rainfall Forecasting Based on GOES-9 Satellite Image Sequence Analysis

Authors: C. Yaiprasert, K. Jaroensutasinee, M. Jaroensutasinee

Abstract:

This paper develops a process for extracting pixel values from satellite remote-sensing image data over Thailand, an important and effective basis for rainfall forecasting. It presents an approach for forecasting possible rainfall areas based on pixel values from remote-sensing satellite images. First, pixel value data are extracted automatically from the satellite image sequence. Then, a data process is designed to enable the inference of correlations between pixel values and possible rainfall occurrences. The results show that when the averaged pixel value of the daily water vapor data is high, the daily rainfall amount is also high, suggesting that averaged pixel values can be used as an indicator of rain events. There are positive associations between the pixel values of daily water vapor images and the daily rainfall amount at each rain-gauge station throughout Thailand. The proposed approach proved to be a helpful tool for meteorologists in rainfall forecasting through the automated analysis and interpretation of meteorological remote-sensing data.
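
The core of the data process can be sketched in a few lines: average each day's water-vapor image to one pixel value and correlate the series with gauge rainfall. The arrays below are synthetic placeholders for the GOES-9 data.

```python
import numpy as np

def rainfall_pixel_correlation(images: np.ndarray, rainfall: np.ndarray) -> float:
    """Correlate the daily mean water-vapor pixel value with daily rainfall.
    images: (days, H, W) pixel values; rainfall: (days,) gauge totals in mm."""
    daily_mean = images.reshape(len(images), -1).mean(axis=1)
    return float(np.corrcoef(daily_mean, rainfall)[0, 1])

rng = np.random.default_rng(0)
imgs = rng.uniform(0, 255, size=(30, 64, 64))     # 30 days of synthetic frames
rain = imgs.reshape(30, -1).mean(axis=1) * 0.1 + rng.normal(0, 1, 30)
print(rainfall_pixel_correlation(imgs, rain))     # close to 1 by construction
```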

Keywords: Pixel values, satellite image, water vapor, rainfall, image processing.

6749 Measurement and Estimation of Evaporation from Water Surfaces: Application to Dams in Arid and Semi-Arid Areas in Algeria

Authors: Malika Fekih, Mohamed Saighi

Abstract:

Many methods exist for measuring or estimating evaporation from free water surfaces. Evaporation pans provide one of the simplest, least expensive, and most widely used means of estimating evaporative losses. In this study, the rate of evaporation from a water surface was calculated by modeling, with application to dams in wet, arid, and semi-arid areas of Algeria. We calculate the evaporation rate from the pan using the energy budget equation, which offers the advantage of ease of use, but our results do not agree completely with the measurements carried out by the national dams agency at dams located in areas with different climates. We therefore develop a mathematical model to simulate evaporation. This simulation uses an energy budget at the level of a measurement pan together with Computational Fluid Dynamics (the Fluent code). The evaporation rates calculated by the two methods are then compared with the in situ measurements.
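
As a minimal illustration of the energy-budget idea, assuming the net radiation, stored heat, and sensible heat flux are known (the paper's own budget terms and units may differ):

```python
LAMBDA = 2.45  # latent heat of vaporization, MJ/kg (approx. at 20 °C)

def evaporation_mm_per_day(rn: float, g: float, h: float) -> float:
    """Energy-budget evaporation estimate.
    rn: net radiation, g: heat stored in the water body, h: sensible heat
    flux, all in MJ m^-2 day^-1. The residual energy goes into latent heat;
    1 MJ m^-2 of latent heat evaporates ~1/LAMBDA kg m^-2, i.e. mm of water."""
    return max(0.0, (rn - g - h) / LAMBDA)

print(evaporation_mm_per_day(rn=18.0, g=2.0, h=3.0))  # ~5.3 mm/day
```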

Keywords: Evaporation, Energy budget, Surface water temperature, CFD, Dams.

6748 Comparative Analysis of the Public Funding for Greek Universities: An Ordinal DEA/MCDM Approach

Authors: Yiannis Smirlis, Dimitris K. Despotis

Abstract:

This study performs a comparative analysis of the 21 Greek universities in terms of the public funding awarded to cover their operating expenditure. First, it introduces a DEA/MCDM model that allocates the fund across four expenditure factors in the most favorable way for each university. Then, it presents a common, consensual assessment model to reallocate the amounts while keeping the total public budget at the same level. The analysis shows that a number of universities cannot justify their public funding in terms of their size and operational workload; for them, a sufficient reduction of the public funding amount is estimated as a future target. Due to the lack of precise data for a number of expenditure criteria, the analysis is based on a mixed crisp-ordinal data set.

Keywords: Data envelopment analysis, Greek universities, operating expenditures, ordinal data.

6747 The Economic Cost of Health and Safety in Workplaces: An Approach to the Cost Calculating Model

Authors: Efat Lali Dastjerdi, Hassan Sadeghi Naeini, Hadi Sanjari

Abstract:

One of the important steps in a safety and risk management system is the economic evaluation of the costs of occupational accidents and diseases, in order to reduce the recurrence of accidents in the workplace. This study proposes a plausible method for calculating the costs of occupational accidents and illnesses. The cost estimation method takes into account the personnel and organizational levels as well as the community level, and is especially intended for Iranian workplaces. The research indicates that using a systematic method for calculating costs, one that also provides risk evaluation, can help managers plan investments in health and safety measures correctly. The method is not only comprehensive, easy, and practical, applicable by a manager within a short period of time, but it also shows the importance of accident costs and calculates the real cost of accidents and illnesses.

Keywords: Cost calculating model, Economics of health and safety.

6746 Inversion of Electrical Resistivity Data: A Review

Authors: Shrey Sharma, Gunjan Kumar Verma

Abstract:

High-density electrical prospecting has been widely used in groundwater investigation, civil engineering, and environmental surveys. For efficient inversion, the forward modeling routine, sensitivity calculation, and inversion algorithm must all be efficient. This paper attempts to provide a brief summary of the past and ongoing developments of the method. It includes reviews of the procedures used for data acquisition, processing, and inversion of electrical resistivity data, based on a compilation of the academic literature. In recent times there has been a significant evolution in field survey designs and data inversion techniques for the resistivity method. In general, 2-D inversion of resistivity data is carried out using the linearized least-squares method with a local optimization technique. Multi-electrode and multi-channel systems have made it possible to conduct large 2-D, 3-D and even 4-D surveys efficiently to resolve complex geological structures that were not possible with traditional 1-D surveys. 3-D surveys play an increasingly important role in very complex areas where 2-D models suffer from artifacts due to off-line structures. Continued developments in computation technology, as well as fast data inversion techniques and software, have made it possible to use optimization techniques to obtain model parameters to a higher accuracy. A brief discussion of the limitations of the electrical resistivity method is also presented.
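
The linearized least-squares scheme mentioned above can be sketched compactly: below is one damped Gauss-Newton (Levenberg-Marquardt style) iteration, applied to a toy forward model standing in for the actual resistivity forward routine. All names and values are illustrative.

```python
import numpy as np

def gauss_newton_step(forward, jacobian, m, d_obs, damping=0.1):
    """One damped linearized least-squares update, the scheme commonly used
    for 2-D resistivity inversion:
    m_new = m + (J^T J + lambda*I)^-1 J^T (d_obs - f(m))."""
    J = jacobian(m)                        # sensitivity matrix
    r = d_obs - forward(m)                 # data residual
    A = J.T @ J + damping * np.eye(len(m))
    return m + np.linalg.solve(A, J.T @ r)

# Toy forward model standing in for the resistivity forward routine.
forward = lambda m: np.array([m[0] + m[1], m[0] * 2.0, m[1] ** 2])
jacobian = lambda m: np.array([[1.0, 1.0], [2.0, 0.0], [0.0, 2 * m[1]]])

m = np.array([0.5, 0.5])
d_obs = forward(np.array([1.0, 2.0]))      # synthetic "observed" data
for _ in range(20):
    m = gauss_newton_step(forward, jacobian, m, d_obs)
print(m)                                   # approaches [1, 2]
```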

Keywords: Resistivity, inversion, optimization.

6745 Increasing the System Availability of Data Centers by Using Virtualization Technologies

Authors: Chris Ewe, Naoum Jamous, Holger Schrödl

Abstract:

Like most entrepreneurs, data center operators pursue goals such as profit maximization, improvement of the company's reputation, or simply staying in the market. Part of those aims is to guarantee a given quality of service. Quality characteristics are specified in a contract called the service level agreement, a central part of which is the non-functional properties of an IT service. System availability is one of the most important of these properties, as will be shown in this paper. To comply with availability requirements, data center operators can use virtualization technologies. A clear model for assessing the effect of virtualization functions on the parts of a data center in relation to system availability is still missing. This paper aims to introduce a basic model that shows these connections and considers whether the identified effects are positive or negative. The work thereby also points out possible disadvantages of the technology. In consequence, the paper shows opportunities as well as risks of data center virtualization in relation to system availability.
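
For orientation, the standard availability algebra such a model builds on (not the paper's specific model): a component's steady-state availability follows from its MTBF and MTTR, series paths multiply availabilities, and redundancy (e.g., live migration between virtualized hosts) combines them in parallel. All figures below are illustrative.

```python
def availability(mtbf_h: float, mttr_h: float) -> float:
    """Steady-state availability of one component: share of uptime."""
    return mtbf_h / (mtbf_h + mttr_h)

def series(*a):     # all components needed (e.g. host + storage + network)
    p = 1.0
    for x in a:
        p *= x
    return p

def parallel(*a):   # redundancy: fails only if every replica fails
    q = 1.0
    for x in a:
        q *= (1.0 - x)
    return 1.0 - q

host = availability(2000, 4)               # ~99.8% for a single host
# virtualization enables failover between two hosts: parallel redundancy
print(series(parallel(host, host), availability(8000, 2)))
```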

Keywords: Availability, cloud computing IT service, quality of service, service level agreement, virtualization.

6744 Data-Oriented Modeling of Uniform Random Variable: Applied Approach

Authors: Ahmad Habibizad Navin, Mehdi Naghian Fesharaki, Mirkamal Mirnia, Mohamad Teshnelab, Ehsan Shahamatnia

Abstract:

In this paper, we introduce a new data-oriented modeling of the uniform random variable that is well matched with computing systems. Owing to this conformity with the structure of current computers, the model can be used efficiently in statistical inference.

Keywords: Uniform random variable, Data oriented modeling, Statistical inference, Prodigraph, Statistically complete tree, Uniform digital probability digraph, Uniform n-complete probability tree.

6743 Mathematical Modeling of an Avalanche Release and Estimation of Flow Parameters by Numerical Method

Authors: Mahmoud Zarrini

Abstract:

The release of a snow avalanche is modeled in the present study. Snow is assumed to behave as a semi-solid, and the governing equations are studied from the continuum point of view. The dynamical equations are solved for two different zones [the starting zone and the track zone] using appropriate initial and boundary conditions. The effects of density (ρ), eddy viscosity (η), slope angle (θ), and slab depth (R) on the flow parameters are examined. Numerical methods are employed for computing the nonlinear differential equations. One of the most interesting and fundamental innovations in the present study is obtaining the initial condition for the computation of velocity by a numerical approach; this information on the velocity is derived from the concepts of fracture mechanics applicable to snow. The results on the flow parameters are found to be in qualitative agreement with published results.

Keywords: Snow avalanche, fracture mechanics, avalanche velocity, avalanche zones.

6742 Topic Modeling Using Latent Dirichlet Allocation and Latent Semantic Indexing on South African Telco Twitter Data

Authors: Phumelele P. Kubheka, Pius A. Owolawi, Gbolahan Aiyetoro

Abstract:

Twitter is one of the most popular social media platforms where users share their opinions on different subjects. Twitter can be considered a great source for text mining due to the high volume of data generated through the platform daily. Many industries, such as telecommunication companies, can leverage the availability of Twitter data to better understand their markets and make appropriate business decisions. This study performs topic modeling on Twitter data using Latent Dirichlet Allocation (LDA). The obtained results are benchmarked against another topic modeling technique, Latent Semantic Indexing (LSI). The study aims to retrieve topics from a Twitter dataset containing user tweets on South African Telcos. Results from this study show that LSI is much faster than LDA; however, LDA yields better results, with the topic coherence of the best-performing model higher by 8% in this experiment. A higher topic coherence score indicates better model performance.
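
A minimal sketch of the LDA-vs-LSI comparison using the gensim library, assuming the tweets have already been cleaned and tokenized; the toy corpus and parameter choices are illustrative, not the study's actual setup.

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel, LsiModel
from gensim.models.coherencemodel import CoherenceModel

tweets = [["network", "down", "again", "today"],
          ["great", "data", "bundle", "deal"],
          ["slow", "network", "speed", "complaint"],
          ["data", "bundle", "price", "increase"],
          ["airtime", "topup", "failed", "network"],
          ["customer", "service", "data", "query"]]

dictionary = Dictionary(tweets)
corpus = [dictionary.doc2bow(t) for t in tweets]   # bag-of-words corpus

lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2,
               passes=10, random_state=0)
lsi = LsiModel(corpus=corpus, id2word=dictionary, num_topics=2)

# compare the two models on c_v topic coherence, as in the abstract
for model in (lda, lsi):
    cm = CoherenceModel(model=model, texts=tweets, dictionary=dictionary,
                        coherence="c_v")
    print(type(model).__name__, round(cm.get_coherence(), 3))
```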

Keywords: Big data, latent Dirichlet allocation, latent semantic indexing, Telco, topic modeling, Twitter.

6741 Prediction of Compressive Strength of SCC Containing Bottom Ash Using Artificial Neural Networks

Authors: Yogesh Aggarwal, Paratibha Aggarwal

Abstract:

The paper presents a comparative performance evaluation of models developed to predict 28-day compressive strength using neural network techniques, for data taken from the literature (ANN-I) and for data developed experimentally for SCC containing bottom ash as a partial replacement of fine aggregates (ANN-II). The data used in the models are arranged as six input parameters for ANN-I and eight for ANN-II, covering the contents of cement, sand, coarse aggregate, fly ash as partial replacement of cement, bottom ash as partial replacement of sand, water and water/powder ratio, and superplasticizer dosage. The output parameter is the 28-day compressive strength for ANN-I, and the compressive strengths at 7, 28, 90 and 365 days for ANN-II. The importance of the different input parameters for predicting the strengths at various ages using the neural network is also given. The model developed from literature data could be easily extended to the experimental data, with bottom ash as partial replacement of sand, with some modifications.
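
A minimal sketch of this kind of strength-prediction network, with scikit-learn standing in for whichever ANN toolbox the authors used and synthetic data in place of the experimental mixes; the eight ANN-II inputs listed above are assumed.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# columns: cement, sand, coarse agg., fly ash, bottom ash, water,
#          water/powder ratio, superplasticizer (all normalized, synthetic)
X = rng.uniform(0, 1, size=(80, 8))
y = 20 + 30 * X[:, 0] - 10 * X[:, 5] + rng.normal(0, 1, 80)   # synthetic MPa

Xs = StandardScaler().fit_transform(X)          # scale inputs for the MLP
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000,
                     random_state=0).fit(Xs, y)
print(model.predict(Xs[:3]))    # predicted 28-day compressive strengths
```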

Keywords: Self compacting concrete, bottom ash, strength, prediction, neural network, importance factor.

6740 An Optimized Method for 3D Magnetic Navigation of Nanoparticles inside Human Arteries

Authors: Evangelos G. Karvelas, Christos Liosis, Andreas Theodorakakos, Theodoros E. Karakasidis

Abstract:

In the present work, a numerical method is presented for estimating the gradient magnetic fields appropriate for optimally driving particles into a desired area inside the human body. The proposed method combines Computational Fluid Dynamics (CFD), the Discrete Element Method (DEM) and the Covariance Matrix Adaptation (CMA) evolution strategy for the magnetic navigation of nanoparticles. It is based on an iterative procedure that aims to eliminate the deviation of the nanoparticles from a desired path: the gradient magnetic field is constantly adjusted so that the particles follow the desired trajectory as closely as possible. The results show that the particle diameter is a crucial parameter for efficient navigation, with larger diameters decreasing the deviation from the desired path. The method can navigate nanoparticles into the desired areas with an efficiency of approximately 99%.

Keywords: CFD, CMA evolution strategy, DEM, magnetic navigation, spherical particles.

6739 Estimation of Structural Parameters in Time Domain Using One Dimensional Piezo Zirconium Titanium Patch Model

Authors: N. Jinesh, K. Shankar

Abstract:

This article presents a method of using a one-dimensional piezoelectric patch-on-beam model for structural identification. A hybrid element constituted of a one-dimensional beam element and a PZT sensor is used, with reduced material properties. This model is convenient and simple for the identification of beams. The accuracy of this element is first verified against a corresponding 3D finite element model (FEM). The structural identification is carried out as an inverse problem whereby parameters are identified by minimizing the deviation between the predicted and measured voltage responses of the patch when subjected to excitation. A non-classical optimization algorithm, Particle Swarm Optimization (PSO), is used to minimize this objective function. The signals are polluted with 5% Gaussian noise to simulate experimental noise. The proposed method is applied to a beam structure, and the identified parameters are stiffness and damping. The model is also validated experimentally.
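
A compact PSO of the kind used here fits in a few lines. The sketch below minimizes a toy stand-in objective, whereas in the paper the objective is the squared deviation between predicted and measured patch voltages; all parameter values and names are illustrative.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer over box bounds (lo, hi)."""
    rng = np.random.default_rng(1)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest = x.copy()                                  # personal bests
    pbest_f = np.apply_along_axis(objective, 1, x)
    g = pbest[pbest_f.argmin()].copy()                # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(objective, 1, x)
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g

# Toy stand-in: recover stiffness k=2.0e4 and damping c=15.0.
true = np.array([2.0e4, 15.0])
obj = lambda p: np.sum((p - true) ** 2)   # real case: voltage residual norm
print(pso(obj, (np.array([1e3, 1.0]), np.array([1e5, 50.0]))))
```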

Keywords: Structural identification, PZT patches, inverse problem, particle swarm optimization.

6738 Self-Sensing versus Reference Air Gaps

Authors: Alexander Schulz, Ingrid Rottensteiner, Manfred Neumann, Michael Wehse, Johann Wassermann

Abstract:

Self-sensing estimates the air gap within an electromagnetic path by analyzing the bearing coil current and/or voltage waveform. The self-sensing concept presented in this paper has been developed within the research project "Active Magnetic Bearings with Supreme Reliability" and is used for position sensor fault detection. Within this new concept, the gap calculation is carried out by an all-digital analysis of the digitized coil current and voltage waveforms. The analysis uses those time periods within the PWM period that give the best results. Additionally, the concept allows the digital compensation of nonlinearities, for example magnetic saturation, without degrading signal quality. This increases the accuracy and robustness of the air gap estimation and additionally reduces phase delays. Besides an overview of the developed concept, first measurement results are presented which show the potential of this all-digital self-sensing concept.

Keywords: digital signal analysis, active magnetic bearing, reliability, fault detection.

6737 A New Vector Quantization Front-End Process for Discrete HMM Speech Recognition System

Authors: M. Debyeche, J. P. Haton, A. Houacine

Abstract:

The paper presents a complete discrete statistical framework based on a novel vector quantization (VQ) front-end process. This new VQ approach performs an optimal distribution of VQ codebook components over the HMM states. This technique, which we have named the distributed vector quantization (DVQ) of hidden Markov models, succeeds in unifying the acoustic micro-structure and phonetic macro-structure when the HMM parameters are estimated. The DVQ technique is implemented in two variants. The first variant uses the K-means algorithm (K-means-DVQ) to optimize the VQ, while the second exploits the classification behavior of neural networks (NN-DVQ) for the same purpose. The proposed variants are compared with an HMM-based baseline system in experiments on the recognition of specific Arabic consonants. The results show that the distributed vector quantization technique increases the performance of the discrete HMM system.
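
A minimal sketch of a K-means VQ front end, with scikit-learn's KMeans standing in for the paper's K-means-DVQ variant and synthetic vectors in place of acoustic feature frames (e.g., MFCCs):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(7)
frames = rng.normal(size=(1000, 12))        # 12-dim acoustic feature frames

# fit a 64-entry codebook, then map each frame to its nearest codeword
codebook = KMeans(n_clusters=64, n_init=10, random_state=0).fit(frames)
symbols = codebook.predict(frames)          # discrete observation symbols

# A discrete HMM is then trained on `symbols`; in DVQ, the codebook
# components would additionally be distributed over the HMM states.
print(symbols[:10])
```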

Keywords: Hidden Markov Model, Vector Quantization, Neural Network, Speech Recognition, Arabic Language.

6736 A Review: Comparative Analysis of Different Categorical Data Clustering Ensemble Methods

Authors: S. Sarumathi, N. Shanthi, M. Sharmila

Abstract:

Over the past years, a substantial amount of work has been done on data clustering under the unsupervised learning paradigm in data mining. Several algorithms and methods have been proposed, focusing on clustering different data types, on the representation of cluster models, and on the accuracy rates of the clusters. However, no single clustering algorithm proves to be the most efficient at providing the best results. To address this issue, a new technique, the cluster ensemble method, has emerged. Cluster ensembles are a good alternative approach to the cluster analysis problem: their main aim is to merge different clustering solutions in such a way as to achieve accuracy and to improve the quality of the individual clusterings. The substantial and unremitting development of new methods in data mining, together with the continual interest in inventing new algorithms, makes a critical analysis of the existing techniques and future novelties necessary. This paper presents a comparative study of different cluster ensemble methods, along with their features, their systematic working processes, and the average accuracy and error rates of each method. This comprehensive analysis should be useful for the community of clustering practitioners and should help in choosing the most suitable method for the problem at hand.
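
As an illustration of the co-association idea named in the keywords, the sketch below counts how often each pair of points is co-clustered across several base clusterings and derives a consensus partition by hierarchical clustering; the specific ensemble methods surveyed in the paper may differ.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def co_association(labelings: np.ndarray) -> np.ndarray:
    """labelings: (n_clusterings, n_points) integer label matrix.
    Returns the pairwise co-clustering frequency matrix."""
    m, n = labelings.shape
    C = np.zeros((n, n))
    for labels in labelings:
        C += (labels[:, None] == labels[None, :])
    return C / m

labelings = np.array([[0, 0, 1, 1, 2],    # three base clusterings of 5 points
                      [0, 0, 1, 2, 2],
                      [1, 1, 0, 0, 0]])
C = co_association(labelings)
dist = squareform(1.0 - C, checks=False)  # dissimilarity for linkage
consensus = fcluster(linkage(dist, method="average"),
                     t=2, criterion="maxclust")
print(consensus)                          # consensus partition into 2 clusters
```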

Keywords: Clustering, Cluster Ensemble methods, Co-association matrix, Consensus function, Median partition.

6735 Pressure-Detecting Method for Estimating Levitation Gap Height of Swirl Gripper

Authors: Kaige Shi, Chao Jiang, Xin Li

Abstract:

The swirl gripper is an electrically activated noncontact handling device that uses swirling airflow to generate a lifting force. This force can be used to pick up a workpiece placed underneath the swirl gripper without any contact. It is applicable, for example, in semiconductor wafer production lines, where contact must be avoided during the handling and moving of a workpiece to minimize damage. When a workpiece levitates underneath a swirl gripper, the gap height between them is crucial for safe handling. Therefore, in this paper, we propose a method to estimate the levitation gap height by detecting the pressure at two points. The method is based on a theoretical model of the swirl gripper and has been experimentally verified. Furthermore, the force between the gripper and the workpiece can also be estimated using the detected pressure. As a result, the nonlinear relationship between the force and the gap height can be linearized by adjusting the rotating speed of the fan in the swirl gripper according to the estimated force and gap height. The linearized relationship is expected to enhance the handling stability of the workpiece.

Keywords: Swirl gripper, noncontact handling, levitation, gap height estimation.

6734 Application of Data Envelopment Analysis to Assess Quality Management Efficiency

Authors: Chuen Tse Kuah, Kuan Yew Wong, Farzad Behrouzi

Abstract:

This paper aims to illustrate the application of Data Envelopment Analysis (DEA) as a tool to assess Quality Management (QM) efficiency. A variant of DEA, the slack-based measure (SBM), is used for this purpose. The study finds that DEA is suitable for measuring QM efficiency and for giving improvement suggestions to inefficient QM units.

Keywords: Quality Management, Data Envelopment Analysis, Slack Based Measure, Efficiency Measurement.

6733 Exploring the Correlation between Population Distribution and Urban Heat Island under Urban Data: Taking Shenzhen Urban Heat Island as an Example

Authors: Wang Yang

Abstract:

Shenzhen is a modern city shaped by China's reform and opening-up policy, and the development of its urban morphology has been directed by the Chinese government's administration. The city's planning paradigm is primarily driven by spatial structure and human behavior, with the urban agglomeration divided into several groups and centers. With the continuous development of the internet, big data technology has been introduced in China, and data mining and data analysis have become important tools in municipal research; data mining supports the collection and cleaning of business data, traffic data, and population data. Before such methods were available, government data were collected by traditional means and then analyzed in city-relationship research, which delayed the timeliness of urban development studies; for the contemporary city, internet-based data update very quickly. The city's points of interest (POIs), mined from the web, serve as a data source shaping city design, while satellite remote sensing is used as a reference object; analyzing the city from both directions breaks the administrative paradigm of government and restores urban research. The use of data mining in urban analysis is therefore very important. Satellite remote sensing data of Shenzhen from July 2018, measured by the MODIS sensor, were used to perform land surface temperature inversion and to analyze the heat island distribution of Shenzhen. This article acquired and classified POI data for Shenzhen using data crawler technology; the heat island and POI data were then simulated and analyzed on a GIS platform to discover the main features of the distribution of functionally equivalent areas. Shenzhen extends in the east-west direction, and its main streets follow the direction of the city's development, so the functional areas of the city are also distributed east-west; the urban heat island can thus be mapped against the functional urban areas, with corresponding regional POIs. The results clearly show that the distribution of the urban heat island and the distribution of urban POIs are in one-to-one correspondence. The urban heat island is primarily influenced by the properties of the underlying surface, setting aside the impact of the urban climate. Taking urban POIs as the object of analysis, their distribution is closely connected with population aggregation, so the distribution of the population corresponds with the distribution of the urban heat island.

Keywords: POI, satellite remote sensing, the population distribution, urban heat island thermal map.

6732 An Investigation into the Role of Market Beta in Asset Pricing: Evidence from the Romanian Stock Market

Authors: Ioan Popa, Radu Lupu, Cristiana Tudor

Abstract:

In this paper, we apply the Fama-MacBeth (FM) methodology to the cross-section of Romanian-listed common stocks and investigate the explanatory power of market beta on the cross-section of common stock returns on the Bucharest Stock Exchange. Various assumptions are empirically tested, such as linearity, market efficiency, the "no systematic effect of non-beta risk" hypothesis, and the positive expected risk-return trade-off hypothesis. We find that the Romanian stock market shows the same properties as other emerging markets in terms of efficiency and the significance of linear risk-return models. Our analysis covers weekly returns from January 2002 until May 2010, and the portfolio formation, estimation, and testing were performed in a rolling manner using 51 observations (one year) for each stage of the analysis.
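
As background, a minimal two-pass Fama-MacBeth sketch on synthetic data, using 51-week windows as in the paper; all data and variable names are illustrative, not the study's actual sample.

```python
import numpy as np

rng = np.random.default_rng(3)
T, N = 102, 25                               # weeks, stocks
market = rng.normal(0.002, 0.03, T)
beta_true = rng.uniform(0.5, 1.5, N)
returns = market[:, None] * beta_true + rng.normal(0, 0.04, (T, N))

est = slice(0, 51)                           # year 1: estimate betas
# Pass 1: time-series regression of each stock on the market
X = np.column_stack([np.ones(51), market[est]])
betas = np.linalg.lstsq(X, returns[est], rcond=None)[0][1]

# Pass 2: weekly cross-sectional regressions of returns on the betas
gammas = []                                  # weekly risk premia on beta
for t in range(51, 102):
    Z = np.column_stack([np.ones(N), betas])
    gammas.append(np.linalg.lstsq(Z, returns[t], rcond=None)[0][1])
gammas = np.array(gammas)

t_stat = gammas.mean() / (gammas.std(ddof=1) / np.sqrt(len(gammas)))
print(gammas.mean(), t_stat)                 # average premium and its t-stat
```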

Keywords: Bucharest Stock Exchange, Fama-Macbeth methodology, systematic risk, non-linear risk-return dependence.

6731 Prototype of a Federative Factory Data Management for the Support of Factory Planning Processes

Authors: Christian Mosch, Reiner Anderl, Antonio Álvaro de Assis Moura, Klaus Schützer

Abstract:

Due to short product life cycles, an increasing variety of products, and short cycles of leap innovations, manufacturing companies have to increase the flexibility of their factory structures. This flexibility rests on defined factory planning processes in which product, process, and resource data of various partial domains have to be considered; factory planning processes can thus be characterized as iterative, interdisciplinary, and participative processes [1]. To support the interdisciplinary and participative character of these planning processes, a federative factory data management (FFDM) is described as a holistic solution. FFDM has already been implemented as a prototype, and this paper presents the interim results of its development. The principle is to extract product, process, and resource data from documents of the various partial domains and to provide them as web services on a server. The described data can then be requested by the factory planner using an FFDM browser.

Keywords: BRAGECRIM, Factory Planning Process, Factory Data Management, Web Services.

6730 Transient Analysis & Performance Estimation of Gate Inside Junctionless Transistor (GI-JLT)

Authors: Sangeeta Singh, Pankaj Kumar, P. N. Kondekar

Abstract:

In this paper, the transient device performance of the n-type Gate Inside Junctionless Transistor (GI-JLT) is evaluated. 3-D Bohm Quantum Potential (BQP) transport device simulation is used to evaluate the delay and power dissipation performance. The GI-JLT shows a number of desirable device parameters, such as reduced propagation delay, dynamic power dissipation, power-delay product, intrinsic gate delay, and energy-delay product, as compared to the gate-all-around junctionless transistor (GAA-JLT). In addition, various other device performance parameters, namely the on/off current ratio, short channel effects (SCE), transconductance generation factor (TGF), unity-gain cut-off frequency (f_T), and subthreshold slope (SS) of the GI-JLT and GAA-JLT are analyzed and compared. The GI-JLT shows better device performance characteristics than the GAA-JLT for low-power and high-frequency applications because of the larger electrostatic control of the gate over the device operation.

Keywords: Gate-inside junctionless transistor GI-JLT, Gate-all-around junctionless transistor GAA-JLT, propagation delay, power delay product.

6729 Estimation of Seismic Deformation Demands of Tall Buildings with Symmetric Setbacks

Authors: A. Alirezaei, S. Vahdani

Abstract:

This study estimates the seismic demands of tall buildings with centrally symmetric setbacks using nonlinear time history analysis. Three setback structures, all 60 stories high with setbacks at three levels, are used for the evaluation. The effects of the irregularities introduced by the setbacks are evaluated by determining the global drift, story displacement, and story drift. Story displacement is normalized by the roof displacement and the first-story displacement, and story drift is normalized by the global drift. All results are calculated at the center of mass in the x and y directions, and the absolute values of these quantities are determined. The results show that increasing the vertical irregularity increases the global drift of the structure and enlarges the deformations over the height of the structure. It is also observed that the effects of geometric irregularity on the seismic deformations of setback structures are greater than those of mass irregularity.
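
For reference, the drift quantities named above can be computed directly from the story displacement profile; the sketch below uses illustrative values and an assumed uniform story height.

```python
import numpy as np

story_height = 3.2                                  # m, assumed uniform
# center-of-mass lateral displacements per floor, ground level first (m)
displ = np.array([0.00, 0.02, 0.05, 0.09, 0.14])

story_drift = np.diff(displ) / story_height         # one ratio per story
global_drift = displ[-1] / (story_height * (len(displ) - 1))  # roof / height

print(story_drift, global_drift)
print(story_drift / global_drift)   # story drift normalized by global drift
```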

Keywords: Deformation demand, drift, setback, tall building.

6728 The Competitive Newsvendor Game with Overestimated Demand

Authors: Chengli Liu, C. K. M. Lee

Abstract:

The traditional competitive newsvendor game assumes that decision makers are rational. However, people exhibit behavioral biases when making decisions, such as loss aversion, mental accounting, and overconfidence; overestimation of one's own performance is one type of overconfidence. The objective of this research is to analyze the impact of overestimated demand in the competitive newsvendor game with two players. The study builds a competitive newsvendor game model in which each newsvendor has private, overestimated information about its demand, while demand forecasts from a third-party institution are also available. This research shows that overestimation leads to a demand-stealing effect, which reduces the competitor's order quantity; however, the overall supply of the product increases due to the overestimation. The study derives the boundary condition under which the overestimating newsvendor's equilibrium order drops because of the demand-stealing effect from the other newsvendor. A newsvendor with a higher critical fractile will see its equilibrium order decrease as the other newsvendor's estimation level drops.
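
The critical-fractile reasoning can be illustrated with the standard newsvendor order rule; the sketch below assumes normally distributed demand and models overestimation as a bias added to the believed mean (the paper's game-theoretic equilibrium itself is not reproduced here).

```python
from scipy.stats import norm

def newsvendor_order(mu, sigma, price, cost, salvage=0.0, bias=0.0):
    """Order quantity maximizing expected profit for N(mu + bias, sigma)
    believed demand. bias > 0 models an overestimated demand belief."""
    critical_fractile = (price - cost) / (price - salvage)
    return (mu + bias) + sigma * norm.ppf(critical_fractile)

q_rational = newsvendor_order(mu=100, sigma=20, price=10, cost=6)
q_biased = newsvendor_order(mu=100, sigma=20, price=10, cost=6, bias=15)
print(q_rational, q_biased)     # the biased belief inflates the order
```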

Keywords: Bias, competitive newsvendor, Nash equilibrium, overestimation.

6727 Generation of Highly Ordered Porous Antimony-Doped Tin Oxide Film by a Simple Coating Method with Colloidal Template

Authors: Asep Bayu Dani Nandiyanto, Asep Suhendi, Yutaka Kisakibaru, Takashi Ogi, Kikuo Okuyama

Abstract:

An ordered porous antimony-doped tin oxide (ATO) film was successfully prepared using a simple coating process with colloidal templates. The facile production was effective when a combination of 16-nm ATO (as a model of an inorganic nanoparticle) and polystyrene (PS) spheres (as a model of the template) was simply coated to produce a composite ATO/PS film. Heat treatment was then used to remove the PS and produce the porous film. A porous film with a spherical pore shape and a highly ordered porous structure could be obtained, and a potential way to control the pore size by changing the initial template size was also demonstrated. A theoretical explanation and mechanism of porous-film formation are also given, which would be important for scaling-up prediction and estimation.

Keywords: Porous structure film, ATO particle, ultra-low refractive index, vertical drop method, low-density material.

6726 Analysis of a WDM System for Tanzania

Authors: Shaban Pazi, Chris Chatwin, Rupert Young, Philip Birch

Abstract:

Internet infrastructure in most parts of the world has been supported by advances in optical fiber technology, most notably the wavelength division multiplexing (WDM) system. Optical technology by means of WDM systems has revolutionized long-distance data transport and has resulted in high data capacity, cost reductions, extremely low bit error rates, and operational simplification of the overall Internet infrastructure. This paper analyses and compares the system impairments that occur at data transmission rates of 2.5 Gb/s and 10 Gb/s per wavelength channel in our proposed optical WDM system for Internet infrastructure in Tanzania. The results show that the data transmission rate of 2.5 Gb/s suffers minimal system impairments compared with the rate of 10 Gb/s per wavelength channel, and achieves sufficient system performance to provide a good Internet access service.

Keywords: Internet infrastructure, WDM system, standard single mode fibers, system impairments.

6725 Comparison of Data Reduction Algorithms for Image-Based Point Cloud Derived Digital Terrain Models

Authors: M. Uysal, M. Yilmaz, I. Tiryakioğlu

Abstract:

A Digital Terrain Model (DTM) is a digital numerical representation of the Earth's surface. DTMs have been applied to a diverse field of tasks, such as urban planning, military applications, glacier mapping, and disaster management. To express the Earth's surface as a mathematical model, an infinite number of point measurements would be needed. Since this is impossible, points at regular intervals are measured to characterize the Earth's surface, and a DTM of the Earth is generated. Hitherto, classical measurement techniques and photogrammetry have been in widespread use for the construction of DTMs. At present, RADAR, LiDAR, and stereo satellite images are also used. In recent years, especially because of its advantages, Airborne Light Detection and Ranging (LiDAR) has seen increased use in DTM applications: a 3D point cloud is created with LiDAR technology by obtaining numerous point data. Recently, with the development of image mapping methods, the use of unmanned aerial vehicles (UAV) for photogrammetric data acquisition has increased DTM generation from image-based point clouds. The accuracy of the DTM depends on various factors such as the data collection method, the distribution of elevation points, the point density, the properties of the surface, and the interpolation method. In this study, a random data reduction method is evaluated for DTMs generated from image-based point cloud data. The original image-based point cloud data set (100%) is reduced to a series of subsets by using a random algorithm, representing 75, 50, 25 and 5% of the original data set. Over the ANS campus of Afyon Kocatepe University as the test area, the DTM constructed from the original image-based point cloud data set is compared with DTMs interpolated from the reduced data sets by the Kriging interpolation method. The results show that the random data reduction method can be used to reduce image-based point cloud datasets to the 50% density level while still maintaining the quality of the DTM.
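
The random reduction step itself is simple; a minimal sketch, assuming the point cloud is an N x 3 array of x, y, z coordinates (the Kriging interpolation that follows is not shown):

```python
import numpy as np

def random_reduce(points: np.ndarray, keep: float, seed: int = 0) -> np.ndarray:
    """Return a random subset containing the fraction `keep` of the points."""
    rng = np.random.default_rng(seed)
    n_keep = max(1, int(len(points) * keep))
    idx = rng.choice(len(points), size=n_keep, replace=False)
    return points[idx]

cloud = np.random.default_rng(1).uniform(0, 100, size=(100_000, 3))
for level in (0.75, 0.50, 0.25, 0.05):       # the 75/50/25/5% subsets
    subset = random_reduce(cloud, level)
    print(level, len(subset))
```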

Keywords: DTM, unmanned aerial vehicle, UAV, random, Kriging.
