Search results for: Data integration
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8023

7573 Problems of Innovation Development of Wireless Data Transfer Branch in the Cellular Market of Kazakhstan

Authors: Yessengeldy Kuanyshpayev

Abstract:

In some countries the cellular market is close to saturation, while in others it continues to grow. The reasons differ, but these markets are united by a general susceptibility to innovation, provided the changes are genuinely innovative. The cellular market of Kazakhstan, for example, is characterized by a low share of smartphones among consumers, low population density, limited capacity of the 3G channel, and the absence of universal access to LTE technology, all of which limit the dynamic growth of this branch. These problems are aggravated by the failure of commercial projects launched by private companies, which prevents new products from being implemented and widely adopted among consumers. The object of the research is the possible integration of wireless and software technologies whose introduction could turn an idea into an innovation. An analysis of existing projects in the market and of the possible combination of these technologies, viewed through the prism of the theoretical foundations of innovative activity, shows that a company can develop and introduce innovations effectively only by strictly observing all terms and conditions of the innovation process, the principal condition being profit. Although the innovativeness of companies is a popular topic globally, there has been no research on the possibility of innovative breakthroughs in wireless Internet access in the cellular market of Kazakhstan.

Keywords: Cellular market, commercialization, innovation, company effectiveness.

7572 The Link between Unemployment and Inflation Using Johansen’s Co-Integration Approach and Vector Error Correction Modelling

Authors: Sagaren Pillay

Abstract:

In this paper, bi-annual time series data on unemployment rates (from the Labour Force Survey) are expanded to quarterly rates and linked to quarterly unemployment rates (from the Quarterly Labour Force Survey). The resulting linked series and the consumer price index (CPI) series are examined using Johansen's cointegration approach and vector error correction modelling. The study finds that both series are integrated of order one and are cointegrated. A statistically significant cointegrating relationship is found to exist between the time series of unemployment rates and the CPI. Given this significant relationship, the study models it using Vector Error Correction Models (VECM), one with a restriction on the deterministic term and the other with no restriction.

A formal statistical confirmation of the existence of a unique linear and lagged relationship between inflation and unemployment for the period between September 2000 and June 2011 is presented. For the given period, the CPI was found to be an unbiased predictor of the unemployment rate. This relationship can be explored further for the development of appropriate forecasting models incorporating other study variables.
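
The methodology described above can be illustrated with a short, generic sketch. The example below is not the authors' code: it uses Python's statsmodels (an assumed stand-in for whatever software the study used) and synthetic quarterly series in place of the linked unemployment and CPI data, but it walks through the same three steps: testing that each series is I(1), running the Johansen cointegration test, and fitting a VECM with and without a restriction on the deterministic term.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.vector_ar.vecm import coint_johansen, VECM

# Synthetic quarterly stand-ins for the linked unemployment-rate and CPI series:
# two I(1) series sharing a common stochastic trend, so they are cointegrated.
rng = np.random.default_rng(42)
n = 44  # roughly Sep 2000 - Jun 2011 at quarterly frequency
trend = np.cumsum(rng.normal(size=n))
cpi = 100 + 2.0 * trend + rng.normal(scale=0.5, size=n)
unemp = 25 - 1.5 * trend + rng.normal(scale=0.5, size=n)
data = pd.DataFrame({"unemployment": unemp, "cpi": cpi})

# Step 1: check that each series is integrated of order one (ADF on levels and differences).
for col in data:
    p_level = adfuller(data[col])[1]
    p_diff = adfuller(data[col].diff().dropna())[1]
    print(f"{col}: ADF p-value level={p_level:.3f}, first difference={p_diff:.3f}")

# Step 2: Johansen cointegration test (trace statistics vs. 95% critical values).
jres = coint_johansen(data, det_order=0, k_ar_diff=1)
print("trace statistics:", jres.lr1, "95% critical values:", jres.cvt[:, 1])

# Step 3: fit two VECMs, one restricting the constant to the cointegration
# relation ("ci") and one with an unrestricted constant ("co").
for det in ("ci", "co"):
    vecm = VECM(data, k_ar_diff=1, coint_rank=1, deterministic=det).fit()
    print(det, "loading coefficients (alpha):", vecm.alpha.ravel())
```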

Keywords: Forecasting, lagged, linear, relationship.

7571 Model Order Reduction for Frequency Response and Effect of Order of Method for Matching Condition

Authors: Aref Ghafouri, Mohammad Javad Mollakazemi, Farhad Asadi

Abstract:

In this paper, a model order reduction method is used to approximate the linear and nonlinear behaviour present in experimental data. The method yields an offline reduced-order model that approximates the experimental data, reproduces the behaviour and order of the system, and matches the experimental data over certain frequency ranges. The method is compared on different experimental data sets, and the influence of the chosen reduction order on achieving a good and sufficient match to the data is investigated in terms of the real and imaginary parts of the frequency response curve. Finally, the effect of the reduction order, an important parameter, on nonlinear experimental data is explained further.
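
The abstract does not name the specific reduction algorithm, so the sketch below illustrates one standard choice, balanced truncation, on a synthetic full-order model; the model, the chosen reduced order r, and the frequency grid are all assumptions for illustration. It shows how the reduced order controls how closely the real and imaginary parts of the frequency response are matched.

```python
import numpy as np
from scipy import linalg, signal

# Synthetic full-order stable SISO model standing in for a model identified
# from experimental frequency-response data (n = 8 states, distinct real poles).
n, r = 8, 3  # full order and chosen reduced order
A = np.diag(-np.linspace(0.5, 12.0, n))
B = np.ones((n, 1))
C = np.linspace(1.0, 2.0, n).reshape(1, n)
D = np.zeros((1, 1))

# Balanced truncation: solve the Lyapunov equations for the controllability and
# observability Gramians, balance them, and keep the r largest Hankel modes.
Wc = linalg.solve_continuous_lyapunov(A, -B @ B.T)
Wo = linalg.solve_continuous_lyapunov(A.T, -C.T @ C)
Lc = linalg.cholesky(Wc, lower=True)
Lo = linalg.cholesky(Wo, lower=True)
U, s, Vt = np.linalg.svd(Lo.T @ Lc)               # s = Hankel singular values
Tl = np.diag(s[:r] ** -0.5) @ U[:, :r].T @ Lo.T
Tr = Lc @ Vt[:r].T @ np.diag(s[:r] ** -0.5)
Ar, Br, Cr = Tl @ A @ Tr, Tl @ B, C @ Tr

# Compare the frequency responses; increasing r improves the matching condition
# at the cost of a larger reduced model.
w = np.logspace(-2, 2, 200)
_, H_full = signal.freqresp((A, B, C, D), w=w)
_, H_red = signal.freqresp((Ar, Br, Cr, D), w=w)
print("max |H_full - H_red| over the grid:", np.abs(H_full - H_red).max())
```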

Keywords: Frequency response, order of model reduction, frequency matching condition.

7570 Building a Scalable Telemetry Based Multiclass Predictive Maintenance Model in R

Authors: Jaya Mathew

Abstract:

Many organizations face the challenge of how to analyze and build machine learning models on their sensitive telemetry data. In this paper, we discuss how users can leverage the power of R without having to move their big data around, as well as a cloud-based solution for organizations willing to host their data in the cloud. By using ScaleR technology to benefit from parallelization and remote computing, with R Services on premises or in the cloud, users can apply R at scale without moving their data.

Keywords: Predictive maintenance, machine learning, big data, cloud, on premise SQL, R.

7569 Islamic Finance: What Is the Outlook for Italy?

Authors: Paolo Pietro Biancone

Abstract:

The spread of Islamic financial instruments is an opportunity to offer integration for the immigrant population and to attract, through specific products, the wealth of sovereign funds from the "Arab" countries. However, it is important to consider the possibility of comparing a traditional finance model, which in recent times has given rise to many doubts, with an "alternative" finance model, in which the ethical aspect arising from religious principles is very important.

Keywords: Banks, Europe, Islamic Finance, Italy.

7568 Energy Efficient Resource Allocation and Scheduling in Cloud Computing Platform

Authors: Shuen-Tai Wang, Ying-Chuan Chen, Yu-Ching Lin

Abstract:

There has been a renewal of interest in the relation between green IT and cloud computing in recent years. Cloud computing has to be a highly elastic environment that provides stable services to users. The growing use of cloud computing facilities has caused marked energy consumption, putting negative pressure on the electricity costs of computing centers and data centers. Each year more network devices, storage systems, and computers are purchased and put to use, but it is not just the number of computers that is driving energy consumption upward. We foresee that the power consumption of cloud computing facilities may double, triple, or grow even more in the next decade. This paper addresses resource allocation and scheduling technologies that are still lacking or not yet well developed for reducing energy use in cloud computing platforms. In particular, our approach relies on dynamically consolidating services onto an appropriate number of machines according to user requirements and temporarily shutting machines down after they finish in order to conserve energy. We present initial work on the integration of a resource and power management system that focuses on reducing power consumption while still meeting the minimum quality of service required by the cloud computing platform.
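
As a rough illustration of the consolidation idea, the toy Python sketch below packs service demands onto as few machines as possible (first-fit decreasing) and drops machines that become idle, standing in for "temporarily shutting down the machines after finish". The service names, demands, and unit capacity are invented for the example; the authors' actual system is not reproduced here.

```python
from dataclasses import dataclass, field

@dataclass
class Machine:
    capacity: float                    # normalized capacity of one physical machine
    used: float = 0.0
    services: list = field(default_factory=list)

def consolidate(demands, capacity=1.0):
    """Pack service demands onto as few machines as possible (first-fit decreasing);
    machines that are never needed simply stay powered off."""
    machines = []
    for name, demand in sorted(demands.items(), key=lambda kv: kv[1], reverse=True):
        target = next((m for m in machines if m.used + demand <= m.capacity), None)
        if target is None:             # no running machine has room: power one on
            target = Machine(capacity)
            machines.append(target)
        target.used += demand
        target.services.append((name, demand))
    return machines

def release(machines, finished):
    """Drop finished services and shut down machines that become idle."""
    for m in machines:
        m.services = [(s, d) for s, d in m.services if s not in finished]
        m.used = sum(d for _, d in m.services)
    return [m for m in machines if m.services]

# Example: six services with fractional CPU demands on unit-capacity machines.
demands = {"web": 0.6, "db": 0.5, "cache": 0.3, "batch": 0.4, "log": 0.1, "ml": 0.7}
running = consolidate(demands)
print("machines powered on:", len(running))
running = release(running, finished={"ml", "cache"})
print("machines still needed after jobs finish:", len(running))
```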

Keywords: Cloud computing, energy utilization, power consumption, resource allocation.

7567 Trade Openness and Its Effects on Economic Growth in Selected South Asian Countries: A Panel Data Study

Authors: Samra Bajwa, Muhammad W. Siddiqi

Abstract:

The study investigates the causal link between trade openness and economic growth for four South Asian countries over the periods 1972-1985 and 1986-2007, to examine the scenario before and after the implementation of SAARC. Panel cointegration and FMOLS techniques are employed for the short-run and long-run estimates. For 1972-85, short-run unidirectional causality from GDP to openness is found, whereas for 1986-2007 there is bi-directional causality between GDP and openness. The long-run elasticity between GDP and openness carries a negative sign for 1972-85, indicating a negative long-run relationship, while for 1986-2007 it carries a positive sign, indicating positive causation between GDP and openness. It can therefore be concluded that after the implementation of SAARC the overall situation of the selected countries improved. The long-run coefficient of the error term also suggests that short-term equilibrium adjustments are driven by adjustment back to the long-run equilibrium.

Keywords: Causality, Economic Growth, Panel Co-integration, SAARC, Trade Openness.

7566 Development of Manufacturing Simulation Model for Semiconductor Fabrication

Authors: Syahril Ridzuan Ab Rahim, Ibrahim Ahmad, Mohd Azizi Chik, Ahmad Zafir Md. Rejab, and U. Hashim

Abstract:

This research presents the development of a simulation model for WIP management in semiconductor fabrication. Manufacturing simulation modelling is needed for productivity optimization analysis because of the complex process flows involved: more than 35 percent of the processing steps are re-entrant, revisiting the same equipment more than 15 times. Furthermore, semiconductor fabrication must produce a high product mix, with total processing steps varying from 300 to 800 and cycle times between 30 and 70 days. Besides this complexity, the expensive wafer cost, which can impact the company's profit margin once a due date is missed, is another motivation to explore simulation modelling for such analyses. In this paper, the simulation model is developed using the existing commercial software platform AutoSched AP, with customized integration with the Manufacturing Execution System (MES) and the Advanced Productivity Family (APF) for the data collection used to configure the model parameters and data sources. Model parameters such as processing step cycle times, equipment performance, handling times, and operator efficiency are collected through this customization. Once the parameters are validated, a few further customizations are made to ensure that the model executes correctly. The accuracy of the simulation model is validated against the actual daily output of all equipment; the comparison of the simulation results with actual output achieved 95 percent accuracy over 30 days. The model was later used to perform various what-if analyses to understand impacts on cycle time and overall output. With this simulation model, a complex manufacturing environment such as a semiconductor fabrication plant (fab) now has an alternative source of validation for the impact analysis of any new requirements.

Keywords: Advanced Productivity Family (APF), Complementary Metal Oxide Semiconductor (CMOS), Manufacturing Execution Systems (MES), Work In Progress (WIP).

7565 Big Data Strategy for Telco: Network Transformation

Authors: F. Amin, S. Feizi

Abstract:

Big data has the potential to improve the quality of services, enable infrastructure that businesses depend on to adapt continually and efficiently, improve the performance of employees, help organizations better understand customers, and reduce liability risks. The analytics and marketing models of fixed and mobile operators are falling short in combating churn and declining revenue per user. Big data presents a new way to reverse this trend and improve profitability. The benefits of big data and next-generation networks, however, go well beyond improved customer relationship management. Next-generation networks are in a prime position to monetize rich supplies of customer information while remaining mindful of legal and privacy issues. As data assets are transformed into new revenue streams, they will become integral to high performance.

Keywords: Big Data, Next Generation Networks, Network Transformation.

7564 Exploring the Potential of Chatbots in Higher Education: A Preliminary Study

Authors: S. Studente, S. Ellis, S. F. Garivaldis

Abstract:

We report upon a study introducing a chatbot to develop learning communities at a London university with a largely international student base. The focus of the chatbot was twofold: to ease students' transition into their first year of university study, and to increase study engagement. Four learning communities were created using the chatbot: level 3 foundation, level 4 undergraduate, level 6 undergraduate, and level 7 postgraduate. Students and programme leaders were provided with access to the chatbot via a mobile app prior to their study induction and throughout the autumn term of 2019. At the end of the term, data were collected via questionnaires and focus groups with students and teaching staff to allow identification of benefits and challenges. Findings indicated a positive correlation between study engagement and engagement with peers. Students reported that the chatbot enabled them to obtain support and connect with their programme leader. Both staff and students also made recommendations on how engagement could be further enhanced using the bot, in terms of a clearly specified purpose, integration with existing university systems, leading by example, and connectivity. Extending upon these recommendations, a second pilot study is planned for September 2020, for which the focus will be upon improving attendance rates, student satisfaction, and module pass rates.

Keywords: Chatbot, e-learning, learning communities, student engagement.

7563 Collaborative Education Practice in a Data Structure E-Learning Course

Authors: Gang Chen, Ruimin Shen

Abstract:

This paper presents a collaborative education model consisting of four parts: collaborative teaching, collaborative working, collaborative training, and interaction. Supported by an e-learning platform, collaborative education was practiced in a data structure e-learning course. The data collected show that most students accept collaborative education. This paper goes one step further, attempting to determine which aspects appear to be most important or helpful in collaborative education.

Keywords: Collaborative work, education, data structures.

7562 Generic Data Warehousing for Consumer Electronics Retail Industry

Authors: S. Habte, K. Ouazzane, P. Patel, S. Patel

Abstract:

The dynamic and highly competitive nature of the consumer electronics retail industry means that businesses in this industry face different decision-making challenges in relation to pricing, inventory control, consumer satisfaction, and product offerings. To overcome the challenges facing retailers and create opportunities, we propose a generic data warehousing solution that can be applied to a wide range of consumer electronics retailers with minimal configuration. The solution includes a dimensional data model, a template SQL script, a high-level architectural description, an ETL tool developed using C#, a set of APIs, and data access tools. It has been successfully applied by ASK Outlets Ltd UK, resulting in improved productivity and enhanced sales growth.

Keywords: Consumer electronics retail, dimensional data model, data analysis, generic data warehousing, reporting.

7561 An Algebra for Protein Structure Data

Authors: Yanchao Wang, Rajshekhar Sunderraman

Abstract:

This paper presents an algebraic approach to optimizing queries in a domain-specific database management system for protein structure data. The approach introduces several protein-structure-specific algebraic operators to query the complex data stored in an object-oriented database system. The Protein Algebra provides an extensible set of high-level Genomic Data Types and Protein Data Types, along with a comprehensive collection of appropriate genomic and protein functions. The paper also presents a query translator that converts high-level query specifications in the algebra into low-level query specifications in Protein-QL, a query language designed to query protein structure data. The query transformation process uses a Protein Ontology that serves the purpose of a dictionary.

Keywords: Domain-Specific Data Management, Protein Algebra, Protein Ontology, Protein Structure Data.

7560 The Concept of Decentralization: Modern Challenges for the EU Countries, Prospects for Further Implementation in Ukraine

Authors: Alina Murtishcheva

Abstract:

The tendency towards globalization, the challenges to democracy and peace caused by the Russian invasion of Ukraine, and other global conflicts require a search for common orientations of governmental development, including local government. The formation of a common theoretical framework for local government is a guarantee not only of the harmonisation of European legislation but also creates prerequisites for the integration of new members into the European Union. One of the most important milestones of such a theoretical framework is the concept of decentralization. Decentralization as a phenomenon is characteristic of most European Union countries at different historical stages. For Ukraine, as a country that has clearly defined a European integration vector of development, understanding not only the legal but also the theoretical basis of decentralization processes in European countries is an important prerequisite for further reforms. Decentralization takes different forms, which leads to a variety of understandings in doctrine and, consequently, different interpretations in national legislation. Despite this, decentralization is based on common ideas and values such as democracy, participation, the rule of law, and proximity of government that are shared by all EU member states. Nevertheless, not all EU countries are currently implementing broad decentralization in their political and legal practices. Some countries are gradually moving in this direction, while others remain quite centralised. There is also a new, insufficiently studied trend today – recentralisation, which can be broadly defined as the strengthening of centralization tendencies in countries that were considered to be decentralized. Consequently, an exploratory theoretical study is needed to identify how the concept of decentralization is combined with the recentralization tendency in EU member states. The purpose of this study is to empirically analyse scientific approaches to the concept of "decentralization", to highlight the tendency of recentralisation and its consequences, to analyse Ukraine's experience in the field of decentralization of public power, and to outline the prospects for further development of Ukrainian legislation in this area.

Keywords: Centralization, decentralization, local government, recentralization, reforms.

7559 Comparative Spatial Analysis of a Re-arranged Hospital Building

Authors: Burak Köken, Hatice D. Arslan, Bilgehan Y. Çakmak

Abstract:

Analyzing the relational networks within hospital buildings, which have complex structures and distinctive spatial relationships, is quite difficult. Hospital buildings, which require specialized spatial relationship solutions during design and self-innovation through developing technology, should survive and keep providing service even after disasters such as earthquakes. In this study, a hospital building whose load-bearing system was strengthened because of insufficient earthquake performance, and for which an additional building had to be constructed to meet the increasing need for space, is discussed, and a comparative spatial evaluation of the hospital building is made with regard to its status before and after the change. For this purpose, the spatial organization of the building before and after the change was analyzed by means of the Space Syntax method, and the effects of the change on space organization parameters were examined through an analytical procedure. Using UCL Depthmap software, Connectivity, Visual Mean Depth, Beta, and Visual Integration analyses were conducted. Based on the data obtained from the analyses, it was seen that the relationships between the spaces of the building increased after the change and that the building became more explicit and understandable for its occupants. Furthermore, the analysis findings showed that an increase in depth makes spaces harder to perceive, and that changes made with this problem in mind generally ease spatial use.

Keywords: Architecture, hospital building, space syntax, strengthening.

7558 A Combined Cipher Text Policy Attribute-Based Encryption and Timed-Release Encryption Method for Securing Medical Data in Cloud

Authors: G. Shruthi, Purohit Shrinivasacharya

Abstract:

The biggest problem in the cloud is securing outsourced data. A cloud environment cannot be considered trusted. The problem becomes more challenging when outsourced data sources are managed by multiple outsourcers with different access rights. Several methods have been proposed to protect data confidentiality against the cloud service provider and to support fine-grained data access control. We propose a method that combines Ciphertext-Policy Attribute-Based Encryption (CP-ABE) and Timed-Release Encryption (TRE) to securely control medical data storage in a public cloud.

Keywords: Attribute, encryption, security, trapdoor.

7557 Data Mining Classification Methods Applied in Drug Design

Authors: Mária Stachová, Lukáš Sobíšek

Abstract:

Data mining incorporates a group of statistical methods used to analyze a set of information, or a data set. It operates with models and algorithms, which are powerful tools with great potential; they can help people understand the patterns in a given chunk of information, so it is obvious that data mining tools have a wide area of application. For example, in theoretical chemistry, data mining tools can be used to predict molecule properties or improve computer-assisted drug design. Classification analysis is one of the major data mining methodologies. The aim of this contribution is to create a classification model able to deal with a huge data set with high accuracy. For this purpose, logistic regression, Bayesian logistic regression, and random forest models were built using the R software. A Bayesian logistic regression model was also created in the Latent GOLD software. These classification methods belong to supervised learning methods. It was necessary to reduce the dimension of the data matrix before constructing the models, and thus factor analysis (FA) was used. The models were applied to predict the biological activity of molecules, potential new drug candidates.
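
A minimal stand-in for this workflow, using Python's scikit-learn rather than the R and Latent GOLD software named above and a synthetic descriptor matrix in place of real molecular data, might look as follows: factor analysis reduces the matrix dimension, and logistic regression and random forest classifiers are then compared by cross-validation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import FactorAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for a molecular descriptor matrix: 1000 molecules,
# 200 correlated descriptors, binary biological activity label.
X, y = make_classification(n_samples=1000, n_features=200, n_informative=20,
                           n_redundant=100, random_state=0)

# Factor analysis reduces the descriptor matrix before classification,
# mirroring the FA dimension-reduction step described in the abstract.
models = {
    "logistic regression": make_pipeline(FactorAnalysis(n_components=20, random_state=0),
                                         LogisticRegression(max_iter=1000)),
    "random forest": make_pipeline(FactorAnalysis(n_components=20, random_state=0),
                                   RandomForestClassifier(n_estimators=200, random_state=0)),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: mean 5-fold CV accuracy = {acc:.3f}")
```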

Keywords: Data mining, classification, drug design, QSAR.

7556 Regionalism and Regionalization in Central Asia

Authors: L. Delovarova, A. Davar, S. Asanov, F. Kukeyeva

Abstract:

This article is dedicated to the question of regionalism and regionalization in contemporary international relations, with a specific focus on Central Asia. The article addresses the question of whether or not Central Asia can be referred to as a true geopolitical region. In addressing this question, the authors examine particular factors that are essential for the formation of a region, including those tied to the economy, energy, culture, and labor migration.

Keywords: Central Asia, integration, regionalization, regionalism.

7555 EPR Hiding in Medical Images for Telemedicine

Authors: K. A. Navas, S. Archana Thampy, M. Sasikumar

Abstract:

Medical image data hiding has strict constraints such as high imperceptibility, high capacity, and high robustness. Achieving these three requirements simultaneously is highly cumbersome. Some works on data hiding, watermarking, and steganography suitable for telemedicine applications have been reported in the literature, but none is reliable in all aspects. Electronic Patient Report (EPR) data hiding for telemedicine demands that it be blind and reversible. This paper proposes a novel approach to blind reversible data hiding based on the integer wavelet transform. Experimental results show that this scheme outperforms the prior art in terms of zero BER (Bit Error Rate), higher PSNR (Peak Signal to Noise Ratio), and large EPR data embedding capacity, with WPSNR (Weighted Peak Signal to Noise Ratio) around 53 dB, compared with the existing reversible data hiding schemes.

Keywords: Biomedical imaging, data security, data communication, teleconferencing.

7554 A Robust Method for Encrypted Data Hiding Technique Based on Neighborhood Pixels Information

Authors: Ali Shariq Imran, M. Younus Javed, Naveed Sarfraz Khattak

Abstract:

This paper presents a novel method for data hiding based on neighborhood pixel information to calculate the number of bits that can be used for substitution, together with a modified Least Significant Bit technique for data embedding. The modified solution is independent of the nature of the data to be hidden and gives correct results along with unnoticeable image degradation. The technique for finding the number of bits that can be used for data hiding uses the green component of the image, as it is less sensitive to the human eye, making it practically impossible for the human eye to tell whether the image contains embedded data or not. The application further encrypts the data using a custom-designed algorithm before embedding the bits into the image for additional security. The overall process consists of three main modules, namely embedding, encryption, and extraction.
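
A much simplified sketch of the embedding idea is given below: it hides a bit sequence in the least significant bit of the green channel using NumPy. The paper's adaptive, neighborhood-based bit budget and its custom encryption step are not reproduced; a fixed one bit per pixel and a random cover image are assumptions made purely for illustration.

```python
import numpy as np

def embed_lsb_green(image, message_bits):
    """Hide a bit sequence in the least significant bit of the green channel.
    (The paper derives a per-pixel bit budget from neighborhood statistics and
    encrypts the payload first; here a fixed single bit per pixel is used.)"""
    stego = image.copy()
    green = stego[:, :, 1].ravel()
    bits = np.asarray(message_bits, dtype=np.uint8)
    if bits.size > green.size:
        raise ValueError("message does not fit in the green channel")
    green[:bits.size] = (green[:bits.size] & 0xFE) | bits   # overwrite the LSB only
    stego[:, :, 1] = green.reshape(stego.shape[:2])
    return stego

def extract_lsb_green(stego, n_bits):
    """Recover the first n_bits hidden in the green-channel LSBs."""
    return stego[:, :, 1].ravel()[:n_bits] & 1

# Round-trip check on a random RGB "cover" image.
cover = np.random.default_rng(1).integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
payload = np.random.default_rng(2).integers(0, 2, size=500, dtype=np.uint8)
stego = embed_lsb_green(cover, payload)
assert np.array_equal(extract_lsb_green(stego, payload.size), payload)
print("max per-pixel change in green channel:",
      int(np.abs(stego[:, :, 1].astype(int) - cover[:, :, 1].astype(int)).max()))
```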

Keywords: Data hiding, image processing, information security, steganography.

7553 Unsupervised Outlier Detection in Streaming Data Using Weighted Clustering

Authors: Yogita, Durga Toshniwal

Abstract:

Outlier detection in streaming data is very challenging because streaming data cannot be scanned multiple times and new concepts may keep evolving. Irrelevant attributes can be termed noisy attributes, and such attributes further magnify the challenge of working with data streams. In this paper, we propose an unsupervised outlier detection scheme for streaming data. The scheme is based on clustering, as clustering is an unsupervised data mining task that does not require labeled data; both density-based and partitioning clustering are combined for outlier detection. In this scheme, partitioning clustering is also used to assign weights to attributes depending upon their respective relevance, and the weights are adaptive. Weighted attributes help reduce or remove the effect of noisy attributes. Keeping in view the challenges of streaming data, the proposed scheme is incremental and adaptive to concept evolution. Experimental results on synthetic and real-world data sets show that our proposed approach outperforms an existing approach (CORM) in terms of outlier detection rate, false alarm rate, and increasing percentages of outliers.
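
The sketch below is not the authors' scheme, only a minimal illustration of the ingredients it combines: incremental (single-pass) clustering over stream chunks, adaptive attribute weights derived from the within-cluster spread so that noisy attributes count less, and a weighted distance to the nearest centroid as the outlier score. The cluster count, threshold, and simulated stream are assumptions.

```python
import numpy as np
from sklearn.cluster import MiniBatchKMeans

rng = np.random.default_rng(0)
model = MiniBatchKMeans(n_clusters=3, random_state=0)
weights = None                                       # adaptive per-attribute weights

def process_chunk(chunk, threshold=3.0):
    """One pass over a chunk: update clusters incrementally, adapt attribute
    weights from the spread around the assigned centroids, and flag points
    whose weighted distance to their centroid is unusually large."""
    global weights
    model.partial_fit(chunk)
    residual = chunk - model.cluster_centers_[model.predict(chunk)]
    new_w = 1.0 / (residual.var(axis=0) + 1e-9)      # noisy attributes get small weight
    weights = new_w if weights is None else 0.8 * weights + 0.2 * new_w
    score = np.sqrt((residual ** 2 * weights).sum(axis=1))
    return score > threshold * np.median(score)

# Simulated stream: three clusters in three relevant attributes, one noisy
# attribute, and a few injected outliers in every chunk.
for t in range(5):
    chunk = rng.normal(size=(200, 4)) + rng.choice([0.0, 5.0, 10.0], size=(200, 1))
    chunk[:, 3] = rng.normal(scale=10.0, size=200)   # irrelevant, noisy attribute
    chunk[:3] += 40.0                                # three injected outliers
    print(f"chunk {t}: {process_chunk(chunk).sum()} points flagged")
```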

Keywords: Concept Evolution, Irrelevant Attributes, Streaming Data, Unsupervised Outlier Detection.

7552 The Effect of Measurement Distribution on System Identification and Detection of Behavior of Nonlinearities of Data

Authors: Mohammad Javad Mollakazemi, Farhad Asadi, Aref Ghafouri

Abstract:

In this paper, we consider and apply parametric modeling to experimental data from dynamical systems. We investigate the different distributions of output measurements obtained from several dynamical systems. By processing the variance in the experimental data, we obtain the region of nonlinearity in the data, and identification of the output section is then applied under different situations and data distributions. Finally, the effect of the spread of the measurements, such as the variance, on identification and the limitations of this approach are explained.

Keywords: Gaussian process, Nonlinearity distribution, Particle filter.

7551 An Analysis of Learners’ Reports for Measuring Co-Creational Education

Authors: Takatoshi Ishii, Koji Kimita, Keiichi Muramatsu, Yoshiki Shimomura

Abstract:

To increase the quality of learning, teachers and learners need to make a mutual effort to realize educational value. For this purpose, we need to manage co-creational education between the teacher and the learners. In this research, we try to find features of co-creational education. More precisely, we analyzed learners' reports by natural language processing and extracted features that describe the state of co-creational education.

Keywords: Co-creational education, e-portfolios, ICT integration, labeled Latent Dirichlet allocation.

7550 Exponentially Weighted Simultaneous Estimation of Several Quantiles

Authors: Valeriy Naumov, Olli Martikainen

Abstract:

In this paper, we propose a new method for simultaneously generating multiple quantiles corresponding to given probability levels from data streams and massive data sets. This method provides a basis for the development of single-pass, low-storage quantile estimation algorithms, which differ in complexity, storage requirement, and accuracy. We demonstrate that such algorithms may perform well even for heavy-tailed data.
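
The abstract does not spell out the estimator, so the following is an assumed, generic example of the kind of algorithm it describes: a single-pass, constant-memory tracker that updates several quantile estimates with a constant step, thereby weighting recent observations exponentially. The step size and initialization are illustrative choices.

```python
import numpy as np

class EWQuantiles:
    """Single-pass tracker of several quantiles.  Each estimate is nudged toward
    its target probability level with a constant step, which weights recent
    observations exponentially (older data decay geometrically)."""

    def __init__(self, probs, step=0.05, init=0.0):
        self.probs = np.asarray(probs, dtype=float)
        self.q = np.full(self.probs.shape, float(init))
        self.step = step
        self.scale = 1.0        # running scale so the step adapts to the data range

    def update(self, x):
        self.scale = 0.99 * self.scale + 0.01 * abs(x - self.q.mean())
        # Stochastic-approximation step: move up when x exceeds the estimate,
        # down otherwise, in proportions that balance out at the true quantile.
        self.q += self.step * self.scale * (self.probs - (x <= self.q))
        return self.q

# Heavy-tailed stream (Student t with 2 degrees of freedom).
rng = np.random.default_rng(0)
est = EWQuantiles(probs=[0.5, 0.9, 0.99])
for x in rng.standard_t(df=2, size=100_000):
    est.update(x)
print("estimated quantiles :", np.round(est.q, 3))
print("empirical quantiles :",
      np.round(np.quantile(rng.standard_t(df=2, size=100_000), [0.5, 0.9, 0.99]), 3))
```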

Keywords: Quantile estimation, data stream, heavy-tailed distribution, tail index.

7549 Enhanced Data Access Control of Cooperative Environment used for DMU Based Design

Authors: Wei Lifan, Zhang Huaiyu, Yang Yunbin, Li Jia

Abstract:

Through the analysis of the digital design process based on the digital mockup (DMU), it is clear that a distributed cooperative supporting environment is a foundational condition for adopting a DMU-based design approach. Data access authorization is the primary concern because of the value and sensitivity of the data for the enterprise. Access control for administrators is often rather weak compared with that for business users. The authors therefore established an enhanced system to prevent administrators from accessing engineering data through indirect means and without authorization. Thus, data security is improved.

Keywords: Access control, DMU, PLM, virtual prototype.

7548 Pattern Recognition Using Feature Based Die-Map Clustering in the Semiconductor Manufacturing Process

Authors: Seung Hwan Park, Cheng-Sool Park, Jun Seok Kim, Youngji Yoo, Daewoong An, Jun-Geol Baek

Abstract:

As big data analysis becomes important, yield prediction using data from the semiconductor process is essential. In general, yield prediction and the analysis of the causes of failure are closely related. The purpose of this study is to analyze the patterns that affect the final test results using die-map-based clustering. Many studies have been conducted using die data from the semiconductor test process. However, such analysis has limitations, as the test data are less directly related to the final test results. Therefore, this study proposes a framework for analysis through clustering using more detailed data than existing die data. The study consists of three phases. In the first phase, a die map is created from the fail-bit data in each sub-area of the die. In the second phase, clustering using the map data is performed. The third phase is to find patterns that affect the final test result. Finally, the proposed three steps are applied to actual industrial data, and the experimental results show potential for field application.
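
A compact sketch of the three phases on synthetic data (real fail-bit data are obviously not reproduced here) could look like the following: per-die fail-bit maps over an 8x8 grid of sub-areas, k-means clustering of the flattened maps, and inspection of each cluster's mean map to reveal its spatial fail pattern.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
grid = (8, 8)                      # sub-areas per die

def synthetic_die(pattern):
    """Phase 1: a die map of fail-bit counts per sub-area (synthetic)."""
    die = rng.poisson(0.3, size=grid).astype(float)
    if pattern == "edge":          # failures concentrated at the die edge
        die[0, :] += rng.poisson(4.0, size=grid[1])
    elif pattern == "center":      # failures concentrated in the die center
        die[3:5, 3:5] += rng.poisson(4.0, size=(2, 2))
    return die

dies = [synthetic_die(p) for p in ["edge"] * 40 + ["center"] * 40 + ["random"] * 40]
X = np.stack([d.ravel() for d in dies])        # flatten each map into a feature vector

# Phase 2: cluster the die maps.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Phase 3: the mean map of each cluster reveals its spatial fail pattern,
# which can then be related to the final test results.
for k in range(3):
    mean_map = X[labels == k].mean(axis=0).reshape(grid)
    hottest = np.unravel_index(mean_map.argmax(), grid)
    print(f"cluster {k}: {np.sum(labels == k)} dies, hottest sub-area {hottest}")
```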

Keywords: Die-Map Clustering, Feature Extraction, Pattern Recognition, Semiconductor Manufacturing Process.

7547 The Study of Managing the Personal Consent in the Electronic Healthcare Environment

Authors: Yi-Yun Ko, Der-Ming Liou

Abstract:

Electronic Health Record (EHR) systems are very general, and more attention should be paid to patient privacy. The patient's consent is one of the key elements when dealing with privacy topics. This study focuses on creating and managing patient consent. The integration of the HL7 standards and the IHE BPPC profile provides a basis for the creation of patient consent. The established platform offers patients a way to create, revoke, or update their consents. Through this platform, they can manage their consents in an easier manner.

Keywords: Consent, EHR, HL7, IHE.

7546 Speed Characteristics of Mixed Traffic Flow on Urban Arterials

Authors: Ashish Dhamaniya, Satish Chandra

Abstract:

Speed and traffic volume data are collected on different sections of four-lane and six-lane roads in three metropolitan cities in India. The speed data are analyzed to fit statistical distributions to individual vehicle speeds and to the combined speeds of all vehicles. It is noted that the speed data of individual vehicles generally follow a normal distribution, but the speed data of all vehicles combined at a section of urban road may or may not follow a normal distribution, depending upon the composition of the traffic stream. A new term, the Speed Spread Ratio (SSR), is introduced in this paper; it is the ratio of the difference between the 85th and 50th percentile speeds to the difference between the 50th and 15th percentile speeds. If the SSR is unity, then the speed data are truly normally distributed. It is noted that on six-lane urban roads, speed data follow a normal distribution only when the SSR is in the range 0.86 – 1.11. The range of SSR is validated on four-lane roads as well.
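
The SSR defined above translates directly into a few lines of code. The sketch below computes it from the stated percentile definition; the simulated single-class and mixed traffic streams are illustrative assumptions, not the study's field data.

```python
import numpy as np

def speed_spread_ratio(speeds):
    """SSR = (85th percentile - 50th percentile) / (50th percentile - 15th percentile).
    For a perfectly normal speed distribution the ratio is exactly 1."""
    p15, p50, p85 = np.percentile(speeds, [15, 50, 85])
    return (p85 - p50) / (p50 - p15)

rng = np.random.default_rng(0)
normal_speeds = rng.normal(loc=55, scale=8, size=5000)        # homogeneous stream
mixed_speeds = np.concatenate([rng.normal(60, 6, 3500),       # cars
                               rng.normal(40, 5, 1500)])      # slower heavy vehicles
print("SSR, single-class traffic:", round(speed_spread_ratio(normal_speeds), 2))
print("SSR, mixed traffic       :", round(speed_spread_ratio(mixed_speeds), 2))
```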

Keywords: Normal distribution, percentile speed, speed spread ratio, traffic volume.

7545 A Comparative Study between Discrete Wavelet Transform and Maximal Overlap Discrete Wavelet Transform for Testing Stationarity

Authors: Amel Abdoullah Ahmed Dghais, Mohd Tahir Ismail

Abstract:

In this paper, the core objective is to apply discrete wavelet transform and maximal overlap discrete wavelet transform functions, namely Haar, Daubechies2, Symmlet4, Coiflet2, and the discrete approximation of the Meyer wavelet, to non-stationary financial time series data from the Dow Jones index (DJIA30) of the US stock market. The data consist of 2048 daily closing index values from December 17, 2004 to October 23, 2012. A unit root test affirms that the data are non-stationary in levels. A comparison of the results of transforming the non-stationary data to stationary data using the aforesaid transforms is given, which clearly shows that decomposition of the stock market index by the discrete wavelet transform is better than the maximal overlap discrete wavelet transform for the original data.
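
As an assumed illustration of the workflow (not the authors' code), the sketch below uses PyWavelets and an ADF unit-root test on a synthetic random-walk series of length 2048, the same length as the DJIA30 sample described above. PyWavelets has no maximal overlap DWT, so the closely related stationary (undecimated) wavelet transform is used as a stand-in for that step.

```python
import numpy as np
import pywt
from statsmodels.tsa.stattools import adfuller

# Synthetic non-stationary "closing index" series of length 2048 (a random walk),
# standing in for the DJIA30 data used in the paper.
rng = np.random.default_rng(0)
series = 10000 + np.cumsum(rng.normal(scale=50, size=2048))

print("ADF p-value, level series:", round(adfuller(series)[1], 3))   # non-stationary

# Discrete wavelet transform (Daubechies-2): approximation and detail coefficients.
cA, cD = pywt.dwt(series, "db2")
print("ADF p-value, DWT detail coefficients:", round(adfuller(cD)[1], 3))

# Stationary wavelet transform as a stand-in for the maximal overlap DWT
# (undecimated, so it keeps the original length of 2048).
(cA_s, cD_s), = pywt.swt(series, "db2", level=1)
print("ADF p-value, undecimated detail coefficients:", round(adfuller(cD_s)[1], 3))
```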

Keywords: Discrete wavelet transform, maximal overlap discrete wavelet transform, stationarity, autocorrelation function.

7544 Comparative Study of Transformed and Concealed Data in Experimental Designs and Analyses

Authors: K. Chinda, P. Luangpaiboon

Abstract:

This paper presents a comparative study of coded data methods for assessing the benefit of concealing natural data that constitute a trade secret. The influence of the number of replicates (rep), treatment effects (τ), and standard deviation (σ) on the efficiency of each transformation method is investigated. The experimental data are generated via computer simulations under the specified process conditions with a completely randomized design (CRD). Three data transformations are considered: the Box-Cox, arcsine, and logit methods. The differences in F statistics between coded data and natural data (Fc-Fn) and the hypothesis testing results were determined. The experimental results indicate that the Box-Cox results differ significantly from the natural data for smaller numbers of replicates and seem improper when a negative lambda parameter is assigned. On the other hand, the arcsine and logit transformations are more robust and clearly provide more precise numerical results. In addition, alternative ways to select the lambda in the power transformation are offered to achieve more appropriate outcomes.
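
A minimal simulation of this comparison, with all distributional settings assumed for illustration (the paper's simulation platform is not stated), can be written as follows: generate CRD proportion data, apply the Box-Cox, arcsine, and logit transforms, and compare the one-way ANOVA F statistic of the coded data with that of the natural data (Fc-Fn).

```python
import numpy as np
from scipy import stats
from scipy.special import logit

rng = np.random.default_rng(0)

def simulate_crd(n_treat=4, rep=10, tau=0.05, sigma=0.03):
    """CRD proportion data: treatment i shifts the mean response by i * tau."""
    means = 0.3 + tau * np.arange(n_treat)
    return [np.clip(rng.normal(m, sigma, size=rep), 0.01, 0.99) for m in means]

def boxcox_groups(groups):
    flat, _ = stats.boxcox(np.concatenate(groups))   # one lambda estimated for all data
    return np.split(flat, len(groups))

def f_stat(groups):
    return stats.f_oneway(*groups).statistic

groups = simulate_crd()
natural_F = f_stat(groups)
transforms = {
    "Box-Cox": boxcox_groups,
    "arcsine": lambda g: [np.arcsin(np.sqrt(x)) for x in g],
    "logit": lambda g: [logit(x) for x in g],
}
print(f"natural data: F = {natural_F:.2f}")
for name, transform in transforms.items():
    coded_F = f_stat(transform(groups))
    print(f"{name:8s}: F = {coded_F:.2f}, Fc - Fn = {coded_F - natural_F:+.2f}")
```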

Keywords: Experimental Designs, Box-Cox, Arcsine, Logit Transformations.
