Search results for: high data rate
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 13437

11907 Parametric Analysis on Information Technology Adoption and Organizational Efficiency in Northern Nigeria

Authors: A. Y. Dutse, S. I. Ningi

Abstract:

The adoption and diffusion of Information Technology (IT) is one of the fastest growing trends in organizations operating within Nigeria's economy. Public and private organizations make huge capital investments in an attempt to acquire and adopt state-of-the-art IT for improving operational efficiency. In this study, the level of IT adoption is considered the primary driver of the efficiency gains witnessed by organizations. The research gathered data on the intensity of IT usage and the resultant increase in the efficiency of the organizations' operations. The data were analyzed using multiple regression analysis, which reveals that a high level of IT usage has enhanced the efficiency of private and public organizations in the northern part of Nigeria, with organizations having a strategic intent on IT adoption showing higher efficiency gains.

Keywords: IT Adoption, Nigeria, Organizational efficiency.

11906 Using Business Intelligence Capabilities to Improve the Quality of Decision-Making: A Case Study of Mellat Bank

Authors: Jalal Haghighat Monfared, Zahra Akbari

Abstract:

Today, business executives need useful information to make better decisions. Banks have also been using information tools so that they can direct the decision-making process towards their desired goals by rapidly extracting information from sources with the help of business intelligence. This research investigates whether there is a relationship between the quality of decision making and the business intelligence capabilities of Mellat Bank. Each of the factors studied is divided into several components, and these components and their relationships are measured by a questionnaire. The statistical population of this study consists of all managers and experts of Mellat Bank's General Departments (190 people) who use business intelligence reports. The sample size was 123, determined randomly by statistical methods. Relevant statistical inference has been used for data analysis and hypothesis testing. In the first stage, the normality of the data was examined using the Kolmogorov-Smirnov test, and in the next stage, the construct validity of both variables and their resulting indexes was verified using confirmatory factor analysis. Finally, the research hypotheses were tested using structural equation modeling and Pearson's correlation coefficient. The results confirmed the existence of a positive relationship between decision quality and business intelligence capabilities in Mellat Bank. Among the various capabilities, including data quality, correlation with other systems, user access, flexibility and risk management support, the flexibility of the business intelligence system was the most strongly correlated with the dependent variable of this research. This shows that Mellat Bank needs to pay more attention to selecting business intelligence systems with high flexibility, particularly regarding the ability to produce custom-formatted reports. The quality of the data in the business intelligence systems showed the next strongest relationship with the quality of decision making. Therefore, improving data quality, including whether the data are sourced internally or externally, whether they are quantitative or qualitative, their credibility, and the perceptions of those who use the business intelligence system, improves the quality of decision making at Mellat Bank.

Keywords: Business intelligence, business intelligence capability, decision making, decision quality.

11905 Exponentially Weighted Simultaneous Estimation of Several Quantiles

Authors: Valeriy Naumov, Olli Martikainen

Abstract:

In this paper, we propose a new method for simultaneously generating multiple quantiles, corresponding to given probability levels, from data streams and massive data sets. This method provides a basis for the development of single-pass, low-storage quantile estimation algorithms, which differ in complexity, storage requirement and accuracy. We demonstrate that such algorithms may perform well even for heavy-tailed data.
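
The abstract does not spell out the update rule, but the general idea behind exponentially weighted, single-pass quantile estimation can be illustrated with a small sketch. The class name, step size and the Pareto test stream below are assumptions for illustration, not the authors' algorithm.

```python
import random

class StreamingQuantiles:
    """Single-pass, low-storage estimator of several quantiles at once.

    Each quantile estimate is nudged up or down by a small step whenever a
    new observation arrives (stochastic approximation with a constant,
    exponentially forgetting step size). Illustrative sketch only.
    """

    def __init__(self, probs, step=0.05):
        self.probs = list(probs)      # probability levels, e.g. [0.5, 0.9, 0.99]
        self.step = step              # constant step => exponential forgetting
        self.estimates = None         # one running estimate per level

    def update(self, x):
        if self.estimates is None:
            self.estimates = [x] * len(self.probs)   # initialise with the first sample
            return
        for i, p in enumerate(self.probs):
            q = self.estimates[i]
            # Move up with weight p when x is above the estimate, down with
            # weight (1 - p) otherwise, so q drifts toward the p-th quantile.
            if x > q:
                self.estimates[i] = q + self.step * p
            elif x < q:
                self.estimates[i] = q - self.step * (1.0 - p)

# Usage: feed a heavy-tailed stream and read off the running estimates.
random.seed(0)
est = StreamingQuantiles([0.5, 0.9, 0.99], step=0.05)
for _ in range(100_000):
    est.update(random.paretovariate(2.5))   # Pareto: heavy-tailed test data
print(dict(zip(est.probs, (round(q, 3) for q in est.estimates))))
```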

Keywords: Quantile estimation, data stream, heavy-tailed distribution, tail index.

11904 Taguchi-Based Surface Roughness Optimization for Slotted and Tapered Cylindrical Products in Milling and Turning Operations

Authors: Vineeth G. Kuriakose, Joseph C. Chen, Ye Li

Abstract:

The research follows a systematic approach to optimize the parameters for parts machined by turning and milling processes. The quality characteristic chosen is surface roughness, since surface finish plays an important role for parts that require surface contact. A tapered cylindrical surface is designed as the test specimen for the research. The material chosen for machining is aluminum alloy 6061 due to its wide variety of industrial and engineering applications. A HAAS VF-2 TR computer numerical control (CNC) vertical machining center is used for milling, and a HAAS ST-20 CNC machine is used for turning. Taguchi analysis is used to optimize the surface roughness of the machined parts. An L9 orthogonal array is designed for four controllable factors with three different levels each, resulting in 18 experimental runs (nine for each of the two processes). The Signal-to-Noise (S/N) ratio is calculated for achieving the specific target value of 75 ± 15 µin. The controllable parameters chosen for the turning process are feed rate, depth of cut, coolant flow and finish cut, and for the milling process they are feed rate, spindle speed, step over and coolant flow. The uncontrollable factors are tool geometry for the turning process and tool material for the milling process. Hypothesis testing is conducted to study the significance of the uncontrollable factors on surface roughness. The optimal parameter settings were identified from the Taguchi analysis, and the process capability Cp and the process capability index Cpk were improved from 1.76 and 0.02 to 3.70 and 2.10, respectively, for the turning process, and from 0.87 and 0.19 to 3.85 and 2.70, respectively, for the milling process. The surface roughness was brought closer to the target, from 60.17 µin to 68.50 µin, reducing the defect rate from 52.39% to 0% for the turning process, and from 93.18 µin to 79.49 µin, reducing the defect rate from 71.23% to 0% for the milling process. The purpose of this study is to efficiently utilize the Taguchi design analysis to improve the surface roughness.
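
For readers less familiar with the quantities reported above, the sketch below shows how a nominal-the-best S/N ratio and the Cp/Cpk capability indices are conventionally computed from measured roughness values against the 75 ± 15 µin window; the sample readings are invented and are not the authors' data.

```python
import math

def sn_nominal_the_best(values):
    """Taguchi nominal-the-best signal-to-noise ratio: 10 * log10(mean^2 / variance)."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    return 10.0 * math.log10(mean ** 2 / var)

def process_capability(values, lsl, usl):
    """Return (Cp, Cpk) for the given lower/upper specification limits."""
    n = len(values)
    mean = sum(values) / n
    sigma = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    cp = (usl - lsl) / (6.0 * sigma)
    cpk = min(usl - mean, mean - lsl) / (3.0 * sigma)
    return cp, cpk

# Hypothetical roughness readings (µin) against the 75 ± 15 µin target window.
readings = [72.0, 78.5, 74.2, 76.9, 73.8]
print("S/N =", round(sn_nominal_the_best(readings), 2), "dB")
print("Cp, Cpk =", tuple(round(x, 2) for x in process_capability(readings, lsl=60.0, usl=90.0)))
```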

Keywords: CNC milling, CNC turning, surface roughness, Taguchi analysis.

11903 Online Signature Verification Using Angular Transformation for e-Commerce Services

Authors: Peerapong Uthansakul, Monthippa Uthansakul

Abstract:

The rapid growth of e-Commerce services has been clearly observed in the past decade. However, the methods used to verify authenticated users still widely depend on numeric approaches, so the search for other verification methods suitable for online e-Commerce is an interesting issue. In this paper, a new online signature-verification method using angular transformation is presented. Delay shifts existing in online signatures are estimated by a method relying on an angle representation. In the proposed signature-verification algorithm, all components of the input signature are extracted by considering the discontinuous break points in the stream of angular values. The estimated delay shift is then captured by comparison with the selected reference signature, and the matching error is computed as the main feature used in the verification process. The threshold offsets are calculated from the two error characteristics of the signature verification problem, the False Rejection Rate (FRR) and the False Acceptance Rate (FAR). The level of these two error rates depends on the decision threshold, whose value is chosen so as to realize the Equal Error Rate (EER; FAR = FRR). The experimental results show that, through a simple program deployed on the Internet to demonstrate e-Commerce services, the proposed method provides 95.39% correct verifications and is 7% better than a DP-matching-based signature-verification method. In addition, signature verification based on the extracted components provides more reliable results than a decision made on the whole signature.
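
The threshold-selection step described above (picking the decision threshold where FAR and FRR coincide) can be sketched as follows; the matching-error scores are synthetic and the grid search over thresholds is a generic approach, not the authors' implementation.

```python
import numpy as np

def far_frr_curves(genuine_scores, impostor_scores, thresholds):
    """Lower matching error = better match: a signature is accepted when its
    error score falls below the threshold."""
    genuine = np.asarray(genuine_scores)
    impostor = np.asarray(impostor_scores)
    far = np.array([(impostor < t).mean() for t in thresholds])   # impostors wrongly accepted
    frr = np.array([(genuine >= t).mean() for t in thresholds])   # genuine users wrongly rejected
    return far, frr

def equal_error_rate(genuine_scores, impostor_scores, n_steps=1000):
    """Scan thresholds and return the point where FAR and FRR are closest (the EER)."""
    all_scores = np.concatenate([genuine_scores, impostor_scores])
    thresholds = np.linspace(all_scores.min(), all_scores.max(), n_steps)
    far, frr = far_frr_curves(genuine_scores, impostor_scores, thresholds)
    i = np.argmin(np.abs(far - frr))
    return thresholds[i], (far[i] + frr[i]) / 2.0

# Synthetic matching-error scores: genuine signatures match better (lower error).
rng = np.random.default_rng(1)
genuine = rng.normal(0.2, 0.08, 500)
impostor = rng.normal(0.6, 0.15, 500)
thr, eer = equal_error_rate(genuine, impostor)
print(f"EER threshold = {thr:.3f}, EER = {eer:.1%}")
```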

Keywords: Online signature verification, e-Commerce services, Angular transformation.

11902 Outage Capacity Analysis for Next Generation Wireless Communication Using Non-Orthogonal Multiple Access

Authors: Md. Sohidul Islam, Ahmad Fartheen Khan

Abstract:

In recent times, Non-Orthogonal Multiple Access (NOMA) has received significant attention as an upcoming candidate in the world of 5G systems. The main reason for adopting NOMA in 5G is its capacity to provide services to many users sharing the same time and frequency resources. It is best used with multiple-input, multiple-output (MIMO) technology. In this paper, we investigate the outage probability as a function of the signal-to-noise ratio (SNR) and the users' target rate, implemented using cooperative communication and fair power allocation, respectively.
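
As a point of reference for the outage-probability analysis, the sketch below evaluates the textbook outage probability of a single Rayleigh-faded link, P(log2(1 + SNR) < R) = 1 - exp(-(2^R - 1)/SNR_avg), both in closed form and by Monte Carlo; the NOMA power-allocation and cooperative aspects of the paper are not modeled here.

```python
import math
import random

def outage_probability_rayleigh(avg_snr_db, target_rate_bps_hz):
    """Closed-form outage probability of a point-to-point Rayleigh-faded link:
    P(log2(1 + SNR) < R) = 1 - exp(-(2^R - 1) / average_SNR)."""
    avg_snr = 10 ** (avg_snr_db / 10.0)
    threshold = 2 ** target_rate_bps_hz - 1
    return 1.0 - math.exp(-threshold / avg_snr)

def outage_probability_monte_carlo(avg_snr_db, target_rate_bps_hz, trials=200_000):
    """Monte-Carlo check: exponentially distributed instantaneous SNR (Rayleigh fading)."""
    avg_snr = 10 ** (avg_snr_db / 10.0)
    random.seed(0)
    outages = sum(
        1 for _ in range(trials)
        if math.log2(1.0 + random.expovariate(1.0 / avg_snr)) < target_rate_bps_hz
    )
    return outages / trials

for snr_db in (0, 10, 20, 30):
    print(snr_db, "dB:",
          round(outage_probability_rayleigh(snr_db, 1.0), 4),
          round(outage_probability_monte_carlo(snr_db, 1.0), 4))
```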

Keywords: Non-orthogonal Multiple Access, Fair Power Allocation, Outage Probability, Target Rate User, Cooperative Communication, massive multiple input multiple output, MIMO, Successive Interference Cancellation.

11901 Enhanced Data Access Control of Cooperative Environment used for DMU Based Design

Authors: Wei Lifan, Zhang Huaiyu, Yang Yunbin, Li Jia

Abstract:

Analysis of the digital design process based on the digital mockup (DMU) indicates that a distributed cooperative supporting environment is a foundational condition for adopting a DMU-based design approach. Data access authorization is the first concern because of the value and sensitivity of the data for the enterprise. Access control for administrators is often much weaker than that for business users. The authors therefore established an enhanced system to prevent administrators from accessing engineering data through potential indirect approaches and without authorization. Thus, data security is improved.

Keywords: access control, DMU, PLM, virtual prototype.

11900 MHD Chemically Reacting Viscous Fluid Flow towards a Vertical Surface with Slip and Convective Boundary Conditions

Authors: Ibrahim Yakubu Seini, Oluwole Daniel Makinde

Abstract:

A study of MHD chemically reacting viscous fluid flow towards a vertical surface with slip and convective boundary conditions has been conducted. The temperature and the chemical species concentration of the surface and the velocity of the external flow are assumed to vary linearly with the distance from the vertical surface. The governing differential equations are modeled and transformed into systems of ordinary differential equations, which are then solved numerically by a shooting method. The effects of various parameters on the heat and mass transfer characteristics are discussed. Graphical results are presented for the velocity, temperature, and concentration profiles, whilst the skin-friction coefficient and the rates of heat and mass transfer near the surface are presented in tables and discussed. The results revealed that increasing the strength of the magnetic field increases the skin-friction coefficient and the rates of heat and mass transfer toward the surface. The velocity profiles are increased towards the surface due to the presence of the Lorentz force, which attracts the fluid particles near the surface. The rate of chemical reaction is seen to decrease the concentration boundary layer near the surface due to the destructive chemical reaction occurring near the surface.
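
The paper's governing equations are not reproduced in the abstract, but the shooting technique it mentions can be illustrated on the classical Blasius boundary-layer equation f''' + 0.5 f f'' = 0, used here purely as a stand-in problem: the unknown wall value f''(0) is guessed and adjusted until the far-field condition f'(∞) = 1 is satisfied.

```python
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def blasius_rhs(eta, y):
    # State y = [f, f', f'']  =>  f''' = -0.5 * f * f''
    f, fp, fpp = y
    return [fp, fpp, -0.5 * f * fpp]

def far_field_residual(guess, eta_max=10.0):
    """Integrate from the wall with a guessed f''(0) and return f'(eta_max) - 1."""
    sol = solve_ivp(blasius_rhs, (0.0, eta_max), [0.0, 0.0, guess],
                    rtol=1e-8, atol=1e-8)
    return sol.y[1, -1] - 1.0

# Shooting: find the wall value f''(0) that satisfies the outer boundary condition.
wall_shear = brentq(far_field_residual, 0.1, 1.0)
print(f"f''(0) ~ {wall_shear:.5f}")   # classical Blasius value is about 0.33206
```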

Keywords: Boundary layer, surface slip, MHD flow, chemical reaction, heat transfer, mass transfer.

11899 Fast Forecasting of Stock Market Prices by using New High Speed Time Delay Neural Networks

Authors: Hazem M. El-Bakry, Nikos Mastorakis

Abstract:

Fast forecasting of stock market prices is very important for strategic planning. In this paper, a new approach for fast forecasting of stock market prices is presented. The algorithm uses new high-speed time delay neural networks (HSTDNNs). The operation of these networks relies on performing cross correlation in the frequency domain between the input data and the input weights of the neural networks. It is proved mathematically and practically that the number of computation steps required by the presented HSTDNNs is less than that needed by traditional time delay neural networks (TTDNNs). Simulation results using MATLAB confirm the theoretical computations.
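
The core trick mentioned above, evaluating cross correlation in the frequency domain rather than the time domain, can be sketched generically as follows; this is plain FFT-based correlation, not the authors' HSTDNN code.

```python
import numpy as np

def cross_correlation_fft(signal, kernel):
    """Cross-correlate `signal` with `kernel` via the frequency domain.
    Multiplying the FFT of the signal by the conjugated FFT of the kernel,
    then rolling the result, reproduces np.correlate(..., mode='full') at
    O(n log n) cost instead of O(n * k)."""
    n = len(signal) + len(kernel) - 1
    S = np.fft.rfft(signal, n)
    K = np.fft.rfft(kernel, n)
    corr = np.fft.irfft(S * np.conj(K), n)
    return np.roll(corr, len(kernel) - 1)   # put negative lags first, as 'full' does

# Sanity check against direct time-domain correlation.
rng = np.random.default_rng(0)
x = rng.standard_normal(1024)      # e.g. a window of price samples
w = rng.standard_normal(16)        # e.g. the input weights of one neuron
print(np.allclose(cross_correlation_fft(x, w), np.correlate(x, w, mode="full")))  # True
```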

Keywords: Fast Forecasting, Stock Market Prices, Time Delay Neural Networks, Cross Correlation, Frequency Domain.

11898 Pattern Recognition Using Feature Based Die-Map Clustering in the Semiconductor Manufacturing Process

Authors: Seung Hwan Park, Cheng-Sool Park, Jun Seok Kim, Youngji Yoo, Daewoong An, Jun-Geol Baek

Abstract:

As big data analysis becomes increasingly important, yield prediction using data from the semiconductor process is essential. In general, yield prediction and analysis of the causes of failure are closely related. The purpose of this study is to analyze the patterns that affect the final test results using die-map-based clustering. Much research has been conducted using die data from the semiconductor test process. However, such analysis is limited because the test data are less directly related to the final test results. Therefore, this study proposes a framework for analysis through clustering that uses more detailed data than the existing die data. The study consists of three phases. In the first phase, a die map is created from the fail-bit data in each sub-area of the die. In the second phase, clustering using the map data is performed. The third phase finds the patterns that affect the final test result. Finally, the proposed three steps are applied to actual industrial data, and the experimental results show the potential for field application.
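
The clustering phase (phase two) could, for instance, be carried out with k-means on per-sub-area fail-bit counts; the feature layout, cluster count and synthetic die maps below are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

# Phase 1 (assumed layout): each die is summarised by a small grid of
# fail-bit counts, one value per sub-area, flattened into a feature vector.
rng = np.random.default_rng(0)
n_dies, grid = 300, (4, 4)
die_maps = rng.poisson(lam=2.0, size=(n_dies, grid[0] * grid[1])).astype(float)
die_maps[:80, :4] += 15.0          # inject an "edge failure" pattern into some dies

# Phase 2: cluster the die maps so dies with similar spatial fail patterns group together.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(die_maps)

# Phase 3 (sketch): inspect each cluster's mean map to see which pattern it captures,
# then relate cluster membership to the final test results.
for c in range(3):
    members = die_maps[kmeans.labels_ == c]
    print(f"cluster {c}: {len(members)} dies, mean fail bits per sub-area:")
    print(np.round(members.mean(axis=0).reshape(grid), 1))
```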

Keywords: Die-Map Clustering, Feature Extraction, Pattern Recognition, Semiconductor Manufacturing Process.

11897 FEA Modeling of Material Removal Rate in Electrical Discharge Machining of Al6063/SiC Composites

Authors: U. K. Vishwakarma, A. Dvivedi, P. Kumar

Abstract:

Metal matrix composites (MMCs) are generating extensive interest in diverse fields like the defense, aerospace, electronics and automotive industries. In the present investigation, material removal rate (MRR) modeling has been carried out using an axisymmetric model of an Al-SiC composite during electrical discharge machining (EDM). An FEA model of single-spark EDM was developed to calculate the temperature distribution. Further, the single-spark model was extended to simulate the second discharge. For multi-discharge machining, material removal was calculated from the number of pulses. The model was validated by comparing the experimental results obtained under the same process parameters with the analytical results. A good agreement was found between the experimental results and the theoretical values.

Keywords: Electrical Discharge Machining, FEA, Metal matrix composites, Multi-discharge

11896 An Integrated CFD and Experimental Analysis on Double-Skin Window

Authors: Sheam-Chyun Lin, Wei-Kai Chen, Hung-Cheng Yen, Yung-Jen Cheng, Yu-Cheng Chen

Abstract:

As a result of the constant dwindling of natural resources, alternative ways of reducing the costs of our daily life will urgently need to be found in the near future. Based on the solar chimney principle, an ancient technique dating back to Roman times, the double-skin façade is simply composed of two large glass panels intended for daylighting and natural ventilation during the daytime. The double-skin façade is generally installed on the exterior side of buildings to function as a window, so it receives a large amount of passive solar energy that induces airflow on every sunny day. Therefore, this article proposes a domestic double-skin window for residential use and attempts to improve the volume flow rate inside the cavity between the panels through frame geometry design, the installation of an outlet guide plate and a solar energy collection system. The numerical analyses are applied to investigate the characteristics of the flow field, and the boundary conditions in the simulation are based entirely on the practical experiment on the original prototype. The prototype is then redesigned using the knowledge gained from the numerical results and fluid dynamic theory, and experiments on the modified prototype are subsequently conducted to verify the simulation results. The inlet velocities in each case increase by 5%, 45% and 15% relative to the experimental data, and the numerical simulations report a 20% improvement in volume flow rate for both the frame geometry design and the installation of the outlet guide plate.

Keywords: Solar energy, Double-skin façades, Thermal buoyancy, Fluid machinery.

11895 Perforation Analysis of the Aluminum Alloy Sheets Subjected to High Rate of Loading and Heated Using Thermal Chamber: Experimental and Numerical Approach

Authors: A. Bendarma, T. Jankowiak, A. Rusinek, T. Lodygowski, M. Klósak, S. Bouslikhane

Abstract:

An analysis of the mechanical characteristics and dynamic behavior of an aluminum alloy sheet subjected to perforation tests, based on experiments coupled with numerical simulation, is presented. The impact problems (penetration and perforation) of metallic plates have been of interest for a long time, and experimental, analytical as well as numerical studies have been carried out to analyze the perforation process in detail. Based on these approaches, the ballistic properties of the material have been studied. A laser sensor is used during the experiments to measure the initial and residual velocities and to obtain the ballistic curve and the ballistic limit. The energy balance, including the energy absorbed by the aluminum, is also reported together with the ballistic curve and ballistic limit. A high-speed camera helps to estimate the failure time and to calculate the impact force. A wide range of initial impact velocities, from 40 up to 180 m/s, has been covered during the tests. The mass of the conical-nose-shaped projectile is 28 g, its diameter is 12 mm, and the thickness of the aluminum sheet is 1.0 mm. The ABAQUS/Explicit finite element code has been used to simulate the perforation process. The ballistic curve obtained numerically is compared with and verified against the experimental one, and the failure patterns are presented using the optimal mesh densities, which provide stability of the results. A good agreement between the numerical and experimental results is observed.
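
The energy balance referred to above is essentially the projectile's kinetic-energy loss. Using the stated projectile mass of 28 g and hypothetical velocity pairs (not the measured data), it can be evaluated as follows.

```python
def absorbed_energy(mass_kg, v_initial, v_residual):
    """Energy absorbed by the plate = kinetic energy lost by the projectile (J)."""
    return 0.5 * mass_kg * (v_initial ** 2 - v_residual ** 2)

m = 0.028  # projectile mass from the abstract: 28 g
# Hypothetical (initial, residual) velocity pairs in m/s, purely for illustration.
for v0, vr in [(60.0, 0.0), (120.0, 95.0), (180.0, 162.0)]:
    print(f"v0 = {v0:5.1f} m/s, vr = {vr:5.1f} m/s -> absorbed {absorbed_energy(m, v0, vr):6.1f} J")
```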

Keywords: Aluminum alloy, ballistic behavior, failure criterion, numerical simulation.

11894 Speed Characteristics of Mixed Traffic Flow on Urban Arterials

Authors: Ashish Dhamaniya, Satish Chandra

Abstract:

Speed and traffic volume data are collected on different sections of four-lane and six-lane roads in three metropolitan cities in India. The speed data are analyzed by fitting statistical distributions to individual vehicle speed data and to the speed data of all vehicles combined. It is noted that the speed data of individual vehicles generally follow a normal distribution, but the speed data of all vehicles combined at a section of urban road may or may not follow the normal distribution, depending upon the composition of the traffic stream. A new term, the Speed Spread Ratio (SSR), is introduced in this paper; it is the ratio of the difference between the 85th and 50th percentile speeds to the difference between the 50th and 15th percentile speeds. If the SSR is unity, then the speed data are truly normally distributed. It is noted that on six-lane urban roads, speed data follow a normal distribution only when the SSR is in the range 0.86 – 1.11. The range of SSR is also validated on four-lane roads.
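
The Speed Spread Ratio defined above is straightforward to compute from observed spot speeds; the sketch below uses synthetic speed samples purely to show the calculation and how mixed traffic skews the ratio away from unity.

```python
import numpy as np

def speed_spread_ratio(speeds):
    """SSR = (V85 - V50) / (V50 - V15); a value of 1 indicates a symmetric
    (normal-like) spread of speeds about the median."""
    v15, v50, v85 = np.percentile(speeds, [15, 50, 85])
    return (v85 - v50) / (v50 - v15)

rng = np.random.default_rng(0)
symmetric_speeds = rng.normal(55, 8, 2000)               # homogeneous stream
mixed_speeds = np.concatenate([rng.normal(60, 6, 1500),   # faster vehicles
                               rng.normal(35, 5, 500)])   # slower vehicles
print("homogeneous stream SSR:", round(speed_spread_ratio(symmetric_speeds), 2))
print("mixed stream SSR:      ", round(speed_spread_ratio(mixed_speeds), 2))
```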

Keywords: Normal distribution, percentile speed, speed spread ratio, traffic volume.

11893 Expert System for Choosing the Material Used in Gears

Authors: E.V. Butilă, F. Gîrbacia

Abstract:

In order to provide high expertise, the computer-aided design of mechanical systems involves specific activities focused on processing two types of information: knowledge and data. Expert rule-based knowledge processing generally handles qualitative information and involves searching for proper solutions and combining them into a synthetic variant. Data processing is based on computational models and is supposed to be interrelated with the reasoning in the knowledge processing. In this paper, an Intelligent Integrated System is proposed with the objective of choosing the adequate gear material. The software is developed in Prolog-Flex and takes into account various constraints that appear in the accurate operation of gears.

Keywords: Expert system, computer aided design, gear box design, material choice, Prolog, Flex.

11892 Investigating the Treatability of a Compost Leachate in a Hybrid Anaerobic Reactor: An Experimental Study

Authors: Shima Rajabi, Leila Vafajoo

Abstract:

Compost manufacturing plants are among the facilities where wastewater is produced in significantly large amounts. The wastewater produced in these plants contains high amounts of substrate (organic load) and is classified as a high-strength waste that creates significant pollution when discharged into the environment without treatment. A compost production plant in one of Iran's provinces, treating 200 tons/day of waste, is one of the most important sources of environmental pollution in this zone. The main objectives of this paper are to investigate the treatability of the compost wastewater in hybrid anaerobic reactors with an upflow-downflow arrangement, to determine the kinetic constants, and eventually to obtain an appropriate mathematical model. After start-up of the hybrid anaerobic reactor treating the compost plant wastewater, the average COD removal efficiency was 95%.

Keywords: Leachate treatment, anaerobic hybrid reactor

11891 A Comparative Study between Discrete Wavelet Transform and Maximal Overlap Discrete Wavelet Transform for Testing Stationarity

Authors: Amel Abdoullah Ahmed Dghais, Mohd Tahir Ismail

Abstract:

In this paper, the core objective is to apply the discrete wavelet transform and the maximal overlap discrete wavelet transform with the wavelet functions Haar, Daubechies2, Symmlet4, Coiflet2 and the discrete approximation of the Meyer wavelet to non-stationary financial time series data from the Dow Jones index (DJIA30) of the US stock market. The data consist of 2048 daily closing index values from December 17, 2004 to October 23, 2012. A unit root test confirms that the data are non-stationary in levels. A comparison of the results of transforming the non-stationary data into stationary data using the aforesaid transforms is given, and it clearly shows that decomposing the stock market index with the discrete wavelet transform is better than with the maximal overlap discrete wavelet transform for the original data.
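
Both decompositions can be reproduced with the PyWavelets package, using its stationary wavelet transform (swt) as the usual undecimated stand-in for the MODWT; the random-walk series, wavelet names and decomposition level below are illustrative assumptions, not the DJIA30 data.

```python
import numpy as np
import pywt  # PyWavelets

# Synthetic stand-in for the 2048 daily closing values used in the paper.
rng = np.random.default_rng(0)
prices = 10000 + np.cumsum(rng.normal(0, 50, 2048))   # a non-stationary random walk

for name in ["haar", "db2", "sym4", "coif2", "dmey"]:
    # Decimated discrete wavelet transform (DWT): [cA3, cD3, cD2, cD1].
    dwt_coeffs = pywt.wavedec(prices, name, level=3)
    # Undecimated (stationary) transform, commonly used as a MODWT surrogate.
    swt_coeffs = pywt.swt(prices, name, level=3)
    d1 = dwt_coeffs[-1]                   # finest-scale DWT detail coefficients
    d1_undecimated = swt_coeffs[-1][1]    # finest-scale undecimated detail coefficients
    print(f"{name:5s}  DWT d1 length: {len(d1):4d}   SWT d1 length: {len(d1_undecimated)}")
```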

Keywords: Discrete wavelet transform, maximal overlap discrete wavelet transform, stationarity, autocorrelation function.

11890 Comparative Study of Transformed and Concealed Data in Experimental Designs and Analyses

Authors: K. Chinda, P. Luangpaiboon

Abstract:

This paper presents a comparative study of coded-data methods for assessing the benefit of concealing the natural data, which constitute a trade secret. The influence of the number of replicates (rep), the treatment effects (τ) and the standard deviation (σ) on the efficiency of each transformation method is investigated. The experimental data are generated via computer simulations under the specified process conditions with a completely randomized design (CRD). Three data transformations are considered: the Box-Cox, arcsine and logit methods. The differences in the F statistic between the coded data and the natural data (Fc-Fn) and the hypothesis testing results were determined. The experimental results indicate that the Box-Cox results are significantly different from the natural data for smaller numbers of replicates and seem to be improper when a negative lambda parameter is assigned. On the other hand, the arcsine and logit transformations are more robust and clearly provide more precise numerical results. In addition, alternative ways to select the lambda in the power transformation are offered to achieve more appropriate outcomes.
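
The three transformations compared in the paper are standard and can be applied as in the sketch below (Box-Cox via SciPy's maximum-likelihood lambda, arcsine and logit by hand); the simulated responses and proportions are illustrative only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
y = rng.gamma(shape=2.0, scale=3.0, size=200)   # positive responses for Box-Cox
p = rng.beta(a=2.0, b=5.0, size=200)            # proportions in (0, 1)

# Box-Cox: y -> (y**lmbda - 1)/lmbda, with lambda chosen by maximum likelihood.
y_bc, lmbda = stats.boxcox(y)
print(f"Box-Cox lambda selected by ML: {lmbda:.3f}")

# Arcsine (angular) transformation, common for proportions.
p_arcsine = np.arcsin(np.sqrt(p))

# Logit transformation: log(p / (1 - p)).
p_logit = np.log(p / (1.0 - p))

# The coded data can then replace the natural data in the ANOVA / F-test,
# which is how the paper compares F statistics on coded vs. natural data.
for name, data in [("Box-Cox", y_bc), ("arcsine", p_arcsine), ("logit", p_logit)]:
    print(f"{name:8s} mean = {data.mean():7.3f}, variance = {data.var(ddof=1):7.3f}")
```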

Keywords: Experimental Designs, Box-Cox, Arcsine, Logit Transformations.

11889 Data Centers’ Temperature Profile Simulation Optimized by Finite Elements and Discretization Methods

Authors: José Alberto García Fernández, Zhimin Du, Xinqiao Jin

Abstract:

Nowadays, the data center industry faces strong challenges in increasing its speed and data processing capacity while at the same time trying to keep its devices at a suitable working temperature without penalizing that capacity. Consequently, the cooling systems of this kind of facility use a large amount of energy to dissipate the heat generated inside the servers, and developing new cooling techniques or perfecting existing ones would be a great advance for this type of industry. The installation of a matrix of temperature sensors distributed in the structure of each server would provide the data required to obtain a temperature profile inside them instantly. However, the number of temperature probes required to obtain temperature profiles with sufficient accuracy is very high and expensive. Therefore, other less intrusive techniques are employed, in which each point that characterizes the server temperature profile is obtained by solving differential equations through simulation methods, simplifying data collection but increasing the time needed to obtain results. In order to reduce these calculation times, complicated and slow computational fluid dynamics simulations are replaced by simpler and faster finite element method simulations, which solve the Burgers' equations by backward, forward and central discretization techniques after simplifying the energy and enthalpy conservation differential equations. The discretization methods employed for solving the first- and second-order derivatives of the Burgers' equation obtained after these simplifications are the key to obtaining results with greater or lesser accuracy, according to the characteristic truncation error of each method.
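
A minimal illustration of the discretization choices mentioned above: the inviscid 1-D Burgers' equation u_t + u u_x = 0 advanced explicitly in time, with the spatial derivative approximated by a backward, forward or central difference. The grid, time step and initial profile are assumptions for illustration, not the paper's data center model.

```python
import numpy as np

def step_burgers(u, dx, dt, scheme="backward"):
    """One explicit time step of u_t + u*u_x = 0 on a periodic grid,
    with the first derivative discretized by the chosen scheme."""
    if scheme == "backward":
        ux = (u - np.roll(u, 1)) / dx                       # backward difference
    elif scheme == "forward":
        ux = (np.roll(u, -1) - u) / dx                      # forward difference
    elif scheme == "central":
        ux = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)    # central difference
    else:
        raise ValueError(scheme)
    return u - dt * u * ux

# Smooth initial profile on a periodic domain; each scheme has its own
# truncation error and stability behaviour, which is the paper's point.
nx = 200
x = np.linspace(0.0, 1.0, nx, endpoint=False)
dx = x[1] - x[0]
u0 = 1.0 + 0.3 * np.sin(2 * np.pi * x)
dt = 0.4 * dx / np.max(np.abs(u0))           # CFL-style time-step choice

for scheme in ("backward", "forward", "central"):
    u = u0.copy()
    for _ in range(100):
        u = step_burgers(u, dx, dt, scheme)
    print(f"{scheme:8s}: min = {u.min():.3f}, max = {u.max():.3f}")
```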

Keywords: Burgers’ equations, CFD simulation, data center, discretization methods, FEM simulation, temperature profile.

11888 A Design of Anisotropic Wet Etching System to Reduce Hillocks on Etched Surface of Silicon Substrate

Authors: Alonggot Limcharoen Kaeochotchuangkul, Pathomporn Sawatchai

Abstract:

This research aims to design and build a wet etching system suitable for anisotropic wet etching, in order to reduce the etching time, to reduce hillocks on the etched surface (i.e. to reduce roughness), and to create a 45-degree wall angle (micro-mirror). The study starts with the design of the wet etching system, which has four main components: an ultrasonic cleaner, a condenser, a motor and a substrate holder. An ultrasonic cleaning machine was then modified by adding a condenser, to maintain a consistent solution concentration during the etching process, and by installing a motor, to improve the roughness. The effects on the etch rate and the roughness showed that the etch rate increased and the roughness was reduced.

Keywords: Anisotropic wet etching, wet etching system, Hillocks, ultrasonic cleaning.

11887 Real Time Lidar and Radar High-Level Fusion for Obstacle Detection and Tracking with Evaluation on a Ground Truth

Authors: Hatem Hajri, Mohamed-Cherif Rahal

Abstract:

Both Lidars and Radars are sensors for obstacle detection. While Lidars are very accurate on obstacle positions and less accurate on their velocities, Radars are more precise on obstacle velocities and less precise on their positions. Sensor fusion between Lidar and Radar aims at improving obstacle detection by using the advantages of the two sensors. The present paper proposes a real-time Lidar/Radar data fusion algorithm for obstacle detection and tracking based on the global nearest neighbour standard filter (GNN). This algorithm is implemented and embedded in an automotive vehicle as a component generated by real-time multisensor software. The benefits of data fusion compared with the use of a single sensor are illustrated through several tracking scenarios (on a highway and on a bend), using real-time kinematic sensors mounted on the ego and tracked vehicles as a ground truth.
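
The data-association step of a GNN-style tracker can be sketched with SciPy's Hungarian-algorithm solver; the Euclidean cost and gating threshold below are a simplification for illustration, not a reproduction of the authors' fusion component.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(tracks, detections, gate=3.0):
    """Global nearest neighbour association: build a track-to-detection cost
    matrix (Euclidean distance), solve it with the Hungarian algorithm, and
    reject pairs whose distance exceeds the gate."""
    cost = np.linalg.norm(tracks[:, None, :] - detections[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= gate]

# Track positions predicted from the previous cycle, and new fused detections (x, y in m).
tracks = np.array([[10.0, 2.0], [25.0, -1.0], [40.0, 0.5]])
detections = np.array([[24.6, -0.8], [10.4, 2.3], [70.0, 5.0]])
print(associate(tracks, detections))   # [(0, 1), (1, 0)]; the far detection stays unmatched
```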

Keywords: Ground truth, Hungarian algorithm, Lidar/Radar data fusion, global nearest neighbor filter.

11886 Conceptual Multidimensional Model

Authors: Manpreet Singh, Parvinder Singh, Suman

Abstract:

Data are available in abundance in any business organization, including records for finance, maintenance, inventory, progress reports, etc. As time progresses, the data keep accumulating, and the challenge is to extract information from this data bank. Knowledge discovery from these large and complex databases is the key problem of this era. Data mining and machine learning techniques are needed that can scale to the size of the problems and can be customized to the business application. To derive accurate and relevant information for a particular problem, business analysts need to develop multidimensional models that provide reliable information so that they can take the right decisions for that problem. If the multidimensional model does not possess advanced features, accuracy cannot be expected. The present work involves the development of a multidimensional data model incorporating advanced features. The computation criteria are based on data precision and the inclusion of a slowly changing time dimension. The final results are displayed in graphical form.

Keywords: Multidimensional, data precision.

11885 Real Time Approach for Data Placement in Wireless Sensor Networks

Authors: Sanjeev Gupta, Mayank Dave

Abstract:

The issue of real-time and reliable report delivery is extremely important for taking effective decisions in a real-world, mission-critical Wireless Sensor Network (WSN) based application. Sensor data behave differently in many ways from the data in traditional databases, and WSNs need a mechanism to register, process queries, and disseminate data. In this paper, we propose an architectural framework for data placement and management, together with a reliable and real-time approach for data placement and for achieving data integrity using self-organized sensor clusters. Instead of storing information in individual cluster heads, as suggested in some protocols, in our architecture we suggest storing the information of all clusters within a cell in the corresponding base station. For data dissemination and action in the wireless sensor network, we propose the use of Action and Relay Stations (ARS). To reduce the average energy dissipation of the sensor nodes, data are sent to the nearest ARS rather than to the base station. We have designed our architecture in such a way as to achieve greater energy savings, enhanced availability and reliability.

Keywords: Cluster head, data reliability, real time communication, wireless sensor networks.

11884 Data Mining in the Medicine Domain Using Decision Trees and Support Vector Machines

Authors: Djamila Benhaddouche, Abdelkader Benyettou

Abstract:

In this paper, we used data mining to extract biomedical knowledge. In general, complex biomedical data collected in population studies are treated by statistical methods; although these are robust, they are not sufficient in themselves to harness the potential wealth of the data. For that purpose, we used two learning algorithms: decision trees and the Support Vector Machine (SVM). These supervised classification methods are used to diagnose thyroid disease. In this context, we propose to promote the study and use of symbolic data mining techniques.
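
A minimal supervised-classification sketch in the spirit of the paper, using scikit-learn's decision tree and SVM; a built-in dataset stands in for the thyroid data, which is not reproduced here.

```python
from sklearn.datasets import load_breast_cancer   # stand-in for the thyroid dataset
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Any labelled biomedical table with numeric features would fit here.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Decision tree: interpretable (symbolic) rules, no feature scaling required.
tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, y_train)

# SVM: scale the features first, then fit an RBF-kernel classifier.
scaler = StandardScaler().fit(X_train)
svm = SVC(kernel="rbf", C=1.0).fit(scaler.transform(X_train), y_train)

print("decision tree accuracy:", round(accuracy_score(y_test, tree.predict(X_test)), 3))
print("SVM accuracy:          ",
      round(accuracy_score(y_test, svm.predict(scaler.transform(X_test))), 3))
```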

Keywords: Classifier, decision tree algorithms, knowledge extraction, Support Vector Machine.

11883 A Case Study on the Value of Corporate Social Responsibility Systems

Authors: José M. Brotons, Manuel E. Sansalvador

Abstract:

The relationship between Corporate Social Responsibility (CSR) and financial performance (FP) is a subject of great interest that has not yet been resolved. In this work, we have developed a new and original tool to measure this relationship, which quantifies the value contributed to companies that are committed to CSR. The theoretical model used is the fuzzy discounted cash flow method. Two scenarios are considered: in the first, the company has implemented the IQNet SR10 certification, and in the second, it has not. For the first scenario, the growth rate used over the time horizon is the rate maintained by the company after obtaining the IQNet SR10 certificate. For the second, both the growth rates of the company prior to the implementation of the certification and the evolution of the sector are taken into account. By using triangular fuzzy numbers, it is possible to deal adequately with each company's forecasts as well as with the information corresponding to the sector. Once the annual growth rate of sales is obtained, the profit and loss accounts are generated from the annual sales estimates; for the remaining elements of these accounts, their regression with net sales has been considered. The difference between these two valuations, made in a fuzzy environment, gives the value of the IQNet SR10 certification. Although this study presents an innovative methodology to quantify the relation between CSR and FP, the authors are aware that only one company has been analyzed. This is precisely the main limitation of the study, which in turn opens up an interesting line for future research: to broaden the sample of companies.
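
The fuzzy discounted cash flow idea can be illustrated with triangular fuzzy numbers: each yearly cash flow is a (pessimistic, most likely, optimistic) triple, discounting and summation are applied component-wise (exact for positive discount factors), and the value of the certification is the fuzzy difference of the two valuations. All figures below are invented for illustration.

```python
def fuzzy_npv(cash_flows, rate):
    """Present value of triangular fuzzy cash flows (low, mode, high).

    Scaling a triangular fuzzy number by a positive discount factor and
    summing triangular numbers are exact component-wise operations, so the
    fuzzy NPV is itself a triangular number."""
    npv = [0.0, 0.0, 0.0]
    for t, triple in enumerate(cash_flows, start=1):
        factor = 1.0 / (1.0 + rate) ** t
        for i in range(3):
            npv[i] += triple[i] * factor
    return tuple(npv)

def fuzzy_subtract(a, b):
    """Standard fuzzy subtraction of triangular numbers a - b."""
    return (a[0] - b[2], a[1] - b[1], a[2] - b[0])

# Hypothetical yearly cash-flow forecasts (in thousands), with and without the certificate.
with_cert    = [(90, 110, 130), (100, 125, 150), (110, 140, 175)]
without_cert = [(85, 100, 115), (88, 105, 122), (90, 110, 130)]
rate = 0.08

value = fuzzy_subtract(fuzzy_npv(with_cert, rate), fuzzy_npv(without_cert, rate))
print("fuzzy value attributable to the certification:",
      tuple(round(v, 1) for v in value))
```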

Keywords: Corporate social responsibility, case study, financial performance, company valuation.

11882 Treatment of Petroleum Refinery Wastewater by using UASB Reactors

Authors: H.A. Gasim, S.R.M. Kutty, M.H. Isa, M.P.M. Isa

Abstract:

Petroleum refineries discharge large amounts of wastewater during the refining process that contain hazardous constituents which are hard to degrade. Anaerobic treatment is well known as an efficient method for degrading high-strength wastewaters. The Up-flow Anaerobic Sludge Blanket (UASB) reactor is a common process used for various wastewater treatments. Two UASB reactors were set up and operated in parallel to evaluate the treatment efficiency for petroleum refinery wastewater. In this study, four organic volumetric loading rates were applied (0.58, 0.89, 1.21 and 2.34 kg/m3·d), two loads to each reactor. Each load was applied for a period of 60 days for the reactor to acclimatize and reach steady state, and then the second load was applied. The chemical oxygen demand (COD) removals were satisfactory, with removal efficiencies of 78, 82, 83 and 81% at the applied loadings, respectively.

Keywords: Petroleum refinery wastewater, anaerobic treatment, UASB, organic volumetric loading rate

11881 Performance Enhancement of Cellular OFDM Based Wireless LANs by Exploiting Spatial Diversity Techniques

Authors: S. Ali. Tajer, Babak H. Khalaj

Abstract:

This paper presents an investigation of how the exploitation of multiple transmit antennas by OFDM-based wireless LAN subscribers can mitigate the physical-layer error rate. By comparing wireless LANs that utilize spatial diversity techniques with conventional ones, it reveals how the PHY and TCP throughput behaviors are improved. The same issues are then assessed in a cellular operation context, which is introduced as an innovative solution that, besides a multi-cell operation scenario, benefits from spatio-temporal signaling schemes as well. The presented simulations shed light on the improved performance of the wide-range, high-quality wireless LAN services provided by the proposed approach.

Keywords: Multiple Input Multiple Output (MIMO), Orthogonal Frequency Division Multiplexing (OFDM), and Wireless Local Area Network (WLAN).

11880 High Level Characterization and Optimization of Switched-Current Sigma-Delta Modulators with VHDL-AMS

Authors: A. Fakhfakh, N. Ksentini, M. Loulou, N. Masmoudi, J. J. Charlot

Abstract:

Today, design requirements are extending more and more from electronic (analogue and digital) design to multidiscipline design. These needs imply the implementation of methodologies that make the CAD product reliable, in order to improve the time to market, study costs, reusability and reliability of the design process. This paper proposes a high-level design approach applied to the characterization and optimization of switched-current Sigma-Delta modulators. It uses the new hardware description language VHDL-AMS to help designers optimize the characteristics of the modulator at a high level, with considerably reduced CPU time, before passing to a transistor-level characterization.

Keywords: High level design, optimization, switched-current Sigma-Delta modulators, VHDL-AMS.

11879 MIMO Performances in Tunnel Environment: Interpretation from the Channel Characteristics

Authors: C. Sanchis-Borras, J. M. Molina-Garcia-Pardo, P. Degauque, M. Lienard

Abstract:

The objective of this contribution is to study the performance, in terms of bit error rate, of space-time coding algorithms applied to MIMO communication in tunnels. Indeed, the channel characteristics in a tunnel are quite different from those of urban or indoor environments, due to the guiding effect of the tunnel. Therefore, MIMO channel matrices have been measured in a straight tunnel, in a frequency band around 3 GHz. The correlation between array elements and the properties of the MIMO matrices are first studied as a function of the distance between the transmitter and the receiver. Then, using a software tool simulating the link, predicted values of the bit error rate are given for the V-BLAST, OSTBC and QSTBC algorithms applied to a MIMO configuration with 2 or 4 array elements. The results are interpreted from the analysis of the channel properties.

Keywords: MIMO, propagation channel, space-time algorithms, tunnel.

11878 Viral Advertising: Popularity and Willingness to Share among the Czech Internet Population

Authors: Martin Klepek

Abstract:

This paper presents the results of primary quantitative research on viral advertising, focusing on the popularity of viral videos and the willingness to share them among the Czech Internet population. It starts with a brief theoretical discussion of viral advertising, which is used for comparison with the results. To collect data, an online questionnaire survey was administered to 384 respondents. The statistics utilized in this research included frequencies, percentages, correlation and Pearson's chi-square test, and the data were evaluated using SPSS software. The analysis disclosed a high popularity of viral advertising videos among the Czech Internet population but implies a lower willingness to share them. A significant relationship between the likability of the viral video technique and the age of the viewer was found.
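
The Pearson chi-square test mentioned above checks whether two categorical survey variables, for example age group and willingness to share, are independent; the contingency table below is invented (though it sums to 384 respondents) and is not the survey data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts of respondents by age group and willingness to share a viral video.
#                  share   do not share
table = np.array([[70,      40],     # 18-29
                  [60,      66],     # 30-44
                  [38,     110]])    # 45+

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Reject independence: willingness to share appears related to age group.")
```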

Keywords: Internet advertising, Internet population, promotion, marketing communication, viral advertising, viral video.
