Search results for: context based recommendation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 12000


10080 Weight-Based Query Optimization System Using Buffer

Authors: Kashif Irfan, Fahad Shahbaz Khan, Tehseen Zia, M. A. Anwar

Abstract:

Fast data retrieval is a core requirement of users in any database application. This paper introduces a buffer-based query optimization technique in which queries are assigned weights according to how often they are executed in a query bank. These queries and their optimized execution plans are loaded into the buffer at the start of the database application. For every incoming query, the system searches for a match in the buffer and, if one is found, executes the stored plan without creating a new one.
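As a rough illustration of the buffering idea described above (not the authors' implementation; the class, method names, and capacity value are assumptions), a minimal sketch in Python might look like this:

    # Minimal sketch of a weight-based plan buffer (illustrative names only).
    class QueryBuffer:
        def __init__(self, query_bank, capacity=100):
            # query_bank: dict mapping query text -> execution count (its "weight").
            # Preload plans for the most frequently executed queries.
            top = sorted(query_bank, key=query_bank.get, reverse=True)[:capacity]
            self.plans = {q: self._build_plan(q) for q in top}

        def _build_plan(self, query):
            # Placeholder for the optimizer; a real system would call its planner here.
            return f"PLAN({query})"

        def get_plan(self, query):
            # Reuse the buffered plan when available, otherwise plan from scratch.
            return self.plans.get(query) or self._build_plan(query)

    bank = {"SELECT * FROM orders": 42, "SELECT * FROM users": 7}
    buf = QueryBuffer(bank)
    print(buf.get_plan("SELECT * FROM orders"))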

Keywords: Query Bank, Query Matcher, Weight Manager.

10079 On the Study of the Electromagnetic Scattering by Large Obstacle Based on the Method of Auxiliary Sources

Authors: Sami Hidouri, Taoufik Aguili

Abstract:

We consider fast and accurate solutions of scattering problems involving large perfectly electric conducting (PEC) objects, formulated by an optimization of the Method of Auxiliary Sources (MAS). We present various techniques used to reduce the total computational cost of the scattering problem. The first technique is based on replacing the object by an array of a finite number of small PEC objects with the same shape. The second reduces the problem by considering only half of the object.

Keywords: Method of Auxiliary Sources, Scattering, large object, RCS, computational resources.

10078 Phenolic-Based Chemical Production from Catalytic Depolymerization of Alkaline Lignin over Fumed Silica Catalyst

Authors: S. Totong, P. Daorattanachai, N. Laosiripojana

Abstract:

Lignin depolymerization into phenolic-based chemicals is an interesting process for utilizing and upgrading the value of lignin. In this study, the depolymerization reaction was performed to convert alkaline lignin into smaller-molecule compounds. Fumed SiO₂ was used as a catalyst to improve catalytic activity in lignin decomposition. The important parameters of the depolymerization process (i.e., reaction temperature, reaction time, etc.) were also investigated. In addition, gas chromatography with mass spectrometry (GC-MS), gas chromatography with a flame ionization detector (GC-FID), and Fourier transform infrared spectroscopy (FT-IR) were used to analyze and characterize the lignin products. It was found that the fumed SiO₂ catalyst provided good catalytic activity in lignin depolymerization. The main products of the catalytic depolymerization were guaiacol, syringol, vanillin, and phenols. Additionally, metals supported on fumed SiO₂, such as Cu/SiO₂ and Ni/SiO₂, increased the catalyst activity in terms of phenolic product yield.

Keywords: Alkaline lignin, catalytic, depolymerization, fumed SiO2, phenolic-based chemicals.

10077 BER Analysis of Energy Detection Spectrum Sensing in Cognitive Radio Using GNU Radio

Authors: B. Siva Kumar Reddy, B. Lakshmi

Abstract:

Cognitive radio is an emerging technology that enables efficient utilization of the spectrum. Energy detector-based sensing is the most widely used spectrum sensing strategy. Moreover, it is highly generic, as receivers do not need any knowledge of the primary user's signals, channel data, or even the type of modulation. This paper presents the implementation of energy detection sensing for an AM (amplitude modulated) signal at 710 kHz, an FM (frequency modulated) signal at 103.45 MHz (local station frequency), a Wi-Fi signal at 2.4 GHz, and WiMAX signals at 6 GHz. The OFDM/OFDMA-based WiMAX physical layer with convolutional channel coding is implemented using the USRP N210 (Universal Software Radio Peripheral) and GNU Radio based Software Defined Radio (SDR). Experimental results demonstrate the increase of BER (Bit Error Rate) with channel noise, and BER performance is analyzed for different Eb/N0 (energy per bit to noise power spectral density ratio) values.
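The energy-detection decision rule itself is simple to sketch. The following NumPy snippet is a generic illustration (not the authors' GNU Radio flowgraph); the noise power, threshold factor, and toy signal are assumptions:

    import numpy as np

    def energy_detect(samples, noise_power, threshold_factor=2.0):
        # Test statistic: average energy of the received samples.
        energy = np.mean(np.abs(samples) ** 2)
        # Declare the channel occupied if the energy exceeds a noise-relative threshold.
        return energy > threshold_factor * noise_power

    rng = np.random.default_rng(0)
    noise = (rng.normal(size=1000) + 1j * rng.normal(size=1000)) / np.sqrt(2)
    signal = 2.0 * np.exp(2j * np.pi * 0.1 * np.arange(1000))
    print(energy_detect(noise, 1.0))           # expected: False (noise only)
    print(energy_detect(signal + noise, 1.0))  # expected: True (primary user present)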

Keywords: BER, Cognitive Radio, GNU Radio, OFDM, SDR, WiMAX.

10076 Genetic Algorithms for Feature Generation in the Context of Audio Classification

Authors: José A. Menezes, Giordano Cabral, Bruno T. Gomes

Abstract:

Choosing good features is an essential part of machine learning. Recent techniques aim to automate this process. For instance, feature learning intends to learn the transformation of raw data into a representation useful for machine learning tasks. In automatic audio classification tasks this is interesting, since audio, usually complex information, needs to be transformed into a computationally convenient input to process. Another line of work tries to generate features by searching a feature space. Genetic algorithms, for instance, have been used to generate audio features by combining or modifying them. We find this approach particularly interesting and, despite the undeniable advances of feature learning approaches, we wanted to take a step forward in the use of genetic algorithms to find audio features, combining them with more conventional methods, like PCA, and inserting search control mechanisms, such as constraints over a confusion matrix. This work presents the results obtained on particular audio classification problems.
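To make the "generate features by combining them" idea concrete, here is a toy evolutionary sketch: candidate features are arithmetic combinations of two base features, scored with a Fisher-style class-separation ratio. The representation, fitness function, and data are assumptions for illustration only, not the authors' system:

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 8))                # toy "audio" feature matrix
    y = (X[:, 0] - X[:, 3] > 0).astype(int)      # toy labels

    OPS = [np.add, np.subtract, np.multiply]

    def fitness(ind):
        i, j, op = ind
        f = OPS[op](X[:, i], X[:, j])            # generated feature
        a, b = f[y == 0], f[y == 1]
        # Fisher-like ratio: between-class distance over within-class spread.
        return (a.mean() - b.mean()) ** 2 / (a.var() + b.var() + 1e-9)

    def random_ind():
        return (rng.integers(8), rng.integers(8), rng.integers(len(OPS)))

    pop = [random_ind() for _ in range(30)]
    for _ in range(20):                          # simple evolutionary selection loop
        pop += [random_ind() for _ in range(30)] # "mutation" by random resampling
        pop = sorted(pop, key=fitness, reverse=True)[:30]
    print("best combination:", pop[0], "fitness:", round(fitness(pop[0]), 2))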

Keywords: Feature generation, feature learning, genetic algorithm, music information retrieval.

10075 Implementation of TinyHash based on Hash Algorithm for Sensor Network

Authors: HangRok Lee, YongJe Choi, HoWon Kim

Abstract:

In recent years, several security architectures have been proposed for sensor networks [2], [4]. One of these, TinySec by Chris Karlof, Naveen Sastry, and David Wagner, proposes a link-layer security architecture that takes the constraints of sensor networks (i.e., energy, bandwidth, computational capability, etc.) into account. TinySec employs CBC-mode encryption and CBC-MAC authentication based on the Skipjack block cipher, and it is currently incorporated into TinyOS for sensor network security. This paper introduces TinyHash, a module based on a general hash algorithm that replaces the authentication and integrity parts of TinySec; in other words, it applies a hash algorithm to the TinySec architecture. For compatibility with TinySec, the components of TinyHash are constructed with a structure similar to that of TinySec. TinyHash implements an HMAC component for authentication and a Digest component for message integrity. Additionally, we define the interfaces for the services associated with the hash algorithm.
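Setting the nesC/TinyOS specifics aside, the HMAC construction at the heart of such a module can be illustrated with the Python standard library. This is a generic sketch (the key, payload, and hash choice are assumptions), not the TinyHash component itself:

    import hmac, hashlib

    key = b"shared-link-layer-key"          # hypothetical shared key
    packet = b"sensor-id=7|reading=21.5"

    # Sender appends an HMAC tag computed over the packet payload.
    tag = hmac.new(key, packet, hashlib.sha1).digest()

    # Receiver recomputes the tag and compares in constant time.
    ok = hmac.compare_digest(tag, hmac.new(key, packet, hashlib.sha1).digest())
    print("authentic and unmodified:", ok)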

Keywords: sensor network security, nesC, TinySec, TinyOS, Hash, HMAC, integrity

10074 Face Recognition Based On Vector Quantization Using Fuzzy Neuro Clustering

Authors: Elizabeth B. Varghese, M. Wilscy

Abstract:

A face recognition system is a computer application for automatically identifying or verifying a person from a digital image or a video frame. Many algorithms have been proposed for face recognition. Vector Quantization (VQ) based face recognition is a novel approach to the problem. Here, a new codebook generation method for VQ based face recognition using Integrated Adaptive Fuzzy Clustering (IAFC) is proposed. IAFC is a fuzzy neural network which incorporates a fuzzy learning rule into a competitive neural network. The performance of the proposed algorithm is demonstrated using the publicly available AT&T database, Yale database, Indian Face database and a small face database, the DCSKU database, created in our lab. On all the databases the proposed approach achieved a higher recognition rate than most of the existing methods. In terms of Equal Error Rate (EER), the proposed codebook is also better than the existing methods.
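The VQ classification step, i.e., choosing the identity whose codebook gives the lowest quantization distortion, can be sketched in a few lines of NumPy. Codebook training (IAFC) is not reproduced here, and the codebooks and probe data below are synthetic assumptions:

    import numpy as np

    def distortion(vectors, codebook):
        # Mean squared distance from each feature vector to its nearest codevector.
        d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
        return d.min(axis=1).mean()

    def classify(face_vectors, codebooks):
        # codebooks: {person_id: (n_codevectors, dim) array}
        return min(codebooks, key=lambda pid: distortion(face_vectors, codebooks[pid]))

    rng = np.random.default_rng(0)
    codebooks = {"A": rng.normal(0.0, 1.0, (16, 8)), "B": rng.normal(3.0, 1.0, (16, 8))}
    probe = rng.normal(3.0, 1.0, (32, 8))   # probe face blocks, closer to person B
    print(classify(probe, codebooks))       # expected: "B"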

Keywords: Face Recognition, Vector Quantization, Integrated Adaptive Fuzzy Clustering, Self Organization Map.

10073 New Approach for Manipulation of Stratified Programs

Authors: Amel Grissa-Touzi, Chadlia Jerad, Habib Ounelli

Abstract:

Negation is useful in the majority of real-world applications. However, its introduction leads to semantic and canonical problems. We propose in this paper an approach based on stratification to deal with negation problems. This approach is based on an extension of predicate nets and is characterized by two main contributions. The first concerns the management of the whole class of stratified programs. The second contribution is related to the optimization of the usual operations on stratified programs (maximal stratification, incremental updates, ...).

Keywords: stratified programs, stratification, standard model, update operations, SEPN formalism.

10072 IMM based Kalman Filter for Channel Estimation in MB OFDM Systems

Authors: C. Ramesh, V. Vaidehi

Abstract:

Ultra-wideband (UWB) communication is one of the most promising technologies for high data rate wireless networks in short range applications. This paper proposes a blind channel estimation method, namely an IMM (Interactive Multiple Model) based Kalman algorithm, for UWB OFDM systems. The IMM based Kalman filter is proposed to estimate the frequency selective, time varying channel. In the proposed method, two Kalman filters concurrently estimate the channel parameters. The first Kalman filter, the Static Model Filter (SMF), gives accurate results when the user is static, while the second Kalman filter, the Dynamic Model Filter (DMF), gives accurate results when the receiver is moving. The state transition matrix in the SMF is assumed to be an identity matrix, whereas in the DMF it is computed using the Yule-Walker equations. The resulting filter estimate is computed as a weighted sum of the individual filter estimates. The proposed method is compared with other existing channel estimation methods.
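A scalar sketch of the combination step is given below: each of the two filters produces its own estimate, the model probabilities are updated from the innovation likelihoods, and the output is their weighted sum. This is a simplified IMM-style illustration (the mixing/interaction step is omitted, and all models, noise values, and measurements are assumed), not the MB-OFDM channel estimator of the paper:

    import numpy as np

    def kf_step(x, P, z, F, q, r):
        # One scalar Kalman filter step; returns estimate, covariance,
        # innovation and innovation variance (used for the likelihood).
        xp, Pp = F * x, F * P * F + q
        nu, S = z - xp, Pp + r
        K = Pp / S
        return xp + K * nu, (1 - K) * Pp, nu, S

    def likelihood(nu, S):
        return np.exp(-0.5 * nu * nu / S) / np.sqrt(2 * np.pi * S)

    x = np.array([0.0, 0.0]); P = np.array([1.0, 1.0])
    mu = np.array([0.5, 0.5])            # model probabilities (SMF, DMF)
    F = np.array([1.0, 0.9])             # identity vs. an assumed AR(1) dynamic model
    for z in [1.0, 0.95, 0.9, 0.86]:     # toy measurements of a slowly fading tap
        L = np.zeros(2)
        for m in range(2):
            x[m], P[m], nu, S = kf_step(x[m], P[m], z, F[m], q=0.01, r=0.05)
            L[m] = likelihood(nu, S)
        mu = mu * L / np.sum(mu * L)     # update model probabilities
        print("combined estimate:", round(float(np.dot(mu, x)), 3), "weights:", np.round(mu, 2))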

Keywords: Channel estimation, Kalman filter, UWB, Channel model, AR model

10071 Night-Time Traffic Light Detection Based On SVM with Geometric Moment Features

Authors: Hyun-Koo Kim, Young-Nam Shin, Sa-gong Kuk, Ju H. Park, Ho-Youl Jung

Abstract:

This paper presents an effective traffic light detection method for night-time conditions. First, candidate blobs of traffic lights are extracted from the RGB color image. The input image is represented in the dominant color domain using the color transform proposed by Ruta, and red and green dominant regions are selected as candidates. After candidate blob selection, we apply a shape filter for noise reduction using blob information such as length, area, and bounding box area. A multi-class classifier based on an SVM (Support Vector Machine) is then applied to the candidates. Three kinds of features are used. We use basic features such as blob width, height, center coordinates, and area. Brightness-based stochastic features are also used. In particular, geometric moment values between a candidate region and its adjacent region are proposed and used to improve detection performance. The proposed system is implemented on an Intel Core CPU at 2.80 GHz with 4 GB RAM and tested with urban and rural road videos. Through these tests, we show that the proposed method using PF, BMF, and GMF reaches a detection rate of up to 93% with an average computation time of 15 ms/frame.
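To make the geometric-moment features tangible, the snippet below computes raw and central moments of a candidate blob mask with NumPy. It covers only the feature computation, not the full candidate extraction or SVM pipeline, and the blob is a made-up example:

    import numpy as np

    def geometric_moments(mask):
        # mask: 2-D binary array marking a candidate blob.
        ys, xs = np.nonzero(mask)
        m00 = len(xs)                      # area (zeroth-order moment)
        cx, cy = xs.mean(), ys.mean()      # centroid: m10/m00, m01/m00
        # Second-order central moments averaged over the blob (mu20/m00, etc.).
        mu20 = ((xs - cx) ** 2).mean()
        mu02 = ((ys - cy) ** 2).mean()
        mu11 = ((xs - cx) * (ys - cy)).mean()
        return np.array([m00, cx, cy, mu20, mu02, mu11])

    blob = np.zeros((20, 20), dtype=int)
    blob[5:10, 8:14] = 1                    # a 5x6 rectangular candidate
    print(geometric_moments(blob))          # feature vector fed to a classifier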

Keywords: Night-time traffic light detection, multi-class classification, driving assistance system.

10070 Performance Based Seismic Retrofit of Masonry Infilled Reinforced Concrete Frames Using Passive Energy Dissipation Devices

Authors: Alok Madan, Arshad K. Hashmi

Abstract:

The paper presents a plastic analysis procedure based on the energy balance concept for performance based seismic retrofit of multi-story multi-bay masonry infilled reinforced concrete (R/C) frames with a ‘soft’ ground story using passive energy dissipation (PED) devices, with the objective of achieving a target performance level of the retrofitted R/C frame for a given seismic hazard level at the building site. The proposed energy based plastic analysis procedure was employed to develop performance based design (PBD) formulations for PED devices for a simulated application in the seismic retrofit of existing frame structures designed in compliance with the prevalent standard codes of practice. The PBD formulations developed for PED devices were implemented for a simulated seismic retrofit of a representative code-compliant masonry infilled R/C frame with a ‘soft’ ground story using friction dampers as the PED device. Non-linear dynamic analyses of the retrofitted masonry infilled R/C frames are performed to investigate the efficacy and accuracy of the proposed energy based plastic analysis procedure in achieving the target performance level under design level earthquakes. Results of the non-linear dynamic analyses demonstrate that the maximum inter-story drifts in masonry infilled R/C frames with a ‘soft’ ground story retrofitted with friction dampers designed using the proposed PBD formulations are controlled within the target drifts under near-field as well as far-field earthquakes.

Keywords: Energy Methods, Masonry Infilled Frame, Near-field Earthquakes, Seismic Protection, Supplemental damping devices.

10069 Sleep Scheduling Schemes Based on Location of Mobile User in Sensor-Cloud

Authors: N. Mahendran, R. Priya

Abstract:

Mobile cloud computing (MCC) combined with wireless sensor network (WSN) technology is attracting increasing attention from researchers because it combines the data gathering ability of sensors with the data processing capacity of the cloud. This approach overcomes the limited data storage capacity and computational ability of sensor nodes. The stored data are then sent to mobile users on request. Most integrated sensor-cloud schemes fail to address the following issues: 1) mobile users request specific data from the cloud based on their present location; 2) power consumption matters, since most sensor nodes are equipped with non-rechargeable batteries and are usually deployed in hazardous and remote areas. This paper focuses on the above observations and introduces an approach known as the collaborative location-based sleep scheduling (CLSS) scheme. The awake or asleep status of each sensor node is dynamically decided by schedulers, and the scheduling is based purely on the mobile users’ current location; in this manner, a large amount of energy consumption is avoided in the WSN. CLSS comprises two different schemes: CLSS1 provides lower energy consumption, while CLSS2 provides scalability and robustness for the integrated WSN.
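The core scheduling decision can be sketched as waking only the sensors within some radius of the mobile user's reported location and putting the rest to sleep. The radius, coordinates, and identifiers below are illustrative assumptions, not the CLSS1/CLSS2 algorithms themselves:

    import math

    def schedule(sensors, user_xy, wake_radius=50.0):
        # sensors: {sensor_id: (x, y)}; returns {sensor_id: "awake" | "asleep"}.
        ux, uy = user_xy
        return {
            sid: "awake" if math.hypot(x - ux, y - uy) <= wake_radius else "asleep"
            for sid, (x, y) in sensors.items()
        }

    sensors = {"s1": (10, 10), "s2": (200, 40), "s3": (45, 20)}
    print(schedule(sensors, user_xy=(30, 15)))   # only nearby sensors stay awake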

Keywords: Sleep scheduling, mobile cloud computing, wireless sensor network, integration, location, network lifetime.

10068 Control and Navigation with Knowledge Bases

Authors: Miloš Šeda, Tomáš Březina

Abstract:

In this paper, we focus on the use of knowledge bases in two different application areas – control of systems with unknown or strongly nonlinear models (i.e. hardly controllable by classical methods), and robot motion planning in eight directions. The first deals with fuzzy logic, and the paper presents approaches for setting and aggregating the rules of a knowledge base. The second is concentrated on a case-based reasoning strategy for finding a path in a planar scene with obstacles.

Keywords: fuzzy controller, fuzzification, rule base, inference, defuzzification, genetic algorithm, neural network, case-based reasoning

10067 A Robust Eyelashes and Eyelid Detection in Transformation Invariant Iris Recognition: In Application with LRC Security System

Authors: R. Bremananth

Abstract:

Biometric authentication is an essential task for many kinds of real-life applications. In this paper, we contribute two primary paradigms to iris recognition: Robust Eyelash Detection (RED) using pathway kernels, and a hair curve fitting synthesized model. Based on these two paradigms, rotation invariant iris recognition is enhanced. In addition, the presented framework is tested with real-life iris data to provide authentication for LRC (Learning Resource Center) users. Recognition performance is significantly improved by the contributed schemes when evaluated on real-life irises. Furthermore, the framework has been implemented in the Java programming language. Experiments are performed on 1250 diverse subjects under different angles of variation in the authentication process. The results reveal that the methodology can be deployed in the LRC management system and in other applications requiring secure access.

Keywords: Authentication, biometric, eye lashes detection, iris scanning, LRC security, secure access.

10066 Developing an Audit Quality Model for an Emerging Market

Authors: Bita Mashayekhi, Azadeh Maddahi, Arash Tahriri

Abstract:

The purpose of this paper is to develop a model of audit quality with regard to the contextual and environmental attributes of the audit profession in Iran. For this purpose, using an exploratory approach, and because of the special attributes of the auditing profession in Iran in terms of the legal environment, regulatory and supervisory mechanisms, audit firm size, etc., we used the grounded theory approach as a qualitative research method. We therefore gathered the opinions of experts in the auditing and capital market areas through unstructured interviews. As a result, the authors identified the determinants of audit quality and, using these determinants, developed an Integrated Audit Quality Model, including causal conditions, intervening conditions, context, as well as action strategies related to AQ and their consequences. In this research, audit quality is studied using a systemic approach: the quality of the inputs, processes, and outputs of auditing determines the quality of auditing, and therefore the quality of all the different parts of this system is considered.

Keywords: Audit quality, integrated audit quality model, audit supply, demand for audit service, grounded theory.

10065 Enhancing Cache Performance Based on Improved Average Access Time

Authors: Jasim. A. Ghaeb

Abstract:

A high performance computer includes a fast processor and millions of bytes of memory. During data processing, huge amounts of information are shuffled between the memory and the processor. Because of its small size and high speed, the cache has become a common feature of high performance computers. Enhancing cache performance has proved to be essential in speeding up cache-based computers. Most enhancement approaches can be classified as either software based or hardware controlled. The performance of the cache is quantified in terms of hit ratio or miss ratio. In this paper, we optimize cache performance by enhancing the cache hit ratio. The optimum cache performance is obtained by focusing on a cache hardware modification that quickly rejects mismatched line tags at the hit-or-miss comparison stage, thus achieving a low hit time for the wanted line in the cache. In the proposed technique, which we call Even-Odd Tabulation (EOT), the cache lines coming from main memory into the cache are classified into two types, lines with even tags and lines with odd tags, depending on the least significant bit (LSB) of the tag. This division is exploited by the EOT technique to reject mismatched line tags in a very short time compared to the time spent by the main comparator in the cache, giving an optimum hit time for the wanted cache line. Simulation results show the high performance of the EOT technique compared with the familiar FAM mapping technique.
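A behavioral sketch of the even/odd tag split is shown below: stored tags are pre-partitioned by their LSB, and a lookup only compares against the partition matching the requested tag's parity. This models the idea in software only (class name and tags are assumptions), not the hardware comparator design:

    class EOTSet:
        # One cache set with its line tags pre-split by tag LSB (Even-Odd Tabulation idea).
        def __init__(self, tags):
            self.even = [t for t in tags if t % 2 == 0]
            self.odd = [t for t in tags if t % 2 == 1]

        def lookup(self, tag):
            # Roughly half of the stored tags are rejected immediately by parity,
            # so the full comparison runs over a smaller candidate list.
            candidates = self.even if tag % 2 == 0 else self.odd
            return tag in candidates

    s = EOTSet([0x1A2, 0x33D, 0x404, 0x77F])
    print(s.lookup(0x33D), s.lookup(0x100))   # True False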

Keywords: Caches, Cache performance, Hit time, Cache hit ratio, Cache mapping, Cache memory.

10064 Manufacturing Dispersions Based Simulation and Synthesis of Design Tolerances

Authors: Nassima Cheikh, Abdelmadjid Cheikh, Said Hamou

Abstract:

The objective of this work, which is based on the simultaneous engineering approach, is to contribute to the development of a CIM tool for the synthesis of functional design dimensions expressed as average values and tolerance intervals. In this paper, the dispersions method known as the Δl method, which has proved reliable in the simulation of manufacturing dimensions, is used to develop a methodology for the automation of the simulation. This methodology is constructed around three procedures. The first procedure verifies the functional requirements by automatically extracting the functional dimension chains in the mechanical sub-assembly. A second procedure then performs an optimization of the dispersions on the basis of unknown variables. The third procedure uses the optimized values of the dispersions to compute the optimized average values and tolerances of the functional dimensions in the chains. A statistical and cost based approach is integrated in the methodology in order to take account of the capabilities of the manufacturing processes and to distribute optimal values among the individual components of the chains.

Keywords: functional tolerances, manufacturing dispersions, simulation, CIM.

10063 Use of Integrated Knowledge Networks to Increase Innovation in Nanotechnology Research and Development

Authors: R. Byler

Abstract:

Innovation, particularly in technology development, is a crucial aspect of nanotechnology R&D and, although several approaches to effective innovation management exist, organizational structures that promote knowledge exchange have been found to be most effective in supporting new and emerging technologies. This paper discusses Integrated Knowledge Networks (IKNs) and evaluates their use within nanotechnology R&D to increase technology innovation. Specifically, this paper reviews the role of IKNs in bolstering national and international nanotechnology development and in enhancing nanotechnology innovation. Both physical and virtual IKNs, particularly IT-based network platforms for community-based innovation, offer strategies for enhanced technology innovation, interdisciplinary cooperation, and enterprise development. Effectively creating and managing technology R&D networks can facilitate successful knowledge exchange, enhanced innovation, commercialization, and technology transfer. As such, IKNs are crucial to technology development processes and, thus, to increasing the quality of and access to new, innovative nanoscience and technologies worldwide.

Keywords: Community-based innovation, integrated knowledge networks, nanotechnology, technology innovation.

10062 An Intensional Conceptualization Model for Ontology-Based Semantic Integration

Authors: Fateh Adhnouss, Husam El-Asfour, Kenneth McIsaac, Abdul Mutalib Wahaishi, Idris El-Feghia

Abstract:

Conceptualization is an essential component of semantic ontology-based approaches. Several approaches rely on an extensional structure or an extensional reduction structure in order to construct conceptualizations. In this paper, several limitations are highlighted relating to their applicability to the construction of conceptualizations in dynamic and open environments. These limitations arise from a number of strong assumptions that do not apply to such environments. An intensional structure is argued to be a natural and adequate modeling approach. This paper presents a conceptualization structure based on the theory of properties, relations, and propositions (PRP) to model ontologies suitable for open environments. The model extends First-Order Logic (FOL) notation and defines a formal representation that enables interoperability between software systems and supports semantic integration for software systems in open, dynamic environments.

Keywords: Conceptualization, ontology, extensional structure, intensional structure.

10061 High Performance in Parallel Data Integration: An Empirical Evaluation of the Ratio Between Processing Time and Number of Physical Nodes

Authors: Caspar von Seckendorff, Eldar Sultanow

Abstract:

Many studies have shown that parallelization decreases efficiency [1], [2]. There are many reasons for these decrements. This paper investigates those which appear in the context of parallel data integration. Integration processes generally cannot be allocated to packages of identical size (i.e., tasks of identical complexity). The reason for this is unknown, heterogeneous input data, which results in variable task lengths. Process delay is defined by the slowest processing node and has a detrimental effect on the total processing time. With a real world example, this study will show that while process delay does initially increase with the introduction of more nodes, it ultimately decreases again after a certain point. The example will make use of the cloud computing platform Hadoop and will be run inside Amazon's EC2 compute cloud. A stochastic model will be set up which can explain this effect.
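The qualitative effect of heterogeneous task lengths on speedup and efficiency can be reproduced with a toy simulation: tasks with variable lengths are assigned to n nodes, the total time is set by the slowest node, and speedup and efficiency follow. The allocation scheme, task distribution, and numbers are synthetic assumptions, not the paper's Hadoop/EC2 measurements:

    import numpy as np

    rng = np.random.default_rng(42)
    tasks = rng.exponential(scale=1.0, size=1000)   # heterogeneous task lengths
    serial_time = tasks.sum()

    for n in (1, 2, 4, 8, 16, 32):
        loads = [tasks[i::n].sum() for i in range(n)]  # static round-robin allocation
        parallel_time = max(loads)                     # the slowest node defines the delay
        speedup = serial_time / parallel_time
        print(f"nodes={n:2d}  speedup={speedup:5.2f}  efficiency={speedup / n:.2f}")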

Keywords: Process delay, speedup, efficiency, parallel computing, data integration, E-Commerce, Amazon Elastic Compute Cloud (EC2), Hadoop, Nutch.

10060 The Estimation of Human Vital Signs Complexity

Authors: L. Bikulciene, E. Venskaityte, G. Jarusevicius

Abstract:

Nonstationary and nonlinear signals generated by living complex systems defy traditional mechanistic approaches, which are based on homeostasis. Our previous studies have shown that evaluating the interactions of physiological signals using special analysis methods is suitable for the observation of physiological processes. We demonstrate the possibility of using a deep physiological model, based on the interpretation of changes in the human body’s functional states, combined with an analytical method based on matrix theory for physiological signal analysis, applied to high risk cardiac patients. It is shown that the evaluation of cardiac signal interactions reveals functional changes peculiar to each individual at the onset of the hemodynamic restoration procedure. Therefore, we suggest that the assessment of alterations in the functional state of the body after patients undergo surgery can be complemented by data obtained from the suggested approach of evaluating the interactions of functional variables.

Keywords: Cardiac diseases, Complex systems theory, ECG analysis, matrix analysis.

10059 A Novel Design Approach for Mechatronic Systems Based On Multidisciplinary Design Optimization

Authors: Didier Casner, Jean Renaud, Remy Houssin, Dominique Knittel

Abstract:

In this paper, a novel approach for the multidisciplinary design optimization (MDO) of complex mechatronic systems is presented. This approach is part of a global project aiming to include the MDO aspect in an innovative design process. As a first step, the paper considers MDO as a redesign approach limited to parametric optimization. After defining and introducing the relevant keywords, the proposed method, which is based on the V-model commonly used in mechatronics, is described.

Keywords: mechatronics, Multidisciplinary Design Optimization (MDO), multiobjective optimization, engineering design.

10058 Impact of Mergers and Acquisitions on Consumers' Welfare: Experience of Indian Manufacturing Sector

Authors: Pulak Mishra, P V Kiran Kumar

Abstract:

In the context of introduction of deregulatory policy measures and subsequent wave of mergers and acquisitions (M&A) in Indian corporate sector since 1991, the present paper attempts to examine the welfare implications of this wave. It is found that M&A do not have any significant impact on consumers' welfare. Instead, consumers' welfare is significantly influenced by exports intensity, imports intensity, advertising intensity, technology related efforts, and past profitability of the firms. While the industries with higher exports orientation or greater product differentiation or better financial performance experience greater loss in consumers' welfare, it is less in the industries with greater competition from imports or better technology. Hence, the wave of M&A in Indian manufacturing sector in the post-liberalization era may not be a matter of serious concern from consumers' welfare point of view. Instead, in many cases, M&A can help the firms in consolidating their business and enhancing competitiveness, and this may benefit the consumers in the form of greater efficiency and lower prices.

Keywords: Mergers, acquisitions, concentration, welfare, India. JEL Codes: L1, L2, L4, L5.

10057 Comparison of Authentication Methods in Internet of Things Technology

Authors: Hafizah Che Hasan, Fateen Nazwa Yusof, Maslina Daud

Abstract:

Internet of Things (IoT) is a powerful industrial system in which end-devices are interconnected and automated, allowing the devices to analyze data and execute actions based on the analysis. IoT technology leverages Radio-Frequency Identification (RFID) and Wireless Sensor Network (WSN) technologies, including mobile devices and sensors, which have contributed to the evolution of IoT. However, as more devices are connected to each other over the Internet and data from various sources are exchanged between things, the confidentiality of the data becomes a major concern. This paper focuses on authentication, one of the major challenges in IoT, which is needed to ensure that data integrity and confidentiality are in place. A few solutions are reviewed based on papers from the last few years. One of the proposed solutions secures the communication between IoT devices and cloud servers with an Elliptic Curve Cryptography (ECC) based mutual authentication protocol; this solution uses Hyper Text Transfer Protocol (HTTP) cookies as a security parameter. Another proposed solution uses a keyed-hash scheme protocol to enable IoT devices to authenticate each other without the presence of a central control server. A further proposed solution uses a Physical Unclonable Function (PUF) based mutual authentication protocol; it emphasizes tamper-resistant and resource-efficient technology and amounts to a 3-way handshake security protocol.
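As a generic illustration of the keyed-hash idea, the sketch below shows a challenge-response exchange in which each side proves possession of a shared key by returning an HMAC over the other's nonce. It is not the specific protocol reviewed in the paper; the key handling and message flow are assumptions:

    import hmac, hashlib, os

    shared_key = os.urandom(16)                      # provisioned on both device and peer

    def respond(key, challenge):
        return hmac.new(key, challenge, hashlib.sha256).digest()

    # Peer challenges the device...
    c1 = os.urandom(16)
    device_proof = respond(shared_key, c1)
    print("device authenticated:", hmac.compare_digest(device_proof, respond(shared_key, c1)))

    # ...and the device challenges the peer, giving mutual authentication.
    c2 = os.urandom(16)
    peer_proof = respond(shared_key, c2)
    print("peer authenticated:", hmac.compare_digest(peer_proof, respond(shared_key, c2)))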

Keywords: Internet of Things, authentication, PUF, ECC, keyed-hash scheme protocol.

10056 Curriculum Based Measurement and Precision Teaching in Writing Empowerment Enhancement: Results from an Italian Learning Center

Authors: I. Pelizzoni, C. Cavallini, I. Salvaderi, F. Cavallini

Abstract:

We present the improvement in writing skills obtained by 94 participants (aged between six and 10 years) with special educational needs through a writing enhancement program based on fluency principles. The study was planned and conducted with a single-subject experimental design for each of the participants, in order to confirm the results in the literature. The results were obtained using the precision teaching (PT) methodology to increase the number of written graphemes per minute, measured in pre- and post-tests by curriculum based measurement (CBM). Results indicated an increase in the number of written graphemes for all participants. The average overall duration of the intervention was 144 minutes over five months of treatment. These considerations have been analyzed taking into account the complexity of implementing measurement systems in real operational contexts (an Italian learning center) and important aspects of the replicability and cost-effectiveness of such interventions.

Keywords: Precision teaching, writing skills, CBM, Italian Learning Center.

10055 A Set Theory Based Factoring Technique and Its Use for Low Power Logic Design

Authors: Padmanabhan Balasubramanian, Ryuta Arisaka

Abstract:

Factoring Boolean functions is one of the basic operations in algorithmic logic synthesis. A novel algebraic factorization heuristic for single-output combinational logic functions, developed on the basis of the set theory paradigm, is presented in this paper. The impact of factoring is analyzed mainly from a low power design perspective for standard cell based digital designs. The physical implementations of a number of MCNC/IWLS combinational benchmark functions and sub-functions are compared before and after factoring, based on a simple technology mapping procedure utilizing only standard gate primitives (readily available as standard cells in a technology library) and not cells corresponding to optimized complex logic. The power results were obtained at the gate level by means of an industry-standard power analysis tool from Synopsys, targeting a 130 nm (0.13 μm) UMC CMOS library, for the typical case. The wire-loads were inserted automatically and the simulations were performed with maximum input activity. The gate-level simulations demonstrate the advantage of the proposed factoring technique in comparison with other existing methods from a low power perspective, for arbitrary examples. Although the benchmark experiments report mixed results, the mean savings in total power and dynamic power for the factored solution over a non-factored solution were 6.11% and 5.85% respectively. In terms of leakage power, the average savings for the factored forms were significant, at 23.48%. The factored solution is expected to better its non-factored counterpart in terms of the power-delay product, as it is well known that factoring, in general, yields a delay-efficient multi-level solution.
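The basic factoring operation can be illustrated by extracting the most frequently shared literal from a sum-of-products form, e.g. ab + ac + d becomes a(b + c) + d. The toy routine below demonstrates this single extraction step only; it is not the paper's set-theoretic heuristic, and the term representation is an assumption:

    from collections import Counter

    def factor_once(terms):
        # terms: list of product terms, each a set of literals, representing an SOP form.
        counts = Counter(l for t in terms for l in t)
        lit, freq = counts.most_common(1)[0]
        if freq < 2:
            return terms, None
        inside = [t - {lit} for t in terms if lit in t]     # terms sharing the literal
        outside = [t for t in terms if lit not in t]        # remainder
        return outside, (lit, inside)

    terms = [{"a", "b"}, {"a", "c"}, {"d"}]                 # F = ab + ac + d
    rest, factored = factor_once(terms)
    lit, inside = factored
    print(f"F = {lit}({' + '.join(''.join(sorted(t)) for t in inside)}) + "
          + " + ".join("".join(sorted(t)) for t in rest))   # F = a(b + c) + d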

Keywords: Factorization, Set theory, Logic function, Standard cell based design, Low power.

10054 Rural Connectivity Technologies Cost Analysis

Authors: F. Simba, L. Trojer, N.H. Mvungi, B.M. Mwinyiwiwa, E.M. Mjema

Abstract:

Rural areas of Tanzania are still disadvantaged in terms of the diffusion of IP-based services; this is due to a lack of Information and Communication Technology (ICT) infrastructure, especially a lack of connectivity. One of the reasons for the connectivity problems in rural areas of Tanzania is the high cost of establishing the infrastructure for IP-based services [1-2]. However, the cost of connectivity varies from one technology to another and, at the same time, from one operator (service provider) to another within the country. This paper presents the development of a software system to calculate the cost of connectivity to rural areas of Tanzania. The system is developed to provide easy access to the connectivity costs of different technologies and different operators. The development of the calculator follows the V-model software development lifecycle. The calculator is used to evaluate the economic viability of different technologies considered as potential candidates to provide rural connectivity. In this paper, the evaluation is based on the techno-economic analysis approach.
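The core of such a calculator reduces to a simple techno-economic formula: total cost of ownership = capital expenditure plus operating expenditure over the evaluation period, optionally normalized per subscriber. The technologies and figures below are purely illustrative assumptions, not the paper's data:

    def connectivity_cost(capex, opex_per_year, years, subscribers):
        # Total cost of ownership over the evaluation period, and cost per subscriber.
        total = capex + opex_per_year * years
        return total, total / subscribers

    for tech, capex, opex in [("Technology A", 120_000, 15_000), ("Technology B", 40_000, 30_000)]:
        total, per_user = connectivity_cost(capex, opex, years=5, subscribers=800)
        print(f"{tech}: total={total:,}  per-subscriber={per_user:,.1f}")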

Keywords: rural, connectivity, cost, V-model, techno economic analysis.

10053 Thermal Cracking Approach Investigation to Improve Biodiesel Properties

Authors: Roghaieh Parvizsedghy, Seyyed Mojtaba Sadrameli

Abstract:

Biodiesel as an alternative diesel fuel is steadily gaining more attention and significance. However, there are some drawbacks to using biodiesel regarding its properties, which require it to be blended with petroleum based diesel and/or additives to improve the fuel characteristics. This study analyses thermal cracking as an alternative technology to improve biodiesel characteristics, in which FAME based biodiesel produced by transesterification of castor oil is fed into a continuous thermal cracking reactor at temperatures of 450-500°C and feed flow rates of 20-40 g/hr. Experiments designed by response surface methodology and subsequent statistical studies show that temperature and feed flow rate significantly affect the product yields. Response surfaces were used to study the impact of temperature and flow rate on the product properties. After each experiment, the produced crude bio-oil was distilled and the diesel cut was separated. As shorter chain molecules are produced through thermal cracking, the distillation curve of the diesel cut fits the petroleum based diesel curve better than that of the biodiesel. Moreover, the properties of the produced diesel cut fall adequately within the ranges defined by the relevant standard for petroleum based diesel. The cold flow properties and high heating value, the main drawbacks of biodiesel, are improved by this technology. Thermal cracking decreases the kinematic viscosity, flash point and cetane number.

Keywords: Biodiesel, castor oil, fuel properties, thermal cracking.

10052 Adoption of Appropriate and Cost Effective Technologies in Housing: Indian Experience

Authors: A. K. Jain, M. C. Paliwal

Abstract:

Construction costs in India are increasing at around 50 per cent above average inflation levels and have registered increases of up to 15 per cent every year, primarily due to the cost of basic building materials such as steel, cement, bricks and timber, other inputs, and the cost of labour. As a result, the cost of construction using conventional building materials and methods is moving beyond affordable limits, particularly for low-income groups of the population as well as a large cross section of the middle-income groups. Therefore, there is a need to adopt cost-effective construction methods, either by upgrading traditional technologies using local resources or by applying modern construction materials and techniques with efficient inputs leading to economical solutions. This has become the most relevant aspect in the context of the large volume of housing to be constructed in both rural and urban areas and the limited availability of resources such as building materials and finance. This paper provides an overview of the housing status in India and the adoption of appropriate and cost effective technologies in the country.

Keywords: Appropriate, Cost Effective, Ekra, Five year plan, Poverty

10051 Ranking Alternatives in Multi-Criteria Decision Analysis using Common Weights Based on Ideal and Anti-ideal Frontiers

Authors: Saber Saati Mohtadi, Ali Payan, Azizallah Kord

Abstract:

One of the most important issues in multi-criteria decision analysis (MCDA) is determining the weights of the criteria so that all alternatives can be compared based on the collective performance of the criteria. In this paper, one of the popular methods in data envelopment analysis (DEA), known as common weights (CWs), is used to determine the weights in MCDA. Two frontiers, named the ideal and anti-ideal frontiers, are defined based on two newly proposed CWs models, instead of ideal and anti-ideal alternatives. Ideal and anti-ideal frontiers are more flexible than ideal and anti-ideal alternatives. According to the optimal solutions of these two models, the distances of an alternative from the ideal and anti-ideal frontiers are derived. Then, a relative distance is introduced to measure the value of each alternative. The suggested models are linear and, despite the weight restrictions, are feasible. An example is presented to explain the method and to compare it with the existing literature.
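The final ranking step can be sketched as follows: under a common weight vector, compute each alternative's distance to an ideal and an anti-ideal reference and rank by the relative distance d-/(d+ + d-). This TOPSIS-like sketch uses ideal and anti-ideal points rather than the paper's frontiers, and the matrix and weights are made-up assumptions:

    import numpy as np

    def rank(decision_matrix, weights):
        # Rows are alternatives, columns are (benefit) criteria, already normalized.
        V = decision_matrix * weights
        ideal, anti = V.max(axis=0), V.min(axis=0)
        d_plus = np.linalg.norm(V - ideal, axis=1)    # distance to the ideal point
        d_minus = np.linalg.norm(V - anti, axis=1)    # distance to the anti-ideal point
        score = d_minus / (d_plus + d_minus)          # relative distance in [0, 1]
        return np.argsort(-score), score

    X = np.array([[0.7, 0.9, 0.4],
                  [0.8, 0.5, 0.6],
                  [0.6, 0.7, 0.9]])
    w = np.array([0.5, 0.3, 0.2])                     # common weights for all alternatives
    order, score = rank(X, w)
    print("ranking (best first):", order, "scores:", np.round(score, 3))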

Keywords: Anti-ideal frontier, Common weights (CWs), Ideal frontier, Multi-criteria decision analysis (MCDA)
