Search results for: test data
8462 Applying Lagrangian Relaxation-Based Algorithm for the Airline Coordinated Flight Scheduling Problems
Authors: Chia-Hung Chen, Shangyao Yan
Abstract:
A solution algorithm based on Lagrangian relaxation, a sub-gradient method, and a heuristic for finding an upper bound on the solution is proposed to solve the coordinated fleet routing and flight scheduling problem. Numerical tests are performed to evaluate the proposed algorithm using real operating data from two Taiwan airlines. The test results indicate that the solutions obtained with the proposed algorithm are a significant improvement over those obtained with CPLEX; consequently, the algorithm could be useful for allied airlines solving coordinated fleet routing and flight scheduling problems.
Keywords: Coordinated flight scheduling, multiple commodity network flow problem, Lagrangian relaxation.
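As a rough illustration of the sub-gradient multiplier update at the heart of Lagrangian relaxation, together with a heuristic upper bound, the sketch below solves a toy covering problem; the problem data, step-size rule and greedy heuristic are illustrative assumptions, not the authors' fleet-routing model.

```python
# Illustrative sketch: Lagrangian relaxation with a sub-gradient update and a
# greedy heuristic upper bound, on a toy covering problem (NOT the authors'
# fleet-routing model; all data and the step-size rule are assumptions).

# Toy problem: minimise sum(c[i]*x[i])  s.t.  sum(a[i]*x[i]) >= b,  x[i] in {0,1}
c = [4.0, 3.0, 6.0, 5.0]      # item costs
a = [2.0, 1.0, 3.0, 2.0]      # item contributions
b = 5.0                        # coverage requirement

def solve_relaxation(lam):
    """Minimise sum((c[i]-lam*a[i])*x[i]) + lam*b over x in {0,1}^n."""
    x = [1 if c[i] - lam * a[i] < 0 else 0 for i in range(len(c))]
    value = sum((c[i] - lam * a[i]) * x[i] for i in range(len(c))) + lam * b
    return x, value

def greedy_upper_bound():
    """Heuristic feasible solution: pick items by cost/contribution ratio."""
    order = sorted(range(len(c)), key=lambda i: c[i] / a[i])
    x, total = [0] * len(c), 0.0
    for i in order:
        if total >= b:
            break
        x[i], total = 1, total + a[i]
    return sum(c[i] * x[i] for i in range(len(c)))

lam, best_lower = 0.0, float("-inf")
upper = greedy_upper_bound()
for k in range(1, 51):
    x, lower = solve_relaxation(lam)
    best_lower = max(best_lower, lower)
    subgrad = b - sum(a[i] * x[i] for i in range(len(c)))   # constraint violation
    lam = max(0.0, lam + (1.0 / k) * subgrad)                # diminishing step size
print(f"lower bound = {best_lower:.2f}, heuristic upper bound = {upper:.2f}")
```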
8461 Result Validation Analysis of Steel Testing Machines
Authors: Wasiu O. Ajagbe, Habeeb O. Hamzat, Waris A. Adebisi
Abstract:
Structural failures occur for a number of reasons, including under-design, poor workmanship, substandard materials, misleading laboratory tests and more. Reinforcing steel bar is an important construction material, so its properties must be accurately known before it is used in construction. Understanding these properties involves carrying out mechanical tests prior to design and during construction to ascertain correlation, using a steel testing machine that is often not readily available because of the project location. This study was conducted to determine the reliability of reinforcing steel testing machines. A reconnaissance survey was conducted to identify laboratories where yield and ultimate tensile strength tests could be carried out. Six laboratories were identified within Ibadan and its environs; however, only four were functional at the time of the study. Three steel samples were tested for yield and tensile strength, using a steel testing machine, at each of the four laboratories (LM, LO, LP and LS). The yield and tensile strength results obtained from the laboratories were compared with the manufacturer's specification using a reliability analysis programme. A structured questionnaire was administered to the operators in each laboratory to assess their impact on the test results. The average values of the manufacturer's tensile strength and yield strength are 673.7 N/mm² and 559.7 N/mm², respectively. The tensile strengths obtained from the four laboratories LM, LO, LP and LS are 579.4, 652.7, 646.0 and 649.9 N/mm², respectively, while their yield strengths are 453.3, 597.0, 550.7 and 564.7 N/mm², respectively. The minimum tensile-to-yield strength ratio is 1.08 for BS 4449:2005 and 1.15 for ASTM A615. The tensile-to-yield strength ratios from the four laboratories are 1.28, 1.09, 1.17 and 1.15 for LM, LO, LP and LS, respectively, which shows that the results obtained from all the laboratories meet the requirements of the code used for each test. The reliability test shows varying levels of reliability between the manufacturer's specification and the results obtained from the laboratories. Three of the laboratories, LO, LS and LP, have high reliability values with respect to the manufacturer, namely 0.798, 0.866 and 0.712, respectively; the fourth laboratory, LM, has a reliability value of 0.100. Steel tests should be carried out in a laboratory using the same code under which the structural design was carried out, and more emphasis should be placed on the importance of code provisions.
Keywords: Reinforcing steel bars, reliability analysis, tensile strength, universal testing machine, yield strength.
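The tensile-to-yield ratio check reported above can be reproduced with a few lines of arithmetic; the sketch below uses the laboratory averages quoted in the abstract and the two code minimums (an illustrative re-computation, not the reliability analysis programme used in the study).

```python
# Re-compute the tensile/yield strength ratios quoted in the abstract and list
# the minimum code ratios alongside them (illustrative re-computation only).
labs = {
    "LM": {"tensile": 579.4, "yield": 453.3},
    "LO": {"tensile": 652.7, "yield": 597.0},
    "LP": {"tensile": 646.0, "yield": 550.7},
    "LS": {"tensile": 649.9, "yield": 564.7},
}

for lab, s in labs.items():
    ratio = s["tensile"] / s["yield"]
    print(f"{lab}: tensile/yield = {ratio:.2f} "
          f"(minimum 1.08 per BS 4449:2005, 1.15 per ASTM A615)")
```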
8460 Role of Association Rule Mining in Numerical Data Analysis
Authors: Sudhir Jagtap, Kodge B. G., Shinde G. N., Devshette P. M
Abstract:
Numerical analysis naturally finds applications in all fields of engineering and the physical sciences, but in the 21st century the life sciences and even the arts have adopted elements of scientific computation. Numerical data analysis has become a key process in research and development across all of these fields [6]. In this paper we analyze specified numerical patterns using association rule mining techniques with minimum-confidence and minimum-support mining criteria. The extracted rules and analyzed results are demonstrated graphically. Association rules are a simple but very useful form of data mining that describe the probabilistic co-occurrence of certain events within a database [7]. They were originally designed to analyze market-basket data, in which the likelihood of items being purchased together within the same transaction is analyzed.
Keywords: Numerical data analysis, Data Mining, Association Rule Mining.
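A compact illustration of rule mining with minimum-support and minimum-confidence criteria is sketched below; the toy transactions and thresholds are assumptions and do not reproduce the paper's numerical patterns.

```python
from itertools import combinations

# Toy transaction database (assumed for illustration).
transactions = [
    {"bread", "milk"},
    {"bread", "butter", "milk"},
    {"butter", "milk"},
    {"bread", "butter"},
    {"bread", "butter", "milk"},
]
MIN_SUPPORT, MIN_CONFIDENCE = 0.4, 0.7

def support(itemset):
    """Fraction of transactions containing the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

items = sorted(set().union(*transactions))
# Frequent itemsets of size 1 and 2 (enough for a small example).
frequent = [frozenset(c) for k in (1, 2) for c in combinations(items, k)
            if support(set(c)) >= MIN_SUPPORT]

# Derive rules A -> B from frequent 2-itemsets.
for itemset in (f for f in frequent if len(f) == 2):
    for a in itemset:
        antecedent, consequent = frozenset([a]), itemset - {a}
        conf = support(itemset) / support(antecedent)
        if conf >= MIN_CONFIDENCE:
            print(f"{set(antecedent)} -> {set(consequent)} "
                  f"(support={support(itemset):.2f}, confidence={conf:.2f})")
```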
8459 A Survey on Data-Centric and Data-Aware Techniques for Large Scale Infrastructures
Authors: Silvina Caíno-Lores, Jesús Carretero
Abstract:
Large scale computing infrastructures have been widely developed with the core objective of providing a suitable platform for high-performance and high-throughput computing. These systems are designed to support resource-intensive and complex applications, which can be found in many scientific and industrial areas. Currently, large scale data-intensive applications are hindered by the high latencies that result from access to vastly distributed data. Recent works have suggested that improving data locality is key to moving towards exascale infrastructures efficiently, as solutions to this problem aim to reduce the bandwidth consumed in data transfers and the overheads that arise from them. There are several techniques that attempt to move computations closer to the data. In this survey we analyse the different mechanisms that have been proposed to provide data locality for large scale high-performance and high-throughput systems. The survey intends to assist the scientific computing community in understanding the various technical aspects and strategies that have been reported in recent literature regarding data locality. As a result, we present an overview of locality-oriented techniques, which are grouped into four main categories: application development, task scheduling, in-memory computing and storage platforms. Finally, the authors include a discussion of future research lines and synergies among these techniques.
Keywords: Co-scheduling, data-centric, data-intensive, data locality, in-memory storage, large scale.
8458 Differences in Goal Scoring and Passing Sequences between Winning and Losing Team in UEFA-EURO Championship 2012
Authors: Muhamad S., Norasrudin S, Rahmat A.
Abstract:
The objective of the current study is to investigate the differences between winning and losing teams in terms of goal scoring and passing sequences. A total of 31 matches from UEFA-EURO 2012 were analyzed; 5 matches were excluded from the analysis because they ended in a draw. Two groups of variables were used in the study: (i) goal scoring variables and (ii) passing sequence variables. Data were analyzed using the Wilcoxon matched-pairs rank test with the significance level set at p < 0.05. The study found that goal scoring was significantly higher for the winning teams in both the 1st half (Z=-3.416, p=.001) and the 2nd half (Z=-3.252, p=.001). The scoring frequency was also found to increase as time progressed, and the last 15 minutes of the game was the interval in which the most goals were scored. The indicators that differed significantly between winning and losing teams were goals scored (Z=-4.578, p=.000), the head (Z=-2.500, p=.012), the right foot (Z=-3.788, p=.000), corners (Z=-2.126, p=.033), open play (Z=-3.744, p=.000), inside the penalty box (Z=-4.174, p=.000), attackers (Z=-2.976, p=.003) and midfielders (Z=-3.400, p=.001). Regarding the passing sequences, there was a significant difference between the two teams in short passing sequences (Z=-4.141, p=.000), while no significant difference was found for long passing sequences (Z=-1.795, p=.073). The data gathered in the present study can be used by coaches to construct detailed training programs based on their objectives.
Keywords: Football, goals scored, passing, timing.
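A small sketch of the Wilcoxon matched-pairs test used above, applied to made-up per-match counts for winning versus losing teams (illustrative data only; SciPy is assumed to be available):

```python
from scipy.stats import wilcoxon

# Made-up paired observations: goals from open play per match,
# winning team vs. losing team (illustrative values only).
winning = [2, 1, 3, 2, 2, 1, 2, 3, 1, 2]
losing  = [0, 1, 1, 0, 1, 0, 1, 1, 0, 1]

stat, p = wilcoxon(winning, losing)
print(f"Wilcoxon statistic = {stat:.3f}, p = {p:.3f}")
if p < 0.05:
    print("Significant difference between winning and losing teams (p < 0.05).")
```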
8457 Correction of Infrared Data for Electrical Components on a Board
Authors: Seong-Ho Song, Ki-Seob Kim, Seop-Hyeong Park, Seon-Woo Lee
Abstract:
In this paper, a data correction algorithm is proposed for the case in which the environmental air temperature varies. To correct the infrared data, the initial temperature or the initial infrared image data is used, so that a target source system is not necessary. The temperature data obtained from the infrared detector exhibit a nonlinear dependence on the surface temperature. To handle this nonlinearity, a Taylor series approach is adopted. It is shown that the proposed algorithm can reduce the influence of the environmental temperature on the components on the board. The main advantage of this algorithm is that it uses only the initial temperature of the components on the board, rather than an external reference device such as a black-body source, to obtain reference temperatures.
Keywords: Infrared camera, temperature data compensation, environmental ambient temperature, electric component.
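A rough sketch of the kind of correction described, approximating the ambient-temperature effect with a truncated Taylor expansion around the initial reading, is given below; the functional form and the coefficients k1 and k2 are purely illustrative assumptions, not the algorithm from the paper.

```python
# Purely illustrative: correct an infrared reading for a change in ambient
# temperature using a truncated (second-order) Taylor expansion around the
# initial conditions. Coefficients k1, k2 are hypothetical calibration values.
def correct_reading(t_measured, t_ambient, t_ambient_initial, k1=0.85, k2=0.002):
    d = t_ambient - t_ambient_initial           # ambient temperature change
    return t_measured - (k1 * d + k2 * d ** 2)  # subtract the estimated drift

# Example: a component read 52.0 C after the room warmed from 20 C to 27 C.
print(f"corrected = {correct_reading(52.0, 27.0, 20.0):.2f} C")
```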
8456 An Enhanced Particle Swarm Optimization Algorithm for Multiobjective Problems
Authors: Houda Abadlia, Nadia Smairi, Khaled Ghedira
Abstract:
Multiobjective Particle Swarm Optimization (MOPSO) has shown effective performance in solving test functions and real-world optimization problems. However, the method suffers from premature convergence, which may lead to a lack of diversity. To improve its performance, this paper presents a hybrid approach that embeds MOPSO in an island model and integrates a local search technique, Variable Neighborhood Search, to enhance diversity in the swarm. Experiments on two series of test functions have shown the effectiveness of the proposed approach. A comparison with other evolutionary algorithms shows that the proposed approach performs well in solving multiobjective optimization problems.
Keywords: Particle swarm optimization, migration, variable neighborhood search, multiobjective optimization.
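To make the particle update underlying MOPSO concrete, the sketch below shows a minimal single-objective PSO; the island migration and Variable Neighborhood Search components of the proposed hybrid are not reproduced, and the parameters are common textbook values rather than the authors'.

```python
import random

def pso(objective, dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal single-objective PSO (illustrative parameters)."""
    # Initialise positions, velocities, and personal/global bests.
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=objective)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=objective)
    return gbest

sphere = lambda x: sum(v * v for v in x)       # classic test function
print("best found:", pso(sphere))
```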
8455 The Importance of Bridge Health Monitoring
Authors: Punya Chupanit, Chayatan Phromsorn
Abstract:
In the past, many bridge collapses occurred due to a lack of information on bridge structural capacity. Assessment of concrete bridge health mostly relied on information from visual inspection, which was sometimes inadequate. This study was conducted to investigate the relationship between bridge structural condition and bridge visual condition, as part of a larger project conducted at the Department of Highways of Thailand. In this study, 31 bridges, including slab-type bridges, plank-girder bridges, prestressed box-beam bridges, prestressed I-girder bridges and prestressed multibeam bridges, were selected for visual inspection and load testing. A positive correlation was found between bridge appearance and the bridge's load carrying capacity; however, the statistical analysis revealed only a low correlation between them.
Keywords: Bridge, visual inspection, load test, condition rating, rating factor.
8454 On the Prediction of Transmembrane Helical Segments in Membrane Proteins
Abstract:
The prediction of transmembrane helical segments (TMHs) in membrane proteins is an important field in bioinformatics research. In this paper, a method based on the discrete wavelet transform (DWT) has been developed to predict the number and location of TMHs in membrane proteins. The PDB entry coded 1F88 was chosen as an example to describe the prediction of the number and location of TMHs using this method. One group of test data sets containing a total of 19 protein sequences was utilized to assess the effect of the method. Compared with the prediction results of DAS, PRED-TMR2, SOSUI, HMMTOP2.0 and TMHMM2.0, the obtained results indicate that the presented method has higher prediction accuracy.
Keywords: Hydrophobicity, membrane protein, transmembrane helical segments, wavelet transform.
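A minimal, NumPy-only sketch of the kind of signal decomposition such methods build on is shown below: a single-level Haar discrete wavelet transform applied to a hydrophobicity profile (the short made-up sequence is illustrative; this is not the paper's predictor).

```python
import numpy as np

# Kyte-Doolittle hydrophobicity values (standard scale) for a short,
# made-up sequence; real membrane proteins are of course much longer.
KD = {"A": 1.8, "I": 4.5, "L": 3.8, "F": 2.8, "G": -0.4, "S": -0.8,
      "K": -3.9, "D": -3.5, "V": 4.2, "T": -0.7}
sequence = "AILFVLLIGKDSTAIL"
signal = np.array([KD[aa] for aa in sequence], dtype=float)

# Single-level Haar DWT: approximation (low-pass) and detail (high-pass) bands.
pairs = signal.reshape(-1, 2)
approx = pairs.sum(axis=1) / np.sqrt(2)        # smooth trend of hydrophobicity
detail = np.diff(pairs, axis=1).ravel() / np.sqrt(2)

print("approximation:", np.round(approx, 2))
print("detail:       ", np.round(detail, 2))
```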
8453 A Generalised Relational Data Model
Authors: Georgia Garani
Abstract:
A generalised relational data model is formalised for the representation of data with nested structure of arbitrary depth. A recursive algebra for the proposed model is presented, and all the operations are formally defined. The proposed model is proved to be a superset of the conventional relational model (CRM). The functionality and validity of the model are shown by a prototype implementation undertaken in the functional programming language Miranda.
Keywords: Nested relations, recursive algebra, recursive nested operations, relational data model.
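A small Python sketch (the paper's prototype is written in Miranda) of one recursive operation over a nested relation, unnest, is given below; the toy schema is an assumption.

```python
# Toy nested relation: each tuple has an atomic attribute and a nested relation.
employees = [
    {"dept": "R&D",   "staff": [{"name": "Ann", "skills": ["ml", "db"]},
                                {"name": "Bob", "skills": ["db"]}]},
    {"dept": "Sales", "staff": [{"name": "Eve", "skills": ["crm"]}]},
]

def unnest(relation, attr):
    """Flatten one nested attribute: one output tuple per nested tuple/value."""
    result = []
    for tup in relation:
        for inner in tup[attr]:
            flat = {k: v for k, v in tup.items() if k != attr}
            if isinstance(inner, dict):
                flat.update(inner)
            else:
                flat[attr] = inner
            result.append(flat)
    return result

# Recursive application: unnest 'staff', then unnest 'skills' of the result.
flat = unnest(unnest(employees, "staff"), "skills")
for row in flat:
    print(row)
```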
8452 WiFi Data Offloading: Bundling Method in a Canvas Business Model
Authors: Majid Mokhtarnia, Alireza Amini
Abstract:
Mobile operators face increasing data traffic as a critical issue, and a vital responsibility of the operators is to deal with this trend in order to create added value. This paper addresses a bundling method in a Canvas business model within a WiFi Data Offloading (WDO) strategy, by which some elements of the model may be affected. In the proposed method, a number of data packages are offered to subscribers, some of which include a complimentary volume of WiFi-offloaded data. The paper analyses this method in terms of attractiveness and profitability. The results demonstrate that the quality of implementation of the WDO strongly affects the final result and helps the decision maker to choose the best option.
Keywords: Bundling, canvas business model, telecommunication, WiFi Data Offloading.
8451 Review of a Real-Time Infectious Waste Management System Using QR Code
Authors: Hiraku Nunomiya, Takuo Ichiju, Yoshiyuki Higuchi
Abstract:
In the management of industrial waste, conversion from paper invoices to electronic forms is currently under way in developed countries. Difficulties in such computerization include the lack of synchronization between the actual goods and the corresponding data managed by the server. Consequently, a system has been developed in which a QR code is attached to the waste material. The code is read at each stage, from discharge until disposal, and progress at each stage can be easily reported. This system can be linked with the Japanese public digital authentication service for waste, taking advantage of its strengths, and can be used to submit reports to the regulatory authorities. Its usefulness was confirmed by a verification test, and the system has been put into practice.
Keywords: Infectious Waste, Electronic Manifest, Real Time Management, QR code.
8450 Big Data Analytics and Data Security in the Cloud via Fully Homomorphic Encryption
Authors: Victor Onomza Waziri, John K. Alhassan, Idris Ismaila, Moses Noel Dogonyaro
Abstract:
This paper describes the problem of building secure computational services for encrypted information in cloud computing without decrypting the encrypted data. It thereby addresses the need for a computational encryption model that can enhance the security of big data with respect to the privacy, confidentiality and availability of users' data. The cryptographic model applied for the computational processing of the encrypted data is the fully homomorphic encryption scheme. We contribute a theoretical presentation of high-level computational processes, based on number theory and algebra, that can be integrated and leveraged in cloud computing, together with the detailed mathematical concepts underlying fully homomorphic encryption models. This contribution supports the full implementation of a cryptographic security algorithm for big-data analytics.
Keywords: Data Analytics, Security, Privacy, Bootstrapping, and Fully Homomorphic Encryption Scheme.
8449 Physical and Mechanical Properties of Particleboard from Bamboo Waste
Authors: Vanchai Laemlaksakul
Abstract:
This research evaluated the technical feasibility of making single-layer experimental particleboard panels from bamboo waste (Dendrocalamus asper Backer) generated when bamboo is converted into strips for laminated bamboo furniture. The variable factors were density (600, 700 and 800 kg/m³) and conditioning temperature (25, 40 and 55 °C). The experimental panels were tested for their physical and mechanical properties, including modulus of elasticity (MOE), modulus of rupture (MOR), internal bonding strength (IB), screw holding strength (SH) and thickness swelling, according to the procedures defined by the Japanese Industrial Standard (JIS). The mechanical test results showed that the MOR, MOE and IB values did not meet the set criteria, except for the MOR values at a density of 700 kg/m³ at 25 °C and at a density of 800 kg/m³ at 25 and 40 °C, and the IB values at a density of 600 kg/m³ at 40 °C and at a density of 800 kg/m³ at 55 °C. The SH values met the set standard, except at a density of 600 kg/m³ at 40 and 55 °C. In conclusion, bamboo waste, a valuable renewable biomass, could be used to manufacture boards.
Keywords: Particleboard, urea formaldehyde resin, bamboo waste.
8448 A Power Reduction Technique for Built-In-Self Testing Using Modified Linear Feedback Shift Register
Authors: Mayank Shakya, Soundra Pandian. K. K
Abstract:
A linear feedback shift register (LFSR) is proposed that aims to reduce power consumption from within. It reduces the power consumption during testing of a Circuit Under Test (CUT) in two stages. In the first stage, Control Logic (CL) makes the clocks of the switching units of the register inactive for the periods when their output is going to be the same as the previous one, thus reducing unnecessary switching of the flip-flops. In the second stage, the LFSR reorders the test vectors by interchanging each bit with its nearest neighboring bit. This keeps the fault coverage of the vectors unchanged but reduces the Total Hamming Distance (THD), so that power is reduced during the shifting operation.
Keywords: Linear feedback shift register, Total Hamming Distance, fault coverage, control logic.
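A short sketch of a conventional Fibonacci LFSR pattern generator and the Total Hamming Distance metric is given below to make the quantities above concrete; the taps and seed are illustrative, and the proposed clock-gating control logic and bit-interchange reordering are not reproduced.

```python
def lfsr_patterns(seed, taps, width, count):
    """Generate `count` test vectors from a Fibonacci LFSR (illustrative taps)."""
    state, vectors = seed, []
    for _ in range(count):
        vectors.append(state)
        feedback = 0
        for t in taps:                       # XOR of the tapped bits
            feedback ^= (state >> t) & 1
        state = ((state << 1) | feedback) & ((1 << width) - 1)
    return vectors

def total_hamming_distance(vectors):
    """Sum of bit flips between consecutive vectors (a proxy for shift power)."""
    return sum(bin(a ^ b).count("1") for a, b in zip(vectors, vectors[1:]))

vectors = lfsr_patterns(seed=0b1011, taps=(3, 2), width=4, count=15)
print("vectors:", [format(v, "04b") for v in vectors])
print("THD    :", total_hamming_distance(vectors))
```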
8447 Laboratory Evaluation of Geogrids Used for Stabilizing Soft Subgrades
Authors: Magdi M. E. Zumrawi, Nehla Mansour
Abstract:
This paper aims to assess the efficiency of using geogrid reinforcement for subgrade stabilization. The literature on applying the geogrid reinforcement technique to pavements built on soft subgrades, together with previous experience, was reviewed. Laboratory tests were conducted on soil reinforced with geogrids in one or several layers. The soil specimens were compacted in four layers, with or without geogrid sheets. The California Bearing Ratio (CBR) test, in the soaked condition, was performed on the natural soil and on the soil-geogrid specimens. The test results revealed that the CBR value is strongly affected by the location of the geogrid sheet and the number of sheets used in the soil specimen. When a geogrid sheet was placed in the first layer of the soil, the CBR value increased by 26%. Moreover, the CBR value increased significantly, by 62%, when geogrid sheets were placed in all four layers. The high CBR value is attributed to the interface friction and interlock involved in the geogrid/soil interactions. It can be concluded that geogrid reinforcement is a successful and more economical technique.
Keywords: Geogrid, reinforcement, stabilization, subgrade.
8446 Hierarchical Clustering Algorithms in Data Mining
Authors: Z. Abdullah, A. R. Hamdan
Abstract:
Clustering is the process of grouping data objects into clusters so that objects in the same cluster are similar to one another. Clustering is one of the areas of data mining, and clustering algorithms can be classified into partitioning, hierarchical, density-based and grid-based methods. In this paper we survey and review four major hierarchical clustering algorithms: CURE, ROCK, CHAMELEON and BIRCH. The resulting state of the art of these algorithms will help in eliminating current problems as well as in deriving more robust and scalable clustering algorithms.
Keywords: Clustering, method, algorithm, hierarchical, survey.
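As a quick illustration of agglomerative hierarchical clustering in general (not an implementation of CURE, ROCK, CHAMELEON or BIRCH), the sketch below clusters a toy data set with SciPy.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Small 2-D toy data set: two visually separated groups.
points = np.array([[1.0, 1.1], [1.2, 0.9], [0.9, 1.0],
                   [5.0, 5.2], [5.1, 4.9], [4.8, 5.0]])

# Agglomerative clustering with average linkage; cut the dendrogram at 2 clusters.
Z = linkage(points, method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
print("cluster labels:", labels)
```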
8445 Recommender Systems Using Ensemble Techniques
Authors: Yeonjeong Lee, Kyoung-jae Kim, Youngtae Kim
Abstract:
This study proposes a novel recommender system that uses data mining and multi-model ensemble techniques to enhance recommendation performance by reflecting the user's precise preferences. The proposed model consists of two steps. In the first step, the study uses logistic regression, decision trees, and artificial neural networks to predict customers who have a high likelihood of purchasing products in each product group, and then combines the results of each predictor using multi-model ensemble techniques such as bagging and bumping. In the second step, the study uses market basket analysis to extract association rules for co-purchased products. Finally, through these two steps, the system selects customers who have a high likelihood of purchasing products in each product group and recommends appropriate products from the same or different product groups to them. We test the usability of the proposed system using a prototype and real-world transaction and profile data. In addition, we survey user satisfaction with the product list recommended by the proposed system and with randomly selected product lists. The results show that the proposed system may be useful in real-world online shopping stores.
Keywords: Product recommender system, Ensemble technique, Association rules, Decision tree, Artificial neural networks.
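A compact sketch of the first step described above, three different classifiers whose purchase-likelihood predictions are combined by a simple ensemble average, is given below; the synthetic data and scikit-learn models are assumptions, and the market-basket step is not shown.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))                      # synthetic customer features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=300) > 0).astype(int)

models = [LogisticRegression(max_iter=1000),
          DecisionTreeClassifier(max_depth=4),
          MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000)]
for m in models:
    m.fit(X, y)

# Multi-model ensemble: average the predicted purchase probabilities.
proba = np.mean([m.predict_proba(X)[:, 1] for m in models], axis=0)
likely_buyers = np.where(proba > 0.7)[0]           # candidates to recommend to
print(f"{len(likely_buyers)} customers flagged as likely purchasers")
```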
8444 Iterative Clustering Algorithm for Analyzing Temporal Patterns of Gene Expression
Authors: Seo Young Kim, Jae Won Lee, Jong Sung Bae
Abstract:
Microarray experiments are information rich; however, extensive data mining is required to identify the patterns that characterize the underlying mechanisms of action. For biologists, a key aim when analyzing microarray data is to group genes based on the temporal patterns of their expression levels. In this paper, we used an iterative clustering method to find temporal patterns of gene expression. We evaluated the performance of this method by applying it to real sporulation data and simulated data. The patterns obtained using the iterative clustering were found to be superior to those obtained using existing clustering algorithms.
Keywords: Clustering, microarray experiment, temporal pattern of gene expression data.
8443 Evaluation of the Internal Quality for Pineapple Based on the Spectroscopy Approach and Neural Network
Authors: Nonlapun Meenil, Pisitpong Intarapong, Thitima Wongsheree, Pranchalee Samanpiboon
Abstract:
In Thailand, once pineapples are harvested, they must be classified into two classes based on their sweetness: sweet and unsweet. This paper studies and develops an assessment of the internal quality of pineapples using a low-cost compact spectroscopy sensor, following a spectroscopy approach and a Neural Network (NN). In the experiments, Batavia pineapples were utilized, generating 100 samples. The extracted juice of each sample was used to determine the Soluble Solid Content (SSC), labeling the samples into sweet and unsweet classes. In terms of experimental equipment, a sensor cover was specifically designed to hold the sensor and light source so that the reflectance could be read at a depth of 5 mm into the pineapple flesh. Using the spectroscopy sensor, visible and near-infrared reflectance (Vis-NIR) data were collected. The NN was used to classify the pineapple classes. Before the classification step, the preprocessing methods of class balancing, data shuffling, and standardization were applied. The 510 nm and 900 nm reflectance values of the middle parts of the pineapples were used as features of the NN. With a sequential model and the ReLU activation function, 100% accuracy on the training set and 76.67% accuracy on the test set were achieved. In summary, the low-cost compact spectroscopy sensor achieved favorable results in classifying pineapples into the two sweetness classes.
Keywords: Spectroscopy, soluble solid content, pineapple, neural network.
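A minimal sketch of the classification step described, standardizing the two reflectance features (510 nm and 900 nm) and training a small sequential network with ReLU activation, is shown below; synthetic reflectance values stand in for the real measurements and the layer sizes are assumptions.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

rng = np.random.default_rng(1)
# Synthetic stand-in for Vis-NIR readings: [reflectance@510nm, reflectance@900nm].
X = rng.uniform(0.2, 0.8, size=(100, 2))
y = (X[:, 1] - X[:, 0] > 0.05).astype(int)         # 1 = sweet, 0 = unsweet (toy rule)

X_std = StandardScaler().fit_transform(X)          # standardization step

model = Sequential([
    Dense(8, activation="relu", input_shape=(2,)),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_std, y, epochs=50, batch_size=8, verbose=0)
loss, acc = model.evaluate(X_std, y, verbose=0)
print(f"training accuracy: {acc:.2%}")
```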
8442 Effective Software-Based Solution for Processing Mass Downstream Data in Interactive Push VOD System
Authors: Ni Hong, Wu Guobin, Wu Gang, Pan Liang
Abstract:
An interactive push VOD system is a new kind of system that incorporates push technology and interactive techniques. It can push movies to users at high speed during off-peak hours for optimal network usage, thereby saving bandwidth. This paper presents an effective software-based solution for processing mass downstream data at the terminals of an interactive push VOD system, where the service can download a movie according to a viewer's selection. The downstream data are divided into two categories: (1) carousel data delivered according to the DSM-CC protocol, and (2) IP data delivered according to the Euro-DOCSIS protocol. In order to accelerate download speed and reduce the data loss rate at the terminals, the software strategy introduces caching, multi-threading and resuming mechanisms. The experiments demonstrate the advantages of the software-based solution.
Keywords: DSM-CC, data carousel, Euro-DOCSIS, push VOD.
8441 Approaches and Schemes for Storing DTD-Independent XML Data in Relational Databases
Authors: Mehdi Emadi, Masoud Rahgozar, Adel Ardalan, Alireza Kazerani, Mohammad Mahdi Ariyan
Abstract:
The volume of XML data exchange is increasing explosively, and the need for efficient mechanisms for XML data management is vital. Many XML storage models have been proposed for storing DTD-independent XML documents in relational database systems. Benchmarking is the best way to highlight the pros and cons of the different approaches. In this study, we use a common benchmarking scheme, known as XMark, to compare the most cited and the newly proposed DTD-independent methods in terms of logical reads, physical I/O, CPU time and duration. We show the effect of the Label Path approach, of extracting values and storing them in a separate table, and of the type of join needed for each method's query answering.
Keywords: XML data management, XPath, DTD-independent XML data.
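A minimal example of the Label Path idea mentioned above, shredding a DTD-independent XML document into a relational table keyed by each node's label path, is sketched below using only the Python standard library; the sample document and table schema are assumptions, not one of the benchmarked storage models.

```python
import sqlite3
import xml.etree.ElementTree as ET

xml_doc = """<site><item id="1"><name>clock</name><price>12.5</price></item>
<item id="2"><name>lamp</name><price>30.0</price></item></site>"""

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE nodes (id INTEGER PRIMARY KEY, label_path TEXT, value TEXT)")

def shred(element, path=""):
    """Store each element's label path and text value in the relational table."""
    label_path = f"{path}/{element.tag}"
    text = (element.text or "").strip()
    conn.execute("INSERT INTO nodes (label_path, value) VALUES (?, ?)",
                 (label_path, text))
    for child in element:
        shred(child, label_path)

shred(ET.fromstring(xml_doc))

# A simple XPath-like lookup (/site/item/name) becomes a label-path equality query.
for row in conn.execute("SELECT value FROM nodes WHERE label_path = '/site/item/name'"):
    print(row[0])
```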
8440 Approaches and Schemes for Storing DTD-Independent XML Data in Relational Databases
Authors: Mehdi Emadi, Masoud Rahgozar, Adel Ardalan, Alireza Kazerani, Mohammad Mahdi Ariyan
Abstract:
The volume of XML data exchange is increasing explosively, and the need for efficient mechanisms for XML data management is vital. Many XML storage models have been proposed for storing DTD-independent XML documents in relational database systems. Benchmarking is the best way to highlight the pros and cons of the different approaches. In this study, we use a common benchmarking scheme, known as XMark, to compare the most cited and the newly proposed DTD-independent methods in terms of logical reads, physical I/O, CPU time and duration. We show the effect of the Label Path approach, of extracting values and storing them in a separate table, and of the type of join needed for each method's query answering.
Keywords: XML data management, XPath, DTD-independent XML data.
8439 Detecting Rat’s Kidney Inflammation Using Real Time Photoacoustic Tomography
Authors: M. Y. Lee, D. H. Shin, S. H. Park, W.C. Ham, S.K. Ko, C. G. Song
Abstract:
Photoacoustic Tomography (PAT) is a promising medical imaging modality that combines optical imaging contrast with the spatial resolution of ultrasound imaging, and it can also distinguish changes in biological features. However, a real-time PAT system needs to be verified with respect to the photoacoustic effect in tissue. We have therefore developed a real-time PAT system using a custom-developed data acquisition board and an ultrasound linear probe. To evaluate the performance of the system, a phantom test was performed. The system showed satisfactory performance, and its usefulness was confirmed. Using real-time PAT, we monitored the degradation of inflammation induced in the rat's kidney.
Keywords: Photoacoustic tomography, inflammation detection, rat, kidney, contrast agent, ultrasound.
8438 Development of Electric Performance Testing System for Ceramic Chips using PZT Actuator
Authors: Jin-Ho Bae, Yong-Tae Kim, S K Deb Nath, Seo-Ik Kang, Sung-Gaun Kim
Abstract:
The Reno-pin contact test, controlled by a DC motor, is a method used to characterize electronic chips in electronic and telecommunication devices. A new electric performance testing system is developed in which the testing method is controlled by a piezoelectric transducer (PZT) instead of a DC motor, which reduces vibration and noise. In the Reno-pin contact testing system, the vertical displacement of the Reno-pin is very short. By using a flexible guide in the new Reno-pin contact system, the vertical movement of the Reno-pin is increased to many times that of the existing DC-motor-based Reno-pin contact testing method. Using the present electric performance testing system, with a flexible hinge and a PZT instead of a DC motor, electronic chip manufacturers are able to characterize chips at low cost and high speed.
Keywords: PZT actuator, chip test, mechanical amplifier.
8437 Analysis of Urban Population Using Twitter Distribution Data: Case Study of Makassar City, Indonesia
Authors: Yuyun Wabula, B. J. Dewancker
Abstract:
In the past decade, social networking apps have grown very rapidly. Geolocation data is one of the important features of social media, attaching the user's real-world location coordinates. This paper proposes the use of geolocation data from the Twitter social media application to gain knowledge about urban dynamics, especially human mobility behavior, and aims to explore the relation between Twitter geolocation and the presence of people in urban areas. First, the study analyzes the spread of people in particular areas within the city using Twitter social media data. Second, we match and categorize the existing places based on visits by the same individuals. We then combine the Twitter data from the tracking results with questionnaire data to capture the Twitter user profile. To do that, we use frequency distribution analysis to determine the percentage of visitors. To validate the hypothesis, we compare the results with local population statistics and the land-use mapping released by the city planning department of the Makassar local government. The results show that there is a correlation between the Twitter geolocation and the questionnaire data. Thus, integrating Twitter data and survey data can reveal the profile of social media users.
Keywords: Geolocation, Twitter, distribution analysis, human mobility.
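A small sketch of the frequency distribution step, counting geotagged tweets per area and converting the counts to percentages, is shown below; the coordinates and the grid-cell "districts" are made up for illustration.

```python
import pandas as pd

# Made-up geotagged tweets: (user id, latitude, longitude).
tweets = pd.DataFrame({
    "user": ["u1", "u2", "u1", "u3", "u4", "u2", "u5"],
    "lat":  [-5.13, -5.14, -5.16, -5.13, -5.17, -5.15, -5.14],
    "lon":  [119.41, 119.42, 119.45, 119.41, 119.46, 119.44, 119.42],
})

# Assign each tweet to a coarse grid cell (illustrative, not real city zoning).
tweets["district"] = (tweets["lat"].round(2).astype(str) + ","
                      + tweets["lon"].round(2).astype(str))

# Distribution frequency analysis: share of tweet activity per district.
freq = tweets.groupby("district")["user"].count()
print((100 * freq / freq.sum()).round(1).rename("percent_of_tweets"))
```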
8436 Database Compression for Intelligent On-board Vehicle Controllers
Authors: Ágoston Winkler, Sándor Juhász, Zoltán Benedek
Abstract:
The vehicle fleets of public transportation companies are often equipped with intelligent on-board passenger information systems. A frequently used but time- and labor-intensive way of keeping the on-board controllers up to date is manual updating using memory cards (e.g. flash cards) or portable computers. This paper describes a compression algorithm that enables data transmission over low-bandwidth wireless radio networks (e.g. GPRS) by minimizing the amount of data traffic. In typical cases it reaches a compression ratio an order of magnitude better than that of general-purpose compressors. The compressed data can also be easily expanded by the low-performance controllers.
Keywords: Data analysis, data compression, differential encoding, run-length encoding, vehicle control.
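A tiny sketch of combining differential encoding with run-length encoding, the two techniques named in the keywords, is given below on an assumed timetable-like column of departure times; this is not the on-board system's actual data format.

```python
def delta_encode(values):
    """Differential encoding: store the first value, then successive differences."""
    return [values[0]] + [b - a for a, b in zip(values, values[1:])]

def run_length_encode(values):
    """Run-length encoding: collapse repeats into (value, count) pairs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

def delta_decode(deltas):
    """Expansion on the low-performance controller: cumulative sum of the deltas."""
    out = [deltas[0]]
    for d in deltas[1:]:
        out.append(out[-1] + d)
    return out

# Departure times in seconds since midnight: regular headways compress very well.
departures = [36000, 36300, 36600, 36900, 37200, 37500, 37800]
deltas = delta_encode(departures)
print("deltas   :", deltas)                       # [36000, 300, 300, 300, 300, 300, 300]
print("delta+RLE:", run_length_encode(deltas))    # [[36000, 1], [300, 6]]
assert delta_decode(deltas) == departures
```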
8435 EUDIS: An Encryption Scheme for User-Data Security in Public Networks
Authors: S. Balaji, M. Rajaram
Abstract:
The method of introducing proxy interpretation for sending and receiving requests increases the capability of the server, and our approach, UDIS (User-Data Identity Security), which addresses data and user authentication without extending the size of the data, performs better than a hybrid IDS (Intrusion Detection System). At the same time, all the security stages we have framed are kept light, which minimizes the response time of each request. When an anomaly is detected, the proxy extracts its identity before rejecting it, to prevent it from entering the system. In the case of false anomalies, the request is reshaped and transformed into a legitimate request for further response. Finally, normal and abnormal requests are held in two different queues with their own priorities.
Keywords: IDS, Data & User authentication, UDIS.
8434 Five Vital Factors Related to Employees’ Job Performance
Authors: Siri-orn Champatong
Abstract:
The purpose of this research was to study five vital factors related to employees' job performance. A total of 250 respondents were sampled from employees working at a public warehouse organization in Bangkok, Thailand. The samples were divided into two groups according to their work experience: the average working experience was about 9 years for group one and 28 years for group two. A questionnaire was utilized as the data collection tool. Statistics utilized in this research included frequency, percentage, mean, standard deviation, t-test analysis, one-way ANOVA, and the Pearson product-moment correlation coefficient; data were analyzed using the Statistical Package for the Social Sciences. The findings disclosed that the majority of respondents were female, between 23 and 31 years old, single, and holding an undergraduate degree, and that the average income of respondents was less than 30,900 baht. The findings also revealed that the factors of organization chart awareness, job process and technology, internal environment, employee loyalty, and policy and management were all rated at a medium level. The hypothesis testing revealed that differences in gender, age, and position were associated with differences in awareness of the organization chart, job process and technology, internal environment, employee loyalty, and policy and management, in the same direction and at a low level.
Keywords: Employees, Factors Related, Job Performance, Public Warehouse Organization.
8433 Parallel Vector Processing Using Multi Level Orbital DATA
Authors: Nagi Mekhiel
Abstract:
Many applications use vector operations, applying a single instruction to multiple data that map to different locations in conventional memory. Transferring data from memory is limited by access latency and bandwidth, which affects the performance gain of vector processing. We present a memory system that makes all of its content available to processors in time, so that processors do not need to access the memory: each location is forced to be available to all processors at a specific time. The data move in different orbits and become available to processors in higher orbits at different times. We use this memory to apply parallel vector operations to data streams at the first orbit level. Data processed in the first level move to an upper orbit one data element at a time, allowing a processor in that orbit to apply another vector operation to deal with the serial-code limitations inherent in all parallel applications and to interleave it with lower-level vector operations.
Keywords: Memory organization, parallel processors, serial code, vector processing.