Search results for: optimum data transfer
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9052


7192 Effect of Alloying Elements and Hot Forging/Rolling Reduction Ratio on Hardness and Impact Toughness of Heat Treated Low Alloy Steels

Authors: Mahmoud M. Tash

Abstract:

The present study was carried out to investigate the effect of alloying elements and thermo-mechanical treatment (TMT), i.e. hot rolling and forging with different reduction ratios, on the hardness (HV) and impact toughness (J) of heat-treated low alloy steels. By understanding the combined effect of TMT and alloying elements, and by measuring the hardness and impact toughness resulting from the different heat treatments following TMT of the low alloy steels, it is possible to determine which conditions yield optimum mechanical properties and a high strength-to-weight ratio. Experimental correlations between hot work reduction ratio, hardness and impact toughness for the thermo-mechanically treated and heat-treated low alloy steels are analyzed quantitatively, and both regression and mathematical hardness and impact toughness models are developed.

Keywords: Hot forging, hot rolling, heat treatment, hardness (HV), impact toughness (J), microstructure, low alloy steels.

7191 Fault Detection and Identification of COSMED K4b2 Based On PCA and Neural Network

Authors: Jing Zhou, Steven Su, Aihuang Guo

Abstract:

COSMED K4b2 is a portable electrical device designed to test pulmonary function. It is ideal for many applications that need measurement of the cardio-respiratory response either in the field or in the lab, and it is capable of delivering real-time data to a sink node or a PC base station while simultaneously storing the data in memory. However, the actual sensor outputs and received data may contain errors, such as impulsive noise, which can be related to the sensors, low batteries, the environment, or disturbances in the data acquisition process. These abnormal outputs might cause misinterpretation of the exercise or living activities of the persons being monitored. In this paper we propose an effective and feasible method to detect and identify such errors by principal component analysis (PCA) and a back propagation (BP) neural network.

Keywords: BP Neural Network, Exercising Testing, Fault Detection and Identification, Principal Component Analysis.
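
As a rough illustration of the PCA stage described above, the sketch below (hypothetical code, not the authors' implementation) flags abnormal sensor samples by their reconstruction error in a reduced principal-component space; the BP neural network that identifies the error type would then be trained on the flagged samples.

```python
import numpy as np
from sklearn.decomposition import PCA

def fit_pca_detector(X_train, n_components=3):
    """Fit PCA on normal (fault-free) sensor data and derive a detection threshold."""
    pca = PCA(n_components=n_components).fit(X_train)
    # Reconstruction error of the training data sets the detection threshold.
    X_hat = pca.inverse_transform(pca.transform(X_train))
    errors = np.sum((X_train - X_hat) ** 2, axis=1)
    threshold = errors.mean() + 3 * errors.std()   # simple 3-sigma rule (assumption)
    return pca, threshold

def detect_faults(pca, threshold, X_new):
    """Return a boolean mask of samples whose reconstruction error exceeds the threshold."""
    X_hat = pca.inverse_transform(pca.transform(X_new))
    errors = np.sum((X_new - X_hat) ** 2, axis=1)
    return errors > threshold
```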

7190 Array Data Transformation for Source Code Obfuscation

Authors: S. Praveen, P. Sojan Lal

Abstract:

Obfuscation is a low-cost software protection methodology for preventing reverse engineering and re-engineering of applications. Source code obfuscation aims at obscuring the source code to hide the functionality of the code. This paper proposes an array data transformation in order to obfuscate source code that uses arrays. Applications using the proposed data structures force the programmer to obscure the logic manually, which makes the resulting obscured code hard to reverse engineer and also protects the functionality of the code.

Keywords: Reverse Engineering, Source Code Obfuscation.
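
A minimal sketch of the general idea of an array data transformation (an illustrative example, not the specific transformation defined in the paper): element accesses go through a permuted index map, so the stored layout no longer reveals the logical order of the data.

```python
import random

class ObfuscatedArray:
    """Stores elements in a permuted order; all accesses go through an index map."""

    def __init__(self, values, seed=42):
        n = len(values)
        rng = random.Random(seed)
        self._map = list(range(n))
        rng.shuffle(self._map)                 # logical index -> physical index
        self._data = [None] * n
        for logical, value in enumerate(values):
            self._data[self._map[logical]] = value

    def __getitem__(self, logical_index):
        return self._data[self._map[logical_index]]

    def __setitem__(self, logical_index, value):
        self._data[self._map[logical_index]] = value

# Usage: the physical storage order no longer matches the logical order.
a = ObfuscatedArray([10, 20, 30, 40])
assert a[2] == 30
```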

7189 Artificial Neural Network Approach for Inventory Management Problem

Authors: Govind Shay Sharma, Randhir Singh Baghel

Abstract:

The stock management of raw materials and finished goods is a significant issue for industries in fulfilling customer demand. Optimization of inventory strategies is crucial to enhancing customer service, reducing lead times and costs, and meeting market demand. This paper suggests an approach to predict the optimum stock level by utilizing past stock data and forecasting the required quantities. We utilize an Artificial Neural Network (ANN) to determine the optimal value, and the objective is to develop an optimized ANN that can find the best solution for the inventory model. The k-means algorithm is employed to create homogeneous groups of items; these groups exhibit similar characteristics or attributes that make them suitable for being managed under uniform inventory control policies. The paper proposes a method that uses a neural fitting algorithm to control the cost of inventory.

Keywords: Artificial Neural Network, inventory management, optimization, distributor center.
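
A minimal sketch of the pipeline outlined above, using assumed data shapes and scikit-learn stand-ins (k-means to group items with similar demand profiles, then a small feed-forward network fitted per group to predict the stock level):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

# Hypothetical data: one row per item, columns are past monthly demands.
past_demand = np.random.rand(50, 12) * 100      # 50 items, 12 months of history
next_demand = past_demand[:, -1] * 1.05          # placeholder target values

# Step 1: group items with similar demand profiles.
groups = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(past_demand)

# Step 2: fit one small ANN per group and predict the stock level.
for g in np.unique(groups):
    idx = groups == g
    model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
    model.fit(past_demand[idx], next_demand[idx])
    print(f"group {g}: predicted stock levels {model.predict(past_demand[idx])[:3]}")
```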

7188 A Teaching Learning Based Optimization for Optimal Design of a Hybrid Energy System

Authors: Ahmad Rouhani, Masoud Jabbari, Sima Honarmand

Abstract:

This paper introduces a method for the optimal design of a hybrid wind/photovoltaic/fuel cell generation system for a typical domestic load that is not located near the electricity grid. In this configuration, the combination of a battery, an electrolyser, and a hydrogen storage tank is used as the energy storage system. The aim of this design is minimization of the overall cost of the generation scheme over 20 years of operation. Matlab/Simulink is applied for choosing the appropriate structure and for the optimization of system sizing. A teaching-learning-based optimization is used to optimize the cost function. An overall power management strategy is designed for the proposed system to manage power flows among the different energy sources and the storage unit. The results have been analyzed in technical and economic terms. The simulation results indicate that the proposed hybrid system would be a feasible solution for stand-alone applications at remote locations.

Keywords: Hybrid energy system, optimum sizing, power management, TLBO.
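
For readers unfamiliar with TLBO, a bare-bones version of the algorithm is sketched below on a generic cost function; the paper applies it to the 20-year system cost with power-management constraints, which is not reproduced here.

```python
import numpy as np

def tlbo(cost, lower, upper, pop=20, iters=100, seed=0):
    """Minimise `cost` with Teaching-Learning-Based Optimization (no tuning parameters)."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    X = rng.uniform(lower, upper, size=(pop, lower.size))
    f = np.array([cost(x) for x in X])

    for _ in range(iters):
        # Teacher phase: move the class toward the current best solution.
        teacher = X[f.argmin()]
        TF = rng.integers(1, 3)                          # teaching factor (1 or 2)
        for i in range(pop):
            x_new = np.clip(X[i] + rng.random(lower.size) * (teacher - TF * X.mean(axis=0)),
                            lower, upper)
            f_new = cost(x_new)
            if f_new < f[i]:
                X[i], f[i] = x_new, f_new
        # Learner phase: learn from a randomly chosen classmate.
        for i in range(pop):
            j = rng.integers(pop)
            if j == i:
                continue
            direction = X[j] - X[i] if f[j] < f[i] else X[i] - X[j]
            x_new = np.clip(X[i] + rng.random(lower.size) * direction, lower, upper)
            f_new = cost(x_new)
            if f_new < f[i]:
                X[i], f[i] = x_new, f_new
    return X[f.argmin()], f.min()

# Example: minimise a simple quadratic in place of the hybrid-system cost model.
best_x, best_f = tlbo(lambda x: ((x - 3) ** 2).sum(), lower=[0, 0], upper=[10, 10])
```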

7187 An Efficient Adaptive Thresholding Technique for Wavelet Based Image Denoising

Authors: D. Gnanadurai, V. Sadasivam

Abstract:

This framework describes a computationally more efficient and adaptive threshold estimation method for image denoising in the wavelet domain, based on Generalized Gaussian Distribution (GGD) modeling of subband coefficients. In the proposed method, the choice of the threshold estimate is carried out by analysing statistical parameters of the wavelet subband coefficients such as the standard deviation, arithmetic mean and geometric mean. The noisy image is first decomposed into many levels to obtain different frequency bands. Then soft thresholding is used to remove the noisy coefficients, with the optimum threshold value fixed by the proposed method. Experimental results on several test images show that the method yields significantly superior image quality and better Peak Signal to Noise Ratio (PSNR). To prove the efficiency of the method in image denoising, it is compared with various denoising methods such as the Wiener filter, average filter, VisuShrink and BayesShrink.

Keywords: Wavelet Transform, Gaussian Noise, Image Denoising, Filter Banks, Thresholding.
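
A generic sketch of wavelet-domain soft thresholding using PyWavelets is shown below; the threshold rule here is a simple universal (VisuShrink-like) estimate, not the GGD-based estimator proposed in the paper.

```python
import numpy as np
import pywt

def denoise(image, wavelet="db4", levels=3):
    """Soft-threshold detail coefficients level by level (illustrative threshold rule)."""
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    out = [coeffs[0]]                                    # keep the approximation band
    for details in coeffs[1:]:
        thresholded = []
        for band in details:                             # (horizontal, vertical, diagonal)
            sigma = np.median(np.abs(band)) / 0.6745     # robust noise estimate
            t = sigma * np.sqrt(2 * np.log(band.size))   # universal threshold
            thresholded.append(pywt.threshold(band, t, mode="soft"))
        out.append(tuple(thresholded))
    return pywt.waverec2(out, wavelet)
```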

7186 Applying Fuzzy FP-Growth to Mine Fuzzy Association Rules

Authors: Chien-Hua Wang, Wei-Hsuan Lee, Chin-Tzong Pang

Abstract:

In data mining, association rules are used to find associations between the different items of a transaction database. As data are collected and stored, valuable rules can be found through association rules, which can be applied to help managers execute marketing strategies and establish sound market frameworks. This paper aims to use Fuzzy Frequent Pattern growth (FFP-growth) to derive fuzzy association rules. First, we apply fuzzy partition methods and determine a membership function of the quantitative value for each transaction item. Next, we implement FFP-growth to carry out the data mining process. In addition, in order to understand the impact of the Apriori algorithm and the FFP-growth algorithm on the execution time and the number of generated association rules, experiments are performed using different database sizes and thresholds. Finally, the experimental results show that the FFP-growth algorithm is more efficient than other existing methods.

Keywords: Data mining, association rule, fuzzy frequent pattern growth.

7185 A Supervised Learning Data Mining Approach for Object Recognition and Classification in High Resolution Satellite Data

Authors: Mais Nijim, Rama Devi Chennuboyina, Waseem Al Aqqad

Abstract:

Advances in the spatial and spectral resolution of satellite images have led to tremendous growth in large image databases. The data we acquire through satellites, radars, and sensors contain important geographical information that can be used for remote sensing applications such as region planning and disaster management. Spatial data classification and object recognition are important tasks for many applications. However, classifying objects and identifying them manually from images is a difficult task. Object recognition is often considered a classification problem, and this task can be performed using machine-learning techniques. Among the many machine-learning algorithms, classification is done using supervised classifiers such as Support Vector Machines (SVM) when the area of interest is known. We propose a classification method that considers neighboring pixels in a region for feature extraction and evaluates classifications precisely according to neighboring classes for semantic interpretation of the region of interest (ROI). A dataset has been created for training and testing purposes; the attributes were generated by considering pixel intensity values and mean values of reflectance. We demonstrate the benefits of using knowledge discovery and data-mining techniques, which can be applied to image data for accurate information extraction and classification from high spatial resolution remote sensing imagery.

Keywords: Remote sensing, object recognition, classification, data mining, waterbody identification, feature extraction.
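
A rough sketch of the neighborhood-based feature idea (window size and band handling are assumptions, not taken from the paper): each labeled pixel is described by statistics of its surrounding window and classified with an SVM.

```python
import numpy as np
from sklearn.svm import SVC

def window_features(img, row, col, half=2):
    """Mean and standard deviation of a (2*half+1)^2 neighborhood, per spectral band."""
    patch = img[max(row - half, 0):row + half + 1, max(col - half, 0):col + half + 1]
    return np.concatenate([patch.mean(axis=(0, 1)), patch.std(axis=(0, 1))])

def train_classifier(img, labeled_pixels):
    """img: H x W x bands array; labeled_pixels: list of (row, col, class_id) samples."""
    X = np.array([window_features(img, r, c) for r, c, _ in labeled_pixels])
    y = np.array([cls for _, _, cls in labeled_pixels])
    return SVC(kernel="rbf", C=10.0).fit(X, y)
```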

7184 Investigating Crime Hotspot Places and their Implication to Urban Environmental Design: A Geographic Visualization and Data Mining Approach

Authors: Donna R. Tabangin, Jacqueline C. Flores, Nelson F. Emperador

Abstract:

Information is power. Geographical information is an emerging science that is advancing the development of knowledge to further help in understanding the relationship of "place" with other disciplines such as crime. The researchers used crime data for the years 2004 to 2007 from the Baguio City Police Office to determine the incidence and actual locations of crime hotspots. A combined qualitative and quantitative research methodology was employed through extensive fieldwork and observation, geographic visualization with Geographic Information Systems (GIS) and Global Positioning Systems (GPS), and data mining. The paper discusses emerging geographic visualization and data mining tools and methodologies that can be used to generate baseline data for environmental initiatives such as urban renewal and rejuvenation. The study demonstrated that crime hotspots can be computed and were seen to occur in some select places in the Central Business District (CBD) of Baguio City. It was observed that some characteristics of the hotspot places, such as their physical design and milieu, may play an important role in creating opportunities for crime. A list of these environmental attributes was generated. This derived information may be used to guide the design or redesign of the urban environment of the City so as to reduce crime and at the same time improve it physically.

Keywords: Crime mapping, data mining, environmental design, geographic visualization, GIS.

7183 Closed Form Optimal Solution of a Tuned Liquid Column Damper Responding to Earthquake

Authors: A. Farshidianfar, P. Oliazadeh

Abstract:

In this paper, the vibration behavior of a structure equipped with a tuned liquid column damper (TLCD) under a harmonic type of earthquake loading is studied. Due to the inherent nonlinear liquid damping, there is no doubt that a great deal of computational effort is required to search for the optimum parameters of the TLCD numerically. Therefore, by linearizing the equation of motion of the single degree of freedom structure equipped with the TLCD, closed form solutions of the TLCD-structure system are derived. To check the reliability of the analytical method, the results have been compared with those of other researchers and show good agreement. Further, the effects of optimal design parameters such as the length ratio and mass ratio on the performance of the TLCD in controlling the responses of a structure are investigated using the harmonic type of earthquake excitation. Finally, the Citicorp Center, which has a very flexible structure, is used as an example to illustrate the design procedure for the TLCD under earthquake excitation.

Keywords: Closed form solution, Earthquake excitation, TLCD.
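
For reference, the coupled TLCD-structure equations commonly used in this literature are given below in one standard notation (the paper's own symbols and sign conventions may differ); the linearization step replaces the nonlinear head-loss damping with an equivalent viscous term, which is what makes closed form solutions possible.

```latex
% One common notation from the TLCD literature (an assumption, not the paper's own):
%   x_f : liquid displacement, u_s : structural displacement, x_g : ground motion,
%   rho : liquid density, A : tube cross-section, L : total liquid column length,
%   B : horizontal column length, delta : head-loss coefficient.
\begin{align}
  \rho A L\,\ddot{x}_f + \tfrac{1}{2}\rho A\,\delta\,|\dot{x}_f|\,\dot{x}_f
      + 2\rho A g\,x_f &= -\rho A B\,\ddot{u}_s,\\
  (M_s + \rho A L)\,\ddot{u}_s + C_s\,\dot{u}_s + K_s\,u_s
      + \rho A B\,\ddot{x}_f &= -(M_s + \rho A L)\,\ddot{x}_g .
\end{align}
% Linearization replaces (1/2) rho A delta |x_f'| x_f' by an equivalent viscous term
% c_eq x_f', with c_eq chosen so that both dissipate the same energy per cycle of
% harmonic motion.
```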

7182 Dynamic Analysis of Composite Doubly Curved Panels with Variable Thickness

Authors: I. Algul, G. Akgun, H. Kurtaran

Abstract:

Dynamic analysis of composite doubly curved panels with variable thickness subjected to different pulse types using the Generalized Differential Quadrature method (GDQ) is presented in this study. Panels with variable thickness are used in the aerospace and marine industries. Giving variable thickness to panels allows the designer to obtain optimum structural efficiency. For this reason, estimating the response of variable thickness panels is very important for designing more reliable structures under dynamic loads. Dynamic equations for composite panels with variable thickness are obtained using the virtual work principle. Partial derivatives in the equation of motion are expressed with GDQ and the Newmark average acceleration scheme is used for temporal discretization. Several examples are used to highlight the effectiveness of the proposed method. Results are compared with the finite element method. Effects of taper ratios, boundary conditions and loading type on the response of the composite panel are investigated.

Keywords: Generalized differential quadrature method, doubly curved panels, laminated composite materials, small displacement.

7181 Movies and Dynamic Mathematical Objects on Trigonometry for Mobile Phones

Authors: Kazuhisa Takagi

Abstract:

This paper is about movies and dynamic objects for mobile phones. Dynamic objects are software programmed in JavaScript; they consist of geometric figures and work on HTML5-compliant browsers. Mobile phones are very popular among teenagers, who like watching movies and playing games on them, so mathematics movies and dynamic objects could enhance teaching and learning processes. In the movies, manga characters speak with artificially synchronized voices and teach trigonometry together with dynamic mathematical objects. Many movies have been created, as Windows Media files or MP4 movies. These movies and dynamic objects are not only used in the classroom but also distributed to students, so that by watching the movies, students can study trigonometry before or after class.

Keywords: Dynamic mathematical object, JavaScript, Google drive, transfer jet.

7180 Plasmonic Absorption Enhancement in Au/CdS Nanocomposite

Authors: K. Easawi, M. Nabil, T. Abdallah, S. Negm, H. Talaat

Abstract:

Composite nanostructures of metal core/semiconductor shell (Au/CdS) configuration were prepared using an organometallic method. UV-Vis spectra of the Au/CdS colloids initially show two well separated bands, corresponding to the surface plasmon of the Au core and the exciton of the CdS shell. The absorption of the CdS shell is enhanced, while the Au plasmon band is suppressed, as the shell thickness increases. The shell sizes were estimated from the optical spectra using the effective mass approximation model (EMA) and compared to the sizes of the Au core and CdS shell measured by high resolution transmission electron microscopy (HRTEM). The changes in the absorption features are discussed in terms of the gradual increase in the coupling strength between the Au core surface plasmon and the exciton in the CdS shell, leading to charge transfer and modification of electron oscillation in the Au core.

Keywords: Nanocomposites, Plasmonics.

7179 An Improved Model for Prediction of the Effective Thermal Conductivity of Nanofluids

Authors: K. Abbaspoursani, M. Allahyari, M. Rahmani

Abstract:

Thermal conductivity is an important characteristic of a nanofluid in laminar flow heat transfer. This paper presents an improved model for the prediction of the effective thermal conductivity of nanofluids based on dimensionless groups. The model expresses the thermal conductivity of a nanofluid as a function of the thermal conductivities of the solid and liquid, their volume fractions and the particle size. The proposed model includes a parameter which accounts for the interfacial shell, Brownian motion, and particle aggregation. The model is validated against experimental results for TiO2-water and Al2O3-water nanofluids.

Keywords: Critical particle size, nanofluid, model, thermal conductivity.

7178 Learning and Evaluating Possibilistic Decision Trees using Information Affinity

Authors: Ilyes Jenhani, Salem Benferhat, Zied Elouedi

Abstract:

This paper investigates the issue of building decision trees from data with imprecise class values, where imprecision is encoded in the form of possibility distributions. The Information Affinity similarity measure is introduced into the well-known gain ratio criterion in order to assess the homogeneity of a set of possibility distributions representing the instances' classes belonging to a given training partition. For the experimental study, we propose an information-affinity-based performance criterion which we use to show the performance of the approach on well-known benchmarks.

Keywords: Data mining from uncertain data, Decision Trees, Possibility Theory.

7177 Regression Approach for Optimal Purchase of Hosts Cluster in Fixed Fund for Hadoop Big Data Platform

Authors: Haitao Yang, Jianming Lv, Fei Xu, Xintong Wang, Yilin Huang, Lanting Xia, Xuewu Zhu

Abstract:

Given a fixed fund, purchasing fewer hosts of higher capability or, inversely, more hosts of lower capability is a trade-off that must be made in practice when building a Hadoop big data platform. An exploratory study is presented for a Housing Big Data Platform project (HBDP), where typical big data computing involves SQL queries with aggregate, join, and space-time condition selections executed on massive data from more than 10 million housing units. In HBDP, an empirical formula was introduced to predict the performance potential of host clusters for the intended typical big data computing, and it was shaped via a regression approach. With this empirical formula, it is easy to suggest an optimal cluster configuration. The investigation was based on a typical Hadoop computing ecosystem, HDFS+Hive+Spark. A proper metric was proposed to measure the performance of Hadoop clusters in HBDP, which was tested and compared with its predicted counterpart on three kinds of typical SQL query tasks. Tests were conducted with respect to CPU benchmark, memory size, virtual host division, and the number of physical hosts in the cluster. The research has been applied to practical cluster procurement for housing big data computing.

Keywords: Hadoop platform planning, optimal cluster scheme at fixed-fund, performance empirical formula, typical SQL query tasks.

7176 Analysis of Palm Perspiration Effect with SVM for Diabetes in People

Authors: Hamdi Melih Saraoğlu, Muhlis Yıldırım, Abdurrahman Özbeyaz, Feyzullah Temurtas

Abstract:

In this research, the diabetes conditions of people (healthy, prediabetic and diabetic) are identified with noninvasive palm perspiration measurements. Data clusters gathered from 200 subjects were used (1. Individual Attributes Cluster and 2. Palm Perspiration Attributes Cluster). To decrease the dimensionality of these data clusters, the Principal Component Analysis method was used. The data clusters prepared in that way were classified with Support Vector Machines. The classifications with the highest success were 82% for glucose parameters and 84% for HbA1c parameters.

Keywords: Palm perspiration, Diabetes, Support Vector Machine, Classification.

7175 Design of Gravity Dam by Genetic Algorithms

Authors: Farzin Salmasi

Abstract:

The design of a gravity dam is performed through an interactive process involving a preliminary layout of the structure followed by a stability and stress analysis. This study presents a method to define the optimal top width of a gravity dam with a genetic algorithm. To solve the optimization task (minimizing the cost of the dam), an optimization routine based on genetic algorithms (GAs) was implemented in an Excel spreadsheet. It was found to perform well, and the GA parameters were optimized in a parametric study. Using the parameters found in the parametric study, the top width of the gravity dam was optimized and compared to a gradient-based optimization method (classic method); the results were in close agreement. In the optimum dam cross section, the ratio of dam base to dam height is approximately 0.85, and the ratio of dam top width to dam height is approximately 0.13. The computerized methodology may help in computing the optimal top width for a wide range of gravity dam heights.

Keywords: Chromosomes, dam, genetic algorithm, global optimum, preliminary layout, stress analysis, theoretical profile.
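
Whatever the spreadsheet implementation, the GA loop itself is compact; a generic real-coded minimization sketch follows (illustrative placeholder cost function, not the dam cost model):

```python
import numpy as np

def genetic_minimize(cost, lower, upper, pop=30, gens=200, mut_rate=0.1, seed=0):
    """Plain real-coded GA: tournament selection, arithmetic crossover, Gaussian mutation."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    X = rng.uniform(lower, upper, size=(pop, lower.size))
    for _ in range(gens):
        f = np.array([cost(x) for x in X])
        new = [X[f.argmin()].copy()]                       # elitism: keep the best
        while len(new) < pop:
            # Tournament selection of two parents.
            i, j = rng.integers(pop, size=2), rng.integers(pop, size=2)
            p1 = X[i[0]] if f[i[0]] < f[i[1]] else X[i[1]]
            p2 = X[j[0]] if f[j[0]] < f[j[1]] else X[j[1]]
            w = rng.random()
            child = w * p1 + (1 - w) * p2                  # arithmetic crossover
            if rng.random() < mut_rate:                    # Gaussian mutation
                child += rng.normal(0, 0.05 * (upper - lower))
            new.append(np.clip(child, lower, upper))
        X = np.array(new)
    f = np.array([cost(x) for x in X])
    return X[f.argmin()], f.min()

# Example: one design variable (top width) minimising a placeholder cost curve.
best, _ = genetic_minimize(lambda x: (x[0] - 4.2) ** 2 + 10, lower=[1.0], upper=[10.0])
```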

7174 Elemental Graph Data Model: A Semantic and Topological Representation of Building Elements

Authors: Yasmeen A. S. Essawy, Khaled Nassar

Abstract:

With the rapid increase of complexity in the building industry, professionals in the A/E/C industry have been forced to adopt Building Information Modeling (BIM) in order to enhance communication between the different project stakeholders throughout the project life cycle and create a semantic object-oriented building model that can support geometric-topological analysis of building elements during design and construction. This paper presents a model that extracts topological relationships and geometrical properties of building elements from an existing fully designed BIM, and maps this information into a directed acyclic Elemental Graph Data Model (EGDM). The model incorporates BIM-based search algorithms for automatic deduction of geometrical data and topological relationships for each building element type. Using graph search algorithms, such as Depth First Search (DFS) and topological sorting, all possible construction sequences can be generated and compared against production and construction rules to generate an optimized construction sequence and its associated schedule. The model is implemented on a C# platform.

Keywords: Building information modeling, elemental graph data model, geometric and topological data models, and graph theory.
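
The sequencing step relies on standard graph algorithms; a small topological-sorting sketch over a hypothetical element-dependency graph illustrates the idea (the actual model is implemented in C# against BIM data):

```python
from collections import defaultdict, deque

def topological_order(edges):
    """Kahn's algorithm; edges are (predecessor, successor) construction dependencies."""
    succ, indeg = defaultdict(list), defaultdict(int)
    nodes = set()
    for a, b in edges:
        succ[a].append(b)
        indeg[b] += 1
        nodes.update((a, b))
    queue = deque(n for n in nodes if indeg[n] == 0)
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for m in succ[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                queue.append(m)
    if len(order) != len(nodes):
        raise ValueError("cyclic dependencies: no valid construction sequence")
    return order

# Hypothetical dependencies: foundation before columns, columns before slab, etc.
print(topological_order([("foundation", "column"), ("column", "slab"), ("slab", "wall")]))
```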

7173 Extraction of Polystyrene from Styrofoam Waste: Synthesis of Novel Chelating Resin for the Enrichment and Speciation of Cr(III)/Cr(VI) Ions in Industrial Effluents

Authors: Ali N. Siyal, Saima Q. Memon, Latif Elçi, Aydan Elçi

Abstract:

Polystyrene (PS) was extracted from Styrofoam (expanded polystyrene foam) waste, the so-called white pollutant. The PS was functionalized with the N,N-Bis(2-aminobenzylidene)benzene-1,2-diamine (ABA) ligand through an azo spacer. The resin was characterized by FT-IR spectroscopy and elemental analysis. The PS-N=N-ABA resin was used for the enrichment and speciation of Cr(III)/Cr(VI) ions and total Cr determination in aqueous samples by flame atomic absorption spectrometry (FAAS). The separation of Cr(III)/Cr(VI) ions was achieved at pH 2. A recovery of Cr(VI) ions of ≥ 95.0% was achieved at the optimum parameters: pH 2; resin amount 300 mg; flow rates of 2.0 mL min-1 for the solution and 2.0 mL min-1 for the eluent (2.0 mol L-1 HNO3). Total Cr was determined by oxidation of Cr(III) to Cr(VI) ions using H2O2. The limit of detection (LOD) and limit of quantification (LOQ) for Cr(VI) were found to be 0.40 and 1.20 μg L-1, respectively, with a preconcentration factor of 250. The total saturation and breakthrough capacities of the resin for Cr(VI) ions were found to be 0.181 and 0.531 mmol g-1, respectively. The proposed method was successfully applied for the preconcentration/speciation of Cr(III)/Cr(VI) ions and the determination of total Cr in industrial effluents.

Keywords: Styrofoam waste, Polymeric resin, Preconcentration, Speciation, Cr(III)/Cr(VI) ions, FAAS.

7172 A Materialized View Approach to Support Aggregation Operations over Long Periods in Sensor Networks

Authors: Minsoo Lee, Julee Choi, Sookyung Song

Abstract:

The increasing interest in processing data created by sensor networks has evolved into approaches to implement sensor networks as databases. The aggregation operator, which calculates a value from a large group of data, such as computing averages or sums, is an essential function that needs to be provided when implementing such sensor network databases. This work proposes adding the DURING clause to TinySQL to calculate values during a specific long period, and suggests a way to implement the aggregation service in sensor networks by applying materialized view and incremental view maintenance techniques used in data warehouses. In sensor networks, data values are passed from child nodes to parent nodes, and an aggregation value is computed at the root node. As such root nodes need to be memory efficient and low powered, it becomes a problem to recompute aggregate values from all past and current data. Applying incremental view maintenance techniques can therefore reduce memory consumption and support fast computation of aggregate values.

Keywords: Aggregation, Incremental View Maintenance, Materialized view, Sensor Network.
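
The incremental-maintenance idea can be shown in a few lines: instead of re-scanning all past readings, the root node keeps only a running sum and count and updates them as new values arrive (a sketch of the principle, not of the TinySQL implementation):

```python
class IncrementalAverage:
    """Materialized AVG view maintained incrementally at the root node."""

    def __init__(self):
        self.total = 0.0
        self.count = 0

    def insert(self, value):
        # A new sensor reading arrives from a child node.
        self.total += value
        self.count += 1

    def value(self):
        return self.total / self.count if self.count else None

view = IncrementalAverage()
for reading in [21.5, 22.0, 21.8]:
    view.insert(reading)
print(view.value())   # average over the DURING window without re-scanning old data
```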

7171 Real Time Data Communication with FlightGear Using Simulink over a UDP Protocol

Authors: Adil Loya, Ali Haider, Arslan A. Ghaffor, Abubaker Siddique

Abstract:

Simulation and modelling of Unmanned Aerial Vehicles (UAVs) has gained wide popularity within the aerospace community. The demand for designing and modelling optimized control systems for UAVs has increased tenfold over the last decade, as next generation warfare is dependent on unmanned technologies. Therefore, this research focuses on the simulation of nonlinear UAV dynamics in Simulink and its integration with FlightGear. There has been much research on implementing optimized control using Simulink; however, there are fewer known techniques for simulating these dynamics over FlightGear, and a tedious data acquisition technique is tackled in this research. Sending data to FlightGear is easy, but receiving it back in Simulink is not as straightforward, i.e. we can only receive control data on the output. However, in this research we have managed to get the data out of FlightGear by implementing a Level-2 S-function block within Simulink. Moreover, the results captured from FlightGear over a User Datagram Protocol (UDP) communication are then compared with the attitude signals that were sent previously. This provides useful information regarding the difference between the outputs obtained from Simulink and FlightGear. It was found that the values received in Simulink were in high agreement with the FlightGear output, and the complete study was conducted in a discrete manner.

Keywords: Aerospace, flight control, FlightGear, communication, Simulink.
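
The UDP exchange itself is ordinary datagram I/O; the sketch below shows one send/receive cycle (the port numbers and packing format are assumptions and must match whatever generic-protocol or S-function layout is actually configured):

```python
import socket
import struct

SEND_ADDR = ("127.0.0.1", 5502)    # assumed port for data sent to the simulator
RECV_PORT = 5503                   # assumed port on which attitude data comes back

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("", RECV_PORT))
rx.settimeout(1.0)

# Send roll, pitch, yaw commands as three doubles (packing format is an assumption).
tx.sendto(struct.pack("!ddd", 0.05, -0.02, 0.0), SEND_ADDR)

try:
    data, _ = rx.recvfrom(1024)
    roll, pitch, yaw = struct.unpack("!ddd", data[:24])
    print(f"received attitude: roll={roll:.3f} pitch={pitch:.3f} yaw={yaw:.3f}")
except socket.timeout:
    print("no attitude packet received")
```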

7170 A Comparison of Real Valued Transforms for Image Compression

Authors: Shivali D. Kulkarni, Ameya K. Naik, Nitin S. Nagori

Abstract:

In this paper we present simulation results for the application of a bandwidth efficient algorithm (mapping algorithm) to an image transmission system. This system considers three different real valued transforms to generate energy compact coefficients. Results are first presented for gray scale and color image transmission in the absence of noise. It is seen that the system performs best when the discrete cosine transform is used. Also, the performance of the system is dominated more by the size of the transform block than by the number of coefficients transmitted or the number of bits used to represent each coefficient. Similar results are obtained in the presence of additive white Gaussian noise: varying values of the bit error rate have very little or no impact on the performance of the algorithm. Optimum results are obtained for the system using an 8x8 transform block and transmitting 15 coefficients from each block using 8 bits.

Keywords: Additive white Gaussian noise channel, mapping algorithm, peak signal to noise ratio, transform encoding.
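
A rough sketch of the best-performing configuration described above (8x8 DCT blocks with 15 retained coefficients per block); which coefficients are retained and how they are quantized are assumptions here, and the mapping algorithm itself is not reproduced:

```python
import numpy as np
from scipy.fft import dctn, idctn

def compress_block(block, keep=15):
    """2-D DCT of an 8x8 block; zero all but the `keep` largest-magnitude coefficients."""
    coeffs = dctn(block, norm="ortho")
    cutoff = np.sort(np.abs(coeffs).ravel())[-keep]
    coeffs[np.abs(coeffs) < cutoff] = 0.0
    return coeffs

def decompress_block(coeffs):
    return idctn(coeffs, norm="ortho")

block = np.random.rand(8, 8) * 255
rec = decompress_block(compress_block(block))
mse = np.mean((block - rec) ** 2)
psnr = 10 * np.log10(255.0 ** 2 / mse)
print(f"PSNR of reconstructed block: {psnr:.1f} dB")
```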

7169 Comparing Data Analysis, Communication and Information Technologies Expertise Levels in Undergraduate Psychology Students

Authors: Ana Cázares

Abstract:

This study has two aims: first, to compare the expertise levels in data analysis, communication and information technologies of undergraduate psychology students; second, to verify the factor structure of the E-ETICA (Escala de Experticia en Tecnologias de la Informacion, la Comunicacion y el Análisis, or Data Analysis, Communication and Information Expertise Scale), which had shown excellent internal consistency (α = 0.92) as well as a simple factor structure. Three factors, Complex and Basic Information and Communications Technologies, and E-Searching and Download Abilities, explain 63% of the variance. In the present study, 260 students (119 juniors and 141 seniors) were asked to respond to the ETICA (a 16-item, five-point Likert scale from 1: no mastery to 5: total mastery). The results show that junior and senior students report very similar expertise levels; however, the E-ETICA presents a different factor structure for juniors, where four factors also explained 63% of the variance: Information E-Searching, Download and Processing; Data Analysis; Organization; and Communication Technologies.

Keywords: Data analysis, Information and Communications Technologies, Expertise Levels.

7168 Omni: Data Science Platform to Evaluate the Performance of a LoRaWAN Network

Authors: Emanuele A. Solagna, Ricardo S. Tozetto, Roberto dos S. Rabello

Abstract:

Nowadays, physical processes are becoming digitized through the evolution of communication, sensing and storage technologies, which promotes the development of smart cities. The evolution of this technology has generated multiple challenges related to the generation of big data and the active participation of electronic devices in society. Devices can send information that is captured and processed over large areas, but there is no guarantee that all of the obtained data will be effectively stored and correctly persisted, because, depending on the technology used, there are parameters that have a huge influence on the complete delivery of information. This article characterizes the project, currently under development, of a platform that, based on data science, will evaluate the performance and effectiveness of an industrial network implementing LoRaWAN technology, considering its main configuration parameters and relating them to information loss.

Keywords: Internet of Things, LoRa, LoRaWAN, smart cities.

7167 Making Food Science Education and Research Activities More Attractive for University Students and Food Enterprises by Utilizing Open Innovative Space Approach

Authors: A-M. Saarela

Abstract:

At the Savonia University of Applied Sciences (UAS), the curriculum and studies have been improved by applying an Open Innovation Space (OIS) approach based on multidisciplinary action learning. The key elements of the OIS ideology are work-life orientation and student-centric communal learning. In this approach, every participant can learn from the others and innovations are created. In this social innovation educational approach, all practices are carried out in close collaboration with enterprises in real-life settings, not in classrooms. As an example, this paper shows how Savonia UAS's Future Food RDI hub (FF) implements OIS practices by providing food product development and consumer research services for enterprises in close collaboration with academicians, students and consumers. In particular, one example of OIS experimentation in the field is provided by a consumer study carried out using a verbal analysis protocol combined with audiovisual observation (VAP-WAVO). In this case, all co-learners acted together in supermarket settings to collect the relevant data for the product development and marketing departments of a company. The company benefited from the results obtained, students were more satisfied with their studies, and educators and academicians were able to obtain good evidence for further collaboration as well as for renewing curriculum contents based on the requirements of working life. In addition, society will benefit over time as young university adults find careers more easily through their OIS-related food science studies. This knowledge interaction model also renews education practices and brings working life closer to educational research institutes.

Keywords: Collaboration, education, food science, industry, knowledge transfer, RDI, student.

7166 Integration of Image and Patient Data, Software and International Coding Systems for Use in a Mammography Research Project

Authors: V. Balanica, W. I. D. Rae, M. Caramihai, S. Acho, C. P. Herbst

Abstract:

Mammographic image and data analysis to facilitate modelling or computer aided diagnostic (CAD) software development is best done using a common database that can handle various mammographic image file formats and relate these to other patient information. This would optimize the use of the data, as both primary reporting and enhanced information extraction for research could be performed from a single dataset. One desired improvement is the integration of DICOM file header information into the database, as an efficient and reliable source of supplementary patient information intrinsically available in the images. The purpose of this paper was to design a suitable database to link and integrate different types of image files and gather common information that can be further used for research purposes. An interface was developed for accessing, adding, updating, modifying and extracting data from the common database, enhancing the possible future application of the data in CAD processing. Future developments envisaged include the creation of an advanced search function to select image files based on descriptor combinations. The results can be further used for specific CAD processing and other research. A user-friendly configuration utility for importing the required fields from the DICOM files must still be designed.

Keywords: Database Integration, Mammogram Classification, Tumour Classification, Computer Aided Diagnosis.

7165 Efficient Pre-Processing of Single-Cell Assay for Transposase Accessible Chromatin with High-Throughput Sequencing Data

Authors: Fan Gao, Lior Pachter

Abstract:

The primary tool currently used to pre-process 10X Chromium single-cell ATAC-seq data is Cell Ranger, which can take a very long time to run on standard datasets. To facilitate rapid pre-processing that enables reproducible workflows, we present a suite of tools called scATAK for pre-processing single-cell ATAC-seq data that is 15 to 18 times faster than Cell Ranger on mouse and human samples. Our tool can also calculate chromatin interaction potential matrices and generate open chromatin signal and interaction traces for cell groups. We use the scATAK tool to explore the chromatin regulatory landscape of a healthy adult human brain and unveil cell-type specific features, and we show that it provides a convenient and computationally efficient approach for pre-processing single-cell ATAC-seq data.

Keywords: single-cell, ATAC-seq, bioinformatics, open chromatin landscape, chromatin interactome

7164 Optimum Radio Capacity Estimation of a Single-Cell Spread Spectrum MIMO System under Rayleigh Fading Conditions

Authors: P. Varzakas

Abstract:

In this paper, the problem of estimating the optimal radio capacity of a single-cell spread spectrum (SS) multiple-input multiple-output (MIMO) system operating in a Rayleigh fading environment is examined. The optimisation between the radio capacity and the theoretically achievable average channel capacity (in the sense of information theory) per user of a MIMO single-cell SS system operating in a Rayleigh fading environment is presented. The spectral efficiency is then estimated in terms of the achievable average channel capacity per user during operation over a broadcast time-varying link, and this leads to a simple novel closed-form expression for the optimal radio capacity value based on maximization of the achieved spectral efficiency. Numerical results are presented to illustrate the proposed analysis.

Keywords: Channel capacity, MIMO systems, Radio capacity, Rayleigh fading, Spectral efficiency.
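
For context, the textbook expression for the ergodic MIMO channel capacity under Rayleigh fading with equal power allocation is reproduced below; the paper's closed-form radio-capacity result builds on this quantity but is not reproduced here.

```latex
% Standard ergodic (average) MIMO channel capacity under Rayleigh fading with
% equal power allocation (a textbook expression, not the paper's own derivation):
\begin{equation}
  \bar{C} \;=\; \mathbb{E}_{\mathbf{H}}\!\left[
      \log_2 \det\!\left( \mathbf{I}_{n_r}
        + \frac{\rho}{n_t}\,\mathbf{H}\mathbf{H}^{H} \right) \right]
  \quad \text{bits/s/Hz},
\end{equation}
% where n_t and n_r are the numbers of transmit and receive antennas, rho is the
% average SNR per receive antenna, and H has i.i.d. complex Gaussian entries.
```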

7163 Urban Big Data: An Experimental Approach to Building-Value Estimation Using Web-Based Data

Authors: Sun-Young Jang, Sung-Ah Kim, Dongyoun Shin

Abstract:

Current real-estate value estimation, difficult for laymen, is usually performed by specialists. This paper presents an automated estimation process based on big data and machine-learning technology that calculates the influence of building conditions on real-estate price measurement. The present study analyzed actual building sales sample data for Nonhyeon-dong, Gangnam-gu, Seoul, Korea, measuring the major influencing factors among the various building conditions. Following that analysis, a prediction model was established and applied using RapidMiner Studio, a graphical user interface (GUI)-based tool for deriving machine-learning prototypes. The prediction model is formulated by reference to previous examples; when new examples are applied, it analyses and predicts accordingly. The analysis process discerns the crucial factors affecting price increases by calculating weighted values. The model was verified, and its accuracy determined, by comparing its predicted values with actual price increases.

Keywords: Big data, building-value analysis, machine learning, price prediction.
