Search results for: Data Processing Electronics
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8600


7790 Feature Analysis of Predictive Maintenance Models

Authors: Zhaoan Wang

Abstract:

Research in predictive maintenance modeling has improved in recent years, predicting failures and needed maintenance with high accuracy, saving cost, and improving manufacturing efficiency. However, classic prediction models provide little insight into which features contribute most to a failure. By analyzing and quantifying feature importance in predictive maintenance models, cost savings can be optimized based on business goals. First, multiple classifiers are evaluated with cross-validation to predict multiple classes of failures. Second, predictive performance with the features provided by different feature selection algorithms is further analyzed. Third, the features selected by the different algorithms are ranked and combined based on their predictive power. Finally, the linear explainer SHAP (SHapley Additive exPlanations) is applied to interpret classifier behavior and provide further insight into the specific roles of features in both local predictions and global model behavior. The results of the experiments suggest that certain features play dominant roles in predictive models while others have significantly less impact on overall performance. Moreover, for multi-class prediction of machine failures, the most important features vary with the type of machine failure. The results may lead to improved productivity and cost savings by prioritizing sensor deployment, data collection, and data processing of the more important features over the less important ones.

Keywords: Automated supply chain, intelligent manufacturing, predictive maintenance, machine learning, feature engineering, model interpretation.
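
A minimal Python sketch of the workflow described in the abstract above, using scikit-learn: several classifiers are scored with cross-validation, then features are ranked by importance. Permutation importance stands in here for the paper's SHAP linear explainer, and the generated dataset is a hypothetical stand-in for real sensor data.

```python
# Sketch of the feature-analysis workflow; data and feature count are hypothetical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=1000, n_features=10, n_informative=4,
                           n_classes=3, random_state=0)  # stand-in sensor data

# Step 1: evaluate multiple classifiers with cross-validation.
for clf in (LogisticRegression(max_iter=1000), RandomForestClassifier()):
    scores = cross_val_score(clf, X, y, cv=5)
    print(type(clf).__name__, scores.mean())

# Stand-in for the SHAP step: rank features by permutation importance.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier().fit(X_tr, y_tr)
imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
ranking = np.argsort(imp.importances_mean)[::-1]
print("features ranked by importance:", ranking)
```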

7789 Distributed Data-Mining by Probability-Based Patterns

Authors: M. Kargar, F. Gharbalchi

Abstract:

In this paper, a new method is suggested for distributed data-mining using probability-based patterns. These patterns use decision trees and decision graphs, and care is taken that they are valid, novel, useful, and understandable. Considering a set of functions, the system converges to a good pattern or better objectives. Using the suggested method, we are able to extract useful information from massive and multi-relational databases.

Keywords: Data-mining, Decision tree, Decision graph, Pattern, Relationship.

7788 K-Means for Spherical Clusters with Large Variance in Sizes

Authors: A. M. Fahim, G. Saake, A. M. Salem, F. A. Torkey, M. A. Ramadan

Abstract:

Data clustering is an important data exploration technique with many applications in data mining. The k-means algorithm is well known for its efficiency in clustering large data sets. However, this algorithm is suitable for spherical clusters of similar sizes and densities. The quality of the resulting clusters decreases when the data set contains spherical clusters with large variance in sizes. In this paper, we introduce a competent procedure to overcome this problem. The proposed method is based on shifting the center of the large cluster toward the small cluster and recomputing the membership of the small cluster's points. The experimental results reveal that the proposed algorithm produces satisfactory results.

Keywords: K-Means, Data Clustering, Cluster Analysis.
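
A minimal sketch of the center-shifting idea under stated assumptions: two clusters found by standard k-means, with the larger cluster's center nudged toward the smaller one before memberships are recomputed. The shift factor of 0.25 is a hypothetical choice, not the paper's.

```python
# Sketch: shift the large cluster's center toward the small one, then reassign.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 2.0, (500, 2)),     # large cluster
               rng.normal(6, 0.5, (50, 2))])     # small cluster

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels, centers = km.labels_, km.cluster_centers_.copy()

big = np.argmax(np.bincount(labels))   # index of the larger cluster
small = 1 - big

# Shift the large cluster's center toward the small one...
centers[big] += 0.25 * (centers[small] - centers[big])

# ...then recompute memberships with the shifted centers.
dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
new_labels = dists.argmin(axis=1)
print("reassigned points:", np.sum(new_labels != labels))
```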

7787 Classifier Combination Approach in Motion Imagery Signals Processing for Brain Computer Interface

Authors: Homayoon Zarshenas, Mahdi Bamdad, Hadi Grailu, Akbar A. Shakoori

Abstract:

In this study, we focus on improving the performance of a cue-based Motor Imagery Brain Computer Interface (BCI). For this purpose, a data fusion approach is applied to the results of different classifiers to make the best decision. In the first step, the Distinction Sensitive Learning Vector Quantization method is used as a feature selection method to determine the most informative frequencies in the recorded signals, and its performance is evaluated by the frequency search method. Informative features are then extracted by the wavelet packet transform. In the next step, 5 different types of classification methods are applied. The methodologies are tested on BCI Competition II dataset III; the best obtained accuracy is 85% and the best kappa value is 0.8. In the final step, the ordered weighted averaging (OWA) method is used to provide a proper aggregation of the classifiers' outputs. Using OWA enhances the system accuracy to 95% and the kappa value to 0.9. Applying OWA requires only 50 milliseconds of computation.

Keywords: BCI, EEG, Classifier, Fuzzy operator, OWA.
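
A minimal sketch of an ordered weighted averaging (OWA) operator for fusing classifier outputs: scores are sorted in descending order and combined with ordered weights. The weights and scores below are hypothetical, not the paper's.

```python
# Sketch of an OWA operator over five classifiers' confidence scores.
import numpy as np

def owa(scores, weights):
    """Sort scores in descending order, then take the weighted sum."""
    ordered = np.sort(scores)[::-1]
    return float(np.dot(ordered, weights))

scores = np.array([0.9, 0.7, 0.8, 0.6, 0.85])   # hypothetical classifier scores
weights = np.array([0.4, 0.3, 0.15, 0.1, 0.05])  # ordered weights, sum to 1

print(owa(scores, weights))  # fused decision score
```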

7786 Representing Data without Lost Compression Properties in Time Series: A Review

Authors: Nabilah Filzah Mohd Radzuan, Zalinda Othman, Azuraliza Abu Bakar, Abdul Razak Hamdan

Abstract:

Uncertain data is believed to be an important issue in building a prediction model. The main objective of time series uncertainty analysis is to formulate uncertain data in order to gain knowledge and fit a low-dimensional model prior to a prediction task. This paper discusses the performance of a number of techniques for dealing with uncertain data, specifically those which handle the uncertain-data condition by minimizing the loss of compression properties.

Keywords: Compression properties, uncertainty, uncertain time series, mining technique, weather prediction.

7785 Are XBRL-based Financial Reports Better than Non-XBRL Reports? A Quality Assessment

Authors: Zhenkun Wang, Simon S. Gao

Abstract:

Using a scoring system, this paper provides a comparative assessment of the quality of data in XBRL-formatted financial reports versus non-XBRL financial reports. It shows a major improvement in the data quality of XBRL-formatted financial reports. Although XBRL-formatted financial reports did not show much quality advantage at the beginning, they have lately displayed a large improvement in data quality in almost all aspects. With improved XBRL web data-managing, presentation, and analysis applications, XBRL-formatted financial reports offer much better accessibility, accuracy, and timeliness.

Keywords: Data quality, financial report, information, XBRL.

7784 Modeling of Random Variable with Digital Probability Hyper Digraph: Data-Oriented Approach

Authors: A. Habibizad Navin, M. Naghian Fesharaki, M. Mirnia, M. Kargar

Abstract:

In this paper, we introduce the Digital Probability Hyper Digraph for modeling a random variable as a hierarchical data-oriented model.

Keywords: Data-oriented models, data structure, Digital Probability Hyper Digraph, random variable, statistics and probability.

7783 EEG-Based Fractal Analysis of Different Motor Imagery Tasks using Critical Exponent Method

Authors: Montri Phothisonothai, Masahiro Nakagawa

Abstract:

The objective of this paper is to characterize the spontaneous Electroencephalogram (EEG) signals of four different motor imagery tasks and thereby to show a possible solution for the present binary communication between the brain and a machine, or Brain-Computer Interface (BCI). The processing technique used in this paper was fractal analysis evaluated by the Critical Exponent Method (CEM). The EEG signal was registered in 5 healthy subjects, sampling 15 measuring channels at 1024 Hz. Each channel was preprocessed by Laplacian spatial filtering so as to reduce the spatial blur and thereby increase the spatial resolution. The EEG of each channel was segmented and its fractal dimension (FD) calculated. The FD was evaluated in the time interval corresponding to the motor imagery and averaged over all the subjects (for each channel). In order to characterize the FD distribution, linear regression curves of FD over the electrode positions were applied. The FD differences between the proposed mental tasks are quantified and evaluated for each experimental subject. The obtained results of the proposed method show a substantial fractal dimension in the EEG signal of motor imagery tasks, which can be utilized considerably in multiple-state BCI applications.

Keywords: electroencephalogram (EEG), motor imagery tasks, mental tasks, biomedical signals processing, human-machine interface, fractal analysis, critical exponent method (CEM).
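
The Critical Exponent Method itself is not reproduced here; as a stand-in, this sketch estimates the fractal dimension of a single EEG segment with Higuchi's method, a common alternative for the same task. The input segment is randomly generated.

```python
# Sketch: fractal dimension of a 1-D signal via Higuchi's method (stand-in for CEM).
import numpy as np

def higuchi_fd(x, kmax=10):
    """Estimate the fractal dimension of a 1-D signal (Higuchi, 1988)."""
    n = len(x)
    lengths = []
    for k in range(1, kmax + 1):
        lk = []
        for m in range(k):
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            # Mean absolute increment, normalized for the subsampled length.
            lm = np.abs(np.diff(x[idx])).sum() * (n - 1) / (len(idx) - 1) / k
            lk.append(lm / k)
        lengths.append(np.mean(lk))
    # FD is the slope of log(length) versus log(1/k).
    coeffs = np.polyfit(np.log(1.0 / np.arange(1, kmax + 1)), np.log(lengths), 1)
    return coeffs[0]

segment = np.random.randn(1024)  # hypothetical 1-second EEG segment at 1024 Hz
print(higuchi_fd(segment))       # roughly 2.0 for white noise
```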

7782 Wireless Transmission of Big Data Using Novel Secure Algorithm

Authors: K. Thiagarajan, K. Saranya, A. Veeraiah, B. Sudha

Abstract:

This paper presents a novel algorithm for the secure, reliable, and flexible transmission of big data in two-hop wireless networks using a cooperative jamming scheme. Two-hop wireless networks consist of source, relay, and destination nodes. Big data has to be transmitted from source to relay and from relay to destination with security deployed at the physical layer. The cooperative jamming scheme makes the transmission of big data more secure by protecting it from eavesdroppers and malicious nodes of unknown location. The novel algorithm, which ensures secure and energy-balanced transmission of big data, includes selecting the data transmitting region, segmenting the selected region, determining the probability ratio for each node (capture node, non-capture node, and eavesdropper node) in every segment, and evaluating the probability using binary-based evaluation. If the transmission is secure, it resumes with the two-hop transmission of big data; otherwise, the attackers are prevented by the cooperative jamming scheme and the data is transmitted in two-hop transmission.

Keywords: Big data, cooperative jamming, energy balance, physical layer, two-hop transmission, wireless security.

7781 A Bionic Approach to Dynamic, Multimodal Scene Perception and Interpretation in Buildings

Authors: Rosemarie Velik, Dietmar Bruckner

Abstract:

Today, building automation is advancing from simple monitoring and control tasks of lighting and heating towards more and more complex applications that require a dynamic perception and interpretation of different scenes occurring in a building. Current approaches cannot handle these newly emerging demands. In this article, a bionically inspired approach for multimodal, dynamic scene perception and interpretation is presented, which is based on neuroscientific and neuropsychological research findings about the perceptual system of the human brain. This approach relies on data from diverse sensory modalities being processed in a so-called neuro-symbolic network. With its parallel structure and with its basic elements being information processing and storing units at the same time, it provides a very efficient method for scene perception, overcoming the problems and bottlenecks of classical dynamic scene interpretation systems.

Keywords: building automation, biomimetics, dynamic scene interpretation, human-like perception, neuro-symbolic networks.

7780 Hand Gesture Detection via EmguCV Canny Pruning

Authors: N. N. Mosola, S. J. Molete, L. S. Masoebe, M. Letsae

Abstract:

Hand gesture recognition is a technique used to locate, detect, and recognize a hand gesture. Detection and recognition are concepts of Artificial Intelligence (AI). AI concepts are applicable to Human Computer Interaction (HCI), Expert Systems (ES), etc. Hand gesture recognition can be used in sign language interpretation. Sign language is a visual communication tool, used mostly by deaf communities and those with speech disorders. Communication barriers exist when people with speech disorders interact with others. This research aims to build a hand recognition system for interpretation between Lesotho's Sesotho and English. The system will help to bridge the communication problems encountered by the mentioned communities. The system has various processing modules: a hand detection engine, an image processing engine, feature extraction, and sign recognition. Detection is the process of identifying an object. The proposed system uses Haar cascade detection with Canny pruning, which applies Canny edge detection, an optimal image processing algorithm, to detect the edges of an object. The system also employs a skin detection algorithm, which performs background subtraction and computes the convex hull and the centroid to assist the detection process. Recognition is the process of gesture classification; template matching classifies each hand gesture in real time. The system was tested in various experiments. The results obtained show that time, distance, and light are factors that affect the rate of detection and ultimately recognition. Detection rate is directly proportional to the distance of the hand from the camera. Different lighting conditions were considered: the higher the light intensity, the faster the detection rate. Based on the results obtained from this research, the applied methodologies are efficient and provide a plausible solution towards a lightweight, inexpensive system which can be used for sign language interpretation.

Keywords: Canny pruning, hand recognition, machine learning, skin tracking.
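
A minimal sketch of the detection stage in Python with OpenCV (the original work uses EmguCV, a C# wrapper around the same OpenCV calls). The cascade file, input frame, and skin thresholds are hypothetical.

```python
# Sketch: Haar cascade detection with Canny pruning plus a simple skin mask.
import cv2

cascade = cv2.CascadeClassifier("hand.xml")  # hypothetical trained hand cascade
frame = cv2.imread("frame.jpg")              # stand-in for a captured webcam frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Canny pruning skips regions with too few edges, speeding up detection.
hands = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                 flags=cv2.CASCADE_DO_CANNY_PRUNING)

# Simple HSV skin mask to assist detection, as described above.
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
skin = cv2.inRange(hsv, (0, 30, 60), (20, 150, 255))  # hypothetical thresholds

for (x, y, w, h) in hands:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```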

7779 Study of Efficiency and Capability LZW++ Technique in Data Compression

Authors: Yusof. Mohd Kamir, Mat Deris. Mohd Sufian, Abidin. Ahmad Faisal Amri

Abstract:

The purpose of this paper is to show the efficiency and capability of the LZW++ technique in data compression. The LZW++ technique is an enhancement of the existing LZW technique, and the existing LZW must be modified to produce it. LZW reads one character at a time, whereas the LZW++ technique reads three characters at a time. This paper focuses on data compression and tests the efficiency and capability of LZW++ on different data formats such as doc, pdf, and text files. Several experiments have been done with the different types of data format. The results show that the LZW++ technique is better than the existing LZW technique in terms of file size.

Keywords: Data Compression, Huffman Encoding, LZW, LZW++, RLL, Size.
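
For reference, a minimal sketch of classic LZW compression; the LZW++ variant described above would consume three characters per step rather than one. The phrase table and code stream shown are the standard textbook formulation, not the paper's implementation.

```python
# Sketch of classic LZW: grow a phrase dictionary and emit its codes.
def lzw_compress(data: str) -> list[int]:
    """Return the list of dictionary codes for the input string."""
    table = {chr(i): i for i in range(256)}  # initial single-character table
    phrase, codes = "", []
    for ch in data:
        if phrase + ch in table:
            phrase += ch                     # extend the current phrase
        else:
            codes.append(table[phrase])
            table[phrase + ch] = len(table)  # register the new phrase
            phrase = ch
    if phrase:
        codes.append(table[phrase])
    return codes

print(lzw_compress("TOBEORNOTTOBEORTOBEORNOT"))
```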

7778 Impact of Stack Caches: Locality Awareness and Cost Effectiveness

Authors: Abdulrahman K. Alshegaifi, Chun-Hsi Huang

Abstract:

Treating data based on its location in memory has received much attention in recent years due to its different properties, which offer important aspects for cache utilization. Stack data and non-stack data may interfere with each other's locality in the data cache. One important aspect of stack data is that it has high spatial and temporal locality. In this work, we simulate a non-unified cache design that splits the data cache into stack and non-stack caches in order to keep stack data and non-stack data separate in different caches. We observe that the overall hit rate of the non-unified cache design is sensitive to the size of the non-stack cache. We then investigate the appropriate size and associativity for the stack cache to achieve a high hit ratio, especially since over 99% of accesses are directed to the stack cache. The result shows that, on average, a stack cache hit rate of more than 99% is achieved with 2KB of capacity and 1-way associativity. Further, we analyze the improvement in hit rate when adding a small, fixed-size stack cache at level 1 to a unified cache architecture. The result shows that the overall hit rate of the unified cache design with a 1KB stack cache added improves on average by approximately 3.9% for the Rijndael benchmark. The stack cache is simulated using the SimpleScalar toolset.

Keywords: Hit rate, Locality of program, Stack cache, and Stack data.
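
A toy sketch of the split-cache idea: stack and non-stack accesses are routed to two separate direct-mapped caches and hit rates are tracked. The address trace, cache sizes, and stack boundary are hypothetical; the paper's actual simulations use SimpleScalar.

```python
# Sketch: route accesses to a stack cache or a non-stack data cache.
class DirectMappedCache:
    def __init__(self, size_bytes=2048, line_bytes=32):
        self.lines = [None] * (size_bytes // line_bytes)
        self.line_bytes = line_bytes
        self.hits = self.accesses = 0

    def access(self, addr):
        self.accesses += 1
        tag = addr // self.line_bytes
        slot = tag % len(self.lines)
        if self.lines[slot] == tag:
            self.hits += 1
        else:
            self.lines[slot] = tag  # fill on miss

STACK_BASE = 0x7FFF0000  # hypothetical stack boundary
stack_cache, data_cache = DirectMappedCache(2048), DirectMappedCache(8192)

for addr in [0x1000, 0x7FFF0010, 0x1004, 0x7FFF0014, 0x7FFF0010]:
    cache = stack_cache if addr >= STACK_BASE else data_cache
    cache.access(addr)

print("stack hit rate:", stack_cache.hits / stack_cache.accesses)
```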

7777 Normal and Peaberry Coffee Beans Classification from Green Coffee Bean Images Using Convolutional Neural Networks and Support Vector Machine

Authors: Hira Lal Gope, Hidekazu Fukai

Abstract:

The aim of this study is to develop a system which can identify and sort peaberries automatically at low cost for coffee producers in developing countries. In this paper, the focus is on the classification of peaberries and normal coffee beans using image processing and machine learning techniques. The peaberry is not a defective bean, but neither is it a normal bean: it grows as a single, relatively round seed in a coffee cherry instead of the usual flat-sided pair of beans, and it has a different value and flavor. To make the taste of the coffee better, it is necessary to separate the peaberries from the normal beans before roasting the green coffee beans; otherwise, the tastes of the beans will be mixed, and the result will be bad. During roasting, all the beans' shapes, sizes, and weights must be uniform; otherwise, a larger bean will take more time to roast through. The peaberry has a different size and shape even though it has the same weight as normal beans, and it roasts more slowly than other beans, so neither of these properties provides a good option to select the peaberries. Defective beans, e.g., sour, broken, black, and faded beans, are easy to check and pick out manually by hand. On the other hand, peaberry pick-up is very difficult even for trained specialists because the shape and color of the peaberry are similar to normal beans. In this study, we use image processing and machine learning techniques to discriminate normal and peaberry beans as part of the sorting system. As the first step, we applied Deep Convolutional Neural Networks (CNN) and the Support Vector Machine (SVM) as machine learning techniques to discriminate the peaberry from the normal bean. As a result, better performance was obtained with the CNN than with the SVM for the discrimination of the peaberry. The artificial neural network trained in this work on a high-performance CPU and GPU will simply be installed on the inexpensive, computationally limited Raspberry Pi system, as we assume the system will be used in developing countries. The study evaluates and compares the feasibility of the methods in terms of classification accuracy and processing speed.

Keywords: Convolutional neural networks, coffee bean, peaberry, sorting, support vector machine.
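
A minimal sketch of the CNN-versus-SVM comparison, assuming bean images already cropped to 64x64 grayscale arrays with binary labels (0 = normal, 1 = peaberry); the random data below is a placeholder for real green-bean images.

```python
# Sketch: compare an SVM on flattened pixels against a small CNN.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from tensorflow import keras

X = np.random.rand(200, 64, 64, 1).astype("float32")  # stand-in bean images
y = np.random.randint(0, 2, 200)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# SVM baseline on flattened pixels.
svm = SVC().fit(X_tr.reshape(len(X_tr), -1), y_tr)
print("SVM accuracy:", svm.score(X_te.reshape(len(X_te), -1), y_te))

# Small CNN for the same task.
cnn = keras.Sequential([
    keras.layers.Conv2D(16, 3, activation="relu", input_shape=(64, 64, 1)),
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(32, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(1, activation="sigmoid"),
])
cnn.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
cnn.fit(X_tr, y_tr, epochs=3, verbose=0)
print("CNN accuracy:", cnn.evaluate(X_te, y_te, verbose=0)[1])
```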

7776 Performance Analysis of Digital Signal Processors Using SMV Benchmark

Authors: Erh-Wen Hu, Cyril S. Ku, Andrew T. Russo, Bogong Su, Jian Wang

Abstract:

Unlike general-purpose processors, digital signal processors (DSP processors) are strongly application-dependent. To meet the needs for diverse applications, a wide variety of DSP processors based on different architectures ranging from the traditional to VLIW have been introduced to the market over the years. The functionality, performance, and cost of these processors vary over a wide range. In order to select a processor that meets the design criteria for an application, processor performance is usually the major concern for digital signal processing (DSP) application developers. Performance data are also essential for the designers of DSP processors to improve their design. Consequently, several DSP performance benchmarks have been proposed over the past decade or so. However, none of these benchmarks seem to have included recent new DSP applications. In this paper, we use a new benchmark that we recently developed to compare the performance of popular DSP processors from Texas Instruments and StarCore. The new benchmark is based on the Selectable Mode Vocoder (SMV), a speech-coding program from the recent third generation (3G) wireless voice applications. All benchmark kernels are compiled by the compilers of the respective DSP processors and run on their simulators. Weighted arithmetic mean of clock cycles and arithmetic mean of code size are used to compare the performance of five DSP processors. In addition, we studied how the performance of a processor is affected by code structure, features of processor architecture and optimization of compiler. The extensive experimental data gathered, analyzed, and presented in this paper should be helpful for DSP processor and compiler designers to meet their specific design goals.

Keywords: digital signal processors, DSP benchmark, instruction level parallelism, modified cyclomatic complexity, performance analysis.
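
A minimal sketch of the two summary metrics named above: a weighted arithmetic mean of kernel clock cycles and an arithmetic mean of code size. The kernel names, weights, and measurements are hypothetical, not SMV benchmark data.

```python
# Sketch: summary metrics for comparing DSP processors on benchmark kernels.
import numpy as np

kernels = ["filter", "fft", "codebook_search"]
weights = np.array([0.5, 0.3, 0.2])          # relative kernel importance
cycles = np.array([12000, 45000, 30000])     # measured clock cycles per kernel
code_size = np.array([512, 1024, 2048])      # bytes per kernel

weighted_mean_cycles = np.dot(weights, cycles) / weights.sum()
mean_code_size = code_size.mean()

print(f"weighted mean cycles: {weighted_mean_cycles:.0f}")
print(f"mean code size: {mean_code_size:.0f} bytes")
```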

7775 Harnessing the Power of AI: Transforming DevSecOps for Enhanced Cloud Security

Authors: Ashly Joseph, Jithu Paulose

Abstract:

The increased usage of cloud computing has revolutionized the IT landscape, but it has also raised new security concerns. DevSecOps emerged as a way of tackling these difficulties by integrating security into the software development process. However, the rising complexity and sophistication of cyber threats demand more advanced solutions. This paper looks into the usage of artificial intelligence (AI) techniques in the DevSecOps framework to increase cloud security. This study uses quantitative and qualitative techniques to assess the usefulness of AI approaches such as machine learning, natural language processing, and deep learning in reducing security issues. This paper thoroughly examines the symbiotic relationship between AI and DevSecOps, concentrating on how AI may be seamlessly integrated into the continuous integration and continuous delivery (CI/CD) pipeline, automated security testing, and real-time monitoring methods. The findings emphasize AI's huge potential to improve threat detection, risk assessment, and incident response capabilities. Furthermore, the paper examines the implications and challenges of using AI in DevSecOps workflows, considering factors such as scalability, interpretability, and adaptability. This paper adds to a better understanding of AI's revolutionary role in cloud security and provides valuable insights for practitioners and scholars in the field.

Keywords: Cloud Security, DevSecOps, Artificial Intelligence, AI, Machine Learning, Natural Language Processing, NLP, cybersecurity, AI-driven Security.

7774 Cross Project Software Fault Prediction at Design Phase

Authors: Pradeep Singh, Shrish Verma

Abstract:

Software fault prediction models are created by using the source code, metrics computed from the same or a previous version of the code, and related fault data. Some companies do not store and keep track of all the artifacts which are required for software fault prediction. To construct a fault prediction model for such a company, training data from other projects can be one potential solution. The earlier a fault is predicted, the less it costs to correct. The training data consist of metrics data and related fault data at the function/module level. This paper investigates fault prediction at an early stage using cross-project data, focusing on design metrics. In this study, an empirical analysis is carried out to validate design metrics for cross-project fault prediction. The machine learning technique used for evaluation is Naïve Bayes. The design-phase metrics of other projects can be used as an initial guideline for projects where no previous fault data is available. We analyze seven datasets from the NASA Metrics Data Program which offer design as well as code metrics. Overall, the results of cross-project prediction are comparable to learning from within-company data.

Keywords: Software Metrics, Fault prediction, Cross project, Within project.
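
A minimal sketch of the cross-project setting: a Naïve Bayes classifier is trained on one project's design metrics and tested on another's. The random matrices are stand-ins for the NASA MDP datasets.

```python
# Sketch: cross-project fault prediction with Gaussian Naïve Bayes.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X_proj_a = rng.random((300, 5))     # design metrics, project A (training)
y_proj_a = rng.integers(0, 2, 300)  # fault labels, project A
X_proj_b = rng.random((200, 5))     # design metrics, project B (testing)
y_proj_b = rng.integers(0, 2, 200)

# Train on project A, predict faults in project B (cross-project setting).
model = GaussianNB().fit(X_proj_a, y_proj_a)
print("cross-project accuracy:",
      accuracy_score(y_proj_b, model.predict(X_proj_b)))
```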

7773 Extreme Temperature Forecast in Mbonge, Cameroon through Return Level Analysis of the Generalized Extreme Value (GEV) Distribution

Authors: Nkongho Ayuketang Arreyndip, Ebobenow Joseph

Abstract:

In this paper, temperature extremes are forecast by employing the block maxima method of the Generalized Extreme Value (GEV) distribution to analyse temperature data from the Cameroon Development Corporation (C.D.C). Considering two sets of data (raw data and simulated data) and two models (stationary and non-stationary) of the GEV distribution, a return level analysis is carried out. It was found that in the stationary model, the return values are constant over time for the raw data, while for the simulated data, the return values show an increasing trend but with an upper bound. In the non-stationary model, the return levels of both the raw data and the simulated data show an increasing trend but with an upper bound. This clearly shows that even though temperatures in the tropics show signs of increasing in the future, there is a maximum temperature that will not be exceeded. The results of this paper are vital to agricultural and environmental research.

Keywords: Return level, Generalized extreme value (GEV), Meteorology, Forecasting.
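
A minimal sketch of stationary block-maxima return level estimation with SciPy: fit a GEV distribution to annual maxima and read the T-year return level off the upper 1/T quantile. The temperature values below are hypothetical.

```python
# Sketch: GEV fit and return levels from block (annual) maxima.
import numpy as np
from scipy.stats import genextreme

annual_maxima = np.array([33.1, 34.0, 33.5, 35.2, 34.8, 33.9,
                          35.5, 34.2, 33.7, 35.0])  # degrees Celsius

# Fit the GEV distribution to the block maxima.
shape, loc, scale = genextreme.fit(annual_maxima)

# The T-year return level is the value exceeded with probability 1/T per year,
# i.e. the upper 1/T quantile of the fitted distribution.
for T in (10, 50, 100):
    level = genextreme.isf(1.0 / T, shape, loc=loc, scale=scale)
    print(f"{T}-year return level: {level:.1f} C")
```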

7772 Design and Implementation of Shared Memory based Parallel File System Logging Method for High Performance Computing

Authors: Hyeyoung Cho, Sungho Kim, SangDong Lee

Abstract:

I/O workload is a critical and important factor in analyzing I/O patterns and file system performance. However, tracing I/O operations on the fly in a distributed parallel file system is non-trivial due to collection overhead and the large volume of data. In this paper, we design and implement a parallel file system logging method for high performance computing using a shared memory-based multi-layer scheme. It minimizes the overhead with reduced logging-operation response time and provides an efficient post-processing scheme through shared memory. A separate logging server can collect sequential logs from multiple clients in a cluster through packet communication. Implementation and evaluation results show the low overhead and high scalability of this architecture for high performance parallel logging analysis.

Keywords: I/O workload, PVFS, I/O Trace.

7771 Mining Multicity Urban Data for Sustainable Population Relocation

Authors: Xu Du, Aparna S. Varde

Abstract:

In this research, we propose to conduct diagnostic and predictive analysis of the key factors and consequences of urban population relocation. To achieve this goal, urban simulation models extract urban development trends as land use change patterns from a variety of data sources. The results are treated as part of urban big data, along with other information such as population change and economic conditions. Multiple data mining methods are deployed on this data to analyze nonlinear relationships between parameters. The result determines the driving force of population relocation with respect to urban sprawl and urban sustainability and their related parameters. This work sets the stage for developing a comprehensive urban simulation model that caters to specific questions by targeted users. It contributes towards achieving sustainability as a whole.

Keywords: Data Mining, Environmental Modeling, Sustainability, Urban Planning.

7770 An Ant-based Clustering System for Knowledge Discovery in DNA Chip Analysis Data

Authors: Minsoo Lee, Yun-mi Kim, Yearn Jeong Kim, Yoon-kyung Lee, Hyejung Yoon

Abstract:

Biological data has several characteristics that strongly differentiate it from typical business data. It is much more complex, usually large in size, and continuously changing. Until recently, business data has been the main target for discovering trends, patterns, or future expectations. However, with the recent rise in biotechnology, the powerful technology that was used for analyzing business data is now being applied to biological data. With the advanced technology at hand, the main trend in biological research is rapidly changing from structural DNA analysis to understanding the cellular functions of DNA sequences. DNA chips are now being used to perform experiments, and DNA analysis processes are being used by researchers. Clustering is one of the important processes used for grouping together similar entities. There are many clustering algorithms such as hierarchical clustering, self-organizing maps, k-means clustering, and so on. In this paper, we propose a clustering algorithm that imitates the ecosystem, taking into account the features of biological data. We implemented the system using an ant colony clustering algorithm. The system decides the number of clusters automatically. The system processes the input biological data, runs the ant colony algorithm, draws the topic map, assigns clusters to the genes, and displays the output. We tested the algorithm with test data of 100 to 1000 genes and 24 samples and show promising results for applying this algorithm to clustering DNA chip data.

Keywords: Ant colony system, biological data, clustering, DNA chip.

7769 The Resource Description Framework (RDF) as a Modern Structure for Medical Data

Authors: Gabriela Lindemann, Danilo Schmidt, Thomas Schrader, Dietmar Keune

Abstract:

The amount and heterogeneity of data in biomedical research, notably in interdisciplinary fields, requires new methods for the collection, presentation and analysis of information. Important data from laboratory experiments as well as patient trials are available but spread across distributed resources. The Charité - University Hospital Berlin, together with the German Research Foundation (DFG), has established a new information service centre for kidney diseases and transplantation (Open European Nephrology Science Centre - OpEN.SC). Beside the collaborative aspect of creating new research groups, every partner or institution of this science information centre that makes its own data available is allowed to search the whole data pool of the various involved centres. A core task is the implementation of a non-restricting open data structure for the various different data sources. We decided to use a modern RDF model and in a first phase transformed original data coming from the web-based Electronic Patient Record database TBase©.

Keywords: Medical databases, Resource Description Framework (RDF), metadata repository.
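
A minimal sketch of representing a medical record as RDF triples with rdflib; the namespace, properties, and values are hypothetical, not the OpEN.SC schema.

```python
# Sketch: a small RDF graph for heterogeneous medical data.
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

EX = Namespace("http://example.org/nephrology/")  # hypothetical vocabulary
g = Graph()
g.bind("ex", EX)

patient = EX["patient/42"]
g.add((patient, RDF.type, EX.Patient))
g.add((patient, EX.diagnosis, Literal("chronic kidney disease")))
g.add((patient, EX.creatinine, Literal(2.1, datatype=XSD.decimal)))

# Serialize in Turtle; data from different sources can be merged into one graph.
print(g.serialize(format="turtle"))
```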

7768 Stability of Electrical Motor Supplied by a Five Level Inverter

Authors: Kelaiaia Mounia Samira, Labar Hocine, Bounaya Kamel, Kelaiaia Samia

Abstract:

The development of power electronics has allowed increasing the precision and reliability of electrical drives, thanks to adjustable inverters such as the Pulse Width Modulation (PWM) five-level inverter, which is the object of study in this article. The authors treat the relation between the control law adopted for a given system and the oscillations of the electrical and mechanical parameters, whose tolerance depends on the process in which they are integrated (paper factory, lifting of heavy loads, etc.). Thus, the best choice of the regulation indexes allows us to achieve stable and safe operation without further investment (management of existing equipment).

Keywords: Multilevel inverter, PWM, harmonics, oscillation, control.
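
A minimal sketch of level-shifted carrier PWM, a usual modulation for a five-level inverter of this kind: four stacked triangular carriers are compared against a sinusoidal reference to select the output level. The frequencies and amplitudes below are hypothetical.

```python
# Sketch: level-shifted carrier PWM producing five discrete output levels.
import numpy as np

t = np.linspace(0, 0.02, 2000)            # one 50 Hz fundamental period
reference = np.sin(2 * np.pi * 50 * t)    # normalized reference in [-1, 1]

fc = 1000.0                               # carrier frequency in Hz
tri = 2 * np.abs(2 * ((t * fc) % 1) - 1) - 1   # triangle wave in [-1, 1]

# Four carriers stacked to cover [-1, 1] in four bands of width 0.5.
offsets = np.array([-0.75, -0.25, 0.25, 0.75])
carriers = 0.25 * tri[None, :] + offsets[:, None]

# Output level = number of carriers the reference exceeds, mapped onto
# the five levels -1, -0.5, 0, 0.5, 1.
level = (reference[None, :] > carriers).sum(axis=0) / 2.0 - 1.0
print(np.unique(level))  # the discrete output levels
```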

7767 Analytical Comparison of Conventional Algorithms with Vedic Algorithm for Digital Multiplier

Authors: Akhilesh G. Naik, Dipankar Pal

Abstract:

In today’s scenario, the complexity of digital signal processing (DSP) applications and various microcontroller architectures has been increasing to such an extent that the traditional approaches to multiplier design in most processors are becoming outdated for being comparatively slow. Modern processing applications require suitable pipelined approaches, and therefore algorithms that are friendlier to pipelined architectures. Traditional algorithms like the Wallace tree, Radix-4 Booth, Radix-8 Booth, and Dadda architectures have proven to be comparatively slow for pipelined architectures. These architectures therefore need to be optimized, or combined with other architectures, to enhance their performance and be made suitable for pipelined hardware/architectures. Recently, the Vedic algorithm has mathematically proven to be efficient, being less complex and requiring fewer steps to establish its output, and has assumed renewed importance. This paper describes and shows how the Vedic algorithm can be better suited for pipelined architectures and can also be combined with traditional architectures and algorithms to enhance its ability even further. In this paper, we also establish that for complex applications on DSP and other microcontroller architectures, the Vedic approach to multiplication proves to be the best available and most efficient option.

Keywords: Wallace tree, Radix-4 Booth, Radix-8 Booth, Dadda, Vedic, Single-Stage Karatsuba, Looped Karatsuba.
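
A minimal sketch of the Urdhva-Tiryagbhyam ("vertically and crosswise") sutra commonly used in Vedic multipliers: digit-pair products whose index sums match are accumulated column by column, which is what makes the columns computable in parallel in hardware. This is an illustrative software model, not a hardware description.

```python
# Sketch: Urdhva-Tiryagbhyam multiplication via parallel column sums.
def vedic_multiply(a: int, b: int) -> int:
    da = [int(d) for d in str(a)][::-1]  # least significant digit first
    db = [int(d) for d in str(b)][::-1]
    cols = [0] * (len(da) + len(db) - 1)
    for i, x in enumerate(da):           # vertical and crosswise products
        for j, y in enumerate(db):
            cols[i + j] += x * y
    result, carry = 0, 0
    for k, c in enumerate(cols):         # resolve carries column by column
        carry, digit = divmod(c + carry, 10)
        result += digit * 10 ** k
    result += carry * 10 ** len(cols)
    return result

assert vedic_multiply(1234, 5678) == 1234 * 5678
print(vedic_multiply(1234, 5678))
```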

7766 Organization Model of Semantic Document Repository and Search Techniques for Studying Information Technology

Authors: Nhon Do, Thuong Huynh, An Pham

Abstract:

Nowadays, organizing a repository of documents and resources for learning in a specialized field such as Information Technology (IT), together with search techniques based on domain knowledge or document content, is an urgent need in the practice of teaching, learning and researching. There have been several works related to methods of organization and search by content. However, the results are still limited and insufficient to meet users' demand for semantic document retrieval. This paper presents a solution for organizing a repository that supports semantic representation and processing in search. The proposed solution is a model which integrates components such as an ontology describing domain knowledge, a database of the document repository, semantic representations for documents, and a file system, with problems, semantic processing techniques and advanced search techniques based on measuring semantic similarity. The solution is applied to build an IT learning materials management system of a university with a semantic search function serving students, teachers, and managers as well. The application has been implemented and tested at the University of Information Technology, Ho Chi Minh City, Vietnam, and has achieved good results.

Keywords: document retrieval system, knowledge representation, document representation, semantic search, ontology.

7765 Influence of Microstructural Features on Wear Resistance of Biomedical Titanium Materials

Authors: Mohsin T. Mohammed, Zahid A. Khan, Arshad N. Siddiquee

Abstract:

The field of biomedical materials plays an imperative and critical role in manufacturing a variety of artificial biological replacements in the modern world. Recently, titanium (Ti) materials are being used as biomaterials because of their superior corrosion resistance, tremendous specific strength, freedom from allergic problems, and the greatest biocompatibility compared to other competing biomaterials such as stainless steel, Co-Cr alloys, ceramics, polymers, and composite materials. However, regardless of these excellent performance properties, implantable Ti materials have poor shear strength and wear resistance, which limit their applications as biomaterials. Even though the wear properties of Ti alloys have revealed some improvements, the crucial effectiveness of biomedical Ti alloys as wear components requires a comprehensive, deep understanding of the wear causes, mechanisms, and techniques that can be used to improve wear behavior. This review examines current information on the effect of thermal and thermomechanical processing of implantable Ti materials on the long-term prosthetic requirements related to wear behavior. This paper focuses mainly on the evolution, evaluation and development of effective microstructural features that can improve the wear properties of bio-grade Ti materials using thermal and thermomechanical treatments.

Keywords: Wear Resistance, Heat Treatment, Thermomechanical Processing, Biomedical Titanium Materials.

7764 Interfacing and Replication of Electronic Machinery Using MATLAB / SIMULINK

Authors: Abdulatif Abdusalam, Mohamed Shaban

Abstract:

This paper introduces the interfacing and replication (simulation) of electronic machinery based on the MATLAB/SIMULINK simulation package. The simulation components include dc-dc converters, power factor rectifiers, induction machines, dc machines, synchronous machines, and more complete systems. The power factor rectifier model includes solid-state device models. The tools provide clear-cut structures and simulation of complex energy systems, interfacing with power electronic machines.

Keywords: Power electronics, Machine, Matlab/Simulink.

7763 JConqurr - A Multi-Core Programming Toolkit for Java

Authors: G.A.C.P. Ganegoda, D.M.A. Samaranayake, L.S. Bandara, K.A.D.N.K. Wimalawarne

Abstract:

With the popularity of multi-core and many-core architectures there is a great requirement for software frameworks which can support parallel programming methodologies. In this paper we introduce an Eclipse toolkit, JConqurr, which is easy to use and provides robust support for flexible parallel programming. JConqurr is a multi-core and many-core programming toolkit for Java which is capable of providing support for common parallel programming patterns, including task, data, divide-and-conquer and pipeline parallelism. The toolkit uses an annotation and a directive mechanism to convert sequential code into parallel code. In addition, we have proposed a novel mechanism to achieve parallelism using graphics processing units (GPU). Experiments with common parallelizable algorithms have shown that our toolkit can be easily and efficiently used to convert sequential code to parallel code, and that significant performance gains can be achieved.

Keywords: Multi-core, parallel programming patterns, GPU, Java, Eclipse plugin, toolkit.

7762 Optical Verification of an Ophthalmological Examination Apparatus Employing the Electroretinogram Function on Fundus-Related Perimetry

Authors: Naoto Suzuki

Abstract:

In Japan, the most common causes of eyesight loss are glaucoma, diabetic retinopathy, pigmentary retinal degeneration, and age-related macular degeneration. We developed an ophthalmological examination apparatus with fundus camera, fundus-related perimetry (microperimetry), and electroretinogram (ERG) functions to diagnose a variety of diseases that cause eyesight loss. The experimental apparatus was constructed with the same optical system as a fundus camera. The microperimetry optical system was calculated and added to the experimental apparatus using the optical engineering software OpTaliX-LT 10.8 from the German company Optenso. We also added an Edmund infrared camera (EO-0413), a lens with a 25 mm focal length, a 45° cold mirror, a 12 V/50 W halogen lamp, and an 8-inch monitor. We made an artificial eye from a plano-convex lens, a black spacer, and a hemispherical cup; the hemispherical cup had a small piece of paper at the bottom. The artificial eye was photographed five times using the experimental apparatus. Software was created in C++Builder 10.2 to display the examination target on the monitor and save the examination data. The retinal fundus was displayed on the monitor at a length and width of 1 mm and a resolution of 70.4 ± 4.1 and 74.7 ± 6.8 pixels, respectively. The microperimetry and ERG functions were successfully added to the experimental ophthalmological apparatus. A moving machine was developed to measure the artificial eye's movement. The rear part of the artificial eye was painted black, with white in the central area, and it was rotated 10 degrees from one side to the other. The movement was captured five times as motion videos. Three static images were extracted from one of the captured motion videos, displaying the artificial eye facing the center, right, and left. The three images were processed using Scilab 6.1.0 and its Image Processing and Computer Vision Toolbox 4.1.2, including trimming, binarization, windowing, deletion of the peripheral area, and morphological operations. To calculate the center of the artificial eye's fundus, we added a center-of-gravity method to the program to compute the centroid of the connected components. From the three images, the image processing could calculate the center position.

Keywords: Ophthalmological examination apparatus, microperimetry, electroretinogram, eye movement.
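
A minimal sketch of the final centroid step in Python with OpenCV rather than the paper's Scilab toolbox: binarize the image and take the center of gravity of the foreground via image moments. The input file name is hypothetical.

```python
# Sketch: binarize a fundus image and compute the center of gravity.
import cv2

img = cv2.imread("artificial_eye.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

m = cv2.moments(binary, binaryImage=True)
cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]  # centroid coordinates
print(f"fundus center: ({cx:.1f}, {cy:.1f})")
```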

7761 Stochastic Optimization of a Vendor-Managed Inventory Problem in a Two-Echelon Supply Chain

Authors: Bita Payami-Shabestari, Dariush Eslami

Abstract:

The purpose of this paper is to develop a multi-product economic production quantity model under a vendor-managed inventory policy with restrictions including limited warehouse space, budget, number of orders, average shortage time, and maximum permissible shortage. Since the costs cannot be predicted with certainty, it is assumed that the data behave under an uncertain environment. The problem is first formulated in the framework of a bi-objective multi-product economic production quantity model. The problem is then solved with three multi-objective decision-making (MODM) methods, which are compared on the optimal values of the two objective functions and on central processing unit (CPU) time, using statistical analysis and multi-attribute decision-making (MADM). The results of the study demonstrate that the augmented ε-constraint method performs better than global criteria and goal programming in terms of the optimal values of the two objective functions and CPU time. Sensitivity analysis is done to illustrate the effect of parameter variations on the optimal solution. The contribution of this research is the use of random cost data in developing a multi-product economic production quantity model under a vendor-managed inventory policy with several constraints.

Keywords: Economic production quantity, random cost, supply chain management, vendor-managed inventory.
