Search results for: service based environment.
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 13472

9692 A Risk Management Approach for Nigeria Manufacturing Industries

Authors: Olaniyi O. Omoyajowo

Abstract:

To be successful in today’s competitive global environment, manufacturing industries must be able to respond quickly to changes in technology. These changes introduce new risks and hazards. Managing risk and hazard in a manufacturing process provides a method through which the success rate of an organization can be increased. Thus, there is a continual need for manufacturing industries to invest a significant amount of resources in risk management, which in turn optimizes the production output and profitability of any manufacturing industry (if implemented properly). To help improve the existing risk prevention and mitigation practices of Small and Medium Enterprises (SMEs) in Nigeria Manufacturing Industries (NMI), this research develops a systematic risk management process.

Keywords: Manufacturing industries, production output, risk, risk management, SMEs.

9691 Optimal Duty-Cycle Modulation Scheme for Analog-To-Digital Conversion Systems

Authors: G. Sonfack, J. Mbihi, B. Lonla Moffo

Abstract:

This paper presents an optimal duty-cycle modulation (ODCM) scheme for analog-to-digital conversion (ADC) systems. The overall ODCM-based ADC problem is decoupled into optimal DCM and digital filtering sub-problems, while taking into account the constraints of mutual design parameters between the two. Using a set of three lemmas and four morphological theorems, the ODCM sub-problem is modelled as a nonlinear cost function with nonlinear constraints. Then, a weighted least pth norm of the error between the ideal and predicted frequency responses is used as the cost function for the digital filtering sub-problem. In addition, the MATLAB fmincon and iirlnorm tools are used as the optimal DCM and least pth norm solvers, respectively. Furthermore, the virtual simulation scheme of an overall prototype ODCM-based ADC system is implemented and tested with the help of the Simulink tool according to a relevant set of design data, i.e., 3 kHz of modulating bandwidth, 172 kHz of maximum modulation frequency and 25 MHz of sampling frequency. Finally, the results obtained and presented show that the ODCM-based ADC achieves, under 3 kHz of modulating bandwidth: 57 dBc of SINAD (signal-to-noise and distortion ratio), 58 dB of SFDR (spurious-free dynamic range), -80 dBc of THD (total harmonic distortion), and 10 bits of minimum resolution. These performance levels are highly competitive within the class of oversampling ADC topologies with a 2nd-order IIR (infinite impulse response) decimation filter.
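
As a rough illustration of the digital filtering sub-problem, the sketch below fits a first-order IIR response by minimizing a weighted least pth norm error with a generic nonlinear solver (a Python stand-in for the MATLAB fmincon/iirlnorm step; the ideal response, weighting and p are assumed, not taken from the paper).

```python
# Illustrative sketch (not the paper's code): fitting a first-order IIR filter
# to an ideal low-pass response by minimizing a weighted least-pth-norm error.
import numpy as np
from scipy.signal import freqz
from scipy.optimize import minimize

w = np.linspace(0, np.pi, 256)                 # frequency grid (rad/sample)
H_ideal = (w < 0.2 * np.pi).astype(float)      # ideal low-pass response (assumed)
weight = np.where(w < 0.25 * np.pi, 1.0, 0.5)  # heavier weight in the passband
p = 4                                          # norm order (assumed)

def cost(params):
    b0, a1 = params                            # H(z) = b0 / (1 - a1 z^-1)
    _, H = freqz([b0], [1.0, -a1], worN=w)
    err = weight * np.abs(H - H_ideal)
    return np.sum(err ** p) ** (1.0 / p)       # weighted least-pth norm

# |a1| < 1 keeps the filter stable during the search
res = minimize(cost, x0=[0.1, 0.5], bounds=[(0.0, 2.0), (-0.99, 0.99)])
print("optimal [b0, a1]:", res.x, "residual:", res.fun)
```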

Keywords: Digital IIR filter, morphological lemmas and theorems, optimal DCM-based ADC, virtual simulation, weighted least pth norm.

9690 Finite Element Prediction and Experimental Verification of the Failure Pattern of Proximal Femur using Quantitative Computed Tomography Images

Authors: Majid Mirzaei, Saeid Samiezadeh , Abbas Khodadadi, Mohammad R. Ghazavi

Abstract:

This paper presents a novel method for predicting the mechanical behavior of the proximal femur using the general framework of quantitative computed tomography (QCT)-based finite element analysis (FEA). A systematic imaging and modeling procedure was developed for reliable correspondence between the QCT-based FEA and the in-vitro mechanical testing. A specially designed holding frame was used to define and maintain a unique geometrical reference system during the analysis and testing. The QCT images were directly converted into voxel-based 3D finite element models for linear and nonlinear analyses. The equivalent plastic strain and the strain energy density measures were used to identify the critical elements and predict the failure patterns. The samples were destructively tested using a specially designed gripping fixture (with five degrees of freedom) mounted within a universal mechanical testing machine. Very good agreement was found between the experimental and the predicted failure patterns and the associated load levels.

Keywords: Bone, Osteoporosis, Noninvasive methods, Failure Analysis

9689 Modeling and Dynamics Analysis for Intelligent Skid-Steering Vehicle Based on Trucksim-Simulink

Authors: Yansong Zhang, Xueyuan Li, Junjie Zhou, Xufeng Yin, Shihua Yuan, Shuxian Liu

Abstract:

Aiming at the verification of control algorithms for skid-steering vehicles, a vehicle simulation model of a 6×6 electric skid-steering unmanned vehicle was established based on Trucksim and Simulink. The original transmission and steering mechanisms of Trucksim are removed, and the electric skid-steering model and a closed-loop controller for the vehicle speed and yaw rate are built in Simulink. The simulation results are compared with those obtained from theoretical formulas. The results show that the tire mechanics and vehicle kinematics predicted by the Trucksim-Simulink simulation model are close to the theoretical results. Therefore, it can be used as an effective approach to study the dynamic performance and control algorithms of skid-steering vehicles. In this paper, a motion control method based on feedforward control is also designed. The simulation results show that the feedforward control strategy makes the vehicle follow the target yaw rate more quickly and accurately, improving the vehicle's maneuverability.
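
The sketch below illustrates the feedforward-plus-feedback idea on a toy first-order yaw-dynamics model; it is not the Trucksim 6×6 vehicle model, and all gains and constants are assumed.

```python
# Minimal yaw-rate tracking sketch: feedforward inverts the assumed steady-state
# gain, and a PI feedback term removes the remaining tracking error.
import numpy as np

K, tau = 0.8, 0.4          # assumed steady-state gain and time constant
kp, ki = 2.0, 4.0          # assumed PI feedback gains
dt, T = 0.01, 3.0
r_target = 0.3             # desired yaw rate (rad/s)

r, integ = 0.0, 0.0
for k in range(int(T / dt)):
    u_ff = r_target / K            # feedforward: invert the steady-state gain
    e = r_target - r               # feedback on the tracking error
    integ += e * dt
    u = u_ff + kp * e + ki * integ
    r += dt * (K * u - r) / tau    # Euler step of the toy yaw dynamics
print("final yaw rate:", round(r, 3), "target:", r_target)
```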

Keywords: Skid-steering, Trucksim-Simulink, feedforward control, dynamics.

9688 New Subband Adaptive IIR Filter Based On Polyphase Decomposition

Authors: Young-Seok Choi

Abstract:

We present a subband adaptive infinite impulse response (IIR) filtering method based on a polyphase decomposition of the IIR filter. Motivated by the fact that the polyphase structure has benefits in terms of convergence rate and stability, we introduce the polyphase decomposition to subband IIR filtering, i.e., in each subband, a high-order IIR filter is decomposed into lower-order polyphase IIR filters. Computer simulations demonstrate that the proposed method has an improved convergence rate over conventional IIR filters.
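
A minimal sketch of the underlying polyphase identity, verified on a short FIR impulse response (the paper applies the decomposition to higher-order IIR filters within each subband):

```python
# Two-branch (M = 2) polyphase decomposition, H(z) = E0(z^2) + z^(-1) E1(z^2),
# checked numerically on an example impulse response.
import numpy as np
from scipy.signal import lfilter

h = np.array([0.2, 0.5, 0.3, -0.1, 0.05, 0.02])   # example impulse response
e0, e1 = h[0::2], h[1::2]                          # polyphase components E0, E1

def upsample(c, M=2):
    out = np.zeros(len(c) * M)
    out[::M] = c                                   # coefficients of Ek(z^M)
    return out

x = np.random.randn(64)
y_direct = lfilter(h, [1.0], x)

# branch 1 plus a one-sample-delayed branch 2
x_delay = np.concatenate(([0.0], x[:-1]))
y_poly = lfilter(upsample(e0), [1.0], x) + lfilter(upsample(e1), [1.0], x_delay)
print("identity holds:", np.allclose(y_direct, y_poly))
```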

Keywords: Subband adaptive filter, IIR filtering, polyphase decomposition.

9687 On Quantum BCH Codes and Its Duals

Authors: J. S. Bhullar, Manish Gupta

Abstract:

Classical Bose-Chaudhuri-Hocquenghem (BCH) codes C that contain their dual codes can be used to construct quantum stabilizer codes; this paper studies the properties of such codes. It has been shown that a BCH code of length n which contains its dual code satisfies a bound on the weight of any non-zero codeword in C, and the converse is also true. One major difficulty in quantum communication and computation is to protect information-carrying quantum states against undesired interactions with the environment. To address this difficulty, many good quantum error-correcting codes have been derived as binary stabilizer codes. We shed more light on the structure of dual-containing BCH codes. These results make it possible to determine the parameters of quantum BCH codes in terms of the weight of non-zero dual codewords.

Keywords: Quantum Codes, BCH Codes, Dual BCH Codes, Designed Distance.

9686 Developing Manufacturing Process for the Graphene Sensors

Authors: Abdullah Faqihi, John Hedley

Abstract:

Biosensors play a significant role in the healthcare sector and in scientific and technological progress. Developing electrodes that are easy to manufacture and deliver better electrochemical performance is advantageous for diagnostics and biosensing. They can be implemented extensively in various analytical tasks such as drug discovery, food safety, medical diagnostics, process controls, security and defence, in addition to environmental monitoring. The development of biosensors aims to create high-performance electrochemical electrodes for diagnostics and biosensing. A biosensor is a device that inspects the biological and chemical reactions generated by the biological sample. A biosensor carries out biological detection via a linked transducer and converts the biological response into an electrical signal; stability, selectivity, and sensitivity are the dynamic and static characteristics that affect and dictate the quality and performance of biosensors. In this research, an experimental study of the laser scribing technique for processing graphene oxide inside a vacuum chamber is presented. The processing of graphene oxide (GO) was achieved using the laser scribing technique. The effect of laser scribing on the reduction of GO was investigated under two conditions: atmosphere and vacuum. A GO solution was coated onto a LightScribe DVD, and the laser scribing technique was applied to reduce the GO layers and generate rGO. The morphological micro-structures of rGO and GO were examined using scanning electron microscopy (SEM) and Raman spectroscopy. The first electrode was a traditional graphene-based electrode model, made under normal atmospheric conditions, whereas the second was a graphene electrode fabricated under vacuum using a vacuum chamber. The purpose was to control the vacuum conditions, such as the air pressure and the temperature, during the fabrication process. The parameters to be assessed include the layer thickness and the continuous environment. The results presented show high accuracy and repeatability, achieving low-cost production.

Keywords: Laser scribing, LightScribe DVD, graphene oxide, scanning electron microscopy.

9685 A Comparative Analysis Approach Based on Fuzzy AHP, TOPSIS and PROMETHEE for the Selection Problem of GSCM Solutions

Authors: Omar Boutkhoum, Mohamed Hanine, Abdessadek Bendarag

Abstract:

Sustainable economic growth is nowadays driving firms toward the adoption of many green supply chain management (GSCM) solutions. However, the evaluation and selection of these solutions is a matter of concern that requires careful decisions, given the complexity introduced by the many associated factors. To resolve this problem, a comparative analysis approach based on multi-criteria decision-making methods is proposed for adequate evaluation of sustainable supply chain management solutions. In the present paper, we propose an integrated decision-making model based on FAHP (Fuzzy Analytic Hierarchy Process), TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution) and PROMETHEE (Preference Ranking Organisation METHod for Enrichment Evaluations) to contribute to a better understanding and development of new sustainable strategies for industrial organizations. Due to the varied importance of the selected criteria, FAHP is used to identify the evaluation criteria and assign the importance weights for each criterion, while the TOPSIS and PROMETHEE methods employ these weighted criteria as inputs to evaluate and rank the alternatives. The main objective is to provide a comparative analysis based on the TOPSIS and PROMETHEE processes to help make sound and reasoned decisions related to the selection problem of GSCM solutions.
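
A minimal TOPSIS sketch with invented scores and weights (in the paper the weights come from the FAHP stage) is shown below.

```python
# TOPSIS on made-up data: 4 candidate GSCM solutions scored on 3 criteria.
import numpy as np

X = np.array([[7.0, 0.4, 120.0],     # alternatives x criteria (assumed scores)
              [6.0, 0.6, 100.0],
              [8.0, 0.5, 150.0],
              [5.0, 0.7,  90.0]])
w = np.array([0.5, 0.3, 0.2])        # FAHP-style weights (assumed)
benefit = np.array([True, True, False])   # third criterion treated as a cost

R = X / np.linalg.norm(X, axis=0)    # vector normalisation
V = R * w                            # weighted normalised matrix
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
d_best  = np.linalg.norm(V - ideal, axis=1)
d_worst = np.linalg.norm(V - anti, axis=1)
closeness = d_worst / (d_best + d_worst)
print("ranking (best first):", np.argsort(-closeness))
```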

Keywords: GSCM solutions, multi-criteria analysis, FAHP, TOPSIS, PROMETHEE, decision support system.

9684 A Comparative Study into Observer based Fault Detection and Diagnosis in DC Motors: Part-I

Authors: Padmakumar S., Vivek Agarwal, Kallol Roy

Abstract:

A model-based fault detection and diagnosis technique for DC motors is proposed in this paper. Fault detection using the Kalman filter and its different variants is compared. Only incipient faults are considered for the study. The Kalman filter iterations and all the related computations required for fault detection and fault confirmation are presented. A second-order linear state-space model of the DC motor is used for this work. A comparative assessment of the estimates computed from four different observers and of their relative performance is presented.
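
A hedged sketch of residual-based fault detection with a linear Kalman filter on a generic second-order model is given below; the matrices, noise levels and injected sensor bias are placeholders, not the paper's DC-motor parameters.

```python
# Innovation (residual) monitoring with a linear Kalman filter: a fault is
# flagged when the normalised innovation exceeds a fixed gate.
import numpy as np

A = np.array([[1.0, 0.01], [-0.5, 0.95]])   # assumed discrete-time dynamics
C = np.array([[1.0, 0.0]])                  # only the first state is measured
Q, R = 1e-5 * np.eye(2), np.array([[1e-3]])
rng = np.random.default_rng(0)

x_true = np.array([[0.5], [0.0]])
x_hat, P = np.zeros((2, 1)), np.eye(2)
threshold = 16.0                            # ~4-sigma gate on the residual

for t in range(120):
    # simulate the plant and a measurement; inject a sensor bias after t = 60
    x_true = A @ x_true + rng.multivariate_normal([0, 0], Q).reshape(2, 1)
    y = C @ x_true + rng.normal(0.0, np.sqrt(R[0, 0]))
    if t >= 60:
        y = y + 0.2                         # incipient sensor fault (assumed)

    # Kalman filter predict/update
    x_pred, P_pred = A @ x_hat, A @ P @ A.T + Q
    nu = y - C @ x_pred                     # innovation (residual)
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T @ np.linalg.inv(S)
    x_hat = x_pred + K @ nu
    P = (np.eye(2) - K @ C) @ P_pred

    if (nu.T @ np.linalg.inv(S) @ nu).item() > threshold:
        print("fault flagged at step", t)
        break
```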

Keywords: DC motor model, fault detection and diagnosis, Kalman Filter, Unscented Kalman Filter.

9683 Stabilization of a New Configurable Two-Wheeled Machine Using a PD-PID and a Hybrid FL Control Strategies: A Comparative Study

Authors: M. Almeshal, M. O. Tokhi, K. M. Goher

Abstract:

A novel design of a two-wheeled robotic vehicle with a moving payload is presented in this paper. A mathematical model describing the vehicle dynamics is derived and simulated in the MATLAB Simulink environment. Two control strategies were developed to stabilise the vehicle in the upright position. A robust Proportional-Integral-Derivative (PID) control strategy has been implemented and initially tested to measure the system performance, while the second control strategy uses a hybrid fuzzy logic controller (FLC). The results are given on a comparative basis for the system performance in terms of disturbance rejection, control algorithm robustness, as well as the control effort in terms of input torque.

Keywords: double inverted pendulum, modelling, robust control, simulation.

9682 Codebook Generation for Vector Quantization on Orthogonal Polynomials based Transform Coding

Authors: R. Krishnamoorthi, N. Kannan

Abstract:

In this paper, a new algorithm for generating codebook is proposed for vector quantization (VQ) in image coding. The significant features of the training image vectors are extracted by using the proposed Orthogonal Polynomials based transformation. We propose to generate the codebook by partitioning these feature vectors into a binary tree. Each feature vector at a non-terminal node of the binary tree is directed to one of the two descendants by comparing a single feature associated with that node to a threshold. The binary tree codebook is used for encoding and decoding the feature vectors. In the decoding process the feature vectors are subjected to inverse transformation with the help of basis functions of the proposed Orthogonal Polynomials based transformation to get back the approximated input image training vectors. The results of the proposed coding are compared with the VQ using Discrete Cosine Transform (DCT) and Pairwise Nearest Neighbor (PNN) algorithm. The new algorithm results in a considerable reduction in computation time and provides better reconstructed picture quality.
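
The binary-tree codebook idea can be sketched as follows; the splitting rule (highest-variance feature at its median) and the stand-in training vectors are assumptions for illustration only.

```python
# Tree-structured codebook sketch: each internal node thresholds a single
# feature, and leaves store centroids that serve as codewords.
import numpy as np

def build_tree(vectors, depth, max_depth=3, min_size=4):
    if depth == max_depth or len(vectors) <= min_size:
        return {"codeword": vectors.mean(axis=0)}          # terminal node
    feat = int(np.argmax(vectors.var(axis=0)))             # single feature
    thr = float(np.median(vectors[:, feat]))               # threshold
    left = vectors[vectors[:, feat] <= thr]
    right = vectors[vectors[:, feat] > thr]
    if len(left) == 0 or len(right) == 0:
        return {"codeword": vectors.mean(axis=0)}
    return {"feat": feat, "thr": thr,
            "left": build_tree(left, depth + 1, max_depth, min_size),
            "right": build_tree(right, depth + 1, max_depth, min_size)}

def encode(tree, v, path=""):
    if "codeword" in tree:
        return path, tree["codeword"]                      # index + reconstruction
    branch = "0" if v[tree["feat"]] <= tree["thr"] else "1"
    child = tree["left"] if branch == "0" else tree["right"]
    return encode(child, v, path + branch)

training = np.random.randn(256, 8)                          # stand-in feature vectors
tree = build_tree(training, 0)
index, approx = encode(tree, training[0])
print("codeword index:", index,
      "reconstruction error:", np.linalg.norm(training[0] - approx))
```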

Keywords: Orthogonal Polynomials, Image Coding, Vector Quantization, TSVQ, Binary Tree Classifier

9681 Exergy Based Performance Analysis of Double Flow Solar Air Heater with Corrugated Absorber

Authors: S. P. Sharma, Som Nath Saha

Abstract:

This paper presents the performance, based on exergy analysis, of double flow solar air heaters with corrugated and flat plate absorbers. A mathematical model of the double flow solar air heater based on energy balance equations has been presented, and the results obtained have been compared with those of a conventional flat-plate solar air heater. The double flow corrugated absorber solar air heater performs thermally better than the flat plate double flow and conventional flat-plate solar air heaters under the same operating conditions. However, the corrugated absorber leads to a higher pressure drop, thereby increasing pumping power. The results revealed that the energy and exergy efficiencies of the double flow corrugated absorber solar air heater are much higher than those of the conventional solar air heater, owing to the increased heat transfer surface area and turbulence in the air flow. The results indicate that the energy efficiency increases, whereas the exergy efficiency decreases, with an increase in mass flow rate.
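
For reference, one commonly used formulation of the energy and exergy efficiencies of a solar air heater is sketched below (illustrative constants; not necessarily the exact expressions used in this paper).

```python
# Exergy gained by the air stream divided by the exergy of the incident solar
# radiation (Carnot-type factor); all numbers are placeholders.
import math

m_dot, cp = 0.02, 1005.0                 # air mass flow (kg/s), specific heat (J/kg.K)
T_in, T_out, T_a = 300.0, 330.0, 298.0   # inlet, outlet, ambient temperatures (K)
I, A_c, T_sun = 900.0, 2.0, 5777.0       # irradiance (W/m2), collector area (m2), sun temp (K)

ex_air = m_dot * cp * ((T_out - T_in) - T_a * math.log(T_out / T_in))
ex_sun = I * A_c * (1.0 - T_a / T_sun)
energy_eff = m_dot * cp * (T_out - T_in) / (I * A_c)
exergy_eff = ex_air / ex_sun
print(f"energy efficiency = {energy_eff:.2%}, exergy efficiency = {exergy_eff:.2%}")
```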

Keywords: Corrugated absorber, double flow, exergy efficiency, solar air heater.

9680 Urban Big Data: An Experimental Approach to Building-Value Estimation Using Web-Based Data

Authors: Sun-Young Jang, Sung-Ah Kim, Dongyoun Shin

Abstract:

Current real-estate value estimation, which is difficult for laymen, is usually performed by specialists. This paper presents an automated estimation process based on big data and machine-learning technology that calculates the influence of building conditions on real-estate price measurement. The present study analyzed actual building sales sample data for Nonhyeon-dong, Gangnam-gu, Seoul, Korea, measuring the major influencing factors among the various building conditions. Further to that analysis, a prediction model was established and applied using RapidMiner Studio, a graphical user interface (GUI)-based tool for the derivation of machine-learning prototypes. The prediction model is formulated by reference to previous examples; when new examples are applied, it analyses and predicts accordingly. The analysis process discerns the crucial factors affecting price increases by calculating weighted values. The model was verified, and its accuracy determined, by comparing its predicted values with actual price increases.

Keywords: Big data, building-value analysis, machine learning, price prediction.

9679 Generative Adversarial Network Based Fingerprint Anti-Spoofing Limitations

Authors: Yehjune Heo

Abstract:

Fingerprint anti-spoofing approaches have been actively developed and applied in real-world applications. One of the main problems of fingerprint anti-spoofing is that it is not robust to unseen samples, especially in real-world scenarios. A possible solution is to generate artificial but realistic fingerprint samples and use them for training in order to achieve good generalization. This paper contains experimental and comparative results with currently popular GAN-based methods and uses realistic synthesis of fingerprints in training in order to increase the performance. Among various GAN models, the most popular StyleGAN is used for the experiments. The CNN models were first trained with the dataset that did not contain generated fake images, and the accuracy along with the mean average error rate were recorded. Then, the generated fake images (fake images of live fingerprints and fake images of spoof fingerprints) were each combined with the original images (real images of live fingerprints and real images of spoof fingerprints), and various CNN models were trained. The best performance of each CNN model trained with the dataset containing generated fake images was recorded, along with the accuracy and the mean average error rate. We observe that current GAN-based approaches need significant improvements in anti-spoofing performance, although the overall quality of the synthesized fingerprints seems to be reasonable. We include an analysis of this performance degradation, especially with a small number of samples. In addition, we suggest several approaches towards improved generalization with a small number of samples, by focusing on what GAN-based approaches should and should not learn.

Keywords: Anti-spoofing, CNN, fingerprint recognition, GAN.

9678 Support Vector Fuzzy Based Neural Networks For Exchange Rate Modeling

Authors: Prof. Chokri SLIM

Abstract:

A novel fuzzy neural network combined with a support vector learning mechanism, called the support-vector-based fuzzy neural network (SVBFNN), is proposed. The SVBFNN combines the capability of support vector learning to minimize the empirical risk (training error) and the expected risk (testing error) in high-dimensional data spaces with the efficient human-like reasoning of the FNN.

Keywords: Neural network, fuzzy inference, machine learning, fuzzy modeling and rule extraction, support vector regression.

9677 Cognitive Virtual Exploration for Optimization Model Reduction

Authors: Livier Serna, Xavier Fischer, Fouad Bennis

Abstract:

In this paper, a decision aid method for pre-optimization is presented. The method is called “negotiation", and it is based on the identification, formulation, modeling and use of indicators defined as “negotiation indicators". These negotiation indicators are used to explore the solution space by means of a class-based approach. The classes are subdomains of the negotiation indicators' domain. They represent equivalent cognitive solutions in terms of the negotiation indicators being used. By this method, we reduced the size of the solution space and the number of criteria, thus aiding the optimization methods. We present an example to illustrate the method.

Keywords: Optimization Model Reduction, Pre-Optimization, Negotiation Process, Class-Making, Cognition-Based Virtual Exploration.

9676 Emergency Response Plan Establishment and Computerization through the Analysis of the Disasters Occurring on Long-Span Bridges by Type

Authors: Sungnam Hong, Sun-Kyu Park, Dooyong Cho, Jinwoong Choi

Abstract:

In this paper, a strategy for long-span bridge disaster response was developed, divided into risk analysis, business impact analysis, and an emergency response plan. At the risk analysis stage, the critical risk was estimated; the critical risk was “car accident." The critical process by critical-risk classification was assessed at the business impact analysis stage; the critical process was the task related to road conditions and traffic safety. Based on the results of the preceding analysis, an emergency response plan was established. By making the order of the standard operating procedures clear, an effective plan for dealing with disaster was formulated. Finally, a prototype software was developed based on the research findings. This study laid the foundation of an information-technology-based disaster response guideline and is significant in that it computerized the disaster response plan to improve the plan's accessibility.

Keywords: Emergency response, long-span bridge, disaster management, standard operating procedure, ubiquitous.

9675 Mining User-Generated Contents to Detect Service Failures with Topic Model

Authors: Kyung Bae Park, Sung Ho Ha

Abstract:

Online user-generated contents (UGC) significantly change the way customers behave (e.g., shop, travel), and handling the overwhelming amount of various UGC is one of the paramount issues for management. However, current approaches (e.g., sentiment analysis) are often ineffective for leveraging textual information to detect the problems or issues that a given business suffers from. In this paper, we employ text mining with Latent Dirichlet Allocation (LDA) on a popular online review site dedicated to complaints from users. We find that LDA efficiently detects customer complaints, and a further inspection with visualization techniques is effective for categorizing the problems or issues. As such, management can identify the issues at stake and prioritize them accordingly in a timely manner, given the limited amount of resources. The findings provide managerial insights into how analytics on social media can help maintain and improve reputation management. Our interdisciplinary approach also highlights several insights from applying machine learning techniques in the marketing research domain. On a broader technical note, this paper illustrates the details of how to implement LDA in the R program from beginning (data collection in R) to end (LDA analysis in R), since such instructions are still largely undocumented. In this regard, it will help lower the barrier for interdisciplinary researchers to conduct related research.
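
The paper documents the workflow in R; the following is a compact Python analogue of the LDA step on toy complaint texts.

```python
# LDA topic model on a tiny invented complaint corpus using scikit-learn.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

reviews = [
    "late delivery and no refund after many calls",
    "rude staff at the counter, long waiting time",
    "delivery arrived damaged, refund process very slow",
    "waiting time too long and staff unhelpful",
]
vec = CountVectorizer(stop_words="english")
dtm = vec.fit_transform(reviews)                    # document-term matrix

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(dtm)

terms = vec.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-4:][::-1]]
    print(f"topic {k}: {', '.join(top)}")           # candidate service-failure themes
```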

Keywords: Latent Dirichlet allocation, R program, text mining, topic model, user generated contents, visualization.

9674 An Investigation of Performance versus Security in Cognitive Radio Networks with Supporting Cloud Platforms

Authors: Kurniawan D. Irianto, Demetres D. Kouvatsos

Abstract:

The growth of wireless devices affects the availability of limited frequencies or spectrum bands, since spectrum is a natural resource that cannot be added to. Meanwhile, the licensed frequencies are idle most of the time. Cognitive radio is one of the solutions to these problems. Cognitive radio is a promising technology that allows unlicensed users, known as secondary users (SUs), to access licensed bands without causing interference to licensed users or primary users (PUs). As cloud computing has become popular in recent years, cognitive radio networks (CRNs) can be integrated with a cloud platform. One of the important issues in CRNs is security. It becomes a problem since CRNs use radio frequencies as a transmission medium, and CRNs share the same issues as wireless communication systems. Another critical issue in CRNs is performance. Security has an adverse effect on performance, and there are trade-offs between them. The goal of this paper is to investigate the performance-versus-security trade-off in CRNs with supporting cloud platforms. Furthermore, queueing network models with preemptive resume and preemptive repeat identical priority are applied in this project to measure the impact of security on performance in CRNs with or without a cloud platform. The generalized exponential (GE) type distribution is used to reflect the bursty inter-arrival and service times at the servers. The results show that the best performance is obtained when security is disabled and the cloud platform is enabled.
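
A sketch of sampling from a GE-type distribution with a given mean rate and squared coefficient of variation (SCV), which is what produces the bursty inter-arrival and service times mentioned above (a zero-mass-plus-exponential form is assumed):

```python
# GE samples: with probability 1 - tau the inter-arrival time is zero (burst),
# otherwise it is exponential, so the sample matches mean 1/mu and SCV = C2.
import numpy as np

def ge_samples(mu, C2, size, rng=np.random.default_rng(0)):
    tau = 2.0 / (1.0 + C2)                 # exponential-branch probability
    is_zero = rng.random(size) > tau       # zero inter-arrival => burst
    exp_part = rng.exponential(1.0 / (mu * tau), size)
    return np.where(is_zero, 0.0, exp_part)

x = ge_samples(mu=1.0, C2=9.0, size=100000)
print("mean ~", x.mean(), "SCV ~", x.var() / x.mean() ** 2)   # ~1.0 and ~9.0
```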

Keywords: Cloud platforms, cognitive radio networks, GE-type distribution, performance vs. security.

9673 A Learning-Community Recommendation Approach for Web-Based Cooperative Learning

Authors: Jian-Wei Li, Yao-Tien Wang, Yi-Chun Chang

Abstract:

Cooperative learning has been defined as learners working together as a team to solve a problem, complete a task, or accomplish a common goal, which emphasizes the importance of interactions among members in promoting the overall learning performance. With the popularity of social networks, cooperative learning is no longer limited to traditional classroom teaching activities. Since social networks make it easier to organize online learners, to establish common shared visions, and to advance learning interaction, the online community and the online learning community have triggered the establishment of web-based societies. Numerous studies have indicated that the collaborative learning community is a critical factor in enhancing learning performance. Hence, this paper proposes a learning community recommendation approach, based on k-nearest neighbor (kNN) classification, to help a learner join the appropriate learning communities. To demonstrate the viability of the proposed approach, it was implemented for 117 students to recommend learning communities. The experimental results indicate that the proposed approach can effectively recommend appropriate learning communities for learners.
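
A hedged sketch of the kNN recommendation idea with invented learner features and community labels:

```python
# Recommend the community whose existing members are most similar to a new
# learner; the features and memberships below are purely illustrative.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# columns: prior score, weekly online hours, forum posts (assumed features)
X = np.array([[85, 10, 30], [90, 12, 25], [60, 3, 5],
              [55, 4, 8], [70, 7, 15], [75, 8, 12]])
community = np.array(["A", "A", "B", "B", "C", "C"])   # existing memberships

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X, community)

new_learner = np.array([[82, 9, 20]])
print("recommended community:", knn.predict(new_learner)[0])
```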

Keywords: k-nearest neighbor classification, learning community, Cooperative/Collaborative Learning and Environments.

9672 Literature-Based Discoveries in Lupus Treatment

Authors: Oluwaseyi Jaiyeoba, Vetria Byrd

Abstract:

Systemic lupus erythematosus (aka lupus) is a chronic disease known for its chameleon-like ability to mimic the symptoms of other diseases, rendering it hard to detect, diagnose and treat. The heterogeneous nature of the disease generates disparate data that are often multifaceted and multi-dimensional. Musculoskeletal manifestation of lupus is one of its most common clinical manifestations. This research links disparate literature on the treatment of lupus as it affects the musculoskeletal system, using discoveries from literature-based research articles available in the PubMed database. Several Natural Language Processing (NLP) tools exist to connect disjointed but related literature, such as Connected Papers, Bitola, and Gopalakrishnan. Literature-based discovery (LBD) has been used to bridge unconnected disciplines based on text mining procedures. The technical/medical literature consists of many technical/medical concepts, each having its own sub-literature. This approach has been used to link Parkinson's, Raynaud's, and multiple sclerosis treatments within the literature. Literature-based discovery methods can connect two or more related but disjointed literature concepts to produce a novel and plausible approach to solving a research problem. Data visualization techniques, with the help of natural language processing tools, are used to visually represent the results of literature-based discoveries. Literature search results can be voluminous, but data visualization processes can provide insight and detect subtle patterns in large data. These insights and patterns can lead to discoveries that would have otherwise been hidden in disjointed literature. In this research, literature data are mined and combined with visualization techniques for heterogeneous data to discover viable treatments reported in the literature for lupus expression in the musculoskeletal system. This research addresses the question of whether literature-based discovery can identify potential treatments for a multifaceted disease like lupus. A three-pronged methodology is used in this research: text mining, natural language processing, and data visualization. These three research-related fields are employed to identify patterns in lupus-related data that, when visually represented, could aid research on the treatment of lupus. This work introduces a method for visually representing the interconnections of various lupus-related literature. The methodology outlined in this work is the first step toward literature-based research and treatment planning for the musculoskeletal manifestation of lupus. The results also outline the interconnection of complex, disparate data associated with the manifestation of lupus in the musculoskeletal system. The societal impact of this work is broad. Advances in this work will improve the quality of life for millions of persons in the workforce currently diagnosed and silently living with a musculoskeletal disease associated with lupus.

Keywords: Systemic lupus erythematosus, LBD, Data Visualization, musculoskeletal system, treatment.

9671 A Method for Quality Inspection of Motors by Detecting Abnormal Sound

Authors: Tadatsugu Kitamoto

Abstract:

Currently, the quality of motors is inspected by human ears. In this paper, I propose two systems that use speech recognition methods to automate this inspection. The first system is based on linear processing using the K-means and nearest neighbor methods, and the second is based on non-linear processing using neural networks. I used motor sounds in these systems and successfully recognized 86.67% of the motor sounds with the linear processing system and 97.78% with the non-linear processing system.
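
An illustrative sketch of the linear-processing pipeline on synthetic signals (not real motor recordings): averaged STFT magnitude spectra as features, followed by a nearest-neighbour decision.

```python
# Average STFT magnitude over time as a spectral feature, then pick the
# labelled reference sound with the smallest Euclidean distance.
import numpy as np
from scipy.signal import stft

fs = 8000
t = np.arange(0, 1.0, 1 / fs)
good = np.sin(2 * np.pi * 120 * t) + 0.05 * np.random.randn(len(t))
bad = (np.sin(2 * np.pi * 120 * t) + 0.4 * np.sin(2 * np.pi * 900 * t)
       + 0.05 * np.random.randn(len(t)))

def feature(x):
    _, _, Z = stft(x, fs=fs, nperseg=256)
    return np.abs(Z).mean(axis=1)          # average spectrum over time

refs = {"normal": feature(good), "abnormal": feature(bad)}
test = feature(np.sin(2 * np.pi * 120 * t) + 0.35 * np.sin(2 * np.pi * 900 * t))
label = min(refs, key=lambda k: np.linalg.norm(test - refs[k]))
print("nearest-neighbour decision:", label)
```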

Keywords: Acoustical diagnosis, Neural networks, K-means, Short-time Fourier transformation

9670 Performance Evaluation of a Minimum Mean Square Error-Based Physical Sidelink Share Channel Receiver under Fading Channel

Authors: Yang Fu, Jaime Rodrigo Navarro, Jose F. Monserrat, Faiza Bouchmal, Oscar Carrasco Quilis

Abstract:

Cellular Vehicle to Everything (C-V2X) is considered a promising solution for future autonomous driving. From Release 16 to Release 17, the Third Generation Partnership Project (3GPP) has introduced the definitions and services for 5G New Radio (NR) V2X. Since establishing a simulator for C-V2X communications is an essential preliminary step to achieve reliable and stable communication links, this paper proposes a complete framework of a link-level simulator based on the 3GPP specifications for the Physical Sidelink Share Channel (PSSCH) of the 5G NR Physical Layer (PHY). In this framework, several algorithms in the receiver part, i.e., sliding window in channel estimation and Minimum Mean Square Error (MMSE)-based equalization, are developed. Finally, the performance of the developed PSSCH receiver is validated through extensive simulations under different assumptions.
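
For reference, a generic MMSE equalization step for a known channel matrix H and noise variance σ² (a sketch, not the full PSSCH receiver chain):

```python
# MMSE equalizer: x_hat = (H^H H + sigma2 I)^(-1) H^H y, demonstrated on a
# random 4x4 complex channel with QPSK symbols.
import numpy as np

rng = np.random.default_rng(1)
N = 4
H = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2)
x = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=N) / np.sqrt(2)  # QPSK
sigma2 = 0.01
noise = np.sqrt(sigma2 / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
y = H @ x + noise

W = np.linalg.inv(H.conj().T @ H + sigma2 * np.eye(N)) @ H.conj().T   # MMSE filter
x_hat = W @ y
print("residual symbol error:", np.linalg.norm(x_hat - x))
```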

Keywords: C-V2X, 5G NR, PSSCH, channel estimation, MMSE equalization, link-level simulation.

9669 Shifted Window Based Self-Attention via Swin Transformer for Zero-Shot Learning

Authors: Yasaswi Palagummi, Sareh Rowlands

Abstract:

Generalised Zero-Shot Learning, often known as GZSL, is an advanced variant of zero-shot learning in which the samples in the unseen category may be either seen or unseen. GZSL methods typically have a bias towards the seen classes because they learn a model to perform recognition for both the seen and unseen classes using data samples from the seen classes. This frequently leads to the misclassification of data from the unseen classes into the seen classes, making the task of GZSL more challenging. In this work, we propose an approach leveraging the Shifted Window based Self-Attention in the Swin Transformer (Swin-GZSL) to work in the inductive GZSL problem setting. We run experiments on three popular benchmark datasets: CUB, SUN, and AWA2, which are specifically used for ZSL and its other variants. The results show that our model based on the Swin Transformer achieves a state-of-the-art harmonic mean on two datasets, AWA2 and SUN, and near state-of-the-art on the other dataset, CUB. More importantly, this technique has a linear computational complexity, which reduces training time significantly. We have also observed less bias than in most of the existing GZSL models.

Keywords: Generalised Zero-shot Learning, Inductive Learning, Shifted-Window Attention, Swin Transformer, Vision Transformer.

9668 On Simulation based WSN Multi-Parametric Performance Analysis

Authors: Ch. Antonopoulos, Th. Kapourniotis, V. Triantafillou

Abstract:

Optimum communication and performance in Wireless Sensor Networks (WSNs) constitute multi-faceted challenges due to their specific networking characteristics as well as the scarce resource availability. Furthermore, it is becoming increasingly apparent that isolated layer-based approaches often do not meet the demands posed by WSN applications due to the omission of critical inter-layer interactions and dependencies. In contrast, cross-layer design is receiving high interest, aiming to exploit these interactions and increase network performance. However, in order to clearly identify existing dependencies, comprehensive performance studies are required, evaluating the effect of different critical network parameters on system-level performance and behavior. This paper's main objective is to address the need for multi-parametric performance evaluations considering critical network parameters using a well-known network simulator, offering useful and practical conclusions and guidelines. The results reveal strong dependencies among the considered parameters, which can be utilized by and drive future research efforts towards designing and implementing highly efficient protocols and architectures.

Keywords: Wireless sensor network, Communication Systems, cross-layer architectures, simulation based performance evaluation

9667 Control Chart Pattern Recognition Using Wavelet Based Neural Networks

Authors: Jun Seok Kim, Cheong-Sool Park, Jun-Geol Baek, Sung-Shick Kim

Abstract:

Control chart pattern recognition is one of the most important tools to identify the process state in statistical process control. The abnormal process state could be classified by the recognition of unnatural patterns that arise from assignable causes. In this study, a wavelet based neural network approach is proposed for the recognition of control chart patterns that have various characteristics. The procedure of proposed control chart pattern recognizer comprises three stages. First, multi-resolution wavelet analysis is used to generate time-shape and time-frequency coefficients that have detail information about the patterns. Second, distance based features are extracted by a bi-directional Kohonen network to make reduced and robust information. Third, a back-propagation network classifier is trained by these features. The accuracy of the proposed method is shown by the performance evaluation with numerical results.
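
A sketch of the first stage only, multi-resolution wavelet features of a control-chart pattern, is given below; the wavelet family, decomposition level and synthetic patterns are assumptions, and the Kohonen/back-propagation stages are not reproduced here.

```python
# Multi-resolution wavelet analysis: per-band energies of the decomposition
# separate an in-control pattern from a mean-shift pattern.
import numpy as np
import pywt

rng = np.random.default_rng(0)
n = 64
normal = rng.normal(0, 1, n)                                  # in-control pattern
shift = rng.normal(0, 1, n) + np.r_[np.zeros(n // 2), 3 * np.ones(n // 2)]  # mean shift

def wavelet_features(x, wavelet="db2", level=3):
    coeffs = pywt.wavedec(x, wavelet, level=level)            # [cA3, cD3, cD2, cD1]
    return np.array([np.sqrt(np.mean(c ** 2)) for c in coeffs])  # energy per band

print("normal:", wavelet_features(normal).round(2))
print("shift :", wavelet_features(shift).round(2))            # approximation energy grows
```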

Keywords: Control chart pattern recognition, Multi-resolution wavelet analysis, Bi-directional Kohonen network, Back-propagation network, Feature extraction.

9666 Dynamic TDMA Slot Reservation Protocol for QoS Provisioning in Cognitive Radio Ad Hoc Networks

Authors: S. M. Kamruzzaman

Abstract:

In this paper, we propose a dynamic TDMA slot reservation (DTSR) protocol for cognitive radio ad hoc networks. Quality of Service (QoS) guarantee plays a critically important role in such networks. We consider the problem of providing QoS guarantees to users while maintaining the most efficient use of scarce bandwidth resources. According to one-hop neighboring information and the bandwidth requirement, our proposed protocol dynamically changes the frame length and the transmission schedule. A dynamic frame length expansion and shrinking scheme that controls the excessive increase of unassigned slots has been proposed. This method efficiently utilizes the channel bandwidth by assigning unused slots to new neighboring nodes and increasing the frame length when the number of slots in the frame is insufficient to support the neighboring nodes. It also shrinks the frame length when half of the slots in the frame of a node are empty. An efficient slot reservation protocol not only guarantees successful data transmissions without collisions but also enhances channel spatial reuse to maximize the system throughput. Our proposed scheme, which provides both QoS guarantees and efficient resource utilization, can be employed to optimize the channel spatial reuse and maximize the system throughput. Extensive simulation results show that the proposed mechanism achieves desirable performance in multichannel, multi-rate cognitive radio ad hoc networks.
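
The frame expansion/shrinking rule can be sketched as follows (a toy single-node view with illustrative constants, not the full protocol):

```python
# Extend the frame when no free slot remains for a new neighbour; shrink it
# when at least half of the slots are empty.
class DynamicFrame:
    def __init__(self, length=4):
        self.slots = [None] * length            # slot -> node id (None = free)

    def reserve(self, node):
        if None not in self.slots:              # insufficient slots: expand
            self.slots.extend([None] * len(self.slots))
        self.slots[self.slots.index(None)] = node

    def release(self, node):
        self.slots = [s if s != node else None for s in self.slots]
        free = self.slots.count(None)
        if free >= len(self.slots) / 2 and len(self.slots) > 2:   # shrink
            used = [s for s in self.slots if s is not None]
            self.slots = used + [None] * max(0, len(self.slots) // 2 - len(used))

frame = DynamicFrame()
for n in ["a", "b", "c", "d", "e"]:
    frame.reserve(n)                            # fifth node triggers expansion
print(len(frame.slots), frame.slots)
frame.release("b"); frame.release("c"); frame.release("d")
print(len(frame.slots), frame.slots)            # frame shrinks after releases
```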

Keywords: TDMA, cognitive radio, ad hoc networks, QoS guarantee, dynamic frame length.

9665 Quick Sequential Search Algorithm Used to Decode High-Frequency Matrices

Authors: Mohammed M. Siddeq, Mohammed H. Rasheed, Omar M. Salih, Marcos A. Rodrigues

Abstract:

This research proposes a data encoding and decoding method based on the Matrix Minimization algorithm. This algorithm is applied to high-frequency coefficients for compression/encoding. The algorithm starts by converting every three coefficients to a single value; this is accomplished based on three different keys. The decoding/decompression uses a search method called the Quick Sequential Search (QSS) decoding algorithm, presented in this research, which is based on sequential search to recover the exact coefficients. In the next step, the decoded data are saved in an auxiliary array. The basic idea behind the auxiliary array is to save all possible decoded coefficients; this is because another algorithm, such as a conventional sequential search, could retrieve the encoded/compressed data independently of the proposed algorithm. The experimental results showed that our proposed decoding algorithm retrieves the original data faster than conventional sequential search algorithms.
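
A hedged sketch of the encode/decode idea: three coefficients are folded into one value using three keys, and decoding sequentially searches candidate triples stored in an auxiliary table. The key choice below (a positional/base scheme) merely guarantees invertibility over the assumed coefficient range; the actual Matrix Minimization algorithm generates its keys differently.

```python
# Encode triples of coefficients with three keys, then decode by a sequential
# scan of an auxiliary table of all candidate triples.
import itertools
import numpy as np

R = 4                                    # assumed coefficient range [-R, R]
alphabet = list(range(-R, R + 1))
k1, k2, k3 = 1, 2 * R + 1, (2 * R + 1) ** 2   # keys chosen so triples map uniquely

def encode(triple):
    a, b, c = triple
    return k1 * a + k2 * b + k3 * c

# auxiliary array: every possible decoded triple with its encoded value
aux = [(encode(t), t) for t in itertools.product(alphabet, repeat=3)]

def qss_decode(value):
    for enc, triple in aux:              # naive sequential scan of candidates
        if enc == value:
            return triple
    raise ValueError("value not decodable")

coeffs = np.array([0, -2, 1, 3, 0, -1])           # stand-in high-frequency data
encoded = [encode(tuple(coeffs[i:i + 3])) for i in range(0, len(coeffs), 3)]
decoded = [v for e in encoded for v in qss_decode(e)]
print("lossless:", list(coeffs) == decoded)
```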

Keywords: Matrix Minimization Algorithm, Decoding Sequential Search Algorithm, image compression, Discrete Cosine Transform, Discrete Wavelet Transform.

9664 Geometric Modeling of Illumination on the TFT-LCD Panel using Bezier Surface

Authors: Kyong-min Lee, Moon Soo Chang, PooGyeon Park

Abstract:

In this paper, we propose a geometric modeling of the illumination on a patterned image containing etched transistors. This image is captured by a commercial camera during the inspection of a TFT-LCD panel. Inspection of defects is an important process in the production of LCD panels, but regional differences in brightness, which have a negative effect on the inspection, arise from the uneven illumination environment. In order to solve this problem, we present a geometric modeling of illumination consisting of an interpolation using the least squares method and 3D modeling using a Bezier surface. Our computational time, by using the sampling method, is shorter than that of the previous methods. Moreover, the model can be further used to correct brightness in every patterned image.
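
The two modelling steps named above can be sketched as a least-squares fit of a bicubic Bezier surface to sampled brightness values (the synthetic image below stands in for the captured panel image):

```python
# Fit a bicubic (tensor-product Bernstein) surface to a brightness grid by
# linear least squares; the fitted surface is the smooth illumination model.
import numpy as np
from math import comb

def bernstein(n, i, t):
    return comb(n, i) * t ** i * (1 - t) ** (n - i)

H, W, n = 20, 20, 3                                   # grid size, cubic degree
u = np.linspace(0, 1, W)
v = np.linspace(0, 1, H)
brightness = 100 + 30 * np.outer(v, u) + np.random.randn(H, W)  # uneven illumination

# design matrix: one column per control point P_ij
Bu = np.array([[bernstein(n, i, ui) for i in range(n + 1)] for ui in u])  # (W, 4)
Bv = np.array([[bernstein(n, j, vj) for j in range(n + 1)] for vj in v])  # (H, 4)
A = np.einsum("hj,wi->hwij", Bv, Bu).reshape(H * W, (n + 1) ** 2)

ctrl, *_ = np.linalg.lstsq(A, brightness.ravel(), rcond=None)   # control points
fitted = (A @ ctrl).reshape(H, W)                               # illumination model
print("RMS residual:", np.sqrt(np.mean((fitted - brightness) ** 2)))
```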

Keywords: Bezier, defect, geometric modeling, illumination, inspection, LCD, panel.

9663 Performance Improvement of Information System of a Banking System Based on Integrated Resilience Engineering Design

Authors: S. H. Iranmanesh, L. Aliabadi, A. Mollajan

Abstract:

Integrated resilience engineering (IRE) is capable of returning banking systems to the normal state in extensive economic circumstances. In this study, the information system of a large bank (with several branches) is assessed and optimized under severe economic conditions. Data envelopment analysis (DEA) models are employed to achieve the objective of this study. Nine IRE factors are considered to be the outputs, and a dummy variable is defined as the input of the DEA models. A standard questionnaire is designed and distributed among executive managers, who are considered the decision-making units (DMUs). The reliability and validity of the questionnaire are examined based on Cronbach's alpha and the t-test. The most appropriate DEA model is determined based on average efficiency and a normality test. It is shown that the proposed integrated design provides higher efficiency than the conventional RE design. Results of the sensitivity and perturbation analysis indicate that self-organization, fault tolerance, and reporting culture respectively account for about 50 percent of the total weight.
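
A hedged sketch of an input-oriented CCR DEA model solved as a linear program is shown below; the data (one dummy unit input and a few IRE-factor outputs per DMU) are invented for illustration, not the questionnaire results of the study.

```python
# CCR envelopment form: minimise theta subject to the composite DMU using no
# more input than theta * x_k while producing at least the outputs of DMU k.
import numpy as np
from scipy.optimize import linprog

inputs = np.ones((5, 1))                      # dummy unit input for 5 DMUs
outputs = np.array([[3.2, 4.1, 2.8],          # e.g. scores on three IRE factors
                    [2.9, 3.8, 3.5],
                    [4.0, 4.2, 3.9],
                    [2.5, 3.0, 2.7],
                    [3.6, 3.9, 3.1]])
n, m, s = inputs.shape[0], inputs.shape[1], outputs.shape[1]

def ccr_efficiency(k):
    # variables z = [theta, lambda_1..lambda_n]; minimise theta
    c = np.r_[1.0, np.zeros(n)]
    A_in = np.c_[-inputs[k].reshape(m, 1), inputs.T]       # sum(lam_j x_j) <= theta x_k
    A_out = np.c_[np.zeros((s, 1)), -outputs.T]            # sum(lam_j y_j) >= y_k
    A_ub = np.r_[A_in, A_out]
    b_ub = np.r_[np.zeros(m), -outputs[k]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (1 + n))
    return res.x[0]

for k in range(n):
    print(f"DMU {k}: efficiency = {ccr_efficiency(k):.3f}")
```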

Keywords: Banking system, data envelopment analysis, DEA, integrated resilience engineering, IRE, performance evaluation, perturbation analysis.
