Search results for: Equality of P and NP Complexity Classes.
364 Tidal Data Analysis using ANN
Authors: Ritu Vijay, Rekha Govil
Abstract:
The design of a complete expansion that allows for compact representation of certain relevant classes of signals is a central problem in signal processing applications. Achieving such a representation means knowing the signal features for the purpose of denoising, classification, interpolation and forecasting. Multilayer neural networks are a relatively new class of techniques that are mathematically proven to approximate any continuous function arbitrarily well. Radial Basis Function (RBF) networks, which use a Gaussian activation function, have also been shown to be universal approximators. In this age of ever-increasing digitization in the storage, processing, analysis and communication of information, there are numerous applications where one needs to construct a continuously defined function or numerical algorithm to approximate, represent and reconstruct the given discrete data of a signal. Often one wishes to manipulate the data in a way that requires information not included explicitly in the data, which is done through interpolation and/or extrapolation. Tidal data are a classic example of a time series, and many statistical techniques have been applied to their analysis and representation; ANNs are a recent addition to such techniques. In the present paper we describe the time-series representation capabilities of a special type of ANN, the Radial Basis Function network, and present the results of tidal data representation using RBF. Tidal data analysis and representation is one of the important requirements in marine science for forecasting.
Keywords: ANN, RBF, Tidal Data.
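To make the RBF representation concrete, here is a minimal Python sketch that fits a Gaussian RBF expansion to a synthetic tide-like series by least squares; the signal, the number of centres and the basis width are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: representing a periodic (tide-like) series with a Gaussian
# RBF expansion whose output-layer weights are fitted by least squares.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 48, 200)                       # hours (synthetic record)
signal = 1.2 * np.sin(2 * np.pi * t / 12.42) + 0.3 * np.sin(2 * np.pi * t / 24.0)
observed = signal + 0.05 * rng.standard_normal(t.size)

centres = np.linspace(t.min(), t.max(), 25)       # RBF centres spread over the record
width = 2.0                                       # Gaussian width (assumed)

def design_matrix(x, centres, width):
    # One Gaussian basis function per centre, evaluated at every sample.
    return np.exp(-((x[:, None] - centres[None, :]) ** 2) / (2 * width ** 2))

Phi = design_matrix(t, centres, width)
weights, *_ = np.linalg.lstsq(Phi, observed, rcond=None)   # output-layer weights

t_fine = np.linspace(0, 48, 1000)                 # representation / interpolation grid
reconstruction = design_matrix(t_fine, centres, width) @ weights
print("training RMSE:", np.sqrt(np.mean((Phi @ weights - observed) ** 2)))
```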
363 Implementation of Adder-Subtracter Design with Verilog HDL
Authors: May Phyo Thwal, Khin Htay Kyi, Kyaw Swar Soe
Abstract:
Driven by increasing chip densities, designers are trying to put many computational and storage facilities on single chips. Along with the complexity of computational and storage circuits, design, testing and debugging become more complex and expensive, so hardware is designed using a very high speed hardware description language, which is more efficient and cost-effective. This paper focuses on the implementation of a 32-bit ALU design based on the Verilog hardware description language. The adder and subtracter operate correctly on both unsigned and positive numbers. In an ALU, addition takes most of the time if a ripple-carry adder is used. The general strategy for designing fast adders is to reduce the time required to form carry signals; adders that use this principle are called carry look-ahead adders. The carry look-ahead adder is designed as a combination of 4-bit adders. The syntax of Verilog HDL is similar to the C programming language. This paper proposes a unified approach to ALU design in which both simulation and formal verification can co-exist.
Keywords: Addition, arithmetic logic unit, carry look-ahead adder, Verilog HDL.
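As a plain illustration of the carry look-ahead principle mentioned above (independent of the paper's Verilog implementation), the following Python sketch forms the generate and propagate signals of a 4-bit group and expands every carry directly in terms of them; the bit ordering and test values are arbitrary.

```python
# Illustrative 4-bit carry look-ahead: generate g_i = a_i AND b_i,
# propagate p_i = a_i XOR b_i, and every carry expanded in terms of
# g, p and c_in so no carry has to ripple through the previous stage.
def cla_4bit(a_bits, b_bits, c_in=0):
    g = [a & b for a, b in zip(a_bits, b_bits)]   # generate signals
    p = [a ^ b for a, b in zip(a_bits, b_bits)]   # propagate signals
    c1 = g[0] | (p[0] & c_in)
    c2 = g[1] | (p[1] & g[0]) | (p[1] & p[0] & c_in)
    c3 = g[2] | (p[2] & g[1]) | (p[2] & p[1] & g[0]) | (p[2] & p[1] & p[0] & c_in)
    c4 = (g[3] | (p[3] & g[2]) | (p[3] & p[2] & g[1])
          | (p[3] & p[2] & p[1] & g[0]) | (p[3] & p[2] & p[1] & p[0] & c_in))
    carries = [c_in, c1, c2, c3]
    s = [p[i] ^ carries[i] for i in range(4)]     # sum bits
    return s, c4                                   # LSB-first sum and carry-out

a = [1, 0, 1, 1]   # 13, least significant bit first
b = [1, 1, 0, 1]   # 11
print(cla_4bit(a, b))   # ([0, 0, 0, 1], 1) -> sum 8 plus carry-out 16 = 24
```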
362 Can Smart Meters Create Smart Behaviour?
Authors: Candice Moy, Damien Guirco, Thomas Boyle
Abstract:
Intelligent technologies are increasingly facilitating sustainable water management strategies in Australia. While this innovation can present clear cost benefits to utilities through immediate leak detection and deferral of capital costs, the impact of this technology on households is less distinct. By offering real-time engagement and detailed end-use consumption breakdowns, smart meters create significant potential for demand reduction as a behavioural response to increased information. Despite this potential, passive implementation without well-planned residential engagement strategies is likely to result in a lost opportunity. This paper begins this research process by exploring the effect of smart water meters through the lens of three behaviour change theories. The Theory of Planned Behaviour (TPB), Belief Revision theory (BR) and Practice Theory emphasise different variables that can potentially influence and predict household water engagement. In acknowledging the strengths of each theory, the nuances and complexity of household water engagement can be recognised, which can contribute to effective planning for residential smart meter engagement strategies.
Keywords: Behaviour, information, household, smart meters, water.
361 Mathematical Expression for Machining Performance
Authors: Md. Ashikur Rahman Khan, M. M. Rahman
Abstract:
In electrical discharge machining (EDM), a complete and clear theory has not yet been established. The physical models developed so far yield results far from reality due to the complexity of the physics, and it is difficult to select proper parameter settings in order to achieve better EDM performance. However, modelling can solve this critical problem concerning the parameter settings. Therefore, the purpose of the present work is to develop mathematical models to predict the performance characteristics of EDM on Ti-5Al-2.5Sn titanium alloy. The response surface method (RSM) and an artificial neural network (ANN) are employed to develop the mathematical models. The developed models are verified through analysis of variance (ANOVA), and the ANN models are trained, tested, and validated using a set of data. It is found that the developed ANN and mathematical models can predict the performance of EDM effectively; thus, the models provide a precise tool that makes the EDM process more cost-effective and efficient.
Keywords: Analysis of variance, artificial neural network, material removal rate, modelling, response surface method, surface finish.
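A minimal sketch of the response-surface idea, assuming two illustrative EDM inputs (current and pulse-on time) and synthetic data rather than the paper's experiments: a second-order polynomial model is fitted with scikit-learn and used for prediction.

```python
# Second-order response-surface model relating assumed EDM parameters to a
# synthetic material removal rate (MRR); values are illustrative only.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.uniform([5, 10], [20, 200], size=(30, 2))        # [current (A), pulse-on (us)]
mrr = 0.02 * X[:, 0] ** 2 + 0.001 * X[:, 1] + rng.normal(0, 0.2, 30)  # synthetic MRR

rsm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
rsm.fit(X, mrr)                                           # fit the quadratic surface
print("R^2 on training data:", rsm.score(X, mrr))
print("predicted MRR at (12 A, 100 us):", rsm.predict([[12, 100]])[0])
```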
360 Performance Analysis of Cluster Based Dual Tiered Network Model with INTK Security Scheme in a Wireless Sensor Network
Authors: D. Satish Kumar, S. Karthik
Abstract:
A dual tiered network model is designed to overcome the problems of energy alerts and fault tolerance. This model minimizes delay time and overcomes link failures. The performance of the dual tiered network model is studied in this paper, where the CA and LS schemes are compared with DEO optimal. We then evaluate the Integrated Network Topological Control and Key Management (INTK) scheme, which was proposed to add security features to wireless sensor networks. Clustering efficiency, level of protection and time complexity are some of the parameters of the INTK scheme that were analyzed. We then evaluate the Cluster based Energy Competent n-coverage scheme (CEC n-coverage scheme) to ensure area coverage for wireless sensor networks.
Keywords: CEC n-coverage scheme, Clustering efficiency, Dual tiered network, Wireless sensor networks.
359 A Hybrid Feature Selection and Deep Learning Algorithm for Cancer Disease Classification
Authors: Niousha Bagheri Khulenjani, Mohammad Saniee Abadeh
Abstract:
Learning from very big datasets is a significant problem for most present data mining and machine learning algorithms. MicroRNA (miRNA) data are among the important large genomic, non-coding datasets representing genome sequences. In this paper, a hybrid method for the classification of miRNA data is proposed. Due to the variety of cancers and the high number of genes, analyzing miRNA datasets has been a challenging problem for researchers: the number of features relative to the number of samples is high, and the data suffer from class imbalance. A feature selection method is used to select features with more ability to distinguish classes and to eliminate obscure features. Afterward, a Convolutional Neural Network (CNN) classifier for classification of cancer types is utilized, which employs a Genetic Algorithm to find optimized hyper-parameters of the CNN. In order to make the CNN classification process faster, a Graphics Processing Unit (GPU) is recommended for carrying out the calculations in parallel. The proposed method is tested on a real-world dataset with 8,129 patients, 29 different types of tumors, and 1,046 miRNA biomarkers, taken from The Cancer Genome Atlas (TCGA) database.
Keywords: Cancer classification, feature selection, deep learning, genetic algorithm.
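The filter-style feature-selection step described above can be sketched as follows; the synthetic data, the ANOVA F-score criterion and the random-forest stand-in classifier (used here instead of the paper's GA-tuned CNN) are illustrative assumptions.

```python
# Select the most class-discriminative features before training a classifier.
# Synthetic data stands in for the TCGA miRNA set (1,046 features assumed).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier   # stand-in for the GA-tuned CNN

X, y = make_classification(n_samples=500, n_features=1046, n_informative=60,
                           n_classes=5, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

selector = SelectKBest(score_func=f_classif, k=100).fit(X_tr, y_tr)   # keep top 100
clf = RandomForestClassifier(random_state=0).fit(selector.transform(X_tr), y_tr)
print("accuracy with 100 selected features:",
      clf.score(selector.transform(X_te), y_te))
```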
358 Using HMM-based Classifier Adapted to Background Noises with Improved Sounds Features for Audio Surveillance Application
Authors: Asma Rabaoui, Zied Lachiri, Noureddine Ellouze
Abstract:
Discrimination between different classes of environmental sounds is the goal of our work. The use of a sound recognition system can offer concrete potential for surveillance and security applications. The first contribution of this paper to the field is a thorough investigation of the applicability of state-of-the-art audio features in the domain of environmental sound recognition. Additionally, a set of novel features obtained by combining the basic parameters is introduced. The quality of the investigated features is evaluated with an HMM-based classifier, which receives particular attention. In fact, we propose to use a multi-style training system based on HMMs: one recognizer is trained on a database including different levels of background noise and is used as a universal recognizer for every environment. In order to enhance the system robustness by reducing the environmental variability, we explore different adaptation algorithms including Maximum Likelihood Linear Regression (MLLR), Maximum A Posteriori (MAP) and the MAP/MLLR algorithm that combines MAP and MLLR. Experimental evaluation shows that a rather good recognition rate can be reached, even under severe noise degradation, when the system is fed the convenient set of features.
Keywords: Sound recognition, HMM classifier, Multi-style training, Environmental adaptation, Feature combinations.
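A minimal sketch of the multi-style training idea, assuming the third-party hmmlearn package and synthetic MFCC-like features: one Gaussian HMM is trained on data pooled from several noise conditions, so a single recognizer covers every environment. The MLLR/MAP adaptation stage is not reproduced here.

```python
# Multi-style training: pool features from several background-noise conditions
# and fit one HMM on the pooled data. Feature dimension, noise levels and
# model sizes are illustrative assumptions.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
clean = rng.normal(0.0, 1.0, size=(200, 13))             # stand-in MFCC-like features
conditions = [clean + rng.normal(0.0, s, clean.shape)    # same class, added noise
              for s in (0.1, 0.5, 1.0)]

X = np.vstack(conditions)                                 # pooled training data
lengths = [len(c) for c in conditions]                    # one sequence per condition

model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=20)
model.fit(X, lengths)                                      # multi-style trained model
print("log-likelihood of a noisy test sequence:", model.score(conditions[1][:50]))
```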
357 Business-Intelligence Mining of Large Decentralized Multimedia Datasets with a Distributed Multi-Agent System
Authors: Karima Qayumi, Alex Norta
Abstract:
The rapid generation of high volumes and a broad variety of data from the application of new technologies poses challenges for the generation of business intelligence. Most organizations and business owners need to extract data from multiple sources and apply analytical methods for the purpose of developing their business. The recent decentralization of data-management environments therefore relies on a distributed computing paradigm. While data are stored in highly distributed systems, the implementation of distributed data-mining techniques is a challenge. The aim of such techniques is to gather knowledge from every domain and all the datasets stemming from distributed resources. As agent technologies offer significant contributions to managing the complexity of distributed systems, we consider them for next-generation data-mining processes. To demonstrate agent-based business intelligence operations, we use agent-oriented modeling techniques to develop a new artifact for mining massive datasets.
Keywords: Agent-oriented modeling, business Intelligence management, distributed data mining, multi-agent system.
356 Beef Cattle Farmers Perception toward Urea Mineral Molasses Block
Authors: Veronica Sri Lestari, Djoni Prawira Rahardja, Tanrigiling Rasyid, Aslina Asnawi, Ikrar Muhammad Saleh, Ilham Rasyid
Abstract:
Urea Mineral Molasses Block (UMMB) is very important for beef cattle because it can increase beef production. The purpose of this research was to determine beef cattle farmers' perception towards UMMB. This research was conducted in Gowa Regency, South Sulawesi, Indonesia in 2016. The population of this research was all beef cattle farmers, and the sample was chosen through purposive sampling. Data were collected through observation and face-to-face in-depth interviews using a questionnaire. The variables of perception consisted of relative advantage, compatibility, complexity, observability and trialability. There were 10 questions, and the answer to each question was scored 1, 2 or 3, referring to disagree, agree enough and strongly agree. The data were analyzed descriptively using frequency distribution. The research revealed that beef cattle farmers' perception towards UMMB was categorized as strongly agree.
Keywords: Beef cattle, farmers, perception, urea mineral molasses block.
355 A Reconfigurable Distributed Multiagent System Optimized for Scalability
Authors: Summiya Moheuddin, Afzel Noore, Muhammad Choudhry
Abstract:
This paper proposes a novel solution for optimizing the size and communication overhead of a distributed multiagent system without compromising performance. The proposed approach addresses the challenges of scalability, especially when the multiagent system is large. A modified spectral clustering technique is used to partition a large network into logically related clusters, and agents are assigned to monitor dedicated clusters rather than each device or node. The proposed scalable multiagent system is implemented using JADE (Java Agent Development Environment) for a large power system. The performance of the proposed topology-independent decentralized multiagent system and the scalable multiagent system is compared by comprehensively simulating different fault scenarios. The time taken for reconfiguration, the overall computational complexity, and the communication overhead incurred are computed. The results of these simulations show that the proposed scalable multiagent system uses fewer agents efficiently, makes faster decisions to reconfigure when a fault occurs, and incurs significantly less communication overhead.
Keywords: Multiagent system, scalable design, spectral clustering, reconfiguration.
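The partitioning step can be sketched with scikit-learn's spectral clustering on a precomputed adjacency matrix; the toy six-node network and the cluster count are illustrative assumptions, and the paper's specific modification of the technique is not reproduced.

```python
# Spectral clustering on a precomputed affinity (adjacency) matrix splits a
# network into logically related clusters; each cluster would then be
# assigned to a single monitoring agent.
import numpy as np
from sklearn.cluster import SpectralClustering

# Two loosely connected 3-node groups (symmetric adjacency weights, assumed).
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)

labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(A)
print("cluster assignment per node:", labels)   # one monitoring agent per cluster
```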
354 Research of the Behavior of Solar Module Frame Installed by Solar Clamping System by Finite Element Method
Authors: Li-Chung Su, Chia-Yu Chen, Tzu-Yuan Lai, Sheng-Jye Hwang
Abstract:
Mechanical design of the thin-film solar framed module and mounting system is important to enhance module reliability and to increase the range of applications. The stress induced by different mounting positions plays a main role in controlling the stability of the whole mechanical structure. From the finite element analysis, under pressure applied to the back of the module, the stress at Lc (center point of the long frame) increased while the stresses at the center, corner and Sc (center point of the short frame) decreased as the mounting position moved away from the center of the module. In addition, not only the stress in the glass but also the stress in the frame decreased. Accordingly, it was safer to mount in a position away from the center of the module. The emphasis of designing the frame system of the module was on the upper support of the short frame. The strength of the overall structure and the design of the corner were also important due to the complexity of the stress in the long frame.
Keywords: Finite element method, Framed module, Mounting position.
353 VLSI Design of 2-D Discrete Wavelet Transform for Area-Efficient and High-Speed Image Computing
Authors: Mountassar Maamoun, Mehdi Neggazi, Abdelhamid Meraghni, Daoud Berkani
Abstract:
This paper presents a VLSI design approach for high-speed, real-time 2-D Discrete Wavelet Transform computing. The proposed architecture, based on a new and fast convolution approach, reduces the hardware complexity in addition to reducing the critical path to the multiplier delay. Furthermore, an advanced two-dimensional (2-D) discrete wavelet transform (DWT) implementation, with an efficient memory area, is designed to produce one output in every clock cycle. As a result, a very high speed is attained. The system is verified, using JPEG2000 coefficient filters, on a Xilinx Virtex-II Field Programmable Gate Array (FPGA) device without accessing any external memory. The resulting computing rate is up to 270 M samples/s, and the (9,7) 2-D wavelet filter uses only 18 kb of memory (16 kb of first-in-first-out memory) for a 256×256 image size. In this way, the developed design requires little memory and provides very high-speed processing as well as high PSNR quality.
Keywords: Discrete Wavelet Transform (DWT), Fast Convolution, FPGA, VLSI.
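For reference, a software-level sketch of one 2-D DWT decomposition level using the PyWavelets package is shown below; the 'bior4.4' biorthogonal wavelet is used as a readily available stand-in for the (9,7) JPEG2000 filters, and the hardware convolution scheme of the paper is not reproduced.

```python
# One level of 2-D DWT and its inverse on a stand-in 256x256 image.
import numpy as np
import pywt

image = np.random.default_rng(0).random((256, 256))          # stand-in 256x256 image
cA, (cH, cV, cD) = pywt.wavedec2(image, "bior4.4", level=1)   # one decomposition level
print("approximation:", cA.shape, "detail sub-bands:", cH.shape, cV.shape, cD.shape)

reconstructed = pywt.waverec2([cA, (cH, cV, cD)], "bior4.4")  # perfect-reconstruction check
print("max reconstruction error:", float(np.abs(reconstructed - image).max()))
```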
352 Semi-automatic Construction of Ontology-based CBR System for Knowledge Integration
Authors: Junjie Gao, Guishi Deng
Abstract:
In order to integrate knowledge in heterogeneous case-based reasoning (CBR) systems, ontology-based CBR has become a hot topic. To solve the problems facing ontology-based CBR systems (for example, their architecture is nonstandard, reuse of knowledge in legacy CBR systems is deficient, and ontology construction is difficult), we propose a novel approach for semi-automatically constructing an ontology-based CBR system whose architecture is based on a two-layer ontology. Domain knowledge implied in legacy case bases can be mapped from relational database schemas and knowledge items to the relevant OWL local ontology automatically by a mapping algorithm with low time complexity. By concept clustering based on formal concept analysis, and by computing concept equation and concept inclusion measures, suggestions about enriching or amending the concept hierarchy of the OWL local ontologies are made automatically, which can help designers achieve semi-automatic construction of the OWL domain ontology. The approach is validated by an application example.
Keywords: OWL ontology, Case-based Reasoning, Formal Concept Analysis, Knowledge Integration.
351 Measuring Cognitive Load - A Solution to Ease Learning of Programming
Authors: Muhammed Yousoof, Mohd Sapiyan, Khaja Kamaluddin
Abstract:
Learning programming is difficult for many learners. Some studies have found that the main difficulty relates to cognitive load. Cognitive overload happens in programming due to the nature of the subject, which is intrinsically demanding on working memory; it arises from the complexity of the subject itself. The problem is made worse by the poor instructional design methodology used in the teaching and learning process. Various efforts have been proposed to reduce the cognitive load, e.g. visualization software and the part-program method, and many computer-based systems have also been tried to tackle the problem. However, little success has been achieved in alleviating the problem, and more has to be done to overcome this hurdle. This research attempts to understand how cognitive load can be managed so as to reduce overloading. We propose a mechanism to measure cognitive load during the pre-instruction, post-instruction and instructional stages of learning. This mechanism is used to guide the instruction: as the load changes, the instruction adapts itself to ensure cognitive viability. The mechanism could be incorporated as a sub-domain in the student model of various computer-based instructional systems to facilitate the learning of programming.
Keywords: Cognitive load, Working memory, Cognitive load measurement.
350 Speech Coding and Recognition
Authors: M. Satya Sai Ram, P. Siddaiah, M. Madhavi Latha
Abstract:
This paper investigates the performance of a speech recognizer in an interactive voice response system for speech signals coded using a vector quantization technique, namely the Multi Switched Split Vector Quantization technique. The process of recognizing the coded output can be used in voice banking applications. The recognition technique used for the coded speech signals is the Hidden Markov Model technique. The spectral distortion performance, computational complexity, and memory requirements of the Multi Switched Split Vector Quantization technique and the performance of the speech recognizer at various bit rates have been computed. The results show that the speech recognizer performs best at 24 bits/frame, with the recognition percentage varying from 100% to 93.33% across the bit rates considered.
Keywords: Linear predictive coding, Speech Recognition, Voice banking, Multi Switched Split Vector Quantization, Hidden Markov Model, Linear Predictive Coefficients.
349 Modeling the Symptom-Disease Relationship by Using Rough Set Theory and Formal Concept Analysis
Authors: Mert Bal, Hayri Sever, Oya Kalıpsız
Abstract:
Medical Decision Support Systems (MDSSs) are sophisticated, intelligent systems that can provide inference under lack of information and uncertainty. In such systems, various soft computing methods are used to model the uncertainty, such as Bayesian networks, rough sets, artificial neural networks, fuzzy logic, inductive logic programming and genetic algorithms, as well as hybrid methods formed from combinations of these. In this study, symptom-disease relationships are presented in a framework modeled with formal concept analysis and rough set theory, with diseases as objects and symptoms as attributes. After a concept lattice is formed, Bayes' theorem can be used to determine the relationships between attributes and objects. A discernibility relation, which forms the basis of rough sets, can be applied to attribute data sets in order to reduce attributes and decrease the complexity of computation.
Keywords: Formal Concept Analysis, Rough Set Theory, Granular Computing, Medical Decision Support System.
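A minimal sketch of the discernibility-based attribute reduction mentioned above, on a tiny invented symptom table (not clinical data): pairs of objects with different decisions are discerned by the attributes on which they differ, and a reduct is chosen greedily.

```python
# Rough-set style reduction: keep a subset of symptoms that still discerns
# every pair of patients with different diseases.
from itertools import combinations

attributes = ["fever", "cough", "rash"]        # condition attributes (symptoms)
table = [((1, 1, 0), "flu"),                   # (symptom values), decision (disease)
         ((1, 0, 1), "measles"),
         ((0, 1, 0), "cold"),
         ((1, 1, 1), "measles")]

# Discernibility: for each pair with different decisions, the set of
# attribute indices on which the two objects differ.
pairs = [frozenset(i for i, (x, y) in enumerate(zip(a, b)) if x != y)
         for (a, da), (b, db) in combinations(table, 2) if da != db]

reduct, remaining = set(), [p for p in pairs if p]
while remaining:
    # Greedily pick the attribute that discerns the most remaining pairs.
    best = max(range(len(attributes)), key=lambda i: sum(i in p for p in remaining))
    reduct.add(best)
    remaining = [p for p in remaining if best not in p]

print("reduct:", [attributes[i] for i in sorted(reduct)])   # e.g. ['fever', 'rash']
```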
348 A Characterized and Optimized Approach for End-to-End Delay Constrained QoS Routing
Authors: P.S.Prakash, S.Selvan
Abstract:
QoS routing aims to find paths between senders and receivers that satisfy the QoS requirements of the application while efficiently using network resources; the underlying routing algorithm must be able to find low-cost paths that satisfy given QoS constraints. The problem of finding least-cost constrained routes is known to be NP-hard, and some algorithms have been proposed to find near-optimal solutions. But these heuristics either impose relationships among the link metrics to reduce the complexity of the problem, which may limit the general applicability of the heuristic, or are too costly in terms of execution time to be applicable to large networks. In this paper, we analyze two algorithms, namely Characterized Delay Constrained Routing (CDCR) and Optimized Delay Constrained Routing (ODCR). The CDCR algorithm provides an approach to delay-constrained routing that captures the trade-off between cost minimization and the risk level regarding the delay constraint. ODCR uses an adaptive path weight function together with an additional constraint imposed on the path cost to restrict the search space, and hence finds a near-optimal solution in much less time.
Keywords: QoS, Delay, Routing, Optimization.
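To illustrate the underlying delay-constrained least-cost routing problem (not the CDCR/ODCR algorithms themselves), here is a small exact-search sketch on a toy graph; the graph, costs and delay bound are assumptions, and exact search is only practical for small instances since the general problem is NP-hard.

```python
import heapq

# graph[u] = list of (neighbour, link cost, link delay); values are assumptions
graph = {"s": [("a", 1, 4), ("b", 3, 1)],
         "a": [("t", 1, 4)],
         "b": [("t", 1, 2)],
         "t": []}

def delay_constrained_route(graph, src, dst, max_delay):
    # Labels (cost, delay, node, path) explored in order of increasing cost;
    # extensions that would violate the delay bound are pruned. The first
    # time dst is popped, the cheapest feasible path has been found.
    heap = [(0, 0, src, [src])]
    while heap:
        cost, delay, node, path = heapq.heappop(heap)
        if node == dst:
            return cost, delay, path
        for nxt, c, d in graph[node]:
            if delay + d <= max_delay and nxt not in path:
                heapq.heappush(heap, (cost + c, delay + d, nxt, path + [nxt]))
    return None   # no path satisfies the delay constraint

# The cheaper path s-a-t (cost 2, delay 8) violates the bound, so s-b-t is returned.
print(delay_constrained_route(graph, "s", "t", max_delay=5))
```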
347 Participatory Patterns of Community in Water and Waste Management: A Case Study of Municipality in Amphawa District, Samut Songkram Province
Authors: Srisuwan Kasemsawat
Abstract:
This is a survey research study using quantitative and qualitative methodology. There were three objectives: 1) to study the participatory level of the community in water and waste environment management; 2) to study the factors affecting community participation in water and waste environment management in Ampawa District, Samut Songkram Province; 3) to search for participatory patterns in water and waste management. The population sample for the quantitative research was 1,364 people living in Ampawa District, selected by simple random sampling, and the research instrument was a questionnaire. The qualitative research used purposive sampling in six sub-districts, namely Ta Ka, Suanluang, Bangkae, Muangmai, Kwae-om, and Bangnanglee Sub District Administration Organizations, with 63 participants in total. For data analysis, the study used content analysis of the quantitative results to synthesize and build a question frame for the interviews, and conducted focus group interviews. The study found that community participation levels in planning, operation, and evaluation of water and waste management are moderate, while participation in receiving benefits is at a low level; therefore, the overall participatory level of the community in water and waste environment management is medium. The factors affecting community participation in water and waste management are age, the period of dwelling in the community, and membership, for which the mean differences are statistically significant at the 0.05 level in the areas of operation, benefit, and evaluation. For patterns of community participation, there is a correlation with water and waste management in four areas: 1) participation in planning; 2) participation in operation; 3) participation in benefits, both direct and indirect; 4) participation in evaluation and monitoring. The recommendation from this study is the need to create conscious awareness in order to increase people's participation level by organizing activities that promote participation with volunteer spirit. Government should open opportunities for people to participate in sharing ideas and create a culture of living together with equality, which would build more concrete participation.
Keywords: Participation, Participatory Patterns, Water and Waste Management, Environmental Management.
346 A Cost Function for Joint Blind Equalization and Phase Recovery
Authors: Reza Berangi, Morteza Babaee, Majid Soleimanipour
Abstract:
In this paper a new cost function for blind equalization is proposed. The proposed cost function, referred to as the modified maximum normalized cumulant criterion (MMNC), is an extension of the previously proposed maximum normalized cumulant criterion (MNC). While the MNC requires a separate phase recovery system after blind equalization, the MMNC performs joint blind equalization and phase recovery. To achieve this, the proposed algorithm maximizes a cost function that considers both the amplitude and the phase of the equalizer output. The simulation results show that the proposed algorithm achieves better channel equalization than the MNC algorithm and can simultaneously correct the phase error, which the MNC algorithm is unable to do. The simulation results also show that the MMNC algorithm has lower complexity than the MNC algorithm. Moreover, the MMNC algorithm outperforms the MNC algorithm particularly when the symbol block size is small.
Keywords: Blind equalization, maximum normalized cumulant criterion (MNC), intersymbol interference (ISI), modified MNC criterion (MMNC), phase recovery.
345 Spatial-Temporal Awareness Approach for Extensive Re-Identification
Authors: Tyng-Rong Roan, Fuji Foo, Wenwey Hseush
Abstract:
Recent developments in AI and edge computing play a critical role in capturing meaningful events such as the detection of an unattended bag. One of the core problems is re-identification across multiple CCTVs. Immediately following the detection of a meaningful event, the objects related to the event must be tracked and traced. In an extensive environment, the challenge becomes severe when the number of CCTVs increases substantially, imposing difficulties in achieving high accuracy while maintaining real-time performance. The algorithm that re-identifies cross-boundary objects for extensive tracking is referred to as Extensive Re-Identification, which emphasizes the issues arising from the complexity behind a great number of CCTVs. The Spatial-Temporal Awareness approach challenges the conventional thinking and concept of operations, which is labor-intensive and time-consuming. The ability to perform Extensive Re-Identification through a multi-sensory network provides next-level insights, creating value beyond traditional risk management.
Keywords: Long-short-term memory, re-identification, security critical application, spatial-temporal awareness.
344 ECG-Based Heartbeat Classification Using Convolutional Neural Networks
Authors: Jacqueline R. T. Alipo-on, Francesca I. F. Escobar, Myles J. T. Tan, Hezerul Abdul Karim, Nouar AlDahoul
Abstract:
Electrocardiogram (ECG) signal analysis and processing are crucial in the diagnosis of cardiovascular diseases, which are considered one of the leading causes of mortality worldwide. However, traditional rule-based analysis of large volumes of ECG data is time-consuming, labor-intensive, and prone to human error. With the advancement of the programming paradigm, algorithms such as machine learning have been increasingly used to analyze ECG signals. In this paper, various deep learning algorithms were adapted to classify five heartbeat classes. The dataset used in this work is the synthetic MIT-Beth Israel Hospital (MIT-BIH) Arrhythmia dataset produced from generative adversarial networks (GANs). Various deep learning models such as the ResNet-50 convolutional neural network (CNN), a 1-D CNN, and long short-term memory (LSTM) were evaluated and compared. ResNet-50 was found to outperform the other models in terms of recall and F1 score, with five-fold average scores of 98.88% and 98.87%, respectively. The 1-D CNN, on the other hand, was found to have the highest average precision of 98.93%.
Keywords: Heartbeat classification, convolutional neural network, electrocardiogram signals, ECG signals, generative adversarial networks, long short-term memory, LSTM, ResNet-50.
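A minimal 1-D CNN sketch in Keras in the spirit of the models compared above; the input length, layer sizes and random stand-in data are illustrative assumptions and do not reproduce the paper's architectures or the GAN-generated MIT-BIH dataset.

```python
# Small 1-D CNN for 5-class heartbeat classification on random stand-in data.
import numpy as np
import tensorflow as tf

num_classes, beat_len = 5, 187                      # assumed beat segment length
x = np.random.default_rng(0).random((256, beat_len, 1)).astype("float32")
y = np.random.default_rng(1).integers(0, num_classes, size=256)

model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(32, kernel_size=5, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(64, kernel_size=5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x, y, epochs=2, batch_size=32, verbose=0)   # placeholder training run
print(model.predict(x[:1]).argmax(axis=1))            # predicted beat class
```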
343 Using Analytical Hierarchy Process and TOPSIS Approaches in Designing a Finite Element Analysis Automation Program
Authors: Ming Wen, Nasim Nezamoddini
Abstract:
Sophisticated numerical simulations like finite element analysis (FEA) involve a complicated process, from model setup to post-processing tasks, that requires replication of time-consuming steps. Utilizing an FEA automation program simplifies the complexity of the involved steps while minimizing human error in analysis setup, calculations, and results processing. One of the main challenges in designing FEA automation programs is to identify user requirements and link them to possible design alternatives. This paper presents a decision-making framework to design a Python-based FEA automation program for modal analysis, frequency response analysis, and random vibration fatigue (RVF) analysis procedures. The analytical hierarchy process (AHP) and the technique for order preference by similarity to ideal solution (TOPSIS) are applied to evaluate design alternatives considering the feedback received from experts and program users.
Keywords: FEA, random vibration fatigue, process automation, AHP, TOPSIS, multiple-criteria decision-making, MCDM.
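The TOPSIS ranking step can be sketched in a few lines of Python; the three design alternatives, criteria and weights (which would come from AHP in the paper's framework) are illustrative assumptions.

```python
# TOPSIS: normalize the decision matrix, weight it, and rank alternatives by
# closeness to the ideal solution.
import numpy as np

# rows: design alternatives; columns: criteria (e.g. usability, run time, effort)
X = np.array([[7.0, 120.0, 3.0],
              [9.0, 150.0, 5.0],
              [6.0,  90.0, 2.0]])
weights = np.array([0.5, 0.3, 0.2])          # e.g. derived from AHP comparisons
benefit = np.array([True, False, False])     # higher-is-better vs lower-is-better

V = weights * X / np.linalg.norm(X, axis=0)              # weighted normalized matrix
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))  # positive ideal solution
anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))  # negative ideal solution

d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)                      # relative closeness score
print("closeness scores:", closeness, "best alternative:", int(closeness.argmax()))
```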
342 Educational Experiences in Engineering in the COVID-19 Era and Their Comparative Analysis: Spain, March-June 2020
Authors: Borja Bordel, Ramón Alcarria, Marina Pérez
Abstract:
In March 2020, an unexpected health crisis caused by COVID-19 was declared in Spain. All of a sudden, all degrees, classes, evaluation tests and projects had to be transformed into online activities. However, the chaotic situation generated by such a complex operation, executed without any well-established procedure, led to very different experiences and, finally, results. In this paper, we describe three experiences in two different universities in Madrid: on the one hand, the Technical University of Madrid, a public university with little experience in online education, and on the other hand, Alfonso X el Sabio University, a private university with more than five years of experience in online teaching. All analyzed subjects were related to computer engineering. Professors and students answered a survey, and personal interviews were also carried out. Besides, the professors' workload and the students' academic results were compared. From the comparative analysis of all these experiences, we extract the most successful strategies, methodologies, and activities. The recommendations in this paper will be useful for courses during the coming months, while the health situation is still affecting educational organizations, and will at the same time serve as input for the upcoming digitalization process of higher education.
Keywords: educational experience, online education, higher education digitalization, COVID, Spain
341 Development of Elementary Literacy in the Czech Republic
Authors: Iva Košek Bartošová
Abstract:
Great attention is being paid to the development of first reading, and thus early literacy skills, in the Czech Republic. Yet inconclusive results from PISA and PIRLS force us to reconsider the teacher's work, his or her roles in the education process, and the methods and forms used in lessons. It is also important to monitor the family environment and the pupils themselves. The aim of this publication is to focus on methods of practicing reading technique and their results in the process of comprehension. The first part of the contribution presents the goals of reading literacy development and the methods used in reading practice in some EU countries, followed by a comparison of research implemented with the help of a modern eye-tracker device in 2015 and research conducted at the Institute of Education and Psychological Counselling of the Czech Republic in 2011/12. These are the results of a diagnostic test of reading in the first grades of primary schools taught by the genetic method and the analytic-synthetic method. The results show that in the first stage of practice there are no statistically significant differences between subjects taught by the different methods of reading practice (using several diagnostic texts focused on reading technique and its comprehension). Different results appear at the end of Grade One and during Grade Two of primary school.
Keywords: Elementary literacy, eye tracker device, diagnostic reading tests, reading teaching method.
340 The Effect of an Al Andalus Fused Curriculum Model on the Learning Outcomes of Elementary School Students
Authors: Sobhy Fathy A. Hashesh
Abstract:
The study was carried out in the elementary classes of Andalus Private Schools, girls' section, using control and experimental groups formed by a random assignment strategy. The study aimed at investigating the effect of the Al-Andalus Fused Curriculum (AFC) model of learning, compared with the separate-subjects approach, on the development of students' conceptual learning and skill acquisition. The population of the study comprised the elementary school students, girls' section, of Al-Andalus Private Schools (N=240), while the sample comprised two randomly assigned groups (N=28), one experimental group and one control group. The study followed quantitative and qualitative approaches in collecting and analyzing data to investigate the study hypotheses. Results of the study revealed that there were statistically significant differences in students' conceptual learning and skill acquisition in favor of the experimental group. The study recommends applying this model to different educational variables and other age groups to generate more data, leading to more educational results in favor of students' learning outcomes.
Keywords: AFC, Lego Education, mechatronics, STEAM, Al-Andalus Fused Curriculum.
339 A Simplified Adaptive Decision Feedback Equalization Technique for π/4-DQPSK Signals
Authors: V. Prapulla, A. Mitra, R. Bhattacharjee, S. Nandi
Abstract:
We present a simplified equalization technique for a π/4 differential quadrature phase shift keying (π/4-DQPSK) modulated signal in a multipath fading environment. The proposed equalizer is realized as a fractionally spaced adaptive decision feedback equalizer (FS-ADFE), employing an exponential step-size least mean square (LMS) algorithm as the adaptation technique. The main advantage of the scheme stems from the usage of the exponential step-size LMS algorithm in the equalizer, which achieves convergence behavior similar to that of a recursive least squares (RLS) algorithm with significantly reduced computational complexity. To investigate the finite-precision performance of the proposed equalizer along with the π/4-DQPSK modem, the entire system is evaluated in a 16-bit fixed-point digital signal processor (DSP) environment. The proposed scheme is found to be attractive even for those cases where equalization is to be performed within a restricted number of training samples.
Keywords: Adaptive decision feedback equalizer, Fractionally spaced equalizer, π/4-DQPSK signal, Digital signal processor.
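As a simplified stand-in for the paper's FS-ADFE (a plain symbol-spaced linear equalizer on BPSK rather than a fractionally spaced decision-feedback structure on π/4-DQPSK), the sketch below shows the exponential step-size idea in an LMS update; all parameter values are assumptions.

```python
# LMS equalizer whose adaptation gain decays geometrically toward a floor,
# approximating RLS-like convergence at LMS cost.
import numpy as np

rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, 1.0], size=2000)               # BPSK training symbols
channel = np.array([1.0, 0.4, 0.2])                        # assumed multipath channel
received = np.convolve(symbols, channel)[:symbols.size]
received += 0.01 * rng.standard_normal(received.size)

taps, delay = np.zeros(7), 3                               # 7-tap equalizer, 3-symbol decision delay
mu, mu_min, decay = 0.5, 0.01, 0.999                       # exponential step-size schedule
errors = []
for n in range(taps.size - 1, symbols.size):
    x = received[n - taps.size + 1:n + 1][::-1]            # most recent samples first
    err = symbols[n - delay] - taps @ x                    # error vs delayed training symbol
    taps += mu * err * x / (x @ x + 1e-9)                  # normalized LMS correction
    mu = max(mu_min, mu * decay)                           # step size decays geometrically
    errors.append(err)

print("mean |error|, first 200 updates:", np.mean(np.abs(errors[:200])))
print("mean |error|, last 200 updates: ", np.mean(np.abs(errors[-200:])))
```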
338 A Comparison of Different Soft Computing Models for Credit Scoring
Authors: Nnamdi I. Nwulu, Shola G. Oroja
Abstract:
It has become crucial over the years for nations to improve their credit scoring methods and techniques in light of the increasing volatility of the global economy. Statistical methods and tools have been the favoured means for this; however, artificial intelligence or soft computing based techniques are becoming increasingly preferred due to their proficient and precise nature and relative simplicity. This work presents a comparison between Support Vector Machines and Artificial Neural Networks, two popular soft computing models, when applied to credit scoring. Among the different criteria that can be used for comparison, accuracy, computational complexity and processing time are the criteria selected to evaluate both models. Furthermore, the German credit scoring dataset, which is a real-world dataset, is used to train and test both developed models. Experimental results obtained from our study suggest that although both soft computing models can be used with a high degree of accuracy, Artificial Neural Networks deliver better results than Support Vector Machines.
Keywords: Artificial Neural Networks, Credit Scoring, Soft Computing Models, Support Vector Machines.
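A minimal sketch of the comparison described above, using a synthetic stand-in for the German credit dataset and illustrative hyper-parameters (not those of the paper): an SVM and a small neural network are trained on the same data and compared on accuracy and fit time.

```python
# Compare an SVM and a small neural network on the same (synthetic) credit data.
import time
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=20, weights=[0.7, 0.3],
                           random_state=0)          # stand-in for 1000 credit records
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for name, model in [("SVM", SVC(kernel="rbf")),
                    ("ANN", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500))]:
    clf = make_pipeline(StandardScaler(), model)     # scale inputs for both models
    start = time.perf_counter()
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", round(clf.score(X_te, y_te), 3),
          "fit time (s):", round(time.perf_counter() - start, 3))
```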
337 Characterization of Corn Cobs from Microwave and Potassium Hydroxide Pretreatment
Authors: Boonyisa Wanitwattanarumlug, Apanee Luengnaruemitchai, Sujitra Wongkasemjit
Abstract:
The complexity of lignocellulosic biomass requires a pretreatment step to improve the yield of fermentable sugars. The efficient pretreatment of corn cobs using microwave irradiation and potassium hydroxide, followed by enzymatic hydrolysis, was investigated. The objective of this work was to characterize the optimal conditions for pretreatment of corn cobs using microwave and potassium hydroxide to enhance enzymatic hydrolysis. Corn cobs were submerged in different potassium hydroxide concentrations at various temperatures and residence times. The pretreated corn cobs were hydrolyzed to produce reducing sugar for analysis. The morphology and microstructure of the samples were investigated by thermal gravimetric analysis (TGA), scanning electron microscopy (SEM), and X-ray diffraction (XRD). The results showed that lignin and hemicellulose were removed by the microwave/potassium hydroxide pretreatment. The crystallinity of the pretreated corn cobs was higher than that of the untreated ones. This method was compared with autoclave and conventional heating methods. The results indicated that microwave-alkali treatment was an efficient way to improve the enzymatic hydrolysis rate by increasing the accessibility of the biomass to hydrolysis enzymes.
Keywords: Corn cobs, Enzymatic hydrolysis, Microwave, Potassium hydroxide, Pretreatment.
336 A Performance Appraisal of Neural Networks Developed for Response Prediction across Heterogeneous Domains
Authors: H. Soleimanjahi, M. J. Nategh, S. Falahi
Abstract:
Deciding the numerous parameters involved in designing a competent artificial neural network is a complicated task. The existence of several options for selecting an appropriate network architecture adds to this complexity, especially when different applications of heterogeneous natures are concerned. Two completely different applications, in engineering and in medical science, were selected in the present study: prediction of workpiece surface roughness in ultrasonic-vibration-assisted turning, and papilloma virus oncogenicity. Several neural network architectures with different parameters were developed for each application and the results were compared. It is illustrated in this paper that some applications, such as the first one mentioned above, are apt to be modeled by a single network with sufficient accuracy, whereas others, such as the second application, are best modeled by different expert networks for different ranges of output. Development of knowledge about the essentials of neural networks for different applications is regarded as the cornerstone of multidisciplinary network design programs to be developed as a means of reducing inconsistencies and the burden of user intervention.
Keywords: Artificial Neural Network, Malignancy Diagnosis, Papilloma Virus Oncogenicity, Surface Roughness, Ultrasonic-Vibration-Assisted Turning.
335 Intelligent Modeling of the Electrical Activity of the Human Heart
Authors: Lambros V. Skarlas, Grigorios N. Beligiannis, Efstratios F. Georgopoulos, Adam V. Adamopoulos
Abstract:
The aim of this contribution is to present a new approach to modeling the electrical activity of the human heart. A recurrent artificial neural network is used in order to exhibit a subset of the dynamics of the electrical behavior of the human heart. The proposed model can also be used, when integrated, as a diagnostic tool for the human heart system. What makes this approach unique is the fact that every model is developed from physiological measurements of an individual. This kind of approach is very difficult to apply successfully in many modeling problems because of the complexity and entropy of the free variables describing the complex system. Differences between the modeled variables and the variables of an individual, measured at specific moments, can be used for diagnostic purposes. The sensor fusion used in order to optimize the utilization of biomedical sensors is another point on which this paper focuses. Sensor fusion has been known for its advantages in applications such as control and diagnostics of mechanical and chemical processes.
Keywords: Artificial Neural Networks, Diagnostic System, Health Condition Modeling Tool, Heart Diagnostics Model, Heart Electricity Model.