Search results for: elliptic curve digital signature algorithm
2038 Using Flow Line Modelling, Remote Sensing for Reconstructing Glacier Volume Loss Model for Athabasca Glacier, Canadian Rockies
Authors: Rituparna Nath, Shawn J. Marshall
Abstract:
Glaciers are among the most sensitive climatic indicators, as they respond strongly to small climatic shifts. We develop a flow line model of glacier dynamics to simulate the past and future extent of glaciers in the Canadian Rocky Mountains, with the aim of coupling this model within larger-scale regional climate models of glacier response to climate change. This paper focuses on glacier-climate modeling and reconstructions of glacier volume from the Little Ice Age (LIA) to the present for Athabasca Glacier, Alberta, Canada. Glacier thickness, volume, and mass change are reconstructed using flow line modelling and the examination of different climate scenarios that are able to give good reconstructions of LIA ice extent. With the availability of SPOT 5 imagery, digital elevation models, and the GIS Arc Hydro tool, ice catchment properties (glacier width and LIA moraines) have been extracted using automated procedures. Simulation of glacier mass change will inform estimates of meltwater runoff over the historical period, and model calibration from the LIA reconstruction will aid in future projections of the effects of climate change on glacier recession. Furthermore, the model developed will be useful for future studies with ensembles of glaciers.
Keywords: flow line modeling, Athabasca Glacier, glacier mass balance, remote sensing, Arc Hydro tool, Little Ice Age
Procedia PDF Downloads 268
2037 Harmonic Analysis to Improve Power Quality
Authors: Rumana Ali
Abstract:
The presence of nonlinear and power-electronic switching devices produces distorted output and injects harmonics into the system. This paper presents a technique to analyze harmonics using a digital storage oscilloscope (DSO). Measurements are taken with the DSO in the power distribution system, and the waveforms are analyzed using an FFT program. The results of this work help the investigator install an appropriate compensating device to mitigate the harmonics and, in turn, improve the power quality. This case study is carried out at AIT Chikmagalur as a starting step towards improving energy efficiency there, with the overall aim of reducing the electricity bill through a complete energy audit of the institution. The following strategies were proposed to analyze the power quality in the EEE department of the institution. Strategy 1: The power factor is measured using the energy meter. Power factor improvement may reduce the voltage drop in lines; this brings the voltages at the sockets in the labs closer to the nominal voltage of 230 V, and thus power quality improves. Strategy 2: The harmonics at the power inlet are measured by means of a DSO. The DSO waveform is analyzed using FFT to find the percentage of each harmonic up to the 13th harmonic of 50 Hz. Reducing the harmonics at the inlet of the EEE department may reduce line losses and therefore reduce the institution's energy bill.
Keywords: harmonic analysis, energy bill, power quality, electronic switching devices
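The abstract does not spell out the FFT step; the sketch below shows one minimal way to extract the harmonic magnitudes up to the 13th harmonic of 50 Hz, plus a total harmonic distortion figure, from a sampled waveform. The function name, sampling rate, and test signal are hypothetical, not taken from the paper.

```python
import numpy as np

def harmonic_profile(samples, fs, f0=50.0, n_harmonics=13):
    """Magnitude of each harmonic of f0 (up to the 13th) and THD,
    from a uniformly sampled waveform. Hypothetical helper; the
    paper's DSO/FFT toolchain is not specified in the abstract."""
    spectrum = np.abs(np.fft.rfft(samples)) / (len(samples) / 2)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    mags = []
    for k in range(1, n_harmonics + 1):
        idx = np.argmin(np.abs(freqs - k * f0))  # nearest FFT bin
        mags.append(spectrum[idx])
    thd = np.sqrt(sum(m ** 2 for m in mags[1:])) / mags[0]
    return mags, thd

# Example: a 50 Hz wave with 10% third and 5% fifth harmonic
fs = 10_000
t = np.arange(0, 1.0, 1.0 / fs)
v = (np.sin(2*np.pi*50*t) + 0.10*np.sin(2*np.pi*150*t)
     + 0.05*np.sin(2*np.pi*250*t))
mags, thd = harmonic_profile(v, fs)
print(f"THD = {thd:.1%}")  # ~11.2% for this test signal
```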
Procedia PDF Downloads 309
2036 PEA Design of the Direct Control for Training Motor Drives
Authors: Abdulatif Abdulsalam Mohamed Shaban
Abstract:
This paper surveys the state of the art of Procedure Entry Array (PEA) design, with a focus on control system applications. It begins with an overview of PEA technology development, followed by a survey of design technologies and of the use of programmable description languages and system-level design tools, which allow a practical approach based on a unique model for complete engineering electronics systems. Three main design rules are implemented in the system: algorithm-based fine-tuning, modularity, and control action under architectural constraints. An overview of the contributions and limits of PEAs is also given, followed by a short survey of PEA-based intelligent controllers for recent engineering systems. Finally, two complete and timely case studies are presented to illustrate the benefits of a PEA implementation when using the proposed system modelling and design approach. These consist of the direct control for training motor drives and the control of a diesel-driven stand-alone generator with the help of logical design.
Keywords: control (DC), engineering electronics systems, training motor drives, procedure entry array
Procedia PDF Downloads 514
2035 Sensor Fault-Tolerant Model Predictive Control for Linear Parameter Varying Systems
Authors: Yushuai Wang, Feng Xu, Junbo Tan, Xueqian Wang, Bin Liang
Abstract:
In this paper, a sensor fault-tolerant control (FTC) scheme using robust model predictive control (RMPC) and set-theoretic fault detection and isolation (FDI) is extended to linear parameter varying (LPV) systems. First, a group of set-valued observers is designed for passive fault detection (FD), and the observer gains are obtained by minimizing the size of the invariant set of the state estimation-error dynamics. Second, an input set for fault isolation (FI) is designed offline through set theory for actively isolating faults after FD. Third, an RMPC controller based on state estimation for LPV systems is designed to control the system in the presence of disturbance and measurement noise and to tolerate faults. In addition, an FTC algorithm is proposed to keep the plant operating in the corresponding mode when a fault occurs. Finally, a numerical example is used to show the effectiveness of the proposed results.
Keywords: fault detection, linear parameter varying, model predictive control, set theory
Procedia PDF Downloads 252
2034 Performance of Neural Networks vs. Radial Basis Functions When Forming a Metamodel for Residential Buildings
Authors: Philip Symonds, Jon Taylor, Zaid Chalabi, Michael Davies
Abstract:
With the world climate projected to warm and major cities in developing countries becoming increasingly populated and polluted, governments are tasked with the problem of overheating and air quality in residential buildings. This paper presents the development of an adaptable model of these risks. Simulations are performed using the EnergyPlus building physics software. An accurate metamodel is formed by randomly sampling building input parameters and training on the outputs of EnergyPlus simulations. Metamodels are used to vastly reduce the amount of computation time required when performing optimisation and sensitivity analyses. Neural Networks (NNs) are compared to a Radial Basis Function (RBF) algorithm when forming a metamodel. These techniques were implemented using the PyBrain and scikit-learn python libraries, respectively. NNs are shown to perform around 15% better than RBFs when estimating overheating and air pollution metrics modelled by EnergyPlus.
Keywords: neural networks, radial basis functions, metamodelling, python machine learning libraries
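As a rough illustration of this kind of comparison, the sketch below fits a small neural network and an RBF-based model on stand-in data and compares held-out scores. It stays entirely in scikit-learn and uses KernelRidge with an RBF kernel as a stand-in for the paper's RBF algorithm; the data, layer sizes, and hyperparameters are assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Stand-in data: rows = sampled building parameters, y = an
# overheating metric that EnergyPlus would compute in the real study.
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 8))
y = X @ rng.uniform(size=8) + 0.1 * rng.normal(size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

nn = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                  random_state=0).fit(X_tr, y_tr)
rbf = KernelRidge(kernel="rbf", alpha=1e-3, gamma=1.0).fit(X_tr, y_tr)

print("NN  R^2:", r2_score(y_te, nn.predict(X_te)))
print("RBF R^2:", r2_score(y_te, rbf.predict(X_te)))
```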
Procedia PDF Downloads 4472033 Sparse-View CT Reconstruction Based on Nonconvex L1 − L2 Regularizations
Authors: Ali Pour Yazdanpanah, Farideh Foroozandeh Shahraki, Emma Regentova
Abstract:
The reconstruction from sparse-view projections is one of the important problems in computed tomography (CT), limited by the availability or feasibility of obtaining a large number of projections. Traditionally, convex regularizers have been exploited to improve the reconstruction quality in sparse-view CT, and the convex constraint in those problems leads to an easy optimization process. However, convex regularizers often result in a biased approximation and inaccurate reconstruction in CT problems. Here, we present a nonconvex, Lipschitz continuous, and non-smooth regularization model. The CT reconstruction is formulated as a nonconvex constrained L1 − L2 minimization problem and solved through a difference-of-convex algorithm and the alternating direction method of multipliers, which generates a better result than L0 or L1 regularizers in CT reconstruction. We compare our method with previously reported high-performance methods that use convex regularizers such as TV, wavelet, curvelet, and curvelet+TV (CTV) on test phantom images. The results show that there are benefits in using the nonconvex regularizer in sparse-view CT reconstruction.
Keywords: computed tomography, non-convex, sparse-view reconstruction, L1-L2 minimization, difference of convex functions
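For intuition, here is a minimal difference-of-convex (DCA) sketch for the L1 − L2 penalty on a generic least-squares data term: the concave term −||x||₂ is linearized at the current iterate, and the convex subproblem is solved with proximal-gradient (soft-thresholding) steps. This is a simplification under stated assumptions; the paper's solver additionally uses ADMM and a CT projection operator, neither of which appears here.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def dca_l1_minus_l2(A, b, lam=10.0, outer=20, inner=50):
    # Minimize lam/2 ||Ax - b||^2 + ||x||_1 - ||x||_2 by DCA:
    # linearize -||x||_2 at x_k, then run ISTA on the convex part.
    n = A.shape[1]
    x = np.zeros(n)
    step = 1.0 / (lam * np.linalg.norm(A, 2) ** 2)  # 1 / Lipschitz const
    for _ in range(outer):
        nrm = np.linalg.norm(x)
        g = x / nrm if nrm > 0 else np.zeros(n)     # subgradient of ||x||_2
        for _ in range(inner):
            grad = lam * A.T @ (A @ x - b) - g      # smooth + linearized term
            x = soft_threshold(x - step * grad, step)
    return x

# Toy demo: recover a sparse vector from underdetermined measurements
rng = np.random.default_rng(0)
A = rng.normal(size=(40, 100))
x_true = np.zeros(100); x_true[:5] = 1.0
x_hat = dca_l1_minus_l2(A, A @ x_true)
print("recovery error:", np.linalg.norm(x_hat - x_true))
```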
Procedia PDF Downloads 316
2032 Biometric Identification with Latitude and Longitude Fingerprint Verification for Attendance
Authors: Muhammad Fezan Afzal, Imran Khan, Salma Imtiaz
Abstract:
The need for human verification and identification for authentication goes back centuries. Since biometrics are used in major institutions such as financial, government, and crime departments, continued effort is needed to make these systems more efficient and to prevent security breaches. Multiple devices are used to authenticate the biometrics of each individual, and a large number of devices are required to cover a large number of users; as the number of devices increases, cost automatically increases. Furthermore, authentication is time-consuming because the devices are insufficient in number and are not available at every door. In this paper, we propose a framework and algorithm in which each individual's mobile phone can also perform biometric authentication for attendance and security. Every mobile phone has a biometric authentication system that is used in different mobile applications for security purposes; therefore, each individual can use the mobile biometric system without moving from one place to another. Moreover, by using mobile biometrics, the cost of the dedicated biometric systems that are mostly deployed in organizations for the attendance of students and employees and for other security purposes can be eliminated.
Keywords: fingerprint, fingerprint authentication, mobile verification, mobile biometric verification, mobile fingerprint sensor
Procedia PDF Downloads 69
2031 Effective Validation Model and Use of Mobile-Health Apps for Elderly People
Authors: Leonardo Ramirez Lopez, Edward Guillen Pinto, Carlos Ramos Linares
Abstract:
The controversy brought about by the increasing use of mHealth apps and questions about their effectiveness for disease prevention and diagnosis calls for immediate control. Although a critical topic in research areas such as medicine, engineering, and economics, this issue lacks reliable implementation models. However, projects such as the Open Web Application Security Project (OWASP) and various studies have helped to create useful and reliable apps. This research is conducted under a quality model to optimize two mHealth apps for older adults. The analysis of results on the use of two physical activity monitoring apps, AcTiv (physical activity) and SMCa (energy expenditure), is positive. Through a theoretical and practical analysis, precision calculations and personal information control for older adults for disease prevention and diagnosis were performed. Finally, the apps were validated by a physician and, as a result, may be used as health monitoring tools in physical performance centers or any other physical activity setting. The results obtained provide an effective validation model for this type of mobile app, which, in turn, may be applied by other software developers who, along with medical staff, would offer digital healthcare tools for elderly people.
Keywords: model, validation, effective, healthcare, elderly people, mobile app
Procedia PDF Downloads 218
2030 Metareasoning Image Optimization Q-Learning
Authors: Mahasa Zahirnia
Abstract:
The purpose of this paper is to explore new and effective ways of optimizing satellite images using artificial intelligence, and the process of implementing reinforcement learning to enhance the quality of data captured within the image. In our implementation of Bellman's reinforcement learning equations, associated state diagrams, and multi-stage image processing, we were able to enhance image quality and to detect and define objects. Reinforcement learning is a differentiator in the area of artificial intelligence, and Q-Learning relies on trial and error to achieve its goals. The reward system embedded in Q-Learning allows the agent to self-evaluate its performance and decide on the best possible course of action based on the current and future environment. Results show that, within a simulated environment built on commercially available images, the rate of detection was 40-90%. Reinforcement learning through the Q-Learning algorithm is not just a desired but a required design criterion for image optimization and enhancement. The proposed methods are a cost-effective way of resolving uncertainty in the data, because reinforcement learning finds ideal policies to manage the process using a smaller sample of images.
Keywords: Q-learning, image optimization, reinforcement learning, Markov decision process
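The abstract does not reproduce the update rule it implements; below is a minimal tabular sketch of the Q-Learning (Bellman) update with an epsilon-greedy policy. The environment interface, the toy task, and all hyperparameters are hypothetical stand-ins; in the paper, the reward would encode the image-quality and detection criteria.

```python
import numpy as np

def q_learning(n_states, n_actions, step_fn, episodes=500,
               alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    """Tabular Q-Learning. step_fn(s, a) -> (next_state, reward, done)
    is the environment; it is left abstract here, since the paper's
    image-quality reward is not specified in the abstract."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy: explore with probability eps, else exploit
            a = (rng.integers(n_actions) if rng.random() < eps
                 else int(np.argmax(Q[s])))
            s2, r, done = step_fn(s, a)
            # Bellman update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
            Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])
            s = s2
    return Q

def toy_step(s, a):
    # Hypothetical two-state task: action 1 reaches the terminal
    # state with reward 1; action 0 stays put with no reward.
    return (1, 1.0, True) if a == 1 else (0, 0.0, False)

Q = q_learning(n_states=2, n_actions=2, step_fn=toy_step)
print(Q[0])  # Q(0, 1) should approach 1.0
```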
Procedia PDF Downloads 215
2029 Multiple Images Stitching Based on Gradually Changing Matrix
Authors: Shangdong Zhu, Yunzhou Zhang, Jie Zhang, Hang Hu, Yazhou Zhang
Abstract:
Image stitching is a very important branch in the field of computer vision, especially for panoramic maps. In order to eliminate shape distortion, a novel stitching method based on a gradually changing matrix is proposed for horizontally captured images. For images captured horizontally, this paper assumes that there is only a translational operation in image stitching. By analyzing each parameter of the homography matrix, the global homography matrix is gradually transformed into a translation matrix so as to eliminate the effects of scaling, rotation, etc. in the image transformation. This paper adopts matrix approximation to find the minimum of the energy function so that the shape distortion in the regions governed by the homography can be minimized. The proposed method can avoid the failure of multiple horizontal image stitching caused by accumulated shape distortion. At the same time, it can be combined with the As-Projective-As-Possible algorithm to ensure precise alignment of the overlapping area.
Keywords: image stitching, gradually changing matrix, horizontal direction, matrix approximation, homography matrix
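The abstract does not give the transfer schedule; one plausible, deliberately simplified reading is a linear blend between the full homography and its translation-only part, sketched below. The blending weight and the normalization are assumptions; the paper analyzes each homography parameter individually, which this sketch does not attempt.

```python
import numpy as np

def blend_to_translation(H, t):
    """Gradually change a homography toward a pure translation.
    H is a 3x3 homography (normalized so H[2, 2] == 1); t in [0, 1].
    t = 0 returns H itself, t = 1 a translation-only matrix. This
    linear schedule is an assumption, not the paper's exact rule."""
    H = H / H[2, 2]
    T = np.eye(3)
    T[0, 2], T[1, 2] = H[0, 2], H[1, 2]  # keep only the translation part
    return (1.0 - t) * H + t * T

H = np.array([[1.02, 0.01, 120.0],
              [-0.01, 0.99, 4.0],
              [1e-5, 2e-5, 1.0]])
for t in (0.0, 0.5, 1.0):
    print(t, blend_to_translation(H, t)[0])
```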
Procedia PDF Downloads 319
2028 A Learning Automata Based Clustering Approach for Underwater Sensor Networks to Reduce Energy Consumption
Authors: Motahareh Fadaei
Abstract:
Wireless sensor networks used to monitor a particular environment are formed from a large number of sensor nodes. The role of these sensors is to sense particular parameters of the ambient environment and to communicate. In these networks, the most important challenge is the management of energy usage, and clustering is one of the methods broadly used to face this challenge. In this paper, a distributed clustering protocol based on learning automata is proposed for underwater wireless sensor networks. The proposed algorithm, called LA-Clustering, forms clusters of nodes at the same energy level, based on the energy level of nodes and the connection radius, regardless of the size and structure of the sensor network. The proposed approach is simulated and compared with some other protocols, considering metrics such as network lifetime, number of alive nodes, and amount of transmitted data. The simulation results demonstrate the efficiency of the proposed approach.
Keywords: clustering, energy consumption, learning automata, underwater sensor networks
Procedia PDF Downloads 314
2027 A Multi Agent Based Protection Scheme for Smart Distribution Network in Presence of Distributed Energy Resources
Authors: M. R. Ebrahimi, B. Mahdaviani
Abstract:
Conventional electric distribution systems are radial in nature, supplied at one end through a main source. These networks generally have a simple protection system, usually implemented using fuses, reclosers, and over-current relays. Recently, great attention has been paid to applying distributed energy resources (DERs) throughout electric distribution systems. The presence of such generation in a network leads to loss of coordination among protection devices. Therefore, it is desirable to develop an algorithm capable of protecting distribution systems that include DERs. On the other hand, the smart grid brings opportunities to the power system: fast advancement in communication and measurement techniques accelerates the development of multi-agent systems (MAS). So, in this paper, a new approach for the protection of distribution networks in the presence of DERs is presented based on MAS. The proposed scheme has been implemented on a sample 27-bus distribution network.
Keywords: distributed energy resource, distribution network, protection, smart grid, multi agent system
Procedia PDF Downloads 608
2026 Sparse Coding Based Classification of Electrocardiography Signals Using Data-Driven Complete Dictionary Learning
Authors: Fuad Noman, Sh-Hussain Salleh, Chee-Ming Ting, Hadri Hussain, Syed Rasul
Abstract:
In this paper, a data-driven dictionary approach is proposed for the automatic detection and classification of cardiovascular abnormalities. The electrocardiography (ECG) signal is represented by trained complete dictionaries that contain prototypes, or atoms, to avoid the limitations of pre-defined dictionaries. The data-driven trained dictionaries simply take the ECG signal as input, rather than extracted features, to study the set of parameters that yield the most descriptive dictionary. The approach inherently learns the complicated morphological changes in the ECG waveform, which are then used to improve the classification. The classification performance was evaluated with ECG data under two different preprocessing environments. In the first category, the QT database is baseline-drift corrected and a notch filter removes the 60 Hz power line noise. In the second category, the data are further filtered using a fast moving-average smoother. The experimental results on the QT database confirm that the proposed algorithm achieves a classification accuracy of 92%.
Keywords: electrocardiogram, dictionary learning, sparse coding, classification
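As a rough sketch of the pipeline, the snippet below learns a dictionary from raw beat segments with scikit-learn and classifies on the resulting sparse codes. The random stand-in data, atom count, sparsity level, and the linear SVM are assumptions; the paper's exact dictionary learning procedure and classifier are not specified in the abstract.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning
from sklearn.svm import LinearSVC

# Stand-in data: rows are fixed-length ECG beat segments (real work
# would load the QT database); labels mark normal vs. abnormal beats.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(200, 128)), rng.integers(0, 2, 200)
X_test,  y_test  = rng.normal(size=(50, 128)),  rng.integers(0, 2, 50)

# Learn a dictionary from raw beats, then classify on the sparse
# codes rather than on hand-crafted features.
dico = DictionaryLearning(n_components=64, transform_algorithm="omp",
                          transform_n_nonzero_coefs=8, random_state=0)
codes_train = dico.fit_transform(X_train)
codes_test = dico.transform(X_test)

clf = LinearSVC().fit(codes_train, y_train)
print("accuracy:", clf.score(codes_test, y_test))
```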
Procedia PDF Downloads 386
2025 Students’ Perceptions of Using Wiki Technology to Enhance Language Learning
Authors: Hani Mustafa, Cristina Gonzalez Ruiz, Estelle Bech
Abstract:
The growing influence of digital technologies has made learning and interaction more accessible, resulting in effective collaboration if properly managed. Technology-enabled learning has become an important conduit for learning, including collaborative learning. The use of wiki technology, for example, has opened a new learning platform that enables the integration of the social, linguistic, and cognitive processes of language learning. It encourages students to collaborate in the construction, analysis, and understanding of knowledge. But to what extent is the use of wikis effective in promoting collaborative learning among students? In addition, how do students perceive this technology as enhancing their language learning? In this study, students were given a wiki project to complete collaboratively with their group members. Students had to write collaboratively to produce and present a seven-day travel plan in which they described places to visit and things to do to explore the best historical and cultural aspects of the country. The study involves students learning French, Malay, and Spanish as a foreign language. In completing this wiki project, students move from passive learning of language to real engagement with classmates, which requires them to collaborate and negotiate effectively with one another. The objective of the study is to ascertain, from the students' perception, to what extent wiki technology helped in promoting collaborative learning and improving language skills. It is found that while there was improvement in students' language skills, the overall experience was less positive due to unfamiliarity with a new learning tool.
Keywords: collaborative learning, foreign language, wiki, teaching
Procedia PDF Downloads 136
2024 Estimating the Effect of Fluid in Pressing Process
Authors: A. Movaghar, R. A. Mahdavinejad
Abstract:
To analyze the effect of various fluid parameters on material properties such as surface and depth defects and/or cracks, it is possible to determine the effect of the pressure field on these specifications. Stress tensor analysis is also able to determine the points at which the probability of defect creation is higher. Besides, from the pressure field, it is possible to analyze the effect of fluid properties such as viscosity and density on defects created in the material. In this research, the relevant boundary conditions are analyzed first. Then the solution grid and stencil used are described. With the derivation of the relevant equations for the fluid flow between the notch and the matrix, and their discretization according to the governing boundary conditions, these equations can be solved. Finally, by varying fluid parameters such as density and viscosity, the effect of these variations on the pressure field can be determined. In this direction, the flowchart and solution algorithm, with their results as vortex and stream function contours for the two conditions with most applications in the pressing process, are introduced and discussed.
Keywords: pressing, notch, matrix, flow function, vortex
Procedia PDF Downloads 290
2023 Applications of Building Information Modeling (BIM) in Knowledge Sharing and Management in Construction
Authors: Shu-Hui Jan, Shih-Ping Ho, Hui-Ping Tserng
Abstract:
Construction knowledge can be referred to and reused among the project managers and job-site engineers involved to alleviate problems on a construction job site and reduce the time and cost of solving problems related to constructability. This paper proposes a new methodology for sharing construction knowledge using the Building Information Modeling (BIM) approach. The main characteristics of BIM include 3D CAD-based presentation, keeping information in a digital format, and facilitating easy updating and transfer of information in the 3D BIM environment. Using the BIM approach, project managers and engineers can gain knowledge related to 3D BIM and obtain feedback provided by job-site engineers for future reference. This study addresses the application of knowledge sharing management in the construction phase of construction projects and proposes a BIM-based Knowledge Sharing Management (BIMKSM) system for project managers and engineers. The BIMKSM system is then applied in a selected case study of a construction project in Taiwan to verify the proposed methodology and demonstrate the effectiveness of sharing knowledge in the BIM environment. The combined results demonstrate that the BIMKSM system can be used as a visual BIM-based knowledge sharing management platform by utilizing the BIM approach and web technology.
Keywords: construction knowledge management, building information modeling, project management, web-based information system
Procedia PDF Downloads 352
2022 Inferential Reasoning for Heterogeneous Multi-Agent Mission
Authors: Sagir M. Yusuf, Chris Baber
Abstract:
We describe issues bedeviling the coordination of heterogeneous multi-agent missions (agents carrying different sensors), such as belief conflict and situational reasoning. We apply Bayesian inferential reasoning over the agents' presumptions to address heterogeneous multi-agent belief variation and situation-based reasoning. A Bayesian Belief Network (BBN) is used to model the agents' belief conflicts due to sensor variations. Simulation experiments were designed, and cases from the agents' missions were used to train the BBN using gradient descent and expectation-maximization algorithms. The output network is a well-trained BBN for making inferences for both agents and human experts. We claim that the prediction capacity of the Bayesian learning algorithm improves with the amount of training data, and argue that it enhances multi-agent robustness and resolves agents' sensor conflicts.
Keywords: distributed constraint optimization problem, multi-agent system, multi-robot coordination, autonomous system, swarm intelligence
Procedia PDF Downloads 154
2021 An Application to Predict the Best Study Path for Information Technology Students in Learning Institutes
Authors: L. S. Chathurika
Abstract:
Early prediction of student performance is an important factor in achieving academic excellence. Whatever the study stream in secondary education, students lay the foundation for higher studies during the first year of their degree or diploma program in Sri Lanka. The information technology (IT) field has brought certain improvements to the education domain by offering specialization areas through which students can show their talents and skills. These specializations can be software engineering, network administration, database administration, multimedia design, etc. After completing the first year, students attempt to select the best path by considering numerous factors. The purpose of this experiment is to predict the best study path using machine learning algorithms. Five classification algorithms were selected and tested: decision tree, support vector machine, artificial neural network, Naïve Bayes, and logistic regression. The support vector machine obtained the highest accuracy, 82.4%. The most influential features were then identified in order to select the best study path.
Keywords: algorithm, classification, evaluation, features, testing, training
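A minimal sketch of this kind of five-way classifier comparison, done entirely in scikit-learn on synthetic stand-in data (the paper's student records and feature set are not public, so the data, feature count, and hyperparameters here are assumptions):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression

# Stand-in data: rows would be first-year records (grades, attendance,
# etc.), labels the specialization; four classes as an example.
X, y = make_classification(n_samples=400, n_features=10, n_classes=4,
                           n_informative=6, random_state=0)

models = {
    "decision tree": DecisionTreeClassifier(random_state=0),
    "SVM": SVC(),
    "neural network": MLPClassifier(max_iter=2000, random_state=0),
    "naive Bayes": GaussianNB(),
    "logistic regression": LogisticRegression(max_iter=2000),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()  # 5-fold accuracy
    print(f"{name}: {acc:.1%}")
```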
Procedia PDF Downloads 119
2020 A Test Methodology to Measure the Open-Loop Voltage Gain of an Operational Amplifier
Authors: Maninder Kaur Gill, Alpana Agarwal
Abstract:
It is practically not feasible to measure the open-loop voltage gain of an operational amplifier in the open-loop configuration, because the open-loop voltage gain is very large. To avoid saturation of the output voltage, a very small input would have to be applied to the operational amplifier, which cannot be measured practically with a digital multimeter. A test circuit for measuring the open-loop voltage gain of an operational amplifier has been proposed and verified using simulation tools as well as by experimental methods on a breadboard. The main advantage of this test circuit is that it is simple, fast, accurate, cost-effective, and easy to handle, even on a breadboard. The test circuit requires only the device under test (DUT) along with resistors. The circuit has been tested for the measurement of open-loop voltage gain for different operational amplifiers. The underlying goal is to design testable circuits for various analog devices that are simple to realize in VLSI systems, give accurate results, and do not change the characteristics of the original system. The DUTs used are the LM741CN and UA741CP. For the LM741CN, the simulated gain and experimentally measured gain (average) are 89.71 dB and 87.71 dB, respectively. For the UA741CP, the simulated gain and experimentally measured gain (average) are 101.15 dB and 105.15 dB, respectively. These values are found to be close to the datasheet values.
Keywords: Device Under Test (DUT), open loop voltage gain, operational amplifier, test circuit
Procedia PDF Downloads 447
2019 Prediction of Unsteady Heat Transfer over Square Cylinder in the Presence of Nanofluid by Using ANN
Authors: Ajoy Kumar Das, Prasenjit Dey
Abstract:
Heat transfer due to forced convection of a copper-water nanofluid has been predicted by an artificial neural network (ANN). The nanofluid is formed by mixing copper nanoparticles in water; the volume fractions considered here are 0% to 15%, and the Reynolds number is kept constant at 100. The back-propagation algorithm is used to train the network. The ANN is trained with the input and output data obtained from numerical simulation performed in the finite-volume-based computational fluid dynamics (CFD) commercial software Ansys Fluent. The numerical simulation-based results are compared with the back-propagation-based ANN results. It is found that the forced convection heat transfer of the water-based nanofluid can be predicted correctly by the ANN. It is also observed that the back-propagation ANN can predict the heat transfer characteristics of the nanofluid very quickly compared to the standard CFD method.
Keywords: forced convection, square cylinder, nanofluid, neural network
Procedia PDF Downloads 320
2018 Knowledge Representation Based on Interval Type-2 CFCM Clustering
Authors: Lee Myung-Won, Kwak Keun-Chang
Abstract:
This paper is concerned with knowledge representation and the extraction of fuzzy if-then rules using interval type-2 context-based fuzzy C-means clustering (IT2-CFCM) with the aid of fuzzy granulation. The proposed clustering algorithm is based on information granulation in the form of IT2-based fuzzy C-means (IT2-FCM) clustering and estimates the cluster centers by preserving the homogeneity between the clustered patterns from the IT2 contexts produced in the output space. Furthermore, we can obtain automatic knowledge representation in the design of radial basis function networks (RBFN), linguistic models (LM), and adaptive neuro-fuzzy networks (ANFN) from numerical input-output data pairs. We focus on the design of an ANFN in this paper. The experimental results on an energy performance estimation problem reveal that the proposed method shows good knowledge representation and performance in comparison with previous works.
Keywords: IT2-FCM, IT2-CFCM, context-based fuzzy clustering, adaptive neuro-fuzzy network, knowledge representation
Procedia PDF Downloads 322
2017 Sensitivity Analysis of the Heat Exchanger Design in Net Power Oxy-Combustion Cycle for Carbon Capture
Authors: Hirbod Varasteh, Hamidreza Gohari Darabkhani
Abstract:
Global warming and its impact on climate change is one of the main challenges of the current century. Global warming is mainly due to the emission of greenhouse gases (GHG), and carbon dioxide (CO2) is known to be the major contributor to the GHG emission profile. While the energy sector is the primary source of CO2 emissions, carbon capture and storage (CCS) is believed to be the solution for controlling these emissions. Oxyfuel combustion (oxy-combustion) is one of the major technologies for capturing CO2 from power plants. For gas turbines, several oxy-combustion power cycles (oxyturbine cycles) have been investigated by means of thermodynamic analysis. The NetPower cycle is one of the leading oxyturbine power cycles, with almost full carbon capture capability from a natural-gas-fired power plant. In this manuscript, a sensitivity analysis of the heat exchanger design in the NetPower cycle is completed by means of process modelling. The heat capacity variation and supercritical CO2 with gaseous admixtures are considered for a multi-zone analysis with Aspen Plus software. It is found that the heat exchanger design plays a major role in increasing the efficiency of the NetPower cycle. Pinch-point analysis is performed to extract the composite and grand composite curves for the heat exchanger. In this paper, the relationship between cycle efficiency and the minimum approach temperature (∆Tmin) of the heat exchanger is also evaluated. An increase in ∆Tmin causes a decrease in the temperature of the recycled flue gases (RFG) and an overall decrease in the required power for the recycled gas compressor. The main challenge in the design of heat exchangers in power plants is the tradeoff between capital and operational costs. To achieve a lower ∆Tmin, a larger heat exchanger is required, which means a higher capital cost but better heat recovery and lower operational cost. To balance this, ∆Tmin is selected at the minimum point of the combined capital and operational cost curves. This study provides insight into the NetPower oxy-combustion cycle's performance analysis and operational conditions based on its heat exchanger design.
Keywords: carbon capture and storage, oxy-combustion, netpower cycle, oxy turbine cycles, zero emission, heat exchanger design, supercritical carbon dioxide, oxy-fuel power plant, pinch point analysis
Procedia PDF Downloads 204
2016 The Practice of Teaching Chemistry by the Application of Online Tests
Authors: Nikolina Ribarić
Abstract:
E-learning is most commonly defined as a set of applications and processes, such as Web-based learning, computer-based learning, virtual classrooms, and digital collaboration, that enable access to instructional content through a variety of electronic media. The main goal of an e-learning system is learning, and the way to evaluate the impact of an e-learning system is by examining whether students learn effectively with its help. Testmoz is a program for the online preparation of knowledge evaluation assignments. The program provides teachers with computer support in designing assignments and evaluating them; students can review and solve assignments and check the correctness of their solutions. Research into whether providing teaching content through online tests prepared in Testmoz increases motivation was carried out with 8th-grade students of Ljubo Babić Primary School in Jastrebarsko. The students took the tests in their free time, from home, an unlimited number of times. SPSS was used to process the data obtained by the research instruments. The results showed that students preferred to practice teaching content, and achieved better educational results in chemistry, when they had access to online tests for repetition and practice, compared with subject content that was checked after repetition and practice in the classical way, i.e., solving assignments in a workbook or writing assignments on worksheets.
Keywords: chemistry class, e-learning, motivation, Testmoz
Procedia PDF Downloads 160
2015 An Improved Dynamic Window Approach with Environment Awareness for Local Obstacle Avoidance of Mobile Robots
Authors: Baoshan Wei, Shuai Han, Xing Zhang
Abstract:
Local obstacle avoidance is critical for mobile robot navigation, and it is a challenging task to ensure path optimality and safety in cluttered environments. We propose an Environment-Aware Dynamic Window Approach in this paper to cope with this issue. The method integrates environment characterization into the Dynamic Window Approach (DWA), and two strategies are proposed to achieve the integration. The local goal strategy guides the robot to move through openings before approaching the final goal, which solves the local-minima problem of DWA. The adaptive control strategy enables the robot to adjust its state according to the environment, which improves path safety compared with DWA. Besides, the evaluation shows that the path generated by the proposed algorithm is safer and smoother than those of state-of-the-art algorithms.
Keywords: adaptive control, dynamic window approach, environment aware, local obstacle avoidance, mobile robots
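For readers unfamiliar with the baseline, here is a minimal sketch of one decision step of plain DWA (not the environment-aware variant, whose two strategies the abstract describes only at a high level): sample admissible velocities, roll out short constant-velocity arcs, discard colliding ones, and score the rest by heading, clearance, and speed. All limits, weights, and the collision radius are assumed values.

```python
import numpy as np

def dwa_choose(v, w, goal, obstacles, dt=0.1, horizon=1.0,
               a_max=0.5, aw_max=1.0, weights=(1.0, 1.0, 0.1)):
    """One step of plain DWA: pick (v, w) from the dynamic window
    that maximizes a weighted score of heading, clearance, speed."""
    best, best_score = (0.0, 0.0), -np.inf
    for v2 in np.linspace(v - a_max * dt, v + a_max * dt, 11):
        for w2 in np.linspace(w - aw_max * dt, w + aw_max * dt, 11):
            # forward-simulate a constant (v2, w2) arc
            x = y = th = 0.0
            traj = []
            for _ in range(int(horizon / dt)):
                th += w2 * dt
                x += v2 * np.cos(th) * dt
                y += v2 * np.sin(th) * dt
                traj.append((x, y))
            traj = np.array(traj)
            clear = min(np.hypot(traj[:, 0] - ox, traj[:, 1] - oy).min()
                        for ox, oy in obstacles)
            if clear < 0.2:          # trajectory collides: inadmissible
                continue
            heading = -np.hypot(goal[0] - x, goal[1] - y)
            score = (weights[0] * heading + weights[1] * clear
                     + weights[2] * v2)
            if score > best_score:
                best_score, best = score, (v2, w2)
    return best

print(dwa_choose(0.3, 0.0, goal=(2.0, 0.5), obstacles=[(1.0, 0.0)]))
```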
Procedia PDF Downloads 159
2014 Bioinformatics High Performance Computation and Big Data
Authors: Javed Mohammed
Abstract:
Right now, biomedical infrastructure lags well behind the curve. Our healthcare system is dispersed and disjointed; medical records are a bit of a mess; and we do not yet have the capacity to store and process the enormous amounts of data coming our way from widespread whole-genome sequencing. And then there are privacy issues. Despite these infrastructure challenges, some researchers are plunging into biomedical Big Data now, in hopes of extracting new and actionable knowledge: delving into molecular-level data to discover biomarkers that help classify patients based on their response to existing treatments, and pushing their results out to physicians in novel and creative ways. Computer scientists and biomedical researchers are able to transform data into models and simulations that will enable scientists, for the first time, to gain a profound understanding of the deepest biological functions. Solving biological problems may require high-performance computing (HPC), due either to the massive parallel computation required to solve a particular problem or to algorithmic complexity that may range from difficult to intractable. Many problems involve seemingly well-behaved polynomial-time algorithms (such as all-to-all comparisons) but have massive computational requirements due to the large data sets that must be analyzed. High-throughput techniques for DNA sequencing and gene expression analysis have led to exponential growth in the amount of publicly available genomic data. With the increased availability of genomic data, traditional database approaches are no longer sufficient for rapidly performing life-science queries involving the fusion of data types. Computing systems are now so powerful that it is possible for researchers to consider modeling the folding of a protein or even simulating an entire human body. This paper emphasizes computational biology's growing need for high-performance computing and Big Data, illustrates their indispensability in meeting the scientific and engineering challenges of the twenty-first century, and shows how protein folding (the structure and function of proteins) and phylogeny reconstruction (the evolutionary history of a group of genes) can use HPC that provides sufficient capability for evaluating or solving more limited but meaningful instances. The paper also indicates solutions to optimization problems, discusses the benefits of Big Data for computational biology, and illustrates the current state of the art and future generations of HPC computing with Big Data.
Keywords: high performance, big data, parallel computation, molecular data, computational biology
Procedia PDF Downloads 363
2013 A Comparative Study of Language Used in English Newspaper Dailies of Mumbai in Addressing Disability Related Issues
Authors: Amrin Moger, Martin Mathew, Sagar Bhalerao
Abstract:
Mass media may be categorized into print and digital, the former being the traditional form of reaching the masses to inform and educate on various issues. The Indian print media is more than two centuries old. Its strengths have largely been shaped by its historical experience and, in particular, by its association with the freedom struggle as well as movements for social emancipation, reform, and amelioration. Therefore, it is highly regarded in Indian society. Persons with disability are part of Indian society, yet they have always been looked down upon and not considered full members of society; people with disabilities were commonly feared, pitied, and neglected. Much of the literature on disability in India has pointed to the importance of the concept of karma in attitudes to disability, with disability perceived either as punishment for misdeeds in the past lives of the person with disability or as the consequence of the wrongdoings of their parents. Some Indian authors, who consider the passage of the PWD Act a landmark step in the history of rehabilitation services in India, have put it thus: 'At a profoundly serious and spiritual level, disability represents divine justice.' The newspaper has a role to play in changing this attitude. A short comparative content analysis of the Mumbai editions of two English newspapers was carried out to analyze the language used in reporting disability issues. The Statistical Package for the Social Sciences (SPSS) was used to gather and analyze the data.
Keywords: content analysis, disability, newspaper dailies, language
Procedia PDF Downloads 286
2012 The Influence of Covariance Hankel Matrix Dimension on Algorithms for VARMA Models
Authors: Celina Pestano-Gabino, Concepcion Gonzalez-Concepcion, M. Candelaria Gil-Fariña
Abstract:
Some estimation methods for VARMA models, and for multivariate time series models in general, rely on the use of a Hankel matrix. It is known that if the data sample is populous enough and the dimension of the Hankel matrix is unnecessarily large, this may result in an unnecessary number of computations as well as in numerical problems. In this sense, the aim of this paper is two-fold. First, we provide some theoretical results for these matrices which translate into a lower dimension for the matrices normally used in the algorithms. This contribution thus serves to improve those methods from a numerical and, presumably, statistical point of view. Second, we have chosen an estimation algorithm to illustrate our improvements in practice. The results we obtained in a simulation of VARMA models show that an increase in the size of the Hankel matrix beyond the theoretical bound proposed as valid does not necessarily lead to improved practical results. Therefore, for future research, we propose conducting similar studies using any of the linear system estimation methods that depend on Hankel matrices.
Keywords: covariances Hankel matrices, Kronecker indices, system identification, VARMA models
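To make the object concrete, here is a small numpy sketch of the kind of block Hankel matrix of sample autocovariances such methods build; the block-row and block-column counts p and q stand in for the dimension the paper argues should not exceed the theoretical bound. The lag convention and helper names are assumptions, not the paper's notation.

```python
import numpy as np

def covariance_hankel(X, p, q):
    """Block Hankel matrix of sample autocovariances of a k-variate
    series X (T x k). Block (i, j) holds the lag-(i+j+1)
    autocovariance; p and q set the block-row/column counts."""
    T, k = X.shape
    Xc = X - X.mean(axis=0)
    def gamma(h):  # sample autocovariance at lag h
        return (Xc[h:].T @ Xc[:T - h]) / T
    return np.block([[gamma(i + j + 1) for j in range(q)]
                     for i in range(p)])

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
H = covariance_hankel(X, p=3, q=3)
print(H.shape)  # (6, 6): 3x3 blocks of 2x2 autocovariances
```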
Procedia PDF Downloads 243
2011 Quantitative Evaluation of Endogenous Reference Genes for ddPCR under Salt Stress Using a Moderate Halophile
Authors: Qinghua Xing, Noha M. Mesbah, Haisheng Wang, Jun Li, Baisuo Zhao
Abstract:
Droplet digital PCR (ddPCR) is being increasingly adopted for gene detection and quantification because of its higher sensitivity and specificity. According to previous observations and our lab data, it is essential to use endogenous reference genes (RGs) when investigating gene expression at the mRNA level under salt stress. This study aimed to select and validate suitable RGs for gene expression under salt stress using ddPCR. Six candidate RGs were selected based on the tandem mass tag (TMT)-labeled quantitative proteomics of Alkalicoccus halolimnae at four salinities. The expression stability of these candidate genes was evaluated using statistical algorithms (geNorm, NormFinder, BestKeeper, and RefFinder). There was only a small fluctuation in the cycle threshold (Ct) value and copy number of the pdp gene, and its expression stability was ranked highest by all algorithms, making it the most suitable RG for quantification of expression by both qPCR and ddPCR in A. halolimnae under salt stress. The single RG pdp, as well as RG combinations, were used to normalize the expression of ectA, ectB, ectC, and ectD under four salinities. The present study constitutes the first systematic analysis of endogenous RG selection for halophiles responding to salt stress. This work provides a valuable theoretical and methodological reference for internal control identification in ddPCR-based stress response models.
Keywords: endogenous reference gene, salt stress, ddPCR, RT-qPCR, Alkalicoccus halolimnae
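Of the four algorithms named, geNorm is the easiest to sketch: each candidate's stability value M is the average standard deviation of its pairwise log-ratios with the other candidates across samples, and a lower M means a more stable reference gene. The snippet below implements that idea on stand-in numbers; the paper's actual ddPCR copy-number data and the other three algorithms are not reproduced here.

```python
import numpy as np

def genorm_m_values(expr, genes):
    """geNorm-style stability measure: for each candidate reference
    gene, the mean standard deviation of its log2 expression ratio
    against every other candidate across samples (lower = stabler).
    expr is a samples x genes array of expression values."""
    logs = np.log2(expr)
    m = {}
    for j, g in enumerate(genes):
        sds = [np.std(logs[:, j] - logs[:, k], ddof=1)
               for k in range(len(genes)) if k != j]
        m[g] = float(np.mean(sds))
    return m

# Stand-in numbers for six candidates across four salinities
rng = np.random.default_rng(1)
expr = rng.lognormal(mean=5, sigma=0.3, size=(4, 6))
print(genorm_m_values(expr, ["pdp", "rg2", "rg3", "rg4", "rg5", "rg6"]))
```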
Procedia PDF Downloads 104
2010 DesignChain: Automated Design of Products Featuring a Large Number of Variants
Authors: Lars Rödel, Jonas Krebs, Gregor Müller
Abstract:
Growing price pressure due to the increasing number of global suppliers, the growing individualization of products, and ever-shorter delivery times are upcoming challenges in industry. In this context, mass personalization stands for the individualized production of customer products in batch size 1 at the price of standardized products. The possibilities of digitalization and automation of technical order processing give companies the opportunity to significantly reduce their cost of complexity and lead times and thus enhance their competitiveness. Many companies already use a range of CAx tools and configuration solutions today, but the expert knowledge of employees is often hidden in "knowledge silos" and is rarely networked across processes. DesignChain describes the automated digital process from the recording of individual customer requirements, through design and technical preparation, to production. Configurators offer the possibility of mapping variant-rich products within the DesignChain. This transformation of customer requirements into product features makes it possible to generate even complex CAD models, such as those for large-scale plants, on a rule basis. With the aid of an automated CAx chain, production-relevant documents are then transferred digitally to production. This process, which can be fully automated, allows variants to always be generated on the basis of current version statuses.
Keywords: automation, design, CAD, CAx
Procedia PDF Downloads 76
2009 A Method for Reduction of Association Rules in Data Mining
Authors: Diego De Castro Rodrigues, Marcelo Lisboa Rocha, Daniela M. De Q. Trevisan, Marcos Dias Da Conceicao, Gabriel Rosa, Rommel M. Barbosa
Abstract:
The use of association rule algorithms within data mining is recognized as being of great value for knowledge discovery in databases. Very often, the number of rules generated is high, sometimes even in databases of small volume, so the quantity can hamper successful analysis of the results. The purpose of this research is to present a method for reducing the quantity of rules generated by association algorithms. To this end, a computational algorithm was developed using the Weka Application Programming Interface, which allows the method to be executed on different types of databases. After development, tests were carried out on three types of databases: synthetic, model, and real. Efficient results were obtained in reducing the number of rules, where even the worst case presented a gain of more than 50%, considering support, confidence, and lift as measures. This study concludes that the proposed model is feasible and quite interesting, contributing to the analysis of the results of association rules generated by these algorithms.
Keywords: data mining, association rules, rules reduction, artificial intelligence
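The abstract does not detail the reduction criteria; the sketch below shows the general shape of such a step in Python, mining rules with the mlxtend library (a stand-in for the Weka API the authors used) and then pruning them by support, confidence, and lift thresholds. The toy transactions and threshold values are assumptions.

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# One-hot transaction data; real runs would load the test databases.
df = pd.DataFrame({
    "bread":  [1, 1, 0, 1, 1, 0],
    "butter": [1, 1, 0, 1, 0, 0],
    "milk":   [1, 0, 1, 1, 1, 1],
    "jam":    [0, 1, 0, 1, 0, 0],
}).astype(bool)

itemsets = apriori(df, min_support=0.3, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.6)

# Reduction step: keep only rules that clear all three thresholds.
before = len(rules)
reduced = rules[(rules["support"] >= 0.4)
                & (rules["confidence"] >= 0.8)
                & (rules["lift"] > 1.0)]
print(f"kept {len(reduced)} of {before} rules")
```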
Procedia PDF Downloads 161