Search results for: Information retrieval systems
4497 Continuous Functions Modeling with Artificial Neural Network: An Improvement Technique to Feed the Input-Output Mapping
Authors: A. Belayadi, A. Mougari, L. Ait-Gougam, F. Mekideche-Chafa
Abstract:
The artificial neural network is one of the techniques that has been used advantageously for modeling problems. In this study, computing with an artificial neural network (CANN) is proposed. The model is applied to the information processing of a one-dimensional task. We aim to integrate a new method based on a new coding approach for generating the input-output mapping, which relies on increasing the number of neuron units in the last layer. To show the efficiency of the approach under study, a comparison is made between the proposed method of generating the input-output set and the conventional method. The results illustrate that increasing the number of neuron units in the last layer makes it possible to find the optimal network parameters that fit the mapping data. Moreover, it reduces the training time during the computation process, which avoids the need for computers with large memory.
Keywords: Neural network computing, information processing, input-output mapping, training time, computers with high memory.
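As a rough illustration of fitting a one-dimensional input-output mapping with extra neuron units in the last layer, the following minimal NumPy sketch trains a one-hidden-layer network with two output units; the target functions, layer sizes and training loop are illustrative assumptions, not the authors' CANN implementation.

```python
import numpy as np

# Minimal sketch (not the authors' code): a one-hidden-layer network whose
# last layer has several output units, trained by gradient descent to fit a
# one-dimensional input-output mapping.
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)               # 1-D inputs
targets = np.hstack([np.sin(np.pi * x), np.cos(np.pi * x)])  # two output units

n_hidden, n_out, lr = 16, targets.shape[1], 0.05
W1 = rng.normal(0, 0.5, (1, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, (n_hidden, n_out)); b2 = np.zeros(n_out)

for epoch in range(2000):
    h = np.tanh(x @ W1 + b1)            # hidden activations
    y = h @ W2 + b2                     # linear output layer (several units)
    err = y - targets
    # Backpropagation of the mean-squared error
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    gh = (err @ W2.T) * (1 - h ** 2)
    gW1 = x.T @ gh / len(x); gb1 = gh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

print("final MSE:", float((err ** 2).mean()))
```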
4496 Augmented Reality in Advertising and Brand Communication: An Experimental Study
Authors: O. Mauroner, L. Le, S. Best
Abstract:
Digital technologies offer many opportunities in the design and implementation of brand communication and advertising. Augmented reality (AR) is an innovative technology in marketing communication that builds on the fact that virtual interaction with a product ad offers additional value to consumers. AR enables consumers to obtain (almost) real product experiences through virtual information even before the purchase of a certain product. The aim of AR applications in advertising is an in-depth examination of product characteristics to enhance product knowledge as well as brand knowledge. Interactive design of advertising provides observers with an intense examination of a specific advertising message and therefore leads to better brand knowledge. The elaboration likelihood model and the central route to persuasion strongly support this argumentation. Nevertheless, AR in brand communication is still at an initial stage, and scientific findings about the impact of AR on information processing and brand attitude are therefore rare. The aim of this paper is to empirically investigate the potential of AR applications in combination with traditional print advertising. To that effect, an experimental design with different levels of interactivity is built to measure the impact of the interactivity of an ad on different variables of advertising effectiveness.
Keywords: Advertising effectiveness, augmented reality, brand communication, brand recall, interactivity.
4495 Analysis of Driving Conditions and Preferred Media on Diversion
Authors: Yoon-Hyuk Choi
Abstract:
Studies on the distribution of traffic demand have proceeded by providing traffic information to reduce greenhouse gases and reinforce the competitiveness of roads in the transport sector. However, since extensive studies on drivers' route-changing behavior and its influencing factors are required first, this study develops a discriminant model for route changes that considers driving conditions, including the traffic conditions of roads and drivers' preferences for information media. With a CART analysis, drivers are divided into three groups depending on driving conditions, a classification that is statistically meaningful. The extent to which driving conditions and preferred media affect a route change is then examined through a discriminant analysis, and a discriminant model equation is developed to predict a route change. As a result of building the discriminant model equation, driving conditions are shown to affect a route change much more; the overall discriminant hit ratio is 64.2%, and the discriminant equation shows a reasonably high discriminant ability.
Keywords: CART analysis, diversion, discriminant model, driving conditions, preferred media.
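The following sketch illustrates the general shape of such a discriminant analysis in Python with scikit-learn; the feature set, the synthetic data and the diversion rule are hypothetical and are not taken from the paper's survey.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Illustrative sketch only (hypothetical data): predict whether a driver
# diverts (1) or stays on route (0) from driving-condition and
# media-preference features, then report the discriminant hit ratio.
rng = np.random.default_rng(1)
n = 300
X = np.column_stack([
    rng.normal(30, 10, n),       # congestion delay (min), assumed feature
    rng.integers(0, 3, n),       # preferred medium: 0=radio, 1=VMS, 2=smartphone
    rng.normal(20, 5, n),        # trip length (km), assumed feature
])
y = (0.04 * X[:, 0] + 0.3 * (X[:, 1] == 2) + rng.normal(0, 0.5, n) > 1.2).astype(int)

lda = LinearDiscriminantAnalysis().fit(X, y)
hit_ratio = lda.score(X, y)      # proportion of correctly classified drivers
print(f"discriminant hit ratio: {hit_ratio:.1%}")
```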
4494 Reduction of Power Losses in Distribution Systems
Authors: Y. Al-Mahroqi, I.A. Metwally, A. Al-Hinai, A. Al-Badi
Abstract:
Loss-reduction initiatives in distribution systems have been activated due to the increasing cost of supplying electricity, the shortage of fuel with an ever-increasing cost to produce more power, and global warming concerns. These initiatives have been introduced to the utilities in the shape of incentives and penalties. Recently, the electricity distribution companies in Oman have been incentivized to reduce distribution technical and non-technical losses at an equal annual reduction rate for 6 years. In this paper, different techniques for loss reduction in Mazoon Electricity Company (MZEC) are addressed. In this company, a high number of substations and feeders were found to be non-compliant with the Distribution System Security Standard (DSSS). Therefore, 33 projects have been suggested to bring the 29 non-compliant substations and 28 feeders up to the planned criteria and into compliance with the DSSS. The largest part of MZEC's network (the South Batinah region) was modeled with the ETAP software package. The model has been extended to implement the proposed projects and to examine their effects on loss reduction. Simulation results have shown that the implementation of these projects leads to a significant improvement in the voltage profile and a reduction in the active and reactive power losses. Finally, the economic analysis has revealed that the implementation of the proposed projects in MZEC leads to an annual saving of about US$ 5 million.
Keywords: Losses reduction, technical losses, non-technical losses, cost analysis.
4493 Adversarial Disentanglement Using Latent Classifier for Pose-Independent Representation
Authors: Hamed Alqahtani, Manolya Kavakli-Thorne
Abstract:
The large pose discrepancy is one of the critical challenges in face recognition during video surveillance. Due to the entanglement of pose attributes with identity information, conventional approaches to pose-independent representation fail to provide quality results when recognizing largely posed faces. In this paper, we propose a practical approach to disentangle the pose attribute from the identity information, followed by synthesis of a face using a classifier network in latent space. The proposed approach employs a modified generative adversarial network framework consisting of an encoder-decoder structure embedded with a classifier in manifold space for carrying out factorization on the latent encoding. It can be further generalized to other face and non-face attributes for real-life video frames containing faces with significant attribute variations. Experimental results and a comparison with the state of the art in the field prove that the learned representation of the proposed approach synthesizes more compelling perceptual images through a combination of adversarial and classification losses.
Keywords: Video surveillance, disentanglement, face detection.
4492 Hybrid Control Mode Based On Multi-Sensor Information by Fuzzy Approach for Navigation Task of Autonomous Mobile Robot
Authors: Jonqlan Lin, C. Y. Tasi, K. H. Lin
Abstract:
This paper addresses the issue of the autonomous mobile robot (AMR) navigation task based on hybrid control modes. A novel hybrid control mode, based on multi-sensor information and a fuzzy approach, is presented in this research. The system operates in real time, is robust, enables the robot to operate with imprecise knowledge, and takes into account the physical limitations of the environment in which the robot moves, obtaining satisfactory responses for a large number of different situations. An experiment is simulated and carried out with a Pioneer mobile robot. The experimental results confirm the effectiveness and usefulness of the proposed AMR obstacle avoidance and navigation scheme, show its feasibility, and indicate that the control system has improved the navigation accuracy. The implementation of the controller is robust, has a low execution time, and allows an easy design and tuning of the fuzzy knowledge base.
Keywords: Autonomous mobile robot, obstacle avoidance, MEMS, hybrid control mode, navigation control.
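A toy example of the kind of fuzzy inference such a hybrid controller might rely on is sketched below; the membership functions, rule base and steering outputs are assumed for illustration and do not come from the paper.

```python
# Minimal sketch of a fuzzy obstacle-avoidance rule (illustrative only).
# Input: obstacle distance from a range sensor; output: a crisp steering
# correction obtained by weighted-average defuzzification.
def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def steering_correction(distance_m: float) -> float:
    # Fuzzify the obstacle distance (metres).
    near   = tri(distance_m, 0.0, 0.0, 1.0)
    medium = tri(distance_m, 0.5, 1.5, 2.5)
    far    = tri(distance_m, 2.0, 4.0, 4.0)
    # Rule base: near -> sharp turn (45 deg), medium -> mild turn (15 deg),
    # far -> go straight (0 deg). Defuzzify with a weighted average.
    weights, outputs = [near, medium, far], [45.0, 15.0, 0.0]
    total = sum(weights)
    return sum(w * o for w, o in zip(weights, outputs)) / total if total else 0.0

print(steering_correction(0.8))   # close obstacle -> larger correction
print(steering_correction(3.5))   # far obstacle -> nearly straight
```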
4491 Collision Detection Algorithm Based on Data Parallelism
Authors: Zhen Peng, Baifeng Wu
Abstract:
Modern computing technology is entering the era of parallel computing, with a trend toward sustainable and scalable parallelism. Single Instruction Multiple Data (SIMD) is an important way to go along with this trend. It is able to gather more and more computing power by increasing the number of processor cores without the need to modify the program. Meanwhile, in the fields of scientific computing and engineering design, many computation-intensive applications are facing the challenge of increasingly large amounts of data. Data-parallel computing will be an important way to further improve the performance of these applications. In this paper, we take accurate collision detection in building information modeling as an example and demonstrate a model for constructing a data-parallel algorithm. According to the model, a complex object is decomposed into sets of simple objects, and collision detection among complex objects is converted into collision detection among simple objects. The resulting algorithm is a typical SIMD algorithm, and its advantages in parallelism and scalability are unmatched by the traditional algorithms.
Keywords: Data parallelism, collision detection, single instruction multiple data, building information modeling, continuous scalability.
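The decomposition idea can be illustrated with a small data-parallel sketch: once complex objects are broken into simple axis-aligned bounding boxes, all pairwise overlap tests can be evaluated in one vectorized (SIMD-friendly) operation. The NumPy code below is an illustration of that pattern, not the paper's implementation.

```python
import numpy as np

# Illustrative data-parallel sketch: pairwise overlap tests between two sets
# of axis-aligned bounding boxes (AABBs), evaluated at once with broadcasting
# instead of nested loops.
def aabb_collisions(boxes_a: np.ndarray, boxes_b: np.ndarray) -> np.ndarray:
    """boxes_*: (N, 6) arrays of [xmin, ymin, zmin, xmax, ymax, zmax].
    Returns a boolean (Na, Nb) matrix where True marks an overlapping pair."""
    min_a, max_a = boxes_a[:, None, :3], boxes_a[:, None, 3:]
    min_b, max_b = boxes_b[None, :, :3], boxes_b[None, :, 3:]
    # Two boxes overlap iff they overlap on every axis.
    return np.all((min_a <= max_b) & (min_b <= max_a), axis=-1)

a = np.array([[0, 0, 0, 1, 1, 1], [5, 5, 5, 6, 6, 6]], dtype=float)
b = np.array([[0.5, 0.5, 0.5, 2, 2, 2]], dtype=float)
print(aabb_collisions(a, b))   # [[ True], [False]]
```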
4490 Testing a Flexible Manufacturing System Facility Production Capacity through Discrete Event Simulation: Automotive Case Study
Authors: Justyna Rybicka, Ashutosh Tiwari, Shane Enticott
Abstract:
In the age of automation and computation aiding manufacturing, it is clear that manufacturing systems have become more complex than ever before. Although technological advances provide the capability to gain more value with fewer resources, sometimes utilisation of the manufacturing capabilities available to organisations is difficult to achieve. Flexible manufacturing systems (FMS) provide a unique capability to manufacturing organisations where there is a need for product range diversification by providing line efficiency through production flexibility. This is very valuable in trend-driven production set-ups or niche-volume production requirements. Although FMS provides flexible and efficient facilities, its optimal set-up is key in achieving production performance. As many variables are interlinked due to the flexibility provided by the FMS, analytical calculations are not always sufficient to predict the FMS’ performance. Simulation modelling is capable of capturing the complexity and constraints associated with FMS. This paper demonstrates how discrete event simulation (DES) can address complexity in an FMS to optimise the production line performance. A case study of an automotive FMS is presented. The DES model demonstrates different configuration options depending on prioritising objectives: utilisation and throughput. Additionally, this paper provides insight into understanding the impact of system set-up constraints on the FMS performance and demonstrates the exploration of the optimal production set-up.
Keywords: Automotive, capacity performance, discrete event simulation, flexible manufacturing system.
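A minimal discrete event simulation of a small flexible manufacturing cell is sketched below with SimPy; the machine count, cycle time, arrival gap and shift length are assumed values, and the commercial DES tool actually used in the case study is not reproduced here.

```python
import simpy

# Hedged sketch of a DES model of a small manufacturing cell (assumed values).
CYCLE_TIME_MIN = 6.0      # machining time per part (assumed)
ARRIVAL_GAP_MIN = 4.0     # time between part arrivals (assumed)
completed = 0

def part(env, machines):
    global completed
    with machines.request() as req:          # queue for a free machine
        yield req
        yield env.timeout(CYCLE_TIME_MIN)    # machining
        completed += 1

def generator(env, machines):
    while True:
        env.process(part(env, machines))
        yield env.timeout(ARRIVAL_GAP_MIN)   # next part arrives

env = simpy.Environment()
machines = simpy.Resource(env, capacity=2)   # two interchangeable machines
env.process(generator(env, machines))
env.run(until=8 * 60)                        # one 8-hour shift, in minutes

throughput = completed / 8                   # parts per hour
print(f"throughput: {throughput:.1f} parts/h")
```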
4489 Application of Exact String Matching Algorithms towards SMILES Representation of Chemical Structure
Authors: Ahmad Fadel Klaib, Zurinahni Zainol, Nurul Hashimah Ahamed, Rosma Ahmad, Wahidah Hussin
Abstract:
Bioinformatics and cheminformatics are computer-based disciplines that provide tools for the acquisition, storage, processing, analysis and integration of biological and chemical data, and for the development of potential applications of these data. A chemical database is a database designed exclusively to store chemical information. NMRShiftDB is one of the main databases used to represent chemical structures as 2D or 3D structures. The SMILES format is one of many ways to write a chemical structure in a linear format. In this study, we extracted antimicrobial structures in SMILES format from NMRShiftDB and stored them, with their corresponding information, in our local data warehouse. Additionally, we developed a searching tool that responds to a user's query using the JME Editor tool, which allows the user to draw or edit molecules and converts the drawn structure into SMILES format. We applied the Quick Search algorithm to search for antimicrobial structures in our local data warehouse.
Keywords: Exact String-matching Algorithms, NMRShiftDB, SMILES Format, Antimicrobial Structures.
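For reference, a compact sketch of the Quick Search exact string-matching algorithm (Sunday's bad-character rule) applied to a SMILES string is given below; the pattern and the sample SMILES text are illustrative, not part of the NMRShiftDB extraction.

```python
# Hedged sketch of the Quick Search algorithm on a SMILES string.
def quick_search(pattern: str, text: str) -> list[int]:
    m, n = len(pattern), len(text)
    # Shift table: distance to align the character just past the window.
    shift = {c: m - i for i, c in enumerate(pattern)}   # rightmost occurrence wins
    positions, i = [], 0
    while i <= n - m:
        if text[i:i + m] == pattern:
            positions.append(i)
        next_char = text[i + m] if i + m < n else None
        i += shift.get(next_char, m + 1)                # default shift is m + 1
    return positions

smiles = "CC(=O)Oc1ccccc1C(=O)O"          # aspirin, used only as sample text
print(quick_search("c1ccccc1", smiles))   # [7]
```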
4488 Need to Implement the Environmental Accounting Education for Sustainable Development: An Overview
Authors: Noor Mohammad
Abstract:
Environmental accounting is a recent phenomenon in modern jurisprudence. It can reflect corporate governance mechanisms in line with sound natural resource and environmental management and administration systems in any country of the world. It can also be a corporate focus on improving environmental quality, but it is often ignored for reasons such as a lack of awareness and a lack of ethical education. At present, the world community is very much concerned about the state of environmental accounting and auditing systems, as they bear on the sustainability of the earth for future generations. Environmental accounting is one of the important tools for understanding the role played by the natural environment in the economy. It provides data that highlight both the contribution of natural resources to economic well-being and the costs imposed by pollution or resource degradation. It can play a critical role as part of the work of many international environmental organizations, such as IUCN, WWF, PADELIA and WRI, which have been taking initiatives to ensure environmental accounting for our collective survival. Global state actors have already taken some green accounting initiatives under forums such as the United Nations Division for Sustainable Development, the United Nations Statistical Division, the United Nations Conference on Environment and Development (the Earth Summit) in Rio de Janeiro, and the Johannesburg Conference of 2002. This study provides an overview of environmental accounting education based on primary and secondary sources and on a survey of 25 respondents.
Keywords: Environmental Accounting, Auditing Education and Sustainable Development
4487 The Social Dynamics of Pandemics: A Clinical Sociological Analysis of Precautions and Risks
Authors: C. Ardil
Abstract:
The COVID-19 pandemic has revealed the complex and multifaceted relationship between societal structures and public health, emphasizing the need for a holistic approach to understanding pandemic responses. This study utilizes a clinical sociological perspective to analyze the social impacts of pandemics, with a particular focus on how social determinants such as income, education, race, and geographical location influence vulnerability and resilience. It explores the critical role of risk perception, communication strategies, and community dynamics in shaping public adherence to precautionary measures like mask-wearing, social distancing, and vaccination. By examining the ways in which social norms, structural inequalities, and trust in institutions affect public behavior, this study provides insights into the challenges of managing health crises in diverse communities. Comparative case studies and policy analysis are employed to highlight the variations in pandemic responses across different countries and regions, illustrating the importance of coordinated strategies and community-based interventions. The findings underscore that effective pandemic response requires addressing underlying social inequities, fostering community cohesion, and ensuring equitable access to healthcare and information. This study contributes to a deeper understanding of the broader societal implications of pandemics and offers recommendations for building more resilient, inclusive public health systems capable of mitigating the impact of future global health emergencies.
Keywords: Behavioral medicine, clinical sociology, community health, COVID-19, COVID-19 pandemic, epidemiology, infectious diseases, pandemics, precautions, psychology, public health, risks, social determinants, social dynamics, social psychiatry, social psychology, socioeconomic status, structural functionalism
4486 Data Centers’ Temperature Profile Simulation Optimized by Finite Elements and Discretization Methods
Authors: José Alberto García Fernández, Zhimin Du, Xinqiao Jin
Abstract:
Nowadays, the data center industry faces strong challenges in increasing speed and data processing capacity while at the same time trying to keep devices at a suitable working temperature without penalizing that capacity. Consequently, the cooling systems of this kind of facility use a large amount of energy to dissipate the heat generated inside the servers, and developing new cooling techniques or perfecting those already in existence would be a great advance for this type of industry. The installation of a temperature sensor matrix distributed throughout the structure of each server would provide the information required to obtain a temperature profile inside it instantly. However, the number of temperature probes required to obtain temperature profiles with sufficient accuracy is very high and expensive. Therefore, other less intrusive techniques are employed, in which each point that characterizes the server temperature profile is obtained by solving differential equations through simulation methods, simplifying data collection but increasing the time needed to obtain results. In order to reduce these calculation times, complicated and slow computational fluid dynamics simulations are replaced by simpler and faster finite element method simulations, which solve the Burgers‘ equations by backward, forward and central discretization techniques after simplifying the energy and enthalpy conservation differential equations. The discretization methods employed for solving the first- and second-order derivatives of the Burgers‘ equation obtained after these simplifications are the key to obtaining results with greater or lesser accuracy, regardless of the characteristic truncation error.
Keywords: Burgers’ equations, CFD simulation, data center, discretization methods, FEM simulation, temperature profile.
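A simplified sketch of the kind of discretization described above is shown below: the 1-D viscous Burgers' equation solved with a forward difference in time, a backward (upwind) difference for the convective term and a central difference for the diffusive term; the grid, time step, viscosity and initial profile are assumed values, not those of the data center model.

```python
import numpy as np

# Hedged sketch: u_t + u u_x = nu u_xx discretized with forward time,
# backward (upwind) convection and central diffusion. All parameters assumed.
nx, nt = 101, 400
dx, dt, nu = 2.0 / (nx - 1), 0.001, 0.07
x = np.linspace(0.0, 2.0, nx)
u = np.where((x >= 0.5) & (x <= 1.0), 2.0, 1.0)   # step-like initial condition

for _ in range(nt):
    un = u.copy()
    # interior nodes: backward difference for convection, central for diffusion
    u[1:-1] = (un[1:-1]
               - dt / dx * un[1:-1] * (un[1:-1] - un[:-2])
               + nu * dt / dx**2 * (un[2:] - 2 * un[1:-1] + un[:-2]))
    u[0], u[-1] = 1.0, 1.0                         # fixed boundary values

print("peak value after advection/diffusion:", round(float(u.max()), 3))
```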
4485 Keyloggers Prevention with Time-Sensitive Obfuscation
Authors: Chien-Wei Hung, Fu-Hau Hsu, Chuan-Sheng Wang, Chia-Hao Lee
Abstract:
Nowadays, the abuse of keyloggers is one of the most widespread approaches to stealing sensitive information. In this paper, we propose an On-Screen Prompts Approach to Keyloggers (OSPAK), intended for installation on public computers, and present its analysis. OSPAK utilizes a canvas to cue users when their keystrokes are going to be logged or ignored by OSPAK. This approach can protect computers against the recording of sensitive inputs by obfuscating keyloggers with letters inserted among users' keystrokes. It adds a canvas below each password field in a webpage and consists of three parts: two background areas, a hit area and a moving foreground object. Letters typed during different valid time intervals are combined in accordance with their time interval order, and valid time intervals are interleaved with invalid time intervals. OSPAK utilizes animation to visualize valid and invalid time intervals and can be integrated into a webpage as a browser extension. We have tested it against a series of known keyloggers and also performed a study with 95 users to evaluate how easily the tool is used. Experimental results obtained with volunteers show that OSPAK is a simple approach.
Keywords: Authentication, computer security, keylogger, privacy, information leakage.
4484 Facial Expression Phoenix (FePh): An Annotated Sequenced Dataset for Facial and Emotion-Specified Expressions in Sign Language
Authors: Marie Alaghband, Niloofar Yousefi, Ivan Garibay
Abstract:
Facial expressions are important parts of both gesture and sign language recognition systems. Despite recent advances in both fields, annotated facial expression datasets in the context of sign language are still scarce resources. In this manuscript, we introduce an annotated sequenced facial expression dataset in the context of sign language, comprising over 3000 facial images extracted from the daily news and weather forecasts of the public TV station PHOENIX. Unlike the majority of currently existing facial expression datasets, FePh provides sequenced semi-blurry facial images with different head poses, orientations, and movements. In addition, in the majority of images, identities are mouthing the words, which makes the data more challenging. To annotate this dataset, we consider primary, secondary, and tertiary dyads of the seven basic emotions "sad", "surprise", "fear", "angry", "neutral", "disgust", and "happy". We also consider a "None" class for cases where the image's facial expression could not be described by any of the aforementioned emotions. Although we provide FePh as a facial expression dataset of signers in sign language, it has a wider application in gesture recognition and human-computer interaction (HCI) systems.
Keywords: Annotated facial expression dataset, sign language recognition, gesture recognition, sequenced facial expression dataset.
4483 A Two-Stage Expert System for Diagnosis of Leukemia Based on Type-2 Fuzzy Logic
Authors: Ali Akbar Sadat Asl
Abstract:
Diagnosing and making decisions about diseases in the medical field involves inherent uncertainty, which can affect the whole process of treatment. Such a decision is made based on expert knowledge and the way in which an expert interprets the patient's condition, and the interpretations of different experts of the same patient's condition may differ. Fuzzy logic can provide mathematical modeling for many concepts, variables, and systems that are unclear and ambiguous, and it can also provide a framework for reasoning, inference, control, and decision making under conditions of uncertainty. In systems with high uncertainty and high complexity, fuzzy logic is a suitable modeling method. In this paper, we use type-2 fuzzy logic to model the uncertainty involved in the diagnosis of leukemia. The proposed system uses an indirect-direct approach and consists of two stages. In the first stage, the inference of the blood test state is determined; in this step, we use an indirect approach in which the rules are extracted automatically by implementing a clustering approach. In the second stage, signs of leukemia, the duration of the disease until its progression and the output of the first stage are combined, and the final diagnosis of the system is obtained; in this stage, the system uses a direct approach and the final diagnosis is determined by the expert. The obtained results show that the type-2 fuzzy expert system can diagnose leukemia with an average accuracy of about 97%.
Keywords: Expert system, leukemia, medical diagnosis, type-2 fuzzy logic.
4482 Intelligent Earthquake Prediction System Based On Neural Network
Authors: Emad Amar, Tawfik Khattab, Fatma Zada
Abstract:
Predicting earthquakes is an important issue in the study of geography. Accurate prediction of earthquakes can help people take effective measures to minimize personal and economic damage, such as large numbers of casualties, the destruction of buildings and the disruption of traffic, which occur within a few seconds. The United States Geological Survey (USGS) science organization provides reliable scientific information about earthquakes that have occurred throughout history, and the preliminary database from the National Earthquake Information Center (NEIC) shows some useful factors for predicting an earthquake in a seismic area such as the Aleutian Arc in the U.S. state of Alaska. The main advantage of this prediction method is that it does not require any assumptions; it makes predictions according to the future evolution of the object's time series. The article compares simulation results from trained BP and RBF neural networks against actual output results from the system calculations. Therefore, this article focuses on the analysis of data relating to real earthquakes. Evaluation results show better accuracy and higher speed when using a radial basis function (RBF) neural network.
Keywords: BP neural network, Prediction, RBF neural network.
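As a minimal illustration of the RBF architecture compared in the study, the sketch below fits Gaussian basis functions with least-squares output weights to a toy time series; the data, the number of centres and the basis width are assumptions, not the USGS/NEIC data.

```python
import numpy as np

# Hedged sketch of an RBF network: Gaussian basis functions on fixed centres,
# output weights fitted by linear least squares. Data and widths are toy values.
rng = np.random.default_rng(2)
t = np.linspace(0, 10, 200).reshape(-1, 1)            # time index of the series
y = np.sin(t).ravel() + 0.1 * rng.normal(size=200)    # toy noisy series

centres = np.linspace(0, 10, 15).reshape(1, -1)       # 15 RBF centres
width = 0.8                                           # common Gaussian width

def design_matrix(x):
    return np.exp(-((x - centres) ** 2) / (2 * width ** 2))

Phi = design_matrix(t)                                # (200, 15) activations
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)           # output-layer weights

y_hat = design_matrix(t) @ w
rmse = np.sqrt(np.mean((y - y_hat) ** 2))
print(f"training RMSE: {rmse:.3f}")
```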
4481 Pallet Tracking and Cost Optimization of the Flow of Goods in Logistics Operations by Serial Shipping Container Code
Authors: Dominika Crnjac Milic, Martina Martinovic, Vladimir Simovic
Abstract:
The case study method used in this paper shows the implementation of information technology (IT) and the Serial Shipping Container Code (SSCC) in a Croatian company that deals with logistics operations and provides logistics services in the cold chain segment. This company is aware of the sensitivity of the goods entrusted to it by the users of its services, as well as of the importance of speed and accuracy in providing logistics services. To that end, it has implemented and used the latest IT to ensure the highest standard of high-quality logistics services for its customers. Looking for efficiency and optimization of supply chain management while maintaining a high level of quality of the products that are sold, today's users of outsourced logistics services are open to the implementation of new IT products that ultimately deliver savings. By analysing the positive results and the difficulties that arise when using this technology, we aim to provide an insight into the potential of this approach for the logistics service provider.
Keywords: Logistics operations, serial shipping container code, SSCC, information technology, cost optimization.
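For context, the sketch below shows the structure of an 18-digit SSCC and the standard GS1 mod-10 check digit computation; the extension digit, company prefix and serial reference are made-up example values.

```python
# Hedged sketch of an 18-digit SSCC and its GS1 check digit (standard mod-10
# rule with alternating 3/1 weights); all component values are examples.
def gs1_check_digit(digits17: str) -> int:
    """Check digit for the first 17 digits of an SSCC."""
    total = 0
    for position, char in enumerate(reversed(digits17)):
        weight = 3 if position % 2 == 0 else 1   # rightmost data digit gets 3
        total += int(char) * weight
    return (10 - total % 10) % 10

extension = "3"                 # logistic-unit extension digit (example)
company_prefix = "0861234"      # GS1 company prefix (example)
serial_reference = "567890123"  # serial reference assigned by the shipper
body = extension + company_prefix + serial_reference   # 17 digits
sscc = body + str(gs1_check_digit(body))
print(sscc, len(sscc))          # 18-digit SSCC ready to encode in GS1-128
```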
4480 Designing Pictogram for Food Portion Size
Authors: Y.C. Liu, S.J. Lu, Y.C. Weng, H. Su
Abstract:
The objective of this paper is to investigate a new approach, based on the idea of pictograms, for representing food portion sizes. This approach adopts the model of the United States Pharmacopeia Drug Information (USP-DI). The representation of each food portion size is composed of three parts: frame, connotation of dietary portion size and layout. To investigate users' comprehension of this approach, two experiments were conducted with 122 Taiwanese people, 60 male and 62 female, aged between 16 and 64 (divided into age groups of 16-30, 31-45 and 46-64). In Experiment 1, the mean correct rate for the understanding level of food items was 48.54% (S.D. = 95.08) and the mean response time 2.89 s (S.D. = 2.14). The difference in the correct rates for the different age groups was significant (P* = 0.00 < 0.05). In Experiment 2, the correct rate for selecting the right life-size measurement aid was 65.02% (S.D. = 21.31). The results showed the potential of the approach for certain food portion sizes. Issues raised for discussion include comprehension of numerous food varieties in an open environment, the selection of photographs or drawings, and the reasons for the different correct rates for the measurement aid. This research could also be of use to those interested in systematic and pictorial representation of dietary portion size information.
Keywords: Comprehension, food portion size, model of dietary information, pictogram design, USP-DI.
4479 Effective Collaboration in Product Development via a Common Sharable Ontology
Authors: Sihem Mostefai, Abdelaziz Bouras, Mohamed Batouche
Abstract:
To achieve competitive advantage nowadays, most industrial companies consider that success is sustained by great product development, that is, by managing the product throughout its entire lifetime, from design through manufacture and operation to destruction. Achieving this goal requires tight collaboration between partners from a wide variety of domains, resulting in various product data types and formats as well as different software tools. So far, the lack of a meaningful unified representation of product data semantics has slowed down efficient product development. This paper proposes an ontology-based approach to enable such semantic interoperability. A generic and extendible product ontology is described, gathering the main concepts pertaining to the mechanical field and the relations that hold among them. The ontology is not exhaustive; nevertheless, it shows that such a unified representation is possible and easily exploitable. This is illustrated through a case study with an example product and some semantic requests to which the ontology responds quite easily. The study proves the efficiency of ontologies as a support for product data exchange and information sharing, especially in product development environments where collaboration is not just a choice but a mandatory prerequisite.
Keywords: Information exchange, product lifecycle management, product ontology, semantic interoperability.
4478 Efficient Dimensionality Reduction of Directional Overcurrent Relays Optimal Coordination Problem
Authors: Fouad Salha, X. Guillaud
Abstract:
Directional overcurrent relays (DOCRs) are commonly used in power system protection as primary protection in distribution and sub-transmission electrical systems and as secondary protection in transmission systems. Coordination of protective relays is necessary to obtain selective tripping. In this paper, an approach for efficiently reducing the DOCR nonlinear optimal coordination (OC) problem is proposed. This is achieved by modifying the objective function and relaxing several constraints according to a classification into four constraint types: non-valid, redundant, pre-obtained and valid constraints. Based on this classification, the effect of far-end faults on the objective function and constraints, and consequently on relay operating time, is studied. The study was carried out, first, by taking both near-end and far-end faults into account in the DOCR coordination problem formulation, and then by considering only faults very close to the primary relays (near-end faults). The optimal coordination is achieved by simultaneously optimizing all variables (TDS and Ip) in a nonlinear environment using genetic-algorithm nonlinear programming techniques. The results of applying the above two approaches to 6-bus and 26-bus systems verify that the treatment of far-end faults in the OC problem formulation does not cause a loss of optimality.
Keywords: Backup/primary relay, coordination time interval (CTI), directional overcurrent relays, genetic algorithm, time dial setting (TDS), pickup current setting (Ip), nonlinear programming.
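For orientation, a commonly used inverse-time relay characteristic and the coordination constraint that formulations of this kind typically impose are shown below; the IEC standard-inverse constants are given only as an illustration and are not necessarily those used in the paper.

```latex
% Illustrative only: an IEC standard-inverse operating-time characteristic and
% the coordination constraint typical of OC formulations (constants 0.14 and
% 0.02 are the IEC standard-inverse values, not necessarily the paper's).
\[
  t_{op} = TDS \cdot \frac{0.14}{\left(\dfrac{I_f}{I_p}\right)^{0.02} - 1},
  \qquad
  t_{backup} - t_{primary} \ge CTI
\]
% The optimization minimizes the sum of primary-relay operating times over all
% fault locations, with TDS and I_p as the decision variables.
```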
4477 Modeling Uncertainty in Multiple Criteria Decision Making Using the Technique for Order Preference by Similarity to Ideal Solution for the Selection of Stealth Combat Aircraft
Authors: C. Ardil
Abstract:
Uncertainty set theory is a generalization of fuzzy set theory and intuitionistic fuzzy set theory. It serves as an effective tool for dealing with inconsistent, imprecise, and vague information. The technique for order preference by similarity to ideal solution (TOPSIS) method is a multiple-attribute method used to identify solutions from a finite set of alternatives. It simultaneously minimizes the distance from an ideal point and maximizes the distance from a nadir point. In this paper, an extension of the TOPSIS method for multiple attribute group decision-making (MAGDM) based on uncertainty sets is presented. In uncertainty decision analysis, decision-makers express information about attribute values and weights using uncertainty numbers to select the best stealth combat aircraft.
Keywords: Uncertainty set, stealth combat aircraft selection, multiple criteria decision-making analysis, MCDM, uncertainty decision analysis, TOPSIS
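A sketch of the classical (crisp) TOPSIS steps that the uncertainty-set extension builds on is given below; the decision matrix, weights and criterion directions are invented values, not the paper's stealth combat aircraft data.

```python
import numpy as np

# Hedged sketch of classical TOPSIS (crisp version only; made-up values).
X = np.array([[7.0, 9.0, 9.0, 8.0],      # rows: candidate aircraft
              [8.0, 7.0, 8.0, 7.0],      # columns: evaluation criteria
              [9.0, 6.0, 8.0, 9.0]])
w = np.array([0.3, 0.3, 0.2, 0.2])       # criterion weights (sum to 1)
benefit = np.array([True, True, True, False])  # False = cost criterion

R = X / np.sqrt((X ** 2).sum(axis=0))    # vector-normalize each column
V = R * w                                # weighted normalized matrix

ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))      # positive ideal
nadir = np.where(benefit, V.min(axis=0), V.max(axis=0))      # negative ideal

d_plus = np.sqrt(((V - ideal) ** 2).sum(axis=1))   # distance to ideal point
d_minus = np.sqrt(((V - nadir) ** 2).sum(axis=1))  # distance to nadir point
closeness = d_minus / (d_plus + d_minus)           # relative closeness

print("ranking (best first):", np.argsort(-closeness))
```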
4476 Individual Differences and Paired Learning in Virtual Environments
Authors: Patricia M. Boechler, Heather M. Gautreau
Abstract:
In this research study, postsecondary students completed an information learning task in an avatar-based 3D virtual learning environment. Three factors were of interest in relation to learning; 1) the influence of collaborative vs. independent conditions, 2) the influence of the spatial arrangement of the virtual environment (linear, random and clustered), and 3) the relationship of individual differences such as spatial skill, general computer experience and video game experience to learning. Students completed pretest measures of prior computer experience and prior spatial skill. Following the premeasure administration, students were given instruction to move through the virtual environment and study all the material within 10 information stations. In the collaborative condition, students proceeded in randomly assigned pairs, while in the independent condition they proceeded alone. After this learning phase, all students individually completed a multiple choice test to determine information retention. The overall results indicated that students in pairs did not perform any better or worse than independent students. As far as individual differences, only spatial ability predicted the performance of students. General computer experience and video game experience did not. Taking a closer look at the pairs and spatial ability, comparisons were made on pairs high/matched spatial ability, pairs low/matched spatial ability and pairs that were mismatched on spatial ability. The results showed that both high/matched pairs and mismatched pairs outperformed low/matched pairs. That is, if a pair had even one individual with strong spatial ability they would perform better than pairs with only low spatial ability individuals. This suggests that, in virtual environments, the specific individuals that are paired together are important for performance outcomes. The paper also includes a discussion of trends within the data that have implications for virtual environment education.
Keywords: Avatar-based, virtual environment, paired learning, individual differences.
4475 A Model for Estimation of Efforts in Development of Software Systems
Authors: Parvinder S. Sandhu, Manisha Prashar, Pourush Bassi, Atul Bisht
Abstract:
Software effort estimation is the process of predicting the most realistic amount of effort required to develop or maintain software based on incomplete, uncertain and/or noisy input. Effort estimates may be used as input to project plans, iteration plans and budgets. There are various models, such as the Halstead, Walston-Felix, Bailey-Basili, Doty and GA-based models, which have already been used to estimate the software effort for projects. In this study, statistical models, a Fuzzy-GA system and a Neuro-Fuzzy (NF) inference system are used experimentally to estimate the software effort for projects. The performances of the developed models were tested on NASA software project datasets, and the results are compared with the Halstead, Walston-Felix, Bailey-Basili, Doty and genetic-algorithm-based models mentioned in the literature. The results show that the NF model has the lowest MMRE and RMSE values and performs best compared with the Fuzzy-GA-based hybrid inference system and the other existing models used for effort prediction.
Keywords: Neuro-Fuzzy model, Halstead model, Walston-Felix model, Bailey-Basili model, Doty model, GA-based model, genetic algorithm.
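The two error measures used for the comparison can be illustrated as follows; the KLOC and actual-effort figures are invented, and the predictions use the commonly quoted Walston-Felix form E = 5.2·KLOC^0.91 rather than the NASA dataset or the NF model itself.

```python
import numpy as np

# Hedged sketch of MMRE and RMSE. Predicted effort comes from the commonly
# quoted Walston-Felix form (person-months); all figures are illustrative.
kloc = np.array([10.0, 46.0, 90.0])
actual_effort = np.array([24.0, 96.0, 210.0])
predicted_effort = 5.2 * kloc ** 0.91

mre = np.abs(actual_effort - predicted_effort) / actual_effort
mmre = mre.mean()                                    # mean magnitude of relative error
rmse = np.sqrt(np.mean((actual_effort - predicted_effort) ** 2))

print(f"MMRE = {mmre:.3f}, RMSE = {rmse:.1f} person-months")
```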
4474 The Experimental and Numerical Analysis of the Joining Processes for Air Conditioning Systems
Authors: M.St. Węglowski, D. Miara, S. Błacha, J. Dworak, J. Rykała, K. Kwieciński, J. Pikuła, G. Ziobro, A. Szafron, P. Zimierska-Nowak, M. Richert, P. Noga
Abstract:
In this paper, the results of welding of elements of cars' air-conditioning systems are presented. These systems are, at present, based mainly on environmentally unfriendly refrigerants; thus, car producers will have to stop using traditional refrigerants and change to carbon dioxide (R744), which is environmentally friendly. However, it should be noted that an air-conditioning system working with the R744 refrigerant operates at high temperature (up to 150 °C) and high pressure (up to 130 bar). These two parameters are much higher than for other refrigerants. Thus, new materials and designs as well as new joining technologies are strongly needed for these systems. AISI 304 and 316L steels as well as 5xxx-series aluminium alloys are ranked among the prospective materials. As joining processes, laser welding, plasma welding, electron beam welding as well as high rotary friction welding can be applied. In the study, metallographic examination based on light microscopy as well as SEM was applied to estimate the quality of the welded joints, and the analysis of the welding was supported by numerical modelling based on the Sysweld software. The results indicated that, using laser, plasma and electron beam welding, it is possible to obtain welds of proper quality in stainless steel. Moreover, high rotary friction welding makes it possible to guarantee metallic continuity in the aluminium welded area. The metallographic examination revealed that grain growth in the heat-affected zone (HAZ) was not observed in the laser and electron beam welded joints, owing to the low heat input and short welding time. Grain growth and subgrains can be observed at room temperature when the solidification mode is austenitic, which caused only small microstructural changes during solidification. A columnar grain structure was found in the weld metal, while equiaxed grains were detected at the interface. The numerical modelling of the laser welding process allowed the temperature profile in the welded joint to be estimated and the dimensions of the welds to be predicted. Agreement between the FEM analysis and the experimental data was achieved.
Keywords: Car’s air-conditioning, microstructure, numerical modelling, welding.
4473 Optimal Path Planning under Priori Information in Stochastic, Time-varying Networks
Authors: Siliang Wang, Minghui Wang, Jun Hu
Abstract:
A novel path planning approach is presented to find optimal paths in stochastic, time-varying networks given prior traffic information. Most existing studies make use of dynamic programming to find the optimal path; however, those methods have proved unable to obtain the global optimal value, and designing efficient algorithms is another challenge. This paper employs a decision-theoretic framework for defining the optimal path: for a given source S and destination D in an urban transit network, we seek an S-D path of lowest expected travel time, where the link travel times are discrete random variables. To overcome the deficiencies of dynamic programming methods, such as the curse of dimensionality and violation of the principle of optimality, an integer programming model is built to realize the assignment of discrete travel time variables to arcs. At the same time, pruning techniques are applied to reduce the computational complexity of the algorithm. The final experiments show the feasibility of the novel approach.
Keywords: Pruning method, stochastic, time-varying networks, optimal path planning.
4472 Learning Classifier Systems Approach for Automated Discovery of Crisp and Fuzzy Hierarchical Production Rules
Authors: Suraiya Jabin, Kamal K. Bharadwaj
Abstract:
This research presents a system for the post-processing of data that takes mined flat rules as input and discovers crisp as well as fuzzy hierarchical structures using a Learning Classifier System approach. A Learning Classifier System (LCS) is basically a machine learning technique that combines evolutionary computing, reinforcement learning, supervised or unsupervised learning and heuristics to produce adaptive systems. An LCS learns by interacting with an environment from which it receives feedback in the form of a numerical reward, and learning is achieved by trying to maximize the amount of reward received. A crisp description of a concept usually cannot represent human knowledge completely and practically. In the proposed Learning Classifier System, the initial population is constructed as a random collection of HPR-trees (related production rules), and crisp / fuzzy hierarchies are evolved. A fuzzy subsumption relation is suggested for the proposed system, and based on a Subsumption Matrix (SM), a suitable fitness function is proposed. Suitable genetic operators are proposed for the chosen chromosome representation method, and a suitable reward and punishment scheme is proposed for implementing reinforcement. Experimental results are presented to demonstrate the performance of the proposed system.
Keywords: Hierarchical production rule, data mining, Learning Classifier System, fuzzy subsumption relation, subsumption matrix, reinforcement learning.
4471 A Wavelet Based Object Watermarking System for Image and Video
Authors: Abdessamad Essaouabi, Ibnelhaj Elhassane
Abstract:
Efficient storage, transmission and use of video information are key requirements in many multimedia applications currently being addressed by MPEG-4. To fulfill these requirements, a new approach to representing video information, which relies on an object-based representation, has been adopted. Therefore, object-based watermarking schemes are needed for copyright protection. This paper proposes a novel blind object watermarking scheme for images and video using the in-place lifting shape-adaptive discrete wavelet transform (SA-DWT). In order to make the watermark robust and transparent, the watermark is embedded in the average of wavelet blocks using a visual model based on the human visual system, and the n least significant bits (LSBs) of the wavelet coefficients are adjusted in concert with the average. Simulation results show that the proposed watermarking scheme is perceptually invisible and robust against many attacks such as lossy image/video compression (e.g. JPEG, JPEG2000 and MPEG-4), scaling, adding noise, filtering, etc.
Keywords: Watermark, visual model, robustness, in-place lifting shape-adaptive discrete wavelet transform.
4470 An Exploratory Study Regarding the Effects of Auditor Switch, Auditee’s Industry, and Auditee’s Location on Audit Fees in Australia
Authors: Ashkan Mirzay Fashami
Abstract:
This study examines the effects of auditor switch, auditee’s industry, and auditee’s location on audit fees in Australia. It uses fee data of Australian Securities Exchange 500 companies, considering all industry classifications throughout the country from 2006 until 2016. Main findings show that auditor switch does not affect audit fees. However, auditee’s industry affects audit fees. This effect occurs in information technology, financials, energy, and materials sectors among the top 500 companies. Financials, energy, and materials sectors face a fee rise, whereas information technology has a fee cut. The extent of fee changes is different among various industries, wherein the financial sector has the highest increase. Further, auditee’s location affects audit fees. Top 500 companies in Hobart, Perth, and Brisbane face a fee reduction, wherein the highest cut is in Hobart. Further analysis suggests that the Australian audit market is being increasingly concentrated in the hands of the Big Four audit firms.
Keywords: Audit fee, auditor switch, Australia, industry, location.
4469 An Information Theoretic Approach to Rescoring Peptides Produced by De Novo Peptide Sequencing
Authors: John R. Rose, James P. Cleveland, Alvin Fox
Abstract:
Tandem mass spectrometry (MS/MS) is the engine driving high-throughput protein identification. Protein mixtures, possibly representing thousands of proteins from multiple species, are treated with proteolytic enzymes that cut the proteins into smaller peptides, which are then analyzed to generate MS/MS spectra. The task of determining the identity of a peptide from its spectrum is currently the weak point in the process. Current approaches to de novo sequencing are able to compute candidate peptides efficiently; the problem lies in the limitations of current scoring functions. In this paper we introduce the concept of a proteome signature. By examining proteins and compiling proteome signatures (amino acid usage), it is possible to characterize likely combinations of amino acids and better distinguish between candidate peptides. Our results strongly support the hypothesis that a scoring function that considers amino acid usage patterns is better able to distinguish between candidate peptides, which in turn leads to higher accuracy in peptide prediction.
Keywords: Tandem mass spectrometry, proteomics, scoring, peptide, de novo, mutual information.
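The idea of a proteome signature can be illustrated with a small rescoring sketch: candidate peptides are ranked by their log-likelihood under the amino acid usage frequencies of a reference set; the reference sequences and candidates below are illustrative, not real proteome data.

```python
from collections import Counter
import math

# Hedged sketch of rescoring candidate peptides with a "proteome signature"
# (amino acid usage frequencies); all sequences below are toy examples.
reference_proteins = ["MKTAYIAKQR", "GAVLIMFWP", "MSTNQCKRH"]
counts = Counter("".join(reference_proteins))
total = sum(counts.values())
signature = {aa: counts[aa] / total for aa in counts}   # P(residue)
floor = 1.0 / (total + 20)                              # for unseen residues

def signature_score(peptide: str) -> float:
    """Sum of log P(residue); higher means more consistent with the signature."""
    return sum(math.log(signature.get(aa, floor)) for aa in peptide)

candidates = ["MKTAYK", "WWWWCC"]                       # from de novo sequencing
for pep in sorted(candidates, key=signature_score, reverse=True):
    print(pep, round(signature_score(pep), 2))
```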
4468 A Microcontroller Implementation of Constrained Model Predictive Control
Authors: Amira Kheriji Abbes, Faouzi Bouani, Mekki Ksouri
Abstract:
Model Predictive Control (MPC) is an established control technique in a wide range of process industries. The reason for this success is its ability to handle multivariable systems and systems having input, output or state constraints. Nevertheless, compared to the PID controller, the implementation of MPC in miniaturized devices like Field Programmable Gate Arrays (FPGAs) and microcontrollers has historically been very small scale, due to its implementation complexity and computation time requirements. At the same time, such embedded technologies have become an enabler for future manufacturing enterprises as well as a transformer of organizations and markets. In this work, we take advantage of the recent advances in this area to deploy one of the most studied and applied control techniques in industrial engineering. In this paper, we propose an efficient firmware for the implementation of constrained MPC on the STM32 microcontroller using an interior point method. Indeed, the performance study shows good execution speed and a low computational burden. These results encourage the development of predictive control algorithms to be programmed for standard industrial processes. A PID anti-windup controller was also implemented on the STM32 in order to make a performance comparison with the MPC. The main features of the proposed constrained MPC framework are illustrated through two examples.
Keywords: Embedded software, microcontroller, constrained Model Predictive Control, interior point method, PID anti-windup, Keil tool, C/Cµ language.
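For orientation, the generic constrained-MPC quadratic program that such firmware must solve at every sampling instant is sketched below; the cost structure is the standard textbook form, and the weights, horizons and constraint sets of the paper are not reproduced.

```latex
% Illustrative only: the generic constrained-MPC optimization solved online at
% each sampling instant (prediction horizon N, control horizon N_u, reference r).
\[
  \min_{\Delta u_k,\dots,\Delta u_{k+N_u-1}}
  \sum_{i=1}^{N} \lVert \hat{y}_{k+i} - r_{k+i} \rVert_Q^2
  + \sum_{j=0}^{N_u-1} \lVert \Delta u_{k+j} \rVert_R^2
\]
\[
  \text{subject to}\quad
  u_{\min} \le u_{k+j} \le u_{\max},\qquad
  y_{\min} \le \hat{y}_{k+i} \le y_{\max},
\]
% solved with an interior point method; only the first move \(\Delta u_k\) is
% applied before the optimization is repeated at the next sample.
```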