Search results for: cloud computing systems
9616 Hybrid Subspace Approach for Time Delay Estimation in MIMO Systems
Authors: Mojtaba Saeedinezhad, Sarah Yousefi
Abstract:
In this paper, we present a hybrid subspace approach for Time Delay Estimation (TDE) in multivariable systems. While several methods have been proposed for time delay estimation in SISO systems, delay estimation in MIMO systems has always been a major challenge. In these systems, the existing TDE methods have significant limitations because most procedures are based only on system response estimation or correlation analysis. We introduce a new hybrid method for TDE in MIMO systems based on subspace identification and the explicit output error method, and compare its performance with previously introduced procedures in the presence of different noise levels and in a statistical manner. The best method is then selected with a multi-objective decision-making technique. It is shown that the performance of the new approach is much better than that of the existing methods, even in low signal-to-noise conditions.
Keywords: system identification, time delay estimation, ARX, OE, merit ratio, multivariable decision making
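As a point of reference for the correlation-based procedures the abstract mentions, a minimal sketch of cross-correlation delay estimation for one input-output channel pair is shown below; the signal names, sampling rate and test data are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

def estimate_delay_xcorr(u, y, fs=1.0):
    """Estimate the delay of output y relative to input u (in samples and seconds)
    by locating the peak of their cross-correlation, a common TDE baseline."""
    u = np.asarray(u, dtype=float) - np.mean(u)
    y = np.asarray(y, dtype=float) - np.mean(y)
    corr = np.correlate(y, u, mode="full")        # lags from -(N-1) to +(N-1)
    lags = np.arange(-len(u) + 1, len(y))
    d = lags[np.argmax(corr)]                     # lag with maximum correlation
    return d, d / fs

# Illustrative use: a known 15-sample delay recovered from noisy data.
rng = np.random.default_rng(0)
u = rng.standard_normal(1000)
y = np.roll(u, 15) + 0.1 * rng.standard_normal(1000)
print(estimate_delay_xcorr(u, y, fs=100.0))       # -> (15, 0.15)
```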
Procedia PDF Downloads 346
9615 A Multi-agent System Framework for Stakeholder Analysis of Local Energy Systems
Authors: Mengqiu Deng, Xiao Peng, Yang Zhao
Abstract:
The development of local energy systems requires the collective involvement of different actors from various levels of society. However, stakeholder analysis of local energy systems remains under-developed. This paper proposes a multi-agent system (MAS) framework to facilitate stakeholder analysis of local energy systems. The framework takes into account the most influential stakeholders, including prosumers/consumers, system operators, energy companies and government bodies. Different stakeholders are modeled using agent architectures, for example belief-desire-intention (BDI), to better reflect their motivations and interests in participating in local energy systems. The agent models of the different stakeholders are then integrated into one model of the whole energy system. An illustrative case study is provided to elaborate how to develop a quantitative agent model for different stakeholders, as well as to demonstrate the practicability of the proposed framework. The findings from the case study indicate that the suggested framework and agent model can serve as analytical instruments for enhancing the government’s policy-making process by offering a systematic view of stakeholder interconnections in local energy systems.
Keywords: multi-agent system, BDI agent, local energy systems, stakeholders
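A minimal sketch of how one stakeholder (here a prosumer) could be represented as a BDI-style agent is given below; the attribute names, default beliefs and decision rule are illustrative assumptions, not the paper's model.

```python
from dataclasses import dataclass, field

@dataclass
class ProsumerAgent:
    """Toy BDI-style stakeholder agent: beliefs about its situation,
    desires (goals), and the intention currently being pursued."""
    beliefs: dict = field(default_factory=lambda: {
        "price_per_kwh": 0.30, "own_generation_kw": 2.0, "demand_kw": 1.5})
    desires: list = field(default_factory=lambda: [
        "minimize_cost", "maximize_self_consumption"])
    intention: str = "idle"

    def deliberate(self):
        # Pick an intention from the desires, given the current beliefs.
        surplus = self.beliefs["own_generation_kw"] - self.beliefs["demand_kw"]
        if surplus > 0 and self.beliefs["price_per_kwh"] > 0.25:
            self.intention = "sell_surplus"
        elif surplus < 0:
            self.intention = "buy_from_grid"
        else:
            self.intention = "self_consume"
        return self.intention

agent = ProsumerAgent()
print(agent.deliberate())   # -> "sell_surplus" with the default beliefs
```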
Procedia PDF Downloads 87
9614 Interoperability Model Design of Smart Grid Power System
Authors: Seon-Hack Hong, Tae-Il Choi
Abstract:
Interoperability is defined as the ability of systems, components, and devices developed by different entities to exchange information smoothly and to function organically without prior mutual consultation: to communicate with computer systems of the same or of different types, and to use the information exchanged without extra effort. Due to a lack of interoperability in the electric power system, functions are duplicated when developing systems and applications, and efficiency is low because there is no common mechanism for exchanging information between application programs. Securing interoperability remedies these insufficiencies and enables the seamless linkage of newly developed systems. For this purpose, we design a smart grid-based interoperability standard model in this paper.
Keywords: interoperability, power system, common information model, SCADA, IEEE2030, Zephyr
Procedia PDF Downloads 124
9613 Some Tips for Increasing Online Services Safety
Authors: Mohsen Rezaee
Abstract:
Although robust security software, including anti-virus, anti-spyware, anti-spam and firewall products, has been combined with new technologies such as safe zones, hybrid cloud and sandboxing, and although it can be said that these tools reached the highest level of protection against viruses, spyware and other malware in 2012, hacker attacks on websites are in fact becoming more and more complicated. Given the pace of developments in security, this was to be expected. In this work, we point out some practical and vital notes to enhance security on the web, enabling the user to browse the unlimited web world safely and to use virtual space securely.
Keywords: firewalls, security, web services, computer science
Procedia PDF Downloads 404
9612 Robust Control of Cyber-Physical System under Cyber Attacks Based on Invariant Tubes
Authors: Bruno Vilić Belina, Jadranko Matuško
Abstract:
The rapid development of cyber-physical systems significantly influences modern control systems, introducing a whole new range of applications but also putting them under new challenges to ensure their resiliency to possible cyber attacks, either in the form of data integrity attacks or deception attacks. This paper presents a model predictive approach to the control of cyber-physical systems that is robust to cyber attacks. We assume that a cyber attack can be modelled as an additive disturbance that acts in the measurement channel. For such a system, we designed a tube-based model predictive controller. The performance of the designed controller has been verified in the Matlab/Simulink environment.
Keywords: control systems, cyber attacks, resiliency, robustness, tube-based model predictive control
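A representative form of the setup described above (an attack as an additive, bounded disturbance in the measurement channel, with the state split into a nominal part and an error kept inside an invariant tube) can be written as follows; the notation is generic and not necessarily the paper's.

```latex
\begin{aligned}
x_{k+1} &= A x_k + B u_k, \qquad y_k = C x_k + a_k, \quad a_k \in \mathcal{A} \ \text{(bounded attack)},\\
x_k &= z_k + e_k, \qquad u_k = v_k + K e_k, \qquad e_k \in \mathcal{E} \ \text{for all } k,
\end{aligned}
```

where $z_k$ and $v_k$ are the nominal state and input optimized by the predictive controller, $K$ is a stabilizing feedback gain, and $\mathcal{E}$ is a robust invariant set (the cross-section of the tube) that bounds the error induced by the attack.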
Procedia PDF Downloads 67
9611 Dynamical Systems and Fibonacci Numbers
Authors: Vandana N. Purav
Abstract:
The dynamical systems concept is a mathematical formalization of any fixed rule that describes the time dependence of a point's position in its ambient space, e.g. the pendulum of a clock, the number of fish each spring in a lake, or the number of rabbits each spring in an enclosure. Dynamical systems theory is used to describe such complex behaviour: a dynamical system governed by differential equations is called a continuous dynamical system, and one governed by difference equations is called a discrete dynamical system. The concept of a dynamical system has its origin in Newtonian mechanics.
Keywords: dynamical systems, Fibonacci numbers, Newtonian mechanics, discrete dynamical system
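As a concrete link between the two topics in the title, the Fibonacci recurrence is itself a linear discrete dynamical system; the standard formulation is added here for illustration.

```latex
F_{n+1} = F_n + F_{n-1}, \qquad
\begin{pmatrix} F_{n+1} \\ F_n \end{pmatrix}
= \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}
  \begin{pmatrix} F_n \\ F_{n-1} \end{pmatrix},
\qquad F_0 = 0,\; F_1 = 1,
```

so the ratio $F_{n+1}/F_n$ converges to the dominant eigenvalue of the iteration matrix, the golden ratio $\varphi = (1+\sqrt{5})/2$.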
Procedia PDF Downloads 492
9610 New Results on Exponential Stability of Hybrid Systems
Authors: Grienggrai Rajchakit
Abstract:
This paper is concerned with the exponential stability of switched linear systems with interval time-varying delays. The time delay is any continuous function belonging to a given interval, in which the lower bound of the delay is not restricted to zero. By constructing a suitable augmented Lyapunov-Krasovskii functional combined with the Leibniz-Newton formula, a switching rule for the exponential stability of switched linear systems with interval time-varying delays and new delay-dependent sufficient conditions for the exponential stability of the systems are first established in terms of LMIs. Finally, some examples are given to illustrate the effectiveness of the proposed schemes.
Keywords: exponential stability, hybrid systems, time-varying delays, Lyapunov-Krasovskii functional, Leibniz-Newton formula
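For orientation, a representative switched linear system with interval time-varying delay and a typical (simplified) Lyapunov-Krasovskii functional of the kind used in such analyses are shown below; the exact augmented functional in the paper may differ.

```latex
\dot{x}(t) = A_{\sigma(t)}\, x(t) + D_{\sigma(t)}\, x(t - h(t)), \qquad 0 < h_1 \le h(t) \le h_2,
\qquad
V(x_t) = x^{\top}(t) P x(t)
       + \int_{t-h(t)}^{t} x^{\top}(s)\, Q\, x(s)\, ds
       + h_2 \int_{-h_2}^{0}\!\!\int_{t+\theta}^{t} \dot{x}^{\top}(s)\, R\, \dot{x}(s)\, ds\, d\theta,
```

with $P, Q, R \succ 0$; requiring $\dot V \le -2\alpha V$ along the trajectories of each subsystem under the designed switching rule yields the delay-dependent LMI conditions.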
Procedia PDF Downloads 544
9609 On the Problems of Human Concept Learning within Terminological Systems
Authors: Farshad Badie
Abstract:
The central focus of this article is the fact that knowledge is constructed from an interaction between humans’ experiences and their conceptions of constructed concepts. The main contributions of this research are a logical characterisation of human inductive learning over humans’ constructed concepts within terminological systems, and a logical background for theorising about the Human Concept Learning Problem (HCLP) in terminological systems. This research connects with the topics of ‘human learning’, ‘epistemology’, ‘cognitive modelling’, ‘knowledge representation’ and ‘ontological reasoning’.
Keywords: human concept learning, concept construction, knowledge construction, terminological systems
Procedia PDF Downloads 325
9608 Thermal Performance and Environmental Assessment of Evaporative Cooling Systems: Case of Mina Valley, Saudi Arabia
Authors: A. Alharbi, R. Boukhanouf, T. Habeebullah, H. Ibrahim
Abstract:
This paper presents a detailed description of the evaporative cooling systems used for space cooling in Mina Valley, Saudi Arabia. The thermal performance and environmental impact of the evaporative coolers were evaluated. It was found that the evaporative cooling systems used for space cooling in pilgrims’ accommodations and in the train stations could reduce energy consumption by as much as 75% and cut carbon dioxide emissions by 78% compared to traditional vapour compression systems.
Keywords: evaporative cooling, vapor compression, electricity consumption, CO2 emission
Procedia PDF Downloads 434
9607 Combined Heat and Power Generation in Pressure Reduction City Gas Station (CGS)
Authors: Sadegh Torfi
Abstract:
Realization of the anticipated energy efficiency from recuperative run-around energy recovery (RER) systems requires identification of the influential parameters of the system components. Because simulation modeling is considered an integral part of the design and economic evaluation of RER systems, it is essential to calibrate the developed models and validate the performance predictions by comparison with data from experimental measurements. Several theoretical and numerical analyses of RER systems have been carried out, but the effect of the distance between the hot and cold flows is generally ignored. The objective of this study is to develop a thermohydraulic model for a typical RER system that accounts for energy loss from the interconnecting piping and for the effect of interconnecting pipe length on the performance of run-around energy recovery systems. Numerical simulation shows that the energy loss from the interconnecting piping changes linearly with pipe length and that, if the pipes are properly insulated, the maximum reduction in effectiveness of RER systems is 2% in typical piping systems.
Keywords: combined heat and power, heat recovery, effectiveness, CGS
Procedia PDF Downloads 200
9606 The Implantable MEMS Blood Pressure Sensor Model With Wireless Powering And Data Transmission
Authors: Vitaliy Petrov, Natalia Shusharina, Vitaliy Kasymov, Maksim Patrushev, Evgeny Bogdanov
Abstract:
The leading causes of death worldwide are ischemic heart disease and other cardiovascular illnesses, and high blood pressure is generally the common symptom. Long-term blood pressure control is very important for prophylaxis, correct diagnosis and timely therapy. Non-invasive methods based on Korotkoff sounds cannot be applied frequently or over long periods. Implantable devices can combine long-term monitoring with high measurement accuracy. The main purpose of this work is to create a real-time monitoring system for decreasing the death rate from cardiovascular diseases. Implantable electronic devices have begun to play an important role in medicine. An implantable device usually consists of a transmitter, a powering unit, which can be wireless with a purpose-made battery, and a measurement circuit. Common problems in making implantable devices are the short lifetime of the battery, large size and biocompatibility. In this work, blood pressure measurement is the focus, because high blood pressure is one of the main symptoms of cardiovascular diseases. Our device consists of three parts: the implantable pressure sensor, an external transmitter and an automated workstation in a hospital. The implantable pressure sensor can be based on piezoresistive or capacitive technologies; both have advantages and limitations. The developed circuit is based on a small capacitive sensor made with microelectromechanical systems (MEMS) technology. The capacitive sensor provides high sensitivity, low power consumption and minimal hysteresis compared to the piezoresistive sensor. For this device, an oscillator-based circuit was selected, in which the frequency depends on the capacitance of the sensor; hence, pressure can be calculated from the capacitance. The external device (transmitter) is used for wireless charging and signal transmission. Some implantable devices for these applications are passive: the external device sends a radio-wave signal to an internal LC circuit, receives the signal reflected from the implant and calculates the change of capacitance, and therefore blood pressure, from the change of frequency. However, this method has disadvantages, such as dependence on patient position and static operation only. The developed implantable device does not have these disadvantages and sends blood pressure data to the external part in real time. The external device continuously sends the blood pressure information to a hospital cloud service for analysis by a physician. The doctor's automated workstation at the hospital also acts as a dashboard, which displays the current medical data of patients requiring attention and stores it in the cloud service. Critical heart conditions usually occur a few hours before a heart attack, and the device is able to send an alarm signal to the hospital for early action by the medical service. The system was tested with wireless charging and data transmission. These results can be used for ASIC design for the MEMS pressure sensor.
Keywords: MEMS sensor, RF power, wireless data, oscillator-based circuit
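A minimal numeric sketch of the oscillator-based read-out idea described above (frequency to sensor capacitance to pressure) is given below; the LC-oscillator model, component values and linear calibration curve are illustrative assumptions, not the device's actual parameters.

```python
import math

L_HENRY = 10e-6          # assumed oscillator inductance (10 uH)
C0_FARAD = 10e-12        # assumed sensor capacitance at 0 mmHg (10 pF)
SENS_F_PER_MMHG = 5e-15  # assumed sensitivity: 5 fF per mmHg

def capacitance_from_frequency(f_hz, l_henry=L_HENRY):
    """Invert the ideal LC resonance f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / ((2.0 * math.pi * f_hz) ** 2 * l_henry)

def pressure_from_capacitance(c_farad, c0=C0_FARAD, sens=SENS_F_PER_MMHG):
    """Assumed linear calibration curve: C = C0 + sens * P."""
    return (c_farad - c0) / sens

# Example: a measured oscillator frequency of ~15.82 MHz
f_meas = 15.82e6
c = capacitance_from_frequency(f_meas)
print(f"C = {c * 1e12:.2f} pF, P = {pressure_from_capacitance(c):.1f} mmHg")
```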
Procedia PDF Downloads 589
9605 Advancing Sustainable Futures: A Study on Low Carbon Ventures
Authors: Gaurav Kumar Sinha
Abstract:
As the world grapples with climate challenges, this study highlights the instrumental role of AWS services in amplifying the impact of low carbon ventures (LCVs). The ability of LCVs to harness the cloud, data analytics, and scalable infrastructure offered by AWS empowers them to innovate, scale, and drive meaningful change in the quest for a sustainable future. This study serves as a rallying cry, urging stakeholders to recognize, embrace, and maximize the potential of AWS-powered solutions in advancing sustainable and resilient global initiatives.
Keywords: low carbon ventures, sustainability solutions, AWS services, data analytics
Procedia PDF Downloads 65
9604 State Estimation Method Based on Unscented Kalman Filter for Vehicle Nonlinear Dynamics
Authors: Wataru Nakamura, Tomoaki Hashimoto, Liang-Kuang Chen
Abstract:
This paper provides a state estimation method for automatic control systems of nonlinear vehicle dynamics. A nonlinear tire model is employed to represent the realistic behavior of a vehicle. In general, not all the state variables of control systems are precisely known, because those variables are observed through output sensors and only limited parts of them may be measurable. Hence, automatic control systems must incorporate some type of state estimation. It is therefore necessary to establish a state estimation method for nonlinear vehicle dynamics with restricted measurable state variables. For this purpose, the unscented Kalman filter is applied in this study to estimate the state variables of nonlinear vehicle dynamics. The objective of this paper is to propose a state estimation method using the unscented Kalman filter for nonlinear vehicle dynamics. The effectiveness of the proposed method is verified by numerical simulations.
Keywords: state estimation, control systems, observer systems, nonlinear systems
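A compact sketch of the unscented transform at the heart of the unscented Kalman filter is given below, applied to a toy nonlinear one-step model; the model, noise levels and tuning parameters are illustrative assumptions and not the vehicle or tire model used in the paper.

```python
import numpy as np

def sigma_points(x, P, alpha=1e-3, beta=2.0, kappa=0.0):
    """Scaled sigma points and weights for mean x and covariance P."""
    n = x.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)             # columns are the spread directions
    pts = np.vstack([x, x + S.T, x - S.T])            # 2n+1 sigma points
    Wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    Wc = Wm.copy()
    Wm[0] = lam / (n + lam)
    Wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
    return pts, Wm, Wc

def ukf_predict(x, P, f, Q):
    """Propagate the mean and covariance through the nonlinear dynamics f."""
    pts, Wm, Wc = sigma_points(x, P)
    Y = np.array([f(p) for p in pts])
    x_pred = Wm @ Y
    P_pred = Q + sum(w * np.outer(y - x_pred, y - x_pred) for w, y in zip(Wc, Y))
    return x_pred, P_pred

# Toy nonlinear dynamics (NOT the vehicle model of the paper)
f = lambda s: np.array([s[0] + 0.1 * s[1], 0.95 * s[1] + 0.05 * np.sin(s[0])])
x0, P0, Q = np.zeros(2), np.eye(2) * 0.1, np.eye(2) * 0.01
print(ukf_predict(x0, P0, f, Q))
```

The measurement update step follows the same pattern, pushing the sigma points through the output map and forming the cross-covariance to compute the Kalman gain.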
Procedia PDF Downloads 135
9603 The Development and Provision of a Knowledge Management Ecosystem, Optimized for Genomics
Authors: Matthew I. Bellgard
Abstract:
The field of bioinformatics has made, and continues to make, substantial progress and contributions to life science research and development. This paper contends that a systems approach should integrate the bioinformatics activities of any project in a defined manner. The application of critical control points in this bioinformatics systems approach may be useful for identifying and evaluating points in a pathway where specified activity risk can be reduced, monitored and quality enhanced.
Keywords: bioinformatics, food security, personalized medicine, systems approach
Procedia PDF Downloads 422
9602 Lyapunov Type Inequalities for Fractional Impulsive Hamiltonian Systems
Authors: Kazem Ghanbari, Yousef Gholami
Abstract:
This paper studies fractional order impulsive Hamiltonian systems and the fractional impulsive Sturm-Liouville type problems derived from these systems. The main purpose of this paper is to obtain so-called Lyapunov type inequalities for the mentioned problems. In view of the applicability of the obtained inequalities, some qualitative properties such as stability, disconjugacy, nonexistence and oscillatory behaviour of fractional Hamiltonian systems and fractional Sturm-Liouville type problems under impulsive conditions are then derived. Finally, we point out that for studying fractional order Hamiltonian systems, we apply the recently introduced fractional conformable operators.
Keywords: fractional derivatives and integrals, Hamiltonian system, Lyapunov-type inequalities, stability, disconjugacy
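For context, the classical (non-fractional, non-impulsive) Lyapunov inequality that results of this kind generalize is recalled below in its textbook form; it is added for orientation only.

```latex
y''(t) + q(t)\, y(t) = 0, \quad t \in (a, b), \qquad y(a) = y(b) = 0, \quad y \not\equiv 0
\;\Longrightarrow\;
\int_a^b |q(t)|\, dt > \frac{4}{b - a}.
```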
Procedia PDF Downloads 355
9601 Evaluation of Parameters of Subject Models and Their Mutual Effects
Authors: A. G. Kovalenko, Y. N. Amirgaliyev, A. U. Kalizhanova, L. S. Balgabayeva, A. H. Kozbakova, Z. S. Aitkulov
Abstract:
It is known that statistical information on the operation of a compound multisite system is often far from describing the actual state of the system and does not allow any conclusions to be drawn about the correctness of its operation. For example, from worldwide practice in operating water supply and water disposal systems, it is known that total measurements at consumers and at suppliers differ by 40-60%. This is connected with measurement inaccuracy as well as with ineffective operation of the corresponding systems. The analysis of widely distributed systems is more difficult still, since in them subjects that make decisions independently interact economically through production, purchase and sale, resale and consumption. This work analyses mathematical models of sellers, consumers and arbitragers, and models of their interaction, in a dispersed single-product market of perfect competition. On the basis of these models, methods allowing estimation of every subject’s operating options, and of the system as a whole, are given.
Keywords: dispersed systems, models, hydraulic network, algorithms
Procedia PDF Downloads 284
9600 A Framework for Automated Nuclear Waste Classification
Authors: Seonaid Hume, Gordon Dobie, Graeme West
Abstract:
Detecting and localizing radioactive sources is a necessity for safe and secure decommissioning of nuclear facilities. An important aspect of managing the sort-and-segregation process is establishing the spatial distributions and quantities of the waste radionuclides, their type, corresponding activity, and ultimately their classification for disposal. The data received from surveys directly informs decommissioning plans, on-site incident management strategies, and the approach needed for a new cell, as well as protecting the workforce and the public. Manual classification of nuclear waste from a nuclear cell is time-consuming, expensive, and requires significant expertise to make the classification judgment call. Also, in-cell decommissioning is still in its relative infancy, and few techniques are well developed. As with any repetitive and routine task, there is an opportunity to improve the classification of nuclear waste using autonomous systems. Hence, this paper proposes a new framework for the automatic classification of nuclear waste. This framework consists of five main stages: 3D spatial mapping and object detection, object classification, radiological mapping, source localisation based on the gathered evidence and, finally, waste classification. The first stage of the framework, 3D visual mapping, involves object detection from point cloud data. A review of related applications in other industries is provided, and recommendations for approaches to waste classification are made. Object detection focuses initially on cylindrical objects, since pipework is significant in nuclear cells and indeed on any industrial site. The approach can be extended to other commonly occurring primitives such as spheres and cubes. This is in preparation for stage two, characterizing the point cloud data and estimating the dimensions, material, degradation, and mass of the objects detected in order to feature-match them to an inventory of possible items found in that nuclear cell. Many items in nuclear cells are one-offs, have limited or poor drawings available, or have been modified since installation, and cells have complex interiors, which often and inadvertently pose difficulties when accessing certain zones and identifying waste remotely. Hence, this may require expert input to feature-match objects. The third stage, radiological mapping, similarly facilitates the characterization of the nuclear cell in terms of radiation fields, including the type of radiation, activity, and location within the nuclear cell. The fourth stage of the framework takes the visual map from stage 1, the object characterization from stage 2, and the radiation map from stage 3 and fuses them together, providing a more detailed scene of the nuclear cell by identifying the location of radioactive materials in three dimensions. The last stage combines the evidence from the fused data sets to reveal the classification of the waste in Bq/kg, thus enabling better decision making and monitoring for in-cell decommissioning. The presentation of the framework is supported by representative case study data drawn from a decommissioning application at a UK nuclear facility. This framework utilises recent advancements in the detection and mapping of complex radiation fields in three dimensions to make the process of classifying nuclear waste faster, more reliable, more cost-effective and safer.
Keywords: nuclear decommissioning, radiation detection, object detection, waste classification
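The five-stage structure described above can be pictured as a simple processing pipeline. The sketch below is a hypothetical skeleton: all type names, placeholder stage functions, field choices and the example activity threshold are invented for illustration and are not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class CellObject:
    """One detected item, enriched as it passes through the five stages."""
    points: list                      # 3D points of the object (stage 1)
    shape: str = "unknown"            # fitted primitive, e.g. "cylinder" (stage 1)
    material: str = "unknown"         # inventory feature match (stage 2)
    mass_kg: float = 1.0              # estimated mass (stage 2)
    activity_bq: float = 0.0          # activity attributed by data fusion (stages 3-4)
    waste_class: str = "pending"      # final classification (stage 5)

# Placeholder stage functions so the sketch runs end to end.
def detect_objects(cloud):
    return [CellObject(points=p) for p in cloud]

def characterise(obj, inventory):
    obj.shape, obj.material, obj.mass_kg = "cylinder", inventory[0], 12.0
    return obj

def build_radiation_map(survey):
    return {"total_bq": sum(survey)}

def localise_sources(objs, rad_map):
    share = rad_map["total_bq"] / len(objs)   # naive equal attribution for the sketch
    for obj in objs:
        obj.activity_bq = share
    return objs

def classify_waste(cloud, survey, inventory, llw_limit_bq_per_kg=4e6):
    objs = detect_objects(cloud)                              # stage 1: 3D mapping + detection
    objs = [characterise(o, inventory) for o in objs]         # stage 2: dimensions, material, mass
    rad_map = build_radiation_map(survey)                     # stage 3: radiological mapping
    objs = localise_sources(objs, rad_map)                    # stage 4: fuse visual + radiation maps
    for obj in objs:                                          # stage 5: specific activity -> class
        obj.waste_class = "LLW" if obj.activity_bq / obj.mass_kg <= llw_limit_bq_per_kg else "ILW"
    return objs

print([o.waste_class for o in classify_waste([[(0, 0, 0)], [(1, 1, 1)]], [5e7, 2e7], ["steel pipe"])])
```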
Procedia PDF Downloads 200
9599 Analysis of Fault Tolerance on Grid Computing in Real Time Approach
Authors: Parampal Kaur, Deepak Aggarwal
Abstract:
In the computational grid, fault tolerance is an imperative issue to be considered during job scheduling. Due to the widespread use of resources, systems are highly prone to errors and failures. Hence, fault tolerance plays a key role in the grid in avoiding the problem of unreliability. Scheduling a task to the appropriate resource is a vital requirement in the computational grid. The fittest-resource scheduling algorithm searches for the appropriate resource based on the job requirements, in contrast to general scheduling algorithms, where jobs are scheduled to the resources with the best performance factor. The proposed method improves the fault tolerance of the fittest-resource scheduling algorithm by scheduling the job in coordination with job replication when the resource has low reliability. Based on the reliability index of the resource, the resource is identified as critical, and the tasks are scheduled according to the criticality of the resources. Results show that the execution time of the tasks is comparatively reduced with the proposed algorithm using a real-time approach rather than a simulator.
Keywords: computational grid, fault tolerance, task replication, job scheduling
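A minimal sketch of the scheduling idea summarized above (pick the fittest resource for the job, and replicate the job when that resource's reliability index marks it as critical) is shown below; the data structures, fitness rule and threshold value are illustrative assumptions, not the paper's algorithm.

```python
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    cpu: int            # available CPU units
    mem_gb: int         # available memory
    reliability: float  # reliability index in [0, 1]

def schedule(job_cpu, job_mem, resources, reliability_threshold=0.8):
    """Return the resources a job is dispatched to: the fittest match,
    plus a replica on the next candidate if the first one is 'critical'."""
    fit = [r for r in resources if r.cpu >= job_cpu and r.mem_gb >= job_mem]
    if not fit:
        return []
    # "Fittest" here: the smallest resource that still satisfies the requirements.
    fit.sort(key=lambda r: (r.cpu, r.mem_gb))
    chosen = [fit[0]]
    if fit[0].reliability < reliability_threshold and len(fit) > 1:
        chosen.append(fit[1])          # replicate the job on a backup resource
    return chosen

pool = [Resource("r1", 4, 8, 0.6), Resource("r2", 8, 16, 0.95), Resource("r3", 16, 32, 0.9)]
print([r.name for r in schedule(job_cpu=4, job_mem=8, resources=pool)])   # -> ['r1', 'r2']
```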
Procedia PDF Downloads 436
9598 Digitalisation of the Railway Industry: Recent Advances in the Field of Dialogue Systems: Systematic Review
Authors: Andrei Nosov
Abstract:
This paper discusses the development directions of dialogue systems within the digitalisation of the railway industry, where technologies based on conversational AI are already potentially applied or will be applied. Conversational AI is one of the popular natural language processing (NLP) tasks, as it has great prospects for real-world applications today. At the same time, it is a challenging task, as it involves many areas of NLP and relies on complex computations and deep insights from linguistics and psychology. In this review, we focus on dialogue systems and their implementation in the railway domain. We comprehensively review state-of-the-art research results on dialogue systems and analyse them from three perspectives: the type of problem to be solved, the type of model, and the type of system. In particular, from the perspective of the types of tasks to be solved, we discuss their characteristics and applications; this helps to understand how to prioritise tasks. In terms of the types of models, we give an overview that will allow researchers to become familiar with how to apply them in dialogue systems. In analysing the types of dialogue systems, we propose an unconventional approach, in contrast to colleagues who traditionally contrast goal-oriented dialogue systems with open-domain systems: our view focuses on retrieval-based and generative approaches. Furthermore, the work comprehensively presents evaluation methods and datasets for dialogue systems in the railway domain to pave the way for future research. Finally, some possible directions for future research are identified based on recent research results.
Keywords: digitalisation, railway, dialogue systems, conversational AI, natural language processing, natural language understanding, natural language generation
Procedia PDF Downloads 63
9597 e-Learning Security: A Distributed Incident Response Generator
Authors: Bel G Raggad
Abstract:
An e-Learning setting is a distributed computing environment whose information resources can be connected to any public network. Public networks are very insecure, which can compromise the reliability of an e-Learning environment. This study is concerned only with the intrusion detection aspect of e-Learning security and with how incident responses are planned. The literature reports great advances in intrusion detection systems (IDS) but neglects an important IDS weakness: suspected events are detected, but an intrusion is not determined because it is not defined in the IDS databases. We propose a distributed incident response generator (DIRG) that produces incident responses when the working IDS suspects an event that does not correspond to a known intrusion. Data involved in intrusion detection when ample uncertainty is present are often not suitable for formal statistical models, including Bayesian ones. We instead adopt Dempster-Shafer theory to process intrusion data for the unknown event. The DIRG engine transforms the data into a belief structure using incident scenarios deduced by the security administrator. Belief values associated with the various incident scenarios are then derived and evaluated to choose the most appropriate scenario, for which an automatic incident response is generated. The article provides a numerical example demonstrating the working of the DIRG system.
Keywords: decision support system, distributed computing, e-Learning security, incident response, intrusion detection, security risk, stateful inspection
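To illustrate the Dempster-Shafer machinery mentioned above, a minimal implementation of Dempster's rule of combination over a small frame of incident scenarios is sketched below; the scenario names and mass values are invented for the example and are not taken from the paper.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic mass assignments (dicts mapping frozensets of
    hypotheses to masses) with Dempster's rule of combination."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb                      # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("Total conflict: sources cannot be combined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Frame of discernment: three hypothetical incident scenarios
SCAN, DOS, BENIGN = "port_scan", "denial_of_service", "benign"
m_sensor = {frozenset({SCAN}): 0.6, frozenset({SCAN, DOS}): 0.3,
            frozenset({SCAN, DOS, BENIGN}): 0.1}
m_admin = {frozenset({DOS}): 0.5, frozenset({SCAN, DOS}): 0.4,
           frozenset({SCAN, DOS, BENIGN}): 0.1}
belief = dempster_combine(m_sensor, m_admin)
print(max(belief, key=belief.get), belief)   # focal set with the largest combined mass
```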
Procedia PDF Downloads 437
9596 State Estimation Based on Unscented Kalman Filter for Burgers’ Equation
Authors: Takashi Shimizu, Tomoaki Hashimoto
Abstract:
Controlling the flow of fluids is a challenging problem that arises in many fields. Burgers’ equation is a fundamental equation for several flow phenomena such as traffic, shock waves, and turbulence. The optimal feedback control method, so-called model predictive control, has been proposed for Burgers’ equation. However, the model predictive control method is inapplicable to systems whose state variables are not all exactly known. From a practical point of view, it is unusual for all the state variables of a system to be exactly known, because the state variables are measured through output sensors and only limited parts of them are available. In fact, the flow velocities of fluid systems usually cannot be measured over the whole spatial domain. Hence, any practical feedback controller for fluid systems must incorporate some type of state estimator. To apply model predictive control to the fluid systems described by Burgers’ equation, a state estimation method for Burgers’ equation with limited measurable state variables must be established. For this purpose, we apply the unscented Kalman filter to estimate the state variables of fluid systems described by Burgers’ equation. The objective of this study is to establish a state estimation method based on the unscented Kalman filter for Burgers’ equation. The effectiveness of the proposed method is verified by numerical simulations.
Keywords: observer systems, unscented Kalman filter, nonlinear systems, Burgers' equation
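For reference, the viscous Burgers’ equation and one common explicit finite-difference discretization, which turns the PDE into the kind of finite-dimensional state equation a Kalman-type filter can work with, are shown below; the discretization choice is a standard one, not necessarily the one used by the authors.

```latex
\frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x} = \nu\,\frac{\partial^2 u}{\partial x^2},
\qquad
u_i^{k+1} = u_i^{k}
  - \frac{\Delta t}{2\,\Delta x}\, u_i^{k}\bigl(u_{i+1}^{k} - u_{i-1}^{k}\bigr)
  + \frac{\nu\,\Delta t}{\Delta x^{2}}\bigl(u_{i+1}^{k} - 2u_i^{k} + u_{i-1}^{k}\bigr),
```

so that the vector $u^k = (u_1^k, \dots, u_N^k)$ of nodal velocities plays the role of the finite-dimensional state $x_k$ estimated by the filter from the limited measurements.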
Procedia PDF Downloads 153
9595 Simple Ways to Enhance the Security of Web Services
Authors: Majid Azarniush, Soroush Mokallaei
Abstract:
Although robust security software, including anti-virus, anti-spyware, anti-spam and firewall products, has been combined with new technologies such as safe zones, hybrid cloud and sandboxing, and although it can be said that these tools reached the highest level of protection against viruses, spyware and other malware in 2012, hacker attacks on websites are in fact becoming more and more complicated. Given the pace of developments in security, this was to be expected. In this work, we point out some practical and vital notes to enhance security on the web, enabling the user to browse the unlimited web world safely and to use virtual space securely.
Keywords: firewalls, security, web services, software
Procedia PDF Downloads 512
9594 Innovation in PhD Training in the Interdisciplinary Research Institute
Authors: B. Shaw, K. Doherty
Abstract:
The Cultural Communication and Computing Research Institute (C3RI) is a diverse multidisciplinary research institute including art, design, media production, communication studies, computing and engineering. Across these disciplines there can seem to be enormous differences of research practice and convention, including differing positions on objectivity and subjectivity, certainty and evidence, and different political and ethical parameters. These differences sit within often unacknowledged histories, codes, and communication styles of specific disciplines, and it is all these aspects that can make understanding of research practice across disciplines difficult. To explore this, a one-day event was orchestrated, testing how a PhD community might communicate and share research in progress in a multi-disciplinary context. Instead of presenting results at a conference, research students were tasked with articulating their method of inquiry. A working party of students from across disciplines had to design a conference call, visual identity and an event framework that would work for students across all disciplines. The process of establishing the shape and identity of the conference was revealing. Even finding a linguistic frame that would meet the expectations of different disciplines for the conference call was challenging. The first abstracts submitted either resorted to reporting findings or described method only briefly. It took several weeks of supported intervention for research students to get ‘inside’ their method and to understand their research practice as a process rich with philosophical and practical decisions and implications. In response to the abstracts, the conference committee generated key methodological categories for conference sessions, including sampling, capturing ‘experience’, ‘making models’, researcher identities, and ‘constructing data’. Each session involved presentations by visual artists, communications students and computing researchers, with inter-disciplinary dialogue facilitated by alumni Chairs. The apparently simple focus on method illuminated the research process as a site of creativity, innovation and discovery, and also built epistemological awareness, drawing attention to what is being researched and how it can be known. It was surprisingly difficult to limit students to discussing method, and it was apparent that the vocabulary available for method is sometimes limited. However, by focusing on method rather than results, the genuine process of research, rather than one constructed for approval, could be captured. In unlocking the twists and turns of planning and implementing research, and the impact of circumstance and contingency, students had to reflect frankly on successes and failures. This level of self- and public critique emphasised the degree of critical thinking and rigour required in executing research and demonstrated that honest reporting of research, faults and all, is good, valid research. The process also revealed the degree to which disciplines can learn from each other: the computing students gained insights from the sensitive social contextualizing generated by the communications and art and design students, and the art and design students gained understanding from the greater ‘distance’ and emphasis on application that the computing students applied to their subjects.
Finding the means to develop dialogue across disciplines makes researchers better equipped to devise and tackle research problems across disciplines, potentially laying the ground for more effective collaboration.
Keywords: interdisciplinary, method, research student, training
Procedia PDF Downloads 206
9593 The Guaranteed Detection of the Seismoacoustic Emission Source in the C-OTDR Systems
Authors: Andrey V. Timofeev
Abstract:
A method is proposed for the stable detection of seismoacoustic sources in C-OTDR systems that guarantees given upper bounds for the probabilities of type I and type II errors. The properties of the proposed method are rigorously proved. The results of practical applications of the proposed method in a real C-OTDR system are presented in this paper.
Keywords: guaranteed detection, C-OTDR systems, change point, interval estimation
Procedia PDF Downloads 256
9592 Analysis of Nanoscale Materials and Devices for Future Communication and Telecom Networks in the Gas Refinery
Authors: Mohamad Bagher Heidari, Hefzollah Mohammadian
Abstract:
New discoveries in materials on the nanometer length scale are expected to play an important role in addressing ongoing and future challenges in the field of communication. Devices and systems for ultra-high-speed short- and long-range communication links, portable and power-efficient computing devices, high-density memory and logic, ultra-fast interconnects, and autonomous and robust energy scavenging devices for accessing ambient intelligence and needed information will critically depend on the success of next-generation emerging nanomaterials and devices. This article presents some exciting recent developments in nanomaterials that have the potential to play a critical role in the development and transformation of future intelligent communication and telecom networks in the gas refinery. The industry is benefiting from nanotechnology advances, with numerous applications including those in smarter sensors, logic elements, computer chips, memory storage devices and optoelectronics.
Keywords: nanomaterial, intelligent communication, nanoscale, nanophotonic, telecom
Procedia PDF Downloads 333
9591 Phenotype Prediction of DNA Sequence Data: A Machine and Statistical Learning Approach
Authors: Mpho Mokoatle, Darlington Mapiye, James Mashiyane, Stephanie Muller, Gciniwe Dlamini
Abstract:
Great advances in high-throughput sequencing technologies have resulted in the availability of huge amounts of sequencing data in public and private repositories, enabling a holistic understanding of complex biological phenomena. Sequence data are used for a wide range of applications such as gene annotation, expression studies, personalized treatment and precision medicine. However, this rapid growth in sequence data poses a great challenge, which calls for novel data processing and analytic methods, as well as huge computing resources. In this work, a machine and statistical learning approach for DNA sequence classification based on a k-mer representation of sequence data is proposed. The approach is tested using whole genome sequences of Mycobacterium tuberculosis (MTB) isolates to (i) reduce the size of genomic sequence data, (ii) identify an optimum size of k-mers and utilize it to build classification models, (iii) predict the phenotype from whole genome sequence data of a given bacterial isolate, and (iv) demonstrate computing challenges associated with the analysis of whole genome sequence data in producing interpretable and explainable insights. The classification models were trained on 104 whole genome sequences of MTB isolates. Cluster analysis showed that k-mers may be used to discriminate phenotypes and that the discrimination becomes more concise as the size of the k-mers increases. The best performing classification model had a k-mer size of 10 (the longest k-mer) and an accuracy, recall, precision, specificity, and Matthews correlation coefficient of 72.0%, 80.5%, 80.5%, 63.6%, and 0.4, respectively. This study provides a comprehensive approach for resampling whole genome sequencing data, objectively selecting a k-mer size, and performing classification for phenotype prediction. The analysis also highlights the importance of increasing the k-mer size to produce more biologically explainable results, which brings to the fore the interplay that exists amongst accuracy, computing resources and the explainability of classification results. Moreover, the analysis provides a new way to elucidate genetic information from genomic data and to identify phenotype relationships, which is important especially in explaining complex biological mechanisms.
Keywords: AWD-LSTM, bootstrapping, k-mers, next generation sequencing
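A minimal sketch of the k-mer representation step described above (turning a DNA sequence into a fixed-length count vector that a classifier can consume) is given below; the toy sequences and the small value of k are illustrative, not the study's pipeline, which works on whole genome sequences with k up to 10.

```python
from collections import Counter
from itertools import product

def kmer_counts(seq, k):
    """Count all overlapping k-mers in a DNA sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def kmer_vector(seq, k):
    """Fixed-length feature vector over the 4**k possible k-mers (feasible for small k)."""
    counts = kmer_counts(seq, k)
    vocab = ["".join(p) for p in product("ACGT", repeat=k)]
    return [counts.get(km, 0) for km in vocab]

# Toy example with k = 3; a real pipeline would use whole genomes and larger k,
# typically keeping only the k-mers actually observed rather than the full vocabulary.
genomes = {"isolate_1": "ATGCGTACGTTAGC", "isolate_2": "ATGCGTTTTTTAGC"}
features = {name: kmer_vector(seq, k=3) for name, seq in genomes.items()}
print(len(features["isolate_1"]))   # 64 features (4**3), ready for any standard classifier
```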
Procedia PDF Downloads 167
9590 Phenotype Prediction of DNA Sequence Data: A Machine and Statistical Learning Approach
Authors: Darlington Mapiye, Mpho Mokoatle, James Mashiyane, Stephanie Muller, Gciniwe Dlamini
Abstract:
Great advances in high-throughput sequencing technologies have resulted in the availability of huge amounts of sequencing data in public and private repositories, enabling a holistic understanding of complex biological phenomena. Sequence data are used for a wide range of applications such as gene annotation, expression studies, personalized treatment and precision medicine. However, this rapid growth in sequence data poses a great challenge, which calls for novel data processing and analytic methods, as well as huge computing resources. In this work, a machine and statistical learning approach for DNA sequence classification based on a k-mer representation of sequence data is proposed. The approach is tested using whole genome sequences of Mycobacterium tuberculosis (MTB) isolates to (i) reduce the size of genomic sequence data, (ii) identify an optimum size of k-mers and utilize it to build classification models, (iii) predict the phenotype from whole genome sequence data of a given bacterial isolate, and (iv) demonstrate computing challenges associated with the analysis of whole genome sequence data in producing interpretable and explainable insights. The classification models were trained on 104 whole genome sequences of MTB isolates. Cluster analysis showed that k-mers may be used to discriminate phenotypes and that the discrimination becomes more concise as the size of the k-mers increases. The best performing classification model had a k-mer size of 10 (the longest k-mer) and an accuracy, recall, precision, specificity, and Matthews correlation coefficient of 72.0%, 80.5%, 80.5%, 63.6%, and 0.4, respectively. This study provides a comprehensive approach for resampling whole genome sequencing data, objectively selecting a k-mer size, and performing classification for phenotype prediction. The analysis also highlights the importance of increasing the k-mer size to produce more biologically explainable results, which brings to the fore the interplay that exists amongst accuracy, computing resources and the explainability of classification results. Moreover, the analysis provides a new way to elucidate genetic information from genomic data and to identify phenotype relationships, which is important especially in explaining complex biological mechanisms.
Keywords: AWD-LSTM, bootstrapping, k-mers, next generation sequencing
Procedia PDF Downloads 159
9589 Replacing an Old PFN System with a Solid State Modulator without Changing the Klystron Transformer
Authors: Klas Elmquist, Anders Larsson
Abstract:
Until the year 2000, almost all short-pulse modulators in the accelerator world were built with the pulse forming network (PFN) technique. PFN systems have since been replaced by solid-state modulators, which have better efficiency, better stability and lower cost of ownership, and are much smaller. In this paper, it is shown that it is possible to replace a pulse forming network system with a solid-state system without changing the klystron tank and the klystron transformer. The solid-state modulator uses semiconductors switching at the 1 kV level. A first pulse transformer steps the voltage up to 10 kV. The 10 kV pulse is finally fed into the original transformer placed under the klystron. A flatness of 0.8 percent and a stability of 100 ppm are achieved. The test was done with a CPI 8262 type klystron. It is also shown that it is possible to run such a system with long cables between the transformers. Using this technique, it is possible to keep original sub-systems such as the filament, vacuum, focusing solenoid and cooling systems for the klystron, which substantially reduces the cost of an upgrade and prolongs the life of the klystron system.
Keywords: modulator, solid-state, PFN-system, thyratron
Procedia PDF Downloads 134
9588 Comparison Between Genetic Algorithms and Particle Swarm Optimization Optimized Proportional Integral Derivative and PSS for Single Machine Infinite System
Authors: Benalia Nadia, Zerzouri Nora, Ben Si Ali Nadia
Abstract:
Among the many different modern heuristic optimization methods, genetic algorithms (GA) and the particle swarm optimization (PSO) technique have attracted a lot of interest. The GA has gained popularity in academia and industry mostly because of its simplicity, its ability to solve highly nonlinear mixed-integer optimization problems that are typical of complex engineering systems, and its intuitiveness. The mechanics of the PSO methodology, a relatively recent heuristic search tool, are modeled after the swarming or cooperative behavior of biological groups. It is appropriate to compare the performance of the two techniques, since they both aim to optimize a particular objective function but make use of distinct computing methods. In this article, the PSO and GA optimization approaches are used to tune the parameters of the power system stabilizer and the proportional integral derivative regulator. Load angle and rotor speed variations in the single machine infinite bus system are used to measure the performance of the suggested solution.
Keywords: SMIB, genetic algorithm, PSO, transient stability, power system stabilizer, PID
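A minimal sketch of how PSO can tune controller gains against a scalar performance index is shown below; the three-parameter search space stands in for the PID gains, and the quadratic toy objective replaces the load-angle and rotor-speed simulation that would provide the real fitness in such a study.

```python
import numpy as np

def pso(objective, dim, n_particles=20, iters=100, bounds=(0.0, 10.0),
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Basic particle swarm optimization minimizing `objective` over a box."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))          # positions (candidate gain sets)
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([objective(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

# Toy fitness: in the paper's setting this would be a performance index (e.g. ITAE of the
# load angle and rotor speed response) computed from a SMIB simulation for gains (Kp, Ki, Kd).
toy_objective = lambda g: float(np.sum((g - np.array([2.0, 0.5, 1.0])) ** 2))
print(pso(toy_objective, dim=3))    # converges near (2.0, 0.5, 1.0)
```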
Procedia PDF Downloads 83
9587 Overview of Different Approaches Used in Optimal Operation Control of Hybrid Renewable Energy Systems
Authors: K. Kusakana
Abstract:
A hybrid energy system is a combination of renewable energy sources with back-up and storage systems used to meet a given load's energy requirements. Given that the electrical output of each renewable source fluctuates with changes in weather conditions, and since the load demand also varies with time, one of the main attributes of hybrid systems is the ability to respond to the load demand at any time by optimally controlling each energy source, storage and back-up system. The induced optimization problem is to compute the optimal operation control of the system with the aim of minimizing operating costs while responding efficiently and reliably to the load energy requirement. Current optimization research and development on hybrid systems focuses mainly on the sizing aspect. The aim of this paper is therefore to report on the state of the art of optimal operation control of hybrid renewable energy systems. The paper also discusses the different challenges encountered, as well as future developments that can help improve the optimal operation control of hybrid renewable energy systems.
Keywords: renewable energies, hybrid systems, optimization, operation control
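A generic form of the operation-control problem described above, written as a cost minimization subject to a power balance and storage dynamics, is sketched below; the cost terms and constraints are a representative textbook formulation rather than any specific model reviewed in the paper.

```latex
\min_{\{P^{G}_t,\, P^{B}_t\}} \ \sum_{t=1}^{T} \bigl( c_{F}\, P^{G}_t + c_{B}\, \lvert P^{B}_t \rvert \bigr)\,\Delta t
\quad \text{s.t.} \quad
P^{PV}_t + P^{W}_t + P^{G}_t + P^{B}_t = P^{L}_t,
\qquad
SOC_{t+1} = SOC_t - \frac{P^{B}_t\,\Delta t}{E_{\max}},
\qquad
SOC_{\min} \le SOC_t \le SOC_{\max},
```

where $P^{PV}_t$, $P^{W}_t$, $P^{G}_t$, $P^{B}_t$ and $P^{L}_t$ denote the photovoltaic, wind, back-up generator, battery and load powers at time $t$, $c_F$ and $c_B$ are fuel and battery-wear cost coefficients, and $SOC_t$ is the storage state of charge.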
Procedia PDF Downloads 379