Search results for: ambient computing
Solving an Extended Resource Leveling Problem with Multiobjective Evolutionary Algorithms
Authors: Javier Roca, Etienne Pugnaghi, Gaëtan Libert
Abstract:
We introduce an extended resource leveling model that abstracts real-life projects in which each resource has its own specific work range. Contrary to traditional resource leveling problems, this model considers scarce resources and multiple objectives: the minimization of the project makespan and the leveling of each resource usage over time. We formulate this model as a multiobjective optimization problem and propose a multiobjective genetic algorithm-based solver to optimize it. This solver consists of a two-stage process: a main stage where we obtain non-dominated solutions for all the objectives, and a postprocessing stage where we seek to specifically improve the resource leveling of these solutions. We propose an intelligent encoding for the solver that allows domain-specific knowledge to be included in the solving mechanism. The chosen encoding proves to be effective for solving leveling problems with scarce resources and multiple objectives. The outcomes of the proposed solver represent optimized trade-offs (alternatives) that can later be evaluated by a decision maker; this multi-solution approach is an advantage over the traditional single-solution approach. We compare the proposed solver with state-of-the-art resource leveling methods and report competitive results.
Keywords: Intelligent problem encoding, multiobjective decision making, evolutionary computing, RCPSP, resource leveling.
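For readers new to the multiobjective setting, the sketch below illustrates the kind of Pareto filtering such a solver performs on (makespan, resource-leveling) objective pairs; the candidate schedules and the sum-of-squares leveling measure are illustrative placeholders, not the authors' encoding or algorithm.

```python
# Minimal sketch (not the authors' algorithm): Pareto filtering of candidate
# schedules scored on two objectives to be minimized: makespan and a simple
# resource-leveling measure (sum of squared per-period resource usage).

def leveling_cost(usage_per_period):
    """Smaller is flatter: sum of squared resource usage over time."""
    return sum(u * u for u in usage_per_period)

def dominates(a, b):
    """True if objective vector a is no worse in all objectives and better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(solutions):
    """Keep the Pareto-optimal trade-offs among (makespan, leveling) pairs."""
    front = []
    for s in solutions:
        if not any(dominates(o["obj"], s["obj"]) for o in solutions if o is not s):
            front.append(s)
    return front

# Hypothetical candidate schedules produced by a genetic algorithm.
candidates = [
    {"name": "A", "obj": (42, leveling_cost([3, 3, 3, 3]))},
    {"name": "B", "obj": (38, leveling_cost([6, 1, 4, 1]))},
    {"name": "C", "obj": (45, leveling_cost([5, 5, 1, 1]))},
]
print([s["name"] for s in non_dominated(candidates)])
```

The surviving non-dominated schedules are the trade-off alternatives that would be handed to the decision maker.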
Neural Network Implementation Using FPGA: Issues and Application
Authors: A. Muthuramalingam, S. Himavathi, E. Srinivasan
Abstract:
Hardware realization of a Neural Network (NN) depends, to a large extent, on the efficient implementation of a single neuron. FPGA-based reconfigurable computing architectures are suitable for hardware implementation of neural networks. FPGA realization of ANNs with a large number of neurons is still a challenging task. This paper discusses the issues involved in implementing a multi-input neuron with linear/nonlinear excitation functions using FPGA. An implementation method with a resource/speed trade-off is proposed to handle signed decimal numbers. The VHDL code developed is tested using a Xilinx XCV50hq240 chip. To improve the speed of operation, a lookup table method is used. The problems involved in using a lookup table (LUT) for a nonlinear function are discussed. The percentage saving in resources and the improvement in speed with an LUT for a neuron are reported. An attempt is also made to derive a generalized formula for a multi-input neuron that facilitates approximate estimation of the total resource requirement and achievable speed for a given multilayer neural network. This helps the designer choose the FPGA capacity for a given application. Using the proposed implementation method, a neural network based application, namely a space vector modulator for a vector-controlled drive, is presented.
Keywords: FPGA implementation, multi-input neuron, neural network, NN-based space vector modulator.
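As a rough software analogue of the lookup-table approach described above, the sketch below precomputes a sigmoid LUT and evaluates a multi-input neuron with it; the 256-entry table size and input range are illustrative assumptions, not the values used in the paper.

```python
# Illustrative sketch of a LUT-based activation for a multi-input neuron,
# mimicking (in software) the FPGA technique of replacing the nonlinear
# function with a precomputed table. Table size and range are assumptions.
import math

LUT_BITS = 8                      # 256-entry table (assumed)
X_MIN, X_MAX = -8.0, 8.0          # input range covered by the table (assumed)
STEP = (X_MAX - X_MIN) / (2 ** LUT_BITS - 1)
SIGMOID_LUT = [1.0 / (1.0 + math.exp(-(X_MIN + i * STEP)))
               for i in range(2 ** LUT_BITS)]

def sigmoid_lut(x):
    """Approximate sigmoid by nearest-entry lookup (saturating at the ends)."""
    idx = round((min(max(x, X_MIN), X_MAX) - X_MIN) / STEP)
    return SIGMOID_LUT[idx]

def neuron(inputs, weights, bias):
    """Multi-input neuron: weighted sum followed by the LUT activation."""
    acc = bias + sum(w * x for w, x in zip(weights, inputs))
    return sigmoid_lut(acc)

print(neuron([0.5, -1.2, 0.3], [0.8, 0.4, -0.6], 0.1))
print(abs(sigmoid_lut(1.0) - 1.0 / (1.0 + math.exp(-1.0))))  # LUT error at x = 1
```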
Spacecraft Neural Network Control System Design using FPGA
Authors: Hanaa T. El-Madany, Faten H. Fahmy, Ninet M. A. El-Rahman, Hassen T. Dorrah
Abstract:
Designing and implementing intelligent systems has become a crucial factor for the innovation and development of better space technology products. A neural network is a parallel system, capable of resolving paradigms that linear computing cannot. A field programmable gate array (FPGA) is a digital device that offers reprogrammability and robust flexibility. For neural network based instrument prototypes in real-time applications, conventional specific VLSI neural chip design suffers from limitations in time and cost. With low-precision artificial neural network designs, FPGAs offer higher speed and smaller size for real-time applications than VLSI and DSP chips. Therefore, many researchers have made great efforts to realize neural networks (NNs) using the FPGA technique. In this paper, an introduction to ANNs and the FPGA technique is briefly given. Hardware Description Language (VHDL) code is proposed to implement ANNs and to present simulation results with floating point arithmetic. Synthesis results for the ANN controller are developed using Precision RTL. The proposed VHDL implementation provides a flexible, fast method with a high degree of parallelism for implementing ANNs. The implementation of a multi-layer NN using lookup tables (LUTs) reduces the resource utilization and execution time.
Keywords: Spacecraft, neural network, FPGA, VHDL.
Investigating the Usability of a University Website from the Users’ Perspective: An Empirical Study of Benue State University Website
Authors: Abraham Undu, Stephen Akuma
Abstract:
Websites are becoming a major component of an organization's success in our ever-globalizing, competitive world. A website symbolizes an organization, projecting the organization's principles, culture, values, vision, and perspectives. It is an interface connecting organizations and their clients. The university, as an academic institution, makes use of a website to communicate and offer computing services to its stakeholders (students, staff, host community, university management, etc.). Unfortunately, website designers often give more consideration to the technology, organizational structure and business objectives of the university than to the usability of the site. Website designers end up designing university websites which do not meet the needs of the primary users. This empirical study investigated the Benue State University website from the point of view of students. The research was carried out using a standardized website usability questionnaire based on the five factors of usability defined by WAMMI (Website Analysis and Measurement Inventory): attractiveness, controllability, efficiency, learnability and helpfulness. The result of the investigation showed that the university website (https://portal.bsum.edu.ng/) has a neutral usability level because of the usability issues associated with it. The research recommended feasible solutions to improve the usability of the website from the users' perspective and also provided a modified usability model to be used for better evaluation of the Benue State University website.
Keywords: Usability, usability factors, university websites, user’s perspective, WAMMI, modified usability model, Benue State University.
A New Composition Method of Admissible Support Vector Kernel Based on Reproducing Kernel
Authors: Wei Zhang, Xin Zhao, Yi-Fan Zhu, Xin-Jian Zhang
Abstract:
The kernel function, which allows the formulation of nonlinear variants of any algorithm that can be cast in terms of dot products, has enabled Support Vector Machines (SVM) to be successfully applied in many fields, e.g., classification and regression. The importance of the kernel has motivated many studies on its composition. It is well known that the reproducing kernel (R.K) is a useful kernel function which possesses many properties, e.g., positive definiteness, the reproducing property, and the ability to compose complex R.K.s by simple operations. There are two popular ways to compute an R.K with explicit form. One is to construct and solve a specific differential equation with boundary values, whose drawback is that it cannot obtain a unified form of the R.K. The other uses a piecewise integral of the Green function associated with a differential operator L. The latter facilitates the computation of an R.K with a unified explicit form and theoretical analysis, but it has been studied relatively recently and there are fewer practical computations. In this paper, a new algorithm for computing an R.K is presented. It can obtain the unified explicit form of the R.K in a general reproducing kernel Hilbert space. It avoids constructing and solving complex differential equations manually and enables automatic, flexible and rigorous computation for more general RKHSs. In order to validate that the R.K computed by the algorithm can be used well in SVM, some illustrative examples and a comparison between the R.K and the Gaussian kernel (RBF) in support vector regression are presented. The results show that the performance of the R.K is close to or slightly superior to that of the RBF.
Keywords: admissible support vector kernel, reproducing kernel, reproducing kernel Hilbert space, Green function, support vector regression
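To make the comparison concrete, the sketch below plugs an explicit reproducing kernel into support vector regression and compares it with the Gaussian (RBF) kernel on a toy 1-D problem; the kernel K(x, y) = 1 + min(x, y) (a reproducing kernel of a simple Sobolev-type RKHS on [0, 1]), the data set, and the hyperparameters are illustrative assumptions and not the kernel construction developed in the paper.

```python
# Minimal sketch (not the paper's algorithm): using an explicit reproducing
# kernel as an admissible SVM kernel in support vector regression and
# comparing it with the Gaussian (RBF) kernel on a toy 1-D problem.
import numpy as np
from sklearn.svm import SVR

def rk_kernel(X, Y):
    """Gram matrix of the reproducing kernel K(x, y) = 1 + min(x, y)."""
    return 1.0 + np.minimum(X[:, 0][:, None], Y[:, 0][None, :])

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(200, 1))
y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.standard_normal(200)

rk_model = SVR(kernel=rk_kernel, C=10.0).fit(X, y)
rbf_model = SVR(kernel="rbf", C=10.0, gamma=5.0).fit(X, y)

X_test = np.linspace(0.0, 1.0, 101).reshape(-1, 1)
y_true = np.sin(2 * np.pi * X_test[:, 0])
for name, model in [("R.K", rk_model), ("RBF", rbf_model)]:
    err = np.sqrt(np.mean((model.predict(X_test) - y_true) ** 2))
    print(f"{name} kernel test RMSE: {err:.3f}")
```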
The Challenges and Solutions for Developing Mobile Apps in a Small University
Authors: Greg Turner, Bin Lu, Cheer-Sun Yang
Abstract:
As computing technology advances, smartphone applications can assist student learning in a pervasive way. For example, the idea of using mobile apps for the PA Common Trees, Pests, and Pathogens as a reference tool in the field allows middle school students to learn about trees and associated pests/pathogens without bringing a textbook. While working on the development of three heterogeneous mobile apps, we ran into numerous challenges. Both the traditional waterfall model and the more modern agile methodologies failed in practice. The waterfall model emphasizes the planning of the duration of each phase. When the duration of each phase is not consistent with the availability of developers, the waterfall model cannot be employed. When applying agile methodologies, we could not maintain the high frequency of the iterative development review process, known as ‘sprints’. In this paper, we discuss the challenges and solutions. We propose a hybrid model, known as the Relay Race Methodology (RRM), to reflect the concept of racing and relaying during the process of software development in practice. Based on the development project, we observe that the modeling of the relay race transition between any two phases arises naturally. Thus, we claim that the RRM can provide a de facto rather than a de jure basis for the core concept in the software development model. In this paper, the background of the project is introduced first. Then, the challenges are pointed out, followed by our solutions. Finally, the lessons learned and future work are presented.
Keywords: Agile methods, mobile apps, software process model, waterfall model.
Improving Flash Flood Forecasting with a Bayesian Probabilistic Approach: A Case Study on the Posina Basin in Italy
Authors: Zviad Ghadua, Biswa Bhattacharya
Abstract:
The Flash Flood Guidance (FFG) provides the rainfall amount of a given duration necessary to cause flooding. The approach is based on the development of rainfall-runoff curves, which help to find the rainfall amount that would cause flooding. An alternative approach, mostly tested on Italian Alpine catchments, is based on determining threshold discharges from past events and on finding whether or not an oncoming flood exceeds critical discharge thresholds found beforehand. Both approaches suffer from large uncertainties in forecasting flash floods as, due to the simplistic approach followed, the same rainfall amount may or may not cause flooding. This uncertainty leads to the question of whether a probabilistic model is preferable to a deterministic one in forecasting flash floods. We propose the use of a Bayesian probabilistic approach in flash flood forecasting. A prior probability of flooding is derived based on historical data. Additional information, such as the antecedent moisture condition (AMC) and the rainfall amount over any rainfall threshold, is used in computing the likelihood of observing these conditions given that a flash flood has occurred. Finally, the posterior probability of flooding is computed using the prior probability and the likelihood. The variation of the computed posterior probability with rainfall amount and AMC demonstrates the suitability of the approach for decision making in an uncertain environment. The methodology has been applied to the Posina basin in Italy. From the promising results obtained, we conclude that the Bayesian approach to flash flood forecasting provides more realistic forecasts than the FFG.
Keywords: Flash flood, Bayesian, flash flood guidance, FFG, forecasting, Posina.
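Written out, the posterior combination the abstract describes takes the standard Bayes form; the symbols below (R for rainfall exceeding a threshold, AMC for the antecedent moisture condition, F for a flash flood) are notation added here purely for illustration.

```latex
P(F \mid R, \mathrm{AMC}) \;=\;
  \frac{P(R, \mathrm{AMC} \mid F)\, P(F)}
       {P(R, \mathrm{AMC} \mid F)\, P(F) \;+\; P(R, \mathrm{AMC} \mid \neg F)\, P(\neg F)}
```

Here P(F) is the prior probability of flooding estimated from historical data, and the likelihood terms are estimated from the rainfall and AMC records of past flood and non-flood events.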
Computing Entropy for Ortholog Detection
Authors: Hsing-Kuo Pao, John Case
Abstract:
Biological sequences from different species are called orthologs if they evolved from a sequence of a common ancestor species and they have the same biological function. Approximations of the Kolmogorov complexity or entropy of biological sequences are already well known to be useful in extracting similarity information between such sequences - in the interest, for example, of ortholog detection. As is well known, the exact Kolmogorov complexity is not algorithmically computable. In practice one can approximate it by computable compression methods. However, such compression methods do not provide a good approximation to Kolmogorov complexity for short sequences. Herein is suggested a new approach to overcome the problem that compression approximations may not work well on short sequences. This approach is inspired by new, conditional computations of Kolmogorov entropy. A main contribution of the empirical work described is to show that the new set of entropy-based machine learning attributes provides good separation between positive (ortholog) and negative (non-ortholog) data - better than with good, previously known alternatives (which do not employ some means to handle short sequences well). Also empirically compared are the new entropy-based attribute set and a number of other, more standard similarity attribute sets commonly used in genomic analysis. The various similarity attributes are evaluated by cross validation, through boosted decision tree induction with C5.0, and by Receiver Operating Characteristic (ROC) analysis. The results point to the conclusion that the new, entropy-based attribute set by itself is not the one giving the best prediction; however, it is the best attribute set for improving the other, standard attribute sets when conjoined with them.
Keywords: compression, decision tree, entropy, ortholog, ROC.
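A common computable stand-in for the Kolmogorov-complexity quantities mentioned above is a compression-based distance. The sketch below uses zlib and the classic normalized compression distance, which is one standard choice shown only for illustration; it is not the paper's conditional-entropy attribute set, and, as the abstract notes, plain compressors like this behave poorly on very short sequences, which is exactly the limitation the paper addresses.

```python
# Sketch of a compression-based (Kolmogorov-complexity-style) similarity
# between two sequences, using zlib as the compressor: the normalized
# compression distance (NCD). Illustration only.
import zlib

def c(data: bytes) -> int:
    """Approximate K(x) by the compressed length of x."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))."""
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

seq_a = b"ATGGCGTACGATCGATCGTAGCTAGCTAGGCTAACG" * 5
seq_b = b"ATGGCGTACGTTCGATCGTAGCTAGCTAGGCTAACG" * 5   # near-identical variant
seq_c = bytes(reversed(seq_a))                          # unrelated ordering

print("NCD(a, b) =", round(ncd(seq_a, seq_b), 3))  # smaller: similar sequences
print("NCD(a, c) =", round(ncd(seq_a, seq_c), 3))  # larger: dissimilar
```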
An Investigation of Performance versus Security in Cognitive Radio Networks with Supporting Cloud Platforms
Authors: Kurniawan D. Irianto, Demetres D. Kouvatsos
Abstract:
The growth of wireless devices affects the availability of limited frequencies or spectrum bands, since spectrum bands are a natural resource that cannot be expanded. Meanwhile, licensed frequencies are idle most of the time. Cognitive radio is one solution to these problems. Cognitive radio is a promising technology that allows unlicensed users, known as secondary users (SUs), to access licensed bands without interfering with licensed users, or primary users (PUs). As cloud computing has become popular in recent years, cognitive radio networks (CRNs) can be integrated with cloud platforms. One of the important issues in CRNs is security. It is a problem because CRNs use radio frequencies as the transmission medium and thus share the same issues as wireless communication systems. Another critical issue in CRNs is performance. Security has an adverse effect on performance, and there are trade-offs between them. The goal of this paper is to investigate the performance-versus-security trade-off in CRNs with supporting cloud platforms. Furthermore, queuing network models with preemptive resume and preemptive repeat identical priority are applied in this work to measure the impact of security on performance in CRNs with and without a cloud platform. The generalized exponential (GE) type distribution is used to reflect the bursty inter-arrival and service times at the servers. The results show that the best performance is obtained when security is disabled and the cloud platform is enabled.
Keywords: Cloud Platforms, Cognitive Radio Networks, GE-type Distribution, Performance vs. Security.
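For readers unfamiliar with the GE-type distribution, the sketch below shows one way bursty inter-arrival (or service) times can be sampled from it, following the usual two-phase parameterization by mean rate and squared coefficient of variation (a zero gap with probability 1 - tau, otherwise an exponential tail); the parameter values are arbitrary and the queueing models of the paper are not reproduced here.

```python
# Sketch of sampling inter-arrival (or service) times from a GE-type
# distribution with mean rate lam and squared coefficient of variation scv.
# With probability 1 - tau the gap is zero (a batch arrival); otherwise it is
# exponential with rate tau * lam. Parameter values below are illustrative.
import random

def ge_sample(lam, scv, rng=random):
    """One GE-type sample: tau = 2 / (scv + 1)."""
    tau = 2.0 / (scv + 1.0)
    if rng.random() > tau:
        return 0.0                       # bulk/batch arrival: zero gap
    return rng.expovariate(tau * lam)    # exponential tail

random.seed(1)
lam, scv = 2.0, 5.0                      # bursty traffic: SCV > 1
samples = [ge_sample(lam, scv) for _ in range(100000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print("empirical mean ~ 1/lam:", round(mean, 3))
print("empirical SCV  ~ scv  :", round(var / mean ** 2, 3))
```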
The Perception on 21st Century Skills of Nursing Instructors and Nursing Students at Boromarajonani College of Nursing, Chonburi
Authors: Kamolrat Turner, Somporn Rakkwamsuk, Ladda Leungratanamart
Abstract:
The aim of this descriptive study was to determine the perception of 21st century skills among nursing professors and nursing students at Boromarajonani College of Nursing, Chonburi. A total of 38 nursing professors and 75 second-year nursing students took part in the study. Data were collected with a 21st century skills questionnaire comprising 63 items. Descriptive statistics were used to describe the findings. The results show that the overall mean scores of the nursing professors' perception of 21st century skills were at a high level. The highest mean scores were recorded for computing and ICT literacy, and for career and learning skills. The lowest mean scores were recorded for reading, writing, and mathematics. The overall mean scores of the nursing students' perception of 21st century skills were also at a high level. The students' highest mean scores were recorded for computer and ICT literacy, in which the highest item mean score was competency in computer programs. Their lowest mean scores were recorded for the reading, writing, and mathematics component, in which the highest item mean score was for reading Thai correctly and the lowest item mean score was for reading English and translating it for others correctly. The findings from this study show that the perceptions of nursing professors were consistent with those of nursing students. Moreover, activities aiming to raise capacity in reading English and translating information for others should be taken into consideration.
Keywords: 21st century skills, perception, nursing instructor, nursing student.
Cloud Enterprise Application Provider Selection Model for the Small and Medium Enterprise: A Pilot Study
Authors: Rowland R. Ogunrinde, Yusmadi Y. Jusoh, Noraini Che Pa, Wan Nurhayati W. Rahman, Azizol B. Abdullah
Abstract:
Enterprise Applications (EAs) help organizations achieve operational excellence and competitive advantage. Over time, most Small and Medium Enterprises (SMEs), which are known to be the major drivers of most thriving global economies, have used the costly on-premise versions of these applications, making it difficult for their businesses to thrive competitively in the same market environment as their large enterprise counterparts. The advent of cloud computing presents SMEs with an affordable offer and great opportunities, as such EAs can be cloud-hosted and rented on a pay-per-use basis, which does not require huge initial capital. However, as there are numerous Cloud Service Providers (CSPs) offering EAs as Software-as-a-Service (SaaS), there is a challenge in choosing a suitable provider with a Quality of Service (QoS) that meets the organization's customized requirements. The proposed model addresses this and goes a step further to select the most affordable among a selected few of the CSPs. In the earlier stage, before developing the instrument and conducting the pilot test, the researchers conducted a structured interview with three experts to validate the proposed model. Finally, the validity and reliability of the instrument were tested through experts and typical respondents, and analyzed with SPSS 22. The results confirmed the validity of the proposed model and the validity and reliability of the instrument.
Keywords: Cloud service provider, enterprise applications, quality of service, selection criteria, small and medium enterprise.
Understanding and Designing Situation-Aware Mobile and Ubiquitous Computing Systems
Authors: Kai Häussermann, Christoph Hubig, Paul Levi, Frank Leymann, Oliver Siemoneit, Matthias Wieland, Oliver Zweigle
Abstract:
Using spatial models as a shared common basis of information about the environment for different kinds of context-aware systems has been a heavily researched topic in recent years. This research has focused on how to create, update, and merge spatial models so as to enable highly dynamic, consistent and coherent spatial models at large scale. In this paper, however, we concentrate on how context-aware applications can use this information to adapt their behavior according to the situation they are in. The main idea is to provide the spatial model infrastructure with a situation recognition component based on generic situation templates. A situation template is – as part of a much larger situation template library – an abstract, machine-readable description of a certain basic situation type, which can be used by different applications to evaluate their situation. In this paper, different theoretical and practical issues – technical, ethical and philosophical ones – are discussed that are important for understanding and developing situation-dependent systems based on situation templates. A basic system design is presented which allows reasoning with uncertain data using an improved version of a learning algorithm for the automatic adaptation of situation templates. Finally, to support the development of adaptive applications, we present a new situation-aware adaptation concept based on workflows.
Keywords: context-awareness, ethics, facilitation of system use through workflows, situation recognition and learning based on situation templates and situation ontologies, theory of situation-aware systems
Behavioral Analysis of Team Members in Virtual Organization based on Trust Dimension and Learning
Authors: Indiramma M., K. R. Anandakumar
Abstract:
Trust management and reputation models are becoming an integral part of Internet-based applications such as CSCW, e-commerce and grid computing. The trust dimension is also a significant social structure and key to social relations within a collaborative community. Collaborative Decision Making (CDM) is a difficult task in the context of a distributed environment (information spread across different geographical locations) where multidisciplinary decisions are involved, such as in a Virtual Organization (VO). To aid team decision making in a VO, Decision Support System and social network analysis approaches are integrated. In such situations, social learning helps an organization in terms of relationships, team formation, partner selection, etc. In this paper we focus on trust learning. Trust learning is an important activity in terms of information exchange, negotiation, collaboration and trust assessment for cooperation among virtual team members. We propose a reinforcement learning approach which enhances the trust-based decision making capability of interacting agents during collaboration in problem-solving activity. The trust computational model with learning that we present is adapted for selecting the best alternative for a new project in the organization. We verify our model in a multi-agent simulation where the agents in the community learn to identify trustworthy members, inconsistent behavior and conflicting behavior of agents.
Keywords: Collaborative Decision making, Trust, Multi Agent System (MAS), Bayesian Network, Reinforcement Learning.
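The trust-learning idea can be pictured with a minimal reinforcement-style update in which an agent's trust estimate for a partner moves toward the observed outcome of each interaction; the learning rate, outcome encoding, and agent names below are illustrative assumptions, not the Bayesian-network model or simulation used in the paper.

```python
# Minimal sketch of reinforcement-style trust learning: after each interaction
# the trust value for a partner is nudged toward the observed outcome
# (1 = cooperative/successful, 0 = defect/failed). Illustration only.
import random

class TrustLearner:
    def __init__(self, partners, alpha=0.1, initial_trust=0.5):
        self.alpha = alpha
        self.trust = {p: initial_trust for p in partners}

    def update(self, partner, outcome):
        """Temporal-difference style update toward the observed outcome."""
        t = self.trust[partner]
        self.trust[partner] = t + self.alpha * (outcome - t)

    def most_trusted(self):
        return max(self.trust, key=self.trust.get)

random.seed(0)
true_reliability = {"agent_A": 0.9, "agent_B": 0.6, "agent_C": 0.3}  # hidden ground truth
learner = TrustLearner(true_reliability)
for _ in range(500):
    partner = random.choice(list(true_reliability))
    outcome = 1.0 if random.random() < true_reliability[partner] else 0.0
    learner.update(partner, outcome)
print({p: round(t, 2) for p, t in learner.trust.items()})
print("most trusted:", learner.most_trusted())
```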
Evaluation of State of the Art IDS Message Exchange Protocols
Authors: Robert Koch, Mario Golling, Gabi Dreo
Abstract:
During the last couple of years, the degree of dependence on IT systems has reached a dimension nobody imagined possible 10 years ago. The increased usage of mobile devices (e.g., smartphones), wireless sensor networks and embedded devices (Internet of Things) are only some examples of the dependency of modern societies on cyber space. At the same time, the complexity of IT applications, e.g., because of the increasing use of cloud computing, is rising continuously. Along with this, the threats to IT security have increased both quantitatively and qualitatively, as recent examples like STUXNET or the supposed cyber attack on the Illinois water system demonstrate impressively. Control systems that were once isolated are nowadays often publicly accessible - a fact that was never intended by the developers. Threats to IT systems don’t care about areas of responsibility. Especially with regard to cyber warfare, IT threats are no longer limited to company or industry boundaries, administrative jurisdictions or state boundaries. One of the important countermeasures is increased cooperation among the participants, especially in the field of cyber defence. Besides political and legal challenges, there are technical ones as well. A better, at least partially automated exchange of information is essential to (i) enable sophisticated situational awareness and (ii) counter the attacker in a coordinated way. Therefore, this publication performs an evaluation of state of the art Intrusion Detection Message Exchange protocols in order to guarantee a secure information exchange between different entities.
Keywords: Cyber Defence, Cyber Warfare, Intrusion Detection Information Exchange, Early Warning Systems, Joint Intrusion Detection, Cyber Conflict
4D Modelling of Low Visibility Underwater Archaeological Excavations Using Multi-Source Photogrammetry in the Bulgarian Black Sea
Authors: Rodrigo Pacheco-Ruiz, Jonathan Adams, Felix Pedrotti
Abstract:
This paper introduces the applicability of underwater photogrammetric survey within challenging conditions as the main tool to enhance and enrich the process of documenting archaeological excavation through the creation of 4D models. Photogrammetry was attempted on underwater archaeological sites at least as early as the 1970s, and today the production of traditional 3D models is becoming a common practice within the discipline. Underwater photogrammetry is more often implemented to record exposed underwater archaeological remains and less so as a dynamic interpretative tool. Therefore, it tends to be applied in bright environments and when underwater visibility is > 1 m, reducing its implementation on most submerged archaeological sites in more turbid conditions. Recent years have seen significant development of better digital photographic sensors and improvement of optical technology, ideal for darker environments. Such developments, in tandem with powerful processing computing systems, have allowed underwater photogrammetry to be used by this research as a standard recording and interpretative tool. Using multi-source photogrammetry (five GoPro Hero5 Black cameras), this paper presents the accumulation of daily (4D) underwater surveys carried out at the Early Bronze Age (3,300 BC) to Late Ottoman (17th century AD) archaeological site of Ropotamo in the Bulgarian Black Sea under challenging conditions (< 0.5 m visibility). It proves that underwater photogrammetry can and should be used as one of the main recording methods, even in low light and poor underwater conditions, as a way to better understand the complexity of the underwater archaeological record.
Keywords: 4D modelling, Black Sea, maritime archaeology, underwater photogrammetry, Bronze Age, low visibility.
A Proposal for a Secure and Interoperable Data Framework for Energy Digitalization
Authors: Hebberly Ahatlan
Abstract:
The process of digitizing energy systems involves transforming traditional energy infrastructure into interconnected, data-driven systems that enhance efficiency, sustainability, and responsiveness. As smart grids become increasingly integral to the efficient distribution and management of electricity from both fossil and renewable energy sources, the energy industry faces strategic challenges associated with digitalization and interoperability — particularly in the context of modern energy business models, such as virtual power plants (VPPs). The critical challenge in modern smart grids is to seamlessly integrate diverse technologies and systems, including virtualization, grid computing and service-oriented architecture (SOA), across the entire energy ecosystem. Achieving this requires addressing issues like semantic interoperability, Information Technology (IT) and Operational Technology (OT) convergence, and digital asset scalability, all while ensuring security and risk management. This paper proposes a four-layer digitalization framework to tackle these challenges, encompassing persistent data protection, trusted key management, secure messaging, and authentication of IoT resources. Data assets generated through this framework enable AI systems to derive insights for improving smart grid operations, security, and revenue generation. Furthermore, this paper also proposes a Trusted Energy Interoperability Alliance as a universal guiding standard in the development of this digitalization framework to support more dynamic and interoperable energy markets.
Keywords: Digitalization, IT/OT convergence, semantic interoperability, TEIA alliance, VPP.
Predictive Semi-Empirical NOx Model for Diesel Engine
Authors: Saurabh Sharma, Yong Sun, Bruce Vernham
Abstract:
Accurate prediction of NOx emissions is a continuous challenge in the field of diesel engine-out emission modeling. Performing experiments for every condition and scenario costs a significant amount of money and man-hours; therefore, a model-based development strategy has been implemented in order to solve that issue. NOx formation is highly dependent on the burned gas temperature and the O2 concentration inside the cylinder. Current empirical models are developed by calibrating parameters representing the engine operating conditions with respect to the measured NOx. This limits the predictions of purely empirical models to the region where they have been calibrated. An alternative solution is presented in this paper, which focuses on the utilization of in-cylinder combustion parameters to form a predictive semi-empirical NOx model. The result of this work is a fast and predictive NOx model built from physical parameters and empirical correlations. The model is developed based on steady-state data collected over the entire operating region of the engine and a predictive combustion model, developed in Gamma Technologies GT-Power using the Direct Injected (DI)-Pulse combustion object. In this approach, the temperature in both the burned and unburned zone is considered during the combustion period, i.e., from Intake Valve Closing (IVC) to Exhaust Valve Opening (EVO). The oxygen concentration consumed in the burned zone and the trapped fuel mass are also considered while developing the reported model. Several statistical methods are used to construct the model, including individual machine learning methods and ensemble machine learning methods. A detailed validation of the model on multiple diesel engines is reported in this work. A substantial number of cases is tested for different engine configurations over a large span of speed and load points. Different sweeps of operating conditions such as Exhaust Gas Recirculation (EGR), injection timing and Variable Valve Timing (VVT) are also considered for the validation. The model shows very good predictability and robustness at both sea level and altitude with different ambient conditions. Its advantages, such as high accuracy and robustness at different operating conditions, low computational time and the lower number of data points required for calibration, establish a platform where the model-based approach can be used for the engine calibration and development process. Moreover, this work aims to establish a framework for future model development for other targets such as soot, Combustion Noise Level (CNL), NO2/NOx ratio, etc.
Keywords: Diesel engine, machine learning, NOx emission, semi-empirical.
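The ingredients named above (burned-zone temperature, in-cylinder oxygen, calibrated constants) can be pictured with a toy Arrhenius-type correlation of the kind often used in fast NOx models; the functional form, constants, and parameter names below are purely illustrative assumptions and are not the model developed in the paper.

```python
# Toy Arrhenius-type semi-empirical NOx correlation, illustrating how a fast
# model can combine physical in-cylinder quantities (burned-zone temperature,
# oxygen concentration, residence time) with calibrated constants.
# Functional form and constants are illustrative assumptions only.
import math

def nox_index(t_burn_K, o2_frac, residence_ms, a=1.0, ea_K=38000.0, n=0.5):
    """Relative NOx index ~ A * [O2]^n * exp(-Ea/T) * residence time."""
    return a * (o2_frac ** n) * math.exp(-ea_K / t_burn_K) * residence_ms

baseline = nox_index(t_burn_K=2500.0, o2_frac=0.10, residence_ms=2.0)
with_egr = nox_index(t_burn_K=2300.0, o2_frac=0.07, residence_ms=2.0)
print("relative NOx reduction with a cooler, O2-diluted burned zone:",
      round(1.0 - with_egr / baseline, 2))
```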
Hierarchies Based On the Number of Cooperating Systems of Finite Automata on Four-Dimensional Input Tapes
Authors: Makoto Sakamoto, Yasuo Uchida, Makoto Nagatomo, Takao Ito, Tsunehiro Yoshinaga, Satoshi Ikeda, Masahiro Yokomichi, Hiroshi Furutani
Abstract:
In theoretical computer science, the Turing machine has played a number of important roles in understanding and exploiting basic concepts and mechanisms in computing and information processing [20]. It is a simple mathematical model of computers [9]. Later, M. Blum and C. Hewitt first proposed two-dimensional automata as a computational model of two-dimensional pattern processing and investigated their pattern recognition abilities in 1967 [7]. Since then, many researchers in this field have been investigating properties of automata on two- or three-dimensional tapes. On the other hand, the question of whether processing four-dimensional digital patterns is much more difficult than processing two- or three-dimensional ones is of great interest from the theoretical and practical standpoints. Thus, the study of four-dimensional automata as a computational model of four-dimensional pattern processing has been meaningful [8]-[19],[21]. This paper introduces a cooperating system of four-dimensional finite automata as one model of four-dimensional automata. A cooperating system of four-dimensional finite automata consists of a finite number of four-dimensional finite automata and a four-dimensional input tape where these finite automata work independently (in parallel). Those finite automata whose input heads scan the same cell of the input tape can communicate with each other; that is, every finite automaton is allowed to know the internal states of the other finite automata on the same cell it is scanning at the moment. In this paper, we mainly investigate the accepting powers of a cooperating system of eight- or seven-way four-dimensional finite automata. The seven-way four-dimensional finite automaton is an eight-way four-dimensional finite automaton whose input head can move east, west, south, north, up, down, or in the future, but not in the past, on a four-dimensional input tape.
Keywords: computational complexity, cooperating system, finite automaton, four-dimension, hierarchy, multihead.
Use of Locomotor Activity of Rainbow Trout Juveniles in Identifying Sublethal Concentrations of Landfill Leachate
Authors: Tomas Makaras, Gintaras Svecevičius
Abstract:
Landfill waste is a common problem, as it has economic and environmental impacts even after a landfill is closed. Landfill waste contains a high density of various persistent compounds such as heavy metals and organic and inorganic materials. As persistent compounds are slowly degradable or even non-degradable in the environment, they often produce sublethal or even lethal effects on aquatic organisms. The aims of the present study were to estimate the sublethal effects of the Kairiai landfill (WGS: 55°55‘46.74“, 23°23‘28.4“) leachate on the locomotor activity of rainbow trout Oncorhynchus mykiss juveniles, using the original system package developed in our laboratory for automated monitoring, recording and analysis of aquatic organisms' activity, and to determine patterns of fish behavioral response to sublethal levels of leachate. Four different concentrations of leachate were chosen: 0.125, 0.25, 0.5 and 1.0 mL/L (0.0025, 0.005, 0.01 and 0.02 of the 96-hour LC50, respectively). Locomotor activity was measured after 5, 10 and 30 minutes of exposure during 1-minute test periods for each fish (7 fish per treatment). The threshold effect concentration amounted to 0.18 mL/L (0.0036 of the 96-hour LC50). This concentration was found to be even 2.8-fold lower than the concentration generally assumed to be "safe" for fish. At higher concentrations, the landfill leachate solution elicited a behavioral response of the test fish to sublethal levels of pollutants. The ability of the rainbow trout to detect and avoid contaminants appeared after 5 minutes of exposure. The intensity of locomotor activity reached a peak within 10 minutes and evidently decreased after 30 minutes. This could be explained by the physiological and biochemical adaptation of fish to altered environmental conditions. It has been established that the locomotor activity of juvenile trout depends on leachate concentration and exposure duration. Modeling of these parameters showed that the activity of juveniles increased at higher leachate concentrations but slightly decreased with increasing exposure duration. The experimental results confirm that the behavior of rainbow trout juveniles is a sensitive and rapid biomarker that can be used, in combination with the system for fish behavior monitoring, registration and analysis, to determine sublethal concentrations of pollutants in ambient water. Further research should focus on software improvements aimed at including more parameters of aquatic organisms' behavior and at investigating the most rapid and appropriate behavioral responses in different species. In practice, this study could be the basis for the development of biological early-warning systems (BEWS).
Keywords: Fish behavior biomarker, landfill leachate, locomotor activity, rainbow trout juveniles, sublethal effects.
Cyber Warriors for Cyber Security and Information Assurance - An Academic Perspective
Authors: Ronald F. Gonzales, Gordon W. Romney, Pradip Peter Dey, Mohammad Amin, Bhaskar Raj Sinha
Abstract:
A virtualized and virtual approach is presented for academically preparing students to successfully engage, from a strategic perspective, with those concerns and measures that are both structured and unstructured in the area of cyber security and information assurance. The Master of Science in Cyber Security and Information Assurance (MSCSIA) is a professional degree for those who endeavor, through technical and managerial measures, to ensure the security, confidentiality, integrity, authenticity, control, availability and utility of the world's computing and information systems infrastructure. The National University Cyber Security and Information Assurance program is offered as a Master's degree. The emphasis of the MSCSIA program uniquely includes hands-on academic instruction using virtual computers. This past year, 2011, the NU facility became fully operational, using a system architecture that provides a Virtual Education Laboratory (VEL) accessible to both onsite and online students. The first student cohort completed their MSCSIA training on March 2, 2012, after fulfilling 12 courses, for a total of 54 units of college credit. The rapid-pace scheduling of one course per month is immensely challenging, perpetually changing, and virtually multifaceted. This paper analyses these descriptive terms in consideration of the globalization penetration breaches present in today's world of cyber security. In addition, we present current NU practices to mitigate risks.
Keywords: Cyber security, information assurance, mitigate risks, virtual machines, strategic perspective.
Automatic Staging and Subtype Determination for Non-Small Cell Lung Carcinoma Using PET Image Texture Analysis
Authors: Seyhan Karaçavuş, Bülent Yılmaz, Ömer Kayaaltı, Semra İçer, Arzu Taşdemir, Oğuzhan Ayyıldız, Kübra Eset, Eser Kaya
Abstract:
In this study, our goal was to perform tumor staging and subtype determination automatically using different texture analysis approaches for a very common cancer type, i.e., non-small cell lung carcinoma (NSCLC). In particular, we introduced a texture analysis approach, namely Laws' texture filters, to be used in this context for the first time. The 18F-FDG PET images of 42 patients with NSCLC were evaluated. The number of patients for each tumor stage, i.e., I-II, III or IV, was 14. The patients had ~45% adenocarcinoma (ADC) and ~55% squamous cell carcinoma (SqCC). The MATLAB technical computing language was employed in the extraction of 51 features using first order statistics (FOS), the gray-level co-occurrence matrix (GLCM), the gray-level run-length matrix (GLRLM), and Laws' texture filters. The feature selection method employed was sequential forward selection (SFS). Selected textural features were used in automatic classification by k-nearest neighbors (k-NN) and support vector machines (SVM). In the automatic classification of tumor stage, the accuracy was approximately 59.5% with the k-NN classifier (k=3) and 69% with SVM (with the one-versus-one paradigm), using 5 features. In the automatic classification of tumor subtype, the accuracy was around 92.7% with one-vs-one SVM. Texture analysis of FDG-PET images might be used, in addition to metabolic parameters, as an objective tool to assess tumor histopathological characteristics and for automatic classification of tumor stage and subtype.
Keywords: Cancer stage, cancer cell type, non-small cell lung carcinoma, PET, texture analysis.
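To make the GLCM part of the feature pipeline concrete, the sketch below computes a few standard co-occurrence features with scikit-image (version 0.19+ function names) on a synthetic 8-bit region of interest; the real study used MATLAB, 51 features (FOS, GLCM, GLRLM, Laws) from FDG-PET tumour volumes, and SFS-based selection before k-NN/SVM classification.

```python
# Sketch of GLCM-based texture features of the kind used above, computed with
# scikit-image on a synthetic stand-in "tumour ROI". Illustration only.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(42)
roi = rng.integers(0, 64, size=(32, 32), dtype=np.uint8)   # stand-in ROI, 64 gray levels

glcm = graycomatrix(roi, distances=[1], angles=[0, np.pi / 2],
                    levels=64, symmetric=True, normed=True)

features = {prop: float(graycoprops(glcm, prop).mean())
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)
```

Feature vectors like this one, computed per patient, are what a k-NN or SVM classifier would then be trained on.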
Continuous FAQ Updating for Service Incident Ticket Resolution
Authors: Kohtaroh Miyamoto
Abstract:
As enterprise computing becomes more and more complex, the costs and technical challenges of IT system maintenance and support are increasing rapidly. One popular approach to managing IT system maintenance is to prepare and use an FAQ (Frequently Asked Questions) system to manage and reuse systems knowledge. Such an FAQ system can help reduce the resolution time for each service incident ticket. However, a major problem is that over time the knowledge in such FAQs tends to become outdated. Much of the knowledge captured in the FAQ requires periodic updates in response to new insights or new trends in the problems addressed in order to maintain its usefulness for problem resolution. These updates require a systematic approach to define the exact portion of the FAQ to update and its content. Therefore, we are working on a novel method to hierarchically structure the FAQ and automate the updates of its structure and content. We use the structured information and the unstructured text information, together with the timelines of the information, in the service incident tickets. We cluster the tickets by structured category information, by keywords, and by keyword modifiers for the unstructured text information. We also calculate an urgency score based on trends, resolution times, and priorities. We carefully studied the tickets of one of our projects over a 2.5-year time period. After the first 6 months we started to create FAQs and confirmed that they improved the resolution times. We continued observing over the next 2 years to assess the ongoing effectiveness of our method for automatic FAQ updates. We improved the ratio of tickets covered by the FAQ from 32.3% to 68.9% during this time. Also, the average reduction in ticket resolution time was between 31.6% and 43.9%. Subjective analysis showed that more than 75% of respondents reported that the FAQ system was useful in reducing ticket resolution times.
Keywords: FAQ System, Resolution Time, Service Incident Tickets, IT System Maintenance.
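The exact urgency formula is not given in the abstract, so the sketch below simply combines the three ingredients it names (trend, resolution time, priority) into one score for ranking which ticket clusters need an FAQ update first; the weights, normalizations, and cluster names are illustrative assumptions, not the paper's formula.

```python
# Illustrative urgency score for a ticket cluster, combining the three signals
# named in the abstract: recent trend (ticket volume growth), average
# resolution time, and priority. Weights and scaling are assumptions only.
def urgency(recent_count, previous_count, avg_resolution_hours, priority,
            w_trend=0.5, w_time=0.3, w_priority=0.2):
    trend = (recent_count - previous_count) / max(previous_count, 1)   # growth rate
    time_score = min(avg_resolution_hours / 48.0, 1.0)                 # cap at 2 days
    priority_score = {"low": 0.2, "medium": 0.5, "high": 1.0}[priority]
    return w_trend * max(trend, 0.0) + w_time * time_score + w_priority * priority_score

clusters = {
    "vpn-login-failure": urgency(40, 15, 30.0, "high"),
    "printer-driver":    urgency(12, 14, 6.0, "low"),
    "db-timeout":        urgency(25, 20, 52.0, "medium"),
}
for name, score in sorted(clusters.items(), key=lambda kv: -kv[1]):
    print(f"{name:20s} urgency={score:.2f}")
```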
Computer Aided X-Ray Diffraction Intensity Analysis for Spinels: Hands-On Computing Experience
Authors: Ashish R. Tanna, Hiren H. Joshi
Abstract:
The mineral with the chemical composition MgAl2O4 is called "spinel". Ferrites that crystallize in the spinel structure are known as spinel ferrites or ferro-spinels. The spinel structure has an fcc cage of oxygen ions, and the metallic cations are distributed among tetrahedral (A) and octahedral (B) interstitial voids (sites). The X-ray diffraction (XRD) intensity of each Bragg plane is sensitive to the distribution of cations in the interstitial voids of the spinel lattice. This leads to a method for determining the distribution of cations in spinel oxides through XRD intensity analysis. The computer program for XRD intensity analysis was developed in the C language and tested against a real experimental situation by synthesizing the spinel ferrite materials Mg0.6Zn0.4AlxFe2-xO4 and characterizing them by X-ray diffractometry. The compositions of the Mg0.6Zn0.4AlxFe2-xO4 (x = 0.0 to 0.6) ferrites were prepared by the ceramic method and powder X-ray diffraction patterns were recorded. The authenticity of the program was checked by comparing the theoretically calculated data from the computer simulation with the experimental ones. Further, the deduced cation distributions were used to fit the magnetization data using the localized canting of spins approach to explain the "recovery" of the collinear spin structure due to Al3+ substitution in Mg-Zn ferrites, which otherwise exhibit A-site magnetic dilution and a non-collinear spin structure. Since the distribution of cations in spinel ferrites plays a very important role with regard to their electrical and magnetic properties, it is essential to determine the cation distribution in the spinel lattice.
Keywords: Spinel ferrites, Localized canting of spins, X-ray diffraction, Programming in Borland C.
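In structure-factor terms, the dependence of each Bragg intensity on the cation distribution that this analysis exploits can be written as the standard textbook relation below (ignoring the multiplicity, Lorentz-polarization and temperature factors that a real intensity program must also include):

```latex
I_{hkl} \;\propto\; |F_{hkl}|^{2}, \qquad
F_{hkl} \;=\; \bar f_{A}\!\!\sum_{j \in A} e^{2\pi i (h x_j + k y_j + l z_j)}
         \;+\; \bar f_{B}\!\!\sum_{j \in B} e^{2\pi i (h x_j + k y_j + l z_j)}
         \;+\; f_{O}\!\!\sum_{j \in O} e^{2\pi i (h x_j + k y_j + l z_j)}
```

where the sums run over the tetrahedral (A), octahedral (B) and oxygen sites of the unit cell, and the average scattering factors are occupancy-weighted, e.g. an assumed fraction δ of Fe3+ on the A sites gives \bar f_A = (1-δ) f_{Mg/Zn} + δ f_{Fe}; varying δ is what changes the calculated intensity ratios that are matched to the observed pattern.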
Toward Understanding and Testing Deep Learning Information Flow in Deep Learning-Based Android Apps
Authors: Jie Zhang, Qianyu Guo, Tieyi Zhang, Zhiyong Feng, Xiaohong Li
Abstract:
The widespread popularity of mobile devices and the development of artificial intelligence (AI) have led to the widespread adoption of deep learning (DL) in Android apps. Compared with traditional Android apps (traditional apps), deep learning based Android apps (DL-based apps) need to use more third-party application programming interfaces (APIs) to complete complex DL inference tasks. However, existing methods (e.g., FlowDroid) for detecting sensitive information leakage in Android apps cannot be directly used on DL-based apps, as they have difficulty detecting third-party APIs. To solve this problem, we design DLtrace, a new static information flow analysis tool that can effectively recognize third-party APIs. With our proposed trace and detection algorithms, DLtrace can also efficiently detect privacy leaks caused by sensitive APIs in DL-based apps. Additionally, we propose two formal definitions to deal with the common polymorphism and anonymous inner-class problems in Android static analyzers. Using DLtrace, we summarize the non-sequential characteristics of DL inference tasks in DL-based apps and the specific functionalities provided by DL models for such apps. We conducted an empirical assessment with DLtrace on 208 popular DL-based apps in the wild and found that 26.0% of the apps suffered from sensitive information leakage. Furthermore, DLtrace outperformed FlowDroid in detecting and identifying third-party APIs. The experimental results demonstrate that DLtrace extends FlowDroid in understanding DL-based apps and detecting security issues therein.
Keywords: Mobile computing, deep learning apps, sensitive information, static analysis.
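The basic static information-flow idea behind such tools, propagating taint from sensitive source APIs along a data-flow graph and reporting any path that reaches a sink API, can be sketched as a small reachability check; the graph, source and sink names below are hypothetical and this is only an illustration of the general technique, not DLtrace's algorithm or API model.

```python
# Minimal sketch of static taint propagation over a (hypothetical) data-flow
# graph: mark values produced by sensitive source APIs, propagate along edges,
# and report any flow that reaches a sink API. Illustration only.
from collections import deque

# edges: "value produced at node X flows into node Y" (hypothetical call sites)
flow_graph = {
    "getImageFromCamera": ["preprocessBitmap"],
    "preprocessBitmap": ["tflite.Interpreter.run"],
    "tflite.Interpreter.run": ["buildAnalyticsPayload"],
    "getDeviceId": ["buildAnalyticsPayload"],
    "buildAnalyticsPayload": ["HttpURLConnection.write"],
}
sources = {"getImageFromCamera", "getDeviceId"}      # sensitive data origins
sinks = {"HttpURLConnection.write"}                  # data leaves the device

def tainted_paths(graph, sources, sinks):
    """BFS from each source; report source -> sink paths (potential leaks)."""
    leaks = []
    for src in sources:
        queue = deque([[src]])
        while queue:
            path = queue.popleft()
            node = path[-1]
            if node in sinks:
                leaks.append(path)
                continue
            for nxt in graph.get(node, []):
                if nxt not in path:          # avoid revisiting (cycles)
                    queue.append(path + [nxt])
    return leaks

for path in tainted_paths(flow_graph, sources, sinks):
    print(" -> ".join(path))
```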
A Survey of Field Programmable Gate Array-Based Convolutional Neural Network Accelerators
Authors: Wei Zhang
Abstract:
With the rapid development of deep learning, neural network and deep learning algorithms play a significant role in various practical applications. Due to their high accuracy and good performance, Convolutional Neural Networks (CNNs) in particular have become a research hotspot in the past few years. However, the size of the networks is becoming increasingly large due to the demands of practical applications, which poses a significant challenge for constructing high-performance implementations of deep learning neural networks. Meanwhile, many of these application scenarios also have strict requirements on the performance and low power consumption of hardware devices. Therefore, it is particularly critical to choose a suitable computing platform for hardware acceleration of CNNs. This article surveys the recent advances in Field Programmable Gate Array (FPGA)-based acceleration of CNNs. Various designs and implementations of accelerators based on FPGAs under different devices and network models are reviewed, and versions for Graphics Processing Units (GPUs), Application Specific Integrated Circuits (ASICs) and Digital Signal Processors (DSPs) are compared to present our own critical analysis and comments. Finally, we discuss different perspectives on these acceleration and optimization methods on FPGA platforms to further explore the opportunities and challenges for future research. More helpfully, we give a prospect for the future development of FPGA-based accelerators.
Keywords: Deep learning, field programmable gate array, FPGA, hardware acceleration, convolutional neural networks, CNN.
A New Multi-Target, Multi-Agent Search-and-Rescue Path Planning Approach
Authors: Jean Berger, Nassirou Lo, Martin Noel
Abstract:
Perfectly suited for natural or man-made emergency and disaster management situations such as floods, earthquakes, tornadoes, or tsunamis, multi-target search path planning for a team of rescue agents is known to be computationally hard, and most techniques developed so far fall short of successfully estimating the optimality gap. A novel mixed-integer linear programming (MIP) formulation is proposed to optimally solve the multi-target multi-agent discrete search and rescue (SAR) path planning problem. Aimed at maximizing the cumulative probability of successful target detection, it captures anticipated feedback information associated with possible observation outcomes resulting from projected path execution, while modeling agent discrete actions over all possible moving directions. Problem modeling further takes advantage of a network representation to encompass decision variables, expedite compact constraint specification, and lead to a substantial problem-solving speed-up. The proposed MIP approach uses the CPLEX optimization machinery, efficiently computing near-optimal solutions for practical-size problems, while giving a robust upper bound obtained from Lagrangean relaxation of the integrality constraints. Should a target eventually be positively detected during plan execution, a new problem instance would simply be reformulated from the current state and then solved over the next decision cycle. A computational experiment shows the feasibility and the value of the proposed approach.
Keywords: Search path planning, search and rescue, multi-agent, mixed-integer linear programming, optimization.
PeliGRIFF: A Parallel DEM-DLM/FD Method for DNS of Particulate Flows with Collisions
Authors: Anthony Wachs, Guillaume Vinay, Gilles Ferrer, Jacques Kouakou, Calin Dan, Laurence Girolami
Abstract:
An original Direct Numerical Simulation (DNS) method to tackle the problem of particulate flows at moderate to high concentration and finite Reynolds number is presented. Our method is built on the framework established by Glowinski and his coworkers [1] in the sense that we use their Distributed Lagrange Multiplier/Fictitious Domain (DLM/FD) formulation and their operator-splitting idea, but differs in the treatment of particle collisions. The novelty of our contribution lies in replacing the simple artificial repulsive-force-based collision model usually employed in the literature by an efficient Discrete Element Method (DEM) granular solver. The use of our DEM solver enables us to consider particles of arbitrary shape (at least convex) and to account for actual contacts, in the sense that particles actually touch each other, in contrast with the simple repulsive-force-based collision model. We recently upgraded our serial code, GRIFF [2], to full MPI capabilities. Our new code, PeliGRIFF, is developed under the framework of the fully MPI open source platform PELICANS [3]. The new MPI capabilities of PeliGRIFF open new perspectives in the study of particulate flows and significantly increase the number of particles that can be considered in a full DNS approach: O(100000) in 2D and O(10000) in 3D. Results on the 2D/3D sedimentation/fluidization of isometric polygonal/polyhedral particles with collisions are presented.
Keywords: Particulate flow, distributed Lagrange multiplier/fictitious domain method, discrete element method, polygonal shape, sedimentation, distributed computing, MPI
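The contrast drawn above, an actual-contact DEM force instead of a simple repulsive barrier, can be illustrated with the standard linear spring-dashpot normal-contact model for two spheres; the stiffness and damping values are arbitrary, and the real solver also handles tangential forces, friction, and arbitrary convex shapes.

```python
# Sketch of a linear spring-dashpot normal contact force between two spheres,
# the kind of actual-contact DEM model contrasted above with simple repulsive
# collision barriers. Parameter values are illustrative only.
import math

def normal_contact_force(x1, x2, v1, v2, r1, r2, k_n=1.0e5, c_n=5.0):
    """Return the contact force on particle 1 (3-vector) due to particle 2."""
    d = [a - b for a, b in zip(x1, x2)]
    dist = math.sqrt(sum(c * c for c in d))
    overlap = (r1 + r2) - dist
    if overlap <= 0.0 or dist == 0.0:
        return [0.0, 0.0, 0.0]                      # no contact
    n = [c / dist for c in d]                        # unit normal, from 2 to 1
    rel_vn = sum((a - b) * c for a, b, c in zip(v1, v2, n))
    f_mag = k_n * overlap - c_n * rel_vn             # elastic spring + viscous dashpot
    return [f_mag * c for c in n]

# Two slightly overlapping spheres approaching each other.
print(normal_contact_force(x1=[0.0, 0.0, 0.0], x2=[0.018, 0.0, 0.0],
                           v1=[0.1, 0.0, 0.0], v2=[-0.1, 0.0, 0.0],
                           r1=0.01, r2=0.01))
```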
Parallel Pipelined Conjugate Gradient Algorithm on Heterogeneous Platforms
Authors: Sergey Kopysov, Nikita Nedozhogin, Leonid Tonkov
Abstract:
The article presents a parallel iterative solver for large sparse linear systems which can be used on a heterogeneous platform. Traditionally, the problem of solving linear systems does not scale well on clusters containing multiple Central Processing Units (multi-CPU clusters) or clusters containing multiple Graphics Processing Units (multi-GPU clusters). For example, most attempts to implement the classical conjugate gradient method at best kept the solution time constant as the problem was enlarged. The paper proposes the pipelined variant of the conjugate gradient method (PCG), a formulation that is potentially better suited for hybrid CPU/GPU computing since it requires only one synchronization point per iteration, instead of two for standard CG (Conjugate Gradient). The standard and pipelined CG methods need the vector entries generated by the current GPU and by other GPUs for the matrix-vector product, so the communication between GPUs becomes a major performance bottleneck on a multi-GPU cluster. The article presents an approach to minimize the communication between parallel parts of the algorithms. Additionally, computation and communication can be overlapped to reduce the impact of data exchange. Using the pipelined version of the CG method with one synchronization point, the possibility of asynchronous calculations and communications, and load balancing between the CPU and GPU, allows the solution of large linear systems to scale. The algorithm is implemented with the combined use of the MPI, OpenMP and CUDA technologies. We show that an almost optimal speedup on 8 CPUs/2 GPUs may be reached (relative to a single-GPU execution). The parallelized solver achieves a speedup of up to 5.49 times on 16 NVIDIA Tesla GPUs, as compared to one GPU.
Keywords: Conjugate Gradient, GPU, parallel programming, pipelined algorithm.
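For reference, the single-synchronization structure of pipelined CG can be seen in a compact serial NumPy sketch: the two inner products are formed together, so a distributed implementation needs only one global reduction per iteration and can overlap it with the local matrix-vector product. This is a generic pipelined CG written from the standard recurrences, not the paper's MPI/OpenMP/CUDA code.

```python
# Serial NumPy sketch of pipelined CG: the two inner products per iteration are
# computed together, so a parallel version needs only one global reduction per
# iteration, which can be overlapped with the next matrix-vector product.
import numpy as np

def pipelined_cg(A, b, x0=None, tol=1e-8, maxiter=200):
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A @ x
    w = A @ r
    z = s = p = np.zeros_like(b)
    alpha_old = gamma_old = 1.0
    for i in range(maxiter):
        gamma = r @ r                 # | these two reductions form the single
        delta = w @ r                 # | synchronization point per iteration
        q = A @ w                     # can be overlapped with the reduction
        if gamma ** 0.5 < tol:
            break
        beta = 0.0 if i == 0 else gamma / gamma_old
        alpha = gamma / (delta - beta * gamma / alpha_old)
        z = q + beta * z              # z = A*s, kept by recurrence
        s = w + beta * s              # s = A*p, kept by recurrence
        p = r + beta * p              # search direction
        x = x + alpha * p
        r = r - alpha * s
        w = w - alpha * z             # w = A*r, kept by recurrence
        gamma_old, alpha_old = gamma, alpha
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)         # symmetric positive definite test matrix
b = rng.standard_normal(50)
x = pipelined_cg(A, b)
print("residual norm:", np.linalg.norm(b - A @ x))
```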
The Cloud Systems Used in Education: Properties and Overview
Authors: Agah Tuğrul Korucu, Handan Atun
Abstract:
The diversity and usefulness of the information used in education have increased due to the development of technology. Web technologies have made enormous contributions especially to distance learning systems. Mobile systems, one of the most widely used technologies in distance education, have made web technologies much easier to access. Unbound by space and time, individuals have the opportunity to access information on the web. In addition, the storage of educational information and resources, and access to this information and these resources, is crucial for both students and teachers. Because of this importance, web technologies that provide easy access to information and resources have been developed and disseminated. Dynamic web technologies, introduced as new technologies that enable sharing and reuse of information, resources or applications via the Internet and turn websites into expandable platforms, are commonly known as Web 2.0 technologies. Cloud systems are one of the dynamic web technologies, defined by NIST as a model that provides access to the demanded information independent of time and space under appropriate circumstances. One of the most important advantages of cloud systems is that they meet the requirements of users directly on the web regardless of hardware and software, and without dealing with installation. Hence, this study aims at examining the use of cloud services in education and investigating the services provided by cloud computing. The survey method was used as the research method. The findings of this research reveal that cloud systems are used for activities such as resource sharing, collaborative work, assignment submission and feedback, and project development in the field of education; it is also revealed that cloud systems have many significant advantages in terms of facilitating teaching activities and the interaction between teacher, student and environment.
Keywords: Cloud systems, cloud systems in education, distance learning, e-learning, integration of information technologies, online learning environment.
Peculiarities of Internal Friction and Shear Modulus in 60Co γ-Rays Irradiated Monocrystalline SiGe Alloys
Authors: I. Kurashvili, G. Darsavelidze, T. Kimeridze, G. Chubinidze, I. Tabatadze
Abstract:
At present, a number of modern semiconductor devices based on SiGe alloys, in which the latest achievements of high technology are used, have been created. These devices might cause significant changes in networking, computing, and space technology. In the near future, new materials based on SiGe will be able to restrict the A3B5 and Si technologies and firmly establish themselves in medium-frequency electronics. Effective realization of these prospects requires solving the problems of predicting and controlling the structural state and the dynamic physical-mechanical properties of new SiGe materials. Under these circumstances, a complex investigation of structural defects and structure-sensitive dynamic mechanical characteristics of SiGe alloys under different external impacts (deformation, radiation, thermal cycling) acquires great importance. The temperature and amplitude dependences of the internal friction (IF) and shear modulus of monocrystalline boron-doped Si1-xGex (x≤0.05) alloys grown by the Czochralski technique are studied in the initial and 60Co gamma-irradiated states. In the initial samples, a set of dislocation-origin relaxation processes and the accompanying modulus defects are revealed in the temperature interval of 400-800 °C. It is shown that after gamma-irradiation the intensity of the relaxation internal friction in the vicinity of 280 °C increases, and the activation parameters of the high-temperature relaxation processes simultaneously show a clear rise. It is proposed that these changes in the dynamic mechanical characteristics might be caused by a decrease of the dislocation mobility in the Cottrell atmospheres enriched by radiation defects.
Keywords: Gamma-irradiation, internal friction, shear modulus, SiGe alloys.
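For context, the "activation parameters" extracted from such internal-friction peaks follow the standard Debye/Arrhenius description of a thermally activated relaxation; the relations below are textbook expressions added for illustration, not values or fits from this study.

```latex
Q^{-1}(T) \;=\; \Delta\,\frac{\omega\tau}{1+(\omega\tau)^{2}}, \qquad
\tau \;=\; \tau_{0}\,\exp\!\left(\frac{H}{kT}\right)
```

The loss peak occurs where ωτ = 1, so the activation enthalpy H (and the pre-exponential factor τ0) follow from the shift of the peak temperature with measurement frequency; an irradiation-induced rise of these activation parameters is what the abstract attributes to reduced dislocation mobility in defect-enriched Cottrell atmospheres.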