Search results for: Economy-based computing


59 Computing Entropy for Ortholog Detection

Authors: Hsing-Kuo Pao, John Case

Abstract:

Biological sequences from different species are called orthologs if they evolved from a sequence of a common ancestor species and they have the same biological function. Approximations of the Kolmogorov complexity or entropy of biological sequences are already well known to be useful in extracting similarity information between such sequences - in the interest, for example, of ortholog detection. As is well known, the exact Kolmogorov complexity is not algorithmically computable. In practice one can approximate it by computable compression methods. However, such compression methods do not provide a good approximation to Kolmogorov complexity for short sequences. Herein is suggested a new approach to overcome the problem that compression approximations may not work well on short sequences. This approach is inspired by new, conditional computations of Kolmogorov entropy. A main contribution of the empirical work described shows that the new set of entropy-based machine learning attributes provides good separation between positive (ortholog) and negative (non-ortholog) data - better than with good, previously known alternatives (which do not employ some means to handle short sequences well). Also empirically compared are the new entropy-based attribute set and a number of other, more standard similarity attribute sets commonly used in genomic analysis. The various similarity attributes are evaluated by cross validation, through boosted decision tree induction with C5.0, and by Receiver Operating Characteristic (ROC) analysis. The results point to the conclusion that the new, entropy-based attribute set by itself is not the one giving the best prediction; however, it is the best attribute set for improving the other, standard attribute sets when conjoined with them.
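
For context, the compression approximation mentioned above is commonly realized as the normalized compression distance (NCD). The following minimal sketch uses zlib as the compressor; it is a generic baseline for illustration, not the paper's conditional entropy attributes, and the sequences are invented:

```python
import zlib

def c(s: bytes) -> int:
    """Compressed length in bytes, a computable stand-in for Kolmogorov complexity."""
    return len(zlib.compress(s, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: smaller for more similar sequences."""
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

# Toy sequences. For short inputs the compressor's fixed overhead dominates,
# which is exactly the weakness the paper's conditional entropy computations
# are designed to work around.
s1 = b"ACGTACGTACGTACGT" * 4
s2 = b"ACGTACGTACGTACGA" * 4
print(ncd(s1, s2))  # well below 1: related sequences
print(ncd(s1, s1))  # ideally near 0; nonzero here due to compressor overhead
```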

Keywords: compression, decision tree, entropy, ortholog, ROC.

58 An Investigation of Performance versus Security in Cognitive Radio Networks with Supporting Cloud Platforms

Authors: Kurniawan D. Irianto, Demetres D. Kouvatsos

Abstract:

The growth of wireless devices affects the availability of limited frequencies or spectrum bands, as spectrum is a natural resource that cannot be added to. Meanwhile, the licensed frequencies are idle most of the time. Cognitive radio is one of the solutions to these problems. Cognitive radio is a promising technology that allows unlicensed users, known as secondary users (SUs), to access licensed bands without causing interference to licensed users, or primary users (PUs). As cloud computing has become popular in recent years, cognitive radio networks (CRNs) can be integrated with a cloud platform. One of the important issues in CRNs is security. It is a problem because CRNs use radio frequencies as the transmission medium and therefore share the same issues as wireless communication systems. Another critical issue in CRNs is performance. Security has an adverse effect on performance, and there are trade-offs between the two. The goal of this paper is to investigate the performance-security trade-off in CRNs with supporting cloud platforms. Furthermore, queueing network models with preemptive resume and preemptive repeat identical priority are applied to measure the impact of security on performance in CRNs with and without a cloud platform. The generalized exponential (GE) type distribution is used to reflect the bursty inter-arrival and service times at the servers. The results show that the best performance is obtained when security is disabled and the cloud platform is enabled.
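
The GE-type distribution referred to above can be sampled as a two-point mixture: a zero delay (a batch arrival) or an exponential delay. A minimal sketch, with parameter names of my own choosing:

```python
import random

def ge_sample(lam: float, scv: float) -> float:
    """Sample from a GE distribution with mean 1/lam and squared
    coefficient of variation scv (>= 1). With probability 1 - tau the
    inter-arrival time is 0; otherwise it is Exp(tau * lam)."""
    tau = 2.0 / (scv + 1.0)
    if random.random() > tau:
        return 0.0
    return random.expovariate(tau * lam)

# Sanity check: empirical mean ~ 1/lam = 0.5, empirical SCV ~ 4.0.
samples = [ge_sample(lam=2.0, scv=4.0) for _ in range(200_000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(round(mean, 3), round(var / mean**2, 2))
```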

Keywords: Cloud platforms, cognitive radio networks, GE-type distribution, performance vs. security.

57 The Perception on 21st Century Skills of Nursing Instructors and Nursing Students at Boromarajonani College of Nursing, Chonburi

Authors: Kamolrat Turner, Somporn Rakkwamsuk, Ladda Leungratanamart

Abstract:

The aim of this descriptive study was to determine the perception of 21st century skills among nursing instructors and nursing students at Boromarajonani College of Nursing, Chonburi. A total of 38 nursing instructors and 75 second-year nursing students took part in the study. Data were collected with a 21st century skills questionnaire comprising 63 items. Descriptive statistics were used to describe the findings. The results show that the overall mean scores of nursing instructors' perception of 21st century skills were at a high level. The highest mean scores were recorded for computing and ICT literacy, and for career and learning skills. The lowest mean scores were recorded for reading, writing, and mathematics. The overall mean scores of nursing students' perception of 21st century skills were also at a high level. Their highest mean scores were recorded for computing and ICT literacy, within which the highest item mean score was competency in computer programs. Their lowest mean scores were recorded for the reading, writing, and mathematics component, in which the highest item mean score was reading Thai correctly, and the lowest was reading English and translating it for others correctly. The findings show that the perceptions of nursing instructors were consistent with those of nursing students. Moreover, activities aimed at raising capacity in reading English and translating the information for others should be taken into consideration.

Keywords: 21st century skills, perception, nursing instructor, nursing student.

56 Cloud Enterprise Application Provider Selection Model for the Small and Medium Enterprise: A Pilot Study

Authors: Rowland R. Ogunrinde, Yusmadi Y. Jusoh, Noraini Che Pa, Wan Nurhayati W. Rahman, Azizol B. Abdullah

Abstract:

Enterprise Applications (EAs) help organizations achieve operational excellence and competitive advantage. Over time, most Small and Medium Enterprises (SMEs), which are known to be the major drivers of most thriving global economies, have used the costly on-premise versions of these applications, making it difficult for them to thrive competitively in the same market environment as their large-enterprise counterparts. The advent of cloud computing presents SMEs with an affordable offer and great opportunities, as such EAs can be cloud-hosted and rented on a pay-per-use basis that does not require huge initial capital. However, as there are numerous Cloud Service Providers (CSPs) offering EAs as Software-as-a-Service (SaaS), there is the challenge of choosing a suitable provider with Quality of Service (QoS) that meets the organization's customized requirements. The proposed model addresses this and goes a step further to select the most affordable among a selected few of the CSPs. In the earlier stage, before developing the instrument and conducting the pilot test, the researchers conducted a structured interview with three experts to validate the proposed model. In conclusion, the validity and reliability of the instrument were tested through experts and typical respondents, and analyzed with SPSS 22. The results confirmed the validity of the proposed model and the validity and reliability of the instrument.
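
The abstract does not give the model's scoring formula; one plausible shape for such a two-stage selection is a weighted QoS ranking followed by a price comparison among the shortlisted providers. The criteria, weights, and figures below are invented for illustration:

```python
# Hypothetical QoS scores on a 0-1 scale (higher is better) and monthly prices.
providers = {
    "CSP-A": {"availability": 0.99, "support": 0.7, "compliance": 0.8, "price": 120},
    "CSP-B": {"availability": 0.95, "support": 0.9, "compliance": 0.9, "price": 95},
    "CSP-C": {"availability": 0.97, "support": 0.6, "compliance": 0.7, "price": 80},
}
weights = {"availability": 0.5, "support": 0.2, "compliance": 0.3}

def qos_score(attrs: dict) -> float:
    """Weighted sum of normalized QoS criteria (price is handled separately)."""
    return sum(weights[k] * attrs[k] for k in weights)

# Stage 1: shortlist the top providers by QoS; stage 2: pick the cheapest.
shortlist = sorted(providers, key=lambda p: qos_score(providers[p]), reverse=True)[:2]
choice = min(shortlist, key=lambda p: providers[p]["price"])
print(shortlist, "->", choice)
```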

Keywords: Cloud service provider, enterprise applications, quality of service, selection criteria, small and medium enterprise.

55 Understanding and Designing Situation-Aware Mobile and Ubiquitous Computing Systems

Authors: Kai Häussermann, Christoph Hubig, Paul Levi, Frank Leymann, Oliver Siemoneit, Matthias Wieland, Oliver Zweigle

Abstract:

Using spatial models as a shared common basis of information about the environment for different kinds of context-aware systems has been a heavily researched topic in recent years. That research focused on how to create, update, and merge spatial models so as to enable highly dynamic, consistent, and coherent spatial models at large scale. In this paper, however, we concentrate on how context-aware applications can use this information to adapt their behavior according to the situation they are in. The main idea is to provide the spatial model infrastructure with a situation recognition component based on generic situation templates. A situation template is – as part of a much larger situation template library – an abstract, machine-readable description of a certain basic situation type, which can be used by different applications to evaluate their situation. In this paper, different theoretical and practical issues – technical, ethical, and philosophical ones – are discussed that are important for understanding and developing situation-dependent systems based on situation templates. A basic system design is presented which allows for reasoning with uncertain data using an improved version of a learning algorithm for the automatic adaptation of situation templates. Finally, to support the development of adaptive applications, we present a new situation-aware adaptation concept based on workflows.

Keywords: context-awareness, ethics, facilitation of system use through workflows, situation recognition and learning based on situation templates and situation ontologies, theory of situation-aware systems

54 Behavioral Analysis of Team Members in Virtual Organization based on Trust Dimension and Learning

Authors: Indiramma M., K. R. Anandakumar

Abstract:

Trust management and reputation models are becoming an integral part of Internet-based applications such as CSCW, e-commerce, and grid computing. The trust dimension is also a significant social structure and key to social relations within a collaborative community. Collaborative Decision Making (CDM) is a difficult task in a distributed environment, where information is spread across different geographical locations and multidisciplinary decisions are involved, as in a Virtual Organization (VO). To aid team decision making in a VO, Decision Support System and social network analysis approaches are integrated. In such situations, social learning helps an organization in terms of relationships, team formation, partner selection, etc. In this paper we focus on trust learning. Trust learning is an important activity in terms of information exchange, negotiation, collaboration, and trust assessment for cooperation among virtual team members. We propose a reinforcement learning approach which enhances the trust decision-making capability of interacting agents during collaboration in problem-solving activity. The trust computational model with learning that we present is adapted for selecting the best alternative for a new project in the organization. We verify our model in a multi-agent simulation where the agents in the community learn to identify trustworthy members and the inconsistent and conflicting behavior of agents.
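
The abstract leaves the learning rule abstract; a minimal reinforcement-style trust update, in which each trust estimate moves toward observed interaction outcomes, can be sketched as follows (the learning rate and the simulated reliabilities are illustrative assumptions, not the paper's exact model):

```python
import random

def update_trust(trust: float, outcome: float, alpha: float = 0.1) -> float:
    """Reinforcement-style update: move the trust estimate toward the
    observed outcome (1.0 = cooperative, 0.0 = defecting/inconsistent)."""
    return trust + alpha * (outcome - trust)

# Simulate learning the trustworthiness of three partners with hidden
# reliabilities; the estimates converge toward the true values, letting
# an agent identify trustworthy vs. inconsistent members.
true_reliability = {"agent_1": 0.9, "agent_2": 0.5, "agent_3": 0.2}
trust = {a: 0.5 for a in true_reliability}          # neutral prior
for _ in range(1000):
    for a, p in true_reliability.items():
        outcome = 1.0 if random.random() < p else 0.0
        trust[a] = update_trust(trust[a], outcome)
print({a: round(t, 2) for a, t in trust.items()})   # ~ the true reliabilities
```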

Keywords: Collaborative Decision making, Trust, Multi Agent System (MAS), Bayesian Network, Reinforcement Learning.

53 Evaluation of State of the Art IDS Message Exchange Protocols

Authors: Robert Koch, Mario Golling, Gabi Dreo

Abstract:

During the last couple of years, the degree of dependence on IT systems has reached a dimension nobody imagined possible 10 years ago. The increased usage of mobile devices (e.g., smartphones), wireless sensor networks, and embedded devices (the Internet of Things) are only some examples of the dependency of modern societies on cyberspace. At the same time, the complexity of IT applications, e.g., because of the increasing use of cloud computing, is rising continuously. Along with this, the threats to IT security have increased both quantitatively and qualitatively, as recent examples like Stuxnet or the suspected cyber attack on an Illinois water system prove impressively. Control systems that were once isolated are nowadays often publicly available - a fact that was never intended by the developers. Threats to IT systems don't care about areas of responsibility. Especially with regard to cyber warfare, IT threats are no longer limited to company or industry boundaries, administrative jurisdictions, or state boundaries. One of the important countermeasures is increased cooperation among the participants, especially in the field of cyber defence. Besides political and legal challenges, there are technical ones as well. A better, at least partially automated exchange of information is essential to (i) enable sophisticated situational awareness and to (ii) counter the attacker in a coordinated way. Therefore, this publication performs an evaluation of state-of-the-art Intrusion Detection Message Exchange protocols in order to guarantee a secure information exchange between different entities.

Keywords: Cyber Defence, Cyber Warfare, Intrusion Detection Information Exchange, Early Warning Systems, Joint Intrusion Detection, Cyber Conflict

52 4D Modelling of Low Visibility Underwater Archaeological Excavations Using Multi-Source Photogrammetry in the Bulgarian Black Sea

Authors: Rodrigo Pacheco-Ruiz, Jonathan Adams, Felix Pedrotti

Abstract:

This paper introduces the applicability of underwater photogrammetric survey within challenging conditions as the main tool to enhance and enrich the process of documenting archaeological excavation through the creation of 4D models. Photogrammetry was attempted on underwater archaeological sites at least as early as the 1970s, and today the production of traditional 3D models is becoming a common practice within the discipline. Underwater photogrammetry is more often implemented to record exposed underwater archaeological remains and less so as a dynamic interpretative tool. Therefore, it tends to be applied in bright environments and when underwater visibility is > 1 m, reducing its implementation on most submerged archaeological sites in more turbid conditions. Recent years have seen significant development of better digital photographic sensors and the improvement of optical technology, ideal for darker environments. Such developments, in tandem with powerful processing computing systems, have allowed underwater photogrammetry to be used by this research as a standard recording and interpretative tool. Using multi-source photogrammetry (five GoPro Hero5 Black cameras), this paper presents the accumulation of daily (4D) underwater surveys carried out at the Early Bronze Age (3,300 BC) to Late Ottoman (17th century AD) archaeological site of Ropotamo in the Bulgarian Black Sea under challenging conditions (< 0.5 m visibility). It proves that underwater photogrammetry can and should be used as one of the main recording methods even in low light and poor underwater conditions as a way to better understand the complexity of the underwater archaeological record.

Keywords: 4D modelling, Black Sea, maritime archaeology, underwater photogrammetry, Bronze Age, low visibility.

51 A Proposal for a Secure and Interoperable Data Framework for Energy Digitalization

Authors: Hebberly Ahatlan

Abstract:

The process of digitizing energy systems involves transforming traditional energy infrastructure into interconnected, data-driven systems that enhance efficiency, sustainability, and responsiveness. As smart grids become increasingly integral to the efficient distribution and management of electricity from both fossil and renewable energy sources, the energy industry faces strategic challenges associated with digitalization and interoperability — particularly in the context of modern energy business models, such as virtual power plants (VPPs). The critical challenge in modern smart grids is to seamlessly integrate diverse technologies and systems, including virtualization, grid computing and service-oriented architecture (SOA), across the entire energy ecosystem. Achieving this requires addressing issues like semantic interoperability, Information Technology (IT) and Operational Technology (OT) convergence, and digital asset scalability, all while ensuring security and risk management. This paper proposes a four-layer digitalization framework to tackle these challenges, encompassing persistent data protection, trusted key management, secure messaging, and authentication of IoT resources. Data assets generated through this framework enable AI systems to derive insights for improving smart grid operations, security, and revenue generation. Furthermore, this paper also proposes a Trusted Energy Interoperability Alliance as a universal guiding standard in the development of this digitalization framework to support more dynamic and interoperable energy markets.

Keywords: Digitalization, IT/OT convergence, semantic interoperability, TEIA alliance, VPP.

50 Hierarchies Based On the Number of Cooperating Systems of Finite Automata on Four-Dimensional Input Tapes

Authors: Makoto Sakamoto, Yasuo Uchida, Makoto Nagatomo, Takao Ito, Tsunehiro Yoshinaga, Satoshi Ikeda, Masahiro Yokomichi, Hiroshi Furutani

Abstract:

In theoretical computer science, the Turing machine has played a number of important roles in understanding and exploiting basic concepts and mechanisms in computing and information processing [20]. It is a simple mathematical model of computers [9]. M. Blum and C. Hewitt then first proposed two-dimensional automata as a computational model of two-dimensional pattern processing and investigated their pattern recognition abilities in 1967 [7]. Since then, many researchers in this field have investigated the properties of automata on two- or three-dimensional tapes. On the other hand, the question of whether processing four-dimensional digital patterns is much more difficult than processing two- or three-dimensional ones is of great interest from both the theoretical and practical standpoints. Thus, the study of four-dimensional automata as a computational model of four-dimensional pattern processing has been meaningful [8]-[19], [21]. This paper introduces a cooperating system of four-dimensional finite automata as one model of four-dimensional automata. A cooperating system of four-dimensional finite automata consists of a finite number of four-dimensional finite automata and a four-dimensional input tape on which these finite automata work independently (in parallel). Finite automata whose input heads scan the same cell of the input tape can communicate with each other; that is, every finite automaton is allowed to know the internal states of the other finite automata on the cell it is scanning at the moment. In this paper, we mainly investigate the accepting powers of cooperating systems of eight- or seven-way four-dimensional finite automata. A seven-way four-dimensional finite automaton is an eight-way four-dimensional finite automaton whose input head can move east, west, south, north, up, down, or in the future, but not in the past, on a four-dimensional input tape.

Keywords: computational complexity, cooperating system, finite automaton, four-dimension, hierarchy, multihead.

49 Cyber Warriors for Cyber Security and Information Assurance: An Academic Perspective

Authors: Ronald F. Gonzales, Gordon W. Romney, Pradip Peter Dey, Mohammad Amin, Bhaskar Raj Sinha

Abstract:

A virtualized and virtual approach is presented for academically preparing students to successfully engage, from a strategic perspective, with those concerns and measures that are both structured and not structured in the area of cyber security and information assurance. The Master of Science in Cyber Security and Information Assurance (MSCSIA) is a professional degree for those who endeavor, through technical and managerial measures, to ensure the security, confidentiality, integrity, authenticity, control, availability, and utility of the world's computing and information systems infrastructure. The National University Cyber Security and Information Assurance program is offered as a Master's degree. The emphasis of the MSCSIA program uniquely includes hands-on academic instruction using virtual computers. This past year, 2011, the NU facility became fully operational, using a system architecture that provides a Virtual Education Laboratory (VEL) accessible to both onsite and online students. The first student cohort completed their MSCSIA training on March 2, 2012, after fulfilling 12 courses for a total of 54 units of college credit. The rapid-pace scheduling of one course per month is immensely challenging, perpetually changing, and virtually multifaceted. This paper analyses these descriptive terms in consideration of the globalization penetration breaches present in today's world of cyber security. In addition, we present current NU practices to mitigate risks.

Keywords: Cyber security, information assurance, mitigate risks, virtual machines, strategic perspective.

48 Automatic Staging and Subtype Determination for Non-Small Cell Lung Carcinoma Using PET Image Texture Analysis

Authors: Seyhan Karaçavuş, Bülent Yılmaz, Ömer Kayaaltı, Semra İçer, Arzu Taşdemir, Oğuzhan Ayyıldız, Kübra Eset, Eser Kaya

Abstract:

In this study, our goal was to perform tumor staging and subtype determination automatically using different texture analysis approaches for a very common cancer type, i.e., non-small cell lung carcinoma (NSCLC). In particular, we introduce a texture analysis approach, Laws' texture filters, to be used in this context for the first time. The 18F-FDG PET images of 42 patients with NSCLC were evaluated. The number of patients for each tumor stage, i.e., I-II, III, or IV, was 14. About 45% of the patients had adenocarcinoma (ADC) and about 55% squamous cell carcinoma (SqCC). The MATLAB technical computing language was employed in the extraction of 51 features using first order statistics (FOS), the gray-level co-occurrence matrix (GLCM), the gray-level run-length matrix (GLRLM), and Laws' texture filters. The feature selection method employed was sequential forward selection (SFS). Selected textural features were used in automatic classification by k-nearest neighbors (k-NN) and support vector machines (SVM). In the automatic classification of tumor stage, the accuracy was approximately 59.5% with the k-NN classifier (k=3) and 69% with SVM (one-versus-one paradigm), using 5 features. In the automatic classification of tumor subtype, the accuracy was around 92.7% with SVM one vs. one. Texture analysis of FDG-PET images might be used, in addition to metabolic parameters, as an objective tool to assess tumor histopathological characteristics and in the automatic classification of tumor stage and subtype.
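
As an illustration of the pipeline described (the study itself used MATLAB; this sketch substitutes scikit-image and scikit-learn, and synthetic arrays stand in for the PET images):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(img: np.ndarray) -> list:
    """A few GLCM texture features of an 8-bit grayscale image."""
    glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return [graycoprops(glcm, p)[0, 0]
            for p in ("contrast", "homogeneity", "energy", "correlation")]

# Synthetic stand-ins for tumor ROIs: smooth vs. noisy textures.
rng = np.random.default_rng(0)
smooth = [rng.normal(128, 5, (32, 32)).astype(np.uint8) for _ in range(20)]
noisy = [rng.integers(0, 256, (32, 32), dtype=np.uint8) for _ in range(20)]
X = np.array([glcm_features(im) for im in smooth + noisy])
y = np.array([0] * 20 + [1] * 20)

clf = SVC(decision_function_shape="ovo").fit(X, y)  # one-vs-one, as in the paper
print(clf.score(X, y))
```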

Keywords: Cancer stage, cancer cell type, non-small cell lung carcinoma, PET, texture analysis.

47 Continuous FAQ Updating for Service Incident Ticket Resolution

Authors: Kohtaroh Miyamoto

Abstract:

As enterprise computing becomes more and more complex, the costs and technical challenges of IT system maintenance and support are increasing rapidly. One popular approach to managing IT system maintenance is to prepare and use a FAQ (Frequently Asked Questions) system to manage and reuse systems knowledge. Such a FAQ system can help reduce the resolution time for each service incident ticket. However, there is a major problem: over time the knowledge in such FAQs tends to become outdated. Much of the knowledge captured in the FAQ requires periodic updates in response to new insights or new trends in the problems addressed in order to maintain its usefulness for problem resolution. These updates require a systematic approach to identify the exact portion of the FAQ to change and its content. Therefore, we are working on a novel method to hierarchically structure the FAQ and automate the updates of its structure and content. We use the structured information and the unstructured text, with their timelines, from the service incident tickets. We cluster the tickets by structured category information, and by keywords and keyword modifiers for the unstructured text. We also calculate an urgency score based on trends, resolution times, and priorities. We carefully studied the tickets of one of our projects over a 2.5-year period. After the first 6 months we started to create FAQs and confirmed that they improved the resolution times. We continued observing over the next 2 years to assess the ongoing effectiveness of our method for automatic FAQ updates. We improved the ratio of tickets covered by the FAQ from 32.3% to 68.9% during this time. The average reduction in ticket resolution time was between 31.6% and 43.9%, and in a subjective analysis more than 75% of respondents reported that the FAQ system was useful in reducing ticket resolution times.
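
The exact scoring is not given in the abstract; a plausible sketch of an urgency score that combines trend, resolution time, and priority, as described, is a simple weighted sum (weights and fields are invented):

```python
def urgency_score(cluster: dict,
                  w_trend: float = 0.5,
                  w_time: float = 0.3,
                  w_priority: float = 0.2) -> float:
    """Combine a rising-trend indicator, the normalized average resolution
    time, and the normalized priority into one urgency score in [0, 1].
    Higher scores flag FAQ entries that most need creating or updating."""
    return (w_trend * cluster["trend"]             # 0..1, growth in ticket volume
            + w_time * cluster["norm_res_time"]    # 0..1, longer = more urgent
            + w_priority * cluster["norm_priority"])

clusters = [
    {"name": "login failures", "trend": 0.9, "norm_res_time": 0.4, "norm_priority": 0.8},
    {"name": "report layout",  "trend": 0.1, "norm_res_time": 0.2, "norm_priority": 0.3},
]
for c in sorted(clusters, key=urgency_score, reverse=True):
    print(c["name"], round(urgency_score(c), 2))
```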

Keywords: FAQ System, Resolution Time, Service Incident Tickets, IT System Maintenance.

46 Computer Aided X-Ray Diffraction Intensity Analysis for Spinels: Hands-On Computing Experience

Authors: Ashish R. Tanna, Hiren H. Joshi

Abstract:

The mineral having the chemical compositional formula MgAl2O4 is called "spinel". Ferrites that crystallize in the spinel structure are known as spinel ferrites or ferro-spinels. The spinel structure has an fcc cage of oxygen ions, and the metallic cations are distributed among tetrahedral (A) and octahedral (B) interstitial voids (sites). The X-ray diffraction (XRD) intensity of each Bragg plane is sensitive to the distribution of cations in the interstitial voids of the spinel lattice. This leads to a method of determining the distribution of cations in spinel oxides through XRD intensity analysis. A computer program for XRD intensity analysis was developed in the C language and tested in a real experimental situation by synthesizing the spinel ferrite materials Mg0.6Zn0.4AlxFe2-xO4 and characterizing them by X-ray diffractometry. The compositions Mg0.6Zn0.4AlxFe2-xO4 (x = 0.0 to 0.6) were prepared by the ceramic method and powder X-ray diffraction patterns were recorded. The authenticity of the program was then checked by comparing the theoretically calculated data from the computer simulation with the experimental ones. Further, the deduced cation distributions were used to fit the magnetization data using the localized canting of spins approach to explain the "recovery" of the collinear spin structure due to Al3+ substitution in Mg-Zn ferrites, a case of A-site magnetic dilution and non-collinear spin structure. Since the distribution of cations in spinel ferrites plays a very important role with regard to their electrical and magnetic properties, it is essential to determine the cation distribution in the spinel lattice.
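
The heart of such a program is the structure factor, whose squared magnitude is proportional to the Bragg intensity for a given cation distribution. A generic sketch follows; the two-site toy basis is hypothetical and far simpler than the full spinel unit cell handled by the authors' C program:

```python
import cmath

def structure_factor(hkl, atoms):
    """F(hkl) = sum_j occ_j * f_j * exp(2*pi*i*(h*x + k*y + l*z)).
    atoms: list of (fractional position (x, y, z), scattering factor f,
    site occupancy occ). |F|^2 is proportional to the Bragg intensity."""
    h, k, l = hkl
    return sum(occ * f * cmath.exp(2j * cmath.pi * (h * x + k * y + l * z))
               for (x, y, z), f, occ in atoms)

# Hypothetical toy basis: an "A-like" and a "B-like" site whose occupancies
# swap as cations redistribute; the (111) intensity tracks the distribution.
for inversion in (0.0, 0.5, 1.0):
    atoms = [((0.0, 0.0, 0.0), 24.0, 1.0 - inversion),  # e.g. Mg on A site
             ((0.0, 0.0, 0.0), 26.0, inversion),        # e.g. Fe on A site
             ((0.5, 0.5, 0.5), 26.0, 1.0 - inversion),  # Fe on B site
             ((0.5, 0.5, 0.5), 24.0, inversion)]        # Mg on B site
    F = structure_factor((1, 1, 1), atoms)
    print(inversion, round(abs(F) ** 2, 2))
```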

Keywords: Spinel ferrites, Localized canting of spins, X-ray diffraction, Programming in Borland C.

45 Toward Understanding and Testing Deep Learning Information Flow in Deep Learning-Based Android Apps

Authors: Jie Zhang, Qianyu Guo, Tieyi Zhang, Zhiyong Feng, Xiaohong Li

Abstract:

The widespread popularity of mobile devices and the development of artificial intelligence (AI) have led to the widespread adoption of deep learning (DL) in Android apps. Compared with traditional Android apps (traditional apps), deep learning based Android apps (DL-based apps) need to use more third-party application programming interfaces (APIs) to complete complex DL inference tasks. However, existing methods (e.g., FlowDroid) for detecting sensitive information leakage in Android apps cannot be directly used to analyze DL-based apps, as they have difficulty detecting third-party APIs. To solve this problem, we design DLtrace, a new static information flow analysis tool that can effectively recognize third-party APIs. With our proposed trace and detection algorithms, DLtrace can also efficiently detect privacy leaks caused by sensitive APIs in DL-based apps. Additionally, we propose two formal definitions to deal with the common polymorphism and anonymous inner-class problems in the Android static analyzer. Using DLtrace, we summarize the non-sequential characteristics of DL inference tasks in DL-based apps and the specific functionalities provided by DL models for such apps. We conducted an empirical assessment with DLtrace on 208 popular DL-based apps in the wild and found that 26.0% of the apps suffered from sensitive information leakage. Furthermore, DLtrace outperformed FlowDroid in detecting and identifying third-party APIs. The experimental results demonstrate that DLtrace extends FlowDroid in understanding DL-based apps and detecting security issues therein.

Keywords: Mobile computing, deep learning apps, sensitive information, static analysis.

44 A Survey of Field Programmable Gate Array-Based Convolutional Neural Network Accelerators

Authors: Wei Zhang

Abstract:

With the rapid development of deep learning, neural networks and deep learning algorithms play a significant role in various practical applications. Due to their high accuracy and good performance, Convolutional Neural Networks (CNNs) in particular have become a research hotspot in the past few years. However, the size of the networks is becoming increasingly large to meet the demands of practical applications, which poses a significant challenge for constructing high-performance implementations of deep learning neural networks. Meanwhile, many of these application scenarios also place strict requirements on the performance and power consumption of hardware devices. Therefore, it is particularly critical to choose a suitable computing platform for hardware acceleration of CNNs. This article surveys recent advances in Field Programmable Gate Array (FPGA)-based acceleration of CNNs. Various designs and implementations of the accelerator on FPGAs, across different devices and network models, are reviewed and compared against Graphics Processing Unit (GPU), Application-Specific Integrated Circuit (ASIC), and Digital Signal Processor (DSP) counterparts, and we present our own critical analysis and comments. Finally, we discuss different perspectives on these acceleration and optimization methods on FPGA platforms to explore the opportunities and challenges for future research, and we give an outlook on the future development of FPGA-based accelerators.

Keywords: Deep learning, field programmable gate array, FPGA, hardware acceleration, convolutional neural networks, CNN.

43 A New Multi-Target, Multi-Agent Search-and-Rescue Path Planning Approach

Authors: Jean Berger, Nassirou Lo, Martin Noel

Abstract:

Perfectly suited for natural or man-made emergency and disaster management situations such as floods, earthquakes, tornadoes, or tsunamis, multi-target search path planning for a team of rescue agents is known to be computationally hard, and most techniques developed so far fall short of successfully estimating the optimality gap. A novel mixed-integer linear programming (MIP) formulation is proposed to optimally solve the multi-target, multi-agent discrete search and rescue (SAR) path planning problem. Aimed at maximizing the cumulative probability of successful target detection, it captures anticipated feedback information associated with possible observation outcomes resulting from projected path execution, while modeling agent discrete actions over all possible moving directions. Problem modeling further takes advantage of a network representation to encompass decision variables, expedite compact constraint specification, and lead to substantial problem-solving speed-up. The proposed MIP approach uses the CPLEX optimization machinery, efficiently computing near-optimal solutions for practical-size problems, while giving a robust upper bound obtained from Lagrangian relaxation of the integrality constraints. Should a target be positively detected during plan execution, a new problem instance would simply be reformulated from the current state and then solved over the next decision cycle. A computational experiment shows the feasibility and the value of the proposed approach.
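
On a far smaller scale than the paper's CPLEX formulation, the flavor of such a model can be sketched for a single searcher on a grid: binary variables place the agent in one cell per period, linking constraints tie consecutive periods to grid moves, and the objective accumulates each cell's detection probability at most once. A toy sketch with PuLP/CBC (grid size, horizon, and probabilities are invented):

```python
import pulp

N, T = 4, 6                        # 4x4 grid, 6 decision periods
cells = range(N * N)
prob = {c: ((c * 7) % 10 + 1) / 40.0 for c in cells}   # invented detection probs

def neighbors(c):
    """Cells reachable in one move (4-connected), plus staying put."""
    i, j = divmod(c, N)
    nbrs = [c]
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= i + di < N and 0 <= j + dj < N:
            nbrs.append((i + di) * N + (j + dj))
    return nbrs

m = pulp.LpProblem("sar_path", pulp.LpMaximize)
x = pulp.LpVariable.dicts("x", (range(T), cells), cat="Binary")
y = pulp.LpVariable.dicts("y", cells, lowBound=0, upBound=1)  # cell searched?

m += pulp.lpSum(prob[c] * y[c] for c in cells)          # cumulative detection
m += x[0][0] == 1                                       # start in cell 0
for t in range(T):
    m += pulp.lpSum(x[t][c] for c in cells) == 1        # one cell per period
for t in range(T - 1):
    for c in cells:                                     # move along grid edges
        m += x[t + 1][c] <= pulp.lpSum(x[t][n] for n in neighbors(c))
for c in cells:                                         # reward each cell once
    m += y[c] <= pulp.lpSum(x[t][c] for t in range(T))

m.solve(pulp.PULP_CBC_CMD(msg=False))
path = [c for t in range(T) for c in cells if x[t][c].value() > 0.5]
print(path, pulp.value(m.objective))
```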

Keywords: Search path planning, search and rescue, multi-agent, mixed-integer linear programming, optimization.

42 PeliGRIFF: A Parallel DEM-DLM/FD Method for DNS of Particulate Flows with Collisions

Authors: Anthony Wachs, Guillaume Vinay, Gilles Ferrer, Jacques Kouakou, Calin Dan, Laurence Girolami

Abstract:

An original Direct Numerical Simulation (DNS) method to tackle the problem of particulate flows at moderate to high concentration and finite Reynolds number is presented. Our method is built on the framework established by Glowinski and his coworkers [1] in the sense that we use their Distributed Lagrange Multiplier/Fictitious Domain (DLM/FD) formulation and their operator-splitting idea, but differs in the treatment of particle collisions. The novelty of our contribution lies in replacing the simple artificial repulsive force based collision model usually employed in the literature by an efficient Discrete Element Method (DEM) granular solver. The use of our DEM solver enables us to consider particles of arbitrary shape (at least convex) and to account for actual contacts, in the sense that particles actually touch each other, in contrast with the simple repulsive force based collision model. We recently upgraded our serial code GRIFF [2] to full MPI capabilities. Our new code, PeliGRIFF, is developed under the framework of the full MPI open source platform PELICANS [3]. The new MPI capabilities of PeliGRIFF open new perspectives in the study of particulate flows and significantly increase the number of particles that can be considered in a full DNS approach: O(100000) in 2D and O(10000) in 3D. Results on the 2D/3D sedimentation/fluidization of isometric polygonal/polyhedral particles with collisions are presented.

Keywords: Particulate flow, distributed Lagrange multiplier/fictitious domain method, discrete element method, polygonal shape, sedimentation, distributed computing, MPI.

41 Parallel Pipelined Conjugate Gradient Algorithm on Heterogeneous Platforms

Authors: Sergey Kopysov, Nikita Nedozhogin, Leonid Tonkov

Abstract:

The article presents a parallel iterative solver for large sparse linear systems which can be used on a heterogeneous platform. Traditionally, the problem of solving linear systems does not scale well on clusters containing multiple Central Processing Units (multi-CPU clusters) or multiple Graphics Processing Units (multi-GPU clusters). For example, most attempts to implement the classical conjugate gradient method at best kept the run time unchanged as the problem was enlarged. The paper proposes the pipelined variant of the conjugate gradient method (PCG), a formulation that is potentially better suited for hybrid CPU/GPU computing since it requires only one synchronization point per iteration, instead of two for standard CG (Conjugate Gradient). The standard and pipelined CG methods need the vector entries generated by the current GPU and by the other GPUs for the matrix-vector product, so communication between GPUs becomes a major performance bottleneck on a multi-GPU cluster. The article presents an approach to minimize the communication between the parallel parts of the algorithm. Additionally, computation and communication can be overlapped to reduce the impact of data exchange. Using the pipelined version of the CG method with one synchronization point, the possibility of asynchronous calculations and communication, and load balancing between the CPU and GPU for solving large linear systems allows for scalability. The algorithm is implemented with the combined use of the MPI, OpenMP, and CUDA technologies. We show that almost optimum speed-up on 8 CPUs/2 GPUs may be reached (relative to a one-GPU execution). The parallelized solver achieves a speedup of up to 5.49 times on 16 NVIDIA Tesla GPUs, as compared to one GPU.
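
The single synchronization point comes from forming both dot products before the one matrix-vector product of the iteration, so that on a cluster the global reduction can overlap with the matvec. A serial NumPy sketch of the pipelined recurrences (unpreconditioned, no actual overlap, just the algebra):

```python
import numpy as np

def pipelined_cg(A, b, tol=1e-10, max_iter=1000):
    """Pipelined CG (Ghysels-Vanroose style recurrences, unpreconditioned).
    Both dot products are formed before the single matvec per iteration;
    on a cluster their global reduction would overlap with A @ w."""
    x = np.zeros_like(b)
    r = b - A @ x
    w = A @ r
    z = s = p = np.zeros_like(b)
    alpha = gamma_old = 1.0
    for k in range(max_iter):
        gamma = r @ r                    # the two fused dot products...
        delta = w @ r
        if gamma < tol ** 2:             # i.e. ||r|| < tol
            break
        q = A @ w                        # ...then the one matvec (overlappable)
        beta = 0.0 if k == 0 else gamma / gamma_old
        alpha = gamma / (delta - beta * gamma / alpha) if k else gamma / delta
        z = q + beta * z                 # z = A*s
        s = w + beta * s                 # s = A*p
        p = r + beta * p
        x = x + alpha * p
        r = r - alpha * s
        w = w - alpha * z
        gamma_old = gamma
    return x

rng = np.random.default_rng(1)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50 * np.eye(50)            # symmetric positive definite test matrix
b = rng.standard_normal(50)
x = pipelined_cg(A, b)
print(np.linalg.norm(A @ x - b))         # small residual
```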

Keywords: Conjugate Gradient, GPU, parallel programming, pipelined algorithm.

40 The Cloud Systems Used in Education: Properties and Overview

Authors: Agah Tuğrul Korucu, Handan Atun

Abstract:

The diversity and usefulness of the information used in education have increased with the development of technology. Web technologies have made enormous contributions especially to distance learning. Mobile systems, among the most widely used technologies in distance education, have made it much easier to access web technologies. No longer bound by space and time, individuals have the opportunity to access information on the web. In addition, the storage of educational information and resources, and access to them, is crucial for both students and teachers. Because of this importance, web technologies that provide easy access to information and resources have been developed and disseminated. Dynamic web technologies, introduced as new technologies that enable the sharing and reuse of information, resources, or applications via the Internet and turn websites into expandable platforms, are commonly known as Web 2.0 technologies. Cloud systems are one of these dynamic web technologies, defined by NIST as a model that provides access to the demanded information independent of time and space in appropriate circumstances. One of the most important advantages of cloud systems is meeting the requirements of users directly on the web regardless of hardware and software, and without dealing with installation. Hence, this study examines the use of cloud services in education and investigates the services provided by cloud computing. The survey method was used as the research method. The findings reveal that cloud systems are used for activities such as resource sharing, collaborative work, assignment submission and feedback, and project development in the field of education, and that cloud systems have many significant advantages in facilitating teaching activities and the interaction between teacher, student, and environment.

Keywords: Cloud systems, cloud systems in education, distance learning, e-learning, integration of information technologies, online learning environment.

39 Peculiarities of Internal Friction and Shear Modulus in 60Co γ-Rays Irradiated Monocrystalline SiGe Alloys

Authors: I. Kurashvili, G. Darsavelidze, T. Kimeridze, G. Chubinidze, I. Tabatadze

Abstract:

At present, a number of modern semiconductor devices based on SiGe alloys have been created in which the latest achievements of high technology are used. These devices might cause significant changes in networking, computing, and space technology. In the near future, new materials based on SiGe will be able to challenge the A3B5 and Si technologies and firmly establish themselves in medium-frequency electronics. Effective realization of these prospects requires solving the problems of predicting and controlling the structural state and the dynamic physical-mechanical properties of new SiGe materials. Under these circumstances, a complex investigation of structural defects and structure-sensitive dynamic mechanical characteristics of SiGe alloys under different external impacts (deformation, radiation, thermal cycling) acquires great importance. The temperature and amplitude dependences of internal friction (IF) and shear modulus of monocrystalline boron-doped Si1-xGex (x≤0.05) alloys grown by the Czochralski technique are studied in the initial and 60Co gamma-irradiated states. In the initial samples, a set of relaxation processes of dislocation origin and the accompanying modulus defects are revealed in the temperature interval 400-800 ⁰C. It is shown that after gamma irradiation the intensity of relaxation internal friction in the vicinity of 280 ⁰C increases, and simultaneously the activation parameters of the high-temperature relaxation processes clearly rise. It is proposed that these changes in the dynamic mechanical characteristics might be caused by a decrease of the dislocation mobility in the Cottrell atmosphere enriched by radiation defects.

Keywords: Gamma-irradiation, internal friction, shear modulus, SiGe alloys.

38 A Novel Approach to Allocate Channels Dynamically in Wireless Mesh Networks

Authors: Y. Harold Robinson, M. Rajaram

Abstract:

Wireless mesh networking is rapidly gaining popularity with a variety of users: from municipalities to enterprises, from telecom service providers to public safety and military organizations. This increasing popularity rests on two basic facts: ease of deployment and an increase in network capacity, expressed in bandwidth per footage; WMNs do not rely on any fixed infrastructure. Many efforts have been made to maximize the throughput of multi-channel multi-radio wireless mesh networks. Current approaches are based purely on either static or dynamic channel allocation. In this paper, we use a hybrid multi-channel multi-radio wireless mesh networking architecture, where both static and dynamic interfaces are built into the nodes. We propose the Dynamic Adaptive Channel Allocation protocol (DACA), which considers optimization of both throughput and delay in the channel allocation. Channel assignment is made codependent with the routing problem in the wireless mesh network and is based on the traffic flow on every link. Temporal and spatial relationships make it necessary to recompute the channel assignment every time the traffic pattern in the mesh network changes. We propose computing a path which captures the available path bandwidth, and an efficient routing protocol based on this new path which provides both static and dynamic links. The consistency property guarantees that each node makes an appropriate packet-forwarding decision and balances the control usage of the network, so that a data packet will traverse the right path.
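
DACA itself is not specified in the abstract; a minimal greedy rendering of the underlying idea, in which links with the heaviest flows choose channels first and each takes the channel least loaded among its interfering neighbors, might look like this (topology and flows are invented):

```python
def assign_channels(links, flows, interferes, channels=(1, 6, 11)):
    """links: iterable of link ids; flows: link -> traffic demand;
    interferes: link -> set of links within interference range.
    Heaviest links choose first, each taking the channel carrying the least
    interfering traffic, so the assignment follows the flow pattern."""
    assignment = {}
    for link in sorted(links, key=flows.get, reverse=True):
        def load(ch):
            return sum(flows[n] for n in interferes[link]
                       if assignment.get(n) == ch)
        assignment[link] = min(channels, key=load)
    return assignment

# Toy topology: three mutually interfering links and one distant link.
links = ["AB", "BC", "CD", "XY"]
flows = {"AB": 9.0, "BC": 5.0, "CD": 3.0, "XY": 1.0}
interferes = {"AB": {"BC", "CD"}, "BC": {"AB", "CD"},
              "CD": {"AB", "BC"}, "XY": set()}
print(assign_channels(links, flows, interferes))
# Re-running this whenever the flow pattern changes mirrors the
# "recompute on pattern change" behavior described above.
```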

Keywords: Wireless mesh network, spatial time division multiple access, hybrid topology, timeslot allocation.

37 Faster Pedestrian Recognition Using Deformable Part Models

Authors: Alessandro Preziosi, Antonio Prioletti, Luca Castangia

Abstract:

Deformable part models achieve high precision in pedestrian recognition, but all publicly available implementations are too slow for real-time applications. We implemented a deformable part model algorithm fast enough for real-time use by exploiting information about the camera position and orientation. This implementation is both faster and more precise than alternative DPM implementations. These results are obtained by computing convolutions in the frequency domain and using lookup tables to speed up feature computation. This approach is almost an order of magnitude faster than the reference DPM implementation, with no loss in precision. Knowing the position of the camera with respect to the horizon, it is also possible to prune many hypotheses based on their size and location. The range of acceptable sizes and positions is set by looking at the statistical distribution of bounding boxes in labelled images. With this approach there is no need to compute the entire feature pyramid: for example, higher-resolution features are only needed near the horizon. This results in an increase in mean average precision of 5% and an increase in speed by a factor of two. Furthermore, to reduce misdetections involving small pedestrians near the horizon, input images are supersampled near the horizon. Supersampling the image at 1.5 times the original scale results in an increase in precision of about 4%. The implementation was tested against the public KITTI dataset, obtaining an 8% improvement in mean average precision over the best performing DPM-based method. By allowing for a small loss in precision, computational time can easily be brought down to our target of 100 ms per image, reaching a solution that is faster and still more precise than all publicly available DPM implementations.
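
The frequency-domain speed-up rests on the convolution theorem: one large spatial convolution becomes an element-wise product of FFTs. A small NumPy/SciPy check of the equivalence and the speed gap (array sizes are arbitrary; the real implementation convolves HOG feature pyramids with part filters):

```python
import numpy as np
from scipy.signal import convolve2d, fftconvolve
from time import perf_counter

rng = np.random.default_rng(0)
image = rng.standard_normal((512, 512))   # stand-in for a feature map
kernel = rng.standard_normal((31, 31))    # stand-in for a DPM part filter

t0 = perf_counter()
direct = convolve2d(image, kernel, mode="valid")
t1 = perf_counter()
viafft = fftconvolve(image, kernel, mode="valid")
t2 = perf_counter()

print(np.allclose(direct, viafft))                    # same result
print(f"direct {t1 - t0:.3f}s  fft {t2 - t1:.3f}s")   # FFT is much faster
```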

Keywords: Autonomous vehicles, deformable part model, DPM, pedestrian recognition.

36 Theoretical and Analytical Approaches for Investigating the Relations between Sediment Transport and Channel Shape

Authors: Nidal Hadadin

Abstract:

This study investigated the effect of cross-sectional geometry on the sediment transport rate. The processes of sediment transport are generally associated with environmental management, such as pollution caused by suspended sediment forming in the channel network of a watershed and the preservation of physical habitats and native vegetation, and with engineering applications, such as the influence of sediment transport on hydraulic structures and flood control design. Many equations have been proposed for computing sediment transport, and the influence of many variables on sediment transport is understood; however, the effect of other variables still requires further research. For open channel flow, sediment transport capacity is recognized to be a function of the friction slope, flow velocity, grain size, grain roughness and form roughness, the hydraulic radius of the bed section, and the type and quantity of vegetation cover. The effect of the cross-sectional geometry of the channel on sediment transport is one of the variables that need additional investigation. The width-depth ratio (W/d) is a comparative indicator of the channel shape. The width is the total distance across the channel, and the depth is the mean depth of the channel. The mean depth is best calculated as the total cross-sectional area divided by the top width. Channels with high W/d ratios tend to be shallow and wide, while channels with low W/d ratios tend to be narrow and deep. In this study, the effect of the width-depth ratio on sediment transport was demonstrated theoretically, by inserting the shape factor into the sediment continuity equation, and analytically, by utilizing field data sets for the Yalobusha River. Using both approaches, it was found that as the width-depth ratio increases, the sediment transport decreases.
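
The geometric quantities involved are straightforward to compute; with the mean depth defined as cross-sectional area divided by top width, the W/d ratio follows directly. A tiny sketch with invented cross sections:

```python
def width_depth_ratio(area: float, top_width: float) -> float:
    """W/d with mean depth d = A / W, which reduces to W**2 / A."""
    mean_depth = area / top_width
    return top_width / mean_depth

# Two invented cross sections with the same area but different shapes:
print(width_depth_ratio(area=40.0, top_width=40.0))  # 40.0 -> shallow and wide
print(width_depth_ratio(area=40.0, top_width=10.0))  # 2.5  -> narrow and deep
```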

Keywords: Sediment transport, shape factor, hydraulic geometry, flow discharge, width-depth ratio.

35 Investigation of Combined use of MFCC and LPC Features in Speech Recognition Systems

Authors: K. R. Aida-Zade, C. Ardil, S. S. Rustamov

Abstract:

The paper states the automatic speech recognition problem and presents the tasks of speech recognition and its application fields. Taking Azerbaijani speech as the case, the principles of building a speech recognition system and the problems arising in the system are investigated. The algorithms for computing speech features, the main part of a speech recognition system, are analyzed. From this point of view, algorithms for determining the Mel Frequency Cepstral Coefficients (MFCC) and Linear Predictive Coding (LPC) coefficients expressing the basic speech features are developed. The combined use of MFCC and LPC cepstra in a speech recognition system is suggested to improve the reliability of the system. To this end, the recognition system is divided into MFCC-based and LPC-based recognition subsystems. The training and recognition processes are realized in both subsystems separately, and the recognition system reaches a decision when the results of the two subsystems agree. This decreases the error rate during recognition. The training and recognition processes are realized by artificial neural networks in the automatic speech recognition system, and the neural networks are trained by the conjugate gradient method. The paper investigates the problems observed, depending on the number of speech features, when training the neural networks of the MFCC- and LPC-based speech recognition subsystems. The variability of the results of neural networks trained from different initial points is analyzed. A methodology for the combined use of neural networks trained from different initial points is suggested to improve the reliability of the recognition system and increase the recognition quality, and the practical results obtained are shown.
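
As a sketch of the two feature sets feeding the parallel subsystems (librosa is used here purely as a convenient stand-in; the paper develops its own MFCC and LPC algorithms, and a synthetic tone stands in for recorded Azerbaijani speech):

```python
import numpy as np
import librosa

def combined_features(y: np.ndarray, sr: int):
    """Return the two per-utterance feature sets used by the parallel
    subsystems: a mean MFCC vector and LPC coefficients."""
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).mean(axis=1)
    lpc = librosa.lpc(y, order=12)          # all-pole model coefficients
    return mfcc, lpc

# Synthetic 1-second "utterance".
sr = 16000
t = np.arange(sr) / sr
y = 0.5 * np.sin(2 * np.pi * 220 * t).astype(np.float32)
mfcc, lpc = combined_features(y, sr)
print(mfcc.shape, lpc.shape)                # (13,) (13,)
# In the paper's scheme, a separate neural network is trained on each
# feature set and a recognition decision is accepted when both agree.
```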

Keywords: Speech recognition, cepstral analysis, voice activity detection algorithm, Mel Frequency Cepstral Coefficients, speech features, Cepstral Mean Subtraction, neural networks, Linear Predictive Coding.

34 Design and Development of On-Line, On-Site, In-Situ Induction Motor Performance Analyser

Authors: G. S. Ayyappan, Srinivas Kota, Jaffer R. C. Sheriff, C. Prakash Chandra Joshua

Abstract:

In the present scenario of energy crisis, energy conservation in electrical machines is very important for industry. In order to conserve energy, one needs to monitor the performance of an induction motor on-site and in-situ. The instruments available for this purpose are very few and very expensive. This paper deals with the design and development of an on-line, on-site, in-situ induction motor performance analyser. The system measures only a few electrical input parameters: input voltage, line current, power factor, frequency, powers, and motor shaft speed. These measured data are coupled with nameplate details to compute the operating efficiency of the induction motor. The system employs the method of computing motor losses with the help of equivalent circuit parameters. The equivalent circuit parameters of the motor concerned are estimated using the developed algorithm at any load condition and stored in the system memory. The developed instrument is reliable, accurate, compact, rugged, and cost-effective. This portable instrument can be used as a handy tool to study the performance of both slip-ring and cage induction motors. During the analysis, the data can be stored on an SD memory card, and one can perform various analyses such as load vs. efficiency and torque vs. speed characteristics. With the help of the developed instrument, one can operate the motor around its Best Operating Point (BOP). Continuous monitoring of the motor efficiency could lead to Life Cycle Assessment (LCA) of motors. LCA helps in making decisions on motor replacement, retention, or refurbishment.
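
The loss-computation idea can be illustrated with the standard per-phase equivalent circuit: from the circuit parameters and slip one obtains the input current, power factor, losses, and hence the efficiency. A minimal sketch with invented parameter values (the instrument estimates these parameters from measurements; here they are hard-coded, and core, friction, and windage losses are neglected):

```python
def motor_performance(V_phase, R1, X1, R2, X2, Xm, slip):
    """Per-phase induction motor analysis from equivalent-circuit values.
    V_phase: phase voltage; R1, X1: stator; R2, X2: rotor (referred);
    Xm: magnetizing reactance. Core/friction/windage losses are neglected."""
    Zr = complex(R2 / slip, X2)                   # rotor branch (referred)
    Zm = complex(0.0, Xm)                         # magnetizing branch
    Z = complex(R1, X1) + Zr * Zm / (Zr + Zm)     # total input impedance
    I1 = abs(V_phase / Z)                         # stator current (A)
    pf = Z.real / abs(Z)                          # input power factor
    P_in = 3 * V_phase * I1 * pf                  # 3-phase input power (W)
    P_scl = 3 * I1**2 * R1                        # stator copper loss
    P_ag = P_in - P_scl                           # air-gap power
    P_rcl = slip * P_ag                           # rotor copper loss
    P_mech = P_ag - P_rcl                         # developed mechanical power
    return I1, pf, P_mech / P_in                  # current, PF, efficiency

# Invented equivalent-circuit parameters for a small 400 V (line) motor.
I1, pf, eta = motor_performance(V_phase=230.0, R1=0.6, X1=1.2,
                                R2=0.5, X2=1.3, Xm=35.0, slip=0.04)
print(f"I1={I1:.1f} A  pf={pf:.2f}  efficiency={eta:.1%}")
```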

Keywords: Energy conservation, equivalent circuit parameters, induction motor efficiency, life cycle assessment, motor performance analysis.

33 Authentic Learning for Computer Network with Mobile Device-Based Hands-On Labware

Authors: Kai Qian, Ming Yang, Minzhe Guo, Prabir Bhattacharya, Lixin Tao

Abstract:

Computer network courses are essential parts of the college computer science curriculum, and hands-on networking experience is well recognized as an effective approach to helping students better understand network concepts, the layered architecture of network protocols, and the dynamics of networks. However, existing networking labs are usually server-based and relatively cumbersome, requiring a certain level of specialty and resources to set up and maintain the lab environment. Many universities and colleges lack the resources and infrastructure in this field and have difficulty providing students with hands-on practice labs. A new, affordable, and easily adoptable approach to networking labs is desirable to enhance network teaching and learning. In addition, current network labs fall short in providing hands-on practice for modern wireless and mobile network learning. With the prevalence of smart mobile devices, wireless and mobile networks are permeating various aspects of our information society. Emerging mobile technology provides computer science students with more authentic learning opportunities, especially in network learning. Mobile device-based hands-on labware can provide an excellent 'real world' authentic learning environment for computer networks, especially for wireless network study. In this paper, we present our mobile device-based hands-on labware (a series of lab modules) for computer network learning, guided by authentic learning principles to immerse students in a real-world, relevant learning environment. We have been using this labware in teaching computer network, mobile security, and wireless network classes. Student feedback shows that students learn more when they have a hands-on authentic learning experience.

Keywords: Mobile computing, android, network, labware.

32 A Secure Auditing Framework for Load Balancing in Cloud Environment

Authors: R. Geetha, T. Padmavathy

Abstract:

A security audit is an important aspect to be considered by a cloud service customer. It is basically a certification process to audit the controls that deliver the security requirements. Security audits are conducted by trained and qualified staff who belong to an independent auditing organization. Security audits must be carried out against a standard of security controls. A proper check must be made that the cloud user has proper reporting and logging facilities within the customer's system, hence ensuring an appropriate business and operational flow of data through the cloud service. We propose a cloud-based secure auditing framework, which enables a trusted authority to securely store secret data with semi-trusted cloud service providers, and to selectively share that secret data with a wide range of data recipients, in order to reduce the key management complexity for data owners and data recipients. Unlike previous cloud-based data frameworks, data owners upload their secret data into the cloud using a static and dynamic auditing scheme. A further refinement is that if any data recipient needs an individual record to download, the recipient sends the request to the authority, and the data owner holds the access control. If the owner must share the requested record with the data recipient, the recipient's request is acknowledged. Once the acknowledgement for the record is complete, the recipient downloads the record, and the record transfer time and date and the download time and date are monitored by the auditor. In addition to the deduplication concept, reduced cloud memory usage using dynamic document distribution is proposed.

Keywords: Cloud computing, cloud storage auditing, data integrity, key exposure.

31 Factors Affecting M-Government Deployment and Adoption

Authors: Saif Obaid Alkaabi, Nabil Ayad

Abstract:

Governments constantly seek to offer faster, more secure, efficient, and effective services for their citizens. Recent changes and developments in communication services and technologies, mainly due to the Internet, have led to immense improvements in the way governments of advanced countries carry out their internal operations. Therefore, advances in e-government services have been broadly adopted and used in various developed countries, as well as being adapted to developing countries. The implementation of these advances depends on the utilization of the most innovative structures of data techniques, mainly in web-dependent applications, to enhance the main functions of governments. These functions, in turn, have spread to mobile and wireless techniques, generating a new advanced direction called m-government. This paper discusses a selection of available m-government applications and several business modules and frameworks in various fields. Practically, the m-government models, techniques, and methods have become the improved version of e-government. M-government offers the potential for applications which will work better, providing citizens with services utilizing mobile communication and data models incorporating several government entities. Developing countries can benefit greatly from this innovation, because a large percentage of their population is young and can adapt to new technology, and because mobile computing devices are more affordable. The use of mobile transaction models encourages effective participation through the use of mobile portals by businesses, various organizations, and individual citizens. Although the application of m-government has great potential, it does have major limitations. These include: the implementation of wireless networks and related communications, the encouragement of mobile diffusion, the administration of complicated tasks concerning the protection of security (including the ability to offer privacy of information), and the management of legal issues concerning mobile applications and the utilization of services.

Keywords: E-government, m-government, system dependability, system security, trust.

30 Saving Energy through Scalable Architecture

Authors: John Lamb, Robert Epstein, Vasundhara L. Bhupathi, Sanjeev Kumar Marimekala

Abstract:

In this paper, we focus on the importance of scalable architecture for data centers, and for buildings in general, in helping an enterprise achieve environmental sustainability. A scalable architecture helps in many ways: it adapts to business and user requirements, and it promotes high-availability and disaster-recovery solutions that are cost-effective and low-maintenance. A scalable architecture also plays a vital role in the three core areas of sustainability (economic, environmental, and social), also known as the three pillars of a sustainability model. If the architecture is scalable, it has many advantages. For example, a scalable architecture helps businesses and industries adapt to changing technology, drive innovation, promote platform independence, and build resilience against natural disasters. Most importantly, a scalable architecture helps industries bring in cost-effective measures for energy consumption, reduce wastage, increase productivity, and enable a robust environment. It also helps in the reduction of carbon emissions through advanced monitoring and metering capabilities. Scalable architectures help in reducing waste by optimizing designs to utilize materials efficiently, minimize resources, and decrease carbon footprints by using low-impact, environmentally friendly materials. In this paper we also emphasize the importance of a cultural shift toward the reuse and recycling of natural resources for a balanced ecosystem and the maintenance of a circular economy. Also, since all of us are involved in the use of computers, much of the scalable architecture we have studied is related to data centers.

Keywords: Scalable Architectures, Sustainability, Application Design, Disruptive Technology, Machine Learning, Natural Language Processing, AI, Social Media Platform, Cloud Computing, Advanced Networking, Storage Devices, Advanced Monitoring, Metering Infrastructure, Climate change.
