Search results for: Successful Intelligence
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1062

162 Packet Forwarding with Multiprotocol Label Switching

Authors: R. N. Pise, S. A. Kulkarni, R. V. Pawar

Abstract:

MultiProtocol Label Switching (MPLS) is an emerging technology that aims to address many of the existing issues associated with packet forwarding in today's internetworking environment. It provides a method of forwarding packets at a high rate of speed by combining the speed and performance of Layer 2 with the scalability and IP intelligence of Layer 3. In a traditional IP (Internet Protocol) routing network, a router analyzes the destination IP address contained in the packet header. The router independently determines the next hop for the packet using the destination IP address and the interior gateway protocol. This process is repeated at each hop to deliver the packet to its final destination. In contrast, in the MPLS forwarding paradigm, routers on the edge of the network (label edge routers) attach labels to packets based on the Forwarding Equivalence Class (FEC). Packets are then forwarded through the MPLS domain, based on their associated FECs, by label swapping performed by routers in the core of the network called label switch routers. Simply swapping the label, instead of referencing the packet's IP header in the routing table at each hop, is a more efficient way of forwarding packets, which in turn allows traffic to be forwarded at tremendous speeds and gives granular control over the path taken by a packet. This paper deals with the MPLS forwarding mechanism, the implementation of the MPLS datapath, and test results comparing the performance of MPLS and IP routing. The discussion focuses primarily on MPLS IP packet networks, by far the most common application of MPLS today.
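As a toy illustration of label swapping, the sketch below models a label switch router's incoming label map as a lookup table; the labels, interfaces and prefixes are invented for illustration, not taken from the paper.

```python
# Illustrative incoming label map (ILM); each entry plays the role of a
# next hop label forwarding entry (NHLFE). All values are invented.
ILM = {
    100: (200, "eth1"),   # incoming label -> (outgoing label, out interface)
    101: (201, "eth2"),
}

def lsr_forward(in_label):
    """One table lookup swaps the label and picks the outgoing interface,
    without ever parsing the IP header inside the packet."""
    out_label, out_iface = ILM[in_label]
    return out_label, out_iface

# At the ingress LER, packets are first classified into an FEC
# (e.g., by destination prefix) and the initial label is pushed:
FEC_TO_LABEL = {"10.0.0.0/8": 100, "192.168.0.0/16": 101}

print(lsr_forward(100))   # -> (200, 'eth1')
```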

Keywords: Forwarding equivalence class, incoming label map, label, next hop label forwarding entry.

161 Dynamic Threshold Adjustment Approach For Neural Networks

Authors: Hamza A. Ali, Waleed A. J. Rasheed

Abstract:

The use of neural networks for recognition applications is generally constrained by the inflexibility of their parameters after the training phase: no adaptation is accommodated for input variations that influence the network parameters. Attempts were made in this work to design a neural network that includes an additional mechanism adjusting the threshold values according to input pattern variations. The new approach is based on splitting the whole network into two subnets: a main traditional net and a supportive net. The first deals with the required output of trained patterns with predefined settings, while the second tolerates output generation dynamically, with tuning capability for any newly applied input. This tuning comes in the form of an adjustment to the threshold values. Two levels of supportive net were studied: one implements an extended additional layer with an adjustable neuronal threshold setting mechanism, while the second implements an auxiliary net with a traditional architecture that performs dynamic adjustment to the threshold value of the main net, which is constructed in a dual-layer architecture. Experimental results and analysis of the proposed designs were quite satisfactory. The supportive layer approach achieved over 90% recognition rate, while the multiple network technique showed a more effective and acceptable level of recognition. However, this is achieved at the price of network complexity and computation time. Recognition generalization may also be improved by combining the innate network structures with intelligence abilities through further advanced learning phases.
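A minimal sketch of the threshold-tuning idea, assuming a simple additive update rule; the paper's exact mechanism is not reproduced here.

```python
import numpy as np

def fire(x, w, threshold):
    """Binary neuron layer: fires where the weighted sum reaches the threshold."""
    return (w @ x >= threshold).astype(float)

def tune_threshold(x, w, threshold, target, lr=0.1):
    """Nudge each neuron's threshold so the output moves toward the target
    for a newly applied input pattern, leaving the trained weights untouched."""
    y = fire(x, w, threshold)
    # Fired but should not have -> raise the threshold; the reverse lowers it.
    return threshold + lr * (y - target)

w = np.array([[0.5, -0.2], [0.1, 0.9]])
x = np.array([1.0, 1.0])
thr = np.array([0.2, 0.2])
for _ in range(5):
    thr = tune_threshold(x, w, thr, target=np.array([0.0, 1.0]))
print(fire(x, w, thr))   # -> [0. 1.]
```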

Keywords: Classification, Recognition, Neural Networks, Pattern Recognition, Generalization.

160 Dynamic TDMA Slot Reservation Protocol for QoS Provisioning in Cognitive Radio Ad Hoc Networks

Authors: S. M. Kamruzzaman

Abstract:

In this paper, we propose a dynamic TDMA slot reservation (DTSR) protocol for cognitive radio ad hoc networks. Quality of Service (QoS) guarantee plays a critically important role in such networks. We consider the problem of providing a QoS guarantee to users while maintaining the most efficient use of scarce bandwidth resources. According to one-hop neighboring information and the bandwidth requirement, our proposed protocol dynamically changes the frame length and the transmission schedule. A dynamic frame length expansion and shrinking scheme that controls the excessive increase of unassigned slots has been proposed. This method efficiently utilizes the channel bandwidth by assigning unused slots to new neighboring nodes and increasing the frame length when the number of slots in the frame is insufficient to support the neighboring nodes. It also shrinks the frame length when half of the slots in the frame of a node are empty. An efficient slot reservation protocol not only guarantees successful data transmissions without collisions but also enhances channel spatial reuse to maximize the system throughput. Our proposed scheme, which provides both QoS guarantee and efficient resource utilization, can be employed to optimize the channel spatial reuse and maximize the system throughput. Extensive simulation results show that the proposed mechanism achieves desirable performance in multichannel multi-rate cognitive radio ad hoc networks.
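A minimal sketch of the expansion/shrinking rule described above; the doubling and re-packing details are illustrative assumptions, not the protocol's exact rules.

```python
class DTSRFrame:
    """Toy TDMA frame that grows when slots run out for the one-hop
    neighborhood and shrinks when half of its slots are empty."""

    def __init__(self, length=8):
        self.length = length
        self.assigned = {}          # slot index -> node id

    def reserve(self, node):
        free = [s for s in range(self.length) if s not in self.assigned]
        if not free:                # expand: no slot left for a new neighbor
            self.length *= 2
            free = [s for s in range(self.length) if s not in self.assigned]
        self.assigned[free[0]] = node
        return free[0]

    def release(self, node):
        self.assigned = {s: n for s, n in self.assigned.items() if n != node}
        if len(self.assigned) <= self.length // 2 and self.length > 1:
            self.length = max(1, self.length // 2)
            # Re-pack surviving reservations into the shorter frame.
            self.assigned = {i: n for i, n in enumerate(self.assigned.values())}

frame = DTSRFrame(length=2)
for node in ("a", "b", "c"):
    frame.reserve(node)             # third reservation triggers an expansion
print(frame.length, frame.assigned)
```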

Keywords: TDMA, cognitive radio, ad hoc networks, QoS guarantee, dynamic frame length.

159 Solubility of Organics in Water and Silicon Oil: A Comparative Study

Authors: Edison Muzenda

Abstract:

The aim of this study was to compare the solubility of selected volatile organic compounds (VOCs) in water and silicon oil using the simple static headspace method. The experimental design allowed equilibrium achievement within 30–60 minutes. Infinite dilution activity coefficients and Henry's law constants for various organics representing esters, ketones, alkanes, aromatics, cycloalkanes and amines were measured at 303 K. The measurements were reproducible, with a relative standard deviation and coefficient of variation of 1.3×10⁻³ and 1.3, respectively. The statically determined activity coefficients obtained using shaker flasks were reasonably comparable to those obtained using the gas–liquid chromatographic technique and those predicted using group contribution methods, mainly UNIFAC. Silicon oil, chemically known as polydimethylsiloxane, was found to be a better absorbent for VOCs than water, which quickly becomes saturated. For example, the infinite dilution mole-fraction-based activity coefficient of hexane is 0.503 in silicon oil and 277,000 in water; silicon oil is thus superior by a factor of 550,696. Henry's law constants and activity coefficients at infinite dilution play a significant role in the design of scrubbers for the abatement of volatile organic compounds from contaminated air streams. This paper presents the phase equilibrium of volatile organic compounds in very dilute aqueous and polymeric solutions, indicating the movement and fate of chemicals in air and solvent. The successful comparison of the results obtained here with those obtained using other methods by the same authors and in the literature means that the results obtained here are reliable.
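The quoted hexane figures can be checked directly; the superiority factor is simply the ratio of the two infinite-dilution activity coefficients.

```python
# Reproducing the hexane comparison quoted above: the ratio of the two
# infinite-dilution activity coefficients gives the superiority factor of
# silicon oil over water as an absorbent.
gamma_inf_silicon_oil = 0.503
gamma_inf_water = 277_000

superiority = gamma_inf_water / gamma_inf_silicon_oil
print(f"{superiority:,.0f}")   # -> 550,696
```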

Keywords: Abatement, absorbent, activity coefficients, equilibrium, Henry's law constant.

158 Endeavor in Management Process by Executive Dashboards: The Case of the Financial Directorship in Brazilian Navy

Authors: R. S. Quintal, J. L. Tesch Santos, M. D. Davis, E. C. de Santana, M. de F. Bandeira dos Santos

Abstract:

The objective is to identify the contributions from the introduction of a computerized system within the Accounting Department of the Brazilian Navy Financial Directorship and its possible effects on the budgetary and financial execution of the Brazilian Navy. The relevance lies in the fact that the management process is responsible for the continuous improvement of organizational performance through higher levels of quality in its activities. Improvements in organizational processes have direct effects on cost, quality, reliability, flexibility and speed. The research method is the case study, chosen, among other reasons, for the greater flexibility it offers in studying processes related to a computerized system. The sources of evidence were the literature, documents and direct observation. Direct observation was carried out by monitoring the implementation of the computerized system in the Division of Management Analysis. The main findings of the study point to the fact that the computerized system may contribute significantly to the standardization of information. There was improvement of internal processes in the Division of Management Analysis, which made possible the consolidation of a standard for management and performance analysis that contributes to global homogeneity in the treatment of information essential to the decision-making process. This study is limited in that its results apply exclusively to the case studied and cannot be generalized to other government bodies.

Keywords: Process Management, Management Control, Business Intelligence.

157 An Evaluation of Drivers in Implementing Sustainable Manufacturing in India: Using DEMATEL Approach

Authors: D. Garg, S. Luthra, A. Haleem

Abstract:

Due to growing concern about environmental and social consequences throughout the world, a need has been felt to incorporate sustainability concepts into conventional manufacturing. This paper is an attempt to identify and evaluate drivers of implementing sustainable manufacturing in the Indian context. Nine possible drivers for the successful implementation of sustainable manufacturing have been identified from an extensive review. Further, the Decision Making Trial and Evaluation Laboratory (DEMATEL) approach has been utilized to evaluate and categorize these identified drivers into cause and effect groups. Five drivers (Societal Pressure and Public Concerns; Regulations and Government Policies; Top Management Involvement, Commitment and Support; Effective Strategies and Activities towards Socially Responsible Manufacturing; and Market Trends) have been categorized into the cause group, and four drivers (Holistic View in Manufacturing Systems; Supplier Participation; Building Sustainable Culture in Organization; and Corporate Image and Benefits) have been categorized into the effect group. "Societal Pressure and Public Concerns" was found to be the most critical driver and "Corporate Image and Benefits" the least critical, or most easily influenced, driver for implementing sustainable manufacturing in the Indian context. This paper may help practitioners better understand these drivers and their priorities for the effective implementation of sustainable manufacturing.
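For readers unfamiliar with DEMATEL, the sketch below runs the standard computation on an invented 4×4 direct-relation matrix; the paper itself uses nine drivers.

```python
import numpy as np

# Invented 0-4 influence scores between four factors (rows influence columns).
A = np.array([[0, 3, 2, 1],
              [1, 0, 3, 2],
              [0, 1, 0, 3],
              [1, 0, 1, 0]], dtype=float)

s = max(A.sum(axis=1).max(), A.sum(axis=0).max())
X = A / s                                   # normalized direct-relation matrix
T = X @ np.linalg.inv(np.eye(len(A)) - X)   # total-relation matrix T = X(I - X)^-1

R = T.sum(axis=1)                           # influence given by each factor
C = T.sum(axis=0)                           # influence received by each factor
prominence, relation = R + C, R - C
# relation > 0 -> cause group; relation < 0 -> effect group
print(np.where(relation > 0, "cause", "effect"))
```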

Keywords: Drivers, Decision Making Trial and Evaluation Laboratory (DEMATEL), India, Sustainable Manufacturing (SM).

156 Software Vulnerability Markets: Discoverers and Buyers

Authors: Abdullah M. Algarni, Yashwant K. Malaiya

Abstract:

Some of the key aspects of vulnerability—discovery, dissemination, and disclosure—have received some attention recently. However, the role of interaction among the vulnerability discoverers and vulnerability acquirers has not yet been adequately addressed. Our study suggests that a major percentage of discoverers, a majority in some cases, are unaffiliated with the software developers and thus are free to disseminate the vulnerabilities they discover in any way they like. As a result, multiple vulnerability markets have emerged. In some of these markets, the exchange is regulated, but in others, there is little or no regulation. In recent vulnerability discovery literature, the vulnerability discoverers have remained anonymous individuals. Although there has been an attempt to model the level of their efforts, information regarding their identities, modes of operation, and what they are doing with the discovered vulnerabilities has not been explored.

Reports of buying and selling of vulnerabilities are now appearing in the press; however, the existence of such markets requires validation, and the nature of these markets needs to be analyzed. To address this need, we have attempted to collect detailed information. We have identified the most prolific vulnerability discoverers throughout the past decade and examined their motivation and methods. A large percentage of these discoverers are located in Eastern and Western Europe and in the Far East. We have contacted several of them in order to collect firsthand information regarding their techniques, motivations, and involvement in the vulnerability markets. We examine why many of the discoverers appear to retire after a highly successful vulnerability-finding career. The paper identifies the actual vulnerability markets, rather than the hypothetical ideal markets that are often examined. The emergence of worldwide government agencies as vulnerability buyers has significant implications. We discuss potential factors that can impact the risk to society and the need for detailed exploration.

Keywords: Risk management, software security, vulnerability discoverers, vulnerability markets.

155 The Characteristics of Transformation of Institutional Changes and Georgia

Authors: Nazira Kakulia

Abstract:

The analysis of the transformation of institutional changes outlines two important characteristics: the speed of the changes and their sequence. Successful transformation must be carried out in three different stages. In the first stage, macroeconomic stabilization must be achieved with the help of fiscal and monetary tools. A two-tier banking system should be established, and the active functions of the central bank should be replaced by passive ones (reserve requirements and the refinancing rate), together with growing involvement of the private sector. Fiscal policy here means the creation of a tax system which must replace previously existing direct state revenues; the share of subsidies in state expenses must also be reduced. The second stage begins after reaching macroeconomic stabilization, at the time of change of the formal institutions which must stimulate private business. Corporate legislation creates a competitive environment in the market, and the privatization of state companies takes place. Bankruptcy and contract law are created. The third stage is the most extended one and means the formation of all the state structures necessary for the further proper functioning of a market economy. These three stages, concerning the cycle period of political and social transformation and the hierarchy of changes, can also be grouped by a different methodology: in the first and shortest stage, the transfer of power takes place; in the second stage, institutions corresponding to the new goals are created; the last phase of transformation is extended in time and includes the infrastructural, socio-cultural and socio-structural changes. The main goal of this research is to explore and identify the features of such models.

Keywords: Competitive environment, fiscal policy, macroeconomic stabilization.

154 Business Process Management and Organizational Culture in Big Companies: Cross-Country Analysis

Authors: Dalia Suša Vugec

Abstract:

Business process management (BPM) is a widely used approach focused on designing, mapping, changing, managing and analyzing the business processes of an organization, which eventually leads to better performance and many other benefits. Since every organization strives to improve its performance in order to be sustainable and to remain competitive on the market in the long term, numerous organizations are nowadays adopting and implementing BPM. However, not all organizations are equally successful in that. One way of measuring BPM success is by measuring its maturity, calculating the Process Performance Index (PPI) from ten BPM success factors. Still, although BPM is a holistic concept, organizational culture is not taken into consideration in calculating the PPI. Hence, the aim of this paper is twofold: first, to explore and analyze the current state of the BPM success factors within big organizations from Slovenia, Croatia, and Austria; and second, to analyze the structure of organizational culture within the observed companies, focusing on its link with the BPM success factors. The presented study is based on the results of a questionnaire conducted as part of the PROSPER project (IP-2014-09-3729), financed by the Croatian Science Foundation. The results of the questionnaire reveal differences in the achieved levels of the BPM success factors, and therefore in overall BPM maturity, between the three observed countries. Moreover, the structure of organizational culture also differs across the three countries. This paper discusses the revealed differences between countries as well as the link between organizational culture and the BPM success factors.
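A minimal sketch of how a PPI-style score could be computed from ten factor ratings; the 1–5 scale, the factor names and the plain summation are assumptions for illustration, not the exact instrument used in the study.

```python
# Ten hypothetical BPM success factor ratings on an assumed 1-5 scale.
factors = {
    "alignment_with_strategy": 4, "holistic_approach": 3, "process_awareness": 4,
    "portfolio_of_initiatives": 2, "improvement_methodology": 3,
    "process_metrics": 3, "customer_focus": 4, "process_management": 3,
    "information_systems": 4, "change_management": 2,
}
assert len(factors) == 10

ppi = sum(factors.values())        # out of a maximum of 50 under this scale
print(f"PPI = {ppi}/50")
```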

Keywords: Business process management, BPM maturity, BPM success factors, organizational culture, process performance index.

153 An Application of Path Planning Algorithms for Autonomous Inspection of Buried Pipes with Swarm Robots

Authors: Richard Molyneux, Christopher Parrott, Kirill Horoshenkov

Abstract:

This paper aims to demonstrate how various algorithms can be implemented within swarms of autonomous robots to provide continuous inspection within underground pipeline networks. Current methods of fault detection within pipes are costly, time-consuming and inefficient. As such, solutions tend toward a more reactive approach, repairing faults, as opposed to proactively seeking leaks and blockages. The paper presents an efficient inspection method, showing that autonomous swarm robotics is a viable way of monitoring underground infrastructure. Tailored adaptations of various Vehicle Routing Problems (VRP) and path-planning algorithms provide a customised inspection procedure for complicated networks of underground pipes. The performance of multiple algorithms is compared to determine their effectiveness and feasibility. Notable inspirations come from ant colonies and stigmergy, graph theory, the k-Chinese Postman Problem (k-CPP) and traffic theory. Unlike most swarm behaviours, which rely on fast communication between agents, underground pipe networks are a highly challenging communication environment with extremely limited communication ranges. This is due to the extreme variability in the pipe conditions and the relatively high attenuation of the acoustic and radio waves with which robots would usually communicate. This paper illustrates how to optimise the inspection process and how to increase the frequency with which the robots pass each other, without compromising the routes they are able to take to cover the whole network.
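A minimal sketch of the stigmergy idea on a toy pipe graph: each robot walks toward the least recently inspected adjacent pipe, leaving a timestamp behind. This is an illustrative assumption, not the paper's tailored VRP/k-CPP algorithms.

```python
# Toy pipe network as an adjacency list; edges are the pipes to inspect.
pipes = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B", "D"], "D": ["C"]}
last_visit = {}          # frozenset{u, v} -> step at which the pipe was inspected

def step(robot_at, t):
    # Prefer the adjacent pipe with the oldest inspection timestamp
    # (the timestamps act as stigmergic markers left in the environment).
    nxt = min(pipes[robot_at],
              key=lambda n: last_visit.get(frozenset((robot_at, n)), -1))
    last_visit[frozenset((robot_at, nxt))] = t
    return nxt

pos = "A"
for t in range(10):
    pos = step(pos, t)
print(last_visit)        # every pipe ends up visited, oldest-first
```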

Keywords: Autonomous inspection, buried pipes, stigmergy, swarm intelligence, vehicle routing problem.

152 A Static Android Malware Detection Based on Actual Used Permissions Combination and API Calls

Authors: Xiaoqing Wang, Junfeng Wang, Xiaolan Zhu

Abstract:

The Android operating system has been recognized by most application developers because of its open-source nature and compatibility, which greatly enrich the categories of applications. However, it has become the target of malware attackers due to the lack of strict security supervision mechanisms, which leads to the rapid growth of malware and thus brings serious safety hazards to users. Therefore, it is critical to detect Android malware effectively. Generally, the permissions declared in AndroidManifest.xml reflect the function and behavior of an application to a large extent. Since the current Android system places no restrictions on the number of permissions that an application can request, developers tend to apply for more permissions than are actually needed in order to ensure the successful running of the application, which results in permission abuse. However, some traditional detection methods only consider the requested permissions and ignore whether they are actually used, which leads to incorrect identification of some malware. Therefore, a machine learning detection method based on the actually used permission combinations and API calls is put forward in this paper. Several experiments are conducted to evaluate our methodology. The results show that it can detect unknown malware effectively, with a higher true positive rate and accuracy, while maintaining a low false positive rate. The AdaBoostM1 (J48) classification algorithm with information gain feature selection achieved the best detection result, with an accuracy of 99.8%, a true positive rate of 99.6% and a false positive rate of 0.
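A minimal sketch of the pipeline's shape using scikit-learn as a stand-in for the authors' tooling (the paper used AdaBoostM1 with J48 in Weka); the data and parameters are invented.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 50))   # toy used-permission/API-call matrix
y = rng.integers(0, 2, size=200)         # toy labels: 1 = malware

clf = make_pipeline(
    SelectKBest(mutual_info_classif, k=20),   # information-gain-style ranking
    AdaBoostClassifier(DecisionTreeClassifier(max_depth=3), n_estimators=50),
)
clf.fit(X, y)
print(clf.score(X, y))
```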

Keywords: Android, permissions combination, API calls, machine learning.

151 Combination of Geological, Geophysical and Reservoir Engineering Analyses in Field Development: A Case Study

Authors: Atif Zafar, Fan Haijun

Abstract:

A sequence of different reservoir engineering methods and tools for reservoir characterization and field development is presented in this paper, using real data from the Jin Gas Field of the L-Basin of Pakistan. The basic concept behind this work is to highlight the importance of well test analysis in a broader sense (i.e., reservoir characterization and field development), rather than merely determining permeability and skin parameters. Normally, for reservoir characterization we rely on well test analysis to some extent, but for field development planning, well test analysis has become a forgotten tool, specifically for locating new development wells. This paper describes the successful implementation of well test analysis in the Jin Gas Field, where the main uncertainties were identified during the initial stage of field development, when the location of a new development well was marked only on the basis of G&G (geologic and geophysical) data. The seismic interpretation could not detect one of the boundaries (fault, sub-seismic fault, or heterogeneity) near the main and only producing well of the Jin Gas Field, whereas the results of the model from the well test analysis played a crucial role in proposing the location of the second well of the newly discovered field. The results from different methods of well test analysis of the Jin Gas Field are also integrated with, and supported by, other reservoir engineering tools, i.e., the material balance method and the volumetric method. In this way, a comprehensive workflow is obtained for integrating well test analyses with geological and geophysical analyses for reservoir characterization and field development. On this basis, it was shown that the proposed location of the new development well was not justified and that the well should be located elsewhere, anywhere except in the southern direction.
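As one illustration of the material balance method mentioned above, a depletion-drive gas reservoir obeys p/Z = (p_i/Z_i)(1 - G_p/G), so the x-intercept of a p/Z plot estimates the gas initially in place. The pressure data below are invented for illustration.

```python
import numpy as np

Gp = np.array([0.0, 5.0, 10.0, 15.0])              # cumulative production, Bscf
p_over_z = np.array([4000., 3600., 3200., 2800.])  # p/Z, psia

slope, intercept = np.polyfit(Gp, p_over_z, 1)     # fit the straight line
G = -intercept / slope                             # p/Z = 0 at Gp = G
print(f"OGIP estimate: {G:.1f} Bscf")              # -> 50.0 Bscf
```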

Keywords: Field development, reservoir characterization, reservoir engineering, well test analysis.

150 A Practice of Zero Trust Architecture in Financial Transactions

Authors: L. Wang, Y. Chen, T. Wu, S. Hu

Abstract:

In order to enhance the security of critical financial infrastructure, this study carries out a transformation of the architecture of a financial trading terminal to a zero trust architecture (ZTA), constructs an active defense system for cybersecurity, improves the security level of trading services in the Internet environment, enhances the ability to prevent network attacks and unknown risks, and reduces the industry and business risks brought about by cybersecurity threats. This study introduces the Software Defined Perimeter (SDP) technology of ZTA and adapts and applies it to a financial trading terminal to achieve security optimization and fine-grained business grading control. The upgraded architecture of the trading terminal moves security protection forward to the user access layer, replaces the VPN to optimize remote access, and significantly improves the security protection capability of Internet transactions. The study achieves: 1. deep integration with the access control architecture of the transaction system; 2. no impact on the performance of terminals and gateways, and no perception of application system upgrades; 3. customized checklist and policy configuration; 4. introduction of industry-leading security technologies such as single-packet authorization (SPA) and secondary authentication. This study carries out a successful application of ZTA in the field of financial trading and provides transformation ideas for other similar systems, while improving the security level of financial transaction services in the Internet environment.
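A minimal sketch of the single-packet authorization idea, assuming an HMAC-signed UDP payload; key handling, replay protection and the wire format of any real SDP product are simplified away here.

```python
import hmac, hashlib, os, time

SECRET = os.urandom(32)    # provisioned out of band in a real deployment

def make_spa_packet(client_id):
    """Build one self-authenticating datagram: id|timestamp|HMAC tag."""
    body = client_id + b"|" + str(int(time.time())).encode()
    tag = hmac.new(SECRET, body, hashlib.sha256).digest()
    return body + b"|" + tag

def verify_spa_packet(packet, max_age=30):
    """Gateway side: silently drop anything that fails the check."""
    body, _, tag = packet.rpartition(b"|")
    expected = hmac.new(SECRET, body, hashlib.sha256).digest()
    fresh = time.time() - int(body.rsplit(b"|", 1)[1]) <= max_age
    return hmac.compare_digest(tag, expected) and fresh

print(verify_spa_packet(make_spa_packet(b"terminal-01")))   # -> True
```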

Keywords: Zero trust, trading terminal, architecture, network security, cybersecurity.

149 Evolutionary Approach for Automated Discovery of Censored Production Rules

Authors: Kamal K. Bharadwaj, Basheer M. Al-Maqaleh

Abstract:

In the recent past, there has been an increasing interest in applying evolutionary methods to Knowledge Discovery in Databases (KDD), and a number of successful applications of Genetic Algorithms (GA) and Genetic Programming (GP) to KDD have been demonstrated. The most predominant representation of the discovered knowledge is the standard Production Rule (PR) of the form If P Then D. The PRs, however, are unable to handle exceptions and do not exhibit variable precision. Censored Production Rules (CPRs), an extension of PRs, were proposed by Michalski & Winston; they exhibit variable precision and support an efficient mechanism for handling exceptions. A CPR is an augmented production rule of the form If P Then D Unless C, where C (the censor) is an exception to the rule. Such rules are employed in situations in which the conditional statement 'If P Then D' holds frequently and the assertion C holds rarely. By using a rule of this type we are free to ignore the exception conditions when the resources needed to establish their presence are tight or there is simply no information available as to whether they hold or not. Thus, the 'If P Then D' part of the CPR expresses important information, while the Unless C part acts only as a switch that changes the polarity of D to ~D. This paper presents a classification algorithm, based on an evolutionary approach, that discovers comprehensible rules with exceptions in the form of CPRs. The proposed approach has a flexible chromosome encoding, where each chromosome corresponds to a CPR. Appropriate genetic operators are suggested, and a fitness function is proposed that incorporates the basic constraints on CPRs. Experimental results are presented to demonstrate the performance of the proposed algorithm.
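A minimal sketch of evaluating a CPR of the form 'If P Then D Unless C'; representing each rule as a (P, D, C) triple is an illustrative assumption, not the paper's encoding.

```python
def eval_cpr(p_holds, c_holds):
    """Return the polarity of D, or None when the rule does not fire.
    Pass c_holds=None when resources to establish the censor C are tight."""
    if not p_holds:
        return None            # the rule does not apply
    if c_holds is None:        # censor unknown: fall back on "If P Then D"
        return True            # conclude D
    return not c_holds         # C holds rarely; when it does, D flips to ~D

# Censor unknown -> D; censor established -> ~D.
print(eval_cpr(True, None), eval_cpr(True, True))   # -> True False
```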

Keywords: Censored Production Rule, Data Mining, Machine Learning, Evolutionary Algorithms.

148 Memristor-A Promising Candidate for Neural Circuits in Neuromorphic Computing Systems

Authors: Juhi Faridi, Mohd. Ajmal Kafeel

Abstract:

The advancements in the field of Artificial Intelligence (AI) and technology have led to the evolution of an intelligent era. Neural networks, having computational power and learning ability similar to the brain's, are one of the key AI technologies. A neuromorphic computing system (NCS) consists of the synaptic device, the neuronal circuit, and the neuromorphic architecture. Memristors are a promising candidate for neuromorphic computing systems, but the conductance behavior of the synaptic or neuronal memristor needs to be studied thoroughly in order to connect the device physics to neuroscience and computer science. Furthermore, more simulation work is needed to exploit existing device properties and to guide the development of future devices for different performance requirements. This work aims to provide an insight into building neuronal circuits using memristors to achieve a memristor-based NCS. Here we shed light on the research conducted in the field of memristors for building analog and digital circuits, in order to motivate research in the field of NCS through memristor-based neural circuits for advanced AI applications. This review is a step in that direction: we describe the key findings about memristors and their analog and digital circuits implemented over the years, which can be further utilized in implementing the neuronal circuits of an NCS. This work aims to help electronic circuit designers understand how research on memristors has progressed and how these findings can be used in implementing the neuronal circuits needed for recent progress in NCSs.
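A minimal sketch of the HP linear ion drift memristor model commonly used in such simulation work (a standard textbook model, not one taken from this paper):

```python
import numpy as np

# Linear ion drift model: the state w moves with the current, and the
# memristance interpolates between R_on and R_off. SI units throughout.
R_on, R_off, D, mu_v = 100.0, 16_000.0, 10e-9, 1e-14
dt, w = 1e-5, 0.1 * D

t = np.arange(0, 0.02, dt)
v = np.sin(2 * np.pi * 50 * t)       # 50 Hz sinusoidal drive
i = np.zeros_like(t)
for k, vk in enumerate(v):
    M = R_on * (w / D) + R_off * (1 - w / D)        # instantaneous memristance
    i[k] = vk / M
    w = np.clip(w + mu_v * (R_on / D) * i[k] * dt, 0.0, D)

# Plotting i against v shows the pinched hysteresis loop
# characteristic of a memristor.
```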

Keywords: Analog circuits, digital circuits, memristors, neuromorphic computing systems.

147 Recycled Aggregates from Construction and Demolition Waste in the Production of Concrete Blocks

Authors: Juan A. Ferriz-Papi, Simon Thomas

Abstract:

The construction industry generates large amounts of waste, usually mixed, which can be composed of materials of different origins, most of them catalogued as non-hazardous. The European Union's 2020 targets for this waste have already been achieved by the UK, but mainly through downcycling processes (backfilling), whereas upcycling (such as recycling in new concrete batches) remains at a low percentage. The aim of this paper is to explore further the use of recycled aggregates from construction and demolition waste (CDW) in concrete mixes so as to improve upcycling. A review of the most recent research and legislation applied in the UK is developed regarding the production of concrete blocks. As a case study, initial tests were developed with a CDW recycled aggregate sample from a CDW plant in Swansea. The composition of two samples was determined by visual inspection and sieving tests and compared to original aggregates. More than 70% was formed by soil waste from excavation, and the rest was a mix of waste from mortar, concrete, and ceramics with small traces of plaster, glass and organic matter. Two concrete mixes were made with 80% replacement by recycled aggregates and different water/cement ratios. Tests were carried out for slump, absorption, density and compression strength. The results were compared to a reference sample and showed a substantial reduction of quality in both mixes. Despite that, the discussion identifies different aspects to resolve, such as heterogeneity or composition, and analyzes them for the successful use of these recycled aggregates in the production of concrete blocks. The conclusions obtained can help increase the ratio of upcycling processes with mixed CDW as recycled aggregates in concrete mixes.

Keywords: Recycled aggregate, concrete, concrete block, construction and demolition waste, recycling.

146 An Embedded System for Artificial Intelligence Applications

Authors: Ioannis P. Panagopoulos, Christos C. Pavlatos, George K. Papakonstantinou

Abstract:

Conventional approaches to the implementation of logic programming applications on embedded systems are solely software-based. As a consequence, a compiler is needed that transforms the initial declarative logic program into its equivalent procedural one, to be programmed into the microprocessor. This approach increases the complexity of the final implementation and reduces the overall system's performance. On the other hand, hardware implementations which are only capable of supporting logic programs prevent their use in applications where logic programs need to be intertwined with traditional procedural ones. We exploit HW/SW codesign methods to present a microprocessor capable of supporting hybrid applications using both programming approaches. We take advantage of the close relationship between attribute grammar (AG) evaluation and knowledge engineering methods to present a programmable hardware parser that performs logic derivations, and we combine it with an extension of a conventional RISC microprocessor that performs the unification process to report the success or failure of those derivations. The extended RISC microprocessor is still capable of executing conventional procedural programs, so hybrid applications can be implemented. The presented implementation is programmable, supports the execution of hybrid applications, increases the performance of logic derivations (experimental analysis yields an approximate 1000% increase in performance) and reduces the complexity of the final implemented code. The proposed hardware design is supported by a proposed extended C language called C-AG.
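A minimal software sketch of the unification step that the extended RISC core performs in hardware; here terms are tuples and variables are capitalized strings.

```python
def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst=None):
    """Return a substitution unifying a and b, or None on failure."""
    subst = {} if subst is None else subst
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        return {**subst, a: b}
    if is_var(b):
        return {**subst, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None          # failure is reported back to the parser
        return subst
    return None

# parent(X, tom) unifies with parent(mary, tom) under {X: mary}.
print(unify(("parent", "X", "tom"), ("parent", "mary", "tom")))
```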

Keywords: Attribute Grammars, Logic Programming, RISC microprocessor.

145 Distributed System Computing Resource Scheduling Algorithm Based on Deep Reinforcement Learning

Authors: Yitao Lei, Xingxiang Zhai, Burra Venkata Durga Kumar

Abstract:

As the quantity and complexity of computing in large-scale software systems increase, distributed system computing becomes increasingly important. A distributed system realizes high-performance computing through collaboration between different computing resources. Without efficient resource scheduling, distributed computing may waste resources and incur high costs. Resource scheduling is usually an NP-hard problem, so no general solution exists, although optimization algorithms such as genetic algorithms and ant colony optimization are available. The large scale of distributed systems makes these traditional optimization algorithms challenging to apply, so heuristic and machine learning algorithms are usually employed to ease the computing load. We therefore review traditional resource scheduling optimization algorithms and introduce a deep reinforcement learning method that utilizes the perceptual ability of neural networks and the decision-making ability of reinforcement learning. Using machine learning, we try to find the important factors that influence the performance of distributed system computing and help the distributed system schedule computing resources efficiently. This paper surveys the application of deep reinforcement learning to distributed system computing resource scheduling. The research proposes a deep reinforcement learning method that uses a recurrent neural network to optimize resource scheduling. The paper concludes with the challenges and improvement directions for deep reinforcement learning-based resource scheduling algorithms.
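A minimal sketch of the policy-gradient idea behind such schedulers, with a toy linear policy standing in for the proposed recurrent neural network and an assumed load-balancing reward:

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes = 3
theta = np.zeros((n_nodes, n_nodes))   # weights: node loads -> node scores

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for episode in range(2000):
    loads = rng.uniform(0, 1, n_nodes)     # observed state: current node loads
    probs = softmax(theta @ loads)
    a = rng.choice(n_nodes, p=probs)       # pick a node for the next task
    reward = -loads[a]                     # lighter node -> higher reward
    grad = -np.outer(probs, loads)         # d log pi(a|s) / d theta ...
    grad[a] += loads                       # ... for the chosen action a
    theta += 0.1 * reward * grad           # REINFORCE update

test = np.array([0.9, 0.1, 0.5])
print(softmax(theta @ test).round(2))      # mass should shift to the lightest node
```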

Keywords: Resource scheduling, deep reinforcement learning, distributed system, artificial intelligence.

144 The Evaluation of Event Sport Tourism on Regional Economic Development

Authors: Huei-Wen Lin, Huei-Fu Lu

Abstract:

Event sport tourism (EST) has become an especially important economic sector around the world. As its magnitude continues to grow, attracting more tourists, media, and investment for the host community, many local areas/regions and states have identified the expenditures by visitors as a potential source of economic or employment growth. The main purposes of this study are to investigate stakeholders' insights into the features of hosting EST and its use as a regional development strategy. Continuing the focus of previous literature on regional development and the economic benefits of hosting EST, a total of five semi-structured interview questions were designed, and a thematic analysis was employed, with eight key sport and tourism decision makers in Atlanta during July and August 2016. Through these in-depth interviews, the study contributes to a better understanding of stakeholders' decision-making, identifying benefits and constraints as well as leveraging the impacts of hosting EST. The findings provide stakeholders' perspectives on hosting EST and its use as a reference for regional development in emerging sport tourism markets in the US. Additionally, this study examines key considerations and issues that are critical to a reliable understanding of the economic impacts of hosting EST on regional development, and it can benefit future management authorities (i.e., governments and communities) in their sport tourism development endeavors in defining and hosting successful EST. Furthermore, the insights gained from the qualitative analysis could help other cities/regions analyze the economic impacts of hosting EST and use it as an instrument of city development strategy.

Keywords: Event sport tourism, regional economic development, thematic analysis, stakeholder.

143 Methods and Algorithms of Ensuring Data Privacy in AI-Based Healthcare Systems and Technologies

Authors: Omar Farshad Jeelani, Makaire Njie, Viktoriia M. Korzhuk

Abstract:

Recently, the application of AI-powered algorithms in healthcare has continued to flourish. In particular, access to healthcare information, including patient health history, diagnostic data, and PII (Personally Identifiable Information), is paramount in the delivery of efficient patient outcomes. However, as the exchange of healthcare information between patients and healthcare providers through AI-powered solutions increases, protecting a person's information and privacy has become even more important. Arguably, the increased adoption of healthcare AI has resulted in a significant concentration on the security risks to healthcare data and the corresponding protection measures, leading to escalated analyses and enforcement. Since these challenges arise from the use of AI-based healthcare solutions to manage healthcare data, AI-based data protection measures are used to resolve the underlying problems. Consequently, such projects propose AI-powered safeguards and policies/laws to protect the privacy of healthcare data. This project presents the best-in-class techniques used to preserve the data privacy of AI-powered healthcare applications. Popular privacy-protecting methods such as federated learning, cryptographic techniques, differential privacy methods, and hybrid methods are discussed, together with potential cyber threats, data security concerns, and prospects. The project also discusses some of the relevant data security acts/laws that govern the collection, storage, and processing of healthcare data to guarantee that owners' privacy is preserved. This inquiry discusses various gaps and uncertainties associated with healthcare AI data collection procedures and identifies potential correction/mitigation measures.
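A minimal sketch of one of the methods named above, the Laplace mechanism of differential privacy, applied to a count query over invented patient records:

```python
import numpy as np

def dp_count(records, predicate, epsilon=0.5):
    """Release a count with Laplace noise calibrated to sensitivity/epsilon,
    so no single record's presence can be inferred from the answer."""
    true_count = sum(predicate(r) for r in records)
    sensitivity = 1.0          # one patient changes a count by at most 1
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

patients = [{"age": 70, "diabetic": True}, {"age": 34, "diabetic": False},
            {"age": 55, "diabetic": True}]
print(dp_count(patients, lambda r: r["diabetic"]))   # noisy count near 2
```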

Keywords: Data privacy, artificial intelligence, healthcare AI, data sharing, healthcare organizations.

142 Non-Burn Treatment of Health Care Risk Waste

Authors: Jefrey Pilusa, Tumisang Seodigeng

Abstract:

This research discusses a South African case study on the potential of utilizing refuse-derived fuel (RDF), obtained from the non-burn treatment of health care risk waste (HCRW), as a feedstock for green energy production. This specific waste stream can be destroyed via non-burn treatment technology involving high-speed mechanical shredding followed by steam or chemical injection to disinfect the final product. The RDF obtained from this process is characterised by low moisture, low ash, and a high calorific value, which means it can potentially be used as a high-value solid fuel. Although the raw feed of this RDF is classified as hazardous, the final RDF has been reported to be non-infectious and can be blended with other combustible wastes, such as rubber and plastic, for waste-to-energy applications. This study evaluated non-burn treatment technology as a possible solution for the on-site destruction of HCRW in South African private and public health care centres. Waste generation quantities were estimated based on the number of registered patient beds and theoretical bed occupancy. A time and motion study was conducted to evaluate the logistics viability of on-site treatment. Non-burn treatment technology for HCRW is a promising option for South Africa, and successful implementation of this method depends upon the initial capital investment, operational cost and environmental permitting of such technology; there are other influencing factors such as the size of the waste stream, the product off-take price, and product demand.

Keywords: Autoclave, disposal, fuel, incineration, medical waste.

141 Identifying the Barriers behind the Lack of Six Sigma Use in Libyan Manufacturing Companies

Authors: Osama Elgadi, Martin Birkett, Wai Ming Cheung

Abstract:

This paper investigates the barriers behind the underutilisation of six sigma in Libyan manufacturing companies (LMCs). A mixed-method methodology is proposed, starting by conducting interviews to collect qualitative data, followed by the development of a questionnaire to obtain quantitative data. The focus of this paper is on discussing the findings of the interview stage and how these can be used to further develop the questionnaire stage. The interview results showed that only four key barriers were highlighted as being encountered by LMCs. These factors, which differ in significance, were identified and placed in descending order of importance, namely: "Lack of top management commitment", "Lack of training", "Lack of knowledge about six sigma", and "Culture effect". The findings also showed that some barriers which were found in previous studies of six sigma implementation were not considered barriers by LMCs but can, in fact, be considered success factors or enablers for six sigma adoption. These factors were identified as: "sufficiency of time and financial resources"; "customers unsatisfied"; "good communication between all departments in the company"; and "we are certain about its results and benefits to our company and unhappy with the current quality system". These results suggest that LMCs face fewer barriers to adopting six sigma than many well-established global companies operating in other countries and could take advantage of these success factors by developing and implementing a six sigma framework to improve their product quality and competitiveness.

Keywords: Six sigma, barriers, Libyan manufacturing companies, interview.

140 Developing Electronic Medical Record System to Enhance the Satisfaction of Patients and Service Providers

Authors: Siham Jemal Kedir

Abstract:

Information and communication technology is dramatically transforming the health sector, especially in developing countries with few resources and burgeoning access to an internet connection. As a result, processes such as record keeping, administration, and human resources have been vastly simplified, allowing hospitals to focus on delivering urgent medical care. This paper explores the impact of IT through a study of the electronic medical record system in the Mekelle City Health Center in Tigray Region, Ethiopia. The paper has four specific objectives: 1. developing artifacts in the electronic medical record system; 2. preparing a diagram for the step-by-step development of electronic medical records; 3. creating a draft website with the proposed electronic medical record system; and 4. testing and evaluating the performance and user acceptance of the system. The research was done in a qualitative manner, employing interviews and in-person observation. The research found the following major results: first, the medical record system has been difficult to implement; second, the Mekelle Health Center has been using a manual recording system, which is time-consuming and inefficient. The old recording system in the Center leads to the dissatisfaction of patients as well as the service provider staff. As a result, to transform the manual recording system into a digital one, an electronic medical record system was developed. The developed system was tested for implementation and was successful. Consequently, the administrator of the health center is ready to implement and use the developed software to introduce a medical record system in the Mekelle Health Center.

Keywords: Electronic Health Record Implementation, EMR System Development, Medical Record.

139 Toward Understanding and Testing Deep Learning Information Flow in Deep Learning-Based Android Apps

Authors: Jie Zhang, Qianyu Guo, Tieyi Zhang, Zhiyong Feng, Xiaohong Li

Abstract:

The widespread popularity of mobile devices and the development of artificial intelligence (AI) have led to the widespread adoption of deep learning (DL) in Android apps. Compared with traditional Android apps (traditional apps), deep learning based Android apps (DL-based apps) need to use more third-party application programming interfaces (APIs) to complete complex DL inference tasks. However, existing methods (e.g., FlowDroid) for detecting sensitive information leakage in Android apps cannot be directly used to detect DL-based apps, because they have difficulty detecting third-party APIs. To solve this problem, we design DLtrace, a new static information flow analysis tool that can effectively recognize third-party APIs. With our proposed trace and detection algorithms, DLtrace can also efficiently detect privacy leaks caused by sensitive APIs in DL-based apps. Additionally, we propose two formal definitions to deal with the common polymorphism and anonymous inner-class problems in Android static analyzers. Using DLtrace, we summarize the non-sequential characteristics of DL inference tasks in DL-based apps and the specific functionalities provided by DL models for such apps. We conduct an empirical assessment with DLtrace on 208 popular DL-based apps in the wild and find that 26.0% of the apps suffer from sensitive information leakage. Furthermore, DLtrace outperformed FlowDroid in detecting and identifying third-party APIs. The experimental results demonstrate that DLtrace extends FlowDroid in understanding DL-based apps and detecting security issues therein.
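A minimal sketch of the kind of forward taint propagation such a tool performs, over a toy three-address IR; real analyzers work on Dalvik bytecode, and the source/sink names here are invented.

```python
SOURCES = {"getDeviceId"}          # taint enters through these calls
SINKS = {"sendToNetwork"}          # leaks are flagged at these calls

program = [
    ("call", "x", "getDeviceId"),           # x = getDeviceId()
    ("assign", "y", "x"),                   # y = x
    ("call", None, "sendToNetwork", "y"),   # sendToNetwork(y)
]

tainted = set()
for instr in program:
    op, dst, fn_or_src, *args = instr
    if op == "call" and fn_or_src in SOURCES:
        tainted.add(dst)                    # return value becomes tainted
    elif op == "assign" and fn_or_src in tainted:
        tainted.add(dst)                    # taint flows through assignment
    elif op == "call" and fn_or_src in SINKS and any(a in tainted for a in args):
        print(f"leak: tainted value reaches {fn_or_src}")
```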

Keywords: Mobile computing, deep learning apps, sensitive information, static analysis.

138 Detecting Cavitation in a Vertical Sea water Centrifugal Lift Pump Related to Iran Oil Industry Cooling Water Circulation System

Authors: Omid A. Zargar

Abstract:

Cavitation is one of the most well-known process faults that may occur in different industrial equipment, especially centrifugal pumps. Cavitation may also happen in water pumps and turbines. Sometimes cavitation has been severe enough to wear holes in the impeller and damage the vanes to such a degree that the impeller becomes very ineffective. More commonly, the pump efficiency will decrease significantly during cavitation and continue to decrease as damage to the impeller increases. Typically, when cavitation occurs, an audible sound similar to 'marbles' or 'crackling' is reported to be emitted from the pump. In this paper, the most effective monitoring items and techniques for detecting cavitation are discussed in detail. Besides, some successful solutions to this problem for a vertical sea water centrifugal lift pump are discussed through a case history related to the Iranian oil industry. Furthermore, balance line modification, strainer choking and random resonance in sea water pumps are discussed. In addition, a new method for diagnosing the mechanical condition of vertical sea water centrifugal lift pumps is introduced. This method involves disaggregating bus current by device into disaggregated currents having correspondences with operating currents, in response to measured bus current. Moreover, some new patents and innovations in mechanical sea water pumping and cooling systems are discussed in this paper.
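A minimal sketch of the vibration-analysis side of cavitation detection: broadband energy rises across the FFT spectrum of the time wave form when cavitation is present. The signal below is synthetic, and the blade pass frequency is an assumption.

```python
import numpy as np

fs = 10_000                                   # sample rate, Hz
t = np.arange(0, 1.0, 1 / fs)
bladepass = np.sin(2 * np.pi * 145 * t)       # assumed blade pass frequency tone
broadband = 0.8 * np.random.randn(t.size)     # cavitation-like random noise

twf = bladepass + broadband                   # time wave form (TWF)
spectrum = np.abs(np.fft.rfft(twf)) / t.size  # FFT amplitude spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)

noise_floor = np.median(spectrum)             # broadband floor, not the tones
print(f"broadband floor: {noise_floor:.4f}")  # rises when cavitation is present
```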

Keywords: Cavitation, Vibration Analysis, Centrifugal Pump, Vertical Pump, Sea Water Pump, Balance Line, Strainer, Time Wave Form (TWF), Fast Fourier Transform (FFT).

137 Bayes Net Classifiers for Prediction of Renal Graft Status and Survival Period

Authors: Jiakai Li, Gursel Serpen, Steven Selman, Matt Franchetti, Mike Riesen, Cynthia Schneider

Abstract:

This paper presents the development of a Bayesian belief network classifier for the prediction of graft status and survival period in renal transplantation, using the patient profile information prior to the transplantation. The objective was to explore the feasibility of developing a decision-making tool for identifying the most suitable recipient among the candidate pool members. The dataset was compiled from the University of Toledo Medical Center Hospital patients, as reported to the United Network for Organ Sharing, and had 1228 patient records for the period covering 1987 through 2009. The Bayes net classifiers were developed using the Weka machine learning software workbench. Two separate classifiers were induced from the data set: one to predict the status of the graft as either failed or living, and a second classifier to predict the graft survival period. The classifier for graft status prediction performed very well, with a prediction accuracy of 97.8% and true positive rates of 0.967 and 0.988 for the living and failed classes, respectively. The second classifier, to predict the graft survival period, yielded a prediction accuracy of 68.2% and a true positive rate of 0.85 for the class representing those instances with kidneys failing during the first year following transplantation. Simulation results indicated that it is feasible to develop a successful Bayesian belief network classifier for prediction of graft status, but not the graft survival period, using the information in the UNOS database.
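A minimal sketch of inducing a probabilistic classifier for graft status from categorical pre-transplant features, using naive Bayes as a simple stand-in for the Bayesian belief network the authors induced in Weka; the data are invented.

```python
import numpy as np
from sklearn.naive_bayes import CategoricalNB

rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(300, 5))   # encoded donor/recipient attributes
y = rng.integers(0, 2, size=300)        # 0 = failed, 1 = living

clf = CategoricalNB().fit(X, y)
print(clf.predict_proba(X[:1]))         # P(failed), P(living) for one candidate
```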

Keywords: Bayesian network classifier, renal transplantation, graft survival period, United Network for Organ Sharing.

136 Systems Engineering Management Using Transdisciplinary Quality System Development Lifecycle Model

Authors: Mohamed Asaad Abdelrazek, Amir Taher El-Sheikh, M. Zayan, A.M. Elhady

Abstract:

The successful realization of complex systems depends not only on the technology issues and the process for implementing them, but on the management issues as well. Managing the systems development lifecycle requires technical management; systems engineering management is that technical management. Systems engineering management is accomplished by incorporating many activities, of which the three major ones are development phasing, the systems engineering process and lifecycle integration. Systems engineering management activities are performed across the system development lifecycle. Due to the ever-increasing complexity of systems, as well as the difficulty of managing and tracking the development activities, new ways to carry out systems engineering management activities are required. This paper presents a systematic approach used as a design management tool applied across systems engineering management roles. In this approach, the Transdisciplinary System Development Lifecycle (TSDL) Model has been modified and integrated with Quality Function Deployment (QFD). Hereinafter, the systematic approach is called the Transdisciplinary Quality System Development Lifecycle (TQSDL) Model. The QFD translates the voice of the customer (VOC) into measurable technical characteristics. The modified TSDL model is based on Axiomatic Design, developed by Suh, which is applicable to all designs: products, processes, systems and organizations. The TQSDL model aims to provide a robust structure and systematic thinking to support the implementation of systems engineering management roles. This approach ensures that the customer requirements are fulfilled and satisfies all the systems engineering manager's roles and activities.
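A minimal sketch of the QFD step named above, translating weighted VOC items into priorities over measurable technical characteristics through a relationship matrix; all numbers are invented.

```python
import numpy as np

voc_weights = np.array([5, 3, 4])        # importance of three customer needs
relationship = np.array([                # rows: needs, cols: characteristics
    [9, 3, 0],                           # 9/3/1/0 strength convention
    [1, 9, 3],
    [0, 3, 9],
])

priority = voc_weights @ relationship    # importance-weighted column sums
print(priority / priority.sum())         # relative technical priorities
```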

Keywords: Axiomatic design, quality function deployment, systems engineering management, system development lifecycle.

135 NANCY: Combining Adversarial Networks with Cycle-Consistency for Robust Multi-Modal Image Registration

Authors: Mirjana Ruppel, Rajendra Persad, Amit Bahl, Sanja Dogramadzi, Chris Melhuish, Lyndon Smith

Abstract:

Multimodal image registration is a profoundly complex task, which is why deep learning has been used widely to address it in recent years. However, two main challenges remain: firstly, the lack of ground truth data calls for an unsupervised learning approach, which leads to the second challenge of defining a feasible loss function that can compare two images of different modalities to judge their level of alignment. To avoid this issue altogether, we implement a generative adversarial network consisting of two registration networks G_AB, G_BA and two discrimination networks D_A, D_B connected by spatial transformation layers. G_AB learns to generate a deformation field which registers an image of modality B to an image of modality A. To do that, it uses the feedback of the discriminator D_B, which is learning to judge the quality of alignment of the registered image B. G_BA and D_A learn a mapping from modality A to modality B. Additionally, a cycle-consistency loss is implemented. For this, both registration networks are employed twice, resulting in images Â, B̂ which were registered to B̃, Ã, which were in turn registered to the initial image pair A, B. Thus the resulting and initial images of the same modality can be easily compared. A dataset of liver CT and MRI was used to evaluate the quality of our approach and to compare it against learning and non-learning based registration algorithms. Our approach achieves Dice scores of up to 0.80 ± 0.01 and is therefore comparable to, and slightly more successful than, algorithms like SimpleElastix and VoxelMorph.
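A minimal sketch of the Dice overlap behind the 0.80 result quoted above, computed on toy binary masks; the registration networks themselves are beyond a short example.

```python
import numpy as np

def dice(a, b):
    """Dice coefficient 2|A n B| / (|A| + |B|) between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

fixed_mask = np.zeros((64, 64), dtype=bool)
fixed_mask[16:48, 16:48] = True          # toy liver mask in the fixed image
warped_mask = np.zeros((64, 64), dtype=bool)
warped_mask[18:50, 16:48] = True         # slightly misaligned warped mask

print(f"Dice = {dice(fixed_mask, warped_mask):.3f}")   # -> 0.938
```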

Keywords: Multimodal image registration, GAN, cycle consistency, deep learning.

134 Off-Shore Port Management on the Environmental Issue - Case Study of Sichang Harbor

Authors: Sarisa Pechpoothong

Abstract:

The research aims to minimize environmental damage from maritime activities related to the operation of the lighter boat anchorage and its tugboats. Guidance on upgrading the current harbor service and infrastructure has been provided to the Koh Sichang Municipality. This involves a study of the maritime logistics of the water area under the jurisdiction of the Sichang Island Municipality, and possible recommendations may involve charging taxes, regulations and fees. Implementing these recommendations will help protect the marine environment and increase operator functionality. Additionally, our recommendation is to generate a consistent revenue stream for the municipality. The action items contained in this research are feasible and effective, but the success of these initiatives is heavily dependent upon successful promotion and enforcement. Promoting new rules and regulations effectively and peacefully can be done through theories and techniques used in the psychology of persuasion. In order to assure compliance with the regulations, the municipality must maintain stringent patrols and fines for violators. To succeed, the Municipality must preserve a consistent, transparent and meaningful enforcement system. Considering potential opportunities outside the current state of the municipality, the authors recommend that Koh Sichang be given additional jurisdiction to capture value from the master vessels, as well as to confront the more significant environmental challenges these vessels pose. Finally, the authors recommend that the Port of Koh Sichang Island obtain free port status in order to increase economic viability and overall sustainability.

Keywords: Harbor, Garbage Collection Service, Environment, Off-shore port.

133 Positive Energy Districts in the Swedish Energy System

Authors: Vartan Ahrens Kayayan, Mattias Gustafsson, Erik Dotzauer

Abstract:

The European Union is introducing the positive energy district concept, which has the goal of reducing overall carbon dioxide emissions. The Swedish energy system is unique compared to others in Europe due to its implementation of low-carbon electricity and heat sources and its high uptake of district heating. The goal of this paper is to start the discussion about how the concept of positive energy districts can best be applied to the Swedish context and meet its mitigation goals. To explore how these differences impact the formation of positive energy districts, two cases were analyzed for their methods and how these integrate into the Swedish energy system: a district in Uppsala with a focus on energy and another in Helsingborg with a focus on climate. The case in Uppsala uses primary energy calculations, which can be criticized but adopt a virtual boundary that allows the surrounding system to be considered. The district in Helsingborg has a complex methodology for considering the life cycle emissions of the neighborhood. It is successful in considering the energy balance on a monthly basis, but it can be criticized for creating sub-optimized systems due to its tight geographical constraints. The discussion of shaping the definitions and methodologies for positive energy districts is taking place in Europe and Sweden. We identify three pitfalls that must be avoided if positive energy districts are to meet their mitigation goals in the Swedish context: the goal of pushing out fossil fuels is not relevant in the current energy system, the mismatch between summer electricity production and winter energy demand should be addressed, and further implementations should consider collaboration with the established district heating grid.
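A minimal sketch of why the monthly balance matters in this context: a district can look "positive" on the annual sum while importing heavily all winter. The monthly figures below are invented (MWh).

```python
import numpy as np

production = np.array([10, 15, 30, 45, 60, 70, 72, 60, 40, 25, 12, 8])
demand     = np.array([55, 50, 40, 30, 22, 18, 17, 20, 28, 38, 48, 56])

net = production - demand
print("annual net:", net.sum())                    # > 0: annually "positive"
print("months in deficit:", int((net < 0).sum()))  # winter mismatch remains
```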

Keywords: Positive energy districts, energy system, renewable energy, European Union.
