Search results for: residual network 50

4388 Replacing an Old PFN System with a Solid State Modulator without Changing the Klystron Transformer

Authors: Klas Elmquist, Anders Larsson

Abstract:

Until the year 2000, almost all short-pulse modulators in the accelerator world were built with the pulse forming network (PFN) technique. PFN systems have since been replaced with solid-state modulators that offer better efficiency, better stability, lower cost of ownership, and a much smaller footprint. In this paper, it is shown that it is possible to replace a pulse forming network system with a solid-state system without changing the klystron tank and the klystron transformer. The solid-state modulator uses semiconductors switching at the 1 kV level. A first pulse transformer steps the voltage up to 10 kV. The 10 kV pulse is finally fed into the original transformer placed under the klystron. A flatness of 0.8 percent and a stability of 100 ppm are achieved. The test is done with a CPI 8262 type of klystron. It is also shown that it is possible to run such a system with long cables between the transformers. When using this technique, it is possible to keep original sub-systems such as the filament, vacuum, focusing solenoid, and cooling systems of the klystron. This substantially reduces the cost of an upgrade and prolongs the life of the klystron system.

Keywords: modulator, solid-state, PFN-system, thyratron

Procedia PDF Downloads 135
4387 Natural Gas Flow Optimization Using Pressure Profiling and Isolation Techniques

Authors: Syed Tahir Shah, Fazal Muhammad, Syed Kashif Shah, Maleeha Gul

Abstract:

Natural gas has become a relatively clean, high-quality source of energy, recovered from deep wells through expensive drilling activities. The recovered substance is purified in multiple processing stages to remove unwanted contaminants such as dust, dirt, crude oil, and other particles. Gas utilities are mainly concerned with the essential objectives of quantity and quality of natural gas delivery, financial outcome, and a safe volumetric inventory of natural gas in the transmission pipeline. Gas quantity and quality are primarily governed by standards and advanced metering procedures in processing units and transmission systems, while the financial outcome is defined by the purchase and sale of gas as well as the operational cost of the transmission pipeline. SNGPL (Sui Northern Gas Pipelines Limited) in Pakistan operates a natural gas transmission pipeline network of over 9,125 km covering a wide range of diameters. This research addresses some of the issues in accuracy and metering procedures, using multiple advanced instruments to measure gas flow attributes once the gas has entered the transmission system. It also examines the effects of good pressure management in the transmission pipeline network, with the aim of boosting the gas volume stored in the existing network and ultimately curbing gas losses, i.e., unaccounted-for gas (UFG), for financial benefit. Furthermore, based on the results and their analysis, it is recommended to raise the maximum allowable operating pressure (MAOP) of the system from the current level of roughly 900 PSIG to 1235 PSIG so that the capacity of the network can be fully utilized. Overall, the results show that the current model is very efficient and provides excellent results in the minimum possible time.

Keywords: natural gas, pipeline network, UFG, transmission pack, AGA

Procedia PDF Downloads 95
4386 A Smart Sensor Network Approach Using Affordable River Water Level Sensors

Authors: Dian Zhang, Brendan Heery, Maria O’Neill, Ciprian Briciu-Burghina, Noel E. O’Connor, Fiona Regan

Abstract:

Recent developments in sensors, wireless data communication, and cloud computing have brought the sensor web to a whole new generation. The introduction of the Internet of Things (IoT) concept has taken sensor research to a new level, involving the development of long-lasting, low-cost, environmentally friendly, and smart sensors; new wireless data communication technologies; big data analytics algorithms; and cloud-based solutions tailored to large-scale smart sensor networks. The next generation of smart sensor networks consists of several layers: a physical layer, where all the smart sensors reside and data pre-processing occurs, either on the sensor itself or on a field gateway; a data transmission layer, where data and instruction exchanges happen; and a data processing layer, where meaningful information is extracted and organized from the pre-processed data stream. There are many definitions of a smart sensor; to summarize them, a smart sensor must be intelligent and adaptable. In a future large-scale sensor network, the collected data will be far too large for traditional applications to send, store, or process, so the sensor unit must be intelligent enough to pre-process collected data locally on board (this may also occur on the field gateway, depending on the sensor network structure). In this case study, three smart sensing methods, corresponding to simple thresholding, a statistical model, and the machine-learning-based MoPBAS method, are introduced, and their strengths and weaknesses are discussed as an introduction to the smart sensing concept. Data fusion, the integration of data and knowledge from multiple sources, is another key component of the next-generation smart sensor network. For example, in a water level monitoring system, a weather forecast can be retrieved from external sources, and if heavy rainfall is expected, the server can send instructions to the sensor nodes to, for instance, increase the sampling rate, or to switch to sleep mode otherwise. In this paper, we describe the deployment of 11 affordable water level sensors in the Dodder catchment in Dublin, Ireland, and use this network as a case study to give a vision of the next generation of smart sensor networks for flood monitoring, assisting agencies in making decisions about deploying resources in the case of a severe flood event. Some of the deployed sensors are located alongside traditional water level sensors for validation purposes. Each key component of the smart sensor network is discussed, which will hopefully inspire researchers working in the sensor research domain.
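
For illustration, here is a minimal sketch of the simplest of the three smart sensing methods mentioned above, on-board thresholding, written in Python; the window size, alert level, and rise limit are hypothetical placeholders, not values from the deployed Dodder network.

```python
from collections import deque

# Minimal on-board thresholding sketch: a node keeps a short window of
# water-level readings and only transmits when the level (or its rate of
# change) crosses a limit, reducing the data sent upstream.
WINDOW = 12            # hypothetical: last 12 samples (e.g. one hour at 5-min rate)
LEVEL_LIMIT_M = 2.5    # hypothetical alert level in metres
RISE_LIMIT_M = 0.15    # hypothetical rise between consecutive samples

def should_transmit(readings: deque, new_level: float) -> bool:
    """Return True if the new reading is worth sending to the gateway."""
    rising_fast = bool(readings) and (new_level - readings[-1]) > RISE_LIMIT_M
    return new_level > LEVEL_LIMIT_M or rising_fast

window = deque(maxlen=WINDOW)
for level in [1.9, 2.0, 2.2, 2.45, 2.7]:   # simulated incoming samples
    if should_transmit(window, level):
        print(f"transmit alert: level={level:.2f} m")
    window.append(level)
```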

Keywords: smart sensing, internet of things, water level sensor, flooding

Procedia PDF Downloads 383
4385 An Exploratory Sequential Design: A Mixed Methods Model for the Statistics Learning Assessment with a Bayesian Network Representation

Authors: Zhidong Zhang

Abstract:

This study established a mixed methods model for assessing statistics learning with Bayesian network models. There are three variants of the exploratory sequential design; the one used here has three linked steps: qualitative data collection and analysis; development of a quantitative measure, instrument, or intervention; and quantitative data collection and analysis. The study used the scoring model of the analysis of variance (ANOVA) as the content domain. The aim is to examine students' learning in both semantic and performance aspects at a fine-grained level. The ANOVA score model, y = α + βx1 + γx2 + ε, served as a cognitive task for collecting data during the student learning process. When the learning processes were decomposed into multiple steps in both semantic and performance aspects, a hierarchical Bayesian network was established. This is a theory-driven process: the hierarchical structure was obtained from qualitative cognitive analysis. The data from students' learning of the ANOVA score model were then used as evidence for the hierarchical Bayesian network model through the evidential variables. Finally, the assessment results of students' learning of the ANOVA score model were reported. In brief, this was a mixed methods research design applied to statistics learning assessment; such designs expand the possibilities for researchers to establish advanced quantitative models starting from a theory-driven qualitative mode.
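
As a toy illustration of how evidence propagates in such a hierarchical Bayesian network, the Python sketch below updates belief about a single latent skill node from two binary performance observations; the node names and all probabilities are hypothetical, not the study's calibrated model.

```python
import numpy as np

# Toy hierarchical Bayesian-network update: one latent skill node
# ("understands the ANOVA score model") with two binary evidential
# variables observed from student work. All probabilities are hypothetical.
p_skill = np.array([0.5, 0.5])                  # prior P(skill = [mastered, not mastered])

# P(evidence correct | skill state) for two performance observations
p_obs_given_skill = {
    "writes_model_correctly": np.array([0.9, 0.3]),
    "interprets_coefficients": np.array([0.8, 0.2]),
}

observed = {"writes_model_correctly": True, "interprets_coefficients": False}

posterior = p_skill.copy()
for obs, correct in observed.items():
    lik = p_obs_given_skill[obs] if correct else 1 - p_obs_given_skill[obs]
    posterior = posterior * lik
    posterior = posterior / posterior.sum()     # normalise after each evidence item

print("P(mastered | evidence) =", round(float(posterior[0]), 3))
```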

Keywords: exploratory sequential design, ANOVA score model, Bayesian network model, mixed methods research design, cognitive analysis

Procedia PDF Downloads 182
4384 Tracing Back the Bot Master

Authors: Sneha Leslie

Abstract:

The current situation in the cyber world is that crimes performed by botnets are increasing, and the masterminds (botmasters) are not easily detectable. The botmaster compromises legitimate host machines in the network and turns them into bots or zombies to initiate cyber-attacks. This paper focuses on live detection of the botmaster in the network, using the Metasploit framework, when a distributed denial-of-service (DDoS) attack is performed by the botnet. The affected victim machine continuously monitors its incoming packets. Once the victim machine detects an excessive count of packets from a particular IP, that IP is noted and details of the corresponding systems are gathered. Using the vulnerabilities present in the zombie machines (already compromised by the botmaster), the victim machine compromises them in turn. By gaining access to these compromised systems, it runs applications remotely, and by analyzing the incoming packets of the zombies, the victim learns the address of the botmaster. This is an effective and simple system in which no specific features of the communication protocol are considered.
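
A minimal sketch of the victim-side monitoring step, counting incoming packets per source IP and flagging excessive sources, might look like the following Python; the threshold and packet-log format are hypothetical, and the subsequent exploitation steps via Metasploit are not shown.

```python
from collections import defaultdict

# Sketch of the victim-side monitoring step described above: count incoming
# packets per source IP over one interval and flag any source whose count is
# excessive. The threshold and log format are hypothetical placeholders.
PACKET_THRESHOLD = 1000   # packets per monitoring interval

def flag_suspect_sources(packet_log):
    """packet_log: iterable of (timestamp, src_ip) tuples for one interval."""
    counts = defaultdict(int)
    for _ts, src_ip in packet_log:
        counts[src_ip] += 1
    return {ip: n for ip, n in counts.items() if n > PACKET_THRESHOLD}

# Simulated interval: one host floods the victim while another behaves normally.
log = [(0, "10.0.0.5")] * 1500 + [(0, "10.0.0.7")] * 40
print(flag_suspect_sources(log))   # {'10.0.0.5': 1500}
```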

Keywords: botnet, DDoS attack, network security, detection system, metasploit framework

Procedia PDF Downloads 254
4383 Trend Detection Using Community Rank and Hawkes Process

Authors: Shashank Bhatnagar, W. Wilfred Godfrey

Abstract:

In this paper, we develop an approach to finding trending topics that considers not only the user-topic interaction but also the community to which the user belongs. The method modifies the previous user-topic interaction approach to a user-community-topic interaction, with a speed-up in the range of 1.1 to 3 times. We assume that trend detection in a social network depends on two things: first, the broadcast of messages in the social network, governed by a self-exciting point process called the Hawkes process; and second, the community rank. The influencer node links to others in the community and determines the community rank based on its PageRank and the number of users linked to that community. The community rank decides the influence of one community over another. Hence, the Hawkes process with a user-community-topic kernel determines the trending topics disseminated through the social network.
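
For reference, a minimal Python sketch of the exponential-kernel Hawkes intensity, scaled here by a community-rank weight in the spirit of the user-community-topic kernel, is shown below; the parameter values and event times are hypothetical.

```python
import numpy as np

# Intensity of a Hawkes process with an exponential kernel, scaled by a
# community-rank weight:
#   lambda(t) = mu + w_c * sum_{t_i < t} alpha * exp(-beta * (t - t_i))
# All parameter values are hypothetical illustrations.
def hawkes_intensity(t, event_times, mu=0.1, alpha=0.6, beta=1.2, community_rank=1.0):
    past = np.asarray([ti for ti in event_times if ti < t])
    excitation = alpha * np.exp(-beta * (t - past)).sum() if past.size else 0.0
    return mu + community_rank * excitation

events = [0.5, 1.1, 1.3, 2.0]          # times at which the topic was broadcast
for t in [1.0, 1.5, 2.5]:
    print(f"lambda({t}) = {hawkes_intensity(t, events):.3f}")
```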

Keywords: community detection, community rank, Hawkes process, influencer node, pagerank, trend detection

Procedia PDF Downloads 384
4382 Off-Policy Q-learning Technique for Intrusion Response in Network Security

Authors: Zheni S. Stefanova, Kandethody M. Ramachandran

Abstract:

With our increasing dependency on computer devices, we face the necessity of adequate, efficient, and effective mechanisms for protecting our networks. There are two main problems that Intrusion Detection Systems (IDS) attempt to solve: 1) to detect an attack by analyzing the incoming traffic and inspecting the network (intrusion detection), and 2) to produce a prompt response when the attack occurs (intrusion prevention). It is critical to create an intrusion detection model that detects a breach in the system in time, and it is also challenging to make it provide an automatic response with an acceptable delay at every stage of the monitoring process. We cannot afford to adopt security measures that demand high computational power, and we cannot accept a mechanism that reacts with a delay. In this paper, we propose an intrusion response mechanism based on artificial intelligence, and more precisely on reinforcement learning techniques (RLT). The RLT helps us create a decision agent that controls the process of interacting with the undetermined environment. The goal is to find an optimal policy, which represents the intrusion response; we therefore solve the reinforcement learning problem using a Q-learning approach. Our agent produces an optimal immediate response while evaluating the network traffic. This Q-learning approach establishes the balance between exploration and exploitation and provides a unique, self-learning, and strategic artificial intelligence response mechanism for IDS.
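
A minimal tabular Q-learning sketch in Python, illustrating the epsilon-greedy exploration-exploitation balance and the one-step update that would drive such a response agent, is given below; the states, actions, and reward signal are hypothetical placeholders rather than the authors' formulation.

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning sketch for choosing an intrusion response.
# States, actions and the reward function are hypothetical placeholders;
# the update rule is the standard one-step Q-learning rule.
ACTIONS = ["allow", "rate_limit", "block_ip"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

Q = defaultdict(lambda: {a: 0.0 for a in ACTIONS})

def choose_action(state):
    # epsilon-greedy: balance exploration and exploitation
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(Q[state], key=Q[state].get)

def update(state, action, reward, next_state):
    best_next = max(Q[next_state].values())
    Q[state][action] += ALPHA * (reward + GAMMA * best_next - Q[state][action])

# One simulated interaction: blocking suspicious traffic is rewarded.
s, s_next = "suspicious_flow", "normal_flow"
a = choose_action(s)
r = 1.0 if a == "block_ip" else -0.1     # hypothetical reward signal
update(s, a, r, s_next)
print(a, Q[s][a])
```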

Keywords: cyber security, intrusion prevention, optimal policy, Q-learning

Procedia PDF Downloads 239
4381 Prediction of Unsteady Heat Transfer over Square Cylinder in the Presence of Nanofluid by Using ANN

Authors: Ajoy Kumar Das, Prasenjit Dey

Abstract:

Heat transfer due to forced convection of a copper-water-based nanofluid has been predicted by an Artificial Neural Network (ANN). The nanofluid is formed by mixing copper nanoparticles in water; the volume fractions considered here range from 0% to 15%, and the Reynolds number is kept constant at 100. The back-propagation algorithm is used to train the network. The ANN is trained with input and output data obtained from numerical simulations performed in the finite-volume-based commercial Computational Fluid Dynamics (CFD) software Ansys Fluent. The numerical simulation results are compared with the back-propagation-based ANN results. It is found that the forced convection heat transfer of the water-based nanofluid can be predicted correctly by the ANN. It is also observed that the back-propagation ANN can predict the heat transfer characteristics of the nanofluid very quickly compared to the standard CFD method.
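
A small sketch of this kind of surrogate model is shown below using scikit-learn's MLPRegressor (a back-propagation-trained network); the training data are synthetic placeholders standing in for the Ansys Fluent CFD results.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Back-propagation-trained network mapping nanofluid volume fraction to a
# heat-transfer quantity (e.g. a Nusselt-number-like value). The training
# data here are synthetic placeholders, not CFD results.
rng = np.random.default_rng(0)
phi = rng.uniform(0.0, 0.15, size=(200, 1))               # volume fraction, 0-15%
nu = 5.0 + 12.0 * phi[:, 0] + rng.normal(0, 0.05, 200)    # fabricated trend + noise

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
model.fit(phi, nu)

print(model.predict([[0.05], [0.10]]))   # quick prediction at 5% and 10% loading
```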

Keywords: forced convection, square cylinder, nanofluid, neural network

Procedia PDF Downloads 321
4380 Would Intra-Individual Variability in Attention Be an Indicator of Senior Adults at Risk of Cognitive Decline: Evidence from the Attention Network Test (ANT)

Authors: Hanna Lu, Sandra S. M. Chan, Linda C. W. Lam

Abstract:

Objectives: Intra-individual variability (IIV) has been considered a biomarker of healthy ageing. However, the composite role of IIV in attention as an early indicator of neurocognitive disorders warrants further exploration. This study aims to investigate IIV and its relationship with attention network functions in adults with neurocognitive disorders (NCD). Methods: 36 adults with NCD due to Alzheimer's disease (NCD-AD), 31 adults with NCD due to vascular disease (NCD-vascular), and 137 healthy controls were recruited. Intraindividual standard deviations (iSD) and the intraindividual coefficient of variation of reaction time (ICV-RT) were used to evaluate IIV. Results: The NCD groups showed greater IIV (iSD: F = 11.803, p < 0.001; ICV-RT: F = 9.07, p < 0.001). In ROC analyses, the IIV indices could differentiate NCD-AD (iSD: AUC = 0.687, p = 0.001; ICV-RT: AUC = 0.677, p = 0.001) and NCD-vascular (iSD: AUC = 0.631, p = 0.023; ICV-RT: AUC = 0.615, p = 0.045) from healthy controls. Moreover, processing speed could distinguish NCD-AD from NCD-vascular (AUC = 0.647, p = 0.040). Discussion: Intra-individual variability in attention provides a stable measure of cognitive performance and seems to help distinguish senior adults with different cognitive status.
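
For clarity, the two variability indices can be computed as follows in Python from one participant's reaction times; the RT values shown are illustrative only.

```python
import numpy as np

# The two intra-individual variability indices used above, computed from one
# participant's reaction times (the millisecond values are made up).
def iiv_indices(reaction_times_ms):
    rt = np.asarray(reaction_times_ms, dtype=float)
    isd = rt.std(ddof=1)            # intraindividual standard deviation (iSD)
    icv_rt = isd / rt.mean()        # coefficient of variation of RT (ICV-RT)
    return isd, icv_rt

isd, icv = iiv_indices([520, 610, 480, 700, 560, 650])
print(f"iSD = {isd:.1f} ms, ICV-RT = {icv:.3f}")
```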

Keywords: intra-individual variability, attention network, neurocognitive disorders, ageing

Procedia PDF Downloads 476
4379 A Neurosymbolic Learning Method for Uplink LTE-A Channel Estimation

Authors: Lassaad Smirani

Abstract:

In this paper, we propose a Neurosymbolic Learning System (NLS) as a channel estimator for the Long Term Evolution Advanced (LTE-A) uplink. The main idea of the proposed system is a neural-network-based architecture with modules capable of bidirectional information transfer between a symbolic module and a connectionist module. We demonstrate various strengths of the NLS, especially the ability to integrate theoretical knowledge (rules) and experiential knowledge (examples) and to convert an initial knowledge base (rules) into a connectionist network. The system can also use empirical knowledge, which, through learning, allows it to revise the theoretical knowledge, acquire and explain new knowledge, and ultimately improve the performance of symbolic or connectionist systems. Compared with conventional SC-FDMA channel estimation systems, the performance of the NLS in terms of complexity and quality is confirmed by theoretical analysis and simulation, showing that this system improves channel estimation accuracy and decreases the bit error rate.

Keywords: channel estimation, SC-FDMA, neural network, hybrid system, BER, LTE-A

Procedia PDF Downloads 394
4378 Estimation of Fouling in a Cross-Flow Heat Exchanger Using Artificial Neural Network Approach

Authors: Rania Jradi, Christophe Marvillet, Mohamed Razak Jeday

Abstract:

One of the most frequently encountered problems in industrial heat exchangers is fouling, which degrades the thermal and hydraulic performance of this type of equipment and can lead to failure if undetected. It occurs due to the accumulation of undesired material on the heat transfer surface, so knowledge of the heat exchanger fouling dynamics is necessary to plan mitigation strategies and ensure sustainable and safe operation. This paper proposes an Artificial Neural Network (ANN) approach to estimate the fouling resistance in a cross-flow heat exchanger using operating data collected from a phosphoric acid concentration loop. A set of 361 operating data points was used to validate the proposed model. The ANN attains AARD = 0.048%, MSE = 1.811×10⁻¹¹, RMSE = 4.256×10⁻⁶, and r² = 99.5%, which confirms that it is a credible and valuable approach for industrialists and technologists who face the drawbacks of fouling in heat exchangers.
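
The reported accuracy metrics can be reproduced for any predicted-versus-measured fouling-resistance series with a few lines of Python, as sketched below with illustrative numbers (not the paper's data).

```python
import numpy as np

# The accuracy metrics quoted above, computed for a predicted vs. measured
# fouling-resistance series. The values below are illustrative only.
def fit_metrics(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_pred - y_true
    aard = np.mean(np.abs(err) / np.abs(y_true)) * 100      # average absolute relative deviation, %
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    ss_res = np.sum(err ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1 - ss_res / ss_tot
    return aard, mse, rmse, r2

y_meas = [1.2e-4, 1.5e-4, 1.9e-4, 2.4e-4]
y_pred = [1.21e-4, 1.48e-4, 1.93e-4, 2.38e-4]
print(fit_metrics(y_meas, y_pred))
```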

Keywords: cross-flow heat exchanger, fouling, estimation, phosphoric acid concentration loop, artificial neural network approach

Procedia PDF Downloads 199
4377 Emotion Detection in Twitter Messages Using Combination of Long Short-Term Memory and Convolutional Deep Neural Networks

Authors: Bahareh Golchin, Nooshin Riahi

Abstract:

One of the most significant issues attracting attention in recent years is recognizing sentiments and emotions in social media texts. Sentiment and emotion analysis aims to recognize conceptual information such as the opinions, feelings, attitudes, and emotions of people towards products, services, organizations, people, topics, events, and features in written text, which indicates the breadth of the problem space. In the real world, businesses and organizations are always looking for tools to gather the ideas, emotions, and opinions of people about their products, services, or related events. This article uses the Twitter social network, one of the most popular social networks with about 420 million active users, to extract data. On this social network, users share their information and opinions about personal issues, policies, products, events, and so on, and the availability of its data makes it suitable for classifying emotional states. In this study, supervised learning and deep neural network algorithms are used to classify the emotional states of Twitter users. The use of deep learning methods to increase the learning capacity of the model is an advantage given the large amount of available data. Tweets collected on various topics are classified into four classes using a combination of two bidirectional Long Short-Term Memory networks and a convolutional network. The results of this study, with an average accuracy of 93%, show the effectiveness of the proposed framework and improved accuracy compared to previous work.
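
A hedged Keras sketch of a network in this spirit, a convolutional block followed by two bidirectional LSTM layers and a 4-class softmax, is shown below; the vocabulary size, sequence length, and layer sizes are assumptions, not the authors' exact configuration.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Input, Embedding, Conv1D, MaxPooling1D,
                                     Bidirectional, LSTM, Dense, Dropout)

# Sketch of a combined CNN + BiLSTM emotion classifier over four classes.
# Vocabulary size, sequence length and layer sizes are hypothetical.
VOCAB_SIZE, EMBED_DIM, MAX_LEN, NUM_CLASSES = 20000, 128, 60, 4

model = Sequential([
    Input(shape=(MAX_LEN,)),                       # padded token-id sequences
    Embedding(VOCAB_SIZE, EMBED_DIM),
    Conv1D(64, kernel_size=5, activation="relu"),  # local n-gram features
    MaxPooling1D(pool_size=2),
    Bidirectional(LSTM(64, return_sequences=True)),
    Bidirectional(LSTM(32)),
    Dropout(0.3),
    Dense(NUM_CLASSES, activation="softmax"),      # four emotion classes
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(padded_token_ids, one_hot_labels, epochs=5, validation_split=0.1)
```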

Keywords: emotion classification, sentiment analysis, social networks, deep neural networks

Procedia PDF Downloads 139
4376 Neural Network Based Decision Trees Using Machine Learning for Alzheimer's Diagnosis

Authors: P. S. Jagadeesh Kumar, Tracy Lin Huan, S. Meenakshi Sundaram

Abstract:

Alzheimer's disease is a prevalent ailment for which no prompt remedy or effective therapy has yet been established. With a probable explosion in the number of patients in the upcoming years, there is an enormous amount of interest in early detection of the disorder, which could conceivably lead to enhanced treatment outcomes. Complex changes in the brain are an observable hallmark of the disease, alongside its distinctive genetic signs. Machine learning, alongside deep learning and decision trees, strengthens the ability to learn characteristics from multi-dimensional data and thus simplifies the automatic classification of Alzheimer's disease. Extensive testing was designed and carried out to train and assess the prospects of Alzheimer's disease classification built on machine learning advances. It was observed that decision trees trained with a deep neural network produced excellent results comparable to related pattern classification approaches.

Keywords: Alzheimer's diagnosis, decision trees, deep neural network, machine learning, pattern classification

Procedia PDF Downloads 297
4375 A Novel Gateway Location Algorithm for Wireless Mesh Networks

Authors: G. M. Komba

Abstract:

An Internet Gateway (IGW) has more capability than a simple Mesh Router (MR) and carries the responsibility of routing most of the traffic from Mesh Clients (MCs) to the Internet backbone; however, IGWs are more expensive. Choosing strategic locations for the IGWs in the Backbone Wireless Mesh (BWM) is critical to the Wireless Mesh Network (WMN), and the placement of IGWs can mitigate a number of performance-related problems. In this paper, we propose a novel algorithm, namely the New Gateway Location Algorithm (NGLA), which aims to decrease the network cost, minimize delay, and optimize the throughput capacity. Different from existing algorithms, the NGLA incrementally identifies IGWs, allocates mesh routers (MRs) to the identified IGWs, and promises to find a feasible IGW placement that installs as few IGWs as possible while consistently preserving all Quality of Service (QoS) requests. Simulation results show that the NGLA outperforms other algorithms by a large margin in terms of the number of IGWs, placing 40% fewer IGWs and achieving an 80% gain in throughput. Furthermore, the NGLA is easy to implement and could be employed for BWMs.

Keywords: Wireless Mesh Network, Gateway Location Algorithm, Quality of Service, BWM

Procedia PDF Downloads 373
4374 Dynamic Cellular Remanufacturing System (DCRS) Design

Authors: Tariq Aljuneidi, Akif Asil Bulgak

Abstract:

Remanufacturing may be defined as the process of bringing used products back to a "like-new" functional state with a warranty to match, and it is one of the most popular product end-of-life scenarios. An efficient remanufacturing network leads to an efficient design of a sustainable manufacturing enterprise. In a remanufacturing network, products are collected from the customer zone, disassembled, and remanufactured at a suitable remanufacturing facility. In this respect, another issue to consider is how the returned products are to be remanufactured, in other words, what the best layout for such a facility is. In order to achieve a sustainable manufacturing system, Cellular Manufacturing System (CMS) designs are highly recommended: CMSs combine the high throughput rates of line layouts with the flexibility offered by functional layouts (job shops). Introducing a CMS while designing a remanufacturing network improves the utilization of such a network. This paper presents and analyzes a comprehensive mathematical model for the design of Dynamic Cellular Remanufacturing Systems (DCRSs); the proposed model is the first to date to consider a CMS and a remanufacturing system simultaneously. The proposed DCRS model considers several manufacturing attributes such as multi-period production planning, dynamic system reconfiguration, duplicate machines, machine capacity, available time for workers, worker assignments, and machine procurement, where the demand is entirely satisfied from returned products. A numerical example is presented to illustrate the proposed model.

Keywords: cellular manufacturing system, remanufacturing, mathematical programming, sustainability

Procedia PDF Downloads 379
4373 Instant Fire Risk Assessment Using Artificial Neural Networks

Authors: Tolga Barisik, Ali Fuat Guneri, K. Dastan

Abstract:

Major industrial facilities have a high potential for fire risk. In particular, the indices used for the detection of hidden fires are very effective in preventing a fire from becoming dangerous in its initial stage. These indices make it possible to prevent or intervene early by determining the stage of the fire, the hazard potential, and the type of combustion agent from the percentage values of the ambient air components. In this study, an artificial neural network of the multi-layer perceptron (supervised learning) type is modeled with the determined input data and trained using the Levenberg-Marquardt algorithm, following the modeling methods in the literature. The actual values produced by the indices are compared with the outputs produced by the network. Using the neural network and the curves created from the resulting values, the feasibility of performance determination is investigated.

Keywords: artificial neural networks, fire, Graham Index, Levenberg-Marquardt algorithm, oxygen decrease percentage index, risk assessment, Trickett Index

Procedia PDF Downloads 138
4372 Research on the Spatial Organization and Collaborative Innovation of Innovation Corridors from the Perspective of Ecological Niche: A Case Study of Seven Municipal Districts in Jiangsu Province, China

Authors: Weikang Peng

Abstract:

The innovation corridor is an important spatial carrier for promoting regional collaborative innovation, and its development is a process of spatial re-organization of regional innovation resources. This paper takes the Nanjing-Zhenjiang G312 Industrial Innovation Corridor, which involves seven municipal districts in Jiangsu Province, as empirical evidence. Based on multi-source spatial big data from 2010, 2016, and 2022, the paper applies a triangulated irregular network (TIN), head/tail breaks, a regional innovation ecosystem (RIE) niche fitness evaluation model, and social network analysis to carry out empirical research on the spatial organization and functional structural evolution of the innovation corridor and their correlation with the structural evolution of the collaborative innovation network. The results show, first, that the development of innovation patches in the corridor has fractal characteristics in time and space and tends towards a multi-center, clustered layout along the Nanjing Bypass Highway and National Highway G312. Second, there are large differences in the spatial distribution of niche fitness in the corridor across dimensions, and the niche fitness of innovation patches along the highway has increased significantly. Third, the scale of the collaborative innovation network in the corridor is expanding rapidly. The core of the network is shifting from the main urban area to the periphery of the city along the highway, with small-world and hierarchical properties, and a pronounced core-edge structure. As the corridor develops, the main collaborative mode is changing from collaboration within innovation patches to collaboration between innovation patches, and patches with high ecological suitability tend to be the active areas of collaborative innovation. Overall, a polycentric spatial layout, a graded functional structure, diversified innovation clusters, and differentiated environmental support play an important role in effectively constructing collaborative innovation linkages and in the stable expansion of collaborative innovation within the innovation corridor.
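
For readers unfamiliar with head/tail breaks, the classification scheme named above, a minimal Python sketch on toy heavy-tailed data is given below; the stopping share of 40% is the commonly used rule of thumb, not a value taken from the paper.

```python
import numpy as np

# Head/tail breaks for heavy-tailed data such as innovation-patch sizes:
# split at the mean, keep the "head" (values above the mean) and repeat
# while the head remains a small share of the current data.
def head_tail_breaks(values, head_share_limit=0.4):
    values = np.asarray(values, dtype=float)
    breaks = []
    part = values
    while part.size > 1:
        mean = part.mean()
        head = part[part > mean]
        breaks.append(mean)
        if head.size == 0 or head.size / part.size > head_share_limit:
            break
        part = head
    return breaks

sizes = np.random.default_rng(1).pareto(1.5, 500) + 1   # heavy-tailed toy data
print(head_tail_breaks(sizes))
```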

Keywords: innovation corridor development, spatial structure, niche fitness evaluation model, head/tail breaks, innovation network

Procedia PDF Downloads 22
4371 Router 1X3 - RTL Design and Verification

Authors: Nidhi Gopal

Abstract:

Routing is the process of moving a packet of data from source to destination; it enables messages to pass from one computer to another and eventually reach the target machine. A router is a networking device that forwards data packets between computer networks. It is connected to two or more data lines from different networks (as opposed to a network switch, which connects data lines from one single network). This paper mainly emphasizes the study of the router device and its top-level architecture, and shows how the various sub-modules of the router, i.e., the register, FIFO, FSM, and synchronizer, are synthesized, simulated, and finally connected to the top module.

Keywords: data packets, networking, router, routing

Procedia PDF Downloads 815
4370 Social Media, Networks and Related Technology: Business and Governance Perspectives

Authors: M. A. T. AlSudairi, T. G. K. Vasista

Abstract:

The concept of social media is at the top of the agenda for many business and public sector executives today. Decision makers, as well as consultants, try to identify ways in which firms and enterprises can make profitable use of social media and network-related applications such as Wikipedia, Facebook, YouTube, Google+, and Twitter. While participating in these media and networks is fun and useful for communicating effectively and efficiently, semantic and sentiment analysis and interpretation become a crucial issue. The objective of this paper is therefore to provide a literature review on social media, networks, and related technology with respect to semantic and sentiment (opinion) analysis, covering business and governance perspectives. In this regard, a case study on the use and adoption of social media in Saudi Arabia is discussed. It is concluded that semantic web technology plays a significant role in analyzing social networks and social media content to extract interpretational knowledge for strategic decision support.

Keywords: CRASP methodology, formative assessment, literature review, semantic web services, social media, social networks

Procedia PDF Downloads 452
4369 Selecting a Foreign Country to Build a Naval Base Using a Fuzzy Hybrid Decision Support System

Authors: Latif Yanar, Muammer Kaçan

Abstract:

Decision support systems are becoming more important in many fields of science and technology and are used effectively, especially when the problems to be solved are complicated and involve many criteria. In such problems, one of the main challenges for decision makers is that sometimes they cannot produce quantifiable data for evaluating the criteria, only the knowledge and judgment of experts. In recent years, fuzzy set theory and fuzzy-logic-based decision models have been gaining ground in the literature. In this study, a decision support model for selecting a country in which to build a naval base is proposed, and the model is applied for the Turkish Navy using evaluations by Turkish Navy officers and academics from the international relations departments of various universities located in Istanbul. The results of the expert evaluations in our model are calculated by a decision support tool named DESTEC 1.0, developed by the authors in the C# programming language. The tool gives advice to the decision maker using the Analytic Hierarchy Process, the Analytic Network Process, the Fuzzy Analytic Hierarchy Process, and the Fuzzy Analytic Network Process all at once. The calculated results for five foreign countries are shown in the conclusion.
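
The core AHP step inside such a tool, deriving criteria weights from a pairwise comparison matrix via its principal eigenvector, can be sketched in a few lines of Python; the 3x3 comparison matrix below is hypothetical and unrelated to the study's actual criteria.

```python
import numpy as np

# Core AHP step: derive criteria weights from a pairwise comparison matrix
# as its principal eigenvector. The 3x3 matrix below is a made-up example.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                      # principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                         # normalise to sum to 1

ci = (eigvals.real[k] - len(A)) / (len(A) - 1)   # consistency index
print("weights:", np.round(weights, 3), "CI:", round(ci, 3))
```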

Keywords: decision support system, analytic hierarchy process, fuzzy analytic hierarchy process, analytic network process, fuzzy analytic network process, naval base, country selection, international relations

Procedia PDF Downloads 593
4368 Application of a Confirmatory Composite Model for Assessing the Extent of Agricultural Digitalization: A Case of Proactive Land Acquisition Strategy (PLAS) Farmers in South Africa

Authors: Mazwane S., Makhura M. N., Ginege A.

Abstract:

Digitalization in South Africa has received considerable attention from policymakers, and the South African government's support for the development of the digital economy has been demonstrated through the enactment of various national policies and strategies. This study sought to develop an index for agricultural digitalization by applying confirmatory composite analysis (CCA) and to determine the factors that affect the development of digitalization on PLAS farms. Data on the indicators of the three dimensions of digitalization were collected from 300 Proactive Land Acquisition Strategy (PLAS) farms in South Africa using semi-structured questionnaires. CCA was employed to reduce the items into three digitalization dimensions and, ultimately, a digitalization index. Standardized digitalization index scores were extracted and fitted to a linear regression model to determine the factors affecting digitalization development. The results revealed that the model shows practical validity and can be used to measure digitalization development, as the measures of fit (geodesic distance, standardized root mean square residual, and squared Euclidean distance) were all below their respective 95% quantiles of bootstrap discrepancies (HI95 values). Therefore, digitalization is an emergent variable that can be measured using CCA. The average level of digitalization on PLAS farms was 0.2 and varied significantly across provinces. The factors that significantly influence digitalization development on PLAS land reform farms were age, gender, farm type, network type, and cellular data type. These findings should enable researchers and policymakers to understand the level of digitalization and patterns of development, and to correctly attribute digitalization development to the contributing factors.

Keywords: agriculture, digitalization, confirmatory composite model, land reform, proactive land acquisition strategy, South Africa

Procedia PDF Downloads 64
4367 Tabu Search to Draw Evacuation Plans in Emergency Situations

Authors: S. Nasri, H. Bouziri

Abstract:

Disasters are frequently experienced nowadays. They are caused by floods, landslides, and building fires, the last of which is the main focus of this study. To cope with these unexpected events, precautions must be taken to protect human lives. This work focuses on the resolution of the evacuation problem in the case of a no-notice disaster. The evacuation problem is formulated as a dynamic network flow problem; in particular, we model it as an earliest arrival flow problem with load-dependent transit times, which is classified as NP-hard. Our challenge here is to propose a metaheuristic solution for the evacuation problem. We define our objective as the maximization of the number of evacuees during the earliest periods of a time horizon T, so that people are evacuated as soon as possible. We performed an experimental study on emergency evacuation from the Tunisian children's hospital. This work prompts us to look for evacuation plans corresponding to several situations in which the network changes dynamically.
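
A generic tabu-search skeleton of the kind that could drive such an evacuation planner is sketched below in Python; the objective and neighbourhood are toy placeholders rather than the paper's earliest-arrival-flow formulation.

```python
# Generic tabu-search skeleton: the "plan" is a tuple of evacuee flows per
# time period, and the toy objective rewards moving flow to earlier periods.
def objective(plan):
    return sum(f * (len(plan) - t) for t, f in enumerate(plan))

def neighbours(plan):
    # move one unit of flow between two time periods
    out = []
    for i in range(len(plan)):
        for j in range(len(plan)):
            if i != j and plan[i] > 0:
                cand = list(plan)
                cand[i] -= 1
                cand[j] += 1
                out.append(tuple(cand))
    return out

def tabu_search(start, iters=200, tabu_len=20):
    current = best = tuple(start)
    tabu = []
    for _ in range(iters):
        cands = [p for p in neighbours(current) if p not in tabu]
        if not cands:
            break
        current = max(cands, key=objective)   # best admissible move
        tabu.append(current)
        tabu = tabu[-tabu_len:]               # short-term memory
        if objective(current) > objective(best):
            best = current
    return best, objective(best)

print(tabu_search([3, 3, 3, 3]))   # 12 evacuee units over 4 periods
```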

Keywords: dynamic network flow, load dependent transit time, evacuation strategy, earliest arrival flow problem, tabu search metaheuristic

Procedia PDF Downloads 372
4366 Centrality and Patent Impact: Coupled Network Analysis of Artificial Intelligence Patents Based on Co-Cited Scientific Papers

Authors: Xingyu Gao, Qiang Wu, Yuanyuan Liu, Yue Yang

Abstract:

In the era of the knowledge economy, the relationship between scientific knowledge and patents has garnered significant attention. Understanding the intricate interplay between the foundations of science and technological innovation has emerged as a pivotal challenge for both researchers and policymakers. This study establishes a coupled network of artificial intelligence patents based on co-cited scientific papers. Leveraging centrality metrics from network analysis offers a fresh perspective on understanding the influence of information flow and knowledge sharing within the network on patent impact. The study initially obtained patent numbers for 446,890 granted US AI patents from the United States Patent and Trademark Office’s artificial intelligence patent database for the years 2002-2020. Subsequently, specific information regarding these patents was acquired using the Lens patent retrieval platform. Additionally, a search and deduplication process was performed on scientific non-patent references (SNPRs) using the Web of Science database, resulting in the selection of 184,603 patents that cited 37,467 unique SNPRs. Finally, this study constructs a coupled network comprising 59,379 artificial intelligence patents by utilizing scientific papers co-cited in patent backward citations. In this network, nodes represent patents, and if patents reference the same scientific papers, connections are established between them, serving as edges within the network. Nodes and edges collectively constitute the patent coupling network. Structural characteristics such as node degree centrality, betweenness centrality, and closeness centrality are employed to assess the scientific connections between patents, while citation count is utilized as a quantitative metric for patent influence. Finally, a negative binomial model is employed to test the nonlinear relationship between these network structural features and patent influence. The research findings indicate that network structural features such as node degree centrality, betweenness centrality, and closeness centrality exhibit inverted U-shaped relationships with patent influence. Specifically, as these centrality metrics increase, patent influence initially shows an upward trend, but once these features reach a certain threshold, patent influence starts to decline. This discovery suggests that moderate network centrality is beneficial for enhancing patent influence, while excessively high centrality may have a detrimental effect on patent influence. This finding offers crucial insights for policymakers, emphasizing the importance of encouraging moderate knowledge flow and sharing to promote innovation when formulating technology policies. It suggests that in certain situations, data sharing and integration can contribute to innovation. Consequently, policymakers can take measures to promote data-sharing policies, such as open data initiatives, to facilitate the flow of knowledge and the generation of innovation. Additionally, governments and relevant agencies can achieve broader knowledge dissemination by supporting collaborative research projects, adjusting intellectual property policies to enhance flexibility, or nurturing technology entrepreneurship ecosystems.
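
A compressed sketch of this pipeline, building a coupling network, computing the three centralities, and fitting a negative binomial regression of citation counts on them (with squared terms for the inverted-U test), might look like the following Python; the graph and citation counts are synthetic placeholders.

```python
import networkx as nx
import numpy as np
import statsmodels.api as sm

# Toy stand-in for the patent coupling network and citation counts; the real
# study uses 59,379 patents linked through co-cited scientific papers.
rng = np.random.default_rng(0)
G = nx.gnm_random_graph(200, 600, seed=0)

deg = nx.degree_centrality(G)
btw = nx.betweenness_centrality(G)
cls = nx.closeness_centrality(G)

X = np.column_stack([[deg[n] for n in G], [btw[n] for n in G], [cls[n] for n in G]])
X = sm.add_constant(np.column_stack([X, X ** 2]))   # squared terms for the inverted U
citations = rng.poisson(5, size=G.number_of_nodes())  # fake patent impact counts

model = sm.GLM(citations, X, family=sm.families.NegativeBinomial()).fit()
print(model.params.round(3))
```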

Keywords: centrality, patent coupling network, patent influence, social network analysis

Procedia PDF Downloads 54
4365 Measurement Technologies for Advanced Characterization of Magnetic Materials Used in Electric Drives and Automotive Applications

Authors: Lukasz Mierczak, Patrick Denke, Piotr Klimczyk, Stefan Siebert

Abstract:

Due to the high complexity of magnetization in electrical machines and the influence of manufacturing processes on the magnetic properties of their components, the assessment and prediction of hysteresis and eddy current losses have remained a challenge. In the design process of electric motors and generators, the power losses of stators and rotors are calculated based on material suppliers' data from standard magnetic measurements. This type of data includes neither the additional loss from non-sinusoidal multi-harmonic motor excitation nor the detrimental effects of residual stress remaining in the motor laminations after manufacturing processes such as punching, housing shrink fitting, and winding. Moreover, in production, considerable attention is given to the measurement of the mechanical dimensions of stator and rotor cores, whereas verification of their magnetic properties is typically neglected, which can lead to inconsistent efficiency of assembled motors. Therefore, to enable a comprehensive characterization of motor materials and components, Brockhaus Measurements developed a range of in-line and offline measurement technologies for testing their magnetic properties under actual motor operating conditions. Multiple sets of experimental data were obtained to evaluate the influence of factors such as elevated temperature, applied and residual stress, and arbitrary magnetization on the magnetic properties of different grades of non-oriented steel. The measured power loss for the tested samples and stator cores varied significantly, by more than 100%, compared to standard measurement conditions. The quantitative effects of each of the applied measurement conditions were analyzed. This research and the applied Brockhaus measurement methodologies emphasize the requirement for advanced characterization of magnetic materials used in electric drives and automotive applications.

Keywords: magnetic materials, measurement technologies, permanent magnets, stator and rotor cores

Procedia PDF Downloads 141
4364 Cluster Based Ant Colony Routing Algorithm for Mobile Ad-Hoc Networks

Authors: Alaa Eddien Abdallah, Bajes Yousef Alskarnah

Abstract:

Ant colony based routing algorithms are known to guarantee packet delivery, but they suffer from the huge overhead of control messages needed to discover routes. In this paper, we utilize the positions of the network nodes to group them into connected clusters, and we use cluster heads only for forwarding the route discovery control messages. Our simulations show that the new algorithm decreases the overhead dramatically without affecting the delivery rate.

Keywords: ad-hoc network, MANET, ant colony routing, position based routing

Procedia PDF Downloads 426
4363 Three-Stage Least Squares Models of Station-Level Subway Ridership: Incorporating an Analysis of Integrated Transit Network Topology Measures

Authors: Jungyeol Hong, Dongjoo Park

Abstract:

The urban transit system is a critical part of the solution to economic, energy, and environmental challenges, and it ultimately contributes to improving people's quality of life. To realize these advantages, the city of Seoul has constructed an integrated transit system comprising both subway and buses, and approximately 6.9 million citizens now use the integrated system every day for their trips. Diagnosing the current transit network is a significant task in providing a more convenient and pleasant transit environment. Therefore, the main objective of this study is to establish a methodological framework for the analysis of an integrated bus-subway network and to examine the relationship between subway ridership and parameters such as network topology measures, bus demand, and a variety of commercial business facilities. Regarding statistical approaches to estimating subway ridership at the station level, many previous studies relied on Ordinary Least Squares regression, but few considered the endogeneity issues that may arise in a subway ridership prediction model. This study focuses both on the impacts of integrated transit network topology measures and on the endogenous effect of bus demand on subway ridership, and can ultimately contribute to more accurate subway ridership estimation that accounts for this statistical bias. The spatial scope of the study covers the city of Seoul, South Korea, including 243 subway stations and 10,120 bus stops, and the temporal scope covers twenty-four hours in one-hour interval panels. Detailed subway and bus ridership information was collected from the Seoul Smart Card data for 2015 and 2016. First, integrated subway-bus network topology measures characterizing connectivity, centrality, transitivity, and reciprocity were estimated based on complex network theory, and the results were compared to a subway-only network topology. Then, a non-recursive approach, Three-Stage Least Squares, was applied to develop the daily subway ridership model while capturing the endogeneity between bus and subway demand. Independent variables included roadway geometry, commercial business characteristics, socio-economic characteristics, a safety index, transit facility attributes, and dummies for seasons and time zones. Consequently, it was found that the network topology measures had significant effects. In particular, for the centrality measures, the elasticity was 4.88% for closeness centrality and 24.48% for betweenness centrality, while the elasticity of bus ridership was 8.85%. Moreover, bus demand and subway ridership were shown to be endogenous in a non-recursive manner, as predicted bus ridership and predicted subway ridership were statistically significant in the OLS regression models. Therefore, the three-stage least squares model appears to be a plausible model for efficient subway ridership estimation. It is expected that the proposed approach provides a reliable guideline that can be used as part of the spectrum of tools for evaluating a city-wide integrated transit network.
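
To make the instrumenting idea concrete, the Python sketch below shows only the first two stages (predicting bus ridership from instruments, then using the prediction in the subway equation); full 3SLS additionally applies GLS across the two equations. The variables and data are synthetic placeholders, not the Seoul Smart Card data.

```python
import numpy as np
import statsmodels.api as sm

# Stage 1: bus ridership on instruments + exogenous topology measures.
# Stage 2: subway ridership on predicted bus ridership + topology measures.
rng = np.random.default_rng(0)
n = 500
closeness = rng.uniform(0, 1, n)                  # network topology measure
betweenness = rng.uniform(0, 1, n)
bus_stops = rng.integers(1, 30, n)                # instrument for bus demand

bus = 100 + 40 * bus_stops + 200 * closeness + rng.normal(0, 50, n)
subway = 500 + 0.8 * bus + 900 * betweenness + rng.normal(0, 80, n)

X1 = sm.add_constant(np.column_stack([bus_stops, closeness, betweenness]))
bus_hat = sm.OLS(bus, X1).fit().fittedvalues      # first-stage prediction

X2 = sm.add_constant(np.column_stack([bus_hat, closeness, betweenness]))
print(sm.OLS(subway, X2).fit().params.round(2))   # second-stage coefficients
```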

Keywords: integrated transit system, network topology measures, three-stage least squared, endogeneity, subway ridership

Procedia PDF Downloads 179
4362 Coupling Random Demand and Route Selection in the Transportation Network Design Problem

Authors: Shabnam Najafi, Metin Turkay

Abstract:

The network design problem (NDP) is used to determine the optimal values of certain pre-specified decision variables, such as the capacity expansion of nodes and links, by optimizing various system performance measures including safety, congestion, and accessibility. The designed transportation network should improve the objective functions defined for the system while also accounting for the route choice behavior of network users. NDP studies have mostly investigated random demand and route selection constraints separately due to computational challenges; in this work, we consider both simultaneously. We present a nonlinear stochastic model for the land use and road network design problem that addresses the development of different functional zones in urban areas by considering both a cost function and air pollution. The model minimizes the cost function and air pollution simultaneously, with random demand and a stochastic route selection constraint, aiming to optimize network performance via road capacity expansion. The Bureau of Public Roads (BPR) link impedance function is used to determine the travel time on each link. We consider a city with origin and destination nodes, which can be residential, employment, or both, and a set of existing paths between origin-destination (O-D) pairs. The case of an increasing employed population is analyzed to determine the amount of roads and origin zones simultaneously, minimizing the travel and expansion cost of routes and origin zones on the one hand and CO emissions on the other. In this work, demand between O-D pairs is random, and the network flow pattern is subject to stochastic user equilibrium, specifically a logit route choice model; treating both demand and route choice as random makes the model more applicable to urban network design. The epsilon-constraint method is one approach for solving both linear and nonlinear multi-objective problems and is used here: the first objective (the cost function) is kept as the objective function of the problem, and the second objective becomes a constraint required to be less than an epsilon, where epsilon is an upper bound on the emission function. The value of epsilon is varied from the worst to the best value of the emission function to generate the family of solutions representing the Pareto set. A numerical example with 2 origin zones, 2 destination zones, and 7 links is solved in GAMS, and the set of Pareto points is obtained; there are 15 efficient solutions. According to these solutions, as the cost function value increases, the emission function value decreases, and vice versa.
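
Two pieces of this formulation can be illustrated in a few lines of Python: the BPR link travel-time function and the epsilon-constraint sweep that keeps cost as the objective while bounding emissions; the single-link toy problem below is a placeholder, not the paper's network model.

```python
import numpy as np
from scipy.optimize import minimize

def bpr_time(flow, capacity, t0=1.0, alpha=0.15, beta=4.0):
    """BPR link impedance: t = t0 * (1 + alpha * (flow / capacity)**beta)."""
    return t0 * (1.0 + alpha * (flow / capacity) ** beta)

def cost(x):       # toy: expansion cost + congestion cost on one link
    cap = 100.0 + x[0]
    return 2.0 * x[0] + 300.0 * bpr_time(150.0, cap)

def emission(x):   # toy emission term that rises with expansion-induced traffic
    return 50.0 + 0.6 * x[0]

pareto = []
for eps in np.linspace(emission([0.0]), emission([60.0]), 6):   # sweep the bound
    res = minimize(cost, x0=[10.0], bounds=[(0.0, 60.0)],
                   constraints=[{"type": "ineq", "fun": lambda x, e=eps: e - emission(x)}])
    pareto.append((round(res.fun, 2), round(emission(res.x), 2)))
print(pareto)      # (cost, emission) pairs approximating the Pareto set
```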

Keywords: epsilon-constraint, multi-objective, network design, stochastic

Procedia PDF Downloads 648
4361 Predicting Indonesia External Debt Crisis: An Artificial Neural Network Approach

Authors: Riznaldi Akbar

Abstract:

In this study, we compare the in-sample and out-of-sample performance of an Artificial Neural Network (ANN) model with a back-propagation algorithm in correctly predicting external debt crises in Indonesia. We found that the exchange rate, foreign reserves, and exports are the major determinants of experiencing an external debt crisis. The ANN's in-sample performance provides relatively superior results: the model is able to classify crises correctly 89.12 per cent of the time, with a reasonably low false alarm rate of 7.01 per cent. Out of sample, the prediction performance deteriorates somewhat compared to the in-sample performance. This can be explained by the ANN model tending to over-fit the in-sample data while failing to fit the out-of-sample data well. Ten-fold cross-validation has been used to improve the out-of-sample prediction accuracy. The results also offer policy implications. The out-of-sample performance can be very sensitive to sample size, which can yield a higher total misclassification error and lower prediction accuracy. The ANN model can be used to identify past crisis episodes with some accuracy, but predicting crises outside the estimation sample is much more challenging because of the presence of uncertainty.
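
A sketch of this evaluation scheme, a back-propagation network on the three macro indicators scored with 10-fold cross-validation, is shown below using scikit-learn; the data are random placeholders, not the Indonesian series used in the study.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Placeholder data: columns stand in for exchange rate, foreign reserves and
# exports; the binary target marks crisis periods.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))
y = (X[:, 0] - 0.5 * X[:, 1] - 0.4 * X[:, 2] + rng.normal(0, 0.5, 300) > 0.8).astype(int)

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"10-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```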

Keywords: debt crisis, external debt, artificial neural network, ANN

Procedia PDF Downloads 445
4360 Liver Transplantation after Downstaging with Electrochemotherapy of Large Hepatocellular Carcinoma and Portal Vein Tumor Thrombosis: A Case Report

Authors: Luciano Tarantino, Emanuele Balzano, Aurelio Nasto

Abstract:

S.R., 53 years old. January 2009: HCV-related cirrhosis, Child-Pugh class A5; EGDS showed no oesophageal varices; no important comorbidities. Treated with PEG-IFN + ribavirin (March-November 2009) with subsequent sustained virologic response; HCV-RNA remained undetectable over time. October 2016: CT detected a small HCC nodule in segment VIII (diameter = 12 mm), treated with US-guided RF ablation. November 2016 CT: complete necrosis. Unfortunately, the patient dropped out of US and CT follow-up. September 2018: asthenia and weight loss; CT showed a large tumor infiltrating segments V, VI, and VII and complete PVTT of the right portal vein and its branches. Surgical consultation excluded any indication for liver resection and OLT. 23 October 2018: ECT of a large peri-hilar area of the tumor, including the PVTT. CT at 1 and 3 months post-treatment showed complete necrosis and retraction of the thrombus, with residual viable tumor in the peripheral portion of the right lobe. The patient was therefore re-evaluated for OLT, considered eligible, and placed on the waiting list. March 2019: CT showed no perihilar or portal vein recurrence but distant progression in the right lobe, and trans-arterial radiotherapy (TARE) of the right lobe was performed. Post-treatment CT demonstrated no perihilar or portal vein recurrence and extensive necrosis of the residual tumor. December 2019: CT demonstrated several recurrences of HCC infiltrating segments VI and VII; however, no recurrence was observed at the hepatic hilum or in the portal vessels. Therefore, in February 2020 the patient received OLT. At 44 months of follow-up, no complications, recurrence, or liver dysfunction have been observed.

Keywords: hepatocellular carcinoma, portal vein tumor thrombosis, interventional ultrasound, liver tumor ablation, liver transplantation

Procedia PDF Downloads 67
4359 An Evaluation of Neural Network Efficacies for Image Recognition on Edge-AI Computer Vision Platform

Authors: Jie Zhao, Meng Su

Abstract:

Image recognition, one of the most critical technologies in computer vision, helps machines and robots understand a scene and, if deployed appropriately, will drive a revolution in remote sensing and industrial automation. With the development of AI technologies, many prevailing and sophisticated neural networks have been developed for image recognition. However, the computer vision platforms, the hardware supporting neural networks for image recognition, are as crucial as the neural network technologies themselves and deserve more attention as research subjects, since different platforms are decisive in how well different neural networks perform. In this paper, three different computer vision platforms, a Jetson Nano (with 4 GB), a standalone laptop (with an RTX 3000-series GPU, using CUDA), and Google Colab (web-based, using a GPU), are explored, and four prominent neural network architectures (AlexNet, VGG (16/19), GoogleNet, and ResNet (18/34/50)) are investigated. Performance is evaluated for each pairing of computer vision platform and neural network in terms of recognition accuracy and time efficiency. In a case study using public ImageNet data, our findings provide a nuanced perspective on optimizing image recognition tasks across Edge-AI platforms, offering guidance on selecting appropriate neural network structures to maximize performance under hardware constraints.
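
A minimal sketch of the per-platform timing comparison, assuming a recent PyTorch/torchvision install, is given below; it loads ResNet-18 without pretrained weights and measures average forward-pass latency on whichever device is available. The batch size and iteration counts are arbitrary choices for illustration.

```python
import time
import torch
from torchvision import models

# Load one of the evaluated architectures (ResNet-18, no pretrained weights)
# and time its forward pass on the available device.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet18(weights=None).to(device).eval()
x = torch.randn(1, 3, 224, 224, device=device)      # one ImageNet-sized image

with torch.no_grad():
    for _ in range(5):                                # warm-up iterations
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(50):
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()
    elapsed = time.perf_counter() - start

print(f"{device}: {elapsed / 50 * 1000:.1f} ms per image")
```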

Keywords: alexNet, VGG, googleNet, resNet, Jetson nano, CUDA, COCO-NET, cifar10, imageNet large scale visual recognition challenge (ILSVRC), google colab

Procedia PDF Downloads 92