Search results for: convolution neuron network
3943 Modeling of the Mechanism of Ion Channel Opening of the Visual Receptor's Rod on the Light and Allosteric Effect of Rhodopsin in the Phosphorylation Process
Authors: N. S. Vassilieva-Vashakmadze, R. A. Gakhokidze, I. M. Khachatryan
Abstract:
In the first part of the paper, it is shown that both the depolarization of the cytoplasmic membrane of rods observed in invertebrates and the hyperpolarization characteristic of vertebrates under light may activate the ion (Na+) channels of the rod cytoplasmic membrane and thus give rise to a nerve impulse and its transfer to neighboring neurons. In the second part, using a quantum mechanical program for modeling molecular processes, we obtained a clear picture demonstrating the effect of charged phosphate groups on the protein components of the α-helical subunits of the visual rhodopsin receptor. The analysis shows that phosphorylation of the terminal amino acid of the seventh α-helical subunit of visual rhodopsin causes a redistribution of electron density on the atoms, i.e., polarization of the subunit, as well as a change in the configuration of the nuclear subsystem, which corresponds to a deformation process in the molecule. Based on these models, it can be concluded that the system has an internal relationship between the polarization and deformation processes, which indicates an allosteric effect. The allosteric effect is based on the quantum-mechanical principle of the self-consistency of the molecule.
Keywords: membrane potential, ion channels, visual rhodopsin, allosteric effect
Procedia PDF Downloads 272
3942 Students’ Online Forum Activities and Social Network Analysis in an E-Learning Environment
Authors: P. L. Cheng, I. N. Umar
Abstract:
An online discussion forum is a popular e-learning technique that allows participants to interact and construct knowledge. This study aims to examine the levels of participation, the categories of participants and the structure of their interactions in a forum. A convenience sample of one course coordinator and 23 graduate students was selected for this study. The forum’s log file and Social Network Analysis software were used. The analysis reveals 610 activities (including viewing a forum topic, viewing a discussion thread, posting a new thread, replying to other participants’ posts, updating an existing thread and deleting a post) performed in this forum, with an average of 3.83 threads posted. This forum also consists of five at-risk participants, six bridging participants, four isolated participants and five leaders of information. In addition, the network density value is 0.15 and there are five reciprocal interactions in this forum. The closeness value varied between 28 and 68, while the eigenvector centrality value varied between 0.008 and 0.39. The findings indicate that the participants tend to listen rather than express their opinions in the forum. It was also revealed that those who actively provided support in the discussion forum were not the same people who received the most responses from their peers. This study found that cliques do not exist in the forum and that the participants are not selective about whom they respond to; rather, responses were based on the content of the posts made by their peers. Based upon the findings, further analysis with a different method and population, a larger sample size and a longer time frame is recommended.
Keywords: e-learning, learning management system, online forum, social network analysis
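For readers unfamiliar with these measures, the sketch below (hypothetical reply links, not the study's log data) shows how density, reciprocity, closeness and eigenvector centrality can be computed with networkx; eigenvector centrality is computed on the undirected projection here for robustness.

```python
# Illustration only: tiny made-up reply network (edge u -> v means u replied to v).
import networkx as nx

replies = [("s1", "s2"), ("s2", "s1"), ("s3", "s1"), ("s4", "s1"),
           ("s5", "s2"), ("s1", "s5"), ("coordinator", "s3")]
G = nx.DiGraph(replies)

print("density:", round(nx.density(G), 3))
print("reciprocity:", round(nx.reciprocity(G), 3))
print("closeness:", {n: round(c, 2) for n, c in nx.closeness_centrality(G).items()})
# Eigenvector centrality computed on the undirected projection for robustness.
eig = nx.eigenvector_centrality(G.to_undirected(), max_iter=500)
print("eigenvector:", {n: round(c, 3) for n, c in eig.items()})
```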
Procedia PDF Downloads 391
3941 Advancing the Hi-Tech Ecosystem in the Periphery: The Case of the Sea of Galilee Region
Authors: Yael Dubinsky, Orit Hazzan
Abstract:
There is a constant need for hi-tech innovation to be decentralized to peripheral regions. This work describes how we applied design science research (DSR) principles to define what we refer to as the Sea of Galilee (SoG) method. The goal of the SoG method is to harness existing and new technological initiatives in peripheral regions to create a socio-technological network that can initiate and maintain hi-tech activities. The SoG method consists of a set of principles, a stakeholder network, and actual hi-tech business initiatives, including their infrastructure and practices. The three cycles of DSR (the Relevance, Design, and Rigor cycles) lay out a research framework to sharpen the requirements, collect data from case studies, and iteratively refine the SoG method based on the existing knowledge base. We propose that the SoG method can be deployed by regional authorities that wish to be considered smart regions (an extension of the notion of smart cities).
Keywords: design science research, socio-technological initiatives, Sea of Galilee method, periphery stakeholder network, hi-tech initiatives
Procedia PDF Downloads 132
3940 Deepnic, A Method to Transform Each Variable into Image for Deep Learning
Authors: Nguyen J. M., Lucas G., Brunner M., Ruan S., Antonioli D.
Abstract:
Deep learning based on convolutional neural networks (CNNs) is a very powerful technique for classifying information from an image. We propose a new method, DeepNic, to transform each variable of a tabular dataset into an image in which each pixel represents a set of conditions that allow the variable to make an error-free prediction. The contrast of each pixel is proportional to its prediction performance, and the color of each pixel corresponds to a sub-family of NICs. NICs are probabilities that depend on the number of inputs to each neuron and the range of coefficients of the inputs. Each variable can therefore be expressed as a function of a matrix of two vectors corresponding to an image whose pixels express predictive capabilities. Our objective is to transform each variable of tabular data into an image that can be analysed by CNNs, unlike other methods which use all the variables to construct a single image. We analyse the NIC information of each variable and express it as a function of the number of neurons and the range of coefficients used. The predictive value and the category of the NIC are expressed by the contrast and the color of the pixel. We have developed a pipeline to implement this technology and have successfully applied it to genomic expressions on an Affymetrix chip.
Keywords: tabular data, deep learning, perfect trees, NICs
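The NIC construction itself is specific to the paper; the toy sketch below only illustrates the general idea of mapping the predictive power of a single tabular variable onto pixel intensities that a CNN could then consume. The threshold grid, the windowing scheme and the data are all assumptions, not the DeepNic algorithm.

```python
# Toy illustration: map the predictive power of one tabular variable onto a small
# grayscale image, where each pixel holds the accuracy of a simple threshold rule
# evaluated on one subset of the samples.
import numpy as np

def variable_to_image(x, y, n_thresholds=16, n_windows=16):
    """x: 1-D feature, y: binary labels. Returns an (n_windows, n_thresholds) image."""
    thresholds = np.quantile(x, np.linspace(0.05, 0.95, n_thresholds))
    order = np.argsort(x)                        # split samples into windows ordered by x
    windows = np.array_split(order, n_windows)
    img = np.zeros((n_windows, n_thresholds))
    for i, idx in enumerate(windows):
        for j, t in enumerate(thresholds):
            pred = (x[idx] > t).astype(int)      # simple threshold classifier
            acc = (pred == y[idx]).mean()        # its accuracy on this subset
            img[i, j] = max(acc, 1 - acc)        # contrast proportional to performance
    return img

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = (x + rng.normal(scale=0.5, size=500) > 0).astype(int)
print(variable_to_image(x, y).shape)             # (16, 16) image, ready for a CNN
```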
Procedia PDF Downloads 91
3939 D3Advert: Data-Driven Decision Making for Ad Personalization through Personality Analysis Using BiLSTM Network
Authors: Sandesh Achar
Abstract:
Personalized advertising holds greater potential for higher conversion rates compared to generic advertisements. However, its widespread application in the retail industry faces challenges due to complex implementation processes. These complexities impede the swift adoption of personalized advertising on a large scale. Personalized advertising, being a data-driven approach, necessitates consumer-related data, adding to its complexity. This paper introduces an innovative data-driven decision-making framework, D3Advert, which personalizes advertisements by analyzing personalities using a BiLSTM network. The framework utilizes the Myers–Briggs Type Indicator (MBTI) dataset for development. The employed BiLSTM network, specifically designed and optimized for D3Advert, classifies user personalities into one of the sixteen MBTI categories based on their social media posts. The classification accuracy is 86.42%, with precision, recall, and F1-score values of 85.11%, 84.14%, and 83.89%, respectively. The D3Advert framework personalizes advertisements based on these personality classifications. Experimental implementation and performance analysis of D3Advert demonstrate a 40% improvement in impressions. D3Advert’s innovative and straightforward approach has the potential to transform personalized advertising and foster its widespread adoption in marketing.
Keywords: personalized advertisement, deep learning, MBTI dataset, BiLSTM network, NLP
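As a point of reference only, a minimal BiLSTM text classifier for sixteen classes might look like the sketch below; the vocabulary size, sequence length and layer widths are assumptions, not the configuration reported in the paper.

```python
# Hypothetical BiLSTM classifier for 16 personality classes (illustrative sizes only).
from tensorflow.keras import layers, models

VOCAB_SIZE, MAX_LEN, NUM_CLASSES = 20000, 200, 16

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,)),                    # padded token-id sequences
    layers.Embedding(VOCAB_SIZE, 128),                 # token embeddings
    layers.Bidirectional(layers.LSTM(64)),             # reads posts in both directions
    layers.Dropout(0.3),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),   # one of 16 MBTI types
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(X_train, y_train, validation_split=0.1, epochs=5)  # X_train: padded token ids
```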
Procedia PDF Downloads 45
3938 Study of Energy Efficient and Quality of Service Based Routing Protocols in Wireless Sensor Networking
Authors: Sachin Sharma
Abstract:
A wireless sensor network (WSN) consists of a large number of sensor nodes which are deployed over an area to perform local computations based on information gathered from the surroundings. With the increasing demand for real-time applications in WSNs, real-time critical events call for efficient quality-of-service (QoS) based routing for data delivery from the network infrastructure. Maximizing the lifetime of the network by minimizing its energy consumption is therefore an important challenge in WSNs, since sensors cannot be easily replaced or recharged due to their ad-hoc deployment in hazardous environments. Considerable research has focused on developing robust, energy-efficient, QoS-based routing protocols. The main focus of this article is primarily on periodic cycling schemes, which represent the most compatible technique for energy saving, and we also consider data-driven approaches that can be used to improve energy efficiency. Finally, we review some communication protocols proposed for sensor networks.
Keywords: energy efficient, quality of service, wireless sensor networks, MAC
Procedia PDF Downloads 349
3937 Replacing an Old PFN System with a Solid State Modulator without Changing the Klystron Transformer
Authors: Klas Elmquist, Anders Larsson
Abstract:
Until the year 2000, almost all short-pulse modulators in the accelerator world were made with the pulse forming network (PFN) technique. Pulse forming network systems have since been replaced with solid-state modulators that have better efficiency, better stability, and a lower cost of ownership, and they are much smaller. In this paper, it is shown that it is possible to replace a pulse forming network system with a solid-state system without changing the klystron tank and the klystron transformer. The solid-state modulator uses semiconductors switching at the 1 kV level. A first pulse transformer steps the voltage up to 10 kV. The 10 kV pulse is finally fed into the original transformer that is placed under the klystron. A flatness of 0.8 percent and a stability of 100 PPM are achieved. The test is done with a CPI 8262 type of klystron. It is also shown that it is possible to run such a system with long cables between the transformers. When using this technique, it will be possible to keep original sub-systems such as the filament system, vacuum system, focusing solenoid system, and cooling system for the klystron. This will substantially reduce the cost of an upgrade and prolong the life of the klystron system.
Keywords: modulator, solid-state, PFN system, thyratron
Procedia PDF Downloads 136
3936 Natural Gas Flow Optimization Using Pressure Profiling and Isolation Techniques
Authors: Syed Tahir Shah, Fazal Muhammad, Syed Kashif Shah, Maleeha Gul
Abstract:
In recent years, natural gas has become a relatively clean, high-quality source of energy, which is recovered from deep wells by expensive drilling activities. The recovered substance is purified by processing in multiple stages to remove unwanted contaminants such as dust, dirt, crude oil and other particles. Gas utilities are mostly concerned with the essential objectives of the quantity and quality of natural gas delivery, the financial outcome, and a safe volumetric inventory of natural gas in the transmission pipeline. Gas quantity and quality are primarily related to standards and advanced metering procedures in the processing units and transmission systems, while the financial outcome is defined by the purchase and sale of gas as well as the operational cost of the transmission pipeline. SNGPL (Sui Northern Gas Pipelines Limited), Pakistan, operates a natural gas transmission pipeline network of over 9125 km with a wide range of diameters. This research addresses a few of the issues in accuracy and metering procedures, using multiple advanced instruments for gas flow attributes deployed in the transmission system, and studies the effects of good pressure management in the transmission gas pipeline network, with a view to boosting the gas volume stored in the existing network and curbing unaccounted-for gas (UFG) losses for financial benefit. Furthermore, based on the results and their observation, it is recommended to enhance the maximum allowable operating pressure (MAOP) of the system from the current level of roughly 900 PSIG to 1235 PSIG, so that the capacity of the network can be fully utilized. Overall, the results depict that the current model is very efficient and provides excellent results in the minimum possible time.
Keywords: natural gas, pipeline network, UFG, transmission pack, AGA
Procedia PDF Downloads 96
3935 A Smart Sensor Network Approach Using Affordable River Water Level Sensors
Authors: Dian Zhang, Brendan Heery, Maria O’Neill, Ciprian Briciu-Burghina, Noel E. O’Connor, Fiona Regan
Abstract:
Recent developments in sensors, wireless data communication and cloud computing have brought the sensor web to a whole new generation. The introduction of the ‘Internet of Things (IoT)’ concept has taken sensor research to a new level, which involves the development of long-lasting, low-cost, environmentally friendly and smart sensors; new wireless data communication technologies; big data analytics algorithms; and cloud-based solutions tailored to large-scale smart sensor networks. The next generation of smart sensor networks consists of several layers: the physical layer, where all the smart sensors reside and data pre-processing occurs, either on the sensor itself or on a field gateway; the data transmission layer, where data and instruction exchanges happen; and the data processing layer, where meaningful information is extracted and organized from the pre-processed data stream. There are many definitions of a smart sensor; however, to summarize them, a smart sensor must be intelligent and adaptable. In future large-scale sensor networks, the collected data will be far too large for traditional applications to send, store or process, so the sensor unit must be intelligent enough to pre-process the collected data locally on board (this process may occur on the field gateway, depending on the sensor network structure). In this case study, three smart sensing methods, corresponding to simple thresholding, a statistical model and the machine-learning-based MoPBAS method, are introduced, and their strengths and weaknesses are discussed as an introduction to the smart sensing concept. Data fusion, the integration of data and knowledge from multiple sources, is a key component of the next generation smart sensor network. For example, in a water level monitoring system, a weather forecast can be obtained from external sources, and if heavy rainfall is expected, the server can send instructions to the sensor nodes to, for instance, increase the sampling rate or adjust the sleep mode, and vice versa. In this paper, we describe the deployment of 11 affordable water level sensors in the Dublin catchment. The objective of this paper is to use the deployed river level sensor network in the Dodder catchment in Dublin, Ireland as a case study to give a vision of the next generation of smart sensor networks for flood monitoring, to assist agencies in making decisions about deploying resources in the case of a severe flood event. Some of the deployed sensors are located alongside traditional water level sensors for validation purposes. Using the 11 deployed river level sensors as a case study, a vision of the next generation of smart sensor networks is proposed, and each of its key components is discussed, which will hopefully inspire researchers working in the sensor research domain.
Keywords: smart sensing, internet of things, water level sensor, flooding
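Of the three smart sensing methods mentioned, simple thresholding is the easiest to illustrate. The sketch below (hypothetical thresholds and simulated levels, not the Dodder deployment data) flags readings that cross warning and alert levels so that only events, rather than the raw stream, need to be transmitted.

```python
# Minimal simple-thresholding sketch for on-board water level event detection.
import numpy as np

WARN_LEVEL_M = 1.5   # assumed warning threshold in metres
ALERT_LEVEL_M = 2.0  # assumed flood-alert threshold in metres

def threshold_events(levels_m):
    """Return (index, level, status) for readings that exceed a threshold."""
    events = []
    for i, level in enumerate(levels_m):
        if level >= ALERT_LEVEL_M:
            events.append((i, round(float(level), 2), "ALERT"))
        elif level >= WARN_LEVEL_M:
            events.append((i, round(float(level), 2), "WARN"))
    return events

rng = np.random.default_rng(1)
levels = 1.0 + 0.1 * rng.standard_normal(96) + np.linspace(0, 1.2, 96)  # rising river
for event in threshold_events(levels):
    print(event)
```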
Procedia PDF Downloads 383
3934 An Exploratory Sequential Design: A Mixed Methods Model for the Statistics Learning Assessment with a Bayesian Network Representation
Authors: Zhidong Zhang
Abstract:
This study established a mixed methods model for assessing statistics learning with Bayesian network models. There are three variants of exploratory sequential designs; the one used here has three linked steps: qualitative data collection and analysis; quantitative measure, instrument, and intervention; and quantitative data collection and analysis. The study used a scoring model of analysis of variance (ANOVA) as the content domain. The research examines students’ learning in both semantic and performance aspects at a fine-grained level. The ANOVA score model, y = α + βx₁ + γx₂ + ε, was used as a cognitive task to collect data during the student learning process. When the learning processes were decomposed into multiple steps in both semantic and performance aspects, a hierarchical Bayesian network was established. This is a theory-driven process: the hierarchical structure was obtained through qualitative cognitive analysis. The data from students’ learning of the ANOVA score model were used as evidence for the hierarchical Bayesian network model through the evidential variables. Finally, the assessment results of students’ ANOVA score model learning were reported. Briefly, this was a mixed methods research design applied to statistics learning assessment. Mixed methods designs expand the possibilities for researchers to establish advanced quantitative models initially with a theory-driven qualitative mode.
Keywords: exploratory sequential design, ANOVA score model, Bayesian network model, mixed methods research design, cognitive analysis
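For concreteness, the snippet below (simulated data, hypothetical coefficients) shows how a two-predictor score model of the form y = α + βx₁ + γx₂ + ε can be fitted by ordinary least squares with numpy; it only illustrates the model's form, not the paper's Bayesian assessment procedure.

```python
# Fit y = alpha + beta*x1 + gamma*x2 + eps on simulated data (assumed coefficients).
import numpy as np

rng = np.random.default_rng(42)
n = 200
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
y = 2.0 + 1.5 * x1 - 0.8 * x2 + rng.normal(scale=0.5, size=n)  # made-up true model

X = np.column_stack([np.ones(n), x1, x2])         # design matrix [1, x1, x2]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)      # least-squares estimates
alpha_hat, beta_hat, gamma_hat = coef
print(f"alpha={alpha_hat:.2f}, beta={beta_hat:.2f}, gamma={gamma_hat:.2f}")
```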
Procedia PDF Downloads 185
3933 Tracing Back the Bot Master
Authors: Sneha Leslie
Abstract:
The current situation in the cyber world is that crimes performed by botnets are increasing and the masterminds (botmasters) are not easily detectable. The botmaster in the botnet compromises legitimate host machines in the network and makes them bots or zombies in order to initiate cyber-attacks. This paper focuses on live detection of the botmaster in the network, using the strong framework Metasploit, when a distributed denial of service (DDoS) attack is performed by the botnet. The affected victim machine continuously monitors its incoming packets. Once the victim machine notices an excessive count of packets from any IP, that particular IP is noted and details of the noted systems are gathered. Using the vulnerabilities present in the zombie machines (already compromised by the botmaster), the victim machine compromises them. By gaining access to the compromised systems, applications are run remotely. By analyzing the incoming packets of the zombies, the victim comes to know the address of the botmaster. This is an effective and simple system in which no specific features of the communication protocol are considered.
Keywords: botnet, DDoS attack, network security, detection system, Metasploit framework
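Only the very first step of the scheme, counting incoming packets per source IP and flagging the noisy ones, is sketched below; the addresses and the threshold are hypothetical, and the later Metasploit-based steps are not shown.

```python
# Toy sketch: flag source IPs whose packet count in one window exceeds a threshold.
from collections import Counter

PACKET_THRESHOLD = 1000  # assumed per-window limit

def flag_suspect_sources(packet_log):
    """packet_log: iterable of (source_ip, dest_ip) tuples for one time window."""
    counts = Counter(src for src, _ in packet_log)
    return {ip: n for ip, n in counts.items() if n > PACKET_THRESHOLD}

# Simulated window: one chatty source among normal traffic (documentation IP ranges).
window = [("203.0.113.7", "198.51.100.1")] * 5000 + [("192.0.2.10", "198.51.100.1")] * 40
print(flag_suspect_sources(window))   # {'203.0.113.7': 5000}
```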
Procedia PDF Downloads 255
3932 Trend Detection Using Community Rank and Hawkes Process
Authors: Shashank Bhatnagar, W. Wilfred Godfrey
Abstract:
In this paper, we develop an approach to find trendy topics that considers not only the user-topic interaction but also the community to which the user belongs. This method modifies the previous user-topic interaction approach into a user-community-topic interaction, with a speed-up in the range of 1.1-3x. We assume that trend detection in a social network depends on two things. The first is the broadcast of messages in the social network, governed by a self-exciting point process, namely the Hawkes process, and the second is the community rank. The influencer node links to others in the community and decides the community rank based on its PageRank and the number of users linked to that community. The community rank decides the influence of one community over the other. Hence, the Hawkes process with a user-community-topic kernel decides the trendy topic disseminated into the social network.
Keywords: community detection, community rank, Hawkes process, influencer node, PageRank, trend detection
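The self-exciting behaviour referred to above can be made concrete with the standard exponential-kernel Hawkes intensity; the parameters and event times in the sketch below are illustrative assumptions, not values from the paper.

```python
# Hawkes intensity: lambda(t) = mu + sum over past events of alpha * exp(-beta*(t - t_i)).
import numpy as np

def hawkes_intensity(t, event_times, mu=0.2, alpha=0.8, beta=1.0):
    past = np.asarray([ti for ti in event_times if ti < t])
    return mu + alpha * np.exp(-beta * (t - past)).sum()

events = [1.0, 1.2, 1.3, 4.0]          # times of observed message broadcasts (made up)
for t in [0.5, 1.5, 2.5, 4.1]:
    print(f"t={t:.1f}  intensity={hawkes_intensity(t, events):.3f}")
```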
Procedia PDF Downloads 386
3931 Off-Policy Q-learning Technique for Intrusion Response in Network Security
Authors: Zheni S. Stefanova, Kandethody M. Ramachandran
Abstract:
With the increasing dependency on our computer devices, we face the necessity of adequate, efficient and effective mechanisms for protecting our networks. There are two main problems that Intrusion Detection Systems (IDS) attempt to solve: 1) to detect the attack, by analyzing the incoming traffic and inspecting the network (intrusion detection), and 2) to produce a prompt response when the attack occurs (intrusion prevention). It is critical to create an intrusion detection model that will detect a breach in the system on time, and it is also challenging to make it provide an automatic response, with an acceptable delay, at every single stage of the monitoring process. We cannot afford to adopt security measures with high computational demands, and we cannot accept a mechanism that reacts with a delay. In this paper, we propose an intrusion response mechanism that is based on artificial intelligence and, more precisely, on reinforcement learning techniques (RLT). The RLT helps us to create a decision agent that controls the process of interacting with the undetermined environment. The goal is to find an optimal policy, which represents the intrusion response; therefore, the reinforcement learning problem is solved using a Q-learning approach. Our agent produces an optimal immediate response in the process of evaluating the network traffic. This Q-learning approach establishes the balance between exploration and exploitation and provides a unique, self-learning and strategic artificial intelligence response mechanism for IDS.
Keywords: cyber security, intrusion prevention, optimal policy, Q-learning
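To make the off-policy Q-learning idea concrete, here is a generic tabular sketch with an epsilon-greedy policy; the toy environment, the state and action counts, and the reward rule are made up and do not reflect the paper's IDS setting.

```python
# Generic tabular Q-learning: Q <- Q + lr * (r + gamma * max_a' Q(s', a') - Q).
import numpy as np

n_states, n_actions = 5, 3            # e.g. traffic states x response actions (assumed)
Q = np.zeros((n_states, n_actions))
lr, gamma, epsilon = 0.1, 0.95, 0.1

def choose_action(state, rng):
    """Epsilon-greedy: balance exploration and exploitation."""
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    return int(np.argmax(Q[state]))

def update(state, action, reward, next_state):
    """Off-policy Q-learning update."""
    td_target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += lr * (td_target - Q[state, action])

rng = np.random.default_rng(0)
state = 0
for _ in range(1000):                                  # toy random environment
    action = choose_action(state, rng)
    next_state = int(rng.integers(n_states))
    reward = 1.0 if action == next_state % n_actions else -0.1   # made-up reward rule
    update(state, action, reward, next_state)
    state = next_state
print(np.round(Q, 2))
```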
Procedia PDF Downloads 241
3930 Prediction of Unsteady Heat Transfer over Square Cylinder in the Presence of Nanofluid by Using ANN
Authors: Ajoy Kumar Das, Prasenjit Dey
Abstract:
Heat transfer due to forced convection of a copper-water based nanofluid has been predicted by an artificial neural network (ANN). The nanofluid is formed by mixing copper nanoparticles in water; the volume fractions considered here are 0% to 15%, and the Reynolds number is kept constant at 100. The back-propagation algorithm is used to train the network. The ANN is trained with the input and output data obtained from numerical simulations performed in the finite-volume-based computational fluid dynamics (CFD) commercial software Ansys Fluent. The numerical simulation results are compared with the back-propagation-based ANN results. It is found that the forced convection heat transfer of the water-based nanofluid can be predicted correctly by the ANN. It is also observed that the back-propagation ANN can predict the heat transfer characteristics of the nanofluid very quickly compared to the standard CFD method.
Keywords: forced convection, square cylinder, nanofluid, neural network
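As a rough illustration of the surrogate-modelling idea (not the paper's network or data), the sketch below trains a small multilayer perceptron to map volume fraction and a second input to a heat-transfer quantity; the synthetic target function stands in for the CFD results.

```python
# Train a small neural network surrogate on synthetic "CFD" data (assumed relationship).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
phi = rng.uniform(0.0, 0.15, 400)            # nanoparticle volume fraction, 0-15%
t = rng.uniform(0.0, 1.0, 400)               # normalized time (assumed second input)
X = np.column_stack([phi, t])
nu = 5.0 + 20.0 * phi + 0.5 * np.sin(2 * np.pi * t)   # made-up heat-transfer surrogate

model = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
model.fit(X, nu)                              # back-propagation training
print(model.predict([[0.10, 0.25]]))          # predicted value at phi = 10%
```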
Procedia PDF Downloads 322
3929 Would Intra-Individual Variability in Attention to Be the Indicator of Impending the Senior Adults at Risk of Cognitive Decline: Evidence from Attention Network Test (ANT)
Authors: Hanna Lu, Sandra S. M. Chan, Linda C. W. Lam
Abstract:
Objectives: Intra-individual variability (IIV) has been considered a biomarker of healthy ageing. However, the composite role of IIV in attention, as an impending indicator for neurocognitive disorders, warrants further exploration. This study aims to investigate IIV, as well as its relationship with attention network functions, in adults with neurocognitive disorders (NCD). Methods: 36 adults with NCD due to Alzheimer’s disease (NCD-AD), 31 adults with NCD due to vascular disease (NCD-vascular), and 137 healthy controls were recruited. Intraindividual standard deviations (iSD) and the intraindividual coefficient of variation of reaction time (ICV-RT) were used to evaluate IIV. Results: The NCD groups showed greater IIV (iSD: F = 11.803, p < 0.001; ICV-RT: F = 9.07, p < 0.001). In ROC analyses, the IIV indices could differentiate NCD-AD (iSD: AUC = 0.687, p = 0.001; ICV-RT: AUC = 0.677, p = 0.001) and NCD-vascular (iSD: AUC = 0.631, p = 0.023; ICV-RT: AUC = 0.615, p = 0.045) from healthy controls. Moreover, processing speed could distinguish NCD-AD from NCD-vascular (AUC = 0.647, p = 0.040). Discussion: Intra-individual variability in attention provides a stable measure of cognitive performance and seems to help distinguish senior adults with different cognitive statuses.
Keywords: intra-individual variability, attention network, neurocognitive disorders, ageing
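The two variability indices are simple to compute from a participant's reaction times: iSD is the within-person standard deviation, and ICV-RT divides it by the mean reaction time. The sketch below uses simulated reaction times and group sizes, not the study's data, and adds an AUC computation for illustration.

```python
# Compute iSD and ICV-RT per participant, then an illustrative group-separation AUC.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)

def iiv_indices(rts_ms):
    rts = np.asarray(rts_ms, dtype=float)
    isd = rts.std(ddof=1)            # intraindividual standard deviation
    icv_rt = isd / rts.mean()        # coefficient of variation of reaction time
    return isd, icv_rt

# Hypothetical groups: controls vs. participants with more variable responding.
controls = [iiv_indices(rng.normal(550, 60, 96))[1] for _ in range(30)]
ncd = [iiv_indices(rng.normal(650, 120, 96))[1] for _ in range(30)]
labels = np.r_[np.zeros(30), np.ones(30)]
scores = np.r_[controls, ncd]
print("ICV-RT AUC:", round(roc_auc_score(labels, scores), 3))
```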
Procedia PDF Downloads 476
3928 A Neurosymbolic Learning Method for Uplink LTE-A Channel Estimation
Authors: Lassaad Smirani
Abstract:
In this paper, we propose a Neurosymbolic Learning System (NLS) as a channel estimator for the Long Term Evolution Advanced (LTE-A) uplink. The main idea of the proposed system, which is based on a neural network, is to have modules capable of performing bidirectional information transfer between a symbolic module and a connectionist module. We demonstrate various strengths of the NLS, especially the ability to integrate theoretical knowledge (rules) and experiential knowledge (examples), and to convert an initial knowledge base (rules) into a connectionist network. The system also uses empirical knowledge which, through learning, gives it the ability to revise the theoretical knowledge, acquire new knowledge and explain it, and finally to improve the performance of symbolic or connectionist systems. Compared with conventional SC-FDMA channel estimation systems, the performance of the NLS in terms of complexity and quality is confirmed by theoretical analysis and simulation, which show that this system can improve channel estimation accuracy and decrease the bit error rate.
Keywords: channel estimation, SC-FDMA, neural network, hybrid system, BER, LTE-A
Procedia PDF Downloads 396
3927 Estimation of Fouling in a Cross-Flow Heat Exchanger Using Artificial Neural Network Approach
Authors: Rania Jradi, Christophe Marvillet, Mohamed Razak Jeday
Abstract:
One of the most frequently encountered problems in industrial heat exchangers is fouling, which degrades the thermal and hydraulic performance of this type of equipment, leading to failure if undetected. Fouling occurs due to the accumulation of undesired material on the heat transfer surface. It is therefore necessary to understand heat exchanger fouling dynamics in order to plan mitigation strategies and ensure sustainable and safe operation. This paper proposes an Artificial Neural Network (ANN) approach to estimate the fouling resistance in a cross-flow heat exchanger from operating data collected on a phosphoric acid concentration loop. A set of 361 operating data points was used to validate the proposed model. The ANN attains AARD = 0.048%, MSE = 1.811x10⁻¹¹, RMSE = 4.256x10⁻⁶ and r² = 99.5% accuracy, which confirms that it is a credible and valuable approach for industrialists and technologists faced with the drawbacks of fouling in heat exchangers.
Keywords: cross-flow heat exchanger, fouling, estimation, phosphoric acid concentration loop, artificial neural network approach
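For reference, the reported error metrics are defined and computed as in the short sketch below; the fouling-resistance values are synthetic stand-ins, not the study's measurements.

```python
# AARD (average absolute relative deviation, %), MSE, RMSE and r^2 for a regression model.
import numpy as np

def regression_metrics(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    aard = np.mean(np.abs((y_pred - y_true) / y_true)) * 100      # in percent
    mse = np.mean((y_pred - y_true) ** 2)
    rmse = np.sqrt(mse)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return aard, mse, rmse, r2

rng = np.random.default_rng(7)
r_fouling = rng.uniform(1e-4, 5e-4, 361)                    # hypothetical fouling resistances
prediction = r_fouling * (1 + rng.normal(0, 0.0005, 361))   # near-perfect model output
print(regression_metrics(r_fouling, prediction))
```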
Procedia PDF Downloads 200
3926 Emotion Detection in Twitter Messages Using Combination of Long Short-Term Memory and Convolutional Deep Neural Networks
Authors: Bahareh Golchin, Nooshin Riahi
Abstract:
One of the most significant issues that has attracted a lot of attention in recent years is recognizing the sentiments and emotions in social media texts. The analysis of sentiments and emotions is intended to recognize conceptual information such as the opinions, feelings, attitudes and emotions of people towards products, services, organizations, people, topics, events and features in written text. This indicates the size of the problem space. In the real world, businesses and organizations are always looking for tools to gather the ideas, emotions, and opinions of people about their products, services, or related events. This article uses the Twitter social network, one of the most popular social networks with about 420 million active users, to extract data. Using this social network, users can share their information and opinions about personal issues, policies, products, events, etc. It can be used for classification of emotional states due to the availability of its data. In this study, supervised learning and deep neural network algorithms are used to classify the emotional states of Twitter users. The use of deep learning methods to increase the learning capacity of the model is an advantage given the large amount of available data. Tweets collected on various topics are classified into four classes using a combination of two bidirectional long short-term memory networks and a convolutional network. The results obtained from this study, with an average accuracy of 93%, show that the proposed framework produces good results and improved accuracy compared to previous work.
Keywords: emotion classification, sentiment analysis, social networks, deep neural networks
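A hypothetical sketch of the kind of combined architecture described (a convolutional layer feeding two bidirectional LSTM layers and a four-way softmax) is given below; the layer sizes and hyperparameters are assumptions, not the configuration used in the study.

```python
# Illustrative CNN + stacked BiLSTM emotion classifier (assumed sizes, 4 classes).
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(100,)),                           # padded tweet token ids
    layers.Embedding(30000, 128),                         # tweet tokens -> vectors
    layers.Conv1D(64, 5, activation="relu"),              # local n-gram features
    layers.MaxPooling1D(2),
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    layers.Bidirectional(layers.LSTM(32)),
    layers.Dense(4, activation="softmax"),                # four emotion classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```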
Procedia PDF Downloads 140
3925 Neural Network Based Decision Trees Using Machine Learning for Alzheimer's Diagnosis
Authors: P. S. Jagadeesh Kumar, Tracy Lin Huan, S. Meenakshi Sundaram
Abstract:
Alzheimer’s disease is a prevalent kind of ailment for which no effective cure or therapy has been established to date. A probable explosion in the number of patients in the upcoming years has consequently created an enormous deal of interest in early detection of the disorder, which will conceivably lead to enhanced healing outcomes. Complex changes in the brain are an observable signature of the disease, together with a unique genetic sign of the disease. Machine learning, alongside deep learning and decision trees, reinforces the ability to learn characteristics from multi-dimensional data and thus simplifies automatic classification of Alzheimer’s disease. Tests were designed and realized for training and assessing the prospect of Alzheimer’s disease classification built on machine learning advances. It was found that decision trees trained with a deep neural network produced excellent results, parallel to related pattern classification approaches.
Keywords: Alzheimer's diagnosis, decision trees, deep neural network, machine learning, pattern classification
Procedia PDF Downloads 298
3924 A Novel Gateway Location Algorithm for Wireless Mesh Networks
Authors: G. M. Komba
Abstract:
The Internet Gateway (IGW) has extra capabilities compared to a simple Mesh Router (MR) and the responsibility to route most of the traffic from Mesh Clients (MCs) to the Internet backbone; however, IGWs are more expensive. Choosing strategic locations for the IGWs in the Backbone Wireless Mesh (BWM) is critical to the Wireless Mesh Network (WMN), and the location of the IGWs can improve a number of performance-related problems. In this paper, we propose a novel algorithm, namely the New Gateway Location Algorithm (NGLA), which aims to decrease the network cost, minimize delay, and optimize the throughput capacity. Different from existing algorithms, the NGLA incrementally identifies IGWs, allocates mesh routers (MRs) to the identified IGWs, and promises to find a feasible IGW location while installing as few IGWs as possible and consistently preserving all Quality of Service (QoS) requirements. Simulation results show that the NGLA outperforms other algorithms by a large margin in the number of IGWs, placing 40% fewer IGWs with an 80% gain in throughput. Furthermore, the NGLA is easy to implement and could be employed for BWMs.
Keywords: Wireless Mesh Network, Gateway Location Algorithm, Quality of Service, BWM
Procedia PDF Downloads 373
3923 Dynamic Cellular Remanufacturing System (DCRS) Design
Authors: Tariq Aljuneidi, Akif Asil Bulgak
Abstract:
Remanufacturing may be defined as the process of bringing used products to a “like-new” functional state with a warranty to match, and it is one of the most popular product end-of-life scenarios. An efficient remanufacturing network leads to an efficient design of a sustainable manufacturing enterprise. In a remanufacturing network, products are collected from the customer zone, disassembled and remanufactured at a suitable remanufacturing facility. In this respect, another issue to consider is how the returned product is to be remanufactured, in other words, what is the best layout for such a facility. In order to achieve a sustainable manufacturing system, Cellular Manufacturing System (CMS) designs are highly recommended; CMSs combine the high throughput rates of line layouts with the flexibility offered by functional layouts (job shops). Introducing CMS while designing a remanufacturing network will benefit the utilization of such a network. This paper presents and analyzes a comprehensive mathematical model for the design of Dynamic Cellular Remanufacturing Systems (DCRSs). The proposed model is, to date, the first to consider CMS and remanufacturing systems simultaneously. The proposed DCRS model considers several manufacturing attributes such as multi-period production planning, dynamic system reconfiguration, duplicate machines, machine capacity, available time for workers, worker assignments, and machine procurement, where the demand is totally satisfied from returned products. A numerical example is presented to illustrate the proposed model.
Keywords: cellular manufacturing system, remanufacturing, mathematical programming, sustainability
Procedia PDF Downloads 379
3922 Instant Fire Risk Assessment Using Artifical Neural Networks
Authors: Tolga Barisik, Ali Fuat Guneri, K. Dastan
Abstract:
Major industrial facilities have a high potential for fire risk. In particular, the indices used for the detection of hidden fires are applied very effectively in order to prevent a fire from becoming dangerous in its initial stage. These indices provide the opportunity to prevent or intervene early by determining the stage of the fire, the hazard potential, and the type of combustion agent from the percentage values of the ambient air components. In this system, an artificial neural network of the multi-layer perceptron (supervised, teacher-learning) type is modeled with the determined input data and trained with the Levenberg-Marquardt algorithm, following the modeling methods in the literature. The actual values produced by the indices are compared with the outputs produced by the network. Using the neural network and the curves created from the resulting values, the feasibility of performance determination is investigated.
Keywords: artificial neural networks, fire, Graham Index, Levenberg-Marquardt algorithm, oxygen decrease percentage index, risk assessment, Trickett Index
Procedia PDF Downloads 139
3921 Research on the Spatial Organization and Collaborative Innovation of Innovation Corridors from the Perspective of Ecological Niche: A Case Study of Seven Municipal Districts in Jiangsu Province, China
Authors: Weikang Peng
Abstract:
The innovation corridor is an important spatial carrier for promoting regional collaborative innovation, and its development process is the spatial re-organization process of regional innovation resources. This paper takes the Nanjing-Zhenjiang G312 Industrial Innovation Corridor, which involves seven municipal districts in Jiangsu Province, as empirical evidence. Based on multi-source spatial big data from 2010, 2016, and 2022, this paper applies triangulated irregular networks (TIN), head/tail breaks, a regional innovation ecosystem (RIE) niche fitness evaluation model, and social network analysis to carry out empirical research on the spatial organization and functional structural evolution characteristics of innovation corridors and their correlation with the structural evolution of the collaborative innovation network. The results show, first, that the development of innovation patches in the corridor has fractal characteristics in time and space and tends toward a multi-center, clustered layout along the Nanjing Bypass Highway and National Highway G312. Second, there are large differences in the spatial distribution pattern of niche fitness in the corridor across dimensions, and the niche fitness of innovation patches along the highway has increased significantly. Third, the scale of the collaborative innovation network in the corridor is expanding fast. The core of the network is shifting from the main urban area to the periphery of the city along the highway, with small-world and hierarchical characteristics, and a highlighted core-edge network structure. With the development of the innovation corridor, the main collaboration mode in the corridor is changing from collaboration within innovation patches to collaboration between innovation patches, and innovation patches with high ecological suitability tend to be the active areas of collaborative innovation. Overall, a polycentric spatial layout, a graded functional structure, diversified innovation clusters, and differentiated environmental support play an important role in effectively constructing collaborative innovation linkages and in the stable expansion of the scale of collaborative innovation within the innovation corridor.
Keywords: innovation corridor development, spatial structure, niche fitness evaluation model, head/tail breaks, innovation network
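Head/tail breaks, one of the methods listed, is easy to state in code: split the values at their mean and keep recursing into the "head" (values above the mean) while it remains a clear minority. The sketch below uses hypothetical patch sizes rather than the corridor data.

```python
# Head/tail breaks for heavy-tailed data: record the mean as a break, recurse on the head.
def head_tail_breaks(values, head_ratio_limit=0.4):
    """Return the list of break values for a heavy-tailed distribution."""
    breaks = []
    data = list(values)
    while len(data) > 1:
        mean = sum(data) / len(data)
        head = [v for v in data if v > mean]
        if not head:
            break
        breaks.append(mean)
        if len(head) / len(data) > head_ratio_limit:
            break                     # head is no longer a clear minority; stop splitting
        data = head
    return breaks

# Hypothetical patch sizes with a heavy tail.
patch_sizes = [1] * 60 + [5] * 20 + [20] * 10 + [80] * 5 + [300, 900]
print(head_tail_breaks(patch_sizes))
```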
Procedia PDF Downloads 22
3920 Router 1X3 - RTL Design and Verification
Authors: Nidhi Gopal
Abstract:
Routing is the process of moving a packet of data from source to destination; it enables messages to pass from one computer to another and eventually reach the target machine. A router is a networking device that forwards data packets between computer networks. It is connected to two or more data lines from different networks (as opposed to a network switch, which connects data lines from one single network). This paper mainly emphasizes the study of the router device and its top-level architecture, and how the various sub-modules of the router, i.e. the Register, FIFO, FSM and Synchronizer, are synthesized, simulated and finally connected to the top module.
Keywords: data packets, networking, router, routing
Procedia PDF Downloads 815
3919 Social Media, Networks and Related Technology: Business and Governance Perspectives
Authors: M. A. T. AlSudairi, T. G. K. Vasista
Abstract:
The concept of social media is at the top of the agenda for many business executives and public sector executives today. Decision makers, as well as consultants, try to identify ways in which firms and enterprises can make profitable use of social media and network-related applications such as Wikipedia, Facebook, YouTube, Google+, and Twitter. While it is fun and useful to participate in these media and networks for achieving communication effectively and efficiently, semantic and sentiment analysis and interpretation become crucial issues. The objective of this paper is therefore to provide a literature review on social media, networks and related technology concerning semantics and sentiment or opinion analysis, covering business and governance perspectives. In this regard, a case study on the use and adoption of social media in Saudi Arabia is discussed. It is concluded that semantic web technology plays a significant role in analyzing social networks and social media content for extracting interpretational knowledge towards strategic decision support.
Keywords: CRASP methodology, formative assessment, literature review, semantic web services, social media, social networks
Procedia PDF Downloads 452
3918 Selecting a Foreign Country to Build a Naval Base Using a Fuzzy Hybrid Decision Support System
Authors: Latif Yanar, Muammer Kaçan
Abstract:
Decision support systems are becoming more important in many fields of science and technology and are used effectively, especially when the problems to be solved are complicated, with many criteria. In these kinds of problems, one of the main challenges for the decision makers is that sometimes they cannot produce countable data for evaluating the criteria, but only the knowledge and sense of experts. In recent years, fuzzy set theory and fuzzy-logic-based decision models have been gaining more attention in the literature. In this study, a decision support model to determine a country in which to build a naval base is proposed, and the application of the model is performed for the Turkish Navy, based on the evaluations of Turkish Navy officers and academics from the international relations departments of various universities located in Istanbul. The results of the evaluations made by the experts are calculated in our model by a decision support tool named DESTEC 1.0, which was developed by the authors using the C Sharp programming language. The tool advises the decision maker using the Analytic Hierarchy Process, Analytic Network Process, Fuzzy Analytic Hierarchy Process and Fuzzy Analytic Network Process, all at once. The calculated results for five foreign countries are shown in the conclusion.
Keywords: decision support system, analytic hierarchy process, fuzzy analytic hierarchy process, analytic network process, fuzzy analytic network process, naval base, country selection, international relations
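As background for readers unfamiliar with AHP, the core step behind such tools is deriving criterion weights from a pairwise comparison matrix and checking its consistency; the sketch below uses invented judgments for three criteria and is not taken from the study or from DESTEC 1.0.

```python
# AHP priority vector via the principal eigenvector, plus a consistency ratio check.
import numpy as np

# Example: 3 criteria compared pairwise on Saaty's 1-9 scale (assumed judgments).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                      # normalized priority vector

n = A.shape[0]
lambda_max = eigvals.real[k]
ci = (lambda_max - n) / (n - 1)               # consistency index
cr = ci / 0.58                                # 0.58 = random index for n = 3
print("weights:", np.round(weights, 3), "CR:", round(cr, 3))
```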
Procedia PDF Downloads 593
3917 Tabu Search to Draw Evacuation Plans in Emergency Situations
Authors: S. Nasri, H. Bouziri
Abstract:
Disasters are frequently experienced in our days. They are caused by floods, landslides, and building fires; the latter is the main focus of this study. To cope with these unexpected events, precautions must be taken to protect human lives. The emphasis of this work is on the resolution of the evacuation problem in the case of a no-notice disaster. The evacuation problem is cast as a dynamic network flow problem. In particular, we model the evacuation problem as an earliest arrival flow problem with load-dependent transit times. This problem is classified as NP-hard. Our challenge here is to propose a metaheuristic solution for solving the evacuation problem. We define our objective as the maximization of the number of evacuees during the earliest periods of a time horizon T; the objective provides for the evacuation of persons as soon as possible. We performed an experimental study on emergency evacuation from the Tunisian children’s hospital. This work prompts us to look for evacuation plans corresponding to several situations where the network changes dynamically.
Keywords: dynamic network flow, load dependent transit time, evacuation strategy, earliest arrival flow problem, tabu search metaheuristic
Procedia PDF Downloads 372
3916 Centrality and Patent Impact: Coupled Network Analysis of Artificial Intelligence Patents Based on Co-Cited Scientific Papers
Authors: Xingyu Gao, Qiang Wu, Yuanyuan Liu, Yue Yang
Abstract:
In the era of the knowledge economy, the relationship between scientific knowledge and patents has garnered significant attention. Understanding the intricate interplay between the foundations of science and technological innovation has emerged as a pivotal challenge for both researchers and policymakers. This study establishes a coupled network of artificial intelligence patents based on co-cited scientific papers. Leveraging centrality metrics from network analysis offers a fresh perspective on understanding how information flow and knowledge sharing within the network influence patent impact. The study initially obtained patent numbers for 446,890 granted US AI patents from the United States Patent and Trademark Office’s artificial intelligence patent database for the years 2002-2020. Subsequently, detailed information on these patents was acquired using the Lens patent retrieval platform. Additionally, a search and deduplication process was performed on scientific non-patent references (SNPRs) using the Web of Science database, resulting in the selection of 184,603 patents that cited 37,467 unique SNPRs. Finally, this study constructs a coupled network comprising 59,379 artificial intelligence patents by utilizing scientific papers co-cited in the patents’ backward citations. In this network, nodes represent patents, and if two patents reference the same scientific papers, a connection is established between them, serving as an edge within the network. Nodes and edges collectively constitute the patent coupling network. Structural characteristics such as node degree centrality, betweenness centrality, and closeness centrality are employed to assess the scientific connections between patents, while citation count is utilized as a quantitative metric of patent influence. Finally, a negative binomial model is employed to test the nonlinear relationship between these network structural features and patent influence. The research findings indicate that network structural features such as node degree centrality, betweenness centrality, and closeness centrality exhibit inverted U-shaped relationships with patent influence. Specifically, as these centrality metrics increase, patent influence initially shows an upward trend, but once these features reach a certain threshold, patent influence starts to decline. This discovery suggests that moderate network centrality is beneficial for enhancing patent influence, while excessively high centrality may have a detrimental effect on patent influence. This finding offers crucial insights for policymakers, emphasizing the importance of encouraging moderate knowledge flow and sharing to promote innovation when formulating technology policies. It suggests that, in certain situations, data sharing and integration can contribute to innovation. Consequently, policymakers can take measures to promote data-sharing policies, such as open data initiatives, to facilitate the flow of knowledge and the generation of innovation. Additionally, governments and relevant agencies can achieve broader knowledge dissemination by supporting collaborative research projects, adjusting intellectual property policies to enhance flexibility, or nurturing technology entrepreneurship ecosystems.
Keywords: centrality, patent coupling network, patent influence, social network analysis
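The inverted-U test referred to above is typically operationalized by entering a centrality measure and its square into a count-data regression. The sketch below is fully simulated and only shows the mechanics of fitting such a negative binomial model; it uses none of the study's data or estimates.

```python
# Simulated inverted-U: citations ~ NegativeBinomial(centrality, centrality^2).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
centrality = rng.uniform(0, 1, n)
# Simulated log-mean with an inverted-U shape (peak around centrality = 0.6).
log_mu = 0.5 + 3.0 * centrality - 2.5 * centrality**2
citations = rng.poisson(np.exp(log_mu) * rng.gamma(2.0, 0.5, n))   # overdispersed counts

X = sm.add_constant(np.column_stack([centrality, centrality**2]))
model = sm.GLM(citations, X, family=sm.families.NegativeBinomial()).fit()
print(model.params)   # positive linear and negative quadratic term -> inverted U
```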
Procedia PDF Downloads 55
3915 Cluster Based Ant Colony Routing Algorithm for Mobile Ad-Hoc Networks
Authors: Alaa Eddien Abdallah, Bajes Yousef Alskarnah
Abstract:
Ant colony based routing algorithms are known to guarantee packet delivery, but they suffer from the huge overhead of the control messages that are needed to discover the route. In this paper we utilize the positions of the network nodes to group the nodes into connected clusters. We use cluster heads only for forwarding the route discovery control messages. Our simulations proved that the new algorithm decreases the overhead dramatically without affecting the delivery rate.
Keywords: ad-hoc network, MANET, ant colony routing, position based routing
Procedia PDF Downloads 426
3914 Three-Stage Least Squared Models of a Station-Level Subway Ridership: Incorporating an Analysis on Integrated Transit Network Topology Measures
Authors: Jungyeol Hong, Dongjoo Park
Abstract:
The urban transit system is a critical part of the solution to economic, energy, and environmental challenges, and it ultimately contributes to the improvement of people’s quality of life. To take advantage of these benefits, the city of Seoul has constructed an integrated transit system including both subway and buses. This effort has led to approximately 6.9 million citizens using the integrated transit system every day for their trips. Diagnosing the current transit network is a significant task in providing a more convenient and pleasant transit environment. Therefore, the central objective of this study is to establish a methodological framework for the analysis of an integrated bus-subway network and to examine the relationship between subway ridership and parameters such as network topology measures, bus demand, and a variety of commercial business facilities. Regarding the statistical approach to estimating subway ridership at the station level, many previous studies relied on ordinary least squares regression, but there was a lack of studies considering the endogeneity issues that might appear in subway ridership prediction models. This study focuses both on discovering the impacts of integrated transit network topology measures and on the endogenous effect of bus demand on subway ridership. It could ultimately contribute to more accurate subway ridership estimation by accounting for this statistical bias. The spatial scope of the study covers the city of Seoul in South Korea, including 243 subway stations and 10,120 bus stops, and the temporal scope is twenty-four hours in one-hour time panels. Detailed subway and bus ridership information was collected from the Seoul Smart Card data for 2015 and 2016. First, integrated subway-bus network topology measures that characterize connectivity, centrality, transitivity, and reciprocity were estimated based on complex network theory. The results of the integrated transit network topology analysis were compared to the subway-only network topology. Also, a non-recursive approach, Three-Stage Least Squares, was applied to develop the daily subway ridership model, capturing the endogeneity between bus and subway demand. Independent variables included roadway geometry, commercial business characteristics, socio-economic characteristics, a safety index, transit facility attributes, and dummies for seasons and time zones. Consequently, it was found that the network topology measures had significant effects. In particular, the centrality measures showed an elasticity of 4.88% for closeness centrality and 24.48% for betweenness centrality, while the elasticity of bus ridership was 8.85%. Moreover, it was shown that bus demand and subway ridership are endogenous in a non-recursive manner, since predicted bus ridership and predicted subway ridership are statistically significant in the OLS regression models. Therefore, the Three-Stage Least Squares model appears to be a plausible model for efficient subway ridership estimation. It is expected that the proposed approach provides a reliable guideline that can be used as part of the spectrum of tools for evaluating a city-wide integrated transit network.
Keywords: integrated transit system, network topology measures, three-stage least squared, endogeneity, subway ridership
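The endogeneity treatment can be illustrated in miniature with the two-stage part of the procedure: regress the endogenous bus ridership on instruments, then use its predicted values in the subway equation. The sketch below is fully simulated; full 3SLS would additionally model the cross-equation error covariance, which is omitted here.

```python
# Simulated endogeneity: naive OLS is biased, instrumenting bus ridership recovers ~0.6.
import numpy as np

rng = np.random.default_rng(0)
n = 500
z = rng.normal(size=(n, 2))                     # instruments (e.g. bus network measures)
u = rng.normal(size=n)                          # shared shock causing endogeneity
bus = 1.0 + z @ np.array([0.8, -0.5]) + 0.7 * u + rng.normal(size=n)
subway = 2.0 + 0.6 * bus + 1.0 * u + rng.normal(size=n)   # true coefficient on bus: 0.6

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

Z = np.column_stack([np.ones(n), z])
bus_hat = Z @ ols(Z, bus)                       # first stage: predicted bus ridership
X2 = np.column_stack([np.ones(n), bus_hat])
print("naive OLS:", ols(np.column_stack([np.ones(n), bus]), subway)[1])
print("2SLS     :", ols(X2, subway)[1])         # much closer to the true 0.6
```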
Procedia PDF Downloads 179