Search results for: MobileNetV2 neural network

3276 Smart Grids Cyber Security Issues and Challenges

Authors: Imen Aouini, Lamia Ben Azzouz

Abstract:

The need for energy is growing rapidly due to population growth and large-scale new uses of power. Several works have put considerable effort into making the electricity grid more intelligent, essentially to reduce energy consumption and to provide efficient and reliable power systems. The Smart Grid is a complex architecture that covers critical devices and systems vulnerable to significant attacks. Hence, security is a crucial factor for the success and the wide deployment of Smart Grids. In this paper, we present security issues of the Smart Grid architecture and we highlight open issues that will make Smart Grid security a challenging research area in the future.

Keywords: smart grids, smart meters, home area network, neighbor area network

Procedia PDF Downloads 427
3275 Alphabet Recognition Using Pixel Probability Distribution

Authors: Vaidehi Murarka, Sneha Mehta, Dishant Upadhyay

Abstract:

Our project topic is “Alphabet Recognition using Pixel Probability Distribution”. The project uses techniques of image processing and machine learning in computer vision. Alphabet recognition is the mechanical or electronic translation of scanned images of handwritten, typewritten or printed text into machine-encoded text. It is widely used to convert books and documents into electronic files. Alphabet-recognition-based OCR applications are sometimes used in signature recognition, which is employed in banks and other high-security buildings. One popular mobile application reads a visiting card and stores it directly to the contacts. OCR is also known to be used in radar systems for reading speeding drivers’ license plates, among many other uses. Our implementation was done using Visual Studio and OpenCV (Open Source Computer Vision), and the algorithm is based on neural networks (machine learning). The project was implemented in three modules: (1) Training: this module performs database generation. The database was generated using two methods: (a) run-time generation, in which the database is generated at compile time using the built-in fonts of the OpenCV library; human intervention is not necessary for generating this database; and (b) contour detection, in which a JPEG template containing different fonts of an alphabet is converted to a weighted matrix using specialized OpenCV functions (contour detection and blob detection). The main advantage of this type of database generation is that the algorithm becomes self-learning and the final database requires little memory to store (119 KB precisely). (2) Preprocessing: the input image is pre-processed using image processing concepts such as adaptive thresholding, binarizing and dilating, and is made ready for segmentation. Segmentation includes the extraction of lines, words, and letters from the processed text image. (3) Testing and prediction: the extracted letters are classified and predicted using the neural network algorithm. The algorithm recognizes an alphabet based on certain mathematical parameters calculated from the database and the weight matrix of the segmented image.
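
As a rough illustration of the preprocessing and contour-based segmentation steps described above, the Python/OpenCV sketch below reproduces the pipeline in outline (the original project was built in Visual Studio); the file name, threshold parameters and output size are assumptions, not the authors' settings.

```python
# Sketch of the preprocessing and letter-segmentation pipeline described above,
# using OpenCV's Python bindings. File names and parameter values are assumptions.
import cv2
import numpy as np

def extract_letters(image_path, out_size=(32, 32)):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    # Adaptive thresholding + binarisation (text becomes white on black)
    binary = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                   cv2.THRESH_BINARY_INV, 31, 10)
    # Dilation joins broken strokes before contour detection
    dilated = cv2.dilate(binary, np.ones((3, 3), np.uint8), iterations=1)
    # Contour (blob) detection: each external contour is a candidate letter
    contours, _ = cv2.findContours(dilated, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    letters = []
    for c in sorted(contours, key=lambda c: cv2.boundingRect(c)[0]):  # left-to-right
        x, y, w, h = cv2.boundingRect(c)
        if w * h < 50:          # skip specks of noise
            continue
        roi = binary[y:y + h, x:x + w]
        # Resize to a fixed size so it matches the weight-matrix / NN input
        letters.append(cv2.resize(roi, out_size))
    return letters

# letters = extract_letters("scanned_text.jpg")  # feed each ROI to the classifier
```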

Keywords: contour-detection, neural networks, pre-processing, recognition coefficient, runtime-template generation, segmentation, weight matrix

Procedia PDF Downloads 392
3274 Social Entrepreneurship against Depopulation: Network Analysis within the Theoretical Framework of the Quadruple Helix

Authors: Esperanza Garcia-Uceda, Josefina L. Murillo-Luna, M. Pilar Latorre-Martinez, Marta Ferrer-Serrano

Abstract:

Social entrepreneurship represents an innovation of traditional business models. During the last decade, its important role in contributing to rural and regional development has been widely recognized, due to its capacity to combat the problem of depopulation through the creation of employment. However, the success of this type of innovative business initiative depends to a large extent on the existence of an adequate ecosystem of support resources. Based on the theoretical framework of the quadruple helix (QH), which highlights the need for collaboration between different interest groups -university, industry, government and civil society- for the development of regional innovations, in this work network analysis is applied to study the ecosystem of resources supporting social entrepreneurship in the rural area of the province of Zaragoza (Spain). It is a quantitative analysis that can be used to measure the interactions between the different actors that make up the quadruple helix, as well as the networks created between the different institutions and support organizations, through the study of the complex networks they form. The results show the importance of the involvement of local governments and the university as key elements in the development process, but they also allow the identification of other issues where there is room for improvement.

Keywords: ecosystem of support resources, network analysis, quadruple helix, social entrepreneurship

Procedia PDF Downloads 259
3273 A Topology-Based Dynamic Repair Strategy for Enhancing Urban Road Network Resilience under Flooding

Authors: Xuhui Lin, Qiuchen Lu, Yi An, Tao Yang

Abstract:

As global climate change intensifies, extreme weather events such as floods increasingly threaten urban infrastructure, making the vulnerability of urban road networks a pressing issue. Existing static repair strategies fail to adapt to the rapid changes in road network conditions during flood events, leading to inefficient resource allocation and suboptimal recovery. The main research gap lies in the lack of repair strategies that consider both the dynamic characteristics of networks and the progression of flood propagation. This paper proposes a topology-based dynamic repair strategy that adjusts repair priorities based on real-time changes in flood propagation and traffic demand. Specifically, a novel method is developed to assess and enhance the resilience of urban road networks during flood events. The method combines road network topological analysis, flood propagation modelling, and traffic flow simulation, introducing a local importance metric to dynamically evaluate the significance of road segments across different spatial and temporal scales. Using London's road network and rainfall data as a case study, the effectiveness of this dynamic strategy is compared to traditional and Transport for London (TFL) strategies. The most significant highlight of the research is that the dynamic strategy substantially reduced the number of stranded vehicles across different traffic demand periods, improving efficiency by up to 35.2%. The advantage of this method lies in its ability to adapt in real-time to changes in network conditions, enabling more precise resource allocation and more efficient repair processes. This dynamic strategy offers significant value to urban planners, traffic management departments, and emergency response teams, helping them better respond to extreme weather events like floods, enhance overall urban resilience, and reduce economic losses and social impacts.
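
As a loose illustration of topology-based repair prioritisation (not the paper's actual local importance metric), the sketch below ranks flooded road segments by their edge betweenness in a networkx graph; the toy network and travel times are invented.

```python
# Minimal sketch: rank flooded edges for repair by a topological importance score.
# The betweenness-based score is a stand-in for the paper's local importance metric.
import networkx as nx

def repair_priority(G, flooded_edges, weight="travel_time"):
    """Return flooded edges sorted by descending topological importance."""
    # Importance of an edge = its betweenness centrality in the intact network
    ebc = nx.edge_betweenness_centrality(G, weight=weight)
    scored = []
    for u, v in flooded_edges:
        score = ebc.get((u, v), ebc.get((v, u), 0.0))
        scored.append(((u, v), score))
    return sorted(scored, key=lambda item: item[1], reverse=True)

# Toy example with assumed travel times (minutes)
G = nx.Graph()
G.add_weighted_edges_from(
    [("A", "B", 4), ("B", "C", 3), ("A", "C", 9), ("C", "D", 2), ("B", "D", 6)],
    weight="travel_time",
)
flooded = [("B", "C"), ("C", "D")]
print(repair_priority(G, flooded))  # repair the most central flooded link first
```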

Keywords: urban resilience, road networks, flood response, dynamic repair strategy, topological analysis

Procedia PDF Downloads 41
3272 Horizontal-Vertical and Enhanced-Unicast Interconnect Testing Techniques for Network-on-Chip

Authors: Mahdiar Hosseinghadiry, Razali Ismail, F. Fotovati

Abstract:

One of the most important and challenging tasks in testing network-on-chip based systems-on-chip (NoC-based SoCs) is to verify the communication entity. It is important because the communication fabric transfers both data packets and test patterns for intellectual properties (IPs) during normal and test modes; hence, ensuring NoC reliability is required for reliable IP functionality and testing. It is challenging due to the time required to test it and the way test patterns are transferred from the tester to the NoC components. In this paper, two testing techniques for mesh-based NoC interconnections are proposed. The first is based on one-by-one testing, and the second divides the NoC interconnects into three parts: horizontal links of switches in even columns, horizontal links of switches in odd columns, and all vertical links. A design-for-testability (DFT) architecture is presented to send test patterns directly to each switch under test and to support the proposed testing techniques by providing a loopback path in each switch. The simulation results show that the second proposed testing mechanism outperforms the first in terms of test time, because it tests all the interconnects in only three phases, independent of the number of interconnects in the network, whereas the test time of other methods is highly dependent on the number of switches and interconnects in the NoC.

Keywords: network-on-chip, interconnect testing, horizontal-vertical testing, enhanced unicast

Procedia PDF Downloads 557
3271 General Time-Dependent Sequenced Route Queries in Road Networks

Authors: Mohammad Hossein Ahmadi, Vahid Haghighatdoost

Abstract:

Spatial databases have been an active area of research over the years. In this paper, we study how to answer general time-dependent sequenced route queries. Given the origin and destination of a user over a time-dependent road network graph, an ordered list of categories of interest and a departure time interval, our goal is to find the minimum travel time path, along with the best departure time, that minimizes the total travel time from the source location to the given destination while passing through a sequence of points of interest belonging to each of the specified categories of interest. The challenge of this problem is the complexity added to optimal sequenced route queries: first, the road network is time-dependent, and second, the user defines a departure time interval instead of a single departure time instance. For processing general time-dependent sequenced route queries, we propose two solutions, the Discrete-Time and Continuous-Time Sequenced Route approaches, which find approximate and exact solutions, respectively. Our proposed approaches traverse the road network based on the A*-search paradigm, equipped with an efficient heuristic function for shrinking the search space. Extensive experiments are conducted to verify the efficiency of our proposed approaches.
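
A heavily simplified sketch of the Discrete-Time idea follows: departure times in the user's interval are sampled, one POI per category is chained through a time-dependent Dijkstra search, and the departure with the smallest total travel time is kept (the paper's A* heuristic and pruning are omitted); the toy network and congestion profile are assumptions.

```python
# Simplified sketch of the Discrete-Time approach: sample departure times, chain
# time-dependent shortest paths through one POI per category, keep the best.
import heapq
from itertools import product

def td_shortest(graph, travel_time, src, dst, t0):
    """Time-dependent Dijkstra: earliest arrival at dst when leaving src at t0."""
    best = {src: t0}
    heap = [(t0, src)]
    while heap:
        t, u = heapq.heappop(heap)
        if u == dst:
            return t
        if t > best.get(u, float("inf")):
            continue
        for v in graph.get(u, []):
            ta = t + travel_time(u, v, t)          # edge cost depends on entry time
            if ta < best.get(v, float("inf")):
                best[v] = ta
                heapq.heappush(heap, (ta, v))
    return float("inf")

def best_sequenced_route(graph, travel_time, src, dst, categories, t_window, step):
    best_tt, best_dep, best_pois = float("inf"), None, None
    t0 = t_window[0]
    while t0 <= t_window[1]:
        for pois in product(*categories):          # one POI per category, in order
            t = t0
            for a, b in zip((src,) + pois, pois + (dst,)):
                t = td_shortest(graph, travel_time, a, b, t)
            if t - t0 < best_tt:
                best_tt, best_dep, best_pois = t - t0, t0, pois
        t0 += step
    return best_tt, best_dep, best_pois

# Toy directed network; congestion doubles travel times between t = 30 and t = 60
graph = {"s": ["p1", "p2"], "p1": ["q"], "p2": ["q"], "q": ["d"], "d": []}
base = {("s", "p1"): 10, ("s", "p2"): 12, ("p1", "q"): 8, ("p2", "q"): 5, ("q", "d"): 7}
travel_time = lambda u, v, t: base[(u, v)] * (2 if 30 <= t < 60 else 1)
print(best_sequenced_route(graph, travel_time, "s", "d",
                           categories=[("p1", "p2"), ("q",)], t_window=(0, 40), step=10))
```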

Keywords: trip planning, time dependent, sequenced route query, road networks

Procedia PDF Downloads 326
3270 Comparative Advantage of Mobile Agent Application in Procuring Software Products on the Internet

Authors: Michael K. Adu, Boniface K. Alese, Olumide S. Ogunnusi

Abstract:

This paper brings to the fore the inherent advantages of applying mobile agents to procure software products rather than downloading software content over the Internet. It proposes a system whereby the products come on compact disk with a mobile agent as part of the deliverable. The client/user purchases a software product but must connect to the remote server of the software developer before installation. The user provides an activation code that activates the mobile agent, which is part of the software product on the compact disk. The validity of the activation code is checked on connection at the developer’s end to ascertain authenticity and prevent piracy. The system is evaluated by downloading two different software products and comparing this with installing the same products from compact disk using the mobile agent. Downloading software content from the developer’s database, as in the traditional method, requires a continuously open connection between the client and the developer’s end, and such a fixed connection is not always economically or technically feasible. A mobile agent, after being dispatched into the network, becomes independent of the creating process and can operate asynchronously and autonomously. It can reconnect later, after completing its task, and return to deliver its results. Response time and network load are very low with the mobile agent approach.

Keywords: software products, software developer, internet, activation code, mobile agent

Procedia PDF Downloads 315
3269 Classification of Foliar Nitrogen in Common Bean (Phaseolus vulgaris L.) Using Deep Learning Models and Images

Authors: Marcos Silva Tavares, Jamile Raquel Regazzo, Edson José de Souza Sardinha, Murilo Mesquita Baesso

Abstract:

Common beans are a widely cultivated and consumed legume globally, serving as a staple food for humans, especially in developing countries, due to their nutritional characteristics. Nitrogen (N) is the most limiting nutrient for productivity, and foliar analysis is crucial to ensure balanced nitrogen fertilization. Excessive N applications can cause, either in isolation or cumulatively, soil and water contamination and plant toxicity, and can increase the plants' susceptibility to diseases and pests. However, the quantification of N using conventional methods is time-consuming and costly, demanding new technologies to optimize the adequate supply of N to plants. Thus, it becomes necessary to establish constant monitoring of the foliar content of this macronutrient, mainly at the V4 stage, aiming at precision management of nitrogen fertilization. In this work, the objective was to evaluate the performance of a deep learning model, ResNet-50, in the classification of foliar nitrogen in common beans using RGB images. The BRS Estilo cultivar was sown in a greenhouse in a completely randomized design with four nitrogen doses (T1 = 0 kg N ha⁻¹, T2 = 25 kg N ha⁻¹, T3 = 75 kg N ha⁻¹, and T4 = 100 kg N ha⁻¹) and 12 replications. Pots with 5 L capacity were used with a substrate composed of 43% soil (Neossolo Quartzarênico), 28.5% crushed sugarcane bagasse, and 28.5% cured bovine manure. The plants were supplied with 5 mm of water per day. The application of urea (45% N) and the acquisition of images occurred 14 and 32 days after sowing, respectively. A code developed in Matlab© R2022b was used to cut the original images into smaller blocks, originating an image bank composed of four folders representing the four classes, labeled T1, T2, T3, and T4, each containing 500 images of 224x224 pixels obtained from plants cultivated under the different N doses. The Matlab© R2022b software was used for the implementation and performance analysis of the model. The evaluation of efficiency was done with a set of metrics including accuracy (AC), F1-score (F1), specificity (SP), area under the curve (AUC), and precision (P). The ResNet-50 showed high performance in the classification of foliar N levels in common beans, with an AC value of 85.6%. The F1 for classes T1, T2, T3, and T4 was 76, 72, 74, and 77%, respectively. This study revealed that the use of RGB images combined with deep learning can be a promising alternative to slow laboratory analyses, capable of optimizing the estimation of foliar N. This can allow rapid intervention by the producer to achieve higher productivity and less fertilizer waste. Future approaches are encouraged to develop mobile devices capable of handling images using deep learning for the classification of the nutritional status of plants in situ.
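
For readers who want to reproduce the classifier, a minimal transfer-learning sketch in Python/Keras is shown below (the study itself used Matlab R2022b); the directory layout, optimiser settings and number of epochs are assumptions.

```python
# Keras sketch of the ResNet-50 classifier described above (the study used Matlab).
# Directory layout, epochs and learning rate are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import ResNet50

# Four classes (T1-T4 nitrogen doses), 224x224 RGB blocks as in the image bank
train_ds = tf.keras.utils.image_dataset_from_directory(
    "image_bank/train", image_size=(224, 224), batch_size=32, label_mode="categorical")
val_ds = tf.keras.utils.image_dataset_from_directory(
    "image_bank/val", image_size=(224, 224), batch_size=32, label_mode="categorical")

base = ResNet50(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False                      # transfer learning: freeze the backbone

model = models.Sequential([
    layers.Rescaling(1.0 / 255),            # simple scaling used here instead of
                                            # resnet50.preprocess_input, for brevity
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(4, activation="softmax"),  # T1, T2, T3, T4
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```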

Keywords: convolutional neural network, residual network 50, nutritional status, artificial intelligence

Procedia PDF Downloads 27
3268 Sustainable Material Selection for Buildings: Analytic Network Process Method and Life Cycle Assessment Approach

Authors: Samira Mahmoudkelayeh, Katayoun Taghizade, Mitra Pourvaziri, Elnaz Asadian

Abstract:

Over recent decades, the depletion of resources and environmental concerns have led researchers and practitioners to present sustainable approaches. Since the construction process consumes a great deal of both renewable and non-renewable resources, it is of great significance regarding environmental impacts. Choosing sustainable construction materials is a remarkable strategy presented in much research and has a significant effect on a building’s environmental footprint. This paper presents an assessment framework for selecting the best sustainable materials for the exterior enclosure in the city of Tehran, based on sustainability principles (eco-friendly, cost-effective and socio-culturally viable solutions). To perform a comprehensive analysis of environmental impacts, life cycle assessment, a cradle-to-grave approach, is used. A questionnaire survey of construction experts was conducted to determine the relative importance of the criteria. The Analytic Network Process (ANP) is applied as a multi-criteria decision-making method to choose sustainable materials, since it considers the interdependencies of criteria and sub-criteria. Finally, it prioritizes and aggregates the relevant criteria into an ultimate assessment score.

Keywords: sustainable materials, building, analytic network process, life cycle assessment

Procedia PDF Downloads 245
3267 Creating Knowledge Networks: Comparative Analysis of Reference Cases

Authors: Sylvia Villarreal, Edna Bravo

Abstract:

Knowledge management focuses on coordinating technologies, people, processes, and structures to generate a competitive advantage. Considering that networks are perceived as mechanisms for knowledge creation and transfer, this research presents the stages and practices related to the creation of knowledge networks. The methodology started with a literature review adapted from the systematic literature review (SLR). The descriptive analysis includes variables such as approach (conceptual or practical), industry, knowledge management processes and methodologies (qualitative or quantitative), etc. The content analysis includes the identification of reference cases. These cases were characterized on the basis of variables such as scope, creation goal, years, network approach, actors and creation methodology. It was possible to carry out a comparative analysis to determine similarities and differences in the cases documented in the knowledge network scientific literature. Consequently, it was shown that despite the need for and impact of knowledge networks in organizations, the initial guidelines for their creation are not documented, so there is no guide of good practices and lessons learned. The reference cases come from industries such as energy, education, the creative sector, automotive and textile. Their common point is the human approach: they are oriented towards interactions that facilitate the appropriation of both explicit and tacit knowledge. The stages of every case are analyzed in order to propose the main elements of success.

Keywords: creation, knowledge management, network, stages

Procedia PDF Downloads 305
3266 Postmodern Navy to Transnational Adaptive Navy: Positive Peace with Borderless Institutional Network

Authors: Serkan Tezgel

Abstract:

Effectively managing threats and power that transcend national boundaries requires a reformulation from the traditional post-modern navy to an adaptive, institutional transnational navy. By analyzing the existing soft power concept, the post-modern navy, and sea power, this study proposes the transnational navy, a new model founded on the triangle of the main attributes of transnational companies -global competitiveness, local responsiveness, and worldwide learning and innovation sharing- which will lead to a positive peace with an institutional network. This transnational model necessitates 'transnational navies' to help establish peace with a collective and transnational understanding during the transition period that the reactive post-modern navy has been experiencing. In this regard, it is fairly claimed that a new paradigm shift will revolve around sea power to establish good order at sea through collective and collaborative initiatives, and is bound to breed new theories and ideas in the forthcoming years. However, there are obstacles to overcome. Postmodern navies, currently shaped by 'collective maritime security' and 'collective defense' concepts, cannot abandon reactive applications and acts. States deploying postmodern navies to realize their policies on international platforms, and sea power structures shaped around countries’ absolute interests, have resulted in multipolar alliances and coalitions rather than the establishment of peace. These obstacles can be categorized into three tiers in establishing a unique transnational model navy: strategic, organizational and management challenges. To overcome these obstacles and challenges, postmodern navies should transform into cooperative, collective and independent soft transnational navies with a transnational mentality, global commons, and an institutional network. Such an adaptive institution can help the world navigate towards a positive peace.

Keywords: postmodern navy, transnational navy, transnational mentality, institutional network

Procedia PDF Downloads 524
3265 Fuzzy Rules Based Improved BEENISH Protocol for Wireless Sensor Networks

Authors: Rishabh Sharma

Abstract:

The main design parameter of a WSN (wireless sensor network) is energy consumption. To compensate for this parameter, hierarchical clustering is a technique that assists in extending the duration of the network's life by consuming energy efficiently. This paper focuses on WSNs and the FIS (fuzzy inference system) deployed to enhance the BEENISH protocol. The node energy, mobility, pause time and density are considered for the selection of the CH (cluster head). The simulation outcomes exhibit that the projected system outperforms the traditional system with regard to energy utilization and the number of packets transmitted to the sink.
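
A minimal sketch of such a fuzzy inference step for cluster-head selection is given below using scikit-fuzzy, with the four inputs named above; the membership functions and rule base are invented for illustration and are not the paper's actual FIS.

```python
# Illustrative fuzzy inference sketch for cluster-head (CH) selection.
# Membership functions and rules are assumptions, not the paper's rule base.
import numpy as np
from skfuzzy import control as ctrl

universe = np.arange(0, 1.01, 0.01)
energy   = ctrl.Antecedent(universe, "energy")     # residual node energy (normalised)
mobility = ctrl.Antecedent(universe, "mobility")   # node speed (normalised)
pause    = ctrl.Antecedent(universe, "pause")      # pause time (normalised)
density  = ctrl.Antecedent(universe, "density")    # neighbour density (normalised)
chance   = ctrl.Consequent(universe, "chance")     # suitability as cluster head

for var in (energy, mobility, pause, density, chance):
    var.automf(3)                                  # 'poor', 'average', 'good'

rules = [
    ctrl.Rule(energy["good"] & mobility["poor"] & pause["good"] & density["good"],
              chance["good"]),
    ctrl.Rule(energy["average"] | density["average"], chance["average"]),
    ctrl.Rule(energy["poor"] | mobility["good"], chance["poor"]),
]
sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
sim.input["energy"], sim.input["mobility"] = 0.9, 0.2
sim.input["pause"], sim.input["density"] = 0.8, 0.7
sim.compute()
print("CH suitability:", round(sim.output["chance"], 3))   # higher => elect as CH
```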

Keywords: wireless sensor network, sink, sensor node, routing protocol, fuzzy rule, fuzzy inference system

Procedia PDF Downloads 111
3264 The Effective Use of the Network in the Distributed Storage

Authors: Mamouni Mohammed Dhiya Eddine

Abstract:

This work aims at studying the exploitation of high-speed networks of clusters for distributed storage. Parallel applications running on clusters require both high-performance communications between nodes and efficient access to the storage system. Many studies on network technologies led to the design of dedicated architectures for clusters with very fast communications between computing nodes. Efficient distributed storage in clusters has been essentially developed by adding parallelization mechanisms so that the server(s) may sustain an increased workload. In this work, we propose to improve the performance of distributed storage systems in clusters by efficiently using the underlying high-performance network to access distant storage systems. The main question we are addressing is: do high-speed networks of clusters fit the requirements of a transparent, efficient and high-performance access to remote storage? We show that storage requirements are very different from those of parallel computation. High-speed networks of clusters were designed to optimize communications between different nodes of a parallel application. We study their utilization in a very different context, storage in clusters, where client-server models are generally used to access remote storage (for instance NFS, PVFS or LUSTRE). Our experimental study based on the usage of the GM programming interface of MYRINET high-speed networks for distributed storage raised several interesting problems. Firstly, the specific memory utilization in the storage access system layers does not easily fit the traditional memory model of high-speed networks. Secondly, client-server models that are used for distributed storage have specific requirements on message control and event processing, which are not handled by existing interfaces. We propose different solutions to solve communication control problems at the filesystem level. We show that a modification of the network programming interface is required. Data transfer issues need an adaptation of the operating system. We detail several propositions for network programming interfaces which make their utilization easier in the context of distributed storage. The integration of a flexible processing of data transfer in the new programming interface MYRINET/MX is finally presented. Performance evaluations show that its usage in the context of both storage and other types of applications is easy and efficient.

Keywords: distributed storage, remote file access, cluster, high-speed network, MYRINET, zero-copy, memory registration, communication control, event notification, application programming interface

Procedia PDF Downloads 224
3263 An Approach towards Smart Future: Ict Infrastructure Integrated into Urban Water Networks

Authors: Ahsan Ali, Mayank Ostwal, Nikhil Agarwal

Abstract:

According to a World Bank report, millions of people across the globe still do not have access to improved water services. With the uninterrupted growth of cities and urban populations, there is a mounting need to safeguard the sustainable expansion of cities and to ensure the efficient functioning of urban components and high living standards for residents. The water and sanitation network of an urban development is one of the most essential parts of its critical infrastructure. Growth in the urban population leads to increased water demand, and thus the local water resources are severely strained. 'Smart water' refers to water and wastewater infrastructure that is able to manage the limited resources and the energy used to transport them. It enables the sustainable consumption of water resources through a coordinated water management system, by integrating Information and Communication Technology (ICT) solutions, aimed at maximizing the socio-economic benefits without compromising environmental values. This paper presents a case study from a medium-sized city in north-western Pakistan. Currently, water is being contaminated due to the proximity between water and sewer pipelines in the study area, leading to public health issues. Due to unsafe grey water infiltration, the scarce groundwater is also getting polluted. This research takes into account the design of a smart urban water network by integrating ICT with the urban water network. The proximity between the existing water supply network and the sewage network is analyzed, and a design for a new water supply system is proposed. Real-time mapping of the existing urban utility networks is produced with the help of GIS applications. The issue of grey water infiltration is addressed by providing sustainable solutions with locally available materials, keeping in mind the economic condition of the area. To deal with the current growth of the urban population, it is vital to develop new water resources; hence, distinctive and cost-effective procedures to harvest rainwater are suggested as part of the study.

Keywords: GIS, smart water, sustainability, urban water management

Procedia PDF Downloads 220
3262 Blockchain-Based Approach on Security Enhancement of Distributed System in Healthcare Sector

Authors: Loong Qing Zhe, Foo Jing Heng

Abstract:

A variety of data files are now available on the internet due to the advancement of technology across the globe today. As more and more data are being uploaded on the internet, people are becoming more concerned that their private data, particularly medical health records, are being compromised and sold to others for money. Hence, the accessibility and confidentiality of patients' medical records have to be protected through electronic means. Blockchain technology is introduced to offer patients security against adversaries or unauthorised parties. In the blockchain network, only authorised personnel or organisations that have been validated as nodes may share information and data. For any change within the network, including adding a new block or modifying existing information about the block, a majority of two-thirds of the vote is required to confirm its legitimacy. Additionally, a consortium permission blockchain will connect all the entities within the same community. Consequently, all medical data in the network can be safely shared with all authorised entities. Also, synchronization can be performed within the cloud since the data is real-time. This paper discusses an efficient method for storing and sharing electronic health records (EHRs). It also examines the framework of roles within the blockchain and proposes a new approach to maintain EHRs with keyword indexes to search for patients' medical records while ensuring data privacy.

Keywords: healthcare sectors, distributed system, blockchain, electronic health records (EHR)

Procedia PDF Downloads 196
3261 Performance Analysis and Energy Consumption of Routing Protocol in MANET Using Grid Topology

Authors: Vivek Kumar Singh, Tripti Singh

Abstract:

An ad hoc wireless network consists of mobile nodes which create an underlying architecture for communication without the help of traditional fixed-position routers. Ad hoc On-demand Distance Vector (AODV) is a routing protocol used for Mobile Ad hoc Networks (MANETs). The architecture must nevertheless maintain communication routes although the hosts are mobile and have limited transmission range. There are different protocols for handling routing in the mobile environment. Routing protocols used in fixed infrastructure networks cannot be used efficiently for mobile ad hoc networks, so MANETs require different protocols. This paper presents a performance analysis of the routing protocols under various parameter patterns with the two-ray propagation model.

Keywords: AODV, packet transmission rate, pause time, ZRP, QualNet 6.1

Procedia PDF Downloads 834
3260 Performance Analysis of Traffic Classification with Machine Learning

Authors: Htay Htay Yi, Zin May Aye

Abstract:

Network security plays a central role in the ICT environment because the number of malicious users is continually growing in education, business, and other ICT-related realms. Network security contraventions are typically described and examined centrally, based on a security event management system. Firewalls, Intrusion Detection Systems (IDS), and Intrusion Prevention Systems are becoming essential to monitor or prevent potential violations, attack incidents, and imminent threats. In this system, the firewall rules are set only where the system policies require them. The dataset deployed in this system is derived from the testbed environment. DoS and PortScan traffic is applied in the testbed with the firewall and IDS implementation. The network traffic is classified as normal or attack in the existing testbed environment based on six machine learning classification methods applied in the system. The dataset is based on CICIDS2017, with some added features, and the system tested 26 features from the applied dataset. The system aims to reduce false positive rates and to improve accuracy in the implemented testbed design. It also shows good performance by selecting important features and comparing against an existing dataset using the machine learning classifiers.
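
A minimal sketch of the classification step is given below using scikit-learn; the CSV file, feature columns and the particular six classifiers are assumptions standing in for the paper's testbed dataset and methods.

```python
# Sketch: six classifiers distinguishing normal traffic from DoS/PortScan attacks
# on a CICIDS2017-style feature table. File and column names are assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.metrics import classification_report

df = pd.read_csv("testbed_traffic.csv")         # 26 features + a 'label' column
X, y = df.drop(columns=["label"]), df["label"]  # label: normal / DoS / PortScan
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

classifiers = {
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "kNN": KNeighborsClassifier(),
    "NaiveBayes": GaussianNB(),
    "DecisionTree": DecisionTreeClassifier(),
    "RandomForest": RandomForestClassifier(n_estimators=200),
    "SVM": SVC(),
}
for name, clf in classifiers.items():
    model = make_pipeline(StandardScaler(), clf)
    model.fit(X_tr, y_tr)
    print(name)
    print(classification_report(y_te, model.predict(X_te)))  # per-class precision/recall/F1
```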

Keywords: false negative rate, intrusion detection system, machine learning methods, performance

Procedia PDF Downloads 122
3259 Discovering the Effects of Meteorological Variables on the Air Quality of Bogota, Colombia, by Data Mining Techniques

Authors: Fabiana Franceschi, Martha Cobo, Manuel Figueredo

Abstract:

Bogotá, the capital of Colombia, is its largest city and one of the most polluted in Latin America due to the fast economic growth of the last ten years. Bogotá has been affected by high pollution events which led to high concentrations of PM10 and NO2, exceeding the local 24-hour legal limits (100 and 150 µg/m³, respectively). The most important pollutants in the city are PM10 and PM2.5 (which are associated with respiratory and cardiovascular problems), and it is known that their concentrations in the atmosphere depend on local meteorological factors. Therefore, it is necessary to establish the relationships between the meteorological variables and the concentrations of atmospheric pollutants such as PM10, PM2.5, CO, SO2, NO2 and O3. This study aims to determine the interrelations between meteorological variables and air pollutants in Bogotá using data mining techniques. Data from 13 monitoring stations were collected from the Bogotá Air Quality Monitoring Network for the period 2010-2015. The Principal Component Analysis (PCA) algorithm was applied to obtain primary relations between all the parameters, and afterwards the K-means clustering technique was implemented to corroborate the relations found previously and to find patterns in the data. PCA was also used on a per-shift basis (morning, afternoon, night and early morning) to validate possible variation of the previous trends, and on a per-year basis to verify that the identified trends remained throughout the study period. Results demonstrated that wind speed, wind direction, temperature, and NO2 are the factors with the strongest influence on PM10 concentrations. Furthermore, it was confirmed that high humidity episodes increased PM2.5 levels. It was also found that there are directly proportional relationships between O3 levels and wind speed and radiation, while there is an inverse relationship between O3 levels and humidity. Concentrations of SO2 increase with the presence of PM10 and decrease with wind speed and wind direction. The results also showed a decreasing trend of pollutant concentrations over the last five years, and that in rainy periods (March-June and September-December) some trends regarding precipitation were stronger. Results obtained with K-means demonstrated that it was possible to find patterns in the data, and they also showed similar conditions and data distributions among the Carvajal, Tunal and Puente Aranda stations, and also between Parque Simón Bolívar and Las Ferias. It was verified that the aforementioned trends prevailed during the study period by applying the same technique per year. It was concluded that the PCA algorithm is useful for establishing preliminary relationships among variables, and K-means clustering for finding patterns in the data and understanding their distribution. The discovery of patterns in the data allows using these clusters as an input to an Artificial Neural Network prediction model.
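
The two-step analysis (PCA for preliminary relations, then K-means on the scores) can be sketched as follows; the file name, column names and number of clusters are assumptions, not the study's configuration.

```python
# Sketch of the analysis described above: PCA loadings relate meteorological
# variables to pollutants, then K-means groups similar records. Names are assumed.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

cols = ["PM10", "PM2.5", "CO", "SO2", "NO2", "O3",
        "wind_speed", "wind_dir", "temperature", "humidity", "radiation", "rain"]
df = pd.read_csv("bogota_air_quality_2010_2015.csv")[cols].dropna()
X = StandardScaler().fit_transform(df)

pca = PCA(n_components=3).fit(X)
# Loadings show which meteorological variables move together with each pollutant
loadings = pd.DataFrame(pca.components_.T, index=cols, columns=["PC1", "PC2", "PC3"])
print(loadings.round(2), "\nexplained variance:", pca.explained_variance_ratio_)

# K-means on the PCA scores to corroborate the relations and find station patterns
scores = pca.transform(X)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scores)
df["cluster"] = labels          # clusters can later feed an ANN prediction model
print(df.groupby("cluster")[["PM10", "wind_speed", "humidity"]].mean().round(1))
```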

Keywords: air pollution, air quality modelling, data mining, particulate matter

Procedia PDF Downloads 261
3258 Revitalization Strategy of Beijing-Tianjin-Hebei Rural Areas Organized by Production-Living-Ecology Spatial Network at Township Level

Authors: Liuhui Zhu, Peng Zeng

Abstract:

The rural revitalization strategy means placing the countryside and the city on the same level, and achieving urban-rural integration and the comprehensive development of rural areas. The Beijing-Tianjin-Hebei rural areas have always been the weak links in the region, with markedly uneven development between urban and rural areas; these rural areas need to join the overall regional synergy. Based on an analysis of the characteristics and problems of rural development in the region from the perspective of production-living-ecology space, the paper proposes the township as the basic unit for rural revitalization, in accordance with the overall requirements of the rural revitalization strategy. The basic unit helps to realize resource arrangement, functional organization, and collaborative governance organized by the production-living-ecology spatial network. The paper summarizes planning strategies for the basic unit. Through spatial cognition and spatial reconstruction, the three spaces are networked through bases, nodes, and connections to improve the comprehensive value of rural areas and achieve the multiple goals of rural revitalization.

Keywords: rural revitalization, Beijing-Tianjin-Hebei region, township level, production-living-ecology spatial network

Procedia PDF Downloads 198
3257 Analysis of Causality between Defect Causes Using Association Rule Mining

Authors: Sangdeok Lee, Sangwon Han, Changtaek Hyun

Abstract:

Construction defects are major components that result in negative impacts on project performance including schedule delays and cost overruns. Since construction defects generally occur when a few associated causes combine, a thorough understanding of defect causality is required in order to more systematically prevent construction defects. To address this issue, this paper uses association rule mining (ARM) to quantify the causality between defect causes, and social network analysis (SNA) to find indirect causality among them. The suggested approach is validated with 350 defect instances from concrete works in 32 projects in Korea. The results show that the interrelationships revealed by the approach reflect the characteristics of the concrete task and the important causes that should be prevented.
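
A minimal sketch of the ARM step using the Apriori algorithm (via mlxtend) is shown below; the defect-cause labels and thresholds are invented for illustration and do not come from the 350 validated instances.

```python
# Sketch of the ARM step: mining rules between co-occurring defect causes.
# Cause labels and support/confidence thresholds are illustrative assumptions.
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

# Each row = one defect instance, listed as the causes observed together
defects = [
    ["poor curing", "high temperature"],
    ["poor curing", "high temperature", "early formwork removal"],
    ["segregation", "poor compaction"],
    ["poor curing", "early formwork removal"],
    ["segregation", "poor compaction", "high water content"],
]
te = TransactionEncoder()
onehot = te.fit(defects).transform(defects)
df = pd.DataFrame(onehot, columns=te.columns_)

frequent = apriori(df, min_support=0.3, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.6)
# Rules such as {poor curing} -> {high temperature} quantify cause-to-cause links;
# the antecedent/consequent pairs can then feed SNA to trace indirect causality.
print(rules[["antecedents", "consequents", "support", "confidence", "lift"]])
```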

Keywords: causality, defect causes, social network analysis, association rule mining

Procedia PDF Downloads 371
3256 Performance Evaluation of Routing Protocols in Vehicular Adhoc Networks

Authors: Salman Naseer, Usman Zafar, Iqra Zafar

Abstract:

This study explores the implications of the Vehicular Ad hoc Network (VANET), one domain of the Mobile Ad hoc Network (MANET), in rural and urban scenarios. A VANET provides wireless communication between vehicles and also with roadside units. The Federal Communications Commission of the United States of America has allocated 75 MHz of spectrum in the 5.9 GHz frequency range for dedicated short-range communications (DSRC), specifically designed to enhance road safety applications and entertainment/information applications. Several vehicular projects, viz. California PATH, the Car 2 Car Communication Consortium, ETSI, and the IEEE 1609 working group, have already been conducted to improve overall road safety or traffic management. After a critical literature review, a selection of routing protocols was determined, and their performance was considered in the urban and rural scenarios. Numerous routing protocols for VANET are applied to carry out the current research; their evaluation was conducted with the selected protocols through simulation, using the performance metrics of throughput and packet drop. Excel and Google graph API tools were used for plotting the graphs from the simulation results in order to compare the selected routing protocols with each other. In addition, the sum of the outputs from each scenario was computed to clearly present the divergence in the results. The findings of the current study show that DSR gives enhanced performance, with low packet drop and high throughput, as compared to AODV and DSDV in a congested urban area and in rural environments. On the other hand, in low-density areas, AODV gives better results than DSR. The worth of the current study may be judged by the fact that the information exchanged between vehicles is useful for comfort, safety, and entertainment. Furthermore, the communication system performance depends on the way routing is done in the network and on the routing protocols implemented in the network. The above-presented results lead to policy implications and develop our understanding of the broader spectrum of VANET.

Keywords: AODV, DSDV, DSR, Adhoc network

Procedia PDF Downloads 291
3255 Cooperative Communication of Energy Harvesting Synchronized-OOK IR-UWB Based Tags

Authors: M. A. Mulatu, L. C. Chang, Y. S. Han

Abstract:

Energy harvesting tags with cooperative communication capabilities are emerging as a possible infrastructure for Internet of Things (IoT) applications. This paper studies the cooperative transmission strategy for a network of energy harvesting active networked tags (EnHANTs), which is adapted to the available energy resource and the identification request. We consider a network of EnHANT-equipped objects that communicate with the destination either directly or by cooperating with neighboring objects. We formulate the problem as a Markov decision process (MDP) under the synchronised On/Off keying (S-OOK) pulse modulation format. Simulation results are provided to show the performance of the cooperative transmission policy, compared against the greedy and conservative policies of single-link transmission.
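
As a rough illustration of how such an MDP policy can be computed, the toy value-iteration sketch below uses battery level as the state and {idle, transmit, cooperate} as actions; rewards, costs and the harvesting probability are invented and unrelated to the paper's S-OOK model.

```python
# Toy value-iteration sketch for an energy-aware transmission policy.
# Rewards, energy costs and harvest probability are invented for illustration only.
import numpy as np

levels = 4                      # battery states 0..3 (harvested energy quanta)
actions = ["idle", "transmit", "cooperate"]
cost = {"idle": 0, "transmit": 2, "cooperate": 1}
reward = {"idle": 0.0, "transmit": 1.0, "cooperate": 0.6}   # identification value
p_harvest, gamma = 0.5, 0.9

def step(s, a):
    """Return list of (probability, next_state, reward) for state s and action a."""
    if cost[a] > s:             # not enough energy: action degrades to idle
        a = "idle"
    s_after = s - cost[a]
    return [(p_harvest, min(s_after + 1, levels - 1), reward[a]),
            (1 - p_harvest, s_after, reward[a])]

V = np.zeros(levels)
for _ in range(200):            # value iteration until (approximate) convergence
    V = np.array([max(sum(p * (r + gamma * V[s2]) for p, s2, r in step(s, a))
                      for a in actions) for s in range(levels)])

policy = [max(actions, key=lambda a: sum(p * (r + gamma * V[s2])
                                         for p, s2, r in step(s, a)))
          for s in range(levels)]
print(dict(zip(range(levels), policy)))   # energy-aware action per battery level
```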

Keywords: cooperative communication, transmission strategy, energy harvesting, Markov decision process, value iteration

Procedia PDF Downloads 494
3254 Artificial Intelligence in Penetration Testing of a Connected and Autonomous Vehicle Network

Authors: Phillip Garrad, Saritha Unnikrishnan

Abstract:

The recent popularity of connected and autonomous vehicles (CAV) corresponds with an increase in the risk of cyber-attacks. These cyber-attacks have been instigated both by researchers or white-hat hackers and by cyber-criminals. As connected vehicles move towards full autonomy, the impact of these cyber-attacks also grows. The current research details challenges faced in cybersecurity testing of CAV, including the access to and cost of a representative test setup; another challenge is the lack of experts in the field. Possible solutions to how these challenges can be overcome are reviewed and discussed. From these findings, a software-simulated CAV network is established as a cost-effective representative testbed. Penetration tests are then performed on this simulation, demonstrating a cyber-attack on a CAV. Studies have shown Artificial Intelligence (AI) to improve runtime, increase efficiency and comprehensively cover all the typical test aspects in penetration testing in other industries. There is an attempt to introduce similar AI models into the software simulation. The expectation from this implementation is to see similar improvements in runtime and efficiency for the CAV model. If proven to be an effective means of penetration testing for CAV, this methodology may be used on a full CAV test network.

Keywords: cybersecurity, connected vehicles, software simulation, artificial intelligence, penetration testing

Procedia PDF Downloads 113
3253 Uncovering the Complex Structure of Building Design Process Based on Royal Institute of British Architects Plan of Work

Authors: Fawaz A. Binsarra, Halim Boussabaine

Abstract:

The notion of complexity science has been attracting the interest of researchers and professionals due to the need to enhance our understanding of the dynamics and interaction structure of complex systems. In addition, complexity analysis has been used as an approach to investigate complex systems that contain a large number of components interacting with each other to accomplish specific outcomes and from which specific behavior emerges. The design process is considered a complex activity that involves a large number of interacting components, which can be grouped into design tasks, the design team, and the components of the design process. Those three main aspects of the building design process consist of several components that interact with each other as a dynamic system with complex information flow. In this paper, the goal is to uncover the complex structure of information interactions in the building design process. Investigating the information interactions of the Royal Institute of British Architects Plan of Work 2013 as a case study, using network analysis software to model the information interactions, uncovers the structure and complexity of the building design process and will significantly enhance the efficiency of building design process outcomes.

Keywords: complexity, process, building design, RIBA, design complexity, network, network analysis

Procedia PDF Downloads 533
3252 Developing a Leukemia Diagnostic System Based on Hybrid Deep Learning Architectures in Actual Clinical Environments

Authors: Skyler Kim

Abstract:

An early diagnosis of leukemia has always been a challenge to doctors and hematologists. On a worldwide basis, it was reported that there were approximately 350,000 new cases in 2012, and diagnosing leukemia was time-consuming and inefficient because of an endemic shortage of flow cytometry equipment in current clinical practice. As the number of medical diagnosis tools increased and a large volume of high-quality data was produced, there was an urgent need for more advanced data analysis methods. One of these methods was the AI approach. This approach has become a major trend in recent years, and several research groups have been working on developing these diagnostic models. However, designing and implementing a leukemia diagnostic system in real clinical environments based on a deep learning approach with larger sets remains complex. Leukemia is a major hematological malignancy that results in mortality and morbidity throughout different ages. We decided to select acute lymphocytic leukemia to develop our diagnostic system, since acute lymphocytic leukemia is the most common type of leukemia, accounting for 74% of all children diagnosed with leukemia. The results from this development work can be applied to all other types of leukemia. To develop our model, the Kaggle dataset was used, which consists of 15135 images in total; 8491 of these are images of abnormal cells, and 5398 images are normal. In this paper, we design and implement a leukemia diagnostic system in a real clinical environment based on deep learning approaches with larger sets. The proposed diagnostic system has the function of detecting and classifying leukemia. Differently from other AI approaches, we explore hybrid architectures to improve on the current performance. First, we developed two independent convolutional neural network models: VGG19 and ResNet50. Then, using both VGG19 and ResNet50, we developed a hybrid deep learning architecture employing transfer learning techniques to extract features from each input image. In our approach, the features fused from specific abstraction layers can be regarded as auxiliary features and lead to further improvement of the classification accuracy. Features extracted from the lower levels are combined into higher-dimensional feature maps to help improve the discriminative capability of intermediate features and also to overcome the problem of vanishing or exploding network gradients. By comparing VGG19 and ResNet50 with the proposed hybrid model, we concluded that the hybrid model had a significant advantage in accuracy. The detailed results of each model’s performance and their pros and cons will be presented at the conference.
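
A minimal Keras sketch of the described feature-fusion idea is given below: pooled features from frozen VGG19 and ResNet50 backbones are concatenated and classified by a small dense head; the input size, head dimensions and training details are assumptions (backbone-specific preprocess_input steps are omitted for brevity).

```python
# Keras sketch of the hybrid architecture: VGG19 and ResNet50 features are pooled,
# concatenated, and classified by a shared dense head. Dimensions are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG19, ResNet50

inputs = layers.Input(shape=(224, 224, 3))

vgg = VGG19(weights="imagenet", include_top=False)
res = ResNet50(weights="imagenet", include_top=False)
vgg.trainable = False        # transfer learning: freeze both backbones
res.trainable = False

f_vgg = layers.GlobalAveragePooling2D()(vgg(inputs))    # 512-d feature vector
f_res = layers.GlobalAveragePooling2D()(res(inputs))    # 2048-d feature vector

fused = layers.Concatenate()([f_vgg, f_res])             # fused auxiliary features
x = layers.Dense(256, activation="relu")(fused)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(2, activation="softmax")(x)        # abnormal vs normal cell

model = Model(inputs, outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=...)  # e.g. the Kaggle ALL images
```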

Keywords: acute lymphoblastic leukemia, hybrid model, leukemia diagnostic system, machine learning

Procedia PDF Downloads 191
3251 Ensemble Machine Learning Approach for Estimating Missing Data from CO₂ Time Series

Authors: Atbin Mahabbati, Jason Beringer, Matthias Leopold

Abstract:

To address the global challenges of climate and environmental change, there is a need to quantify and reduce uncertainties in environmental data, including observations of carbon, water, and energy. Global eddy covariance flux tower networks (FLUXNET) and their regional counterparts (i.e., OzFlux, AmeriFlux, China Flux, etc.) were established in the late 1990s and early 2000s to address this demand. Despite the capability of eddy covariance in validating process modelling analyses, field surveys and remote sensing assessments, there are some serious concerns regarding the challenges associated with the technique, e.g. data gaps and uncertainties. To address these concerns, this research has developed an ensemble model to fill the data gaps in CO₂ flux, to avoid the limitations of using a single algorithm and, therefore, provide less error and reduce the uncertainties associated with the gap-filling process. In this study, the data of five towers in the OzFlux Network (Alice Springs Mulga, Calperum, Gingin, Howard Springs and Tumbarumba) during 2013 were used to develop an ensemble machine learning model, using five feedforward neural networks (FFNN) with different structures combined with an eXtreme Gradient Boosting (XGB) algorithm. The former, the FFNNs, provided the primary estimations in the first layer, while the latter, XGB, used the outputs of the first layer as its input to provide the final estimations of CO₂ flux. The introduced model showed slight superiority over each single FFNN and over XGB when each of these two methods was used individually, with overall RMSEs of 2.64, 2.91, and 3.54 g C m⁻² yr⁻¹, respectively (3.54 provided by the best FFNN). The most significant improvement occurred in the estimation of the extreme diurnal values (during midday and sunrise), as well as in nocturnal estimations, which are generally considered among the most challenging parts of CO₂ flux gap-filling. The towers, as well as seasonality, showed different levels of sensitivity to the improvements provided by the ensemble model. For instance, Tumbarumba showed more sensitivity compared to Calperum, where the differences between the ensemble model on the one hand and the FFNNs and XGB on the other were the smallest of all five sites. Besides, the performance difference between the ensemble model and its components used individually was more significant during the warm season (Jan, Feb, Mar, Oct, Nov, and Dec) compared to the cold season (Apr, May, Jun, Jul, Aug, and Sep) due to the higher amount of photosynthesis by plants, which led to a larger range of CO₂ exchange. In conclusion, the introduced ensemble model slightly improved the accuracy of CO₂ flux gap-filling and the robustness of the model. Therefore, ensemble machine learning models are potentially capable of improving data estimation and regression outcomes when there seems to be no more room for improvement using a single algorithm.
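
The two-layer idea (five FFNNs feeding an XGB meta-learner) can be sketched as below with scikit-learn and xgboost; the driver matrix, network structures and hyperparameters are placeholders, and a production gap-filler would use out-of-fold first-layer predictions to avoid leakage.

```python
# Sketch of the two-layer ensemble: five feedforward networks produce first-layer
# estimates of CO2 flux, and XGBoost maps them to the final value. Data are synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error
from xgboost import XGBRegressor

# X: meteorological/radiation drivers; y: observed CO2 flux (gap-free samples)
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 8))                                   # placeholder drivers
y = X[:, 0] * 2 - X[:, 1] + rng.normal(scale=0.3, size=2000)     # placeholder flux

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

structures = [(32,), (64,), (64, 32), (128, 64), (32, 32, 16)]   # five FFNNs
ffnns = [MLPRegressor(hidden_layer_sizes=h, max_iter=500, random_state=i).fit(X_tr, y_tr)
         for i, h in enumerate(structures)]

# First-layer outputs become the input of the second-layer XGB model
# (in practice, out-of-fold predictions would be stacked instead of in-sample ones)
Z_tr = np.column_stack([m.predict(X_tr) for m in ffnns])
Z_te = np.column_stack([m.predict(X_te) for m in ffnns])
xgb = XGBRegressor(n_estimators=300, learning_rate=0.05).fit(Z_tr, y_tr)

rmse = np.sqrt(mean_squared_error(y_te, xgb.predict(Z_te)))
print(f"ensemble RMSE on held-out data: {rmse:.3f}")
# Gap-filling: for timestamps with missing flux, predict from drivers via the stack.
```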

Keywords: carbon flux, Eddy covariance, extreme gradient boosting, gap-filling comparison, hybrid model, OzFlux network

Procedia PDF Downloads 145
3250 Community Empowerment: The Contribution of Network Urbanism on Urban Poverty Reduction

Authors: Lucia Antonela Mitidieri

Abstract:

This research analyzes the application of a model of settlement management based on networks of territorial integration, which advocates planning as a cyclical and participatory process that engages early on with civil society, the private sector and the state. Through qualitative methods such as participant observation, interviews with the snowball technique and active research in the territories, concrete results of community empowerment are obtained from the promotion of productive enterprises and community spaces of encounter and exchange. Studying the cultural and organizational dimensions of empowerment allows the construction of indicators, such as increased capacities or community cohesion, that can support local governments in achieving sustainable urban development for a reduction of urban poverty.

Keywords: community spaces, empowerment, network urbanism, participatory process

Procedia PDF Downloads 336
3249 Time Series Analysis: The Case of China-USA Trade, Examining the Enormity of Abnormal Pricing with the Exchange Rate during COVID-19

Authors: Md. Mahadi Hasan Sany, Mumenunnessa Keya, Sharun Khushbu, Sheikh Abujar

Abstract:

Since the beginning of China's economic reform, trade between the U.S. and China has grown rapidly, and it has increased further since China's accession to the World Trade Organization in 2001. The U.S. imports more from China than it exports to China; the trade war between China and the U.S. reduced the trade deficit in 2019, but in 2020 the opposite happened. In international and U.S. trade, Washington launched a full-scale trade war against China in March 2016, and this was followed by a catastrophic epidemic. The main goal of our study is to measure and predict trade relations between China and the U.S. before and after the arrival of the COVID epidemic. A standard ML model uses different data as input but has no time dimension, which is present in time series models, and is only able to predict the future from previously observed data. The LSTM (a well-known recurrent neural network) model is applied as the best time series model for trade forecasting. We have been able to create a sustainable forecasting system for trade between China and the US by closely monitoring a dataset published by the state website NZ Tatauranga Aotearoa from January 1, 2015, to April 30, 2021. Throughout the study, we provide a 180-day forecast that outlines what would happen to trade between China and the US during COVID-19. In addition, we illustrate that the LSTM model provides outstanding outcomes in time series data analysis compared to RFR and SVR (both ML models). The study looks at how the current COVID outbreak affects China-US trade. As a comparative study, the RMSE is calculated for LSTM, RFR and SVR. From our time series analysis, it can be said that the LSTM model has given very favorable results regarding the future export situation in China-US trade.
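
A minimal Keras LSTM sketch of the forecasting step is shown below; the window length, network size and the synthetic series are assumptions standing in for the monthly export-value data.

```python
# Minimal Keras LSTM sketch: sliding windows of the past 12 observations predict
# the next value of the series. The synthetic series stands in for the trade data.
import numpy as np
from tensorflow.keras import layers, models

def make_windows(series, window=12):
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = np.array(series[window:])
    return X[..., np.newaxis], y            # shape: (samples, window, 1)

series = np.sin(np.linspace(0, 20, 300)) + np.linspace(0, 2, 300)  # placeholder export values
X, y = make_windows(series)
split = int(0.8 * len(X))

model = models.Sequential([
    layers.Input(shape=(12, 1)),
    layers.LSTM(64),
    layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:split], y[:split], epochs=20, batch_size=32,
          validation_data=(X[split:], y[split:]), verbose=0)

rmse = np.sqrt(np.mean((model.predict(X[split:], verbose=0).ravel() - y[split:]) ** 2))
print(f"validation RMSE: {rmse:.3f}")       # compare against RFR/SVR baselines
```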

Keywords: RFR, China-U.S. trade war, SVR, LSTM, deep learning, Covid-19, export value, forecasting, time series analysis

Procedia PDF Downloads 202
3248 Self-Organizing Maps for Credit Card Fraud Detection

Authors: ChunYi Peng, Wei Hsuan CHeng, Shyh Kuang Ueng

Abstract:

This study focuses on the application of self-organizing map (SOM) technology in analyzing credit card transaction data, aiming to enhance the accuracy and efficiency of fraud detection. SOM, as an artificial neural network, is particularly suited for pattern recognition and data classification, making it highly effective for the complex and variable nature of credit card transaction data. By analyzing transaction characteristics with SOM, the research identifies abnormal transaction patterns that could indicate potentially fraudulent activities. Moreover, this study has developed a specialized visualization tool to intuitively present the relationships between SOM analysis outcomes and transaction data, aiding financial institution personnel in quickly identifying and responding to potential fraud, thereby reducing financial losses. Additionally, the research explores the integration of SOM technology with composite intelligent system technologies (including finite state machines, fuzzy logic, and decision trees) to further improve fraud detection accuracy. This multimodal approach provides a comprehensive perspective for identifying and understanding various types of fraud within credit card transactions. In summary, by integrating SOM technology with visualization tools and composite intelligent system technologies, this research offers a more effective method of fraud detection for the financial industry, not only enhancing detection accuracy but also deepening the overall understanding of fraudulent activities.
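
A minimal sketch of the SOM step using the MiniSom library is given below: transactions whose quantisation error is unusually large are flagged as candidate frauds; the feature columns, map size and threshold are assumptions.

```python
# Sketch: train a SOM on scaled transaction features and flag records far from
# every learned prototype as candidate frauds. Columns and threshold are assumed.
import numpy as np
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from minisom import MiniSom

df = pd.read_csv("transactions.csv")            # amount, hour, merchant_code, ...
X = MinMaxScaler().fit_transform(df[["amount", "hour", "merchant_code", "n_tx_24h"]])

som = MiniSom(10, 10, X.shape[1], sigma=1.0, learning_rate=0.5, random_seed=0)
som.random_weights_init(X)
som.train_random(X, num_iteration=5000)

# Quantisation error per transaction: distance to its best-matching unit
qe = np.linalg.norm(X - som.quantization(X), axis=1)
threshold = np.percentile(qe, 99)               # top 1% most atypical transactions
df["suspicious"] = qe > threshold
print(df.loc[df["suspicious"]].head())          # hand these to the visualisation tool
```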

Keywords: self-organizing map technology, fraud detection, information visualization, data analysis, composite intelligent system technologies, decision support technologies

Procedia PDF Downloads 64
3247 Finding Viable Pollution Routes in an Urban Network under a Predefined Cost

Authors: Dimitra Alexiou, Stefanos Katsavounis, Ria Kalfakakou

Abstract:

In an urban area, the transportation routes should be planned so as to minimize the pollution they provoke, taking into account the cost of such routes. In the sequel, these routes are referred to as pollution routes. The transportation network is expressed by a weighted graph G = (V, E, D, P), where every vertex represents a location to be served and E contains unordered pairs (edges) of elements in V that indicate a simple road. A distance/cost and a weight depicting the air pollution provoked by a vehicle traversing the road are also assigned to each road; these are the items of the sets D and P, respectively. Furthermore, the investigated pollution routes must not exceed predefined values for the route cost and the route pollution level of the vehicle transition. In this paper, we present an algorithm that generates such routes so that the decision maker can select the most appropriate one.
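
As one possible reading of the algorithm (the paper does not publish its pseudocode), the sketch below enumerates Pareto-optimal s-t routes whose total cost and total pollution stay within the predefined limits, using a label-setting search; the toy network data are invented.

```python
# Sketch of generating pollution routes under both caps: keep Pareto-optimal
# (cost, pollution) labels per vertex and report every feasible s-t path.
import heapq

def pollution_routes(adj, s, t, max_cost, max_pollution):
    """adj[u] = list of (v, cost, pollution). Returns feasible s-t routes."""
    routes, labels = [], {s: [(0, 0)]}
    heap = [(0, 0, s, [s])]                     # (cost, pollution, node, path)
    while heap:
        c, p, u, path = heapq.heappop(heap)
        if u == t:
            routes.append((path, c, p))
            continue
        for v, dc, dp in adj.get(u, []):
            nc, np_ = c + dc, p + dp
            if nc > max_cost or np_ > max_pollution or v in path:
                continue
            # prune labels dominated by an already-found label at v
            if any(oc <= nc and op <= np_ for oc, op in labels.get(v, [])):
                continue
            labels.setdefault(v, []).append((nc, np_))
            heapq.heappush(heap, (nc, np_, v, path + [v]))
    return routes

# Toy network: edge = (target, distance/cost, pollution weight)
adj = {"s": [("a", 2, 5), ("b", 4, 1)], "a": [("t", 3, 4)],
       "b": [("t", 3, 2)], "t": []}
for path, c, p in pollution_routes(adj, "s", "t", max_cost=8, max_pollution=7):
    print(path, "cost:", c, "pollution:", p)   # candidate routes for the decision maker
```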

Keywords: bi-criteria, pollution, shortest paths, computation

Procedia PDF Downloads 379