Search results for: decision based artificial neural network

31216 Impact Force Difference on Natural Grass Versus Synthetic Turf Football Fields

Authors: Nathaniel C. Villanueva, Ian K. H. Chun, Alyssa S. Fujiwara, Emily R. Leibovitch, Brennan E. Yamamoto, Loren G. Yamamoto

Abstract:

Introduction: In previous studies of high school sports, over 15% of concussions were attributed to contact with the playing surface. While artificial turf fields are increasing in popularity due to lower maintenance costs, artificial turf has been associated with more ankle and knee injuries, with inconclusive data on concussions. In this study, natural grass and artificial football fields were compared in terms of deceleration on fall impact. Methods: Accelerometers were placed on the forehead, apex of the head, and right ear of a Century Body Opponent Bag (BOB) manikin. A Riddell HITS football helmet was secured onto the head of the manikin over the accelerometers. This manikin was dropped onto natural grass (n = 10) and artificial turf (n = 9) high school football fields. The manikin was dropped from a stationary position at a height of 60 cm onto its front, back, and left side. Each of these drops was conducted 10 times at the 40-yard line, 20-yard line, and endzone. The net deceleration on impact was calculated as a net vector from each of the three accelerometers’ x, y, and z vectors from the three different locations on the manikin’s head (9 vector measurements per drop). Results: Mean values for the multiple drops were calculated for each accelerometer and drop type for each field. All accelerometers in forward and backward falls and one accelerometer in side falls showed significantly greater impact force on synthetic turf compared to the natural grass surfaces. Conclusion: Impact force was higher on synthetic fields for all drop types for at least one of the accelerometer locations. These findings suggest that concussion risk might be higher for athletes playing on artificial turf fields.

Keywords: concussion, football, biomechanics, sports

Procedia PDF Downloads 158
31215 The Use of Artificial Intelligence in Digital Forensics and Incident Response in a Constrained Environment

Authors: Dipo Dunsin, Mohamed C. Ghanem, Karim Ouazzane

Abstract:

Digital investigators often have a hard time spotting evidence in digital information. It has become hard to determine which source of proof relates to a specific investigation. A growing concern is that the various processes, technology, and specific procedures used in the digital investigation are not keeping up with criminal developments. Therefore, criminals are taking advantage of these weaknesses to commit further crimes. In digital forensics investigations, artificial intelligence is invaluable in identifying crime. It has been observed that an algorithm based on artificial intelligence (AI) is highly effective in detecting risks, preventing criminal activity, and forecasting illegal activity. Providing objective data and conducting an assessment is the goal of digital forensics and digital investigation, which will assist in developing a plausible theory that can be presented as evidence in court. Researchers and other authorities have used the available data as evidence in court to convict a person. This research paper aims to develop a multiagent framework for digital investigations using specific intelligent software agents (ISA). The agents communicate to address particular tasks jointly and keep the same objectives in mind during each task. The rules and knowledge contained within each agent are dependent on the investigation type. A criminal investigation is classified quickly and efficiently using the case-based reasoning (CBR) technique. The resulting framework, MADIK, is implemented using the Java Agent Development Framework, with Eclipse as the development environment, a Postgres repository, and a rule engine for agent reasoning. The proposed framework was tested using the Lone Wolf image files and datasets. Experiments were conducted using various sets of ISA and VMs. There was a significant reduction in the time taken for the Hash Set Agent to execute. As a result of loading the agents, 5 percent of the time was lost, as the File Path Agent recommended deleting 1,510 files, while the Timeline Agent found multiple executable files. In comparison, the integrity check carried out on the Lone Wolf image file using a digital forensic tool kit took approximately 48 minutes (2,880 s), whereas the MADIK framework accomplished this in 16 minutes (960 s). The framework is integrated with Python, allowing for further integration of other digital forensic tools, such as AccessData Forensic Toolkit (FTK), Wireshark, Volatility, and Scapy.
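
The multi-agent design described above delegates narrow forensic tasks (hash matching, file-path screening, timeline building) to specialised agents. The sketch below is a minimal, hypothetical Python illustration of the hash-lookup idea behind a "Hash Set Agent"; the actual MADIK framework is implemented in Java on JADE, and the directory name and hash list used here are invented for illustration.

import hashlib
from pathlib import Path

# Hypothetical set of SHA-256 digests of known files of interest.
KNOWN_BAD_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",  # SHA-256 of the string "test"
}

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large evidence images are not loaded at once."""
    digest = hashlib.sha256()
    with path.open("rb") as handle:
        for chunk in iter(lambda: handle.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def hash_set_agent(evidence_dir: str) -> list[Path]:
    """Return files under evidence_dir whose hashes appear in the known-hash set."""
    hits = []
    for path in Path(evidence_dir).rglob("*"):
        if path.is_file() and sha256_of(path) in KNOWN_BAD_HASHES:
            hits.append(path)
    return hits

if __name__ == "__main__":
    for match in hash_set_agent("./lone_wolf_extract"):  # hypothetical mount point
        print("flagged:", match)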

Keywords: artificial intelligence, computer science, criminal investigation, digital forensics

Procedia PDF Downloads 212
31214 RV-YOLOX: Object Detection on Inland Waterways Based on Optimized YOLOX Through Fusion of Vision and 3+1D Millimeter Wave Radar

Authors: Zixian Zhang, Shanliang Yao, Zile Huang, Zhaodong Wu, Xiaohui Zhu, Yong Yue, Jieming Ma

Abstract:

Unmanned Surface Vehicles (USVs) are valuable due to their ability to perform dangerous and time-consuming tasks on the water. Object detection tasks are significant in these applications. However, inherent challenges, such as the complex distribution of obstacles, reflections from shore structures, water surface fog, etc., hinder the performance of object detection of USVs. To address these problems, this paper provides a fusion method for USVs to effectively detect objects in the inland surface environment, utilizing vision sensors and 3+1D millimeter-wave (MMW) radar. MMW radar is complementary to vision sensors, providing robust environmental information. The radar 3D point cloud is transformed into a 2D radar pseudo-image to unify the radar and vision information formats by utilizing a point transformer. We propose a multi-source object detection network (RV-YOLOX) based on radar-vision fusion for the inland waterways environment. The performance is evaluated on our self-recorded waterways dataset. Compared with the YOLOX network, our fusion network significantly improves detection accuracy, especially for objects under poor lighting conditions.
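
As a rough illustration of the radar branch, the snippet below rasterises a 3+1D radar point cloud (x, y, z, Doppler) into a bird's-eye-view 2D pseudo-image whose channels hold occupancy, mean height and mean Doppler. This is only a sketch of the general idea, not the point-transformer-based conversion used in RV-YOLOX, and all grid parameters are assumed.

import numpy as np

def radar_to_pseudo_image(points, x_range=(0.0, 80.0), y_range=(-40.0, 40.0), cell=0.5):
    """points: (N, 4) array of [x, y, z, doppler] radar detections.
    Returns an (H, W, 3) pseudo-image: occupancy count, mean height, mean Doppler per cell."""
    h = int((y_range[1] - y_range[0]) / cell)
    w = int((x_range[1] - x_range[0]) / cell)
    image = np.zeros((h, w, 3), dtype=np.float32)
    cols = ((points[:, 0] - x_range[0]) / cell).astype(int)
    rows = ((points[:, 1] - y_range[0]) / cell).astype(int)
    valid = (rows >= 0) & (rows < h) & (cols >= 0) & (cols < w)
    for r, c, z, d in zip(rows[valid], cols[valid], points[valid, 2], points[valid, 3]):
        image[r, c, 0] += 1.0          # occupancy count
        image[r, c, 1] += z            # accumulate height
        image[r, c, 2] += d            # accumulate Doppler velocity
    occupied = image[:, :, 0] > 0
    image[occupied, 1] /= image[occupied, 0]   # mean height per occupied cell
    image[occupied, 2] /= image[occupied, 0]   # mean Doppler per occupied cell
    return image

# Example: 100 random detections within the grid (synthetic data).
demo = np.random.rand(100, 4) * np.array([80.0, 80.0, 5.0, 10.0]) + np.array([0.0, -40.0, 0.0, -5.0])
print(radar_to_pseudo_image(demo).shape)   # (160, 160, 3)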

Keywords: inland waterways, YOLO, sensor fusion, self-attention

Procedia PDF Downloads 124
31213 A Smart Sensor Network Approach Using Affordable River Water Level Sensors

Authors: Dian Zhang, Brendan Heery, Maria O’Neill, Ciprian Briciu-Burghina, Noel E. O’Connor, Fiona Regan

Abstract:

Recent developments in sensors, wireless data communication and cloud computing have brought the sensor web to a whole new generation. The introduction of the concept of the 'Internet of Things (IoT)' has brought sensor research to a new level, which involves developing long-lasting, low-cost, environmentally friendly smart sensors; new wireless data communication technologies; big data analytics algorithms; and cloud-based solutions tailored to large-scale smart sensor networks. The next generation of smart sensor networks consists of several layers: the physical layer, where all the smart sensors reside and data pre-processing occurs, either on the sensor itself or on the field gateway; the data transmission layer, where data and instruction exchanges happen; and the data processing layer, where meaningful information is extracted and organized from the pre-processed data stream. There are many definitions of a smart sensor; to summarize them, a smart sensor must be intelligent and adaptable. In future large-scale sensor networks, the collected data are far too large for traditional applications to send, store or process. The sensor unit must therefore be intelligent enough to pre-process collected data locally on board (this process may occur on the field gateway, depending on the sensor network structure). In this case study, three smart sensing methods, corresponding to simple thresholding, a statistical model and the machine learning based MoPBAS method, are introduced and their strengths and weaknesses are discussed as an introduction to the smart sensing concept. Data fusion, the integration of data and knowledge from multiple sources, is a key component of the next generation smart sensor network. For example, in a water level monitoring system, a weather forecast can be extracted from external sources and, if heavy rainfall is expected, the server can send instructions to the sensor nodes to, for instance, increase the sampling rate or switch the sleeping mode on or off accordingly. In this paper, we describe the deployment of 11 affordable water level sensors in the Dodder catchment in Dublin, Ireland. The objective of this paper is to use this deployed river level sensor network as a case study to give a vision of the next generation of smart sensor networks for flood monitoring, assisting agencies in making decisions about deploying resources in the case of a severe flood event. Some of the deployed sensors are located alongside traditional water level sensors for validation purposes. Each key component of the smart sensor network is discussed, which will hopefully inspire researchers working in the sensor research domain.
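
Of the three on-board smart sensing methods mentioned (simple thresholding, a statistical model, and the machine-learning-based MoPBAS), the simplest can be sketched in a few lines: the node only reports samples when the level or its rate of change crosses a limit, so the bulk of raw samples never leave the device. The thresholds below are invented for illustration.

import numpy as np

def on_board_filter(levels_cm, high_level=250.0, max_rise_cm_per_step=5.0):
    """Return indices of samples a node would report: absolute level alarms or fast rises."""
    levels = np.asarray(levels_cm, dtype=float)
    rise = np.diff(levels, prepend=levels[0])
    alarm = (levels > high_level) | (rise > max_rise_cm_per_step)
    return np.flatnonzero(alarm)

# Synthetic river-level trace: slow baseline plus a sudden flood pulse.
trace = np.concatenate([np.linspace(120, 130, 50), np.linspace(130, 300, 20)])
print(on_board_filter(trace))   # only the flood-pulse samples are reported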

Keywords: smart sensing, internet of things, water level sensor, flooding

Procedia PDF Downloads 381
31212 Performance Evaluation of Hierarchical Location-Based Services Coupled to the Greedy Perimeter Stateless Routing Protocol for Wireless Sensor Networks

Authors: Rania Khadim, Mohammed Erritali, Abdelhakim Maaden

Abstract:

Nowadays, wireless sensor networks have attracted worldwide research and industrial interest, because they can be applied in various areas. Geographic routing protocols are very suitable for these networks because they use location information when they need to route packets. Obviously, location information is maintained by location-based services provided by network nodes in a distributed way. In this paper we evaluate the performance of two hierarchical rendezvous location-based services, GLS (Grid Location Service) and HLS (Hierarchical Location Service), coupled to the GPSR routing protocol (Greedy Perimeter Stateless Routing) for wireless sensor networks. The simulations were performed using the NS2 simulator to evaluate the performance and power of the two services in terms of location overhead, request travel time (RTT) and query success ratio (QSR). This work also presents a new scalability performance study of both GLS and HLS, specifically, what happens if the number of nodes N increases. The study focuses on three qualitative metrics: the location maintenance cost, the location query cost and the storage cost.
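
For reference, the two query-side metrics used in the evaluation can be computed directly from per-request simulation records; the sketch below assumes a hypothetical list of (answered, travel_time) tuples extracted from NS2 trace files.

def location_query_metrics(requests):
    """requests: iterable of (answered: bool, travel_time_s: float) per location query.
    Returns (query success ratio, mean request travel time over answered queries)."""
    requests = list(requests)
    answered = [t for ok, t in requests if ok]
    qsr = len(answered) / len(requests) if requests else 0.0
    mean_rtt = sum(answered) / len(answered) if answered else float("nan")
    return qsr, mean_rtt

# Hypothetical trace summary: 4 queries, 3 answered.
print(location_query_metrics([(True, 0.12), (True, 0.35), (False, 0.0), (True, 0.20)]))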

Keywords: location-based services, routing protocols, scalability, wireless sensor networks

Procedia PDF Downloads 372
31211 Network Traffic Classification Scheme for Internet Network Based on Application Categorization for Ipv6

Authors: Yaser Miaji, Mohammed Aloryani

Abstract:

The rise of recent applications in everyday use, such as videoconferencing, online recreation and voice communication, creates a pressing need for novel mechanisms and policies to serve this steep growth in the applications themselves and users' wants. This diversity in web traffic calls for classification and prioritization of the traffic, since some traffic merits more attention, with less delay and loss, than other traffic. This research is intended to reinforce the mechanism by analysing application performance under the proposed mechanism. The mechanism used is quite direct and analytical, and is implemented by modifying the queue limit in the algorithm.

Keywords: traffic classification, IPv6, internet, application categorization

Procedia PDF Downloads 565
31210 Intelligent Agent Travel Reservation System Requirements Definitions Using the Behavioral Patterns Analysis (BPA) Approach

Authors: Assem El-Ansary

Abstract:

This paper illustrates the event-oriented Behavioral Pattern Analysis (BPA) modeling approach in developing an Intelligent Agent Reservation System (IARS). The Event defined in BPA is a real-life conceptual entity that is unrelated to any implementation. The major contributions of this research are the development of the Behavioral Pattern Analysis (BPA) modeling methodology and of an interactive software tool (DECISION), which is based on a combination of the Analytic Hierarchy Process (AHP) and the ELECTRE Multi-Criteria Decision Making (MCDM) methods.
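
As a pointer to the decision-support side, the sketch below shows the standard AHP step of deriving priority weights from a pairwise comparison matrix via its principal eigenvector, together with the usual consistency ratio check. It is a generic illustration of AHP, not the DECISION tool itself, and the comparison values are invented.

import numpy as np

RANDOM_INDEX = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}  # Saaty's RI table

def ahp_weights(pairwise):
    """Return (priority weights, consistency ratio) for a reciprocal pairwise matrix."""
    a = np.asarray(pairwise, dtype=float)
    eigvals, eigvecs = np.linalg.eig(a)
    k = np.argmax(eigvals.real)                      # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                                     # normalised priority vector
    n = a.shape[0]
    ci = (eigvals[k].real - n) / (n - 1)             # consistency index
    cr = ci / RANDOM_INDEX[n] if RANDOM_INDEX.get(n, 0) else 0.0
    return w, cr

# Hypothetical 3-criterion comparison (e.g. price vs. schedule vs. comfort).
matrix = [[1, 3, 5],
          [1/3, 1, 2],
          [1/5, 1/2, 1]]
weights, cr = ahp_weights(matrix)
print(weights.round(3), "CR =", round(cr, 3))        # CR < 0.1 is conventionally acceptable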

Keywords: analysis, intelligent agent, reservation system, modeling methodology, software modeling, event-oriented, behavioral pattern, use cases

Procedia PDF Downloads 484
31209 Measure of Pleasure of Drug Users

Authors: Vano Tsertsvadze, Marina Chavchanidze, Lali Khurtsia

Abstract:

The problem of drug use is often seen as a combination of psychological and social problems, but it can also be considered an economically rational decision in the process of buying pleasure (looking after children, reading, harvesting fruit in the fall, sex, eating, etc.). Before making such decisions, people face a trade-off: when someone chooses a delicious meal, she makes a completely rational decision that the pleasure of eating has far more value than the pleasure she would experience after a two-month diet, showing off her beautiful body on the summer beach. This argument is also true for alcohol, drugs and cigarettes. Smoking has a negative effect on health, but smokers are not afraid of the threat of lung cancer 40 years from now; the pleasure of smoking in the moment is more valuable. Our hypothesis is that unsatisfied pleasure and frustration probably determine the risk of drug dependence. The purpose of the research is: (1) to determine a relative unit of measure of pleasure, which will be used to measure and assess the intensity of various human pleasures; (2) to compare the intensity of pleasure from different kinds of activity with the pleasure received from drug use; and (3) based on the analysis of the data, to identify factors affecting rational decision making. Research method: respondents will be asked to recall the greatest pleasure of their life, which will be used as a measure of the other pleasures. The study will use focus groups and structured interviews.

Keywords: drug, drug-user, measurement, satisfaction

Procedia PDF Downloads 322
31208 ‘Doctor Knows Best’: Reconsidering Paternalism in the NICU

Authors: Rebecca Greenberg, Nipa Chauhan, Rashad Rehman

Abstract:

Paternalism, in its traditional form, seems largely incompatible with Western medicine. In contrast, Family-Centred Care, a partial response to historically authoritative paternalism, carries its own challenges, particularly when operationalized as family-directed care. Specifically, in neonatology, decision-making is left entirely to Substitute Decision Makers (most commonly parents). Most models of shared decision-making employ both the parents' and medical team's perspectives but do not recognize the inherent asymmetry of information and experience, asking parents to act like physicians in evaluating technical data while encouraging physicians to refrain from strong medical opinions and proposals. They also do not fully appreciate the difficulties in adjudicating which perspective to prioritize and, moreover, how to mitigate disagreement. Introducing a mild form of paternalism can harness the unique skillset both parents and clinicians bring to shared decision-making and ultimately work towards decision-making in the best interest of the child. The notion expressed here is that within the model of shared decision-making, mild paternalism is prioritized inasmuch as optimal care is prioritized. This mild form of paternalism is known as Beneficent Paternalism and justifies our encouragement for physicians to root down in their own medical expertise to propose treatment plans informed by medical expertise, standards of care, and the parents' values. This does not mean that we forget that paternalism was historically justified on 'beneficent' grounds; however, our recommendation is that a re-integration of mild paternalism is appropriate within our current Western healthcare climate. Through illustrative examples from the NICU, this paper explores the appropriateness and merits of Beneficent Paternalism and ultimately its use in promoting family-centred care, patients' best interests and reduced moral distress. A distinctive feature of the NICU is the fact that communication regarding a patient's treatment is conducted exclusively with substitute decision-makers and not the patient, i.e., the neonate themselves. This leaves the burden of responsibility entirely on substitute decision-makers and the clinical team; the patient in the NICU does not have any prior wishes, values, or beliefs that can guide decision-making on their behalf. Therefore, the wishes, values, and beliefs of the parents become the map upon which clinical proposals are made, giving extra weight to the family's decision-making responsibility. This is why family-directed care is common in the NICU, where shared decision-making is mandatory. However, the zone of parental discretion is not as all-encompassing as it is currently considered; there are appropriate times when the clinical team should strongly root down in medical expertise and perhaps take the lead in guiding family decision-making: this is just what it means to adopt Beneficent Paternalism.

Keywords: care, ethics, expertise, NICU, paternalism

Procedia PDF Downloads 145
31207 Presentation of HVA Faults in SONELGAZ Underground Network and Methods of Faults Diagnostic and Faults Location

Authors: I. Touaїbia, E. Azzag, O. Narjes

Abstract:

Power supply networks are growing continuously, and their reliability is getting more important than ever. The complexity of the whole network comprises numerous components that can fail and interrupt the power supply for the end user. Underground distribution systems are normally exposed to permanent faults, due to specific construction characteristics. In these systems, visual inspection cannot be performed. In order to enhance service restoration, accurate fault location techniques must be applied. This paper describes the different faults that affect the underground distribution system of SONELGAZ (National Society of Electricity and Gas of Algeria), and a cable fault location procedure based on the impulse reflection method (time domain reflectometry, TDR), which analyses the cable's response to an electromagnetic impulse and allows cable fault pre-location. The results are obtained from real tests on an underground distribution feeder of the electrical network of the energy distribution company of Souk-Ahras, in order to determine the influence of cable characteristics on the types and frequency of faults.
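
The core of TDR pre-location is a simple time-of-flight relation: the distance to the fault is half the propagation velocity in the cable times the delay of the reflected pulse. A minimal sketch, with an assumed velocity factor chosen only for illustration:

C_VACUUM_M_PER_US = 299.792458          # speed of light in metres per microsecond

def tdr_fault_distance(reflection_delay_us, velocity_factor=0.55):
    """Estimate distance to a cable fault from the round-trip delay of the TDR pulse.
    velocity_factor is the propagation velocity as a fraction of c (assumed value)."""
    v = velocity_factor * C_VACUUM_M_PER_US          # m/us in the cable
    return v * reflection_delay_us / 2.0             # divide by 2: pulse travels out and back

# Example: a reflection observed 6.2 microseconds after injection.
print(round(tdr_fault_distance(6.2), 1), "m from the test end")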

Keywords: distribution networks, fault location, TDR, underground cable

Procedia PDF Downloads 533
31206 Case Study: Throughput Analysis over PLC Infrastructure as Last Mile Residential Solution in Colombia

Authors: Edward P. Guillen, A. Karina Martinez Barliza

Abstract:

Powerline Communications (PLC), as a last mile solution to provide communication services, has the advantage of transmitting over channels already used for electrical distribution. However, these channels were not designed for this purpose; for that reason, telecommunication companies in Colombia want to know how well PLC would perform, in terms of cost and network performance, in comparison to cable modem or DSL. This paper analyzes PLC throughput for residential complex scenarios using PLC network scenarios, and some statistical results are shown.

Keywords: home network, power line communication, throughput analysis, power factor, cost, last mile solution

Procedia PDF Downloads 266
31205 Parameter Identification Analysis in the Design of Rock Fill Dams

Authors: G. Shahzadi, A. Soulaimani

Abstract:

This research work aims to identify the physical parameters of the constitutive soil model in the design of a rockfill dam by inverse analysis. The best parameters of the constitutive soil model are those that minimize the objective function, defined as the difference between the measured and numerical results. The finite element code Plaxis has been utilized for numerical simulation. Polynomial and neural network-based response surfaces have been generated to analyze the relationship between soil parameters and displacements. The performance of the surrogate models has been analyzed and compared by evaluating the root mean square error. A comparative study has been done based on objective functions and optimization techniques. Objective functions are categorized by considering measured data with and without uncertainty in instruments, defined by the least squares method, which estimates the norm between the predicted displacements and the measured values. Hydro Quebec provided data sets for the measured values of the Romaine-2 dam. Stochastic optimization, an approach that can overcome local minima and solve non-convex and non-differentiable problems with ease, is used to obtain an optimum value. Genetic Algorithm (GA), Particle Swarm Optimization (PSO) and Differential Evolution (DE) are compared for the minimization problem; all these techniques take time to converge to an optimum value, but PSO provided the best convergence and soil parameters. Overall, parameter identification analysis can be effectively used for the rockfill dam application and has the potential to become a valuable tool for geotechnical engineers for assessing dam performance and dam safety.
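
The inverse-analysis loop pairs a least-squares objective with a stochastic optimiser. The sketch below shows a bare-bones particle swarm optimisation minimising the squared misfit between measured displacements and those returned by a forward model; here the forward model, the measured values and the parameter bounds are stand-ins, whereas in the study the forward model is the Plaxis finite element model or its response-surface surrogate.

import numpy as np

rng = np.random.default_rng(0)

measured = np.array([0.012, 0.034, 0.051])                 # hypothetical displacement readings (m)

def forward_model(params):
    """Stand-in for the FE/surrogate model: maps soil parameters to predicted displacements."""
    stiffness, friction = params
    return np.array([0.5, 1.5, 2.2]) * friction / stiffness

def objective(params):
    return np.sum((forward_model(params) - measured) ** 2)  # least-squares misfit

def pso(obj, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([obj(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([obj(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, obj(gbest)

best, best_val = pso(objective, bounds=[(10.0, 200.0), (0.1, 5.0)])
print("identified parameters:", best.round(3), "misfit:", best_val)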

Keywords: Rockfill dam, parameter identification, stochastic analysis, regression, PLAXIS

Procedia PDF Downloads 146
31204 Market Acceptance of a Murabaha-Based Finance Structure within a Social Network of Non-Islamic Small and Medium Enterprise Owners in African Procurement

Authors: Craig M. Allen

Abstract:

Twenty-two African entrepreneurs with Small and Medium Enterprises (SMEs), in a single social network centered around a non-Muslim population in a smaller African country, selected an Islamic financing structure, a form of Murabaha, based solely on market rationale. These entrepreneurs had all won procurement contracts from major purchasers of goods within their country and faced difficulty arranging traditional bank financing to support their supply-chain needs. The Murabaha-based structure satisfied their market-driven demand and provided an attractive alternative to the traditional bank-offered lending products. The Murabaha-styled trade-financing structure was not promoted with any religious implications, but solely as a market solution to the existing problems associated with bank-related financing. This indicates the strong market forces that draw SMEs to financing structures that are traditionally considered within the framework of Islamic finance.

Keywords: Africa, entrepreneurs, Islamic finance, market acceptance, Murabaha, SMEs

Procedia PDF Downloads 181
31203 Task Based Functional Connectivity within Reward Network in Food Image Viewing Paradigm Using Functional MRI

Authors: Preetham Shankapal, Jill King, Kori Murray, Corby Martin, Paula Giselman, Jason Hicks, Owen Carmicheal

Abstract:

Activation of reward and satiety networks in the brain while processing palatable food cues, as well as functional connectivity during rest, has been studied using functional magnetic resonance imaging (fMRI) of the brain in various obesity phenotypes. However, functional connectivity within the reward and satiety network during food cue processing is understudied. Fourteen obese individuals underwent two fMRI scans during viewing of Macronutrient Picture System images. Each scan included two blocks of images of High Sugar/High Fat (HSHF), High Carbohydrate/High Fat (HCHF), and Low Sugar/Low Fat (LSLF) foods, as well as non-food images. Seed voxels within seven food-reward-relevant ROIs (insula, putamen, and cingulate, precentral, parahippocampal, medial frontal and superior temporal gyri) were isolated based on a prior meta-analysis. Beta series correlation for task-related functional connectivity between these seed voxels and the rest of the brain was computed. Voxel-level differences in functional connectivity were calculated between the first and the second scan; between individuals who saw novel (N=7) vs. repeated (N=7) images in the second scan; and between the HCHF and HSHF blocks vs. the LSLF and non-food blocks. Computations and analysis showed that during food image viewing, reward network ROIs showed significant functional connectivity with each other and with other regions responsible for attentional and motor control, including the inferior parietal lobe and precentral gyrus. These functional connectivity values were heightened among individuals who viewed novel HSHF images in the second scan. In the second scan session, functional connectivity was reduced within the reward network but increased within attention, memory and recognition regions, suggesting habituation to reward properties and increased recollection of previously viewed images. In conclusion, it can be inferred that functional connectivity within the reward network, and between the reward network and other brain regions, varies with important experimental conditions during food image viewing, including habituation to the shown foods.
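
Beta series correlation reduces, per subject and condition, to correlating the vectors of trial-wise beta estimates between a seed ROI and other regions. A minimal numpy sketch, under the assumption that trial-wise betas have already been estimated and averaged per ROI (trials x ROIs); the synthetic data stand in for the real estimates.

import numpy as np

def beta_series_connectivity(betas, seed_index):
    """betas: (n_trials, n_rois) array of trial-wise beta estimates for one condition.
    Returns Pearson correlations between the seed ROI's beta series and every ROI,
    plus their Fisher z-transform for group-level statistics."""
    r = np.corrcoef(betas, rowvar=False)[seed_index]      # seed row of the ROI x ROI matrix
    z = np.arctanh(np.clip(r, -0.999999, 0.999999))       # Fisher r-to-z
    return r, z

# Synthetic example: 40 trials, 7 reward-related ROIs, insula as the seed (index 0).
rng = np.random.default_rng(1)
betas = rng.standard_normal((40, 7))
r, z = beta_series_connectivity(betas, seed_index=0)
print(np.round(r, 2))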

Keywords: fMRI, functional connectivity, task-based, beta series correlation

Procedia PDF Downloads 270
31202 Use of Artificial Intelligence and Two Object-Oriented Approaches (k-NN and SVM) for the Detection and Characterization of Wetlands in the Centre-Val de Loire Region, France

Authors: Bensaid A., Mostephaoui T., Nedjai R.

Abstract:

Nowadays, wetlands are the subject of contradictory debates opposing scientific, political and administrative meanings. Indeed, given their multiple services (drinking water, irrigation, hydrological regulation, mineral, plant and animal resources...), wetlands concentrate many socio-economic and biodiversity issues. In some regions, they can cover vast areas (>100 thousand ha) of the landscape, such as the Camargue area in the south of France, inside the Rhone delta. The high biological productivity of wetlands, the strong natural selection pressures and the diversity of aquatic environments have produced many species of plants and animals that are found nowhere else. These environments are tremendous carbon sinks and biodiversity reserves; depending on their age, composition and surrounding environmental conditions, wetlands play an important role in global climate projections. Covering more than 3% of the earth's surface, wetlands have experienced, since the beginning of the 1990s, a tremendous revival of interest, which has resulted in the multiplication of inventories, scientific studies and management experiments. The geographical and physical characteristics of the wetlands of the central region conceal a large number of natural habitats that harbour a great biological diversity. These wetlands are still influenced by human activities, especially agriculture, which affects their layout and functioning. In this perspective, decision-makers need to delimit spatial objects (natural habitats) in a certain way to be able to take action. Wetlands are no exception to this rule, even if it seems a difficult exercise to delimit a type of environment whose main characteristic is often to occupy the transition between aquatic and terrestrial environments. However, it is possible to map wetlands with databases derived from the interpretation of photos and satellite images, such as the European database Corine Land Cover, which allows quantifying and characterizing the characteristic wetland types of each place. Scientific studies have shown limitations when using high spatial resolution images (SPOT, Landsat, ASTER) for the identification and characterization of small wetlands (1 hectare), which generally represent spatially complex features. Indeed, the use of very high spatial resolution images (>3 m) is necessary to map both small and large areas. Moreover, with the recent evolution of artificial intelligence (AI), deep learning methods for satellite image processing have shown much better performance compared to traditional processing based only on pixel structures. Our research work is based on spectral and textural analysis of very high resolution images (SPOT and IRC orthoimages) using two object-oriented approaches, the nearest neighbour (k-NN) approach and the Support Vector Machine (SVM) approach. The k-NN approach gave good results for the delineation of wetlands (wet marshes and moors, ponds, artificial wetland water body edges, mountain wetlands, river edges and brackish marshes), with a kappa index higher than 85%.
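
For orientation only, the object-oriented classification step behind both approaches amounts to fitting k-NN and SVM classifiers on per-object spectral/textural features and comparing their agreement with reference labels via a kappa index. The scikit-learn sketch below runs on synthetic features, not the actual image objects or training samples of the study.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.metrics import cohen_kappa_score

# Synthetic stand-in for per-object features (e.g. band means, NDVI, texture) and labels.
rng = np.random.default_rng(42)
X = rng.normal(size=(600, 6))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=600) > 0).astype(int)  # 1 = wetland

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
svm = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, y_train)

# The study reports agreement as a kappa index; compute it for both classifiers.
for name, model in [("k-NN", knn), ("SVM", svm)]:
    kappa = cohen_kappa_score(y_test, model.predict(X_test))
    print(f"{name}: kappa = {kappa:.2f}")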

Keywords: land development, GIS, sand dunes, segmentation, remote sensing

Procedia PDF Downloads 72
31201 Multi-Period Portfolio Optimization Using Predictive Machine Learning Models

Authors: Peng Liu, Chyng Wen Tee, Xiaofei Xu

Abstract:

This paper integrates machine learning forecasting techniques into the multi-period portfolio optimization framework, enabling dynamic asset allocation based on multiple future periods. We explore both theoretical foundations and practical applications, employing diverse machine learning models for return forecasting. This comprehensive guide demonstrates the superiority of multi-period optimization over single-period approaches, particularly in risk mitigation through strategic rebalancing and enhanced market trend forecasting. Our goal is to promote wider adoption of multi-period optimization, providing insights that can significantly enhance the decision-making capabilities of practitioners and researchers alike.
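
To make the setting concrete, the sketch below wires a trivial return forecaster into a rolling multi-period allocation with periodic rebalancing, using an unconstrained mean-variance rule renormalised to a long-only portfolio. The forecaster, the renormalisation, the risk-aversion value and the synthetic data are placeholders for the machine learning models, constraints and real assets used in the paper.

import numpy as np

rng = np.random.default_rng(7)
n_assets, n_periods = 4, 6
history = rng.normal(0.005, 0.04, size=(250, n_assets))    # synthetic past returns

def forecast_returns(past):
    """Placeholder forecaster: exponentially weighted mean of past returns per asset."""
    weights = 0.97 ** np.arange(len(past))[::-1]            # newest observations weigh most
    return (past * weights[:, None]).sum(axis=0) / weights.sum()

def mean_variance_weights(mu, cov, risk_aversion=5.0):
    """Unconstrained mean-variance solution w proportional to inv(Sigma) mu, clipped to long-only."""
    raw = np.linalg.solve(cov, mu) / risk_aversion
    raw = np.clip(raw, 0.0, None)
    return raw / raw.sum() if raw.sum() > 0 else np.full(len(mu), 1.0 / len(mu))

portfolio = []
for t in range(n_periods):
    mu = forecast_returns(history)
    cov = np.cov(history, rowvar=False)
    w = mean_variance_weights(mu, cov)
    portfolio.append(w)
    realised = rng.normal(0.005, 0.04, size=n_assets)       # new period's returns arrive
    history = np.vstack([history, realised])                # refit and rebalance next period
print(np.round(portfolio, 3))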

Keywords: multi-period portfolio optimization, look-ahead constrained optimization, machine learning, sequential decision making

Procedia PDF Downloads 48
31200 Adaptive Energy-Aware Routing (AEAR) for Optimized Performance in Resource-Constrained Wireless Sensor Networks

Authors: Innocent Uzougbo Onwuegbuzie

Abstract:

Wireless Sensor Networks (WSNs) are crucial for numerous applications, yet they face significant challenges due to resource constraints such as limited power and memory. Traditional routing algorithms like Dijkstra, Ad hoc On-Demand Distance Vector (AODV), and Bellman-Ford, while effective in path establishment and discovery, are not optimized for the unique demands of WSNs due to their large memory footprint and power consumption. This paper introduces the Adaptive Energy-Aware Routing (AEAR) model, a solution designed to address these limitations. AEAR integrates reactive route discovery, localized decision-making using geographic information, energy-aware metrics, and dynamic adaptation to provide a robust and efficient routing strategy. We present a detailed comparative analysis using a dataset of 50 sensor nodes, evaluating power consumption, memory footprint, and path cost across AEAR, Dijkstra, AODV, and Bellman-Ford algorithms. Our results demonstrate that AEAR significantly reduces power consumption and memory usage while optimizing path weight. This improvement is achieved through adaptive mechanisms that balance energy efficiency and link quality, ensuring prolonged network lifespan and reliable communication. The AEAR model's superior performance underlines its potential as a viable routing solution for energy-constrained WSN environments, paving the way for more sustainable and resilient sensor network deployments.
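
The energy-aware part of such a routing metric can be illustrated with a plain shortest-path search in which each link's cost blends transmission energy with the inverse of the receiver's residual energy, so nearly depleted nodes are avoided. This is a generic sketch of the idea, not the AEAR protocol itself, and the weighting constants and topology are assumptions.

import heapq

def energy_aware_path(links, residual_energy, source, sink, alpha=1.0, beta=2.0):
    """links: dict node -> list of (neighbour, tx_energy). Dijkstra over a composite cost
    alpha * tx_energy + beta / residual_energy[neighbour] (avoid draining weak nodes)."""
    best = {source: 0.0}
    queue = [(0.0, source, [source])]
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == sink:
            return cost, path
        if cost > best.get(node, float("inf")):
            continue
        for neighbour, tx in links.get(node, []):
            new_cost = cost + alpha * tx + beta / residual_energy[neighbour]
            if new_cost < best.get(neighbour, float("inf")):
                best[neighbour] = new_cost
                heapq.heappush(queue, (new_cost, neighbour, path + [neighbour]))
    return float("inf"), []

# Hypothetical 5-node topology: node C is nearly depleted, so the route detours via D.
links = {"A": [("B", 1.0), ("D", 1.5)], "B": [("C", 1.0)], "D": [("E", 1.0)],
         "C": [("E", 1.0)], "E": []}
residual = {"A": 10, "B": 8, "C": 0.5, "D": 9, "E": 10}
print(energy_aware_path(links, residual, "A", "E"))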

Keywords: wireless sensor networks (WSNs), adaptive energy-aware routing (AEAR), routing algorithms, energy efficiency, network lifespan

Procedia PDF Downloads 37
31199 Understanding the Conflict Between Ecological Environment and Human Activities in the Process of Urbanization

Authors: Yazhou Zhou, Yong Huang, Guoqin Ge

Abstract:

In the process of human social development, the coupling and coordinated development among the ecological environment (E), production (P), and living functions (L) is of great significance for sustainable development. This study uses an improved coupling coordination degree model (CCDM) to reveal the coordination conflict between E and the human settlement environment. The main work of this study is as follows: (1) It is found that, in the process of urbanization of Ya'an city from 2014 to 2018, the degree of coupling (DOC) value between E, P, and L is high, but the coupling coordination degree (CCD) of the three is low; in particular, the DOC value between E and the other two functions shows the biggest decline. (2) A more objective weight value is obtained, which avoids the analysis error caused by subjectively judged weight values.
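
For readers unfamiliar with the model, a common formulation of the coupling degree C and the coupling coordination degree D for three subsystems is given below; the paper uses an improved variant with neural-network-derived weights, which is not reproduced here.

C = 3\left[\frac{U_E\,U_P\,U_L}{(U_E + U_P + U_L)^{3}}\right]^{1/3}, \qquad T = \alpha U_E + \beta U_P + \gamma U_L, \qquad D = \sqrt{C \cdot T},

where U_E, U_P and U_L are the normalised evaluation scores of the ecological environment, production and living functions, T is the comprehensive development index, and \alpha + \beta + \gamma = 1 are the subsystem weights. C measures how strongly the subsystems interact, while D additionally penalises interaction at a low overall development level.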

Keywords: ecological environment, coupling coordination degree, neural network, sustainable development

Procedia PDF Downloads 82
31198 Preprocessing and Fusion of Multiple Representation of Finger Vein patterns using Conventional and Machine Learning techniques

Authors: Tomas Trainys, Algimantas Venckauskas

Abstract:

Application of biometric features to cryptography for human identification and authentication is a widely studied and promising area in the development of high-reliability cryptosystems. Biometric cryptosystems are typically designed for pattern recognition, which allows biometric data acquisition from an individual, extracts feature sets, compares the feature set against the set stored in the vault and gives a result of the comparison. Preprocessing and fusion of biometric data are the most important phases in generating a feature vector for key generation or authentication. Fusion of biometric features is critical for achieving a higher level of security and helps prevent possible spoofing attacks. The paper focuses on the tasks of initial processing and fusion of multiple representations of finger vein modality patterns. These tasks are solved by applying conventional image preprocessing methods and machine learning techniques, including convolutional neural network and Support Vector Machine (SVM) methods, for image segmentation and feature extraction. The article presents a method for generating sets of biometric features from a finger vein network using several instances of the same modality. Extracted feature sets were fused at the feature level. The proposed method was tested and compared with the performance and accuracy results of other authors.
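
Feature-level fusion of several representations of the same finger-vein pattern can be reduced, in its simplest form, to normalising each feature vector and concatenating them before matching or key binding. The sketch below illustrates only that step, with randomly generated vectors standing in for the segmentation- and learning-derived features of the paper.

import numpy as np

def l2_normalise(v, eps=1e-12):
    return v / (np.linalg.norm(v) + eps)

def fuse_feature_level(feature_sets):
    """Concatenate z-score + L2 normalised feature vectors from multiple representations."""
    parts = []
    for f in feature_sets:
        f = np.asarray(f, dtype=float)
        f = (f - f.mean()) / (f.std() + 1e-12)   # put heterogeneous features on one scale
        parts.append(l2_normalise(f))
    return np.concatenate(parts)

# Stand-ins for features from two instances of the same finger-vein modality.
rng = np.random.default_rng(3)
fused = fuse_feature_level([rng.normal(size=128), rng.normal(size=64)])
print(fused.shape)          # (192,) fused template used for matching or key generation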

Keywords: bio-cryptography, biometrics, cryptographic key generation, data fusion, information security, SVM, pattern recognition, finger vein method.

Procedia PDF Downloads 150
31197 Success Factors for Innovations in SME Networks

Authors: J. Gochermann

Abstract:

Due to complex markets and products, and an increasing need to innovate, forms of cooperation between small and medium-sized enterprises arose during the last decades that are not primarily driven by process optimization or sales enhancement. Small and medium-sized enterprises (SMEs) in particular collaborate increasingly in innovation and knowledge networks to enhance their knowledge and innovation potential, and to find strategic partners for product and market development. These networks are characterized by dual objectives: the superordinate goal of the total network, and the specific objectives of the network members, which can cause target conflicts. Moreover, most SMEs do not have structured innovation processes and they are not accustomed to collaborating in complex innovation projects in an open network structure. On the other hand, SMEs have suitable characteristics for promising networking. They are flexible and spontaneous, they have flat hierarchies, and the acting people are not anonymous. These characteristics indeed distinguish them from bigger concerns. Investigations of German SME networks have been carried out to identify success factors for SME innovation networks. The fundamental network principles, donation and return and confidence, could be confirmed and identified as basic success factors. Further factors are voluntariness, an adequate number of network members, quality of communication, neutrality and competence of the network management, as well as reliability and obligingness of the network services. Innovation and knowledge networks with an appreciable number of members from science and technology institutions also need active sense-making to bring different disciplines into successful collaboration. It has also been investigated whether and how the involvement in an innovation network impacts the innovation structure and culture inside the member companies. The degree of reaction grows with time and intensity of commitment.

Keywords: innovation and knowledge networks, SME, success factors, innovation structure and culture

Procedia PDF Downloads 283
31196 Upon One Smoothing Problem in Project Management

Authors: Dimitri Golenko-Ginzburg

Abstract:

A CPM network project with deterministic activity durations, in which activities require homogeneous resources with fixed capacities, is considered. The problem is to determine the optimal schedule of starting times for all network activities within their maximal allowable limits (in order not to exceed the network's critical time) so as to minimize the maximum resource requirement of the project at any point in time. In the case when a non-critical activity may start only at discrete moments separated by a pregiven time span, the problem becomes NP-complete and an optimal solution may be obtained via a look-over algorithm. For the case when a look-over requires too much computational time, an approximate algorithm is suggested. The algorithm's performance ratio, i.e., the relative accuracy error, is determined. Experimentation has been undertaken to verify the suggested algorithm.
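
The quantity being minimised is the peak of the resource usage profile implied by a schedule. A small helper that evaluates that peak for candidate start times might look like the sketch below; the look-over or approximate algorithm would then search over the allowable discrete start moments. The activity data are illustrative, not from the paper.

def peak_resource_usage(activities, starts):
    """activities: dict name -> (duration, resource_rate); starts: dict name -> start time.
    Returns (peak requirement, usage profile per unit time) for the given schedule."""
    horizon = max(starts[a] + d for a, (d, _) in activities.items())
    profile = [0.0] * horizon
    for name, (duration, rate) in activities.items():
        for t in range(starts[name], starts[name] + duration):
            profile[t] += rate
    return max(profile), profile

# Illustrative 3-activity project: shifting B inside its float lowers the peak from 7 to 6.
acts = {"A": (4, 3), "B": (3, 4), "C": (5, 2)}
print(peak_resource_usage(acts, {"A": 0, "B": 0, "C": 4})[0])   # peak = 7
print(peak_resource_usage(acts, {"A": 0, "B": 4, "C": 4})[0])   # peak = 6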

Keywords: resource smoothing problem, CPM network, lookover algorithm, lexicographical order, approximate algorithm, accuracy estimate

Procedia PDF Downloads 302
31195 Urban Transport Demand Management Multi-Criteria Decision Using AHP and SERVQUAL Models: Case Study of Nigerian Cities

Authors: Suleiman Hassan Otuoze, Dexter Vernon Lloyd Hunt, Ian Jefferson

Abstract:

Urbanization has continued to widen the gap between demand and the resources available to provide resilient and sustainable transport services in many fast-growing developing countries' cities. Transport demand management is a decision-based optimization concept for both benchmarking and ensuring efficient use of transport resources. This study assesses the service quality of infrastructure and mobility services in the Nigerian cities of Kano and Lagos through five dimensions of quality (i.e., Tangibility, Reliability, Responsibility, Safety Assurance and Empathy). The methodology adopts a hybrid AHP-SERVQUAL model applied to questionnaire surveys to gauge the quality of satisfaction and the views of experts in the field. The AHP results prioritize tangibility, which reflects the state of transportation infrastructure and services, in terms of satisfaction qualities and intervention decision weights in the two cities. The results recorded 'unsatisfactory' indices of quality of performance, with satisfaction rating values of 48% and 49% for Kano and Lagos, respectively. The satisfaction indices are identified as indicators of the low performance of transportation demand management (TDM) measures and of the necessity to re-order priorities and take proactive steps towards infrastructure. The findings pilot a framework for the comparative assessment of recognizable standards in transport services, best ethics of management and the necessity of quality infrastructure to guarantee both resilient and sustainable urban mobility.

Keywords: transportation demand management, multi-criteria decision support, transport infrastructure, service quality, sustainable transport

Procedia PDF Downloads 224
31194 Deep Supervision Based-Unet to Detect Buildings Changes from VHR Aerial Imagery

Authors: Shimaa Holail, Tamer Saleh, Xiongwu Xiao

Abstract:

Building change detection (BCD) from satellite imagery is an essential topic in urbanization monitoring, agricultural land management, and updating geospatial databases. Recently, methods for detecting changes based on deep learning have made significant progress and achieved impressive results. However, they have the problem of being insensitive to changes in buildings with complex spectral differences, and the features being extracted are not discriminative enough, resulting in incomplete buildings and irregular boundaries. To overcome these problems, in this paper we propose a dual Siamese network based on the Unet model with the addition of a deep supervision (DS) strategy. This network consists of a backbone (encoder) based on ImageNet pre-training, a fusion block, and feature pyramid networks (FPN) to enhance the step-by-step information of the changing regions and obtain a more accurate BCD map. To train the proposed method, we created a new dataset (EGY-BCD) of high-resolution and multi-temporal aerial images captured over New Cairo in Egypt to detect building changes. The experimental results showed that the proposed method is effective and performs well on the EGY-BCD dataset in terms of overall accuracy, F1-score, and mIoU, which were 91.6%, 80.1%, and 73.5%, respectively.

Keywords: building change detection, deep supervision, semantic segmentation, EGY-BCD dataset

Procedia PDF Downloads 120
31193 Research Activity in Computational Science Using High Performance Computing: Co-Authorship Network Analysis

Authors: Sul-Ah Ahn, Youngim Jung

Abstract:

The research activities of computational scientists using high-performance computing are analyzed using bibliometric approaches. This study aims at providing computational scientists using high-performance computing, and relevant policy planners, with useful bibliometric results for an assessment of research activities. In order to achieve this purpose, we carried out a co-authorship network analysis of journal articles to assess the research activities of computational scientists using high-performance computing as a case study. For this study, we used journal articles from Elsevier's Scopus database covering the time period 2006-2015. We ranked authors in the computational science field using high-performance computing by the number of papers published during the ten years from 2006. Finally, we drew the co-authorship network for the 50 top authors and their co-authors and described some features of the co-authorship network in relation to the author rank. Suggestions for further studies are discussed.
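
The co-authorship graph construction itself is straightforward: every pair of authors on a paper gets an edge whose weight counts their joint papers, after which standard centrality measures describe each author's position. A NetworkX sketch on a toy publication list, not the Scopus data of the study:

from itertools import combinations
import networkx as nx

papers = [                                   # toy stand-in for Scopus records, 2006-2015
    ["Kim", "Lee", "Park"],
    ["Kim", "Lee"],
    ["Park", "Chen"],
    ["Lee", "Chen", "Kim"],
]

G = nx.Graph()
for authors in papers:
    for a, b in combinations(sorted(authors), 2):        # one edge per co-author pair
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1                        # weight = number of joint papers
        else:
            G.add_edge(a, b, weight=1)

degree = dict(G.degree(weight="weight"))                  # collaboration strength per author
betweenness = nx.betweenness_centrality(G)                # brokerage position in the network
for author in sorted(degree, key=degree.get, reverse=True):
    print(author, degree[author], round(betweenness[author], 2))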

Keywords: co-authorship network analysis, computational science, high performance computing, research activity

Procedia PDF Downloads 323
31192 A Research and Application of Feature Selection Based on IWO and Tabu Search

Authors: Laicheng Cao, Xiangqian Su, Youxiao Wu

Abstract:

Feature selection is one of the important problems in network security, pattern recognition, data mining and other fields. In order to remove redundant features and effectively improve the detection speed of intrusion detection systems, this paper proposes a new feature selection method based on the invasive weed optimization (IWO) algorithm and the tabu search (TS) algorithm. IWO is used for global search and tabu search for local search, to improve the results of the IWO algorithm. The experimental results show that the feature selection method can effectively remove redundant features from network data, reduce processing time and, while guaranteeing an accurate detection rate, effectively improve the speed of the detection system.

Keywords: intrusion detection, feature selection, IWO, tabu search

Procedia PDF Downloads 530
31191 Bidirectional Long Short-Term Memory-Based Signal Detection for Orthogonal Frequency Division Multiplexing With All Index Modulation

Authors: Mahmut Yildirim

Abstract:

This paper proposes bidirectional long short-term memory (Bi-LSTM) network-aided deep learning (DL)-based signal detection for orthogonal frequency division multiplexing with all index modulation (OFDM-AIM), namely Bi-DeepAIM. OFDM-AIM is developed to increase the spectral efficiency of OFDM with index modulation (OFDM-IM), a promising multi-carrier technique for communication systems beyond 5G. In this paper, due to its strong classification ability, Bi-LSTM is considered an alternative to the maximum likelihood (ML) algorithm, which is used for signal detection in the classical OFDM-AIM scheme. The performance of Bi-DeepAIM is compared with LSTM network-aided DL-based OFDM-AIM (DeepAIM) and the classic OFDM-AIM that uses ML-based signal detection, in terms of bit error rate (BER) performance and computational time. Simulation results show that Bi-DeepAIM obtains better BER performance than DeepAIM and lower computation time in signal detection than ML-AIM.
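
A Bi-LSTM-based detector of this kind is essentially a sequence classifier over the received subblock samples. The PyTorch sketch below shows the general shape of such a network; the input/hidden sizes, number of candidate classes and the use of real/imaginary parts as two channels are assumptions for illustration, not the Bi-DeepAIM configuration.

import torch
import torch.nn as nn

class BiLSTMDetector(nn.Module):
    """Maps a received OFDM subblock (sequence of I/Q samples) to one of n_classes
    candidate index/symbol realisations, replacing the exhaustive ML search."""
    def __init__(self, input_size=2, hidden_size=64, n_classes=16):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_size, n_classes)   # 2x: forward + backward states

    def forward(self, x):                       # x: (batch, seq_len, 2) real/imag channels
        out, _ = self.lstm(x)
        return self.classifier(out[:, -1, :])   # use the last time step's bidirectional state

# Shape check with a synthetic batch: 8 subblocks of 4 subcarriers each.
detector = BiLSTMDetector()
logits = detector(torch.randn(8, 4, 2))
print(logits.shape)                             # torch.Size([8, 16])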

Keywords: bidirectional long short-term memory, deep learning, maximum likelihood, OFDM with all index modulation, signal detection

Procedia PDF Downloads 72
31190 DAG Design and Tradeoff for Full Live Virtual Machine Migration over XIA Network

Authors: Dalu Zhang, Xiang Jin, Dejiang Zhou, Jianpeng Wang, Haiying Jiang

Abstract:

The traditional TCP/IP network is showing many shortcomings, and research on future networks is becoming a hotspot. FIA (Future Internet Architecture) and FIA-NP (Next Phase) are supported by the US NSF for future Internet design. Moreover, virtual machine migration is a significant technique in cloud computing. As a network application, it should also be supported in XIA (eXpressive Internet Architecture), which is part of both the FIA and FIA-NP projects. This paper is an experimental study that aims at verifying the feasibility of VM migration over XIA. We present three ways to maintain VM connectivity and communication states concerning DAG design and routing table modification. VM migration experiments are conducted intra-AD and inter-AD with KVM instances. The procedure is achieved by a migration control protocol which is suitable for the characteristics of XIA. Evaluation results show that our solutions can well support full live VM migration over the XIA network, keeping services seamless.

Keywords: DAG, downtime, virtual machine migration, XIA

Procedia PDF Downloads 855
31189 A Comprehensive Study of Spread Models of Wildland Fires

Authors: Manavjit Singh Dhindsa, Ursula Das, Kshirasagar Naik, Marzia Zaman, Richard Purcell, Srinivas Sampalli, Abdul Mutakabbir, Chung-Horng Lung, Thambirajah Ravichandran

Abstract:

These days, wildland fires, also known as forest fires, are more prevalent than ever. Wildfires have major repercussions that affect ecosystems, communities, and the environment in several ways. Wildfires lead to habitat destruction and biodiversity loss, affecting ecosystems and causing soil erosion. They also contribute to poor air quality by releasing smoke and pollutants that pose health risks, especially for individuals with respiratory conditions. Wildfires can damage infrastructure, disrupt communities, and cause economic losses. The economic impact of firefighting efforts, combined with their direct effects on forestry and agriculture, causes significant financial difficulties for the areas impacted. This research explores different forest fire spread models and presents a comprehensive review of various techniques and methodologies used in the field. A forest fire spread model is a computational or mathematical representation that is used to simulate and predict the behavior of a forest fire. By applying scientific concepts and data from empirical studies, these models attempt to capture the intricate dynamics of how a fire spreads, taking into consideration a variety of factors like weather patterns, topography, fuel types, and environmental conditions. These models assist authorities in understanding and forecasting the potential trajectory and intensity of a wildfire. Emphasizing the need for a comprehensive understanding of wildfire dynamics, this research explores the approaches, assumptions, and findings derived from various models. Using a comparative approach, a critical analysis is provided by identifying patterns, strengths, and weaknesses among these models. The purpose of the survey is to further wildfire research and management techniques. Decision-makers, researchers, and practitioners can benefit from the useful insights that are provided by synthesizing established information. Fire spread models provide insights into potential fire behavior, enabling authorities to make informed decisions about evacuation activities, allocating resources for firefighting efforts, and planning preventive actions. Wildfire spread models are also useful in post-wildfire mitigation strategies, as they help in assessing the fire's severity, determining high-risk regions for post-fire dangers, and forecasting soil erosion trends. The analysis highlights the importance of customized modeling approaches for various circumstances and improves our understanding of the way forest fires spread. Some of the known models in this field are Rothermel's wildland fuel model, FARSITE, WRF-SFIRE, FIRETEC, FlamMap, FSPro, the cellular automata model, and others. The key characteristics that these models consider include weather (factors such as wind speed and direction), topography (factors like landscape elevation), and fuel availability (factors like types of vegetation), among others. The models discussed are physics-based, data-driven, or hybrid models, some also utilizing ML techniques like attention-based neural networks to enhance model performance. In order to lessen the destructive effects of forest fires, this initiative aims to promote the development of more precise prediction tools and effective management techniques. The survey expands its scope to address the practical needs of numerous stakeholders. Access to enhanced early warning systems enables decision-makers to take prompt action, and emergency responders benefit from improved resource allocation strategies, strengthening the efficacy of firefighting efforts.
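
Among the families surveyed, the cellular automata approach is the easiest to illustrate: the landscape is a grid of cells, and at each step a burning cell ignites its neighbours with a probability modulated by fuel, wind and terrain. The toy sketch below captures only that core update rule; its ignition probabilities and wind weighting are invented and far simpler than the calibrated models discussed above.

import numpy as np

rng = np.random.default_rng(0)
UNBURNED, BURNING, BURNED = 0, 1, 2

def step(grid, fuel, base_prob=0.35, wind=(0, 1), wind_bonus=0.25):
    """One cellular-automata update: burning cells may ignite 4-neighbours, then burn out."""
    new = grid.copy()
    rows, cols = grid.shape
    for r, c in zip(*np.where(grid == BURNING)):
        for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] == UNBURNED:
                p = base_prob * fuel[nr, nc]
                if (dr, dc) == wind:                 # spread is easier downwind
                    p += wind_bonus
                if rng.random() < p:
                    new[nr, nc] = BURNING
        new[r, c] = BURNED
    return new

# 40x40 landscape with random fuel load and a single ignition point in the centre.
fuel = rng.uniform(0.3, 1.0, size=(40, 40))
grid = np.full((40, 40), UNBURNED)
grid[20, 20] = BURNING
for _ in range(25):
    grid = step(grid, fuel)
print("burned cells:", int(np.sum(grid == BURNED)))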

Keywords: artificial intelligence, deep learning, forest fire management, fire risk assessment, fire simulation, machine learning, remote sensing, wildfire modeling

Procedia PDF Downloads 81
31188 Deep Learning Based Road Crack Detection on an Embedded Platform

Authors: Nurhak Altın, Ayhan Kucukmanisa, Oguzhan Urhan

Abstract:

It is important that highways are in good condition for traffic safety. Road defects (road cracks, erosion of lane markings, etc.) can cause accidents by affecting driving. Image processing-based methods for detecting road cracks are available in the literature. In this paper, a deep learning based road crack detection approach is proposed. YOLO (You Only Look Once) is adopted as the core component of the road crack detection approach presented. The YOLO network structure, which is developed for object detection, is trained with road crack images as a new class that was not previously used in YOLO. The performance of the proposed method is compared across different training methods: training from randomly generated weights and training from pre-trained weights (transfer learning). A similar training approach is applied to the simplified version of the YOLO network model (Tiny YOLO) and the performance results are examined. The developed system is able to process 8 fps on an NVIDIA Jetson TX1 development kit.

Keywords: deep learning, embedded platform, real-time processing, road crack detection

Procedia PDF Downloads 339
31187 Leveraging Automated and Connected Vehicles with Deep Learning for Smart Transportation Network Optimization

Authors: Taha Benarbia

Abstract:

The advent of automated and connected vehicles has revolutionized the transportation industry, presenting new opportunities for enhancing the efficiency, safety, and sustainability of our transportation networks. This paper explores the integration of automated and connected vehicles into a smart transportation framework, leveraging the power of deep learning techniques to optimize overall network performance. The first aspect addressed in this paper is the deployment of automated vehicles (AVs) within the transportation system. AVs offer numerous advantages, such as reduced congestion, improved fuel efficiency, and increased safety through advanced sensing and decision-making capabilities. The paper delves into the technical aspects of AVs, including their perception, planning, and control systems, highlighting the role of deep learning algorithms in enabling intelligent and reliable AV operations. Furthermore, the paper investigates the potential of connected vehicles (CVs) in creating a seamless communication network between vehicles, infrastructure, and traffic management systems. By harnessing real-time data exchange, CVs enable proactive traffic management, adaptive signal control, and effective route planning. Deep learning techniques play a pivotal role in extracting meaningful insights from the vast amount of data generated by CVs, empowering transportation authorities to make informed decisions for optimizing network performance. The integration of deep learning with automated and connected vehicles paves the way for advanced transportation network optimization. Deep learning algorithms can analyze complex transportation data, including traffic patterns, demand forecasting, and dynamic congestion scenarios, to optimize routing, reduce travel times, and enhance overall system efficiency. The paper presents case studies and simulations demonstrating the effectiveness of deep learning-based approaches in achieving significant improvements in network performance metrics.

Keywords: automated vehicles, connected vehicles, deep learning, smart transportation network

Procedia PDF Downloads 79