Search results for: Deep Neural Network
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6771

4701 A Secure Routing Algorithm for Underwater Wireless Sensor Networks

Authors: Seyed Mahdi Jameii

Abstract:

Underwater wireless sensor networks have attracted the interest of many researchers in recent years, and the past three decades have seen rapid progress in underwater acoustic communication. Two of the major problems in underwater wireless sensor networks are transferring data from moving nodes to the base stations and choosing an optimized route for data transmission. Secure routing in underwater wireless sensor networks (UWSNs) is necessary for reliable packet delivery. Several routing protocols have been proposed for underwater wireless sensor networks; however, little research has addressed secure routing in these networks. In this article, a secure routing protocol is provided that resists wormhole and sybil attacks. The results indicated acceptable performance in terms of an increased packet delivery ratio under the attacks, increased network lifetime achieved by balancing energy consumption across the network, high detection rates against the attacks, and low end-to-end delay.

Keywords: attacks, routing, security, underwater wireless sensor networks

Procedia PDF Downloads 418
4700 Rheological Characteristics of Ice Slurries Based on Propylene- and Ethylene-Glycol at High Ice Fractions

Authors: Senda Trabelsi, Sébastien Poncet, Michel Poirier

Abstract:

Ice slurries are considered promising phase-changing secondary fluids for air-conditioning, packaging, and cooling industrial processes. An experimental study has been carried out here to measure the rheological characteristics of ice slurries. Ice slurries consist of a solid phase (flake ice crystals) and a liquid phase. The latter is composed of a mixture of liquid water and an additive, here either (1) Propylene-Glycol (PG) or (2) Ethylene-Glycol (EG), used to lower the freezing point of water. Concentrations of 5%, 14%, and 24% of both additives are investigated with ice mass fractions ranging from 5% to 85%. The rheological measurements are carried out using a Discovery HR-2 vane-concentric cylinder with four full-length blades. The experimental results show that the behavior of ice slurries is generally non-Newtonian, with shear-thinning or shear-thickening behavior depending on the experimental conditions. In order to determine the consistency and the flow index, the Herschel-Bulkley model is used to describe the behavior of ice slurries. The present results are finally validated against an experimental database found in the literature and the predictions of an Artificial Neural Network model.
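
The Herschel-Bulkley model mentioned above relates shear stress to shear rate as shear stress = yield stress + consistency x (shear rate)^(flow index). The sketch below, using made-up rheometer readings rather than the paper's measurements, shows how the three parameters can be fitted with SciPy.

```python
# A minimal sketch (not the authors' code): fitting the Herschel-Bulkley model
# to hypothetical shear-rate/shear-stress data with scipy.optimize.curve_fit.
import numpy as np
from scipy.optimize import curve_fit

def herschel_bulkley(gamma_dot, tau0, K, n):
    """Shear stress vs. shear rate: tau0 = yield stress, K = consistency,
    n = flow index (n < 1 indicates shear-thinning)."""
    return tau0 + K * gamma_dot**n

# Hypothetical rheometer readings (shear rate in 1/s, shear stress in Pa).
gamma_dot = np.array([1.0, 5.0, 10.0, 50.0, 100.0, 200.0])
tau = np.array([12.1, 18.4, 22.9, 41.7, 55.2, 73.8])

params, _ = curve_fit(herschel_bulkley, gamma_dot, tau, p0=[5.0, 1.0, 0.8])
tau0, K, n = params
print(f"yield stress = {tau0:.2f} Pa, consistency = {K:.2f}, flow index = {n:.2f}")
```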

Keywords: ice slurry, propylene-glycol, ethylene-glycol, rheology

Procedia PDF Downloads 263
4699 An Efficient Algorithm for Global Alignment of Protein-Protein Interaction Networks

Authors: Duc Dong Do, Ngoc Ha Tran, Thanh Hai Dang, Cao Cuong Dang, Xuan Huan Hoang

Abstract:

Globally aligning two protein-protein interaction networks is an important task in the field of bioinformatics and computational biology. It has been a challenging and widely studied research topic in recent years. Accurately aligned networks allow us to identify functional modules of proteins and/or orthologous proteins, from which unknown functions of a protein can be inferred. We here introduce a novel, efficient heuristic global network alignment algorithm called FASTAn, consisting of two phases: the first constructs an initial alignment, and the second improves this alignment by applying a repeated local optimization procedure. The experimental results demonstrated that FASTAn outperformed SPINAL, the state-of-the-art global network alignment algorithm, in terms of both commonly used objective scores and run-time.

Keywords: FASTAn, heuristic algorithm, biological network alignment, protein-protein interaction networks

Procedia PDF Downloads 604
4698 Scheduling Tasks in Embedded Systems Based on NoC Architecture

Authors: D. Dorota

Abstract:

This paper presents a method to generate and schedule tasks in embedded systems based on NoC architecture, using simulated annealing. The method takes into account the divisibility of tasks and represents the process in the form of trees. Although the Network-on-Chip (NoC) architecture is an interesting alternative to a bus architecture in multi-processor systems, it requires considerable work to ensure the optimization of communication. This paper proposes an effective approach to generating a dedicated NoC topology that solves the communication problems. The NoC network is generated taking into account energy consumption and resource issues; ultimately, a minimal, dedicated NoC topology is generated. The proposed solution assumes a simple router design and a minimum number of lines.
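
As an illustration of the optimization step only, the sketch below applies a generic simulated-annealing loop to a task-to-node assignment; the cost function (load imbalance) and problem sizes are placeholders, not the paper's energy/communication model.

```python
# A minimal simulated-annealing sketch, not the paper's implementation.
import math
import random

def anneal(tasks, nodes, cost, t0=100.0, alpha=0.95, steps=2000):
    """Return a (near-)minimal-cost task -> node assignment."""
    assign = {t: random.choice(nodes) for t in tasks}
    cur = cost(assign)
    best, best_cost = dict(assign), cur
    temp = t0
    for _ in range(steps):
        t = random.choice(tasks)
        old = assign[t]
        assign[t] = random.choice(nodes)              # propose a random move
        new = cost(assign)
        if new < cur or random.random() < math.exp((cur - new) / temp):
            cur = new                                  # accept the move
            if new < best_cost:
                best, best_cost = dict(assign), new
        else:
            assign[t] = old                            # reject, undo the move
        temp *= alpha                                  # cool down
    return best, best_cost

# Hypothetical example: 6 tasks mapped onto 4 NoC nodes, cost = load imbalance.
tasks, nodes = list(range(6)), list(range(4))
imbalance = lambda a: max(list(a.values()).count(n) for n in nodes)
print(anneal(tasks, nodes, imbalance))
```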

Keywords: Network-on-Chip, NoC-based embedded systems, scheduling task in embedded systems, simulated annealing

Procedia PDF Downloads 377
4697 Investigation of Clustering Algorithms Used in Wireless Sensor Networks

Authors: Naim Karasekreter, Ugur Fidan, Fatih Basciftci

Abstract:

Wireless sensor networks are networks in which multiple sensor nodes are organized among themselves. Their working principle is based on the transfer of the sensed data over the other nodes in the network to a central station. Research on wireless sensor networks concentrates on routing algorithms, energy efficiency, and clustering algorithms. In the clustering method, the nodes in the network are divided into clusters using different parameters, and the most suitable cluster head is selected from among them. The data to be sent to the center are collected per cluster and transmitted to the center by the cluster head. With this method, network traffic is reduced and the energy efficiency of the nodes is increased. In this study, clustering algorithms were examined in terms of clustering performance and cluster-head selection characteristics in order to identify their strengths and weaknesses. This work is supported by Project 17.Kariyer.123 of the Afyon Kocatepe University BAP Commission.
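
As one concrete example of the clustering family discussed above, the sketch below implements a LEACH-style probabilistic cluster-head election followed by nearest-head cluster assignment; node positions and parameter values are hypothetical, and per-epoch eligibility tracking is omitted for brevity.

```python
# A minimal LEACH-style sketch, not taken from any of the surveyed algorithms.
import random

random.seed(1)
positions = {n: (random.uniform(0, 100), random.uniform(0, 100)) for n in range(100)}

def elect_cluster_heads(nodes, p=0.1, round_no=0):
    """LEACH threshold T(n) = p / (1 - p*(r mod 1/p)); eligibility tracking omitted."""
    threshold = p / (1 - p * (round_no % int(1 / p)))
    return {n for n in nodes if random.random() < threshold}

def nearest_head(n, heads):
    x, y = positions[n]
    return min(heads, key=lambda h: (positions[h][0] - x) ** 2 + (positions[h][1] - y) ** 2)

heads = elect_cluster_heads(list(positions), p=0.1, round_no=3)
clusters = {n: nearest_head(n, heads) for n in positions if n not in heads and heads}
print(f"{len(heads)} cluster heads, {len(clusters)} member nodes")
```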

Keywords: wireless sensor networks (WSN), clustering algorithm, cluster head, clustering

Procedia PDF Downloads 513
4696 Application of Support Vector Machines in Forecasting Non-Residential Construction Quantity in Thailand

Authors: Wiwat Kittinaraporn, Napat Harnpornchai, Sutja Boonyachut

Abstract:

This paper deals with the application of the Support Vector Machine (SVM), a learning technique closely related to neural networks. The objective of this study is to explore the variables and parameters of forecasting factors in the construction industry in order to build a forecasting model for construction quantity in Thailand. The scope of the research is the non-residential construction quantity in Thailand. There are 44 sets of yearly data available, ranging from 1965 to 2009. The correlation between economic indicators and construction demand with a lag of one year was developed by Apichat Buakla. The selected variables are used to develop SVM models to forecast the non-residential construction quantity in Thailand. The parameters are selected using the ten-fold cross-validation method. The results are reported in terms of the Mean Absolute Percentage Error (MAPE). The MAPE value for the non-residential construction quantity predicted by epsilon-SVR with a Radial Basis Function (RBF) kernel is 5.90. Analysis of the experimental results shows that the support vector machine modelling technique can be applied to forecast construction quantity time series, which is useful for decision planning and management purposes.
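
A minimal sketch of the modelling step described above, using scikit-learn's epsilon-SVR with an RBF kernel, ten-fold cross-validation for parameter selection, and MAPE scoring. The data here are random placeholders standing in for the lagged economic indicators, and the parameter grid is an assumption.

```python
# A minimal epsilon-SVR + 10-fold CV sketch, not the authors' code or data.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(44, 5))          # 44 yearly samples, 5 lagged indicators (placeholders)
y = X @ np.array([2.0, -1.0, 0.5, 0.0, 1.5]) + rng.normal(scale=0.1, size=44)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
grid = GridSearchCV(
    model,
    {"svr__C": [1, 10, 100], "svr__epsilon": [0.01, 0.1], "svr__gamma": ["scale", 0.1]},
    cv=10,
    scoring="neg_mean_absolute_percentage_error",
)
grid.fit(X, y)
print("best params:", grid.best_params_)
print("cross-validated MAPE:", -grid.best_score_)
```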

Keywords: forecasting, non-residential, construction, support vector machines

Procedia PDF Downloads 434
4695 Deep Learning-Based Approach to Automatic Abstractive Summarization of Patent Documents

Authors: Sakshi V. Tantak, Vishap K. Malik, Neelanjney Pilarisetty

Abstract:

A patent is an exclusive right granted for an invention. It can be a product or a process that provides an innovative method of doing something, or offers a new technical perspective or solution to a problem. A patent can be obtained by making the technical information and details about the invention publicly available. The patent owner has exclusive rights to prevent or stop anyone from using the patented invention commercially. Any commercial usage, distribution, import or export of a patented invention or product requires the patent owner’s consent. It has been observed that the central and important parts of patents are written in idiosyncratic and complex linguistic structures that can be difficult to read, comprehend or interpret for the masses. The abstracts of these patents tend to obfuscate the precise nature of the patent instead of clarifying it via direct and simple linguistic constructs. This makes it necessary to provide efficient access to this knowledge via concise and transparent summaries. However, as mentioned above, due to complex and repetitive linguistic constructs and extremely long sentences, common extraction-oriented automatic text summarization methods cannot be expected to show remarkable performance when applied to patent documents. Other, more content-oriented or abstractive summarization techniques are able to perform much better and generate more concise summaries. This paper proposes an efficient summarization system for patents that uses artificial intelligence, natural language processing and deep learning techniques to condense the knowledge and essential information from a patent document into a single summary that is easier to understand, without redundant formatting and difficult jargon.
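
A minimal sketch of the abstractive step only, not the authors' full system: a pretrained encoder-decoder summarizer from the Hugging Face transformers library applied to a patent-style sentence. The model choice and the input text are assumptions for illustration.

```python
# A minimal abstractive-summarization sketch with a pretrained model (assumed choice).
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

patent_text = (
    "A fastening apparatus comprising a first member and a second member, "
    "wherein the first member engages the second member by means of a "
    "resiliently deformable tab, thereby securing the apparatus to a substrate "
    "without the use of additional tooling."
)
summary = summarizer(patent_text, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```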

Keywords: abstractive summarization, deep learning, natural language processing, patent document

Procedia PDF Downloads 123
4694 Effective Stacking of Deep Neural Models for Automated Object Recognition in Retail Stores

Authors: Ankit Sinha, Soham Banerjee, Pratik Chattopadhyay

Abstract:

Automated product recognition in retail stores is an important real-world application in the domain of Computer Vision and Pattern Recognition. In this paper, we consider the problem of automatically identifying the classes of the products placed on racks in retail stores from an image of the rack and information about the query/product images. We improve upon the existing approaches in terms of effectiveness and memory requirement by developing a two-stage object detection and recognition pipeline comprising a Faster-RCNN-based object localizer that detects the object regions in the rack image and a ResNet-18-based image encoder that classifies the detected regions into the appropriate classes. Each of the models is fine-tuned using appropriate data sets for better prediction, and data augmentation is performed on each query image to prepare an extensive gallery set for fine-tuning the ResNet-18-based product recognition model. This encoder is trained using a triplet loss function following the strategy of online hard-negative mining for improved prediction. The proposed models are lightweight and can be connected in an end-to-end manner during deployment to automatically identify each product object placed in a rack image. Extensive experiments using the Grozi-32k and GP-180 data sets verify the effectiveness of the proposed model.

Keywords: retail stores, faster-RCNN, object localization, ResNet-18, triplet loss, data augmentation, product recognition

Procedia PDF Downloads 157
4693 Rain Gauges Network Optimization in Southern Peninsular Malaysia

Authors: Mohd Khairul Bazli Mohd Aziz, Fadhilah Yusof, Zulkifli Yusop, Zalina Mohd Daud, Mohammad Afif Kasno

Abstract:

Recently developed rainfall network design techniques have been discussed and compared by many researchers worldwide due to the demand for higher levels of accuracy from collected data. In many studies, rain-gauge networks are designed to provide good estimation of areal rainfall and for flood modelling and prediction. In one study, even when using lumped models for flood forecasting, a proper gauge network significantly improved the results. Therefore, the existing rainfall network in Johor must be optimized and redesigned in order to meet the required level of accuracy preset by rainfall data users. The well-known geostatistical (variance-reduction) method combined with simulated annealing was used as the optimization algorithm in this study to obtain the optimal number and locations of the rain gauges. Rain gauge network structure does not depend only on station density; station location also plays an important role in determining whether information is acquired accurately. The existing network of 84 rain gauges in Johor is optimized and redesigned using rainfall, humidity, solar radiation, temperature and wind speed data during the monsoon season (November to February) for the period 1975-2008. Three different semivariogram models, namely spherical, Gaussian and exponential, were used, and their performances were also compared in this study. A cross-validation technique was applied to compute the errors, and the results showed that the exponential model is the best semivariogram model. It was found that the requirements were satisfied by a network of 64 rain gauges with the minimum estimated variance, and 20 of the existing gauges were removed and relocated. An existing network may contain redundant stations that make little or no contribution to the network's ability to provide quality data. Therefore, two different cases were considered in this study. The first case considered the removed stations, which were optimally relocated into new locations to investigate their influence on the calculated estimated variance, and the second case explored the possibility of relocating all 84 existing stations into new locations to determine the optimal positions. The relocations of the stations in both cases showed that the new optimal locations managed to reduce the estimated variance, proving that location plays an important role in determining the optimal network.
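
To make the semivariogram-fitting step concrete, the sketch below fits the exponential model (one of the three compared above) to hypothetical empirical semivariance points with SciPy; the lag distances and semivariance values are made up.

```python
# A minimal exponential-semivariogram fit, not the study's code or data.
import numpy as np
from scipy.optimize import curve_fit

def exponential_variogram(h, nugget, sill, effective_range):
    """gamma(h) = nugget + sill * (1 - exp(-h / range))."""
    return nugget + sill * (1.0 - np.exp(-h / effective_range))

# Hypothetical empirical semivariogram (lag distance in km, semivariance).
lags = np.array([5, 10, 20, 40, 60, 80, 120], dtype=float)
gamma = np.array([0.8, 1.4, 2.3, 3.1, 3.5, 3.7, 3.8])

(nugget, sill, rng_), _ = curve_fit(exponential_variogram, lags, gamma, p0=[0.5, 3.0, 30.0])
print(f"nugget = {nugget:.2f}, sill = {sill:.2f}, range = {rng_:.1f} km")
```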

Keywords: geostatistics, simulated annealing, semivariogram, optimization

Procedia PDF Downloads 302
4692 Survey Based Data Security Evaluation in Pakistan Financial Institutions against Malicious Attacks

Authors: Naveed Ghani, Samreen Javed

Abstract:

In today’s heterogeneous network environment, there is a growing demand for distrusted clients to jointly execute a secure network to prevent malicious attacks, since the defining task of propagating malicious code is to locate new targets to attack. Residual risk always remains, no matter what solutions are implemented or whatever security methodology or standards are adopted. Security is a first and crucial concern in the field of Computer Science, and the main aim of computer security is the gathering of information over a secure network. There is no mystery about what all that malware is trying to do: it is trying to steal money through data theft, bank transfers, stolen passwords, or swiped identities. With the help of our survey, we learn about the importance of whitelisting, antimalware programs, security patches, log files, honeypots, and more, as used in banks for financial data protection. However, there is also a need to implement IPv6 tunneling with cryptographic data transformation, according to the requirements of new technology, to protect the organization from new malware attacks that craft their own messages and send them to the target. In this paper, the authors propose implementing IPv6 tunneling sessions for private data transmission from financial organizations whose secrecy needs to be safeguarded.

Keywords: network worms, malware infection propagating malicious code, virus, security, VPN

Procedia PDF Downloads 358
4691 Detecting Port Maritime Communities in Spain with Complex Network Analysis

Authors: Nicanor Garcia Alvarez, Belarmino Adenso-Diaz, Laura Calzada Infante

Abstract:

In recent years, researchers have shown an interest in modelling maritime traffic as a complex network. In this paper, we propose a bipartite weighted network to model maritime traffic and detect port maritime communities. The bipartite weighted network considers two different types of nodes. The first represents Spanish ports, while the second represents the countries with which there is major import/export activity. The flow between both types of nodes is modeled by weighting the volume of product transported. To illustrate the model, the data are segmented by each type of traffic. This allows fine-tuning and the creation of communities for each type of traffic, and therefore finding similar ports for a specific type of traffic, which provides decision-makers with tools to search for alliances or identify their competitors. The traffic with the greatest impact on the Spanish gross domestic product is selected, and the evolution of the communities formed by the most important ports, and their differences between 2009 and 2019, is analyzed. Finally, the set of communities formed by the ports of the Spanish port system is inspected to determine global similarities between them, analyzing the sum of the memberships of the different ports in the communities formed for each type of traffic in particular.
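
A minimal sketch of the network construction, with hypothetical flows, and with NetworkX's greedy modularity communities standing in for the Infomap method used in the paper.

```python
# A minimal bipartite-weighted-network sketch; port/country flows are made up,
# and greedy modularity replaces Infomap purely for illustration.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

flows = [  # (Spanish port, partner country, tonnes) -- hypothetical values
    ("Algeciras", "China", 120_000), ("Algeciras", "Morocco", 80_000),
    ("Valencia", "China", 150_000), ("Valencia", "USA", 60_000),
    ("Bilbao", "UK", 40_000), ("Bilbao", "USA", 35_000),
]

G = nx.Graph()
G.add_nodes_from({p for p, _, _ in flows}, bipartite=0)   # port layer
G.add_nodes_from({c for _, c, _ in flows}, bipartite=1)   # country layer
G.add_weighted_edges_from(flows)

for community in greedy_modularity_communities(G, weight="weight"):
    print(sorted(community))
```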

Keywords: bipartite networks, competition, infomap, maritime traffic, port communities

Procedia PDF Downloads 149
4690 The Role of Bridging Stakeholder in Water Management: Examining Social Networks in Working Groups and Co-Management

Authors: Fariba Ebrahimi, Mehdi Ghorbani

Abstract:

Comprehensive water management considers the economic, environmental, technical and social sustainability of water resources for future generations. Integrated water management implies a cooperative approach, involves all stakeholders, and also introduces issues to managers and decision-makers. Solving these issues requires an integrated, systems-based approach and the recognition of the actors or key persons necessary for cooperative management of water resources. Therefore, social network analysis can be used to identify the most effective actors for environment-based decisions. The linkage of diverse sets of actors and knowledge systems across management levels and institutional boundaries often poses one of the greatest challenges in adaptive water management. Bridging stakeholders can facilitate interactions among actors in management settings by lowering the transaction costs of collaboration. This research examines how network connections between group members affect co-management. Cohesive network structures allow groups to achieve their goals and objectives more effectively, and strong, centralized leadership is a better predictor of working-group success in achieving goals and objectives. Finally, the geometric position of each actor was illustrated in the network. The results of the research, based on the betweenness centrality index, identify a key bridging actor for cooperative management of water resources in Darbandsar village, and will also help water managers and planners in organizing and implementing sustainable management of water resources and water security.
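
A minimal sketch of the centrality step, with hypothetical actor names: stakeholders are ranked by betweenness centrality to flag candidate bridging actors.

```python
# A minimal betweenness-centrality sketch; the tie list is illustrative only.
import networkx as nx

ties = [("farmer_1", "farmer_2"), ("farmer_2", "local_council"),
        ("local_council", "water_office"), ("water_office", "ngo"),
        ("ngo", "farmer_3"), ("farmer_3", "farmer_1")]

G = nx.Graph(ties)
centrality = nx.betweenness_centrality(G, normalized=True)

# Actors with the highest betweenness are the candidate bridging stakeholders.
for actor, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{actor:>14}: {score:.2f}")
```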

Keywords: co-management, water management, social network, bridging stakeholder, Darbandsar village

Procedia PDF Downloads 308
4689 Visualizing the Commercial Activity of a City by Analyzing the Data Information in Layers

Authors: Taras Agryzkov, Jose L. Oliver, Leandro Tortosa, Jose Vicent

Abstract:

This paper aims to demonstrate how network models can be used to understand and deal with some aspects of urban complexity. As is well known, the Theory of Architecture and Urbanism has for decades been using intellectual tools based on the ‘sciences of complexity’ as a strategy to propose theoretical approaches about cities and about architecture. In this sense, it is possible to find a vast literature in which, for instance, network theory is used as an instrument to understand very diverse questions about cities: from their commercial activity to their heritage condition. The contribution of this research consists in adding one step of complexity to this process: instead of working with one single primal graph, as is usually done, we show how new network models arise from the consideration of two different primal graphs interacting in two layers. When we model an urban network through a mathematical structure like a graph, the city is usually represented by a set of nodes and edges that reproduce its topology, with the data generated or extracted from the city embedded in it. All this information is normally displayed in a single layer. Here, we propose to separate the information into two layers so that we can evaluate the interaction between them. Besides, both layers may be composed of structures that do not have to coincide: from this bi-layer system, groups of interactions emerge, suggesting reflections and, in consequence, possible actions.

Keywords: graphs, mathematics, networks, urban studies

Procedia PDF Downloads 180
4688 Colorectal Resection in Endometriosis: A Study on Conservative Vascular Approach

Authors: A. Zecchin, E. Vallicella, I. Alberi, A. Dalle Carbonare, A. Festi, F. Galeone, S. Garzon, R. Raffaelli, P. Pomini, M. Franchi

Abstract:

Introduction: Severe endometriosis is a multiorgan disease that involves the bowel in 31% of cases. Disabling symptoms and deep infiltration can lead to bowel obstruction, and surgical bowel treatment may be needed. In these cases, colorectal segment resection is usually performed with ligature of the inferior mesenteric artery, as radically as in oncological surgery. This study examined surgery based on preservation of the intestinal vascular axis. Postoperative complication risks (mainly the rate of dehiscence of intestinal anastomoses) were assessed, and results were compared with those found in the literature for classical colorectal resection. Materials and methods: This was a retrospective study of 62 patients with deep infiltrating endometriosis of the bowel who underwent segmental resection with preservation of the intestinal vascular axis between 2013 and 2016. Complications related to the intervention were assessed both during hospitalization and 30-60 days after resection. Particular attention was paid to the presence of anastomotic dehiscence. Finally, 52 patients were interviewed by telephone in order to investigate the presence or absence of intestinal constipation. Results and Conclusion: The segmental intestinal resection performed in this study ensured a more conservative vascular approach, with a lower rate of anastomotic dehiscence (1.6%) compared to classical literature data (10.0% to 11.4%). No complications were observed regarding spontaneous recovery of intestinal motility and bladder emptying. Constipation in some patients, even years after the intervention, is not assessable in the absence of a preoperative assessment of constipation status.

Keywords: anastomotic dehiscence, deep infiltrating endometriosis, colorectal resection, vascular axis preservation

Procedia PDF Downloads 204
4687 Security in Resource-Constrained Networks: Lightweight Encryption for Z-MAC

Authors: Mona Almansoori, Ahmed Mustafa, Ahmad Elshamy

Abstract:

A wireless sensor network is formed by a combination of nodes that systematically transmit data to their base stations. This transmitted data can easily be compromised, given the limited processing power of these nodes and the need to keep the data consistent, and there is an ongoing discussion about how to address secure data transfer or transmission in real time. This paper presents a mechanism to securely transmit data over a chain of sensor nodes, without compromising the throughput of the network, by utilizing the battery resources available in the sensor nodes. Our methodology takes advantage of the Z-MAC protocol for its efficiency, and it provides a unique key through a sharing mechanism that uses the neighbor node's MAC address. We present a lightweight data integrity layer embedded in the Z-MAC protocol and show that our protocol performs better than Z-MAC when different attack scenarios are introduced.

Keywords: hybrid MAC protocol, data integrity, lightweight encryption, neighbor-based key sharing, sensor node data processing, Z-MAC

Procedia PDF Downloads 144
4686 Correlation of SPT N-Value and Equipment Drilling Parameters in Deep Soil Mixing

Authors: John Eric C. Bargas, Maria Cecilia M. Marcos

Abstract:

One of the most common ground improvement techniques is Deep Soil Mixing (DSM). As the technique has progressed, development is still lacking when it comes to depth control. This was the issue experienced during the installation of DSM in one of the national projects in the Philippines. This study assesses the feasibility of using equipment drilling parameters such as hydraulic pressure, drilling speed and rotational speed to determine the Standard Penetration Test (SPT) N-value of a specific soil. Hydraulic pressure and drilling speed, with a constant rotational speed of 30 rpm, have a positive correlation with the SPT N-value for cohesive soil and sand. A linear trend was observed for cohesive soil: the correlation of SPT N-value with hydraulic pressure yielded R²=0.5377, while the correlation of SPT N-value with drilling speed yielded R²=0.6355. The best-fitting model for sand is a polynomial trend: the correlation of SPT N-value with hydraulic pressure yielded R²=0.7088, while the correlation of SPT N-value with drilling speed yielded R²=0.4354. The lower correlation may be attributed to the behavior of sand when the auger penetrates: sand tends to follow the rotation of the auger rather than resisting it, which was observed for very loose to medium-dense sand. Specific energy and the product of hydraulic pressure and drilling speed yielded the same R² values, with a positive correlation; a linear trend was observed for cohesive soil and a polynomial trend for sand. Cohesive soil yielded R²=0.7320, which indicates a strong relationship, and sand also yielded a strong relationship with a coefficient of determination of R²=0.7203. It is feasible to use hydraulic pressure and drilling speed to estimate the SPT N-value of the soil. Also, the product of hydraulic pressure and drilling speed can be a substitute for specific energy when estimating the SPT N-value of a soil. However, additional considerations are necessary to account for other influencing factors such as groundwater and the physical and mechanical properties of the soil.
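
The trend comparison above can be reproduced in outline as follows; the drilling-parameter readings below are made up, and R² is computed for a linear and a quadratic (polynomial) fit.

```python
# A minimal linear-vs-polynomial regression sketch with hypothetical readings.
import numpy as np

pressure = np.array([40, 55, 70, 90, 110, 130, 150], dtype=float)  # hypothetical
n_value = np.array([4, 7, 9, 14, 18, 21, 26], dtype=float)         # hypothetical SPT N

def r_squared(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

linear = np.poly1d(np.polyfit(pressure, n_value, 1))      # linear trend
quadratic = np.poly1d(np.polyfit(pressure, n_value, 2))   # polynomial trend
print("linear R^2:", round(r_squared(n_value, linear(pressure)), 3))
print("quadratic R^2:", round(r_squared(n_value, quadratic(pressure)), 3))
```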

Keywords: ground improvement, equipment drilling parameters, standard penetration test, deep soil mixing

Procedia PDF Downloads 49
4685 A Machine Learning Based Method to Detect System Failure in Resource Constrained Environment

Authors: Payel Datta, Abhishek Das, Abhishek Roychoudhury, Dhiman Chattopadhyay, Tanushyam Chattopadhyay

Abstract:

Machine learning (ML) and deep learning (DL) are predominantly used in image/video processing, natural language processing (NLP), and audio and speech recognition, but not as much in system performance evaluation. In this paper, the authors describe the architecture of an abstraction layer constructed using ML/DL to detect system failure. The proposed system detects system failure by evaluating the performance metrics of an IoT service deployment in a constrained infrastructure environment. The system has been tested on a manually annotated data set containing different metrics of the system, such as the number of threads, throughput, average response time, CPU usage, memory usage, and network input/output, captured in different hardware environments such as edge (Atom-based gateway) and cloud (AWS EC2). The main challenge in developing such a system is that the classification accuracy should be 100%, as errors in the system degrade service performance and consequently affect the reliability and high availability that are mandatory for an IoT system. The proposed ML/DL classifiers work with 100% accuracy on the data set of nearly 4,000 samples captured within the organization.

Keywords: machine learning, system performance, performance metrics, IoT, edge

Procedia PDF Downloads 195
4684 Exploring the Psychosocial Brain: A Retrospective Analysis of Personality, Social Networks, and Dementia Outcomes

Authors: Felicia N. Obialo, Aliza Wingo, Thomas Wingo

Abstract:

Psychosocial factors such as personality traits and social networks influence cognitive aging and dementia outcomes both positively and negatively. The inherent complexity of these factors makes defining the underlying mechanisms of their influence difficult; however, exploring their interactions affords promise in the field of cognitive aging. The objective of this study was to elucidate some of these interactions by determining the relationship between social network size and dementia outcomes and by determining whether personality traits mediate this relationship. The longitudinal Alzheimer’s Disease (AD) database provided by Rush University’s Religious Orders Study/Memory and Aging Project was utilized to perform retrospective regression and mediation analyses on 3,591 participants. Participants who were cognitively impaired at baseline were excluded, and analyses were adjusted for age, sex, common chronic diseases, and vascular risk factors. Dementia outcome measures included cognitive trajectory, clinical dementia diagnosis, and postmortem beta-amyloid plaque (AB), and neurofibrillary tangle (NT) accumulation. Personality traits included agreeableness (A), conscientiousness (C), extraversion (E), neuroticism (N), and openness (O). The results show a positive correlation between social network size and cognitive trajectory (p-value = 0.004) and a negative relationship between social network size and odds of dementia diagnosis (p = 0.024/ Odds Ratio (OR) = 0.974). Only neuroticism mediates the positive relationship between social network size and cognitive trajectory (p < 2e-16). Agreeableness, extraversion, and neuroticism all mediate the negative relationship between social network size and dementia diagnosis (p=0.098, p=0.054, and p < 2e-16, respectively). All personality traits are independently associated with dementia diagnosis (A: p = 0.016/ OR = 0.959; C: p = 0.000007/ OR = 0.945; E: p = 0.028/ OR = 0.961; N: p = 0.000019/ OR = 1.036; O: p = 0.027/ OR = 0.972). Only conscientiousness and neuroticism are associated with postmortem AD pathologies; specifically, conscientiousness is negatively associated (AB: p = 0.001, NT: p = 0.025) and neuroticism is positively associated with pathologies (AB: p = 0.002, NT: p = 0.002). These results support the study’s objectives, demonstrating that social network size and personality traits are strongly associated with dementia outcomes, particularly the odds of receiving a clinical diagnosis of dementia. Personality traits interact significantly and beneficially with social network size to influence the cognitive trajectory and future dementia diagnosis. These results reinforce previous literature linking social network size to dementia risk and provide novel insight into the differential roles of individual personality traits in cognitive protection.

Keywords: Alzheimer’s disease, cognitive trajectory, personality traits, social network size

Procedia PDF Downloads 127
4683 Extraction of Nutraceutical Bioactive Compounds from the Native Algae Using Solvents with a Deep Natural Eutectic Point and Ultrasonic-assisted Extraction

Authors: Seyedeh Bahar Hashemi, Alireza Rahimi, Mehdi Arjmand

Abstract:

Food is the source of energy and growth through the breakdown of its vital components, and it plays a vital role in human health and nutrition. Many natural compounds found in plant and animal materials play a special role in biological systems, and the origin of many such compounds, directly or indirectly, is algae. Algae are an enormous source of polysaccharides and have gained much interest for human well-being. In this study, algae biomass extraction is conducted using natural deep eutectic solvents (NADES) and ultrasound-assisted extraction (UAE). The aim of this research is to extract bioactive compounds, including total carotenoids and polyphenolic contents, and to assess antioxidant activity. For this purpose, the influence of three important extraction parameters, namely biomass-to-solvent ratio, temperature, and time, is studied with respect to their impact on the recovery of carotenoids and phenolics and on the extracts’ antioxidant activity. Here we employ Response Surface Methodology for the process optimization. The influence of the independent parameters on each dependent variable is determined through analysis of variance. Our results show that ultrasound-assisted extraction (UAE) for 50 min is the best extraction condition, and proline:lactic acid (1:1) and choline chloride:urea (1:2) extracts show the highest total phenolic contents (50.00 ± 0.70 mgGAE/gdw) and antioxidant activity [60.00 ± 1.70 mgTE/gdw and 70.00 ± 0.90 mgTE/gdw in the 2,2-diphenyl-1-picrylhydrazyl (DPPH) and 2,2′-azino-bis(3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) assays, respectively]. Our results confirm that the combination of UAE and NADES provides an excellent alternative to organic solvents for sustainable and green extraction and has huge potential for use in industrial applications involving the extraction of bioactive compounds from algae. This study is among the first attempts to optimize ultrasound-assisted extraction with natural deep eutectic solvents and to investigate their application in the extraction of bioactive compounds from algae. We also discuss the future perspectives of ultrasound technology, which help in understanding the complex mechanism of ultrasound-assisted extraction and further guide its application to algae.

Keywords: natural deep eutectic solvents, ultrasound-assisted extraction, algae, antioxidant activity, phenolic compounds, carotenoids

Procedia PDF Downloads 179
4682 Enhancing Throughput for Wireless Multihop Networks

Authors: K. Kalaiarasan, B. Pandeeswari, A. Arockia John Francis

Abstract:

Wireless multi-hop networks consist of one or more intermediate nodes along the path that receive and forward packets via wireless links. The backpressure algorithm provides throughput-optimal routing and scheduling decisions for multi-hop networks with dynamic traffic. Xpress, a cross-layer backpressure architecture, was designed to reach the capacity of wireless multi-hop networks; it provides good coordination between network layers by turning a mesh network into a wireless switch. Transmission over the network is scheduled using a throughput-optimal backpressure algorithm. However, this architecture operates well below capacity due to out-of-order packet delivery and variable packet sizes. In this paper, we present Xpress-T, a throughput-optimal backpressure architecture with TCP support designed to reach the maximum throughput of wireless multi-hop networks. Xpress-T operates at the IP layer, and therefore any transport protocol, including TCP, can run on top of it. The proposed design not only avoids bottlenecks but also handles out-of-order packet delivery and variable packet sizes, optimally load-balancing traffic when needed and improving fairness among competing flows. Our simulation results show that Xpress-T gives 65% more throughput than Xpress.
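
A minimal sketch of the core backpressure rule underlying Xpress and Xpress-T, not the architecture itself: each link is weighted by the largest per-flow queue differential, and the scheduler serves the maximum-weight link. The queue values are hypothetical.

```python
# A minimal backpressure-scheduling sketch; queues and topology are made up.
def backpressure_weights(queues, links):
    """queues[node][flow] = backlog; links = [(u, v), ...]."""
    weights = {}
    for (u, v) in links:
        diffs = {f: queues[u].get(f, 0) - queues[v].get(f, 0) for f in queues[u]}
        best_flow = max(diffs, key=diffs.get)           # flow with largest differential
        weights[(u, v)] = (best_flow, max(diffs[best_flow], 0))
    return weights

queues = {"A": {"f1": 9, "f2": 3}, "B": {"f1": 4, "f2": 5}, "C": {"f1": 0, "f2": 1}}
links = [("A", "B"), ("B", "C")]

w = backpressure_weights(queues, links)
serve = max(w, key=lambda link: w[link][1])             # schedule the max-weight link
print(w, "-> serve", serve, "carrying flow", w[serve][0])
```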

Keywords: backpressure scheduling and routing, TCP, congestion control, wireless multihop network

Procedia PDF Downloads 518
4681 Optimizing the Location of Parking Areas Adapted for Dangerous Goods in the European Road Transport Network

Authors: María Dolores Caro, Eugenio M. Fedriani, Ángel F. Tenorio

Abstract:

The transportation of dangerous goods by lorries throughout Europe must be done using the roads that make up the European Road Transport Network. In this network, there are several parking areas where lorry drivers can park to rest according to the regulations. According to the "European Agreement concerning the International Carriage of Dangerous Goods by Road", parking areas where lorries transporting dangerous goods can park to rest must follow several security stipulations to keep the rest of the road users safe. In this respect, these lorries must be parked in adapted areas with strict and permanent surveillance measures. Moreover, drivers must satisfy several restrictions on resting and driving time. Given these facts, one might expect that there exist enough parking areas for the transport of this type of goods to obey the regulations prescribed by the European Union and its member countries. However, the already-existing parking areas are not sufficient to cover all the stops required by drivers transporting dangerous goods. Our main goal is, starting from the already-existing parking areas and the loading-and-unloading locations, to provide an optimal answer to the following question: how many additional parking areas must be built, and where must they be located, to ensure that lorry drivers can transport dangerous goods following all the stipulations about security and safety for their stops? The sense of the word "optimal" is that we give a global solution for the location of parking areas throughout the whole European Road Transport Network, keeping the number of additional areas as low as possible. To do so, we have modeled the problem using graph theory, since we are working with a road network. As nodes, we have considered the locations of each already-existing parking area, each loading-and-unloading area, and each road bifurcation. Each road connecting two nodes is considered an edge in the graph, whose weight corresponds to the distance between the two nodes. By applying a new efficient algorithm, we have found the additional nodes for the network representing the new parking areas adapted for dangerous goods, under the constraint that the distance between two parking areas must be less than or equal to 400 km.

Keywords: trans-european transport network, dangerous goods, parking areas, graph-based modeling

Procedia PDF Downloads 280
4680 Energy Efficient Assessment of Energy Internet Based on Data-Driven Fuzzy Integrated Cloud Evaluation Algorithm

Authors: Chuanbo Xu, Xinying Li, Gejirifu De, Yunna Wu

Abstract:

Energy Internet (EI) is a new form that deeply integrates the Internet and the entire energy process from production to consumption. The assessment of energy efficiency performance is of vital importance for the long-term sustainable development of EI projects. Although the newly proposed fuzzy integrated cloud evaluation algorithm considers the randomness of uncertainty, it relies too much on the experience and knowledge of experts. Fortunately, the enrichment of EI data has enabled the utilization of data-driven methods. Therefore, the main purpose of this work is to assess the energy efficiency of a park-level EI by using a combination of a data-driven method with the fuzzy integrated cloud evaluation algorithm. Firstly, the indicators for energy efficiency are identified through a literature review. Secondly, an artificial neural network (ANN)-based data-driven method is employed to cluster the values of the indicators. Thirdly, the energy efficiency of the EI project is calculated through the fuzzy integrated cloud evaluation algorithm. Finally, the applicability of the proposed method is demonstrated by a case study.

Keywords: energy efficient, energy internet, data-driven, fuzzy integrated evaluation, cloud model

Procedia PDF Downloads 202
4679 Analysis and Design Modeling for Next Generation Network Intrusion Detection and Prevention System

Authors: Nareshkumar Harale, B. B. Meshram

Abstract:

The continued exponential growth of successful cyber intrusions against today’s businesses has made it abundantly clear that traditional perimeter security measures are no longer adequate and effective. The network trust architecture has evolved from trust/untrust to zero trust. With zero trust, essential security capabilities are deployed in a way that provides policy enforcement and protection for all users, devices, applications, data resources, and the communications traffic between them, regardless of their location. Information exchange over the Internet, despite the inclusion of advanced security controls, remains prone to innovative and inventive cyberattacks. The TCP/IP protocol stack, the adopted standard for communication over networks, suffers from inherent design vulnerabilities: its communication and session management protocols, routing protocols and security protocols are the major cause of major attacks. With the explosion of cybersecurity threats such as viruses, worms, rootkits, malware and denial-of-service attacks, accomplishing efficient and effective intrusion detection and prevention has become crucial and challenging. In this paper, we propose a design and analysis model for a next-generation network intrusion detection and prevention system as part of a layered security strategy. The proposed system design provides intrusion detection for a wide range of attacks with a layered architecture and framework. The proposed network intrusion classification framework deals with cyberattacks on the standard TCP/IP protocol, routing protocols and security protocols. It thereby forms the basis for detection of attack classes and applies signature-based matching for known cyberattacks and data-mining-based machine learning approaches for unknown cyberattacks. Our implemented software can effectively detect attacks even when malicious connections are hidden within normal events. The unsupervised learning algorithm applied to network audit data trails results in the detection of unknown intrusions. Association rule mining algorithms generate new rules from collected audit trail data, resulting in increased intrusion prevention through integrated firewall systems. Intrusion response mechanisms can be initiated in real time, thereby minimizing the impact of network intrusions. Finally, we show how our approach can be validated and how the analysis results can be used for detecting and protecting against new network anomalies.

Keywords: network intrusion detection, network intrusion prevention, association rule mining, system analysis and design

Procedia PDF Downloads 227
4678 Description of the Non-Iterative Learning Algorithm of Artificial Neuron

Authors: B. S. Akhmetov, S. T. Akhmetova, A. I. Ivanov, T. S. Kartbayev, A. Y. Malygin

Abstract:

The problem of training a network of artificial neurons in biometric applications is that this process has to be completely automatic, i.e., a human operator should not participate in it. Therefore, this article discusses the issues of training a network of artificial neurons and describes a non-iterative learning algorithm for an artificial neuron.

Keywords: artificial neuron, biometrics, biometrical applications, learning of neuron, non-iterative algorithm

Procedia PDF Downloads 496
4677 Language Development and Growing Spanning Trees in Children Semantic Network

Authors: Somayeh Sadat Hashemi Kamangar, Fatemeh Bakouie, Shahriar Gharibzadeh

Abstract:

In this study, we aim to exploit Maximum Spanning Trees (MSTs) of children's semantic networks to investigate their language development. To do so, we examine the graph-theoretic properties of word-embedding networks. The nodes of these networks are words children learn prior to the age of 30 months, and the links are built from the cosine vector similarity of words normatively acquired by children prior to two and a half years of age. These networks are weighted graphs, and the strength of each link is determined by the numerical similarity of the two words (nodes) at its ends. To avoid converting the weighted networks into binary ones by setting a threshold, constructing MSTs presents a solution. An MST is a unique sub-graph that connects all the nodes in such a way that the sum of all the link weights is maximized without forming cycles. MSTs, as the backbone of the semantic networks, are suitable for examining developmental changes in semantic network topology in children. From these trees, several parameters were calculated to characterize the developmental change in network organization. We showed that MSTs provide an elegant method sensitive enough to capture subtle developmental changes in semantic network organization.
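
A minimal sketch of the graph construction described above, using toy word vectors rather than the normative acquisition data: edges are weighted by cosine similarity, and the maximum spanning tree is extracted with NetworkX.

```python
# A minimal maximum-spanning-tree sketch; the word vectors are invented.
import itertools
import networkx as nx
import numpy as np

embeddings = {                         # hypothetical word vectors
    "dog": np.array([0.9, 0.1, 0.0]),
    "cat": np.array([0.8, 0.2, 0.1]),
    "ball": np.array([0.1, 0.9, 0.3]),
    "milk": np.array([0.2, 0.3, 0.9]),
}

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

G = nx.Graph()
for w1, w2 in itertools.combinations(embeddings, 2):
    G.add_edge(w1, w2, weight=cosine(embeddings[w1], embeddings[w2]))

mst = nx.maximum_spanning_tree(G, weight="weight")
print(sorted(mst.edges(data="weight")))
```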

Keywords: maximum spanning trees, word-embedding, semantic networks, language development

Procedia PDF Downloads 145
4676 Training of Future Computer Science Teachers Based on Machine Learning Methods

Authors: Meruert Serik, Nassipzhan Duisegaliyeva, Danara Tleumagambetova

Abstract:

The article highlights and describes the characteristic features of real-time face detection in images and videos using machine learning algorithms. The research work was reviewed by students of the educational programs "6B01511-Computer Science", "7M01511-Computer Science", "7M01525-STEM Education", and "8D01511-Computer Science" of the Eurasian National University named after L.N. Gumilyov. As a result, the advantages and disadvantages of the Haar Cascade (OpenCV Haar Cascade), HoG SVM (Histogram of Oriented Gradients with a Support Vector Machine), and MMOD CNN Dlib (Max-Margin Object Detection with a convolutional neural network) detectors used for face detection were determined. Dlib is a general-purpose cross-platform software library written in the C++ programming language; it includes detectors used for face detection. The OpenCV Haar Cascade algorithm is efficient for fast face detection. The considered work forms the basis for the development of machine learning methods by future computer science teachers.
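
A minimal sketch of the Haar Cascade detector compared above, using OpenCV's bundled frontal-face model; the image path is a placeholder.

```python
# A minimal OpenCV Haar Cascade face-detection sketch.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("students.jpg")                    # placeholder path
if image is None:
    raise SystemExit("replace 'students.jpg' with a real image path")

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:                            # draw bounding boxes
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
print(f"{len(faces)} face(s) detected")
```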

Keywords: algorithm, artificial intelligence, education, machine learning

Procedia PDF Downloads 73
4675 Scheduling Nodes Activity and Data Communication for Target Tracking in Wireless Sensor Networks

Authors: AmirHossein Mohajerzadeh, Mohammad Alishahi, Saeed Aslishahi, Mohsen Zabihi

Abstract:

In this paper, we consider sensor nodes with the capability of measuring bearings (the relative angle to the target). We use geometric methods to select a set of observer nodes responsible for collecting data from the target. Considering the characteristics of target tracking applications, it is clear that a significant number of sensor nodes are usually inactive. Therefore, in order to minimize the total network energy consumption, a set of sensor nodes, called sentinels, is periodically selected for monitoring, controlling the environment and transmitting data through the network; the other nodes remain inactive. Furthermore, the proposed approach provides a joint scheduling and routing algorithm to transmit data between network nodes and the fusion center (FC), which not only provides an efficient way to estimate the target position but also provides efficient target tracking. Performance evaluation confirms the superiority of the proposed algorithm.

Keywords: coverage, routing, scheduling, target tracking, wireless sensor networks

Procedia PDF Downloads 378
4674 Efficient Rehearsal Free Zero Forgetting Continual Learning Using Adaptive Weight Modulation

Authors: Yonatan Sverdlov, Shimon Ullman

Abstract:

Artificial neural networks encounter a notable challenge known as continual learning, which involves acquiring knowledge of multiple tasks over an extended period. This challenge arises due to the tendency of previously learned weights to be adjusted to suit the objectives of new tasks, resulting in a phenomenon called catastrophic forgetting. Most approaches to this problem seek a balance between maximizing performance on the new tasks and minimizing the forgetting of previous tasks. In contrast, our approach attempts to maximize the performance of the new task, while ensuring zero forgetting. This is accomplished through the introduction of task-specific modulation parameters for each task, and only these parameters are learned for the new task, after a set of initial tasks have been learned. Through comprehensive experimental evaluations, our model demonstrates superior performance in acquiring and retaining novel tasks that pose difficulties for other multi-task models. This emphasizes the efficacy of our approach in preventing catastrophic forgetting while accommodating the acquisition of new tasks.
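
A minimal sketch of the general idea, not the authors' architecture: a shared layer is frozen after the initial tasks, and only small task-specific scale/shift modulation parameters are used for each new task. The dimensions and task count are arbitrary.

```python
# A minimal task-specific modulation sketch in PyTorch (illustrative only).
import torch
import torch.nn as nn

class ModulatedLayer(nn.Module):
    def __init__(self, in_dim, out_dim, n_tasks):
        super().__init__()
        self.shared = nn.Linear(in_dim, out_dim)                  # shared backbone weights
        self.scales = nn.Parameter(torch.ones(n_tasks, out_dim))  # per-task scale
        self.shifts = nn.Parameter(torch.zeros(n_tasks, out_dim)) # per-task shift

    def forward(self, x, task_id):
        h = self.shared(x)
        return h * self.scales[task_id] + self.shifts[task_id]

layer = ModulatedLayer(8, 16, n_tasks=3)
layer.shared.weight.requires_grad_(False)   # freeze shared weights after initial tasks
layer.shared.bias.requires_grad_(False)

out = layer(torch.randn(4, 8), task_id=1)   # only task 1's modulation is applied
print(out.shape)
```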

Keywords: continual learning, life-long learning, neural analogies, adaptive modulation

Procedia PDF Downloads 71
4673 Leveraging Li-Fi to Enhance Security and Performance of Medical Devices

Authors: Trevor Kroeger, Hayden Williams, Edward Holzinger, David Coleman, Brian Haberman

Abstract:

The network connectivity of medical devices is increasing at a rapid rate. Many medical devices, such as vital sign monitors, share information via wireless or wired connections. However, these connectivity options suffer from a variety of well-known limitations. Wireless connectivity, especially in the unlicensed radio frequency bands, can be disrupted. Such disruption could be due to benign reasons, such as a crowded spectrum, or to malicious intent. While wired connections are less susceptible to interference, they inhibit the mobility of the medical devices, which could be critical in a variety of scenarios. This work explores the application of Light Fidelity (Li-Fi) communication to enhance the security, performance, and mobility of medical devices in connected healthcare scenarios. A simple bridge for connected devices serves as an avenue to connect traditional medical devices to the Li-Fi network. This bridge was utilized to conduct bandwidth tests on a small Li-Fi network installed in a mock-ICU setting with a backend enterprise network similar to that of a hospital. Mobile and stationary tests were conducted to replicate various situations that might occur within a hospital setting. Results show that in-room Li-Fi connectivity provides reasonable bandwidth and latency in a hospital-like setting.

Keywords: hospital, light fidelity, Li-Fi, medical devices, security

Procedia PDF Downloads 102
4672 Poster: Incident Signals Estimation Based on a Modified MCA Learning Algorithm

Authors: Rashid Ahmed, John N. Avaritsiotis

Abstract:

Many signal subspace-based approaches have already been proposed for determining the fixed Direction of Arrival (DOA) of plane waves impinging on an array of sensors. Two procedures for DOA estimation based on neural networks are presented. First, Principal Component Analysis (PCA) is employed to extract the maximum eigenvalue and its eigenvector from the signal subspace to estimate the DOA. Second, Minor Component Analysis (MCA) is a statistical method for extracting the eigenvector associated with the smallest eigenvalue of the covariance matrix. In this paper, we modify an MCA(R) learning algorithm to enhance its convergence, since convergence is essential for moving the MCA algorithm towards practical applications. The learning rate parameter is also presented; it ensures fast convergence of the algorithm because it has a direct effect on the convergence of the weight vector, and the error level is affected by this value. MCA is performed to determine the estimated DOA. Preliminary results are furnished to illustrate the convergence results achieved.
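
As a batch reference for what MCA extracts, rather than the modified online learning rule proposed here, the sketch below computes the eigenvector associated with the smallest eigenvalue of the sample covariance matrix on synthetic data.

```python
# A minimal batch minor-component sketch on synthetic sensor snapshots.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4)) @ np.diag([3.0, 2.0, 1.0, 0.1])  # synthetic data

cov = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)        # eigh returns eigenvalues in ascending order
minor_component = eigvecs[:, 0]               # eigenvector of the smallest eigenvalue

print("smallest eigenvalue:", round(eigvals[0], 4))
print("minor component:", np.round(minor_component, 3))
```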

Keywords: Direction of Arrival, neural networks, Principal Component Analysis, Minor Component Analysis

Procedia PDF Downloads 451