Search results for: thyroid nodes
599 Energy Efficient Firefly Algorithm in Wireless Sensor Network
Authors: Wafa’ Alsharafat, Khalid Batiha, Alaa Kassab
Abstract:
A wireless sensor network (WSN) comprises a huge number of small and cheap devices known as sensor nodes. Usually, these sensor nodes are deployed massively and randomly, in an ad-hoc manner, over hostile and harsh environments to sense, collect and transmit data to the needed locations (i.e., the base station). One of the main advantages of a WSN is its ability to work in unattended and scattered environments regardless of the presence of humans, such as remote active volcano or earthquake environments. In a WSN, extending network lifetime is a major concern, and clustering techniques are important for maximizing it. Nature-inspired algorithms are developed and optimized to find solutions for various optimization problems. We propose an Energy Efficient Firefly Algorithm to prolong network lifetime as much as possible.
Keywords: wireless network, SN, Firefly, energy efficiency
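The abstract does not give the algorithm's details; as a rough illustration of the kind of nature-inspired optimizer such clustering schemes build on, the sketch below shows a generic firefly optimizer minimizing a placeholder energy-cost fitness for candidate cluster-head positions. All names, parameters and the fitness function are assumptions, not the authors' implementation.

```python
import numpy as np

# Minimal firefly optimizer (illustrative only; parameters are assumptions).
# fitness: lower is better, e.g. a hypothetical energy cost of cluster-head placement.
def firefly_optimize(fitness, dim, n_fireflies=20, n_iter=100,
                     alpha=0.2, beta0=1.0, gamma=1.0, bounds=(0.0, 100.0)):
    lo, hi = bounds
    rng = np.random.default_rng(0)
    pos = rng.uniform(lo, hi, size=(n_fireflies, dim))   # candidate solutions
    light = np.array([fitness(p) for p in pos])          # brightness ~ inverse cost

    for _ in range(n_iter):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if light[j] < light[i]:                   # j is "brighter" (lower cost)
                    r2 = np.sum((pos[i] - pos[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)    # attractiveness decays with distance
                    pos[i] += beta * (pos[j] - pos[i]) + alpha * (rng.random(dim) - 0.5)
                    pos[i] = np.clip(pos[i], lo, hi)
                    light[i] = fitness(pos[i])
    best = np.argmin(light)
    return pos[best], light[best]

# Example: place 3 cluster heads (x, y pairs) near a hypothetical base station at (50, 50).
cost = lambda p: float(np.sum((p.reshape(-1, 2) - np.array([50.0, 50.0])) ** 2))
best_pos, best_cost = firefly_optimize(cost, dim=6)
print(best_pos.reshape(-1, 2), best_cost)
```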
Procedia PDF Downloads 389
598 Evaluation and Association of Thyroid Function Tests with Liver Function Parameters LDL and LDH Level Before and after I131 Therapy
Authors: Sabika Rafiq, Rubaida Mehmood, Sajid Hussain, Atia Iqbal
Abstract:
Background and objectives: The pathogenesis of liver function abnormalities and cardiac dysfunction in hyperthyroid patients after I131 treatment is still unclear. This study aimed to determine the effects of radioiodine I131 on liver function parameters, lactate dehydrogenase (LDH) and low-density lipoproteins (LDL) before and after I131 therapy in hyperthyroidism patients. Material & Methods: A total of 52 patients with hyperthyroidism recommended for I131 were involved in this study, with ages ranging from 12–65 years (mean age=38.6±14.8 & BMI=11.5±3.7). The significance of the differences between the results of the 1st, 2nd and 3rd serum analyses was assessed by the unpaired Student's t-test. Associations between the parameters were assessed by Spearman correlation analysis. Results: Significant variations were observed in the thyroid profile for FT3 (p=0.04), FT4 (p=0.01) and TSH (p=0.005) during the follow-up treatment. Before taking I131 (1st serum analysis), negative correlations of FT3 with AST (r=-0.458, p=0.032) and LDL (r=-0.454, p=0.039) were observed. At the 2nd time point (after stopping carbimazole), no correlation was found. Two months after the administration of I131 drops, significant negative associations of FT3 (r=-0.62, p=0.04) and FT4 (r=-0.61, p=0.02) with ALB were observed. FT3 (r=-0.82, p=0.00) and FT4 (r=-0.71, p=0.00) also showed negative correlations with LDL after I131 therapy, whereas TSH showed significant positive associations with ALB (r=0.61, p=0.01) and LDL (r=0.70, p=0.00), respectively. Conclusion: The current findings suggest that the association of TFTs with biochemical parameters in patients with goiter recommended for iodine therapy is an important diagnostic and therapeutic tool. The significant increases in transaminases and low-density lipoprotein levels after taking I131 drops are alarming signs for heart and liver function abnormalities and warrant physicians' attention on an urgent basis.
Keywords: hyperthyroidism, carbimazole, radioiodine I131, liver functions, low-density lipoprotein
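For readers unfamiliar with the statistics named here, the snippet below shows how a Spearman correlation between FT3 and a liver parameter could be computed with SciPy. The data values are invented placeholders for illustration only and are not from this study.

```python
from scipy import stats

# Illustrative (made-up) paired measurements: FT3 (pmol/L) vs AST (U/L).
ft3 = [3.1, 5.2, 6.8, 4.4, 7.9, 2.5, 5.9, 6.1]
ast = [41, 30, 25, 35, 22, 48, 28, 27]

rho, p_value = stats.spearmanr(ft3, ast)   # rank-based correlation coefficient and p-value
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")

# An unpaired comparison between two follow-up groups could use:
# stats.ttest_ind(group1, group2, equal_var=False)
```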
Procedia PDF Downloads 158
597 A Multicopy Strategy for Improved Security Wireless Sensor Network
Authors: Tuğçe Yücel
Abstract:
A Wireless Sensor Network (WSN) is a collection of sensor nodes which are deployed randomly in an area for surveillance. Efficient utilization of the limited battery energy of sensors for increased network lifetime, as well as data security, are major design objectives for a WSN, along with the secure transmission of sensed data to a base station for further processing. Producing multiple copies of data packets and sending them on different paths is one strategy for this purpose, but it leads to redundant energy consumption and hence reduced network lifetime. In this work, we develop a restricted multi-copy multipath strategy in which data moving through 'frequently' or 'heavily' used sensors are copied by the sensors incident to such central nodes and sent on node-disjoint paths. We develop a mixed integer programming (MIP) model and a heuristic approach, and present some preliminary test results.
Keywords: MIP, sensor, telecommunications, WSN
Procedia PDF Downloads 516
596 Valorization of Dates Nodes as a Carbon Source Using Biological Denitrification
Authors: Ouerdia Benbelkacem Belouanas
Abstract:
Heterotrophic denitrification has been proven to be one of the most feasible processes for removing nitrate from wastewater and drinking water. In this process, heterotrophic bacteria use organic carbon both for growth and as an electron source. Groundwater pollution by nitrates has become alarming in Algeria. A survey revealed that the nitrate concentration is continually increasing, and studies in some regions revealed contamination exceeding the recommended permissible dose of 50 mg/L. Worrying values in the regions of Mascara, Ouled Saber, El Eulma, Bouira and Algiers are, respectively, 72 mg/L, 75 mg/L, 97 mg/L, 102 mg/L, and 158 mg/L. A high concentration of nitrate in drinking water is associated with serious health risks, and research on nitrate removal technologies for municipal water supplies is increasing because of nitrate contamination. Biological denitrification enables the transformation of oxidized nitrogen compounds by a wide spectrum of heterotrophic bacteria into harmless nitrogen gas with accompanying carbon removal. Globally, denitrification is commonly employed in biological nitrogen removal processes to enhance water quality. This study investigated the valorization of a vegetable residue, dates nodes, as a carbon source in water treatment using the denitrification process. The effects of inoculum addition, pH, and initial nitrate concentration were also investigated. In this research, the natural organic substance acts as a solid substrate and biofilm carrier. The experiments were carried out in batch processes. The denitrification achieved varied between 80 and 100% according to the type of process used. Based on our results, we concluded that the removal of organic matter and nitrogen compounds depended mainly on the initial nitrate concentration. The effluent pH was mainly affected by the C/N ratio, where a decrease in the ratio increases pH.
Keywords: biofilm, carbon source, dates nodes, heterotrophic denitrification, nitrate, nitrite
Procedia PDF Downloads 423
595 Predictive Value of Primary Tumor Depth for Cervical Lymphadenopathy in Squamous Cell Carcinoma of Buccal Mucosa
Authors: Zohra Salim
Abstract:
Objective: To assess the relationship of primary tumor thickness with cervical lymphadenopathy in squamous cell carcinoma of the buccal mucosa. Methodology: A cross-sectional observational study was carried out on 80 patients with biopsy-proven oral squamous cell carcinoma of the buccal mucosa at Dow University of Health Sciences. All the study participants were treated with wide local excision of the primary tumor with elective neck dissection. Patients with prior head and neck malignancy or prior radiotherapy or chemotherapy were excluded from the study. Data were entered and analyzed in SPSS 21. The chi-squared test with 95% C.I. and 80% power was used to evaluate the relationship of tumor depth with cervical lymph nodes. Results: 50 participants were male, and 30 were female. 30 patients were in the age range of 20-40 years, 36 patients in the range of 40-60 years, while 14 patients were beyond the age of 60 years. Tumor size ranged from 0.3 cm to 5 cm with a mean of 2.03 cm. Tumor depth ranged from 0.2 cm to 5 cm. 20% of the participants presented with a tumor depth greater than 2.5 cm, while 80% presented with a tumor depth less than 2.5 cm. Out of 80 patients, 27 had negative lymph nodes, while 53 had positive lymph nodes. Conclusion: Our study concludes that a relationship exists between the depth of the primary tumor and cervical lymphadenopathy in squamous cell carcinoma of the buccal mucosa.
Keywords: squamous cell carcinoma, tumor depth, cervical lymphadenopathy, buccal mucosa
Procedia PDF Downloads 238
594 Enhanced Weighted Centroid Localization Algorithm for Indoor Environments
Authors: I. Nižetić Kosović, T. Jagušt
Abstract:
Lately, with the increasing number of location-based applications, the demand for highly accurate and reliable indoor localization has become urgent. This is a challenging problem due to measurement variance, which is the consequence of various factors such as obstacles, equipment properties and environmental changes in the complex nature of indoor environments. In this paper we propose a low-cost, custom-setup infrastructure solution and a localization algorithm based on the Weighted Centroid Localization (WCL) method. Localization accuracy is increased by several enhancements: calibration of the RSSI values obtained from wireless nodes, repeated RSSI measurements to exclude deviating values from the position estimation, and consideration of the orientation of the device with respect to the wireless nodes. We conducted several experiments to evaluate the proposed algorithm. A high accuracy of ~1 m was achieved.
Keywords: indoor environment, received signal strength indicator, weighted centroid localization, wireless localization
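As background for the WCL method the authors enhance, a minimal sketch of weighted centroid localization is given below; the weighting rule (RSSI converted to received power) and the sample values are assumptions for illustration, not the paper's calibrated setup.

```python
import numpy as np

def weighted_centroid(anchor_positions, rssi_dbm):
    """Estimate a tag position as the RSSI-weighted centroid of anchor positions.

    anchor_positions: (N, 2) known anchor coordinates in metres.
    rssi_dbm: length-N RSSI readings from those anchors.
    """
    anchors = np.asarray(anchor_positions, dtype=float)
    # Stronger signal -> anchor assumed closer -> larger weight.
    weights = 10.0 ** (np.asarray(rssi_dbm, dtype=float) / 10.0)
    return (weights[:, None] * anchors).sum(axis=0) / weights.sum()

# Illustrative values only (not from the paper's experiments).
anchors = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0), (5.0, 5.0)]
rssi = [-52.0, -60.0, -61.0, -70.0]           # dBm, e.g. averaged over repeated measurements
print(weighted_centroid(anchors, rssi))        # estimate pulled toward the strongest anchor
```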
Procedia PDF Downloads 236
593 Node Insertion in Coalescence Hidden-Variable Fractal Interpolation Surface
Authors: Srijanani Anurag Prasad
Abstract:
The Coalescence Hidden-variable Fractal Interpolation Surface (CHFIS) was built by combining interpolation data from the Iterated Function System (IFS). The interpolation data in a CHFIS comprises a row and/or column of uncertain values when a single point is entered. Alternatively, a row and/or column of additional points are placed in the given interpolation data to demonstrate the node-added CHFIS. There are three techniques for inserting new points that correspond to the row and/or column of nodes inserted, and each method is further classified into four types based on the values of the inserted nodes. As a result, numerous forms of node insertion can be found in a CHFIS.
Keywords: fractal, interpolation, iterated function system, coalescence, node insertion, knot insertion
Procedia PDF Downloads 106
592 Study on the Efficient Routing Algorithms in Delay-Tolerant Networks
Authors: Si-Gwan Kim
Abstract:
In Delay Tolerant Networks (DTNs), an end-to-end path between source and destination may not exist at the time of message transmission. Employing the 'store-carry-and-forward' delivery mechanism for message transmission in such networks usually incurs long message delays. In this paper, we present a modified Binary Spray and Wait (BSW) routing protocol that enhances the performance of the original one. Our proposed algorithm adjusts the number of forwarded message copies depending on the number of neighbor nodes, which is tracked through periodic beacon messages. Simulation results using the ONE simulator show that our modified version gives a higher delivery ratio and lower latency compared to BSW.
Keywords: delay tolerant networks, store carry and forward, ONE simulator, binary spray and wait
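To make the copy-limiting idea concrete, the sketch below contrasts plain Binary Spray and Wait with a neighbor-aware variant in which the number of copies handed over scales with the neighbor count learned from beacons. The exact scaling rule is an assumption for illustration, not the authors' formula.

```python
def bsw_copies_to_forward(copies_held: int) -> int:
    """Plain Binary Spray and Wait: hand over half of the copies (keep at least one)."""
    if copies_held <= 1:
        return 0                      # wait phase: only direct delivery to the destination
    return copies_held // 2

def modified_copies_to_forward(copies_held: int, n_neighbors: int,
                               neighbor_ref: int = 4) -> int:
    """Neighbor-aware variant (illustrative): forward more copies in sparse
    neighborhoods and fewer in dense ones, so copies are not exhausted locally."""
    if copies_held <= 1:
        return 0
    base = copies_held // 2
    scale = neighbor_ref / max(1, n_neighbors)   # >1 when few neighbors, <1 when many
    return max(1, min(copies_held - 1, round(base * scale)))

# Example: a node holding 8 copies meets one relay.
print(bsw_copies_to_forward(8))                        # 4
print(modified_copies_to_forward(8, n_neighbors=2))    # more aggressive when sparse
print(modified_copies_to_forward(8, n_neighbors=8))    # conservative when dense
```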
Procedia PDF Downloads 127
591 Localization of Buried People Using Received Signal Strength Indication Measurement of Wireless Sensor
Authors: Feng Tao, Han Ye, Shaoyi Liao
Abstract:
City buildings collapse after an earthquake, and people are buried under the ruins. Search and rescue should be conducted as soon as possible to save them. Therefore, considering the complicated environment, irregular aftershocks and the fact that rescue allows of no delay, a target localization method based on RSSI (Received Signal Strength Indication) is proposed in this article. Target localization technology based on RSSI, with its features of low cost and low complexity, has been widely applied to node localization in WSNs (Wireless Sensor Networks). Based on the theory of RSSI transmission and the impact of the environment on RSSI, this article conducts experiments in five scenes, and multiple filtering algorithms are applied to the original RSSI values in order to establish the signal propagation model with minimum test error in each scene. The target location can then be calculated, through an improved centroid algorithm, from the distances estimated with the signal propagation model. Results show that localization technology based on RSSI is suitable for large-scale node localization. Among the filtering algorithms, the mixed filtering algorithm (average of average, median and Gaussian filtering) performs better than any single filtering algorithm, and by using the signal propagation model, the minimum error of the distance between known nodes and the target node in the five scenes is about 3.06 m.
Keywords: signal propagation model, centroid algorithm, localization, mixed filtering, RSSI
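A rough sketch of the kind of signal-propagation model and distance estimate described here is shown below, using the common log-distance path-loss form; the constants, the filter weighting and the sample readings are assumptions, not the values fitted in the five test scenes.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def filtered_rssi(samples_dbm):
    """Mixed filtering (illustrative): average of the mean, the median and a
    Gaussian-smoothed mean of the raw RSSI samples."""
    s = np.asarray(samples_dbm, dtype=float)
    gauss_mean = gaussian_filter1d(s, sigma=1.0).mean()
    return (s.mean() + np.median(s) + gauss_mean) / 3.0

def rssi_to_distance(rssi_dbm, rssi_at_d0=-40.0, d0=1.0, path_loss_exp=2.7):
    """Log-distance path-loss model: RSSI(d) = RSSI(d0) - 10 * n * log10(d / d0)."""
    return d0 * 10.0 ** ((rssi_at_d0 - rssi_dbm) / (10.0 * path_loss_exp))

def centroid_estimate(anchor_positions, distances):
    """Improved-centroid style estimate: weight known nodes by inverse distance."""
    anchors = np.asarray(anchor_positions, dtype=float)
    w = 1.0 / np.maximum(np.asarray(distances, dtype=float), 1e-6)
    return (w[:, None] * anchors).sum(axis=0) / w.sum()

# Illustrative readings from three known nodes (not the paper's data).
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
raw = [[-55, -57, -54], [-63, -66, -64], [-62, -61, -65]]
dists = [rssi_to_distance(filtered_rssi(s)) for s in raw]
print(dists, centroid_estimate(anchors, dists))
```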
Procedia PDF Downloads 306
590 Capacitated Multiple Allocation P-Hub Median Problem on a Cluster Based Network under Congestion
Authors: Çağrı Özgün Kibiroğlu, Zeynep Turgut
Abstract:
This paper considers a hub location problem where the network service area is partitioned into predetermined zones (represented by given node clusters) and the capacity levels of potential hub nodes are determined a priori as a hub selection criterion, in order to investigate the effect of congestion on the network. The objective is to design the hub network by determining all required hub locations in the node clusters and also allocating non-hub nodes to hubs such that the total cost, including transportation cost, the opening cost of hubs and the penalty cost for exceeding capacity levels at hubs, is minimized. A mixed integer linear programming model is developed by introducing additional constraints into the traditional capacitated multiple allocation hub location model, and it is tested empirically.
Keywords: hub location problem, p-hub median problem, clustering, congestion
Procedia PDF Downloads 496
589 Simulation Approach for a Comparison of Linked Cluster Algorithm and Clusterhead Size Algorithm in Ad Hoc Networks
Authors: Ameen Jameel Alawneh
Abstract:
A mobile ad-hoc network (MANET) is a collection of wireless mobile hosts that dynamically form a temporary network without the aid of a system administrator. It has neither fixed infrastructure nor wireless ad hoc sessions. It inherently reaches several nodes with a single transmission, and each node functions as both a host and a router. The network may be represented as a set of clusters, each managed by a clusterhead. The cluster size is not fixed, and it depends on the movement of nodes. We propose a clusterhead size algorithm (CHSize). This clustering algorithm can be used by several routing algorithms for ad hoc networks. An elected clusterhead is assigned for communication with all other clusters. Analysis and simulation of the algorithm have been implemented using the GloMoSim network simulator, MATLAB and MAPL11, and the results show that the proposed algorithm achieves its goals.
Keywords: simulation, MANET, Ad-hoc, cluster head size, linked cluster algorithm, loss and dropped packets
Procedia PDF Downloads 397
588 Sync Consensus Algorithm: Trying to Reach an Agreement at Full Speed
Authors: Yuri Zinchenko
Abstract:
Recently, distributed storage systems have been used more and more in various aspects of everyday life. They provide such necessary properties as scalability, fault tolerance, durability, and others. At the same time, not only reliable but also fast data storage remains one of the most pressing issues in this area. That brings us to the consensus algorithm as one of the most important components, which has a great impact on the functionality of a distributed system. This paper is the result of an analysis of several well-known consensus algorithms, such as Paxos and Raft. The algorithm it offers, called Sync, promotes, but does not insist on, simultaneous writing to the nodes (which positively affects the overall writing speed) and tries to minimize the system's inactive time. This allows nodes to reach agreement on the system state in a shorter period, which is a critical factor for distributed systems. When developing Sync, a lot of attention was also paid to such criteria as simplicity and intuitiveness, the importance of which is difficult to overestimate.
Keywords: sync, consensus algorithm, distributed system, leader-based, synchronization
Procedia PDF Downloads 67
587 Facilitators and Barriers of Family Resilience in Cancer Patients Based on the Theoretical Domains Framework: An Integrative Review
Authors: Jiang Yuqi
Abstract:
Aims: The aim is to analyze the facilitators and barriers of family resilience in cancer patients based on the Theoretical Domains Framework, provide a basis for intervention in the family resilience of cancer patients, and identify the progress and lessons of existing intervention projects. Methods: NVivo software was used to code the influencing factors using the framework of 14 theoretical domains as primary nodes; secondary nodes were then refined using thematic analysis, and specific influencing factors were aggregated and analyzed for evaluator reliability. Data sources: PubMed, Embase, CINAHL, Web of Science, Cochrane Library, MEDLINE, CNKI, and Wanfang (search dates: from inception to November 2023). Results: A total of 35 papers were included, with 142 coding points across 14 theoretical domains and 38 secondary nodes. The three most relevant theoretical domains are social influences (norms), environment and resources, and emotions (mood). The factors with the greatest impact were family support, mood, confidence and beliefs, external support, quality of life, economic circumstances, family adaptation, coping styles with illness, and management. Conclusion: The factors influencing family resilience in cancer patients cover most of the theoretical domains in the Theoretical Domains Framework and are cross-cutting, multi-sourced, and complex. Further in-depth exploration of the key factors influencing family resilience is necessary to provide a basis for intervention research.
Keywords: cancer, survivors, family resilience, theoretical domains framework, literature review
Procedia PDF Downloads 51
586 Clustering Based and Centralized Routing Table Topology of Control Protocol in Mobile Wireless Sensor Networks
Authors: Mbida Mohamed, Ezzati Abdellah
Abstract:
A strong challenge in wireless sensor networks (WSNs) is to save energy and achieve a long network lifetime without a high rate of information loss. Topology control (TC) protocols are designed so that the network is partitioned and a standard system of packet exchange between nodes is established. In this article, we propose a clustering-based and centralized routing table TC protocol (CBCRT), which delegates a leader node that encapsulates a single routing table for the nodes of every cluster. Hence, if a node wants to send packets to the sink, it requests the routing table information of the current cluster from the leader node in order to route the packet.
Keywords: mobile wireless sensor networks, routing, topology of control, protocols
Procedia PDF Downloads 277
585 The Effects of Prolonged Use of Caffeine on Thyroid and Adrenal Glands – A Retrospective Cohort Study
Authors: Vasishtha Avadhani Upadrasta, Mradul Kumar Daga, Smita Kaushik
Abstract:
Background: Caffeine consumption has skyrocketed in recent decades as we try to match pace with machines. Studies have been conducted on animals and a few on humans, mainly on the acute effects of high-dose caffeine intake; almost none have examined the chronic effects of caffeine consumption. This study involved medical professionals as case subjects who consumed caffeine daily. Methods: This study, over a period of 3 months, involved 96 volunteers (chosen randomly with respect to gender and field within the medical fraternity), ranging from people who drank >500 mg of caffeine a day to people who consumed none. People with any co-morbidities were excluded. Two sets of blood samples were drawn and assessed. Three groups were created: Group 1 (>200 mg caffeine/day), Group 2 (15-200 mg caffeine/day) and Group 3 (<200 mg caffeine/day). Results: The study found that exposure to caffeine at doses >200 mg/day for more than 6 months leads to a significant difference in circulating free T3 [(-0.96 pmol/L ± 0.07) = (-18.5%), CI 95%, p = .000024] and cortisol [(-123 nmol/L ± 9.8) = (-46.8%), CI 95%, p = .00029] hormones but shows an insignificant effect on circulating TSH [0.4 mIU/L, CI 95%, p = .37] and ACTH [(-3.2 pg/ml ± 0.3), CI 95%, p = .53] hormones, which stay within normal physiological ranges irrespective of the daily dose of consumption. The results also highlight that women are more susceptible to decrements in fT3 than men (relative risk = 1.58, ANOVA F-statistic = 7.15, p = 0.0105). Conclusions: Caffeine consumption in excess of 200 mg/day, for 6 months or more, causes significant derangement in basal fT3 and cortisol hormone levels without affecting the TSH and ACTH (regulatory) hormone levels, indicating disturbance of action at the peripheral and/or cellular levels, possibly via paraventricular nucleus-leptin-CAR-adenosine interactions. Women are more susceptible to decrements in fT3 levels than men (at the same dose of caffeine).
Keywords: ACTH, adrenals, caffeine, cortisol, thyroid, thyroxin, TSH
Procedia PDF Downloads 80
584 Optimizing the Location of Parking Areas Adapted for Dangerous Goods in the European Road Transport Network
Authors: María Dolores Caro, Eugenio M. Fedriani, Ángel F. Tenorio
Abstract:
The transportation of dangerous goods by lorries throughout Europe must be done using the roads that make up the European Road Transport Network. In this network, there are several parking areas where lorry drivers can park to rest according to the regulations. According to the "European Agreement concerning the International Carriage of Dangerous Goods by Road", parking areas where lorries transporting dangerous goods can park to rest must follow several security stipulations to keep the rest of the road users safe. In this respect, these lorries must be parked in adapted areas with strict and permanent surveillance measures, and drivers must satisfy several restrictions on resting and driving time. Under these facts, one may expect that there exist enough parking areas for the transport of this type of goods in order to obey the regulations prescribed by the European Union and its member countries. However, the already-existing parking areas are not sufficient to cover all the stops required by drivers transporting dangerous goods. Our main goal is, starting from the already-existing parking areas and the loading-and-unloading locations, to provide an optimal answer to the following question: how many additional parking areas must be built, and where must they be located, to assure that lorry drivers can transport dangerous goods following all the stipulations about security and safety for their stops? The sense of the word "optimal" is due to the fact that we give a global solution for the location of parking areas throughout the whole European Road Transport Network, adjusting the number of additional areas to be as low as possible. To do so, we have modeled the problem using graph theory, since we are working with a road network. As nodes, we have considered the locations of each already-existing parking area, each loading-and-unloading area and each road bifurcation. Each road connecting two nodes is considered as an edge in the graph, whose weight corresponds to the distance between both nodes of the edge. By applying a new efficient algorithm, we have found the additional nodes for the network representing the new parking areas adapted for dangerous goods, under the constraint that the distance between two parking areas must be less than or equal to 400 km.
Keywords: trans-European transport network, dangerous goods, parking areas, graph-based modeling
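The graph model lends itself to a simple check along any given route: the sketch below walks a route's edge distances and greedily flags the nodes where an additional adapted parking area would be needed to keep successive stops within 400 km. It is only an illustration of the distance constraint under assumed inputs, not the authors' optimization algorithm.

```python
def missing_parking_points(route_edges_km, has_parking, max_gap_km=400.0):
    """route_edges_km[i]: distance of the edge between route node i and i+1.
    has_parking[i]: True if route node i already offers an adapted parking area
    (the first node is the loading point and counts as a valid stop).
    Returns indices of route nodes where an additional parking area is suggested."""
    suggested = []
    since_last_stop = 0.0
    for i, edge in enumerate(route_edges_km):
        if since_last_stop + edge > max_gap_km:
            suggested.append(i)          # add a parking area at node i before continuing
            since_last_stop = 0.0
        since_last_stop += edge
        if has_parking[i + 1]:
            since_last_stop = 0.0
    return suggested

# Illustrative route with 6 nodes and 5 edges (distances in km, made up).
edges = [180.0, 250.0, 120.0, 300.0, 150.0]
parking = [True, False, False, True, False, False]
print(missing_parking_points(edges, parking))   # -> nodes where the 400 km rule would break
```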
Procedia PDF Downloads 283
583 Optimal Sizing and Placement of Distributed Generators for Profit Maximization Using Firefly Algorithm
Authors: Engy Adel Mohamed, Yasser Gamal-Eldin Hegazy
Abstract:
This paper presents a firefly-based algorithm for the optimal sizing and allocation of distributed generators for profit maximization. The distributed generators in the proposed algorithm are of photovoltaic and combined heat and power technologies. Combined heat and power distributed generators are modeled as voltage-controlled nodes, while photovoltaic distributed generators are modeled as constant-power nodes. The proposed algorithm is implemented in the MATLAB environment and tested on the unbalanced IEEE 37-node feeder. The results show the effectiveness of the proposed algorithm in the optimal selection of distributed generator size and site in order to maximize the total system profit.
Keywords: distributed generators, firefly algorithm, IEEE 37-node feeder, profit maximization
Procedia PDF Downloads 446
582 Saliency Detection Using a Background Probability Model
Authors: Junling Li, Fang Meng, Yichun Zhang
Abstract:
Image saliency detection has long been studied, but several challenging problems remain unsolved, such as detecting saliency inaccurately in complex scenes or suppressing salient objects at the image borders. In this paper, we propose a new saliency detection algorithm to address these problems. We represent the image as a graph with superpixels as nodes. By considering the appearance similarity between the boundary and the background, the proposed method chooses non-salient boundary nodes as background priors to construct the background probability model. The probability that each node belongs to the model is computed, which measures its similarity with the background. Thus we can calculate saliency using the transformed probability as a metric. We compare our algorithm with ten state-of-the-art saliency detection methods on a public database. Experimental results show that our simple and effective approach can tackle those challenging problems that have been baffling image saliency detection.
Keywords: visual saliency, background probability, boundary knowledge, background priors
Procedia PDF Downloads 432
581 Using an Empathy Intervention Model to Enhance Empathy and Socially Shared Regulation in Youth with Autism Spectrum Disorder
Authors: Yu-Chi Chou
Abstract:
The purpose of this study was to establish a logical path for an instructional model of empathy and social regulation, providing feasibility evidence on the model's implementation in students with autism spectrum disorder (ASD). The newly developed Emotional Bug-Out Bag (BoB) curriculum was designed to enhance the empathy and socially shared regulation of students with ASD. The BoB model encompasses three instructional phases of basic theory lessons (BTL), action plan practices (APP), and final theory practices (FTP) during implementation. In addition, a learning flow (teacher-directed instruction, student self-directed problem-solving, group-based task completion, group-based reflection) was infused into the progression of the instructional phases to deliberately promote the social regulatory process in group-working activities. A total of 23 junior high school students with ASD received the BoB curriculum. To examine the logical path of the model implementation, data were collected from the participating students' self-reported scores on the learning nodes and understanding questions. Path analysis using structural equation modeling (SEM) was utilized to analyze scores on 10 learning nodes and 41 understanding questions through the three phases of the BoB model. Results showed that (a) all participants progressed throughout the implementation of the BoB model, and (b) the models of learning nodes and phases were positive and significant as expected, confirming the hypothesized logical path of this curriculum.
Keywords: autism spectrum disorder, empathy, regulation, socially shared regulation
Procedia PDF Downloads 69
580 Energy Efficient Routing Protocol with Ad Hoc On-Demand Distance Vector for MANET
Authors: K. Thamizhmaran, Akshaya Devi Arivazhagan, M. Anitha
Abstract:
The most important systematic issue that must be solved when implementing a data transmission algorithm in mobile ad-hoc networks (MANETs) is how to save mobile nodes' energy while meeting the requirements of applications or users, since mobile nodes are battery limited. While satisfying the energy-saving requirement, it is also necessary to achieve quality of service; in emergency work, for instance, data must be delivered on time. To achieve this requirement, we implement an Energy-Aware routing protocol for mobile ad-hoc networks, which saves energy at every node by efficiently selecting an energy-efficient path in the routing process by means of an Enhanced AODV (EAODV) routing protocol.
Keywords: Ad-Hoc networks, MANET, routing, AODV, EAODV
Procedia PDF Downloads 373
579 Ensuring Uniform Energy Consumption in Non-Deterministic Wireless Sensor Network to Protract Networks Lifetime
Authors: Vrince Vimal, Madhav J. Nigam
Abstract:
Wireless sensor networks have enticed much of the spotlight from researchers all around the world, owing to their extensive applicability in agricultural, industrial and military fields. Energy-conserving node deployment strategies play a notable role in the effective implementation of wireless sensor networks. Clustering is an approach in wireless sensor networks that improves energy efficiency in the network. A clustering algorithm needs an optimum size and number of clusters, as clustering, if not implemented properly, cannot effectively increase the life of the network. In this paper, an algorithm has been proposed to address connectivity issues with the aim of ensuring uniform energy consumption of nodes in every part of the network. The results obtained after simulation showed that the proposed algorithm has an edge over existing algorithms in terms of throughput and network lifetime.
Keywords: wireless sensor network (WSN), random deployment, clustering, isolated nodes, networks lifetime
Procedia PDF Downloads 340
578 Magnetic Navigation in Underwater Networks
Authors: Kumar Divyendra
Abstract:
Underwater Sensor Networks (UWSNs) have wide applications in areas such as water quality monitoring, marine wildlife management, etc. A typical UWSN system consists of a set of sensors deployed randomly underwater which communicate with each other using acoustic links. RF communication does not work underwater, and GPS is not available underwater either. Additionally, Automated Underwater Vehicles (AUVs) are deployed to collect data from some special nodes called Cluster Heads (CHs). These CHs aggregate data from their neighboring nodes and forward them to the AUVs using optical links when an AUV is in range. This helps reduce the number of hops covered by data packets and helps conserve energy. We consider the three-dimensional model of the UWSN. Nodes are initially deployed randomly underwater. They attach themselves to the surface using a rod and can only move upwards or downwards using a pump and bladder mechanism. We use graph theory concepts to maximize the coverage volume while every node maintains connectivity with at least one surface node. We treat the surface nodes as landmarks, and each node finds its hop distance from every surface node. We treat these hop distances as coordinates and use them for AUV navigation. An AUV intending to move closer to a node with given coordinates moves hop by hop through nodes that are closest to it in terms of these coordinates. In the absence of GPS, multiple different approaches such as Inertial Navigation Systems (INS), Doppler Velocity Logs (DVL), computer vision-based navigation, etc., have been proposed. These systems have their own drawbacks: INS accumulates error with time, and vision techniques require prior information about the environment. We propose a method that makes use of the earth's magnetic field values for navigation and combines it with other methods that simultaneously increase the coverage volume of the UWSN. The AUVs are fitted with magnetometers that measure the magnetic intensity (I), horizontal inclination (H), and declination (D). The International Geomagnetic Reference Field (IGRF) is a mathematical model of the earth's magnetic field, which provides the field values for geographical coordinates on earth. Researchers have developed an inverse deep learning model that takes the magnetic field values and predicts the location coordinates, and we make use of this model within our work. We combine this with the hop-by-hop movement described earlier so that the AUVs move in a sequence that trains the deep learning predictor as quickly and precisely as possible. We run simulations in MATLAB to prove the effectiveness of our model with respect to other methods described in the literature.
Keywords: clustering, deep learning, network backbone, parallel computing
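The hop-coordinate idea can be sketched as follows: each node's "coordinates" are its hop distances to the surface (landmark) nodes, and an AUV greedily moves to whichever neighbor is closest, in that coordinate space, to the target. The toy topology and names below are assumptions for illustration only.

```python
import networkx as nx

def hop_coordinates(graph, landmark_nodes):
    """Coordinate of a node = tuple of hop distances to every surface landmark."""
    per_landmark = [dict(nx.single_source_shortest_path_length(graph, lm))
                    for lm in landmark_nodes]
    return {n: tuple(d[n] for d in per_landmark) for n in graph.nodes}

def next_hop_towards(graph, coords, current, target_coord):
    """Greedy step: move to the neighbor whose coordinates are closest to the target's."""
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(c, target_coord))
    return min(graph.neighbors(current), key=lambda n: dist(coords[n]))

# Toy underwater topology (acoustic links); S1 and S2 are surface nodes.
G = nx.Graph([("S1", "A"), ("S2", "B"), ("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")])
coords = hop_coordinates(G, ["S1", "S2"])
print(coords)
print(next_hop_towards(G, coords, current="S1", target_coord=coords["D"]))
```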
Procedia PDF Downloads 100
577 Scheduling Algorithm Based on Load-Aware Queue Partitioning in Heterogeneous Multi-Core Systems
Authors: Hong Kai, Zhong Jun Jie, Chen Lin Qi, Wang Chen Guang
Abstract:
Current scheduling algorithms suffer from inefficient global scheduling parallelism and from local scheduling parallelism that is prone to processor starvation. To address this issue, this paper proposes a load-aware queue partitioning scheduling strategy: queues are first allocated according to the number of processor cores, a load factor is calculated to specify the load queue capacity, and the awaiting nodes are assigned to the appropriate perceptual queues based on their precursor nodes and the communication and computation overhead. At the same time, real-time computation of the load factor can effectively prevent a processor from being starved for a long time. Experimental comparison with two classical algorithms shows a certain improvement in both performance metrics of scheduling length and task speedup ratio.
Keywords: load-aware, scheduling algorithm, perceptual queue, heterogeneous multi-core
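A minimal sketch of the load-aware assignment idea is given below: one queue per core, a load factor derived from the total estimated work, and each ready task placed in the least-loaded queue that still has capacity. The capacity rule, headroom value and cost fields are assumptions, not the paper's exact formulation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Task:
    name: str
    compute_cost: float          # estimated computation time
    comm_cost: float = 0.0       # communication overhead with precursor nodes

@dataclass
class CoreQueue:
    capacity: float
    tasks: List[Task] = field(default_factory=list)
    load: float = 0.0

def partition(tasks, n_cores, headroom=1.2):
    total = sum(t.compute_cost + t.comm_cost for t in tasks)
    load_factor = total / n_cores                          # ideal per-core share
    queues = [CoreQueue(capacity=headroom * load_factor) for _ in range(n_cores)]
    for t in sorted(tasks, key=lambda t: t.compute_cost + t.comm_cost, reverse=True):
        cost = t.compute_cost + t.comm_cost
        # Prefer the least-loaded queue with spare capacity; fall back to least loaded.
        candidates = [q for q in queues if q.load + cost <= q.capacity] or queues
        q = min(candidates, key=lambda q: q.load)
        q.tasks.append(t)
        q.load += cost
    return queues

queues = partition([Task("a", 4), Task("b", 3, 1), Task("c", 2), Task("d", 2, 2)], n_cores=2)
print([(q.load, [t.name for t in q.tasks]) for q in queues])
```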
Procedia PDF Downloads 149
576 Quality of Service of Transportation Networks: A Hybrid Measurement of Travel Time and Reliability
Authors: Chin-Chia Jane
Abstract:
In a transportation network, travel time refers to the transmission time from a source node to a destination node, whereas reliability refers to the probability of a successful connection from source to destination. With an increasing emphasis on quality of service (QoS), both performance indexes are significant in the design and analysis of transportation systems. In this work, we extend the well-known flow network model for transportation networks so that travel time and reliability are integrated into the QoS measurement simultaneously. In the extended model, in addition to the general arc capacities, each intermediate node has a time weight, which is the travel time per unit of commodity going through the node. Meanwhile, arcs and nodes are treated as binary random variables that switch between operation and failure with associated probabilities. For a pre-specified travel time limitation and demand requirement, the QoS of a transportation network is the probability that the source can successfully transport the required demand to the destination while the total transmission time is under the travel time limitation. This work is pioneering: whereas existing literature evaluates travel time reliability via a single optimal path, the proposed QoS focuses on the performance of the whole network system. To compute the QoS of transportation networks, we first transform the extended network model into an equivalent min-cost max-flow network model. In the transformed network, each original arc is given a new travel time weight of 0. Each intermediate node is replaced by two nodes u and v, and an arc directed from u to v. The newly generated nodes u and v are perfect nodes, and the new direct arc has three weights: travel time, capacity, and operation probability. Then the universal set of state vectors is recursively decomposed into disjoint subsets of reliable, unreliable, and stochastic vectors until no stochastic vector is left. The decomposition is made possible by applying an existing efficient min-cost max-flow algorithm. Because the reliable subsets are disjoint, the QoS can be obtained directly by summing the probabilities of these reliable subsets. Computational experiments are conducted on a benchmark network with 11 nodes and 21 arcs. Five travel time limitations and five demand requirements are set to compute the QoS value. As a comparison, we test an exhaustive complete enumeration method. Computational results reveal that the proposed algorithm is much more efficient than the complete enumeration method. In summary, a transportation network is analyzed by an extended flow network model where each arc has a fixed capacity, each intermediate node has a time weight, and both arcs and nodes are independent binary random variables. The quality of service of the transportation network is an integration of customer demands, travel time, and the probability of connection. We present a decomposition algorithm to compute the QoS efficiently, and computational experiments conducted on a prototype network show that the proposed algorithm is superior to existing complete enumeration methods.
Keywords: quality of service, reliability, transportation network, travel time
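The node-splitting step described here is standard and easy to illustrate: each intermediate node with a time weight is replaced by an "in" and an "out" copy joined by a single arc that carries the node's travel time, capacity, and operation probability, while original arcs get travel time 0. The sketch below (using networkx) shows that transformation only; the attribute names are assumptions.

```python
import networkx as nx

def split_intermediate_nodes(g, source, sink):
    """Return a new digraph where every intermediate node n becomes n_in -> n_out.

    Node attributes assumed on g: 'time' (travel time per unit of commodity),
    'cap' (capacity), 'prob' (operation probability). Original arcs keep their
    capacity and probability and receive travel time 0."""
    h = nx.DiGraph()
    def node_in(n):  return n if n in (source, sink) else f"{n}_in"
    def node_out(n): return n if n in (source, sink) else f"{n}_out"

    for n, attrs in g.nodes(data=True):
        if n not in (source, sink):
            h.add_edge(f"{n}_in", f"{n}_out",
                       time=attrs.get("time", 0.0),
                       cap=attrs.get("cap", float("inf")),
                       prob=attrs.get("prob", 1.0))
    for u, v, attrs in g.edges(data=True):
        h.add_edge(node_out(u), node_in(v), time=0.0,
                   cap=attrs.get("cap", float("inf")),
                   prob=attrs.get("prob", 1.0))
    return h

g = nx.DiGraph()
g.add_node("m", time=2.0, cap=5, prob=0.95)    # intermediate node with a time weight
g.add_edge("s", "m", cap=4, prob=0.9)
g.add_edge("m", "t", cap=6, prob=0.9)
print(list(split_intermediate_nodes(g, "s", "t").edges(data=True)))
```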
Procedia PDF Downloads 223
575 Comparative Performance of Standing Whole Body Monitor and Shielded Chair Counter for In-vivo Measurements
Authors: M. Manohari, S. Priyadharshini, K. Bajeer Sulthan, R. Santhanam, S. Chandrasekaran, B. Venkatraman
Abstract:
The in-vivo monitoring facility at the Indira Gandhi Centre for Atomic Research (IGCAR), Kalpakkam, caters to the monitoring of the internal exposure of occupational radiation workers from various radioactive facilities of IGCAR. Internal exposure measurement is done using NaI(Tl)-based scintillation detectors. Two types of whole-body counters, namely a Shielded Chair Counter (SC) and a Standing Whole-Body Monitor (SWBM), are being used. The Shielded Chair is based on a NaI detector 20.3 cm in diameter and 10.15 cm thick. The chair of the system is shielded using lead shots of 10 cm lead equivalent and the detector with 8 cm lead bricks. The counting geometry is sitting geometry. Calibration is done using a 95th-percentile BOMAB phantom. The Minimum Detectable Activity (MDA) for 137Cs for a 60 s count is 1150 Bq. The Standing Whole-Body Monitor (SWBM) has two NaI(Tl) detectors of size 10.16 x 10.16 x 40.64 cm3 positioned serially, one over the other. It has a shielding thickness of 5 cm lead equivalent. Counting is done in stand-up geometry. Calibration is done with the help of an Ortec phantom having a uniform distribution of mixed radionuclides for the thyroid, thorax and pelvis. The efficiency of the SWBM is 2.4 to 3.5 times higher than that of the Shielded Chair in the energy range of 279 to 1332 keV. An MDA of 250 Bq for 137Cs can be achieved with a counting time of 60 s. The MDA for 131I in the thyroid was estimated as 100 Bq from the whole-body MDA for one day post intake. The Standing Whole-Body Monitor is better in terms of efficiency, MDA and ease of positioning. In emergency situations, the optimal MDAs for an in-vivo monitoring service are 1000 Bq for 137Cs and 100 Bq for 131I. Hence, the SWBM is more suitable for the rapid screening of workers as well as the public in the case of an emergency. When a person reports for counting, there is a potential for external contamination. In the SWBM, it is feasible to discriminate this, as the subject can be counted in anterior or posterior geometry, which is not possible in the SC.
Keywords: minimum detectable activity, shielded chair, shielding thickness, standing whole body monitor
Procedia PDF Downloads 48
574 Multi-Sender MAC Protocol Based on Temporal Reuse in Underwater Acoustic Networks
Authors: Dongwon Lee, Sunmyeng Kim
Abstract:
Underwater acoustic networks (UANs) have become a very active research area in recent years. Compared with wireless networks, UANs are characterized by limited bandwidth, long propagation delays and high channel dynamics in acoustic modems, which pose challenges to the design of medium access control (MAC) protocols. These characteristics severely affect network performance. In this paper, we study an MS-MAC (Multi-Sender MAC) protocol in order to improve network performance. The proposed protocol exploits temporal reuse by learning the propagation delays to neighboring nodes. A source node locally calculates the transmission schedules of its neighboring nodes and itself based on the propagation delays to avoid collisions. Performance evaluation is conducted using simulation and confirms that the proposed protocol significantly outperforms the previous protocol in terms of throughput.
Keywords: acoustic channel, MAC, temporal reuse, UAN
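The temporal-reuse idea can be illustrated with a small calculation: knowing each neighbor's propagation delay, a receiver-centric schedule chooses send times so that packets arrive back-to-back without colliding. The sketch below is an illustration of that calculation under assumed values, not the MS-MAC specification.

```python
def schedule_send_times(prop_delays, packet_duration, guard=0.05, start=0.0):
    """Return {sender: send_time} so packets arrive sequentially at one receiver.

    prop_delays: {sender: one-way propagation delay in seconds}.
    Senders are ordered so the farthest node (largest delay) can start earliest.
    """
    send_times = {}
    arrival = start
    for sender, delay in sorted(prop_delays.items(), key=lambda kv: -kv[1]):
        send_times[sender] = max(0.0, arrival - delay)    # aim the arrival at `arrival`
        arrival = send_times[sender] + delay + packet_duration + guard
    return send_times

# Illustrative acoustic propagation delays of three senders (seconds).
delays = {"A": 0.8, "B": 0.5, "C": 0.2}
print(schedule_send_times(delays, packet_duration=0.3))
# Each packet arrives at send_time + delay, separated by the packet and guard times.
```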
Procedia PDF Downloads 354
573 Coal Mining Safety Monitoring Using WSN
Authors: Somdatta Saha
Abstract:
The main purpose was to provide an implementable design scenario for underground coal mines using wireless sensor networks (WSNs), the main reason being that, given the intricacies of the physical structure of a coal mine, only low-power WSN nodes can produce accurate surveillance and accident-detection data. The work mainly concentrated on designing and simulating various alternative scenarios for a typical mine and comparing them based on the obtained results to arrive at a final design. In the era of embedded technology, Zigbee protocols are used in more and more applications. Because of the rapid development of sensors, microcontrollers, and network technology, a reliable technological basis has been provided for automatic real-time monitoring of coal mines. The underground system collects the temperature, humidity and methane values of the coal mine through sensor nodes in the mine; it also counts the personnel inside the mine with the help of an IR sensor, and then transmits the data to an ARM-based information processing terminal.
Keywords: ARM, embedded board, wireless sensor network (Zigbee)
Procedia PDF Downloads 342
572 Error Correction Method for 2D Ultra-Wideband Indoor Wireless Positioning System Using Logarithmic Error Model
Authors: Phornpat Chewasoonthorn, Surat Kwanmuang
Abstract:
Indoor positioning technologies have evolved rapidly. They augment the Global Positioning System (GPS), which requires line-of-sight to the sky, in tracking the location of people or objects. This study developed an error correction method for an indoor real-time location system (RTLS) based on an ultra-wideband (UWB) sensor from Decawave. Multiple stationary nodes (anchors) were installed throughout the workspace. The distance between stationary and moving nodes (tags) can be measured using a two-way ranging (TWR) scheme. The results showed that the uncorrected ranging error from the sensor system can be as large as 1 m. To reduce ranging error and thus increase positioning accuracy, this study proposes an online correction algorithm using the Kalman filter. The results from experiments have shown that the system can reduce the ranging error down to 5 cm.
Keywords: indoor positioning, ultra-wideband, error correction, Kalman filter
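As an illustration of the correction step, a minimal scalar Kalman filter over successive TWR range measurements is sketched below; the noise parameters and the sample ranges are placeholders, not the values identified in this study or its logarithmic error model.

```python
class ScalarKalman:
    """1-D Kalman filter for a slowly varying range measurement (illustrative)."""
    def __init__(self, q=1e-4, r=4e-3, x0=0.0, p0=1.0):
        self.q, self.r = q, r      # process and measurement noise variances (assumed)
        self.x, self.p = x0, p0    # range estimate (metres) and its variance

    def update(self, z):
        # Predict: constant-range model, so only the uncertainty grows.
        self.p += self.q
        # Correct with the new ranging measurement z.
        k = self.p / (self.p + self.r)        # Kalman gain
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
        return self.x

kf = ScalarKalman(x0=3.0)
raw_ranges = [3.12, 2.95, 3.40, 3.05, 2.88, 3.10]   # noisy TWR ranges in metres (made up)
print([round(kf.update(z), 3) for z in raw_ranges])
```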
Procedia PDF Downloads 164
571 Impacts on Marine Ecosystems Using a Multilayer Network Approach
Authors: Nelson F. F. Ebecken, Gilberto C. Pereira, Lucio P. de Andrade
Abstract:
Bays, estuaries and coastal ecosystems are some of the most used and threatened natural systems globally. Their deterioration is due to intense and increasing human activities. This paper aims to monitor a socio-ecological system in Brazil and to model and simulate it through a multilayer network representing a DPSIR structure (Drivers, Pressures, States, Impacts, Responses), considering the concept of ecosystem-based management to support decision-making under the National/State/Municipal Coastal Management policy. This approach considers several interferences and can represent a significant advance in several scientific aspects. The main objective of this paper is the coupling of three different types of complex networks, the first being an ecological network, the second a social network, and the third a network of economic activities, in order to model the marine ecosystem. Multilayer networks comprise two or more "layers", which may represent different types of interactions, different communities, different points in time, and so on. The dependency between layers results from processes that affect the various layers; for example, the dispersion of individuals between two patches affects the network structure of both. A multilayer network consists of (i) a set of physical nodes representing entities (e.g., species, people, companies); (ii) a set of layers, which may include multiple layering aspects (e.g., time dependency and multiple types of relationships); (iii) a set of state nodes, each of which corresponds to the manifestation of a given physical node in a specific layer; and (iv) a set of edges (weighted or not) connecting the state nodes among themselves. The edge set includes the familiar intralayer edges and interlayer ones, which connect state nodes across layers. The methodology applied in an existing case uses flow cytometry and the modeling of ecological relationships (trophic and non-trophic) following fuzzy theory concepts and graph visualization. The identification of subnetworks in the fuzzy graphs is carried out using a specific computational method. This methodology allows the influence of different factors to be considered and helps assess their contributions to the decision-making process.
Keywords: marine ecosystems, complex systems, multilayer network, ecosystems management
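To make the coupling of the three networks concrete, the sketch below stores an ecological, a social and an economic layer as separate graphs plus a list of weighted interlayer edges between state nodes. It is a minimal data structure under assumed names and example entities, not the authors' model.

```python
import networkx as nx

class MultilayerNetwork:
    """Minimal multilayer container: one graph per layer plus interlayer couplings."""
    def __init__(self, layer_names):
        self.layers = {name: nx.Graph() for name in layer_names}
        self.interlayer = []            # ((layer_a, node_a), (layer_b, node_b), weight)

    def add_intra_edge(self, layer, u, v, weight=1.0):
        self.layers[layer].add_edge(u, v, weight=weight)

    def add_inter_edge(self, layer_a, u, layer_b, v, weight=1.0):
        self.interlayer.append(((layer_a, u), (layer_b, v), weight))

# Hypothetical example coupling the three layers named in the abstract.
net = MultilayerNetwork(["ecological", "social", "economic"])
net.add_intra_edge("ecological", "phytoplankton", "zooplankton", weight=0.8)  # trophic link
net.add_intra_edge("social", "fishers", "regulator", weight=0.5)
net.add_intra_edge("economic", "fishery", "tourism", weight=0.3)
net.add_inter_edge("ecological", "zooplankton", "economic", "fishery", weight=0.6)
net.add_inter_edge("social", "fishers", "economic", "fishery", weight=0.9)
print(len(net.layers), len(net.interlayer))
```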
Procedia PDF Downloads 117
570 Extending ACOSOG Z0011 to Encompass Mastectomy Patients: A Retrospective Review
Authors: Ruqayya Naheed Khan, Awais Amjad Malik, Awais Naeem, Amina Khan, Asad Parvaiz
Abstract:
Introduction: Axillary nodal status in breast cancer patients is a paramount prognosticator, next to primary tumor size and grade. It has been well established that patients with a negative sentinel lymph node biopsy can safely avoid axillary lymph node dissection, whereas a positive sentinel lymph node has traditionally required subsequent axillary dissection. According to the ACOSOG Z0011 trial, patients who underwent axillary dissection for 3 or more positive sentinel nodes or who opted for observation in the case of a negative sentinel lymph node showed no difference in Overall Survival (OS) and Disease-Free Survival (DFS). The Z0011 trial included patients who underwent breast-conserving surgery and excluded patients with mastectomies. The purpose of this study is to determine whether Z0011 can also be applied to mastectomy patients with 1-3 positive sentinel lymph nodes, avoiding unnecessary ALND. Methods: A retrospective review was conducted at Shaukat Khanam Memorial Cancer Hospital, Pakistan, from Jan 2015 to Dec 2017, including patients who were treated for invasive breast cancer and required upfront mastectomy. They were clinically node negative, so sentinel lymph node biopsy was performed, and patients with a positive sentinel lymph node underwent ALND. A total of 156 breast cancer patients with mastectomies were reviewed. Results: 95% of the patients were female, while 3% were male. The average age was 44 years. There was no difference in race, comorbidities, histology, T stage, N stage, overall stage, use of adjuvant chemotherapy and radiation therapy. 64 patients underwent ALND for a positive lymph node, while 92 patients were spared axillary dissection due to a negative sentinel lymph node biopsy. Out of the 64 patients, 38 (59%) had only one positive lymph node, which was the sentinel node; 18 patients (28%) had two positive lymph nodes, including the sentinel node; while only 8 patients (13%) had three or more positive nodes. Conclusion: Keeping in mind the complications related to ALND, the above results clearly show that ALND could have been avoided in 87% of patients in the setting of adjuvant radiation, possibly avoiding the morbidity associated with axillary lymphadenectomy, although a prospective randomized trial needs to confirm these results.
Keywords: mastectomy, sentinel lymph node biopsy, axillary lymph node dissection, breast cancer
Procedia PDF Downloads 198