Search results for: fuzzy neural network
2793 Social Structure of Corporate Social Responsibility Programme in Pantai Harapan Jaya Village, Bekasi Regency, West Java
Authors: Auliya Adzilatin Uzhma, Ismu Rini Dwi, I. Nyoman Suluh Wijaya
Abstract:
The Corporate Social Responsibility (CSR) programme in Pantai Harapan Jaya village consists of mangrove cultivation and the distribution of fishery capital; to achieve its goals, the programme requires participation from the community. Moeliono, cited in Fahrudin (2011), notes that community participation is driven both by intrinsic reasons arising from within individuals and by extrinsic reasons arising from others related to them. The underlying pattern of relationships that constrains the actions an organization can take is called the social structure. The purpose of this research is to identify the form of public participation and the social structure typology of the villagers and of the people who participate in the CSR programme. The key actors of the community and of the participants can also be identified. This research uses the Social Network Analysis method, measuring the Rate of Participation, Density and Centrality. The results show that the people involved in the programme live in Dusun Pondok Dua and work in the fisheries sector. The density value among participants is 0.516, meaning that 51.6% of the participants are involved in the same stage of the CSR programme.
Keywords: social structure, social network analysis, corporate social responsibility, public participation
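The density and centrality figures reported above can be reproduced on any participation network with standard social network analysis tooling. A minimal sketch using the networkx library is shown below; the edge list is hypothetical and stands in for the actual actor-to-actor ties collected in the survey.

```python
import networkx as nx

# Hypothetical participation network: nodes are villagers, edges are
# "involved in the same CSR activity" ties recorded during the survey.
edges = [
    ("A", "B"), ("A", "C"), ("B", "C"),
    ("C", "D"), ("D", "E"), ("B", "E"),
]
G = nx.Graph(edges)

# Density: share of possible ties that are actually present
# (0.516 in the study would mean 51.6% of possible ties exist).
density = nx.density(G)

# Degree centrality highlights candidate key actors.
centrality = nx.degree_centrality(G)
key_actor = max(centrality, key=centrality.get)

print(f"density = {density:.3f}")
print(f"key actor = {key_actor} (centrality {centrality[key_actor]:.2f})")
```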
Procedia PDF Downloads 480
2792 Bilateral Telecontrol of AutoMerlin Mobile Robot Using Time Domain Passivity Control
Authors: Aamir Shahzad, Hubert Roth
Abstract:
This paper presents the bilateral telecontrol of the AutoMerlin mobile robot in the presence of communication delay. Passivity observers are designed to monitor the net energy at both ports of a two-port network; if either or both ports become active, making the net energy negative, the passivity controllers dissipate the appropriate amount of energy to keep the overall system passive despite the time delay. The environment force is modeled and sent back to the human operator so that s/he can feel it and gain additional information about the environment in the vicinity of the mobile robot. Experimental results are presented to show the performance and stability of the bilateral controller. The results show that whenever the passivity observers detect active behavior, the passivity controller comes into action to neutralize it and keep the overall system passive.
Keywords: bilateral control, human operator, haptic device, communication network, time domain passivity control, passivity observer, passivity controller, time delay, mobile robot, environment force
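The observer/controller pairing described above can be illustrated with a discrete-time energy bookkeeping loop. The sketch below is a simplified, hypothetical single-port version of time domain passivity control: the observer accumulates the energy flowing through the port, and a variable damper dissipates energy whenever the accumulated total goes negative.

```python
def passivity_observer_controller(forces, velocities, dt=0.001):
    """Simplified one-port time domain passivity control sketch.

    forces, velocities: sampled port signals (hypothetical data).
    Returns the damping injected at each step to keep net energy >= 0.
    """
    energy = 0.0
    damping_log = []
    for f, v in zip(forces, velocities):
        energy += f * v * dt          # passivity observer: accumulate port energy
        alpha = 0.0
        if energy < 0.0 and abs(v) > 1e-9:
            # passivity controller: dissipate just enough energy this step
            alpha = -energy / (v * v * dt)
            energy = 0.0
        damping_log.append(alpha)
    return damping_log

# Hypothetical usage with synthetic signals
forces = [1.0, -2.0, -2.5, 0.5]
velocities = [0.2, 0.3, 0.3, 0.1]
print(passivity_observer_controller(forces, velocities))
```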
Procedia PDF Downloads 392
2791 Networking the Biggest Challenge in Hybrid Cloud Deployment
Authors: Aishwarya Shekhar, Devesh Kumar Srivastava
Abstract:
Cloud computing has emerged as a promising direction for cost-efficient and reliable service delivery across data communication networks. The dynamic location of service facilities and the virtualization of hardware and software elements are stressing communication networks and protocols, especially when data centres are interconnected through the internet. Although the computing aspects of cloud technologies have been investigated extensively, far less attention has been devoted to the networking services. Cloud computing has enabled elastic and transparent access to infrastructure services without involving IT operating overhead, and virtualization has been a key enabler for cloud computing. While resource virtualization and service abstraction have been widely investigated, networking in the cloud remains a difficult puzzle. Even though the network plays a significant role in facilitating hybrid cloud scenarios, it has not received much attention in the research community until recently. We propose Network as a Service (NaaS), which forms the basis for unifying public and private clouds. In this paper, we identify various challenges in the adoption of hybrid cloud and discuss the design and implementation of a cloud platform.
Keywords: cloud computing, networking, infrastructure, hybrid cloud, open stack, naas
Procedia PDF Downloads 427
2790 South Asia’s Political Landscape: Precipitating Terrorism
Authors: Saroj Kumar Rath
Abstract:
India's Muslims represent 15 percent of the nation's population, the world's third largest Muslim population after Indonesia and Pakistan. Extremist groups like the Islamic State, Al Qaeda, the Taliban and the Haqqani network increasingly view India as a target. Several trends explain the rise: terrorism threats in South Asia are linked and mobile - if one source is batted down, jihadists relocate to find another Islamic cause. As NATO withdraws from Afghanistan, some jihadists will eye India. Pakistan regards India as a top enemy, and some officials even encourage terrorists to target areas like Kashmir or Mumbai. Meanwhile, a stream of Wahhabi preachers has visited India, offering hard-line messages; extremist groups like Al Qaeda and the Islamic State compete for influence, and militants even pay jihadists. Muslims, as a minority population in India, could offer fertile ground for extremist recruiters. This paper argues that there is an urgent need for the Indian government to profile militants and examine social media sites to counter Wahhabi indoctrination while supporting education and entrepreneurship for all of India's citizens.
Keywords: Al Qaeda, terrorism, Islamic state, India, haqqani network, Pakistan, Taliban
Procedia PDF Downloads 617
2789 Evaluating Service Trustworthiness for Service Selection in Cloud Environment
Authors: Maryam Amiri, Leyli Mohammad-Khanli
Abstract:
Cloud computing is becoming increasingly popular and more business applications are moving to the cloud. In this regard, the number of services that provide similar functional properties is increasing, so the ability to select a service with the best non-functional properties, corresponding to the user preference, is necessary for the user. This paper presents an Evaluation Framework of Service Trustworthiness (EFST) that evaluates the trustworthiness of equivalent services without the need for additional invocations of them. EFST extracts the user preference automatically. Then, it assesses the trustworthiness of services in two dimensions of qualitative and quantitative metrics, based on the experience of past usage of the services. Finally, EFST determines the overall trustworthiness of services using a Fuzzy Inference System (FIS). The results of experiments and simulations show that EFST is able to predict the missing values of Quality of Service (QoS) better than other competing approaches, and it also guides users to select the most appropriate services.
Keywords: user preference, cloud service, trustworthiness, QoS metrics, prediction
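The final aggregation step described above relies on a Fuzzy Inference System. A minimal, self-contained Mamdani-style sketch is given below; the membership functions, rule base and the two input metrics (a qualitative and a quantitative score) are hypothetical placeholders rather than the parameters used in EFST.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

def trust_fis(qual, quant):
    """Toy Mamdani FIS: two inputs in [0, 1] -> trustworthiness in [0, 1]."""
    y = np.linspace(0.0, 1.0, 101)           # output universe
    low_q, high_q = tri(qual, 0, 0, 0.6), tri(qual, 0.4, 1, 1)
    low_n, high_n = tri(quant, 0, 0, 0.6), tri(quant, 0.4, 1, 1)

    # Rule base (min for AND, clip the output set, max to aggregate)
    r1 = np.minimum(min(high_q, high_n), tri(y, 0.6, 1.0, 1.0))  # -> high trust
    r2 = np.minimum(max(low_q, low_n), tri(y, 0.0, 0.0, 0.5))    # -> low trust
    agg = np.maximum(r1, r2)

    # Centroid defuzzification
    return float(np.sum(y * agg) / (np.sum(agg) + 1e-12))

print(trust_fis(qual=0.8, quant=0.7))   # higher inputs -> higher trustworthiness
```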
Procedia PDF Downloads 287
2788 Component-Based Approach in Assessing Sewer Manholes
Authors: Khalid Kaddoura, Tarek Zayed
Abstract:
Sewer networks are constructed to protect communities and the environment from any contact with sewage. Pipelines, whether laterals or sewer mains, together with manholes, form a huge underground infrastructure in every urban city. Because of the importance of sewer networks, the infrastructure asset management field has advanced extensively in condition assessment and rehabilitation decision models. However, most of the focus has been devoted to pipelines, giving little attention to manhole condition assessment. In fact, studies have only recently started to emerge in this area to preserve manholes from malfunction. Therefore, the main objective of this study is to propose a condition assessment model for sewer manholes. The model divides the manhole into several components and determines the relative importance weight of each component using the Analytic Network Process (ANP) decision-making method. The condition of the manhole is then computed by aggregating the condition of each component with its corresponding weight. Accordingly, the proposed assessment model will enable decision-makers to obtain a final index suggesting the overall condition of the manhole and to perform a backward analysis to check the condition of each component. Consequently, better decisions can be made regarding maintenance, rehabilitation, and replacement actions.
Keywords: Analytic Network Process (ANP), condition assessment, decision-making, manholes
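The weighting and aggregation steps can be illustrated with a small numerical sketch. The pairwise comparison matrix, the component list and the condition scores below are hypothetical, and for brevity the sketch uses a single AHP-style priority vector (the eigenvector building block of ANP) rather than a full ANP supermatrix.

```python
import numpy as np

# Hypothetical pairwise comparison matrix for four manhole components
# (e.g. cover, frame, walls, channel), using Saaty's 1-9 scale.
A = np.array([
    [1.0, 3.0, 5.0, 2.0],
    [1/3, 1.0, 3.0, 1.0],
    [1/5, 1/3, 1.0, 1/2],
    [1/2, 1.0, 2.0, 1.0],
])

# Relative importance weights = normalised principal eigenvector.
eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()

# Hypothetical component condition scores on a 1 (excellent) - 5 (failed) scale.
condition = np.array([2.0, 3.0, 4.0, 1.0])

overall_index = float(w @ condition)
print("weights:", np.round(w, 3))
print("overall manhole condition index:", round(overall_index, 2))
```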
Procedia PDF Downloads 354
2787 Sharing Experience in Authentic Learning for Mobile Security
Abstract:
Mobile devices such as smartphones are becoming more and more popular in our daily lives. Security vulnerabilities and threat attacks have become an emerging and important research and education topic in the computing security discipline. There is a need for an innovative mobile security hands-on laboratory that provides students with real-world, relevant mobile threat analysis and protection experience. This paper presents an authentic teaching and learning approach to mobile security with smartphone devices, covering the most important mobile threats across most aspects of mobile security. Each lab focuses on one type of mobile threat, such as mobile messaging threats, and conveys threat analysis and protection in multiple ways, including lectures and tutorials, multimedia or app-based demonstrations for threat analysis, and mobile app development for threat protection. This authentic learning approach is affordable and easily adoptable, and it immerses students in a real-world, relevant learning environment with real devices. The approach can also be applied to many other mobile-related courses, such as mobile Java programming, database, network, and any security-relevant course, so that students can learn concepts and principles better through hands-on authentic learning experience.
Keywords: mobile computing, Android, network, security, labware
Procedia PDF Downloads 406
2786 Improving Cryptographically Generated Address Algorithm in IPv6 Secure Neighbor Discovery Protocol through Trust Management
Authors: M. Moslehpour, S. Khorsandi
Abstract:
As the transition to widespread use of IPv6 addresses has gained momentum, IPv6 has been shown to be vulnerable to certain security attacks, such as those targeting the Neighbor Discovery Protocol (NDP), which provides the address resolution functionality in IPv6. To protect this protocol, Secure Neighbor Discovery (SEND) was introduced. This protocol uses Cryptographically Generated Addresses (CGA) and asymmetric cryptography as a defense against threats to the integrity and identity of NDP. Although SEND protects NDP against attacks, it is computationally intensive due to the Hash2 condition in CGA. To improve the CGA computation speed, we parallelized the CGA generation process and used the available resources in a trusted network. Furthermore, we focused on the influence of the existence of malicious nodes on the overall load of un-malicious ones in the network. According to the evaluation results, malicious nodes have an adverse impact on the average CGA generation time and on the average number of tries. We utilized a trust management system that is capable of detecting and isolating the malicious node to remove possible incentives for malicious behavior. We have demonstrated the effectiveness of the trust management system in detecting malicious nodes and hence improving the overall system performance.
Keywords: CGA, ICMPv6, IPv6, malicious node, modifier, NDP, overall load, SEND, trust management
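The computational bottleneck mentioned above is the brute-force search for a modifier satisfying the Hash2 condition. A minimal single-process sketch of that search is shown below, roughly following the CGA specification (RFC 3972); the dummy public key is a placeholder, and the contribution described in the abstract is to spread exactly this loop across trusted nodes in parallel.

```python
import hashlib
import os

def find_modifier(public_key: bytes, sec: int) -> bytes:
    """Brute-force the CGA Hash2 condition (roughly following RFC 3972):
    the leftmost 16*sec bits of SHA-1(modifier || 9 zero octets || pubkey)
    must be zero. This is the loop that is parallelized across nodes."""
    zero_bits = 16 * sec
    modifier = int.from_bytes(os.urandom(16), "big")
    while True:
        digest = hashlib.sha1(
            modifier.to_bytes(16, "big") + b"\x00" * 9 + public_key
        ).digest()
        # check that the leftmost 16*sec bits are zero
        if int.from_bytes(digest, "big") >> (160 - zero_bits) == 0:
            return modifier.to_bytes(16, "big")
        modifier = (modifier + 1) % (1 << 128)

# Hypothetical usage with a dummy key; sec=1 finishes quickly, higher sec is costly.
print(find_modifier(b"dummy-public-key", sec=1).hex())
```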
Procedia PDF Downloads 184
2785 Estimating the Traffic Impacts of Green Light Optimal Speed Advisory Systems Using Microsimulation
Authors: C. B. Masera, M. Imprialou, L. Budd, C. Morton
Abstract:
Even though signalised intersections are necessary for urban road traffic management, they can act as bottlenecks and disrupt traffic operations. Interrupted traffic flow causes congestion, delays, stop-and-go conditions (i.e. excessive acceleration/deceleration) and longer journey times. Vehicle and infrastructure connectivity offers the potential to provide improved new services with additional functions that assist drivers. This paper focuses on one of the applications of vehicle-to-infrastructure communication, namely Green Light Optimal Speed Advisory (GLOSA). To assess the effectiveness of GLOSA in the urban road network, an integrated microscopic traffic simulation framework is built in the VISSIM software. Vehicle movements and vehicle-infrastructure communications are simulated through the External Driver Model interface. A control algorithm is developed for recommending an optimal speed that is continuously updated at every time step for all vehicles approaching a signal-controlled point. This algorithm allows vehicles to pass a traffic signal without stopping or to minimise stopping times at a red phase. This study is performed with all vehicles connected, at a 100% penetration rate. Conventional vehicles are also simulated in the same network as a reference. A straight road segment composed of two opposite directions with two traffic lights per lane is studied. The simulation is implemented under traffic volume conditions of 150 and 200 vehicles per hour to identify how different traffic densities influence the benefits of GLOSA. The results indicate that traffic flow is improved by the application of GLOSA. According to this study, vehicles passed through the traffic lights more smoothly, and waiting times were reduced by up to 28 seconds. Average delays decreased for the entire network by 86.46% and 83.84% under traffic densities of 150 vehicles per hour per lane and 200 vehicles per hour per lane, respectively.
Keywords: connected vehicles, GLOSA, intelligent transport systems, vehicle-to-infrastructure communication
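The core of a GLOSA controller is the advisory-speed computation repeated at every simulation time step. The sketch below is a simplified, hypothetical version of such a rule rather than the control algorithm implemented in the paper: given the distance to the stop line and a fixed-time signal plan, it recommends a speed that lets the vehicle arrive during a green phase without stopping, clamped to speed limits.

```python
def glosa_advisory_speed(dist_m, t_now, cycle, green_start, green_end,
                         v_min=5.0, v_max=13.9):
    """Return an advised speed (m/s) to cross the stop line on green.

    dist_m: distance to the signal stop line.
    t_now: current time within the signal cycle (s).
    cycle, green_start, green_end: hypothetical fixed-time signal plan (s).
    """
    for k in range(3):  # look at the next few cycles
        g_start = green_start + k * cycle
        g_end = green_end + k * cycle
        if g_end <= t_now:
            continue
        earliest = max(t_now, g_start)
        # speed needed to arrive between 'earliest' and 'g_end'
        v_hi = dist_m / (earliest - t_now) if earliest > t_now else v_max
        v_lo = dist_m / (g_end - t_now)
        v = min(v_max, v_hi)
        if v >= max(v_min, v_lo):
            return v          # a feasible non-stop speed exists
    return v_min              # otherwise crawl and expect to stop

# Hypothetical: 80 m to the line, 10 s into a 60 s cycle, green from 15 s to 40 s.
print(round(glosa_advisory_speed(80.0, 10.0, 60.0, 15.0, 40.0), 2))
```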
Procedia PDF Downloads 171
2784 The Scientific Study of the Relationship Between Physicochemical and Microstructural Properties of Ultrafiltered Cheese: Protein Modification and Membrane Separation
Authors: Shahram Naghizadeh Raeisi, Ali Alghooneh
Abstract:
The loss of curd cohesiveness and syneresis are two common problems in the ultrafiltered cheese industry. In this study, a modified cheese was developed using membrane technology and protein modification, and its properties were compared with a control sample. In order to decrease the lactose content and adjust the protein, acidity, dry matter and milk minerals, a combination of ultrafiltration, nanofiltration and reverse osmosis technologies was employed. For protein modification, a two-stage chemical and enzymatic reaction was employed before and after ultrafiltration. The physicochemical and microstructural properties of the modified ultrafiltered cheese were compared with the control one. Results showed that the modified protein enhanced the functional properties of the final cheese significantly (p-value < 0.05), even though the protein content was 50% lower than that of the control. The modified cheese showed 21 ± 0.70, 18 ± 1.10 and 25 ± 1.65% higher hardness, cohesiveness and water-holding capacity values, respectively, than the control sample. This behavior could be explained by the more developed microstructure of the gel network. Furthermore, chemical-enzymatic modification of the milk protein induced a significant change in the network parameters of the final cheese: the indices of network linkage strength, network linkage density, and time scale of junctions were 10.34 ± 0.52, 68.50 ± 2.10 and 82.21 ± 3.85% higher than in the control sample, whereas the distance between adjacent linkages was 16.77 ± 1.10% lower. These results were supported by the results of the textural analysis. A non-linear viscoelastic study showed a triangular waveform stress response for the cheese containing the modified protein, while the control sample showed a rectangular waveform stress response, which suggests better sliceability of the modified cheese. Moreover, to study the shelf life of the products, the acidity as well as the mold and yeast populations were determined over 120 days. It is worth mentioning that the lactose content of the modified cheese was adjusted to 2.5% before fermentation, while that of the control one was 4.5%. The control sample showed a shelf life of 8 weeks, while the shelf life of the modified cheese was 18 weeks in the refrigerator. During 18 weeks, the acidity of the modified and control samples increased from 82 ± 1.50 to 94 ± 2.20 °D and from 88 ± 1.64 to 194 ± 5.10 °D, respectively. The mold and yeast populations over time followed the semicircular shape model (R2 = 0.92, R2adj = 0.89, RMSE = 1.25). Furthermore, the mold and yeast counts and their growth rate in the modified cheese were lower than those of the control one; this result could be explained by the shortage of the energy source for the microorganisms in the modified cheese. The lactose content of the modified sample was less than 0.2 ± 0.05% at the end of fermentation, while it was 3.7 ± 0.68% in the control sample.
Keywords: non-linear viscoelastic, protein modification, semicircular shape model, ultrafiltered cheese
Procedia PDF Downloads 74
2783 Evaluation of Tumor Microenvironment Using Molecular Imaging
Authors: Fakhrosadat Sajjadian, Ramin Ghasemi Shayan
Abstract:
The tumor microenvironment plays a fundamental role in tumor initiation, progression, metastasis, and treatment resistance. It differs from normal tissue in terms of its extracellular matrix, vascular and lymphatic networks, and physiological conditions. The clinical application of molecular cancer imaging is often hindered by the high commercialization costs of targeted imaging agents as well as the limited clinical applications and small market size of some agents. Since many cancer types share similar characteristics of the tumor microenvironment, the ability to target these biomarkers has the potential to provide clinically translatable molecular imaging technologies for many types of cancer and broad clinical applications. Significant progress has been made in targeting the tumor microenvironment for molecular cancer imaging. In this review, we summarize the principles and strategies of recent advances in molecular imaging of the tumor microenvironment, using different imaging modalities for early detection and diagnosis of cancer. In conclusion, the tumor microenvironment (TME) surrounding tumor cells is a highly dynamic and heterogeneous composition of immune cells, fibroblasts, precursor cells, endothelial cells, signaling molecules and extracellular matrix (ECM) components.
Keywords: molecular, imaging, TME, medicine
Procedia PDF Downloads 46
2782 Optimizing Heavy-Duty Green Hydrogen Refueling Stations: A Techno-Economic Analysis of Turbo-Expander Integration
Authors: Christelle Rabbat, Carole Vouebou, Sary Awad, Alan Jean-Marie
Abstract:
Hydrogen has been proven to be a viable alternative to standard fuels, as it is easy to produce and generates only water vapour and zero carbon emissions. However, despite the benefits of hydrogen, the widespread adoption of hydrogen fuel cell vehicles and hydrogen internal combustion engine vehicles is impeded by several challenges. The lack of refueling infrastructure remains one of the main hindering factors due to the high costs associated with its design, construction, and operation. Besides, the lack of hydrogen vehicles on the road diminishes the economic viability of investing in refueling infrastructure, while the absence of accessible refueling stations discourages consumers from adopting hydrogen vehicles, perpetuating a cycle of limited market uptake. To address these challenges, the implementation of adequate policies incentivizing the use of hydrogen vehicles and the reduction of the investment and operation costs of hydrogen refueling stations (HRS) are essential to put both investors and customers at ease. Even though the transition to hydrogen cars has been rather slow, public transportation companies have shown a keen interest in this highly promising fuel. Besides, their hydrogen demand is easier to predict and regulate than that of personal vehicles. Due to the reduced complexity of designing a suitable hydrogen supply chain for public vehicles, this sub-sector could be a great starting point to facilitate the adoption of hydrogen vehicles. Consequently, this study focuses on designing a chain of on-site green HRS for the public transportation network in Nantes Metropole, leveraging the latest relevant technological advances and aiming to reduce costs while ensuring reliability, safety, and ease of access. To reduce the cost of HRS and encourage their widespread adoption, a network of 7 H35-T40 HRS has been designed, replacing the conventional J-T valves with turbo-expanders. Each station in the network has a daily capacity of 1,920 kg; thus, the HRS network can produce up to 12.5 tH2 per day. The detailed cost analysis revealed a CAPEX per station of 16.6 M euros, leading to a network CAPEX of 116.2 M euros. The proposed station siting prioritized Nantes Metropole’s 5 bus depots and included 2 city-centre locations. Thanks to the turbo-expander technology, the cooling capacity of the proposed HRS is 19% lower than that of a conventional station equipped with J-T valves, resulting in significant CAPEX savings estimated at 708,560 € per station, thus nearly 5 million euros for the whole HRS network. Besides, the turbo-expander power generation ranges from 7.7 to 112 kW; the power produced can be used within the station or sold as electricity to the main grid, which would, in turn, maximize the station’s profit. Despite the substantial initial investment required, the environmental benefits, cost savings, and energy efficiencies realized through the transition to hydrogen fuel cell buses and the deployment of HRS equipped with turbo-expanders offer considerable advantages for both TAN and Nantes Metropole. These initiatives underscore their enduring commitment to fostering green mobility and combatting climate change in the long term.
Keywords: green hydrogen, refueling stations, turbo-expander, heavy-duty vehicles
Procedia PDF Downloads 56
2781 Second Order Cone Optimization Approach to Two-stage Network DEA
Authors: K. Asanimoghadam, M. Salahi, A. Jamalian
Abstract:
Data envelopment analysis is an approach to measuring the efficiency of decision making units with multiple inputs and outputs. Many decision making units also contain decision-making subunits that are not considered in most data envelopment analysis models. Moreover, the inputs and outputs of decision-making units are usually considered desirable, while in some real-world problems the nature of some inputs or outputs is undesirable. In this work, we study the evaluation of the efficiency of two-stage decision-making units, where some outputs are undesirable, using two non-radial models, the SBM and the ASBM models. We formulate the nonlinear ASBM model as a second order cone optimization problem. Finally, we compare the two models for both external and internal evaluation approaches on two real-world examples in the presence of undesirable outputs. The results show that, in both external and internal evaluations, the overall efficiency of the ASBM model is greater than or equal to the overall efficiency value of the SBM model, and that in internal evaluation the ASBM model is more flexible than the SBM model.
Keywords: network DEA, conic optimization, undesirable output, SBM
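The reformulation claim above hinges on expressing nonlinear terms with second order cone constraints. The sketch below is not the ASBM model; it is only a generic toy second order cone program written with the cvxpy modelling library, to illustrate the problem class that the nonlinear model is cast into. All data are random placeholders.

```python
import cvxpy as cp
import numpy as np

# Generic SOCP: minimize c^T x  s.t.  ||A x + b||_2 <= d^T x + e,  x >= 0.
np.random.seed(0)
n = 3
A, b = np.random.randn(2, n), np.random.randn(2)
c, d, e = np.ones(n), np.ones(n), 5.0

x = cp.Variable(n)
constraints = [cp.SOC(d @ x + e, A @ x + b),   # second order cone constraint
               x >= 0]
prob = cp.Problem(cp.Minimize(c @ x), constraints)
prob.solve()
print("status:", prob.status, "optimal value:", round(prob.value, 4))
```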
Procedia PDF Downloads 194
2780 Participation in the Decision Making and Job Satisfaction in Greek Fish Farms
Authors: S. Anastasiou, C. Nathanailides
Abstract:
There is considerable evidence to suggest that employees' participation in the decision-making process of an organisation has a positive effect on their job satisfaction and work performance. The purpose of the present work was to examine the HRM practices, demographics and the level of job satisfaction of employees in Greek aquaculture fish farms. A survey of employees (n=86) in 6 Greek aquaculture firms was carried out. The results indicate that HRM practices such as recruitment of personnel and communication between departments did not vary between firms. The most frequent method of recruitment was through the professional or personal network of the managers. The preferred methods of HRM communication were through the line managers and through group meetings. The level of job satisfaction increased with work experience and participation in the decision-making process. A high percentage of the employees (81.3% ± 8.39) felt that they frequently participated in the decision-making process. The aquaculture employees exhibited a high level of job satisfaction (88.1 ± 6.95). The level of job satisfaction was related to participation in the decision-making process (-0.633, P<0.05) but was not related to age or gender. In terms of working conditions, employees were mostly satisfied with the work itself and their colleagues, and mostly dissatisfied with working hours, salary issues and low prospects of pay rises.
Keywords: aquaculture, human resources, job satisfaction
Procedia PDF Downloads 468
2779 Cerebrovascular Modeling: A Vessel Network Approach for Fluid Distribution
Authors: Karla E. Sanchez-Cazares, Kim H. Parker, Jennifer H. Tweedy
Abstract:
The purpose of this work is to develop a simple compartmental model of cerebral fluid balance including blood and cerebrospinal-fluid (CSF). At the first level the cerebral arteries and veins are modelled as bifurcating trees with constant scaling factors between generations which are connected through a homogeneous microcirculation. The arteries and veins are assumed to be non-rigid and the cross-sectional area, resistance and mean pressure in each generation are determined as a function of blood volume flow rate. From the mean pressure and further assumptions about the variation of wall permeability, the transmural fluid flux can be calculated. The results suggest the next level of modelling where the cerebral vasculature is divided into three compartments; the large arteries, the small arteries, the capillaries and the veins with effective compliances and permeabilities derived from the detailed vascular model. These vascular compartments are then linked to other compartments describing the different CSF spaces, the cerebral ventricles and the subarachnoid space. This compartmental model is used to calculate the distribution of fluid in the cranium. Known volumes and flows for normal conditions are used to determine reasonable parameters for the model, which can then be used to help understand pathological behaviour and suggest clinical interventions.
Keywords: cerebrovascular, compartmental model, CSF model, vascular network
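The first modelling level described above (a bifurcating tree with constant scaling factors between generations) can be sketched numerically with Poiseuille resistances. The generation count, scaling factors and vessel dimensions below are hypothetical placeholders, not the values used in the study.

```python
import math

def tree_resistance(r0, l0, generations, radius_scale=0.8, length_scale=0.85,
                    mu=3.5e-3):
    """Total resistance of a symmetric bifurcating vessel tree.

    Each generation doubles the vessel count; radius and length shrink by
    constant scaling factors. Poiseuille: R = 8*mu*L / (pi*r^4).
    """
    total = 0.0
    for g in range(generations):
        r = r0 * radius_scale ** g
        l = l0 * length_scale ** g
        r_single = 8.0 * mu * l / (math.pi * r ** 4)
        total += r_single / (2 ** g)      # 2^g identical vessels in parallel
    return total

# Hypothetical arterial tree: root radius 1.5 mm, length 20 mm, 10 generations.
R = tree_resistance(r0=1.5e-3, l0=20e-3, generations=10)
print(f"total tree resistance ~ {R:.3e} Pa·s/m^3")
```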
Procedia PDF Downloads 275
2778 Modeling of Power Network by ATP-Draw for Lightning Stroke Studies
Authors: John Morales, Armando Guzman
Abstract:
Protection relay algorithms play a crucial role in Electric Power System stability, and it is clear that lightning strokes produce the major percentage of faults and outages of Transmission Lines (TLs) and Distribution Feeders (DFs). In this context, it is imperative to develop novel protection relay algorithms. To achieve this aim, however, Electric Power System (EPS) networks have to be simulated as realistically as possible, especially the lightning phenomena and the EPS elements that affect their behavior, such as direct and indirect lightning, insulator strings, overhead lines, soil ionization and others. Nevertheless, researchers have proposed new protection relay algorithms considering common faults that are not produced by lightning strokes, omitting these important phenomena for the behavior of transmission line protection relays. Based on the above, this paper presents the possibilities of using the Alternative Transient Program ATP-Draw for the modeling and simulation of some models for lightning stroke studies, especially for protection relays, developed through the Transient Analysis of Control Systems (TACS) and the MODELS language of ATP-Draw.
Keywords: back-flashover, faults, flashover, lightning stroke, modeling of lightning, outages, protection relays
Procedia PDF Downloads 316
2777 AS-Geo: Arbitrary-Sized Image Geolocalization with Learnable Geometric Enhancement Resizer
Authors: Huayuan Lu, Chunfang Yang, Ma Zhu, Baojun Qi, Yaqiong Qiao, Jiangqian Xu
Abstract:
Image geolocalization has great application prospects in fields such as autonomous driving and virtual/augmented reality. In practical application scenarios, the size of the image to be located is not fixed, and it is impractical to train different networks for all possible sizes. When the image size does not match the input size of the descriptor extraction model, existing image geolocalization methods usually directly scale or crop the image in some common way. This results in the loss of information important to the geolocalization task, affecting the performance of the image geolocalization method. For example, excessive down-sampling can lead to blurred building contours, and inappropriate cropping can lead to the loss of key semantic elements, resulting in incorrect geolocation results. To address this problem, this paper designs a learnable image resizer and proposes an arbitrary-sized image geolocalization method. (1) The designed learnable image resizer employs the self-attention mechanism to enhance the geometric features of the resized image. First, it applies bilinear interpolation to the input image and its feature maps to obtain the initial resized image and the resized feature maps. Then, SKNet (selective kernel net) is used to approximate the best receptive field, keeping the geometric shapes consistent with the original image, and SENet (squeeze and excitation net) is used to automatically select the feature maps with strong contour information, enhancing the geometric features. Finally, the enhanced geometric features are fused with the initial resized image to obtain the final resized image. (2) The proposed image geolocalization method embeds the above image resizer as a front layer of the descriptor extraction network. It not only enables the network to be compatible with arbitrary-sized input images but also enhances the geometric features that are crucial to the image geolocalization task. Moreover, a triplet attention mechanism is added after the first convolutional layer of the backbone network to optimize the utilization of the geometric elements it extracts. Finally, the local features extracted by the backbone network are aggregated to form image descriptors for image geolocalization. The proposed method was evaluated on several mainstream datasets, such as Pittsburgh30K, Tokyo24/7, and Places365. The results show that the proposed method has excellent size compatibility and compares favorably with recent mainstream geolocalization methods.
Keywords: image geolocalization, self-attention mechanism, image resizer, geometric feature
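The resizer described above combines bilinear resizing with attention-based geometric enhancement. The sketch below is a much-simplified PyTorch stand-in: it bilinearly resizes the input, applies an SE-style channel gate (standing in for the SENet/SKNet blocks of the paper), and fuses the result with the plain resized image. The layer sizes are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SEGate(nn.Module):
    """Squeeze-and-excitation style channel gate (stand-in for SENet)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc1 = nn.Linear(channels, channels // reduction)
        self.fc2 = nn.Linear(channels // reduction, channels)

    def forward(self, x):
        w = F.adaptive_avg_pool2d(x, 1).flatten(1)        # squeeze
        w = torch.sigmoid(self.fc2(F.relu(self.fc1(w))))  # excite
        return x * w[:, :, None, None]

class LearnableResizer(nn.Module):
    """Resize to a fixed size while enhancing geometric features."""
    def __init__(self, out_size=(224, 224), channels=16):
        super().__init__()
        self.out_size = out_size
        self.stem = nn.Conv2d(3, channels, 3, padding=1)
        self.gate = SEGate(channels)
        self.head = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, x):
        base = F.interpolate(x, self.out_size, mode="bilinear", align_corners=False)
        feat = F.relu(self.stem(base))
        feat = self.gate(feat)
        return base + self.head(feat)      # fuse enhancement with the plain resize

# Hypothetical usage: any input size maps to the descriptor network's input size.
img = torch.randn(1, 3, 311, 497)
print(LearnableResizer()(img).shape)       # torch.Size([1, 3, 224, 224])
```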
Procedia PDF Downloads 214
2776 Proposing an Algorithm to Cluster Ad Hoc Networks, Modulating Two Levels of Learning Automaton and Nodes Additive Weighting
Authors: Mohammad Rostami, Mohammad Reza Forghani, Elahe Neshat, Fatemeh Yaghoobi
Abstract:
An ad hoc network consists of wireless mobile equipment that connects without any infrastructure, using connection equipment. The best way to form a hierarchical structure is clustering, and various clustering methods can form more stable clusters according to the nodes' mobility. In this research we propose an algorithm which allocates a weight to each node based on factors such as link stability and power reduction rate. According to the weight allocated in the previous phase, the cellular learning automaton picks out, in the second phase, the nodes that are candidates for becoming cluster heads. In the third phase, the learning automaton selects the cluster head nodes and the member nodes and forms the cluster. Thus, the automaton learns from its environment and can form clusters that are optimized in terms of power consumption and link stability. To simulate the proposed algorithm we used OMNeT++ 4.2.2. Simulation results indicate that the newly formed clusters have a longer lifetime than those of previous algorithms and strongly decrease network overload by reducing the update rate.
Keywords: mobile Ad Hoc networks, clustering, learning automaton, cellular automaton, battery power
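The additive weighting step of the first phase can be sketched as a simple scoring rule. The sketch below combines hypothetical link-stability and power-drain metrics into a single node weight and nominates the highest-weighted neighbours as cluster head candidates; the coefficients and metric values are placeholders, not the parameters of the proposed algorithm.

```python
def node_weight(link_stability, power_drain_rate, w_stability=0.6, w_power=0.4):
    """Additive weight: favour stable links and a low battery drain rate."""
    return w_stability * link_stability + w_power * (1.0 - power_drain_rate)

# Hypothetical neighbourhood: node id -> (link stability, power drain rate), both in [0, 1].
neighbourhood = {
    "n1": (0.9, 0.2),
    "n2": (0.5, 0.1),
    "n3": (0.7, 0.6),
    "n4": (0.8, 0.3),
}

weights = {n: node_weight(s, p) for n, (s, p) in neighbourhood.items()}
candidates = sorted(weights, key=weights.get, reverse=True)[:2]  # top-2 candidates
print(weights)
print("cluster head candidates:", candidates)
```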
Procedia PDF Downloads 411
2775 Strengthening Farmer-to-farmer Knowledge Sharing Network: A Pathway to Improved Extension Service Delivery
Authors: Farouk Shehu Abdulwahab
Abstract:
The concept of farmer-to-farmer knowledge sharing was introduced to bridge the extension worker-farmer ratio gap in developing countries. However, the idea was poorly accepted, especially in typical agrarian communities. Therefore, this study explores the concept of a farmer-to-farmer knowledge-sharing network to enhance extension service delivery. The study collected data from 80 farmers randomly selected through a series of multiple stages. The data were analysed using a 5-point Likert scale and descriptive statistics. The Likert scale results revealed that 62.5% of the farmers are satisfied with farmer-to-farmer knowledge-sharing networks. Moreover, descriptive statistics show that lack of capacity building and a low level of education are the most significant problems affecting farmer-to-farmer sharing networks. The major implication of these findings is that the concept of farmer-to-farmer knowledge-sharing networks can work well for farmers in developing countries, as it was perceived by them as a reliable alternative for information sharing. Therefore, the study recommends introducing incentives into the concept of farmer-to-farmer knowledge-sharing networks and enhancing the capabilities of farmers who are opinion leaders in farmer-to-farmer knowledge sharing to make it more sustainable.
Keywords: agricultural productivity, extension, farmer-to-farmer, livelihood, technology transfer
Procedia PDF Downloads 65
2774 Intelligent Rainwater Reuse System for Irrigation
Authors: Maria M. S. Pires, Andre F. X. Gloria, Pedro J. A. Sebastiao
Abstract:
Technological advances in the area of the Internet of Things have been creating more and more solutions in the area of agriculture. These solutions are quite important for life, as they lead to savings of the most precious resource, water, and this need to save water is a concern worldwide. The paper proposes the creation of an Internet of Things system based on a network of sensors and interconnected actuators that automatically monitors the quality of the rainwater stored inside a tank so that it can be used for irrigation. The main objective is to promote sustainability by reusing rainwater for irrigation systems instead of water that is usually available for other functions, such as other production processes or even domestic tasks. A mobile application was developed for Android so that the user can control and monitor the system in real time. In the application, it is possible to visualize the data describing the quality of the water in the tank, as well as to perform some actions on the implemented actuators, such as starting/stopping the irrigation system and discarding the water in case of poor water quality. The implemented system provides a simple solution with a high level of efficiency, and tests and results were obtained within the available test environment.
Keywords: internet of things, irrigation system, wireless sensor and actuator network, ESP32, sustainability, water reuse, water efficiency
Procedia PDF Downloads 149
2773 Towards End-To-End Disease Prediction from Raw Metagenomic Data
Authors: Maxence Queyrel, Edi Prifti, Alexandre Templier, Jean-Daniel Zucker
Abstract:
Analysis of the human microbiome using metagenomic sequencing data has demonstrated a high ability to discriminate various human diseases. Raw metagenomic sequencing data require multiple complex and computationally heavy bioinformatics steps prior to data analysis. Such data contain millions of short sequence reads from fragmented DNA sequences, stored as fastq files. Conventional processing pipelines consist of multiple steps, including quality control, filtering, and alignment of sequences against genomic catalogs (genes, species, taxonomic levels, functional pathways, etc.). These pipelines are complex to use, time consuming, and rely on a large number of parameters that often introduce variability and impact the estimation of the microbiome elements. Training deep neural networks directly on raw sequencing data is a promising approach to bypass some of the challenges associated with mainstream bioinformatics pipelines. Most of these methods use the concept of word and sentence embeddings, which creates a meaningful and numerical representation of DNA sequences while extracting features and reducing the dimensionality of the data. In this paper we present an end-to-end approach that classifies patients into disease groups directly from raw metagenomic reads: metagenome2vec. This approach is composed of four steps: (i) generating a vocabulary of k-mers and learning their numerical embeddings; (ii) learning DNA sequence (read) embeddings; (iii) identifying the genome from which the sequence is most likely to come; and (iv) training a multiple instance learning classifier which predicts the phenotype based on the vector representation of the raw data. An attention mechanism is applied in the network so that the model can be interpreted, assigning a weight to the influence of each genome on the prediction. Using two public real-life data-sets as well as a simulated one, we demonstrate that this original approach reaches performance comparable with that of state-of-the-art methods applied directly to data processed through mainstream bioinformatics workflows. These results are encouraging for this proof-of-concept work. We believe that with further dedication, the DNN models have the potential to surpass mainstream bioinformatics workflows in disease classification tasks.
Keywords: deep learning, disease prediction, end-to-end machine learning, metagenomics, multiple instance learning, precision medicine
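Step (i) of the pipeline (building a k-mer vocabulary and turning reads into numerical vectors) can be illustrated with a short sketch. Below, reads are cut into overlapping k-mers, each k-mer gets an index in a vocabulary, and a randomly initialised embedding table is mean-pooled to give a read vector; in metagenome2vec these embeddings would be learned rather than random, so this is only a structural stand-in with hypothetical reads.

```python
import torch
import torch.nn as nn

def kmers(read: str, k: int = 4):
    """Cut a DNA read into overlapping k-mers."""
    return [read[i:i + k] for i in range(len(read) - k + 1)]

# Hypothetical reads standing in for fastq sequences.
reads = ["ACGTACGGTCA", "TTGACGTACGA", "GGCATCGTACG"]

# (i) k-mer vocabulary
vocab = {km: idx for idx, km in enumerate(sorted({km for r in reads for km in kmers(r)}))}

# (ii) read embeddings: mean-pool k-mer embeddings (random here, learned in practice)
embed = nn.Embedding(len(vocab), 16)

def read_vector(read: str) -> torch.Tensor:
    idx = torch.tensor([vocab[km] for km in kmers(read)])
    return embed(idx).mean(dim=0)

vectors = torch.stack([read_vector(r) for r in reads])
print(len(vocab), "k-mers in vocabulary;", vectors.shape, "read embedding matrix")
```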
Procedia PDF Downloads 125
2772 Resting-State Functional Connectivity Analysis Using an Independent Component Approach
Authors: Eric Jacob Bacon, Chaoyang Jin, Dianning He, Shuaishuai Hu, Lanbo Wang, Han Li, Shouliang Qi
Abstract:
Objective: Refractory epilepsy is a complicated type of epilepsy that can be difficult to diagnose. Recent technological advancements have made resting-state functional magnetic resonance imaging (rsfMRI) a vital technique for studying brain activity. However, there is still much to learn about rsfMRI, and investigating rsfMRI connectivity may aid in the detection of abnormal activities. In this paper, we propose studying the functional connectivity of rsfMRI candidates to diagnose epilepsy. Methods: 45 rsfMRI candidates, comprising 26 with refractory epilepsy and 19 healthy controls, were enrolled in this study. A data-driven approach known as independent component analysis (ICA) was used to achieve our goal. First, rsfMRI data from both patients and healthy controls were analyzed using group ICA. The components obtained were then spatially sorted to find and select meaningful ones. A two-sample t-test was also used to identify abnormal networks in patients compared with healthy controls. Finally, based on the fractional amplitude of low-frequency fluctuations (fALFF), a chi-square test was used to distinguish the network properties of the patient and healthy control groups. Results: The two-sample t-test analysis revealed abnormalities in the default mode network, including the left superior temporal lobe and the left supramarginal gyrus. The right precuneus was found to be abnormal in the dorsal attention network. In addition, the frontal cortex showed an abnormal cluster in the medial temporal gyrus, while the temporal cortex showed abnormal clusters in the right middle temporal gyrus and the right fronto-operculum gyrus. Finally, the chi-square test was significant, producing a p-value of 0.001. Conclusion: This study offers evidence that investigating rsfMRI connectivity provides an excellent diagnostic option for refractory epilepsy.
Keywords: ICA, RSN, refractory epilepsy, rsfMRI
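ICA decomposes the fMRI time series into spatially independent components. The sketch below applies scikit-learn's FastICA to a small synthetic time-by-voxel matrix as a stand-in for preprocessed group rsfMRI data; the dimensions, noise level and number of components are hypothetical.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)

# Hypothetical preprocessed rsfMRI data: 120 time points x 500 voxels,
# built here from three latent "networks" plus noise.
time, voxels, n_networks = 120, 500, 3
sources = rng.standard_normal((time, n_networks))
mixing = rng.standard_normal((n_networks, voxels))
data = sources @ mixing + 0.1 * rng.standard_normal((time, voxels))

ica = FastICA(n_components=n_networks, random_state=0)
time_courses = ica.fit_transform(data)        # (time, components)
spatial_maps = ica.components_                # (components, voxels)

print(time_courses.shape, spatial_maps.shape)
# The spatial maps would then be sorted and inspected to identify meaningful
# networks (e.g. default mode, dorsal attention) before group comparisons.
```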
Procedia PDF Downloads 76
2771 Advanced Simulation and Enhancement for Distributed and Energy Efficient Scheduling for IEEE802.11s Wireless Enhanced Distributed Channel Access Networks
Authors: Fisayo G. Ojo, Shamala K. Subramaniam, Zuriati Ahmad Zukarnain
Abstract:
As technology advances and wireless applications become dependable resources, with their physical layers embedded into ever smaller devices, the problems of energy efficiency and consumption grow. This paper reviews recent work in wireless applications and distributed computing. We found that applications are becoming dependable and increasingly share resources with other applications in distributed computing, and that applications embedded in distributed systems suffer from problems of power stability and efficiency. The review also shows that discrete event simulation has largely been left untouched and has not been adopted in distributed systems as a simulation technique for scheduling the events that take place during the development of distributed computing applications. We shed more light on several techniques and results proposed by other researchers to highlight their unsatisfactory results and to show that more work still has to be done on energy efficiency in wireless applications and on congestion in distributed computing.
Keywords: discrete event simulation (DES), distributed computing, energy efficiency (EE), internet of things (IOT), quality of service (QOS), user equipment (UE), wireless mesh network (WMN), wireless sensor network (wsn), worldwide interoperability for microwave access x (WiMAX)
Procedia PDF Downloads 192
2770 Multi-Objective Electric Vehicle Charge Coordination for Economic Network Management under Uncertainty
Authors: Ridoy Das, Myriam Neaimeh, Yue Wang, Ghanim Putrus
Abstract:
Electric vehicles are a popular transportation medium renowned for their potential environmental benefits. However, large and uncontrolled charging volumes can impact distribution networks negatively. Smart charging is widely recognized as an efficient solution to achieve both improved renewable energy integration and grid relief. Nevertheless, different decision-makers may pursue diverse and conflicting objectives. In this context, this paper proposes a multi-objective optimization framework to control electric vehicle charging to achieve both energy cost reduction and peak shaving. A weighted-sum method is developed due to its intuitiveness and efficiency. Monte Carlo simulations are implemented to investigate the impact of uncertain electric vehicle driving patterns and to provide decision-makers with a robust outcome in terms of prospective cost and network loading. The results demonstrate that there is a conflict between energy cost efficiency and peak shaving, with the decision-makers needing to make a collaborative decision.
Keywords: electric vehicles, multi-objective optimization, uncertainty, mixed integer linear programming
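The weighted-sum method mentioned above scalarizes the two objectives (energy cost and network peak) into one. The sketch below does this for a single deterministic scenario with a linear program via scipy; the prices, baseline loads, charger limits and weights are hypothetical, and the full model in the paper (with Monte Carlo scenarios and mixed-integer constraints) is more involved.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical 6-slot horizon (1 h each): electricity price and baseline load.
price = np.array([0.30, 0.25, 0.15, 0.10, 0.12, 0.28])   # EUR/kWh
base = np.array([40.0, 35.0, 30.0, 28.0, 33.0, 45.0])    # kW baseline feeder load
T = len(price)
energy_needed = 60.0        # kWh to deliver to the EV fleet
p_max = 22.0                # kW charger limit per slot
w_cost, w_peak = 1.0, 0.5   # weighted-sum trade-off (decision-maker's choice)

# Variables: x_0..x_{T-1} (charging power per slot) and z (network peak).
c = np.concatenate([w_cost * price, [w_peak]])
# Peak constraints: base_t + x_t - z <= 0
A_ub = np.hstack([np.eye(T), -np.ones((T, 1))])
b_ub = -base
# Energy constraint: sum of x_t * 1 h = energy_needed
A_eq = np.concatenate([np.ones(T), [0.0]]).reshape(1, -1)
b_eq = [energy_needed]
bounds = [(0, p_max)] * T + [(0, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("charging schedule (kW):", np.round(res.x[:T], 1), "peak (kW):", round(res.x[-1], 1))
```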
Procedia PDF Downloads 179
2769 Manufacturing Anomaly Detection Using a Combination of Gated Recurrent Unit Network and Random Forest Algorithm
Authors: Atinkut Atinafu Yilma, Eyob Messele Sefene
Abstract:
Anomaly detection is one of the essential mechanisms to control and reduce production loss, especially in today's smart manufacturing. Quick anomaly detection aids in reducing the cost of production by minimizing the possibility of producing defective products. However, developing an anomaly detection model that can rapidly detect a production change is challenging. This paper proposes a Gated Recurrent Unit (GRU) network combined with a Random Forest (RF) to detect anomalies in the production process quickly and in real time. The GRU is used as a feature detector and the RF as a classifier using the input features from the GRU. The model was tested on various synthetic and real-world datasets against benchmark methods. The results show that the proposed GRU-RF outperforms the benchmark methods with the shortest time taken to detect anomalies in the production process. Based on the findings of the study, this proposed model can eliminate or reduce unnecessary production costs and bring a competitive advantage to manufacturing industries.
Keywords: anomaly detection, multivariate time series data, smart manufacturing, gated recurrent unit network, random forest
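The GRU-as-feature-detector plus RF-as-classifier pairing can be sketched directly. Below, an untrained GRU compresses each multivariate time-series window into its final hidden state, and a random forest is fitted on those features; the window size, hidden size and the synthetic labels are hypothetical, and in the proposed model the GRU would of course be trained first.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

torch.manual_seed(0)
rng = np.random.default_rng(0)

# Hypothetical multivariate production data: 200 windows x 50 time steps x 4 sensors.
X = rng.standard_normal((200, 50, 4)).astype(np.float32)
y = rng.integers(0, 2, size=200)          # 0 = normal, 1 = anomaly (synthetic labels)

# GRU feature detector: use the final hidden state as the window's feature vector.
gru = nn.GRU(input_size=4, hidden_size=16, batch_first=True)
with torch.no_grad():
    _, h_n = gru(torch.from_numpy(X))     # h_n: (1, batch, hidden)
    features = h_n.squeeze(0).numpy()     # (batch, hidden)

# Random forest classifier on the GRU features.
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(features[:150], y[:150])
print("held-out accuracy:", rf.score(features[150:], y[150:]))
```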
Procedia PDF Downloads 118
2768 Other-Generated Disclosure: A Challenge to Privacy on Social Network Sites
Authors: Tharntip Tawnie Chutikulrungsee, Oliver Kisalay Burmeister, Maumita Bhattacharya, Dragana Calic
Abstract:
Sharing on social network sites (SNSs) has rapidly emerged as a new social norm and has become a global phenomenon. Billions of users reveal not only their own information (self-disclosure) but also information about others (other-generated disclosure), resulting in a risk and a serious threat to either personal or informational privacy. Self-disclosure (SD) has been extensively researched in the literature, particularly regarding individual control and existing privacy management. However, far too little attention has been paid to other-generated disclosure (OGD), especially by insiders. OGD has a strong influence on self-presentation, self-image, and electronic word of mouth (eWOM). Moreover, OGD is more credible and less likely to be manipulated than SD, but to some extent it lacks privacy control and legal protection. This article examines OGD in depth, ranging from motivation to both online and offline impacts, based upon lived experiences of both ‘the disclosed’ and ‘the discloser’. Using purposive sampling, this phenomenological study involves an online survey and in-depth interviews. The findings report the influence of peer disclosure as well as users’ strategies to mitigate privacy issues. This article also calls attention to the challenge of OGD privacy and the inadequacies of the law related to privacy protection in the digital domain.
Keywords: facebook, online privacy, other-generated disclosure, social networks sites (SNSs)
Procedia PDF Downloads 251
2767 Relay Node Placement for Connectivity Restoration in Wireless Sensor Networks Using Genetic Algorithms
Authors: Hanieh Tarbiat Khosrowshahi, Mojtaba Shakeri
Abstract:
Wireless Sensor Networks (WSNs) consist of a set of sensor nodes with limited capabilities. WSNs may suffer from multiple node failures when they are exposed to harsh environments, such as military zones or disaster locations, and lose connectivity by getting partitioned into disjoint segments. Relay nodes (RNs) are then introduced to restore connectivity. They cost more than sensors, as they benefit from mobility, more power and a larger transmission range, which calls for using a minimum number of them. This paper addresses the problem of RN placement in a network with multiple disjoint segments by developing a genetic algorithm (GA). The problem is recast as the Steiner tree problem (which is known to be NP-hard), with the aim of finding the minimum number of Steiner points where RNs are to be placed for restoring connectivity. An upper bound on the number of RNs is first computed to set the length of the initial chromosomes. The GA then iteratively reduces the number of RNs and determines their locations at the same time. Experimental results indicate that the proposed GA is capable of establishing network connectivity using a reasonable number of RNs compared to the best existing work.
Keywords: connectivity restoration, genetic algorithms, multiple-node failure, relay nodes, wireless sensor networks
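The GA described above can be illustrated with a deliberately simplified sketch: a chromosome encodes a fixed maximum number of candidate relay positions with on/off flags, and the fitness penalizes both the relay count and any remaining disconnected segments. The segment coordinates, communication range and GA parameters below are hypothetical, and this encoding differs from the paper's Steiner-point formulation.

```python
import math
import random
import networkx as nx

SEGMENTS = [(0, 0), (100, 0), (50, 90)]   # hypothetical disjoint segment representatives
RANGE, MAX_RN, POP, GENS = 45.0, 6, 40, 60

def decode(chrom):
    """Chromosome: MAX_RN genes of (active, x, y); return active relay positions."""
    return [(x, y) for active, x, y in chrom if active > 0.5]

def components(relays):
    G = nx.Graph()
    nodes = SEGMENTS + relays
    G.add_nodes_from(range(len(nodes)))
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            if math.dist(nodes[i], nodes[j]) <= RANGE:
                G.add_edge(i, j)
    return nx.number_connected_components(G)

def fitness(chrom):
    relays = decode(chrom)
    return len(relays) + 100 * (components(relays) - 1)   # fewer RNs, full connectivity

def random_chrom():
    return [(random.random(), random.uniform(0, 100), random.uniform(0, 100))
            for _ in range(MAX_RN)]

def mutate(chrom):
    return [(1 - a if random.random() < 0.1 else a,
             min(100, max(0, x + random.gauss(0, 5))),
             min(100, max(0, y + random.gauss(0, 5)))) for a, x, y in chrom]

population = [random_chrom() for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=fitness)
    parents = population[:POP // 2]
    children = [mutate([random.choice(pair) for pair in zip(a, b)])   # uniform crossover
                for a, b in zip(parents, reversed(parents))]
    population = parents + children

best = min(population, key=fitness)
print("relays placed:", [(round(x), round(y)) for x, y in decode(best)])
print("connected components:", components(decode(best)))
```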
Procedia PDF Downloads 241
2766 End-to-End Pyramid Based Method for Magnetic Resonance Imaging Reconstruction
Authors: Omer Cahana, Ofer Levi, Maya Herman
Abstract:
Magnetic Resonance Imaging (MRI) is a lengthy medical scan, owing to its long acquisition time. The scan duration is mainly due to the traditional sampling theorem, which defines a lower bound on sampling. However, it is still possible to accelerate the scan by using a different approach, such as Compressed Sensing (CS) or Parallel Imaging (PI). These two complementary methods can be combined to achieve a faster scan with high-fidelity imaging. To achieve that, two conditions must be satisfied: i) the signal must be sparse in a known transform domain, and ii) the sampling method must be incoherent. In addition, a nonlinear reconstruction algorithm must be applied to recover the signal. While rapid advances in Deep Learning (DL) have led to tremendous successes in various computer vision tasks, the field of MRI reconstruction is still in its early stages. In this paper, we present an end-to-end method for MRI reconstruction from k-space to image. Our method contains two parts. The first is sensitivity map estimation (SME), a small yet effective network that can easily be extended to a variable number of coils. The second is reconstruction, a top-down architecture with lateral connections developed for building high-level refinement at all scales. Our method achieves state-of-the-art results on the fastMRI benchmark, which is the largest, most diverse benchmark for MRI reconstruction.
Keywords: magnetic resonance imaging, image reconstruction, pyramid network, deep learning
Procedia PDF Downloads 91
2765 Performance Based Road Asset Evaluation
Authors: Kidus Dawit Gedamu
Abstract:
The Addis Ababa City Roads Authority is responsible for managing the city’s road network and for evaluating its performance using the International Roughness Index (IRI). This helps the authority conduct pavement condition assessments of asphalt roads each year to determine the health status, or level of service (LOS), of the roadway network and to plan programme improvements such as maintenance, resurfacing and rehabilitation. For a given IRI limit, an economical and acceptable maintenance strategy may be selected among a number of maintenance alternatives; the Highway Development and Management (HDM-4) tool supports such decisions by evaluating the economic and structural conditions and helping to decide which option is best. This paper specifically addresses flexible pavement, including two principal arterial streets under the administration of the Addis Ababa City Roads Authority: the road from Megenagna Interchange to Ayat Square and the road from Ayat Square to Tafo RA. First, the procedures followed by the city's road authority to develop appropriate road maintenance strategies were assessed; questionnaire surveys and interviews were used to collect information from the city's road maintenance departments. Second, a project analysis was performed for the functional and economic comparison of different maintenance alternatives using HDM-4.
Keywords: appropriate maintenance strategy, cost stream, road deterioration, maintenance alternative
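Condition reporting with IRI usually amounts to mapping roughness values onto level-of-service bands. The sketch below shows such a mapping; the band thresholds and the survey values are illustrative placeholders, not the limits actually used by the Addis Ababa City Roads Authority or HDM-4.

```python
def los_from_iri(iri_m_per_km: float) -> str:
    """Map an International Roughness Index value (m/km) to an illustrative
    level-of-service band. Thresholds are hypothetical examples only."""
    bands = [(3.0, "good"), (5.0, "fair"), (7.0, "poor")]
    for limit, label in bands:
        if iri_m_per_km <= limit:
            return label
    return "very poor"

# Hypothetical yearly survey results for two arterial sections.
survey = {"Megenagna-Ayat": 4.2, "Ayat-Tafo": 7.8}
for section, iri in survey.items():
    print(f"{section}: IRI = {iri} m/km -> LOS = {los_from_iri(iri)}")
```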
Procedia PDF Downloads 61
2764 Strengthening by Assessment: A Case Study of Rail Bridges
Authors: Evangelos G. Ilias, Panagiotis G. Ilias, Vasileios T. Popotas
Abstract:
The United Kingdom has one of the oldest railway networks in the world, dating back to 1825 when the world’s first passenger railway was opened. The network has some 40,000 bridges of various construction types, using a wide range of materials including masonry, steel, cast iron, wrought iron, concrete and timber. It is commonly accepted that the successful operation of the network is vital for the economy of the United Kingdom; consequently, the cost-effective maintenance of the existing infrastructure is a high priority to maintain the operability of the network, prevent deterioration and extend the life of the assets. Every bridge on the railway network is required to be assessed every eighteen years, and a structured approach to assessments is adopted with three main types of progressively more detailed assessment. These assessment types include Level 0 (standardized spreadsheet assessment tools), Level 1 (analytical hand calculations) and Level 2 (generally finite element analyses). There is a degree of conservatism in the first two types of assessment, dictated to some extent by the relevant standards, which can lead to some structures not achieving the required load rating. In these situations, a Level 2 Assessment is often carried out using finite element analysis to uncover ‘latent strength’ and improve the load rating. If successful, the more sophisticated analysis can save on costly strengthening or replacement works and avoid disruption to the operational railway. This paper presents the ‘strengthening by assessment’ achieved by Level 2 analyses. The use of more accurate analysis assumptions and the implementation of non-linear modelling and functions (material, geometric and support) to better understand buckling modes and the structural behaviour of historic construction details that are not specifically covered by assessment codes are outlined. Metallic bridges, which are susceptible to loss of section size through corrosion, have the largest scope for improvement by the Level 2 Assessment methodology. Three case studies are presented, demonstrating the effectiveness of the sophisticated Level 2 Assessment methodology using finite element analysis against the conservative approaches employed for Level 0 and Level 1 Assessments. One rail overbridge and two rail underbridges that did not achieve the required load rating by means of a Level 1 Assessment, due to the inadequate restraint provided by U-frame action, are examined, and the increase in assessed capacity given by the Level 2 Assessment is outlined.
Keywords: assessment, bridges, buckling, finite element analysis, non-linear modelling, strengthening
Procedia PDF Downloads 309