Search results for: Bayesian neural network
2748 Evaluation of Tumor Microenvironment Using Molecular Imaging
Authors: Fakhrosadat Sajjadian, Ramin Ghasemi Shayan
Abstract:
The tumor microenvironment plays a fundamental role in tumor initiation, progression, metastasis, and treatment resistance. It differs from normal tissue in terms of its extracellular matrix, vascular and lymphatic networks, and physiological conditions. The clinical application of molecular cancer imaging is often hindered by the high commercialization costs of targeted imaging agents as well as the limited clinical applications and small market size of some agents. Since many cancer types share similar characteristics of the tumor microenvironment, the ability to target these biomarkers has the potential to provide clinically translatable molecular imaging technologies for many cancer types and broad clinical applications. Significant progress has been made in targeting the tumor microenvironment for molecular cancer imaging. In this review, we summarize the principles and strategies of recent advances in molecular imaging of the tumor microenvironment, using different imaging modalities for the early detection and diagnosis of cancer. To conclude, the tumor microenvironment (TME) surrounding tumor cells is a highly dynamic and heterogeneous composition of immune cells, fibroblasts, precursor cells, endothelial cells, signaling molecules, and extracellular matrix (ECM) components.
Keywords: molecular, imaging, TME, medicine
Procedia PDF Downloads 45
2747 Optimizing Heavy-Duty Green Hydrogen Refueling Stations: A Techno-Economic Analysis of Turbo-Expander Integration
Authors: Christelle Rabbat, Carole Vouebou, Sary Awad, Alan Jean-Marie
Abstract:
Hydrogen has been proven to be a viable alternative to standard fuels as it is easy to produce and only generates water vapour and zero carbon emissions. However, despite the hydrogen benefits, the widespread adoption of hydrogen fuel cell vehicles and internal combustion engine vehicles is impeded by several challenges. The lack of refueling infrastructures remains one of the main hindering factors due to the high costs associated with their design, construction, and operation. Besides, the lack of hydrogen vehicles on the road diminishes the economic viability of investing in refueling infrastructure. Simultaneously, the absence of accessible refueling stations discourages consumers from adopting hydrogen vehicles, perpetuating a cycle of limited market uptake. To address these challenges, the implementation of adequate policies incentivizing the use of hydrogen vehicles and the reduction of the investment and operation costs of hydrogen refueling stations (HRS) are essential to put both investors and customers at ease. Even though the transition to hydrogen cars has been rather slow, public transportation companies have shown a keen interest in this highly promising fuel. Besides, their hydrogen demand is easier to predict and regulate than personal vehicles. Due to the reduced complexity of designing a suitable hydrogen supply chain for public vehicles, this sub-sector could be a great starting point to facilitate the adoption of hydrogen vehicles. Consequently, this study will focus on designing a chain of on-site green HRS for the public transportation network in Nantes Metropole leveraging the latest relevant technological advances aiming to reduce the costs while ensuring reliability, safety, and ease of access. To reduce the cost of HRS and encourage their widespread adoption, a network of 7 H35-T40 HRS has been designed, replacing the conventional J-T valves with turbo-expanders. Each station in the network has a daily capacity of 1,920 kg. 
Thus, the HRS network can produce up to 12.5 tH2 per day. The detailed cost analysis has revealed a CAPEX per station of 16.6 M euros leading to a network CAPEX of 116.2 M euros. The proposed station siting prioritized Nantes metropole’s 5 bus depots and included 2 city-centre locations. Thanks to the turbo-expander technology, the cooling capacity of the proposed HRS is 19% lower than that of a conventional station equipped with J-T valves, resulting in significant CAPEX savings estimated at 708,560 € per station, thus nearly 5 million euros for the whole HRS network. Besides, the turbo-expander power generation ranges from 7.7 to 112 kW. Thus, the power produced can be used within the station or sold as electricity to the main grid, which would, in turn, maximize the station’s profit. Despite the substantial initial investment required, the environmental benefits, cost savings, and energy efficiencies realized through the transition to hydrogen fuel cell buses and the deployment of HRS equipped with turbo-expanders offer considerable advantages for both TAN and Nantes Metropole. These initiatives underscore their enduring commitment to fostering green mobility and combatting climate change in the long term.
Keywords: green hydrogen, refueling stations, turbo-expander, heavy-duty vehicles
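The network-level savings figure quoted above follows directly from the per-station figure; a quick back-of-envelope check, using only the numbers stated in the abstract:

```python
# Scale the per-station turbo-expander CAPEX savings (from the abstract)
# to the 7-station network.
stations = 7
savings_per_station_eur = 708_560
network_savings_eur = stations * savings_per_station_eur  # 4,959,920 eur, i.e. "nearly 5 million euros"
```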
Procedia PDF Downloads 56
2746 Second Order Cone Optimization Approach to Two-stage Network DEA
Authors: K. Asanimoghadam, M. Salahi, A. Jamalian
Abstract:
Data envelopment analysis (DEA) is an approach to measure the efficiency of decision-making units with multiple inputs and outputs. Many decision-making units also contain decision-making subunits that are not considered in most data envelopment analysis models. Moreover, the inputs and outputs of decision-making units are usually considered desirable, while in some real-world problems some inputs or outputs are undesirable by nature. In this work, we study the evaluation of the efficiency of two-stage decision-making units where some outputs are undesirable, using two non-radial models, the SBM and ASBM models. We formulate the nonlinear ASBM model as a second-order cone optimization problem. Finally, we compare the two models for both external and internal evaluation approaches on two real-world examples in the presence of undesirable outputs. The results show that, in both external and internal evaluations, the overall efficiency of the ASBM model is greater than or equal to that of the SBM model, and in internal evaluation, the ASBM model is more flexible than the SBM model.
Keywords: network DEA, conic optimization, undesirable output, SBM
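The reformulation step above rests on expressing the ASBM model's nonlinear ratio terms as second-order cone constraints. As an illustration only, a generic second-order cone program has the standard template below (this is not the paper's exact formulation):

```latex
% Generic SOCP template; the nonlinear ASBM terms can be rewritten as
% constraints of this form via standard transformations.
\begin{aligned}
\min_{x}\quad & f^{\top} x \\
\text{s.t.}\quad & \lVert A_i x + b_i \rVert_2 \le c_i^{\top} x + d_i, \qquad i = 1,\dots,m .
\end{aligned}
```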
Procedia PDF Downloads 194
2745 Participation in the Decision Making and Job Satisfaction in Greek Fish Farms
Authors: S. Anastasiou, C. Nathanailides
Abstract:
There is considerable evidence to suggest that employee participation in the decision-making process of an organisation has a positive effect on the job satisfaction and work performance of the employees. The purpose of the present work was to examine the HRM practices, demographics, and the level of job satisfaction of employees in Greek aquaculture fish farms. A survey of employees (n=86) in 6 Greek aquaculture firms was carried out. The results indicate that HRM practices such as personnel recruitment and communication between departments did not vary between firms. The most frequent method of recruitment was through the professional or personal network of the managers. The preferred method of HRM communication was through the line managers and through group meetings. The level of job satisfaction increased with work experience and with participation in the decision-making process. A high percentage of the employees (81.3%±8.39) felt that they frequently participated in the decision-making process. The aquaculture employees exhibited a high level of job satisfaction (88.1±6.95). The level of job satisfaction was related to participation in the decision-making process (-0.633, P<0.05) but was not related to age or gender. In terms of working conditions, employees were mostly satisfied with the work itself and their colleagues, and mostly dissatisfied with working hours, salary issues and low prospects of pay rises.
Keywords: aquaculture, human resources, job satisfaction
Procedia PDF Downloads 467
2744 Cerebrovascular Modeling: A Vessel Network Approach for Fluid Distribution
Authors: Karla E. Sanchez-Cazares, Kim H. Parker, Jennifer H. Tweedy
Abstract:
The purpose of this work is to develop a simple compartmental model of cerebral fluid balance including blood and cerebrospinal fluid (CSF). At the first level, the cerebral arteries and veins are modelled as bifurcating trees with constant scaling factors between generations, connected through a homogeneous microcirculation. The arteries and veins are assumed to be non-rigid, and the cross-sectional area, resistance and mean pressure in each generation are determined as a function of blood volume flow rate. From the mean pressure and further assumptions about the variation of wall permeability, the transmural fluid flux can be calculated. The results suggest the next level of modelling, where the cerebral vasculature is divided into four compartments: the large arteries, the small arteries, the capillaries and the veins, with effective compliances and permeabilities derived from the detailed vascular model. These vascular compartments are then linked to other compartments describing the different CSF spaces, the cerebral ventricles and the subarachnoid space. This compartmental model is used to calculate the distribution of fluid in the cranium. Known volumes and flows for normal conditions are used to determine reasonable parameters for the model, which can then be used to help understand pathological behaviour and suggest clinical interventions.
Keywords: cerebrovascular, compartmental model, CSF model, vascular network
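Under the stated assumption of symmetric bifurcating trees with a constant scaling factor between generations, the lumped flow resistance of the tree reduces to a geometric series: generation i holds 2^i parallel vessels, each with resistance scaled by s^i. A minimal sketch with illustrative (not fitted) parameters:

```python
# Total resistance of a symmetric bifurcating vessel tree: generation i has
# 2**i parallel vessels, each with per-vessel resistance r0 * s**i.
# r0 and s below are illustrative values, not the paper's parameters.
def tree_resistance(r0, s, generations):
    """R_total = sum_i (r0 * s**i) / 2**i  -- a geometric series in s/2."""
    return sum(r0 * s**i / 2**i for i in range(generations))

R = tree_resistance(r0=1.0, s=1.2, generations=10)
```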
Procedia PDF Downloads 275
2743 Modeling of Power Network by ATP-Draw for Lightning Stroke Studies
Authors: John Morales, Armando Guzman
Abstract:
Protection relay algorithms play a crucial role in electric power system stability, where lightning strokes clearly produce the major percentage of faults and outages of transmission lines (TLs) and distribution feeders (DFs). In this context, it is imperative to develop novel protection relay algorithms. However, to achieve this aim, the electric power system (EPS) network has to be simulated as realistically as possible, especially the lightning phenomena and the EPS elements that affect their behaviour, such as direct and indirect lightning, insulator strings, overhead lines, soil ionization, and others. Nevertheless, researchers have proposed new protection relay algorithms considering common faults, which are not produced by lightning strokes, omitting these imperative phenomena for the behaviour of transmission line protection relays. Based on the above, this paper presents the possibilities of using the Alternative Transients Program ATP-Draw for the modeling and simulation of some models for lightning stroke studies, especially for protection relays, which are developed through the Transient Analysis of Control Systems (TACS) and the MODELS language of ATP-Draw.
Keywords: back-flashover, faults, flashover, lightning stroke, modeling of lightning, outages, protection relays
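A common way to represent the lightning stroke source itself (whether in ATP-Draw via TACS/MODELS or elsewhere) is a double-exponential current waveform. The sketch below uses illustrative constants, not a specific standard waveshape:

```python
import math

def surge_current(t, i0=30e3, alpha=1.4e4, beta=6.0e6):
    """Double-exponential stroke waveform i(t) = I0*(exp(-alpha*t) - exp(-beta*t)).
    i0, alpha, beta are illustrative, not tuned to a standard 1.2/50 us shape."""
    return i0 * (math.exp(-alpha * t) - math.exp(-beta * t))

# Sample the first 200 us to locate the peak of the surge.
peak = max(surge_current(k * 1e-7) for k in range(2000))
```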
Procedia PDF Downloads 316
2742 AS-Geo: Arbitrary-Sized Image Geolocalization with Learnable Geometric Enhancement Resizer
Authors: Huayuan Lu, Chunfang Yang, Ma Zhu, Baojun Qi, Yaqiong Qiao, Jiangqian Xu
Abstract:
Image geolocalization has great application prospects in fields such as autonomous driving and virtual/augmented reality. In practical application scenarios, the size of the image to be located is not fixed, and it is impractical to train different networks for all possible sizes. When the image size does not match the input size of the descriptor extraction model, existing image geolocalization methods usually scale or crop the image directly in some common way. This results in the loss of information important to the geolocalization task, thus degrading the performance of the method. For example, excessive down-sampling can blur building contours, and inappropriate cropping can discard key semantic elements, leading to incorrect geolocation results. To address this problem, this paper designs a learnable image resizer and proposes an arbitrary-sized image geolocalization method. (1) The designed learnable image resizer employs the self-attention mechanism to enhance the geometric features of the resized image. First, it applies bilinear interpolation to the input image and its feature maps to obtain the initial resized image and the resized feature maps. Then, SKNet (selective kernel network) is used to approximate the best receptive field, keeping the geometric shapes consistent with the original image, and SENet (squeeze-and-excitation network) is used to automatically select the feature maps with strong contour information, enhancing the geometric features. Finally, the enhanced geometric features are fused with the initial resized image to obtain the final resized image. (2) The proposed image geolocalization method embeds the above image resizer as a front layer of the descriptor extraction network. It not only enables the network to accept arbitrary-sized input images but also enhances the geometric features that are crucial to the image geolocalization task. Moreover, a triplet attention mechanism is added after the first convolutional layer of the backbone network to optimize the utilization of the geometric elements extracted by that layer. Finally, the local features extracted by the backbone network are aggregated to form image descriptors for image geolocalization. The proposed method was evaluated on several mainstream datasets, such as Pittsburgh30K, Tokyo24/7, and Places365. The results show that the proposed method has excellent size compatibility and compares favorably with recent mainstream geolocalization methods.
Keywords: image geolocalization, self-attention mechanism, image resizer, geometric feature
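The resizer's first step, bilinear interpolation, can be sketched in plain Python on a 2D array (the paper's version additionally operates on feature maps and is followed by the learned SKNet/SENet stages):

```python
def bilinear_resize(img, out_h, out_w):
    """Resize a 2D list-of-lists image with bilinear interpolation."""
    in_h, in_w = len(img), len(img[0])
    out = []
    for i in range(out_h):
        # Map output row i back to a fractional source row y.
        y = i * (in_h - 1) / (out_h - 1) if out_h > 1 else 0.0
        y0 = int(y); y1 = min(y0 + 1, in_h - 1); fy = y - y0
        row = []
        for j in range(out_w):
            x = j * (in_w - 1) / (out_w - 1) if out_w > 1 else 0.0
            x0 = int(x); x1 = min(x0 + 1, in_w - 1); fx = x - x0
            # Blend the four neighbouring pixels.
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out
```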
Procedia PDF Downloads 214
2741 Proposing an Algorithm to Cluster Ad Hoc Networks, Modulating Two Levels of Learning Automaton and Nodes Additive Weighting
Authors: Mohammad Rostami, Mohammad Reza Forghani, Elahe Neshat, Fatemeh Yaghoobi
Abstract:
An ad hoc network consists of wireless mobile nodes which connect to each other without any fixed infrastructure, using their connection equipment. The best way to form a hierarchical structure is clustering. Various clustering methods can form more stable clusters according to nodes' mobility. In this research, we propose an algorithm which allocates a weight to nodes based on factors such as link stability and power reduction rate. According to the weight allocated in the first phase, the cellular learning automaton picks out, in the second phase, nodes which are candidates for being cluster heads. In the third phase, the learning automaton selects cluster head nodes and member nodes and forms the clusters. Thus, the automaton learns from the setting and can form clusters that are optimized in terms of power consumption and link stability. To simulate the proposed algorithm, we used OMNeT++ 4.2.2. Simulation results indicate that the newly formed clusters have a longer lifetime than those of previous algorithms and strongly decrease network overhead by reducing the update rate.
Keywords: mobile ad hoc networks, clustering, learning automaton, cellular automaton, battery power
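The additive weighting phase above can be sketched as follows; the weighting coefficients, the field names, and the rule of electing the highest-weight neighbour as head are illustrative assumptions, not the paper's exact scheme:

```python
# Additive node weighting: reward link stability, penalize battery drain.
# Coefficients w1/w2 are illustrative assumptions.
def node_weight(link_stability, power_drain_rate, w1=0.7, w2=0.3):
    return w1 * link_stability - w2 * power_drain_rate

def elect_cluster_head(neighbourhood):
    """Pick the highest-weight node among a node and its one-hop neighbours."""
    return max(neighbourhood, key=lambda n: node_weight(n["stability"], n["drain"]))

nodes = [
    {"id": "A", "stability": 0.9, "drain": 0.2},
    {"id": "B", "stability": 0.6, "drain": 0.1},
    {"id": "C", "stability": 0.9, "drain": 0.5},
]
head = elect_cluster_head(nodes)
```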
Procedia PDF Downloads 411
2740 Strengthening Farmer-to-farmer Knowledge Sharing Network: A Pathway to Improved Extension Service Delivery
Authors: Farouk Shehu Abdulwahab
Abstract:
The concept of farmer-to-farmer knowledge sharing was introduced to bridge the extension worker-farmer ratio gap in developing countries. However, the idea was poorly accepted, especially in typical agrarian communities. Therefore, the study explores the concept of a farmer-to-farmer knowledge-sharing network to enhance extension service delivery. The study collected data from 80 farmers randomly selected through multiple stages. The data were analysed using a 5-point Likert scale and descriptive statistics. The Likert scale results revealed that 62.5% of the farmers are satisfied with farmer-to-farmer knowledge-sharing networks. Moreover, descriptive statistics show that lack of capacity building and a low level of education are the most significant problems affecting farmer-to-farmer knowledge-sharing networks. The major implication of these findings is that the concept of farmer-to-farmer knowledge-sharing networks can work well for farmers in developing countries, as they perceived it as a reliable alternative for information sharing. Therefore, the study recommends introducing incentives into the concept of farmer-to-farmer knowledge-sharing networks and enhancing the capabilities of farmers who are opinion leaders in farmer-to-farmer knowledge sharing to make it more sustainable.
Keywords: agricultural productivity, extension, farmer-to-farmer, livelihood, technology transfer
Procedia PDF Downloads 64
2739 Intelligent Rainwater Reuse System for Irrigation
Authors: Maria M. S. Pires, Andre F. X. Gloria, Pedro J. A. Sebastiao
Abstract:
Technological advances in the area of the Internet of Things have been creating more and more solutions for agriculture. These solutions are quite important, as they lead to saving the most precious resource, water, a need that is a concern worldwide. The paper proposes an Internet of Things system based on a network of sensors and interconnected actuators that automatically monitors the quality of the rainwater stored in a tank so that it can be used for irrigation. The main objective is to promote sustainability by reusing rainwater for irrigation instead of water that would otherwise be available for other functions, such as other production processes or domestic tasks. A mobile application was developed for Android so that the user can control and monitor the system in real time. In the application, it is possible to visualize the data that describe the quality of the water in the tank, as well as to perform some actions on the implemented actuators, such as starting/stopping the irrigation system and discarding the water in case of poor quality. The implemented system is a simple solution with a high level of efficiency, validated by the tests and results obtained in the available environment.
Keywords: internet of things, irrigation system, wireless sensor and actuator network, ESP32, sustainability, water reuse, water efficiency
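The core actuator decision (irrigate with the stored water, or discard it on poor quality) could be sketched as below; the quality parameters and threshold values are illustrative assumptions, not the system's actual calibration:

```python
# Decision rule for the tank actuators: irrigate only when the monitored
# readings are within bounds, otherwise drain the tank.
# ph_range and max_turbidity are illustrative assumptions.
def decide_action(ph, turbidity_ntu, ph_range=(6.0, 8.5), max_turbidity=5.0):
    if ph_range[0] <= ph <= ph_range[1] and turbidity_ntu <= max_turbidity:
        return "irrigate"
    return "drain"
```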
Procedia PDF Downloads 149
2738 Ambivalence as Ethical Practice: Methodologies to Address Noise, Bias in Care, and Contact Evaluations
Authors: Anthony Townsend, Robyn Fasser
Abstract:
While complete objectivity is a desirable scientific position from which to conduct a care and contact evaluation (CCE), it is precisely the recognition that we are inherently incapable of operating objectively that is the foundation of ethical practice and skilled assessment. Drawing upon recent research from Daniel Kahneman (2021) on the differences between noise and bias, as well as the inherent biases collectively termed “the elephant in the brain” by Kevin Simler and Robin Hanson (2019), this presentation addresses the various ways in which our judgments, perceptions and even procedures can be distorted and contaminated while conducting a CCE, and considers the value of second-order cybernetics and the psychodynamic concept of ‘ambivalence’ as a conceptual basis for assessment methodologies that limit such errors, or at least identify them more readily. Both a conceptual framework for ambivalence, our higher-order capacity to allow the convergence and consideration of multiple emotional experiences and cognitive perceptions to inform our reasoning, and a practical assessment methodology relying on data triangulation, Bayesian inference and hypothesis testing are presented as means of promoting ethical practice for healthcare professionals conducting CCEs. An emphasis on widening awareness and perspective, limiting ‘splitting’, is demonstrated both in how this form of emotional processing plays out in alienating family dynamics and in the assessment thereof. In addressing this concept, this presentation aims to illuminate the value of ambivalence as foundational to ethical practice for assessors.
Keywords: ambivalence, forensic, psychology, noise, bias, ethics
Procedia PDF Downloads 86
2737 Towards End-To-End Disease Prediction from Raw Metagenomic Data
Authors: Maxence Queyrel, Edi Prifti, Alexandre Templier, Jean-Daniel Zucker
Abstract:
Analysis of the human microbiome using metagenomic sequencing data has demonstrated a high ability to discriminate various human diseases. Raw metagenomic sequencing data require multiple complex and computationally heavy bioinformatics steps prior to data analysis. Such data contain millions of short sequence reads from the fragmented DNA sequences, stored as fastq files. Conventional processing pipelines consist of multiple steps, including quality control, filtering, and alignment of sequences against genomic catalogs (genes, species, taxonomic levels, functional pathways, etc.). These pipelines are complex to use and time-consuming, and they rely on a large number of parameters that often introduce variability and impact the estimation of the microbiome elements. Training deep neural networks directly on raw sequencing data is a promising approach to bypass some of the challenges associated with mainstream bioinformatics pipelines. Most of these methods use the concept of word and sentence embeddings, which creates a meaningful numerical representation of DNA sequences while extracting features and reducing the dimensionality of the data. In this paper, we present an end-to-end approach that classifies patients into disease groups directly from raw metagenomic reads: metagenome2vec. This approach is composed of four steps: (i) generating a vocabulary of k-mers and learning their numerical embeddings; (ii) learning DNA sequence (read) embeddings; (iii) identifying the genome from which the sequence is most likely to come; and (iv) training a multiple instance learning classifier which predicts the phenotype based on the vector representation of the raw data. An attention mechanism is applied in the network so that the model can be interpreted, assigning a weight to the influence of each genome on the prediction.
Using two public real-life datasets as well as a simulated one, we demonstrate that this original approach reaches high performance, comparable with state-of-the-art methods applied to data processed through mainstream bioinformatics workflows. These results are encouraging for this proof-of-concept work. We believe that, with further dedication, DNN models have the potential to surpass mainstream bioinformatics workflows in disease classification tasks.
Keywords: deep learning, disease prediction, end-to-end machine learning, metagenomics, multiple instance learning, precision medicine
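Step (i) of the pipeline, building a k-mer vocabulary from raw reads, can be sketched as follows (the function names and the min-count cutoff are hypothetical, for illustration only):

```python
from collections import Counter

def kmer_tokens(read, k=4):
    """Split a DNA read into its overlapping k-mers (the 'words' of step i)."""
    return [read[i:i + k] for i in range(len(read) - k + 1)]

def build_vocabulary(reads, k=4, min_count=1):
    """Map each sufficiently frequent k-mer to an integer id."""
    counts = Counter(t for r in reads for t in kmer_tokens(r, k))
    kept = sorted(t for t, c in counts.items() if c >= min_count)
    return {tok: idx for idx, tok in enumerate(kept)}
```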
Procedia PDF Downloads 125
2736 Resting-State Functional Connectivity Analysis Using an Independent Component Approach
Authors: Eric Jacob Bacon, Chaoyang Jin, Dianning He, Shuaishuai Hu, Lanbo Wang, Han Li, Shouliang Qi
Abstract:
Objective: Refractory epilepsy is a complicated type of epilepsy that can be difficult to diagnose. Recent technological advancements have made resting-state functional magnetic resonance imaging (rsfMRI) a vital technique for studying brain activity. However, there is still much to learn about rsfMRI. Investigating rsfMRI connectivity may aid in the detection of abnormal activities. In this paper, we propose studying the functional connectivity of rsfMRI candidates to diagnose epilepsy. Methods: 45 rsfMRI candidates, comprising 26 with refractory epilepsy and 19 healthy controls, were enrolled in this study. A data-driven approach known as independent component analysis (ICA) was used to achieve our goal. First, rsfMRI data from both patients and healthy controls were analyzed using group ICA. The components obtained were then spatially sorted to find and select meaningful ones. A two-sample t-test was also used to identify abnormal networks in patients relative to healthy controls. Finally, based on the fractional amplitude of low-frequency fluctuations (fALFF), a chi-square test was used to distinguish the network properties of the patient and healthy control groups. Results: The two-sample t-test analysis yielded abnormal clusters in the default mode network, including the left superior temporal lobe and the left supramarginal gyrus. The right precuneus was found to be abnormal in the dorsal attention network. In addition, the frontal cortex showed an abnormal cluster in the medial temporal gyrus. In contrast, the temporal cortex showed abnormal clusters in the right middle temporal gyrus and the right fronto-operculum gyrus. Finally, the chi-square test was significant, producing a p-value of 0.001 for the analysis. Conclusion: This study offers evidence that investigating rsfMRI connectivity provides an excellent diagnostic option for refractory epilepsy.
Keywords: ICA, RSN, refractory epilepsy, rsfMRI
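The group-comparison step can be illustrated with a minimal Welch two-sample t statistic (a self-contained sketch of the test's core formula, not the neuroimaging toolbox the authors used):

```python
import math

def welch_t(sample_a, sample_b):
    """Two-sample t statistic with unequal variances (Welch)."""
    na, nb = len(sample_a), len(sample_b)
    ma, mb = sum(sample_a) / na, sum(sample_b) / nb
    # Unbiased sample variances.
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)
```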
Procedia PDF Downloads 76
2735 Advanced Simulation and Enhancement for Distributed and Energy Efficient Scheduling for IEEE802.11s Wireless Enhanced Distributed Channel Access Networks
Authors: Fisayo G. Ojo, Shamala K. Subramaniam, Zuriati Ahmad Zukarnain
Abstract:
As technology advances, wireless applications are becoming ever more depended upon, while their physical layer is embedded into ever smaller devices, aggravating the problems of energy efficiency and consumption. This paper reviews work done in recent years in wireless applications and distributed computing; we observe that applications are increasingly depended upon and share resources with other applications in distributed computing. Applications embedded in distributed systems suffer from problems of power stability and efficiency. In the review, we also show that discrete event simulation has been left largely untouched and has not been adapted into distributed systems as a simulation technique for scheduling each event that takes place in the development of distributed computing applications. We shed more light on some techniques and results proposed by researchers, to highlight their unsatisfactory results and to show that more work still has to be done on energy efficiency in wireless applications and on congestion in distributed computing.
Keywords: discrete event simulation (DES), distributed computing, energy efficiency (EE), internet of things (IoT), quality of service (QoS), user equipment (UE), wireless mesh network (WMN), wireless sensor network (WSN), worldwide interoperability for microwave access (WiMAX)
Procedia PDF Downloads 192
2734 Multi-Objective Electric Vehicle Charge Coordination for Economic Network Management under Uncertainty
Authors: Ridoy Das, Myriam Neaimeh, Yue Wang, Ghanim Putrus
Abstract:
Electric vehicles are a popular transportation medium renowned for their potential environmental benefits. However, large and uncontrolled charging volumes can negatively impact distribution networks. Smart charging is widely recognized as an efficient solution to achieve both improved renewable energy integration and grid relief. Nevertheless, different decision-makers may pursue diverse and conflicting objectives. In this context, this paper proposes a multi-objective optimization framework to control electric vehicle charging to achieve both energy cost reduction and peak shaving. A weighted-sum method is developed due to its intuitiveness and efficiency. Monte Carlo simulations are implemented to investigate the impact of uncertain electric vehicle driving patterns and to provide decision-makers with a robust outcome in terms of prospective cost and network loading. The results demonstrate that there is a conflict between energy cost efficiency and peak shaving, so the decision-makers need to reach a collaborative decision.
Keywords: electric vehicles, multi-objective optimization, uncertainty, mixed integer linear programming
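The weighted-sum method above scalarizes the two objectives into one score. A minimal sketch over candidate charging schedules (the schedule names and objective values are illustrative, and both objectives are assumed pre-normalized to [0, 1]):

```python
# Weighted-sum scalarization: trade off normalized energy cost against
# normalized peak load with a single weight w in [0, 1].
def weighted_sum(cost, peak, w):
    return w * cost + (1 - w) * peak

def best_schedule(candidates, w):
    """candidates: list of (name, normalized_cost, normalized_peak)."""
    return min(candidates, key=lambda c: weighted_sum(c[1], c[2], w))

schedules = [("cheap", 0.2, 0.9), ("flat", 0.8, 0.1), ("mixed", 0.5, 0.35)]
```

Sweeping w from 0 to 1 traces out the compromise between the two decision-makers' preferences.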
Procedia PDF Downloads 179
2733 Manufacturing Anomaly Detection Using a Combination of Gated Recurrent Unit Network and Random Forest Algorithm
Authors: Atinkut Atinafu Yilma, Eyob Messele Sefene
Abstract:
Anomaly detection is one of the essential mechanisms to control and reduce production loss, especially in today's smart manufacturing. Quick anomaly detection aids in reducing the cost of production by minimizing the possibility of producing defective products. However, developing an anomaly detection model that can rapidly detect a production change is challenging. This paper proposes a Gated Recurrent Unit (GRU) network combined with a Random Forest (RF) to quickly detect anomalies in the production process in real time. The GRU is used as a feature detector, and the RF as a classifier using the input features from the GRU. The model was tested on various synthetic and real-world datasets against benchmark methods. The results show that the proposed GRU-RF outperforms the benchmark methods with the shortest time taken to detect anomalies in the production process. Based on the findings of the study, this proposed model can eliminate or reduce unnecessary production costs and bring a competitive advantage to manufacturing industries.
Keywords: anomaly detection, multivariate time series data, smart manufacturing, gated recurrent unit network, random forest
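The GRU's feature-detector role can be sketched with a single-unit (scalar) GRU forward pass in plain Python. To keep the sketch self-contained, the Random Forest classifier is replaced here by a fixed threshold, and the gate weights are illustrative rather than trained:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class ScalarGRU:
    """Single-unit GRU forward pass (scalar input and hidden state)."""
    def __init__(self, wz=1.0, uz=0.0, wr=1.0, ur=0.0, wh=1.0, uh=1.0):
        self.wz, self.uz = wz, uz   # update-gate weights
        self.wr, self.ur = wr, ur   # reset-gate weights
        self.wh, self.uh = wh, uh   # candidate-state weights

    def features(self, series):
        h, feats = 0.0, []
        for x in series:
            z = sigmoid(self.wz * x + self.uz * h)           # update gate
            r = sigmoid(self.wr * x + self.ur * h)           # reset gate
            h_cand = math.tanh(self.wh * x + self.uh * r * h)
            h = (1.0 - z) * h + z * h_cand                   # new hidden state
            feats.append(h)
        return feats

def flag_anomalies(feats, threshold=0.5):
    """Stand-in for the RF classifier: a fixed threshold on the feature."""
    return [abs(f) > threshold for f in feats]
```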
Procedia PDF Downloads 118
2732 Other-Generated Disclosure: A Challenge to Privacy on Social Network Sites
Authors: Tharntip Tawnie Chutikulrungsee, Oliver Kisalay Burmeister, Maumita Bhattacharya, Dragana Calic
Abstract:
Sharing on social network sites (SNSs) has rapidly emerged as a new social norm and has become a global phenomenon. Billions of users reveal not only their own information (self-disclosure) but also information about others (other-generated disclosure), resulting in risks and serious threats to both personal and informational privacy. Self-disclosure (SD) has been extensively researched in the literature, particularly regarding individual control and existing privacy management. However, far too little attention has been paid to other-generated disclosure (OGD), especially by insiders. OGD has a strong influence on self-presentation, self-image, and electronic word of mouth (eWOM). Moreover, OGD is more credible and less likely to be manipulated than SD, but to some extent it lacks privacy control and legal protection. This article examines OGD in depth, ranging from motivation to both online and offline impacts, based upon the lived experiences of both ‘the disclosed’ and ‘the discloser’. Using purposive sampling, this phenomenological study involves an online survey and in-depth interviews. The findings report the influence of peer disclosure as well as users’ strategies to mitigate privacy issues. This article also calls attention to the challenge of OGD privacy and the inadequacies of the law related to privacy protection in the digital domain.
Keywords: facebook, online privacy, other-generated disclosure, social networks sites (SNSs)
Procedia PDF Downloads 251
2731 Relay Node Placement for Connectivity Restoration in Wireless Sensor Networks Using Genetic Algorithms
Authors: Hanieh Tarbiat Khosrowshahi, Mojtaba Shakeri
Abstract:
Wireless Sensor Networks (WSNs) consist of a set of sensor nodes with limited capabilities. WSNs may suffer from multiple node failures when exposed to harsh environments such as military zones or disaster locations, losing connectivity by getting partitioned into disjoint segments. Relay nodes (RNs) are then introduced to restore connectivity. They cost more than sensors, as they benefit from mobility, more power and a longer transmission range, which makes it desirable to use as few of them as possible. This paper addresses the problem of RN placement in a network with multiple disjoint segments by developing a genetic algorithm (GA). The problem is recast as the Steiner tree problem (which is known to be NP-hard), with the aim of finding the minimum number of Steiner points where RNs are to be placed to restore connectivity. An upper bound on the number of RNs is first computed to set the length of the initial chromosomes. The GA then iteratively reduces the number of RNs and determines their locations at the same time. Experimental results indicate that the proposed GA is capable of establishing network connectivity using a reasonable number of RNs compared to the best existing work.
Keywords: connectivity restoration, genetic algorithms, multiple-node failure, relay nodes, wireless sensor networks
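The GA's fitness evaluation hinges on a connectivity check over the segment representatives plus candidate relay positions. A minimal sketch, where the penalty/reward scheme (penalize disconnection, reward fewer relays) is an assumption for illustration:

```python
import math

def connected(points, rng):
    """True if the graph on points (edge when within range rng) is connected."""
    if not points:
        return True
    seen, stack = {0}, [0]
    while stack:                       # depth-first search from node 0
        i = stack.pop()
        for j in range(len(points)):
            if j not in seen and math.dist(points[i], points[j]) <= rng:
                seen.add(j)
                stack.append(j)
    return len(seen) == len(points)

def fitness(segments, relay_nodes, rng):
    """GA fitness sketch: disconnected layouts are penalized; connected
    layouts score higher the fewer relay nodes they use."""
    if not connected(segments + relay_nodes, rng):
        return -1.0
    return 1.0 / (1 + len(relay_nodes))

segments = [(0.0, 0.0), (10.0, 0.0)]   # two disjoint partition representatives
```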
Procedia PDF Downloads 241
2730 End-to-End Pyramid Based Method for Magnetic Resonance Imaging Reconstruction
Authors: Omer Cahana, Ofer Levi, Maya Herman
Abstract:
Magnetic Resonance Imaging (MRI) is a lengthy medical scan owing to its long acquisition time. Its length is mainly due to the traditional sampling theorem, which defines a lower bound on sampling. However, it is still possible to accelerate the scan by using a different approach such as Compressed Sensing (CS) or Parallel Imaging (PI). These two complementary methods can be combined to achieve a faster scan with high-fidelity imaging. To achieve that, two conditions must be satisfied: i) the signal must be sparse under a known transform domain, and ii) the sampling method must be incoherent. In addition, a nonlinear reconstruction algorithm must be applied to recover the signal. While rapid advances in Deep Learning (DL) have produced tremendous successes in various computer vision tasks, the field of MRI reconstruction is still in its early stages. In this paper, we present an end-to-end method for MRI reconstruction from k-space to image. Our method contains two parts. The first is sensitivity map estimation (SME), a small yet effective network that can easily be extended to a variable number of coils. The second is reconstruction, a top-down architecture with lateral connections developed for building high-level refinement at all scales. Our method achieves state-of-the-art results on the fastMRI benchmark, the largest and most diverse benchmark for MRI reconstruction. Keywords: magnetic resonance imaging, image reconstruction, pyramid network, deep learning
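The abstract describes a top-down architecture with lateral connections but gives no layer-level details; the sketch below is a minimal NumPy illustration of the generic mechanism (nearest-neighbour upsampling of the coarser map added to the lateral feature, as in feature-pyramid-style decoders), not the paper's actual network:

```python
import numpy as np

def top_down_merge(features):
    """features: list of 2-D feature maps from fine to coarse, each half
    the resolution of the previous one.  Each coarse map is upsampled
    (nearest-neighbour, factor 2) and added to the lateral connection
    from the finer scale, refining representations at every scale."""
    refined = [features[-1]]          # start from the coarsest map
    for lateral in reversed(features[:-1]):
        up = refined[-1].repeat(2, axis=0).repeat(2, axis=1)
        refined.append(up + lateral)  # top-down signal + lateral feature
    return refined[::-1]              # return in fine-to-coarse order
```

In a real reconstruction network the addition would be preceded by learned 1x1 convolutions; the pure-NumPy version only shows the data flow.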
Procedia PDF Downloads 91
2729 Non-Linear Regression Modeling for Composite Distributions
Authors: Mostafa Aminzadeh, Min Deng
Abstract:
Modeling loss data is an important part of actuarial science. Actuaries use models to predict future losses and manage financial risk, which can also be beneficial for marketing purposes. In the insurance industry, small claims happen frequently while large claims are rare. Traditional distributions such as the Normal, Exponential, and Inverse Gaussian are not suitable for describing insurance data, which often show skewness and fat tails. Several authors have studied classical and Bayesian inference for the parameters of composite distributions, such as Exponential-Pareto, Weibull-Pareto, and Inverse Gamma-Pareto. These models separate small-to-moderate losses from large losses using a threshold parameter. This research introduces a computational approach using a nonlinear regression model for loss data that relies on multiple predictors. Simulation studies were conducted to assess the accuracy of the proposed estimation method and confirmed that it provides precise estimates of the regression parameters. It is important to note that this approach can be applied to a dataset if goodness-of-fit tests confirm that the composite distribution under study fits the data well. To demonstrate the computations, a real data set from the insurance industry is analyzed. Mathematica code uses the Fisher scoring method as an iterative procedure to obtain the maximum likelihood estimates (MLEs) of the regression parameters. Keywords: maximum likelihood estimation, Fisher scoring method, non-linear regression models, composite distributions
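The Fisher scoring iteration the abstract names can be sketched on a deliberately simplified severity regression — an exponential model with log link, not the composite Exponential-Pareto model of the paper — just to show the update beta <- beta + I(beta)^(-1) U(beta):

```python
import numpy as np

def fisher_scoring_exponential(X, y, n_iter=25):
    """Fisher scoring for an exponential-severity regression with log
    link, E[y_i] = exp(x_i . beta).  Score U = X^T (y/mu - 1); the
    expected information is I = X^T X, since the per-observation
    information equals 1 under the log link for the exponential model."""
    beta = np.zeros(X.shape[1])
    info = X.T @ X                      # constant for this model
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        score = X.T @ (y / mu - 1.0)
        beta = beta + np.linalg.solve(info, score)
    return beta
```

For the composite models of the paper, the same loop applies with the composite log-likelihood's score and information in place of these closed forms.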
Procedia PDF Downloads 34
2728 Performance Based Road Asset Evaluation
Authors: Kidus Dawit Gedamu
Abstract:
The Addis Ababa City Roads Authority is responsible for managing the city’s road network and evaluating its performance using the International Roughness Index (IRI). This allows the authority to conduct pavement condition assessments of asphalt roads each year to determine the health status, or Level of Service (LOS), of the roadway network and to plan programmed improvements such as maintenance, resurfacing, and rehabilitation. For a given IRI limit, an economical and acceptable maintenance strategy may be selected from a number of maintenance alternatives. The Highway Development and Management (HDM-4) tool supports such decisions by evaluating the economic and structural conditions of each option. This paper specifically addresses flexible pavement on two principal arterial streets under the administration of the Addis Ababa City Roads Authority: the road from Megenagna Interchange to Ayat Square and from Ayat Square to Tafo RA. First, the procedures followed by the city’s road authority to develop appropriate road maintenance strategies were assessed, using questionnaire surveys and interviews to collect information from the city’s road maintenance departments. Second, a project analysis was performed for the functional and economic comparison of different maintenance alternatives using HDM-4. Keywords: appropriate maintenance strategy, cost stream, road deterioration, maintenance alternative
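The IRI-to-strategy step described above amounts to a banded lookup; the thresholds below are illustrative assumptions for demonstration only — the authority's actual LOS bands are not given in the abstract:

```python
def maintenance_action(iri):
    """Illustrative mapping from IRI (m/km, lower = smoother) to a
    maintenance alternative.  Band limits are assumed values, chosen
    only to show the selection logic, not the authority's policy."""
    if iri < 2.0:
        return "routine maintenance"
    if iri < 4.0:
        return "resurfacing"
    return "rehabilitation"
```

In practice HDM-4 would then compare the cost streams of the candidate actions rather than applying a fixed band.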
Procedia PDF Downloads 61
2727 Strengthening by Assessment: A Case Study of Rail Bridges
Authors: Evangelos G. Ilias, Panagiotis G. Ilias, Vasileios T. Popotas
Abstract:
The United Kingdom has one of the oldest railway networks in the world, dating back to 1825, when the world’s first passenger railway was opened. The network has some 40,000 bridges of various construction types, using a wide range of materials including masonry, steel, cast iron, wrought iron, concrete and timber. It is commonly accepted that the successful operation of the network is vital to the economy of the United Kingdom; consequently, the cost-effective maintenance of the existing infrastructure is a high priority, to maintain the operability of the network, prevent deterioration and extend the life of the assets. Every bridge on the railway network is required to be assessed every eighteen years, and a structured approach to assessments is adopted, with three main types of progressively more detailed assessment used: Level 0 (standardized spreadsheet assessment tools), Level 1 (analytical hand calculations) and Level 2 (generally finite element analyses). There is a degree of conservatism in the first two types of assessment, dictated to some extent by the relevant standards, which can lead to some structures not achieving the required load rating. In these situations, a Level 2 Assessment is often carried out using finite element analysis to uncover ‘latent strength’ and improve the load rating. If successful, the more sophisticated analysis can save on costly strengthening or replacement works and avoid disruption to the operational railway. This paper presents the ‘strengthening by assessment’ achieved by Level 2 analyses. The use of more accurate analysis assumptions and the implementation of non-linear modelling and functions (material, geometric and support) to better understand buckling modes and the structural behaviour of historic construction details not specifically covered by assessment codes are outlined.
Metallic bridges, which are susceptible to loss of section through corrosion, have the largest scope for improvement under the Level 2 Assessment methodology. Three case studies are presented, demonstrating the effectiveness of the sophisticated Level 2 methodology using finite element analysis against the conservative approaches employed for Level 0 and Level 1 Assessments. One rail overbridge and two rail underbridges that did not achieve the required load rating in a Level 1 Assessment, owing to the inadequate restraint provided by U-frame action, are examined, and the increase in assessed capacity given by the Level 2 Assessment is outlined. Keywords: assessment, bridges, buckling, finite element analysis, non-linear modelling, strengthening
Procedia PDF Downloads 309
2726 Using Hidden Markov Chain for Improving the Dependability of Safety-Critical Wireless Sensor Networks
Authors: Issam Alnader, Aboubaker Lasebae, Rand Raheem
Abstract:
Wireless sensor networks (WSNs) are distributed network systems used in a wide range of applications, including safety-critical systems. The latter provide critical services, often concerned with human life or assets. Therefore, ensuring the dependability requirements of safety-critical systems is of paramount importance. The purpose of this paper is to utilize the Hidden Markov Model (HMM) to extend the service availability of WSNs by increasing the time it takes a node to become obsolete, via optimal load balancing. We propose an HMM algorithm that, given a WSN, analyses and predicts undesirable situations, notably nodes dying unexpectedly or prematurely. We apply this technique to improve on C. Liu’s algorithm, a scheduling-based algorithm that has served to improve the lifetime of WSNs. Our experiments show that our HMM technique improves the lifetime of the network by detecting nodes that die early and rebalancing their load. Our technique can also be used for diagnosis and to provide maintenance warnings to WSN system administrators. Finally, it can be used to improve algorithms other than C. Liu’s. Keywords: wireless sensor networks, IoT, dependability of safety WSNs, energy conservation, sleep awake schedule
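The abstract does not specify the HMM formulation; one plausible sketch (state labels, transition and emission matrices below are all assumed for illustration) filters a node's quantised load readings into a posterior over a hidden health state, which is the kind of signal that could trigger rebalancing:

```python
import numpy as np

def forward_filter(obs, pi, A, B):
    """Discrete-HMM forward filtering: returns P(hidden state | obs so
    far) after the last observation.  Hidden states model node health
    (assumed labels: 0 = healthy, 1 = failing); obs are quantised load
    readings.  A is the transition matrix, B the emission matrix."""
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()                 # normalise to a distribution
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]    # predict, then update
        alpha /= alpha.sum()
    return alpha
```

A run of high-load observations steadily shifts the posterior toward the "failing" state, which is when a scheduler would offload the node.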
Procedia PDF Downloads 100
2725 DTI Connectome Changes in the Acute Phase of Aneurysmal Subarachnoid Hemorrhage Improve Outcome Classification
Authors: Sarah E. Nelson, Casey Weiner, Alexander Sigmon, Jun Hua, Haris I. Sair, Jose I. Suarez, Robert D. Stevens
Abstract:
Graph-theoretical information from structural connectomes indicated significant connectivity changes and improved acute prognostication in a Random Forest (RF) model of aneurysmal subarachnoid hemorrhage (aSAH), a condition that can lead to significant morbidity and mortality and whose outcome has traditionally been difficult to predict. This study’s hypothesis was that structural connectivity changes occur in canonical brain networks of acute aSAH patients and that these changes are associated with functional outcome at six months. In a prospective cohort of patients admitted to a single institution for the management of acute aSAH, patients underwent diffusion tensor imaging (DTI) as part of a multimodal MRI scan. A weighted, undirected structural connectome was created from each patient’s images using Constant Solid Angle (CSA) tractography, with 176 regions of interest (ROIs) defined by the Johns Hopkins Eve atlas. ROIs were sorted into four networks: Default Mode Network, Executive Control Network, Salience Network, and Whole Brain. The resulting nodes and edges were characterized using graph-theoretic features, including Node Strength (NS), Betweenness Centrality (BC), Network Degree (ND), and Connectedness (C). Clinical features (including demographics and the World Federation of Neurosurgical Societies scale) and graph features were used separately and in combination to train RF and Logistic Regression classifiers to predict two outcomes: dichotomized modified Rankin Score (mRS) at discharge and at six months after discharge (favorable outcome mRS 0-2, unfavorable outcome mRS 3-6). A total of 56 aSAH patients underwent DTI a median of 7 days (IQR = 8.5) after admission. The best performing model (RF), combining clinical and DTI graph features, had a mean Area Under the Receiver Operating Characteristic Curve (AUROC) of 0.88 ± 0.00 and Area Under the Precision-Recall Curve (AUPRC) of 0.95 ± 0.00 over 500 trials.
The combined model performed better than the clinical model alone (AUROC 0.81 ± 0.01, AUPRC 0.91 ± 0.00). The highest-ranked graph features for prediction were NS, BC, and ND. These results indicate reorganization of the connectome early after aSAH. The performance of clinical prognostic models was increased significantly by the inclusion of DTI-derived graph connectivity metrics. This methodology could significantly improve prognostication of aSAH. Keywords: connectomics, diffusion tensor imaging, graph theory, machine learning, subarachnoid hemorrhage
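As a minimal illustration of graph features of the kind named above — node strength, degree, and a simplified connectedness measure, not necessarily the authors' exact definitions — computed from a weighted adjacency matrix:

```python
import numpy as np

def connectome_features(W):
    """Graph features from a weighted undirected connectome given as an
    adjacency matrix W (n_rois x n_rois, zero diagonal).  Node strength
    is the sum of incident edge weights; degree counts non-zero edges
    per node; 'connectedness' here is simplified to edge density (the
    fraction of possible edges present)."""
    strength = W.sum(axis=1)
    degree = (W > 0).sum(axis=1)
    n = W.shape[0]
    density = (W > 0).sum() / (n * (n - 1))
    return strength, degree, density
```

Per-node features like these, flattened across the 176 ROIs or summarised per network, are what a Random Forest classifier would consume alongside the clinical variables.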
Procedia PDF Downloads 189
2724 An Evaluation of Impact of Video Billboard on the Marketing of GSM Services in Lagos Metropolis
Authors: Shola Haruna Adeosun, F. Adebiyi Ajoke, Odedeji Adeoye
Abstract:
Video billboard advertising by networks and brand switching was conceived out of curiosity about the huge billboard advertising expenditures made by the three major GSM network operators in Nigeria. The study was anchored in the Lagos State metropolis, with a current census population of over 1,000,000. From this population, a purposive sample of 400 was adopted, and the questionnaire designed for the survey was carefully allocated to members of this sample in the five geographical zones of the city so that each rung of society was well represented. The data obtained were analyzed using tables and simple percentages. The results showed that subscribers of these networks were hardly influenced by the video billboard advertisements. They overwhelmingly indicated that, rather than the slogans of the GSM networks carried on the video billboards, it was the incentives to subscribers as well as the promotional strategies of these organizations that moved them to switch from one network to another, and the switching lasted only as long as the incentives and promotions were in effect. The results of the study also seemed to rekindle the age-old debate on media effects among the unyielding schools of the ‘all-powerful media’, ‘limited effects’, ‘controlled effects’, and ‘negotiated media influence’ theories. Keywords: evaluation, impact, video billboard, marketing, services
Procedia PDF Downloads 253
2723 Investigation of Oscillation Mechanism of a Large-scale Solar Photovoltaic and Wind Hybrid Power Plant
Authors: Ting Kai Chia, Ruifeng Yan, Feifei Bai, Tapan Saha
Abstract:
This research presents a real-world power system oscillation incident that occurred in 2022, originating from a hybrid solar photovoltaic (PV) and wind renewable energy farm with a rated capacity of approximately 300 MW in Australia. The voltage and reactive power outputs recorded at the point of common coupling (PCC) oscillated in the sub-synchronous frequency region, and the oscillation was sustained for approximately five hours in the network. The reactive power oscillation gradually increased over time and reached a recorded maximum of approximately 250 MVar peak-to-peak (from inductive to capacitive). The network service provider was not able to quickly identify the location of the oscillation source because the issue was widespread across the network. After the incident, the original equipment manufacturer (OEM) concluded that the oscillation was caused by incorrect setting recovery of the hybrid power plant controller (HPPC) in the voltage and reactive power control loop after a loss-of-communication event. The voltage controller normally outputs a reactive power (Q) reference value to the Q controller, which controls the Q dispatch setpoint of the PV and wind plants in the hybrid farm. Meanwhile, a feed-forward (FF) configuration is used to bypass the Q controller in case of a loss of communication. Further study found that the FF control mode was still engaged when communication was re-established, which ultimately resulted in the oscillation event. However, there was no detailed explanation of why the FF control mode can cause instability in the hybrid farm, and the event was not duplicated in simulation to analyze the root cause of the oscillation. Therefore, this research aims to model and replicate the oscillation event in a simulation environment and investigate the underlying behavior of the HPPC and the consequent oscillation mechanism during the incident.
The outcome of this research will provide significant benefits to the safe operation of large-scale renewable energy generators and power networks. Keywords: PV, oscillation, modelling, wind
Procedia PDF Downloads 37
2722 F-VarNet: Fast Variational Network for MRI Reconstruction
Authors: Omer Cahana, Maya Herman, Ofer Levi
Abstract:
Magnetic resonance imaging (MRI) is a long medical scan owing to its long acquisition time. This length is mainly due to the traditional sampling theorem, which defines a lower bound on sampling. However, it is still possible to accelerate the scan by using a different approach, such as compressed sensing (CS) or parallel imaging (PI). These two complementary methods can be combined to achieve a faster scan with high-fidelity imaging. In order to achieve that, two properties must hold: i) the signal must be sparse under a known transform domain, and ii) the sampling method must be incoherent. In addition, a nonlinear reconstruction algorithm needs to be applied to recover the signal. Despite the rapid advance of deep learning (DL), which has demonstrated tremendous successes in various computer vision tasks, the field of MRI reconstruction is still at an early stage. In this paper, we present an extension of the state-of-the-art model in MRI reconstruction, VarNet. We extend VarNet by using dilated convolutions at different scales, which enlarges the receptive field to capture more contextual information. Moreover, we simplified the sensitivity map estimation (SME), as it held many layers unnecessary for this task. These improvements yield a significant decrease in computation cost as well as higher accuracy. Keywords: MRI, deep learning, variational network, computer vision, compressed sensing
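The benefit of dilation can be made concrete with the standard receptive-field recurrence for stride-1 convolutions — a generic calculation, not code from the paper: each layer with kernel size k and dilation d widens the effective span by (k - 1) * d, so dilated stacks grow the receptive field exponentially at constant parameter cost.

```python
def receptive_field(kernel_sizes, dilations):
    """Receptive field of a stack of stride-1 convolutions.  A layer
    with kernel k and dilation d adds (k - 1) * d to the effective
    span -- the mechanism a dilated-convolution extension exploits to
    capture more context without extra parameters."""
    rf = 1
    for k, d in zip(kernel_sizes, dilations):
        rf += (k - 1) * d
    return rf
```

For example, three 3x3 layers with dilations 1, 2, 4 cover a span of 15 samples, versus 7 for the undilated stack.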
Procedia PDF Downloads 162
2721 Supergrid Modeling and Operation and Control of Multi Terminal DC Grids for the Deployment of a Meshed HVDC Grid in South Asia
Authors: Farhan Beg, Raymond Moberly
Abstract:
The Indian subcontinent faces a massive challenge with regard to energy security in its member countries: providing reliable electricity to facilitate development across various sectors of the economy and consequently achieve developmental targets. The instability of the current precarious situation is observable in frequent system failures and blackouts. This paper proposes the deployment of an interconnected electricity ‘Supergrid’ designed to carry huge quantities of power across the Indian subcontinent. Besides enabling energy security in the subcontinent, it will also provide a platform for the integration of Renewable Energy Sources (RES). This paper assesses the need and conditions for a Supergrid deployment and consequently proposes a meshed topology based on Voltage Source Converter High Voltage Direct Current (VSC-HVDC) technology for the Supergrid model. Various control schemes for voltage and power are utilized for the regulation of the network parameters. A three-terminal Multi-Terminal Direct Current (MTDC) network is used for the simulations. Keywords: super grid, wind and solar energy, high voltage direct current, electricity management, load flow analysis
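Among the control schemes used for voltage and power regulation in MTDC grids, voltage droop is a common choice for sharing power imbalances across converter terminals; the per-unit sketch below is a generic illustration (the gains are assumed values, not parameters from the paper):

```python
def droop_share(p_imbalance, droop_gains):
    """Steady-state sharing of a DC-side power imbalance among
    droop-controlled VSC terminals: each terminal picks up a share
    proportional to its droop gain, so terminals with stiffer droop
    characteristics contribute more to restoring the power balance."""
    total = sum(droop_gains)
    return [p_imbalance * k / total for k in droop_gains]
```

In a full simulation this sharing emerges from each converter's P-V droop characteristic interacting through the DC load flow; the function only shows the resulting steady-state proportion.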
Procedia PDF Downloads 428
2720 DNpro: A Deep Learning Network Approach to Predicting Protein Stability Changes Induced by Single-Site Mutations
Authors: Xiao Zhou, Jianlin Cheng
Abstract:
A single amino acid mutation can have a significant impact on the stability of a protein structure. Thus, predicting the protein stability change induced by single-site mutations is critical and useful for studying protein function and structure. Here, we present a deep learning network with the dropout technique for predicting protein stability changes upon single amino acid substitution. While using only the protein sequence as input, the overall prediction accuracy of the method on a standard benchmark is >85%, which is higher than existing sequence-based methods and comparable to methods that use not only the protein sequence but also the tertiary structure, pH value, and temperature. The results demonstrate that deep learning is a promising technique for protein stability prediction. The good performance of this sequence-based method makes it a valuable tool for predicting the impact of mutations on the many proteins whose experimental structures are not available. Both a downloadable software package and a user-friendly web server (DNpro) that implement the method are freely available for the community to use. Keywords: bioinformatics, deep learning, protein stability prediction, biological data mining
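The dropout technique the abstract credits can be sketched in a few lines using the standard inverted-dropout formulation (the actual DNpro layer sizes and dropout rates are not given in the abstract):

```python
import numpy as np

def dropout_forward(x, p_drop, rng, train=True):
    """Inverted dropout, the standard regulariser for deep networks:
    at train time each activation is zeroed with probability p_drop and
    the survivors are scaled by 1/(1 - p_drop), keeping the expected
    activation unchanged; at test time the layer is the identity."""
    if not train:
        return x
    mask = rng.random(x.shape) >= p_drop
    return x * mask / (1.0 - p_drop)
```

Randomly silencing units this way discourages co-adaptation between hidden features, which is especially useful on the small labelled datasets typical of protein stability benchmarks.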
Procedia PDF Downloads 468
2719 Geospatial Network Analysis Using Particle Swarm Optimization
Authors: Varun Singh, Mainak Bandyopadhyay, Maharana Pratap Singh
Abstract:
The shortest path (SP) problem concerns finding the path from a specified origin to a specified destination in a given network that minimizes the total cost associated with the path. This problem has widespread applications. Important applications include vehicle routing in transportation systems, particularly in in-vehicle Route Guidance Systems (RGS), and the traffic assignment problem in transportation planning. Evolutionary methods such as Genetic Algorithms (GA), Ant Colony Optimization, and Particle Swarm Optimization (PSO) have been applied to complex optimization problems to overcome the shortcomings of existing shortest path analysis methods. Various researchers have reported that PSO performs better than other evolutionary optimization algorithms in terms of success rate and solution quality. Furthermore, Geographic Information Systems (GIS) have emerged as key information systems for geospatial data analysis and visualization. This research paper focuses on the application of PSO to the shortest path problem between multiple points of interest (POI), based on spatial data of Allahabad City and traffic speed data collected using GPS. Geovisualization of the analysis results is carried out in GIS. Keywords: particle swarm optimization, GIS, traffic data, outliers
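The canonical PSO update at the heart of such work is short enough to sketch; in shortest-path applications the particle positions are usually decoded into paths via a priority encoding, but here the objective is left as a plain continuous cost function to keep the sketch self-contained (parameter values are common defaults, not those of the paper):

```python
import numpy as np

def pso_minimize(f, dim, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    """Canonical PSO: each particle's velocity blends inertia, a pull
    toward its personal best, and a pull toward the global best."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()
```

Swapping the continuous objective for a decode-then-evaluate path cost turns the same loop into the route-search variant the paper describes.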
Procedia PDF Downloads 483