Search results for: sparse graph
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 604

94 Social Media Idea Ontology: A Concept for Semantic Search of Product Ideas in Customer Knowledge through User-Centered Metrics and Natural Language Processing

Authors: Martin Häusl, Maximilian Auch, Johannes Forster, Peter Mandl, Alexander Schill

Abstract:

In order to survive on the market, companies must constantly develop improved and new products. These products are designed to serve the needs of their customers in the best possible way. The creation of new products is also called innovation and is primarily driven by a company's internal research and development department. However, a new approach has been emerging for some years now that involves external knowledge in the innovation process. This approach is called open innovation and identifies customer knowledge as the most important source in the innovation process. This paper presents a concept for using social media posts as an external source to support the open innovation approach in its initial phase, the ideation phase. For this purpose, the social media posts are semantically structured with the help of an ontology, and the authors are evaluated using graph-theoretical metrics such as density. For the structuring and evaluation of relevant social media posts, we also use findings from Natural Language Processing, e.g., Named Entity Recognition, specific dictionaries, triple taggers and part-of-speech taggers. The selection and evaluation of the tools used are discussed in this paper. Using our ontology and metrics to structure social media posts enables users to semantically search these posts for new product ideas and thus gain improved insight into external sources such as customer needs.
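As a minimal sketch of the graph-theoretical author evaluation mentioned above — assuming authors form a simple undirected interaction graph; the edge list and names are illustrative, not from the paper — density and a per-author centrality can be computed with networkx:

```python
# A minimal sketch of a graph-theoretical author metric; the
# author-interaction edges below are purely illustrative.
import networkx as nx

# Hypothetical interaction graph: an edge means one author replied to
# or mentioned the other in a social media thread.
G = nx.Graph()
G.add_edges_from([
    ("alice", "bob"),
    ("alice", "carol"),
    ("bob", "carol"),
    ("dave", "alice"),
])

# Density = 2|E| / (|V|(|V|-1)) for an undirected graph; values near 1
# indicate a tightly connected community of contributors.
print(nx.density(G))                     # 0.666...
# Per-author centrality could serve as an additional weighting metric.
print(nx.degree_centrality(G)["alice"])  # 1.0 (connected to all others)
```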

Keywords: idea ontology, innovation management, semantic search, open information extraction

Procedia PDF Downloads 165
93 Thermoluminescence Characteristic of Nanocrystalline BaSO4 Doped with Europium

Authors: Kanika S. Raheja, A. Pandey, Shaila Bahl, Pratik Kumar, S. P. Lochab

Abstract:

The subject of this paper is the study of BaSO4 nanophosphor doped with europium, in which mainly the concentration of the rare earth impurity Eu (0.05, 0.1, 0.2, 0.5, and 1 mol%) has been varied. A comparative study of the thermoluminescence (TL) properties of the given nanophosphor has also been done against a well-known standard dosimetry material, TLD-100. First, a number of samples were prepared successfully by the chemical co-precipitation method. The whole lot was then compared to the well-established standard material (TLD-100) for TL sensitivity. BaSO4:Eu (0.2 mol%) showed the highest sensitivity of the lot. It was also found that, compared to the standard TLD-100, BaSO4:Eu (0.2 mol%) showed surprisingly high sensitivity over a large range of doses. The TL response curve for all prepared samples has also been studied over a wide range of doses, i.e., 10 Gy to 2 kGy of gamma radiation. Almost all the samples of BaSO4:Eu showed remarkable linearity over a broad range of doses, which is a characteristic feature of a fine TL dosimeter. The response remained linear even beyond 1 kGy of gamma radiation. Thus, the given nanophosphor has been successfully optimised for the concentration of the dopant material to achieve its highest TL sensitivity. Further, the comparative study with the standard material revealed that the optimised sample shows better TL sensitivity and a linear response curve over a remarkably wide range of doses of gamma radiation (Co-60) compared to the standard TLD-100, which makes the optimised BaSO4:Eu promising as an efficient gamma radiation dosimeter. Lastly, the present phosphor has been optimised for its annealing temperature to acquire the best results, while also studying its fading and reusability properties.

Keywords: gamma radiation, nanoparticles, radiation dosimetry, thermoluminescence

Procedia PDF Downloads 410
92 Data Clustering Algorithm Based on Multi-Objective Periodic Bacterial Foraging Optimization with Two Learning Archives

Authors: Chen Guo, Heng Tang, Ben Niu

Abstract:

Clustering splits objects into different groups based on similarity, so that objects in the same group have higher similarity and objects in different groups have lower similarity. Clustering can thus be treated as an optimization problem that maximizes intra-cluster similarity or inter-cluster dissimilarity. In real-world applications, datasets often have complex characteristics: sparsity, overlap, high dimensionality, etc. When facing such datasets, simultaneously optimizing two or more objectives can obtain better clustering results than optimizing a single objective. However, apart from objective-weighting methods, traditional clustering approaches have difficulty solving multi-objective data clustering problems. For this reason, evolutionary multi-objective optimization algorithms have been investigated to optimize multiple clustering objectives. In this paper, the Data Clustering algorithm based on Multi-objective Periodic Bacterial Foraging Optimization with two Learning Archives (DC-MPBFOLA) is proposed. Specifically, first, to reduce the high computational complexity of the original BFO, periodic BFO is employed as the basic algorithmic framework and is then extended to a multi-objective form. Second, two learning strategies are proposed based on the two learning archives to guide the bacterial swarm in a better direction: the global best is selected from the global learning archive according to a convergence index and a diversity index, while the personal best is selected from the personal learning archive according to the sum of weighted objectives. Based on these learning strategies, a chemotaxis operation is designed. Third, an elite learning strategy is designed to inject fresh diversity into the objects in the two learning archives: when the objects in these archives do not change for two consecutive iterations, randomly reinitializing one dimension of the objects prevents the proposed algorithm from falling into local optima. Fourth, to validate its performance, DC-MPBFOLA is compared with four state-of-the-art evolutionary multi-objective optimization algorithms and one classical clustering algorithm on several evaluation indexes across benchmark datasets. To further verify the effectiveness and feasibility of the designed strategies, variants of DC-MPBFOLA are also evaluated. Experimental results demonstrate that DC-MPBFOLA outperforms its competitors on all evaluation indexes and clustering partitions, and that the designed strategies positively influence the performance of the original BFO.
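A minimal sketch of the archive-based leader selection described above, assuming each archive entry carries precomputed convergence and diversity indexes; the scoring rules are illustrative simplifications, not the paper's exact formulas:

```python
# Illustrative leader selection from two learning archives; the
# convergence/diversity trade-off and the weighted sum are simplified.
from dataclasses import dataclass

@dataclass
class ArchiveEntry:
    position: list       # candidate solution (e.g., flattened cluster centers)
    objectives: list     # values of the clustering objectives
    convergence: float   # lower = closer to the estimated Pareto front
    diversity: float     # higher = located in a sparser front region

def select_global_best(archive, w_conv=0.5, w_div=0.5):
    # Balance convergence against diversity when choosing the leader.
    return min(archive,
               key=lambda e: w_conv * e.convergence - w_div * e.diversity)

def select_personal_best(personal_archive, weights):
    # Weighted sum of objectives, as the abstract describes.
    return min(personal_archive,
               key=lambda e: sum(w * f for w, f in zip(weights, e.objectives)))
```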

Keywords: data clustering, multi-objective optimization, bacterial foraging optimization, learning archives

Procedia PDF Downloads 113
91 Predictors of Quality of Life among Older Refugees Aging out of Place

Authors: Jonix Owino, Heather Fuller

Abstract:

Refugees flee from their home countries due to civil unrest, war, or persecution and migrate to Western countries such as the United States in search of a safe haven. Transitioning into a new society and culture can be challenging, thereby affecting refugees' quality of life and well-being in the host communities. Moreover, as individuals age, they experience physical, cognitive and socioemotional changes that may impact their quality of life. However, little is known about the predictors of quality of life among aging refugees. It is not clear how quality of life varies by age, that is, between midlife refugees and their older counterparts. In addition to age, other sociodemographic factors such as gender, socioeconomic status, or country of origin are likely to have differential associations with quality of life, yet research on such variations among older refugees is sparse. Thus the present study seeks to explore factors associated with quality of life by asking the following research questions: 1) Do sociodemographic factors (such as age and gender) predict quality of life among older refugees? 2) Is there an association between social integration and quality of life? 3) Is there an association between migratory experiences (such as post-migration adjustment) and quality of life? The present study recruited 90 refugees (primarily originating from Bhutan, Somalia, Burundi, and Sudan) aged 50 or older living in the US. The participants completed a structured questionnaire which assessed sociodemographic attributes (e.g., age, gender, length of residence in the US, country of origin, employment, level of education, and marital status), along with validated measures of social integration, post-migration living difficulties, and quality of life. Preliminary results suggest sociodemographic variability in quality of life among these refugees. Further analyses will be conducted using hierarchical regression to address the following hypotheses: first, it is hypothesized that quality of life will vary by age and gender, such that younger refugees and men will report higher quality of life. Second, it is expected that refugees with greater levels of social integration will also report better quality of life. Finally, post-migration factors such as language barriers and family stress are hypothesized to predict poorer quality of life. Potential moderating effects of age and gender will also be analyzed, and the resulting findings will be interpreted and discussed. The findings from this study have potential implications for how communities can better support older refugees and develop social programs that effectively cater to their well-being. Conclusions will be drawn and discussed in light of policies related to both aging and refugee migration within the context of the US.
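A minimal sketch of the kind of hierarchical regression with a moderation term that the analysis plan describes, assuming a tidy survey DataFrame; the file and column names are hypothetical:

```python
# Illustrative hierarchical regression steps; variable names are made up.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("refugee_survey.csv")  # hypothetical file

# Step 1: sociodemographics only; Step 2: add social integration;
# Step 3: add post-migration difficulties plus an age moderation term.
m1 = smf.ols("quality_of_life ~ age + C(gender)", data=df).fit()
m2 = smf.ols("quality_of_life ~ age + C(gender) + social_integration",
             data=df).fit()
m3 = smf.ols("quality_of_life ~ age + C(gender) + social_integration"
             " + post_migration_difficulties"
             " + age:post_migration_difficulties", data=df).fit()

# Compare incremental fit across the hierarchical steps.
print(m1.rsquared, m2.rsquared, m3.rsquared)
```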

Keywords: aging out of place, migration, older refugees, quality of life, social integration

Procedia PDF Downloads 80
90 Current Methods for Drug Property Prediction in the Real World

Authors: Jacob Green, Cecilia Cabrera, Maximilian Jakobs, Andrea Dimitracopoulos, Mark van der Wilk, Ryan Greenhalgh

Abstract:

Predicting drug properties is key in drug discovery to enable de-risking of assets before expensive clinical trials and to find highly active compounds faster. Interest from the machine learning community has led to the release of a variety of benchmark datasets and proposed methods. However, it remains unclear for practitioners which method or approach is most suitable, as different papers benchmark on different datasets and methods, leading to varying conclusions that are not easily compared. Our large-scale empirical study links together numerous earlier works on different datasets and methods, thus offering a comprehensive overview of the existing property classes, datasets, and their interactions with different methods. We emphasise the importance of uncertainty quantification and of the time, and therefore cost, of applying these methods in the drug development decision-making cycle. We observe that the optimal approach varies depending on the dataset and that engineered features with classical machine learning methods often outperform deep learning. Specifically, QSAR datasets are typically best analysed with classical methods such as Gaussian Processes, while ADMET datasets are sometimes better described by trees or deep learning methods such as Graph Neural Networks or language models. Our work highlights that practitioners do not yet have a straightforward, black-box procedure to rely on and sets a precedent for creating practitioner-relevant benchmarks. Deep learning approaches must be proven on these benchmarks to become the practical method of choice in drug property prediction.
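A minimal sketch of the "engineered features plus classical model" recipe the study finds competitive, including the uncertainty estimate the abstract emphasises; the descriptor data is a synthetic stand-in, not a benchmark set:

```python
# Gaussian Process on precomputed descriptor vectors; data is synthetic.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))          # stand-in molecular descriptors
y = 1.5 * X[:, 0] - X[:, 3] + rng.normal(scale=0.1, size=200)

# WhiteKernel models observation noise; the GP's predictive std is the
# per-compound uncertainty estimate practitioners can act on.
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X[:150], y[:150])
mean, std = gp.predict(X[150:], return_std=True)
print(mean[:3], std[:3])
```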

Keywords: activity (QSAR), ADMET, classical methods, drug property prediction, empirical study, machine learning

Procedia PDF Downloads 49
89 Cosmetic Recommendation Approach Using Machine Learning

Authors: Shakila N. Senarath, Dinesh Asanka, Janaka Wijayanayake

Abstract:

The necessity of cosmetic products arises from consumer needs for personal appearance and hygiene. A cosmetic product consists of various chemical ingredients which may help to keep the skin healthy or may lead to damage, and a given chemical ingredient does not perform the same on every person. The most appropriate way to select a healthy cosmetic product is to identify the characteristics of the body first and then select the most suitable product with safe ingredients. The selection process of cosmetic products is therefore complicated, and consumer surveys have shown that, most of the time, consumers select cosmetic products improperly. From this study, a content-based system is suggested that recommends cosmetic products based on human factors; skin type, gender and price range are considered as the human factors and are taken as inputs to the system. The proposed system will be implemented using machine learning. The skin type of the consumer will be derived using the Baumann Skin Type Questionnaire, a value-based approach that includes a number of questions to map the user's skin type to one of the 16 skin types of the Baumann Skin Type Indicator (BSTI). Two datasets were collected for the research: a user dataset and a cosmetic dataset. The user dataset was collected using a questionnaire given to the public. The cosmetic dataset contains product details belonging to 5 different product categories (moisturizer, cleanser, sun protector, face mask, eye cream). TF-IDF (Term Frequency - Inverse Document Frequency) is applied to vectorize the cosmetic ingredients in the generic cosmetic products dataset and the user-preferred dataset. Using the TF-IDF vectors, each user-preferred product and each generic cosmetic product can be represented as a sparse vector. The similarity between each user-preferred product and each generic cosmetic product is calculated using cosine similarity, and a similarity matrix is used for the recommendation process: the higher the similarity, the better the match for the consumer. Sorting a user's column of the similarity matrix in descending order yields the ranked list of recommended products. Since user information such as gender and preferred price range has also been gathered, further optimization can be done by weighting those parameters once a set of recommended products for a user has been retrieved.
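A minimal sketch of the TF-IDF vectorization and cosine-similarity ranking described above, assuming each product is a comma-separated ingredient string; the toy catalogue is illustrative:

```python
# TF-IDF over ingredient lists, then cosine similarity for ranking.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

catalogue = {
    "moisturizer_a": "water, glycerin, niacinamide, dimethicone",
    "moisturizer_b": "water, glycerin, shea butter, ceramide np",
    "cleanser_a":    "water, cocamidopropyl betaine, citric acid",
}
user_preferred = ["water, glycerin, ceramide np, niacinamide"]

# One token per ingredient, not per word.
vec = TfidfVectorizer(analyzer=lambda s: [t.strip() for t in s.split(",")])
product_matrix = vec.fit_transform(catalogue.values())
user_vector = vec.transform(user_preferred)

# One similarity score per catalogue product; sort descending to rank.
scores = cosine_similarity(user_vector, product_matrix).ravel()
ranking = sorted(zip(catalogue, scores), key=lambda p: -p[1])
print(ranking)
```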

Keywords: content-based filtering, cosmetics, machine learning, recommendation system

Procedia PDF Downloads 110
88 A Robust Optimization of Chassis Durability/Comfort Compromise Using Chebyshev Polynomial Chaos Expansion Method

Authors: Hanwei Gao, Louis Jezequel, Eric Cabrol, Bernard Vitry

Abstract:

The chassis system is composed of complex elements that take up all the loads from the tire-ground contact area, and it therefore plays an important role in numerous specifications such as durability, comfort, crash, etc. During the development of new vehicle projects at Renault, durability validation is always the main focus, while comfort is addressed later in the project. Therefore, design choices sometimes have to be reconsidered because of the natural incompatibility between these two specifications. Besides, robustness is also an important concern, as it is related to manufacturing costs as well as to performance after the ageing of components like shock absorbers. In this paper, an approach is proposed that aims to realize a multi-objective optimization between chassis endurance and comfort while taking random factors into consideration. The adaptive-sparse polynomial chaos expansion (PCE) method with Chebyshev polynomial series is applied to predict the uncertainty intervals of a system's responses according to its uncertain-but-bounded parameters. The approach can be divided into three steps. First, an initial design of experiments is realized to build the response surfaces which statistically represent a black-box system. Secondly, over several iterations, optimum sets are proposed and validated, forming a Pareto front. At the same time, the robustness of each response, serving as an additional objective, is calculated from the pre-defined parameter intervals and the response surfaces obtained in the first step. Finally, an inverse strategy is carried out to determine the parameter tolerance combination with a maximally acceptable degradation of the responses in terms of manufacturing costs. A quarter-car model has been tested as an example by applying road excitations from actual road measurements for both endurance and comfort calculations. One indicator based on Basquin's law is defined to compare the global chassis durability of different parameter settings. Another indicator, related to comfort, is obtained from the vertical acceleration of the sprung mass. An optimum set with the best robustness has finally been obtained, and reference tests show a good robustness prediction by the Chebyshev PCE method. This example demonstrates the effectiveness and reliability of the approach, in particular its ability to save computational costs for a complex system.
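A minimal sketch of fitting a Chebyshev surrogate to a black-box response over a bounded parameter, in the spirit of the PCE response surfaces described above; the simulate function is a stand-in for the vehicle model:

```python
# Chebyshev surrogate of a black-box response over one bounded
# parameter mapped to [-1, 1]; "simulate" is a toy stand-in.
import numpy as np
from numpy.polynomial import chebyshev as C

def simulate(x):            # stand-in for an expensive chassis response
    return np.exp(-x) * np.sin(3 * x)

# Design of experiments at Chebyshev nodes, then a degree-8 fit.
nodes = np.cos(np.pi * (np.arange(16) + 0.5) / 16)
coeffs = C.chebfit(nodes, simulate(nodes), deg=8)

# Cheap surrogate evaluations bound the response over the interval.
grid = np.linspace(-1.0, 1.0, 2001)
approx = C.chebval(grid, coeffs)
print(approx.min(), approx.max())       # response uncertainty interval
```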

Keywords: chassis durability, Chebyshev polynomials, multi-objective optimization, polynomial chaos expansion, ride comfort, robust design

Procedia PDF Downloads 130
87 A Methodology to Integrate Data in the Company Based on the Semantic Standard in the Context of Industry 4.0

Authors: Chang Qin, Daham Mustafa, Abderrahmane Khiat, Pierre Bienert, Paulo Zanini

Abstract:

Nowadays, companies face many challenges in the process of digital transformation, which can be a complex and costly undertaking. Digital transformation involves the collection and analysis of large amounts of data, which creates challenges around data management and governance, and it is also challenging to integrate data from multiple systems and technologies. Despite these pains, companies still pursue digitalization, because by embracing advanced technologies they can improve efficiency, quality, decision-making, and customer experience while also creating different business models and revenue streams. This paper focuses on the issue that data is stored in data silos with different schemas and structures. Conventional approaches to addressing this issue involve data warehousing, data integration tools, data standardization, and business intelligence tools. However, these approaches primarily focus on the grammar and structure of the data and neglect the importance of semantic modeling and semantic standardization, which are essential for achieving data interoperability. Here, the challenge of data silos in Industry 4.0 is addressed by developing a semantic modeling approach compliant with Asset Administration Shell (AAS) models, an efficient standard for communication in Industry 4.0. The paper highlights how our approach can facilitate the data mapping process and semantic lifting according to existing industry standards such as ECLASS and other industrial dictionaries. It also incorporates the Asset Administration Shell technology to model and map the company's data and utilizes a knowledge graph for data storage and exploration.
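A minimal sketch of the semantic-lifting idea: one silo column lifted into a knowledge-graph triple whose predicate comes from an industrial dictionary. All IRIs, namespaces and the ECLASS-style code below are illustrative placeholders, not real dictionary entries:

```python
# Lifting a single asset property into an RDF knowledge graph.
from rdflib import Graph, Literal, Namespace, RDF

AAS = Namespace("https://example.org/aas/")       # hypothetical namespace
ECLASS = Namespace("https://example.org/eclass/")  # hypothetical namespace

g = Graph()
motor = AAS["asset/motor-17"]
g.add((motor, RDF.type, AAS.Asset))
# Map the silo column "max_rpm" to a dictionary-defined property so any
# system reading the graph interprets the value the same way.
g.add((motor, ECLASS["0173-1#02-XXXXXXX"], Literal(3000)))  # placeholder code

print(g.serialize(format="turtle"))
```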

Keywords: data interoperability in industry 4.0, digital integration, industrial dictionary, semantic modeling

Procedia PDF Downloads 69
86 Application of Rapidly Exploring Random Tree Star-Smart and G2 Quintic Pythagorean Hodograph Curves to the UAV Path Planning Problem

Authors: Luiz G. Véras, Felipe L. Medeiros, Lamartine F. Guimarães

Abstract:

This work approaches the automatic planning of paths for Unmanned Aerial Vehicles (UAVs) through the application of the Rapidly Exploring Random Tree Star-Smart (RRT*-Smart) algorithm. RRT*-Smart samples positions of a navigation environment and organizes them in a tree-type graph. The algorithm randomly expands a tree from an initial position (root node) until one of its branches reaches the final position of the path to be planned, and it ensures the planning of the shortest path as the number of iterations tends to infinity. When a new node is inserted into the tree, each neighbor of the new node is reconnected through it if and only if the path between the root node and that neighbor, with this new connection, is shorter than the current path between those two nodes. RRT*-Smart additionally uses an intelligent sampling strategy to plan shorter routes in fewer iterations. This strategy is based on the creation of samples/nodes near the convex vertices of the obstacles in the navigation environment. The planned paths are smoothed through the application of quintic Pythagorean hodograph curves. The smoothing process converts a route into a dynamically viable one based on the kinematic constraints of the vehicle. This smoothing method models the hodograph components of a curve with polynomials that obey the Pythagorean theorem. Its advantage is that the obtained structure allows the curve length to be computed exactly, without the need for quadrature techniques to resolve integrals.
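A minimal sketch of the RRT* rewiring rule paraphrased above, on a 2-D point set with no obstacle checks; full RRT*-Smart adds collision tests, best-parent selection among neighbors, and the biased sampling near obstacle vertices:

```python
# Basic tree growth plus the RRT* rewiring step; toy 5x5 workspace.
import math
import random

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

nodes = [(0.0, 0.0)]            # root
parent = {0: None}
cost = {0: 0.0}
RADIUS = 0.5                    # rewiring neighborhood

for _ in range(500):
    q = (random.uniform(0, 5), random.uniform(0, 5))
    near = min(range(len(nodes)), key=lambda i: dist(nodes[i], q))
    new = len(nodes)
    nodes.append(q)
    parent[new], cost[new] = near, cost[near] + dist(nodes[near], q)
    # Rewire: reconnect any neighbor through the new node if that
    # shortens its path from the root.
    for i in range(new):
        if dist(nodes[i], q) < RADIUS and cost[new] + dist(nodes[i], q) < cost[i]:
            parent[i], cost[i] = new, cost[new] + dist(nodes[i], q)

print(len(nodes), max(cost.values()))
```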

Keywords: path planning, path smoothing, Pythagorean hodograph curve, RRT*-Smart

Procedia PDF Downloads 148
85 Hybrid Model: An Integration of Machine Learning with Traditional Scorecards

Authors: Golnush Masghati-Amoli, Paul Chin

Abstract:

Over the past few years, with rapid increases in data availability and computing power, Machine Learning (ML) techniques have been called on in a range of different industries for their strong predictive capability. However, the use of Machine Learning in commercial banking has been limited due to a special challenge imposed by numerous regulations that require lenders to be able to explain their analytic models, not only to regulators but often to consumers. In other words, although Machine Learning techniques enable better prediction with a higher level of accuracy, in comparison with other industries they are adopted less frequently in commercial banking, especially for scoring purposes. This is due to the fact that Machine Learning techniques are often considered a black box and fail to provide information on why a certain risk score is given to a customer. In order to bridge this gap between the explainability and performance of Machine Learning techniques, a Hybrid Model is developed at Dun & Bradstreet that blends Machine Learning algorithms with traditional approaches such as scorecards. The Hybrid Model maximizes the efficiency of traditional scorecards by merging their practical benefits, such as explainability and the ability to input domain knowledge, with the deep insights of Machine Learning techniques, which can uncover patterns scorecard approaches cannot. First, through the development of Machine Learning models, engineered features, latent variables and feature interactions that demonstrate high information value in the prediction of customer risk are identified. Then, these features are employed to introduce observed non-linear relationships between the explanatory and dependent variables into traditional scorecards. Moreover, instead of directly computing the Weight of Evidence (WoE) from good and bad data points, the Hybrid Model tries to match the score distribution generated by a Machine Learning algorithm, which ends up providing an estimate of the WoE for each bin. This capability helps to build powerful scorecards with sparse cases that cannot be achieved with traditional approaches. The proposed Hybrid Model is tested on different portfolios where a significant gap is observed between the performance of traditional scorecards and Machine Learning models. The results show that the Hybrid Model can improve the performance of traditional scorecards by introducing non-linear relationships between explanatory and target variables from Machine Learning models into traditional scorecards. It is also observed that in some scenarios the Hybrid Model can be almost as predictive as the Machine Learning techniques while being as transparent as traditional scorecards. Therefore, it is concluded that, with the use of the Hybrid Model, Machine Learning algorithms can be used in the commercial banking industry without concerns about explaining the models for regulatory purposes.
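A minimal sketch of estimating per-bin WoE by matching an ML model's score distribution, as outlined above; the scores, binning and the score-to-bad-rate reading are illustrative, not Dun & Bradstreet's procedure:

```python
# WoE per quantile bin, derived from model scores instead of raw
# good/bad counts; data is synthetic.
import numpy as np

rng = np.random.default_rng(1)
scores = rng.uniform(0.05, 0.95, 5000)    # stand-in ML risk scores
bins = np.quantile(scores, np.linspace(0, 1, 6))

# Read the mean score in each bin as a bad-rate estimate, then convert
# to WoE = ln(%good / %bad) per bin.
idx = np.digitize(scores, bins[1:-1])     # bin index 0..4
woe = []
for b in range(5):
    p_bad = scores[idx == b].mean()
    woe.append(np.log((1 - p_bad) / p_bad))
print(np.round(woe, 3))    # monotone scorecard points across bins
```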

Keywords: machine learning algorithms, scorecard, commercial banking, consumer risk, feature engineering

Procedia PDF Downloads 103
84 Protective Effect of Diosgenin against Silica-Induced Tuberculosis in Rat Model

Authors: Williams A. Adu, Cynthia A. Danquah, Paul P. S. Ossei, Selase Ativui, Michael Ofori, James Asenso, George Owusu

Abstract:

Background: Silicosis is an occupational disease of the lung caused by chronic exposure to silica dust. There is a high frequency of co-existence of silicosis with tuberculosis (TB), ultimately resulting in lung fibrosis and respiratory failure, and chronic intake of synthetic drugs has resulted in undesirable side effects. Diosgenin is a steroidal saponin that has been shown to exert a therapeutic effect on lung injury. Therefore, we investigated the ability of diosgenin to reduce susceptibility to silica-induced TB in rats. Method: Silicosis was induced by intratracheal instillation of 50 mg/kg crystalline silica in Sprague Dawley rats. Different doses of diosgenin (1, 10, and 100 mg/kg), Mycobacterium smegmatis and saline were administered for 30 days. Afterwards, 5 of the rats from each group were sacrificed, and the 5 remaining rats in each group, except the control, received Mycobacterium smegmatis. Treatment with diosgenin continued until the 50th day, and the rats were sacrificed at the end of the experiment. The results were analysed using one-way analysis of variance (ANOVA) with GraphPad Prism. Result: At a half-maximal inhibitory concentration of 48.27 µM, diosgenin inhibited the growth of Mycobacterium smegmatis. There was a marked decline in the levels of immune cell infiltration and cytokine production. Lactate dehydrogenase and total protein levels were significantly reduced compared to the control, and there was an increase in the survival rate of the treatment group compared to the control. Conclusion: Diosgenin ameliorated silica-induced pulmonary tuberculosis by decreasing the levels of inflammatory and pro-inflammatory cytokines and, in effect, significantly reduced the susceptibility of rats to pulmonary TB.

Keywords: silicosis, tuberculosis, diosgenin, fibrosis, crystalline silica

Procedia PDF Downloads 39
83 A Multi-Objective Decision Making Model for Biodiversity Conservation and Planning: Exploring the Concept of Interdependency

Authors: M. Mohan, J. P. Roise, G. P. Catts

Abstract:

Despite living in an era where conservation zones are de facto the central element in any sustainable wildlife management strategy, we still find ourselves grappling with several Pareto-optimal situations regarding resource allocation and area distribution. In this paper, a multi-objective decision making (MODM) model is presented to answer the question of whether or not we can establish mutual relationships between these contradicting objectives. For our study, we considered a Red-cockaded woodpecker (Picoides borealis) habitat conservation scenario in the coastal plain of North Carolina, USA. The Red-cockaded woodpecker (RCW) is a non-migratory territorial bird that excavates cavities in living pine trees for roosting and nesting. RCW groups nest in an aggregation of cavity trees called a 'cluster', and for our model we use the number of clusters to be established as a measure of the size of the conservation zone required. The case study is formulated as a linear programming problem, and the objective function optimizes the Red-cockaded woodpecker clusters, carbon retention rate, biofuel, public safety and Net Present Value (NPV) of the forest. We studied the variation of the individual objectives with respect to the amount of area available and plotted a two-dimensional dynamic graph after establishing interrelations between the objectives. We further explore the concept of interdependency by integrating the MODM model with GIS, deriving a raster file representing carbon distribution from the existing forest dataset. Model results demonstrate the applicability of interdependency from both linear and spatial perspectives, and suggest that this approach holds immense potential for enhancing environmental investment decision making in the future.
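A minimal sketch of a weighted-sum linear program for two competing land-use objectives under an area budget, in the spirit of the MODM formulation above; all coefficients are made-up placeholders, not the study's data:

```python
# Weighted-sum scalarization of two objectives under one area budget.
from scipy.optimize import linprog

# x = acres assigned to (RCW clusters, timber/NPV production).
area_budget = 1000.0
w_clusters, w_npv = 0.6, 0.4
clusters_per_acre, npv_per_acre = 0.02, 1.5   # hypothetical yields

# linprog minimizes, so negate the weighted benefit.
c = [-(w_clusters * clusters_per_acre), -(w_npv * npv_per_acre)]
res = linprog(c, A_ub=[[1.0, 1.0]], b_ub=[area_budget],
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # sweep the weights to trace the Pareto front
```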

Keywords: conservation, interdependency, multi-objective decision making, red-cockaded woodpecker

Procedia PDF Downloads 315
82 Sensitivity Analysis and Solitary Wave Solutions to the (2+1)-Dimensional Boussinesq Equation in Dispersive Media

Authors: Naila Nasreen, Dianchen Lu

Abstract:

This paper explores the dynamical behavior of the (2+1)-dimensional Boussinesq equation, a nonlinear water wave equation used to model wave packets in dispersive media with weak nonlinearity. This equation describes how long waves in shallow water propagate under the influence of gravity. The (2+1)-dimensional Boussinesq equation combines the two-way propagation of the classical Boussinesq equation with dependence on a second spatial variable, as occurs in the two-dimensional Kadomtsev-Petviashvili equation. The equation provides a description of the head-on collision of oblique waves, and it possesses some interesting properties. The governing model is treated with the assistance of the Riccati equation mapping method, an effective integration tool. Solutions have been extracted in different forms: solitary wave solutions as well as hyperbolic and periodic solutions. Moreover, a sensitivity analysis is demonstrated for the wave profiles of the designed dynamical structural system, where the soliton wave velocity and wave number parameters regulate the water wave singularity. In addition to being helpful for elucidating nonlinear partial differential equations, the method in use recovers previously extracted solutions and extracts fresh exact solutions. Assuming suitable values for the parameters, various graphs of different shapes are sketched to provide information about the visual format of the obtained results. This paper's findings support the efficacy of the approach taken in elucidating nonlinear dynamical behavior. We believe this research will be of interest to a wide variety of engineers who work with engineering models. The findings show the effectiveness, simplicity, and generalizability of the chosen computational approach, even when applied to complicated systems in a variety of fields, especially in ocean engineering.
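For reference, a commonly studied form of the governing model is sketched below in LaTeX; coefficient conventions vary across the literature, so the paper's exact normalization may differ:

```latex
% A commonly studied form of the (2+1)-dimensional Boussinesq equation
% (coefficient conventions vary across the literature):
\[
  u_{tt} - u_{xx} - u_{yy} - \alpha\,(u^{2})_{xx} - \beta\, u_{xxxx} = 0,
\]
% dropping the dependence on y recovers the classical (1+1)-dimensional
% Boussinesq equation with its two-way wave propagation.
```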

Keywords: (2+1)-dimensional Boussinesq equation, solitary wave solutions, Riccati equation mapping approach, nonlinear phenomena

Procedia PDF Downloads 55
81 Analysis of Travel Behavior Patterns of Frequent Passengers after the Section Shutdown of Urban Rail Transit - Taking the Huaqiao Section of Shanghai Metro Line 11 Shutdown During the COVID-19 Epidemic as an Example

Authors: Hongyun Li, Zhibin Jiang

Abstract:

The travel of passengers in an urban rail transit network is influenced by changes in the network structure and operational status, and individual travel preferences respond to these changes differently. Firstly, the influence of the suspension of an urban rail transit line section on passenger travel along the line is analyzed. Secondly, passenger travel trajectories containing multi-dimensional semantics are described based on network UD data. Next, passenger panel data based on spatio-temporal sequences are constructed to achieve frequent-passenger clustering. Then, a Graph Convolutional Network (GCN) is used to model and identify the changes in travel modes of different types of frequent passengers. Finally, taking Shanghai Metro Line 11 as an example, the travel behavior patterns of frequent passengers after the Huaqiao section shutdown during the COVID-19 epidemic are analyzed. The results show that after the section shutdown, most passengers transferred to the nearest Anting station for boarding, while some passengers transferred to other stations or cancelled their travel entirely. Among the passengers who transferred to Anting station, most maintained their original normalized travel mode, a small number waited a few days before transferring to Anting station, and only a few stopped traveling from Anting station or transferred to other stations after a few days of boarding at Anting station. The results can provide a basis for understanding urban rail transit passenger travel patterns and improving the accuracy of passenger flow prediction in abnormal operation scenarios.
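A minimal sketch of the graph-convolution propagation step underlying GCN models like the one used above, on a toy passenger-similarity graph; the adjacency, features and weights are synthetic:

```python
# One GCN propagation step: H = ReLU(D^-1/2 (A+I) D^-1/2 X W).
import numpy as np

A = np.array([[0, 1, 1, 0],             # adjacency between 4 passengers
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.default_rng(0).normal(size=(4, 8))  # travel-sequence features
W = np.random.default_rng(1).normal(size=(8, 4))  # learnable weights

A_hat = A + np.eye(4)                              # add self-loops
d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
H = np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W, 0.0)
print(H.shape)   # (4, 4): one embedding per passenger
```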

Keywords: urban rail transit, section shutdown, frequent passenger, travel behavior pattern

Procedia PDF Downloads 53
80 Adverse Childhood Experiences (ACES) and Later-Life Depression: Perceived Social Support as a Potential Protective Factor

Authors: E. Von Cheong, Carol Sinnott, Darren Dahly, Patricia M. Kearney

Abstract:

Introduction and Aim: Adverse childhood experiences (ACEs) are all too common and have been linked to poorer health and wellbeing across the life course. While the prevention of ACEs is a worthy goal, it is important that we also try to lessen the impact of ACEs for those who do experience them. This study aims to investigate associations between adverse childhood experiences (ACEs) and later-life depressive symptoms, and to explore whether perceived social support (PSS) moderates these. Method: We analysed baseline data from the Mitchelstown (Ireland) 2010-11 cohort involving 2047 men and women aged 50-69 years. Self-reported assessments included ACEs (Centers for Disease Control ACE questionnaire), PSS (Oslo Social Support Scale), and depressive symptoms (CES-D). The primary exposure was self-report of at least one ACE. We also investigated the effects of ACE exposure by the subtypes abuse, neglect, and household dysfunction. Associations between each of these exposures and depressive symptoms were estimated using logistic regression, adjusted for socio-demographic factors that were selected using the Directed Acyclic Graph (DAG) approach. We also tested whether the estimated associations varied across levels of PSS (poor, moderate, and good). Results: 23.7% of participants reported at least one ACE (95% CI: 21.9% to 25.6%). ACE exposures (overall or by subtype) were associated with higher odds of depressive symptoms, but only among individuals with poor PSS. For example, exposure to any ACE (vs. none) was associated with three times the odds of depressive symptoms (adjusted OR 2.97; 95% CI 1.63 to 5.40) among individuals reporting poor PSS, while among those reporting moderate PSS, the adjusted OR was 1.18 (95% CI 0.72 to 1.94). Discussion: ACEs are common among older adults in Ireland and are associated with higher odds of later-life depressive symptoms among those also reporting poor PSS. Interventions that enhance the perception of social support following ACE exposure may help reduce the burden of depression in older populations.
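A minimal sketch of the stratified logistic model behind adjusted ORs like those above, assuming a tidy cohort DataFrame; the file name, variable names and covariate set are hypothetical:

```python
# Adjusted OR for any-ACE exposure within each perceived-social-support
# stratum; all names are illustrative placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("mitchelstown_baseline.csv")   # hypothetical file

for level, sub in df.groupby("pss_level"):      # poor / moderate / good
    m = smf.logit("depressive_symptoms ~ any_ace + age + C(sex)"
                  " + C(education)", data=sub).fit(disp=False)
    # Exponentiate the coefficient to get the adjusted odds ratio.
    print(level, round(float(np.exp(m.params["any_ace"])), 2))
```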

Keywords: adverse childhood experiences, depression, later-life, perceived social support

Procedia PDF Downloads 206
79 Design, Development and Analysis of Combined Darrieus and Savonius Wind Turbine

Authors: Ashish Bhattarai, Bishnu Bhatta, Hem Raj Joshi, Nabin Neupane, Pankaj Yadav

Abstract:

This report concerns the design, development, and analysis of a combined Darrieus and Savonius wind turbine. Vertical Axis Wind Turbines (VAWTs) are of two types, viz. Darrieus (lift type) and Savonius (drag type). The problem associated with the Darrieus rotor is a lack of self-starting, while the Savonius rotor has low efficiency. The design uses 3 straight Darrieus blades with a NACA (National Advisory Committee for Aeronautics) 0018 cross-section placed circumferentially, and a helically twisted Savonius blade to obtain an even torque distribution. This unique design allows the Savonius rotor to self-start the wind turbine, which the Darrieus rotor cannot achieve on its own. All parts of the wind turbine were designed in CAD software, and simulation data were obtained via a CFD (Computational Fluid Dynamics) approach. The design was then imported into the FlashForge Finder to 3D print the wind turbine profile, and finally testing was carried out. The plastic material used for the Savonius blade was ABS (Acrylonitrile Butadiene Styrene), and that for the Darrieus blades was PLA (Polylactic Acid). From the data obtained experimentally, the fabricated hybrid VAWT has been found to operate at a low cut-in speed of 3 m/s, and the maximum power output was found to be 7.5537 watts at a wind speed of 6 m/s. The maximum rotor speed recorded was 431 rpm (rotations per minute) at a wind velocity of 6 m/s, signifying its potential for wind power production. Besides, the data obtained from both processes, when analyzed through graph plots, showed similar slopes, and the difference between the experimental and theoretical data reflects mechanical losses. The objective is to eliminate the need for external motors for self-starting purposes and to study the performance of the model. The testing of the model was carried out for different wind velocities.
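A minimal worked example of the rotor power coefficient implied by the reported figures, Cp = P / (0.5 ρ A v³); the swept area below is a hypothetical value, since the abstract does not report rotor dimensions:

```python
# Power coefficient from the reported output; swept area is assumed.
rho = 1.225          # air density, kg/m^3
v = 6.0              # wind speed, m/s
P = 7.5537           # measured output, W (from the abstract)
A = 0.25             # hypothetical swept area, m^2

wind_power = 0.5 * rho * A * v**3
print(f"available wind power: {wind_power:.2f} W")
print(f"Cp = {P / wind_power:.3f}")   # fraction of wind power captured
```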

Keywords: VAWT, Darrieus, Savonius, helical blades, CFD, FlashForge Finder, ABS, PLA

Procedia PDF Downloads 177
78 No Histological and Biochemical Changes Following Administration of Tenofovir Nanoparticles: Animal Model Study

Authors: Aniekan Peter, ECS Naidu, Edidiong Akang, U. Offor, R. Kalhapure, A. A. Chuturgoon, T. Govender, O. O. Azu

Abstract:

Introduction: Nano-drugs are novel innovations in the management of the human immunodeficiency virus (HIV) pandemic, especially for resistant strains of the virus in their sanctuary sites: the testis and the brain. However, there are safety concerns to be addressed before the full potential of this new drug delivery system can be achieved. Aim of study: Our study was designed to investigate the toxicity profile of Tenofovir Nanoparticles (TDF-N) synthesized by the University of KwaZulu-Natal (UKZN) nano-team for the prevention and treatment of HIV infection. Methodology: Ten adult male Sprague-Dawley rats maintained at the Animal House of the Biomedical Resources Unit, UKZN, were used for the study. The animals were weighed and divided into two groups of 5 animals each. Control animals (group A) were administered normal saline. A therapeutic dose (4.3 mg/kg) of TDF-N was administered to group B. At the end of four weeks, the animals were weighed and sacrificed. The liver and kidney were removed, fixed in formal saline, processed and stained using H/E, PAS and MT stains for light microscopy. Serum was obtained for a renal function test (RFT), liver function test (LFT) and full blood count (FBC) using appropriate analysers. Cellular measurements were done using ImageJ and Leica software 2.0. Data were analysed using GraphPad Prism 6; p-values < 0.05 were considered significant. Results: We observed no histological alterations in the liver or kidney and no changes in FBC, LFT or RFT between the TDF-N animals and the saline control. There were no significant differences in weight, organo-somatic index or histological measurements in the treatment group compared with the saline control. Conclusion/recommendations: TDF-N was not toxic to the liver, kidney or blood cells in our study. More studies using human subjects are recommended.

Keywords: tenofovir nanoparticles, liver, kidney, blood cells

Procedia PDF Downloads 156
77 Rejuvenating a Space into World Class Environment through Conservation of Heritage Architecture

Authors: Abhimanyu Sharma

Abstract:

India is known for its cultural heritage. While the country is rich in diversity along its length and breadth, the state of Jammu & Kashmir is world famous for the beautiful tourist destinations in its Kashmir region. However, equally worthy destinations are also located in the Jammu region of the state. For most of the last 50-60 years, the prime focus of development was centered on the Kashmir region, but now, with ever-increasing globalization, the focus is decentralizing throughout the country. Pertinently, the potential of the Jammu region needs to be incorporated into the world tourist map. One such spot in the Jammu region is 'Mubarak Mandi' - the palace complex and royal residence of the Maharaja of Jammu & Kashmir of the Dogra dynasty, located in the heart of Jammu city (the winter capital of the state). Although the place is of heritage importance, it lacks the supporting infrastructure to attract national and worldwide tourists. For such places, conservation and restoration of the existing structures are the potential tools to overcome the present limitations. The rejuvenation of this place through dynamic conservation techniques is the target of this paper. The paper deals with developing and restoring the areas within the whole campus with appropriate building materials, conservation techniques, etc., to attract a greater number of visitors by developing it into a prioritised tourist attraction point. The major thrust shall be on studying the criteria for developing the place, considering the psychological effect needed to create a socially interactive environment. Additionally, the thrust shall be on the spatial elements that will aid in creating a common platform for all kinds of tourists. Accordingly, conservation guidelines (or a model) shall be proposed through this paper so that the Jammu region becomes an equal contributor to the tourist graph of the country, as the Kashmir region is.

Keywords: conservation, heritage architecture, rejuvenating, restoration

Procedia PDF Downloads 266
76 Performance Comparison and Visualization of COMSOL Multiphysics, Matlab, and Fortran for Predicting the Reservoir Pressure on Oil Production in a Multiple Leases Reservoir with Boundary Element Method

Authors: N. Alias, W. Z. W. Muhammad, M. N. M. Ibrahim, M. Mohamed, H. F. S. Saipol, U. N. Z. Ariffin, N. A. Zakaria, M. S. Z. Suardi

Abstract:

This paper presents a performance comparison of computation software for solving the boundary element method (BEM). The BEM formulation is a numerical technique with high potential for solving advanced mathematical models that predict the production of oil wells in an arbitrarily shaped, multiple-lease reservoir. The limitations of data validation for ensuring that a program meets the accuracy of the mathematical model are the research motivation of this paper. Based on this, three steps are involved in validating the accuracy of the oil production simulation process. In the first step, the mathematical model, a partial differential equation (PDE) of Poisson-elliptic type, is identified to perform the BEM discretization. In the second step, the simulation of the 2D BEM discretization is implemented using the COMSOL Multiphysics and MATLAB programming environments. In the last step, the numerical performance indicators for both environments are analyzed against a validation implementation in Fortran. The performance comparisons of the numerical analysis are investigated in terms of percentage error, comparison graphs and 2D visualization of the pressure on oil production of the multiple-lease reservoir. According to the performance comparison, structured programming in Fortran is the preferred alternative for implementing an accurate numerical simulation of the BEM. In conclusion, the numerical performance evaluation shows that Fortran is well suited for capturing and visualizing the production of oil wells in arbitrarily shaped reservoirs.
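A minimal sketch of the percentage-error indicator used for the cross-solver comparison, assuming two pressure fields sampled at the same nodes; the arrays are stand-ins for exported solver results:

```python
# Percentage error of one solver's pressures against a reference run.
import numpy as np

p_fortran = np.array([101.3, 98.7, 95.2, 91.8])   # reference solution
p_matlab  = np.array([101.1, 98.9, 95.0, 92.1])   # solution under test

pct_error = 100.0 * np.abs(p_matlab - p_fortran) / np.abs(p_fortran)
print(pct_error.round(3), pct_error.max().round(3))
```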

Keywords: performance comparison, 2D visualization, COMSOL Multiphysics, MATLAB, Fortran, modelling and simulation, boundary element method, reservoir pressure

Procedia PDF Downloads 466
75 An Analysis of the Strategic Pathway to Building a Successful Mobile Advertising Business in Nigeria: From Strategic Intent to Competitive Advantage

Authors: Pius A. Onobhayedo, Eugene A. Ohu

Abstract:

Nigeria has one of the fastest growing mobile telecommunications industries in the world. In the absence of fixed-connection access to the Internet, access is primarily via mobile devices. It therefore provides a test case for how to penetrate the mobile market in an emerging economy. We also hope to contribute to a sparse literature on strategies employed in building successful data-driven mobile businesses in emerging economies. We therefore sought to identify and analyse the strategic approach taken in a successful locally born mobile data-driven business in Nigeria. The analysis was carried out through the framework of strategic intent and the competitive advantages developed from the conception of the company to date. This study is based on an exploratory investigation of an innovative digital company based in Nigeria specializing in the mobile advertising business. The projected growth and high adoption of mobile in this African country, coinciding with the smartphone revolution triggered by the launch of the iPhone in 2007, opened a new entrepreneurial horizon for the founder of the company, who reached the conclusion that 'the future is mobile'. This dream led to the establishment of three digital businesses, designed for convergence and complementarity of medium and content. The mobile ad subsidiary soon grew to become a truly African network with operations and campaigns across West, East and Southern Africa, successfully delivering campaigns in several African countries including Nigeria, Kenya, South Africa, Ghana, Uganda, Zimbabwe, and Zambia, amongst others. The company recently declared a 40% year-end profit, nine times that of the previous financial year. This study drew on an in-depth interview with the company's founder, analysis of primary and secondary data from and about the business, and case studies of digital marketing campaigns. We hinge our analysis on the concept of strategic intent, which has been proposed as an engine that drives the quest for sustainable strategic advantage in the global marketplace. Our goal was specifically to identify the strategic intents of the founder and how these were transformed creatively into processes that may have led to some distinct competitive advantages. Along with the strategic intents, we sought to identify the respective absorptive capacities that constituted favourable antecedents to the creation of such competitive advantages. Our recommendations and findings will be pivotal information for anybody wishing to invest in one of the world's fastest-growing technology business spaces - Africa.

Keywords: Africa, competitive advantage, competitive strategy, digital, mobile business, marketing, strategic intent

Procedia PDF Downloads 416
74 Design and Development of a Mechanical Force Gauge for the Square Watermelon Mold

Authors: Morteza Malek Yarand, Hadi Saebi Monfared

Abstract:

This study aimed at designing and developing a mechanical force gauge for the square watermelon mold for the first time. It also introduces the square watermelon's characteristics and its production limitations, and describes the mechanical force gauge's performance and the product itself. There are three main designable gauge models: a. hydraulic gauge, b. strain gauge, and c. mechanical gauge. The advantage of the hydraulic model is that it instantly displays the pressure and thus the force exerted by the melon. However, considering the inability to measure forces in all directions, complicated development, high cost, possible hydraulic fluid leaks into the fruit chamber and the possible influence of increased ambient temperature on the fluid pressure, the development of this gauge was ruled out. The second choice was to calculate pressure from the direct force measured by a strain gauge. The main advantage of strain gauges over spring types is their high measurement precision; but because the strain gauge working range does not conform with watermelon growth, the calculations ran into problems. Finally, the mechanical pressure gauge has several advantages, including the ability to measure forces and pressures on the mold surface during melon growth; the ability to display peak forces; the ability to produce a melon growth graph thanks to its continuous force measurements; the conformity of its manufacturing materials with the required physical conditions of melon growth; high air conditioning capability; the ability to let sunlight reach the melon rind (avoiding yellowish skin and quality loss); fast and straightforward calibration; no damage to the product during assembly and disassembly; visual check capability of the product within the mold; applicability to all growth environments (field, greenhouses, etc.); a simple process; and low costs.

Keywords: mechanical force gauge, mold, reshaped fruit, square watermelon

Procedia PDF Downloads 249
73 Structural Invertibility and Optimal Sensor Node Placement for Error and Input Reconstruction in Dynamic Systems

Authors: Maik Kschischo, Dominik Kahl, Philipp Wendland, Andreas Weber

Abstract:

Understanding and modelling real-world complex dynamic systems in biology, engineering and other fields is often made difficult by incomplete knowledge about the interactions between system states and by unknown disturbances to the system. In fact, most real-world dynamic networks are open systems receiving unknown inputs from their environment. To understand a system and to estimate the state dynamics, these inputs need to be reconstructed from output measurements. Reconstructing the input of a dynamic system from its measured outputs is an ill-posed problem if only a limited number of states are directly measurable. A first requirement for solving this problem is the invertibility of the input-output map. In our work, we exploit the fact that invertibility of a dynamic system is a structural property, which depends only on the network topology. Therefore, it is possible to check for invertibility using a structural invertibility algorithm which counts the number of node-disjoint paths linking inputs and outputs. The algorithm is efficient enough even for large networks of up to a million nodes. To understand structural features influencing the invertibility of a complex dynamic network, we analyze synthetic and real networks using the structural invertibility algorithm. We find that invertibility largely depends on the degree distribution and that dense random networks are easier to invert than sparse inhomogeneous networks. We show that real networks are often very difficult to invert unless the sensor nodes are carefully chosen. To overcome this problem, we present a sensor node placement algorithm to achieve invertibility with a minimum set of measured states. This greedy algorithm is very fast and is guaranteed to find an optimal sensor node set if one exists. Our results provide a practical approach to experimental design for open, dynamic systems. Since invertibility is a necessary condition for unknown input observers and data assimilation filters to work, it can be used as a preprocessing step to check whether these input reconstruction algorithms can be successful. If not, we can suggest additional measurements providing sufficient information for input reconstruction. Invertibility is also important for systems design and model building. Dynamic models are always incomplete, and synthetic systems act in an environment where they receive inputs or even attack signals from their exterior. Being able to monitor these inputs is an important design requirement, which can be achieved by our algorithms for invertibility analysis and sensor node placement.
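A minimal sketch of counting node-disjoint paths between one input and one output — the quantity the structural invertibility test counts — via the standard node-splitting reduction to maximum flow; the toy graph is illustrative:

```python
# Node-disjoint path count via max-flow after splitting each vertex.
import networkx as nx

def node_disjoint_paths(G, s, t):
    """Split each vertex v into v_in -> v_out with capacity 1, so a
    unit of flow can pass through any intermediate vertex only once."""
    H = nx.DiGraph()
    for v in G.nodes:
        cap = float("inf") if v in (s, t) else 1
        H.add_edge((v, "in"), (v, "out"), capacity=cap)
    for u, v in G.edges:
        H.add_edge((u, "out"), (v, "in"), capacity=1)
    flow, _ = nx.maximum_flow(H, (s, "out"), (t, "in"))
    return flow

G = nx.DiGraph([(0, 1), (0, 2), (1, 3), (2, 3), (0, 3)])
print(node_disjoint_paths(G, 0, 3))   # 3 vertex-disjoint routes
```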

Keywords: data-driven dynamic systems, inversion of dynamic systems, observability, experimental design, sensor node placement

Procedia PDF Downloads 127
72 Event Data Representation Based on Time Stamp for Pedestrian Detection

Authors: Yuta Nakano, Kozo Kajiwara, Atsushi Hori, Takeshi Fujita

Abstract:

In association with the wave of electric vehicles (EVs), low energy consumption systems have become more and more important. One of the key technologies for realizing low energy consumption is the dynamic vision sensor (DVS), also called an event sensor or neuromorphic vision sensor. This sensor has several notable features, such as high temporal resolution, which can reach 1 Mframe/s, and a high dynamic range (120 dB). However, the point that can contribute most to low energy consumption is its sparsity: the sensor only captures pixels that have an intensity change. In other words, there is no signal in areas without intensity change, so the sensor is more energy efficient than conventional sensors such as RGB cameras because redundant data can be removed. On the other hand, the data are difficult to handle because the data format is completely different from RGB images: the acquired signals are asynchronous and sparse, and each signal is composed of an x-y coordinate, a polarity (two values: +1 or -1) and a time stamp; it does not include intensity such as RGB values. Therefore, since existing algorithms cannot be used straightforwardly, a new processing algorithm has to be designed to cope with DVS data. To work around the data format differences, most prior art builds frame data and feeds it to deep learning models such as Convolutional Neural Networks (CNNs) for object detection and recognition. However, even with frame data, it is still difficult to achieve good performance due to the lack of intensity information. Although polarity is often used as intensity instead of RGB pixel values, polarity information alone is clearly not rich enough. In this context, we propose to use the timestamp information as the data representation fed to deep learning. Concretely, we first build frame data divided by a certain time period, then assign an intensity value according to the timestamp of each event within the frame; for example, a higher value is given to a more recent event. We expect this data representation to capture the features of moving objects in particular, because the timestamps encode movement direction and speed. Using this proposed method, we built our own dataset with a DVS fixed on a parked car, in order to develop a surveillance application that can detect persons around the car. We consider the DVS one of the ideal sensors for surveillance purposes because it can run for a long time with low energy consumption in a mostly static scene. For comparison, we reproduced a state-of-the-art method as a benchmark, which builds frames the same way as ours but feeds polarity information to the CNN. We then measured the object detection performance of the benchmark and our method on the same dataset. As a result, our method achieved an F1 score up to 7 points higher than the benchmark.
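A minimal sketch of the timestamp-based frame representation described above: each pixel stores a value derived from its most recent event's timestamp, so recency (and hence motion) survives in the frame; the event list is synthetic:

```python
# Timestamp frame: recent events map to values near 1, old ones near 0.
import numpy as np

H, W, T_FRAME = 4, 4, 0.033          # frame covers 33 ms
# Events as (x, y, polarity, timestamp-in-seconds).
events = [(0, 0, 1, 0.001), (1, 0, -1, 0.010),
          (1, 0, 1, 0.025), (3, 2, 1, 0.032)]

frame = np.zeros((H, W), dtype=np.float32)
for x, y, p, t in events:
    # Later events overwrite earlier ones at the same pixel, so the
    # frame encodes motion direction and speed, not just polarity.
    frame[y, x] = t / T_FRAME
print(frame.round(2))
```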

Keywords: event camera, dynamic vision sensor, deep learning, data representation, object recognition, low energy consumption

Procedia PDF Downloads 69
71 Safety Validation of Black-Box Autonomous Systems: A Multi-Fidelity Reinforcement Learning Approach

Authors: Jared Beard, Ali Baheri

Abstract:

As autonomous systems become more prominent in society, ensuring their safe application becomes increasingly important. This is clearly demonstrated by autonomous cars traveling through a crowded city or robots traversing a warehouse with heavy equipment. Human environments can be complex, having high-dimensional state and action spaces. This gives rise to two problems: analytic solutions may not be possible, and, in simulation-based approaches, searching the entirety of the problem space could be computationally intractable, ruling out formal methods. To overcome this, approximate solutions may seek to find failures or estimate their likelihood of occurrence. One such approach is adaptive stress testing (AST), which uses reinforcement learning to induce failures in the system. Its premise is that a learned model can be used to help find new failure scenarios, making better use of simulations. Despite the failures it does find, AST struggles to find particularly sparse failures and can be inclined to rediscover solutions similar to those found previously. To help overcome this, multi-fidelity learning can be used: information from lower-fidelity simulations can be used to build up samples less expensively and to cover the solution space more effectively, finding a broader set of failures. Recent work in multi-fidelity learning has passed information bidirectionally using "knows what it knows" (KWIK) reinforcement learners to minimize the number of samples in high-fidelity simulators (thereby reducing computation time and load). The contribution of this work, then, is the development of a bidirectional multi-fidelity AST framework. Such an algorithm uses multi-fidelity KWIK learners in an adversarial context to find failure modes. Thus far, a KWIK learner has been used to train an adversary in a grid world to prevent an agent from reaching its goal, demonstrating the utility of KWIK learners in an AST framework. The next step is the implementation of the described bidirectional multi-fidelity AST framework. Testing will be conducted in a grid world containing an agent attempting to reach a goal position and an adversary tasked with intercepting the agent, as demonstrated previously. Fidelities will be modified by adjusting the size of a time-step, with higher fidelity effectively allowing for more responsive closed-loop feedback. Results will compare the single KWIK AST learner with the multi-fidelity algorithm with respect to the number of samples, distinct failure modes found, and the relative effect of learning over a number of trials.
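A minimal sketch of the fidelity knob described above — the same toy closed-loop dynamics integrated with a coarse versus a fine time-step; the dynamics are an illustrative stand-in, not the grid-world testbed:

```python
# Fidelity as time-step size: coarse dt is cheap and approximate,
# fine dt is expensive and accurate.
def rollout(dt, horizon=10.0, k=0.3):
    """Euler-integrate a toy closed-loop system for a fixed horizon."""
    x, t = 1.0, 0.0
    while t < horizon:
        x += dt * k * x * (1.0 - x / 5.0)   # toy logistic dynamics
        t += dt
    return x

low = rollout(dt=1.0)     # fast low-fidelity run
high = rollout(dt=0.001)  # slow high-fidelity run
# Low-fidelity runs pre-screen candidate failures; high fidelity confirms.
print(low, high)
```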

Keywords: multi-fidelity reinforcement learning, multi-fidelity simulation, safety validation, falsification

Procedia PDF Downloads 129
70 Completion of the Modified World Health Organization (WHO) Partograph during Labour in Public Health Institutions of Addis Ababa, Ethiopia

Authors: Engida Yisma, Berhanu Dessalegn, Ayalew Astatkie, Nebreed Fesseha

Abstract:

Background: The World Health Organization (WHO) recommends using the partograph to follow labour and delivery, with the objective of improving health care and reducing maternal and foetal morbidity and death. Methods: A retrospective document review was undertaken to assess the completion of the modified WHO partograph during labour in public health institutions of Addis Ababa, Ethiopia. A total of 420 modified WHO partographs used to monitor mothers in labour at five public health institutions that provide maternity care were reviewed. A structured checklist was used to gather the required data. The collected data were analyzed using SPSS version 16.0. Frequency distributions, cross-tabulations and a graph were used to describe the results of the study. Results: All facilities were using the modified WHO partograph, but correct completion was very low. Of the 420 partographs reviewed across the five health facilities, foetal heart rate was recorded to the recommended standard in 129 (30.7%), cervical dilatation in 138 (32.9%) and uterine contractions in 87 (20.7%). Descent of the presenting part was not documented in 353 (84%), and moulding was not recorded in 364 (86.7%) of the partographs reviewed. The state of the liquor was documented in 113 (26.9%), while maternal blood pressure was recorded to standard in only 78 (18.6%). Conclusions: This study showed poor completion of the modified WHO partograph during labour in public health institutions of Addis Ababa, Ethiopia. The findings may reflect poor management of labour and indicate the need for pre-service and periodic on-the-job training of health workers in the proper completion of the partograph. Regular supportive supervision, provision of guidelines and a mandatory health facility policy are also needed in support of a collaborative effort to reduce maternal and perinatal deaths.

Keywords: modified WHO partograph, completion, public health institutions, Addis Ababa, Ethiopia

Procedia PDF Downloads 316
69 Normalized P-Laplacian: From Stochastic Game to Image Processing

Authors: Abderrahim Elmoataz

Abstract:

More and more contemporary applications involve data in the form of functions defined on irregular and topologically complicated domains (images, meshes, point clouds, networks, etc.). Such data are not organized as familiar digital signals and images sampled on regular lattices. However, they can be conveniently represented as graphs, where each vertex represents measured data and each edge represents a relationship (connectivity, affinity or interaction) between two vertices. Processing and analyzing these types of data is a major challenge for both the image and machine learning communities. Hence, it is very important to transfer to graphs and networks many of the mathematical tools which were initially developed on usual Euclidean spaces and proven to be efficient for many inverse problems and applications dealing with usual image and signal domains. Historically, the main tools for the study of graphs or networks come from combinatorics and graph theory. In recent years there has been increasing interest in one of the major mathematical tools for signal and image analysis: Partial Differential Equations (PDEs) and variational methods on graphs. The normalized p-Laplacian operator was recently introduced to model a stochastic game called the tug-of-war game with noise. Part of the interest in this class of operators arises from the fact that it includes, as particular cases, the infinity Laplacian, the mean curvature operator and the traditional Laplacian, which have been used extensively to model and solve problems in image processing. The purpose of this paper is to introduce and study a new class of normalized p-Laplacians on graphs. The introduction is based on the extension of p-harmonious functions, introduced as discrete approximations of both the infinity Laplacian and p-Laplacian equations. Finally, we propose to use these operators as a framework for solving many inverse problems in image processing.
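The p-harmonious viewpoint lends itself to a very small sketch: on a graph, iterate an update that blends the neighbourhood average (the classical 2-Laplacian behaviour, the "noise" moves of the game) with the midrange of the neighbourhood extremes (the infinity-Laplacian behaviour, the tug-of-war moves). The toy graph, boundary values and blending weight alpha below are illustrative assumptions; the exact weights depend on p and on the chosen normalization.

```python
# Sketch of a p-harmonious fixed-point iteration on a graph, blending the
# neighbourhood average (2-Laplacian) with the midrange (infinity-Laplacian).
# The graph, boundary values and the weight alpha are illustrative.
import numpy as np

adjacency = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2]}
u = np.array([1.0, 0.0, 0.0, 0.0])   # initial vertex values
boundary = {0: 1.0, 3: 0.0}          # Dirichlet conditions, held fixed

alpha = 0.5                          # alpha -> 0: pure mean; alpha -> 1: pure midrange
for _ in range(200):
    new_u = u.copy()
    for v, nbrs in adjacency.items():
        if v in boundary:
            continue
        vals = u[nbrs]
        midrange = 0.5 * (vals.max() + vals.min())
        new_u[v] = alpha * midrange + (1 - alpha) * vals.mean()
    u = new_u
print(u)  # interpolates the boundary values across the interior vertices
```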

Keywords: normalized p-laplacian, image processing, stochastic game, inverse problems

Procedia PDF Downloads 487
68 Vulnerability Assessment of Healthcare Interdependent Critical Infrastructure Coloured Petri Net Model

Authors: N. Nivedita, S. Durbha

Abstract:

Critical Infrastructure (CI) consists of services and technological networks such as healthcare, transport, water supply, electricity supply, information technology, etc. These systems are necessary for the well-being and effective functioning of society. Critical Infrastructures can be represented as nodes in a network, connected through a set of links depicting the logical relationships among them; these nodes are interdependent and interact with each other at various levels, such that the state of each infrastructure influences, or is correlated to, the state of another. Disruption in the service of one infrastructure node of the network during a disaster would lead to cascading and escalating disruptions across the other infrastructure nodes. Healthcare Infrastructure is one such Critical Infrastructure that depends upon a complex interdependent network of other Critical Infrastructures, and during disasters it is vital for the Healthcare Infrastructure to be protected, accessible and prepared for mass casualties. To reduce the consequences of a disaster on Critical Infrastructure and to ensure a resilient Critical Health Infrastructure network, the interdependencies between the infrastructures must be understood, modeled and analyzed. This paper presents the interdependencies related to Healthcare Critical Infrastructure using a Hierarchical Coloured Petri Net modeling approach, with a flood scenario as the disaster disrupting the infrastructure nodes. The model properties are analyzed for the various state changes which occur when there is disruption or damage to any of the Critical Infrastructures. The failure probabilities of the interconnected systems are calculated by deriving a reachability graph, which is then mapped to a Markov chain. By analytically solving and analyzing the Markov chain, the overall vulnerability of the Healthcare CI HCPN model is demonstrated. The entire model is integrated with a geographic-information-based decision support system to visualize the dynamic behavior of the interdependent Healthcare and related CI network in a geographically based environment.
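The reachability-graph-to-Markov-chain step can be illustrated with a deliberately small example: treat each reachable marking of the net as a chain state, attach transition probabilities, and solve the resulting linear system analytically. The three-state chain and its probabilities below are invented for illustration; they are not derived from the paper's model.

```python
# Toy illustration of mapping a reachability graph to a Markov chain and
# solving it analytically; the states and probabilities are invented.
import numpy as np

# States: 0 = all CI operational, 1 = power disrupted, 2 = healthcare failed.
P = np.array([
    [0.90, 0.08, 0.02],   # flood may disrupt power or hit healthcare directly
    [0.10, 0.60, 0.30],   # power loss cascades to healthcare with prob. 0.30
    [0.00, 0.00, 1.00],   # healthcare failure modeled as an absorbing state
])

# Expected number of steps until absorption (healthcare failure) from each
# transient state: t = (I - Q)^(-1) * 1, where Q is the transient block.
Q = P[:2, :2]
t = np.linalg.solve(np.eye(2) - Q, np.ones(2))
print(t)  # ~[15.0, 6.25]: failure arrives much sooner once power is disrupted
```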

Keywords: critical infrastructure interdependency, hierarchical coloured Petri net, healthcare critical infrastructure, Petri Nets, Markov chain

Procedia PDF Downloads 497
67 Elastodynamic Response of Shear Wave Dispersion in a Multi-Layered Concentric Cylinders Composed of Reinforced and Piezo-Materials

Authors: Sunita Kumawat, Sumit Kumar Vishwakarma

Abstract:

The present study fundamentally focuses on analyzing the limitations and transference of horizontally polarized shear waves (SH waves) in a four-layered compounded cylinder. The geometrical structure comprises concentric cylinders of infinite length composed of self-reinforced (SR), fibre-reinforced (FR), piezo-magnetic (PM) and piezo-electric (PE) materials. The entire structure is assumed to be pre-stressed along the azimuthal direction. In order to make the structure sensitive for applications pertaining to sensors and actuators, the PM and PE cylinders have been placed in the outer part of the geometry, whereas, to provide stiffness and stability, the inner part consists of the self-reinforced and fibre-reinforced media. The common boundary between each pair of cylinders is considered imperfectly bonded. At the interface of the PE and PM media, mechanical, electrical, magnetic and inter-coupled types of imperfection have been considered. The closed form of the dispersion relation has been deduced for two contrasting cases, i.e., electrically open and magnetically short (EOMS) and electrically short and magnetically open (ESMO) circuit conditions. Dispersion curves have been plotted to illustrate the salient features of parameters such as the normalized imperfect-interface parameters, the initial stresses and the radii of the concentric cylinders. The comparative effect of each of these parameters on the phase velocity of the wave has been listed and marked individually. Every graph has been presented with two consecutive modes in succession for a comprehensive understanding. This theoretical study may be applied to improve the performance of surface acoustic wave (SAW) sensors and actuators consisting of piezo-electric quartz and piezo-composite concentric cylinders.
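Dispersion curves of this kind are typically traced numerically: for each wavenumber, scan the admissible phase-velocity interval for a sign change of the dispersion function and refine the root. The sketch below uses a classical Love-wave-type relation as a stand-in, since the paper's closed-form relation for the four-layered piezo-composite cylinder is not reproduced here; all material constants are illustrative.

```python
# Sketch of tracing a dispersion curve: for each wavenumber k, locate the
# phase velocity c where the dispersion function vanishes. A Love-wave-type
# relation stands in for the paper's closed form; constants are illustrative.
import numpy as np
from scipy.optimize import brentq

beta1, beta2, mu_ratio, h = 1.0, 1.8, 0.6, 1.0  # toy shear speeds, stiffness ratio, thickness

def dispersion(c, k):
    s1 = np.sqrt(max(c**2 / beta1**2 - 1.0, 1e-12))
    s2 = np.sqrt(max(1.0 - c**2 / beta2**2, 1e-12))
    return np.tan(k * h * s1) - mu_ratio * s2 / s1

cs = np.linspace(beta1 * 1.001, beta2 * 0.999, 2000)
for k in np.linspace(0.5, 5.0, 10):
    fs = np.array([dispersion(c, k) for c in cs])
    # Genuine roots cross from negative to positive; positive-to-negative
    # jumps are tangent singularities, not roots.
    idx = np.where((fs[:-1] < 0) & (fs[1:] > 0))[0]
    if idx.size:
        c_root = brentq(dispersion, cs[idx[0]], cs[idx[0] + 1], args=(k,))
        print(f"k = {k:4.2f}  phase velocity = {c_root:5.3f}")
```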

Keywords: self-reinforced, fibre-reinforced, piezo-electric, piezo-magnetic, interfacial imperfection

Procedia PDF Downloads 82
66 DeepLig: A de-novo Computational Drug Design Approach to Generate Multi-Targeted Drugs

Authors: Anika Chebrolu

Abstract:

Mono-targeted drugs can have limited efficacy against complex diseases. Recently, multi-target drug design has emerged as a promising tool for fighting these challenging diseases, but the scope of current computational approaches to multi-target drug design is limited. DeepLig presents a de-novo drug discovery platform that uses reinforcement learning to generate and optimize novel, potent, multi-targeted drug candidates against protein targets. DeepLig’s model consists of two networks in interplay: a generative network and a predictive network. The generative network, a Stack-Augmented Recurrent Neural Network, utilizes a stack memory unit to remember and recognize molecular patterns when generating novel ligands from scratch. The generative network passes each newly created ligand to the predictive network, which uses multiple Graph Attention Networks simultaneously to forecast the average binding affinity of the generated ligand towards multiple target proteins. With each iteration, given feedback from the predictive network, the generative network learns to optimize itself to create molecules with a higher average binding affinity towards multiple proteins. DeepLig was evaluated on its ability to generate multi-target ligands against two distinct proteins, against three distinct proteins, and against two distinct binding pockets on the same protein. In each test case, DeepLig was able to create a library of valid, synthetically accessible and novel molecules with optimal and equipotent binding energies. We propose that DeepLig provides an effective approach to designing multi-targeted drug therapies that can potentially show higher success rates during in-vitro trials.
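The generator-predictor feedback loop can be reduced to a small policy-gradient sketch: a recurrent generator samples token sequences, a stand-in scorer plays the role of the Graph Attention Network predictors by returning an average affinity over targets, and REINFORCE nudges the generator towards higher-reward ligands. The toy vocabulary, the dummy scorer and the plain GRU (instead of a stack-augmented RNN) are all simplifying assumptions.

```python
# Policy-gradient sketch of the generator/predictor interplay; the GRU
# generator, toy vocabulary and dummy affinity scorer are stand-ins.
import torch
import torch.nn as nn

VOCAB = list("CNOF()=#123")          # toy SMILES alphabet (assumption)

class Generator(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(len(VOCAB), hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, len(VOCAB))

    def sample(self, max_len=20):
        h, tok = None, torch.zeros(1, 1, dtype=torch.long)
        logps, toks = [], []
        for _ in range(max_len):
            y, h = self.rnn(self.emb(tok), h)
            dist = torch.distributions.Categorical(logits=self.out(y[:, -1]))
            nxt = dist.sample()
            logps.append(dist.log_prob(nxt))
            toks.append(nxt.item())
            tok = nxt.unsqueeze(0)
        return toks, torch.stack(logps).sum()

def predicted_affinity(tokens, n_targets=2):
    # Stand-in for the Graph Attention Network predictors: returns a fake
    # "average binding affinity" over n_targets (here a diversity heuristic).
    per_target = [len(set(tokens)) / len(VOCAB)] * n_targets
    return sum(per_target) / n_targets

gen = Generator()
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
for step in range(200):
    toks, logp = gen.sample()
    reward = predicted_affinity(toks)   # feedback from the predictive network
    loss = -reward * logp               # REINFORCE: favor high-affinity ligands
    opt.zero_grad()
    loss.backward()
    opt.step()
```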

Keywords: drug design, multitargeticity, de-novo, reinforcement learning

Procedia PDF Downloads 57
65 Environmental Impact Assessment in Mining Regions with Remote Sensing

Authors: Carla Palencia-Aguilar

Abstract:

Calculations of the net carbon balance can be obtained by means of Net Biome Productivity (NBP), Net Ecosystem Productivity (NEP) and Net Primary Production (NPP). The latter is an important component of the biosphere carbon cycle and is easily obtained from the MODIS MOD17A3HGF product; however, the results are only available yearly. To overcome this limitation in data availability, bands 33 to 36 of MODIS MYD021KM (available daily) were analyzed and compared with NPP data from the years 2000 to 2021 at 7 sites in the Colombian territory where surface mining takes place. Coal, gold, iron and limestone were the minerals of interest. Scales and units, as well as thermal anomalies, were considered for the net carbon balance at each location. The NPP time series from the satellite images were filtered using two MATLAB filters: a first-order filter and a discrete transfer function. After filtering the NPP time series, comparing the graphed results with the values from the satellite images, and running a linear regression, the results showed R² values from 0.72 to 0.85. To establish comparable units between NPP and bands 33 to 36, the EPA Greenhouse Gas Equivalencies Calculator was used. The comparison was established in two ways: one by summing all the data per point per year, and the other by taking the average over 46 weeks and finding the percentage that the value represented with respect to NPP. The former underestimated the total CO2 emissions. The results also showed that coal and gold mining over the last 22 years had lower CO2 emissions than limestone, with yearly averages of 143 kton CO2 eq for gold, 152 kton CO2 eq for coal and 287 kton CO2 eq for iron; limestone emissions varied from 206 to 441 kton CO2 eq. The maximum emission values from unfiltered data correspond to 165 kton CO2 eq for gold, 188 kton CO2 eq for coal and 310 kton CO2 eq for iron, with limestone varying from 231 to 490 kton CO2 eq. If the most polluting limestone site improved its production technology, limestone could be held to a maximum of 318 kton CO2 eq per year, a value very similar to that of iron. The importance of gathering such data is to establish benchmarks in order to attain the 2050 zero-emissions goal.
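The filtering-and-regression step can be sketched as follows; here scipy's lfilter stands in for the MATLAB first-order filter, and the weekly series is synthetic, since the MODIS data themselves are not reproduced here.

```python
# Sketch of the filtering-and-regression comparison: a first-order low-pass
# filter (standing in for the MATLAB filters) applied to a weekly series,
# then a linear regression with R^2. All data here are synthetic.
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(0)
weeks = np.arange(46 * 22)                       # 22 years of 46 weekly samples
npp_weekly = 100 + 10 * np.sin(2 * np.pi * weeks / 46) + rng.normal(0, 5, weeks.size)

a = 0.1                                          # first-order IIR coefficient
filtered = lfilter([a], [1, -(1 - a)], npp_weekly)  # y[n] = a*x[n] + (1-a)*y[n-1]

# Aggregate to yearly means and regress the raw series on the filtered one.
yearly_raw = npp_weekly.reshape(22, 46).mean(axis=1)
yearly_filt = filtered.reshape(22, 46).mean(axis=1)
slope, intercept = np.polyfit(yearly_filt, yearly_raw, 1)
pred = slope * yearly_filt + intercept
r2 = 1 - np.sum((yearly_raw - pred) ** 2) / np.sum((yearly_raw - yearly_raw.mean()) ** 2)
print(f"R^2 = {r2:.2f}")
```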

Keywords: carbon dioxide, NPP, MODIS, mining

Procedia PDF Downloads 69