Search results for: computational vision
595 Artificial Intelligence in Bioscience: The Next Frontier
Authors: Parthiban Srinivasan
Abstract:
With recent advances in computational power and access to enough data in biosciences, artificial intelligence methods are increasingly being used in drug discovery research. These methods are essentially a series of advanced, statistics-based exercises that review the past to indicate the likely future. Our goal is to develop a model that accurately predicts biological activity and toxicity parameters for novel compounds. We have compiled a robust library of over 150,000 chemical compounds with different pharmacological properties from the literature and public domain databases. The compounds are stored in the simplified molecular-input line-entry system (SMILES), a commonly used text encoding for organic molecules. We utilize an automated process to generate an array of numerical descriptors (features) for each molecule. Redundant and irrelevant descriptors are eliminated iteratively. Our prediction engine is based on a portfolio of machine learning algorithms. We found the Random Forest algorithm to be a better choice for this analysis. We captured the non-linear relationships in the data and formed a prediction model with reasonable accuracy by averaging across a large number of randomized decision trees. Our next step is to apply a deep neural network (DNN) algorithm to predict the biological activity and toxicity properties. We expect the DNN algorithm to give better results and improve the accuracy of the prediction. This presentation will review these prominent machine learning and deep learning methods and our implementation protocols, and discuss the usefulness of these techniques in biomedical and health informatics.
Keywords: deep learning, drug discovery, health informatics, machine learning, toxicity prediction
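As a concrete illustration of the workflow this abstract outlines (SMILES in, numerical descriptors out, Random Forest on top), a minimal Python sketch is given below. The library choices (RDKit, scikit-learn), the five descriptors, and the toy compounds and labels are assumptions made for illustration; this is not the authors' pipeline.

```python
# Minimal sketch of a descriptor-based Random Forest workflow (illustrative only).
from rdkit import Chem
from rdkit.Chem import Descriptors
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import VarianceThreshold

smiles = ["CCO", "c1ccccc1", "CC(=O)Oc1ccccc1C(=O)O", "CCN(CC)CC", "C1CCCCC1"]
labels = [0, 1, 1, 0, 0]  # hypothetical activity/toxicity classes

def featurize(smi):
    """Turn a SMILES string into a small vector of numerical descriptors."""
    mol = Chem.MolFromSmiles(smi)
    return [Descriptors.MolWt(mol), Descriptors.MolLogP(mol),
            Descriptors.TPSA(mol), Descriptors.NumHDonors(mol),
            Descriptors.NumHAcceptors(mol)]

X = [featurize(s) for s in smiles]

# Drop constant (redundant) descriptors, mirroring the iterative pruning step.
X = VarianceThreshold(threshold=0.0).fit_transform(X)

# Averaging over many randomized trees captures non-linear structure in the data.
model = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, labels)
print(model.predict(X))  # in practice, evaluate on held-out compounds
```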
Procedia PDF Downloads 357
594 Proposed Design Principles for Low-Income Housing in South Africa
Authors: Gerald Steyn
Abstract:
Despite the huge number of identical, tiny, boxy, freestanding houses built by the South African government after the advent of democracy in 1994, squatter camps continue to mushroom, and there is no evidence that the backlog is being reduced. Not only is the wasteful low-density detached-unit approach of the past being perpetuated, but the social, spatial, and economic marginalization is worse than before 1994. The situation is precarious since squatters are vulnerable to fires and flooding. At the same time, the occupants of the housing schemes are trapped far from employment opportunities or any public amenities. Despite these insecurities, the architectural, urban design, and city planning professions are puzzlingly quiet. Design projects address these issues only at the universities, albeit inevitably with somewhat Utopian notions. Geoffrey Payne, the renowned urban housing and urban development consultant and researcher focusing on issues in the Global South, once proclaimed that “we do not have a housing problem – we have a settlement problem.” This dictum was used as the guiding philosophy to conceptualize urban design and architectural principles that foreground the needs of low-income households and allow them to be fully integrated into the larger conurbation. Information was derived from intensive research over two decades, involving frequent visits to informal settlements, historic Black townships, and rural villages. Observations, measured site surveys, and interviews resulted in several scholarly articles from which a set of desirable urban and architectural criteria could be extracted. To formulate culturally appropriate design principles, existing vernacular and informal patterns were analyzed, reconciled with contemporary designs that align with the requirements for the envisaged settlement attributes, and reimagined as residential design principles. Five interrelated design principles are proposed, ranging in scale from (1) Integrating informal settlements into the city, (2) linear neighborhoods, (3) market streets as wards, (4) linear neighborhoods, and (5) typologies and densities for clustered and aggregated patios and courtyards. Each design principle is described, first in terms of its context and associated issues of concern, followed by a discussion of the patterns available to inform a possible solution, and finally, an explanation and graphic illustration of the proposed design. The approach is predominantly bottom-up since each of the five principles is unfolded from existing informal and vernacular practices studied in situ. They are, however, articulated and represented in terms of contemporary design language. Contrary to an idealized vision of housing for South Africa’s low-income urban households, this study proposes actual principles for critical assessment by peers in the tradition of architectural research in design.Keywords: culturally appropriate design principles, informal settlements, South Africa’s housing backlog, squatter camps
Procedia PDF Downloads 49
593 Effect of Depth on Texture Features of Ultrasound Images
Authors: M. A. Alqahtani, D. P. Coleman, N. D. Pugh, L. D. M. Nokes
Abstract:
In diagnostic ultrasound, the echographic B-scan texture is an important area of investigation, since it can be analyzed to characterize the histological state of internal tissues. An important factor requiring consideration when evaluating ultrasonic tissue texture is the depth. The attenuation of ultrasound with depth, the size of the region of interest, the gain, and the dynamic range are important variables to consider, as they can influence the analysis of texture features. These sources of variability have to be considered carefully when evaluating image texture, as different settings might influence the resultant image. The aim of this study is to investigate the effect of depth on texture features in vivo using a 3D ultrasound probe. The medial head of the left gastrocnemius muscle of 10 healthy subjects was scanned. Two regions, A and B, were defined at different depths within the gastrocnemius muscle boundary. The size of both ROIs was 280 x 20 pixels, and the distance between regions A and B was kept constant at 5 mm. Texture parameters, including gray level, variance, skewness, kurtosis, co-occurrence matrix, run length matrix, gradient, autoregressive (AR) model, and wavelet transform, were extracted from the images. The paired t-test was used to test the depth effect for the normally distributed data, and the Wilcoxon-Mann-Whitney test was used for the non-normally distributed data. The gray level, variance, and run length matrix were significantly lowered when the depth increased, while the other texture parameters showed similar values at the two depths. All the texture parameters showed no significant difference between depths A and B (p > 0.05) except for gray level, variance, and run length matrix (p < 0.05). This indicates that gray level, variance, and run length matrix are depth dependent.
Keywords: ultrasound image, texture parameters, computational biology, biomedical engineering
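A minimal sketch of the kind of per-ROI texture extraction and paired statistical comparison described above is shown below. The synthetic images, the GLCM settings, and the normality-based switch between the paired t-test and the Wilcoxon test are illustrative assumptions, not the study's actual protocol.

```python
# Illustrative only: texture features from two ROIs at different depths, paired comparison.
import numpy as np
from scipy import stats
from skimage.feature import graycomatrix, graycoprops  # 'greycomatrix' in older scikit-image

rng = np.random.default_rng(0)
subjects = [rng.integers(0, 256, size=(400, 400), dtype=np.uint8) for _ in range(10)]

def roi_features(img, top):
    roi = img[top:top + 20, 60:60 + 280]     # 280 x 20 pixel ROI, as in the study
    glcm = graycomatrix(roi, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return roi.mean(), roi.var(), graycoprops(glcm, "contrast")[0, 0]

shallow = np.array([roi_features(img, 50) for img in subjects])    # region A
deep = np.array([roi_features(img, 200) for img in subjects])      # region B

for i, name in enumerate(["gray level", "variance", "GLCM contrast"]):
    a, b = shallow[:, i], deep[:, i]
    # paired t-test if the paired differences look normal, otherwise Wilcoxon
    if stats.shapiro(a - b).pvalue > 0.05:
        p = stats.ttest_rel(a, b).pvalue
    else:
        p = stats.wilcoxon(a, b).pvalue
    print(f"{name}: p = {p:.3f}")
```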
Procedia PDF Downloads 295
592 Assessment of Air Pollutant Dispersion and Soil Contamination: The Critical Role of MATLAB Modeling in Evaluating Emissions from the Covanta Municipal Solid Waste Incineration Facility
Authors: Jadon Matthias, Cindy Dong, Ali Al Jibouri, Hsin Kuo
Abstract:
The environmental impact of emissions from the Covanta Waste-to-Energy facility in Burnaby, BC, was comprehensively evaluated, focusing on the dispersion of air pollutants and the subsequent assessment of heavy metal contamination in surrounding soils. A Gaussian Plume Model, implemented in MATLAB, was utilized to simulate the dispersion of key pollutants to understand their atmospheric behaviour and potential deposition patterns. The MATLAB code developed for this study enhanced the accuracy of pollutant concentration predictions and provided capabilities for visualizing pollutant dispersion in 3D plots. Furthermore, the code could predict the maximum concentration of pollutants at ground level, eliminating the need to use the Ranchoux model for predictions. Complementing the modelling approach, empirical soil sampling and analysis were conducted to evaluate heavy metal concentrations in the vicinity of the facility. This integrated methodology underscored the importance of computational modelling in air pollution assessment and highlighted the necessity of soil analysis to obtain a holistic understanding of environmental impacts. The findings emphasized the effectiveness of current emissions controls while advocating for ongoing monitoring to safeguard public health and environmental integrity.Keywords: air emissions, Gaussian Plume Model, MATLAB, soil contamination, air pollution monitoring, waste-to-energy, pollutant dispersion visualization, heavy metal analysis, environmental impact assessment, emission control effectiveness
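Since the abstract centres on a Gaussian Plume Model, a minimal Python sketch of that calculation is given below (the study itself used MATLAB). The emission rate, stack height, wind speed, and the Briggs-type dispersion coefficients are assumed values for illustration only.

```python
# Illustrative Gaussian plume calculation with ground reflection (not the study's MATLAB code).
import numpy as np

Q = 1.0      # emission rate, g/s (assumed)
u = 4.0      # wind speed at stack height, m/s (assumed)
H = 60.0     # effective stack height, m (assumed)

def sigma(x):
    """Approximate dispersion coefficients for a neutral stability class (assumed power laws)."""
    sigma_y = 0.08 * x / np.sqrt(1 + 0.0001 * x)
    sigma_z = 0.06 * x / np.sqrt(1 + 0.0015 * x)
    return sigma_y, sigma_z

def concentration(x, y, z):
    """Gaussian plume concentration (g/m^3) at downwind distance x, crosswind y, height z."""
    sy, sz = sigma(x)
    lateral = np.exp(-y**2 / (2 * sy**2))
    vertical = np.exp(-(z - H)**2 / (2 * sz**2)) + np.exp(-(z + H)**2 / (2 * sz**2))
    return Q / (2 * np.pi * u * sy * sz) * lateral * vertical

# Maximum ground-level concentration along the plume centreline
x = np.linspace(50, 5000, 500)
c = concentration(x, 0.0, 0.0)
print(f"max ground-level concentration {c.max():.2e} g/m^3 at x = {x[c.argmax()]:.0f} m")
```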
Procedia PDF Downloads 16
591 Heat Sink Optimization for a High Power Wearable Thermoelectric Module
Authors: Zohreh Soleimani, Sally Salome Shahzad, Stamatis Zoras
Abstract:
As a result of current energy and environmental issues, the human body is recognized as one of the promising candidates for converting wasted heat into electricity (Seebeck effect). The thermoelectric generator (TEG) is one of the most prevalent means of harvesting body heat and converting it into eco-friendly electrical power. However, the uneven distribution of body heat and the curved geometry of the body restrict harvesting an adequate amount of energy. To transform the heat radiated by the body into power effectively, the most direct solution is to conform the thermoelectric generator (TEG) to the arbitrary surface of the body and increase the temperature difference across the thermoelectric legs. Accordingly, a computational survey through COMSOL Multiphysics is presented in this paper, with the main focus on the impact of integrating a flexible wearable TEG with a corrugated-shaped heat sink on the module power output. To eliminate external parameters (temperature, air flow, humidity), the simulations are conducted at indoor thermal levels and with the wearer stationary. The full thermoelectric characterization of the proposed TEG fabricated with a wavy-shaped heat sink has been computed, leading to a maximum power output of 25 µW/cm² at a temperature gradient of nearly 13°C. It is noteworthy that, owing to the flexibility of the proposed TEG and heat sink, the applicability and efficiency of the module remain high even on the curved surfaces of the body. As a consequence, the results demonstrate the superiority of such a TEG over state-of-the-art counterparts fabricated with no heat sink and offer a new train of thought for the development of self-sustained and unobtrusive wearable power supplies that generate energy from low-grade heat dissipated from the body.
Keywords: device simulation, flexible thermoelectric module, heat sink, human body heat
Procedia PDF Downloads 151
590 Modeling and Simulation of Secondary Breakup and Its Influence on Fuel Spray in High Torque Low Speed Diesel Engine
Authors: Mohsin Raza, Rizwan Latif, Syed Adnan Qasim, Imran Shafi
Abstract:
High torque low-speed diesel engines have a wide range of industrial and commercial applications. In the literature, a lot of work has been done on high-speed diesel engines, while research on high torque low-speed engines is rare. Fuel injection plays a key role in engine efficiency and in the reduction of exhaust emissions, and fuel breakup plays a critical role in air-fuel mixing and spray combustion. The current study numerically examines an important phenomenon in spray combustion, namely the deformation and breakup of liquid drops in a compression ignition internal combustion engine. The secondary breakup and its influence on the spray and on the characteristics of the compressed in-cylinder gas have been calculated using simulation software under high torque low-speed diesel-like conditions. The secondary spray breakup is modeled with KH-RT instabilities. The continuous field is described by a turbulence model, and the dynamics of the dispersed droplets is modeled by a Lagrangian tracking scheme. The results obtained with the KH-RT model are compared against other default methods in OpenFOAM and against published experimental data, within a CFD (Computational Fluid Dynamics) framework. These numerical simulations, carried out in OpenFOAM and MATLAB, are analyzed over the complete 720-degree four-stroke engine cycle at a low engine speed, so that favorable agreement can be achieved. The results thus obtained will be analyzed for better evaporation in the near-nozzle region. The proposed analyses will further help in achieving better engine efficiency, lower emissions, and improved fuel economy.
Keywords: diesel fuel, KH-RT, Lagrangian, OpenFOAM, secondary breakup
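Secondary breakup models such as KH-RT are conventionally characterized in terms of the droplet Weber and Ohnesorge numbers. The short sketch below computes these groups for assumed diesel-spray conditions and classifies the regime with approximate thresholds (which vary between authors); it is an illustration, not the OpenFOAM implementation.

```python
# Non-dimensional groups governing secondary droplet breakup (illustrative values only).
import math

rho_gas = 25.0       # compressed in-cylinder gas density, kg/m^3 (assumed)
rho_liq = 830.0      # diesel fuel density, kg/m^3 (assumed)
sigma = 0.025        # surface tension, N/m (assumed)
mu_liq = 2.5e-3      # fuel dynamic viscosity, Pa.s (assumed)
u_rel = 120.0        # droplet-gas relative velocity, m/s (assumed)
d = 20e-6            # droplet diameter, m (assumed)

We = rho_gas * u_rel**2 * d / sigma                 # gas Weber number
Oh = mu_liq / math.sqrt(rho_liq * sigma * d)        # Ohnesorge number

# Approximate low-Oh regime boundaries (thresholds differ between authors)
if We < 12:
    regime = "no breakup / vibrational"
elif We < 50:
    regime = "bag breakup"
elif We < 100:
    regime = "multimode breakup"
else:
    regime = "shear / catastrophic breakup"

print(f"We = {We:.0f}, Oh = {Oh:.3f} -> {regime}")
```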
Procedia PDF Downloads 265
589 Analyzing Emerging Scientific Domains in Biomedical Discourse: Case Study Comparing Microbiome, Metabolome, and Metagenome Research in Scientific Articles
Authors: Kenneth D. Aiello, M. Simeone, Manfred Laubichler
Abstract:
It is increasingly difficult to analyze emerging scientific fields as contemporary scientific fields are more dynamic, their boundaries are more porous, and the relational possibilities have increased due to Big Data and new information sources. In biomedicine, where funding, medical categories, and medical jurisdiction are determined by distinct boundaries on biomedical research fields and definitions of concepts, ambiguity persists between the microbiome, metabolome, and metagenome research fields. This ambiguity continues despite efforts by institutions and organizations to establish parameters on the core concepts and research discourses. Further, the explosive growth of microbiome, metabolome, and metagenomic research has led to unknown variation and covariation making application of findings across subfields or coming to a consensus difficult. This study explores the evolution and variation of knowledge within the microbiome, metabolome, and metagenome research fields related to ambiguous scholarly language and commensurable theoretical frameworks via a semantic analysis of key concepts and narratives. A computational historical framework of cultural evolution and large-scale publication data highlight the boundaries and overlaps between the competing scientific discourses surrounding the three research areas. The results of this study highlight how discourse and language distribute power within scholarly and scientific networks, specifically the power to set and define norms, central questions, methods, and knowledge.Keywords: biomedicine, conceptual change, history of science, philosophy of science, science of science, sociolinguistics, sociology of knowledge
Procedia PDF Downloads 131
588 A Survey of Skin Cancer Detection and Classification from Skin Lesion Images Using Deep Learning
Authors: Joseph George, Anne Kotteswara Roa
Abstract:
Skin disease is one of the most common kinds of health issues faced by people nowadays. Skin cancer (SC) is one of them, and its detection relies on skin biopsy outputs and the expertise of doctors, which consumes time and can produce inaccurate results. At an early stage, skin cancer detection is a challenging task, and the disease easily spreads to the whole body and leads to an increase in the mortality rate. Skin cancer is curable when it is detected at an early stage. In order to classify skin cancer correctly and accurately, the critical task is skin cancer identification and classification, which is largely based on disease features such as shape, size, color, and symmetry. Many skin diseases share similar characteristics, which makes it challenging to select important features from skin cancer dataset images. Hence, skin cancer diagnostic accuracy can be improved by an automated skin cancer detection and classification framework, which also addresses the scarcity of human experts. Recently, deep learning techniques such as the convolutional neural network (CNN), deep belief network (DBN), artificial neural network (ANN), recurrent neural network (RNN), and long short-term memory (LSTM) have been widely used for the identification and classification of skin cancers. This survey reviews different DL techniques for skin cancer identification and classification. Performance metrics such as precision, recall, accuracy, sensitivity, specificity, and F-measures are used to evaluate the effectiveness of SC identification using DL techniques. By using these DL techniques, classification accuracy increases along with a mitigation of computational complexity and time consumption.
Keywords: skin cancer, deep learning, performance measures, accuracy, datasets
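The abstract lists the performance metrics used to compare DL classifiers; a minimal sketch of how they follow from a confusion matrix is given below, with made-up labels and predictions standing in for any real model output.

```python
# Classification metrics from a confusion matrix (placeholder labels, illustrative only).
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1])   # 1 = malignant lesion (hypothetical)
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 1, 1])   # hypothetical classifier output

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
precision   = tp / (tp + fp)
recall      = tp / (tp + fn)          # also called sensitivity
specificity = tn / (tn + fp)
accuracy    = (tp + tn) / (tp + tn + fp + fn)
f_measure   = 2 * precision * recall / (precision + recall)

print(f"precision={precision:.2f} recall={recall:.2f} "
      f"specificity={specificity:.2f} accuracy={accuracy:.2f} F1={f_measure:.2f}")
```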
Procedia PDF Downloads 129
587 Fire and Explosion Consequence Modeling Using Fire Dynamic Simulator: A Case Study
Authors: Iftekhar Hassan, Sayedil Morsalin, Easir A Khan
Abstract:
Accidents involving fire have occurred frequently in recent times, and their causes show a great deal of variety, so the intervention methods and risk assessment strategies they require are unique in each case. On September 4, 2020, a fire and explosion occurred in a confined space, caused by a methane gas leak from an underground pipeline, in the Baitus Salat Jame mosque during the night (Esha) prayer in Narayanganj District, Bangladesh, killing 34 people. In this research, this incident is simulated using the Fire Dynamics Simulator (FDS) software to analyze and understand the nature of the accident and the associated consequences. FDS is an advanced computational fluid dynamics (CFD) code for fire-driven fluid flow, which numerically solves a large eddy simulation form of the Navier-Stokes equations to simulate fire and smoke spread and to predict thermal radiation, toxic substance concentrations, and other relevant fire parameters. This study focuses on understanding the nature of the fire and on evaluating the consequences of the thermal radiation caused by the vapor cloud explosion. An evacuation model was constructed to visualize the effect of evacuation time and the fractional effective dose (FED) for different types of agents. The results are presented as 3D animations, sliced pictures, and graphical representations to understand the fire hazards caused by thermal radiation or smoke due to the vapor cloud explosion. This study will help in designing and developing appropriate response strategies for preventing similar accidents.
Keywords: consequence modeling, fire and explosion, fire dynamics simulation (FDS), thermal radiation
Procedia PDF Downloads 226
586 Flood Modeling in Urban Area Using a Well-Balanced Discontinuous Galerkin Scheme on Unstructured Triangular Grids
Authors: Rabih Ghostine, Craig Kapfer, Viswanathan Kannan, Ibrahim Hoteit
Abstract:
Urban flooding resulting from a sudden release of water due to dam-break or excessive rainfall is a serious environmental hazard, which causes loss of human life and large economic losses. Anticipating floods before they occur could minimize human and economic losses through the implementation of appropriate protection, provision, and rescue plans. This work reports on the numerical modelling of flash flood propagation in urban areas after an excessive rainfall event or dam-break. A two-dimensional (2D) depth-averaged shallow water model is used with a refined unstructured grid of triangles for representing the urban area topography. The 2D shallow water equations are solved using a second-order well-balanced discontinuous Galerkin scheme. A theoretical test case and three flood events are described to demonstrate the potential benefits of the scheme: (i) wetting and drying in a parabolic basin; (ii) a flash flood over a physical model of the urbanized Toce River valley in Italy; (iii) wave propagation in the Reyran river valley in consequence of the Malpasset dam-break in 1959 (France); and (iv) the dam-break flood of October 1982 at the town of Sumacarcel (Spain). The capability of the scheme is also verified against alternative models. Computational results compare well with recorded data and show that the scheme is at least as efficient as comparable second-order finite volume schemes, with a notable efficiency speedup due to parallelization.
Keywords: dam-break, discontinuous Galerkin scheme, flood modeling, shallow water equations
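For reference, the depth-averaged shallow water system that such well-balanced schemes discretize can be written in the standard conservative form below; this is a generic formulation with bed-slope and friction source terms, and the paper's exact source-term treatment is assumed rather than stated.

```latex
% Conservative form of the 2D depth-averaged shallow water equations
% (standard formulation; the paper's source-term treatment may differ).
\frac{\partial \mathbf{U}}{\partial t}
  + \frac{\partial \mathbf{F}(\mathbf{U})}{\partial x}
  + \frac{\partial \mathbf{G}(\mathbf{U})}{\partial y} = \mathbf{S}(\mathbf{U}),
\qquad
\mathbf{U} = \begin{pmatrix} h \\ hu \\ hv \end{pmatrix},\;
\mathbf{F} = \begin{pmatrix} hu \\ hu^{2} + \tfrac{1}{2}gh^{2} \\ huv \end{pmatrix},\;
\mathbf{G} = \begin{pmatrix} hv \\ huv \\ hv^{2} + \tfrac{1}{2}gh^{2} \end{pmatrix},\;
\mathbf{S} = \begin{pmatrix} 0 \\ gh\,(S_{0x} - S_{fx}) \\ gh\,(S_{0y} - S_{fy}) \end{pmatrix}
```

Here h is the water depth, (u, v) the depth-averaged velocities, g the gravitational acceleration, S0 the bed slopes, and Sf the friction slopes.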
Procedia PDF Downloads 175
585 Numerical Performance Evaluation of a Savonius Wind Turbines Using Resistive Torque Modeling
Authors: Guermache Ahmed Chafik, Khelfellah Ismail, Ait-Ali Takfarines
Abstract:
The Savonius vertical axis wind turbine is characterized by sufficient starting torque at low wind speeds and a simple design, and it does not require orientation to the wind direction; however, the power it develops is lower than that of other types of wind turbine, such as the Darrieus. To improve this performance, several studies have been carried out, for example optimizing the blade shape, using passive controls, and minimizing sources of power loss such as the resisting torque due to friction. This work aims to estimate the performance of a Savonius wind turbine by introducing into the CFD model a user-defined function that accounts for the resisting torque. This user-defined function is developed to simulate the action of the wind on the rotor; it receives the moment coefficient as an input and computes the rotational velocity that should be imposed on the rotating regions of the computational domain. The rotational velocity depends on the aerodynamic moment applied on the turbine and on the resisting torque, which is considered a linear function. Linking the implemented user-defined function with the CFD solver allows simulating the real operation of the Savonius turbine exposed to wind. It is noticed that the wind turbine takes a while to reach the stationary regime, where the rotational velocity becomes invariable; at that moment, the tip speed ratio and the moment and power coefficients are computed. To validate this approach, the power coefficient versus tip speed ratio curve is compared with the experimental one. The obtained results are in agreement with the available experimental results.
Keywords: resistant torque modeling, Savonius wind turbine, user-defined function, vertical axis wind turbine performances
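The rotor-speed update that such a user-defined function performs can be sketched as a simple torque balance: the aerodynamic moment (from the moment coefficient) accelerates the rotor while a linear resisting torque opposes it. The Python sketch below illustrates the idea; the inertia, friction coefficients, and placeholder Cm(TSR) curve are assumptions, since the real UDF is coupled to the CFD solver rather than to an analytic curve.

```python
# Torque-balance sketch of the UDF logic (all numerical values are assumed, not the paper's).
rho, U = 1.225, 7.0        # air density (kg/m^3) and wind speed (m/s), assumed
R, H = 0.5, 1.0            # rotor radius and height (m), assumed
A = H * 2 * R              # swept area
I = 0.8                    # rotor moment of inertia, kg.m^2 (assumed)
c0, c1 = 0.05, 0.02        # resisting torque M_res = c0 + c1*omega (assumed linear law)

def aerodynamic_moment(omega):
    """Placeholder Cm(TSR) curve standing in for the CFD-computed moment coefficient."""
    tsr = omega * R / U
    cm = max(0.35 - 0.18 * tsr, 0.0)        # crude linear fit, illustration only
    return 0.5 * rho * A * R * U**2 * cm

omega, dt = 0.0, 1e-3
for _ in range(200_000):                     # integrate until the stationary regime is reached
    domega = (aerodynamic_moment(omega) - (c0 + c1 * omega)) / I
    omega += domega * dt

tsr = omega * R / U
cp = aerodynamic_moment(omega) * omega / (0.5 * rho * A * U**3)
print(f"steady omega = {omega:.2f} rad/s, TSR = {tsr:.2f}, Cp = {cp:.3f}")
```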
Procedia PDF Downloads 156
584 Modeling Flow and Deposition Characteristics of Solid CO2 during Choked Flow of CO2 Pipeline in CCS
Authors: Teng lin, Li Yuxing, Han Hui, Zhao Pengfei, Zhang Datong
Abstract:
With the development of carbon capture and storage (CCS), the flow assurance of CO2 transportation becomes more important, particularly for supercritical CO2 pipelines. A relieving system using a choke valve is applied to control the pressure in the CO2 pipeline. However, the temperature of the fluid can drop rapidly because of Joule-Thomson cooling (JTC), which may cause solid CO2 to form and block the pipe. In this paper, a computational fluid dynamics (CFD) model, using a modified Lagrangian method, the Reynolds stress transport model (RSM) for turbulence, and a stochastic tracking model (STM) for particle trajectories, was developed to predict the deposition characteristics of solid carbon dioxide. The model predictions were in good agreement with the experimental data published in the literature. It can be observed that the particle distribution affected the deposition behavior. In the region of the sudden expansion, the smaller particles, accumulated tightly on the wall, were dominant for pipe blockage. On the contrary, the solid CO2 particles deposited near the outlet were usually bigger, and the stacked structure was looser. According to the calculation results, the movement of the particles can be classified into four main types: turbulent motion close to the sudden expansion structure, balanced motion in the sudden expansion-middle region, inertial motion near the outlet, and escape. Furthermore, because of these four types of motion, the particle deposits accumulated primarily in the sudden expansion region, the reattachment region, and the outlet region. The Stokes number also had an effect on the deposition ratio, and it is recommended to avoid Stokes numbers in the range of 3-8.
Keywords: carbon capture and storage, carbon dioxide pipeline, gas-particle flow, deposition
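The Stokes number referred to at the end of the abstract compares the particle relaxation time with a characteristic flow time. A small sketch with assumed property values is given below for illustration; the choice of characteristic length and velocity is an assumption, not the paper's conditions.

```python
# Particle Stokes number estimate (illustrative values only).
rho_p = 1560.0     # solid CO2 particle density, kg/m^3 (approximate literature value)
d_p = 40e-6        # particle diameter, m (assumed)
mu_g = 1.5e-5      # gas dynamic viscosity, Pa.s (assumed)
u = 30.0           # characteristic gas velocity downstream of the choke, m/s (assumed)
L = 0.05           # characteristic length of the sudden expansion, m (assumed)

tau_p = rho_p * d_p**2 / (18.0 * mu_g)   # particle relaxation time
tau_f = L / u                            # flow characteristic time
St = tau_p / tau_f

print(f"Stokes number = {St:.1f}")
if 3.0 <= St <= 8.0:
    print("within the 3-8 range the study recommends avoiding")
```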
Procedia PDF Downloads 370
583 Agent-Based Modeling to Simulate the Dynamics of Health Insurance Markets
Authors: Haripriya Chakraborty
Abstract:
The healthcare system in the United States is considered to be one of the most inefficient and expensive systems when compared to other developed countries. Consequently, there are persistent concerns regarding the overall functioning of this system. For instance, the large number of uninsured individuals and high premiums are pressing issues that are shown to have a negative effect on health outcomes with possible life-threatening consequences. The Affordable Care Act (ACA), which was signed into law in 2010, was aimed at improving some of these inefficiencies. This paper aims at providing a computational mechanism to examine some of these inefficiencies and the effects that policy proposals may have on reducing these inefficiencies. Agent-based modeling is an invaluable tool that provides a flexible framework to model complex systems. It can provide an important perspective into the nature of some interactions that occur and how the benefits of these interactions are allocated. In this paper, we propose a novel and versatile agent-based model with realistic assumptions to simulate the dynamics of a health insurance marketplace that contains a mixture of private and public insurers and individuals. We use this model to analyze the characteristics, motivations, payoffs, and strategies of these agents. In addition, we examine the effects of certain policies, including some of the provisions of the ACA, aimed at reducing the uninsured rate and the cost of premiums to move closer to a system that is more equitable and improves health outcomes for the general population. Our test results confirm the usefulness of our agent-based model in studying this complicated issue and suggest some implications for public policies aimed at healthcare reform.Keywords: agent-based modeling, healthcare reform, insurance markets, public policy
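A deliberately stripped-down sketch of the kind of agent-based market dynamic described here is given below: heterogeneous individuals decide whether to enrol given a premium, and a single insurer re-prices from realized claims. All behavioural rules, parameter values, and the subsidy lever are assumptions made for illustration, not the paper's model.

```python
# Minimal agent-based insurance-market sketch (adverse selection with a policy lever).
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
expected_cost = rng.lognormal(mean=7.0, sigma=1.0, size=n)   # heterogeneous health risk
risk_aversion = rng.uniform(1.0, 2.0, size=n)                # willingness-to-pay multiplier
subsidy = 0.0                                                # policy lever (e.g. premium support)

premium = expected_cost.mean()
for year in range(20):
    willingness = risk_aversion * expected_cost + subsidy
    insured = willingness >= premium                          # enrolment decision
    uninsured_rate = 1.0 - insured.mean()
    claims = expected_cost[insured].mean() if insured.any() else 0.0
    premium = 0.5 * premium + 0.5 * claims * 1.1              # re-price with a 10% loading
    print(f"year {year:2d}: premium = {premium:8.0f}, uninsured = {uninsured_rate:.1%}")
```

With subsidy = 0 the sketch reproduces a textbook adverse-selection spiral (healthier agents drop out, premiums rise); raising the subsidy dampens it, which is the kind of policy experiment the abstract describes.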
Procedia PDF Downloads 138
582 Thermal Transport Properties of Common Transition Single Metal Atom Catalysts
Authors: Yuxi Zhu, Zhenqian Chen
Abstract:
It is of great interest to investigate the thermal properties of non-precious metal catalysts for Proton exchange membrane fuel cell (PEMFC) based on the thermal management requirements. Due to the low symmetry of materials, to accurately obtain the thermal conductivity of materials, it is necessary to obtain the second and third order force constants by combining density functional theory and machine learning interatomic potential. To be specific, the interatomic force constants are obtained by moment tensor potential (MTP), which is trained by the computational trajectory of Ab initio molecular dynamics (AIMD) at 50, 300, 600, and 900 K for 1 ps each, with a time step of 1 fs in the AIMD computation. And then the thermal conductivity can be obtained by solving the Boltzmann transport equation. In this paper, the thermal transport properties of single metal atom catalysts are studied for the first time to our best knowledge by machine-learning interatomic potential (MLIP). Results show that the single metal atom catalysts exhibit anisotropic thermal conductivities and partially exhibit good thermal conductivity. The average lattice thermal conductivities of G-FeN₄, G-CoN₄ and G-NiN₄ at 300 K are 88.61 W/mK, 205.32 W/mK and 210.57 W/mK, respectively. While other single metal atom catalysts show low thermal conductivity due to their low phonon lifetime. The results also show that low-frequency phonons (0-10 THz) dominate thermal transport properties. The results provide theoretical insights into the application of single metal atom catalysts in thermal management.Keywords: proton exchange membrane fuel cell, single metal atom catalysts, density functional theory, thermal conductivity, machine-learning interatomic potential
Procedia PDF Downloads 24
581 Weakly Solving Kalah Game Using Artificial Intelligence and Game Theory
Authors: Hiba El Assibi
Abstract:
This study aims to weakly solve Kalah, a two-player board game, by developing a start-to-finish winning strategy using an optimized Minimax algorithm with Alpha-Beta Pruning. In weakly solving Kalah, our focus is on creating an optimal strategy from the game's beginning rather than analyzing every possible position. The project will explore additional enhancements like symmetry checking and code optimizations to speed up the decision-making process. This approach is expected to give insights into efficient strategy formulation in board games and potentially help create games with a fair distribution of outcomes. Furthermore, this research provides a unique perspective on human versus Artificial Intelligence decision-making in strategic games. By comparing the AI-generated optimal moves with human choices, we can explore how seemingly advantageous moves can, in the long run, be harmful, thereby offering a deeper understanding of strategic thinking and foresight in games. Moreover, this paper discusses the evaluation of our strategy against existing methods, providing insights on performance and computational efficiency. We also discuss the scalability of our approach to the game, considering different board sizes (number of pits and stones) and rules (different variations) and studying how that affects performance and complexity. The findings have potential implications for the development of AI applications in strategic game planning, enhancing our understanding of human cognitive processes in game settings, and offer insights into creating balanced and engaging game experiences.Keywords: minimax, alpha beta pruning, transposition tables, weakly solving, game theory
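For illustration, the sketch below implements depth-limited minimax with alpha-beta pruning on a Kalah(6,4) board. It shows only the search core the abstract refers to, without the transposition tables, symmetry checks, or other optimizations, and the store-difference evaluation function is an assumption.

```python
# Self-contained minimax + alpha-beta sketch for Kalah(6,4); illustrative, not the study's solver.
# Board layout: pits 0-5 and store 6 belong to player 0; pits 7-12 and store 13 to player 1.

def legal_moves(board, player):
    lo = 0 if player == 0 else 7
    return [i for i in range(lo, lo + 6) if board[i] > 0]

def apply_move(board, player, pit):
    board = board[:]
    own_store = 6 if player == 0 else 13
    opp_store = 13 if player == 0 else 6
    seeds, board[pit], idx = board[pit], 0, pit
    while seeds:                                   # sow counter-clockwise, skipping opponent's store
        idx = (idx + 1) % 14
        if idx == opp_store:
            continue
        board[idx] += 1
        seeds -= 1
    own_pits = range(0, 6) if player == 0 else range(7, 13)
    if idx in own_pits and board[idx] == 1 and board[12 - idx] > 0:   # capture rule
        board[own_store] += board[idx] + board[12 - idx]
        board[idx] = board[12 - idx] = 0
    if not any(board[0:6]) or not any(board[7:13]):                   # sweep remaining seeds
        board[6] += sum(board[0:6]); board[13] += sum(board[7:13])
        for i in (*range(0, 6), *range(7, 13)):
            board[i] = 0
    return board, idx == own_store                 # extra turn if last seed lands in own store

def game_over(board):
    return not any(board[0:6]) and not any(board[7:13])

def alphabeta(board, player, depth, alpha=float("-inf"), beta=float("inf")):
    """Return (score from player 0's perspective, best pit)."""
    if depth == 0 or game_over(board):
        return board[6] - board[13], None
    maximizing = player == 0
    best, best_pit = (float("-inf"), None) if maximizing else (float("inf"), None)
    for pit in legal_moves(board, player):
        child, extra = apply_move(board, player, pit)
        score, _ = alphabeta(child, player if extra else 1 - player, depth - 1, alpha, beta)
        if maximizing and score > best:
            best, best_pit, alpha = score, pit, max(alpha, score)
        elif not maximizing and score < best:
            best, best_pit, beta = score, pit, min(beta, score)
        if beta <= alpha:                           # prune
            break
    return best, best_pit

start = [4] * 6 + [0] + [4] * 6 + [0]
print(alphabeta(start, player=0, depth=7))
```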
Procedia PDF Downloads 55
580 Near Optimal Closed-Loop Guidance Gains Determination for Vector Guidance Law, from Impact Angle Errors and Miss Distance Considerations
Authors: Karthikeyan Kalirajan, Ashok Joshi
Abstract:
An optimization problem is set up to maximize the terminal kinetic energy of a maneuverable reentry vehicle (MaRV). The target location and the impact angle are given as constraints. The MaRV uses an explicit guidance law called vector guidance. This law has two gains, which are taken as decision variables. The problem is to find the optimal values of these gains that result in minimum miss distance and impact angle error. Using a simple 3DOF non-rotating flat earth model and the Lockheed Martin HP-MARV as the reentry vehicle, the nature of the solutions of the optimization problem is studied. This is achieved by carrying out a parametric study over a range of closed-loop gain values and generating the corresponding impact angle error and miss distance values. The results show that there are well-defined lower and upper bounds on the gains that result in a near optimal terminal guidance solution. It is found from this study that there exist common permissible regions (values of gains) where all constraints are met. Moreover, the permissible region lies between flat regions, and hence the optimization algorithm has to be chosen carefully. It is also found that only one of the gain values is independent and that the other, dependent gain value is related to it through a simple straight-line expression. Moreover, to reduce the computational burden of finding the optimal values of two gains, a guidance law called Diveline guidance, which uses a single gain, is discussed. The derivation of the Diveline guidance law from the vector guidance law is presented in this paper.
Keywords: MaRV guidance, reentry trajectory, trajectory optimization, guidance gain selection
Procedia PDF Downloads 427
579 The Elimination of Fossil Fuel Subsidies from the Road Transportation Sector and the Promotion of Electro Mobility: The Ecuadorian Case
Authors: Henry Gonzalo Acurio Flores, Alvaro Nicolas Corral Naveda, Juan Francisco Fonseca Palacios
Abstract:
In Ecuador, subventions on fossil fuels for the road transportation sector have always been part of its economy throughout time, mainly because of demagogy and populism from political leaders. It is clearly seen that the government cannot maintain the subsidies anymore due to its commercial balance and its general state budget; subsidies are a key barrier to implementing the use of cleaner technologies. However, during the last few months, the elimination of subsidies has been done gradually with the purpose of reaching international prices. It is expected that with this measure, the population will opt for other means of transportation, and in a certain way, it will promote the use of private electric vehicles and public, e.g., taxis and buses (urban transport). Considering the three main elements of sustainable development, an analysis of the social, economic, and environmental impacts of eliminating subsidies will be generated at the country level. To achieve this, four scenarios will be developed in order to determine how the subsidies will contribute to the promotion of electro-mobility. 1) A Business as Usual BAU scenario; 2) the introduction of 10 000 electric vehicles by 2025; 3) the introduction of 100 000 electric vehicles by 2030; 4) the introduction of 750 000 electric vehicles by 2040 (for all the scenarios buses, taxis, lightweight duty vehicles, and private vehicles will be introduced, as it is established in the National Electro Mobility Strategy for Ecuador). The Low Emissions Analysis Platform (LEAP) will be used, and it will be suitable to determine the cost for the government in terms of importing derivatives for fossil fuels and the cost of electricity to power the electric fleet that can be changed. The elimination of subventions generates fiscal resources for the state that can be used to develop other kinds of projects that will benefit Ecuadorian society. It will definitely change the energy matrix, and it will provide energy security for the country; it will be an opportunity for the government to incentivize a greater introduction of renewable energies, e.g., solar, wind, and geothermal. At the same time, it will also reduce greenhouse gas emissions (GHG) from the transportation sector, considering its mitigation potential, which as a result, will ameliorate the inhabitant quality of life by improving the quality of air, therefore reducing respiratory diseases associated with exhaust emissions, consequently, achieving sustainability, the Sustainable Development Goals (SDGs), and complying with the agreements established in the Paris Agreement COP 21 in 2015. Electro mobility in Latin America and the Caribbean can only be achieved by the implementation of the right policies at the central government, which need to be accompanied by a National Urban Mobility Policy (NUMP) and can encompass a greater vision to develop holistic, sustainable transport systems at local governments.Keywords: electro mobility, energy, policy, sustainable transportation
Procedia PDF Downloads 84
578 Urban Open Source: Synthesis of a Citizen-Centric Framework to Design Densifying Cities
Authors: Shaurya Chauhan, Sagar Gupta
Abstract:
Prominent urbanizing centres across the globe like Delhi, Dhaka, or Manila have exhibited that development often faces a challenge in bridging the gap among the top-down collective requirements of the city and the bottom-up individual aspirations of the ever-diversifying population. When this exclusion is intertwined with rapid urbanization and diversifying urban demography: unplanned sprawl, poor planning, and low-density development emerge as automated responses. In parallel, new ideas and methods of densification and public participation are being widely adopted as sustainable alternatives for the future of urban development. This research advocates a collaborative design method for future development: one that allows rapid application with its prototypical nature and an inclusive approach with mediation between the 'user' and the 'urban', purely with the use of empirical tools. Building upon the concepts and principles of 'open-sourcing' in design, the research establishes a design framework that serves the current user requirements while allowing for future citizen-driven modifications. This is synthesized as a 3-tiered model: user needs – design ideology – adaptive details. The research culminates into a context-responsive 'open source project development framework' (hereinafter, referred to as OSPDF) that can be used for on-ground field applications. To bring forward specifics, the research looks at a 300-acre redevelopment in the core of a rapidly urbanizing city as a case encompassing extreme physical, demographic, and economic diversity. The suggestive measures also integrate the region’s cultural identity and social character with the diverse citizen aspirations, using architecture and urban design tools, and references from recognized literature. This framework, based on a vision – feedback – execution loop, is used for hypothetical development at the five prevalent scales in design: master planning, urban design, architecture, tectonics, and modularity, in a chronological manner. At each of these scales, the possible approaches and avenues for open- sourcing are identified and validated, through hit-and-trial, and subsequently recorded. The research attempts to re-calibrate the architectural design process and make it more responsive and people-centric. Analytical tools such as Space, Event, and Movement by Bernard Tschumi and Five-Point Mental Map by Kevin Lynch, among others, are deep rooted in the research process. Over the five-part OSPDF, a two-part subsidiary process is also suggested after each cycle of application, for a continued appraisal and refinement of the framework and urban fabric with time. The research is an exploration – of the possibilities for an architect – to adopt the new role of a 'mediator' in development of the contemporary urbanity.Keywords: open source, public participation, urbanization, urban development
Procedia PDF Downloads 149
577 A Rationale to Describe Ambident Reactivity
Authors: David Ryan, Martin Breugst, Turlough Downes, Peter A. Byrne, Gerard P. McGlacken
Abstract:
An ambident nucleophile is a nucleophile that possesses two or more distinct nucleophilic sites that are linked through resonance and are effectively “in competition” for reaction with an electrophile. Examples include enolates, pyridone anions, and nitrite anions, among many others. Reactions of ambident nucleophiles and electrophiles are extremely prevalent at all levels of organic synthesis. The principle of hard and soft acids and bases (the “HSAB principle”) is most commonly cited in the explanation of selectivities in such reactions. Although this rationale is pervasive in any discussion on ambident reactivity, the HSAB principle has received considerable criticism. As a result, the principle’s supplantation has become an area of active interest in recent years. This project focuses on developing a model for rationalizing ambident reactivity. Presented here is an approach that incorporates computational calculations and experimental kinetic data to construct Gibbs energy profile diagrams. The preferred site of alkylation of nitrite anion with a range of ‘hard’ and ‘soft’ alkylating agents was established by ¹H NMR spectroscopy. Pseudo-first-order rate constants were measured directly by ¹H NMR reaction monitoring, and the corresponding second-order constants and Gibbs energies of activation were derived. These, in combination with computationally derived standard Gibbs energies of reaction, were sufficient to construct Gibbs energy wells. By representing the ambident system as a series of overlapping Gibbs energy wells, a more intuitive picture of ambident reactivity emerges. Here, previously unexplained switches in reactivity in reactions involving closely related electrophiles are elucidated.Keywords: ambident, Gibbs, nucleophile, rates
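The Gibbs energies of activation mentioned here are conventionally obtained from measured rate constants via the Eyring equation; the short sketch below shows that conversion with placeholder numbers (not the paper's data), assuming a pseudo-first-order constant divided by the nucleophile concentration gives the second-order constant.

```python
# Eyring-equation conversion of a rate constant to a Gibbs energy of activation (placeholders).
import math

R = 8.314462          # J mol^-1 K^-1
k_B = 1.380649e-23    # J K^-1
h = 6.62607015e-34    # J s
T = 298.15            # K (assumed reaction temperature)

k_obs = 2.0e-4            # pseudo-first-order rate constant, s^-1 (placeholder)
conc_nucleophile = 0.05   # mol L^-1 (placeholder)
k2 = k_obs / conc_nucleophile                 # second-order rate constant, L mol^-1 s^-1

dG_act = -R * T * math.log(k2 * h / (k_B * T))   # J mol^-1
print(f"k2 = {k2:.2e} M^-1 s^-1, dG_act = {dG_act / 1000:.1f} kJ/mol")
```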
Procedia PDF Downloads 84
576 Pareto System of Optimal Placement and Sizing of Distributed Generation in Radial Distribution Networks Using Particle Swarm Optimization
Authors: Sani M. Lawal, Idris Musa, Aliyu D. Usman
Abstract:
The Pareto approach to optimal solutions, which stands for a set of solutions in the search space that evolves in multi-objective optimization problems, is adopted in this paper. The paper aims at presenting an optimal placement of distributed generation (DG) in radial distribution networks, with an optimal size, for the minimization of power loss and voltage deviation as well as the maximization of the voltage profile of the networks. These problems are formulated using particle swarm optimization (PSO) as a constrained nonlinear optimization problem, with both the locations and the sizes of the DG being continuous. The objective functions adopted are the total active power loss function and the voltage deviation function. The multi-objective nature of the problem made it necessary to form a multi-objective function in search of a solution that consists of both the DG location and size. The proposed PSO algorithm is used to determine the optimal placement and size of DG in a distribution network. The output indicates that the PSO algorithm shows an edge over other types of search methods due to its effectiveness and computational efficiency. The proposed method is tested on the standard IEEE 34-bus distribution network and validated on the 33-bus test system. Results indicate that the sizing and location of DG are system dependent and should be optimally selected before installing the distributed generators in the system, and that an improvement in the voltage profile and a reduction in power loss have been achieved.
Keywords: distributed generation, Pareto, particle swarm optimization, power loss, voltage deviation
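A minimal PSO sketch in the spirit of the method is given below. The toy objective function stands in for the power-flow-based loss and voltage-deviation evaluation, and all swarm parameters, weights, and bounds are assumptions for illustration only.

```python
# Minimal particle swarm optimization over (bus location, DG size); illustrative only.
import numpy as np

rng = np.random.default_rng(2)
n_particles, n_iter = 30, 100
bounds_lo = np.array([2.0, 0.0])      # [bus index, DG size in MW] lower bounds (assumed)
bounds_hi = np.array([33.0, 3.0])     # upper bounds (assumed 33-bus system, 3 MW max)

def objective(x):
    bus, size = x
    # Placeholder for: run power flow, return w1*P_loss + w2*voltage_deviation
    return (np.sin(bus) + 1.5) * (size - 1.8) ** 2 + 0.05 * bus

pos = rng.uniform(bounds_lo, bounds_hi, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5                      # inertia and acceleration coefficients (assumed)
for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 2)), rng.random((n_particles, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, bounds_lo, bounds_hi)
    vals = np.array([objective(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print(f"best DG placement: bus ~ {gbest[0]:.0f}, size ~ {gbest[1]:.2f} MW")
```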
Procedia PDF Downloads 364
575 Computational Investigation of Secondary Flow Losses in Linear Turbine Cascade by Modified Leading Edge Fence
Authors: K. N. Kiran, S. Anish
Abstract:
It is well known that secondary flow losses account for about one third of the total loss in any axial turbine. Modern gas turbine blades have smaller heights and longer chord lengths, which might lead to an increase in secondary flow. In order to improve the efficiency of the turbine, it is important to understand the behavior of the secondary flow and devise mechanisms to curtail these losses. The objective of the present work is to understand the effect of a streamwise end-wall fence on the aerodynamics of a linear turbine cascade. The study is carried out computationally by using the commercial software ANSYS CFX. The effects of the end-wall fence on the flow field are calculated based on RANS simulations using the SST transition turbulence model. The Durham cascade, which is similar to a high-pressure axial flow turbine, is used for the simulation. The aim of fencing in the blade passage is to get the maximum benefit from flow deviation and from destroying the passage vortex, in terms of loss reduction. It is observed that, for the present analysis, a fence in the blade passage helps reduce the strength of the horseshoe vortex and is capable of restraining the flow along the blade passage. The fence in the blade passage helps in reducing the underturning by 7° in comparison with the base case. A fence on the end-wall is effective in preventing the movement of the pressure side leg of the horseshoe vortex and helps in breaking the passage vortex. Computations are carried out for different fence heights whose curvature is different from the blade camber. The optimum fence geometry and location reduce the loss coefficient by 15.6% in comparison with the base case.
Keywords: boundary layer fence, horseshoe vortex, linear cascade, passage vortex, secondary flow
Procedia PDF Downloads 349
574 Tracing Digital Traces of Phatic Communion in #Mooc
Authors: Judith Enriquez-Gibson
Abstract:
This paper meddles with the notion of phatic communion introduced 90 years ago by Malinowski, who was a Polish-born British anthropologist. It explores the phatic in Twitter within the contents of tweets related to moocs (massive online open courses) as a topic or trend. It is not about moocs though. It is about practices that could easily be hidden or neglected if we let big or massive topics take the lead or if we simply follow the computational or secret codes behind Twitter itself and third party software analytics. It draws from media and cultural studies. Though at first it appears data-driven as I submitted data collection and analytics into the hands of a third party software, Twitonomy, the aim is to follow how phatic communion might be practised in a social media site, such as Twitter. Lurking becomes its research method to analyse mooc-related tweets. A total of 3,000 tweets were collected on 11 October 2013 (UK timezone). The emphasis of lurking is to engage with Twitter as a system of connectivity. One interesting finding is that a click is in fact a phatic practice. A click breaks the silence. A click in one of the mooc website is actually a tweet. A tweet was posted on behalf of a user who simply chose to click without formulating the text and perhaps without knowing that it contains #mooc. Surely, this mechanism is not about reciprocity. To break the silence, users did not use words. They just clicked the ‘tweet button’ on a mooc website. A click performs and maintains connectivity – and Twitter as the medium in attendance in our everyday, available when needed to be of service. In conclusion, the phatic culture of breaking silence in Twitter does not have to submit to the power of code and analytics. It is a matter of human code.Keywords: click, Twitter, phatic communion, social media data, mooc
Procedia PDF Downloads 412
573 Combination of Unmanned Aerial Vehicle and Terrestrial Laser Scanner Data for Citrus Yield Estimation
Authors: Mohammed Hmimou, Khalid Amediaz, Imane Sebari, Nabil Bounajma
Abstract:
Annual crop production is one of the most important macroeconomic indicators for the majority of countries around the world. This information is valuable, especially for exporting countries, which need a yield estimate before harvest in order to correctly plan the supply chain. When it comes to estimating agricultural yield, especially in arboriculture, conventional methods are mostly applied. In the case of the citrus industry, sale before harvest is widely practiced, which requires an estimation of the production while the fruit is on the tree. However, the conventional method, based on sampling surveys of some trees within the field, is still used to perform yield estimation, and the success of this process mainly depends on the expertise of the ‘estimator agent’. The present study aims to propose a methodology based on the combination of unmanned aerial vehicle (UAV) images and terrestrial laser scanner (TLS) point clouds to estimate citrus production. During data acquisition, fixed-wing and rotary-wing drones, as well as a terrestrial laser scanner, were tested. After that, a pre-processing step was performed in order to generate the point cloud and the digital surface model. At the processing stage, a machine vision workflow was implemented to extract the points corresponding to fruits from the whole tree point cloud, cluster them into fruits, and model them geometrically in 3D space. By linking the resulting geometric properties to the fruit weight, the yield can be estimated, and the statistical distribution of fruit sizes can be generated. This latter property, which is information required by citrus-importing countries, cannot be estimated before harvest using the conventional method. Since the terrestrial laser scanner is static, data gathering using this technology can be performed over only some trees. Therefore, the integration of drone data was considered in order to estimate the yield over a whole orchard. To achieve that, features derived from the drone digital surface model were linked to the laser-scanner yield estimates of some trees to build a regression model that predicts the yield of a tree given its features. Several missions were carried out to collect drone and laser scanner data within citrus orchards of different varieties, testing several data acquisition parameters (flight height, image overlap, flight mission plan). The accuracy of the results obtained by the proposed methodology, in comparison to the yield estimation results of the conventional method, varies from 65% to 94%, depending mainly on the phenological stage of the studied citrus variety at the time of the data acquisition mission. The proposed approach demonstrates its strong potential for early estimation of citrus production and the possibility of its extension to other fruit trees.
Keywords: citrus, digital surface model, point cloud, terrestrial laser scanner, UAV, yield estimation, 3D modeling
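The fruit-extraction step (cluster candidate fruit points, model each cluster geometrically, map geometry to weight) can be sketched as below. The synthetic point cloud, the DBSCAN parameters, the spherical fit, and the density used to convert volume to weight are all assumptions made for illustration, not the study's workflow.

```python
# Illustrative fruit clustering and yield estimate from a synthetic point cloud.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(3)
centres = rng.uniform(0, 2.0, size=(12, 3))                       # 12 hypothetical fruits
points = np.vstack([c + 0.02 * rng.normal(size=(200, 3)) for c in centres])

labels = DBSCAN(eps=0.05, min_samples=20).fit_predict(points)      # cluster fruit points

weights = []
for lab in set(labels) - {-1}:                                     # label -1 = noise
    cluster = points[labels == lab]
    radius = np.linalg.norm(cluster - cluster.mean(axis=0), axis=1).mean()
    volume_cm3 = (4.0 / 3.0) * np.pi * radius**3 * 1.0e6           # sphere approximation, m^3 -> cm^3
    weights.append(volume_cm3 * 0.95)                              # assumed density ~0.95 g/cm^3

print(f"{len(weights)} fruits detected, estimated yield ~ {sum(weights) / 1000:.2f} kg")
print("size distribution (g):", np.round(sorted(weights), 1))
```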
Procedia PDF Downloads 142
572 Variant Selection and Pre-transformation Phase Reconstruction for Deformation-Induced Transformation in AISI 304 Austenitic Stainless Steel
Authors: Manendra Singh Parihar, Sandip Ghosh Chowdhury
Abstract:
Austenitic stainless steels are widely used and give a good combination of properties. When this steel is plastically deformed, a phase transformation of the metastable face-centred cubic (FCC) austenite to the stable body-centred cubic (α') or to the hexagonal close-packed (ε) martensite may occur, leading to an enhancement in mechanical properties such as strength. The work was based on variant selection and the corresponding texture analysis for the strain-induced martensitic transformation during deformation of the parent austenite FCC phase to form the product HCP and BCC martensite phases separately, obeying their respective orientation relationships. The automated method for reconstruction of the parent phase orientation using the EBSD data of the product phase orientation is implemented using MATLAB and the TSL-OIM software. The method of triplets was used, which involves the formation of a triplet of neighboring product grains having a common variant and linking them using a misorientation-based criterion. This led to the proper reconstruction of the pre-transformation phase orientation data and thus of its microstructure and texture. The computational speed of the current method is better compared to the previously used reconstruction methods. The reconstruction of austenite from the ε and α' martensite was carried out for multiple samples, and their IPF images, pole figures, inverse pole figures, and ODFs were compared. Similar results were observed for all samples. The comparison gives an idea for estimating the correct sequence of the transformation, i.e. γ → ε → α' or γ → α', during deformation of AISI 304 austenitic stainless steel.
Keywords: variant selection, reconstruction, EBSD, austenitic stainless steel, martensitic transformation
Procedia PDF Downloads 489
571 Pueblos Mágicos in Mexico: The Loss of Intangible Cultural Heritage and Cultural Tourism
Authors: Claudia Rodriguez-Espinosa, Erika Elizabeth Pérez Múzquiz
Abstract:
Since the creation of the “Pueblos Mágicos” program in 2001, a series of social and cultural events had directly affected the heritage conservation of the 121 registered localities until 2018, when the federal government terminated the program. Many studies have been carried out that seek to analyze from different perspectives and disciplines the consequences that these appointments have generated in the “Pueblos Mágicos.” Multidisciplinary groups such as the one headed by Carmen Valverde and Liliana López Levi, have brought together specialists from all over the Mexican Republic to create a set of diagnoses of most of these settlements, and although each one has unique specificities, there is a constant in most of them that has to do with the loss of cultural heritage and that is related to transculturality. There are several factors identified that have fostered a cultural loss, as a direct reflection of the economic crisis that prevails in Mexico. It is important to remember that the origin of this program had as its main objective to promote the growth and development of local economies since one of the conditions for entering the program is that they have less than 20,000 inhabitants. With this goal in mind, one of the first actions that many “Pueblos Mágicos” carried out was to improve or create an infrastructure to receive both national and foreign tourists since this was practically non-existent. Creating hotels, restaurants, cafes, training certified tour guides, among other actions, have led to one of the great problems they face: globalization. Although by itself it is not bad, its impact in many cases has been negative for heritage conservation. The entry into and contact with new cultures has led to the undervaluation of cultural traditions, their transformation and even their total loss. This work seeks to present specific cases of transformation and loss of cultural heritage, as well as to reflect on the problem and propose scenarios in which the negative effects can be reversed. For this text, 36 “Pueblos Mágicos” have been selected for study, based on those settlements that are cited in volumes I and IV (the first and last of the collection) of the series produced by the multidisciplinary group led by Carmen Valverde and Liliana López Levi (researchers from UNAM and UAM Xochimilco respectively) in the project supported by CONACyT entitled “Pueblos Mágicos. An interdisciplinary vision”, of which we are part. This sample is considered representative since it forms 30% of the total of 121 “Pueblos Mágicos” existing at that moment. With this information, the elements of its intangible heritage loss or transformation have been identified in every chapter based on the texts written by the participants of that project. Finally, this text shows an analysis of the effects that this federal program, as a public policy applied to 132 populations, has had on the conservation or transformation of the intangible cultural heritage of the “Pueblos Mágicos.” Transculturality, globalization, the creation of identities and the desire to increase the flow of tourists have impacted the changes that traditions (main intangible cultural heritage) have had in the 18 years that the federal program lasted.Keywords: public policies, cultural tourism, heritage preservation, pueblos mágicos program
Procedia PDF Downloads 190
570 Chemical vs Visual Perception in Food Choice Ability of Octopus vulgaris (Cuvier, 1797)
Authors: Al Sayed Al Soudy, Valeria Maselli, Gianluca Polese, Anna Di Cosmo
Abstract:
Cephalopods are considered as a model organism with a rich behavioral repertoire. Sophisticated behaviors were widely studied and described in different species such as Octopus vulgaris, who has evolved the largest and more complex nervous system among invertebrates. In O. vulgaris, cognitive abilities in problem-solving tasks and learning abilities are associated with long-term memory and spatial memory, mediated by highly developed sensory organs. They are equipped with sophisticated eyes, able to discriminate colors even with a single photoreceptor type, vestibular system, ‘lateral line analogue’, primitive ‘hearing’ system and olfactory organs. They can recognize chemical cues either through direct contact with odors sources using suckers or by distance through the olfactory organs. Cephalopods are able to detect widespread waterborne molecules by the olfactory organs. However, many volatile odorant molecules are insoluble or have a very low solubility in water, and must be perceived by direct contact. O. vulgaris, equipped with many chemosensory neurons located in their suckers, exhibits a peculiar behavior that can be provocatively described as 'smell by touch'. The aim of this study is to establish the priority given to chemical vs. visual perception in food choice. Materials and methods: Three different types of food (anchovies, clams, and mussels) were used, and all sessions were recorded with a digital camera. During the acclimatization period, Octopuses were exposed to the three types of food to test their natural food preferences. Later, to verify if food preference is maintained, food was provided in transparent screw-jars with pierced lids to allow both visual and chemical recognition of the food inside. Subsequently, we tested alternatively octopuses with food in sealed transparent screw-jars and food in blind screw-jars with pierced lids. As a control, we used blind sealed jars with the same lid color to verify a random choice among food types. Results and discussion: During the acclimatization period, O. vulgaris shows a higher preference for anchovies (60%) followed by clams (30%), then mussels (10%). After acclimatization, using the transparent and pierced screw jars octopus’s food choices resulted in 50-50 between anchovies and clams, avoiding mussels. Later, guided by just visual sense, with transparent but not pierced jars, their food preferences resulted in 100% anchovies. With pierced but not transparent jars their food preference resulted in 100% anchovies as first food choice, the clams as a second food choice result (33.3%). With no possibility to select food, neither by vision nor by chemoreception, the results were 20% anchovies, 20% clams, and 60% mussels. We conclude that O. vulgaris uses both chemical and visual senses in an integrative way in food choice, but if we exclude one of them, it appears clear that its food preference relies on chemical sense more than on visual perception.Keywords: food choice, Octopus vulgaris, olfaction, sensory organs, visual sense
Procedia PDF Downloads 221569 Evaluating the Validity of CFD Model of Dispersion in a Complex Urban Geometry Using Two Sets of Experimental Measurements
Authors: Mohammad R. Kavian Nezhad, Carlos F. Lange, Brian A. Fleck
Abstract:
This research presents the validation study of a computational fluid dynamics (CFD) model developed to simulate the scalar dispersion emitted from rooftop sources around the buildings at the University of Alberta North Campus. The ANSYS CFX code was used to perform the numerical simulation of the wind regime and pollutant dispersion by solving the 3D steady Reynolds-averaged Navier-Stokes (RANS) equations on a building-scale high-resolution grid. The validation study was performed in two steps. First, the CFD model performance in 24 cases (eight wind directions and three wind speeds) was evaluated by comparing the predicted flow fields with the available data from the previous measurement campaign conducted at the North Campus, using the standard deviation method (SDM). The numerical model showed maximum average percent errors of approximately 53% and 37% for wind incidents from the North and Northwest, respectively. Good agreement with the measurements was observed for the other six directions, with an average error of less than 30%. In the second step, the reliability of the implemented turbulence model, numerical algorithm, modeling techniques, and grid generation scheme was further evaluated using the Mock Urban Setting Test (MUST) dispersion dataset. Different statistical measures, including the fractional bias (FB), the geometric mean bias (MG), and the normalized mean square error (NMSE), were used to assess the accuracy of the predicted dispersion field. Our CFD results are in very good agreement with the field measurements.Keywords: CFD, plume dispersion, complex urban geometry, validation study, wind flow
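As a reference for readers, the three validation metrics named above have standard definitions in the dispersion-modeling literature. The short sketch below is not code from the study itself and uses made-up illustrative numbers rather than the North Campus or MUST data; it only shows how FB, MG, and NMSE are typically computed from paired observed/predicted concentrations.

```python
import numpy as np

def validation_metrics(observed, predicted):
    """Standard dispersion-model validation metrics for paired
    observed/predicted concentrations: fractional bias (FB),
    geometric mean bias (MG), normalized mean square error (NMSE)."""
    co = np.asarray(observed, dtype=float)
    cp = np.asarray(predicted, dtype=float)

    fb = 2.0 * (co.mean() - cp.mean()) / (co.mean() + cp.mean())
    mg = np.exp(np.mean(np.log(co)) - np.mean(np.log(cp)))   # requires strictly positive values
    nmse = np.mean((co - cp) ** 2) / (co.mean() * cp.mean())
    return {"FB": fb, "MG": mg, "NMSE": nmse}

# Illustrative values only, not data from the study.
obs = [1.2, 0.8, 2.5, 1.9, 0.6]
pred = [1.0, 0.9, 2.2, 2.1, 0.7]
print(validation_metrics(obs, pred))
```

Values of FB and NMSE near zero and MG near one indicate close agreement between predictions and measurements.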
Procedia PDF Downloads 136568 Leadership and Management Strategies of Sports Administrator in Asia
Authors: Mark Christian Inductivo Siwa, Jesrelle Ormoc Bontuyan
Abstract:
This study was conducted in selected tertiary schools in selected universities in Asian countries, namely the Philippines, Thailand, and China, which are top-performing countries in the Southeast Asian Games (SEA Games) and the Asian School Games (ASG), also known as the Youth SEA Games and Asian Games. The respondents of the study were sports administrators/directors and coaches in the selected Southeast Asian countries, the Philippines and Thailand, and in China. The study generated a progressive sports operational model of sports leadership and management in selected universities in Asia. It utilized mixed-method research, a methodology that involves collecting, analyzing, and integrating quantitative (e.g., experiments, surveys) and qualitative (e.g., focus groups, interviews) research to provide a better understanding of the research problem than either approach alone. In particular, the study employed the explanatory sequential mixed-methods design, which involves two phases: a quantitative phase of data collection and analysis, followed by a qualitative phase. The quantitative data were prioritized, and the findings were followed up during the interpretation phase with the qualitative data, which help explain or build upon the initial quantitative results. In Phase I, the researcher collected and analyzed the quantitative data, giving greater emphasis to quantitative methods, particularly surveys of the coaches and sports directors of the three selected universities in Asia. In Phase II, the researcher collected and analyzed qualitative data obtained through interviews with the sports directors to follow from and connect to the results of the quantitative phase. The study followed the data analysis spiral, with the researcher moving in analytic circles, so that the quantitative results could be followed up and explained. Based on the schools' mission and vision, sports leadership and management consistently considered the key factors in leading the organization and managing the process: formulating objectives/goals, budget, equipment care and maintenance, facilities, the training matrix, and consideration. Sports management also demonstrated the need for development in the upkeep and care of equipment as well as athlete funding. The development of sports management goals, sports facilities and equipment, and improvements in training, consideration, and incentives should also include a maintenance plan. The study concluded with a progressive sports operational model created from its results.Keywords: sports leadership and management, formulating objectives, budget, equipment care and maintenance, training, consideration, incentives, progressive sports operational model
Procedia PDF Downloads 93567 Comparison of Different Machine Learning Algorithms for Solubility Prediction
Authors: Muhammet Baldan, Emel Timuçin
Abstract:
Molecular solubility prediction plays a crucial role in various fields, such as drug discovery, environmental science, and materials science. In this study, we compare the performance of five machine learning algorithms—linear regression, support vector machines (SVM), random forests, gradient boosting machines (GBM), and neural networks—for predicting molecular solubility using the AqSolDB dataset. The dataset consists of 9981 data points with their corresponding solubility values. MACCS keys (166 bits), RDKit properties (20 properties), and structural properties (3) are extracted for every SMILES representation in the dataset, giving a total of 189 features per molecule for training and testing. Each algorithm is trained on a subset of the dataset and evaluated using accuracy scores. Additionally, the computational time for training and testing is recorded to assess the efficiency of each algorithm. Our results demonstrate that the random forest model outperformed the other algorithms in terms of predictive accuracy, achieving an accuracy score of 0.93. Gradient boosting machines and neural networks also exhibit strong performance, closely followed by support vector machines. Linear regression, while simpler in nature, demonstrates competitive performance but with slightly higher errors compared to the ensemble methods. Overall, this study provides valuable insights into the performance of machine learning algorithms for molecular solubility prediction, highlighting the importance of algorithm selection in achieving accurate and efficient predictions in practical applications.Keywords: random forest, machine learning, comparison, feature extraction
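For readers who want to reproduce the general feature-extraction and modeling workflow described above, the sketch below is a minimal, hypothetical pipeline: it assumes an AqSolDB-style CSV with "SMILES" and "Solubility" columns, uses only a handful of the 20 RDKit descriptors mentioned in the abstract, and bins logS values into classes at assumed cut-offs, since the abstract does not specify the exact descriptor set or class definitions.

```python
import numpy as np
import pandas as pd
from rdkit import Chem
from rdkit.Chem import MACCSkeys, Descriptors
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def featurize(smiles):
    """166 MACCS bits plus a small subset of RDKit descriptors per molecule."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None
    maccs = list(MACCSkeys.GenMACCSKeys(mol))[1:]        # bit 0 of the 167-bit vector is unused
    descs = [Descriptors.MolWt(mol), Descriptors.MolLogP(mol),
             Descriptors.TPSA(mol), Descriptors.NumHDonors(mol),
             Descriptors.NumHAcceptors(mol)]             # illustrative subset, not the full 20 properties
    return maccs + descs

df = pd.read_csv("aqsoldb.csv")                          # hypothetical filename
X, y = [], []
for smi, logs in zip(df["SMILES"], df["Solubility"]):
    feats = featurize(smi)
    if feats is not None:
        X.append(feats)
        y.append(int(np.digitize(logs, [-4.0, -2.0])))   # assumed class cut-offs on logS
X, y = np.array(X), np.array(y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```

Swapping the classifier for GradientBoostingClassifier, an SVM, or a neural network while keeping the same feature matrix reproduces the kind of like-for-like comparison reported in the study.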
Procedia PDF Downloads 41566 Evidence Theory Enabled Quickest Change Detection Using Big Time-Series Data from Internet of Things
Authors: Hossein Jafari, Xiangfang Li, Lijun Qian, Alexander Aved, Timothy Kroecker
Abstract:
Traditionally in sensor networks, and recently in the Internet of Things, numerous heterogeneous sensors are deployed in a distributed manner to monitor a phenomenon that can often be modeled by an underlying stochastic process. The big time-series data collected by the sensors must be analyzed to detect change in the stochastic process as quickly as possible with a tolerable false alarm rate. However, sensors may have different accuracies and sensitivity ranges, and they decay over time. As a result, the big time-series data collected by the sensors will contain uncertainties and may be conflicting. In this study, we present a framework that takes advantage of the capabilities of Evidence Theory (a.k.a. Dempster-Shafer and Dezert-Smarandache Theories) to represent and manage uncertainty and conflict for fast change detection and to deal effectively with complementary hypotheses. Specifically, the Kullback-Leibler divergence is used as the similarity metric to calculate the distances between the estimated current distribution and the pre- and post-change distributions. Mass functions are then calculated, and the related combination rules are applied to combine the mass values among all sensors. Furthermore, we applied the method to estimate the minimum number of sensors needed for combination, so that computational efficiency could be improved. A cumulative sum (CUSUM) test is then applied to the ratio of the pignistic probabilities to detect and declare the change for decision-making purposes. Simulation results using both synthetic data and real data from an experimental setup demonstrate the effectiveness of the presented schemes.Keywords: CUSUM, evidence theory, KL divergence, quickest change detection, time series data
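The following sketch illustrates the general fusion-then-detection idea described above (KL distances to the pre- and post-change models mapped to mass functions, Dempster's combination across sensors, pignistic probabilities, and a CUSUM update); the specific mapping from distances to masses, the uncertainty mass, the toy data, and the detection threshold are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence between two discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.sum(p * np.log(p / q))

def sensor_mass(window, pre, post, unc=0.2):
    """Map per-sensor KL distances to masses on {pre}, {post}, and the frame {pre, post}."""
    d_pre, d_post = kl(window, pre), kl(window, post)
    s_pre, s_post = np.exp(-d_pre), np.exp(-d_post)       # closer model receives larger support
    scale = (1.0 - unc) / (s_pre + s_post)
    return {"pre": s_pre * scale, "post": s_post * scale, "theta": unc}

def dempster(m1, m2):
    """Dempster's rule of combination on the two-hypothesis frame."""
    k = m1["pre"] * m2["post"] + m1["post"] * m2["pre"]   # conflict mass
    comb = {
        "pre":  m1["pre"] * m2["pre"] + m1["pre"] * m2["theta"] + m1["theta"] * m2["pre"],
        "post": m1["post"] * m2["post"] + m1["post"] * m2["theta"] + m1["theta"] * m2["post"],
        "theta": m1["theta"] * m2["theta"],
    }
    return {h: v / (1.0 - k) for h, v in comb.items()}

def pignistic(m):
    """Pignistic transform: split the mass on the frame equally between hypotheses."""
    return {"pre": m["pre"] + m["theta"] / 2, "post": m["post"] + m["theta"] / 2}

# Toy example: three sensors estimating a discrete distribution over three bins.
pre, post = [0.7, 0.2, 0.1], [0.2, 0.3, 0.5]
windows = [[0.25, 0.30, 0.45], [0.30, 0.25, 0.45], [0.20, 0.35, 0.45]]

fused = sensor_mass(windows[0], pre, post)
for w in windows[1:]:
    fused = dempster(fused, sensor_mass(w, pre, post))

bet = pignistic(fused)
cusum, threshold = 0.0, 5.0
cusum = max(0.0, cusum + np.log(bet["post"] / bet["pre"]))  # one CUSUM update on the pignistic ratio
print(fused, bet, "declare change" if cusum > threshold else "no change yet")
```

In an online setting, the mass computation, combination, and CUSUM update would be repeated at each time step, with a change declared once the accumulated statistic exceeds the chosen threshold.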
Procedia PDF Downloads 334