Search results for: reduced order models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 22207

20737 Variation in Complement Order in English: Implications for Interlanguage Syntax

Authors: Juliet Udoudom

Abstract:

Complement ordering principles for natural language phrases (XPs) stipulate that head terms be consistently placed phrase-initially or phrase-finally, yielding two basic theoretical orders: Head-Complement order or Complement-Head order. This paper examines the principles which determine complement ordering in English V-bar and N-bar structures. The aim is to determine the extent to which complement linearisations in the two phrase types are consistent with the two theoretical orders outlined above, given the flexible and varied nature of natural language structures. The objective is to establish whether there are variations in the complement linearisations of the XPs studied and the implications such variations hold for the interlanguage syntax of English and Ibibio. A corpus-based approach was employed in obtaining the English data, and V-bar and N-bar structures containing complements were isolated for analysis. Data were examined from the perspective of the X-bar and Government theories of Chomsky's (1981) Government-Binding framework. Findings from the analysis show that in V-bar structures in English, heads are consistently placed phrase-initially, yielding a Head-Complement order; however, complement linearisation in the N-bar structures studied exhibited parametric variations. Thus, in some N-bar structures in English the nominal head is ordered to the left, whereas in others the head term occurs to the right. It may therefore be concluded that the principles which determine complement ordering are both language-particular and phrase-specific, following insights provided within phrasal syntax.

Keywords: complement order, complement-head order, head-complement order, language-particular principles

Procedia PDF Downloads 343
20736 Deep Brain Stimulation and Motor Cortex Stimulation for Post-Stroke Pain: A Systematic Review and Meta-Analysis

Authors: Siddarth Kannan

Abstract:

Objectives: Deep Brain Stimulation (DBS) and Motor Cortex Stimulation (MCS) are innovative interventions for treating neuropathic pain disorders such as post-stroke pain. While each treatment has a varying degree of success in managing pain, a comparative analysis has not yet been performed, and the success rates of these techniques using validated, objective pain scores have not been synthesised. The aim of this study was to compare the pain relief offered by MCS and DBS in patients with post-stroke pain and to assess whether either procedure offered better results. Methods: A systematic review and meta-analysis were conducted in accordance with PRISMA guidelines (PROSPERO ID CRD42021277542). Three databases were searched, and articles published from 2000 to June 2023 were included (last search date 25 June 2023). Meta-analysis was performed using random-effects models. We evaluated the performance of DBS or MCS by assessing studies that reported pain relief using the Visual Analogue Scale (VAS). Descriptive statistics were analysed using SPSS (Version 27; IBM, Armonk, NY, USA), and R (RStudio, Version 4.0.1) was used to perform the meta-analysis. Results: Of the 478 articles identified, 27 were included in the analysis (232 patients: 117 DBS and 115 MCS). The pooled proportion of patients who improved after DBS was 0.68 (95% CI, 0.57-0.77, I2=36%); the pooled proportion who improved after MCS was 0.72 (95% CI, 0.62-0.80, I2=59%). A further sensitivity analysis included only studies with a minimum of 5 patients, to assess any impact on the overall results; nine studies each for DBS and MCS met this criterion, and there was no significant difference in results. Conclusions: The use of surgical interventions such as DBS and MCS is an emerging field for the treatment of post-stroke pain, with limited studies exploring and comparing the two techniques. While our study shows that MCS might be a slightly better treatment option, further research is needed to determine the appropriate surgical intervention for post-stroke pain.
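
A pooled proportion of this kind is typically obtained with a random-effects estimator. The sketch below is a minimal Python illustration of DerSimonian-Laird pooling on the logit scale, with hypothetical per-study counts; it is not the authors' actual analysis code, which used R.

```python
import numpy as np

def pooled_proportion(events, totals):
    """DerSimonian-Laird random-effects pooling of proportions (logit scale)."""
    events, totals = np.asarray(events, float), np.asarray(totals, float)
    p = events / totals
    y = np.log(p / (1 - p))                    # logit of each study's proportion
    v = 1 / events + 1 / (totals - events)     # within-study variance of the logit
    w = 1 / v
    y_fe = np.sum(w * y) / np.sum(w)           # fixed-effect mean
    q = np.sum(w * (y - y_fe) ** 2)            # Cochran's Q (heterogeneity)
    tau2 = max(0.0, (q - (len(y) - 1)) / (w.sum() - (w ** 2).sum() / w.sum()))
    w_re = 1 / (v + tau2)                      # random-effects weights
    y_re = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1 / np.sum(w_re))
    inv = lambda z: 1 / (1 + np.exp(-z))       # back-transform to a proportion
    return inv(y_re), (inv(y_re - 1.96 * se), inv(y_re + 1.96 * se))

# hypothetical counts: improved patients / patients treated, per study
print(pooled_proportion([8, 5, 12, 7], [12, 8, 15, 10]))
```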

Keywords: post-stroke pain, deep brain stimulation, motor cortex stimulation, pain relief

Procedia PDF Downloads 133
20735 Flexible Laser Reduced Graphene Oxide/MnO2 Electrode for Supercapacitor Applications

Authors: Ingy N. Bkrey, Ahmed A. Moniem

Abstract:

We succeeded in producing a high-performance, flexible graphene/manganese dioxide (G/MnO2) electrode coated on a flexible polyethylene terephthalate (PET) substrate. The graphene film is initially synthesized by drop-casting the graphene oxide (GO) solution on the PET substrate, followed by simultaneous reduction and patterning of the dried film using a carbon dioxide (CO2) laser beam with a power of 1.8 W. The potentiostatic anodic deposition method was used to deposit thin films of MnO2 with different loading masses of 10-50 and 100 μg.cm-2 on the pre-prepared graphene film. The electrodes were fully characterized in terms of structure, morphology, and electrochemical performance. A maximum specific capacitance of 973 F.g-1 was attained when 50 μg.cm-2 of MnO2 was deposited on the laser-reduced graphene oxide, rGO (the G/50MnO2 electrode), and over 92% of its initial capacitance was retained after 1000 cycles. The good electrochemical performance and long-term cycling stability make our proposed approach a promising candidate for supercapacitor applications.

Keywords: electrode deposition, flexible, graphene oxide, graphene, high-power CO2 laser, MnO2

Procedia PDF Downloads 313
20734 Wireless Sensor Networks Optimization by Using 2-Stage Algorithm Based on Imperialist Competitive Algorithm

Authors: Hamid R. Lashgarian Azad, Seyed N. Shetab Boushehri

Abstract:

Wireless sensor networks (WSNs) have become progressively popular due to their wide range of applications. A wireless sensor network is made of numerous tiny, battery-powered sensor nodes, and maximizing the lifetime of such a network is a very significant problem. In this paper, we propose a two-stage protocol based on an imperialist competitive algorithm (2S-ICA) to solve a sensor network optimization problem. Long communication distances between the sensors and the sink rapidly deplete the energy of the sensors and reduce the lifetime of the network. By connecting sensors into a series of independent clusters using 2S-ICA, the overall communication distance can be reduced considerably, thereby extending the network lifetime. Comparison of the proposed protocol with the LEACH protocol, which is commonly used for WSN problems, shows that our protocol performs better in terms of improving network lifetime and increasing the amount of transmitted data.
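
The clustering stage can be sketched with k-means (one of the paper's keywords); in the proposed protocol, the imperialist competitive algorithm performs this optimization, for which the brute-force cluster-head selection below is only a stand-in. Coordinates and parameters are made up:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
nodes = rng.uniform(0, 100, size=(100, 2))   # sensor positions in a 100 m field
sink = np.array([50.0, 150.0])               # sink placed outside the field

labels = KMeans(n_clusters=5, n_init=10).fit_predict(nodes)

cluster_heads = []
for c in range(5):
    members = nodes[labels == c]
    intra = np.linalg.norm(members[:, None] - members[None, :], axis=-1).sum(axis=1)
    to_sink = np.linalg.norm(members - sink, axis=1)
    # head minimising member-to-head distance plus relayed traffic to the sink
    cluster_heads.append(members[np.argmin(intra + len(members) * to_sink)])
```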

Keywords: wireless sensor network, imperialist competitive algorithm, LEACH protocol, k-means clustering

Procedia PDF Downloads 98
20733 Full-Scale Test of a Causeway Embankment Supported by Raft-Aggregate Column Foundation on Soft Clay Deposit

Authors: Tri Harianto, Lawalenna Samang, St. Hijraini Nur, Arwin

Abstract:

Recently, a port development was constructed in Makassar city, South Sulawesi Province, Indonesia. Makassar city is located in a lowland area dominated by soft marine clay deposits, and a two-kilometre causeway was built on the soft clay layer. In order to investigate the behavior of the causeway embankment, a full-scale test of a high embankment built on a soft clay deposit was conducted. The 3.5 m high embankment was supported by two types of foundation, a raft foundation and a raft-aggregate column foundation. Since the ground was undergoing consolidation due to the preload, both foundations were monitored in order to analyze the vertical ground movement induced by the settlement of the foundation. In this study, the two foundation types (raft and raft-aggregate column) were tested to observe the effectiveness of the raft-aggregate column in reducing settlement compared to the raft foundation. Settlement was monitored during the construction stage using settlement plates located at the center and toe of the embankment, with measurements taken every day for each embankment construction stage (4 months). In addition, an analytical calculation was conducted for comparison with the full-scale test results. The results show that the raft-aggregate column foundation significantly reduces the settlement, by 30% compared to the raft foundation, and also shortens the time period of each loading stage. Good agreement between the analytical calculation and the full-scale test results was also found in this study.

Keywords: full-scale, preloading, raft-aggregate column, soft clay

Procedia PDF Downloads 292
20732 Microbial Activity and Greenhouse Gas (GHG) Emissions in Recovery Process in a Grassland of China

Authors: Qiushi Ning

Abstract:

Nitrogen (N) is an important limiting factor in various ecosystems, and the N deposition rate is increasing at an unprecedented pace due to anthropogenic activities. N deposition alters microbial growth and activity, and microbially mediated N cycling, by changing soil pH and the availability of N and carbon (C). CO2, CH4 and N2O are important greenhouse gases which threaten the sustainability and functioning of ecosystems. With prolonged and increasing N enrichment, soil acidification and C limitation will be aggravated, and microbial biomass will decline further. Soil acidification and lack of C induced by N addition are regarded as two important factors regulating microbial activity and growth, yet studies combining soil acidification with lack of C in their effects on the microbial community are scarce. In order to restore ecosystems affected by chronic N loading, we determined the responses of microbial activity and GHG emissions to lime and glucose additions (control, 1‰ lime, 2‰ lime, glucose, 1‰ lime×glucose and 2‰ lime×glucose), applied to alleviate soil acidification and supply C to soils receiving N addition at rates of 0-50 g N m-2 yr-1. The results showed no significant responses of soil respiration or microbial biomass (MBC and MBN) to lime addition; however, glucose substantially improved both soil respiration and microbial biomass (MBC and MBN), and the cumulative CO2 emission and microbial biomass of the lime×glucose treatments were not significantly higher than those of the glucose-only treatment. The glucose and lime×glucose treatments reduced the net mineralization and nitrification rates, because the C supply stimulated microbial growth, incorporating more inorganic N into the biomass, so the mineralization of organic N was relatively reduced. Glucose addition also increased CH4 and N2O emissions. CH4 emissions were regulated mainly by the C resource, which serves as a substrate for methanogens, whereas N2O emissions were regulated by both the C resource and soil pH: C is an important energy source, and the increased soil pH could benefit the nitrifiers and denitrifiers that are the primary producers of N2O. Soil respiration and N2O emissions increased with increasing N addition rates in all glucose treatments, as the external C resource improved microbial N utilization. Compared with alleviated soil acidification, improved C availability substantially increased microbial activity; therefore, C should be the main limiting factor in soils under long-term N loading. Most importantly, when organic C fertilization is used to improve the production of ecosystems, the GHG emissions and the consequent warming potential should be carefully considered.

Keywords: acidification and C limitation, greenhouse gas emission, microbial activity, N deposition

Procedia PDF Downloads 300
20731 Parametric Estimation of U-Turn Vehicles

Authors: Yonas Masresha Aymeku

Abstract:

The purpose of capacity modelling at U-turns is to develop a relationship between capacity and geometric characteristics. In fact, the few models available for estimating capacity at different transportation facilities do not provide specific guidelines for median openings. For this reason, an effort is made to estimate capacity by collecting data sets from median openings on roads with different numbers of lanes in Hyderabad City, India. The wide differences (43%-59%) among the capacity values estimated by the existing models show their limitations for mixed traffic situations. Thus, a distinct model is proposed for estimating the capacity of U-turning vehicles at median openings under mixed traffic conditions, which would further prompt investigation of the different factors that might affect capacity.

Keywords: geometric, guidelines, median, vehicles

Procedia PDF Downloads 59
20730 A Smart CAD Program for Custom Hand Orthosis Generation Based on Anthropometric Relationships

Authors: Elissa D. Ledoux, Eric J. Barth

Abstract:

Producing custom orthotic devices is a time-consuming and iterative process. Efficiency could be increased with a smart CAD program to rapidly generate custom part files for 3D printing, reducing the need for a skilled orthosis technician as well as the hands-on time required. Anthropometric data for the hand were analyzed in order to determine dimensional relationships and reduce the number of measurements needed to parameterize the hand. Using these relationships, a smart CAD package was developed to produce custom-sized hand orthosis parts downloadable for 3D printing. Results showed that the number of anatomical parameters required could be reduced from 8 to 3 and that the relationships hold for 5th to 95th percentile male hands. The CAD parts regenerate correctly over the same range. This package could significantly impact the orthotics industry by expediting production and reducing the required human resources and patient contact.
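
The parameterization idea can be illustrated as a function that derives the remaining hand dimensions from three measurements via fixed anthropometric ratios. The coefficients below are placeholders for illustration; the abstract does not report the fitted values:

```python
def orthosis_dimensions(hand_length, hand_breadth, wrist_circumference):
    """Derive hand-orthosis driving dimensions from three measurements (mm).

    The ratios are illustrative placeholders, not the paper's regression values.
    """
    return {
        "palm_length": 0.56 * hand_length,
        "middle_finger_length": 0.44 * hand_length,
        "finger_breadth": 0.22 * hand_breadth,
        "palm_thickness": 0.13 * wrist_circumference,
        "wrist_breadth": 0.38 * wrist_circumference,
    }

# a CAD template would then regenerate its sketch dimensions from this dict
print(orthosis_dimensions(hand_length=189, hand_breadth=87, wrist_circumference=171))
```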

Keywords: CAD, hand, orthosis, orthotic, rehabilitation robotics, upper limb

Procedia PDF Downloads 218
20729 Heuristic Approaches for Injury Reductions by Reduced Car Use in Urban Areas

Authors: Stig H. Jørgensen, Trond Nordfjærn, Øyvind Teige Hedenstrøm, Torbjørn Rundmo

Abstract:

The aim of this paper is to estimate and forecast road traffic injuries for the coming 10-15 years, given new targets in urban transport policy and shifts in mode of transport, including the injury cross-effects of mode changes. The paper discusses the possibilities and limitations of measuring and quantifying possible injury reductions. Injury data (killed and seriously injured road users) from six urban areas in Norway for 1998-2012 (N=4709 casualties) form the basis for estimates of changing injury patterns. For the coming period, the number of injuries and injury rates are calculated by type of road user (motorized versus non-motorized categories), sex, age and type of road. A prognosticated population increase (25%) in the six urban areas by 2025 will curb the continued fall in injury figures. However, policy strategies and measures geared towards a stronger modal shift from private vehicles to safer public transport (bus, train) will modify this effect. On the other hand, door-to-door transport (pedestrians on their way to and from public transport nodes) will imply higher exposure for pedestrians and cyclists converting from private vehicle use (including fall accidents not registered as traffic accidents). The overall effect is the sum of these modal shifts in the growing urban population; in addition, the diminishing returns of most road safety countermeasures have to be taken into account. The paper demonstrates how uncertainties in the various estimates (prediction factors) of increasing and decreasing injury figures may partly offset each other. The paper discusses the road safety policy and welfare consequences of the transport mode shift, including reduced use of private vehicles, and the further environmental impacts. In this regard, safety and environmental issues will as a rule concur; however, pursuing environmental goals (e.g., improved air quality, reduced CO2 emissions) by encouraging more cycling may generate more cycling injuries. The study received financial grants from the Norwegian Research Council's Transport Safety Program.

Keywords: road injuries, forecasting, reduced private car use, urban, Norway

Procedia PDF Downloads 232
20728 Seismic Fragility of Weir Structure Considering Aging Degradation of Concrete Material

Authors: HoYoung Son, DongHoon Shin, WooYoung Jung

Abstract:

This study presents a seismic fragility framework for a concrete weir structure subjected to strong seismic ground motions; in particular, the concrete aging condition of the weir structure was taken into account. In order to understand the influence of concrete aging on the weir structure, the analytical seismic fragility of the structure was derived for pre- and post-deterioration concrete using probabilistic risk assessment. The performance of the concrete weir after five years was assumed for the concrete aging or deterioration; for this condition, the elastic modulus was simply reduced by about one-tenth compared with the initial condition of the weir structure. A 2D nonlinear finite element analysis considering the deterioration of concrete was performed using ABAQUS, a commercial structural analysis platform. The seismic fragility analysis showed that the simplified concrete degradation resulted in an increase of almost 45% in the probability of failure at Limit State 3, in comparison to the initial construction stage.
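
Fragility curves of this kind are commonly written as a lognormal CDF of the intensity measure, P(failure | IM = x) = Φ(ln(x/θ)/β). A minimal sketch with illustrative median capacity θ and dispersion β (not the paper's fitted values), where aging is represented as a reduced median:

```python
import numpy as np
from scipy.stats import norm

def fragility(im, theta, beta):
    """Lognormal fragility: P(damage >= limit state | intensity measure = im)."""
    return norm.cdf(np.log(im / theta) / beta)

im = np.linspace(0.05, 2.0, 40)                      # e.g. PGA in g
p_initial = fragility(im, theta=0.80, beta=0.50)     # illustrative parameters
p_aged = fragility(im, theta=0.80 * 0.9, beta=0.50)  # aged: ~10% lower median capacity
print(np.round(p_aged - p_initial, 3))               # increase in failure probability
```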

Keywords: weir, FEM, concrete, fragility, aging

Procedia PDF Downloads 478
20727 The Relationship of Lean Management Principles with Lean Maturity Levels: Multiple Case Study in Manufacturing Companies

Authors: Alexandre D. Ferraz, Dario H. Alliprandini, Mauro Sampaio

Abstract:

Companies and other institutions constantly seek better organizational performance and greater competitiveness, and many tools, methodologies and models exist for increasing performance. However, the Lean Management approach seems to be the most effective in terms of achieving a significant improvement in productivity relatively quickly. Although Lean tools are relatively easy to understand and implement in different contexts, many organizations are not able to transform themselves into 'Lean companies'. Most implementation efforts have yielded isolated benefits, failing to achieve the desired impact on the performance of the overall enterprise system. There is also a growing perception of the importance of management in Lean transformation, but few studies have empirically investigated and described 'Lean Management'. In order to understand more clearly the ideas that guide Lean Management and its influence on the maturity level of the production system, the objective of this research is to analyze the relationship between Lean Management principles and the Lean maturity level in organizations. The research also analyzes the principles of Lean Management and their relationship with 'Lean culture' and the results obtained. The research was developed using the case study methodology. Three manufacturing units of a German multinational company from the industrial automation segment, located in different countries, were studied in order to better compare practices and maturity levels of implementation. The primary source of information was a research questionnaire based on the theoretical review. The research showed that the higher the level of Lean Management principles, the higher the Lean maturity level, the Lean culture level, and the level of Lean results obtained in the organization. It also showed that factors such as the time since Lean concepts were first applied and company size were not determinant for the level of Lean Management principles or, consequently, for the level of Lean maturity; the characteristics of the production system had much more influence on the different aspects evaluated. The present research also offers recommendations for the managers of the plants analyzed and suggestions for future research.

Keywords: lean management, lean principles, lean maturity level, lean manufacturing

Procedia PDF Downloads 140
20726 Preparation Static Dissipative Nanocomposites of Alkaline Earth Metal Doped Aluminium Oxide and Methyl Vinyl Silicone Polymer

Authors: Aparna M. Joshi

Abstract:

Methyl vinyl silicone polymer (VMQ)-alkaline earth metal doped aluminium oxide composites are prepared by the conventional two-roll open mill mixing method. Doped aluminium oxides (DAOs), using the silvery-white alkaline earth metals Mg and Ca as dopants at a concentration of 0.4%, are synthesized by the microwave combustion method and referred to as MA (Mg-doped aluminium oxide) and CA (Ca-doped aluminium oxide). The as-synthesized materials are characterized by electrical resistance, X-ray diffraction, FE-SEM, TEM and FTIR. The electrical resistances of the DAOs are observed to be ~8-20 MΩ. This means that the resistance of aluminium oxide (corundum, α-Al2O3), which is ~10^10 Ω, is reduced by roughly three to four orders of magnitude after doping. XRD studies confirm the doping of Mg and Ca into the aluminium oxide. The microstructural study using FE-SEM shows flaky, clustered structures with flake thicknesses between 10 and 20 nm. TEM images depict rod-shaped particles with diameters of ~50-70 nm. The nanocomposites are synthesized by incorporating the DAOs at a concentration of 75 phr (parts per hundred parts of rubber) into the VMQ polymer. The electrical resistance of the VMQ polymer, which is ~10^15 Ω, drops by about eight orders of magnitude, and the nanocomposites retain an electrical resistance of ~30-50 MΩ, which lies in the static dissipative range. In this work, white-coloured, static-dissipative VMQ polymer-DAO nanocomposites (MAVMQ for Mg doping and CAVMQ for Ca doping) have been synthesized. The physical and mechanical properties of the composites, such as specific gravity, hardness, tensile strength and rebound resilience, were measured. Hardness and tensile strength were found to increase, with negligible alteration of the other properties.

Keywords: doped aluminium oxide, methyl vinyl silicone polymer, microwave synthesis, static dissipation

Procedia PDF Downloads 550
20725 Exploring Data Leakage in EEG Based Brain-Computer Interfaces: Overfitting Challenges

Authors: Khalida Douibi, Rodrigo Balp, Solène Le Bars

Abstract:

In the medical field, applications involving human experiments are frequently associated with reduced sample sizes, which makes the training of machine learning models quite sensitive and therefore not very robust or generalizable. This is notably the case in Brain-Computer Interface (BCI) studies, where the sample size rarely exceeds 20 subjects or a few trials. To address this problem, several resampling approaches are often used during the data preparation phase, which is a highly critical step in a data science analysis process. One naive approach usually applied by data scientists consists in transforming the entire database before the resampling phase. However, this can cause a model's performance to be incorrectly estimated when making predictions on unseen data. In this paper, we explore the effect of data leakage observed during our BCI experiments for device control through the real-time classification of SSVEPs (steady-state visually evoked potentials). We also study potential ways to ensure optimal validation of the classifiers during the calibration phase to avoid overfitting. The results show that the scaling step is crucial for some algorithms and should be applied after the resampling phase to avoid data leakage and improve results.
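
The leakage described here arises when a scaler is fit on the full dataset before cross-validation, so test-fold statistics contaminate training. A minimal scikit-learn sketch of the safe version, using synthetic stand-in features rather than real SSVEP data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 32))               # stand-in for SSVEP feature vectors
y = rng.integers(0, 2, size=120)             # stand-in class labels

# leaky: StandardScaler().fit_transform(X) before splitting lets every fold
# see statistics of the held-out trials.

# leakage-safe: the scaler is re-fit on each training fold only
pipe = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
print(cross_val_score(pipe, X, y, cv=5))
```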

Keywords: data leakage, data science, machine learning, SSVEP, BCI, overfitting

Procedia PDF Downloads 149
20724 Effects of Level Densities and Those of a-Parameter in the Framework of Preequilibrium Model for 63,65Cu(n,xp) Reactions in Neutrons at 9 to 15 MeV

Authors: L. Yettou

Abstract:

In this study, the proton emission spectra produced by the 63Cu(n,xp) and 65Cu(n,xp) reactions are calculated in the framework of preequilibrium models using the EMPIRE and TALYS codes. Exciton model predictions combined with the Kalbach angular distribution systematics and the Hybrid Monte Carlo Simulation (HMS) were used. The effects of level densities and of the level density a-parameter on the calculations have been investigated. The comparison with experimental data shows a clear improvement over the Exciton Model and HMS calculations.
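
The sensitivity to the a-parameter can be illustrated with the simple Fermi-gas level density, ρ(U) ≈ √π·exp(2√(aU)) / (12·a^(1/4)·U^(5/4)), a basic form of the models used (in more refined variants) inside codes such as EMPIRE and TALYS. A minimal sketch with illustrative a values:

```python
import numpy as np

def fermi_gas_level_density(E, a, delta=0.0):
    """Fermi-gas level density (MeV^-1); E in MeV, a in MeV^-1, delta a back-shift."""
    U = E - delta                              # effective excitation energy
    return np.sqrt(np.pi) / (12.0 * a**0.25 * U**1.25) * np.exp(2.0 * np.sqrt(a * U))

# illustrative a values in the range often quoted for A ~ 63-65 nuclei
for a in (7.0, 9.0, 11.0):
    print(f"a = {a:4.1f} MeV^-1 -> rho(10 MeV) = {fermi_gas_level_density(10.0, a):.3e}")
```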

Keywords: preequilibrium models, level density, level density a-parameter, EMPIRE code, TALYS code

Procedia PDF Downloads 130
20723 An IoT-Enabled Crop Recommendation System Utilizing Message Queuing Telemetry Transport (MQTT) for Efficient Data Transmission to AI/ML Models

Authors: Prashansa Singh, Rohit Bajaj, Manjot Kaur

Abstract:

In the modern agricultural landscape, precision farming has emerged as a pivotal strategy for enhancing crop yield and optimizing resource utilization. This paper introduces an innovative Crop Recommendation System (CRS) that leverages Internet of Things (IoT) technology and the Message Queuing Telemetry Transport (MQTT) protocol to collect critical environmental and soil data via sensors deployed across agricultural fields. The system is designed to address the challenges of real-time data acquisition, efficient data transmission, and dynamic crop recommendation through the application of advanced Artificial Intelligence (AI) and Machine Learning (ML) models. The CRS architecture encompasses a network of sensors that continuously monitor environmental parameters such as temperature, humidity, soil moisture, and nutrient levels. The sensor data are transmitted to a central MQTT broker, ensuring reliable and low-latency communication even in the bandwidth-constrained scenarios typical of rural agricultural settings. Upon reaching the broker, the data are processed and analyzed by AI/ML models trained to correlate specific environmental conditions with optimal crop choices and cultivation practices. These models consider historical crop performance data, current agricultural research, and real-time field conditions to generate tailored crop recommendations. The implementation achieves 99% accuracy.
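
The transport layer can be sketched with the paho-mqtt client: sensor nodes publish JSON readings to a topic, and the recommendation service subscribes and feeds each reading to the model. The broker address, topic and stub model below are hypothetical placeholders, not values from the paper:

```python
import json
import paho.mqtt.client as mqtt

BROKER, TOPIC = "farm-gateway.local", "field/plot1/sensors"   # hypothetical names

def recommend_crop(reading):
    """Stand-in for the trained AI/ML model described in the abstract."""
    return "rice" if reading["soil_moisture"] > 0.6 else "millet"

def on_message(client, userdata, msg):
    reading = json.loads(msg.payload)     # e.g. {"temp": 29.1, "soil_moisture": 0.71}
    print("recommendation:", recommend_crop(reading))

client = mqtt.Client()                    # paho-mqtt 1.x style constructor
client.on_message = on_message
client.connect(BROKER, 1883)              # 1883 is the default unencrypted MQTT port
client.subscribe(TOPIC)
client.loop_forever()                     # sensor nodes publish to the same topic
```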

Keywords: IoT, MQTT protocol, machine learning, sensor, publish/subscribe, agriculture, humidity

Procedia PDF Downloads 60
20722 Several Spectrally Non-Arbitrary Ray Patterns of Order 4

Authors: Ling Zhang, Feng Liu

Abstract:

A matrix is called a ray pattern matrix if each of its entries is either 0 or a ray in the complex plane originating from 0. A ray pattern A of order n is called spectrally arbitrary if the complex matrices in the ray pattern class of A give rise to all possible nth-degree complex polynomials; otherwise, it is said to be a spectrally non-arbitrary ray pattern. A spectrally arbitrary ray pattern A of order n is called minimally spectrally arbitrary if replacing any nonzero entry of A makes it no longer spectrally arbitrary. In this paper, using the nilpotent-Jacobi method, we find that A(θ) is not spectrally arbitrary when n equals 4 for any θ greater than or equal to 0 and less than or equal to n, and we give several ray patterns A(θ) of order n that are not spectrally arbitrary for some θ in this range. An example is given in the paper.

Keywords: spectrally arbitrary, nilpotent matrix, ray patterns, sign patterns

Procedia PDF Downloads 176
20721 Evaluating Performance of Value at Risk Models for the MENA Islamic Stock Market Portfolios

Authors: Abderrazek Ben Maatoug, Ibrahim Fatnassi, Wassim Ben Ayed

Abstract:

In this paper, we investigate market risk quantification for Middle East and North Africa (MENA) Islamic equity markets. We use Value-at-Risk (VaR) as a measure of potential risk in Islamic stock markets, for long and short positions, based on the RiskMetrics model and conditional parametric ARCH-class volatility models with normal, Student and skewed Student distributions. The sample consists of daily data for 2006-2014 for 11 Islamic stock market indices. We conduct Kupiec tests and Engle and Manganelli tests to evaluate the performance of each model. The main findings of our empirical results are that (i) VaR models based on the Student and skewed Student distributions perform best at the significance level α=1%, for all Islamic stock market indices and for both long and short trading positions, and (ii) the RiskMetrics model and the VaR model based on conditional volatility with a normal distribution provide the most accurate VaR estimations for both long and short trading positions at the significance level α=5%.
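
The RiskMetrics leg of such a comparison reduces to an EWMA variance recursion with a normal quantile; the ARCH-class and (skewed) Student variants replace the recursion and the quantile. A minimal sketch for a long position, assuming simple daily returns and the standard λ = 0.94 decay:

```python
import numpy as np

def riskmetrics_var(returns, lam=0.94, alpha=0.01):
    """One-day RiskMetrics VaR for a long position, as a return threshold."""
    sigma2 = np.var(returns[:20])              # seed the EWMA recursion
    for r in returns:
        sigma2 = lam * sigma2 + (1 - lam) * r ** 2
    z = {0.01: 2.326, 0.05: 1.645}[alpha]      # standard normal quantiles
    return -z * np.sqrt(sigma2)

rng = np.random.default_rng(1)
returns = rng.normal(0.0, 0.012, 2000)         # synthetic daily index returns
print(riskmetrics_var(returns, alpha=0.01))    # roughly -2.8% per day here
```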

Keywords: value-at-risk, risk management, Islamic finance, GARCH models

Procedia PDF Downloads 589
20720 Numerical Solutions of Fractional Order Epidemic Model

Authors: Sadia Arshad, Ayesha Sohail, Sana Javed, Khadija Maqbool, Salma Kanwal

Abstract:

The dynamics of carriers play an essential role in the evolution and global transmission of infectious diseases and are discussed in this study. To make this approach novel, we consider a fractional-order model, which generalizes the integer-order derivative to an arbitrary (non-integer) order. Since the integration involved is non-local, this property of the fractional operator is very useful for studying epidemic models of infectious diseases. An extended numerical method (ODE solver) is implemented on the model equations, and we present simulations of the model for different values of the fractional order to study the effect of carriers on the transmission dynamics. The global dynamics of the fractional model are established by using the reproduction number.
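
Fractional-order solvers differ from classical ones in carrying the full memory of the solution. A minimal explicit Grünwald-Letnikov sketch for a plain SIR system (the paper's carrier compartment and parameters are not reproduced; the values below are illustrative). With α = 1 the weights collapse to forward Euler:

```python
import numpy as np

def gl_fractional_sir(alpha=0.9, beta=0.4, gamma=0.1, h=0.1, T=100.0):
    """Explicit Grunwald-Letnikov scheme: D^alpha y = f(y), illustrative SIR."""
    n = int(T / h)
    y = np.zeros((n + 1, 3))
    y[0] = [0.99, 0.01, 0.0]                   # initial S, I, R fractions
    c = np.zeros(n + 1)
    c[0] = 1.0
    for j in range(1, n + 1):                  # GL binomial weights, recursively
        c[j] = c[j - 1] * (1.0 - (alpha + 1.0) / j)
    def f(s, i, r):
        return np.array([-beta * s * i, beta * s * i - gamma * i, gamma * i])
    for k in range(1, n + 1):                  # y_k = h^a f(y_{k-1}) - sum c_j y_{k-j}
        memory = sum(c[j] * y[k - j] for j in range(1, k + 1))
        y[k] = h ** alpha * f(*y[k - 1]) - memory
    return y

print(gl_fractional_sir()[-1])                 # final S, I, R
```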

Keywords: fractional differential equation, numerical simulations, epidemic model, transmission dynamics

Procedia PDF Downloads 594
20719 Deep Reinforcement Learning Model Using Parameterised Quantum Circuits

Authors: Lokes Parvatha Kumaran S., Sakthi Jay Mahenthar C., Sathyaprakash P., Jayakumar V., Shobanadevi A.

Abstract:

With the evolution of technology, the need to solve complex computational problems like machine learning and deep learning has shot up, yet even the most powerful classical supercomputers find it difficult to execute these tasks. With the recent development of quantum computing, researchers and tech giants strive for new quantum circuits for machine learning tasks, as current work on Quantum Machine Learning (QML) promises lower memory consumption and fewer model parameters. However, it is strenuous to simulate classical deep learning models on existing quantum computing platforms due to the inflexibility of deep quantum circuits. As a consequence, it is essential to design viable quantum algorithms for QML on noisy intermediate-scale quantum (NISQ) devices. The proposed work explores Variational Quantum Circuits (VQCs) for Deep Reinforcement Learning by remodeling the experience replay and target network into a VQC representation. In addition, to reduce the number of model parameters, quantum information encoding schemes are used to achieve better results than classical neural networks. VQCs are employed to approximate the deep Q-value function for decision-making and policy-selection reinforcement learning with experience replay and the target network.
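
A parameterised quantum circuit of this kind can be sketched with PennyLane: angle-encode the state, apply entangling variational layers, and read one expectation value per action as its Q-value. The layer count, encoding and wiring below are illustrative, not the paper's architecture:

```python
import numpy as np
import pennylane as qml

n_qubits, n_actions = 4, 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def q_values(state, weights):
    """Variational circuit approximating Q(s, a) for a small discrete action set."""
    for i in range(n_qubits):
        qml.RY(np.pi * state[i], wires=i)          # angle encoding of state in [0, 1]
    for layer in weights:                          # weights shape: (n_layers, n_qubits)
        for i in range(n_qubits):
            qml.CNOT(wires=[i, (i + 1) % n_qubits])
        for i in range(n_qubits):
            qml.RY(layer[i], wires=i)
    return [qml.expval(qml.PauliZ(i)) for i in range(n_actions)]

weights = np.random.uniform(0, 2 * np.pi, (3, n_qubits))
print(q_values(np.array([0.1, 0.5, 0.3, 0.9]), weights))   # one value per action
```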

Keywords: quantum computing, quantum machine learning, variational quantum circuit, deep reinforcement learning, quantum information encoding scheme

Procedia PDF Downloads 127
20718 Direct Measurement of Pressure and Temperature Variations During High-Speed Friction Experiments

Authors: Simon Guerin-Marthe, Marie Violay

Abstract:

Thermal pressurization (TP) has been proposed as a key mechanism in the weakening of faults during dynamic ruptures. Theoretical and numerical studies clearly show how frictional heating can lead to an increase in pore fluid pressure due to the rapid slip along faults that occurs during earthquakes. In addition, recent laboratory studies have evidenced local pore pressure or local temperature variations during rotary shear tests, which are consistent with theoretical and numerical TP models. The aim of this study is to complement previous ones by measuring both local pore pressure and local temperature variations in the vicinity of a water-saturated calcite gouge layer subjected to a controlled slip velocity in a direct double-shear configuration. Laboratory investigation of the TP process is crucial to understanding the conditions under which it is likely to become a dominant mechanism controlling dynamic friction. It is also important for understanding the timing and magnitude of temperature and pore pressure variations, to help establish when TP is negligible and how it competes with strengthening mechanisms such as dilatancy, which can occur during rock failure. Here we present unique direct measurements of temperature and pressure variations during high-speed friction experiments under various load-point velocities and show the timing of these variations relative to the slip event.

Keywords: thermal pressurization, double-shear test, high-speed friction, dilatancy

Procedia PDF Downloads 59
20717 Supporting Densification through the Planning and Implementation of Road Infrastructure in the South African Context

Authors: K. Govender, M. Sinclair

Abstract:

This paper demonstrates a proof of concept whereby shorter trips and land use densification can be promoted through an alternative approach to the planning and implementation of road infrastructure in the South African context. It briefly discusses how the development of the Compact City concept relies on a combination of promoting shorter trips and densification through a change in focus in road infrastructure provision. The methodology developed in this paper uses a traffic model to test the impact of synthesized deterrence functions on congestion locations in the road network through the assignment of traffic on the study network. The results demonstrate that intelligent planning of road infrastructure can indeed promote reduced urban sprawl, increased residential density and mixed-use areas supported by an efficient public transport system, as well as reduced dependence on the freeway network, within a fixed road infrastructure budget. The study has resonance for all cities where urban sprawl is seemingly unstoppable.
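
A deterrence function of the kind synthesized here typically enters a doubly constrained gravity model as f(c) = exp(-βc): the larger β, the more strongly long trips are penalised and the more demand shifts to shorter trips. A minimal sketch on a made-up two-zone example:

```python
import numpy as np

def gravity_trip_distribution(origins, destinations, cost, beta=0.1, iters=50):
    """Doubly constrained gravity model with deterrence f(c) = exp(-beta * c)."""
    f = np.exp(-beta * cost)
    a, b = np.ones(len(origins)), np.ones(len(destinations))
    for _ in range(iters):                    # Furness/IPF balancing of the margins
        a = origins / (f * b).sum(axis=1)
        b = destinations / (f.T * a).sum(axis=1)
    return f * np.outer(a, b)                 # trip matrix T_ij

O = np.array([500.0, 300.0])                  # productions per zone (made up)
D = np.array([400.0, 400.0])                  # attractions per zone
C = np.array([[2.0, 8.0], [6.0, 3.0]])        # generalized cost (minutes)
print(gravity_trip_distribution(O, D, C, beta=0.3))   # higher beta -> shorter trips
```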

Keywords: compact cities, densification, road infrastructure planning, transportation modelling

Procedia PDF Downloads 173
20716 Lessons Learned from Interlaboratory Noise Modelling in Scope of Environmental Impact Assessments in Slovenia

Authors: S. Cencek, A. Markun

Abstract:

Noise assessment methods are regularly used in the scope of Environmental Impact Assessments to predict the expected noise emissions of planned projects, and different noise assessment methods can be used. In recent years, we have had the opportunity to collaborate in noise assessment procedures in which assessments by different laboratories were performed simultaneously, and we identified some significant differences in noise assessment results between laboratories in Slovenia. We estimate that, although good georeferenced input data for setting up acoustic models exist in Slovenia, there is no clear consensus on predictive noise methods for planned projects. We analyzed the input data, methods and results of predictive noise modelling for two planned industrial projects, each done independently by two laboratories. We also analyzed the data, methods and results of two interlaboratory collaborative noise models for two existing noise sources (a railway and a motorway). In the cases of predictive noise modelling, the acoustic models were validated by noise measurements of surrounding existing noise sources, but over varying durations; the acoustic characteristics of existing buildings were also not described identically, and the planned noise sources were described and digitized differently. Differences in noise assessment results between laboratories ranged up to 10 dBA, which considerably exceeds the acceptable uncertainty range of 3 to 6 dBA. Contrary to predictive noise modelling, in the cases of collaborative noise modelling for the two existing noise sources, the possibility of performing validation noise measurements greatly increased the comparability of the modelling results: in both collaborative cases (existing motorway and railway), the results of the different laboratories were comparable, with differences below 5 dBA, the acceptable uncertainty set by the interlaboratory noise modelling organizer. The lessons learned from the study were: 1) predictive noise calculation using the formulae of the international standard SIST ISO 9613-2:1997 is not an appropriate method to predict the noise emissions of planned projects, since, due to the complexity of the procedure, they are not applied strictly; 2) noise measurements are an important tool for minimizing the noise assessment errors of planned projects and, in the case of predictive noise modelling, should be performed at least to validate the acoustic model; 3) national guidelines should be issued on the appropriate data, methods, noise source digitalization, validation of the acoustic model, etc., in order to unify predictive noise models and their results in the scope of Environmental Impact Assessments for planned projects.
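
For reference, the ISO 9613-2 style prediction that lesson 1 refers to starts from geometric divergence for a point source, A_div = 20·log10(d) + 11 dB. A minimal sketch keeping only that term plus a linear atmospheric absorption estimate; the absorption coefficient is illustrative, and ground, barrier and meteorological corrections are omitted:

```python
import numpy as np

def point_source_spl(lw, d, alpha_atm=0.002):
    """Sound pressure level (dB) at distance d (m) from a point source.

    lw: sound power level (dB); alpha_atm: illustrative absorption in dB/m.
    Only divergence and atmospheric absorption are modelled here.
    """
    a_div = 20.0 * np.log10(d) + 11.0       # spherical spreading
    return lw - a_div - alpha_atm * d

print(point_source_spl(lw=110.0, d=250.0))  # illustrative industrial source
```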

Keywords: environmental noise assessment, predictive noise modelling, spatial planning, noise measurements, national guidelines

Procedia PDF Downloads 230
20715 Placebo Analgesia in Older Age: Evidence from Event-Related Potentials

Authors: Angelika Dierolf, K. Rischer, A. Gonzalez-Roldan, P. Montoya, F. Anton, M. Van der Meulen

Abstract:

Placebo analgesia is a powerful cognitive endogenous pain modulation mechanism with high relevance for pain treatment. Older people would especially benefit from non-pharmacological pain interventions, since this age group is disproportionately affected by acute and chronic pain, while pharmacological treatments are less suitable due to polypharmacy and age-related changes in drug metabolism. Although aging is known to affect neurobiological and physiological aspects of pain perception, such as changes in pain threshold and pain tolerance, its effects on cognitive pain modulation strategies, including placebo analgesia, have hardly been investigated so far. In the present study, we assessed placebo analgesia in 35 older adults (60 years and older) and 35 younger adults (between 18 and 35 years). Acute pain was induced with short transdermal electrical pulses to the inner forearm, using a concentric stimulating electrode, with stimulation intensities individually adjusted to the participant's threshold. Next to the stimulation site, we applied sham transcutaneous electrical nerve stimulation (TENS). Participants were informed that sometimes the TENS device would be switched on (placebo condition) and sometimes switched off (control condition); in reality, it was always off. Participants received alternating blocks of painful stimuli in the placebo and control conditions and were asked to rate the intensity and unpleasantness of each stimulus on a visual analog scale (VAS). Pain-related evoked potentials were recorded with a 64-channel EEG. Preliminary results show a reduced placebo effect in older compared to younger adults in both the behavioral and neurophysiological data. Older people experienced less subjective pain reduction under sham TENS treatment than younger adults, as evidenced by the VAS ratings. The N1 and P2 event-related potential components were generally reduced in the older group, and while younger adults showed a reduced N1 and P2 under sham TENS treatment, this reduction was considerably smaller in older people. This reduced placebo effect in the older group suggests that cognitive pain modulation is altered in aging and may at least partly explain why older adults experience more pain. Our results highlight the need for a better understanding of the efficacy of non-pharmacological pain treatments in older adults and of how these can be optimized to meet the specific requirements of this population.

Keywords: placebo analgesia, aging, acute pain, TENS, EEG

Procedia PDF Downloads 137
20714 Bank ATM Monitoring System Using IR Sensor

Authors: P. Saravanakumar, N. Raja, M. Rameshkumar, D. Mohankumar, R. Sateeshkumar, B. Maheshwari

Abstract:

This research work was designed using Microsoft VB.NET as the front end and MySQL as the back end. The project deals with securing user transactions in the ATM system, and the application contains an option for sending the details of failed transactions to the customer by SMS. When a customer withdraws an amount from the bank ATM, the cash is sometimes not dispensed, yet the amount is still debited to the account; this application is used to avoid this type of problem. The proposed system uses an IR technique to detect whether the cash is dispensed: an IR transmitter and an IR receiver are placed on either side of the cash dispensing path and connected to each other through the IR signal. When a customer withdraws an amount, the IR receiver monitors whether the cash is dispensed. If it is, the signal between the IR receiver and the IR transmitter is interrupted, and only then does the monitoring system debit the withdrawal amount from the customer's account. If the cash is not dispensed, the signal is not interrupted, and the withdrawal amount is not debited. If the transaction completes successfully, the transaction details, such as the withdrawal amount and the current balance, are sent to the customer via SMS. If the transaction fails, a transaction-failed message is sent to the customer.
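
The dispensing check reduces to simple event logic: debit only if the IR beam across the cash slot is broken within a timeout. A minimal sketch in Python (the system described uses VB.NET); the hardware and bank interfaces below are hypothetical callables standing in for the real ones:

```python
import time

def monitor_withdrawal(ir_beam_interrupted, account, amount, send_sms, timeout=10.0):
    """Debit only when cash physically passes the IR beam; otherwise report failure.

    ir_beam_interrupted, account and send_sms are hypothetical interfaces.
    """
    deadline = time.time() + timeout
    while time.time() < deadline:
        if ir_beam_interrupted():            # receiver no longer sees the transmitter
            account.debit(amount)
            send_sms(f"Withdrawal of {amount} successful. Balance: {account.balance}")
            return True
        time.sleep(0.05)                     # poll the receiver
    send_sms("Transaction failed: cash not dispensed, no amount debited.")
    return False
```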

Keywords: ATM system, monitoring system, IR Transmitter, IR Receiver

Procedia PDF Downloads 306
20713 Performance of Reinforced Concrete Beams under Different Fire Durations

Authors: Arifuzzaman Nayeem, Tafannum Torsha, Tanvir Manzur, Shaurav Alam

Abstract:

Performance evaluation of reinforced concrete (RC) beams subjected to accidental fire is significant for post-fire capacity measurement. The mechanical properties of any RC beam degrade due to heating, since the strength and modulus of concrete and reinforcement suffer considerable reductions under elevated temperatures. Moreover, fire-induced thermal dilation and shrinkage cause internal stresses within the concrete and eventually result in cracking, spalling, and loss of stiffness, which ultimately leads to a lower service life. However, conducting a comprehensive full-scale experimental investigation of RC beams exposed to fire is difficult and cost-intensive, and finite element (FE) based numerical study can provide an economical alternative for evaluating the post-fire capacity of RC beams. In this study, an attempt has been made to study the fire behavior of RC beams under different fire durations using the FE software package ABAQUS. The concrete damaged plasticity model in ABAQUS was used to simulate the behavior of RC beams, and the effect of temperature on the strength and modulus of concrete and steel was modelled following the relevant Eurocodes. Initially, the FE models were validated against several experimental results from available scholarly articles. The response of the developed models matched the experimental outcomes quite well for beams without heat exposure, while the FE analysis of beams subjected to fire showed some deviation from the experimental results, particularly in terms of stiffness degradation; the ultimate strength and deflection of the FE models were nevertheless similar to the experimental values. The developed FE models thus exhibited good potential to predict the fire behavior of RC beams. Once validated, the FE models were used to analyze several RC beams having different strengths (ranging between 20 MPa and 50 MPa) exposed to the standard fire curve (ASTM E119) for different durations, and the post-fire performance of the beams was investigated in terms of load-deflection behavior, flexural strength, and deflection characteristics.

Keywords: fire durations, flexural strength, post fire capacity, reinforced concrete beam, standard fire

Procedia PDF Downloads 135
20712 Consequences of Some Remediative Techniques Used in Sewaged Soil Bioremediation on Indigenous Microbial Activity

Authors: E. M. Hoballah, M. Saber, A. Turky, N. Awad, A. M. Zaghloul

Abstract:

Remediation of cultivated sewage soils in Egypt has become an important issue in the last decade for producing healthy crops and protecting human health. In this respect, a greenhouse experiment was conducted in which contaminated sewage soil was treated with modified forms of 2% bentonite (T1), 2% kaolinite (T2), 1% bentonite + 1% kaolinite (T3), 2% probentonite (T4), 2% prokaolinite (T5), 1% bentonite + 0.5% kaolinite + 0.5% rock phosphate (RP) (T6), 2% iron oxide (T7) and 1% iron oxide + 1% RP (T8). These materials were applied as remediative materials, and untreated soil was used as a control. All soil samples were incubated for 2 months at 25°C, kept at field capacity throughout the whole experiment. Carbon dioxide (CO2) efflux from both treated and untreated soils, as a biomass indicator, was measured throughout the incubation time, and the kinetic parameters of the best-fitted models used to describe the phenomenon were taken to evaluate the success of remediation of the sewaged soils. The results indicated that, according to the kinetic parameters of the models used, CO2 efflux from remediated soils was significantly decreased compared to the control treatment, with rate values varying according to the type of remediative material applied. In addition, the analyzed microbial biomass parameters showed that Ni and Zn were the potentially toxic elements (PTEs) that most strongly suppressed microbial activity in the untreated soil, whereas Ni was the only influential pollutant in the treated soils. Although all applied materials significantly decreased the hazards of the PTEs in the treated soil, modified bentonite was the best treatment compared to the other materials used. This work discusses the different mechanisms taking place between the applied materials and the PTEs found in the studied sewage soil.
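
Cumulative CO2 efflux curves of this kind are often described by first-order kinetics, C(t) = C0(1 - e^(-kt)), whose fitted C0 and k can then be compared across treatments. A minimal sketch with synthetic data standing in for the incubation measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, c0, k):
    """Cumulative CO2 efflux under first-order kinetics: C(t) = C0 * (1 - e^{-kt})."""
    return c0 * (1.0 - np.exp(-k * t))

t = np.arange(0, 60, 5, dtype=float)                          # incubation time (days)
rng = np.random.default_rng(2)
co2 = first_order(t, 420.0, 0.08) + rng.normal(0, 8, t.size)  # synthetic efflux data

(c0_hat, k_hat), _ = curve_fit(first_order, t, co2, p0=(400.0, 0.05))
print(f"C0 = {c0_hat:.0f}, k = {k_hat:.3f} per day")          # compare across treatments
```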

Keywords: remediation, potential toxic elements, soil biomass, sewage

Procedia PDF Downloads 224
20711 Pharmacogenetics of P2Y12 Receptor Inhibitors

Authors: Ragy Raafat Gaber Attaalla

Abstract:

Oral P2Y12 inhibitors, including clopidogrel, prasugrel, and ticagrelor, are frequently prescribed for cardiovascular disease, and each of these medications has advantages and disadvantages. In the absence of genotyping, the stronger platelet aggregation inhibitors prasugrel and ticagrelor have been shown to be superior to clopidogrel at preventing major adverse cardiovascular events following an acute coronary syndrome and percutaneous coronary intervention (PCI); both, nevertheless, come with a higher risk of non-coronary-artery-bypass-related bleeding. As a prodrug, clopidogrel needs to be bioactivated, principally by the CYP2C19 enzyme. About 30% of people carry a CYP2C19 no-function allele and have diminished or absent CYP2C19 enzyme activity. The reduced exposure to the active metabolite of clopidogrel and the reduced inhibition of platelet aggregation among clopidogrel-treated carriers of a CYP2C19 no-function allele likely contributed to the reduced efficacy of clopidogrel in clinical trials. Clopidogrel's pharmacogenetic evidence is strongest in conjunction with PCI, but evidence for other indications is growing. CYP2C19 genotype-guided antiplatelet therapy following PCI is one of the most typical examples of clinical pharmacogenetic application, and guidance is available from expert consensus groups and regulatory bodies to assist with incorporating genetic information into P2Y12 inhibitor prescribing decisions. Here, we examine the data on how genotype-guided P2Y12 inhibitor selection affects clopidogrel response and outcomes, and we discuss tips for pharmacogenetic implementation. We also discuss procedures for using genotype data to choose P2Y12 inhibitor therapy, as well as unmet research needs. Ultimately, both clinical and genetic factors may influence the choice of a P2Y12 inhibitor that optimally balances atherothrombotic and bleeding risks.

Keywords: inhibitors, cardiovascular events, coronary intervention, pharmacogenetic implementation

Procedia PDF Downloads 106
20710 Using Simulation Modeling Approach to Predict USMLE Steps 1 and 2 Performances

Authors: Chau-Kuang Chen, John Hughes, Jr., A. Dexter Samuels

Abstract:

Prediction models for United States Medical Licensing Examination (USMLE) Steps 1 and 2 performance were constructed using a Monte Carlo simulation modeling approach via linear regression. The purpose of this study was to build robust simulation models to accurately identify the most important predictors and yield valid range estimates of Step 1 and Step 2 scores. The simulation modeling approach was deemed an effective way of predicting student performance on licensure examinations. Sensitivity analysis (a/k/a what-if analysis) in the simulation models was used to predict the magnitude of changes in Step 1 and Step 2 scores resulting from changes in the National Board of Medical Examiners (NBME) Basic Science Subject Board scores. In addition, the study results indicated that the Medical College Admission Test (MCAT) Verbal Reasoning score and the Step 1 score were significant predictors of Step 2 performance. Hence, institutions could screen qualified applicants for interviews and document the effectiveness of their basic science education programs based on the simulation results.
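
The Monte Carlo step amounts to propagating sampled predictor distributions through the fitted regression and reading off percentile ranges. A minimal sketch with hypothetical coefficients and predictor distributions (none are from the study):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# hypothetical fitted regression: Step 1 ~ MCAT Verbal Reasoning + NBME average
b0, b_mcat, b_nbme, sigma = 120.0, 2.5, 0.9, 8.0

mcat = rng.normal(9.5, 1.2, n)               # sampled predictor distributions
nbme = rng.normal(75.0, 6.0, n)
step1 = b0 + b_mcat * mcat + b_nbme * nbme + rng.normal(0.0, sigma, n)

print(np.percentile(step1, [5, 50, 95]))     # valid range estimate of Step 1 scores
# what-if analysis: shift the NBME distribution and re-read the percentiles
```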

Keywords: prediction model, sensitivity analysis, simulation method, USMLE

Procedia PDF Downloads 338
20709 Mathematical Modeling of the Fouling Phenomenon in Ultrafiltration of Latex Effluent

Authors: Amira Abdelrasoul, Huu Doan, Ali Lohi

Abstract:

An efficient and well-planned ultrafiltration process is becoming a necessity for monetary returns in industrial settings. The aim of the present study was to develop a mathematical model for accurate prediction of the ultrafiltration membrane fouling of latex effluent, applied to homogeneous and heterogeneous membranes with uniform and non-uniform pore sizes, respectively. Models were also developed for accurate prediction of power consumption, suitable for large-scale purposes. The models incorporated the fouling attachments as well as the chemical and physical factors in membrane fouling, for accurate prediction and scale-up application. Flat polycarbonate and polysulfone membranes, with a pore size of 0.05 µm and a molecular weight cut-off of 60,000, respectively, were used under a constant feed flow rate and a cross-flow mode in the ultrafiltration of the simulated paint effluent. Furthermore, hydrophilic Ultrafilic and hydrophobic PVDF membranes with an MWCO of 100,000 were used to test the reliability of the models. Monodisperse particles of 50 nm and 100 nm in diameter, and a latex effluent with a wide range of particle size distributions, were utilized to validate the models. The aggregation and the sphericity of the particles had a significant effect on membrane fouling.

Keywords: membrane fouling, mathematical modeling, power consumption, attachments, ultrafiltration

Procedia PDF Downloads 466
20708 An Artificial Intelligence Framework to Forecast Air Quality

Authors: Richard Ren

Abstract:

Air pollution is a serious danger to international well-being and economies - it will kill an estimated 7 million people every year, costing world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution's detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (i.e., season, weekday/weekend), future weather forecasts, as well as past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to eliminate inaccuracies, weaknesses, and biases from any one individual model. Over time, the proposed framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework's predictions and real-life observations, with an overall 92% model accuracy. The combined model is able to predict more accurately than any of the individual models, and it is able to reliably forecast season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy. This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
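
The model-averaging step maps naturally onto a soft-voting ensemble of the three algorithm families named in the abstract. A minimal scikit-learn sketch; the data loading and any feature names are placeholders, not the study's pipeline:

```python
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# soft voting averages the predicted class probabilities of the three models
ensemble = VotingClassifier(
    estimators=[
        ("logreg", LogisticRegression(max_iter=1000)),
        ("forest", RandomForestClassifier(n_estimators=300)),
        ("mlp", MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)),
    ],
    voting="soft",
)
# with X holding timing, weather-forecast and past-measurement features:
# ensemble.fit(X_train, y_train); print(ensemble.score(X_test, y_test))
```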

Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms

Procedia PDF Downloads 122