Search results for: PV modeling
2885 Impact of Applying Bag House Filter Technology in Cement Industry on Ambient Air Quality - Case Study: Alexandria Cement Company
Authors: Haggag H. Mohamed, Ghatass F. Zekry, Shalaby A. Elsayed
Abstract:
Most sources of air pollution in Egypt are of anthropogenic origin. Alexandria Governorate is located in the north of Egypt. The main sectors contributing to air pollution in Alexandria are industry, transportation, and area sources arising from human activities. Alexandria hosts more than 40% of the industrial activities in Egypt. Cement manufacture contributes a significant share of the particulate pollution load. The surroundings of the Alexandria Portland Cement Company (APCC) were selected as the study area. Continuous Total Suspended Particulate (TSP) monitoring data from the APCC main kiln stack were collected to assess the dust emission control technology. An Electrostatic Precipitator (ESP) had been fitted to the cement kiln since 2002. The TSP data collected for the first quarter of 2012 were compared with those of the first quarter of 2013, after installation of the new bag house filter. In the present study, based on these monitoring data and meteorological data, a detailed air dispersion modeling investigation was carried out using the Industrial Source Complex Short Term model (ISC3-ST) to determine the impact of applying the new bag house filter control technology on neighborhood ambient air quality. The model results show a drastic reduction of the ambient TSP hourly average concentration from 44.94 μg/m3 to 5.78 μg/m3, which confirms the strong positive impact on ambient air quality of applying bag house filter technology to the APCC cement kiln.
Keywords: air pollution modeling, ambient air quality, baghouse filter, cement industry
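As a rough illustration of the steady-state Gaussian-plume relation that models such as ISC3-ST are built on, the sketch below evaluates the ground-level concentration downwind of a single elevated stack. The emission rate, stack height, wind speed and dispersion-coefficient parameterisation are placeholders for illustration only, not values from the APCC study.

```python
import numpy as np

# Ground-level Gaussian plume concentration with full ground reflection
# (the relation underlying steady-state models such as ISC3-ST).
def ground_level_conc(Q, u, H, x, y, a=0.08, b=0.06):
    """Q: emission rate [g/s], u: wind speed [m/s], H: effective stack height [m],
    x, y: downwind / crosswind distance [m]. sigma_y, sigma_z use a crude
    power-law parameterisation (assumed; stability-class dependent in practice)."""
    sigma_y = a * x**0.9
    sigma_z = b * x**0.85
    c = (Q / (np.pi * u * sigma_y * sigma_z)
         * np.exp(-y**2 / (2 * sigma_y**2))
         * np.exp(-H**2 / (2 * sigma_z**2)))
    return c * 1e6  # g/m3 -> micro-g/m3

# Example: hypothetical 1 g/s TSP source, 60 m stack, 3 m/s wind, receptor 1 km downwind
print(round(ground_level_conc(Q=1.0, u=3.0, H=60.0, x=1000.0, y=0.0), 2), "ug/m3")
```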
Procedia PDF Downloads 267
2884 Pattern the Location and Area of Earth-Dumping Stations from Vehicle GPS Data in Taiwan
Authors: Chun-Yuan Chen, Ming-Chang Li, Xiu-Hui Wen, Yi-Ching Tu
Abstract:
The objective of this study is to explore how GPS (Global Positioning System) traces of construction vehicles such as trucks or cranes can help to pattern the earth-dumping stations of traffic construction in Taiwan. Traffic construction in this research is defined as the engineering of high-speed railways, expressways, and other works whose extent exceeds several kilometers. Auditing the location of earth-dumping stations and checking their compliance with regulations is one of the important tasks of the Taiwan EPA. The earth-dumping station is known as one source of particulate matter air pollution during the construction process. Because GPS data can be analyzed quickly and used conveniently, this study tried to find dumping stations by modeling vehicle tracks from GPS data during the work cycle of construction. The GPS data were uploaded from 13 vehicles working on an expressway construction project in central Taiwan. The GPS footprints were converted to Keyhole Markup Language (KML) files so that the tracks of the trucks could be patterned by computer applications; the data were collected over about eight months, from Feb. to Oct. 2017. The GPS footprints identified dumping stations, and the outlined earthwork areas were passed to the Taiwan EPA for on-site inspection. The Taiwan EPA issued advisory comments to the agency in charge of the construction to prevent air pollution. Compared with the common method of inspecting the environment through manual data collection, the GPS with KML patterning and modeling method consumes less time. On the other hand, monitoring the GPS data from construction vehicles could help the administration develop and implement strategies for environmental management.
Keywords: automatic management, earth-dumping station, environmental management, Global Positioning System (GPS), particulate matter, traffic construction
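A minimal sketch of the kind of post-processing the abstract describes is given below: flagging locations where trucks repeatedly dwell at low speed as candidate earth-dumping stations. The coordinates, speed threshold and cluster radius are made-up assumptions; the actual study exported the tracks to KML and inspected them with mapping software.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(p, q):
    """Great-circle distance in metres between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def dwell_points(track, speed_kmh_max=5.0):
    """track: list of (lat, lon, speed_kmh) GPS fixes; keep slow fixes only."""
    return [(lat, lon) for lat, lon, v in track if v <= speed_kmh_max]

def cluster(points, radius_m=150.0):
    """Greedy distance clustering: each cluster centre is a candidate dump site."""
    centres = []
    for p in points:
        for c in centres:
            if haversine_m(p, c["centre"]) <= radius_m:
                c["count"] += 1
                break
        else:
            centres.append({"centre": p, "count": 1})
    return [c for c in centres if c["count"] >= 3]   # assumed minimum dwell count

# Hypothetical track fixes: (lat, lon, speed in km/h)
track = [(24.15, 120.68, 2.0), (24.15, 120.68, 1.0), (24.1501, 120.6801, 0.5),
         (24.20, 120.70, 45.0), (24.25, 120.72, 3.0)]
print(cluster(dwell_points(track)))
```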
Procedia PDF Downloads 163
2883 Thermomechanical Effects and Nanoscale Ripples in Graphene
Authors: Roderick Melnik, Sanjay Prabhakar
Abstract:
The relaxed state of graphene nanostructures under externally applied tensile stress along both the armchair and zigzag directions is analyzed in detail. The results, obtained with the Finite Element Method (FEM), demonstrate that the amplitude of ripple waves in such nanostructures increases with temperature. Details of the multi-scale, multi-physics computational procedure developed for this analysis are also provided.
Keywords: nanostructures, modeling, coupled processes, computer-aided design, nanotechnological applications
Procedia PDF Downloads 313
2882 Implementation of Free-Field Boundary Condition for 2D Site Response Analysis in OpenSees
Authors: M. Eskandarighadi, C. R. McGann
Abstract:
It is observed from past earthquake experience that local site conditions can significantly affect the strong ground motion characteristics experienced at a site. One-dimensional seismic site response analysis is the most common approach for investigating site response. This approach assumes that the soil is homogeneous and infinitely extended in the horizontal direction; therefore, tying the side boundaries together is one way to model this behavior, as wave passage is assumed to be purely vertical. However, 1D analysis cannot capture the 2D nature of wave propagation, soil heterogeneity, or 2D soil profiles with features such as inclined layer boundaries. In contrast, 2D seismic site response modeling can consider all of these factors to better understand local site effects on strong ground motions. 2D wave propagation, and the fact that the soil profiles on the two sides of the model may not be identical, make clear the importance of a boundary condition on each side that can minimize unwanted reflections from the edges of the model and apply appropriate loading conditions. Ideally, the model size should be sufficiently large to minimize wave reflection; however, due to computational limitations, increasing the model size is impractical in some cases. Another approach is to employ free-field boundary conditions that account for the free-field motion that would exist far from the model domain and apply this to the sides of the model. This research focuses on implementing free-field boundary conditions in OpenSees for 2D site response analysis. Comparisons are made between 1D models and 2D models with various boundary conditions, and details and limitations of the developed free-field boundary modeling approach are discussed.
Keywords: boundary condition, free-field, opensees, site response analysis, wave propagation
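The "tied" boundary the abstract contrasts with the new free-field boundary is conventionally set up in OpenSees by constraining opposite lateral edge nodes at each depth to share degrees of freedom. A bare-bones OpenSeesPy sketch of that tying step is shown below with arbitrary placeholder geometry; the paper's free-field boundary columns replace this constraint, and their implementation is not reproduced here.

```python
# Minimal illustration of the conventional "tied" lateral boundary for 2D site
# response in OpenSeesPy: left- and right-edge nodes at the same depth share DOFs.
# Geometry and node numbering are arbitrary placeholders.
import openseespy.opensees as ops

ops.wipe()
ops.model('basic', '-ndm', 2, '-ndf', 2)

width, depth, n_layers = 20.0, 10.0, 5
for i in range(n_layers + 1):
    y = -depth + i * depth / n_layers
    ops.node(100 + i, 0.0, y)        # left edge column
    ops.node(200 + i, width, y)      # right edge column

# Fix the base (rigid bedrock assumption for this sketch)
ops.fix(100, 1, 1)
ops.fix(200, 1, 1)

# Tie horizontal and vertical DOFs of opposite edge nodes at each elevation,
# which enforces the 1D-like (vertically propagating shear wave) response.
for i in range(1, n_layers + 1):
    ops.equalDOF(100 + i, 200 + i, 1, 2)
```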
Procedia PDF Downloads 157
2881 A Basic Modeling Approach for the 3D Protein Structure of Insulin
Authors: Daniel Zarzo Montes, Manuel Zarzo Castelló
Abstract:
Proteins play a fundamental role in biology, but their structure is complex, and it is a challenge for teachers to conceptually explain the differences between their primary, secondary, tertiary, and quaternary structures. On the other hand, there are currently many computer programs to visualize the 3D structure of proteins, but they require advanced training and knowledge. Moreover, it becomes difficult to visualize the sequence of amino acids in these models, and how the protein conformation is reached. Given this drawback, a simple and instructive procedure is proposed in order to teach the protein structure to undergraduate and graduate students. For this purpose, insulin has been chosen because it is a protein that consists of 51 amino acids, a relatively small number. The methodology has consisted of the use of plastic atom models, which are frequently used in organic chemistry and biochemistry to explain the chirality of biomolecules. For didactic purposes, when the aim is to teach the biochemical foundations of proteins, a manipulative system seems convenient, starting from the chemical structure of amino acids. It has the advantage that the bonds between amino acids can be conveniently rotated, following the pattern marked by the 3D models. First, the 51 amino acids were modeled, and then they were linked according to the sequence of this protein. Next, the three disulfide bonds that characterize the stability of insulin have been established, and then the alpha-helix structure has been formed. In order to reach the tertiary 3D conformation of this protein, different interactive models available on the Internet have been visualized. In conclusion, the proposed methodology seems very suitable for biology and biochemistry students because they can learn the fundamentals of protein modeling by means of a manipulative procedure as a basis for understanding the functionality of proteins. This methodology would be conveniently useful for a biology or biochemistry laboratory practice, either at the pre-graduate or university level.
Keywords: protein structure, 3D model, insulin, biomolecule
Procedia PDF Downloads 52
2880 Finite Element Modeling of Mass Transfer Phenomenon and Optimization of Process Parameters for Drying of Paddy in a Hybrid Solar Dryer
Authors: Aprajeeta Jha, Punyadarshini P. Tripathy
Abstract:
Drying technologies for various food processing operations share an inevitable linkage with energy, cost and environmental sustainability. Hence, solar drying of food grains has become an imperative choice to meet the dual challenges of the high energy demand for drying and the climate change scenario. But the performance and reliability of solar dryers depend hugely on the sunshine period and climatic conditions; they therefore offer limited control over drying conditions and have lower efficiencies. Solar drying technology supported by a photovoltaic (PV) power plant and a hybrid-type solar air collector can potentially overcome the disadvantages of solar dryers. For the development of such robust hybrid dryers, and to ensure the quality and shelf-life of paddy grains, the optimization of process parameters becomes extremely critical. Investigation of the moisture distribution profile within the grains becomes necessary in order to avoid over-drying or under-drying of food grains in a hybrid solar dryer. Computational simulations based on finite element modeling can serve as a potential tool for providing better insight into moisture migration during the drying process. Hence, the present work aims at optimizing the process parameters and developing a 3-dimensional (3D) finite element model (FEM) for predicting the moisture profile in paddy during solar drying. COMSOL Multiphysics was employed to develop the 3D finite element model. Furthermore, optimization of the process parameters (power level, air velocity and moisture content) was done using response surface methodology in Design-Expert software. A 3D finite element model (FEM) predicting moisture migration in a single kernel at every time step was developed and validated with experimental data. The mean absolute error (MAE), mean relative error (MRE) and standard error (SE) were found to be 0.003, 0.0531 and 0.0007, respectively, indicating close agreement of the model with experimental results. Furthermore, the optimized process parameters for drying paddy were found to be 700 W and 2.75 m/s at 13% (wb), with an optimum temperature, milling yield and drying time of 42˚C, 62% and 86 min, respectively, and a desirability of 0.905. The above optimized conditions can be successfully used to dry paddy in a PV-integrated solar dryer in order to attain maximum uniformity, quality and yield of product. PV-integrated hybrid solar dryers can be employed as a potential, cutting-edge drying technology alternative for sustainable energy and food security.
Keywords: finite element modeling, moisture migration, paddy grain, process optimization, PV integrated hybrid solar dryer
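A back-of-the-envelope sketch of the Fickian moisture-diffusion calculation that a finite element model of a single kernel discretises is given below, reduced to a 1D spherical finite-difference scheme. The effective diffusivity, kernel radius and moisture contents are assumed round numbers, not the study's COMSOL model or fitted parameters.

```python
import numpy as np

# Fick's second law in a sphere: dM/dt = D * (d2M/dr2 + (2/r) dM/dr)
# Explicit finite differences; all parameters are illustrative assumptions.
D = 1.0e-10          # effective moisture diffusivity [m^2/s] (assumed)
R = 1.5e-3           # equivalent kernel radius [m] (assumed)
M0, Me = 0.25, 0.11  # initial and surface-equilibrium moisture, dry basis (assumed)

nr, nt = 40, 60000
dr, dt = R / nr, 0.05
r = np.linspace(0.0, R, nr + 1)
M = np.full(nr + 1, M0)

for _ in range(nt):
    Mn = M.copy()
    Mn[1:-1] = M[1:-1] + D * dt * (
        (M[2:] - 2 * M[1:-1] + M[:-2]) / dr**2
        + (2.0 / r[1:-1]) * (M[2:] - M[:-2]) / (2 * dr))
    Mn[0] = Mn[1]      # symmetry at the centre
    Mn[-1] = Me        # surface held at equilibrium moisture
    M = Mn

# Volume-averaged moisture after nt*dt seconds of drying (simple Riemann sum)
M_avg = 3.0 * np.sum(M * r**2) * dr / R**3
print(f"mean moisture after {nt*dt/60:.0f} min: {M_avg:.3f} (d.b.)")
```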
Procedia PDF Downloads 149
2879 Numerical Modeling of a Molten Salt Power Tower Configuration Adaptable for Harsh Winter Climate
Authors: Huiqiang Yang, Domingo Santana
Abstract:
This paper proposes a novel configuration which introduces a natural draft dry cooling tower system into a molten salt power tower. A three-dimensional numerical model was developed based on the novel configuration. A plan to build 20 new concentrating solar power plants was announced by the Chinese government in September 2016, and most of these 20 new plants are located in regions with long, harsh winters. The proposed innovative configuration includes an external receiver concrete tower at the center; a natural draft dry cooling tower which surrounds the external receiver concrete tower and whose shell is fixed to it; and a power block (including a steam generation system, a steam turbine system, hot/cold molten salt tanks, and water treatment systems) covered by the roof of the natural draft dry cooling tower. Heat exchanger bundles are vertically installed at the outer edge of the power block. In this way, all power block equipment operates under suitable environmental conditions throughout year-round operation. The monthly performance of the novel configuration is simulated and compared to a standard one. The results show that the novel configuration is much more efficient in every month of a typical meteorological year. Moreover, all systems inside the power block have lower thermal losses at low ambient temperatures, especially in a harsh winter climate. It is also worth mentioning that a photovoltaic power plant can be installed on the roof of the cooling tower to reduce the parasitic loads of the molten salt power tower.
Keywords: molten salt power tower, natural draft dry cooling, commercial scale, power block, harsh winter climate
Procedia PDF Downloads 340
2878 Land Suitability Scaling and Modeling for Assessing Crop Suitability in Some New Reclaimed Areas, Egypt
Authors: W. A. M. Abdel Kawy, Kh. M. Darwish
Abstract:
Adequate land use selection is an essential step towards achieving sustainable development. The main objective of this study is to develop a new scale for a land suitability system that is compatible with local conditions. Furthermore, it aims to adapt conventional land suitability systems to match the actual environmental status, in terms of soil types, climate and other conditions, in order to evaluate land suitability for newly reclaimed areas. The new system suggests calculating land suitability from 20 factors affecting crop selection, grouped into five categories: crop-agronomic, land management, development, environmental conditions and socio-economic status. The factor scores are summed to calculate the total points. The highest rating for each factor indicates the highest preference for the evaluated crop, and the highest-rated crops are those with the highest total points for actual suitability. This study was conducted to assess the application efficiency of the new land suitability scale in recently reclaimed sites in Egypt. Moreover, 35 representative soil profiles were examined, and soil samples were subjected to physical and chemical analyses. Actual and potential suitabilities were calculated using the new land suitability scale. Finally, the obtained results confirmed the applicability of the new land suitability system for recommending the most promising crop rotation for the study areas. The outputs of this research revealed that integrating different aspects in modeling and adapting the proposed model provides an effective and flexible technique, which contributes to making land suitability assessment for several crops more accurate and reliable.
Keywords: analytic hierarchy process, land suitability, multi-criteria analysis, new reclaimed areas, soil parameters
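A minimal sketch of the additive scoring logic described above is given below: rate each grouped factor per crop, sum the points, and rank crops by total. The factor names, ratings and crops are entirely made-up placeholders, not the 20 factors or point values of the proposed system.

```python
# Illustrative additive land-suitability scoring with hypothetical factors and ratings.
factor_groups = {
    "crop_agronomic":  ["soil_texture", "salinity", "drainage"],
    "land_management": ["slope", "accessibility"],
    "development":     ["irrigation_source"],
    "environmental":   ["temperature_match", "rainfall_match"],
    "socio_economic":  ["market_distance", "labour_availability"],
}
all_factors = sum(factor_groups.values(), [])

# rating of each factor (0-5) for each candidate crop at one mapping unit
ratings = {
    "wheat":   dict(zip(all_factors, [4, 3, 4, 5, 4, 3, 4, 3, 3, 4])),
    "maize":   dict(zip(all_factors, [3, 2, 3, 5, 4, 3, 5, 4, 3, 4])),
    "alfalfa": dict(zip(all_factors, [4, 4, 3, 4, 4, 4, 3, 3, 4, 4])),
}

totals = {crop: sum(scores.values()) for crop, scores in ratings.items()}
for crop, pts in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{crop:8s} {pts} points")   # highest total = most suitable crop
```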
Procedia PDF Downloads 136
2877 Analysis of Factors Affecting the Number of Infant and Maternal Mortality in East Java with Geographically Weighted Bivariate Generalized Poisson Regression Method
Authors: Luh Eka Suryani, Purhadi
Abstract:
Poisson regression is a non-linear regression model with a response variable in the form of count data that follows a Poisson distribution. Modeling a pair of count data that show high correlation can be done with bivariate Poisson regression. The numbers of infant deaths and maternal deaths are count data that can be analyzed by bivariate Poisson regression. The Poisson regression assumption is equidispersion, where the mean and variance values are equal. However, actual count data may have a variance greater or smaller than the mean (overdispersion and underdispersion). Violations of this assumption can be overcome by applying Generalized Poisson Regression. The characteristics of each regency can affect the number of cases that occur; this issue can be addressed by a spatial analysis called geographically weighted regression. This study analyzes the numbers of infant deaths and maternal deaths based on conditions in East Java in 2016 using the Geographically Weighted Bivariate Generalized Poisson Regression (GWBGPR) method. Modeling is done with adaptive bisquare kernel weighting, which produces 3 regency groups based on infant mortality rate and 5 regency groups based on maternal mortality rate. Variables that significantly influence the numbers of infant and maternal deaths are the percentages of pregnant women who visit health workers at least 4 times during pregnancy, pregnant women who receive Fe3 tablets, obstetric complications handled, clean and healthy household behavior, and married women whose first marriage was under 18 years of age.
Keywords: adaptive bisquare kernel, GWBGPR, infant mortality, maternal mortality, overdispersion
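A small sketch of the adaptive bisquare kernel weighting used in geographically weighted models is given below: the bandwidth at each location is the distance to its k-th nearest neighbour, so denser areas get tighter kernels. The coordinates and value of k are arbitrary examples, not the East Java regency data.

```python
import numpy as np

def adaptive_bisquare_weights(coords, k):
    """coords: (n, 2) array of location coordinates; k: number of nearest
    neighbours defining each location's adaptive bandwidth. Returns the
    (n, n) spatial weight matrix W with
        w_ij = (1 - (d_ij / b_i)^2)^2  if d_ij < b_i, else 0."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    # adaptive bandwidth b_i = distance to the k-th nearest neighbour of i
    b = np.sort(d, axis=1)[:, k]
    w = np.where(d < b[:, None], (1.0 - (d / b[:, None]) ** 2) ** 2, 0.0)
    return w

coords = np.array([[0., 0.], [1., 0.], [0., 2.], [3., 3.], [4., 0.]])
print(np.round(adaptive_bisquare_weights(coords, k=2), 3))
```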
Procedia PDF Downloads 158
2876 Assessment of a Coupled Geothermal-Solar Thermal Based Hydrogen Production System
Authors: Maryam Hamlehdar, Guillermo A. Narsilio
Abstract:
To enhance the feasibility of utilising geothermal hot sedimentary aquifers (HSAs) for clean hydrogen production, one approach is the implementation of solar-integrated geothermal energy systems. This detailed modelling study conducts a thermo-economic assessment of an advanced Organic Rankine Cycle (ORC)-based hydrogen production system that uses low-temperature geothermal reservoirs, with a specific focus on hot sedimentary aquifers (HSAs), over a 30-year period. In the proposed hybrid system, solar-thermal energy is used to raise the temperature of the water extracted from the geothermal production well. This temperature increase leads to a higher steam output, powering the turbine and subsequently enhancing the electricity output for running the electrolyser. A thermodynamic model of a parabolic trough solar (PTS) collector is developed and integrated with the model of a geothermal-based configuration. This configuration includes a closed regenerator cycle (CRC), a proton exchange membrane (PEM) electrolyser, and a thermoelectric generator (TEG). Following this, the study investigates the impact of solar energy use on the temperature enhancement of the geothermal reservoir and assesses the resulting consequences on the lifecycle performance of the hydrogen production system in comparison with a standalone geothermal system. The results indicate that, with an appropriate solar collector area, a combined solar-geothermal hydrogen production system outperforms a standalone geothermal system in both cost and rate of production. These findings underscore that a solar-assisted geothermal hybrid system holds the potential to generate lower-cost hydrogen with enhanced efficiency, thereby boosting the appeal of numerous low- to medium-temperature geothermal sources for hydrogen production.
Keywords: clean hydrogen production, integrated solar-geothermal, low-temperature geothermal energy, numerical modelling
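A rough sketch of the last link in the chain described above, converting electrical output into a PEM electrolyser hydrogen production rate via Faraday's law, is given below. The cell voltage, Faradaic efficiency and power level are assumed illustrative values, not the paper's electrolyser model.

```python
# Hydrogen output of a PEM electrolyser from electrical input power.
F = 96485.0          # Faraday constant [C/mol]
M_H2 = 2.016e-3      # molar mass of H2 [kg/mol]

def h2_rate_kg_per_day(power_w, cell_voltage=1.9, faradaic_eff=0.98):
    current = power_w / cell_voltage              # total stack current [A]
    mol_per_s = faradaic_eff * current / (2 * F)  # 2 electrons per H2 molecule
    return mol_per_s * M_H2 * 86400.0

# e.g. a hypothetical 500 kW of net ORC electrical output routed to the electrolyser
print(f"{h2_rate_kg_per_day(500e3):.1f} kg H2/day")
```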
Procedia PDF Downloads 65
2875 Produced Water Treatment Using Novel Solid Scale Inhibitors Based on Silver Tungstate Loaded Kit-6: Static and Modeling Evaluation
Authors: R. Hosny, Mahmoud F. Mubarak, Heba M. Salem, Asmaa A. Abdelrahman
Abstract:
Oilfield scaling is a major problem in the oil and gas industry. Scale issues cost the industry millions of dollars in damage and lost production every year, and scale is one of the main causes of global production decline. In this study, solid scale inhibitors based on silver tungstate loaded KIT-6 were synthesized and evaluated by both static testing and scale-inhibition modeling. The silver tungstate loaded KIT-6 catalysts were synthesized via a simple impregnation method using 3D mesoporous KIT-6 as the support. The synthesized materials were characterized using wide- and low-angle XRD, N2 adsorption–desorption analysis, TGA, FTIR, SEM, and XPS analysis. The scale inhibition efficiency of the synthesized materials was evaluated using a static scale inhibition test. The results of this study demonstrate the potential application of silver tungstate-loaded KIT-6 solid scale inhibitors in the oil and gas industry and will contribute to the development of new and innovative solid scale inhibitors based on silver tungstate-loaded KIT-6. The inhibition efficiency of the scale inhibitor increases, and calcite scale formation decreases, with increasing pH (2 to 8), suggesting that the scale inhibitor is more effective under alkaline conditions. An inhibition efficiency of 99% on calcium carbonate can be achieved at the optimal dosage of 7.5 ppm at 55°C, indicating that the scale inhibitor exhibits relatively good inhibition performance on calcium carbonate. The use of these materials can potentially lead to more efficient and cost-effective solutions for scale inhibition in various industrial processes.
Keywords: produced water treatment, solid scale inhibitors, calcite, silver tungstate, 3D mesoporous KIT-6, oilfield scales, adsorption
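For context, a commonly used form of the static-test inhibition efficiency, based on residual calcium in the inhibited, blank and initial brines, is sketched below with hypothetical concentrations. The abstract does not state which exact formula was applied, so treat this as a generic illustration rather than the paper's procedure.

```python
def inhibition_efficiency(ca_inhibited, ca_blank, ca_initial):
    """Static jar-test scale inhibition efficiency (%), a commonly used form:
    IE = (Ca_inhibited - Ca_blank) / (Ca_initial - Ca_blank) * 100,
    where Ca_* are dissolved calcium concentrations after the test."""
    return (ca_inhibited - ca_blank) / (ca_initial - ca_blank) * 100.0

# Hypothetical brine: 1000 mg/L Ca initially, 400 mg/L left in the uninhibited
# blank, 994 mg/L kept in solution at a 7.5 ppm inhibitor dosage
print(f"IE = {inhibition_efficiency(994, 400, 1000):.1f} %")
```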
Procedia PDF Downloads 142
2874 Analysis of Noise Environment and Acoustics Material in Residential Building
Authors: Heruanda Alviana Giska Barabah, Hilda Rasnia Hapsari
Abstract:
Acoustic phenomena create an acoustic condition that describes the characteristics of the environment. In urban areas, the tendency of heterogeneous and simultaneous human activity forms a soundscape that is different from other regions; one of the characteristics of urban areas that shapes this soundscape is the presence of vertical housing, or residential buildings. Activities both within the building and in the surrounding environment produce a soundscape with certain characteristics. The acoustic comfort of residential buildings becomes an important aspect, and this demand leads building features to become more diverse. The initial step of mapping acoustic conditions in a soundscape is important, as it is the means of identifying uncomfortable conditions. Noise generated by road traffic, railways, and aircraft is an important consideration, especially for urban residents; therefore, proper design of the building becomes very important as an effort to provide appropriate acoustic comfort. In this paper the authors developed a noise map of the residential building's location. Mapping was done by taking measurement points referenced to the noise sources. The mapping results became the basis for modeling how acoustic waves interact with the building model. Material selection was done based on a literature study and modeling simulation using Insul, considering the absorption coefficient and Sound Transmission Class. The analysis of acoustic rays uses the ray tracing method in the Comsol simulation software, which can show the movement of acoustic rays and their interaction with a boundary. The results of this study can be used to select boundary materials in residential buildings as well as to improve the acoustic quality in the acoustic zones that are formed.
Keywords: residential building, noise, absorption coefficient, sound transmission class, ray tracing
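Two textbook relations that feed this kind of material selection are sketched below: the field-incidence mass law estimate of a single panel's transmission loss and the area-weighted combination of transmission coefficients for a composite facade. The surface densities and areas are assumed examples, not the materials actually modelled in Insul.

```python
import math

def mass_law_tl(surface_density, freq):
    """Field-incidence mass law estimate of transmission loss [dB]:
    TL ~ 20*log10(m_s * f) - 47, with m_s in kg/m^2 and f in Hz."""
    return 20.0 * math.log10(surface_density * freq) - 47.0

def composite_tl(elements):
    """elements: list of (area_m2, TL_dB). Combine via transmission coefficients:
    tau_i = 10^(-TL_i/10), tau_c = sum(S_i*tau_i)/sum(S_i)."""
    s_total = sum(s for s, _ in elements)
    tau_c = sum(s * 10 ** (-tl / 10.0) for s, tl in elements) / s_total
    return -10.0 * math.log10(tau_c)

wall = mass_law_tl(surface_density=200.0, freq=500.0)   # assumed 200 kg/m^2 masonry wall
glass = mass_law_tl(surface_density=15.0, freq=500.0)   # assumed ~6 mm glazing
print(f"wall TL ~ {wall:.1f} dB, window TL ~ {glass:.1f} dB")
print(f"facade (18 m2 wall + 2 m2 window) TL ~ {composite_tl([(18, wall), (2, glass)]):.1f} dB")
```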
Procedia PDF Downloads 245
2873 ADP Approach to Evaluate the Blood Supply Network of Ontario
Authors: Usama Abdulwahab, Mohammed Wahab
Abstract:
This paper presents the application of the uncapacitated facility location problem (UFLP) and the 1-median problem to support decision making in blood supply chain networks. A plethora of factors makes blood supply-chain networks a complex yet vital problem for the regional blood bank: rapidly increasing demand, the criticality of the product, strict storage and handling requirements, and the vastness of the theater of operations. As in the UFLP, facilities can be opened at any of m predefined locations with given fixed costs, and clients have to be allocated to the open facilities. In classical location models, the allocation cost is the distance between a client and an open facility; in this model, the costs comprise the allocation cost, transportation costs, and inventory costs. In order to address this problem, the median algorithm is used to analyze inventory, evaluate supply chain status, monitor performance metrics at different levels of granularity, and detect potential problems and opportunities for improvement. Euclidean distance data for some Ontario cities (demand nodes) are used to test the developed algorithm. The SITATION software, a Lagrangian relaxation algorithm, and branch-and-bound heuristics are used to solve this model. Computational experiments confirm the efficiency of the proposed approach. Compared to existing modeling and solution methods, the median algorithm approach not only provides a more general modeling framework but also leads to efficient solution times in general.
Keywords: approximate dynamic programming, facility location, perishable product, inventory model, blood platelet, P-median problem
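A tiny brute-force sketch of the p-median selection underlying the model is given below: choose the facility set minimising total demand-weighted distance to the nearest open facility. The demand nodes, candidate sites and distances are made up; the study itself used SITATION, Lagrangian relaxation and branch-and-bound for realistic instance sizes.

```python
from itertools import combinations

def p_median(demand, candidates, dist, p):
    """demand: {node: weight}; candidates: facility sites; dist: {(node, site): d}.
    Exhaustively pick the p sites minimising demand-weighted distance to the
    nearest open site (only viable for small instances)."""
    best = None
    for sites in combinations(candidates, p):
        cost = sum(w * min(dist[n, s] for s in sites) for n, w in demand.items())
        if best is None or cost < best[1]:
            best = (sites, cost)
    return best

# Hypothetical distances between 4 demand nodes and 3 candidate sites
demand = {"A": 10, "B": 25, "C": 15, "D": 30}
candidates = ["S1", "S2", "S3"]
dist = {("A", "S1"): 2, ("A", "S2"): 7, ("A", "S3"): 9,
        ("B", "S1"): 6, ("B", "S2"): 1, ("B", "S3"): 8,
        ("C", "S1"): 3, ("C", "S2"): 5, ("C", "S3"): 2,
        ("D", "S1"): 9, ("D", "S2"): 4, ("D", "S3"): 1}
print(p_median(demand, candidates, dist, p=2))
```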
Procedia PDF Downloads 506
2872 Analysis, Evaluation and Optimization of Food Management: Minimization of Food Losses and Food Wastage along the Food Value Chain
Authors: G. Hafner
Abstract:
A method developed at the University of Stuttgart will be presented: 'Analysis, Evaluation and Optimization of Food Management'. A major focus is the quantification of food losses and food waste as well as their classification and evaluation with regard to system optimization through waste prevention. For the quantification and accounting of food, food losses and food waste along the food chain, a clear definition of core terms is required at the outset. This includes their methodological classification and demarcation within the sectors of the food value chain. The food chain is divided into agriculture, industry and crafts, trade, and consumption (at home and out of home). For the adjustment of core terms, the authors have cooperated with relevant stakeholders in Germany with the goal of achieving holistic and agreed definitions for the whole food chain. This includes modeling of sub-systems within the food value chain, definition of terms, differentiation between food losses and food wastage, as well as methodological approaches. 'Food losses' and 'food wastes' are assigned to individual sectors of the food chain, including a description of the respective methods. The method for analysis, evaluation and optimization of food management systems consists of the following parts: Part I: Terms and Definitions. Part II: System Modeling. Part III: Procedure for Data Collection and Accounting. Part IV: Methodological Approaches for Classification and Evaluation of Results. Part V: Evaluation Parameters and Benchmarks. Part VI: Measures for Optimization. Part VII: Monitoring of Success. The method will be demonstrated with the example of an investigation of food losses and food wastage in the Federal State of Bavaria, including an extrapolation of the respective results to quantify food wastage in Germany.
Keywords: food losses, food waste, resource management, waste management, system analysis, waste minimization, resource efficiency
Procedia PDF Downloads 403
2871 Bi-Directional Impulse Turbine for Thermo-Acoustic Generator
Authors: A. I. Dovgjallo, A. B. Tsapkova, A. A. Shimanov
Abstract:
The paper is devoted to one type of engine with external heating: the thermoacoustic engine. In a thermoacoustic engine, heat energy is converted to acoustic energy. The acoustic energy of the oscillating gas flow must then be converted to mechanical energy, and this energy in turn must be converted to electric energy. The most widely used way of transforming acoustic energy to electric energy is the application of a linear generator or a conventional generator with a crank mechanism. In both cases, a piston is used. The main disadvantages of using a piston are friction losses, lubrication problems and working fluid pollution, which decrease engine power and ecological efficiency. The use of a bidirectional impulse turbine as the energy converter is suggested instead. The distinctive feature of this kind of turbine is that the shock wave of the oscillating gas flow passing through the turbine is reflected and passes through the turbine again in the opposite direction, while the direction of turbine rotation does not change in the process. Different types of bidirectional impulse turbines for thermoacoustic engines are analyzed. The Wells turbine is the simplest and least efficient of them. A radial impulse turbine has a more complicated design and is more efficient than the Wells turbine. The most appropriate type of impulse turbine was chosen: an axial impulse turbine, which has a simpler design than a radial turbine and similar efficiency. The peculiarities of the method for calculating an impulse turbine are discussed, including changes in gas pressure and velocity as functions of time during the generation of shock waves of the oscillating gas flow in a thermoacoustic system. In a thermoacoustic system, the pressure changes continuously according to a certain law due to acoustic wave generation; the peak values of pressure are the amplitude, which determines the acoustic power. The gas flowing in a thermoacoustic system periodically changes its direction and its mean velocity is equal to zero, but its peak values can be used for bidirectional turbine rotation. In contrast with a conventional steady-flow turbine, the described turbine operates on unsteady oscillating flows with direction changes, which significantly influences the algorithm of its calculation. The calculated power output is 150 W at a rotational speed of 12000 r/min and a pressure amplitude of 1.7 kPa. Then, 3D modeling and numerical investigation of the impulse turbine were carried out. As a result of the numerical modeling, the main parameters of the working fluid in the turbine were obtained. On the basis of the theoretical and numerical data, a model of the impulse turbine was made on a 3D printer, and an experimental unit was designed to verify the numerical modeling results. An acoustic speaker was used as the acoustic wave generator. Analysis of the acquired data shows that the use of the bidirectional impulse turbine is advisable: by its characteristics as a converter, it is comparable with linear electric generators, but its life cycle will be longer and the engine itself will be smaller due to the rotary motion of the turbine.
Keywords: acoustic power, bi-directional pulse turbine, linear alternator, thermoacoustic generator
Procedia PDF Downloads 377
2870 Adding a Degree of Freedom to Opinion Dynamics Models
Authors: Dino Carpentras, Alejandro Dinkelberg, Michael Quayle
Abstract:
Within agent-based modeling, opinion dynamics is the field that focuses on modeling people's opinions. In this prolific field, most of the literature is dedicated to exploring the two 'degrees of freedom' and how they impact a model's properties (e.g., the average final opinion, the number of final clusters, etc.). These degrees of freedom are (1) the interaction rule, which determines how agents update their own opinion, and (2) the network topology, which defines the possible interactions among agents. In this work, we show that a third degree of freedom exists. This can be used to change a model's output by up to 100% of its initial value or to transform two models (both from the literature) into each other. Since opinion dynamics models are representations of the real world, it is fundamental to understand how people's opinions can be measured. Even for abstract models (i.e., those not intended for fitting real-world data), it is important to understand whether the way of numerically representing opinions is unique and, if this is not the case, how the model dynamics would change under different representations. The process of measuring opinions is non-trivial, as it requires transforming a real-world opinion (e.g., supporting most liberal ideals) into a number. Such a process is usually not discussed in the opinion dynamics literature, but it has been intensively studied in a subfield of psychology called psychometrics. In psychometrics, opinion scales can be converted into each other, similarly to how meters can be converted to feet; indeed, psychometrics routinely uses both linear and non-linear transformations of opinion scales. Here, we analyze how such transformations affect opinion dynamics models. We analyze this effect by using mathematical modeling and then validating our analysis with agent-based simulations. Firstly, we study the case of perfect scales. In this way, we show that scale transformations affect the model's dynamics up to a qualitative level. This means that if two researchers use the same opinion dynamics model and even the same dataset, they could make totally different predictions just because they followed different renormalization processes. A similar situation appears if two different scales are used to measure opinions even on the same population. This effect may be as strong as producing an uncertainty of 100% on the simulation's output (i.e., all results are possible). Still, using perfect scales, we show that scale transformations can be used to perfectly transform one model into another; we test this using two models from the standard literature. Finally, we test the effect of scale transformation in the case of finite precision using a 7-point Likert scale. In this way, we show how a relatively small scale transformation introduces changes both at the qualitative level (i.e., the most shared opinion at the end of the simulation) and in the number of opinion clusters. Thus, scale transformation appears to be a third degree of freedom of opinion dynamics models. This result deeply impacts both theoretical research on models' properties and the application of models to real-world data.
Keywords: degrees of freedom, empirical validation, opinion scale, opinion dynamics
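A compact sketch of the kind of experiment reported above: run a standard bounded-confidence (Deffuant-type) update once on opinions measured on the original scale and once on a monotonically transformed scale, then compare the outcomes. The transformation x → x² is an arbitrary example of a psychometrically admissible rescaling, not one from the paper.

```python
import random

def deffuant(opinions, eps=0.2, mu=0.5, steps=20000, seed=1):
    """Pairwise bounded-confidence updates on opinions in [0, 1]."""
    random.seed(seed)
    x = list(opinions)
    n = len(x)
    for _ in range(steps):
        i, j = random.randrange(n), random.randrange(n)
        if i != j and abs(x[i] - x[j]) < eps:
            x[i], x[j] = x[i] + mu * (x[j] - x[i]), x[j] + mu * (x[i] - x[j])
    return x

def n_clusters(x, tol=0.05):
    xs = sorted(x)
    return 1 + sum(1 for a, b in zip(xs, xs[1:]) if b - a > tol)

random.seed(0)
raw = [random.random() for _ in range(200)]

# Same population measured on two scales: identity vs. a nonlinear rescaling
final_raw = deffuant(raw)
final_transformed = deffuant([v ** 2 for v in raw])

print("clusters on original scale   :", n_clusters(final_raw))
print("clusters on transformed scale:", n_clusters(final_transformed))
```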
Procedia PDF Downloads 118
2869 Modeling of Drug Distribution in the Human Vitreous
Authors: Judith Stein, Elfriede Friedmann
Abstract:
The injection of a drug into the vitreous body for the treatment of retinal diseases like wet age-related macular degeneration (AMD) is the most common medical intervention worldwide. We develop mathematical models for drug transport in the vitreous body of a human eye to analyse the impact of different rheological models of the vitreous on drug distribution. In addition to the convection-diffusion equation characterizing the drug spreading, we use porous media modeling for the healthy vitreous with a dense collagen network and include the steady permeating flow of the aqueous humor described by Darcy's law driven by a pressure drop. Additionally, the vitreous body in a healthy human eye behaves like a viscoelastic gel, owing to the collagen fibers suspended in the network of hyaluronic acid, and acts as a drug depot for the treatment of retinal diseases. In a completely liquefied vitreous, we couple the drug diffusion with the classical Navier-Stokes flow equations. We prove the global existence and uniqueness of the weak solution of the developed initial-boundary value problem describing the drug distribution in the healthy vitreous, considering the permeating aqueous humor flow in a realistic three-dimensional setting. In particular, for the drug diffusion equation, results from the literature are extended from homogeneous Dirichlet boundary conditions to our mixed boundary conditions that describe the eye, using Galerkin's method together with the Cauchy-Schwarz inequality and the trace theorem. Because there is only a small effective drug concentration range and higher concentrations may be toxic, the ability to model the drug transport could improve the therapy by accounting for individual patient differences and give a better understanding of the physiological and pathological processes in the vitreous.
Keywords: coupled PDE systems, drug diffusion, mixed boundary conditions, vitreous body
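A reduced 1D sketch of the transport problem described above, drug diffusion plus a weak permeating (Darcy-driven) convection across the vitreous towards the retina, is given below using an explicit finite-difference scheme. The diffusivity, velocity, geometry and boundary conditions are order-of-magnitude assumptions for illustration only, not the paper's 3D coupled model.

```python
import numpy as np

# 1D convection-diffusion dc/dt = D d2c/dx2 - v dc/dx across the vitreous,
# from the injection side (x=0) to the retina (x=L).
D = 5.0e-10      # drug diffusivity [m^2/s] (assumed)
v = 1.0e-8       # permeating aqueous humor velocity [m/s] (assumed)
L = 0.016        # anterior-posterior vitreous depth [m] (assumed)

nx, dt, nsteps = 80, 2.0, 200000        # ~4.6 days of simulated time
dx = L / nx
c = np.zeros(nx + 1)
c[:8] = 1.0                              # normalised bolus near the injection site

for _ in range(nsteps):
    diff = D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
    conv = -v * (c[2:] - c[:-2]) / (2 * dx)
    c[1:-1] += dt * (diff + conv)
    c[0] = c[1]        # no-flux at the anterior boundary (simplification)
    c[-1] = 0.0        # perfect sink at the retina (simplification)

print(f"normalised concentration near the retina after ~{nsteps*dt/86400:.1f} days: {c[-2]:.4f}")
```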
Procedia PDF Downloads 136
2868 Numerical Tools for Designing Multilayer Viscoelastic Damping Devices
Authors: Mohammed Saleh Rezk, Reza Kashani
Abstract:
Auxiliary damping has gained popularity in recent years, especially in structures such as mid- and high-rise buildings. Distributed damping systems (typically viscous and viscoelastic) and reactive damping systems (such as tuned mass dampers) are the two types of damping choices for such structures. Distributed VE dampers are normally configured as braces or damping panels, which are engaged through the relatively small movements between structural members when the structure sways under wind or earthquake loading. In addition to being used as stand-alone dampers in distributed damping applications, VE dampers can also be incorporated into the suspension element of tuned mass dampers (TMDs). In this study, analytical and numerical tools for the modeling and design of multilayer viscoelastic damping devices, to be used in damping the vibration of large structures, are developed. Considering the limitations of analytical models for the synthesis and analysis of realistic, large, multilayer VE dampers, the emphasis of the study has been on numerical modeling using the finite element method. To verify the finite element models, a two-layer VE damper using ½ inch synthetic viscoelastic urethane polymer was built and tested, and the measured parameters were compared with the numerically predicted ones. The numerically predicted and experimentally evaluated damping and stiffness of the test VE damper were in very good agreement. The effectiveness of VE dampers in adding auxiliary damping to larger structures is numerically demonstrated by incorporating one such damper, as a chevron brace, into the model of a massive frame subject to an abrupt lateral load. A comparison of the responses of the frame to this load, without and with the VE damper, clearly shows the efficacy of the damper in lowering the extent of frame vibration.
Keywords: viscoelastic, damper, distributed damping, tuned mass damper
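A minimal sketch of how a supplemental VE element is often represented at the system level is shown below: a complex (frequency-domain) stiffness k_d(1 + i·eta) added to a single-degree-of-freedom frame, comparing the peak dynamic amplification with and without the damper. The frame properties, brace stiffness and loss factor are placeholders, not the tested two-layer urethane damper's measured values.

```python
import numpy as np

# SDOF frame with an added viscoelastic brace modelled as complex stiffness
# k_d * (1 + 1j*eta). All numbers below are illustrative assumptions.
m, k, zeta = 2.0e5, 8.0e6, 0.02            # mass [kg], stiffness [N/m], inherent damping
c = 2 * zeta * np.sqrt(k * m)
k_d, eta = 1.5e6, 1.0                      # VE brace storage stiffness and loss factor

w = np.linspace(0.1, 15.0, 2000) * 2 * np.pi   # excitation frequency range [rad/s]

def peak_amplification(with_damper):
    kd = k_d * (1 + 1j * eta) if with_damper else 0.0
    H = 1.0 / (k + kd - m * w**2 + 1j * w * c)     # displacement per unit force
    return np.max(np.abs(H)) * k                   # normalised dynamic amplification

print(f"peak amplification, bare frame : {peak_amplification(False):.1f}")
print(f"peak amplification, with damper: {peak_amplification(True):.1f}")
```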
Procedia PDF Downloads 106
2867 Asset Pricing Puzzle and GDP-Growth: Pre and Post Covid-19 Pandemic Effect on Pakistan Stock Exchange
Authors: Mohammad Azam
Abstract:
This work is an endeavor to empirically investigate Gross Domestic Product growth as a mediating variable between various factors and portfolio returns, using a broad sample of 522 financial and non-financial firms listed on the Pakistan Stock Exchange between January 1993 and June 2022. The study employs structural equation modeling and ordinary least squares regression to determine the findings before and during the COVID-19 epidemiological situation, which has not received due attention from researchers. The analysis reveals that the market and investment factors are redundant, whereas size and value show significant results, and GDP growth exerts a significant mediating impact over the whole time frame. For the period before COVID-19, the results reveal that market, value, and investment are redundant, but size, profitability, and GDP growth are significant. During COVID-19, the statistics indicate that market and investment are redundant, though size and GDP growth are highly significant, while value and profitability are moderately significant. The ordinary least squares regression shows that market and investment are statistically insignificant, whereas size is highly significant and value and profitability are marginally significant. Using the GDP-growth-augmented model, a slight increase in R-squared is observed. The size, value and profitability factors are recommended to investors in the Pakistan Stock Exchange. Conclusively, in the Pakistani market, GDP growth indicates a feeble mediating effect between risk premia and portfolio returns.
Keywords: asset pricing puzzle, mediating role of GDP-growth, structural equation modeling, COVID-19 pandemic, Pakistan stock exchange
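A schematic sketch of the two regressions involved in testing GDP growth as a mediator between firm-characteristic factors and portfolio returns is given below, run on randomly generated data purely to show the mechanics. It does not reproduce the PSX dataset, the SEM estimation, or the paper's coefficients.

```python
import numpy as np

rng = np.random.default_rng(42)
T = 354  # months, roughly Jan-1993 to Jun-2022

# Synthetic factor series (market, size SMB, value HML, profitability RMW)
X = rng.normal(0, 1, size=(T, 4))
gdp_growth = 0.3 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(0, 1, T)       # mediator
ret = 0.5 * gdp_growth + 0.4 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 1, T)

def ols(y, X):
    Xc = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    return beta

# Path a: factors -> mediator;  Paths b and c': mediator + factors -> returns
a = ols(gdp_growth, X)
b = ols(ret, np.column_stack([gdp_growth, X]))

print("factors -> GDP growth :", np.round(a[1:], 2))
print("GDP growth -> returns :", round(b[1], 2), " direct factor effects:", np.round(b[2:], 2))
```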
Procedia PDF Downloads 71
2866 Integrating Computational Modeling and Analysis with in Vivo Observations for Enhanced Hemodynamics Diagnostics and Prognosis
Authors: Shreyas S. Hegde, Anindya Deb, Suresh Nagesh
Abstract:
Computational bio-mechanics is developing rapidly as a non-invasive tool to assist the medical fraternity in both the diagnosis and prognosis of human body related issues such as injuries, cardio-vascular dysfunction, atherosclerotic plaque, etc. Any system that would help either properly diagnose such problems or assist prognosis would be a boon to doctors and the medical community in general. Recently a lot of work has been focused in this direction, including but not limited to various finite element analyses related to dental implants, skull injuries, and orthopedic problems involving bones and joints. Such numerical solutions are helping medical practitioners to come up with alternate solutions for such problems and in most cases have also reduced the trauma on the patients. Some work has also been done in the area of using computational fluid mechanics to understand the flow of blood through the human body, the area of hemodynamics. Since cardio-vascular diseases are one of the main causes of loss of human life, understanding blood flow with and without constraints (such as blockages), and providing alternate methods of prognosis and further solutions to address issues related to blood flow, would help save the valuable lives of such patients. This project is an attempt to use computational fluid dynamics (CFD) to solve specific problems related to hemodynamics. The hemodynamics simulation is used to gain a better understanding of functional, diagnostic and theoretical aspects of blood flow. Because many fundamental issues of blood flow, like phenomena associated with the pressure and viscous force fields, are still not fully understood or entirely described through mathematical formulations, the characterization of blood flow is still a challenging task. Computational modeling of the blood flow, and of the mechanical interactions that strongly affect the blood flow patterns, based on medical data and imaging, represents the most accurate analysis of the complex behavior of blood flow. In this project, the mathematical modeling of blood flow in arteries in the presence of successive blockages has been analyzed using the CFD technique. Different cases of blockages, in terms of percentages, have been modeled using the commercial software CATIA V5R20 and simulated using the commercial software ANSYS 15.0 to study the effect of varying wall shear stress (WSS) values as well as other parameters, such as the effect of increasing Reynolds number. The concept of fluid-structure interaction (FSI) has been used to solve such problems. The model simulation results were validated using in vivo measurement data from the existing literature.
Keywords: computational fluid dynamics, hemodynamics, blood flow, results validation, arteries
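Two of the quantities such simulations track, wall shear stress and Reynolds number, reduce to simple closed forms for idealised fully developed (Poiseuille) flow in a circular vessel. The sketch below evaluates them for a healthy lumen and a hypothetical 60% area stenosis with assumed blood properties; real stenosed flow is of course not Poiseuille, so this is only an order-of-magnitude illustration.

```python
import math

mu = 3.5e-3      # blood dynamic viscosity [Pa.s] (assumed Newtonian here)
rho = 1060.0     # blood density [kg/m^3]
Q = 5.0e-6       # volumetric flow rate [m^3/s] (~300 mL/min, assumed)

def poiseuille_wss(q, r):
    """Wall shear stress for fully developed laminar flow: tau_w = 4*mu*Q/(pi*r^3)."""
    return 4.0 * mu * q / (math.pi * r**3)

def reynolds(q, r):
    v = q / (math.pi * r**2)                 # mean velocity
    return rho * v * (2.0 * r) / mu

r_healthy = 2.0e-3                           # 4 mm diameter artery (assumed)
r_stenosed = r_healthy * math.sqrt(0.4)      # 60% area reduction

for label, r in [("healthy", r_healthy), ("60% stenosis", r_stenosed)]:
    print(f"{label:12s}  WSS = {poiseuille_wss(Q, r):6.2f} Pa   Re = {reynolds(Q, r):6.0f}")
```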
Procedia PDF Downloads 403
2865 Integration of Building Information Modeling Framework for 4D Constructability Review and Clash Detection Management of a Sewage Treatment Plant
Authors: Malla Vijayeta, Y. Vijaya Kumar, N. Ramakrishna Raju, K. Satyanarayana
Abstract:
The global AEC (architecture, engineering, and construction) industry has been described as one of the domains most resistant to embracing technology. Although this digital era has been inundated with software tools like CAD, STAAD, CANDY, Microsoft Project, Primavera, etc., the key stakeholders have been working in silos and processes remain fragmented. Unlike the simpler project delivery methods of the past, current projects are fast-track, complex, risky, multidisciplinary, influenced by many stakeholders and statutorily regulated, and they pose extensive bottlenecks that prevent timely completion. At this juncture, a paradigm shift has surfaced in the construction industry, and Building Information Modeling, aka BIM, has been a panacea that bolsters cooperative and collaborative work among multidisciplinary teams, leading to productive, sustainable and leaner project outcomes. Building information modeling is an integrative, stakeholder-engaging and centralized approach that provides a common platform of communication. A common misconception in the Indian construction industry is that BIM can be used only for building/high-rise projects, whereas this paper discusses the implementation of BIM processes/methodologies in the water and wastewater industry. It elucidates BIM 4D planning and constructability reviews of a sewage treatment plant in India. Conventional construction planning and logistics management involves a blend of experience coupled with imagination. Even though the excerpts, judgments or lessons learnt gained from veterans might be predictive and helpful, the uncertainty factor persists. This paper presents a case study of the real-time implementation of BIM 4D planning protocols for one of the sewage treatment plants of the Dravyavati River Rejuvenation Project in India and develops a timeline to support logistics planning and clash detection. With these BIM processes, a significant reduction in the duplication of tasks and rework is found. Another benefit achieved is better visualization and workarounds during the conception stage, enabling early involvement of the stakeholders in the project life cycle of sewage treatment plant construction. Moreover, an opinion poll was taken on the benefits accrued from utilizing BIM processes versus traditional paper-based communication with 2D and 3D CAD tools. Thus, this paper concludes with a BIM framework for sewage treatment plant construction that achieves optimal construction coordination advantages like 4D construction sequencing, interference checking, and clash detection and resolution through the primary engagement of all key stakeholders, thereby identifying potential risks and subsequently creating risk response strategies. However, certain hiccups, such as hesitancy in the adoption of BIM technology by naive users and the limited availability of proficient BIM trainers in India, pose a phenomenal impediment. Hence, nurturing BIM processes from conception and construction through to commissioning, operation and maintenance, along with deconstruction, across a project's life cycle is highly essential for the Indian construction industry in this digital era.
Keywords: integrated BIM workflow, 4D planning with BIM, building information modeling, clash detection and visualization, constructability reviews, project life cycle
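The geometric core of a hard-clash check is an overlap test between element bounding volumes; a stripped-down axis-aligned bounding-box version is sketched below with invented element coordinates. Commercial BIM coordination tools add tolerances, clearance clashes and 4D time filtering on top of this basic idea.

```python
from itertools import combinations

# Each element: name plus axis-aligned bounding box (xmin, ymin, zmin, xmax, ymax, zmax).
elements = {
    "pipe_rack_A":  (0.0, 0.0, 3.0, 12.0, 0.4, 3.4),
    "duct_B":       (5.0, -0.2, 3.2, 5.6, 2.0, 3.8),
    "cable_tray_C": (0.0, 1.0, 4.0, 12.0, 1.3, 4.2),
}

def boxes_clash(a, b, tol=0.0):
    """True if the two boxes overlap on all three axes (hard clash)."""
    return all(a[i] < b[i + 3] - tol and b[i] < a[i + 3] - tol for i in range(3))

for (na, ba), (nb, bb) in combinations(elements.items(), 2):
    if boxes_clash(ba, bb):
        print(f"clash detected: {na} <-> {nb}")
```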
Procedia PDF Downloads 121
2864 Assessing the Nutritional Characteristics and Habitat Modeling of the Comorian’s Yam (Dioscorea comorensis) in a Fragmented Landscape
Authors: Mounir Soule, Hindatou Saidou, Razafimahefa, Mohamed Thani Ibouroi
Abstract:
High levels of habitat fragmentation and loss are the main drivers of plant species extinction. They reduce habitat quality, which is a determining factor for the reproduction of plant species, and generate strong selective pressures for habitat selection, with impacts on the reproduction and survival of individuals. The Comorian yam (Dioscorea comorensis) is one of the most threatened plant species of the Comoros archipelago. The species faces one of the highest rates of habitat loss worldwide (9.3% per year) and is classified as Endangered on the IUCN Red List. Despite the nutritional potential of this tuber, Comorian yam cultivation remains neglected by local populations, probably due to a lack of knowledge of its nutritional importance and of the factors driving its spatial distribution and development. In this study, we assessed the nutritional characteristics of Dioscorea comorensis and the drivers of its spatial distribution and abundance in order to propose conservation measures and improve crop yields. To determine the nutritional characteristics, the Kjeldahl method, the Soxhlet method, and Atwater's specific calorific coefficients were applied for analyzing proteins, lipids, and caloric energy, respectively. In addition, atomic absorption spectrometry was used to measure mineral content. By combining species occurrences with ecological (habitat types), climatic (temperature, rainfall, etc.), and physicochemical (soil types and quality) variables, we assessed habitat suitability, the spatial distribution of the species, and the factors explaining the origin, persistence, distribution and competitive capacity of the species using a Species Distribution Modeling (SDM) method. The results showed that the species contains 83.37% carbohydrates, 6.37% protein, and 0.45% lipids. In 100 grams, the quantities of calcium, sodium, zinc, iron, copper, potassium, phosphorus, magnesium, and manganese are respectively 422.70, 599.41, 223.11, 252.32, 332.20, 780.41, 444.17, 287.71 and 220.73 mg. Its PRAL index is negative (-9.80 mEq/100 g), and its Ca/P (0.95) and Na/K (0.77) ratios are less than 1. This species provides an energy value of 357.46 kcal per 100 g, thanks to its carbohydrates and minerals, and is distinguished from others by its high protein content, offering benefits for cardiovascular health. According to our SDM, the species has a very limited distribution, restricted to forests with higher biomass, humidity, and clay content. Our findings highlight how distribution patterns are related to ecological and environmental factors. They also emphasize how beneficial the Comoros yam is in terms of nutritional quality. Our results represent basic knowledge that will help scientists and decision-makers to develop conservation strategies and to improve crop yields.
Keywords: Dioscorea comorensis, nutritional characteristics, species distribution modeling, conservation strategies, crop yields improvement
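The reported mineral ratios and PRAL value can be reproduced directly from the composition figures quoted in the abstract; the sketch below applies the standard Remer-Manz PRAL coefficients and the simple Ca/P and Na/K ratios to the per-100 g values.

```python
# Per-100 g composition quoted in the abstract
protein_g = 6.37
minerals_mg = {"Ca": 422.70, "Na": 599.41, "K": 780.41,
               "P": 444.17, "Mg": 287.71}

# PRAL (Remer & Manz): 0.49*protein[g] + 0.037*P - 0.021*K - 0.026*Mg - 0.013*Ca (minerals in mg)
pral = (0.49 * protein_g + 0.037 * minerals_mg["P"] - 0.021 * minerals_mg["K"]
        - 0.026 * minerals_mg["Mg"] - 0.013 * minerals_mg["Ca"])

print(f"PRAL  = {pral:.2f} mEq/100 g")                          # ~ -9.8 (negative, i.e. alkalising)
print(f"Ca/P  = {minerals_mg['Ca'] / minerals_mg['P']:.2f}")    # ~ 0.95
print(f"Na/K  = {minerals_mg['Na'] / minerals_mg['K']:.2f}")    # ~ 0.77
```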
Procedia PDF Downloads 29
2863 Mathematical Modeling to Reach Stability Condition within Rosetta River Mouth, Egypt
Authors: Ali Masria , Abdelazim Negm, Moheb Iskander, Oliver C. Saavedra
Abstract:
Estuaries play an important role in exchanging water and providing a navigational pathway for ships. These zones are very sensitive and vulnerable to any interventions in coastal dynamics, and most such inlets experience coastal problems such as severe erosion and accretion. The Rosetta promontory, Egypt, is an example of this environment: it suffers from many coastal problems, such as erosion along the coastline and siltation inside the inlet, due to the lack of water and sediment resources as a side effect of the construction of the Aswan High Dam. The shoaling of the inlet hinders the navigation of fishing boats, has negative impacts on estuarine and salt marsh habitat, and decreases the capacity of the cross section to transfer flow to the sea during emergencies. This paper aims to reach a new condition of stability of the Rosetta promontory by using coastal measures to control the sediment entering, and causing shoaling inside, the inlet. These coastal measures include modifying the inlet cross section by using centered jetties and eliminating the coastal dynamics at the entrance using boundary jetties. This target is achieved by using the hydrodynamic Coastal Modeling System (CMS). Extensive field data (hydrographic surveys, wave data, tide data, and bed morphology) were used to build and calibrate the model. About 20 scenarios were tested to reach a suitable solution that mitigates the coastal problems at the inlet. The results show that a 360 m jetty on the eastern bank, with a system of sand bypassing from the lee side of the jetty, can stabilize the estuary.
Keywords: Rosetta promontory, erosion, sedimentation, inlet stability
Procedia PDF Downloads 585
2862 Modeling and Characterization of Organic LED
Authors: Bouanati Sidi Mohammed, N. E. Chabane Sari, Mostefa Kara Selma
Abstract:
It is well known that organic light emitting diodes (OLEDs) are attracting great interest in the display technology industry due to their many advantages, such as low manufacturing cost, large-area electroluminescent displays, and various emission colors, including white light. Recently, there has been much progress in understanding the device physics of OLEDs and their basic operating principles. In OLEDs, light emission is the result of the recombination of electrons and holes in the light emitting layer, which are injected from the cathode and anode. To improve luminescence efficiency, electron and hole pairs need to exist abundantly and in equal numbers and to recombine swiftly in the emitting layer. The aim of this paper is to model a polymer LED and an OLED made with small molecules in order to study their electrical and optical characteristics. The first simulated structure used in this paper is a monolayer device, typically consisting of the poly(2-methoxy-5-(2'-ethyl)hexoxy-phenylenevinylene) (MEH-PPV) polymer sandwiched between an anode, usually an indium tin oxide (ITO) substrate, and a cathode, such as Al. In the second structure, we replace MEH-PPV with tris(8-hydroxyquinolinato) aluminum (Alq3). We chose MEH-PPV because of its solubility in common organic solvents, in conjunction with a low operating voltage for light emission and relatively high conversion efficiency, and Alq3 because it is one of the most important host materials used in OLEDs. In this simulation, the Poole-Frenkel-like mobility model and the Langevin bimolecular recombination model have been used as the transport and recombination mechanisms. These models are enabled in the ATLAS-SILVACO software. The influence of doping and thickness on the I(V) characteristics and luminescence is reported.
Keywords: organic light emitting diode, polymer light emitting diode, organic materials, hexoxy-phenylenevinylene
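For reference, the two physical models named above reduce to simple closed forms: a field-dependent Poole-Frenkel-type mobility and the Langevin bimolecular recombination coefficient. The sketch below evaluates them with illustrative organic-semiconductor parameters, which are assumptions rather than the ATLAS-SILVACO defaults or the fitted MEH-PPV/Alq3 values.

```python
import math

q = 1.602e-19          # elementary charge [C]
eps0 = 8.854e-12       # vacuum permittivity [F/m]

def poole_frenkel_mobility(mu0, gamma, E):
    """Simplified Poole-Frenkel field dependence: mu(E) = mu0 * exp(gamma*sqrt(E))."""
    return mu0 * math.exp(gamma * math.sqrt(E))

def langevin_recombination_coeff(mu_n, mu_p, eps_r):
    """Langevin bimolecular recombination coefficient gamma_L = q*(mu_n + mu_p)/(eps0*eps_r),
    so that the recombination rate is R = gamma_L * (n*p - n_i^2)."""
    return q * (mu_n + mu_p) / (eps0 * eps_r)

mu0 = 1.0e-9           # zero-field mobility [m^2/Vs] (assumed)
gamma = 5.0e-4         # field-enhancement factor [(m/V)^0.5] (assumed)
E = 1.0e7              # electric field [V/m] (~ a few volts across ~100 nm)

print(f"mu(E)   = {poole_frenkel_mobility(mu0, gamma, E):.2e} m^2/Vs")
print(f"gamma_L = {langevin_recombination_coeff(1e-9, 5e-10, 3.0):.2e} m^3/s")
```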
Procedia PDF Downloads 553
2861 A Sentence-to-Sentence Relation Network for Recognizing Textual Entailment
Authors: Isaac K. E. Ampomah, Seong-Bae Park, Sang-Jo Lee
Abstract:
Over the past decade, there have been promising developments in Natural Language Processing (NLP), with several investigations of approaches focusing on Recognizing Textual Entailment (RTE). These include models based on lexical similarities, models based on formal reasoning, and, most recently, deep neural models. In this paper, we present a sentence encoding model that exploits sentence-to-sentence relation information for RTE. In terms of sentence modeling, convolutional neural networks (CNNs) and recurrent neural networks (RNNs) adopt different approaches: RNNs are known to be well suited for sequence modeling, whilst CNNs are suited to the extraction of n-gram features through their filters and can learn ranges of relations via the pooling mechanism. We combine the strengths of RNNs and CNNs, as stated above, to present a unified model for the RTE task. Our model essentially combines relation vectors computed from the phrasal representations of each sentence with the final encoded sentence representations. Firstly, we pass each sentence through a convolutional layer to extract a sequence of higher-level phrase representations, from which the first relation vector is computed. Secondly, the phrasal representation of each sentence from the convolutional layer is fed into a Bidirectional Long Short Term Memory (Bi-LSTM) network to obtain the final sentence representations, from which a second relation vector is computed. The relation vectors are combined and then used, in the same fashion as an attention mechanism, over the Bi-LSTM outputs to yield the final sentence representations for classification. Experiments on the Stanford Natural Language Inference (SNLI) corpus suggest that this is a promising technique for RTE.
Keywords: deep neural models, natural language inference, recognizing textual entailment (RTE), sentence-to-sentence relation
Procedia PDF Downloads 348
2860 Homeless Population Modeling and Trend Prediction Through Identifying Key Factors and Machine Learning
Authors: Shayla He
Abstract:
Background and Purpose: According to Chamie (2017), it is estimated that no fewer than 150 million people, or about 2 percent of the world's population, are homeless. The homeless population in the United States has grown rapidly in the past four decades. In New York City, the sheltered homeless population has increased from 12,830 in 1983 to 62,679 in 2020. Knowing the trend in the homeless population is crucial to helping states and cities make affordable housing plans and other community service plans ahead of time, so as to better prepare for the situation. This study utilized data from New York City, examined the key factors associated with homelessness, and developed systematic modeling to predict future homeless populations. Using the best model developed, named HP-RNN, an analysis of the homeless population change during the months of 2020 and 2021, which were impacted by the COVID-19 pandemic, was conducted. Moreover, HP-RNN was tested on data from Seattle. Methods: The methodology involves four phases in developing robust prediction methods. Phase 1 gathered and analyzed raw data on the homeless population and demographic conditions from five urban centers. Phase 2 identified the key factors that contribute to the rate of homelessness. In Phase 3, three models were built, using linear regression, random forest, and a recurrent neural network (RNN), respectively, to predict the future trend of society's homeless population. Each model was trained and tuned on the dataset from New York City, with accuracy measured by the mean squared error (MSE). In Phase 4, the final phase, the best model from Phase 3 was evaluated using data from Seattle that was not part of the model training and tuning process in Phase 3. Results: Compared to the linear-regression-based model used by HUD et al. (2019), HP-RNN significantly improved the prediction metrics, raising the coefficient of determination (R2) from -11.73 to 0.88 and reducing the MSE by 99%. HP-RNN was then validated on the data from Seattle, WA, which showed a peak error of 14.5% between the actual and the predicted counts. Finally, the modeling results were used to predict the trend during the COVID-19 pandemic. They show a good correlation between the actual and the predicted homeless population, with a peak error of less than 8.6%. Conclusions and Implications: This work is the first to apply an RNN to model the time series of homeless-related data. The model shows a close correlation between the actual and the predicted homeless population. There are two major implications of this result. First, the model can be used to predict the homeless population for the next several years, and the prediction can help states and cities plan ahead on affordable housing allocation and other community services to better prepare for the future. Moreover, this prediction can serve as a reference for policy makers and legislators as they seek to make changes that may impact the factors closely associated with the future homeless population trend.
Keywords: homeless, prediction, model, RNN
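The comparison metrics quoted above (R2, MSE and peak percentage error) are straightforward to compute; a short sketch on dummy actual-versus-predicted counts is given below so the reported figures can be interpreted. The numbers are placeholders, not the NYC or Seattle data.

```python
import numpy as np

def r2_score(actual, predicted):
    ss_res = np.sum((actual - predicted) ** 2)
    ss_tot = np.sum((actual - np.mean(actual)) ** 2)
    return 1.0 - ss_res / ss_tot

def mse(actual, predicted):
    return np.mean((actual - predicted) ** 2)

def peak_pct_error(actual, predicted):
    return np.max(np.abs(actual - predicted) / actual) * 100.0

# Dummy sheltered-homeless counts in thousands (illustrative only)
actual = np.array([58.2, 59.1, 60.4, 61.0, 62.7, 61.9])
predicted = np.array([57.5, 59.8, 59.9, 62.1, 61.8, 63.4])

print(f"R^2 = {r2_score(actual, predicted):.2f},  MSE = {mse(actual, predicted):.2f},  "
      f"peak %error = {peak_pct_error(actual, predicted):.1f}%")
```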
Procedia PDF Downloads 119
2859 Enhance Construction Visual As-Built Schedule Management Using BIM Technology
Authors: Shu-Hui Jan, Hui-Ping Tserng, Shih-Ping Ho
Abstract:
Construction project control attempts to obtain real-time as-built schedule information and to eliminate project delays by effectively enhancing dynamic schedule control and management. Suitable platforms for visualizing the as-built schedule during the construction phase are therefore necessary and important for general contractors. As the application of building information modeling (BIM) becomes more common, schedule management integrated with the BIM approach becomes essential for the general contractor's visual construction management during the construction phase. To enhance visualization of the updated as-built schedule for the general contractor, this study presents a novel system, the Construction BIM-assisted Schedule Management (ConBIM-SM) system, for general contractors in
Keywords: building information modeling (BIM), construction schedule management, as-built schedule management, BIM schedule updating mechanism
Procedia PDF Downloads 374
2858 Design, Synthesis and Pharmacological Investigation of Novel 2-Phenazinamine Derivatives as a Mutant BCR-ABL (T315I) Inhibitor
Authors: Gajanan M. Sonwane
Abstract:
Nowadays, the entire pharmaceutical industry faces the challenge of increasing efficiency and innovation. The major hurdles are the growing cost of research and development and a concurrently stagnating number of new chemical entities (NCEs). Hence, the challenge is to select the most druggable targets and to search for corresponding drug-like compounds that also possess the specific pharmacokinetic and toxicological properties that allow them to be developed as drugs. The present research work includes studies on developing new anticancer heterocycles using molecular modeling techniques. Heterocycles synthesized through such a methodology are more effective because various physicochemical parameters have already been studied and the structure has been optimized for its best fit in the receptor. Hence, on the basis of the literature survey and considering the need to develop newer anticancer agents, new phenazinamine derivatives were designed by subjecting the nucleus to molecular modeling, viz., GQSAR analysis and docking studies. Simultaneously, these designed derivatives were subjected to in silico prediction of biological activity through PASS studies and then to in silico toxicity risk assessment. In the PASS studies, all the derivatives exhibited a good spectrum of biological activities, confirming their anticancer potential. The toxicity risk assessment studies revealed that all the derivatives obey Lipinski's rule. Among these series, compounds 4c, 5b, and 6c were found to possess logP and drug-likeness values comparable with those of the standard imatinib (used for the anticancer activity studies) and the standard drug methotrexate (used for the antimitotic activity studies). One of the most notable mutations is the threonine-to-isoleucine mutation at codon 315 (T315I), which is known to confer resistance to all currently available tyrosine kinase inhibitors (TKIs). An enzyme assay is planned to confirm target-selective activity.
Keywords: drug design, tyrosine kinases, anticancer, Phenazinamine
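As a concrete illustration of the kind of in silico drug-likeness screen mentioned above, the following sketch applies Lipinski's rule of five with RDKit; the SMILES string is a placeholder (the phenazine core), not one of the reported derivatives, and this is not the authors' GQSAR or docking workflow.

```python
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def rule_of_five(smiles):
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError("invalid SMILES")
    violations = sum([
        Descriptors.MolWt(mol) > 500,        # molecular weight
        Descriptors.MolLogP(mol) > 5,        # lipophilicity (logP)
        Lipinski.NumHDonors(mol) > 5,        # H-bond donors
        Lipinski.NumHAcceptors(mol) > 10,    # H-bond acceptors
    ])
    return violations                        # 0 or 1 violations is usually taken as drug-like

print(rule_of_five("c1ccc2nc3ccccc3nc2c1"))  # phenazine core as a toy example
```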
Procedia PDF Downloads 115
2857 Recurrent Neural Networks for Complex Survival Models
Authors: Pius Marthin, Nihal Ata Tutkun
Abstract:
Survival analysis has become one of the paramount procedures for modeling time-to-event data. For complex survival problems, the traditional approach remains limited in accounting for the complex correlational structure between the covariates and the outcome, owing to strong assumptions that limit the inference and prediction ability of the resulting models. Several studies exist on the deep learning approach to survival modeling; however, their application to complex survival problems still needs improvement. In addition, existing models do not fully address the complexity of the data structure and are subject to noise and redundant information. In this study, we design a deep learning technique (CmpXRnnSurv_AE) that overcomes the limitations imposed by traditional approaches and addresses the above issues to jointly predict the risk-specific probabilities and the survival function for recurrent events with competing risks. We introduce a component termed Risks Information Weights (RIW) as an attention mechanism to compute the weighted cumulative incidence function (WCIF), and an external auto-encoder (ExternalAE) as a feature selector to extract complex characteristics among the set of covariates responsible for the cause-specific events. We train our model using synthetic and real data sets and employ the appropriate metrics for complex survival models for evaluation. As benchmarks, we selected both traditional and machine learning models, and our model demonstrates better performance across all datasets.
Keywords: cumulative incidence function (CIF), risk information weight (RIW), autoencoders (AE), survival analysis, recurrent events with competing risks, recurrent neural networks (RNN), long short-term memory (LSTM), self-attention, multilayer perceptrons (MLPs)
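The sketch below illustrates, in generic PyTorch terms, how an LSTM can emit cause-specific probabilities for competing risks at each recurrent-event interval, with a learned per-interval attention weight standing in for the Risks Information Weights; the exact RIW/WCIF formulation and the external auto-encoder of CmpXRnnSurv_AE are not reproduced, and all sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class RecurrentCompetingRisks(nn.Module):
    def __init__(self, n_covariates, n_risks, hidden=64):
        super().__init__()
        self.encoder = nn.LSTM(n_covariates, hidden, batch_first=True)
        self.attention = nn.Linear(hidden, 1)        # per-interval weight (RIW-like stand-in)
        self.risk_head = nn.Linear(hidden, n_risks)  # cause-specific probabilities

    def forward(self, x):                            # x: (batch, intervals, covariates)
        h, _ = self.encoder(x)
        weights = torch.softmax(self.attention(h).squeeze(-1), dim=1)   # (B, T)
        probs = torch.softmax(self.risk_head(h), dim=-1)                # (B, T, K)
        # Weighted cumulative incidence per risk: attention-weighted running sum over intervals.
        wcif = torch.cumsum(weights.unsqueeze(-1) * probs, dim=1)
        return probs, wcif
```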
Procedia PDF Downloads 88
2856 Parametric Analysis of Lumped Devices Modeling Using Finite-Difference Time-Domain
Authors: Felipe M. de Freitas, Icaro V. Soares, Lucas L. L. Fortes, Sandro T. M. Gonçalves, Úrsula D. C. Resende
Abstract:
SPICE-based simulators are robust and widely used for the simulation of electronic circuits; their algorithms support linear and non-linear lumped components, and they can handle a large number of encapsulated elements. Despite their great potential for analyzing quasi-static electromagnetic field interactions, that is, at low frequencies, these simulators are limited when applied to microwave hybrid circuits in which there are both lumped and distributed elements. Usually, the spatial discretization of the Finite-Difference Time-Domain (FDTD) method is done according to the actual size of the element under analysis. After spatial discretization, the Courant stability criterion gives the maximum time step accepted for that spatial discretization and for the propagation velocity of the wave. This criterion guarantees the stability conditions for the leapfrog scheme of the Yee algorithm; however, the stability of the complete FDTD procedure depends on factors other than the stability of the Yee algorithm alone, because a practical FDTD program needs additional algorithms in order to be useful in engineering problems. Examples of these algorithms are absorbing boundary conditions (ABCs), excitation sources, subcellular techniques, lumped elements, and non-uniform or non-orthogonal meshes. In this work, the influence of the stability of the FDTD method on the modeling of lumped elements such as resistive sources, resistors, capacitors, inductors, and diodes is evaluated. This paper therefore proposes the electromagnetic modeling of electronic components in order to create models that satisfy the needs of circuit simulations over ultra-wide frequency ranges. The models of the resistive source, resistor, capacitor, inductor, and diode are evaluated among the mathematical models for lumped components in the Lumped-Element Finite-Difference Time-Domain (LE-FDTD) method, through a parametric analysis of the size of the Yee cells that discretize the lumped components. In this way, the aim is to find an ideal cell size so that the analysis in the FDTD environment is in greater agreement with the expected circuit behavior while maintaining the stability conditions of the method. Based on the mathematical models and the theoretical basis of the required extensions of the FDTD method, the models are implemented computationally in the Matlab® environment. The Mur boundary condition is used as the absorbing boundary of the FDTD method. The models are validated by comparing the results obtained with the FDTD method, in terms of the electric field values and the currents in the components, against analytical results based on circuit parameters.
Keywords: hybrid circuits, LE-FDTD, lumped element, parametric analysis
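The Courant limit at the heart of this parametric analysis can be illustrated with a short sketch that, for each candidate Yee cell size, computes the maximum stable time step of a uniform 3-D grid; the LE-FDTD lumped-element update equations themselves are not reproduced here, and the swept cell sizes are illustrative.

```python
import numpy as np

c0 = 299_792_458.0                      # free-space propagation velocity (m/s)

def courant_dt(dx, dy, dz, c=c0):
    """Maximum stable time step for a uniform 3-D Yee grid (Courant criterion)."""
    return 1.0 / (c * np.sqrt(1.0 / dx**2 + 1.0 / dy**2 + 1.0 / dz**2))

# Sweep cubic cell sizes from 0.1 mm to 1 mm, as one might when discretizing a
# lumped component with different numbers of Yee cells.
for cell in np.linspace(0.1e-3, 1.0e-3, 10):
    dt = courant_dt(cell, cell, cell)
    print(f"cell = {cell * 1e3:.2f} mm  ->  dt_max = {dt * 1e12:.3f} ps")
```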
Procedia PDF Downloads 151