Search results for: modeling and optimization
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6825

4575 Statistical and Analytical Comparison of GIS Overlay Modelings: An Appraisal on Groundwater Prospecting in Precambrian Metamorphics

Authors: Tapas Acharya, Monalisa Mitra

Abstract:

Overlay modeling is the most widely used conventional analysis for spatial decision support systems. Overlay modeling requires a set of themes with different weightages, computed in various ways, which provide the input for further integrated analysis. Despite its popularity and wide use, the technique gives inconsistent and erroneous results for the same inputs when processed with different GIS overlay methods. This study compares and analyses the differences in the outputs of different overlay methods on a GIS platform, using the same set of themes of the Precambrian metamorphics, for groundwater prospecting in Precambrian metamorphic rocks. The objective of the study is to identify the most suitable overlay method for groundwater prospecting in older Precambrian metamorphics. Seven input thematic layers, namely slope, Digital Elevation Model (DEM), soil thickness, lineament intersection density, average groundwater table fluctuation, stream density and lithology, have been used in the fuzzy overlay, weighted overlay and weighted sum overlay models to delineate suitable groundwater prospective zones. Spatial concurrence analysis with high-yielding wells of the study area and statistical comparison of the outputs of the various overlay models using RStudio reveal that the weighted overlay model is the most efficient GIS overlay model for delineating groundwater prospecting zones in Precambrian metamorphic rocks.
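
A weighted overlay of the kind compared here reduces to a cell-wise weighted sum of reclassified rasters. The sketch below is a minimal illustration, not the authors' implementation: the seven thematic layers are random NumPy stand-ins and the influence weights are hypothetical.

```python
import numpy as np

# Hypothetical reclassified thematic layers (suitability scores 1-9 on a common grid)
rng = np.random.default_rng(0)
layers = {name: rng.integers(1, 10, size=(100, 100)).astype(float)
          for name in ["slope", "dem", "soil_thickness", "lineament_density",
                       "gw_fluctuation", "stream_density", "lithology"]}

# Hypothetical influence weights (they must sum to 1 for a weighted overlay)
weights = {"slope": 0.20, "dem": 0.10, "soil_thickness": 0.15,
           "lineament_density": 0.20, "gw_fluctuation": 0.10,
           "stream_density": 0.10, "lithology": 0.15}

# Weighted overlay: cell-wise weighted sum of the reclassified layers
prospectivity = sum(weights[k] * layers[k] for k in layers)

# Classify into prospect zones by quantiles (poor / moderate / good / very good)
zones = np.digitize(prospectivity, np.quantile(prospectivity, [0.25, 0.5, 0.75]))
print(zones.shape, np.bincount(zones.ravel()))
```

A weighted sum overlay drops the normalisation of weights, and a fuzzy overlay replaces the weighted sum with fuzzy membership functions and operators (e.g. fuzzy AND/OR/gamma), which is why the three methods can rank the same cells differently.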

Keywords: fuzzy overlay, GIS overlay model, groundwater prospecting, Precambrian metamorphics, weighted overlay, weighted sum overlay

Procedia PDF Downloads 128
4574 Modeling of the Heat and Mass Transfer in Fluids through Thermal Pollution in Pipelines

Authors: V. Radulescu, S. Dumitru

Abstract:

Introduction: Determination of the temperature field inside a fluid in motion has many practical implications, especially in the case of turbulent flow. The phenomenon is more pronounced when the solid walls have a different temperature than the fluid. Turbulent heat and mass transfer play an essential role in thermal pollution, as was recorded during the damage of the Thermoelectric Power Plant Oradea (still closed today). Basic Methods: Solving turbulent thermal pollution theoretically is a particularly difficult problem. By using semi-empirical theories or by simplifying the adopted assumptions, based on experimental measurements, the elaboration of a mathematical model for further numerical simulations can be assured. The three zones of the flow are analyzed separately: the vicinity of the solid wall, the turbulent transition zone, and the turbulent core. For each zone, the temperature distribution law is determined. The dependence between the Stanton and Prandtl numbers is determined with correction factors, based on experimental measurements. Major Findings/Results: The limit of the laminar thermal sublayer was determined based on the theory of Landau and Levich, using the assumption that the longitudinal component of the velocity pulsation and the pulsation frequency vary proportionally with the distance to the wall. For the calculation of the average temperature, a solution similar to that for the velocity is used, obtained by an analogous averaging. On these assumptions, numerical modeling was performed with a temperature gradient for turbulent flow in pipes (intact or damaged, with cracks) having 4 different diameters, between 200-500 mm, as in the Thermoelectric Power Plant Oradea. Conclusions: A superposition was made between the molecular viscosity and the turbulent one, followed by the addition of the molecular and turbulent transfer coefficients, necessary to elaborate the theoretical and numerical models. The laminar boundary layer has a different thickness when flow with heat transfer is compared with flow without a temperature gradient. The obtained results are within a margin of error of 5% between the classical semi-empirical theories and the developed model, based on the experimental data. Finally, a general correlation between the Stanton number and the Prandtl number is obtained for a specific flow (with the associated Reynolds number).
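
As a point of reference for the kind of St-Pr-Re correlation mentioned above, the short sketch below evaluates the classical Colburn analogy for turbulent pipe flow. It is illustrative only and is not the correlation derived in the paper; the Reynolds and Prandtl values are arbitrary examples.

```python
def stanton_colburn(reynolds, prandtl):
    """Classical Colburn analogy for smooth turbulent pipe flow (illustrative only):
    St = 0.023 * Re**-0.2 * Pr**(-2/3)."""
    return 0.023 * reynolds ** -0.2 * prandtl ** (-2.0 / 3.0)

# Example: water-like fluid (Pr ~ 7) at typical turbulent Reynolds numbers
for re_number in (5e4, 1e5, 5e5):
    print(f"Re = {re_number:.0e}, Pr = 7.0 -> St = {stanton_colburn(re_number, 7.0):.2e}")
```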

Keywords: experimental measurements, numerical correlations, thermal pollution through pipelines, turbulent thermal flow

Procedia PDF Downloads 164
4573 A Large Language Model-Driven Method for Automated Building Energy Model Generation

Authors: Yake Zhang, Peng Xu

Abstract:

The development of building energy models (BEM) required for architectural design and analysis is a time-consuming and complex process, demanding a deep understanding and proficient use of simulation software. To streamline the generation of complex building energy models, this study proposes an automated method for generating building energy models using a large language model and a BEM library, aimed at improving the efficiency of model generation. The method leverages a large language model to parse user-specified requirements for the target building, extracting key features such as building location, window-to-wall ratio, and thermal performance of the building envelope. The BEM library is used to retrieve energy models that match the target building's characteristics, serving as reference information for the large language model to enhance the accuracy and relevance of the generated model and allowing the creation of a building energy model that adapts to the user's modeling requirements. This study enables the automatic creation of building energy models from natural language inputs, reducing the professional expertise required for model development while significantly decreasing the time and complexity of manual configuration. In summary, this study provides an efficient and intelligent solution for building energy analysis and simulation, demonstrating the potential of large language models in the field of building simulation and performance modeling.
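
A minimal sketch of the parse-and-retrieve step described above is shown below. It is not the authors' system: the toy BEM library, its fields and the regex-based parser are all assumptions; in the real workflow the parsing step would be delegated to a large language model rather than handled with regular expressions.

```python
import re

# Hypothetical BEM library: each entry stores key features of a reference energy model
bem_library = [
    {"id": "office_shanghai", "location": "Shanghai", "wwr": 0.40, "wall_u": 0.8},
    {"id": "office_beijing",  "location": "Beijing",  "wwr": 0.30, "wall_u": 0.5},
    {"id": "retail_shanghai", "location": "Shanghai", "wwr": 0.55, "wall_u": 1.0},
]

def parse_requirements(text):
    """Stand-in for the LLM step: extract location and window-to-wall ratio from free text."""
    wwr = re.search(r"window-to-wall ratio of ([0-9.]+)", text)
    loc = re.search(r"\bin (\w+)", text)
    return {"location": loc.group(1) if loc else None,
            "wwr": float(wwr.group(1)) if wwr else None}

def retrieve_reference(req):
    """Return the library model closest to the requested features (same location preferred)."""
    candidates = [m for m in bem_library if m["location"] == req["location"]] or bem_library
    return min(candidates, key=lambda m: abs(m["wwr"] - (req["wwr"] or m["wwr"])))

req = parse_requirements("An office building in Shanghai with a window-to-wall ratio of 0.45")
print(req, "->", retrieve_reference(req)["id"])
```

The retrieved reference model would then be passed back to the language model as context when it generates the final energy model input file.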

Keywords: artificial intelligence, building energy modelling, building simulation, large language model

Procedia PDF Downloads 26
4572 Computer Network Applications, Practical Implementations and Structural Control System Representations

Authors: El Miloudi Djelloul

Abstract:

Computer networks play an important role in the practical implementation of different systems. To implement a system in a network, it is first necessary to know all the configurations that form part of the system and to provide adequate information and solutions in real time. If such a system is to be implemented, for example, in a school or a relevant institution, the first step is to analyse the types of models that need to be configured, and another important step is to organize the work in terms of the devices that form part of the overall system. Before configuration, an important point is the description and documentation of all the work in the respective process, which is then organized from a problem-solving perspective. Because the computer network is a critical and very specific infrastructure, the paper presents, on one side, effective solutions in structural terms and, on the other side, the positive aspects of modeling and block-schema presentations as a better alternative for solving specific problems caused by continual disturbances of the system arising from devices, programs, and signals or packet collisions moving from one computer node to other nodes.

Keywords: local area networks, LANs, block schema presentations, computer network system, computer node, critical infrastructure, packet collisions, structural control system representations, computer network, implementations, modeling structural representations, companies, computers, context, control systems, internet, software

Procedia PDF Downloads 365
4571 Quantitative Structure-Activity Relationship Analysis of Binding Affinity of a Series of Anti-Prion Compounds to Human Prion Protein

Authors: Strahinja Kovačević, Sanja Podunavac-Kuzmanović, Lidija Jevrić, Milica Karadžić

Abstract:

The present study is based on the quantitative structure-activity relationship (QSAR) analysis of eighteen compounds with anti-prion activity. The structures and anti-prion activities (expressed in response units, RU%) of the analyzed compounds were taken from the ChEMBL database. In the first step of the analysis, 85 molecular descriptors were calculated, and based on them, hierarchical cluster analysis (HCA) and principal component analysis (PCA) were carried out in order to detect potentially significant similarities or dissimilarities among the studied compounds. The calculated molecular descriptors were physicochemical, lipophilicity and ADMET (absorption, distribution, metabolism, excretion and toxicity) descriptors. The first stage of the QSAR analysis was simple linear regression modeling. It resulted in one acceptable model that correlates Henry's law constant with RU% units. The obtained 2D-QSAR model was validated by cross-validation as an internal validation method. The validation procedure confirmed the model's quality, and it can therefore be used for the prediction of anti-prion activity. The next stage of the analysis will include 3D-QSAR and molecular docking approaches in order to select the most promising compounds for the treatment of prion diseases. These results are part of project No. 114-451-268/2016-02, financially supported by the Provincial Secretariat for Science and Technological Development of AP Vojvodina.
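
The internal validation described above can be reproduced in outline as follows. This is a generic sketch with synthetic numbers, not the authors' dataset: a single descriptor (standing in for Henry's law constant) is regressed against a hypothetical RU% response, and the cross-validated Q² is computed by leave-one-out.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Synthetic stand-ins for the descriptor (Henry's law constant) and activity (RU%)
rng = np.random.default_rng(1)
X = rng.normal(size=(18, 1))                           # 18 compounds, one descriptor
y = 2.5 * X[:, 0] + rng.normal(scale=0.5, size=18)     # hypothetical linear response

model = LinearRegression().fit(X, y)
r2 = model.score(X, y)                                 # conventional R^2 of the fitted model

# Leave-one-out cross-validation -> Q^2 (cross-validated R^2)
y_loo = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
q2 = 1 - np.sum((y - y_loo) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"R2 = {r2:.3f}, Q2 = {q2:.3f}")
```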

Keywords: anti-prion activity, chemometrics, molecular modeling, QSAR

Procedia PDF Downloads 304
4570 Isolation, Characterization and Optimization of Alkalophilic and Thermotolerant Lipase from Bacillus subtilis Strain

Authors: Indu Bhushan Sharma, Rashmi Saraswat

Abstract:

A thermotolerant, solvent-stable and alkalophilic lipase-producing bacterial strain was isolated from a water sample from the foothills of the Trikuta Mountains in Kakryal (Reasi district), Jammu and Kashmir, India. Lipase-producing microorganisms were screened using tributyrin agar plates. The selected microbe was optimized for maximum lipase production by varying the carbon and nitrogen sources, incubation period and inoculum size. The selected strain was identified as Bacillus subtilis strain kakrayal_1 (BSK_1) using 16S rRNA sequence analysis. The effects of pH, temperature, metal ions, detergents and organic solvents on lipase activity were studied. The lipase was stable over a pH range of 6.0 to 9.0 and exhibited maximum activity at pH 8. Lipolytic activity was highest at 37°C, and the enzyme remained active at 60°C for 24 h; hence, it was established as thermotolerant. Production of lipase was significantly induced by vegetable oil, and the best nitrogen source was found to be peptone. The isolated Bacillus lipase was stimulated by pre-treatment with Mn2+, Ca2+, K+, Zn2+, and Fe2+. The lipase was stable in detergents such as Triton X-100, Tween 20 and Tween 80. Ethyl acetate (100%) enhanced lipase activity, whereas the activity remained stable in hexane. The optimization resulted in a fourfold increase in lipase production. Bacillus lipases are 'generally recognized as safe' (GRAS) and are industrially interesting. The inducible alkaline, thermotolerant lipase was stable in detergents and organic solvents. It could be further researched as a potential biocatalyst for industrial applications such as biotransformation, detergent formulation, bioremediation and organic synthesis.

Keywords: bacillus, lipase, thermotolerant, alkalophilic

Procedia PDF Downloads 255
4569 Real-Time Inventory Management and Operational Efficiency in Manufacturing

Authors: Tom Wanyama

Abstract:

We have developed a weight-based parts inventory monitoring system utilizing the Industrial Internet of Things (IIoT) to enhance operational efficiencies in manufacturing. The system addresses various challenges, including eliminating downtimes caused by stock-outs, preventing human errors in parts delivery and product assembly, and minimizing motion waste by reducing unnecessary worker movements. The system incorporates custom QR codes for simplified inventory tracking and retrieval processes. The generated data serves a dual purpose by enabling real-time optimization of parts flow within manufacturing facilities and facilitating retroactive optimization of stock levels for informed decision-making in inventory management. The pilot implementation at SEPT Learning Factory successfully eradicated data entry errors, optimized parts delivery, and minimized workstation downtimes, resulting in a remarkable increase of over 10% in overall equipment efficiency across all workstations. Leveraging the IIoT features, the system seamlessly integrates information into the process control system, contributing to the enhancement of product quality. This approach underscores the importance of effective tracking of parts inventory in manufacturing to achieve transparency, improved inventory control, and overall profitability. In the broader context, our inventory monitoring system aligns with the evolving focus on optimizing supply chains and maintaining well-managed warehouses to ensure maximum efficiency in the manufacturing industry.
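
The core logic of a weight-based bin monitor is simple; the sketch below shows one hypothetical way to turn a load-cell reading into a part count and a reorder flag. The class name, thresholds and QR payload format are assumptions for illustration, not the system deployed at the SEPT Learning Factory.

```python
from dataclasses import dataclass

@dataclass
class PartBin:
    qr_code: str          # custom QR payload identifying the bin and part (hypothetical format)
    unit_weight_g: float  # weight of a single part
    tare_g: float         # empty-bin weight
    reorder_point: int    # minimum acceptable part count

    def count_from_weight(self, gross_weight_g: float) -> int:
        """Estimate part count from the load-cell reading (rounded to the nearest part)."""
        return max(0, round((gross_weight_g - self.tare_g) / self.unit_weight_g))

    def needs_reorder(self, gross_weight_g: float) -> bool:
        """Flag the bin for replenishment when the estimated count falls to the reorder point."""
        return self.count_from_weight(gross_weight_g) <= self.reorder_point

bin_a = PartBin(qr_code="SEPT|M6-BOLT|WS3", unit_weight_g=8.4, tare_g=350.0, reorder_point=25)
reading = 612.0  # grams, as reported by the IIoT load cell
print(bin_a.count_from_weight(reading), bin_a.needs_reorder(reading))
```

In an IIoT deployment, the reorder flag and the timestamped counts would be published to the process control system, which is what enables both the real-time flow optimization and the retroactive stock-level analysis described above.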

Keywords: industrial Internet of things, industrial systems integration, inventory monitoring, inventory control in manufacturing

Procedia PDF Downloads 35
4568 Large Core Silica Few-Mode Optical Fibers with Reduced Differential Mode Delay and Enhanced Mode Effective Area over 'C'-Band

Authors: Anton V. Bourdine, Vladimir A. Burdin, Oleg R. Delmukhametov

Abstract:

This work presents a fast and simple method for the design of large-core silica optical fibers with differential mode delay (DMD) management. Some results are reported concerning refractive index profile optimization for a 42 µm core 16-LP-mode optical fiber for next-generation optical networks. Here, a special refractive index profile form provides total DMD reduction over the entire mode set under the desired enhanced mode effective area. A method is proposed for simulating 'real manufactured' few-mode optical fiber (FMF) core geometry, which differs from the desired optimized structure by non-symmetrical core ellipticity and refractive index profile deviations, including local fluctuations. The results of the subsequent analysis of the optimized FMF with inserted geometry distortions, performed with a previously developed modification of the rigorous mixed finite-element method, showed strong DMD degradation that requires additional higher-order mode management. In addition, this work presents a method for designing a precision spatial positioning scheme for mode-division-multiplexer channels at the FMF core end, which provides one potential solution to the described DMD degradation problem arising from the 'distorted' core geometry produced by optical fiber manufacturing techniques.

Keywords: differential mode delay, few-mode optical fibers, nonlinear Shannon limit, optical fiber non-circularity, ‘real manufactured’ optical fiber core geometry simulation, refractive index profile optimization

Procedia PDF Downloads 157
4567 Geoplanology Modeling and Applications Engineering of Earth in Spatial Planning Related with Geological Hazard in Cilegon, Banten, Indonesia

Authors: Muhammad L. A. Dwiyoga

Abstract:

The spatial condition of land in an industrial park needs special attention and deeper study. Geoplanology modeling can help arrange an area according to its capability. The research method is to perform remote sensing analysis, Geographic Information System analysis, and a more comprehensive analysis to determine the geological characteristics and land capability of the research area and their relation to geological hazards. Cilegon is part of Banten province, located in western Java; to the north lies the Strait of Borneo, while the southern part borders the Indian Ocean. The morphology of the study area ranges from highlands to lowlands. The highlands are identified as potentially landslide-prone, whereas the low-lying areas are potentially subject to flooding. Moreover, the study area is prone to earthquakes, owing to its proximity to Mount Krakatau and the subduction zone. The results of this study show that the study area is susceptible to landslides around Waringinkurung District, while the potential flood areas lie in Cilegon District and its surroundings. Based on the seismic data, this area includes zones with magnitudes ranging from 1.5 to 5.5 at depths of 1 to 60 km. As for the capability of the territory, the analyses and studies carried out indicate the need to update the existing Spatial Plan map, considering the fairly rapid development of the Cilegon area.

Keywords: geoplanology, spatial plan, geological hazard, cilegon, Indonesia

Procedia PDF Downloads 504
4566 The Optimization of the Parameters for Eco-Friendly Leaching of Precious Metals from Waste Catalyst

Authors: Silindile Gumede, Amir Hossein Mohammadi, Mbuyu Germain Ntunka

Abstract:

Goal 12 of the 17 Sustainable Development Goals (SDGs) encourages sustainable consumption and production patterns. This necessitates the environmentally safe management of chemicals and all wastes throughout their life cycle and the proper disposal of pollutants and toxic waste. Fluid catalytic cracking (FCC) catalysts are widely used in refineries to convert heavy feedstocks into lighter ones. During the refining processes, the catalysts are deactivated and discarded as hazardous toxic solid waste. Spent catalysts (SC) contain high-value metals, and the recovery of metals from SCs is a tactical plan for supplying part of the demand for these substances and minimizing environmental impacts. Leaching followed by solvent extraction has been found to be the most efficient method to recover valuable metals of high purity from spent catalysts. However, the use of inorganic acids during the leaching process causes a secondary environmental issue. Therefore, it is necessary to explore alternative leaching agents that are efficient, economical and environmentally friendly. In this study, the waste catalyst was collected from a domestic refinery and was characterised using XRD, ICP, XRF, and SEM. Response surface methodology (RSM) with a Box-Behnken design was used to model and optimize the influence of the parameters affecting the acidic leaching process. The parameters selected in this investigation were the acid concentration, temperature, and leaching time. The characterisation results show that the spent catalyst contains high concentrations of vanadium (V) and nickel (Ni); hence, this study focuses on the leaching of Ni and V using a biodegradable acid to eliminate the formation of secondary pollution.
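
For readers unfamiliar with the Box-Behnken/RSM step, the sketch below fits a second-order response surface to a three-factor design in coded units. The design points follow the standard Box-Behnken layout for three factors, while the response values are synthetic placeholders and not the leaching data from this study.

```python
import numpy as np
from itertools import combinations

# Standard Box-Behnken design for 3 factors (coded levels -1, 0, +1):
# every pair of factors at +/-1 with the third at 0, plus centre runs.
points = []
for i, j in combinations(range(3), 2):
    for a in (-1, 1):
        for b in (-1, 1):
            p = [0, 0, 0]
            p[i], p[j] = a, b
            points.append(p)
points += [[0, 0, 0]] * 3              # centre runs
X = np.array(points, dtype=float)      # columns: acid concentration, temperature, time (coded)

rng = np.random.default_rng(2)
# Synthetic recovery (%) generated from an assumed quadratic surface plus noise
y = 60 + 8*X[:, 0] + 5*X[:, 1] + 3*X[:, 2] - 4*X[:, 0]**2 + rng.normal(0, 1, len(X))

# Full quadratic model matrix: intercept, linear, two-factor interaction and squared terms
Z = np.column_stack([np.ones(len(X)), X,
                     X[:, 0]*X[:, 1], X[:, 0]*X[:, 2], X[:, 1]*X[:, 2], X**2])
coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
print(np.round(coef, 2))               # fitted second-order coefficients
```

The fitted coefficients define the response surface whose stationary point gives the optimum combination of acid concentration, temperature and leaching time.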

Keywords: eco-friendly leaching, optimization, metal recovery, leaching

Procedia PDF Downloads 68
4565 Computational Fluid Dynamic Modeling of Mixing Enhancement by Stimulation of Ferrofluid under Magnetic Field

Authors: Neda Azimi, Masoud Rahimi, Faezeh Mohammadi

Abstract:

Computational fluid dynamics (CFD) simulation was performed to investigate the effect of ferrofluid stimulation on the hydrodynamic and mass transfer characteristics of two immiscible liquid phases in a Y-micromixer. The main purpose of this work was to develop a numerical model that is able to simulate the hydrodynamics of ferrofluid flow under a magnetic field and determine its effect on mass transfer characteristics. A uniform external magnetic field was applied perpendicular to the flow direction. The volume of fluid (VOF) approach was used for simulating the multiphase flow of the ferrofluid and the two immiscible liquids. The geometric reconstruction scheme (Geo-Reconstruct), based on piecewise linear interpolation (PLIC), was used for reconstruction of the interface in the VOF approach. The mass transfer rate was defined via an equation as a function of the mass concentration gradient of the transported species and added to the phase interaction panel using a user-defined function (UDF). The magnetic field was solved numerically by the Fluent MHD module, based on solving the magnetic induction equation. CFD results were validated against experimental data and good agreement was achieved, with a maximum relative error for extraction efficiency of about 7.52%. It was shown that ferrofluid actuation by a magnetic field can be considered an efficient mixing agent for liquid-liquid two-phase mass transfer in microdevices.

Keywords: CFD modeling, hydrodynamic, micromixer, ferrofluid, mixing

Procedia PDF Downloads 196
4564 Development of PM2.5 Forecasting System in Seoul, South Korea Using Chemical Transport Modeling and ConvLSTM-DNN

Authors: Ji-Seok Koo, Hee‑Yong Kwon, Hui-Young Yun, Kyung-Hui Wang, Youn-Seo Koo

Abstract:

This paper presents a forecasting system for PM2.5 levels in Seoul, South Korea, leveraging a combination of chemical transport modeling and ConvLSTM-DNN machine learning technology. Exposure to PM2.5 has known detrimental impacts on public health, making its prediction crucial for establishing preventive measures. Existing forecasting models, like the Community Multiscale Air Quality (CMAQ) and Weather Research and Forecasting (WRF), are hindered by their reliance on uncertain input data, such as anthropogenic emissions and meteorological patterns, as well as certain intrinsic model limitations. The system we've developed specifically addresses these issues by integrating machine learning and using carefully selected input features that account for local and distant sources of PM2.5. In South Korea, the PM2.5 concentration is greatly influenced by both local emissions and long-range transport from China, and our model effectively captures these spatial and temporal dynamics. Our PM2.5 prediction system combines the strengths of advanced hybrid machine learning algorithms, convLSTM and DNN, to improve upon the limitations of the traditional CMAQ model. Data used in the system include forecasted information from CMAQ and WRF models, along with actual PM2.5 concentration and weather variable data from monitoring stations in China and South Korea. The system was implemented specifically for Seoul's PM2.5 forecasting.
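
A minimal Keras sketch of a ConvLSTM front end feeding dense layers is shown below. The input shape (hourly time steps, gridded CMAQ/WRF fields plus station observations as channels) and all layer sizes are placeholders, not the operational Seoul configuration.

```python
import numpy as np
import tensorflow as tf

T, H, W, C = 24, 32, 32, 6          # hypothetical: 24 hourly steps, 32x32 grid, 6 input fields

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(T, H, W, C)),
    tf.keras.layers.ConvLSTM2D(16, kernel_size=3, padding="same", return_sequences=False),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),   # DNN head
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),                        # next-period PM2.5 concentration for Seoul
])
model.compile(optimizer="adam", loss="mse")

# Tiny random batch just to confirm the shapes are consistent
x = np.random.rand(4, T, H, W, C).astype("float32")
y = np.random.rand(4, 1).astype("float32")
model.fit(x, y, epochs=1, verbose=0)
print(model.output_shape)
```

The ConvLSTM layer captures the spatio-temporal structure of the gridded inputs (including long-range transport patterns), while the dense head maps the learned features to the station-level PM2.5 target.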

Keywords: PM2.5 forecast, machine learning, convLSTM, DNN

Procedia PDF Downloads 54
4563 Decoding Gender Disparities in AI: An Experimental Exploration Within the Realm of AI and Trust Building

Authors: Alexander Scott English, Yilin Ma, Xiaoying Liu

Abstract:

The widespread use of artificial intelligence in everyday life has triggered a fervent discussion covering a wide range of areas. However, to date, research on the influence of gender in various segments and factors from a social science perspective is still limited. This study aims to explore whether there are gender differences in human trust in AI in basic everyday applications and how trust correlates with perceived similarity, perceived emotions (including competence and warmth), and attractiveness. We conducted a study involving 321 participants using a 2 (masculinized vs. feminized AI voice) x 2 (pitch level of the AI voice) between-subjects experimental design. Four contexts were created for the study and randomly assigned. The results showed significant gender differences in perceived similarity, trust, and perceived emotion of the AIs, with females rating them significantly higher than males. Trust was higher toward AIs presenting the same gender (e.g., human female to female AI, human male to male AI). Mediation modeling tests indicated that emotion perception and similarity played a mediating role in trust. Notably, although trust in AIs was strongly associated with human gender, there was no significant effect of the gender of the AI. In addition, the study discusses the effects of subjects' age, job search experience, and job type on the findings.

Keywords: artificial intelligence, gender differences, human-robot trust, mediation modeling

Procedia PDF Downloads 45
4562 Fintech Credit and Bank Efficiency Two-way Relationship: A Comparison Study Across Country Groupings

Authors: Tan Swee Liang

Abstract:

This paper studies the two-way relationship between fintech credit and banking efficiency using generalized method of moments (GMM) panel estimation in structural equation modeling (SEM). Banking system efficiency, defined as the ability to produce the existing level of outputs with minimal inputs, is measured using input-oriented data envelopment analysis (DEA), where the whole banking system of an economy is treated as a single DMU. Banks are considered intermediaries between depositors and borrowers, utilizing inputs (deposits and overhead costs) to provide outputs (credit to the private sector and earnings). The interrelationship between fintech credit and bank efficiency is analyzed to determine the impact in different country groupings (ASEAN, Asia and OECD), in particular the banking system response to fintech credit platforms. Our preliminary results show that banks do respond to the greater pressure from fintech platforms by enhancing their efficiency, but differently across the groups. The author's earlier research on ASEAN-5, where high bank overhead costs (as a share of total assets) were a determinant of economic growth, suggests that expenses may not have been channeled efficiently into income-generating activities. One practical implication of the findings is that policymakers should enable alternative financing, such as fintech credit, as a warning or encouragement for banks to improve their efficiency.
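
An input-oriented CCR efficiency score of the kind used above can be computed with a small linear program per decision-making unit. The sketch below uses SciPy and synthetic input/output data (deposits and overhead costs as inputs, credit and earnings as outputs) purely for illustration; it is not the paper's dataset or estimation code.

```python
import numpy as np
from scipy.optimize import linprog

# Synthetic banking-system data: rows = DMUs, columns = inputs / outputs
X = np.array([[10.0, 2.0], [12.0, 3.0], [8.0, 2.5], [15.0, 4.0]])   # deposits, overhead costs
Y = np.array([[9.0, 1.0], [10.0, 1.2], [8.5, 0.9], [11.0, 1.1]])    # private credit, earnings
n, m = X.shape
s = Y.shape[1]

def ccr_input_oriented(o):
    """Efficiency of DMU o: min theta s.t. X'lam <= theta*x_o, Y'lam >= y_o, lam >= 0."""
    c = np.r_[1.0, np.zeros(n)]                      # variables: [theta, lambda_1..lambda_n]
    A_in = np.c_[-X[o], X.T]                         # sum_j lam_j x_ij - theta x_io <= 0
    A_out = np.c_[np.zeros(s), -Y.T]                 # -sum_j lam_j y_rj <= -y_ro
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[o]]
    bounds = [(0, None)] * (n + 1)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

print([round(ccr_input_oriented(o), 3) for o in range(n)])
```

Each score lies between 0 and 1; a value of 1 means the DMU (here, a whole banking system in a given year) sits on the efficient frontier.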

Keywords: fintech lending, banking efficiency, data envelopment analysis, structural equation modeling

Procedia PDF Downloads 91
4561 Human Immunodeficiency Virus (HIV) Test Predictive Modeling and Identification of Determinants of HIV Testing for People above Fourteen Years of Age in Ethiopia Using Data Mining Techniques: EDHS 2011

Authors: S. Abera, T. Gidey, W. Terefe

Abstract:

Introduction: Testing for HIV is the key entry point to HIV prevention, treatment, care and support services. Hence, predictive data mining techniques can greatly help to analyze and discover new patterns in huge datasets such as the EDHS 2011 data. Objectives: The objective of this study is to build a predictive model for HIV testing and identify determinants of HIV testing for adults above fourteen years of age using data mining techniques. Methods: The Cross-Industry Standard Process for Data Mining (CRISP-DM) was used to build the predictive model for HIV testing and explore association rules between HIV testing and the selected attributes among adult Ethiopians. Decision tree, Naïve Bayes, logistic regression and artificial neural network data mining techniques were used to build the predictive models. Results: The target dataset contained 30,625 study participants, of whom 16,515 (53.9%) were women. Nearly three-fifths, 17,719 (58%), had never been tested for HIV, while the remaining 12,906 (42%) had been tested. Ethiopians with a higher wealth index, higher educational level, age between 20 and 29 years, no stigmatizing attitude towards HIV-positive persons, urban residence, HIV-related knowledge, exposure to family planning information in the mass media, and knowledge of a place to get tested for HIV showed increased patterns of HIV testing. Conclusion and Recommendation: Public health interventions should consider the identified determinants to encourage people to get tested for HIV.
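
As a schematic of the modelling step only (not the EDHS analysis itself), the sketch below fits the four classifier families named above to a synthetic binary "ever tested for HIV" outcome; the features and data are placeholders generated on the fly.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the survey features (wealth index, education, age group, residence, ...)
X, y = make_classification(n_samples=3000, n_features=8, n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "decision tree": DecisionTreeClassifier(max_depth=5, random_state=0),
    "naive Bayes": GaussianNB(),
    "logistic regression": LogisticRegression(max_iter=1000),
    "neural network": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
}
for name, clf in models.items():
    acc = clf.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: accuracy = {acc:.3f}")
```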

Keywords: data mining, HIV, testing, ethiopia

Procedia PDF Downloads 496
4560 Docking, Pharmacophore Modeling and 3D QSAR Studies on Some Novel HDAC Inhibitors with Heterocyclic Linker

Authors: Harish Rajak, Preeti Patel

Abstract:

The application of histone deacetylase inhibitors (HDACIs) is a well-known strategy in cancer prevention; these compounds show acceptable preclinical antitumor activity owing to their ability to inhibit growth and induce apoptosis in cancer cells. Molecular docking was performed using the histone deacetylase protein (PDB ID: 1t69) and a prepared series of hydroxamic acid based HDACIs. On the basis of the docking study, it was predicted that compound 1 has significant binding interactions with the HDAC protein, with three hydrogen bond interactions that are essential for antitumor activity. On docking, most of the compounds exhibited good Glide score values between -8 and -10, which is close to the Glide score value of suberoylanilide hydroxamic acid. The pharmacophore hypotheses were developed using the e-pharmacophore script and the Phase module. The 3D-QSAR models provided a good correlation between predicted and actual anticancer activity. The best QSAR model showed Q2 (0.7974), R2 (0.9200) and standard deviation (0.2308). QSAR visualization maps suggest that hydrogen bond acceptor groups at the carbonyl group of the cap region and hydrophobic groups at the ortho, meta and para positions of R9 are favorable for HDAC inhibitory activity. We established a structure-activity correlation using docking, pharmacophore modeling and an atom-based 3D QSAR model for hydroxamic acid based HDACIs.

Keywords: HDACIs, QSAR, e-pharmacophore, docking, suberoylanilide hydroxamic acid

Procedia PDF Downloads 302
4559 Case-Based Reasoning for Modelling Random Variables in the Reliability Assessment of Existing Structures

Authors: Francesca Marsili

Abstract:

The reliability assessment of existing structures with probabilistic methods is becoming an increasingly important and frequent engineering task. However, probabilistic reliability methods are based on an exhaustive knowledge of the stochastic modeling of the variables involved in the assessment; at the moment, standards for the modeling of variables are absent, representing an obstacle to the dissemination of probabilistic methods. The framework according to which probability distribution functions (PDFs) are established is represented by Bayesian statistics, which uses Bayes' theorem: a prior PDF for the considered parameter is established based on information derived from the design stage and qualitative judgments based on the engineer's past experience; then, the prior model is updated with the results of investigations carried out on the considered structure, such as material testing and the determination of actions and structural properties. The application of Bayesian statistics raises two different kinds of problems: 1. The results of the updating depend on the engineer's previous experience; 2. The updating of the prior PDF can be performed only if the structure has been tested and quantitative data that can be statistically manipulated have been collected; performing tests is always an expensive and time-consuming operation; furthermore, if the considered structure is an ancient building, destructive tests could compromise its cultural value and should therefore be avoided. In order to solve these problems, an interesting research path is to investigate Artificial Intelligence (AI) techniques that can be useful for the automation of the modeling of variables and for the updating of material parameters without performing destructive tests. Among them, one that raises particular attention in relation to the object of this study is Case-Based Reasoning (CBR). In this application, cases are represented by existing buildings where material tests have already been carried out and updated PDFs for the material mechanical parameters have been computed through a Bayesian analysis. Each case is therefore composed of a qualitative description of the material under assessment and the posterior PDFs that describe its material properties. The problem to be solved is the definition of PDFs for the material parameters involved in the reliability assessment of the considered structure. A CBR system is a good candidate for automating the modeling of variables because: 1. Engineers already draw an estimate of the material properties based on the experience collected during the assessment of similar structures, or based on similar cases collected in the literature or in databases; 2. Material tests carried out on structures can easily be collected from laboratory databases or from the literature; 3. The system will provide the user with a reliable probabilistic description of the variables involved in the assessment, which will also serve as a tool in support of the engineer's qualitative judgments. Automated modeling of variables can help to spread the probabilistic reliability assessment of existing buildings in common engineering practice and to target the best interventions and further tests on the structure; CBR is a technique which may help to achieve this.
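
The Bayesian updating of a material parameter that each "case" stores can be illustrated with a conjugate normal model with known variance. The prior values and test results below are hypothetical, not from any assessed structure.

```python
import numpy as np

# Prior for, e.g., concrete compressive strength (MPa): engineer's judgment + design documents
mu0, sigma0 = 30.0, 5.0        # prior mean and prior standard deviation
sigma = 4.0                    # assumed known test (measurement + material) scatter

# Hypothetical test results from the structure under assessment
tests = np.array([26.5, 28.0, 27.2, 29.1])
n, xbar = len(tests), tests.mean()

# Conjugate normal update: posterior precision = prior precision + n / sigma^2
post_var = 1.0 / (1.0 / sigma0**2 + n / sigma**2)
post_mu = post_var * (mu0 / sigma0**2 + n * xbar / sigma**2)
print(f"posterior: mean = {post_mu:.2f} MPa, sd = {post_var**0.5:.2f} MPa")
```

In the CBR scheme described above, this posterior PDF, together with the qualitative description of the material, would be stored as a case and retrieved for similar buildings where destructive testing is not possible.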

Keywords: reliability assessment of existing buildings, Bayesian analysis, case-based reasoning, historical structures

Procedia PDF Downloads 337
4558 Building Biodiversity Conservation Plans Robust to Human Land Use Uncertainty

Authors: Yingxiao Ye, Christopher Doehring, Angelos Georghiou, Hugh Robinson, Phebe Vayanos

Abstract:

Human development is a threat to biodiversity, and conservation organizations (COs) are purchasing land to protect areas for biodiversity preservation. However, COs have limited budgets and thus face hard prioritization decisions that are confounded by uncertainty in future human land use. This research proposes a data-driven sequential planning model to help COs choose land parcels that minimize the uncertain human impact on biodiversity. The proposed model is robust to uncertain development, and the sequential decision-making process is adaptive, allowing land purchase decisions to adapt to human land use as it unfolds. The cellular automata model is leveraged to simulate land use development based on climate data, land characteristics, and development threat index from NASA Socioeconomic Data and Applications Center. This simulation is used to model uncertainty in the problem. This research leverages state-of-the-art techniques in the robust optimization literature to propose a computationally tractable reformulation of the model, which can be solved routinely by off-the-shelf solvers like Gurobi or CPLEX. Numerical results based on real data from the Jaguar in Central and South America show that the proposed method reduces conservation loss by 19.46% on average compared to standard approaches such as MARXAN used in practice for biodiversity conservation. Our method may better help guide the decision process in land acquisition and thereby allow conservation organizations to maximize the impact of limited resources.

Keywords: data-driven robust optimization, biodiversity conservation, uncertainty simulation, adaptive sequential planning

Procedia PDF Downloads 208
4557 Integrated Mathematical Modeling and Advance Visualization of Magnetic Nanoparticle for Drug Delivery, Drug Release and Effects to Cancer Cell Treatment

Authors: Norma Binti Alias, Che Rahim Che The, Norfarizan Mohd Said, Sakinah Abdul Hanan, Akhtar Ali

Abstract:

This paper discusses the transport of magnetically targeted drugs through blood within vessels, tissues and cells. Three integrated mathematical models are discussed and analyzed for the concentration of drug and the blood flow through magnetic nanoparticles. Cell therapy has brought advances in the field of nanotechnology to fight tumors. The systematic therapeutic effect on single cells can reduce the growth of cancer tissue. This nanoscale system can be measured and modeled by identifying some parameters and applying the fundamental principles of mathematical modeling and simulation. The mathematical modeling of single-cell growth depends on three types of cell densities: proliferative, quiescent and necrotic cells. The aim of this paper is to enhance the simulation of these three types of models. The first model represents the transport of drugs by coupled partial differential equations (PDEs) of 3D parabolic type in a cylindrical coordinate system. This model is integrated with non-Newtonian flow equations, with blood flow as the transport medium and the magnetic force acting on the magnetic nanoparticles. The interaction between the magnetic force and the drug with magnetic properties produces induced currents, and the applied magnetic field yields forces that tend to slow the movement of the blood and bring the drug to the cancer cells. Nanoscale devices allow the drug to leave the blood vessels, spread through the tissue and reach the cancer cells. The second model describes the transport of drug nanoparticles from the vascular system to a single cell. The treatment of the vascular system involves the identification of parameters such as magnetic nanoparticle targeted delivery, blood flow, momentum transport, density and viscosity of the drug and blood medium, intensity of the magnetic field and the radius of the capillary. Based on two discretization techniques, the finite difference method (FDM) and the finite element method (FEM), the set of integrated models is transformed into a series of grid points to obtain a large system of equations. The third model is a single-cell density model involving three sets of first-order PDEs for proliferating, quiescent and necrotic cells that change over time and space in Cartesian coordinates and are regulated by different rates of nutrient consumption. The model shows that proliferative and quiescent cell growth depends on some parameter changes, with the necrotic cells emerging as the tumor core. Some numerical schemes for solving the system of equations are compared and analyzed. Simulation and computation of the discretized models are supported by the Matlab and C programming languages on a single processing unit. Numerical results and an analysis of the algorithms are presented in the form of tables, multiple graphs and multidimensional visualizations. In conclusion, the integration of the three types of mathematical modeling and the comparison of numerical performance indicate the superior tools and analysis for solving the complete magnetic drug delivery system, which has significant effects on the growth of the targeted cancer cells.
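
A stripped-down finite-difference analogue of the drug transport step is shown below: explicit integration of a 1D advection-diffusion equation for drug concentration along a vessel, with an upwind advection term and a central diffusion term. The parameter values, grid and boundary conditions are illustrative assumptions, not those of the paper's 3D cylindrical model.

```python
import numpy as np

# 1D advection-diffusion: dC/dt = D d2C/dx2 - u dC/dx  (drug concentration along a vessel)
L, nx = 0.01, 101                 # vessel segment length [m], grid points
dx = L / (nx - 1)
D, u = 1e-9, 1e-4                 # diffusivity [m^2/s] and mean blood velocity [m/s] (illustrative)
dt = 0.4 * min(dx**2 / (2*D), dx / u)   # stable explicit time step

C = np.zeros(nx)
C[0] = 1.0                        # normalized injected concentration at the inlet

for _ in range(2000):
    dCdx = (C[1:-1] - C[:-2]) / dx                    # upwind gradient (flow in +x)
    d2Cdx2 = (C[2:] - 2*C[1:-1] + C[:-2]) / dx**2     # central second difference
    C[1:-1] += dt * (D * d2Cdx2 - u * dCdx)
    C[0], C[-1] = 1.0, C[-2]                          # Dirichlet inlet, zero-gradient outlet

print(f"dt = {dt:.3e} s, concentration mid-vessel after {2000*dt:.2f} s: {C[nx//2]:.3f}")
```

The full model adds the magnetic body force and non-Newtonian rheology to the momentum equations and extends the discretization to the cylindrical 3D geometry, but the marching structure of the solver is the same.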

Keywords: mathematical modeling, visualization, PDE models, magnetic nanoparticle drug delivery model, drug release model, single cell effects, avascular tumor growth, numerical analysis

Procedia PDF Downloads 428
4556 Urban Growth Analysis Using Multi-Temporal Satellite Images, Non-stationary Decomposition Methods and Stochastic Modeling

Authors: Ali Ben Abbes, Imed Riadh Farah, Vincent Barra

Abstract:

Remotely sensed data are a significant source for monitoring and updating land use/cover databases. Nowadays, change detection in urban areas is a subject of intensive research. Timely and accurate data on the spatio-temporal changes of urban areas are therefore required. The data extracted from multi-temporal satellite images are usually non-stationary; in fact, the changes evolve in time and space. This paper proposes a methodology for change detection in urban areas by combining a non-stationary decomposition method and stochastic modeling. We consider as input to our methodology a sequence of satellite images I1, I2, ..., In at different periods (t = 1, 2, ..., n). Firstly, preprocessing of the multi-temporal satellite images (e.g., radiometric, atmospheric and geometric corrections) is applied. The systematic study of global urban expansion in our methodology can be approached in two ways: the first considers the urban area as a single object as opposed to non-urban areas (e.g., vegetation, bare soil and water), with the objective of extracting the urban mask; the second aims to obtain more detailed knowledge of the urban area by distinguishing different types of tissue within it. In order to validate our approach, we used a database of Tres Cantos, Madrid, in Spain, derived from Landsat for the period from January 2004 to July 2013 by collecting two frames per year at a spatial resolution of 25 meters. The obtained results show the effectiveness of our method.

Keywords: multi-temporal satellite image, urban growth, non-stationary, stochastic model

Procedia PDF Downloads 428
4555 3D Visualization for the Relationship of the Urban Rule and Building Form by Using CityEngine

Authors: Chin Ku, Han liang Lin

Abstract:

The purpose of this study is to visualize how rules related to urban design influence building form, using the 3D modeling software CityEngine. In order to clearly connect the goals of urban design to urban form, urban planners and designers should understand how rules affect form, especially building form. In Taiwan, the rules pertaining to urban design include traditional zoning, urban design review and building codes. However, zoning cannot precisely anticipate the resulting building form and lacks consideration of the public realm and 3D form. In addition, urban design review is conducted case by case without a comprehensive regulation plan, and the building code provides only general regulation. Therefore, these rules cannot make the urban form reach the vision or goals of urban design. Consequently, another kind of zoning, called the form-based code (FBC), has arisen. This study uses the components of FBC that pertain to the urban fabric, such as street width and block and plot size, as the variables of building form, and investigates the relationship between the rules and building form. There are three stages to this research. It starts with a field survey of Taichung City in Taiwan to derive the rule-building form relationship using cluster analysis and descriptive statistics. Second, the relationship is visualized through a parameterized and codified process in CityEngine, a procedural modeling tool that can analyze, monitor and visualize the 3D world. Last, the CityEngine result is compared with the real world to examine to what extent the model represents the real-world appearance.

Keywords: 3D visualization, CityEngine, form-based code, urban form

Procedia PDF Downloads 550
4554 The Optimization of TICSI in the Convergence Mechanism of Urban Water Management

Authors: M. Macchiaroli, L. Dolores, V. Pellecchia

Abstract:

With the recent Resolution n. 580/2019/R/idr, the Italian Regulatory Authority for Energy, Networks and Environment (ARERA) for Urban Water Management has introduced, for water managements characterized by persistent critical issues regarding the planning and organization of the service and the implementation of the interventions necessary to improve infrastructure and management quality, a new mechanism for determining tariffs: the regulatory scheme of Convergence. The aim of this regulatory scheme is to overcome the fragmented water service (the "Water Service Divided") in order to improve the stability of the local institutional structures, technical quality and contractual quality, as well as to guarantee elements of transparency for users of the service. The Convergence scheme presupposes the identification of the cost items to be considered in the tariff in parametric terms, distinguishing three possible cases according to the type of historical data available to the manager. The study focuses, in particular, on operations that have neither data on tariff revenues nor data on operating costs. In this case, the manager's constraint on revenues (VRG) is estimated on the basis of a reference benchmark and becomes the starting point for defining the structure of the tariff classes, in compliance with the TICSI provisions (Integrated Text for tariff classes, ARERA Resolution n. 665/2017/R/idr). The proposed model implements recent studies on optimization models for the definition of tariff classes in compliance with the constraints dictated by TICSI in the application of the Convergence mechanism, offering itself as a support tool for managers and the local water regulatory authority in the decision-making process.

Keywords: decision-making process, economic evaluation of projects, optimizing tools, urban water management, water tariff

Procedia PDF Downloads 118
4553 Lagrangian Approach for Modeling Marine Litter Transport

Authors: Sarra Zaied, Arthur Bonpain, Pierre Yves Fravallo

Abstract:

The continuous input of marine litter leads to its accumulation in the oceans, which causes the presence of more compact waste layers. Its spatio-temporal distribution is never homogeneous and depends mainly on the hydrodynamic characteristics of the environment and on the size and location of the waste. As part of optimizing the collection of marine plastic waste, it is important to measure and monitor its evolution over time. To this end, many research studies have been dedicated to describing the behavior of the waste in order to identify its accumulation in ocean areas. Several models have therefore been developed to understand the mechanisms behind the accumulation and displacement of marine litter. These models are able to accurately simulate the drift of the waste to study its behavior and stranding. However, these works aim to study waste behavior over a long period of time and not at the time of waste collection. This work investigates the transport of floating marine litter (FML) to provide basic information that can help in optimizing waste collection, by proposing a model for predicting its behavior during collection. The proposed study is based on a Lagrangian modeling approach that uses the main factors influencing the dynamics of the waste. The performance of the proposed method was assessed on real data collected from the Copernicus Marine Environment Monitoring Service (CMEMS). Evaluation results in the Java Sea (Indonesia) show that the proposed model can effectively predict the position and velocity of marine waste during collection.
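
A minimal Lagrangian drift step of the kind used in such particle-tracking models is sketched below. The velocity field is an analytic placeholder rather than CMEMS currents, and the windage (leeway) fraction and horizontal diffusivity are assumed values added for illustration.

```python
import numpy as np

def current(x, y, t):
    """Analytic placeholder for the ocean surface current field (m/s)."""
    return 0.2 * np.sin(2*np.pi*y/5e4), 0.1 * np.cos(2*np.pi*x/5e4)

def step(x, y, t, dt, wind=(1.0, 0.5), windage=0.02, kh=10.0, rng=np.random.default_rng(3)):
    """Advance floating-litter particles one time step (explicit Euler + random walk)."""
    u, v = current(x, y, t)
    u += windage * wind[0]                      # leeway: a small fraction of the wind speed
    v += windage * wind[1]
    dx = u*dt + np.sqrt(2*kh*dt) * rng.standard_normal(x.shape)   # horizontal diffusion
    dy = v*dt + np.sqrt(2*kh*dt) * rng.standard_normal(y.shape)
    return x + dx, y + dy

# Seed 1000 particles near a river mouth and track them for 24 h with a 10-minute step
x = np.zeros(1000); y = np.zeros(1000); dt = 600.0
for k in range(144):
    x, y = step(x, y, k*dt, dt)
print(f"mean drift after 24 h: {x.mean()/1000:.2f} km east, {y.mean()/1000:.2f} km north")
```

In an operational setting, the analytic field would be replaced by interpolated CMEMS current and wind data, and the particle positions at the planned collection time would feed the route optimization.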

Keywords: floating marine litter, lagrangian transport, particle-tracking model, wastes drift

Procedia PDF Downloads 191
4552 Comparison between Experimental Modeling and HYDRUS-2D for Nitrate Transport through a Saturated Soil Column

Authors: Mohamed Eltarabily, Abdelazim Negm, Chihiro Yoshimura

Abstract:

Recently, the pollution of groundwater from the use of nitrogenous fertilizer has been on the increase. Also, due to the increase in the area under cultivation and the regular use of fertilizer in irrigated agriculture, groundwater pollution from agricultural activities is becoming a major concern. Because of the high mobility of nitrate (NO3-) in soil, which is governed by electrostatic processes, particularly anion exclusion, nitrate can be intercepted by shallow subsurface drainage pipe systems and then discharged offsite into streams, rivers, and lakes, causing many hazards. In order to address these environmental problems associated with nitrate, a better understanding of how NO3- moves through the soil profile under flow conditions is required. In the present paper, the results of a comparative study between experimental and numerical modeling of nitrate transport through a saturated soil column are presented and analyzed. Three water flux densities (0.008, 0.007, and 0.006 m sec-1) and an N concentration of 10 mol cm-3 were used. The same concentrations were used in the simulation using HYDRUS-2D. The physical and chemical properties of the collected soil samples were determined, and the soil texture was found to be silty sand. The results showed that HYDRUS-2D can successfully predict the relative behavior of N transport in the present experiment. Nitrate concentrations reach greater depths as the water flux increases. Overall, the final concentration of NO3- in the soil was overestimated by the numerical simulation compared with the experimental column test. The column experiment is a useful tool for assessing nitrate concentrations in the soil profile.

Keywords: groundwater, nitrate leaching, HYDRUS-2D, soil column

Procedia PDF Downloads 235
4551 Augmented Reality: New Relations with the Architectural Heritage Education

Authors: Carla Maria Furuno Rimkus

Abstract:

The technologies of virtual reality and augmented reality, in combination with mobile technologies, are becoming more consolidated and more widely used each day. The increasing availability of these technologies, along with the decrease in their acquisition and maintenance costs, has favored the expansion of their use in the field of historic heritage. In this context, this article focuses on the potential of mobile applications for the dissemination of architectural heritage using augmented reality (AR) technology. From this perspective, it discusses the process of producing an application for mobile devices on the Android platform which combines geometric modeling with augmented reality and access to interactive multimedia content with cultural, social and historical information about the historic building taken as the object of study: a block with a set of buildings built in the eighteenth century, known as "Quarteirão dos Trapiches", which was modeled in 3D, coated with the original texture of its facades and displayed in AR. The paper discusses the methodological aspects of the development of this application with regard to the process and the project development tools, and presents our considerations on developing an application for the Android system focused on the dissemination of architectural heritage, in order to encourage the tourist potential of the city in a sustainable way and to contribute to the digital documentation of the city's heritage, meeting a demand from tourists visiting the city and from the professionals who work on its preservation and restoration, including architects, historians, archaeologists and museum specialists, among others.

Keywords: augmented reality, architectural heritage, geometric modeling, mobile applications

Procedia PDF Downloads 478
4550 Green Supply Chain Network Optimization with Internet of Things

Authors: Sema Kayapinar, Ismail Karaoglan, Turan Paksoy, Hadi Gokcen

Abstract:

Green supply chain management is gaining growing interest among researchers and supply chain managers. The concept of green supply chain management is to integrate environmental thinking into supply chain management. It is a systematic concept emphasizing environmental problems such as the reduction of greenhouse gas emissions, energy efficiency, recycling of end-of-life products, and the generation of solid and hazardous waste. This study presents a green supply chain network model integrated with Internet of Things applications. The Internet of Things provides precise and accurate information on end-of-life products through sensors and system devices. The forward direction consists of suppliers, plants, distribution centres and sales and collection centres, while the reverse flow includes the sales and collection centres, disassembly centre, recycling centre and disposal centre. The sales and collection centres sell the new products transhipped from the factory via the distribution centres and also receive end-of-life products according to their value level. We describe green logistics activities by presenting specific examples, including "recycling of the returned products" and "reduction of CO2 gas emissions". The different transportation choices between echelons are illustrated according to their CO2 gas emissions. The problem is formulated as a mixed integer linear programming model to solve the green supply chain problems which emerge from environmental awareness and responsibilities. The model is solved using the GAMS package. Numerical examples are presented to illustrate the efficiency of the proposed model.
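
A toy version of such a mixed-integer formulation is sketched below with PuLP, minimizing transport cost plus a CO2-emission penalty over two transport modes. The network, costs, emission factors and big-M value are invented for illustration and bear no relation to the paper's instance, which is formulated over more echelons and solved in GAMS.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

plants, dcs = ["P1", "P2"], ["D1", "D2"]
demand = {"D1": 80, "D2": 60}
capacity = {"P1": 100, "P2": 70}
modes = {"truck": {"cost": 2.0, "co2": 0.9}, "rail": {"cost": 1.4, "co2": 0.3}}  # per unit shipped
co2_price = 1.5   # hypothetical penalty per kg of CO2

prob = LpProblem("green_supply_chain_toy", LpMinimize)
x = {(p, d, m): LpVariable(f"x_{p}_{d}_{m}", lowBound=0) for p in plants for d in dcs for m in modes}
use = {(p, d, m): LpVariable(f"use_{p}_{d}_{m}", cat=LpBinary) for p in plants for d in dcs for m in modes}

# Objective: transport cost + CO2 penalty, summed over all plant-DC-mode lanes
prob += lpSum(x[k] * (modes[k[2]]["cost"] + co2_price * modes[k[2]]["co2"]) for k in x)

for d in dcs:                                  # meet demand at each distribution centre
    prob += lpSum(x[p, d, m] for p in plants for m in modes) >= demand[d]
for p in plants:                               # respect plant capacity
    prob += lpSum(x[p, d, m] for d in dcs for m in modes) <= capacity[p]
for k in x:                                    # link flow to the binary "lane used" decision (big-M)
    prob += x[k] <= 200 * use[k]

prob.solve()
print(value(prob.objective), {k: v.value() for k, v in x.items() if v.value()})
```

Because rail carries both a lower cost and a lower emission factor in this toy instance, the solver shifts flow onto rail lanes, which mirrors how the emission-weighted objective steers mode choice in the full model.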

Keywords: green supply chain optimization, internet of things, greenhouse gas emission, recycling

Procedia PDF Downloads 328
4549 Study and Conservation of Cultural and Natural Heritages with the Use of Laser Scanner and Processing System for 3D Modeling Spatial Data

Authors: Julia Desiree Velastegui Caceres, Luis Alejandro Velastegui Caceres, Oswaldo Padilla, Eduardo Kirby, Francisco Guerrero, Theofilos Toulkeridis

Abstract:

It is fundamental to conserve sites of natural and cultural heritage with any available technique or existing methodology of preservation in order to sustain them for the following generations. We propose a further approach to protect the current appearance of such sites, in which high-technology instrumentation is used to digitally preserve natural and cultural heritage, applied here in Ecuador. In this project, the use of laser technology for three-dimensional modeling is presented, achieving high accuracy in a relatively short period of time. In Ecuador, so far, there are no records of the use and processing of data obtained by this new technological trend. The importance of the project lies in the description of the methodology of the laser scanner system using the Faro Laser Scanner Focus 3D 120, the method for 3D modeling of geospatial data, and the development of virtual environments in the areas of cultural and natural heritage. In order to inform users about this technological trend in which three-dimensional models are generated, such tools have been developed so that the models can be displayed in all kinds of digital formats. The obtained 3D models demonstrate that this technology is extremely useful in these areas, while also indicating that each data campaign needs a slightly different individual procedure, starting with data capture and processing and ending with the chosen virtual environments.

Keywords: laser scanner system, 3D model, cultural heritage, natural heritage

Procedia PDF Downloads 306
4548 Heterogeneity of Soil Moisture and Its Impacts on the Mountainous Watershed Hydrology in Northwest China

Authors: Chansheng He, Zhongfu Wang, Xiao Bai, Jie Tian, Xin Jin

Abstract:

Heterogeneity of soil hydraulic properties directly affects hydrological processes at different scales. Understanding the heterogeneity of soil hydraulic properties such as soil moisture is therefore essential for modeling watershed ecohydrological processes, particularly in hard-to-access, topographically complex mountainous watersheds. This study maps spatial variations of soil moisture with an in situ observation network consisting of sampling points, zones, and tributaries, and monitors the corresponding hydrological variables of air and soil temperature, evapotranspiration, infiltration, and runoff in the Upper Reach of the Heihe River Watershed, the second largest inland river (terminal lake) basin in Northwest China, with a drainage area of over 128,000 km². Subsequently, the study uses the hydrological model SWAT (Soil and Water Assessment Tool) to simulate the effects of soil moisture heterogeneity on watershed hydrological processes. The spatial clustering method Full-Order-CLK was employed to derive five soil heterogeneity zonings (Configurations 97, 80, 65, 40, and 20) as soil input to SWAT. Results show that the simulations by the SWAT model with the spatially clustered soil hydraulic information from the field sampling data represented the soil heterogeneity much better and performed more accurately than the model using the average soil property values for each soil type derived from the coarse soil datasets. Thus, incorporating detailed field-sampled soil heterogeneity data greatly improves performance in hydrologic modeling.

Keywords: heterogeneity, soil moisture, SWAT, up-scaling

Procedia PDF Downloads 346
4547 Optimization of the Culture Medium, Incubation Period, pH and Temperatures for Maximal Dye Bioremoval Using A. fumigatus

Authors: Wafaa M. Abd El-Rahim, Magda A. El-Meleigy, Eman Refaat

Abstract:

This study deals with the optimization of the conditions affecting the formation of extracellular lignin-degrading enzymes in order to achieve maximal decolorization of Direct Violet dye by one fungal strain. In this study, the Aspergillus fumigatus fungal strain was used to produce extracellular ligninolytic enzymes for removing Direct Violet dye under different conditions of culture medium, incubation period, pH and temperature. The results indicated that the removal efficiency of A. fumigatus was enhanced by the addition of glucose and peptone to the culture medium. The addition of peptone and glucose was found to increase the decolorization activity of the fungal isolate from 51.38% to 93.74% after 4 days of incubation. The highest production of extracellular lignin-degrading enzymes was also recorded in Direct Violet dye medium supplemented with peptone and glucose. It was also found that the decolorization activity of A. fumigatus decreased gradually with increasing incubation period up to 4 days. In addition, the fungal strain can grow and produce extracellular ligninolytic enzymes, accompanied by efficient removal of Direct Violet dye, over a wide pH range of 4-8. The maximal biosynthesis of ligninolytic enzymes, accompanied by maximal removal of Direct Violet dye, was obtained at a temperature of 28°C. This indicates that the culture medium, incubation period, pH and temperature affect dye decolorization by the fungal biomass and play a role in Direct Violet dye removal alongside the enzymatic activity of A. fumigatus.

Keywords: A. fumigatus, extracellular lignin-degrading enzymes, textile dye, dye removal

Procedia PDF Downloads 278
4546 Computer Based Identification of Possible Molecular Targets for Induction of Drug Resistance Reversion in Multidrug Resistant Mycobacterium Tuberculosis

Authors: Oleg Reva, Ilya Korotetskiy, Marina Lankina, Murat Kulmanov, Aleksandr Ilin

Abstract:

Molecular docking approaches are widely used for the design of new antibiotics and for modeling the antibacterial activities of numerous ligands which bind specifically to the active centers of indispensable enzymes and/or key signaling proteins of pathogens. Widespread drug resistance among pathogenic microorganisms calls for the development of new antibiotics specifically targeting important metabolic and information pathways. A generally recognized problem is that almost all molecular targets have already been identified, and it is getting more and more difficult to design innovative antibacterial compounds to combat the drug resistance. A promising way to overcome the drug resistance problem is the induction of drug resistance reversion by supplementary medicines to improve the efficacy of conventional antibiotics. In contrast to well-established computer-based drug design, the modeling of drug resistance reversion is still in its infancy. In this work, we propose an approach to the identification of compensatory genetic variants reducing the fitness cost associated with the acquisition of drug resistance by pathogenic bacteria. The approach is based on an analysis of the population genetics of Mycobacterium tuberculosis and on the results of experimental modeling of drug resistance reversion induced by a new anti-tuberculosis drug, FS-1. The latter is an iodine-containing nanomolecular complex that passed clinical trials and was admitted as a new medicine against MDR-TB in Kazakhstan. Isolates of M. tuberculosis obtained at different stages of the clinical trials, and also from laboratory animals infected with an MDR-TB strain, were characterized by antibiotic resistance, and their genomes were sequenced by the paired-end Illumina HiSeq 2000 technology. A steady increase in sensitivity to conventional anti-tuberculosis antibiotics in the series of isolates treated with FS-1 was registered, despite the fact that the canonical drug resistance mutations identified in the genomes of these isolates remained intact. It was hypothesized that the drug resistance phenotype in M. tuberculosis requires an adjustment of the activities of many genes to compensate for the fitness cost of the drug resistance mutations. FS-1 caused an aggravation of the fitness cost and the removal of drug-resistant variants of M. tuberculosis from the population. This process caused a significant increase in the genetic heterogeneity of the M. tuberculosis population that was not observed in the positive and negative controls (infected laboratory animals left untreated or treated solely with the antibiotics). A large-scale search for linkage disequilibrium associations between the drug resistance mutations and genetic variants in other genomic loci allowed the identification of target proteins which could be influenced by supplementary drugs to increase the fitness cost of drug resistance and deprive the drug-resistant bacterial variants of their competitiveness in the population. The approach will be used to improve the efficacy of FS-1 and also for the computer-based design of new drugs to combat drug-resistant infections.
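
For reference, the basic linkage disequilibrium statistic behind such association scans can be computed directly from haplotype calls. The sketch below uses invented biallelic presence/absence data, not the M. tuberculosis genomes analysed in the study.

```python
import numpy as np

def ld_r2(a, b):
    """r^2 linkage disequilibrium between two biallelic loci given 0/1 haplotype calls."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    p_a, p_b = a.mean(), b.mean()
    d = (a * b).mean() - p_a * p_b          # D = p_AB - p_A * p_B
    return d**2 / (p_a * (1 - p_a) * p_b * (1 - p_b))

# Hypothetical presence/absence of a resistance mutation and a candidate compensatory variant
resistance = [1, 1, 1, 0, 0, 1, 0, 1, 1, 0]
candidate  = [1, 1, 0, 0, 0, 1, 0, 1, 1, 0]
print(f"r^2 = {ld_r2(resistance, candidate):.2f}")
```

A high r^2 between a resistance mutation and a variant elsewhere in the genome flags the second locus as a possible compensatory site, and hence as a candidate target for a supplementary drug.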

Keywords: complete genome sequencing, computational modeling, drug resistance reversion, Mycobacterium tuberculosis

Procedia PDF Downloads 263