Search results for: equation modeling methods
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 19332


17022 A Hybrid Traffic Model for Smoothing Traffic Near Merges

Authors: Shiri Elisheva Decktor, Sharon Hornstein

Abstract:

Highway merges and unmarked junctions are key components of any urban road network that can act as bottlenecks and create traffic disruption. Inefficient highway merges may trigger traffic instabilities such as stop-and-go waves, pose safety risks, and lead to longer journey times. These phenomena occur spontaneously if the average vehicle density exceeds a certain critical value. This study focuses on modeling the traffic using a microscopic traffic flow model. A hybrid traffic model, which combines human-driven and controlled vehicles, is assumed. The controlled vehicles obey different driving policies when approaching the merge or in the vicinity of other vehicles. We developed a co-simulation model in SUMO (Simulation of Urban Mobility), in which the human-driven cars are modeled using the Intelligent Driver Model (IDM) and the controlled cars are modeled using a dedicated controller. The scenario chosen for this study is a closed track with one merge and one exit, which could later be implemented on a scaled infrastructure in our lab setup. This will enable us to benchmark the simulation results of this study against comparable results obtained under similar conditions in the lab. The metrics chosen for assessing the effect of our algorithm on the overall traffic conditions include the average speed, wait time near the merge, and throughput after the merge, measured under different travel demand conditions (low, medium, and heavy traffic).
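As a concrete illustration of the car-following side of such a co-simulation, the IDM acceleration law used for the human-driven vehicles can be sketched as follows; the parameter values are typical textbook defaults, not the ones used in this study:

```python
import math

def idm_acceleration(v, gap, dv, v0=33.3, T=1.6, a=0.73, b=1.67, s0=2.0, delta=4):
    """Intelligent Driver Model (IDM) acceleration.

    v   : ego speed (m/s)
    gap : bumper-to-bumper distance to the leader (m)
    dv  : approach rate v - v_leader (m/s)
    Remaining arguments are typical IDM parameters (illustrative values).
    """
    # Desired dynamic gap: jam distance + time-headway term + braking interaction
    s_star = s0 + v * T + v * dv / (2 * math.sqrt(a * b))
    return a * (1 - (v / v0) ** delta - (s_star / gap) ** 2)

# A vehicle closing fast on a slower leader brakes; a lone vehicle accelerates.
braking = idm_acceleration(v=25.0, gap=30.0, dv=5.0)
free_road = idm_acceleration(v=0.0, gap=1000.0, dv=0.0)
```

In a merge scenario the same law is evaluated every simulation step for each human-driven car, while the controlled cars follow their dedicated policy instead.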

Keywords: highway merges, traffic modeling, SUMO, driving policy

Procedia PDF Downloads 90
17021 Modeling the Time-Dependent Biodistribution of 177Lu-Labeled Somatostatin Analogues for Targeted Radiotherapy of Neuroendocrine Tumors Using Compartmental Analysis

Authors: Mahdieh Jajroudi

Abstract:

The main purpose of this study was to develop a pharmacokinetic model for the neuroendocrine tumor therapy agent 177Lu-DOTATATE in nude mice bearing AR42J rat pancreatic tumors, in order to investigate and evaluate the behavior of the complex. Compartmental analysis permits the mathematical separation of tissues and organs to determine the concentration of activity in each fraction of interest. Biodistribution studies are onerous and troublesome to perform in humans, but such data can be obtained easily in rodents. A physiologically based pharmacokinetic model for scaling up activity concentration in particular organs versus time was developed. The mathematical model uses physiological parameters including organ volumes, blood flow rates, and vascular permeabilities; the compartments (organs) are connected anatomically. This allows the use of scale-up techniques to forecast the distribution of a new complex in each human organ. The concentration of the radiopharmaceutical in various organs was measured at different times, and the temporal behavior of the biodistribution of 177Lu-labeled somatostatin analogues was modeled and plotted as a function of time. Conclusion: The variation of pharmaceutical concentration in all organs is characterized by a summation of six to nine exponential terms, which approximates our experimental data with a precision better than 1%.
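The sum-of-exponentials form of such fitted time-activity curves can be sketched as follows; the coefficients, rates, and the two-term (rather than six-to-nine-term) model are purely illustrative, and the 177Lu half-life is approximate:

```python
import math

LU177_HALF_LIFE_H = 6.65 * 24  # 177Lu physical half-life in hours (approximate)

def activity_curve(t_hours, terms):
    """Time-activity curve as a sum of exponential terms, as produced by
    compartmental analysis: A(t) = sum_i c_i * exp(-lambda_i * t),
    additionally corrected for physical decay of the radionuclide.
    `terms` is a list of (coefficient, biological rate 1/h) pairs (illustrative).
    """
    lam_phys = math.log(2) / LU177_HALF_LIFE_H
    biological = sum(c * math.exp(-lam * t_hours) for c, lam in terms)
    return biological * math.exp(-lam_phys * t_hours)

# Hypothetical biexponential tumor uptake/washout: a fast pool and a slow pool
tumor = [(0.6, 0.30), (0.4, 0.01)]
a0 = activity_curve(0.0, tumor)    # normalized activity at injection
a24 = activity_curve(24.0, tumor)  # activity after one day
```

Fitting the (c_i, lambda_i) pairs to measured organ data is what yields the six-to-nine-term sums reported in the abstract.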

Keywords: biodistribution modeling, compartmental analysis, 177Lu labeled somatostatin analogues, neuroendocrine tumors

Procedia PDF Downloads 348
17020 Militating Factors Against Building Information Modeling Adoption in Quantity Surveying Practice in South Africa

Authors: Kenneth O. Otasowie, Matthew Ikuabe, Clinton Aigbavboa, Ayodeji Oke

Abstract:

The quantity surveying (QS) profession is one of the professions in the construction industry, saddled with the responsibility of measuring the quantities of materials as well as the workmanship required to get work done. This responsibility is vital to the success of a construction project, as it determines whether a project will be completed on time, within budget, and up to the required standard. However, the practice has been criticised repeatedly for failing to execute this responsibility accurately. The need to reduce errors, inaccuracies, and omissions has made the adoption of modern technologies such as building information modeling (BIM) inevitable. Nevertheless, there are barriers to the adoption of BIM in QS practice in South Africa (SA), and this study aims to investigate them. A survey design was adopted. A total of one hundred and fifteen (115) questionnaires were administered to quantity surveyors in Gauteng Province, SA, and ninety (90) were returned and found suitable for analysis. Collected data were analysed using percentages, mean item scores, standard deviations, one-sample t-tests, and the Kruskal-Wallis test. The findings show that lack of BIM expertise, lack of government enforcement, resistance to change, and absence of client demand for BIM are the most significant barriers to the adoption of BIM in QS practice. As a result, this study recommends that training on BIM technology be prioritised and that government take the lead in BIM adoption in the country, particularly in public projects.

Keywords: barriers, BIM, quantity surveying practice, South Africa

Procedia PDF Downloads 85
17019 Economic Forecasting Analysis for Solar Photovoltaic Application

Authors: Enas R. Shouman

Abstract:

Economic development together with population growth is leading to a continuous increase in energy demand. At the same time, growing global concern for the environment is driving a decrease in the use of conventional energy sources and an increase in the use of renewable ones. The objective of this study is to present worldwide market trends for solar photovoltaic (PV) technology and to apply economic methods for PV financial analysis, on the basis of expectations for the expansion of PV in many applications. In the course of this study, detailed information about the current PV market was gathered and analyzed to find the factors influencing the penetration of PV energy. The methodology rests on five economic financial analysis methods often used by investment decision makers: payback analysis, net benefit analysis, savings-to-investment ratio, adjusted internal rate of return, and life-cycle cost. The results of this study may serve as a marketing guide that helps the diffusion of PV energy. The study showed that PV cost is economically reliable: consumers pay higher purchase prices for PV system installation but receive lower electricity bills.
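Two of the five financial methods, payback analysis and the savings-to-investment ratio, can be sketched as follows; the system cost, annual saving, lifetime, and discount rate are hypothetical figures for illustration, not values from the study:

```python
def payback_period(investment, annual_saving):
    """Simple payback: years needed to recover the initial PV investment."""
    return investment / annual_saving

def savings_to_investment_ratio(investment, annual_saving, years, discount_rate):
    """SIR: present value of electricity-bill savings over the PV lifetime
    divided by the initial investment, discounted at `discount_rate`."""
    pv_savings = sum(annual_saving / (1 + discount_rate) ** t
                     for t in range(1, years + 1))
    return pv_savings / investment

# Hypothetical rooftop PV system: $6,000 installed, saving $600/year in bills
pb = payback_period(6000, 600)                          # years to break even
sir = savings_to_investment_ratio(6000, 600, 25, 0.05)  # > 1 means cost-effective
```

An SIR above 1 indicates that discounted lifetime savings exceed the purchase price, which is the sense in which the abstract calls PV "economically reliable".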

Keywords: photovoltaic, financial methods, solar energy, economics, PV panel

Procedia PDF Downloads 96
17018 Research of Seepage Field and Slope Stability Considering Heterogeneous Characteristics of Waste Piles: A Less Costly Way to Reduce High Leachate Levels and Avoid Accidents

Authors: Serges Mendomo Meye, Li Guowei, Shen Zhenzhong, Gan Lei, Xu Liqun

Abstract:

Due to their high piles and large volumes, the complex layering of waste, and the high water level of leachate, landfills easily produce environmental pollution and slope instability. It is therefore of great significance to study the heterogeneous seepage field and stability of landfills. This paper focuses on the heterogeneous characteristics of landfill piles and analyzes the seepage field and slope stability of the landfill using statistical and numerical analysis methods. The calculated results are compared with field measurements and literature data to verify the reliability of the model, which may provide a basis for the design and the safe, eco-friendly operation of the landfill. The main innovations are as follows: (1) The saturated-unsaturated seepage equation of heterogeneous soil is derived theoretically. The heterogeneous landfill is regarded as composed of infinitely many layers of homogeneous waste, and a method for establishing the heterogeneous seepage model is proposed. The formation law of the stagnant water level of heterogeneous landfills is then studied. It is found that the maximum stagnant water level is higher when the heterogeneous seepage characteristics are considered, which harms the stability of the landfill. (2) Considering the heterogeneity of the unit weight and strength characteristics of waste, a method of establishing a heterogeneous stability model is proposed and extended to a three-dimensional stability study. It is found that the distribution of heterogeneous characteristics has a great influence on the stability of the landfill slope. During the operation and management of the landfill, the stability of the reservoir bank should be considered alongside the capacity of the landfill.

Keywords: heterogeneous characteristics, leachate levels, saturated-unsaturated seepage, seepage field, slope stability

Procedia PDF Downloads 230
17017 BIM-Based Tool for Sustainability Assessment and Certification Documents Provision

Authors: Taki Eddine Seghier, Mohd Hamdan Ahmad, Yaik-Wah Lim, Samuel Opeyemi Williams

Abstract:

The assessment of building sustainability against a specific green benchmark and the preparation of the documents required to receive a green building certification are both major challenges for a green building design team. However, this labor-intensive and time-consuming process can take advantage of available Building Information Modeling (BIM) features such as material take-off and scheduling. Furthermore, the workflow can be automated to track potentially achievable credit points and provide rating feedback for several design options, by using integrated Visual Programming (VP) to handle the parameters stored within the BIM model. Hence, this study proposes a BIM-based tool that uses the Green Building Index (GBI) rating system requirements as its input case to evaluate building sustainability in the design stage of the building project life cycle. The tool comprises two key models: first, a model for the extraction, calculation, and classification of achievable credit points in a green template; second, a model for generating the documents required for green building certification. The tool was validated on a BIM model of a residential building, serving as proof of concept that building sustainability assessment for GBI certification can be automatically evaluated and documented through BIM.

Keywords: green building rating system, GBRS, building information modeling, BIM, visual programming, VP, sustainability assessment

Procedia PDF Downloads 312
17016 A Proposed Model of E-Marketing Service-Oriented Architecture (E-MSOA)

Authors: Hussein Moselhy, Islam Salam

Abstract:

Several challenges hinder the implementation of e-marketing systems, such as the high cost of information systems infrastructure and maintenance, as well as their unavailability within the institution. There is also no system that supports all programming languages and platforms, and integration is lacking between these systems on the one hand and operating systems and web browsers on the other. Moreover, no customer relationship management system is available that recognizes customers' desires and takes them into consideration while performing e-marketing functions. Service-oriented architecture (SOA) therefore emerged as one of the most important techniques and methodologies for building systems that integrate with various operating systems, platforms, and other technologies, enabling data exchange among different applications. SOA applies distributed computing concepts and has demonstrated success in achieving system requirements through web services; it also provides an appropriate design for services to use different web services in supporting the requirements of business processes and software users. In a service-oriented environment, web services are deployed on the web as independent services that can be accessed without knowledge of the nature of the programs and systems within. This paper presents a proposal for a new model (E-MSOA) that contributes to the application of e-marketing methods, integrating the marketing mix elements to improve marketing efficiency, and applies it in the educational city of one of the Egyptian sectors.

Keywords: service-oriented architecture, electronic commerce, virtual retailing, unified modeling language

Procedia PDF Downloads 415
17015 Frailty Models for Modeling Heterogeneity: Simulation Study and Application to Quebec Pension Plan

Authors: Souad Romdhane, Lotfi Belkacem

Abstract:

Actuarial analyses of lifetimes have traditionally relied on models accounting only for observable risk factors. Within this context, the Cox proportional hazards (CPH) model is commonly used to assess the effects of observable covariates, such as gender, age, and smoking habits, on the hazard rates. These covariates may fail to fully account for the true lifetime distribution, owing to the existence of another random variable (frailty) that is usually ignored. The aim of this paper is to examine the shared frailty issue in the Cox proportional hazards model by including two different parametric forms of frailty in the hazard function; four estimation methods are used to fit them. The performance of the parameter estimates is assessed and compared between the classical Cox model and these frailty models, first through a real-life data set from the Quebec Pension Plan and then through a more general simulation study. This performance is investigated in terms of the bias of the point estimates and their empirical standard errors, in both the fixed and random effect parts. Both the simulation and the real data set studies showed differences between the classical Cox model and the shared frailty model.
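A minimal simulation of the shared gamma frailty idea, in which a group-level multiplier with mean one and variance theta scales a baseline hazard, might look like this; the parameters are illustrative, and the exponential baseline is a simplification of the fitted models:

```python
import random
import statistics

def simulate_frailty_lifetimes(n_groups, group_size, base_rate, theta, seed=0):
    """Simulate lifetimes under a shared gamma frailty model: within each
    group the hazard is Z * base_rate, where Z ~ Gamma(1/theta, theta) has
    mean 1 and variance theta. theta = 0 recovers the classical
    (frailty-free) exponential model. All parameters are illustrative."""
    rng = random.Random(seed)
    times = []
    for _ in range(n_groups):
        z = rng.gammavariate(1 / theta, theta) if theta > 0 else 1.0
        times.extend(rng.expovariate(z * base_rate) for _ in range(group_size))
    return times

no_frailty = simulate_frailty_lifetimes(500, 2, base_rate=0.1, theta=0.0)
frail = simulate_frailty_lifetimes(500, 2, base_rate=0.1, theta=1.0)
# Frailty leaves individual hazards correct on average but inflates the
# dispersion of observed lifetimes -- the heterogeneity the paper models.
```

Comparing estimates fitted to `no_frailty` versus `frail` data is the simulation-study pattern the abstract describes, here reduced to its generative step.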

Keywords: life insurance-pension plan, survival analysis, risk factors, cox proportional hazards model, multivariate failure-time data, shared frailty, simulation study

Procedia PDF Downloads 343
17014 Convex Restrictions for Outage Constrained MU-MISO Downlink under Imperfect Channel State Information

Authors: A. Preetha Priyadharshini, S. B. M. Priya

Abstract:

In this paper, we consider the MU-MISO downlink scenario under imperfect channel state information (CSI). The main issue under imperfect CSI is to keep the outage probability of each user's achievable rate below a given threshold level. Such rate outage constraints present significant analytical challenges. Many probabilistic methods have been used to solve the transmit optimization problem under imperfect CSI. Here, two convex restriction methods, one based on a decomposition-based large deviation inequality and one on a Bernstein-type inequality, are used to handle the optimization problem under imperfect CSI. These methods achieve improved output quality with lower complexity and provide safe, tractable approximations of the original rate outage constraints. Based on implementations of these methods, performance has been evaluated in terms of feasible rate and average transmission power. The simulation results show that both methods offer significantly improved outage quality and lower computational complexity.

Keywords: imperfect channel state information, outage probability, multiuser- multi input single output, channel state information

Procedia PDF Downloads 804
17013 The Analysis of Brain Response to Auditory Stimuli through EEG Signals’ Non-Linear Analysis

Authors: H. Namazi, H. T. N. Kuan

Abstract:

Brain activity can be measured by acquiring and analyzing EEG signals from an individual; in fact, the human brain's response to external and internal stimuli is mapped in its EEG signals. Over the years, methods such as the Fourier transform, wavelet transform, and empirical mode decomposition have been used to analyze EEG signals in order to find the effect of stimuli, especially external stimuli. But each of these methods has weak points in the analysis of EEG signals. For instance, the Fourier transform and wavelet transform are linear signal analysis methods, which makes them ill-suited for EEG signals, which are nonlinear. In this research, we analyze the brain response to auditory stimuli by extracting various measures from EEG signals using software developed by our research group. The measures used are Jeffrey's measure, fractal dimension, and the Hurst exponent. The results of these analyses are useful not only for a fundamental understanding of the brain's response to auditory stimuli but also provide very good recommendations for clinical purposes.
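One of the measures used, the Hurst exponent, can be estimated with a simple rescaled-range (R/S) analysis along the following lines; this is a minimal sketch, not the software developed by the research group:

```python
import math
import random

def hurst_rs(series, min_chunk=8):
    """Estimate the Hurst exponent by rescaled-range (R/S) analysis:
    compute the mean R/S over non-overlapping windows of doubling size,
    then take the slope of log(R/S) versus log(window size)."""
    n = len(series)
    log_sizes, log_rs = [], []
    size = min_chunk
    while size <= n // 2:
        rs_chunk = []
        for start in range(0, n - size + 1, size):
            chunk = series[start:start + size]
            mean = sum(chunk) / size
            dev = [x - mean for x in chunk]
            cum, s = [], 0.0
            for d in dev:          # cumulative deviation from the mean
                s += d
                cum.append(s)
            r = max(cum) - min(cum)
            sd = math.sqrt(sum(d * d for d in dev) / size)
            if sd > 0:
                rs_chunk.append(r / sd)
        if rs_chunk:
            log_sizes.append(math.log(size))
            log_rs.append(math.log(sum(rs_chunk) / len(rs_chunk)))
        size *= 2
    mx = sum(log_sizes) / len(log_sizes)
    my = sum(log_rs) / len(log_rs)
    num = sum((x - mx) * (y - my) for x, y in zip(log_sizes, log_rs))
    den = sum((x - mx) ** 2 for x in log_sizes)
    return num / den  # least-squares slope = Hurst estimate

rng = random.Random(1)
white_noise = [rng.gauss(0, 1) for _ in range(4096)]
h = hurst_rs(white_noise)  # near 0.5 for an uncorrelated signal
```

An uncorrelated signal yields an estimate near 0.5 (simple R/S estimators carry a known finite-length bias), while persistent signals push the estimate toward 1; applied to EEG, the exponent summarizes long-range correlation in the recording.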

Keywords: auditory stimuli, brain response, EEG signal, fractal dimension, Hurst exponent, Jeffrey’s measure

Procedia PDF Downloads 522
17012 Standalone Docking Station with Combined Charging Methods for Agricultural Mobile Robots

Authors: Leonor Varandas, Pedro D. Gaspar, Martim L. Aguiar

Abstract:

One of the biggest concerns in the field of agriculture is the energy efficiency of the robots that will perform agricultural activities and their charging methods. In this paper, two different charging methods for agricultural standalone docking stations are presented that take into account variables such as field size and its irregularities, the nature of the work the robot will perform, and deadlines that have to be respected, among others. The station's features also depend on the orchard, the season, and the battery type, with its technical specifications and cost. The first charging base method relies on wireless charging, offering more benefits for small fields. The second relies on battery replacement, being more suitable for large fields and thus avoiding stops for recharging. Among the many methods of charging a battery, CC-CV (constant current, constant voltage) was considered the most appropriate for both its simplicity and effectiveness. The choice of battery for agricultural purposes is of utmost importance. While the most commonly used battery is the Li-ion battery, this study also discusses the use of a new type of graphene-based battery with 45% more capacity than the Li-ion one. A Battery Management System (BMS) is applied for battery balancing. Combined, these approaches showed promise for improving much technical agricultural work, not just plantation and harvesting but also techniques to prevent harmful events like plagues and weeds, or even to reduce crop time and cost.
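The CC-CV scheme mentioned above runs a constant-current phase until the cell reaches its voltage limit, then holds that voltage while the current tapers. A toy simulation of this profile, with a linearized open-circuit voltage and illustrative cell parameters (not datasheet values), might look like:

```python
def simulate_cc_cv(capacity_ah, cc_current_a, v_limit=4.2, v_start=3.0,
                   r_internal=0.05, cutoff_a=0.1, dt_h=0.01):
    """Toy CC-CV charge profile for a single cell. The cell is modeled as a
    linear open-circuit voltage in state of charge plus an IR term; all
    parameters are illustrative. Returns a list of (soc, current, voltage)."""
    soc, current, log = 0.0, cc_current_a, []
    while current > cutoff_a and soc < 1.0:
        ocv = v_start + (v_limit - v_start) * soc
        terminal_v = min(ocv + current * r_internal, v_limit)
        if terminal_v >= v_limit:
            # CV phase: hold v_limit, let the current taper off
            current = max((v_limit - ocv) / r_internal, 0.0)
        soc = min(soc + current * dt_h / capacity_ah, 1.0)
        log.append((soc, current, terminal_v))
    return log

profile = simulate_cc_cv(capacity_ah=2.0, cc_current_a=1.0)
# Charging ends when the tapering current drops below the cutoff,
# with the cell nearly full and the voltage never exceeding the limit.
```

The taper phase is what lets CC-CV top the cell off safely: current falls as the open-circuit voltage approaches the limit, which is also the behavior a BMS supervises during balancing.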

Keywords: agricultural mobile robot, charging methods, battery replacement method, wireless charging method

Procedia PDF Downloads 133
17011 Transport Emission Inventories and Medical Exposure Modeling: A Missing Link for Urban Health

Authors: Frederik Schulte, Stefan Voß

Abstract:

The adverse effects of air pollution on public health are an increasingly vital problem for urban planning in many parts of the world. The issue is addressed from various angles and by distinct disciplines. Epidemiological studies model the relative increase of numerous diseases in response to an increment of different forms of air pollution. A significant share of air pollution in urban regions is related to transport emissions, which are often measured and stored in emission inventories. However, most approaches in transport planning, engineering, and the operational design of transport activities are restricted to general emission limits for specific air pollutants and do not consider more nuanced exposure models. We conduct an extensive literature review on exposure models and emission inventories used to study the health impact of transport emissions. Furthermore, we review methods applied in both domains and use emission inventory data from transportation hubs such as ports, airports, and urban traffic for an in-depth analysis of public health impacts deploying medical exposure models. The results reveal specific urban health risks related to transport emissions that may improve urban planning for environmental health by providing insights into actual health effects instead of referring only to general emission limits.

Keywords: emission inventories, exposure models, transport emissions, urban health

Procedia PDF Downloads 373
17010 Numerical Analysis of Shear Crack Propagation in a Concrete Beam without Transverse Reinforcement

Authors: G. A. Rombach, A. Faron

Abstract:

Crack formation and growth in reinforced concrete members are, in many cases, the cause of the collapse of technical structures. Such serious failures impair structural behavior and can also endanger property and people, so an intensive investigation of crack propagation is indispensable. Numerical methods are being developed to analyze crack growth in an element and to detect fracture failure at an early stage. For reinforced concrete components, however, further research is required in the analysis of shear cracks. This paper presents numerical simulations and continuum mechanical modeling of bending shear crack propagation in a three-dimensional reinforced concrete beam without transverse reinforcement. The analysis provides a further understanding of crack growth and the redistribution of internal forces in concrete members. As a numerical method to map discrete cracks, the extended finite element method (XFEM) is applied, and the crack propagation is compared with a smeared crack approach using concrete damage plasticity. For validation, the crack patterns of real experiments are compared with the results of the different finite element models. The evaluation is based on single-span beams under bending. With this analysis, it is possible to predict the fracture behavior of concrete members.

Keywords: concrete damage plasticity, crack propagation, extended finite element method, fracture mechanics

Procedia PDF Downloads 111
17009 Chebyshev Polynomials Related to Fibonacci and Lucas Polynomials

Authors: Vandana N. Purav

Abstract:

Fibonacci and Lucas polynomials are special cases of Chebyshev polynomials. There are two types of Chebyshev polynomials: Chebyshev polynomials of the first kind and Chebyshev polynomials of the second kind, the latter of which can be derived from the former. A Chebyshev polynomial is a polynomial of degree n and satisfies a second-order homogeneous differential equation. We consider the difference equations that relate the Chebyshev, Fibonacci, and Lucas polynomials. Chebyshev polynomials of the second kind thus play an important role in finding the recurrence relations with Fibonacci and Lucas polynomials.
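All three families satisfy a second-order difference equation of the same shape, a_n(x) = p(x)·a_{n-1}(x) + q(x)·a_{n-2}(x), differing only in p, q, and the initial terms. Under the standard conventions U_0 = 1, U_1 = 2x; F_1 = 1, F_2 = x; L_0 = 2, L_1 = x, this can be made concrete as:

```python
def poly_sequence(p_coeff, q_coeff, a0, a1, n, x):
    """Evaluate the n-th term of the second-order recurrence
    a_k(x) = p(x) * a_{k-1}(x) + q(x) * a_{k-2}(x) at the point x.
    Chebyshev, Fibonacci, and Lucas polynomials are all instances."""
    prev, cur = a0, a1
    for _ in range(n - 1):
        prev, cur = cur, p_coeff(x) * cur + q_coeff(x) * prev
    return cur if n >= 1 else a0

def chebyshev_U(n, x):     # U_0 = 1, U_1 = 2x, U_n = 2x*U_{n-1} - U_{n-2}
    return poly_sequence(lambda t: 2 * t, lambda t: -1, 1, 2 * x, n, x)

def fibonacci_poly(n, x):  # F_1 = 1, F_2 = x, F_n = x*F_{n-1} + F_{n-2}
    return poly_sequence(lambda t: t, lambda t: 1, 1, x, n - 1, x)

def lucas_poly(n, x):      # L_0 = 2, L_1 = x, L_n = x*L_{n-1} + L_{n-2}
    return poly_sequence(lambda t: t, lambda t: 1, 2, x, n, x)
```

Evaluating at x = 1 recovers the Fibonacci and Lucas numbers, and identities such as L_n(x) = F_{n+1}(x) + F_{n-1}(x) can be checked numerically from the same recurrence.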

Keywords:

Procedia PDF Downloads 353
17008 Modeling Slow Crack Growth under Thermal and Chemical Effects for Fitness Predictions of High-Density Polyethylene Material

Authors: Luis Marquez, Ge Zhu, Vikas Srivastava

Abstract:

High-density polyethylene (HDPE) is one of the most commonly used thermoplastic polymer materials for water and gas pipelines. Slow crack growth failure is a well-known phenomenon in high-density polyethylene; it causes brittle failure well below the yield point with no obvious warning sign, and the failure of transportation pipelines can have catastrophic environmental and economic consequences. Non-destructive testing to predict slow crack growth failure behavior is the primary preventative measure employed by the pipeline industry, but it is often costly and time-consuming. Phenomenological slow crack growth models are useful for predicting slow crack growth behavior in polymer materials because they can evaluate slow crack growth under different temperature and loading conditions. We developed a quantitative method to assess slow crack growth behavior in high-density polyethylene pipeline material under different thermal conditions, based on existing physics-based phenomenological models. We are also developing an experimental protocol and a quantitative model that can address slow crack growth behavior under different chemical exposure conditions, to improve the safety, reliability, and resilience of HDPE-based pipeline infrastructure.
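A phenomenological slow crack growth law of the kind referred to above is often written as a power law in the stress intensity factor with an Arrhenius temperature shift; the sketch below uses illustrative (not measured) HDPE parameters:

```python
import math

R_GAS = 8.314  # J/(mol*K)

def scg_rate(stress_intensity, temp_k, A=1e-6, m=4.0, Q=90e3, T_ref=293.15):
    """Phenomenological slow crack growth rate:
    da/dt = A * K^m * exp(-(Q/R) * (1/T - 1/T_ref)),
    a Paris-type power law in the stress intensity factor K with an
    Arrhenius temperature shift. A, m, and Q are illustrative values,
    not fitted HDPE constants."""
    arrhenius = math.exp(-(Q / R_GAS) * (1.0 / temp_k - 1.0 / T_ref))
    return A * stress_intensity ** m * arrhenius

rate_cold = scg_rate(0.5, 283.15)  # below the reference temperature
rate_ref = scg_rate(0.5, 293.15)   # at the reference temperature
rate_hot = scg_rate(0.5, 313.15)   # above the reference temperature
```

The Arrhenius factor is what lets a model calibrated at one temperature be extrapolated to service conditions: growth slows below the reference temperature and accelerates above it, the thermal dependence the study quantifies.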

Keywords: mechanics of materials, physics-based modeling, civil engineering, fracture mechanics

Procedia PDF Downloads 189
17007 Resilient Analysis as an Alternative to Conventional Seismic Analysis Methods for the Maintenance of a Socioeconomical Functionality of Structures

Authors: Sara Muhammad Elqudah, Vigh László Gergely

Abstract:

Catastrophic events, such as earthquakes, are sudden, short, and devastating, threatening lives, demolishing futures, and causing huge economic losses. Current seismic analysis and design standards are based on life safety levels, where only some residual strength and stiffness are left in the structure, leaving it beyond economical repair. Consequently, it has become necessary to introduce and implement the concept of resilient design. Resilient design is about designing for ductility over time by resisting, absorbing, and recovering from the effects of a hazard in an appropriate and efficient manner, while maintaining the functionality of the structure in the aftermath of the incident. Resilient analysis is mainly based on fragility, vulnerability, and functionality curves; a resilience index is eventually generated from these curves, and the higher this index, the better the performance of the structure. In this paper, the seismic performance of a simple two-story reinforced concrete building located in a moderate seismic region has been evaluated using the conventional seismic analysis methods, namely linear static analysis, response spectrum analysis, and pushover analysis, and the results of these methods are compared to those of the resilient analysis. Results highlight that the resilient analysis was the most effective method for generating a more ductile and functional structure from a socio-economic perspective, in comparison to the standard seismic analysis methods.
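The resilience index described above can be computed as the normalized area under the functionality curve between the incident and the end of the control window; a sketch with a hypothetical recovery curve:

```python
def resilience_index(functionality, t_incident, t_control):
    """Resilience index as the normalized area under the functionality
    curve Q(t) over the control window [t_incident, t_control], using the
    trapezoidal rule on sampled points. `functionality` is a list of
    (time, Q) pairs with Q in [0, 1]; an index of 1 means no loss."""
    pts = [(t, q) for t, q in functionality if t_incident <= t <= t_control]
    area = 0.0
    for (t0, q0), (t1, q1) in zip(pts, pts[1:]):
        area += 0.5 * (q0 + q1) * (t1 - t0)
    return area / (t_control - t_incident)

# Hypothetical event: full function, sudden 60% loss at t = 10, linear
# repair back to full function by t = 40, control window ending at t = 100.
curve = [(0, 1.0), (10, 1.0), (10.001, 0.4), (40, 1.0), (100, 1.0)]
r = resilience_index(curve, t_incident=10, t_control=100)
```

A structure that loses less functionality, or recovers it faster, keeps more area under Q(t) and therefore scores a higher index, which is the sense in which a higher index means better performance.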

Keywords: conventional analysis methods, functionality, resilient analysis, seismic performance

Procedia PDF Downloads 90
17006 Optimal Geothermal Borehole Design Guided By Dynamic Modeling

Authors: Hongshan Guo

Abstract:

Ground-source heat pumps (GSHPs) provide stable and reliable heating and cooling when designed properly. The confounding effect of the borehole depth on a GSHP system, however, is rarely taken into account in optimization: the borehole depth is usually determined before the selection of the corresponding system components and any subsequent optimization of the GSHP system. The depth of the borehole matters because the shallower the borehole, the larger the fluctuation of the near-borehole soil temperature. This can lead to fluctuations in the coefficient of performance (COP) of the GSHP system in the long term when the heating and cooling demand is large. Yet the deeper the boreholes are drilled, the higher the drilling cost and the operational expenses for circulation. A controller was developed that reads different building load profiles, optimizes for the smallest cost and temperature fluctuation at the borehole wall, and provides the borehole depth as its output. Given the nonlinear dynamics of the GSHP system, model predictive control (MPC) was found to be more feasible than a conventional optimal control formulation, because both the trajectory during the iterations and the final output could be recorded and compared. Aside from a few scenarios with different weighting factors, the resulting system costs were verified against the literature and reports and found to be relatively accurate, while the temperature fluctuation at the borehole wall was also within an acceptable range. It was therefore determined that MPC is adequate for optimizing both the investment and the system performance for various outputs.

Keywords: geothermal borehole, MPC, dynamic modeling, simulation

Procedia PDF Downloads 276
17005 Teaching Research Methods at the Graduate Level Utilizing Flipped Classroom Approach: An Action Research Study

Authors: Munirah Alaboudi

Abstract:

This paper discusses a research project carried out with 12 first-year graduate students enrolled in a research methods course prior to undertaking a graduate thesis during the academic year 2019. The research was designed with the objective of creating a research methods course structure that embraces an individualized and activity-based approach to learning in a highly engaging group environment. This approach targeted innovating the traditional lecture-based, theoretical research methods format, in which students reported less engagement and limited learning. This study utilized action research methodology to develop a different approach to research methods instruction, with student performance indicators and feedback collected periodically to assess the new teaching method. Learning was achieved through the flipped classroom approach: students learned the material at home, and classroom activities were designed to implement and experiment with the newly acquired information under the guidance of the course instructor. In-class learning was practiced through a series of activities based on different research methods. With the goal of encouraging student engagement, a wide range of activities was utilized, including workshops, role play, mind-mapping, presentations, and peer evaluations. Data were collected through an open-ended qualitative questionnaire to establish whether, and to what degree, students were engaged in the material they were learning, and to test their mastery of the concepts discussed. Analysis of the data presented positive results, as around 91% of the students reported feeling more engaged with the active learning experience and with learning research by "actually doing research, not just reading about it". The students expressed feeling invested in the process of their learning as they saw their research "gradually come to life" through peer learning and practice during workshops. Based on the results of this study, the research methods course structure was successfully remodeled and continues to be delivered.

Keywords: research methods, higher education instruction, flipped classroom, graduate education

Procedia PDF Downloads 86
17004 Methods for Solving Identification Problems

Authors: Fadi Awawdeh

Abstract:

In this work, we highlight the key concepts in using semigroup theory as a methodology for constructing efficient formulas for solving inverse problems. The proposed method depends on results concerning integral equations. The experimental results show the potential and limitations of the method and suggest directions for future work.

Keywords: identification problems, semigroup theory, methods for inverse problems, scientific computing

Procedia PDF Downloads 464
17003 Effect of the Orifice Plate Specifications on Coefficient of Discharge

Authors: Abulbasit G. Abdulsayid, Zinab F. Abdulla, Asma A. Omer

Abstract:

On the ground that the orifice plate is relatively inexpensive, requires very little maintenance and only calibrated during the occasion of plant turnaround, the orifice plate has turned to be in a real prevalent use in gas industry. Inaccuracy of measurement in the fiscal metering stations may highly be accounted to be the most vital factor for mischarges in the natural gas industry in Libya. A very trivial error in measurement can add up a fast escalating financial burden to the custodian transactions. The unaccounted gas quantity transferred annually via orifice plates in Libya, could be estimated in an extent of multi-million dollars. As the oil and gas wealth is the solely source of income to Libya, every effort is now being exerted to improve the accuracy of existing orifice metering facilities. Discharge coefficient has become pivotal in current researches undertaken in this regard. Hence, increasing the knowledge of the flow field in a typical orifice meter is indispensable. Recently and in a drastic pace, the CFD has become the most time and cost efficient versatile tool for in-depth analysis of fluid mechanics, heat and mass transfer of various industrial applications. Getting deeper into the physical phenomena lied beneath and predicting all relevant parameters and variables with high spatial and temporal resolution have been the greatest weighing pros counting for CFD. In this paper, flow phenomena for air passing through an orifice meter were numerically analyzed with CFD code based modeling, giving important information about the effect of orifice plate specifications on the discharge coefficient for three different tappings locations, i.e., flange tappings, D and D/2 tappings compared with vena contracta tappings. Discharge coefficients were paralleled with discharge coefficients estimated by ISO 5167. 
The influences of the orifice bore thickness, plate thickness, bevel angle, and the perpendicularity and buckling of the orifice plate were all investigated. An orifice meter with a pipe diameter of 2 in, a beta ratio of 0.5, and a Reynolds number of 91,100 was taken as the model case. The results showed that the discharge coefficients were highly sensitive to variations in the plate specifications and that, in all cases, the discharge coefficients for D and D/2 tappings were very close to those for vena contracta tappings, which are regarded as the ideal arrangement. In general, the standard equation of ISO 5167, by which the discharge coefficient is calculated, cannot capture the variation of the plate specifications, so further thorough consideration is still needed.
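The role the discharge coefficient plays in fiscal metering can be sketched with the ISO 5167 mass-flow equation; this is a minimal illustration, and the numerical values (Cd, differential pressure, air density) are chosen for demonstration only, not taken from the study.

```python
import math

def orifice_mass_flow(cd, beta, d, dp, rho, eps=1.0):
    """ISO 5167 mass-flow equation for an orifice meter.

    cd   : discharge coefficient (dimensionless)
    beta : diameter ratio d/D
    d    : orifice bore diameter [m]
    dp   : differential pressure across the tappings [Pa]
    rho  : upstream fluid density [kg/m^3]
    eps  : expansibility factor (1.0 for incompressible flow)
    """
    area = math.pi / 4.0 * d ** 2
    return cd / math.sqrt(1.0 - beta ** 4) * eps * area * math.sqrt(2.0 * dp * rho)

# Illustrative case: 2 in pipe with beta = 0.5 (so d = 25.4 mm), air.
qm = orifice_mass_flow(cd=0.6, beta=0.5, d=0.0254, dp=1000.0, rho=1.2)
print(f"mass flow = {qm:.4f} kg/s")
```

Because the inferred flow is directly proportional to Cd at a fixed differential pressure, any plate imperfection that shifts Cd by even a fraction of a percent translates one-for-one into a fiscal metering error.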

Keywords: CFD, discharge coefficients, orifice meter, orifice plate specifications

Procedia PDF Downloads 107
17002 Predicting Loss of Containment in a Surface Pipeline Using Computational Fluid Dynamics and a Supervised Machine Learning Model to Improve Process Safety in Oil and Gas Operations

Authors: Muhammmad Riandhy Anindika Yudhy, Harry Patria, Ramadhani Santoso

Abstract:

Loss of containment is the primary hazard that process safety management is concerned with in the oil and gas industry. Escalation to more serious consequences all begins with the loss of containment: oil and gas released through leakage or spillage from the primary containment can result in pool fires, jet fires, and even explosions when it meets the various ignition sources present in operations. The heart of process safety management is therefore avoiding loss of containment and mitigating its impact through the implementation of safeguards. The most effective safeguard in this case is an early detection system that alerts Operations to take action before a potential loss of containment occurs. Such a detection system is especially valuable for a long surface pipeline, which is naturally difficult to monitor at all times and is exposed to multiple causes of loss of containment, from natural corrosion to illegal tapping. Based on prior research, detecting loss of containment accurately in a surface pipeline is difficult, and the trade-off between cost-effectiveness and high accuracy has been the main issue when selecting a traditional detection method. The current best-performing method, the Real-Time Transient Model (RTTM), requires closely spaced pressure, flow, and temperature (PVT) measurement points along the pipeline to be accurate. Installing multiple adjacent PVT sensors along the pipeline is expensive and hence generally not viable from an economic standpoint. A conceptual approach combining mathematical modeling using computational fluid dynamics with a supervised machine learning model has shown promising results for predicting leakage in the pipeline. Mathematical modeling is used to generate simulation data, which are then used to train the leak detection and localization models. Mathematical models and simulation software have also been shown to reproduce experimental data with very high accuracy.
While a supervised machine learning model requires a large training dataset to achieve good accuracy, mathematical modeling has been shown to be able to generate the required datasets, justifying the application of data analytics to the development of model-based leak detection systems for petroleum pipelines. This paper reviews key leak detection strategies for oil and gas pipelines, with a specific focus on crude oil applications, and highlights the opportunities for using data analytics tools and mathematical modeling to develop a robust real-time leak detection and localization system for surface pipelines. A case study is also presented.
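The simulate-then-learn pipeline described above can be sketched as follows; here a synthetic dataset stands in for the CFD simulation output, and the feature names and the choice of a random-forest classifier are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Stand-ins for simulated PVT sensor readings along the pipeline:
# pressure drop between stations, flow imbalance (in - out), temperature.
leak = rng.integers(0, 2, n)               # 1 = leak present in the simulation
dp   = rng.normal(50, 5, n) + 15 * leak    # leaks increase the pressure drop
imb  = rng.normal(0, 0.5, n) + 2.0 * leak  # leaks unbalance the mass flow
temp = rng.normal(25, 3, n)                # largely uninformative here
X = np.column_stack([dp, imb, temp])

# Train on "simulated" data, then evaluate on a held-out split.
X_tr, X_te, y_tr, y_te = train_test_split(X, leak, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"held-out accuracy: {acc:.2f}")
```

In a real system the feature matrix would come from RTTM-style or CFD transient simulations at many leak locations and sizes, and a regression head could additionally localize the leak along the pipeline.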

Keywords: pipeline, leakage, detection, AI

Procedia PDF Downloads 172
17001 Comparison of Two Different Methods for Peptide Synthesis

Authors: Klaudia Chmielewska, Krystyna Dzierzbicka, Iwona Inkielewicz-Stepniak

Abstract:

Carnosine, an endogenous dipeptide consisting of β-alanine and L-histidine, has a variety of functions, including antioxidant and antiglycation activity and reduction of metal-ion toxicity. It has therefore been proposed as a therapeutic agent for many pathological states, although its therapeutic utility is limited by rapid enzymatic cleavage. To overcome this limitation, there is a need to create new derivatives that are less susceptible to hydrolysis while preserving the therapeutic effect. This poster compares the efficiency of two peptide synthesis methods: (1) the mixed anhydride method with isobutyl chloroformate and N-methylmorpholine (NMM) and (2) carbodiimide-mediated coupling via an appropriate condensing reagent, here CDI. The methods were used to obtain dipeptides that are derivatives of carnosine. The dipeptides were obtained as methyl esters, and their structures will be confirmed by 1H NMR, 13C NMR, MS, and elemental analysis. They will then be analyzed for their antioxidant properties in comparison to carnosine.

Keywords: carnosine, method, peptide, synthesis

Procedia PDF Downloads 140
17000 Transient Response of Elastic Structures Subjected to a Fluid Medium

Authors: Helnaz Soltani, J. N. Reddy

Abstract:

The presence of a fluid medium interacting with a structure can lead to failure of the structure. Since developing efficient computational models for fluid-structure interaction (FSI) problems has a broad impact on realistic problems encountered in the aerospace, ship, and oil and gas industries, among others, there is an increasing need for methods to investigate the effect of the fluid domain on the structural response. A coupled finite element formulation of problems involving FSI is an accurate way to predict the response of structures in contact with a fluid medium. This study proposes a finite element approach for studying the transient response of structures interacting with a fluid medium. Since beams and plates are the fundamental elements of almost any structure, the developed method is applied to beam and plate benchmark problems to demonstrate its efficiency. The formulation combines various structural theories with the solid-fluid interface boundary condition, which represents the interaction between the solid and fluid regimes. Three different beam theories and three different plate theories are considered to model the solid medium, and the Navier-Stokes equations serve as the governing equations of the fluid domain. For each theory, a coupled set of equations is derived, and the element matrices of both regimes are calculated by Gaussian quadrature. The main feature of the proposed methodology is to model the fluid domain as an added mass, i.e., an external distributed force due to the presence of the fluid. We validate the accuracy of this formulation by means of numerical examples. Since the formulation presented in this study covers several theories in the literature, the applicability of our approach is independent of the structure's geometry.
The effects of varying parameters such as the structure thickness ratio, the fluid density, and the immersion depth are studied using numerical simulations. The results indicate that the maximum vertical deflection of the structure is affected considerably by the presence of a fluid medium.
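The added-mass idea can be illustrated with the simplest possible case: the natural frequency of a single-mode structural oscillator drops when the surrounding fluid contributes extra inertia. The stiffness and mass values below are purely illustrative, not taken from the study.

```python
import math

def natural_frequency(k, m):
    """Undamped natural frequency [rad/s] of a single-DOF oscillator."""
    return math.sqrt(k / m)

k = 1.0e5        # modal stiffness [N/m]
m_struct = 10.0  # structural modal mass [kg]
m_added = 4.0    # fluid added mass [kg]; grows with fluid density and depth

w_dry = natural_frequency(k, m_struct)            # in vacuo
w_wet = natural_frequency(k, m_struct + m_added)  # immersed
print(f"dry: {w_dry:.1f} rad/s, wet: {w_wet:.1f} rad/s")
# The immersed frequency is lower by the factor sqrt(m_s / (m_s + m_a)),
# which is the basic mechanism behind the deflection changes reported above.
```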

Keywords: beam and plate, finite element analysis, fluid-structure interaction, transient response

Procedia PDF Downloads 551
16999 Artificial Intelligence Approach to Water Treatment Processes: Case Study of Daspoort Treatment Plant, South Africa

Authors: Olumuyiwa Ojo, Masengo Ilunga

Abstract:

The artificial neural network (ANN) has broken the bounds of conventional programming, which is essentially "garbage in, garbage out", through its ability to mimic the human brain. Its capacity to adopt, adapt, adjust, evaluate, learn, and recognize the relationships, behavior, and patterns in the data sets administered to it is modeled on human reasoning and learning mechanisms, and it enables the ANN to handle and represent more complex problems than conventional programming can. This study therefore aimed at modeling the wastewater treatment process in order to accurately diagnose water control problems for effective treatment. A staged ANN model development and evaluation methodology was employed: a source data analysis stage involving statistical analysis of the modeling data, followed by a model development stage in which candidate ANN architectures were developed and then evaluated using a historical data set. The model was developed using historical data obtained from the Daspoort Wastewater Treatment Plant, South Africa. The resultant design dimensions and model for the wastewater treatment plant provided good results. The parameters considered were temperature, pH value, colour, turbidity, amount of solids, acidity, total hardness, Ca hardness, Mg hardness, and chloride.
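A minimal sketch of this kind of ANN model is shown below, using scikit-learn's MLPRegressor on synthetic data standing in for the Daspoort plant records; the feature set and the target variable are illustrative assumptions, not the study's dataset.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 500

# Stand-ins for influent measurements: temperature, pH, turbidity, solids.
X = rng.normal(size=(n, 4))
# Hypothetical effluent quality index driven by the inputs plus noise.
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + 0.5 * X[:, 3] + rng.normal(0, 0.1, n)

# Scale the inputs, then fit a small feed-forward network.
Xs = StandardScaler().fit_transform(X)
ann = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                   random_state=0).fit(Xs, y)
r2 = ann.score(Xs, y)
print(f"training R^2: {r2:.3f}")
```

A real deployment would evaluate candidate architectures on held-out historical data, as the staged methodology above describes, rather than on the training set.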

Keywords: ANN, artificial neural network, wastewater treatment, model, development

Procedia PDF Downloads 134
16998 The Relationship between Proximity to Sources of Industrial-Related Outdoor Air Pollution and Children Emergency Department Visits for Asthma in the Census Metropolitan Area of Edmonton, Canada, 2004/2005 to 2009/2010

Authors: Laura A. Rodriguez-Villamizar, Alvaro Osornio-Vargas, Brian H. Rowe, Rhonda J. Rosychuk

Abstract:

Introduction/Objectives: The Census Metropolitan Area of Edmonton (CMAE) has important industrial emissions to the air from the Industrial Heartland Alberta (IHA) to the northeast and the coal-fired power plants (CFPP) to the west. The objective of the study was to explore the presence of clusters of children's asthma ED visits in the areas around the IHA and the CFPP. Methods: Retrospective data on children's asthma ED visits were collected at the dissemination area (DA) level for children between 2 and 14 years of age living in the CMAE between April 1, 2004, and March 31, 2010. We conducted a spatial analysis of disease clusters around putative sources with count (ecological) data using descriptive, hypothesis-testing, and multivariable modeling analyses. Results: The mean crude rate of asthma ED visits was 9.3/1,000 children per year during the study period. The circular spatial scan test for cases and events identified a cluster of children's asthma ED visits in the DA where the CFPP are located, in the Wabamun area. No clusters were identified around the IHA area. The multivariable models suggest that there is a significant decline in the risk of children's asthma ED visits as distance increases around the CFPP area, and this effect is modified in the SE direction (mean angle 125.58 degrees), where the risk increases with distance. In contrast, the regression models for the IHA suggest that there is a significant increase in risk as distance increases around the IHA area, and this effect is modified in the SW direction (mean angle 216.52 degrees), where the risk increases at shorter distances. Conclusions: Different methods for detecting clusters of disease consistently suggested the existence of a cluster of children's asthma ED visits around the CFPP but not around the IHA within the CMAE.
These results are probably explained by the direction of air pollutant dispersion caused by the predominant and subdominant wind directions at each site. Using different approaches to detect disease clusters is valuable for better understanding the presence, shape, direction, and size of disease clusters around pollution sources.
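The distance-decay relation underlying the multivariable models can be sketched as follows; the counts, populations, and distances are synthetic (a Poisson rate that falls off with distance from a hypothetical source), purely to show the shape of the analysis, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Dissemination areas at increasing distance [km] from a hypothetical source.
distance = np.linspace(1, 20, 40)
population = np.full_like(distance, 1000.0)

# Synthetic visit rate declining exponentially with distance.
true_rate = 0.02 * np.exp(-0.1 * distance)
cases = rng.poisson(true_rate * population)

# Log-linear fit: log(rate) ~ a + b * distance; b < 0 indicates decay,
# i.e., a risk excess concentrated near the putative source.
rate = np.maximum(cases, 0.5) / population  # guard against log(0)
b, a = np.polyfit(distance, np.log(rate), 1)
print(f"fitted slope per km: {b:.3f}")
```

The study's models additionally include a direction term (wind-driven anisotropy), which would enter such a regression as an interaction between distance and bearing from the source.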

Keywords: air pollution, asthma, disease cluster, industry

Procedia PDF Downloads 272
16997 Integration of Gravity and Seismic Methods in the Geometric Characterization of a Dune Reservoir: Case of the Zouaraa Basin, NW Tunisia

Authors: Marwa Djebbi, Hakim Gabtni

Abstract:

Gravity is a continuously advancing method that has become a mature technology for geological studies. Increasingly, it has been used to complement and constrain traditional seismic data, and even as the only tool for obtaining information about the subsurface; in some regions the seismic data, if available, are of poor quality and hard to interpret. Such is the case for the current study area. The Nefza zone is part of the Tellian fold-and-thrust belt domain in the northwest of Tunisia. It is essentially made of a pile of allochthonous units resulting from a major Neogene tectonic event, and its tectonic and stratigraphic development has always been a subject of controversy. Considering the geological and hydrogeological importance of this area, a detailed interdisciplinary study was conducted integrating geology, seismic, and gravity techniques. The interpretation of the gravity data allowed the delimitation of the dune reservoir and the identification of the regional lineaments contouring the area. It revealed three gravity lows corresponding to the Zouara and Ouchtata dunes, separated by a positive gravity axis following the Ain Allega_Aroub Er Roumane axis. The Bouguer gravity map illustrated the compartmentalization of the Zouara dune into two depressions separated by a NW-SE anomaly trend; this configuration was confirmed by the vertical derivative map, which showed the individualization of two depressions with slightly different anomaly values. The horizontal gravity gradient magnitude was computed to identify the geological features present in the study area. The latter indicated the presence of NE-SW parallel folds along the major Atlasic direction; NW-SE and E-W trends were also identified. Tracing the gradient maxima confirmed this direction through the presence of NE-SW faults, mainly the Ghardimaou_Cap Serrat accident.
The quality of the available seismic sections and the absence of borehole data in the region, except for a few hydraulic wells that have been drilled and that show the heterogeneity of the dune substratum, required gravity modeling of this challenging area in order to characterize the geometry of the dune reservoir and determine the stratigraphic series underlying these deposits. For more detailed and accurate results, the scale of study will be reduced in future research, and a more concise method will be employed: the 4D microgravity survey. This approach is an extension of the gravity method whose fourth dimension is time. It will allow continuous, repeated monitoring of fluid movement in the subsurface at the microgal (µGal) scale, where the gravity effect results from the monthly variation of the dynamic groundwater level, which correlates with rainfall over different periods.
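The kind of forward model used in gravity interpretation can be sketched for the simplest body, a buried sphere of anomalous density; the depth, radius, and density contrast below are illustrative values, not parameters of the Zouaraa basin model.

```python
import numpy as np

G = 6.674e-11  # gravitational constant [m^3 kg^-1 s^-2]

def sphere_gz(x, depth, radius, drho):
    """Vertical gravity anomaly [mGal] of a buried sphere along a profile.

    x     : horizontal offsets from the sphere's center [m]
    depth : depth to the sphere's center [m]
    radius: sphere radius [m]
    drho  : density contrast [kg/m^3] (negative for a low-density body)
    """
    mass = 4.0 / 3.0 * np.pi * radius ** 3 * drho
    gz = G * mass * depth / (x ** 2 + depth ** 2) ** 1.5
    return gz * 1e5  # convert m/s^2 -> mGal

# A low-density body (e.g. loose dune sand over denser substratum)
# produces a gravity low centered over the body.
x = np.linspace(-2000, 2000, 401)
gz = sphere_gz(x, depth=500.0, radius=200.0, drho=-300.0)
print(f"peak anomaly: {gz.min():.3f} mGal at x = {x[np.argmin(gz)]:.0f} m")
```

Inversion works the other way: depth, size, and density contrast are adjusted until such a forward response matches the observed Bouguer profile, which is what the planned modeling of the dune reservoir amounts to.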

Keywords: 3D gravity modeling, dune reservoir, heterogeneous substratum, seismic interpretation

Procedia PDF Downloads 282
16996 SMART: Solution Methods with Ants Running by Types

Authors: Nicolas Zufferey

Abstract:

Ant algorithms are well-known metaheuristics which have been widely used for two decades. In most of the literature, an ant is a constructive heuristic able to build a solution from scratch; however, other types of ant algorithms have recently emerged, so the discussion is not limited to the common framework of constructive ant algorithms. Generally, at each generation of an ant algorithm, each ant builds a solution step by step by adding elements to it. Each choice is based on the greedy force (also called the visibility, the short-term profit, or the heuristic information) and the trail system (a central memory which collects historical information about the search process). Usually, all the ants of the population have the same characteristics and behaviors. In contrast, this paper proposes a new type of ant metaheuristic, namely SMART (Solution Methods with Ants Running by Types). It relies on the use of different populations of ants, where each population has its own personality.
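The construction step common to the ant algorithms discussed above can be sketched as follows: each candidate element is chosen with probability proportional to trail^alpha times visibility^beta. The weights and values are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

def choose_element(trail, visibility, alpha=1.0, beta=2.0):
    """One construction step: sample the next solution element with
    probability proportional to trail**alpha * visibility**beta."""
    weights = trail ** alpha * visibility ** beta
    probs = weights / weights.sum()
    return rng.choice(len(trail), p=probs)

# Four candidate elements: element 2 has both a strong trail (historical
# information in the central memory) and high visibility (greedy force).
trail      = np.array([1.0, 1.0, 8.0, 1.0])
visibility = np.array([0.5, 1.0, 2.0, 0.5])

picks = [choose_element(trail, visibility) for _ in range(1000)]
print(f"element 2 chosen in {picks.count(2)} of 1000 steps")
```

In a SMART-style scheme, each ant population would use its own (alpha, beta) "personality", biasing some populations toward the trail memory and others toward the greedy force.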

Keywords: ant algorithms, evolutionary procedures, metaheuristics, optimization, population-based methods

Procedia PDF Downloads 350
16995 Geochemical and Spatial Distribution of Minerals in the Tailings of IFE/IJESA Gold Mine Zone, Nigeria

Authors: Oladejo S. O, Tomori W. B, Adebayo A. O

Abstract:

The main objective of this research is to identify the geochemical and mineralogical characteristics of the unexplored tailings around the gold deposit region using spatial statistics and map modeling. Physicochemical parameters such as pH, redox potential (Eh), electrical conductivity (EC), cation exchange capacity (CEC), total organic carbon (TOC), total organic matter (TOM), residual humidity (RH), and particle size were determined for both the mine drains and the tailing samples using standard methods. The physicochemical parameters of the tailings ranged over pH 6.0-7.3, Eh -16 to 95 mV, EC 49-156 µS/cm, RH 0.20-2.60%, CEC 3.64-6.45 cmol/kg, TOC 3.57-18.62%, and TOM 6.15-22.93%. The geochemical oxide composition was identified using proton-induced X-ray emission, and the results indicated that SiO2>Al2O3>Fe2O3>TiO2>K2O>MgO>CaO>Na2O>P2O5>MnO>Cr2O3>SrO. The major mineralogical components in the tailing samples were determined by quantitative X-ray diffraction using the Rietveld method. Geostatistical relationships among the known points were determined using ArcGIS 10.2 software to interpolate mineral concentrations across the study area. The Rietveld method gave an overall quartz content of 73.73-92.76%, ilmenite 0.38-4.77%, the kaolinite group 3.19-20.83%, and muscovite 0.77-11.70%, with traces of other minerals. The high percentage of quartz indicates a sandy environment with loose binding sites.
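The spatial interpolation carried out in ArcGIS can be sketched with a plain inverse-distance-weighting (IDW) routine; the coordinates and concentrations below are synthetic illustrations, not measured tailings values.

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0, eps=1e-12):
    """Inverse-distance-weighted estimate at each query point."""
    out = np.empty(len(xy_query))
    for i, q in enumerate(xy_query):
        d = np.linalg.norm(xy_known - q, axis=1)
        if d.min() < eps:                 # query coincides with a sample
            out[i] = values[np.argmin(d)]
            continue
        w = 1.0 / d ** power
        out[i] = np.sum(w * values) / np.sum(w)
    return out

# Synthetic sample sites (x, y in metres) and, say, SiO2 content [%].
sites = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
sio2 = np.array([74.0, 80.0, 88.0, 92.0])

est = idw(sites, sio2, np.array([[50.0, 50.0], [0.0, 0.0]]))
print(est)
```

IDW is the simplest of the interpolators ArcGIS offers; a geostatistical treatment would instead fit a variogram and krige, which also yields an uncertainty surface.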

Keywords: tailings, geochemical, mineralogy, spatial

Procedia PDF Downloads 47
16994 Modeling Optimal Lipophilicity and Drug Performance in Ligand-Receptor Interactions: A Machine Learning Approach to Drug Discovery

Authors: Jay Ananth

Abstract:

The drug discovery process currently requires numerous years of clinical testing, as well as substantial funding, for a single drug to earn FDA approval. Even for drugs that make it this far in the process, there is a very slim chance of receiving FDA approval, which creates detrimental hurdles to drug accessibility. To minimize these inefficiencies, numerous studies have implemented computational methods, although few computational investigations have focused on a crucial feature of drugs: lipophilicity. Lipophilicity is a physical attribute of a compound that measures its solubility in lipids and is a determinant of drug efficacy. This project leverages artificial intelligence to predict the impact of a drug’s lipophilicity on its performance by accounting for factors such as binding affinity and toxicity. The model predicted lipophilicity and binding affinity in the validation set with high R² scores of 0.921 and 0.788, respectively, while also being applicable to a variety of target receptors. The results showed a strong positive correlation between lipophilicity and both binding affinity and toxicity. The model helps in both drug development and discovery, providing pharmaceutical companies with recommended lipophilicity levels for drug candidates as well as a rapid assessment of early-stage drugs prior to any testing, eliminating a significant amount of the time and resources that currently restrict drug accessibility.
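The reported lipophilicity-affinity correlation can be illustrated on synthetic data; the descriptor values and the strength of the relation below are assumptions for demonstration, not the study's dataset or model.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300

# Hypothetical compound library: logP (lipophilicity) and a binding
# affinity score with a built-in positive dependence on logP plus noise.
logp = rng.normal(2.5, 1.0, n)
affinity = 0.8 * logp + rng.normal(0, 0.5, n)

r = np.corrcoef(logp, affinity)[0, 1]
print(f"Pearson r(logP, affinity) = {r:.2f}")
```

A positive correlation of this kind is the quantitative basis for recommending a target lipophilicity window: pushing logP up buys affinity, but, as the abstract notes, the same trend holds for toxicity, so the optimum is a trade-off rather than a maximum.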

Keywords: drug discovery, lipophilicity, ligand-receptor interactions, machine learning, drug development

Procedia PDF Downloads 86
16993 Reliability of the Estimate of Earthwork Quantity Based on 3D-BIM

Authors: Jaechoul Shin, Juhwan Hwang

Abstract:

When the BIM method is applied to civil engineering, particularly to free-form structures, a comparatively high rate of construction productivity can be expected, as it is in building engineering. In this research, we analyzed quantity calculation errors by carrying out BIM-based 3D modeling quantity surveys of earthwork and of bridge construction (e.g., a PSC-I type segmental girder bridge and an integrated bridge of steel I-girders with an inverted-T bent cap), NATM (New Austrian Tunneling Method) tunnel construction, retaining wall construction, and culvert construction. We confirmed the high reliability of the BIM-based method for structural work, in which the errors ranged between -6% and +5%. In particular, rock-type earthwork quantity calculation errors in the range of -14% to +13% revealed the problems of the existing 2D-CAD-based quantity calculation and showed how it can be improved, demonstrating the benefit and applicability of the BIM method in civil engineering. The routine method for earthwork quantities shows an error tolerance comparable to the negligible one found for structural work, but the large errors in rock-type quantities show that the reliability of 2D-based volume calculation can be a problem. The quantity of earthwork estimated with 3D-BIM therefore has better reliability than the routine method. Considering the benefits of integrating information across the design, construction, and maintenance levels, the introduction of BIM design in civil engineering was confirmed to be both feasible and effective.
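The routine 2D quantity calculation that 3D-BIM is compared against can be illustrated with the classic average-end-area formula; the station spacing and cross-section areas below are made up for the example.

```python
def average_end_area_volume(areas, spacing):
    """Routine earthwork volume: V = sum over segments of
    spacing * (A_i + A_{i+1}) / 2, from cross sections taken at a
    fixed interval along the alignment."""
    return sum(spacing * (a1 + a2) / 2.0
               for a1, a2 in zip(areas, areas[1:]))

# Cross-section cut areas [m^2] taken every 20 m along the alignment.
areas = [120.0, 135.0, 150.0, 140.0]
volume = average_end_area_volume(areas, spacing=20.0)
print(f"earthwork volume: {volume:.0f} m^3")
```

Because this method linearly interpolates between stations, it misses terrain and rock-boundary variation between cross sections; a 3D-BIM model integrates over the actual surfaces, which is consistent with the rock-type quantities showing the largest divergence from the 2D result.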

Keywords: BIM, 3D modeling, 3D-BIM, quantity of earthwork

Procedia PDF Downloads 426