Search results for: molecular docking and simulation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6846


486 Physics-Based Earthquake Source Models for Seismic Engineering: Analysis and Validation for Dip-Slip Faults

Authors: Percy Galvez, Anatoly Petukhin, Paul Somerville, Ken Miyakoshi, Kojiro Irikura, Daniel Peter

Abstract:

Physics-based dynamic rupture modelling is necessary for estimating parameters such as rupture velocity and slip rate function that are important for ground motion simulation but poorly resolved by observations, e.g. by seismic source inversion. In order to generate a large number of physically self-consistent rupture models, whose rupture process is consistent with the spatio-temporal heterogeneity of past earthquakes, we use multicycle simulations under a heterogeneous rate-and-state (RS) friction law for a 45° dip-slip fault. We performed a parametrization study by fully dynamic rupture modeling, and then a set of spontaneous source models was generated over a large magnitude range (Mw > 7.0). In order to validate the rupture models, we compare the scaling relations of the modeled rupture area S, average slip Dave, and slip asperity area Sa vs. seismic moment Mo with similar scaling relations from source inversions. Ground motions were also computed from our models; their peak ground velocities (PGV) agree well with the GMPE values. We also obtained good agreement of the permanent surface offset values with empirical relations. From the heterogeneous rupture models, we analyzed parameters that are critical for ground motion simulations, i.e. distributions of slip, slip rate, rupture initiation points, rupture velocities, and source time functions. We studied cross-correlations between them and with the friction weakening distance Dc, the only initial heterogeneity parameter in our modeling. The main findings are: (1) high slip-rate areas coincide with, or are located on an outer edge of, the large slip areas; (2) ruptures tend to initiate in small-Dc areas; and (3) high slip-rate areas correlate with areas of small Dc, large rupture velocity, and short rise time.
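The kind of scaling-relation check described above can be sketched in a few lines. The moment and area values below are hypothetical stand-ins, not the paper's data; magnitude is computed with the standard Hanks-Kanamori relation Mw = (log10 Mo - 9.05)/1.5, and self-similar scaling predicts a log-log slope of rupture area vs. moment near 2/3:

```python
import numpy as np

# Hypothetical rupture-model outputs: seismic moment Mo (N*m) and rupture area S (km^2).
M0 = np.array([4.0e19, 1.1e20, 3.5e20, 9.0e20, 2.8e21])  # assumed values
S = np.array([9.0e2, 1.8e3, 3.9e3, 7.5e3, 1.6e4])        # assumed values

# Moment magnitude from seismic moment (Hanks & Kanamori relation).
Mw = (np.log10(M0) - 9.05) / 1.5

# Fit log10(S) = slope * log10(Mo) + b; self-similar scaling gives slope ~ 2/3.
slope, intercept = np.polyfit(np.log10(M0), np.log10(S), 1)
print(f"Mw range: {Mw.min():.2f}-{Mw.max():.2f}, fitted slope: {slope:.2f}")
```

With real model output, the fitted slope and intercept would then be compared against the source-inversion scaling relations.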

Keywords: earthquake dynamics, strong ground motion prediction, seismic engineering, source characterization

Procedia PDF Downloads 139
485 Study of Morning-Glory Spillway Structure in Hydraulic Characteristics by CFD Model

Authors: Mostafa Zandi, Ramin Mansouri

Abstract:

Spillways are among the most important hydraulic structures of dams, ensuring the stability of the dam and downstream areas during floods. The morning-glory spillway is a common type for discharging overflow water behind dams; these spillways are typically constructed in dams with small reservoirs. In this research, the hydraulic flow characteristics of a morning-glory spillway are investigated with a CFD model. Two-dimensional unsteady RANS equations were solved numerically using the Finite Volume Method. The PISO scheme was applied for velocity-pressure coupling. The most widely used two-equation turbulence models, k-ε and k-ω, were chosen to model the Reynolds shear stress term. The power-law scheme was used for discretization of the momentum, k, and ε equations. The VOF method (geometric reconstruction algorithm) was adopted for interface simulation. The results show that a fine computational grid, a velocity-inlet condition at the flow input boundary, and a pressure-outlet condition at boundaries in contact with air provide the best possible results. The standard wall function was chosen for near-wall treatment, and the standard k-ε turbulence model gave the results most consistent with experiments. As the jet approaches the end of the basin, the differences between the numerical and experimental results increase. The lower profile of the water jet was found to be less sensitive than the upper profile. The pressure tests also showed that the numerical pressure values differ considerably from the experimental results at lower drop numbers. In summary, the characteristics of the complex flows over a morning-glory spillway were studied numerically using a RANS solver. A grid study showed that the numerical results of a 57512-node grid had the best agreement with the experimental values. The preferred downstream channel length was 1.5 m, and the standard k-ε turbulence model produced the best results for the morning-glory spillway. The numerical free-surface profiles followed the theoretical equations very well.

Keywords: morning-glory spillway, CFD model, hydraulic characteristics, wall function

Procedia PDF Downloads 66
484 Assessment of Cellular Metabolites and Impedance for Early Diagnosis of Oral Cancer among Habitual Smokers

Authors: Ripon Sarkar, Kabita Chaterjee, Ananya Barui

Abstract:

Smoking is one of the leading causes of oral cancer. Cigarette smoke affects various cellular parameters and alters the molecular metabolism of cells. With prolonged exposure to cigarette smoke, epithelial cells lose their cytoskeletal structure, membrane integrity, and cellular polarity, which subsequently initiates epithelial-to-mesenchymal transition. Smoke exposure changes normal cellular metabolic activity, induces oxidative stress, and enhances the formation of reactive oxygen species (ROS). Excessive ROS and the associated oxidative stress are considered a driving force behind alterations in cellular phenotype, polarity distribution, and mitochondrial metabolism. Noninvasive assessment of such parameters plays an essential role in developing a routine screening system for early diagnosis of oral cancer. Electric cell-substrate impedance sensing (ECIS) is one such method, applied for detection of cellular membrane impedance, which can be correlated with cell membrane integrity. The present study explores the alteration in cellular impedance, along with the expression of cell-polarity molecules and cytoskeleton distribution, in the oral epithelial cells of habitual smokers, and correlates the outcome with that of clinically diagnosed oral leukoplakia and oral squamous cell carcinoma patients. A total of 80 subjects were categorized into four study groups: nonsmokers (NS), cigarette smokers (CS), oral leukoplakia (OLPK), and oral squamous cell carcinoma (OSCC). Cytoskeleton distribution was analyzed by staining of actin filaments, and ROS generation was measured using an assay kit following the standard protocol. Cell impedance was measured by ECIS at different frequencies. Expression of E-cadherin and protease-activated receptor (PAR) proteins was observed by immunofluorescence. The actin filament distribution is well organized in the NS group; however, the distribution pattern was grossly altered in the CS, OLPK, and OSCC groups.
ROS generation was low in NS and increased progressively towards OSCC. The expression of E-cadherin and the change in cellular electrical impedance across the study groups indicated the hallmarks of cancer progression from NS to OSCC: expression of E-cadherin and PAR protein, as well as cell impedance, decreased from NS to CS and further to OSCC. Oral epithelial cells normally exhibit apico-basal polarity; with cancer progression, these cells lose their characteristic polarity distribution. In this study, the expression of polarity molecules and the ECIS observations indicate such an altered polarity pattern in the smoker group. Overall, the present study monitored the alterations in intracellular ROS generation, cell metabolic function, and membrane integrity in the oral epithelial cells of cigarette smokers. The study thus has clinical significance and may help in developing a noninvasive technique for early diagnosis of oral cancer among susceptible individuals.
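As a toy illustration only (not the study's actual measurement model), the frequency-dependent impedance magnitude of a cell-covered electrode can be sketched with a simple series resistance-capacitance model, where a degraded epithelial barrier lowers the resistive term; all resistance and capacitance values below are assumed:

```python
import numpy as np

# Toy series R-C model of an ECIS electrode: |Z| = sqrt(R^2 + (1/(omega*C))^2).
# A compromised cell barrier lowers R, and hence lowers |Z|. Values are assumed.
def impedance_magnitude(R, C, freq_hz):
    """Impedance magnitude (ohms) of a series R-C circuit at the given frequencies."""
    omega = 2 * np.pi * np.asarray(freq_hz, dtype=float)
    return np.sqrt(R**2 + (1.0 / (omega * C)) ** 2)

freqs = [100.0, 1000.0, 10000.0]                                 # measurement frequencies (Hz)
healthy = impedance_magnitude(R=5000.0, C=1e-6, freq_hz=freqs)   # intact barrier: high R
degraded = impedance_magnitude(R=1000.0, C=1e-6, freq_hz=freqs)  # compromised barrier: low R
print(healthy)
print(degraded)
```

Real ECIS analysis fits more detailed electrode-cell models, but the sketch captures why impedance falls as barrier integrity is lost.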

Keywords: cigarette smoking, early oral cancer detection, electric cell-substrate impedance sensing, noninvasive screening

Procedia PDF Downloads 164
483 A Study of Topical and Similarity of Sebum Layer Using Interactive Technology in Image Narratives

Authors: Chao Wang

Abstract:

Under the rapid innovation of information technology, the media play a very important role in the dissemination of information, and each generation engages with them through entirely different analogies. The involvement of narrative images, moreover, opens up more possibilities for narrative text. "Images" are manufactured through the processes of aperture, camera shutter, and photosensitive development; recorded and stamped on paper or displayed on a computer screen, they are concretely saved. They exist in different forms as files, data, or evidence, the ultimate appearance of events. Through the interface of media and network platforms and the particular visual field of the viewer, a kind of bodily space exists and extends outward, as thin as a sebum layer, extremely soft and delicate yet full of real tension. The physical space of the sebum layer confuses the fact that physical objects exist and needs to be established under a perceived consensus; as at the scene, the existing concepts and boundaries of physical perception are blurred. The physical simulation of the sebum layer shapes a "topical-similarity" immersion, leading contemporary social-practice communities, groups, and network users into a kind of illusion without presence, i.e. a non-real illusion. From the investigation and discussion of the literature, the variability characteristics of time in digital movie editing and production (for example, slicing, rupture, setting, and resetting) are analyzed. The interactive eBook has a unique interactivity of "waiting-greeting" and "expectation-response" that makes the operation of the image narrative structure functionally open to more interpretations. Works of digital editing and interactive technology are combined, and the concept and results are further analyzed. Through discussion and analysis of movies, interactive art, and practical cases, it is shown that after the digitization of interventional imaging and interactive technology, real events remain linked and their relationship to media handling cannot be severed.
Audiences need to think more rationally about the authenticity of the texts carried by images.

Keywords: sebum layer, topical and similarity, interactive technology, image narrative

Procedia PDF Downloads 382
482 Analysis of Road Network Vulnerability Due to Merapi Volcano Eruption

Authors: Imam Muthohar, Budi Hartono, Sigit Priyanto, Hardiansyah Hardiansyah

Abstract:

The eruption of Merapi Volcano in Yogyakarta, Indonesia in 2010 caused many casualties due to minimal disaster preparedness. Increasing population capacity and evacuating to safe places are very important to minimize casualties. The regional government, through the Regional Disaster Management Agency, has divided the disaster-prone areas into three parts: ring 1 at a distance of 10 km, ring 2 at 15 km, and ring 3 at 20 km from the center of Mount Merapi. The success of an evacuation depends fully on the road network infrastructure as the rescue route in an emergency. This research models the evacuation process based on the rise of refugees in ring 1, expanded to ring 2, and finally expanded to ring 3. The model was developed using the SATURN (Simulation and Assignment of Traffic to Urban Road Networks) program, version 11.3.12W, involving 140 centroids, 449 buffer nodes, and 851 links across the Yogyakarta Special Region, and was aimed at making a preliminary identification of road networks considered vulnerable to disaster. Vulnerability was identified from changes in road network performance, in the form of flows and travel times, over the coverage of ring 1, ring 2, ring 3, Sleman outside the rings, Yogyakarta City, Bantul, Kulon Progo, and Gunung Kidul. The results indicated a performance increase in the road networks in ring 2, ring 3, and Sleman outside the rings; the load on the ring 1 network started to increase when the evacuation was expanded to rings 2 and 3. Meanwhile, the performance of the road networks in Yogyakarta City, Bantul, Kulon Progo, and Gunung Kidul decreased as the evacuation areas were expanded.
This preliminary identification determined that the road networks in ring 1, ring 2, ring 3, and Sleman outside the rings are vulnerable with respect to evacuation from a Mount Merapi eruption. They therefore deserve a great deal of attention in preparing for disasters that may occur at any time.
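SATURN uses its own flow-delay curves; purely as a hypothetical stand-in, the way link travel time degrades as evacuation demand grows can be illustrated with the widely used BPR volume-delay function, with all volumes and capacities assumed:

```python
# Illustrative BPR volume-delay function: t = t0 * (1 + 0.15 * (v/c)^4).
# Travel time rises sharply once demand approaches link capacity, which is
# why expanding the evacuation area degrades network performance. Numbers assumed.
def bpr_travel_time(t0, volume, capacity):
    """Congested travel time for free-flow time t0 (minutes) at the given volume."""
    return t0 * (1.0 + 0.15 * (volume / capacity) ** 4)

t0 = 10.0          # free-flow travel time in minutes (assumed)
capacity = 2000.0  # link capacity in vehicles/hour (assumed)
# Hypothetical demands as evacuation expands: ring 1 only, + ring 2, + ring 3.
times = [bpr_travel_time(t0, v, capacity) for v in (500, 1500, 2500)]
for demand, t in zip((500, 1500, 2500), times):
    print(demand, round(t, 2))
```

The quartic term makes the delay negligible at low demand but dominant above capacity, mirroring the congestion behavior the study observes.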

Keywords: model, evacuation, SATURN, vulnerability

Procedia PDF Downloads 160
481 Enhanced Functional Production of a Crucial Biomolecule Human Serum Albumin in Escherichia coli

Authors: Ashima Sharma

Abstract:

Human serum albumin (HSA), one of the most demanded therapeutic proteins with immense biotechnological applications, is a large multidomain protein containing 17 disulfide bonds. The current source of HSA is human blood plasma, which is a limited and unsafe source. Thus, there is an indispensable need to promote non-animal-derived recombinant HSA (rHSA) production. Escherichia coli is one of the most convenient hosts, having contributed to the production of more than 30% of FDA-approved recombinant pharmaceuticals. It grows rapidly and reaches high cell density on inexpensive and simple substrates. E. coli-derived recombinant products also have greater economic potential, as the fermentation processes are cheaper than those of other expression hosts. The major bottleneck in exploiting E. coli as a host for a disulfide-rich multidomain protein is the aggregation of the overexpressed protein: the majority of expressed HSA forms inclusion bodies (more than 90% of the total expressed rHSA) in the E. coli cytosol. Recovery of functional rHSA from inclusion bodies is not preferred because it is difficult to obtain a large, disulfide-rich multidomain protein like rHSA in its functional native form; purification is tedious, time-consuming, laborious, and expensive. Because of such limitations, the E. coli host system was neglected for rHSA production for the past few decades despite its numerous advantages. In the present work, we have exploited the capabilities of E. coli as a host for enhanced functional production of rHSA (~60% of the total expressed rHSA in the soluble fraction). Parameters such as the intracellular environment, temperature, induction type, duration of induction, and cell lysis conditions, which play an important role in enhancing production of the desired protein in its native form in vivo, have been optimized.
We have studied the effect of different types of exogenously employed chaperone systems on the functional expression of rHSA in the E. coli host system. Different aspects of cell growth during production of rHSA in the presence and absence of molecular chaperones in E. coli have also been studied. Having overcome the difficulties of producing functional rHSA in E. coli by engineering the cell's protein-folding machinery, we produced significant levels of functional protein, and the E. coli-derived rHSA was purified to homogeneity. Its detailed physicochemical characterization was performed by monitoring its conformational properties, secondary and tertiary structural elements, surface properties, ligand-binding properties, and stability. These parameters of the recombinant protein were compared with those of the naturally occurring protein from the human source, and the comparison reveals that the recombinant protein closely matches the natural one. Hence, we propose E. coli-derived rHSA as an ideal biosimilar for human blood plasma-derived serum albumin (pHSA), and promote it as an alternative to the human-sourced preparation.

Keywords: recombinant human serum albumin, Escherichia coli, biosimilar, chaperone assisted protein folding

Procedia PDF Downloads 199
480 Modeling and Simulation of Multiphase Evaporation in High Torque Low Speed Diesel Engine

Authors: Ali Raza, Rizwan Latif, Syed Adnan Qasim, Imran Shafi

Abstract:

Diesel engines are widely valued for their efficiency, reliability, and adaptability. Most research and development to date has been directed towards high-speed diesel engines for commercial use, where the objective is to optimize acceleration while reducing exhaust emissions to meet international standards. In high-torque low-speed engines, the requirements are altogether different. These engines are mostly used in the maritime industry, the agriculture industry, stationary engines, compressor engines, etc. High-torque low-speed engines are quite often neglected and are known for low efficiency and high soot emissions. One of the most effective ways to overcome these issues is efficient combustion in the engine cylinder. Fuel spray dynamics play a vital role in defining mixture formation, fuel consumption, combustion efficiency, and soot emissions. Therefore, a comprehensive understanding of the fuel spray characteristics and atomization process in a high-torque low-speed diesel engine is of great importance. Evaporation in the combustion chamber has a strong effect on engine efficiency. In this paper, multiphase evaporation of fuel is modeled for a high-torque low-speed engine using CFD (computational fluid dynamics) codes. Two distinct phases of evaporation are modeled in the simulation software. The basic model equations are derived from the energy conservation equation and the Navier-Stokes equations, and the O'Rourke model is used to model the evaporation phases. The results show a considerable effect on engine efficiency: the evaporation rate of a fuel droplet increases with increasing vapor pressure, and an appreciable reduction in droplet size is achieved by including convective heat effects in the combustion chamber. By and large, an overall increase in efficiency is observed when the distinct evaporation phases are modeled.
This increase in efficiency is due to the fact that droplet size is reduced and vapor pressure is increased in the engine cylinder.
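The full O'Rourke evaporation model is considerably more involved; as a simplified sketch of the droplet-shrinkage behavior described above, the classical d²-law can be used, with the initial diameter and evaporation-rate constant below assumed for illustration:

```python
import numpy as np

# Simplified d^2-law sketch of droplet evaporation (not the full O'Rourke model):
# the squared diameter shrinks linearly in time, d(t)^2 = d0^2 - K*t, where the
# evaporation constant K grows with vapor pressure and convective heat transfer.
def droplet_diameter(d0, K, t):
    """Diameter (m) of an evaporating droplet after time t (s); K in m^2/s."""
    d_sq = d0**2 - K * np.asarray(t, dtype=float)
    return np.sqrt(np.maximum(d_sq, 0.0))  # droplet is gone once d^2 reaches zero

d0 = 50e-6  # assumed initial diameter: 50 microns
K = 2.5e-9  # assumed evaporation constant (m^2/s); larger at higher vapor pressure
t = np.linspace(0.0, d0**2 / K, 5)  # sample the droplet lifetime d0^2/K
print(droplet_diameter(d0, K, t))   # diameter shrinks monotonically to zero
```

The droplet lifetime d0²/K shows directly why a higher evaporation constant (higher vapor pressure, stronger convection) yields faster evaporation and smaller droplets.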

Keywords: diesel fuel, CFD, evaporation, multiphase

Procedia PDF Downloads 332
479 Investigation of Processing Conditions on Rheological Features of Emulsion Gels and Oleogels Stabilized by Biopolymers

Authors: M. Sarraf, J. E. Moros, M. C. Sánchez

Abstract:

Oleogels are self-standing systems that trap edible liquid oil in a three-dimensional network, and they help reduce fat usage by structuring oil with oleogelators. There are different routes to oleogelation and oil structuring, including direct dispersion, structured biphasic systems, oil sorption, and the indirect (emulsion-template) method. The selection of processing conditions, as well as the composition of the oleogel, is essential to obtain a stable oleogel with characteristics suitable for its purpose. In this sense, polysaccharides are among the ingredients widely used in food products to produce oleogels and emulsions. Basil seed gum (BSG), extracted from Ocimum basilicum, is a novel native polysaccharide used in the food industry; because of its high molecular weight, it exhibits high viscosity and pseudoplastic behavior. Proteins can also stabilize oil in water owing to the presence of amino and carboxyl moieties that confer surface activity. Whey proteins are widely used in the food industry as available, inexpensive ingredients with nutritional and functional characteristics, acting as emulsifiers and gelling agents with thickening and water-binding capacity. In general, protein-polysaccharide interactions have a significant effect on food structures and their stability, such as the texture of dairy products, by controlling the interactions in macromolecular systems. Using edible oleogels for oil structuring helps in the targeted delivery of components trapped in the structural network; therefore, the development of an efficient oleogel is important in the food industry. A complete understanding of the key factors affecting emulsion formation and stability, such as the oil-phase ratio, processing conditions, and biopolymer concentrations, can provide crucial information for producing a suitable oleogel.
In this research, the effects of the oil concentration and of the pressure used to manufacture the emulsion prior to obtaining the oleogel have been evaluated through analysis of the droplet size and rheological properties of the resulting emulsions and oleogels. The results show that emulsions prepared in the high-pressure homogenizer (HPH) at higher pressures have smaller droplet sizes and greater uniformity in the size-distribution curve. Regarding the rheological characteristics of the emulsions and oleogels obtained, the predominantly elastic character of the systems must be noted: their storage modulus exceeds the loss modulus, and they show an important plateau zone, typical of structured systems. Likewise, steady-state viscous flow tests on both emulsions and oleogels confirm that the homogenization pressure is an important factor in obtaining emulsions with adequate droplet size and, subsequently, the oleogel. Thus, various routes for trapping oil inside a biopolymer matrix with adjustable mechanical properties could be applied to create the three-dimensional network required for oil absorption and oleogel formation.
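Droplet-size comparisons of this kind are often summarized by the Sauter mean diameter D[3,2]; a minimal sketch follows, with both size distributions hypothetical rather than measured:

```python
import numpy as np

# Sauter mean diameter D[3,2] = sum(n_i * d_i^3) / sum(n_i * d_i^2) of a droplet-size
# distribution. A smaller D[3,2] after high-pressure homogenization indicates finer,
# more uniform droplets. All values below are assumed for illustration.
def sauter_mean_diameter(diameters, counts):
    """D[3,2] in the same length units as `diameters`."""
    d = np.asarray(diameters, dtype=float)
    n = np.asarray(counts, dtype=float)
    return np.sum(n * d**3) / np.sum(n * d**2)

# Hypothetical count distributions before and after raising the homogenization pressure.
d_bins = [0.5, 1.0, 2.0, 4.0]     # droplet diameters in microns (assumed)
low_pressure = [10, 30, 40, 20]   # counts skewed towards larger droplets
high_pressure = [40, 40, 15, 5]   # counts skewed towards smaller droplets
print(sauter_mean_diameter(d_bins, low_pressure))   # coarser emulsion
print(sauter_mean_diameter(d_bins, high_pressure))  # finer emulsion
```

D[3,2] weights large droplets heavily (volume over surface area), which is why it is a sensitive indicator of homogenization efficiency.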

Keywords: basil seed gum, particle size, viscoelastic properties, whey protein

Procedia PDF Downloads 57
478 Multi-Scale Modelling of the Cerebral Lymphatic System and Its Failure

Authors: Alexandra K. Diem, Giles Richardson, Roxana O. Carare, Neil W. Bressloff

Abstract:

Alzheimer's disease (AD) is the most common form of dementia and although it has been researched for over 100 years, there is still no cure or preventive medication. Its onset and progression is closely related to the accumulation of the neuronal metabolite Aβ. This raises the question of how metabolites and waste products are eliminated from the brain as the brain does not have a traditional lymphatic system. In recent years the rapid uptake of Aβ into cerebral artery walls and its clearance along those arteries towards the lymph nodes in the neck has been suggested and confirmed in mouse studies, which has led to the hypothesis that interstitial fluid (ISF), in the basement membranes in the walls of cerebral arteries, provides the pathways for the lymphatic drainage of Aβ. This mechanism, however, requires a net reverse flow of ISF inside the blood vessel wall compared to the blood flow, and the driving forces for such a mechanism remain unknown. While possible driving mechanisms have been studied using mathematical models in the past, a mechanism for net reverse flow has not been discovered yet. Here, we aim to address the question of the driving force of this reverse lymphatic drainage of Aβ (also called perivascular drainage) by using multi-scale numerical and analytical modelling. The numerical simulation software COMSOL Multiphysics 4.4 is used to develop a fluid-structure interaction model of a cerebral artery, which models blood flow and displacements in the artery wall due to blood pressure changes. An analytical model of a layer of basement membrane inside the wall governs the flow of ISF and, therefore, solute drainage based on the pressure changes and wall displacements obtained from the cerebral artery model. The findings suggest that an active role in facilitating a reverse flow is played by the components of the basement membrane and that stiffening of the artery wall with age is a major risk factor for the impairment of brain lymphatics.
Additionally, our model supports the hypothesis of a close association between cerebrovascular diseases and the failure of perivascular drainage.

Keywords: Alzheimer's disease, artery wall mechanics, cerebral blood flow, cerebral lymphatics

Procedia PDF Downloads 518
477 Approaches to Integrating Entrepreneurial Education in School Curriculum

Authors: Kofi Nkonkonya Mpuangnan, Samantha Govender, Hlengiwe Romualda Mhlongo

Abstract:

In recent years, a noticeable and worrisome pattern has emerged in numerous developing nations: a steady and persistent rise in unemployment rates. This escalation of economic struggles has become a cause of great concern for parents who, having invested significant resources in their children's education, had hoped to achieve economic prosperity and stability for their families through secure employment. Tackling this pressing unemployment issue effectively requires a holistic approach, and a pivotal aspect of that approach involves incorporating entrepreneurial education seamlessly into the entire educational system. In this light, the authors explored approaches to integrating entrepreneurial education into the school curriculum, focusing on the following questions: How can an entrepreneurial mindset be promoted among learners in school? And how far have pedagogical approaches improved entrepreneurship in schools? To answer these questions, a systematic literature review underpinned by Human Capital Theory was adopted, following the three-stage guideline of planning, conducting, and reporting. The data were sought from publishers with expansive coverage of scholarly literature, including Sage, Taylor & Francis, Emerald, and Springer, covering publications from 1965 to 2023. The search was guided by two broad terms: promoting an entrepreneurial mindset in learners, and pedagogical strategies for enhancing entrepreneurship. It was found that learners acquire an entrepreneurial mindset through an innovative classroom environment, resilience building, and engagement with guest speakers and industry experts. Teachers can also promote entrepreneurial education by adopting pedagogical approaches such as hands-on learning and experiential activities, role-playing, business simulation games, and creative and innovative teaching.
It was recommended that the Ministry of Education develop tailored training programs and workshops aimed at empowering educators with the essential competencies and insights to deliver impactful entrepreneurial education.

Keywords: education, entrepreneurship, school curriculum, pedagogical approaches, integration

Procedia PDF Downloads 78
476 Application of Groundwater Level Data Mining in Aquifer Identification

Authors: Liang Cheng Chang, Wei Ju Huang, You Cheng Chen

Abstract:

Investigation and research are key to the conjunctive use of surface water and groundwater resources. The hydrogeological structure is an important basis for groundwater analysis and simulation. Traditionally, the hydrogeological structure is determined manually based on geological drill logs, well structures, groundwater levels, and so on. In Taiwan, a groundwater observation network has been built, and a large amount of groundwater-level observation data is available. The groundwater level is the state variable of the groundwater system, reflecting the system response that combines the hydrogeological structure with groundwater injection and extraction. This study applies analytical tools to the observation database to develop a methodology for identifying confined and unconfined aquifers. These tools include frequency analysis, cross-correlation analysis between rainfall and groundwater level, groundwater recession curve analysis, and a decision tree. The developed methodology is then applied to groundwater-layer identification in two groundwater systems: the Zhuoshui River alluvial fan and the Pingtung Plain. The frequency analysis uses the Fourier transform to process the time series of groundwater-level observations, analyzing the daily-frequency amplitude of the groundwater level caused by artificial groundwater extraction. The cross-correlation analysis between rainfall and groundwater level is used to obtain the groundwater replenishment time between infiltration and the peak groundwater level during wet seasons. The groundwater recession curve, i.e. the average rate of groundwater-level decline, is used to analyze the internal flux in the groundwater system and the flux caused by artificial behaviors. The decision tree uses the information obtained from the abovementioned analytical tools and optimizes the best estimate of the hydrogeological structure.
The developed method reaches a training accuracy of 92.31% and a verification accuracy of 93.75% on the Zhuoshui River alluvial fan, and a training accuracy of 95.55% and a verification accuracy of 100% on the Pingtung Plain. This high accuracy indicates that the developed methodology is an effective tool for identifying hydrogeological structures.
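A minimal sketch of the Fourier-based frequency analysis described above (detecting the daily pumping signature in a groundwater-level series) might look as follows; the series is synthetic, and the trend and pumping amplitudes are assumed:

```python
import numpy as np

# Synthetic hourly groundwater-level series: a slow natural trend plus a daily
# drawdown cycle caused by artificial pumping (both amplitudes assumed).
hours = np.arange(24 * 60)  # 60 days of hourly samples
level = 10.0 + 0.001 * hours + 0.2 * np.sin(2 * np.pi * hours / 24.0)

# Remove the linear trend, then locate the dominant frequency with an FFT.
residual = level - np.polyval(np.polyfit(hours, level, 1), hours)
amplitude = np.abs(np.fft.rfft(residual))
freq = np.fft.rfftfreq(level.size, d=1.0 / 24.0)  # frequencies in cycles per day

peak = freq[np.argmax(amplitude[1:]) + 1]  # skip the zero-frequency bin
print(f"dominant frequency: {peak:.2f} cycles/day")
```

A pronounced 1-cycle/day amplitude peak flags pumping-affected (typically confined) observation wells, which is the feature the decision tree then combines with the other indicators.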

Keywords: aquifer identification, decision tree, groundwater, Fourier transform

Procedia PDF Downloads 149
475 Dynamic Capabilities and Disorganization: A Conceptual Exploration

Authors: Dinuka Herath, Shelley Harrington

Abstract:

This paper prompts debate about whether disorganization can be positioned as a mechanism that facilitates the creation and enactment of important dynamic capabilities within an organization. The article is a conceptual exploration of the link between dynamic capabilities and disorganization, and it presents the case for agent-based modelling as a viable methodological tool with which to explore this link. Dynamic capabilities are those capabilities that an organization needs to sustain competitive advantage in complex environments. Disorganization is the process of breaking down the restrictive organizational structures and routines that commonly reside in organizations, in order to increase organizational performance. In the 20th century, disorganization was largely viewed as an undesirable phenomenon within an organization. However, the concept has been revitalized and has garnered research interest in recent years due to studies that demonstrate some of its advantages for an organization. Furthermore, recent agent-based simulation studies have shown that disorganization can be managed, and they argue for disorganization to be viewed as an enabler of organizational productivity. Given the natural state of disorganization and the fear it can create, this paper argues that instead of trying to 'correct' disorganization, it should be actively encouraged where it serves a functional purpose. The study of dynamic capabilities emerged as a result of heightened dynamism, and the very nature of dynamism denotes a level of fluidity and flexibility, something which this paper argues many organizations do not truly foster due to a constrained commitment to organization and order.
We argue in this paper that the very state of disorganization should be encouraged in order to develop the dynamic capabilities needed not only to deal with the complexities of the modern business environment but also to sustain competitive success. The significance of this paper stems from the fact that dynamic capabilities and disorganization are both gaining prominence in their respective academic genres. Despite the attention each concept has received individually, no conceptual link has been established to depict how they actually interact. We argue that the link between these two concepts presents a novel way of looking at organizational performance. In doing so, we explore the potential of the two concepts working in tandem to increase organizational productivity, which has significant implications for academics and practitioners alike.

Keywords: agent-based modelling, disorganization, dynamic capabilities, performance

Procedia PDF Downloads 309
474 Application of Biomimetic Approach in Optimizing Buildings Heat Regulating System Using Parametric Design Tools to Achieve Thermal Comfort in Indoor Spaces in Hot Arid Regions

Authors: Aya M. H. Eissa, Ayman H. A. Mahmoud

Abstract:

When it comes to energy-efficient thermal regulation systems, natural systems offer not only an inspirational source of innovative strategies but also sustainable and even regenerative ones. Using biomimetic design, an energy-efficient thermal regulation system can be developed. Although conventional design methods have achieved fairly efficient systems, they still have limitations which can be overcome by using parametric design software. Accordingly, the main objective of this study is to apply, and assess the efficiency of, heat regulation strategies inspired by termite mounds in residential buildings’ thermal regulation systems. Parametric design software is used to pave the way for further and more complex biomimetic design studies and implementations. A hot arid region is selected due to the deficiency of research in this climatic region. First, in the analysis phase, the affecting stimuli and the parameters to be optimized are set, mimicking the natural system. Then, based on climatic data and using the parametric design software Grasshopper, the building form and the opening heights and areas are altered until an optimized solution is reached. Finally, the efficiency of the optimized system, in comparison with a conventional system, is assessed by, firstly, indoor airflow and indoor temperature, simulated in Ansys Fluent (CFD), and secondly, the total solar radiation falling on the building envelope, calculated using Ladybug, a Grasshopper plugin. The results show an increase in the average indoor airflow speed from 0.5 m/s to 1.5 m/s. A slight decrease in temperature was also noticed, and the total radiation was decreased by 4%. In conclusion, although applying a single bio-inspired heat regulation strategy might not be enough to achieve an optimum system, the concluded system is more energy-efficient than conventional ones, as it helps achieve indoor comfort through passive techniques, demonstrating the potential of parametric design software in biomimetic design.

Keywords: biomimicry, heat regulation systems, hot arid regions, parametric design, thermal comfort

Procedia PDF Downloads 285
473 Bayesian Parameter Inference for Continuous Time Markov Chains with Intractable Likelihood

Authors: Randa Alharbi, Vladislav Vyshemirsky

Abstract:

Systems biology is an important field of science which focuses on studying the behaviour of biological systems. Modelling is required to produce a detailed description of the elements of a biological system, their function, and their interactions. A well-designed model requires selecting a suitable mechanism which can capture the main features of the system, defining its essential components, and representing an appropriate law for the interactions between those components. Complex biological systems exhibit stochastic behaviour; thus, probabilistic models are suitable for describing and analysing them. The Continuous-Time Markov Chain (CTMC) is one such probabilistic model, describing the system as a set of discrete states with continuous-time transitions between them. The system is then characterised by a set of probability distributions that describe the transition from one state to another at a given time. The evolution of these probabilities through time is governed by the chemical master equation, which is analytically intractable but can be simulated. Uncertain parameters of such a model can be inferred using methods of Bayesian inference. Yet, inference in such a complex system is challenging, as it requires the evaluation of the likelihood, which is intractable in most cases. There are different statistical methods that allow simulating from the model despite the intractability of the likelihood. Approximate Bayesian computation is a common approach for tackling inference which relies on simulation of the model to approximate the intractable likelihood. Particle Markov chain Monte Carlo (PMCMC) is another approach, based on using sequential Monte Carlo to estimate the intractable likelihood. However, both methods are computationally expensive. In this paper, we discuss the efficiency and possible practical issues of each method, taking their computational time into account.
We demonstrate likelihood-free inference by analysing a model of the Repressilator using both methods. A detailed investigation is performed to quantify the difference between these methods in terms of efficiency and computational cost.
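The likelihood-free idea above can be sketched with rejection ABC on a toy birth-death CTMC simulated exactly by the Gillespie algorithm. The model, prior bounds, tolerance, and summary statistic (final population) below are illustrative assumptions, not the Repressilator setup used in the paper.

```python
import random

def gillespie_birth_death(birth, death, x0, t_end, rng):
    """Exact stochastic simulation of a toy birth-death CTMC."""
    t, x = 0.0, x0
    while t < t_end and x > 0:
        rate = (birth + death) * x            # total event rate
        t += rng.expovariate(rate)            # exponential waiting time
        if t >= t_end:
            break
        if rng.random() < birth / (birth + death):
            x += 1                            # birth event
        else:
            x -= 1                            # death event
    return x

def abc_rejection(observed, prior_draw, simulate, eps, n_samples, rng):
    """Keep parameter draws whose simulated summary lies within eps of the data."""
    accepted = []
    while len(accepted) < n_samples:
        theta = prior_draw(rng)
        if abs(simulate(theta, rng) - observed) <= eps:
            accepted.append(theta)
    return accepted

rng = random.Random(0)
death = 1.0
# Synthetic "observation" generated with a true birth rate of 0.9.
observed = gillespie_birth_death(0.9, death, 30, 3.0, rng)
posterior = abc_rejection(
    observed,
    prior_draw=lambda r: r.uniform(0.1, 2.0),                 # uniform prior on the birth rate
    simulate=lambda th, r: gillespie_birth_death(th, death, 30, 3.0, r),
    eps=5, n_samples=200, rng=rng)
```

The accepted draws approximate the posterior over the birth rate; shrinking `eps` tightens the approximation at the cost of the higher simulation counts the abstract identifies as the practical bottleneck.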

Keywords: approximate Bayesian computation (ABC), continuous-time Markov chains, sequential Monte Carlo, particle Markov chain Monte Carlo (PMCMC)

Procedia PDF Downloads 196
472 Exploration of Cone Foam Breaker Behavior Using Computational Fluid Dynamic

Authors: G. St-Pierre-Lemieux, E. Askari Mahvelati, D. Groleau, P. Proulx

Abstract:

Mathematical modeling has become an important tool for the study of foam behavior. Computational Fluid Dynamics (CFD) can be used to investigate the behavior of foam around foam breakers to better understand the mechanisms leading to the ‘destruction’ of foam. The focus of this investigation was the simple cone foam breaker, whose performance has been identified in numerous studies. While the optimal pumping angle is known from the literature, the contributions of pressure drop, shearing, and centrifugal forces to foam syneresis are subject to speculation. This work provides a screening of those factors against changes in the cone angle and foam rheology. The CFD simulation was performed with the open-source OpenFOAM toolkit on a full three-dimensional model discretized using hexahedral cells. The geometry was generated using a Python script and then meshed with blockMesh. The OpenFOAM Volume of Fluid (VOF) method (interFoam) was used to obtain a detailed description of the interfacial forces, and the k-omega SST model was used to calculate the turbulence fields. The cone configuration allows the use of a rotating wall boundary condition. In each case, a pair of immiscible fluids, foam/air or water/air, was used. The foam was modeled as a shear-thinning (Herschel-Bulkley) fluid. The results were compared to our measurements and to results found in the literature, first by computing the pumping rate of the cone, and second by the liquid break-up at the exit of the cone. A 3D-printed version of the cones, submerged in foam (shaving cream or soap solution) and water at speeds varying between 400 RPM and 1500 RPM, was also used to validate the modeling results by calculating the torque exerted on the shaft. While most of the literature focuses on cone behavior in Newtonian fluids, this work explores its behavior in a shear-thinning fluid, which better reflects the apparent rheology of foam.
These simulations shed new light on the behavior of the cone within the foam and allow the computation of the shear, pressure, and velocity fields of the fluid, enabling a better evaluation of the efficiency of the cones as foam breakers. This study contributes to clarifying the mechanisms behind foam breaker performance, at least in part, using modern CFD techniques.
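The Herschel-Bulkley closure named above can be sketched as a regularized apparent-viscosity function; the yield stress, consistency, flow index, and viscosity cap used here are placeholder values, not the study's fitted foam parameters.

```python
def herschel_bulkley_viscosity(shear_rate, tau0=10.0, k=0.5, n=0.4, mu_max=1e3):
    """Apparent viscosity mu = tau0/gamma_dot + k*gamma_dot**(n-1),
    capped at mu_max to regularize the singularity at zero shear rate."""
    if shear_rate <= 0.0:
        return mu_max
    mu = tau0 / shear_rate + k * shear_rate ** (n - 1)
    return min(mu, mu_max)
```

With a flow index n < 1, the apparent viscosity falls as the shear rate grows, which is the shear-thinning behavior the abstract attributes to foam.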

Keywords: bioreactor, CFD, foam breaker, foam mitigation, OpenFOAM

Procedia PDF Downloads 197
471 Investigation of Nucleation and Thermal Conductivity of Waxy Crude Oil on Pipe Wall via Particle Dynamics

Authors: Jinchen Cao, Tiantian Du

Abstract:

Waxy crude oil readily crystallizes and deposits on the pipeline wall, causing pipeline clogging and reducing oil and gas gathering and transmission efficiency. In this paper, a mesoscopic-scale dissipative particle dynamics method is employed, and four pipe wall models are constructed: a smooth wall (SW), a hydroxylated wall (HW), a rough wall (RW), and a single-layer graphene wall (GW). Snapshots of the simulation output trajectories show that paraffin molecules interact with each other to form a network structure that constrains water molecules as their nucleation sites. Meanwhile, it is observed that the paraffin molecules on the near-wall side are adsorbed horizontally between the inter-lattice gaps of the solid wall. In the pressure range of 0-50 MPa, pressure changes have little effect on the affinity properties of the SW, HW, and GW walls; for the RW wall, however, the contact angle of paraffin wax was found to decrease with increasing pressure, while that of water molecules showed the opposite trend, a phenomenon due to the pressure-driven transition of paraffin wax molecules from an amorphous to a crystalline state. Meanwhile, the minimum crystalline phase pressure (MCPP) was proposed to describe the lowest pressure at which crystallization of paraffin molecules occurs. The maximum number of crystalline clusters formed by paraffin molecules at the MCPP ranked as NSW (MCPP 0.52 MPa) > NHW (0.55 MPa) > NRW (0.62 MPa) > NGW (0.75 MPa). The highest MCPP on the graphene surface, with the fewest clusters formed, indicates that the addition of graphene inhibited the crystallization of paraffin deposits on the wall surface.
Finally, the thermal conductivity was calculated. The results show that on the near-wall side, the thermal conductivity changes drastically due to the adsorption crystallization of paraffin waxes, while on the fluid side it gradually stabilizes; the average thermal conductivities of the four wall models ranged from 0.188 W/(m·K) to 0.254 W/(m·K). This study provides a theoretical basis for improving the transport efficiency and heat transfer characteristics of waxy crude oil in terms of wall type, wall roughness, and MCPP.
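Counting crystalline clusters, as done for the MCPP analysis above, reduces to connected-component labeling of particles within a cutoff distance. A minimal union-find sketch follows; the coordinates and cutoff are illustrative, not taken from the simulations.

```python
def count_clusters(positions, cutoff):
    """Union-find clustering: particles closer than `cutoff` join one cluster."""
    parent = list(range(len(positions)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            # Compare squared distances to avoid a square root.
            d2 = sum((a - b) ** 2 for a, b in zip(positions[i], positions[j]))
            if d2 <= cutoff ** 2:
                ri, rj = find(i), find(j)
                if ri != rj:
                    parent[ri] = rj         # merge the two clusters
    return len({find(i) for i in range(len(positions))})
```

Sweeping the pressure and recording the pressure at which the cluster count jumps is one way to locate an MCPP-like threshold in such particle data.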

Keywords: waxy crude oil, thermal conductivity, crystallization, dissipative particle dynamics, MCPP

Procedia PDF Downloads 66
470 Acceleration Techniques of DEM Simulation for Dynamics of Particle Damping

Authors: Masato Saeki

Abstract:

Presented herein is a novel algorithm for calculating the damping performance of particle dampers. The particle damper is a passive vibration control technique with many practical applications due to its simple design. It consists of granular materials constrained to move between two ends in the cavity of a primary vibrating system. The damping effect results from the exchange of momentum during the impact of the granular materials against the wall of the cavity. This damping has the advantage of being independent of the environment; therefore, particle damping can be applied in extreme temperature environments where most conventional dampers would fail. Many experimental studies have shown that the efficiency of particle dampers is high in the case of resonant vibration. In order to use particle dampers effectively, it is necessary to solve the equations of motion for each particle, considering the granularity. The discrete element method (DEM) has been found to be effective for revealing the dynamics of particle damping. In this method, individual particles are treated as rigid bodies, and interparticle collisions are modeled by mechanical elements such as springs and dashpots. However, the computational cost is significant, since the equation of motion for each particle must be solved at each time step. To improve the computational efficiency of the DEM, new algorithms are needed. In this study, new algorithms are proposed for implementing a high-performance DEM. Assuming that the granular particles behave identically in each divided area of the damper container, the total contact force between the primary system and all particles can be taken as the number of divided areas multiplied by the contact force between the primary system and the granular materials per divided area. This considerably reduces the calculation time.
The validity of this calculation method was investigated, and the calculated results were compared with experimental ones. This paper also presents the results of experimental studies of the performance of particle dampers. It is shown that the particle radius affects the noise level, and that the particle size and particle material influence the damper performance.
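The spring-dashpot contact model and the divided-area speedup described above can be sketched as follows; the stiffness, damping coefficient, and number of divisions are illustrative assumptions, not the study's calibrated values.

```python
def contact_force(overlap, rel_velocity, k=1e5, c=10.0):
    """Linear spring-dashpot contact law: a repulsive spring proportional to
    the overlap plus viscous dissipation; zero force when particles are apart."""
    if overlap <= 0.0:
        return 0.0
    return k * overlap + c * rel_velocity

def total_wall_force(per_area_force, n_divisions):
    """Divided-area speedup: if every divided area of the container behaves
    identically, the total force on the primary system is the per-area
    contact force scaled by the number of divisions."""
    return n_divisions * per_area_force

f = contact_force(0.001, 0.1)     # one representative particle-wall contact
F = total_wall_force(f, 8)        # container split into 8 identical areas
```

The speedup comes from evaluating `contact_force` for one representative area instead of for every particle in the damper.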

Keywords: particle damping, discrete element method (DEM), granular materials, numerical analysis, equivalent noise level

Procedia PDF Downloads 450
469 Low-Cost, Portable Optical Sensor with Regression Algorithm Models for Accurate Monitoring of Nitrites in Environments

Authors: David X. Dong, Qingming Zhang, Meng Lu

Abstract:

Nitrites enter waterways as runoff from croplands and are discharged from many industrial sites. Excessive nitrite inputs to water bodies lead to eutrophication. On-site rapid detection of nitrite is of increasing interest for managing fertilizer application and monitoring water source quality. Existing methods for detecting nitrites use spectrophotometry, ion chromatography, electrochemical sensors, ion-selective electrodes, chemiluminescence, and colorimetric methods. However, these methods either suffer from high cost or provide low measurement accuracy due to their poor selectivity to nitrites. Therefore, it is desirable to develop an accurate and economical method to monitor nitrites in the environment. We report a low-cost optical sensor, in conjunction with a machine learning (ML) approach, to enable high-accuracy detection of nitrites in water sources. The sensor works on the principle of measuring the molecular absorption of nitrites at three narrowband wavelengths (295 nm, 310 nm, and 357 nm) in the ultraviolet (UV) region. These wavelengths are chosen because they have relatively high sensitivity to nitrites, and low-cost light-emitting diodes (LEDs) and photodetectors are available at these wavelengths. A regression model is built, trained, and utilized to minimize the cross-sensitivities of these wavelengths to interfering species, thus achieving precise and reliable measurements in the presence of various interference ions. The measured absorbance data is input to the trained model, which provides a nitrite concentration prediction for the sample. The sensor is built with i) a miniature quartz cuvette as the test cell that contains the liquid sample under test, ii) three low-cost UV LEDs placed on one side of the cell as light sources, each providing narrowband light, and iii) a photodetector with a built-in amplifier and an analog-to-digital converter placed on the other side of the test cell to measure the power of the transmitted light.
This simple optical design allows measuring the absorbance of the sample at the three wavelengths. To train the regression model, the absorbances of nitrite ions, alone and in combination with various interference ions, are first obtained at the three UV wavelengths using a conventional spectrophotometer. The spectrophotometric data are then input to different regression algorithms for training and evaluation of high-accuracy nitrite concentration prediction. Our experimental results show that the proposed approach enables nitrite detection within several seconds. The sensor hardware costs about one hundred dollars, which is much cheaper than a commercial spectrophotometer. The ML algorithm helps to reduce the average relative error to below 3.5% over a concentration range from 0.1 ppm to 100 ppm of nitrites. The sensor has been validated by measuring nitrites at three sites in Ames, Iowa, USA. This work demonstrates an economical and effective approach to the rapid, reagent-free determination of nitrites with high accuracy. The integration of a low-cost optical sensor and ML data processing can find a wide range of applications in environmental monitoring and management.
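The mapping from three-wavelength absorbance to concentration described above can be sketched as an ordinary least-squares fit; the absorbance/concentration table below is fabricated for illustration only and is not the study's calibration data.

```python
import numpy as np

# Hypothetical training set: absorbances at 295, 310, 357 nm vs nitrite (ppm).
A = np.array([[0.05, 0.04, 0.02],
              [0.48, 0.40, 0.21],
              [0.95, 0.82, 0.43],
              [1.90, 1.61, 0.85]])
c = np.array([0.1, 10.0, 20.0, 40.0])

# Fit c ≈ A·w + b by ordinary least squares (bias via an appended ones column).
X = np.hstack([A, np.ones((len(A), 1))])
w, *_ = np.linalg.lstsq(X, c, rcond=None)

def predict(absorbances):
    """Predict nitrite concentration (ppm) from the three absorbance readings."""
    return float(np.dot(list(absorbances) + [1.0], w))
```

A real calibration would also include samples containing interference ions so the fit learns to cancel their cross-sensitivity, which is the role the abstract assigns to the regression model.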

Keywords: optical sensor, regression model, nitrites, water quality

Procedia PDF Downloads 67
468 Polycyclic Aromatic Hydrocarbons: Pollution and Ecological Risk Assessment in Surface Soil of the Tezpur Town, on the North Bank of the Brahmaputra River, Assam, India

Authors: Kali Prasad Sarma, Nibedita Baul, Jinu Deka

Abstract:

In the present study, the pollution level of polycyclic aromatic hydrocarbons (PAHs) in the surface soil of the historic Tezpur town, located on the north bank of the River Brahmaputra, was evaluated. In order to determine the seasonal distribution and concentration levels of the 16 USEPA priority PAHs, surface soil samples were collected from 12 sampling sites with various land use types. The total concentration of the 16 PAHs (∑16 PAHs) varied from 242.68 µg kg-1 to 7901.89 µg kg-1. The concentration of total probable carcinogenic PAHs ranged between 7.285 µg kg-1 and 479.184 µg kg-1 across seasons, while the concentration of BaP, the most carcinogenic PAH, ranged from below detection limit (BDL) to 50.01 µg kg-1. The composition profiles of PAHs in the three seasons were characterized by two different ring types: (1) 4-ring PAHs contributed the highest percentage of total PAHs (43.75%) in the monsoon season, (2) while in the pre- and post-monsoon seasons, 3-ring compounds dominated the PAH profile, contributing 65.58% and 74.41%, respectively. A high PAH concentration with significant seasonality and a high abundance of low-molecular-weight (LMW) PAHs was observed in Tezpur town. Soil PAH toxicity was evaluated using toxic equivalency factors (TEFs), which quantify the carcinogenic potential of other PAHs relative to BaP, to estimate the benzo[a]pyrene-equivalent concentration (BaPeq). The calculated BaPeq values signify considerable risk from contact with soil PAHs. We applied cluster analysis and principal component analysis (PCA) with multivariate linear regression (MLR) to apportion the sources of PAHs in the surface soil of Tezpur town, based on the measured PAH concentrations. The results indicate that petrogenic and pyrogenic sources are both important sources of PAHs. A combination of chemometric and molecular indices was used to identify the sources of PAHs, which could be attributed to vehicle emissions, a mixed source input, natural gas combustion, wood or biomass burning, and coal combustion.
Source apportionment using absolute principal component scores-multiple linear regression (APCS-MLR) showed that the main sources of PAHs are: 22.3% from mixed sources comprising diesel combustion, biomass combustion, and petroleum spills; 13.55% from vehicle emissions; 9.15% from diesel and natural gas burning; 38.05% from wood and biomass burning; and 16.95% from coal combustion. Pyrogenic input was found to dominate the origin of the PAHs, with a larger contribution from vehicular exhaust. PAHs have often been found to be co-emitted with other environmental pollutants, such as heavy metals, due to similar sources of origin. A positive correlation was observed between PAHs and Cr and Pb (r2 = 0.54 and 0.55, respectively) in the monsoon season, and between PAHs and Cd and Pb (r2 = 0.54 and 0.61, respectively), indicating their common source. A strong correlation was observed between PAHs and organic carbon (OC) during the pre- and post-monsoon seasons (r2 = 0.46 and r2 = 0.65, respectively), whereas during the monsoon season no significant correlation was observed (r2 = 0.24).
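The TEF-based toxicity screening described above can be sketched as a weighted sum. The TEF values below are the commonly cited Nisbet and LaGoy figures and the sample concentrations are invented, so both should be checked against the study's own tables.

```python
# Toxic equivalency factors relative to BaP (Nisbet & LaGoy-style, subset).
TEF = {"BaP": 1.0, "DahA": 1.0, "BaA": 0.1, "BbF": 0.1, "BkF": 0.1,
       "IcdP": 0.1, "Chr": 0.01, "Ant": 0.01, "BghiP": 0.01,
       "Flt": 0.001, "Phe": 0.001, "Pyr": 0.001}

def bap_equivalent(concentrations_ugkg):
    """BaPeq (µg/kg) = sum over measured PAHs of concentration × TEF."""
    return sum(concentrations_ugkg[p] * TEF[p] for p in concentrations_ugkg)

# Hypothetical soil sample, concentrations in µg/kg.
sample = {"BaP": 50.0, "BaA": 120.0, "Chr": 300.0, "Phe": 800.0}
# 50*1.0 + 120*0.1 + 300*0.01 + 800*0.001 = 65.8 µg/kg BaPeq
```

Because the TEF for BaP is 1, the BaPeq of a mixture reads directly as "the amount of pure BaP with the same carcinogenic potential."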

Keywords: polycyclic aromatic hydrocarbon, Tezpur town, chemometric analysis, ecological risk assessment, pollution

Procedia PDF Downloads 200
467 Transitional Separation Bubble over a Rounded Backward Facing Step Due to a Temporally Applied Very High Adverse Pressure Gradient Followed by a Slow Adverse Pressure Gradient Applied at Inlet of the Profile

Authors: Saikat Datta

Abstract:

Incompressible laminar time-varying flow over a rounded backward-facing step is investigated experimentally and through numerical simulation for a triangular piston motion at the inlet of a straight channel, with very high acceleration followed by a slow deceleration. The backward-facing step is an important test case, as it embodies important flow characteristics such as the separation point, the reattachment length, and the recirculation of flow. A sliding piston imparts two successive triangular velocities at the inlet: constant acceleration from rest, 0≤t≤t0, and constant deceleration to rest, t0≤t

Keywords: laminar boundary layer separation, rounded backward facing step, separation bubble, unsteady separation, unsteady vortex flows

Procedia PDF Downloads 60
466 A Single-Channel BSS-Based Method for Structural Health Monitoring of Civil Infrastructure under Environmental Variations

Authors: Yanjie Zhu, André Jesus, Irwanda Laory

Abstract:

Structural Health Monitoring (SHM), involving data acquisition, data interpretation, and decision-making systems, aims to continuously monitor the structural performance of civil infrastructure under various in-service circumstances. The main value and purpose of SHM is identifying damage through the data interpretation system. Research on SHM has expanded in recent decades, and a large volume of data is recorded every day owing to the dramatic development of sensor techniques and progress in signal processing. However, efficient and reliable data interpretation for damage detection under environmental variations is still a big challenge: structural damage might be masked because variations in measured data can be the result of environmental variations. This research reports a novel method based on single-channel Blind Signal Separation (BSS), which extracts environmental effects from measured data directly, without any prior knowledge of the structural loading and environmental conditions. Despite successful applications in audio processing and biomedical research, BSS has never been used to detect damage under varying environmental conditions. The proposed method optimizes and combines Ensemble Empirical Mode Decomposition (EEMD), Principal Component Analysis (PCA), and Independent Component Analysis (ICA) to separate the structural responses due to different loading conditions from a single-channel input signal; ICA is applied to the dimension-reduced output of EEMD. A numerical simulation of a truss bridge, inspired by the New Joban Line Arakawa Railway Bridge, is used to validate the method. All results demonstrate that the single-channel BSS-based method can recover temperature effects from the mixed structural response recorded by a single sensor with convincing accuracy. This will be the foundation of further research on direct damage detection under varying environments.
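The single-channel-to-multichannel step at the heart of the pipeline above can be sketched with a delay-embedding (Hankel) matrix followed by PCA via the SVD. For brevity this stand-in omits the EEMD and ICA stages, and the toy signal (a slow "temperature" drift plus a fast structural vibration) is invented.

```python
import numpy as np

def delay_embed(x, window):
    """Trajectory (Hankel) matrix: turns one channel into a pseudo
    multichannel record, the usual trick behind single-channel BSS."""
    n = len(x) - window + 1
    return np.stack([x[i:i + window] for i in range(n)])

# Toy single-sensor record: slow environmental drift + fast structural response.
t = np.linspace(0.0, 10.0, 1000)
drift = 0.5 * t                            # stand-in for a temperature effect
vibration = np.sin(2.0 * np.pi * 5.0 * t)  # stand-in for a structural mode
x = drift + vibration

X = delay_embed(x, window=50)
X = X - X.mean(axis=0)                     # center columns before PCA
U, s, Vt = np.linalg.svd(X, full_matrices=False)
var_ratio = s[0] ** 2 / np.sum(s ** 2)     # variance share of the leading component
# The dominant component captures the smooth drift; an ICA stage would then
# rotate the retained components into statistically independent sources.
```

The separation logic mirrors the abstract: the slowly varying environmental effect concentrates in the leading principal components, leaving the oscillatory structural response in the remainder.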

Keywords: damage detection, ensemble empirical mode decomposition (EEMD), environmental variations, independent component analysis (ICA), principal component analysis (PCA), structural health monitoring (SHM)

Procedia PDF Downloads 295
465 Prevalence and Molecular Characterization of Extended-Spectrum–β Lactamase and Carbapenemase-Producing Enterobacterales from Tunisian Seafood

Authors: Mehdi Soula, Yosra Mani, Estelle Saras, Antoine Drapeau, Raoudha Grami, Mahjoub Aouni, Jean-Yves Madec, Marisa Haenni, Wejdene Mansour

Abstract:

Multi-resistance to antibiotics in Gram-negative bacilli, and particularly in Enterobacteriaceae, has become frequent in hospitals in Tunisia. However, data on antibiotic-resistant bacteria in aquatic products are scarce. The aims of this study are to estimate the proportion of ESBL- and carbapenemase-producing Enterobacterales in seafood (clams and fish) in Tunisia and to molecularly characterize the collected isolates. Two types of seafood were sampled in unrelated markets in four regions of Tunisia: 641 pieces of farmed fish and 1075 Mediterranean clams divided into 215 pools of 5 pieces each. Once purchased, all samples were incubated in tubes containing peptone salt broth for 24 to 48 h at 37°C. After incubation, overnight cultures were isolated on selective MacConkey agar plates supplemented with either imipenem or cefotaxime, identified using API20E test strips (bioMérieux, Marcy-l’Étoile, France), and confirmed by MALDI-TOF MS. Antimicrobial susceptibility was determined by the disk diffusion method on Mueller-Hinton agar plates, and the results were interpreted according to CA-SFM 2021. ESBL-producing Enterobacterales were detected using the Double Disc Synergy Test (DDST). Carbapenem resistance was detected using an ertapenem disk and confirmed using the ROSCO KPC/MBL and OXA-48 Confirm Kits (ROSCO Diagnostica, Taastrup, Denmark). DNA was extracted using a NucleoSpin Microbial DNA extraction kit (Macherey-Nagel, Hoerdt, France) according to the manufacturer’s instructions. Resistance genes were determined using the CGE online tools, and the replicon content and plasmid formula were identified from the WGS data using PlasmidFinder 2.0.1 and pMLST 2.0. From farmed fish, nine ESBL-producing strains (9/641, 1.4%) were isolated, identified as E. coli (n=6) and K. pneumoniae (n=3). Among the 215 pools of 5 clams analyzed, 18 ESBL-producing isolates were identified, including 14 E. coli and 4 K. pneumoniae.
This corresponds to a low isolation rate of ESBL-producing Enterobacterales of 1.6% (18/1075) in the clam pools. In fish, the ESBL phenotype was due to the presence of the blaCTX-M-15 gene in all nine isolates, but no carbapenemase gene was identified. In clams, the predominant ESBL gene was blaCTX-M-1 (n=6/18), and carbapenemase genes (blaNDM-1, blaOXA-48) were detected in only 3 K. pneumoniae isolates. Replicon typing of the strains carrying ESBL and carbapenemase genes revealed that the major plasmid type carrying ESBL genes was IncF (42.3%, n=11/26). In all, our results suggest that seafood can be a reservoir of multidrug-resistant bacteria, most probably of human origin but also shaped by antibiotic selection pressure. Our findings raise concerns that seafood bought for consumption may serve as a potential reservoir of AMR genes and pose a serious threat to public health.

Keywords: ESBL, carbapenemase, Enterobacterales, Tunisian seafood

Procedia PDF Downloads 100
464 The Correlation between Eye Movements, Attentional Shifting, and Driving Simulator Performance among Adolescents with Attention Deficit Hyperactivity Disorder

Authors: Navah Z. Ratzon, Anat Keren, Shlomit Y. Greenberg

Abstract:

Car accidents are a worldwide problem, and adolescents’ involvement in car accidents is higher than that of the overall driving population. Researchers estimate the risk of accidents among adolescents with symptoms of attention-deficit/hyperactivity disorder (ADHD) to be 1.2 to 4 times higher than that of their peers. Individuals with ADHD exhibit unique patterns of eye movements and attentional shifts that play an important role in driving. In addition, deficiencies in cognitive and executive functions among adolescents with ADHD are likely to put them at greater risk for car accidents. Fifteen adolescents with ADHD and 17 matched controls participated in the study. Individuals from both groups attended local public schools and did not have a driver’s license; participants’ mean age was 16.1 (SD = .23). As part of the experiment, they all completed a driving simulation session while their eye movements were monitored. Data were recorded by an eye tracker: the entire driving session was recorded, registering the tester’s exact gaze position directly on the screen. Eye movements and simulator data were analyzed using Matlab (MathWorks, USA), and participants’ cognitive and metacognitive abilities were evaluated as well. No correlation was found between saccade properties, regions of interest, and simulator performance in either group, although participants with ADHD allocated more visual scan time (25%, SD = .13%) to a smaller segment of the dashboard area, whereas controls scanned the monitor more evenly (15%, SD = .05%). The visual scan pattern found among participants with ADHD indicates a distinct pattern of engagement-disengagement of spatial attention compared with non-ADHD participants, as well as lower attentional flexibility, which likely affects driving. Additionally, the lower the results on the cognitive tests, the worse the driving performance was.
None of the participants had prior driving experience, yet participants with ADHD distinctly demonstrated difficulties in scanning their surroundings, which may impair driving. This stresses the need to consider intervention programs, before driving lessons begin, to help adolescents with ADHD acquire proper driving habits, avoid typical driving errors, and achieve safer driving.

Keywords: ADHD, attentional shifting, driving simulator, eye movements

Procedia PDF Downloads 315
463 Real-Time Data Stream Partitioning over a Sliding Window in Real-Time Spatial Big Data

Authors: Sana Hamdi, Emna Bouazizi, Sami Faiz

Abstract:

In recent years, real-time spatial applications, like location-aware services and traffic monitoring, have become more and more important. Such applications result in dynamic environments where data as well as queries are continuously moving. As a result, a tremendous amount of real-time spatial data is generated every day, and the growth of the data volume seems to outpace the advance of our computing infrastructure. For instance, in real-time spatial Big Data, users expect to receive the results of each query within a short time period, regardless of the load on the system. But with the huge amount of real-time spatial data generated, system performance degrades rapidly, especially in overload situations. To solve this problem, we propose the use of data partitioning as an optimization technique. Traditional horizontal and vertical partitioning can increase the performance of the system and simplify data management, but they remain insufficient for real-time spatial Big Data, as they cannot deal with real-time and stream queries efficiently. Thus, in this paper, we propose a novel data partitioning approach for real-time spatial Big Data named VPA-RTSBD (Vertical Partitioning Approach for Real-Time Spatial Big Data). This contribution is an implementation of the Matching algorithm for traditional vertical partitioning: we first find the optimal attribute sequence using the Matching algorithm. Then, we propose a new cost model for database partitioning that keeps the data amount of each partition balanced and provides parallel execution guarantees for the most frequent queries. VPA-RTSBD aims to obtain a real-time partitioning scheme that deals with stream data. It improves the performance of query execution by maximizing the degree of parallel execution, which improves QoS (Quality of Service) in real-time spatial Big Data, especially with a huge volume of stream data.
The performance of our contribution is evaluated via simulation experiments. The results show that the proposed algorithm is both efficient and scalable, and that it outperforms comparable algorithms.
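The attribute-ordering step named above can be sketched as greedy nearest-neighbor chaining over Hamming distances between attribute usage vectors. The usage matrix and attribute names are invented for illustration, and this is only one simple reading of the Matching algorithm, not the paper's exact procedure.

```python
def hamming(u, v):
    """Hamming distance between two attribute usage vectors (one bit per query)."""
    return sum(a != b for a, b in zip(u, v))

def order_attributes(usage):
    """Greedy matching: chain attributes so that consecutive attributes have
    the most similar query-usage patterns (minimal Hamming distance)."""
    attrs = list(usage)
    order = [attrs.pop(0)]
    while attrs:
        last = order[-1]
        nxt = min(attrs, key=lambda a: hamming(usage[last], usage[a]))
        attrs.remove(nxt)
        order.append(nxt)
    return order

# Hypothetical usage matrix: rows are queries, 1 = the query touches the attribute.
usage = {"id": [1, 1, 1, 1],
         "x":  [1, 1, 0, 0],
         "y":  [1, 1, 0, 0],
         "ts": [0, 0, 1, 1]}
```

Attributes that end up adjacent in the returned order (here the co-accessed `x` and `y`) are natural candidates for the same vertical fragment, which is what lets frequent queries run against a single partition in parallel.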

Keywords: real-time spatial big data, quality of service, vertical partitioning, horizontal partitioning, matching algorithm, hamming distance, stream query

Procedia PDF Downloads 151
462 Factors Impacting Geostatistical Modeling Accuracy and Modeling Strategy of Fluvial Facies Models

Authors: Benbiao Song, Yan Gao, Zhuo Liu

Abstract:

Geostatistical modeling is the key technique for reservoir characterization, and the quality of geological models greatly influences the prediction of reservoir performance; however, few studies have been done to quantify the factors impacting geostatistical reservoir modeling accuracy. In this study, 16 fluvial prototype models were established to represent different degrees of geological complexity, and 6 cases ranging from 16 to 361 wells were defined to reproduce all 16 prototype models by different methodologies, including SIS, object-based, and MPFS algorithms, along with different constraint parameters. A modeling accuracy ratio was defined to quantify the influence of each factor, and ten realizations were averaged to represent each accuracy ratio under the same modeling conditions and parameter associations. In total, 5760 simulations were run to quantify the relative contribution of each factor to the simulation accuracy, and the results can be used as a strategy guide for facies modeling under similar conditions. It is found that data density, geological trend, and geological complexity have a great impact on modeling accuracy. Modeling accuracy may reach 90% when the channel sand width is at least 1.5 times the well spacing, under any condition, for the SIS and MPFS methods. When well density is low, a geological trend may increase the modeling accuracy from 40% to 70%, while the use of a proper variogram may contribute very little for the SIS method. This implies that when well data are dense enough to cover simple geobodies, little effort is needed to construct an acceptable model; when geobodies are complex and data are insufficient, it is better to construct a robust geological trend than to rely on a variogram function alone. For the object-based method, the modeling accuracy does not increase as clearly with data density as for the SIS method, but the models keep a geologically reasonable appearance when data density is low.
The MPFS method shows a trend similar to that of the SIS method, but the SIS method with a proper geological trend and a rational variogram may achieve better modeling accuracy than the MPFS method. This implies that the geological modeling strategy for a real reservoir case needs to be optimized by evaluating the dataset, geological complexity, geological constraint information, and the modeling objective.
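The modeling accuracy ratio described above can be expressed as the fraction of grid cells whose simulated facies matches the prototype, averaged over several realizations (the paper averages ten). A minimal sketch, assuming grids are stored as facies-code arrays; the function names and representation are illustrative, not the authors' code:

```python
import numpy as np

def accuracy_ratio(prototype, realization):
    """Fraction of grid cells whose simulated facies code matches the prototype."""
    prototype = np.asarray(prototype)
    realization = np.asarray(realization)
    return float(np.mean(prototype == realization))

def mean_accuracy(prototype, realizations):
    """Average the ratio over several realizations of the same modeling condition."""
    return float(np.mean([accuracy_ratio(prototype, r) for r in realizations]))
```

Averaging over realizations smooths the stochastic variation between individual simulations, so each reported ratio reflects the modeling condition rather than one random draw.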

Keywords: fluvial facies, geostatistics, geological trend, modeling strategy, modeling accuracy, variogram

Procedia PDF Downloads 253
461 Mathematical Modeling of Avascular Tumor Growth and Invasion

Authors: Meitham Amereh, Mohsen Akbari, Ben Nadler

Abstract:

Cancer has been recognized as one of the most challenging problems in biology and medicine. Aggressive tumors are a lethal type of cancer characterized by high genomic instability, rapid progression, invasiveness, and therapeutic resistance. Their behavior involves complicated molecular biology and consequential dynamics. Although tremendous effort has been devoted to developing therapeutic approaches, there is still a great need for new insights into the dark aspects of tumors. As one of the key requirements for better understanding the complex behavior of tumors, mathematical modeling, and continuum physics in particular, plays a pivotal role. Mathematical modeling can provide quantitative predictions of biological processes and help interpret complicated physiological interactions in the tumor microenvironment. The pathophysiology of aggressive tumors is strongly affected by extracellular cues such as the stresses produced by mechanical forces between the tumor and the host tissue. During tumor progression, the growing mass displaces the surrounding extracellular matrix (ECM), and, depending on the tissue stiffness, stress accumulates inside the tumor. The produced stress can influence the tumor by breaking adherens junctions. During this process, the tumor stops rapid proliferation and begins to remodel its shape to preserve the homeostatic equilibrium state. To achieve this, the tumor in turn upregulates epithelial-to-mesenchymal transition-inducing transcription factors (EMT-TFs). These EMT-TFs are involved in various signaling cascades, which are often associated with tumor invasiveness and malignancy. In this work, we modeled the tumor as a growing hyperelastic mass and investigated the effects of mechanical stress from the surrounding ECM on tumor invasion. Invasion is modeled as a volume-preserving inelastic evolution. In this framework, principal balance laws are considered for tumor mass, linear momentum, and diffusion of nutrients.
Also, the mechanical interaction between the tumor and the ECM is modeled using a Ciarlet constitutive strain energy function, and the dissipation inequality is used to model the volumetric growth rate. System parameters, such as the rates of nutrient uptake and cell proliferation, are obtained experimentally. To validate the model, human glioblastoma multiforme (hGBM) tumor spheroids were embedded in a Matrigel/alginate composite hydrogel and injected into a microfluidic chip to mimic the tumor's natural microenvironment. The invasion structure was analyzed by imaging the spheroid over time. Also, the expression of transcription factors involved in invasion was measured by immunostaining the tumor. The volumetric growth, stress distribution, and inelastic evolution of the tumors were predicted by the model. Results showed that the level of invasion correlates directly with the level of predicted stress within the tumor. Moreover, the invasion length measured by fluorescence imaging was shown to be related to the inelastic evolution of the tumors obtained from the model.
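While the full model couples balance laws with a constitutive strain energy, the characteristic slowdown of avascular spheroid growth as nutrients become limiting can be illustrated with a far simpler logistic equation for the spheroid radius, integrated by forward Euler. All parameter values below are hypothetical placeholders, not the experimentally fitted ones from this work:

```python
import numpy as np

def grow_spheroid(r0=100.0, s=0.05, r_max=400.0, dt=0.1, t_end=200.0):
    """Logistic growth of spheroid radius r (um) vs. time t (h):
    dr/dt = s * r * (1 - r / r_max), integrated by forward Euler.
    r0, s, r_max are illustrative placeholders."""
    ts = np.arange(0.0, t_end + dt, dt)
    r = np.empty_like(ts)
    r[0] = r0
    for i in range(1, len(ts)):
        r[i] = r[i - 1] + dt * s * r[i - 1] * (1.0 - r[i - 1] / r_max)
    return ts, r
```

The saturation radius r_max stands in for the nutrient-diffusion limit; in the paper's framework, further expansion beyond such a limit proceeds through the inelastic (invasive) evolution rather than volumetric growth.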

Keywords: cancer, invasion, mathematical modeling, microfluidic chip, tumor spheroids

Procedia PDF Downloads 103
460 A Topology-Based Dynamic Repair Strategy for Enhancing Urban Road Network Resilience under Flooding

Authors: Xuhui Lin, Qiuchen Lu, Yi An, Tao Yang

Abstract:

As global climate change intensifies, extreme weather events such as floods increasingly threaten urban infrastructure, making the vulnerability of urban road networks a pressing issue. Existing static repair strategies fail to adapt to the rapid changes in road network conditions during flood events, leading to inefficient resource allocation and suboptimal recovery. The main research gap lies in the lack of repair strategies that consider both the dynamic characteristics of networks and the progression of flood propagation. This paper proposes a topology-based dynamic repair strategy that adjusts repair priorities based on real-time changes in flood propagation and traffic demand. Specifically, a novel method is developed to assess and enhance the resilience of urban road networks during flood events. The method combines road network topological analysis, flood propagation modelling, and traffic flow simulation, introducing a local importance metric to dynamically evaluate the significance of road segments across different spatial and temporal scales. Using London's road network and rainfall data as a case study, the effectiveness of this dynamic strategy is compared to traditional strategies and the Transport for London (TfL) strategy. The most significant highlight of the research is that the dynamic strategy substantially reduced the number of stranded vehicles across different traffic demand periods, improving efficiency by up to 35.2%. The advantage of this method lies in its ability to adapt in real time to changes in network conditions, enabling more precise resource allocation and more efficient repair processes. This dynamic strategy offers significant value to urban planners, traffic management departments, and emergency response teams, helping them better respond to extreme weather events like floods, enhance overall urban resilience, and reduce economic losses and social impacts.
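The abstract does not specify the local importance metric, but the general idea of re-ranking repairs as the passable network changes can be sketched as a greedy loop: repair the flooded link whose endpoints are currently most central, then recompute on the updated network. The sketch below uses NetworkX betweenness centrality as a stand-in metric; it is an illustration of the approach, not the authors' algorithm:

```python
import networkx as nx

def dynamic_repair_order(graph, flooded_edges):
    """Greedy repair ordering: at each step, repair the flooded edge whose
    endpoints have the highest combined betweenness centrality in the
    currently passable network, then recompute. Stand-in for the paper's
    (unspecified) local importance metric."""
    flooded = {tuple(sorted(e)) for e in flooded_edges}
    order = []
    while flooded:
        passable = graph.copy()
        passable.remove_edges_from(flooded)  # closed links are impassable
        bc = nx.betweenness_centrality(passable)
        best = max(flooded, key=lambda e: bc.get(e[0], 0.0) + bc.get(e[1], 0.0))
        order.append(best)
        flooded.remove(best)
    return order
```

Recomputing centrality after each repair is what makes the ordering dynamic: a link that looked unimportant while its neighbours were closed can rise in priority once they reopen.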

Keywords: urban resilience, road networks, flood response, dynamic repair strategy, topological analysis

Procedia PDF Downloads 17
459 Application of Low Frequency Ac Magnetic Field for Controlled Delivery of Drugs by Magnetic Nanoparticles

Authors: K. Yu Vlasova, M. A. Abakumov, H. Wishwarsao, M. Sokolsky, N. V. Nukolova, A. G. Majouga, Y. I. Golovin, N. L. Klyachko, A. V. Kabanov

Abstract:

Introduction: Modern pharmaceutical science aims to create systems for combined therapy, diagnostics, drug delivery, and controlled release of active molecules to target cells. Magnetic nanoparticles (MNPs) are used to achieve this aim. MNPs can be applied in molecular diagnostics, magnetic resonance imaging (as T1/T2 contrast agents), drug delivery, and hyperthermia, and could improve the therapeutic effect of drugs. The most common MNP-based drug containers are liposomes, micelles, and polymeric molecules bonded to the MNP surface. Usually, superparamagnetic nanoparticles are used (typically about 5-6 nm in diameter), and all effects of applying a high-frequency magnetic field (MF) are based on Néel relaxation, which results in heating of the surrounding medium. In this work, we develop a new method to improve drug release from MNPs under a super-low-frequency MF. We suppose that under low-frequency MF exposure, Brownian relaxation dominates and MNP rotation can occur, leading to conformational changes and release of bioactive molecules immobilized on the MNP surface. The aim of this work was to synthesize different systems carrying an active drug (biopolymer-coated MNP nanoclusters with immobilized enzymes, and doxorubicin (Dox)-loaded magnetic liposomes/micelles) and to investigate the effect of a super-low-frequency MF on these drug containers. Methods: We synthesized magnetite MNPs with a magnetic core diameter of 7-12 nm. The MNPs were coated with a block copolymer of polylysine and polyethylene glycol. Superoxide dismutase 1 (SOD1) was electrostatically adsorbed on the surface of the clusters. Liposomes were prepared as follows: MNPs, phosphatidylcholine, and cholesterol were dispersed in chloroform, dried to a film, and then dispersed in distilled water and sonicated. Dox was added to the solution, the pH was adjusted to 7.4, and excess drug was removed by centrifugation through 3 kDa filters.
Results: Polylysine-coated MNPs formed nanosized clusters (observed by TEM) with an intensity-average diameter of 112±5 nm and a zeta potential of 12±3 mV. After low-frequency AC MF exposure, we observed changes in immobilized enzyme activity and in the hydrodynamic size of the clusters. We suppose that the biomolecules (enzymes) are released from the MNP surface, followed by additional aggregation of the complexes in the medium under the MF. Centrifugation of the nanosuspension after AC MF exposure resulted in an increased positive charge of the clusters and a change in enzyme concentration compared with the control sample without MF, confirming desorption of the negatively charged enzyme from the positively charged MNP surface. Dox-loaded magnetic liposomes had an average diameter of 160±8 nm and a polydispersity index (PDI) of 0.25±0.07. The liposomes were stable in distilled water and in PBS at pH 7.4 and 37 °C for a week. After MF application (10 min of exposure, 50 Hz, 230 mT), the liposome diameter rose to 190±10 nm and the PDI to 0.38±0.05. We attribute this to disruption and/or reorganization of the lipid bilayer, which changes drug release compared with the control without MF exposure. Conclusion: A new application of low-frequency AC MF for drug delivery and controlled drug release was demonstrated. The investigation was supported by RSF grant 14-13-00731 and grant K1-2014-022.
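The assumption that Brownian rotation, rather than Néel relaxation, is the relevant mechanism for these comparatively large clusters can be checked with the standard relaxation-time estimates τ_B = 3ηV_h/(k_B·T) and τ_N = τ₀·exp(K·V_m/(k_B·T)). In the sketch below, the viscosity, anisotropy constant, and attempt time are typical literature values for water and magnetite, not quantities measured in this work:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def brownian_time(d_hydro_m, eta=1.0e-3, temp=310.0):
    """Brownian rotational relaxation time tau_B = 3*eta*V_h / (k_B*T),
    using the hydrodynamic volume of a sphere of diameter d_hydro_m."""
    v_h = math.pi * d_hydro_m**3 / 6.0
    return 3.0 * eta * v_h / (K_B * temp)

def neel_time(d_core_m, anisotropy=1.1e4, tau0=1.0e-9, temp=310.0):
    """Neel relaxation time tau_N = tau0 * exp(K*V_m / (k_B*T)), with an
    assumed magnetite anisotropy constant K in J/m^3."""
    v_m = math.pi * d_core_m**3 / 6.0
    return tau0 * math.exp(anisotropy * v_m / (K_B * temp))
```

For a ~112 nm cluster the Brownian time is many orders of magnitude longer than the Néel time of a ~10 nm core, consistent with the expectation that slow mechanical rotation of the whole cluster, addressable by a low-frequency field, becomes the dominant response.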

Keywords: magnetic nanoparticles, low frequency magnetic field, drug delivery, controlled drug release

Procedia PDF Downloads 476
458 Increased Energy Efficiency and Improved Product Quality in Processing of Lithium Bearing Ores by Applying Fluidized-Bed Calcination Systems

Authors: Edgar Gasafi, Robert Pardemann, Linus Perander

Abstract:

For the production of lithium carbonate or hydroxide from lithium-bearing ores, thermal activation (calcination/decrepitation) is required to drive the phase transition in the mineral that enables acid or soda leaching in the downstream hydrometallurgical section. In this paper, traditional processing in the lithium industry is reviewed, and opportunities to reduce energy consumption and improve product quality and recovery rate are discussed. The conventional process approach is still based on rotary kiln calcination, a technology in use since the early days of lithium ore processing, albeit not significantly developed further since. A newer technology, at least for the lithium industry, is fluidized bed calcination. Decrepitation of lithium ore was investigated at Outotec's Frankfurt Research Centre. Focusing on fluidized bed technology, a study of the major process parameters (temperature and residence time) was performed at laboratory and larger bench scale, aiming for optimal product quality for subsequent processing. The technical feasibility was confirmed under optimal process conditions at pilot scale (400 kg/h feed input), providing the basis for industrial process design. Based on the experimental results, a comprehensive Aspen Plus flowsheet simulation was developed to quantify the mass and energy flows of the rotary kiln and fluidized bed systems. The results show a significant reduction in energy consumption and improved process performance in terms of temperature profile, product quality, and plant footprint. The major conclusion is that a substantial reduction in energy consumption can be achieved in processing lithium-bearing ores by using fluidized-bed-based systems. At the same time, and unlike the rotary kiln process, accurate temperature and residence time control is ensured in fluidized-bed systems, leading to a homogeneous temperature profile in the reactor, which prevents overheating and sintering of the solids and results in uniform product quality.
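At its simplest, the heat demand that such a flowsheet simulation balances is the sensible heat needed to bring the feed to calcination temperature plus the phase-transformation enthalpy of the mineral. The sketch below shows that structure only; the heat capacity and enthalpy values used in the test are illustrative placeholders, not figures from the Aspen Plus model:

```python
def calcination_heat(mass_kg, cp_kj_per_kg_k, t_in_c, t_calc_c, dh_kj_per_kg):
    """Heat demand (kJ) for calcining a batch of ore: sensible heat from the
    inlet temperature to the calcination temperature, plus the
    phase-transformation (decrepitation) enthalpy. All inputs are supplied
    by the caller; no material properties are hard-coded."""
    sensible = mass_kg * cp_kj_per_kg_k * (t_calc_c - t_in_c)
    transformation = mass_kg * dh_kj_per_kg
    return sensible + transformation
```

Flowsheet tools refine this picture with heat recovery between streams, which is one reason a fluidized bed system, with its good gas-solid contact, can reduce the net energy consumption reported above.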

Keywords: calcination, decrepitation, fluidized bed, lithium, spodumene

Procedia PDF Downloads 219
457 Effect of Discharge Pressure Conditions on Flow Characteristics in Axial Piston Pump

Authors: Jonghyuk Yoon, Jongil Yoon, Seong-Gyo Chung

Abstract:

In many industries that require a large amount of power, the axial piston pump is widely used as the main power source of a hydraulic system. The axial piston pump is a type of positive displacement pump that has several pistons in a circular array within a cylinder block. As the cylinder block and pistons rotate, the exposed ends of the pistons are constrained to follow the surface of the swash plate, so the pistons are driven to reciprocate axially, producing hydraulic power. In the present study, a numerical simulation of a full three-dimensional model of the axial piston pump was carried out using a commercial CFD code (ANSYS CFX 14.5). To account for the compression and expansion motion of the reciprocating pistons, moving boundary conditions were applied to that region as a function of the rotation angle. In addition, the pump, which uses hydraulic oil as the working fluid, is intentionally designed so that a small amount of oil leaks out to lubricate the moving parts. Since leakage can directly affect pump efficiency, evaluating the effect of oil leakage is very important. To predict the effect of the oil leakage on pump efficiency, we considered the leakage between the piston shoe and the swash plate by modeling a cylindrical feature at the end of the cylinder. To validate the numerical method used in this study, the numerical results for the flow rate at the discharge port were compared with experimental data, and good agreement between them was shown. Using the validated numerical method, the effect of the discharge pressure was also investigated. The results of the present study can provide useful information for the small axial piston pumps used in many manufacturing industries.
Acknowledgement: This research was financially supported by the “Next-generation construction machinery component specialization complex development program” through the Ministry of Trade, Industry and Energy (MOTIE) and Korea Institute for Advancement of Technology (KIAT).
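The swash-plate constraint that drives the moving boundary condition has a standard closed form: for a piston at pitch radius R on a plate inclined at swash angle α, the axial travel at shaft angle θ is z(θ) = R·tan(α)·(1 − cos θ), with axial velocity R·tan(α)·ω·sin(θ). A short sketch of this kinematic relation (the geometry values in any example run are hypothetical, not the pump studied here):

```python
import math

def piston_displacement(theta, pitch_radius, swash_angle):
    """Axial piston position (m) relative to outer dead point:
    z(theta) = R * tan(alpha) * (1 - cos(theta)).
    theta is the shaft rotation angle in radians."""
    return pitch_radius * math.tan(swash_angle) * (1.0 - math.cos(theta))

def piston_velocity(theta, pitch_radius, swash_angle, omega):
    """Axial piston velocity (m/s) at shaft speed omega (rad/s):
    dz/dt = R * tan(alpha) * omega * sin(theta)."""
    return pitch_radius * math.tan(swash_angle) * omega * math.sin(theta)
```

In the CFD setup described above, a function of this kind would prescribe the mesh motion of each cylinder region; the full stroke is 2·R·tan(α), reached at θ = π.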

Keywords: axial piston pump, CFD, discharge pressure, hydraulic system, moving boundary condition, oil leaks

Procedia PDF Downloads 238