Search results for: geochemical modelling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 1946

296 Development of a Paediatric Head Model for the Computational Analysis of Head Impact Interactions

Authors: G. A. Khalid, M. D. Jones, R. Prabhu, A. Mason-Jones, W. Whittington, H. Bakhtiarydavijani, P. S. Theobald

Abstract:

Head injury in childhood is a common cause of death or permanent disability from injury. However, despite its frequency and significance, there is little understanding of how a child’s head responds during injurious loading. Whilst Infant Post Mortem Human Subject (PMHS) experimentation is a logical approach to understanding injury biomechanics, it is the authors’ opinion that a lack of subject availability is hindering potential progress. Computer modelling adds great value when considering adult populations; however, its potential remains largely untapped for infant surrogates. The complexities of child growth and development, which result in age-dependent changes in anatomy, geometry and physical response characteristics, present new challenges for computational simulation. Further geometric challenges are presented by the intricate infant cranial bones, which are separated by sutures and fontanelles and demonstrate a visible fibre orientation. This study presents an FE model of a newborn infant’s head, developed from high-resolution computed tomography scans and informed by published tissue material properties. To mimic the fibre orientation of immature cranial bone, anisotropic properties were applied to the FE cranial bone model, with elastic moduli representing the bone response both parallel and perpendicular to the fibre orientation. Biofidelity of the computational model was confirmed by global validation against published PMHS data, replicating experimental impact tests with a series of computational simulations and comparing head kinematic responses. Numerical results confirm that the FE head model’s mechanical response is in favourable agreement with the PMHS drop test results.
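
The following minimal sketch only illustrates the kind of anisotropic material description mentioned above (different moduli parallel and perpendicular to the fibre direction); the numerical values are assumed placeholders, not the published tissue properties used in the study.

```python
# Illustrative sketch: a transversely isotropic compliance matrix (fibre axis = direction 1),
# as might be supplied to an FE material model. All values are assumptions for the example.
import numpy as np

E_par, E_perp = 3000.0, 1000.0   # MPa, moduli parallel/perpendicular to fibres (assumed)
nu, G = 0.22, 450.0              # Poisson's ratio and longitudinal shear modulus (assumed)

S = np.array([
    [1/E_par,   -nu/E_par,  -nu/E_par,  0,                     0,    0],
    [-nu/E_par,  1/E_perp,  -nu/E_perp, 0,                     0,    0],
    [-nu/E_par, -nu/E_perp,  1/E_perp,  0,                     0,    0],
    [0, 0, 0, 2*(1 + nu)/E_perp, 0,    0],   # shear in the isotropic (2-3) plane
    [0, 0, 0, 0,                 1/G,  0],   # longitudinal shear 1-3
    [0, 0, 0, 0,                 0,  1/G],   # longitudinal shear 1-2
])
C = np.linalg.inv(S)   # stiffness matrix that would be passed to the FE solver
print(np.round(C[:3, :3], 1))
```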

Keywords: finite element analysis, impact simulation, infant head trauma, material properties, post mortem human subjects

Procedia PDF Downloads 326
295 The Impact of Public Finance Management on Economic Growth and Development in South Africa

Authors: Zintle Sikhunyana

Abstract:

Management of public finance in many countries such as South Africa is affected by political decisions and by policies around fiscal decentralization amongst the government spheres. Economic success is said to be determined by efficient management of public finance and by the policies or strategies that are implemented to support efficient public finance management. Policymakers pay attention to how economic policies have been implemented and how they are directed towards ensuring stable development. This will allow policymakers to address economic challenges through the usage of fiscal policy parameters that are linked to the achieved rate of economic growth and development. Efficient public finance management reduces the likelihood of corruption, and corruption is said to have negative effects on economic growth and development. Corruption in public finance refers to an act of using funds for personal benefit. To achieve macroeconomic objectives, governments make use of government expenditure, and government expenditure is financed through tax revenue. The main aim of this paper is to investigate the potential impact of public finance management on economic growth and development in South Africa. Secondary data obtained from the South African Reserve Bank (SARB) and the World Bank for 1980-2020 has been utilized to achieve the research objectives. To test the impact of public finance management on economic growth and development, the study uses Seemingly Unrelated Regression Equations (SURE) modelling, which allows researchers to model multiple equations with interdependent variables. The advantages of using SUR are that it allows efficient estimation of relationships between variables by combining information from different equations, and that it permits testing restrictions that involve parameters in different equations. The findings show that there is a positive relationship between efficient public finance management and economic growth/development, and that efficient public finance management has an indirect positive impact on economic growth and development. Corruption has a negative impact on economic growth and development: it results in an inefficient allocation of government resources and thereby impairs economic growth and development. The study recommends that governments that aim to stimulate economic growth and development should target and strengthen public finance management policies or strategies.
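
As an illustration of the estimation approach only (not the authors' code or data), the sketch below fits a two-equation SUR system by feasible GLS with NumPy; all variables and coefficients are hypothetical placeholders.

```python
# Minimal two-equation SUR via feasible GLS. Synthetic data stands in for the
# growth and development equations; nothing here reproduces the study's results.
import numpy as np

rng = np.random.default_rng(0)
n = 41  # e.g. annual observations, 1980-2020

X1 = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])   # equation 1 regressors
X2 = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])   # equation 2 regressors
y1 = X1 @ np.array([0.5, 0.8, -0.3]) + rng.normal(scale=0.5, size=n)
y2 = X2 @ np.array([0.2, 0.6, -0.2]) + rng.normal(scale=0.5, size=n)

# Step 1: equation-by-equation OLS to obtain residuals
b1 = np.linalg.lstsq(X1, y1, rcond=None)[0]
b2 = np.linalg.lstsq(X2, y2, rcond=None)[0]
resid = np.column_stack([y1 - X1 @ b1, y2 - X2 @ b2])

# Step 2: estimate the cross-equation error covariance and apply GLS jointly
sigma = resid.T @ resid / n
omega_inv = np.kron(np.linalg.inv(sigma), np.eye(n))
X = np.block([[X1, np.zeros_like(X2)], [np.zeros_like(X1), X2]])
y = np.concatenate([y1, y2])
beta_sur = np.linalg.solve(X.T @ omega_inv @ X, X.T @ omega_inv @ y)
print(beta_sur)  # stacked coefficients: equation 1 first, then equation 2
```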

Keywords: corruption, economic growth, economic development, public finance management, fiscal decentralization

Procedia PDF Downloads 201
294 2D-Numerical Modelling of Local Scour around a Circular Pier in Steady Current

Authors: Mohamed Rajab Peer Mohamed, Thiruvenkatasamy Kannabiran

Abstract:

In the present investigation, the scour around a circular pier subjected to a steady current was studied numerically using the two-dimensional MIKE21 Flow Model (FM) and Sand Transport (ST) Module developed by the Danish Hydraulic Institute (DHI), Denmark. An unstructured flexible mesh was generated for a rectangular flume 10 m wide, 1 m deep, and 30 m long. A grain size of d50 = 0.16 mm, a sediment gradation of 1.16, a pier diameter of D = 30 mm, and a depth-averaged current velocity of U = 0.449 m/s were considered in the model. The estimated scour depth obtained from this model was validated, and the results of the model show good agreement with flume experimental results. In order to estimate the scour depth, several simulations were made for three cases, viz., Case I: change in the sediment transport description in the numerical model, i.e., (i) the Engelund-Hansen model, (ii) the Engelund-Fredsøe model, and (iii) the Van Rijn model; Case II: change in current velocity for a constant pile diameter D = 0.03 m; and Case III: change in pier diameter for a constant depth-averaged current speed U = 0.449 m/s. In Case I simulations, the results indicate that the scour depth S/D is of the order of 1.73 for the Engelund-Hansen model, 0.64 for the Engelund-Fredsøe model and 0.46 for the Van Rijn model. The scour depth estimates using the Engelund-Hansen method compare well with the experimental results. In Case II, the simulations show that the scour depth increases with an increasing current component of the flow. In Case III simulations, the results indicate that the scour depth increases with an increase in pier diameter and attains a steady value when the Froude number > 2.71. All the results of the numerical simulations match well with reported experimental values. Hence, this MIKE21 FM Sand Transport model can be used as a suitable tool to estimate the scour depth for field applications. Moreover, where the maximum scour depth must be predicted in order to design suitable scour protection, the Engelund-Hansen method can be adopted to estimate the scour depth in the steady current region.
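
For a rough, independent point of comparison (not part of the MIKE21 workflow described above), the widely used empirical HEC-18/CSU pier-scour equation can be evaluated with the flume parameters quoted in the abstract; the flow depth below is an assumed value, since it is not stated.

```python
# HEC-18 / CSU equilibrium pier scour: ys/y = 2.0*K1*K2*K3*(D/y)^0.65*Fr^0.43
import math

D = 0.03            # pier diameter (m), from the abstract
y = 0.20            # flow depth (m) -- assumed, not stated in the abstract
U = 0.449           # depth-averaged velocity (m/s), from the abstract
K1 = K2 = K3 = 1.0  # correction factors: circular pier, aligned flow, plane bed

Fr = U / math.sqrt(9.81 * y)                                  # Froude number
ys = 2.0 * y * K1 * K2 * K3 * (D / y) ** 0.65 * Fr ** 0.43    # scour depth (m)
print(f"Fr = {Fr:.2f}, predicted ys = {ys*1000:.1f} mm, ys/D = {ys/D:.2f}")
```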

Keywords: circular pier, MIKE21, numerical model, scour, sediment transport

Procedia PDF Downloads 317
293 Effective Wind-Induced Natural Ventilation in a Residential Apartment Typology

Authors: Tanvi P. Medshinge, Prasad Vaidya, Monisha E. Royan

Abstract:

In India, cooling loads in the residential sector are a major contributor to its total energy consumption. Due to the increasing cooling need, the market penetration of air-conditioners is expected to rise further. Natural Ventilation (NV), however, possesses great potential to save significant energy consumption, especially for residential buildings in moderate climates. As multifamily residential apartment buildings are designed by repetitive use of prototype designs, deriving individual NV-based design prototype solutions for a combination of different wind incidence angles and orientations would provide a significant opportunity to address the rise in cooling loads in the residential sector. This paper presents the results of the NV performance of a selected prototype apartment design with a cluster of four units in Pune, India, and an attempt to improve the NV performance through design modifications. The water table apparatus, a physical modelling tool, is used to study the flow patterns and simulate wind-induced NV performance. Quantification of NV performance is done by post-processing images captured from video recordings in terms of the percentage of area with good and poor access to ventilation. NV performance of the existing design for eight wind incidence angles showed that, of the cluster of four units, the windward units had good access to ventilation in all rooms, while the leeward units had lower access to ventilation, with the bedrooms in the leeward units having the least access. With the design modifications, the results showed improved performance in all the units for all wind incidence angles, to more than 80% good access to ventilation. Some units showed an additional improvement to more than 90% good access to ventilation. This process of design and performance evaluation improved some individual units from 0% to 100% good access to ventilation. The results demonstrate the ease of use and the power of the water table apparatus for performance-based design to simulate wind-induced NV.
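
A minimal sketch of the kind of image post-processing described above (classifying floor area as having good or poor access to ventilation) is given below; the thresholding rule, frame data and floor mask are assumptions, not the authors' exact pipeline.

```python
# Threshold a greyscale flow-visualisation frame and report the percentage of
# floor area with good access to ventilation. Placeholder data throughout.
import numpy as np

frame = np.random.rand(480, 640)        # placeholder for a captured video frame (0-1)
floor_mask = np.ones_like(frame, bool)  # placeholder mask of the unit's floor plan
threshold = 0.5                         # assumed intensity cut-off for "good" ventilation

good = (frame > threshold) & floor_mask
good_pct = 100.0 * good.sum() / floor_mask.sum()
print(f"Area with good access to ventilation: {good_pct:.1f}%")
```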

Keywords: fluid dynamics, prototype design, natural ventilation, simulations, water table apparatus, wind incidence angles

Procedia PDF Downloads 229
292 Initial Palaeotsunami and Historical Tsunami in the Makran Subduction Zone of the Northwest Indian Ocean

Authors: Mohammad Mokhtari, Mehdi Masoodi, Parvaneh Faridi

Abstract:

The history of tsunami-generating earthquakes along the Makran Subduction Zone provides evidence of the potential tsunami hazard for the whole coastal area. In comparison with other subduction zones in the world, the Makran region of southern Pakistan and southeastern Iran exhibits low seismicity. It is also one of the least studied areas of the northwest Indian Ocean with regard to tsunami research. We present a review of studies dealing with historical tsunamis and ongoing palaeotsunami work supported by the IGCP of UNESCO in the Makran Subduction Zone. The historical record presented here includes about nine tsunamis in the Makran Subduction Zone, of which over seven occurred in the eastern Makran. Tsunamis are not as common in the western Makran as in the eastern Makran, where a database of historical events exists. The historically well-documented event is the 1945 earthquake, with a moment magnitude of 8.1, and the associated tsunami in the western and eastern Makran. There are no details as to whether a tsunami was generated by a seismic event before 1945 off the western Makran, but several potentially large tsunamigenic events in the MSZ before 1945 occurred in 325 B.C., 1008, 1483, 1524, 1765, 1851, 1864, and 1897. Here we present new findings from a historical point of view, and we would like to emphasize that the area warrants more intensive research. As mentioned above, a palaeotsunami (geological evidence) study is now being planned, and here we present the results of its first phase. From a risk point of view, the study shows as a preliminary result that within 20 minutes the waves reach the Iranian, Pakistani and Omani coastal zones as highly destructive tsunami waves capable of extensive inundation. It is important to note that the coastal areas of all states surrounding the MSZ are being developed very rapidly, so any event would have a devastating effect on this region. Although several papers have been published on modelling, seismology, and tsunami deposits in the last decades, Makran remains a forgotten subduction zone, and more data such as the main crustal structure, fault locations, and their related parameters are required.

Keywords: historical tsunami, Indian Ocean, Makran subduction zone, palaeotsunami

Procedia PDF Downloads 131
291 Hybrid Nanostructures of Acrylonitrile Copolymers

Authors: A. Sezai Sarac

Abstract:

Acrylonitrile (AN) copolymers with typical comonomers of vinyl acetate (VAc) or methyl acrylate (MA) exhibit better mechanical behavior than the homopolymer. To increase the processability of the conjugated polymer and to obtain a hybrid nanostructure, multi-step emulsion polymerization was applied. Such products could be used in, e.g., drug-delivery systems, biosensors, gas sensors, and electronic components. Incorporation of a number of flexible comonomers weakens the dipolar interactions among CN groups and thereby decreases the melting point or increases the decomposition temperature of PAN-based copolymers. Hence, it is important to consider the effect of the comonomer on the properties of PAN-based copolymers. The comonomer has a significant effect on the thermal behavior of acrylonitrile-vinyl acetate (AN-VAc) copolymers, which are also of interest as precursors in the production of high-strength carbon fibers. AN is copolymerized with one or two comonomers, particularly with vinyl acetate. The copolymer of AN and VAc can be used either as a plastic (VAc > 15 wt %) or as microfibers (VAc < 15 wt %). AN provides the copolymer with good processability and electrochemical and thermal stability; VAc provides the mechanical stability. The free radical copolymerization of AN and VAc, core-shell structures of polypyrrole composites, and nanofibers of poly(m-anthranilic acid)/polyacrylonitrile blends were recently studied. Free radical copolymerization of acrylonitrile (AN) with different comonomers, i.e., acrylates and styrene, was realized using ammonium persulfate (APS) in the presence of a surfactant, and in-situ polymerization of conjugated polymers was performed in this reaction medium to obtain core-shell nanoparticles. Nanofibers of such nanoparticles were obtained by electrospinning. Morphological properties of the nanofibers are investigated by scanning electron microscopy (SEM) and atomic force microscopy (AFM). Nanofibers are characterized using Fourier Transform Infrared - Attenuated Total Reflectance spectrometry (FTIR-ATR), Nuclear Magnetic Resonance Spectroscopy (1H-NMR), differential scanning calorimetry (DSC), thermal gravimetric analysis (TGA), and Electrochemical Impedance Spectroscopy. The electrochemical impedance results of the nanofibers were fitted to an equivalent circuit by equivalent circuit modelling (ECM).
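
As a hedged illustration of equivalent circuit modelling (ECM) of impedance spectra, the sketch below fits a simple Randles circuit with SciPy; the circuit choice, frequencies and parameter values are assumptions for the example, not the circuit or data reported in the study.

```python
# Fit Z(w) = Rs + Rct/(1 + j*w*Cdl*Rct) to synthetic impedance data by least squares.
import numpy as np
from scipy.optimize import least_squares

def randles_z(params, w):
    Rs, Rct, Cdl = params
    return Rs + Rct / (1 + 1j * w * Cdl * Rct)

def residuals(params, w, z_meas):
    diff = randles_z(params, w) - z_meas
    return np.concatenate([diff.real, diff.imag])

w = 2 * np.pi * np.logspace(-1, 5, 60)                  # angular frequencies (rad/s)
z_meas = randles_z([30.0, 500.0, 2e-5], w)              # synthetic "measured" spectrum
z_meas = z_meas + np.random.default_rng(1).normal(scale=1.0, size=w.size)  # noise on the real part

fit = least_squares(residuals, x0=[10.0, 100.0, 1e-6], args=(w, z_meas))
print("Rs, Rct, Cdl =", fit.x)
```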

Keywords: core shell nanoparticles, nanofibers, acrylonitrile copolymers, hybrid nanostructures

Procedia PDF Downloads 383
290 Synthesis of Fluorescent PET-Type “Turn-Off” Triazolyl Coumarin Based Chemosensors for the Sensitive and Selective Sensing of Fe³⁺ Ions in Aqueous Solutions

Authors: Aidan Battison, Neliswa Mama

Abstract:

Environmental pollution by ionic species has been identified as one of the biggest challenges to the sustainable development of communities. The widespread use of organic and inorganic chemical products and the release of toxic chemical species from industrial waste have resulted in a need for advanced monitoring technologies for environmental protection, remediation and restoration. Some of the disadvantages of conventional sensing methods include expensive instrumentation, well-controlled experimental conditions, time-consuming procedures and sometimes complicated sample preparation. On the contrary, the development of fluorescent chemosensors for biological and environmental detection of metal ions has attracted a great deal of attention due to their simplicity, high selectivity, eidetic recognition, rapid response and real-life monitoring. Coumarin derivatives S1 and S2 (Scheme 1) containing 1,2,3-triazole moieties at the 3-position have been designed and synthesized from azide and alkyne derivatives by CuAAC “click” reactions for the detection of metal ions. These compounds displayed a strong preference for Fe3+ ions, with complexation resulting in fluorescence quenching through photo-induced electron transfer (PET) according to the “sphere of action” static quenching model. The tested metal ions included Cd2+, Pb2+, Ag+, Na+, Ca2+, Cr3+, Fe3+, Al3+, Cd2+, Ba2+, Cu2+, Co2+, Hg2+, Zn2+ and Ni2+. The detection limits of S1 and S2 were determined to be 4.1 and 5.1 µM, respectively. Compound S1 displayed the greatest selectivity towards Fe3+ in the presence of competing metal cations. S1 could also be used for the detection of Fe3+ in a mixture of CH3CN/H2O. The binding stoichiometry between S1 and Fe3+ was determined by using both Job's plot and Benesi-Hildebrand analysis. The binding was shown to occur in a 1:1 ratio between the sensor and the metal cation. Reversibility studies between S1 and Fe3+ were conducted by using EDTA. The binding site of Fe3+ on S1 was determined by using 13C NMR and molecular modelling studies. Complexation was suggested to occur between the lone pair of electrons from the coumarin carbonyl and the triazole-carbon double bond.
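
For illustration of how a fluorescence detection limit of this kind is commonly estimated (LOD = 3σ/slope from a calibration of emission against [Fe3+]), a minimal sketch follows; the calibration points and blank noise are made-up placeholders, not the data behind the 4.1/5.1 µM values.

```python
# Linear calibration of quenched emission vs [Fe3+] and the 3-sigma detection limit.
import numpy as np

conc = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])        # [Fe3+] in uM (assumed)
intensity = np.array([1000, 905, 812, 717, 628, 531])    # emission intensity, a.u. (assumed)

slope, intercept = np.polyfit(conc, intensity, 1)
sigma_blank = 15.0            # standard deviation of repeated blank measurements (assumed)
lod = 3 * sigma_blank / abs(slope)
print(f"slope = {slope:.1f} a.u./uM, LOD = {lod:.2f} uM")
```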

Keywords: chemosensor, "click" chemistry, coumarin, fluorescence, static quenching, triazole

Procedia PDF Downloads 163
289 Development of Thermal Regulating Textile Material Consisted of Macrocapsulated Phase Change Material

Authors: Surini Duthika Fernandopulle, Kalamba Arachchige Pramodya Wijesinghe

Abstract:

Macrocapsules containing the phase change material (PCM) PEG 4000 as the core and calcium alginate as the shell were synthesized by an in-situ polymerization process, and their suitability for textile applications was studied. The PCM macrocapsules were sandwiched between two polyurethane foams at regular intervals, and the sandwiched foams were subsequently covered with 100% cotton woven fabrics. According to the mathematical modelling and calculations, 46 capsules were required to provide cooling for a period of 2 hours at 56 °C, so a panel of 10 cm x 10 cm area with 25 parts (5 capsules in each of 9 parts, with the remaining 16 parts left empty for air permeability) was effectively merged into one textile material without changing the textile's original properties. First, the available cooling techniques related to textiles were considered, and the cooling techniques best suited to Sri Lankan climatic conditions were selected using a survey of the Sri Lankan public based on the ASHRAE 55-2010 standard; the survey consisted of 19 questions under 3 sections categorized as general information, thermal comfort sensation and requirement for Personal Cooling Garments (PCG). The results indicated that during the daytime the majority of respondents feel warm, and during the nighttime the majority also responded as feeling slightly warm. The survey further revealed that around 85% of the respondents are willing to accept a PCG. The developed panels were characterized using Fourier-transform infrared spectroscopy (FTIR) and thermogravimetric analysis (TGA); the FTIR findings showed that the macrocapsules consisted of PEG 4000 as the core material and calcium alginate as the shell material, and the TGA findings showed that the capsules had average weight percentages of 61.9% for the core and 34.7% for the shell. After heating both the control samples and the samples incorporating PCM panels, it was found that the temperature of the control sample continued to increase beyond 56 °C, whereas the sample incorporating PCM panels began to regulate its temperature at 56 °C, preventing a temperature increase beyond 56 °C.
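
A back-of-the-envelope sketch of the capsule-count calculation described above is given below; the heat load, latent heat and capsule mass are assumed figures used only to show the structure of the calculation (stored energy = mass × latent heat over the 2-hour period), not the values used in the study.

```python
# Estimate how many PCM macrocapsules are needed to absorb a given heat load for 2 hours.
heat_load_w = 5.0            # assumed cooling power to absorb over the panel (W)
duration_s = 2 * 3600        # 2 hours, as stated in the abstract
latent_heat_j_per_g = 180.0  # assumed latent heat of fusion of PEG 4000 (J/g)
pcm_per_capsule_g = 4.35     # assumed PCM core mass per macrocapsule (g)

energy_to_absorb = heat_load_w * duration_s                      # total heat to store (J)
capsules = energy_to_absorb / (latent_heat_j_per_g * pcm_per_capsule_g)
print(f"Energy to absorb: {energy_to_absorb/1000:.0f} kJ -> ~{capsules:.0f} capsules")
```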

Keywords: phase change materials, thermal regulation, textiles, macrocapsules

Procedia PDF Downloads 127
288 A Next Generation Multi-Scale Modeling Theatre for in silico Oncology

Authors: Safee Chaudhary, Mahnoor Naseer Gondal, Hira Anees Awan, Abdul Rehman, Ammar Arif, Risham Hussain, Huma Khawar, Zainab Arshad, Muhammad Faizyab Ali Chaudhary, Waleed Ahmed, Muhammad Umer Sultan, Bibi Amina, Salaar Khan, Muhammad Moaz Ahmad, Osama Shiraz Shah, Hadia Hameed, Muhammad Farooq Ahmad Butt, Muhammad Ahmad, Sameer Ahmed, Fayyaz Ahmed, Omer Ishaq, Waqar Nabi, Wim Vanderbauwhede, Bilal Wajid, Huma Shehwana, Muhammad Tariq, Amir Faisal

Abstract:

Cancer is a manifestation of multifactorial deregulations in biomolecular pathways. These deregulations arise from the complex multi-scale interplay between cellular and extracellular factors. Such multifactorial aberrations at the gene, protein, and extracellular scales need to be investigated systematically towards decoding the underlying mechanisms and orchestrating therapeutic interventions for patient treatment. In this work, we propose ‘TISON’, a next-generation web-based multiscale modeling platform for clinical systems oncology. TISON’s unique modeling abstraction allows a seamless coupling of information from biomolecular networks, cell decision circuits, extra-cellular environments, and tissue geometries. The platform can undertake multiscale sensitivity analysis towards in silico biomarker identification and drug evaluation on cellular phenotypes in user-defined tissue geometries. Furthermore, integration of cancer expression databases such as The Cancer Genome Atlas (TCGA) and the Human Protein Atlas (HPA) facilitates the development of personalized therapeutics. TISON is the next evolution of multiscale cancer modeling and simulation platforms and provides a ‘zero-code’ model development, simulation, and analysis environment for application in clinical settings.

Keywords: systems oncology, cancer systems biology, cancer therapeutics, personalized therapeutics, cancer modelling

Procedia PDF Downloads 222
287 Physics-Based Earthquake Source Models for Seismic Engineering: Analysis and Validation for Dip-Slip Faults

Authors: Percy Galvez, Anatoly Petukhin, Paul Somerville, Ken Miyakoshi, Kojiro Irikura, Daniel Peter

Abstract:

Physics-based dynamic rupture modelling is necessary for estimating parameters such as rupture velocity and slip rate function that are important for ground motion simulation but poorly resolved by observations, e.g. by seismic source inversion. In order to generate a large number of physically self-consistent rupture models, whose rupture process is consistent with the spatio-temporal heterogeneity of past earthquakes, we use multicycle simulations under the heterogeneous rate-and-state (RS) friction law for a 45° dip-slip fault. We performed a parametrization study by fully dynamic rupture modeling, and then a set of spontaneous source models was generated in a large magnitude range (Mw > 7.0). In order to validate the rupture models, we compare the scaling relations of the modeled rupture area S, the average slip Dave and the slip asperity area Sa versus seismic moment Mo with similar scaling relations from source inversions. Ground motions were also computed from our models. Their peak ground velocities (PGV) agree well with the GMPE values. We obtained good agreement of the permanent surface offset values with empirical relations. From the heterogeneous rupture models, we analyzed parameters which are critical for ground motion simulations, i.e. distributions of slip, slip rate, rupture initiation points, rupture velocities, and source time functions. We studied cross-correlations between them and with the friction weakening distance Dc, the only initial heterogeneity parameter in our modeling. The main findings are: (1) high slip-rate areas coincide with or are located on an outer edge of the large slip areas, (2) ruptures have a tendency to initiate in small Dc areas, and (3) high slip-rate areas correlate with areas of small Dc, large rupture velocity and short rise-time.
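
As a small aside for readers comparing scaling relations against seismic moment Mo, the standard Hanks-Kanamori conversion between moment and moment magnitude is shown below; the example moment is arbitrary and not taken from the study.

```python
# Mw = (2/3) * (log10(Mo) - 9.1), with Mo in N*m (Hanks & Kanamori convention).
import math

def moment_magnitude(mo_newton_metre: float) -> float:
    return (2.0 / 3.0) * (math.log10(mo_newton_metre) - 9.1)

mo = 4.0e19                       # example seismic moment in N*m
print(f"Mo = {mo:.1e} N*m -> Mw = {moment_magnitude(mo):.2f}")
```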

Keywords: earthquake dynamics, strong ground motion prediction, seismic engineering, source characterization

Procedia PDF Downloads 144
286 Modelling of Recovery and Application of Low-Grade Thermal Resources in the Mining and Mineral Processing Industry

Authors: S. McLean, J. A. Scott

Abstract:

This research focuses on improving sustainable operation through the recovery and reuse of waste heat in process water streams, an area in the mining industry that is often overlooked. There are significant advantages to this approach, including economic and environmental benefits. The smelting process in the mining industry presents an opportunity to recover waste heat and apply it to alternative uses, thereby enhancing the overall process. This applied research has been conducted at the Sudbury Integrated Nickel Operations smelter site, in particular on the water cooling towers. The aim was to determine and optimize methods for appropriate recovery and subsequent upgrading of thermally low-grade heat lost from the water cooling towers in a manner that makes it useful for repurposing in applications such as an acid plant. This would be valuable to mining companies as an opportunity to reduce the cost of the process, as well as to decrease environmental impact and primary fuel usage. The waste heat from the cooling towers needs to be upgraded before it can be beneficially applied, as lower temperatures result in a decrease in the number of potential applications. Temperature and flow rate data were collected from the water cooling towers at an acid plant over two years. The research includes process control strategies and the development of a model capable of determining whether the proposed heat recovery technique is economically viable, as well as assessing the environmental impact of the reduction in net energy consumption by the process. Therefore, comprehensive cost and impact analyses are carried out to determine the best area of application for the recovered waste heat. This method will allow engineers to easily identify the value of thermal resources available to them and determine whether a full feasibility study should be carried out. The rapid scoping model developed will be applicable to any site that generates large amounts of waste heat. Results show that heat pumps are an economically viable solution for this application, allowing for reduced cost and CO₂ emissions.
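
The sketch below gives rough screening arithmetic of the kind used when judging whether a heat pump can economically upgrade low-grade cooling tower heat; all temperatures, efficiencies and duties are assumptions for illustration, not figures from the Sudbury study.

```python
# Rough heat pump screening: estimate COP from a Carnot fraction, then the
# electricity needed to deliver a given heat duty at a higher temperature.
source_temp_c = 35.0        # cooling tower water (assumed)
sink_temp_c = 80.0          # useful process temperature, e.g. an acid plant duty (assumed)
carnot_fraction = 0.45      # real heat pumps typically reach ~40-55% of the Carnot COP

t_hot_k = sink_temp_c + 273.15
t_cold_k = source_temp_c + 273.15
cop = carnot_fraction * t_hot_k / (t_hot_k - t_cold_k)   # heating COP estimate

heat_delivered_mw = 2.0                                   # assumed recovered duty
electricity_mw = heat_delivered_mw / cop
print(f"COP ~ {cop:.1f}; {heat_delivered_mw} MW of heat needs ~{electricity_mw:.2f} MW of electricity")
```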

Keywords: environment, heat recovery, mining engineering, sustainability

Procedia PDF Downloads 110
285 Exploration of Hydrocarbon Unconventional Accumulations in the Argillaceous Formation of the Autochthonous Miocene Succession in the Carpathian Foredeep

Authors: Wojciech Górecki, Anna Sowiżdżał, Grzegorz Machowski, Tomasz Maćkowski, Bartosz Papiernik, Michał Stefaniuk

Abstract:

The article presents results of a project which aims at evaluating the possibilities of effective development and exploitation of natural gas from the argillaceous series of the Autochthonous Miocene in the Carpathian Foredeep. To achieve this objective, the research team developed a unique methodology of processing and interpretation, based on world trends but adjusted to the data, local variations and petroleum characteristics of the area. In order to determine the zones in which maximum volumes of hydrocarbons might have been generated and preserved as shale gas reservoirs, as well as to identify the most preferable well sites where the largest gas accumulations are anticipated, a number of tasks were accomplished. Evaluation of the petrophysical properties and hydrocarbon saturation of the Miocene complex is based on laboratory measurements as well as interpretation of well logs and archival data. The studies apply mercury porosimetry (MICP), micro-CT and nuclear magnetic resonance imaging (using the Rock Core Analyzer). For a prospective location (e.g., the central part of the Carpathian Foredeep, the Brzesko-Wojnicz area), reprocessing and reinterpretation of detailed seismic survey data have been carried out with the use of integrated geophysical investigations. Construction of quantitative, structural and parametric models for selected areas of the Carpathian Foredeep is performed on the basis of integrated, detailed 3D computer models. Modelling is carried out with Schlumberger's Petrel software. Finally, prospective zones are spatially contoured in the form of a regional 3D grid, which will be the framework for generation modelling and comprehensive parametric mapping, allowing for spatial identification of the most prospective zones of unconventional gas accumulation in the Carpathian Foredeep. Preliminary results of the research indicate a potentially prospective area for the occurrence of unconventional gas accumulations in the Polish part of the Carpathian Foredeep.

Keywords: autochthonous Miocene, Carpathian foredeep, Poland, shale gas

Procedia PDF Downloads 228
284 Bayesian Parameter Inference for Continuous Time Markov Chains with Intractable Likelihood

Authors: Randa Alharbi, Vladislav Vyshemirsky

Abstract:

Systems biology is an important field in science which focuses on studying the behaviour of biological systems. Modelling is required to produce a detailed description of the elements of a biological system, their function, and their interactions. A well-designed model requires selecting a suitable mechanism which can capture the main features of the system, defining the essential components of the system, and representing an appropriate law that can define the interactions between its components. Complex biological systems exhibit stochastic behaviour. Thus, probabilistic models are suitable for describing and analysing biological systems. The Continuous-Time Markov Chain (CTMC) is one such probabilistic model; it describes the system as a set of discrete states with continuous-time transitions between them. The system is then characterised by a set of probability distributions that describe the transition from one state to another at a given time. The evolution of these probabilities through time can be obtained from the chemical master equation, which is analytically intractable but can be simulated. Uncertain parameters of such a model can be inferred using methods of Bayesian inference. Yet, inference in such a complex system is challenging as it requires the evaluation of the likelihood, which is intractable in most cases. There are different statistical methods that allow simulating from the model despite the intractability of the likelihood. Approximate Bayesian computation is a common approach for tackling inference which relies on simulation of the model to approximate the intractable likelihood. Particle Markov chain Monte Carlo (PMCMC) is another approach which is based on using sequential Monte Carlo to estimate the intractable likelihood. However, both methods are computationally expensive. In this paper we discuss the efficiency and possible practical issues of each method, taking into account their computational time. We demonstrate likelihood-free inference by analysing a model of the Repressilator using both methods. A detailed investigation is performed to quantify the difference between these methods in terms of efficiency and computational cost.
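
To make the likelihood-free idea concrete, the sketch below runs ABC rejection for a CTMC, using a simple birth-death process in place of the Repressilator; the model, priors, summary statistic and tolerance are illustrative assumptions, not the study's settings.

```python
# ABC rejection for a CTMC: simulate with the Gillespie algorithm, keep parameter
# draws whose simulated summary statistic is close to the observed one.
import numpy as np

rng = np.random.default_rng(0)

def gillespie_birth_death(birth, death, x0=10, t_end=10.0):
    """Exact stochastic simulation: births at constant rate, deaths at rate death*x."""
    t, x = 0.0, x0
    while t < t_end:
        rates = np.array([birth, death * x])
        total = rates.sum()
        if total == 0:
            break
        t += rng.exponential(1.0 / total)
        x += 1 if rng.random() < rates[0] / total else -1
    return x  # summary statistic: final population size

x_obs = gillespie_birth_death(birth=5.0, death=0.3)   # "observed" data from known parameters

accepted = []
for _ in range(5000):
    birth = rng.uniform(0.0, 10.0)                    # prior on the birth rate
    death = rng.uniform(0.0, 1.0)                     # prior on the death rate
    if abs(gillespie_birth_death(birth, death) - x_obs) <= 2:   # tolerance
        accepted.append((birth, death))

accepted = np.array(accepted)
print("accepted samples:", len(accepted), "posterior mean:", accepted.mean(axis=0))
```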

Keywords: approximate Bayesian computation (ABC), continuous-time Markov chains, sequential Monte Carlo, particle Markov chain Monte Carlo (PMCMC)

Procedia PDF Downloads 202
283 Modelling the Effect of Alcohol Consumption on the Accelerating and Braking Behaviour of Drivers

Authors: Ankit Kumar Yadav, Nagendra R. Velaga

Abstract:

Driving under the influence of alcohol impairs driving performance and increases crash risks worldwide. The present study investigated the effect of different Blood Alcohol Concentrations (BAC) on the accelerating and braking behaviour of drivers with the help of driving simulator experiments. Eighty-two licensed Indian drivers drove in the rural road environment designed in the driving simulator at BAC levels of 0.00%, 0.03%, 0.05%, and 0.08%, respectively. Driving performance was analysed with the help of vehicle control performance indicators such as the mean acceleration and mean brake pedal force of the participants. Preliminary analysis showed an increase in mean acceleration and mean brake pedal force with increasing BAC levels. Generalized linear mixed models were developed to quantify the effect of the different alcohol levels and explanatory variables such as the driver's age, gender and other driver characteristics on the driving performance indicators. Alcohol use was found to be a significant factor affecting the accelerating and braking performance of the drivers. The acceleration model results indicated that the mean acceleration of the drivers increased by 0.013 m/s², 0.026 m/s² and 0.027 m/s² for the BAC levels of 0.03%, 0.05% and 0.08%, respectively. Results of the brake pedal force model showed that the mean brake pedal force of the drivers increased by 1.09 N, 1.32 N and 1.44 N for the BAC levels of 0.03%, 0.05% and 0.08%, respectively. Age was a significant factor in both models, where a one-year increase in drivers' age resulted in a 0.2% reduction in mean acceleration and a 19% reduction in mean brake pedal force. This shows that driving experience could compensate for the negative effects of alcohol to some extent while driving. Female drivers were found to accelerate more slowly and brake harder compared to male drivers, which confirmed that female drivers are more conscious of their safety while driving. It was observed that drivers who were regular exercisers had better control of the accelerator pedal compared to non-regular exercisers during drunken driving. The findings of the present study reveal that drivers tend to be more aggressive and impulsive under the influence of alcohol, which deteriorates their driving performance. A drunk driving state can be differentiated from a sober driving state by observing the accelerating and braking behaviour of the drivers. The conclusions may provide a reference for countermeasures against drinking and driving and contribute to traffic safety.
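
A simplified sketch of this kind of model is shown below, using a linear mixed model with a random intercept per driver as a stand-in for the authors' generalized linear mixed models; the data frame, column names (driver_id, bac, age, gender, mean_accel) and coefficients are hypothetical.

```python
# Mixed-effects model of mean acceleration vs BAC, age and gender, with a
# driver-level random intercept, fitted on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_drivers, n_cond = 82, 4
df = pd.DataFrame({
    "driver_id": np.repeat(np.arange(n_drivers), n_cond),
    "bac": np.tile([0.00, 0.03, 0.05, 0.08], n_drivers),
    "age": np.repeat(rng.integers(20, 60, n_drivers), n_cond),
    "gender": np.repeat(rng.choice(["male", "female"], n_drivers), n_cond),
})
df["mean_accel"] = 0.5 + 0.3 * df["bac"] - 0.001 * df["age"] + rng.normal(0, 0.05, len(df))

model = smf.mixedlm("mean_accel ~ bac + age + C(gender)", data=df, groups=df["driver_id"])
result = model.fit()
print(result.summary())
```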

Keywords: alcohol, acceleration, braking behaviour, driving simulator

Procedia PDF Downloads 146
282 A Topology-Based Dynamic Repair Strategy for Enhancing Urban Road Network Resilience under Flooding

Authors: Xuhui Lin, Qiuchen Lu, Yi An, Tao Yang

Abstract:

As global climate change intensifies, extreme weather events such as floods increasingly threaten urban infrastructure, making the vulnerability of urban road networks a pressing issue. Existing static repair strategies fail to adapt to the rapid changes in road network conditions during flood events, leading to inefficient resource allocation and suboptimal recovery. The main research gap lies in the lack of repair strategies that consider both the dynamic characteristics of networks and the progression of flood propagation. This paper proposes a topology-based dynamic repair strategy that adjusts repair priorities based on real-time changes in flood propagation and traffic demand. Specifically, a novel method is developed to assess and enhance the resilience of urban road networks during flood events. The method combines road network topological analysis, flood propagation modelling, and traffic flow simulation, introducing a local importance metric to dynamically evaluate the significance of road segments across different spatial and temporal scales. Using London's road network and rainfall data as a case study, the effectiveness of this dynamic strategy is compared to traditional and Transport for London (TFL) strategies. The most significant highlight of the research is that the dynamic strategy substantially reduced the number of stranded vehicles across different traffic demand periods, improving efficiency by up to 35.2%. The advantage of this method lies in its ability to adapt in real-time to changes in network conditions, enabling more precise resource allocation and more efficient repair processes. This dynamic strategy offers significant value to urban planners, traffic management departments, and emergency response teams, helping them better respond to extreme weather events like floods, enhance overall urban resilience, and reduce economic losses and social impacts.
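
A minimal sketch of a topology-based repair prioritisation is given below: flooded road segments are ranked by edge betweenness centrality and repaired in that order. The toy grid network, flooded-edge set and single-shot ranking are illustrative assumptions; the actual study works on London's network with flood propagation modelling and traffic simulation layered on top, re-ranking dynamically as conditions change.

```python
# Rank disrupted edges of a toy road network by betweenness centrality.
import networkx as nx

G = nx.grid_2d_graph(6, 6)                                          # toy road network
flooded = [((1, 1), (1, 2)), ((2, 3), (3, 3)), ((4, 4), (4, 5))]    # assumed disrupted edges

# Simplified "local importance" metric: edge betweenness in the intact network
importance = nx.edge_betweenness_centrality(G)

# In a dynamic strategy this ranking would be recomputed each time step; here it
# is computed once for illustration.
repair_order = sorted(
    flooded,
    key=lambda e: importance.get(e, importance.get((e[1], e[0]), 0.0)),
    reverse=True,
)
print("repair priority:", repair_order)
```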

Keywords: urban resilience, road networks, flood response, dynamic repair strategy, topological analysis

Procedia PDF Downloads 35
281 Mathematical Modelling of Spatial Distribution of Covid-19 Outbreak Using Diffusion Equation

Authors: Kayode Oshinubi, Brice Kammegne, Jacques Demongeot

Abstract:

The use of mathematical tools like partial differential equations and ordinary differential equations has become very important for predicting the evolution of a viral disease in a population in order to take preventive and curative measures. In December 2019, a novel variety of coronavirus (SARS-CoV-2) was identified in Wuhan, Hubei Province, China, causing a severe and potentially fatal respiratory syndrome, i.e., COVID-19. It was declared a pandemic by the World Health Organization (WHO) on March 11, 2020, and has since spread around the globe. A reaction-diffusion system is a mathematical model that describes the evolution of a phenomenon subjected to two processes: a reaction process, in which different substances are transformed, and a diffusion process, which causes a distribution in space. This article provides a mathematical study of a Susceptible, Exposed, Infected, Recovered, and Vaccinated (SEIRV) population model of the COVID-19 pandemic by means of reaction-diffusion equations. Both local and global asymptotic stability conditions for the disease-free and endemic equilibria are determined using Lyapunov functions, and the endemic equilibrium point exists and is stable if it satisfies the Routh–Hurwitz criteria. Also, adequate conditions for the existence and uniqueness of the solution of the model have been proved. We show the spatial distribution of the model compartments when the basic reproduction number R₀ < 1 and R₀ > 1, and sensitivity analysis is performed in order to determine the most sensitive parameters in the proposed model. We demonstrate the model's effectiveness by performing numerical simulations and investigate the impact of vaccination and the significance of the spatial distribution parameters in the spread of COVID-19. The findings indicate that reducing contact with infected persons and increasing the proportion of susceptible people who receive high-efficacy vaccination will lessen the burden of COVID-19 in the population. To public health policymakers, we offer a better understanding of COVID-19 management.
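
To illustrate the structure of such a model (compartmental reaction terms plus Laplacian diffusion), a one-dimensional finite-difference sketch of an SEIRV reaction-diffusion system is given below; all parameter values, the grid and the initial condition are illustrative assumptions, not the calibrated values of the study.

```python
# Explicit finite-difference integration of a 1-D SEIRV reaction-diffusion model.
import numpy as np

nx, L, T, dt = 100, 1.0, 100.0, 0.01
dx = L / nx
beta, sigma, gamma, nu, D = 0.4, 0.2, 0.1, 0.02, 1e-3  # transmission, incubation, recovery, vaccination, diffusion

S = np.ones(nx); E = np.zeros(nx); I = np.zeros(nx); R = np.zeros(nx); V = np.zeros(nx)
I[nx // 2] = 0.01; S[nx // 2] -= 0.01                  # localised initial outbreak

def laplacian(u):
    return (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2   # periodic boundary

for _ in range(int(T / dt)):
    new_inf = beta * S * I
    S += dt * (-new_inf - nu * S + D * laplacian(S))
    E += dt * (new_inf - sigma * E + D * laplacian(E))
    I += dt * (sigma * E - gamma * I + D * laplacian(I))
    R += dt * (gamma * I + D * laplacian(R))
    V += dt * (nu * S + D * laplacian(V))

print("final infected fraction by location (min/max):", I.min(), I.max())
```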

Keywords: COVID-19, SEIRV epidemic model, reaction-diffusion equation, basic reproduction number, vaccination, spatial distribution

Procedia PDF Downloads 122
280 Assessing the Impacts of Vocational Training System in the Sudan: A Dynamic CGE Application

Authors: Zuhal Mohammed, Khalid Siddig, Harald Grethe

Abstract:

Vocational training (VT) has been identified as a potential engine for achieving economic and social development, particularly in developing countries, and during the last two decades it has been deemed an essential determinant of human capital accumulation. Furthermore, it has a crucial role in reducing inequality, wage gaps and unemployment and in promoting skill composition. Government plays an important role in human capital formation by providing finance for education. In some countries, a large portion of public educational investment is devoted to academic education (primary, secondary and tertiary). This is reflected in disproportionately increasing investment in various education sectors other than vocational education and VT. Nevertheless, the financing of the VT system is not likely to increase or even remain at its existing level. This paper conducts an in-depth analysis to quantify the impacts of various options for expanding public expenditure on education as well as on vocational training in the Sudan. The study uses a recursive dynamic CGE modelling framework that accommodates VT and allows depicting the impact of various policies targeting the vocational training system, with a special focus on the agricultural sector. This allows for depicting the potential effects of various resource allocation policies, not only between education and non-education sectors, but also between the various types of education and training. Moreover, the study assesses the role of the VT system in the economy through its influence on workers' skill improvement and their movement across sectors. The results show that an increase in public educational investment will decrease the supply of low- and high-educated workers in the short run as a result of increasing school participation of students, while in the medium to long run this measure increases labour productivity and thus the growth rate of gross domestic product (GDP). Therefore, the findings of the study provide Sudanese policymakers with the information needed to adopt measures to reduce unemployment, enhance workers' skills and ultimately improve livelihoods.

Keywords: vocational training, recursive dynamic CGE, skill level, labour market, economic growth, Sudan

Procedia PDF Downloads 197
279 Performance Improvement of Long-Reach Optical Access Systems Using Hybrid Optical Amplifiers

Authors: Shreyas Srinivas Rangan, Jurgis Porins

Abstract:

Internet traffic has increased exponentially due to the high demand for data rates from users, and the constantly growing metro and access networks are focused on improving the maximum transmission distance of long-reach optical networks. One of the common methods to improve the maximum transmission distance of long-reach optical networks at the component level is to use broadband optical amplifiers. The Erbium Doped Fiber Amplifier (EDFA) provides high amplification with a low noise figure, but due to its characteristics its operation is limited to the C-band and L-band. In contrast, the Raman amplifier exhibits a wide amplification spectrum, and negative noise figure values can be achieved. To obtain such results, high-powered pumping sources are required; operating Raman amplifiers with such high-powered optical sources may cause fire hazards and may damage the optical system. In this paper, we implement a hybrid optical amplifier configuration in which EDFA and Raman amplifiers are combined to exploit the advantages of both and improve the reach of the system. Using this setup, we analyze the maximum transmission distance of the network by obtaining a correlation diagram between the length of the single-mode fiber (SMF) and the Bit Error Rate (BER). This hybrid amplifier configuration is implemented in a Wavelength Division Multiplexing (WDM) system at a BER of 10⁻⁹ using the NRZ modulation format, and the gain uniformity, signal-to-noise ratio (SNR), efficiency of the pumping source, and optical signal gain efficiency of the amplifier are studied in a mathematical modelling environment. Numerical simulations were implemented in the RSoft OptSim simulation software based on the nonlinear Schrödinger equation, using the split-step Fourier method and the Monte Carlo method for estimating the BER.
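
For readers unfamiliar with the split-step Fourier method referred to above, the compact sketch below propagates a single pulse through SMF under dispersion and Kerr nonlinearity only (no amplifier or noise model); the fibre and pulse parameters are typical textbook numbers, not those used in the OptSim simulations.

```python
# Symmetric split-step Fourier propagation of the nonlinear Schrödinger equation.
import numpy as np

beta2 = -21.7e-27      # group-velocity dispersion, s^2/m (assumed)
gamma = 1.3e-3         # Kerr nonlinearity, 1/(W*m) (assumed)
length, steps = 50e3, 1000
dz = length / steps

nt, t_window = 2**12, 400e-12
t = np.linspace(-t_window / 2, t_window / 2, nt, endpoint=False)
w = 2 * np.pi * np.fft.fftfreq(nt, d=t[1] - t[0])

T0, P0 = 25e-12, 1e-3
A = np.sqrt(P0) / np.cosh(t / T0)                   # sech input pulse

half_disp = np.exp(0.5j * beta2 * w**2 * dz / 2)    # half-step linear (dispersion) operator
for _ in range(steps):
    A = np.fft.ifft(half_disp * np.fft.fft(A))      # half dispersion step
    A *= np.exp(1j * gamma * np.abs(A)**2 * dz)     # full nonlinear step
    A = np.fft.ifft(half_disp * np.fft.fft(A))      # half dispersion step

print("output peak power (mW):", 1e3 * np.abs(A).max()**2)
```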

Keywords: Raman amplifier, erbium doped fibre amplifier, bit error rate, hybrid optical amplifiers

Procedia PDF Downloads 70
278 Using GIS for Assessment and Modelling of Oil Spill Risk at Vulnerable Coastal Resources: Of Misratah Coast, Libya

Authors: Abduladim Maitieg

Abstract:

The oil industry is one of the main productive activities in Libya and has a massive infrastructure, including offshore drilling and exploration and extensive oil export platform sites located in the coastal area. The threat of oil spills to the marine and coastal area is greatest at those sites with high spill inputs from urban and industrial sources; in parallel, oil spill monitoring and emergency risk strategies are weak. An approach for estimating the vulnerability of coastal resources to oil spills is presented, based on abundance, environmental and socio-economic importance, distance to oil spill sources, and oil risk likelihood. As many as 10 coastal resources were selected for oil spill assessment along the coast. This study aims to evaluate and establish vulnerable coastal resource maps and to estimate the rate of oil spills from different oil spill sources in the Misratah marine environment. In the study area there are two types of oil spill sources: major sources comprising the offshore oil industry, located 96 km from the coast, and the loading/unloading oil platform, and minor sources comprising urban sewage pipes and fishing ports. In order to analyse the collected database, Geographic Information System software has been used to identify oil spill locations, to map oil tracks in front of the study area, and to develop seasonal vulnerable coastal resource maps. This work shows that there is a differential distribution of the degree of vulnerability to oil spills along the coastline, with values ranging from high to low vulnerability, and highlights the link between oil spill movement and coastal resource vulnerability. The assessment found that most coastal freshwater spring sites, such as those on the Zreag coast, are highly vulnerable to oil spills due to their location in the intertidal zone and their close proximity to oil spill sources. Furthermore, the saltmarsh coastline is highly vulnerable to oil spill risk, as it contains nesting areas of sea turtles and feeding places for migratory birds; according to the oil spill movement, oil will reach this coast in the winter season. Coastal tourist beaches on the north coast are considered highly vulnerable to oil spills due to their location and closeness to oil spill sources.

Keywords: coastal resources vulnerability, oil spill trajectory, GNOME software, Misratah coast (Libya), GIS

Procedia PDF Downloads 314
277 A Dual-Mode Infinite Horizon Predictive Control Algorithm for Load Tracking in PUSPATI TRIGA Reactor

Authors: Mohd Sabri Minhat, Nurul Adilla Mohd Subha

Abstract:

The PUSPATI TRIGA Reactor (RTP), Malaysia, reached its first criticality on June 28, 1982, with a thermal power capacity of 1 MW. The Feedback Control Algorithm (FCA), a conventional Proportional-Integral (PI) controller, is the present power control method used to control the fission process in the RTP. It is important to ensure that the core power is always stable and follows load tracking within an acceptable steady-state error and a minimum settling time to reach steady-state power. At this time, the system cannot be considered well-posed with respect to power tracking performance. However, there is still potential to improve the current performance by developing a next-generation, novel design of nuclear core power control. In this paper, the dual-mode predictions proposed in Optimal Model Predictive Control (OMPC) are presented in a state-space model to control the core power. The model for core power control was based on mathematical models of the reactor core, OMPC, and a control rod selection algorithm. The mathematical models of the reactor core were based on neutronic models, thermal hydraulic models, and reactivity models. The dual-mode prediction in OMPC for transient and terminal modes was based on the implementation of a Linear Quadratic Regulator (LQR) in designing the core power control. The combination of dual-mode prediction and a Lyapunov approach, which deals with the summation of the cost function over an infinite horizon, is intended to eliminate some of the fundamental weaknesses related to MPC. This paper shows the behaviour of OMPC in dealing with tracking, the regulation problem, and disturbance rejection, and how it caters for parameter uncertainty. Both tracking and regulating performance are compared between the conventional controller and OMPC by numerical simulations. In conclusion, the proposed OMPC has shown significant performance in load tracking and regulating core power for the nuclear reactor, with guaranteed closed-loop stability.
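
The terminal-mode ingredient mentioned above can be illustrated with a small discrete-time LQR computation; the two-state system matrices and weights below are arbitrary placeholders standing in for the reactor core model, not values from the paper.

```python
# Discrete-time LQR gain from the algebraic Riccati equation, applied in closed loop.
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [0.1]])
Q = np.diag([10.0, 1.0])     # penalise the power-tracking error state more heavily
R = np.array([[1.0]])        # penalise control effort (e.g. control-rod movement)

P = solve_discrete_are(A, B, Q, R)                    # infinite-horizon cost matrix
K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)     # terminal-mode feedback u = -K x

x = np.array([1.0, 0.0])                              # initial deviation from the setpoint
for _ in range(5):
    x = (A - B @ K) @ x
print("LQR gain K =", K, "\nstate after 5 steps:", x)
```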

Keywords: core power control, dual-mode prediction, load tracking, optimal model predictive control

Procedia PDF Downloads 162
276 Investigation of Heat Conduction through Particulate Filled Polymer Composite

Authors: Alok Agrawal, Alok Satapathy

Abstract:

In this paper, an attempt is made to determine the effective thermal conductivity (keff) of particulate-filled polymer composites using the finite element method (FEM), a powerful computational technique. A commercially available finite element package, ANSYS, is used for this numerical analysis. Three-dimensional spheres-in-cube lattice array models are constructed to simulate the microstructures of micro-sized particulate-filled polymer composites with filler content ranging from 2.35 to 26.8 vol %. Based on the temperature profiles across the composite body, the keff of each composition is estimated theoretically by FEM. Composites with similar filler contents are then fabricated using the compression molding technique by reinforcing micro-sized aluminium oxide (Al2O3) in polypropylene (PP) resin. Thermal conductivities of these composite samples are measured according to the ASTM standard E-1530 using the Unitherm™ Model 2022 tester, which operates on the double guarded heat flow principle. The experimentally measured conductivity values are compared with the numerical values and also with those obtained from existing empirical models. This comparison reveals that the FEM-simulated values are in reasonably good agreement with the experimental data. Values obtained from the theoretical model proposed by the authors are found to be in even closer agreement with the measured values within the percolation limit. Further, this study shows that there is a gradual enhancement in the conductivity of the PP resin with increasing filler percentage, and thereby its heat conduction capability is improved. It is noticed that with the addition of 26.8 vol % of filler, the keff of the composite increases to around 6.3 times that of neat PP. This study validates the proposed model for the PP-Al2O3 composite system and proves that finite element analysis can be an excellent methodology for such investigations. With such improved heat conduction ability, these composites can find potential applications in micro-electronics, printed circuit boards, encapsulations, etc.
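
As an example of the "existing empirical models" such results are typically compared against, the sketch below evaluates the classical Maxwell two-phase model over the same filler range; the matrix and filler conductivities are typical literature figures, not the paper's data, and the model is not claimed to reproduce the paper's measurements.

```python
# Maxwell model for spherical fillers (kf) dispersed in a continuous matrix (km).
def maxwell_keff(km: float, kf: float, phi: float) -> float:
    """Effective conductivity: keff = km*(kf + 2km + 2*phi*(kf-km)) / (kf + 2km - phi*(kf-km))."""
    return km * (kf + 2 * km + 2 * phi * (kf - km)) / (kf + 2 * km - phi * (kf - km))

km = 0.22    # polypropylene, W/mK (typical literature value)
kf = 30.0    # alumina, W/mK (typical literature value)
for phi in [0.0235, 0.10, 0.20, 0.268]:
    print(f"filler fraction {phi:.3f} -> keff ~ {maxwell_keff(km, kf, phi):.3f} W/mK")
```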

Keywords: analytical modelling, effective thermal conductivity, finite element method, polymer matrix composite

Procedia PDF Downloads 321
275 An Examination of Factors Leading to Knowledge-Sharing Behavior of Sri Lankan Bankers

Authors: Eranga N. Somaratna, Pradeep Dharmadasa

Abstract:

In the current competitive environment, the factors leading to organizational success are not limited to the investment of capital, labor, and raw material, but include the ability to generate knowledge innovation from all the members of an organization. However, knowledge on its own cannot provide organizations with its promised benefits unless it is shared, and organizations are increasingly experiencing unsuccessful knowledge sharing efforts. Against such a backdrop, and due to the dearth of research in this area in the South Asian context, the study set out to develop an understanding of the factors that influence knowledge-sharing behavior within an organizational framework, using widely accepted social psychology theories. The purpose of the article is to discover the determinants of knowledge-sharing intention and actual knowledge-sharing behaviors of bank employees in Sri Lanka using an aggregate model. Knowledge-sharing intentions are widely discussed in the literature through the application of Ajzen’s Theory of Planned Behavior (TPB) and the Theory of Social Capital (SCT) separately. Both theories are rich in explaining the knowledge-sharing intentions of workers, but each has limitations. The study, therefore, combines the TPB with the SCT in developing its conceptual model. Data were collected through a self-administered paper-based questionnaire from 199 bank managers of 6 public and private banks of Sri Lanka, and the suggested research model was analysed using Structural Equation Modelling (SEM). The study supported six of the nine hypotheses: Attitudes toward Knowledge Sharing Behavior, Perceived Behavioral Control, Trust, Anticipated Reciprocal Relationships and Actual Knowledge Sharing Behavior were supported, while Organizational Climate, Sense of Self-Worth and Anticipated Extrinsic Rewards were not, in determining knowledge-sharing intentions. Furthermore, the study investigated the effect of the demographic factors of bankers (age, gender, position, education, and experience) on actual knowledge-sharing behavior. However, the findings should be confirmed using a larger sample as well as through cross-sectional studies. The results highlight the need for theoreticians to combine the TPB and the SCT in understanding knowledge workers’ intentions and actual behavior, and for practitioners to focus on the perceptions and needs of the individual knowledge worker and the need to cultivate a culture of sharing knowledge in the organization for their mutual benefit.

Keywords: banks, employees behavior, knowledge management, knowledge sharing

Procedia PDF Downloads 132
274 Modelling Forest Fire Risk in the Goaso Forest Area of Ghana: Remote Sensing and Geographic Information Systems Approach

Authors: Bernard Kumi-Boateng, Issaka Yakubu

Abstract:

Forest fire, an uncontrolled fire occurring in nature, has become a major concern for the Forestry Commission of Ghana (FCG). Forest fires in Ghana usually result in massive destruction and take a long time for the firefighting crews to bring under control. In order to assess the effect of forest fire at the local scale, it is important to consider the role fire plays in vegetation composition, biodiversity, soil erosion, and the hydrological cycle. The occurrence, frequency and behaviour of forest fires vary over time and space, primarily as a result of the complicated influences of changes in land use, vegetation composition, fire suppression efforts, and other indigenous factors. One of the forest zones in Ghana with a high level of vegetation stress is the Goaso forest area. The area has experienced changes in its traditional land use such as hunting, charcoal production, inefficient logging practices and rural abandonment patterns. These factors, which were identified as major causes of forest fire, have recently modified the incidence of fire in the Goaso area. In spite of the incidence of forest fires in the Goaso forest area, most of the forest services do not provide a cartographic representation of the burned areas. This has resulted in a significant amount of information being required by the firefighting unit of the FCG to understand fire risk factors and their spatial effects. This study uses Remote Sensing and Geographic Information System techniques to develop a fire risk hazard model using the Goaso Forest Area (GFA) as a case study. From the results of the study, natural forest, agricultural lands and plantation cover types were identified as the major fuel-contributing loads, while water bodies, roads and settlements were identified as minor fuel-contributing loads. Based on the major and minor fuel-contributing loads, a forest fire risk hazard model with reasonable accuracy has been developed for the GFA to assist decision making.

Keywords: forest, GIS, remote sensing, Goaso

Procedia PDF Downloads 457
273 CFD-DEM Modelling of Liquid Fluidizations of Ellipsoidal Particles

Authors: Esmaeil Abbaszadeh Molaei, Zongyan Zhou, Aibing Yu

Abstract:

The applications of liquid fluidization have increased in many industries, such as particle classification, backwashing of granular filters, crystal growth, leaching and washing, and bioreactors, due to highly efficient liquid-solid contact, favorable mass and heat transfer, high operational flexibility, and reduced back-mixing of phases. In most of these multiphase operations the particles' properties, i.e. size, density, and shape, may change during the process because of attrition, coalescence or chemical reactions. Previous experimental and numerical studies have mainly focused on liquid-solid fluidized beds containing spherical particles; however, the role of particle shape in the hydrodynamics of liquid fluidized beds is still not well known. A three-dimensional Discrete Element Model (DEM) and Computational Fluid Dynamics (CFD) are coupled to study the influence of particle shape on particle and liquid flow patterns in liquid-solid fluidized beds. In the simulations, ellipsoid particles are used to study the shape factor, since they can represent a wide range of particle shapes, from oblate through spherical to prolate. Different particle shapes, from oblate (disk-shaped) to elongated (rod-shaped) particles, are selected to investigate the effect of aspect ratio on different flow characteristics such as the general particle and liquid flow pattern, pressure drop, and particle orientation. First, the model is verified based on experimental observations, and then further detailed analyses are made. It was found that spherical particles showed a uniform particle distribution in the bed, which resulted in a uniform pressure drop along the bed height. However, for particles with aspect ratios less than one (disk-shaped), some particles were carried into the freeboard region, and the interface between the bed and the freeboard was not easy to determine. A few particles also tended to leave the bed. On the other hand, prolate particles showed different behaviour in the bed: they caused an unstable interface, and some flow channeling was observed at low liquid velocities. Because of the non-uniform particle flow pattern for particles with aspect ratios lower (oblate) and higher (prolate) than one, the pressure drop distribution in the bed was not as uniform as that found for spherical particles.

Keywords: CFD, DEM, ellipsoid, fluidization, multiphase flow, non-spherical, simulation

Procedia PDF Downloads 310
272 Flood Hazard Assessment and Land Cover Dynamics of the Orai Khola Watershed, Bardiya, Nepal

Authors: Loonibha Manandhar, Rajendra Bhandari, Kumud Raj Kafle

Abstract:

Nepal’s Terai region is a part of the Ganges river basin, which is one of the most disaster-prone areas of the world, with recurrent monsoon flooding causing millions in damage and the death and displacement of hundreds of people and households every year. The vulnerability of human settlements to natural disasters such as floods is increasing, and mapping changes in land use practices and hydro-geological parameters is essential for developing resilient communities and strong disaster management policies. The objective of this study was to develop a flood hazard zonation map of the Orai Khola watershed and to map the decadal land use/land cover dynamics of the watershed. The watershed area was delineated using the SRTM DEM, and LANDSAT images were classified into five land use classes (forest, grassland, sediment and bare land, settlement area and cropland, and water body) using pixel-based semi-automated supervised maximum likelihood classification. Decadal changes in each class were then quantified using spatial modelling. Flood hazard mapping was performed by assigning weights to the factors slope, rainfall distribution, distance from the river and land use/land cover on the basis of their estimated influence in causing flood hazard, and by performing weighted overlay analysis to identify areas that are highly vulnerable. The forest and grassland coverage increased by 11.53 km² (3.8%) and 1.43 km² (0.47%) from 1996 to 2016. The sediment and bare land areas decreased by 12.45 km² (4.12%) from 1996 to 2016, whereas settlement and cropland areas showed a consistent increase to 14.22 km² (4.7%). Waterbody coverage also increased to 0.3 km² (0.09%) from 1996 to 2016. Of the total watershed area, 1.27% (3.65 km²) was categorized as a very low hazard zone, 20.94% (60.31 km²) as a low hazard zone, 37.59% (108.3 km²) as a moderate hazard zone and 29.25% (84.27 km²) as a high hazard zone, while 31 villages comprising 10.95% (31.55 km²) were categorized as a very high hazard zone.
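
To illustrate the land use/land cover change quantification step, the sketch below tallies the area covered by each class in two classified rasters and reports the decadal change; the class codes, the 30 m Landsat pixel size and the tiny example rasters are assumptions for illustration only, not the study's data.

```python
# Minimal sketch of quantifying decadal land-cover change from two classified
# rasters (e.g. 1996 and 2016), assuming integer class codes and a known pixel size.
import numpy as np

CLASSES = {1: "forest", 2: "grassland", 3: "sediment/bare land",
           4: "settlement/cropland", 5: "water body"}
PIXEL_AREA_KM2 = (30 * 30) / 1e6          # assumed 30 m Landsat pixel, in km^2

def class_areas(classified: np.ndarray) -> dict:
    """Area (km^2) covered by each land-cover class in a classified raster."""
    return {label: float(np.sum(classified == code)) * PIXEL_AREA_KM2
            for code, label in CLASSES.items()}

# Hypothetical tiny classified rasters standing in for the 1996 and 2016 maps
lc_1996 = np.array([[1, 1, 3], [3, 2, 4], [4, 4, 5]])
lc_2016 = np.array([[1, 1, 1], [2, 2, 4], [4, 4, 4]])

a96, a16 = class_areas(lc_1996), class_areas(lc_2016)
for label in a96:
    print(f"{label:22s} change: {a16[label] - a96[label]:+.6f} km^2")
```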

Keywords: flood hazard, land use/land cover, Orai river, supervised maximum likelihood classification, weighted overlay analysis

Procedia PDF Downloads 352
271 Filtering Momentum Life Cycles, Price Acceleration Signals and Trend Reversals for Stocks, Credit Derivatives and Bonds

Authors: Periklis Brakatsoulas

Abstract:

Recent empirical research shows a growing interest in investment decision-making under market anomalies that contradict the rational paradigm. Momentum is undoubtedly one of the most robust anomalies in empirical asset pricing research and has remained surprisingly lucrative ever since it was first documented. Although predominantly identified across equities, momentum premia are now evident across various asset classes. Yet few attempts have been made so far to provide traders with a diversified portfolio of strategies across different assets and markets. Moreover, the literature focuses on patterns in past returns rather than on mechanisms that signal future price directions prior to momentum runs. The aim of this paper is to develop a diversified portfolio approach to price distortion signals using daily position data on stocks, credit derivatives, and bonds. An algorithm allocates assets periodically, and new investment tactics take over upon price momentum signals and across different ranking groups. We focus on momentum life cycles, trend reversals, and price acceleration signals. The main effort here concentrates on the density, time span and maturity of momentum phenomena, in order to identify consistent patterns over time and measure the predictive power of the buy-sell signals generated by these anomalies. To tackle this, we propose a two-stage modelling process. First, we generate forecasts of core macroeconomic drivers. Second, satellite models generate market risk forecasts using the core driver projections from the first stage as input. Moreover, using a combination of ARFIMA and FIGARCH models, we examine the dependence of consecutive observations across time and portfolio assets, since long-memory behaviour in the volatilities of one market appears to trigger persistent volatility patterns across other markets. We believe that this is the first work that employs evidence of volatility transmission among derivatives, equities, and bonds to identify momentum life cycle patterns.
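
As a hedged illustration of the ranking-group idea (not the authors' algorithm), the sketch below computes a standard cross-sectional momentum signal from daily prices, skipping the most recent month, and sorts a small mixed cross-section of assets into ranking groups; the 12-1 month formation window, the five groups and the random-walk prices are generic assumptions.

```python
# Minimal sketch of a cross-sectional momentum ranking over a mixed cross-section
# of assets; the 12-1 month window and five ranking groups are generic conventions.
import numpy as np
import pandas as pd

def momentum_ranks(prices: pd.DataFrame, formation: int = 252, skip: int = 21,
                   n_groups: int = 5) -> pd.Series:
    """Rank assets by formation-period return, skipping the most recent month."""
    past = prices.iloc[-(formation + skip)]      # price ~12 months ago
    recent = prices.iloc[-(skip + 1)]            # price ~1 month ago
    signal = recent / past - 1.0                 # formation-period return
    return pd.qcut(signal.rank(method="first"), n_groups, labels=False) + 1

# Hypothetical random-walk prices for a small cross-section of assets
rng = np.random.default_rng(0)
prices = pd.DataFrame(
    100 * np.exp(np.cumsum(rng.normal(0.0002, 0.01, size=(300, 6)), axis=0)),
    columns=["stock_A", "stock_B", "cds_X", "cds_Y", "bond_P", "bond_Q"],
)
groups = momentum_ranks(prices)
print(groups)   # group 5 = strongest past winners, group 1 = past losers
```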

Keywords: forecasting, long memory, momentum, returns

Procedia PDF Downloads 102
270 Simplified Modelling of Visco-Elastic Fluids for Use in Recoil Damping Systems

Authors: Prasad Pokkunuri

Abstract:

Visco-elastic materials combine the stress response properties of both solids and fluids and have found use in a variety of damping applications, both vibrational and acoustic. Defense and automotive applications, in particular, are subject to high impact and shock loading – for example, aircraft landing gear, firearms, and shock absorbers. Field-responsive fluids – a class of smart materials – are the preferred choice of energy absorbents because of their controllability. These fluids’ stress response can be controlled by the application of a magnetic or electric field in a closed loop. Their rheological properties – elasticity, plasticity, and viscosity – can be varied all the way from those of a liquid such as water to those of a hard solid. This work presents a simplified model to study the impulse response behavior of such fluids for use in recoil damping systems. The well-known Burgers equation, in conjunction with various visco-elastic constitutive models, is used to represent fluid behavior. The Kelvin-Voigt, Upper Convected Maxwell (UCM), and Oldroyd-B constitutive models are implemented in this study. Using these models in a one-dimensional framework eliminates additional complexities due to geometry, pressure, body forces, and other source terms. Using a finite difference formulation to numerically solve the governing equation(s), the response to an initial impulse is studied. The disturbance is confined within the problem domain with no-inflow, no-outflow boundary conditions, and its decay characteristics are studied. Visco-elastic fluids typically exhibit time-dependent stress relaxation, which gives rise to interesting behavior under impulsive loading. For particular values of viscous damping and elastic modulus, the fluid settles into a stable oscillatory state, absorbing and releasing energy without much decay. The simplified formulation enables a comprehensive study of different modes of system response by varying the relevant parameters. Using the insights gained from this study, extension to a more detailed multi-dimensional model is considered.
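
A minimal sketch of the kind of one-dimensional finite-difference impulse study described above is given below, using the viscous Burgers equation alone with zero-velocity ends; it omits the visco-elastic constitutive coupling (Kelvin-Voigt, UCM, Oldroyd-B) used by the author, and the grid, viscosity and time step are illustrative choices.

```python
# Minimal sketch: explicit finite-difference solution of the 1-D viscous Burgers
# equation u_t + u u_x = nu * u_xx with an initial Gaussian impulse and
# zero-velocity (no-inflow, no-outflow) boundaries, to observe the impulse decay.
import numpy as np

nx, nu, dt, n_steps = 101, 0.01, 1.0e-4, 2000
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
u = np.exp(-200.0 * (x - 0.5) ** 2)             # initial impulse centred in the domain

for _ in range(n_steps):
    un = u.copy()
    # central difference for advection, second-order central difference for diffusion
    u[1:-1] = (un[1:-1]
               - dt * un[1:-1] * (un[2:] - un[:-2]) / (2.0 * dx)
               + nu * dt * (un[2:] - 2.0 * un[1:-1] + un[:-2]) / dx ** 2)
    u[0] = u[-1] = 0.0                           # confined domain, zero-velocity ends

print(f"peak amplitude after {n_steps * dt:.2f} s: {u.max():.4f}")  # decays from 1.0
```

Swapping in a visco-elastic stress term would add an extra evolution equation for the stress alongside the velocity update, but the time-marching structure stays the same.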

Keywords: Burgers equation, impulse response, recoil damping systems, visco-elastic fluids

Procedia PDF Downloads 292
269 Adsorption: A Decision Maker in the Photocatalytic Degradation of Phenol on Co-Catalysts Doped TiO₂

Authors: Dileep Maarisetty, Janaki Komandur, Saroj S. Baral

Abstract:

In the current work, the photocatalytic degradation of phenol was carried out under both UV and visible light to find the slowest step limiting the rate of the photo-degradation process. Characterization experiments such as XRD, SEM, FT-IR, TEM, XPS, UV-DRS, PL, BET, UPS, ESR and zeta potential measurements were conducted to assess the capability of the catalysts in boosting the photocatalytic activity. To explore the synergy, TiO₂ was doped with graphene and alumina. The orbital hybridization with alumina doping (mediated by graphene) resulted in higher electron transfer from the conduction band of TiO₂ to the alumina surface, where oxygen reduction reactions (ORR) occur. Besides, the doping of alumina and graphene introduced defects into the Ti lattice and helped improve the adsorptive properties of the modified photo-catalyst. The results showed that these defects promoted the oxygen reduction reactions (ORR) on the catalyst’s surface. ORR activity produces reactive oxygen species (ROS), which oxidize the phenol molecules adsorbed on the surface of the photo-catalysts, thereby driving the photocatalytic reactions. Since mass transfer is considered the rate-limiting step, various mathematical models were applied to the experimental data to probe the best fit. By varying the parameters, it was found that intra-particle diffusion was the slowest step in the degradation process. The Lagergren model gave the best R² values, indicating the nature of the rate kinetics. Similarly, different adsorption isotherms were employed, and it was found that the Langmuir isotherm fits best, with a tremendous increase in the uptake capacity (mg/g) of TiO₂-rGO-Al₂O₃ compared with undoped TiO₂. This further assisted in the higher adsorption of phenol molecules. From the results obtained from the experiments, kinetic modelling and adsorption isotherms, it is concluded that, apart from the changes in surface, optoelectronic and morphological properties that enhanced the photocatalytic activity, intra-particle diffusion within the catalyst’s pores serves as the rate-limiting step deciding the fate of the photocatalytic degradation of phenol.
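
The kinetic and isotherm fitting step can be illustrated with a short curve-fitting sketch: the Lagergren pseudo-first-order model for uptake versus time and the Langmuir isotherm for equilibrium uptake. The data arrays below are synthetic placeholders, not the study's measurements.

```python
# Minimal sketch of fitting the Lagergren pseudo-first-order kinetic model and
# the Langmuir isotherm with scipy; data below are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

def lagergren(t, qe, k1):
    """Pseudo-first-order uptake: q(t) = qe * (1 - exp(-k1 * t))."""
    return qe * (1.0 - np.exp(-k1 * t))

def langmuir(ce, qmax, kl):
    """Langmuir isotherm: qe = qmax * kl * Ce / (1 + kl * Ce)."""
    return qmax * kl * ce / (1.0 + kl * ce)

# Hypothetical uptake-vs-time data (min, mg/g) and equilibrium data (mg/L, mg/g)
t = np.array([5, 10, 20, 40, 60, 90, 120], float)
qt = np.array([3.1, 5.4, 8.2, 10.6, 11.5, 12.0, 12.1])
ce = np.array([2, 5, 10, 20, 40, 80], float)
qe = np.array([4.8, 8.9, 12.5, 15.8, 18.0, 19.2])

(qe_fit, k1_fit), _ = curve_fit(lagergren, t, qt, p0=[12.0, 0.05])
(qmax_fit, kl_fit), _ = curve_fit(langmuir, ce, qe, p0=[20.0, 0.1])
print(f"Lagergren: qe = {qe_fit:.2f} mg/g, k1 = {k1_fit:.3f} 1/min")
print(f"Langmuir:  qmax = {qmax_fit:.2f} mg/g, KL = {kl_fit:.3f} L/mg")
```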

Keywords: ORR, phenol degradation, photo-catalyst, rate kinetics

Procedia PDF Downloads 144
268 Drug Design Modelling and Molecular Virtual Simulation of an Optimized BSA-Based Nanoparticle Formulation Loaded with Di-Berberine Sulfate Acid Salt

Authors: Eman M. Sarhan, Doaa A. Ghareeb, Gabriella Ortore, Amr A. Amara, Mohamed M. El-Sayed

Abstract:

Drug salting and nanoparticle-based drug delivery formulations are considered an effective means of achieving nano-scale dispersion of hydrophobic drugs in aqueous media, thus circumventing the pitfalls of their poor solubility as well as enhancing their membrane permeability. The current study aims to increase the bioavailability of quaternary ammonium berberine through acid salting and a biodegradable bovine serum albumin (BSA)-based nanoparticulate drug formulation. Berberine hydroxide (BBR-OH), chemically synthesized by alkalization of the commercially available berberine hydrochloride (BBR-HCl), was then acidified to obtain di-berberine sulfate, (BBR)₂SO₄. The purified crystals were spectrally characterized. The desolvation technique was optimized for the preparation of size-controlled BSA-BBR-HCl, BSA-BBR-OH, and BSA-(BBR)₂SO₄ nanoparticles. Particle size, zeta potential, drug release, encapsulation efficiency, Fourier transform infrared spectroscopy (FTIR), tandem MS-MS spectroscopy, energy-dispersive X-ray spectroscopy (EDX), scanning and transmission electron microscopic examination (SEM, TEM), in vitro bioactivity, and in silico drug-polymer interactions were determined. The protonation state of BSA (PDB ID: 4OR0) at different pH values was predicted using Amber12 molecular dynamics simulation. Blind docking was then performed using the Lamarckian genetic algorithm (LGA) in AutoDock4.2 software. The results proved the purity and the size-controlled synthesis of the berberine-BSA nanoparticles. The possible binding poses and the hydrophobic and hydrophilic interactions of berberine on BSA at different pH values were predicted. The antioxidant, anti-hemolytic, and cell differentiation abilities of the tested drugs and their nano-formulations were evaluated. Thus, drug salting and potentially effective albumin-berberine nanoparticle formulations can be successfully developed using a well-optimized desolvation technique, exhibiting better in vitro cellular bioavailability.
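
One of the routine calculations behind the reported encapsulation efficiency is shown below as a minimal sketch, using the standard definitions based on total and unencapsulated drug; the masses in the example are hypothetical and not taken from this work.

```python
# Minimal sketch of standard encapsulation-efficiency and drug-loading calculations
# used when characterising albumin nanoparticle formulations; values are placeholders.
def encapsulation_efficiency(total_drug_mg: float, free_drug_mg: float) -> float:
    """EE% = (total drug - unencapsulated drug) / total drug * 100."""
    return (total_drug_mg - free_drug_mg) / total_drug_mg * 100.0

def drug_loading(total_drug_mg: float, free_drug_mg: float, nanoparticle_mg: float) -> float:
    """DL% = encapsulated drug / nanoparticle mass * 100."""
    return (total_drug_mg - free_drug_mg) / nanoparticle_mg * 100.0

# Hypothetical example: 10 mg berberine added, 2.4 mg recovered in the supernatant,
# 50 mg total nanoparticle mass
print(f"EE = {encapsulation_efficiency(10.0, 2.4):.1f}%")
print(f"DL = {drug_loading(10.0, 2.4, 50.0):.1f}%")
```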

Keywords: berberine, BSA, BBR-OH, BBR-HCl, BSA-BBR-HCl, BSA-BBR-OH, (BBR)₂SO₄, BSA-(BBR)₂SO₄, FTIR, AutoDock4.2 software, Lamarckian genetic algorithm, SEM, TEM, EDX

Procedia PDF Downloads 174
267 Optimization of Bills Assignment to Different Skill-Levels of Data Entry Operators in a Business Process Outsourcing Industry

Authors: M. S. Maglasang, S. O. Palacio, L. P. Ogdoc

Abstract:

Business Process Outsourcing (BPO) has been one of the fastest growing and emerging industries in the Philippines today. Unlike most contact service centers, more popularly known as "call centers", the primary outsourced service of the BPO considered here is performing audits of global clients' logistics. As a service industry, manpower is considered the most important yet the most expensive resource in the company. Because of this, there is a need to maximize human resources so that people are utilized effectively and efficiently. The main purpose of the study is to optimize the current manpower resources through effective distribution and assignment of the different types of bills to the different skill levels of data entry operators. The assignment model parameters include the average observed time matrix gathered through a time study, which incorporates the learning curve concept. Subsequently, a simulation model was built to replicate the arrival rate of demand, which includes the different batches and types of bills per day. Next, a mathematical linear programming model was formulated; its objective is to minimize the direct labor cost per bill by allocating the different types of bills to the different skill levels of operators. Finally, a hypothesis test was done to validate the model by comparing the actual and simulated results. The analysis of results revealed low utilization of effective capacity because of the failure to determine the product mix, skill mix, and simulated demand as model parameters. Moreover, failure to consider the effects of the learning curve leads to an overestimation of labor needs. From the current 107 operators, the proposed model gives a result of 79 operators. This results in an increase in the utilization of effective capacity to 14.94%. It is recommended that the excess 28 operators be reallocated to other areas of the department. Finally, a manpower capacity planning model is also recommended to support management's decisions on what to do when the current capacity reaches its limit under the expected increase in demand.
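
In the spirit of the linear programming model described above, the sketch below assigns bill types to operator skill levels at minimum direct labour cost subject to demand and capacity constraints; the processing times, wage rates, demand and available minutes are illustrative assumptions, not the company's data.

```python
# Minimal sketch of a cost-minimising assignment of bill types to operator skill
# levels, solved as a linear program; all numbers are illustrative assumptions.
import numpy as np
from scipy.optimize import linprog

skills = ["junior", "senior"]
bills = ["type_A", "type_B", "type_C"]
minutes_per_bill = np.array([[6.0, 8.0, 10.0],      # junior processing times
                             [4.0, 5.0, 7.0]])      # senior processing times
wage_per_minute = np.array([0.10, 0.18])            # cost rate per skill level
demand = np.array([100.0, 80.0, 60.0])              # bills that must be processed
capacity_minutes = np.array([1000.0, 900.0])        # available time per skill level

n_s, n_b = minutes_per_bill.shape
# Decision variable x[i, j]: bills of type j assigned to skill level i (flattened row-wise)
cost = (wage_per_minute[:, None] * minutes_per_bill).ravel()

# Each bill type must be fully assigned across the skill levels
A_eq = np.zeros((n_b, n_s * n_b))
for j in range(n_b):
    A_eq[j, j::n_b] = 1.0

# Each skill level cannot exceed its available minutes
A_ub = np.zeros((n_s, n_s * n_b))
for i in range(n_s):
    A_ub[i, i * n_b:(i + 1) * n_b] = minutes_per_bill[i]

res = linprog(cost, A_ub=A_ub, b_ub=capacity_minutes, A_eq=A_eq, b_eq=demand)
assignment = res.x.reshape(n_s, n_b)
print("minimum direct labour cost:", round(res.fun, 2))
print(assignment)   # rows: skill levels, columns: bill types
```

An operational version would refresh the time matrix as operators move along the learning curve and re-solve the program as the simulated daily demand changes.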

Keywords: optimization modelling, linear programming, simulation, time and motion study, capacity planning

Procedia PDF Downloads 518