Search results for: CLIL models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6544

4594 Atypical Retinoid ST1926 Nanoparticle Formulation Development and Therapeutic Potential in Colorectal Cancer

Authors: Sara Assi, Berthe Hayar, Claudio Pisano, Nadine Darwiche, Walid Saad

Abstract:

Nanomedicine, the application of nanotechnology to medicine, is an emerging discipline that has gained significant attention in recent years. Current breakthroughs in nanomedicine have paved the way to develop effective drug delivery systems that can be used to target cancer. The use of nanotechnology provides effective drug delivery, enhanced stability, bioavailability, and permeability, thereby minimizing drug dosage and toxicity. As such, nanoparticle (NP) formulations have been applied in various cancer models and have been shown to improve the ability of drugs to reach specific targeted sites in a controlled manner. Cancer is one of the major causes of death worldwide; in particular, colorectal cancer (CRC) is the third most common type of cancer diagnosed amongst men and women and the second leading cause of cancer-related deaths, highlighting the need for novel therapies. Retinoids, consisting of natural and synthetic derivatives, are a class of chemical compounds that have shown promise in preclinical and clinical cancer settings. However, retinoids are limited by their toxicity and resistance to treatment. To overcome this resistance, various synthetic retinoids have been developed, including the adamantyl retinoid ST1926, a potent anti-cancer agent. However, due to its limited bioavailability, the development of ST1926 has been restricted to phase I clinical trials. We have previously investigated the preclinical efficacy of ST1926 in CRC models. ST1926 displayed potent inhibitory and apoptotic effects in CRC cell lines by inducing early DNA damage and apoptosis. ST1926 significantly reduced the tumor doubling time and tumor burden in a xenograft CRC model. Therefore, we developed ST1926-NPs and assessed their efficacy in CRC models. ST1926-NPs were produced using Flash NanoPrecipitation with the amphiphilic diblock copolymer polystyrene-b-ethylene oxide and cholesterol as a co-stabilizer.
ST1926 was formulated into NPs with a drug to polymer mass ratio of 1:2, providing a formulation that was stable for one week. The mean ST1926-NP diameter was 100 nm, with a polydispersity index of 0.245. Using the MTT cell viability assay, ST1926-NPs exhibited anti-growth activities in HCT116 cells as potent as those of naked ST1926, at pharmacologically achievable concentrations. Future studies will be performed to study the anti-tumor activities and mechanism of action of ST1926-NPs in a xenograft mouse model and to detect the compound and its glucuroconjugated form in the plasma of mice. Ultimately, our studies will support the use of ST1926-NP formulations in enhancing the stability and bioavailability of ST1926 in CRC.

Keywords: nanoparticles, drug delivery, colorectal cancer, retinoids

Procedia PDF Downloads 84
4593 Ferromagnetic Potts Models with Multi Site Interaction

Authors: Nir Schreiber, Reuven Cohen, Simi Haber

Abstract:

The Potts model has been widely explored in the literature for the last few decades. While many analytical and numerical results concern the traditional two site interaction model in various geometries and dimensions, little is yet known about models where more than two spins simultaneously interact. We consider a ferromagnetic four site interaction Potts model on the square lattice (FFPS), where the four spins reside in the corners of an elementary square. Each spin can take an integer value 1,2,...,q. We write the partition function as a sum over clusters consisting of monochromatic faces. When the number of faces becomes large, tracing out spin configurations is equivalent to enumerating large lattice animals. It is known that the asymptotic number of animals with k faces is governed by λᵏ, with λ ≈ 4.0626. Based on this observation, systems with q < 4 and q > 4 exhibit second and first order phase transitions, respectively. The transition nature of the q = 4 case is borderline. For any q, a critical giant component (GC) is formed. In the first order case, the GC is simple, while it is fractal when the transition is continuous. Using simple equilibrium arguments, we obtain a (zero order) bound on the transition point. It is claimed that this bound should apply to other lattices as well. Next, taking higher order site contributions into account, the critical bound becomes tighter. Moreover, for q > 4, if corrections due to contributions from small clusters are negligible in the thermodynamic limit, the improved bound should be exact. The improved bound is used to relate the critical point to the finite correlation length. Our analytical predictions are confirmed by an extensive numerical study of FFPS, using the Wang-Landau method. In particular, the q = 4 marginal case is supported by a very ambiguous pseudo-critical finite size behavior.
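The Wang-Landau sampling mentioned above can be sketched for a small four-site-interaction Potts system. The lattice size, q, flatness shortcut and halving schedule below are illustrative choices for a toy run, not the authors' settings (a production run would also check histogram flatness and reduce the modification factor by a square root):

```python
import math
import random

random.seed(0)
L, q = 4, 3          # 4x4 periodic lattice, q-state spins (toy sizes for illustration)
spins = [[random.randrange(q) for _ in range(L)] for _ in range(L)]

def energy(s):
    """E = -(number of monochromatic elementary squares): the four-site interaction."""
    e = 0
    for i in range(L):
        for j in range(L):
            a = s[i][j]
            if a == s[i][(j + 1) % L] == s[(i + 1) % L][j] == s[(i + 1) % L][(j + 1) % L]:
                e -= 1
    return e

ln_g = {}            # running estimate of ln(density of states)
ln_f = 1.0           # modification factor, reduced until small enough
E = energy(spins)
while ln_f > 0.05:
    for _ in range(20000):
        i, j = random.randrange(L), random.randrange(L)
        old = spins[i][j]
        spins[i][j] = random.randrange(q)
        E_new = energy(spins)
        # accept with probability min(1, g(E)/g(E_new)) to flatten the energy histogram
        if random.random() < math.exp(min(0.0, ln_g.get(E, 0.0) - ln_g.get(E_new, 0.0))):
            E = E_new
        else:
            spins[i][j] = old
        ln_g[E] = ln_g.get(E, 0.0) + ln_f
    ln_f /= 2.0      # halved (rather than square-rooted) to keep the sketch fast

print(sorted(ln_g))  # energies visited, between -L*L and 0
```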

Keywords: entropic sampling, lattice animals, phase transitions, Potts model

Procedia PDF Downloads 149
4592 Vulnerability of Steel Moment-Frame Buildings with Pinned and, Alternatively, with Semi-Rigid Connections

Authors: Daniel Llanes, Alfredo Reyes, Sonia E. Ruiz, Federico Valenzuela Beltran

Abstract:

Steel frames have been used in building construction for more than one hundred years. Beams may be connected to columns using either stiffened or unstiffened angles at the top and bottom beam flanges. Designers often assume that these assemblies act as “pinned” connections for gravity loads and that the stiffened connections act as “fixed” connections for lateral loads. Observation of damage sustained by buildings during the 1994 Northridge earthquake indicated that, contrary to the intended behavior, in many cases, brittle fractures initiated within the connections at very low levels of plastic demand, and in some cases, while the structures remained essentially elastic. Due to the damage observed in these buildings, other types of alternative connections have been proposed. According to research funded by the Federal Emergency Management Agency (FEMA), screwed connections perform better when subjected to cyclic loads, but at the same time, these connections have some degree of flexibility. For this reason, some researchers have ventured into the study of semi-rigid connections. In the present study, three steel buildings composed of regular frames are analyzed. Two types of connections are considered: pinned and semi-rigid connections. With the aim of estimating their structural capacity, a number of incremental dynamic analyses are performed. 3D structural models are used for the analyses. The seismic ground motions were recorded at sites near Los Angeles, California, where the structures are assumed to be located. The vulnerability curves of the buildings are obtained in terms of maximum inter-story drifts. The vulnerability curves (which correspond to the models with the two different types of connections) are compared, and their implications for structural design and performance are discussed.

Keywords: steel frame buildings, vulnerability curves, semi-rigid connections, pinned connections

Procedia PDF Downloads 212
4591 Generalized Additive Model for Estimating Propensity Score

Authors: Tahmidul Islam

Abstract:

The Propensity Score Matching (PSM) technique has been widely used for estimating the causal effect of treatment in observational studies. One major step in implementing PSM is estimating the propensity score (PS). A logistic regression model with additive linear terms of covariates is the most commonly used technique in many studies. The logistic regression model is also used with cubic splines to retain flexibility in the model. However, choosing the functional form of the logistic regression model has been an open question, since the effectiveness of PSM depends on how accurately the PS has been estimated. In many situations, the linearity assumption of linear logistic regression may not hold, and a non-linear relation between the logit and the covariates may be appropriate. One can estimate the PS using machine learning techniques such as random forests, neural networks, etc., for more accuracy in non-linear situations. In this study, an attempt has been made to assess the efficacy of the Generalized Additive Model (GAM) in various linear and non-linear settings and to compare its performance with usual logistic regression. GAM is a non-parametric technique in which the functional form of the covariates can be left unspecified and a flexible regression model can be fitted. In this study, various simple and complex models have been considered for treatment under several situations (small/large sample, low/high number of treatment units), examining which method leads to more covariate balance in the matched dataset. It is found that the logistic regression model is impressively robust against inclusion of quadratic and interaction terms and reduces the mean difference between treatment and control sets as efficiently as GAM does. GAM provided no significantly better covariate balance than logistic regression in either simple or complex models. The analysis also suggests that a larger proportion of controls than treatment units leads to better balance for both methods.
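As a minimal illustration of the propensity-score workflow the abstract describes (the GAM fit itself needs a dedicated smoothing library), the sketch below estimates the PS with plain logistic regression fitted by Newton/IRLS, does 1:1 nearest-neighbour matching, and reports covariate balance as standardized mean differences; the data-generating model and every coefficient are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 2))                   # two baseline covariates
T = rng.random(n) < 1 / (1 + np.exp(-(0.5 * X[:, 0] - 0.8 * X[:, 1])))

def logistic_ps(X, T, iters=25):
    """Propensity scores from logistic regression fitted by Newton/IRLS."""
    Z = np.column_stack([np.ones(len(X)), X])
    b = np.zeros(Z.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-Z @ b))
        b += np.linalg.solve(Z.T @ ((p * (1 - p))[:, None] * Z), Z.T @ (T - p))
    return 1 / (1 + np.exp(-Z @ b))

def smd(x, t):
    """Standardized mean difference, the usual covariate-balance diagnostic."""
    return abs(x[t].mean() - x[~t].mean()) / np.sqrt((x[t].var() + x[~t].var()) / 2)

ps = logistic_ps(X, T)
treated, controls = np.where(T)[0], np.where(~T)[0]
# 1:1 nearest-neighbour matching (with replacement) on the propensity score
matched = np.array([controls[np.argmin(np.abs(ps[controls] - ps[i]))] for i in treated])
keep = np.concatenate([treated, matched])
for k in range(2):
    print(f"covariate {k}: SMD before {smd(X[:, k], T):.3f} -> after {smd(X[keep, k], T[keep]):.3f}")
```

Swapping a spline-based GAM into `logistic_ps` changes only the PS estimation step; the matching and balance diagnostics stay the same.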

Keywords: accuracy, covariate balances, generalized additive model, logistic regression, non-linearity, propensity score matching

Procedia PDF Downloads 348
4590 Cross-Sectional Study of Critical Parameters on RSET and Decision-Making of At-Risk Groups in Fire Evacuation

Authors: Naser Kazemi Eilaki, Ilona Heldal, Carolyn Ahmer, Bjarne Christian Hagen

Abstract:

Elderly people and people with disabilities are recognized as at-risk groups when it comes to egress and travel from a hazard zone to a safe place. A disability can negatively influence a person's escape time, and this becomes even more important when people from this target group live alone. While earlier studies have frequently addressed quantitative measurements of at-risk groups' physical characteristics (e.g., their speed of travel), this paper considers the influence of at-risk groups’ characteristics on their decision-making and on determining better escape routes. Most evacuation models are based on mapping people's movement and behaviour onto a timeline of summed times for common activity types. Usually, timeline models estimate the required safe egress time (RSET) as a sum of four timespans: detection, alarm, premovement, and movement time, and compare this with the available safe egress time (ASET) to determine what influences the margin of safety. This paper presents a cross-sectional study for identifying the most critical items affecting RSET and people's decision-making, with possibilities to include safety knowledge regarding people with physical or cognitive functional impairments. The result will contribute to increased knowledge on considering at-risk groups and disabilities when designing and developing safe escape routes. The expected results can be an asset for predicting the probabilistic behavioural patterns of at-risk groups and necessary components for defining a framework for understanding how stakeholders can consider various disabilities when determining the margin of safety for a safe escape route.
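The RSET timeline model referred to above is simply a sum of four phases. A toy computation with assumed timespans shows how an impairment that stretches premovement and movement time erodes the margin against ASET; all numbers are hypothetical:

```python
def rset(detection, alarm, premovement, movement):
    """Required safe egress time: the sum of the four timeline phases (seconds)."""
    return detection + alarm + premovement + movement

aset = 420  # available safe egress time, an assumed value for this example
# A mobility or cognitive impairment mainly stretches premovement and movement time.
typical = rset(detection=30, alarm=15, premovement=60, movement=90)    # 195 s
at_risk = rset(detection=30, alarm=15, premovement=150, movement=210)  # 405 s
print("margins:", aset - typical, aset - at_risk)  # 225 s versus only 15 s of margin
```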

Keywords: fire safety, evacuation, decision-making, at-risk groups

Procedia PDF Downloads 86
4589 Nuclear Fuel Safety Threshold Determined by Logistic Regression Plus Uncertainty

Authors: D. S. Gomes, A. T. Silva

Abstract:

Analysis of the uncertainty quantification related to nuclear safety margins applied to the nuclear reactor is an important concept for preventing future radioactive accidents. The nuclear fuel performance code may involve a tolerance level determined by traditional deterministic models that produce acceptable results at burn cycles under 62 GWd/MTU. The behavior of nuclear fuel can be simulated by applying a series of material properties under irradiation and physics models to calculate the safety limits. In this study, theoretical predictions of nuclear fuel failure under transient conditions investigate extended irradiation cycles at 75 GWd/MTU, considering the behavior of fuel rods in light-water reactors under reactivity accident conditions. The fuel pellet can melt due to the rapid increase of reactivity during a transient. Large power excursions in the reactor are the subject of interest, leading to a treatment that is known as the Fuchs-Hansen model. The point kinetic neutron equations show the characteristics of non-linear differential equations. In this investigation, multivariate logistic regression is employed for a probabilistic forecast of fuel failure. A comparison of computational simulations and experimental results showed acceptable agreement. The experiments carried out used pre-irradiated fuel rods subjected to a rapid energy pulse, which exhibits the same behavior as during a nuclear accident. The propagation of uncertainty utilizes the Wilks formulation. The variables chosen as essential to failure prediction were the fuel burnup, the applied peak power, the pulse width, the oxidation layer thickness, and the cladding type.
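Two of the quantitative ingredients named above are easy to sketch: the first-order one-sided Wilks formula for the number of code runs, and a logistic model mapping the listed predictors to a failure probability. The predictor values and coefficients below are invented purely to show the shape of the model, not the study's fitted values:

```python
import math

def wilks_n(beta=0.95, gamma=0.95):
    """Smallest n with 1 - gamma**n >= beta (first-order one-sided Wilks)."""
    return math.ceil(math.log(1 - beta) / math.log(gamma))

def failure_probability(x, coef, intercept):
    """Multivariate logistic model: P(failure) = sigmoid(intercept + coef . x)."""
    z = intercept + sum(c * v for c, v in zip(coef, x))
    return 1 / (1 + math.exp(-z))

print(wilks_n())  # 59 code runs support a 95%-probability / 95%-confidence statement
# burnup (GWd/MTU), peak power, pulse width, oxide thickness: illustrative inputs only
p = failure_probability([75, 120, 30, 80], coef=[0.05, 0.01, -0.02, 0.03], intercept=-7.0)
print(round(p, 3))
```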

Keywords: logistic regression, reactivity-initiated accident, safety margins, uncertainty propagation

Procedia PDF Downloads 278
4588 Designing a Model to Increase the Flow of Circular Economy Startups Using a Systemic and Multi-Generational Approach

Authors: Luís Marques, João Rocha, Andreia Fernandes, Maria Moura, Cláudia Caseiro, Filipa Figueiredo, João Nunes

Abstract:

The implementation of circularity strategies other than recycling, such as reducing the amount of raw material, as well as reusing or sharing existing products, remains marginal. The European Commission announced that the transition towards a more circular economy could lead to the net creation of about 700,000 jobs in Europe by 2030, through additional labour demand from recycling plants, repair services and other circular activities. Efforts to create new circular business models in accordance with completely circular processes, as opposed to linear ones, have increased considerably in recent years. In order to create a societal Circular Economy transition model, it is necessary to include innovative solutions, in which startups play a key role. Early-stage startups based on new business models built on circular processes often face difficulties in creating enough impact. The StartUp Zero Program designs a model and approach to increase the flow of startups in the Circular Economy field, focusing on systemic decision analysis and a multi-generational approach, and using Multi-Criteria Decision Analysis to support a decision-making tool that combines the Analytical Hierarchy Process and Multi-Attribute Value Theory methods. We define principles, criteria and indicators for evaluating startup prerogatives, quantifying the evaluation process in a single result. Additionally, this entrepreneurship program, spanning 16 months, involved more than 2400 young people, aged 14 to 23, in more than 200 interaction activities.
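The Analytical Hierarchy Process step of such a decision tool can be illustrated with the standard principal-eigenvector weighting and consistency check; the three criteria and the Saaty-scale pairwise judgments below are hypothetical:

```python
import numpy as np

# Hypothetical Saaty-scale pairwise comparisons for three startup-evaluation criteria
A = np.array([[1.0, 3.0, 5.0],
              [1 / 3, 1.0, 3.0],
              [1 / 5, 1 / 3, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = int(np.argmax(eigvals.real))
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                          # principal-eigenvector criterion weights

n = len(A)
ci = (eigvals[k].real - n) / (n - 1)  # consistency index from lambda_max
cr = ci / 0.58                        # random index RI = 0.58 for n = 3
print(w.round(3), round(cr, 3))       # CR < 0.1 means the judgments are consistent
```

The resulting weights could then feed a Multi-Attribute Value Theory aggregation over the per-criterion startup scores.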

Keywords: circular economy, entrepreneurship, startups, multi-criteria decision analysis

Procedia PDF Downloads 79
4587 Time-Dependent Modulation on Depressive Responses and Circadian Rhythms of Corticosterone in Models of Melatonin Deficit

Authors: Jana Tchekalarova, Milena Atanasova, Katerina Georgieva

Abstract:

Melatonin deficit can cause a disturbance in emotional status and in the circadian rhythms of the endocrine system. Both pharmacological and alternative approaches are applied for the correction of dysfunctions driven by changes in the circadian dynamics of many physiological indicators. In the present study, we tested and compared the beneficial effects of agomelatine (40 mg/kg, i.p. for 3 weeks) and endurance training on depressive behavior in two models of melatonin deficit in the rat. The role of disturbed circadian rhythms of plasma melatonin and corticosterone secretion in the mechanism of these treatments was also explored. Depressive responses, namely a disrupted diurnal rhythm of home-cage motor activity, anhedonia in the sucrose preference test, and despair-like behavior in the forced swimming test, were attenuated by agomelatine in rats exposed to chronic constant light (CCL) and by long-term exercise in pinealectomized rats. Parallel to the observed positive effect on emotional status, agomelatine restored the CCL-induced impairment of the circadian pattern of plasma melatonin but not that of corticosterone. In contrast, exercise training diminished total plasma corticosterone levels and corrected its flattened pattern, while it was unable to correct the melatonin deficit caused by pinealectomy. These results suggest that the antidepressant-like effects of the pharmacological and alternative approaches might be mediated via two different mechanisms: correction of the disturbed circadian rhythm of melatonin and of corticosterone, respectively. Therefore, these treatment approaches might have a potential therapeutic application in different subpopulations of people characterized by a melatonin deficiency. This work was supported by the National Science Fund of Bulgaria (research grants DN 03/10 and DN 12/6).

Keywords: agomelatine, exercise training, melatonin deficit, corticosterone

Procedia PDF Downloads 116
4586 Count Regression Modelling on Number of Migrants in Households

Authors: Tsedeke Lambore Gemecho, Ayele Taye Goshu

Abstract:

The main objective of this study is to identify the determinants of the number of international migrants in a household and to compare regression models for the count response. The study is based on data collected from a total of 2288 household heads in 16 randomly sampled districts of the Hadiya and Kembata-Tembaro zones of Southern Ethiopia. The Poisson mixed model, a special case of the generalized linear mixed model, is explored to determine the effects of the predictors: age of household head, farm land size, and household size. Two ethnicities, Hadiya and Kembata, are included in the final model as dummy variables. Stepwise variable selection identified four predictors: age of head, farm land size, family size and the dummy variable ethnic2 (0=other, 1=Kembata). These predictors are significant at the 5% significance level for the count response, number of migrants. The final Poisson mixed model consists of the four predictors with district random effects. Area-specific random effects are significant, with a variance of about 0.5105 and a standard deviation of 0.7145. The results show that the number of migrants increases with head's age, family size, and farm land size. In conclusion, there is a significantly high number of international migrants per household in the area. Age of household head, family size, and farm land size are determinants that increase the number of international migrants in households. Community-based intervention is needed to monitor and regulate international migration for the benefit of society.
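A Poisson log-linear count model of the kind used here (omitting the district random effects, which require a mixed-model library) can be fitted by Newton-Raphson in a few lines; the synthetic household data and coefficients below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
age = rng.uniform(20, 70, n)        # head's age
land = rng.uniform(0.1, 3.0, n)     # farm land size
fam = rng.integers(1, 10, n)        # family size
X = np.column_stack([np.ones(n), age, land, fam])
true_b = np.array([-2.0, 0.01, 0.3, 0.1])   # assumed coefficients
y = rng.poisson(np.exp(X @ true_b))         # simulated migrants per household

def poisson_fit(X, y, iters=50):
    """Poisson log-linear regression fitted by Newton-Raphson (canonical log link)."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        mu = np.exp(X @ b)
        b += np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (y - mu))
    return b

b_hat = poisson_fit(X, y)
print(b_hat.round(3))   # should land near the coefficients used to simulate
```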

Keywords: Poisson regression, GLM, number of migrants, Hadiya and Kembata Tembaro zones

Procedia PDF Downloads 269
4585 Adsorption of Pb(II) with MOF [Co2(Btec)(Bipy)(DMF)2]N in Aqueous Solution

Authors: E. Gil, A. Zepeda, J. Rivera, C. Ben-Youssef, S. Rincón

Abstract:

Water pollution has become one of the most serious environmental problems. Multiple methods have been proposed for the removal of Pb(II) from contaminated water. Among these, adsorption processes have shown to be more efficient, cheaper and easier to handle than other treatment methods. However, the search for adsorbents with high adsorption capacities is still necessary. For this purpose, we propose in this work the study of the metal-organic framework [Co2(btec)(bipy)(DMF)2]n (MOF-Co) as an adsorbent material for Pb(II) in aqueous media. MOF-Co was synthesized by a simple method. First, 4,4’-dipyridyl, 1,2,4,5-benzenetetracarboxylic acid and cobalt(II) nitrate hexahydrate were each dissolved in N,N-dimethylformamide (DMF) and then mixed together in a reactor. The obtained solution was heated at 363 K in a muffle furnace for 68 h to complete the synthesis. It was washed and dried, obtaining MOF-Co as the final product. MOF-Co was characterized before and after the adsorption process by Fourier transform infrared spectroscopy (FTIR) and X-ray photoelectron spectroscopy (XPS). Pb(II) in aqueous media was detected by atomic absorption spectroscopy (AAS). In order to evaluate the adsorption process in the presence of Pb(II) in aqueous media, the experiments were carried out in flasks with a working volume of 100 mL at 200 rpm, with different MOF-Co quantities (0.0125 and 0.025 g), pH (2-6), contact times (0.5-6 h) and temperatures (298, 308 and 318 K). The adsorption kinetics were represented by a pseudo-second-order model, which suggests that the adsorption took place through chemisorption, or chemical adsorption. The best adsorption results were obtained at pH 5. The Langmuir, Freundlich and BET equilibrium isotherm models were used to study the adsorption of Pb(II) with 0.0125 g of MOF-Co, in the presence of different concentrations of Pb(II) (20-200 mg/L, 100 mL, pH 5) with 4 h of reaction.
The correlation coefficients (R2) of the different models show that the Langmuir model fits better than the Freundlich and BET models, with R2 = 0.97 and a maximum adsorption capacity of 833 mg/g. Therefore, the Langmuir model best describes the Pb(II) adsorption as monolayer behavior on the MOF-Co. This capacity is the highest when compared to other materials such as a graphene/activated carbon composite (217 mg/g), biomass fly ashes (96.8 mg/g), PVA/PAA gel (194.99 mg/g) and a MOF with Ag12 nanoparticles (120 mg/g).
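The linearized-Langmuir fit behind such R2 comparisons can be reproduced on noise-free synthetic data built from the reported qmax of 833 mg/g; the concentration grid and the KL value are assumed for illustration:

```python
import numpy as np

qmax_true, KL_true = 833.0, 0.05          # qmax from the abstract; KL assumed
Ce = np.array([5.0, 10, 20, 40, 80, 120, 160])        # equilibrium conc. (mg/L)
qe = qmax_true * KL_true * Ce / (1 + KL_true * Ce)    # noise-free Langmuir data (mg/g)

# Linearized Langmuir: Ce/qe = Ce/qmax + 1/(KL*qmax), fitted by least squares
slope, intercept = np.polyfit(Ce, Ce / qe, 1)
qmax_fit, KL_fit = 1 / slope, slope / intercept
print(round(qmax_fit, 1), round(KL_fit, 3))   # recovers the generating parameters
```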

Keywords: adsorption, heavy metals, metal-organic frameworks, Pb(II)

Procedia PDF Downloads 198
4584 Review of the Road Crash Data Availability in Iraq

Authors: Abeer K. Jameel, Harry Evdorides

Abstract:

Iraq is a middle-income country where road crashes are considered one of the leading causes of death. To control road risk, the Iraqi Ministry of Planning, General Statistical Organization, started to organise a collection system for traffic accident data with details related to causes and severity. These data are published as an annual report. In this paper, a review of the available crash data in Iraq is presented. The available data represent the rate of accidents at an aggregated level, classified according to accident type, road user details, crash severity, type of vehicle, causes and number of casualties. The review is organised according to the types of models used in road safety studies and research, and according to the road safety data required in road construction tasks. The available data are also compared with the road safety dataset published in the United Kingdom as an example of a developed country. It is concluded that the data in Iraq are suitable for descriptive and exploratory models, aggregated-level comparison analysis, and evaluating and monitoring the progress of overall traffic safety performance. However, important traffic safety studies require disaggregated data and details related to the factors affecting the likelihood of traffic crashes. Some studies require spatial geographic details, such as the location of accidents, which is essential for ranking roads according to their level of safety and naming the most dangerous roads in Iraq, which would require a tactical plan to control. Global road safety agencies interested in solving this problem in low- and middle-income countries have designed road safety assessment methodologies that are based on road attribute data only. Therefore, in this research it is recommended to use one of these methodologies.

Keywords: road safety, Iraq, crash data, road risk assessment, The International Road Assessment Program (iRAP)

Procedia PDF Downloads 239
4583 Numerical Investigation of Entropy Signatures in Fluid Turbulence: Poisson Equation for Pressure Transformation from Navier-Stokes Equation

Authors: Samuel Ahamefula Mba

Abstract:

Fluid turbulence is a complex and nonlinear phenomenon that occurs in various natural and industrial processes. Understanding turbulence remains a challenging task due to its intricate nature. One approach to gaining insights into turbulence is through the study of entropy, which quantifies the disorder or randomness of a system. This research presents a numerical investigation of entropy signatures in fluid turbulence. The aim of the work is to develop a numerical framework to describe and analyse fluid turbulence in terms of entropy. The framework decomposes the turbulent flow field into different scales, ranging from large energy-containing eddies to small dissipative structures, thus establishing a correlation between entropy and other turbulence statistics. This entropy-based framework provides a powerful tool for understanding the underlying mechanisms driving turbulence and its impact on various phenomena. The work necessitates the derivation of the Poisson equation for pressure from the Navier-Stokes equation, and Chebyshev finite-difference techniques are used to resolve it effectively. For the mathematical analysis, we consider bounded domains with smooth solutions and non-periodic boundary conditions. A hybrid computational approach combining direct numerical simulation (DNS) and large eddy simulation with wall models (LES-WM) is utilized to perform extensive simulations of turbulent flows. The potential impact ranges from industrial process optimization to improved prediction of weather patterns.
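The pressure Poisson equation referred to above follows from taking the divergence of the incompressible momentum equation; incompressibility eliminates the time-derivative and viscous terms, leaving a Poisson problem for the pressure alone:

```latex
% Incompressible momentum equation (constant density \rho, kinematic viscosity \nu):
%   \partial_t u_i + u_j \partial_j u_i = -\frac{1}{\rho}\,\partial_i p + \nu \nabla^2 u_i
% Taking \partial_i of each term and using \partial_i u_i = 0 gives
\nabla^2 p = -\rho\,\frac{\partial u_i}{\partial x_j}\,\frac{\partial u_j}{\partial x_i}
```

On the bounded, non-periodic domains considered here, this equation is closed with Neumann boundary conditions obtained by projecting the momentum equation onto the wall normal.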

Keywords: turbulence, Navier-Stokes equation, Poisson pressure equation, numerical investigation, Chebyshev-finite difference, hybrid computational approach, large Eddy simulation with wall models, direct numerical simulation

Procedia PDF Downloads 75
4582 Observed Changes in Constructed Precipitation at High Resolution in Southern Vietnam

Authors: Nguyen Tien Thanh, Günter Meon

Abstract:

Precipitation plays a key role in the water cycle, in defining local climatic conditions and in ecosystems. It is also an important input parameter for water resources management and hydrologic models. With spatially continuous data, the certainty of discharge predictions or other environmental factors is unquestionably better than without. Such data are, however, not always readily available for a small basin, especially in the coastal region of Vietnam, due to a sparse network of meteorological stations (30 stations) along a long coast of 3260 km. Furthermore, the available gridded precipitation datasets are not fine enough for application to hydrologic models. Under conditions of global warming, the application of spatial interpolation methods is crucial for climate change impact studies to obtain spatially continuous data. In recent research projects, although some methods can perform better than others, no method yields the best results in all cases. The objective of this paper, therefore, is to investigate different spatial interpolation methods for daily precipitation over a small basin (approximately 400 km2) located in the coastal region of Southern Vietnam and to find the most efficient interpolation method for this catchment. Five different interpolation methods, consisting of Cressman, ordinary kriging, regression kriging, dual kriging and inverse distance weighting, have been applied to identify the best method for the study area on the spatio-temporal scale (daily, 10 km x 10 km). A 30-year precipitation database was created and merged with available gridded datasets. Finally, observed changes in the constructed precipitation were analysed. The results demonstrate that ordinary kriging interpolation is an effective approach for analysing daily precipitation. Mixed trends of increasing and decreasing monthly, seasonal and annual precipitation have been documented at significant levels.
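Of the five interpolators compared, inverse distance weighting is the simplest to sketch; the gauge coordinates and rainfall amounts below are hypothetical:

```python
import numpy as np

def idw(stations, values, points, power=2.0):
    """Inverse distance weighting; weights decay as 1/d**power."""
    out = np.empty(len(points))
    for k, p in enumerate(points):
        d = np.linalg.norm(stations - p, axis=1)
        if d.min() < 1e-12:            # the point coincides with a gauge
            out[k] = values[d.argmin()]
        else:
            w = 1.0 / d ** power
            out[k] = (w * values).sum() / w.sum()
    return out

# Hypothetical daily rain gauges (x, y in km) and amounts (mm)
stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
rain = np.array([2.0, 8.0, 4.0, 6.0])
est = idw(stations, rain, np.array([[5.0, 5.0], [0.0, 0.0]]))
print(est)   # an equidistant point gets the plain mean; a gauge location is exact
```

Kriging replaces these fixed geometric weights with weights derived from a fitted variogram, which is why it can outperform IDW when spatial correlation is well modelled.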

Keywords: interpolation, precipitation, trend, Vietnam

Procedia PDF Downloads 264
4581 Effect of Goat Milk Kefir and Soy Milk Kefir on IL-6 in Diabetes Mellitus Wistar Mice Models Induced by Streptozotocin and Nicotinamide

Authors: Agatha Swasti Ayuning Tyas

Abstract:

Hyperglycemia in Diabetes Mellitus (DM) is an important factor in cellular and vascular damage, which is caused by activation of Protein Kinase C, the polyol and hexosamine pathways, and production of Advanced Glycation End-Products (AGE). These processes cause the accumulation of Reactive Oxygen Species (ROS). Oxidative stress increases the expression of the proinflammatory factor IL-6, one of many signs of endothelial dysfunction. Genistein in soy milk has a high immunomodulatory potential. Goat milk contains amino acids which have antioxidative potential. Fermented kefir has an anti-inflammatory activity which is believed to further potentiate goat milk and soy milk. This study is a quasi-experimental posttest-only study of 30 Wistar mice. It compared the levels of IL-6 between a healthy Wistar mice group (G1) and four DM Wistar mice groups, as follows: mice without treatment (G2), mice treated with 100% goat milk kefir (G3), mice treated with a combination of 50% goat milk kefir and 50% soy milk kefir (G4), and mice treated with 100% soy milk kefir (G5). DM animal models were induced with streptozotocin and nicotinamide to achieve a hyperglycemic condition. Goat milk kefir and soy milk kefir were given at a dose of 2 mL/kg body weight/day for four weeks to the intervention groups. Blood glucose was analyzed by the GOD-POD principle. IL-6 was analyzed by sandwich ELISA. The level of IL-6 in the DM untreated control group (G2) showed a significant difference from the group treated with the combination of 50% goat milk kefir and 50% soy milk kefir (G4) (p=0.006) and the group treated with 100% soy milk kefir (G5) (p=0.009), whereas the difference in IL-6 in the group treated with 100% goat milk kefir (G3) was not significant (p=0.131).
There was also synergism between glucose level and IL-6 in the intervention groups treated with the combination of 50% goat milk kefir and 50% soy milk kefir (G4) and the group treated with 100% soy milk kefir (G5). The combination of 50% goat milk kefir and 50% soy milk kefir, and the administration of 100% soy milk kefir alone, can keep the level of IL-6 low in DM Wistar mice induced with streptozotocin and nicotinamide.

Keywords: diabetes mellitus, goat milk kefir, soy milk kefir, interleukin 6

Procedia PDF Downloads 269
4580 Contact Phenomena in Medieval Business Texts

Authors: Carmela Perta

Abstract:

Among the studies that have flourished in the field of historical sociolinguistics, mainly in the strand devoted to English history during its medieval and early modern phases, multilingual texts have been analysed using theories and models coming from contact linguistics, thus applying synchronic models and approaches to the past. This is also true for contact phenomena which transcend the writing level, involving the language systems implicated in contact processes to the point that a new variety is perceived. This is the case for medieval administrative-commercial texts, in which, according to some scholars, the degree of fusion of Anglo-Norman, Latin and Middle English is so high that a mixed code emerges, with recurrent patterns of mixed forms. Of interest is a collection of multilingual business writings by John Balmayn, an Englishman overseeing a large shipment in Tuscany, namely the Cantelowe accounts. These documents display various analogies with multilingual texts written in England in the same period; in fact, the writer seems to make use of the above-mentioned patterns, with Middle English, Latin, Anglo-Norman, and the newly added Italian. Applying an atomistic yet dynamic approach to the study of contact phenomena, we will investigate these documents, exploring the nature of the switching forms they contain from an intra-writer variation perspective. After analysing the accounts and the type of multilingualism in them, we will take stock of the assumed mixed-code nature, comparing the characteristics found in this genre with modern assumptions. The aim is to evaluate whether the switching forms can be considered core elements of a mixed code, used as a professional variety among merchant communities, or whether such texts should be analysed from a code-switching perspective.

Keywords: historical sociolinguistics, historical code-switching, letters, medieval England

Procedia PDF Downloads 56
4579 Using Time Series NDVI to Model Land Cover Change: A Case Study in the Berg River Catchment Area, Western Cape, South Africa

Authors: Adesuyi Ayodeji Steve, Zahn Munch

Abstract:

This study investigates the use of MODIS NDVI to identify agricultural land cover change areas on an annual time step (2007-2012) and to characterize the trend in the study area. An ISODATA classification was performed on the MODIS imagery to select only the agricultural class, producing three class groups, namely agriculture, agriculture/semi-natural, and semi-natural. NDVI signatures were created for the time series to identify areas dominated by cereals and vineyards with the aid of ancillary, pictometry and field sample data. The NDVI signature curves and training samples aided in creating a decision tree model in WEKA 3.6.9. From the training samples, two classification models were built in WEKA using the decision tree classifier (J48) algorithm: Model 1 included the ISODATA classification and Model 2 did not, with accuracies of 90.7% and 88.3%, respectively. The two models were used to classify the whole study area, producing two land cover maps with classification accuracies of 77% and 80% for Model 1 and Model 2, respectively. Model 2 was used to create change detection maps for all the other years. Subtle changes and areas of consistency (unchanged) were observed in the agricultural classes and crop practices over the years as predicted by the land cover classification. 41% of the catchment comprises cereals, with 35% possibly following a crop rotation system. Vineyards largely remained constant over the years, with some conversion to vineyard (1%) from other land cover classes. Some of the changes might be the result of misclassification and the crop rotation system.
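The WEKA J48 workflow described above can be sketched in Python with scikit-learn's decision tree classifier. The NDVI signatures below are synthetic stand-ins for the MODIS time series; the class shapes, peak months, and noise levels are illustrative assumptions, not the study's data.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic monthly NDVI signatures (12 values per pixel) for three classes.
def make_class(peak_month, peak, n=200):
    months = np.arange(12)
    base = peak * np.exp(-0.5 * ((months - peak_month) / 2.0) ** 2)
    return base + rng.normal(0, 0.03, size=(n, 12))

X = np.vstack([make_class(7, 0.80),   # class with a strong, late seasonal peak
               make_class(1, 0.60),   # class with an early seasonal peak
               make_class(4, 0.35)])  # semi-natural: weak, flat signal
y = np.repeat([0, 1, 2], 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"held-out accuracy: {acc:.3f}")
```

The same signature-based features (monthly NDVI values) drive the splits here as in the J48 model, so the held-out accuracy plays the role of the study's training-sample accuracy.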

Keywords: change detection, land cover, MODIS, NDVI

Procedia PDF Downloads 381
4578 A Closed-Loop Design Model for Sustainable Manufacturing by Integrating Forward Design and Reverse Design

Authors: Yuan-Jye Tseng, Yi-Shiuan Chen

Abstract:

In this paper, a new concept of a closed-loop design model is presented. The closed-loop design model is developed by integrating forward design and reverse design. Based on this new concept, a closed-loop design model for sustainable manufacturing is developed through the integrated evaluation of forward design, reverse design, and green manufacturing using a fuzzy analytic network process. In the design stage of a product, with a given product requirement and objective, there can be different ways to design the detailed components and specifications; therefore, there can be different design cases that achieve the same product requirement and objective. Thus, in the design evaluation stage, the different design cases must be analyzed and evaluated. The purpose of this research is to develop a model for evaluating the design cases by the integrated evaluation of forward design, reverse design, and green manufacturing models. A fuzzy analytic network process model is presented for the integrated evaluation of the criteria in the three models. The comparison matrices for evaluating the criteria in the three groups are established. The total relational values among the three groups represent the total relational effects. In application, a supermatrix can be created, and the total relational values can be used to evaluate the design cases for decision-making to select the final design case. An example product is demonstrated in this presentation. It shows that the model is useful for the integrated evaluation of forward design, reverse design, and green manufacturing to achieve a closed-loop design for the sustainable manufacturing objective.
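As a rough illustration of the supermatrix step, the sketch below builds a small column-stochastic supermatrix over four hypothetical criteria drawn from the three groups and raises it to powers until the limit priorities emerge. The matrix entries are invented for illustration and are not the paper's comparison data.

```python
import numpy as np

# Hypothetical column-stochastic supermatrix over four criteria from the
# forward-design, reverse-design, and green-manufacturing groups.
W = np.array([
    [0.0, 0.4, 0.3, 0.2],
    [0.5, 0.0, 0.3, 0.3],
    [0.3, 0.3, 0.0, 0.5],
    [0.2, 0.3, 0.4, 0.0],
])
assert np.allclose(W.sum(axis=0), 1.0)  # each column sums to 1

# Limit supermatrix: raise W to successive powers until the columns stabilise.
M = W.copy()
for _ in range(200):
    M_next = M @ W
    if np.allclose(M_next, M, atol=1e-12):
        break
    M = M_next

priorities = M[:, 0]  # every column converges to the same priority vector
print(np.round(priorities, 4))
```

The columns of the limit matrix all carry the same vector, which supplies the criterion weights used to rank the design cases.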

Keywords: design evaluation, forward design, reverse design, closed-loop design, supply chain management, closed-loop supply chain, fuzzy analytic network process

Procedia PDF Downloads 656
4577 Comparison Approach for Wind Resource Assessment to Determine Most Precise Approach

Authors: Tasir Khan, Ishfaq Ahmad, Yejuan Wang, Muhammad Salam

Abstract:

Distribution models of wind speed data are essential for assessing potential wind energy because they decrease the uncertainty in estimating wind energy output. Therefore, before performing a detailed potential energy analysis, the most precise distribution model for the wind speed data must be found. In this research, several goodness-of-fit criteria, such as the Kolmogorov-Smirnov and Anderson-Darling statistics, Chi-Square, root mean square error (RMSE), AIC and BIC, were combined to determine the best-fitted wind speed distribution. The suggested method takes all of these criteria into account collectively. It was applied to fit 14 distribution models to wind speed data at four sites in Pakistan. The results show that this method provides the best basis for selecting the most suitable wind speed distribution, and the graphical representation is consistent with the analytical results. This research also presents three estimation methods that can be used to calculate the parameters of the distributions used to estimate the wind. In the suggested MLM, MOM, and MLE, the third-order moment used in the wind energy formula is a key function because it makes an important contribution to the precise estimate of wind energy. To demonstrate the performance of the suggested MOM, it was compared with well-known estimation methods, such as the method of linear moments and the maximum likelihood estimate. In the comparative analysis, across several goodness-of-fit criteria, the performance of the considered techniques was assessed on actual wind speeds evaluated in different time periods. The results show that MOM provides a more precise estimation than the other familiar approaches in terms of estimating wind energy based on the fourteen distributions. Therefore, MOM can be used as a better technique for assessing wind energy.
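A minimal sketch of the goodness-of-fit comparison, assuming synthetic Weibull-distributed wind speeds in place of the Pakistani site data; the candidate distributions and criteria (AIC, Kolmogorov-Smirnov) are a small subset of those used in the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic hourly wind speeds (m/s); a Weibull sample stands in for site data.
wind = stats.weibull_min.rvs(c=2.0, scale=6.0, size=2000, random_state=rng)

candidates = {"weibull": stats.weibull_min, "gamma": stats.gamma,
              "lognorm": stats.lognorm}
results = {}
for name, dist in candidates.items():
    params = dist.fit(wind, floc=0)            # MLE fit with location fixed at 0
    ll = np.sum(dist.logpdf(wind, *params))
    aic = 2 * len(params) - 2 * ll             # same parameter count for all three
    ks = stats.kstest(wind, dist.cdf, args=params).statistic
    results[name] = {"aic": aic, "ks": ks}

best = min(results, key=lambda k: results[k]["aic"])
print(best, {k: round(v["aic"], 1) for k, v in results.items()})
```

With real site data, each criterion would be ranked separately and the ranks combined, as the abstract describes; here a single AIC ranking illustrates the selection step.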

Keywords: wind-speed modeling, goodness of fit, maximum likelihood method, linear moment

Procedia PDF Downloads 70
4576 Logical-Probabilistic Modeling of the Reliability of Complex Systems

Authors: Sergo Tsiramua, Sulkhan Sulkhanishvili, Elisabed Asabashvili, Lazare Kvirtia

Abstract:

The paper presents logical-probabilistic methods, models, and algorithms for the reliability assessment of complex systems, on the basis of which a web application for structural analysis and reliability assessment of systems was created. It is important to design systems based on structural analysis, research, and the evaluation of efficiency indicators. One of the important efficiency criteria is the reliability of the system, which depends on the components of the structure. Quantifying the reliability of large-scale systems is a computationally complex process, and it is advisable to perform it with the help of a computer. Logical-probabilistic modeling is one of the effective means of describing the structure of a complex system and quantitatively evaluating its reliability, and it forms the basis of our application. The reliability assessment process includes the following stages, which are reflected in the application: 1) construction of a graphical scheme of the structural reliability of the system; 2) transformation of the graphical scheme into a logical representation and modeling of the shortest ways of successful functioning of the system; 3) description of the system operability condition with a logical function in disjunctive normal form (DNF); 4) transformation of the DNF into orthogonal disjunctive normal form (ODNF) using the orthogonalization algorithm; 5) replacement of logical elements with probabilistic elements in the ODNF, obtaining a reliability estimation polynomial and quantifying reliability; 6) calculation of the "weights" of the elements of the system. Using the logical-probabilistic methods, models and algorithms discussed in the paper, special software was created, by means of which a quantitative assessment of the reliability of systems with a complex structure is produced. As a result, structural analysis of systems, research, and the design of systems with optimal structure are carried out.
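Stages 1-6 can be illustrated on a classic five-component bridge system. The sketch below enumerates component states to evaluate the reliability polynomial exactly (equivalent to what the orthogonalized DNF yields on a small system) and computes Birnbaum-style "weights"; the path sets and probabilities are illustrative assumptions, not from the paper.

```python
from itertools import product

# Minimal path sets ("shortest ways of successful functioning") of a bridge
# network with components 1..5.
PATHS = [{1, 4}, {2, 5}, {1, 3, 5}, {2, 3, 4}]
COMPS = [1, 2, 3, 4, 5]

def reliability(p):
    """Exact system reliability by summing over all 2^5 component states.

    Orthogonalizing the DNF of the path sets yields the same polynomial;
    brute-force enumeration is used here for clarity on a small system.
    """
    total = 0.0
    for state in product([0, 1], repeat=len(COMPS)):
        up = {c for c, s in zip(COMPS, state) if s}
        if any(path <= up for path in PATHS):
            prob = 1.0
            for c, s in zip(COMPS, state):
                prob *= p[c] if s else 1 - p[c]
            total += prob
    return total

p = {1: 0.9, 2: 0.9, 3: 0.8, 4: 0.95, 5: 0.95}
R = reliability(p)

# Birnbaum importance ("weight") of component c: R(p_c = 1) - R(p_c = 0)
weights = {c: reliability({**p, c: 1.0}) - reliability({**p, c: 0.0})
           for c in COMPS}
print(round(R, 4), {c: round(w, 4) for c, w in weights.items()})
```

The bridge element (component 3) receives the smallest weight, matching the intuition that the two direct paths dominate system reliability.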

Keywords: complex systems, logical-probabilistic methods, orthogonalization algorithm, reliability of systems, “weights” of elements

Procedia PDF Downloads 47
4575 Evaluating the Terrace Benefits of Erosion in a Terraced-Agricultural Watershed for Sustainable Soil and Water Conservation

Authors: Sitarrine Thongpussawal, Hui Shao, Clark Gantzer

Abstract:

Terracing is a conservation practice that reduces erosion and is widely used for soil and water conservation throughout the world, but it is relatively expensive. A modification of the Soil and Water Assessment Tool (called SWAT-Terrace, or SWAT-T) explicitly aims to improve the simulation of the hydrological process of erosion from terraces. SWAT-T simulates erosion from terraces by separating each terrace into three segments instead of evaluating the entire terrace. The objective of this work is to evaluate the benefits of terraces for erosion in the Goodwater Creek Experimental Watershed (GCEW) at the watershed and Hydrologic Response Unit (HRU) scales using SWAT-T. The HRU is the smallest spatial unit of the model, which lumps all similar land uses, soils, and slopes within a sub-basin. The SWAT-T model was parameterized for slope length, steepness, and the empirical Universal Soil Loss Equation support-practice factor for the three terrace segments. Data from 1993-2010 measured at the watershed outlet were used for model calibration and validation. Results of SWAT-T calibration showed good agreement between measured and simulated erosion at the monthly time step, but poor performance for validation. This is probably because large storms in spring 2002 prevented planting, causing poorly simulated scheduling of actual field operations. To estimate the terrace benefits for erosion, models with and without terraces were compared. Results showed that SWAT-T produced a significant ~3% reduction in erosion (Pr < 0.01) at the watershed scale and a ~12% reduction at the HRU scale. These results indicate that terraces help reduce erosion from terraced agricultural watersheds, and SWAT-T can be used in the evaluation of erosion to sustainably conserve soil and water.
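The support-practice factor mentioned above enters the Universal Soil Loss Equation multiplicatively, so terracing scales the computed soil loss directly. The sketch below shows that mechanism with entirely hypothetical factor values (SWAT itself applies a modified form of the equation within its hydrological routing).

```python
# USLE soil loss A = R * K * (L*S) * C * P. All values below are hypothetical
# and only illustrate how the terrace support-practice factor P enters.
R = 300.0   # rainfall erosivity
K = 0.30    # soil erodibility
LS = 1.2    # slope length-steepness factor
C = 0.25    # cover-management factor

P_no_terrace = 1.0
P_terrace = 0.6   # terracing lowers P (illustrative value)

A_no = R * K * LS * C * P_no_terrace
A_terr = R * K * LS * C * P_terrace
reduction = 100 * (A_no - A_terr) / A_no
print(f"soil loss: {A_no:.1f} -> {A_terr:.1f} (-{reduction:.0f}%)")
```

Because P multiplies everything else, a P-factor of 0.6 translates directly into a 40% reduction in the equation's soil-loss estimate, before any routing effects.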

Keywords: erosion, modeling, terraces, SWAT

Procedia PDF Downloads 184
4574 Development of an Automatic Computational Machine Learning Pipeline to Process Confocal Fluorescence Images for Virtual Cell Generation

Authors: Miguel Contreras, David Long, Will Bachman

Abstract:

Background: Microscopy plays a central role in cell and developmental biology. In particular, fluorescence microscopy can be used to visualize specific cellular components and subsequently quantify their morphology through the development of virtual-cell models for studying the effects of mechanical forces on cells. However, these imaging experiments present challenges that can make it difficult to quantify cell morphology: inconsistent results, time-consuming and potentially costly protocols, and a limit on the number of labels due to spectral overlap. To address these challenges, the objective of this project is to develop an automatic computational machine learning pipeline to predict cellular component morphology for virtual-cell generation based on fluorescence cell membrane confocal z-stacks. Methods: Registered confocal z-stacks of the nuclei and cell membranes of endothelial cells, consisting of 20 images each, were obtained from fluorescence confocal microscopy and normalized through a software pipeline so that each image had a mean pixel intensity value of 0.5. An open-source machine learning algorithm, originally developed to predict fluorescence labels on unlabeled transmitted light microscopy cell images, was trained using this set of normalized z-stacks on a single-CPU machine. Through transfer learning, the algorithm used knowledge acquired from its previous training sessions to learn the new task. Once trained, the algorithm was used to predict the morphology of nuclei using normalized cell membrane fluorescence images as input. Predictions were compared to the ground truth fluorescence nuclei images. Results: After one week of training, using one cell membrane z-stack (20 images) and the corresponding nuclei label, results showed qualitatively good predictions on the training set. The algorithm was able to accurately predict nuclei locations as well as shape when fed only fluorescence membrane images. Similar training sessions with improved membrane image quality, including a clear lining and shape of the membrane that clearly showed the boundaries of each cell, proportionally improved nuclei predictions, reducing errors relative to the ground truth. Discussion: These results show the potential of pre-trained machine learning algorithms to predict cell morphology using relatively small amounts of data and training time, eliminating the need for multiple labels in immunofluorescence experiments. With further training, the algorithm is expected to predict different labels (e.g., focal-adhesion sites, cytoskeleton), which can be added to the automatic machine learning pipeline for direct input into Principal Component Analysis (PCA) for the generation of virtual-cell mechanical models.
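The per-image normalization step described in the methods can be sketched as follows; the z-stack here is random data standing in for real confocal images, and the function name is ours.

```python
import numpy as np

def normalize_stack(stack, target_mean=0.5):
    """Scale each z-slice of a confocal stack to a fixed mean intensity.

    Mirrors the preprocessing described above: every image in the
    (z, height, width) stack is rescaled so its mean pixel value is 0.5.
    """
    stack = stack.astype(np.float64)
    means = stack.mean(axis=(1, 2), keepdims=True)
    return stack * (target_mean / means)

rng = np.random.default_rng(2)
zstack = rng.uniform(0, 255, size=(20, 64, 64))  # stand-in for a 20-image z-stack
norm = normalize_stack(zstack)
print(norm.mean(axis=(1, 2))[:3])
```

A multiplicative rescale preserves relative contrast within each slice while putting all slices on a common intensity scale for training.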

Keywords: cell morphology prediction, computational machine learning, fluorescence microscopy, virtual-cell models

Procedia PDF Downloads 185
4573 Multiphase Equilibrium Characterization Model For Hydrate-Containing Systems Based On Trust-Region Method Non-Iterative Solving Approach

Authors: Zhuoran Li, Guan Qin

Abstract:

A robust and efficient compositional equilibrium characterization model for hydrate-containing systems is required, especially for time-critical simulations such as subsea pipeline flow assurance analysis and compositional simulation in hydrate reservoirs. A multiphase flash calculation framework, which combines a Gibbs energy minimization function and the cubic plus association (CPA) EoS, is developed to describe the highly non-ideal phase behavior of hydrate-containing systems. A non-iterative eigenvalue problem-solving approach for the trust-region sub-problem is selected to guarantee efficiency. The developed flash model is based on the state-of-the-art objective function proposed by Michelsen to minimize the Gibbs energy of the multiphase system. A hydrate-containing system always contains polar components (such as water and hydrate inhibitors), which introduce hydrogen bonds that influence phase behavior. Thus, the CPA EoS is utilized to compute the thermodynamic parameters. The solid solution theory proposed by van der Waals and Platteeuw is applied to represent the hydrate phase parameters. The trust-region method, combined with the non-iterative eigenvalue problem-solving approach for the trust-region sub-problem, is utilized to ensure fast convergence. The accuracy of the developed multiphase flash model is validated against three available models (one published and two commercial models). Hundreds of published hydrate-containing system equilibrium experimental data points were collected to serve as the standard group for the accuracy test. The comparison results show that our model outperforms two of the models and has calculation accuracy comparable to CSMGem. An efficiency test was also carried out. Because the trust-region method determines the optimization step's direction and size simultaneously, fast solution progress can be obtained. The comparison results show that fewer iterations are needed to optimize the objective function using trust-region methods than line search methods. The non-iterative eigenvalue problem approach also computes faster than the conventional iterative solving algorithm for the trust-region sub-problem, further improving the calculation efficiency. A new thermodynamic framework of the multiphase flash model for hydrate-containing systems has been constructed in this work. Sensitivity analysis and numerical experiments have been carried out to demonstrate the accuracy and efficiency of this model. Furthermore, this model is simple to implement on top of the thermodynamic models currently used in the oil and gas industry.
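To illustrate the trust-region sub-problem at the heart of the optimizer, the sketch below solves min g.x + 0.5 x.Hx subject to ||x|| <= delta via the eigenbasis and secular equation. This is the classical route (the paper's non-iterative eigenvalue formulation reaches the same solution); it covers the easy case only, and H, g, and delta are toy values.

```python
import numpy as np
from scipy.optimize import brentq

def trust_region_subproblem(H, g, delta):
    """Solve min_x g@x + 0.5*x@H@x subject to ||x|| <= delta (easy case).

    Works in the eigenbasis of H and finds the multiplier lam >= 0 with
    (H + lam*I) x = -g and ||x|| = delta when the Newton step is too long.
    """
    w, Q = np.linalg.eigh(H)
    gt = Q.T @ g
    # Unconstrained (Newton) step if H is positive definite and the step fits.
    if w[0] > 0:
        x = Q @ (-gt / w)
        if np.linalg.norm(x) <= delta:
            return x, 0.0
    # Otherwise find lam on the boundary via the secular equation.
    def phi(lam):
        return np.sum((gt / (w + lam)) ** 2) - delta ** 2
    lo = max(0.0, -w[0]) + 1e-12
    hi = lo + 1.0
    while phi(hi) > 0:        # expand the bracket until phi changes sign
        hi *= 2.0
    lam = brentq(phi, lo, hi)
    x = Q @ (-gt / (w + lam))
    return x, lam

H = np.array([[2.0, 0.0], [0.0, 1.0]])
g = np.array([-4.0, -2.0])
x, lam = trust_region_subproblem(H, g, delta=1.0)
print(np.round(x, 4), round(lam, 4))
```

Here the Newton step has norm ~2.83, so the solution lands on the boundary with a positive multiplier, exactly the situation where the trust-region machinery pays off.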

Keywords: equation of state, hydrates, multiphase equilibrium, trust-region method

Procedia PDF Downloads 154
4572 Belief-Based Games: An Appropriate Tool for Uncertain Strategic Situation

Authors: Saied Farham-Nia, Alireza Ghaffari-Hadigheh

Abstract:

Game theory is a mathematical tool for studying the behaviors of rational, strategic decision-makers; it analyzes the equilibria that exist in conflict-of-interest situations and provides appropriate mechanisms for cooperation between two or more players. Game theory is applicable to any strategic, conflict-of-interest situation in politics, management, economics, sociology, and beyond. Real-world decisions are usually made in a state of indeterminacy, and the players often lack information about the other players' payoffs, or even their own, which leads to games in uncertain environments. When historical data for estimating the distribution of decision parameters are unavailable, we may have no choice but to use an expert's belief degree, which represents the strength with which we believe an event will happen. To deal with belief degrees, we use uncertainty theory, which was introduced and developed by Liu based on the normality, duality, subadditivity, and product axioms to model personal belief degrees. As we know, a personal belief degree depends heavily on personal knowledge concerning the event, and when personal knowledge changes, the belief degree changes as well. Uncertainty theory is not only theoretically self-consistent but also well suited among competing theories for modeling belief degrees in practical problems. In this attempt, we first reintroduce the expected utility function in an uncertain environment according to the axioms of uncertainty theory in order to extract payoffs. Then, we employ the Nash equilibrium to investigate the solutions. For more practical issues, the Stackelberg leader-follower game and the Bertrand game are discussed as benchmark models. Compared to existing articles on similar topics, the game models and solution concepts introduced in this article can serve as a framework for problems in an uncertain competitive situation based on an experienced expert's belief degree.
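As a toy illustration of the solution concept, the sketch below finds pure Nash equilibria of a bimatrix game whose entries stand in for uncertain expected payoffs already computed from belief degrees via the expected utility function; the numbers are hypothetical.

```python
import numpy as np
from itertools import product

# Expected-utility payoff matrices under the players' belief degrees
# (the numbers are hypothetical expected values, not raw outcomes).
A = np.array([[3, 0], [5, 1]])  # row player's uncertain expected payoffs
B = np.array([[3, 5], [0, 1]])  # column player's

def pure_nash(A, B):
    """A cell (i, j) is a pure Nash equilibrium if neither player can gain
    by deviating unilaterally."""
    eq = []
    for i, j in product(range(A.shape[0]), range(A.shape[1])):
        if A[i, j] >= A[:, j].max() and B[i, j] >= B[i, :].max():
            eq.append((i, j))
    return eq

print(pure_nash(A, B))
```

These payoffs form a prisoner's dilemma, so the unique equilibrium is mutual defection (1, 1); with belief-degree payoffs, the equilibrium computation is unchanged once expected values are extracted.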

Keywords: game theory, uncertainty theory, belief degree, uncertain expected value, Nash equilibrium

Procedia PDF Downloads 396
4571 Day Ahead and Intraday Electricity Demand Forecasting in Himachal Region using Machine Learning

Authors: Milan Joshi, Harsh Agrawal, Pallaw Mishra, Sanand Sule

Abstract:

Predicting electricity usage is a crucial aspect of organizing and controlling sustainable energy systems. The task of forecasting electricity load is intricate and requires substantial effort due to the combined impact of social, economic, technical, environmental, and cultural factors on power consumption in communities. As a result, it is important to create strong models that can handle the significantly non-linear and complex nature of the task. The objective of this study is to create and compare three machine learning techniques for predicting electricity load both day ahead and intraday, taking into account various factors such as meteorological data and social events, including holidays and festivals. The proposed methods include LightGBM, FBProphet, and a combination of FBProphet and LightGBM for day-ahead forecasting, and motifs (via Stumpy, based on Mueen's algorithm for similarity search) for intraday forecasting. We utilize these techniques to predict electricity usage during normal days and social events in the Himachal region. We then assess their performance by measuring the MSE, RMSE, and MAPE values. The outcomes demonstrate that the combination of FBProphet and LightGBM is the most accurate for day-ahead forecasting and motifs for intraday forecasting of electricity usage, surpassing other models in terms of MAPE, RMSE, and MSE. Moreover, the FBProphet-LightGBM approach proves to be highly effective in forecasting electricity load during social events, exhibiting precise day-ahead predictions. In summary, our proposed electricity forecasting techniques display excellent performance in predicting electricity usage during normal days and special events in the Himachal region.
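The FBProphet-LightGBM hybrid can be sketched as a two-stage model: a trend-plus-seasonality fit followed by gradient boosting on the residuals. Below, a least-squares harmonic fit stands in for FBProphet and scikit-learn's GradientBoostingRegressor stands in for LightGBM; the hourly load series is synthetic.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(3)
hours = np.arange(24 * 365)
# Synthetic hourly load: trend + daily cycle + temperature effect + noise.
temp = 15 + 10 * np.sin(2 * np.pi * hours / (24 * 365)) + rng.normal(0, 2, hours.size)
load = (500 + 0.01 * hours + 80 * np.sin(2 * np.pi * hours / 24)
        + 3.0 * np.maximum(temp - 20, 0) + rng.normal(0, 5, hours.size))

# Stage 1 (stand-in for FBProphet): fit trend + daily seasonality by least squares.
X1 = np.column_stack([np.ones_like(hours), hours,
                      np.sin(2 * np.pi * hours / 24), np.cos(2 * np.pi * hours / 24)])
coef, *_ = np.linalg.lstsq(X1, load, rcond=None)
base = X1 @ coef

# Stage 2 (stand-in for LightGBM): boost on residuals with exogenous features.
feats = np.column_stack([temp, hours % 24])
resid_model = GradientBoostingRegressor(random_state=0).fit(feats, load - base)
pred = base + resid_model.predict(feats)

mape = np.mean(np.abs((load - pred) / load)) * 100
print(f"in-sample MAPE: {mape:.2f}%")
```

The boosting stage picks up the nonlinear temperature effect that the harmonic model misses, which is the same division of labor the hybrid exploits with weather and calendar features.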

Keywords: feature engineering, FBProphet, LightGBM, MASS, Motifs, MAPE

Procedia PDF Downloads 50
4570 The Display of Environmental Information to Promote Energy Saving Practices: Evidence from a Massive Behavioral Platform

Authors: T. Lazzarini, M. Imbiki, P. E. Sutter, G. Borragan

Abstract:

While several strategies, such as the development of more efficient appliances, the financing of insulation programs, or the rolling out of smart meters, represent promising tools to reduce future energy consumption, their implementation relies on people's decisions and actions. Likewise, engaging with consumers to reshape their behavior has been shown to be another important way to reduce energy usage. For these reasons, integrating the human factor into the energy transition has become a major objective for researchers and policymakers. Digital education programs based on tangible and gamified user interfaces have become a new tool with the potential to reduce energy consumption. The B2020 program, developed by the firm "Économie d'Énergie SAS", offers a digital platform to encourage pro-environmental behavior change among employees and citizens. The platform integrates 160 eco-behaviors to help save energy and water and reduce waste and CO2 emissions. A total of 13,146 citizens have used the tool so far to declare the range of eco-behaviors they adopt in their daily lives. The present work builds on this database to identify the potential impact of adopted energy-saving behaviors (n=62) on reducing the use of energy in buildings. To this end, behaviors were classified into three categories regarding the nature of their implementation (eco-habits, e.g., turning off the light; eco-actions, e.g., installing low-carbon technology such as LED light bulbs; and home refurbishments, e.g., wall insulation or double-glazed energy-efficient windows). General linear models (GLM) disclosed a significantly higher frequency of eco-habits when compared to the number of home refurbishments realized by the platform users. While this might be explained in part by the high financial costs associated with home renovation works, it contrasts with the up to three times larger energy savings that can be accomplished by these means. Furthermore, multiple regression models failed to disclose the expected relationship between energy savings and the frequency of adopted eco-behaviors, suggesting that energy-related practices are not necessarily driven by the corresponding energy savings. Finally, our results also suggested that people adopting more eco-habits and eco-actions were more likely to engage in home refurbishments. Altogether, these results fit well with a growing body of scientific research showing that energy-related practices do not necessarily maximize utility, as postulated by traditional economic models, and suggest that other variables might be triggering them. Promoting home refurbishments could benefit from the adoption of complementary energy-saving habits and actions.

Keywords: energy-saving behavior, human performance, behavioral change, energy efficiency

Procedia PDF Downloads 176
4569 Vulnerability Assessment of Vertically Irregular Structures during Earthquake

Authors: Pranab Kumar Das

Abstract:

A vulnerability assessment of buildings with irregularity in the vertical direction has been carried out in this study. The construction of vertically irregular buildings is increasing in the context of fast urbanization in developing countries, including India. During two reconnaissance-based surveys performed after the 2015 Nepal earthquake and the 2016 Imphal (India) earthquake, it was observed that many structures were damaged due to their vertically irregular configuration. These irregular buildings must perform safely during seismic excitation. There is therefore an urgent need to determine the actual vulnerability of such irregular structures so that remedial measures can be taken to protect them during natural hazards such as earthquakes. This assessment will be very helpful for India as well as for other developing countries. A substantial body of research has addressed the vulnerability of plan-asymmetric buildings; in the field of vertically irregular buildings, much less effort has been devoted to determining their vulnerability during an earthquake. Irregularity in the vertical direction may be caused by an irregular distribution of mass or stiffness or by a geometrically irregular configuration. Detailed analysis of such structures, particularly non-linear/pushover analysis for performance-based design, is a challenging task. The present paper considers a number of models of irregular structures. Building models made of both reinforced concrete and brick masonry are considered for the sake of generality. The analyses are performed with the help of both the finite element method and computational methods. The study, as a whole, may help to arrive at reasonably good estimates of, and insight into, the fundamental and other natural periods of such vertically irregular structures. The ductility demand, storey drift, and seismic response studies help to identify the locations of critical stress concentration. In summary, this paper is a humble step toward understanding the vulnerability of, and framing guidelines for, vertically irregular structures.

Keywords: ductility, stress concentration, vertically irregular structure, vulnerability

Procedia PDF Downloads 218
4568 Programmatic Actions of Social Welfare State in Service to Justice: Law, Society and the Third Sector

Authors: Bruno Valverde Chahaira, Matheus Jeronimo Low Lopes, Marta Beatriz Tanaka Ferdinandi

Abstract:

This paper proposes to dissect the meanings and/or directions of the State in order to present State models and elaborate a conceptual framework for the State's function in the legal sphere. To do so, it points out the possible contracts established between the State and Society, since the general principles immanent in them can guide the models of society in force. From this orientation arise the contracts whose purpose is to modify the status (the being and/or the opinion) of each of the subjects in presence: State and Society. In this logic, this paper discusses fiduciary contracts and contracts of "veredicção" (veridiction), from the perspective of discourse semiotics (the Greimasian approach). The study therefore focuses on the issue of language as manifested in unilateral and bilateral or reciprocal relations between the State and Society. Thus, through the model of the communicative situation and discourse, the guidelines of these contractual relations will be analyzed in order to see whether there is a pragmatic sanction: positive when the contract between the subjects is fulfilled (reward), or negative when the contract between them is broken (punishment). In this way, a third path emerges which, in this specific case, passes through the third sector. In other words, the proposal, which is systemic in nature, is to analyze whether, when the contract of the welfare state is not carried out under the constitutional program on fundamental rights (education, health, housing, and others), the third way (the Third Sector) advances into the empty space left by the State, within the structure of the exchange demanded by Society according to its contractual obligations. In this line, the paper presents the modalities of action of the third sector in the social sphere. Finally, the normative communicative organization of these three subjects (State, Society, and Third Sector) is examined within the pragmatic model of discourse, in an attempt to understand the constant dynamics in the Law and in the language of the relations established between them.

Keywords: access to justice, state, social rights, third sector

Procedia PDF Downloads 126
4567 A Multi-Modal Virtual Walkthrough of the Virtual Past and Present Based on Panoramic View, Crowd Simulation and Acoustic Heritage on Mobile Platform

Authors: Lim Chen Kim, Tan Kian Lam, Chan Yi Chee

Abstract:

This research presents a multi-modal simulation for the reconstruction of the past and the construction of the present in digital cultural heritage on a mobile platform. To bring present life, the virtual environment is generated through a presented scheme for the rapid and efficient construction of 360° panoramic views. Then, an acoustical heritage model and a crowd model are presented and incorporated into the 360° panoramic view. For the reconstruction of past life, the crowd is simulated and rendered in an old trading port. The keystone of this research is a virtual walkthrough that shows virtual present life in 2D and virtual past life in 3D, both in an environment of virtual heritage sites in George Town, through a mobile device. Firstly, the 2D crowd is modelled and simulated using OpenGL ES 1.1 on a mobile platform. The 2D crowd is used to portray present life in a 360° panoramic view of a virtual heritage environment based on an extension of Newtonian laws. Secondly, the 2D crowd is animated and rendered in 3D with improved variety and incorporated into the virtual past life using the Unity3D game engine. The behaviours of the 3D models are then simulated based on an enhancement of the classical Boid algorithm. Finally, a demonstration system is developed and integrated with the models, techniques, and algorithms of this research. The virtual walkthrough was demonstrated to a group of respondents and evaluated through a user-centred evaluation in which the respondents navigated around the demonstration system. The results of the questionnaire-based evaluation show that the presented virtual walkthrough was successfully deployed through a multi-modal simulation, and such a virtual walkthrough would be particularly useful in virtual tour and virtual museum applications.
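The classical Boid rules that this work enhances can be sketched as follows: each agent steers by cohesion, separation, and alignment with respect to its neighbours. The weights, neighbourhood radius, and speed cap below are illustrative values, not those of the demonstration system.

```python
import numpy as np

def boid_step(pos, vel, r=2.0, w_coh=0.01, w_sep=0.05, w_ali=0.05,
              v_max=0.5, dt=1.0):
    """One update of the classical Boid rules: cohesion, separation, alignment."""
    new_vel = vel.copy()
    for i in range(len(pos)):
        d = np.linalg.norm(pos - pos[i], axis=1)
        near = (d > 0) & (d < r)
        if near.any():
            coh = pos[near].mean(axis=0) - pos[i]    # steer toward local centre
            sep = (pos[i] - pos[near]).sum(axis=0)   # steer away from neighbours
            ali = vel[near].mean(axis=0) - vel[i]    # match neighbours' heading
            new_vel[i] += w_coh * coh + w_sep * sep + w_ali * ali
        speed = np.linalg.norm(new_vel[i])
        if speed > v_max:                            # cap speed for stability
            new_vel[i] *= v_max / speed
    return pos + dt * new_vel, new_vel

rng = np.random.default_rng(4)
pos = rng.uniform(0.0, 5.0, (30, 2))
vel = rng.normal(0.0, 0.1, (30, 2))
for _ in range(100):
    pos, vel = boid_step(pos, vel)
print(np.round(pos.mean(axis=0), 3))
```

In the demonstration system, variants of these same three steering terms drive the 3D crowd, with the panoramic and acoustic layers composited on top.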

Keywords: Boid algorithm, crowd simulation, mobile platform, Newtonian laws, virtual heritage

Procedia PDF Downloads 257
4566 Implementation of Fuzzy Version of Block Backward Differentiation Formulas for Solving Fuzzy Differential Equations

Authors: Z. B. Ibrahim, N. Ismail, K. I. Othman

Abstract:

Fuzzy differential equations (FDEs) play an important role in modelling many real-life phenomena. FDEs are used to model the behaviour of problems subject to the uncertainty and the vague or imprecise information that constantly arise in mathematical models in various branches of science and engineering. These uncertainties have to be taken into account in order to obtain a more realistic model, and for many such models it is difficult, and sometimes impossible, to obtain analytic solutions. Thus, many authors have attempted to extend or modify existing numerical methods developed for solving ordinary differential equations (ODEs) into fuzzy versions suitable for solving FDEs. In this paper, we propose a fuzzy version of a three-point block method based on block backward differentiation formulas (FBBDF) for the numerical solution of first-order FDEs. The three-point block FBBDF method is implemented with a uniform step size and produces three new approximations simultaneously at each integration step using the same back values. The Newton iteration of the FBBDF is formulated, and the implementation is based on the predictor and corrector formulas in PECE mode. For greater efficiency of the block method, the coefficients of the FBBDF are stored at the start of the program. The proposed FBBDF is validated through numerical results on some standard problems found in the literature, and comparisons are made with the existing fuzzy versions of the modified Simpson and Euler methods in terms of the accuracy of the approximated solutions. The numerical results show that the FBBDF method performs better in terms of accuracy than the Euler method when solving FDEs.
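As a hedged sketch of solving a first-order FDE on alpha-cuts, the example below integrates y' = -y with a triangular fuzzy initial value (0.9, 1.0, 1.1), tracking the lower and upper endpoints of each alpha-cut. SciPy's stiff "BDF" integrator stands in for the proposed block FBBDF; the equation and fuzzy number are illustrative, not from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def solve_alpha_cut(alpha, t_end=1.0):
    """Solve y' = -y on [0, t_end] for one alpha-cut of the fuzzy initial value."""
    lo0 = 0.9 + 0.1 * alpha   # lower endpoint of the alpha-cut of (0.9, 1.0, 1.1)
    hi0 = 1.1 - 0.1 * alpha   # upper endpoint
    f = lambda t, y: -y
    lo = solve_ivp(f, (0.0, t_end), [lo0], method="BDF", rtol=1e-8, atol=1e-10)
    hi = solve_ivp(f, (0.0, t_end), [hi0], method="BDF", rtol=1e-8, atol=1e-10)
    return lo.y[0, -1], hi.y[0, -1]

for alpha in (0.0, 0.5, 1.0):
    lo, hi = solve_alpha_cut(alpha)
    print(f"alpha={alpha}: [{lo:.6f}, {hi:.6f}]")
```

At alpha = 1 the interval collapses to the crisp solution e^{-t}; a block method such as the FBBDF would advance several such endpoint solutions per step simultaneously.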

Keywords: block, backward differentiation formulas, first order, fuzzy differential equations

Procedia PDF Downloads 301
4565 Mean and Volatility Spillover between US Stocks Market and Crude Oil Markets

Authors: Kamel Malik Bensafta, Gervasio Bensafta

Abstract:

The purpose of this paper is to investigate the relationship between oil prices and stock markets. The empirical analysis in this paper is conducted within the context of multivariate GARCH models, using a transformed version of the so-called BEKK parameterization. We show that the mean and volatility of the US market are transmitted to the oil market and the European market. We also identify an important transmission from WTI prices to Brent prices.
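A minimal bivariate BEKK(1,1) recursion can be sketched as follows; the parameter matrices are illustrative values, not estimates from the paper's data, and the quadratic form guarantees that the conditional covariance stays positive definite.

```python
import numpy as np

# Bivariate BEKK(1,1): H_t = C C' + A' eps_{t-1} eps_{t-1}' A + B' H_{t-1} B
C = np.array([[0.10, 0.00], [0.03, 0.09]])  # lower-triangular constant term
A = np.array([[0.30, 0.05], [0.02, 0.25]])  # ARCH (shock transmission) matrix
B = np.array([[0.90, 0.02], [0.01, 0.92]])  # GARCH (volatility persistence) matrix

rng = np.random.default_rng(5)
T = 500
H = np.cov(rng.normal(size=(2, 50)))        # initial conditional covariance
eps = np.zeros(2)
for t in range(T):
    H = C @ C.T + A.T @ np.outer(eps, eps) @ A + B.T @ H @ B
    eps = np.linalg.cholesky(H) @ rng.normal(size=2)  # draw shock from N(0, H_t)

print(np.round(H, 4))
```

The off-diagonal entries of A and B are what carry mean and volatility spillovers between the two series; setting them to zero decouples the markets.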

Keywords: oil volatility, stock markets, MGARCH, transmission, structural break

Procedia PDF Downloads 473