Search results for: median models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7049


4379 Coexistence of Two Different Types of Intermittency near the Boundary of Phase Synchronization in the Presence of Noise

Authors: Olga I. Moskalenko, Maksim O. Zhuravlev, Alexey A. Koronovskii, Alexander E. Hramov

Abstract:

Intermittent behavior near the boundary of phase synchronization in the presence of noise is studied. In a certain range of the coupling parameter and noise intensity, the coexistence of eyelet and ring intermittencies (an intermittency of intermittencies) is shown to take place. The main results are illustrated using the example of two unidirectionally coupled Rössler systems. Similar behavior is shown to occur in two unidirectionally coupled hydrodynamical models of the Pierce diode.
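As an illustrative sketch (the parameter values, coupling strength, and phase definition below are common choices for this system, not values taken from the paper), the drive-response Rössler configuration and the phase difference used to detect intermittent phase slips might look like:

```python
import numpy as np
from scipy.integrate import solve_ivp

def coupled_rossler(t, s, eps=0.05, w1=0.99, w2=0.95):
    """Two unidirectionally coupled Rossler oscillators.
    Drive: (x1, y1, z1); response: (x2, y2, z2), coupled via eps*(x1 - x2)."""
    x1, y1, z1, x2, y2, z2 = s
    a, p, c = 0.15, 0.2, 10.0
    dx1 = -w1 * y1 - z1
    dy1 = w1 * x1 + a * y1
    dz1 = p + z1 * (x1 - c)
    dx2 = -w2 * y2 - z2 + eps * (x1 - x2)   # unidirectional coupling term
    dy2 = w2 * x2 + a * y2
    dz2 = p + z2 * (x2 - c)
    return [dx1, dy1, dz1, dx2, dy2, dz2]

sol = solve_ivp(coupled_rossler, (0, 200), [1, 0, 0, 0.9, 0.1, 0], max_step=0.05)

# Instantaneous phases from the (x, y) projection of each oscillator;
# slips of the phase difference mark intermittent desynchronization events.
phi1 = np.unwrap(np.arctan2(sol.y[1], sol.y[0]))
phi2 = np.unwrap(np.arctan2(sol.y[4], sol.y[3]))
dphi = phi1 - phi2
```

Noise would enter as an additional stochastic forcing term, which requires a stochastic integrator rather than `solve_ivp`.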

Keywords: chaotic oscillators, phase synchronization, noise, intermittency of intermittencies

Procedia PDF Downloads 624
4378 Design of Evaluation for eHealth Intervention: A Participatory Study in Italy, Israel, Spain and Sweden

Authors: Monika Jurkeviciute, Amia Enam, Johanna Torres Bonilla, Henrik Eriksson

Abstract:

Introduction: Many evaluations of eHealth interventions conclude that the evidence for improved clinical outcomes is limited, especially when the intervention is short, such as one year. Often, the evaluation design does not address the feasibility of achieving clinical outcomes. Evaluations are designed to reflect the clinical goals of the intervention without utilizing the opportunity to illuminate effects on organizations and cost. A comprehensive design of evaluation can better support decision-making regarding the effectiveness and potential transferability of eHealth. Hence, the purpose of this paper is to present a feasible and comprehensive design of evaluation for an eHealth intervention, including the design process in different contexts. Methodology: The situation of limited feasibility of clinical outcomes was foreseen in the European Union-funded project “DECI” (“Digital Environment for Cognitive Inclusion”), run under the “Horizon 2020” program with the aim of defining and testing a digital environment platform within corresponding care models that help elderly people live independently. A complex intervention of eHealth implementation into elaborate care models in four different countries was planned for one year. To design the evaluation, a participative approach was undertaken using Pettigrew’s lens of change and transformations, including context, process, and content. Through a series of workshops, observations, interviews, and document analysis, as well as a review of the scientific literature, a comprehensive design of evaluation was created. Findings: The findings indicate that in order to obtain evidence on clinical outcomes, eHealth interventions should last longer than one year. The content of the comprehensive evaluation design includes a collection of qualitative and quantitative methods for data gathering which illuminate non-medical aspects.
Furthermore, it contains communication arrangements to discuss the results and continuously improve the evaluation design, as well as procedures for monitoring and improving the data collection during the intervention. The process of the comprehensive evaluation design consists of four stages: (1) analysis of the current state in different contexts, including measurement systems, expectations and profiles of stakeholders, organizational ambitions to change due to eHealth integration, and the organizational capacity to collect data for evaluation; (2) a workshop with project partners to discuss the as-is situation in relation to the project goals; (3) development of general and customized sets of relevant performance measures, questionnaires and interview questions; (4) setting up procedures and monitoring systems for the interventions. Lastly, strategies are presented on how challenges can be handled during the design process of evaluation in four different countries. The evaluation design needs to consider contextual factors such as project limitations and differences between pilot sites in terms of eHealth solutions, patient groups, care models, national and organizational cultures, and settings. This implies a need for a flexible approach to evaluation design to enable judgment of the effectiveness and potential for adoption and transferability of eHealth. In summary, this paper provides learning opportunities for future evaluation designs of eHealth interventions in different national and organizational settings.

Keywords: ehealth, elderly, evaluation, intervention, multi-cultural

Procedia PDF Downloads 308
4377 Analysis on the Feasibility of Landsat 8 Imagery for Water Quality Parameters Assessment in an Oligotrophic Mediterranean Lake

Authors: V. Markogianni, D. Kalivas, G. Petropoulos, E. Dimitriou

Abstract:

Lake water quality monitoring in combination with the use of earth observation products constitutes a major component of many water quality monitoring programs. Landsat 8 images of Trichonis Lake (Greece) acquired on 30/10/2013 and 30/08/2014 were used in order to explore the ability of Landsat 8 to estimate water quality parameters, particularly CDOM absorption at specific wavelengths, chlorophyll-a and nutrient concentrations, in this oligotrophic freshwater body, characterized by negligible quantitative, temporal and spatial variability. Water samples were collected at 22 different stations in late August of 2014, and the satellite image of the same date was used to statistically correlate the in-situ measurements with various combinations of Landsat 8 bands in order to develop algorithms that best describe those relationships and accurately calculate the aforementioned water quality components. The optimal models were applied to the image of late October of 2013, and the results were validated through comparison with the respective available in-situ data of 2013. Initial results indicated the limited ability of the Landsat 8 sensor to accurately estimate water quality components in an oligotrophic waterbody. As shown by the validation process, ammonium concentrations proved to be the most accurately estimated component (R = 0.7), followed by chl-a concentration (R = 0.5) and the CDOM absorption at 420 nm (R = 0.3). In-situ nitrate, nitrite, phosphate and total nitrogen concentrations of 2014 were lower than the detection limit of the instrument used, hence no statistical elaboration was conducted. On the other hand, multiple linear regression among reflectance measures and total phosphorus concentrations resulted in low and statistically insignificant correlations.
Our results concur with other studies in the international literature, indicating that estimations for eutrophic and mesotrophic lakes are more accurate than those for oligotrophic ones, owing to the lack of suspended particles that are detectable by satellite sensors. Nevertheless, although the predictive models developed and applied to the oligotrophic Trichonis Lake are less accurate, they may still be useful indicators of its water quality deterioration.
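A minimal sketch of the band-combination regression step, using synthetic reflectances and hypothetical band ratios (the actual study screened many Landsat 8 band combinations against the measured in-situ concentrations):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Hypothetical surface reflectances at the 22 stations (Landsat 8 OLI bands 2-5)
b2, b3, b4, b5 = rng.uniform(0.02, 0.15, size=(4, 22))
# Synthetic in-situ chlorophyll-a values tied to one band ratio (illustrative only)
chl_a = 0.8 * (b3 / b2) + rng.normal(0, 0.05, 22)

# Candidate predictors: band ratios, as commonly screened in such studies
X = np.column_stack([b3 / b2, b5 / b4])
model = LinearRegression().fit(X, chl_a)

# Validation-style Pearson correlation between predicted and "measured" values
r = np.corrcoef(model.predict(X), chl_a)[0, 1]
```

In the study itself the fitted model would then be applied to the second image and validated against the independent 2013 in-situ data.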

Keywords: landsat 8, oligotrophic lake, remote sensing, water quality

Procedia PDF Downloads 380
4376 Role of Grey Scale Ultrasound Including Elastography in Grading the Severity of Carpal Tunnel Syndrome - A Comparative Cross-sectional Study

Authors: Arjun Prakash, Vinutha H., Karthik N.

Abstract:

BACKGROUND: Carpal tunnel syndrome (CTS) is a common entrapment neuropathy with an estimated prevalence of 0.6-5.8% in the general adult population. It is caused by compression of the median nerve (MN) at the wrist as it passes through a narrow osteofibrous canal. Presently, the diagnosis is established by the clinical symptoms and physical examination, and a nerve conduction study (NCS) is used to assess its severity. However, NCS is considered painful, time-consuming and expensive, with a false-negative rate between 16% and 34%. Ultrasonography (USG) is now increasingly used as a diagnostic tool in CTS due to its non-invasive nature, increased accessibility and relatively low cost. Elastography is a newer USG modality that helps to assess the stiffness of tissues; however, there is limited literature on its applications in peripheral nerves. OBJECTIVES: Our objectives were to measure the cross-sectional area (CSA) and elasticity of the MN at the carpal tunnel using grey-scale ultrasonography (USG), strain elastography (SE) and shear wave elastography (SWE). We also attempted to independently evaluate the role of grey-scale USG, SE and SWE in grading the severity of CTS, keeping NCS as the gold standard. MATERIALS AND METHODS: After approval from the Institutional Ethics Review Board, we conducted a comparative cross-sectional study over a period of 18 months. The participants were divided into two groups: Group A consisted of 54 patients with clinically diagnosed CTS who underwent NCS, and Group B consisted of 50 controls without any clinical symptoms of CTS. All ultrasound examinations were performed on a SAMSUNG RS 80 EVO ultrasound machine with a 2-9 MHz linear probe. In both groups, the CSA of the MN was measured on grey-scale USG, and its elasticity was measured at the carpal tunnel (in terms of strain ratio and shear modulus).
The variables were compared between the two groups using an independent t-test, and subgroup analyses were performed using one-way analysis of variance. Receiver operating characteristic curves were used to evaluate the diagnostic performance of each variable. RESULTS: The mean CSA of the MN was 13.60 ± 3.201 mm² in Group A and 9.17 ± 1.665 mm² in Group B (p < 0.001). The mean SWE was 30.65 ± 12.996 kPa in Group A and 17.33 ± 2.919 kPa in Group B (p < 0.001), and the mean strain ratio was 7.545 ± 2.017 in Group A and 5.802 ± 1.153 in Group B (p < 0.001). CONCLUSION: The combined use of grey-scale USG, SE and SWE is extremely useful in grading the severity of CTS and can be used as a painless and cost-effective alternative to NCS. Early diagnosis and grading of CTS and effective treatment are essential to avoid permanent nerve damage and functional disability.
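The group comparison and diagnostic-performance evaluation can be sketched as follows, drawing synthetic samples from the reported group sizes, means and standard deviations (the real analysis used the measured per-patient data):

```python
import numpy as np
from scipy import stats
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
# Synthetic cross-sectional areas (mm^2) from the reported means and SDs
csa_cts = rng.normal(13.60, 3.201, 54)   # Group A: CTS patients
csa_ctl = rng.normal(9.17, 1.665, 50)    # Group B: controls

# Independent (Welch) t-test between the two groups
t, p = stats.ttest_ind(csa_cts, csa_ctl, equal_var=False)

# ROC-style diagnostic performance of CSA as a single discriminator
labels = np.r_[np.ones(54), np.zeros(50)]
auc = roc_auc_score(labels, np.r_[csa_cts, csa_ctl])
```

The same pattern applies to the SWE and strain-ratio variables; cut-off values for severity grading would come from the ROC curves.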

Keywords: carpal tunnel, ultrasound, elastography, nerve conduction study

Procedia PDF Downloads 81
4375 Reactive Transport Modeling in Carbonate Rocks: A Single Pore Model

Authors: Priyanka Agrawal, Janou Koskamp, Amir Raoof, Mariette Wolthers

Abstract:

Calcite is the main mineral found in carbonate rocks, which form significant hydrocarbon reservoirs and subsurface repositories for CO2 sequestration. The injected CO2 mixes with the reservoir fluid and disturbs the geochemical equilibrium, triggering calcite dissolution. Different combinations of fluid chemistry and injection rate may therefore result in different evolutions of porosity, permeability and dissolution patterns. To model the changes in porosity and permeability, the Kozeny-Carman equation K ∝ φ^n is used, where K is permeability and φ is porosity. The value of n is mostly based on experimental data or pore network models. In pore network models, this derivation depends on the accuracy of the relations used for conductivity and pore volume change. In fact, at the single-pore scale, this relationship is the result of the pore shape development due to dissolution. We have prepared a new reactive transport model for a single pore which simulates the complex chemical reactions of carbonic-acid-induced calcite dissolution and the subsequent pore-geometry evolution at the single-pore scale. We use the COMSOL Multiphysics package 5.3 for the simulation. COMSOL utilizes the arbitrary Lagrangian-Eulerian (ALE) method for the free-moving domain boundary. We examined the effect of flow rate on the evolution of single pore shape profiles due to calcite dissolution, using three flow rates to cover the diffusion-dominated and advection-dominated transport regimes. The fluid in diffusion-dominated flow (Pe numbers 0.037 and 0.37) becomes less reactive along the pore length and thus produced non-uniform pore shapes. However, for the advection-dominated flow (Pe number 3.75), the fast velocity of the fluid keeps the fluid relatively more reactive towards the end of the pore length, thus yielding a uniform pore shape. Different pore shapes, in terms of inlet opening vs. overall pore opening, will have an impact on the relation between changing volumes and conductivity.
We have related the pore shape to the Pe number, which controls the transport regime. For each Pe number, we have derived the relation between conductivity and porosity. These relations will be used in the pore network model to obtain the porosity and permeability variation.
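The porosity-permeability update and the transport-regime classification reduce to two small formulas; the exponent n = 3 and the parameter values below are illustrative assumptions, not the study's calibrated values:

```python
def kozeny_carman(k0, phi0, phi, n=3.0):
    """Update permeability from a porosity change via K/K0 = (phi/phi0)^n.
    The exponent n is case-specific; n = 3 is a common default assumption."""
    return k0 * (phi / phi0) ** n

def peclet(velocity, length, diffusivity):
    """Pe = u*L/D separates diffusion-dominated (Pe << 1) from
    advection-dominated (Pe >> 1) transport."""
    return velocity * length / diffusivity

# Illustrative numbers: dissolution raises porosity from 0.20 to 0.25
k_new = kozeny_carman(k0=1e-12, phi0=0.20, phi=0.25)

# Illustrative pore-scale regime check (units must be consistent, here SI)
pe = peclet(velocity=1e-6, length=1e-4, diffusivity=1e-9)
```

The paper's contribution is precisely that the effective exponent in this relation depends on the Pe-controlled pore shape, rather than being a single constant.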

Keywords: single pore, reactive transport, calcite system, moving boundary

Procedia PDF Downloads 359
4374 A Game Theory Analysis of the Effectiveness of Passenger Profiling for Transportation Security

Authors: Yael Deutsch, Arieh Gavious

Abstract:

The threat of aviation terrorism and its potential damage became significant after the 9/11 terror attacks. These attacks have led authorities and leaders to suggest that security personnel should overcome politically correct scruples about profiling and use it openly. However, there is a lack of knowledge about the smart usage of profiling and its advantages. We analyze game models that are suitable for specific real-world scenarios, focusing on profiling as a tool to detect potential violators, such as terrorists and smugglers. We provide analytical and clear answers to difficult questions, and thereby help combat harmful violations.
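A toy 2x2 inspection game illustrates the kind of mixed-strategy Nash equilibrium such models analyze; the payoff values below are assumptions for illustration, not taken from the paper:

```python
from fractions import Fraction

# Row player: inspector (I = inspect, T = trust); column player: passenger
# (V = violate, C = comply). Entries are (inspector payoff, passenger payoff).
payoffs = {
    ("I", "V"): (1, -2),   # violation caught
    ("I", "C"): (-1, 0),   # costly needless inspection
    ("T", "V"): (-3, 1),   # violation succeeds
    ("T", "C"): (0, 0),
}

def expected(player, p_inspect, q_violate):
    """Expected payoff given mixed strategies p (inspect) and q (violate)."""
    idx = 0 if player == "inspector" else 1
    probs = {("I", "V"): p_inspect * q_violate,
             ("I", "C"): p_inspect * (1 - q_violate),
             ("T", "V"): (1 - p_inspect) * q_violate,
             ("T", "C"): (1 - p_inspect) * (1 - q_violate)}
    return sum(probs[a] * payoffs[a][idx] for a in probs)

# Mixed Nash equilibrium: each side randomizes so the other is indifferent.
# Inspector indifferent: q*1 + (1-q)*(-1) = -3q  =>  q = 1/5
# Passenger indifferent: p*(-2) + (1-p)*1 = 0    =>  p = 1/3
q = Fraction(1, 5)   # equilibrium violation probability
p = Fraction(1, 3)   # equilibrium inspection probability
```

Profiling enters such models by letting the inspector condition p on an observable signal, which changes the equilibrium mixes per passenger class.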

Keywords: game theory, profiling, security, Nash equilibrium

Procedia PDF Downloads 92
4373 Experimental Study and Numerical Modelling of Failure of Rocks Typical for Kuzbass Coal Basin

Authors: Mikhail O. Eremin

Abstract:

The present work is devoted to an experimental study and numerical modelling of the failure of rocks typical of the Kuzbass coal basin (Russia). The main goal was to determine the strength and deformation characteristics of the rocks on the basis of uniaxial compression and three-point bending loadings, and then to build a mathematical model of the failure process for both types of loading. Depending on their particular physical-mechanical characteristics, the typical rocks of the Kuzbass coal basin (sandstones, siltstones, mudstones, etc. of the Kolchuginsk, Tarbagansk and Balohonsk series) manifest brittle and quasi-brittle failure. The strength characteristics for both tension and compression are found; other characteristics are obtained from the experiments or taken from literature reviews. On the basis of the obtained characteristics and the structure (obtained from microscopy), the mathematical and structural models are built and numerical modelling of failure under both types of loading is carried out. The effective characteristics obtained from the modelling and the character of failure correspond to the experiments, and thus the mathematical model is verified. An Instron 1185 machine was used to carry out the experiments. The mathematical model includes the fundamental conservation laws of solid mechanics: mass, momentum and energy. Each rock has a noticeably anisotropic structure; however, each crystallite might be considered isotropic, so that the whole rock model has a quasi-isotropic structure. This makes it possible to use Hooke’s law inside each crystallite, thus explicitly accounting for the anisotropy of the rock and the stress-strain state at loading. Inelastic behavior is described in the framework of two different models: the von Mises yield criterion and a modified Drucker-Prager yield criterion. Damage accumulation theory is also implemented in order to describe the failure process.
The obtained effective characteristics of the rocks are then used for modelling the evolution of the rock mass when mining is carried out by either open-pit or underground openings.
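The von Mises part of the inelastic model can be sketched as a simple check of the equivalent stress against a yield strength (the stress values below are illustrative; the Drucker-Prager variant would add a pressure-dependent term):

```python
import numpy as np

def von_mises_stress(sigma):
    """Equivalent (von Mises) stress from a 3x3 Cauchy stress tensor."""
    s = sigma - np.trace(sigma) / 3.0 * np.eye(3)   # deviatoric part
    return np.sqrt(1.5 * np.sum(s * s))

def yields_von_mises(sigma, sigma_y):
    """Von Mises criterion: yielding once equivalent stress reaches sigma_y."""
    return von_mises_stress(sigma) >= sigma_y

# Uniaxial stress state: the equivalent stress equals the axial stress
sigma = np.diag([120.0, 0.0, 0.0])   # MPa, illustrative
sv = von_mises_stress(sigma)
```

In the full model such a check is evaluated per element, with damage accumulation degrading the strength over load steps.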

Keywords: damage accumulation, Drucker-Prager yield criterion, failure, mathematical modelling, three-point bending, uniaxial compression

Procedia PDF Downloads 158
4372 Learning the Dynamics of Articulated Tracked Vehicles

Authors: Mario Gianni, Manuel A. Ruiz Garcia, Fiora Pirri

Abstract:

In this work, we present a Bayesian non-parametric approach to modeling the motion control of articulated tracked vehicles (ATVs). The motion control model is based on a Dirichlet Process-Gaussian Process (DP-GP) mixture model. The DP-GP mixture model provides a flexible representation of patterns of control manoeuvres along trajectories of different lengths and discretizations. The model also estimates the number of patterns sufficient for modeling the dynamics of the ATV.
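Only the Dirichlet Process side of the model (inferring how many manoeuvre patterns the data support) is easy to sketch compactly; the following uses a truncated DP Gaussian mixture on synthetic control commands as a stand-in for the full DP-GP mixture:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Synthetic 2-D control commands (e.g. linear/angular velocity) drawn from
# three distinct manoeuvre patterns (invented data, not from the paper).
X = np.vstack([rng.normal(m, 0.1, size=(100, 2))
               for m in ([0, 0], [1, 1], [2, 0])])

# Truncated Dirichlet Process mixture: starts with more components than
# needed and lets the data decide how many carry non-negligible weight.
dpgmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    random_state=0).fit(X)

n_active = int(np.sum(dpgmm.weights_ > 0.05))  # effective number of patterns
```

In the paper's model each mixture component is additionally a Gaussian Process over trajectories, which this Gaussian-component sketch does not capture.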

Keywords: Dirichlet processes, gaussian mixture models, learning motion patterns, tracked robots for urban search and rescue

Procedia PDF Downloads 433
4371 Cell-Cell Interactions in Diseased Conditions Revealed by Three Dimensional and Intravital Two Photon Microscope: From Visualization to Quantification

Authors: Satoshi Nishimura

Abstract:

Although much information has been garnered from the genomes of humans and mice, it remains difficult to extend that information to explain physiological and pathological phenomena. This is because the processes underlying life are by nature stochastic and fluctuate with time. Thus, we developed a novel "in vivo molecular imaging" method based on single- and two-photon microscopy. We visualized and analyzed many life phenomena, including common adult diseases. We integrated the knowledge obtained and established new models that will serve as the basis for new minimally invasive therapeutic approaches.

Keywords: two photon microscope, intravital visualization, thrombus, artery

Procedia PDF Downloads 358
4370 Berry Phase and Quantum Skyrmions: A Loop Tour in Physics

Authors: Sinuhé Perea Puente

Abstract:

In several physics systems the whole can be obtained as an exact copy of each of its parts, which facilitates the study of a complex system by looking carefully at its elements separately. Reductionism offers simplified models which make the problems easier, but “there’s plenty of room...at the mesoscopic scale”. Here we present a tour of two of its representatives, the Berry phase and skyrmions: we study some of their basic definitions and properties, and two cases in which both arise together, and finish by constraining the scale of our mesoscopic system in the quest for quantum skyrmions, discovering which properties are conserved and which others may be destroyed.
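As a concrete example of a Berry phase, the discrete overlap-product formula for a spin-1/2 state carried around a loop of the field direction can be evaluated numerically and checked against the half-solid-angle result (a textbook case, not a calculation from this paper):

```python
import numpy as np

def spin_up_state(theta, phi):
    """Eigenstate of n·sigma with eigenvalue +1 (spin aligned with the field
    direction n = (sin t cos p, sin t sin p, cos t))."""
    return np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

def berry_phase(theta, n=2000):
    """Discrete Berry phase, gamma = -Im log prod <psi_k|psi_{k+1}>, for a
    loop of the field at fixed polar angle theta (phi: 0 -> 2*pi)."""
    phis = np.linspace(0, 2 * np.pi, n)
    states = [spin_up_state(theta, p) for p in phis]
    prod = 1.0 + 0j
    for a, b in zip(states[:-1], states[1:]):
        prod *= np.vdot(a, b)   # overlap between successive states
    return -np.angle(prod)

theta = np.pi / 3
gamma = berry_phase(theta)

# Analytic result: minus half the solid angle enclosed by the loop
analytic = -np.pi * (1 - np.cos(theta))
```

The gauge-invariant product form is the same construction used to define topological charges of skyrmion textures on a lattice.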

Keywords: condensed matter, quantum physics, skyrmions, topological defects

Procedia PDF Downloads 123
4369 Data and Model-based Metamodels for Prediction of Performance of Extended Hollo-Bolt Connections

Authors: M. Cabrera, W. Tizani, J. Ninic, F. Wang

Abstract:

Open-section beam to concrete-filled tubular column structures have been increasingly utilized in construction over the past few decades due to their enhanced structural performance, as well as their economic and architectural advantages. However, the use of this configuration in construction is limited due to the difficulties in connecting the structural members, as there is no access to the inner part of the tube to install standard bolts. Blind-bolted systems are a relatively new approach to overcome this limitation, as they only require access to one side of the tubular section to tighten the bolt. The performance of these connections in concrete-filled steel tubular sections remains uncharacterized due to the complex interactions between concrete, bolt, and steel section. In recent years, research in structural performance has moved to a more sophisticated and efficient approach consisting of machine learning algorithms to generate metamodels. This method reduces the need for developing complex and computationally expensive finite element models, optimizing the search for desirable design variables. Metamodels generated by a data fusion approach use numerical and experimental results, combining multiple models to capture the dependency between the simulation design variables and connection performance, learning the relations between different design parameters and predicting a given output. Fully characterizing this connection will transform high-rise and multistorey construction through the introduction of design guidance for moment-resisting blind-bolted connections, which is currently unavailable. This paper presents a review of the steps taken to develop metamodels, generated by means of artificial neural network algorithms, which predict the connection stress and stiffness from the design parameters when using Extended Hollo-Bolt blind bolts.
It also considers the failure modes and mechanisms that contribute to the deformability, as well as the feasibility of achieving rigid blind-bolted connections when using this fastener.
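A metamodel of this kind can be sketched with a small neural-network regressor; the design variables and the response surface below are synthetic placeholders, not the paper's experimental or finite element data:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
# Hypothetical design variables: bolt diameter (mm), tube wall thickness (mm),
# concrete strength (MPa) -- illustrative ranges only.
X = rng.uniform([16, 6, 30], [24, 12, 60], size=(200, 3))
# Synthetic "connection stiffness" response standing in for FE/test results
y = 0.5 * X[:, 0] + 1.2 * X[:, 1] + 0.3 * X[:, 2] + rng.normal(0, 0.5, 200)

# Train on 150 sampled designs, hold out 50 for validation
meta = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=0))
meta.fit(X[:150], y[:150])
score = meta.score(X[150:], y[150:])   # R^2 on held-out designs
```

A data-fusion variant would train on pooled numerical and experimental samples, possibly weighting the experimental points more heavily.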

Keywords: blind-bolted connections, concrete-filled tubular structures, finite element analysis, metamodeling

Procedia PDF Downloads 145
4368 Experimental and Analytical Studies for the Effect of Thickness and Axial Load on Load-Bearing Capacity of Fire-Damaged Concrete Walls

Authors: Yeo Kyeong Lee, Ji Yeon Kang, Eun Mi Ryu, Hee Sun Kim, Yeong Soo Shin

Abstract:

The objective of this paper is to investigate the effects of thickness and of axial loading during a fire test on the load-bearing capacity of a fire-damaged normal-strength concrete wall. Both factors influence the temperature distributions in the concrete members, which are mainly obtained through numerous experiments. Toward this goal, three wall specimens of different thicknesses were heated for 2 h according to the ISO-standard heating curve, and the temperature distributions through the thicknesses were measured using thermocouples. In addition, two wall specimens were heated for 2 h while simultaneously being subjected to a constant axial load at their top sections. The test results show that the temperature distribution during the fire test depends on the wall thickness and on the axial load applied during the test. After the fire tests, the specimens were cured for one month and then load-tested. The heated specimens were compared with three unheated specimens to investigate the residual load-bearing capacities. The fire-damaged walls showed a minor difference in load-bearing capacity with respect to the axial loading, whereas a significant difference became evident with respect to the wall thickness. To validate the experimental results, finite element models were generated in which the material properties obtained from the experiments were subjected to elevated temperatures, and the analytical results show sound agreement with the experimental results. The analytical method, validated through the experimental results, was then applied to model fire-damaged walls 2,800 mm high (a typical story height of residential buildings in Korea), considering the buckling effect. The models for the structural analyses were generated from the deformed shape after the thermal analysis. The load-bearing capacity of the fire-damaged walls with pin supports at both ends does not significantly depend on the wall thickness, owing to the restraint of the pinned ends.
The difference in the load-bearing capacity of the fire-damaged walls with respect to the axial load applied during the fire is within approximately 5%.

Keywords: normal-strength concrete wall, wall thickness, axial-load ratio, slenderness ratio, fire test, residual strength, finite element analysis

Procedia PDF Downloads 207
4367 Preoperative Smoking Cessation Audit: A Single Centre Experience from Metropolitan Melbourne

Authors: Ya-Chu May Tsai, Ibrahim Yacoub, Eoin Casey

Abstract:

The Australian and New Zealand College of Anaesthetists (ANZCA) advises that smoking should not be permitted within 12 hours of surgery. There is little information in the medical literature regarding patients' awareness of perioperative smoking cessation recommendations or their appreciation of how smoking might negatively impact their perioperative course. The aim of the study was to assess the prevalence of current smokers presenting to Werribee Mercy Hospital (WMH) and to evaluate whether pre-operative provision of both written and verbal advice was (1) effective in improving patient awareness of the benefits of pre-operative smoking cessation, and (2) associated with an increase in the number of elective surgical patients who stop smoking at least 12 hours pre-operatively. Methods: The initial survey included all patients who presented to WMH for elective surgical procedures from 19 to 30 September 2016, using a standardized questionnaire focused on patients' smoking history and their awareness of preoperative smoking cessation. The intervention consisted of a standard pre-operative phone call to all patients advising them of the increased perioperative risks associated with smoking and advising them to cease smoking at least 12 hours prior to surgery. In addition, written information on smoking cessation strategies was sent out by mail to all patients at least one week prior to the planned procedure date. The post-intervention questionnaire-based study was conducted on the day of the elective procedure from 10 to 21 October 2016 inclusive. The primary outcomes measured were patients' awareness of smoking cessation and the proportion of smokers who quit >12 hours pre-operatively, considered a clinically meaningful duration for reducing anaesthetic complications. Comparison of pre- and post-intervention results was made using SPSS 21.0. Results: In the pre-intervention group (n = 156), 36 (22.4%) patients were current smokers, 46 were ex-smokers (29.5%) and 74 were non-smokers (48.1%).
Of the smokers, 12 (33%) reported having been informed of smoking cessation prior to the operation and 8 (22%) were aware of the increased intra- and perioperative adverse events associated with smoking. In the post-intervention group (n = 177), 38 (21.5%) patients were current smokers, 39 were ex-smokers (22.0%) and 100 were non-smokers (56.5%). Of the smokers, 32 (88.9%) reported having been informed of smoking cessation prior to the operation and 35 (97.2%) reported being aware of the increased intra- and perioperative adverse events associated with smoking. The median time since the last cigarette was 5.5 hours (Q1-Q3 = 2-14) in the pre-intervention group compared with 13 hours (Q1-Q3 = 5-24) in the post-intervention group. Amongst the smokers, smoking cessation at least 12 hours prior to surgery increased significantly, from 27.8% pre-intervention to 52.6% post-intervention (P = 0.03). Conclusion: A standard preoperative phone call and written instruction on smoking cessation guidelines at the time of waitlist placement increased the preoperative smoking cessation rate almost two-fold.
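Reconstructing the counts from the reported percentages (10/36 quit pre-intervention vs. 20/38 post), a Pearson chi-square test without continuity correction reproduces a p-value near the reported 0.03; the choice of test is an assumption, since the abstract does not name it:

```python
from scipy.stats import chi2_contingency

# Rows: pre- and post-intervention smokers; columns: quit >= 12 h vs. did not
table = [[10, 36 - 10],
         [20, 38 - 20]]

# Pearson chi-square on the 2x2 table (no Yates continuity correction)
chi2, p, dof, expected = chi2_contingency(table, correction=False)

rate_pre, rate_post = 10 / 36, 20 / 38   # 27.8% -> 52.6%
```

With the Yates correction applied (scipy's default for 2x2 tables) the p-value rises to roughly 0.05, so the uncorrected test is the one consistent with the reported result.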

Keywords: anaesthesia, audit, perioperative medicine, smoking cessation

Procedia PDF Downloads 287
4366 Evaluating the Accuracy of Biologically Relevant Variables Generated by ClimateAP

Authors: Jing Jiang, Wenhuan XU, Lei Zhang, Shiyi Zhang, Tongli Wang

Abstract:

Climate data quality significantly affects the reliability of ecological modeling. In the Asia Pacific (AP) region, low-quality climate data hinder ecological modeling. ClimateAP, a software package developed in 2017, generates high-quality climate data for the AP region, benefiting researchers in forestry and agriculture; however, its adoption remains limited. This study aims to confirm the validity of the biologically relevant variable data generated by ClimateAP during the normal climate period through comparison with the currently available gridded data. Climate data from 2,366 weather stations were used to evaluate the prediction accuracy of ClimateAP in comparison with the commonly used gridded data from WorldClim 1.4. Univariate regressions were applied to 48 monthly biologically relevant variables, and the relationship between the observational data and the predictions made by ClimateAP and WorldClim was evaluated using the adjusted R-squared and the root mean squared error (RMSE). Locations were categorized into mountainous and flat landforms, considering elevation, slope, ruggedness, and the Topographic Position Index. Univariate regressions were then applied to all biologically relevant variables for each landform category. Random Forest (RF) models were implemented for the climatic niche modeling of Cunninghamia lanceolata, and a comparative analysis of the prediction accuracies of RF models constructed with distinct climate data sources was conducted to evaluate their relative effectiveness. Biologically relevant variables were obtained from three unpublished Chinese meteorological datasets. ClimateAP v3.0 and WorldClim predictions were obtained from weather station coordinates and WorldClim 1.4 rasters, respectively, for the normal climate period of 1961-1990. Occurrence data for Cunninghamia lanceolata came from integrated biodiversity databases with 3,745 unique points.
ClimateAP explains a minimum of 94.74%, 97.77%, 96.89%, and 94.40% of monthly maximum, minimum, average temperature, and precipitation variances, respectively. It outperforms WorldClim in 37 biologically relevant variables with lower RMSE values. ClimateAP achieves higher R-squared values for the 12 monthly minimum temperature variables and consistently higher Adjusted R-squared values across all landforms for precipitation. ClimateAP's temperature data yields lower Adjusted R-squared values than gridded data in high-elevation, rugged, and mountainous areas but achieves higher values in mid-slope drainages, plains, open slopes, and upper slopes. Using ClimateAP improves the prediction accuracy of tree occurrence from 77.90% to 82.77%. The biologically relevant climate data produced by ClimateAP is validated based on evaluations using observations from weather stations. The use of ClimateAP leads to an improvement in data quality, especially in non-mountainous regions. The results also suggest that using biologically relevant variables generated by ClimateAP can slightly enhance climatic niche modeling for tree species, offering a better understanding of tree species adaptation and resilience compared to using gridded data.
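The climatic niche modeling step can be sketched with a Random Forest classifier on synthetic stand-ins for the biologically relevant variables (the real models used ClimateAP/WorldClim predictors at the Cunninghamia lanceolata presence/absence points):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Synthetic climate predictors (e.g. monthly Tmax, Tmin, Tave, precipitation)
X = rng.normal(size=(n, 4))
# Synthetic occurrence labels tied to two of the predictors (invented link)
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.5, n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Held-out accuracy, analogous to the reported 77.90% vs. 82.77% comparison
accuracy = rf.score(X_te, y_te)
```

Comparing two such models that differ only in their climate data source is what isolates the contribution of the ClimateAP variables.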

Keywords: climate data validation, data quality, Asia pacific climate, climatic niche modeling, random forest models, tree species

Procedia PDF Downloads 53
4365 Automated Adaptations of Semantic User- and Service-Profile Representations by Learning the User Context

Authors: Nicole Merkle, Stefan Zander

Abstract:

Ambient Assisted Living (AAL) describes a technological and methodological stack (e.g. formal model-theoretic semantics, rule-based reasoning and machine learning) for capturing different aspects of the behavior, activities and characteristics of humans. Hence, a semantic representation of the user environment and its relevant elements is required in order to allow assistive agents to recognize situations and deduce appropriate actions. Furthermore, the user and his/her characteristics (e.g. physical, cognitive, preferences) need to be represented with a high degree of expressiveness in order to allow software agents a precise evaluation of the user's context models. The correct interpretation of these context models highly depends on temporal and spatial circumstances as well as individual user preferences. In most AAL approaches, model representations of real-world situations represent the current state of a universe of discourse at a given point in time, neglecting transitions between states. The AAL domain currently lacks approaches that address the dynamic adaptation of context-related representations. Semantic representations of relevant real-world excerpts (e.g. user activities) help cognitive, rule-based agents to reason and make decisions in order to assist users in appropriate tasks and situations. However, rules and reasoning on semantic models are not sufficient for handling uncertainty and fuzzy situations. A given situation can require different (re-)actions in order to achieve the best result with respect to the user and his/her needs. But what is the best result? To answer this question, we need to consider that every smart agent has an objective to achieve, but this objective is mostly defined by domain experts, who can also fail in their estimation of what is desired by the user and what is not.
Hence, a smart agent has to be able to learn from context history data and estimate or predict what is most likely in certain contexts. Furthermore, different agents with contrary objectives can cause collisions as their actions influence the user’s context and constituting conditions in unintended or uncontrolled ways. We present an approach for dynamically updating a semantic model with respect to the current user context that allows flexibility of the software agents and enhances their conformance in order to improve the user experience. The presented approach adapts rules by learning sensor evidence and user actions using probabilistic reasoning approaches, based on given expert knowledge. The semantic domain model consists basically of device-, service- and user profile representations. In this paper, we present how this semantic domain model can be used in order to compute the probability of matching rules and actions. We apply this probability estimation to compare the current domain model representation with the computed one in order to adapt the formal semantic representation. Our approach aims at minimizing the likelihood of unintended interferences in order to eliminate conflicts and unpredictable side-effects by updating pre-defined expert knowledge according to the most probable context representation. This enables agents to adapt to dynamic changes in the environment which enhances the provision of adequate assistance and affects positively the user satisfaction.

Keywords: ambient intelligence, machine learning, semantic web, software agents

Procedia PDF Downloads 265
4364 Prenatal Care Can Reduce the Burden of Preterm Birth and Low Birthweight from Maternal Sexually Transmitted Infections: US National Data

Authors: Anthony J. Kondracki, Bonzo I. Reddick, Jennifer L. Barkin

Abstract:

We sought to examine the association of maternal Chlamydia trachomatis (CT), Neisseria gonorrhoeae (NG), and Treponema pallidum (TP) (syphilis) infections with preterm birth (PTB) (<37 weeks gestation), low birth weight (LBW) (<2500 grams) and prenatal care (PNC) attendance. This cross-sectional study was based on data drawn from the 2020 United States National Center for Health Statistics (NCHS) Natality File. We estimated the prevalence of all births, early/late PTBs, moderately/very LBW, and the distribution of sexually transmitted infections (STIs) according to maternal characteristics in the sample. In multivariable logistic regression models, we examined adjusted odds ratios (aORs) and their corresponding 95% confidence intervals (CIs) of PTB and LBW subcategories in association with maternal/infant characteristics, PNC status, and maternal CT, NG, and TP infections. In separate logistic regression models, we assessed the risk of these newborn outcomes stratified by PNC status. Adjustments were made for race/ethnicity, age, education, marital status, health insurance, liveborn parity, previous preterm birth, gestational hypertension, gestational diabetes, PNC status, smoking, and infant sex. Additionally, in a sensitivity analysis, we assessed the association with early, full, and late term births and the potential impact of unmeasured confounding using the E-value. CT (1.8%) was the most prevalent STI in pregnancy, followed by NG (0.3%) and TP (0.1%). Non-Hispanic Black women, 20-24 years old, with a high school education, and on Medicaid had the highest rate of STIs. Around 96.6% of women reported receiving PNC, and about 60.0% initiated PNC early in pregnancy. PTB and LBW were strongly associated with NG infection (12.2% and 12.1%, respectively), late initiation/no PNC (8.5% and 7.6%, respectively), and ≤10 prenatal visits received (13.1% and 10.3%, respectively). 
The odds of PTB and LBW were 2.5- to 3-fold higher for each STI among women who received ≤10 prenatal visits than among those who received >10 visits. Adequate prenatal care utilization and timely screening and treatment of maternal STIs can substantially reduce the burden of adverse newborn outcomes.
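The E-value sensitivity analysis mentioned in the abstract has a closed-form expression for a risk ratio (and, for rare outcomes, approximately for an odds ratio). The sketch below is an illustration of that standard VanderWeele–Ding formula, not the authors' code; the example odds ratio is hypothetical.

```python
import math

def e_value(rr: float) -> float:
    """E-value for a risk ratio.

    The E-value is the minimum strength of association, on the risk-ratio
    scale, that an unmeasured confounder would need to have with both the
    exposure and the outcome to fully explain away the observed association.
    """
    if rr < 1.0:            # for protective effects, invert first
        rr = 1.0 / rr
    return rr + math.sqrt(rr * (rr - 1.0))

# Example: a hypothetical observed odds ratio of 2.5
# (rare outcome assumed, so OR approximates RR)
print(round(e_value(2.5), 2))  # 4.44
```

A null association (RR = 1) yields an E-value of 1, i.e. no confounding is needed to explain it away.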

Keywords: low birthweight, prenatal care, preterm birth, sexually transmitted infections

Procedia PDF Downloads 162
4363 Parameter Estimation via Metamodeling

Authors: Sergio Haram Sarmiento, Arcady Ponosov

Abstract:

Based on appropriate multivariate statistical methodology, we suggest a generic framework for efficient parameter estimation for ordinary differential equations and the corresponding nonlinear models. In this framework, the classical linear regression strategy is refined into a nonlinear regression by a locally linear modelling technique (known as metamodelling). The approach identifies those latent variables of the given model that accumulate most information about it among all approximations of the same dimension. The method is applied to several benchmark problems, in particular, to the so-called "power-law systems", non-linear differential equations typically used in Biochemical Systems Theory.
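The core metamodelling step, finding the few latent directions that accumulate most of the model's information, can be illustrated with a principal component analysis of simulated model outputs. The sketch below is a hypothetical toy example (the power-law response, parameter ranges and sample sizes are invented for illustration), not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(p):
    # Toy "power-law" style response, a stand-in for an ODE solution
    t = np.linspace(0.1, 1.0, 50)
    return p[0] * t ** p[1]

P = rng.uniform(0.5, 2.0, size=(200, 2))           # sampled parameter sets
Y = np.array([model(p) for p in P])                # simulated model outputs
Yc = Y - Y.mean(axis=0)                            # center the outputs
U, s, Vt = np.linalg.svd(Yc, full_matrices=False)  # PCA via SVD
explained = s**2 / np.sum(s**2)                    # variance per component

# With only two parameters, the outputs lie near a low-dimensional
# manifold; a handful of components capture nearly all the variation.
print(f"variance explained by first 3 PCs: {explained[:3].sum():.4f}")
```

Estimation would then proceed by regressing in the space of these leading latent variables rather than on the full output.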

Keywords: principal component analysis, generalized law of mass action, parameter estimation, metamodels

Procedia PDF Downloads 495
4362 Behavior Loss Aversion Experimental Laboratory of Financial Investments

Authors: Jihene Jebeniani

Abstract:

We proposed an approach combining both the techniques of experimental economics and the flexibility of discrete choice models in order to test loss aversion. Our main objective was to test the loss aversion of Cumulative Prospect Theory (CPT). We developed a laboratory experiment in the context of financial investments that aimed to analyze the investors' attitude towards risk. The study uses lotteries and is based on econometric modeling. The estimated model was the ordered probit.
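The ordered probit the authors estimate models an ordered categorical response through a latent normal variable partitioned by cut points. A minimal sketch of the category probabilities, with hypothetical coefficients and cut points (illustration only, not the paper's estimates):

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def ordered_probit_probs(eta, cuts):
    """Category probabilities of an ordered probit for linear index eta.

    P(y = k) = Phi(cut_k - eta) - Phi(cut_{k-1} - eta),
    with cut_0 = -inf and cut_K = +inf.
    """
    bounds = [-math.inf] + list(cuts) + [math.inf]
    return [phi(bounds[k + 1] - eta) - phi(bounds[k] - eta)
            for k in range(len(bounds) - 1)]

# Hypothetical 3-category response (e.g. reject / indifferent / accept
# a lottery), with an invented linear index and cut points
probs = ordered_probit_probs(eta=0.3, cuts=[-0.5, 0.8])
print([round(p, 3) for p in probs])
```

Maximum likelihood estimation would fit the coefficients in eta and the cut points jointly by maximizing the log of these probabilities over observed choices.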

Keywords: risk aversion, behavioral finance, experimental economic, lotteries, cumulative prospect theory

Procedia PDF Downloads 454
4361 Evaluation of a Remanufacturing for Lithium Ion Batteries from Electric Cars

Authors: Achim Kampker, Heiner H. Heimes, Mathias Ordung, Christoph Lienemann, Ansgar Hollah, Nemanja Sarovic

Abstract:

Electric cars with their fast innovation cycles and their disruptive character offer a high degree of freedom regarding innovative design for remanufacturing. Remanufacturing increases not only the resource but also the economic efficiency by a prolonged product life time. The reduced power train wear of electric cars combined with high manufacturing costs for batteries allows new business models and even second-life applications. Modular battery packs designed for interchangeability enable the replacement of defective or outdated battery cells, allow additional cost savings and prolong life time. This paper discusses opportunities for future remanufacturing value chains of electric cars and their battery components and how to address their potentials with elaborate designs. Based on a brief overview of implemented remanufacturing structures in different industries, opportunities of transferability are evaluated. In addition to an analysis of current and upcoming challenges, promising perspectives for a sustainable electric car circular economy enabled by design for remanufacturing are deduced. Two mathematical models describe the feasibility of pursuing a circular economy of lithium ion batteries and evaluate remanufacturing in terms of sustainability and economic efficiency. Taking into consideration not only labor and material cost but also capital costs for equipment and factory facilities to support the remanufacturing process, cost-benefit analyses indicate that a remanufactured battery can be produced more cost-efficiently. The ecological benefits were calculated on a broad database from different research projects which focus on the recycling, the second use and the assembly of lithium ion batteries. The results of these calculations show a significant improvement through remanufacturing in all relevant factors, especially in the consumption of resources and greenhouse warming potential. 
Suitable design guidelines for future remanufacturing of lithium ion batteries, which consider modularity, interfaces and disassembly, are used as examples to illustrate the findings. For one guideline, potential cost improvements were calculated and upcoming challenges are pointed out.

Keywords: circular economy, electric mobility, lithium ion batteries, remanufacturing

Procedia PDF Downloads 342
4360 Climate Related Financial Risk on Automobile Industry and the Impact to the Financial Institutions

Authors: Mahalakshmi Vivekanandan S.

Abstract:

As per the recent changes happening in the global policies, climate-related changes and the impact it causes across every sector are viewed as green swan events – in essence, climate-related changes can often happen and lead to risk and a lot of uncertainty, but needs to be mitigated instead of considering them as black swan events. This brings about a question on how this risk can be computed so that the financial institutions can plan to mitigate it. Climate-related changes impact all risk types – credit risk, market risk, operational risk, liquidity risk, reputational risk and other risk types. And the models required to compute this has to consider the different industrial needs of the counterparty, as well as the factors that are contributing to this – be it in the form of different risk drivers, or the different transmission channels or the different approaches and the granular form of data availability. This brings out the suggestion that the climate-related changes, though it affects Pillar I risks, will be a Pillar II risk. This has to be modeled specifically based on the financial institution’s actual exposure to different industries instead of generalizing the risk charge. And this will have to be considered as the additional capital to be met by the financial institution in addition to their Pillar I risks, as well as the existing Pillar II risks. In this paper, the author presents a risk assessment framework to model and assess climate change risks - for both credit and market risks. This framework helps in assessing the different scenarios and how the different transition risks affect the risk associated with the different parties. This research paper delves into the topic of the increase in the concentration of greenhouse gases that in turn cause global warming. 
It then considers the various scenarios of having the different risk drivers impacting the Credit and market risk of an institution by understanding the transmission channels and also considering the transition risk. The paper then focuses on the industry that’s fast seeing a disruption: the automobile industry. The paper uses the framework to show how the climate changes and the change to the relevant policies have impacted the entire financial institution. Appropriate statistical models for forecasting, anomaly detection and scenario modeling are built to demonstrate how the framework can be used by the relevant agencies to understand their financial risks. The paper also focuses on the climate risk calculation for the Pillar II Capital calculations and how it will make sense for the bank to maintain this in addition to their regular Pillar I and Pillar II capital.

Keywords: capital calculation, climate risk, credit risk, pillar ii risk, scenario modeling

Procedia PDF Downloads 122
4359 Droplet Entrainment and Deposition in Horizontal Stratified Two-Phase Flow

Authors: Joshua Kim Schimpf, Kyun Doo Kim, Jaseok Heo

Abstract:

In this study, droplet behavior under the horizontal stratified flow regime is examined for air and water flow in horizontal pipe experiments with pipe diameters of 0.24 m, 0.095 m, and 0.0486 m. The effects of gravity, pipe diameter, and turbulent diffusion on droplet deposition are considered. Models for droplet entrainment and deposition are proposed that consider the developing length. Validation against experimental data from the REGARD (CEA) and Williams (University of Illinois) experiments was performed using SPACE (Safety and Performance Analysis Code for Nuclear Power Plants).

Keywords: droplet, entrainment, deposition, horizontal

Procedia PDF Downloads 362
4358 Numerical Modeling of the Depth-Averaged Flow over a Hill

Authors: Anna Avramenko, Heikki Haario

Abstract:

This paper reports the development and application of a 2D depth-averaged model. The main goal of this contribution is to apply the depth-averaged equations to a wind park model in which the geometry is introduced into the mathematical model through mass and momentum source terms. The depth-averaged model will be used in the future to find the optimal position of wind turbines in the wind park. The k-ε and 2D LES turbulence models were considered in this article. 2D CFD simulations for a single hill were performed to check the depth-averaged model in practice.

Keywords: depth-averaged equations, numerical modeling, CFD, wind park model

Procedia PDF Downloads 588
4357 Interfacing and Replication of Electronic Machinery Using MATLAB/SIMULINK

Authors: Abdulatif Abdulsalam, Mohamed Shaban

Abstract:

This paper introduces the interfacing and simulation of electronic machinery based on the MATLAB/SIMULINK simulation package. Simulated components include dc-dc converters, power factor rectifiers, induction machines, dc machines, synchronous machines, and more complete systems. The power factor rectifier model includes solid-state device models. The tools provide a clear-cut structure for the simulation of complex dynamic systems connected with power electronic machines.

Keywords: power electronics, machine, MATLAB, simulink

Procedia PDF Downloads 335
4356 Characterization of Aerosol Particles in Ilorin, Nigeria: Ground-Based Measurement Approach

Authors: Razaq A. Olaitan, Ayansina Ayanlade

Abstract:

Understanding aerosol properties is the main goal of global research in order to lower the uncertainty associated with climate change in the trends and magnitude of aerosol particles. In order to identify aerosol particle types, optical properties, and the relationship between aerosol properties and particle concentration between 2019 and 2021, a study conducted in Ilorin, Nigeria, examined data from a ground-based sun/sky scanning radiometer of the Aerosol Robotic Network (AERONET). The AERONET version 2 algorithm was utilized to retrieve monthly data on aerosol optical depth and the Angstrom exponent. The version 3 algorithm, an almucantar level 2 inversion, was employed to retrieve daily data on single scattering albedo and aerosol size distribution. Excel 2016 was used to analyze the data's monthly, seasonal, and annual mean values. The distribution of different types of aerosols was analyzed using scatterplots, and the optical properties of the aerosols were investigated using pertinent mathematical theorems. Correlation statistics were employed to understand the relationships between particle concentration and properties. Based on the premise that aerosol characteristics must remain consistent in both magnitude and trend across time and space, the study's findings indicate that the types of aerosols identified between 2019 and 2021 are as follows: 29.22% urban industrial (UI), 37.08% desert (D), 10.67% biomass burning (BB), and 23.03% urban mix (Um). Convective wind systems, which frequently carry particles as they blow over long distances in the atmosphere, were responsible for the peak columnar aerosol loadings observed during August of the study period. The study has shown that while coarse mode particles dominate, fine particles are increasing in seasonal and annual trends. These trends are linked to biomass burning and human activities in the city. 
The study found that the majority of particles are highly absorbing black carbon, with the fine mode having a volume median radius of 0.08 to 0.12 µm. The investigation also revealed a positive coefficient of correlation (r = 0.57) between changes in aerosol particle concentration and changes in aerosol properties. Human activity is rapidly increasing in Ilorin, causing changes in aerosol properties, indicating potential health risks from climate change and human influence on geological and environmental systems.
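The reported coefficient of correlation (r = 0.57) is a standard Pearson correlation between paired series. A minimal sketch of the computation, using invented illustrative numbers rather than the study's data:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)

# Hypothetical paired observations of particle concentration vs. an
# aerosol optical property (illustration only)
conc = [10, 12, 15, 18, 20, 25]
prop = [0.30, 0.33, 0.31, 0.38, 0.40, 0.42]
print(round(pearson_r(conc, prop), 2))
```

A value near +1 indicates the two series rise together; r = 0.57, as in the study, indicates a moderate positive linear association.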

Keywords: aerosol loading, aerosol types, health risks, optical properties

Procedia PDF Downloads 39
4355 The Use of Artificial Intelligence in Diagnosis of Mastitis in Cows

Authors: Djeddi Khaled, Houssou Hind, Miloudi Abdellatif, Rabah Siham

Abstract:

In the field of veterinary medicine, there is a growing application of artificial intelligence (AI) for diagnosing bovine mastitis, a prevalent inflammatory disease in dairy cattle. AI technologies, such as automated milking systems, have streamlined the assessment of key metrics crucial for managing cow health during milking and identifying prevalent diseases, including mastitis. These automated milking systems empower farmers to implement automatic mastitis detection by analyzing indicators like milk yield, electrical conductivity, fat, protein, lactose, blood content in the milk, and milk flow rate. Furthermore, reports highlight the integration of somatic cell count (SCC), thermal infrared thermography, and diverse systems utilizing statistical models and machine learning techniques, including artificial neural networks, to enhance the overall efficiency and accuracy of mastitis detection. According to a review of 15 publications, machine learning technology can predict the risk and detect mastitis in cattle with an accuracy ranging from 87.62% to 98.10% and sensitivity and specificity ranging from 84.62% to 99.4% and 81.25% to 98.8%, respectively. Additionally, machine learning algorithms and microarray meta-analysis are utilized to identify mastitis genes in dairy cattle, providing insights into the underlying functional modules of mastitis disease. Moreover, AI applications can assist in developing predictive models that anticipate the likelihood of mastitis outbreaks based on factors such as environmental conditions, herd management practices, and animal health history. This proactive approach supports farmers in implementing preventive measures and optimizing herd health. By harnessing the power of artificial intelligence, the diagnosis of bovine mastitis can be significantly improved, enabling more effective management strategies and ultimately enhancing the health and productivity of dairy cattle. 
The integration of artificial intelligence presents valuable opportunities for the precise and early detection of mastitis, providing substantial benefits to the dairy industry.
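The accuracy, sensitivity and specificity figures cited in the review are all derived from a 2x2 confusion matrix of detector decisions against true mastitis status. A minimal sketch with hypothetical counts (not data from any of the 15 publications):

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity and specificity from a 2x2 confusion matrix."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),   # share of mastitis cases detected
        "specificity": tn / (tn + fp),   # share of healthy cows cleared
    }

# Hypothetical detector results over 1000 milkings (illustrative only)
m = classification_metrics(tp=180, fp=25, tn=775, fn=20)
print({k: round(v, 3) for k, v in m.items()})
# accuracy 0.955, sensitivity 0.900, specificity 0.969
```

The ranges quoted in the abstract (accuracy 87.62–98.10%, sensitivity 84.62–99.4%, specificity 81.25–98.8%) are exactly these quantities computed per study.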

Keywords: artificial intelligence, automatic milking system, cattle, machine learning, mastitis

Procedia PDF Downloads 43
4354 Reproductive Biology and Lipid Content of Albacore Tuna (Thunnus alalunga) in the Western Indian Ocean

Authors: Zahirah Dhurmeea, Iker Zudaire, Heidi Pethybridge, Emmanuel Chassot, Maria Cedras, Natacha Nikolic, Jerome Bourjea, Wendy West, Chandani Appadoo, Nathalie Bodin

Abstract:

Scientific advice on the status of fish stocks relies on indicators that are based on strong assumptions on biological parameters such as condition, maturity and fecundity. Currently, information on the biology of albacore tuna, Thunnus alalunga, in the Indian Ocean is scarce. Consequently, many parameters used in stock assessment models for Indian Ocean albacore originate largely from other studied stocks or species of tuna. Inclusion of incorrect biological data in stock assessment models would lead to inappropriate estimates of stock status used by fisheries managers to establish future catch allowances. The reproductive biology of albacore tuna in the western Indian Ocean was examined through analysis of the sex ratio, spawning season, length-at-maturity (L50), spawning frequency, fecundity and fish condition. In addition, the total lipid content (TL) and lipid class composition in the gonads, liver and muscle tissues of female albacore during the reproductive cycle were investigated. A total of 923 female and 867 male albacore were sampled from 2013 to 2015. A bias in sex ratio was found in favour of females with fork length (LF) <100 cm. Using histological analyses and the gonadosomatic index, spawning was found to occur between 10°S and 30°S, mainly to the east of Madagascar from October to January. Large females contributed more to reproduction through their longer spawning period compared to small individuals. The L50 (mean ± standard error) of female albacore was estimated at 85.3 ± 0.7 cm LF at the vitellogenic 3 oocyte stage maturity threshold. Albacore spawn on average every 2.2 days within the spawning region during the spawning months from November to January. Batch fecundity varied between 0.26 and 2.09 million eggs, and the relative batch fecundity (mean ± standard deviation) was estimated at 53.4 ± 23.2 oocytes g-1 of somatic-gutted weight. 
Depending on the maturity stage, TL in ovaries ranged from 7.5 to 577.8 mg g-1 of wet weight (ww), with different proportions of phospholipids (PL), wax esters (WE), triacylglycerol (TAG) and sterol (ST). The highest TL values were observed in immature (mostly TAG and PL) and spawning-capable ovaries (mostly PL, WE and TAG). Liver TL varied from 21.1 to 294.8 mg g-1 (ww) and acted as an energy store (mainly TAG and PL) prior to reproduction, when the lowest TL was observed. Muscle TL varied from 2.0 to 71.7 mg g-1 (ww) in mature females without a clear pattern between maturity stages, although higher values of up to 117.3 mg g-1 (ww) were found in immature females. TL results suggest that albacore could be viewed predominantly as a capital breeder relying mostly on lipids stored before the onset of reproduction, with little additional energy derived from feeding. This study is the first to provide new information on the reproductive development and classification of albacore in the western Indian Ocean. The reproductive parameters will reduce uncertainty in current stock assessment models, which will eventually promote sustainability of the fishery.

Keywords: condition, size-at-maturity, spawning behaviour, temperate tuna, total lipid content

Procedia PDF Downloads 244
4353 Predicting Subsurface Abnormalities Growth Using Physics-Informed Neural Networks

Authors: Mehrdad Shafiei Dizaji, Hoda Azari

Abstract:

The research explores the pioneering integration of Physics-Informed Neural Networks (PINNs) into the domain of Ground-Penetrating Radar (GPR) data prediction, akin to advancements in medical imaging for tracking tumor progression in the human body. This research presents a detailed development framework for a specialized PINN model proficient at interpreting and forecasting GPR data, much like how medical imaging models predict tumor behavior. By harnessing the synergy between deep learning algorithms and the physical laws governing subsurface structures—or, in medical terms, human tissues—the model effectively embeds the physics of electromagnetic wave propagation into its architecture. This ensures that predictions not only align with fundamental physical principles but also mirror the precision needed in medical diagnostics for detecting and monitoring tumors. The suggested deep learning structure comprises three components: a CNN, a spatial feature channel attention (SFCA) mechanism, and ConvLSTM, along with temporal feature frame attention (TFFA) modules. The attention mechanism computes channel attention and temporal attention weights using self-adaptation, thereby fine-tuning the visual and temporal feature responses to extract the most pertinent and significant visual and temporal features. By integrating physics directly into the neural network, our model has shown enhanced accuracy in forecasting GPR data. This improvement is vital for conducting effective assessments of bridge deck conditions and other evaluations related to civil infrastructure. The use of Physics-Informed Neural Networks (PINNs) has demonstrated the potential to transform the field of Non-Destructive Evaluation (NDE) by enhancing the precision of infrastructure deterioration predictions. Moreover, it offers a deeper insight into the fundamental mechanisms of deterioration, viewed through the prism of physics-based models.
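The defining ingredient of a PINN is a training loss that combines a data-fit term with a residual of the governing physics. As a hypothetical, simplified stand-in for the electromagnetic wave propagation embedded in the authors' model, the sketch below penalizes a 1D wave-equation residual computed by finite differences on a gridded field (all names and the toy field are invented for illustration):

```python
import numpy as np

def physics_informed_loss(u_pred, u_obs, dx, dt, c, weight=1.0):
    """Data-fit loss plus a penalty on the 1D wave-equation residual.

    u_pred, u_obs: 2D arrays of shape (n_t, n_x) (time x space).
    The residual u_tt - c^2 * u_xx is evaluated with second-order central
    differences on interior points, mimicking how a PINN embeds wave
    physics in its training objective.
    """
    data_loss = np.mean((u_pred - u_obs) ** 2)
    u_tt = (u_pred[2:, 1:-1] - 2 * u_pred[1:-1, 1:-1] + u_pred[:-2, 1:-1]) / dt**2
    u_xx = (u_pred[1:-1, 2:] - 2 * u_pred[1:-1, 1:-1] + u_pred[1:-1, :-2]) / dx**2
    physics_loss = np.mean((u_tt - c**2 * u_xx) ** 2)
    return data_loss + weight * physics_loss

# A travelling wave u = sin(x - c t) satisfies the wave equation exactly,
# so a perfect prediction incurs (numerically) near-zero total loss.
c, dx, dt = 1.0, 0.01, 0.01
x = np.arange(0, 1, dx)
t = np.arange(0, 1, dt)
U = np.sin(x[None, :] - c * t[:, None])
print(physics_informed_loss(U, U, dx, dt, c) < 1e-4)
```

In a real PINN the residual is obtained by automatic differentiation of the network rather than finite differences, and both terms are minimized jointly during training.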

Keywords: physics-informed neural networks, deep learning, ground-penetrating radar (GPR), NDE, ConvLSTM, physics, data driven

Procedia PDF Downloads 5
4352 Analytical Investigation of Modeling and Simulation of Different Combinations of Sinusoidal Supplied Autotransformer under Linear Loading Conditions

Authors: M. Salih Taci, N. Tayebi, I. Bozkır

Abstract:

This paper investigates the operation of a sinusoidal supplied autotransformer on the different states of magnetic polarity of primary and secondary terminals for four different step-up and step-down analytical conditions. In this paper, a new analytical modeling and equations for dot-marked and polarity-based step-up and step-down autotransformer are presented. These models are validated by the simulation of current and voltage waveforms for each state. PSpice environment was used for simulation.

Keywords: autotransformer modeling, autotransformer simulation, step-up autotransformer, step-down autotransformer, polarity

Procedia PDF Downloads 293
4351 The Competitiveness of Small and Medium Sized Enterprises: Digital Transformation of Business Models

Authors: Chante Van Tonder, Bart Bossink, Chris Schachtebeck, Cecile Nieuwenhuizen

Abstract:

Small and Medium-Sized Enterprises (SMEs) play a key role in national economies around the world, being contributors to economic and social well-being. Due to this, the success, growth and competitiveness of SMEs are critical. However, there are many factors that undermine this, such as resource constraints, poor information communication infrastructure (ICT), skills shortages and poor management. The Fourth Industrial Revolution offers new tools and opportunities such as digital transformation and business model innovation (BMI) to the SME sector to enhance its competitiveness. Adopting and leveraging digital technologies such as cloud, mobile technologies, big data and analytics can significantly improve business efficiencies, value proposition and customer experiences. Digital transformation can contribute to the growth and competitiveness of SMEs. However, SMEs are lagging behind in the participation of digital transformation. Extant research lacks conceptual and empirical research on how digital transformation drives BMI and the impact it has on the growth and competitiveness of SMEs. The purpose of the study is, therefore, to close this gap by developing and empirically validating a conceptual model to determine if SMEs are achieving BMI through digital transformation and how this is impacting the growth, competitiveness and overall business performance. An empirical study is being conducted on 300 SMEs, consisting of 150 South-African and 150 Dutch SMEs, to achieve this purpose. Structural equation modeling is used, since it is a multivariate statistical analysis technique that is used to analyse structural relationships and is a suitable research method to test the hypotheses in the model. Empirical research is needed to gather more insight into how and if SMEs are digitally transformed and how BMI can be driven through digital transformation. The findings of this study can be used by SME business owners, managers and employees at all levels. 
The findings will indicate whether digital transformation can indeed impact the growth, competitiveness and overall performance of an SME, reiterating the importance and potential benefits of adopting digital technologies. In addition, the findings will also show how BMI can be achieved in light of digital transformation. This study contributes to the body of knowledge on a highly relevant and important topic in management studies by analysing the impact of digital transformation on BMI in a large number of SMEs that are distinctly different in economic and cultural factors.

Keywords: business models, business model innovation, digital transformation, SMEs

Procedia PDF Downloads 223
4350 Vitex agnus-castus Anti-Inflammatory, Antioxidants Characters and Anti-Tumor Effect in Ehrlich Ascites Carcinoma Model

Authors: Abeer Y. Ibrahim, Faten M. Ibrahim, Samah A. El-Newary, Saber F. Hendawy

Abstract:

Objective: The aim of this study is to evaluate the in-vitro anti-inflammatory and antioxidant characters of Vitex agnus-castus berries alcoholic extract and its fractions, as well as the in-vivo antitumor ability of the alcoholic extract and chloroform fraction against Ehrlich ascites carcinoma. Material and methods: Antioxidant properties of the crude alcoholic extract of vitex berries, as well as its petroleum ether, chloroform, ethyl acetate and butanol fractions, were evaluated in in-vitro assessments, as compared with the standard materials l-ascorbic acid (vitamin C) and butylated hydroxytoluene (BHT). The anti-inflammatory activity was investigated in cyclooxygenase (COX)-1 and COX-2 inhibition assays. Moreover, the in-vivo antitumor effect of vitex berries alcoholic and chloroform extracts was evaluated using the Ehrlich ascites carcinoma model. Data were presented as mean ± SE and analyzed by a one-way analysis of variance test. Results and conclusion: The berries crude extract showed potent antioxidant activity, followed by its ethyl acetate and chloroform fractions, as compared with the standards (vitamin C and BHT). The ethyl acetate fraction showed good reduction capability, metal ion chelation, hydrogen peroxide scavenging, nitric oxide scavenging and superoxide anion scavenging. Meanwhile, the chloroform fraction produced the highest free radical scavenging activity and total antioxidant capacity. Regarding lipid peroxidation inhibition, the crude alcoholic extract and its fractions showed weak inhibition compared with the standard materials. In terms of anti-inflammatory activity, the chloroform fraction of V. agnus-castus berries was the best COX-2 inhibitor (IC₅₀, 135.41 µg/ml) as compared to the vitex alcoholic extract or ethyl acetate fraction, with a weak inhibitory effect on COX-1 (IC₅₀, 778.432 µg/ml); the lowest effect on COX-1 was recorded with the alcoholic extract. 
The alcoholic extract and its fractions showed weak COX-1 inhibition activity, whereas COX-2 was inhibited (100%), compared with the drug celecoxib (72% at 1000 ppm). The crude alcoholic and chloroform extracts of V. agnus-castus berries significantly reduced the viable Ehrlich cell count and increased the nonviable count, with amelioration of all hematological parameters. This amelioration was reflected in an increased median survival time and a significant increase (P < 0.05) in lifespan.

Keywords: anti-inflammatory, antioxidants, ehrlich ascites carcinoma, Vitex agnus-castus

Procedia PDF Downloads 126