Search results for: Markov deterioration models

5864 Comparing Business Excellence Models Using Quantitative Methods: A First Step

Authors: Mohammed Alanazi, Dimitrios Tsagdis

Abstract:

Established Business Excellence Models (BEMs), like the Malcolm Baldrige National Quality Award (MBNQA) model and the European Foundation for Quality Management (EFQM) model, have been adopted by firms all over the world. They exist alongside more recent country-specific BEMs, e.g. the Australian, Canadian, Chinese, New Zealand, Singaporean, and Taiwanese quality awards, which, although not as widespread as the MBNQA and EFQM, nonetheless have strong national followings. Regardless of any differences in their following or prestige, the emergence and development of all BEMs have been shaped both by their local context (e.g. underlying socio-economic dynamics) and by global best practices. Besides such similarities, which render them objects (i.e. models) of the same class (i.e. BEMs), BEMs exhibit non-trivial differences in their criteria, relations, and emphasis. Given the evolution of BEMs (e.g. the MBNQA has undergone seven revisions since its inception in 1987 and the EFQM five since 1993), it is unsurprising that comparative studies of their validity are few and far between. This poses challenges for practitioners and policy makers alike, as it is not always clear which BEM is to be preferred or fits a particular context better, especially in contexts that differ substantially from the original context of BEM development. This paper aims to fill this gap by presenting a research design and measurement model for comparing BEMs using quantitative methods (e.g. structural equation modelling). Three BEMs are focused upon for illustration purposes: the MBNQA, the EFQM, and the King Abdul Aziz Quality Award (KAQA) model. They have been selected so as to reflect the two established and widely spread traditions as well as a more recent, context-specific arrival promising a better fit.

Keywords: Baldrige, business excellence, European Foundation for Quality Management, Structural Equation Model, total quality management

Procedia PDF Downloads 237
5863 Ratio of Omega-6/Omega-3 Fatty Acids in Spelt and Flaxseed Pasta

Authors: Jelena Filipovic, Milenko Kosutic

Abstract:

The dynamic modern way of life creates pressure to simplify and shorten the preparation of healthy, quick, cheap and safe meals, and spelt pasta meets most of these goals. Unlike bread, pasta can be stored for a long time without deterioration in flavour, odour or usability and without losing quality. This paper deals with the chemical composition and fatty acid content of flaxseed and spelt flour. The ratio of the essential fatty acids ω-6/ω-3 is also analysed in spelt pasta and in pasta with 0%, 10% and 20% flaxseed flour. Gas chromatography with mass spectrometry is used to carry out a quantitative analysis of the liposoluble extracts of flaxseed flour, spelt flour and pasta. Flaxseed flour has a better fatty acid profile than spelt flour, with a low level of saturated fat (approximately 9 g/100 g), a high concentration of linolenic acid (57 g/100 g), a lower content of linoleic acid (16 g/100 g), and a superior ω-6/ω-3 ratio of 1:4. Flaxseed flour at shares of 10% and 20% in spelt pasta contributes positively to the daily intake of essential fatty acids recommended by nutritionists and improves the ω-6/ω-3 ratio (to 6.7:1 and 1:1.2, respectively). This paper points out that the investigated pasta with flaxseed is a new product with improved functional properties due to its high level of ω-3 fatty acids, and that it is acceptable to consumers with regard to sensory properties.

Keywords: flaxseed, spelt, fatty acids, ω-3/ω-6 ratio, pasta

Procedia PDF Downloads 617
5862 Why and When to Teach Definitions: Necessary and Unnecessary Discontinuities Resulting from the Definition of Mathematical Concepts

Authors: Josephine Shamash, Stuart Smith

Abstract:

We examine reasons for introducing definitions in teaching mathematics in a number of different cases. We try to determine if, where, and when to provide a definition, and which definition to choose. We characterize different types of definitions and the different purposes we may have for formulating them, and detail examples of each type. Giving a definition at a certain stage can sometimes be detrimental to the development of the concept image. In such a case, it is advisable to delay the precise definition to a later stage. We describe two models, the 'successive approximation model' and the 'model of the extending definition', that fit such situations. Detailed examples fitting the different models are given, based on material taken from a number of textbooks and on an analysis of the way each concept is introduced and of where and how its definition is given. Our conclusion, based on this analysis, is that some of the definitions given may cause discontinuities in the learning sequence and constitute obstacles and unnecessary cognitive conflicts in the formation of the concept definition. However, in other cases, the discontinuity in passing from definition to definition actually serves a didactic purpose, is unavoidable for the mathematical evolution of the concept image, and is essential for students to deepen their understanding.

Keywords: concept image, mathematical definitions, mathematics education, mathematics teaching

Procedia PDF Downloads 128
5861 On-Line Data-Driven Multivariate Statistical Prediction Approach to Production Monitoring

Authors: Hyun-Woo Cho

Abstract:

Detection of incipient abnormal events in production processes is important to improve the safety and reliability of manufacturing operations and to reduce losses caused by failures. The construction of calibration models for predicting faulty conditions is essential in making decisions on when to perform preventive maintenance. This paper presents a multivariate calibration monitoring approach based on the statistical analysis of process measurement data. The calibration model is used to predict faulty conditions from historical reference data. The approach utilizes variable selection techniques, and the predictive performance of several prediction methods is evaluated using real data. The results show that the calibration model based on a supervised probabilistic model yielded the best performance in this work. By adopting a proper variable selection scheme in calibration models, the prediction performance can be improved by excluding non-informative variables from the model-building step.
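
The abstract does not give implementation details, but the workflow it describes (variable selection feeding a supervised probabilistic calibration model, evaluated on reference data) can be sketched as below. The synthetic data, the mutual-information selection criterion and the Gaussian Naive Bayes model are illustrative assumptions, not the authors' exact pipeline.

```python
# Minimal sketch: variable selection + a supervised probabilistic calibration model.
# X stands in for process measurements and y for the fault label (synthetic data).
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=500, n_features=30, n_informative=8, random_state=0)

# Keep only the most informative variables, then fit the probabilistic classifier.
model = make_pipeline(SelectKBest(mutual_info_classif, k=8), GaussianNB())
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print("cross-validated AUC: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```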

Keywords: calibration model, monitoring, quality improvement, feature selection

Procedia PDF Downloads 354
5860 Effect of Plasticizer Additives on the Mechanical Properties of Cement Composite: A Molecular Dynamics Analysis

Authors: R. Mohan, V. Jadhav, A. Ahmed, J. Rivas, A. Kelkar

Abstract:

Cementitious materials are an excellent example of a composite material with complex hierarchical and random features that range from the nanometer (nm) to the millimeter (mm) scale. Multi-scale modeling of complex material systems requires starting from fundamental building blocks to capture the scale-relevant features through associated computational models. In this paper, molecular dynamics (MD) modeling is employed to predict the effect of a plasticizer additive on the mechanical properties of the key hydrated cement constituent, calcium-silicate-hydrate (CSH), at the molecular, nanometer scale. Because the exact molecular configuration of CSH is complex and still unknown, a widely accepted representative configuration, that of the mineral jennite, is employed. The effectiveness of molecular dynamics modeling in predicting the influence of changes in material chemistry from molecular/nanoscale models is demonstrated.

Keywords: cement composite, mechanical properties, molecular dynamics, plasticizer additives

Procedia PDF Downloads 452
5859 Kou Jump Diffusion Model: An Application to the S&P 500, Nasdaq 100 and Russell 2000 Index Options

Authors: Wajih Abbassi, Zouhaier Ben Khelifa

Abstract:

The present research aims at the empirical validation of three option valuation models: the ad hoc Black-Scholes model as proposed by Berkowitz (2001), the constant elasticity of variance model of Cox and Ross (1976), and the Kou jump-diffusion model (2002). Our empirical analysis has been conducted on a sample of 26,974 options written on three indexes, the S&P 500, the Nasdaq 100 and the Russell 2000, traded during the year 2007, just before the sub-prime crisis. We start by presenting the theoretical foundations of the models of interest. Then we use the trust-region-reflective algorithm to estimate the structural parameters of these models from a cross-section of option prices. The empirical analysis shows the superiority of the Kou jump-diffusion model. This superiority arises from the ability of this model to portray the behavior of market participants and to be closest to the true distribution that characterizes the evolution of these indices. Indeed, the double-exponential jump-size distribution captures three interesting properties: the leptokurtic feature, the memoryless property, and the psychological aspect of market participants. Numerous empirical studies have shown that markets tend to exhibit overreaction and underreaction to good and bad news, respectively. Despite these advantages, there are not many empirical studies based on this model, partly because its probability distribution and option valuation formula are rather complicated. This paper is the first to have used nonlinear curve fitting through the trust-region-reflective algorithm on a cross-section of options to estimate the structural parameters of the Kou jump-diffusion model.
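
As a rough illustration of the calibration loop described above, the sketch below prices European calls under the usual risk-neutral Kou specification by Monte Carlo and fits the structural parameters (σ, λ, p, η₁, η₂) with SciPy's trust-region-reflective least-squares solver. The strikes, maturities and quotes are invented placeholders, and Monte Carlo pricing stands in for the Kou closed-form valuation formula used in studies of this kind.

```python
import numpy as np
from scipy.optimize import least_squares

def kou_call_mc(S0, K, T, r, sigma, lam, p, eta1, eta2, n_paths=20_000):
    """Monte Carlo price of a European call under the Kou model (risk-neutral dynamics)."""
    rng = np.random.default_rng(0)            # fixed seed = common random numbers
    zeta = p * eta1 / (eta1 - 1.0) + (1.0 - p) * eta2 / (eta2 + 1.0) - 1.0  # E[e^Y] - 1
    log_S = np.full(n_paths, np.log(S0))
    log_S += (r - 0.5 * sigma**2 - lam * zeta) * T
    log_S += sigma * np.sqrt(T) * rng.standard_normal(n_paths)
    for i, n in enumerate(rng.poisson(lam * T, n_paths)):   # double-exponential jumps
        if n:
            up = rng.random(n) < p
            y = np.where(up, rng.exponential(1.0 / eta1, n), -rng.exponential(1.0 / eta2, n))
            log_S[i] += y.sum()
    return np.exp(-r * T) * np.maximum(np.exp(log_S) - K, 0.0).mean()

S0, r = 100.0, 0.03
strikes    = np.array([90.0, 95.0, 100.0, 105.0, 110.0])
maturities = np.array([0.25, 0.25, 0.50, 0.50, 1.00])
market     = np.array([12.1, 8.4, 6.9, 5.1, 6.3])          # hypothetical quotes

def residuals(theta):
    sigma, lam, p, eta1, eta2 = theta
    model = [kou_call_mc(S0, K, T, r, sigma, lam, p, eta1, eta2)
             for K, T in zip(strikes, maturities)]
    return np.asarray(model) - market

x0 = np.array([0.2, 1.0, 0.4, 10.0, 5.0])
fit = least_squares(residuals, x0, method="trf",               # trust-region-reflective
                    bounds=([0.01, 0.0, 0.0, 1.01, 0.01], [1.0, 10.0, 1.0, 50.0, 50.0]))
print("calibrated (sigma, lambda, p, eta1, eta2):", np.round(fit.x, 4))
```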

Keywords: jump-diffusion process, Kou model, Leptokurtic feature, trust-region-reflective algorithm, US index options

Procedia PDF Downloads 427
5858 Metrology-Inspired Methods to Assess the Biases of Artificial Intelligence Systems

Authors: Belkacem Laimouche

Abstract:

With the field of artificial intelligence (AI) experiencing exponential growth, fueled by technological advances that pave the way for increasingly innovative and promising applications, there is an escalating need for rigorous methods of assessing AI performance in pursuit of transparency and equity. This article proposes a metrology-inspired statistical framework for evaluating bias and explainability in AI systems. Drawing on the principles of metrology, we propose a pioneering approach, illustrated with a concrete example, for evaluating the accuracy and precision of AI models and for quantifying the sources of measurement uncertainty that can lead to bias in their predictions. Furthermore, we explore a statistical approach for evaluating the explainability of AI systems based on their ability to provide interpretable and transparent explanations of their predictions.
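
The abstract does not specify the metrics. One metrology-flavoured reading is to treat a performance estimate like a measurement, attach an uncertainty to it, and report a simple between-group gap as a bias indicator; the sketch below does this with bootstrap resampling on synthetic predictions. The data, group labels and the 85% accuracy level are assumptions, not values from the paper.

```python
# Quantify the "measurement uncertainty" of a model's accuracy with a bootstrap interval
# and report a simple between-group accuracy gap as a bias indicator (synthetic data).
import numpy as np

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, 1000)                                 # hypothetical labels
y_pred = np.where(rng.random(1000) < 0.85, y_true, 1 - y_true)    # ~85% accurate predictions
group  = rng.integers(0, 2, 1000)                                 # hypothetical sensitive attribute

def accuracy(t, p):
    return float(np.mean(t == p))

boot = []
for _ in range(2000):                                             # bootstrap resampling
    idx = rng.integers(0, len(y_true), len(y_true))
    boot.append(accuracy(y_true[idx], y_pred[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"accuracy = {accuracy(y_true, y_pred):.3f}, 95% interval [{lo:.3f}, {hi:.3f}]")

gap = abs(accuracy(y_true[group == 0], y_pred[group == 0])
          - accuracy(y_true[group == 1], y_pred[group == 1]))
print(f"between-group accuracy gap = {gap:.3f}")
```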

Keywords: artificial intelligence, metrology, measurement uncertainty, prediction error, bias, machine learning algorithms, probabilistic models, interlaboratory comparison, data analysis, data reliability, measurement of bias impact on predictions, improvement of model accuracy and reliability

Procedia PDF Downloads 103
5857 Estimating CO₂ Storage Capacity under Geological Uncertainty Using 3D Geological Modeling of Unconventional Reservoir Rocks in Block nv32, Shenvsi Oilfield, China

Authors: Ayman Mutahar Alrassas, Shaoran Ren, Renyuan Ren, Hung Vo Thanh, Mohammed Hail Hakimi, Zhenliang Guan

Abstract:

The significant effect of CO₂ on the global climate and the environment has gained growing concern worldwide. Enhanced oil recovery (EOR) associated with sequestration of CO₂, particularly into depleted oil reservoirs, is considered a viable approach under financial limitations, since it improves oil recovery from existing reservoirs and strengthens the link between global-scale CO₂ capture and geological sequestration. Consequently, practical measures are required to attain large-scale CO₂ emission reduction. This paper presents an integrated modeling workflow to construct an accurate 3D reservoir geological model and to estimate the CO₂ storage capacity under geological uncertainty in an unconventional oil reservoir of the Paleogene Shahejie Formation (Es1) in block Nv32, Shenvsi oilfield, China. In this regard, geophysical data, including well logs from twenty-two well locations and seismic data, were combined with geological and engineering data and used to construct the 3D reservoir geological model. The geological modeling focused on four tight reservoir units of the Shahejie Formation (Es1-x1, Es1-x2, Es1-x3, and Es1-x4). The validated 3D reservoir models were subsequently used to calculate the theoretical CO₂ storage capacity in block Nv32, Shenvsi oilfield. Well logs were utilized to predict petrophysical properties such as porosity and permeability as well as lithofacies, and indicate that the Es1 reservoir units are mainly sandstone, shale, and limestone in proportions of 38.09%, 32.42%, and 29.49%, respectively. Well-log-based petrophysical results also show that the Es1 reservoir units generally exhibit 2–36% porosity, 0.017 mD to 974.8 mD permeability, and moderate to good net-to-gross ratios. These estimated values of porosity, permeability, lithofacies, and net-to-gross ratio were upscaled and distributed laterally using the Sequential Gaussian Simulation (SGS) and Sequential Indicator Simulation (SIS) methods to generate the 3D reservoir geological models. The reservoir geological models show lateral heterogeneities in the reservoir properties and lithofacies, with the best reservoir rocks in the Es1-x4, Es1-x3, and Es1-x2 units, respectively. In addition, the reservoir volumetrics of the Es1 units in block Nv32 were also estimated based on the petrophysical property models and found to be between 0.554368
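
The abstract does not state the capacity formula used. A commonly used volumetric relation (of the kind reported in the US-DOE storage atlases) multiplies reservoir area, net thickness, porosity, in-situ CO₂ density and a storage-efficiency factor; the sketch below evaluates it with placeholder values that are not taken from the Nv32 study.

```python
# Volumetric CO2 storage capacity: M_CO2 = A * h * phi * rho_CO2 * E
# (area x net thickness x porosity x CO2 density x storage-efficiency factor).
# All inputs are assumed placeholders, not values from the Shenvsi oilfield.
area_m2     = 20.0e6   # reservoir area [m^2] (assumed)
thickness_m = 15.0     # net reservoir thickness [m] (assumed)
porosity    = 0.18     # average porosity [-] (assumed)
rho_co2     = 650.0    # CO2 density at reservoir conditions [kg/m^3] (assumed)
efficiency  = 0.04     # storage efficiency factor [-] (assumed)

mass_kg = area_m2 * thickness_m * porosity * rho_co2 * efficiency
print(f"theoretical CO2 storage capacity ~ {mass_kg / 1e9:.2f} Mt")
```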

Keywords: CO₂ storage capacity, 3D geological model, geological uncertainty, unconventional oil reservoir, block Nv32

Procedia PDF Downloads 175
5856 An Assessment of Self-Perceived Health after the Death of a Spouse among the Elderly

Authors: Shu-Hsi Ho

Abstract:

The problems of aging and the number of widowed elderly are gradually rising in Taiwan, so the issues faced by the elderly after the death of a spouse deserve attention. Hence, this study examines the impact of spousal death on the surviving spouse's self-perceived health and mental health among the elderly in Taiwan. A cross-sectional design and ordered logistic regression models are applied to investigate whether marital status is significantly associated with self-perceived health and mental health for widowed older Taiwanese. The results indicate that widowhood has significant negative effects on self-perceived health and mental health, for widows and widowers alike. Among them, widows might be more likely to show worse mental health than widowers. This confirms the belief that marriage provides effective resources for promoting self-perceived health and mental health, particularly for females. In addition, since the social welfare system in Taiwan is not perfect, the findings also suggest that family and social support are strongly associated with self-perceived health and mental health among the widowed elderly.
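
The abstract does not show the model specification. A minimal ordered logistic regression of the kind described might look like the sketch below, here with statsmodels' OrderedModel on synthetic data; the variable names, coding and effect sizes are assumptions, not the Taiwanese survey data.

```python
# Ordered logit of self-perceived health (1-5 scale) on widowhood and covariates.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(2)
n = 800
df = pd.DataFrame({
    "widowed": rng.integers(0, 2, n),            # 1 = spouse deceased (assumed coding)
    "age": rng.integers(65, 90, n),
    "female": rng.integers(0, 2, n),
    "social_support": rng.normal(0, 1, n),
})
# Synthetic ordered response for illustration only.
latent = (-0.8 * df["widowed"] + 0.5 * df["social_support"]
          - 0.03 * (df["age"] - 65) + rng.logistic(0, 1, n))
df["health"] = pd.cut(latent, bins=[-np.inf, -2, -1, 0, 1, np.inf], labels=[1, 2, 3, 4, 5])

model = OrderedModel(df["health"],
                     df[["widowed", "age", "female", "social_support"]],
                     distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```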

Keywords: logistic regression models, self-perceived health, widow, widower

Procedia PDF Downloads 461
5855 Large Language Model Powered Chatbots Need End-to-End Benchmarks

Authors: Debarag Banerjee, Pooja Singh, Arjun Avadhanam, Saksham Srivastava

Abstract:

Autonomous conversational agents, i.e., chatbots, are becoming an increasingly common mechanism for enterprises to provide support to customers and partners. In order to rate chatbots, especially ones powered by Generative AI tools like Large Language Models (LLMs), we need to be able to accurately assess their performance. This is where chatbot benchmarking becomes important. In this paper, the authors propose a benchmark that they call the E2E (end-to-end) benchmark and show how it can be used to evaluate the accuracy and usefulness of the answers provided by chatbots, especially ones powered by LLMs. The authors evaluate an example chatbot at different levels of sophistication based on both the E2E benchmark and other metrics commonly used in the state of the art, and observe that the proposed benchmark shows better results than the others. In addition, while some metrics proved to be unpredictable, the metric associated with the E2E benchmark, which uses cosine similarity, performed well in evaluating chatbots. The performance of the best models shows that there are several benefits to using the cosine similarity score as a metric in the E2E benchmark.
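
The core scoring step of such a benchmark can be illustrated in a few lines: embed the chatbot's answer and a reference ("golden") answer and report their cosine similarity. The embedding model name and the example texts below are assumptions, not details from the paper.

```python
# Cosine-similarity scoring of a chatbot answer against a reference answer.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")   # any sentence-embedding model would do

reference = "You can reset your password from the account settings page."
candidate = "Go to account settings and choose 'reset password'."

ref_vec, cand_vec = model.encode([reference, candidate])
cosine = float(np.dot(ref_vec, cand_vec) /
               (np.linalg.norm(ref_vec) * np.linalg.norm(cand_vec)))
print(f"cosine similarity = {cosine:.3f}")        # closer to 1.0 = closer to the reference
```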

Keywords: chatbot benchmarking, end-to-end (E2E) benchmarking, large language model, user-centric evaluation

Procedia PDF Downloads 64
5854 The Effectiveness of Multiphase Flow in Well-Control Operations

Authors: Ahmed Borg, Elsa Aristodemou, Attia Attia

Abstract:

Well control involves managing the circulating drilling fluid within the well and avoiding kicks and blowouts, as these can lead to loss of human life and drilling facilities. Current practices for well control incorporate predictions of pressure losses through computational models. Developing a realistic hydraulic model for a well-control problem is a very complicated process due to the existence of a complex multiphase region, which usually contains a non-Newtonian drilling fluid, and the miscibility of formation gas in the drilling fluid. Current approaches assume an inaccurate fluid flow model within the well, which leads to incorrect pressure loss calculations. To overcome this problem, researchers have been considering more complex two-phase fluid flow models. However, even these more sophisticated two-phase models are unsuitable for applications where pressure dynamics are important, such as managed pressure drilling. This study aims to develop and implement new fluid flow models that take into consideration the miscibility of the fluids as well as their non-Newtonian properties, enabling realistic kick treatment, and to build a corresponding numerical solution method with an enriched data bank. The work considers and implements models that account for the effect of two phases in kick treatment for well control in conventional drilling. The software STAR-CCM+ was used for the computational studies of the important parameters describing wellbore multiphase flow: the mass flow rate, volumetric fraction, and velocity of each phase. Based on the analysis of these simulation studies, a coarser full-scale model of the wellbore, including chemical modeling, was established. The focus of the investigations was put on the near-drill-bit section. This inflow area shows certain characteristics that are dominated by the inflow conditions of the gas as well as by the configuration of the mud stream entering the annulus. Without considering the gas solubility effect, the bottom-hole pressure could be underestimated by 4.2%, while the bottom-hole temperature is overestimated by 3.2%; without considering the heat transfer effect, the bottom-hole pressure could be overestimated by 11.4% under steady flow conditions. In addition, a larger reservoir pressure leads to a larger gas fraction in the wellbore, although reservoir pressure has a minor effect on the steady wellbore temperature, and as the choke pressure increases, less free gas exists in the annulus.

Keywords: multiphase flow, well control, STAR-CCM+, petroleum engineering and gas technology, computational fluid dynamics

Procedia PDF Downloads 117
5853 The Impact of Steel Connections on the Fire Resistance of Composite Buildings

Authors: Shuyuan Lin, Zhaohui Huang, Mizi Fan

Abstract:

In the majority of previous research into modelling large-scale composite floors subjected to fire, the beam-to-column and beam-to-beam connections were assumed to behave either as pinned or as rigid for simplicity, and the vertical shear and axial tension failures of the connections were not taken into account. We have recently developed robust two-noded connection models for modelling endplate and partial endplate steel connections under fire conditions. The main objective of this research is to systematically investigate the impact of the connections of protected beams on the tensile membrane actions of the supported floor slabs, in which connection failures such as axial tension, vertical shear and bending are accounted for. The models developed have very good numerical stability under static solver conditions and can be used for large-scale modelling of composite buildings in fire.

Keywords: fire, steel structure, component-based model, beam-to-column connections

Procedia PDF Downloads 448
5852 Machine Learning Automatic Detection on Twitter Cyberbullying

Authors: Raghad A. Altowairgi

Abstract:

With the widespread use of social media platforms, young people tend to use them extensively as their first means of communication due to their ease and modernity. However, these platforms often create fertile ground for bullies to practice aggressive behavior against their victims. Platform usage cannot be reduced, but intelligent mechanisms can be implemented to reduce the abuse. This is where machine learning comes in. Understanding and classifying text can help to minimize acts of cyberbullying, and artificial intelligence techniques have expanded into applied tools for addressing the phenomenon. In this research, machine learning models are built to classify text into two classes: cyberbullying and non-cyberbullying. The data are preprocessed in four stages: removing characters that do not provide meaningful information to the models, tokenization, stop-word removal, and lowercasing. BoW and TF-IDF are then used as the main features for five classifiers: logistic regression, Naïve Bayes, Random Forest, XGBoost, and CatBoost. They score 92%, 90%, 92%, 91%, and 86%, respectively.
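
A minimal sketch of the TF-IDF plus logistic regression variant described above is shown below; the other classifiers (Naïve Bayes, Random Forest, XGBoost, CatBoost) would slot into the same pipeline. The two example tweets and their labels are invented, not taken from the study's dataset.

```python
# TF-IDF features feeding a logistic regression cyberbullying classifier (toy data).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

texts  = ["you are so stupid nobody likes you", "great game last night, congrats!"] * 50
labels = [1, 0] * 50          # 1 = cyberbullying, 0 = non-cyberbullying

# Lowercasing and stop-word removal are folded into the vectorizer here.
model = make_pipeline(
    TfidfVectorizer(lowercase=True, stop_words="english"),
    LogisticRegression(max_iter=1000),
)
scores = cross_val_score(model, texts, labels, cv=5, scoring="accuracy")
print("cross-validated accuracy:", scores.mean())
```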

Keywords: cyberbullying, machine learning, Bag-of-Words, term frequency-inverse document frequency, natural language processing, Catboost

Procedia PDF Downloads 129
5851 A Meta-Analysis of School-Based Suicide Prevention for Adolescents and Meta-Regressions of Contextual and Intervention Factors

Authors: E. H. Walsh, J. McMahon, M. P. Herring

Abstract:

Post-primary school-based suicide prevention (PSSP) is a valuable avenue for reducing suicidal behaviours in adolescents. The aims of this meta-analysis and meta-regression were 1) to quantify the effect of PSSP interventions on adolescent suicide ideation (SI) and suicide attempts (SA), and 2) to explore how intervention effects may vary based on important contextual and intervention factors. This study provides further support for the benefits of PSSP by demonstrating lower suicide outcomes in over 30,000 adolescents following PSSP and mental health interventions, and it tentatively suggests that intervention effectiveness may vary based on intervention factors. The protocol for this study is registered on PROSPERO (ID=CRD42020168883). Population, intervention, comparison, outcomes, and study design (PICOS) criteria defined eligible studies as cluster randomised studies (n=12) containing PSSP and measuring suicide outcomes. The aggregate electronic database EBSCOhost, Web of Science, and the Cochrane Central Register of Controlled Trials were searched. Cochrane bias tools for cluster randomised studies showed that half of the studies were rated as low risk of bias. Egger's regression test adapted for multi-level modelling indicated that publication bias was not an issue (all ps > .05). Crude and corresponding adjusted pooled log odds ratios (OR) were computed using the metafor package in R, yielding 12 SA and 19 SI effects. Multi-level random-effects models accounting for dependencies of effects from the same study revealed that, in crude models, compared to controls, interventions were significantly associated with 13% (OR=0.87, 95% confidence interval (CI) [0.78, 0.96], Q18=15.41, p=0.63) and 34% (OR=0.66, 95% CI [0.47, 0.91], Q10=16.31, p=0.13) lower odds of SI and SA, respectively. Adjusted models showed similar odds reductions of 15% (OR=0.85, 95% CI [0.75, 0.95], Q18=10.04, p=0.93) and 28% (OR=0.72, 95% CI [0.59, 0.87], Q10=10.46, p=0.49) for SI and SA, respectively. Within-cluster heterogeneity ranged from none to low for SA across crude and adjusted models (0–9%); no heterogeneity was identified for SI across crude and adjusted models (0%). Pre-specified univariate moderator analyses were not significant for SA (all ps < 0.05). Variations in average pooled SA odds reductions across categories of various intervention characteristics were observed (all ps < 0.05), which preliminarily suggests that the effectiveness of interventions may vary across intervention factors. These findings have practical implications for researchers, clinicians, educators, and decision-makers. Further investigation of important logical, theoretical, and empirical moderators of PSSP intervention effectiveness is recommended to establish how and when PSSP interventions best reduce adolescent suicidal behaviour.
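
For readers unfamiliar with the pooling step, a simplified random-effects (DerSimonian-Laird) aggregation of study-level log odds ratios is sketched below. The actual analysis uses multi-level models in R's metafor that also handle dependent effects from the same study; the effect sizes and variances here are invented.

```python
# Single-level random-effects pooling of log odds ratios (DerSimonian-Laird).
import numpy as np

log_or = np.array([-0.10, -0.25, 0.05, -0.30, -0.15])   # study-level log ORs (toy values)
var    = np.array([0.02, 0.05, 0.04, 0.03, 0.06])        # their sampling variances (toy)

w_fixed = 1.0 / var
y_bar   = np.sum(w_fixed * log_or) / np.sum(w_fixed)
Q       = np.sum(w_fixed * (log_or - y_bar) ** 2)         # heterogeneity statistic
C       = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau2    = max(0.0, (Q - (len(log_or) - 1)) / C)           # between-study variance

w_rand  = 1.0 / (var + tau2)
pooled  = np.sum(w_rand * log_or) / np.sum(w_rand)
se      = np.sqrt(1.0 / np.sum(w_rand))
ci      = np.exp([pooled - 1.96 * se, pooled + 1.96 * se])
print(f"pooled OR = {np.exp(pooled):.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}], tau^2 = {tau2:.3f}")
```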

Keywords: adolescents, contextual factors, post-primary school-based suicide prevention, suicide ideation, suicide attempts

Procedia PDF Downloads 100
5850 Modelling Urban Rigidity and Elasticity Growth Boundaries: A Spatial Constraints-Suitability Based Perspective

Authors: Pengcheng Xiang Jr., Xueqing Sun, Dong Ngoduy

Abstract:

In the context of rapid urbanization, urban sprawl has brought extensive negative impacts on ecosystems and the environment, resulting in a gradual shift in cities from "incremental growth" to "stock growth". A detailed urban growth boundary is a prerequisite for urban renewal and management. This study takes Shenyang City, China, as the study area, evaluates the spatial distribution of urban spatial suitability from the perspective of spatial constraints and suitability using multi-source data, and simulates the city's future rigid and elastic growth boundaries using the CA-Markov model. The results show that (1) the suitable construction area and the moderately suitable construction area account for 8.76% and 19.01% of the total area, respectively, and both show a distribution trend from the urban centre to the periphery, mainly in Shenhe District, the southern part of Heping District, the western part of Dongling District, and the central part of Dadong District; (2) the area of construction land expansion in the study area over the period 2023-2030 is 153274.6977 hm², accounting for 44.39% of the total area of the study area; (3) the rigid boundary of the study area occupies an area of 153274.6977 hm², accounting for 44.39% of the total area of the study area, and the elastic boundary contains an area of 75362.61 hm², accounting for 21.69% of the total area of the study area. The study constructs a method for urban growth boundary delineation, which helps apply remote sensing to guide future urban spatial growth management and urban renewal.
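
The CA-Markov approach combines a cellular-automaton spatial allocation step with a Markov-chain projection of land-use quantities. The sketch below shows only the Markov step, projecting class areas with a transition probability matrix; the matrix and areas are illustrative placeholders rather than values derived from the Shenyang data.

```python
# Markov-chain projection of land-use class areas (the quantity step of CA-Markov).
import numpy as np

classes = ["construction", "cropland", "green/water"]
# P[i, j] = probability that a cell in class i at time t is in class j at t+1 (assumed)
P = np.array([
    [0.95, 0.03, 0.02],
    [0.10, 0.85, 0.05],
    [0.04, 0.06, 0.90],
])
area_t = np.array([120_000.0, 180_000.0, 45_000.0])   # base-year areas in hm^2 (toy values)

area_next = area_t @ P                                 # expected areas after one time step
for name, a in zip(classes, area_next):
    print(f"{name:>12s}: {a:,.0f} hm^2")
```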

Keywords: urban growth boundary, spatial constraints, spatial suitability, urban sprawl

Procedia PDF Downloads 32
5849 Line Heating Forming: Methodology and Application Using Kriging and Fifth Order Spline Formulations

Authors: Henri Champliaud, Zhengkun Feng, Ngan Van Lê, Javad Gholipour

Abstract:

In this article, a method is presented to effectively estimate the deformed shape of a thick plate due to line heating. The method uses a fifth-order spline interpolation, with up to C3 continuity at specific points, to compute the shape of the deformed geometry. The first- and second-order derivatives over the surface are the resulting parameters of a given heating line on a plate. These parameters are determined through experiments and/or finite element simulations. Very accurate kriging models are fitted to real or virtual surfaces to build up a database of maps. Maps of first- and second-order derivatives are then applied to numerical plate models to evaluate their evolving shapes through a sequence of heating lines. Adding an optimization process to this approach would allow the trajectories of the heating lines needed to shape complex geometries, such as Francis turbine blades, to be determined.
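
Kriging of the kind used for these derivative maps can be approximated with Gaussian-process regression using an RBF covariance, as sketched below: a surface quantity (e.g. a first-order derivative induced by a heating line) is fitted against position on the plate and queried together with its prediction uncertainty. The training data, kernel choice and length scales are assumptions.

```python
# Kriging-style map of a surface derivative vs. plate position (Gaussian-process regression).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ConstantKernel, RBF

rng = np.random.default_rng(3)
# (x, y) positions on the plate [mm] and a synthetic "slope" response at each point.
X = rng.uniform(0, 500, size=(80, 2))
y = 0.002 * np.exp(-((X[:, 1] - 250.0) ** 2) / (2 * 60.0 ** 2)) + rng.normal(0, 1e-5, 80)

kernel = ConstantKernel(1.0) * RBF(length_scale=[100.0, 50.0])
gp = GaussianProcessRegressor(kernel=kernel, alpha=1e-10, normalize_y=True)
gp.fit(X, y)

# Query the fitted map at a point of interest, with its kriging (prediction) uncertainty.
pred, std = gp.predict(np.array([[250.0, 260.0]]), return_std=True)
print(f"predicted slope = {pred[0]:.5f} +/- {std[0]:.5f}")
```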

Keywords: deformation, kriging, fifth order spline interpolation, first, second and third order derivatives, C3 continuity, line heating, plate forming, thermal forming

Procedia PDF Downloads 451
5848 Hydraulic Analysis of Irrigation Approach Channel Using HEC-RAS Model

Authors: Muluegziabher Semagne Mekonnen

Abstract:

This study was intended to determine the irrigation water requirements and to evaluate the canal hydraulics under steady-state conditions in order to improve the scheme performance of the Meki-Ziway irrigation project. The CROPWAT 8.0 model was used to estimate the irrigation water requirements of the five major crops irrigated in the study area. The results showed that for the existing and potential irrigation development areas of 2000 ha and 2599 ha, the crop water requirements were 3,339,200 and 4,339,090.4 m³, respectively. Hydraulic simulation models are fundamental tools for understanding the hydraulic flow characteristics of irrigation systems. In this study, a hydraulic analysis of the irrigation canals of the Meki-Ziway scheme was conducted using the HEC-RAS model. The HEC-RAS model was tested in terms of error estimation and used to determine the potential canal capacity.
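
Steady-flow canal capacity of the kind HEC-RAS reports ultimately rests on Manning's equation, Q = (1/n)·A·R^(2/3)·S^(1/2). The sketch below evaluates it for a trapezoidal canal section; the geometry, roughness and slope values are assumptions, not the Meki-Ziway design values.

```python
# Manning's equation for a trapezoidal canal section (SI units).
import math

b, z, y = 1.5, 1.0, 0.8        # bottom width [m], side slope (H:V), flow depth [m] (assumed)
n, S = 0.025, 0.0005           # Manning roughness, longitudinal slope (assumed)

A = (b + z * y) * y                          # flow area [m^2]
P = b + 2.0 * y * math.sqrt(1.0 + z ** 2)    # wetted perimeter [m]
R = A / P                                    # hydraulic radius [m]
Q = (1.0 / n) * A * R ** (2.0 / 3.0) * math.sqrt(S)
print(f"steady-state canal capacity Q ~ {Q:.2f} m^3/s")
```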

Keywords: HEC-RAS, irrigation, hydraulics, canal reach, capacity

Procedia PDF Downloads 58
5847 Prospects of Acellular Organ Scaffolds for Drug Discovery

Authors: Inna Kornienko, Svetlana Guryeva, Natalia Danilova, Elena Petersen

Abstract:

Drug toxicity often goes undetected until clinical trials, the most expensive and dangerous phase of drug development. Both human cell culture and animal studies have limitations that cannot be overcome by improvements in drug testing protocols. Tissue engineering is an emerging alternative approach to creating models of human malignant tumors for experimental oncology, personalized medicine, and drug discovery studies. This new generation of bioengineered tumors provides an opportunity to control and explore the role of every component of the model system, including cell populations, supportive scaffolds, and signaling molecules. An area that could greatly benefit from these models is cancer research. Recent advances have demonstrated that decellularized tissue is an excellent scaffold for tissue engineering. Decellularization of donor organs such as the heart, liver, and lung can provide an acellular, naturally occurring three-dimensional biologic scaffold material that can then be seeded with selected cell populations. Preliminary studies in animal models have provided encouraging proof-of-concept results. Decellularized organs preserve the organ microenvironment, which is critical for cancer metastasis. Utilizing 3D tumor models results in greater proximity of the model's cell culture morphological characteristics to its in vivo counterpart and allows more accurate simulation of the processes within a functioning tumor and of its pathogenesis. 3D models also allow the study of migration processes and cell proliferation with higher reliability. Moreover, cancer cells in a 3D model bear closer resemblance to living conditions in terms of gene expression, cell surface receptor expression, and signaling. 2D cell monolayers do not provide the geometrical and mechanical cues of tissues in vivo and are, therefore, not suitable for accurately predicting the responses of living organisms. 3D models can provide several levels of complexity, from simple monocultures of cancer cell lines in a liquid environment, comprising oxygen and nutrient gradients and cell-cell interaction, to more advanced models, which include co-culturing with other cell types, such as endothelial and immune cells. Following this reasoning, spheroids cultivated from one or multiple patient-derived cell lines can be used to seed the matrix rather than monolayer cells. This approach furthers the progress towards personalized medicine. As an initial step toward creating a new ex vivo tissue-engineered model of a cancer tumor, optimized protocols have been designed to obtain organ-specific acellular matrices and evaluate their potential as tissue-engineered scaffolds for cultures of normal and tumor cells. Decellularized biomatrix was prepared from animal kidneys, urethra, lungs, heart, and liver by two decellularization methods: perfusion in a bioreactor system, and immersion-agitation on an orbital shaker with the use of various detergents (SDS, Triton X-100) in different concentrations, and freezing. Acellular scaffolds and tissue-engineered constructs were characterized and compared using morphological methods. Models using a decellularized matrix have certain advantages, such as maintaining native extracellular matrix properties and a biomimetic microenvironment for cancer cells; compatibility with multiple cell types for cell culture and drug screening; and suitability for culturing patient-derived cells in vitro to evaluate different anticancer therapeutics in developing personalized medicines.

Keywords: 3D models, decellularization, drug discovery, drug toxicity, scaffolds, spheroids, tissue engineering

Procedia PDF Downloads 299
5846 A New Approach to Interval Matrices and Applications

Authors: Obaid Algahtani

Abstract:

An interval may be defined as a convex combination as follows: I = [a, b] = {x_α = (1-α)a + αb : α ∈ [0,1]}. Consequently, we may define interval operations by applying the scalar operation pointwise to the corresponding interval points: I∙J = {x_α ∙ y_α : α ∈ [0,1], x_α ∈ I, y_α ∈ J}, with the usual restriction 0 ∉ J if ∙ = ÷. These operations are associative: I + (J + K) = (I + J) + K and I*(J*K) = (I*J)*K. These two properties, which are missing in the usual interval operations, enable the extension of the usual linear system concepts to the interval setting in a seamless manner. The arithmetic introduced here avoids such vague terms as 'interval extension', 'inclusion function', and determinants, which we encounter in the engineering literature dealing with interval linear systems. On the other hand, these definitions were motivated by our attempt to arrive at a definition of interval random variables and to investigate the corresponding statistical properties. We feel that they are the natural ones for handling interval systems. They enable the extension of many results from usual state space models to interval state space models. The interval state space model considered here is of the form X_{t+1} = A X_t + W_t, Y_t = H X_t + V_t, t ≥ 0, where A ∈ IR^(k×k) and H ∈ IR^(p×k) are interval matrices and W_t ∈ IR^k, V_t ∈ IR^p are zero-mean Gaussian white-noise interval processes. This feeling is reinforced by the numerical results we obtained in simulation examples.
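
The α-parameterised operations defined above can be prototyped directly, as in the sketch below: each interval is represented by its endpoints, an operation is applied pointwise in α over a sampling grid, and the resulting set is reported as a range. The grid resolution and the example intervals are implementation conveniences, not part of the paper's definition.

```python
# Alpha-wise interval arithmetic: x_alpha = (1 - alpha) * a + alpha * b, operations pointwise in alpha.
import numpy as np

ALPHAS = np.linspace(0.0, 1.0, 1001)

def points(iv):
    a, b = iv
    return (1.0 - ALPHAS) * a + ALPHAS * b         # x_alpha for every alpha on the grid

def op(iv1, iv2, fn):
    vals = fn(points(iv1), points(iv2))            # x_alpha (op) y_alpha, same alpha
    return (float(vals.min()), float(vals.max()))  # resulting set reported as a range

I, J, K = (1.0, 2.0), (-1.0, 3.0), (0.5, 0.5)
print("I + J =", op(I, J, np.add))                 # alpha-wise sum
print("I * J =", op(I, J, np.multiply))            # alpha-wise product
# Associativity of the alpha-wise sum: (I + J) + K agrees with I + (J + K).
print(np.allclose(op(op(I, J, np.add), K, np.add), op(I, op(J, K, np.add), np.add)))
```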

Keywords: interval analysis, interval matrices, state space model, Kalman Filter

Procedia PDF Downloads 423
5845 Intellectual Property and SMEs in the Baltic Sea Region: A Comparative Study on the Use of the Utility Model Protection

Authors: Christina Wainikka, Besrat Tesfaye

Abstract:

Several of the countries in the Baltic Sea region are ranked high in international innovation rankings, such as the Global Innovation Index and the European Innovation Scoreboard. There are, however, some concerns about the performance of individual countries. For example, there is a widely spread notion of a 'Swedish paradox': Sweden is ranked high due to investments in R&D and patent activity, but the outcome is not as high as could be expected. SMEs in Sweden are also below the EU average when it comes to registering intellectual property rights such as patents and trademarks. This study concentrates on utility model protection, an intellectual property right that does not exist in Sweden but does in, for example, Finland and Germany. Utility model protection is sometimes referred to as a 'patent light', since it is easier to obtain than patent protection while still covering technical solutions. Statistics on patent activity and utility model registrations show that utility model protection is scarcely used in the countries that have it. In Germany, 10,577 utility model applications were made in 2021, and in Finland 259; by comparison, there were 58,568 patent applications in Germany and 1,662 in Finland in the same year. In Sweden, there has never been protection for utility models; the only protection for technical solutions is patents and business secrets. The threshold for obtaining a patent is high, due to the legal requirements and the costs, so patent protection is often not chosen by SMEs in Sweden. This study examines whether the protection of utility models in other countries in the Baltic region provides SMEs in those countries with better options for protecting their innovations. The legal methodology is comparative law. To study the effects of the legal differences, statistics are examined and interviews are conducted with SMEs from different industries.

Keywords: Baltic Sea region, comparative law, SME, utility model

Procedia PDF Downloads 114
5844 [Keynote Talk]: Software Reliability Assessment and Fault Tolerance: Issues and Challenges

Authors: T. Gayen

Abstract:

Although several software reliability models exist today, there is still no versatile model that can be used for the reliability assessment of software in general. Complex software has a very large number of states (unlike hardware), so it is practically difficult to test the software completely. Irrespective of the amount of testing one does, it sometimes remains extremely difficult to assure that the final software product is fault-free. Black-box software reliability models are found to be quite uncertain for the reliability assessment of various systems. Mission-critical applications need to be highly reliable, yet it is not always possible to ensure the development of a highly reliable system. Hence, in order to achieve fault-free operation of software, mechanisms are developed to handle faults remaining in the system even after development. Although several such techniques are currently in use to achieve fault tolerance, these mechanisms may not always be suitable for every system. Hence, this discussion focuses on analyzing the issues and challenges faced by the existing techniques for reliability assessment and fault tolerance of various software systems.

Keywords: black box, fault tolerance, failure, software reliability

Procedia PDF Downloads 425
5843 Application of MALDI-MS to Differentiate SARS-CoV-2 and Non-SARS-CoV-2 Symptomatic Infections in the Early and Late Phases of the Pandemic

Authors: Dmitriy Babenko, Sergey Yegorov, Ilya Korshukov, Aidana Sultanbekova, Valentina Barkhanskaya, Tatiana Bashirova, Yerzhan Zhunusov, Yevgeniya Li, Viktoriya Parakhina, Svetlana Kolesnichenko, Yeldar Baiken, Aruzhan Pralieva, Zhibek Zhumadilova, Matthew S. Miller, Gonzalo H. Hortelano, Anar Turmuhambetova, Antonella E. Chesca, Irina Kadyrova

Abstract:

Introduction: The rapidly evolving COVID-19 pandemic, along with the re-emergence of pathogens causing acute respiratory infections (ARI), has necessitated the development of novel diagnostic tools to differentiate various causes of ARI. MALDI-MS, due to its wide usage and affordability, has been proposed as a potential instrument for diagnosing SARS-CoV-2 versus non-SARS-CoV-2 ARI. The aim of this study was to investigate the potential of MALDI-MS in conjunction with a machine learning model to accurately distinguish between symptomatic infections caused by SARS-CoV-2 and non-SARS-CoV-2 during both the early and later phases of the pandemic. Furthermore, this study aimed to analyze mass spectrometry (MS) data obtained from nasal swabs of healthy individuals. Methods: We gathered mass spectra from 252 samples, comprising 108 SARS-CoV-2-positive samples obtained in 2020 (Covid 2020), 7 SARS-CoV-2-positive samples obtained in 2023 (Covid 2023), 71 samples from symptomatic individuals without SARS-CoV-2 (Control non-Covid ARVI), and 66 samples from healthy individuals (Control healthy). All samples were subjected to RT-PCR testing. For data analysis, we employed the caret R package to train and test seven machine-learning algorithms: C5.0, KNN, NB, RF, SVM-L, SVM-R, and XGBoost. We conducted the training process using a five-fold (outer) nested, repeated (five times) ten-fold (inner) cross-validation with a randomized stratified splitting approach. Results: We utilized the Covid 2020 dataset as the case group and the non-Covid ARVI dataset as the control group to train and test the various machine learning (ML) models. Among these models, XGBoost and SVM-R demonstrated the highest performance, with accuracy values of 0.97 [0.93, 0.97] and 0.95 [0.95, 0.97], specificity values of 0.86 [0.71, 0.93] and 0.86 [0.79, 0.87], and sensitivity values of 0.984 [0.984, 1.000] and 1.000 [0.968, 1.000], respectively. When examining the Covid 2023 dataset, the Naive Bayes model achieved the highest classification accuracy of 43%, while XGBoost and SVM-R achieved accuracies of 14%. For the healthy control dataset, the accuracy of the models ranged from 0.27 [0.24, 0.32] for k-nearest neighbors to 0.44 [0.41, 0.45] for the support vector machine with a radial basis function kernel. Conclusion: ML models trained on MALDI-MS spectra of nasopharyngeal swabs obtained from patients with Covid during the initial phase of the pandemic, as well as from symptomatic non-Covid individuals, showed excellent classification performance, which aligns with the results of previous studies. However, when applied to swabs from healthy individuals and a limited sample of patients with Covid in the late phase of the pandemic, the ML models exhibited lower classification accuracy.
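
The nested cross-validation scheme described (5-fold outer loop, 5x-repeated 10-fold inner loop for tuning) was run in R's caret; a rough scikit-learn equivalent is sketched below with an SVM-R-style classifier, using synthetic data in place of the MALDI-MS feature matrix. The sample sizes, parameter grid and scoring metric are assumptions.

```python
# Nested cross-validation: inner loop tunes hyper-parameters, outer loop estimates performance.
from sklearn.datasets import make_classification
from sklearn.model_selection import (GridSearchCV, RepeatedStratifiedKFold,
                                     StratifiedKFold, cross_val_score)
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=180, n_features=300, n_informative=20, random_state=0)

inner = RepeatedStratifiedKFold(n_splits=10, n_repeats=5, random_state=0)
outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

svm_r = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
tuned = GridSearchCV(svm_r, {"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.01]},
                     cv=inner, scoring="accuracy")

scores = cross_val_score(tuned, X, y, cv=outer, scoring="accuracy")
print("nested-CV accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```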

Keywords: SARS-CoV-2, MALDI-TOF MS, ML models, nasopharyngeal swabs, classification

Procedia PDF Downloads 106
5842 Russian Spatial Impersonal Sentence Models in Translation Perspective

Authors: Marina Fomina

Abstract:

The paper focuses on the category of semantic subject within the framework of a functional approach to linguistics. The semantic subject is related to similar notions such as the grammatical subject and the bearer of the predicative feature. It is the multifaceted nature of the category of subject that 1) triggers a number of issues that, syntax-wise, remain to be dealt with (cf. semantic vs. syntactic functions / sentence parts vs. parts of speech issues, etc.); 2) results in a variety of approaches to the category of subject, such as formal grammatical, semantic/syntactic (functional), communicative approaches, etc. Many linguists consider the prototypical approach to the category of subject to be the most instrumental, as it reveals the integrity of the denotative and linguistic components of the conceptual category. This approach treats the subject as the source of a non-passive predicative feature, an element of a subject-predicate-object situation that can take on a variety of semantic roles, cf.: 1) an agent (He carefully surveyed the valley stretching before him), 2) an experiencer (I feel very bitter about this), 3) a recipient (I received this book as a gift), 4) a causee (The plane broke into three pieces), 5) a patient (This stove cleans easily), etc. It is believed that the variety of roles stems from the radial (prototypical) structure of the category, with some members more central than others. Translation-wise, the most 'treacherous' subject types are the peripheral ones. The paper 1) establishes the peripheral status of spatial impersonal sentence models such as U menia v ukhe zvenit (lit. I-Gen. in ear buzzes) within the category of semantic subject, 2) makes a structural and semantic analysis of the models, 3) focuses on their Russian-English translation patterns, and 4) reveals non-prototypical features of the subjects in the English equivalents.

Keywords: bearer of predicative feature, grammatical subject, impersonal sentence model, semantic subject

Procedia PDF Downloads 369
5841 Deep Learning Strategies for Mapping Complex Vegetation Patterns in Mediterranean Environments Undergoing Climate Change

Authors: Matan Cohen, Maxim Shoshany

Abstract:

Climatic, topographic and geological diversity, together with frequent disturbance and recovery cycles, produce highly complex spatial patterns of trees, shrubs, dwarf shrubs and bare ground patches. Assessment of the spatial and temporal variations of these life-form patterns under climate change is of high ecological priority. Here we report on one of the first attempts to discriminate between images of three Mediterranean life-form patterns at three densities. The development of an extensive database of orthophoto images representing these 9 pattern categories was instrumental for training and testing pre-trained and newly trained DL models utilizing the DenseNet architecture. Both models demonstrated the advantages of deep learning approaches over existing spectral and spatial (pattern or texture) algorithmic methods in differentiating the 9 life-form spatial mixture categories.
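
A minimal transfer-learning sketch in the spirit of the pre-trained variant described is shown below: a DenseNet backbone with ImageNet weights and a new 9-class head for the life-form pattern categories. The input size, head layers and training settings are assumptions, not the authors' configuration.

```python
# DenseNet121 backbone (ImageNet weights) with a new 9-class classification head.
import tensorflow as tf

NUM_CLASSES = 9  # 3 life-form patterns x 3 densities

base = tf.keras.applications.DenseNet121(weights="imagenet", include_top=False,
                                         input_shape=(224, 224, 3))
base.trainable = False                      # freeze the pre-trained backbone initially

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets of orthophoto patches
```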

Keywords: texture classification, deep learning, desert fringe ecosystems, climate change

Procedia PDF Downloads 88
5840 Quantification of Dispersion Effects in Arterial Spin Labelling Perfusion MRI

Authors: Rutej R. Mehta, Michael A. Chappell

Abstract:

Introduction: Arterial spin labelling (ASL) is an increasingly popular perfusion MRI technique, in which arterial blood water is magnetically labelled in the neck before flowing into the brain, providing a non-invasive measure of cerebral blood flow (CBF). The accuracy of ASL CBF measurements, however, is hampered by dispersion effects; the distortion of the ASL labelled bolus during its transit through the vasculature. In spite of this, the current recommended implementation of ASL – the white paper (Alsop et al., MRM, 73.1 (2015): 102-116) – does not account for dispersion, which leads to the introduction of errors in CBF. Given that the transport time from the labelling region to the tissue – the arterial transit time (ATT) – depends on the region of the brain and the condition of the patient, it is likely that these errors will also vary with the ATT. In this study, various dispersion models are assessed in comparison with the white paper (WP) formula for CBF quantification, enabling the errors introduced by the WP to be quantified. Additionally, this study examines the relationship between the errors associated with the WP and the ATT – and how this is influenced by dispersion. Methods: Data were simulated using the standard model for pseudo-continuous ASL, along with various dispersion models, and then quantified using the formula in the WP. The ATT was varied from 0.5s-1.3s, and the errors associated with noise artefacts were computed in order to define the concept of significant error. The instantaneous slope of the error was also computed as an indicator of the sensitivity of the error with fluctuations in ATT. Finally, a regression analysis was performed to obtain the mean error against ATT. Results: An error of 20.9% was found to be comparable to that introduced by typical measurement noise. The WP formula was shown to introduce errors exceeding 20.9% for ATTs beyond 1.25s even when dispersion effects were ignored. Using a Gaussian dispersion model, a mean error of 16% was introduced by using the WP, and a dispersion threshold of σ=0.6 was determined, beyond which the error was found to increase considerably with ATT. The mean error ranged from 44.5% to 73.5% when other physiologically plausible dispersion models were implemented, and the instantaneous slope varied from 35 to 75 as dispersion levels were varied. Conclusion: It has been shown that the WP quantification formula holds only within an ATT window of 0.5 to 1.25s, and that this window gets narrower as dispersion occurs. Provided that the dispersion levels fall below the threshold evaluated in this study, however, the WP can measure CBF with reasonable accuracy if dispersion is correctly modelled by the Gaussian model. However, substantial errors were observed with other common models for dispersion with dispersion levels similar to those that have been observed in literature.

Keywords: arterial spin labelling, dispersion, MRI, perfusion

Procedia PDF Downloads 368
5839 Assessment of the Impact of Traffic Safety Policy in Barcelona, 2010-2019

Authors: Lluís Bermúdez, Isabel Morillo

Abstract:

Road safety involves carrying out a determined and explicit policy to reduce accidents. In the city of Barcelona, through the Local Road Safety Plan 2013-2018, and in line with the framework established at the European and state levels, a series of preventive, corrective and technical measures were specified, with the priority objective of reducing the number of serious injuries and fatalities. In this work, based on the data from the accidents managed by the local police during the period 2010-2019, an analysis is carried out to verify whether, and to what extent, the measures established in the Plan have reduced the accident rate. The analysis focuses on the type of accident and the type of vehicles involved. Different count regression models have been fitted, from which it can be deduced that the number of serious and fatal victims of accidents in the city of Barcelona has fallen as the measures approved by the authorities have been implemented.
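
The abstract does not specify the count models used; a minimal example of this kind of count regression is a Poisson GLM for yearly victim counts with a post-plan indicator, as sketched below with statsmodels. The counts are invented, not the Barcelona police data, and a negative binomial family (sm.families.NegativeBinomial()) would be the usual check for overdispersion.

```python
# Poisson count regression of yearly serious/fatal victims on a trend and a post-plan dummy.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "year": np.arange(2010, 2020),
    "victims": [310, 295, 300, 285, 270, 260, 255, 240, 235, 230],   # toy counts
})
df["post_plan"] = (df["year"] >= 2013).astype(int)   # Local Road Safety Plan 2013-2018
df["trend"] = df["year"] - 2010

X = sm.add_constant(df[["trend", "post_plan"]])
poisson = sm.GLM(df["victims"], X, family=sm.families.Poisson()).fit()
print(poisson.summary())          # exp(coef) of post_plan is the post-plan rate ratio
```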

Keywords: accident reduction, count regression models, road safety, urban traffic

Procedia PDF Downloads 130
5838 Reading and Writing Memories in Artificial and Human Reasoning

Authors: Ian O'Loughlin

Abstract:

Memory networks aim to integrate some of the recent successes in machine learning with a dynamic memory base that can be updated and deployed in artificial reasoning tasks. These models involve training networks to identify, update, and operate over stored elements in a large memory array in order, for example, to ably perform question and answer tasks parsing real-world and simulated discourses. This family of approaches still faces numerous challenges: the performance of these network models in simulated domains remains considerably better than in open, real-world domains, wide-context cues remain elusive in parsing words and sentences, and even moderately complex sentence structures remain problematic. This innovation, employing an array of stored and updatable ‘memory’ elements over which the system operates as it parses text input and develops responses to questions, is a compelling one for at least two reasons: first, it addresses one of the difficulties that standard machine learning techniques face, by providing a way to store a large bank of facts, offering a way forward for the kinds of long-term reasoning that, for example, recurrent neural networks trained on a corpus have difficulty performing. Second, the addition of a stored long-term memory component in artificial reasoning seems psychologically plausible; human reasoning appears replete with invocations of long-term memory, and the stored but dynamic elements in the arrays of memory networks are deeply reminiscent of the way that human memory is readily and often characterized. However, this apparent psychological plausibility is belied by a recent turn in the study of human memory in cognitive science. In recent years, the very notion that there is a stored element which enables remembering, however dynamic or reconstructive it may be, has come under deep suspicion. In the wake of constructive memory studies, amnesia and impairment studies, and studies of implicit memory—as well as following considerations from the cognitive neuroscience of memory and conceptual analyses from the philosophy of mind and cognitive science—researchers are now rejecting storage and retrieval, even in principle, and instead seeking and developing models of human memory wherein plasticity and dynamics are the rule rather than the exception. In these models, storage is entirely avoided by modeling memory using a recurrent neural network designed to fit a preconceived energy function that attains zero values only for desired memory patterns, so that these patterns are the sole stable equilibrium points in the attractor network. So although the array of long-term memory elements in memory networks seem psychologically appropriate for reasoning systems, they may actually be incurring difficulties that are theoretically analogous to those that older, storage-based models of human memory have demonstrated. The kind of emergent stability found in the attractor network models more closely fits our best understanding of human long-term memory than do the memory network arrays, despite appearances to the contrary.

Keywords: artificial reasoning, human memory, machine learning, neural networks

Procedia PDF Downloads 271
5837 Bridges Seismic Isolation Using CNT Reinforced Polymer Bearings

Authors: Mohamed Attia, Vissarion Papadopoulos

Abstract:

There is no doubt that structures deteriorate continuously as a result of multiple hazards, which can be divided into natural hazards (e.g., earthquakes, floods, winds) and hazards due to human behavior (e.g., ship collisions, excessive traffic, terrorist attacks). There have been numerous attempts to address the catastrophic consequences of these hazards through traditional solutions such as structural design and safety factors within the design codes, but there has not been much research addressing solutions based on new high-performance materials that can be more effective than conventional materials such as reinforced concrete and steel. To illustrate the effect of one of these new high-performance materials, carbon nanotube-reinforced polymer (CNT/polymer) bearings with different weight fractions were simulated in ABAQUS as seismic isolation components in the connection between a bridge superstructure and the substructure. The results of the analyses showed a significant increase in the natural period of the bridge and a clear decrease in the bending moment at the base of the bridge piers at each time step of the time-history analysis when the CNT/polymer bearings were used, compared to the case of direct contact between the bridge superstructure and the substructure.

Keywords: seismic isolation, bridges damage, earthquake hazard, earthquake resistant structures

Procedia PDF Downloads 191
5836 UPPAAL-based Design and Analysis of Intelligent Parking System

Authors: Abobaker Mohammed Qasem Farhan, Olof M. A. Saif

Abstract:

The demand for parking spaces in urban areas, particularly in developing countries, combined with the absence of sufficient parking in crowded areas, results in daily traffic congestion as drivers search for parking. This not only affects the appearance of the city but also has indirect impacts on the economy, society, and the environment. In response to these challenges, researchers from various countries have sought technical and intelligent solutions to mitigate the problem through the development of smart parking systems. This paper aims to analyze and design three models of parking lots, with a focus on parking time and security. The study used the UPPAAL tool to simulate the models and determine the best among them. The results and suggestions provided in the paper aim to reduce the parking problems and to improve the overall efficiency and safety of the parking process. The conclusion of the study highlights the importance of utilizing advanced technology to address the pressing issue of insufficient parking spaces in urban areas.

Keywords: preliminaries, system requirements, timed automata, UPPAAL

Procedia PDF Downloads 145
5835 Convectory Policing: Reconciling Historic and Contemporary Models of Police Service Delivery

Authors: Mark Jackson

Abstract:

Description: This paper is based on a theoretical analysis of the efficacy of the dominant model of policing in western jurisdictions. Those results are then compared with a similar analysis of a traditional reactive model. It is found that neither model provides for optimal delivery of services; instead, optimal service can be achieved by a synchronous hybrid model, termed the Convectory Policing approach. Methodology and Findings: For over three decades, problem-oriented policing (PO) has been the dominant model for western police agencies. Initially based on the work of Goldstein during the 1970s, the problem-oriented framework has spawned endless variants and approaches, most of which embrace a problem-solving rather than a reactive approach to policing. These include the Area Policing Concept (APC) applied in many smaller jurisdictions in the USA, the Scaled Response Policing Model (SRPM) currently under trial in Western Australia, and the Proactive Pre-Response Approach (PPRA), which has also seen some success. All of these, in one way or another, are largely based on a model that eschews a traditional reactive model of policing. Convectory Policing (CP) is an alternative model which challenges the underpinning assumptions that have driven the proliferation of the PO approach over the last three decades, and it commences by questioning the economics on which PO is based. It is argued that, in essence, PO relies on an unstated, and often unrecognised, assumption that resources will be available to meet the demand for policing services while at the same time maintaining the capacity to deploy staff to develop solutions to the problems ultimately manifested in those same calls for service. The CP model relies on observations from numerous western jurisdictions to challenge the validity of that underpinning assumption, particularly in a fiscally tight environment. In deploying staff to pursue and develop solutions to underlying problems, there is clearly an opportunity cost: those same staff cannot be allocated to alternative duties while engaged in a problem-solving role. At the same time, resources in use responding to calls for service are unavailable, while committed to that role, to pursue solutions to the problems giving rise to those same calls for service. The two approaches, reactive and PO, are therefore dichotomous: one cannot be optimised while the other is being pursued. Convectory Policing is a pragmatic response to the schism between the competing traditional and contemporary models. If it is not possible to serve either model with any real rigour, it becomes necessary to tailor an approach to deliver specific outcomes against which success or otherwise might be measured. CP proposes that a structured, roster-driven approach to calls for service, combined with the application of what is termed a resource-effect response capacity, has the potential to resolve the inherent conflict between the traditional and contemporary models of policing and the expectations of the community in terms of community-policing-based problem-solving models.

Keywords: policing, reactive, proactive, models, efficacy

Procedia PDF Downloads 482