Search results for: median models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7037

6737 Plant Identification Using Convolutional Neural Network and Vision Transformer-Based Models

Authors: Virender Singh, Mathew Rees, Simon Hampton, Sivaram Annadurai

Abstract:

Plant identification is a challenging task that aims to identify the family, genus, and species according to plant morphological features. Automated deep learning-based computer vision algorithms are widely used for identifying plants and can help users narrow down the possibilities. However, numerous morphological similarities between and within species render correct classification difficult. In this paper, we tested custom convolutional neural network (CNN) and vision transformer (ViT) based models using the PyTorch framework to classify plants. We used a large dataset of 88,000 images provided by the Royal Horticultural Society (RHS) and a smaller dataset of 16,000 images from the PlantClef 2015 dataset for classifying plants at the genus and species levels, respectively. Our results show that for classifying plants at the genus level, ViT models perform better than the CNN-based models ResNet50 and ResNet-RS-420, as well as other state-of-the-art CNN-based models suggested in previous studies on a similar dataset. The ViT model achieved a top accuracy of 83.3% for classifying plants at the genus level. For classifying plants at the species level, ViT models again perform better than the CNN-based models ResNet50 and ResNet-RS-420, with a top accuracy of 92.5%. We show that the correct set of augmentation techniques plays an important role in classification success. In conclusion, these results could help end users, professionals, and the general public alike to identify plants more quickly and with improved accuracy.
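
As a rough sketch of the kind of pipeline the abstract describes (not the authors' code), the snippet below fine-tunes a pretrained ViT or ResNet50 from torchvision on a folder of plant images; the dataset path, augmentation choices, and training settings are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import datasets, transforms, models
from torch.utils.data import DataLoader

# Placeholder augmentation pipeline; the abstract stresses that the choice of
# augmentations matters, but the exact set used is not reproduced here.
train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(0.2, 0.2, 0.2),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Hypothetical directory layout: one sub-folder per genus or species.
train_ds = datasets.ImageFolder("plants/train", transform=train_tf)
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True)
num_classes = len(train_ds.classes)

def build_model(arch="vit"):
    # Swap between a ViT and a ResNet50 backbone, replacing the classifier head.
    if arch == "vit":
        model = models.vit_b_16(weights="IMAGENET1K_V1")
        model.heads.head = nn.Linear(model.heads.head.in_features, num_classes)
    else:
        model = models.resnet50(weights="IMAGENET1K_V1")
        model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

device = "cuda" if torch.cuda.is_available() else "cpu"
model = build_model("vit").to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in train_dl:          # one epoch shown for brevity
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```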

Keywords: plant identification, CNN, image processing, vision transformer, classification

Procedia PDF Downloads 78
6736 Sensitivity and Uncertainty Analysis of One Dimensional Shape Memory Alloy Constitutive Models

Authors: A. B. M. Rezaul Islam, Ernur Karadogan

Abstract:

Shape memory alloys (SMAs) are known for their shape memory effect and pseudoelastic behavior. Their thermomechanical behavior has been modeled by numerous researchers from both microscopic thermodynamic and macroscopic phenomenological points of view. The Tanaka, Liang-Rogers, and Ivshin-Pence models are among the most popular macroscopic phenomenological SMA constitutive models. They describe SMA behavior in terms of stress, strain, and temperature. These models involve material parameters that carry associated uncertainty. At different operating temperatures, this uncertainty propagates to the output when the material is subjected to loading followed by unloading. The propagation of uncertainty while utilizing these models in real-life applications can result in performance discrepancies or failure at extreme conditions. To address this, we used a probabilistic approach to perform the sensitivity and uncertainty analysis of the Tanaka, Liang-Rogers, and Ivshin-Pence models. The Sobol and extended Fourier Amplitude Sensitivity Testing (eFAST) methods were used to perform the sensitivity analysis for simulated isothermal loading/unloading at various operating temperatures. The results show that the model sensitivities vary with operating temperature and loading condition. The average and stress-dependent sensitivity indices identify the most significant parameters at several temperatures. This work presents the sensitivity and uncertainty analysis results and compares them across different temperatures and loading conditions for all three models. The analysis will aid in designing engineering applications by reducing the probability of model failure due to uncertainty in the input parameters. It is therefore recommended to have a proper understanding of the sensitive parameters and of uncertainty propagation at the relevant operating temperatures and loading conditions for the Tanaka, Liang-Rogers, and Ivshin-Pence models.
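
For readers unfamiliar with variance-based sensitivity analysis, a minimal Sobol workflow of the kind described can be assembled with the SALib package; the parameter names, bounds, and toy model below are placeholders rather than the Tanaka, Liang-Rogers, or Ivshin-Pence implementations.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Hypothetical SMA material parameters and bounds (illustrative only).
problem = {
    "num_vars": 3,
    "names": ["E_austenite", "transformation_stress", "temperature"],
    "bounds": [[30e9, 80e9], [100e6, 400e6], [290.0, 340.0]],
}

# Saltelli sampling; N base samples yield N*(2D+2) model evaluations.
param_values = saltelli.sample(problem, 1024)

def toy_constitutive_model(x):
    # Stand-in for a real SMA constitutive model: returns a scalar strain-like output.
    E, sigma_tr, T = x
    return sigma_tr / E + 1e-4 * (T - 300.0)

Y = np.array([toy_constitutive_model(x) for x in param_values])

Si = sobol.analyze(problem, Y)
print("First-order indices:", Si["S1"])
print("Total-order indices:", Si["ST"])
```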

Keywords: constitutive models, FAST sensitivity analysis, sensitivity analysis, sobol, shape memory alloy, uncertainty analysis

Procedia PDF Downloads 124
6735 Measuring Environmental Efficiency of Energy in OPEC Countries

Authors: Bahram Fathi, Seyedhossein Sajadifar, Naser Khiabani

Abstract:

Data envelopment analysis (DEA) has recently gained popularity in energy efficiency analysis. A common feature of the previously proposed DEA models for measuring energy efficiency performance is that they treat energy consumption as an input within a production framework without considering undesirable outputs. However, energy use results in the generation of undesirable outputs as byproducts of producing desirable outputs. Within a joint production framework of both desirable and undesirable outputs, this paper presents several DEA-type linear programming models for measuring energy efficiency performance. In addition to considering undesirable outputs, our models treat different energy sources as different inputs so that changes in energy mix could be accounted for in evaluating energy efficiency. The proposed models are applied to measure the energy efficiency performances of 12 OPEC countries and the results obtained are presented.
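
As an illustration of the kind of DEA-type linear program involved, the sketch below scores each decision-making unit with a plain input-oriented CCR model solved by SciPy; it omits the paper's joint treatment of undesirable outputs, and the input/output numbers are invented.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data: rows = DMUs (countries), columns = inputs / outputs.
X = np.array([[10.0, 4.0], [8.0, 6.0], [12.0, 3.0], [9.0, 5.0]])  # energy inputs (e.g. oil, gas)
Y = np.array([[100.0], [120.0], [90.0], [110.0]])                 # desirable output (e.g. GDP)

n, m = X.shape          # number of DMUs, number of inputs
s = Y.shape[1]          # number of outputs

def ccr_efficiency(k):
    # Decision variables: [theta, lambda_1 ... lambda_n]; minimize theta.
    c = np.zeros(n + 1)
    c[0] = 1.0
    A_ub, b_ub = [], []
    # Input constraints: sum_j lambda_j * x_ij <= theta * x_ik
    for i in range(m):
        A_ub.append(np.concatenate(([-X[k, i]], X[:, i])))
        b_ub.append(0.0)
    # Output constraints: sum_j lambda_j * y_rj >= y_rk
    for r in range(s):
        A_ub.append(np.concatenate(([0.0], -Y[:, r])))
        b_ub.append(-Y[k, r])
    bounds = [(0.0, None)] * (n + 1)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds, method="highs")
    return res.x[0]

for k in range(n):
    print(f"DMU {k}: efficiency = {ccr_efficiency(k):.3f}")
```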

Keywords: energy efficiency, undesirable outputs, data envelopment analysis

Procedia PDF Downloads 715
6734 Enhancing Model Interoperability and Reuse by Designing and Developing a Unified Metamodel Standard

Authors: Arash Gharibi

Abstract:

Mankind has always used models to solve problems. Essentially, models are simplified versions of reality, whose need stems from having to deal with complexity; many processes or phenomena are too complex to be described completely. Thus, a fundamental requirement is that a model contains the characteristic features that are essential in the context of the problem to be solved or described. Models are used in virtually every scientific domain to deal with various problems, and during recent decades their number has increased exponentially. Publication of models as part of original research has traditionally been in scientific periodicals, series, monographs, agency reports, national journals, and laboratory reports, which makes it difficult for interested groups and communities to stay informed about the state of the art. During the modeling process, many important decisions are made which impact the final form of the model. Without a record of these considerations, the final model remains ill-defined and open to varying interpretations. Unfortunately, the details of these considerations are often lost, and where information about a model does exist, it is likely to be written informally, in different layouts and at different levels of detail. To overcome these issues, different domains have attempted to implement their own approaches to preserving model information in the form of model documentation. The most frequently cited model documentation approaches are domain specific, are not applicable to existing models, and do not allow evolutionary flexibility or intrinsic corrections and improvements. These issues all stem from the lack of a unified standard for model documentation. As a way forward, this research proposes a new standard for capturing and managing model information in a unified way, so that interoperability and reusability of models become possible. The standard is also intended to be evolutionary, meaning that members of the modeling community can contribute to its ongoing development and improvement. In this paper, three of the most common existing metamodels are reviewed and, based on the pros and cons of each, a new metamodel is proposed.

Keywords: metamodel, modeling, interoperability, reuse

Procedia PDF Downloads 182
6733 Implied Adjusted Volatility by Leland Option Pricing Models: Evidence from Australian Index Options

Authors: Mimi Hafizah Abdullah, Hanani Farhah Harun, Nik Ruzni Nik Idris

Abstract:

Implied volatility is an important factor in financial decision-making, in particular in option pricing, and the pricing biases of Leland option pricing models are related to the implied volatility structure of the options. This study therefore examines the implied adjusted volatility smile patterns and term structures in S&P/ASX 200 index options using different Leland option pricing models. The examination of the implied adjusted volatility smiles and term structures in the Australian index options market covers the global financial crisis that began in mid-2007. The implied adjusted volatility was found to escalate to approximately triple its pre-crisis rate.
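
For context, Leland's (1985) adjustment inflates the Black-Scholes volatility to account for proportional transaction costs; the short sketch below applies it with made-up market parameters rather than the study's S&P/ASX 200 data.

```python
import numpy as np
from scipy.stats import norm

def leland_adjusted_vol(sigma, k, dt):
    """Leland (1985): sigma_hat^2 = sigma^2 * (1 + sqrt(2/pi) * k / (sigma * sqrt(dt)))."""
    return sigma * np.sqrt(1.0 + np.sqrt(2.0 / np.pi) * k / (sigma * np.sqrt(dt)))

def bs_call(S, K, T, r, sigma):
    # Standard Black-Scholes European call price.
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

# Illustrative inputs: spot, strike, maturity, rate, volatility,
# proportional transaction cost k, and hedging interval dt.
S, K, T, r, sigma = 100.0, 100.0, 0.5, 0.03, 0.20
k, dt = 0.005, 1.0 / 52.0          # 0.5% round-trip cost, weekly rebalancing

sigma_hat = leland_adjusted_vol(sigma, k, dt)
print(f"Adjusted volatility: {sigma_hat:.4f}")
print(f"Call (unadjusted): {bs_call(S, K, T, r, sigma):.4f}")
print(f"Call (Leland-adjusted): {bs_call(S, K, T, r, sigma_hat):.4f}")
```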

Keywords: implied adjusted volatility, financial crisis, Leland option pricing models, Australian index options

Procedia PDF Downloads 361
6732 Inadequate Intake of Energy and Nutrients: A Comparative Cross-Sectional Study Between Sport and Non-sport Science University Students of Southern Ethiopia

Authors: Beruk Berhanu Desalegn, Kebede Awgechew, Addisalem Mesfin

Abstract:

Introduction: This study aimed to investigate and compare the energy and selected nutrient intakes of sport science and non-sport science university students in Southern Ethiopia. Method: Multiple-day dietary data were collected from 166 university students (76 sport science and 90 non-sport science). Average daily energy and nutrient intakes, and the prevalence of inadequate intakes, were calculated using NutriSurvey (NS). Results: There were significant differences (p < 0.05) in the median intakes of energy, total carbohydrate, and vitamin B1 between female students from the sport science and non-sport science groups, whereas only the median intake of iron differed significantly (p < 0.05) between the male sport and non-sport science groups. The prevalence of inadequate vitamin B1 intake was significantly (p < 0.05) higher among male and female students from the non-sport science groups than among their sport science counterparts. In contrast, the prevalence of inadequate iron intake was significantly (p < 0.05) higher among male sport science students than among their counterparts. Similarly, the prevalence of inadequate energy intake among females from the sport science group was significantly (p < 0.05) higher than among female students from the non-sport science group. The prevalence of inadequate intakes of dietary energy and of most nutrients (protein, fat, vitamins A, B1, and B2, and magnesium) was high (>50%) in the selected university students. Conclusion: Energy and most nutrient intakes by students in the selected universities of Southern Ethiopia were sub-optimal. Therefore, activities to improve the dietary intake of university students should include weekly meal plan revision that takes into account their average recommended nutrient intake (RNI).

Keywords: dietary intake, sport science, University students, Ethiopia

Procedia PDF Downloads 68
6731 Evaluation of Environmental, Technical, and Economic Indicators of a Fused Deposition Modeling Process

Authors: M. Yosofi, S. Ezeddini, A. Ollivier, V. Lavaste, C. Mayousse

Abstract:

Additive manufacturing processes have changed significantly across a wide range of industries, and their application has progressed from rapid prototyping to the production of end-use products. However, their environmental impact is still a rather open question. In order to support the growth of this technology in the industrial sector, environmental aspects should be considered, and predictive models may help monitor and reduce the environmental footprint of the processes. This work presents predictive models based on a previously developed methodology for environmental impact evaluation, combined with a technical and economic assessment. Here, we apply the methodology to the Fused Deposition Modeling process. First, we present the predictive models for different types of machines. Then, we present a decision-making tool designed to identify the optimum manufacturing strategy with regard to technical, economic, and environmental criteria.

Keywords: additive manufacturing, decision-making, environmental impact, predictive models

Procedia PDF Downloads 110
6730 Understanding Chromosome Movement in Starfish Oocytes

Authors: Bryony Davies

Abstract:

Many cell and tissue culture practices ignore the effects of gravity on cell biology, and little is known about how cell components may move in response to gravitational forces. Starfish oocytes provide an excellent model for interrogating the movement of cell components due to their unusually large size, ease of handling, and high transparency. Chromosomes in starfish oocytes can be visualised by microinjection of a histone-H2B-mCherry plasmid into the oocytes, and their movement can then be tracked by live-cell fluorescence microscopy. The results from experiments using these methods suggest that there is a replicable downward movement of centrally located chromosomes at a median velocity of 0.39 μm/min. Chromosomes nearer the nuclear boundary showed more restricted movement. Chromosome density and shape could also be altered by microinjection of restriction enzymes, primarily AluI, before imaging. This was found to alter the speed of chromosome movement, with chromosomes from AluI-injected nuclei showing a median downward velocity of 0.60 μm/min. Overall, these results suggest that there is a non-negligible movement of chromosomes in response to gravitational forces and that this movement can be altered by enzyme activity. Future work could examine whether this observed downward movement extends to other cell components and other cell types. Additionally, it may be important to understand whether gravitational orientation and vertical positioning of cell components alter cell behaviour. The findings here may have implications for current cell culture practices, which do not replicate the cell orientations or external forces experienced in vivo. It is possible that a failure to account for gravitational forces in 2D cell culture alters experimental results and the accuracy of conclusions drawn from them. Understanding possible behavioural changes in cells due to the effects of gravity would therefore be beneficial.

Keywords: starfish, oocytes, live-cell imaging, microinjection, chromosome dynamics

Procedia PDF Downloads 89
6729 Leveraging Unannotated Data to Improve Question Answering for French Contract Analysis

Authors: Touila Ahmed, Elie Louis, Hamza Gharbi

Abstract:

State-of-the-art question answering models have recently shown impressive performance, especially in a zero-shot setting. This approach is particularly useful when confronted with a highly diverse domain such as the legal field, in which it is increasingly difficult to have a dataset covering every notion and concept. In this work, we propose a flexible generative question answering approach to contract analysis, as well as a weakly supervised procedure to leverage unannotated data and boost our models' performance in general, and their zero-shot performance in particular.

Keywords: question answering, contract analysis, zero-shot, natural language processing, generative models, self-supervision

Procedia PDF Downloads 164
6728 Dow Polyols near Infrared Chemometric Model Reduction Based on Clustering: Reducing Thirty Global Hydroxyl Number (OH) Models to Less Than Five

Authors: Wendy Flory, Kazi Czarnecki, Matthijs Mercy, Mark Joswiak, Mary Beth Seasholtz

Abstract:

Polyurethane materials are present in a wide range of industrial segments such as furniture, building and construction, composites, automotive, electronics, and more. Dow is one of the leading manufacturers of the two main raw materials, isocyanates and polyols, used to produce polyurethane products, and is also a key player in the manufacture of polyurethane systems/formulations designed for targeted applications. In 1990, the first analytical chemometric models were developed and deployed in the Dow QC labs of the polyols business for the quantification of OH, water, cloud point, and viscosity. Over the years many models have been added; there are now over 140 models for quantification and hundreds for product identification, too many to support reasonably. There are 29 global models for the quantification of OH alone, covering more than 70 products at many sites. An attempt was made to consolidate these into a single model. While the consolidated model showed good statistics across the entire OH range, several products exhibited a bias under ASTM E1655 individual product validation. This project summary presents the strategy for global model updates for OH, reducing the number of quantification models from over 140 to five or fewer using chemometric methods. To gain an understanding of the best product groupings, clusters are identified by reducing the spectra to a few dimensions via Principal Component Analysis (PCA) and Uniform Manifold Approximation and Projection (UMAP). Results from these cluster analyses and a separate validation set allowed Dow to reduce the number of models for predicting OH from 29 to 3 without loss of accuracy.
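
A minimal sketch of the dimensionality-reduction and clustering step described above, using scikit-learn and the umap-learn package on a random stand-in for the NIR spectra:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
import umap  # umap-learn package

# Hypothetical NIR data: 500 spectra x 700 wavelength channels (random stand-in).
rng = np.random.default_rng(0)
spectra = rng.normal(size=(500, 700))

X = StandardScaler().fit_transform(spectra)

# Reduce to a few dimensions with PCA, plus a 2-D UMAP embedding for visual inspection.
pca_scores = PCA(n_components=10).fit_transform(X)
embedding = umap.UMAP(n_components=2, random_state=42).fit_transform(X)
print(embedding.shape)

# Cluster the reduced representation; cluster labels suggest which products can share a model.
labels = KMeans(n_clusters=3, n_init=10, random_state=42).fit_predict(pca_scores)
print(labels[:20])
```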

Keywords: hydroxyl, global model, model maintenance, near infrared, polyol

Procedia PDF Downloads 119
6727 Benchmarking Machine Learning Approaches for Forecasting Hotel Revenue

Authors: Rachel Y. Zhang, Christopher K. Anderson

Abstract:

A critical aspect of revenue management is a firm's ability to predict demand as a function of price. Historically, hotels have used simple time series models (regression and/or pick-up based models) owing to the complexities of trying to build causal models of demand. Machine learning approaches are slowly attracting attention owing to their flexibility in modeling relationships. This study provides an overview of approaches to forecasting hospitality demand, focusing on the opportunities created by machine learning approaches, including k-nearest neighbors, support vector machine, regression tree, and artificial neural network algorithms. The out-of-sample performance of the above approaches to forecasting hotel demand is illustrated using a proprietary sample of market-level (24 properties) transactional data for Las Vegas, NV. Causal predictive models can be built and evaluated owing to the availability of market-level (versus firm-level) data. This research also compares and contrasts the accuracy of firm-level models (i.e., predictive models for hotel A using only hotel A's data) with models using market-level data (prices, review scores, location, chain scale, etc., for all hotels within the market). The proposed models will be valuable for hotel revenue prediction given the basic characteristics of a hotel property and can be applied to performance evaluation for an existing hotel. The findings will reveal the features that play key roles in a hotel's revenue performance, which has considerable potential usefulness in both revenue prediction and evaluation.
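
A schematic version of such a benchmark using scikit-learn is shown below; the features and synthetic demand series are placeholders for the proprietary Las Vegas transactional data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error

# Hypothetical market-level features: price, review score, chain scale, lead time.
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 4))
y = 100 + 30 * X[:, 0] - 10 * X[:, 1] + rng.normal(scale=5, size=2000)  # stand-in demand

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)

models = {
    "kNN": make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=10)),
    "SVR": make_pipeline(StandardScaler(), SVR(C=10.0)),
    "Regression tree": DecisionTreeRegressor(max_depth=6, random_state=1),
    "ANN": make_pipeline(StandardScaler(), MLPRegressor(hidden_layer_sizes=(32, 16),
                                                        max_iter=2000, random_state=1)),
}

# Out-of-sample comparison, as in the abstract.
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: MAE = {mean_absolute_error(y_te, model.predict(X_te)):.2f}")
```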

Keywords: hotel revenue, k-nearest-neighbors, machine learning, neural network, prediction model, regression tree, support vector machine

Procedia PDF Downloads 117
6726 Text Similarity in Vector Space Models: A Comparative Study

Authors: Omid Shahmirzadi, Adam Lugowski, Kenneth Younge

Abstract:

Automatic measurement of semantic text similarity is an important task in natural language processing. In this paper, we evaluate the performance of different vector space models on this task. We address the real-world problem of modeling patent-to-patent similarity and compare TFIDF (and related extensions), topic models (e.g., latent semantic indexing), and neural models (e.g., paragraph vectors). Contrary to expectations, the added computational cost of text embedding methods is justified only when: 1) the target text is condensed; and 2) the similarity comparison is trivial. TFIDF performs surprisingly well otherwise, in particular for longer and more technical texts or for making finer-grained distinctions between nearest neighbors. Unexpectedly, extensions to the TFIDF method, such as adding noun phrases or calculating term weights incrementally, were not helpful in our context.
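
The TFIDF baseline that the study finds hard to beat on long technical texts can be reproduced in a few lines with scikit-learn; the toy patent snippets are for illustration only.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy stand-ins for patent texts.
docs = [
    "A lithium-ion battery electrode comprising a silicon anode.",
    "An electrode for lithium batteries using silicon-based anodes.",
    "A method for brewing coffee with a pressurized portafilter.",
]

vectorizer = TfidfVectorizer(stop_words="english", ngram_range=(1, 2))
tfidf = vectorizer.fit_transform(docs)

# Pairwise cosine similarity in the TFIDF vector space.
sim = cosine_similarity(tfidf)
print(sim.round(2))   # the two battery texts should score far higher with each other
```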

Keywords: big data, patent, text embedding, text similarity, vector space model

Procedia PDF Downloads 157
6725 Geographic Information System for District Level Energy Performance Simulations

Authors: Avichal Malhotra, Jerome Frisch, Christoph van Treeck

Abstract:

The utilization of semantic, cadastral, and topological data from geographic information systems (GIS) has increased exponentially for building- and urban-scale energy performance simulations. Urban planners, simulation scientists, and researchers use virtual 3D city models for energy analysis, algorithms, and simulation tools. For dynamic energy simulations at the city and district level, this paper provides an overview of the available GIS data models and their levels of detail. Adhering to different norms and standards, these models also aim to describe building and construction industry data. For the further investigations, CityGML data models are considered for simulations. Although geographical information modelling has many different implementations, virtual city data can also be extended for domain-specific applications. Highlighting the use of extended CityGML models for energy research, a brief introduction to the Energy Application Domain Extension (ADE) and its significance is given. Finally, addressing specific simulation input data, the paper presents a Modelica-based workflow that underlines the usage of GIS information and quantifies its significance for annual heating energy demand.

Keywords: CityGML, EnergyADE, energy performance simulation, GIS

Procedia PDF Downloads 153
6724 Place of Radiotherapy in the Treatment of Intracranial Meningiomas: Experience of the Cancer Center Emir Abdelkader of Oran Algeria

Authors: Taleb L., Benarbia M., Boutira F. M., Allam H., Boukerche A.

Abstract:

Introduction and purpose of the study: Meningiomas are the most common non-glial intracranial tumors in adults, accounting for approximately 30% of all central nervous system tumors. The aim of our study is to determine the epidemiological, clinical, therapeutic, and evolutionary characteristics of a cohort of patients with intracranial meningioma treated with radiotherapy at the Emir Abdelkader Cancer Center in Oran. Material and methods: This is a retrospective study of 44 patients treated between 2014 and 2020. Overall survival and relapse-free survival curves were calculated using the Kaplan-Meier method. Results and statistical analysis: The median age of the patients was 49 years [21-76 years], with a clear female predominance (sex ratio = 2.4). The average diagnostic delay was seven months [2 to 24 months], and the presenting features were dominated by headaches in 54.5% of cases (n=24), visual disturbances in 40.9% (n=18), and motor disorders in 15.9% (n=7). The tumor was located mainly at the skull base in 52.3% of patients (n=23), including 29.5% (n=13) in the cavernous sinus; 27.3% (n=12) were parasagittal and 20.5% (n=9) at the convexity. The diagnosis was confirmed surgically in 36 patients (81.8%), with histopathology showing grades I, II, and III in 40.9%, 29.5%, and 11.4% of cases, respectively. Radiotherapy was indicated postoperatively in 45.5% of patients (n=20), as exclusive treatment in 27.3% (n=12), and after tumor recurrence in 27.3% of cases (n=18). The irradiation doses delivered were 50 Gy (20.5%), 54 Gy (65.9%), and 60 Gy (13.6%). With a median follow-up of 69 months, the probabilities of relapse-free survival and overall survival were 93.2% and 95.4% at three years, and 71.2% and 80.7% at five years, respectively. Conclusion: Meningiomas are common primary brain tumors. They are most often benign but can also progress aggressively. Their treatment is essentially surgical, but radiotherapy retains its place in specific situations, allowing good tumor control and overall survival.
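
A minimal example of the Kaplan-Meier estimation mentioned in the methods, using the lifelines package on fabricated follow-up times rather than the Oran cohort data:

```python
import pandas as pd
from lifelines import KaplanMeierFitter

# Hypothetical follow-up data: months of follow-up and whether relapse occurred (1) or was censored (0).
df = pd.DataFrame({
    "months":  [12, 24, 36, 40, 48, 60, 65, 69, 72, 80],
    "relapse": [0,  0,  1,  0,  0,  1,  0,  0,  1,  0],
})

kmf = KaplanMeierFitter()
kmf.fit(durations=df["months"], event_observed=df["relapse"], label="relapse-free survival")

# Estimated survival probability at 36 and 60 months (3 and 5 years).
print(kmf.predict([36, 60]))
kmf.plot_survival_function()   # requires matplotlib
```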

Keywords: diagnosis, meningioma, surgery, radiotherapy, survival

Procedia PDF Downloads 86
6723 Talent-to-Vec: Using Network Graphs to Validate Models with Data Sparsity

Authors: Shaan Khosla, Jon Krohn

Abstract:

In a recruiting context, machine learning models are valuable for recommendations: to predict the best candidates for a vacancy, to match the best vacancies for a candidate, and to compile a set of similar candidates for any given candidate. While such models are useful to build, validating their accuracy in a recommendation context is difficult due to a sparsity of data. In this report, we use network graph data to generate useful representations for candidates and vacancies. We use candidates and vacancies as network nodes and designate a bi-directional link between them when the candidate has interviewed for the vacancy. After applying node2vec, the embeddings are used to construct a validation dataset with a ranked order, which will help validate new recommender systems.
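
A simplified sketch of the embedding step is shown below, using plain uniform random walks fed to gensim's Word2Vec as a stand-in for node2vec (which additionally biases the walks with return and in-out parameters); the candidate/vacancy interview graph is fabricated.

```python
import random
import networkx as nx
from gensim.models import Word2Vec

# Hypothetical bipartite interview graph: candidates <-> vacancies.
G = nx.Graph()
G.add_edges_from([
    ("cand_1", "vac_A"), ("cand_1", "vac_B"),
    ("cand_2", "vac_A"), ("cand_2", "vac_C"), ("cand_3", "vac_C"),
])

def random_walk(graph, start, length=10):
    # Uniform random walk; node2vec would bias the next-step choice instead.
    walk = [start]
    for _ in range(length - 1):
        neighbors = list(graph.neighbors(walk[-1]))
        if not neighbors:
            break
        walk.append(random.choice(neighbors))
    return walk

# Generate walks from every node and treat them as "sentences".
walks = [random_walk(G, node) for node in G.nodes() for _ in range(50)]

model = Word2Vec(sentences=walks, vector_size=64, window=5, min_count=1, sg=1, epochs=5)

# Similar candidates / vacancies can now be ranked by embedding similarity.
print(model.wv.most_similar("cand_1", topn=3))
```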

Keywords: AI, machine learning, NLP, recruiting

Procedia PDF Downloads 74
6722 Bridging the Gap between Different Interfaces for Business Process Modeling

Authors: Katalina Grigorova, Kaloyan Mironov

Abstract:

The paper focuses on the benefits of business process modeling. Although this discipline has been developing for many years, there is still a need to create new opportunities to meet ever-increasing user needs. Because one of these needs is related to the conversion of business process models from one standard to another, the authors have developed a converter between the BPMN and EPC standards using workflow patterns as an intermediate tool. Nowadays there are many systems for business process modeling, and the variety of output formats is almost as large as the number of systems themselves. This diversity additionally hampers the conversion of the models. The presented study is aimed at discussing the problems caused by differences in the output formats of various modeling environments.

Keywords: business process modeling, business process modeling standards, workflow patterns, converting models

Procedia PDF Downloads 566
6721 Hybrid Project Management Model Based on Lean and Agile Approach

Authors: Fatima-Zahra Eddoug, Jamal Benhra, Rajaa Benabbou

Abstract:

Several project management models exist in the literature, and the most widely used are hybrid models, owing to their multiple advantages. Our objective in this paper is to analyze the existing models based on the Lean and Agile approaches and to propose a novel framework with the appropriate tools to allow efficient management of a general project. To create the desired framework, we relied essentially on seven existing models. Among the agile tools, only Scrum was identified by several authors as appropriate for project management. In contrast, multiple lean tools were proposed for different phases of the project.

Keywords: agility, hybrid project management, lean, scrum

Procedia PDF Downloads 118
6720 Multiple Linear Regression for Rapid Estimation of Subsurface Resistivity from Apparent Resistivity Measurements

Authors: Sabiu Bala Muhammad, Rosli Saad

Abstract:

Multiple linear regression (MLR) models for fast estimation of true subsurface resistivity from apparent resistivity field measurements are developed and assessed in this study. The parameters investigated were apparent resistivity (ρₐ), horizontal location (X), and depth (Z) of measurement as the independent variables, and true resistivity (ρₜ) as the dependent variable. To achieve linearity in both resistivity variables, the datasets were first transformed into the logarithmic domain, following diagnostic checks for normality of the dependent variable and for heteroscedasticity, to ensure accurate models. Four MLR models were developed based on hierarchical combinations of the independent variables. The generated MLR coefficients were applied to another data set to estimate ρₜ values for validation. Contours of the estimated ρₜ values were plotted and compared with plots of the observed data, using the same colour scale and blanking, for visual assessment. The accuracy of the models was assessed using the coefficient of determination (R²), standard error (SE), and weighted mean absolute percentage error (wMAPE). It is concluded that the MLR models can estimate ρₜ with a high level of accuracy.
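
A condensed version of the log-domain MLR workflow described above, with synthetic resistivity data standing in for the field measurements:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

# Hypothetical dataset: apparent resistivity, horizontal location, depth, true resistivity.
rng = np.random.default_rng(3)
n = 300
df = pd.DataFrame({
    "rho_a": rng.uniform(10, 500, n),
    "X": rng.uniform(0, 200, n),
    "Z": rng.uniform(1, 40, n),
})
df["rho_t"] = df["rho_a"] * np.exp(0.01 * df["Z"]) * rng.lognormal(0, 0.1, n)  # stand-in relation

# Log-transform the resistivity variables to linearize the relationship.
features = pd.DataFrame({"log_rho_a": np.log10(df["rho_a"]), "X": df["X"], "Z": df["Z"]})
target = np.log10(df["rho_t"])

mlr = LinearRegression().fit(features, target)
pred = mlr.predict(features)
print("R^2 =", round(r2_score(target, pred), 3))
print("Coefficients:", dict(zip(features.columns, mlr.coef_.round(4))))

# Back-transform to estimate true resistivity in ohm-m.
rho_t_est = 10 ** pred
```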

Keywords: apparent resistivity, depth, horizontal location, multiple linear regression, true resistivity

Procedia PDF Downloads 258
6719 Evaluation of Newly Synthesized Steroid Derivatives Using In silico Molecular Descriptors and Chemometric Techniques

Authors: Milica Ž. Karadžić, Lidija R. Jevrić, Sanja Podunavac-Kuzmanović, Strahinja Z. Kovačević, Anamarija I. Mandić, Katarina Penov-Gaši, Andrea R. Nikolić, Aleksandar M. Oklješa

Abstract:

This study considered the selection of in silico molecular descriptors and models for the description of newly synthesized steroid derivatives and their characterization using chemometric techniques. Multiple linear regression (MLR) models were established and identified the best molecular descriptors for quantitative structure-retention relationship (QSRR) modeling of the retention of the investigated molecules. The MLR models showed no multicollinearity among the selected molecular descriptors according to the variance inflation factor (VIF) values. The selected molecular descriptors were ranked using the generalized pair correlation method (GPCM), which can reveal significant differences between independent variables even when their correlations with the dependent variable are almost equal. The generated MLR models were statistically validated and cross-validated, and the best models were kept. The models were then ranked using the sum of ranking differences (SRD) method, which identifies the most consistent QSRR model and reveals similarities or dissimilarities between models. In this study, SRD was performed using the average values of the experimentally observed data as the gold standard. The chemometric analysis was conducted in order to characterize newly synthesized steroid derivatives for further investigation of their potential biological activity and further synthesis. This article is based upon work from COST Action CM1105, supported by COST (European Cooperation in Science and Technology).
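
The multicollinearity check mentioned above can be carried out with statsmodels; the descriptor table here is a random placeholder for the computed in silico descriptors.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Hypothetical molecular descriptors for a set of steroid derivatives.
rng = np.random.default_rng(7)
X = pd.DataFrame(rng.normal(size=(40, 4)), columns=["logP", "TPSA", "MW", "nRotB"])

Xc = sm.add_constant(X)
vif = pd.Series(
    [variance_inflation_factor(Xc.values, i) for i in range(1, Xc.shape[1])],
    index=X.columns,
)
print(vif.round(2))   # values well below ~10 suggest no serious multicollinearity
```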

Keywords: generalized pair correlation method, molecular descriptors, regression analysis, steroids, sum of ranking differences

Procedia PDF Downloads 334
6718 Estimating Lost Digital Video Frames Using Unidirectional and Bidirectional Estimation Based on Autoregressive Time Model

Authors: Navid Daryasafar, Nima Farshidfar

Abstract:

In this article, we attempt to conceal errors in video, with an emphasis on the temporal use of autoregressive (AR) models. We assume that all information in one or more video frames is lost; the lost frames are then estimated using the temporal information of corresponding pixels in successive frames. After presenting autoregressive models and how they are applied to estimate lost frames, two general methods for using these models are presented. The first, the standard autoregressive approach, estimates the lost frame in a unidirectional manner; in this case, information from previous frames is used to estimate the lost frame. In the second method, information from both the previous and the following frames is used to estimate the lost frame, and this method is therefore known as bidirectional estimation. A series of tests then assesses the performance of each method in different modes, and the results are compared.
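
A toy per-pixel version of the two estimation schemes, using statsmodels' AutoReg on a synthetic intensity series; the bidirectional variant below simply averages forward and backward AR forecasts, which is one plausible reading of the abstract rather than the authors' exact scheme.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

# Synthetic intensity trace of one pixel across 60 frames; frame 30 is "lost".
rng = np.random.default_rng(5)
t = np.arange(60)
pixel = 128 + 20 * np.sin(2 * np.pi * t / 25) + rng.normal(0, 2, 60)
lost = 30

# Unidirectional: fit an AR model on past frames only and forecast one step ahead.
past = pixel[:lost]
fwd_pred = AutoReg(past, lags=4).fit().forecast(steps=1)[0]

# Bidirectional (one possible scheme): also fit on the reversed future frames,
# "forecast" backwards in time, and average the two estimates.
future_reversed = pixel[lost + 1:][::-1]
bwd_pred = AutoReg(future_reversed, lags=4).fit().forecast(steps=1)[0]
bi_pred = 0.5 * (fwd_pred + bwd_pred)

print(f"true={pixel[lost]:.1f}  unidirectional={fwd_pred:.1f}  bidirectional={bi_pred:.1f}")
```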

Keywords: error steganography, unidirectional estimation, bidirectional estimation, AR linear estimation

Procedia PDF Downloads 516
6717 In vitro Control of Aedes aegypti Larvae Using Beauveria bassiana

Authors: R. O. B. Bitencourt, F. S. Farias, M. C. Freitas, C. J. R. Balduino, E.S. Mesquita, A. R. C. Corval, P. S. Gôlo, E. G. Pontes, V. R. E. P. Bittencourt, I. C. Angelo

Abstract:

Aedes aegypti larval survival was assessed after exposure to Beauveria bassiana CG 479 propagules (blastospores or conidia) delivered either as a mineral oil-in-water formulation or as an aqueous suspension. Here, mineral oil was used in the fungal formulation to control Aedes aegypti larvae. Mineral oil-in-water solutions of 1%, 0.5%, or 0.1% were used to evaluate the toxicity of mineral oil to mosquito larvae. In the oil toxicity test, the 0.1% mineral oil solution reduced larval survival by only 4.5%; accordingly, this concentration was chosen for the fungal oil-in-water formulations. Aqueous suspensions were prepared using 0.01% Tween 80® in sterile dechlorinated water. A. aegypti larvae (L2) were exposed to aqueous suspensions or mineral oil-in-water fungal formulations at 1×10⁷ propagules mL⁻¹; the survival rate (assessed daily for 7 days) and the median survival time (S50) were calculated. Seven days after treatment, mosquito larval survival rates were 8.56%, 16.22%, 58%, and 42.56% after exposure to oil-in-water blastospores, oil-in-water conidia, the blastospore aqueous suspension, and the conidia aqueous suspension, respectively. Larvae exposed to 0.01% Tween 80® had a 100% survival rate, and those treated with 0.1% mineral oil-in-water had a 95.11% survival rate. Larvae treated with conidia (regardless of the presence of oil) or with the blastospore formulation had median survival times (S50) ranging from one to two days. S50 could not be determined (ND) when larvae were exposed to the blastospore aqueous suspension, 0.01% Tween 80® (aqueous control), or the 0.1% mineral oil-in-water formulation (oil control). B. bassiana conidia and blastospores (formulated in mineral oil-in-water or suspended in water) had the potential to control A. aegypti mosquito larvae, although the mineral oil-in-water formulation yielded better results than the aqueous suspensions. Here, the B. bassiana CG 479 isolate is suggested as a potential biocontrol agent against A. aegypti mosquito larvae.

Keywords: blastospores, formulation, mosquitoes, conidia

Procedia PDF Downloads 173
6716 Validating Condition-Based Maintenance Algorithms through Simulation

Authors: Marcel Chevalier, Léo Dupont, Sylvain Marié, Frédérique Roffet, Elena Stolyarova, William Templier, Costin Vasile

Abstract:

Industrial end-users are currently facing an increasing need to reduce the risk of unexpected failures and optimize their maintenance. This calls for both short-term analysis and long-term ageing anticipation. At Schneider Electric, we tackle those two issues using both machine learning and first-principles models. Machine learning models are incrementally trained from normal data to predict expected values and detect statistically significant short-term deviations. Ageing models are constructed by breaking down physical systems into sub-assemblies, then determining relevant degradation modes and associating each one with the right kinetic law. Validating such anomaly detection and maintenance models is challenging, both because actual incident and ageing data are rare and distorted by human interventions, and because incremental learning depends on human feedback. To overcome these difficulties, we propose to simulate physics, systems, and humans, including asset maintenance operations, in order to validate the overall approaches in accelerated time and possibly choose between algorithmic alternatives.

Keywords: degradation models, ageing, anomaly detection, soft sensor, incremental learning

Procedia PDF Downloads 113
6715 Learning Predictive Models for Efficient Energy Management of Exhibition Hall

Authors: Jeongmin Kim, Eunju Lee, Kwang Ryel Ryu

Abstract:

This paper addresses the problem of predictive control for energy management of large-scale exhibition halls, where a lot of energy is consumed to maintain the internal atmosphere under certain required conditions. Predictive control achieves better energy efficiency by optimizing the operation of air-conditioning facilities with not only the current but also some future status taken into account. In this paper, we propose to use predictive models learned from past sensor data of the hall environment to optimize the operating plan for the air-conditioning facilities by simulating future environmental change. We have implemented an emulator of an exhibition hall using EnergyPlus, a widely used building energy simulation tool, to collect data for learning environment-change models. Experimental results show that the learned models predict future change highly accurately on a short-term basis.

Keywords: predictive control, energy management, machine learning, optimization

Procedia PDF Downloads 255
6714 Empirical Roughness Progression Models of Heavy Duty Rural Pavements

Authors: Nahla H. Alaswadko, Rayya A. Hassan, Bayar N. Mohammed

Abstract:

Empirical deterministic models have been developed to predict the roughness progression of heavy duty spray sealed pavements for a dataset representing rural arterial roads. The dataset provides a good representation of the relevant network and covers a wide range of operating and environmental conditions. A large sample of historical time series data for many pavement sections was collected and prepared for use in multilevel regression analysis. The modelling parameters include road roughness as the performance parameter, with traffic loading, time, initial pavement strength, reactivity level of the subgrade soil, climate condition, and condition of the drainage system as predictor parameters. The purpose of this paper is to report the approaches adopted for model development and validation. The study presents multilevel models that account for the correlation among time series data from the same section and capture the effect of unobserved variables. Study results show that the models fit the data very well. The contribution and significance of the relevant influencing factors in predicting roughness progression are presented and explained. The paper concludes that the analysis approach used for developing the models confirmed their accuracy and reliability, with the models fitting the validation data well.
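
A minimal multilevel (mixed-effects) specification of the kind described can be fitted with statsmodels; the panel below uses fabricated section-level data and placeholder predictor names.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: repeated roughness observations for many pavement sections.
rng = np.random.default_rng(11)
sections = np.repeat(np.arange(50), 10)            # 50 sections, 10 yearly surveys each
age = np.tile(np.arange(10), 50)
traffic = rng.uniform(0.5, 3.0, size=500)          # e.g. million ESAs per year
section_effect = rng.normal(0, 0.3, 50)[sections]  # unobserved section-level variation
iri = 1.5 + 0.08 * age + 0.2 * traffic + section_effect + rng.normal(0, 0.1, 500)

df = pd.DataFrame({"iri": iri, "age": age, "traffic": traffic, "section": sections})

# A random intercept per section accounts for correlation within each section's time series.
model = smf.mixedlm("iri ~ age + traffic", data=df, groups=df["section"])
result = model.fit()
print(result.summary())
```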

Keywords: roughness progression, empirical model, pavement performance, heavy duty pavement

Procedia PDF Downloads 158
6713 A Systematic Review Examining the Experimental Methodology behind in vivo Testing of Hiatus Hernia and Diaphragmatic Hernia Mesh

Authors: Whitehead-Clarke T., Beynon V., Banks J., Karanjia R., Mudera V., Windsor A., Kureshi A.

Abstract:

Introduction: Mesh implants are regularly used to help repair both hiatus hernias (HH) and diaphragmatic hernias (DH). In vivo studies are used to test not only mesh safety but, increasingly, comparative efficacy. Our work examines the field of in vivo mesh testing for HH and DH models to establish current practices and standards. Method: This systematic review was registered with PROSPERO. The Medline and Embase databases were searched for relevant in vivo studies. Forty-four articles were identified and underwent abstract review, at which stage 22 were excluded. Four further studies were excluded after full text review, leaving 18 to undergo data extraction. Results: Of the 18 studies identified, 9 used an in vivo HH model and 9 a DH model. Five studies undertook mechanical testing on tissue samples, all uniaxial in nature. Testing strip widths ranged from 1-20 mm (median 3 mm). Testing speeds varied from 1.5-60 mm/minute. On histology, the most commonly assessed structural and cellular factors were neovascularization and macrophages, respectively (n=9 each). Structural analysis was mostly qualitative, whereas cellular analysis was equally likely to be quantitative. Eleven studies assessed adhesion formation, of which eight used one of four scoring systems. Eight studies measured mesh shrinkage. Discussion: In vivo studies assessing mesh for HH and DH repair are uncommon. Within this relatively young field, we encourage surgical and materials testing institutions to discuss its standardisation.

Keywords: hiatus, diaphragmatic, hernia, mesh, materials testing, in vivo

Procedia PDF Downloads 200
6712 Compliance of Systematic Reviews in Ophthalmology with the PRISMA Statement

Authors: Seon-Young Lee, Harkiran Sagoo, Reem Farwana, Katharine Whitehurst, Alex Fowler, Riaz Agha

Abstract:

Background/Aims: Systematic reviews and meta-analyses are becoming an increasingly important way of summarizing research evidence. Research in ophthalmology may present further challenges due to the potential complexity of study designs. The aim of our study was to determine the reporting quality of systematic reviews and meta-analyses in ophthalmology against the PRISMA statement, by assessing articles published between 2010 and 2015 in five major journals with the highest impact factors. Methods: MEDLINE and EMBASE were used to search for systematic reviews published between January 2010 and December 2015 in five major ophthalmology journals: Progress in Retinal and Eye Research, Ophthalmology, Archives of Ophthalmology, American Journal of Ophthalmology, and Journal of the American Optometric Association. Screening, identification, and scoring of articles were performed independently by two teams, followed by statistical analysis including the median, range, and 95% CIs. Results: A total of 115 articles were included. The median PRISMA score was 15 of 27 items (56%), with a range of 5-26 (19-96%) and 95% CI 13.9-16.1 (51-60%). Compliance was highest for items related to the description of rationale (item 3, 100%) and inclusion of a structured summary in the abstract (item 2, 90%), and lowest for indication of a review protocol and registration (item 5, 9%), specification of the risk of bias affecting the cumulative evidence (item 15, 24%), and description of clear objectives in the introduction (item 4, 26%). Conclusion: The reporting quality of systematic reviews and meta-analyses in ophthalmology needs significant improvement. While the use of the PRISMA criteria as a guideline before journal submission is recommended, additional research identifying potential barriers may be required to improve compliance with the PRISMA guidelines.

Keywords: systematic reviews, meta-analysis, research methodology, reporting quality, PRISMA, ophthalmology

Procedia PDF Downloads 250
6711 Wind Power Forecast Error Simulation Model

Authors: Josip Vasilj, Petar Sarajcev, Damir Jakus

Abstract:

One of the major difficulties introduced by wind power penetration is the inherent uncertainty in production originating from uncertain wind conditions. This uncertainty impacts many different aspects of power system operation, especially the balancing power requirements. For this reason, in power system development planning, it is necessary to evaluate the potential uncertainty in future wind power generation. For this purpose, simulation models are required that reproduce the performance of wind power forecasts. This paper presents wind power forecast error simulation models based on stochastic process simulation. The proposed models capture the most important statistical parameters recognized in wind power forecast error time series. Furthermore, two distinct models are presented based on data availability: the first model uses wind speed measurements at potential or existing wind power plant locations, while the second model uses the statistical distribution of wind speeds.

Keywords: wind power, uncertainty, stochastic process, Monte Carlo simulation

Procedia PDF Downloads 463
6710 Relationship between Ageism, Health and Social Conditions: A Cross-Sectional Study Among Brazilian Older Adults

Authors: Ana Luiza Blanco, Luiza de Pádua Penteado, Daniella Pires Nunes

Abstract:

Ageism is a widespread and prevalent phenomenon that affects older adults and directly undermines healthy aging. Identifying the factors that contribute to ageism is important in order to design interventions that minimize its social and emotional impact. The objective of this study was to identify factors related to ageism in Brazilian older adults. This is a quantitative study with a cross-sectional, analytical design. A total of 134 older adults completed an online questionnaire covering sociodemographic and health characteristics, discrimination (Ageism Survey), depressive symptoms (the Geriatric Depression Scale), family function (Family APGAR), and loneliness. The Mann-Whitney and Kruskal-Wallis tests were used for data analysis, with a significance level of 5%. The mean age was 66.93 years (sd=0.50); most participants were women (84.20%), married (52.60%), and with more than 12 years of schooling (75.93%). The results showed that older adults with a regular self-perception of health had higher median ageism scores than individuals who rated their health as very good or good (p=0.006). The same occurred for individuals with depressive symptoms compared to those without signs of depression (p=0.001). Regarding family function, people with low family functionality tended to suffer more ageism than those with high functionality (p=0.017). Loneliness was also a factor related to the experience of ageism in this study: lonely individuals had higher median ageism scores (p=0.002). There was a relationship between ageism and self-perception of health, depressive symptoms, loneliness, and family dysfunction. These findings demonstrate the importance of considering the psychosocial determinants of aging in order to reduce discrimination and promote healthy aging, with a focus on social support and educational interventions.
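
The nonparametric comparisons reported above can be reproduced schematically with SciPy; the ageism scores below are invented for illustration.

```python
import numpy as np
from scipy.stats import mannwhitneyu, kruskal

rng = np.random.default_rng(2)

# Hypothetical Ageism Survey scores for two groups (e.g. with vs without depressive symptoms).
scores_no_depression = rng.poisson(6, 90)
scores_depression = rng.poisson(9, 44)

u_stat, p_two_groups = mannwhitneyu(scores_depression, scores_no_depression, alternative="two-sided")
print(f"Mann-Whitney U p-value: {p_two_groups:.4f}")

# Three or more groups (e.g. self-rated health: very good / good / regular) use Kruskal-Wallis.
very_good, good, regular = rng.poisson(5, 40), rng.poisson(6, 60), rng.poisson(8, 34)
h_stat, p_three_groups = kruskal(very_good, good, regular)
print(f"Kruskal-Wallis p-value: {p_three_groups:.4f}")
```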

Keywords: ageism, age stereotypes, healthy aging, social conditions

Procedia PDF Downloads 82
6709 A Comparative Study of Regional Climate Models and Global Coupled Models over Uttarakhand

Authors: Sudip Kumar Kundu, Charu Singh

Abstract:

As a great physiographic divide, the Himalayas affect a large system of water and air circulation that helps determine the climatic conditions of the Indian subcontinent to the south and the mid-Asian highlands to the north. The range acts as a barrier, blocking cold continental air from the north from entering India in winter and forcing the rain-bearing southwesterly monsoon to release most of its precipitation in the region during the monsoon season. Nowadays, extreme weather events such as heavy precipitation, cloudbursts, flash floods, landslides, and extreme avalanches occur regularly in the North Western Himalayan (NWH) region. The present study was planned to investigate which model(s) are suitable for capturing the rainfall pattern over this region. For this investigation, selected models from the Coordinated Regional Climate Downscaling Experiment (CORDEX) and the Coupled Model Intercomparison Project Phase 5 (CMIP5) have been utilized in a consistent framework for the period 1976 to 2000 (historical). The ability of these driving models from the CORDEX domain and CMIP5 has been examined in terms of how well they reproduce the spatial distribution and time series of rainfall over the NWH in the rainy season, compared with the ground-based India Meteorological Department (IMD) gridded rainfall data set. The analysis shows that models such as MIROC5 and MPI-ESM-LR from both CORDEX and CMIP5 provide the best spatial distribution of rainfall over the NWH region. However, the CORDEX driving models underestimate the daily rainfall amount compared to the CMIP5 driving models, as they are unable to capture daily rainfall properly when plotted as time series (TS) individually for the states of Uttarakhand (UK) and Himachal Pradesh (HP). Overall, it can be concluded that the CMIP5 driving models are better suited than the CORDEX domain models for investigating the rainfall pattern over the NWH region.

Keywords: global warming, rainfall, CMIP5, CORDEX, NWH

Procedia PDF Downloads 153
6708 Predicting Options Prices Using Machine Learning

Authors: Krishang Surapaneni

Abstract:

The goal of this project is to determine how to predict important aspects of options, including the ask price. We compare different machine learning models to find the best model, and the best hyperparameters for that model, for this purpose and data set. Option pricing is a relatively new field, and it can be very complicated and intimidating, especially to inexperienced people, so we want to create a machine learning model that can predict important aspects of an option, which can aid future research. We tested multiple models and experimented with hyperparameter tuning, trying to find some of the best parameters for a machine learning model. We tested three different models: a random forest regressor, a linear regressor, and an MLP (multi-layer perceptron) regressor. The key quantity in this experiment is the ask price; this is what we were trying to predict. In the field of stock price prediction, there is a large potential for error, so we cannot judge the accuracy of the models on whether they predict prices perfectly. For this reason, we determined the accuracy of each model by finding the average percentage difference between the predicted and actual values. We tested the accuracy of the machine learning models by comparing the actual results in the testing data with the predictions made by the models. The linear regression model performed worst, with an average percentage error of 17.46%. The MLP regressor had an average percentage error of 11.45%, and the random forest regressor had an average percentage error of 7.42%.
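
A compressed version of the comparison described above, with scikit-learn models scored by mean absolute percentage error on synthetic option quotes instead of the project's real data:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_absolute_percentage_error

# Hypothetical option features: underlying price, strike, days to expiry, implied volatility.
rng = np.random.default_rng(4)
n = 3000
X = np.column_stack([
    rng.uniform(50, 150, n), rng.uniform(50, 150, n),
    rng.uniform(5, 365, n), rng.uniform(0.1, 0.6, n),
])
# Crude stand-in for an ask price: intrinsic value plus a volatility/time term and noise.
ask = np.maximum(X[:, 0] - X[:, 1], 0) + 0.4 * X[:, 3] * np.sqrt(X[:, 2] / 365) * X[:, 0]
ask = np.maximum(ask + rng.normal(0, 0.5, n), 0.05)

X_tr, X_te, y_tr, y_te = train_test_split(X, ask, test_size=0.2, random_state=4)

models = {
    "Linear regression": LinearRegression(),
    "MLP regressor": make_pipeline(StandardScaler(),
                                   MLPRegressor((64, 32), max_iter=2000, random_state=4)),
    "Random forest": RandomForestRegressor(n_estimators=300, random_state=4),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    mape = mean_absolute_percentage_error(y_te, model.predict(X_te))
    print(f"{name}: {100 * mape:.2f}% average error")
```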

Keywords: finance, linear regression model, machine learning model, neural network, stock price

Procedia PDF Downloads 63