Search results for: common information model

28565 Reliability Assessment Using Full Probabilistic Modelling for Carbonation and Chloride Exposures, Including Initiation and Propagation Periods

Authors: Frank Papworth, Inam Khan

Abstract:

Fib’s model code 2020 has four approaches for design life verification. Historically, ‘deemed to satisfy’ provisions have been the principal approach, but this has limited options for materials and covers. The use of an equation in fib’s model code for service life design to predict time to corrosion initiation has become increasingly popular to justify further options, but in some cases, the analysis approaches are incorrect. Even when the equations are computed using full probabilistic analysis, there are common mistakes. This paper reviews the work of recent fib commissions on implementing the service life model to assess the reliability of durability designs, including initiation and propagation periods. The paper goes on to consider the assessment of deemed-to-satisfy requirements in national codes and considers the influence of various options, including different steel types, various cement systems, and the quality of concrete and cover, on the reliability achieved. As modelling is based on achieving an agreed target reliability, consideration is given to how a project might determine an appropriate target reliability.
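
As a minimal illustration of the kind of full probabilistic analysis the abstract refers to, the sketch below Monte Carlo-samples the common error-function solution of Fick's second law for chloride ingress and estimates the probability that corrosion has initiated within a design life. The distributions and parameter values are purely illustrative assumptions, not those of fib Model Code 2020 or the paper.

```python
import numpy as np
from scipy.special import erfinv

# Monte Carlo estimate of the probability that chloride-induced corrosion
# has initiated by a given design life, using the error-function solution
# of Fick's second law: C(x,t) = Cs * (1 - erf(x / (2*sqrt(D*t)))).
# All distributions and values below are illustrative assumptions.
rng = np.random.default_rng(0)
n = 100_000
cover = rng.normal(70e-3, 8e-3, n)            # concrete cover [m]
D = rng.lognormal(np.log(5e-13), 0.5, n)      # apparent diffusion coeff. [m^2/s]
Cs = rng.lognormal(np.log(2.5), 0.3, n)       # surface chloride [% wt. binder]
Ccrit = rng.normal(0.6, 0.1, n)               # critical chloride content

# Invert C(x,t) = Ccrit for the time to initiation t_i.
ratio = np.clip(1.0 - Ccrit / Cs, 1e-6, 1 - 1e-6)
t_init = (cover / (2.0 * erfinv(ratio))) ** 2 / D / (365.25 * 24 * 3600)  # years

design_life = 50.0
pf = np.mean(t_init < design_life)            # probability of initiation
beta = -np.sqrt(2) * erfinv(2 * pf - 1)       # corresponding reliability index
print(f"P(initiation before {design_life:.0f} y) = {pf:.3f}, beta = {beta:.2f}")
```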

Keywords: chlorides, marine, exposure, design life, reliability, modelling

Procedia PDF Downloads 235
28564 Mixed Effects Models for Short-Term Load Forecasting for the Spanish Regions: Castilla-Leon, Castilla-La Mancha and Andalucia

Authors: C. Senabre, S. Valero, M. Lopez, E. Velasco, M. Sanchez

Abstract:

This paper focuses on an application of linear mixed models to short-term load forecasting. The challenge of this research is to improve a currently working model at the Spanish Transport System Operator, programmed by us and based on linear autoregressive techniques and neural networks. The forecasting system currently forecasts each of the regions within the Spanish grid separately, even though the behavior of the load in each region is affected by the same factors in a similar way. The load forecasting system has been verified in this work using real data from a utility. In this research, the integration of several regions into a linear mixed model has been used as a starting point to obtain information from the other regions. Firstly, the system learns the general behaviors present in all regions, and secondly, the individual deviations in each region are identified. The technique can be especially useful when modeling the effect of special days with scarce information from the past. The three most relevant regions of the system have been used to test the model, focusing on special days and improving on the performance of both currently working models used as benchmarks. A range of comparisons with different forecasting models has been conducted. The forecasting results demonstrate the superiority of the proposed methodology.
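
For readers unfamiliar with the approach, the following sketch shows how a linear mixed model with a shared fixed effect and per-region random intercepts and slopes can be fitted with statsmodels; the synthetic data, region effects and formula are illustrative assumptions, not the authors' model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic load example: a fixed temperature effect shared by all regions
# plus a random intercept and slope per region, mimicking the idea of
# learning a general behaviour and a per-region deviation.
rng = np.random.default_rng(1)
regions = ["Castilla-Leon", "Castilla-La Mancha", "Andalucia"]
rows = []
for r, offs, slope in zip(regions, [0.0, 1.5, -1.0], [0.00, 0.05, -0.03]):
    temp = rng.normal(20, 8, 300)
    load = 10 + offs + (0.3 + slope) * temp + rng.normal(0, 1, 300)
    rows.append(pd.DataFrame({"region": r, "temp": temp, "load": load}))
data = pd.concat(rows, ignore_index=True)

# Fixed effect: temp; random intercept and random temp slope per region.
model = smf.mixedlm("load ~ temp", data, groups=data["region"], re_formula="~temp")
result = model.fit()
print(result.summary())           # fixed effects = general behaviour
print(result.random_effects)      # per-region deviations
```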

Keywords: short-term load forecasting, mixed effects models, neural networks

Procedia PDF Downloads 188
28563 Towards a Security Model against Denial of Service Attacks for SIP Traffic

Authors: Arellano Karina, Diego Avila-Pesántez, Leticia Vaca-Cárdenas, Alberto Arellano, Carmen Mantilla

Abstract:

Nowadays, security threats in Voice over IP (VoIP) systems are an essential and latent concern for people in charge of security in a corporate network, because, every day, new Denial-of-Service (DoS) attacks are developed. These affect the business continuity of an organization with regard to the confidentiality, availability, and integrity of services, causing frequent losses of both information and money. The purpose of this study is to establish the necessary measures to mitigate DoS threats, which affect the availability of VoIP systems based on the Session Initiation Protocol (SIP). A security model called MS-DoS-SIP is proposed, which is based on two approaches. The first one analyzes the recommendations of international security standards. The second approach takes into account weaknesses and threats. The implementation of this model in a simulated VoIP system made it possible to reduce the present vulnerabilities by 92% and to increase the availability time of the VoIP service within an organization.

Keywords: Denial-of-Service SIP attacks, MS-DoS-SIP, security model, VoIP-SIP vulnerabilities

Procedia PDF Downloads 203
28562 Residual Life Estimation Based on Multi-Phase Nonlinear Wiener Process

Authors: Hao Chen, Bo Guo, Ping Jiang

Abstract:

Residual life (RL) estimation based on a multi-phase nonlinear Wiener process was studied in this paper, which is significant for complicated products with small samples. Firstly, a nonlinear Wiener model with a random parameter was introduced, and a multi-phase nonlinear Wiener model was proposed to model the degradation process of products that are nonlinear and separated into different phases. Then the multi-phase RL probability density function based on the presented model was derived approximately in a closed form, and parameter estimation was achieved with the method of maximum likelihood estimation (MLE). Finally, the method was applied to estimate the RL of a high voltage pulse capacitor. Compared with three other models by the log-likelihood function (Log-LF) and the Akaike information criterion (AIC), the results show that the proposed degradation model can capture the degradation process of high voltage pulse capacitors in a better way and provide a more reliable result.
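
A toy version of the idea, under simplifying assumptions (a two-phase linear-drift Wiener process with a known change point, rather than the paper's nonlinear model with random parameters), might look like this: simulate a degradation path, fit drift and diffusion by MLE per phase, and Monte Carlo the residual life to a failure threshold.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative two-phase Wiener degradation sketch (not the paper's exact
# model): phase 1 has drift mu1, phase 2 has drift mu2, both with diffusion sig.
rng = np.random.default_rng(2)
dt, tau, T = 1.0, 60, 100                    # step, change point, horizon
mu1, mu2, sig, thresh = 0.05, 0.15, 0.08, 12.0
t = np.arange(0, T + dt, dt)
drift = np.where(t[1:] <= tau, mu1, mu2)
x = np.concatenate([[0.0], np.cumsum(drift * dt + sig * np.sqrt(dt) * rng.normal(size=t.size - 1))])

def neg_loglik(theta, incr, dt):
    """Negative Gaussian log-likelihood of Wiener increments (drift mu, diffusion s)."""
    mu, s = theta[0], abs(theta[1]) + 1e-12
    return 0.5 * np.sum((incr - mu * dt) ** 2 / (s ** 2 * dt) + np.log(2 * np.pi * s ** 2 * dt))

# MLE separately for each phase (change point assumed known in this sketch).
incr = np.diff(x)
fits = [minimize(neg_loglik, [0.1, 0.1], args=(seg, dt), method="Nelder-Mead").x
        for seg in (incr[t[1:] <= tau], incr[t[1:] > tau])]
mu2_hat, sig_hat = fits[1]
print("phase-wise (drift, diffusion) estimates:", np.round(fits, 3))

# Monte Carlo residual life from the last observation under the phase-2 fit.
paths = x[-1] + np.cumsum(mu2_hat * dt + sig_hat * np.sqrt(dt) * rng.normal(size=(5000, 500)), axis=1)
rl = np.argmax(paths >= thresh, axis=1) * dt
print("mean residual life:", rl[paths.max(axis=1) >= thresh].mean(), "time units")
```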

Keywords: multi-phase nonlinear Wiener process, residual life estimation, maximum likelihood estimation, high voltage pulse capacitor

Procedia PDF Downloads 453
28561 Model Observability – A Monitoring Solution for Machine Learning Models

Authors: Amreth Chandrasehar

Abstract:

Machine Learning (ML) models are developed and run in production to solve various use cases that help organizations be more efficient and drive the business. But this comes at a massive development cost and in lost business opportunities. According to a Gartner report, 85% of data science projects fail, and one of the factors impacting this is not paying attention to model observability. Model observability helps developers and operators pinpoint model performance issues and data drift and helps identify the root cause of issues. This paper focuses on providing insights into incorporating model observability into model development and operationalizing it in production.
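
One concrete building block of such observability is statistical drift detection; a minimal sketch, assuming a two-sample Kolmogorov-Smirnov test between a training reference window and a production window, is shown below (threshold, window sizes and data are illustrative).

```python
import numpy as np
from scipy.stats import ks_2samp

# Minimal sketch of one model-observability check: compare the distribution
# of a feature (or of model scores) in production against a training
# reference window and flag drift when the KS statistic is significant.
def drift_check(reference, production, alpha=0.05):
    stat, p_value = ks_2samp(reference, production)
    return {"ks_stat": stat, "p_value": p_value, "drift": p_value < alpha}

rng = np.random.default_rng(3)
reference = rng.normal(0.0, 1.0, 5000)        # feature values at training time
production = rng.normal(0.4, 1.1, 5000)       # values observed in production
print(drift_check(reference, production))     # expected to flag drift
```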

Keywords: model observability, monitoring, drift detection, ML observability platform

Procedia PDF Downloads 112
28560 A User Study on the Adoption of Context-Aware Destination Mobile Applications

Authors: Shu-Lu Hsu, Fang-Yi Chu

Abstract:

With the advances in information and communications technology, mobile context-aware applications have become powerful marketing tools. In the Apple online store, there are numerous mobile applications (APPs) developed for destination tours. This study investigated the determinants of the adoption of context-aware APPs for destination tour services. A model is proposed based on the Technology Acceptance Model and privacy concern theory. The model was empirically tested on a sample of 259 users of a tourism APP published by the Kaohsiung Tourism Bureau, Taiwan. The results showed that the fit of the model is good and that, among all the factors, perceived usefulness and perceived ease of use have the most significant influences on the intention to adopt context-aware destination APPs. Finally, contrary to the findings of previous literature, the effect of privacy concern on the intention to adopt context-aware APPs is insignificant.

Keywords: mobile application, context-aware, privacy concern, TAM

Procedia PDF Downloads 258
28559 High Resolution Sandstone Connectivity Modelling: Implications for Outcrop Geology and Its Analog Studies

Authors: Numair Ahmed Siddiqui, Abdul Hadi bin Abd Rahman, Chow Weng Sum, Wan Ismail Wan Yousif, Asif Zameer, Joel Ben-Awal

Abstract:

Advances in data capturing from outcrop studies have made possible the acquisition of high-resolution digital data, offering improved and economical reservoir modelling methods. Terrestrial laser scanning utilizing LiDAR (Light Detection and Ranging) provides a new method to build outcrop-based reservoir models, which provide a crucial piece of information for understanding heterogeneities in sandstone facies with high-resolution images and data sets. This study presents the detailed application of an outcrop-based sandstone facies connectivity model by acquiring information gathered from traditional fieldwork and processing detailed digital point-cloud data from LiDAR to develop an intermediate small-scale reservoir sandstone facies model of the Miocene Sandakan Formation, Sabah, East Malaysia. The software RiScan Pro (v1.8.0) was used in digital data collection and post-processing, with an accuracy of 0.01 m and a point acquisition rate of up to 10,000 points per second. We provide an accurate and descriptive workflow to triangulate point clouds of different sets of sandstone facies with well-marked top and bottom boundaries in conjunction with field sedimentology. This provides a highly accurate qualitative sandstone facies connectivity model, which is a challenge to obtain from subsurface datasets (i.e., seismic and well data). Finally, by applying this workflow, we can build an outcrop-based static connectivity model, which can serve as an analogue for subsurface reservoir studies.

Keywords: LiDAR, outcrop, high resolution, sandstone facies, connectivity model

Procedia PDF Downloads 226
28558 All-or-None Principle and Weakness of Hodgkin-Huxley Mathematical Model

Authors: S. A. Sadegh Zadeh, C. Kambhampati

Abstract:

Mathematical and computational modelling are necessary tools for reviewing, analysing, and predicting processes and events in a wide spectrum of scientific fields. Therefore, in a field as rapidly developing as neuroscience, the combination of these two types of modelling can play a significant role in helping to guide the direction the field takes. This paper combines mathematical and computational modelling to demonstrate a weakness in a highly regarded model in neuroscience. The paper is intended to analyse the all-or-none principle in the Hodgkin-Huxley mathematical model. By implementing the computational version of the Hodgkin-Huxley model and applying the concept of the all-or-none principle, an investigation of this mathematical model has been performed. The results clearly showed that the Hodgkin-Huxley mathematical model does not obey this fundamental law of neurophysiology when generating action potentials. This study shows that further mathematical studies on the Hodgkin-Huxley model are needed in order to create a model without this weakness.

Keywords: all-or-none, computational modelling, mathematical model, transmembrane voltage, action potential

Procedia PDF Downloads 617
28557 Bridging the Gap between M&E and KM: Towards the Integration of Evidence-Based Information and Policy Decision-Making

Authors: Xueqing Ivy Chen, Christo De Coning

Abstract:

It is clear from practice that a gap exists between Result-Based Monitoring and Evaluation (RBME) as a discipline on the one hand, and Knowledge Management (KM) on the other. Whereas various government departments have institutionalised these functions, KM and M&E have functioned in isolation from each other in a practical sense in the public sector. It is therefore necessary to explore the relationship between KM and M&E and the necessity for integration, so that a convergence of these disciplines can be established. An integration of KM and M&E will lead to the integration and improvement of evidence-based information and policy decision-making. M&E and KM process models are available, but the complementarity between specific process steps of these process models is not exploited. A need exists to clarify the relationships between these functions in order to ensure evidence-based information and policy decision-making. This paper departs from the well-known policy process models, such as the generic model, and considers recent work on the interface between policy, M&E and KM.

Keywords: result-based monitoring and evaluation, RBME, knowledge management, KM, evidence-based decision making, public policy, information systems, institutional arrangement

Procedia PDF Downloads 152
28556 Modeling the Intricate Relationship between miRNA Dysregulation and Breast Cancer Development

Authors: Sajed Sarabandi, Mostafa Rostampour Vajari

Abstract:

Breast cancer is the most frequent form of cancer among women and the fifth-leading cause of cancer-related deaths. A common feature of cancer cells is their ability to survive and evade apoptosis. Understanding the mechanisms of these pathways and their regulatory factors can lead to the development of effective treatment strategies. In this study, we aim to model the effect of key miRNAs, which are significant regulatory factors in breast cancer. We designed a Petri net focusing on two crucial pathways, proliferation, and apoptosis, and identified the role of miRNAs in these pathways. Our analysis indicates that the upregulation of miRNAs 99a and 372 can effectively increase apoptosis and decrease proliferation. Moreover, we demonstrate that miRNA-600, previously reported as a potential candidate for treatment, may not be a suitable target due to its dual activity in proliferation. Therefore, further research is required to investigate the potential of this miRNA in cancer treatment. Our model shows that a combination of miRNA upregulation and knockdown can efficiently influence key genes such as MDM2 and PTEN, leading to the activation of apoptosis in cancer cells. Ultimately, our model successfully simulates the connection between regulatory miRNAs and key genes in breast cancer.

Keywords: breast cancer, microRNAs, bio-modeling, Petri net

Procedia PDF Downloads 28
28555 A Mixed-Integer Nonlinear Program to Optimally Pace and Fuel Ultramarathons

Authors: Kristopher A. Pruitt, Justin M. Hill

Abstract:

The purpose of this research is to determine the pacing and nutrition strategies which minimize completion time and carbohydrate intake for athletes competing in ultramarathon races. The model formulation consists of a two-phase optimization. The first-phase mixed-integer nonlinear program (MINLP) determines the minimum completion time subject to the altitude, terrain, and distance of the race, as well as the mass and cardiovascular fitness of the athlete. The second-phase MINLP determines the minimum total carbohydrate intake required for the athlete to achieve the completion time prescribed by the first phase, subject to the flow of carbohydrates through the stomach, liver, and muscles. Consequently, the second-phase model provides the optimal pacing and nutrition strategies for a particular athlete for each kilometer of a particular race. Validation of the model results over a wide range of athlete parameters against completion times for real competitive events suggests strong agreement. Additionally, the kilometer-by-kilometer pacing and nutrition strategies the model prescribes for a particular athlete suggest that unconventional approaches could result in lower completion times. Thus, the MINLP provides prescriptive guidance that athletes can leverage when developing pacing and nutrition strategies prior to competing in ultramarathon races. Given the highly variable topographical characteristics common to many ultramarathon courses and the potential inexperience of many athletes with such courses, the model provides valuable insight to competitors who might otherwise fail to complete the event due to exhaustion or carbohydrate depletion.

Keywords: nutrition, optimization, pacing, ultramarathons

Procedia PDF Downloads 189
28554 The Conceptualization of Patient-Centered Care in Latin America: A Scoping Review

Authors: Anne Klimesch, Alejandra Martinez, Martin Härter, Isabelle Scholl, Paulina Bravo

Abstract:

Patient-centered care (PCC) is a key principle of high-quality healthcare. In Latin America, research on and promotion of PCC have taken place in the past. However, thorough implementation of PCC in practice is still missing. In Germany, an integrative model of patient-centeredness has been developed by synthesis of diverse concepts of PCC. The model could serve as a point of reference for further research on the implementation of PCC. However, it is predominantly based on research from Europe and North America. This scoping review, therefore, aims to accumulate research on PCC in Latin America in the past 15 years and analyse how PCC has been conceptualized. The resulting overview of PCC in Latin America will be a foundation for a subsequent study aiming at the adaptation of the integrative model of patient-centeredness to the Latin American health care context. Scientific databases (MEDLINE, EMBASE, PsycINFO, CINAHL, Scopus, Web of Science, SciELO, Redalyc) will be searched, and reference and citation tracking will be performed. Studies will be included if they were carried out in Latin America, investigated PCC in any clinical or community setting (public or private), and were published in English, Spanish, French, or Portuguese since 2006. Furthermore, any theoretical framework or conceptual model used to guide how PCC is conceptualized in Latin America will be included. Two reviewers will be responsible for the identification of articles, screening of records, and full-text assessment. The results of the scoping review will be used in the development of a mixed-methods study with the aim of understanding the needs for PCC, as well as barriers and facilitators, in Latin America. Based on the outcomes, the integrative model of PCC will be translated into Spanish and adapted to the Latin American context. The integrative model will enable the dissemination of the concept of PCC in Latin America and will provide a common ground for further research on the topic. The project will thereby make an important contribution to an evidence-based implementation of PCC in Latin America.

Keywords: conceptual framework, integrative model of PCC, Latin America, patient-centered care

Procedia PDF Downloads 200
28553 A Graph-Based Retrieval Model for Passage Search

Authors: Junjie Zhong, Kai Hong, Lei Wang

Abstract:

Passage Retrieval (PR) plays an important role in many Natural Language Processing (NLP) tasks. Traditional efficient retrieval models relying on exact term-matching, such as TF-IDF or BM25, have nowadays been surpassed by pre-trained language models, which match by semantics. Though they gain effectiveness, deep language models often incur large memory and time costs. To tackle the trade-off between efficiency and effectiveness in PR, this paper proposes the Graph Passage Retriever (GraphPR), a graph-based model inspired by the development of graph learning techniques. Different from existing works, GraphPR is end-to-end and integrates both term-matching information and semantics. GraphPR constructs a passage-level graph from BM25 retrieval results and trains a GCN-like model on the graph with graph-based objectives. Passages are regarded as nodes in the constructed graph and are embedded in dense vectors. PR can then be implemented using the embeddings and a fast vector-similarity search. Experiments on a variety of real-world retrieval datasets show that the proposed model outperforms related models on several evaluation metrics (e.g., mean reciprocal rank, accuracy, F1-score) while maintaining a relatively low query latency and memory usage.
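
The following sketch illustrates the general idea of combining term-matching, graph propagation and vector search, not the GraphPR architecture itself: TF-IDF vectors stand in for BM25 scores, and a single untrained normalized-adjacency propagation step stands in for the trained GCN-like model.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Sketch of the idea behind graph-based passage retrieval: passages become
# graph nodes, edges link lexically similar passages, and one normalized-
# adjacency propagation step smooths the passage vectors before a
# vector-similarity search. Passages, thresholds and the query are toy inputs.
passages = [
    "BM25 is a classic term-matching retrieval function.",
    "Pre-trained language models match queries and passages by semantics.",
    "Graph neural networks propagate information between connected nodes.",
    "Vector similarity search retrieves nearest neighbours of an embedding.",
]
vec = TfidfVectorizer().fit(passages)
X = vec.transform(passages).toarray()

# Build a passage graph from pairwise similarity (stand-in for BM25 links).
S = cosine_similarity(X)
A = (S > 0.05).astype(float)                     # adjacency incl. self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A.sum(axis=1)))
X_smoothed = D_inv_sqrt @ A @ D_inv_sqrt @ X     # one propagation step

query = vec.transform(["neural retrieval with graphs"]).toarray()
scores = cosine_similarity(query, X_smoothed)[0]
print(sorted(zip(scores, passages), reverse=True)[:2])
```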

Keywords: efficiency, effectiveness, graph learning, language model, passage retrieval, term-matching model

Procedia PDF Downloads 148
28552 Artificial Neural Network-Based Prediction of Effluent Quality of Wastewater Treatment Plant Employing Data Preprocessing Approaches

Authors: Vahid Nourani, Atefeh Ashrafi

Abstract:

Prediction of treated wastewater quality is a matter of growing importance in the water treatment procedure. In this regard, the artificial neural network (ANN), as a robust data-driven approach, has been widely used for forecasting the effluent quality of wastewater treatment. However, developing an ANN model based on appropriate input variables is a major concern due to the numerous parameters that are collected from the treatment process, whose number is increasing in light of the development of electronic sensors. Various studies have been conducted, using different clustering methods, in order to classify the most related and effective input variables. This issue has often been overlooked when selecting dominant input variables among wastewater treatment parameters, although it could effectively lead to more accurate prediction of water quality. In the presented study, two ANN models were developed with the aim of forecasting the effluent quality of Tabriz city’s wastewater treatment plant. Biochemical oxygen demand (BOD) was utilized as the target parameter to determine water quality. Model A used Principal Component Analysis (PCA), a linear variance-based clustering method, for input selection. Model B used those variables identified by the mutual information (MI) measure. The optimal ANN structure of model B, when compared with model A, showed up to a 15% increase in the Determination Coefficient (DC). Thus, this study highlights the advantage of the MI method in selecting dominant input variables for ANN modeling of wastewater treatment plant performance.
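
A small sketch of the two input-selection routes compared above, using scikit-learn on synthetic data standing in for the plant's sensor records (all variables, sizes and network settings are illustrative assumptions):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import mutual_info_regression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Model A feeds PCA components to an ANN; model B keeps the raw variables
# ranked highest by mutual information with the target (a BOD-like signal).
rng = np.random.default_rng(4)
X = rng.normal(size=(600, 10))
y = 2 * X[:, 0] + np.sin(3 * X[:, 1]) + 0.5 * X[:, 2] + 0.1 * rng.normal(size=600)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Model A: linear variance-based compression with PCA.
pca = PCA(n_components=3).fit(X_tr)
ann_a = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
ann_a.fit(pca.transform(X_tr), y_tr)
dc_a = r2_score(y_te, ann_a.predict(pca.transform(X_te)))

# Model B: keep the three variables with the highest mutual information.
mi = mutual_info_regression(X_tr, y_tr, random_state=0)
top = np.argsort(mi)[-3:]
ann_b = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
ann_b.fit(X_tr[:, top], y_tr)
dc_b = r2_score(y_te, ann_b.predict(X_te[:, top]))

print(f"DC (model A, PCA): {dc_a:.3f}   DC (model B, MI): {dc_b:.3f}")
```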

Keywords: Artificial Neural Networks, biochemical oxygen demand, principal component analysis, mutual information, Tabriz wastewater treatment plant, wastewater treatment plant

Procedia PDF Downloads 128
28551 Reservoir Fluids: Occurrence, Classification, and Modeling

Authors: Ahmed El-Banbi

Abstract:

Several PVT models exist to represent how PVT properties are handled in sub-surface and surface engineering calculations for oil and gas production. The most commonly used models include black oil, modified black oil (MBO), and compositional models. These models are used in calculations that allow engineers to optimize and forecast well and reservoir performance (e.g., reservoir simulation calculations, material balance, nodal analysis, surface facilities, etc.). The choice of model depends on the fluid type and the production process (e.g., depletion, water injection, gas injection, etc.). Based on close to 2,000 reservoir fluid samples collected from different basins and locations, this paper presents some conclusions on the occurrence of reservoir fluids. It also reviews the common methods used to classify reservoir fluid types. Based on new criteria related to the production behavior of different fluids and economic considerations, an updated classification of reservoir fluid types is presented in the paper. Recommendations on the use of different PVT models to simulate the behavior of different reservoir fluid types are discussed. The requirements of each PVT model are highlighted. Available methods for the calculation of PVT properties from each model are also discussed. Practical recommendations and tips on how to control the calculations to achieve the most accurate results are given.

Keywords: PVT models, fluid types, PVT properties, fluids classification

Procedia PDF Downloads 72
28550 Geo-Collaboration Model between a City and Its Inhabitants to Develop Complementary Solutions for Better Household Waste Collection

Authors: Abdessalam Hijab, Hafida Boulekbache, Eric Henry

Abstract:

According to several research studies, the city as a whole is a complex, spatially organized system; its modeling must take into account several socio-economic, political, and geographical factors, acting at multiple scales of observation according to varied temporalities. Sustainable management and protection of the environment in this complex system require significant human and technical investment, particularly for monitoring and maintenance. The objective of this paper is to propose an intelligent approach based on the coupling of Geographic Information System (GIS) and Information and Communications Technology (ICT) tools in order to integrate the inhabitants in the processes of sustainable management and protection of the urban environment, specifically in the processes of household waste collection in urban areas. We are discussing a collaborative 'city/inhabitant' space. Indeed, it is a geo-collaborative approach, based on the spatialization and real-time geo-localization of topological and multimedia data taken by the 'active' inhabitant, in the form of geo-localized alerts related to household waste issues in their city. Our proposal provides a good understanding of the extent to which civil society (the inhabitants) can help and contribute to the development of complementary solutions for the collection of household waste and the protection of the urban environment. Moreover, it allows the inhabitant to contribute to the enrichment of a data bank for future uses. Our geo-collaborative model will be tested in the Lamkansa sampling district of the city of Casablanca in Morocco.

Keywords: geographic information system, GIS, information and communications technology, ICT, geo-collaboration, inhabitants, city

Procedia PDF Downloads 116
28549 Analysis of the Diffusion Behavior of an Information and Communication Technology Platform for City Logistics

Authors: Giulio Mangano, Alberto De Marco, Giovanni Zenezini

Abstract:

The concept of City Logistics (CL) has emerged to improve the impacts of last-mile freight distribution in urban areas. In this paper, a System Dynamics (SD) model exploring the dynamics of the diffusion of an ICT platform for CL management across different populations is proposed. Two sources have been used for the development of the model. On the one hand, the major diffusion variables and feedback loops are derived from a literature review of existing diffusion models. On the other hand, the parameters are represented by the value propositions delivered by the platform as a response to some of the users’ needs. To extract the most important value propositions, the Business Model Canvas approach has been used. Such an approach in fact focuses on understanding how a company can create value for its target customers. These variables and parameters are thus translated into an SD diffusion model with three different populations, namely municipalities, logistics service providers, and own-account carriers. Results show that the three populations under analysis fully adopt the platform within the simulation time frame, highlighting a strong demand by different stakeholders for CL projects aiming at carrying out more efficient urban logistics operations.
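
As an illustration of the diffusion mechanics such an SD model captures, the sketch below integrates a Bass-style adoption equation separately for three populations; the market sizes and innovation/imitation coefficients are illustrative stand-ins for the value propositions discussed above, not calibrated values from the paper.

```python
import numpy as np
from scipy.integrate import odeint

# Minimal Bass-style diffusion sketch in the spirit of an SD adoption model:
# three adopter populations, each with its own market size M, innovation
# coefficient p and imitation coefficient q (all values illustrative).
populations = {"municipalities": (30, 0.02, 0.40),
               "logistics service providers": (200, 0.01, 0.50),
               "own-account carriers": (500, 0.005, 0.45)}

def bass(adopters, t, market, p, q):
    # dA/dt = (p + q * A / M) * (M - A)
    return (p + q * adopters / market) * (market - adopters)

t = np.linspace(0, 60, 121)                      # months of simulation
for name, (market, p, q) in populations.items():
    a = odeint(bass, 0.01, t, args=(market, p, q))[:, 0]
    print(f"{name}: {100 * a[-1] / market:.1f}% adoption after {t[-1]:.0f} months")
```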

Keywords: city logistics, simulation, system dynamics, business model

Procedia PDF Downloads 266
28548 Classification of Foliar Nitrogen in Common Bean (Phaseolus vulgaris L.) Using Deep Learning Models and Images

Authors: Marcos Silva Tavares, Jamile Raquel Regazzo, Edson José de Souza Sardinha, Murilo Mesquita Baesso

Abstract:

Common beans are a widely cultivated and consumed legume globally, serving as a staple food for humans, especially in developing countries, due to their nutritional characteristics. Nitrogen (N) is the most limiting nutrient for productivity, and foliar analysis is crucial to ensure balanced nitrogen fertilization. Excessive N applications can cause, either in isolation or cumulatively, soil and water contamination and plant toxicity, and can increase the plants' susceptibility to diseases and pests. However, the quantification of N using conventional methods is time-consuming and costly, demanding new technologies to optimize the adequate supply of N to plants. Thus, it becomes necessary to establish constant monitoring of the foliar content of this macronutrient in plants, mainly at the V4 stage, aiming at precision management of nitrogen fertilization. In this work, the objective was to evaluate the performance of a deep learning model, ResNet-50, in the classification of foliar nitrogen in common beans using RGB images. The BRS Estilo cultivar was sown in a greenhouse in a completely randomized design with four nitrogen doses (T1 = 0 kg N ha⁻¹, T2 = 25 kg N ha⁻¹, T3 = 75 kg N ha⁻¹, and T4 = 100 kg N ha⁻¹) and 12 replications. Pots with 5 L capacity were used with a substrate composed of 43% soil (Neossolo Quartzarênico), 28.5% crushed sugarcane bagasse, and 28.5% cured bovine manure. The water supply of the plants was 5 mm of water per day. The application of urea (45% N) and the acquisition of images occurred 14 and 32 days after sowing, respectively. A code developed in Matlab R2022b was used to cut the original images into smaller blocks, creating an image bank composed of four folders representing the four classes and labeled as T1, T2, T3, and T4, each containing 500 images of 224x224 pixels obtained from plants cultivated under the different N doses. The Matlab R2022b software was used for the implementation and performance analysis of the model. The evaluation of efficiency was done with a set of metrics, including accuracy (AC), F1-score (F1), specificity (SP), area under the curve (AUC), and precision (P). The ResNet-50 showed high performance in the classification of foliar N levels in common beans, with an AC value of 85.6%. The F1 for classes T1, T2, T3, and T4 was 76, 72, 74, and 77%, respectively. This study revealed that the use of RGB images combined with deep learning can be a promising alternative to slow laboratory analyses, capable of optimizing the estimation of foliar N. This can allow rapid intervention by the producer to achieve higher productivity and less fertilizer waste. Future approaches are encouraged to develop mobile devices capable of handling images using deep learning for the classification of the nutritional status of plants in situ.
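
The study itself was implemented in Matlab, but an equivalent fine-tuning workflow can be sketched in Python/PyTorch as follows; the folder layout, hyperparameters and training loop are illustrative assumptions rather than the authors' configuration.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms, datasets

# Sketch of fine-tuning ResNet-50 for the four nitrogen-dose classes (T1-T4)
# from 224x224 RGB leaf patches; "bean_patches/train" is an assumed path
# with one subfolder per class.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tfm = transforms.Compose([transforms.Resize((224, 224)),
                          transforms.ToTensor(),
                          transforms.Normalize([0.485, 0.456, 0.406],
                                               [0.229, 0.224, 0.225])])
train_set = datasets.ImageFolder("bean_patches/train", transform=tfm)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 4)     # four N-dose classes
model = model.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch + 1}: loss {loss.item():.4f}")
```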

Keywords: convolutional neural network, residual network 50, nutritional status, artificial intelligence

Procedia PDF Downloads 19
28547 A Fuzzy Multiobjective Model for Bed Allocation Optimized by Artificial Bee Colony Algorithm

Authors: Jalal Abdulkareem Sultan, Abdulhakeem Luqman Hasan

Abstract:

With the development of competition in health care systems, hospitals face more and more pressures. Meanwhile, resource allocation has a vital effect on achieving competitive advantages in hospitals. Selecting the appropriate number of beds is one of the most important tasks in hospital management. However, in real situations, bed allocation is a multiple-objective problem involving different items with vagueness and randomness in the data, and it is very complex. Hence, research on the bed allocation problem is relatively scarce when considering multiple departments, nursing hours, and stochastic information about the arrival and service of patients. In this paper, we develop a fuzzy multiobjective bed allocation model to deal with uncertainty and multiple departments. Fuzzy objectives and weights are simultaneously applied to help managers select the suitable number of beds for the different departments. The proposed model is solved using the Artificial Bee Colony (ABC) algorithm, which is a very effective algorithm. The paper describes an application of the model dealing with a public hospital in Iraq. The results revealed that the fuzzy multi-objective model presents a suitable framework for bed allocation and optimal use.

Keywords: bed allocation problem, fuzzy logic, artificial bee colony, multi-objective optimization

Procedia PDF Downloads 324
28546 Downscaling Seasonal Sea Surface Temperature Forecasts over the Mediterranean Sea Using Deep Learning

Authors: Redouane Larbi Boufeniza, Jing-Jia Luo

Abstract:

This study assesses the suitability of deep learning (DL) for downscaling sea surface temperature (SST) over the Mediterranean Sea in the context of seasonal forecasting. We design a set of experiments that compare different DL configurations and deploy the best-performing architecture to downscale one-month-lead forecasts of June–September (JJAS) SST from the Nanjing University of Information Science and Technology Climate Forecast System version 1.0 (NUIST-CFS1.0) for the period 1982–2020. We have also introduced predictors over a larger area to include information about the main large-scale circulations that drive SST over the Mediterranean Sea region, which improves the downscaling results. Finally, we validate the raw model and downscaled forecasts in terms of both deterministic and probabilistic verification metrics, as well as their ability to reproduce the observed SST extreme and spell indicator indices. The results showed that the convolutional neural network (CNN)-based downscaling consistently improves the raw model forecasts, with lower bias and more accurate representations of the observed mean and extreme SST spatial patterns. Besides, the CNN-based downscaling yields a much more accurate forecast of extreme SST and spell indicators and reduces the significant relevant biases exhibited by the raw model predictions. Moreover, our results show that the CNN-based downscaling yields better skill scores than the raw model forecasts over most portions of the Mediterranean Sea. The results demonstrate the potential usefulness of CNN in downscaling seasonal SST predictions over the Mediterranean Sea, particularly in providing improved forecast products.

Keywords: Mediterranean Sea, sea surface temperature, seasonal forecasting, downscaling, deep learning

Procedia PDF Downloads 76
28545 Artificial Neural Network Based Parameter Prediction of Miniaturized Solid Rocket Motor

Authors: Hao Yan, Xiaobing Zhang

Abstract:

The working mechanism of miniaturized solid rocket motors (SRMs) is not yet fully understood. It is imperative to explore its unique features. However, there are many disadvantages to using common multi-objective evolutionary algorithms (MOEAs) in predicting the parameters of the miniaturized SRM during its conceptual design phase. Initially, the design variables and objectives are constrained in a lumped parameter model (LPM) of this SRM, which leads to local optima in MOEAs. In addition, MOEAs require a large number of calculations due to their population strategy. Although the calculation time for simulating an LPM just once is usually less than that of a CFD simulation, the number of function evaluations (NFEs) is usually large in MOEAs, which makes the total time cost unacceptably long. Moreover, the accuracy of the LPM is relatively low compared to that of a CFD model due to its assumptions. CFD simulations or experiments are required for comparison and verification of the optimal results obtained by MOEAs with an LPM. The conceptual design phase based on MOEAs is a lengthy process, and its results are not precise enough due to the above shortcomings. An artificial neural network (ANN) based parameter prediction is proposed as a way to reduce time costs and improve prediction accuracy. In this method, an ANN is used to build a surrogate model that is trained with a 3D numerical simulation. In design, the original LPM is replaced by a surrogate model. Each case uses the same MOEAs, in which the calculation time of the two models is compared, and their optimization results are compared with 3D simulation results. Using the surrogate model for the parameter prediction process of the miniaturized SRMs results in a significant increase in computational efficiency and an improvement in prediction accuracy. Thus, the ANN-based surrogate model does provide faster and more accurate parameter prediction for an initial design scheme. Moreover, even when the MOEAs converge to local optima, the time cost of the ANN-based surrogate model is much lower than that of the simplified physical model LPM. This means that designers can save a lot of time during code debugging and parameter tuning in a complex design process. Designers can reduce repeated calculation costs and obtain accurate optimal solutions by combining an ANN-based surrogate model with MOEAs.
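
A minimal sketch of the surrogate idea, with a cheap analytic function standing in for the 3D simulation and scikit-learn's MLPRegressor standing in for the ANN (all functions, ranges and sizes are illustrative assumptions):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Sample design variables, evaluate an expensive model on them (here a cheap
# analytic stand-in for the 3D simulation), train an ANN on the pairs, and
# let the optimizer query the ANN instead of the simulation.
def expensive_simulation(x):
    # stand-in for a 3D CFD evaluation of, e.g., a motor performance quantity
    return 5.0 + 3.0 * x[:, 0] ** 2 + np.sin(4.0 * x[:, 1]) + 0.5 * x[:, 0] * x[:, 1]

rng = np.random.default_rng(5)
X = rng.uniform(0, 1, size=(400, 2))              # sampled design variables
y = expensive_simulation(X)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
surrogate.fit(X_tr, y_tr)
print("surrogate R^2 on held-out designs:", round(r2_score(y_te, surrogate.predict(X_te)), 3))

# Inside an MOEA, candidate designs would now be scored with surrogate.predict.
candidates = rng.uniform(0, 1, size=(10_000, 2))
best = candidates[np.argmin(surrogate.predict(candidates))]
print("best candidate found via the surrogate:", best)
```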

Keywords: artificial neural network, solid rocket motor, multi-objective evolutionary algorithm, surrogate model

Procedia PDF Downloads 90
28544 Assessment of Soil Erosion Risk Using Soil and Water Assessment Tools Model: Case of Siliana Watershed, Northwest Tunisia

Authors: Sana Dridi, Jalel Aouissi, Rafla Attia, Taoufik Hermassi, Thouraya Sahli

Abstract:

Soil erosion is an increasing issue in Mediterranean countries. In Tunisia, the capacity of dam reservoirs continues to decrease as a consequence of soil erosion. This study aims to predict sediment yield to enrich soil management practices using the Soil and Water Assessment Tool (SWAT) model in the Siliana watershed (1041.6 km²), located in the northwest of Tunisia. A database was constructed using remote sensing and a Geographical Information System. Climatic and flow data were collected from the water resources directorates in Tunisia. The SWAT model was built to simulate hydrological processes and sediment transport. A sensitivity analysis, calibration, and validation were performed using the SWAT-CUP software. The model calibration of the streamflow simulations shows a good performance, with NSE and R² values of 0.77 and 0.79, respectively. The model validation shows a very good performance, with NSE and R² values of 0.80 and 0.88, respectively. After calibration and validation of the streamflow simulation, the model was used to simulate soil erosion and sediment load transport. The spatial distribution of the soil loss rate for determining the critical sediment source areas shows that 63% of the study area has a low soil loss rate of less than 7 t ha⁻¹y⁻¹. The annual average soil loss rate simulated with the SWAT model in the Siliana watershed is 4.62 t ha⁻¹y⁻¹.
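
For reference, the two goodness-of-fit measures quoted above can be written out explicitly; the flow values below are illustrative, not data from the Siliana watershed.

```python
import numpy as np

# obs and sim are observed and simulated streamflow series.
def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - sum((obs-sim)^2) / sum((obs-mean(obs))^2)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def r2(obs, sim):
    """Coefficient of determination (squared Pearson correlation)."""
    return np.corrcoef(obs, sim)[0, 1] ** 2

obs = np.array([12.0, 18.5, 30.2, 22.1, 15.8, 11.3])   # illustrative flows (m3/s)
sim = np.array([13.1, 17.0, 28.4, 23.8, 14.9, 12.0])
print(f"NSE = {nse(obs, sim):.2f}, R2 = {r2(obs, sim):.2f}")
```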

Keywords: water erosion, SWAT model, streamflow, SWAT-CUP, sediment yield

Procedia PDF Downloads 101
28543 Multiscale Modelling of Citrus Black Spot Transmission Dynamics along the Pre-Harvest Supply Chain

Authors: Muleya Nqobile, Winston Garira

Abstract:

We presented a compartmental deterministic multi-scale model which encompasses the internal plant defensive mechanism and pathogen interaction; we then consider nesting this model into the epidemiological model. The objective was to improve our understanding of the within-host and between-host transmission dynamics of Guignardia citricarpa Kiely. The inflow of the infected class was scaled down to the individual level, while the outflow was scaled up to the average population level. A conceptual model and a mathematical model were constructed to provide a theoretical framework which can be used to predict or identify disease patterns.

Keywords: epidemiological model, mathematical modelling, multi-scale modelling, immunological model

Procedia PDF Downloads 458
28542 Determination of Tide Height Using Global Navigation Satellite Systems (GNSS)

Authors: Faisal Alsaaq

Abstract:

Hydrographic surveys have traditionally relied on the availability of tide information for the reduction of sounding observations to a common datum. In most cases, tide information is obtained from tide gauge observations and/or tide predictions over space and time using local, regional or global tide models. While the latter often provide a rather crude approximation, the former rely on tide gauge stations that are spatially restricted and often have a sparse and limited distribution. A more recent method that is increasingly being used is Global Navigation Satellite System (GNSS) positioning, which can be utilised to monitor height variations of a vessel or buoy, thus providing information on sea level variations during the time of a hydrographic survey. However, GNSS heights obtained under the dynamic environment of a survey vessel are affected by “non-tidal” processes such as wave activity and the attitude of the vessel (roll, pitch, heave and dynamic draft). This research seeks to examine techniques that separate the tide signal from other non-tidal signals that may be contained in GNSS heights. This requires an investigation of the processes involved and their temporal, spectral and stochastic properties in order to apply suitable techniques for recovering the tide information. In addition, different post-mission and near real-time GNSS positioning techniques will be investigated, with a focus on the estimation of height at sea. Furthermore, the study will investigate the possibility of transferring the chart datums at the locations of tide gauges.
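
One simple way to separate the slow tide signal from wave- and vessel-induced motion in a GNSS height series is zero-phase low-pass filtering; the sketch below, with an illustrative synthetic signal, sampling rate and cut-off, shows the idea (the study itself may rely on different or additional techniques).

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

# Separate the slow tide from wave activity and heave in a GNSS height series
# with a zero-phase low-pass filter whose cut-off is well below wave periods.
fs = 1.0                                   # GNSS height samples per second
t = np.arange(0, 6 * 3600, 1.0 / fs)       # six hours of data
tide = 0.8 * np.sin(2 * np.pi * t / (12.42 * 3600))        # semi-diurnal tide
waves = 0.3 * np.sin(2 * np.pi * t / 8.0)                  # 8 s swell
heave = 0.05 * np.random.default_rng(6).normal(size=t.size)
height = tide + waves + heave

cutoff_hz = 1.0 / 600.0                    # keep periods longer than ~10 min
sos = butter(4, cutoff_hz, btype="low", fs=fs, output="sos")
tide_estimate = sosfiltfilt(sos, height)

rmse = np.sqrt(np.mean((tide_estimate - tide) ** 2))
print(f"RMSE of recovered tide signal: {rmse * 100:.1f} cm")
```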

Keywords: hydrography, GNSS, datum, tide gauge

Procedia PDF Downloads 262
28541 Vehicle Timing Motion Detection Based on Multi-Dimensional Dynamic Detection Network

Authors: Jia Li, Xing Wei, Yuchen Hong, Yang Lu

Abstract:

Detecting vehicle behavior has always been the focus of intelligent transportation, but with the explosive growth in the number of vehicles and the complexity of the road environment, the vehicle behavior videos captured by traditional surveillance can no longer satisfy the study of vehicle behavior. The traditional method of manually labeling vehicle behavior is too time-consuming and labor-intensive, while the existing object detection and tracking algorithms have poor practicability and a low behavioral location detection rate. This paper proposes a vehicle behavior detection algorithm based on a dual-stream convolution network and a multi-dimensional video dynamic detection network. In the videos, the straight-line behavior of the vehicle defaults to the background behavior. Changing lanes, turning, and turning around are set as target behaviors. The purpose of this model is to automatically mark the target behavior of the vehicle from untrimmed videos. First, the target behavior proposals in the long video are extracted through the dual-stream convolution network. The model uses a dual-stream convolutional network to generate a one-dimensional action score waveform, and then extracts segments with scores above a given threshold M as preliminary vehicle behavior proposals. Second, the preliminary proposals are pruned and identified using the multi-dimensional video dynamic detection network. Drawing on hierarchical reinforcement learning, the multi-dimensional network includes a Timer module and a Spacer module, where the Timer module mines time information in the video stream and the Spacer module extracts spatial information in the video frame. The Timer and Spacer modules are implemented by Long Short-Term Memory (LSTM) and start from an all-zero hidden state. The Timer module uses the Transformer mechanism to extract timing information from the video stream and extracts features by linear mapping and other methods. Finally, the model fuses the time information and spatial information and obtains the location and category of the behavior through the softmax layer. This paper uses recall and precision to measure the performance of the model. Extensive experiments show that, on the dataset of this paper, the proposed model has obvious advantages compared with the existing state-of-the-art behavior detection algorithms. When the Time Intersection over Union (TIoU) threshold is 0.5, the Average Precision (MP) reaches 36.3% (the MP of the baselines is 21.5%). In summary, this paper proposes a vehicle behavior detection model based on a multi-dimensional dynamic detection network. This paper introduces spatial information and temporal information to extract vehicle behaviors in long videos. Experiments show that the proposed algorithm is advanced and accurate in vehicle timing behavior detection. In the future, the focus will be on simultaneously detecting the timing behavior of multiple vehicles in complex traffic scenes (such as a busy street) while ensuring accuracy.
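
The first stage described above, turning a one-dimensional action score waveform into preliminary proposals by thresholding at M, can be sketched as follows; the scores and threshold are illustrative.

```python
import numpy as np

# Turn a one-dimensional action score waveform (one score per frame) into
# preliminary behaviour proposals by keeping the segments whose score stays
# above a threshold M for at least min_len frames.
def extract_proposals(scores, threshold, min_len=5):
    """Return (start_frame, end_frame) pairs where scores >= threshold."""
    above = np.asarray(scores) >= threshold
    edges = np.diff(above.astype(int))
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    if above[0]:
        starts = np.r_[0, starts]
    if above[-1]:
        ends = np.r_[ends, above.size]
    return [(s, e) for s, e in zip(starts, ends) if e - s >= min_len]

rng = np.random.default_rng(7)
scores = np.clip(rng.normal(0.2, 0.1, 300), 0, 1)
scores[80:120] += 0.6      # a lane change somewhere in the video
scores[200:260] += 0.5     # a turn
print(extract_proposals(scores, threshold=0.5))
```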

Keywords: vehicle behavior detection, convolutional neural network, long short-term memory, deep learning

Procedia PDF Downloads 130
28540 Author Profiling: Prediction of Learners’ Gender on a MOOC Platform Based on Learners’ Comments

Authors: Tahani Aljohani, Jialin Yu, Alexandra I. Cristea

Abstract:

The more an educational system knows about a learner, the more personalised interaction it can provide, which leads to better learning. However, asking a learner directly is potentially disruptive, and often ignored by learners. Especially in the booming realm of Massive Open Online Course (MOOC) platforms, only a very low percentage of users disclose demographic information about themselves. Thus, in this paper, we aim to predict learners’ demographic characteristics by proposing an approach using linguistically motivated deep learning architectures for learner profiling, particularly targeting gender prediction on the FutureLearn MOOC platform. Additionally, we tackle here the difficult problem of predicting the gender of learners based on their comments only, which are often available across MOOCs. The most common current approaches to text classification use the Long Short-Term Memory (LSTM) model, considering sentences as sequences. However, human language also has structure. In this research, rather than considering sentences as plain sequences, we hypothesise that higher semantic- and syntactic-level sentence processing based on linguistics will render a richer representation. We thus evaluate the traditional LSTM against other bleeding-edge models which take syntactic structure into account, such as the tree-structured LSTM, the Stack-augmented Parser-Interpreter Neural Network (SPINN) and the Structure-Aware Tag Augmented model (SATA). Additionally, we explore using different word-level encoding functions. We have implemented these methods on our MOOC dataset, on which they are the most performant, and compared them against a public dataset on sentiment analysis that is further used as a cross-examination of the models’ results.
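
As a point of reference for the comparison above, a plain LSTM baseline for comment-level gender classification can be sketched in PyTorch as follows; the vocabulary size, dimensions and toy batch are illustrative assumptions, and the tree-structured, SPINN and SATA variants are not shown.

```python
import torch
import torch.nn as nn

# Baseline sequence model from the comparison: a plain LSTM over word
# indices with a binary (gender) output head.
class LSTMGenderClassifier(nn.Module):
    def __init__(self, vocab_size=10_000, embed_dim=100, hidden_dim=128):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 2)          # two gender classes

    def forward(self, token_ids):
        embedded = self.embedding(token_ids)          # (batch, seq, embed)
        _, (last_hidden, _) = self.lstm(embedded)     # final hidden state
        return self.head(last_hidden[-1])             # (batch, 2) logits

model = LSTMGenderClassifier()
batch = torch.randint(1, 10_000, (8, 40))             # 8 comments, 40 tokens each
labels = torch.randint(0, 2, (8,))
loss = nn.CrossEntropyLoss()(model(batch), labels)
loss.backward()
print("toy batch loss:", float(loss))
```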

Keywords: deep learning, data mining, gender prediction, MOOCs

Procedia PDF Downloads 147
28539 Analysis of Vocal Pathologies Through Subglottic Pressure Measurement

Authors: Perla Elizabeth Jimarez Rocha, Carolina Daniela Tejeda Franco, Arturo Minor Martínez, Annel Gomez Coello

Abstract:

One of the biggest problems in developing new therapies for the management and treatment of voice disorders is the difficulty of objectively evaluating the results of each treatment. A system was proposed that captures and records voice signals and analyzes vocal quality (fundamental frequency, zero crossings, energy, and amplitude spectrum) as well as the subglottic pressure (cm H2O) during the sustained phonation of the vowel /a/; a recording system is implemented, as well as an interactive system that records information on subglottic pressure. In Mexico City, a control group of 31 patients with phoniatric pathology is proposed; non-invasive tests were performed for the most common vocal pathologies (nodules, polyps, irritative laryngitis, ventricular dysphonia, laryngeal cancer, dysphonia, and dysphagia). The most common pathology was irritative laryngitis (32%), followed by vocal fold paralysis (unilateral and bilateral, 19.4%). We took both men and women into consideration in the pathological groups because of the physiological differences between them; they were separated by gender owing to the difference in the morphology of the respiratory tract.
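
The acoustic descriptors listed above (fundamental frequency, zero crossings, energy, amplitude spectrum) can be computed in a few lines; the sketch below uses a synthetic sustained tone and illustrative settings, not the recordings from the study.

```python
import numpy as np

# Compute the descriptors on a synthetic sustained /a/-like tone; the
# sampling rate, duration and F0 search range are illustrative assumptions.
fs = 16_000
t = np.arange(0, 0.5, 1.0 / fs)
f0_true = 180.0                                        # Hz, sustained phonation
signal = np.sin(2 * np.pi * f0_true * t) + 0.05 * np.random.default_rng(8).normal(size=t.size)

energy = np.sum(signal ** 2) / signal.size             # mean short-term energy
zero_crossings = np.sum(np.abs(np.diff(np.sign(signal)))) / 2

# Fundamental frequency via the first peak of the autocorrelation function.
ac = np.correlate(signal, signal, mode="full")[signal.size - 1:]
lag_min, lag_max = int(fs / 400), int(fs / 60)          # search 60-400 Hz
f0_est = fs / (lag_min + np.argmax(ac[lag_min:lag_max]))

spectrum = np.abs(np.fft.rfft(signal))                  # amplitude spectrum
print(f"energy={energy:.3f}, zero crossings={zero_crossings:.0f}, F0 ~ {f0_est:.1f} Hz")
```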

Keywords: amplitude spectrum, energy, fundamental frequency, subglottic pressure, zero crossings

Procedia PDF Downloads 120
28538 LGG Architecture for Brain Tumor Segmentation Using Convolutional Neural Network

Authors: Sajeeha Ansar, Asad Ali Safi, Sheikh Ziauddin, Ahmad R. Shahid, Faraz Ahsan

Abstract:

The most aggressive form of brain tumor is called glioma. Glioma is a kind of tumor that arises from the glial tissue of the brain and occurs quite often. A fully automatic 2D-CNN model for brain tumor segmentation is presented in this paper. We performed pre-processing steps to remove noise and intensity variances using N4ITK and standard intensity correction, respectively. We used the Keras open-source library with Theano as the backend for fast implementation of the CNN model. In addition, we used the BRATS 2015 MRI dataset to evaluate our proposed model. Furthermore, we have used the SimpleITK open-source library in our proposed model to analyze images. Moreover, we have extracted random 2D patches for the proposed 2D-CNN model for efficient brain segmentation. Extracting 2D patches instead of 3D reduces the amount of dimensional information to be processed, which helps us reduce computational time. The Dice Similarity Coefficient (DSC) is used as the performance measure for the evaluation of the proposed method. Our method achieved DSC scores of 0.77 for the complete, 0.76 for the core, and 0.77 for the enhanced tumor regions. These results are comparable with methods that have already implemented a 2D CNN architecture.
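
Two ingredients the abstract relies on, random 2D patch extraction and the Dice Similarity Coefficient, can be sketched in minimal form as follows; the patch size and toy masks are illustrative assumptions.

```python
import numpy as np

# Random extraction of 2D patches from an MRI slice and the Dice Similarity
# Coefficient used to score a predicted tumor mask against the ground truth.
def random_patches(image, mask, patch=33, n=4, rng=np.random.default_rng(9)):
    """Sample n (patch x patch) windows and the label of their centre voxel."""
    half = patch // 2
    xs = rng.integers(half, image.shape[0] - half, n)
    ys = rng.integers(half, image.shape[1] - half, n)
    return ([image[x - half:x + half + 1, y - half:y + half + 1] for x, y in zip(xs, ys)],
            [int(mask[x, y]) for x, y in zip(xs, ys)])

def dice(pred, truth, eps=1e-7):
    pred, truth = pred.astype(bool), truth.astype(bool)
    return (2.0 * np.logical_and(pred, truth).sum() + eps) / (pred.sum() + truth.sum() + eps)

slice_img = np.random.default_rng(9).normal(size=(240, 240))
truth = np.zeros((240, 240)); truth[100:140, 90:130] = 1     # toy tumor region
pred = np.zeros((240, 240));  pred[104:142, 92:128] = 1      # toy prediction
patches, labels = random_patches(slice_img, truth)
print("patch shape:", patches[0].shape, "centre labels:", labels)
print("DSC:", round(dice(pred, truth), 3))
```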

Keywords: brain tumor segmentation, convolutional neural networks, deep learning, LGG

Procedia PDF Downloads 182
28537 Detection of Flood-Prone Areas Using Multi-Criteria Evaluation, Geographical Information Systems and Fuzzy Logic: The Ardas Basin Case

Authors: Vasileiou Apostolos, Theodosiou Chrysa, Tsitroulis Ioannis, Maris Fotios

Abstract:

The severity of extreme phenomena is due to their ability to cause severe damage in a small amount of time. It has been observed that floods affect the greatest number of people and induce the biggest damage when compared to the total of annual natural disasters. The detection of potential flood-prone areas constitutes one of the fundamental components of the European Natural Disaster Management Policy, directly connected to the European Directive 2007/60. The aim of the present paper is to develop a new methodology that combines geographical information, fuzzy logic and multi-criteria evaluation methods so that the most vulnerable areas are defined. Therefore, ten factors related to the geophysical, morphological, climatological/meteorological and hydrological characteristics of the basin were selected. Afterwards, two models were created to detect the areas most prone to flooding. The first model defined the weight of each factor using the Analytical Hierarchy Process (AHP), and the final map of possible flood spots was created using GIS and Boolean algebra. The second model made use of the combination of fuzzy logic and GIS, and a respective map was created. The application area of the aforementioned methodologies was the Ardas basin, due to the frequent and important floods that have taken place there in recent years. Then, the results were compared to the already observed floods. The result analysis shows that both models can detect possible flood spots with great precision. As the fuzzy logic model is less time-consuming, it is considered the ideal model to apply to other areas. The said results are capable of contributing to the delineation of high-risk areas and to the creation of successful management plans dealing with floods.
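
The AHP step of the first model, deriving factor weights from a pairwise-comparison matrix via its principal eigenvector and checking consistency, can be sketched as follows; the 3x3 matrix is illustrative, not the study's ten-factor comparison.

```python
import numpy as np

# AHP: factor weights are the normalized principal eigenvector of the
# pairwise-comparison matrix; the consistency ratio checks the judgements.
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])            # pairwise importance judgements

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)       # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]        # Saaty's random index
print("factor weights:", np.round(weights, 3), " CR =", round(ci / ri, 3))
```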

Keywords: analytical hierarchy process, flood prone areas, fuzzy logic, geographic information system

Procedia PDF Downloads 379
28536 Plant Growth, Symbiotic Performance and Grain Yield of 63 Common Bean Genotypes Grown Under Field Conditions at Malkerns Eswatini

Authors: Rotondwa P. Gunununu, Mustapha Mohammed, Felix D. Dakora

Abstract:

Common bean is the most important high-protein grain legume grown in Southern Africa for human consumption and income generation. Although common bean can associate with rhizobia to fix N₂ for bacterial use and plant growth, it is reported to be a poor nitrogen fixer when compared to other legumes. N₂ fixation can vary with legume species, genotype, and rhizobial strain. Therefore, screening legume germplasm can reveal rhizobia/genotype combinations with high N₂-fixing efficiency for use by farmers. This study assessed the symbiotic performance and N₂ fixation of 63 common bean genotypes under field conditions at Malkerns Station in Eswatini, using the ¹⁵N natural abundance technique. The shoots of the common bean genotypes were sampled at the pod-filling stage, oven-dried (65°C for 72 h), weighed, ground into a fine powder (0.50 mm sieve), and subjected to ¹⁵N/¹⁴N isotopic analysis using mass spectrometry. At maturity, plants from the inner rows were harvested for the determination of grain yield. The results revealed significantly higher nodulation (p≤0.05) in genotypes MCA 98 and CIM-RM01-97-8 relative to the other genotypes. Shoot N concentration was highest in genotype MCA 98, followed by KAB 10 F2.8-84, with most genotypes showing shoot N concentrations below 2%. The percentage of N derived from atmospheric N₂ fixation (%Ndfa) differed markedly among genotypes, with CIM-RM01-92-3 and DAB 174 recording the highest values of 66.65% and 66.22% N derived from fixation, respectively. There were also significant differences in grain yield, with CIM-RM02-79-1 producing the highest yield (3618.75 kg/ha). These results represent an important contribution to the profiling of the symbiotic functioning of common bean germplasm for improved N₂ fixation.
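
The %Ndfa values above come from the standard ¹⁵N natural abundance calculation, which can be written out as follows; the δ¹⁵N and B values in the example are illustrative, not measurements from the study.

```python
# %Ndfa = 100 * (d15N_reference - d15N_legume) / (d15N_reference - B),
# where d15N_reference is from a non-fixing reference plant and B is the
# d15N of the legume when fully dependent on N2 fixation.
def ndfa_percent(d15n_reference, d15n_legume, b_value):
    return 100.0 * (d15n_reference - d15n_legume) / (d15n_reference - b_value)

d15n_reference = 5.2      # non-fixing reference plant (per mil), illustrative
d15n_legume = 1.8         # common bean shoot (per mil), illustrative
b_value = -1.97           # illustrative B value for common bean shoots

print(f"%Ndfa = {ndfa_percent(d15n_reference, d15n_legume, b_value):.1f} %")
```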

Keywords: nitrogen fixation, %Ndfa, ¹⁵N natural abundance, grain yield

Procedia PDF Downloads 218