Search results for: information seeking models

16744 Legal Means for Access to Information Management

Authors: Sameut Bouhaik Mostafa

Abstract:

The Access to Information Act is the Canadian law that gives the public a right of access to information held by government institutions. It declares that government information should be available to the public, that exceptions to this right should be limited and specific, and that decisions on the disclosure of government information should be reviewed independently of the government. By 1982, a dozen countries, including France, Denmark, Finland, Sweden, the Netherlands and the United States (1966), had enacted comparable access-to-information legislation. The Canadian Act came into force in 1983 under the government of Pierre Trudeau, allowing Canadians to retrieve information from government files, defining what information can be accessed, and imposing timetables for responses. It is administered by the Information Commissioner of Canada.

Keywords: law, information, management, legal

Procedia PDF Downloads 415
16743 Copula Markov Switching Multifractal Models for Forecasting Value-at-Risk

Authors: Giriraj Achari, Malay Bhattacharyya

Abstract:

In this paper, the effectiveness of Copula Markov Switching Multifractal (MSM) models at forecasting the Value-at-Risk of a two-stock portfolio is studied. The innovations are allowed to be drawn from distributions that can capture skewness and leptokurtosis, which are well-documented empirical characteristics of financial returns. The candidate distributions considered for this purpose are the Johnson-SU, Pearson Type-IV and α-Stable distributions. The two univariate marginal distributions are combined using the Student-t copula. All parameters are estimated by Maximum Likelihood Estimation. Finally, the models are compared in terms of the accuracy of their Value-at-Risk (VaR) forecasts using tests of unconditional coverage and independence. It is found that Copula-MSM models with leptokurtic innovation distributions perform slightly better than the Copula-MSM model with Normal innovations. Copula-MSM models, in general, produce better VaR forecasts than traditional methods such as the Historical Simulation method, the Variance-Covariance approach and Copula-Generalized Autoregressive Conditional Heteroscedasticity (Copula-GARCH) models.
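As a brief illustration of the backtesting step, the unconditional coverage test mentioned above is commonly implemented as Kupiec's proportion-of-failures likelihood-ratio test. The following is a minimal sketch, not the authors' code, of how such a test could be computed from realized returns and VaR forecasts; the variable names and the VaR sign convention are assumptions.

```python
import numpy as np
from scipy.stats import chi2

def kupiec_pof_test(returns, var_forecasts, alpha=0.01):
    """Kupiec proportion-of-failures (unconditional coverage) test.

    returns       : realized portfolio returns
    var_forecasts : VaR forecasts expressed as return quantiles (negative for losses)
    alpha         : nominal tail probability, e.g. 0.01 for 99% VaR
    Assumes at least one, but not all, observations violate the VaR.
    """
    returns = np.asarray(returns)
    var_forecasts = np.asarray(var_forecasts)
    violations = returns < var_forecasts          # days the loss exceeded the VaR
    x, T = int(violations.sum()), len(returns)
    pi_hat = x / T                                # observed violation rate
    log_lik_null = (T - x) * np.log(1 - alpha) + x * np.log(alpha)
    log_lik_alt = (T - x) * np.log(1 - pi_hat) + x * np.log(pi_hat)
    lr_uc = -2.0 * (log_lik_null - log_lik_alt)   # LR statistic, ~ chi2(1) under H0
    return lr_uc, 1 - chi2.cdf(lr_uc, df=1)
```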

Keywords: Copula, Markov Switching, multifractal, value-at-risk

Procedia PDF Downloads 164
16742 Digital Marketing Maturity Models: Overview and Comparison

Authors: Elina Bakhtieva

Abstract:

The variety of available digital tools, strategies and activities might confuse and disorient even an experienced marketer. This applies in particular to B2B companies, which are usually less flexible in the uptake of digital technology than B2C companies. B2B companies lack a framework that corresponds to the specifics of B2B business and helps to evaluate a company’s capabilities and to choose an appropriate path. A B2B digital marketing maturity model helps to fill this gap. However, modern marketing offers no widely accepted digital marketing maturity model, and thus some marketing institutions provide their own tools. The purpose of this paper is to build an optimized B2B digital marketing maturity model based on a SWOT (strengths, weaknesses, opportunities, and threats) analysis of existing models. The current study provides an analytical review of the existing digital marketing maturity models with open access. The results of the research are twofold. First, the SWOT analysis outlines the main advantages and disadvantages of the existing models. Second, the analysis of the strengths of the existing digital marketing maturity models helps to identify the main characteristics and the structure of an optimized B2B digital marketing maturity model. The research findings indicate that only one out of the three analyzed models could be used as a stand-alone tool. This study is among the first to examine the use of maturity models in digital marketing. It helps businesses choose the most effective of the existing digital marketing maturity models. Moreover, it creates a base for future research on digital marketing maturity models. This study contributes to the emerging B2B digital marketing literature by providing a SWOT analysis of the existing digital marketing maturity models and by suggesting the structure and main characteristics of an optimized B2B digital marketing maturity model.

Keywords: B2B digital marketing strategy, digital marketing, digital marketing maturity model, SWOT analysis

Procedia PDF Downloads 344
16741 Classification of Health Information Needs of Hypertensive Patients in the Online Health Community Based on Content Analysis

Authors: Aijing Luo, Zirui Xin, Yifeng Yuan

Abstract:

Background: With the rapid development of the online health community, more and more patients and families are seeking health information on the Internet. Objective: This study aimed to discuss how to fully reveal the health information needs expressed by hypertensive patients in their questions in the online environment. Methods: This study randomly selected 1,000 text records from the question data of hypertensive patients from 2008 to 2018 collected from the website www.haodf.com and constructed a classification system through literature research and content analysis. The background characteristics and questioning intention of each hypertensive patient were identified from the patient’s question, and co-occurrence network analysis was used to explore the features of the health information needs of hypertensive patients. Results: The classification system for the health information needs of patients with hypertension is composed of 9 parts: 355 kinds of drugs, 395 kinds of symptoms and signs, 545 kinds of tests and examinations, 526 kinds of demographic data, 80 kinds of diseases, 37 kinds of risk factors, 43 kinds of emotions, 6 kinds of lifestyles, and 49 kinds of questions. The characteristics of the online health information needs of hypertensive patients include: i) more than 49% of patients describe features such as drugs, symptoms and signs, tests and examinations, demographic data, and diseases; ii) these groups are most concerned about treatment (77.8%), followed by diagnosis (32.3%); iii) 65.8% of hypertensive patients ask doctors several questions online at the same time. 28.3% of the patients are very concerned about how to adjust their medication and ask other treatment-related questions at the same time, including drug side effects, whether to take drugs, and how to treat a disease; in addition, 17.6% of the patients consult doctors online about the causes of their clinical findings, including the relationship between the clinical findings and a disease, the treatment of a disease, medication, and examinations. Conclusion: In the online environment, the health information needs expressed by Chinese hypertensive patients to doctors are personalized; that is, patients with different background characteristics express different questioning intentions to doctors. The classification system constructed in this study can guide health information service providers in building online health resources and help to reduce the information asymmetry in communication between doctors and patients.
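As a rough sketch of the co-occurrence network analysis described above, the snippet below counts how often pairs of need categories are raised in the same patient question and builds a weighted graph; the category labels and data structure are illustrative assumptions, not the study's actual data.

```python
from itertools import combinations
import networkx as nx

# Each question is annotated with the need categories it expresses
# (hypothetical example data, not from the study).
questions = [
    {"drugs", "symptoms and signs", "treatment"},
    {"tests and examinations", "diagnosis"},
    {"drugs", "treatment", "risk factors"},
]

G = nx.Graph()
for cats in questions:
    for a, b in combinations(sorted(cats), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1        # the two categories co-occur again
        else:
            G.add_edge(a, b, weight=1)    # first co-occurrence of this pair

# Edge weights now measure how often two information needs are raised together.
print(sorted(G.edges(data="weight"), key=lambda e: -e[2]))
```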

Keywords: online health community, health information needs, hypertensive patients, doctor-patient communication

Procedia PDF Downloads 119
16740 Investigating the Performance of Machine Learning Models on PM2.5 Forecasts: A Case Study in the City of Thessaloniki

Authors: Alexandros Pournaras, Anastasia Papadopoulou, Serafim Kontos, Anastasios Karakostas

Abstract:

The air quality of modern cities is an important concern, as poor air quality contributes to human health problems and environmental issues. Reliable air quality forecasting has thus gained scientific and governmental attention as an essential tool that enables authorities to take proactive measures for public safety. In this study, the potential of Machine Learning (ML) models to forecast PM2.5 at local scale is investigated in the city of Thessaloniki, the second largest city in Greece, which has been struggling with the persistent issue of air pollution. ML models, with proven ability to address time series forecasting, are employed to predict PM2.5 concentrations and the respective Air Quality Index (AQI) five days ahead by learning from daily historical air quality and meteorological data from 2014 to 2016, gathered from two stations with different land use characteristics in the urban fabric of Thessaloniki. The performance of the ML models on PM2.5 concentrations is evaluated with common statistical measures, such as R squared (r²) and Root Mean Squared Error (RMSE), using a portion of the stations’ measurements as the test set. A multi-categorical evaluation is used to assess their performance on the respective AQIs. Several conclusions were drawn from the experiments conducted. Experimenting with the ML models’ configuration revealed a moderate effect of the various parameters and training schemes on the predictions. All models were found to produce satisfactory results on PM2.5 concentrations. In addition, their application to untrained stations showed that these models can perform well, indicating generalized behavior. Moreover, their performance on the AQI was even better, showing that the ML models can be used as predictors of the AQI, which is the information provided directly to the general public.
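The abstract does not name the specific ML algorithms used, so the following is only a minimal sketch of one possible setup: a random forest trained on daily station data to predict PM2.5 five days ahead, scored with RMSE and r². The file name and column names are assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score

# Hypothetical daily station data: current air quality and meteorology as features,
# PM2.5 concentration five days ahead as the target (column names are assumed).
df = pd.read_csv("station_daily.csv", parse_dates=["date"]).set_index("date")
features = ["pm25", "pm10", "no2", "temperature", "wind_speed", "humidity"]
df["pm25_t+5"] = df["pm25"].shift(-5)          # 5-day-ahead target
df = df.dropna()

# Chronological split: earlier records for training, later ones for testing
split = int(len(df) * 0.8)
X_train, y_train = df[features].iloc[:split], df["pm25_t+5"].iloc[:split]
X_test, y_test = df[features].iloc[split:], df["pm25_t+5"].iloc[split:]

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_train, y_train)
pred = model.predict(X_test)
rmse = np.sqrt(mean_squared_error(y_test, pred))
print(f"RMSE = {rmse:.2f} ug/m3, r2 = {r2_score(y_test, pred):.3f}")
```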

Keywords: Air Quality, AQ Forecasting, AQI, Machine Learning, PM2.5

Procedia PDF Downloads 77
16739 Energy Models for Analyzing the Economy-Wide Impact of Environmental Policies

Authors: Majdi M. Alomari, Nafesah I. Alshdaifat, Mohammad S. Widyan

Abstract:

Different countries have introduced different schemes and policies to counter global warming. The rationale behind the proposed policies and the potential barriers to their successful implementation were analyzed and estimated based on different models. It is argued that these models enhance transparency and provide a better understanding to policy makers. However, these models are underpinned by several structural and baseline assumptions. These assumptions, modeling features and projections of emission reductions, as well as other implications such as the costs and benefits of a transition to a low-carbon economy and its economy-wide impacts, are discussed. On the other hand, there are potential barriers, in political, financial, cultural and many other forms, that pose a threat to the mitigation options.

Keywords: energy models, environmental policy instruments, mitigating CO2 emissions, economy-wide impact

Procedia PDF Downloads 523
16738 Financial Information and Collective Bargaining: Conflicting or Complementing

Authors: Humayun Murshed, Shibly Abdullah

Abstract:

The research conducted in the early seventies apparently assumed the existence of a universal decision model for union negotiators and, furthermore, tended to regard financial information as a ‘neutral’ input into a rational decision-making process. However, research in the eighties began to question the neutrality of financial information as an input in collective bargaining, viewing it instead as a potentially effective means of controlling the labour force. Furthermore, this later research also started challenging the simplistic assumptions, relating particularly to union objectives, which had underpinned the earlier search for universal union decision models. Despite these developments, there seems to be a dearth of studies in developing countries concerning the use of financial information in collective bargaining. This paper seeks to begin to remedy this deficiency. Utilising a case study approach based on two enterprises, one in the public sector and the other a multinational, the universal decision model is rejected, and it is argued that the decision whether or not to use financial information is a contingent one, with the contingency largely defined by the context and environment in which both union and management negotiators work. An attempt is also made to identify the factors constraining as well as promoting the use of financial information in collective bargaining, these being regarded as unique to the organizations within which the case studies were conducted.

Keywords: collective bargaining, developing countries, disclosures, financial information

Procedia PDF Downloads 471
16737 3D Object Retrieval Based on Similarity Calculation in 3D Computer Aided Design Systems

Authors: Ahmed Fradi

Abstract:

Recent technological advances in the acquisition, modeling, and processing of three-dimensional (3D) object data have led to the creation of models stored in huge databases, which are used in various domains such as computer vision, augmented reality, the game industry, medicine, CAD (computer-aided design), 3D printing, etc. At the same time, the industry benefits from powerful modeling tools enabling designers to easily and quickly produce 3D models. This great ease of acquisition and modeling of 3D objects makes it possible to create large 3D model databases, which then become difficult to navigate. Therefore, the indexing of 3D objects appears as a necessary and promising solution to manage this type of data, to extract model information, to retrieve an existing model, or to calculate the similarity between 3D objects. The objective of the proposed research is to develop a framework allowing easy and fast access to 3D objects in a CAD model database, with a specific indexing algorithm to find objects similar to a reference model. Our main objectives are to study existing methods for similarity calculation between 3D objects (essentially shape-based methods), specifying the characteristics of each method as well as the differences between them, and then to propose a new approach for indexing and comparing 3D models that is suitable for our case study and based on some of the previously studied methods. Our proposed approach is finally illustrated by an implementation and evaluated in a professional context.

Keywords: CAD, 3D object retrieval, shape based retrieval, similarity calculation

Procedia PDF Downloads 262
16736 Operating Speed Models on Tangent Sections of Two-Lane Rural Roads

Authors: Dražen Cvitanić, Biljana Maljković

Abstract:

This paper presents models for predicting operating speeds on tangent sections of two-lane rural roads developed from continuous speed data. The data correspond to 20 drivers of different ages and driving experience, driving their own cars along an 18 km long section of a state road. The data were first used to determine the maximum operating speeds on tangents and to compare them with speeds in the middle of the tangents, i.e., the speed data used in most operating speed studies. Analysis of the continuous speed data indicated that spot speed data are not reliable indicators of the relevant speeds. After that, operating speed models for tangent sections were developed. There was no significant difference between models developed using speed data in the middle of tangent sections and models developed using maximum operating speeds on tangent sections. All developed models have a higher coefficient of determination than models developed from spot speed data. Thus, it can be concluded that the method of measurement has a more significant impact on the quality of an operating speed model than the location of measurement.
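Operating speed models of this kind are typically linear regressions of the 85th-percentile speed on tangent-section geometry. The sketch below is purely illustrative, with assumed predictor names and made-up values rather than the paper's data, showing how such a model and its coefficient of determination could be fitted.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical tangent-section data: maximum 85th-percentile operating speed (km/h)
# versus geometric predictors (column names and values are assumed, not from the study).
data = pd.DataFrame({
    "v85_max":           [92.1, 88.4, 97.3, 84.0, 95.6],  # max operating speed on tangent
    "tangent_length":    [450, 320, 610, 280, 560],        # m
    "prev_curve_radius": [220, 180, 400, 150, 350],        # m
})

X = sm.add_constant(data[["tangent_length", "prev_curve_radius"]])
ols = sm.OLS(data["v85_max"], X).fit()
print(ols.params)       # regression coefficients
print(ols.rsquared)     # coefficient of determination
```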

Keywords: operating speed, continuous speed data, tangent sections, spot speed, consistency

Procedia PDF Downloads 452
16735 Control-Oriented Enhanced Zero-Dimensional Two-Zone Combustion Modelling of Internal Combustion Engines

Authors: Razieh Arian, Hadi Adibi-Asl

Abstract:

This paper investigates efficient combustion modeling for cycle simulation in internal combustion engine (ICE) studies. The term “efficient model” means that the model must generate the desired simulation results while having a fast simulation time; in other words, an efficient model is defined based on the application of the model. The objective of this study is to develop math-based models for control applications, or, in short, control-oriented models. This study compares different modeling approaches used to model ICEs, such as mean-value models, zero-dimensional, quasi-dimensional, and multi-dimensional models, for control applications. Mean-value models have been widely used for model-based control applications, but recently, with the development of advanced simulation tools (e.g., Maple/MapleSim), higher-order (more complex) models can also be considered as control-oriented models. This paper presents enhanced zero-dimensional, cycle-by-cycle modeling and simulation of a spark ignition engine with a two-zone combustion model. The simulation results are cross-validated against simulation results from the GT-Power package and show good agreement in terms of trends and values.
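The keywords mention the Wiebe function, which is the standard way to prescribe the cumulative mass fraction burned in zero-dimensional combustion models. Below is a minimal sketch of that function; the parameter values (a = 5, m = 2) are common textbook defaults, not those used by the authors.

```python
import numpy as np

def wiebe_burn_fraction(theta, theta_start, burn_duration, a=5.0, m=2.0):
    """Wiebe function: cumulative mass fraction burned versus crank angle.

    theta         : crank angle (deg)
    theta_start   : start of combustion (deg)
    burn_duration : total burn duration (deg)
    a, m          : efficiency and form factors (typical values, assumed here)
    """
    x = np.clip((theta - theta_start) / burn_duration, 0.0, 1.0)
    return 1.0 - np.exp(-a * x ** (m + 1.0))

theta = np.linspace(-30, 60, 91)   # crank angle sweep around top dead center
xb = wiebe_burn_fraction(theta, theta_start=-5.0, burn_duration=50.0)
```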

Keywords: two-zone combustion, control-oriented model, Wiebe function, internal combustion engine

Procedia PDF Downloads 340
16734 Using Complete Soil Particle Size Distributions for More Precise Predictions of Soil Physical and Hydraulic Properties

Authors: Habib Khodaverdiloo, Fatemeh Afrasiabi, Farrokh Asadzadeh, Martinus Th. Van Genuchten

Abstract:

The soil particle-size distribution (PSD) is known to affect a broad range of soil physical, mechanical and hydraulic properties. A complete description of the PSD curve should provide more information about these properties than knowledge of only the soil textural class or the soil sand, silt and clay (SSC) fractions. We compared the accuracy of 19 different models of the cumulative PSD in terms of fitting observed data from a large number of Iranian soils. The parameters of the six most promising models were correlated with measured values of the field saturated hydraulic conductivity (Kfs), the mean weight diameter of soil aggregates (MWD), bulk density (ρb), and porosity (∅). These same soil properties were also correlated with conventional PSD parameters (the SSC fractions), selected geometric PSD parameters (notably the mean diameter dg and its standard deviation σg), and several other PSD parameters (D50 and D60). The objective was to find the best predictors of several soil physical quality indices and the soil hydraulic properties. Neither SSC nor dg, σg, D50 and D60 were found to have a significant correlation with Kfs or logKfs. However, the parameters of several cumulative PSD models showed statistically significant correlations with Kfs and/or logKfs (|r| = 0.42 to 0.65; p ≤ 0.05). The correlation between MWD and the model parameters was also generally higher than the correlations with the SSC fractions and dg, or with D50 and D60. Porosity (∅) and bulk density (ρb) also showed significant correlations with several PSD model parameters, with ρb additionally correlating significantly with various geometric (dg), mechanical (D50 and D60), and agronomic (clay and sand) representations of the PSD. The fitted parameters of selected PSD models furthermore showed statistically significant correlations with Kfs, MWD and soil porosity, which may be viewed as soil quality indices. The results of this study are promising for developing more accurate pedotransfer functions.
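The abstract does not list the 19 PSD models, so the sketch below uses a generic two-parameter closed-form cumulative PSD curve purely as an example of the fitting-and-correlation workflow; the model form, the sieve data and the property names are assumptions, not the paper's models or data.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import pearsonr

def psd_model(d, dg, n):
    """Example two-parameter cumulative PSD curve, P(d) = 1 / (1 + (dg/d)**n).

    d  : particle diameter (mm); dg : scale parameter (mm); n : shape parameter.
    """
    return 1.0 / (1.0 + (dg / d) ** n)

# Hypothetical sieve/hydrometer data for one soil: diameters (mm) and cumulative fraction finer
d_obs = np.array([0.002, 0.02, 0.05, 0.25, 2.0])
p_obs = np.array([0.18, 0.35, 0.48, 0.78, 1.00])

(dg, n), _ = curve_fit(psd_model, d_obs, p_obs, p0=[0.05, 1.0])

# With fitted parameters collected over many soils, they can be correlated with
# measured properties such as log(Kfs) or MWD, e.g.:
# r, p_value = pearsonr(fitted_n_values, np.log10(kfs_values))
```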

Keywords: particle size distribution, soil texture, hydraulic conductivity, pedotransfer functions

Procedia PDF Downloads 279
16733 A Comparison between Artificial Neural Network Prediction Models for Coronal Hole Related High Speed Streams

Authors: Rehab Abdulmajed, Amr Hamada, Ahmed Elsaid, Hisashi Hayakawa, Ayman Mahrous

Abstract:

Solar emissions have a high impact on the Earth’s magnetic field, and the prediction of solar events is of high interest. Various techniques have been used in the prediction of the solar wind, including mathematical models, MHD models, and neural network (NN) models. This study investigates coronal hole (CH) derived high-speed streams (HSSs) and their correlation with the CH area and creates a neural network model to predict the HSSs. Two different algorithms were used to compare different models and to find the model that best simulates the HSSs. A dataset of CH synoptic maps for Carrington rotations 1601 to 2185, along with solar wind speeds from the OMNI dataset averaged over the Carrington rotations, is used; it covers solar cycles 21, 22, 23, and most of 24.
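A minimal sketch of a feed-forward network of the kind compared here is given below: it regresses rotation-averaged solar wind speed on (lagged) coronal hole area. The synthetic data, network size and preprocessing are illustrative assumptions only, not the study's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

# Hypothetical per-Carrington-rotation data: fractional coronal hole area
# (current plus two lagged rotations) and the rotation-averaged solar wind speed.
rng = np.random.default_rng(0)
ch_area = rng.uniform(0.0, 0.15, size=(500, 3))                     # synthetic features
wind_speed = 350 + 2500 * ch_area[:, 0] + rng.normal(0, 30, 500)    # km/s, synthetic target

X_tr, X_te, y_tr, y_te = train_test_split(ch_area, wind_speed, test_size=0.2, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
net.fit(X_tr, y_tr)
rmse = np.sqrt(mean_squared_error(y_te, net.predict(X_te)))
print(f"test RMSE = {rmse:.1f} km/s")
```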

Keywords: artificial neural network, coronal hole area, feed-forward neural network models, solar high speed streams

Procedia PDF Downloads 88
16732 Patients in Opioid Maintenance Programs: Psychological Features that Predict Abstinence

Authors: Janaina Pereira, Barbara Gonzalez, Valentina Chitas, Teresa Molina

Abstract:

Intro: The positive impact of opioid maintenance programs on the health of heroin addicts, and on public health in general, has been widely recognized, namely in the reduced prevalence of infectious diseases such as HIV and in the social reintegration of this population. Nevertheless, some patients in these programs cannot remain abstinent from heroin, or relapse, during treatment. Method: This cross-sectional research therefore aims to analyze the relation between a set of psychological and psychosocial variables that have been associated with the onset of heroin use and to assess whether they are also associated with the absence of abstinence in participants in an opioid maintenance program. A total of 62 patients, aged between 26 and 58 years old (M = 40.87, SD = 7.39), with a time in the opioid maintenance program between 1 and 10 years (M = 5.42, SD = 3.05), 77.4% male and 22.6% female, participated in this research. To assess the criterion variable (heroin use), we used the mean number of positive results in urine tests during participation in the program, weighted according to the number of months in the program. The predictor variables were coping strategies, dispositional sensation seeking, and the existence of posttraumatic stress disorder (PTSD). Results: The results showed that only 33.87% of the patients were totally abstinent from heroin use since the beginning of the program, and the absence of abstinence, measured as the number of positive heroin tests, was primarily predicted by less proactive coping and secondarily by a higher level of sensation seeking. 16.13% of the sample fulfilled the diagnostic criteria for PTSD, and 67.74% had at least one traumatic experience in their lives. The total number of PTSD symptoms was positively correlated with the number of physical health problems and with the lack of a professional occupation. These results have several implications for clinical practice in this field. We suggest that the promotion of proactive coping strategies should be integrated into opioid maintenance programs, as proactive coping represents the tendency to face future events as challenges and opportunities and is positively related to favorable outcomes in several domains. The early identification of PTSD in participants, before they enter opioid maintenance programs, would be important, as PTSD is related to negative features that hinder social reintegration. Finally, identifying individuals with a sensation-seeking profile would be relevant, not only because they face a higher risk of relapse, but also because therapeutic approaches should not ignore this dispositional feature in the alternatives they propose to patients.

Keywords: opioid maintenance programs, proactive coping, PTSD, sensation seeking

Procedia PDF Downloads 128
16731 Production Optimization under Geological Uncertainty Using Distance-Based Clustering

Authors: Byeongcheol Kang, Junyi Kim, Hyungsik Jung, Hyungjun Yang, Jaewoo An, Jonggeun Choe

Abstract:

It is important to characterize reservoir properties for better production management. Due to limited information, there are geological uncertainties in highly heterogeneous or channelized reservoirs. One solution is to generate multiple equiprobable realizations using geostatistical methods. However, some models have wrong properties and need to be excluded for simulation efficiency and reliability. We propose a novel model selection scheme based on distance-based clustering for the reliable application of a production optimization algorithm. Distance is defined as a degree of dissimilarity between the data. We calculate the Hausdorff distance to classify the models based on their similarity; the Hausdorff distance is useful for shape matching of the reservoir models. We use multi-dimensional scaling (MDS) to describe the models in a two-dimensional space and group them by K-means clustering. Rather than simulating all models, we choose one representative model from each cluster and find the best model, which has production rates similar to the true values. From this process, we can select good reservoir models near the best model with high confidence. We generate 100 channel reservoir models using single normal equation simulation (SNESIM). Since oil and gas prefer to flow through the sand facies, it is critical to characterize the pattern and connectivity of the channels in the reservoir. After calculating the Hausdorff distances and projecting the models by MDS, we can see that the models cluster according to their channel patterns. These channel distributions affect the operation controls of each production well, so the model selection scheme improves the management optimization process. We use one of the useful global search algorithms, particle swarm optimization (PSO), for our production optimization. PSO is good at finding the global optimum of an objective function, but it takes much time because it uses many particles and iterations. In addition, if we use multiple reservoir models, the simulation time for PSO soars. By using the proposed method, we can select good and reliable models that already match the production data. Considering the geological uncertainty of the reservoir, we can obtain well-optimized production controls for maximum net present value. The proposed method offers a novel solution for selecting good cases among the various possibilities. The model selection scheme can be applied not only to production optimization but also to history matching and other ensemble-based methods for efficient simulation.
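A compact sketch of the selection pipeline described above is given below: pairwise Hausdorff distances between realizations, MDS projection to two dimensions, K-means clustering, and the choice of one representative per cluster. Representing each realization as a set of channel-cell coordinates, and the cluster count, are assumptions made only for illustration.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff
from sklearn.manifold import MDS
from sklearn.cluster import KMeans

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets (e.g. channel-cell coordinates)."""
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

def select_representatives(models, n_clusters=10, seed=0):
    """models: list of (n_cells_i, 2) arrays of channel-cell coordinates per realization."""
    n = len(models)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            D[i, j] = D[j, i] = hausdorff(models[i], models[j])

    # Project the models onto a 2-D space that preserves the pairwise distances
    coords = MDS(n_components=2, dissimilarity="precomputed",
                 random_state=seed).fit_transform(D)
    labels = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(coords)

    # Pick the model closest to each cluster centroid as that cluster's representative
    reps = []
    for k in range(n_clusters):
        members = np.where(labels == k)[0]
        centroid = coords[members].mean(axis=0)
        reps.append(members[np.argmin(np.linalg.norm(coords[members] - centroid, axis=1))])
    return reps, labels
```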

Keywords: distance-based clustering, geological uncertainty, particle swarm optimization (PSO), production optimization

Procedia PDF Downloads 143
16730 Perspectives of Computational Modeling in Sanskrit Lexicons

Authors: Baldev Ram Khandoliyan, Ram Kishor

Abstract:

India has a classical tradition of Sanskrit lexicons, and research has been done on the study of Indian lexicography. India has also seen amazing strides in Information and Communication Technology (ICT) applications for Indian languages in general and for Sanskrit in particular. Since machine translation from Sanskrit to other Indian languages is often the desired goal, traditional Sanskrit lexicography has attracted a lot of attention from the ICT and computational linguistics community. From the Nighaṇṭu and Nirukta to the Amarakośa and Medinīkośa, Sanskrit has a rich history of lexicography. As these kośas do not follow the same typology or standard in the selection and arrangement of words and the information related to them, several kośa styles have emerged in this tradition. The model of grammar given by the Aṣṭādhyāyī is well appreciated by Indian and Western linguists and grammarians, but the different models provided by the lexicographic tradition are also important. The general usefulness of traditional Sanskrit kośas has been discussed by some scholars, mostly with respect to the material made available in the texts; some have also discussed the arrangement of the lexica. This paper aims to discuss further uses of the different models of Sanskrit lexicography, focusing especially on their computational modeling and their use in different computational operations.

Keywords: computational lexicography, Sanskrit lexicons, Nighaṇṭu, kośa, Amarakośa

Procedia PDF Downloads 164
16729 Superiority of High Frequency Based Volatility Models: Empirical Evidence from an Emerging Market

Authors: Sibel Celik, Hüseyin Ergin

Abstract:

The paper aims to find the best volatility forecasting model for stock markets in Turkey. For this purpose, we compare the performance of different volatility models, both the traditional GARCH model and high-frequency-based volatility models, and conclude that in both the pre-crisis and crisis periods, high-frequency-based volatility models perform better than the traditional GARCH model. The findings of the paper are important for policy makers, financial institutions and investors.
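High-frequency-based volatility models are typically built on the realized volatility measure: the sum of squared intraday returns within each trading day. The function below is a minimal sketch of that computation from an intraday price series; the 5-minute sampling interval and the input format are assumptions, not the paper's specification.

```python
import numpy as np
import pandas as pd

def daily_realized_volatility(prices: pd.Series, freq: str = "5min") -> pd.Series:
    """Daily realized volatility from intraday prices.

    prices : intraday price series indexed by timestamp (hypothetical input).
    Resamples to a fixed grid (5-minute by default), takes log returns,
    and sums their squares within each trading day.
    """
    sampled = prices.resample(freq).last().dropna()
    log_ret = np.log(sampled).diff().dropna()
    realized_var = (log_ret ** 2).groupby(log_ret.index.date).sum()
    return np.sqrt(realized_var)   # realized volatility per day
```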

Keywords: volatility, GARCH model, realized volatility, high frequency data

Procedia PDF Downloads 486
16728 Credit Risk Evaluation Using Genetic Programming

Authors: Ines Gasmi, Salima Smiti, Makram Soui, Khaled Ghedira

Abstract:

Credit risk is considered one of the important issues for financial institutions, as it causes great losses for banks. To address this problem, numerous methods for credit risk evaluation have been proposed. Many evaluation methods are black-box models that cannot adequately reveal the information hidden in the data. However, several works have focused on building transparent rule-based models. For credit risk assessment, the generated rules must be not only highly accurate but also highly interpretable. In this paper, we aim to build a credit risk evaluation model that is both accurate and transparent and that produces a set of classification rules. We formulate credit risk evaluation as an optimization problem solved with a genetic programming (GP) algorithm, where the goal is to maximize the accuracy of the generated rules. We evaluate our proposed approach on the German and Australian credit datasets and compare our findings with some existing works; the results show that the proposed GP approach outperforms the other models.

Keywords: credit risk assessment, rule generation, genetic programming, feature selection

Procedia PDF Downloads 353
16727 A Graph-Based Retrieval Model for Passage Search

Authors: Junjie Zhong, Kai Hong, Lei Wang

Abstract:

Passage Retrieval (PR) plays an important role in many Natural Language Processing (NLP) tasks. Traditional efficient retrieval models relying on exact term matching, such as TF-IDF or BM25, have now been surpassed in effectiveness by pre-trained language models that match by semantics. Though they gain effectiveness, deep language models often incur large memory and time costs. To tackle the trade-off between efficiency and effectiveness in PR, this paper proposes the Graph Passage Retriever (GraphPR), a graph-based model inspired by the development of graph learning techniques. Different from existing works, GraphPR is end-to-end and integrates both term-matching information and semantics. GraphPR constructs a passage-level graph from BM25 retrieval results and trains a GCN-like model on the graph with graph-based objectives. Passages are regarded as nodes in the constructed graph and are embedded as dense vectors. PR can then be implemented using the embeddings and a fast vector-similarity search. Experiments on a variety of real-world retrieval datasets show that the proposed model outperforms related models on several evaluation metrics (e.g., mean reciprocal rank, accuracy, F1 scores) while maintaining relatively low query latency and memory usage.
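The abstract does not spell out how the passage-level graph is built from BM25 results, so the sketch below shows one plausible construction, connecting passages that co-occur in the BM25 top-k lists of training queries. This is an assumption rather than the authors' method, and the rank_bm25 library and the edge-weighting scheme are likewise assumptions.

```python
from itertools import combinations
import networkx as nx
from rank_bm25 import BM25Okapi   # assumed BM25 implementation

def build_passage_graph(passages, queries, k=10):
    """passages: list of passage strings; queries: list of training query strings."""
    tokenized = [p.lower().split() for p in passages]
    bm25 = BM25Okapi(tokenized)

    G = nx.Graph()
    G.add_nodes_from(range(len(passages)))
    for q in queries:
        scores = bm25.get_scores(q.lower().split())
        top_k = sorted(range(len(passages)), key=lambda i: -scores[i])[:k]
        # Passages retrieved together for the same query get (stronger) edges
        for i, j in combinations(top_k, 2):
            w = G[i][j]["weight"] + 1 if G.has_edge(i, j) else 1
            G.add_edge(i, j, weight=w)
    return G   # input graph for a GCN-style encoder that produces dense passage vectors
```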

Keywords: efficiency, effectiveness, graph learning, language model, passage retrieval, term-matching model

Procedia PDF Downloads 148
16726 Soil-Structure Interaction Models for the Reinforced Foundation System – A State-of-the-Art Review

Authors: Ashwini V. Chavan, Sukhanand S. Bhosale

Abstract:

The challenges of a weak soil subgrade are often resolved either by stabilizing it or by reinforcing it. However, it is also common practice to reinforce the granular fill placed over weak soil strata to improve the load-settlement behavior. The inclusion of reinforcement in the engineered granular fill provided a new impetus for the development of enhanced soil-structure interaction (SSI) models, also known as mechanical foundation models or lumped parameter models. Several researchers have been working in this direction to understand the mechanism of granular fill-reinforcement interaction and the response of weak soil under the application of load. These models have been developed by extending available SSI models such as the Winkler model, Pasternak model, Hetenyi model, Kerr model, etc., and are helpful in visualizing the load-settlement behavior of a physical system through 1-D and 2-D analyses, considering a beam or a plate resting on the foundation, respectively. Based on the literature survey, these models are categorized as the ‘Reinforced Pasternak Model,’ ‘Double Beam Model,’ ‘Reinforced Timoshenko Beam Model,’ and ‘Reinforced Kerr Model.’ The present work reviews the past 30+ years of research in the field of SSI models for reinforced foundation systems, presenting the conceptual development of these models systematically and discussing their limitations. Special effort is made to tabulate the parameters and their significance in the load-settlement analysis, which may be helpful in future studies for the comparison and enhancement of the results and findings of physical models.

Keywords: geosynthetics, mathematical modeling, reinforced foundation, soil-structure interaction, ground improvement, soft soil

Procedia PDF Downloads 123
16725 MIMIC: A Multi Input Micro-Influencers Classifier

Authors: Simone Leonardi, Luca Ardito

Abstract:

Micro-influencers are effective elements in the marketing strategies of companies and institutions because of their capability to create a hyper-engaged audience around a specific topic of interest. In recent years, many scientific approaches and commercial tools have handled the task of detecting this type of social media user. These strategies adopt solutions ranging from rule-based machine learning models to deep neural networks and graph analysis on text, images, and account information. This work compares the existing solutions and proposes an ensemble method to generalize them across different input data and social media platforms. The deployed solution combines deep learning models on unstructured data with statistical machine learning models on structured data. We retrieve both social media account information and multimedia posts from Twitter and Instagram. These data are mapped into feature vectors for an eXtreme Gradient Boosting (XGBoost) classifier. Sixty different topics have been analyzed to build a rule-based gold-standard dataset and to compare the performance of our approach against baseline classifiers. We prove the effectiveness of our work by comparing the accuracy, precision, recall, and F1 score of our model across different configurations and architectures. We obtained an accuracy of 0.91 with our best performing model.
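A minimal sketch of the final classification stage is shown below: embeddings extracted from the unstructured posts are concatenated with structured account features and fed to an XGBoost classifier, which is then scored with accuracy, precision, recall and F1. The synthetic data, feature dimensions and hyperparameters are assumptions, not the authors' configuration.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_recall_fscore_support, accuracy_score

# Hypothetical inputs: text/image embeddings from deep models (unstructured data)
# and account statistics such as followers, posts, engagement rate (structured data).
rng = np.random.default_rng(0)
text_image_emb = rng.normal(size=(1000, 64))
account_stats = rng.normal(size=(1000, 6))
y = rng.integers(0, 2, size=1000)                 # 1 = micro-influencer, 0 = not

X = np.hstack([text_image_emb, account_stats])    # single feature vector per account
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

clf = XGBClassifier(n_estimators=300, max_depth=5, learning_rate=0.1)
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)
prec, rec, f1, _ = precision_recall_fscore_support(y_te, pred, average="binary")
print(accuracy_score(y_te, pred), prec, rec, f1)
```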

Keywords: deep learning, gradient boosting, image processing, micro-influencers, NLP, social media

Procedia PDF Downloads 183
16724 A Game-Theory-Based Price-Optimization Algorithm for the Simulation of Markets Using Agent-Based Modelling

Authors: Juan Manuel Sanchez-Cartas, Gonzalo Leon

Abstract:

A price competition algorithm for agent-based models (ABMs), based on game theory principles, is proposed to deal with the simulation of theoretical market models. The algorithm is applied to the classical Hotelling model and to a two-sided market model to show that it leads to the optimal behavior predicted by the theoretical models. However, when the theoretical models fail to predict the equilibrium, the algorithm is still capable of reaching a feasible outcome. The results highlight that the algorithm can be implemented in other simulation models to guarantee rational users and endogenous optimal behaviors. Also, it can be applied as a verification tool, given that it is theoretically grounded.
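As an illustration of the kind of game-theory-based price update such agents could apply, the sketch below iterates numerical best responses in a standard Hotelling linear-city setting until prices settle near the theoretical equilibrium p* = c + t. The parameter values and the grid search are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

# Hotelling linear city: firms at the endpoints, unit transport cost t,
# marginal cost c, consumers uniformly distributed on [0, 1].
t, c = 1.0, 0.5
price_grid = np.linspace(c, c + 3 * t, 601)

def demand_firm1(p1, p2):
    x = (p2 - p1 + t) / (2 * t)          # location of the indifferent consumer
    return np.clip(x, 0.0, 1.0)

def best_response(p_other):
    profits = (price_grid - c) * demand_firm1(price_grid, p_other)
    return price_grid[np.argmax(profits)]

# Iterated best responses: each agent re-optimizes against the rival's last price
p1, p2 = c + 2.0, c + 0.1                # arbitrary starting prices
for _ in range(50):
    p1, p2 = best_response(p2), best_response(p1)

print(p1, p2)   # both converge near the theoretical equilibrium p* = c + t
```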

Keywords: agent-based models, algorithmic game theory, multi-sided markets, price optimization

Procedia PDF Downloads 455
16723 Instructional Information Resources

Authors: Parveen Kumar

Abstract:

This article discusses institutional information resources. Information, in its most restricted technical sense, is a sequence of symbols that can be interpreted as a message; information can be recorded as signs or transmitted as signals. Information is any kind of event that affects the state of a dynamic system. Conceptually, information is the message being conveyed. This concept has numerous other meanings in different contexts. Moreover, the concept of information is closely related to notions of constraint, communication, control, data, form, instruction, knowledge, meaning, mental stimulus, pattern, perception, representation, and especially entropy.

Keywords: institutions, information institutions, information services for mission-oriented institute, pattern

Procedia PDF Downloads 376
16722 Ecological Networks: From Structural Analysis to Synchronization

Authors: N. F. F. Ebecken, G. C. Pereira

Abstract:

Ecological systems are exposed to and influenced by various natural and anthropogenic disturbances. These produce various effects and states as the systems seek a symmetric response, tending toward global phase coherence or toward the stability and balance of their food webs. This research project addresses the development of a computational methodology for modeling plankton food webs. The use of algorithms to establish connections, the generation of representative fuzzy multigraphs, and the application of analysis techniques for complex networks provide a set of tools for defining, analyzing and evaluating the community structure of coastal aquatic ecosystems, in addition to estimating possible external impacts on the networks. Thus, this study aims to develop computational systems and data models to assess how these ecological networks are structurally and functionally organized, to analyze the types and degree of compartmentalization and synchronization between the oscillatory, interconnected elements of the network, and to analyze the influence of disturbances on the overall pattern of rhythmicity of the system.

Keywords: ecological networks, plankton food webs, fuzzy multigraphs, dynamic of networks

Procedia PDF Downloads 299
16721 The Use of Stochastic Gradient Boosting Method for Multi-Model Combination of Rainfall-Runoff Models

Authors: Phanida Phukoetphim, Asaad Y. Shamseldin

Abstract:

In this study, the novel Stochastic Gradient Boosting (SGB) combination method is applied to produce daily river flows from four different rainfall-runoff models of the Ohinemuri catchment, New Zealand. The selected rainfall-runoff models are two empirical black-box models (the linear perturbation model and the linear varying gain factor model) and two conceptual models (the soil moisture accounting and routing model and the Nedbør-Afstrømnings model). The simple average combination method and the weighted average combination method were used as benchmarks for comparing the results of the novel SGB combination method. The models and combination results are evaluated using statistical and graphical criteria. The overall results of this study show that the use of a combination technique can certainly improve the simulated river flows of the four selected models for the Ohinemuri catchment, New Zealand. The results also indicate that the novel SGB combination method is capable of accurate prediction when used to combine the simulated river flows.
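A minimal sketch of such a combination step is given below, using scikit-learn's gradient boosting with subsampling (which makes it stochastic) as a stand-in SGB learner: the four models' simulated flows are the input features, the observed flow is the target, and a simple average serves as the benchmark. The split ratio and hyperparameters are assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error

def combine_flows(sim_flows, obs_flow, split=0.7, seed=0):
    """sim_flows: array (n_days, 4) of daily flows simulated by the four rainfall-runoff
    models; obs_flow: observed daily flows (hypothetical inputs)."""
    n_train = int(len(obs_flow) * split)
    X_tr, X_te = sim_flows[:n_train], sim_flows[n_train:]
    y_tr, y_te = obs_flow[:n_train], obs_flow[n_train:]

    # subsample < 1.0 makes the boosting stochastic (SGB)
    sgb = GradientBoostingRegressor(n_estimators=500, learning_rate=0.05,
                                    subsample=0.5, random_state=seed)
    sgb.fit(X_tr, y_tr)

    combined = sgb.predict(X_te)
    simple_avg = X_te.mean(axis=1)        # benchmark: simple average combination
    return (np.sqrt(mean_squared_error(y_te, combined)),
            np.sqrt(mean_squared_error(y_te, simple_avg)))
```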

Keywords: multi-model combination, rainfall-runoff modeling, stochastic gradient boosting, bioinformatics

Procedia PDF Downloads 339
16720 Islam and Globalization: Accommodation or Containment of One by the Other

Authors: Mohammed Isah Shehu

Abstract:

This paper examined the context of globalization and Islam and the accommodation or containment of one by the other. The paper is born out of the misconception and misunderstanding among many people that globalization is purely Western and anti-Islamic, and that Islam and globalization are diametrically opposed and as such have no room to accommodate each other. The study used secondary sources to gather data. The study found that, from its origin, Islam is in the whole context a globalized religion and that contemporary globalization is already contained by Islam; that while contemporary globalization is centered on the Western world and its values and preferences (Western civilization, information and communication technology, free markets, trade and investment), some of the major foundational works that aid globalization were originally the handiwork of past great Muslims (Islamic civilizations, the rules of algebra, tools of navigation, calligraphy, medicine, astronomy, et cetera), even though its major values are not Islamic; and that with globalization, Muslims have greater opportunities to spread Islam and practice it in a most conducive atmosphere, to connect easily and quickly with their fellow Muslims wherever they may be, and to enjoy an easier and freer world of trade with the best opportunities in most things. The study, however, observed that contemporary Western globalization poses threats to religion, such as the globalization of immorality, injustice, trade on anti-Islamic terms and conditions, internationalized crime, et cetera. Muslims would have to avoid or be cautious of many things, for Islam is a complete religion that distinguishes the forbidden from the allowed (halal and haram) based on the principles of Shariah, justice to all, humanity and compassion, and obedience to and the seeking of Allah’s pleasure; for Muslims, contemporary globalization has to be in conformity with the original provisions of Islam. The study recommended that Muslims rise up in seeking knowledge of Islam and all other fields and further explore the works of Muslim scholars and thinkers, so that any advancement in globalization can be properly domesticated within Islam and put to optimum use for the benefit of Islam.

Keywords: accommodation, containment, Islam, globalization

Procedia PDF Downloads 283
16719 Profitability Assessment of Granite Aggregate Production and the Development of a Profit Assessment Model

Authors: Melodi Mbuyi Mata, Blessing Olamide Taiwo, Afolabi Ayodele David

Abstract:

The purpose of this research is to create empirical models for assessing the profitability of granite aggregate production in aggregate quarries in Akure, Ondo State. In addition, an artificial neural network (ANN) model and multivariate prediction models for granite profitability were developed in the study. A formal survey questionnaire was used to collect data. The data extracted from the case study mine include granite marketing operations, royalties, production costs, and mine production information. Descriptive statistics, MATLAB 2017, and SPSS 16.0 software were used to analyze and model the data collected from granite traders in the study areas. The prediction accuracy of the ANN and multivariate regression models was compared using the coefficient of determination (R²), the root mean square error (RMSE), and the mean square error (MSE). Based on the prediction errors, the model evaluation indices revealed that the ANN model was suitable for predicting the profit generated in a typical quarry. More quarries in Nigeria's southwest region and other geopolitical zones should be considered to improve ANN prediction accuracy.

Keywords: national development, granite, profitability assessment, ANN models

Procedia PDF Downloads 101
16718 Predicting the Diagnosis of Alzheimer’s Disease: Development and Validation of Machine Learning Models

Authors: Jay L. Fu

Abstract:

Patients with Alzheimer's disease progressively lose their memory and thinking skills and, eventually, the ability to carry out simple daily tasks. The disease is irreversible, but early detection and treatment can slow down its progression. In this research, publicly available MRI data and demographic data from 373 MRI imaging sessions were utilized to build models to predict dementia. Various machine learning models, including logistic regression, k-nearest neighbors, support vector machine, random forest, and neural network, were developed. The data were divided into training and testing sets, where the training sets were used to build the predictive models and the testing sets were used to assess the accuracy of the predictions. Key risk factors were identified, and the various models were compared to arrive at the best prediction model. Among these models, the random forest model appeared to be the best, with an accuracy of 90.34%. MMSE, nWBV, and gender were the three most important contributing factors to the detection of Alzheimer’s. Across all the models used, the percentage of testing inputs for which at least 4 of the 5 models shared the same diagnosis was 90.42%. These machine learning models allow early detection of Alzheimer’s with good accuracy, which ultimately leads to early treatment of these patients.
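The sketch below illustrates this setup: the five model families named above are trained on the same split, each is scored on the test set, and the share of test cases on which at least four of the five models agree is computed. Feature assembly, preprocessing and hyperparameters are assumptions, and the labels are assumed to be encoded as integers.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

def compare_models(X, y, seed=0):
    """X: features such as MMSE, nWBV, age, encoded gender; y: integer dementia labels."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                              random_state=seed, stratify=y)
    scaler = StandardScaler().fit(X_tr)
    X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

    models = {
        "logreg": LogisticRegression(max_iter=1000),
        "knn": KNeighborsClassifier(),
        "svm": SVC(),
        "rf": RandomForestClassifier(n_estimators=300, random_state=seed),
        "mlp": MLPClassifier(max_iter=2000, random_state=seed),
    }
    preds = {name: m.fit(X_tr, y_tr).predict(X_te) for name, m in models.items()}

    accuracies = {name: (p == np.asarray(y_te)).mean() for name, p in preds.items()}
    votes = np.vstack(list(preds.values()))                 # shape (5, n_test)
    agreement = np.mean([np.bincount(votes[:, i]).max() >= 4
                         for i in range(votes.shape[1])])   # >= 4 of 5 models agree
    return accuracies, agreement
```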

Keywords: Alzheimer's disease, clinical diagnosis, magnetic resonance imaging, machine learning prediction

Procedia PDF Downloads 143
16717 Fuzzy Set Qualitative Comparative Analysis in Business Models' Study

Authors: K. Debkowska

Abstract:

The aim of this article is to present the possibilities of using Fuzzy Set Qualitative Comparative Analysis (fsQCA) in research concerning the business models of enterprises. FsQCA is a bridge between quantitative and qualitative research, and its potential can be used in the analysis and evaluation of business models. The article presents the results of a study conducted on enterprises belonging to different sectors: transport and logistics, industry, building construction, and trade. The enterprises were studied taking into account the components of their business models and the financial condition of the companies. Business models are areas of a complex and heterogeneous nature. The use of fsQCA made it possible to answer the following question: which components of a business model, and in which configuration, influence a better financial condition of enterprises? The analysis was performed separately for particular sectors, which made it possible to compare the combinations of business model components that actively influence the financial condition of enterprises in the analyzed sectors. The following components of business models were analyzed for the purposes of the study: Key Partners, Key Activities, Key Resources, Value Proposition, Channels, Cost Structure, Revenue Streams, Customer Segments and Customer Relationships. These components constituted the variables shaping the financial results of the enterprises. The results of the study lead us to believe that fsQCA can help in analyzing and evaluating a business model, which is important when making a business decision about the business model in use or about changing it. In addition, the results obtained by fsQCA can be applied by all stakeholders connected with the company.
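A core step in fsQCA is calibrating raw interval-scale conditions (for example, a financial ratio) into fuzzy-set membership scores between 0 and 1 using three qualitative anchors. The sketch below follows the common log-odds direct calibration method; the anchor values and the example condition are illustrative assumptions, not taken from the study.

```python
import numpy as np

def direct_calibration(x, full_non, crossover, full_mem):
    """Direct calibration of a raw condition into fuzzy-set membership scores.

    full_non  : threshold for full non-membership (membership ~ 0.05)
    crossover : point of maximum ambiguity (membership = 0.5)
    full_mem  : threshold for full membership (membership ~ 0.95)
    """
    x = np.asarray(x, dtype=float)
    dev = x - crossover
    # log-odds of 0.95 is about 3, so scale deviations so the anchors map to ~0.05/0.95
    upper = 3.0 / (full_mem - crossover)
    lower = 3.0 / (crossover - full_non)
    log_odds = np.where(dev >= 0, dev * upper, dev * lower)
    return 1.0 / (1.0 + np.exp(-log_odds))

# Example: calibrating return on sales (%) into "good financial condition"
ros = np.array([-2.0, 1.0, 4.0, 9.0, 15.0])
membership = direct_calibration(ros, full_non=0.0, crossover=4.0, full_mem=10.0)
```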

Keywords: business models, components of business models, data analysis, fsQCA

Procedia PDF Downloads 170
16716 Formal Models of Sanitary Inspections Teams Activities

Authors: Tadeusz Nowicki, Radosław Pytlak, Robert Waszkowski, Jerzy Bertrandt, Anna Kłos

Abstract:

This paper presents methods for the formal modeling of the activities of sanitary inspectors during outbreaks of food-borne diseases. The models make it possible to measure the characteristics of sanitary inspection activities and, as a result, to improve the performance of sanitary services and thus food security.

Keywords: food-borne disease, epidemic, sanitary inspection, mathematical models

Procedia PDF Downloads 302
16715 Identification of Classes of Bilinear Time Series Models

Authors: Anthony Usoro

Abstract:

In this paper, two classes of bilinear time series models are obtained under certain conditions from the general bilinear autoregressive moving average model: the Bilinear Autoregressive (BAR) and Bilinear Moving Average (BMA) models. From the general bilinear model, the BAR and BMA models are shown to exist for q = Q = 0 (implying j = 0) and p = P = 0 (implying i = 0), respectively. These models are found to be useful in modelling much economic and financial data.
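For illustration, the sketch below simulates one simple first-order bilinear autoregressive-type process, X_t = a·X_{t-1} + b·X_{t-1}·e_{t-1} + e_t, belonging to the general family discussed above; the coefficients and noise distribution are assumptions chosen only to show the recursion.

```python
import numpy as np

def simulate_bilinear_ar1(n, a=0.4, b=0.2, sigma=1.0, seed=0):
    """Simulate a first-order bilinear autoregressive-type process:

        X_t = a * X_{t-1} + b * X_{t-1} * e_{t-1} + e_t,   e_t ~ N(0, sigma^2)

    Coefficient values are illustrative only.
    """
    rng = np.random.default_rng(seed)
    e = rng.normal(0.0, sigma, n)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = a * x[t - 1] + b * x[t - 1] * e[t - 1] + e[t]
    return x

series = simulate_bilinear_ar1(500)
```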

Keywords: autoregressive model, bilinear autoregressive model, bilinear moving average model, moving average model

Procedia PDF Downloads 407