Search results for: information seeking models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 16982

16592 Comparison of Different k-NN Models for Speed Prediction in an Urban Traffic Network

Authors: Seyoung Kim, Jeongmin Kim, Kwang Ryel Ryu

Abstract:

We consider a database that records average traffic speeds measured at five-minute intervals for all the links in the traffic network of a metropolitan city. While models learned from this data to predict future traffic speed would be beneficial for applications such as car navigation systems, building a predictive model for every link becomes a nontrivial job when the number of links in the network is huge. An advantage of adopting k-nearest neighbor (k-NN) as the predictive model is that it does not require any explicit model building. On the other hand, k-NN takes a long time to make a prediction because it must search for the k nearest neighbors in the database at prediction time. In this paper, we investigate how much we can speed up k-NN traffic speed prediction by reducing the amount of data to be searched, without a significant sacrifice of prediction accuracy. The rationale is that it may suffice to look at only the recent data, because traffic patterns not only repeat daily or weekly but also change over time. In our experiments, we build several different k-NN models employing different sets of features, namely the current and past traffic speeds of the target link and of the neighboring links up- and downstream of it. The performances of these models are compared by measuring the average prediction accuracy and the average time taken to make a prediction using various amounts of data.
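
A minimal sketch of the reduced-search idea, assuming a synthetic feature layout (the abstract does not give the authors' actual schema or neighbor count):

```python
# Sketch: k-NN speed prediction where the neighbor search is restricted to
# the most recent records, as described in the abstract. The data layout,
# feature count, and k are illustrative assumptions.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)

# Hypothetical history: one row per 5-minute slot, with current/past speeds
# of the target link and neighboring links as features.
n_records = 100_000
X_hist = rng.uniform(10, 90, size=(n_records, 6))    # feature vectors (km/h)
y_hist = X_hist[:, 0] + rng.normal(0, 3, n_records)  # speed 5 minutes ahead

def predict_speed(x_now, window):
    """Search only the `window` most recent records for neighbors."""
    knn = KNeighborsRegressor(n_neighbors=10)
    knn.fit(X_hist[-window:], y_hist[-window:])
    return knn.predict(x_now.reshape(1, -1))[0]

x_now = rng.uniform(10, 90, size=6)
for window in (1_000, 10_000, 100_000):
    print(window, round(predict_speed(x_now, window), 2))
```

Shrinking the window shrinks the neighbor search, which is where k-NN spends its prediction time; the trade-off against accuracy is exactly what the paper measures.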

Keywords: big data, k-NN, machine learning, traffic speed prediction

Procedia PDF Downloads 363
16591 Evaluation of UI for 3D Visualization-Based Building Information Applications

Authors: Monisha Pattanaik

Abstract:

In scenarios where users work with large hierarchical data structures combined with visualizations (for example, construction 3D models, manufacturing equipment models, Gantt charts, and building plans), the data structures are dense, with parent nodes nested up to 50 levels deep together with their siblings and descendants, and therefore convey an immediate feeling of complexity. With customers moving to consumer-grade enterprise software, it is crucial that sophisticated features be made available on touch devices and smaller screen sizes. This paper evaluates a UI component that allows users to scroll through all the nested levels using a slider overlay on top of the hierarchy table, performing several actions to focus on one set of objects at any point in time. The overlay component also solves the problem of excessive horizontal scrolling of the entire table on a fixed pane. The component can be customized to navigate through parents only, siblings only, or a specific part of the hierarchy. The UI component was evaluated by end users of the application and by Human-Computer Interaction (HCI) experts to test its usability, with statistical results and recommendations for handling complex hierarchical data visualizations.

Keywords: building information modeling, digital twin, navigation, UI component, user interface, usability, visualization

Procedia PDF Downloads 137
16590 Estimation of the Parameters of Muskingum Methods for the Prediction of the Flood Depth in the Moudjar River Catchment

Authors: Fares Laouacheria, Said Kechida, Moncef Chabi

Abstract:

The objective of the study was hydrological routing modelling for continuous monitoring of the hydrological situation in the Moudjar river catchment, especially during floods, with the Hydrologic Engineering Center–Hydrologic Modeling System (HEC-HMS). HEC-GeoHMS was used to transfer data from a geographic information system (GIS) to HEC-HMS for delineating and modelling the catchment, in order to estimate the runoff volume used as input to the hydrological routing model. Two hydrological routing models, the Muskingum and Muskingum-Cunge routing models, were used in this study. A comparison between the parameters of the Muskingum and Muskingum-Cunge routing models in HEC-HMS was used for modelling flood routing in the Moudjar river catchment and determining the relationship between these parameters and the physical characteristics of the river. The results indicate that the effects of input parameters such as the weighting factor "X" and travel time "K" on the output are significant, and that the Muskingum routing model is more sensitive to its input parameters than the Muskingum-Cunge routing model. This study can contribute to understanding and improving knowledge of the mechanisms of river floods, especially in ungauged river catchments.
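
A minimal sketch of the classic Muskingum recursion shows how K and X enter the routing; the inflow series and parameter values below are illustrative, not the Moudjar catchment data:

```python
# Classic Muskingum routing: O[t+1] = c0*I[t+1] + c1*I[t] + c2*O[t],
# with coefficients derived from travel time K and weighting factor X.
import numpy as np

def muskingum_route(inflow, K=2.0, X=0.2, dt=1.0):
    denom = 2 * K * (1 - X) + dt
    c0 = (dt - 2 * K * X) / denom
    c1 = (dt + 2 * K * X) / denom
    c2 = (2 * K * (1 - X) - dt) / denom   # c0 + c1 + c2 == 1
    out = np.empty_like(inflow, dtype=float)
    out[0] = inflow[0]                     # assume initial steady state
    for t in range(len(inflow) - 1):
        out[t + 1] = c0 * inflow[t + 1] + c1 * inflow[t] + c2 * out[t]
    return out

# Triangular flood hydrograph (m^3/s) routed through one reach
inflow = np.concatenate([np.linspace(10, 100, 10), np.linspace(100, 10, 20)])
print(muskingum_route(inflow).round(1))
```

The sensitivity the abstract reports can be probed directly by sweeping K and X in this recursion and watching how the attenuation and lag of the outflow peak change.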

Keywords: HEC-HMS, hydrological modelling, Muskingum routing model, Muskingum-Cunge routing model

Procedia PDF Downloads 276
16589 Definition of a Computing Independent Model and Rules for Transformation Focused on the Model-View-Controller Architecture

Authors: Vanessa Matias Leite, Jandira Guenka Palma, Flávio Henrique de Oliveira

Abstract:

This paper presents a model-driven approach to software development in the Model-View-Controller (MVC) architectural standard. The approach exposes a process for extracting information from the models, in which the rules and syntax defined in this work assist in the design of the initial model and its future transformations. The paper proposes a syntax based on natural language, following the rules of classical Portuguese grammar, together with conversion rules that generate models conforming to the norms of the Object Management Group (OMG) and the Meta-Object Facility (MOF).

Keywords: BNF syntax, model driven architecture, model-view-controller, transformation, UML

Procedia PDF Downloads 395
16588 A Performance Analysis Study for Cloud Based ERP Systems

Authors: Burak Erkayman

Abstract:

Manufacturing and service organizations need ERP systems to integrate many functions, from purchasing to storage and from production planning to cost calculation. Using ERP systems to integrate at the information level provides companies remarkable advantages in terms of profitability, productivity, and process efficiency. Cloud computing is one of the most significant changes in information and communication technology, and its developments attract the business world: it offers far more storage, greater cost savings, and faster data transfer rates, and it enables new business models, new fields of study, and practicable solutions for general use. These developments make the migration of ERP systems to the cloud environment inevitable. In this study, the performance of ERP systems in the cloud environment is analyzed through various performance criteria, and a comparison between traditional and cloud ERP systems is presented. The study closes with a discussion of this transformation and the future of ERP systems.

Keywords: cloud-ERP, ERP system performance, information system transformation

Procedia PDF Downloads 529
16587 A Hybrid System of Hidden Markov Models and Recurrent Neural Networks for Learning Deterministic Finite State Automata

Authors: Pavan K. Rallabandi, Kailash C. Patidar

Abstract:

In this paper, we present a learning algorithm based on a hybrid architecture that combines two of the most popular sequence recognition models, Recurrent Neural Networks (RNNs) and Hidden Markov Models (HMMs). To improve sequence recognition/classification performance through this hybrid neural-symbolic approach, a gradient descent learning algorithm is developed using Real-Time Recurrent Learning of the recurrent neural network to process the knowledge represented in trained Hidden Markov Models. The hybrid algorithm is implemented on problems from automata theory as a sample test bed, and its performance is demonstrated and evaluated on learning deterministic finite state automata.
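
As a toy illustration of the learning target (not the authors' HMM/RNN hybrid or their RTRL implementation), a small recurrent network can be trained to emulate a two-state DFA, here the parity automaton that accepts binary strings with an even number of 1s:

```python
# Sketch: a vanilla RNN learning the parity DFA from labeled strings.
# Architecture, sizes, and training budget are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_batch(n=64, length=12):
    x = torch.randint(0, 2, (n, length, 1)).float()
    y = (x.sum(dim=(1, 2)) % 2 == 0).long()     # DFA label: even number of 1s
    return x, y

rnn = nn.RNN(input_size=1, hidden_size=8, batch_first=True)
head = nn.Linear(8, 2)
opt = torch.optim.Adam(list(rnn.parameters()) + list(head.parameters()), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for step in range(2000):
    x, y = make_batch()
    _, h = rnn(x)                                # final hidden state summarizes the string
    loss = loss_fn(head(h.squeeze(0)), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    x, y = make_batch(1000)
    _, h = rnn(x)
    acc = (head(h.squeeze(0)).argmax(dim=1) == y).float().mean().item()
print(f"held-out accuracy on the parity DFA: {acc:.3f}")
```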

Keywords: hybrid systems, hidden Markov models, recurrent neural networks, deterministic finite state automata

Procedia PDF Downloads 388
16586 2.5D Face Recognition Using Gabor Discrete Cosine Transform

Authors: Ali Cheraghian, Farshid Hajati, Soheila Gheisari, Yongsheng Gao

Abstract:

In this paper, we present a novel 2.5D face recognition method based on the Gabor Discrete Cosine Transform (GDCT). In the proposed method, a Gabor filter is applied to extract feature vectors from both the texture and the depth information. The Discrete Cosine Transform (DCT) is then used for dimensionality and redundancy reduction to improve computational efficiency. The system combines texture and depth information at the decision level, which gives higher performance than methods that use texture and depth information separately. The proposed algorithm is evaluated on the publicly available Bosphorus database, which includes models with pose variation. The experimental results show that the proposed method outperforms the benchmark.
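
A sketch of the two feature-extraction stages named in the abstract, Gabor filtering followed by DCT truncation; the filter-bank settings and the number of retained coefficients are illustrative assumptions, not the paper's tuned values:

```python
# Gabor filtering + 2-D DCT features for one image channel (texture or depth).
import numpy as np
import cv2

def gabor_dct_features(img, keep=8):
    feats = []
    for theta in np.arange(0, np.pi, np.pi / 4):       # 4 orientations
        kern = cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                                  lambd=10.0, gamma=0.5, psi=0)
        resp = cv2.filter2D(img.astype(np.float32), cv2.CV_32F, kern)
        coeffs = cv2.dct(resp)                          # 2-D DCT of the response
        feats.append(coeffs[:keep, :keep].ravel())      # keep low-frequency block
    return np.concatenate(feats)

face = np.random.rand(64, 64).astype(np.float32)        # stand-in for a texture/range map
print(gabor_dct_features(face).shape)                   # (4 * keep * keep,)
```

Keeping only the top-left DCT block is what delivers the dimensionality and redundancy reduction the abstract refers to.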

Keywords: Gabor filter, discrete cosine transform, 2.5D face recognition, pose

Procedia PDF Downloads 328
16585 Influence of the Refractory Period on Neural Networks Based on the Recognition of Neural Signatures

Authors: José Luis Carrillo-Medina, Roberto Latorre

Abstract:

Experimental evidence has revealed that different living neural systems can sign their output signals with a specific neural signature. Although experimental and modeling results suggest that neural signatures can play an important role in the activity of neural networks, identifying the source of information or contextualizing a message, the functional meaning of these neural fingerprints is still unclear. The existence of cellular mechanisms to identify the origin of individual neural signals could be a powerful information processing strategy for the nervous system. We have recently built different models to study the ability of a neural network to process information based on the emission and recognition of specific neural fingerprints. In this paper we further analyze the features that can influence the information processing ability of this kind of network. In particular, we focus on the role that the duration of each neuron's refractory period after emitting a signed message can play in the collective dynamics of the network.

Keywords: neural signature, neural fingerprint, processing based on signal identification, self-organizing neural network

Procedia PDF Downloads 492
16584 Development of Building Information Modeling in Property Industry: Beginning with Building Information Modeling Construction

Authors: B. Godefroy, D. Beladjine, K. Beddiar

Abstract:

In France, construction BIM actors commonly evoke the gains of BIM for building operation through the integration of the life cycle of a building; standardization at level 7 of development would achieve this stage of the digital model. Building owners include local public authorities, social landlords, public institutions (health and education), enterprises, and facilities management companies. They have a dual role: owner and manager of their housing complexes. In a context of financial constraint, BIM for operation aims to control costs, support long-term investment choices, renew the portfolio, and enable environmental standards to be met. It assumes knowledge of the existing buildings, which are marked by their size and complexity; the information sought must be synthetic and structured, and generally concerns an entire real estate complex. We conducted a study with professionals about their concerns and their ways of using BIM, to see how building owners could benefit from this development. With the recurring questions of project management and the needs of operators in mind, we tested the following stages: 1) instill a minimal BIM culture in the operator's multidisciplinary teams, then by business line; 2) learn, through BIM tools, how each trade adapts to operations; 3) understand the place and creation of a graphic and technical database management system, and determine the components of its library according to their needs; 4) identify the cross-functional interventions of its managers by business line (operations, technical, information systems, purchasing, and legal aspects); 5) set an internal protocol and define the impact of BIM on their digital strategy. In addition, continuity of management through the integration of construction models into the operation phase raises the question of interoperability: controlling the production of IFC files in the operator's proprietary format and the export and import processes, a solution rivalled by the traditional method of vectorizing paper plans. Companies that digitize housing complexes, and those in facilities management, produce IFC files directly according to their needs, without recourse to the construction model; they produce business models for operation and standardize the components and equipment that are useful for coding. We observed the consequences of using BIM in the property industry and made the following observations: a) the value of data prevails over graphics, and 3D is little used; b) the owner must, through his organization, promote the feedback of technical management information during the design phase; c) the operator's reflection on outsourcing concerns the acquisition of its information system and related services, weighing the risks and costs of internal versus external development. This study highlights: i) the need for an internal organization of operators prior to responding to construction management; ii) an evolution towards automated methods for creating models dedicated to operation, for which specialization will be required; iii) a review of project management communication, since management continuity does not revolve around the building model alone but must take into account the operator's environment and scope of action.

Keywords: information system, interoperability, models for exploitation, property industry

Procedia PDF Downloads 144
16583 Leverage Effect for Volatility with Generalized Laplace Error

Authors: Farrukh Javed, Krzysztof Podgórski

Abstract:

We propose a new model that accounts for the asymmetric response of volatility to positive ('good news') and negative ('bad news') shocks in economic time series, the so-called leverage effect. In the past, asymmetric powers of errors in conditionally heteroskedastic models have been used to capture this effect. Our model uses the gamma difference representation of the generalized Laplace distributions, which efficiently models the asymmetry. It has one additional natural parameter, the shape, that is used instead of the power in asymmetric power models to capture the strength of a long-lasting effect of shocks. Some fundamental properties of the model are provided, including a formula for the covariances and an explicit form for the conditional distribution of the 'bad' and 'good' news processes given the past, a property that is important for the statistical fitting of the model. Relevant features of the volatility models are illustrated using S&P 500 historical data.
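
A hedged sketch of the gamma-difference idea behind generalized Laplace errors: the difference of two independent Gamma variables with a common shape but different scales yields an asymmetric, heavy-tailed shock distribution. The parameter values are illustrative, not fitted to S&P 500 data:

```python
# Asymmetric generalized-Laplace-type noise via a gamma difference.
import numpy as np

rng = np.random.default_rng(1)

def gal_shocks(n, tau=0.8, scale_pos=1.0, scale_neg=1.6):
    """A larger scale_neg than scale_pos makes 'bad news' shocks heavier,
    mimicking the leverage-effect asymmetry described in the abstract."""
    g1 = rng.gamma(shape=tau, scale=scale_pos, size=n)
    g2 = rng.gamma(shape=tau, scale=scale_neg, size=n)
    return g1 - g2

eps = gal_shocks(100_000)
skew = ((eps - eps.mean()) ** 3).mean() / eps.std() ** 3
print(f"mean={eps.mean():.3f}  sample skewness={skew:.3f}")
```

Here the shape parameter tau plays the role the abstract assigns to it: it controls how persistent and heavy the shock distribution is, instead of the power used in asymmetric power models.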

Keywords: heavy tails, volatility clustering, generalized asymmetric Laplace distribution, leverage effect, conditional heteroskedasticity, asymmetric power volatility, GARCH models

Procedia PDF Downloads 385
16582 Modelling of Hydric Behaviour of Textiles

Authors: A. Marolleau, F. Salaun, D. Dupont, H. Gidik, S. Ducept

Abstract:

The goal of this study is to analyze the hydric behaviour of textiles, which can significantly impact the comfort of the wearer. Indeed, fabrics can be adapted to different climates if their hydric and thermal behaviours are known. In this study, fabrics are submitted to hydric variations only. Sorption and desorption isotherms obtained from a dynamic vapour sorption apparatus (DVS) are fitted with the parallel exponential kinetics (PEK), Hailwood-Horrobin (HH), and Brunauer-Emmett-Teller (BET) models. One of the major findings is the relationship between the PEK and HH models. During slow and fast processes, water molecules can sorb onto the polymer in both monolayer and multilayer form. According to the BET model, moisture regain, a physical property of textiles, shows a linear correlation with the total amount of water taken up in the monolayer. This study provides information on the potential end uses of these fabrics according to the selected activity level.
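
A sketch of fitting the BET isotherm to sorption data with scipy; the data points below are synthetic placeholders, not the DVS measurements:

```python
# BET isotherm fit: moisture content vs water activity (valid roughly for aw < 0.5).
import numpy as np
from scipy.optimize import curve_fit

def bet(aw, Mm, C):
    """BET model: Mm = monolayer capacity, C = energy constant."""
    return Mm * C * aw / ((1 - aw) * (1 + (C - 1) * aw))

aw = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40])
m = bet(aw, Mm=0.04, C=12.0) + np.random.default_rng(2).normal(0, 5e-4, aw.size)

popt, _ = curve_fit(bet, aw, m, p0=[0.03, 5.0])
print(f"monolayer capacity Mm={popt[0]:.4f}, energy constant C={popt[1]:.1f}")
```

The fitted Mm is the monolayer water amount the abstract correlates with moisture regain.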

Keywords: comfort, hydric properties, modelling, underwears

Procedia PDF Downloads 149
16581 Mental Health Literacy in the Arabic Community

Authors: Yamam Abuzinadah

Abstract:

Mental health literacy has become a very influential topic around the world due to the increase in mental health issues reported through national research and surveys. Mental health literacy refers to the awareness, attitudes, beliefs, knowledge, and skills involved in dealing with mental illness. This research explores mental health literacy in the Arabic community and the ways culture informs perceptions of mental health in general, as well as the impact of mental health literacy on help-seeking attitudes, relationships, and community interactions. The outcomes of this research will contribute to raising mental health awareness among the Arabic community, developing and enhancing mental health service provision, and exploring new ideas for elevating mental health literacy in the Arabic community. The research aims to explore attitudes, beliefs, perspectives, values, and perceptions toward mental health among the Arabic community, to highlight the factors contributing to them, and accordingly the role these factors play in awareness, service access, recovery, and the care provided by the family and the community. The thesis offers a detailed theorisation and exploration of: (1) the impact of cultural factors on mental health literacy, i.e., attitudes, beliefs, knowledge, and skills; (2) the ways culture informs perceptions of mental health literacy; (3) the impact of mental health literacy on help-seeking behaviours, relationships, and community interactions.

Keywords: Arab, mental health, literacy, awareness

Procedia PDF Downloads 432
16580 Monitoring Large-Coverage Forest Canopy Height by Integrating LiDAR and Sentinel-2 Images

Authors: Xiaobo Liu, Rakesh Mishra, Yun Zhang

Abstract:

Continuous monitoring of forest canopy height with large coverage is essential for obtaining forest carbon stocks and emissions, quantifying biomass, analyzing vegetation coverage, and determining biodiversity. LiDAR can be used to collect accurate woody vegetation structure, such as canopy height. However, LiDAR's coverage is usually limited because of its high cost and limited maneuverability, which constrains its use for dynamic and large-area forest canopy monitoring. On the other hand, optical satellite images, like Sentinel-2, can cover large forest areas with a high repeat rate, but they carry no height information. Hence, integrating LiDAR data and Sentinel-2 images to enlarge the coverage of forest canopy height prediction and increase the prediction repeat rate has been an active research topic in the environmental remote sensing community. In this study, we explore the potential of training a Random Forest Regression (RFR) model and a Convolutional Neural Network (CNN) model, respectively, to develop two predictive models for predicting and validating the forest canopy height of the Acadia Forest in New Brunswick, Canada, at a 10 m ground sampling distance (GSD), for the years 2018 and 2021. Two 10 m airborne LiDAR-derived canopy height models, one for 2018 and one for 2021, are used as ground truth to train and validate the RFR and CNN predictive models. To evaluate prediction performance, two new predicted canopy height maps (CHMs), one for 2018 and one for 2021, are generated using the trained RFR and CNN models and 10 m Sentinel-2 images of 2018 and 2021, respectively. The two 10 m predicted CHMs from Sentinel-2 images are then compared with the two 10 m airborne LiDAR-derived canopy height models for accuracy assessment. The validation results show a mean absolute error (MAE) for 2018 of 2.93 m for the RFR model and 1.71 m for the CNN model, and an MAE for 2021 of 3.35 m for the RFR model and 3.78 m for the CNN model. These results demonstrate the feasibility of using the RFR and CNN models developed in this research for predicting large-coverage forest canopy height at 10 m spatial resolution and a high revisit rate.
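
A minimal sketch of the RFR pathway described above, with Sentinel-2 band values as predictors and LiDAR-derived canopy height as the target; the band count, sample sizes, and synthetic relationship are illustrative assumptions:

```python
# Random Forest Regression: reflectance features -> canopy height (m).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n_pixels, n_bands = 50_000, 10                       # e.g. 10 Sentinel-2 bands
X = rng.uniform(0, 1, (n_pixels, n_bands))           # surface reflectance stand-in
y = 30 * X[:, 7] - 10 * X[:, 3] + rng.normal(0, 1.5, n_pixels)  # CHM stand-in

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rfr = RandomForestRegressor(n_estimators=200, n_jobs=-1, random_state=0)
rfr.fit(X_tr, y_tr)
print(f"MAE: {mean_absolute_error(y_te, rfr.predict(X_te)):.2f} m")
```

In the real workflow each pixel's feature vector would come from co-registered Sentinel-2 rasters and each label from the airborne LiDAR CHM of the same year.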

Keywords: remote sensing, forest canopy height, LiDAR, Sentinel-2, artificial intelligence, random forest regression, convolutional neural network

Procedia PDF Downloads 92
16579 A Spatial Information Network Traffic Prediction Method Based on Hybrid Model

Authors: Jingling Li, Yi Zhang, Wei Liang, Tao Cui, Jun Li

Abstract:

Compared with terrestrial networks, the traffic of a spatial information network has both self-similarity and short-correlation characteristics. Studying its traffic prediction methods can improve the resource utilization of spatial information networks and provide an important basis for their traffic planning. In this paper, considering the accuracy and complexity of the algorithm, the spatial information network traffic is decomposed into an approximate component with long correlation and detail components with short correlation, and a time series hybrid prediction model based on wavelet decomposition is proposed to predict the traffic. First, the original traffic data are decomposed into approximate and detail components using a wavelet decomposition algorithm. According to the tailing and truncation characteristics of the autocorrelation and partial autocorrelation functions of each component, the corresponding model (AR/MA/ARMA) of each detail component can be established directly, while the approximate component can be modelled with an ARIMA model after smoothing. Finally, the prediction results of the multiple models are combined to obtain the prediction for the original data. The method considers not only the self-similarity of a spatial information network but also the short correlation caused by bursts of network information, and it is verified using measured data from a backbone network released by the MAWI working group in 2018. Compared with typical time series models, the predictions of the hybrid model are closer to the real traffic data and have a smaller relative root mean square error, making it more suitable for a spatial information network.
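
A hedged sketch of the hybrid scheme: split the series into a long-correlation approximation and a short-correlation detail, model each with its own time-series model, and add the forecasts. The wavelet, model orders, and data are illustrative choices, not the paper's MAWI settings:

```python
# Wavelet-decomposition hybrid: ARIMA on the approximation, ARMA on the detail.
import numpy as np
import pywt
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(4)
t = np.arange(512)
traffic = 50 + 10 * np.sin(2 * np.pi * t / 128) + rng.normal(0, 2, t.size)

# Approximation = reconstruction with all detail coefficients zeroed out.
coeffs = pywt.wavedec(traffic, "db4", level=3)
approx_only = [coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]]
approx = pywt.waverec(approx_only, "db4")[: traffic.size]
detail = traffic - approx                      # short-correlation residual

fc_approx = ARIMA(approx, order=(1, 1, 1)).fit().forecast(steps=10)
fc_detail = ARIMA(detail, order=(2, 0, 1)).fit().forecast(steps=10)
print((fc_approx + fc_detail).round(2))        # hybrid forecast
```

Because the detail series is defined as the residual, the two component forecasts sum back to a forecast of the original traffic, which is the fitting step the abstract describes.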

Keywords: spatial information network, traffic prediction, wavelet decomposition, time series model

Procedia PDF Downloads 146
16578 Analyzing Business Model Choices and Sustainable Value Capturing: A Multiple Case Study of Sharing Economy Business Models

Authors: Minttu Laukkanen, Janne Huiskonen

Abstract:

This study investigates sharing economy business models as examples of sustainable business models. The aim is to contribute to the limited literature on the sharing economy in connection with sustainable business models by explaining how sharing economy business models capture value. Specifically, this research answers the following question: How do business model choices affect captured sustainable value? A multiple case study approach is applied. Twenty successful sharing economy business models focusing on consumer business, covering four main areas, accommodation, mobility, food, and consumer goods, are selected for analysis. Secondary data available on the companies' websites, previous research, reports, and other public documents are used. All twenty cases are analyzed through a sharing economy business model framework and a sustainable value analysis framework using qualitative data analysis. The study presents general sharing economy business model value attributes and their specifications, i.e., sustainable value propositions for different stakeholders, and further explains the sustainability impacts of different sharing economy business models through captured and uncaptured value. In conclusion, the study shows how business model choices affect sustainable value capturing through eight business model attributes identified here. The paper contributes to research on sustainable business models and the sharing economy by examining how business model choices affect captured sustainable value. It highlights the importance of careful analysis of business models and sustainability impacts, including the triple bottom line, multiple stakeholders, captured and uncaptured value, and sustainability trade-offs. It is not self-evident that sharing economy business models advance sustainability, and business model choices do matter.

Keywords: sharing economy, sustainable business model innovation, sustainable value, value capturing

Procedia PDF Downloads 172
16577 Generic Hybrid Models for Two-Dimensional Ultrasonic Guided Wave Problems

Authors: Manoj Reghu, Prabhu Rajagopal, C. V. Krishnamurthy, Krishnan Balasubramaniam

Abstract:

A thorough understanding of guided ultrasonic wave behavior in structures is essential for the application of existing Non-Destructive Evaluation (NDE) technologies, as well as for the development of new methods. However, the analysis of guided wave phenomena is challenging because of their complex dispersive and multimodal nature. Although numerical solution procedures have proven very useful in this regard, the increasing complexity of the features and defects to be considered, as well as the desire to improve inspection accuracy, often imposes a large computational cost. Hybrid models that combine numerical solutions for wave scattering with faster alternative methods for wave propagation have long been considered a solution to this problem, but such models usually require modification of the base code of the solution procedure. Here we aim to develop generic hybrid models that can be applied directly to any two different solution procedures. With this goal in mind, a Numerical Hybrid model and an Analytical-Numerical Hybrid model have been developed. The concept and implementation of these hybrid models are discussed in this paper.

Keywords: guided ultrasonic waves, Finite Element Method (FEM), Hybrid model

Procedia PDF Downloads 465
16576 Computational Models for Accurate Estimation of Joint Forces

Authors: Ibrahim Elnour Abdelrahman Eltayeb

Abstract:

Computational modelling is a method used to investigate joint forces during movement. It can achieve high accuracy in the estimated joint forces via subject-specific models; however, the construction of subject-specific models remains time-consuming and expensive. The purpose of this paper was to identify what alterations can be made to generic computational models to obtain better estimates of joint forces, and to appraise the impact of these alterations on the accuracy of the estimated forces. Different strategies of alteration were found: the joint model, the muscle model, and the optimisation problem. All of these alterations affected joint contact force accuracy, showing the potential for improving model predictions without involving costly and time-consuming medical images.

Keywords: joint force, joint model, optimisation problem, validation

Procedia PDF Downloads 170
16575 Modelling Mode Choice Behaviour Using Cloud Theory

Authors: Leah Wright, Trevor Townsend

Abstract:

Mode choice models are crucial instruments in the analysis of travel behaviour. These models show the relationship between an individual's choice of transportation mode for a given O-D pair and the individual's socioeconomic characteristics, such as household size, income level, age and/or gender, as well as the features of the transportation system. The most popular functional forms of these models are based on Utility-Based Choice Theory, which addresses the uncertainty in the decision-making process with the use of an error term. However, with the development of artificial intelligence, many researchers have started to take a different approach to travel demand modelling. In recent times, researchers have looked at using neural networks, fuzzy logic, and rough set theory to develop improved mode choice formulations. The concept of cloud theory has recently been introduced to model decision-making under uncertainty. Unlike the previously mentioned theories, cloud theory recognises a relationship between randomness and fuzziness, two of the most common types of uncertainty. This research investigates the use of cloud theory in mode choice models and presents the conceptual framework of a mode choice model using cloud theory. Merging decision-making under uncertainty with mode choice models is state of the art; the cloud theory model is expected to address the issues and concerns with the nested logit model and improve the design of mode choice models and their use in travel demand modelling.
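
For readers unfamiliar with cloud theory, the basic construct is the forward normal cloud generator: expectation Ex, entropy En (fuzziness), and hyper-entropy He (randomness of the fuzziness) jointly produce "cloud drops". The parameter values below are illustrative, e.g. a traveller's fuzzy utility for one mode:

```python
# Forward normal cloud generator, the standard algorithm of cloud theory.
import numpy as np

def normal_cloud(Ex, En, He, n_drops=1000, seed=5):
    rng = np.random.default_rng(seed)
    En_i = rng.normal(En, He, n_drops)             # randomized entropy per drop
    x = rng.normal(Ex, np.abs(En_i))               # drop positions
    mu = np.exp(-(x - Ex) ** 2 / (2 * En_i ** 2))  # certainty degree of each drop
    return x, mu

x, mu = normal_cloud(Ex=0.6, En=0.1, He=0.02)
print(x[:5].round(3), mu[:5].round(3))
```

Because each drop carries both a random position and a fuzzy certainty degree, the generator captures the randomness-fuzziness link that the abstract argues utility-based error terms miss.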

Keywords: cloud theory, decision-making, mode choice models, travel behaviour, uncertainty

Procedia PDF Downloads 387
16574 Cloud Computing in Data Mining: A Technical Survey

Authors: Ghaemi Reza, Abdollahi Hamid, Dashti Elham

Abstract:

Cloud computing poses a diversity of challenges for data mining operations, arising from the dynamic structure of data distribution as opposed to the typical database scenarios of conventional architectures. Due to the immense number of users seeking data on a daily basis, there are serious security concerns for cloud providers, as well as for the data providers who place their data in the cloud computing environment. Big data analytics uses compute-intensive data mining algorithms (hidden Markov models, MapReduce parallel programming, the Apache Mahout project, the Hadoop distributed file system, k-means and k-medoids, Apriori) that require efficient high-performance processors to produce timely results, and data mining algorithms are needed to solve for or optimize the model parameters. The challenges such operations must face are establishing successful transactions within the existing virtual machine environment and keeping the databases under control. Several factors have led to the move from conventional or centralized mining to distributed data mining; one approach is a SaaS that uses multi-agent systems to implement the different tasks of the system. There are still open problems in data mining based on cloud computing, including the design and selection of data mining algorithms.

Keywords: cloud computing, data mining, computing models, cloud services

Procedia PDF Downloads 479
16573 Simulation of Channel Models for Device-to-Device Application of 5G Urban Microcell Scenario

Authors: H. Zormati, J. Chebil, J. Bel Hadj Tahar

Abstract:

Next-generation wireless transmission technology (5G) is expected to support the development of channel models for higher frequency bands, so the characterization of high frequency bands is the most important issue in radio propagation research for 5G, and multiple urban microcellular measurements have been carried out at 60 GHz. In this paper, the collected data is analyzed with a focus on path loss (PL); the objective is to compare the simulation results of several studied channel models in order to test the performance of each.
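
As one example of the kind of model typically compared in such studies, a sketch of the close-in (CI) free-space reference distance path loss model, often fitted to 60 GHz urban-microcell data; the exponent and shadowing values below are placeholders, not the paper's fitted results:

```python
# CI path loss model: PL(d) = FSPL(d0) + 10*n*log10(d/d0) + shadowing.
import numpy as np

def ci_path_loss(d, f_ghz=60.0, n=2.3, sigma_db=4.0, d0=1.0, seed=6):
    """Path loss in dB at distance d (m) with log-normal shadowing."""
    rng = np.random.default_rng(seed)
    c = 3e8
    fspl_d0 = 20 * np.log10(4 * np.pi * d0 * f_ghz * 1e9 / c)  # FSPL at d0 = 1 m
    return fspl_d0 + 10 * n * np.log10(d / d0) + rng.normal(0, sigma_db, np.shape(d))

d = np.array([10, 50, 100, 200])
print(ci_path_loss(d).round(1))   # dB
```

Comparing models then amounts to fitting n and sigma per model against the measured 60 GHz data and checking the residuals.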

Keywords: 5G, channel model, 60 GHz channel, millimeter-wave, urban microcell

Procedia PDF Downloads 319
16572 Liesegang Phenomena: Experimental and Simulation Studies

Authors: Vemula Amalakrishna, S. Pushpavanam

Abstract:

Change and motion characterize and persistently reshape the world around us, on scales from molecular to global. The subtle interplay between change (reaction) and motion (diffusion) gives rise to astonishingly intricate spatial and temporal patterns. Such pattern formation in nature has been intellectually appealing to scientists since antiquity. Periodic precipitation patterns, also known as Liesegang patterns (LPs), are one of the most stimulating examples of such self-assembling reaction-diffusion (RD) systems. LP formation has great potential in micro- and nanotechnology. So far, research on LPs has concentrated mostly on how these patterns form, retrieving information to build a universal mathematical model for them. Researchers have developed various theoretical models to comprehensively describe the geometrical diversity of LPs. To the best of our knowledge, simulation studies of LPs assume arbitrary values of the RD parameters to explain experimental observations qualitatively. In this work, existing models were studied to understand the mechanism behind the phenomenon, and the challenges pertaining to these models were identified and explained. The models are not computationally efficient due to the presence of a discontinuous precipitation rate in the RD equations. To overcome this computational challenge, smoothed Heaviside functions have been introduced, which also reduce the computational time. Experiments were performed using a conventional LP system (AgNO₃-K₂Cr₂O₇) to understand the effects of different gels and temperatures on the patterns formed. The model is extended to real parameter values to compare the simulated results with experimental data in both 1-D (Cartesian test tubes) and 2-D (cylindrical tubes and Petri dishes).
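
A sketch of the computational fix described above: replacing the discontinuous precipitation switch with a smoothed Heaviside function so the RD source term stays differentiable. The rate constants and supersaturation threshold are assumed values:

```python
# tanh-smoothed Heaviside step used inside a precipitation rate term.
import numpy as np

def heaviside_smooth(x, eps=1e-2):
    """Smooth step: -> 0 below 0, -> 1 above 0, differentiable everywhere."""
    return 0.5 * (1.0 + np.tanh(x / eps))

def precipitation_rate(a, b, K=1.0, threshold=0.5, k_p=10.0, eps=1e-2):
    """Rate switches on smoothly once the ion product a*b exceeds threshold."""
    supersat = a * b / K - threshold
    return k_p * supersat * heaviside_smooth(supersat, eps)

print(precipitation_rate(0.8, 0.7))   # positive only when supersaturated
print(precipitation_rate(0.3, 0.3))   # effectively zero below threshold
```

Shrinking eps recovers the sharp switch of the original model, while any finite eps keeps the time stepping stable and fast.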

Keywords: reaction-diffusion, spatio-temporal patterns, nucleation and growth, supersaturation

Procedia PDF Downloads 152
16571 A Forward-Looking View of the Intellectual Capital Accounting Information System

Authors: Rbiha Salsabil Ketitni

Abstract:

An entire company is a web of information flows, with each piece of information serving several events and activities; together they form a large, even huge, set of data. The enormity of this information means that some of it can be lost, a possibility that must be avoided in the institution, especially for information that has a significant impact on it. In most cases, information systems are used to avoid such losses and to keep the information relatively correct. At present, it is impossible to find a company without information systems, since they organize information, preserve it, and save their owners time thanks to the speed with which they perform their mission. This study aims to provide the idea of an accounting information system for intellectual capital, opening a forward-looking line of study for its construction and development by researchers, scientists, and professionals. The motivation is that most observers see a great contradiction in current treatments of intellectual capital: such systems do not provide real values when intellectual capital is measured, and its disclosure in financial reports is not distinguished by transparency.

Keywords: accounting, intellectual capital, intellectual capital accounting, information system

Procedia PDF Downloads 83
16570 A Biometric Template Security Approach to Fingerprints Based on Polynomial Transformations

Authors: Ramon Santana

Abstract:

The use of biometric identifiers in the field of information security, access control to resources, and authentication in ATMs and banking, among others, raises great concern about the safety of biometric data. Eight vulnerabilities have been detected in the general architecture of a biometric system, six of which allow obtaining the minutiae template in plain text. The main consequence of obtaining minutiae templates is the loss of the biometric identifier for life. To mitigate these vulnerabilities, several models to protect minutiae templates have been proposed, yet vulnerabilities in the cryptographic security of these models still allow biometric data to be obtained in plain text. In order to increase cryptographic security and ease of reversibility, a minutiae template protection model is proposed. The model aims to provide cryptographic protection and facilitate the reversibility of the data using two levels of security. The first level is the data transformation level, which generates data invariant to rotation and translation; a further transformation is irreversible. The second level is the evaluation level, where the encryption key is generated and the data are evaluated using a defined evaluation function. The model aims at mitigating the known vulnerabilities of the previously proposed models, basing its security on the impossibility of polynomial reconstruction.
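
A heavily simplified, hypothetical sketch of the two levels described above: (1) pairwise minutiae features invariant to rotation and translation, and (2) evaluation of a secret polynomial over a prime field on those features. The field size, quantisation, and key handling are all assumptions for illustration, not the paper's scheme:

```python
# Illustrative polynomial-based minutiae protection (not the authors' model).
import numpy as np

P = 65537                                   # prime field for evaluation
key_poly = [1234, 5678, 91011]              # secret polynomial coefficients

def invariant_pairs(minutiae):
    """Distance and angle difference per minutiae pair: invariant to
    rotation and translation of the whole fingerprint."""
    feats = []
    for i in range(len(minutiae)):
        for j in range(i + 1, len(minutiae)):
            (x1, y1, t1), (x2, y2, t2) = minutiae[i], minutiae[j]
            d = int(round(np.hypot(x2 - x1, y2 - y1)))
            dt = (t2 - t1) % 360
            feats.append((d, dt))
    return feats

def protect(minutiae):
    out = []
    for d, dt in invariant_pairs(minutiae):
        u = (d * 360 + dt) % P              # quantised invariant feature
        v = sum(c * pow(u, k, P) for k, c in enumerate(key_poly)) % P
        out.append((u, v))                  # store evaluated points only
    return out

template = [(100, 120, 30), (140, 180, 75), (90, 200, 210)]
print(protect(template))
```

The security intuition is the one the abstract names: with only evaluated points stored, recovering the secret polynomial (and hence inverting the template) requires polynomial reconstruction that the scheme is designed to make infeasible.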

Keywords: fingerprint, template protection, bio-cryptography, minutiae protection

Procedia PDF Downloads 170
16569 Mask-Prompt-Rerank: An Unsupervised Method for Text Sentiment Transfer

Authors: Yufen Qin

Abstract:

Text sentiment transfer is an important branch of text style transfer. The goal is to generate text with a different sentiment attribute from a text with a specific sentiment attribute, while keeping the content and semantic information unrelated to sentiment unchanged. There are currently two main challenges in this field: the lack of parallel corpora and text attribute entanglement. In response to these problems, this paper proposes a novel solution, Mask-Prompt-Rerank, which masks the sentiment words and then uses prompt-based regeneration to transfer the sentence sentiment. Experiments on two sentiment benchmark datasets and one formality transfer benchmark dataset show that this approach makes the performance of small pre-trained language models comparable to that of the most advanced large models, while consuming two orders of magnitude less computation and memory.
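
A toy sketch of the mask-then-rerank idea using off-the-shelf pipelines: mask the sentiment word, let a masked LM propose replacements, then rerank candidates with a sentiment classifier. The model choices and the hand-placed mask are illustrative; the paper's prompting is more elaborate:

```python
# Mask -> regenerate -> rerank by target sentiment.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
clf = pipeline("sentiment-analysis")

sentence = "the service here is [MASK] and I will come back"
target = "POSITIVE"

candidates = fill(sentence, top_k=20)                     # masked-LM proposals
scored = [(c["sequence"], clf(c["sequence"])[0]) for c in candidates]
best = max((s for s in scored if s[1]["label"] == target),
           key=lambda s: s[1]["score"], default=None)
print(best[0] if best else "no candidate matched the target sentiment")
```

Because only the sentiment word is regenerated, the sentiment-unrelated content stays untouched, which is how the approach sidesteps attribute entanglement without parallel data.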

Keywords: language model, natural language processing, prompt, text sentiment transfer

Procedia PDF Downloads 81
16568 The Relationship of Lean Management Principles with Lean Maturity Levels: Multiple Case Study in Manufacturing Companies

Authors: Alexandre D. Ferraz, Dario H. Alliprandini, Mauro Sampaio

Abstract:

Companies and other institutions constantly seek better organizational performance and greater competitiveness, and many tools, methodologies, and models exist for increasing performance. The Lean Management approach, however, seems to be the most effective in terms of achieving significant improvement in productivity relatively quickly. Although Lean tools are relatively easy to understand and implement in different contexts, many organizations are not able to transform themselves into 'Lean companies'. Most implementation efforts have shown isolated benefits, failing to achieve the desired impact on the performance of the overall enterprise system. There is also a growing perception of the importance of management in Lean transformation, but few studies have empirically investigated and described 'Lean Management'. In order to understand more clearly the ideas that guide Lean Management and its influence on the maturity level of the production system, the objective of this research is to analyze the relationship between Lean Management principles and the Lean maturity level in organizations. The research also analyzes the principles of Lean Management and their relationship with 'Lean culture' and with the results obtained. The research was developed using the case study methodology: three manufacturing units of a German multinational company in the industrial automation segment, located in different countries, were studied in order to compare the practices and the maturity level of implementation. The primary source of information was a research questionnaire based on the theoretical review. The research showed that the higher the level of Lean Management principles, the higher the Lean maturity level, the Lean culture level, and the level of Lean results obtained in the organization. It also showed that factors such as the time since the adoption of Lean concepts and company size were not determinant for the level of Lean Management principles and, consequently, for the level of Lean maturity; the characteristics of the production system showed much more influence on the aspects evaluated. The research also offers recommendations for the managers of the plants analyzed and suggestions for future research.

Keywords: lean management, lean principles, lean maturity level, lean manufacturing

Procedia PDF Downloads 142
16567 Contrasted Mean and Median Models in Egyptian Stock Markets

Authors: Mai A. Ibrahim, Mohammed El-Beltagy, Motaz Khorshid

Abstract:

Emerging market return distributions have shown significant departures from normality: they are characterized by fatter tails relative to the normal distribution and exhibit levels of skewness and kurtosis that constitute a significant departure from normality. Therefore, the classical Markowitz mean-variance model is not applicable to emerging markets, since it assumes normally distributed returns (with zero skewness and kurtosis) and a quadratic utility function. Markowitz mean-variance analysis can still be used in cases of moderate non-normality, where it provides a good approximation of the expected utility, but it may be ineffective under large departures from normality. Higher-moment models and median models have been suggested in the literature for asset allocation in this case. Higher-moment models account for the insufficiency of describing a portfolio by only its first two moments, while the median has been introduced as a robust statistic that is less affected by outliers than the mean. Tail risk measures such as Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR) have been introduced instead of variance to capture the effect of risk. In this research, higher-moment models, including Mean-Variance-Skewness (MVS) and Mean-Variance-Skewness-Kurtosis (MVSK), are formulated as single-objective non-linear programming (NLP) problems, and median models, including Median-Value-at-Risk (MedVaR) and Median-Mean Absolute Deviation (MedMAD), are formulated as single-objective mixed-integer linear programming (MILP) problems. The higher-moment models and median models are compared to benchmark portfolios and tested on real financial data from the Egyptian main index, EGX30. The results show that all the median models outperform the higher-moment models, providing higher final wealth for the investor over the entire period of study. In addition, the results confirm the inapplicability of the classical Markowitz mean-variance model to the Egyptian stock market, as it resulted in very low realized profits.
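
A sketch of a Mean-Variance-Skewness portfolio as a single-objective NLP, solved with scipy; the return data and preference weights are synthetic stand-ins, not the EGX30 constituents or the paper's formulation:

```python
# MVS portfolio: maximize mean and skewness, penalize variance.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(7)
R = rng.standard_t(df=5, size=(500, 8)) * 0.01      # fat-tailed daily returns

def neg_utility(w, lam_v=2.0, lam_s=1.0):
    port = R @ w
    mean, var = port.mean(), port.var()
    skew = ((port - mean) ** 3).mean() / port.std() ** 3
    return -(mean - lam_v * var + lam_s * skew)

n = R.shape[1]
res = minimize(neg_utility, np.full(n, 1 / n),
               bounds=[(0, 1)] * n,                  # no short selling
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1}])
print(res.x.round(3))                                # optimal weights
```

The cubic skewness term is what makes this an NLP rather than a quadratic program; the median models in the paper instead lead to MILP formulations.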

Keywords: Egyptian stock exchange, emerging markets, higher moment models, median models, mixed-integer linear programming, non-linear programming

Procedia PDF Downloads 314
16566 The Influence of Covariance Hankel Matrix Dimension on Algorithms for VARMA Models

Authors: Celina Pestano-Gabino, Concepcion Gonzalez-Concepcion, M. Candelaria Gil-Fariña

Abstract:

Some estimation methods for VARMA models, and for multivariate time series models in general, rely on the use of a Hankel matrix. It is known that if the data sample is populous enough and the dimension of the Hankel matrix is unnecessarily large, this may result in an unnecessary number of computations as well as in numerical problems. In this sense, the aim of this paper is two-fold. First, we provide some theoretical results for these matrices which translate into a lower dimension for the matrices normally used in the algorithms; this contribution thus serves to improve those methods from a numerical and, presumably, statistical point of view. Second, we have chosen an estimation algorithm to illustrate our improvements in practice. The results obtained in a simulation of VARMA models show that increasing the size of the Hankel matrix beyond the theoretical bound proposed as valid does not necessarily lead to better practical results. Therefore, for future research, we propose conducting similar studies using any of the linear system estimation methods that depend on Hankel matrices.
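
A sketch of the kind of block Hankel matrix of sample autocovariances these methods build; the series and the chosen numbers of block rows and columns (the dimension the paper's bounds address) are illustrative:

```python
# Block Hankel matrix H with blocks H[i,j] = Gamma(i+j+1), the lag-(i+j+1)
# sample autocovariance of a k-variate series.
import numpy as np

rng = np.random.default_rng(8)
Y = rng.normal(size=(1000, 2))                     # bivariate series stand-in
Y -= Y.mean(axis=0)

def autocov(Y, lag):
    n = Y.shape[0]
    return (Y[lag:].T @ Y[:n - lag]) / n           # k x k matrix Gamma(lag)

def block_hankel(Y, rows, cols):
    k = Y.shape[1]
    H = np.empty((rows * k, cols * k))
    for i in range(rows):
        for j in range(cols):
            H[i*k:(i+1)*k, j*k:(j+1)*k] = autocov(Y, i + j + 1)
    return H

H = block_hankel(Y, rows=4, cols=4)
print(H.shape, np.linalg.matrix_rank(H, tol=1e-2))
```

Growing `rows` and `cols` beyond what the system order requires inflates the matrix without adding rank information, which is exactly the waste the paper's dimension bounds are meant to avoid.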

Keywords: covariances Hankel matrices, Kronecker indices, system identification, VARMA models

Procedia PDF Downloads 243
16565 An Infinite Mixture Model for Modelling Stutter Ratio in Forensic Data Analysis

Authors: M. A. C. S. Sampath Fernando, James M. Curran, Renate Meyer

Abstract:

Forensic DNA analysis has received much attention over the last three decades due to its incredible usefulness in human identification, and the statistical interpretation of DNA evidence is recognised as one of the most mature fields in forensic science. Peak heights in an electropherogram (EPG) are approximately proportional to the amount of template DNA in the original sample being tested. A stutter is a minor peak in an EPG that does not mask an allele of a potential contributor and is considered an artefact presumed to arise from miscopying or slippage during PCR. Stutter peaks are mostly analysed in terms of the stutter ratio, calculated relative to the height of the corresponding parent allele. The analysis of mixture profiles has always been problematic in evidence interpretation, especially in the presence of PCR artefacts like stutters. Unlike binary and semi-continuous models, continuous models assign a probability (as a continuous weight) to each possible genotype combination and significantly enhance the use of continuous peak height information, resulting in more efficient and reliable interpretations. Therefore, a sound methodology to distinguish between stutters and real alleles is essential for the accuracy of the interpretation, and any such method has to be able to model stutter peaks. Bayesian nonparametric methods provide increased flexibility in applied statistical modelling. Mixture models are frequently employed as fundamental data analysis tools in the clustering and classification of data and assume unidentified heterogeneous sources for the data. In model-based clustering, each unknown source is reflected by a cluster, and the clusters are modelled using parametric models. Specifying the number of components in finite mixture models, however, is practically difficult, even though the calculations are relatively simple. Infinite mixture models, in contrast, do not require the user to specify the number of components; instead, a Dirichlet process, an infinite-dimensional generalization of the Dirichlet distribution, is used to deal with the problem of the number of components. The Chinese restaurant process (CRP), the stick-breaking process, and the Pólya urn scheme are frequently used as Dirichlet priors in Bayesian mixture models. In this study, we illustrate an infinite mixture of simple linear regression models for modelling the stutter ratio and introduce some modifications to overcome weaknesses associated with the CRP.
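
A sketch of the Chinese restaurant process prior mentioned above: each new observation joins an existing cluster with probability proportional to the cluster's size, or opens a new cluster with probability proportional to the concentration alpha. The alpha value below is an illustrative choice:

```python
# CRP sampling: number of clusters grows with the data, no fixed K needed.
import numpy as np

def crp(n_customers, alpha=1.0, seed=9):
    rng = np.random.default_rng(seed)
    assignments = [0]                       # first observation starts cluster 0
    for _ in range(1, n_customers):
        counts = np.bincount(assignments)
        probs = np.append(counts, alpha) / (len(assignments) + alpha)
        assignments.append(rng.choice(len(probs), p=probs))
    return assignments

tables = crp(200, alpha=2.0)
print(f"{max(tables) + 1} clusters for 200 observations")
```

In the paper's setting, each cluster would carry its own simple linear regression for stutter ratio, so the model lets the data decide how many regression regimes exist.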

Keywords: Chinese restaurant process, Dirichlet prior, infinite mixture model, PCR stutter

Procedia PDF Downloads 330
16564 Hybrid Method for Smart Suggestions in Conversations for Online Marketplaces

Authors: Yasamin Rahimi, Ali Kamandi, Abbas Hoseini, Hesam Haddad

Abstract:

Online/offline chat is a convenient feature of electronic markets for second-hand products, in which potential customers would like more information about the products to fill the information gap between buyers and sellers. Online peer-to-peer markets are trying to create artificial-intelligence-based systems that help customers ask more informative questions more easily. In this article, we introduce a method for the question/answer system that we developed for the top-ranked electronic market in Iran, called Divar. When it comes to second-hand products, incomplete product information results in a loss to the buyer, and one way to balance buyer and seller information about a product is to help the buyer ask more informative questions when purchasing. A short time to start a conversation and achieve its desired result was also one of our main goals, which was achieved according to A/B test results. In this paper, we propose and evaluate a method for suggesting questions and answers in the messaging platform of the e-commerce website Divar. Such systems help users gather knowledge about a product more easily and faster, all from the Divar database. We collected a dataset of around 2 million messages in colloquial Persian; for each product category we gathered 500K messages, of which only 2K were tagged, so semi-supervised methods were used. To deploy the proposed model to production, it must be fast enough to process 10 million messages daily on CPU processors; to reach that speed, faster and simpler models are preferred over deep neural models in many subtasks. The proposed method, which requires only a small amount of labeled data, is currently used in Divar production on CPU processors: 15% of buyers' and sellers' messages in conversations are chosen directly from our model's output, and more than 27% of buyers have used the model's suggestions in at least one daily conversation.

Keywords: smart reply, spell checker, information retrieval, intent detection, question answering

Procedia PDF Downloads 187
16563 Seafloor and Sea Surface Modelling in the East Coast Region of North America

Authors: Magdalena Idzikowska, Katarzyna Pająk, Kamil Kowalczyk

Abstract:

Seafloor topography is a fundamental issue in geological, geophysical, and oceanographic studies. Single-beam or multibeam sonars attached to the hulls of ships emit a hydroacoustic signal from transducers and reproduce the topography of the seabed with relevant accuracy and spatial resolution. Bathymetric data from ship surveys are provided by the National Centers for Environmental Information of the National Oceanic and Atmospheric Administration. Unfortunately, most of the seabed is still unidentified, as there are many gaps still to be explored between ship survey tracks; moreover, such measurements are very expensive and time-consuming. One solution is the raster bathymetric models shared by the General Bathymetric Chart of the Oceans, whose products are compilations of different sets of data, raw or processed. Measurements of gravity anomalies also serve as indirect data for the development of bathymetric models: some forms of seafloor relief (e.g., seamounts) increase the force of the Earth's pull, leading to changes in the sea surface. Based on satellite altimetry data, sea surface height and marine gravity anomalies can be estimated, and from the anomalies it is possible to infer the structure of the seabed. The main goal of this work is to create regional bathymetric models and models of the sea surface in the area of the east coast of North America, a region of seamounts and undulating seafloor. The research includes an analysis of the methods and techniques used, an evaluation of the interpolation algorithms, model densification, and the creation of grid models. The data used are raster bathymetric models in NetCDF format, survey data from multibeam soundings in MB-System format, and satellite altimetry data from the Copernicus Marine Environment Monitoring Service. The methodology includes data extraction, processing, mapping, and spatial analysis, with visualization of the results carried out using Geographic Information System tools. The result is an extension of the state of knowledge about the quality and usefulness of the data used for seabed and sea surface modeling, and about the accuracy of the generated models. Sea level is averaged over time and space (excluding waves, tides, etc.); its changes, together with knowledge of the topography of the ocean floor, inform us indirectly about the volume of the entire ocean. The true shape of the ocean surface is further varied by phenomena such as tides, differences in atmospheric pressure, wind systems, the thermal expansion of water, and phases of ocean circulation. Depending on the location of a point, the greater the depth, the weaker the trend of sea level change. Studies show that combining data sets from different sources, with different accuracies, can affect the quality of sea surface and seafloor topography models.

Keywords: seafloor, sea surface height, bathymetry, satellite altimetry

Procedia PDF Downloads 79