Search results for: panel data method
34830 Visual Text Analytics Technologies for Real-Time Big Data: Chronological Evolution and Issues
Authors: Siti Azrina B. A. Aziz, Siti Hafizah A. Hamid
Abstract:
New approaches to analyzing and visualizing data streams in real time are important for enabling decision makers to make prompt decisions. Financial market trading and surveillance, large-scale emergency response, and crowd control are some example scenarios that require real-time analytics and data visualization. This situation has led to the development of techniques and tools that support humans in analyzing the source data. With the emergence of Big Data and social media, new techniques and tools are required to process streaming data. Today, a range of tools implementing some of these functionalities is available. In this paper, we present a chronological evaluation of the evolution of technologies that support real-time analytics and visualization of data streams. Based on research papers published from 2002 to 2014, we gathered the general information, main techniques, challenges, and open issues. The techniques for streaming text visualization are identified based on the Text Visualization Browser in chronological order. This paper aims to review the evolution of streaming text visualization techniques and tools, as well as to discuss the problems and challenges of each identified tool.
Keywords: information visualization, visual analytics, text mining, visual text analytics tools, big data visualization
Procedia PDF Downloads 399
34829 Churn Prediction for Telecommunication Industry Using Artificial Neural Networks
Authors: Ulas Vural, M. Ergun Okay, E. Mesut Yildiz
Abstract:
Telecommunication service providers demand accurate and precise prediction of customer churn probabilities to increase the effectiveness of their customer relation services. The large amount of customer data owned by the service providers is suitable for analysis by machine learning methods. In this study, expenditure data of customers are analyzed using an artificial neural network (ANN). The ANN model is applied to the data of customers with different billing durations. The proposed model successfully predicts churn probabilities with 83% accuracy using only three months of expenditure data, and the prediction accuracy increases up to 89% when nine months of data are used. The experiments also show that the accuracy of the ANN model increases on an extended feature set that includes information on changes in bill amounts.
Keywords: customer relationship management, churn prediction, telecom industry, deep learning, artificial neural networks
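A minimal sketch of the kind of billing-data churn classifier described above; the file, column names, churn label, and network size are assumptions, not the authors' actual data or architecture.
```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Assumed layout: one row per customer, monthly bill amounts plus
# month-over-month changes as the extended feature set.
df = pd.read_csv("billing.csv")                     # hypothetical file
months = [f"bill_m{i}" for i in range(1, 10)]       # 9 months of expenditure
deltas = df[months].diff(axis=1).iloc[:, 1:].add_suffix("_delta")
X = pd.concat([df[months], deltas], axis=1).values
y = df["churned"].values                            # assumed binary label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_tr)

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(scaler.transform(X_tr), y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(scaler.transform(X_te))))
```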
Procedia PDF Downloads 147
34828 An Alternative Stratified Cox Model for Correlated Variables in Infant Mortality
Authors: K. A. Adeleke
Abstract:
Often in epidemiological research, introducing a stratified Cox model can account for interactions of some inherent factors with major or noticeable factors. This work aimed at modelling correlated variables in infant mortality in the presence of inherent factors affecting the infant survival function. An alternative semiparametric stratified Cox model is proposed to account for multilevel factors that interact with others. It is used as a tool to model infant mortality data from the Nigeria Demographic and Health Survey (NDHS), with multilevel factors (tetanus, polio, and breastfeeding) correlated with the main factors (sex, size, and mode of delivery). Asymptotic properties of the estimators are also studied via simulation. The model, tested on the data, showed good fit and performed differently depending on the levels of the interaction of the strata variable Z*. Evidence that the baseline hazard functions and regression coefficients are not the same from stratum to stratum provides a gain in information over the ordinary Cox model. Simulation results showed that the present method produced better estimates in terms of lower bias, standard errors, and mean square errors.
Keywords: stratified Cox, semiparametric model, infant mortality, multilevel factors, confounding variables
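A minimal sketch of a stratified Cox fit in the spirit described above, using the lifelines library; the file and variable names are assumptions, not the authors' NDHS extract or their alternative estimator.
```python
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("infant_survival.csv")   # hypothetical columns below
# duration: survival time in months; event: 1 = death observed
# main factors: sex, size, delivery_mode; stratifying factors: tetanus, polio, breastfeeding

cph = CoxPHFitter()
cph.fit(
    df[["duration", "event", "sex", "size", "delivery_mode",
        "tetanus", "polio", "breastfeeding"]],
    duration_col="duration",
    event_col="event",
    strata=["tetanus", "polio", "breastfeeding"],  # separate baseline hazard per stratum
)
cph.print_summary()                        # regression coefficients for the main factors
```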
Procedia PDF Downloads 557
34827 A Machine Learning Based Method to Detect System Failure in Resource Constrained Environment
Authors: Payel Datta, Abhishek Das, Abhishek Roychoudhury, Dhiman Chattopadhyay, Tanushyam Chattopadhyay
Abstract:
Machine learning (ML) and deep learning (DL) are predominantly used in image/video processing, natural language processing (NLP), and audio and speech recognition, but much less in system performance evaluation. In this paper, the authors describe the architecture of an abstraction layer constructed using ML/DL to detect system failure. The proposed system detects system failure by evaluating the performance metrics of an IoT service deployment under a constrained infrastructure environment. It has been tested on a manually annotated data set containing different system metrics, such as number of threads, throughput, average response time, CPU usage, memory usage, and network input/output, captured in different hardware environments such as edge (Atom-based gateway) and cloud (AWS EC2). The main challenge of developing such a system is that the accuracy of classification should be 100%, as any error in the system degrades service performance and consequently affects the reliability and high availability that are mandatory for an IoT system. The proposed ML/DL classifiers work with 100% accuracy for a data set of nearly 4,000 samples captured within the organization.
Keywords: machine learning, system performance, performance metrics, IoT, edge
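An illustrative sketch of a failure classifier over performance metrics of the kind listed above; the file, feature names, and label are assumptions, and the authors' actual ML/DL models are not reproduced here.
```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("iot_metrics.csv")   # hypothetical annotated data set
features = ["threads", "throughput", "avg_response_ms",
            "cpu_pct", "mem_pct", "net_in_kbps", "net_out_kbps"]
X, y = df[features], df["failure"]    # failure: 1 = system failure observed

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```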
Procedia PDF Downloads 195
34826 A Multi-Criteria Decision Method for the Recruitment of Academic Personnel Based on the Analytical Hierarchy Process and the Delphi Method in a Neutrosophic Environment
Authors: Antonios Paraskevas, Michael Madas
Abstract:
For a university to maintain its international competitiveness in education, it is essential to recruit high-quality academic staff, as they constitute its most valuable asset. This selection plays a significant role in achieving strategic objectives, particularly by emphasizing a firm commitment to an exceptional student experience and innovative, high-quality teaching and learning practices. In this vein, the appropriate selection of academic staff is a very important factor in the competitiveness, efficiency, and reputation of an academic institute. Within this framework, our work presents a comprehensive methodological concept that emphasizes the multi-criteria nature of the problem and shows how decision-makers could utilize our approach to reach an appropriate judgment. The conceptual framework introduced in this paper is built upon a hybrid neutrosophic method based on the Neutrosophic Analytical Hierarchy Process (N-AHP), which uses the theory of neutrosophic sets and is considered suitable for handling the significant degree of ambiguity and indeterminacy observed in the decision-making process. To this end, our framework extends the N-AHP by incorporating the Neutrosophic Delphi Method (N-DM). By applying the N-DM, we can take into consideration the importance of each decision-maker and their preferences per evaluation criterion. To the best of our knowledge, the proposed model is the first to apply the Neutrosophic Delphi Method to the selection of academic staff. As a case study, we applied our method to a real problem of academic personnel selection, with the main goal of enhancing the algorithm proposed in previous scholars' work, and thus addressing the inherent ineffectiveness that becomes apparent in traditional multi-criteria decision-making methods when dealing with such situations. As a further result, we prove that our method demonstrates greater applicability and reliability when compared to other decision models.
Keywords: multi-criteria decision making methods, analytical hierarchy process, Delphi method, personnel recruitment, neutrosophic set theory
Procedia PDF Downloads 117
34825 Studying the Influence of Logistics on Organizational Performance through a Supply Chain Strategy: Case Study in Goldiran Electronics Co.
Authors: Ali Hajiesmaeili, Mehdi Rahimi, Ehsan Jaberi, Amir Abbas Hosseini
Abstract:
The purpose of this study is to investigate the influence of logistics performance on organizational performance, including both marketing and financial aspects; to show the financial impact of selecting the right marketing and logistics priorities in line with the supply chain type; and to give practitioners an advance identification of their priorities, their supply chain participation type, and the best combination of their strategies and resources in this regard. We used the original model's questionnaire to gather the experts' data, and SPSS and AMOS Ver. 22 to analyze the gathered data. The CFA method was also used to test whether a relationship exists between observed variables and their underlying latent constructs. Supply chain strategy implementation leads to logistics performance improvement, and marketing performance is affected as well. Logistics service providers should focus on enhancing supply chain performance, since logistics performance has been considered a basis for evaluating supply chain management strategy. Consequently, the performance of the organization will be enhanced. This is the first study conducted in Iran that analyzes the relationship between logistics and organizational performance in home appliances and home entertainment companies.
Keywords: logistics, organizational, performance, supply chain, strategy
Procedia PDF Downloads 649
34824 Parameter Study for TPU Nanofibers Fabricated via Centrifugal Spinning
Authors: Yasin Akgül, Yusuf Polat, Emine Canbay, Ali Kılıç
Abstract:
Electrospinning is the most common method to produce nanofibers. However, its low production rate is still a big challenge for industrial applications. In this study, the morphology of nanofibers obtained from centrifugal spinning was investigated. Dominant process parameters acting on the fiber diameter and fiber orientation are discussed.
Keywords: centrifugal spinning, electrospinning, nanofiber, TPU nanofibers
Procedia PDF Downloads 450
34823 Convergence of Generalized Jacobi, Gauss-Seidel and Successive Overrelaxation Methods for Various Classes of Matrices
Authors: Manideepa Saha, Jahnavi Chakrabarty
Abstract:
Generalized Jacobi (GJ) and Generalized Gauss-Seidel (GGS) methods are more effective than the conventional Jacobi and Gauss-Seidel methods for solving linear systems of equations. It is known that the GJ and GGS methods converge for strictly diagonally dominant (SDD) matrices and for M-matrices. In this paper, we study the convergence of GJ and GGS for symmetric positive definite (SPD) matrices, L-matrices, and H-matrices. We introduce a generalization of the successive overrelaxation (SOR) method for solving linear systems and discuss its convergence for the classes of SDD matrices, SPD matrices, M-matrices, L-matrices, and H-matrices. Advantages of the generalized SOR method are established through numerical experiments over the GJ, GGS, and SOR methods.
Keywords: convergence, Gauss-Seidel, iterative method, Jacobi, SOR
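For reference, a short sketch of the conventional Jacobi, Gauss-Seidel, and SOR iterations that the generalized methods above extend, applied to a small strictly diagonally dominant system; the test matrix is an illustrative placeholder.
```python
import numpy as np

def jacobi(A, b, x0, iters=100):
    D = np.diag(A)
    R = A - np.diagflat(D)
    x = x0.copy()
    for _ in range(iters):
        x = (b - R @ x) / D
    return x

def sor(A, b, x0, omega=1.0, iters=100):
    # omega = 1 recovers Gauss-Seidel; 1 < omega < 2 over-relaxes
    n = len(b)
    x = x0.copy()
    for _ in range(iters):
        for i in range(n):
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (1 - omega) * x[i] + omega * (b[i] - sigma) / A[i, i]
    return x

A = np.array([[4.0, -1, 0], [-1, 4, -1], [0, -1, 4]])   # strictly diagonally dominant
b = np.array([15.0, 10, 10])
x0 = np.zeros(3)
print(jacobi(A, b, x0), sor(A, b, x0, omega=1.2))
```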
Procedia PDF Downloads 189
34822 Geographical Data Visualization Using Video Games Technologies
Authors: Nizar Karim Uribe-Orihuela, Fernando Brambila-Paz, Ivette Caldelas, Rodrigo Montufar-Chaveznava
Abstract:
In this paper, we present the advances corresponding to the implementation of a strategy to visualize geographical data using a Software Development Kit (SDK) for video games. We use multispectral images from the Landsat 7 platform and Light Detection and Ranging (LIDAR) data from the National Institute of Geography and Statistics of Mexico (INEGI). We select a place of interest to visualize from the Landsat platform and apply some processing to the image (rotations, atmospheric correction, and enhancement). The resulting image is our grayscale color map to fuse with the LIDAR data, which was selected using the same coordinates as in Landsat. The LIDAR data are translated to 8-bit raw data. Both images are fused in software developed using Unity (an SDK employed for video games). The resulting image is then displayed and can be explored by moving around. The idea is that the software could be used by students of geology and geophysics at the Engineering School of the National University of Mexico. They would download the software and images corresponding to a geological place of interest to a smartphone and could virtually visit and explore the site with a virtual reality visor such as Google Cardboard.
Keywords: virtual reality, interactive technologies, geographical data visualization, video games technologies, educational material
Procedia PDF Downloads 246
34821 Design of Enhanced Adaptive Filter for Integrated Navigation System of FOG-SINS and Star Tracker
Authors: Nassim Bessaad, Qilian Bao, Zhao Jiangkang
Abstract:
The fiber optic gyroscope in the strap-down inertial navigation system (FOG-SINS) suffers from precision degradation due to the influence of random errors. In this work, an enhanced Allan variance (AV) stochastic modeling method combined with discrete wavelet transform (DWT) signal denoising is implemented to estimate the random processes in the FOG signal. Furthermore, we devise a measurement-based iterative adaptive Sage-Husa nonlinear filter with augmented states to integrate a star tracker sensor with the SINS. The proposed filter adapts the measurement noise covariance matrix based on the available data. Moreover, the enhanced stochastic modeling scheme is employed to tune the process noise covariance matrix and the augmented-state Gauss-Markov process parameters. Finally, the effectiveness of the proposed filter is investigated using data collected under laboratory conditions. The results show the filter's improved accuracy in comparison with the conventional Kalman filter (CKF).
Keywords: inertial navigation, adaptive filtering, star tracker, FOG
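A small sketch of the DWT denoising step mentioned above, applied to a raw FOG rate record before Allan-variance analysis; the wavelet, decomposition level, and threshold rule are illustrative assumptions, not the paper's tuned settings.
```python
import numpy as np
import pywt

def dwt_denoise(signal, wavelet="db4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # universal threshold estimated from the finest-scale detail coefficients
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(signal)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(signal)]

fog = np.random.normal(0.0, 0.01, 4096)   # stand-in for a static FOG record
clean = dwt_denoise(fog)
```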
Procedia PDF Downloads 80
34820 Nonparametric Sieve Estimation with Dependent Data: Application to Deep Neural Networks
Authors: Chad Brown
Abstract:
This paper establishes general conditions for the convergence rates of nonparametric sieve estimators with dependent data. We present two key results: one for nonstationary data and another for stationary mixing data. Previous theoretical results often lack practical applicability to deep neural networks (DNNs). Using these conditions, we derive convergence rates for DNN sieve estimators in nonparametric regression settings with both nonstationary and stationary mixing data. The DNN architectures considered adhere to current industry standards, featuring fully connected feedforward networks with rectified linear unit activation functions, unbounded weights, and a width and depth that grow with sample size.
Keywords: sieve extremum estimates, nonparametric estimation, deep learning, neural networks, rectified linear unit, nonstationary processes
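A sketch of the kind of DNN sieve architecture described above: a fully connected feedforward ReLU network whose width and depth are set as functions of the sample size n. The growth rates used here are illustrative placeholders, not the rates derived in the paper.
```python
import math
import torch
import torch.nn as nn

def dnn_sieve(n_samples: int, in_dim: int) -> nn.Sequential:
    width = max(8, int(n_samples ** 0.5))          # assumed growth with n
    depth = max(2, int(math.log(n_samples)))       # assumed growth with n
    layers, d = [], in_dim
    for _ in range(depth):
        layers += [nn.Linear(d, width), nn.ReLU()]
        d = width
    layers.append(nn.Linear(d, 1))                 # scalar regression output
    return nn.Sequential(*layers)

# Usage: fit by least squares on (possibly dependent) regression data.
net = dnn_sieve(n_samples=1000, in_dim=5)
x, y = torch.randn(1000, 5), torch.randn(1000, 1)  # stand-in data
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(x), y)
    loss.backward()
    opt.step()
```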
Procedia PDF Downloads 42
34819 Electron Impact Ionization Cross-Sections for e-C₅H₅N₅ Scattering
Authors: Manoj Kumar
Abstract:
Ionization cross sections of molecules due to electron impact play an important role in chemical processes in various branches of applied physics, such as radiation chemistry, gas discharges, plasma etching in semiconductors, planetary upper atmospheric physics, and mass spectrometry. In the present work, we have calculated the total ionization cross sections for adenine (C₅H₅N₅), a biologically important molecule, by electron impact in the incident electron energy range from the ionization threshold to 2 keV, employing the well-known Jain-Khare semiempirical formulation based on Bethe and Møller cross sections. In the absence of experimental results, the present results are in good qualitative and quantitative agreement with the available theoretical results. The present results give us confidence for further investigation of complex biomolecules with better accuracy. Furthermore, the present method can deduce reliable cross-section data for complex targets with adequate accuracy and may facilitate the incorporation of calculated cross sections into atomic and molecular cross-section data sets for modeling codes and other applications.
Keywords: electron impact ionization cross-sections, oscillator strength, Jain-Khare semiempirical approach
Procedia PDF Downloads 111
34818 The Conditionality of Financial Risk: A Comparative Analysis of High-Tech and Utility Companies Listed on the Shenzhen Stock Exchange (SSE)
Authors: Joseph Paul Chunga
Abstract:
The investment universe offers a myriad of financial choices, which principally culminate in a duality between aggressive and conservative approaches. However, it is pertinent to emphasize that investment vehicles with an aggressive approach tend to take on more risk than conservative ones in an effort to generate higher future returns for their investors. This study examines the conditionality effect that such partiality in financing has on the High-Tech and Public Utility companies listed on the Shenzhen Stock Exchange (SSE). Specifically, it examines the significance of the relationship between the capitalization ratios of Total Debt Ratio (TDR) and Degree of Financial Leverage (DFL), the profitability ratios of Earnings per Share (EPS) and Return on Equity (ROE), and the financial risk of the two industries. We employ a modified version of the panel regression model used by Rahman (2017) to estimate the relationship. The study finds a significant positive relationship between the capitalization ratios and the financial risk of Public Utility companies, more so than for High-Tech companies, and a substantial negative relationship between the profitability ratios and the financial risk of the former compared to the latter. This provides an important insight for prospective investors with regard to the volatility of earnings of such companies.
Keywords: financial leverage, debt financing, conservative firms, aggressive firms
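A hedged sketch of a fixed-effects panel regression in the spirit of the modified panel model described above, approximated here with entity and year dummies (LSDV) in statsmodels; the file, column names, and the financial-risk proxy are assumptions, not the authors' specification.
```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("sse_panel.csv")   # hypothetical firm-year panel:
# columns: firm, year, fin_risk, tdr, dfl, eps, roe

model = smf.ols(
    "fin_risk ~ tdr + dfl + eps + roe + C(firm) + C(year)",  # firm and year effects
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["firm"]})    # cluster SEs by firm
print(model.params[["tdr", "dfl", "eps", "roe"]])
```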
Procedia PDF Downloads 186
34817 Unknown Groundwater Pollution Source Characterization in Contaminated Mine Sites Using Optimal Monitoring Network Design
Authors: H. K. Esfahani, B. Datta
Abstract:
Groundwater is one of the most important natural resources in many parts of the world; however, it is widely polluted due to human activities. Currently, effective and reliable groundwater management and remediation strategies are obtained using characterization of groundwater pollution sources, where the measured data at monitoring locations are utilized to estimate the unknown pollutant source location and magnitude. However, accurately identifying the characteristics of contaminant sources is a challenging task due to uncertainties in predicting source flux injection, hydro-geological and geo-chemical parameters, and the concentration field measurement. Reactive transport of chemical species in contaminated groundwater systems, especially with multiple species, is a complex and highly non-linear geochemical process. Although sufficient concentration measurement data are essential to accurately identify source characteristics, available data are often sparse and limited in quantity. Therefore, this inverse problem for characterizing unknown groundwater pollution sources is often considered ill-posed, complex, and non-unique. Different methods have been utilized to identify pollution sources; however, the linked simulation-optimization approach is one effective method to obtain acceptable results under uncertainties in complex real-life scenarios. With this approach, the numerical flow and contaminant transport simulation models are externally linked to an optimization algorithm, with the objective of minimizing the difference between measured concentrations and estimated pollutant concentrations at observation locations. Concentration measurement data are very important for accurately estimating pollution source properties; therefore, optimal design of the monitoring network is essential to gather adequate measured data at the desired times and locations. Due to budget and physical restrictions, an efficient and effective approach for groundwater pollutant source characterization is to design an optimal monitoring network, especially when only inadequate and arbitrary concentration measurement data are initially available. In this approach, preliminary concentration observation data are utilized to identify a preliminary source location, magnitude, and duration of source activity, and these results are utilized for monitoring network design. Further, feedback information from the monitoring network is used as input for sequential monitoring network design to improve the identification of unknown source characteristics. To design an effective monitoring network of observation wells, optimization and interpolation techniques are used. A simulation model should be utilized to accurately describe the aquifer properties in terms of hydro-geochemical parameters and boundary conditions. However, the simulation of the transport processes becomes complex when the pollutants are chemically reactive. Three-dimensional transient flow and the reactive contaminant transport process are considered. The proposed methodology uses HYDROGEOCHEM 5.0 (HGCH) as the simulation model for flow and transport processes with multiple chemically reactive species. Adaptive Simulated Annealing (ASA) is used as the optimization algorithm in the linked simulation-optimization methodology to identify the unknown source characteristics.
Therefore, the aim of the present study is to develop a methodology to optimally design an effective monitoring network for pollution source characterization with reactive species in polluted aquifers. The performance of the developed methodology will be evaluated for an illustrative polluted aquifer site, for example, an abandoned mine site in Queensland, Australia.
Keywords: monitoring network design, source characterization, chemical reactive transport process, contaminated mine site
Procedia PDF Downloads 231
34816 A PROMETHEE-BELIEF Approach for Multi-Criteria Decision Making Problems with Incomplete Information
Abstract:
Multi-criteria decision aid methods consider decision problems where numerous alternatives are evaluated on several criteria. These methods are designed to deal with perfect information. However, in practice, it is obvious that this information requirement is too strict. In fact, the imperfect data provided by more or less reliable decision makers usually affect decision results, since any decision is closely linked to the quality and availability of information. In this paper, a PROMETHEE-BELIEF approach is proposed to support multi-criteria decisions based on incomplete information. This approach handles problems with an incomplete decision matrix and unknown weights within the PROMETHEE method. On the basis of belief function theory, our approach first determines the distributions of belief masses based on PROMETHEE's net flows and then calculates the weights. Subsequently, it aggregates the distribution masses associated with each criterion using Murphy's modified combination rule in order to infer a global belief structure. The final action ranking is obtained via the pignistic probability transformation. A real-world case study concerning the location of a treatment center for waste from healthcare activities with infectious risk in central Tunisia is used to illustrate the detailed process of the BELIEF-PROMETHEE approach.
Keywords: belief function theory, incomplete information, multiple criteria analysis, PROMETHEE method
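A minimal sketch of the PROMETHEE II net-flow step the approach above builds on, using the usual (strict) preference function; the evaluation matrix and weights are illustrative placeholders, and the belief-function extension is not shown.
```python
import numpy as np

def promethee_net_flows(evals, weights, maximize):
    # evals: alternatives x criteria; maximize[j]: True if larger is better on criterion j
    n, m = evals.shape
    pi = np.zeros((n, n))                         # aggregated preference indices
    for j in range(m):
        d = evals[:, j][:, None] - evals[:, j][None, :]
        if not maximize[j]:
            d = -d
        pi += weights[j] * (d > 0).astype(float)  # usual criterion: P = 1 if d > 0
    phi_plus = pi.sum(axis=1) / (n - 1)           # leaving flow
    phi_minus = pi.sum(axis=0) / (n - 1)          # entering flow
    return phi_plus - phi_minus                   # net flow used for ranking

evals = np.array([[7.0, 3.0], [5.0, 5.0], [9.0, 8.0]])   # 3 candidate sites x 2 criteria
weights = np.array([0.6, 0.4])
print(promethee_net_flows(evals, weights, maximize=[True, False]))
```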
Procedia PDF Downloads 167
34815 Critical Path Segments Method for Scheduling Technique
Authors: Sherif M. Hafez, Remon F. Aziz, May S. A. Elalim
Abstract:
Project managers today rely on scheduling tools based on the Critical Path Method (CPM) to determine the overall project duration and the activities' float times, which leads to greater efficiency in planning and control of projects. CPM has been useful for scheduling construction projects, but researchers have highlighted a number of serious drawbacks that limit its use as a decision support tool, including its inability to clearly record and represent detailed information. This paper discusses the drawbacks of CPM as a scheduling technique and presents a modified critical path method model called critical path segments (CPS). The CPS scheduling mechanism addresses the problems of CPM in three ways: activity durations are decomposed into separated but connected time segments; all relationships among activities are converted into finish-to-start relationships; and analysis and calculations are made with a forward pass. Sample cases are included to illustrate the shortcomings of CPM, the full CPS analysis and calculations are explained in detail, and it is shown how schedules can be handled better with the CPS technique.
Keywords: construction management, scheduling, critical path method, critical path segments, forward pass, float, project control
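For context, a sketch of the conventional CPM forward/backward pass that CPS modifies; the activity network below is a made-up example with finish-to-start predecessors.
```python
# durations and finish-to-start predecessors (keys listed in topological order)
acts = {"A": (3, []), "B": (4, ["A"]), "C": (2, ["A"]), "D": (5, ["B", "C"])}

es, ef = {}, {}
for a in acts:                                   # forward pass
    dur, preds = acts[a]
    es[a] = max((ef[p] for p in preds), default=0)
    ef[a] = es[a] + dur

horizon = max(ef.values())
ls, lf = {}, {}
for a in reversed(list(acts)):                   # backward pass
    dur, _ = acts[a]
    succs = [s for s, (_, ps) in acts.items() if a in ps]
    lf[a] = min((ls[s] for s in succs), default=horizon)
    ls[a] = lf[a] - dur

for a in acts:
    total_float = ls[a] - es[a]
    print(a, "ES", es[a], "EF", ef[a], "float", total_float,
          "critical" if total_float == 0 else "")
```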
Procedia PDF Downloads 353
34814 Integrated Model for Enhancing Data Security Performance in Cloud Computing
Authors: Amani A. Saad, Ahmed A. El-Farag, El-Sayed A. Helali
Abstract:
Cloud computing has been an important and promising field over the recent decade. Cloud computing allows sharing resources, services, and information among people across the whole world. Although the advantages of using clouds are great, there are many risks in a cloud. Data security is the most important and critical problem of cloud computing. In this research, a new security model for cloud computing is proposed to ensure a secure communication system, hide information from other users, and save the user's time. In the proposed model, the Blowfish encryption algorithm is used for exchanging information or data, and the SHA-2 cryptographic hash algorithm is used for data integrity. For the user authentication process, a username and password are used; the password is protected with SHA-2 as a one-way hash. The proposed system shows an improvement in the processing time of uploading and downloading files on the cloud in secure form.
Keywords: cloud computing, data security, SaaS, PaaS, IaaS, Blowfish
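A minimal sketch of the two primitives named above, Blowfish (here in CBC mode) for the payload and SHA-256 (a SHA-2 variant) for integrity and one-way password storage; it assumes the PyCryptodome package, and key exchange and the cloud upload path are out of scope.
```python
import hashlib
from Crypto.Cipher import Blowfish
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad, unpad

key = get_random_bytes(16)                       # shared secret (assumed pre-exchanged)
data = b"file contents to store in the cloud"

iv = get_random_bytes(Blowfish.block_size)       # 8-byte IV for CBC mode
cipher = Blowfish.new(key, Blowfish.MODE_CBC, iv)
ciphertext = iv + cipher.encrypt(pad(data, Blowfish.block_size))
digest = hashlib.sha256(data).hexdigest()        # integrity check value

# Receiver side: decrypt and verify integrity.
dec = Blowfish.new(key, Blowfish.MODE_CBC, ciphertext[:8])
plain = unpad(dec.decrypt(ciphertext[8:]), Blowfish.block_size)
assert hashlib.sha256(plain).hexdigest() == digest

# One-way (salted) password storage for authentication.
salt = get_random_bytes(16)
stored = hashlib.sha256(salt + b"user-password").hexdigest()
```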
Procedia PDF Downloads 477
34813 Black Masculinity, Media Stereotyping And Its Influence on Policing in the United States: A Functionalist Perspective
Authors: Jack Santiago Monell
Abstract:
In America, misrepresentations of black males have been perpetuated throughout the history of popular culture. Because of these narratives, varying communities have developed biases and stereotypes about what black male masculinity represents and, more importantly, how they respond to it. The researcher explored the perspectives of police officers in three states: Maryland, Pennsylvania, and North Carolina. Because of the nature of police and community relations, and national attention to high-profile cases, having officers provide context on how black males are viewed from their lens was critical, while expanding on theoretical explanations of attitudes towards police confrontations. As one of the objectives was to identify specific themes relevant to why police officers may view African American males differently, and hence respond more aggressively, thematic coding proved to be the most beneficial method of initial analysis. The following nodes (appearance, acting suspicious/troublesome behavior, upbringing about black males, excessive force) were used to analyze the transcripts and discern associations. The data were analyzed through NVivo 11, and several themes emerged from the data received. In analyzing the data, four themes were identified: appearance, acting suspicious/troublesome behavior, upbringing about black males, and excessive force. The data conveyed that continuous stereotypes about African American men will ultimately result in excessive use of force or pervasive shootings, whether the men are armed or unarmed. African American males are consistently targeted because of their racial makeup and appearance over any other probable circumstances. As long as racial bias and stereotypical practices continue in policing, African American males will endlessly be unjustly targeted and, at times, be the victims of violent encounters with police officers in the United States.
Keywords: African American males, police perceptions, masculinity, popular culture
Procedia PDF Downloads 113
34812 The Pressure Effect and First-Principles Study of Strontium Chalcogenides SrS
Authors: Benallou Yassine, Amara Kadda, Bouazza Boubakar, Soudini Belabbes, Arbouche Omar, M. Zemouli
Abstract:
The study of the pressure effect on materials, their functionality, and their properties is very important, insofar as it provides the opportunity to identify other applications, such as the optical properties of alkaline earth chalcogenides like SrS. Here we present first-principles calculations performed using the full-potential linearized augmented plane wave method (FP-LAPW) within the Generalized Gradient Approximation developed by Perdew–Burke–Ernzerhof for solids (PBEsol). The calculated structural parameters, such as the lattice parameters, the bulk modulus B, and its pressure derivative B', are in reasonable agreement with the available experimental and theoretical data. In addition, elastic properties such as the elastic constants (C11, C12, and C44), the shear modulus G, the Young's modulus E, the Poisson's ratio ν, and the B/G ratio are also given. The treatment of exchange and correlation effects for the electronic properties was done with the Tran-Blaha modified Becke-Johnson (TB-mBJ) potential. The pressure effect on the electronic properties was visualized by calculating the variation of the gap as a function of pressure. The obtained results are compared to available experimental data and to other theoretical calculations.
Keywords: SrS, GGA-PBEsol+TB-mBJ, density functional, Perdew–Burke–Ernzerhof, FP-LAPW, pressure effect
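As a worked illustration of how B, G, E, ν, and B/G follow from the cubic elastic constants reported in studies like this one, a sketch of the standard Voigt-Reuss-Hill relations; the numerical inputs below are placeholders, not the SrS values of the paper.
```python
def cubic_moduli(c11, c12, c44):
    b = (c11 + 2 * c12) / 3                                      # bulk modulus (Voigt = Reuss for cubic)
    g_v = (c11 - c12 + 3 * c44) / 5                              # Voigt shear modulus
    g_r = 5 * (c11 - c12) * c44 / (4 * c44 + 3 * (c11 - c12))    # Reuss shear modulus
    g = (g_v + g_r) / 2                                          # Hill average
    e = 9 * b * g / (3 * b + g)                                  # Young's modulus
    nu = (3 * b - 2 * g) / (2 * (3 * b + g))                     # Poisson's ratio
    return b, g, e, nu, b / g                                    # B/G (Pugh) ratio

print(cubic_moduli(c11=120.0, c12=20.0, c44=30.0))               # GPa, illustrative only
```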
Procedia PDF Downloads 569
34811 Performance Comparison and Visualization of COMSOL Multiphysics, Matlab, and Fortran for Predicting the Reservoir Pressure on Oil Production in a Multiple Leases Reservoir with Boundary Element Method
Authors: N. Alias, W. Z. W. Muhammad, M. N. M. Ibrahim, M. Mohamed, H. F. S. Saipol, U. N. Z. Ariffin, N. A. Zakaria, M. S. Z. Suardi
Abstract:
This paper presents a performance comparison of computational software for solving problems with the boundary element method (BEM). The BEM formulation is a numerical technique with high potential for solving advanced mathematical models that predict the production of oil wells in arbitrarily shaped, multiple-lease reservoirs. The limited data available for validating that a program meets the accuracy of the mathematical model is the research motivation of this paper. Thus, based on this limitation, three steps are involved in validating the accuracy of the oil production simulation process. In the first step, the mathematical model, a partial differential equation (PDE) of Poisson elliptic type, is identified to perform the BEM discretization. In the second step, the 2D BEM discretization is implemented and simulated using the COMSOL Multiphysics and MATLAB programming environments. In the last step, the numerical performance indicators for both programming languages are analyzed against a validation implementation in Fortran. The performance comparisons of the numerical analysis are investigated in terms of percentage error, comparison graphs, and 2D visualization of the pressure on oil production of the multiple-lease reservoir. According to the performance comparison, structured programming in Fortran is the alternative software for implementing an accurate numerical simulation of the BEM. In conclusion, the requirements of a high-level language for numerical computation and of the numerical performance evaluation are satisfied, showing that Fortran is well suited for capturing the visualization of the production of oil wells in arbitrarily shaped reservoirs.
Keywords: performance comparison, 2D visualization, COMSOL Multiphysics, MATLAB, Fortran, modelling and simulation, boundary element method, reservoir pressure
Procedia PDF Downloads 491
34810 The Method for Synthesis of Chromium Oxide Nano Particles as Increasing Color Intensity on Industrial Ceramics
Authors: Bagher Aziz Kalantari, Javad Rafiei, Mohamad Reza Talei Bavil Olyai
Abstract:
Disclosed is a method of preparing pigmentary chromium oxide nanoparticles having a 50-percent (median) particle size of less than about 100 nm. According to the disclosed method, a substantially dry solid composition of potassium dichromate and activated carbon is heated in a CO2 atmosphere to a temperature of about 600 °C for 1 h. Thereafter, the solid Cr2O3 product is washed twice with distilled water. The other aim of this study is to assess both the colouring performance and the potential of nano-pigments in ceramic tile decoration. The rationale consists in applying nano-pigments in several ceramics, including a comparison of colour performance with conventional micro-pigments.
Keywords: green chromium oxide, nanoparticles, colour performance, particle size
Procedia PDF Downloads 335
34809 Two-Dimensional Analysis and Numerical Simulation of the Navier-Stokes Equations for Principles of Turbulence around Isothermal Bodies Immersed in Incompressible Newtonian Fluids
Authors: Romulo D. C. Santos, Silvio M. A. Gama, Ramiro G. R. Camacho
Abstract:
In this paper, the thermo-fluid dynamics of mixed convection (natural and forced convection) and the principles of turbulent flow around complex geometries have been studied. In these applications, it was necessary to analyze the interaction between the flow field and a heated immersed body with constant surface temperature. This paper presents a study of Newtonian incompressible two-dimensional flow around an isothermal geometry using the immersed boundary method (IBM) with the virtual physical model (VPM). The numerical code proposed for all simulations handles the calculation of temperature with Dirichlet boundary conditions. Important dimensionless numbers are calculated, such as the Strouhal number, obtained using the Fast Fourier Transform (FFT), the Nusselt number, drag and lift coefficients, velocity, and pressure. Streamlines and isothermal lines are presented for each simulation, showing the flow dynamics and patterns. The Navier-Stokes and energy equations for mixed convection were discretized using the finite difference method in space, and second-order Adams-Bashforth and fourth-order Runge-Kutta methods in time, with the fractional step method used to couple the calculation of pressure, velocity, and temperature. This work used the Smagorinsky and Spalart-Allmaras models for the simulation of turbulence. The first model is based on the local equilibrium hypothesis for small scales and the Boussinesq hypothesis, such that the energy injected into the turbulence spectrum equals the energy dissipated by the convective effects. The Spalart-Allmaras model uses only one transport equation for the turbulent viscosity. The results were compared with numerical data, validating the heat-transfer treatment together with the turbulence models. The IBM/VPM is a powerful tool to simulate flow around complex geometries. The results showed good numerical convergence in relation to the references adopted.
Keywords: immersed boundary method, mixed convection, turbulence methods, virtual physical model
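A small sketch of the Smagorinsky closure mentioned above for a 2D velocity field on a uniform grid, using nu_t = (Cs·Delta)^2·|S| with |S| = sqrt(2·Sij·Sij); the Smagorinsky constant and the grid are illustrative values, not the paper's settings.
```python
import numpy as np

def smagorinsky_nut(u, v, dx, dy, cs=0.17):
    dudx, dudy = np.gradient(u, dx, dy)
    dvdx, dvdy = np.gradient(v, dx, dy)
    s11, s22 = dudx, dvdy
    s12 = 0.5 * (dudy + dvdx)
    s_mag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))  # |S|
    delta = np.sqrt(dx * dy)                                  # filter width
    return (cs * delta) ** 2 * s_mag                          # eddy viscosity field

u = np.random.rand(64, 64)   # stand-in velocity components
v = np.random.rand(64, 64)
print(smagorinsky_nut(u, v, dx=0.01, dy=0.01).mean())
```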
Procedia PDF Downloads 115
34808 Exploring Consumers' Intention to Adopt Mobile Payment System in Ghana
Authors: Y. Kong, I. Masud, M. H. Nyaso
Abstract:
This paper examines consumers' intention to adopt and use mobile payment methods in Ghana. A conceptual framework was adopted from the extant literature using the Technology Acceptance Model (TAM) and the Theory of Reasoned Action (TRA) as the theoretical bases. Data for the study were obtained from a sample of 425 respondents through online and direct surveys using a structured questionnaire. Structural equation modeling was used to analyze the data through SPSS v.22 and SmartPLS v.3. Findings with regard to the determinants of mobile payment system adoption indicate that subjective norm, perceived ease of use, attitude, and perceived usefulness play active roles in consumers' decisions to adopt a mobile payment system in Ghana. Also, perceived usefulness and perceived ease of use have a significant and positive influence on consumers' attitudes towards mobile payment adoption in Ghana. Further, subjective norm was found to influence the perceived usefulness and perceived ease of use of mobile payment adoption in Ghana. The study contributes to the literature on mobile payment systems from a developing-country context and proffers some recommendations.
Keywords: consumer behaviour, mobile payment, subjective norm, theory of planned behavior
Procedia PDF Downloads 153
34807 The Result of Suggestion for Low Energy Diet (1,000-1,200 kcal) in Obese Women to the Effect on Body Weight, Waist Circumference, and BMI
Authors: S. Kumchoo
Abstract:
This experiment examines the effect of recommending a low-energy diet (1,000-1,200 kcal) to obese women on body weight, waist circumference, and body mass index (BMI). Quasi-experimental research with a one-group pretest-posttest design was used for this study. The aim was to reduce body weight, waist circumference, and BMI by using a low-energy diet (1,000-1,200 kcal) in obese women. Fifteen obese women with a BMI ≥ 30 followed the low-energy diet (1,000-1,200 kcal) for 2 weeks. Data were collected before and after the intervention. The results showed that, on average, body weight decreased by 3.4 kilograms, waist circumference decreased by 6.1 centimeters, and BMI decreased by 1.3 kg/m² compared with the values before the experiment. After this study, the volunteers were healthier and better able to choose suitable food for themselves. The data from this study can be developed further for future research.
Keywords: body weight, waist circumference, low energy diet, BMI
Procedia PDF Downloads 388
34806 Model-Driven and Data-Driven Approaches for Crop Yield Prediction: Analysis and Comparison
Authors: Xiangtuo Chen, Paul-Henry Cournéde
Abstract:
Crop yield prediction is a paramount issue in agriculture. The main idea of this paper is to find an efficient way to predict the yield of corn based on meteorological records. The prediction models used in this paper can be classified into model-driven approaches and data-driven approaches, according to their different modeling methodologies. The model-driven approaches are based on crop mechanistic modeling. They describe crop growth in interaction with the environment as dynamical systems. However, the calibration of the dynamical system is difficult, because it turns out to be a multidimensional non-convex optimization problem. An original contribution of this paper is to propose a statistical methodology, Multi-Scenarios Parameters Estimation (MSPE), for the parametrization of potentially complex mechanistic models from a new type of dataset (climatic data and final yield in many situations). It is tested with CORNFLO, a crop model for maize growth. On the other hand, the data-driven approach for yield prediction is free of the complex biophysical process, but it has some strict requirements on the dataset. A second contribution of the paper is the comparison of these model-driven methods with classical data-driven methods. For this purpose, we consider two classes of regression methods: methods derived from linear regression (Ridge and Lasso Regression, Principal Components Regression, or Partial Least Squares Regression) and machine learning methods (Random Forest, k-Nearest Neighbor, Artificial Neural Network, and SVM regression). The dataset consists of 720 records of corn yield at county scale provided by the United States Department of Agriculture (USDA) and the associated climatic data. A 5-fold cross-validation process and two accuracy metrics, the root mean square error of prediction (RMSEP) and the mean absolute error of prediction (MAEP), were used to evaluate the crop prediction capacity. The results show that among the data-driven approaches, Random Forest is the most robust and generally achieves the best prediction error (MAEP 4.27%). It also outperforms our model-driven approach (MAEP 6.11%). However, the method to calibrate the mechanistic model from easily accessible datasets offers several side perspectives. The mechanistic model can potentially help to highlight the stresses suffered by the crop or to identify the biological parameters of interest for breeding purposes. For this reason, an interesting perspective is to combine these two types of approaches.
Keywords: crop yield prediction, crop model, sensitivity analysis, parameter estimation, particle swarm optimization, random forest
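A sketch of the data-driven branch described above: a Random Forest regressor with 5-fold cross-validation reporting RMSEP and MAEP; the file and column names are placeholders for the USDA county-scale yield and climate records, and expressing the errors as a percentage of mean yield is an assumption.
```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

df = pd.read_csv("county_yield_climate.csv")      # hypothetical data set
X = df.drop(columns=["yield"]).values             # climatic predictors
y = df["yield"].values

pred = cross_val_predict(RandomForestRegressor(n_estimators=500, random_state=0),
                         X, y, cv=5)
rmsep = np.sqrt(np.mean((y - pred) ** 2)) / y.mean() * 100   # % of mean yield
maep = np.mean(np.abs(y - pred)) / y.mean() * 100
print(f"RMSEP {rmsep:.2f}%  MAEP {maep:.2f}%")
```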
Procedia PDF Downloads 231
34805 Dynamic Modeling of Orthotropic Cracked Materials by X-FEM
Authors: S. Houcine Habib, B. Elkhalil Hachi, Mohamed Guesmi, Mohamed Haboussi
Abstract:
In this paper, the dynamic fracture behavior of cracked orthotropic structures is modeled using the extended finite element method (X-FEM). In this approach, the finite element model is first created and then enriched with special orthotropic crack-tip enrichments and Heaviside functions in the framework of the partition of unity. The mixed-mode stress intensity factor (SIF) is computed using the interaction integral technique based on the J-integral in order to predict the cracking behavior of the structure. These procedures are implemented in an in-house software platform. To assess the accuracy of the developed code, results obtained by the proposed method are compared with those in the literature.
Keywords: X-FEM, composites, stress intensity factor, crack, dynamic orthotropic behavior
Procedia PDF Downloads 570
34804 Low-Complexity Multiplication Using Complement and Signed-Digit Recoding Methods
Authors: Te-Jen Chang, I-Hui Pan, Ping-Sheng Huang, Shan-Jen Cheng
Abstract:
In this paper, a fast multiplication method utilizing complement representation and the canonical recoding technique is proposed. By applying complements and canonical recoding, the number of partial products can be reduced. Based on these techniques, we propose an algorithm that provides an efficient multiplication method. On average, our proposed algorithm reduces the number of k-bit additions from (0.25k + logk/k + 2.5) to (k/6 + logk/k + 2.5), where k is the bit-length of the multiplicand A and multiplier B. We can therefore efficiently speed up the overall performance of the multiplication. Moreover, if we use the new proposals to compute common-multiplicand multiplication, the computational complexity can be reduced from (0.5k + 2logk/k + 5) to (k/3 + 2logk/k + 5) k-bit additions.
Keywords: algorithm design, complexity analysis, canonical recoding, public key cryptography, common-multiplicand multiplication
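A sketch of canonical (non-adjacent form) recoding of a multiplier, the step such methods use to cut the number of nonzero partial products; this is the textbook NAF algorithm, not the authors' full multiplication scheme.
```python
def naf(n: int) -> list:
    """Return signed digits (least significant first), each in {-1, 0, 1},
    with no two adjacent nonzero digits."""
    digits = []
    while n > 0:
        if n & 1:
            d = 2 - (n % 4)      # +1 if n = 1 (mod 4), -1 if n = 3 (mod 4)
            n -= d
        else:
            d = 0
        digits.append(d)
        n >>= 1
    return digits

b = 0b101110111                   # example multiplier
rec = naf(b)
print(rec, sum(d * (1 << i) for i, d in enumerate(rec)) == b)  # recoding is exact
nonzero = sum(1 for d in rec if d)                             # partial products needed
print("nonzero digits:", nonzero, "vs", bin(b).count("1"), "in plain binary")
```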
Procedia PDF Downloads 435
34803 Formation Flying Design Applied for an Aurora Borealis Monitoring Mission
Authors: Thais Cardoso Franco, Caio Nahuel Sousa Fagonde, Willer Gomes dos Santos
Abstract:
The Aurora Borealis is an optical phenomenon consisting of luminous events observed in the night skies of the polar regions. It results from disturbances in the magnetosphere caused by the impact of solar wind particles on the Earth's upper atmosphere, channeled by the Earth's magnetic field, which excites atmospheric molecules so that they emit electromagnetic radiation, leading to the display of lights in the sky. However, different implications of this phenomenon are still under study: high-intensity auroras are often accompanied by geomagnetic storms that cause blackouts on Earth and impair the transmission of signals from Global Navigation Satellite Systems (GNSS). Auroras are also known to occur on other planets and exoplanets, so the activity is an indication of active space weather conditions that can aid in learning about the planetary environment. In order to improve understanding of the phenomenon, this research aims to design a satellite formation flying solution for collecting and transmitting data for monitoring the aurora borealis in the northern hemisphere, an approach that allows studying the event with multipoint data collection in a reduced time interval, so as to allow analysis from the beginning of the phenomenon until its decline. To this end, the ideal number of satellites, the spacing between them, and the ideal topology to be used will be analyzed. From an orbital study, approaches with different altitudes, eccentricities, and inclinations will also be considered. Given that controllers tend to fail at large relative distances between satellites in formation, a study of the efficiency of nonlinear adaptive control methods, from the point of view of position maintenance and propellant consumption, will be carried out. The main orbital perturbations considered in the simulation are: the non-homogeneity of the Earth's gravitational field, atmospheric drag, the gravitational action of the Sun and the Moon, accelerations due to solar radiation pressure, and relativistic effects.
Keywords: formation flying, nonlinear adaptive control method, aurora borealis, adaptive SDRE method
Procedia PDF Downloads 39
34802 Derivation of Runoff Susceptibility Map Using Slope-Adjusted SCS-CN in a Tropical River Basin
Authors: Abolghasem Akbari
Abstract:
The Natural Resources Conservation Service Curve Number (NRCS-CN) method is widely used for predicting direct runoff from rainfall. It employs hydrologic soil group and land use information along with antecedent soil moisture conditions to derive the NRCS-CN. This method has been well documented and is available in popular rainfall-runoff models such as HEC-HMS, SWAT, SWMM, and many more. Despite all the benefits and advantages of this well-documented and easy-to-use method, it does not take into account the effect of terrain slope and drainage area. This study aimed to first investigate the effect of slope on CN and then to generate a slope-adjusted runoff potential map for the Kuantan River Basin, Malaysia. The Huang method was used to adjust the CN values provided in the National Engineering Handbook, and the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) Global Digital Elevation Model (GDEM) version 2 was used to derive a slope map with a spatial resolution of 30 m for the Kuantan River Basin (KRB). The study significantly enhanced the application of GIS tools and recent advances in earth observation technology to analyze the hydrological process.
Keywords: Kuantan, ASTER-GDEM, SCS-CN, runoff
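A sketch of the standard SCS-CN runoff relation (in mm) that the method above builds on. The slope-adjustment factor shown is the commonly cited Huang-type form and should be treated as an assumption here; the exact coefficients used in the paper may differ.
```python
def slope_adjusted_cn(cn2, slope):
    # slope as a fraction (m/m); assumed Huang-type adjustment, coefficients illustrative
    return cn2 * (322.79 + 15.63 * slope) / (slope + 323.52)

def scs_runoff(p_mm, cn):
    s = 25400.0 / cn - 254.0           # potential maximum retention (mm)
    ia = 0.2 * s                       # initial abstraction
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

cn = slope_adjusted_cn(cn2=75.0, slope=0.15)   # illustrative values
print(round(cn, 1), round(scs_runoff(p_mm=60.0, cn=cn), 1))
```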
Procedia PDF Downloads 287
34801 Challenges in Multi-Cloud Storage Systems for Mobile Devices
Authors: Rajeev Kumar Bedi, Jaswinder Singh, Sunil Kumar Gupta
Abstract:
The demand for cloud storage is increasing because users want continuous access to their data. Cloud storage has revolutionized the way users access their data. Many cloud storage service providers are available, such as Dropbox and G Drive, offering limited free storage; for extra storage, users have to pay, which becomes a burden on them. To avoid the issue of limited free storage, the concept of multi-cloud storage was introduced. In this paper, we discuss the limitations of existing multi-cloud storage systems for mobile devices.
Keywords: cloud storage, data privacy, data security, multi cloud storage, mobile devices
Procedia PDF Downloads 699