Search results for: cluster model approach
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 26805

24315 Analysis of the Contribution of Drude and Brendel Model Terms to the Dielectric Function

Authors: Christopher Mkirema Maghanga, Maurice Mghendi Mwamburi

Abstract:

Parametric modeling provides a means to understand the properties of materials more deeply. The Drude, Brendel, Lorentz and OJL models incorporated in the SCOUT® software are among those used to study dielectric films. In our work, we utilized the Brendel and Drude models to extract the optical constants from spectroscopic data of fabricated undoped and niobium-doped titanium oxide thin films. The individual contributions of the two models were studied to establish how they influence the dielectric function, and the effect of dopants on their influence was also analyzed. For the undoped films, results indicate minimal contribution from the Drude term due to the dielectric nature of the films. However, as doping levels increase, the rise in the concentration of free electrons favors the use of the Drude model. The Brendel model was confirmed to work well for dielectric films, the undoped titanium oxide films in our case.
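For illustration only, the sketch below shows how a Drude term and a Gaussian-broadened oscillator (the essence of the Brendel term) can be summed numerically into a model dielectric function. All parameter values are hypothetical placeholders rather than fitted TiO₂ values, and the actual fitting in the work above is performed in SCOUT®.

```python
import numpy as np

# Hypothetical model parameters (eV); real values would come from fitting
# spectroscopic data in SCOUT or a similar package.
eps_inf = 3.0                                   # high-frequency dielectric constant
wp_drude, gamma_d = 1.2, 0.15                   # Drude plasma frequency and damping
w0, wp_b, gamma_b, sigma = 4.0, 6.0, 0.3, 0.4   # Brendel-type oscillator

def drude(w):
    """Free-carrier (Drude) contribution to the dielectric function."""
    return -wp_drude**2 / (w**2 + 1j * gamma_d * w)

def brendel(w, n_points=2001):
    """Brendel-type term: a Lorentz oscillator whose resonance frequency is
    Gaussian-distributed around w0 with standard deviation sigma."""
    x = np.linspace(w0 - 5 * sigma, w0 + 5 * sigma, n_points)
    gauss = np.exp(-(x - w0)**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
    lorentz = wp_b**2 / (x[:, None]**2 - w**2 - 1j * gamma_b * w)
    return np.trapz(gauss[:, None] * lorentz, x, axis=0)

w = np.linspace(0.5, 6.5, 500)          # photon energy grid (eV)
eps = eps_inf + drude(w) + brendel(w)   # total dielectric function
print(eps[:3])
```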

Keywords: modeling, Brendel model, optical constants, titanium oxide, Drude Model

Procedia PDF Downloads 166
24314 A Multicriteria Mathematical Programming Model for Farm Planning in Greece

Authors: Basil Manos, Parthena Chatzinikolaou, Fedra Kiomourtzi

Abstract:

This paper presents a multicriteria mathematical programming model for farm planning and sustainable optimization of agricultural production. The model can be used as a tool for the analysis and simulation of agricultural production plans, as well as for the study of the impacts of various Common Agricultural Policy measures in the member states of the European Union. The model can achieve the optimum production plan of a farm or an agricultural region by combining in one utility function different conflicting criteria, such as the maximization of gross margin and the minimization of fertilizer use, under a set of constraints for land, labor, available capital, the Common Agricultural Policy, etc. The proposed model was applied to the region of Larisa in central Greece. The optimum production plan achieves a greater gross return, lower fertilizer use, and lower irrigation water use than the existing production plan.
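A minimal sketch of this kind of weighted single-utility optimization is given below, using SciPy's linear programming solver. The crops, coefficients, weights and resource limits are invented for illustration and are not the Larisa data.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical data for three crops (per hectare): gross margin (EUR),
# fertilizer use (kg), labor (h) and capital (EUR).
gross_margin = np.array([900.0, 650.0, 500.0])
fertilizer   = np.array([250.0, 120.0, 60.0])
labor        = np.array([35.0, 20.0, 15.0])
capital      = np.array([400.0, 300.0, 150.0])

# Single utility function: maximize w1*margin - w2*fertilizer (linprog minimizes).
w1, w2 = 1.0, 2.0
c = -(w1 * gross_margin - w2 * fertilizer)

# Constraints: total land (ha), labor (h) and capital (EUR) availability.
A_ub = np.vstack([np.ones(3), labor, capital])
b_ub = np.array([100.0, 2500.0, 30000.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3, method="highs")
print("area per crop (ha):", np.round(res.x, 1))
print("gross margin:", gross_margin @ res.x, "fertilizer:", fertilizer @ res.x)
```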

Keywords: sustainable optimization, multicriteria analysis, agricultural production, farm planning

Procedia PDF Downloads 589
24313 Software Component Identification from Its Object-Oriented Code: Graph Metrics Based Approach

Authors: Manel Brichni, Abdelhak-Djamel Seriai

Abstract:

Systems are increasingly complex, and an abstract view of a system can simplify its development. To address this problem, we propose a method to decompose systems into subsystems while reducing their coupling; these subsystems represent components. Starting from an existing object-oriented system, the main idea of our approach is to model all entities of the object-oriented source code as graphs. Such a model is easy to handle, so we can apply restructuring algorithms based on graph metrics. The particularity of our approach consists in integrating, in addition to standard metrics such as coupling and cohesion, graph metrics that give more precision during component identification. To treat this problem, we relied on the ROMANTIC approach, which proposes component-based software architecture recovery from an object-oriented system.
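As a toy illustration of the kind of graph measurements involved (not the ROMANTIC algorithm itself), the snippet below models classes and their dependencies with networkx and evaluates the cohesion and coupling of two hypothetical candidate components.

```python
import networkx as nx

# Hypothetical class-dependency graph of an object-oriented system:
# nodes are classes, edges are call/inheritance/field dependencies.
G = nx.Graph()
G.add_edges_from([("A", "B"), ("B", "C"), ("A", "C"),   # candidate component 1
                  ("D", "E"), ("E", "F"),               # candidate component 2
                  ("C", "D")])                          # inter-component link

def cohesion(g, component):
    """Density of dependencies inside a candidate component."""
    nodes = set(component)
    internal = sum(1 for u, v in g.edges() if u in nodes and v in nodes)
    possible = len(nodes) * (len(nodes) - 1) / 2
    return internal / possible if possible else 0.0

def coupling(g, component):
    """Number of dependencies crossing the component boundary."""
    nodes = set(component)
    return sum(1 for u, v in g.edges() if (u in nodes) != (v in nodes))

for comp in [{"A", "B", "C"}, {"D", "E", "F"}]:
    print(sorted(comp), "cohesion:", round(cohesion(G, comp), 2),
          "coupling:", coupling(G, comp))
```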

Keywords: software reengineering, software component and interfaces, metrics, graphs

Procedia PDF Downloads 483
24312 A Comparative Analysis of E-Government Quality Models

Authors: Abdoullah Fath-Allah, Laila Cheikhi, Rafa E. Al-Qutaish, Ali Idri

Abstract:

Many quality models have been used to measure e-government portal quality. However, the absence of an international consensus on e-government portal quality models results in many differences in terms of quality attributes and measures. The aim of this paper is to compare and analyze the existing e-government quality models proposed in the literature (those that are based on ISO standards and those that are not) in order to propose guidelines for building a good and useful e-government portal quality model. Our findings show that there is no e-government portal quality model based on the new international standard ISO 25010. Besides that, the quality models are not based on a best practice model that would allow agencies to both measure e-government portal quality and identify missing best practices for those portals.

Keywords: e-government, portal, best practices, quality model, ISO, standard, ISO 25010, ISO 9126

Procedia PDF Downloads 539
24311 Predicting Options Prices Using Machine Learning

Authors: Krishang Surapaneni

Abstract:

The goal of this project is to determine how to predict important aspects of options, including the ask price. We want to compare different machine learning models to learn the best model and the best hyperparameters for that model for this purpose and data set. Option pricing is a relatively new field, and it can be very complicated and intimidating, especially to inexperienced people, so we want to create a machine learning model that can predict important aspects of a stock option, which can aid future research. We tested multiple models and experimented with hyperparameter tuning, trying to find some of the best parameters for a machine learning model. We tested three different models: a random forest regressor, a linear regressor, and an MLP (multi-layer perceptron) regressor. The most important feature in this experiment is the ask price; this is what we were trying to predict. In the field of stock price prediction there is a large potential for error, so we cannot judge the accuracy of the models on whether they predict the price perfectly. For this reason, we determined the accuracy of each model by finding the average percentage difference between the predicted and actual values, comparing the actual results in the testing data with the predictions made by the models. The linear regression model performed worst, with an average percentage error of 17.46%. The MLP regressor had an average percentage error of 11.45%, and the random forest regressor had an average percentage error of 7.42%.
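A minimal sketch of such a comparison is given below using scikit-learn regressors and the average-percentage-error metric described above. The dataset, feature names and hyperparameters are placeholders, not those used in the study.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 500

# Hypothetical option chain; the real study uses recorded market data.
X = pd.DataFrame({
    "strike": rng.uniform(80, 120, n),
    "underlying": rng.uniform(80, 120, n),
    "days_to_expiry": rng.integers(10, 200, n),
    "implied_vol": rng.uniform(0.2, 0.6, n),
})
# Rough stand-in for the ask price: intrinsic value plus a time/volatility premium.
y = (np.maximum(X["underlying"] - X["strike"], 0)
     + X["implied_vol"] * np.sqrt(X["days_to_expiry"]) + rng.normal(0, 0.2, n))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
models = {
    "linear regression": LinearRegression(),
    "MLP regressor": make_pipeline(StandardScaler(),
                                   MLPRegressor((64, 32), max_iter=3000)),
    "random forest": RandomForestRegressor(n_estimators=200, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    # Average percentage difference between predicted and actual ask prices.
    ape = np.mean(np.abs(pred - y_te) / np.abs(y_te)) * 100
    print(f"{name}: average percentage error = {ape:.2f}%")
```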

Keywords: finance, linear regression model, machine learning model, neural network, stock price

Procedia PDF Downloads 64
24310 Experimental and Numerical Analysis of Mustafa Paşa Mosque in Skopje

Authors: Ozden Saygili, Eser Cakti

Abstract:

The masonry building stock in Istanbul and in other cities of Turkey is exposed to significant earthquake hazard, and determining the safety of masonry structures against earthquakes is a complex challenge. This study deals with experimental tests and non-linear dynamic analysis of masonry structures modeled through the discrete element method. A 1:10 scale model of Mustafa Paşa Mosque was constructed, and data were obtained from the sensors on it during its testing on the shake table. The results were used in the calibration and validation of the numerical model created on the basis of the 1:10 scale model built for shake table testing. A 3D distinct element model was developed that represents the linear and nonlinear behavior of the shake table model as closely as possible during the experimental tests. Results of the numerical analyses were compared with those from the experimental program and discussed.

Keywords: dynamic analysis, non-linear modeling, shake table tests, masonry

Procedia PDF Downloads 405
24309 Study of Proton-9,11Li Elastic Scattering at 60~75 MeV/Nucleon

Authors: Arafa A. Alholaisi, Jamal H. Madani, M. A. Alvi

Abstract:

The radial form of the nuclear matter distribution, the charge distribution and the shape of nuclei are essential properties of nuclei and hence attract great attention in several areas of nuclear physics research. More than three decades of experiments have employed leptonic probes (such as muons, electrons, etc.) to explore nuclear charge distributions, whereas hadronic probes (for example alpha particles, protons, etc.) have been used to investigate nuclear matter distributions. In this paper, p-9,11Li elastic scattering differential cross sections in the energy range 60 to 75 MeV/nucleon have been studied by means of the Coulomb-modified Glauber scattering formalism. By applying the semi-phenomenological Bhagwat-Gambhir-Patil (BGP) nuclear density for the loosely bound, neutron-rich 11Li nucleus, the estimated matter radius is found to be 3.446 fm, which is quite large compared to the known experimental value of 3.12 fm. The results of a microscopic optical model calculation based on the Bethe-Brueckner-Hartree-Fock (BHF) formalism have also been compared. It should be noted that in most of the phenomenological density models used to reproduce the p-11Li differential elastic scattering cross section data, the calculated matter radius lies between 2.964 and 3.55 fm. The calculated results with the phenomenological BGP model density and with the nucleon density calculated in the relativistic mean-field (RMF) approach reproduce the p-9Li and p-11Li experimental data more closely than Gaussian-Gaussian or Gaussian-oscillator densities at all energies under consideration. In the approach described here, no free or adjustable parameter has been employed to reproduce the elastic scattering data, as against the well-known optical-model-based studies that involve at least four to six adjustable parameters to match the experimental data. The calculated reaction cross sections σR for p-11Li at these energies are quite large compared to the estimates reported in earlier works, though so far no experimental studies have been performed to measure them.
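For readers unfamiliar with the formalism, a standard optical-limit form of the Glauber amplitude for proton-nucleus elastic scattering is sketched below (zero-range NN profile assumed); the authors' specific Coulomb-modified expressions and BGP density parameters are not reproduced here.

```latex
% Sketch of the optical-limit Glauber form for proton-nucleus elastic scattering.
\begin{align}
  \frac{d\sigma}{d\Omega} &= \bigl| F_{C}(q) + F_{N}(q) \bigr|^{2}, \\
  F_{N}(q) &= \frac{ik}{2\pi}\int d^{2}b\, e^{i\mathbf{q}\cdot\mathbf{b}}
              \Bigl[\, 1 - e^{i\chi_{N}(b)} \Bigr], \\
  i\chi_{N}(b) &\simeq -\frac{\bar{\sigma}_{pN}}{2}\,\bigl(1 - i\bar{\alpha}_{pN}\bigr)\,
              T_{A}(b), \qquad
  T_{A}(b) = \int_{-\infty}^{\infty}\rho_{A}\!\Bigl(\sqrt{b^{2}+z^{2}}\Bigr)\,dz ,
\end{align}
```

where the average proton-nucleon total cross section and the ratio of the real to imaginary parts of the forward pN amplitude enter through the eikonal phase, and ρA is the nuclear matter density (e.g., the BGP form for 11Li).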

Keywords: Bhagwat-Gambhir-Patil density, Coulomb modified Glauber model, halo nucleus, optical limit approximation

Procedia PDF Downloads 146
24308 Tolerating Input Faults in Asynchronous Sequential Machines

Authors: Jung-Min Yang

Abstract:

A method of tolerating input faults for input/state asynchronous sequential machines is proposed. A corrective controller is placed in front of the considered asynchronous machine to realize model matching with a reference model. The value of the external input transmitted to the closed-loop system may be changed by a fault. We address the existence condition for a controller that can counteract the adverse effects of any input fault while maintaining the objective of model matching. A design procedure for constructing the controller is outlined. The proposed reachability condition for the controller design is validated in an illustrative example.

Keywords: asynchronous sequential machines, corrective control, fault tolerance, input faults, model matching

Procedia PDF Downloads 407
24307 The Free Vibration Analysis of Honeycomb Sandwich Beam using 3D and Continuum Model

Authors: Gürkan Şakar, Fevzi Çakmak Bolat

Abstract:

In this study, free vibration analysis of aluminum honeycomb sandwich structures was carried out experimentally and numerically. The natural frequencies and mode shapes of sandwich structures fabricated with different configurations were determined for the clamped-free boundary condition. The effects of lower and upper face sheet thickness, core material thickness, cell diameter, cell angle and foil thickness on the vibration characteristics were examined. The numerical studies were performed with the ANSYS package, with the sandwich structures modeled in ANSYS using the continuum model. The numerical results were then compared with the experimental findings.

Keywords: sandwich structure, free vibration, numeric analysis, 3D model, continuum model

Procedia PDF Downloads 404
24306 Transdisciplinary Pedagogy: An Arts-Integrated Approach to Promote Authentic Science, Technology, Engineering, Arts, and Mathematics Education in Initial Teacher Education

Authors: Anne Marie Morrin

Abstract:

This paper will focus on the design, delivery and assessment of a transdisciplinary STEAM (Science, Technology, Engineering, Arts, and Mathematics) education initiative in a college of education in Ireland. The project explores a transdisciplinary approach to supporting STEAM education in which the concepts, methodologies and assessments employed derive from visual art sessions within initial teacher education. The research will demonstrate that the STEAM education approach is effective when visual art concepts and methods are placed at the core of the teaching and learning experience. Within this study, emphasis is placed on authentic collaboration and transdisciplinary pedagogical approaches to the STEAM subjects. The partners included a combination of teaching expertise in STEM and visual arts education, artists, in-service and pre-service teachers, and children. The inclusion of all these stakeholders moves towards a more authentic approach in which transdisciplinary practice is at the core of the teaching and learning. Qualitative data were collected using a combination of questionnaires (focused and open-ended questions) and focus groups. In addition, data were collected through video diaries in which students reflected on their visual journals and transdisciplinary practice, giving rich insight into participants' experiences and opinions of their learning. It was found that an effective program of STEAM education integration was informed by co-teaching (continuous professional development), which involved a commitment to adaptable and flexible approaches to teaching, learning, and assessment, as well as continuous reflection-in-action by all participants. The delivery of a transdisciplinary model of STEAM education was devised to reconceptualise how individual subject areas can develop essential skills and tackle critical issues (such as self-care and climate change) through data visualisation and technology. The success of the project can be attributed to collaboration that was inclusive and flexible, and to the willingness of the various stakeholders to be involved in the design and implementation of the project from conception to completion. The case study approach taken is particularistic (focusing on the STEAM-ED project), descriptive (providing in-depth descriptions from varied and multiple perspectives), and heuristic (interpreting the participants' experiences and the meaning they attributed to them).

Keywords: collaboration, transdisciplinary, STEAM, visual arts education

Procedia PDF Downloads 35
24305 Exploring an Exome Target Capture Method for Cross-Species Population Genetic Studies

Authors: Benjamin A. Ha, Marco Morselli, Xinhui Paige Zhang, Elizabeth A. C. Heath-Heckman, Jonathan B. Puritz, David K. Jacobs

Abstract:

Next-generation sequencing has enhanced the ability to acquire massive amounts of sequence data to address classic population genetic questions for non-model organisms. Targeted approaches allow for cost-effective or more precise analyses of relevant sequences, although many such techniques require a known genome, and purchasing probes from a company can be costly. This is challenging for non-model organisms with no published genome and can be expensive for large population genetic studies. Expressed exome capture sequencing (EecSeq) synthesizes probes in the lab from expressed mRNA, which is used to capture and sequence the coding regions of genomic DNA from a pooled suite of samples. A normalization step produces probes that recover transcripts from a wide range of expression levels. This approach offers low-cost recovery of a broad range of genes in the genome. This research project expands on EecSeq to investigate whether mRNA from one taxon may be used to capture relevant sequences from a series of increasingly less closely related taxa. For this purpose, we propose to use the endangered Northern Tidewater goby, Eucyclogobius newberryi, a non-model organism that inhabits California coastal lagoons. mRNA will be extracted from E. newberryi to create probes and capture exomes from eight other taxa, including the more at-risk Southern Tidewater goby, E. kristinae, and more divergent species. Captured exomes will be sequenced, analyzed bioinformatically and phylogenetically, and then compared to previously generated phylogenies across this group of gobies. This will provide an assessment of the utility of the technique in cross-species studies and for analyzing low genetic variation within species, as is the case for E. kristinae. This method has the potential to provide economical ways to expand population genetic and evolutionary biology studies of non-model organisms.

Keywords: coastal lagoons, endangered species, non-model organism, target capture method

Procedia PDF Downloads 173
24304 A Mixing Matrix Estimation Algorithm for Speech Signals under the Under-Determined Blind Source Separation Model

Authors: Jing Wu, Wei Lv, Yibing Li, Yuanfan You

Abstract:

The separation of speech signals has become a research hotspot in the field of signal processing in recent years, with many applications in teleconferencing, hearing aids, machine speech recognition and so on. The sounds received are usually noisy, and identifying the sounds of interest and obtaining clear sounds in such an environment is a problem worth exploring, namely the problem of blind source separation. This paper focuses on under-determined blind source separation (UBSS). Sparse component analysis is generally used for the under-determined blind source separation problem. The method is divided into two parts: first, a clustering algorithm is used to estimate the mixing matrix from the observed signals; then the signals are separated based on the known mixing matrix. In this paper, the problem of mixing matrix estimation is studied, and an improved algorithm is proposed to estimate the mixing matrix for speech signals in the UBSS model. The traditional potential function algorithm is not accurate for mixing matrix estimation, especially at low signal-to-noise ratio (SNR). In response to this problem, this paper considers an improved potential function method to estimate the mixing matrix. The algorithm not only avoids the influence of insufficient prior information in traditional clustering algorithms but also improves the estimation accuracy of the mixing matrix. This paper takes the mixing of four speech signals into two channels as an example. Simulation results show that the approach in this paper not only improves the accuracy of estimation but also applies to any mixing matrix.
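To illustrate the clustering stage only (not the authors' improved potential-function algorithm), the sketch below estimates a 2x4 mixing matrix by clustering the directions of high-energy observation samples with DBSCAN. The sources, mixing angles, noise level and thresholds are invented for the example.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)

# Hypothetical UBSS setup: 4 sparse "speech-like" sources mixed into 2 channels.
angles_true = np.array([0.3, 0.9, 1.6, 2.4])
A_true = np.vstack([np.cos(angles_true), np.sin(angles_true)])        # 2 x 4
S = rng.laplace(size=(4, 5000)) * (rng.random((4, 5000)) < 0.1)       # sparse sources
X = A_true @ S + 0.01 * rng.standard_normal((2, 5000))                # observations

# Keep high-energy (mostly single-source) samples and fold them onto a half-plane.
V = X[:, np.linalg.norm(X, axis=0) > 0.5]
V = V * np.sign(V[0])
theta = np.arctan2(V[1], V[0])          # direction of each scatter point

# Cluster the directions; each cluster centre estimates one column of A
# (up to the usual scaling/sign ambiguity of blind source separation).
labels = DBSCAN(eps=0.05, min_samples=20).fit_predict(theta.reshape(-1, 1))
cols = [np.mean(theta[labels == k]) for k in sorted(set(labels)) if k != -1]
A_est = np.vstack([np.cos(cols), np.sin(cols)])
print("estimated mixing matrix columns:\n", np.round(A_est, 2))
```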

Keywords: DBSCAN, potential function, speech signal, the UBSS model

Procedia PDF Downloads 120
24303 Assessing Livelihood Vulnerability to Climate Change and Adaptation Strategies in Rajanpur District, Pakistan

Authors: Muhammad Afzal, Shahbaz Mushtaq, Duc-Anh-An-Vo, Kathryn Reardon Smith, Thanh Ma

Abstract:

Climate change has become one of the most challenging environmental issues of the 21st century. Climate change-induced natural disasters, especially floods, are major drivers of livelihood vulnerability, impacting millions of individuals worldwide. Evaluating and mitigating the effects of floods requires an in-depth understanding of the relationship between vulnerability and livelihood capital assets. Using an integrated approach combining the sustainable livelihood framework and systems thinking, the study developed a conceptual model of a generalized livelihood system in District Rajanpur, Pakistan. The model visualizes the livelihood vulnerability system as a whole and identifies the key feedback loops likely to influence livelihood vulnerability. The study suggests that such conceptual models provide effective communication and understanding tools for stakeholders and decision-makers to anticipate the problem and design appropriate policies. They can also serve as an evaluation technique for rural livelihood policy and identify key systemic interventions. The key finding of the study is that household income, health, and education are the major factors behind the livelihood vulnerability of the rural poor of District Rajanpur. The Pakistani government has tried to reduce the livelihood vulnerability of the region through different income, health, and education programs, but many changes are still required to make these programs more effective, especially during floods. The government provided only cash to vulnerable and marginalized families through income support programs, but this study suggests that, along with the cash, the government must provide seed storage facilities and access to crop insurance for farmers. Similarly, the government should establish basic health units in villages and arrange frequent visits by mobile medical vans with advanced medical lab facilities during and after floods.

Keywords: livelihood vulnerability, rural communities, flood, sustainable livelihood framework, system dynamics, Pakistan

Procedia PDF Downloads 37
24302 Driver Behavior Analysis and Inter-Vehicular Collision Simulation Approach

Authors: Lu Zhao, Nadir Farhi, Zoi Christoforou, Nadia Haddadou

Abstract:

Safety testing for the deployment of intelligent connected vehicles (ICVs) on the road network is a critical challenge. Road traffic network simulation can be used to test the functionality of ICVs; it is not only time-saving and less energy-consuming but can also create scenarios with car collisions. However, the relationship between different human driver behaviors and the occurrence of car collisions is not yet clearly understood; meanwhile, the procedure for generating car collisions in numerical traffic simulators is not fully integrated. In this paper, we propose an approach that identifies specific driver profiles from real driving data and then replicates them in numerical traffic simulations with the purpose of generating inter-vehicular collisions. We propose three profiles: (i) 'aggressive', with a short time-headway; (ii) 'inattentive', with a long reaction time; and (iii) 'normal', with intermediate values of reaction time and time-headway. These three driver profiles are extracted from the NGSIM dataset and simulated using the intelligent driver model (IDM) with an extension for reaction time. Finally, inter-vehicular collisions are generated by varying the percentages of the different profiles.
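A minimal sketch of an IDM car-following law with a delayed (reaction-time) input and the three profiles above is given below; the parameter values are illustrative defaults, not the calibrated NGSIM values.

```python
import numpy as np

# Illustrative driver profiles: desired time-headway T (s) and reaction time (s).
PROFILES = {
    "normal":      dict(T=1.5, reaction=0.8),
    "aggressive":  dict(T=0.8, reaction=0.8),   # short time-headway
    "inattentive": dict(T=1.5, reaction=2.0),   # long reaction time
}
V0, A_MAX, B, S0, DELTA = 30.0, 1.5, 2.0, 2.0, 4   # generic IDM constants (SI units)

def idm_acceleration(v, gap, dv, T):
    """IDM acceleration given speed v, gap to the leader, and closing speed dv."""
    s_star = S0 + max(0.0, v * T + v * dv / (2 * np.sqrt(A_MAX * B)))
    return A_MAX * (1 - (v / V0) ** DELTA - (s_star / gap) ** 2)

def delayed_acceleration(state_history, step, dt, profile):
    """Reaction-time extension: react to the (v, gap, dv) state observed
    `reaction` seconds ago instead of the current one."""
    p = PROFILES[profile]
    v, gap, dv = state_history[max(0, step - int(p["reaction"] / dt))]
    return idm_acceleration(v, gap, dv, p["T"])

# Example: an aggressive driver at 25 m/s, 20 m behind a leader it approaches at 2 m/s.
print(round(idm_acceleration(25.0, 20.0, 2.0, PROFILES["aggressive"]["T"]), 2), "m/s^2")
```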

Keywords: vehicular collisions, human driving behavior, traffic modeling, car-following models, microscopic traffic simulation

Procedia PDF Downloads 160
24301 QSAR Studies of Certain Novel Heterocycles Derived from Bis-1,2,4-Triazoles as Anti-Tumor Agents

Authors: Madhusudan Purohit, Stephen Philip, Bharathkumar Inturi

Abstract:

In this paper, we report a quantitative structure-activity relationship study of novel bis-triazole derivatives for predicting their activity profile. The full model encompassed a dataset of 46 bis-triazoles. The Tripos Sybyl X 2.0 program was used to conduct the CoMSIA QSAR modeling. The partial least-squares (PLS) method was used for the statistical analysis and to derive a QSAR model based on the field values of the CoMSIA descriptors. The compounds were divided into test and training sets and evaluated with various CoMSIA parameters to find the best QSAR model. An optimum number of components was first determined separately by cross-validated regression for the CoMSIA model and then applied in the final analysis. A series of parameters was examined, and the best-fit model was obtained using the donor, partition coefficient and steric parameters. The CoMSIA models demonstrated good statistical results, with a regression coefficient (r²) and a cross-validated coefficient (q²) of 0.575 and 0.830, respectively. The standard error for the predicted model was 0.16322. In the CoMSIA model, the steric descriptors make a marginally larger contribution than the electrostatic descriptors. The finding that the steric descriptor is the largest contributor to the CoMSIA QSAR models is consistent with the observation that more than half of the binding site area is occupied by steric regions.
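The component-selection step can be sketched as below with scikit-learn's PLS regression and a leave-one-out cross-validated q²; the descriptor matrix here is random stand-in data, since the real CoMSIA fields come from Sybyl.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Stand-in for CoMSIA field descriptors of 46 compounds and their activities.
rng = np.random.default_rng(1)
X = rng.standard_normal((46, 200))
y = X[:, :5] @ rng.standard_normal(5) + 0.5 * rng.standard_normal(46)

best_q2, best_n = -np.inf, 1
for n in range(1, 11):                 # choose number of latent components by LOO CV
    pls = PLSRegression(n_components=n)
    y_cv = cross_val_predict(pls, X, y, cv=LeaveOneOut()).ravel()
    q2 = 1 - np.sum((y - y_cv) ** 2) / np.sum((y - y.mean()) ** 2)
    if q2 > best_q2:
        best_q2, best_n = q2, n

pls = PLSRegression(n_components=best_n).fit(X, y)
r2 = pls.score(X, y)                   # conventional r^2 on the training data
print(f"components={best_n}, q2={best_q2:.3f}, r2={r2:.3f}")
```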

Keywords: 3D QSAR, CoMSIA, triazoles, novel heterocycles

Procedia PDF Downloads 430
24300 Estimation of Structural Parameters in Time Domain Using One Dimensional Piezo Zirconium Titanium Patch Model

Authors: N. Jinesh, K. Shankar

Abstract:

This article presents a method of using a one-dimensional piezoelectric patch-on-beam model for structural identification. A hybrid element constituted of a one-dimensional beam element and a PZT sensor is used with reduced material properties. This model is convenient and simple for the identification of beams. The accuracy of this element is first verified against a corresponding 3D finite element model (FEM). The structural identification is carried out as an inverse problem whereby parameters are identified by minimizing the deviation between the predicted and measured voltage responses of the patch when subjected to excitation. A non-classical optimization algorithm, particle swarm optimization (PSO), is used to minimize this objective function. The signals are polluted with 5% Gaussian noise to simulate experimental noise. The proposed method is applied to a beam structure, and the identified parameters are stiffness and damping. The model is also validated experimentally.
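The inverse-problem loop can be sketched as below with a plain PSO implementation; the forward model is a toy single-degree-of-freedom response standing in for the beam-PZT hybrid element, and all bounds, PSO coefficients and "true" parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model: decaying sinusoidal "voltage" response of a 1-DOF system
# with stiffness k and damping c (a stand-in for the beam + PZT patch model).
def voltage_response(k, c, t=np.linspace(0, 1, 200)):
    wn, zeta = np.sqrt(k), c / (2 * np.sqrt(k))
    return np.exp(-zeta * wn * t) * np.sin(wn * np.sqrt(1 - zeta**2) * t)

true_k, true_c = 400.0, 2.0
measured = voltage_response(true_k, true_c)
measured = measured + 0.05 * np.std(measured) * rng.standard_normal(measured.shape)

def objective(params):
    """Squared deviation between predicted and 'measured' voltage responses."""
    return np.sum((voltage_response(*params) - measured) ** 2)

# Minimal particle swarm optimization over (stiffness, damping).
n_particles, n_iter = 30, 200
lo, hi = np.array([100.0, 0.1]), np.array([1000.0, 10.0])
pos = rng.uniform(lo, hi, (n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([objective(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("identified stiffness and damping:", np.round(gbest, 2))
```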

Keywords: inverse problem, particle swarm optimization, PZT patches, structural identification

Procedia PDF Downloads 292
24299 Cascaded Neural Network for Internal Temperature Forecasting in Induction Motor

Authors: Hidir S. Nogay

Abstract:

In this study, two systems were created to predict the interior temperature of an induction motor. One consisted of a simple ANN model with two layers, ten input parameters and one output parameter. The other consisted of eight ANN models connected to each other in a cascade; this cascaded ANN system has 17 inputs. The main reason for using a cascaded system in this study is to achieve more accurate estimation by increasing the number of inputs to the ANN system. The cascaded ANN system is compared with the simple conventional ANN model to demonstrate these advantages. The dataset, containing 329 samples, was obtained from experimental applications; a small part of it was used to obtain more understandable graphs. 30% of the data was used for testing and validation. Test and validation data were determined for each ANN model separately, and the reliability of each model was tested. As a result of this study, it was found that the cascaded ANN system produced more accurate estimates than the conventional ANN model.
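The cascading idea can be sketched with a two-stage chain of scikit-learn MLP regressors (the study itself uses eight networks); the data, the intermediate quantity and the network sizes below are hypothetical.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical motor data: 10 drive/ambient measurements, an intermediate
# quantity and the target interior temperature (all synthetic).
X = rng.standard_normal((329, 10))
intermediate = X @ rng.standard_normal(10) + 0.1 * rng.standard_normal(329)
y = 0.7 * intermediate + X[:, 0] + 0.1 * rng.standard_normal(329)

X_tr, X_te, m_tr, m_te, y_tr, y_te = train_test_split(
    X, intermediate, y, test_size=0.3, random_state=0)

# Conventional single ANN: inputs -> interior temperature.
simple = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000).fit(X_tr, y_tr)

# Cascaded system: a first ANN predicts the intermediate quantity, whose output
# is appended to the inputs of a second ANN (more inputs overall).
stage1 = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000).fit(X_tr, m_tr)
X_tr2 = np.column_stack([X_tr, stage1.predict(X_tr)])
X_te2 = np.column_stack([X_te, stage1.predict(X_te)])
stage2 = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000).fit(X_tr2, y_tr)

print("simple ANN   R^2:", round(simple.score(X_te, y_te), 3))
print("cascaded ANN R^2:", round(stage2.score(X_te2, y_te), 3))
```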

Keywords: cascaded neural network, internal temperature, inverter, three-phase induction motor

Procedia PDF Downloads 332
24298 Coefficient of Performance (COP) Optimization of an R134a Cross Vane Expander Compressor Refrigeration System

Authors: Y. D. Lim, K. S. Yap, K. T. Ooi

Abstract:

The cross vane expander compressor (CVEC) is a newly invented expander-compressor combined unit, introduced to replace the compressor and the expansion valve in a traditional refrigeration system. A mathematical model of the CVEC has been developed to examine its performance, and it was found that the energy consumption of a conventional refrigeration system could be reduced by as much as 18%. It is believed that energy consumption can be further reduced by optimizing the device. In this study, the coefficient of performance (COP) of the CVEC has been optimized under predetermined operational parameters and constrained main design parameters. Several main design parameters of the CVEC were selected as the variables, and the optimization was done with a theoretical model in a simulation program. The theoretical model consists of a geometrical model, a dynamic model, a heat transfer model and a valve dynamics model. The complex optimization method, a constrained, direct-search, multi-variable method, was used in the study. As a result, the optimization study suggested that with an appropriate combination of design parameters, a 58% COP improvement in the CVEC R134a refrigeration system is possible.

Keywords: COP, cross vane expander-compressor, CVEC, design, simulation, refrigeration system, air-conditioning, R134a, multi variables

Procedia PDF Downloads 316
24297 Rainfall–Runoff Simulation Using WetSpa Model in Golestan Dam Basin, Iran

Authors: M. R. Dahmardeh Ghaleno, M. Nohtani, S. Khaledi

Abstract:

Flood simulation and prediction is one of the most active research areas in surface water management. WetSpa is a distributed, continuous, physically based model with a daily or hourly time step that describes precipitation, runoff, and evapotranspiration processes for both simple and complex contexts. The model uses a modified rational method for runoff calculation, and runoff is routed along the flow path using the diffusion-wave equation, which depends on the slope, velocity, and flow route characteristics. Golestan Dam Basin is located in Golestan province, Iran, between 55° 16′ 50″ and 56° 4′ 25″ E and 37° 19′ 39″ and 37° 49′ 28″ N. The area of the catchment is about 224 km², and elevations in the catchment range from 414 m at the outlet to 2856 m, with an average slope of 29.78%. Results of the simulations show good agreement between calculated and measured hydrographs at the outlet of the basin. Based on the Nash-Sutcliffe model efficiency coefficient for the calibration period, the model estimated the daily hydrographs and the maximum flow rate with accuracies of up to 59% and 80.18%, respectively.
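The Nash-Sutcliffe efficiency used for this evaluation can be computed as below; the discharge values are made up for illustration.

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the model is no
    better than the mean of the observations."""
    observed = np.asarray(observed, float)
    simulated = np.asarray(simulated, float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

# Hypothetical daily discharges (m^3/s) at the basin outlet.
q_obs = np.array([3.1, 4.0, 12.5, 30.2, 18.7, 9.4, 5.2])
q_sim = np.array([2.8, 4.4, 10.9, 27.5, 20.1, 10.2, 5.9])
print("NSE =", round(nash_sutcliffe(q_obs, q_sim), 3))
```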

Keywords: watershed simulation, WetSpa, stream flow, flood prediction

Procedia PDF Downloads 232
24296 Reinforcement Learning for Self Driving Racing Car Games

Authors: Adam Beaunoyer, Cory Beaunoyer, Mohammed Elmorsy, Hanan Saleh

Abstract:

This research aims to create a reinforcement learning agent capable of racing in challenging simulated environments with a low collision count. We present a reinforcement learning agent that can navigate challenging tracks using both a Deep Q-Network (DQN) and a Soft Actor-Critic (SAC) method. A challenging track includes curves, jumps, and varying road widths throughout. The environment used in this research is based on the 1995 racing game WipeOut, using open-source code from GitHub. The proposed reinforcement learning agent can navigate challenging tracks rapidly while maintaining a low race completion time and collision count. The results show that the SAC model outperforms the DQN model by a large margin. We also propose an alternative multiple-car model that can navigate the track without colliding with other vehicles on the track. The SAC model is the basis for the multiple-car model, which completes laps more quickly than the single-car model but has a higher collision rate with the track wall.

Keywords: reinforcement learning, soft actor-critic, deep q-network, self-driving cars, artificial intelligence, gaming

Procedia PDF Downloads 24
24295 Kohonen Self-Organizing Maps as a New Method for Determination of Salt Composition of Multi-Component Solutions

Authors: Sergey A. Burikov, Tatiana A. Dolenko, Kirill A. Gushchin, Sergey A. Dolenko

Abstract:

The paper presents the results of clustering with Kohonen self-organizing maps (SOM) applied to the analysis of an array of Raman spectra of multi-component solutions of inorganic salts, for determination of the types of salts present in the solution. It is demonstrated that the use of SOM is a promising method for solving clustering and classification problems in the spectroscopy of multi-component objects, as attributing a pattern to a cluster may be used to recognize the component composition of the object.
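A minimal self-contained SOM in NumPy is sketched below to show how spectra are mapped to grid nodes (clusters); the "spectra" are random stand-in vectors, and the grid size and training schedule are arbitrary choices, not those of the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(data, grid=(8, 8), n_iter=5000, lr0=0.5, sigma0=3.0):
    """Minimal Kohonen SOM: each spectrum is attracted to the grid node
    (best matching unit) whose weight vector is closest to it."""
    weights = rng.random((grid[0], grid[1], data.shape[1]))
    gy, gx = np.mgrid[0:grid[0], 0:grid[1]]
    for t in range(n_iter):
        x = data[rng.integers(len(data))]
        d = np.linalg.norm(weights - x, axis=2)           # distance to every node
        by, bx = np.unravel_index(d.argmin(), d.shape)    # best matching unit
        lr = lr0 * np.exp(-t / n_iter)                    # decaying learning rate
        sigma = sigma0 * np.exp(-t / n_iter)              # decaying neighbourhood
        h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * sigma ** 2))
        weights += lr * h[..., None] * (x - weights)
    return weights

def best_matching_unit(weights, x):
    d = np.linalg.norm(weights - x, axis=2)
    return np.unravel_index(d.argmin(), d.shape)

# Hypothetical stand-in for an array of Raman spectra (one row per spectrum).
spectra = rng.random((200, 50))
som = train_som(spectra)
print("cluster node of first spectrum:", best_matching_unit(som, spectra[0]))
```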

Keywords: Kohonen self-organizing maps, clusterization, multi-component solutions, Raman spectroscopy

Procedia PDF Downloads 430
24294 Alternative Approach to the Machine Vision System Operating for Solving Industrial Control Issue

Authors: M. S. Nikitenko, S. A. Kizilov, D. Y. Khudonogov

Abstract:

The paper considers an approach to a machine vision operating system combined with a grid of light markers. This approach is used to solve several scientific and technical problems, such as measuring the capability of an apron feeder delivering coal from a lining return port to a conveyor in high-seam mining technology with discharge to a conveyor, and prototyping an autonomous vehicle obstacle detection system. Primary verification of a method for calculating bulk material volume using three-dimensional modeling, and its validation under laboratory conditions with calculation of relative errors, were carried out. A method for calculating the capability of an apron feeder based on a machine vision system is offered, together with a simplified technology for three-dimensional modeling of the examined measuring area with machine vision. The proposed method allows the volume of rock mass moved by an apron feeder to be measured using machine vision. This approach solves the issue of controlling the volume of coal produced by a feeder when working high seams with longwall complexes discharging to a conveyor, with accuracy suitable for practical application. The developed mathematical apparatus for measuring feeder productivity in kg/s uses only basic mathematical functions such as addition, subtraction, multiplication, and division, which simplifies software development and expands the variety of microcontrollers and microcomputers suitable for calculating feeder capability. A feature of the obstacle detection problem is that obstacles distort the laser grid, which simplifies their detection. The paper presents algorithms for video camera image processing and for autonomous vehicle model control based on obstacle detection machine vision systems. A sample fragment of obstacle detection at the moment of laser grid distortion is demonstrated.
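As a simple illustration of the "basic arithmetic only" volume and productivity calculation described above, the sketch below sums a reconstructed height map into a volume and converts it to a mass flow; the grid size, feeder speed and bulk density are hypothetical, and this is one possible formulation rather than the authors' exact apparatus.

```python
import numpy as np

def bulk_volume(height_map, cell_area_m2):
    """Volume of material from a machine-vision height map: the sum of the cell
    heights times the ground area of one grid cell."""
    return float(np.sum(height_map) * cell_area_m2)

def feeder_capacity_kg_s(volume_m3, area_length_m, feeder_speed_m_s, density_kg_m3):
    """Mass flow: the material volume over the measuring area, swept through at
    the feeder speed, times the bulk density (basic arithmetic only)."""
    return volume_m3 / area_length_m * feeder_speed_m_s * density_kg_m3

# Hypothetical reconstructed heights (m) over a 0.05 m x 0.05 m grid covering a
# 2 m long section of the apron feeder.
heights = np.full((40, 20), 0.12)
volume = bulk_volume(heights, cell_area_m2=0.05 * 0.05)
print("volume over measuring area:", round(volume, 3), "m^3")
print("capacity ~", round(feeder_capacity_kg_s(volume, 2.0, 0.2, 900.0), 1), "kg/s")
```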

Keywords: machine vision, machine vision operating system, light markers, measuring capability, obstacle detection system, autonomous transport

Procedia PDF Downloads 97
24293 Two-Warehouse Inventory Model for Deteriorating Items with Inventory-Level-Dependent Demand under Two Dispatching Policies

Authors: Lei Zhao, Zhe Yuan, Wenyue Kuang

Abstract:

This paper studies two-warehouse inventory models for a deteriorating item, considering that demand is influenced by inventory levels. The problem mainly focuses on the optimal order policy and the optimal order cycle with inventory-level-dependent demand in a two-warehouse system for retailers. It considers the different deterioration rates and inventory holding costs in the owned warehouse (OW) and the rented warehouse (RW), as well as transportation cost, allowed shortage and partial backlogging. Two inventory models are formulated based on the policy choices of last-in, first-out (LIFO) and first-in, first-out (FIFO), and a comparative analysis of the LIFO and FIFO models is made. The study finds that the FIFO policy is more in line with realistic operating conditions; especially when the inventory holding cost of the OW is high and there is no difference or a big difference between the deterioration rates of the OW and RW, the FIFO policy has better applicability. Meanwhile, this paper considers the differences between the effects of warehouse and shelf inventory levels on demand, builds a retailer inventory decision model, and studies the factors affecting the optimal order quantity, the optimal order cycle and the average inventory cost per unit time. To minimize the average total cost, optimal dispatching policies are provided for retailers' decisions.

Keywords: FIFO model, inventory-level-dependent, LIFO model, two-warehouse inventory

Procedia PDF Downloads 266
24292 Causal Relationship between Corporate Governance and Financial Information Transparency: A Simultaneous Equations Approach

Authors: Maali Kachouri, Anis Jarboui

Abstract:

We focus on the causal relationship between governance and information transparency as well as the interrelations among the various governance mechanisms. This paper employs a simultaneous equations approach to examine this relationship in the Tunisian context. Based on an 8-year dataset, our sample covers 28 listed companies over 2006-2013. Our findings suggest that internal and external governance mechanisms are interdependent. Moreover, in analyzing the causal effect between information transparency and governance mechanisms, we find evidence that information transparency tends to strengthen good corporate governance practices.
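One common way to estimate such a simultaneous system is two-stage least squares; the NumPy sketch below illustrates the mechanics on simulated data with hypothetical instruments and is not the authors' specification.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 28 * 8   # firm-year observations, as in a 28-firm, 8-year panel

# Simulated simultaneous system:
#   governance   = 0.8*z_g + 0.3*transparency + e_g
#   transparency = 0.5*governance + 0.6*z_t   + e_t
# where z_g and z_t are instruments excluded from the other equation.
z_g, z_t = rng.standard_normal(n), rng.standard_normal(n)
e_g, e_t = rng.standard_normal(n), rng.standard_normal(n)
governance = (0.8 * z_g + 0.18 * z_t + 0.3 * e_t + e_g) / 0.85   # reduced form
transparency = 0.5 * governance + 0.6 * z_t + e_t

def two_sls(y, endog, instrument):
    """2SLS for one equation: regress the endogenous regressor on the
    instrument, then regress y on the fitted values; return the slope."""
    Z = np.column_stack([np.ones(len(y)), instrument])
    fitted = Z @ np.linalg.lstsq(Z, endog, rcond=None)[0]
    X = np.column_stack([np.ones(len(y)), fitted])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

print("effect of governance on transparency:", round(two_sls(transparency, governance, z_g), 3))
print("effect of transparency on governance:", round(two_sls(governance, transparency, z_t), 3))
```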

Keywords: simultaneous equations approach, transparency, causal relationship, corporate governance

Procedia PDF Downloads 339
24291 Thermomechanical Damage Modeling of F114 Carbon Steel

Authors: A. El Amri, M. El Yakhloufi Haddou, A. Khamlichi

Abstract:

Numerical simulation based on the finite element method (FEM) is widely used in academic institutes and in industry. It is a useful tool to predict many phenomena present in classical manufacturing forming processes, such as fracture, but the results of such numerical models depend strongly on the parameters of the constitutive behavior model. The combined influence of thermal and mechanical loads causes damage. The temperature- and strain-rate-dependent material properties and their modelling are discussed. A Johnson-Cook damage model has been selected for the numerical simulations, and the ABAQUS 6.11 software is used for the finite element analysis. This model was introduced in order to give information concerning crack initiation under thermal and mechanical loads.
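The Johnson-Cook flow stress and fracture strain usually paired with this damage model take the standard forms sketched below; the constants are placeholders, not calibrated F114 steel values.

```python
import numpy as np

# Placeholder Johnson-Cook constants (not calibrated F114 values).
A, B, N, C, M = 350e6, 275e6, 0.36, 0.022, 1.0         # flow-stress constants
D1, D2, D3, D4, D5 = 0.05, 3.44, -2.12, 0.002, 0.61    # damage constants
EPS0, T_ROOM, T_MELT = 1.0, 293.0, 1793.0              # reference values

def jc_flow_stress(eps_p, eps_rate, T):
    """sigma = (A + B*eps^n) * (1 + C*ln(eps_rate*)) * (1 - T*^m)."""
    t_star = (T - T_ROOM) / (T_MELT - T_ROOM)
    return (A + B * eps_p**N) * (1 + C * np.log(eps_rate / EPS0)) * (1 - t_star**M)

def jc_fracture_strain(triaxiality, eps_rate, T):
    """eps_f = [D1 + D2*exp(D3*sigma*)] * [1 + D4*ln(eps_rate*)] * [1 + D5*T*]."""
    t_star = (T - T_ROOM) / (T_MELT - T_ROOM)
    return ((D1 + D2 * np.exp(D3 * triaxiality))
            * (1 + D4 * np.log(eps_rate / EPS0))
            * (1 + D5 * t_star))

# Damage is accumulated as D = sum(delta_eps_p / eps_f); the element fails at D >= 1.
print("flow stress (MPa):", round(jc_flow_stress(0.1, 10.0, 500.0) / 1e6, 1))
print("fracture strain:", round(jc_fracture_strain(0.33, 10.0, 500.0), 3))
```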

Keywords: thermo-mechanical fatigue, failure, numerical simulation, fracture, damage

Procedia PDF Downloads 378
24290 Gender-Based Sustainable Food Self-Resilience for Villages Using a Dynamic System Model

Authors: Kholil, Laksanto Utomo

Abstract:

The food needs of the Indonesian people will continue to increase year by year due to population growth. To ensure food security and resilience, the government has run a food self-resilient village program since 2006. Food resilience is a complex system consisting of availability, distribution and consumption subsystems, concerning the sufficiency of the food consumed in both quantity and quality. Low access to food sources and limited assets are the dominant factors in food vulnerability. Women have a major role in supporting the productive activities of the family to meet food sufficiency and resilience. The purpose of this paper is to discuss a gender-responsive model of the food self-resilient village using a dynamic system model. The model will be developed at three levels, family, village, and regency, in accordance with the village food resilience model concept developed by the Ministry of Agriculture. Model development is based on the results of expert discussions and field study. Through scenarios and model simulations, we will be able to develop appropriate policy strategies for family food resilience. The results of the study show that food resilience is influenced by many factors, including government policies, technology, and human resources; at the same time, it provides feedback to government policies and to the number of poor families.

Keywords: food availability, food sufficiency, gender, model dynamic, law enforcement

Procedia PDF Downloads 522
24289 A Model of Knowledge Management Culture Change

Authors: Reza Davoodi, Hamid Abbasi, Heidar Norouzi, Gholamabbas Alipourian

Abstract:

A dynamic model shaping a process of knowledge management (KM) culture change is suggested. It is aimed at providing effective KM of employees for obtaining desired results in an organization. The essential requirements for obtaining KM culture change are determined. The proposed model realizes these requirements. Dynamics of the model are expressed by a change of its parameters. It is adjusted to the dynamic process of KM culture change. Building the model includes elaboration and integration of interconnected components. The “Result” is a central component of the model. This component determines a desired organizational goal and possible directions of its attainment. The “Confront” component engenders constructive confrontation in an organization. For this reason, the employees are prompted toward KM culture change with the purpose of attaining the desired result. The “Assess” component realizes complex assessments of employee proposals by management and peers. The proposals are directed towards attaining the desired result in an organization. The “Reward” component sets the order of assigning rewards to employees based on the assessments of their proposals.

Keywords: knowledge management, organizational culture change, employee, result

Procedia PDF Downloads 392
24288 Multilayer Perceptron Neural Network for Rainfall-Water Level Modeling

Authors: Thohidul Islam, Md. Hamidul Haque, Robin Kumar Biswas

Abstract:

Floods are among the deadliest natural disasters and are very complex to model; however, machine learning is opening the door to more reliable and accurate flood prediction. In this research, a multilayer perceptron neural network (MLP) is developed to model the rainfall-water level relation in a subtropical monsoon climatic region on the Bangladesh-India border. Our experiments show promising empirical results for forecasting the water level at a 1-day lead time. Our best performing MLP model achieves a 98.7% coefficient of determination with lower model complexity, which surpasses previously reported results on similar forecasting problems.

Keywords: flood forecasting, machine learning, multilayer perceptron network, regression

Procedia PDF Downloads 157
24287 Developing a Maturity Model of Digital Twin Application for Infrastructure Asset Management

Authors: Qingqing Feng, S. Thomas Ng, Frank J. Xu, Jiduo Xing

Abstract:

Faced with unprecedented challenges including aging assets, a lack of maintenance budget, overtaxed and inefficient usage, and an outcry for better service quality from society, today's infrastructure systems have become the main focus of many metropolises pursuing sustainable urban development and improved resilience. Digital twin, one of the most innovative enabling technologies today, may open up new ways of tackling various infrastructure asset management (IAM) problems. A digital twin application for IAM, as its name indicates, represents an evolving digital model of the intended infrastructure that provides functions including real-time monitoring; what-if event simulation; and scheduling, maintenance, and management optimization, based on technologies such as IoT, big data and AI. Up to now, there have already been numerous global digital twin initiatives such as 'Virtual Singapore' and 'Digital Built Britain'. With digital twin technology progressively permeating the IAM field, it is necessary to consider the maturity of such applications and how institutional or industrial digital twin application processes will evolve in the future. To address the lack of such a benchmark, a draft maturity model is developed for digital twin application in the IAM field. Firstly, an overview of current smart city maturity models is given, based on which the draft Maturity Model of Digital Twin Application for Infrastructure Asset Management (MM-DTIAM) is developed for multiple stakeholders to evaluate and derive informed decisions. The development process follows a systematic approach with four major procedures, namely scoping, designing, populating and testing. Through in-depth literature review, interviews and focus group meetings, the key domain areas are populated, defined and iteratively tuned. Finally, a case study of several digital twin projects is conducted for self-verification. The findings of the research reveal that: (i) the developed maturity model outlines five maturing levels leading to an optimised digital twin application from the aspects of strategic intent, data, technology, governance, and stakeholder engagement; (ii) based on the case study, levels 1 to 3 are already partially implemented in some initiatives, while level 4 is on the way; and (iii) more practice is still needed to refine the draft so that the key domain areas are mutually exclusive and collectively exhaustive.

Keywords: digital twin, infrastructure asset management, maturity model, smart city

Procedia PDF Downloads 141
24286 An Approach for Coagulant Dosage Optimization Using Soft Jar Test: A Case Study of Bangkhen Water Treatment Plant

Authors: Ninlawat Phuangchoke, Waraporn Viyanon, Setta Sasananan

Abstract:

The most important process in a water treatment plant is coagulation using alum and poly aluminum chloride (PACL), whose usage is worth a hundred thousand baht per day. Therefore, determining the dosages of alum and PACL is the most important factor to be prescribed so that water production is economical and valuable. This research applies an artificial neural network (ANN) trained with the Levenberg-Marquardt algorithm to create a mathematical model (soft jar test) for predicting the chemical doses used in coagulation, namely alum and PACL. The input data consist of the turbidity, pH, alkalinity, conductivity, and oxygen consumption (OC) of the Bangkhen water treatment plant (BKWTP) of the Metropolitan Waterworks Authority. The data, collected from 1 January 2019 to 31 December 2019, cover the changing seasons of Thailand. The input data of the ANN are divided into three groups: a training set, a test set, and a validation set. The best model performance gives a coefficient of determination and mean absolute error of 0.73 and 3.18 for alum, and 0.59 and 3.21 for PACL, respectively.

Keywords: soft jar test, jar test, water treatment plant process, artificial neural network

Procedia PDF Downloads 150