Search results for: model estimation
17003 Hybrid Localization Schemes for Wireless Sensor Networks
Authors: Fatima Babar, Majid I. Khan, Malik Najmus Saqib, Muhammad Tahir
Abstract:
This article provides range-based improvements over a well-known single-hop range-free localization scheme, Approximate Point in Triangulation (APIT), by proposing an energy-efficient barycentric coordinate based Point-In-Triangulation (PIT) test along with PIT-based trilateration. These improvements result in energy efficiency, reduced localization error, and improved localization coverage compared to APIT and its variants. Moreover, we propose to embed received signal strength indication (RSSI) based distance estimation in DV-Hop, which is a multi-hop localization scheme. The proposed localization algorithm achieves energy efficiency and reduced localization error compared to DV-Hop and its available improvements. Furthermore, a hybrid multi-hop localization scheme is also proposed that utilizes the barycentric coordinate based PIT test and both range-based (RSSI) and range-free (hop count) techniques for distance estimation. Our experimental results provide evidence that the proposed hybrid multi-hop localization scheme results in a two- to five-fold reduction in localization error compared to DV-Hop and its variants, at reduced energy requirements. Keywords: Localization, Trilateration, Triangulation, Wireless Sensor Networks
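The barycentric PIT test mentioned above reduces to checking the signs of a point's barycentric coordinates with respect to a triangle of anchor nodes. The following sketch is only an illustration of that geometric test under assumed coordinates; the function names and the example anchors are ours, not the authors' energy-efficient scheme.

```python
# Minimal sketch of a barycentric Point-In-Triangulation (PIT) test.
# Names, coordinates and structure are illustrative assumptions.

def barycentric_coords(p, a, b, c):
    """Barycentric coordinates of point p w.r.t. triangle (a, b, c)."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    l1 = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / det
    l2 = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / det
    return l1, l2, 1.0 - l1 - l2

def point_in_triangle(p, a, b, c, eps=1e-9):
    """PIT test: p lies inside the triangle iff all coordinates are non-negative."""
    return all(l >= -eps for l in barycentric_coords(p, a, b, c))

# Example: an unknown node's estimated position tested against three anchors.
anchors = [(0.0, 0.0), (10.0, 0.0), (5.0, 8.0)]
print(point_in_triangle((4.0, 3.0), *anchors))   # True  -> inside
print(point_in_triangle((12.0, 1.0), *anchors))  # False -> outside
```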
Procedia PDF Downloads 467
17002 A Study of Population Growth Models and Future Population of India
Authors: Sheena K. J., Jyoti Badge, Sayed Mohammed Zeeshan
Abstract:
A comparative study of exponential and logistic population growth models in India. India is the second most populous country in the world, just behind China, and is expected to take first place within the next year. The Indian population has grown at a remarkably higher rate than that of other countries over the past 20 years. Many scientists and demographers have formulated models of population growth in order to study and predict the future population, such as the Fibonacci growth model, the exponential growth model, the logistic growth model, and the Lotka-Volterra model. These models have been effective in the past, to an extent, in predicting the population. However, a detailed comparative study between population models is essential to arrive at a more accurate one. This research study therefore analyzes and compares the two population models under consideration, the exponential and logistic growth models, thereby identifying the more effective one. Using the 2011 census data, the approximate populations for 2016 to 2031 are calculated for 20 Indian states using both models, and the results are compared with the actual population. On comparing the results of both models, the logistic population model is found to be more accurate than the exponential model, and using this model, the future population can be predicted more effectively. This gives researchers an insight into which population models are effective and how effective they are in predicting the future population. Keywords: population growth, population models, exponential model, logistic model, fibonacci model, lotka-volterra model, future population prediction, demographers
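The two models being compared have standard closed forms: exponential growth P(t) = P0·exp(rt) and logistic growth P(t) = K / (1 + A·exp(−rt)) with A = (K − P0)/P0. The sketch below implements those textbook forms only; the initial population, growth rate, and carrying capacity are illustrative placeholders, not the study's fitted census values.

```python
# Textbook forms of the two growth models compared in the study.
# P0, r, K below are illustrative placeholders, not the study's fitted values.
import numpy as np

def exponential(t, P0, r):
    """Exponential model: P(t) = P0 * exp(r * t)."""
    return P0 * np.exp(r * t)

def logistic(t, P0, r, K):
    """Logistic model: P(t) = K / (1 + ((K - P0) / P0) * exp(-r * t))."""
    A = (K - P0) / P0
    return K / (1.0 + A * np.exp(-r * t))

t = np.arange(0, 21, 5)                      # years after the 2011 census
print(exponential(t, P0=72.6, r=0.016))      # unchecked exponential growth
print(logistic(t, P0=72.6, r=0.016, K=110))  # growth saturating towards K
```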
Procedia PDF Downloads 124
17001 Digital Twin of Real Electrical Distribution System with Real Time Recursive Load Flow Calculation and State Estimation
Authors: Anosh Arshad Sundhu, Francesco Giordano, Giacomo Della Croce, Maurizio Arnone
Abstract:
Digital Twin (DT) is a technology that generates a virtual representation of a physical system or process, enabling real-time monitoring, analysis, and simulation. A DT of an Electrical Distribution System (EDS) can perform online analysis by integrating static and real-time data in order to present the current grid status, and predictions about its future status, to the Distribution System Operator (DSO), producers, and consumers. DT technology for an EDS also offers the DSO the opportunity to test hypothetical scenarios. This paper discusses the development of a DT of an EDS by means of a Smart Grid Controller (SGC) application, which is developed using open-source libraries and languages. The developed application can be integrated with the Supervisory Control and Data Acquisition (SCADA) system of any EDS to create the DT. The paper shows the performance of the developed tools inside the application, tested on a real EDS for grid observability, Smart Recursive Load Flow (SRLF) calculation, and state estimation of loads in MV feeders. Keywords: digital twin, distributed energy resources, remote terminal units, supervisory control and data acquisition system, smart recursive load flow
Procedia PDF Downloads 110
17000 Spectroscopic Relation between Open Cluster and Globular Cluster
Authors: Robin Singh, Mayank Nautiyal, Priyank Jain, Vatasta Koul, Vaibhav Sharma
Abstract:
The curiosity to investigate space and its mysteries has always been the main impetus of human interest, ever since living matter first appeared after the "début de l'Univers" (beginning of the Universe). The drive to uncover the secrets of stars and their unusual behaviour has likewise always been an igniter of stellar investigation. Just as humankind lives in civilizations and states, stars live in communities named 'clusters'. Clusters are separated into two types, i.e., open clusters and globular clusters. An open cluster is a group of about a thousand stars that formed from the same giant molecular cloud and, for the most part, contains Population I (very metal-rich) and Population II (mildly metal-rich) stars, whereas a globular cluster is a group of more than thirty thousand stars that orbits a galactic centre and basically contains Population III (extremely metal-poor) stars. The scope of this paper lies in the spectroscopic investigation of globular clusters such as M92 and NGC419 and open clusters such as M34 and IC2391 in different colour bands, using software such as the VIREO virtual observatory, Aladin, CMUNIWIN, and MS-Excel. The resulting Hertzsprung-Russell (HR) diagram is assessed against classic cosmological models such as the Einstein model, the De Sitter model, and the Planck survey for a better age estimation of the respective clusters. The colour-magnitude diagram of these clusters was obtained by photometric analysis in the g and r bands, which was further transformed into BV bands to unravel the nature of the stars present in the individual clusters. Keywords: color magnitude diagram, globular clusters, open clusters, Einstein model
Procedia PDF Downloads 226
16999 Discovering the Dimension of Abstractness: Structure-Based Model that Learns New Categories and Categorizes on Different Levels of Abstraction
Authors: Georgi I. Petkov, Ivan I. Vankov, Yolina A. Petrova
Abstract:
A structure-based model of category learning and categorization at different levels of abstraction is presented. The model compares different structures and expresses their similarity implicitly in the form of mappings. Based on this similarity, the model either categorizes different targets as members of categories it already has or creates new categories. The model is novel in using two threshold parameters to evaluate the structural correspondence. If the similarity between two structures exceeds the higher threshold, a new subordinate category is created. Conversely, if the similarity does not exceed the higher threshold but does exceed the lower one, the model creates a new category on a higher level of abstraction. Keywords: analogy-making, categorization, learning of categories, abstraction, hierarchical structure
Procedia PDF Downloads 191
16998 A Mean–Variance–Skewness Portfolio Optimization Model
Authors: Kostas Metaxiotis
Abstract:
Portfolio optimization is one of the most important topics in finance. This paper proposes a mean–variance–skewness (MVS) portfolio optimization model. Traditionally, the portfolio optimization problem is solved using the mean–variance (MV) framework. In this study, we formulate the proposed model as a three-objective optimization problem, where the portfolio's expected return and skewness are maximized while the portfolio risk is minimized. To solve the proposed three-objective portfolio optimization model, we apply an adapted version of the non-dominated sorting genetic algorithm (NSGA-II). Finally, we use a real dataset from the FTSE-100 to validate the proposed model. Keywords: evolutionary algorithms, portfolio optimization, skewness, stock selection
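The three objectives can be written directly from a weight vector and a matrix of historical returns: maximize the portfolio mean and skewness, minimize its variance. The sketch below only evaluates those objectives for one candidate portfolio on toy data; the NSGA-II search loop used in the paper is not reproduced, and the data are random placeholders.

```python
# Objective functions of a mean-variance-skewness (MVS) portfolio, evaluated
# for one candidate weight vector. A full NSGA-II run (as in the paper) would
# repeatedly evaluate these for a population of weight vectors.
import numpy as np

def mvs_objectives(weights, returns):
    """returns: (T x N) matrix of historical asset returns, weights: (N,)."""
    w = np.asarray(weights, dtype=float)
    port = returns @ w                                 # portfolio return series
    mean = port.mean()                                 # to be maximized
    var = port.var(ddof=1)                             # to be minimized
    skew = np.mean((port - mean) ** 3) / port.std(ddof=1) ** 3  # to be maximized
    return mean, var, skew

rng = np.random.default_rng(0)
rets = rng.normal(0.0005, 0.01, size=(250, 4))         # toy data, 4 assets
w = np.array([0.25, 0.25, 0.25, 0.25])                 # equally weighted portfolio
print(mvs_objectives(w, rets))
```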
Procedia PDF Downloads 198
16997 Developing a Total Quality Management Model Using Structural Equation Modeling for Indonesian Healthcare Industry
Authors: Jonny, T. Yuri M. Zagloel
Abstract:
This paper presents an Indonesian healthcare model. Currently, there are nine TQM (Total Quality Management) practices in the healthcare industry. However, these practices have not yet been integrated. Therefore, this paper aims to integrate these practices into a single model by using Structural Equation Modeling (SEM). After administering about 210 questionnaires to various stakeholders in this industry, the LISREL program was used to evaluate the model's fitness. The result confirmed that the model fits, since the p-value was about 0.45, above the required 0.05. This signifies that the previously mentioned nine TQM practices can be integrated into an Indonesian healthcare model. Keywords: healthcare, total quality management (TQM), structural equation modeling (SEM), linear structural relations (LISREL)
Procedia PDF Downloads 292
16996 Boosting Profits and Enhancement of Environment through Adsorption of Methane during Upstream Processes
Authors: Sudipt Agarwal, Siddharth Verma, S. M. Iqbal, Hitik Kalra
Abstract:
Natural gas as a fuel has created wonders, but the ill effects of methane have been a great worry for professionals. Among all industries, the largest source of methane emissions is the oil and gas industry. Methane contaminates groundwater and, being a greenhouse gas, has devastating effects on the atmosphere as well. Methane remains in the atmosphere for a decade or two and later breaks down into carbon dioxide, damaging it immensely: over those two decades it warms the atmosphere about 72 times more than carbon dioxide, and it continues to do harm after breaking down into carbon dioxide. The property of a fluid to adhere to the surface of a solid, better known as adsorption, can be a great boon in minimizing the harm caused by methane. Adsorbing methane during upstream processes can prevent groundwater and atmospheric degradation around the site, which in turn protects profits that would otherwise be reduced by environmental degradation leading to project cancellation. The paper deals with the reasons why casing and cementing are not able to prevent leakage and suggests methods to adsorb methane during upstream processes, with a mathematical explanation using volumetric analysis of methane adsorption on the surface of activated carbon doped with copper oxides (which increases the adsorption by 54%). The paper explains in detail, through a cost estimation, how the proposed idea can be hugely beneficial not only to the environment but also to the profits earned. Keywords: adsorption, casing, cementing, cost estimation, volumetric analysis
Procedia PDF Downloads 191
16995 Sparse Unmixing of Hyperspectral Data by Exploiting Joint-Sparsity and Rank-Deficiency
Authors: Fanqiang Kong, Chending Bian
Abstract:
In this work, we exploit two assumed properties of the abundances of the observed signatures (endmembers) in order to reconstruct the abundances from hyperspectral data. Joint-sparsity is the first property of the abundances, which assumes the adjacent pixels can be expressed as different linear combinations of same materials. The second property is rank-deficiency where the number of endmembers participating in hyperspectral data is very small compared with the dimensionality of spectral library, which means that the abundances matrix of the endmembers is a low-rank matrix. These assumptions lead to an optimization problem for the sparse unmixing model that requires minimizing a combined l2,p-norm and nuclear norm. We propose a variable splitting and augmented Lagrangian algorithm to solve the optimization problem. Experimental evaluation carried out on synthetic and real hyperspectral data shows that the proposed method outperforms the state-of-the-art algorithms with a better spectral unmixing accuracy.Keywords: hyperspectral unmixing, joint-sparse, low-rank representation, abundance estimation
Procedia PDF Downloads 261
16994 On the Fourth-Order Hybrid Beta Polynomial Kernels in Kernel Density Estimation
Authors: Benson Ade Eniola Afere
Abstract:
This paper introduces a family of fourth-order hybrid beta polynomial kernels developed for statistical analysis. The assessment of these kernels' performance centers on two critical metrics: asymptotic mean integrated squared error (AMISE) and kernel efficiency. Through the utilization of both simulated and real-world datasets, a comprehensive evaluation was conducted, facilitating a thorough comparison with conventional fourth-order polynomial kernels. The evaluation procedure encompassed the computation of AMISE and efficiency values for both the proposed hybrid kernels and the established classical kernels. The consistently observed trend was the superior performance of the hybrid kernels when compared to their classical counterparts. This trend persisted across diverse datasets, underscoring the resilience and efficacy of the hybrid approach. By leveraging these performance metrics and conducting evaluations on both simulated and real-world data, this study furnishes compelling evidence in favour of the superiority of the proposed hybrid beta polynomial kernels. The discernible enhancement in performance, as indicated by lower AMISE values and higher efficiency scores, strongly suggests that the proposed kernels offer heightened suitability for statistical analysis tasks when compared to traditional kernels.Keywords: AMISE, efficiency, fourth-order Kernels, hybrid Kernels, Kernel density estimation
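The paper's hybrid beta polynomial kernels are not reproduced here. As a stand-in, the sketch below uses the standard fourth-order Gaussian kernel, K4(u) = 0.5·(3 − u²)·φ(u), whose second moment vanishes — the defining property of the fourth-order kernels whose AMISE the paper studies. The bandwidth and data are illustrative assumptions.

```python
# Kernel density estimation with a generic fourth-order kernel. This is not
# the paper's hybrid beta kernel; the fourth-order Gaussian kernel merely
# illustrates the class of bias-reducing kernels being evaluated.
import numpy as np

def k4_gaussian(u):
    """Fourth-order Gaussian kernel: integrates to 1, zero second moment."""
    return 0.5 * (3.0 - u ** 2) * np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)

def kde(x_grid, data, h):
    """f_hat(x) = (1 / (n*h)) * sum_i K4((x - X_i) / h)."""
    u = (x_grid[:, None] - data[None, :]) / h
    return k4_gaussian(u).sum(axis=1) / (len(data) * h)

rng = np.random.default_rng(1)
sample = rng.normal(0.0, 1.0, 500)          # simulated dataset
grid = np.linspace(-4, 4, 9)
print(kde(grid, sample, h=0.5))             # density estimate on a coarse grid
```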
Procedia PDF Downloads 70
16993 Dynamic Measurement System Modeling with Machine Learning Algorithms
Authors: Changqiao Wu, Guoqing Ding, Xin Chen
Abstract:
In this paper, ways of modeling dynamic measurement systems are discussed. Specifically, a linear single-input single-output system can be modeled with a shallow neural network, and gradient-based optimization algorithms are then used to search for the proper coefficients. In addition, methods based on the normal equation and on second-order gradient descent are proposed to accelerate the modeling process, and ways of obtaining better gradient estimates are discussed. It is shown that the mathematical essence of the learning objective is maximum likelihood under Gaussian noise. For conventional gradient descent, mini-batch learning and gradient with momentum contribute to faster convergence and enhance model capability. Lastly, experimental results prove the effectiveness of the second-order gradient descent algorithm and indicate that optimization with the normal equation is the most suitable for linear dynamic models. Keywords: dynamic system modeling, neural network, normal equation, second order gradient descent
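For a linear SISO system, the normal-equation route amounts to ordinary least squares on lagged inputs and outputs (an ARX-style regression). The sketch below shows only that route under assumed lag orders and a toy data-generating system; it is not the authors' network or experimental setup.

```python
# Fitting a linear single-input single-output dynamic model by the normal
# equation, i.e. least squares on lagged outputs and inputs (ARX-style).
# Lag orders and the toy system below are assumptions for illustration.
import numpy as np

def build_regressors(u, y, na=2, nb=2):
    """Rows: [y[k-1], ..., y[k-na], u[k-1], ..., u[k-nb]] for each sample k."""
    start = max(na, nb)
    rows, targets = [], []
    for k in range(start, len(y)):
        rows.append(np.r_[y[k - na:k][::-1], u[k - nb:k][::-1]])
        targets.append(y[k])
    return np.array(rows), np.array(targets)

rng = np.random.default_rng(2)
u = rng.normal(size=400)
y = np.zeros(400)
for k in range(2, 400):                      # true system generating the data
    y[k] = 1.2 * y[k-1] - 0.5 * y[k-2] + 0.8 * u[k-1] + 0.01 * rng.normal()

X, t = build_regressors(u, y)
theta = np.linalg.solve(X.T @ X, X.T @ t)    # normal equation: (X'X) theta = X't
print(theta)                                 # close to [1.2, -0.5, 0.8, ~0]
```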
Procedia PDF Downloads 127
16992 Kinetic Studies on CO₂ Gasification of Low and High Ash Indian Coals in Context of Underground Coal Gasification
Authors: Geeta Kumari, Prabu Vairakannu
Abstract:
Underground coal gasification (UCG) technology is an efficient and economic in-situ clean coal technology that converts unmineable coals into gases of calorific value. This technology avoids ash disposal, coal mining, and storage problems. CO₂ can be a potential gasifying medium for UCG. CO₂ is a greenhouse gas, and the release of this gas into the atmosphere from thermal power plant industries leads to global warming. Hence, the capture and reutilization of CO₂ gas are crucial for clean energy production. However, the reactivity of high ash Indian coals with CO₂ needs to be assessed. In the present study, two varieties of Indian coals (low ash and high ash) are used for thermogravimetric analyses (TGA). Two low ash north-east Indian coals (LAC) and a typical high ash Indian coal (HAC) are procured from the coal mines of India. Low ash coal with 9% ash (LAC-1) and 4% ash (LAC-2) and high ash coal (HAC) with 42% ash are used for the study. TGA studies are carried out to evaluate the activation energy for pyrolysis and gasification of coal under N₂ and CO₂ atmospheres. The Coats and Redfern method is used to estimate the activation energy of coal in different temperature regimes, assuming a volumetric model for the estimation. The inherent properties of coals play a major role in their reactivity. The results show that the activation energy decreases with a decrease in the inherent percentage of coal ash, owing to the ash layer hindrance. A reverse trend is observed with volatile matter: a high volatile matter content leads to a low estimated activation energy. It is observed that the activation energy under the CO₂ atmosphere at 400-600°C is lower than under the inert N₂ atmosphere; in this temperature range, a 15-23% reduction in the activation energy is estimated under the CO₂ atmosphere. This shows the reactivity of CO₂ gas with the higher hydrocarbons of the coal volatile matter. The reactivity of CO₂ with the volatile matter of coal might occur through the dry reforming reaction, in which CO₂ reacts with the higher hydrocarbons and aromatics of the tar content. The observed trend of Ea in the temperature ranges of 150-200˚C and 400-600˚C is HAC > LAC-1 > LAC-2 in both N₂ and CO₂ atmospheres. In the temperature range of 850-1000˚C, higher activation energies are estimated than in the range of 400-600°C. Above 800°C, char gasification through the Boudouard reaction progresses under the CO₂ atmosphere, and the activation energy during char gasification above 800°C is estimated to be 8-20 kJ/mol higher than for volatile matter pyrolysis between 400-600°C. The overall activation energy of the coals in the temperature range of 30-1000˚C is higher in the N₂ atmosphere than in the CO₂ atmosphere. It can be concluded that higher hydrocarbons such as tar effectively undergo cracking and reforming reactions in the presence of CO₂. Thus, CO₂ gas is beneficial for the production of high calorific value syngas from high ash Indian coals. Keywords: clean coal technology, CO₂ gasification, activation energy, underground coal gasification
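The Coats-Redfern method used above linearizes ln(g(α)/T²) against 1/T, so the slope of the fitted line gives −Ea/R. The sketch below assumes a first-order (volumetric) reaction model, g(α) = −ln(1 − α), and uses invented temperature-conversion pairs; the study's own TGA data are not reproduced.

```python
# Coats-Redfern estimation of activation energy from TGA conversion data.
# For a first-order (volumetric) model, g(alpha) = -ln(1 - alpha) and
#   ln(g(alpha) / T^2) = ln(A*R / (beta*Ea)) - Ea / (R*T),
# so a straight-line fit of ln(g/T^2) versus 1/T yields Ea from the slope.
# The temperature/conversion values below are illustrative, not the paper's.
import numpy as np

R = 8.314  # J / (mol K)

T = np.array([673.0, 698.0, 723.0, 748.0, 773.0, 798.0, 823.0, 848.0, 873.0])
alpha = np.array([0.05, 0.09, 0.15, 0.23, 0.33, 0.45, 0.57, 0.69, 0.80])

g = -np.log(1.0 - alpha)                     # first-order reaction model
y = np.log(g / T ** 2)
x = 1.0 / T

slope, intercept = np.polyfit(x, y, 1)       # least-squares straight line
Ea = -slope * R                              # activation energy, J/mol
print(f"Ea = {Ea / 1000:.1f} kJ/mol")
```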
Procedia PDF Downloads 171
16991 A Research on Flipped-Classroom Teaching Model in English for Academic Purpose Teaching
Authors: Li Shuang
Abstract:
With rigid teaching procedures and limited academic performance assessment methods, traditional teaching model stands in the way of college English reform in China, which features EAP (English for Academic Purpose) teaching. Flipped-classroom teaching, which has been extensively applied to science subjects teaching, however, covers the shortage of traditional teaching model in EAP teaching, via creatively inverting traditional teaching procedures. Besides, the application of flipped-classroom teaching model in EAP teaching also proves that this new teaching philosophy is not confined to science subjects teaching; it goes perfectly well with liberal-arts subjects teaching. Data analysis, desk research survey, and comparative study are referred to in the essay so as to prove its feasibility and advantages in EAP teaching.Keywords: EAP, traditional teaching method, flipped-classroom teaching model, teaching model design
Procedia PDF Downloads 311
16990 A Comparative Study of Sampling-Based Uncertainty Propagation with First Order Error Analysis and Percentile-Based Optimization
Authors: M. Gulam Kibria, Shourav Ahmed, Kais Zaman
Abstract:
In system analysis, the information on the uncertain input variables cause uncertainty in the system responses. Different probabilistic approaches for uncertainty representation and propagation in such cases exist in the literature. Different uncertainty representation approaches result in different outputs. Some of the approaches might result in a better estimation of system response than the other approaches. The NASA Langley Multidisciplinary Uncertainty Quantification Challenge (MUQC) has posed challenges about uncertainty quantification. Subproblem A, the uncertainty characterization subproblem, of the challenge posed is addressed in this study. In this subproblem, the challenge is to gather knowledge about unknown model inputs which have inherent aleatory and epistemic uncertainties in them with responses (output) of the given computational model. We use two different methodologies to approach the problem. In the first methodology we use sampling-based uncertainty propagation with first order error analysis. In the other approach we place emphasis on the use of Percentile-Based Optimization (PBO). The NASA Langley MUQC’s subproblem A is developed in such a way that both aleatory and epistemic uncertainties need to be managed. The challenge problem classifies each uncertain parameter as belonging to one the following three types: (i) An aleatory uncertainty modeled as a random variable. It has a fixed functional form and known coefficients. This uncertainty cannot be reduced. (ii) An epistemic uncertainty modeled as a fixed but poorly known physical quantity that lies within a given interval. This uncertainty is reducible. (iii) A parameter might be aleatory but sufficient data might not be available to adequately model it as a single random variable. For example, the parameters of a normal variable, e.g., the mean and standard deviation, might not be precisely known but could be assumed to lie within some intervals. It results in a distributional p-box having the physical parameter with an aleatory uncertainty, but the parameters prescribing its mathematical model are subjected to epistemic uncertainties. Each of the parameters of the random variable is an unknown element of a known interval. This uncertainty is reducible. From the study, it is observed that due to practical limitations or computational expense, the sampling is not exhaustive in sampling-based methodology. That is why the sampling-based methodology has high probability of underestimating the output bounds. Therefore, an optimization-based strategy to convert uncertainty described by interval data into a probabilistic framework is necessary. This is achieved in this study by using PBO.Keywords: aleatory uncertainty, epistemic uncertainty, first order error analysis, uncertainty quantification, percentile-based optimization
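The first methodology pairs sampling with first-order error analysis, i.e. the delta-method approximation Var(y) ≈ Σᵢ (∂f/∂xᵢ)² Var(xᵢ). The sketch below compares that approximation with plain Monte Carlo propagation on a hypothetical two-input response function; the function and the input distributions are assumptions, not the NASA Langley challenge model, and the percentile-based optimization step is not shown.

```python
# First-order (delta-method) error propagation versus Monte Carlo sampling
# for a generic response y = f(x0, x1). The response function and the input
# distributions are hypothetical stand-ins for the challenge model.
import numpy as np

def f(x0, x1):                               # hypothetical system response
    return x0 ** 2 + np.sin(x1) + 0.5 * x0 * x1

mu = np.array([1.0, 0.5])                    # input means
sigma = np.array([0.1, 0.2])                 # input standard deviations

# First-order analysis: Var(y) ~= sum_i (df/dx_i)^2 * Var(x_i)
dfdx0 = 2 * mu[0] + 0.5 * mu[1]
dfdx1 = np.cos(mu[1]) + 0.5 * mu[0]
var_first_order = dfdx0 ** 2 * sigma[0] ** 2 + dfdx1 ** 2 * sigma[1] ** 2

# Sampling-based propagation of the same input uncertainty
rng = np.random.default_rng(3)
x = rng.normal(mu, sigma, size=(50_000, 2))
var_sampling = f(x[:, 0], x[:, 1]).var()

print(var_first_order, var_sampling)         # the two estimates should agree
```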
Procedia PDF Downloads 240
16989 Application of Data Driven Based Models as Early Warning Tools of High Stream Flow Events and Floods
Authors: Mohammed Seyam, Faridah Othman, Ahmed El-Shafie
Abstract:
The early warning of high stream flow (HSF) events and floods is an important aspect of the management of surface water and river systems. This process can be performed using either process-based models or data-driven models such as artificial intelligence (AI) techniques. The main goal of this study is to develop an efficient AI-based model for predicting the real-time hourly stream flow (Q) and to apply it as an early warning tool for HSF events and floods in the downstream area of the Selangor River basin, taken here as a paradigm of humid tropical rivers in Southeast Asia. The performance of the AI-based models has been improved through the integration of lag time (Lt) estimation in the modelling process. A total of 8753 patterns of hourly Q, water level, and rainfall records representing a one-year period (2011) were utilized in the modelling process. Six hydrological scenarios were arranged through hypothetical cases of the input variables to investigate how changes in rainfall (RF) intensity at upstream stations can lead to the formation of floods. The initial stream flow was changed for each scenario in order to include a wide range of hydrological situations in this study. The performance evaluation of the developed AI-based model shows that a high correlation coefficient (R) between the observed and predicted Q is achieved. The AI-based model has been successfully employed for early warning through the advance detection of the hydrological conditions that could lead to the formation of floods and HSF events, represented by three levels of severity (i.e., alert, warning, and danger). Based on the results of the scenarios, reaching the danger level in the downstream area required high RF intensity in at least two upstream areas. From the results of these applications, it can be concluded that AI-based models are beneficial tools for local authorities for flood control and awareness. Keywords: floods, stream flow, hydrological modelling, hydrology, artificial intelligence
Procedia PDF Downloads 248
16988 An Approach for Detection Efficiency Determination of High Purity Germanium Detector Using Cesium-137
Authors: Abdulsalam M. Alhawsawi
Abstract:
Estimation of a radiation detector's efficiency plays a significant role in calculating the activity of radioactive samples. Detector efficiency is measured using sources that emit a variety of energies from low to high-energy photons along the energy spectrum. Some photon energies are hard to find in lab settings either because check sources are hard to obtain or the sources have short half-lives. This work aims to develop a method to determine the efficiency of a High Purity Germanium Detector (HPGe) based on the 662 keV gamma ray photon emitted from Cs-137. Cesium-137 is readily available in most labs with radiation detection and health physics applications and has a long half-life of ~30 years. Several photon efficiencies were calculated using the MCNP5 simulation code. The simulated efficiency of the 662 keV photon was used as a base to calculate other photon efficiencies in a point source and a Marinelli Beaker form. In the Marinelli Beaker filled with water case, the efficiency of the 59 keV low energy photons from Am-241 was estimated with a 9% error compared to the MCNP5 simulated efficiency. The 1.17 and 1.33 MeV high energy photons emitted by Co-60 had errors of 4% and 5%, respectively. The estimated errors are considered acceptable in calculating the activity of unknown samples as they fall within the 95% confidence level.Keywords: MCNP5, MonteCarlo simulations, efficiency calculation, absolute efficiency, activity estimation, Cs-137
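The core of the approach is to anchor a simulated efficiency curve to the single efficiency measured at the 662 keV line of Cs-137, so that efficiencies at other energies follow by ratio. The sketch below only illustrates that scaling step with placeholder numbers; the real efficiency values come from MCNP5 and the calibrated HPGe detector, not from this example.

```python
# Scaling simulated full-energy-peak efficiencies to one measured reference
# point (the 662 keV line of Cs-137). The numeric efficiencies below are
# placeholders; in the paper they come from MCNP5 and the calibrated HPGe.

# Simulated (e.g. MCNP5) efficiencies for a given source geometry
simulated = {59: 0.042, 662: 0.012, 1173: 0.0075, 1332: 0.0068}

# Single measured efficiency at the Cs-137 reference energy
measured_662 = 0.011

scale = measured_662 / simulated[662]
estimated = {E: eff * scale for E, eff in simulated.items()}

for E, eff in estimated.items():
    print(f"{E:>5} keV : estimated efficiency {eff:.4f}")

# Activity of an unknown sample then follows from
#   A = net_counts / (efficiency * emission_yield * live_time)
```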
Procedia PDF Downloads 116
16987 Analytical Model to Predict the Shear Capacity of Reinforced Concrete Beams Externally Strengthened with CFRP Composites Conditions
Authors: Rajai Al-Rousan
Abstract:
This paper presents a proposed analytical model for predicting the shear strength of reinforced concrete beams strengthened with CFRP composites as external reinforcement. The proposed analytical model can predict the shear contribution of CFRP composites in RC beams with an acceptable coefficient of correlation with the tested results. In the comparison of the proposed model with well-known published models (the ACI, Triantafillou, and Colotti models), the ACI model had the widest range, 0.16 to 10.08, for the ratio between tested and predicted ultimate shears at failure, while the Triantafillou model gave an acceptable range of 0.27 to 2.78 for this ratio. Finally, the best prediction of the ultimate shear capacity (in terms of the tested-to-predicted ratio) is observed with the Colotti model, with a range of 0.20 to 1.78. Thus, the contribution of CFRP composites as external reinforcement can be predicted with high accuracy by using the proposed analytical model. Keywords: predicting, shear capacity, reinforced concrete, beams, strengthened, externally, CFRP composites
Procedia PDF Downloads 229
16986 Effect of Drying on the Concrete Structures
Authors: A. Brahma
Abstract:
The drying of hydraulic materials is unavoidable and leads to significant spontaneous deformations. In this study, we show that it is possible to describe the drying shrinkage of high-performance concrete by a simple expression. A multiple regression model was developed for the prediction of the drying shrinkage of high-performance concrete, and the proposed model was assessed by a set of statistical tests. The developed model takes into consideration the main parameters of mix design and curing. There was very good agreement between the drying shrinkage predicted by the multiple regression model and the experimental results. The developed model adjusts easily to all hydraulic concrete types. Keywords: hydraulic concretes, drying, shrinkage, prediction, modeling
Procedia PDF Downloads 368
16985 Parametric Study of Vertical Diffusion Stills for Water Desalination
Authors: A. Seleem, M. Mortada, M. El-Morsi, M. Younan
Abstract:
Diffusion stills have been effective in water desalination. The present work represents a model of the distillation process by using vertical single-effect diffusion stills. A semi-analytical model has been developed to model the process. A software computer code using Engineering Equation Solver EES software has been developed to solve the equations of the developed model. An experimental setup has been constructed, and used for the validation of the model. The model is also validated against former literature results. The results obtained from the present experimental test rig, and the data from the literature, have been compared with the results of the code to find its best range of validity. In addition, a parametric analysis of the system has been developed using the model to determine the effect of operating conditions on the system's performance. The dominant parameters that affect the productivity of the still are the hot plate temperature that ranges from (55-90 °C) and feed flow rate in range of (0.00694-0.0211 kg/m2-s).Keywords: analytical model, solar distillation, sustainable water systems, vertical diffusion still
Procedia PDF Downloads 405
16984 2D and 3D Unsteady Simulation of the Heat Transfer in the Sample during Heat Treatment by Moving Heat Source
Authors: Zdeněk Veselý, Milan Honner, Jiří Mach
Abstract:
The aim of the work is to establish 2D and 3D models of the direct unsteady problem of sample heat treatment by a moving heat source, employing a computer model based on the finite element method. The complex boundary condition on the heat-loaded sample surface is the essential feature of the problem. The computer model describes the heat treatment of the sample as the heat source moves over the sample surface. The starting point is the 2D problem of the sample cross section as a basic model, and possibilities for extending it from a 2D to a 3D problem are discussed. The effect of adding the third model dimension on the temperature distribution in the sample is shown, and the influence of various model parameters on the sample temperatures is compared. The influence of the heat source motion on the depth of material heat treatment is shown for several velocities of movement. The presented computer model is prepared for use in the laser treatment of machine parts. Keywords: computer simulation, unsteady model, heat treatment, complex boundary condition, moving heat source
Procedia PDF Downloads 393
16983 Research of the Three-Dimensional Visualization Geological Modeling of Mine Based on Surpac
Authors: Honggang Qu, Yong Xu, Rongmei Liu, Zhenji Gao, Bin Wang
Abstract:
Today's mining industry is advancing gradually in a digital and visual direction. Three-dimensional visualization geological modeling of a mine is the digital characterization of its mineral deposits and is one of the key technologies of digital mining. Three-dimensional geological modeling is a technology that combines geological spatial information management, geological interpretation, geological spatial analysis and prediction, geostatistical analysis, entity content analysis, and graphic visualization in a three-dimensional environment with computer technology, and it is used in geological analysis. In this paper, a three-dimensional geological model of an iron mine is constructed using Surpac, the difference in weighting between two estimation methods, the inverse distance power method and ordinary kriging, is studied, and the ore body volume and reserves are simulated and calculated using both methods. Compared with the actual mine reserves, the results are relatively accurate, so they provide a scientific basis for mine resource assessment, reserve calculation, mining design, and so on. Keywords: three-dimensional geological modeling, geological database, geostatistics, block model
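The inverse distance power estimator used in the comparison assigns each block a grade from nearby drill-hole composites weighted by 1/d^p. The sketch below shows that weighting for a single block centroid with invented coordinates and grades; ordinary kriging, the other method compared in the paper, would instead derive the weights from a fitted variogram.

```python
# Inverse distance weighting (the "distance power inverse ratio" estimator)
# for one block centroid; sample coordinates and grades are invented.
import numpy as np

def idw_grade(block_xyz, sample_xyz, sample_grade, power=2.0):
    d = np.linalg.norm(sample_xyz - block_xyz, axis=1)
    if np.any(d < 1e-9):                      # a sample sits on the centroid
        return float(sample_grade[np.argmin(d)])
    w = 1.0 / d ** power
    return float(np.sum(w * sample_grade) / np.sum(w))

samples = np.array([[10., 20., -5.], [14., 22., -8.], [8., 25., -6.]])
grades = np.array([32.5, 35.1, 30.8])        # e.g. % Fe in drill-hole composites
block = np.array([11., 23., -6.])

print(idw_grade(block, samples, grades))     # estimated block grade
```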
Procedia PDF Downloads 77
16982 Early Warning for Financial Stress Events: A Credit-Regime Switching Approach
Abstract:
We propose a new early warning model for predicting financial stress events at a given future time. In this model, we examine whether credit conditions play an important role as a nonlinear propagator of shocks when predicting the likelihood of occurrence of financial stress events at a given future time. This propagation takes the form of a threshold regression in which a regime change occurs if credit conditions cross a critical threshold. Given the new early warning model for financial stress events, we evaluate the performance of this model and of currently available alternatives, such as the model from the signal extraction approach and the linear regression model. In-sample forecasting results indicate that all three types of models are useful tools for predicting financial stress events, while none of them outperforms the others across all criteria considered. The out-of-sample forecasting results suggest that the credit-regime switching model performs better than the other two across all criteria and all forecasting horizons considered. Keywords: cut-off probability, early warning model, financial crisis, financial stress, regime-switching model, forecasting horizons
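The threshold-regression idea can be illustrated by fitting separate regressions on either side of a candidate threshold in a credit-conditions index and choosing the threshold that minimizes the total sum of squared errors. The sketch below is a toy version of that estimation on simulated data; it is not the paper's specification, estimator, or dataset.

```python
# Toy threshold (regime-switching) regression: the response loads on the
# predictor with different coefficients depending on whether a credit index
# is above or below a threshold. Data and the threshold grid are simulated.
import numpy as np

rng = np.random.default_rng(4)
n = 400
credit = rng.normal(size=n)                  # credit-conditions index
x = rng.normal(size=n)                       # other predictor
y = np.where(credit > 0.5, 1.5 * x, 0.2 * x) + 0.3 * rng.normal(size=n)

def sse_at_threshold(tau):
    """Fit separate OLS lines in the two regimes and return total SSE."""
    sse = 0.0
    for mask in (credit <= tau, credit > tau):
        X = np.c_[np.ones(mask.sum()), x[mask]]
        beta, *_ = np.linalg.lstsq(X, y[mask], rcond=None)
        sse += np.sum((y[mask] - X @ beta) ** 2)
    return sse

grid = np.quantile(credit, np.linspace(0.15, 0.85, 50))
tau_hat = grid[np.argmin([sse_at_threshold(t) for t in grid])]
print(f"estimated credit threshold: {tau_hat:.2f}")   # close to the true 0.5
```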
Procedia PDF Downloads 435
16981 Model of the Increasing the Capacity of the Train and Railway Track by Using the New Type of Wagon
Authors: Martin Kendra, Jaroslav Mašek, Juraj Čamaj, Martin Búda
Abstract:
The paper deals with the possibilities of increasing train capacity by using a new type of railway wagon. In the first part, a mathematical model is created to calculate the capacity of the train. The model is based on the main limiting parameters of the train: the maximum number of axles per train, the maximum gross weight of the train, the maximum length of the train, and the number of TEUs per wagon. In the second part, the model is applied to four different model trains with different compositions of the train set and three different average TEU weights, as well as to a train consisting of the new type of wagons. The result identifies where the carrying capacity of the original trains is higher or lower than the capacity of the train consisting of the new type of wagons. Keywords: loading units, theoretical capacity model, train capacity, wagon for intermodal transport
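A capacity model of this kind reduces to taking the minimum number of wagons allowed by each limiting constraint (axles, gross weight, length) and multiplying by the TEUs per wagon. The sketch below shows that calculation with assumed wagon parameters and limits, not the paper's actual figures.

```python
# Train capacity as the minimum over the limiting constraints described in
# the abstract: maximum axles, maximum gross weight and maximum train length.
# Wagon parameters and limits below are assumed, not taken from the paper.
import math

def teu_capacity(max_axles, max_gross_t, max_length_m,
                 axles_per_wagon, wagon_tare_t, wagon_length_m,
                 teu_per_wagon, teu_weight_t):
    by_axles = max_axles // axles_per_wagon
    by_length = math.floor(max_length_m / wagon_length_m)
    by_weight = math.floor(max_gross_t /
                           (wagon_tare_t + teu_per_wagon * teu_weight_t))
    wagons = min(by_axles, by_length, by_weight)
    return wagons, wagons * teu_per_wagon

wagons, teus = teu_capacity(max_axles=120, max_gross_t=2000, max_length_m=600,
                            axles_per_wagon=4, wagon_tare_t=20,
                            wagon_length_m=20, teu_per_wagon=3, teu_weight_t=14)
print(wagons, "wagons,", teus, "TEU")
```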
Procedia PDF Downloads 497
16980 Assessing the Impact of Climate Change on Pulses Production in Khyber Pakhtunkhwa, Pakistan
Authors: Khuram Nawaz Sadozai, Rizwan Ahmad, Munawar Raza Kazmi, Awais Habib
Abstract:
Climate change and crop production are intrinsically associated with each other. Therefore, this research study is designed to assess the impact of climate change on pulses production in the southern districts of Khyber Pakhtunkhwa (KP) Province of Pakistan. Two pulses (i.e., chickpea and mung bean) were selected for this research study with respect to climate change. Climatic variables such as temperature, humidity, and precipitation, along with pulses production and the area under pulses cultivation, were the major variables of this study. Secondary data on the climatic and crop variables for a period of thirty-four years (1986-2020) were obtained from the Pakistan Meteorological Department and the Agriculture Statistics of KP, respectively. Panel data sets for the chickpea and mung bean crops were estimated separately, and the analysis validated that both data sets were balanced panels. The Hausman specification test was run separately for both panel data sets; its findings suggested that the fixed effect model was appropriate for the chickpea panel data, whereas the random effect model was appropriate for estimating the mung bean panel data. Major findings confirm that maximum temperature is statistically significant for chickpea yield: if maximum temperature increases by 1 °C, it can enhance chickpea yield by 0.0463 units. However, the impact of precipitation was reported as insignificant. Furthermore, humidity was statistically significant and has a positive association with chickpea yield. In the case of mung bean, minimum temperature contributed significantly to yield. This study concludes that temperature and humidity can significantly contribute to enhancing pulses yield. It is recommended that capacity building of pulses growers be undertaken so that they can adopt climate change adaptation strategies. Moreover, the government may ensure the availability of climate-change-resistant pulse varieties to encourage pulses cultivation. Keywords: climate change, pulses productivity, agriculture, Pakistan
Procedia PDF Downloads 44
16979 Financial Inclusion and Modernization: Secure Energy Performance in Shanghai Cooperation Organization
Authors: Shama Urooj
Abstract:
The present work investigates the relationship among financial inclusion, modernization, and energy performance in SCO member countries during the years 2011–2021. PCA is used to create composite indexes of financial inclusion, modernization, and energy performance. We used panel regression models that are both reliable and heteroscedasticity-consistent to examine the relationship among the variables. The findings indicate that financial inclusion (FI) and modernization, along with increased FDI, all appear to contribute to energy performance in the SCO member countries. However, per capita GDP has a negative impact on energy performance. These results are unbiased and consistent with the robust results obtained by applying different econometric models. Feasible Generalized Least Squares (FGLS) estimation is also used to check the uniformity of the main model results. This research work concludes that there has been no policy coherence in SCO member countries regarding the coordination of growing financial inclusion and modernization for energy sustainability in recent years. In order to improve energy performance alongside modern development, policies regarding financial inclusion and modernization need to be integrated at both the national and international levels. Keywords: financial inclusion, energy performance, modernization, technological development, SCO
Procedia PDF Downloads 75
16978 Modeling and Simulation Methods Using MATLAB/Simulink
Authors: Jamuna Konda, Umamaheswara Reddy Karumuri, Sriramya Muthugi, Varun Pishati, Ravi Shakya
Abstract:
This paper investigates the challenges involved in the mathematical modeling of plant simulation models, ensuring that the performance of the plant models is much closer to that of the real-time physical model. The paper includes the analysis performed and an investigation of different methods of modeling, design, and development for the plant model. Issues that impact the design time, the model's accuracy as a real-time model, and tool dependence are analyzed. The real-time hardware plant would be a combination of multiple physical models, and it is more challenging to test the complete system with all possible test scenarios; there is a possibility of failure or damage to the system due to unwanted test execution in real time. Keywords: model based design (MBD), MATLAB, Simulink, stateflow, plant model, real time model, real-time workshop (RTW), target language compiler (TLC)
Procedia PDF Downloads 343
16977 Mixture Statistical Modeling for Predicting Mortality in Human Immunodeficiency Virus (HIV) and Tuberculosis (TB) Infection Patients
Authors: Mohd Asrul Affendi Bi Abdullah, Nyi Nyi Naing
Abstract:
The purpose of this study was to compare the negative binomial death rate (NBDR) model and the zero-inflated negative binomial death rate (ZINBDR) model for deceased patients with (HIV+TB+) and (HIV+TB−). HIV and TB are a serious worldwide problem in developing countries. The data were analyzed by applying the NBDR and ZINBDR models in order to determine which model is more favorable. The ZINBDR model is able to account for the disproportionately large number of zeros within the data and is shown to be a consistently better fit than the NBDR model. Hence, the ZINBDR model is a superior fit to the data compared with the NBDR model and provides additional information regarding the mortality mechanisms of HIV+TB. The ZINBDR model is shown to be a useful tool for analyzing death rates according to age category. Keywords: zero inflated negative binomial death rate, HIV and TB, AIC and BIC, death rate
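The comparison of the two count models by AIC/BIC (as the keywords indicate) can be sketched on simulated zero-inflated data; the HIV+TB mortality data are not reproduced here, and the example assumes the count-model classes available in statsmodels (version 0.10 or later) rather than the authors' own software.

```python
# Comparing a negative binomial model with a zero-inflated negative binomial
# model by AIC/BIC on simulated count data with excess zeros.
# Assumes statsmodels >= 0.10; the real analysis used HIV+TB mortality data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 1000
age_group = rng.integers(0, 4, n)                      # age category 0..3
X = sm.add_constant(age_group.astype(float))

mu = np.exp(0.3 + 0.4 * age_group)                     # count mean by age
deaths = rng.negative_binomial(n=2, p=2 / (2 + mu))    # NB-distributed counts
deaths[rng.random(n) < 0.30] = 0                       # extra (structural) zeros

nb = sm.NegativeBinomial(deaths, X).fit(disp=False)
zinb = sm.ZeroInflatedNegativeBinomialP(
    deaths, X, exog_infl=np.ones((n, 1))
).fit(method="bfgs", maxiter=500, disp=False)

print("NB   AIC/BIC:", nb.aic, nb.bic)
print("ZINB AIC/BIC:", zinb.aic, zinb.bic)             # expected to be lower here
```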
Procedia PDF Downloads 432
16976 Model-Based Software Regression Test Suite Reduction
Authors: Shiwei Deng, Yang Bao
Abstract:
In this paper, we present a model-based regression test suite reducing approach that uses EFSM model dependence analysis and probability-driven greedy algorithm to reduce software regression test suites. The approach automatically identifies the difference between the original model and the modified model as a set of elementary model modifications. The EFSM dependence analysis is performed for each elementary modification to reduce the regression test suite, and then the probability-driven greedy algorithm is adopted to select the minimum set of test cases from the reduced regression test suite that cover all interaction patterns. Our initial experience shows that the approach may significantly reduce the size of regression test suites.Keywords: dependence analysis, EFSM model, greedy algorithm, regression test
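The probability-driven greedy selection step can be viewed as a weighted greedy set cover over interaction patterns: at each step, pick the test case whose weighted coverage of still-uncovered patterns is largest. The sketch below shows only that selection step with invented coverage data; the EFSM dependence analysis that would produce the coverage sets is not reproduced.

```python
# Greedy selection of a small set of test cases covering all interaction
# patterns, picking at each step the test with the best (probability-weighted)
# coverage of still-uncovered patterns. Coverage data here are invented.
def greedy_reduce(coverage, weights):
    """coverage: test -> set of patterns; weights: test -> selection weight."""
    uncovered = set().union(*coverage.values())
    selected = []
    while uncovered:
        best = max(coverage,
                   key=lambda t: len(coverage[t] & uncovered) * weights[t])
        if not coverage[best] & uncovered:
            break                             # remaining patterns are uncoverable
        selected.append(best)
        uncovered -= coverage[best]
    return selected

coverage = {"t1": {"p1", "p2"}, "t2": {"p2", "p3", "p4"},
            "t3": {"p1", "p4"}, "t4": {"p3"}}
weights = {"t1": 0.9, "t2": 0.7, "t3": 0.8, "t4": 0.5}  # e.g. fault-exposing probability
print(greedy_reduce(coverage, weights))       # a small covering subset, e.g. ['t2', 't1']
```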
Procedia PDF Downloads 427
16975 Dynamics of a Susceptible-Infected-Recovered Model along with Time Delay, Modulated Incidence, and Nonlinear Treatment
Authors: Abhishek Kumar, Nilam
Abstract:
As is well known, time delay exists in almost every biological phenomenon. Therefore, in the present study, we propose a susceptible–infected–recovered (SIR) epidemic model with time delay, a modulated incidence rate of infection, and a Holling type II nonlinear treatment rate. The present model aims to provide a strategy for controlling the spread of epidemics. In the mathematical study of the model, it is shown that the model has two equilibria, the disease-free equilibrium (DFE) and the endemic equilibrium (EE). Further, the stability analysis of the model is discussed. To prove the stability of the model at the DFE, we derived the basic reproduction number, denoted by R₀. With the help of R₀, we showed that the model is locally asymptotically stable at the DFE when R₀ is less than unity and unstable when R₀ is greater than unity. Furthermore, the stability analysis of the model at the endemic equilibrium is also discussed. Finally, numerical simulations were carried out using MATLAB 2012b to exemplify the theoretical results. Keywords: time delayed SIR epidemic model, modulated incidence rate, Holling type II nonlinear treatment rate, stability
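A Holling type II treatment term saturates as the number of infectives grows, treatment(I) = aI/(1 + bI). The sketch below numerically integrates an SIR system with such a term; the time delay is omitted for brevity, the incidence is plain bilinear rather than the paper's modulated form, and all parameter values are invented, not the article's.

```python
# SIR dynamics with a Holling type II (saturated) treatment term,
#   treatment(I) = a*I / (1 + b*I),
# integrated numerically. The delay and modulated incidence of the paper are
# omitted; parameter values are illustrative only.
import numpy as np
from scipy.integrate import odeint

def sir_rhs(y, t, beta, mu, gamma, a, b):
    S, I, R = y
    treat = a * I / (1.0 + b * I)            # Holling type II treatment rate
    dS = mu - beta * S * I - mu * S
    dI = beta * S * I - (mu + gamma) * I - treat
    dR = gamma * I + treat - mu * R
    return [dS, dI, dR]

params = dict(beta=0.9, mu=0.02, gamma=0.1, a=0.3, b=5.0)
t = np.linspace(0, 200, 2001)
sol = odeint(sir_rhs, [0.95, 0.05, 0.0], t, args=tuple(params.values()))
print(sol[-1])                               # endemic or disease-free state reached
```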
Procedia PDF Downloads 155
16974 Simulation of Flow through Dam Foundation by FEM and ANN Methods Case Study: Shahid Abbaspour Dam
Authors: Mehrdad Shahrbanozadeh, Gholam Abbas Barani, Saeed Shojaee
Abstract:
In this study, a finite element model (Seep3D) and an artificial neural network (ANN) model were developed to simulate flow through a dam foundation. The Seep3D model is capable of simulating three-dimensional flow through heterogeneous, anisotropic, saturated and unsaturated porous media. Flow through the Shahid Abbaspour dam foundation has been used as a case study. The FEM, with 24960 triangular elements and 28707 nodes, was applied to model flow through the foundation of this dam, with the mesh made denser in the neighborhood of the curtain screen. The ANN model developed for the Shahid Abbaspour dam is a feedforward four-layer network employing the sigmoid function as the activation function and the back-propagation algorithm for network learning. The water level elevations upstream and downstream of the dam were used as input variables and the piezometric heads as the target outputs in the ANN model. The two models were calibrated and verified using the Shahid Abbaspour dam's piezometric data. The results of the models were compared with the values measured by the piezometers and are in good agreement. The model results also revealed that the ANN model performed as well as, and in some cases better than, the FEM. Keywords: seepage, dam foundation, finite element method, neural network, seep 3D model
Procedia PDF Downloads 473