Search results for: insurance estimation
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2229

549 Short-Term versus Long-Term Effect of Waterpipe Smoking Exposure on Cardiovascular Biomarkers in Mice

Authors: Abeer Rababa'h, Ragad Bsoul, Mohammad Alkhatatbeh, Karem Alzoubi

Abstract:

Introduction: Tobacco use is one of the main risk factors for cardiovascular diseases (CVD), and atherosclerosis in particular. Waterpipe smoke (WPS) contains several toxic materials, such as nicotine, carcinogens, tar, carbon monoxide, and heavy metals. Thus, WPS is considered one of the toxic environmental factors that should be investigated intensively. Therefore, the aim of this study was to investigate the effect of WPS on several cardiovascular biological markers that may cause atherosclerosis in mice. The study was also designed to examine the temporal effects of WPS on atherosclerotic biomarkers upon short-term (2 weeks) and long-term (8 weeks) exposure. Methods: Mice were exposed to WPS, and heart homogenates were analyzed to elucidate the effects of WPS on matrix metalloproteinases (MMPs), endothelin-1 (ET-1), and myeloperoxidase (MPO). Following protein estimation, enzyme-linked immunosorbent assays were performed to measure the expression levels of MMPs (isoforms 1, 3, and 9), MPO, and ET-1. Results: Our data showed that acute exposure to WPS significantly enhances the expression levels of MMP-3, MMP-9, and MPO (p < 0.05) compared to their corresponding controls. However, the body was capable of normalizing the expression levels of these parameters following continuous exposure for 8 weeks (p > 0.05). Additionally, we showed that the level of ET-1 expression was significantly higher upon chronic exposure to WPS compared to both the control and acute exposure groups (p < 0.05). Conclusion: Waterpipe exposure has a significant negative effect on atherosclerosis, and the enhanced expression of the atherosclerotic biomarkers (MMP-3 and -9, MPO, and ET-1) might represent an early sign of compensatory efforts to maintain cardiac function after WPS exposure.

Keywords: atherosclerotic biomarkers, cardiovascular disease, matrix metalloproteinase, waterpipe

Procedia PDF Downloads 350
548 Assessment of Forage Utilization for Pasture-Based Livestock Production in Udubo Grazing Reserve, Bauchi State

Authors: Mustapha Saidu, Bilyaminu Mohammed

Abstract:

The study was conducted in Udubo Grazing Reserve between July 2019 and October 2019 to assess forage utilization for pasture-based livestock production in the reserve. The grazing land was divided into grids at one-kilometer intervals, and 15 coordinates were selected as sample points by systematically taking one grid after every seven grids. A 1 × 1 m quadrat was laid at the coordinate of each selected grid for measurement, estimation, and sample collection. The results of the study indicated that Zornia glochidiata had the highest species composition (42%), while Mitracarpus hirtus had the lowest (0.1%). Urochloa mosambicensis had 48 percent of height removed and 27 percent used by weight; Zornia glochidiata, 60 percent of height removed and 57 percent used by weight; Alysicarpus vaginalis, 55 percent of height removed and 40 percent used by weight; and Cenchrus biflorus, 40 percent of height removed and 28 percent used by weight. The target is 50 percent utilization of forage by weight during a grazing period as well as at the end of the grazing season. The study found that Urochloa mosambicensis, Alysicarpus vaginalis, and Cenchrus biflorus had lower utilization by weight, which is normal, while Zornia glochidiata had higher utilization by weight, which is an indication of danger. The study recommends that identifying key plant species in pasture and rangeland is critical to implementing a successful grazing management plan. There should be collective action and promotion of historically generated grazing knowledge through public and private advocacy.
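
The 50-percent-by-weight target described above can be checked mechanically. The figures below are taken from the abstract; the helper function itself is a hypothetical sketch, not the authors' method.

```python
# Illustrative check of forage utilization against the 50%-by-weight target.
TARGET_PCT = 50  # target utilization of forage by weight per grazing period

utilization_by_weight = {
    "Urochloa mosambicensis": 27,
    "Zornia glochidiata": 57,
    "Alysicarpus vaginalis": 40,
    "Cenchrus biflorus": 28,
}

def over_target(data, target=TARGET_PCT):
    """Return species whose percent used by weight exceeds the target."""
    return [sp for sp, pct in data.items() if pct > target]

print(over_target(utilization_by_weight))  # only Zornia glochidiata exceeds 50%
```

Species above the target signal over-utilization (the "indication of danger" noted in the abstract).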

Keywords: forage, grazing reserve, livestock, pasture, plant species

Procedia PDF Downloads 87
547 Community Forest Management and Ecological and Economic Sustainability: A Two-Way Street

Authors: Sony Baral, Harald Vacik

Abstract:

This study analyzes the sustainability of community forest management in two community forests in the Terai and Hills of Nepal, representing four forest types: 1) Shorea robusta, 2) Terai hardwood, 3) Schima-Castanopsis, and 4) other Hill forests. The sustainability goals for this region include maintaining and enhancing the forest stocks. Considering this, we analysed changes in species composition, stand density, growing stock volume, and growth-to-removal ratio at 3-5 year intervals from 2005 to 2016 within 109 permanent forest plots (57 in the Terai and 52 in the Hills). To complement the inventory data, forest users, forest committee members, and forest officials were consulted. The results indicate that the relative representation of economically valuable tree species has increased. Based on trends in stand density, both forests are being sustainably managed. However, pole-sized trees dominated the diameter distribution, with a limited number of mature trees and declining regeneration. In the Hills, the forests were over-harvested until 2013 but under-harvested in the recent period. In contrast, both forest types were under-harvested throughout the inventory period in the Terai. We found that the ecological dimension of sustainable forest management is strongly achieved, while the economic dimension lags behind the current potential. Thus, we conclude that maintaining a large number of trees in the forest does not necessarily ensure both ecological and economic sustainability. Instead, priority should be given to a rational estimation of annual harvest rates to enhance forest resource conditions together with regular benefits to the local communities.
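
The growth-to-removal ratio used in the inventory analysis can be sketched as a simple function; the interpretation thresholds and the example figures here are illustrative assumptions, not values from the study.

```python
def growth_to_removal(growth_m3, removal_m3):
    """Growth-to-removal ratio over an inventory interval:
    > 1 suggests under-harvesting (stock builds up),
    < 1 suggests over-harvesting, ~1 harvest in balance with growth."""
    if removal_m3 <= 0:
        return float("inf")  # nothing removed: entirely under-harvested
    return growth_m3 / removal_m3

# Hypothetical per-hectare increment and removal volumes (m^3/ha):
assert growth_to_removal(4.0, 2.0) == 2.0   # under-harvested interval
assert growth_to_removal(1.5, 3.0) == 0.5   # over-harvested interval
```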

Keywords: community forests, diversity, growing stock, forest management, sustainability, Nepal

Procedia PDF Downloads 95
546 Study on the Process of Detumbling a Space Target by Laser

Authors: Zhang Pinliang, Chen Chuan, Song Guangming, Wu Qiang, Gong Zizheng, Li Ming

Abstract:

The active removal of space debris and asteroid defense are important issues in human space activities. Both require a detumbling process, since almost all space debris and asteroids are in a rotating state, and it is hard and dangerous to capture or remove a target with a relatively high tumbling rate. It is therefore necessary to find a method to reduce the angular rate first. The laser ablation method is an efficient way to tackle this detumbling problem, as it is a contactless technique that can work at a safe distance. In existing research, a laser rotational control strategy based on estimating the instantaneous angular velocity of the target has been presented. However, the calculation of the control torque produced by the laser, which is very important in a detumbling operation, is not accurate enough: the method used is only suitable for planar or regularly shaped targets and does not consider the influence of irregular shape or the size of the laser spot. In this paper, based on a triangulated reconstruction of the target surface, we propose a new method to calculate the impulse on an irregularly shaped target under both covered irradiation and spot irradiation of the laser and verify its accuracy by theoretical calculation and impulse measurement experiments. We then use it to study the process of detumbling a cylinder and an asteroid by laser. The results show that the new method is universally applicable and has high precision; it would take more than 13.9 hours to stop the rotation of Bennu with 10^5 kJ of laser pulse energy; and the speed of the detumbling process depends on the distance between the spot and the centroid of the target, for which an optimal value can be found in each particular case.
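
The facet-wise impulse summation behind such a method can be sketched as follows. This is not the authors' code: the momentum-coupling coefficient C_M, the facet representation, and all values are assumptions for illustration.

```python
import numpy as np

# Sketch: net impulse and angular impulse (torque x pulse time) from laser
# ablation on a triangulated target surface. Each illuminated facet gets an
# impulse along its inward normal, proportional to deposited pulse energy
# via a momentum-coupling coefficient C_M (an assumed constant here).

C_M = 30e-6  # momentum coupling coefficient, N*s per J (assumed value)

def facet_impulse(area, normal, fluence, cos_incidence):
    """Impulse vector on one facet: J = C_M * E * (-n), E = fluence*area*cos(theta)."""
    energy = fluence * area * max(cos_incidence, 0.0)  # shadowed facets get none
    return -C_M * energy * np.asarray(normal, dtype=float)

def net_impulse_and_torque(facets, fluence):
    """facets: iterable of (centroid r, unit outward normal n, area, cos_incidence).
    Returns total impulse and total angular impulse about the body origin."""
    J_total = np.zeros(3)
    ang_impulse = np.zeros(3)
    for r, n, area, cos_t in facets:
        J = facet_impulse(area, n, fluence, cos_t)
        J_total += J
        ang_impulse += np.cross(np.asarray(r, dtype=float), J)
    return J_total, ang_impulse
```

For a single unit facet at r = (1, 0, 0) with outward normal +z under normal incidence, the angular impulse comes out along +y, which is easy to verify by hand from r x J.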

Keywords: detumbling, laser ablation drive, space target, space debris removal

Procedia PDF Downloads 82
545 Nonparametric Truncated Spline Regression Model on the Data of Human Development Index in Indonesia

Authors: Kornelius Ronald Demu, Dewi Retno Sari Saputro, Purnami Widyaningsih

Abstract:

The Human Development Index (HDI) is a standard measurement of a country's human development. Several factors may influence it, such as life expectancy, gross domestic product (GDP) based on the province's annual expenditure, the number of poor people, and the percentage of illiterate people. The scatter plots between the HDI and these factors do not follow a specific pattern or form. Therefore, HDI data in Indonesia can be modeled with a nonparametric regression model. The estimate of the regression curve in a nonparametric regression model is flexible because it follows the shape of the data pattern. One nonparametric regression method is the truncated spline. Truncated spline regression is a nonparametric approach based on a modification of segmented polynomial functions. The estimator of a truncated spline regression model is affected by the selection of optimal knot points, which are the join points of the truncated spline segments. The optimal knot points were determined by the minimum value of generalized cross validation (GCV). In this article, a truncated spline nonparametric regression model was applied to Human Development Index data. The best truncated spline regression model for HDI data in Indonesia was obtained with the combination of optimal knot points 5-5-5-4. Life expectancy and the percentage of illiterate people were the factors significantly related to the HDI in Indonesia. The coefficient of determination is 94.54%, which means the regression model fits the HDI data in Indonesia well.
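
The knot-selection-by-GCV idea can be illustrated on one covariate. This is a minimal sketch on synthetic data, not the HDI dataset or the authors' multi-covariate implementation; the candidate knots and the data-generating kink are assumptions.

```python
import numpy as np

# Linear truncated spline: basis [1, x, (x-k)_+ for each knot k].
def truncated_basis(x, knots):
    cols = [np.ones_like(x), x] + [np.maximum(x - k, 0.0) for k in knots]
    return np.column_stack(cols)

def gcv(x, y, knots):
    """GCV score: (RSS/n) / (1 - tr(H)/n)^2, with hat matrix H = X X^+."""
    X = truncated_basis(x, knots)
    H = X @ np.linalg.pinv(X)          # y_hat = H y
    resid = y - H @ y
    n = len(y)
    return (resid @ resid / n) / (1 - np.trace(H) / n) ** 2

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 200))
y = np.where(x < 5, x, 5 + 3 * (x - 5)) + rng.normal(0, 0.3, 200)  # kink at 5

candidates = [2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
best = min(candidates, key=lambda k: gcv(x, y, [k]))
print(best)  # the candidate closest to the true change point should win
```

In the article the same criterion is searched over combinations of knots across four covariates (yielding 5-5-5-4); the one-dimensional search above shows the mechanism.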

Keywords: generalized cross validation (GCV), Human Development Index (HDI), knots point, nonparametric regression, truncated spline

Procedia PDF Downloads 336
544 The Effect of Female Access to Healthcare and Educational Attainment on Nigerian Agricultural Productivity Level

Authors: Esther M. Folarin, Evans Osabuohien, Ademola Onabote

Abstract:

Agriculture constitutes an important part of development and poverty mitigation in lower-middle-income countries like Nigeria. The level of agricultural productivity in the Nigerian economy, relative to the level of demand necessary to meet the expectations of the Nigerian populace, threatens the attainment of the United Nations (UN) Sustainable Development Goals (SDGs), including SDG-2 (achieving food security through agricultural productivity). The overall objective of the study is to reveal the performance of the interaction variable in the model, among other factors that help in the achievement of greater Nigerian agricultural productivity. The study makes use of Wave 4 (2018/2019) of the Living Standards Measurement Study - Integrated Surveys on Agriculture (LSMS-ISA). Qualitative analysis of the information was also used to provide complementary answers to the quantitative analysis done in the study. The study employed human capital theory and Grossman's theory of health demand in explaining the relationships that exist between the variables within the model. The study engages the instrumental variable regression technique in achieving the broad objective, among other techniques for the other specific objectives. The estimation results show that there is a positive relationship between female healthcare and the level of female agricultural productivity in Nigeria. In conclusion, the study emphasises the need for more provision and empowerment for greater female access to healthcare and educational attainment, which aid higher female agricultural productivity and consequently an improvement in the total agricultural productivity of the Nigerian economy.
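
The instrumental-variable technique the study engages can be sketched as two-stage least squares (2SLS). The data, the instrument, and all variable names below are hypothetical; this only illustrates why instrumenting an endogenous regressor recovers the causal coefficient.

```python
import numpy as np

def two_stage_least_squares(y, X, Z):
    """2SLS: y (n,) outcome; X (n,p) regressors incl. intercept (endogenous);
    Z (n,q) instruments incl. intercept, q >= p."""
    # Stage 1: project X onto the instrument space
    X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
    # Stage 2: regress y on the fitted values
    return np.linalg.lstsq(X_hat, y, rcond=None)[0]

rng = np.random.default_rng(1)
n = 5000
z = rng.normal(size=n)                        # instrument (hypothetical)
u = rng.normal(size=n)                        # unobserved confounder
x = 0.8 * z + 0.5 * u + rng.normal(size=n)    # endogenous regressor
y = 2.0 * x + u + rng.normal(size=n)          # outcome, true effect = 2

X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])
beta = two_stage_least_squares(y, X, Z)
print(round(beta[1], 2))  # close to the true coefficient 2.0
```

Ordinary least squares on the same data would be biased upward by the confounder u, which is the motivation for the IV approach.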

Keywords: agricultural productivity, education, female, healthcare, investment

Procedia PDF Downloads 80
543 Critical Success Factors in Quality Requirement Change Management

Authors: Jamshed Ahmad, Abdul Wahid Khan, Javed Ali Khan

Abstract:

Managing software quality requirement changes is a difficult task in the field of software engineering. Rejecting incoming changes results in user dissatisfaction, while accommodating too many requirement changes may delay product delivery. Poor requirements management is often considered the primary cause of software failure, and the problem becomes more challenging in global software outsourcing. Addressing the success factors in quality requirement change management is needed today due to frequent change requests from end-users. In this research study, success factors are identified and scrutinized with the help of a systematic literature review (SLR). In total, 16 success factors were identified that significantly impact software quality requirement change management. The findings show that proper requirement change management, rapid delivery, quality software product, access to market, project management, skills and methodologies, low cost/effort estimation, clear plan and road map, agile processes, low labor cost, user satisfaction, communication/close coordination, proper scheduling and time constraints, frequent technological changes, robust model, and geographical distribution/cultural differences are the key factors that influence software quality requirement change. The recognized success factors were validated with the help of various research methods, i.e., case studies, interviews, surveys, and experiments, and then analyzed by continent, database, company size, and time period. Based on these findings, requirement changes can be implemented in a better way.

Keywords: global software development, requirement engineering, systematic literature review, success factors

Procedia PDF Downloads 195
542 IoT and Deep Learning Approach for Growth Stage Segregation and Harvest Time Prediction of Aquaponic and Vermiponic Swiss Chards

Authors: Praveen Chandramenon, Andrew Gascoyne, Fideline Tchuenbou-Magaia

Abstract:

Aquaponics offers a simple, compelling solution to the world's food and environmental crises. The approach combines aquaculture (growing fish) with hydroponics (growing vegetables and plants in a soilless medium). Smart aquaponics explores the use of smart technology, including artificial intelligence and IoT, to assist farmers with better decision making and online monitoring and control of the system. Identifying the different growth stages of Swiss chard plants and predicting their harvest time are important for aquaponic yield management. This paper presents a comparative analysis of standard aquaponics and vermiponics (aquaponics with worms), grown in a controlled environment, by implementing IoT and deep-learning-based growth stage segregation and harvest time prediction of Swiss chards before and after applying an optimal freshwater replenishment. Data collection, growth stage classification, and harvest time prediction were performed with and without water replenishment. The paper discusses the experimental design, the IoT and sensor communication architecture, the data collection process, image segmentation, the various regression and classification models, and the error estimation used in the project. The paper concludes with a comparison of results, including the best-performing models for growth stage segregation and harvest time prediction on the aquaponic and vermiponic testbeds with and without freshwater replenishment.

Keywords: aquaponics, deep learning, internet of things, vermiponics

Procedia PDF Downloads 69
541 Improving Fault Tolerance and Load Balancing in Heterogeneous Grid Computing Using Fractal Transform

Authors: Saad M. Darwish, Adel A. El-Zoghabi, Moustafa F. Ashry

Abstract:

The popularity of the Internet and the availability of powerful computers and high-speed networks as low-cost commodity components are changing the way we use computers today. These technical opportunities have led to the possibility of using geographically distributed and multi-owner resources to solve large-scale problems in science, engineering, and commerce. Recent research on these topics has led to the emergence of a new paradigm known as Grid computing. To achieve the promising potential of tremendous distributed resources, effective and efficient load balancing algorithms are fundamentally important. Unfortunately, load balancing algorithms in traditional parallel and distributed systems, which usually run on homogeneous and dedicated resources, cannot work well in the new circumstances. In this paper, the concept of a fast fractal transform in heterogeneous grid computing, based on an R-tree and the domain-range entropy, is proposed to improve fault tolerance and load balancing by improving connectivity, communication delay, network bandwidth, resource availability, and resource unpredictability. A novel two-dimensional figure of merit is suggested to describe the network effects on load balance and fault tolerance estimation. Fault tolerance is enhanced by adaptively decreasing replication time and message cost, while load balance is enhanced by adaptively decreasing mean job response time. Experimental results show that the proposed method yields superior performance over other methods.

Keywords: Grid computing, load balancing, fault tolerance, R-tree, heterogeneous systems

Procedia PDF Downloads 488
540 Financial Modeling for Net Present Benefit Analysis of Electric Bus and Diesel Bus and Applications to NYC, LA, and Chicago

Authors: Jollen Dai, Truman You, Xinyun Du, Katrina Liu

Abstract:

Transportation is one of the leading sources of greenhouse gas (GHG) emissions. Thus, to meet the 2015 Paris Agreement, all countries must adopt a different and more sustainable transportation system. From bikes to maglev, the world is slowly shifting to sustainable transportation. To develop a useful public transit system, a sustainable web of buses must be implemented. As of now, only a handful of cities have adopted a detailed plan to implement a full fleet of e-buses by the 2030s, with Shenzhen in the lead. Every change requires a detailed plan and a focused analysis of its impacts. In this report, the economic and financial implications have been taken into consideration to develop a well-rounded 10-year plan for New York City. We also apply the same financial model to two other cities, LA and Chicago. We picked NYC, Chicago, and LA for the comparative NPB analysis since they are all big metropolitan cities with complex transportation systems. All three cities have started action plans to achieve a full fleet of e-buses in the coming decades. In addition, their energy carbon footprints and energy prices are very different, which are key factors in the benefits of electric buses. Using Total Cost of Ownership (TCO) financial analysis, we developed a model to calculate the Net Present Benefit (NPB) and compare electric buses (EBs) to diesel buses (DBs). We have considered all essential aspects in our model: initial investment, including the cost of the bus, charger, and installation; government funds (federal, state, local); labor cost; energy (electricity or diesel) cost; maintenance cost; insurance cost; health and environmental benefits; and V2G (vehicle-to-grid) benefits. We see about $1,400,000 in benefits over the 12-year lifetime of an EB compared to a DB, provided government funds offset 50% of the EB purchase cost.
With the government subsidy, an EB starts to generate positive cash flow in the 5th year and can pay back its investment in 5 years. Note that our model counts environmental and health benefits, with $50,000 per bus per year counted as health benefits. Besides health benefits, the most significant benefits come from energy cost savings and maintenance savings, which are about $600,000 and $200,000, respectively, over the 12-year life cycle. Using linear regression, given certain budget limitations, we then designed an optimal three-phase process to replace all NYC diesel buses in 10 years, i.e., by 2033. The linear regression process minimizes the total cost over the years while yielding the lowest environmental cost. The overall benefit of replacing all DBs with EBs for NYC is over $2.1 billion by 2033. For LA and Chicago, the benefits of electrifying the current bus fleets are $1.04 billion and $634 million, respectively, by 2033. All NPB analyses and the algorithm to optimize the electrification phases are implemented in Python code and can be shared.
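
The NPB calculation described above reduces to discounting an annual benefit stream against the extra upfront cost. The sketch below uses illustrative placeholder figures (discount rate, savings breakdown, subsidized cost difference), not the authors' inputs.

```python
# Simplified per-bus net-present-benefit sketch of the TCO comparison.
def npb(benefit_per_year, years, discount_rate, upfront_extra_cost):
    """Discounted annual benefits of an e-bus over a diesel bus,
    minus the extra upfront cost remaining after subsidy."""
    pv = sum(benefit_per_year / (1 + discount_rate) ** t
             for t in range(1, years + 1))
    return pv - upfront_extra_cost

# Hypothetical inputs: $50k/yr energy savings, $17k/yr maintenance savings,
# $50k/yr health benefit; $300k extra purchase cost after a 50% subsidy.
annual_benefit = 50_000 + 17_000 + 50_000
value = npb(annual_benefit, years=12, discount_rate=0.03,
            upfront_extra_cost=300_000)
print(round(value))  # positive NPB: the e-bus pays for itself over 12 years
```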

Keywords: financial modeling, total cost ownership, net present benefits, electric bus, diesel bus, NYC, LA, Chicago

Procedia PDF Downloads 48
539 An Estimating Equation for Survival Data with Possibly Time-Varying Covariates under Semiparametric Transformation Models

Authors: Yemane Hailu Fissuh, Zhongzhan Zhang

Abstract:

The estimating equation technique is an alternative to the widely used maximum likelihood methods, and it eases some of the complexity arising from time-varying covariates. When both time-varying covariates and left-truncation are considered in the model, maximum likelihood estimation procedures become much more burdensome and complex. To ease this complexity, this study proposes modified estimating equations, which have received considerable attention from researchers, under a semiparametric transformation model. The purpose of this article was to develop modified estimating equations under a flexible and general class of semiparametric transformation models for left-truncated and right-censored survival data with time-varying covariates. Besides the commonly applied Cox proportional hazards model, such problems can also be analyzed with a general class of semiparametric transformation models to estimate the effect of treatment, given possibly time-varying covariates, on survival time. The consistency and asymptotic properties of the estimators were derived via the expectation-maximization (EM) algorithm. The finite-sample performance of the estimators for the proposed model was illustrated via simulation studies and the Stanford heart transplant data. In summary, the bias for covariates was adjusted by estimating the density function of the truncation time variable, and the effect of possibly time-varying covariates was then evaluated in some special semiparametric transformation models.

Keywords: EM algorithm, estimating equation, semiparametric transformation models, time-to-event outcomes, time varying covariate

Procedia PDF Downloads 151
538 Challenges of Blockchain Applications in the Supply Chain Industry: A Regulatory Perspective

Authors: Pardis Moslemzadeh Tehrani

Abstract:

Due to the emergence of blockchain technology and the benefits of cryptocurrencies, intelligent or smart contracts are gaining traction. Artificial intelligence (AI) is transforming our lives and is being embraced by a wide range of sectors, and smart contracts, which are at the heart of blockchains, incorporate AI characteristics. Such contracts are referred to as "smart" because the underlying technology allows contracting parties to agree on terms expressed in computer code that defines machine-readable instructions for computers to follow under specific situations. Execution happens automatically if the conditions are met. Initially utilised for financial transactions, blockchain applications have since expanded to the insurance and medical sectors, as well as supply networks. Raw material acquisition by suppliers, design and fabrication by manufacturers, delivery of final products to consumers, and even post-sales logistics assistance are all part of supply chains. Many issues are linked with managing supply chains from the planning and coordination stages, and because of their complexity these can benefit from implementation in a smart contract on a blockchain. Manufacturing delays and limited third-party supplies of product components have raised concerns about the integrity and accountability of supply chains for food and pharmaceutical items. Other concerns include regulatory compliance in multiple jurisdictions and transportation conditions (for instance, many products must be kept in temperature-controlled environments to ensure their effectiveness). Products are handled by several providers before reaching customers in modern economic systems. Information is sent between suppliers, shippers, distributors, and retailers at every stage of the production and distribution process, and it travels more effectively when intermediaries are eliminated from the equation.
The usage of blockchain technology could be a viable solution to these coordination issues. On blockchains, smart contracts allow for the rapid transmission of production data, logistics data, inventory levels, and sales data. This research investigates the legal and technical advantages and disadvantages of AI-blockchain technology in the supply chain business. It aims to uncover the applicable legal problems and barriers to the application of AI-blockchain technology to supply chains, particularly in the food industry. It also discusses the essential legal and technological issues and impediments to supply chain implementation for stakeholders, as well as methods for overcoming them before releasing the technology to clients. Because little research has been done on this topic, it is difficult for industrial stakeholders to grasp how blockchain technology could be used in their respective operations. As a result, the focus of this research is on building advanced and complex contractual terms in supply chain smart contracts on blockchains to cover unforeseen supply chain challenges.

Keywords: blockchain, supply chain, IoT, smart contract

Procedia PDF Downloads 125
537 An Analysis of the Impact of Government Budget Deficits on Economic Performance: A Zimbabwean Perspective

Authors: Tafadzwa Shumba, Rose C. Nyatondo, Regret Sunge

Abstract:

This research analyses the impact of budget deficits on the economic performance of Zimbabwe. The study employs the autoregressive distributed lag (ARDL) bounds testing approach to co-integration and long-run estimation, using time series data from 1980-2018. The Augmented Dickey-Fuller (ADF) test and the Granger approach were used to test for stationarity and causality among the factors. The co-integration test results affirm a long-term association between the GDP growth rate and the explanatory factors. Causality test results show a unidirectional connection from budget deficit to GDP growth and bi-directional causality between debt and budget deficit. The study also found unidirectional causality from debt to GDP growth rate. The ARDL estimates indicate a significantly positive long-term and significantly negative short-term impact of the budget deficit on GDP. This suggests that budget deficits have a short-run growth-retarding effect and a long-run growth-inducing effect. The long-run results follow the Keynesian theory, which posits that fiscal deficits result in an increase in GDP growth; the short-run outcomes follow the neoclassical theory. In light of these findings, the government is recommended to minimize financing of recurrent expenditure using a budget deficit. To achieve sustainable growth and development, the government should keep the budget deficit at an absorbable level and focus it on capital projects such as the development of human capital and infrastructure.
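
The stationarity pre-test behind the ARDL approach can be illustrated with a minimal Dickey-Fuller-style regression. This is a bare sketch on synthetic series, without lag augmentation or proper critical values (for real work, a library such as statsmodels' adfuller supplies those); all data here are simulated.

```python
import numpy as np

def df_t_stat(y):
    """t-statistic on rho in: diff(y)_t = alpha + rho * y_{t-1} + e_t.
    Strongly negative values point toward stationarity."""
    dy, lag = np.diff(y), y[:-1]
    X = np.column_stack([np.ones_like(lag), lag])
    beta = np.linalg.lstsq(X, dy, rcond=None)[0]
    resid = dy - X @ beta
    sigma2 = resid @ resid / (len(dy) - 2)           # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)            # OLS covariance matrix
    return beta[1] / np.sqrt(cov[1, 1])

rng = np.random.default_rng(42)
random_walk = np.cumsum(rng.normal(size=500))        # unit root: t-stat near 0
ar1 = np.zeros(500)
for t in range(1, 500):
    ar1[t] = 0.5 * ar1[t - 1] + rng.normal()         # stationary AR(1)

print(df_t_stat(random_walk), df_t_stat(ar1))        # second is far more negative
```

A unit-root series like GDP in levels gives a t-statistic near zero and is differenced before modeling, which is exactly what the ADF pre-test decides.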

Keywords: ARDL, budget deficit, economic performance, long run

Procedia PDF Downloads 96
536 Sustainable Land Use Evaluation Based on Preservative Approach: Neighborhoods of Susa City

Authors: Somaye Khademi, Elahe Zoghi Hoseini, Mostafa Norouzi

Abstract:

Because it determines the manner of land use and the spatial structure of cities on the one hand, and the economic value of each piece of land on the other, land-use planning is always considered a main part of urban planning. In this regard, by emphasizing the efficient use of land, the sustainable development approach has presented a new perspective on urban planning and, consequently, on its most important pillar, i.e., land-use planning. To evaluate urban land use, this paper attempts to select the most significant indicators affecting urban land use that match sustainable development indicators. Given the significance of preserving ancient monuments and their surroundings as one of the main pillars of achieving sustainability, the sustainability indicators in this research were selected with emphasis on preserving the ancient monuments and historical fabric of the city of Susa, one of the historical cities of Iran, and were integrated with other land-use sustainability indicators. For this purpose, Kernel Density Estimation (KDE) was used to produce maps displaying spatial density, and the AHP model was used to combine layers and produce the final maps. Moreover, sustainability ratings are studied in different districts of Susa in order to evaluate the status of land sustainability in different parts of the city. The results show that different neighborhoods of Susa do not have the same land-use sustainability: neighborhoods located in the eastern half of the city, i.e., the newer neighborhoods, are more sustainable than those in the western half. It seems that the allocation of a high percentage of the western areas to arid lands and historical sites is one of the main reasons for their lower sustainability.
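
The AHP layer-weighting step can be sketched as deriving criterion weights from a pairwise-comparison matrix via its principal eigenvector. The 3x3 matrix below is a hypothetical set of Saaty-scale judgments (e.g., weighting heritage proximity, density, and access), not values from the study.

```python
import numpy as np

def ahp_weights(A, iters=100):
    """Principal right eigenvector of a reciprocal comparison matrix,
    normalized to sum to 1, computed by power iteration."""
    w = np.ones(A.shape[0]) / A.shape[0]
    for _ in range(iters):
        w = A @ w
        w /= w.sum()
    return w

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 3.0],
              [1/5, 1/3, 1.0]])   # reciprocal pairwise judgments (Saaty scale)
w = ahp_weights(A)
print(np.round(w, 3))  # weights, ordered by judged importance
```

The resulting weights would then multiply the KDE-derived indicator layers before summation into the final sustainability map.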

Keywords: city of Susa, historical heritage, land-use evaluation, urban sustainable development

Procedia PDF Downloads 377
535 Spectroscopic Relation between Open Cluster and Globular Cluster

Authors: Robin Singh, Mayank Nautiyal, Priyank Jain, Vatasta Koul, Vaibhav Sharma

Abstract:

The curiosity to investigate space and its mysteries has always been a driving force of human interest, and the drive to uncover the secrets of stars and their unusual behaviour has always ignited stellar research. Just as humankind lives in civilizations and states, stars live in colonies named 'clusters'. Clusters are separated into two types: open clusters and globular clusters. An open cluster is a group of up to a few thousand stars that formed from the same giant molecular cloud and generally contains Population I (metal-rich) and Population II (metal-poor) stars, whereas a globular cluster is a roughly spherical group of more than thirty thousand stars that orbits a galactic core and primarily contains Population II (metal-poor) stars. This paper presents a spectroscopic investigation of globular clusters such as M92 and NGC 419 and open clusters such as M34 and IC 2391 in different colour bands, using software such as the VIREO virtual observatory, Aladin, CMUNIWIN, and MS Excel. The resulting Hertzsprung-Russell (HR) diagrams are assessed against classical cosmological models such as the Einstein model, the de Sitter model, and the Planck survey for a better age estimation of the respective clusters. Colour-magnitude diagrams of these clusters were obtained by photometric analysis in the g and r bands, which were further transformed into B and V bands, revealing the nature of the stars present in the individual clusters.
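
The g,r to B,V conversion step can be sketched with a published stellar transformation. The coefficients below follow the widely cited Jester et al. (2005) SDSS-to-Johnson relations as an assumption for illustration; they should be verified against the original calibration (and its colour-range limits) before scientific use, and the input magnitudes are hypothetical.

```python
# Approximate Johnson B, V from SDSS g, r magnitudes for stars
# (assumed Jester et al. 2005 coefficients; verify before use).
def gr_to_BV(g, r):
    V = g - 0.59 * (g - r) - 0.01
    B = V + 0.98 * (g - r) + 0.22
    return B, V

# Hypothetical photometry for one cluster member:
B, V = gr_to_BV(g=15.20, r=14.80)
print(round(B - V, 3))  # the B-V colour index plotted on the CMD axis
```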

Keywords: color magnitude diagram, globular clusters, open clusters, Einstein model

Procedia PDF Downloads 225
534 Modeling and Numerical Simulation of Heat Transfer and Internal Loads at Insulating Glass Units

Authors: Nina Penkova, Kalin Krumov, Liliana Zashcova, Ivan Kassabov

Abstract:

Insulating glass units (IGU) are widely used in advanced and renovated buildings in order to reduce the energy needed for heating and cooling. Rules for the choice of IGU to ensure energy efficiency and thermal comfort in the indoor space are well known. The existence of internal loads, i.e. gauge or vacuum pressure in the hermetically sealed gas space, requires additional attention in the design of the facades. The internal loads appear at variations of altitude, meteorological pressure and gas temperature relative to their values at the time of sealing. The gas temperature depends on the presence of coatings, the coating position in the transparent multi-layer system, the IGU geometry and space orientation, its fixing on the facades, and varies with the climate conditions. An algorithm for modeling and numerical simulation of the thermal fields and internal pressure in the gas cavity of insulating glass units as a function of the meteorological conditions is developed. It includes models of the radiation heat transfer at solar and infrared wavelengths, indoor and outdoor convection heat transfer and free convection in the sealed gas space, assuming the gas to be compressible. The algorithm allows prediction of temperature and pressure stratification in the gas domain of the IGU for different fixing systems. The models are validated by comparison of the numerical results with experimental data obtained by hot-box testing. Numerical calculations and estimation of 3D temperature and fluid flow fields, thermal performance and internal loads at IGU in window systems are implemented.
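As a back-of-the-envelope complement to the CFD algorithm (not the model itself), the internal load of a rigid sealed cavity can be approximated with the ideal gas law at constant volume; all numbers below are hypothetical:

```python
# Isochoric (constant-volume) estimate of cavity gauge pressure: for a sealed,
# rigid gas space, absolute pressure scales with absolute temperature, and the
# internal load is the difference between cavity and current ambient pressure.
def internal_load(p_seal_pa, t_seal_k, t_now_k, p_ambient_pa):
    """Gauge pressure (Pa) in a rigid sealed cavity after a temperature change."""
    p_gas = p_seal_pa * (t_now_k / t_seal_k)  # Gay-Lussac's law at constant volume
    return p_gas - p_ambient_pa

# Hypothetical: sealed at 101325 Pa and 20 C; gas now heated to 45 C by solar
# gain, while meteorological pressure has dropped to 99000 Pa.
load = internal_load(101325.0, 293.15, 318.15, 99000.0)
print(round(load), "Pa outward gauge pressure")
```

Real panes deflect under load, increasing the cavity volume, so this constant-volume figure is an upper bound on the gauge pressure; resolving the deflection is precisely what the coupled simulation above is for.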

Keywords: insulating glass units, thermal loads, internal pressure, CFD analysis

Procedia PDF Downloads 272
533 A Stochastic Diffusion Process Based on the Two-Parameters Weibull Density Function

Authors: Meriem Bahij, Ahmed Nafidi, Boujemâa Achchab, Sílvio M. A. Gama, José A. O. Matos

Abstract:

Stochastic modeling concerns the use of probability to model real-world situations in which uncertainty is present. Therefore, the purpose of stochastic modeling is to estimate the probability of outcomes within a forecast, i.e. to be able to predict what conditions or decisions might happen under different situations. In the present study, we present a model of a stochastic diffusion process based on the bi-Weibull distribution function (its trend is proportional to the bi-Weibull probability density function). In general, the Weibull distribution has the ability to assume the characteristics of many different types of distributions. This has made it very popular among engineers and quality practitioners, who have considered it the most commonly used distribution for studying problems such as modeling reliability data, accelerated life testing, and maintainability modeling and analysis. In this work, we start by obtaining the probabilistic characteristics of this model, namely the explicit expression of the process, its trends, and its distribution, by transforming the diffusion process into a Wiener process as shown in the Ricciardi theorem. Then, we develop the statistical inference of this model using the maximum likelihood methodology. Finally, we analyse with simulated data the computational problems associated with the parameters, an issue of great importance in its application to real data, with the use of convergence analysis methods. Overall, the use of a stochastic model reflects only a pragmatic decision on the part of the modeler. Given the data that are available and the universe of models known to the modeler, this model represents the best currently available description of the phenomenon under consideration.
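A minimal sketch of the maximum likelihood step on simulated data (the full model transforms the diffusion into a Wiener process; here only a plain two-parameter Weibull fit is shown for illustration):

```python
import numpy as np
from scipy.stats import weibull_min

# Simulate lifetimes from a two-parameter Weibull (shape c=1.5, scale=2.0),
# then recover the parameters by maximum likelihood.
rng = np.random.default_rng(42)
data = weibull_min.rvs(c=1.5, scale=2.0, size=5000, random_state=rng)

# floc=0 pins the location parameter at zero, leaving shape and scale free.
c_hat, loc, scale_hat = weibull_min.fit(data, floc=0)
print(c_hat, scale_hat)  # estimates close to the true 1.5 and 2.0
```

The paper's convergence analysis addresses exactly the numerical behavior of such estimators as the sample (or the discretely sampled path) grows.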

Keywords: diffusion process, discrete sampling, likelihood estimation method, simulation, stochastic diffusion process, trend functions, two-parameter Weibull density function

Procedia PDF Downloads 306
532 Comparative Analysis of the Third Generation of Research Data for Evaluation of Solar Energy Potential

Authors: Claudineia Brazil, Elison Eduardo Jardim Bierhals, Luciane Teresa Salvi, Rafael Haag

Abstract:

Renewable energy sources are dependent on climatic variability, so for adequate energy planning, observations of the meteorological variables are required, preferably as long-period series. Despite the scientific and technological advances that meteorological measurement systems have undergone in recent decades, there is still a considerable lack of meteorological observations that form long-period series. Reanalysis is a data assimilation system built on general atmospheric circulation models, combining data collected at surface stations, ocean buoys, satellites and radiosondes, and allowing the production of long-period data for a wide range of variables. The third generation of reanalysis data emerged in 2010; among them is the Climate Forecast System Reanalysis (CFSR) developed by the National Centers for Environmental Prediction (NCEP), whose data have a spatial resolution of 0.5° x 0.5°. In order to overcome these difficulties, this study evaluates the performance of solar radiation estimation through alternative databases, such as reanalysis data and meteorological satellite data, that satisfactorily compensate for the absence of solar radiation observations at the global and/or regional level. The analysis of the solar radiation data indicated that the reanalysis data of the CFSR model performed well in relation to the observed data, with a determination coefficient around 0.90. Therefore, it is concluded that these data have the potential to be used as an alternative source in locations with no stations or no long series of solar radiation observations, which is important for the evaluation of solar energy potential.
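The reported agreement can be quantified with the coefficient of determination; the sketch below uses hypothetical daily radiation values, not the study's series:

```python
import numpy as np

def r_squared(observed, estimated):
    """Coefficient of determination between station data and reanalysis."""
    observed = np.asarray(observed, float)
    estimated = np.asarray(estimated, float)
    ss_res = np.sum((observed - estimated) ** 2)   # residual sum of squares
    ss_tot = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical daily solar radiation (MJ/m2): reanalysis tracking observations.
obs = np.array([18.2, 21.5, 15.3, 24.1, 19.8, 22.6, 17.4])
cfsr = np.array([17.9, 22.0, 14.8, 23.5, 20.3, 22.1, 18.0])
print(round(r_squared(obs, cfsr), 2))  # close to 1 for series that track each other
```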

Keywords: climate, reanalysis, renewable energy, solar radiation

Procedia PDF Downloads 208
531 An Approach to Correlate the Statistical-Based Lorenz Method, as a Way of Measuring Heterogeneity, with Kozeny-Carman Equation

Authors: H. Khanfari, M. Johari Fard

Abstract:

Dealing with carbonate reservoirs can be challenging for reservoir engineers due to the various diagenetic processes that cause rock properties to vary throughout the reservoir. A good estimation of reservoir heterogeneity, defined as the degree of variation in rock properties with location in a reservoir or formation, can help better model the reservoir and thus offer a better understanding of its behavior. Most reservoirs are heterogeneous formations whose mineralogy, organic content, natural fractures, and other properties vary from place to place. Over the years, reservoir engineers have tried to establish methods to describe heterogeneity, because heterogeneity is important in modeling reservoir flow and in well testing. Geological methods describe the variations in rock properties based on the similarities of the environments in which different beds were deposited. To illustrate the heterogeneity of a reservoir vertically, two methods are generally used in petroleum work: the Dykstra-Parsons permeability variation (V) and the Lorenz coefficient (L), both of which are reviewed briefly in this paper. The Lorenz concept is based on statistics and has been used in petroleum work from that point of view. In this paper, we correlated the statistical-based Lorenz method to a petroleum concept, i.e. the Kozeny-Carman equation, and derived the straight-line plot of the Lorenz graph for a homogeneous system. Finally, we applied the two methods to a heterogeneous field in southern Iran and discussed each separately, with numbers and figures. As expected, these methods show great departure from homogeneity. Therefore, for future investment, the reservoir needs to be treated carefully.
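A sketch of how the Lorenz coefficient is computed from layered core data (the values below are hypothetical; the paper's contribution is relating this statistic to the Kozeny-Carman equation):

```python
import numpy as np

def lorenz_coefficient(k, phi, h):
    """Lorenz coefficient from layer permeability k, porosity phi, thickness h."""
    k, phi, h = (np.asarray(a, float) for a in (k, phi, h))
    order = np.argsort(k / phi)[::-1]              # rank layers by flow/storage ratio
    flow = np.insert(np.cumsum((k * h)[order]), 0, 0.0)
    stor = np.insert(np.cumsum((phi * h)[order]), 0, 0.0)
    flow, stor = flow / flow[-1], stor / stor[-1]  # cumulative fractions (Lorenz curve)
    area = np.sum(np.diff(stor) * (flow[1:] + flow[:-1]) / 2.0)  # trapezoidal area
    return 2.0 * (area - 0.5)  # 0 = homogeneous, toward 1 = strongly heterogeneous

# Hypothetical layered reservoir: permeability (md), porosity (frac), thickness (m).
lc = lorenz_coefficient([500.0, 120.0, 30.0, 5.0],
                        [0.22, 0.18, 0.15, 0.10],
                        [2.0, 3.0, 4.0, 3.0])
print(round(lc, 2))  # clearly departs from zero, i.e. heterogeneous

# A homogeneous system (constant k/phi) plots on the diagonal, coefficient ~0,
# which is the straight-line case derived analytically in the paper.
print(round(lorenz_coefficient([100.0] * 4, [0.2] * 4, [2.0, 3.0, 4.0, 3.0]), 2))
```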

Keywords: carbonate reservoirs, heterogeneity, homogeneous system, Dykstra-Parsons permeability variations (V), Lorenz coefficient (L)

Procedia PDF Downloads 218
530 Bacteriological and Mineral Analyses of Leachate Samples from Erifun Dumpsite, Ado-Ekiti, Ekiti State, Nigeria

Authors: Adebowale T. Odeyemi, Oluwafemi A. Ajenifuja

Abstract:

The leachate samples collected from the Erifun dumpsite along the Federal Polytechnic road, Ado-Ekiti, Ekiti State, were subjected to bacteriological and mineral analyses. Bacteriological estimation and isolation were done using serial dilution and pour plating techniques. Antibiotic susceptibility testing was done using the agar disc diffusion technique. Atomic absorption spectrophotometry was used to analyze the heavy metal contents of the leachate samples. The bacterial and coliform counts ranged from 4.2 × 10⁵ CFU/ml to 2.97 × 10⁶ CFU/ml and from 5.0 × 10⁴ CFU/ml to 2.45 × 10⁶ CFU/ml, respectively. The isolated bacteria and their percentages of occurrence were Bacillus cereus (22%), Enterobacter aerogenes (18%), Staphylococcus aureus (16%), Proteus vulgaris (14%), Escherichia coli (14%), Bacillus licheniformis (12%) and Klebsiella aerogenes (4%). The mineral values ranged as follows: iron (21.30 mg/L - 25.60 mg/L), zinc (1.80 mg/L - 5.60 mg/L), copper (1.00 mg/L - 2.60 mg/L), chromium (0.50 mg/L - 1.30 mg/L), cadmium (0.20 mg/L - 1.30 mg/L), nickel (0.20 mg/L - 0.80 mg/L), lead (0.05 mg/L - 0.30 mg/L) and cobalt (0.03 mg/L - 0.30 mg/L); manganese was not detected in any sample. All the isolated organisms exhibited a high level of resistance to most of the antibiotics used. There is an urgent need to create awareness of the present situation of the leachate in Erifun and of the need to treat the nearby stream and other water sources before they are used for drinking and other domestic purposes. In conclusion, a good method of waste disposal is required in those communities to prevent leachate formation, percolation, and runoff into water bodies during the rainy season.

Keywords: antibiotic susceptibility, dumpsite, bacteriological analysis, heavy metal

Procedia PDF Downloads 140
529 Seismic Retrofits – A Catalyst for Minimizing the Building Sector’s Carbon Footprint

Authors: Juliane Spaak

Abstract:

A life-cycle assessment was performed, looking at seven retrofit projects in New Zealand using LCAQuickV3.5. The study found that retrofits save up to 80% of embodied carbon emissions for the structural elements compared to a new building. In other words, it is only a 20% carbon investment to transform and extend a building’s life. In addition, the systems were evaluated by looking at environmental impacts over the design life of these buildings and resilience using FEMA P58 and PACT software. With the increasing interest in Zero Carbon targets, significant changes in the building and construction sector are required. Emissions for buildings arise from both embodied carbon and operations. Based on the significant advancements in building energy technology, the focus is moving more toward embodied carbon, a large portion of which is associated with the structure. Since older buildings make up most of the real estate stock of our cities around the world, their reuse through structural retrofit and wider refurbishment plays an important role in extending the life of a building’s embodied carbon. New Zealand’s building owners and engineers have learned a lot about seismic issues following a decade of significant earthquakes. Recent earthquakes have brought to light the necessity to move away from constructing code-minimum structures that are designed for life safety but are frequently ‘disposable’ after a moderate earthquake event, especially in relation to a structure’s ability to minimize damage. This means weaker buildings sit as ‘carbon liabilities’, with considerably more carbon likely to be expended remediating damage after a shake. Renovating and retrofitting older assets plays a big part in reducing the carbon profile of the buildings sector, as breathing new life into a building’s structure is vastly more sustainable than the highest quality ‘green’ new builds, which are inherently more carbon-intensive. 
The demolition of viable older buildings (often including heritage buildings) is increasingly at odds with society's desire for a lower carbon economy. Bringing seismic resilience and carbon best practice together in decision-making can open the door to commercially attractive outcomes, with retrofits that include structural and sustainability upgrades transforming the asset's revenue generation. Across the global real estate market, tenants are increasingly demanding that the buildings they occupy be resilient and aligned with their own climate targets. The relationship between seismic performance and 'sustainable design' has yet to fully mature, yet in a wider context it is of profound consequence. A whole-of-life carbon perspective on a building means designing for the likely natural hazards within the asset's expected lifespan, be that earthquakes, storm damage, bushfires, fires, and so on, with financial mitigation (e.g., insurance) part, but not all, of the picture.

Keywords: retrofit, sustainability, earthquake, reuse, carbon, resilient

Procedia PDF Downloads 72
528 Analysis of Tourism Development Level and Research on Improvement Strategies: Taking Chongqing as an Example

Authors: Jiajun Lu, Yun Ma

Abstract:

As part of the tertiary industry, tourism is an important driving factor for urban economic development. Chongqing is a well-known tourist city in China; according to statistics, the added value of tourism and related industries reached 106.326 billion yuan in 2022, a year-on-year increase of 1.2%, accounting for 3.7% of the city's GDP. However, the overall tourism development of Chongqing is seriously unbalanced: the tourism strength of the main urban area is much higher than that of southeast Chongqing, northeast Chongqing and the surrounding city tourism area, while the overall tourism strength of these other three regions is relatively balanced. Based on the estimation of tourism development level and the geographic detector method, this paper finds that the most important factor affecting the tourism development level of the non-main urban areas of Chongqing is A-level tourist attractions. Through GIS geospatial analysis and the SPSS correlation research method, the spatial distribution characteristics and influencing factors of A-level tourist attractions in Chongqing were quantitatively analyzed using data such as the Geospatial Data Cloud, relevant documents of the Chongqing Municipal Commission of Culture and Tourism Development, planning cloud data, and relevant statistical yearbooks. The results show that: (1) The spatial distribution of tourist attractions in the non-main urban areas of Chongqing is agglomerated and uneven. (2) The spatial distribution of A-level tourist attractions in the non-main urban areas of Chongqing is affected by ecological factors, the degree of influence being, in order, water factors > topographic factors > green space factors.

Keywords: tourist attractions, geographic detectors, quantitative research, ecological factors, GIS technology, SPSS analysis

Procedia PDF Downloads 4
527 Estimation of Carbon Uptake of Seoul City Street Trees in Seoul and Plans for Increase Carbon Uptake by Improving Species

Authors: Min Woo Park, Jin Do Chung, Kyu Yeol Kim, Byoung Uk Im, Jang Woo Kim, Hae Yeul Ryu

Abstract:

Nine representative species among all the street trees were selected to estimate the amount of carbon dioxide absorbed by street trees in Seoul, calculating the biomass, the amount of carbon stored, and the annual carbon dioxide uptake of each species. The planting distance of street trees in Seoul was 1,851,180 m, the number of planting lines was 1,287, the number of planted trees was 284,498, and 46 species of trees were planted as of 2013. By applying the numbers of each street tree species in Seoul to the absorption amounts of each species, 120,097 tons of biomass, 60,049.8 tons of carbon stored, and an annual carbon dioxide uptake of 11,294 t CO2/year were calculated. The street ratio given in the 2022 road statistics for Seoul is 23.13%. Assuming that the street trees increase at the same rate, the number of street trees in Seoul was calculated to be 294,823, the planting distance was estimated to be 1,918,360 m, and the annual carbon dioxide uptake was estimated to be 11,704 t CO2/year. Plans for improving the annual carbon dioxide uptake of street trees were established based on this expected absorption. The first is to improve the annual uptake by increasing the number of planted street trees after adjusting the planting distance. If the current planting distance is adjusted to 6 m, it turns out that 12,692.7 t CO2/year would be absorbed on an annual basis. The second is to change the species to tulip trees, which have a high absorption rate. If the proportion of tulip trees is increased to 30% by 2022, the annual carbon dioxide uptake was calculated to be 17,804.4 t CO2/year.
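The 6 m spacing scenario can be checked with the abstract's own numbers; the proportional-uptake assumption below (uptake scales linearly with tree count) is made explicit for the sketch:

```python
# Reproduce the scaling arithmetic for the 6 m planting-distance scenario.
planting_distance_m = 1_918_360   # projected total planting distance (m)
trees_projected = 294_823         # projected tree count at current spacing
uptake_projected = 11_704.0       # projected annual uptake, t CO2/year

# Re-spacing to 6 m increases the tree count along the same total distance;
# annual uptake is assumed to scale in proportion to the number of trees.
trees_at_6m = planting_distance_m / 6
uptake_at_6m = uptake_projected * trees_at_6m / trees_projected
print(round(uptake_at_6m, 1))  # matches the ~12,692.7 t CO2/year in the text
```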

Keywords: absorption of carbon dioxide, source of absorbing carbon dioxide, trees in city, improving species

Procedia PDF Downloads 360
526 Analysis of Earthquake Potential and Shock Level Scenarios in South Sulawesi

Authors: Takhul Bakhtiar

Abstract:

In South Sulawesi Province, the active Walanae Fault causes this area to frequently experience earthquakes. This study aims to determine the seismicity level of earthquakes in order to estimate the potential for earthquakes in the future. A scenario model is then built from the estimated earthquake potential to determine the expected level of shaking, as an effort to mitigate earthquake disasters in the region. The method used in this study is the Gutenberg-Richter method through the statistical likelihood approach. This study used earthquake data for the South Sulawesi region from 1972 to 2022. The research location is at the coordinates 3.5° - 5.5° South Latitude and 119.5° - 120.5° East Longitude and is divided into two segments, namely the northern segment at 3.5° - 4.5° South Latitude and 119.5° - 120.5° East Longitude, and the southern segment at 4.5° - 5.5° South Latitude and 119.5° - 120.5° East Longitude. This study uses earthquake parameters with magnitude > 1 and depth < 50 km. The results of the analysis show that the potential for an earthquake of magnitude M = 7 in the next ten years in the northern segment is estimated at 98.81%, with an estimated shaking level of VI-VII MMI around the cities of Pare-Pare, Barru, Pinrang and Soppeng and IV-V MMI in the cities of Bulukumba, Selayar, Makassar and Gowa. In the southern segment, the potential for an earthquake of magnitude M = 7 in the next ten years is estimated at 32.89%, with an estimated shaking level of VI-VII MMI in the cities of Bulukumba, Selayar, Makassar and Gowa, and III-IV MMI around the cities of Pare-Pare, Barru, Pinrang and Soppeng.
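A compact sketch of the Gutenberg-Richter likelihood machinery: Aki's maximum-likelihood b-value estimator plus a Poisson exceedance probability. The mini-catalog and G-R parameters below are hypothetical, not the study's values:

```python
import numpy as np

def b_value_mle(mags, m_c, dm=0.1):
    """Aki's maximum-likelihood b-value for magnitudes above completeness m_c
    (dm is the magnitude binning width)."""
    mags = np.asarray(mags, float)
    return np.log10(np.e) / (mags.mean() - (m_c - dm / 2.0))

def prob_exceedance(a, b, m, t_years):
    """Poisson probability of at least one M >= m event within t_years,
    with the annual rate taken from the G-R relation log10 N = a - b*m."""
    rate = 10.0 ** (a - b * m)        # expected events per year with M >= m
    return 1.0 - np.exp(-rate * t_years)

# Hypothetical mini-catalog above completeness magnitude 4.0.
b = b_value_mle([4.0, 4.2, 4.5, 5.0, 4.3], m_c=4.0)
# Hypothetical segment parameters: probability of M >= 7 within ten years.
p = prob_exceedance(a=4.5, b=0.75, m=7.0, t_years=10.0)
print(round(b, 2), round(p, 2))
```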

Keywords: Gutenberg Richter, likelihood method, seismicity, shakemap and MMI scale

Procedia PDF Downloads 119
525 Aerodynamic Modeling Using Flight Data at High Angle of Attack

Authors: Rakesh Kumar, A. K. Ghosh

Abstract:

The paper presents the modeling of linear and nonlinear longitudinal aerodynamics using real flight data of the Hansa-3 aircraft gathered at low and high angles of attack. The Neural-Gauss-Newton (NGN) method has been applied to model the linear and nonlinear longitudinal dynamics and estimate parameters from flight data. Unsteady aerodynamics due to flow separation at high angles of attack near stall has been included in the aerodynamic model using Kirchhoff's quasi-steady stall model. The NGN method is an algorithm that utilizes a Feed Forward Neural Network (FFNN) and Gauss-Newton optimization to estimate the parameters; it does not require any a priori postulation of a mathematical model or solving of the equations of motion. The NGN method was validated on real flight data generated at moderate angles of attack before application to the data at high angles of attack. The estimates obtained from compatible flight data using the NGN method were validated by comparison with wind tunnel values and the maximum likelihood estimates. Validation was also carried out by comparing the response of measured motion variables with the response generated by using the estimates for a different control input. Next, the NGN method was applied to real flight data generated by executing a well-designed quasi-steady stall maneuver. The results obtained in terms of stall characteristics and aerodynamic parameters were encouraging and reasonably accurate, establishing NGN as a method for modeling nonlinear aerodynamics from real flight data at high angles of attack.

Keywords: parameter estimation, NGN method, linear and nonlinear, aerodynamic modeling

Procedia PDF Downloads 443
524 Impact of Vehicle Travel Characteristics on Level of Service: A Comparative Analysis of Rural and Urban Freeways

Authors: Anwaar Ahmed, Muhammad Bilal Khurshid, Samuel Labi

Abstract:

The effect of trucks on the level of service is determined by considering passenger car equivalents (PCE) of trucks. The current version of the Highway Capacity Manual (HCM) uses a single PCE value for all trucks combined. However, the composition of truck traffic varies from location to location; therefore a single PCE value for all trucks may not correctly represent the impact of truck traffic at specific locations. Consequently, the present study developed separate PCE values for single-unit and combination trucks to replace the single value provided in the HCM on different freeways. Site-specific PCE values were developed using the concept of spatial lagging headways (the distance from the rear bumper of a leading vehicle to the rear bumper of the following vehicle) measured from field traffic data. The study used data from four locations on a single urban freeway and three different rural freeways in Indiana. Three-stage least squares (3SLS) regression techniques were used to generate models that predicted lagging headways for passenger cars, single-unit trucks (SUT), and combination trucks (CT). The estimated PCE values for single-unit and combination trucks for basic urban freeways (level terrain) were 1.35 and 1.60, respectively. For rural freeways the estimated PCE values for single-unit and combination trucks were 1.30 and 1.45, respectively. As expected, traffic variables such as vehicle flow rates and speed have significant impacts on vehicle headways. The study results revealed that the use of separate PCE values for different truck classes can have a significant influence on LOS estimation.
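The headway-based idea can be illustrated with a simple ratio of mean spatial lagging headways; the study itself predicts headways with 3SLS models, and the field values below are hypothetical:

```python
# Simplified headway-ratio PCE: the spatial lagging headway a truck occupies,
# relative to that of a passenger car under the same traffic conditions.
def pce_from_headways(truck_headways_m, car_headways_m):
    mean_truck = sum(truck_headways_m) / len(truck_headways_m)
    mean_car = sum(car_headways_m) / len(car_headways_m)
    return mean_truck / mean_car

# Hypothetical field headways in metres.
cars = [48.0, 52.0, 50.0, 46.0, 54.0]    # mean 50 m
single_unit = [62.0, 66.0, 67.0, 65.0]   # mean 65 m
combination = [78.0, 82.0, 80.0]         # mean 80 m

print(round(pce_from_headways(single_unit, cars), 2))  # -> 1.3
print(round(pce_from_headways(combination, cars), 2))  # -> 1.6
```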

Keywords: level of service, capacity analysis, lagging headway, trucks

Procedia PDF Downloads 354
523 Artificial Intelligence and Law

Authors: Mehrnoosh Abouzari, Shahrokh Shahraei

Abstract:

With the development of artificial intelligence in the present age, intelligent machines and systems have proven their actual and potential capabilities and are increasingly present in various fields of human life: industry, financial transactions, marketing, manufacturing, services, politics, economics, and various branches of the humanities. Therefore, despite the conservatism and prudence of the legal field, traces of artificial intelligence can be seen in various areas of law. These include estimating the capability of judicial robotics, intelligent judicial decision-making systems, intelligent adjustment of defense and prosecution strategies, and the consolidation and regulation of the different and scattered laws relevant to each case so as to achieve judicial coherence, reduce divergence of opinion, and shorten prolonged hearings and the discontent they cause in the current legal system; designing rule-based, case-based, and knowledge-based systems are among the efforts to apply AI in law. In this article, we identify the ways in which AI is applied in laws and regulations, identify the dominant concerns in this area, and outline the relationship between the two fields, in order to answer the question of how artificial intelligence can be used in different areas of law and what the implications of this application will be. The authors believe that the use of artificial intelligence in the legislative, judicial, and executive branches can be very effective in government decision-making and smart governance, helping to reach smart communities across human and geographical boundaries and toward humanity's long-held dream of a global village free of violence, partiality, and human error. Therefore, in this article, we analyze the dimensions of how artificial intelligence can be used in the three branches of government, legislative, judicial, and executive, in order to realize its application.

Keywords: artificial intelligence, law, intelligent system, judge

Procedia PDF Downloads 117
522 Development and Validation of Selective Methods for Estimation of Valaciclovir in Pharmaceutical Dosage Form

Authors: Eman M. Morgan, Hayam M. Lotfy, Yasmin M. Fayez, Mohamed Abdelkawy, Engy Shokry

Abstract:

Two simple, selective, economic, safe, accurate, precise and environmentally friendly methods were developed and validated for the quantitative determination of valaciclovir (VAL) in the presence of its related substances R1 (acyclovir) and R2 (guanine) in bulk powder and in the commercial pharmaceutical product containing the drug. Method A is a colorimetric method in which VAL selectively reacts with ferric hydroxamate and the developed color is measured at 490 nm over a concentration range of 0.4-2 mg/mL, with a percentage recovery of 100.05 ± 0.58 and a correlation coefficient of 0.9999. Method B is a reversed-phase ultra-performance liquid chromatographic (UPLC) technique, which is considered superior to high-performance liquid chromatography with respect to speed, resolution, solvent consumption, time, and cost of analysis. Efficient separation was achieved on an Agilent Zorbax CN column using ammonium acetate (0.1%) and acetonitrile as the mobile phase in a linear gradient program. The elution time for the separation was less than 5 min, and ultraviolet detection was carried out at 256 nm over a concentration range of 2-50 μg/mL, with a mean percentage recovery of 100.11 ± 0.55 and a correlation coefficient of 0.9999. The proposed methods were fully validated as per International Conference on Harmonisation specifications and effectively applied for the analysis of valaciclovir in pure form and in tablet dosage form. Statistical comparison of the results obtained by the proposed and official or reported methods revealed no significant difference in the performance of these methods regarding accuracy and precision, respectively.
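The linearity and recovery figures come from a standard calibration-curve workflow, sketched here with made-up peak areas (not the paper's data):

```python
import numpy as np

# Hypothetical UPLC calibration: peak area vs concentration over 2-50 ug/mL.
conc = np.array([2.0, 5.0, 10.0, 20.0, 35.0, 50.0])
area = np.array([41.0, 102.0, 201.0, 398.0, 701.0, 1000.0])

# Least-squares line and correlation coefficient for the calibration curve.
slope, intercept = np.polyfit(conc, area, 1)
r = np.corrcoef(conc, area)[0, 1]

# Percentage recovery for a QC sample of known (nominal) concentration,
# back-calculated from its measured peak area via the calibration line.
nominal = 25.0
measured = (500.0 - intercept) / slope
recovery = 100.0 * measured / nominal
print(f"r = {r:.4f}, recovery = {recovery:.1f}%")
```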

Keywords: hydroxamic acid, related substances, UPLC, valaciclovir

Procedia PDF Downloads 245
521 Classical and Bayesian Inference of the Generalized Log-Logistic Distribution with Applications to Survival Data

Authors: Abdisalam Hassan Muse, Samuel Mwalili, Oscar Ngesa

Abstract:

A generalized log-logistic distribution with variable shapes of the hazard rate was introduced and studied, extending the log-logistic distribution by adding an extra parameter to the classical distribution and leading to greater flexibility in analysing and modeling various data types. The proposed distribution has a large number of well-known lifetime special sub-models such as the Weibull, log-logistic, exponential, and Burr XII distributions. Its basic mathematical and statistical properties were derived. The method of maximum likelihood was adopted for estimating the unknown parameters of the proposed distribution, and a Monte Carlo simulation study was carried out to assess the behavior of the estimators. The importance of this distribution lies in its ability to model both monotone (increasing and decreasing) and non-monotone (unimodal and bathtub-shaped, or reversed 'bathtub'-shaped) hazard rate functions, which are quite common in survival and reliability data analysis. Furthermore, the flexibility and usefulness of the proposed distribution are illustrated on a real-life data set and compared to its sub-models, the Weibull, log-logistic, and Burr XII distributions, and to other three-parameter parametric survival distributions such as the exponentiated Weibull distribution, the 3-parameter lognormal distribution, the 3-parameter gamma distribution, the 3-parameter Weibull distribution, and the 3-parameter log-logistic (also known as shifted log-logistic) distribution. The proposed distribution provided a better fit than all of the competing distributions based on goodness-of-fit tests, log-likelihood, and information criterion values. Finally, Bayesian analysis and an assessment of the performance of Gibbs sampling for the data set were also carried out.
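Since the classical log-logistic distribution is available in SciPy as `fisk`, the maximum likelihood step for that (non-generalized) sub-model can be sketched on simulated survival times:

```python
import numpy as np
from scipy.stats import fisk  # SciPy's name for the log-logistic distribution

# Simulate survival times from a log-logistic model (shape c=2, scale=3),
# then recover the parameters by maximum likelihood, the estimation route
# the paper adopts for its generalized variant.
rng = np.random.default_rng(7)
times = fisk.rvs(c=2.0, scale=3.0, size=4000, random_state=rng)

c_hat, loc, scale_hat = fisk.fit(times, floc=0)  # pin location at zero
print(c_hat, scale_hat)  # estimates near the true 2.0 and 3.0
```

The generalized three-parameter model of the paper would need a custom log-likelihood maximized numerically; the sub-model fit above is only the simplest instance of that procedure.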

Keywords: hazard rate function, log-logistic distribution, maximum likelihood estimation, generalized log-logistic distribution, survival data, Monte Carlo simulation

Procedia PDF Downloads 201
520 Retail Strategy to Reduce Waste Keeping High Profit Utilizing Taylor's Law in Point-of-Sales Data

Authors: Gen Sakoda, Hideki Takayasu, Misako Takayasu

Abstract:

Waste reduction is a fundamental problem for sustainability. Methods for waste reduction with point-of-sales (POS) data are proposed, utilizing the knowledge of a recent econophysics study on a statistical property of POS data. Concretely, a non-stationary time series analysis method based on the particle filter is developed, which considers the anomalous fluctuation scaling known as Taylor's law. This method is extended to handle sales data that are incomplete because of stock-outs, by introducing maximum likelihood estimation for censored data. A way to determine the optimal stock level that prices the cost of waste reduction is also proposed. This study focuses on the examination of the methods for large sales numbers, where Taylor's law is evident. Numerical analysis using aggregated POS data shows the effectiveness of the methods in reducing food waste while maintaining a high profit for large sales numbers. Moreover, pricing the cost of waste reduction reveals that a small profit loss realizes substantial waste reduction, especially when the proportionality constant of Taylor's law is small. Specifically, around a 1% profit loss realizes a halving of disposal at a constant of 0.12, which is the actual value for the processed food items used in this research. The methods provide practical and effective solutions for waste reduction while keeping a high profit, especially with large sales numbers.
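Taylor's law is a power-law relation between an item's sales variance and mean; estimating its exponent and prefactor by a log-log least-squares fit can be sketched on synthetic data (the constant 0.5 and exponent 1.8 below are arbitrary, not the paper's values):

```python
import numpy as np

# Taylor's law: Var ~ A * Mean^beta across items. Generate synthetic per-item
# mean/variance pairs obeying the law with multiplicative noise, then recover
# A and beta from a straight-line fit in log-log space.
rng = np.random.default_rng(1)
means = 10 ** rng.uniform(0, 3, size=200)  # item-level mean daily sales
variances = 0.5 * means ** 1.8 * 10 ** rng.normal(0, 0.05, size=200)

beta, log_a = np.polyfit(np.log10(means), np.log10(variances), 1)
print(beta, 10 ** log_a)  # slope near 1.8, prefactor near 0.5
```

In the actual method this scaling enters the particle filter's observation model, so that the filter's noise level grows correctly with the sales scale.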

Keywords: food waste reduction, particle filter, point-of-sales, sustainable development goals, Taylor's law, time series analysis

Procedia PDF Downloads 130