Search results for: cost prediction
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 8181


6561 Optimal Maintenance Policy for a Three-Unit System

Authors: A. Abbou, V. Makis, N. Salari

Abstract:

We study the condition-based maintenance (CBM) problem of a system subject to stochastic deterioration. The system is composed of three units (or modules): (i) Module 1 deterioration follows a Markov process with two operational states and one failure state. The operational states are partially observable through periodic condition monitoring. (ii) Module 2 deterioration follows a Gamma process with a known failure threshold. The deterioration level of this module is fully observable through periodic inspections. (iii) For Module 3, only operating age information is available; the lifetime of this module has a general distribution. A CBM policy prescribes when to initiate a maintenance intervention and which modules to repair during the intervention. Our objective is to determine the optimal CBM policy minimizing the long-run expected average cost of operating the system. This is achieved by formulating a Markov decision process (MDP) and developing a value iteration algorithm to solve it. We provide numerical examples illustrating the cost-effectiveness of the optimal CBM policy through a comparison with heuristic policies commonly found in the literature.
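The value iteration mentioned above can be sketched on a toy discounted-cost repair MDP; all states, costs, and transition probabilities below are invented for illustration and are not the authors' partially observable three-unit model:

```python
# Toy discounted-cost value iteration (hypothetical two-state machine).
states = [0, 1]                      # 0 = good, 1 = degraded
actions = ["run", "repair"]
cost = {"run": [0.0, 4.0],           # running a degraded unit is costly
        "repair": [10.0, 10.0]}      # repair has a fixed cost
P = {"run":    [[0.9, 0.1],          # P[a][s][t]: transition probabilities
                [0.0, 1.0]],
     "repair": [[1.0, 0.0],
                [1.0, 0.0]]}         # repair restores the good state
gamma = 0.95                         # discount factor

V = [0.0, 0.0]
for _ in range(500):                 # value iteration sweeps
    V = [min(cost[a][s] + gamma * sum(P[a][s][t] * V[t] for t in states)
             for a in actions) for s in states]

policy = [min(actions, key=lambda a: cost[a][s]
              + gamma * sum(P[a][s][t] * V[t] for t in states))
          for s in states]           # greedy policy w.r.t. converged V
```

With these numbers the converged policy runs the good unit and repairs the degraded one, mirroring the intuition behind a CBM threshold policy.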

Keywords: reliability, maintenance optimization, Markov decision process, heuristics

Procedia PDF Downloads 224
6560 Improved Throttled Load Balancing Approach for Cloud Environment

Authors: Sushant Singh, Anurag Jain, Seema Sabharwal

Abstract:

Cloud computing is advancing rapidly and has already been adopted by a large user base. Its ease of use and anywhere-access make it more attractive than competing technologies; this has reduced deployment costs on the user side and allowed large companies to sell their infrastructure and recover their installation costs. Cloud computing has its roots in grid computing, and along with the inherited strengths of its predecessor technologies it has also inherited their loopholes. Some of these have been identified and corrected recently, but others remain to be rectified. The two areas with the most scope for improvement are security and performance. The proposed work is devoted to performance enhancement for users of the existing cloud system by improving the basic throttled mapping between tasks and resources. The improved procedure has been tested using the CloudAnalyst simulator; a comparison with the original shows that the proposed work is a step ahead of existing techniques.
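The basic throttled approach the abstract builds on keeps a VM availability table and assigns each task to the first free VM, queuing it when none is free. A minimal behavioral sketch (the class and method names are hypothetical, not the CloudAnalyst API):

```python
# Behavioral sketch of throttled load balancing (hypothetical API).
class ThrottledBalancer:
    def __init__(self, n_vms):
        self.available = [True] * n_vms    # VM availability table

    def allocate(self):
        """Return the index of the first available VM, or -1 (task queues)."""
        for vm, free in enumerate(self.available):
            if free:
                self.available[vm] = False  # mark VM busy
                return vm
        return -1                           # all VMs busy

    def release(self, vm):
        self.available[vm] = True           # VM finished its task

lb = ThrottledBalancer(2)
a, b, c = lb.allocate(), lb.allocate(), lb.allocate()  # third task must wait
lb.release(a)
d = lb.allocate()                           # reuses the freed VM
```

Improvements of the kind the paper proposes typically refine how the table is scanned or how waiting tasks are matched to freed VMs.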

Keywords: cloud analyst, cloud computing, load balancing, throttled

Procedia PDF Downloads 254
6559 Synthesis of Nanoparticles and Thin Film of Cu₂ZnSnS₄ by Hydrothermal Method and Its Application as Congo Red Photocatalyst

Authors: Paula Salazar, Rodrigo Henríquez, Pablo Zerega

Abstract:

The textile, food and pharmaceutical industries are expanding daily worldwide, and they are among the most polluting industries because their wastewater is discharged into watercourses with high concentrations of dyes and traces of drugs. Many of these compounds are stable to light and biodegradation and are considered emerging organic contaminants. Advanced oxidation processes (AOPs) are an effective alternative for the removal and elimination of this type of contaminant, and heterogeneous photocatalysis has been extensively studied as an efficient, low-cost and durable method. TiO₂ has been the main photocatalyst used for the degradation of a large number of dyes and drugs; its disadvantage is that it absorbs only in the UV region of the solar spectrum. Quaternary chalcogenides based on Cu₂ZnSnX₄ (X = S, Se) are a possible alternative due to their narrow bandgap (ca. 0.8 to 1.5 eV depending on the phase considered), low cost, the abundance of their constituent elements in the earth's crust and their low toxicity. The objective of this research was to synthesize Cu₂ZnSnS₄ (CZTS) through a low-cost hydrothermal method and to evaluate it as a potential photocatalyst for the photodegradation of Congo Red. The synthesis of the nanoparticles in suspension and as a film on fluorine-doped tin oxide coated glass (FTO) was carried out using a mixture of 2 mmol CuCl₂, 1 mmol ZnCl₂, 1 mmol SnCl₂ and 4 mmol CH₄N₂S in a Teflon reactor at 180 °C for 72 h. Characterization was performed by scanning electron microscopy (SEM), X-ray diffraction (XRD) and UV-Vis spectroscopy, and photodegradation was monitored with a UV-Vis spectrophotometer. The results show that 55% photodegradation of the dye can be obtained after 4 h of exposure to polychromatic light; notably, this is the first time the Congo Red dye has been studied with this photocatalyst.

Keywords: CZTS, hydrothermal, photocatalysis, dye

Procedia PDF Downloads 128
6558 A Generalized Weighted Loss for Support Vector Classification and Multilayer Perceptron

Authors: Filippo Portera

Abstract:

In a regression task, standard algorithms usually employ a loss in which each error is simply the absolute difference between the true value and the prediction. In the present work, we propose several error-weighting schemes that generalize this consolidated routine. We study both a binary classification model for Support Vector Classification and a regression network for a Multilayer Perceptron. Results show that the weighted loss is never worse than the standard procedure and is often better.
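One way to read the generalization described above: scale each sample's absolute error by a per-sample weight, recovering the standard loss when all weights are equal. A sketch with made-up data, not the paper's specific schemes:

```python
# Weighted absolute-error loss; uniform weights reduce to the standard MAE.
def weighted_loss(y_true, y_pred, weights=None):
    n = len(y_true)
    w = weights or [1.0] * n             # uniform weights = standard loss
    return sum(wi * abs(t - p)
               for wi, t, p in zip(w, y_true, y_pred)) / sum(w)

y_true = [1.0, 2.0, 3.0]
y_pred = [1.5, 2.0, 1.0]
standard = weighted_loss(y_true, y_pred)                      # plain MAE
emphasized = weighted_loss(y_true, y_pred, [1.0, 1.0, 4.0])   # stress the big error
```

Up-weighting the third sample makes its large error dominate the loss, which is the kind of re-emphasis a weighting scheme can exploit during training.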

Keywords: loss, binary-classification, MLP, weights, regression

Procedia PDF Downloads 101
6557 Dual-Rail Logic Unit in Double Pass Transistor Logic

Authors: Hamdi Belgacem, Fradi Aymen

Abstract:

In this paper, we present a low-power, low-cost differential logic unit (LU). The proposed LU receives dual-rail inputs and generates dual-rail outputs. The circuit can be used in the arithmetic and logic unit (ALU) of a processor, and it can also be dedicated to self-checking applications based on the dual duplication code. Four logic functions, as well as their inverses, are implemented within a single logic unit. The hardware overhead of the proposed LU is lower than that of a standard LU implemented in standard CMOS logic. This implementation is attractive because fewer transistors are required to implement important logic functions: the proposed differential logic unit performs 8 Boolean logic operations using only 16 transistors. SPICE simulations using a 32 nm technology were used to evaluate the performance of the proposed circuit and to demonstrate its acceptable electrical behaviour.
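Dual-rail encoding carries each signal as a (value, complement) pair, which is why one gate can yield a function and its inverse together (e.g. AND and NAND). A behavioral sketch of the encoding, not the transistor-level DPL circuit:

```python
# Behavioral model of dual-rail logic: each signal is (value, complement).
def dual(v):                 # encode a bit as a dual-rail pair
    return (v, 1 - v)

def dr_and(a, b):            # produces (AND, NAND) simultaneously
    return (a[0] & b[0], a[1] | b[1])

def dr_or(a, b):             # produces (OR, NOR)
    return (a[0] | b[0], a[1] & b[1])

def dr_xor(a, b):            # produces (XOR, XNOR)
    return (a[0] & b[1] | a[1] & b[0],
            a[0] & b[0] | a[1] & b[1])

x, y = dual(1), dual(0)
out = dr_and(x, y)           # (AND=0, NAND=1)
```

Every output pair stays complementary for all valid inputs, which is exactly the invariant a dual-duplication self-checking scheme monitors.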

Keywords: differential logic unit, double pass transistor logic, low power CMOS design, low cost CMOS design

Procedia PDF Downloads 456
6556 Comparative Study of Conventional and Satellite Based Agriculture Information System

Authors: Rafia Hassan, Ali Rizwan, Sadaf Farhan, Bushra Sabir

Abstract:

The purpose of this study is to compare the conventional crop monitoring system with the satellite-based crop monitoring system in Pakistan. The study was conducted for SUPARCO (Space and Upper Atmosphere Research Commission) and focused on wheat, the main cash crop of Pakistan and of the province of Punjab. It answers the question: which system is better in terms of cost, time and manpower? The manpower calculated for the Punjab CRS (Crop Reporting Service) is 1,418 personnel versus 26 for SUPARCO. The total cost calculated for SUPARCO is almost 13.35 million against 47.705 million for the CRS. The man-hours calculated for the CRS are 1,543,200 hrs (136 days) versus 8,320 hrs (40 days) for SUPARCO, meaning that SUPARCO workers finish their work 96 days earlier than CRS workers. The results show that the satellite-based crop monitoring system is more efficient than the conventional system in terms of manpower, cost and time, and it also generates earlier crop forecasts and estimates. The research instruments used included interviews, physical visits, group discussions, questionnaires, and the study of reports and workflows. A total of 93 employees were selected using Yamane's formula, and data were collected through questionnaires and interviews. Comparative graphing was used to analyze the data and formulate the results. The findings also demonstrate that although conventional crop-monitoring methods still have a strong presence in Pakistan, it is time to bring change through technology so that agriculture can develop along modern lines.

Keywords: area frame, crop reporting service, CRS, sample frame, SRS/GIS, satellite remote sensing/ geographic information system

Procedia PDF Downloads 294
6555 Power Generation through Water Vapour: An Approach of Using Sea/River/Lake Water as Renewable Energy Source

Authors: Riad

Abstract:

As the world needs more and more energy at low cost, optimal ways of power generation must be found, and renewable energy is one of the greatest low-cost sources. Water vapour from a sea, river or lake can be used for power generation by exploiting the greenhouse effect in a large, flat water chamber floating on the water surface. The chamber is always kept half filled: when water evaporates under sunlight, high-pressure water vapour accumulates in the chamber, and by passing it through a pipe and using aerodynamics it can be used for power generation. The water level of the chamber is controlled by some means. A large amount of water evaporates; as an estimate, approximately 3,000 to 4,000 gallons of water evaporate per acre of surface (more under the greenhouse effect). This large amount of water vapour, passed through a pipe, can be utilized as a source of power generation.

Keywords: renewable energy, greenhouse effect, water chamber, water vapour

Procedia PDF Downloads 359
6554 Relationship Between Expectation (Before) and Satisfaction (After) Receiving Services of Thai Consumers from Domestic Low-Cost Airlines

Authors: Sittichai Charoensettasilp, Chong Wu

Abstract:

This study employs a sample of 400 Thai people who live in Bangkok and have used air transportation to travel; a convenience sampling technique was used to collect the data. At the 0.05 significance level, the means of Thai consumers' expectations (before) and satisfaction (after) receiving services differ for the service marketing mix, both overall and for each individual aspect. Average expectations before receiving services are higher than satisfaction after receiving services in all aspects. Further analysis of the correlation between the average means shows that, in general, expectations before receiving services exceed satisfaction after receiving services, and that any aspect of the service marketing mix with a large gap between expectation and satisfaction shows low correlation.

Keywords: domestic low-cost airlines, Thai consumers, relationship, expectation before receiving services, satisfaction after receiving services

Procedia PDF Downloads 407
6553 The Logistics Collaboration in Supply Chain of Orchid Industry in Thailand

Authors: Chattrarat Hotrawaisaya

Abstract:

This research aims to formulate a logistics collaborative model as a management tool for orchid flower exporters. The researchers study logistics activities in the orchid supply chain that stakeholders can collaborate on and develop, including demand forecasting, inventory management, warehousing and storage, order processing, and transportation management. The research also explores the implementation of logistics collaboration among the orchid industry's stakeholders. Data were collected before and after the model's implementation, and costs and efficiency were calculated and compared between the pre- and post-implementation periods. The research found that applying the logistics collaborative model to an orchid exporter reduces inventory and transport costs, improves forecasting accuracy, and synchronizes the exporter's supply chain. This paper contributes a unique logistics collaborative model of value to the orchid industry in Thailand, which orchid exporters may use as a management tool aimed at competitive advantage.

Keywords: logistics, orchid, supply chain, collaboration

Procedia PDF Downloads 441
6552 Analysis of Biomarkers in Intractable Epileptogenic Brain Networks with Independent Component Analysis and Deep Learning Algorithms: A Comprehensive Framework for Scalable Seizure Prediction with Unimodal Neuroimaging Data in Pediatric Patients

Authors: Bliss Singhal

Abstract:

Epilepsy is a prevalent neurological disorder affecting approximately 50 million individuals worldwide, including 1.2 million Americans. Millions of pediatric patients have intractable epilepsy, a condition in which seizures fail to come under control. Seizures can result in physical injury, disorientation, unconsciousness, and additional symptoms that impede children's ability to participate in everyday tasks. Predicting seizures can help parents and healthcare providers take precautions, prevent risky situations, and mentally prepare children to minimize the anxiety and nervousness associated with the uncertainty of a seizure. This research proposes a comprehensive framework to predict seizures in pediatric patients by evaluating machine learning algorithms on unimodal neuroimaging data consisting of electroencephalogram signals. Bandpass filtering and independent component analysis proved effective in reducing noise and artifacts in the dataset. The performance of various machine learning algorithms is evaluated on important metrics such as accuracy, precision, specificity, sensitivity, F1 score and MCC. The results show that deep learning algorithms are more successful in predicting seizures than logistic regression and k-nearest neighbors. The recurrent neural network (RNN) gave the highest precision and F1 score, long short-term memory (LSTM) outperformed the RNN in accuracy, and the convolutional neural network (CNN) achieved the highest specificity. This research has significant implications for healthcare providers in proactively managing seizure occurrence in pediatric patients, potentially transforming clinical practices and improving pediatric care.
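The metrics named above all derive from the confusion matrix; a small helper with fabricated counts (not the study's results):

```python
# Standard classification metrics from a confusion matrix.
import math

def metrics(tp, tn, fp, fn):
    acc  = (tp + tn) / (tp + tn + fp + fn)
    prec = tp / (tp + fp)
    sens = tp / (tp + fn)                  # sensitivity / recall
    spec = tn / (tn + fp)
    f1   = 2 * prec * sens / (prec + sens)
    mcc  = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return acc, prec, sens, spec, f1, mcc

# Illustrative counts only.
acc, prec, sens, spec, f1, mcc = metrics(tp=40, tn=45, fp=5, fn=10)
```

MCC is often preferred alongside F1 for seizure data because interictal/ictal classes are heavily imbalanced.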

Keywords: intractable epilepsy, seizure, deep learning, prediction, electroencephalogram channels

Procedia PDF Downloads 89
6551 Gradient Boosted Trees on Spark Platform for Supervised Learning in Health Care Big Data

Authors: Gayathri Nagarajan, L. D. Dhinesh Babu

Abstract:

Health care is one of the prominent industries that generate voluminous data, creating a need for machine learning techniques with big data solutions for efficient processing and prediction. Missing data, incomplete data, real-time streaming data, sensitive data, privacy, and heterogeneity are a few of the common challenges to be addressed in processing and mining health care data. Compared with other applications, accuracy and fast processing are of higher importance for health care applications, as they relate directly to human life. Although many machine learning techniques and big data solutions are used for health care data, different techniques and frameworks have proved effective for different applications, largely depending on the characteristics of the datasets. In this paper, we present a framework that uses the ensemble machine learning technique gradient boosted trees for data classification in health care big data. The framework is built on the Spark platform, which is fast in comparison with traditional frameworks. Unlike other works that focus on a single technique, our work compares six different machine learning techniques with gradient boosted trees on datasets of different characteristics. Five benchmark health care datasets are considered for experimentation, and the results of the different techniques are discussed in comparison with gradient boosted trees. The metrics chosen for comparison are the misclassification error rate and the run time of the algorithms. The goals of this paper are to (i) compare the performance of gradient boosted trees with other machine learning techniques on the Spark platform specifically for health care big data, and (ii) discuss the results from experiments on datasets of different characteristics, thereby drawing inferences and conclusions. The experimental results show that for the other machine learning techniques accuracy depends largely on the characteristics of the datasets, whereas gradient boosted trees yield reasonably stable accuracy without depending heavily on dataset characteristics.
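Gradient boosted trees fit each new tree to the residuals of the current prediction. A toy pure-Python version with depth-1 stumps and squared loss, illustrative only and not the Spark implementation used in the paper:

```python
# Toy gradient boosting for regression: stumps fitted to residuals.
def fit_stump(x, residual):
    """Best single-threshold stump minimizing squared error on residuals."""
    best = None
    for s in sorted(set(x)):
        left = [r for xi, r in zip(x, residual) if xi <= s]
        right = [r for xi, r in zip(x, residual) if xi > s]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = (sum((r - lm) ** 2 for r in left)
               + sum((r - rm) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, s, lm, rm)
    return best[1:]                        # (threshold, left_mean, right_mean)

def boost(x, y, rounds=50, lr=0.1):
    pred = [sum(y) / len(y)] * len(y)      # start from the mean
    for _ in range(rounds):
        resid = [yi - pi for yi, pi in zip(y, pred)]   # negative gradient
        s, lm, rm = fit_stump(x, resid)
        pred = [pi + lr * (lm if xi <= s else rm) for xi, pi in zip(x, pred)]
    return pred

x = [1, 2, 3, 4, 5, 6]
y = [1.0, 1.0, 1.0, 5.0, 5.0, 5.0]
pred = boost(x, y)                          # converges toward y
```

Each round shrinks the residual by the learning rate, which is the mechanism behind the stable accuracy the abstract reports.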

Keywords: big data analytics, ensemble machine learning, gradient boosted trees, Spark platform

Procedia PDF Downloads 245
6550 Validation of Asymptotic Techniques to Predict Bistatic Radar Cross Section

Authors: M. Pienaar, J. W. Odendaal, J. C. Smit, J. Joubert

Abstract:

Simulations are commonly used to predict the bistatic radar cross section (RCS) of military targets, since characterization measurements can be expensive and time-consuming; it is thus important to predict bistatic RCS accurately. Computational electromagnetic (CEM) methods, divided into full-wave and asymptotic methods, can be used for bistatic RCS prediction. Full-wave methods are numerical approximations to the exact solution of Maxwell's equations; they are very accurate but computationally intensive and time-consuming. Asymptotic techniques make simplifying assumptions when solving Maxwell's equations and are thus less accurate, but they require fewer computational resources and less time, which makes them very valuable for predicting the bistatic RCS of electrically large targets. This study extends previous work by validating the accuracy of asymptotic techniques against full-wave simulations as well as measurements. Validation is done with canonical structures and complex, realistic aircraft models, instead of only a complex slicy structure (a combination of canonical structures including cylinders, corner reflectors and cubes), over large bistatic angles and at different polarizations. Bistatic RCS measurements were conducted in a compact range at the University of Pretoria, South Africa, at different polarizations from 2 GHz to 6 GHz, using fixed bistatic angles of β = 30.8°, 45° and 90°; the measurements were calibrated with an active calibration target. The EM simulation tool FEKO was used to generate simulated results, and the full-wave multi-level fast multipole method (MLFMM) results together with the measured data served as the reference for validation. The accuracy of physical optics (PO) and geometrical optics (GO) was investigated. Differences in amplitude, lobing structure and null positions were observed between the asymptotic, full-wave and measured data. PO and GO were more accurate at angles close to the specular scattering directions, and the accuracy decreased as the bistatic angle increased. At large bistatic angles, PO did not perform well because the shadow regions were not treated appropriately; PO also performed poorly for canonical structures where multi-bounce was the main scattering mechanism. PO and GO do not account for diffraction, but these inaccuracies decreased as the electrical size of the objects increased. It was evident that both asymptotic techniques do not properly account for bistatic structural shadowing, although specular scattering was calculated accurately even for targets that did not meet the electrically-large criterion. The bistatic RCS prediction performance of PO and GO thus depends on incident angle, frequency, target shape and observation angle. The improved computational efficiency of the asymptotic solvers is a major advantage over full-wave solvers and measurements; however, there is still much room for improving the accuracy of these asymptotic techniques.

Keywords: asymptotic techniques, bistatic RCS, geometrical optics, physical optics

Procedia PDF Downloads 261
6549 A Low Cost Education Proposal Using Strain Gauges and Arduino to Develop a Balance

Authors: Thais Cavalheri Santos, Pedro Jose Gabriel Ferreira, Alexandre Daliberto Frugoli, Lucio Leonardo, Pedro Americo Frugoli

Abstract:

This paper presents a low-cost educational proposal to be used in engineering courses. Providing quality, affordable engineering education in a developing country that needs an increasing number of engineers is a difficult problem. In Brazil, the political and economic scenario requires academic managers who are able to reduce costs without compromising the quality of education. Within this context, a method for teaching physics principles through the construction of an electronic balance is proposed. First, a method to develop and construct a load cell is presented, through which students can understand the physical principles of strain gauges and bridge circuits. The load cell structure was made of 6351-T6 aluminum with dimensions of 80 mm x 13 mm x 13 mm, and for its instrumentation a complete Wheatstone bridge was assembled with 350 Ω strain gauges. The process also involves a software tool to document the prototypes (circuit designs), signal conditioning, a microcontroller, C-language programming, and the development of the prototype. The project uses an open-source I/O board (an Arduino microcontroller): the circuit is designed in the Fritzing software and the controller is programmed in the open-source Arduino IDE. A load cell was chosen because strain gauges are accurate and have many applications in industry. A prototype was developed for this study, and it confirmed the affordability of this educational idea. Furthermore, the proposal aims to motivate students to understand the many possible high-technology applications of load cells and microcontrollers.
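The bridge converts the gauge's tiny resistance change into a measurable voltage. A quarter-bridge estimate with an assumed excitation voltage, gauge factor, and strain (the paper's load cell uses a full bridge, which roughly quadruples this output):

```python
# Quarter-bridge output for a 350-ohm strain gauge (assumed parameters).
def quarter_bridge_vout(v_exc, r_nominal, gauge_factor, strain):
    dr = r_nominal * gauge_factor * strain        # gauge resistance change
    r_active = r_nominal + dr
    # One active arm; the other three arms stay at the nominal resistance.
    return v_exc * (r_active / (r_active + r_nominal) - 0.5)

vout = quarter_bridge_vout(v_exc=5.0, r_nominal=350.0,
                           gauge_factor=2.0, strain=500e-6)  # 500 microstrain
```

The result is on the order of a millivolt, which is why signal conditioning (amplification) is needed before the Arduino's ADC can read it.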

Keywords: Arduino, load cell, low-cost education, strain gauge

Procedia PDF Downloads 309
6548 A Study of Anthropometric Correlation between Upper and Lower Limb Dimensions in Sudanese Population

Authors: Altayeb Abdalla Ahmed

Abstract:

Skeletal phenotype is a product of a balanced interaction between genetics and environmental factors throughout different life stages; interlimb proportions therefore vary between populations. Although interlimb proportion indices have been used in anthropology to assess the influence of various environmental factors on the limbs, an extensive literature review revealed a paucity of published research assessing interlimb part correlations and the possibility of reconstruction. Hence, this study aims to assess the relationships between upper and lower limb parts and to develop regression formulae to reconstruct the parts from one another. The left upper arm length, ulnar length, wrist breadth, hand length, hand breadth, tibial length, bimalleolar breadth, foot length, and foot breadth of 376 right-handed subjects, comprising 187 males and 189 females (aged 25-35 years), were measured. The data were first analyzed using basic univariate analysis and independent t-tests; sex-specific simple and multiple linear regression models were then used to estimate upper limb parts from lower limb parts and vice versa. The results indicated significant sexual dimorphism for all variables and a significant correlation between the upper and lower limb parts (p < 0.01). Linear and multiple (stepwise) regression equations were developed to reconstruct the limb parts in the presence of a single dimension or multiple dimensions from the other limb; multiple stepwise regression equations produced better reconstructions than simple equations. These results are significant in forensics, as they can aid the identification of multiple isolated limb parts, particularly during mass disasters and criminal dismemberment. Although DNA analysis is the most reliable identification tool, its use faces multiple limitations in underdeveloped countries, e.g., cost, facility availability, and trained personnel. The findings also have important implications for plastic and orthopedic reconstructive surgery. This is the only reported study assessing the correlation and prediction capabilities between many of the upper and lower limb dimensions. The present study demonstrates a significant correlation between the interlimb parts in both sexes, which indicates the possibility of reconstruction using regression equations.
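The simple linear regression reconstruction works as follows; the limb measurements below are fabricated for illustration, not the study's data:

```python
# Least-squares fit of one limb dimension from another (made-up data).
def fit_line(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sxy / sxx                      # slope
    a = my - b * mx                    # intercept
    return a, b

tibia = [34.0, 35.5, 36.0, 37.2, 38.1]   # cm, hypothetical
ulna  = [24.1, 25.0, 25.3, 26.2, 26.8]   # cm, hypothetical
a, b = fit_line(tibia, ulna)
est = a + b * 36.5                     # reconstruct ulnar length from tibia
```

Stepwise multiple regression extends this by adding lower-limb predictors one at a time while they improve the fit, which is why it reconstructs better than a single-predictor equation.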

Keywords: anthropometry, correlation, limb, Sudanese

Procedia PDF Downloads 299
6547 High Electrochemical Performance of Electrode Material Based On Mesoporous RGO@(Co,Mn)3O4 Nanocomposites

Authors: Charmaine Lamiel, Van Hoa Nguyen, Deivasigamani Ranjith Kumar, Jae-Jin Shim

Abstract:

The quest for alternative energy-storage sources has led to the exploration of supercapacitors. Hybrid supercapacitors, which combine a carbon-based material with transition metals, have yielded long, improved cycle life as well as high energy and power densities. In this study, microwave irradiation was used for the facile and rapid synthesis of mesoporous RGO@(Co,Mn)3O4 nanosheets as an active electrode material. The advantages of this method include the absence of reducing agents, acidic media, and post-heat treatment; additionally, it offers a short reaction time at low temperature and a low power requirement, which keep fabrication and energy costs low. The as-prepared electrode material demonstrated a high capacitance of 953 F•g−1 at 1 A•g−1 in a 6 M KOH electrolyte. Furthermore, the electrode exhibited a high energy density of 76.2 Wh•kg−1 (at a power density of 720 W•kg−1) and a high power density of 7200 W•kg−1 (at an energy density of 38 Wh•kg−1). The synthesis is efficient and cost-effective, with very promising electrochemical performance for use as an active supercapacitor material.

Keywords: cobalt manganese oxide, electrochemical, graphene, microwave synthesis, supercapacitor

Procedia PDF Downloads 363
6546 Field Prognostic Factors on Discharge Prediction of Traumatic Brain Injuries

Authors: Mohammad Javad Behzadnia, Amir Bahador Boroumand

Abstract:

Introduction: Resource-limited situations require allocating the available resources to the greatest number of casualties. Traumatic brain injury (TBI) is a condition for which the patient may need to be transported as soon as possible, and in a mass casualty event with restricted facilities this decision is hard. The Extended Glasgow Outcome Scale (GOSE) has been introduced to assess the global outcome after brain injuries; we therefore aimed to evaluate the prognostic factors associated with GOSE. Materials and Methods: A multicenter cross-sectional study was conducted on 144 patients with TBI admitted to trauma emergency centers. All patients with isolated TBI who were mentally and physically healthy before the trauma entered the study. The patients' information was evaluated, including demographic characteristics, duration of hospital stay, mechanical ventilation on admission, laboratory measurements, and on-admission vital signs. We recorded the patients' TBI-related symptoms and brain computed tomography (CT) scan findings. Results: GOSE assessments showed an increasing trend across the on-discharge (7.47 ± 1.30), within-one-month (7.51 ± 1.30), and within-three-months (7.58 ± 1.21) evaluations (P < 0.001). On discharge, GOSE was positively correlated with the Glasgow Coma Scale (GCS) (r = 0.729, P < 0.001) and motor GCS (r = 0.812, P < 0.001), and inversely with age (r = −0.261, P = 0.002), hospitalization period (r = −0.678, P < 0.001), pulse rate (r = −0.256, P = 0.002) and white blood cell count (WBC). Among imaging signs and trauma-related symptoms, intracranial hemorrhage (ICH), intraventricular hemorrhage (IVH) (P = 0.006), subarachnoid hemorrhage (SAH) (P = 0.06; marginal at P < 0.1), subdural hemorrhage (SDH) (P = 0.032), and epidural hemorrhage (EDH) (P = 0.037) were significantly associated with GOSE at discharge in univariate and multivariable analyses. Conclusion: Our study identified predictive factors that could help decide which casualties should be transported to a trauma center first. According to the findings, GCS, pulse rate, WBC, and, among imaging signs and trauma-related symptoms, ICH, IVH, SAH, SDH, and EDH are significant independent predictors of GOSE at discharge in TBI patients.
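Correlations like those reported (e.g. GOSE versus GCS) are Pearson coefficients; a sketch with invented paired scores, not the study's patient data:

```python
# Pearson correlation coefficient for paired scores (made-up values).
import math

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x)
                    * sum((b - my) ** 2 for b in y))
    return num / den

gcs  = [15, 13, 9, 7, 14, 11]    # hypothetical admission GCS scores
gose = [8, 7, 4, 3, 8, 6]        # hypothetical discharge GOSE scores
r = pearson_r(gcs, gose)         # strong positive correlation expected
```

A value near +1, as with the reported r = 0.729 for GCS, means higher admission scores track better discharge outcomes.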

Keywords: field, Glasgow outcome score, prediction, traumatic brain injury

Procedia PDF Downloads 79
6545 Modelling Distress Sale in Agriculture: Evidence from Maharashtra, India

Authors: Disha Bhanot, Vinish Kathuria

Abstract:

This study focuses on distress sale in the horticulture sector in India, which faces unique challenges given the perishable nature of horticultural crops, seasonal production, and the paucity of post-harvest produce management links. From a farmer's perspective, distress sale may be defined as the urgent sale of normal or distressed goods at deeply discounted prices (well below the cost of production), usually under conditions unfavorable to the seller (the farmer). Small and marginal farmers, often engaged in subsistence farming, stand to lose substantially if they receive prices lower than expected (typically framed relative to the cost of production). Distress sale maximizes the price uncertainty of the produce, leading to substantial income loss; with rising input costs, high variability in harvest prices severely affects farmers' profit margins and thereby their survival. The objective of this study is to model the occurrence of distress sale by tomato cultivators in the Indian state of Maharashtra, against the background of differential access to factors such as capital, irrigation facilities, warehousing, storage and processing facilities, and institutional arrangements for procurement. Data are being collected through a primary survey of over 200 farmers in key tomato-growing areas of Maharashtra, seeking information on the above factors in addition to the cost of cultivation, selling price, time gap between harvesting and selling, the role of middlemen in selling, and other socio-economic variables. Farmers selling their produce far below the cost of production would indicate an occurrence of distress sale. The occurrence of distress sale is then modelled as a function of farm, household and institutional characteristics. A Heckman two-stage model will be applied to estimate the probability of a farmer falling into distress sale and to ascertain how the extent of distress sale varies in the presence or absence of various factors. The findings will recommend suitable interventions and promote strategies that help farmers better manage price uncertainty, avoid distress sale, and increase profit margins, with direct implications for poverty.
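In a Heckman two-stage model, the selection correction enters the second stage through the inverse Mills ratio λ(z) = φ(z)/Φ(z) evaluated at the stage-one probit index. A sketch of the correction term with arbitrary index values, not estimates from the survey:

```python
# Inverse Mills ratio, the Heckman stage-two correction regressor.
import math

def norm_pdf(z):
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def norm_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def inverse_mills(z):
    """lambda(z) = phi(z) / Phi(z); added as a regressor in stage two."""
    return norm_pdf(z) / norm_cdf(z)

# Stage 2 (conceptually): extent = X*beta + rho*sigma*inverse_mills(z) + e
imr = [inverse_mills(z) for z in (-1.0, 0.0, 1.0)]  # arbitrary probit indices
```

The ratio shrinks as the probit index rises, so farmers very likely to be selected into distress sale carry a smaller correction term.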

Keywords: distress sale, horticulture, income loss, India, price uncertainty

Procedia PDF Downloads 249
6544 Techno-Economic Optimization and Evaluation of an Integrated Industrial Scale NMC811 Cathode Active Material Manufacturing Process

Authors: Usama Mohamed, Sam Booth, Aliysn J. Nedoma

Abstract:

As part of the transition to electric vehicles, there has been a recent increase in demand for battery manufacturing. Cathodes typically account for approximately 50% of the total lithium-ion battery cell cost and are a pivotal factor in determining the viability of new industrial infrastructure. Cathodes which offer lower costs whilst maintaining or increasing performance, such as nickel-rich layered cathodes, have a significant competitive advantage when scaling up the manufacturing process. This project evaluates the techno-economic value proposition of an integrated industrial scale cathode active material (CAM) production process, closing the mass and energy balances and optimizing the operating conditions using a sensitivity analysis. This is done by developing a process model of a co-precipitation synthesis route in Aspen Plus software and validating it against experimental data. The mechanism chemistry and equilibrium conditions were established based on previous literature and HSC-Chemistry software. This is then followed by integrating the energy streams, adding waste recovery and treatment processes, and testing the effect of key parameters (temperature, pH, reaction time, etc.) on CAM production yield and emissions. Finally, an economic analysis estimates the fixed and variable costs (including capital expenditure, labor and raw materials) to calculate the cost of CAM ($/kg and $/kWh), the total plant cost ($) and the net present value (NPV). This work sets the foundational blueprint for future research into sustainable industrial scale processes for CAM manufacturing.
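The final economic-analysis step can be illustrated with a minimal discounted-cash-flow sketch; every figure below (capital expenditure, cash flow, discount rate, plant output) is an illustrative assumption, not a result of the study:

```python
# Discounted cash flow for a hypothetical CAM plant (all figures illustrative)
capex = 250e6            # $, up-front capital expenditure
annual_cash_flow = 45e6  # $, net revenue minus operating costs
rate, years = 0.08, 15   # discount rate and plant lifetime

# NPV: discount each year's cash flow back to the present and subtract capex
npv = -capex + sum(annual_cash_flow / (1 + rate) ** t for t in range(1, years + 1))

# Cost of CAM: (materials + conversion costs) / annual output in kg
cam_cost_per_kg = (30e6 + 12e6) / 5e6
```

A positive NPV under these assumptions would indicate the plant earns more than the 8% hurdle rate; the sensitivity analysis in the abstract amounts to recomputing these figures as parameters vary.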

Keywords: cathodes, industrial production, nickel-rich layered cathodes, process modelling, techno-economic analysis

Procedia PDF Downloads 102
6543 2D-Modeling with Lego Mindstorms

Authors: Miroslav Popelka, Jakub Nozicka

Abstract:

This work is based on the possibility of using Lego Mindstorms robotics systems to reduce costs. Lego Mindstorms consists of a wide variety of hardware components necessary to simulate, programme and test robotic systems in practice. The algorithm that maps the space using the ultrasonic sensor was programmed in the development environment supplied with the kit. Matlab was then used to render the values measured by the ultrasonic sensor. The algorithm created for this paper uses theoretical knowledge from the area of signal processing. The data processed by the algorithm are collected by an ultrasonic sensor that scans the 2D space in front of it. The ultrasonic sensor is placed on the moving arm of the robot, which provides the horizontal movement of the sensor; vertical movement of the sensor is provided by the wheel drive. The robot follows a map in order to obtain the correct positioning of the measured data. Based on these findings, Lego Mindstorms can be considered a low-cost yet capable kit for real-time modelling.
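The core of the 2D mapping step, converting each (arm angle, ultrasonic distance) reading into a Cartesian point, can be sketched as follows; the 2° angular step and the sample readings are illustrative assumptions, not the paper's actual scan parameters:

```python
import math

def scan_to_points(readings, step_deg=2.0):
    """Convert (angle index, distance) ultrasonic readings from a sweeping
    arm into 2D Cartesian points in front of the robot."""
    pts = []
    for i, dist_cm in enumerate(readings):
        theta = math.radians(i * step_deg)   # arm angle for this sample
        # Polar -> Cartesian: x along the initial arm direction, y across it
        pts.append((dist_cm * math.cos(theta), dist_cm * math.sin(theta)))
    return pts

# A flat wall 50 cm ahead would give near-constant distance readings
points = scan_to_points([50.0, 50.1, 49.9, 50.0])
```

In the paper's setup the same conversion would be applied per sweep, with the wheel-drive position supplying the second spatial coordinate of the scan.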

Keywords: LEGO Mindstorms, ultrasonic sensor, real-time modeling, 2D object, low-cost robotics systems, sensors, Matlab, EV3 Home Edition Software

Procedia PDF Downloads 476
6542 Recycling of Plastic Waste into Composites Using Kaolin as Reinforcement

Authors: Gloria P. Manu, Johnson K. Efavi, Abu Yaya, Grace K. Arkorful, Frank Godson

Abstract:

Plastics have been used extensively in food and water packaging and other applications because of their low bulk densities, inertness and low cost. The management of these plastics after use is a pressing problem in Ghana. One way of addressing the environmental problems associated with plastic waste is to recycle it into useful products, such as composites for energy and construction applications, using natural or local materials as reinforcement. In this work, composites were formed from waste low-density polyethylene (LDPE) and kaolin at temperatures as low as 70 °C using low-cost solvents such as kerosene. Chemical surface modification was employed to improve the interfacial bonding, enhancing the properties of the composites. Kaolin particles of sizes ≤ 90 µm were dispersed in the polyethylene matrix, and the LDPE content was varied between 10, 20, 30, 40, 50, 60 and 70 wt.%. The results indicated that all the composites exhibited impressive compressive and flexural strengths, with the 50 wt.% composition having the highest strength. The hardness of the composites increased as the polyethylene content decreased and the kaolin content increased. The average density and water absorption of the composites were 530 kg/m³ and 1.3%, respectively.

Keywords: polyethylene, recycling, waste, composite, kaolin

Procedia PDF Downloads 177
6541 Multi-Criteria Decision Making Network Optimization for Green Supply Chains

Authors: Bandar A. Alkhayyal

Abstract:

Modern supply chains are typically linear, transforming virgin raw materials into products for end consumers, who then discard them after use to landfills or incinerators. Major efforts are now underway to create a circular economy that reduces non-renewable resource use and waste. One important aspect of these efforts is the development of Green Supply Chain (GSC) systems, which enable a reverse flow of used products from consumers back to manufacturers, where they can be refurbished or remanufactured, to both economic and environmental benefit. This paper develops novel multi-objective optimization models to inform GSC system design at multiple levels: (1) strategic planning of facility location and transportation logistics; (2) tactical planning of optimal pricing; and (3) policy planning to account for potential valuation of GSC emissions. First, physical linear programming was applied to evaluate GSC facility placement by determining the quantities of end-of-life products to transport from candidate collection centers to remanufacturing facilities while satisfying cost and capacity criteria. Second, although disassembly and remanufacturing processes have received little attention in the industrial engineering and process cost modeling literature, the increasing scale of remanufacturing operations, worth nearly $50 billion annually in the United States alone, has made GSC pricing an important subject of research. A non-linear physical programming model for optimizing the pricing policy for remanufactured products, maximizing total profit and minimizing product recovery costs, was examined and solved. Finally, a deterministic equilibrium model was used to determine the effects of internalizing a cost of GSC greenhouse gas (GHG) emissions into the optimization models. 
Changes in optimal facility use, transportation logistics and pricing/profit margins were all investigated against a variable cost of carbon, using a case study system created from actual data from sites in the Boston area. As carbon costs increase, the optimal GSC system undergoes several distinct shifts in topology as it seeks new cost-minimal configurations. A comprehensive quantitative evaluation of the model's performance has been carried out using orthogonal arrays. Results were compared to top-down estimates from economic input-output life cycle assessment (EIO-LCA) models to contrast remanufacturing GHG emission quantities with those from original equipment manufacturing operations. Introducing a carbon cost of $40/t CO₂e increases modeled remanufacturing costs by 2.7% but also increases original equipment costs by 2.3%. The assembled work advances the theoretical modeling of optimal GSC systems and presents a rare case study of remanufactured appliances.
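In its simplest form, the first-stage facility-placement step above reduces to a transportation linear program: ship all collected end-of-life units from collection centers to remanufacturing facilities at minimum transport cost, subject to facility capacities. The sketch below uses entirely illustrative supply, capacity and cost figures (not the paper's Boston data):

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data: 3 collection centers, 2 remanufacturing facilities
supply = np.array([120.0, 80.0, 100.0])   # end-of-life units available per center
capacity = np.array([180.0, 150.0])       # facility processing capacity
cost = np.array([[4.0, 6.0],              # transport cost per unit, center -> facility
                 [5.5, 3.5],
                 [6.0, 5.0]])

# Decision variables x[i, j] flattened row-major; minimize total transport cost
c = cost.ravel()
A_eq = np.kron(np.eye(3), np.ones(2))     # ship everything: sum_j x[i, j] == supply[i]
A_ub = np.tile(np.eye(2), 3)              # capacity: sum_i x[i, j] <= capacity[j]

res = linprog(c, A_ub=A_ub, b_ub=capacity, A_eq=A_eq, b_eq=supply, bounds=(0, None))
flows = res.x.reshape(3, 2)               # optimal center-to-facility shipments
```

The paper's physical-programming formulation layers preference ranges over such a model; this sketch shows only the underlying cost-and-capacity structure.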

Keywords: circular economy, extended producer responsibility, greenhouse gas emissions, industrial ecology, low carbon logistics, green supply chains

Procedia PDF Downloads 162
6540 Optimal Bayesian Chart for Controlling Expected Number of Defects in Production Processes

Authors: V. Makis, L. Jafari

Abstract:

In this paper, we develop an optimal Bayesian chart to control the expected number of defects per inspection unit in production processes with long production runs. We formulate this control problem in the optimal stopping framework. The objective is to determine the optimal stopping rule minimizing the long-run expected average cost per unit time considering partial information obtained from the process sampling at regular epochs. We prove the optimality of the control limit policy, i.e., the process is stopped and the search for assignable causes is initiated when the posterior probability that the process is out of control exceeds a control limit. An algorithm in the semi-Markov decision process framework is developed to calculate the optimal control limit and the corresponding average cost. Numerical examples are presented to illustrate the developed optimal control chart and to compare it with the traditional u-chart.
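The posterior-threshold stopping rule described above can be illustrated with a minimal two-state sketch: defect counts per inspection unit are Poisson with either an in-control or an out-of-control rate, the process shifts with some probability each epoch, and sampling stops once the posterior probability of being out of control exceeds the control limit. All rates, the shift probability and the limit below are illustrative assumptions, not the paper's optimized values:

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    return lam ** k * exp(-lam) / factorial(k)

def run_chart(counts, lam_in=1.0, lam_out=3.0, q=0.05, limit=0.9):
    """Bayesian u-chart sketch: track the posterior probability that the
    process has shifted out of control; stop once it exceeds `limit`."""
    pi = 0.0  # P(out of control) before any observation
    for t, k in enumerate(counts, start=1):
        pi = pi + (1 - pi) * q                      # a shift may occur this epoch
        num = pi * poisson_pmf(k, lam_out)          # Bayes update with the new count
        den = num + (1 - pi) * poisson_pmf(k, lam_in)
        pi = num / den
        if pi > limit:
            return t, pi                            # stop, search for assignable cause
    return None, pi

# Defect counts drift upward; the chart should signal once evidence accumulates
stop_epoch, posterior = run_chart([1, 0, 2, 5, 6, 7])
```

The paper's contribution is proving that a control-limit policy of exactly this form is optimal and computing the cost-minimizing limit in a semi-Markov decision process framework; the sketch only shows the posterior recursion the policy monitors.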

Keywords: Bayesian u-chart, economic design, optimal stopping, semi-Markov decision process, statistical process control

Procedia PDF Downloads 580
6539 Partners Sharing Resources, Costs, and Risks

Authors: Lee Li

Abstract:

The strategic management literature posits that the major motive for strategic alliances is to share resources, costs and risks. However, the literature also indicates that such sharing leads to transaction costs, which are positively correlated with environmental dynamism. As such, it is not clear why firms are willing to bear high transaction costs to share resources, costs and risks. This study categorizes resources into firm-specific and general resources; costs into accounting and non-accounting costs; and risks into visible and invisible risks. Using data from 167 Canadian firms in technology industries, we find that the sharing of firm-specific resources and non-accounting costs is negatively correlated with environmental dynamism, but the sharing of general resources, accounting costs and visible risks is positively correlated with environmental dynamism. The findings suggest that sharing certain resources, costs and risks does not necessarily incur high transaction costs.

Keywords: environmental dynamism, strategic alliances, resource/cost/risk sharing

Procedia PDF Downloads 367
6538 Primary Care Physicians in Urgent Care Centres of the United Kingdom

Authors: Mohammad Ansari, Ahmed Ismail, Satinder Mann

Abstract:

Overcrowding in the Emergency Departments (EDs) of the United Kingdom has become a common problem. Urgent Care centres were developed nearly a decade ago to reduce pressure on EDs. Unfortunately, their development has failed to produce the projected effects. It was thought that nearly 40% of patients attending the ED would go to Urgent Care centres, which would be staffed by Primary Care Physicians. Data reveal that no more than 20% of patients were seen by Primary Care Physicians even when the Urgent Care centre was based in the ED. This study was carried out at the ED of George Eliot Hospital, Nuneaton, UK, where the Urgent Care centre was based in the ED and was staffed by Primary Care Physicians with a special interest in trauma for nearly one year, followed by a Primary Care Physician and an Advanced Nurse Practitioner. We compared the number of patients seen during these periods and the cost-effectiveness of the service. We randomly selected a week of patients seen by the Primary Care Physicians with a special interest in trauma and a week of patients seen by the Primary Care Physician and the Advanced Nurse Practitioner, and compared the number and type of patients seen during these two periods. Nearly 38% of patients were seen by the Primary Care Physicians with a special interest in trauma, whilst only 14.3% of patients were seen by the Primary Care Physician and the Advanced Nurse Practitioner. The Primary Care Physicians with a special interest in trauma were also paid less. Our study confirmed that unless Primary Care Physicians are able to treat minor trauma and interpret x-rays, the urgent care service is not going to be cost-effective. Numerous previous studies have shown that 15 to 20% of patients attending the ED can be treated by Primary Care Physicians without requiring any investigations for their management. It is advantageous to have Urgent Care centres within the ED because, if patients deteriorate, they can be transferred to the ED. We recommend that Urgent Care centres should be a part of the ED. 
Our study shows that Urgent Care centres in the ED can be helpful and cost-effective if staffed by either senior Emergency Physicians or Primary Care Physicians with a special interest and experience in the management of minor trauma.

Keywords: urgent care centres, primary care physician, advanced nurse practitioner, trauma

Procedia PDF Downloads 431
6537 The Usage of Nitrogen Gas and Alum for Sludge Dewatering

Authors: Mamdouh Yousef Saleh, Medhat Hosny El-Zahar, Shymaa El-Dosoky

Abstract:

In most cases, the processing cost of dewatering sludge increases with the concentration of solid particles. All experiments in this study were conducted on biological sludge. The process also helps reduce greenhouse gases, and the technology used was faster and cheaper than other methods. First, bubbling pressure was used to dissolve N₂ gas into the sludge; second, alum was added to accelerate the coagulation of the sludge particles and facilitate their flotation; and third, nitrogen gas was used to help float the sludge particles and reduce the processing time, nitrogen being an inert gas. The conclusions of this experiment were as follows. First, the best conditions were obtained at a bubbling pressure of 0.6 bar. Second, the best alum dose for helping the sludge agglomerate and float was determined to be 80 mg/L, which increased the sludge concentration by 7-8 times. Third, the economic dose of nitrogen gas was 60 mg/L, with a separation efficiency of 85% and a sludge concentration of about 8-9 times. This occurred because the gas released tiny bubbles that adhered to the suspended matter, causing it to float to the surface of the water, where it could then be removed.

Keywords: nitrogen gas, biological treatment, alum, dewatering sludge, greenhouse gases

Procedia PDF Downloads 221
6536 Direct Composite Veneers as Treatment of Anterior Teeth: Case Report

Authors: Amerah Alsalem

Abstract:

Aim: Laminate veneers are restorations intended to correct existing abnormalities, esthetic deficiencies and discolorations. Laminate veneer restorations may be processed in two different ways: direct or indirect. Materials and methods: Direct composite laminate veneers require minimal preparation compared to indirect composite veneers, cost less and are easier to repair, and so are useful in young patients. However, composites have inherent limitations, such as shrinkage, limited toughness, color instability and susceptibility to wear, that reduce the lifespan of the restoration and cause postoperative complications. Every new material or method introduced to the field of dentistry aims to achieve esthetic and successful dental treatments with minimal invasiveness. Therefore, direct laminate veneer restorations have been developed for advanced esthetic problems of anterior teeth. Tooth discolorations, rotated teeth, coronal fractures, congenital or acquired malformations, diastemas, discolored restorations, palatally positioned teeth, the absence of lateral incisors, abrasions and erosions are the main indications for direct laminate veneer restorations. Results: Direct veneers, as esthetic procedures, have become treatment alternatives in recent years for patients with esthetic problems of anterior teeth. Cost, social and time factors have to be considered. Although ceramic laminate veneer restorations have some advantages, such as color stability and high resistance to abrasion, they also have disadvantages, including high cost, long chair time and the need for an additional adhesive cement. 
Conclusion: Although some disadvantages remain, especially discoloration and fragility, with the development of new composite resins, direct laminate veneer restorations can be a treatment option for patients with esthetic problems of anterior teeth when applied judiciously with good patient hygiene motivation.

Keywords: direct, veneers, composite, anterior

Procedia PDF Downloads 286
6535 New York’s Heat Pump Mandate: Doubling Annual Heating Costs to Achieve a 13% Reduction in New York’s CO₂ Gas Emissions

Authors: William Burdick

Abstract:

Manmade climate change is an existential threat that must be mitigated at the earliest opportunity. The role of government in climate change mitigation is enacting and enforcing law and policy to effect substantial reductions in greenhouse gases, in the short and long term, without substantial increases in the cost of energy. To be optimally effective, those laws and policies must be established and enforced on the basis of peer-reviewed evidence and scientific facts, and must deliver substantial outcomes in years, not decades. Over the next fifty years, New York's 2019 Climate Change and Community Protection Act and 2021 All Electric Building Act, which mandate replacing natural gas heating systems with heat pumps, will immediately double annual heating costs and, by 2075, yield less than a 16.2% reduction in CO₂ emissions from heating systems in new housing units and less than a 13% reduction in total CO₂ emissions, while adding $40B in cumulative additional heating costs compared to natural gas fueled heating systems.

Keywords: climate change, mandate, heat pump, natural gas

Procedia PDF Downloads 73
6534 Estimation of Fragility Curves Using Proposed Ground Motion Selection and Scaling Procedure

Authors: Esra Zengin, Sinan Akkar

Abstract:

Reliable and accurate prediction of nonlinear structural response requires the specification of appropriate earthquake ground motions to be used in nonlinear time history analysis. Current research has mainly focused on the selection and manipulation of real earthquake records, which can be seen as the most critical step in the performance-based seismic design and assessment of structures. Utilizing amplitude-scaled ground motions that match a target spectrum is a commonly used technique for estimating nonlinear structural response. Representative ground motion ensembles are selected to match a target spectrum such as a scenario-based spectrum derived from ground motion prediction equations, the Uniform Hazard Spectrum (UHS), the Conditional Mean Spectrum (CMS) or the Conditional Spectrum (CS). Different sets of criteria exist among these methodologies for selecting and scaling ground motions with the objective of obtaining a robust estimate of structural performance. This study presents a ground motion selection and scaling procedure that considers the spectral variability at the target demand together with the level of ground motion dispersion. The proposed methodology provides a set of ground motions whose response spectra match the target median and corresponding variance within a specified period interval. An efficient and simple algorithm is used to assemble the ground motion sets. The scaling stage is based on minimizing the error between the scaled median and the target spectrum while preserving the dispersion of the earthquake shaking along the period interval. The impact of spectral variability on the nonlinear response distribution is investigated at the level of inelastic single-degree-of-freedom systems. In order to see the effect of different selection and scaling methodologies on fragility curve estimation, results are compared with those obtained by the CMS-based scaling methodology. 
The variability in fragility curves due to the consideration of dispersion in the ground motion selection process is also examined.
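The scaling stage described above, minimizing the misfit between a scaled record's spectrum and the target, has a simple closed form when the error is measured in log-spectral space. The sketch below uses purely illustrative candidate spectra and target ordinates, not actual records:

```python
import numpy as np

def best_scale(record_sa, target_sa):
    """Closed-form scale factor minimizing the sum of squared log-spectral
    errors between a scaled record and the target spectrum."""
    # In log space the optimal factor is exp(mean(log target - log record))
    return np.exp(np.mean(np.log(target_sa) - np.log(record_sa)))

periods = np.array([0.2, 0.5, 1.0, 2.0])          # period interval of interest (s)
target = np.array([0.80, 0.60, 0.35, 0.15])       # target median Sa (g), illustrative
candidates = np.array([[0.40, 0.33, 0.16, 0.08],  # unscaled candidate spectra
                       [1.10, 0.70, 0.50, 0.30],
                       [0.25, 0.15, 0.10, 0.04]])

scales = np.array([best_scale(r, target) for r in candidates])
errors = [np.sum((np.log(s * r) - np.log(target)) ** 2)
          for s, r in zip(scales, candidates)]
best = int(np.argmin(errors))   # record whose scaled shape best matches the target
```

A full implementation would additionally match the target variance across the ensemble; this sketch covers only the per-record median-matching step.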

Keywords: ground motion selection, scaling, uncertainty, fragility curve

Procedia PDF Downloads 588
6533 Development of an Implicit Coupled Partitioned Model for the Prediction of the Behavior of a Flexible Slender Shaped Membrane in Interaction with Free Surface Flow under the Influence of a Moving Flotsam

Authors: Mahtab Makaremi Masouleh, Günter Wozniak

Abstract:

This research is part of an interdisciplinary project promoting the design of a light, temporarily installable textile defence system against floods. If river water levels rise abruptly, especially in winter, one can expect massive extra load on a textile protective structure in terms of impact from floating debris and even tree trunks. Estimating this impulsive force on such structures is of great importance, as it can ensure the reliability of the design in critical cases. This motivates the numerical analysis of a fluid-structure interaction application, comprising a flexible, slender-shaped structure and free-surface water flow, in which accelerated heavy flotsam approaches the membrane. In this context, both the behavior of the flexible membrane and its interaction with the moving flotsam are analysed using the finite-element-based explicit and implicit solvers of Abaqus, available as products of SIMULIA software. On the other hand, the behavior of free-surface water flow in response to moving structures has been investigated using the finite volume solver Star-CCM+ from Siemens PLM Software. An automatic communication tool (CSE, the SIMULIA Co-Simulation Engine) and an effective partitioned strategy in the form of an implicit coupling algorithm allow the partitioned domains to be coupled robustly. The applied procedure ensures stability and convergence in the solution of these complicated problems, albeit at high computational cost; a further complexity of this study stems from the mesh criterion in the fluid domain where the two structures approach each other. This contribution presents the approaches for establishing a convergent numerical solution and compares the results with experimental findings.

Keywords: co-simulation, flexible thin structure, fluid-structure interaction, implicit coupling algorithm, moving flotsam

Procedia PDF Downloads 391
6532 A Long Short-Term Memory Based Deep Learning Model for Corporate Bond Price Predictions

Authors: Vikrant Gupta, Amrit Goswami

Abstract:

The fixed income market forms the basis of the modern financial market, and all other assets in financial markets derive their value from the bond market. Owing to their over-the-counter nature, corporate bonds have relatively little data publicly available and are thus researched far less than equities. Bond price prediction is a complex financial time series forecasting problem and is considered very crucial in the domain of finance. Bond prices are highly volatile and noisy, which makes it very difficult for traditional statistical time-series models to capture the complexity in series patterns, leading to inefficient forecasts. To overcome the inefficiencies of statistical models, various machine learning techniques were initially used in the literature for more accurate forecasting of time series. However, simple machine learning methods such as linear regression, support vector machines and random forests fail to provide efficient results when tested on highly complex sequences such as stock prices and bond prices. Hence, to capture these intricate sequence patterns, various deep learning-based methodologies have been discussed in the literature. In this study, a recurrent neural network-based deep learning model using long short-term memory (LSTM) networks for the prediction of corporate bond prices is discussed. LSTM networks have been widely used in the literature for sequence learning tasks in domains such as machine translation and speech recognition. In recent years, various studies have discussed the effectiveness of LSTMs in forecasting complex time-series sequences and have shown promising results compared to other methodologies. LSTMs are a special kind of recurrent neural network capable of learning long-term dependencies through their gated memory cells, which traditional neural networks lack. 
In this study, a simple LSTM, a stacked LSTM and a masked LSTM model are compared across varying input sequence lengths (three, seven and 14 days). To facilitate faster learning and to gradually decompose the complexity of the bond price sequence, Empirical Mode Decomposition (EMD) is applied, which improves the accuracy of the standalone LSTM model. With a variety of technical indicators and the EMD-decomposed time series, the masked LSTM outperforms the other two counterparts in terms of prediction accuracy. To benchmark the proposed model, the results are compared with traditional time series models (ARIMA), shallow neural networks and the three LSTM models discussed above. In summary, our results show that LSTM models provide more accurate results and should be explored further within the asset management industry.
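The gated memory mechanism that lets LSTMs retain long-term dependencies can be sketched as a single cell step in NumPy; the random weights, hidden size and toy three-day price window below are illustrative assumptions, not the study's trained model:

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step: input/forget/output gates and a candidate update
    are computed from the current input x and the previous hidden state h."""
    z = W @ x + U @ h + b                  # stacked pre-activations, shape (4*H,)
    H = h.shape[0]
    i, f, o = (1.0 / (1.0 + np.exp(-z[k * H:(k + 1) * H])) for k in range(3))
    g = np.tanh(z[3 * H:])                 # candidate cell update
    c_new = f * c + i * g                  # forget old memory, write new memory
    h_new = o * np.tanh(c_new)             # expose gated memory as the output
    return h_new, c_new

rng = np.random.default_rng(1)
H, D = 4, 1                                # hidden size, input size (e.g. daily price)
W = rng.normal(0, 0.1, (4 * H, D))
U = rng.normal(0, 0.1, (4 * H, H))
b = np.zeros(4 * H)

h, c = np.zeros(H), np.zeros(H)
for price in [101.2, 100.8, 101.5]:        # a toy 3-day input window
    h, c = lstm_step(np.array([price]), h, c, W, U, b)
```

The forget gate `f` is what lets the cell state `c` carry information across many steps, which is the property the abstract credits for LSTMs' advantage over plain recurrent networks on long price sequences.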

Keywords: bond prices, long short-term memory, time series forecasting, empirical mode decomposition

Procedia PDF Downloads 140