Search results for: optimization model
17372 Spatially Downscaling Land Surface Temperature with a Non-Linear Model
Authors: Kai Liu
Abstract:
Remote sensing-derived land surface temperature (LST) can provide an indication of the temporal and spatial patterns of surface evapotranspiration (ET). However, the spatial resolution of commonly used satellite products is ~1 km, which remains too coarse for ET estimation. This paper proposes a model that disaggregates coarse-resolution MODIS LST at the 1 km scale to a finer spatial resolution of 250 m. Our approach attempts to weaken the impacts of soil moisture and growth status on LST variations. The proposed model spatially disaggregates the coarse thermal data by using a non-linear model involving the Bowen ratio, the normalized difference vegetation index (NDVI) and the photochemical reflectance index (PRI). This LST disaggregation model was tested on two heterogeneous landscapes, in central Iowa, USA and the Heihe River, China, during the growing seasons. Statistical results demonstrated that our model performed better than two classical methods (DisTrad and TsHARP). Furthermore, using a surface energy balance model, it was observed that ET estimates based on the disaggregated LST from our model were more accurate than those based on the disaggregated LST from DisTrad and TsHARP.
Keywords: Bowen ratio, downscaling, evapotranspiration, land surface temperature
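A minimal sketch of the sharpening idea behind this family of methods (DisTrad/TsHARP-style): fit a coarse-scale LST-vegetation relation, apply it at the fine scale, and add back the coarse-scale residual. The abstract's non-linear model also uses the Bowen ratio and PRI; those terms, the quadratic NDVI relation, and all data below are assumptions for illustration only.

```python
# Illustrative sharpening-based LST disaggregation, assuming a quadratic
# LST-NDVI relation; the paper's model additionally involves Bowen ratio
# and PRI, which are omitted here. Data are synthetic.
import numpy as np

def downscale_lst(lst_coarse, ndvi_coarse, ndvi_fine, upscale):
    """lst_coarse, ndvi_coarse: coarse grids; ndvi_fine: fine grid (upscale x finer)."""
    # 1) Fit the coarse-scale LST-NDVI relation (quadratic, as an assumption).
    coeffs = np.polyfit(ndvi_coarse.ravel(), lst_coarse.ravel(), deg=2)
    # 2) Coarse-scale residuals keep information not explained by NDVI.
    residual_coarse = lst_coarse - np.polyval(coeffs, ndvi_coarse)
    # 3) Apply the relation at fine scale; each fine pixel inherits its parent's residual.
    residual_fine = np.kron(residual_coarse, np.ones((upscale, upscale)))
    return np.polyval(coeffs, ndvi_fine) + residual_fine

# Hypothetical example: 4x4 coarse grid (1 km) sharpened to 16x16 (250 m).
rng = np.random.default_rng(0)
ndvi_c = rng.uniform(0.2, 0.8, (4, 4))
lst_c = 320.0 - 25.0 * ndvi_c + rng.normal(0, 0.5, (4, 4))    # synthetic LST [K]
ndvi_f = np.kron(ndvi_c, np.ones((4, 4))) + rng.normal(0, 0.05, (16, 16))
lst_f = downscale_lst(lst_c, ndvi_c, ndvi_f, upscale=4)
print(lst_f.shape, round(float(lst_f.mean()), 2))
```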
Procedia PDF Downloads 329
17371 Optimization of Pretreatment Process of Napier Grass for Improved Sugar Yield
Authors: Shashikant Kumar, Chandraraj K.
Abstract:
Perennial grasses present interesting choices amid the current demand for renewable and sustainable energy sources to alleviate the burden of the global energy problem. The perennial Napier grass (Pennisetum purpureum Schumach) is a promising feedstock for the production of cellulosic ethanol. The conversion of biomass into glucose and xylose is a crucial stage in the production of bioethanol, and it necessitates optimal pretreatment. Alkali treatment, among the several pretreatments available, effectively reduces lignin concentration and the crystallinity of cellulose. Response surface methodology was used to optimize the alkali pretreatment of Napier grass for maximal reducing sugar production. The combined effects of three independent variables, viz. sodium hydroxide concentration, temperature, and reaction time, were studied. A second-order polynomial equation was used to fit the observed data. Maximum reducing sugar (590.54 mg/g) was obtained under the following conditions: 1.6% sodium hydroxide, a reaction period of 30 min, and 120 °C. The results showed that Napier grass is a desirable feedstock for bioethanol production.
Keywords: Napier grass, optimization, pretreatment, sodium hydroxide
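A compact sketch of the RSM workflow described above: fit a second-order polynomial in NaOH concentration, temperature and time, then locate the maximum of the fitted surface. The design points, coefficients and yields below are hypothetical, not the paper's measurements.

```python
# Second-order response surface fit and optimum search (hypothetical data).
import numpy as np
from scipy.optimize import minimize

def quad_features(X):
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1*x2, x1*x3, x2*x3, x1**2, x2**2, x3**2])

rng = np.random.default_rng(1)
X = rng.uniform([0.5, 80, 10], [2.5, 140, 60], size=(20, 3))   # NaOH %, T (°C), time (min)
y = (500 - 80*(X[:, 0]-1.6)**2 - 0.05*(X[:, 1]-120)**2
     - 0.1*(X[:, 2]-30)**2 + rng.normal(0, 5, 20))             # synthetic sugar yield

beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)    # least-squares fit

predict = lambda x: quad_features(np.atleast_2d(x)) @ beta
res = minimize(lambda x: -predict(x)[0], x0=[1.5, 110, 30],
               bounds=[(0.5, 2.5), (80, 140), (10, 60)])
print("optimal NaOH %, T, time:", np.round(res.x, 2), "predicted yield:", round(-res.fun, 1))
```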
Procedia PDF Downloads 504
17370 Estimation of Synchronous Machine Synchronizing and Damping Torque Coefficients
Authors: Khaled M. EL-Naggar
Abstract:
Synchronizing and damping torque coefficients of a synchronous machine can give a quite clear picture of machine behavior during transients. These coefficients are used as a measure of power system transient stability. In this paper, a crow search optimization algorithm is presented and implemented to study power system stability during transients. The algorithm makes use of the machine responses to perform the stability study in the time domain. The problem is formulated as a dynamic estimation problem. An objective function that minimizes the squared error in the estimated coefficients is designed. The method is tested using a practical system with different study cases. Results are reported and a thorough discussion is presented. The study illustrates that the proposed method can estimate the stability coefficients for critical stable cases where other methods may fail. The tests proved that the proposed tool is accurate and reliable for estimating the machine coefficients for the assessment of power system stability.
Keywords: optimization, estimation, synchronous machine, crow search
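A minimal sketch of the estimation problem behind this abstract: recover the synchronizing (Ks) and damping (Kd) torque coefficients from time responses, using the classical decomposition ΔTe = Ks·Δδ + Kd·Δω. The paper solves the squared-error objective with a crow search algorithm; here ordinary least squares on synthetic signals is shown purely as a baseline illustration.

```python
# Least-squares recovery of Ks and Kd from simulated machine responses.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 5, 500)
d_delta = 0.1 * np.exp(-0.3 * t) * np.sin(8 * t)        # rotor angle deviation [rad]
d_omega = np.gradient(d_delta, t)                        # speed deviation
Ks_true, Kd_true = 1.2, 0.15
d_Te = Ks_true * d_delta + Kd_true * d_omega + rng.normal(0, 1e-3, t.size)

A = np.column_stack([d_delta, d_omega])
(Ks_est, Kd_est), *_ = np.linalg.lstsq(A, d_Te, rcond=None)
print(f"Ks = {Ks_est:.3f}, Kd = {Kd_est:.3f}")
```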
Procedia PDF Downloads 137
17369 A Robust Optimization Method for Service Quality Improvement in Health Care Systems under Budget Uncertainty
Authors: H. Ashrafi, S. Ebrahimi, H. Kamalzadeh
Abstract:
With growing business competition, it is important for healthcare providers to improve their service quality. In order to improve the service quality of a clinic, four important dimensions are defined: tangibles, responsiveness, empathy, and reliability. Moreover, there are several service stages in hospitals, such as financial screening and examination. One of the most challenging limitations for improving service quality is the budget, which strongly affects service quality. In this paper, we present an approach to address budget uncertainty and provide guidelines for service resource allocation. The proposed service quality improvement approach can be applied to multistage service processes to improve service quality while controlling costs. A multi-objective function based on the importance of each area and dimension is defined to link operational variables to service quality dimensions. The results demonstrate that our approach is not ultra-conservative and that it reflects actual conditions well. Moreover, it is shown that different strategies can affect the number of employees in different stages.
Keywords: allocation, budget uncertainty, healthcare resource, service quality assessment, robust optimization
Procedia PDF Downloads 182
17368 Clustering Based Level Set Evaluation for Low Contrast Images
Authors: Bikshalu Kalagadda, Srikanth Rangu
Abstract:
The main objective of image segmentation is to extract objects with respect to some input features. One of the important methods for image segmentation is the level set method. Medical images and synthetic images generally have low-contrast pixel profiles, which makes it difficult to locate features of interest in them. The conventional level set function develops irregularities during the evolution of object contours, which destroys the stability of the evolution process. As a remedy for this problem, a new hybrid algorithm, Clustering Level Set Evolution, is proposed. Kernel fuzzy particle swarm optimization clustering is combined with the Distance Regularized Level Set (DRLS) and the Selective Binary and Gaussian Filtering Regularized Level Set (SBGFRLS) methods. The ability to identify different regions becomes easier, with improved speed. The efficiency of the modified method is evaluated by comparing it with the previous methods under similar specifications. The comparison considers both medical and synthetic images.
Keywords: segmentation, clustering, level set function, re-initialization, kernel fuzzy, swarm optimization
Procedia PDF Downloads 351
17367 Optimal Capacitor Placement in Distribution Using Cuckoo Optimization Algorithm
Authors: Ali Ravangard, S. Mohammadi
Abstract:
Shunt capacitors have several uses in electric power systems. They are utilized as sources of reactive power by connecting them line-to-neutral. Electric utilities have also connected capacitors in series with long lines in order to reduce their impedance. This is particularly common at the transmission level, where lines are several hundred kilometers long. However, this paper mainly discusses shunt capacitors. In distribution systems, shunt capacitors are used to reduce power losses, to improve the voltage profile, and to increase the maximum flow through cables and transformers. This paper presents a new method to determine the optimal locations and economical sizing of fixed and/or switched shunt capacitors with a view to power loss reduction and voltage stability enhancement. For solving the problem, a new enhanced cuckoo optimization algorithm is presented. The proposed method is tested on a distribution test system, and the results show that the algorithm is suitable for practical implementation on real systems of any size.
Keywords: capacitor placement, power losses, voltage stability, radial distribution systems
Procedia PDF Downloads 374
17366 Designing and Simulation of the Rotor and Hub of the Unmanned Helicopter
Authors: Zbigniew Czyz, Ksenia Siadkowska, Krzysztof Skiba, Karol Scislowski
Abstract:
Today’s progress in rotorcraft is mostly associated with optimization of aircraft performance achieved by active and passive modifications of the main rotor assembly and the tail propeller. The key task is to improve their performance and the rotor hover quality factor without changing specific fuel consumption. One of the tasks in improving the helicopter is an active optimization of the main rotor across flight stages, i.e., ascent, cruise, and descent. Active interference with the airflow around the rotor blade section can significantly change the characteristics of the aerodynamic airfoil. The efficiency of actuator systems modifying aerodynamic coefficients in current solutions is relatively high and significantly affects the increase in strength. The solution to actively change aerodynamic characteristics assumes a periodic change of the geometric features of the blades depending on the flight stage. Changing the geometric parameters of blade warping enables an optimization of main rotor performance depending on helicopter flight stages. Structurally, an adaptation of shape memory alloys does not significantly affect rotor blade fatigue strength, which helps to reduce the costs of adapting the system to existing blades, and gains from better performance can easily amortize such a modification and improve the profitability of such a structure. In order to obtain quantitative and qualitative data to solve this research problem, a number of numerical analyses have been necessary. The main problem is the selection of the design parameters of the main rotor and a preliminary optimization of its performance to improve the rotor hover quality factor. This design concept assumes a three-bladed main rotor with a chord of 0.07 m and radius R = 1 m. The rotor speed is a calculated parameter of the optimization function. To specify the initial distribution of geometric warping, special software has been created that uses a blade element numerical method respecting dynamic design features such as fluctuations of a blade in its joints. A number of performance analyses as a function of rotor speed, forward speed, and altitude have been performed. The calculations were carried out for the full model assembly. This approach makes it possible to observe the behavior of components and their mutual interaction resulting from the acting forces. The key elements of each rotor are the shaft, the hub and the pins holding the joints and blade yokes. These components are exposed to the highest loads. As a result of the analysis, the safety factor was determined at the level of k > 1.5, which gives grounds to obtain certification for the strength of the structure. The jointed rotor design has numerous moving elements in its structure. Despite the high safety factor, the places with the highest stresses, where signs of wear and tear may appear, have been indicated. The numerical analysis showed that the most loaded element is the pin connecting the modular bearing of the blade yoke with the element of the horizontal oscillation joint. The stresses in this element result in a safety factor of k = 1.7. The other analysed rotor components have a safety factor of more than 2, and in the case of the shaft, this factor is more than 3. However, it must be remembered that the structure is only as strong as its weakest element.
The designed rotor for unmanned aerial vehicles, adapted to work with blades incorporating intelligent materials in their structure, meets the requirements for certification testing. Acknowledgement: This work has been financed by the Polish National Centre for Research and Development under the LIDER program, Grant Agreement No. LIDER/45/0177/L-9/17/NCBR/2018.
Keywords: main rotor, rotorcraft aerodynamics, shape memory alloy, materials, unmanned helicopter
Procedia PDF Downloads 157
17365 Reverse Logistics Network Optimization for E-Commerce
Authors: Albert W. K. Tan
Abstract:
This research consolidates a comprehensive array of publications from peer-reviewed journals, case studies, and seminar reports focused on reverse logistics and network design. By synthesizing this secondary knowledge, our objective is to identify and articulate key decision factors crucial to reverse logistics network design for e-commerce. Through this exploration, we aim to present a refined mathematical model that offers valuable insights for companies seeking to optimize their reverse logistics operations. The primary goal of this research endeavor is to develop a comprehensive framework tailored to advising organizations and companies on crafting effective networks for their reverse logistics operations, thereby facilitating the achievement of their organizational goals. This involves a thorough examination of various network configurations, weighing their advantages and disadvantages to ensure alignment with specific business objectives. The key objectives of this research include: (i) Identifying pivotal factors pertinent to network design decisions within the realm of reverse logistics across diverse supply chains. (ii) Formulating a structured framework designed to offer informed recommendations for sound network design decisions applicable to relevant industries and scenarios. (iii) Propose a mathematical model to optimize its reverse logistics network. A conceptual framework for designing a reverse logistics network has been developed through a combination of insights from the literature review and information gathered from company websites. This framework encompasses four key stages in the selection of reverse logistics operations modes: (1) Collection, (2) Sorting and testing, (3) Processing, and (4) Storage. Key factors to consider in reverse logistics network design: I) Centralized vs. decentralized processing: Centralized processing, a long-standing practice in reverse logistics, has recently gained greater attention from manufacturing companies. In this system, all products within the reverse logistics pipeline are brought to a central facility for sorting, processing, and subsequent shipment to their next destinations. Centralization offers the advantage of efficiently managing the reverse logistics flow, potentially leading to increased revenues from returned items. Moreover, it aids in determining the most appropriate reverse channel for handling returns. On the contrary, a decentralized system is more suitable when products are returned directly from consumers to retailers. In this scenario, individual sales outlets serve as gatekeepers for processing returns. Considerations encompass the product lifecycle, product value and cost, return volume, and the geographic distribution of returns. II) In-house vs. third-party logistics providers: The decision between insourcing and outsourcing in reverse logistics network design is pivotal. In insourcing, a company handles the entire reverse logistics process, including material reuse. In contrast, outsourcing involves third-party providers taking on various aspects of reverse logistics. Companies may choose outsourcing due to resource constraints or lack of expertise, with the extent of outsourcing varying based on factors such as personnel skills and cost considerations. Based on the conceptual framework, the authors have constructed a mathematical model that optimizes reverse logistics network design decisions. 
The model considers key factors identified in the framework, such as transportation costs, facility capacities, and lead times. The authors have employed mixed LP to find optimal solutions that minimize costs while meeting organizational objectives.
Keywords: reverse logistics, supply chain management, optimization, e-commerce
Procedia PDF Downloads 38
17364 Predictive Modeling of Flank Wear in Hard Turning Using the Taguchi Method
Authors: Suha K. Shihab, Zahid A. Khan, Aas Mohammad, Arshad Noor Siddiquee
Abstract:
This paper presents the influence of cutting parameters (cutting speed, feed and depth of cut) on flank wear (VB) in the turning of 52100 hard alloy steel using a multilayer coated carbide insert under dry conditions. Nine experiments were performed based on Taguchi’s L9 orthogonal array. Analysis of variance (ANOVA) was used to determine the effects of the cutting parameters on flank wear. The results of the study revealed that the cutting speed (A) and feed rate (B) are the dominant factors affecting flank wear, while the depth of cut (C) does not have a significant effect. The optimal combination of the cutting parameters for flank wear is found to be A1B1C1. The mathematical model for flank wear is found to be statistically significant. The predicted and measured values of flank wear are found to be very close to each other.
Keywords: flank wear, hard turning, Taguchi approach, optimization
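A short sketch of the Taguchi analysis outlined above: compute smaller-the-better signal-to-noise ratios for flank wear over an L9 array and pick, for each factor (A: speed, B: feed, C: depth of cut), the level with the highest mean S/N. The L9 layout is standard; the wear values are hypothetical, not the paper's data.

```python
# Smaller-the-better S/N analysis over a Taguchi L9 array (hypothetical wear data).
import numpy as np

L9 = np.array([[1,1,1],[1,2,2],[1,3,3],
               [2,1,2],[2,2,3],[2,3,1],
               [3,1,3],[3,2,1],[3,3,2]])          # factor levels for A, B, C
vb = np.array([0.08, 0.11, 0.14, 0.12, 0.16, 0.13, 0.18, 0.21, 0.19])  # flank wear, mm

sn = -10 * np.log10(vb**2)                        # smaller-the-better S/N per run

for f, name in enumerate("ABC"):
    means = [sn[L9[:, f] == lvl].mean() for lvl in (1, 2, 3)]
    best = int(np.argmax(means)) + 1
    print(f"{name}: mean S/N by level = {np.round(means, 2)}, best level = {best}")
```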
Procedia PDF Downloads 662
17363 Technology Valuation of Unconventional Gas R&D Project Using Real Option Approach
Authors: Young Yoon, Jinsoo Kim
Abstract:
The adoption of information and communication technologies (ICT) across all industries is growing under Industry 4.0. Many oil companies are also increasingly adopting ICT to improve the efficiency of existing operations, make more accurate and quicker decisions, and reduce overall costs through optimization. ICT is playing an important role in the process of unconventional oil and gas development, and companies must take advantage of ICT to gain a competitive advantage. In this study, a real option approach has been applied to an unconventional gas R&D project to evaluate its ICT. Many unconventional gas reserves, such as shale gas and coal-bed methane (CBM), have been developed thanks to technological improvement and high energy prices. There are many uncertainties in unconventional development across the three stages (exploration, development, production). Traditional quantitative cost-benefit methods, such as net present value (NPV), are not sufficient for capturing ICT value. We attempted to evaluate the ICT by applying a compound option model; the model is applied to a real CBM project case, showing how it considers uncertainties. Variables are treated as uncertain, and a Monte Carlo simulation is performed to consider their effects. Acknowledgement: This work was supported by the Energy Efficiency & Resources Core Technology Program of the Korea Institute of Energy Technology Evaluation and Planning (KETEP) granted financial resource from the Ministry of Trade, Industry & Energy, Republic of Korea (No. 20152510101880) and by the National Research Foundation of Korea Grant funded by the Korean Government (NRF-205S1A3A2046684).
Keywords: information and communication technologies, R&D, real option, unconventional gas
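A simplified sketch of real option valuation by Monte Carlo: the project value follows geometric Brownian motion and the firm invests only if the value at the decision date exceeds the development cost, so the flexibility has value beyond the static NPV. The paper applies a compound (multi-stage) option to a CBM project; this single-stage version and all numbers are hypothetical.

```python
# Single-stage real option valued by Monte Carlo under GBM (illustrative numbers).
import numpy as np

rng = np.random.default_rng(3)
V0, K, r, sigma, T = 100.0, 95.0, 0.03, 0.4, 3.0   # value, cost, rate, volatility, years
n = 200_000

Z = rng.standard_normal(n)
VT = V0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * Z)  # simulated project value
option_value = np.exp(-r * T) * np.maximum(VT - K, 0).mean()         # value of the flexibility
static_npv = V0 - K

print(f"static NPV = {static_npv:.1f}, real option value = {option_value:.1f}")
```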
Procedia PDF Downloads 228
17362 A Strategic Partner Evaluation Model for the Project Based Enterprises
Authors: Woosik Jang, Seung H. Han
Abstract:
Optimal partner selection is one of the most important factors in a project’s success. In practice, however, there are gaps in the perception of success depending on the role of the enterprises in the projects. This frequently makes the relationship between partner evaluation results and final project performance insufficient. To meet these challenges, this study proposes a strategic partner evaluation model considering the perception gaps between enterprises. A total of three surveys were performed: factor selection, perception gap analysis, and case application. A total of eight factors were then extracted from independent-sample t-tests and the Borich model to set up the evaluation model. Finally, through the case applications, only 16 enterprises were re-evaluated to the “Good” grade among the 22 graded “Good” by the existing model. On the contrary, 12 enterprises were re-evaluated to the “Good” grade among the 19 graded “Bad” by the existing model. Consequently, the perception-gap-based evaluation model is expected to improve decision-making quality and also enhance the probability of project success.
Keywords: partner evaluation model, project based enterprise, decision making, perception gap, project performance
Procedia PDF Downloads 156
17361 Cross-Dipole Right-Hand Circularly Polarized UHF/VHF Yagi-Uda Antenna for Satellite Applications
Authors: Shativel S., Chandana B. R., Kavya B. C., Obli B. Vikram, Suganthi J., Nagendra Rao G.
Abstract:
Satellite communication plays a pivotal role in modern global communication networks, serving as a vital link between terrestrial infrastructure and remote regions. The demand for reliable satellite reception systems, especially in the UHF (Ultra High Frequency) and VHF (Very High Frequency) bands, has grown significantly over the years. This research paper presents the design and optimization of a high-gain, dual-band crossed Yagi-Uda antenna in CST Studio Suite, specifically tailored for satellite reception. The proposed antenna system incorporates a circularly polarized (Right-Hand Circular Polarization, RHCP) design to reduce Faraday loss. Our aim was to use fewer elements while still achieving good gain, so the antenna is constructed using 6x2 elements arranged as a cross dipole and supported by a boom. We have achieved 10.67 dBi at 146 MHz and 9.28 dBi at 437.5 MHz. The process includes parameter optimization and fine-tuning of the Yagi-Uda array’s elements, such as the length and spacing of directors and reflectors, to achieve high gain and desirable radiation patterns. Furthermore, the optimization process considers the requirements of the UHF and VHF frequency bands, ensuring broad frequency coverage for satellite reception. The results of this research are anticipated to contribute significantly to the advancement of satellite reception systems, enhancing their capability to reliably connect remote and underserved areas to the global communication network. Through innovative antenna design and simulation techniques, this study seeks to provide a foundation for the development of next-generation satellite communication infrastructure.
Keywords: Yagi-Uda antenna, RHCP, gain, UHF antenna, VHF antenna, CST, radiation pattern
Procedia PDF Downloads 60
17360 A Super-Efficiency Model for Evaluating Efficiency in the Presence of Time Lag Effect
Authors: Yanshuang Zhang, Byungho Jeong
Abstract:
In many cases, there is a time lag between the consumption of inputs and the production of outputs. This time lag effect should be considered in evaluating the performance of organizations. Recently, a couple of DEA models were developed for considering the time lag effect in the efficiency evaluation of research activities. The multi-period input (MpI) and multi-period output (MpO) models are integrated models that calculate simple efficiency considering the time lag effect. However, these models cannot discriminate among efficient DMUs because of the nature of the basic DEA model, in which efficiency scores are limited to 1. That is, efficient DMUs cannot be discriminated because their efficiency scores are the same. Thus, this paper suggests a super-efficiency model for efficiency evaluation under the consideration of the time lag effect, based on the MpO model. A case example using a long-term research project is given to compare the suggested model with the MpO model.
Keywords: DEA, super-efficiency, time lag, multi-periods input
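A sketch of the super-efficiency idea referenced above: the DMU under evaluation is excluded from its own reference set, so efficient DMUs can obtain scores above 1 and be ranked. The linear program below is the standard input-oriented (Andersen-Petersen) formulation on hypothetical data; the paper's model additionally handles multi-period, time-lagged outputs, which is not shown here.

```python
# Input-oriented CCR super-efficiency score via a linear program (hypothetical data).
import numpy as np
from scipy.optimize import linprog

def super_efficiency(X, Y, o):
    """X: (inputs x DMUs), Y: (outputs x DMUs); o: index of the evaluated DMU."""
    n = X.shape[1]
    others = [j for j in range(n) if j != o]
    # decision variables: [theta, lambda_j for j != o]; minimize theta
    c = np.r_[1.0, np.zeros(n - 1)]
    A_in = np.hstack([-X[:, [o]], X[:, others]])                    # sum lam*x_j <= theta*x_o
    A_out = np.hstack([np.zeros((Y.shape[0], 1)), -Y[:, others]])   # sum lam*y_j >= y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(X.shape[0]), -Y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * (n - 1))
    return res.fun

X = np.array([[4.0, 6.0, 5.0, 8.0]])      # one input, four DMUs
Y = np.array([[2.0, 3.0, 4.0, 3.0]])      # one output
for o in range(4):
    print(f"DMU {o}: super-efficiency = {super_efficiency(X, Y, o):.3f}")
```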
Procedia PDF Downloads 470
17359 The Social Model of Disability and Disability Rights: Defending a Conceptual Alignment between the Social Model’s Concept of Disability and the Nature of Rights and Duties
Authors: Adi Goldiner
Abstract:
Historically, the social model of disability has played a pivotal role in bringing rights discourse into the disability debate. Against this backdrop, the paper explores the conceptual alignment between the social model’s account of disability and the nature of rights. Specifically, the paper examines the possibility that the social model conceptualizes disability in a way that aligns with the nature of rights and thus motivates the invocation of disability rights. Methodologically, the paper juxtaposes the literature on the social model of disability, primarily the work of the Union of the Physically Impaired Against Segregation in the UK and related scholarship, with theories of moral rights. By focusing on the interplay between the social model of disability and rights, the paper provides a conceptual explanation for the rise of disability rights. In addition, the paper sheds light on the nature of rights, their function and limitations, in the context of disability rights. The paper concludes that the social model’s conceptualization of disability is hospitable to rights, because it opens up the possibility that there are duties that correlate with disability rights. Under the social model, disability is a condition that can be eliminated by the removal of social, structural, and attitudinal barriers. Accordingly, the social model dispels the idea that the actions of others towards disabled people will have a marginal impact on their interests in not being disabled. Equally important, the social model refutes the idea that in order to significantly serve people's interests in not being disabled, it is necessary to cure bodily impairments, which is not always possible. As rights correlate with duties that are possible to comply with, as well as those that significantly serve the interests of the right holders, the social model’s conceptualization of disability invites the reframing of problems related to disability in terms of infringements of disability rights. A possible objection to the paper’s argument is raised, according to which the social model is at odds with the invocation of disability rights because disability rights are ineffective in realizing the social model's goal of improving the lives of disabled people by eliminating disability. The paper responds to this objection by drawing a distinction between ‘moral rights,’ which, conceptually, are not subject to criticism of ineffectiveness, and ‘legal rights,’ which are.
Keywords: disability rights, duties, moral rights, social model
Procedia PDF Downloads 404
17358 Statistical Classification, Downscaling and Uncertainty Assessment for Global Climate Model Outputs
Authors: Queen Suraajini Rajendran, Sai Hung Cheung
Abstract:
Statistical downscaling models are required to connect global climate model outputs with local weather variables for climate change impact prediction. For reliable climate change impact studies, the uncertainty associated with the models, including natural variability, uncertainty in the climate model(s), the downscaling model, model inadequacy and uncertainty in the predicted results, should be quantified appropriately. In this work, a new approach is developed by the authors for statistical classification, statistical downscaling and uncertainty assessment, and is applied to Singapore rainfall. It is a robust Bayesian uncertainty analysis methodology based on coupling dependent modeling errors with classification and statistical downscaling models, in such a way that the dependency among modeling errors impacts the results of both classification and statistical downscaling model calibration and the uncertainty analysis for future prediction. Singapore data are considered here, and the uncertainty and prediction results are obtained. From the results obtained, directions of research for improvement are briefly presented.
Keywords: statistical downscaling, global climate model, climate change, uncertainty
Procedia PDF Downloads 367
17357 Value Engineering and Its Impact on Drainage Design Optimization for Penang International Airport Expansion
Authors: R.M. Asyraf, A. Norazah, S.M. Khairuddin, B. Noraziah
Abstract:
Designing a system today involves a vital, challenging task: ensuring that the design philosophy is maintained in an economical way. This paper examines the value engineering (VE) approach applied to infrastructure works, namely stormwater drainage. The method was adopted at the stage when the consultants had completed the detailed design. A Function Analysis System Technique (FAST) diagram and the VE job plan phases (information, function analysis, creative judgement, development, and recommendation) are used to scrutinize the initial stormwater drainage design. An estimated cost reduction of 2% over the initial proposal was obtained using the VE approach. This cost reduction comes from the design optimization of the drainage foundation and structural system, where the pile design and drainage base structure are optimized. Likewise, the design of the on-site detention tank (OSD) pump was revised and contributed to the cost reduction obtained. This case study shows that the VE approach can be an important tool in optimizing the design to reduce costs.
Keywords: value engineering, function analysis system technique, stormwater drainage, cost reduction
Procedia PDF Downloads 143
17356 A New Fuzzy Fractional Order Model of Transmission of Covid-19 With Quarantine Class
Authors: Asma Hanif, A. I. K. Butt, Shabir Ahmad, Rahim Ud Din, Mustafa Inc
Abstract:
This paper is devoted to the study of a fuzzy fractional mathematical model describing the transmission dynamics of the infectious disease Covid-19. The proposed dynamical model consists of susceptible, exposed, symptomatic, asymptomatic, quarantined, hospitalized and recovered compartments. In this study, we deal with the fuzzy fractional model defined in Caputo’s sense. We show positivity of the state variables, i.e., that all state variables representing the different compartments of the model remain positive. Using the Gronwall inequality, we show that the solution of the model is bounded. Using the notion of the next-generation matrix, we find the basic reproduction number of the model. We demonstrate the local and global stability of the equilibrium point by using the Castillo-Chavez approach and Lyapunov theory with the LaSalle invariance principle, respectively. We present results revealing the existence and uniqueness of the solution of the considered model through the fixed point theorems of Schauder and Banach. Using the fuzzy hybrid Laplace method, we acquire the approximate solution of the proposed model. The results are graphically presented via MATLAB-17.
Keywords: Caputo fractional derivative, existence and uniqueness, Gronwall inequality, Lyapunov theory
Procedia PDF Downloads 104
17355 Optimization of Ultrasound-Assisted Extraction of Oil from Spent Coffee Grounds Using a Central Composite Rotatable Design
Authors: Malek Miladi, Miguel Vegara, Maria Perez-Infantes, Khaled Mohamed Ramadan, Antonio Ruiz-Canales, Damaris Nunez-Gomez
Abstract:
Coffee is the second most consumed commodity worldwide, yet it also generates colossal amounts of waste. Proper management of coffee waste is proposed by converting it into products with higher added value, to achieve sustainability of the economic and ecological footprint and protect the environment. Based on this, studies looking at the recovery of coffee waste have become more relevant in recent decades. Spent coffee grounds (SCGs), resulting from brewing coffee, represent the major waste produced by the coffee industry. The fact that SCGs have little economic value, are abundant in nature and industry, do not compete with agriculture and, especially, have a high oil content (between 7-15% of total dry matter weight, depending on the coffee variety, Arabica or Robusta) encourages their use as a sustainable feedstock for bio-oil production. Bio-oil extraction is a crucial step towards biodiesel production by the transesterification process. However, conventional oil extraction methods are not recommended due to their high consumption of energy and time and the generation of toxic volatile organic solvents. Thus, finding a sustainable, economical, and efficient extraction technique is crucial to scale up the process and to ensure more environment-friendly production. From this perspective, the aim of this work was a statistical study to identify an efficient strategy for oil extraction with n-hexane using indirect sonication. The coffee waste used in this work was a mixture of Arabica and Robusta. The effects of temperature, sonication time, and solvent-to-solid ratio on the oil yield were statistically investigated by a 2³ Central Composite Rotatable Design (CCRD). The results were analyzed using STATISTICA 7 StatSoft software. The CCRD showed the significance of all the variables tested (P < 0.05) on the process output. The validation of the model by analysis of variance (ANOVA) showed a good fit for the results obtained at a 95% confidence interval, and the predicted vs. experimental values plot confirmed a satisfactory correlation of the model results. In addition, the identification of the optimum experimental conditions was based on the study of the response surface graphs (2-D and 3-D) and the critical statistical values. Based on the CCRD results, 29 °C, 56.6 min, and a solvent-to-solid ratio of 16 were the best experimental conditions defined statistically for coffee waste oil extraction using n-hexane as the solvent. Under these conditions, the oil yield was >9% in all cases. The results confirmed that using an ultrasound bath to extract oil is more economical, greener, and more efficient than the Soxhlet method.
Keywords: coffee waste, optimization, oil yield, statistical planning
Procedia PDF Downloads 118
17354 User-Based Cannibalization Mitigation in an Online Marketplace
Authors: Vivian Guo, Yan Qu
Abstract:
Online marketplaces are not only digital places where consumers buy and sell merchandise; they are also destinations for brands to connect with real consumers at the moment when customers are in a shopping mindset. For many marketplaces, brands have been important partners through advertising. There can be, however, a risk of advertising impacting a consumer’s shopping journey if it hurts the user experience or takes the user away from the site. Both could lead to the loss of transaction revenue for the marketplace. In this paper, we present user-based methods for cannibalization control by selectively turning off ads for users who are likely to be cannibalized by ads, subject to business objectives. We present ways of measuring cannibalization of advertising in the context of an online marketplace and propose novel ways of measuring cannibalization through purchase propensity and uplift modeling. A/B testing has shown that our methods can significantly improve user purchase and engagement metrics while operating within business objectives. To our knowledge, this is the first paper that addresses cannibalization mitigation at the user level in the context of advertising.
Keywords: cannibalization, machine learning, online marketplace, revenue optimization, yield optimization
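A minimal sketch of the two-model uplift approach hinted at above: fit purchase propensity separately for ad-exposed (treated) and unexposed (control) users, then score uplift as the difference; users with strongly negative uplift are candidates for ad suppression. The features, data, and threshold below are synthetic placeholders, not the paper's.

```python
# Two-model uplift estimation for ad cannibalization (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 20_000
X = rng.normal(size=(n, 3))                       # user features (hypothetical)
treated = rng.integers(0, 2, n)                   # 1 = saw ads
# Synthetic outcome: ads distract users with high X[:, 0] (cannibalization effect).
p = 1 / (1 + np.exp(-(0.5 * X[:, 1] - 0.8 * treated * X[:, 0])))
purchased = rng.binomial(1, p)

m_t = LogisticRegression().fit(X[treated == 1], purchased[treated == 1])
m_c = LogisticRegression().fit(X[treated == 0], purchased[treated == 0])
uplift = m_t.predict_proba(X)[:, 1] - m_c.predict_proba(X)[:, 1]

suppress_ads = uplift < -0.05                     # business threshold (assumed)
print(f"share of users flagged for ad suppression: {suppress_ads.mean():.1%}")
```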
Procedia PDF Downloads 159
17353 A New Car-Following Model with Consideration of the Brake Light
Authors: Zhiyuan Tang, Ju Zhang, Wenyuan Wu
Abstract:
In this research, a car-following model with consideration of the status of the brake light is proposed. The numerical results show that the stability of the traffic flow is improved. The ability of the brake light to reduce car accidents is also shown.
Keywords: brake light, car-following model, traffic flow, regional planning, transportation
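The abstract does not give the model's equations, so the sketch below only illustrates the general idea: the classical optimal-velocity car-following model is extended with an extra deceleration sensitivity that activates when the leading car's brake light is on (i.e., when the leader is decelerating). The functional form, parameters, and simulation setup are assumptions, not the paper's model.

```python
# Optimal-velocity car-following on a ring road with an assumed brake-light term.
import numpy as np

def optimal_velocity(gap, v_max=30.0, h_c=25.0):
    # Standard optimal-velocity function of the Bando-type model.
    return 0.5 * v_max * (np.tanh(gap - h_c) + np.tanh(h_c))

def simulate(n=30, length=1500.0, steps=2000, dt=0.1, a=0.6, lam_brake=0.4):
    rng = np.random.default_rng(5)
    x = np.linspace(0, length, n, endpoint=False) + rng.normal(0, 1.0, n)
    v = np.full(n, 15.0)
    acc = np.zeros(n)
    for _ in range(steps):
        gap = (np.roll(x, -1) - x) % length        # headway to the leading car
        brake_on = np.roll(acc, -1) < -0.1         # leader's brake light from last step
        acc = a * (optimal_velocity(gap) - v) - lam_brake * brake_on * v
        v = np.maximum(v + acc * dt, 0.0)
        x = (x + v * dt) % length
    return v

v = simulate()
print("mean speed after relaxation:", round(float(v.mean()), 2), "m/s")
```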
Procedia PDF Downloads 578
17352 Collision Avoidance Based on Model Predictive Control for Nonlinear Octocopter Model
Authors: Doğan Yıldız, Aydan Müşerref Erkmen
Abstract:
The controller of an octocopter is mostly based on a PID controller. For complex maneuvers, such as collision avoidance, PID controllers have limited performance capability. When an octocopter needs to avoid an obstacle, it must instantly perform an agile maneuver. Moreover, this kind of maneuver is severely affected by the nonlinear characteristics of the octocopter. When these kinds of limitations are considered, the situation is highly challenging for the PID controller. In the proposed study, these challenges are addressed by using a model predictive controller (MPC) for collision avoidance with a nonlinear octocopter model. The aim is to show that MPC-based collision avoidance is capable of dealing with fast-varying conditions upon obstacle detection and of diminishing the nonlinear effects of the octocopter under varying disturbances.
Keywords: model predictive control, nonlinear octocopter model, collision avoidance, obstacle detection
Procedia PDF Downloads 189
17351 Utilization of Mustard Leaves (Brassica juncea) Powder for the Development of Cereal Based Extruded Snacks
Authors: Maya S. Rathod, Bahadur Singh Hathan
Abstract:
Mustard leaves are rich in folates and vitamins A, K and B-complex. Mustard greens are low in calories and fats and rich in dietary fiber. They are rich in potassium, manganese, iron, copper, calcium and magnesium, and low in sodium. They are also very rich in antioxidants and phytonutrients. For the optimization of the process variables (moisture content and mustard leaves powder), the experiments were conducted according to a face-centered central composite design of RSM. The mustard leaves powder was incorporated into a composite flour (a combination of rice, chickpea and corn in the ratio of 70:15:15). The blend was extruded in a twin-screw extruder at a barrel temperature of 120°C. The independent variables were mustard leaves powder (2-10%) and moisture content (12-20%). Responses analyzed were bulk density, water solubility index, water absorption index, lateral expansion, hardness, antioxidant activity, total phenolic content and overall acceptability. The optimum conditions obtained were 7.19 g mustard leaves powder in 100 g premix having 16.8% moisture content (w.b.).
Keywords: extrusion, mustard leaves powder, optimization, response surface methodology
Procedia PDF Downloads 542
17350 An Alternative Richards’ Growth Model Based on Hyperbolic Sine Function
Authors: Samuel Oluwafemi Oyamakin, Angela Unna Chukwu
Abstract:
The Richards growth equation, a generalized logistic growth equation, was improved upon by introducing an allometric parameter using the hyperbolic sine function. The integral solution to this was called the hyperbolic Richards growth model, the solution having been transformed from a deterministic to a stochastic growth model. Its predictive ability was compared with that of the classical Richards growth model; the proposed approach mimics the natural variability of height/diameter increment with respect to age and therefore provides more realistic height/diameter predictions, as assessed using the coefficient of determination (R²), Mean Absolute Error (MAE) and Mean Square Error (MSE). The Kolmogorov-Smirnov and Shapiro-Wilk tests were also used to test the behavior of the error term for possible violations. The mean function of top height/DBH over age predicted the observed values more closely under the hyperbolic Richards nonlinear growth model than under the classical Richards growth model.
Keywords: height, diameter at breast height, DBH, hyperbolic sine function, Pinus caribaea, Richards' growth model
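For reference, a sketch of fitting the classical Richards curve to height-age data with nonlinear least squares; the paper's hyperbolic-sine modification is not specified in the abstract, so only this classical baseline is shown, and the data and starting values are synthetic assumptions.

```python
# Fit the classical Richards growth curve H(t) = A * (1 - b*exp(-k*t))**(1/(1-m)).
import numpy as np
from scipy.optimize import curve_fit

def richards(t, A, b, k, m):
    return A * (1.0 - b * np.exp(-k * t)) ** (1.0 / (1.0 - m))

rng = np.random.default_rng(6)
age = np.linspace(1, 30, 40)                                   # years (synthetic)
height = richards(age, A=25.0, b=0.9, k=0.15, m=0.5) + rng.normal(0, 0.4, age.size)

popt, _ = curve_fit(richards, age, height, p0=[20.0, 0.8, 0.1, 0.4],
                    bounds=([1.0, 0.0, 0.01, 0.0], [100.0, 0.99, 1.0, 0.95]))
pred = richards(age, *popt)
r2 = 1 - np.sum((height - pred) ** 2) / np.sum((height - height.mean()) ** 2)
print("fitted [A, b, k, m]:", np.round(popt, 3), " R^2 =", round(r2, 4))
```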
Procedia PDF Downloads 390
17349 Fair Value Accounting and Evolution of the Ohlson Model
Authors: Mohamed Zaher Bouaziz
Abstract:
Our study examines the Ohlson Model, which links a company's market value to its equity and net earnings, in the context of the evolution of the Canadian accounting model, characterized by more extensive use of fair value and a broader measure of performance after IFRS adoption. Our hypothesis is that if equity is reported at its fair value, this valuation is closely linked to market capitalization, so the weight of earnings weakens or even disappears in the Ohlson Model. Drawing on Canada's adoption of the International Financial Reporting Standards (IFRS), our results support our hypothesis that equity appears to include most of the relevant information for investors, while earnings have become less important. However, the predictive power of earnings does not disappear.
Keywords: fair value accounting, Ohlson model, IFRS adoption, value-relevance of equity and earnings
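A minimal sketch of the basic Ohlson-type value-relevance regression underlying this kind of study: market value regressed on book value of equity and net earnings, with the relative weights of the two coefficients compared before and after IFRS adoption. The data below are synthetic placeholders, not the study's sample.

```python
# Ohlson-style regression: MarketValue = b0 + b1*BookValue + b2*Earnings + e
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 300
book_value = rng.gamma(5.0, 100.0, n)               # book value of equity (hypothetical)
earnings = 0.1 * book_value + rng.normal(0, 20, n)  # net earnings (hypothetical)
market_value = 50 + 1.1 * book_value + 2.5 * earnings + rng.normal(0, 40, n)

X = sm.add_constant(np.column_stack([book_value, earnings]))
model = sm.OLS(market_value, X).fit()
print(model.params)          # [b0, b1 (equity), b2 (earnings)]
print("R^2:", round(model.rsquared, 3))
```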
Procedia PDF Downloads 188
17348 A Constitutive Model of Ligaments and Tendons Accounting for Fiber-Matrix Interaction
Authors: Ratchada Sopakayang, Gerhard A. Holzapfel
Abstract:
In this study, a new constitutive model is developed to describe the hyperelastic behavior of collagenous tissues with a parallel arrangement of collagen fibers such as ligaments and tendons. The model is formulated using a continuum approach incorporating the structural changes of the main tissue components: collagen fibers, proteoglycan-rich matrix and fiber-matrix interaction. The mechanical contribution of the interaction between the fibers and the matrix is simply expressed by a coupling term. The structural change of the collagen fibers is incorporated in the constitutive model to describe the activation of the fibers under tissue straining. Finally, the constitutive model can easily describe the stress-stretch nonlinearity which occurs when a ligament/tendon is axially stretched. This study shows that the interaction between the fibers and the matrix contributes to the mechanical tissue response. Therefore, the model may lead to a better understanding of the physiological mechanisms of ligaments and tendons under axial loading.
Keywords: constitutive model, fiber-matrix, hyperelasticity, interaction, ligament, tendon
Procedia PDF Downloads 297
17347 Approach to Study the Workability of Concrete with the Fractal Model
Authors: Achouri Fatima, Chouicha Kaddour
Abstract:
The main parameters affecting workability are the water content, the particle size, and the total surface area of the grains, since the mixing water first wets the surface of the grains and then fills the voids between the grains to form entrapped water; the quantity of water remaining is called free water. The aim is to undertake a fractal approach through the relationship between the concrete formulation parameters and workability. To develop this approach, a series of concretes taken from the literature was investigated by varying formulation parameters such as G/S, the quantity of cement C and the quantity of mixing water E. We also call on other models, namely the water layer thickness model and the paste layer thickness model, to judge their relevance, with the following results: the water layer thickness model is considered relevant when there is a variation in the water quantity, while the paste layer thickness model is only applicable if we consider that the paste is made with grains up to Dmax = 2.85, the value from which we observe a stable model.
Keywords: concrete, fractal method, paste thickness, water thickness, workability
Procedia PDF Downloads 378
17346 Optimization of Biodiesel Production from Sunflower Oil Using Central Composite Design
Authors: Pascal Mwenge, Jefrey Pilusa, Tumisang Seodigeng
Abstract:
The current study investigated the effect of catalyst ratio and methanol-to-oil ratio on biodiesel production by using a central composite design. Biodiesel was produced by transesterification using sodium hydroxide as a homogeneous catalyst; a laboratory-scale reactor consisting of a flat-bottom flask fitted with a reflux condenser and a heating plate was used. Key parameters, including time, temperature, and mixing rate, were kept constant at 60 minutes, 60 °C and 600 RPM, respectively. From the results obtained, it was observed that the biodiesel yield depends on the catalyst ratio and the methanol-to-oil ratio. The highest yield of 50.65% was obtained at a catalyst ratio of 0.5 wt.% and a methanol-to-oil mole ratio of 10.5. The analysis of variance of the biodiesel yield showed an R-squared value of 0.8387. A quadratic mathematical model was developed to predict the biodiesel yield within the specified parameter ranges.
Keywords: ANOVA, biodiesel, catalyst, transesterification, central composite design
Procedia PDF Downloads 148
17345 Improved Predictive Models for the IRMA Network Using Nonlinear Optimisation
Authors: Vishwesh Kulkarni, Nikhil Bellarykar
Abstract:
Cellular complexity stems from the interactions among thousands of different molecular species. Thanks to the emerging fields of systems and synthetic biology, scientists are beginning to unravel these regulatory, signaling, and metabolic interactions and to understand their coordinated action. Reverse engineering of biological networks has several benefits, but poor data quality combined with the difficulty of reproducing the data limits the applicability of these methods. A few years back, many of the commonly used predictive algorithms were tested on a network constructed in the yeast Saccharomyces cerevisiae (S. cerevisiae) to resolve this issue. The network was a synthetic network of five genes regulating each other, for the so-called in vivo reverse-engineering and modeling assessment (IRMA). The network was constructed in S. cerevisiae since it is a simple and well-characterized organism. The synthetic network included a variety of regulatory interactions, thus capturing the behaviour of larger eukaryotic gene networks on a smaller scale. We derive a new set of algorithms by solving a nonlinear optimization problem and show how these algorithms outperform other algorithms on these datasets.
Keywords: synthetic gene network, network identification, optimization, nonlinear modeling
Procedia PDF Downloads 155
17344 Improve Student Performance Prediction Using Majority Vote Ensemble Model for Higher Education
Authors: Wade Ghribi, Abdelmoty M. Ahmed, Ahmed Said Badawy, Belgacem Bouallegue
Abstract:
In higher education institutions, the most pressing priority is to improve student performance and retention. Large volumes of student data are used in Educational Data Mining techniques to find new hidden information in students' learning behavior, particularly to uncover early symptoms of at-risk students. On the other hand, data with noise, outliers, and irrelevant information may lead to incorrect conclusions. By identifying features of students' data that have the potential to improve performance prediction results, comparing and identifying the most appropriate ensemble learning technique after preprocessing the data, and optimizing the hyperparameters, this paper aims to develop a reliable student performance prediction model for higher education institutions. Data were gathered from two different systems: a student information system and an e-learning system for undergraduate students in the College of Computer Science of a Saudi Arabian state university. The cases of 4413 students were used in this article. The process includes data collection, data integration, data preprocessing (such as cleaning, normalization, and transformation), feature selection, pattern extraction, and, finally, model optimization and assessment. Random Forest, Bagging, Stacking, Majority Vote, and two types of Boosting techniques, AdaBoost and XGBoost, are ensemble learning approaches, whereas Decision Tree, Support Vector Machine, and Artificial Neural Network are supervised learning techniques. Hyperparameters for the ensemble learning systems are fine-tuned to provide enhanced performance and optimal output. The findings imply that combining features of students' behavior from the e-learning and student information systems using Majority Vote produced better outcomes than the other ensemble techniques.
Keywords: educational data mining, student performance prediction, e-learning, classification, ensemble learning, higher education
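A minimal sketch of the majority-vote ensemble described above, combining base learners named in the abstract (decision tree, SVM, neural network) via scikit-learn's VotingClassifier. The dataset is a synthetic stand-in for the student records; the hyperparameters are assumptions.

```python
# Hard (majority-vote) ensemble over three base classifiers, synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)   # stand-in data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("tree", DecisionTreeClassifier(max_depth=6, random_state=0)),
        ("svm", make_pipeline(StandardScaler(), SVC(kernel="rbf"))),
        ("mlp", make_pipeline(StandardScaler(), MLPClassifier(max_iter=500, random_state=0))),
    ],
    voting="hard",                      # hard = majority vote on predicted labels
)
ensemble.fit(X_tr, y_tr)
print("hold-out accuracy:", round(ensemble.score(X_te, y_te), 3))
```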
Procedia PDF Downloads 105
17343 Kalman Filter for Bilinear Systems with Application
Authors: Abdullah E. Al-Mazrooei
Abstract:
In this paper, we present a new kind of bilinear system in the form of a state space model. The evolution of this system depends on the product of the state vector with itself. The well-known Lotka-Volterra and Lorenz models are special cases of this new model. We also present a generalization of the Kalman filter which is suitable for the new bilinear model. An application to real measurements is introduced to illustrate the efficiency of the proposed algorithm.
Keywords: bilinear systems, state space model, Kalman filter, application, models
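The abstract does not give the filter's equations, so the sketch below only illustrates how a Kalman-type filter can handle a state-times-state (bilinear) transition: an extended Kalman filter linearizes the dynamics at each step, here for a discretized Lotka-Volterra system (cited in the abstract as a special case) with only the first state observed. All parameters are assumptions.

```python
# Extended Kalman filter for a bilinear (Lotka-Volterra type) state model.
import numpy as np

dt, a, b, c, d = 0.01, 1.0, 0.4, 1.0, 0.2
Q, R = 1e-4 * np.eye(2), 0.05 ** 2
H = np.array([[1.0, 0.0]])                   # only the first state is measured

def f(x):                                    # bilinear state transition
    x1, x2 = x
    return np.array([x1 + dt * (a * x1 - b * x1 * x2),
                     x2 + dt * (-c * x2 + d * x1 * x2)])

def jac(x):                                  # Jacobian of f for the EKF linearization
    x1, x2 = x
    return np.array([[1 + dt * (a - b * x2), -dt * b * x1],
                     [dt * d * x2, 1 + dt * (-c + d * x1)]])

rng = np.random.default_rng(8)
x_true, x_est, P = np.array([3.0, 2.0]), np.array([2.0, 1.0]), np.eye(2)
for _ in range(3000):
    x_true = f(x_true) + rng.multivariate_normal(np.zeros(2), Q)
    z = x_true[0] + rng.normal(0, np.sqrt(R))          # noisy measurement of x1
    F = jac(x_est)                                      # predict step
    x_est, P = f(x_est), F @ P @ F.T + Q
    S = H @ P @ H.T + R                                 # update step
    K = P @ H.T / S
    x_est = x_est + (K * (z - H @ x_est)).ravel()
    P = (np.eye(2) - K @ H) @ P
print("true:", np.round(x_true, 3), " estimated:", np.round(x_est, 3))
```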
Procedia PDF Downloads 439