Search results for: cost and time
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 21986

20606 Modelling of Reactive Methodologies in Auto-Scaling Time-Sensitive Services With a MAPE-K Architecture

Authors: Óscar Muñoz Garrigós, José Manuel Bernabeu Aubán

Abstract:

Time-sensitive services are the base of the cloud services industry. Keeping service saturation low is essential for controlling response time. All auto-scalable services make use of reactive auto-scaling, yet reactive auto-scaling has received few in-depth studies. This work presents a model for reactive auto-scaling methodologies with a MAPE-K architecture. Queuing theory can compute different properties of static services but lacks parameters describing the transition between models; our model uses queuing theory parameters to characterize that transition. It relates the MAPE-K loop times, the sampling frequency, the cooldown period, the number of requests an instance can handle per unit of time, the number of incoming requests at a given instant, and a function describing the acceleration in the service's ability to handle more requests. The model is then used to horizontally auto-scale time-sensitive services composed of microservices, reevaluating its parameters periodically to allocate resources. The solution requires limiting the acceleration of the growth in the number of incoming requests to keep a constrained response time; business benefits determine such limits. The solution can add a dynamic number of instances and remains valid under different system sizes. The study includes performance recommendations to improve results according to the incoming load shape and business benefits. The proposed methodology is tested in a simulation. The simulator contains a load generator and a service composed of two microservices, where the frontend microservice depends on a backend microservice with a 1:1 request ratio. A typical request takes 2.3 seconds to be computed by the service and is discarded if it takes more than 7 seconds.
Both microservices contain a load balancer that assigns requests to the least loaded instance and preemptively discards requests that cannot finish in time, preventing resource saturation. When load decreases, instances with lower load are kept in a backlog where no more requests are assigned. If the load grows and an instance in the backlog is required, it returns to the running state; if it finishes computing all its requests and is no longer required, it is permanently deallocated. A few load patterns suffice to represent the worst-case scenarios for reactive systems; the following scenarios test response times, resource consumption and business costs. The first is a burst-load scenario: all methodologies will discard requests if the burst is steep enough, so this scenario focuses on the number of discarded requests and the variance of the response time. The second scenario contains sudden load drops followed by bursts, to observe how the methodology behaves when releasing resources that are required shortly afterwards. The third scenario contains diverse growth accelerations in the number of incoming requests, to observe how approaches that add different numbers of instances can handle the load at lower business cost. The proposed methodology is compared against a multiple-threshold CPU methodology allocating/deallocating 10 or 20 instances, outperforming the competitor in all studied metrics.
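The reactive sizing idea described above, relating per-instance capacity, the sampling period, and a bound on arrival acceleration, can be sketched roughly as follows. This is a minimal illustration under assumed parameter names and a simplified capacity rule, not the authors' actual MAPE-K model:

```python
import math

def required_instances(arrival_rate, accel_limit, sample_period,
                       per_instance_rate, target_utilization=0.8):
    """Reactive sizing sketch: provision for the worst-case arrival rate
    reachable before the next sampling instant, assuming the growth in
    incoming requests accelerates by at most `accel_limit` requests/s^2.
    All parameter names and the utilization rule are illustrative."""
    worst_case_rate = arrival_rate + accel_limit * sample_period
    capacity_per_instance = per_instance_rate * target_utilization
    return max(1, math.ceil(worst_case_rate / capacity_per_instance))
```

For example, with 100 requests/s arriving, a 5 s sampling period, an acceleration bound of 10 requests/s^2 and instances serving 20 requests/s, the sketch provisions 10 instances.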

Keywords: reactive auto-scaling, auto-scaling, microservices, cloud computing

Procedia PDF Downloads 88
20605 Optimization of Personnel Selection Problems via Unconstrained Geometric Programming

Authors: Vildan Kistik, Tuncay Can

Abstract:

From a business perspective, cost and profit are two key factors. Most businesses intend to minimize cost in order to maximize or at least maintain profit, so as to gain the greatest benefit. However, the physical system is very complicated because of technological constructions, rapidly intensifying competitive environments and similar factors, and in such a system it is not easy to maximize profits or minimize costs. Businesses must decide on the qualifications and competence of the personnel to be recruited, taking many criteria into consideration. Factors such as level of education, experience, psychological and sociological position, and the human relationships required in the field are just some of the important factors in selecting staff for a firm. Personnel selection is a very important and costly process for businesses in today's competitive market. Although many mathematical methods have been developed for personnel selection, their use is unfortunately rarely encountered in real life. In this study, unlike other methods, an exponential programming model was established based on the probability that a selected candidate fails after starting work. With the necessary transformations, the problem was transformed into an unconstrained geometric programming problem, and the personnel selection problem is approached with the geometric programming technique. Personnel selection scenarios for a classroom were established with the help of the normal distribution, and optimum solutions were obtained. In the most appropriate solutions, the personnel selection process for the classroom was achieved with minimum cost.
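Unconstrained geometric programs of the simplest kind admit closed-form optima. As a hedged, generic illustration (not the authors' personnel model), a one-variable two-term posynomial can be minimized by setting its derivative to zero:

```python
def gp_minimize_two_term(c1, a, c2, b):
    """Minimize f(x) = c1*x**a + c2*x**(-b) over x > 0 (all of c1, c2, a, b > 0).
    Setting f'(x) = a*c1*x**(a-1) - b*c2*x**(-b-1) = 0 gives the closed form
    x* = (b*c2 / (a*c1)) ** (1 / (a + b)). This is a textbook geometric
    programming example, not the exponential model from the abstract."""
    x = (b * c2 / (a * c1)) ** (1.0 / (a + b))
    return x, c1 * x**a + c2 * x**(-b)
```

With c1 = c2 = a = b = 1, the minimizer is x* = 1 and the minimum value is 2, consistent with the AM-GM bound for posynomials.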

Keywords: geometric programming, personnel selection, non-linear programming, operations research

Procedia PDF Downloads 264
20604 Method Development and Validation for Quantification of Active Content and Impurities of Clodinafop Propargyl and Its Enantiomeric Separation by High-Performance Liquid Chromatography

Authors: Kamlesh Vishwakarma, Bipul Behari Saha, Sunilkumar Sing, Abhishek Mishra, Sreenivas Rao

Abstract:

A rapid, sensitive and inexpensive method has been developed for the complete analysis of Clodinafop Propargyl. Clodinafop Propargyl enantiomers were separated on a chiral column, Chiralpak AS-H (250 mm x 4.6 mm, 5 µm), with a mobile phase of n-hexane:IPA (96:4) at a flow rate of 1.5 ml/min. The effluent was monitored by a UV detector at 230 nm. Clodinafop Propargyl content and impurity quantification were done with reverse-phase HPLC. The present study describes an HPLC method using a simple mobile phase for the quantification of Clodinafop Propargyl and its impurities. The method was validated and found to be accurate, precise, convenient and effective. Moreover, the lower solvent consumption along with the short analytical run time makes the analytical method cost effective.

Keywords: Clodinafop Propargyl, method, validation, HPLC-UV

Procedia PDF Downloads 364
20603 Energy Use and Econometric Models of Soybean Production in Mazandaran Province of Iran

Authors: Majid AghaAlikhani, Mostafa Hojati, Saeid Satari-Yuzbashkandi

Abstract:

This paper studies energy use patterns and the relationship between energy input and yield for soybean (Glycine max (L.) Merrill) in Mazandaran province of Iran. Data were collected by administering a questionnaire in face-to-face interviews. Results revealed that the highest share of energy consumption belongs to chemical fertilizers (29.29%), followed by diesel (23.42%) and electricity (22.80%). A total energy input of 23404.1 MJ ha-1 was consumed for soybean production. The energy productivity, specific energy and net energy values were estimated as 0.12 kg MJ-1, 8.03 MJ kg-1 and 49412.71 MJ ha-1, respectively. The ratio of energy output to energy input was 3.11. Direct, indirect, renewable and non-renewable energies accounted for 56.83%, 43.17%, 15.78% and 84.22%, respectively. Three econometric models were also developed to estimate the impact of energy inputs on yield. The models revealed that the impacts of chemical fertilizer and water on yield were significant at the 1% probability level. Also, the shares of direct and non-renewable energies were found to be rather high. Cost analysis revealed that the total cost of soybean production was around $518.43 per ha; accordingly, the benefit-cost ratio was estimated as 2.58. The energy use efficiency of 3.11 indicates that the inputs used in soybean production are used efficiently. However, due to the high rate of nitrogen fertilizer consumption, sustainable agriculture practices should be extended, and extension staff could propose substituting chemical fertilizer with biological fertilizer or green manure.
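The energy indices quoted above follow standard definitions and can be reproduced from the input and output figures. The sketch below uses the abstract's energy input and output ratio; the yield value is an assumption chosen only to make the example concrete:

```python
def energy_indices(energy_input, energy_output, crop_yield):
    """Standard energy-use indices for crop production.
    energy_input, energy_output in MJ/ha; crop_yield in kg/ha."""
    return {
        "energy_ratio": energy_output / energy_input,      # output-to-input ratio
        "energy_productivity": crop_yield / energy_input,  # kg/MJ
        "specific_energy": energy_input / crop_yield,      # MJ/kg
        "net_energy": energy_output - energy_input,        # MJ/ha
    }

# Input from the abstract; output back-solved from the 3.11 ratio;
# the yield of 2900 kg/ha is a hypothetical illustration.
indices = energy_indices(23404.1, 72786.8, 2900.0)
```

The computed ratio matches the reported 3.11, and the net energy comes out close to the reported 49412.71 MJ/ha (small differences reflect rounding in the abstract's figures).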

Keywords: Cobb-Douglas function, economic analysis, energy efficiency, energy use patterns, soybean

Procedia PDF Downloads 324
20602 The Strategic Entering Time of a Commerce Platform

Authors: Chia-li Wang

Abstract:

The surge of service and commerce platforms, such as e-commerce and the internet of things, has rapidly changed our lives. How to avoid congestion and get the job done on a platform is now a common problem many people encounter every day, and it requires platform users to decide when to enter the platform. To that end, we investigate the strategic entering time for a simple platform containing random numbers of buyers and sellers of some item. Upon a trade, the buyer and the seller gain their respective profits, yet they pay the cost of waiting in the platform. To maximize their expected payoffs from trading, both buyers and sellers can choose their entering times. This creates an interesting and practical framework of a game played among buyers, among sellers, and between the two sides: a strategy employed by a player is not only a move against players of its own type but also a response to those of the other type, and thus a strategy profile is composed of the strategies of buyers and sellers. The players' best response, the Nash equilibrium (NE) strategy profile, is derived from a pair of differential equations, which, in turn, are used to establish its existence and uniqueness. More importantly, its structure sheds valuable light on how the entering strategy of one side (buyers or sellers) is affected by the entering behavior of the other side. These results provide a basis for the study of dynamic pricing under stochastic demand-supply imbalances. Finally, the social welfare (the sum of the payoffs of individual participants) obtained by the socially optimal strategy and by the NE strategy are compared to quantify the efficiency loss relative to the socially optimal solution, which should help manage the platform better.

Keywords: double-sided queue, non-cooperative game, Nash equilibrium, price of anarchy

Procedia PDF Downloads 82
20601 Comparison of Machine Learning Models for the Prediction of System Marginal Price of Greek Energy Market

Authors: Ioannis P. Panapakidis, Marios N. Moschakis

Abstract:

The Greek energy market is structured as a mandatory pool where producers make their bid offers on a day-ahead basis. The system operator solves an optimization routine aiming at minimizing the cost of produced electricity, and the solution of the optimization problem leads to the calculation of the System Marginal Price (SMP). Accurate forecasts of the SMP can lead to increased profits and more efficient portfolio management from the producer's perspective. The aim of this study is to provide a comparative analysis of various machine learning models, such as artificial neural networks and neuro-fuzzy models, for the prediction of the SMP of the Greek market. Machine learning algorithms are favored in prediction problems since they can capture and simulate the volatility of complex time series.
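Any SMP forecasting model is usually benchmarked against a simple autoregressive baseline. As a hedged sketch (this is a naive baseline, not the paper's neural or neuro-fuzzy models), a zero-intercept AR(1) coefficient can be fitted by least squares and used for a one-step-ahead forecast:

```python
def fit_ar1(series):
    """Least-squares fit of x[t] = phi * x[t-1] (zero-intercept AR(1))."""
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(series[t - 1] ** 2 for t in range(1, len(series)))
    return num / den

def forecast_next(series):
    """One-step-ahead forecast of the next value, e.g. the next SMP."""
    return fit_ar1(series) * series[-1]
```

On a purely geometric series the fit recovers the growth factor exactly; on real SMP data the coefficient would be estimated in the same way from historical prices.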

Keywords: deregulated energy market, forecasting, machine learning, system marginal price

Procedia PDF Downloads 207
20600 Ranking the Factors That Influence the Construction Project Success: The Jordanian Perspective

Authors: Ghanim A. Bekr

Abstract:

Project success is what must be achieved for a project to be acceptable to the client, the stakeholders and the end-users who will be affected by it. Studying project success and its critical success factors (CSFs) is a means to improve the effectiveness of projects. This research attempts to identify which variables influence the success of project implementation. Through an extensive literature review and interviews, 83 factors categorized into 7 groups were selected for questionnaire respondents to score. Responses from 66 professionals with an average of 15 years of experience in different types of construction projects in Jordan were collected and analyzed using SPSS, and the most important factors for various success criteria are presented, using the relative importance index to rank the categories. The research revealed that the significant groups of factors are client-related factors, contractor-related factors, project manager (PM) related factors, and project management related factors. In addition, the top ten sub-factors are: assertion of the client towards a short project time, availability of skilled labor, assertion of the client towards a high level of quality, capability of the client in taking risk, previous experience of the PM in similar projects, previous experience of the contractor in similar projects, decision making by the client or the client's representative at the right time, assertion of the client towards a low project cost, experience in project management in previous projects, and flow of information among parties. The results should help construction project professionals take proactive measures for the successful completion of construction projects in Jordan.
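The relative importance index used for the ranking is conventionally computed as RII = ΣW / (A × N), where W are the respondents' scores, A is the highest possible score and N the number of respondents. A small sketch (factor names and scores are hypothetical):

```python
def relative_importance_index(scores, max_score=5):
    """RII = sum(W) / (A * N); lies in (0, 1], higher means more important."""
    return sum(scores) / (max_score * len(scores))

def rank_factors(factor_scores, max_score=5):
    """Sort factors by descending RII. `factor_scores` maps factor name
    to the list of scores it received."""
    return sorted(((f, relative_importance_index(s, max_score))
                   for f, s in factor_scores.items()),
                  key=lambda kv: kv[1], reverse=True)
```

For example, a factor scored [4, 5, 3, 4] on a 5-point scale has RII = 16/20 = 0.8.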

Keywords: construction projects, critical success factors, Jordan, project success

Procedia PDF Downloads 155
20599 Autonomic Recovery Plan with Server Virtualization

Authors: S. Hameed, S. Anwer, M. Saad, M. Saady

Abstract:

For autonomic recovery with server virtualization, a cogent plan that includes recovery techniques and backups with virtualized servers can be developed instead of assigning an idle server to backup operations. In addition to reducing hardware cost and data center footprint, the disaster recovery plan can ensure system uptime and meet objectives for high availability, recovery time, recovery point, server provisioning, and quality of service. This autonomic solution would also support disaster management, testing, and development of the recovery site. In this research, a workflow plan is proposed for supporting disaster recovery with virtualization, providing virtual monitoring, requirements engineering, solution decision making, quality testing, and disaster management. This recovery model would make disaster recovery much easier, faster, and less error prone.

Keywords: autonomous intelligence, disaster recovery, cloud computing, server virtualization

Procedia PDF Downloads 156
20598 A Performance Analysis Study of an Active Solar Still Integrating Fin at the Basin Plate

Authors: O. Ansari, H. Hafs, A. Bah, M. Asbik, M. Malha, M. Bakhouya

Abstract:

Water is one of the most important and vulnerable natural resources, threatened by human activities and climate change. Water levels continue to decline year after year, primarily because of sustained, extensive, and traditional usage methods. Improving water utilization has become an urgent issue in order to satisfy the needs of a growing population. Desalination of seawater or brackish water could help increase the water supply, but a cost-effective desalination process is required. The most appropriate method for this desalination is solar-driven distillation, given its simplicity, low cost and, especially, the availability of solar energy. The main objective of this paper is to demonstrate the influence of coupling a basin plate integrated with fins and preheating by a solar collector on the performance of a solar still. The energy balance equations for the various elements of the solar still are introduced, and a numerical example is used to show the efficiency of the proposed solution.

Keywords: active solar still, desalination, fins, solar collector

Procedia PDF Downloads 210
20597 Toward a New Approach for Modeling Lean, Agile and Leagile Supply Chains

Authors: Bouchra Abdelilah, Akram El Korchi, Atmane Baddou

Abstract:

In the highly competitive business era we witness nowadays, companies need more than ever to use all the resources they have in order to maximize performance and satisfy customers' needs. Changes in the market are often due to variations in demand, which require a very specific supply chain strategy. Supply chains aim to balance cost, quality, service level and lead time. Still, managers are confused when deciding which strategy works best for their supply chain: lean, agile or leagile. This paper presents a decision-making tool that aims to assist the manager in choosing the supply chain strategy that best suits the business, depending on the type of product and the nature of demand. Analyzing the different characteristics of the supply chain enables us to guide the manager to the suitable strategy among lean, agile and leagile.
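A decision tool of the kind described, mapping product type and demand characteristics to a strategy, can be sketched as a simple rule. The thresholds, category names and mapping below are illustrative assumptions, not the authors' actual tool:

```python
def choose_strategy(demand_variability, product_type):
    """Toy rule mapping demand/product characteristics to a supply chain
    strategy. demand_variability: coefficient of variation of demand;
    product_type: 'functional' (stable commodity) or 'innovative'.
    Thresholds are hypothetical."""
    if product_type == "functional" and demand_variability < 0.2:
        return "lean"     # predictable demand: minimize waste and cost
    if product_type == "innovative" and demand_variability >= 0.2:
        return "agile"    # volatile demand: maximize responsiveness
    return "leagile"      # mixed profile: lean base, agile past the decoupling point
```

A real tool would score several characteristics (lead time, variety, margin) rather than two, but the structure of the decision is the same.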

Keywords: supply chain, lean, agile, flexibility, performance

Procedia PDF Downloads 852
20596 Developing a Self-Healing Concrete Filler Using Poly(Methyl Methacrylate) Based Two-Part Adhesive

Authors: Shima Taheri, Simon Clark

Abstract:

Concrete is an essential building material used in the majority of structures. Degradation of concrete over time increases the life-cycle cost of an asset, with an estimated annual cost of billions of dollars to national economies. Most concrete failure occurs due to cracks, which propagate through a structure and cause the weakening that leads to failure. Stopping crack propagation is thus the key to protecting concrete structures from failure and is the best way to prevent inconveniences and catastrophes. Furthermore, the majority of cracks occur deep within the concrete in inaccessible areas and are invisible to normal inspection. Few materials intrinsically possess self-healing ability, but concrete is one of them; however, self-healing in concrete is limited to small dormant cracks in a moist environment and is difficult to control. In this project, we developed a method for self-healing of nascent fractures in concrete components through the automatic release of self-curing healing agents encapsulated in breakable nano- and micro-structures. A poly(methyl methacrylate) (PMMA) based two-part adhesive is encapsulated in core-shell structures with a brittle, weak, inert shell, synthesized via miniemulsion/solvent-evaporation polymerization. Stress fields associated with propagating cracks can break these capsules, releasing the healing agents at the point where they are needed. The shell thickness plays an important role in preserving the content until the final setting of the concrete. The capsules can also be surface-functionalized with carboxyl groups to overcome homogeneous-mixing issues. Currently, this self-healing system can replace up to 1% of the cement in a concrete formulation; increasing this amount to 5-7% without compromising compressive strength and shrinkage properties is still under investigation.
This self-healing system will not only increase the durability of structures by stopping crack propagation but also allow the use of less cement in concrete construction, thereby adding to the global effort for CO2 emission reduction.

Keywords: self-healing concrete, concrete crack, concrete deterioration, durability

Procedia PDF Downloads 111
20595 Adopting Precast Insulated Concrete Panels for Building Envelope in Hot Climate Zones

Authors: Mohammed Sherzad

Abstract:

Concrete buildings absorb more solar radiation than other building types, especially in hot climate zones. One of the primary concerns of architects and owners in hot climate zones is that the commonly used plastered and painted exterior finishes fade and peel, adding a high maintenance cost. Case studies of different exterior finishing treatments used in vernacular and contemporary dwellings in the United Arab Emirates were surveyed. The traditional plastered facade treatment proved more sustainable than that of new buildings. In addition, precast concrete insulated sandwich panels with an exposed colored-aggregate surface, used in contemporary dwellings, withstood the extensive heat, reduced the overall cost of maintenance, and contributed aesthetically to the building envelope, in addition to providing thermal insulation.

Keywords: precast concrete panels, façade treatment, hot climate

Procedia PDF Downloads 127
20594 Assessment of Procurement-Demand of Milk Plant Using Quality Control Tools: A Case Study

Authors: Jagdeep Singh, Prem Singh

Abstract:

Milk is considered an essential and complete food. The present study was conducted at the Mohali milk plant, particularly in the procurement section where cash inflow was highest, with the objective of achieving higher productivity and reducing wastage of milk. It was observed that from January 2014 to March 2014 the average procurement of milk was 4,19,361 liters per month at a procurement cost of Rs. 35 per liter, giving a total procurement cost of about Rs. 1 crore 46 lakh per month. However, a mismatch between milk procurement and production led to an average loss of Rs. 12,94,405 per month. To solve the procurement-production problem, quality control tools such as brainstorming, flow charts, cause-and-effect diagrams and Pareto analysis were applied wherever applicable. With the successful implementation of these quality control tools, an average saving of Rs. 4,59,445 per month was achieved.
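Pareto analysis, one of the quality control tools mentioned above, sorts loss causes by magnitude and keeps the "vital few" that account for roughly 80% of the total. A sketch with hypothetical loss categories:

```python
def pareto_vital_few(losses, cutoff=0.8):
    """Return the causes accounting for the first `cutoff` share of total
    loss (the 'vital few' of a Pareto analysis). `losses` maps cause
    names to loss amounts."""
    total = sum(losses.values())
    vital, cumulative = [], 0.0
    for cause, amount in sorted(losses.items(), key=lambda kv: kv[1],
                                reverse=True):
        vital.append(cause)
        cumulative += amount / total
        if cumulative >= cutoff:
            break
    return vital
```

If a procurement-production mismatch caused 70% of the loss, spillage 20% and spoilage 10%, the analysis would flag the first two as the vital few to address first.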

Keywords: milk, procurement-demand, quality control tools

Procedia PDF Downloads 523
20593 Effect of Thermal Energy on Inorganic Coagulation for the Treatment of Industrial Wastewater

Authors: Abhishek Singh, Rajlakshmi Barman, Tanmay Shah

Abstract:

Coagulation is considered one of the predominant water treatment processes for improving the cost-effectiveness of wastewater treatment. The purpose of this experiment on thermal coagulation is to increase the efficiency and the rate of reaction. The process uses renewable sources of energy and a shortened treatment time, aiming to alleviate water scarcity in regions on the brink of depletion. This paper covers the effects of temperature on the standard coagulation treatment of wastewater and on water quality. In addition, the coagulation is combined with bottom/fly ash, which acts as an adsorbent and removes most of the minor and macro particles by adsorption; this not only reduces the environmental burden of fly ash but also enhances the economic benefit. The method of sand filtration is also incorporated: the sand filter is an environmentally friendly, relatively simple and inexpensive wastewater treatment method. The experimental results obtained in this study satisfied the required parameters and were found satisfactory. The initial turbidity of the wastewater was 162 NTU and its initial temperature 27 °C. The temperature of the process was varied from 50 °C to 80 °C, and the concentration of alum in the wastewater from 60 mg/L to 320 mg/L. After treatment, turbidity ranged from 8.31 to 28.1 NTU and pH from 7.73 to 8.29. The effective time taken was 10 minutes for thermal mixing and sedimentation. The results indicate that thermal energy affects the coagulation treatment process; its influence on turbidity is assessed along with the use of renewable energy sources and the increased rate of reaction of the treatment process.
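The turbidity figures quoted above translate directly into a removal efficiency via the standard percent-removal formula; the computation below uses the abstract's raw (162 NTU) and best treated (8.31 NTU) values:

```python
def removal_efficiency(initial, final):
    """Percent removal, e.g. of turbidity (NTU), after treatment:
    100 * (initial - final) / initial."""
    return 100.0 * (initial - final) / initial

# Best case from the abstract: 162 NTU raw, 8.31 NTU after treatment
best = removal_efficiency(162.0, 8.31)
```

This gives roughly 94.9% turbidity removal in the best case, and about 82.7% at the worst reported treated value of 28.1 NTU.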

Keywords: adsorbent, sand filter, temperature, thermal coagulation

Procedia PDF Downloads 318
20592 Miracle Fruit Application in Sour Beverages: Effect of Different Concentrations on the Temporal Sensory Profile and Overall Liking

Authors: Jéssica F. Rodrigues, Amanda C. Andrade, Sabrina C. Bastos, Sandra B. Coelho, Ana Carla M. Pinheiro

Abstract:

Currently, there is great demand for natural sweeteners due to the harmful health effects of high consumption of sugar and artificial sweeteners. Miracle fruit, known for its unique ability to turn a sour taste into a sweet one, has been shown to be a good alternative sweetener. However, it has a high production cost, so it is important to find the lowest effective content. Thus, the aim of this study was to assess the effect of different miracle fruit contents on the temporal sensory profile (time-intensity, TI, and temporal dominance of sensations, TDS) and overall liking of lemonade, to determine the best content to use as a natural sweetener in sour beverages. TI and TDS results showed that concentrations of 150 mg, 300 mg and 600 mg of miracle fruit were effective in reducing the perceived acidity and promoting the perception of sweetness in lemonade; furthermore, the 300 mg and 600 mg concentrations produced similar profiles. In the acceptance test, the 300 mg miracle fruit concentration was shown to be an efficient substitute for sucrose and sucralose in lemonade, with similar hedonic values between 'I liked it slightly' and 'I liked it moderately'. Therefore, 300 mg of miracle fruit is an adequate content for use as a natural sweetener in lemonade. The results of this work will help the food industry apply a new natural sweetener, miracle fruit extract, to sour beverages efficiently, reducing costs and providing a product that meets consumer desires.

Keywords: acceptance, natural sweetener, temporal dominance of sensations, time-intensity

Procedia PDF Downloads 243
20591 Cognitive Behaviour Drama: Playful Method to Address Fears in Children on the Higher-End of the Autism Spectrum

Authors: H. Karnezi, K. Tierney

Abstract:

Childhood fears that persist over time and interfere with children's normal functioning may have detrimental effects on their social and emotional development. Cognitive behavior therapy (CBT) is considered highly effective in treating fears and anxieties. However, given that many childhood fears are based on fantasy, the applicability of CBT may be hindered by cognitive immaturity; a lack of motivation to engage in therapy is another commonly encountered obstacle. The purpose of this study was to introduce and evaluate a more developmentally appropriate intervention model, specifically designed to give phobic children the motivation to overcome their fears. To this end, principles and techniques from cognitive and behavior therapies are incorporated into the 'drama in education' model. The cognitive behaviour drama (CBD) method uses the phobic children's creativity to involve them in the therapeutic process: the children are invited to engage in exciting fictional scenarios tailored around their strengths and special interests, and once their commitment to the drama is established, a problem they will feel motivated to solve is introduced. To resolve it, the children have to overcome a number of obstacles, culminating in an in vivo confrontation with the feared stimulus. The study examined the application of the CBD model in three single cases. Results in all three cases showed complete elimination of all fear-related symptoms. These preliminary results justify further evaluation of the cognitive behaviour drama model, which is time- and cost-effective and ensures the clients' immediate engagement in the therapeutic process.

Keywords: phobias, autism, intervention, drama

Procedia PDF Downloads 121
20590 Analyzing Time Lag in Seismic Waves and Its Effects on Isolated Structures

Authors: Faizan Ahmad, Jenna Wong

Abstract:

Time lag between the peak values of horizontal and vertical seismic waves is a well-known phenomenon. Horizontal and vertical seismic waves, secondary and primary waves respectively, travel through different layers of soil, and the travel time depends on the medium of wave transmission. Many standardized codes do not require the actual vertical acceleration to be part of the seismic analysis procedure; instead, a factored load addition for a particular site is used to capture strength demands under vertical excitation. This study reviews the effects of vertical accelerations on the behavior of a linearly rubber-isolated structure under different time lag situations and frequency contents, by applying historical and simulated ground motions in SAP2000. The response of the structure is reviewed under multiple sets of ground motions, and trends based on time lag and frequency variations are drawn. The accuracy of these results is discussed and evaluated to provide reasoning for the use of real vertical excitations in seismic analysis procedures, especially for isolated structures.
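The time lag between two records is commonly estimated as the shift that maximizes their cross-correlation. The sketch below is a generic illustration of that estimator (not the authors' procedure), applied to two sampled accelerograms:

```python
def time_lag(horizontal, vertical, dt, max_lag):
    """Estimate the lag (in seconds) of the horizontal record relative to
    the vertical one as the sample shift maximizing their cross-correlation.
    A positive result means the horizontal peak arrives after the vertical one.
    dt is the sampling interval; max_lag bounds the search in samples."""
    n = len(vertical)
    best_lag, best_corr = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        corr = sum(vertical[t] * horizontal[t + lag]
                   for t in range(n)
                   if 0 <= t + lag < len(horizontal))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return best_lag * dt
```

For instance, if a vertical pulse at t = 0.02 s is followed by the same pulse in the horizontal record at t = 0.05 s (dt = 0.01 s), the estimator returns 0.03 s.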

Keywords: seismic analysis, vertical accelerations, time lag, isolated structures

Procedia PDF Downloads 329
20589 Study on Planning of Smart Grid Using Landscape Ecology

Authors: Sunglim Lee, Susumu Fujii, Koji Okamura

Abstract:

Smart grid is a new approach to the electric power grid that uses information and communications technology to control it, providing real-time control of the direction and timing of power flow. Control devices are installed on the power lines of the electric power grid to implement the smart grid, and the number of these devices should be determined in relation to the area one control device covers and the associated cost. One approach to determining the number of control devices is to use data on the surplus power generated by home solar generators. In current implementations, the surplus power is sent all the way back to the power plant, which may cause power loss. To reduce this loss, the surplus power may instead be sent to a control device and distributed from there to where it is needed. Under the assumption that the control devices are installed on a lattice of equal-size squares, our goal is to find the optimal spacing between control devices, where the power sharing area (the area covered by one control device) is kept small to avoid power loss, yet big enough that no surplus power is wasted. To achieve this goal, a simulation using the landscape ecology method is conducted on a sample area. First, an aerial photograph of the land of interest is turned into a mosaic map where each area is colored according to the ratio of power production to power consumption in that area. Power consumption is estimated according to the characteristics of the buildings in the area; power production is calculated from the total roof area visible in the aerial photograph, assuming solar panels are installed on all roofs. The mosaic map uses three colors, representing producer, consumer, and neither.
We started with a mosaic map with a 100 m grid size and grew the grid size until no producer (surplus) grid square remained. One control device is installed in each grid square, so that the square is the area the control device covers. As the result of this simulation, we obtained 350 m as the optimal spacing between control devices that makes effective use of the surplus power for the sample area.
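The grid-growing procedure above can be sketched as a coarsening loop: aggregate production and consumption over square cells and enlarge the cell size until no cell has surplus production that would be wasted. This is our interpretation of the described simulation, with all names and the balance criterion as assumptions:

```python
def smallest_balanced_grid(production, consumption, start=1):
    """Grow the square cell size until no cell has aggregated production
    exceeding aggregated consumption (i.e. no surplus 'producer' cell).
    `production` and `consumption` are n x n grids (lists of rows) of
    power amounts at the finest resolution."""
    n = len(production)
    for size in range(start, n + 1):
        if n % size:
            continue  # consider only cell sizes that tile the map exactly
        balanced = True
        for i in range(0, n, size):
            for j in range(0, n, size):
                prod = sum(production[a][b]
                           for a in range(i, i + size)
                           for b in range(j, j + size))
                cons = sum(consumption[a][b]
                           for a in range(i, i + size)
                           for b in range(j, j + size))
                if prod > cons:
                    balanced = False
        if balanced:
            return size
    return n
```

On a 2 x 2 toy map where one fine cell overproduces, the loop returns cell size 2: only after aggregation does consumption absorb the surplus.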

Keywords: landscape ecology, IT, smart grid, aerial photograph, simulation

Procedia PDF Downloads 437
20588 Computational Simulations on Stability of Model Predictive Control for Linear Discrete-Time Stochastic Systems

Authors: Tomoaki Hashimoto

Abstract:

Model predictive control is a kind of optimal feedback control in which control performance over a finite future is optimized with a performance index that has a moving initial time and a moving terminal time. This paper examines the stability of model predictive control for linear discrete-time systems with additive stochastic disturbances. A sufficient condition for the stability of the closed-loop system with model predictive control is derived by means of a linear matrix inequality. The objective of this paper is to show the results of computational simulations in order to verify the validity of the obtained stability condition.
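To make the setting concrete, a minimal receding-horizon sketch for a linear discrete-time system with an additive stochastic disturbance is given below; the matrices, horizon, and noise level are assumptions, and the unconstrained Riccati-based controller is a simplification of full model predictive control:

```python
import numpy as np

# System: x[k+1] = A x[k] + B u[k] + w[k], with Gaussian disturbance w.
# A, B, Q, R, and the horizon N are illustrative assumptions.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.5], [1.0]])
Q = np.eye(2)          # state weight in the moving-horizon cost
R = np.array([[1.0]])  # input weight
N = 10                 # prediction horizon

def mpc_gain(A, B, Q, R, N):
    """First-step feedback gain from a backward Riccati recursion."""
    P = Q
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K  # receding-horizon law: u[k] = -K x[k]

rng = np.random.default_rng(1)
K = mpc_gain(A, B, Q, R, N)
x = np.array([[5.0], [0.0]])
for _ in range(200):
    w = 0.01 * rng.standard_normal((2, 1))   # additive disturbance
    x = A @ x - B @ (K @ x) + w

# With a stabilizing horizon the closed-loop state stays bounded
# (in a mean-square sense) despite the persistent disturbance.
print(float(np.linalg.norm(x)))
```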

Keywords: computational simulations, optimal control, predictive control, stochastic systems, discrete-time systems

Procedia PDF Downloads 425
20587 Optimal Harmonic Filters Design of Taiwan High Speed Rail Traction System

Authors: Ying-Pin Chang

Abstract:

This paper presents a method combining particle swarm optimization with nonlinear time-varying evolution and orthogonal arrays (PSO-NTVEOA) in the planning of harmonic filters for a high-speed railway traction system with specially connected transformers in unbalanced three-phase power systems. The objective is to simultaneously minimize the filter cost, the filter loss, and the total harmonic distortion of currents and voltages at each bus. An orthogonal array is first used to obtain the initial solution set, which is then treated as the initial training sample. Next, the PSO-NTVEOA method parameters are determined by matrix experiments with an orthogonal array, in which a minimal number of experiments approximates the effect of full factorial experiments. The PSO-NTVEOA method is then applied to design optimal harmonic filters in the Taiwan High Speed Rail (THSR) traction system, where both rectifiers and inverters with IGBTs are used. From the results of the illustrative examples, the feasibility of the PSO-NTVEOA for designing an optimal passive harmonic filter for the THSR system is verified, and the design approach greatly reduces the harmonic distortion. Three design schemes are compared: the V-V connection suppresses the 3rd-order harmonic, while the Scott and Le Blanc connections achieve better harmonic improvement than the V-V connection.
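A minimal particle swarm optimization sketch with a time-varying inertia weight follows; the decaying-inertia schedule is a simplified stand-in for the NTVEOA scheme, and the sphere objective, swarm size, and acceleration coefficients are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def sphere(x):
    """Toy objective standing in for the filter cost/THD objective."""
    return float(np.sum(x ** 2))

dim, n_particles, iters = 4, 20, 200
pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest = pos.copy()
pbest_val = np.array([sphere(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for t in range(iters):
    w = 0.9 - 0.5 * (t / iters) ** 2        # nonlinear time-varying inertia
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w * vel + 2.0 * r1 * (pbest - pos) + 2.0 * r2 * (gbest - pos)
    pos = pos + vel
    for i in range(n_particles):
        v = sphere(pos[i])
        if v < pbest_val[i]:
            pbest_val[i], pbest[i] = v, pos[i]
            if v < sphere(gbest):
                gbest = pos[i].copy()

print("best objective:", sphere(gbest))
```

In the paper's setting the objective would instead aggregate filter cost, loss, and total harmonic distortion, with orthogonal arrays seeding the initial swarm.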

Keywords: harmonic filters, particle swarm optimization, nonlinear time-varying evolution, orthogonal arrays, specially connected transformers

Procedia PDF Downloads 383
20586 Patterns, Triggers, and Predictors of Relapses among Children with Steroid Sensitive Idiopathic Nephrotic Syndrome at the University of Abuja Teaching Hospital, Gwagwalada, Abuja, Nigeria

Authors: Emmanuel Ademola Anigilaje, Ibraheem Ishola

Abstract:

Background: Childhood steroid-sensitive idiopathic nephrotic syndrome (SSINS) is plagued by relapses that contribute to its morbidity and the cost of treatment. Materials and Methods: This is a retrospective review of relapses among children with SSINS at the University of Abuja Teaching Hospital from January 2016 to July 2020. Triggers related to relapse incidents were noted. The chi-square test was deployed for predictors (factors at first clinical presentation associated with subsequent relapses) of relapses. Predictors with p-values of less than 0.05 were considered significant, and 95% confidence intervals (CI) and odds ratios (OR) are reported. Results: Sixty children with SSINS, comprising 52 males (86.7%), aged 23 months to 18 years with a mean age of 7.04±4.16 years, were studied. Thirty-eight (63.3%) subjects had 126 relapses, including infrequent relapses in 30 (78.9%) and frequent relapses in 8 (21.1%). The commonest triggers were acute upper respiratory tract infections in 68 (53.9%) and urinary tract infections (UTIs) in 25 (19.8%) relapses. In 4 (3.2%) relapses, no trigger was identified. The time to first relapse ranged from 14 to 365 days, with a median of 60 days. The significant predictors were hypertension (OR=3.4, 95% CI; 1.04-11.09, p=0.038), UTIs (OR=9.9, 95% CI; 1.16-80.71, p=0.014), malaria fever (OR=8.0, 95% CI; 2.45-26.38, p˂0.001), micro-haematuria (OR=4.9, 95% CI; 11.58-15.16, p=0.004), elevated serum creatinine (OR=12.3, 95% CI; 1.48-101.20, p=0.005) and hypercholesterolaemia (OR=4.1, 95% CI; 1.35-12.63, p=0.011). Conclusion: While the pathogenesis of relapses remains unknown, it is prudent to consider relapse-specific preventive strategies against the triggers and predictors of relapses in our setting.
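The odds-ratio calculation behind the reported predictors can be illustrated as follows; the 2x2 counts below are hypothetical, not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and approximate 95% CI for a 2x2 table:
         a = relapse & exposed,    b = relapse & unexposed,
         c = no relapse & exposed, d = no relapse & unexposed."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts for one predictor:
or_, lo, hi = odds_ratio_ci(12, 26, 4, 18)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
```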

Keywords: patterns, triggers, predictors, steroid-sensitive idiopathic nephrotic syndrome, relapses, Nigeria

Procedia PDF Downloads 149
20585 Some Pertinent Issues and Considerations on CBSE

Authors: Anil Kumar Tripathi, Ratneshwer Gupta

Abstract:

Software engineering research and best industry practices aim at providing software products with a high degree of quality and functionality at low cost and in less time. These requirements are addressed by Component Based Software Engineering (CBSE) as well. CBSE, which deals with software construction by components' assembly, is a revolutionary extension of software engineering. CBSE must define and describe processes to assure the timely completion of high-quality software systems that are composed of a variety of pre-built software components. Though these features provide distinct and visible benefits in software design and programming, they also raise some challenging problems. The aim of this work is to summarize the pertinent issues and considerations in CBSE in the form of concepts and observations that may lead to the development of newer ways of dealing with the problems and challenges in CBSE.

Keywords: software component, component based software engineering, software process, testing, maintenance

Procedia PDF Downloads 393
20584 Measuring the Effect of Ventilation on Cooking in Indoor Air Quality by Low-Cost Air Sensors

Authors: Andres Gonzalez, Adam Boies, Jacob Swanson, David Kittelson

Abstract:

Concern about indoor air quality (IAQ) has been increasing due to its risks to human health. Smoking, sweeping, and stove and stovetop use are the activities that contribute most to indoor air pollution. Outdoor air pollution also affects IAQ. The most important factors affecting IAQ during cooking activities are the materials, fuels, foods, and ventilation. Low-cost, mobile air quality monitoring (LCMAQM) sensors are an accessible technology for assessing IAQ because of their lower cost compared to conventional instruments. The IAQ was assessed, using LCMAQM sensors, during cooking activities in University of Minnesota graduate housing while evaluating different ventilation systems. The gases measured are carbon monoxide (CO) and carbon dioxide (CO2). The particles measured are particulate matter smaller than 2.5 µm (PM2.5) and lung-deposited surface area (LDSA). The measurements were conducted during April 2019 in the Como Student Community Cooperative (CSCC), a graduate housing complex at the University of Minnesota. The measurements were conducted using an electric stove for cooking. The amount and type of food and oil used for cooking are the same for each measurement. There are six measurements: two experiments measure air quality without any ventilation, two use an extractor as mechanical ventilation, and two use the extractor and open windows as combined mechanical and natural ventilation. The results of the experiments show that natural ventilation is the most efficient system for controlling particles and CO2. Natural ventilation reduces the concentration by 79% for LDSA and 55% for PM2.5, compared to no ventilation. In the same way, the CO2 concentration is reduced by 35%. A well-mixed vessel model was implemented to assess particle formation and decay rates. Removal rates by the extractor were significantly higher for LDSA, which is dominated by smaller particles, than for PM2.5, but in both cases much lower compared to natural ventilation.
There was significant day-to-day variation in particle concentrations under nominally identical conditions. This may be related to the fat content of the food. Further research is needed to assess the impact of the fat in food on particle generation.
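The well-mixed vessel model mentioned above can be sketched as a single-zone mass balance, dC/dt = S/V - L·C, where S is the source strength, V the room volume, and L the total loss rate (ventilation plus deposition). All numbers below are illustrative assumptions, not the measured values:

```python
import math

V = 30.0      # room volume, m^3 (assumed)
S = 600.0     # source strength during cooking, ug/min (assumed)
L = 0.05      # total loss rate, 1/min (assumed)

def concentration(t, c0=0.0):
    """Analytic solution of the box model at time t (minutes)."""
    c_ss = S / (V * L)                       # steady-state concentration
    return c_ss + (c0 - c_ss) * math.exp(-L * t)

c_end_cooking = concentration(20.0)          # after 20 min of cooking
# After the source stops, the concentration decays exponentially at rate L;
# fitting this decay is how removal rates for PM2.5 and LDSA are estimated.
c_after_decay = c_end_cooking * math.exp(-L * 30.0)
print(round(c_end_cooking, 1), round(c_after_decay, 1))
```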

Keywords: cooking, indoor air quality, low-cost sensor, ventilation

Procedia PDF Downloads 107
20583 Life Course Events, Residential and Job Relocation and Commute Time in Australian Cities

Authors: Solmaz Jahed Shiran, Elizabeth Taylor, John Hearne

Abstract:

Over the past decade, a growing body of research known as the mobility biography approach has emerged, focusing on changes in travel behaviour over the life course of individuals. Mobility biographies suggest that changes in travel behaviour are related to key events in the life course such as residential relocation, workplace changes, marriage, and the birth of children. Taking this approach as the theoretical background, this study uses data from the Household, Income and Labour Dynamics in Australia (HILDA) Survey to model a set of life course events and their interaction with commute time. By analysing longitudinal data, it is possible to relate key events during the life course to changes in a person's travel behaviour. Changes in journey-to-work travel time are used as an indication of travel behaviour change in this study. The results of a linear regression model for change in commute time show a significant influence of socio-demographic factors such as income and age, the previous home-to-work commute time, and the remoteness of the residence. Residential relocation and job change have significant influences on commute time. Other life events such as the birth of a child, marriage, and divorce or separation also have a strong impact on commute time change. Overall, the research confirms previous studies linking life course events and travel behaviour.
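The kind of regression described above, change in commute time on life-course events plus controls, can be sketched with ordinary least squares; all data here are synthetic, and the variable set is a simplified stand-in for HILDA items:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
income = rng.normal(0, 1, n)          # standardized income (synthetic)
age = rng.normal(0, 1, n)             # standardized age (synthetic)
moved = rng.integers(0, 2, n)         # residential relocation dummy (0/1)
job_change = rng.integers(0, 2, n)    # workplace change dummy (0/1)

# Assumed "true" effects, used only to generate the synthetic outcome
# (change in commute minutes).
dt_commute = (5.0 * moved + 8.0 * job_change - 1.0 * income
              + rng.normal(0, 2, n))

X = np.column_stack([np.ones(n), income, age, moved, job_change])
beta, *_ = np.linalg.lstsq(X, dt_commute, rcond=None)
print("job-change coefficient ~", round(beta[4], 1))
```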

Keywords: life course events, residential mobility, travel behaviour, commute time, job change

Procedia PDF Downloads 197
20582 Physics Informed Deep Residual Networks Based Type-A Aortic Dissection Prediction

Authors: Joy Cao, Min Zhou

Abstract:

Purpose: Acute Type A aortic dissection is a well-known cause of extremely high mortality. A highly accurate and cost-effective non-invasive predictor is critically needed so that patients can be treated at an earlier stage. Although various CFD approaches have been tried to establish prediction frameworks, they are sensitive to uncertainty in both image segmentation and boundary conditions. Tedious pre-processing and demanding calibration requirements further compound the issue, hampering their clinical applicability. Using the latest physics-informed deep learning methods to establish an accurate and cost-effective predictor framework is among the main goals for better Type A aortic dissection treatment. Methods: By training a novel physics-informed deep residual network with non-invasive 4D MRI displacement vectors as inputs, the trained model can cost-effectively calculate these biomarkers: aortic blood pressure, wall shear stress (WSS), and the oscillatory shear index (OSI), which are used to predict potential Type A aortic dissection and avoid high-mortality events down the road. Results: The proposed deep learning method has been successfully trained and tested with both a synthetic 3D aneurysm dataset and a clinical dataset in the aortic dissection context using the Google Colab environment. In both cases, the model has generated aortic blood pressure, WSS, and OSI results matching the expected patient health status. Conclusion: The proposed novel physics-informed deep residual network shows great potential for creating a cost-effective, non-invasive predictor framework. An additional physics-based de-noising algorithm will be added to make the model more robust to clinical data noise. Further studies will be conducted in collaboration with large institutions such as the Cleveland Clinic, with more clinical samples, to further improve the model's clinical applicability.
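A deep residual network is built from skip-connection blocks, y = x + F(x); a minimal forward-pass sketch is given below, where the layer sizes, weights, and the physics-residual comment are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(4)

def residual_block(x, W1, W2):
    """y = x + F(x) with a two-layer tanh transform F; the skip
    connection is what makes very deep stacks trainable."""
    h = np.tanh(x @ W1)
    return x + np.tanh(h @ W2)

d = 16                               # feature width (assumed)
x = rng.normal(0, 1, (8, d))         # batch of 8 input feature vectors
W1, W2 = rng.normal(0, 0.1, (2, d, d))

y = x
for _ in range(20):                  # a deep stack of residual blocks
    y = residual_block(y, W1, W2)

# In a physics-informed setting, the training loss would combine a data
# term with the residual of the governing flow equations, schematically:
#   loss = mse(pred, data) + lam * mean(flow_equation_residual**2)
print(y.shape)
```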

Keywords: type-a aortic dissection, deep residual networks, blood flow modeling, data-driven modeling, non-invasive diagnostics, deep learning, artificial intelligence

Procedia PDF Downloads 82
20581 Role of Desire in Risk-Perception: A Case Study of Syrian Refugees’ Migration towards Europe

Authors: Lejla Sunagic

Abstract:

The aim of this manuscript is to further the understanding of risky decision-making in the context of forced and irregular migration. The empirical evidence was collected through interviews with Syrian refugees who arrived in Europe via irregular pathways. Analytically, the topic is approached through the juxtaposition of risk perception and the notion of desire. While different frameworks have been developed to address differences in risk perception, the common thread is the understanding that individual risk-taking is explained in terms of benefits outweighing risks. However, this framework cannot explain the large risks individuals take because of an underprivileged position and a lack of positive alternatives, termed risk-taking from vulnerability. The accounts of this study's interviewees, who crossed the sea in rubber boats to arrive in Europe, fit this postulate empirically: they report that the risk they took was not a choice but the only coping strategy. However, the vulnerability argument falls short of explaining why the interviewees, reflecting retrospectively, find the risky journey they took to be worth it, while they would strongly advise others to refrain from taking such a huge risk. This inconsistency is addressed by adding the notion of the desire to migrate to the elements of risk perception. Desire, as a subjective experience, is what made the risk appear smaller in the cost-benefit analysis at the time of decision-making of those who realized migration. However, when reflecting on others in the context of potential migration via the same pathway, the interviewees pointed to those others' lack of capacity to avoid the obstacles they themselves were able to circumvent, while omitting to reflect on others' desire to migrate.
Thus, in the risk-benefit analysis performed for others, the risk remains unblurred and outweighs the benefits, given the inability to take the desire of others into account. If desire, as the transformative potential of migration, were taken out of the cost-benefit analysis of irregular migration, refugees might not have taken the risky journey. By casting the theoretical argument in the language of configuration, the study fills a gap in knowledge on the combination of migration drivers and the ways they interact to produce migration outcomes.

Keywords: refugees, risk perception, desire, irregular migration

Procedia PDF Downloads 92
20580 Highly Active, Non-Platinum Metal Catalyst Material as Bi-Functional Air Cathode in Zinc Air Battery

Authors: Thirupathi Thippani, Kothandaraman Ramanujam

Abstract:

Much current research on energy storage has focused on metal-air batteries, an attractive alternative energy source for the future. Metal-air batteries have the potential to significantly increase power density and decrease the cost of energy storage, and they offer long service life owing to their high energy density, low pollution, and light weight. The performance of these batteries is mostly restricted by the slow kinetics of the oxygen reduction reaction (ORR) and the oxygen evolution reaction (OER) at the cathode during battery discharge and charge. The ORR and OER are conventionally carried out with precious metals (such as Pt) and metal oxides (such as RuO₂ and IrO₂) as separate catalysts. However, these metal-based catalysts regularly suffer from difficulties including high cost, low selectivity, poor stability, and unfavourable environmental effects. Developing an active, stable, corrosion-resistant, and inexpensive bi-functional catalyst material is therefore mandatory for the commercialization of rechargeable zinc-air battery technology. We synthesized non-precious-metal (NPM) catalysts comprising cobalt and N-doped multiwalled carbon nanotubes (N-MWCNTs-Co) by the solid-state pyrolysis (SSP) of melamine with Co₃O₄. N-MWCNTs-Co acts as an excellent electrocatalyst for both the ORR and the OER, and hence can be used in secondary metal-air batteries and in unitized regenerative fuel cells. It is important to study the OER and ORR at high concentrations of KOH, as most metal-air batteries employ KOH concentrations above 4 M. In the first 16 cycles of the zinc-air battery, using N-MWCNTs-Co, 20 wt.% Pt/C, or 20 wt.% IrO₂/C as air electrodes,
in the ORR regime (the discharge profile of the zinc-air battery), the cell voltage exhibited by N-MWCNTs-Co was 44 and 83 mV higher (based on the 5th cycle) than that of 20 wt.% Pt/C and 20 wt.% IrO₂/C, respectively. To demonstrate this promise, a zinc-air battery was assembled and tested at a current density of 0.5 A g⁻¹ for 100 charge-discharge cycles.

Keywords: oxygen reduction reaction (ORR), oxygen evolution reaction (OER), non-platinum, zinc air battery

Procedia PDF Downloads 225
20579 Green Technology for the Treatment of Industrial Effluent Contaminated with Dyes

Authors: Afzaal Gulzar, Shafaq Mubarak, M. Zia-Ur-Rehman

Abstract:

Industrial wastewaters place environmental constraints on the water quality of aqueous reserves. A number of techniques have been used to treat them before disposal to water bodies. In this work, a novel green approach is studied using poultry-waste eggshells as a low-cost, efficient adsorbent for the dyes present in the industrial effluent of the textile and paper industries. The developed technique not only treats contaminated waters but also utilizes poultry eggshell waste, which in turn assists in solid waste management. Batch sorption studies of contact time, adsorbent dose, dye concentration, temperature, and pH have been conducted to find the optimum adsorption parameters.
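Batch-sorption results of this kind are usually reduced to an uptake value, q_e = (C0 - Ce)·V/m; a hedged sketch, where the concentrations, batch volume, and adsorbent mass are invented for illustration:

```python
def uptake(c0, ce, volume_l, mass_g):
    """Equilibrium adsorption capacity q_e in mg dye per g adsorbent,
    from initial (c0) and equilibrium (ce) concentrations in mg/L."""
    return (c0 - ce) * volume_l / mass_g

# Hypothetical batch: 100 mg/L dye solution, 0.1 L, 0.5 g eggshell powder,
# residual concentration 22 mg/L after the chosen contact time.
q_e = uptake(100.0, 22.0, 0.1, 0.5)
removal_pct = 100.0 * (100.0 - 22.0) / 100.0
print(round(q_e, 2), removal_pct)   # 15.6 mg/g uptake, 78.0 % removal
```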

Keywords: green technology, solid waste management, industrial effluent, eggshell waste utilization, waste water treatment

Procedia PDF Downloads 457
20578 Reactivity Study on South African Calcium Based Material Using a pH-Stat and Citric Acid: A Statistical Approach

Authors: Hilary Rutto, Mbali Chiliza, Tumisang Seodigeng

Abstract:

The study of the reactivity of calcined calcium-based material is very important in the dry flue gas desulphurisation (FGD) process, so as to produce an absorbent with high sulphur dioxide capture capacity during the hydration process. The effects of calcining temperature and time on the reactivity of calcined limestone material were investigated. In this study, the reactivity was measured using a pH-stat apparatus, and the result was confirmed by performing a citric acid reactivity test. The reactivity was calculated using the shrinking core model. Based on the experiments, a mathematical model is developed to correlate the effects of time and temperature with the reactivity of the absorbent. The calcination process variables were temperature (700-1000°C) and time (1-6 h). It was found that reactivity increases with an increase in time and temperature.
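The surface-reaction-controlled form of the shrinking core model, 1 - (1 - X)^(1/3) = k·t, can be fitted to conversion-time data as sketched below; the data points are synthetic, not the paper's pH-stat measurements:

```python
import numpy as np

t = np.array([1.0, 2.0, 3.0, 4.0, 5.0])          # time, min (assumed)
k_true = 0.05                                     # assumed rate constant
X = 1.0 - (1.0 - k_true * t) ** 3                 # synthetic conversion data

# Linearize: g(X) = 1 - (1 - X)^(1/3) should be proportional to t,
# so the rate constant is the least-squares slope through the origin.
g = 1.0 - (1.0 - X) ** (1.0 / 3.0)
k_est = float(np.sum(g * t) / np.sum(t * t))

print(round(k_est, 3))   # recovers the assumed rate constant, 0.05
```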

Keywords: reactivity, citric acid, calcination, time

Procedia PDF Downloads 211
20577 Large Time Asymptotic Behavior to Solutions of a Forced Burgers Equation

Authors: Satyanarayana Engu, Ahmed Mohd, V. Murugan

Abstract:

We study the large-time asymptotics of solutions to the Cauchy problem for a forced Burgers equation (FBE) with initial data that is continuous and summable on R. We first derive explicit solutions of the FBE, assuming a particular class of initial data, in terms of Hermite polynomials. Then, relaxing this assumption, we prove the existence of a solution to the considered Cauchy problem. Finally, we give an asymptotic approximate solution and establish that the error is of order O(t^(-1/2)) with respect to the L^p norm, where 1 ≤ p ≤ ∞, for large time.
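As background for the Cole-Hopf keyword: for the unforced viscous Burgers equation, the Cole-Hopf transformation linearizes the problem into the heat equation; a standard sketch, with viscosity ν:

```latex
\begin{align*}
  u_t + u\,u_x &= \nu\, u_{xx}, \\
  u(x,t) &= -2\nu\,\frac{\varphi_x(x,t)}{\varphi(x,t)}
  \quad\Longrightarrow\quad
  \varphi_t = \nu\,\varphi_{xx},
\end{align*}
```

so explicit solution formulas (including expansions in terms of Hermite polynomials) can be read off from solutions of the heat equation for φ.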

Keywords: Burgers equation, Cole-Hopf transformation, Hermite polynomials, large time asymptotics

Procedia PDF Downloads 325