Search results for: stochastic dynamic programming
3430 Competition and Cooperation of Prosumers in Cournot Games with Uncertainty
Authors: Yong-Heng Shi, Peng Hao, Bai-Chen Xie
Abstract:
Solar prosumers are playing increasingly prominent roles in the power system. However, their uncertainty affects the outcomes and functioning of the power market, especially in an asymmetric information environment. Therefore, an important issue is how to take effective measures to reduce the impact of uncertainty on market equilibrium. We propose a two-level stochastic differential game model to explore the Cournot decision problem of prosumers. In particular, we study the impact of punishment and cooperation mechanisms on the efficiency of the Cournot game in which prosumers face uncertainty. The results show that under penalty mechanisms with fixed and variable rates, producers and consumers tend to take conservative actions to hedge risks, and the variable-rate mechanism is more reasonable. Compared with non-cooperative situations, prosumers can improve the efficiency of the game through cooperation, which we attribute to the superposition of market power and uncertainty reduction. In addition, a market environment of asymmetric information intensifies the role of uncertainty: it reduces social welfare but increases the income of prosumers. For regulators, promoting alliances is an effective measure to realize the integration, optimization, and stable grid connection of producers and consumers.
Keywords: Cournot games, power market, uncertainty, prosumer cooperation
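The paper's two-level stochastic differential game is far richer than can be shown briefly, but the Cournot quantity competition underlying it can be sketched. The sketch below is a minimal deterministic duopoly with an assumed linear inverse demand P = a − b(q₁ + q₂) and constant marginal cost c; all parameter values are invented for illustration.

```python
# Minimal deterministic Cournot duopoly sketch (illustrative only; the paper's
# model is a two-level *stochastic differential* game with penalty mechanisms).
# Assumed inverse demand P = a - b*(q1 + q2), constant marginal cost c.

def best_response(q_other, a=100.0, b=1.0, c=10.0):
    """Profit-maximizing quantity given the rival's quantity."""
    return max(0.0, (a - c - b * q_other) / (2 * b))

def cournot_equilibrium(a=100.0, b=1.0, c=10.0, iters=100):
    """Fixed-point iteration on best responses converges to the Nash equilibrium."""
    q1 = q2 = 0.0
    for _ in range(iters):
        q1, q2 = best_response(q2, a, b, c), best_response(q1, a, b, c)
    return q1, q2

q1, q2 = cournot_equilibrium()
price = 100.0 - (q1 + q2)
print(q1, q2, price)  # symmetric equilibrium q_i = (a - c) / (3b) = 30, price 40
```

Because each best response is a contraction (slope −1/2), the iteration converges geometrically to the symmetric Nash equilibrium.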
Procedia PDF Downloads 107
3429 Dynamic Process of Single Water Droplet Impacting on a Hot Heptane Surface
Authors: Mingjun Xu, Shouxiang Lu
Abstract:
Understanding the interaction mechanism between water droplets and pool fires is of great significance for the engineering application of water sprinkler/spray/mist fire suppression. At present, the micro-scale impact process when a droplet strikes a burning liquid surface is unclear. To deepen the understanding of the mechanisms of pool fire suppression with water spray/mist, the dynamic processes of a single water droplet impinging onto a hot heptane surface are visualized with the aid of a high-speed digital camera at 2000 fps. Each test is repeated 20 times. The water droplet diameter is around 1.98 mm, and the impact Weber number ranges from 30 to 695. The heptane is heated by a hot plate to mimic the burning condition, and its temperature varies from 30 to 90°C. The results show that three typical phenomena, namely penetration, crater-jet, and surface bubble, are observed, and the pool temperature has a significant influence on the critical condition for the appearance of each phenomenon. A global picture of the different phenomena is built according to impact Weber number and pool temperature. In addition, the pool temperature and Weber number have important influences on the characteristic parameters, including maximum crater depth, crown height, and liquid column height. For a fixed Weber number, the liquid column height increases with pool temperature.
Keywords: droplet impact, fire suppression, hot surface, water spray
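The impact Weber number that organizes the phenomenon map above is a standard dimensionless group, We = ρv²D/σ. A minimal sketch of its computation follows; the fluid properties are generic values for water near room temperature, and the release-height helper assumes drag-free free fall, neither taken from the paper.

```python
# Impact Weber number for a water droplet, We = rho * v**2 * D / sigma.
# rho and sigma are textbook values for water at ~20 C, not the paper's data;
# the paper spans We = 30..695 for a 1.98 mm droplet.

def weber_number(v, d=1.98e-3, rho=998.0, sigma=0.0728):
    """v: impact velocity [m/s], d: droplet diameter [m]."""
    return rho * v * v * d / sigma

def impact_velocity(height):
    """Free-fall estimate (air drag neglected) from release height [m]."""
    g = 9.81
    return (2 * g * height) ** 0.5

v = impact_velocity(0.20)          # ~1.98 m/s from a 20 cm release height
print(round(weber_number(v), 1))   # We = rho*(2*g*h)*d/sigma, about 106.5
```

Varying the release height (and hence We) alongside the pool temperature is how a regime map like the paper's penetration / crater-jet / surface-bubble picture would be parameterized.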
Procedia PDF Downloads 243
3428 The Role of Movement Quality after Osgood-Schlatter Disease in an Amateur Football Player: A Case Study
Authors: D. Pogliana, A. Maso, N. Milani, D. Panzin, S. Rivaroli, J. Konin
Abstract:
This case aims to identify the role of movement quality during the final stage of return to sport (RTS) in a 13-year-old male amateur football player after the acute phase of bilateral Osgood-Schlatter disease (OSD). One year after passing the acute phase of OSD with abstention from physical activity, the patient reported bilateral anterior knee pain at the beginning of football activity. Interventions: After an orthopedic examination, which recommended physiotherapy sessions for the correction of motor patterns and isometric strengthening of the quadriceps muscles, the rehabilitation intervention was carried out over 7 weeks through 14 sessions of neuro-motor training (NMT), at a frequency of two sessions per week, and six muscle-strengthening sessions, at a frequency of one session per week. The NMT sessions consisted of free-body exercises (or exercises with overloads) with visual bio-feedback, supported by two cameras (one with an anterior view and one with a lateral view of the subject) and a large touch screen. The aim of these NMT sessions was to modify the dysfunctional motor patterns identified by a 2D motion analysis test. The test was carried out at the beginning and at the end of the rehabilitation course and included five movements: single-leg squat (SLS), drop jump (DJ), single-leg hop (SLH), lateral shuffle (LS), and change of direction (COD). Each of these movements was evaluated through video analysis of dynamic knee valgus, pelvic tilt, trunk control, shock absorption, and motor strategy. A free image analysis software (Kinovea) was then used to calculate scores.
Results: The baseline assessment showed a total score of 59% on the right limb and 64% on the left limb (an optimal score being above 85%), with large deficits in shock absorption, the presence of dynamic knee valgus, and dysfunctional motor strategies defined as “quadriceps dominant.” After six weeks of training, the subject achieved a total score of 80% on the right limb and 86% on the left limb, with significant improvements in shock absorption, reduced dynamic knee valgus, and more hip-oriented motor strategies on both lower limbs. The improvements in dynamic knee valgus, hip-oriented motor strategies, and shock absorption achieved through the six-week NMT program can help a teenage amateur football player manage anterior knee pain during sports activity. In conclusion, NMT was a good choice to help a 13-year-old male amateur football player return to performance without pain after OSD, and it can also be used with similar athletes in other team sports.
Keywords: movement analysis, neuro-motor training, knee pain, movement strategies
Procedia PDF Downloads 1333427 Dynamic Analysis of the Heat Transfer in the Magnetically Assisted Reactor
Authors: Tomasz Borowski, Dawid Sołoducha, Rafał Rakoczy, Marian Kordas
Abstract:
The application of a magnetic field is essential for a wide range of technologies and processes (e.g., magnetic hyperthermia, bioprocessing). From a practical point of view, bioprocess control is often limited to the regulation of temperature at constant values favourable to microbial growth. The main aim of this study is to determine the effect of various types of electromagnetic fields (i.e., static or alternating) on the heat transfer in a self-designed magnetically assisted reactor. The experimental set-up is equipped with a measuring instrument that controls the temperature of the liquid inside the container and supervises the real-time acquisition of all the experimental data coming from the sensors. Temperature signals are also sampled from the generator of the magnetic field. The obtained temperature profiles were mathematically described and analyzed. The parameters characterizing the response of a first-order dynamic system to a step input were obtained and discussed. For example, a higher time constant means a slower increase of the signal (in this case, temperature). After a period equal to about five time constants, the sample temperature nearly reached its asymptotic value. This dynamical analysis allowed us to understand the heating effect under the action of various types of electromagnetic fields. Moreover, the proposed mathematical description can be used to compare the influence of different types of magnetic fields on heat transfer operations.
Keywords: heat transfer, magnetically assisted reactor, dynamical analysis, transient function
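The first-order step response referred to above can be sketched directly: T(t) = T∞ + (T₀ − T∞)e^(−t/τ), which is what makes "about five time constants to reach the asymptote" a general statement. The temperatures and time constant below are invented, not the reactor's measured values.

```python
# First-order step-response sketch: T(t) = T_inf + (T0 - T_inf) * exp(-t/tau).
# Illustrates the abstract's point that after ~5 time constants the signal has
# essentially reached its asymptote. T0, T_inf, tau are made-up numbers.
import math

def step_response(t, t0=20.0, t_inf=40.0, tau=60.0):
    """Temperature [degC] at time t [s] for a first-order system."""
    return t_inf + (t0 - t_inf) * math.exp(-t / tau)

for k in (1, 3, 5):
    t = k * 60.0
    frac = (step_response(t) - 20.0) / (40.0 - 20.0)
    print(f"after {k} tau: {100 * frac:.1f}% of the step")  # 63.2%, 95.0%, 99.3%
```

Fitting τ (and the gain) to a measured profile is exactly the "transient function" identification the abstract describes: a larger fitted τ means a slower temperature rise.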
Procedia PDF Downloads 171
3426 Measuring Energy Efficiency Performance of MENA Countries
Authors: Azam Mohammadbagheri, Bahram Fathi
Abstract:
DEA has become a very popular method of performance measurement, but it still suffers from some shortcomings. One of these is the issue of multiple optimal weight solutions for efficient DMUs. The cross-efficiency evaluation, an extension of DEA, has been proposed to avoid this problem. Lam (2010) also proposed a mixed-integer linear programming formulation based on linear discriminant analysis and the super-efficiency method (MILP model) to avoid multiple optimal weight solutions. In this study, we modify the MILP model to determine more suitable weight sets and evaluate the energy efficiency of MENA countries as an application of the proposed model.
Keywords: data envelopment analysis, discriminant analysis, cross efficiency, MILP model
Procedia PDF Downloads 687
3425 Bayesian Value at Risk Forecast Using a Realized Conditional Autoregressive Expectile Model with an Application to Cryptocurrency
Authors: Niya Chen, Jennifer Chan
Abstract:
In the financial market, risk management helps to minimize potential loss and maximize profit. There are two ways to assess risks. The first is to calculate the risk directly based on volatility; the most common risk measurements are Value at Risk (VaR), the Sharpe ratio, and beta. Alternatively, we can look at the quantile of the return to assess the risk. Popular return models such as GARCH and stochastic volatility (SV) focus on modeling the mean of the return distribution by capturing the volatility dynamics; the quantile/expectile method, however, gives an idea of the distribution at extreme return values and allows us to forecast VaR directly from returns. The advantage of these non-parametric methods is that they are not bound by the distributional assumptions of parametric methods. The difference between them is that the expectile uses a second-order loss function, while quantile regression uses a first-order loss function. We consider several quantile functions, different volatility measures, and estimates from several volatility models. To estimate the expectile of the model, we use the Realized Conditional Autoregressive Expectile (CARE) model with a Bayesian method. We would like to see if our proposed models outperform existing models on cryptocurrency, and we test this mainly on Bitcoin, as well as Ethereum.
Keywords: expectile, CARE model, CARR model, quantile, cryptocurrency, Value at Risk
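The plain empirical expectile behind the CARE model can be sketched with a fixed-point iteration on the asymmetric least-squares first-order condition. This shows only the static sample expectile; the paper's model adds conditional autoregressive dynamics, realized volatility measures, and Bayesian estimation on top. The return sample below is invented.

```python
# Sample-expectile sketch via fixed-point iteration: the tau-expectile mu is
# the weighted mean with weight tau above mu and (1 - tau) below it.
# Only the static empirical expectile is shown here, not the CARE dynamics.

def expectile(xs, tau=0.95, iters=200):
    """tau-expectile of a sample; tau = 0.5 recovers the ordinary mean."""
    mu = sum(xs) / len(xs)
    for _ in range(iters):
        w = [tau if x > mu else 1.0 - tau for x in xs]
        mu = sum(wi * x for wi, x in zip(w, xs)) / sum(w)
    return mu

returns = [-0.08, -0.03, -0.01, 0.0, 0.01, 0.02, 0.04, 0.09]
print(expectile(returns, 0.5))   # equals the sample mean, 0.005
print(expectile(returns, 0.05))  # low expectile: a loss-side risk measure
```

A low-τ expectile of returns plays the role of a coherent downside-risk measure, which is what links expectiles to VaR forecasting in the abstract.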
Procedia PDF Downloads 109
3424 Landing Performance Improvement Using Genetic Algorithm for Electric Vertical Take Off and Landing Aircrafts
Authors: Willian C. De Brito, Hernan D. C. Munoz, Erlan V. C. Carvalho, Helder L. C. De Oliveira
Abstract:
In order to improve commute times for short-distance trips and relieve traffic in large cities, a new transport category has been the subject of research and new designs worldwide. The air taxi market promises to change the way people live and commute through vehicles with the ability to take off and land vertically, providing passenger transport equivalent to a car, with mobility within and between large cities. Today's civil air transport remains costly and accounts for 2% of man-made CO₂ emissions. Taking advantage of this scenario, many companies have developed their own Vertical Take Off and Landing (VTOL) designs, seeking to meet comfort, safety, low-cost, and flight-time requirements in a sustainable way. Thus, green power supplies, especially batteries, and fully electric power plants are the most common choice for these emerging aircraft. However, it is still a challenge to find a feasible way to handle batteries rather than conventional petroleum-based fuels. Batteries are heavy and have an energy density still below that of gasoline, diesel, or kerosene. Therefore, despite all the clear advantages, all-electric aircraft (AEA) still have low flight autonomy and high operational costs, since the batteries must be recharged or replaced. In this sense, this paper addresses a way to optimize the energy consumption in a typical mission of an air taxi aircraft. The approach and landing procedure was chosen as the subject of optimization by a genetic algorithm, while the final program can be adapted for take-off and flight-level changes as well. Data from a real tilt-rotor aircraft with a fully electric power plant was used to fit the derived dynamic equations of motion. Although a tilt-rotor design is used as a proof of concept, the optimization can be adapted to other design concepts, even those with independent motors for the hover and cruise flight phases.
For a given trajectory, the best set of control variables is calculated to provide the time-history response for the aircraft's attitude, rotor RPM, and thrust direction (or vertical and horizontal thrust, for independent-motor designs) that, if followed, results in the minimum electric power consumption along that landing path. Safety, comfort, and design constraints are assumed to give representativeness to the solution, and results are highly dependent on these constraints. For the tested cases, performance improvement ranged from 5 to 10% when changing initial airspeed, altitude, flight path angle, and attitude.
Keywords: air taxi travel, all electric aircraft, batteries, energy consumption, genetic algorithm, landing performance, optimization, performance improvement, tilt rotor, VTOL design
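The genetic-algorithm machinery described above (a population of candidate control histories, selection, crossover, mutation) can be sketched minimally. The paper's true objective, electric power along a tilt-rotor landing trajectory under safety and comfort constraints, is replaced here by an invented quadratic "energy" cost over a few control points; only the GA structure is meant to be representative.

```python
# Minimal real-coded genetic algorithm sketch. The toy cost stands in for the
# paper's electric power consumption along a landing path; all numbers are
# invented for illustration.
import random

random.seed(42)
N = 8  # number of control points along the descent path

def energy(ctrl):
    # toy cost: control effort plus a penalty for missing a terminal condition
    return sum(c * c for c in ctrl) + 10.0 * (sum(ctrl) - 4.0) ** 2

def evolve(pop_size=40, gens=120, sigma=0.2):
    pop = [[random.uniform(-2, 2) for _ in range(N)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=energy)
        parents = pop[: pop_size // 2]            # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N)          # one-point crossover
            child = a[:cut] + b[cut:]
            child = [c + random.gauss(0, sigma) for c in child]  # mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=energy)

best = evolve()
print(round(energy(best), 3))  # should land near the analytic optimum of ~1.975
```

Constraints such as maximum descent rate or attitude limits would enter either as penalty terms in the cost, as above, or by repairing infeasible individuals after mutation.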
Procedia PDF Downloads 115
3423 Off-Farm Work and Cost Efficiency in Staple Food Production among Small-Scale Farmers in North Central Nigeria
Authors: C. E. Ogbanje, S. A. N. D. Chidebelu, N. J. Nweze
Abstract:
The study evaluated off-farm work and cost efficiency in staple food production among small-scale farmers in North Central Nigeria. A multistage sampling technique was used to select 360 respondents (participants and non-participants in off-farm work). Primary data obtained were analysed using a stochastic cost frontier and tests of difference of means. Capital input was lower for participants (N2,596.58) than non-participants (N11,099.14). Gamma (γ) was statistically significant. Farm size significantly (p<0.01) increased cost outlay for participants and non-participants. Average input prices of enterprises one and two significantly (p<0.01) increased cost. Sex, household size, credit obtained, formal education, farming experience, and farm income significantly (p<0.05) reduced cost inefficiency for non-participants. Average cost efficiency was 11%. Farm capital was wasted. Participants' substitution of capital for labour did not put them at a disadvantage. Extension agents should encourage farmers to obtain financial relief from off-farm work, but not to the extent of endangering farm cost efficiency.
Keywords: cost efficiency, mean difference, North Central Nigeria, off-farm work, participants and non-participants, small-scale farmers
Procedia PDF Downloads 362
3422 Optimizing the Passenger Throughput at an Airport Security Checkpoint
Authors: Kun Li, Yuzheng Liu, Xiuqi Fan
Abstract:
High security standards and high screening efficiency seem to contradict each other in the airport security check process. Improving efficiency as far as possible while maintaining the same security standard is highly meaningful. This paper utilizes knowledge of operations research and stochastic processes to establish mathematical models to explore this problem. We analyze the current airport security check process and use the M/G/1 and M/G/k models from queuing theory to describe it. We find that the least efficient part is the pre-check lane, the bottleneck of the queuing system. To improve passenger throughput and reduce the variance of passengers' waiting time, we adjust our models, apply the Monte Carlo method, and put forward three modifications: adjust the ratio of pre-check lanes to regular lanes flexibly, determine the optimal number of security screening lines based on cost analysis, and adjust the distribution of arrival and service times based on Monte Carlo simulation results. We also analyze the impact of cultural differences as a sensitivity analysis. Finally, we give recommendations for the current airport security check process.
Keywords: queuing theory, security check, stochastic process, Monte Carlo simulation
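The M/G/1 description above can be sketched with a short Monte Carlo simulation based on Lindley's waiting-time recursion, checked against the Pollaczek–Khinchine mean-wait formula Wq = λE[S²]/(2(1 − ρ)). The arrival rate and uniform service distribution below are invented, not the paper's checkpoint data.

```python
# Monte Carlo sketch of an M/G/1 queue (Poisson arrivals, general service),
# checked against the Pollaczek-Khinchine formula Wq = lam*E[S^2]/(2*(1-rho)).
# All parameter values are illustrative, not the paper's data.
import random

random.seed(1)

def simulate_mg1(lam, service_sampler, n=200_000):
    """Mean wait via Lindley's recursion: W_{k+1} = max(0, W_k + S_k - A_k)."""
    w, total = 0.0, 0.0
    for _ in range(n):
        total += w
        s = service_sampler()
        a = random.expovariate(lam)        # Poisson process -> exponential gaps
        w = max(0.0, w + s - a)
    return total / n

lam = 0.8                                   # arrivals per unit time
service = lambda: random.uniform(0.5, 1.5)  # E[S] = 1, E[S^2] = 13/12
rho = lam * 1.0
wq_theory = lam * (13.0 / 12.0) / (2.0 * (1.0 - rho))
wq_sim = simulate_mg1(lam, service)
print(round(wq_sim, 2), round(wq_theory, 3))  # theory: 13/6, about 2.167
```

Swapping the service sampler, or running k parallel copies, is how experiments like the paper's lane-ratio and lane-count modifications would be evaluated by simulation.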
Procedia PDF Downloads 200
3421 From Two-Way to Multi-Way: A Comparative Study for Map-Reduce Join Algorithms
Authors: Marwa Hussien Mohamed, Mohamed Helmy Khafagy
Abstract:
Map-Reduce is a programming model widely used to extract valuable information from enormous volumes of data, and it is designed to support heterogeneous datasets. Apache Hadoop's map-reduce is used extensively to uncover hidden patterns in applications such as data mining and SQL processing. The most important operation for data analysis is the join, but the map-reduce framework does not directly support join algorithms. This paper explains and compares two-way and multi-way map-reduce join algorithms; we also implement the MR join algorithms and show the performance of each phase. Our experimental results show that, among the two-way join algorithms, map-side join and map-merge join take the longest time due to the preprocessing step of sorting the data, while reduce-side cascade join takes the longest time among the multi-way join algorithms.
Keywords: Hadoop, MapReduce, multi-way join, two-way join, Ubuntu
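The reduce-side (repartition) join, the most general of the two-way algorithms compared above, can be sketched in plain Python standing in for Hadoop: mappers tag records with their source relation, the shuffle groups values by join key, and the reducer pairs the two sides. The example relations are made up.

```python
# Sketch of a reduce-side (repartition) MapReduce join. Plain Python emulates
# the map, shuffle, and reduce phases; the two relations are invented.
from collections import defaultdict

orders = [(1, "order-a"), (2, "order-b"), (1, "order-c")]
users = [(1, "alice"), (2, "bob"), (3, "carol")]

def map_phase():
    for key, val in users:
        yield key, ("U", val)        # tag each record with its source table
    for key, val in orders:
        yield key, ("O", val)

def shuffle(mapped):
    groups = defaultdict(list)       # Hadoop's shuffle: group values by key
    for key, tagged in mapped:
        groups[key].append(tagged)
    return groups

def reduce_phase(groups):
    for key, tagged in sorted(groups.items()):
        left = [v for tag, v in tagged if tag == "U"]
        right = [v for tag, v in tagged if tag == "O"]
        for u in left:               # emit the per-key cross product
            for o in right:
                yield key, u, o

result = list(reduce_phase(shuffle(map_phase())))
print(result)  # [(1, 'alice', 'order-a'), (1, 'alice', 'order-c'), (2, 'bob', 'order-b')]
```

A map-side join, by contrast, avoids the shuffle entirely but requires both inputs to be pre-sorted and identically partitioned, which is exactly the preprocessing cost the abstract's experiments measure.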
Procedia PDF Downloads 487
3420 Continuous-Time Analysis and Performance Assessment for Digital Control of High-Frequency Switching Synchronous DC-DC Converter
Authors: Rihab Hamdi, Amel Hadri Hamida, Ouafae Bennis, Sakina Zerouali
Abstract:
This paper features a performance analysis and robustness assessment of a digitally controlled DC-DC three-cell buck converter connected in parallel, operating in continuous conduction mode (CCM), facing variation of feeding parameters and load disturbances. The control strategy relies on continuous-time design with an averaged modeling technique for the high-frequency switching converter. The methodology is to carry out the complete design procedure, with regard to the existence of an instantaneous current operating point for designing the digital closed loop, in the same continuous-time domain. Moreover, the adopted approach includes a digital voltage control (DVC) technique that takes digital control delays and sampling effects into account, aiming to improve efficiency and dynamic response and to prevent generally undesired phenomena. The results obtained under load change, input change, and reference change clearly demonstrate an excellent dynamic response of the proposed technique, as well as stability in all operating conditions; the response is fast, with smooth tracking of the specified output voltage. Simulation studies in the MATLAB/Simulink environment are performed to verify the concept.
Keywords: continuous conduction mode, digital control, parallel multi-cells converter, performance analysis, power electronics
Procedia PDF Downloads 150
3419 Seismic Assessment of a Pre-Cast Recycled Concrete Block Arch System
Authors: Amaia Martinez Martinez, Martin Turek, Carlos Ventura, Jay Drew
Abstract:
This study aims to assess the seismic performance of arch and dome structural systems made from easy-to-assemble precast blocks of recycled concrete. These systems have been developed by the Lock Block Ltd. company of Vancouver, Canada, as an extension of their currently used retaining wall system. The seismic behavior of these structures is characterized by a combination of experimental static and dynamic testing and analytical modeling. For the experimental testing, several tilt tests, as well as a program of shake table testing, were undertaken using small-scale arch models. A suite of earthquakes with different characteristics from important past events was chosen and scaled appropriately for the dynamic testing. Shake table tests applying the ground motions in one direction only (the weak direction of the arch) and in all three directions were conducted and compared. The models were tested with increasing intensity until collapse occurred, which determines the failure level for each earthquake. Since the failure intensity varied with the type of earthquake, a sensitivity analysis of the different parameters was performed, with impulses being the dominant factor. In all cases, the arches exhibited the typical four-hinge failure mechanism, which was also shown in the analytical model. Experimental testing was also performed with the arches reinforced by a steel band over the structure, anchored at both ends of the arch. The models were tested with different pretension levels. The bands were instrumented with strain gauges to measure the force produced by the shaking. These forces were used to develop engineering guidelines for the design of the reinforcement needed for these systems. In addition, an analytical discrete element model was created using the 3DEC software. The blocks were modeled as rigid blocks, with all properties assigned to the joints, including the contribution of the interlocking shear key between blocks.
The model is calibrated to the experimental static tests and validated against the results obtained from the dynamic tests. The model can then be used to scale up the results to the full-scale structure and to extend them to different configurations and boundary conditions.
Keywords: arch, discrete element model, seismic assessment, shake-table testing
Procedia PDF Downloads 206
3418 Comparative Analysis of Simulation-Based and Mixed-Integer Linear Programming Approaches for Optimizing Building Modernization Pathways Towards Decarbonization
Authors: Nico Fuchs, Fabian Wüllhorst, Laura Maier, Dirk Müller
Abstract:
The decarbonization of building stocks necessitates the modernization of existing buildings. Key measures for this include reducing energy demands through insulation of the building envelope, replacing heat generators, and installing solar systems. Given limited financial resources, it is impractical to modernize all buildings in a portfolio simultaneously; instead, prioritization of buildings and modernization measures for a given planning horizon is essential. Optimization models for modernization pathways can assist portfolio managers in this prioritization. However, modeling and solving these large-scale optimization problems, often represented as mixed-integer problems (MIP), necessitates simplifying the operation of building energy systems, particularly with respect to system dynamics and transient behavior. This raises the question of which level of simplification remains sufficient to accurately account for the realistic costs and emissions of building energy systems, ensuring a fair comparison of different modernization measures. This study addresses this issue by comparing a two-stage simulation-based optimization approach with a single-stage mathematical optimization in a mixed-integer linear programming (MILP) formulation. The simulation-based approach serves as a benchmark for realistic energy system operation but requires restricting the solution space to discrete choices of modernization measures, such as the sizing of heating systems. After the operation of different energy systems is calculated in terms of the resulting final energy demands in simulation models in a first stage, the results serve as input for a second-stage MILP optimization, where the design of each building in the portfolio is optimized. In contrast to the simulation-based approach, the MILP-based approach can capture a broader variety of modernization measures, thanks to the efficiency of MILP solvers, but necessitates simplifying the operation of the building energy system.
Both approaches are employed to determine the cost-optimal design and dimensioning of several buildings in a portfolio to meet climate targets within limited yearly budgets, resulting in a modernization pathway for the entire portfolio. The comparison reveals that the MILP formulation successfully captures design decisions of building energy systems, such as the selection of heating systems and the modernization of building envelopes. However, the results regarding the optimal dimensioning of heating technologies differ from those of the two-stage simulation-based approach, as the MILP model tends to overestimate operational efficiency, highlighting the limitations of the MILP approach.
Keywords: building energy system optimization, model accuracy in optimization, modernization pathways, building stock decarbonization
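The core prioritization problem described above, choosing which buildings to modernize within a yearly budget, has the shape of a knapsack problem. The toy sketch below solves it by brute force over combinations; both of the paper's approaches solve far richer formulations (multi-year pathways, sizing decisions, operational models), and the costs and emission savings here are invented.

```python
# Toy budget-constrained modernization selection: maximize annual CO2
# reduction within one year's budget. Brute force over a tiny knapsack stands
# in for the paper's MILP; all numbers are invented.
from itertools import combinations

measures = {                # building: (cost [kEUR], emission reduction [t CO2/a])
    "A": (120, 35),
    "B": (200, 48),
    "C": (90, 22),
    "D": (150, 40),
    "E": (60, 10),
}

def best_plan(budget):
    best, best_saving = (), 0
    for r in range(len(measures) + 1):
        for combo in combinations(measures, r):
            cost = sum(measures[b][0] for b in combo)
            saving = sum(measures[b][1] for b in combo)
            if cost <= budget and saving > best_saving:
                best, best_saving = combo, saving
    return set(best), best_saving

plan, saving = best_plan(budget=400)
print(sorted(plan), saving)  # optimal: buildings A, C, D with 97 t CO2/a
```

An MILP formulation replaces the exponential enumeration with binary decision variables and a solver, which is what lets the paper's single-stage approach scale to realistic portfolios and multi-year planning horizons.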
Procedia PDF Downloads 34
3417 A Multidimensional Genetic Algorithm Applicable for Our VRP Variant Dealing with the Problems of Infrastructure Defaults SVRDP-CMTW: “Safety Vehicle Routing Diagnosis Problem with Control and Modified Time Windows”
Authors: Ben Mansour Mouin, Elloumi Abdelkarim
Abstract:
We discuss the problem of routing a fleet of different vehicles from a central depot to different types of infrastructure defaults with dynamic maintenance requests, modified time windows, and control of the maintained defaults. For this purpose, we propose a modified metaheuristic to solve our mathematical model. SVRDP-CMTW is a VRP variant producing an optimal vehicle plan that facilitates the maintenance of different types of infrastructure defaults. Maintenance is monitored afterwards, based on its priorities, the degree of danger associated with each default, and the neighborhood of the black spots. In this paper, we present a multidimensional genetic algorithm (MGA), detailing its characteristics, proposed mechanisms, and role in our work. The coding of this algorithm represents the parameters that characterize each infrastructure default, with the objective of minimizing a combination of cost, distance, and maintenance time while satisfying the priority levels of the most urgent defaults. The developed algorithm allows the dynamic integration of newly detected defaults at execution time. The result is displayed in our interactive system at routing time. This multidimensional genetic algorithm replaces N genetic algorithms for solving P different types of infrastructure default problems (instead of N algorithms for P problems, one multidimensional algorithm can solve all of these problems at once).
Keywords: mathematical model, VRP, multidimensional genetic algorithm, metaheuristics
Procedia PDF Downloads 196
3416 A Generalized Framework for Adaptive Machine Learning Deployments in Algorithmic Trading
Authors: Robert Caulk
Abstract:
A generalized framework for adaptive machine learning deployments in algorithmic trading is introduced, tested, and released as open-source code. The presented software aims to test the hypothesis that recent data contains enough information to form a probabilistically favorable short-term price prediction. Further, the framework contains various adaptive machine learning techniques geared toward generating profit during strong trends and minimizing losses during trend changes. Results demonstrate that this adaptive machine learning approach is capable of capturing trends and generating profit. The presentation also discusses the importance of defining the parameter space associated with the dynamic training data-set and using that parameter space to identify and remove outliers from prediction data points. Meanwhile, the generalized architecture enables common users to exploit the powerful machinery while focusing on high-level feature engineering and model testing. The presentation also highlights common strengths and weaknesses of the presented technique and presents a broad range of well-tested starting points for feature set construction, target setting, and statistical methods for enforcing risk management and maintaining probabilistically favorable entry and exit points. It also describes the end-to-end data processing tools associated with FreqAI, including automatic data fetching, data aggregation, feature engineering, safe and robust data pre-processing, outlier detection, custom machine learning and statistical tools, data post-processing, adaptive-training backtest emulation, and deployment of adaptive training in live environments. Finally, the generalized user interface is also discussed. Feature engineering is simplified so that users can seed their feature sets with common indicator libraries (e.g. TA-lib, pandas-ta).
The user also feeds data expansion parameters to fill out a large feature set for the model, which can contain as many as 10,000+ features. The presentation describes the various object-oriented programming techniques employed to make FreqAI agnostic to third-party libraries and external data sources. In other words, the back-end is constructed in such a way that users can leverage a broad range of common regression libraries (CatBoost, LightGBM, Scikit-learn, etc.) as well as common neural network libraries (TensorFlow, PyTorch) without worrying about the logistical complexities associated with data handling and API interactions. The presentation finishes by drawing conclusions about the most important parameters associated with a live deployment of the adaptive learning framework and provides the road map for future development of FreqAI.
Keywords: machine learning, market trend detection, open-source, adaptive learning, parameter space exploration
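The outlier-filtering idea mentioned above, characterizing the training feature space and rejecting prediction points that fall outside it, can be sketched with a per-feature z-score test. This is only a stand-in for FreqAI's actual machinery, which offers richer detectors (e.g. SVM- and clustering-based); the training rows below are invented.

```python
# Sketch of training-space outlier rejection: fit column-wise mean/stdev on
# the training features, then flag incoming prediction points that fall more
# than z_max standard deviations outside on any feature. A simple stand-in
# for FreqAI's actual outlier detectors; the data is invented.
from statistics import mean, stdev

def fit_feature_space(rows):
    """Column-wise (mean, stdev) of the training feature matrix."""
    cols = list(zip(*rows))
    return [(mean(c), stdev(c)) for c in cols]

def is_outlier(point, space, z_max=3.0):
    return any(abs(x - m) / s > z_max for x, (m, s) in zip(point, space))

train = [[1.0, 10.0], [1.2, 11.0], [0.9, 9.5], [1.1, 10.5], [1.0, 10.2]]
space = fit_feature_space(train)
print(is_outlier([1.05, 10.1], space))  # False: inside the training envelope
print(is_outlier([5.0, 10.1], space))   # True: first feature is far outside
```

Refusing to predict on such points is what keeps an adaptive model from extrapolating far outside the regime its recent training window covered.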
Procedia PDF Downloads 89
3415 Buddhism and Society: The History and Contribution of Buddhist Education in Taiwan
Authors: Meilee Shen
Abstract:
Buddhist monks and nuns have changed within the dynamic culture of Taiwan in which they find themselves; the island's diverse cultures, economic development, and advanced educational levels are all part of this. Buddhist education has become an interesting aspect of the history of Taiwanese Buddhism. In recent years, Buddhists in Taiwan have made significant contributions to both academic and religious studies. This paper focuses on the following questions: What is Buddhist education? How does Buddhist education change the monastic role in Taiwanese Buddhism? Finally, how has Buddhist education benefited Taiwanese society? Research indicates that Buddhist education in Taiwan possesses four features: 1. Master teaching disciple: Buddhist masters teach monastic rules to monastic disciples only. 2. Monastic education: mainly focused on Buddhist doctrines and sangha rules. 3. From Buddhist education to secular education: Buddhist studies were introduced into secular educational environments, which opened the way for outsiders to study Buddhism and also opened a door to recruit young college students into the monastery. 4. Academic Buddhist training: Buddhist monks and nuns have begun to study at secular colleges in various programs besides Buddhist studies. In recent years, Buddhist colleges and secular universities' religious studies programs have begun to admit overseas students due to the low birth rate in Taiwan. The relationship between Buddhism and Taiwanese society is therefore a dynamic one.
Keywords: Buddhist college and university in Taiwan, Buddhist education, institutionalization in Taiwanese Buddhism, monastic and secular education, Taiwanese Buddhist monks and nuns
Procedia PDF Downloads 171
3414 Electrical Load Estimation Using Estimated Fuzzy Linear Parameters
Authors: Bader Alkandari, Jamal Y. Madouh, Ahmad M. Alkandari, Anwar A. Alnaqi
Abstract:
A new formulation of the fuzzy linear estimation problem is presented as a linear programming problem. The objective is to minimize the spread of the data points, taking into consideration the type of the membership function of the fuzzy parameters, in order to satisfy the constraints at each measurement point and to ensure that the original membership is included in the estimated membership. Different models are developed for a fuzzy triangular membership. The proposed models are applied to different examples from the area of fuzzy linear regression and, finally, to different examples of estimating the electrical load on a busbar. It was found that the proposed technique is well suited to electrical load estimation, since the nature of the load is characterized by uncertainty and vagueness.
Keywords: fuzzy regression, load estimation, fuzzy linear parameters, electrical load estimation
Procedia PDF Downloads 5403413 Direct Measurement of Pressure and Temperature Variations During High-Speed Friction Experiments
Authors: Simon Guerin-Marthe, Marie Violay
Abstract:
Thermal Pressurization (TP) has been proposed as a key mechanism involved in the weakening of faults during dynamic ruptures. Theoretical and numerical studies clearly show how frictional heating can lead to an increase in pore fluid pressure due to the rapid slip along faults occurring during earthquakes. In addition, recent laboratory studies have evidenced local pore pressure or local temperature variation during rotary shear tests, which are consistent with TP theoretical and numerical models. The aim of this study is to complement previous ones by measuring both local pore pressure and local temperature variations in the vicinity of a water-saturated calcite gouge layer subjected to a controlled slip velocity in direct double shear configuration. Laboratory investigation of TP process is crucial in order to understand the conditions at which it is likely to become a dominant mechanism controlling dynamic friction. It is also important in order to understand the timing and magnitude of temperature and pore pressure variations, to help understanding when it is negligible, and how it competes with other rather strengthening-mechanisms such as dilatancy, which can occur during rock failure. Here we present unique direct measurements of temperature and pressure variations during high-speed friction experiments under various load point velocities and show the timing of these variations relatively to the slip event.Keywords: thermal pressurization, double-shear test, high-speed friction, dilatancy
Procedia PDF Downloads 613412 An Efficient Traceability Mechanism in the Audited Cloud Data Storage
Authors: Ramya P, Lino Abraham Varghese, S. Bose
Abstract:
By cloud storage services, the data can be stored in the cloud, and can be shared across multiple users. Due to the unexpected hardware/software failures and human errors, which make the data stored in the cloud be lost or corrupted easily it affected the integrity of data in cloud. Some mechanisms have been designed to allow both data owners and public verifiers to efficiently audit cloud data integrity without retrieving the entire data from the cloud server. But public auditing on the integrity of shared data with the existing mechanisms will unavoidably reveal confidential information such as identity of the person, to public verifiers. Here a privacy-preserving mechanism is proposed to support public auditing on shared data stored in the cloud. It uses group signatures to compute verification metadata needed to audit the correctness of shared data. The identity of the signer on each block in shared data is kept confidential from public verifiers, who are easily verifying shared data integrity without retrieving the entire file. But on demand, the signer of the each block is reveal to the owner alone. Group private key is generated once by the owner in the static group, where as in the dynamic group, the group private key is change when the users revoke from the group. When the users leave from the group the already signed blocks are resigned by cloud service provider instead of owner is efficiently handled by efficient proxy re-signature scheme.Keywords: data integrity, dynamic group, group signature, public auditing
Procedia PDF Downloads 3923411 Analyzing the Practicality of Drawing Inferences in Automation of Commonsense Reasoning
Authors: Chandan Hegde, K. Ashwini
Abstract:
Commonsense reasoning is the simulation of human ability to make decisions during the situations that we encounter every day. It has been several decades since the introduction of this subfield of artificial intelligence, but it has barely made some significant progress. The modern computing aids also have remained impotent in this regard due to the absence of a strong methodology towards commonsense reasoning development. Among several accountable reasons for the lack of progress, drawing inference out of commonsense knowledge-base stands out. This review paper emphasizes on a detailed analysis of representation of reasoning uncertainties and feasible prospects of programming aids for drawing inferences. Also, the difficulties in deducing and systematizing commonsense reasoning and the substantial progress made in reasoning that influences the study have been discussed. Additionally, the paper discusses the possible impacts of an effective inference technique in commonsense reasoning.Keywords: artificial intelligence, commonsense reasoning, knowledge base, uncertainty in reasoning
Procedia PDF Downloads 1873410 Multivariate Control Chart to Determine Efficiency Measurements in Industrial Processes
Authors: J. J. Vargas, N. Prieto, L. A. Toro
Abstract:
Control charts are commonly used to monitor processes involving either variable or attribute of quality characteristics and determining the control limits as a critical task for quality engineers to improve the processes. Nonetheless, in some applications it is necessary to include an estimation of efficiency. In this paper, the ability to define the efficiency of an industrial process was added to a control chart by means of incorporating a data envelopment analysis (DEA) approach. In depth, a Bayesian estimation was performed to calculate the posterior probability distribution of parameters as means and variance and covariance matrix. This technique allows to analyse the data set without the need of using the hypothetical large sample implied in the problem and to be treated as an approximation to the finite sample distribution. A rejection simulation method was carried out to generate random variables from the parameter functions. Each resulting vector was used by stochastic DEA model during several cycles for establishing the distribution of each efficiency measures for each DMU (decision making units). A control limit was calculated with model obtained and if a condition of a low level efficiency of DMU is presented, system efficiency is out of control. In the efficiency calculated a global optimum was reached, which ensures model reliability.Keywords: data envelopment analysis, DEA, Multivariate control chart, rejection simulation method
Procedia PDF Downloads 3733409 Unsteady Three-Dimensional Adaptive Spatial-Temporal Multi-Scale Direct Simulation Monte Carlo Solver to Simulate Rarefied Gas Flows in Micro/Nano Devices
Authors: Mirvat Shamseddine, Issam Lakkis
Abstract:
We present an efficient, three-dimensional parallel multi-scale Direct Simulation Monte Carlo (DSMC) algorithm for the simulation of unsteady rarefied gas flows in micro/nanosystems. The algorithm employs a novel spatiotemporal adaptivity scheme. The scheme performs a fully dynamic multi-level grid adaption based on the gradients of flow macro-parameters and an automatic temporal adaptation. The computational domain consists of a hierarchical octree-based Cartesian grid representation of the flow domain and a triangular mesh for the solid object surfaces. The hybrid mesh, combined with the spatiotemporal adaptivity scheme, allows for increased flexibility and efficient data management, rendering the framework suitable for efficient particle-tracing and dynamic grid refinement and coarsening. The parallel algorithm is optimized to run DSMC simulations of strongly unsteady, non-equilibrium flows over multiple cores. The presented method is validated by comparing with benchmark studies and then employed to improve the design of micro-scale hotwire thermal sensors in rarefied gas flows.Keywords: DSMC, oct-tree hierarchical grid, ray tracing, spatial-temporal adaptivity scheme, unsteady rarefied gas flows
Procedia PDF Downloads 2993408 Numerical Simulation of Lifeboat Launching Using Overset Meshing
Authors: Alok Khaware, Vinay Kumar Gupta, Jean Noel Pederzani
Abstract:
Lifeboat launching from marine vessel or offshore platform is one of the important areas of research in offshore applications. With the advancement of computational fluid dynamic simulation (CFD) technology to solve fluid induced motions coupled with Six Degree of Freedom (6DOF), rigid body dynamics solver, it is now possible to predict the motion of the lifeboat precisely in different challenging conditions. Traditionally dynamic remeshing approach is used to solve this kind of problems, but remeshing approach has some bottlenecks to control good quality mesh in transient moving mesh cases. In the present study, an overset method with higher-order interpolation is used to simulate a lifeboat launched from an offshore platform into calm water, and volume of fluid (VOF) method is used to track free surface. Overset mesh consists of a set of overlapping component meshes, which allows complex geometries to be meshed with lesser effort. Good quality mesh with local refinement is generated at the beginning of the simulation and stay unchanged throughout the simulation. Overset mesh accuracy depends on the precise interpolation technique; the present study includes a robust and accurate least square interpolation method and results obtained with overset mesh shows good agreement with experiment.Keywords: computational fluid dynamics, free surface flow, lifeboat launching, overset mesh, volume of fluid
Procedia PDF Downloads 2773407 Assessment and Mitigation of Slope Stability Hazards Along Kombolcha-Desse Road, Northern Ethiopia
Authors: Biruk Wolde Eremacho
Abstract:
The Kombolcha to Desse road, linking Addis Ababa with Northern Ethiopia towns traverses through one of the most difficult mountainous ranges in Ethiopia. The presence of loose unconsolidated materials (colluvium materials), highly weathered and fractured basalt rocks high relief, steep natural slopes, nature of geologic formations exposed along the road section, poor drainage conditions, occurrence of high seasonal rains, and seismically active nature of the region created favorable condition for slope instability in the area. Thus, keeping in mind all above points the present study was conceived to study in detail the slope stability condition of the area. It was realized that detailed slope stability studies along this road section are very necessary to identify critical slopes and to provide the best remedial measures to minimize the slope instability problems which frequently disrupt and endanger the traffic movement on this important road. For the present study based on the field manifestation of instability two most critical slope sections were identified for detailed slope stability analysis. The deterministic slope stability analysis approach was followed to perform the detailed slope stability analysis of the selected slope sections. Factor of safety for the selected slope sections was determined for the different anticipated conditions (i.e., static and dynamic with varied water saturations) using Slope/W and Slide software. Both static and seismic slope stability analysis were carried out and factor of safety was deduced for each anticipated conditions. In general, detailed slope stability analysis of the two critical slope sections reveals that for only static dry condition both the slopes sections would be stable. However, for the rest anticipated conditions defined by static and dynamic situations with varied water saturations both critical slope sections would be unstable. 
Moreover, the causes of slope instability in the study area are governed by different factors; therefore integrated approaches of remedial measures are more appropriate to mitigate the possible slope instability in the study area. Depending on site condition and slope stability analysis result four types of suitable preventive and remedial measures are recommended namely; proper managements of drainages, retaining structures, gabions, and managing steeply cut slopes.Keywords: factor of safety, remedial measures, slope stability analysis, static and dynamic condition
Procedia PDF Downloads 2793406 Some Accuracy Related Aspects in Two-Fluid Hydrodynamic Sub-Grid Modeling of Gas-Solid Riser Flows
Authors: Joseph Mouallem, Seyed Reza Amini Niaki, Norman Chavez-Cussy, Christian Costa Milioli, Fernando Eduardo Milioli
Abstract:
Sub-grid closures for filtered two-fluid models (fTFM) useful in large scale simulations (LSS) of riser flows can be derived from highly resolved simulations (HRS) with microscopic two-fluid modeling (mTFM). Accurate sub-grid closures require accurate mTFM formulations as well as accurate correlation of relevant filtered parameters to suitable independent variables. This article deals with both of those issues. The accuracy of mTFM is touched by assessing the impact of gas sub-grid turbulence over HRS filtered predictions. A gas turbulence alike effect is artificially inserted by means of a stochastic forcing procedure implemented in the physical space over the momentum conservation equation of the gas phase. The correlation issue is touched by introducing a three-filtered variable correlation analysis (three-marker analysis) performed under a variety of different macro-scale conditions typical or risers. While the more elaborated correlation procedure clearly improved accuracy, accounting for gas sub-grid turbulence had no significant impact over predictions.Keywords: fluidization, gas-particle flow, two-fluid model, sub-grid models, filtered closures
Procedia PDF Downloads 1243405 Analyzing the Impact of Migration on HIV and AIDS Incidence Cases in Malaysia
Authors: Ofosuhene O. Apenteng, Noor Azina Ismail
Abstract:
The human immunodeficiency virus (HIV) that causes acquired immune deficiency syndrome (AIDS) remains a global cause of morbidity and mortality. It has caused panic since its emergence. Relationships between migration and HIV/AIDS have become complex. In the absence of prospectively designed studies, dynamic mathematical models that take into account the migration movement which will give very useful information. We have explored the utility of mathematical models in understanding transmission dynamics of HIV and AIDS and in assessing the magnitude of how migration has impact on the disease. The model was calibrated to HIV and AIDS incidence data from Malaysia Ministry of Health from the period of 1986 to 2011 using Bayesian analysis with combination of Markov chain Monte Carlo method (MCMC) approach to estimate the model parameters. From the estimated parameters, the estimated basic reproduction number was 22.5812. The rate at which the susceptible individual moved to HIV compartment has the highest sensitivity value which is more significant as compared to the remaining parameters. Thus, the disease becomes unstable. This is a big concern and not good indicator from the public health point of view since the aim is to stabilize the epidemic at the disease-free equilibrium. However, these results suggest that the government as a policy maker should make further efforts to curb illegal activities performed by migrants. It is shown that our models reflect considerably the dynamic behavior of the HIV/AIDS epidemic in Malaysia and eventually could be used strategically for other countries.Keywords: epidemic model, reproduction number, HIV, MCMC, parameter estimation
Procedia PDF Downloads 3663404 Dynamic Conformal Arc versus Intensity Modulated Radiotherapy for Image Guided Stereotactic Radiotherapy of Cranial Lesion
Authors: Chor Yi Ng, Christine Kong, Loretta Teo, Stephen Yau, FC Cheung, TL Poon, Francis Lee
Abstract:
Purpose: Dynamic conformal arc (DCA) and intensity modulated radiotherapy (IMRT) are two treatment techniques commonly used for stereotactic radiosurgery/radiotherapy of cranial lesions. IMRT plans usually give better dose conformity while DCA plans have better dose fall off. Rapid dose fall off is preferred for radiotherapy of cranial lesions, but dose conformity is also important. For certain lesions, DCA plans have good conformity, while for some lesions, the conformity is just unacceptable with DCA plans, and IMRT has to be used. The choice between the two may not be apparent until each plan is prepared and dose indices compared. We described a deviation index (DI) which is a measurement of the deviation of the target shape from a sphere, and test its functionality to choose between the two techniques. Method and Materials: From May 2015 to May 2017, our institute has performed stereotactic radiotherapy for 105 patients treating a total of 115 lesions (64 DCA plans and 51 IMRT plans). Patients were treated with the Varian Clinac iX with HDMLC. Brainlab Exactrac system was used for patient setup. Treatment planning was done with Brainlab iPlan RT Dose (Version 4.5.4). DCA plans were found to give better dose fall off in terms of R50% (R50% (DCA) = 4.75 Vs R50% (IMRT) = 5.242) while IMRT plans have better conformity in terms of treatment volume ratio (TVR) (TVR(DCA) = 1.273 Vs TVR(IMRT) = 1.222). Deviation Index (DI) is proposed to better facilitate the choice between the two techniques. DI is the ratio of the volume of a 1 mm shell of the PTV and the volume of a 1 mm shell of a sphere of identical volume. DI will be close to 1 for a near spherical PTV while a large DI will imply a more irregular PTV. To study the functionality of DI, 23 cases were chosen with PTV volume ranged from 1.149 cc to 29.83 cc, and DI ranged from 1.059 to 3.202. For each case, we did a nine field IMRT plan with one pass optimization and a five arc DCA plan. 
Then the TVR and R50% of each case were compared and correlated with the DI. Results: For the 23 cases, TVRs and R50% of the DCA and IMRT plans were examined. The conformity for IMRT plans are better than DCA plans, with majority of the TVR(DCA)/TVR(IMRT) ratios > 1, values ranging from 0.877 to1.538. While the dose fall off is better for DCA plans, with majority of the R50%(DCA)/ R50%(IMRT) ratios < 1. Their correlations with DI were also studied. A strong positive correlation was found between the ratio of TVRs and DI (correlation coefficient = 0.839), while the correlation between the ratio of R50%s and DI was insignificant (correlation coefficient = -0.190). Conclusion: The results suggest DI can be used as a guide for choosing the planning technique. For DI greater than a certain value, we can expect the conformity for DCA plans to become unacceptably great, and IMRT will be the technique of choice.Keywords: cranial lesions, dynamic conformal arc, IMRT, image guided radiotherapy, stereotactic radiotherapy
Procedia PDF Downloads 2413403 Some Pertinent Issues and Considerations on CBSE
Authors: Anil Kumar Tripathi, Ratneshwer Gupta
Abstract:
All the software engineering researches and best industry practices aim at providing software products with high degree of quality and functionality at low cost and less time. These requirements are addressed by the Component Based Software Engineering (CBSE) as well. CBSE, which deals with the software construction by components’ assembly, is a revolutionary extension of Software Engineering. CBSE must define and describe processes to assure timely completion of high quality software systems that are composed of a variety of pre built software components. Though these features provide distinct and visible benefits in software design and programming, they also raise some challenging problems. The aim of this work is to summarize the pertinent issues and considerations in CBSE to make an understanding in forms of concepts and observations that may lead to development of newer ways of dealing with the problems and challenges in CBSE.Keywords: software component, component based software engineering, software process, testing, maintenance
Procedia PDF Downloads 4013402 Deciding Graph Non-Hamiltonicity via a Closure Algorithm
Authors: E. R. Swart, S. J. Gismondi, N. R. Swart, C. E. Bell
Abstract:
We present an heuristic algorithm that decides graph non-Hamiltonicity. All graphs are directed, each undirected edge regarded as a pair of counter directed arcs. Each of the n! Hamilton cycles in a complete graph on n+1 vertices is mapped to an n-permutation matrix P where p(u,i)=1 if and only if the ith arc in a cycle enters vertex u, starting and ending at vertex n+1. We first create exclusion set E by noting all arcs (u, v) not in G, sufficient to code precisely all cycles excluded from G i.e. cycles not in G use at least one arc not in G. Members are pairs of components of P, {p(u,i),p(v,i+1)}, i=1, n-1. A doubly stochastic-like relaxed LP formulation of the Hamilton cycle decision problem is constructed. Each {p(u,i),p(v,i+1)} in E is coded as variable q(u,i,v,i+1)=0 i.e. shrinks the feasible region. We then implement the Weak Closure Algorithm (WCA) that tests necessary conditions of a matching, together with Boolean closure to decide 0/1 variable assignments. Each {p(u,i),p(v,j)} not in E is tested for membership in E, and if possible, added to E (q(u,i,v,j)=0) to iteratively maximize |E|. If the WCA constructs E to be maximal, the set of all {p(u,i),p(v,j)}, then G is decided non-Hamiltonian. Only non-Hamiltonian G share this maximal property. Ten non-Hamiltonian graphs (10 through 104 vertices) and 2000 randomized 31 vertex non-Hamiltonian graphs are tested and correctly decided non-Hamiltonian. For Hamiltonian G, the complement of E covers a matching, perhaps useful in searching for cycles. We also present an example where the WCA fails.Keywords: Hamilton cycle decision problem, computational complexity theory, graph theory, theoretical computer science
Procedia PDF Downloads 3733401 Mathematical Modeling and Algorithms for the Capacitated Facility Location and Allocation Problem with Emission Restriction
Authors: Sagar Hedaoo, Fazle Baki, Ahmed Azab
Abstract:
In supply chain management, network design for scalable manufacturing facilities is an emerging field of research. Facility location allocation assigns facilities to customers to optimize the overall cost of the supply chain. To further optimize the costs, capacities of these facilities can be changed in accordance with customer demands. A mathematical model is formulated to fully express the problem at hand and to solve small-to-mid range instances. A dedicated constraint has been developed to restrict emissions in line with the Kyoto protocol. This problem is NP-Hard; hence, a simulated annealing metaheuristic has been developed to solve larger instances. A case study on the USA-Canada cross border crossing is used.Keywords: emission, mixed integer linear programming, metaheuristic, simulated annealing
Procedia PDF Downloads 309