Search results for: fractional stochastic processes
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6059

5219 Steady-State Behavior of a Multi-Phase M/M/1 Queue in Random Evolution Subject to Catastrophe Failure

Authors: Reni M. Sagayaraj, Anand Gnana S. Selvam, Reynald R. Susainathan

Abstract:

In this paper, we consider stochastic queueing models for the steady-state behavior of a multi-phase M/M/1 queue in random evolution subject to catastrophe failure. The arrival flow of customers is described by a marked Markovian arrival process. The service times of the different customer types have a phase-type distribution with different parameters. To facilitate the investigation of the system, we use a generalized phase-type service time distribution. The model contains a repair state; when a catastrophe occurs, the system is transferred to the failure state. The paper focuses on the steady-state equation, and the steady-state behavior of the underlying queueing model, along with the average queue size, is analyzed.

Keywords: M/G/1 queuing system, multi-phase, random evolution, steady-state equation, catastrophe failure

Procedia PDF Downloads 311
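The steady-state analysis described in the abstract above builds on the classical M/M/1 results; a minimal sketch of those baseline quantities (utilization, stationary distribution, mean queue size) is below. The multi-phase, random-evolution and catastrophe-failure extensions of the paper are not modeled here, and the parameter values are illustrative only.

```python
# Baseline M/M/1 steady-state quantities; the paper's multi-phase and
# catastrophe/repair extensions are NOT modeled in this sketch.

def mm1_steady_state(lam, mu, n_max=10):
    """Return utilization rho, mean number in system, and P(n) for n = 0..n_max."""
    if lam >= mu:
        raise ValueError("stability requires lambda < mu")
    rho = lam / mu                                   # server utilization
    mean_in_system = rho / (1.0 - rho)               # E[N] for M/M/1
    probs = [(1.0 - rho) * rho ** n for n in range(n_max + 1)]  # P(N = n)
    return rho, mean_in_system, probs

rho, mean_n, probs = mm1_steady_state(lam=2.0, mu=5.0)
```

A catastrophe mechanism would be grafted on by adding a killing rate that resets the queue, which changes the balance equations and hence the stationary distribution.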
5218 Optimal Load Control Strategy in the Presence of Stochastically Dependent Renewable Energy Sources

Authors: Mahmoud M. Othman, Almoataz Y. Abdelaziz, Yasser G. Hegazy

Abstract:

This paper presents a load control strategy based on a modification of the Big Bang-Big Crunch optimization method. The proposed strategy aims to determine the optimal load to be controlled and the corresponding time of control in order to minimize the energy purchased from the substation. The presented strategy helps the distribution network operator to rely on renewable energy sources in supplying the system demand. The renewable energy sources used in the presented study are modeled using the diagonal band copula method and the sequential Monte Carlo method in order to accurately consider the multivariate stochastic dependence between wind power, photovoltaic power and the system demand. The proposed algorithms are implemented in the MATLAB environment and tested on the IEEE 37-node feeder. Several case studies are carried out, and the subsequent discussions show the effectiveness of the proposed algorithm.

Keywords: big bang big crunch, distributed generation, load control, optimization, planning

Procedia PDF Downloads 329
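The dependence-aware sampling step described above can be illustrated with a copula draw. The paper uses a diagonal band copula; the sketch below substitutes a Gaussian copula purely for illustration, and the 0-3 MW linear wind marginal is a hypothetical placeholder, not from the paper.

```python
# Sketch of sampling stochastically dependent inputs for a sequential
# Monte Carlo study. Gaussian copula substituted for the paper's
# diagonal band copula; all numbers are illustrative.
import math, random

def gaussian_copula_pairs(n, r, seed=1):
    """Draw n (u_wind, u_demand) pairs whose dependence is set by r."""
    rng = random.Random(seed)
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # std normal CDF
    pairs = []
    for _ in range(n):
        z1 = rng.gauss(0.0, 1.0)
        z2 = r * z1 + math.sqrt(1.0 - r * r) * rng.gauss(0.0, 1.0)  # correlated normal
        pairs.append((phi(z1), phi(z2)))   # uniforms; feed marginal inverse CDFs
    return pairs

pairs = gaussian_copula_pairs(20000, r=0.8)
# map the first coordinate to, e.g., wind power via an assumed marginal:
wind = [3.0 * u for u, _ in pairs]         # hypothetical 0-3 MW linear marginal
```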
5217 The Technological Problem of Simulation of the Logistics Center

Authors: Juraj Camaj, Anna Dolinayova, Jana Lalinska, Miroslav Bariak

Abstract:

Planning the infrastructure and processes of a logistics center, within the frame of various kinds of logistic hubs and the technological activities in them, represents a quite complex problem. The main goal is to design an appropriate layout that enables the expected operation to be realized at the desired levels. Simulation software represents a progressive contemporary experimental technique that can support the complex processes of infrastructure planning and all activities within it. Simulation experiments reflecting various planned infrastructure variants investigate and verify their suitability in relation to the corresponding expected operation. This approach enables qualified decisions to be made about infrastructure investments or measures, which benefit from simulation-based verification. The paper presents simulation software for simulating the infrastructural layout and technological activities in a marshalling yard, an intermodal terminal, a warehouse, and combinations of them as parts of a logistics center.

Keywords: marshalling yard, intermodal terminal, warehouse, transport technology, simulation

Procedia PDF Downloads 501
5216 Neural Synchronization - The Brain’s Transfer of Sensory Data

Authors: David Edgar

Abstract:

To understand how the brain’s subconscious and conscious processes function, we must conquer the physics of Unity, which leads to duality’s algorithm. Where the subconscious (bottom-up) and conscious (top-down) processes function together to produce and consume intelligence, we use terms like ‘time is relative,’ but we really do understand the meaning. In the brain, there are different processes and, therefore, different observers. These different processes experience time at different rates. A sensory system such as the eyes cycles its measurements around every 33 milliseconds, the conscious process of the frontal lobe cycles at 300 milliseconds, and the subconscious process of the thalamus cycles at 5 milliseconds. Three different observers experience time differently. To bridge the observers, the thalamus, which is the fastest of the processes, maintains a synchronous state and entangles the different components of the brain’s physical process. The entanglements form a synchronous cohesion between the brain components, allowing them to share the same state and execute in the same measurement cycle. The thalamus uses the shared state to control the firing sequence of the brain’s linear subconscious process. Sharing state also allows the brain to cheat on the amount of sensory data that must be exchanged between components. Only unpredictable motion is transferred through the synchronous state, because predictable motion already exists in the shared framework. The brain’s synchronous subconscious process is entirely based on energy conservation, where prediction regulates energy usage. So, every 33 milliseconds, the eyes dump their sensory data into the thalamus. The thalamus then performs a motion measurement to identify the unpredictable motion in the sensory data. Here is the trick: the thalamus conducts its measurement based on the original observation time of the sensory system (33 ms), not its own process time (5 ms).
This creates a data payload of synchronous motion that preserves the original sensory observation. Basically, a frozen moment in time (Flat 4D). The single moment in time can then be processed through the single state maintained by the synchronous process. Other processes, such as consciousness (300 ms), can interface with the synchronous state to generate awareness of that moment. Now, synchronous data traveling through a separate faster synchronous process creates a theoretical time tunnel where observation time is tunneled through the synchronous process and is reproduced on the other side in the original time-relativity. The synchronous process eliminates time dilation by simply removing itself from the equation so that its own process time does not alter the experience. To the original observer, the measurement appears to be instantaneous, but in the thalamus, a linear subconscious process generating sensory perception and thought production is being executed. It all simply occurs in the time available, because the other observation times are slower than thalamic measurement time. For life to exist in the physical universe, a linear measurement process is required; it just hides by operating at a faster time relativity. What’s interesting is that time dilation is not the problem; it’s the solution. Einstein said there was no universal time.

Keywords: neural synchronization, natural intelligence, 99.95% IoT data transmission savings, artificial subconscious intelligence (ASI)

Procedia PDF Downloads 111
5215 Diagnostic Assessment for Mastery Learning of Engineering Students with a Bayesian Network Model

Authors: Zhidong Zhang, Yingchen Yang

Abstract:

In this study, a diagnostic assessment model for mastery engineering learning was established based on a group of undergraduate students who studied an engineering course. A diagnostic assessment model can examine both students' learning processes and report achievement results. One unique characteristic is that the diagnostic assessment model can recognize the errors and anything blocking students in their learning processes. Feedback is provided to help students know how to solve their learning problems with alternative strategies, and to help the instructor find alternative pedagogical strategies in the instructional designs. Dynamics is a core course shared by several engineering programs, and its problems are very challenging for engineering students to solve. Thus, knowledge acquisition and problem-solving skills are crucial for student success, and developing an effective and valid assessment model for student learning is of great importance. Diagnostic assessment is such a model, one which can provide effective feedback for both students and instructor in the mastery of engineering learning.

Keywords: diagnostic assessment, mastery learning, engineering, bayesian network model, learning processes

Procedia PDF Downloads 143
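The core inference step of such a diagnostic model can be illustrated with a single mastery node updated by Bayes' rule from observed responses. The paper's network has many skill and error nodes; the slip/guess parameters below are illustrative assumptions, not values from the study.

```python
# One-node sketch of the diagnostic idea: update P(mastery) from an
# observed response using slip and guess probabilities (illustrative values).
def posterior_mastery(prior, correct, slip=0.1, guess=0.2):
    """Bayes update of the mastery probability given one response."""
    if correct:
        num = prior * (1.0 - slip)            # P(correct | mastery) = 1 - slip
        den = num + (1.0 - prior) * guess     # P(correct | no mastery) = guess
    else:
        num = prior * slip
        den = num + (1.0 - prior) * (1.0 - guess)
    return num / den

p = 0.5
for obs in [True, True, False]:               # a short response sequence
    p = posterior_mastery(p, obs)
```

A wrong answer pulls the estimate down without zeroing it, which is exactly what lets the model distinguish a slip from a genuine knowledge gap.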
5214 Sidelobe Free Inverse Synthetic Aperture Radar Imaging of Non Cooperative Moving Targets Using WiFi

Authors: Jiamin Huang, Shuliang Gui, Zengshan Tian, Fei Yan, Xiaodong Wu

Abstract:

In recent years, with the rapid development of radio frequency technology, the differences between radar sensing and wireless communication in terms of receiving and sending channels, signal processing, data management and control have gradually been shrinking, and there has been a trend toward integrated communication and radar sensing. However, most existing radar imaging technologies based on communication signals are combined with synthetic aperture radar (SAR) imaging, which does not conform to the practical application case of the integration of communication and radar. Therefore, this paper proposes a high-precision imaging method using communication signals based on the imaging mechanism of inverse synthetic aperture radar (ISAR) imaging. The method makes full use of the structural characteristics of the orthogonal frequency division multiplexing (OFDM) signal, so that the sidelobe effect in range compression is removed, and combines Radon transform and fractional Fourier transform (FrFT) parameter estimation methods to achieve ISAR imaging of non-cooperative targets. Simulation experiments and measured results verify the feasibility and effectiveness of the method and prove its broad application prospects in the field of intelligent transportation.

Keywords: integration of communication and radar, OFDM, radon, FrFT, ISAR

Procedia PDF Downloads 106
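Why OFDM enables sidelobe-free range compression can be sketched in a few lines: dividing the received subcarrier symbols by the known transmitted ones removes the data modulation entirely, so the IDFT of the quotient is an ideal, data-independent range profile. The sketch below simulates a single static scatterer; the cross-range (Doppler) processing, Radon transform and FrFT estimation of the paper are not included.

```python
# Sidelobe-free OFDM range compression for one simulated scatterer.
import cmath, random

N, delay = 64, 5
rng = random.Random(0)
X = [cmath.exp(1j * (cmath.pi / 2) * rng.randrange(4)) for _ in range(N)]  # QPSK symbols
H = [cmath.exp(-2j * cmath.pi * k * delay / N) for k in range(N)]          # target at bin 5
Y = [x * h for x, h in zip(X, H)]                                          # received subcarriers

Hest = [y / x for y, x in zip(Y, X)]          # divide out the known data symbols
profile = [abs(sum(Hest[k] * cmath.exp(2j * cmath.pi * k * n / N)
                   for k in range(N))) / N for n in range(N)]              # IDFT -> range profile
peak = max(range(N), key=lambda n: profile[n])
```

Because the data symbols cancel exactly, the profile is a clean delta at the target bin, with no data-dependent sidelobes that a matched filter would leave.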
5213 Modeling of Building a Conceptual Scheme for Multimodal Freight Transportation Information System

Authors: Gia Surguladze, Nino Topuria, Lily Petriashvili, Giorgi Surguladze

Abstract:

The modeling of the building processes of a multimodal freight transportation support information system is discussed, based on modern CASE technologies. The functional efficiencies of ports in the eastern part of the Black Sea are analyzed, taking into account their ecological, seasonal and resource usage parameters. By resources, we mean the capacities of berths, cranes and automotive transport, as well as work crews and neighbouring airports. For designing the database of the computer support system for the managerial (logistics) function, the use of an Object-Role Modeling (ORM) tool (NORMA, Natural ORM Architecture) is proposed, after which the Entity Relationship Model (ERM) is generated in an automated process. The software is developed based on process-oriented and service-oriented architecture, in the Visual Studio.NET environment.

Keywords: seaport resources, business-processes, multimodal transportation, CASE technology, object-role model, entity relationship model, SOA

Procedia PDF Downloads 412
5212 PM10 Concentration Emitted from Blasting and Crushing Processes of Limestone Mines in Saraburi Province, Thailand

Authors: Kanokrat Makkwao, Tassanee Prueksasit

Abstract:

This study aimed to investigate PM10 emitted from different limestone mines in Saraburi province, Thailand. Blasting and crushing were the main processes selected for PM10 sampling. PM10 was collected in two mines: a limestone mine for cement manufacturing (mine A) and a limestone mine for construction (mine B). IMPACT samplers were used to collect PM10. At blasting, points aligned with the upwind and downwind directions were assigned for the sampling. The ranges of PM10 concentrations at mines A and B were 0.267-5.592 and 0.130-0.325 mg/m³, respectively, and the concentration at blasting in mine A was significantly higher than in mine B (p < 0.05). During crushing at mine A, PM10 concentrations in the ranges of 1.153-3.716 and 0.085-1.724 mg/m³ were observed at the crusher and at the piles, respectively, whereas the PM10 concentrations measured at four sampling points in mine B, including the secondary crusher, tertiary crusher, screening point, and piles, ranged 1.032-16.529, 10.957-74.057, 0.655-4.956, and 0.169-1.699 mg/m³, respectively. The emitted PM10 concentrations at the crushing units differed in range depending on the type of machine, its operation, the dust collection and control system, and environmental conditions.

Keywords: PM₁₀ concentration, limestone mines, blasting, crushing

Procedia PDF Downloads 131
5211 The Strategies and Mediating Processes of Learning the Inflectional Morphology in English: A Case Study for Taiwanese English Learners

Authors: Hsiu-Ling Hsu, En-Minh (John) Lan

Abstract:

Pronunciation has received more and more attention from language researchers and teachers because it is important for effective, or even successful, communication. Consistently and correctly producing verbal morphology orally, such as the English regular past tense inflection, has been a big challenge for FL learners. The research aims to explore the developmental trajectory of inflectional morphology among EFL (English as a foreign language) learners, that is, which mediating processes and strategies EFL learners use to attain a native-like prosodic structure of inflectional morphemes (e.g., the -ed and -s suffixes), by comparing the differences among EFL learners at different English levels. This research adopted a self-repair analysis and the Prosodic Transfer Hypothesis, with three developmental stages, as its theoretical framework. To answer the research questions, we conducted two experiments, a written grammatical tense test (Experiment 1) and a read-aloud oral production task (Experiment 2), and recruited 30 participants who were divided into three groups: low, middle and advanced EFL learners. Experiment 1 was conducted to ensure that participants had learned the rules for forming the English regular past tense, and Experiment 2 was carried out to compare the data across EFL learner groups at different English levels. The EFL learners' self-repair data showed at least four interesting findings. First, low achievers were more sensitive to the plural suffix -s than to the past tense suffix -ed; middle achievers exhibited a greater responsiveness to the past tense suffix, while high achievers demonstrated equal sensitivity to both suffixes. Second, two strategies used by EFL learners to produce verbs and nouns with inflectional morphemes were to delete an internal syllable and to divide a four-syllable verb (e.g., ‘graduated’) into two prosodic structures (e.g., ‘gradu’ and ‘ated’ or ‘gradua’ and ‘ted’).
Third, true vowel epenthesis was found only in the low EFL achievers. Fourth, fortition (a native-like sound) was observed in the low and middle EFL achievers. These findings and the self-repair data disclose the mediating processes between the developmental stages and provide insight into how Taiwanese EFL learners attain the adjunction prosodic structures of inflectional morphemes in English.

Keywords: inflectional morphology, prosodic structure, developmental trajectory, strategies and mediating processes, English as a foreign language

Procedia PDF Downloads 53
5210 Changing Left Ventricular Hypertrophy After Kidney Transplantation

Authors: Zohreh Rostami, Arezoo Khosravi, Mohammad Nikpoor Aghdam, Mahmood Salesi

Abstract:

Background: Cardiovascular mortality in chronic kidney disease (CKD) and end stage renal disease (ESRD) patients has a strong relationship with baseline or progressive left ventricular hypertrophy (LVH); meanwhile, in hemodialysis patients, a 10% decrement in left ventricular mass was associated with a 28% reduction in cardiovascular mortality risk. In consonance with these arguments, we designed a study to measure morphological and functional echocardiographic variations early after transplantation. Method: Patients with normal renal function underwent two advanced echocardiographic studies to examine the structural and functional changes in left ventricular mass before and 3 months after transplantation. Results: Of a total of 23 participants, 21 (91.3%) presented with left ventricular hypertrophy, 60.9% in the eccentric and 30.4% in the concentric group. Diastolic dysfunction improved in the concentric group after transplantation. Both pre- and post-transplantation, the average global longitudinal strain (GLS) in the eccentric group was greater than in the concentric group (-17.45 ± 2.75 vs -14.3 ± 3.38, p=0.03, and -18.08 ± 2.6 vs -16.1 ± 2.7, p=0.04, respectively). Conclusion: Improvement and recovery of left ventricular function in the concentric group was better and faster than in the eccentric group after kidney transplantation. Although fractional shortening, diastolic function and GLS-4C before transplantation were worse in the concentric group than in the eccentric group, the therapeutic response to kidney transplantation in the concentric group was greater and earlier.

Keywords: chronic kidney disease, end stage renal disease, left ventricular hypertrophy, global longitudinal strain

Procedia PDF Downloads 43
5209 Microgrid Design Under Optimal Control With Batch Reinforcement Learning

Authors: Valentin Père, Mathieu Milhé, Fabien Baillon, Jean-Louis Dirion

Abstract:

Microgrids offer potential solutions to meet the need for local grid stability and to increase the autonomy of isolated networks through the integration of intermittent renewable energy production and storage facilities. In such a context, sizing production and storage for a given network is a complex task, highly dependent on input data such as the power load profile and renewable resource availability. This work aims at developing an operating cost computation methodology for different microgrid designs, based on the use of deep reinforcement learning (RL) algorithms to tackle the optimal operation problem in stochastic environments. RL is a data-based sequential decision control method, based on Markov decision processes, that enables the consideration of random variables for control at a chosen time scale. Agents trained via RL constitute a promising class of Energy Management Systems (EMS) for the operation of microgrids with energy storage. Microgrid sizing (or design) is generally performed by minimizing investment costs and the operational costs arising from the EMS behavior. The latter might include economic aspects (power purchase, facilities aging), social aspects (load curtailment), and ecological aspects (carbon emissions). The sizing variables are related to major constraints on the optimal operation of the network by the EMS. In this work, an islanded-mode microgrid is considered. Renewable generation is done with photovoltaic panels; an electrochemical battery ensures short-term electricity storage. The controllable unit is a hydrogen tank that is used as a long-term storage unit. The proposed approach focuses on the transfer of agent learning for near-optimal operating cost approximation with deep RL for each microgrid size. Like most data-based algorithms, the training step in RL requires substantial computation time.
The objective of this work is thus to study the potential of Batch-Constrained Q-learning (BCQ) for the optimal sizing of microgrids and especially to reduce the computation time of operating cost estimation in several microgrid configurations. BCQ is an off-line RL algorithm that is known to be data efficient and can learn better policies than on-line RL algorithms on the same buffer. The general idea is to use the learned policy of agents trained in similar environments to constitute a buffer. The latter is used to train BCQ, and thus the agent learning can be performed without update during interaction sampling. A comparison between online RL and the presented method is performed based on the score by environment and on the computation time.

Keywords: batch-constrained reinforcement learning, control, design, optimal

Procedia PDF Downloads 107
5208 The Impact of Automation on Supply Chain Management in West Africa

Authors: Nwauzoma Ohale Rowland, Bright Ugochukwu Umunna

Abstract:

The world has been referred to as a global village for decades, adapting various technological and digital innovations to progress along the lines of development. Different continents have fully automated processes and procedures in the various sectors of their economies. This paper attempts to ascertain why the West African sub-continent, while displaying a slow progression, has also joined the race toward fully automated processes, albeit only in certain areas of its economy. Different reasons for this have been posited and are discussed in this work, including high illiteracy rates and poor acceptance of new technologies. Studies were carried out that involved interactions with different business sectors, as well as a secondary-level investigation of experiments, to ascertain the impact of automation in supply chain management on the West African market. Our reports show remarkable growth in businesses and sectors that have automated their processes. At the same time, other results have also confirmed that, due to the high illiteracy rates, the labour force has been affected.

Keywords: Africa, automation, business, innovation, supply chain management, technology

Procedia PDF Downloads 82
5207 Effects of Preparation Conditions on the Properties of Crumb Rubber Modified Binder

Authors: Baha Vural Kök, Mehmet Yilmaz, Mustafa Akpolat, Cihat Sav

Abstract:

Various types of additives are used frequently in order to improve the rheological and mechanical properties of bituminous mixtures. Small devices, instead of full-scale machines, are used for bitumen modification in the laboratory. These laboratory-scale devices vary in terms of their properties, such as mixing rate, mixing blade and the amount of binder. In this study, the effect of mixing rate and time during the bitumen modification process on the conventional and rheological properties of pure and crumb rubber (CR) modified binder was investigated. Penetration, softening point, rotational viscosity (RV) and dynamic shear rheometer (DSR) tests were applied to pure and CR modified bitumen. It was concluded that the penetration and softening point tests did not show the efficiency of the CR modification obtained under different mixing conditions. Besides, the oxidation that occurred during the preparation process plays a great part in the improvement effects of the modified binder.

Keywords: bitumen, crumb rubber, modification, rheological properties

Procedia PDF Downloads 295
5206 Optimal Design of Submersible Permanent Magnet Linear Synchronous Motor Based Design of Experiment and Genetic Algorithm

Authors: Xiao Zhang, Wensheng Xiao, Junguo Cui, Hongmin Wang

Abstract:

Submersible permanent magnet linear synchronous motors (SPMLSMs) are electromagnetic devices which can directly drive a plunger pump to obtain crude oil. These motors have gradually been applied in oil fields due to their high thrust force density and high efficiency. Since the force performance closely depends on the concrete structural parameters, seven different structural parameters are investigated in detail. This paper presents an optimum design of an SPMLSM to minimize the detent force and maximize the thrust by using design of experiment (DOE) and a genetic algorithm (GA). The seven parameters used in this research are screened using a 2⁷ 1/16 fractional factorial design (FFD) to investigate their significant effects on the force performance, from which three significant structural parameters (air-gap length, slot width, pole-arc coefficient) are identified. Response surface methodology (RSM) is well adapted to building analytical models of the thrust and detent force under constraints of the corresponding significant parameters, enabling the objective functions to be easily created. The GA is employed as a searching tool to search for the Pareto-optimal solutions. Finite element analysis shows that the proposed SPMLSM has merits in improving the thrust and dramatically reducing the detent force.

Keywords: optimization, force performance, design of experiment (DOE), genetic algorithm (GA)

Procedia PDF Downloads 275
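The GA search stage can be sketched on a hypothetical smooth surrogate of the detent force over the three named parameters (air-gap length, slot width, pole-arc coefficient). The surrogate function, bounds and target values below are invented for illustration; the paper's RSM models and finite element evaluations are not reproduced.

```python
# Minimal GA (elitism + averaging crossover + Gaussian mutation) minimizing
# a made-up quadratic surrogate of detent force. All numbers are illustrative.
import random
rng = random.Random(42)

TARGET = [1.2, 6.0, 0.72]                 # hypothetical optimal (gap, slot, pole-arc)
def detent(params):
    return sum((p - t) ** 2 for p, t in zip(params, TARGET))

BOUNDS = [(0.5, 2.0), (4.0, 8.0), (0.6, 0.9)]
def rand_ind():
    return [rng.uniform(lo, hi) for lo, hi in BOUNDS]

pop = [rand_ind() for _ in range(40)]
for _ in range(100):
    pop.sort(key=detent)
    elite = pop[:8]                        # elitism: best designs survive intact
    children = []
    while len(children) < 32:
        a, b = rng.sample(elite, 2)        # parents drawn from the elite pool
        child = [(x + y) / 2 for x, y in zip(a, b)]           # crossover
        child = [min(max(g + rng.gauss(0, 0.05), lo), hi)     # mutation + clip
                 for g, (lo, hi) in zip(child, BOUNDS)]
        children.append(child)
    pop = elite + children
best = min(pop, key=detent)
```

In the paper's setting, `detent` would be replaced by the RSM analytical model fitted to finite element runs, and a multi-objective GA would trade detent force against thrust.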
5205 Recursive Doubly Complementary Filter Design Using Particle Swarm Optimization

Authors: Ju-Hong Lee, Ding-Chen Chung

Abstract:

This paper deals with the optimal design of recursive doubly complementary (DC) digital filters using a metaheuristic optimization technique. Based on the theory of DC digital filters using two recursive digital all-pass filters (DAFs), the design problem is formulated so as to result in an objective function which is a weighted sum of the phase response errors of the designed DAFs. To deal with the stability of the recursive DC filters during the design process, we impose some necessary constraints on the phases of the recursive DAFs. Through frequency sampling and a weighted least squares approach, the optimization problem of the objective function can be solved by utilizing a population-based stochastic optimization approach. The resulting DC digital filters possess a satisfactory frequency response. Simulation results are presented for illustration and comparison.

Keywords: doubly complementary, digital all-pass filter, weighted least squares algorithm, particle swarm optimization

Procedia PDF Downloads 667
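The population-based search can be illustrated with a bare-bones PSO loop. A simple sum-of-squares function stands in for the paper's weighted phase-error objective; the inertia and acceleration coefficients are typical textbook values, not those of the paper.

```python
# Bare-bones particle swarm optimization; the sphere function is a stand-in
# for the weighted phase-error objective of the two all-pass filters.
import random
rng = random.Random(7)

def objective(x):
    return sum(xi * xi for xi in x)

DIM, SWARM, ITERS = 4, 20, 200
W, C1, C2 = 0.7, 1.5, 1.5                  # inertia, cognitive, social weights
pos = [[rng.uniform(-3, 3) for _ in range(DIM)] for _ in range(SWARM)]
vel = [[0.0] * DIM for _ in range(SWARM)]
pbest = [p[:] for p in pos]                # each particle's best position
gbest = min(pbest, key=objective)[:]       # swarm's best position

for _ in range(ITERS):
    for i in range(SWARM):
        for d in range(DIM):
            vel[i][d] = (W * vel[i][d]
                         + C1 * rng.random() * (pbest[i][d] - pos[i][d])
                         + C2 * rng.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if objective(pos[i]) < objective(pbest[i]):
            pbest[i] = pos[i][:]
            if objective(pbest[i]) < objective(gbest):
                gbest = pbest[i][:]
```

For the filter problem, each particle would encode all-pass coefficients, and stability would be enforced through the phase constraints mentioned in the abstract.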
5204 Designing Agile Product Development Processes by Transferring Mechanisms of Action Used in Agile Software Development

Authors: Guenther Schuh, Michael Riesener, Jan Kantelberg

Abstract:

Due to the volatility of markets and the reduction of product lifecycles, manufacturing companies from high-wage countries nowadays face the challenge of placing more innovative products on the market within ever shorter development times. At the same time, volatile customer requirements have to be satisfied in order to successfully differentiate from market competitors. One potential approach to address these challenges is provided by agile values and principles. These agile values and principles have already proved their success within software development projects, in the form of management frameworks like Scrum or concrete procedure models such as Extreme Programming or Crystal Clear. Those models lead to significant improvements regarding quality, costs and development time and are therefore used within most software development projects. Motivated by this success within the software industry, manufacturing companies have since tried to transfer agile mechanisms of action to the development of hardware products. Though first empirical studies show similar effects in the agile development of hardware products, no comprehensive procedure model for the design of development iterations has yet been developed for hardware development, due to the different constraints of the domains. For this reason, this paper focuses on the design of agile product development processes by transferring mechanisms of action used in agile software development towards product development. This is conducted by decomposing the individual systems 'product development' and 'agile software development' into relevant elements and then symbiotically composing the elements of both systems with respect to the design of agile product development processes. In a first step, existing product development processes are described following existing approaches of systems theory.
By analyzing existing case studies from industrial companies as well as academic approaches, characteristic objectives, activities and artefacts are identified within a target-, action- and object-system. In partial model two, mechanisms of action are derived from existing procedure models of agile software development. These mechanisms of action are classified in a superior strategy level, in a system level comprising characteristic, domain-independent activities and their cause-effect relationships, as well as in an activity-based element level. Within partial model three, the influence of the identified agile mechanisms of action on the characteristic system elements of product development processes is analyzed. For this reason, the target-, action- and object-systems of product development are compared with the strategy, system and element levels of the agile mechanisms of action by using graph theory. Furthermore, the necessity of the existence of activities within an iteration can be determined by defining activity-specific degrees of freedom. Based on this analysis, agile product development processes are designed in the form of different types of iterations in a last step. By defining iteration-differentiating characteristics and their interdependencies, a logic is developed for the configuration of activities, their form of execution, as well as the relevant artefacts for the specific iteration. Furthermore, characteristic types of iteration for agile product development are identified.

Keywords: activity-based process model, agile mechanisms of action, agile product development, degrees of freedom

Procedia PDF Downloads 189
5203 Fenton Sludge's Catalytic Ability with Synergistic Effects During Reuse for Landfill Leachate Treatment

Authors: Mohd Salim Mahtab, Izharul Haq Farooqi, Anwar Khursheed

Abstract:

Advanced oxidation processes (AOPs) based on the Fenton reaction are versatile options for treating complex wastewaters containing refractory compounds. However, the classical Fenton process (CFP) has limitations, such as high sludge production and reagent dosage, which limit its broad use and result in secondary contamination. As a result, long-term solutions are required for process intensification and the removal of these impediments. This study shows that Fenton sludge can serve as a catalyst in the Fe³⁺/Fe²⁺ reductive pathway, allowing non-regenerated sludge to be reused for complex wastewater treatment, such as landfill leachate treatment, even in the absence of Fenton's reagents. Experiments with and without pH adjustment in stages I and II demonstrated that an acidic pH is desirable. Humic compounds in the leachate can improve the Fe³⁺/Fe²⁺ cycle under optimal conditions, and the chemical oxygen demand (COD) removal efficiency was 22±2% and 62±2% in stages I and II, respectively. Furthermore, excellent total suspended solids (TSS) removal (> 95%) and color removal (> 80%) were obtained in stage II. The processes underlying the synergistic (oxidation/coagulation/adsorption) effects were addressed. Design of experiments (DOE) is growing increasingly popular and has thus been implemented in the chemical, water, and environmental domains. The relevance of the statistical model for the desired response was validated using the explicitly stated optimal conditions. The operational factors, characteristics of the reused sludge, toxicity analysis, cost calculation, and future research objectives are also discussed. Reusing non-regenerated Fenton sludge, according to the study's findings, can minimize hazardous solid toxic emissions and total treatment costs.

Keywords: advanced oxidation processes, catalysis, Fe³⁺/Fe²⁺ cycle, fenton sludge

Procedia PDF Downloads 76
5202 Automated Process Quality Monitoring and Diagnostics for Large-Scale Measurement Data

Authors: Hyun-Woo Cho

Abstract:

Continuous monitoring of industrial plants is one of the necessary tasks when it comes to ensuring high-quality final products. In terms of monitoring and diagnosis, it is critical to detect incipient abnormal events in manufacturing processes in order to improve the safety and reliability of the operations involved and to reduce related losses. In this work, a new multivariate statistical online diagnostic method is presented using a case study. To build reference models, an empirical discriminant model is constructed based on various past operation runs. When a fault is detected on-line, an on-line diagnostic module is initiated. Finally, the status of the current operating conditions is compared with the reference model to make a diagnostic decision. The performance of the presented framework is evaluated using a dataset from complex industrial processes. It is shown that the proposed diagnostic method outperforms other techniques, especially in terms of the early detection of incipient faults.

Keywords: data mining, empirical model, on-line diagnostics, process fault, process monitoring

Procedia PDF Downloads 389
5201 Advantages of Fuzzy Control Application in Fast and Sensitive Technological Processes

Authors: Radim Farana, Bogdan Walek, Michal Janosek, Jaroslav Zacek

Abstract:

This paper presents the advantages of using fuzzy control in the control of technological processes. It describes a real application of Linguistic Fuzzy-Logic Control (LFLC), developed at the University of Ostrava for the control of physical models in the Intelligent Systems Laboratory. The paper presents an example of a sensitive non-linear model, a magnetic levitation model, and the obtained results, which show how modern information technologies can help to solve real technical problems. A special method based on the LFLC controller with partial components is presented, followed by a method of automatic context change, which is very helpful for achieving more accurate control results. The main advantage of the system used is its robustness under changing conditions, demonstrated by comparison with a conventional PID controller. This technology and the real models are also used as a background for problem-oriented teaching, realized at the department for master's students and their collaborative as well as individual final projects.
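As a rough illustration of the rule-based principle behind a linguistic fuzzy controller (not the actual LFLC implementation), the sketch below evaluates three hypothetical linguistic terms for the control error with triangular membership functions and a zero-order Sugeno-style weighted-average defuzzification; all ranges and rule outputs are invented.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Linguistic terms for the control error (hypothetical ranges)
error_sets = {
    "negative": lambda e: tri(e, -2.0, -1.0, 0.0),
    "zero":     lambda e: tri(e, -1.0, 0.0, 1.0),
    "positive": lambda e: tri(e, 0.0, 1.0, 2.0),
}

# Rule base: each error term maps to a crisp control action
rule_outputs = {"negative": -1.0, "zero": 0.0, "positive": 1.0}

def fuzzy_control(error):
    """Weighted-average defuzzification over the fired rules."""
    weights = {term: mf(error) for term, mf in error_sets.items()}
    total = sum(weights.values())
    if total == 0.0:
        return 0.0
    return sum(weights[t] * rule_outputs[t] for t in weights) / total

print(fuzzy_control(0.5))   # halfway between "zero" and "positive" -> 0.5
```

The "automatic context change" mentioned above would correspond to rescaling the membership ranges as the operating region shifts.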

Keywords: control, fuzzy logic, sensitive system, technological processes

Procedia PDF Downloads 452
5200 Investigation on the Changes in the Chemical Composition and Ecological State of Soils Contaminated with Heavy Metals

Authors: Metodi Mladenov

Abstract:

Heavy-metal contamination of soils is a serious problem, mainly as a result of industrial production. Of particular interest are therefore processes for the decontamination of soils, so that crops with a low heavy-metal content, suitable for consumption by animals and people, can be produced. This article presents data on the changes in the chemical composition and ecological state of soils contaminated by non-ferrous metallurgy manufacturing, over a seven-year period. The alteration of pH, conductivity, and the content of the following elements was investigated: As, Cd, Cu, Cr, Ni, Pb, Zn, Co, Mn and Al. Visual observations of the recovery of the root-inhabitable soil layer and of reforestation were also carried out. The obtained data show favourable changes in the investigated indicators pH and conductivity, and a decrease in the content of some of the analyzed elements. Visual observations show an expansion of plant-covered areas and a change in species structure, with an increasing number of shrub and tree specimens.

Keywords: conductivity, contamination of soils, chemical composition, inductively coupled plasma–optical emission spectrometry, heavy metals, visual observation

Procedia PDF Downloads 158
5199 Modelling Fluidization by Data-Based Recurrence Computational Fluid Dynamics

Authors: Varun Dongre, Stefan Pirker, Stefan Heinrich

Abstract:

Over the last decades, the numerical modelling of fluidized bed processes has become feasible even for industrial processes. Commonly, continuous two-fluid models are applied to describe large-scale fluidization. In order to allow for coarse grids, novel two-fluid models account for unresolved sub-grid heterogeneities. However, computational efforts remain high, in the order of several hours of compute time for a few seconds of real time, thus preventing the representation of long-term phenomena such as heating or particle conversion processes. In order to overcome this limitation, data-based recurrence computational fluid dynamics (rCFD) has been put forward in recent years. rCFD can be regarded as a data-based method that relies on the numerical predictions of a conventional short-term simulation. This data is stored in a database and then used by rCFD to efficiently time-extrapolate the flow behavior in high spatial resolution. This study compares the numerical predictions of rCFD simulations with those of corresponding full CFD reference simulations for lab-scale and pilot-scale fluidized beds. In assessing the predictive capabilities of rCFD simulations, we focus on solid mixing and secondary gas holdup. We observed that predictions made by rCFD simulations are highly sensitive to numerical parameters such as the diffusivity associated with face swaps. We achieved a computational speed-up of four orders of magnitude (10,000 times faster than a classical TFM simulation), eventually allowing for real-time simulations of fluidized beds. In the next step, we apply the checkerboarding technique by introducing gas tracers subjected to convection and diffusion. We then analyze the concentration profiles, observing the mixing and transport of the gas tracers, gaining insights into their convective and diffusive patterns, and moving further towards heat and mass transfer methods.
Finally, we run rCFD simulations and calibrate them, in terms of numerical and physical parameters, against a conventional two-fluid model (full CFD) simulation. As a result, this study gives a clear indication of the applicability, predictive capabilities, and existing limitations of rCFD in the realm of fluidization modelling.
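The time-extrapolation idea of rCFD can be caricatured as follows: snapshots from a short conventional simulation form a database, recorded transitions are replayed from it, and when the recorded sequence runs out, the method jumps to the most similar earlier state (a recurrence) and continues. Everything below (field size, frame count, similarity norm) is a hypothetical toy, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Database of flow-state snapshots from a short conventional CFD run
# (hypothetical: 200 frames of a 50-cell field)
database = rng.random((200, 50))

def nearest_frame(state, frames):
    """Index of the database frame most similar to the current state."""
    dists = np.linalg.norm(frames - state, axis=1)
    return int(np.argmin(dists))

def rcfd_extrapolate(n_steps, start_idx=0):
    """Time-extrapolate by replaying recorded transitions; when the end of
    the database is reached, jump to the most similar earlier frame
    (a recurrence) and continue replaying from there."""
    idx = start_idx
    trajectory = [database[idx]]
    for _ in range(n_steps):
        if idx + 1 >= len(database):
            # recurrence jump: search among earlier frames only
            idx = nearest_frame(database[idx], database[:-1])
        else:
            idx += 1
        trajectory.append(database[idx])
    return np.array(trajectory)

traj = rcfd_extrapolate(1000)
print(traj.shape)  # (1001, 50): 1000 extrapolated steps from 200 stored frames
```

The speed-up arises because each extrapolated step is a database lookup instead of a full flow solve.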

Keywords: multiphase flow, recurrence CFD, two-fluid model, industrial processes

Procedia PDF Downloads 56
5198 Anaerobic Co-digestion in Two-Phase TPAD System of Sewage Sludge and Fish Waste

Authors: Rocio López, Miriam Tena, Montserrat Pérez, Rosario Solera

Abstract:

The biotransformation of organic waste into biogas is considered an interesting alternative for the production of clean energy from renewable sources, reducing the volume and organic content of the waste. Anaerobic digestion is considered one of the most efficient technologies to transform waste into fertilizer and biogas in order to obtain electrical energy or biofuel within the concept of the circular economy. Currently, three types of anaerobic processes have been developed on a commercial scale: (1) single-stage processes, where sludge bioconversion is completed in a single chamber; (2) two-stage processes, where the acidogenic and methanogenic stages are separated into two chambers; and, finally, (3) temperature-phased anaerobic digestion (TPAD) processes, which combine a thermophilic pretreatment unit with a downstream mesophilic anaerobic digestion unit. Two-stage processes can provide hydrogen and methane with easier control of the first- and second-stage conditions, producing higher total energy recovery and substrate degradation than single-stage processes. On the other hand, co-digestion is the simultaneous anaerobic digestion of a mixture of two or more substrates. The technology is similar to anaerobic digestion but is a more attractive option, as it produces increased methane yields due to the positive synergism of the mixtures in the digestion medium, thus increasing the economic viability of biogas plants. The present study focuses on energy recovery by anaerobic co-digestion of sewage sludge and waste from the aquaculture-fishing sector. The valorization is approached through the application of TPAD (Temperature-Phased Anaerobic Digestion) technology. Moreover, a two-phase separation of the microorganisms is considered. Thus, the selected process allows the development of a thermophilic acidogenic phase followed by a mesophilic methanogenic phase, obtaining hydrogen (H₂) in the first stage and methane (CH₄) in the second stage.
The combination of these technologies makes it possible to unify all the advantages of the individual anaerobic digestion processes. To achieve these objectives, a sequential study has been carried out in which the biochemical hydrogen potential (BHP) is tested, followed by a BMP test, which allows the feasibility of the two-stage process to be checked. The best results obtained were high total and soluble COD yields (59.8% and 82.67%, respectively), as well as a H₂ production rate of 12 L H₂/kg VS added and a methane production rate of 28.76 L CH₄/kg VS added for TPAD.

Keywords: anaerobic co-digestion, TPAD, two-phase, BHP, BMP, sewage sludge, fish waste

Procedia PDF Downloads 137
5197 Experimental Study of Hydrogen and Water Vapor Extraction from Helium with Zeolite Membranes for Tritium Processes

Authors: Rodrigo Antunes, Olga Borisevich, David Demange

Abstract:

The Tritium Laboratory Karlsruhe (TLK) has identified zeolite membranes as most promising for tritium processes in future fusion reactors. Tritium diluted in purge gases or gaseous effluents, present in both molecular and oxidized forms, can be pre-concentrated by a stage of zeolite membranes followed by a main downstream recovery stage (e.g., a catalytic membrane reactor). Since 2011, several zeolite membrane samples have been tested to measure their performance in the separation of hydrogen and water vapor from helium streams. These experiments were carried out in the ZIMT (Zeolite Inorganic Membranes for Tritium) facility, where mass spectrometry and cold traps were used to measure the membranes' performance. The membranes were tested at temperatures ranging from 25 °C up to 130 °C, at feed pressures between 1 and 3 bar, and at typical feed flows of 2 l/min. During this experimental campaign, several zeolite-type membranes were studied: a hollow-fiber MFI nanocomposite membrane purchased from IRCELYON (France), and tubular MFI-ZSM5, NaA and H-SOD membranes purchased from the Institute for Ceramic Technologies and Systems (IKTS, Germany). Among these membranes, only the MFI-based ones showed relevant performance for H2/He separation, with rather high permeances (~0.5–0.7 μmol/(s·m²·Pa) for H2 at 25 °C for MFI-ZSM5), however with a limited ideal selectivity of around 2 for H2/He, regardless of the feed concentration. Both the MFI and NaA membranes showed higher separation performance when water vapor was used instead; for example, at 30 °C, the separation factor for MFI-ZSM5 is approximately 10 and 38 for 0.2% and 10% H2O/He, respectively. The H-SOD membrane proved to be considerably defective and was therefore not considered for further experiments. In this contribution, a comprehensive analysis is given of the experimental methods and results obtained for the separation performance of the different zeolite membranes during the past four years in an inactive environment.
These results are encouraging for the experimental campaign with molecular and oxidized tritium that will follow in 2017.
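For reference, the figures of merit quoted above follow directly from their definitions: permeance is molar flow through the membrane per unit area and pressure difference, and ideal selectivity is the ratio of single-gas permeances. A minimal sketch with hypothetical flows and geometry chosen to reproduce the reported MFI-ZSM5 values:

```python
def permeance(molar_flow_mol_s, area_m2, dp_pa):
    """Permeance = molar flow per unit membrane area per unit pressure difference."""
    return molar_flow_mol_s / (area_m2 * dp_pa)

# Hypothetical single-gas measurements consistent with the reported MFI-ZSM5
# values (H2 permeance ~0.6 umol/(s m^2 Pa), H2/He ideal selectivity ~2)
perm_h2 = permeance(6.0e-3, 0.01, 1.0e6)   # -> 0.6 umol/(s m^2 Pa)
perm_he = permeance(3.0e-3, 0.01, 1.0e6)   # -> 0.3 umol/(s m^2 Pa)

ideal_selectivity = perm_h2 / perm_he
print(round(ideal_selectivity, 2))  # 2.0
```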

Keywords: gas separation, nuclear fusion, tritium processes, zeolite membranes

Procedia PDF Downloads 238
5196 Notched Bands in Ultra-Wideband UWB Filter Design for Advanced Wireless Applications

Authors: Abdul Basit, Amil Daraz, Guoqiang Zhang

Abstract:

With the increasing demand for wireless communication systems for unlicensed indoor applications, the FCC, in February 2002, allocated the unlicensed band ranging from 3.1 GHz to 10.6 GHz, with a fractional bandwidth of about 109%. This band plays a key role in radio-frequency (RF) front-end devices and has been widely applied in many other microwave circuits. Targeting the band defined by the FCC for the UWB system, this article presents a UWB bandpass filter with three stop bands for the mitigation of wireless bands that may interfere with the UWB range. For this purpose, two resonators are utilized for the implementation of the triple-notched bands. The C-shaped resonator is used to create the first notch band at 3.4 GHz to suppress the WiMAX signal, while the H-shaped resonator is employed in the initial UWB design to introduce the dual notched characteristic at 4.5 GHz and 8.1 GHz to reject the WLAN and satellite communication signals. The overall circuit area covered by the proposed design is 30.6 mm × 20 mm, or, in terms of the guided wavelength at the first stopband, 0.06 λg × 0.02 λg. The presented structure shows a good return loss, under -10 dB over most of the passband and greater than -15 dB for the notched frequency bands. Finally, the filter is simulated and analyzed in HFSS 15.0. All the bands for the rejection of wireless signals are independently controlled, which makes this work superior to the other UWB filters presented in the literature.
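The quoted ~109% figure follows from the standard definition of fractional bandwidth, 2(f_H - f_L)/(f_H + f_L), i.e., the bandwidth relative to the band's centre frequency:

```python
def fractional_bandwidth(f_low_ghz, f_high_ghz):
    """Fractional bandwidth: band width divided by the centre frequency."""
    return 2.0 * (f_high_ghz - f_low_ghz) / (f_high_ghz + f_low_ghz)

fbw = fractional_bandwidth(3.1, 10.6)
print(round(100 * fbw, 1))  # 109.5 (percent), matching the "about 109%" in the text
```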

Keywords: bandpass filter (BPF), ultra-wideband (UWB), wireless communication, C-shaped resonator, triple notch

Procedia PDF Downloads 64
5195 Comparative Analysis of Costs and Well Drilling Techniques for Water, Geothermal Energy, Oil and Gas Production

Authors: Thales Maluf, Nazem Nascimento

Abstract:

The development of society relies heavily on the total amount of energy obtained and its consumption. Over the years, there has been an advancement in energy attainment, which is directly related to certain natural resources and developing systems. Some of these resources should be highlighted for their remarkable presence in the world's energy grid, such as water, petroleum, and gas, while others deserve attention for representing an alternative to diversify the energy grid, like geothermal sources. Because all these resources are extracted from the underground, drilling wells is a mandatory activity for their exploration, involving a prior geological study and adequate preparation. It also involves a cleaning process and an extraction process that can be executed by different procedures. For that reason, this research aims at the enhancement of exploration processes through a comparative analysis of drilling costs and the techniques used. The analysis itself is based on a bibliographical review of books, scientific papers, and academic works, and mainly explores drilling methods and technologies, the equipment used, well measurements, extraction methods, and production costs. Besides the techniques and costs of the drilling processes, some properties and general characteristics of these sources are also compared. Preliminary studies show that there are major differences in the exploration processes, mostly because these resources are naturally distinct. Water wells, for instance, have lengths of hundreds of meters because water is stored close to the surface, while oil, gas, and geothermal production wells can reach thousands of meters, which makes them more expensive to drill. The drilling methods present some general similarities, especially regarding the main mechanism of perforation, but since water is stored closer to the surface than the other resources, there is a wider variety of methods.
Water wells can be drilled by rotary mechanisms, percussion mechanisms, rotary-percussion mechanisms, and some other simpler methods. Oil and gas production wells, on the other hand, require rotary or rotary-percussion drilling with a proper structure called a drill rig and resistant materials for the drill bits and the other components, mostly because they are stored in sedimentary basins that can be located thousands of meters underground. Geothermal production wells also require rotary or rotary-percussion drilling, as well as the existence of an injection well and an extraction well. The exploration efficiency also depends on the permeability of the soil, which is why Enhanced Geothermal Systems (EGS) have been developed. Throughout this review study, it can be verified that the analysis of the extraction processes of energy resources is essential, since these resources are responsible for the development of society. Furthermore, the comparative analysis of costs and well-drilling techniques for water, geothermal energy, oil, and gas production, which is the main goal of this research, can enable the growth of the energy generation field through the emergence of ideas that improve the efficiency of energy generation processes.

Keywords: drilling, water, oil, gas, geothermal energy

Procedia PDF Downloads 130
5194 Optimal Bayesian Chart for Controlling Expected Number of Defects in Production Processes

Authors: V. Makis, L. Jafari

Abstract:

In this paper, we develop an optimal Bayesian chart to control the expected number of defects per inspection unit in production processes with long production runs. We formulate this control problem in the optimal stopping framework. The objective is to determine the optimal stopping rule minimizing the long-run expected average cost per unit time considering partial information obtained from the process sampling at regular epochs. We prove the optimality of the control limit policy, i.e., the process is stopped and the search for assignable causes is initiated when the posterior probability that the process is out of control exceeds a control limit. An algorithm in the semi-Markov decision process framework is developed to calculate the optimal control limit and the corresponding average cost. Numerical examples are presented to illustrate the developed optimal control chart and to compare it with the traditional u-chart.
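The control-limit stopping rule described above can be sketched with a toy Bayesian update. The paper's full semi-Markov formulation is not reproduced here; the example below assumes Poisson-distributed defect counts per inspection unit, hypothetical in-control and out-of-control rates, a per-epoch shift probability, and an arbitrary control limit of 0.9.

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    return lam ** k * exp(-lam) / factorial(k)

def posterior_update(pi, k, lam0=2.0, lam1=5.0, p_shift=0.05):
    """One Bayesian update of the out-of-control probability after observing
    k defects in the current inspection unit."""
    # Prior for this epoch: either already out of control, or the process
    # shifts now with probability p_shift.
    prior = pi + (1.0 - pi) * p_shift
    num = prior * poisson_pmf(k, lam1)
    return num / (num + (1.0 - prior) * poisson_pmf(k, lam0))

# Control-limit stopping rule: stop and search for assignable causes as soon
# as the posterior exceeds the limit (limit value is arbitrary here).
pi, limit = 0.0, 0.9
for k in [2, 1, 3, 6, 7, 8]:        # hypothetical defect counts per unit
    pi = posterior_update(pi, k)
    if pi > limit:
        print("stop after observing", k, "defects")
        break
```

In the paper, the limit itself is not arbitrary but is optimized within a semi-Markov decision process to minimize the long-run average cost.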

Keywords: Bayesian u-chart, economic design, optimal stopping, semi-Markov decision process, statistical process control

Procedia PDF Downloads 557
5193 Requirements Engineering via Controlling Actors Definition for the Organizations of European Critical Infrastructure

Authors: Jiri F. Urbanek, Jiri Barta, Oldrich Svoboda, Jiri J. Urbanek

Abstract:

The organizations of European and Czech critical infrastructure have a specific position, mission, characteristics and behaviour in the European Union and Czech state/business environments, with specific requirements regarding regional and global security. They must respect national security policy and global rules, requirements and standards in all the internal and external processes of their supply-customer chains and networks. Controlling is a generalized capability to maintain control over situational policy. The aim and purpose of this paper is to introduce controlling as a new and necessary process attribute, providing the critical-infrastructure environment with the capability and benefit of achieving its commitments regarding the effectiveness of the quality management system in meeting customer/user requirements, as well as the continual improvement of the overall performance and efficiency of the critical infrastructure organization's processes and its societal security, via continual planning improvement using DYVELOP modelling.

Keywords: added value, DYVELOP, controlling, environments, process approach

Procedia PDF Downloads 393
5192 Development and Characterisation of Nonwoven Fabrics for Apparel Applications

Authors: Muhammad Cheema, Tahir Shah, Subhash Anand

Abstract:

The cost of making apparel fabrics for garment manufacturing is very high because of the conventional manufacturing processes involved, and new methods and processes for making fabrics by unconventional means are constantly being developed. With the advancements in technology and the availability of innovative fibres, durable nonwoven fabrics that can compete with woven fabrics in terms of their aesthetic and tensile properties are being developed using the hydroentanglement process. In the work reported here, hydroentangled nonwoven fabrics were developed through a hybrid nonwoven manufacturing process using fibrillated Tencel® and bi-component (sheath/core) polyethylene/polyester (PE/PET) fibres, in which the initial nonwoven fabrics were prepared by the needle-punching method, followed by a hydroentanglement process carried out at optimal pressures of 50 to 250 bar. The prepared fabrics were characterized according to the British Standards (BS 3356:1990, BS 9237:1995, BS 13934-1:1999), and the results were compared with those for a standard plain-weave cotton fabric, a woven polyester fabric and a commercially available nonwoven fabric (Evolon®). The developed hydroentangled fabrics showed better drape properties, with a flexural rigidity of 252 mg.cm in the machine direction, whereas the corresponding commercial hydroentangled fabric displayed a value of 1340 mg.cm in the machine direction. The tensile strength of the developed hydroentangled fabrics showed an increase of approximately 200% over the commercial hydroentangled fabric. Similarly, the developed hydroentangled fabrics showed higher air permeability: the developed fabric exhibited 448 mm/sec, while the Evolon fabric exhibited 69 mm/sec at 100 Pa pressure.
Thus, for apparel fabrics, the work combining the existing methods of nonwoven production provides additional benefits in terms of cost and time, and also helps in reducing the carbon footprint of apparel fabric manufacture.

Keywords: hydroentanglement, nonwoven apparel, durable nonwoven, wearable nonwoven

Procedia PDF Downloads 245
5191 Optimal Maintenance Policy for a Three-Unit System

Authors: A. Abbou, V. Makis, N. Salari

Abstract:

We study the condition-based maintenance (CBM) problem of a system subject to stochastic deterioration. The system is composed of three units (or modules): (i) Module 1 deterioration follows a Markov process with two operational states and one failure state. The operational states are partially observable through periodic condition monitoring. (ii) Module 2 deterioration follows a Gamma process with a known failure threshold. The deterioration level of this module is fully observable through periodic inspections. (iii) Only operating-age information is available for Module 3. The lifetime of this module has a general distribution. A CBM policy prescribes when to initiate a maintenance intervention and which modules to repair during the intervention. Our objective is to determine the optimal CBM policy minimizing the long-run expected average cost of operating the system. This is achieved by formulating a Markov decision process (MDP) and developing the value iteration algorithm for solving the MDP. We provide numerical examples illustrating the cost-effectiveness of the optimal CBM policy through a comparison with heuristic policies commonly found in the literature.
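The value iteration algorithm mentioned above can be illustrated on a toy problem. The example below is not the paper's three-module model: it uses a single hypothetical three-state deterioration chain, two actions, invented costs and transition probabilities, and a discounted-cost (rather than average-cost) criterion for simplicity.

```python
import numpy as np

# Hypothetical 3-state deterioration chain (0 = good, 1 = degraded, 2 = failed)
# with two actions: 0 = continue operating, 1 = maintain (renews to "good").
P = {
    0: np.array([[0.80, 0.15, 0.05],
                 [0.00, 0.70, 0.30],
                 [0.00, 0.00, 1.00]]),
    1: np.array([[1.0, 0.0, 0.0],
                 [1.0, 0.0, 0.0],
                 [1.0, 0.0, 0.0]]),
}
cost = {0: np.array([0.0, 2.0, 50.0]), 1: np.array([5.0, 5.0, 20.0])}
gamma = 0.95  # discount factor

V = np.zeros(3)
for _ in range(2000):
    Q = np.array([cost[a] + gamma * P[a] @ V for a in (0, 1)])
    V_new = Q.min(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new

policy = np.array([cost[a] + gamma * P[a] @ V for a in (0, 1)]).argmin(axis=0)
print(policy)  # keep running while "good", maintain once degraded or failed
```

The optimal policy here has a control-limit structure: intervene as soon as deterioration is observed, which mirrors the kind of CBM policy the paper computes.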

Keywords: reliability, maintenance optimization, Markov decision process, heuristics

Procedia PDF Downloads 204
5190 A Novel Meta-Heuristic Algorithm Based on Cloud Theory for Redundancy Allocation Problem under Realistic Condition

Authors: H. Mousavi, M. Sharifi, H. Pourvaziri

Abstract:

The Redundancy Allocation Problem (RAP) is a well-known mathematical problem for modeling series-parallel systems. It is a combinatorial optimization problem that focuses on determining an optimal assignment of components in a system design. In this paper, to be more practical, we have considered the redundancy allocation problem of a series system with interval-valued component reliabilities. Therefore, during the search process, the reliability of each component is treated as a stochastic variable with lower and upper bounds. In order to optimize the problem, we propose a simulated annealing algorithm based on cloud theory (CBSAA). Also, Monte Carlo simulation (MCS) is embedded in the CBSAA to handle the random component reliabilities. This novel approach has been investigated by numerical examples, and the experimental results have shown that the CBSAA combined with MCS is an efficient tool for solving the RAP of systems with interval-valued component reliabilities.
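The CBSAA-with-MCS idea can be sketched as plain simulated annealing in which each candidate redundancy vector is evaluated by Monte Carlo sampling of the interval-valued component reliabilities (the cloud-theory component is omitted here). All subsystem data, costs and the budget below are hypothetical.

```python
import math
import random

random.seed(42)

# Hypothetical 3-subsystem series system: interval component reliabilities,
# unit costs, and a total budget
intervals = [(0.70, 0.80), (0.85, 0.95), (0.60, 0.75)]
costs = [3.0, 5.0, 4.0]
budget = 40.0

def mc_reliability(n, samples=500):
    """Monte Carlo estimate of series-system reliability with interval-valued
    component reliabilities (sampled uniformly within their bounds)."""
    total = 0.0
    for _ in range(samples):
        r_sys = 1.0
        for (lo, hi), k in zip(intervals, n):
            r = random.uniform(lo, hi)
            r_sys *= 1.0 - (1.0 - r) ** k      # k components in parallel
        total += r_sys
    return total / samples

def cost(n):
    return sum(c * k for c, k in zip(costs, n))

def anneal(t0=1.0, cooling=0.995, steps=2000):
    """Simulated annealing over redundancy vectors, MCS-based objective."""
    n = [1, 1, 1]
    cur_rel = mc_reliability(n)
    best, best_rel, t = n[:], cur_rel, t0
    for _ in range(steps):
        cand = n[:]
        i = random.randrange(len(cand))
        cand[i] = max(1, cand[i] + random.choice([-1, 1]))
        if cost(cand) > budget:          # reject infeasible designs
            continue
        rel = mc_reliability(cand)
        if rel > cur_rel or random.random() < math.exp((rel - cur_rel) / t):
            n, cur_rel = cand, rel
            if rel > best_rel:
                best, best_rel = cand[:], rel
        t *= cooling
    return best, best_rel

best_n, best_rel = anneal()
print(best_n, round(best_rel, 3))
```

In the paper, the acceptance temperature is additionally perturbed via a cloud model to balance exploration and exploitation; the plain Metropolis rule above is a simplification.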

Keywords: redundancy allocation problem, simulated annealing, cloud theory, Monte Carlo simulation

Procedia PDF Downloads 401