Search results for: parameter linear programming
3594 Jordan Water District Interactive Billing and Accounting Information System
Authors: Adrian J. Forca, Simeon J. Cainday III
Abstract:
The Jordan Water District Interactive Billing and Accounting Information System is designed for Jordan Water District to improve the efficiency and effectiveness of its services to its customers. It processes water bill computations accurately and quickly by automating the manual process and ensuring that correct rates and fees are applied. In addition to the billing process, a mobile app will be integrated to support rapid and accurate water bill generation. An interactive feature will be incorporated to support electronic billing for customers who wish to receive water bills by electronic mail. The system will also improve organization and avoid data inaccuracy in accounting processes, because data will be stored in a logically correct database designed through normalization. Furthermore, strict programming constraints will be enforced to validate account access privileges based on job function and on the data being stored and retrieved, to ensure data security, reliability, and accuracy. The system will be able to cater to the billing and accounting services of Jordan Water District, setting aside the manual process and adapting to modern technological innovations.
Keywords: accounting, bill, information system, interactive
Procedia PDF Downloads 251
3593 Performance Comparison of Joint Diagonalization Structure (JDS) Method and Wideband MUSIC Method
Authors: Sandeep Santosh, O. P. Sahu
Abstract:
We simulate an efficient multiple wideband and nonstationary source localization algorithm that exploits both the non-stationarity of the signals and the array geometric information. The algorithm is based on the joint diagonalization structure (JDS) of a set of short-time power spectrum matrices taken at different time instants in each frequency bin. JDS can be used for quick and accurate localization of multiple non-stationary sources. The JDS algorithm is a one-stage process, i.e., it directly searches for the directions of arrival (DOAs) over the continuous location parameter space. The JDS method requires that the number of sensors be not less than the number of sources. The simulation results show that the JDS method can localize two sources when their angular separation is not less than 7 degrees, whereas wideband MUSIC requires a separation of 18 degrees.
Keywords: joint diagonalization structure (JDS), wideband direction of arrival (DOA), wideband MUSIC
Procedia PDF Downloads 469
3592 Improved Blood Glucose-Insulin Monitoring with Dual-Layer Predictive Control Design
Authors: Vahid Nademi
Abstract:
In response to the widely used wearable medical devices equipped with a continuous glucose monitor (CGM) and an insulin pump, advanced control methods are still needed to get the full benefit of these devices. Unlike costly clinical trials, implementing effective insulin-glucose control strategies can provide significant benefits to patients suffering from chronic diseases such as diabetes. This study deals with the key role of a two-layer insulin-glucose regulator based on a model-predictive-control (MPC) scheme, so that the patient’s predicted glucose profile is kept in compliance with the insulin level injected automatically through the insulin pump. This is achieved by an iterative optimization algorithm, the integrated perturbation analysis and sequential quadratic programming (IPA-SQP) solver, which handles uncertainties due to unexpected variations in glucose-insulin values and in the body’s characteristics. The feasibility of the discussed control approach is also evaluated by means of numerical simulations of two case scenarios using measured data. The obtained results verify the superior and reliable performance of the proposed control scheme with no negative impact on patient safety.
Keywords: blood glucose monitoring, insulin pump, predictive control, optimization
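The receding-horizon idea behind such a regulator can be sketched in a few lines. The one-compartment linear glucose model, all rate constants, and the grid search over doses below are illustrative assumptions for exposition only; the paper itself uses the IPA-SQP solver, not this toy optimizer.

```python
# Minimal sketch of a receding-horizon (MPC-style) insulin dosing loop.
# Model and constants are hypothetical, not the paper's formulation.

def predict(g, u, a=0.02, b=4.0, g_basal=100.0):
    """One-step glucose prediction (mg/dL) for insulin rate u (U/h)."""
    return g + a * (g - g_basal) - b * u

def mpc_step(g, target=110.0, horizon=5, doses=None):
    """Pick the constant dose over the horizon minimizing squared
    deviation from the target plus a small control-effort penalty."""
    if doses is None:
        doses = [i * 0.1 for i in range(21)]   # candidate rates 0.0 .. 2.0 U/h
    best_u, best_cost = 0.0, float("inf")
    for u in doses:
        g_pred, cost = g, 0.0
        for _ in range(horizon):
            g_pred = predict(g_pred, u)
            cost += (g_pred - target) ** 2 + 0.5 * u ** 2
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

# Closed loop: apply only the first dose, then re-optimize each step.
g = 180.0
for _ in range(30):
    u = mpc_step(g)
    g = predict(g, u)
print(round(g, 1))
```

The key MPC feature shown is that only the first optimized dose is applied before the problem is re-solved with the newly measured glucose value.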
Procedia PDF Downloads 136
3591 Truck Scheduling Problem in a Cross-Dock Centre with Fixed Due Dates
Authors: Mohsen S. Sajadieha, Danyar Molavia
Abstract:
In this paper, a truck scheduling problem is investigated at a two-touch cross-docking center with due dates for outbound trucks as a hard constraint. The objective is to minimize the total cost, comprising the penalty and delivery costs of delayed shipments. The sequence of unloading shipments is considered; it is assumed that shipments are sent to the shipping dock doors immediately after unloading, and a First-In-First-Out (FIFO) policy is applied when loading the shipments. A mixed integer programming model is developed for the proposed problem. Two meta-heuristic algorithms, a genetic algorithm (GA) and variable neighborhood search (VNS), are developed to solve the problem at medium and large scales. The numerical results show that an increase in the due dates for outbound trucks has a crucial impact on the reduction of the penalty costs of delayed shipments. In addition, as the due dates increase, the objective function improves on average in comparison with the situation in which the cross-dock is multi-touch and shipments are sent to the shipping dock doors only after the whole inbound truck is unloaded.
Keywords: cross-docking, truck scheduling, fixed due date, door assignment
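The objective being minimized can be illustrated by evaluating one candidate loading sequence under the FIFO policy. The processing times, due dates, and cost weights below are invented numbers; the paper optimizes this kind of objective with a MIP model and GA/VNS heuristics rather than evaluating a single sequence.

```python
# Hedged sketch: cost of one FIFO loading sequence at a cross-dock.
# All numeric data are illustrative assumptions.

def schedule_cost(shipments, penalty=10.0, delivery=1.0):
    """shipments: list of (load_time, due_date) in FIFO order.
    Total cost = delivery cost + penalty for tardy shipments."""
    t, cost = 0.0, 0.0
    for load_time, due in shipments:
        t += load_time            # FIFO: each shipment waits for the previous
        tardiness = max(0.0, t - due)
        cost += delivery * load_time + penalty * tardiness
    return cost

fifo = [(2.0, 3.0), (1.5, 4.0), (3.0, 6.0)]
print(schedule_cost(fifo))   # → 11.5 (third shipment is 0.5 late)
```

Relaxing every due date to 10.0 removes all tardiness and drops the cost to the pure delivery cost of 6.5, mirroring the abstract's observation that looser due dates mainly reduce the penalty component.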
Procedia PDF Downloads 404
3590 Comparison of Monte Carlo Simulations and Experimental Results for the Measurement of Complex DNA Damage Induced by Ionizing Radiations of Different Quality
Authors: Ifigeneia V. Mavragani, Zacharenia Nikitaki, George Kalantzis, George Iliakis, Alexandros G. Georgakilas
Abstract:
Complex DNA damage, consisting of a combination of DNA lesions such as Double Strand Breaks (DSBs) and non-DSB base lesions occurring in a small volume, is considered one of the most important biological endpoints regarding ionizing radiation (IR) exposure. Strong theoretical (Monte Carlo simulations) and experimental evidence suggests an increase in the complexity of DNA damage, and therefore in repair resistance, with increasing linear energy transfer (LET). Experimental detection of complex (clustered) DNA damage is often hindered by technical deficiencies limiting its measurement, especially in cellular or tissue systems. Our groups have recently made significant improvements towards the identification of key parameters relating to the efficient detection of complex DSBs and non-DSBs in human cellular systems exposed to IR of varying quality (γ- and X-rays 0.3-1 keV/μm, α-particles 116 keV/μm, and 36Ar ions 270 keV/μm). The induction and processing of DSB and non-DSB oxidative clusters were measured using adaptations of immunofluorescence (γH2AX or 53BP1 foci staining as DSB probes, and the human repair enzymes OGG1 or APE1 as probes for oxidized purines and abasic sites, respectively). In the current study, Relative Biological Effectiveness (RBE) values for DSB and non-DSB induction have been measured in a normal human cell line (FEP18-11-T1) and in cancerous cell lines (MCF7, HepG2, A549, MO59K/J). The experimental results are compared to simulation data obtained using a validated microdosimetric fast Monte Carlo DNA Damage Simulation code (MCDS). Moreover, this simulation approach is applied to two realistic clinical cases, i.e., prostate cancer treatment using X-rays generated by a linear accelerator and a pediatric osteosarcoma case using a 200.6 MeV proton pencil beam. RBE values for complex DNA damage induction are calculated for the tumor areas.
These results reveal a disparity between theory and experiment and underline the necessity of implementing highly precise and more efficient experimental and simulation approaches.
Keywords: complex DNA damage, DNA damage simulation, protons, radiotherapy
Procedia PDF Downloads 325
3589 Influence of Channel Depth on the Performance of Wavy Fin Absorber Solar Air Heater
Authors: Abhishek Priyam, Prabha Chand
Abstract:
Channel depth is an important design parameter to be fixed when designing a solar air heater. In this paper, a mathematical model has been developed to study the influence of channel depth on the thermal performance of solar air heaters. The channel depth has been varied from 1.5 cm to 3.5 cm over the mass flow range 0.01 to 0.11 kg/s. Based on the first law of thermodynamics, the channel depth of 1.5 cm shows better thermal performance over the whole mass flow range. Better thermohydraulic performance has also been found up to 0.05 kg/s; beyond this, the thermohydraulic efficiency starts decreasing. With the increase in mass flow rate, the difference between thermal and thermohydraulic efficiency increases because of the increase in pressure drop. At the lowest mass flow rate, 0.01 kg/s, the thermal and thermohydraulic efficiencies for each channel depth remain the same.
Keywords: channel depth, thermal efficiency, wavy fin, thermohydraulic efficiency
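The gap between the two efficiencies the abstract contrasts can be made concrete with the commonly used definitions: first-law efficiency as useful heat gain over incident solar power, and an effective (thermohydraulic) efficiency that charges the pumping power back against the heat gain. The geometry, flow, temperature rise, pressure drop, and conversion factor below are all assumed illustrative values, not the paper's model.

```python
# Illustrative comparison of thermal vs thermohydraulic efficiency.
# All inputs (flow, ΔT, irradiance, area, Δp, c_factor) are assumptions.

def thermal_efficiency(m_dot, cp, dT, irradiance, area):
    """First-law efficiency: useful heat gain / incident solar power."""
    return m_dot * cp * dT / (irradiance * area)

def thermohydraulic_efficiency(m_dot, cp, dT, irradiance, area,
                               pressure_drop, rho=1.1, c_factor=0.2):
    """Effective efficiency: pumping power, converted to an equivalent
    thermal penalty via c_factor, is subtracted from the heat gain."""
    pump_power = m_dot * pressure_drop / rho
    return (m_dot * cp * dT - pump_power / c_factor) / (irradiance * area)

eta_th = thermal_efficiency(0.05, 1005.0, 12.0, 900.0, 1.5)
eta_eff = thermohydraulic_efficiency(0.05, 1005.0, 12.0, 900.0, 1.5, 80.0)
print(round(eta_th, 3), round(eta_eff, 3))   # → 0.447 0.433
```

Because the pumping penalty grows with mass flow (through the pressure drop), the two curves diverge at high flow rates, which is the behaviour the abstract reports beyond 0.05 kg/s.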
Procedia PDF Downloads 372
3588 Formative Assessment in an Introductory Python Programming Course
Authors: María José Núñez-Ruiz, Luis Álvarez-González, Cristian Olivares-Rodriguez, Benjamin Lazo-Letelier
Abstract:
This paper begins with some concepts of formative assessment and their relationship with learning objectives: content objectives, process objectives, and metacognitive objectives. Two methodologies are described: Evidence-Based Teaching and Question-Driven Instruction. To carry out formative assessment in large classes, a Classroom Response System (CRS) is needed, but most CRSs support only multiple-choice questions (MCQ), true/false questions, or text entry, which is insufficient for formative assessment. To address this, a new CRS called FAMA was developed. FAMA supports six types of questions: choice, order, inline choice, text entry, association, and slider. 149 students from four engineering programs participated in an experiment. Kendall's rank correlation analysis and descriptive analysis were applied to the results. In conclusion, there is a strong relation between content questions, process questions (asked in formative assessment without a score), and metacognitive questions asked in summative assessment. As future work, the lecturer can offer personalized teaching, knowing the behavior of all students in each formative assessment.
Keywords: Python language, formative assessment, classroom response systems, evidence-based teaching, question-driven instruction
Procedia PDF Downloads 132
3587 Estimation of Break Points of Housing Price Growth Rate for Top MSAs in Texas Area
Abstract:
Applying the structural break estimation method proposed by Bai and Perron (1998) to the housing price growth rates of the top 5 MSAs in the Texas area, this paper estimates the structural break dates for the growth rate of the housing price index. As the estimation results show, the break dates differ considerably across regions, which indicates the heterogeneity of the housing market in its response to macroeconomic conditions.
Keywords: structural break, housing prices index, ADF test, linear model
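The simplest instance of this estimator, a single break in the mean of a growth-rate series, can be sketched by minimizing the total within-segment sum of squared residuals over all candidate break dates. The synthetic series and the trimming choice are assumptions for illustration; the full Bai-Perron procedure also handles multiple breaks and inference on the break dates.

```python
# Single break-date estimation in a mean-shift model (toy sketch of
# the Bai-Perron idea). The "growth rate" data are invented.

def ssr(segment):
    """Sum of squared residuals around the segment mean."""
    m = sum(segment) / len(segment)
    return sum((x - m) ** 2 for x in segment)

def estimate_break(series, trim=3):
    """Choose the break index minimizing total within-segment SSR."""
    best_k, best = None, float("inf")
    for k in range(trim, len(series) - trim):
        total = ssr(series[:k]) + ssr(series[k:])
        if total < best:
            best_k, best = k, total
    return best_k

# Growth rate shifts from ~1% to ~3% at index 10.
rates = [1.0, 0.9, 1.1, 1.0, 0.8, 1.2, 1.0, 1.1, 0.9, 1.0,
         3.0, 2.9, 3.1, 3.2, 2.8, 3.0, 3.1, 2.9, 3.0, 3.1]
print(estimate_break(rates))   # → 10
```

Running the same estimator on each MSA's series separately is what produces the region-specific break dates the abstract reports.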
Procedia PDF Downloads 150
3586 Cutting Tool-Life Test of Ceramic Insert for Engine Sleeve
Authors: Adam Janásek, Marek Pagáč
Abstract:
The article presents an experimental determination of tool life for ceramic cutting inserts, which provides additional information about the cutting process. The mechanism of tool wear, the cutting temperature during machining, the quality of the machined surface, and the machining process itself are all important for the whole manufacturing process. In particular, roughness plays a very important role in determining how a real object will interact with its environment. The main aim was to determine the number of machined inserts, the tool life, and the micro-geometry as well. On the basis of previous tests, tool wear was measured at constant cutting parameters, which is more typical of high-volume manufacturing processes.
Keywords: ceramic, insert, machining, surface roughness, tool-life, tool-wear
Procedia PDF Downloads 494
3585 Effect of Selenite and Selenate Uptake by Maize Plants on Specific Leaf Area
Authors: F. Garousi, Sz. Veres, É. Bódi, Sz. Várallyay, B. Kovács
Abstract:
Specific leaf area (SLA; cm² leaf g⁻¹ leaf) is a key ecophysiological parameter influencing leaf physiology, photosynthesis, and whole-plant carbon gain, and can also be used as a rapid diagnostic tool. In this study, two soluble inorganic selenium forms, selenite (SeIV) and selenate (SeVI), at different concentrations were applied to maize plants grown in nutrient solutions for 2 weeks; at the end of the experiment, the SLA of the first and second leaves of maize was measured. The results showed that the examined Se concentrations, in both the SeIV and SeVI forms, had no significant effect on the SLA of the maize plants, although the high level of 3 mg kg⁻¹ SeIV had a negative effect on the growth of the treated samples. For the SeVI samples, no such effect was observed, and none of the SeVI concentrations considered were toxic for the maize plants.
Keywords: maize, sodium selenate, sodium selenite, specific leaf area
Procedia PDF Downloads 400
3584 Modelling and Control of Milk Fermentation Process in Biochemical Reactor
Authors: Jožef Ritonja
Abstract:
The biochemical industry is one of the most important modern industries, and biochemical reactors are its crucial devices. The essential bioprocess carried out in bioreactors is the fermentation process. A thorough insight into the fermentation process and the knowledge of how to control it are essential for the effective use of bioreactors to produce products of sufficient quality and quantity. The development of a control system starts with the determination of a mathematical model that describes the steady-state and dynamic properties of the controlled plant satisfactorily and is suitable for control system design. The paper analyses the fermentation process in bioreactors thoroughly, using existing mathematical models. Most existing mathematical models do not allow the design of a control system for controlling the fermentation process in batch bioreactors. For this reason, a mathematical model was developed and is presented that allows the development of a control system for batch bioreactors. Based on the developed mathematical model, a control system was designed to ensure the optimal response of the biochemical quantities in the fermentation process. Due to the time-varying and non-linear nature of the controlled plant, a conventional control system with a proportional-integral-derivative controller with constant parameters does not provide the desired transient response. An adaptive control system was therefore proposed to improve the dynamics of the fermentation; the use of adaptive control is suggested because the parameter variations of the fermentation process are very slow. The developed control system was tested while producing dairy products in a laboratory bioreactor. The carbon dioxide concentration was chosen as the controlled variable, since it correlates well with the other quantities significant for the quality of the fermentation process.
The level of the carbon dioxide concentration gives important information about the fermentation process. The obtained results showed that the designed control system provides a minimum error between the reference and actual values of the carbon dioxide concentration during the transient response and in the steady state. The recommended control system makes reference signal tracking much more efficient than the currently used conventional control systems based on linear control theory. The proposed control system represents a very effective solution for the improvement of the milk fermentation process.
Keywords: biochemical reactor, fermentation process, modelling, adaptive control
Procedia PDF Downloads 129
3583 Population Size Estimation Based on the GPD
Authors: O. Anan, D. Böhning, A. Maruotti
Abstract:
The purpose of the study is to estimate the size of an elusive target population under a truncated count model that accounts for heterogeneity. The proposed estimator is based on the generalized Poisson distribution (GPD), which extends the Poisson distribution by adding a dispersion parameter. It is thus a useful model for capture-recapture data in which capture events are not homogeneous, since it can account for both over-dispersion and under-dispersion. The ratios of neighboring frequency counts are used as a tool for investigating whether the generalized Poisson or the Poisson distribution is valid. Since capture-recapture approaches do not provide the zero counts, the parameters are estimated by modifying the EM-algorithm technique for the zero-truncated generalized Poisson distribution. The properties and the comparative performance of the proposed estimator were investigated through simulation studies. Furthermore, some empirical examples are presented to give insight into the behavior of the estimators.
Keywords: capture, recapture methods, ratio plot, heterogeneous population, zero-truncated count
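The ratio diagnostic the abstract mentions is easy to illustrate: for count frequencies f_x, the ratios r_x = (x+1)·f_{x+1}/f_x are roughly constant (equal to the rate λ) under a Poisson model, and drift systematically with x under over- or under-dispersion such as a generalized Poisson. The frequency data below are invented to mimic a zero-truncated Poisson with λ = 2.

```python
# Hedged sketch of the ratio plot for capture-recapture count data.
# The frequencies are illustrative, not from the paper.

def frequency_ratios(freq):
    """freq: dict {count x: observed frequency f_x} (zero-truncated).
    Returns (x, r_x) pairs for consecutive observed counts."""
    ratios = []
    for x in sorted(freq):
        if x + 1 in freq and freq[x] > 0:
            ratios.append((x, (x + 1) * freq[x + 1] / freq[x]))
    return ratios

# Frequencies roughly proportional to a Poisson(2) truncated at zero:
poisson_like = {1: 271, 2: 271, 3: 180, 4: 90, 5: 36}
for x, r in frequency_ratios(poisson_like):
    print(x, round(r, 2))
```

A flat ratio plot like this one supports the plain Poisson; a ratio sequence rising or falling in x would point toward the GPD with a nonzero dispersion parameter.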
Procedia PDF Downloads 435
3582 Randomness in Cybertext: A Study on Computer-Generated Poetry from the Perspective of Semiotics
Authors: Hongliang Zhang
Abstract:
The use of chance procedures and randomizers in poetry writing can be traced back to surrealist works, which, appealing to Sigmund Freud's theories, were still logocentric. In the 1960s, random permutation and combination were used extensively by the Oulipo, John Cage, and Jackson Mac Low, which further deconstructed the metaphysical presence of writing. Today, randomly-generated digital poetry has emerged as a genre of cybertext that is co-authored by its readers. At the same time, the classical theories have been updated by cybernetics and media theories: N. Katherine Hayles reworked Jacques Lacan's concept of ‘floating signifiers’ into ‘flickering signifiers’, arguing that technology per se has become a part of textual production. This paper makes a historical review of computer-generated poetry from the perspective of semiotics, emphasizing that randomly-generated digital poetry, which hands over the dual tasks of interpretation and writing to the readers, demonstrates the intervention of media technology in literature. With the participation of computerized algorithms and programming languages, poems randomly generated by computers have not only blurred the boundary between encoder and decoder but also raised the issue of the human-machine relationship. It is also a significant feature of cybertext that the productive process of the text is full of randomness.
Keywords: cybertext, digital poetry, poetry generator, semiotics
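A toy generator makes the chance procedure concrete: slots in a template are filled by random draws from a small vocabulary, in the spirit of the Oulipo-style permutation works discussed above. The template and word lists are invented; note that seeding the generator makes the "chance" reproducible, itself an instance of the algorithmic authorship the abstract analyses.

```python
# Toy random poem generator (illustrative vocabulary and template).
import random

TEMPLATE = ["the", "{adj}", "{noun}", "{verb}", "beneath", "the", "{noun}"]
WORDS = {
    "adj": ["flickering", "floating", "random", "silent"],
    "noun": ["signifier", "machine", "reader", "algorithm"],
    "verb": ["drifts", "permutes", "dissolves", "encodes"],
}

def generate_line(rng):
    """Fill each {slot} in the template with a randomly drawn word."""
    return " ".join(
        rng.choice(WORDS[w.strip("{}")]) if w.startswith("{") else w
        for w in TEMPLATE
    )

rng = random.Random(42)   # the seed fixes the "chance" outcome
for _ in range(3):
    print(generate_line(rng))
```

Every run with the same seed yields the same three lines, while a different seed produces a different permutation of the same combinatorial space.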
Procedia PDF Downloads 175
3581 VIAN-DH: Computational Multimodal Conversation Analysis Software and Infrastructure
Authors: Teodora Vukovic, Christoph Hottiger, Noah Bubenhofer
Abstract:
The development of VIAN-DH aims at bridging two linguistic approaches: conversation analysis/interactional linguistics (IL), so far a dominantly qualitative field, and computational/corpus linguistics with its quantitative and automated methods. Contemporary IL investigates the systematic organization of conversations and interactions composed of speech, gaze, gestures, and body positioning, among others. This highly integrated multimodal behaviour is analysed on the basis of video data with the aim of uncovering so-called “multimodal gestalts”, patterns of linguistic and embodied conduct that reoccur in specific sequential positions and are employed for specific purposes. Multimodal analyses (and other disciplines using video) have so far depended on time- and resource-intensive manual transcription of each component of the video materials. Automating these tasks requires advanced programming skills, which are often not within the scope of IL. Moreover, the use of different tools makes the integration and analysis of different formats challenging. Consequently, IL research often deals with relatively small samples of annotated data, suitable for qualitative analysis but not sufficient for making generalized empirical claims derived quantitatively. VIAN-DH aims to create a workspace where the many annotation layers required for the multimodal analysis of videos can be created, processed, and correlated on one platform. VIAN-DH will provide a graphical interface that operates state-of-the-art tools for automating parts of the data processing. The integration of tools that already exist in computational linguistics and computer vision facilitates data processing for researchers lacking programming skills, speeds up the overall research process, and enables the processing of large amounts of data.
The main features to be introduced are automatic speech recognition for the transcription of language, automatic image recognition for the extraction of gestures and other visual cues, and grammatical annotation for adding morphological and syntactic information to the verbal content. In the ongoing instance of VIAN-DH, we focus on gesture extraction (pointing gestures, in particular), making use of existing models created for sign language and adapting them for this specific purpose. In order to view and search the data, VIAN-DH will provide a unified format and enable the import of the main existing formats of annotated video data and the export to other formats used in the field, integrating different data source formats so that they can be combined in research. VIAN-DH will adapt querying methods from corpus linguistics to enable parallel search over many annotation levels, combining token-level and chronological search for various types of data. VIAN-DH strives to bring crucial and potentially revolutionary innovation to the field of IL (which can also extend to other fields using video materials). It will allow large amounts of data to be processed automatically, and quantitative analyses to be implemented in combination with the qualitative approach. It will facilitate the investigation of correlations between linguistic patterns (lexical or grammatical) and conversational aspects (turn-taking or gestures). Users will be able to automatically transcribe and annotate visual, spoken, and grammatical information from videos, to correlate those different levels, and to perform queries and analyses.
Keywords: multimodal analysis, corpus linguistics, computational linguistics, image recognition, speech recognition
Procedia PDF Downloads 108
3580 Mapping Tunnelling Parameters for Global Optimization in Big Data via Dye Laser Simulation
Authors: Sahil Imtiyaz
Abstract:
One of the biggest challenges has emerged from the ever-expanding, dynamic, and instantaneously changing space of Big Data; finding a data point in, and extracting wisdom from, this space is a hard task. In this paper, we reduce the space of big data to a Hamiltonian formalism that is in concordance with the Ising model. For this formulation, we simulate the system using a dye laser in FORTRAN and analyse the dynamics of the data point in the energy well of the rhodium atom. After mapping the photon intensity and pulse width to energy and potential, we conclude that as the energy increases, the probability of tunnelling also increases up to some point, then starts decreasing and subsequently shows randomizing behaviour. This is due to decoherence with the environment, and hence a loss of ‘quantumness’. This interprets the efficiency parameter and the extent of quantum evolution. The results strongly encourage the use of a ‘topological property’ as a source of information instead of the qubit.
Keywords: big data, optimization, quantum evolution, Hamiltonian, dye laser, fermionic computations
Procedia PDF Downloads 194
3579 Perception and Implementation of Machine Translation Applications by the Iranian English Translators
Authors: Abdul Amir Hazbavi
Abstract:
The present study is an attempt to provide a relatively comprehensive overview of Iranian English translators’ perception of machine translation, and to shed light on the status of the implementation of machine translation among Iranian English translators. To reach these objectives, the Localization Industry Standards Association’s questionnaire for measuring perceptions of the adoption of a technology innovation was adapted and used to investigate three parameters among the participants of the study: familiarity with machine translation, general perception of machine translation, and implementation of machine translation systems in translation tasks. The participants of the study were 224 final-year undergraduate Iranian students of English translation at 10 universities across the country. The study revealed a very low level of adoption, a very high level of willingness to get familiar with and learn about machine translation, and a positive perception of and attitude toward machine translation among the Iranian English translators.
Keywords: translation technology, machine translation, perception, implementation
Procedia PDF Downloads 524
3578 Gender Differences in Morbid Obese Children: Clinical Significance of Two Diagnostic Obesity Notation Model Assessment Indices
Authors: Mustafa M. Donma, Orkide Donma, Murat Aydin, Muhammet Demirkol, Burcin Nalbantoglu, Aysin Nalbantoglu, Birol Topcu
Abstract:
Childhood obesity is an ever-increasing global health problem, affecting both developed and developing countries. Accurate evaluation of obesity in children requires difficult and detailed investigation. In our study, obesity in children was evaluated using new body fat ratios and indices. Assessment of anthropometric measurements, as well as some ratios, is important for the evaluation of gender differences, particularly during the late periods of obesity. A total of 239 children participated in the study: 168 morbid obese (MO) (81 girls and 87 boys) and 71 normal weight (NW) (40 girls and 31 boys). Informed consent forms signed by the parents were obtained, and the Ethics Committee approved the study protocol. Mean ages (years)±SD calculated for the MO group were 10.8±2.9 years in girls and 10.1±2.4 years in boys; the corresponding values for the NW group were 9.0±2.0 years in girls and 9.2±2.1 years in boys. Mean body mass index (BMI)±SD values for the MO group were 29.1±5.4 kg/m² and 27.2±3.9 kg/m² in girls and boys, respectively. These values for the NW group were calculated as 15.5±1.0 kg/m² in girls and 15.9±1.1 kg/m² in boys. Groups were constituted based upon the BMI percentiles for age and sex recommended by WHO: children with percentiles >99 were grouped as MO, and children with percentiles between 15 and 85 were considered NW. The anthropometric measurements were recorded and evaluated along with the new ratios, such as the trunk-to-appendicular fat ratio, as well as indices such as Index-I and Index-II. The body fat percentage values were obtained by bio-electrical impedance analysis. Data were entered into a database for analysis using SPSS/PASW 18 Statistics for Windows statistical software. Increased waist-to-hip circumference (C) ratios and decreased head-to-neck C, (height/2)-to-waist C, and (height/2)-to-hip C ratios were observed in parallel with the development of obesity (p≤0.001).
The reference value for the (height/2)-to-hip ratio was detected as approximately 1.0. Index-II, based upon total body fat mass, showed much more significant differences between the groups than Index-I, based upon weight. There was no difference between the trunk-to-appendicular fat ratios of NW girls and NW boys (p≥0.05); however, significantly increased values were observed for MO girls in comparison with MO boys (p≤0.05). This parameter showed no difference between the NW and MO states in boys (p≥0.05), whereas a statistically significant increase was noted in MO girls compared to their NW counterparts (p≤0.001). The trunk-to-appendicular fat ratio was the only fat-based parameter that showed a gender difference between the NW and MO groups. This study has revealed that body ratios and formulas based upon body fat tissue are more valuable parameters than those based on weight and height values for the evaluation of morbid obesity in children.
Keywords: anthropometry, childhood obesity, gender, morbid obesity
Procedia PDF Downloads 325
3577 An Optimal Hybrid EMS System for a Hyperloop Prototype Vehicle
Authors: J. F. Gonzalez-Rojo, Federico Lluesma-Rodriguez, Temoatzin Gonzalez
Abstract:
Hyperloop, a new mode of transport, is gaining significance. It consists of a ground-based transport system that includes a levitation system, which avoids rolling friction forces, and that is enclosed in a tube whose inner atmosphere is controlled to lower the aerodynamic drag forces. Hyperloop is thus proposed as a solution to the current limitations of ground transportation. The rolling and aerodynamic problems that limit the speed of traditional high-speed rail, or even of maglev systems, are overcome by a hyperloop solution. Zeleros is one of the companies developing technology for hyperloop application worldwide. It is working on a concept that reduces the infrastructure cost and minimizes the power consumption, as well as the losses associated with magnetic drag forces. For this purpose, Zeleros proposes a Hybrid ElectroMagnetic Suspension (EMS) for its prototype. In the present manuscript, an active and optimal electromagnetic suspension levitation method based on nearly-zero-power-consumption individual modules is presented. This system consists of several hybrid permanent magnet-coil levitation units that can be arranged along the vehicle. The proposed unit manages to redirect the magnetic field along a defined direction, forming a magnetic circuit and minimizing the losses due to field dispersion. This is achieved using an electrical steel core. Each module can stabilize the gap distance using the coil current and either linear or non-linear control methods. The ratio between weight and levitation force for each unit is 1/10. In addition, the quotient between the lifted weight and the power consumption at the target gap distance is 1/3 [kg/W]. One degree of freedom (DoF) (along the gap direction) is controlled by a single unit. However, when several units are present, a 5-DoF control (2 translational and 3 rotational) can be achieved, leading to full attitude control of the vehicle.
The proposed system has been successfully tested, reaching TRL-4 on a laboratory test bench, and is currently at the TRL-5 development stage when the association of modules to control 5 DoF is considered.
Keywords: active optimal control, electromagnetic levitation, HEMS, high-speed transport, hyperloop
Procedia PDF Downloads 146
3576 Design of a Fuzzy Luenberger Observer for Fault Nonlinear System
Authors: Mounir Bekaik, Messaoud Ramdani
Abstract:
We present in this work a new stabilization technique for nonlinear systems with faults. The approach we adopt focuses on a fuzzy Luenberger observer. The T-S approximation of the nonlinear observer is based on the fuzzy C-Means clustering algorithm, used to find local linear subsystems. The MOESP identification approach was applied to design an empirical model describing the state variables of the subsystems. The gain of the observer is obtained by minimizing the estimation error through a Lyapunov-Krasovskii functional and an LMI approach. We consider a three-tank hydraulic system as an illustrative example.
Keywords: nonlinear system, fuzzy, faults, TS, Lyapunov-Krasovskii, observer
Procedia PDF Downloads 333
3575 Efficient Utilization of Biomass for Bioenergy in Environmental Control
Authors: Subir Kundu, Sukhendra Singh, Sumedha Ojha, Kanika Kundu
Abstract:
The continuous decline of petroleum and natural gas reserves and the nonlinear rise of oil prices have brought about a realisation of the need for a change in our perpetual dependence on fossil fuels. The day-to-day increase in consumption of crude and petroleum products has made a considerable impact on our foreign exchange reserves. Hence, an alternate resource for the conversion of energy (both liquid and gas) is essential as a substitute for conventional fuels. Biomass is the alternate solution for the present scenario. Biomass can be converted into both liquid and gaseous fuels, as well as other feedstocks for the industries. Keywords: bioenergy, biomass conversion, biorefining, efficient utilisation of night soil
Procedia PDF Downloads 406
3574 Automatic Calibration of Agent-Based Models Using Deep Neural Networks
Authors: Sima Najafzadehkhoei, George Vega Yon
Abstract:
This paper presents an approach for calibrating Agent-Based Models (ABMs) efficiently, utilizing Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks. These machine learning techniques are applied to Susceptible-Infected-Recovered (SIR) models, which are a core framework in the study of epidemiology. Our method recovers parameter values from observed trajectory curves, enhancing the accuracy of predictions when compared to traditional calibration techniques. Through the use of simulated data, we train the models to predict epidemiological parameters more accurately. Two primary approaches were explored: one where the number of susceptible, infected, and recovered individuals is fully known, and another using only the number of infected individuals. Our method shows promise for application in other ABMs where calibration is computationally intensive and expensive. Keywords: ABM, calibration, CNN, LSTM, epidemiology
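The forward model behind such a training set can be sketched in a few lines. This discrete-time SIR simulator (parameter values chosen arbitrarily for illustration, not taken from the paper) is the kind of generator that would produce the simulated trajectories the networks learn to invert:

```python
def simulate_sir(beta=0.3, gamma=0.1, n=1000, i0=1, days=160):
    """Discrete-time SIR trajectory. In a calibration study, many such curves
    with known (beta, gamma) would form the supervised training set."""
    s, i, r = float(n - i0), float(i0), 0.0
    traj = [(s, i, r)]
    for _ in range(days):
        new_inf = beta * s * i / n   # new infections this step
        new_rec = gamma * i          # new recoveries this step
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        traj.append((s, i, r))
    return traj

traj = simulate_sir()
peak_infected = max(i for _, i, _ in traj)
```

The "infected only" variant in the abstract would feed just the middle component of each triple to the network; the "fully known" variant feeds all three.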
Procedia PDF Downloads 24
3573 Distribution System Modelling: A Holistic Approach for Harmonic Studies
Authors: Stanislav Babaev, Vladimir Cuk, Sjef Cobben, Jan Desmet
Abstract:
The procedures for performing harmonic studies for medium-voltage distribution feeders have been relatively mature topics since the early 1980s. The efforts of various electric power engineers and researchers were mainly focused on handling large harmonic non-linear loads connected sparsely at several buses of medium-voltage feeders. In order to assess the impact of these loads on the voltage quality of the distribution system, specific modeling and simulation strategies were proposed. These methodologies could deliver reasonable estimation accuracy given the requirements of least computational effort and reduced complexity. To uphold these requirements, certain analysis assumptions have been made, which became de facto standards for establishing guidelines for harmonic analysis. Among others, typical assumptions include balanced conditions of the study and a negligible impact of the impedance-frequency characteristics of various power system components. In the latter, skin and proximity effects are usually omitted, and resistance and reactance values are modeled based on theoretical equations. Further, the simplifications of the modelling routine have led to the commonly accepted practice of neglecting phase-angle diversity effects. This is mainly associated with the developed load models, which only in a handful of cases represent the complete harmonic behavior of a device or account for the harmonic interaction between grid harmonic voltages and harmonic currents. While these modelling practices were proven to be reasonably effective for medium-voltage levels, similar approaches have been adopted for low-voltage distribution systems.
Given modern conditions, with the massive increase in usage of residential electronic devices, the recent and ongoing boom of electric vehicles, and the large-scale installation of distributed solar power, the harmonics in current low-voltage grids are characterized by a high degree of variability and demonstrate sufficient diversity to produce a certain level of cancellation effects. It is obvious that new modelling algorithms overcoming the previously made assumptions have to be adopted. In this work, a simulation approach aimed at dealing with some of these typical assumptions is proposed. A practical low-voltage feeder is modeled in PowerFactory. In order to demonstrate the importance of the diversity effect and harmonic interaction, previously developed measurement-based models of a photovoltaic inverter and a battery charger are used as loads. A Python-based script supplying a varying background voltage distortion profile and the associated current harmonic response of the loads is used as the core of the unbalanced simulation. Furthermore, the impact of the uncertainty of the feeder frequency-impedance characteristics on total harmonic distortion levels is shown, along with scenarios involving linear resistive loads, which further alter the impedance of the system. The comparative analysis demonstrates significant differences from cases when all the assumptions are in place, and the results indicate that new modelling and simulation procedures need to be adopted for low-voltage distribution systems with high penetration of non-linear loads and renewable generation. Keywords: electric power system, harmonic distortion, power quality, public low-voltage network, harmonic modelling
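As a side note on the distortion metric central to such studies, total harmonic distortion can be computed from an FFT of the waveform. The sketch below uses a synthetic current with assumed 5th and 7th harmonic levels, not data from the study:

```python
import numpy as np

FS = 12800            # sampling rate [Hz]; a 1-second window gives 1 Hz bins
F1 = 50               # fundamental frequency [Hz]
t = np.arange(FS) / FS

# synthetic distorted waveform: fundamental plus 5th and 7th harmonics
x = 1.00 * np.sin(2 * np.pi * F1 * t) \
    + 0.10 * np.sin(2 * np.pi * 5 * F1 * t) \
    + 0.05 * np.sin(2 * np.pi * 7 * F1 * t)

spec = np.abs(np.fft.rfft(x)) * 2 / len(x)          # single-sided amplitudes
harmonics = np.array([spec[h * F1] for h in range(1, 51)])  # orders 1..50
thd = np.sqrt((harmonics[1:] ** 2).sum()) / harmonics[0]
```

With 0.10 and 0.05 p.u. harmonics the expected THD is sqrt(0.10^2 + 0.05^2) ≈ 11.18 %; phase-angle diversity between loads shifts the vector sums behind each `spec` bin, which is exactly the cancellation effect the abstract argues must be modelled.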
Procedia PDF Downloads 159
3572 GPU Accelerated Fractal Image Compression for Medical Imaging in Parallel Computing Platform
Authors: Md. Enamul Haque, Abdullah Al Kaisan, Mahmudur R. Saniat, Aminur Rahman
Abstract:
In this paper, we have implemented both sequential and parallel versions of fractal image compression algorithms using the CUDA (Compute Unified Device Architecture) programming model, parallelizing the program on the Graphics Processing Unit for medical images, as they are highly similar within the image itself. There are several improvements in the implementation of the algorithm as well. Fractal image compression is based on the self-similarity of an image, meaning an image has similar content across the majority of its regions. We take this opportunity to implement the compression algorithm and monitor its effect using both parallel and sequential implementations. Fractal compression has the properties of a high compression rate and a resolution-independent scheme. A fractal compression scheme consists of two stages: encoding and decoding. Encoding is computationally very expensive, while decoding is much less so. The application of fractal compression to medical images would allow obtaining much higher compression ratios, while fractal magnification, an inseparable feature of fractal compression, would be very useful in presenting the reconstructed image in a highly readable form. However, like all irreversible methods, fractal compression is connected with the problem of information loss, which is especially troublesome in medical imaging. A very time-consuming encoding process, which can last even several hours, is another bothersome drawback of fractal compression. Keywords: accelerated GPU, CUDA, parallel computing, fractal image compression
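A minimal sequential sketch of the encoding stage (the computationally expensive part) may clarify what the GPU has to parallelize. Block sizes and the test image are illustrative only; this is not the authors' implementation:

```python
import numpy as np

def encode(img, rsize=4):
    """Sketch of fractal encoding: for every range block, search all domain
    blocks (twice the size, averaged down) for the affine intensity map
    s*D + o minimising the squared error. This exhaustive inner search is
    the work a CUDA version would distribute across GPU threads."""
    dsize = 2 * rsize
    h, w = img.shape
    domains = []
    for y in range(0, h - dsize + 1, dsize):
        for x in range(0, w - dsize + 1, dsize):
            d = img[y:y + dsize, x:x + dsize]
            # 2x2 averaging brings the domain block down to range-block size
            domains.append(((y, x),
                            d.reshape(rsize, 2, rsize, 2).mean(axis=(1, 3)).ravel()))
    transforms, errors = [], []
    for y in range(0, h, rsize):
        for x in range(0, w, rsize):
            r = img[y:y + rsize, x:x + rsize].ravel()
            best = (np.inf, None)
            for pos, dv in domains:
                # least-squares fit of contrast s and brightness o
                A = np.column_stack([dv, np.ones_like(dv)])
                (s, o), *_ = np.linalg.lstsq(A, r, rcond=None)
                err = float(((s * dv + o - r) ** 2).sum())
                if err < best[0]:
                    best = (err, (pos, s, o))
            transforms.append(((y, x), best[1]))
            errors.append(best[0])
    return transforms, errors

# a smooth ramp image is perfectly self-similar, so matches are near exact
img = np.add.outer(np.arange(16.0), np.arange(16.0))
transforms, errors = encode(img)
```

Decoding would iterate these contractive maps from an arbitrary start image; because each range block search is independent, the encoder is embarrassingly parallel, which is the observation the paper exploits.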
Procedia PDF Downloads 336
3571 The Mental Workload of ICU Nurses in Performing Human-Machine Tasks: A Cross-sectional Survey
Authors: Yan Yan, Erhong Sun, Lin Peng, Xuchun Ye
Abstract:
Aims: The present study aimed to explore Intensive Care Unit (ICU) nurses’ mental workload (MWL) and the factors associated with it in performing human-machine tasks. Background: A wide range of emerging technologies have penetrated the field of health care, and ICU nurses are facing a dramatic increase in nursing human-machine tasks. However, there is still a paucity of literature reporting on the general MWL of ICU nurses performing human-machine tasks and the associated influencing factors. Methods: A cross-sectional survey was employed. The data were collected from January to February 2021 from 9 tertiary hospitals in 6 provinces (Shanghai, Gansu, Guangdong, Liaoning, Shandong, and Hubei). Two-stage sampling was used to recruit eligible ICU nurses (n=427). The data were collected with an electronic questionnaire comprising sociodemographic characteristics and measures of MWL, self-efficacy, system usability, and task difficulty. Univariate analysis, two-way analysis of variance (ANOVA), and a linear mixed model were used for data analysis. Results: Overall, the mental workload of ICU nurses in performing human-machine tasks was medium (score 52.04 on a 0-100 scale). Among the typical nursing human-machine tasks selected, the MWL of ICU nurses in completing first aid and life support tasks (‘Using a defibrillator to defibrillate’ and ‘Use of ventilator’) was significantly higher than for the others (p < .001). ICU nurses’ MWL in performing human-machine tasks was also associated with age (p = .001), professional title (p = .002), years of working in ICU (p < .001), willingness to study emerging technology actively (p = .006), task difficulty (p < .001), and system usability (p < .001). Conclusion: The MWL of ICU nurses is at a moderate level in the context of a rapid increase in nursing human-machine tasks.
However, there are significant differences in MWL when performing different types of human-machine tasks, and MWL can be influenced by a combination of factors. Nursing managers need to develop intervention strategies in multiple ways. Implications for practice: Multidimensional approaches are required to perform human-machine tasks better, including enhancing nurses' willingness to learn emerging technologies actively, developing training strategies that vary with tasks, and identifying obstacles in the process of human-machine system interaction. Keywords: mental workload (MWL), nurse, ICU, human-machine tasks, cross-sectional study, linear mixed model, China
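To illustrate the linear mixed model named in the Methods, here is a sketch on simulated data with invented variable names and effect sizes, assuming the statsmodels library is available; it is not the study's actual analysis:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for the survey: MWL scores for nurses nested within
# hospitals, with a hospital-level random intercept. All numbers are
# hypothetical and chosen only to demonstrate the model structure.
rng = np.random.default_rng(42)
hospital = np.repeat(np.arange(9), 48)             # 9 sites, 48 nurses each
difficulty = rng.normal(0.0, 1.0, hospital.size)   # perceived task difficulty
site_effect = rng.normal(0.0, 3.0, 9)[hospital]    # random intercepts per site
mwl = 52 + 5 * difficulty + site_effect + rng.normal(0.0, 6.0, hospital.size)
df = pd.DataFrame({"mwl": mwl, "difficulty": difficulty, "hospital": hospital})

# random-intercept linear mixed model: fixed effect of task difficulty,
# hospital as the grouping (clustering) factor
fit = smf.mixedlm("mwl ~ difficulty", df, groups=df["hospital"]).fit()
slope = float(fit.params["difficulty"])
```

The random intercept absorbs between-hospital variation so the fixed-effect slope for `difficulty` is not biased by site-level clustering, which is why a mixed model (rather than plain OLS) suits nurses sampled within hospitals.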
Procedia PDF Downloads 105
3570 EEG Signal Processing Methods to Differentiate Mental States
Authors: Sun H. Hwang, Young E. Lee, Yunhan Ga, Gilwon Yoon
Abstract:
EEG is a very complex signal contaminated by noise and other bio-potential interferences. EOG is the most prominent interfering signal when EEG signals are measured and analyzed. How raw EEG signals are processed in order to obtain useful information is therefore very important. In this study, EEG signal processing techniques such as EOG filtering and outlier removal were examined to minimize unwanted EOG signals and other noises. Two different mental states, resting and focusing, were examined through EEG analysis. A focused state was induced by having subjects watch a red dot on a white screen. EEG data for 32 healthy subjects were measured. After 60-Hz notch filtering, the EEG data were processed by a commercially available EOG filter and by our proposed algorithm based on the removal of outliers. The ratio of the beta wave to the theta wave was used as a parameter for determining the degree of focusing. The results show that our algorithm was more appropriate than the existing EOG filtering. Keywords: EEG, focus, mental state, outlier, signal processing
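The beta/theta ratio itself is straightforward to compute from band powers. The sketch below uses a synthetic signal with assumed frequencies and amplitudes rather than the study's recordings:

```python
import numpy as np

FS = 256                      # sampling rate [Hz]
t = np.arange(4 * FS) / FS    # 4-second epoch -> 0.25 Hz frequency resolution

# synthetic "focused" EEG: strong beta (20 Hz) over theta (6 Hz), plus
# 60 Hz mains interference, which the band limits below simply exclude
eeg = 1.0 * np.sin(2 * np.pi * 6 * t) \
    + 2.0 * np.sin(2 * np.pi * 20 * t) \
    + 0.5 * np.sin(2 * np.pi * 60 * t)

freqs = np.fft.rfftfreq(len(eeg), 1 / FS)
power = np.abs(np.fft.rfft(eeg)) ** 2

def band_power(lo, hi):
    """Total spectral power in [lo, hi) Hz."""
    return power[(freqs >= lo) & (freqs < hi)].sum()

focus_index = band_power(13, 30) / band_power(4, 8)   # beta / theta ratio
```

With amplitudes 2 and 1, the power ratio comes out near 4; in the study a higher ratio would indicate the focused state relative to rest.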
Procedia PDF Downloads 284
3569 Reliability Based Optimal Design of Laterally Loaded Pile with Limited Residual Strain Energy Capacity
Authors: M. Movahedi Rad
Abstract:
In this study, a general approach to the reliability-based limit analysis of laterally loaded piles is presented. In engineering practice, uncertainties play a very important role. The aim of this study is to evaluate the lateral load capacity of free-head and fixed-head long piles when plastic limit analysis is considered. In addition to the plastic limit analysis used to control the plastic behaviour of the structure, an uncertain bound on the complementary strain energy of the residual forces is also applied. This bound has a significant effect on the load parameter. The reliability-based problems are solved by a computer program governed by the reliability index calculation. Keywords: reliability, laterally loaded pile, residual strain energy, probability, limit analysis
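As a pointer to what a reliability index calculation involves in the simplest case, the Cornell index for an independent normal resistance-load pair can be sketched as follows. The values are hypothetical, and the paper's actual program is considerably more elaborate:

```python
import math

def reliability_index(mu_r, sig_r, mu_s, sig_s):
    """Cornell reliability index for independent normal resistance R and
    load effect S with limit state g = R - S. This is just the textbook
    quantity such reliability computations are built around, not the
    authors' formulation."""
    return (mu_r - mu_s) / math.sqrt(sig_r ** 2 + sig_s ** 2)

def failure_probability(beta):
    """P(g < 0) = Phi(-beta), evaluated via the error function."""
    return 0.5 * (1.0 + math.erf(-beta / math.sqrt(2.0)))

# hypothetical pile: lateral capacity R ~ N(150, 15) kN, load S ~ N(100, 10) kN
beta = reliability_index(150, 15, 100, 10)
pf = failure_probability(beta)
```

A target index (e.g. beta around 3) then acts as the design constraint: the uncertain bound on complementary strain energy shifts the attainable load parameter until the computed beta meets the target.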
Procedia PDF Downloads 349
3568 Effect of Fault Depth on Near-Fault Peak Ground Velocity
Authors: Yanyan Yu, Haiping Ding, Pengjun Chen, Yiou Sun
Abstract:
Fault depth is an important parameter to be determined in ground motion simulation, and peak ground velocity (PGV) shows good application prospects. Using a numerical simulation method, the variations of the distribution and peak value of near-fault PGV with fault depth were studied in detail, and the reasons for some phenomena were discussed. The simulation results show that the distribution characteristics of the fault-parallel (FP) and fault-normal (FN) components of PGV are distinctly different; the PGV of the FN component is much larger than that of the FP component. With increasing fault depth, the region of strong FN-component PGV moves forward along the rupture direction, while the strong-PGV zone of the FP component gradually moves away from the fault trace in the direction perpendicular to the strike. However, for both the FN and FP components, the strong-PGV area and its value are quickly reduced with increased fault depth. These results suggest that fault depth has a significant effect on both the FN and FP components of near-fault PGV. Keywords: fault depth, near-fault, PGV, numerical simulation
Procedia PDF Downloads 347
3567 Biomarkers, A Reliable Tool for Delineating Spill Trajectory
Authors: Okpor Victor, Selegha Abrakasa
Abstract:
Oil (petroleum) spills occur frequently, and in this era of heightened awareness it is pertinent that the trajectory of a spill is properly defined, to ascertain the area impacted by it. In this study, biomarkers, known as the custodians of palaeo-information in oils, are suggested as reliable tools for defining the pathway of a spill. Samples were collected as tills, alongside the GPS coordinates of the sample points suspected to have been impacted by a spill. Oils in the samples were extracted and analyzed as whole oil using GC-MS. Some biomarker parametric ratios were derived, and the ratios showed consistent values along the sample trail from sample 1 to sample 20. The consistency of the values indicates that the oil at each sample point is the same. This method can be used to validate the trajectory/pathway of a spill and also to define or establish a suspected pathway for a spill. The Oleanane/C30-Hopane ratio showed good consistency and is suggested as a reliable parameter for establishing the trajectory of an oil spill. Keywords: spill, biomarkers, trajectory, pathway
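The consistency check on a biomarker ratio can be expressed as a small calculation. The peak areas below are invented placeholders, not the study's GC-MS data:

```python
from statistics import mean, stdev

# Hypothetical GC-MS peak areas (oleanane, C30 hopane) for tills sampled
# along the suspected spill trail; real values come from the chromatograms.
peak_areas = [
    (118, 530), (121, 545), (116, 520), (124, 552), (119, 533),
    (122, 541), (117, 528), (120, 538), (123, 549), (115, 519),
]

ratios = [ol / hop for ol, hop in peak_areas]
cv = stdev(ratios) / mean(ratios)   # coefficient of variation of the ratio

# a small spread in Oleanane/C30-Hopane along the trail supports a
# single-source oil, i.e. the sampled points lie on one spill pathway
same_oil = cv < 0.05
```

A sample whose ratio departs sharply from the trail mean would be flagged as lying off the spill pathway or as containing oil from a different source.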
Procedia PDF Downloads 65
3566 [Keynote Speech]: Competitive Evaluation of Power Plants in Energy Policy
Authors: Beril Tuğrul
Abstract:
Electrical energy is the most important form of energy, and electrical power plants have the highest impact factor in energy policy. This study concerns the evaluation of various power plants, including fossil-fuel, nuclear, and renewable-energy-based power plants. The power plants are evaluated with regard to the overall impact considered in establishing them. Both positive and negative impacts of power plant operation are compared in view of different arguments. The impact factor is then calculated for each argument using linear extrapolation of the variation. With this study, power plants are assessed from different points of view and clarified objectively.
Procedia PDF Downloads 524
3565 Multi-Agent Coverage Control with Bounded Gain Forgetting Composite Adaptive Controller
Authors: Mert Turanli, Hakan Temeltas
Abstract:
In this paper, we present an adaptive controller for the decentralized coordination problem of multiple non-holonomic agents. The performance of the presented Multi-Agent Bounded Gain Forgetting (BGF) Composite Adaptive controller is compared with a Feedback Linearization controller in terms of the tracking error criterion. Using this method, the sensor nodes move and reconfigure themselves in a coordinated way in response to the sensed environment. The multi-agent coordination is achieved through Centroidal Voronoi Tessellations and coverage control. Also, a consensus protocol is used for synchronization of the parameter vectors. The two controllers are given with their Lyapunov stability analyses, and their stability is verified with simulation results. The simulations are carried out in MATLAB and ROS environments. Better performance is obtained with the BGF Adaptive Controller. Keywords: adaptive control, centroidal voronoi tessellations, composite adaptation, coordination, multi robots
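The Centroidal Voronoi Tessellation step can be approximated with Lloyd iterations. The sketch below (plain NumPy, Monte Carlo cell integration over the unit square) covers only this coverage-control ingredient, not the paper's adaptive or consensus layers:

```python
import numpy as np

def lloyd_coverage(agents, n_iter=20, n_samples=4000, seed=1):
    """Lloyd iterations drive agent positions toward a centroidal Voronoi
    tessellation of the unit square (uniform sensing density assumed).
    Voronoi cells are approximated by Monte Carlo sample points."""
    rng = np.random.default_rng(seed)
    pts = rng.random((n_samples, 2))          # samples of the sensing region
    agents = np.array(agents, dtype=float)
    for _ in range(n_iter):
        # assign every sample to its nearest agent (its Voronoi cell)
        d2 = ((pts[:, None, :] - agents[None]) ** 2).sum(-1)
        owner = d2.argmin(axis=1)
        # move every agent to the centroid of its cell
        for k in range(len(agents)):
            cell = pts[owner == k]
            if len(cell):
                agents[k] = cell.mean(axis=0)
    # coverage cost: mean squared distance to the nearest agent
    d2 = ((pts[:, None, :] - agents[None]) ** 2).sum(-1)
    return agents, float(d2.min(axis=1).mean())

# four agents started bunched in one corner spread out to cover the square
start = [[0.05, 0.05], [0.06, 0.05], [0.05, 0.06], [0.07, 0.07]]
final_agents, cost = lloyd_coverage(start)
```

In the paper this gradient-like descent of the coverage cost is executed by the physical agents under the BGF adaptive controller, with the non-holonomic dynamics and parameter consensus layered on top.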
Procedia PDF Downloads 348