Search results for: switching time
17362 Characterization of Pectinase from Local Microorganisms to Support Industry Based Green Chemistry
Authors: Sasangka Prasetyawan, Anna Roosdiana, Diah Mardiana, Suratmo
Abstract:
Pectinases are enzymes that hydrolyze pectin compounds. They are used primarily to reduce the viscosity of beverages, thus simplifying the purification process. Pectinase activity is influenced by the microbial source: pectinases from the two microbial genera explored here, Aspergillus spp. and Bacillus spp., perform differently, but the use of local strains is still not widely studied. The aim of this research was to explore pectinases from A. niger and B. firmus, including production conditions and characterization. Bacillus firmus was incubated and shaken at 200 rpm at varied pH (5, 6, 7, 8, 9, 10), temperature (30, 35, 40, 45, 50) °C, and incubation time (6, 12, 18, 24, 30, 36) hours. The media were centrifuged at 3000 rpm, and pectinase activity was determined. Enzyme production by A. niger was determined at the same temperature and pH variations as for B. firmus, but with incubation times of 24, 48, 72, 96, and 120 hours. The crude pectinase extract was further purified by precipitation using ammonium sulfate saturation in the 0-20%, 20-40%, 40-60%, and 60-80% fractions, then dialyzed. The optimum conditions for pectinase activity were determined by measuring enzyme activity at varied pH (4, 6, 7, 8, 10), temperature (30, 35, 40, 45, 50) °C, and incubation time (10, 20, 30, 40, 50) minutes. Kinetic parameters of the enzymatic reaction were determined by measuring reaction rates at the optimum conditions while varying the substrate concentration (pectin 0.1%, 0.2%, 0.3%, 0.4%, 0.5%). The results showed that optimum pectinase production by B. firmus was achieved at pH 7-8, 40-50 °C, and a fermentation time of 18 hours. Purification showed the highest purity in the 40-80% ammonium sulfate fraction. Characterization showed that A. niger pectinase works optimally at pH 5 and B. firmus pectinase at pH 7; the optimum temperature and incubation time were the same for both, namely 50 °C and 30 minutes. Metal ions can affect pectinase activity: Zn²⁺, Pb²⁺, Ca²⁺, and K⁺ above 6 mM, and Mg²⁺ above 2 mM, inhibited the activity.
Keywords: pectinase, Bacillus firmus, Aspergillus niger, green chemistry
Procedia PDF Downloads 367
17361 Load Balancing and Resource Utilization in Cloud Computing
Authors: Gagandeep Kaur
Abstract:
Cloud computing uses various computing resources, such as CPU, memory, and processors, to deliver services over the network, and is one of the emerging fields of large-scale distributed computing. In cloud computing, executing a large number of tasks with the available resources so as to achieve high performance, minimal total completion time, minimum response time, and effective resource utilization is a major research area. In this research, an algorithm is proposed to achieve high performance in load balancing and resource utilization. The proposed algorithm reduces the makespan and improves resource utilization and performance cost for independent tasks. Further, scheduling metrics based on the algorithm in cloud computing are proposed.
Keywords: resource utilization, response time, load balancing, performance cost
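As an illustration of the makespan objective discussed above, the following sketch applies the classic longest-processing-time (LPT) greedy heuristic, a standard load-balancing baseline rather than the algorithm proposed in the abstract; the task times and machine count are made up.

```python
# Greedy longest-processing-time (LPT) heuristic for assigning
# independent tasks to machines to reduce makespan. This is a
# standard baseline, not the algorithm proposed in the abstract.
import heapq

def lpt_schedule(task_times, n_machines):
    """Assign tasks (longest first) to the currently least-loaded machine."""
    loads = [(0.0, m) for m in range(n_machines)]  # (load, machine id)
    heapq.heapify(loads)
    assignment = {m: [] for m in range(n_machines)}
    for t in sorted(task_times, reverse=True):
        load, m = heapq.heappop(loads)
        assignment[m].append(t)
        heapq.heappush(loads, (load + t, m))
    makespan = max(load for load, _ in loads)
    return assignment, makespan

tasks = [7, 5, 4, 3, 3, 2]
_, makespan = lpt_schedule(tasks, 2)
print(makespan)  # 12
```

Here LPT happens to reach the optimum (total work 24 split evenly over 2 machines); in general it is only a 4/3-approximation of the minimum makespan.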
Procedia PDF Downloads 182
17360 Study of Motion of Impurity Ions in Poly(Vinylidene Fluoride) from View Point of Microstructure of Polymer Solid
Authors: Yuichi Anada
Abstract:
The electrical properties of a polymer solid are characterized by dielectric relaxation phenomena. The complex permittivity depends strongly on the frequency of the external stimulation over the broad range from 0.1 mHz to 10 GHz. The complex-permittivity dispersion gives a lot of useful information about the molecular motion of polymers and the structure of polymer aggregates. However, the large dispersion of permittivity at low frequencies due to DC conduction of impurity ions often masks the dielectric relaxation in a polymer solid. Many researchers have long tried to remove the DC conduction experimentally or analytically. Our laboratory, by contrast, reverses this thinking: we use the impurity ions responsible for DC conduction as a probe to detect the motion of polymer molecules and to investigate the structure of polymer aggregates. In addition to the complex permittivity, the electric modulus and the conductivity relaxation time are strong tools for investigating ionic motion in DC conduction. In the non-crystalline part of melt-crystallized polymers, free spaces of inhomogeneous size exist between crystallites. As the impurity ions reside in the non-crystalline part and move through these inhomogeneous free spaces, their motion reflects the microstructure of the non-crystalline part. In this study, the motion of impurity ions in poly(vinylidene fluoride) (PVDF) is investigated. The frequency dependence of the loss permittivity of PVDF shows the characteristic of direct current (DC) conduction below 1 kHz at 435 K. The electric modulus-frequency curve shows a dispersion with a single conductivity relaxation time, i.e., a Debye-type dispersion. The conductivity relaxation time obtained from this curve is 0.00003 s at 435 K.
From a plot of the conductivity relaxation time of PVDF, together with other polymers, against permittivity, two groups of polymers were found: one characterized by a small conductivity relaxation time and large permittivity, the other by a large conductivity relaxation time and small permittivity.
Keywords: conductivity relaxation time, electric modulus, ionic motion, permittivity, poly(vinylidene fluoride), DC conduction
Procedia PDF Downloads 170
17359 Colour and Curcuminoids Removal from Turmeric Wastewater Using Activated Carbon Adsorption
Authors: Nattawat Thongpraphai, Anusorn Boonpoke
Abstract:
This study aimed to determine the removal of colour and curcuminoids from turmeric wastewater using granular activated carbon (GAC) adsorption. The adsorption isotherm and kinetic behavior of colour and curcuminoids were investigated using batch and fixed-bed column tests. The results indicated that the removal efficiencies of colour and curcuminoids were 80.13% and 78.64%, respectively, at 8 hr of equilibrium time. The adsorption isotherms of colour and curcuminoids were well fitted by the Freundlich adsorption model. The maximum adsorption capacities for colour and curcuminoids were 130 Pt-Co/g and 17 mg/g, respectively. The continuous-experiment data showed that the exhaustion concentrations of colour and curcuminoids occurred at 39 hr of operation time. The adsorption of colour and curcuminoids from turmeric wastewater by GAC can be described by the Thomas model. The maximum adsorption capacities obtained from the kinetic approach were 39954 Pt-Co/g and 0.0516 mg/kg for colour and curcuminoids, respectively. Moreover, the decreases in colour and curcuminoids concentrations during the service time showed a similar trend.
Keywords: adsorption, turmeric, colour, curcuminoids, activated carbon
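The Freundlich fit mentioned above is usually obtained from the linearized form log q = log K_F + (1/n) log C; a minimal sketch with synthetic data (not the study's measurements):

```python
# Linearized Freundlich isotherm fit: log q = log K_F + (1/n) log C.
# Synthetic illustration with made-up data, not the study's results.
import numpy as np

def fit_freundlich(C_eq, q_eq):
    """Return (K_F, n) from equilibrium concentrations and uptakes."""
    slope, intercept = np.polyfit(np.log10(C_eq), np.log10(q_eq), 1)
    return 10 ** intercept, 1.0 / slope

# data generated from K_F = 2.0 and n = 2.0, so the fit recovers them
C = np.array([0.1, 0.2, 0.3, 0.4, 0.5])
q = 2.0 * C ** 0.5
K_F, n = fit_freundlich(C, q)
print(round(K_F, 3), round(n, 3))  # 2.0 2.0
```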
Procedia PDF Downloads 424
17358 Fire Characteristic of Commercial Retardant Flame Polycarbonate under Different Oxygen Concentration: Ignition Time and Heat Blockage
Authors: Xuelin Zhang, Shouxiang Lu, Changhai Li
Abstract:
Commercial flame-retardant polycarbonate samples of different thicknesses, used as the main interior carriage material of high-speed trains, were investigated in a Fire Propagation Apparatus under different external heat fluxes and oxygen concentrations from 12% to 40%, in order to study the fire characteristics and quantitatively analyze the ignition time, mass loss rate, and heat blockage. The additives of the commercial flame-retardant polycarbonate were intumescent, and the samples maintained a steady height before ignition when heated. The results showed that the transformed ignition time (1/t_ig)ⁿ increased linearly with external heat flux at each oxygen concentration after deducting the heat blockage due to pyrolysis products; the mass loss rate varied linearly with external heat flux, and the slope of the fitted line of mass loss rate against external heat flux decreased with increasing oxygen concentration; the heat blockage was independent of external heat flux and rose with increasing oxygen concentration. The acquired data, used as input to fire simulation models, are important for evaluating the fire risk of commercial flame-retardant polycarbonate.
Keywords: ignition time, mass loss rate, heat blockage, fire characteristic
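The linear relation between transformed ignition time and external heat flux can be used to extract a critical heat flux, as in the classical thermally-thick correlation (1/t_ig)^(1/2) = a(q_ext - q_cr); a sketch with synthetic numbers, not the paper's data:

```python
# Classical ignition correlation for thermally thick solids:
# (1/t_ig)^(1/2) = a * (q_ext - q_cr), so a straight-line fit of
# (1/t_ig)^0.5 against q_ext yields the critical heat flux q_cr.
# Illustrative synthetic numbers, not the paper's measurements.
import numpy as np

q_ext = np.array([25.0, 35.0, 45.0, 55.0])   # external heat flux, kW/m^2
t_ig = np.array([400.0, 100.0, 44.4, 25.0])  # ignition time, s (synthetic)

y = (1.0 / t_ig) ** 0.5
slope, intercept = np.polyfit(q_ext, y, 1)
q_cr = -intercept / slope                     # x-axis intercept of the fit
print(round(q_cr, 1))  # 15.0 (recovers the synthetic critical flux)
```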
Procedia PDF Downloads 282
17357 A Neurofeedback Learning Model Using Time-Frequency Analysis for Volleyball Performance Enhancement
Authors: Hamed Yousefi, Farnaz Mohammadi, Niloufar Mirian, Navid Amini
Abstract:
Investigating the capacities of visual functions, where adapted mechanisms can enhance the capability of sports trainees, is a promising area of research, not only from the cognitive viewpoint but also in terms of its many applications in sports training. In this paper, the visual evoked potential (VEP) and event-related potential (ERP) signals of amateur and trained volleyball players in a pilot study were processed. Two groups of amateur and trained subjects were asked to imagine themselves receiving a ball while they were shown a simulated volleyball field. The proposed method is based on a set of time-frequency features, extracted from the VEP signals using algorithms such as the Gabor filter, the continuous wavelet transform, and a multi-stage wavelet decomposition, that can be indicative of being amateur or trained. A linear discriminant classifier achieves 100% accuracy, sensitivity, and specificity when the average over repetitions of the task-related signal is used. The main purpose of this study is to investigate the feasibility of a fast, robust, and reliable feature/model determination as a neurofeedback parameter to be utilized for improving volleyball players' performance. The proposed measure has potential applications in brain-computer interface technology, where a real-time biomarker is needed.
Keywords: visual evoked potential, time-frequency feature extraction, short-time Fourier transform, event-related spectrum potential classification, linear discriminant analysis
Procedia PDF Downloads 138
17356 Comparative Study of Deep Reinforcement Learning Algorithm Against Evolutionary Algorithms for Finding the Optimal Values in a Simulated Environment Space
Authors: Akshay Paranjape, Nils Plettenberg, Robert Schmitt
Abstract:
Traditional optimization methods such as evolutionary algorithms are widely used in production processes to find an optimal or near-optimal set of control parameters based on a simulated environment space of a process. These algorithms are computationally intensive and therefore do not allow real-time optimization. This paper utilizes the Deep Reinforcement Learning (DRL) framework to find an optimal or near-optimal solution for control parameters. A model based on maximum a posteriori policy optimization (Hybrid-MPO) that can handle both numerical and categorical parameters is used as a benchmark for comparison. A comparative study shows that DRL can find optimal solutions of similar quality to those of evolutionary algorithms while requiring significantly less time, making it preferable for real-time optimization. The results are confirmed in a large-scale validation study on datasets from production and other fields. A trained XGBoost model is used as a surrogate for the process simulation. Finally, multiple ways to improve the model are discussed.
Keywords: reinforcement learning, evolutionary algorithms, production process optimization, real-time optimization, hybrid-MPO
Procedia PDF Downloads 112
17355 From Comfort to Safety: Assessing the Influence of Car Seat Design on Driver Reaction and Performance
Authors: Sabariah Mohd Yusoff, Qamaruddin Adzeem Muhamad Murad
Abstract:
This study investigates the impact of car seat design on driver response time, addressing a critical gap in understanding how ergonomic features influence both performance and safety. Controlled driving experiments were conducted with fourteen participants (11 male, 3 female) across three locations chosen for their varying traffic conditions, to account for differences in driver alertness. Participants interacted with various seat designs while performing driving tasks, and objective metrics such as braking and steering response times were meticulously recorded. Advanced statistical methods, including regression analysis and t-tests, were employed to identify design factors that significantly affect driver response times. Subjective feedback was gathered through detailed questionnaires, focused on driving experience and knowledge of response time, and through in-depth interviews. This qualitative data was analyzed thematically to provide insights into driver comfort and usability preferences. The study aims to identify key seat design features that impact driver response time and to gain a deeper understanding of driver preferences for comfort and usability. The findings are expected to inform evidence-based guidelines for optimizing car seat design, ultimately enhancing driver performance and safety. The research offers valuable implications for automotive manufacturers and designers, contributing to the development of seats that improve driver response time and overall driving safety.
Keywords: car seat design, driver response time, cognitive driving, ergonomics optimization
Procedia PDF Downloads 24
17354 Simulation and Experimental Research on Pocketing Operation for Toolpath Optimization in CNC Milling
Authors: Rakesh Prajapati, Purvik Patel, Avadhoot Rajurkar
Abstract:
Nowadays, manufacturing industries augment their production lines with modern machining centers backed by CAM software, and several attempts are being made to cut down the programming time for machining complex geometries. Special programs have been developed to generate the digital numerical data and to prepare NC programs using suitable post-processors for different machines: after the tools and the manufacturing process are selected, toolpaths are applied and the NC program is generated. More and more complex mechanical parts that were earlier cast and assembled or manufactured by other processes are now being machined. The majority of these parts require many pocketing operations and find applications in dies and molds, turbomachinery, aircraft, nuclear, defense, etc. Pocketing operations involve removal of a large quantity of material from the metal surface. In this work, a warm-cast food-processing part and its clamping were modeled using Pro-E and MasterCAM® software, and the pocketing operation was specifically chosen for toolpath optimization. After applying the pocketing toolpath, the Multi Tool Selection and Reduce Air Time options give the resulting software simulation time and experimental machining time.
Keywords: toolpath, part program, optimization, pocket
Procedia PDF Downloads 287
17353 Analysis of Brain Signals Using Neural Networks Optimized by Co-Evolution Algorithms
Authors: Zahra Abdolkarimi, Naser Zourikalatehsamad,
Abstract:
Until about 40 years ago, it was generally believed that epileptic attacks occurred randomly and suddenly. However, thanks to advances in mathematics and engineering, such attacks can now be predicted minutes or hours ahead, and various algorithms for long-term prediction of the time and frequency of the first attack have been presented. In this paper, considering the nonlinear nature of dynamically recorded brain signals, an ANFIS model is presented to predict brain signals, since, given the physiologic structure of attack onset, more complex neural structures can better model the signal during attacks. The contribution of this work is a co-evolution algorithm for optimizing the ANFIS network parameters. Our objective is to predict the brain signals of people suffering from epilepsy, based on time series obtained from their recordings, using ANFIS. Results reveal that, compared to other methods, this method is less sensitive to uncertainties such as noise and interruptions in the recorded brain signals, and is more accurate. The long-term prediction capacity of the model illustrates its use in implanted systems for medication warnings and seizure prevention.
Keywords: co-evolution algorithms, brain signals, time series, neural networks, ANFIS model, physiologic structure, time prediction, epilepsy
Procedia PDF Downloads 282
17352 Dynamic Analysis of Submerged Floating Tunnel Subjected to Hydrodynamic and Seismic Loadings
Authors: Naik Muhammad, Zahid Ullah, Dong-Ho Choi
Abstract:
The submerged floating tunnel (SFT) is a new solution for transportation infrastructure across sea straits, fjords, and inland waters, and can be a good alternative to long-span suspension bridges. The SFT is a massive cylindrical structure that floats at a certain depth below the water surface and is subjected to extreme environmental conditions. Identifying the dominant structural response of the SFT under its intended environmental conditions is therefore important for design. The time-domain dynamic problem of an SFT moored by vertical and inclined mooring cables/anchors is formulated, and dynamic time history analyses of the SFT subjected to hydrodynamic and seismic excitations are performed. The SFT is modeled by 3D beam finite elements, and the mooring cables by truss elements. Based on the dynamic time history analysis, the displacements and internal forces of the SFT were calculated, and the response is presented for hydrodynamic and seismic excitations. The transverse internal forces of the SFT were the largest, compared to the vertical direction, for both the hydrodynamic and the seismic cases; this indicates that the cable system provides very little stiffness in the transverse direction compared to the vertical direction of the SFT.
Keywords: submerged floating tunnel, hydrodynamic analysis, time history analysis, seismic response
Procedia PDF Downloads 329
17351 Kalman Filter Gain Elimination in Linear Estimation
Authors: Nicholas D. Assimakis
Abstract:
In linear estimation, the traditional Kalman filter uses the Kalman filter gain to produce estimates and predictions of the n-dimensional state vector from the m-dimensional measurement vector. Computing the Kalman filter gain requires the inversion of an m x m matrix in every iteration. In this paper, a variation of the Kalman filter that eliminates the Kalman filter gain is proposed. In the time-varying case, eliminating the gain requires the inversion of an n x n matrix and of an m x m matrix in every iteration. In the time-invariant case, eliminating the gain requires the inversion of an n x n matrix in every iteration. The proposed gain-elimination algorithm may be faster than the conventional Kalman filter, depending on the model dimensions.
Keywords: discrete time, estimation, Kalman filter, Kalman filter gain
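For context, a conventional textbook Kalman filter measurement update is sketched below, showing where the m x m inversion enters through the gain; this is the standard form, not the proposed gain-free variant.

```python
# Conventional Kalman filter measurement update, showing the m x m
# inversion hidden in the gain computation (the step the proposed
# variant eliminates). Standard textbook form, not the paper's algorithm.
import numpy as np

def kf_update(x, P, z, H, R):
    """One measurement update: state x (n,), covariance P (n,n),
    measurement z (m,), observation model H (m,n), noise R (m,m)."""
    S = H @ P @ H.T + R             # innovation covariance, m x m
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain: needs the m x m inverse
    x_new = x + K @ (z - H @ x)     # corrected state estimate
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new

# 2-state example observing only the first component
x, P = np.zeros(2), np.eye(2)
H = np.array([[1.0, 0.0]])
R = np.array([[1.0]])
x1, P1 = kf_update(x, P, np.array([2.0]), H, R)
print(x1)  # [1. 0.]
```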
Procedia PDF Downloads 195
17350 Experimental Study on the Heat Transfer Characteristics of the 200W Class Woofer Speaker
Authors: Hyung-Jin Kim, Dae-Wan Kim, Moo-Yeon Lee
Abstract:
The objective of this study is to experimentally investigate the heat transfer characteristics of a 200 W class woofer speaker unit under different input voice signals. The temperature and heat transfer characteristics of the unit were tested with input voice signals of 1500 Hz, 2500 Hz, and 5000 Hz. The experiments showed that the temperature of the woofer speaker unit, including the voice-coil part, increases as the input signal frequency decreases. The temperature difference between the measured points of the voice coil also increases with decreasing signal frequency. In addition, the heat transfer of the woofer speaker at the 1500 Hz input signal was 40% higher than at the 5000 Hz input signal at a measuring time of 200 seconds. It can be concluded from the experiments that the voice-coil temperature initially increases rapidly with time and, after a certain period, follows an exponential trend; during this time-dependent temperature change, the high-frequency signal is more stable than the low-frequency signal.
Keywords: heat transfer, temperature, voice coil, woofer speaker
Procedia PDF Downloads 360
17349 Infrastructure Change Monitoring Using Multitemporal Multispectral Satellite Images
Authors: U. Datta
Abstract:
The main objective of this study is to find a suitable approach to monitor land infrastructure growth over a period of time using multispectral satellite images. Bi-temporal change detection cannot capture the continuous change occurring over a long period. To achieve the objective, the approach used here estimates a statistical model from a series of multispectral images acquired over a long period, assuming there is no considerable change during that period, and then compares it with multispectral image data obtained at a later time. The change is estimated pixel-wise. A statistical composite hypothesis technique is used for pixel-based change detection in a defined region: the generalized likelihood ratio test (GLRT) detects a changed pixel from the probabilistically estimated model of the corresponding pixel. The detection assumes that the images have been co-registered prior to estimation; to minimize error due to co-registration, the 8-neighborhood pixels around the pixel under test are also considered. Multispectral images from Sentinel-2 and Landsat-8 from 2015 to 2018 are used for this purpose. There are several challenges in this method. The first and foremost is obtaining a sufficiently large number of datasets for multivariate distribution modelling, since a large number of images are always discarded due to cloud coverage, and imperfect modelling leads to a high probability of false alarm. The overall conclusion that can be drawn from this work is that the probabilistic method described in this paper has given promising results, which need to be pursued further.
Keywords: co-registration, GLRT, infrastructure growth, multispectral, multitemporal, pixel-based change detection
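A simplified Gaussian stand-in for the pixel-wise test described above: fit a multivariate normal to a pixel's multispectral history and flag a later observation whose Mahalanobis-type statistic exceeds a threshold; the data here are synthetic.

```python
# Pixel-wise change test: fit a multivariate Gaussian to a pixel's
# multispectral history, then score a later observation by its
# Mahalanobis distance (a simplified stand-in for the GLRT statistic).
import numpy as np

def change_statistic(history, pixel):
    """history: (T, bands) past observations; pixel: (bands,) new one."""
    mu = history.mean(axis=0)
    cov = np.cov(history, rowvar=False)
    d = pixel - mu
    return float(d @ np.linalg.inv(cov) @ d)  # large => likely changed

rng = np.random.default_rng(0)
hist = rng.normal(0.0, 1.0, size=(100, 4))    # stable 4-band pixel history
unchanged = change_statistic(hist, np.zeros(4))
changed = change_statistic(hist, np.full(4, 5.0))
print(unchanged < changed)  # True
```

In the full GLRT the threshold is chosen from the statistic's null distribution to fix the false-alarm probability; here one would simply compare the score against a chosen cutoff.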
Procedia PDF Downloads 135
17348 A Heart Arrhythmia Prediction Using Machine Learning’s Classification Approach and the Concept of Data Mining
Authors: Roshani S. Golhar, Neerajkumar S. Sathawane, Snehal Dongre
Abstract:
Background and objectives: Cardiovascular illnesses are increasing and becoming a leading cause of mortality worldwide, killing a large number of people each year. Arrhythmia is a type of cardiac illness characterized by a change in the regularity of the heartbeat. The goal of this study is to develop novel deep learning algorithms for successfully interpreting arrhythmia from a single one-second segment. Because the ECG signal reflects the heart's unique electrical activity across time, considerable changes between time intervals are detected. Such variances, as well as the limited amount of learning data available for each arrhythmia, make standard learning methods difficult and so impede their generalization. Conclusions: The proposed method was able to outperform several state-of-the-art methods. The proposed technique is an effective and convenient deep learning approach to heartbeat interpretation that could probably be used in real-time healthcare monitoring systems.
Keywords: electrocardiogram, ECG classification, neural networks, convolutional neural networks, portable document format
Procedia PDF Downloads 69
17347 In vitro Effects of Salvia officinalis on Bovine Spermatozoa
Authors: Eva Tvrdá, Boris Botman, Marek Halenár, Tomáš Slanina, Norbert Lukáč
Abstract:
In vitro storage and processing of animal semen represents a risk factor to spermatozoa vitality, potentially leading to reduced fertility. A variety of substances isolated from natural sources may exhibit protective or antioxidant properties on the spermatozoon, thus extending the lifespan of stored ejaculates. This study compared the effects of different concentrations of Salvia officinalis extract on the motility, mitochondrial activity, viability, and reactive oxygen species (ROS) production of bovine spermatozoa during different time periods (0, 2, 6 and 24 h) of in vitro culture. Spermatozoa motility was assessed using the computer-assisted sperm analysis (CASA) system. Cell metabolic activity was examined using the MTT assay, the eosin-nigrosin staining technique was used to evaluate sperm viability, and ROS generation was quantified by luminometry. The CASA analysis revealed that motility in the experimental groups supplemented with 0.5-2 µg/mL Salvia extract was significantly lower than in the control (P<0.05; time 24 h). At the same time, long-term exposure of spermatozoa to concentrations between 0.05 µg/mL and 2 µg/mL had a negative impact on mitochondrial metabolism (P<0.05; time 24 h). The viability staining revealed that 0.001-1 µg/mL Salvia extract had no effect on bovine male gametes; however, 2 µg/mL Salvia had a persistent negative effect on spermatozoa (P<0.05). Furthermore, 0.05-2 µg/mL Salvia exhibited an immediate ROS-promoting effect on the sperm culture (P>0.05; time 0 h and 2 h), which remained significant throughout the entire in vitro culture (P<0.05; time 24 h). Our results point to the necessity of examining the specific effects the biomolecules present in Salvia officinalis may have, individually or collectively, on in vitro sperm vitality and oxidative profile.
Keywords: bulls, CASA, MTT test, reactive oxygen species, sage, Salvia officinalis, spermatozoa
Procedia PDF Downloads 338
17346 Stable Time Reversed Integration of the Navier-Stokes Equation Using an Adjoint Gradient Method
Authors: Jurriaan Gillissen
Abstract:
This work is concerned with stabilizing the numerical integration of the Navier-Stokes equation (NSE) backwards in time. Applications involve the detection of sources of, e.g., sound, heat, and pollutants; stable reverse numerical integration of parabolic differential equations is also relevant for image de-blurring. While the literature addresses the reverse integration problem for the advection-diffusion equation, the problem of numerical reverse integration of the NSE has, to our knowledge, not yet been addressed. Owing to the presence of viscosity, the NSE is irreversible: when going backwards in time, the fluid behaves as if it had a negative viscosity. As a result, perturbations from the perfect solution, due to round-off or discretization errors, grow exponentially in time, and reverse integration of the NSE is inherently unstable, regardless of whether an implicit time integration scheme is used. Consequently, some sort of filtering is required in order to achieve a stable numerical reversed integration. The challenge is to find a filter with a minimal adverse effect on the accuracy of the reversed integration. In the present work, we explore an adjoint gradient method (AGM) to achieve this goal, and we apply this technique to two-dimensional (2D) decaying turbulence. The AGM solves for the initial velocity field u0 at t = 0 that, when integrated forward in time, produces a final velocity field u1 at t = 1 that is as close as feasibly possible to a specified target field v1. The initial field u0 defines a minimum of a cost functional J that measures the distance between u1 and v1. In the minimization procedure, u0 is updated iteratively along the gradient of J w.r.t. u0, where the gradient is obtained by transporting J backwards in time from t = 1 to t = 0 using the adjoint NSE. The AGM thus effectively replaces the backward integration by multiple forward and backward adjoint integrations.
Since the viscosity is negative in the adjoint NSE, each step of the AGM is numerically stable. Nevertheless, when applied to turbulence, the AGM develops instabilities, which limit the backward integration to small times. This is due to the exponential divergence of phase-space trajectories in turbulent flow, which produces a multitude of local minima in J when the integration time is large. As a result, the AGM may select unphysical, noisy initial conditions. In order to improve this situation, we propose two remedies. First, we replace the integration by a sequence of smaller integrations, i.e., we divide the integration time into segments, where in each segment the target field v1 is taken as the initial field u0 from the previous segment. Second, we add an additional term (regularizer) to J, proportional to a high-order Laplacian of u0, which dampens the gradients of u0. We show that suitable values for the segment size and for the regularizer allow a stable reverse integration of 2D decaying turbulence, with accurate results for more than O(10) turbulent integral time scales.
Keywords: time reversed integration, parabolic differential equations, adjoint gradient method, two dimensional turbulence
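The AGM idea above can be illustrated on a linear analogue: the sketch below recovers an initial condition for the 1D heat equation by gradient descent on J, using the fact that the (symmetric) diffusion operator is its own adjoint. A toy linear stand-in under stated assumptions, not the 2D turbulence computation of the paper.

```python
# Toy adjoint-gradient recovery of an initial condition for the 1D
# heat equation: find u0 such that the forward-diffused field matches
# a target v1. The explicit diffusion operator is symmetric, so the
# adjoint transport is the same operator applied to the residual.
import numpy as np

def step(u, nu=0.2):
    """One explicit diffusion step with periodic boundaries (nu <= 0.5)."""
    return u + nu * (np.roll(u, 1) - 2 * u + np.roll(u, -1))

def forward(u, nsteps=50):
    for _ in range(nsteps):
        u = step(u)
    return u

n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
u_true = np.sin(x)
v1 = forward(u_true)            # target final field at t = 1

u0 = np.zeros(n)                # initial guess for u0
for _ in range(200):            # gradient descent on J = |A u0 - v1|^2
    r = forward(u0) - v1        # residual at t = 1
    grad = forward(r)           # adjoint run (= forward, A is symmetric)
    u0 -= 2.0 * grad            # descent step (rate folded into factor)

print(np.max(np.abs(forward(u0) - v1)) < 1e-3)  # target matched
```

Note the analogue of the instability discussed above: strongly damped (high-wavenumber) modes of u0 are barely constrained by J, which is why the paper's regularizer penalizes them explicitly.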
Procedia PDF Downloads 224
17345 Application of Seasonal Autoregressive Integrated Moving Average Model for Forecasting Monthly Flows in Waterval River, South Africa
Authors: Kassahun Birhanu Tadesse, Megersa Olumana Dinka
Abstract:
Reliable future river flow information is basic to the planning and management of any river system. For a data-scarce river system with only river flow records, like the Waterval River, univariate time series models are appropriate for river flow forecasting. In this study, a univariate Seasonal Autoregressive Integrated Moving Average (SARIMA) model was applied to forecast Waterval River flow using the GRETL statistical software. Mean monthly river flows from 1960 to 2016 were used for modeling. Different unit root tests and Mann-Kendall trend analysis were performed to test the stationarity of the observed flow time series. The time series was differenced to remove the seasonality. Using the correlogram of the seasonally differenced time series, different SARIMA models were identified, their parameters were estimated, and diagnostic checks of the model forecasts were performed using white noise and heteroscedasticity tests. Finally, based on the minimum Akaike Information (AIC) and Hannan-Quinn (HQC) criteria, SARIMA (3, 0, 2) x (3, 1, 3)12 was selected as the best model for Waterval River flow forecasting. This model can therefore be used to generate future river flow information for water resources development and management in the Waterval River system. The SARIMA model can also be used for forecasting other similar univariate time series with seasonal characteristics.
Keywords: heteroscedasticity, stationarity test, trend analysis, validation, white noise
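The identification loop described above (seasonal differencing, parameter estimation, minimum-information-criterion selection) can be sketched in a bare-bones form; the code below uses synthetic monthly flows and a least-squares AR fit as a stand-in for the full SARIMA estimation performed in GRETL (statsmodels' SARIMAX would be the full-featured equivalent).

```python
# Bare-bones SARIMA-style identification: seasonally difference a
# monthly series, fit candidate AR orders by least squares, and pick
# the order with minimum AIC. Synthetic flows, not the Waterval record.
import numpy as np

def seasonal_difference(y, lag=12):
    return y[lag:] - y[:-lag]

def fit_ar(y, p):
    """Least-squares AR(p); returns (coefficients, AIC)."""
    X = np.column_stack([y[p - i - 1:len(y) - i - 1] for i in range(p)])
    target = y[p:]
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    sigma2 = np.mean((target - X @ coef) ** 2)
    aic = len(target) * np.log(sigma2) + 2 * p
    return coef, aic

rng = np.random.default_rng(1)
months = np.arange(240)                      # 20 years of monthly data
flow = 10 + 5 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 1, 240)
d = seasonal_difference(flow)                # removes the annual cycle
coef1, aic1 = fit_ar(d, 1)
coef2, aic2 = fit_ar(d, 2)
best_p = 1 if aic1 < aic2 else 2             # minimum-AIC order choice
print(best_p)
```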
Procedia PDF Downloads 205
17344 Some Integral Inequalities of Hermite-Hadamard Type on Time Scale and Their Applications
Authors: Artion Kashuri, Rozana Liko
Abstract:
In this paper, the authors establish an integral identity using delta-differentiable functions. By applying this identity, some new results are given for a general class of convex functions with respect to two nonnegative functions on a time scale. For suitable choices of the nonnegative functions, some special cases are deduced. Finally, in order to illustrate the efficiency of the main results, some applications to special means are obtained. We hope that work using this idea and technique will attract the attention of researchers working in mathematical analysis, mathematical inequalities, numerical analysis, special functions, fractional calculus, quantum mechanics, quantum calculus, physics, probability and statistics, differential and difference equations, optimization theory, and other related fields in pure and applied sciences.
Keywords: convex functions, Hermite-Hadamard inequality, special means, time scale
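For reference, the classical Hermite-Hadamard inequality that such results generalize states that, for a convex function f on [a, b],

```latex
% Classical Hermite-Hadamard inequality for convex f : [a,b] -> R.
% Time-scale versions replace the Riemann integral with the delta
% integral; the statement below is the continuous special case.
f\!\left(\frac{a+b}{2}\right)
\;\le\;
\frac{1}{b-a}\int_{a}^{b} f(x)\,dx
\;\le\;
\frac{f(a)+f(b)}{2}
```

with equality on both sides exactly when f is affine; applications to special means follow by choosing f such as f(x) = xʳ or f(x) = 1/x.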
Procedia PDF Downloads 150
17343 Electrochemical Studies of Si, Si-Ge and Ge-Air Batteries
Authors: R. C. Sharma, Rishabh Bansal, Prajwal Menon, Manoj K. Sharma
Abstract:
Silicon-air batteries are highly promising for electric vehicles due to their high theoretical energy density (8470 Whkg⁻¹), and their discharge products are non-toxic. For the first time, pure silicon and germanium powders are used as anode materials. Nickel wire meshes embedded with charcoal and manganese dioxide powder serve as the cathode, and concentrated potassium hydroxide is used as the electrolyte. Voltage-time curves are presented in this study for pure silicon and germanium powders and for 5% and 10% germanium mixed with silicon powder. The silicon powder cell assembly gives a stable voltage of 0.88 V for ~20 minutes, the Si-Ge cells provide a cell voltage of 0.80-0.76 V for ~10-12 minutes, and the pure germanium cell provides a cell voltage of 0.80-0.76 V for ~30 minutes. The cell voltage is higher for a concentrated (10%) sodium hydroxide solution (1.08 V), and it is stable for ~40 minutes. A sharp decrease in cell voltage beyond 40 minutes may be due to rapid corrosion.
Keywords: silicon-air battery, germanium-air battery, voltage-time curve, open circuit voltage, anodic corrosion
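The quoted theoretical energy density can be sanity-checked from first principles: silicon oxidizes to SiO2 through a four-electron transfer, so dividing the transferred charge by silicon's molar mass gives the specific capacity, and multiplying by the theoretical cell voltage gives the specific energy. The ~2.2 V cell voltage used below is an assumption for this sketch, chosen because it reproduces a figure close to the 8470 Whkg⁻¹ cited above.

```python
# Back-of-the-envelope theoretical specific energy of a silicon-air anode.
# The ~2.2 V theoretical cell voltage is an assumed value for this sketch.
FARADAY = 96485.0      # C/mol
N_ELECTRONS = 4        # Si -> SiO2 transfers four electrons
M_SI = 28.09e-3        # kg/mol, molar mass of silicon
CELL_VOLTAGE = 2.2     # V (assumed theoretical value)

def specific_energy_wh_per_kg():
    """Energy per kilogram of silicon anode: charge * voltage / mass, in Wh/kg."""
    charge_per_kg = N_ELECTRONS * FARADAY / M_SI       # C/kg
    return charge_per_kg * CELL_VOLTAGE / 3600.0       # J -> Wh conversion
```

The result lands near 8400 Wh/kg, in the same range as the theoretical value quoted in the abstract.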
Procedia PDF Downloads 237
17342 Time Series Simulation by Conditional Generative Adversarial Net
Authors: Rao Fu, Jie Chen, Shutian Zeng, Yiping Zhuang, Agus Sudjianto
Abstract:
Generative Adversarial Net (GAN) has proved to be a powerful machine learning tool in image data analysis and generation. In this paper, we propose to use Conditional Generative Adversarial Net (CGAN) to learn and simulate time series data. The conditions include both categorical and continuous variables with different auxiliary information. Our simulation studies show that CGAN has the capability to learn different types of normal and heavy-tailed distributions, as well as dependent structures of different time series. It also has the capability to generate conditional predictive distributions consistent with training data distributions. We also provide an in-depth discussion on the rationale behind GAN and the neural networks as hierarchical splines to establish a clear connection with existing statistical methods of distribution generation. In practice, CGAN has a wide range of applications in market risk and counterparty risk analysis: it can be applied to learn historical data and generate scenarios for the calculation of Value-at-Risk (VaR) and Expected Shortfall (ES), and it can also predict the movement of the market risk factors. We present a real data analysis including a backtesting to demonstrate that CGAN can outperform Historical Simulation (HS), a popular method in market risk analysis to calculate VaR. CGAN can also be applied in economic time series modeling and forecasting. In this regard, we have included an example of hypothetical shock analysis for economic models and the generation of potential CCAR scenarios by CGAN at the end of the paper.
Keywords: conditional generative adversarial net, market and credit risk management, neural network, time series
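Historical Simulation, the baseline the authors benchmark CGAN against, reduces to taking an empirical quantile of past losses. A minimal sketch (the confidence level and the sign convention for P&L are illustrative assumptions):

```python
import numpy as np

def historical_var_es(pnl, alpha=0.99):
    """Historical Simulation VaR and Expected Shortfall at level alpha.

    VaR is the loss threshold exceeded with probability 1 - alpha;
    ES is the average loss beyond that threshold.
    """
    losses = -np.asarray(pnl, dtype=float)   # convert P&L to losses
    var = np.quantile(losses, alpha)         # empirical quantile of losses
    es = losses[losses >= var].mean()        # mean loss in the tail
    return var, es
```

In a CGAN-based workflow, the same quantile computation would be applied to generated scenarios rather than to the raw historical window.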
Procedia PDF Downloads 143
17341 Optimal Sequential Scheduling of Imperfect Maintenance Last Policy for a System Subject to Shocks
Authors: Yen-Luan Chen
Abstract:
Maintenance has a great impact on production capacity and product quality, and therefore it deserves continuous improvement. A maintenance procedure performed before a failure is called preventive maintenance (PM). Sequential PM, which specifies that a system should be maintained at a sequence of intervals with unequal lengths, is one of the commonly used PM policies. This article proposes a generalized sequential PM policy for a system subject to shocks, with imperfect maintenance and random working time. The shocks arrive according to a non-homogeneous Poisson process (NHPP) with a varied intensity function in each maintenance interval. As a shock occurs, the system suffers one of two types of failure with number-dependent probabilities: a type-I (minor) failure, which is rectified by a minimal repair, and a type-II (catastrophic) failure, which is removed by corrective maintenance (CM). The imperfect maintenance is carried out to improve the system failure characteristic due to the altered shock process. Under the sequential preventive maintenance-last (PML) policy, the system is maintained before any CM occurs at a planned time Ti or at the completion of a working time in the i-th maintenance interval, whichever occurs last. At the N-th maintenance, the system is replaced rather than maintained. This article is the first to take up the sequential PML policy with random working time and imperfect maintenance in reliability engineering. The optimal preventive maintenance schedule that minimizes the mean cost rate of a replacement cycle is derived analytically and characterized in terms of its existence and uniqueness. The proposed models provide a general framework for analyzing maintenance policies in reliability theory.
Keywords: optimization, preventive maintenance, random working time, minimal repair, replacement, reliability
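The structure of such a policy can be illustrated with a Monte Carlo estimate of the mean cost rate. This sketch simplifies the paper's model considerably: shocks arrive as a homogeneous (not non-homogeneous) Poisson process, the random working time is dropped, and the cost and rate parameters are invented for illustration.

```python
import random

def mean_cost_rate(T, lam=1.0, p_cat=0.2,
                   c_min=1.0, c_cm=10.0, c_pm=5.0,
                   n_sim=20000, seed=1):
    """Monte Carlo mean cost rate for a simplified age-based PM policy.

    Shocks arrive as a Poisson process with rate lam; each shock is
    catastrophic with probability p_cat (triggering corrective maintenance,
    which ends the cycle) and minor otherwise (minimal repair). PM is
    performed at the planned time T if no catastrophic failure came first.
    """
    rng = random.Random(seed)
    total_cost = total_time = 0.0
    for _ in range(n_sim):
        t = cost = 0.0
        while True:
            t += rng.expovariate(lam)      # time to the next shock
            if t >= T:                     # reached planned PM time first
                cost += c_pm
                t = T
                break
            if rng.random() < p_cat:       # catastrophic: CM ends the cycle
                cost += c_cm
                break
            cost += c_min                  # minor: minimal repair, continue
        total_cost += cost
        total_time += t
    return total_cost / total_time
```

Scanning T over a grid of values then approximates numerically the optimal PM schedule that the paper derives analytically.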
Procedia PDF Downloads 275
17340 Distributed Cost-Based Scheduling in Cloud Computing Environment
Authors: Rupali, Anil Kumar Jaiswal
Abstract:
Cloud computing can be defined as one of the prominent technologies that lets a user change, configure, and access services online. It is a computing paradigm that helps save a user's cost and time, and its applications can be found in various fields such as education, health, and banking. Because cloud computing is an internet-dependent technology, it is the major responsibility of Cloud Service Providers (CSPs) to take care of the data stored by users at data centers. Scheduling in a cloud computing environment plays a vital role: to achieve maximum utilization and user satisfaction, cloud providers need to schedule resources effectively. This work analyzes job scheduling for cloud computing. CloudSim 3.0.3 is used to simulate task execution and distributed scheduling methods. The research discusses job scheduling for a distributed processing environment and, by exploring this issue, finds that the proposed approach works with minimum time and lower cost. In this work, two load balancing techniques have been employed, 'Throttled stack adjustment policy' and 'Active VM load balancing policy', with two brokerage services, 'Advanced Response Time' and 'Reconfigure Dynamically', to evaluate the VM_Cost, DC_Cost, Response Time, and Data Processing Time. The proposed techniques are compared with the Round Robin scheduling policy.
Keywords: physical machines, virtual machines, support for repetition, self-healing, highly scalable programming model
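The contrast between the baseline and a throttled policy can be shown in a few lines. This is a simplified sketch of the two ideas, not CloudSim's actual implementation; the per-VM threshold of the throttled policy is an assumed parameter.

```python
def round_robin(tasks, n_vms):
    """Baseline: assign each task to VMs cyclically, ignoring current load."""
    assignment = {i: [] for i in range(n_vms)}
    for idx, task in enumerate(tasks):
        assignment[idx % n_vms].append(task)
    return assignment

def throttled(tasks, n_vms, max_per_vm):
    """Throttled policy: assign a task only to a VM whose active task count
    is below the threshold; otherwise the task waits in a queue."""
    assignment = {i: [] for i in range(n_vms)}
    queue = []
    for task in tasks:
        target = next((i for i in range(n_vms)
                       if len(assignment[i]) < max_per_vm), None)
        if target is None:
            queue.append(task)     # all VMs saturated: task must wait
        else:
            assignment[target].append(task)
    return assignment, queue
```

Round Robin keeps VMs evenly rotated regardless of load, while the throttled policy caps each VM's concurrency, which is what bounds response time at the cost of queueing.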
Procedia PDF Downloads 168
17339 A Non-Linear Damage Model for the Annulus of the Intervertebral Disc under Cyclic Loading, Including Recovery
Authors: Shruti Motiwale, Xianlin Zhou, Reuben H. Kraft
Abstract:
Military and sports personnel are often required to wear heavy helmets for extended periods of time. This leads to excessive cyclic loads on the neck and an increased chance of injury. Computational models offer one approach to understanding and predicting the time progression of disc degeneration under severe cyclic loading. In this paper, we have applied an analytic non-linear damage evolution model to estimate damage evolution in an intervertebral disc due to cyclic loads over decade-long time periods. We have also proposed a novel strategy for the inclusion of recovery in the damage model. Our results show that damage grows only 20% in the initial 75% of the life, then grows exponentially in the remaining 25% of the life. The analysis also shows that it is crucial to include recovery in a damage model.
Keywords: cervical spine, computational biomechanics, damage evolution, intervertebral disc, continuum damage mechanics
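The qualitative behavior reported (slow early growth followed by sharp late-life acceleration, moderated by recovery) can be reproduced with a simple per-cycle update rule. The functional form and every parameter value below are illustrative assumptions, not the paper's actual model.

```python
def damage_after_cycles(n_cycles, load, k=1e-5, r=1e-6, exponent=4):
    """Accumulate a damage variable D in [0, 1] cycle by cycle.

    Growth accelerates nonlinearly as D rises, giving the slow-then-
    exponential pattern, while a small linear recovery term stands in
    for healing between load cycles. All constants are illustrative.
    """
    D = 0.0
    for _ in range(n_cycles):
        growth = k * load * (1.0 + D) ** exponent   # nonlinear damage growth
        recovery = r * D                            # partial recovery
        D = min(D + growth - recovery, 1.0)         # cap at full damage
    return D
```

With these assumed constants, damage roughly doubles-plus between the first and second hundred-thousand-cycle blocks, echoing the paper's finding that most of the damage accrues late in life.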
Procedia PDF Downloads 568
17338 The Use of Boosted Multivariate Trees in Medical Decision-Making for Repeated Measurements
Authors: Ebru Turgal, Beyza Doganay Erdogan
Abstract:
Machine learning aims to model the relationship between the response and the features. Medical decision-making researchers would like to make decisions about patients' course and treatment by examining repeated measurements over time. The boosting approach is now being used in machine learning as an influential tool for these aims. The aim of this study is to demonstrate the use of multivariate tree boosting in this field. The main reason for utilizing this approach in decision-making is the ease with which it models complex relationships. To show how the multivariate tree boosting method can be used to identify important features and feature-time interactions, we used data collected retrospectively from Ankara University Chest Diseases Department records. The dataset includes repeated PF ratio measurements, with a planned follow-up time of 120 hours. A set of different models was tested. In conclusion, classification by a weighted combination of classifiers is a reliable method, as has been shown in simulations several times. Furthermore, time-varying variables are taken into consideration within this framework, making it possible to make accurate decisions about regression and survival problems.
Keywords: boosted multivariate trees, longitudinal data, multivariate regression tree, panel data
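The core boosting idea, fitting each new weak learner to the residuals of the current ensemble, is easy to show in miniature. This univariate L2-boosting sketch with decision stumps is a deliberate simplification: the paper's method handles multivariate responses and longitudinal structure, which this example does not.

```python
import numpy as np

def fit_stump(x, residual):
    """Best single-split stump (threshold plus two leaf means) by squared error."""
    best = None
    for thr in np.unique(x):
        left, right = residual[x <= thr], residual[x > thr]
        if len(left) == 0 or len(right) == 0:
            continue
        pred = np.where(x <= thr, left.mean(), right.mean())
        err = ((residual - pred) ** 2).sum()
        if best is None or err < best[0]:
            best = (err, thr, left.mean(), right.mean())
    return best[1:]

def boost(x, y, n_rounds=50, lr=0.1):
    """L2 gradient boosting: each stump is fitted to the current residuals."""
    pred = np.full_like(y, y.mean(), dtype=float)
    stumps = []
    for _ in range(n_rounds):
        thr, lv, rv = fit_stump(x, y - pred)
        stumps.append((thr, lv, rv))
        pred = pred + lr * np.where(x <= thr, lv, rv)   # shrunken update
    return pred, stumps
```

Feature importance in the full method comes from aggregating how often (and how profitably) each feature is chosen for splits across all boosting rounds.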
Procedia PDF Downloads 203
17337 ‘Ethical Relativism’ in Offshore Business: A Critical Assessment
Authors: Biswanath Swain
Abstract:
Ethical relativism, as an ethical perspective, holds that the moral worth of a course of action depends on a particular space and time: the moral rightness or wrongness of a course of action varies from space to space and from time to time. In short, ethical relativism holds that morality is relative to context. If we reflect conscientiously on the scope of this perspective, we find that it is widespread among marketers involved in offshore business. The irony, however, is that most of the marketers who have gone along with ethical relativism in their offshore business have been found to be unsuccessful, in terms of loss of market share and bankruptcy. The upshot is purely self-defeating for the marketers. GSK in China and Nestle Maggi in India are burning examples of this. The paper argues and recommends that a marketer should instead have recourse to the Kantian ethical perspective when deliberating courses of action in offshore business, as the Kantian perspective is logically and methodologically sound.
Keywords: business, course of action, Kant, morality, offshore, relativism
Procedia PDF Downloads 303
17336 Futuristic Black Box Design Considerations and Global Networking for Real Time Monitoring of Flight Performance Parameters
Authors: K. Parandhama Gowd
Abstract:
The aim of this research paper is to conceptualize, discuss, analyze, and propose alternative design methodologies for a futuristic black box for flight safety. The proposal also includes global networking concepts for real-time surveillance and monitoring of flight performance parameters, including GPS parameters. It is expected that this proposal will serve as a failsafe real-time diagnostic tool for accident investigation and for locating debris in real time. In this paper, an attempt is made to improve the existing methods of flight data recording and the design considerations for a futuristic FDR, to overcome the trauma of being unable to locate the black box. Since modern communications and information technologies with large bandwidth are available, coupled with faster computer processing techniques, the attempt made in this paper to develop a failsafe recording technique is feasible. Furthermore, data fusion and data warehousing technologies are available for exploitation.
Keywords: flight data recorder (FDR), black box, diagnostic tool, global networking, cockpit voice and data recorder (CVDR), air traffic control (ATC), air traffic, telemetry, tracking and control centers (ATTTCC)
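One ingredient of such continuous downlinking is a well-defined telemetry frame that ground stations can decode unambiguously. The sketch below uses a hypothetical JSON schema with invented field names purely for illustration; real FDR and telemetry formats are governed by aviation standards, not by this layout.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class FlightTelemetry:
    """One downlinked frame of flight performance parameters.

    Hypothetical schema for illustration only; field names and units
    are assumptions, not a standardized format.
    """
    flight_id: str
    timestamp: float     # seconds since epoch
    lat: float           # GPS latitude, degrees
    lon: float           # GPS longitude, degrees
    alt_ft: float        # altitude, feet
    airspeed_kt: float   # indicated airspeed, knots

def encode_frame(frame):
    """Serialize a frame for streaming to a ground station."""
    return json.dumps(asdict(frame), sort_keys=True).encode("utf-8")

def decode_frame(payload):
    """Reconstruct a frame on the receiving side."""
    return FlightTelemetry(**json.loads(payload.decode("utf-8")))
```

A stream of such frames, archived at ground stations, is what would let investigators replay a flight's last known positions without first recovering the physical recorder.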
Procedia PDF Downloads 572
17335 The Foundation Binary-Signals Mechanics and Actual-Information Model of Universe
Authors: Elsadig Naseraddeen Ahmed Mohamed
Abstract:
In contrast to the uncertainty and complementarity principles, this paper shows that the probability of the simultaneous occupation of any definite values of coordinates by definite values of momentum and energy, at any definite instant of time, can be described by a binary definite function: the difference between the numbers of occupation and evacuation epochs up to that time, equivalently the number of exchanges between occupation and evacuation epochs up to that time, modulo two. These binary definite quantities are defined at every point on the real time line, so they form a binary signal that represents a complete mechanical description of physical reality. The times of these exchanges mark the boundaries of the occupation and evacuation epochs, from which the binary signals can be calculated, using the fact that the universe's events actually extend along the positive and negative real time line in a single direction of extension as the number of exchanges increases. There thus exists a noninvertible transformation matrix, defined as the product of an invertible rotation matrix and a noninvertible scaling matrix, which change the direction and magnitude of the exchange-event vector, respectively. These noninvertible transformations are called actual transformations, in contrast to information transformations, by which the universe's events transformed by actual transformations can be navigated backward and forward along the real time line; the information transformations are derived as elements of a group associated with their corresponding actual transformations.
The actual and information model of the universe is derived by assuming the existence of a time instant zero, before and at which no coordinate is occupied by any definite values of momentum and energy, after which the universe begins expanding in spacetime. This assumption makes Laplace's demon superfluous: there is no need for an observer who, at one moment, measures the positions and momenta of all constituent particles of the universe and then uses the laws of classical mechanics to predict its entire past and future. We only need to establish analog-to-digital converters that sense the binary signals determining the boundaries of the occupation and evacuation epochs of the definite values of coordinates, relative to their origin, by the definite values of momentum and energy; from these present events of the universe, its past and future events can be predicted approximately with high precision.
Keywords: binary-signal mechanics, actual-information model of the universe, actual transformation, information transformation, uncertainty principle, Laplace's demon
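Read literally, the binary signal described above is simply the parity of the exchange count up to a given time. A tiny sketch (assuming, purely for illustration, that the state starts in an evacuation epoch at time zero):

```python
def occupation_signal(exchange_times, t):
    """Binary occupation signal at time t: the number of exchange events
    up to and including t, modulo two. Under the assumed initial state
    (evacuated at time zero), 1 means occupied and 0 means evacuated."""
    return sum(1 for tau in exchange_times if tau <= t) % 2
```

The exchange times are exactly the epoch boundaries, so the signal is a square wave flipping at each of them.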
Procedia PDF Downloads 175
17334 Applying Lean Six Sigma in an Emergency Department of a Private Hospital
Authors: Sarah Al-Lumai, Fatima Al-Attar, Nour Jamal, Badria Al-Dabbous, Manal Abdulla
Abstract:
Today, many commonly used Industrial Engineering tools and techniques are being applied in hospitals around the world with the goal of producing a more efficient and effective healthcare system. A common quality improvement methodology known as Lean Six Sigma has been successful in manufacturing industries and, recently, in healthcare. The objective of our project is to use the Lean Six Sigma methodology to reduce waiting time in the Emergency Department (ED) of a local private hospital. Furthermore, a comprehensive literature review was conducted to evaluate the success of Lean Six Sigma in the ED. According to a study conducted at Ibn Sina Hospital in Morocco, the most common problem that patients complain about is waiting time. To ensure patient satisfaction, many hospitals, such as North Shore University Hospital, were able to reduce waiting time by up to 37% using Lean Six Sigma. Other hospitals, such as the Johns Hopkins medical center, used Lean Six Sigma successfully to enhance the overall patient flow, which ultimately decreased waiting time. Furthermore, it was found that capacity constraints, such as staff shortages and lack of beds, were among the main reasons behind long waiting times. With the use of Lean Six Sigma and bed management, hospitals like Memorial Hermann Southwest Hospital were able to reduce patient delays. Moreover, in order to implement Lean Six Sigma successfully in our project, two common methodologies were considered: DMAIC and DMADV. After assessing both methodologies, DMAIC was found to be the more suitable approach for our project because it is concerned with improving an already existing process. For all its successes, Lean Six Sigma has its limitations, especially in healthcare, but these can be minimized if properly approached.
Keywords: lean six sigma, DMAIC, hospital, methodology
Procedia PDF Downloads 496
17333 Electronic Media and Physical Activity of Primary School Children
Authors: Srna Jenko Miholic, Marta Borovec, Josipa Persun
Abstract:
The constant expansion of technology has further accelerated the development of media, and vice versa. Although its promotion includes all kinds of interesting and positive aspects, the negative effects of the media are still being researched and demonstrated. Young people, and children from the earliest age, turn to the media the most, so it is necessary to defend the role of adults, whether parents, teachers, or the wider environment, against virtual co-educators such as the media. The aim of this study was to determine how much time primary school children spend using electronic media, as well as their involvement in certain physical activities, and, furthermore, to determine what happens when parents restrict their children's access to electronic media and encourage them to participate in alternative content during their leisure time. The results reveal that a higher percentage of parents restrict their children's access to electronic media and then encourage children to socialize with family and friends, spend time outdoors, engage in physical activity, read books, or learn something unrelated to school content, even though these may not be the children's favorite activities. The results highlight the importance of parental control when it comes to children's use of electronic media, and the positive effects that parental control has in encouraging children toward useful, socially desirable, physically active, and healthy activities.
Keywords: elementary school, digital media, leisure time, parents, physical engagement
Procedia PDF Downloads 147