Search results for: Energy Management System (EMS)
4161 A Comprehensive CFD Model for Sugar-Cane Bagasse Heterogeneous Combustion in a Grate Boiler System
Authors: Daniel José de Oliveira Ferreira, Juan Harold Sosa-Arnao, Bruno Cássio Moreira, Leonardo Paes Rangel, Song Won Park
Abstract:
Comprehensive CFD models have been used to represent and study the heterogeneous combustion of biomass. In the present work, the operation of the global flue gas circuit in sugar-cane bagasse combustion is simulated, from the wind boxes below the primary air grate supply, through the bagasse insertion in swirl burners and the boiler furnace, to the boiler bank outlet. Five different meshes are used, representing each part of this system in sequence: wind boxes and grate, boiler furnace, swirl burners, superheaters, and boiler bank. The model considers turbulence using the standard k-ε model, combustion using EDM, radiation heat transfer using DTM with 16 ray directions, and bagasse particle tracking represented by the Schiller-Naumann model. The results showed good agreement with the behavior expected from the literature and from the equipment design. The more detailed view of the results in the separate parts of the flue gas system reveals some flow behaviors that cannot be represented by the usual simplifications, such as bagasse supply under homogeneous axial and rotational vectors, and others that can be represented using new considerations, such as the representation of 26 thousand grate orifices by 144 rectangular inlets.
Keywords: comprehensive CFD model, sugar-cane bagasse combustion, sugar-cane bagasse grate boiler, axial
Procedia PDF Downloads 473
4160 Nonlinear Analysis in Investigating the Complexity of Neurophysiological Data during Reflex Behavior
Authors: Juliana A. Knocikova
Abstract:
Methods of nonlinear signal analysis are based on the finding that random behavior can arise in deterministic nonlinear systems with a few degrees of freedom. In dynamical systems, entropy is usually understood as a rate of information production. Changes in the temporal dynamics of physiological data indicate the evolution of the system in time, and thus the level of new signal pattern generation. During the last decades, many algorithms have been introduced to assess patterns of physiological responses to an external stimulus. However, reflex responses are usually characterized by short periods of time, which represents a great limitation for the usual methods of nonlinear analysis. To solve the problem of short recordings, the approximate entropy parameter has been introduced as a measure of system complexity. A low value of this parameter reflects regularity and predictability in the analyzed time series. On the other hand, an increase in this parameter means unpredictability and random behavior, hence a higher system complexity. Reduced neurophysiological data complexity has been observed repeatedly when analyzing electroneurogram and electromyogram activities during defence reflex responses. Quantitative phrenic neurogram changes are also obvious during severe hypoxia, as well as during airway reflex episodes. In conclusion, the approximate entropy parameter serves as a convenient tool for the analysis of reflex behavior characterized by short-lasting time series.
Keywords: approximate entropy, neurophysiological data, nonlinear dynamics, reflex
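The approximate entropy measure described above can be sketched in a few lines of Python. The defaults below (template length m = 2, tolerance r = 0.2·SD) are common choices in the literature, not parameters taken from this study:

```python
import numpy as np

def approximate_entropy(x, m=2, r=None):
    """Approximate entropy (ApEn): low values mean regularity and
    predictability, high values mean randomness and complexity."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * np.std(x)  # common tolerance choice: 20% of the SD

    def phi(m):
        # all overlapping templates of length m
        templates = np.array([x[i:i + m] for i in range(n - m + 1)])
        # for each template, count templates within Chebyshev distance r
        counts = np.array([
            np.sum(np.max(np.abs(templates - t), axis=1) <= r)
            for t in templates
        ])
        return np.mean(np.log(counts / (n - m + 1)))

    return phi(m) - phi(m + 1)
```

On a short regular series (e.g., a sampled sine) this returns a markedly lower value than on white noise of the same length, which is exactly the property the abstract exploits for short reflex recordings.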
Procedia PDF Downloads 300
4159 Applying ARIMA Data Mining Techniques to ERP to Generate Sales Demand Forecasting: A Case Study
Authors: Ghaleb Y. Abbasi, Israa Abu Rumman
Abstract:
This paper modeled sales history from 2012 to 2015, aggregated in monthly bins, for five products of a medical supply company in Jordan. Consistent patterns extracted from the sales demand history in the Enterprise Resource Planning (ERP) system were used to generate sales demand forecasts using the time series statistical technique called Auto Regressive Integrated Moving Average (ARIMA). ARIMA was used to model realistic sales demand patterns and to decide the best model for each of the five products. The analysis revealed that the current replenishment system leads to inventory overstocking.
Keywords: ARIMA models, sales demand forecasting, time series, R code
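As a minimal illustration of the ARIMA idea, the sketch below fits only the autoregressive core, an AR(1) model on an already-stationary series, by ordinary least squares; the data and coefficients are synthetic assumptions, not the company's sales history:

```python
import numpy as np

def fit_ar1(y):
    """Least-squares fit of y_t = c + phi * y_{t-1} + e_t,
    the autoregressive core of an ARIMA(1, 0, 0) model."""
    y = np.asarray(y, dtype=float)
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    c, phi = np.linalg.lstsq(X, y[1:], rcond=None)[0]
    return c, phi

def forecast(y, steps, c, phi):
    """Iterate the fitted recurrence forward to produce point forecasts."""
    out, last = [], float(y[-1])
    for _ in range(steps):
        last = c + phi * last
        out.append(last)
    return out
```

A full ARIMA fit (with differencing and moving-average terms) would normally be delegated to a statistics package, as the authors do with R; this sketch only shows the recurrence being estimated.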
Procedia PDF Downloads 385
4158 Study of Some Aromatic Thiourea Derivatives as Lube Oil Antioxidant
Authors: Rasha S. Kamal, Nehal S. Ahmed, Amal M. Nassar, Nour E. A. Abd El-Sattar
Abstract:
In the present work, some lube oil antioxidants based on esters of an aromatic thiourea derivative were prepared in two steps. The first step is the reaction of succinyl chloride with ammonium thiocyanate and anthranilic acid, as a three-component system, to prepare the thiourea derivative (A); the second step is the esterification of compound (A) with different alcohols (decyl C₁₀, tetradecyl C₁₄, and octadecyl C₁₈). The structures of the prepared compounds were confirmed by infrared spectroscopy, nuclear magnetic resonance, elemental analysis, and determination of the molecular weights. All the prepared compounds were soluble in lube oil. The efficiency of the prepared compounds as antioxidant lube oil additives was investigated, and it was found that they give good results as lube oil antioxidants.
Keywords: antioxidant lube oil, three component system, aromatic thiourea derivatives, esterification
Procedia PDF Downloads 242
4157 A Concept of Rational Water Management at Local Utilities: The Use of RO for Water Supply and Wastewater Treatment/Reuse
Authors: N. Matveev, A. Pervov
Abstract:
Local utilities often face problems with local industrial wastes and storm water disposal due to existing strict regulations. For many local industries, the problem of wastewater treatment and discharge into surface reservoirs cannot be solved through the use of conventional biological treatment techniques. Current discharge standards require very strict removal of a number of impurities such as ammonia, nitrates, and phosphates, and expensive reagents and sorbents are used to reach this level of removal. The modern concept of rational water resources management requires the development of new efficient techniques that provide wastewater treatment and reuse. As RO membranes simultaneously reject all dissolved impurities such as BOD, TDS, ammonia, and phosphates, they become very attractive for the direct treatment of wastewater without a biological stage. To treat wastewater, specially designed "open channel" membrane modules are used that do not possess "dead areas" that cause fouling or require pretreatment. A solution to the RO concentrate disposal problem is presented that consists of reducing the initial wastewater volume by a factor of 100, so that the concentrate is withdrawn from the membrane unit as sludge moisture. The efficient use of membrane RO techniques is connected with the salt balance in the water system; thus, to provide high ecological efficiency of the developed techniques, all components of the water supply and wastewater discharge systems should be accounted for.
Keywords: reverse osmosis, stormwater treatment, open-channel module, wastewater reuse
Procedia PDF Downloads 319
4156 Winged Test Rocket with Fully Autonomous Guidance and Control for Realizing Reusable Suborbital Vehicle
Authors: Koichi Yonemoto, Hiroshi Yamasaki, Masatomo Ichige, Yusuke Ura, Guna S. Gossamsetti, Takumi Ohki, Kento Shirakata, Ahsan R. Choudhuri, Shinji Ishimoto, Takashi Mugitani, Hiroya Asakawa, Hideaki Nanri
Abstract:
This paper presents the strategic development plan of the winged rockets WIRES (WInged REusable Sounding rocket), which aim at an unmanned suborbital winged rocket for demonstrating future fully reusable space transportation technologies, such as aerodynamics, Navigation, Guidance and Control (NGC), composite structures, propulsion systems, and cryogenic tanks, developed by universities in collaboration with government and industry. Past and current flight test results are also presented.
Keywords: autonomous guidance and control, reusable rocket, space transportation system, suborbital vehicle, winged rocket
Procedia PDF Downloads 365
4155 Stochastic Matrices and Lp Norms for Ill-Conditioned Linear Systems
Authors: Riadh Zorgati, Thomas Triboulet
Abstract:
In quite diverse application areas, such as astronomy, medical imaging, geophysics, or nondestructive evaluation, many problems related to calibration, fitting, or estimation of a large number of input parameters of a model from a small amount of noisy output data can be cast as inverse problems. Due to noisy data corruption, insufficient data, and model errors, most inverse problems are ill-posed in the Hadamard sense, i.e., existence, uniqueness, and stability of the solution are not guaranteed. A wide class of inverse problems in physics relates to the Fredholm equation of the first kind. The ill-posedness of such an inverse problem results, after discretization, in a very ill-conditioned linear system of equations: the condition number of the associated matrix can typically range from 10⁹ to 10¹⁸. This condition number plays the role of an amplifier of uncertainties on the data during inversion and thus renders the inverse problem difficult to handle numerically. Similar problems appear in other areas, such as numerical optimization, where the use of interior point algorithms for solving linear programs leads to ill-conditioned systems of linear equations. Devising efficient solution approaches for such systems of equations is therefore of great practical interest. Efficient iterative algorithms are proposed for solving a system of linear equations. The approach is based on a preconditioning of the initial matrix of the system with an approximation of a generalized inverse, leading to a stochastic preconditioned matrix. This approach, valid for non-negative matrices, is first extended to Hermitian, positive semi-definite matrices and then generalized to any complex rectangular matrices. The main results obtained are as follows. 1) We are able to build a generalized inverse of any complex rectangular matrix which satisfies the convergence condition requested in iterative algorithms for solving a system of linear equations. This completes the (short) list of generalized inverses having this property, after the Kaczmarz and Cimmino matrices. Theoretical results on both the characterization of the type of generalized inverse obtained and the convergence are derived. 2) Thanks to its properties, this matrix can be efficiently used in different solving schemes such as Richardson-Tanabe or preconditioned conjugate gradients. 3) By using Lp norms, we propose generalized Kaczmarz-type matrices. We also show how Cimmino's matrix can be considered as a particular case consisting in choosing the Euclidean norm in an asymmetrical structure. 4) Regarding numerical results obtained on some pathological well-known test cases (Hilbert, Nakasaka, …), some of the proposed algorithms are empirically shown to be more efficient on ill-conditioned problems and more robust to error propagation than the known classical techniques we have tested (Gauss, Moore-Penrose inverse, minimum residue, conjugate gradients, Kaczmarz, Cimmino). We end with a very early prospective application of our approach based on stochastic matrices, aiming at computing some parameters of the solution of a linear system (such as the extreme values, the mean, the variance, …) prior to its resolution. Such an approach, if it were to be efficient, would be a source of information on the solution of a system of linear equations.
Keywords: conditioning, generalized inverse, linear system, norms, stochastic matrix
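Since the abstract builds on Kaczmarz- and Cimmino-type iterations, a minimal sketch of the classical Kaczmarz method (cyclic projection onto each row's hyperplane) may help fix ideas; it is shown here on a small well-conditioned system rather than the pathological test cases the authors study:

```python
import numpy as np

def kaczmarz(A, b, iters=500):
    """Classical Kaczmarz iteration: cyclically project the current iterate
    onto the hyperplane of each row equation a_i . x = b_i."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    m, n = A.shape
    x = np.zeros(n)
    for k in range(iters):
        i = k % m
        a = A[i]
        x = x + (b[i] - a @ x) / (a @ a) * a
    return x
```

For a consistent system, each projection leaves the iterate no farther from the solution set, which is the convergence property the proposed generalized inverses are designed to share.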
Procedia PDF Downloads 136
4154 Nondecoupling Signatures of Supersymmetry and an Lμ-Lτ Gauge Boson at Belle-II
Authors: Heerak Banerjee, Sourov Roy
Abstract:
Supersymmetry, one of the most celebrated fields of study for explaining experimental observations where the standard model (SM) falls short, is reeling from the lack of experimental vindication. At the same time, the idea of additional gauge symmetry, in particular the gauged Lμ-Lτ symmetric models, has also generated significant interest. Such models have been extensively proposed in order to explain the tantalizing discrepancy between the predicted and measured values of the muon anomalous magnetic moment, alongside several other issues plaguing the SM. While very little parameter space within these models remains unconstrained, this work finds that the γ + Missing Energy (ME) signal at the Belle-II detector will be a smoking gun for supersymmetry (SUSY) in the presence of a gauged U(1)Lμ-Lτ symmetry. A remarkable consequence of breaking the enhanced symmetry appearing in the limit of degenerate (s)leptons is the nondecoupling of the radiative contribution of heavy charged sleptons to the γ-Z΄ kinetic mixing. The signal process, e⁺e⁻ → γZ΄ → γ + ME, is an outcome of this ubiquitous feature. Taking into account the severe constraints on gauged Lμ-Lτ models from several low-energy observables, it is shown that any significant excess in all but the highest photon energy bin would be an undeniable signature of such heavy scalar fields in SUSY coupling to the additional gauge boson Z΄. The number of signal events depends crucially on the logarithm of the ratio of stau to smuon mass in the presence of SUSY. In addition, the number is inversely proportional to the e⁺e⁻ collision energy, making a low-energy, high-luminosity collider like Belle-II an ideal testing ground for this channel. This process can probe large swathes of the hitherto free slepton mass ratio vs. additional gauge coupling (gₓ) parameter space. More importantly, it can explore the narrow slice of Z΄ mass (MZ΄) vs. gₓ parameter space still allowed in gauged U(1)Lμ-Lτ models for superheavy sparticles. The finding that the signal significance is independent of the individual slepton masses is an exciting prospect. Further, the prospect that signatures of even superheavy SUSY particles that may have escaped detection at the LHC may show up at the Belle-II detector is an invigorating revelation.
Keywords: additional gauge symmetry, electron-positron collider, kinetic mixing, nondecoupling radiative effect, supersymmetry
Procedia PDF Downloads 127
4153 Effect of Sensory Manipulations on Human Joint Stiffness Strategy and Its Adaptation for Human Dynamic Stability
Authors: Aizreena Azaman, Mai Ishibashi, Masanori Ishizawa, Shin-Ichiroh Yamamoto
Abstract:
Sensory input plays an important role in the human posture control system, initiating strategies to counteract any unbalanced condition and thus prevent falls. In a previous study, joint stiffness was observed to be able to describe certain issues regarding movement performance, but the correlation between balance ability and joint stiffness still remains unknown. In this study, the joint stiffening strategies at the ankle and hip were observed under different sensory manipulations, and their correlation with a conventional clinical test of balance ability (the Functional Reach Test) was investigated. In order to create unstable conditions, two different surface perturbations (tilt up-tilt down (TT) and forward-backward (FB)) at four different frequencies (0.2, 0.4, 0.6, and 0.8 Hz) were introduced. Furthermore, four different sensory manipulation conditions (involving the visual and vestibular systems) were applied, and the subjects were asked to maintain their position as well as possible. The results suggested that joint stiffness was high during difficult balance situations. Subjects with poorer balance generated higher average joint stiffness than those with good balance. Besides, the adaptation of the posture control system under repetitive external perturbation appeared reduced under sensory-limited conditions. Overall, analysis of the joint stiffening response makes it possible to predict unbalanced situations faced by humans.
Keywords: balance ability, joint stiffness, sensory, adaptation, dynamic
Procedia PDF Downloads 460
4152 Active Deformable Micro-Cutters with Nano-Abrasives
Authors: M. Pappa, C. Efstathiou, G. Livanos, P. Xidas, D. Vakondios, E. Maravelakis, M. Zervakis, A. Antoniadis
Abstract:
The choice of cutting tools in manufacturing processes is an essential parameter on which the required manufacturing time, the consumed energy, and the overall cost all depend. If the number of tool changes could be minimized, or even eliminated, by using a single convex tool providing multiple profiles, then a significant saving in time and energy, as well as in tool cost, would be achieved. A typical machine contains a variety of tools in order to deal with different curvatures and material removal rates. In order to minimize the required cutting tool changes, Actively Deformable micro-Cutters (ADmC) will be developed. Their design will be based on the same cutting technique and mounting method as those of typical cutters.
Keywords: deformable cutters, cutting tool, milling, turning, manufacturing
Procedia PDF Downloads 452
4151 Design and Analysis of Piping System with Supports Using CAESAR-II
Authors: M. Jamuna Rani, K. Ramanathan
Abstract:
A steam power plant houses various types of equipment, such as boilers, turbines, and heat exchangers, which are mainly connected by piping systems. Such a piping layout design depends mainly on stress analysis and flexibility, and it varies with the pipe's geometrical properties, pressure, temperature, and supports. The present paper analyzes the presence and effect of hangers and expansion joints in the piping layout/routing using the CAESAR-II software. The main aim of piping stress analysis is to provide adequate flexibility for absorbing thermal expansion, along with code compliance for the stresses and displacements incurred in the piping system. The design is said to be safe if all of these are within the allowable range as per the code. In this study, a sample problem is analyzed as per the ASME B31.1 power piping code, and the results thus obtained are compared.
Keywords: ASME B31.1, hanger, expansion joint, CAESAR-II
Procedia PDF Downloads 364
4150 An Insight into the Probabilistic Assessment of Reserves in Conventional Reservoirs
Authors: Sai Sudarshan, Harsh Vyas, Riddhiman Sherlekar
Abstract:
The oil and gas industry has been unwilling to adopt a stochastic definition of reserves. Nevertheless, Monte Carlo simulation methods have gained acceptance among engineers, geoscientists, and other professionals who want to evaluate prospects or otherwise analyze problems that involve uncertainty. One of the common applications of Monte Carlo simulation is the estimation of recoverable hydrocarbons from a reservoir. Monte Carlo simulation makes use of random samples of parameters or inputs to explore the behavior of a complex system or process. It finds application whenever one needs to make an estimate, forecast, or decision where there is significant uncertainty. First, the project focuses on performing Monte Carlo simulation on a given data set using the U.S. Department of Energy's MonteCarlo software, a freeware E&P tool. Further, an algorithm for the simulation has been developed for MATLAB; the program performs the simulation by prompting the user for input distributions and the parameters associated with each distribution (i.e., mean, standard deviation, min., max., most likely, etc.), as well as for the desired probability for which reserves are to be calculated. The algorithm so developed and tested in MATLAB was further implemented in Python, where existing libraries for statistics and graph plotting were imported to generate better outcomes. With PyQt Designer, code for a simple graphical user interface has also been written. The plotted graph is then validated against the results already available from the U.S. DOE MonteCarlo software.
Keywords: simulation, probability, confidence interval, sensitivity analysis
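A minimal sketch of the kind of Monte Carlo reserves estimate described above, using NumPy. The standard volumetric original-oil-in-place formula is used, but every input distribution below is an illustrative assumption, not the data set from the project:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Illustrative input distributions (hypothetical values, not the paper's data)
area = rng.triangular(800, 1200, 2000, n)   # drainage area, acres
thickness = rng.triangular(20, 35, 60, n)   # net pay, ft
porosity = rng.normal(0.18, 0.02, n)        # fraction
sw = rng.normal(0.30, 0.04, n)              # water saturation, fraction
bo = rng.normal(1.2, 0.05, n)               # formation volume factor, rb/stb

# Volumetric original oil in place, stock-tank barrels
ooip = 7758 * area * thickness * porosity * (1 - sw) / bo

# Percentiles reported in the usual P90/P50/P10 convention
p90, p50, p10 = np.percentile(ooip, [10, 50, 90])
```

Reading off the percentile for a user-chosen probability, as the MATLAB/Python program in the abstract does, is then a single `np.percentile` call on the sampled distribution.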
Procedia PDF Downloads 382
4149 Simulation Studies of High-Intensity, Nanosecond Pulsed Electric Fields Induced Dynamic Membrane Electroporation
Authors: Jiahui Song
Abstract:
The application of an electric field can cause poration of cell membranes, including the outer plasma membrane as well as the membranes of intracellular organelles. In order to analyze and predict such electroporation effects, it is necessary to first evaluate the electric fields and the transmembrane voltages. This information can then be used to assess changes in the pore formation energy, which finally yields the pore distributions and their radii based on the Smoluchowski equation. A dynamic pore model can be achieved by including a dynamic aspect and a dependence on the pore population density in the pore formation energy equation. These changes make the pore formation energy E(r) self-adjusting in response to pore formation, without causing uncontrolled growth and expansion. Using dynamic membrane tension, membrane electroporation in response to a 180 kV/cm trapezoidal pulse with a 10 ns on-time and 1.5 ns rise and fall times is discussed. Poration is predicted to occur at times beyond the peak, at around 9.2 ns. Modeling also yields time-dependent distributions of the membrane pore population after multiple pulses; it shows that the pore distribution shifts to larger radii with multiple pulsing. Molecular dynamics (MD) simulations were also carried out for a fixed field of 0.5 V/nm to demonstrate nanopore formation from a microscopic point of view. The result shows a pore of about 0.9 nm in diameter, somewhat narrower at the central point.
Keywords: high-intensity, nanosecond, dynamics, electroporation
Procedia PDF Downloads 160
4148 Numerical Investigation of a Supersonic Ejector for Refrigeration System
Authors: Karima Megdouli, Bourhan Taschtouch
Abstract:
Supersonic ejectors have many applications in refrigeration systems, and improving ejector performance is key to improving the efficiency of these systems. One of the main advantages of the ejector is its geometric simplicity and the absence of moving parts. This paper presents a theoretical model for evaluating the performance of a new supersonic ejector configuration for refrigeration system applications. The relationship between the flow field and the key parameters of the new configuration is illustrated by analyzing the Mach number and flow velocity contours. The method of characteristics (MOC) is used to design the supersonic nozzle of the ejector, and the results obtained are compared with those obtained by CFD. The ejector is optimized by minimizing the exergy destruction due to irreversibility and shock waves. The optimization converges to an efficient optimum solution, ensuring improved and stable performance over the whole considered range of uncertain operating conditions.
Keywords: supersonic ejector, theoretical model, CFD, optimization, performance
Procedia PDF Downloads 76
4147 Comparison of Dose Rate and Energy Dependence of Soft Tissue Equivalence Dosimeter with Electron and Photon Beams Using Magnetic Resonance Imaging
Authors: Bakhtiar Azadbakht, Karim Adinehvand, Amin Sahebnasagh
Abstract:
The purpose of this study was to evaluate the dependence of the PAGAT polymer gel dosimeter 1/T2 response on different electron and photon energies, as well as on different mean dose rates, for a standard clinically used Co-60 therapy unit and an Elekta linear accelerator. A multi-echo sequence with 32 equidistant echoes was used for the evaluation of the irradiated polymer gel dosimeters. The optimal post-manufacture irradiation and post-imaging times were both determined to be one day. The sensitivity of the PAGAT polymer gel dosimeter under irradiation with photon and electron beams is represented by the slope of the calibration curve in the linear region measured for each modality. The response of PAGAT gel to photon and electron beams is very similar in the lower dose region. The R2-dose response was linear up to 30 Gy. For electron beams, the R2-dose response is not exact for doses below 3 Gy; for photon beams, it is not exact below 2 Gy. Dosimeter energy dependence was studied for electron energies of 4, 12, and 18 MeV and photon energies of 1.25, 4, 6, and 18 MV. Dose rate dependence was studied in a 6 MeV electron beam and a 6 MV photon beam using dose rates of 80, 160, 240, 320, 400, and 480 cGy/min. Evaluation of the dosimeters was performed on a Siemens Symphony (Germany) 1.5 T scanner in the head coil. In this study, no trend in polymer gel dosimeter 1/T2 dependence on mean dose rate or energy was found for electron and photon beams.
Keywords: polymer gels, PAGAT gel, electron and photon beams, MRI
Procedia PDF Downloads 473
4146 Opacity Synthesis with Orwellian Observers
Authors: Moez Yeddes
Abstract:
The property of opacity is widely used in the formal verification of security in computer systems and protocols. Opacity is a general language-theoretic scheme for many security properties of systems: it is parametrized by a framework in which several security properties of a system can be expressed. A secret behaviour of a system is opaque if a passive attacker can never deduce its occurrence from the observation of the system. Instead of considering static observability, where the set of observable events is fixed off-line, or dynamic observability, where the set of observable events changes over time depending on the history of the trace, we introduce Orwellian partial observability, where unobservable events are not revealed provided that no downgrading event ever occurs in the future of the trace. Orwellian partial observability is needed to model intransitive information flow; this Orwellian observability is known as the ipurge function. We showed in previous work that verifying whether a regular secret is opaque for a regular language L w.r.t. an Orwellian projection is PSPACE-complete, while it has been proved undecidable even for a regular language L w.r.t. a general Orwellian observation function. In this paper, we address two problems of opacification of a regular secret ϕ for a regular language L w.r.t. an Orwellian projection: given L and a secret ϕ ∈ L, the first problem consists in computing some minimal regular super-language M of L, if it exists, such that ϕ is opaque for M; the second consists in computing the supremal sub-language M′ of L such that ϕ is opaque for M′. We derive both language-theoretic characterizations and algorithms to solve these two dual problems.
Keywords: security policies, opacity, formal verification, Orwellian observation
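For finite trace sets, the definition of opacity used above (a passive attacker can never deduce the occurrence of the secret from the observation) reduces to a simple check. The sketch below uses a static projection for illustration; the Orwellian/ipurge case would substitute a history-dependent `observe` function:

```python
def is_opaque(language, secret, observe):
    """The secret is opaque iff every secret trace is observationally
    equivalent to some non-secret trace, so the attacker can never be sure."""
    non_secret_obs = {observe(t) for t in language if t not in secret}
    return all(observe(t) in non_secret_obs for t in secret)

def projection(observable):
    """Static projection: erase every event outside the observable set."""
    return lambda trace: tuple(e for e in trace if e in observable)

obs = projection({"a", "b"})
# Secret trace ("h", "a") observes to ("a",), matched by the non-secret
# trace ("a",): the secret stays opaque. A secret trace ("h", "b") would
# observe to ("b",), which no non-secret trace produces: not opaque.
```

The opacification problems studied in the paper then ask how to enlarge or restrict the language itself until this check succeeds.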
Procedia PDF Downloads 225
4145 Blindness and Deafness, the Outcomes of Varicella Zoster Virus Encephalitis in HIV Positive Patient
Authors: Hadiseh Hosamirudsari, Farhad Afsarikordehmahin, Pooria Sekhavatfar
Abstract:
Concomitant cortical blindness and deafness following varicella zoster virus (VZV) infection is rare. We describe a case of ophthalmic zoster that caused cortical blindness and deafness after central nervous system (CNS) involvement. A 42-year-old HIV-infected woman developed progressive blurry vision and deafness four weeks after ophthalmic zoster. The physical examination and a positive VZV polymerase chain reaction (PCR) of the cerebrospinal fluid (CSF) suggested VZV encephalitis, and a complication of VZV encephalitis is considered the cause of the blindness and deafness. In patients with neurological deficits, especially those with a history of herpes zoster, VZV infection should be regarded as a possible responsible agent in inflammatory disorders of the nervous system. The immunocompromised state of the patient (including HIV infection) is as important as the VZV infection itself in the development of the disease.
Keywords: blindness, deafness, HIV, VZV encephalitis
Procedia PDF Downloads 308
4144 Investigation on the Behavior of Conventional Reinforced Coupling Beams
Authors: Akash K. Walunj, Dipendu Bhunia, Samarth Gupta, Prabhat Gupta
Abstract:
Coupled shear walls consist of two shear walls connected intermittently by beams along the height. The behavior of coupled shear walls is mainly governed by the coupling beams, which are designed for ductile inelastic behavior in order to dissipate energy. The base of the shear walls may be designed for elastic or ductile inelastic behavior. The amount of energy dissipation depends on the yield moment capacity and plastic rotation capacity of the coupling beams. In this paper, an analytical model of the coupling beam was developed to calculate the rotations and moment capacities of coupling beams with conventional reinforcement.
Keywords: design studies, computational model(s), case study/studies, modelling, coupling beam
Procedia PDF Downloads 476
4143 Improvements of the Difficulty in Hospital Acceptance at the Scene by the Introduction of Smartphone Application for Emergency-Medical-Service System: A Population-Based Before-And-After Observation Study in Osaka City, Japan
Authors: Yusuke Katayama, Tetsuhisa Kitamura, Kosuke Kiyohara, Sumito Hayashida, Taku Iwami, Takashi Kawamura, Takeshi Shimazu
Abstract:
Background: Recently, the number of ambulance dispatches has been increasing in Japan, and it is therefore difficult to accept emergency patients to hospitals smoothly and appropriately because of the limited hospital capacity. To facilitate requests for patient transport by ambulances and hospital acceptance, emergency information systems using information technology have been built and introduced in various communities; however, their effectiveness has not been sufficiently demonstrated in Japan. In 2013, we developed a smartphone application system that enables emergency-medical-service (EMS) personnel to share information about the on-scene ambulance and hospital situation. The aim of this study was to assess the effect of introducing this application into the EMS system in Osaka City, Japan. Methods: This was a retrospective study using the population-based ambulance records of the Osaka Municipal Fire Department, covering the six years from January 1, 2010 to December 31, 2015. We enrolled emergency patients for whom on-scene EMS personnel conducted hospital selection. The main endpoint was difficulty in hospital acceptance at the scene, defined as EMS personnel making five or more phone calls from the scene to individual hospitals before a decision to transport was reached. The smartphone application group was defined as the emergency patients transported in 2013-2015, after the introduction of the application, and we assessed the effect of introducing the smartphone application with a multivariable logistic regression model. Results: A total of 600,526 emergency patients for whom EMS personnel selected hospitals were eligible for our analysis: 300,131 patients (50.0%) in the non-smartphone application group in 2010-2012 and 300,395 patients (50.0%) in the smartphone application group in 2013-2015. The proportion of difficulty in hospital acceptance was 14.2% (42,585/300,131) in the non-smartphone application group and 10.9% (32,819/300,395) in the smartphone application group, and the difficulty in hospital acceptance significantly decreased with the introduction of the smartphone application (adjusted odds ratio 0.730, 95% confidence interval 0.718-0.741, P<0.001). Conclusions: Sharing information between ambulances and hospitals by introducing the smartphone application at the scene was associated with a decrease in the difficulty of hospital acceptance. Our findings may be considerably useful for developing emergency medical information systems using IT in other areas of the world.
Keywords: difficulty in hospital acceptance, emergency medical service, information technology, smartphone application
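As a quick sanity check on the reported figures, the crude (unadjusted) odds ratio can be recomputed directly from the two proportions given in the abstract; it lands near the adjusted value of 0.730, with the residual difference plausibly reflecting the covariate adjustment in the multivariable model:

```python
def odds_ratio(events_a, total_a, events_b, total_b):
    """Crude (unadjusted) odds ratio of group A relative to group B."""
    odds_a = events_a / (total_a - events_a)
    odds_b = events_b / (total_b - events_b)
    return odds_a / odds_b

# The two proportions reported in the abstract: 32,819/300,395 vs 42,585/300,131
or_crude = odds_ratio(32819, 300395, 42585, 300131)  # roughly 0.74
```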
Procedia PDF Downloads 275
4142 Smart Model with the DEMATEL and ANFIS Multistage to Assess the Value of the Brand
Authors: Hamed Saremi
Abstract:
One of the challenges for manufacturing and service companies providing a product or service is making the brand recognized by consumers in target markets. Companies run most of their processes at similar capacity, but constant, potentially devastating threats to internal and external resources can prevent a brand's rise, and more companies are finding themselves at the stage of bankruptcy. This paper tries to identify and analyze effective indicators of brand equity and presents an intelligent model to prevent possible damage. In this study, indicators of brand equity were identified based on a literature study and expert opinions; the set of indicators was then analyzed with the DEMATEL technique, and a multi-stage Adaptive Neuro-Fuzzy Inference System (ANFIS) was used to design a multi-stage intelligent system for the assessment of brand equity.
Keywords: ANFIS, DEMATEL, brand, cosmetic product, brand value
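The DEMATEL step mentioned above has a compact standard computation: normalize the expert direct-influence matrix and form the total-relation matrix T = N(I − N)⁻¹, from which prominence (D+R) and relation (D−R) separate cause indicators from effect indicators. A minimal sketch follows; the 3×3 matrix in the test is a made-up example, not the study's expert data:

```python
import numpy as np

def dematel(direct):
    """DEMATEL core computation: normalize the direct-influence matrix,
    then form the total-relation matrix T = N (I - N)^-1."""
    direct = np.asarray(direct, dtype=float)
    s = max(direct.sum(axis=1).max(), direct.sum(axis=0).max())
    n_mat = direct / s
    eye = np.eye(direct.shape[0])
    total = n_mat @ np.linalg.inv(eye - n_mat)
    prominence = total.sum(axis=1) + total.sum(axis=0)  # D + R: importance
    relation = total.sum(axis=1) - total.sum(axis=0)    # D - R: cause (+) / effect (-)
    return total, prominence, relation
```

The indicator weights obtained this way would then feed the multi-stage ANFIS, which is typically built with a dedicated fuzzy-systems toolbox rather than sketched by hand.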
Procedia PDF Downloads 410
4141 Optimum Design of Grillage Systems Using Firefly Algorithm Optimization Method
Authors: F. Erdal, E. Dogan, F. E. Uz
Abstract:
In this study, a firefly-optimization-based optimum design algorithm is presented for grillage systems. The algorithm is named after fireflies, whose movement behavior is taken as its model. Fireflies' being unisex and the attraction between them constitute the basis of the algorithm. The design algorithm considers the displacement and strength constraints implemented from LRFD-AISC (Load and Resistance Factor Design-American Institute of Steel Construction). It selects the appropriate W (Wide Flange)-sections for the transverse and longitudinal beams of the grillage system from among the 272 discrete W-section designations given in LRFD-AISC, so that the design limitations described in LRFD are satisfied and the weight of the system is minimized. A number of design examples are considered to demonstrate the efficiency of the presented algorithm. Keywords: firefly algorithm, steel grillage systems, optimum design, stochastic search techniques
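The firefly mechanics described above can be illustrated on a toy continuous objective. This is a minimal sketch, not the paper's algorithm: the real problem is a discrete selection over 272 W-sections with LRFD displacement and strength constraints, whereas here a sphere function stands in for the weight objective, and the parameter values are illustrative.

```python
# Minimal continuous firefly-algorithm sketch: unisex fireflies, brightness
# from fitness, dimmer fireflies moving toward brighter ones with
# distance-decaying attractiveness plus a small random step.
import math
import random

def sphere(x):  # stand-in objective: minimise sum of squares
    return sum(v * v for v in x)

random.seed(1)
dim, n_fireflies, n_iters = 2, 15, 100
beta0, gamma, alpha = 1.0, 0.01, 0.2  # attraction, absorption, randomness

swarm = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_fireflies)]
best_x, best_f = None, float("inf")

for _ in range(n_iters):
    for i in range(n_fireflies):
        for j in range(n_fireflies):
            if sphere(swarm[j]) < sphere(swarm[i]):   # firefly j is brighter
                r2 = sum((a - b) ** 2 for a, b in zip(swarm[i], swarm[j]))
                beta = beta0 * math.exp(-gamma * r2)  # attractiveness decays with distance
                swarm[i] = [a + beta * (b - a) + alpha * (random.random() - 0.5)
                            for a, b in zip(swarm[i], swarm[j])]
    for x in swarm:                                   # track the best design seen so far
        f = sphere(x)
        if f < best_f:
            best_f, best_x = f, list(x)

print(f"best objective after {n_iters} iterations: {best_f:.4f}")
```

For the grillage problem, each firefly position would instead encode indices into the W-section table, with constraint violations penalised in the brightness function.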
Procedia PDF Downloads 435
4140 Inventory Control for Purchased Part under Long Lead Time and Uncertain Demand: MRP vs Demand-Driven MRP Approach
Authors: M. J. Shofa, A. Hidayatno, O. M. Armand
Abstract:
MRP, as a production control system, is appropriate for a deterministic environment. Unfortunately, most production settings are stochastic; customer demand, in particular, is uncertain. Demand-Driven MRP (DDMRP) is a new approach to inventory control that deals with demand uncertainty. The objective of this paper is to compare how MRP and DDMRP perform under a long lead time and uncertain demand in terms of on-hand inventory levels. The evaluation is conducted through a discrete event simulation using purchased-part data from an automotive company. The results show that MRP yields an average on-hand inventory of 50,759 pcs/day while DDMRP yields 34,835 pcs/day (a 32% reduction), meaning that DDMRP is a more effective inventory control approach than MRP in terms of on-hand inventory levels. Keywords: Demand-Driven MRP, long lead time, MRP, uncertain demand
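The comparison can be sketched with a toy simulation in the same spirit: a forecast-push MRP-style policy versus a DDMRP-style buffer policy for one purchased part with a long lead time and uncertain demand. All parameters below (average daily usage, lead time, buffer zone sizes, forecast bias) are illustrative assumptions, not the company data of the study, so the numbers will not reproduce the paper's 50,759 vs 34,835 pcs/day.

```python
# Toy discrete-event comparison of average on-hand inventory:
# forecast-push (MRP-style) vs buffer replenishment (DDMRP-style).
import random

random.seed(7)
DAYS, LEAD_TIME, ADU = 365, 20, 100   # horizon (days), lead time, avg daily usage
demand = [max(0, int(random.gauss(ADU, 30))) for _ in range(DAYS)]

def simulate(order_policy, start_inventory):
    on_hand, pipeline, history = start_inventory, [], []
    for day, d in enumerate(demand):
        arrivals = sum(q for due, q in pipeline if due == day)
        pipeline = [(due, q) for due, q in pipeline if due > day]
        on_hand = max(0, on_hand + arrivals - d)      # unmet demand is lost
        qty = order_policy(on_hand, sum(q for _, q in pipeline))
        if qty > 0:
            pipeline.append((day + LEAD_TIME, qty))
        history.append(on_hand)
    return sum(history) / len(history)

# MRP-style: push a fixed forecast quantity daily (forecast biased 15% high
# to guard against uncertainty), regardless of actual consumption.
mrp_avg = simulate(lambda oh, oo: int(round(ADU * 1.15)), ADU * LEAD_TIME)

# DDMRP-style: reorder up to the top of the buffer whenever the net flow
# position falls to the top of the yellow zone.
RED, YELLOW, GREEN = ADU * 5, ADU * LEAD_TIME, ADU * 7
TOY, TOG = RED + YELLOW, RED + YELLOW + GREEN

def ddmrp(on_hand, on_order):
    nfp = on_hand + on_order                          # net flow position
    return TOG - nfp if nfp <= TOY else 0

ddmrp_avg = simulate(ddmrp, ADU * LEAD_TIME)
print(f"average on-hand: MRP-style {mrp_avg:.0f}, DDMRP-style {ddmrp_avg:.0f}")
```

Because the pull policy orders only against the actual net flow position, its average on-hand level sits well below the biased forecast-push policy, which is the qualitative effect the paper reports.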
Procedia PDF Downloads 301
4139 Thermal and Acoustic Design of Mobile Hydraulic Vehicle Engine Room
Authors: Homin Kim, Hyungjo Byun, Jinyoung Do, Yongil Lee, Hyunho Shin, Seungbae Lee
Abstract:
The engine room of a mobile hydraulic vehicle is densely packed with an engine and many hydraulic components, most of which generate heat and sound. Although a hydraulic oil cooler, an ATF cooler, an axle oil cooler, etc. are added to the vehicle cooling system, overheating may cause degraded performance and frequent failures. In order to improve the thermal and acoustic environment of the engine room, computational approaches based on Computational Fluid Dynamics (CFD) and the Boundary Element Method (BEM) are used, together with the necessary modal analysis of the belt-driven system. The engine room design layout and process, which satisfies the design objectives for sound power level and for the temperature levels of the radiator water, charge air cooler, transmission, and hydraulic oil coolers, is discussed. Keywords: acoustics, CFD, engine room design, mobile hydraulics
Procedia PDF Downloads 327
4138 Development of a Process Method to Manufacture Spreads from Powder Hardstock
Authors: Phakamani Xaba, Robert Huberts, Bilainu Oboirien
Abstract:
It has been over 150 years since margarine was first discovered and manufactured using liquid oil, liquified hardstock oils, and other oil-phase and aqueous-phase ingredients. Henry W. Bradley first used vegetable oils in the liquid state around 1871, and since then spreads have traditionally been manufactured using liquified oils. The main objective of this study was to develop a process method to produce spreads using spray-dried hardstock fat powders as structuring fats in place of the current liquid structuring fats. A high-shear mixing system was used to condition the fat phase, and the aqueous phase was prepared separately. Using a single scraped-surface heat exchanger and a pin stirrer, margarine was produced. The process method was developed to produce spreads with 40%, 50%, and 60% fat. The developed method was divided into three steps. In the first step, the fat powders were conditioned by melting and dissolving them into liquid oils. The liquified portion of the oils was at 65 °C, whilst the spray-dried fat powder was at 25 °C. The two were mixed in a mixing vessel at 900 rpm for 4 minutes. The rest of the ingredients, i.e., lecithin, colorant, vitamins, and flavours, were added at ambient conditions to complete the fat/oil phase. The water phase was prepared separately by mixing salt, water, preservative, and acidifier in a mixing tank. Milk was also prepared separately, being pasteurized at 79 °C prior to feeding it into the aqueous phase. All the water-phase contents were chilled to 8 °C. The oil phase and water phase were mixed in a tank and then fed into a single scraped-surface heat exchanger. After the scraped-surface heat exchanger, the emulsion was fed into a pin stirrer to work the formed crystals and produce margarine. The margarine produced using the developed process had fat levels of 40%, 50%, and 60%. The margarine passed all the qualitative, stability, and taste assessments. The scores were 6/10, 7/10, and 7.5/10 for the 40%, 50%, and 60% fat spreads, respectively. 
The success of the trials brought about new knowledge on how to manufacture spreads using non-micronized spray-dried fat powders as hardstock. Manufacturers no longer need to store structuring fats at 80-90 °C, and even higher in winter; instead, they can adapt their processes to use fat powders, which need to be stored at only 25 °C. The developed process method used one scraped-surface heat exchanger instead of the four to five currently used in votator-based plants. The use of a single scraped-surface heat exchanger translated to about 61% energy savings, i.e., 23 kW per ton of product. Furthermore, the energy saved by implementing separate pasteurization was calculated to be 6.5 kW per ton of product produced. Keywords: margarine emulsion, votator technology, margarine processing, scraped sur, fat powders
Procedia PDF Downloads 90
4137 An Analysis of the Impact of Immunosuppression upon the Prevalence and Risk of Cancer
Authors: Aruha Khan, Brynn E. Kankel, Paraskevi Papadopoulou
Abstract:
In recent years, extensive research upon ‘stress’ has provided insight into its two distinct guises, namely the short–term (fight–or–flight) response versus the long–term (chronic) response. Specifically, the long–term or chronic response is associated with the suppression or dysregulation of immune function. It is also widely noted that the occurrence of cancer is strongly correlated with suppression of the immune system. It is thus necessary to explore the impact of long–term or chronic stress upon the prevalence and risk of cancer. To what extent can the dysregulation of immune function caused by long–term exposure to stress be controlled or minimized? This study focuses explicitly upon immunosuppression due to its ability to increase disease susceptibility, including to cancer itself. Based upon an analysis of the literature relating to the fundamental structure of the immune system, alongside the prospective linkage of chronic stress and the development of cancer, immunosuppression may not necessarily correlate directly to the acquisition of cancer, although it remains a contributing factor. A cross-sectional analysis of survey data from the University of Tennessee Medical Center (UTMC) and Harvard Medical School (HMS) will provide additional supporting evidence (or otherwise) for the study's hypothesis about whether immunosuppression (caused by the chronic stress response) notably impacts the prevalence of cancer. Finally, a multidimensional framework related to education on chronic stress and its effects is proposed. Keywords: immune system, immunosuppression, long–term (chronic) stress, risk of cancer
Procedia PDF Downloads 134
4136 Effect of Urban Solid Waste Management Practices on the Sustainability of Urban Infrastructure in Sokoto Metropolis
Authors: Rilwanu, Bello, Usmn Bello Saad, Hamza Umar Yaro, Isyka Ibrahim, Adebayo Oluwole, Jimoh Abdurrahman
Abstract:
Urban solid waste management is a critical issue affecting the sustainability of urban infrastructure globally. In rapidly growing cities like the Sokoto metropolis, inefficient waste management practices have led to significant environmental and economic challenges. This research aimed to assess the effect of waste management practices on the sustainability of urban infrastructure in Sokoto, including assessing the current state of solid waste management practices and their impact on the sustainability of Sokoto's urban infrastructure. The methodology adopted both primary and secondary sources of data. The target population included the staff of SUDA and STEPA and some residents of the metropolis. A descriptive method was adopted for the analysis and presentation of the data. The study revealed that the waste management practices adopted in the Sokoto metropolis were very poor, being associated with poor funding, a lack of sufficient vehicles, and poor attitudes of residents toward waste disposal, which has led to the blockage of streets and water channels and can subsequently lead to flooding. The study recommends that the state government increase funding to the relevant authorities and also provide waste dumping sites, as well as modern vehicles and equipment, to ensure effective solid waste management and disposal. Keywords: waste management, sustainability, solid waste, urban infrastructure
Procedia PDF Downloads 18
4135 Intelligent Indoor Localization Using WLAN Fingerprinting
Authors: Gideon C. Joseph
Abstract:
The ability to localize mobile devices is quite important, as some applications may require the location information of these devices to operate or to deliver better services to the users. Although there are several ways of acquiring location data of mobile devices, the WLAN fingerprinting approach has been considered in this work. This approach uses the Received Signal Strength Indicator (RSSI) measurement as a function of the position of the mobile device. RSSI is a quantitative technique for describing the radio frequency power carried by a signal. RSSI may be used to determine RF link quality and is very useful in dense traffic scenarios where interference is a major concern, for example, indoor environments. This research aims to design a system that can predict the location of a mobile device when supplied with the mobile's RSSIs. The developed system takes as input the RSSIs relating to the mobile device and outputs parameters that describe the location of the device, such as the longitude, latitude, floor, and building. The relationship between the Received Signal Strengths (RSSs) of mobile devices and their corresponding locations is to be modelled, so that subsequent locations of mobile devices can be predicted using the developed model. Describing mathematical relationships between the RSSI measurements and the localization parameters is one option for modelling the problem, but the complexity of such an approach is a serious drawback. In contrast, we propose an intelligent system that can learn the mapping from such RSSI measurements to the localization parameters to be predicted. The system is capable of upgrading its performance as more experiential knowledge is acquired. 
The most appealing consideration in using such a system for this task is that complicated mathematical analysis and theoretical frameworks are not needed; the intelligent system on its own learns the underlying relationship in the supplied data (RSSI levels) that corresponds to the localization parameters. The localization parameters to be predicted form two different kinds of task: the longitude and latitude of mobile devices are real values (a regression problem), while the floor and building of the mobile devices are integer-valued or categorical (a classification problem). This research work presents artificial neural network-based intelligent systems to model the relationship between the RSSI predictors and the mobile device localization parameters. The designed systems were trained and validated on the collected WLAN fingerprint database. The trained networks were then tested with another supplied database to obtain the performance of the trained systems in terms of the Mean Absolute Error (MAE) achieved for the regression task and the error rates for the classification tasks. Keywords: indoor localization, WLAN fingerprinting, neural networks, classification, regression
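The input/output mapping described above, namely an RSSI vector in, (longitude, latitude) regression plus floor classification out, can be illustrated with a much simpler matcher than the paper's neural networks. The sketch below uses a plain k-nearest-neighbour lookup over a tiny made-up fingerprint database; both the access-point RSSIs and the coordinates are hypothetical, and the point is only the shape of the problem, not the paper's method.

```python
# Minimal WLAN-fingerprinting sketch: k-NN over (RSSI vector -> location)
# records, averaging coordinates (regression) and voting on floor
# (classification).
from collections import Counter

# (rssi_ap1, rssi_ap2, rssi_ap3) -> (longitude, latitude, floor); made-up data
database = [
    ((-40, -70, -80), (10.0, 5.0, 0)),
    ((-42, -68, -79), (11.0, 5.5, 0)),
    ((-75, -45, -60), (30.0, 8.0, 1)),
    ((-73, -47, -58), (31.0, 8.5, 1)),
    ((-80, -60, -41), (52.0, 3.0, 2)),
]

def locate(rssi, k=3):
    dist = lambda fp: sum((a - b) ** 2 for a, b in zip(fp, rssi))
    nearest = sorted(database, key=lambda rec: dist(rec[0]))[:k]
    lon = sum(rec[1][0] for rec in nearest) / k            # regression: average
    lat = sum(rec[1][1] for rec in nearest) / k
    floor = Counter(rec[1][2] for rec in nearest).most_common(1)[0][0]  # vote
    return lon, lat, floor

lon, lat, floor = locate((-41, -69, -80))
print(f"estimated position: ({lon:.1f}, {lat:.1f}), floor {floor}")
```

A neural network replaces the explicit database lookup with a learned function, which is exactly the "learns the underlying relationship" property the abstract emphasises.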
Procedia PDF Downloads 347
4134 Methodological Resolutions for Definition Problems in Turkish Navigation Terminology
Authors: Ayşe Yurdakul, Eckehard Schnieder
Abstract:
Nowadays, increasing technical progress gives rise to multilingual and multidisciplinary communication problems. Each technical field has its own specific terminology, and in each particular language there are differences in the definitions of terms. Besides, there can be several translations in a target language for one term of the source language. These problems of semantic relations between terms include synonymy, antonymy, hypernymy/hyponymy, ambiguity, risk of confusion, and translation problems. The iglos terminology management system of the Institute for Traffic Safety and Automation Engineering of the Technische Universität Braunschweig therefore aims to avoid these problems through a methodological standardisation of term definitions on the basis of the iglos sign model and iglos relation types. The focus of this paper is the standardisation of navigation terminology as an example. Keywords: iglos, localisation, methodological approaches, navigation, positioning, definition problems, terminology
Procedia PDF Downloads 368
4133 Experimenting with Error Performance of Systems Employing Pulse Shaping Filters on a Software-Defined-Radio Platform
Authors: Chia-Yu Yao
Abstract:
This paper presents experimental results on the symbol-error-rate (SER) performance of quadrature amplitude modulation (QAM) systems employing symmetric pulse-shaping square-root (SR) filters designed by minimizing the roughness function and by minimizing the peak-to-average power ratio (PAR). The device used in the experiments is the 'bladeRF' software-defined-radio platform. PAR is a well-known measurement, whereas the roughness function is a concept for measuring jitter-induced interference. The experimental results show that the system employing minimum-roughness pulse-shaping SR filters outperforms the system employing minimum-PAR pulse-shaping SR filters in terms of SER performance. Keywords: pulse-shaping filters, FIR filters, jittering, QAM
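The PAR metric that one of the two filter designs minimizes is computed in the usual way: upsample the symbol stream, convolve with the pulse-shaping FIR, then take max|x|² over mean|x|². The sketch below uses a textbook square-root raised-cosine (SRRC) filter and random 16-QAM symbols as stand-ins; the paper's minimum-roughness and minimum-PAR filter coefficients are not given in the abstract.

```python
# Peak-to-average power ratio (PAR) of a pulse-shaped 16-QAM transmit signal.
import math
import random

def srrc(beta, sps, span):
    """Square-root raised-cosine taps, unit symbol period, `span` symbols long."""
    taps = []
    for n in range(-span * sps // 2, span * sps // 2 + 1):
        t = n / sps
        if t == 0:
            h = 1 - beta + 4 * beta / math.pi
        elif abs(abs(t) - 1 / (4 * beta)) < 1e-12:   # removable singularity
            h = (beta / math.sqrt(2)) * ((1 + 2 / math.pi) * math.sin(math.pi / (4 * beta))
                                         + (1 - 2 / math.pi) * math.cos(math.pi / (4 * beta)))
        else:
            h = (math.sin(math.pi * t * (1 - beta))
                 + 4 * beta * t * math.cos(math.pi * t * (1 + beta))) \
                / (math.pi * t * (1 - (4 * beta * t) ** 2))
        taps.append(h)
    return taps

random.seed(3)
levels = [-3, -1, 1, 3]
symbols = [complex(random.choice(levels), random.choice(levels)) for _ in range(500)]

sps = 4
upsampled = []
for s in symbols:                       # insert sps-1 zeros between symbols
    upsampled += [s] + [0j] * (sps - 1)

h = srrc(beta=0.35, sps=sps, span=8)    # direct convolution with the SR filter
signal = [sum(h[k] * upsampled[n - k] for k in range(len(h)) if 0 <= n - k < len(upsampled))
          for n in range(len(upsampled) + len(h) - 1)]

powers = [abs(x) ** 2 for x in signal]
par_db = 10 * math.log10(max(powers) / (sum(powers) / len(powers)))
print(f"PAR = {par_db:.2f} dB")
```

A minimum-PAR design pushes this figure down at the cost of other filter properties; the paper's finding is that trading PAR for lower roughness (jitter sensitivity) gives the better SER on real hardware.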
Procedia PDF Downloads 341
4132 Co-Gasification Process for Green and Blue Hydrogen Production: Innovative Process Development, Economic Analysis, and Exergy Assessment
Authors: Yousaf Ayub
Abstract:
A co-gasification process, which utilizes both biomass and plastic waste, has been developed to enable the production of blue and green hydrogen. To support this endeavor, an Aspen Plus simulation model has been meticulously created, and a sustainability analysis is being conducted, focusing on economic viability, energy efficiency, advanced exergy considerations, and exergoeconomic evaluations. In terms of economic analysis, the process has demonstrated strong economic sustainability, as evidenced by an internal rate of return (IRR) of 8% at a process efficiency level of 70%. At present, the process has the potential to generate approximately 1100 kWh of electric power, and any excess electricity beyond the process requirements can be harnessed for green hydrogen production via an alkaline electrolysis cell (AEC). This surplus electricity translates to a potential daily hydrogen production of around 200 kg. The exergy analysis of the model highlights that the gasifier component exhibits the lowest exergy efficiency, resulting in the highest energy losses, amounting to approximately 40%. Additionally, the advanced exergy analysis pinpoints the gasifier as the primary source of exergy destruction, totaling around 9000 kW, with associated exergoeconomic costs amounting to 6500 $/h. Consequently, improving the gasifier's performance is a critical focal point for enhancing the overall sustainability of the process, encompassing energy, exergy, and economic considerations. Keywords: blue hydrogen, green hydrogen, co-gasification, waste valorization, exergy analysis
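The IRR figure quoted above is the discount rate at which the project's net present value is zero. A minimal sketch of that computation is shown below; the cash-flow series is purely illustrative, since the actual plant cash flows are not given in the abstract.

```python
# Internal rate of return by bisection on NPV(rate) = 0.

def npv(rate, cashflows):
    """Net present value of cashflows[t] discounted at `rate` per period."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-9):
    """Bisection; assumes NPV changes sign on [lo, hi]."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(lo, cashflows) * npv(mid, cashflows) <= 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

# hypothetical cash flows: -1000 invested, then 500 per year for three years
rate = irr([-1000, 500, 500, 500])
print(f"IRR = {rate:.2%}")
```

For the plant in the study, the same root-finding over the modeled annual cash flows is what yields the reported 8% at 70% process efficiency.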
Procedia PDF Downloads 66