Search results for: ordering time
15321 Evaluation of the Impact of Reducing the Traffic Light Cycle for Cars to Improve Non-Vehicular Transportation: A Case of Study in Lima
Authors: Gheyder Concha Bendezu, Rodrigo Lescano Loli, Aldo Bravo Lizano
Abstract:
In the large cities of Latin America, motor vehicles have priority over non-motorized vehicles and pedestrians. This creates a serious problem for people's health and quality of life: the lack of provision for pedestrians makes it difficult for them to move smoothly and safely, since cities have been planned around motor-vehicle traffic. Faced with the new trend toward sustainable and economical transport, the city must develop infrastructure that incorporates pedestrians and users of non-motorized vehicles into the transport system. The present research studies the influence of non-motorized vehicles on an avenue and optimizes the traffic light cycle, based on simulation in Synchro software, to improve the flow of non-motorized vehicles. The evaluation is microscopic; therefore, field data such as vehicular, pedestrian, and non-motorized-vehicle demand were collected. Together with speed and travel-time measurements, these data represent the current scenario containing the existing problem. They were used to build a microsimulation model in Vissim software, which was then calibrated and validated so that its behavior resembles reality. The results of this model are compared with the efficiency parameters of the proposed model: queue length, travel speed, and, mainly, the travel times of users at the intersection. The results show a 27% reduction in travel time for the proposed model compared with the current situation on this major avenue. The queue length of motor vehicles is also reduced by 12.5%, a considerable improvement. All this represents an improvement in the level of service and in users' quality of life.
Keywords: bikeway, microsimulation, pedestrians, queue length, traffic light cycle, travel time
Procedia PDF Downloads 176
15320 Building Energy Modeling for Networks of Data Centers
Authors: Eric Kumar, Erica Cochran, Zhiang Zhang, Wei Liang, Ronak Mody
Abstract:
The objective of this article is to create a modelling framework that exposes the marginal costs of shifting workloads across geographically distributed data centers. Geographical distribution of internet services helps optimize their performance for localized end users through lower communication times and increased availability. However, due to geographical and temporal effects, the physical embodiments of a service's data-center infrastructure can vary greatly. In this work, we first identify that the variance in physical infrastructure stems primarily from local weather conditions, specific user traffic profiles, energy sources, and the types of IT hardware available at the time of deployment. Second, we create a traffic simulator that indicates the IT load at each data center in the set as an approximation of user traffic profiles. Third, we implement a framework that quantifies global energy demand using building energy models and the traffic profiles. The model produces a time series of energy demands that can be used for further life cycle analysis of internet services.
Keywords: data-centers, energy, life cycle, network simulation
15319 Reinforcement Learning for Quality-Oriented Production Process Parameter Optimization Based on Predictive Models
Authors: Akshay Paranjape, Nils Plettenberg, Robert Schmitt
Abstract:
Producing faulty products is costly for manufacturing companies and wastes resources. To reduce scrap rates in manufacturing, process parameters can be optimized using machine learning. Thus far, research has mainly focused on optimizing specific processes with traditional algorithms. To develop a framework that enables real-time optimization based on a predictive model for an arbitrary production process, this study explores the application of reinforcement learning (RL) in this field. Based on a thorough review of the literature on RL and process parameter optimization, a model based on maximum a posteriori policy optimization that can handle both numerical and categorical parameters is proposed. A case study compares the model to state-of-the-art traditional algorithms and shows that RL can find optima of similar quality while requiring significantly less time. These results are confirmed in a large-scale validation study on data sets from both production and other fields. Finally, multiple ways to improve the model are discussed.
Keywords: reinforcement learning, production process optimization, evolutionary algorithms, policy optimization, actor critic approach
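The abstract contrasts RL-style search with traditional optimizers for tuning process parameters against a predictive quality model. As a hedged illustration of the general idea (not the maximum a posteriori policy optimization method the study actually uses), the sketch below tunes two hypothetical continuous parameters against a made-up quality model with a simple cross-entropy search; every name and value here is an illustrative assumption, not taken from the paper.

```python
import random
import statistics

def quality_model(temperature, pressure):
    # Hypothetical predictive quality model: quality peaks at 210 C and 55 bar.
    return -((temperature - 210.0) ** 2) / 100.0 - ((pressure - 55.0) ** 2) / 25.0

def cem_optimize(model, n_iter=40, pop=50, elite_frac=0.2, seed=0):
    """Cross-entropy method: sample parameter vectors, keep the elite fraction
    by predicted quality, refit the sampling distribution, and repeat."""
    rng = random.Random(seed)
    mu = [150.0, 30.0]       # initial means for (temperature, pressure)
    sigma = [50.0, 20.0]     # initial standard deviations
    n_elite = max(2, int(pop * elite_frac))
    for _ in range(n_iter):
        samples = [[rng.gauss(m, s) for m, s in zip(mu, sigma)] for _ in range(pop)]
        samples.sort(key=lambda p: model(*p), reverse=True)
        elite = samples[:n_elite]
        mu = [statistics.mean(col) for col in zip(*elite)]
        sigma = [statistics.stdev(col) + 1e-6 for col in zip(*elite)]
    return mu

best = cem_optimize(quality_model)
```

Because the quality model is a smooth bowl, the sampling distribution contracts onto the optimum within a few dozen iterations; an RL approach replaces this population search with a learned policy that generalizes across process states.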
15318 Optimization of Groundwater Utilization in Fish Aquaculture
Authors: M. Ahmed Eldesouky, S. Nasr, A. Beltagy
Abstract:
Groundwater is generally considered the best source for aquaculture, as it is well protected from contamination. The most common problem limiting the use of groundwater in Egypt is its high iron, manganese, and ammonia content. This problem is often overcome by treatment before use. Aeration in many cases is not enough to oxidize iron and manganese bound in complexes with organics, so in most treatments potassium permanganate is used as an oxidizer, followed by a pressurized closed greensand filter. The aim of the present study is to investigate the optimum characteristics of groundwater that give the lowest iron, manganese, and ammonia content and the maximum production and quality of fish in aquaculture at the El-Max Research Station. The major design goal of the system was to determine the optimum harvesting time for the treated water, the pH, and the glauconite weight for the aquaculture process at the research site, while meeting Egyptian law (48/1982) and the EPA levels required for aquaculture. The raw water characteristics were Fe = 0.116 mg/L, Mn = 1.36 mg/L, TN = 0.44 mg/L, TP = 0.07 mg/L, and ammonia = 0.386 mg/L. Using the glauconite filter, high removal efficiencies were obtained for Fe, Mn, and ammonia; laboratory results showed removal ranges of 43-97% for Fe, 92-99% for Mn, and 66-88% for ammonia. We summarize the results to show the optimum time, pH, and glauconite weight, and the best design model for the region.
Keywords: aquaculture, ammonia in groundwater, groundwater, iron and manganese in water, groundwater treatment
15317 A Multi-Agent System for Accelerating the Delivery Process of Clinical Diagnostic Laboratory Results Using GSM Technology
Authors: Ayman M. Mansour, Bilal Hawashin, Hesham Alsalem
Abstract:
Faster delivery of laboratory test results is one of the most visible signs of good laboratory service and is often used as a key performance indicator of laboratory performance. Despite the availability of technology, the delivery time of clinical laboratory test results continues to be a cause of customer dissatisfaction: patients become frustrated and may neglect to collect their results. Clinical laboratory test results are highly sensitive, and delayed delivery can harm patients, especially in severe cases, because the results affect the treatment physicians prescribe only if they arrive in time. Efforts should therefore be made to ensure faster delivery of lab test results by utilizing a new trusted, robust, and fast system. In this paper, we propose a distributed multi-agent system to accelerate the delivery of laboratory test results using SMS. The developed system relies on SMS messages because of the wide availability of the GSM network compared to other networks. The software provides knowledge sharing between different units and different laboratory medical centers. The system was built using the Java programming language. Several architectures were possible for the proposed system. One is the peer-to-peer (P2P) model, where all peers are treated equally and the service is distributed among all peers in the network. However, in a pure P2P model it is difficult to maintain the coherence of the network, discover new peers, and ensure security, since each node is allowed to join the network without any control mechanism. We therefore adopt a hybrid P2P model, between the client/server model and the pure P2P model, using GSM technology through SMS messages. This model satisfies our needs. A GUI has been developed to provide the laboratory staff with a simple and easy way to interact with the system. The system provides a quick response rate, and decisions are made faster than with manual methods. This can save patients' lives.
Keywords: multi-agent system, delivery process, GSM technology, clinical laboratory results
15316 Determination of Sintering Parameters of TiB₂ – Ti₃SiC₂ Composites
Authors: Bilge Yaman Islak, Erhan Ayas
Abstract:
The densification behavior of TiB₂ – Ti₃SiC₂ composites is investigated for temperatures in the range of 1200 °C to 1400 °C, pressures of 40 and 50 MPa, and holding times between 15 and 30 min using the spark plasma sintering (SPS) technique. Ti, Si, TiC, and 5 wt.% TiB₂ were used to synthesize the TiB₂ – Ti₃SiC₂ composites, and the effect of the different sintering parameters on the densification and phase evolution of these composites was investigated. Bulk densities were determined using the Archimedes method. The polished and fractured surfaces of the samples were examined using a scanning electron microscope equipped with energy dispersive spectroscopy (EDS). Phase analyses were carried out with an X-ray diffractometer. Sintering temperature and holding time were found to play a dominant role in the phase development of the composites. TiₓCᵧ and TiSi₂ secondary phases were found in the 5 wt.% TiB₂ – Ti₃SiC₂ composites densified at 1200 °C and 1400 °C under a pressure of 40 MPa, due to decomposition of Ti₃SiC₂. The results indicated that 5 wt.% TiB₂ – Ti₃SiC₂ composites were densified into dense parts with a relative density of 98.77% by sintering at 1300 °C for 15 min under a pressure of 50 MPa via SPS, without the formation of any ancillary phase. This work was funded and supported by the Scientific Research Projects Commission of Eskisehir Osmangazi University under Project Number 201915C103 (2019-2517).
Keywords: densification, phase evolution, sintering, TiB₂ – Ti₃SiC₂ composites
15315 Use of Numerical Tools Dedicated to Fire Safety Engineering for the Rolling Stock
Authors: Guillaume Craveur
Abstract:
This study shows the opportunity to use numerical tools dedicated to fire safety engineering for rolling stock. Indeed, some statutory requirements can now be demonstrated using numerical tools. The first part of this study presents the use of an evacuation modelling tool to satisfy the evacuation-time criteria for rolling stock. The buildingEXODUS software is used to model and simulate the evacuation of rolling stock. First, in order to demonstrate the reliability of this tool for calculating the complete evacuation time, a comparative study was carried out between a real test and simulations done with buildingEXODUS. Multiple simulations are performed to capture the stochastic variation in egress times. Then, a new study is done to calculate the complete evacuation time of a train with the same geometry but a different interior architecture. The second part of this study shows some applications of computational fluid dynamics. This work presents a multi-scale validation of numerical simulations of standardized tests with the Fire Dynamics Simulator software developed by the National Institute of Standards and Technology (NIST). The first step addresses the cone calorimeter test, described in the standard ISO 5660, to characterize the fire reaction of materials; the aim is to adjust measurement results from the cone calorimeter test in order to create a data set usable at the seat scale. In the second step, the modelling concerns the fire seat test described in the standard EN 45545-2, in which the data set obtained from the validation of the cone calorimeter test is used. To conclude, in the third step, after checking the data obtained for the seat from the cone calorimeter test, a larger-scale simulation with a real part of a train is carried out.
Keywords: fire safety engineering, numerical tools, rolling stock, multi-scales validation
15314 Development of a Real-Time Brain-Computer Interface for Interactive Robot Therapy: An Exploration of EEG and EMG Features during Hypnosis
Authors: Maryam Alimardani, Kazuo Hiraki
Abstract:
This study presents a framework for the development of a new generation of therapy robots that can interact with users by monitoring their physiological and mental states. Here, we focus on one of the more controversial methods of therapy, hypnotherapy. Hypnosis has been shown to be useful in the treatment of many clinical conditions, and even for healthy people it can be an effective technique for relaxation or for enhancing memory and concentration. Our aim is to develop a robot that collects information about the user's mental and physical states using electroencephalogram (EEG) and electromyography (EMG) signals and performs cost-effective hypnosis in the comfort of the user's home. The presented framework consists of three main steps: (1) find the EEG correlates of mind state before, during, and after hypnosis and establish a cognitive model for state changes; (2) develop a system that can track the changes in EEG and EMG activity in real time and determine whether the user is ready for suggestion; and (3) implement our system in a humanoid robot that talks to and conducts hypnosis on users based on their mental states. This paper presents a pilot study addressing the first stage: detection of EEG and EMG features during hypnosis.
Keywords: hypnosis, EEG, robotherapy, brain-computer interface (BCI)
15313 Flood Early Warning and Management System
Authors: Yogesh Kumar Singh, T. S. Murugesh Prabhu, Upasana Dutta, Girishchandra Yendargaye, Rahul Yadav, Rohini Gopinath Kale, Binay Kumar, Manoj Khare
Abstract:
The Indian subcontinent is severely affected by floods that cause intense, irreversible devastation to crops and livelihoods. With the increased incidence of floods and their related catastrophes, an early warning system for flood prediction and an efficient flood management system for the river basins of India are a must. Accurately modeled hydrological conditions and a web-based early warning system may significantly reduce the economic losses incurred due to floods and enable end users to issue advisories with better lead time. This study describes the design and development of an early warning system for flood prediction (EWS-FP) using advanced computational tools and methods, viz. high-performance computing (HPC), remote sensing, GIS technologies, and open-source tools, for the Mahanadi River Basin of India. The flood prediction is based on a robust 2D hydrodynamic model, which solves the shallow water equations using the finite volume method. Given the complexity of the hydrological modeling and the size of the basins in India, there is always a trade-off between better forecast lead time and the optimal resolution at which the simulations are run. High-performance computing provides a good computational means to overcome this issue when constructing national-level or basin-level flash flood warning systems that combine high-resolution, local-level warning analysis with better lead time. High-performance computers with capacities on the order of teraflops and petaflops prove useful when running simulations over such large areas at optimum resolution. In this study, a free and open-source, HPC-based 2D hydrodynamic model, capable of simulating rainfall run-off, river routing, and tidal forcing, is used. The model was tested for a part of the Mahanadi River Basin (the Mahanadi Delta) with actual and predicted discharge, rainfall, and tide data. The simulation time was reduced from 8 hrs to 3 hrs by increasing the number of CPU nodes from 45 to 135, which shows good scalability and performance enhancement.
The simulated flood inundation extent and stage were compared with SAR data and CWC observed gauge data, respectively. The system shows good accuracy and a lead time suitable for flood forecasting in near-real time. To disseminate warnings to end users, a network-enabled solution was developed using open-source software. The system has query-based flood damage assessment modules with outputs in the form of spatial maps and statistical databases. It effectively facilitates the management of post-disaster activities caused by floods, for example by displaying spatial maps of the affected area and inundated roads, and maintains a steady flow of information at all levels, with access rights depending on the criticality of the information. It is designed to help users manage flood-related information during critical flood seasons and analyze the extent of the damage.
Keywords: flood, modeling, HPC, FOSS
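The reported runtime reduction can be read as a strong-scaling result. A minimal sketch of the standard speedup and parallel-efficiency calculation, assuming the 8 h/45-node and 3 h/135-node runs are directly comparable:

```python
def speedup_and_efficiency(t_base_hours, t_scaled_hours, nodes_base, nodes_scaled):
    """Strong-scaling speedup and parallel efficiency relative to the base run."""
    speedup = t_base_hours / t_scaled_hours       # how much faster the scaled run is
    ideal = nodes_scaled / nodes_base             # perfect (linear) scaling factor
    efficiency = speedup / ideal                  # fraction of ideal scaling achieved
    return speedup, efficiency

# Figures reported in the abstract: 8 h on 45 nodes -> 3 h on 135 nodes.
s, e = speedup_and_efficiency(8.0, 3.0, 45, 135)
# s ~= 2.67 against an ideal factor of 3.0, i.e. e ~= 0.89 parallel efficiency
```

An efficiency near 0.9 at a 3x node count is consistent with the abstract's claim of good scalability for the hydrodynamic model.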
15312 Compatibility of Sulphate Resisting Cement with Super and Hyper-Plasticizer
Authors: Alper Cumhur, Hasan Baylavlı, Eren Gödek
Abstract:
The use of superplasticizing chemical admixtures in concrete production is widespread all over the world and has become almost inevitable. Super-plasticizers (SPA) extend the setting time of concrete by adsorbing onto cement particles and allow concrete to preserve its fresh-state workability. Hyper-plasticizers (HPA), a special type of superplasticizer, enable the production of high-quality concretes by increasing the workability of concrete very effectively. However, the compatibility of cement with super- and hyper-plasticizers is crucial for achieving the workability needed to produce high-quality concretes. In 2011, the EN 197-1 standard was revised and the cement classifications were updated. In this study, the compatibility of a hyper-plasticizer with CEM I SR0 sulphate resisting cement (SRC), first classified in EN 197-1, is investigated. Within the scope of the experimental studies, a reference cement mortar was designed with a water/cement ratio of 0.50 conforming to EN 196-1. The fresh unit density of the mortar was measured, and the spread diameters (at 0, 60, and 120 min after mix preparation) and setting time of the reference mortar were determined with flow table and Vicat tests, respectively. Three further mortars are prepared with both the super- and hyper-plasticizer, conforming to ASTM C494, at 0.50, 0.75, and 1.00% of cement weight. The fresh unit densities, spread diameters, and setting times of the super- and hyper-plasticizer added mortars (SPM, HPM) will be determined. Theoretical air-entrainment values of both the SPMs and HPMs will be calculated from the differences between the densities of the plasticizer-added mortars and the reference mortar. The flow table and Vicat tests will be repeated on these mortars and the results compared. In conclusion, the compatibility of SRC with SPA and HPA will be investigated.
It is expected that the optimum dosages of SPA and HPA for the required workability and setting behavior of SRC mortars will be determined, and that the advantages and disadvantages of both SPA and HPA will be discussed.
Keywords: CEM I SR0, hyper-plasticizer, setting time, sulphate resisting cement, super-plasticizer, workability
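The abstract derives theoretical air entrainment from the fresh-density difference between a plasticizer-added mortar and the reference mortar. A minimal sketch of that calculation, with hypothetical densities rather than values from the study:

```python
def air_entrainment_percent(density_reference, density_admixture):
    """Theoretical air entrainment as the relative drop in fresh unit density
    of the admixture mortar versus the reference mortar (both in kg/m^3)."""
    return 100.0 * (density_reference - density_admixture) / density_reference

# Hypothetical fresh unit densities (kg/m^3), not measurements from the study:
air = air_entrainment_percent(2250.0, 2205.0)
# air == 2.0, i.e. the admixture mortar carries ~2% additional entrained air
```

The same formula applied to each SPM and HPM dosage would yield the dosage-vs-air-entrainment comparison the study describes.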
15311 Effect of Cold Water Immersion on Bone Mineral Metabolism in Aging Rats
Authors: Irena Baranowska-Bosiacka, Mateusz Bosiacki, Patrycja Kupnicka, Anna Lubkowska, Dariusz Chlubek
Abstract:
Physical activity and a balanced diet are among the key factors of "healthy ageing". Physical effort, including swimming in cold water (such as bathing in natural water reservoirs), is widely regarded as a hardening factor with a positive effect on mental and physical health. At the same time, there is little scientific evidence to verify this hypothesis. The literature to date provides data on the impact of these factors on selected physiological and biochemical blood parameters, but there are no published results on the effect of cold water immersion on mineral metabolism, especially in bone; it therefore seems important to perform such an analysis for the key elements calcium (Ca), magnesium (Mg), and phosphorus (P). Taking the above into account, we hypothesized that exercise in cold water may have a positive effect on mineral metabolism and bone density in aging rats. The aim of the study was to evaluate the effect of an 8-week swimming training program on mineral metabolism and bone density in aging rats exercising in cold water (5 °C), compared with rats swimming at thermal comfort (36 °C) and sedentary (control) rats of both sexes. The concentrations of the examined elements in bone were measured using inductively coupled plasma optical emission spectrometry (ICP-OES). The mineral density of the rats' femurs was measured using a Hologic Horizon DEXA System® densitometer. Our results showed that swimming in cold water affects bone mineral metabolism in aging rats by changing Ca, Mg, and P concentrations while increasing bone density. In males, a decrease in Mg concentration and no changes in bone density were observed.
In light of these results, it seems that swimming in cold water may be a factor that positively modifies the bone aging process by improving the mechanisms affecting bone density.
Keywords: swimming in cold water, adaptation to cold water, bone mineral metabolism, aging
15310 Preclinical Study of the Effect of Stable Fe-Citrate on 68Ga-Citrate Tissue Distribution
Authors: A. S. Lunev, A. A. Larenkov, O. E. Klementyeva, G. E. Kodina
Abstract:
Background and aims: 68Ga-citrate is one of the prospective radiopharmaceuticals for PET imaging of inflammation and infection. It is the analogue of 67Ga-citrate, used since the 1970s for SPECT imaging. After injection of Ga-citrate, a binding reaction occurs in which gallium (like iron, Fe3+) binds to blood transferrin, and the radiolabeled protein complex is then delivered to pathological foci (inflammation/infection sites). However, excessive binding of gallium to transferrin causes slow blood clearance and long accumulation times in the foci (24-72 h), ruling out the use of the short-lived gallium-68 (T½ = 68 min). One way to solve this problem is the injection of additional chemical agents (e.g., Fe3+ compounds) that compete with radioactive gallium for binding to blood transferrin, blocking its metal-binding capacity. This phenomenon can be used to correct the pharmacokinetics of 68Ga-citrate by increasing blood clearance and accumulation in the foci. The aim of the present study is to investigate the effect of stable Fe-citrate on 68Ga-citrate tissue distribution. Materials and methods: 68Ga-citrate, without or with an extra injection of stable Fe(III)-citrate, was injected into nonlinear mice with inflammation models (aseptic soft tissue inflammation, lung infection, osteomyelitis). A PET/X-RAY Genisys4 scanner (Sofie Bioscience, USA) was used for non-invasive PET imaging (at 30, 60, and 120 min after injection of 68Ga-citrate), with subsequent reconstruction and analysis of the images (clearance, volume of distribution). The scanning time was 10 min. Results and conclusions: Intravenous injection of stable Fe-citrate blocks the metal-binding capacity of serum transferrin, significantly decreasing gallium-68 radioactivity in blood and increasing accumulation at the inflammation site (3-5 times). This allows more informative PET images of inflammation to be obtained earlier (30-60 min after injection), as the pharmacokinetic parameters confirm.
Note that there was no statistically significant difference in 68Ga-citrate accumulation between the different inflammation models, because PET imaging indicates pathological processes but does not identify them.
Keywords: 68Ga-citrate, Fe-citrate, PET imaging, mice, inflammation, infection
15309 Classification of ECG Signal Based on Mixture of Linear and Non-Linear Features
Authors: Mohammad Karimi Moridani, Mohammad Abdi Zadeh, Zahra Shahiazar Mazraeh
Abstract:
In recent years, the use of intelligent systems in biomedical engineering has increased dramatically, especially in the diagnosis of various diseases. Because the electrocardiogram (ECG) is relatively simple to record, this signal is a good tool for assessing the function of the heart and the diseases associated with it. The aim of this paper is to design an intelligent system for automatically distinguishing a normal electrocardiogram signal from an abnormal one. Using this diagnostic system, a person's heart condition can be identified in a very short time and with high accuracy. The data used in this article are from the PhysioNet database, made available in 2016 so that researchers could compare methods for detecting normal versus abnormal signals. The data come from both sexes, recording times vary from several seconds to several minutes, and all records are labeled normal or abnormal. Because of the limited recording time of the ECG signal and the similarity of the signal in some diseases to the normal signal, the heart rate variability (HRV) signal was used. Measuring and analyzing heart rate variability over time to evaluate the activity of the heart, and differentiating types of heart failure from one another, is of interest to experts. In the preprocessing stage, after noise cancellation with an adaptive Kalman filter and extraction of the R wave with the Pan-Tompkins algorithm, R-R intervals were extracted and the HRV signal was generated. In the processing stage, a new idea is presented: in addition to the statistical characteristics of the signal, a return map is created and nonlinear characteristics of the HRV signal are extracted, motivated by the nonlinear nature of the signal. Finally, artificial neural networks, widely used in ECG signal processing, together with the distinctive features were used to classify normal signals versus abnormal ones.
To evaluate the efficiency of the proposed classifiers, the area under the ROC curve (AUC) was used. The results of the simulation in the MATLAB environment showed that the AUC of the MLP neural network and the SVM were 0.893 and 0.947, respectively. The results also indicated that greater use of nonlinear characteristics in classifying normal versus patient signals gave better performance. Today, research aims to quantitatively analyze the linear and nonlinear, deterministic and random, nature of the heart rate variability signal, because it has been shown that these properties can indicate the health status of an individual's heart. The study of the nonlinear behavior and dynamics of the heart's neural control system in the short and long term provides new information on how the cardiovascular system functions and has driven further research in this field. Given that the ECG signal contains important information and is one of the common tools used by physicians to diagnose heart disease, but that some of this information remains hidden from physicians, the intelligent system proposed in this paper can help physicians diagnose normal and patient individuals with greater speed and accuracy and can be used as a complementary system in treatment centers.
Keywords: heart rate variability, signal processing, linear and non-linear features, classification methods, ROC curve
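The HRV pipeline described above, R-R intervals into statistical features plus return-map features, can be sketched in a few lines. This is an illustrative implementation of common HRV measures (SDNN, RMSSD, and the Poincaré SD1/SD2 of the return map), not the exact feature set used in the paper, and the input intervals below are invented:

```python
import math

def hrv_features(rr_ms):
    """Time-domain and Poincare (return-map) HRV features from R-R intervals (ms)."""
    n = len(rr_ms)
    mean_rr = sum(rr_ms) / n
    sdnn = math.sqrt(sum((x - mean_rr) ** 2 for x in rr_ms) / (n - 1))
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    # Poincare plot of RR[i] vs RR[i+1]: SD1 captures short-term variability,
    # SD2 long-term variability (the minor/major axes of the return-map ellipse).
    sd1 = math.sqrt(sum(d * d for d in diffs) / (2 * len(diffs)))  # = rmssd/sqrt(2)
    sd2 = math.sqrt(max(2 * sdnn ** 2 - sd1 ** 2, 0.0))
    return {"mean_rr": mean_rr, "sdnn": sdnn, "rmssd": rmssd, "sd1": sd1, "sd2": sd2}

# Invented R-R interval sequence (ms) standing in for a real HRV recording:
feats = hrv_features([800, 810, 790, 820, 805, 795, 815])
```

In a classification pipeline such as the one described, vectors like these would be fed to the MLP or SVM alongside further nonlinear descriptors of the return map.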
15308 Performance Analysis of Permanent Magnet Synchronous Motor Using Direct Torque Control Based ANFIS Controller for Electric Vehicle
Authors: Marulasiddappa H. B., Pushparajesh Viswanathan
Abstract:
The use of internal combustion engine (ICE) vehicles is steadily declining because of pollution and limited fuel availability, and the electric vehicle (EV) is taking the place of the ICE vehicle. The performance of EVs can be improved by the proper selection of electric motors. Initially, EVs used induction motors for traction, but due to the complexity of controlling the induction motor, the permanent magnet synchronous motor (PMSM) is replacing it in EVs because of its advantages. Direct torque control (DTC) is one of the well-known techniques for driving a PMSM in an EV to control torque and speed. However, the presence of torque ripple is the main drawback of this technique, and many control strategies have been proposed to reduce torque ripple in PMSMs. In this paper, an adaptive neuro-fuzzy inference system (ANFIS) controller is proposed to reduce torque ripple and settling time. Performance parameters such as torque, speed, and settling time are compared between a conventional proportional-integral (PI) controller and the ANFIS controller.
Keywords: direct torque control, electric vehicle, torque ripple, PMSM
15307 Integration of Climatic Factors in the Meta-Population Modelling of the Dynamic of Malaria Transmission, Case of Douala and Yaoundé, Two Cities of Cameroon
Authors: Justin-Herve Noubissi, Jean Claude Kamgang, Eric Ramat, Januarius Asongu, Christophe Cambier
Abstract:
The goal of our study is to analyse the impact of climatic factors on malaria transmission, taking into account migration between Douala and Yaoundé, two cities in Cameroon. We show how variations in climatic factors such as temperature and relative humidity affect the spread of malaria. We propose a meta-population model of the dynamics of malaria transmission that evolves in space and time, takes temperature and relative humidity into account, and includes migration between Douala and Yaoundé. We also integrate variations in environmental factors as events, also called mathematical impulses, that can disrupt the evolution of the model at any time. Our modelling was done using the Discrete Event System Specification (DEVS) formalism, and our implementation on the Virtual Laboratory Environment (VLE), which uses the DEVS formalism and abstract simulators for coupling models.
Keywords: compartmental models, DEVS, discrete events, meta-population model, VLE
15306 Optimizing Logistics for Courier Organizations with Considerations of Congestions and Pickups: A Courier Delivery System in Amman as Case Study
Authors: Nader A. Al Theeb, Zaid Abu Manneh, Ibrahim Al-Qadi
Abstract:
The traveling salesman problem (TSP) is a combinatorial integer optimization problem that asks: "What is the optimal route for a vehicle to traverse in order to deliver requests to a given set of customers?" It is widely used by package carriers' distribution centers. The main goal of applying the TSP in courier organizations is to minimize the time it takes the courier on each trip to deliver or pick up the shipments during a day. In this article, an optimization model is constructed to create a new TSP variant that optimizes routing in a courier organization while taking congestion in Amman, the capital of Jordan, into account. Real data were collected by different methods and analyzed. Then, CPLEX (via Concert Technology) was used to solve the proposed model for randomly generated data instances and for the real collected data. The results show a substantial improvement in trip time compared with the current trips, and an economic study was conducted afterwards to quantify the impact of using such models.
Keywords: travel salesman problem, congestions, pick-up, integer programming, package carriers, service engineering
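For intuition about the underlying problem, the sketch below solves a toy TSP instance exactly by enumeration. The travel-time matrix is hypothetical, and the study itself solves a richer variant (congestion, pickups) with CPLEX rather than brute force, which is only feasible for a handful of stops:

```python
from itertools import permutations

def tsp_brute_force(dist):
    """Exact TSP by enumeration: fix city 0 as the depot and try every
    ordering of the remaining stops. Only feasible for small instances."""
    n = len(dist)
    best_cost, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        cost = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if cost < best_cost:
            best_cost, best_tour = cost, tour
    return best_cost, best_tour

# Hypothetical 4-stop travel-time matrix (minutes), congestion already folded in:
d = [[0, 10, 15, 20],
     [10, 0, 35, 25],
     [15, 35, 0, 30],
     [20, 25, 30, 0]]
cost, tour = tsp_brute_force(d)
# cost == 80 for the tour 0 -> 1 -> 3 -> 2 -> 0
```

An integer-programming solver like CPLEX replaces this enumeration with subtour-elimination constraints, which is what makes city-scale courier instances tractable.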
Procedia PDF Downloads 430
15305 The Seller’s Sense: Buying-Selling Perspective Affects the Sensitivity to Expected-Value Differences
Authors: Taher Abofol, Eldad Yechiam, Thorsten Pachur
Abstract:
In four studies, we examined whether sellers and buyers differ not only in their subjective price levels for objects (i.e., the endowment effect) but also in their relative accuracy given objects varying in expected value. If, as has been proposed, sellers stand to accrue a more substantial loss than buyers do, then their pricing decisions should be more sensitive to expected-value differences between objects. This is implied by loss aversion, due to the steeper slope of prospect theory’s value function for losses than for gains, as well as by the loss attention account, which posits that losses increase the attention invested in a task. Both accounts suggest that losses increase sensitivity to the relative values of different objects, which should result in better alignment of pricing decisions with the objective value of objects on the part of sellers. Under loss attention, this characteristic should emerge only under certain boundary conditions. In Study 1, a published dataset was reanalyzed, in which 152 participants indicated buying or selling prices for monetary lotteries with different expected values. Relative EV sensitivity was calculated for each participant as the Spearman rank correlation between their pricing decisions for the lotteries and the lotteries' expected values. An ANOVA revealed a main effect of perspective (sellers versus buyers), F(1,150) = 85.3, p < .0001, with greater EV sensitivity for sellers. Study 2 examined the prediction (implied by loss attention) that the positive effect of losses on performance emerges particularly under time constraints. A published dataset was reanalyzed in which 84 participants provided selling and buying prices for monetary lotteries under three deliberation-time conditions (5, 10, 15 seconds). As in Study 1, an ANOVA revealed greater EV sensitivity for sellers than for buyers, F(1,82) = 9.34, p = .003. Importantly, there was also an interaction of perspective by deliberation time.
Post-hoc tests revealed main effects of perspective in the 5s and 10s deliberation-time conditions, but not in the 15s condition. Thus, sellers’ EV-sensitivity advantage disappeared with extended deliberation. Study 3 replicated the design of Study 1 but administered the task three times, to test whether the effect decays with repeated presentation. The difference between buyers' and sellers’ EV sensitivity was replicated across repeated task presentations. Study 4 examined the loss attention prediction that EV-sensitivity differences can be eliminated by manipulations that reduce the differential attention investment of sellers and buyers. This was carried out by randomly mixing selling and buying trials for each participant. The results revealed no differences in EV sensitivity between selling and buying trials. The pattern of results is consistent with an attentional resource-based account of the differences between sellers and buyers. Thus, asking people to price an object from a seller's perspective rather than the buyer's improves the relative accuracy of pricing decisions; subtle changes in the framing of one’s perspective in a trading negotiation may improve price accuracy.Keywords: decision making, endowment effect, pricing, loss aversion, loss attention
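The paper's EV-sensitivity measure, the Spearman rank correlation between one participant's prices and the lotteries' expected values, can be sketched as below; the price and expected-value series are invented for illustration, and ties are not handled in this toy version.

```python
# Minimal Spearman rank correlation as an EV-sensitivity score.
# All data below are invented; they are not the study's lotteries.

def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r  # assumes no ties

def spearman(x, y):
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

expected_values = [2.5, 5.0, 7.5, 10.0]   # per lottery
seller_prices = [3.0, 6.0, 8.0, 11.0]     # tracks EV ordering exactly
buyer_prices = [4.0, 3.5, 6.0, 5.0]       # noisier ordering
```

Here the "seller" prices preserve the EV ranking (correlation 1.0) while the "buyer" prices scramble it (correlation 0.6), mirroring the direction of the reported effect.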
Procedia PDF Downloads 345
15304 Pb and Ni Removal from Aqueous Environment by Green Synthesized Iron Nanoparticles Using Fruit Cucumis Melo and Leaves of Ficus Virens
Authors: Amandeep Kaur, Sangeeta Sharma
Abstract:
In view of the serious entanglement of heavy metal ions (Pb²⁺ and Ni²⁺) in aqueous environments, a rapid search for efficient adsorbents of heavy metals has become highly desirable. In this quest, green-synthesized Fe nanoparticles (NPs) have gathered attention because of their excellent capability to adsorb heavy metals from aqueous solution. This research reports the fabrication of Fe NPs using the fruit Cucumis melo and leaves of Ficus virens via a biogenic synthesis route. Further, the synthesized CM-Fe-NPs and FV-Fe-NPs have been tested as potential bio-adsorbents for the removal of Pb²⁺ and Ni²⁺ in adsorption batch experiments. The influence of several parameters, including the initial Pb/Ni concentration (5, 10, 15, 20, 25 mg/L), contact time (10 to 200 min), adsorbent dosage (0.5, 0.10, 0.15 mg/L), shaking speed (120 to 350 rpm), and pH value (6, 7, 8, 9), has been investigated. The maximum removal with CM-Fe-NPs and FV-Fe-NPs was achieved at pH 7, metal concentration 5 mg/L, dosage 0.9 g/L, shaking speed 200 rpm, and contact time 200 min. The results are in accordance with the Freundlich and Langmuir adsorption models; consequently, the adsorbents could be highly applicable to wastewater treatment plants.Keywords: adsorption, biogenic synthesis, nanoparticles, nickel, lead
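A minimal sketch of how equilibrium data can be checked against the Langmuir model is shown below, using its linearised form Ce/qe = Ce/qm + 1/(KL·qm); the (Ce, qe) pairs are synthetic, generated from assumed constants, and are not the paper's measurements.

```python
# Fit the linearised Langmuir isotherm by ordinary least squares.
# qm (capacity) and KL (affinity) below are assumed, not measured.

def linear_fit(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sum((a - mx) ** 2 for a in x)
    slope = num / den
    return slope, my - slope * mx   # (slope, intercept)

# Synthetic data from qm = 10 mg/g, KL = 0.5 L/mg.
Ce = [1.0, 2.0, 5.0, 10.0]                        # equilibrium conc., mg/L
qe = [10 * 0.5 * c / (1 + 0.5 * c) for c in Ce]   # Langmuir uptake, mg/g

slope, intercept = linear_fit(Ce, [c / q for c, q in zip(Ce, qe)])
qm = 1 / slope            # recovered capacity
KL = slope / intercept    # recovered affinity constant
```

On noiseless synthetic data the fit recovers the assumed constants exactly; real batch data would scatter around the line, and the fit quality is what the authors use to judge model agreement.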
Procedia PDF Downloads 87
15303 Research on Routing Protocol in Ship Dynamic Positioning Based on WSN Clustering Data Fusion System
Authors: Zhou Mo, Dennis Chow
Abstract:
In the dynamic positioning system (DPS) for vessels, reliable information transmission between nodes relies on wireless protocols. From the perspective of cluster-based routing protocols for wireless sensor networks, a data fusion technology based on a sleep scheduling mechanism and remaining energy at the network layer is proposed, which applies sleep scheduling to the routing protocol and considers each node's remaining energy and location information when selecting the cluster head. The problem of uneven distribution of nodes across clusters is addressed by an equilibrium mechanism. At the same time, a classified forwarding mechanism and a redelivery policy are adopted to avoid congestion when transmitting large amounts of data, reduce delivery delay, and enhance real-time response. In this paper, a simulation test of the improved routing protocol is conducted, which turns out to reduce the energy consumption of nodes and increase the efficiency of data delivery.Keywords: DPS for vessel, wireless sensor network, data fusion, routing protocols
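The cluster-head election described above can be sketched as a weighted score over residual energy and location; the scoring weights, field names, and sample nodes below are all invented, since the abstract does not give the exact election rule.

```python
# Illustrative cluster-head election (not the paper's exact protocol):
# score candidates by residual energy and closeness to the cluster
# centroid, then pick the best-scoring node.

def elect_head(nodes, w_energy=0.7, w_dist=0.3):
    """nodes: list of dicts with 'id', 'energy' (J), 'pos' (x, y)."""
    cx = sum(n["pos"][0] for n in nodes) / len(nodes)
    cy = sum(n["pos"][1] for n in nodes) / len(nodes)
    e_max = max(n["energy"] for n in nodes)
    d_max = max(((n["pos"][0] - cx) ** 2 + (n["pos"][1] - cy) ** 2) ** 0.5
                for n in nodes) or 1.0

    def score(n):
        d = ((n["pos"][0] - cx) ** 2 + (n["pos"][1] - cy) ** 2) ** 0.5
        return w_energy * n["energy"] / e_max + w_dist * (1 - d / d_max)

    return max(nodes, key=score)["id"]

cluster = [
    {"id": "A", "energy": 4.0, "pos": (0.0, 0.0)},
    {"id": "B", "energy": 9.0, "pos": (1.0, 1.0)},   # high energy, central
    {"id": "C", "energy": 5.0, "pos": (4.0, 0.0)},
]
```

With these weights the energetic, centrally located node B wins the election, which is the qualitative behaviour the abstract aims for.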
Procedia PDF Downloads 467
15302 Optimization of Marine Waste Collection Considering Dynamic Transport and Ship’s Wake Impact
Authors: Guillaume Richard, Sarra Zaied
Abstract:
Marine waste quantities keep increasing; 5 million tons of plastic waste enter the ocean every year. Their spatiotemporal distribution is never homogeneous and depends mainly on the hydrodynamic characteristics of the environment, as well as the size and location of the waste. As part of optimizing the collection of marine plastic waste, it is important to measure and monitor its evolution over time. In this context, diverse studies have been dedicated to describing waste behavior in order to identify its accumulation in ocean areas. None of the existing tools that track objects at sea had the objective of tracking a slick of waste. Moreover, applications related to marine waste are in the minority compared to rescue applications or oil slick tracking applications. These approaches can accurately simulate an object's behavior over time, but not during the collection mission of a waste sheet. This paper presents numerical modeling of a boat’s wake impact on floating marine waste behavior during a collection mission. The aim is to predict the trajectory of a marine waste slick to optimize its collection, using meteorological data on ocean currents, wind, and possibly waves. We have chosen to use OceanParcels, a Python library suitable for simulating particle trajectories in the ocean. The modeling results showed the important role of advection and diffusion processes in the spatiotemporal distribution of floating plastic litter. The performance of the proposed method was evaluated on real data collected from the Copernicus Marine Environment Monitoring Service (CMEMS). The evaluation results at the Cape of Good Hope (South Africa) show that the proposed approach can effectively predict the position and velocity of marine litter during collection, which allowed optimizing collection time and recovering more than 90% of the waste.Keywords: marine litter, advection-diffusion equation, sea current, numerical model
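The advection-diffusion idea behind such particle simulations can be sketched in a few lines: each waste particle is carried by the local current (advection) and jittered by a random walk (diffusion). This is not the OceanParcels API; the current field, diffusivity, and release point below are invented rather than CMEMS data.

```python
import random

# Euler step of advection plus random-walk diffusion for a particle
# "slick". All fields and constants are assumptions for illustration.

def advect_diffuse(particles, current, K=0.5, dt=1.0,
                   rng=random.Random(42)):   # fixed seed for repeatability
    sigma = (2 * K * dt) ** 0.5              # random-walk step size
    out = []
    for x, y in particles:
        u, v = current(x, y)                 # local current velocity
        out.append((x + u * dt + rng.gauss(0, sigma),
                    y + v * dt + rng.gauss(0, sigma)))
    return out

def eastward_current(x, y):
    return 1.0, 0.1                          # steady toy flow

slick = [(0.0, 0.0)] * 100                   # particles released together
for _ in range(10):
    slick = advect_diffuse(slick, eastward_current)
```

After ten steps the cloud's centroid has drifted with the mean flow while diffusion has spread the particles, which is exactly the advection-versus-diffusion balance the abstract highlights.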
Procedia PDF Downloads 87
15301 Optical Flow Based System for Cross Traffic Alert
Authors: Giuseppe Spampinato, Salvatore Curti, Ivana Guarneri, Arcangelo Bruna
Abstract:
This document describes an advanced system and methodology for Cross Traffic Alert (CTA), able to detect vehicles that move into the vehicle's driving path from the left or right side. The camera may be mounted not only on a stationary vehicle, e.g. at a traffic light or an intersection, but also on one moving slowly, e.g. in a car park. In all of the aforementioned conditions, a driver’s short loss of concentration or distraction can easily lead to a serious accident. The proposed system represents a valid support for avoiding these kinds of car crashes. It is an extension of our previous work on a clustering system that only works with fixed cameras. Just a vanishing point calculation and simple optical flow filtering, to eliminate motion vectors due to the car's own movement, are performed to let the system achieve high performance across different scenarios, cameras, and resolutions. The proposed system uses only the optical flow as input, which is hardware implemented in the proposed platform; since the whole elaboration is fast and low in power consumption, it is inserted directly in the camera framework, allowing all processing to execute in real time.Keywords: clustering, cross traffic alert, optical flow, real time, vanishing point
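One way to picture the flow-filtering step is sketched below: motion vectors that point radially away from the vanishing point are attributed to the camera's own forward motion and dropped, and the residual vectors are kept as cross-traffic candidates. The angular threshold, sample points, and vanishing-point location are invented; the paper's actual filtering may differ.

```python
import math

# Drop flow vectors consistent with radial expansion from the vanishing
# point (ego motion); keep the rest as crossing-vehicle candidates.
# The 20-degree tolerance is an assumption for illustration.

def crossing_vectors(flows, vanish, radial_tol_deg=20.0):
    """flows: list of ((x, y), (dx, dy)) optical-flow samples."""
    keep = []
    for (x, y), (dx, dy) in flows:
        if dx == 0 and dy == 0:
            continue
        rx, ry = x - vanish[0], y - vanish[1]   # radial direction at (x, y)
        dot = rx * dx + ry * dy
        nr = math.hypot(rx, ry) or 1.0
        nf = math.hypot(dx, dy)
        angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / (nr * nf)))))
        if angle > radial_tol_deg:              # not ego-motion expansion
            keep.append(((x, y), (dx, dy)))
    return keep

vp = (320, 240)
samples = [((100, 300), (-6, 2)),   # expands away from vp -> ego motion
           ((500, 250), (5, 0)),    # also radial expansion -> ego motion
           ((400, 250), (-8, 0))]   # moving toward the centre -> crossing
```

Only the third sample survives the filter, illustrating how lateral movers stand out once the camera's forward-motion field is removed.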
Procedia PDF Downloads 203
15300 Examining the Concept of Sustainability in the Scenery Architecture of Naqsh-e-Jahan Square
Authors: Mahmood Naghizadeh, Maryam Memarian, Hourshad Irvash
Abstract:
Following the rise in the world population and the upward growth of urbanization, the design, planning, and management of site scenery for the purpose of presenting and expanding sustainable site scenery has become a central concern for experts. Since the fundamental principles of site scenery change more or less haphazardly over time, sustainable site scenery can be viewed as an ideal goal, because both sustainability and dynamism come into view in urban site scenery and it is not designed according to a set of pre-determined principles. Sustainable site scenery, as the ongoing interaction between idealism and pragmatism under sustainability factors, is a dynamic phenomenon created by bringing cultural, historical, social, and natural scenery together. Such an interaction does not subdue these factors but reinforces them. Sustainable site scenery is a persistently occurring event that has not attenuated over time but has gained strength. The sustainability of a site scene or an event over time depends on its site identity, which grows out of its continuous association with the past; this identity, intertwined with the place from past to present, supports the present and future of the scene. The result of such a supportive role is the sustainability of the site scenery. Isfahan's Naqsh-e-Jahan Square is one of the most outstanding squares in the world and the best embodiment of Iranian site scenery architecture. This square is an arena that brings people together and a dynamic city center comprising various urban and religious complexes, spaces, and facilities, and is considered one of the most favorable traditional urban spaces of Iran. Such a place can illustrate many factors related to sustainable site scenery. On the other hand, there are still no specific principles concerning sustainability in site scenery architecture.
Meanwhile, sustainability is recognized as a rather modern view in architecture. The purpose of this research is to identify the factors involved in sustainability in general and to examine their effects on site scenery architecture in particular; finally, these factors are studied with Naqsh-e-Jahan Square as the case. This research adopts an analytic-descriptive approach and has benefited from a review of the literature available through library studies and documents related to sustainability and site scenery architecture. The statistical population used for this research includes squares constructed during the Safavid dynasty, and Naqsh-e-Jahan Square was picked as the case study. The aim of this paper is to arrive at a working definition of sustainable site scenery and to illustrate this concept by analyzing its social, economic, and ecological aspects in this project.Keywords: Naqsh-e-Jahan Square, site scenery architecture, sustainability, sustainable site scenery
Procedia PDF Downloads 313
15299 Investigation of Clusters of MRSA Cases in a Hospital in Western Kenya
Authors: Lillian Musila, Valerie Oundo, Daniel Erwin, Willie Sang
Abstract:
Staphylococcus aureus infections are a major cause of nosocomial infections in Kenya. Methicillin-resistant S. aureus (MRSA) infections are a significant burden to public health and are associated with considerable morbidity and mortality. At a hospital in Western Kenya, two clusters of MRSA cases emerged within short periods of time. In this study, we explored whether these clusters represented a nosocomial outbreak by characterizing the isolates using phenotypic and molecular assays and examining epidemiological data to identify possible transmission patterns. Specimens from the subjects' sites of infection were collected and cultured, and S. aureus isolates were identified phenotypically and confirmed by APIStaph™. MRSA were identified by cefoxitin disk screening per CLSI guidelines and further characterized by their antibiotic susceptibility patterns and spa gene typing. Characteristics of cases with MRSA isolates were compared with those of cases with MSSA isolated around the same time period. Two cases of MRSA infection were identified in the two-week period between 21 April and 4 May 2015. A further two MRSA isolates were identified on the same day, 7 September 2015. The antibiotic resistance patterns of the two MRSA isolates in the first cluster differed, suggesting that these were distinct isolates. One isolate had spa type t2029 and the other a novel spa type. The two isolates were obtained from urine and an open skin wound. In the second cluster, the antibiotic susceptibility patterns were similar, but the isolates had different spa types: one was t037 and the other a novel spa type distinct from the novel MRSA spa type in the first cluster. Both cases in the second cluster were admitted to the hospital, but one infection was community- and the other hospital-acquired. Only one of the four MRSA cases was classified as an HAI, from an infection acquired post-operatively. When compared to other S. aureus strains isolated within the same time period from the same hospital, only one spa type, t2029, was found in both MRSA and non-MRSA strains. None of the cases infected with MRSA in the two clusters shared any common epidemiological characteristic such as age, sex, or known MRSA risk factors such as prolonged hospitalization or institutionalization. These data suggest that the observed MRSA clusters were multi-strain clusters and not an outbreak of a single strain. There was no clear relationship between the isolates by spa type, suggesting that no transmission was occurring within the hospital between these cluster cases but rather that most of the MRSA strains were circulating in the community. There was high diversity of spa types among the MRSA strains, with none of the isolates sharing spa types. Identification of disease clusters in space and time is critical for immediate infection control action and patient management. Spa gene typing is a rapid way of confirming or ruling out MRSA outbreaks so that costly interventions are applied only when necessary.Keywords: cluster, Kenya, MRSA, spa typing
Procedia PDF Downloads 331
15298 Frequency Selective Filters for Estimating the Equivalent Circuit Parameters of Li-Ion Battery
Authors: Arpita Mondal, Aurobinda Routray, Sreeraj Puravankara, Rajashree Biswas
Abstract:
The most difficult part of designing a battery management system (BMS) is battery modeling. A good battery model captures the dynamics that support energy management through accurate model-based state estimation algorithms. So far, the most suitable and fruitful model is the equivalent circuit model (ECM). However, in real-time applications the model parameters are time-varying, changing with current, temperature, state of charge (SOC), and battery aging, and this greatly impacts the model's performance. Therefore, to increase the equivalent circuit model's performance, the parameter estimation has been carried out in the frequency domain. A battery is a very complex system, associated with various chemical reactions and heat generation, so it is difficult to select the optimal model structure. If the model order is increased, the model accuracy improves; however, a higher-order model tends toward over-parameterization and unfavorable prediction capability, while the model complexity increases enormously. In the time domain, it becomes difficult to solve higher-order differential equations as the model order increases. This problem can be resolved by frequency domain analysis, where the overall computational problems due to ill-conditioning are reduced. In the frequency domain, several dominating frequencies can be found in the input as well as the output data. The selective frequency domain estimation has been carried out, first by estimating the frequencies of the input and output through subspace decomposition, then by choosing specific bands from the most dominating to the least, while carrying out least-squares, recursive least-squares, and Kalman filter based parameter estimation. In this paper, a second-order battery model consisting of three resistors, two capacitors, and one SOC-controlled voltage source has been chosen.
For model identification and validation, hybrid pulse power characterization (HPPC) tests have been carried out on a 2.6 Ah LiFePO₄ battery.Keywords: equivalent circuit model, frequency estimation, parameter estimation, subspace decomposition
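A minimal recursive-least-squares sketch in the spirit of the estimation above is shown below. For brevity it fits only the simplest ECM, v = OCV - R0·i, from current/voltage samples; the paper's second-order model adds RC branches, and the data here are synthetic with assumed true values.

```python
# Recursive least squares on the regressor [1, -i] so that
# theta = [OCV, R0] predicts v = OCV - R0*i. Everything is a toy
# illustration; the true values below are assumptions.

def rls(samples, lam=0.99):
    theta = [0.0, 0.0]                      # [OCV, R0] estimates
    P = [[1000.0, 0.0], [0.0, 1000.0]]      # large initial covariance
    for i, v in samples:                    # current (A), voltage (V)
        phi = [1.0, -i]
        Pphi = [P[0][0] * phi[0] + P[0][1] * phi[1],
                P[1][0] * phi[0] + P[1][1] * phi[1]]
        denom = lam + phi[0] * Pphi[0] + phi[1] * Pphi[1]
        k = [Pphi[0] / denom, Pphi[1] / denom]      # Kalman-style gain
        err = v - (phi[0] * theta[0] + phi[1] * theta[1])
        theta = [theta[0] + k[0] * err, theta[1] + k[1] * err]
        P = [[(P[r][c] - k[r] * Pphi[c]) / lam for c in range(2)]
             for r in range(2)]             # covariance update
    return theta

# Synthetic battery: OCV = 3.3 V, R0 = 0.05 ohm (assumed, noiseless).
data = [(i, 3.3 - 0.05 * i) for i in (0.5, 1.0, 2.0, 1.5, 0.8, 2.5)]
ocv, r0 = rls(data)
```

On noiseless data the estimates converge to the assumed constants within a few samples; the forgetting factor lam lets the same recursion track the slowly drifting parameters the abstract describes.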
Procedia PDF Downloads 150
15297 Theory of Constraints: Approach for Performance Enhancement and Boosting Overhaul Activities
Authors: Sunil Dutta
Abstract:
Synchronization is defined as ‘the sequencing and re-sequencing of all relative and absolute activities in time and space, and the continuous alignment of those actions with a purposeful objective in a complex and dynamic atmosphere’. In a complex and dynamic production/maintenance setup, no single group can work in isolation for long. In addition, many project activities take place simultaneously, and the work of every section/group is interwoven with the work of others. The various activities and interactions in production/overhaul workshops are interlinked because of physical requirements (information, material, workforce, equipment, and space) and dependencies. The activity sequencing is determined by the physical dependencies of the various departments/sections/units (e.g., inventory availability must be ensured before stripping and disassembling equipment), whereas resource dependencies do not determine it. The theory of constraints facilitates the identification, analysis, and exploitation of the constraint in a methodical manner. These constraints (equipment, manpower, policies, etc.) prevent the departments/sections/units from optimally exploiting available resources. The significance of the theory of constraints for achieving synchronization in an overhaul workshop is illustrated in this paper.Keywords: synchronization, overhaul, throughput, obsolescence, uncertainty
Procedia PDF Downloads 351
15296 Productivity of Construction Companies Using the Management of Threats and Opportunities in Construction Projects of Iran
Authors: Nima Amani, Ali Salehi Dastjerdi, Fatemeh Ahmadi, Ardalan Sabamehr
Abstract:
Cost overrun has always been one of the main problems of construction companies, caused by the risky nature of construction projects; therefore, the application of risk management is now inevitable. Although in theory risk management is divided into opportunities management and threats management, in practice most projects have focused on threats management. However, considering opportunities management and applying opportunity-response strategies can improve the profitability of companies' construction projects. In this paper, a new technique is developed to identify the opportunities in construction projects using an improved protocol, and appropriate opportunity-response strategies are proposed to construction companies to provide them with higher profitability. To evaluate the effectiveness of the protocol for selecting the most appropriate strategies in response to opportunities and threats, two projects from a construction company in Iran were studied. Both projects were mid-range in size and similar in terms of duration and cost. The output indicates that applying the proposed opportunity-response strategies can increase the company's profitability on similar future projects.Keywords: opportunities management, risk-response strategy, opportunity-response strategy, productivity, risk management
Procedia PDF Downloads 229
15295 Lactate in Critically Ill Patients an Outcome Marker with Time
Authors: Sherif Sabri, Suzy Fawzi, Sanaa Abdelshafy, Ayman Nagah
Abstract:
Introduction: Static derangements in lactate homeostasis during the ICU stay are established as a clinically useful marker of increased risk of hospital and ICU mortality. Lactate indices, i.e., kinetic alterations reflecting anaerobic metabolism, make lactate a potential parameter for evaluating disease severity and the adequacy of intervention. It is an inexpensive and simple clinical parameter that can be obtained by minimally invasive means. Aim of work: To compare the predictive value of dynamic indices of hyperlactatemia in the first twenty-four hours of intensive care unit (ICU) admission with that of the more commonly used static values. Patients and Methods: This study included 40 critically ill patients above 18 years old of both sexes with hyperlactatemia (≥ 2 mmol/L). Patients were divided into a septic group (n=20) and a low oxygen transport group (n=20), which included all causes of low O₂ transport. Six lactate indices relating specifically to the first 24 hours of ICU admission were considered: three static and three dynamic. Results: There were no statistically significant differences between the two groups regarding age, most laboratory results including ABG, or the need for mechanical ventilation. Admission lactate was significantly higher in the low oxygen transport group than in the septic group (37.5±11.4 versus 30.6±7.8, P = 0.034). Maximum lactate was also significantly higher in the low oxygen transport group (P = 0.044). On the other hand, absolute lactate (mg) was higher in the septic group (P < 0.001). Percentage change of lactate was higher in the septic group (47.8±11.3) than in the low oxygen transport group (26.1±12.6), with a highly significant P-value (< 0.001). Lastly, time-weighted lactate was higher in the low oxygen transport group (1.72±0.81) than in the septic group (1.05±0.8), with a significant P-value (0.012).
There were statistically significant differences in lactate indices between survivors and non-survivors, in both the septic and the low oxygen transport groups. Conclusion: In critically ill patients, time-weighted lactate and percent change in lactate over the first 24 hours can be independent predictors of ICU mortality. Also, a rising rather than a falling blood lactate concentration over the first 24 hours is associated with a significant increase in the risk of mortality.Keywords: critically ill patients, lactate indices, mortality in intensive care, anaerobic metabolism
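Two of the dynamic indices above, percentage change and time-weighted lactate (area under the lactate-time curve divided by elapsed time), can be sketched from a serial lactate series; the sample series below is illustrative, not patient data.

```python
# Dynamic lactate indices over the first 24 h, computed from
# (hours since admission, lactate mmol/L) pairs. The series is invented.

def percent_change(lactate):
    first, last = lactate[0][1], lactate[-1][1]
    return 100.0 * (first - last) / first   # positive = lactate clearing

def time_weighted(lactate):
    auc = 0.0
    for (t0, l0), (t1, l1) in zip(lactate, lactate[1:]):
        auc += (l0 + l1) / 2.0 * (t1 - t0)  # trapezoidal rule
    return auc / (lactate[-1][0] - lactate[0][0])

series = [(0, 4.0), (6, 3.0), (12, 2.5), (24, 2.0)]
```

For this falling series, percent change is 50% and the time-weighted lactate is 2.6875 mmol/L; a rising series would give a negative percent change, the pattern the study links to higher mortality risk.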
Procedia PDF Downloads 242
15294 The Effects of Source and Timing on the Acceptance of New Product Recommendation: A Lab Experiment
Abstract:
New products are important for companies to expand their customer base and demonstrate competitiveness. A new product often involves new features that consumers might not be familiar with, while it may also hold a competitive advantage over established products in attracting consumers. However, although most online retailers employ recommendation agents (RAs) to influence consumers’ product choice decisions, recommended new products are not accepted and chosen as often as expected. We argue that this might also be caused by providing a new product recommendation in the wrong way at the wrong time. This study discusses how new product evaluations sourced from third parties could be employed in RAs as evidence of the new product's superiority, and how the recommendation could be provided to a consumer at the right time so that it is accepted and finally chosen during the consumer’s decision-making process. A 2×2 controlled laboratory experiment was conducted to study the selection of new product recommendation sources and recommendation timing. Human subjects were randomly assigned to one of the four treatments to minimize the effects of individual differences on the results. Participants were told to make purchase choices from our product categories. We find that a new product recommended right after a similar existing product and sourced from an expert review is more likely to be accepted. Based on this study, both theoretical and practical contributions are provided regarding new product recommendation.Keywords: new product recommendation, recommendation timing, recommendation source, recommendation agents
Procedia PDF Downloads 154
15293 Production of Biodiesel from Avocado Waste in Hossana City, Ethiopia
Authors: Tarikayehu Amanuel, Abraham Mohammed
Abstract:
The production of biodiesel from waste materials is becoming an increasingly important research area in the field of renewable energy. One potential waste material source is avocado, a fruit with a large seed and peel that are typically discarded after consumption. This research investigates the feasibility of using avocado waste as a feedstock for biodiesel production. The study focuses on extracting oil from the waste material, characterizing the oil's properties to determine its suitability, and converting it to biodiesel via transesterification. The study was conducted experimentally, and a maximum oil yield of 11.583% (150 g of oil produced from 1.295 kg of avocado waste powder) was obtained at an extraction time of 4 h. An 87% fatty acid methyl ester (biodiesel) conversion was obtained using a methanol/oil ratio of 6:1, 1.3 g NaOH, a reaction time of 60 min, and a reaction temperature of 65°C. Furthermore, from 145 ml of avocado waste oil, 126.15 ml of biodiesel was produced, confirming the high conversion (87%). The produced biodiesel showed physical and chemical characteristics comparable to those of the standard biodiesel samples considered in the study. The results of this research could help identify a new source of biofuel production while also addressing the issue of waste disposal in the food industry.Keywords: biodiesel, avocado, transesterification, soxhlet extraction
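The two yield figures quoted above follow directly from the reported masses and volumes, as this quick arithmetic check shows:

```python
# Oil yield: 150 g of oil from 1.295 kg of avocado waste powder.
oil_g, feed_g = 150.0, 1295.0
oil_yield_pct = 100.0 * oil_g / feed_g          # ~11.58 %

# Biodiesel conversion: 126.15 ml of biodiesel from 145 ml of oil.
biodiesel_ml, oil_ml = 126.15, 145.0
conversion_pct = 100.0 * biodiesel_ml / oil_ml  # ~87 %
```

Both computed percentages match the abstract's reported 11.583% oil yield and 87% conversion.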
Procedia PDF Downloads 70
15292 DCASH: Dynamic Cache Synchronization Algorithm for Heterogeneous Reverse Y Synchronizing Mobile Database Systems
Authors: Gunasekaran Raja, Kottilingam Kottursamy, Rajakumar Arul, Ramkumar Jayaraman, Krithika Sairam, Lakshmi Ravi
Abstract:
The synchronization server maintains a dynamically changing cache containing the data items that were requested and collected by the mobile node from the server. The order and presence of tuples in the cache change dynamically according to the frequency of updates performed on the data by the server and client. To synchronize, the data modified by the client and the server at an instant are collected, batched together by the type of modification (insert/update/delete), and sorted according to their update frequencies. This ensures that DCASH (Dynamic Cache Synchronization Algorithm for Heterogeneous Reverse Y Synchronizing Mobile Database Systems) gives priority to frequently accessed data with high usage. An optimal memory management algorithm is proposed to manage data items according to their frequency, theorems are presented to show that current mobile data activity is reverse-Y in nature, and experiments were run on 2G and 3G networks with various mobile devices, showing reduced response time and energy consumption.Keywords: mobile databases, synchronization, cache, response time
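The batch-and-sort ordering step described above can be sketched in a few lines: pending changes are grouped by modification type, and each batch is ordered so the most frequently updated items synchronize first. The function name, field names, and sample data below are invented for illustration.

```python
from collections import defaultdict

# Group pending changes by operation type and order each batch by the
# items' update frequencies (hottest first), as the abstract describes.

def batch_and_order(changes, update_freq):
    """changes: list of (op, item_id) with op in insert/update/delete."""
    batches = defaultdict(list)
    for op, item in changes:
        batches[op].append(item)
    for op in batches:
        batches[op].sort(key=lambda item: update_freq.get(item, 0),
                         reverse=True)   # most frequently updated first
    return dict(batches)

changes = [("update", "a"), ("insert", "b"), ("update", "c"),
           ("delete", "d"), ("update", "e")]
freq = {"a": 3, "c": 9, "e": 1, "b": 2, "d": 5}
ordered = batch_and_order(changes, freq)
```

With this ordering, a sync interrupted by a network drop has already propagated the hottest items, which is the prioritization DCASH relies on.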
Procedia PDF Downloads 406